\begin{document}
\title{Ultrafilters maximal for finite embeddability}
\author{Lorenzo Luperi Baglini\thanks{University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz 1, 1090 Vienna, AUSTRIA, e-mail: \texttt{lorenzo.luperi.baglini@univie.ac.at}, supported by grant P25311-N25 of the Austrian Science Fund FWF.}} \maketitle \date{}
\begin{abstract}
In \cite{fe} the authors showed some basic properties of a pre-order that arose in combinatorial number theory, namely the finite embeddability between sets of natural numbers, and they presented its generalization to ultrafilters, which is related to the algebraic and topological structure of the Stone-\v{C}ech compactification of the discrete space of natural numbers. In the present paper we continue the study of these pre-orders. In particular, we prove that there exist ultrafilters maximal for finite embeddability, and we show that the set of such ultrafilters is the closure of the minimal bilateral ideal in the semigroup $(\beta\mathbb{N},\oplus)$, namely $\overline{K(\beta\mathbb{N},\oplus)}$. As a consequence, we easily derive many combinatorial properties of ultrafilters in $\overline{K(\beta\mathbb{N},\oplus)}$. We also give an alternative proof of our main result based on nonstandard models of arithmetic.
\end{abstract}
\section{Introduction}
This paper is a planned sequel of the paper \cite{fe} by Andreas Blass and Mauro Di Nasso. Both \cite{fe} and the present paper study a notion that arose in combinatorial number theory (see \cite{DN} and \cite{ruzsa}, where this notion was implicitly used): the finite embeddability between sets of natural numbers. We recall its definition:
\begin{defn}[\cite{fe}, Definition 1] For $A,B$ subsets of $\mathbb{N}$, we say that $A$ is finitely embeddable in $B$ and we write $A\leq_{fe} B$ if each finite subset $F$ of $A$ has a rightward translate $F+k$ included in $B$. \end{defn}
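For instance (an illustration of ours, not taken from \cite{fe}), the set of even numbers and the set of odd numbers are finitely embeddable in each other:

```latex
% Illustration: let E=\{2n\mid n\in\mathbb{N}\} and O=\{2n+1\mid n\in\mathbb{N}\}.
% Every finite F\subseteq E shifts into O:
\[
F=\{2n_{1},\dots,2n_{k}\}\subseteq E
\quad\Longrightarrow\quad
F+1=\{2n_{1}+1,\dots,2n_{k}+1\}\subseteq O,
\]
% so E\leq_{fe} O; symmetrically, shifting a finite subset of O by 1 shows O\leq_{fe} E.
```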
We use the standard notation $n+F=\{n+a\mid a\in F\}$ and we use the standard convention that $\mathbb{N}=\{0,1,2,...\}$. In \cite{fe} the authors also considered the generalization of $\leq_{fe}$ to ultrafilters:
\begin{defn}[\cite{fe}, Definition 2] For ultrafilters $\mathcal{U}, \mathcal{V}$ on $\mathbb{N}$, we say that $\mathcal{U}$ is finitely embeddable in $\mathcal{V}$ and we write $\mathcal{U}\leq_{fe}\mathcal{V}$ if, for each set $B\in\mathcal{V}$, there is some $A\in\mathcal{U}$ such that $A\leq_{fe} B$. \end{defn}
It is easy to prove (see \cite{fe}, \cite{Tesi}) that both $(\mathcal{P}(\mathbb{N}),\leq_{fe})$ and $(\beta\mathbb{N},\leq_{fe})$ are preorders. In \cite{fe} the authors studied some properties of $\leq_{fe}$, giving in particular many equivalent characterizations of the relations $A\leq_{fe} B$ and $\mathcal{U}\leq_{fe}\mathcal{V}$ using standard and nonstandard techniques; in the present paper we use similar techniques to continue the study of these pre-orders. Our main result is that there exist ultrafilters maximal for finite embeddability and that the set of such maximal ultrafilters is the closure of the minimal bilateral ideal in $(\beta\mathbb{N},\oplus)$, namely $\overline{K(\beta\mathbb{N},\oplus)}$. This result allows us to easily deduce many combinatorial properties of ultrafilters in $\overline{K(\beta\mathbb{N},\oplus)}$, e.g. that for every ultrafilter $\mathcal{U}\in\overline{K(\beta\mathbb{N},\oplus)}$ and every $A\in\mathcal{U}$, the set $A$ has positive upper Banach density, contains arbitrarily long arithmetic progressions and is piecewise syndetic\footnote{Let us note that many of these combinatorial properties of ultrafilters in $\overline{K(\beta\mathbb{N},\oplus)}$ were already known.}. We will also show that there do not exist minimal sets in $(\mathcal{P}_{\aleph_{0}}(\mathbb{N}),\leq_{fe})$ or minimal ultrafilters in $(\beta\mathbb{N}\setminus\mathbb{N},\leq_{fe})$, where $\mathcal{P}_{\aleph_{0}}(\mathbb{N})$ is the set of infinite subsets of $\mathbb{N}$ and $\beta\mathbb{N}\setminus\mathbb{N}$ is the set of nonprincipal ultrafilters. These topics are studied in Sections \ref{propy} and \ref{extreme}. 
In Section \ref{NS23} we reprove our main result by nonstandard methods; this is the only section in which nonstandard methods are used, so the rest of the paper is also accessible to readers unfamiliar with them.\par We refer to \cite{rif12} for all the notions about combinatorics and ultrafilters that we will use, to \cite{rif5}, §4.4 for the foundational aspects of nonstandard analysis and to \cite{davis} for all the nonstandard notions and definitions. Finally, we refer the interested reader to \cite{Tesi}, Chapter 4 for other properties and characterizations of the finite embeddability.
\section{Some basic properties of $(\mathcal{P}(\mathbb{N}),\leq_{fe})$}\label{propy}
Let $n$ be a natural number. Throughout this section we will denote by $\mathcal{P}_{\geq n}(\mathbb{N})$ the set
\begin{equation*} \mathcal{P}_{\geq n}(\mathbb{N})=\{A\subseteq\mathbb{N}\mid |A|\geq n\}; \end{equation*}
similarly, we will denote by $\mathcal{P}_{\aleph_{0}}(\mathbb{N})$ the set
\begin{equation*} \mathcal{P}_{\aleph_{0}}(\mathbb{N})=\{A\subseteq\mathbb{N}\mid |A|=\aleph_{0}\}. \end{equation*}
Moreover, we will denote by $\equiv_{fe}$ the equivalence relation such that, for every $A,B\subseteq\mathbb{N}$,
\begin{equation*} A\equiv_{fe} B\Leftrightarrow A\leq_{fe} B \wedge B\leq_{fe} A \end{equation*}
and, for every set $A$, we will denote by $[A]$ its equivalence class. Finally we will denote by $\leq_{fe}$ the ordering induced on the space of equivalence classes defined by setting, for every $A,B\subseteq\mathbb{N}$,
\begin{equation*} [A]\leq_{fe} [B]\Leftrightarrow A\leq_{fe} B.\end{equation*}
It is immediate to see that the relation $\leq_{fe}$ on $\mathcal{P}(\mathbb{N})$ is not antisymmetric (e.g., $\{2n\mid n\in\mathbb{N}\}\equiv_{fe}\{2n+1\mid n\in\mathbb{N}\}$), so to search for maximal and minimal sets we will actually work in $(\mathcal{P}(\mathbb{N})/\mathord\equiv_{fe},\leq_{fe})$.\par In \cite{fe} the authors proved that the finite embeddability has the following properties (for the relevant definitions, see \cite{rif12}):
\begin{prop}[\cite{fe}, Proposition 6]\label{trs} Let $A,B$ be sets of natural numbers. \begin{enumerate} [leftmargin=*,label=(\roman*),align=left ]
\item $A$ is maximal with respect to $\leq_{fe}$ if and only if it is thick;
\item if $A\leq_{fe} B$ and $A$ is piecewise syndetic then $B$ is also piecewise syndetic;
\item if $A\leq_{fe} B$ and $A$ contains a $k$-term arithmetic progression then also $B$ contains a $k$-term arithmetic progression;
\item if $A\leq_{fe} B$ then the upper Banach densities satisfy $BD(A)\leq BD(B)$;
\item if $A\leq_{fe} B$ then $A-A\subseteq B-B$;
\item if $A\leq_{fe} B$ then $\bigcap\limits_{t\in G} (A-t)\leq_{fe} \bigcap\limits_{t\in G}(B-t)$ for every finite $G\subseteq\mathbb{N}$. \end{enumerate} \end{prop}
We will use Proposition \ref{trs} to (re)prove some combinatorial properties of ultrafilters in $\overline{K(\beta\mathbb{N},\oplus)}$ in Section \ref{extreme}. In this present section we want to study the existence of minimal elements with respect to $\leq_{fe}$ in various subsets of $\mathcal{P}(\mathbb{N})$, and a nice property of the ordering $\leq_{fe}$ on the set of equivalence classes, namely that for every set $A$ there does not exist a set $B$ such that $[A]<_{fe} [B] <_{fe} [A+1]$. To prove this result we need the following lemma:
\begin{lem}\label{basico} For every $A,B\subseteq\mathbb{N}$ the following two properties hold: \begin{enumerate} [leftmargin=*,label=(\roman*),align=left ]
\item\label{a1} if $B\nleq_{fe} A$ and $B\leq_{fe} A+1$ then $B\subseteq A+1$;
\item\label{a2} if $A\leq_{fe} B$ and $A+1\nleq_{fe} B$ then $A\subseteq B$. \end{enumerate}\end{lem}
\begin{proof} We prove only \ref{a1}, since \ref{a2} can be proved similarly. Let $F\subseteq B$ be a finite subset of $B$ such that $F+n\nsubseteq A$ for every $n\in\mathbb{N}$. In particular, for every finite $H\subseteq B$ such that $F\subseteq H$ and for every $n\in\mathbb{N}$ we have that $n+H\nsubseteq A$. But, by hypothesis, there exists $n\in\mathbb{N}$ such that $n+H\subseteq A+1$. If $n\geq 1$ we have a contradiction, so we must have $n=0$, i.e.\ $H\subseteq A+1$. Since this holds for every finite $H\subseteq B$ (with $F\subseteq H$) we deduce that $B\subseteq A+1$. \end{proof}
\begin{thm}\label{ledzeppelin} Let $A,B\subseteq\mathbb{N}$. If $A\leq_{fe} B\leq_{fe} A+1$ then $[A]=[B]$ or $[A+1]=[B]$. \end{thm}
\begin{proof} Let us suppose that $A+1\nleq_{fe} B\nleq_{fe} A$. Then, since $A\leq_{fe} B\leq_{fe} A+1$, by Lemma \ref{basico} we deduce that $A\subseteq B\subseteq A+1$, so $A\subseteq A+1$. This is absurd since $A\setminus (A+1)\supseteq\{\min A\}\neq\emptyset$. \end{proof}
We now turn our attention to the existence of minimal elements in various subsets of $\mathcal{P}(\mathbb{N})$. Two immediate observations are that the empty set is the minimum in $(\mathcal{P}(\mathbb{N}),\leq_{fe})$ and that $\{0\}$ is the minimum in $(\mathcal{P}_{\geq 1}(\mathbb{N})/\mathord\equiv_{fe},\leq_{fe})$. Moreover, if we identify each natural number $n$ with the singleton $\{n\}$, it is immediate to see that $(\mathbb{N},\leq)$ forms an initial segment of $(\mathcal{P}_{\geq 1}(\mathbb{N}),\leq_{fe})$ and that, more generally, the following easy result holds:
\begin{prop} A set $A$ is minimal in $(\mathcal{P}_{\geq n}(\mathbb{N}),\leq_{fe})$ if and only if $0\in A$ and $|A|=n$. \end{prop}
The proof follows easily from the definitions. Let us note that, in particular, the following facts follow: \begin{enumerate} [leftmargin=*,label=(\roman*),align=left ]
\item for every natural number $m\geq n-1$ there are $\binom{m}{n-1}$ inequivalent minimal elements in $(\mathcal{P}_{\geq n}(\mathbb{N}),\leq_{fe})$ that are subsets of $\{0,...,m\}$;
\item if $n\geq 2$ then $(\mathcal{P}_{\geq n}(\mathbb{N}),\leq_{fe})$ does not have a minimum element.
\end{enumerate}
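As a worked instance of (i) (ours, for concreteness): take $n=2$ and $m=3$. The minimal elements of $(\mathcal{P}_{\geq 2}(\mathbb{N}),\leq_{fe})$ contained in $\{0,1,2,3\}$ are exactly the two-element sets containing $0$:

```latex
\[
\{0,1\},\qquad\{0,2\},\qquad\{0,3\},
\]
% that is, \binom{3}{1}=3 pairwise inequivalent minimal elements: the sets
% \{0,i\} and \{0,j\} with i\neq j are inequivalent, since finite embeddability
% preserves the difference of a two-element set.
```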
If we consider only infinite subsets of $\mathbb{N}$ the situation is different: there are no minimal elements in $(\mathcal{P}_{\aleph_{0}}(\mathbb{N})/\mathord\equiv_{fe},\leq_{fe})$, as we are now going to show.
\begin{defn} Let $A,B\subseteq\mathbb{N}$. We say that $A$ is strongly non f.e. in $B$ (notation: $A\nleq_{fe}^{S} B$) if for every set $C\subseteq A$ with $|C|= 2$ we have that $C\nleq_{fe}B$. If both $A\nleq_{fe}^{S} B$ and $B\nleq_{fe}^{S} A$ we say that $A,B$ are strongly mutually unembeddable (notation: $A\not\equiv_{S} B$). \end{defn}
Let us observe that, in the previous definition, we can equivalently substitute the condition ``$|C|=2$'' with ``$|C|\geq 2$''.
\begin{prop}\label{incomparabili1} Let $X$ be an infinite subset of $\mathbb{N}$. Then there are $A,B\subseteq X$, $A,B$ infinite, such that $A\cap B=\emptyset$ and $A\not\equiv_{S} B$. \end{prop}
\begin{proof} To prove the thesis we construct $A,B\subseteq X$ such that, for any $C\subseteq A$, $D\subseteq B$ with $|C|=|D|=2$, we have $C\nleq_{fe} B$ and $D\nleq_{fe} A$.\par Let $X=\{x_{n}\mid n\in\mathbb{N}\}$, with $x_{n}<x_{n+1}$ for every $n\in\mathbb{N}$. We set
\begin{equation*} a_{0}=x_{0}, b_{0}=x_{1} \end{equation*}
and, recursively, we set
\begin{equation*} a_{n+1}=\min\{x\in X\mid x>a_{n}+b_{n}+1\}, \ b_{n+1}=\min\{x\in X\mid x>b_{n}+a_{n+1}+1\}. \end{equation*}
Finally, we set $A=\{a_{n}\mid n\in\mathbb{N}\}$ and $B=\{b_{n}\mid n\in\mathbb{N}\}$. Clearly $A\cap B=\emptyset$, and both $A,B$ are infinite subsets of $X$. Now we let $a_{n_{1}}<a_{n_{2}}$ be any elements in $A$. Let us suppose that there are $b_{m_{1}}<b_{m_{2}}$ in $B$ with $a_{n_{2}}-a_{n_{1}}=b_{m_{2}}-b_{m_{1}}$, and let us assume that $b_{m_{2}}>a_{n_{2}}$ (if the converse holds, we can just exchange the roles of $a_{n_{1}},a_{n_{2}},b_{m_{1}},b_{m_{2}}$). By construction the two sequences interleave, i.e.\ $a_{n}<b_{n}<a_{n+1}$ for every $n$, so $b_{m_{2}}>a_{n_{2}}$ entails $m_{2}\geq n_{2}$; hence $b_{m_{2}}-b_{m_{1}}\geq b_{m_{2}}-b_{m_{2}-1}>a_{m_{2}}+1\geq a_{n_{2}}+1>a_{n_{2}}$, while $a_{n_{2}}-a_{n_{1}}\leq a_{n_{2}}$. So $A\not\equiv_{S} B$.\end{proof}
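As a quick numerical sanity check (ours, not part of the paper), the recursive construction above can be run for $X=\mathbb{N}$; the helper name `build_ab` is our own. The assertion verifies that the difference sets of $A$ and $B$ are disjoint, i.e. that no two-element subset of one set has a rightward translate inside the other:

```python
from itertools import combinations

def build_ab(x, steps):
    """x: increasing enumeration of X; returns initial segments of A and B."""
    a, b = [x[0]], [x[1]]
    for _ in range(steps):
        # a_{n+1} = min{x in X : x > a_n + b_n + 1}
        a.append(next(v for v in x if v > a[-1] + b[-1] + 1))
        # b_{n+1} = min{x in X : x > b_n + a_{n+1} + 1}
        b.append(next(v for v in x if v > b[-1] + a[-1] + 1))
    return a, b

X = list(range(10**6))      # X = N; a long initial segment suffices
A, B = build_ab(X, 6)       # A = [0, 3, 11, 32, ...], B = [1, 6, 19, 53, ...]

def diffs(s):
    return {q - p for p, q in combinations(s, 2)}

# no common difference => A and B are strongly mutually unembeddable
assert diffs(A).isdisjoint(diffs(B))
```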
Three corollaries follow immediately from Proposition \ref{incomparabili1}:
\begin{cor}\label{nitrogeno} For every infinite set $X\subseteq\mathbb{N}$ there is an infinite set $A\subseteq X$ such that $X\nleq_{fe} A$. \end{cor} \begin{proof} Let $A,B$ be infinite subsets of $X$ such that $A\not\equiv_{S} B$. Then $X$ cannot be finitely embeddable in both $A$ and $B$ otherwise, since clearly $A,B\leq_{fe} X$, we would have that $[A]=[X]=[B]$, which is absurd. \end{proof}
\begin{cor}\label{popporoppo} For every infinite set $X\subseteq\mathbb{N}$ there is an infinite descending chain $X=X_{0}\supset X_{1}\supset X_{2}...$ in $\mathcal{P}_{\aleph_{0}}(\mathbb{N})$ such that $X_{i}\nleq_{fe} X_{i+1}$ for every $i\in\mathbb{N}$. \end{cor} \begin{proof} The result follows immediately from Corollary \ref{nitrogeno}.\end{proof}
\begin{cor}\label{gugugaga} There are no minimal elements in $(\mathcal{P}_{\aleph_{0}}(\mathbb{N})/\mathord\equiv_{fe},\leq_{fe})$. \end{cor}
\begin{proof} The result follows immediately from Corollary \ref{popporoppo}. \end{proof}
\section{Properties of $(\beta\mathbb{N},\leq_{fe})$}\label{extreme}
In this section we want to prove some basic properties of $(\beta\mathbb{N},\leq_{fe})$, in particular the generalization of Theorem \ref{ledzeppelin} to ultrafilters, and to characterize the maximal ultrafilters with respect to $\leq_{fe}$. We fix some notation: we will denote by $\equiv_{fe}$ the equivalence relation such that, for every $\mathcal{U},\mathcal{V}$ ultrafilters on $\mathbb{N}$,
\begin{equation*} \mathcal{U}\equiv_{fe} \mathcal{V}\Leftrightarrow \mathcal{U}\leq_{fe} \mathcal{V} \wedge \mathcal{V}\leq_{fe} \mathcal{U} \end{equation*}
and, for every ultrafilter $\mathcal{U}$, we will denote by $[\mathcal{U}]$ its equivalence class. Finally we will denote by $\leq_{fe}$ the ordering induced on the space of equivalence classes defined by setting, for every $\mathcal{U},\mathcal{V}\in\beta\mathbb{N}$,
\begin{equation*} [\mathcal{U}]\leq_{fe} [\mathcal{V}]\Leftrightarrow \mathcal{U}\leq_{fe} \mathcal{V}.\end{equation*}
\subsection{Some basic properties of $(\beta\mathbb{N},\leq_{fe})$}
The first result that we prove is that Theorem \ref{ledzeppelin} can be generalized to ultrafilters:
\begin{thm} For every $\mathcal{U},\mathcal{V}\in\beta\mathbb{N}$ if $\mathcal{U}\leq_{fe} \mathcal{V}\leq_{fe} \mathcal{U}\oplus 1$ then $[\mathcal{U}]=[\mathcal{V}]$ or $[\mathcal{U}\oplus 1]=[\mathcal{V}]$. \end{thm}
\begin{proof} Let us suppose that $\mathcal{U}\oplus 1\nleq_{fe} \mathcal{V}\nleq_{fe} \mathcal{U}$. In particular, $\mathcal{U}\oplus 1\neq \mathcal{V}$, so there exists $A\in\mathcal{U}$ such that $A+1\notin \mathcal{V}$. Since $\mathcal{V}\nleq_{fe} \mathcal{U}$ there exists $B\in\mathcal{U}$ such that $K\nleq_{fe} B$ for every $K\in\mathcal{V}$. In particular, $K\nleq_{fe} A\cap B$ for every $K\in\mathcal{V}$.\par Moreover, since $(A\cap B)+1\in\mathcal{U}\oplus 1$ we derive that there exists $C\in\mathcal{V}$ such that $C\leq_{fe} (A\cap B)+1$. So we have that
\begin{equation*} C\nleq_{fe} (A\cap B) \ \mbox{and} \ C\leq_{fe} (A\cap B)+1; \end{equation*}
by Lemma \ref{basico} we conclude that $C\subseteq (A\cap B)+1$. But $C\in\mathcal{V}$, so $(A\cap B)+1\in\mathcal{V}$ and, since $(A\cap B)+1\subseteq A+1$, this entails that $A+1\in\mathcal{V}$, which is absurd. \end{proof}
Another result that we want to prove is that $(\beta\mathbb{N},\leq_{fe})$ is not a total preorder:
\begin{prop} There are nonprincipal ultrafilters $\mathcal{U},\mathcal{V}$ such that $\mathcal{U}$ is not finitely embeddable in $\mathcal{V}$ and $\mathcal{V}$ is not finitely embeddable in $\mathcal{U}$. \end{prop}
\begin{proof} Let $A,B$ be strongly mutually unembeddable infinite sets (whose existence is a consequence of Proposition \ref{incomparabili1}). Let $\mathcal{U},\mathcal{V}$ be nonprincipal ultrafilters such that $A\in\mathcal{U}, B\in\mathcal{V}$ and let us suppose that $\mathcal{U}\leq_{fe}\mathcal{V}$. Let $C\in \mathcal{U}$ be such that $C\leq_{fe} B$. Since $C\in\mathcal{U}$, $A\cap C$ is in $\mathcal{U}$ and it is infinite (since $\mathcal{U}$ is nonprincipal). So we have that
\begin{itemize}
\item $A\cap C\leq_{fe} B$, since $A\cap C\subseteq C$;
\item $A\cap C\nleq_{fe} B$, since $A\not\equiv_{S} B$. \end{itemize}
This is absurd, so $\mathcal{U}$ is not finitely embeddable in $\mathcal{V}$. In the same way we can prove that $\mathcal{V}$ is not finitely embeddable in $\mathcal{U}$.\end{proof}
It is easy to show that, if we identify each natural number $n$ with the principal ultrafilter $\mathcal{U}_{n}=\{A\in\mathcal{P}(\mathbb{N})\mid n\in A\}$, then $(\mathbb{N},\leq)$ is an initial segment in $(\beta\mathbb{N},\leq_{fe})$. In particular, $\mathcal{U}_{0}$ is the minimum element in $\beta\mathbb{N}$. One may wonder if there is a minimum element in $(\beta\mathbb{N}\setminus\mathbb{N},\leq_{fe})$, and the answer is no. In the following proposition, by $\Theta_{X}$ we mean the clopen set
\begin{equation*} \Theta_{X}=\{\mathcal{U}\in\beta\mathbb{N}\mid X\in\mathcal{U}\}. \end{equation*}
\begin{prop} For every infinite set $X\subseteq\mathbb{N}$ there is no minimum in $((\Theta_{X}\setminus\mathbb{N})/\mathord\equiv_{fe},\leq_{fe})$. \end{prop}
\begin{proof} Let us suppose that such a minimum $M$ exists, and let $\mathcal{U}\in\Theta_{X}$ be such that $M=[\mathcal{U}]$. Let $A,B\subseteq X$ be strongly mutually unembeddable infinite subsets of $X$ and let $\mathcal{V}_{1},\mathcal{V}_{2}$ be nonprincipal ultrafilters such that $A\in \mathcal{V}_{1}$ and $B\in \mathcal{V}_{2}$ (in particular, $\mathcal{V}_{1},\mathcal{V}_{2}\in\Theta_{X}$). Since, by hypothesis, $[\mathcal{U}]$ is the minimum in $((\Theta_{X}\setminus\mathbb{N})/\mathord\equiv_{fe},\leq_{fe})$, there are $C_{1},C_{2}\in \mathcal{U}$ such that $C_{1}\leq_{fe} A$ and $C_{2}\leq_{fe} B$. Let us consider $C_{1}\cap C_{2}\in\mathcal{U}$. By construction, $C_{1}\cap C_{2}$ is finitely embeddable in $A$ and in $B$. But this is absurd: in fact, let $c_{1}<c_{2}$ be any two elements in $C_{1}\cap C_{2}$. Then there are $n,m$ such that $n+\{c_{1},c_{2}\}=\{a_{1},a_{2}\}\subset A$ and $m+\{c_{1},c_{2}\}=\{b_{1},b_{2}\}\subset B$, and this cannot happen, because in this case we would have $b_{2}-b_{1}=c_{2}-c_{1}=a_{2}-a_{1}$, while $A\not\equiv_{S} B$. \end{proof}
In particular, by taking $X=\mathbb{N}$, we obtain:
\begin{cor} There is no minimum in $((\beta\mathbb{N}\setminus\mathbb{N})/\mathord\equiv_{fe},\leq_{fe})$.\end{cor}
\subsection{Maximal Ultrafilters}
To study maximal ultrafilters in $(\beta\mathbb{N},\leq_{fe})$ we need to recall three results that have been proved in \cite{fe}:
\begin{thm}[\cite{fe}, Theorem 10]\label{fondamentali} Let $\mathcal{U},\mathcal{V}$ be ultrafilters on $\mathbb{N}$. Then $\mathcal{U}\leq_{fe}\mathcal{V}$ if and only if $\mathcal{V}\in\overline{\{\mathcal{U}\oplus\mathcal{W}\mid \mathcal{W}\in\beta\mathbb{N}\}}$. \end{thm}
\begin{cor}[\cite{fe}, Corollary 12]\label{updir} The ordering $\leq_{fe}$ on ultrafilters on $\mathbb{N}$ is upward directed.\end{cor}
We also recall that, actually, Corollary \ref{updir} can be improved: in fact, for every $\mathcal{U},\mathcal{V}\in\beta\mathbb{N}$ we have
\begin{equation*} \mathcal{U},\mathcal{V}\leq_{fe}\mathcal{U}\oplus\mathcal{V}. \end{equation*}
Let us introduce the following definition:
\begin{defn} For any $\mathcal{U}\in\beta\mathbb{N}$ the upward cone generated by $\mathcal{U}$ is the set
\begin{equation*} \mathcal{C}(\mathcal{U})=\{\mathcal{V}\in\beta\mathbb{N}\mid \mathcal{U}\leq_{fe}\mathcal{V}\}. \end{equation*}
\end{defn}
\begin{cor}[\cite{fe}, Corollary 13]\label{hui} For any $\mathcal{U}\in\beta\mathbb{N}$, the upward cone $\mathcal{C}(\mathcal{U})$ is a closed, two-sided ideal in $\beta\mathbb{N}$. It is the smallest closed right ideal containing $\mathcal{U}$ and therefore it is also the smallest two-sided ideal containing $\mathcal{U}$.\end{cor}
Let us note that from Theorem \ref{fondamentali} it easily follows that the relation $\leq_{fe}$ is not antisymmetric: in fact, if $R$ is a minimal right ideal in $(\beta\mathbb{N},\oplus)$ and $\mathcal{U}\in R$ then $\mathcal{C}(\mathcal{U})=\mathcal{C}(\mathcal{U}\oplus 1)$, so $\mathcal{U}\leq_{fe}\mathcal{U}\oplus 1$ and $\mathcal{U}\oplus 1\leq_{fe} \mathcal{U}$.\par We want to prove that there is a maximum in $(\beta\mathbb{N}/\mathord\equiv_{fe},\leq_{fe})$. Due to Corollary \ref{updir}, since $(\beta\mathbb{N}/\mathord\equiv_{fe},\leq_{fe})$ is an upward directed order, to prove that it has a maximum it is enough\footnote{An upward directed ordered set $(A,\leq)$ has at most one maximal element which, if it exists, is the greatest element of the order.} to prove that it has maximal elements.\par To prove the existence of maximal elements we use Zorn's Lemma. A technical lemma that we need is the following:
\begin{lem}\label{ultraordine} Let $I$ be a totally ordered set. Then there is an ultrafilter $\mathcal{V}$ on $I$ such that, for every element $i\in I$, the set
\begin{equation*} G_{i}=\{j\in I\mid j\geq i\}\end{equation*}
is included in $\mathcal{V}$.
\end{lem}
\begin{proof} We just have to observe that $\{G_{i}\}_{i\in I}$ is a filter and to recall that every filter can be extended to an ultrafilter.\end{proof}
The key property of these ultrafilters is the following:
\begin{prop}\label{ultraordine2} Let $I$ be a totally ordered set and let $\mathcal{V}$ be given as in Lemma \ref{ultraordine}. Then for every $A\in \mathcal{V}$ and $i\in I$ there exists $j\in A$ such that $i\leq j$. \end{prop}
We omit the straightforward proof.\par In the next Theorem we use the notion of limit ultrafilter. We recall that, given an ordered set $I$, an ultrafilter $\mathcal{V}$ on $I$ and a family $\mathcal{U}_{i}$ of ultrafilters on $\mathbb{N}$, the $\mathcal{V}$-limit of the family $\langle \mathcal{U}_{i}\mid i\in I\rangle$ (denoted by $\mathcal{V}-\lim\limits_{i\in I}\mathcal{U}_{i}$) is the ultrafilter such that, for every $A\subseteq\mathbb{N}$,
\begin{equation*} A\in\mathcal{V}-\lim\limits_{i\in I}\mathcal{U}_{i}\Leftrightarrow\{i\in I\mid A\in\mathcal{U}_{i}\}\in\mathcal{V}. \end{equation*}
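For orientation, here is a degenerate example of ours: when $\mathcal{V}$ is principal the limit reduces to one member of the family.

```latex
% Example: if \mathcal{V} is the principal ultrafilter generated by i_{0}\in I, then
\[
A\in\mathcal{V}\text{-}\lim_{i\in I}\mathcal{U}_{i}
\;\Longleftrightarrow\;
i_{0}\in\{i\in I\mid A\in\mathcal{U}_{i}\}
\;\Longleftrightarrow\;
A\in\mathcal{U}_{i_{0}},
\]
% so in this case \mathcal{V}-\lim_{i\in I}\mathcal{U}_{i}=\mathcal{U}_{i_{0}}.
```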
Let us introduce the notion of $\leq_{fe}$-chain:
\begin{defn} Let $(I,<)$ be an ordered set. We say that the family $\langle \mathcal{U}_{i}\mid i\in I\rangle$ is a $\leq_{fe}$-chain if for every $i<j\in I$ we have $\mathcal{U}_{i}\leq_{fe}\mathcal{U}_{j}$.\end{defn}
\begin{thm} Every $\leq_{fe}$-chain $\langle \mathcal{U}_{i}\mid i\in I\rangle$ has a $\leq_{fe}$-upper bound $\mathcal{U}$. \end{thm}
\begin{proof} Let $\mathcal{V}$ be an ultrafilter on $I$ with the property expressed in Lemma \ref{ultraordine}. We claim that the ultrafilter
\begin{equation*} \mathcal{U}=\mathcal{V}-\lim\limits_{i\in I} \mathcal{U}_{i} \end{equation*}
is a $\leq_{fe}$-upper bound for the $\leq_{fe}$-chain $\langle \mathcal{U}_{i}\mid i\in I\rangle$. We have to prove that $\mathcal{U}_{i}\leq_{fe} \mathcal{U}$ for every index $i$; let $A$ be an element of $\mathcal{U}$. By definition,
\begin{equation*} A\in \mathcal{U}\Leftrightarrow I_{A}=\{i\in I\mid A\in \mathcal{U}_{i}\} \in \mathcal{V}. \end{equation*}
$I_{A}$ is a set in $\mathcal{V}$ so, by Proposition \ref{ultraordine2}, there is an element $j\geq i$ in $I_{A}$. Therefore $A\in\mathcal{U}_{j}$ and, since $\mathcal{U}_{i}\leq_{fe} \mathcal{U}_{j}$, there exists an element $B$ in $\mathcal{U}_{i}$ with $B\leq_{fe} A$. Hence $\mathcal{U}_{i}\leq_{fe}\mathcal{U}$, and the thesis is proved.\end{proof}
As an immediate consequence we have that:
\begin{cor} Every $\leq_{fe}$-chain $\langle [\mathcal{U}_{i}]\mid i\in I\rangle$ has an upper bound $[\mathcal{U}]$. \end{cor}
Being an upward directed set with maximal elements, $(\beta\mathbb{N}/\mathord{\equiv_{fe}},\leq_{fe})$ has a maximum, that we denote by $M$.
\begin{defn} We say that an ultrafilter $\mathcal{U}$ on $\mathbb{N}$ is maximal if $[\mathcal{U}]=M$. We denote by $\mathcal{M}$ the set of maximal ultrafilters.\end{defn}
By definition, for every ultrafilter $\mathcal{U}$ we have the following equivalences:
\begin{equation*} [\mathcal{U}]=M\Leftrightarrow \mathcal{U}\in\mathcal{M}\Leftrightarrow \mathcal{V}\leq_{fe}\mathcal{U} \ \mbox{for every} \ \mathcal{V}\in\beta\mathbb{N}. \end{equation*}
In particular, we can characterize $\mathcal{M}$ in terms of the $\leq_{fe}$-cones:
\begin{cor}\label{zumpa} $\mathcal{M}=\bigcap\limits_{\mathcal{U}\in\beta\mathbb{N}} \mathcal{C}(\mathcal{U}).$\end{cor}
\begin{proof} We just have to observe that $\mathcal{M}\subseteq\mathcal{C}(\mathcal{U})$ for every ultrafilter $\mathcal{U}$ and that, if $\mathcal{U}$ is a maximal ultrafilter, then $\mathcal{C}(\mathcal{U})=\mathcal{M}$.\end{proof}
We can now prove our main result:
\begin{thm}\label{eccolo} $\mathcal{M}=\overline{K(\beta\mathbb{N},\oplus)}$. \end{thm}
\begin{proof} Given any ultrafilter $\mathcal{U}$, by Corollary \ref{hui} we know that $\mathcal{C}(\mathcal{U})$ is the minimal closed bilateral ideal containing $\mathcal{U}$. By Corollary \ref{zumpa} we know that $\mathcal{M}=\bigcap\limits_{\mathcal{U}\in\beta\mathbb{N}}\mathcal{C}(\mathcal{U})$ so, in particular, being the intersection of a family of closed bilateral ideals, $\mathcal{M}$ itself is a closed bilateral ideal. So if $\mathcal{U}$ is any ultrafilter in $K(\beta\mathbb{N},\oplus)$, we know that: \begin{enumerate} [leftmargin=*,label=(\roman*),align=left ]
\item $\mathcal{M}\subseteq\mathcal{C}(\mathcal{U})$;
\item $\mathcal{C}(\mathcal{U})=\overline{K(\beta\mathbb{N},\oplus)}$. \end{enumerate}
So $\mathcal{M}$ is a closed bilateral ideal included in $\overline{K(\beta\mathbb{N},\oplus)}$, and the only such ideal is $\overline{K(\beta\mathbb{N},\oplus)}$ itself.\end{proof} This result has a few interesting consequences:
\begin{cor}\label{uno} An ultrafilter $\mathcal{U}$ is maximal if and only if every element $A$ of $\mathcal{U}$ is piecewise syndetic. \end{cor}
\begin{proof} This follows from this well-known characterization of $\overline{K(\beta\mathbb{N},\oplus)}$: an ultrafilter $\mathcal{U}$ is in $\overline{K(\beta\mathbb{N},\oplus)}$ if and only if every element $A$ of $\mathcal{U}$ is piecewise syndetic (see, e.g., \cite{rif12}). \end{proof}
As mentioned in the introduction, the notion of finite embeddability is related to some properties that arose in combinatorial number theory. A special feature of maximal ultrafilters is that every set in a maximal ultrafilter satisfies many of these combinatorial properties:
\begin{defn} We say that a property $P$ is $\leq_{fe}$-upward invariant if the following holds: for every $A,B\subseteq \mathbb{N}$, if $P(A)$ holds and $A\leq_{fe} B$ then $P(B)$ holds.\par We say that $P$ is partition regular if the family $S_{P}=\{A\subseteq\mathbb{N}\mid P(A)$ holds$\}$ contains an ultrafilter (i.e., if for every finite partition $\mathbb{N}=A_{1}\cup... \cup A_{n}$ there exists at least one index $i\leq n$ such that $A_{i}\in S_{P})$.\end{defn}
By Proposition \ref{trs} it follows that the following properties are $\leq_{fe}$-upward invariant:
\begin{enumerate} [leftmargin=*,label=(\roman*),align=left ]
\item $A$ is thick;
\item\label{ps} $A$ is piecewise syndetic;
\item\label{alap} $A$ contains arbitrarily long arithmetic progressions;
\item\label{bd} $BD(A)>0$, where $BD(A)$ is the upper Banach density of $A$. \end{enumerate} In particular, properties \ref{ps}, \ref{alap}, \ref{bd} are also partition regular. These kinds of properties are important in relation to maximal ultrafilters:
\begin{prop}\label{consequence} Let $P$ be a partition regular $\leq_{fe}$-upward invariant property of sets. Then for every maximal ultrafilter $\mathcal{U}$, for every $A\in\mathcal{U}$, $P(A)$ holds.\end{prop}
\begin{proof} Let $P$ be given, let $S_{P}=\{A\subseteq\mathbb{N}\mid P(A)$ holds$\}$ and let $\mathcal{V}$ be an ultrafilter with $\mathcal{V}\subseteq S_{P}$ (such an ultrafilter exists because $P$ is partition regular). Let $B\in\mathcal{U}$. Since $\mathcal{U}$ is maximal, $\mathcal{V}\leq_{fe}\mathcal{U}$. Let $A\in\mathcal{V}$ be such that $A\leq_{fe} B$. Since $P$ is $\leq_{fe}$-upward invariant and $P(A)$ holds, we obtain that $P(B)$ holds, hence we have the thesis. \end{proof}
E.g., as a consequence of Proposition \ref{consequence} we can prove the following:
\begin{cor}\label{due} Let $\mathcal{U}\in\overline{K(\beta\mathbb{N},\oplus)}$. Then:
\begin{enumerate} [leftmargin=*,label=(\roman*),align=left ]
\item each set $A$ in $\mathcal{U}$ has positive upper Banach density; \item each set $A$ in $\mathcal{U}$ contains arbitrarily long arithmetic progressions; \item each set $A$ in $\mathcal{U}$ is piecewise syndetic. \end{enumerate}
\end{cor}
In particular, by combining Corollaries \ref{uno} and \ref{due} we obtain an alternative proof of the following known results: \begin{itemize}
\item every piecewise syndetic set contains arbitrarily long arithmetic progressions;
\item every piecewise syndetic set has positive upper Banach density. \end{itemize}
In the forthcoming paper \cite{functions} we will show how similar arguments can be used to prove combinatorial properties of other families of ultrafilters, e.g. to prove that for every ultrafilter $\mathcal{U}\in\overline{K(\beta\mathbb{N},\odot)}$ and every $A\in\mathcal{U}$, the set $A$ contains arbitrarily long arithmetic progressions and a solution to every partition regular homogeneous equation\footnote{An equation $P(x_{1},...,x_{n})=0$ is partition regular if and only if for every finite coloring $\mathbb{N}=C_{1}\cup...\cup C_{r}$ of $\mathbb{N}$ there exist an index $i$ and monochromatic elements $a_{1},...,a_{n}\in C_{i}$ such that $P(a_{1},...,a_{n})=0$.}.
\section{A Direct Nonstandard Proof that $\mathcal{M}=\overline{K(\beta\mathbb{N},\oplus)}$}\label{NS23}
In this section we assume that the reader is familiar with the basics of nonstandard analysis. In particular, we will use the notion of nonstandard extension of subsets of $\mathbb{N}$ and the transfer principle. We refer to \cite{rif5} and \cite{davis} for an introduction to the foundations of nonstandard analysis and to the nonstandard tools that we are going to use.\par Both in \cite{fe} and in \cite{Tesi} it has been shown that the relation of finite embeddability between sets has a very nice characterization in terms of nonstandard analysis, which allows one to study some of its properties in a quite simple, and elegant, way. We recall the characterization (in the following proposition, it is assumed for technical reasons that the nonstandard extension that we consider satisfies at least the $\mathfrak{c}^{+}$-enlarging property\footnote{We recall that a nonstandard extension $^{*}\mathbb{N}$ of $\mathbb{N}$ has the $\mathfrak{c}^{+}$-enlarging property if, for every family $\mathcal{F}$ of subsets of $\mathbb{N}$ with the finite intersection property, the intersection $\bigcap\limits_{A\in\mathcal{F}}$$^{*}A$ is nonempty.}, where $\mathfrak{c}$ is the cardinality of $\mathcal{P}(\mathbb{N})$):
\begin{prop}[\cite{fe}, Proposition 15]\label{NSCAR} Let $A,B$ be subsets of $\mathbb{N}$. The following two conditions are equivalent: \begin{enumerate} [leftmargin=*,label=(\roman*),align=left ]
\item $A$ is finitely embeddable in $B$;
\item there is a hypernatural number $\alpha$ in $^{*}\mathbb{N}$ such that $\alpha+A\subseteq$$^{*}B$. \end{enumerate} \end{prop}
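To illustrate the characterization with a simple example of ours, let $E$ be the set of even numbers and $O$ the set of odd numbers:

```latex
% Since \mathbb{N}\subseteq{}^{*}\mathbb{N}, the standard shift \alpha=1 already witnesses
% E\leq_{fe} O:
\[
1+E=O\subseteq{}^{*}O.
\]
% For a thick set B, {}^{*}B contains an interval of infinite length, so a suitable
% infinite \alpha gives \alpha+\mathbb{N}\subseteq{}^{*}B, i.e. \mathbb{N}\leq_{fe} B,
% matching the fact that thick sets are exactly the \leq_{fe}-maximal ones.
```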
We use Proposition \ref{NSCAR} to reprove directly, with nonstandard methods, Theorem \ref{eccolo}:
\begin{proof}[Proof of Theorem \ref{eccolo}] Let $\mathcal{U}\in\overline{K(\beta\mathbb{N},\oplus)}$, let $A$ be a set in $\mathcal{U}$, and let $\mathcal{V}$ be an ultrafilter on $\mathbb{N}$. Since $A$ is piecewise syndetic there is a natural number $n$ such that
\begin{equation*} T=\bigcup_{i=1}^{n} (A+i) \end{equation*}
is thick. By transfer\footnote{Thick sets can be characterized by means of nonstandard analysis as follows (see e.g. \cite{Tesi}): a set $T\subseteq\mathbb{N}$ is thick if and only if $^{*}T$ contains an interval of infinite length.} it follows that there are hypernatural numbers $\alpha\in$$^{*}\mathbb{N}$ and $\eta\in$$^{*}\mathbb{N}\setminus\mathbb{N}$ such that the interval $[\alpha,\alpha+\eta]$ is included in $^{*}T$. In particular, since $\eta$ is infinite, $\alpha+\mathbb{N}\subseteq$$^{*}T$.\par For every $i\leq n$ we consider
\begin{equation*} B_{i}=\{n\in\mathbb{N}\mid \alpha+n\in\mbox{}^{*}(A+i)\}. \end{equation*}
Since $\bigcup_{i=1}^{n} B_{i}=\mathbb{N}$, there is an index $i$ such that $B_{i}\in\mathcal{V}$. We claim that $B_{i}\leq_{fe} A$. In fact, by construction $\alpha+B_{i}\subseteq$$^{*}A+i$, so
\begin{equation*} (\alpha-i)+B_{i}\subseteq\mbox{}^{*}A. \end{equation*} By Proposition \ref{NSCAR}, this entails that $B_{i}\leq_{fe} A$, and this proves that $\mathcal{V}\leq_{fe}\mathcal{U}$ for every ultrafilter $\mathcal{V}$. Hence $\mathcal{U}$ is maximal.\end{proof}
\end{document} |
\begin{document}
\title{Quasiperiodic Poincar\'{e} Persistence at High Degeneracy}
\author{Weichao Qian, Yong Li, Xue Yang } \date{}
\maketitle
\begin{abstract} For Hamiltonian systems with degeneracy of any higher order, we study the persistence of resonant invariant tori, which, as lower-dimensional invariant tori, might be elliptic, hyperbolic or of mixed type. Hence we prove a quasiperiodic Poincar\'{e} theorem at high degeneracy. This answers a long-standing conjecture on the persistence of resonant invariant tori in quite general situations.
{\bf Keywords} {Hamiltonian systems; high degeneracy; KAM theory; Treshch\"{e}v's resonant invariant tori; quasiperiodic Poincar\'{e} theorem.} \end{abstract}
\section{Introduction}\label{introduction}
This paper concerns the persistence of resonant invariant tori for the following Hamiltonian system
\begin{eqnarray}\label{005} H(x,y)=H_0 (y) + \varepsilon P(x,y,\varepsilon), \end{eqnarray} where $x \in T^d = R^d/ Z^d$, $y\in G$, a closed region in $R^d$; $H_0 (y)$ is a real analytic function
on a complex neighborhood of the bounded closed region $T^d \times G$; $\varepsilon P(x, y, \varepsilon)$, a general small perturbation, is a real analytic function, and $\varepsilon > 0$ is a small parameter. Here, by resonant invariant tori we mean tori whose frequency $\omega(y) = \frac{\partial H_0}{\partial y}$ is not rationally independent. We will give the precise statement below.
The celebrated KAM theory due to Kolmogorov, Arnold and Moser asserts that, if an integrable system $H_0(y)$ is nondegenerate, i.e. $\det \partial_y^2 H_0(y) \neq 0$, then for the perturbed system $H (x,y) = H_0(y)+ \varepsilon P(x,y,\varepsilon)$, most nonresonant invariant tori survive (\cite{Arnold,Kolmogorov,Moser}). For some recent developments and applications of KAM theory, we refer to \cite{Guardia,han,Kaloshin1,Meyer,Palacian,Palacian1,Qian}. However, in the presence of resonance, the persistence problem becomes much more complicated. Let us briefly recall the history. The periodic case goes back to
the work of Poincar\'{e} in the nineteenth century, which does not involve the small divisor problem (\cite{Poincare}). There is a long-standing conjecture about resonant tori under a convexity assumption on $H_0$
(\cite{Broer,Cong,Livia,Gentile,Kappeler}), as written by Kappeler and P\"{o}schel in \cite{Kappeler}:
\begin{itemize}
\item[] \emph{For $m = 1 $ in particular, such a torus is foliated into identical closed orbits. Bernstein $\&$ Katok $(\emph{\cite{Bernstein}})$ showed that in a convex system at least $n$ of them survive any sufficiently small perturbation. $\cdots$ For the intermediate cases with $1<m<n-1$, only partial results are known $\cdots$. The long standing conjecture is that at least $n- m +1$, and generically $2^{n-m}$, invariant $m-$tori always survive in a nondegenerate system $\cdots$. That is, their number should be equal to the number of critical points of smooth functions on the torus $T^{n-m}$.}
\end{itemize}
In the above description, $n$ and $m$ denote the number of degrees of freedom of the system and the dimension of the lower-dimensional invariant tori, respectively.
The first breakthrough on the conjecture mentioned above was due to Treshch\"{e}v (\cite{Treshchev}) for the persistence of hyperbolic resonant tori in 1989, 35 years after the establishment of KAM theory, and such tori are nowadays called Treshch\"{e}v tori. For the persistence of general resonant tori, we refer readers to \cite{Cong,Li}, and for multi-scale Hamiltonian systems, to \cite{Xu2,xu3}. In fact, when the first-order perturbation in $\varepsilon$ is nondegenerate, the proof of the conjecture mentioned above has been completed, see \cite{Cong,Li,Treshchev}. However,
\begin{itemize}
\item[] \emph{is the conjecture true if the perturbation is of high degeneracy?}
\end{itemize} In the present paper we address this essential problem.
In order to state our main result, first, let us introduce some notation. We say that a frequency vector $\omega = {\partial_y H_0} (y)$ is nonresonant if $\langle k, \omega \rangle \neq 0$ for any $k \in {Z^d \setminus \{0\}}$. Furthermore, if there is a subgroup $g$ of $Z^d$ such that $\langle k , \omega\rangle = 0$ for all $k \in g$ and $\langle k , \omega \rangle \neq 0$ for all $k \in {Z^d \setminus g}$, then $\omega$ is called a resonant frequency of multiplicity $m_0$ (a $g-$resonant frequency), where $g$ is generated by independent $d-$dimensional integer vectors $\tau _1, \ldots , \tau _ {m_0}$. For a given subgroup $g$, the $m~(= d - m_0)$ dimensional surface \begin{eqnarray*} \widetilde{\Lambda}(g, G)=\{y \in G : \langle k , \omega \rangle =0, k\in g \} \end{eqnarray*} is called the $g-$resonant surface. Following \cite{Treshchev}, by group theory, there are integer vectors ${\tau _1'}, \cdots, {\tau _m'}$ $\in$ $Z ^ d$ such that $Z^d$ is generated by ${\tau _1}, \cdots, {\tau _{m_0}},{\tau _1'}, \cdots, {\tau _m'}$ and $\det K_0 = 1 $, where $K_0 = (K_* , K^{'})$, $K_* = (\tau _1', \cdots, \tau _m')$ and $K^{'}=({\tau _1}, \cdots, {\tau _{m_0}}) $ are $d\times d$, $d\times m $ and $d\times m_0$ matrices, respectively, $K_{*}$ generates the quotient group $Z^d / g$, and $K^{'}$ generates the group $g$. If $H_0$ is nondegenerate and $\det {K^{'}}^T \partial_y^2 {H_0}{K^{'}} \neq 0$ for $y \in \widetilde{\Lambda}(g,G)$, then $H_0$ is said to be $g-$nondegenerate.
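To fix ideas, here is a small example of the above construction (ours, added for illustration). Let $d=2$ and suppose $\omega(y_0)=(1,1)^T$. The resonance lattice is generated by $\tau_1=(-1,1)^T$, so $m_0=1$, $m=1$, and one may choose $\tau_1'=(1,0)^T$, giving
\begin{equation*} K'=\left(
\begin{array}{c}
-1 \\
1 \\
\end{array}
\right),\qquad K_*=\left(
\begin{array}{c}
1 \\
0 \\
\end{array}
\right),\qquad K_0=(K_*,K')=\left(
\begin{array}{cc}
1 & -1 \\
0 & 1 \\
\end{array}
\right),\qquad \det K_0=1. \end{equation*}
Indeed $\tau_1$ and $\tau_1'$ generate $Z^2$, since $(0,1)^T=\tau_1+\tau_1'$, and the $g-$resonant surface is $\widetilde{\Lambda}(g,G)=\{y\in G:\omega_1(y)=\omega_2(y)\}$.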
Let \begin{eqnarray*} \Gamma = K_0^T \partial_y^2 H_0 (y_0) K_0 = \left(
\begin{array}{cc}
\Gamma_{11} & \Gamma_{12} \\
\Gamma_{21} & \Gamma_{22} \\
\end{array}
\right), \end{eqnarray*} where $\Gamma_{11},$ $\Gamma_{12}$, $\Gamma_{21},$ $\Gamma_{22}$ are $m\times m,$ $m\times m_0$, $m_0\times m,$ $m_0\times m_0$ matrices, respectively, $\Gamma_{12} = \Gamma_{21}^T$, $\Gamma_{22} = {K'}^T \partial_y^2 H_0 (y_0) K'$, and $m_0 = d - m$.
For any $y_0 \in \widetilde{\Lambda}(g, G)$, with the help of the Taylor expansion at $y_0$ and the following coordinate transformation $y- y_0 = K_0 p$, $q = K_0 ^ T x$, Hamiltonian (\ref{005}) is changed to \begin{eqnarray}\label{qq}
H(q,p)&=& \langle\omega^{*} , p'\rangle + \frac{1}{2}\langle p, \Gamma(\omega^{*}) p\rangle + O(|K_0 p|^3) + \varepsilon \bar{P} (q, p, \varepsilon) \end{eqnarray} up to an irrelevant constant, where
\begin{eqnarray*}
\omega^{*} &=& K_{*}^T \omega(y_0) \in \Lambda (g,G),~~~~~~ \Lambda(g,G)= \{\omega^{*} \in R^m : y\in \widetilde{\Lambda}(g,G)\},\\
p&=&(p', p''),~~~~~~~~~~p' = (p_1, \cdots ,p_m)^T,~~~~~~~~~~p'' = (p_{m+1}, \cdots ,p_d)^T,\\ \bar{P}(q,p,\varepsilon) &=& P((K_0 ^T)^{-1} q, y_0 + K_0 p,\varepsilon). \end{eqnarray*}
Here we used the fact that $\Lambda(g, G)$ is diffeomorphic to the $m-$dimensional surface $\widetilde{\Lambda}(g,G)$.
\begin{remark} The coordinate transformation $y- y_0 = K_0 p$, $q = K_0 ^ T x$ is symplectic. In fact, \begin{eqnarray*} \left(
\begin{array}{c}
x \\
y -y_0 \\
\end{array} \right) = \left(
\begin{array}{cc}
(K_0^T)^{-1} & 0 \\
0 & K_0 \\
\end{array}
\right) \left(
\begin{array}{c}
q \\
p \\
\end{array} \right). \end{eqnarray*} Then \begin{eqnarray*} &~&\left(
\begin{array}{cc}
((K_0^T)^{-1})^T & 0 \\
0 & K_0^T \\
\end{array} \right) \left(
\begin{array}{cc}
0 & I \\
-I & 0 \\
\end{array} \right) \left(
\begin{array}{cc}
(K_0^T)^{-1} & 0 \\
0 & K_0 \\
\end{array} \right)\\ &=& \left(
\begin{array}{cc}
K_0^{-1} & 0 \\
0 & K_0^T \\
\end{array} \right) \left(
\begin{array}{cc}
0 & I \\
-I & 0 \\
\end{array} \right) \left(
\begin{array}{cc}
(K_0^T)^{-1} & 0 \\
0 & K_0 \\
\end{array} \right)\\ &=& \left(
\begin{array}{cc}
0 & I \\
-I & 0 \\
\end{array}
\right), \end{eqnarray*} which means that the Jacobian of the coordinate transformation satisfies the symplectic condition. \end{remark}
By the following symplectic transformation:
\begin{eqnarray*}
p \rightarrow \varepsilon^{\frac{1}{4}}p, ~q \rightarrow q, ~H \rightarrow\varepsilon^{-\frac{1}{4}}H,
\end{eqnarray*}
Hamiltonian (\ref{qq}) is changed to
\begin{eqnarray}\label{N1}
H(q,p)&=& \langle\omega^{*} , p'\rangle + \frac{\varepsilon^{\frac{1}{4}}}{2}\langle p, \Gamma(\omega^{*}) p\rangle + \varepsilon^{\frac{1}{2}}O(|K_0 p|^3) + \varepsilon^{\frac{3}{4}} \bar{P} (q, p, \varepsilon).
\end{eqnarray}
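For the reader's convenience, we verify the scaling term by term (an elementary check we add here): substituting $p\rightarrow\varepsilon^{\frac{1}{4}}p$ into (\ref{qq}) and multiplying by $\varepsilon^{-\frac{1}{4}}$ gives
\begin{eqnarray*} \varepsilon^{-\frac{1}{4}}H(q,\varepsilon^{\frac{1}{4}}p) &=& \varepsilon^{-\frac{1}{4}}\langle\omega^{*},\varepsilon^{\frac{1}{4}}p'\rangle + \frac{\varepsilon^{-\frac{1}{4}}}{2}\langle \varepsilon^{\frac{1}{4}}p, \Gamma(\omega^{*})\varepsilon^{\frac{1}{4}}p\rangle + \varepsilon^{-\frac{1}{4}}O(\varepsilon^{\frac{3}{4}}|K_0 p|^3) + \varepsilon^{-\frac{1}{4}}\cdot\varepsilon \bar{P}\\ &=& \langle\omega^{*},p'\rangle + \frac{\varepsilon^{\frac{1}{4}}}{2}\langle p,\Gamma(\omega^{*})p\rangle + \varepsilon^{\frac{1}{2}}O(|K_0 p|^3) + \varepsilon^{\frac{3}{4}}\bar{P}(q,\varepsilon^{\frac{1}{4}}p,\varepsilon), \end{eqnarray*}
where the rescaled remainder is again denoted by $\bar{P}$ in (\ref{N1}).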
For $\omega \in \Lambda(g,G)$, replace $p'$, $p''$, $q'$, $q''$, $H(p', p'', q', q'')$ by ${y}$, ${v}$, ${x}$, ${u}$, ${H}(x,y,u,v)$ respectively, and rewrite $\varepsilon^{\frac{1}{4}}$ as $\varepsilon$. Then Hamiltonian (\ref{N1}) becomes \begin{eqnarray}\label{082}
\nonumber H(x,y,u,v) &=& \langle\omega, y\rangle + \frac{\varepsilon}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , \Gamma \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) \rangle\\
&~&~+ \varepsilon^2 O(|K_0 \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right)
|^3) + \varepsilon^3 P_1(x,y,u,v, \varepsilon). \end{eqnarray}
Since the mean $[P_1(\cdot , y,u,v,0)]= \int_{T^m} P_1(x, y,u,v,0) dx$ is $T^{m_0} - $periodic in $u$, there are at least $m_0 + 1$ critical points of the mean $[P_1(\cdot , y,u,v,0)]$ (\cite{Milnor}). Treshch\"{e}v $(\cite{Treshchev})$ dealt with general hyperbolic critical points. In \cite{Cong}, Cong, K\"{u}pper, Li and You dealt with the persistence of resonant invariant tori when $[P_1(\cdot,y,u,v,0)]$ possesses nondegenerate critical points, where $[P_1(\cdot,y,u,v,0)]$ corresponds to the $0$-th order Taylor expansion of $[P_1(x, y,u,v,\varepsilon)]$ in $\varepsilon$. Li and Yi $(\cite{Li})$ further removed the $g-$nondegeneracy condition. However, if the $0$-th order term of the Taylor expansion in $\varepsilon$ of the mean of the perturbation in the angle variables is degenerate, the conjecture is that there are still at least $m_0+1$ invariant tori.
Now we are in a position to state our main result for (\ref{005}) on $\widetilde{\Lambda}(g,G)$. \begin{theorem}\label{dingli11} Let $H$ be real analytic
on a complex neighborhood of $T^d \times G$. Assume
\begin{enumerate}
\item [\bf{(S1)}.] $\omega_* = K_*^T \partial_y H_0(y)$ satisfies the R\"{u}ssmann non-degeneracy condition;
\item [\bf{(S2)}.]$ rank ~K_0^T \partial_y^2 H_0 (y) K_0 = n+ m_0$, $0<n\leq m$, and $rank((K')^T \partial_y^2 H_0 K_*, (K')^T \partial_y^2 H_0 K') = m_0$ for a given $g$ on $\widetilde{\Lambda}(g,G);$
\item [\bf{(S3)}.] $rank \left(
\begin{array}{cc}
K & \bar{\omega}_* \\
\bar{\omega}_*^T & 0 \\
\end{array}
\right) = n + 2m_0+1$ for a given $g$ on $\widetilde{\Lambda}(g,G), $ where $\bar{\omega}_* = \left(
\begin{array}{c}
\omega_* \\
0 \\
\end{array}
\right)
\in R^{m+ 2m_0}$, $K= \left(
\begin{array}{cc}
K_0^T \partial_y^2 H_0(y) K_0 & 0 \\
0& \partial_{x_2}^2 \int_{T^m}P_1 (x,0,\varepsilon) d x_1 \\
\end{array}
\right)
$;
\item [\bf{(S4)}.] for some positive constant $\tilde \sigma$ (independent of $\varepsilon$), \begin{eqnarray}\label{condition1}
|\det \partial_{x_2}^2\varepsilon^{-\kappa} \int_{T^m}P_1 (x,0,\varepsilon) d x_1| > \tilde{\sigma}~~~\forall x,~y,~u,~v, \end{eqnarray}
where $\kappa$ is a given constant, $x_1 = K_*^T x$, $x_2 = K_0^T x$.
\end{enumerate}
Then there exist an $\varepsilon_0 >0 $ and a family of Cantor sets $\widetilde{\Lambda}_\varepsilon(g,G) \subset \widetilde{\Lambda}(g,G)$, $0<\varepsilon < \varepsilon_0$, such that
i) under conditions $\bf{(S1)}$, $\bf{(S2)}$ and $\bf{(S4)}$, for each $y \in \widetilde{\Lambda}_\varepsilon(g,G)$, system (\ref{005}) admits $2^{m_0}$ families of invariant tori, possessing hyperbolic, elliptic or mixed types, associated to nondegenerate relative equilibria. Moreover, all such perturbed tori corresponding to the same $y \in \widetilde{\Lambda}_\varepsilon(g,G) $ are symplectically conjugated to the standard quasiperiodic $m-$tori $T^m$ with the Diophantine frequency vector $ \hat{\omega}$, of which $n$ components are the same as those of $\omega_*$ and the others slightly drift;
ii) under conditions $\bf{(S1)}$, $\bf{(S2)}$, $\bf{(S3)}$ and $\bf{(S4)}$, for each $y \in \widetilde{\Lambda}_\varepsilon(g,G)$, on a given energy surface system (\ref{005}) admits $2^{m_0}$ families of invariant tori, possessing hyperbolic, elliptic or mixed types, associated to nondegenerate relative equilibria. Moreover, $n$ components of the frequencies of the unperturbed tori $\omega_*$ and of the perturbed tori $\bar{\omega}$ satisfy $\bar{\omega} =t \omega_*$, where $t$ is a constant, and the others slightly drift.
iii)
the relative Lebesgue measure $|\widetilde{\Lambda}(g,G) \setminus \widetilde{\Lambda}_{\varepsilon}(g,G)|$ tends to 0 as $\varepsilon \rightarrow 0$.
\end{theorem}
\begin{remark} Conditions $\bf{(S1)}$ and $\bf{(S2)}$ are weaker than the $g-$nondegeneracy condition on $H_0$. As a result, the preservation of frequencies on the perturbed tori will be determined by $(K_{*}^T \partial_y^2 {H_0}K_{*}, K_*^T \partial_y^2 H K')$. The details will be shown in Section $\ref{normal form}$. \end{remark}
\begin{remark} Condition $\bf{(S3)}$ is equivalent to the following condition$:$ \begin{itemize}
\item [$\bf{(S3')}$] $rank \left(
\begin{array}{cc}
K_0^T \partial_y^2 H K_0 & \bar{\omega}_* \\
\bar{\omega}_*^T & 0 \\
\end{array}
\right) = n + m_0+1$ for a given $g$ on $\widetilde{\Lambda}(g,G), $ where $\bar{\omega}_* =\left(
\begin{array}{c}
\omega_* \\
0 \\
\end{array}
\right)
\in R^{m+ m_0}$, $\omega_* = K_*^T \partial_y H_0(y)\in R^{m}$; \end{itemize}
\end{remark}
\begin{cor}\label{cor1} Let $H$ be real analytic
on a complex neighborhood of $T^d \times G$. Assume
\begin{enumerate}
\item[{\bf (S5)}] $K_0^T \partial_y^2 H_0 K_0$ has an $(m_0+n)\times (m_0+n)$ nonsingular minor, $n < m$, and $\det {K^{'}}^T \partial_y^2 {H_0}{K^{'}} \neq 0$ for $y \in \widetilde{\Lambda}(g,G)$.
\end{enumerate}
Then there exist an $\varepsilon_0 >0 $ and a family of Cantor sets $\widetilde{\Lambda}_\varepsilon(g,G) \subset \widetilde{\Lambda}(g,G)$, $0<\varepsilon < \varepsilon_0$, such that
i) under conditions $\bf{(S1)}$, $\bf{(S5)}$ and $\bf{(S4)}$, for each $y \in \widetilde{\Lambda}_\varepsilon(g,G)$, system (\ref{005}) admits $2^{m_0}$ families of invariant tori, possessing hyperbolic, elliptic or mixed types, associated to nondegenerate relative equilibria. Moreover, all such perturbed tori corresponding to the same $y \in \widetilde{\Lambda}_\varepsilon(g,G) $ are symplectically conjugated to the standard quasiperiodic $m-$tori $T^m$ with the Diophantine frequency vector $ \hat{\omega}$, of which $n$ components are the same as those of $\omega_*$ and the others slightly drift;
ii) under conditions $\bf{(S1)}$, $\bf{(S3)}$, $\bf{(S4)}$ and $\bf{(S5)}$, for each $y \in \widetilde{\Lambda}_\varepsilon(g,G)$, on a given energy surface system (\ref{005}) admits $2^{m_0}$ families of invariant tori, possessing hyperbolic, elliptic or mixed types, associated to nondegenerate relative equilibria. Moreover, $n$ components of the frequencies of the unperturbed tori $\omega_*$ and of the perturbed tori $\bar{\omega}$ satisfy $\bar{\omega} =t \omega_*$, where $t$ is a constant, and the others slightly drift.
iii)
the relative Lebesgue measure $|\widetilde{\Lambda}(g,G) \setminus \widetilde{\Lambda}_{\varepsilon}(g,G)|$ tends to 0 as $\varepsilon \rightarrow 0$. \end{cor}
\begin{remark} $\bf{(S5)}$ is equivalent to the following $\bf{(S6)}$: \begin{enumerate}
\item[{\bf (S6)}] $rank \big(K_*^T \partial_y^2 H_0 K_* - K_*^T \partial_y^2 H_0 K' ({K'}^T \partial_y^2 H_0 K')^{-1} {K'}^T\partial_y^2 H_0 K_* \big)= n,$ $n < m$, and $\det {K^{'}}^T \partial_y^2 {H_0}{K^{'}} \neq 0$ for $y \in \widetilde{\Lambda}(g,G)$,
\end{enumerate}
which comes from the following fact:
\begin{eqnarray*}
\left(
\begin{array}{cc}
I_r & 0 \\
-DB^{-1} & I_{m-r} \\
\end{array}
\right) \left(
\begin{array}{cc}
B & C \\
D & E \\
\end{array}
\right)
= \left(
\begin{array}{cc}
B & C \\
0 & -DB^{-1}C + E \\
\end{array}
\right),
\end{eqnarray*}
where $B$ is nonsingular. \end{remark}
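To see how this fact yields the equivalence (a sketch of the argument, added here for the reader's convenience): apply the identity, after permuting blocks, to $K_0^T \partial_y^2 H_0 K_0$ with
\begin{equation*} B={K'}^T \partial_y^2 H_0 K',\quad C={K'}^T \partial_y^2 H_0 K_*,\quad D=K_*^T \partial_y^2 H_0 K',\quad E=K_*^T \partial_y^2 H_0 K_*, \end{equation*}
where $B$ is nonsingular by assumption. Since row operations preserve rank,
\begin{equation*} rank~ K_0^T \partial_y^2 H_0 K_0 = rank~ B + rank\,(E - D B^{-1} C) = m_0 + rank\,(E - D B^{-1} C), \end{equation*}
so the existence of a nonsingular $(m_0+n)\times(m_0+n)$ minor in $\bf{(S5)}$ corresponds to the Schur complement $E-DB^{-1}C$ having rank $n$.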
The following case, under the $g-$nondegeneracy condition, is a direct consequence. \begin{cor} \label{cor2} Let $H$ be real analytic
on a complex neighborhood of $T^d \times G$. Assume
\begin{enumerate}
\item [\bf{(S7)}] $H_0$ is $g-$nondegenerate for a given $g$ on $\widetilde{\Lambda}(g,G);$
\item [\bf{(S8)}]$ rank \left(
\begin{array}{cc}
K_0^T \partial_y^2 H_0(y) K_0 & \bar{\omega}_* \\
\bar{\omega}_*^T & 0 \\
\end{array}
\right) = m+ m_0+1 $ for given $g$ on $\widetilde{\Lambda}(g,G)$, where $\bar{\omega}_* = \left(
\begin{array}{c}
\omega_* \\
0 \\
\end{array}
\right) \in R^{m+ m_0}$, $\omega_* = K_*^T \partial_y H_0(y);$
\end{enumerate}
Then there exist an $\varepsilon_0 >0 $ and a family of Cantor sets $\widetilde{\Lambda}_\varepsilon(g,G) \subset \widetilde{\Lambda}(g,G)$, $0<\varepsilon < \varepsilon_0$, such that
i) under conditions $\bf{(S4)}$ and $\bf{(S7)}$, for each $y \in \widetilde{\Lambda}_\varepsilon(g,G)$, system (\ref{005}) admits $2^{m_0}$ families of invariant tori, possessing hyperbolic, elliptic or mixed types, associated to nondegenerate relative equilibria. Moreover, all such perturbed tori corresponding to the same $y \in \widetilde{\Lambda}_\varepsilon(g,G) $ are symplectically conjugated to the standard quasiperiodic $m-$torus $T^m$ with the Diophantine frequency vector $ \omega^* = K_{*}^T \omega(y_0)$;
ii) under conditions $\bf{(S4)}$, $\bf{(S7)}$ and $\bf{(S8)}$, for each $y \in \widetilde{\Lambda}_\varepsilon(g,G)$, on a given energy surface system (\ref{005}) admits $2^{m_0}$ families of invariant tori, possessing hyperbolic, elliptic or mixed types, associated to nondegenerate relative equilibria. Moreover, the frequencies of the unperturbed tori $\omega_*$ and of the perturbed tori $\bar{\omega}$ satisfy $\bar{\omega} =t \omega_*$, where $t$ is a constant.
iii)
the relative Lebesgue measure $|\widetilde{\Lambda}(g,G) \setminus \widetilde{\Lambda}_{\varepsilon}(g,G)|$ tends to 0 as $\varepsilon \rightarrow 0$.
\end{cor}
\begin{remark} The motion equation of the unperturbed Hamiltonian system $H_0(y)$ is \begin{eqnarray*} \left\{
\begin{array}{ll}
\dot{x} = \omega(y), \\
\dot{y} = 0.
\end{array} \right. \end{eqnarray*} When $\omega(y)$ is $g-$resonant, under the following sympletic transformation: \begin{eqnarray*} K_0 ^T x = \left(
\begin{array}{c}
q' \\
q'' \\
\end{array}
\right), y = y, \end{eqnarray*} the equation of motion becomes \begin{eqnarray*} \left\{
\begin{array}{lll}
\dot{q'} = K_*^T \omega(y), \\
\dot{q''} = 0,\\ \dot{y} = 0,
\end{array} \right. \end{eqnarray*} where $K_0$ and $K_*$ are as above. Then each $(y, q'')$ is a relative equilibrium of the unperturbed system. \end{remark} \begin{remark} Here a map defined on a Cantor set is said to be smooth in Whitney's sense if it has a smooth Whitney extension. For details, see $\emph{\cite{Poschel}}$. \end{remark}
Thus, in quite general situations with high degeneracy, we prove the conjecture on the persistence of resonant invariant tori, because in assumption (\ref{condition1}) the order of degeneracy of the perturbation, $\kappa-1$, might be arbitrary.
The classical Birkhoff normal form theory provides a formal integrability for harmonic oscillators with perturbation, but it does not apply to our persistence problem for resonant invariant tori, mainly due to the nonlinearity of the unperturbed system and the degeneracy of $[P_1(\cdot,y,u,v,0)]$. To overcome these difficulties, besides employing Treshch\"{e}v's reduction, we propose a nonlinear normal form program by introducing a nonlinear KAM iteration, which is used to detect the high degeneracy and to keep the critical points related to certain quasiperiodicity of the perturbation. In particular, it will be seen that our KAM iteration is better suited to problems with worse normal forms. Hence, this approach provides a thorough way to study the persistence of resonant invariant tori under highly degenerate perturbations. To be specific, we prove that, for (\ref{005}), Hamiltonian systems with high order degenerate perturbation, there are $2^{m_0}$ lower-dimensional invariant tori born from resonant invariant tori.
The paper is organized as follows. In Section \ref{normal form}, we introduce an abstract Hamiltonian system and show the corresponding persistence of invariant tori. With the results on the abstract Hamiltonian system we finish the proof of the main theorem in Section \ref{074}.
\section{Abstract Hamiltonian systems}\label{normal form}\setcounter{equation}{0}
Throughout the paper, unless otherwise specified, we shall use the same symbol $|\cdot|$ to denote an equivalent (finite dimensional) vector norm and its induced matrix norm, the absolute value of functions, the measure of sets, etc., and use $|\cdot|_D$ to denote the supremum norm of functions on a domain $D$. Also, for any two complex column vectors $\xi$, $\zeta$ of the same dimension, $\langle \xi, \zeta \rangle$ always stands for $\xi^T \zeta$, i.e., the transpose of $\xi$ times $\zeta$. For the sake of brevity, we shall not specify smoothness orders for functions whose orders of smoothness are obvious from the derivatives taken. All constants below are positive and independent of the iteration process. Moreover, all Hamiltonian functions in the sequel are associated to the standard symplectic structure.
To prove Theorem \ref{dingli11}, consider the following real analytic Hamiltonian system \begin{eqnarray}\label{model33}
H(x,y,z) =N(y,z, \lambda) + P(x,y,z, \lambda, \varepsilon), \end{eqnarray} defined on \begin{center}
$D(r,s)=\{(x,y,z):|Im~x|<r,~ |y|<s, ~|z|<s\}$, \end{center} with \begin{eqnarray} \nonumber N (y,z,\lambda)&=& \langle \omega(\lambda), y\rangle + \frac{\delta}{2}\langle \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right) , M(\lambda) \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right) \rangle + \delta h (y,z,\lambda,\varepsilon),\\
\label{080} |\partial_\lambda^l P| &\leq& \delta \gamma^b s^2 \mu, ~~~|l|\leq l_0, \end{eqnarray}
where $x \in T^m$, $y \in R^m$, $z=(u, v) \in R ^{2m_0}$, $b=(2l_0^2 +3)(m+2m_0)^2$, $\lambda \in \Lambda$, $M$, a symmetric matrix, depends smoothly on $\lambda$, $h = O(|\left(
\begin{array}{c}
y \\
z \\
\end{array}
\right) |^3 )$ is smooth in $\varepsilon$. In the above, all $\lambda-$dependence is of class $C^{l_0}$ for some $l_0 \geq d$. Rewrite \begin{eqnarray*} M = \left(
\begin{array}{cc}
M_{11} & M_{12} \\
M_{21} & M_{22} \\
\end{array}
\right), \end{eqnarray*} where $M_{11},$ $M_{12}$, $M_{21},$ $M_{22}$ are $m\times m,$ $m\times 2m_0$, $2m_0\times m,$ $2m_0\times 2 m_0$ matrices, respectively.
\subsection{A General Theorem}
To show the persistence of invariant tori for Hamiltonian (\ref{model33}), assume:
\begin{itemize}
\item[{\bf (A1)}]$rank~\{\frac{\partial^\alpha \omega}{\partial \lambda^ \alpha}: 0\leq|\alpha| \leq m -1 \} = m $, for all $\lambda \in {\Lambda}$. \item[{\bf (A2)}]For given $n$, $0< n \leq m,$ $rank~ M = n+ 2m_0$ and $rank (M_{21}, M_{22}) = 2m_0,$ where $\lambda\in \Lambda.$ \item[{\bf (A3)}]For given $n$, $0< n \leq m,$ \begin{center}
$ rank \left(
\begin{array}{cc}
M(\lambda) & \bar{\omega}_1(\lambda) \\
{\bar{\omega}_1 }^T(\lambda) & 0 \\
\end{array}
\right)
= n+ 2m_0 +1,$ \end{center}
where $\bar{\omega}_1 = \left(
\begin{array}{c}
\omega \\
0 \\
\end{array}
\right)\in R^{m+ 2m_0} $, $\omega\in R^{m}$. \end{itemize}
\begin{remark}
We call $\bf{(A2)}$ and $\bf{(A3)}$ sub-isoenergetically non-degenerate conditions for the persistence of lower dimensional invariant tori. Specifically, when $n = m$ and $m_0 = 0$, they reduce to the isoenergetic non-degeneracy condition introduced by Arnold $(\emph{\cite{Arnold}})$. When $m_0 =0$, they are similar to the isoenergetic non-degeneracy condition contained in \emph{\cite{LLL,Sevryuk}}. When $M$ is a block diagonal matrix, refer to $\emph{\cite{Qian1}}$ for a similar condition. \end{remark}
We state our result for (\ref{model33}) as follows.
\begin{theorem}\label{shengluede}
Assume {\bf (A1)}. Let $\tau > d(d-1) -1$ be fixed. If $\delta$ is sufficiently small and there exists a sufficiently small $\mu>0$ such that \begin{center}\label{003}
$|\partial_\lambda^l P|_{D(r,s)\times \Lambda} \leq \delta \gamma ^{b} s^2 \mu , ~|l| \leq l_0$, \end{center}
then there exist Cantor sets $\Lambda_\gamma\subset \Lambda$ with $|\Lambda \setminus \Lambda_\gamma | = O(\gamma ^{\frac{1}{d-1}})$, and a $C^{d -1}$ Whitney smooth family of symplectic transformations \begin{center}
$ \Psi_{\lambda} : D(\frac{r}{2}, \frac{s}{2})\rightarrow D(r,s), ~~~\lambda \in \Lambda_\gamma$, \end{center} which is real analytic in $x$ and close to the identity, such that \begin{eqnarray*}
H \circ \Psi _\lambda = e_*+{\langle \omega_{*}(\lambda) , y\rangle}+ \frac{\delta}{2}\langle \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right)
, M_{*}(\lambda) \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right)
\rangle+ \delta h_* (y,z,\lambda,\varepsilon) + P_{*}, \end{eqnarray*} where for all $\lambda \in \Lambda _\gamma $, and $(x,y) \in D(\frac{r}{2}, \frac{s}{2})$, \begin{eqnarray*}
|\partial_{\lambda}^{l}{e_{*}} - \partial_{\lambda}^{l}{e}| &=& O (\delta \gamma ^ {d+ 6 }\mu), ~~|l| \leq d -1,\\
|\partial_{\lambda}^{l}{\omega_{*}} - \partial_{\lambda}^{l}{\omega}| &=& O (\delta \gamma ^ {d+ 6 } \mu),~~|l| \leq d -1,\\
|\partial_{\lambda}^{l}{{h}_{*}}(y,z) - \partial_{\lambda}^{l}{{h}}(y,z)| &=& O ( \gamma ^ {d+ 6 }\mu),~~|l| \leq d -1,\\
{\partial_{\lambda}^{l} \partial_{y}^{i} \partial_{z}^{j} P_{*}}|_{y= 0, z= 0} &=& 0, ~~|l| \leq d -1,\\
h_* &=&O(|\left(
\begin{array}{c}
y \\
z \\
\end{array}
\right)
|^3). \end{eqnarray*} \begin{enumerate}
\item [\bf{(1)}]With assumption $\bf{(A2)}$, we have \begin{eqnarray*}
(\omega_{*})_{i_q} &\equiv& (\omega )_{i_q}~~for ~all ~1\leq q \leq n. \end{eqnarray*} Thus, for each $\lambda \in \Lambda$, the unperturbed torus $T_\lambda = T^m \times \{0\}\times \{0\}$ associated to the toral frequency $\omega$ persists and gives rise to an analytic, Diophantine, invariant torus with the toral frequency $\omega_{*}$, which preserves $n$ components of the unperturbed toral frequency $\omega$, $\omega_{i_1}$, $\omega_{i_2}$, $\cdots$, $\omega_{i_n}$, determined by those rows of $(M_{11}, M_{12})$ that are linearly independent. Moreover, these perturbed tori form a $C^{d -1}$ Whitney smooth family.
\item [\bf{(2)}]Assume $\bf{(A2)}$ and $\bf{(A3)}$ hold. Then
\begin{eqnarray}
\label{E_65} (\omega_{*})_{i_q} &\equiv& t (\omega )_{i_q}~~for ~all ~1\leq q \leq n. \end{eqnarray} Thus, for each $\lambda \in \Lambda$, the unperturbed torus $T_\lambda = T^m \times \{0\}\times \{0\}$ associated to the toral frequency $\omega$ persists and gives rise to an analytic, Diophantine, invariant torus with the toral frequency $\omega_{*}$ which satisfies \emph{(\ref{E_65})} and is determined by those rows of $(M_{11}, M_{12})$ that are linearly independent. Moreover, these perturbed tori form a $C^{d -1}$ Whitney smooth family. \end{enumerate}
\end{theorem}
The proof of Theorem \ref{shengluede} will proceed by a quasilinear KAM iteration process, which consists of infinitely many KAM steps. At each step, we simplify the process of solving the homological equation. Next, we show the details of one cycle of the KAM step. Finally, in Section $\ref{example}$, we also give several examples to show the complexity resulting from the high degeneracy.
\subsection{KAM step}\label{KAM}
We first describe the $0$-th KAM step. For the sake of induction, let \begin{eqnarray*}
r_0 = r, ~~~\beta_0 = s, ~~~\gamma_0 = 4 \gamma,~~~\eta_0= \eta,~~~ \Lambda_0 = \Lambda,~~~ P_0 = P, \end{eqnarray*}
where $0 <r, s, \gamma_0, \eta_0\leq 1$, and denote \begin{eqnarray*}
c^* =\sup \{|\lambda|: \lambda\in \Lambda_0\}, \end{eqnarray*} and let \begin{eqnarray*}
M^* &=&\max\limits_{|l|\leq l_0, |j|\leq m+5, |y|\leq \beta_0, \lambda\in \Lambda_0} |\partial_\lambda^l \partial_{(y,z)}^j h_0(y,z,\lambda)|_{\mathcal{O}_0}. \end{eqnarray*} By monotonicity, we define $0<\mu_0 \leq1$, $s_0$ implicitly through the following equations: \begin{eqnarray*} \mu &=& \frac{4^{3b -1}\mu_0}{c_0 K_1^{(m+2m_0)^2\tau + (m+2m_0)^2 +1}},\\ s_0 &=& \frac{\beta_0 \gamma_0^{2b}}{4 c_0 K_1^{(m+2m_0)^2\tau + (m+2m_0)^2 +1}}, \end{eqnarray*} where \begin{eqnarray*} c_0 &=& (m+2m_0)^4 (c^*)^{(m+2m_0)^2} ({M^*}+1)^{(m+2m_0)^2},\\ K_1 &=& ([\log \frac{1}{\mu_0}] +1 )^{3 {\eta}}, \end{eqnarray*} $[\cdot]$ denotes the integer part of a real number and ${\eta}$ is a fixed positive integer such that $(1+ \sigma)^{{\eta}} > 2$ with $\sigma = \frac{1}{12}$. Obviously, $\mu_0 \rightarrow 0$ iff $\mu\rightarrow 0$, and, for any fixed $0< \epsilon<1$, \begin{eqnarray*} \mu_0 = O(\mu^{1 - \epsilon})~ as ~ \mu \rightarrow 0. \end{eqnarray*} When $\mu$ is sufficiently small, we have \begin{eqnarray*} 4c_0 K_1^{(m+2m_0)^2\tau+(m+2m_0)^2+1} >1. \end{eqnarray*} Hence \begin{eqnarray*} 0<s_0\leq\min\{\beta_0, \frac{\gamma_0^b}{4 c_0 K_1^{(m+2m_0)^2\tau+ (m+2m_0)^2 +1}}\}. \end{eqnarray*} For $j \in Z_+^n$, define \begin{eqnarray*}
a_j &=& 1- sgn {(|j|-1)} = \left\{
\begin{array}{lll}
2, & \hbox{$|j| = 0$,} \\
1, & \hbox{$|j| = 1$,}\\
0, & \hbox{$|j| \geq 2$,}
\end{array}
\right.\\
b_j &=&b(1 - sgn |j| sgn (|j|-1) sgn (|j|-2))= \left\{
\begin{array}{ll}
b, & \hbox{$|j| = 0,1,2$,} \\
0, & \hbox{$|j|\geq 3$,}
\end{array}
\right.\\
d_j &=& 1 - \lambda_0 sgn(|j|)sgn (|j|-1) sgn (|j|-2)= \left\{
\begin{array}{ll}
1, & \hbox{$|j| = 0,1,2$,} \\
1- \lambda_0, & \hbox{$|j|\geq 3$,}
\end{array}
\right. \end{eqnarray*} where $\frac{2}{13}<\lambda_0<1$ is fixed. Therefore \begin{eqnarray}\label{714}
|\partial_\lambda^l \partial_x^i \partial_{(y,z)}^j P_0|_{D(r_0, s_0) \times {\Lambda}_0} \leq \delta\gamma_0^{b_j} s_0^{a_j} \mu_0^{d_j} \end{eqnarray}
for all $(l,i,j)\in Z_+^m \times Z_+^m \times Z_+^{2m_0}$, $|l|+ |i|+|j| \leq l_0.$
Next we characterize the iteration scheme for Hamiltonian (\ref{model33}) in one KAM step, say, from the $\nu-$th KAM step to the $(\nu + 1)-$th step. For convenience, we shall omit the index for all quantities of the $\nu-$th KAM step and use $'+'$ to index all quantities of the ${(\nu+ 1)}-$th KAM step. To simplify the notation, we shall suppress the $\lambda-$dependence in most terms of this section. Now, suppose that after $\nu$ KAM steps, we have arrived at the following real analytic Hamiltonian system \begin{eqnarray}\label{N2}
H(x,y,z) &=&N(y,z) + P(x,y,z,\varepsilon),\\ \nonumber N(y,z) &=& \langle \omega(\lambda), y\rangle + \frac{\delta}{2}\langle \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right) , M(\lambda) \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right) \rangle + \delta h(y,z,\lambda, \varepsilon),\\
\nonumber |\partial_\lambda^l \partial_x^i \partial_{(y,z)}^j P| &\leq&\delta \gamma^{b_j} s^{a_j} \mu^{d_j}, ~~~~~~~|l|+ |i|+|j| \leq l_0. \end{eqnarray} By considering both averaging and translation, we shall find a symplectic transformation $\Phi _ {+}$ which, on a smaller phase domain $D(r_+, s_+)$ and a smaller parameter domain $\Lambda_+$, transforms Hamiltonian (\ref{N2}) into the following form:
\begin{center}
$H_{+} = H{\circ} {\Phi _+}= {N_+} +{P_+}$,
\end{center} where on $D(r_+, s_+)\times \Lambda_+,$ $N_+$ and $P_+$ enjoy similar properties as $N$ and $P$, respectively. Define \begin{eqnarray*} s_+ &=& \frac{1}{8} \alpha s,\\ \mu_+ &=& (64 c_0)^{\frac{1}{1 - \lambda_0}} \mu^{1 + \sigma},\\ r_+ &=& r - \frac{r_0}{2^{\nu+1}},\\ \gamma_+ &=& \gamma - \frac{\gamma_0}{2^{\nu+1}},\\ K_+ &=& ([\log\frac{1}{\mu}]+ 1)^{3{\eta}},\\ D_\alpha &=& D(r_++ \frac{7}{8}(r - r_+), \alpha s),\\ \hat{D}(\lambda) &=& D(r_++ \frac{7}{8}(r - r_+), \lambda),\\
D(\lambda) &=& \{y\in C^n:|y|< \lambda\},\\ D_{\frac{i}{8} \alpha} &=& D(r_+ + \frac{i -1 }{8}(r - r_+), \frac{i}{8}\alpha s), ~~i = 1,2,\cdots, 8,\\
\Gamma(r- r_+) &=& \sum_{0<|k|\leq K_+} |k|^\chi e^{-|k|\frac{r -r_+}{8}}, \end{eqnarray*} where $\alpha = \mu^{\frac{1}{3}}$, $\lambda > 0$, $\chi = (b + 2)\tau +5 l_0 + 10$, and $c_0$ is the maximum of all the constants $c$ appearing in this paper; it depends on $r_0$ and $\beta_0$.
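As a numerical aside (not part of the proof), the defining relations for $s_+$ and $\mu_+$ can be iterated directly; the values of $\sigma$, $\lambda_0$, $c_0$ and $\mu_0$ below are purely illustrative, chosen only so that $\mu$ decreases, and are not the parameters of the paper.

```python
# Illustrative sketch: iterate s_+ = (1/8) * alpha * s with alpha = mu^(1/3),
# and mu_+ = (64 c_0)^(1/(1 - lambda_0)) * mu^(1 + sigma).
# All numerical values here are hypothetical.
sigma, lambda_0, c0 = 1.0, 0.5, 1.0
mu, s = 1e-4, 1.0
mus, ss = [mu], [s]
for _ in range(5):
    alpha = mu ** (1.0 / 3.0)
    s = alpha * s / 8.0                                        # s_+
    mu = (64.0 * c0) ** (1.0 / (1.0 - lambda_0)) * mu ** (1.0 + sigma)  # mu_+
    mus.append(mu)
    ss.append(s)
# mu_nu decays super-linearly, and s_nu is dragged to zero with it.
```

This super-linear decay of $\mu_\nu$ is what later makes the Iteration Lemma converge.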
\subsubsection{Truncation of the perturbation} Consider the Taylor-Fourier series of $P$: \begin{eqnarray*} P = \sum_{\imath\in Z_+^m,\jmath\in Z_+^{2m_0}, k \in Z^m} p_{k\imath\jmath} y^{\imath}z^{\jmath} e^{\sqrt{-1}\langle k, x\rangle}, \end{eqnarray*} and let $R$ be the truncation of $P$ with the following form: \begin{eqnarray*}
R = \sum\limits_{|k| \leq K_+} (p_{k00}+ \langle p_{k10},y\rangle+\langle p_{k01},z\rangle + \langle y, p_{k20}y\rangle+ \langle y, p_{k11}z\rangle + \langle z, p_{k02}z\rangle) e^{\sqrt{-1} \langle k, x\rangle}, \end{eqnarray*} where $K_+$ is defined as above.
\begin{lemma} Assume that \begin{itemize} \item[\bf{(H1)}] $K_+ \geq \frac{8(m + l_0)}{r - r_+}$, \item[\bf{(H2)}] $\int_{K_+}^\infty \lambda^{m + l_0} e^{-\lambda \frac{r - r_+}{8}} d \lambda \leq \mu.$ \end{itemize}
Then there is a constant $c$ such that for all $|l|+|i|+|j| \leq l_0$, $\lambda \in \Lambda$, \begin{eqnarray*}
|\partial_\lambda^l \partial_x^i \partial_{(y,z)}^j (P- R)|_{D_\alpha \times {\Lambda}} \leq c \delta\gamma^{b_j} s^{a_j} \mu^{d_j+1}. \end{eqnarray*} \end{lemma}
\begin{proof} The proof is standard; for details see, for example, $\bf{Lemma ~3.1}$ of \cite{Li}.
\end{proof}
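Condition $\bf{(H2)}$ is what makes the truncation error small: the tail integral decays rapidly as the truncation order grows. The following numerical sketch (with hypothetical values for the exponent $m + l_0$ and for $r - r_+$) illustrates this decay.

```python
import math

def tail_integral(K, p, d, upper, n=100000):
    # crude midpoint Riemann sum for  int_K^upper  x^p * exp(-x d / 8) dx,
    # a stand-in for the tail integral in (H2); p and d are illustrative
    h = (upper - K) / n
    return sum((K + (i + 0.5) * h) ** p
               * math.exp(-(K + (i + 0.5) * h) * d / 8.0) * h
               for i in range(n))

# The tail shrinks rapidly as K_+ grows, so (H2) holds once K_+ is large:
t1 = tail_integral(K=100.0, p=3, d=1.0, upper=2000.0)
t2 = tail_integral(K=200.0, p=3, d=1.0, upper=2000.0)
```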
\subsubsection{Homological equations} We want to average out all coefficients of $R$ by constructing a symplectic transformation as the time-1 map $\phi_F^1$ of the flow generated by a Hamiltonian $F$ with the following form: \begin{eqnarray}
\nonumber F &=& \sum\limits_{0< |k| \leq K_+} (f_{k00}+ \langle f_{k10},y\rangle+\langle f_{k01},z\rangle + \langle y, f_{k20}y\rangle+ \langle y, f_{k11}z\rangle\\ \label{709} &~&~~~~~~~~~ + \langle z, f_{k02}z\rangle) e^{\sqrt{-1} \langle k, x\rangle}, \end{eqnarray}
where $f_{kij}$, $0\leq |i|+|j|\leq 2$, are scalars, vectors or matrices of the obvious dimensions. Under the time-1 map $\phi_F^1$, Hamiltonian (\ref{N2}) becomes \begin{eqnarray} \nonumber H\circ \phi_F^1 &=& (N + R )\circ \phi_F^1 + (P - R)\circ \phi_F^1\\ \label{N6} &=&N + R+ \{N, F\} +\int_0 ^1 \{R_t,F\}\circ \phi_F^t dt + (P - R )\circ \phi_F^1,~~~~ \end{eqnarray} where \begin{eqnarray} \nonumber R_t &=& (1-t) \{N, F\} + R. \end{eqnarray} Let \begin{eqnarray}\label{706} \{N, F\} + R - [R] - R'= 0, \end{eqnarray} where \begin{eqnarray*} [R] &=& \int_{T^n} R(x, \cdot) dx,\\ R' &=& \partial_z \hat{h} J \partial_z F,\\ \hat{h} &=& \frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right), M \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right) \rangle + \delta h(y,z, \lambda, \varepsilon). \end{eqnarray*} Then Hamiltonian $(\ref{N6})$ becomes \begin{eqnarray}\label{N4} \bar{H}_+ = \bar{N}_+(y,z) + \bar{P}_+(x,y,z), \end{eqnarray} where \begin{eqnarray*} \bar{N}_+ &=& N + [R],\\ \bar{P}_+ &=& {R'} + \int_0 ^1 \{R_t,F\}\circ \phi_F^t dt + (P - R )\circ \phi_F^1. \end{eqnarray*}
Consider the following symplectic translation: \begin{eqnarray}\label{E_21} \phi: x \rightarrow x, \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right)
\rightarrow \left(
\begin{array}{c}
y + y_0 \\
z+ z_0 \\
\end{array}
\right), \end{eqnarray} where $(y_0, z_0)$ is determined by \begin{eqnarray}\label{E_1} \delta\frac{ M}{2} \left(
\begin{array}{c}
y_0 \\
z_0 \\
\end{array} \right) + \delta \left(
\begin{array}{c}
\partial_y h(y_0, z_0, \lambda) \\
\partial_z h(y_0, z_0, \lambda) \\
\end{array}
\right)
= - \left(
\begin{array}{c}
p_{010} \\
p_{001} \\
\end{array}
\right). \end{eqnarray} Then Hamiltonian system $(\ref{N4})$ is changed to \begin{eqnarray*} H_+ &=& \bar{H}_+ \circ \phi\\
&=& e_+ + \langle \omega_+, y\rangle +\frac{\delta}{2} \langle\left(
\begin{array}{c}
y \\
z \\
\end{array}
\right), M_+ \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right)\rangle + \delta h_+(y,z,\lambda,\varepsilon)+ P_+, \end{eqnarray*} where \begin{eqnarray} \nonumber e_+ &=& e + \langle \omega, y_0\rangle+ \frac{\delta}{2} \langle \left(
\begin{array}{c}
y_0 \\
z_0 \\
\end{array}
\right), M \left(
\begin{array}{c}
y_0 \\
z_0 \\
\end{array}
\right) \rangle + p_{000} + \langle\left(
\begin{array}{c}
p_{010} \\
p_{001} \\
\end{array}
\right), \left(
\begin{array}{c}
y_0 \\
z_0 \\
\end{array}
\right)
\rangle\\ \nonumber&~&~~~+ \langle \left(
\begin{array}{c}
y_0 \\
z_0 \\
\end{array}
\right), \left(
\begin{array}{cc}
p_{010} & \frac{1}{2} p_{011} \\
\frac{1}{2} p_{011}^T & p_{002} \\
\end{array}
\right)
\left(
\begin{array}{c}
y_0 \\
z_0 \\
\end{array}
\right)
\rangle + \delta h(y_0, z_0,\lambda),\\ \nonumber \omega_+ &=& \omega + \frac{\delta M }{2} \left(
\begin{array}{c}
y_0 \\
z_0 \\
\end{array}
\right)+ \delta \left(
\begin{array}{c}
\partial_y h(y_0, z_0, \lambda) \\
\partial_z h(y_0, z_0, \lambda) \\
\end{array}
\right)+ \left(
\begin{array}{c}
p_{010} \\
p_{001} \\
\end{array}
\right)
,\\ \nonumber M_+&=& M +2 \left(
\begin{array}{cc}
p_{020} & \frac{1}{2} p_{011} \\
\frac{1}{2} p_{011}^T & p_{002}\\
\end{array}
\right)+ \frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right), \partial_{(y,z)}^2 h(y_0, z_0, \lambda) \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right)
\rangle,\\ \label{N5} P_+ &=& \bar{P}_+ + \delta \langle \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right), \left(
\begin{array}{cc}
p_{010} & \frac{1}{2} p_{011} \\
\frac{1}{2} p_{011}^T & p_{002} \\
\end{array}
\right)
\left(
\begin{array}{c}
y_0 \\
z_0 \\
\end{array}
\right)
\rangle,\\ \nonumber h_+&=& h(y,z,\lambda)- h(y_0, z_0, \lambda) - \langle \left(
\begin{array}{c}
\partial_y h(y_0, z_0, \lambda) \\
\partial_z h(y_0, z_0, \lambda) \\
\end{array}
\right), \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right) \rangle\\ &~&~~ - \frac{1}{2} \langle \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right), \partial_{(y,z)}^2h (y_0, z_0, \lambda) \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right) \rangle. \end{eqnarray}
\subsubsection{Estimate on the transformation} Substituting the Taylor-Fourier series of $F$ and $R$ into $(\ref{706})$ yields: \begin{eqnarray}
\label{707}\sqrt{-1} \langle k, \omega+ \Delta\rangle f_{kij} &=& p_{kij}, ~~0\leq |i|+|j|\leq2, \end{eqnarray} which, in fact, are solvable on the following Diophantine set: \begin{eqnarray*}
\mathcal{O}_+ = \{\lambda \in \mathcal{O}: |\langle k, \omega\rangle| > \frac{ \gamma}{|k|^\tau} ~for~all~0< |k|\leq K_+\}, \end{eqnarray*} where $\Delta = \partial_y h = \delta (M_{11} y + M_{12} z)$.
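For a concrete frequency vector, membership in a Diophantine set of this type can be checked by enumerating the finitely many $k$ with $0 < |k| \leq K_+$. The following sketch (with hypothetical $\omega$, $\gamma$, $\tau$ and $K_+$) merely illustrates the definition of $\mathcal{O}_+$.

```python
import itertools

def diophantine_ok(omega, gamma, tau, K):
    """Check |<k, omega>| > gamma / |k|^tau for all 0 < |k| <= K, k in Z^m,
    with |k| = |k_1| + ... + |k_m|, as in the definition of O_+."""
    m = len(omega)
    for k in itertools.product(range(-K, K + 1), repeat=m):
        nk = sum(abs(ki) for ki in k)
        if 0 < nk <= K:
            if abs(sum(ki * wi for ki, wi in zip(k, omega))) <= gamma / nk ** tau:
                return False
    return True

# The badly approximable vector (1, golden ratio) passes for these parameters,
# while a rationally dependent vector fails (take k = (1, -1)):
good = diophantine_ok((1.0, (1 + 5 ** 0.5) / 2), gamma=0.1, tau=2, K=10)
bad = diophantine_ok((1.0, 1.0), gamma=0.1, tau=2, K=10)
```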
\begin{lemma} Assume that \begin{itemize}
\item[\bf{(H3)}] $\max\limits_{|l| \leq l_0, |j| \leq m+5} |\partial_\lambda^l \partial_{(y,z)}^j h(y,z,\lambda) - \partial_\lambda^l \partial_{(y,z)}^j h_0(y,z,\lambda)|_{\mathcal{O}}\leq \mu_0^{\frac{1}{2}}.$ \end{itemize}
Then there is a constant c such that for all $|l| \leq l_0$, \begin{eqnarray}
\label{E_18}|\partial_\lambda^l e_+ - \partial_\lambda^l e|_{\mathcal{O}} &\leq& c \gamma^b s \mu,\\
\label{E_19}|\partial_\lambda^l M_+ - \partial_\lambda^l M|_{\mathcal{O}} &\leq& c \gamma^b \mu,\\
\label{E_21}|\partial_\lambda^l \omega_+ - \partial_\lambda^l \omega|_{\mathcal{O}} &\leq& c\delta s (\gamma^b \mu+ s),\\
\label{E_20}|\left(
\begin{array}{c}
\partial_\lambda^l y_0 \\
\partial_\lambda^l z_0 \\
\end{array}
\right)
|_{\mathcal{O}} &\leq& c \gamma^b s \mu. \end{eqnarray} \end{lemma} \begin{proof} Obviously, \begin{eqnarray*}
|\partial _ \lambda^l p_{000}|_{\mathcal{O}} &\leq& c \delta\gamma^b s^2 \mu,\\
|\partial _ \lambda^l p_{010}|_{\mathcal{O}} + |\partial _ \lambda^l p_{001}|_{\mathcal{O}}&\leq& c \delta \gamma^b s \mu,\\
|\left(
\begin{array}{cc}
\partial_{\lambda}^l p_{020} & \partial_{\lambda}^l p_{011} \\
\partial_{\lambda}^l p_{011}^T & \partial_{\lambda}^l p_{002} \\
\end{array}
\right)
|_{\mathcal{O}} &\leq& c\delta \gamma^b \mu. \end{eqnarray*} Denote \begin{eqnarray*} B= \frac{M}{2}+ \left(
\begin{array}{cc}
\int_0^1 \partial_y^2 h(\theta y,z,\lambda) d\theta & \int_0^1 \partial_y\partial_z h( y,\theta z,\lambda) d\theta \\
\int_0^1\partial_z \partial_y h(\theta y,z,\lambda) d\theta & \int_0^1 \partial_z^2 h( y,\theta z,\lambda) d\theta \\
\end{array}
\right). \end{eqnarray*} Then $(\ref{E_1})$ becomes \begin{eqnarray}\label{E_17} \delta B \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right) = - \left(
\begin{array}{c}
p_{010} \\
p_{001} \\
\end{array}
\right). \end{eqnarray} For given matrix $A =(a_{ij})_{n\times n}$, let \begin{eqnarray*}
||A||_1 = \frac{1}{n}\sum\limits_{i,j=1}^n |a_{ij}(\lambda)|, \end{eqnarray*}
where $|a_{ij}(\lambda)|$ is the absolute value of $a_{ij}(\lambda)$, $\lambda\in \Lambda$. According to assumption $\bf{(H3)}$ and the definition of $M^*$, we have \begin{eqnarray*}
||M - M_0||_1 &\leq& \mu_0^{\frac{1}{2}},\\
||\partial_{(y,z)}^2 h||_1 &\leq& (M^*+1 )s, \end{eqnarray*} respectively. Without loss of generality, we let $\mu_0$ and $s_0$ be small enough such that \begin{eqnarray*} s_0^{\frac{1}{2}} M_* (M^* +1 ) &\leq&\frac{1}{4},\\ \mu_0 M_* &\leq&\frac{1}{4}. \end{eqnarray*} Then \begin{eqnarray*}
||M_0 - B||_1 &\leq& ||M- M_0||_1 + ||B - M||_1\\ &\leq& \mu_0^{\frac{1}{2}} + (M^*+1 )s^2\\ &\leq& \frac{1}{2M_*}. \end{eqnarray*} Since $M_0$ is nonsingular, it follows that $B$ is nonsingular and \begin{eqnarray*}
||B^{-1} ||_1 &=& ||M_0^{-1} (I - (M_0 - B) M_0^{-1})^{-1}||_1\\
&\leq& ||M_0^{-1}||_1 \, ||(I - (M_0 - B) M_0^{-1})^{-1}||_1\\
&\leq& \frac{||M_0^{-1}||_1}{1 - ||(M_0 - B) M_0^{-1}||_1}\\
&\leq& \frac{||M_0^{-1}||_1}{1 - ||M_0 - B||_1 || M_0^{-1}||_1}\\ &\leq& \frac{M_*}{1- \frac{1}{2M_*}M_*}\\ &=&2M_*. \end{eqnarray*} Here, we use the fact that \begin{eqnarray*}
||(I - A)^{-1}||_1 \leq \frac{1}{1 - ||A||_1}, \end{eqnarray*}
which is obvious if $||I||_1 = 1$ and $||A||_1 <1.$
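This bound can be checked numerically for a concrete $2\times 2$ example with the norm $||\cdot||_1$ defined above; the matrix $A$ below is arbitrary, chosen only so that $||A||_1 < 1$.

```python
def norm1(A):
    # ||A||_1 = (1/n) * sum of the absolute values of the entries, as above
    n = len(A)
    return sum(abs(a) for row in A for a in row) / n

def inv2(A):
    # explicit inverse of a 2x2 matrix
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[0.2, 0.1], [0.0, 0.3]]          # an arbitrary A with ||A||_1 < 1
lhs = norm1(inv2([[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]))
rhs = 1.0 / (1.0 - norm1(A))          # the Neumann-series bound
```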
Therefore,
\begin{eqnarray*}
|\left(
\begin{array}{c}
y \\
z \\
\end{array}
\right)| &=& |\frac{1}{\delta} B^{-1} \left(
\begin{array}{c}
P_{010} \\
P_{001} \\
\end{array}
\right)
|\\
&\leq& \frac{n}{\delta} ||B^{-1}||_1 |\left(
\begin{array}{c}
P_{010} \\
P_{001} \\
\end{array}
\right)
|\\
&\leq& 2M_*\gamma^{b_j} s \mu.
\end{eqnarray*}
Differentiating both sides of $(\ref{E_17})$ with respect to $\lambda$ gives
\begin{eqnarray*}
\partial_{(y,z)} B \left(
\begin{array}{c}
\partial_\lambda y \\
\partial_\lambda z \\
\end{array}
\right) \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right)
+ \partial_\lambda B \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right) + B \left(
\begin{array}{c}
\partial_\lambda y \\
\partial_\lambda z \\
\end{array}
\right)
= - \left(
\begin{array}{c}
\partial_\lambda P_{010} \\
\partial_\lambda P_{001} \\
\end{array}
\right).
\end{eqnarray*}
Then
\begin{eqnarray*}
|\left(
\begin{array}{c}
\partial_\lambda y \\
\partial_\lambda z \\
\end{array}
\right)| &=& |B^{-1} ( \left(
\begin{array}{c}
\partial_\lambda P_{010} \\
\partial_\lambda P_{001} \\
\end{array}
\right) + \partial_{(y,z)} B \left(
\begin{array}{c}
\partial_\lambda y \\
\partial_\lambda z \\
\end{array}
\right)\left(
\begin{array}{c}
y \\
z \\
\end{array}
\right)
+ \partial_\lambda B \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right))|\\
&\leq& 2M_* \gamma^{b_j} s \mu + 4M_*^2 (M^* +1) \gamma^{b_j} s \mu |\left(
\begin{array}{c}
\partial_\lambda y \\
\partial_\lambda z \\
\end{array}
\right)| + 4M_*^2 (M^* +1) \gamma^{b_j} s\mu\\
&\leq& 4M_*^2 (M^* +1) \gamma^{b_j} s\mu.
\end{eqnarray*}
Inductively, we obtain $(\ref{E_20})$. Then $(\ref{E_18})$, $(\ref{E_19})$ and $(\ref{E_21})$ follow immediately from the definitions of $e_+$, $\omega_+$ and $M_+$.
\end{proof}
\begin{lemma} Assume that \begin{itemize}
\item[\bf{(H4)}]$s K_+ ^{(|l|+1)\tau +|l|+ |i|}= o(\gamma)$. \end{itemize}
Then the following hold for all $0< |k| \leq K_+$.
\begin{itemize} \item[\bf{(1)}] On $D(s)\times {\Lambda}_+$, \begin{eqnarray*}
|\partial_\lambda^{l} f_{kij}| &\leq& c \delta|k|^{(|l|+1)\tau + |l| } s^{2-i-j}\mu e ^{- |k|r}, ~0 \leq |i|+|j|\leq 2; \end{eqnarray*} \item[\bf{(2)}] On $\hat{D}(s) \times {\Lambda}_+$, \begin{eqnarray*}
|\partial_\lambda^l \partial_x^i \partial_{(y,z)}^j F| \leq c\delta s^{a_j} \mu^{d_j} \Gamma(r -r_+),~~ |l|+ |i|+ |j| \leq l_0 +2. \end{eqnarray*} \end{itemize} \end{lemma}
\begin{proof}
For any $\lambda \in \Lambda_+$, $0<|k|\leq K_+$, with assumption $\bf{(H4)}$ we have \begin{eqnarray*}
| L_{k0}| &=& |\sqrt{-1}\langle k, \omega\rangle + \sqrt{-1}\langle k, \Delta\rangle|\\
&\geq& \frac{\gamma}{|k|^\tau} - c s \delta K_+\\
& \geq& \frac{\gamma}{2 |k|^\tau}, \end{eqnarray*} and \begin{eqnarray*}
|\partial_\lambda^l L_{k0}|\leq c |k|. \end{eqnarray*} Applying the above and the following inequalities \begin{eqnarray*}
|\partial_\lambda^l L_{k0}^{-1}| \leq |L_{k0}^{-1}| \sum_{|l'| = 1}^{|l|} \left(
\begin{array}{c}
l \\
l'\\
\end{array}
\right)
|\partial_\lambda^{l - l'} L_{k0}^{-1}| |\partial_\lambda^{l'} L_{k0}|, \end{eqnarray*} inductively, we deduce that \begin{eqnarray}\label{N3}
|\partial_\lambda^l L_{k0}^{-1}| \leq c |k|^{|l|} |L_{k0}^{-1}|^{|l|+1} \leq \frac{|k|^{(|l|+1)\tau + |l|}}{\gamma^{|l|+1}}. \end{eqnarray} It now follows from $(\ref{707})$, $(\ref{N3})$ and Cauchy's estimate that \begin{eqnarray*}
|\partial^{l}_{\lambda} f_{k00}| &\leq& \delta |\partial_\lambda^l (L_{k0}^{-1} p_{k00})|\\
&\leq&\frac{\delta |k|^{(|l|+1)\tau+|l|}}{\gamma^{|l|+1}} \gamma^b s^{2} \mu e^{-|k| r}\\
&\leq& c\delta s^{2} \mu |k|^{(|l|+1)\tau+|l| } e^{- |k|r}. \end{eqnarray*} Similarly, \begin{eqnarray*}
|\partial_\lambda^{l} f_{k10}| &\leq& \delta |\partial_\lambda^l (L_{k0}^{-1} p_{k 10})|\\
&\leq& c \delta s \mu |k|^{(|l|+1)\tau+|l| } e^{- |k|r},\\
|\partial_\lambda^{l} f_{k01}| &\leq& \delta |\partial_\lambda^l (L_{k0}^{-1} p_{k01})|\\
&\leq& c \delta s \mu |k|^{(|l|+1)\tau+|l| } e^{- |k|r},\\
|\partial_\lambda^{l} f_{k11}| &\leq& \delta |\partial_\lambda^l (L_{k0}^{-1} p_{k 11})|\\
&\leq& c \delta \mu |k|^{(|l|+1)\tau+|l| } e^{- |k|r},\\
|\partial_\lambda^{l} f_{k02}| &\leq& \delta |\partial_\lambda^l (L_{k0}^{-1} p_{k 02})|\\
&\leq& c\delta \mu |k|^{(|l|+1)\tau+|l| } e^{- |k|r},\\
|\partial_\lambda^{l} f_{k20}| &\leq& \delta |\partial_\lambda^l (L_{k0}^{-1} p_{k20})|\\
&\leq& c \delta \mu |k|^{(|l|+1)\tau+|l| } e^{- |k|r}. \end{eqnarray*} This completes the proof of part $\bf{(1)}$.
For part $\bf{(2)}$, by part $\bf{(1)}$ and direct differentiation of $(\ref{709})$, we have, on $\hat{D}(s) \times {{\Lambda}}_+$, \begin{eqnarray*}
|\partial_\lambda^l\partial_x^i \partial_{(y,z)}^j F|&\leq &\sum\limits_{ 0<|k|\leq K_+} |k|^{|i|} (|\partial_\lambda^l f_{k00}|+ |\partial_\lambda^l f_{k10}|s^{1-sgn|j|}+ |\partial_\lambda^l f_{k01}|s^{1-sgn|j|}\\
&~&~~+|\partial_\lambda^l f_{k20}|s^{1-sgn(|j|-1)}+|\partial_\lambda^l f_{k02}|s^{1-sgn(|j|-1)}\\
&~&~~+|\partial_\lambda^l f_{k11}|s^{1-sgn(|j|-1)}) e^{-|k|(r_+ + \frac{7}{8} (r - r_+))}\\
&\leq& c\delta s^{a_j} \mu^{d_j} \sum\limits_{0< |k|\leq K_+} |k|^{\chi} e^{- \frac{|k|( r - r_+)}{8}}\\ &=& c \delta s^{a_j} \mu^{d_j} \Gamma(r - r_+). \end{eqnarray*} \end{proof}
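Since its terms decay exponentially, $\Gamma(r - r_+)$ is bounded uniformly in $K_+$. The following numerical sketch (for $m = 1$, with a small illustrative exponent in place of $\chi$) shows that enlarging $K_+$ changes the sum only by an exponentially small tail.

```python
import math

def gamma_sum(d, K, chi):
    # Gamma(d) = sum over 0 < |k| <= K, k in Z, of |k|^chi * exp(-|k| d / 8);
    # for m = 1 there are exactly two k with |k| = j
    return sum(2.0 * j ** chi * math.exp(-j * d / 8.0) for j in range(1, K + 1))

# chi = 3 here is illustrative, not the paper's (b+2)tau + 5 l_0 + 10:
g_small = gamma_sum(d=0.5, K=1000, chi=3)
g_large = gamma_sum(d=0.5, K=5000, chi=3)
```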
As in $\bf{Lemma~ 3.6}$ of \cite{Li}, $F$ can be smoothly extended to a function of H\"{o}lder class $C^{l_0 + \sigma_0 + 1, l_0 -1 + \sigma_0} (\hat{D}(\beta_0) \times \tilde{\Lambda}_0)$, where $0<\sigma_0< 1$ is fixed. Moreover, there is a constant $c$ such that \begin{eqnarray*}
|F|_{C^{l_0 + \sigma_0 + 1, l_0 -1 + \sigma_0} (\hat{D}(\beta_0) \times \tilde{\Lambda}_0)} \leq c\delta s^{a_j} \mu^{d_j} \Gamma(r - r_+). \end{eqnarray*}
\begin{lemma} Assume \begin{itemize} \item[\bf{(H5)}] $c \mu^{\sigma} \Gamma( r- r_+) < \frac{1}{8} (r -r_+),$ \item[\bf{(H6)}] $c \mu^{\sigma} \Gamma ( r- r_+) < \frac{1}{8}\alpha.$ \end{itemize} Then the following hold$:$ \begin{itemize} \item[\bf{1)}] For all $0\leq t\leq1$, \begin{eqnarray}\label{712} \phi_F^t&:& D_{\frac{1}{4}\alpha} \rightarrow D_{\frac{1}{2}\alpha},\\ \phi&:& D_{\frac{1}{8}\alpha} \rightarrow D_{\frac{1}{4}\alpha} \end{eqnarray} are well defined, real analytic and depend smoothly on $\lambda \in {\Lambda}_+$$;$
\item[\bf{2)}]There is a constant $c$ such that for all $0\leq t\leq 1$, $|i|+|j|+|l|\leq l_0$, \begin{eqnarray*}
|\partial_\lambda^l\partial_x^i \partial_{(y,z)}^j( \phi_F^t\circ \phi - id )|_{D_{\frac{1}{4} \alpha}\times {\Lambda}_+} \leq \left\{
\begin{array}{ll}
c s \mu^{d_j} \Gamma(r - r_+), & \hbox{$|i| + |j| =0, |l|\geq 1$$;$} \\
c \mu^{d_j} \Gamma(r - r_+), & \hbox{$ 2\leq|l|+|i|+|j|\leq l_0+2$$;$} \\
c, & \hbox{otherwise.}
\end{array}
\right. \end{eqnarray*} \end{itemize} \end{lemma} \begin{proof}
This lemma follows from $\bf{Lemma~3.7}$ of \cite{Li}. We omit details.
\end{proof}
\subsubsection{New perturbation}
Here we will estimate the new perturbation $P_+$ on the domain $D_+ \times \Lambda_+$, where $D_+ = D_{\frac{\alpha}{8}}$.
\begin{lemma} Assume
\begin{itemize} \item[\bf{(H7)}] $\mu^{\sigma} \Gamma^3 (r - r_+) \leq \frac{\gamma_+^{b}}{\gamma^{b}}$. \end{itemize} Then on $D_+ \times {\Lambda}_+$, \begin{eqnarray*}
|\partial_ \lambda^ l \partial_x^i \partial_{(y,z)}^j P_+|\leq c\delta \gamma_+^{b_j} s_+^{a_j} \mu_+^{d_j}. \end{eqnarray*} \end{lemma}
\begin{proof}
Denote $\partial^{l,i,j} = \partial_\lambda^l \partial_x^i\partial_y^j$ for $|l|+ |i|+|j| \leq l_0.$ With the same argument as $\bf{Lemma~3.9}$ given in \cite{Li}, we have \begin{eqnarray*}
|\partial^{l,i,j}( \int_0^1 \{R_t, F\} \circ \phi_F^t dt \circ \phi)|_{D_{\frac{\alpha}{4}} \times {\Lambda}_+} &\leq& c\delta s^{a_j} \mu^2 \Gamma^3(r - r_+),\\
|\partial^{l,i,j} ( P- R)\circ \phi_F^1\circ\phi|_{D_{\frac{\alpha}{4}} \times {{\Lambda}}_+} &\leq& c\delta \gamma^{b}s^{a_j} \mu^{2}\Gamma(r -r_+),\\
|\partial^{l,i,j} R' \circ \phi|_{D_{\frac{\alpha}{4}} \times {\Lambda}_+} &\leq& c\delta^2 s^{a_j+1} \mu \Gamma(r -r_+),\\
|\partial^{l,i,j} \langle \left(
\begin{array}{c}
y \\
z \\
\end{array}
\right), \left(
\begin{array}{cc}
p_{020} & \frac{1}{2}p_{011} \\
\frac{1}{2}p_{011}^T & p_{002} \\
\end{array}
\right)\left(
\begin{array}{c}
y_0 \\
z_0 \\
\end{array}
\right)
\rangle|_{D_{\frac{\alpha}{8}} \times \Lambda_+} &\leq& c \delta \gamma^b s^{a_j} \mu^2. \end{eqnarray*} Further, by $(\ref{N5})$ we have \begin{eqnarray*}
|\partial_\lambda^l \partial_x^i \partial_{(y,z)}^j P_+|\leq c\delta s^{a_j} \mu^{2}\Gamma^3(r - r_+). \end{eqnarray*} Here we use the fact that $s = \mu \mu_0^{-\frac{1}{3\sigma}} s_0$ and $\delta \mu_0^{-\frac{1}{3\sigma}} s_0 = o(c).$ Using assumption $\bf{(H7)}$ yields the lemma.
\end{proof}
\subsubsection{The preservation of frequencies} As shown above, if $M(\lambda)$ is nonsingular, there is a transformation $(\ref{E_21})$ such that all the frequencies are preserved after a KAM step. However, when the system is degenerate, $(\ref{E_1})$ is not solvable, i.e. there is no transformation preserving all the frequencies after a KAM step. To establish the partial preservation of the frequencies, we first give some simple properties.
\begin{lemma}\label{Pro2} For an $n\times n$ symmetric matrix $A$ with $rank A = m$, there is an invertible matrix $T$, corresponding to a linear transformation that only exchanges some rows of $A$, such that
$$T^{-1} A T = \left(
\begin{array}{cc}
B & C \\
D & E \\
\end{array}
\right),
$$
where $B$ is a nonsingular $m \times m$ submatrix. \end{lemma}
\begin{proof} Rewrite \begin{eqnarray*} A = \left(
\begin{array}{c}
a_1 \\
a_2 \\
\vdots \\
a_n \\
\end{array}
\right) = (b_1, b_2, \cdots, b_n), \end{eqnarray*} where $a_i$ is the $i$-th row and $b_i$ the $i$-th column of $A$, $i=1, \cdots, n$. Since $A$ is symmetric, $a_i = b_i^T$ for $i = 1, \cdots, n$, so the rows and the columns of $A$ satisfy the same linear relations. Because $rank A = m$, $A$ has $m$ linearly independent rows (equivalently, columns). Hence there is an invertible matrix $T$, corresponding to a linear transformation that exchanges some rows of $A$, such that \begin{eqnarray*} T \left(
\begin{array}{c}
a_1 \\
a_2 \\
\vdots \\
a_n \\
\end{array}
\right) = \left(
\begin{array}{c}
a_1^1 \\
a_2^1 \\
\vdots \\
a_m^1 \\
\vdots \\
a_n^1 \\
\end{array}
\right), \end{eqnarray*} where $a_1^1, ~\cdots,~ a_m^1$ are linearly independent. Since $T^{-1} = T$ and $T^{-1}$ does not change the linear relation among $b_1$, $\cdots$, $b_m$, we get \begin{eqnarray*} T^{-1} A T &=& \left(
\begin{array}{c}
a_1^1 \\
a_2^1 \\
\vdots \\
a_m^1 \\
\vdots \\
a_n^1 \\
\end{array}
\right) T\\
&=& \left(
\begin{array}{cc}
B & C \\
D & E \\
\end{array}
\right), \end{eqnarray*} where $B$ is a nonsingular $m \times m$ submatrix.
\end{proof}
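A toy instance of the lemma (purely illustrative, not part of the argument): for a rank-$2$ symmetric $3\times 3$ matrix whose leading $2\times 2$ block is singular, a single symmetric exchange of rows and columns produces a nonsingular leading block.

```python
def det2(B):
    return B[0][0] * B[1][1] - B[0][1] * B[1][0]

# A symmetric rank-2 matrix whose leading 2x2 block is singular:
A = [[0, 0, 1],
     [0, 0, 1],
     [1, 1, 0]]
lead_before = det2([row[:2] for row in A[:2]])      # singular leading block

# Conjugating by the permutation that swaps indices 1 and 2 (so T^{-1} = T)
# moves a nonsingular 2x2 block to the top left, as in the lemma:
perm = [0, 2, 1]
B = [[A[perm[i]][perm[j]] for j in range(3)] for i in range(3)]
lead_after = det2([row[:2] for row in B[:2]])       # nonsingular leading block
```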
Combining assumption $\bf{(A2)}$ and $\bf{Lemma~\ref{Pro2}}$, we obtain an invertible matrix $T$, corresponding to a transformation that only exchanges rows or columns, such that \begin{eqnarray*} T^{-1} \left(
\begin{array}{cc}
M_{11} & M_{12} \\
M_{21} & M_{22} \\
\end{array}
\right) T = \left(
\begin{array}{cc}
C_{11} & C_{12} \\
C_{21} & C_{22} \\
\end{array}
\right), \end{eqnarray*} where $(C_{11}, C_{12})_{(n+2m_0)\times(m + 2m_0)}$ satisfies $rank (C_{11}, C_{12}) = n + 2m_0$ and $(C_{21}, C_{22})_{(m-n)\times(m + 2m_0)}$ is its complement. Moreover, $(C_{11})_{(n+2m_0)\times (n+2m_0)}$ is nonsingular. Denote \begin{eqnarray*} \left(
\begin{array}{c}
y_1 \\
y_2 \\
\end{array}
\right) &=& T^{-1} \left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right),\\ \left(
\begin{array}{c}
p_1 \\
p_2 \\
\end{array} \right)&=& T^{-1} \left(
\begin{array}{c}
P_{010} \\
P_{001} \\
\end{array}
\right) , \end{eqnarray*} where $p_1, ~y_1 = (y_3, z_*)^T\in R^{n+2m_0}$, $y_2,$ $p_2$ $\in R^{m-n}$, $P_{010}, ~y_* = (y_3, y_2)^T\in R^{m}$, $P_{001}, ~z_*\in R^{2m_0}$. Then (\ref{E_1}) becomes: \begin{eqnarray}\label{933} \frac{\delta}{2} \left(
\begin{array}{cc}
C_{11} & C_{12} \\
C_{21} & C_{22} \\
\end{array}
\right)
\left(
\begin{array}{c}
y_1 \\
y_2 \\
\end{array}
\right)+ \delta \left(
\begin{array}{c}
\partial_{y_1} h(y_*, z_*, \lambda) \\
\partial_{y_2} h(y_*, z_*, \lambda) \\
\end{array}
\right) = -\left(
\begin{array}{c}
p_1 \\
p_2 \\
\end{array}
\right). \end{eqnarray} Since $$ rank (C_{11}, C_{12}) =rank \left(
\begin{array}{cc}
C_{11} & C_{12} \\
C_{21} & C_{22} \\
\end{array}
\right), $$ there is an invertible matrix $T_1$ such that $$ T_1 \left(
\begin{array}{cc}
C_{11} & C_{12} \\
C_{21} & C_{22} \\
\end{array}
\right) = \left(
\begin{array}{cc}
C_{11} & C_{12} \\
0 & 0 \\
\end{array}
\right), $$ which amounts to the fact that the rows of $(C_{21}, C_{22})$ are linearly dependent on the rows of $(C_{11}, C_{12})$. Obviously, $T_1$ has the following form: $$\left(
\begin{array}{cc}
I & 0 \\
D_1 & I \\
\end{array}
\right), $$ where $D_1$ is determined by the linear relation among the rows of $(C_{21}, C_{22})$ and $(C_{11}, C_{12})$. Then \begin{eqnarray*} T_1 \left(
\begin{array}{c}
\partial_{y_1} h(y_*, z_*, \lambda) \\
\partial_{y_2} h(y_*, z_*, \lambda) \\
\end{array}
\right) &=& \left(
\begin{array}{c}
\partial_{y_1} h(y_*, z_*, \lambda) \\
D_1 \partial_{y_1} h(y_*, z_*, \lambda)+ \partial_{y_2} h (y_*, z_*, \lambda)\\
\end{array}
\right),\\ T_1 \left(
\begin{array}{c}
p_1 \\
p_2 \\
\end{array}
\right) &=& \left(
\begin{array}{c}
p_1 \\
D_1 p_1 + p_2 \\
\end{array}
\right). \end{eqnarray*} Consider the following equation \begin{eqnarray}\label{934} \frac{\delta}{2} \left(
\begin{array}{cc}
C_{11} & C_{12} \\
0 & 0 \\
\end{array}
\right)
\left(
\begin{array}{c}
y_1 \\
y_2 \\
\end{array}
\right)+ \delta \left(
\begin{array}{c}
\partial_{y_1} h(y_*, z_*, \lambda) \\
0 \\
\end{array}
\right) = -\left(
\begin{array}{c}
p_1 \\
0 \\
\end{array}
\right), \end{eqnarray} where $C_{11}$ is nonsingular. Obviously, $(y_1, y_2)^T = (y_1, 0)^T$ is a particular solution of $(\ref{934})$; i.e., under assumption $\bf{(A2)}$ there is a symplectic transformation such that part of the frequencies are preserved. \begin{remark} If $M$ is singular, some of the frequencies are preserved and the others drift. Moreover, the drift depends on $D_1 p_1 + p_2$ and $D_1 \partial_{y_1} h(y_*, z_*, \lambda)+ \partial_{y_2} h (y_*, z_*, \lambda)$, and the estimate on the drift is given by \emph{(\ref{drift})}. \end{remark}
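The action of $T_1$ can be illustrated with scalar blocks (hypothetical values): taking $D_1 = -C_{21}C_{11}^{-1}$, the matrix $T_1$ of the form above annihilates the dependent row, as in the reduction to $(\ref{934})$.

```python
# Scalar-block illustration (hypothetical values): the row (C21, C22) is a
# multiple of (C11, C12), and D1 = -C21 * C11^{-1} makes T1 = [[1, 0], [D1, 1]]
# annihilate it.
C11, C12 = 1.0, 2.0
C21, C22 = 3.0, 6.0            # (C21, C22) = 3 * (C11, C12)
D1 = -C21 / C11
row2 = (D1 * C11 + C21, D1 * C12 + C22)   # second block row of T1 * C
```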
Consider $:$ \begin{eqnarray} \label{935} \langle \omega, y_*\rangle + \frac{\delta}{2} \langle \left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right), M \left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right) \rangle + P_{000}+ \langle \left(
\begin{array}{c}
P_{010} \\
P_{001} \\
\end{array}
\right), \left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right) \rangle&~&\\ \nonumber~~~+ \langle\left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right), \left(
\begin{array}{cc}
P_{020} & \frac{1}{2}P_{011} \\
\frac{1}{2}P_{011}^T & P_{002} \\
\end{array}
\right) \left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right)\rangle + \delta h(y_*, z_*, \lambda)&=& 0,\\ \label{936} \frac{\delta M}{2} \left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right)+ \delta \left(
\begin{array}{c}
\partial_y h(y_*, z_*, \lambda) \\
\partial_z h(y_*, z_*, \lambda) \\
\end{array}
\right) + \left(
\begin{array}{c}
P_{010} \\
P_{001} \\
\end{array}
\right) - t \left(
\begin{array}{c}
\omega \\
0 \\
\end{array}
\right)
&=& 0.~~~~~~~~ \end{eqnarray}
If $M$ is nonsingular, then according to $\bf{(A3)}$ and the continuity of the determinant, we have \begin{eqnarray*} \det \left(
\begin{array}{cc}
M & \bar{\omega}_1 \\
\bar{\omega}_2 & 0 \\
\end{array} \right) \neq 0, \end{eqnarray*}
where $\bar{\omega}_1 = (\omega, 0)^T \in R^{n+ 2m_0}$ and $\bar{\omega}_2 = (P_{010}+ \omega, P_{001})$. Then, combining $(\ref{935})$ and $(\ref{936})$, the implicit function theorem yields $(y_*, z_*, t)$; i.e., we construct a transformation such that, on the same energy surface, the frequency ratio is preserved after a KAM step.
\begin{remark}
If $M$ is nonsingular, the condition
\begin{eqnarray*} \det \left(
\begin{array}{cc}
M & \bar{\omega}_1 \\
\bar{\omega}_1 & 0 \\
\end{array} \right) \neq 0 \end{eqnarray*} is a generalization of the isoenergetic nondegeneracy condition of V. I. Arnold to the persistence of lower-dimensional invariant tori on a given energy surface, where $\bar{\omega}_1 = (\omega, 0)^T $.
\end{remark} \begin{remark} In $(\ref{936})$, $t\in R$, i.e., the ratio of the frequency is $t+1$. \end{remark}
Assume that $M$ is singular and that conditions $\bf{(A2)}$ and $\bf{(A3)}$ hold. Denote by $\tilde{\omega}_1$ the first $n+ 2m_0$ components of $T_1T^{-1} (\omega, 0)^T$, which equal the first $n+ 2m_0$ components of $T^{-1} (\omega, 0)^T$. In fact, $$T_1T^{-1} \left(
\begin{array}{c}
\omega \\
0 \\
\end{array}
\right) = T_1 \left(
\begin{array}{c}
\tilde{\omega}_1 \\
\omega_4 \\
\end{array}
\right)=\left(
\begin{array}{cc}
I & 0 \\
D_1 & I \\
\end{array}
\right)\left(
\begin{array}{c}
\tilde{\omega}_1 \\
\omega_4 \\
\end{array}
\right)=
\left(
\begin{array}{c}
\tilde{\omega}_1 \\
D_1\tilde{\omega}_1+ \omega_4 \\
\end{array}
\right),
$$ where $\omega = (\omega_3, \omega_4)^T\in R^{m},$ $\tilde{\omega}_1 = (\omega_3, 0)^T\in R^{n+ 2m_0}.$ Similarly, combining $(\ref{934})$, we have \begin{eqnarray*} \frac{\delta}{2} \left(
\begin{array}{cc}
C_{11} & C_{12} \\
0 & 0 \\
\end{array}
\right)
\left(
\begin{array}{c}
y_1 \\
y_2 \\
\end{array}
\right)+ \delta \left(
\begin{array}{c}
\partial_{y_1} h(y_*, z_*, \lambda) \\
0 \\
\end{array}
\right)- t\left(
\begin{array}{c}
\tilde{\omega}_1 \\
0 \\
\end{array}
\right)
= -\left(
\begin{array}{c}
p_1 \\
0 \\
\end{array}
\right). \end{eqnarray*} Assume \begin{eqnarray}\label{937} det \left(
\begin{array}{cc}
C_{11} & \tilde{\omega}_1 \\
\tilde{\omega}_2 & 0 \\
\end{array} \right) \neq 0, \end{eqnarray} where $\tilde{\omega}_2 $ are the first $n+ 2m_0$ components of $(P_{010}+ \omega, P_{001}) T$. Then there is a $(y_{i_1}^*, \cdots, y_{i_n}^*, 0, \cdots, 0, z_1^*, \cdots, z_{2m_0}^*, t)$ such that \begin{eqnarray*} \langle \omega, y_*\rangle + \frac{\delta}{2} \langle \left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right), M \left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right) \rangle + P_{000}+ \langle \left(
\begin{array}{c}
P_{010} \\
P_{001} \\
\end{array}
\right), \left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right) \rangle&~&\\ \nonumber~~~+ \langle\left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right), \left(
\begin{array}{cc}
P_{020} & \frac{1}{2}P_{011} \\
\frac{1}{2}P_{011}^T & P_{002} \\
\end{array}
\right) \left(
\begin{array}{c}
y_* \\
z_* \\
\end{array}
\right)\rangle + \delta h(y_*, z_*, \lambda)&=& 0,\\ \frac{\delta}{2} \left(
\begin{array}{cc}
C_{11} & C_{12} \\
0 & 0 \\
\end{array}
\right)
\left(
\begin{array}{c}
y_1 \\
y_2 \\
\end{array}
\right)+ \delta \left(
\begin{array}{c}
\partial_{y_1} h(y_*, z_*, \lambda) \\
0 \\
\end{array}
\right)- t\left(
\begin{array}{c}
\tilde{\omega}_1 \\
0 \\
\end{array}
\right) &=& -\left(
\begin{array}{c}
p_1 \\
0 \\
\end{array}
\right). \end{eqnarray*} Finally, combining $\bf{(A2)}$, $\bf{(A3)}$, $\bf{Lemma~\ref{Pro2}}$ and the continuity of the determinant, assumption $(\ref{937})$ holds. Therefore, on a given energy surface there is a transformation such that the frequency ratio between the unperturbed torus and the perturbed one is preserved.
\begin{remark} Assume $\bf{(A2)}$ and $\bf{(A3)}$. For a given energy, the ratio of part of the frequencies can be preserved. Meanwhile, the other frequencies drift slightly, and the drift depends on $D_1 p_1 + p_2$ and $D_1 \partial_{y_1} h(y_*, z_*, \lambda)+ \partial_{y_2} h (y_*, z_*, \lambda)$. \end{remark}
\subsection{Iteration Lemma}\label{727} Let $r_0$, $\gamma_0$, $s_0$, $\beta_0$, $\mu_0$, $\eta_0$, $\Lambda_0$, $H_0$, $N_0$, $e_0$, $P_0$ be given as above and denote $\hat{D}_0 = D(r_0, \beta_0)$. For any $\nu = 0,1, \cdots,$ denote \begin{eqnarray*} r_\nu &=& r_0 (1 - \sum_{i=1}^\nu \frac{1}{2^{i+1}}), ~~~\gamma_\nu = \gamma_0 (1 - \sum_{i=1}^\nu \frac{1}{2^{i+1}}),\\ \beta_\nu &=&\beta_0(1 - \sum_{i=1}^\nu \frac{1}{2^{i+1}}),~~~\mu_\nu = (64c_0)^{\frac{1}{1- \lambda_0}} \mu_{\nu-1}^{1 + \sigma}, ~~~K_\nu = ([\log\frac{1}{\mu_{\nu-1}}]+1)^{3 {\eta}},\\ D_\nu &=& D(r_\nu, s_\nu),~~~~~~~\hat{D}_\nu = D(r_\nu+ \frac{7}{8} (r_{\nu-1} - r_\nu)),~~~s_\nu = \frac{1}{8} \alpha_{\nu-1} s_{\nu-1},\\
\Lambda_\nu &=& \{\lambda \in \Lambda_{\nu-1}: |\langle k, \omega\rangle| > \frac{ \gamma_0}{|k|^\tau},~~~0<|k|\leq K_\nu\},~~~\alpha_\nu = \mu_\nu^{\frac{1}{3}}. \end{eqnarray*}
We have the following Iteration Lemma. \begin{lemma}\label{balala} Assume $(\ref{714})$ holds. Then the KAM step described in Section $\ref{KAM}$ is valid for all $\nu = 0,1,\cdots$, and the following facts hold for all $\nu = 1,2,\cdots.$
\begin{itemize} \item[\bf{(1)}] $P_\nu$ is real analytic in $(x,y)\in D_\nu$, smooth in $(x,y)\in \hat{D}_\nu$ and smooth in $\lambda \in {\Lambda}_\nu$, and moreover, \begin{eqnarray*}
|\partial_\lambda^l \partial_x^i \partial_{(y,z)}^j P_\nu |_{D_\nu \times {\Lambda}_\nu} \leq \delta\gamma_\nu^{b_j} s_\nu^{a_j} \mu_\nu^{d_j}, ~|l|+|i|+|j|\leq l_0; \end{eqnarray*} \item[\bf{(2)}] $\Phi_\nu = \phi_F^t\circ \phi: \hat{D} \times {\Lambda}_0 \rightarrow \hat{D}_{\nu - 1}, D_\nu \times {\Lambda}_\nu \rightarrow D_{\nu-1}$, is symplectic for each $\lambda \in {\Lambda}_0$, and is of class $C^{l_0 + 1 + \sigma_0, l_0 -1 +\sigma_0}$, $C^{\alpha, l_0}$, respectively, where $\alpha$ stands for real analyticity and $0<\sigma_0<1$ is fixed. Moreover, \begin{eqnarray*} \tilde{H}_\nu = H_{\nu-1}\circ \Phi_\nu = N_\nu+ {P}_\nu, \end{eqnarray*} on $\hat{D} \times {\Lambda}_\nu$, and \begin{eqnarray*}
|\Phi_\nu - id |_{C^{l_0+ 1 + \sigma_0,l_0 -1 + \sigma_0} (\hat{D} \times \tilde{\Lambda}_0)} \leq c_0 \delta \gamma^{b-1} \frac{\mu_0}{2^\nu}; \end{eqnarray*}
\item[\bf{(3)}] $\Lambda_\nu = \{\lambda\in \Lambda_{\nu-1}: |\langle k, \omega\rangle| > \frac{ \gamma_{\nu-1}}{|k|^\tau} ~for~all~K_{\nu-1}<|k|\leq K_\nu\}$. \end{itemize} \end{lemma}
\begin{proof} The proof of this lemma amounts to verifying conditions $\bf{(H1)} - \bf{(H7)}$. These verifications are standard and we omit them for brevity; see, for example, the proof of $\bf{Lemma~ 4.1}$ of \cite{Li}.
\end{proof}
\begin{remark} According to the choice of iteration sequence, the drift of frequency can be estimated by \begin{eqnarray}\label{drift}
|\partial_\lambda^l\omega_\nu - \partial_\lambda^l \omega_0|_{\tilde{\Lambda}_\nu} \leq c\mu_0^{\frac{\nu \sigma^2 + (1+\sigma)\sum\limits_{r=2}^{\nu} C_{\nu}^r \sigma^r}{3 \sigma^2}} (\gamma_0^b s_0 \mu_0^{ \frac{(1+ \sigma)((1+\sigma)^{\nu} -1)}{\sigma}} + c \mu_0^{\frac{\nu \sigma^2 + (1+\sigma)\sum\limits_{r=2}^{\nu} C_{\nu}^r \sigma^r}{3 \sigma^2}} s_0^2),~~
\end{eqnarray}
for $|l|\leq l_0.$ \end{remark}
\subsection{Convergence and measure estimate}\setcounter{equation}{0} Let \begin{eqnarray}
\nonumber \Psi^{\nu } &=& \Phi_1 \circ \Phi_2 \circ \cdots \circ \Phi_\nu, ~~ \nu = 1,2, \cdots . \end{eqnarray}
Then $\Psi^{\nu}: \tilde{D}_{\nu} \times \Lambda _0(g,G) \rightarrow \tilde{D}_0$, and
\begin{eqnarray}
\nonumber H_0 \circ \Psi^{\nu} &=& H_{\nu} = N_{\nu}+ P_{\nu},\\
\nonumber N_{\nu}&=& e_{\nu} + \langle \omega_\nu , y\rangle+ h_\nu (y, \omega),~~\nu= 0,1,\cdots,
\end{eqnarray}
where $\Psi^{0} = id$.
By standard arguments, $N_\nu$ converges uniformly to $N_\infty$, $P_\nu$ converges uniformly to $P_\infty$,
and $\partial _y^i \partial_z^j P_\infty = 0,~2|i|+|j|\leq 2.$
Hence, for each $\lambda\in \Lambda_{*}$, $T^d \times \{0\} \times \{0\}$ is an analytic invariant torus of $H_\infty$ with toral frequency $\omega_\infty$, which, for all $k\in Z^m \backslash \{0\},~1\leq q\leq n$, by the definition of $\Lambda_\nu$ and Lemma \ref{balala} (2), satisfies the following: \begin{itemize}
\item [\bf{(1)}] if $\bf{(A1)}$ holds and $M$ is nonsingular, then \begin{eqnarray}
\nonumber \omega_\infty \equiv \omega_0,~~~|\langle k, \omega_\infty \rangle| > \frac{\gamma}{2|k|^\tau}; \end{eqnarray}
\item [\bf{(2)}]if $\bf{(A1)}$ and $\bf{(A3)}$ hold and $M$ is nonsingular, then on a given energy surface
\begin{eqnarray}
\nonumber\omega_\infty \equiv t \omega_0, ~~|\langle k, \omega_\infty \rangle| > \frac{\gamma}{2|k|^\tau}; \end{eqnarray}
\item [\bf{(3)}]if $\bf{(A1)}$ and $\bf{(A2)}$ hold, then
\begin{eqnarray}
\nonumber (\omega_\infty)_{i_q} \equiv (\omega_0)_{i_q}, q= 1, \cdots, n, ~~|\langle k, \omega_\infty \rangle| > \frac{\gamma}{2|k|^\tau};
\end{eqnarray}
\item [\bf{(4)}]if $\bf{(A1)}$, $\bf{(A2)}$ and $\bf{(A3)}$ hold, then
\begin{eqnarray}
\nonumber (\omega_\infty)_{i_q} \equiv t (\omega_0)_{i_q}, q= 1, \cdots, n, ~~|\langle k, \omega_\infty \rangle| > \frac{\gamma}{2|k|^\tau}. \end{eqnarray} \end{itemize}
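To make the invariance claim explicit, we record the following standard observation, stated under the usual assumption that $h_\infty(y,\omega) = O(|y|^2)$: since $\partial_y^i \partial_z^j P_\infty = 0$ for $2|i|+|j|\leq 2$, both $P_\infty$ and its first-order derivatives in $(y,z)$ vanish on $\{y=0,\,z=0\}$, so there the equations of motion of $H_\infty = N_\infty + P_\infty$ reduce to \begin{eqnarray*}
\dot{x} = \partial_y N_\infty|_{y=0} = \omega_\infty, \qquad \dot{y} = -\partial_x N_\infty = 0,
\end{eqnarray*} which shows that $T^d \times \{0\} \times \{0\}$ is invariant and carries the linear flow with frequency $\omega_\infty$.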
By the Whitney extension theorem applied to $\Psi^\nu$, all $e_\nu,$ $\omega_\nu,$ $h_\nu,$ $P_\nu$ $(\nu = 0,1,\cdots)$ admit uniform $C^{d-1 +\sigma_0}$ extensions in $\lambda \in \Lambda_0$ with derivatives in $\lambda$ up to order $d-1$. Thus, $e_\infty$, $\omega_\infty$, $h_\infty$, $P_\infty$ are $C^{d-1}$ Whitney smooth in $\lambda \in \Lambda_{*}$, and the derivatives of $e_\infty -e_0$, $\omega_\infty -\omega_0$, $h_\infty -h_0$ satisfy similar estimates. Consequently, the perturbed tori form a $C^{d-1}$ Whitney smooth family on $\Lambda_{*}(g,G)$.
The measure estimate is the same as that in \cite{LLL}; for the sake of simplicity, we omit the details. This finishes the proof of Theorem \ref{shengluede}.
\section{Proof of Main Theorem}\label{074} In order to apply Theorem $\ref{shengluede}$, we need to reduce the Hamiltonian system $(\ref{082})$ to the form $(\ref{model33})$. The traditional method fails due to the high-order degeneracy of the perturbation. Here we present a procedure, consisting of finitely many nonlinear KAM steps, to carry out this reduction; to fix ideas, we only give an outline.
Regard the Hamiltonian system (\ref{082}) as the original Hamiltonian, written in the following form: \begin{eqnarray} \label{xiaoming} H(x,y,u,v) &=& N_1(y,v) + \varepsilon^2 {P_1} (x,y,u,v,\varepsilon), \end{eqnarray} where \begin{eqnarray*} N_1&=& \langle\omega, y\rangle + \hat{h},\\ \hat{h}&=&\frac{\varepsilon}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , M \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right)
\rangle+ \varepsilon^2 O(|K_0 \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right)|^3),\\ P_1(x,y,u,v) &=& \varepsilon P(x,y,u,v). \end{eqnarray*} We choose $\varepsilon = \delta$, $\gamma = \delta^{\frac{1}{4(9+d)}}$, $s = \delta^{\frac{1}{4}}$, $\mu = \delta^{\frac{1}{4}}$. Then \begin{eqnarray} \label{xiaoming1} H(x,y,u,v) &=& N_1(y,v) + \delta^2 {P_1} (x,y,u,v,\varepsilon), \end{eqnarray} where \begin{eqnarray*} N_1&=& \langle\omega, y\rangle + \hat{h},\\ \hat{h}&=&\frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , M \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) \rangle+ \delta^2 h,\\
|{P_1}(x,y,u,v)| &\leq& \gamma^{d+9} s^2 \mu. \end{eqnarray*} Here, $h$ is a polynomial in $K_0 \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right)$ starting from third-order terms. Write \begin{eqnarray*} {P}_1&=& \sum_{k} P_{k} e^{\sqrt{-1} \langle k, x\rangle},\\
\label{R_1} R_1&=& \sum_{|k|\leq K_1} P_{k} e^{\sqrt{-1} \langle k, x\rangle},\\
\label{I_1}{{P}_1} - R_1 &=& \sum_{|k|> K_1} P_{k} e^{\sqrt{-1} \langle k, x\rangle},\\ \end{eqnarray*} where $K_1$ is specified in Section $\ref{normal form}$.
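The tail ${{P}_1} - R_1$ is negligible by the usual analyticity argument; as a sketch, assuming ${P}_1$ is real analytic in $x$ on a complex strip of width $r$, the Fourier coefficients decay as \begin{eqnarray*}
|P_k| \leq |{P}_1|_r\, e^{-|k| r},
\end{eqnarray*} and hence, on a slightly narrower strip of width $r' < r$, \begin{eqnarray*}
|{P}_1 - R_1|_{r'} \leq \sum_{|k|> K_1} |{P}_1|_r\, e^{-|k| (r - r')} \leq c\, |{P}_1|_r\, K_1^{m}\, e^{-K_1 (r-r')},
\end{eqnarray*} which can be made smaller than any prescribed power of $\delta$ by the choice of $K_1$.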
Next, we are going to improve the order of ${{P}_1}$ by the symplectic transformation $\Phi_{F_1}^1$, the time-$1$ map generated by the vector field $J \nabla F_1$, where \begin{eqnarray*} J = \left(
\begin{array}{cccc}
0 & I_m & 0 & 0 \\
-I_m & 0 & 0 & 0 \\
0 & 0 & 0 & I_{m_0} \\
0 & 0 & -I_{m_0} & 0 \\
\end{array}
\right), \end{eqnarray*}
\begin{eqnarray*}
F_1(x,y,u,v) = \sum _{0<|k|\leq K_1} f_{k} e^{\sqrt{-1} \langle k, x\rangle}
\end{eqnarray*} satisfies \begin{eqnarray}\label{tongdiao31} \{N_1,F_1\} + \delta^2 (R_1- [R_1]) - R_1' = 0, \end{eqnarray} \begin{eqnarray*} R_1' &=& \partial_u N_1 \partial_v F_1 - \partial_v N_1 \partial_u F_1,\\ ~[R_1](y,u,v,\varepsilon)&=& \int_{T^m} {R_1}(x,y,u,v,\varepsilon)dx . \end{eqnarray*}
Concretely, using (\ref{tongdiao31}) and comparing coefficients, we obtain the following nonlinear homological equations \begin{eqnarray}\label{(11)}
-\sqrt{-1} \langle k, \omega + \partial _y \hat{h}\rangle f_{k} = \delta^2 P_{k}. \end{eqnarray} It is clear that the homological equations are uniquely solvable on the following domain \begin{eqnarray*}
\Lambda_{1} = \{ \omega \in \Lambda: |\langle k , \omega \rangle | > \frac{\gamma }{ |k|^{\tau} }~~ for ~all~ 0<|k| \leq K_1\}. \end{eqnarray*} By (\ref{tongdiao31}), we have \begin{eqnarray*}
\bar{H}_2&=& H_1 \circ \Phi _{F_1}^1\\
&=& N_2(y,u,v) + \delta^2 \bar{P}_2 (x,y,u,v,\varepsilon), \end{eqnarray*} where \begin{eqnarray*} N_2 &=& N_1 + \delta^2 [{R}_1],\\ \bar{P}_2 &=& \frac{1}{\delta^2}(R_1' + \int_0^1 \{R_{1,t}, F_1\}\circ \Phi_{F_1}^t dt +\delta^2 ({{P}_1} - R_1) \circ\Phi_{F_1}^1),\\ R_{1,t} &=& t\delta^2 R_1 + (1-t)R_1' + (1-t)\delta^2 [R_1]. \end{eqnarray*} It is easy to see that $[{R}_1]$ has a critical point in $u$, due to the $T^{m_0}$-periodicity in $u$. Consider the following transformation $$\phi: ~~x \rightarrow x, ~~ y \rightarrow y+ y_0, ~~v \rightarrow v+ v_0,~~u\rightarrow u,$$ where $y_0$ and $v_0$ are determined by the following equation: \begin{eqnarray*} \delta M \left(
\begin{array}{c}
y_0 \\
v_0 \\
\end{array}
\right) + \delta^2 \left(
\begin{array}{c}
\partial_y h(y_0, v_0) \\
\partial_v h(y_0, v_0) \\
\end{array}
\right) = \delta^2 \left(
\begin{array}{c}
\partial_y [{R}_1] \\
\partial_v [{R}_1]\\
\end{array}
\right). \end{eqnarray*} Here and below, denote \begin{eqnarray*}
[R_i]_2 = O(|\left(
\begin{array}{c}
y \\
v \\
\end{array}
\right)
|^2). \end{eqnarray*} Then \begin{eqnarray}\label{jiayou}
H_2&=& N_2(y,u,v) + \delta^2 {P}_2 (x,y,u,v,\varepsilon), \end{eqnarray} where \begin{eqnarray*} N_2 &=& \langle\omega_2, y\rangle + \frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , M_2 \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) \rangle+ \delta^2h_2+ \delta^2 [{R}_1]_2,\\ \omega_2 & =& \omega+\delta M \left(
\begin{array}{c}
y_0 \\
v_0 \\
\end{array}
\right) + \delta^2 \left(
\begin{array}{c}
\partial_y h (y_0, v_0) \\
\partial_v h (y_0, v_0) \\
\end{array}
\right)+ \delta^2 \left(
\begin{array}{c}
\partial_y [{R}_1] \\
\partial_v [{R}_1] \\
\end{array}
\right),\\ M_2 &=& M + \delta^2 \partial_{(y,v)}^2 h,\\
h_2 &=& O(|K_0 \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right)|^3),\\ P_2&=&\bar{P}_2\circ\phi + \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right), \partial_{(y,v)}^2 [{R}_1] \left(
\begin{array}{c}
y_0 \\
v_0 \\
\end{array}
\right) \rangle. \end{eqnarray*} Moreover,
\begin{eqnarray*}
|\partial_\lambda^l P_2 |\leq c \delta^{\frac{657}{576}},~|l|\leq d.
\end{eqnarray*}
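For the reader's convenience, we sketch where the small divisors enter this estimate: solving the homological equations above coefficientwise gives \begin{eqnarray*}
f_{k} = \frac{\delta^2 P_{k}}{-\sqrt{-1}\, \langle k, \omega + \partial_y \hat{h}\rangle}, \qquad 0<|k|\leq K_1,
\end{eqnarray*} and since $\partial_y \hat{h} = O(\delta)$, on $\Lambda_1$ one has $|\langle k, \omega + \partial_y \hat{h}\rangle| \geq \frac{\gamma}{2|k|^\tau}$ for all $0<|k|\leq K_1$, provided $\delta K_1^{\tau+1} = o(\gamma)$; hence \begin{eqnarray*}
|f_{k}| \leq \frac{2\,\delta^2\, |k|^{\tau}}{\gamma}\, |P_{k}|.
\end{eqnarray*}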
Here and below, we denote by $c$ a positive constant independent of the iteration process. In other words, at the second step of the iteration program the normal form is (\ref{jiayou}). In a similar manner to the Hamiltonian (\ref{jiayou}), write \begin{eqnarray*} {P}_2&=& \sum_{k} P_{k} e^{\sqrt{-1} \langle k, x\rangle},\\
R_2&=& \sum_{|k|\leq K_2} P_{k} e^{\sqrt{-1} \langle k, x\rangle},\\
{{P}_2} - R_2 &=& \sum_{|k|> K_2} P_{k} e^{\sqrt{-1} \langle k, x\rangle}. \end{eqnarray*}
We improve the order of ${{P}_2}$ by the symplectic transformation $\Phi_{F_2}^1$, where
\begin{eqnarray} \label{F}
F_2(x,y,u,v) = \sum _{0<|k|\leq K_2} f_{k} e^{\sqrt{-1} \langle k, x\rangle}
\end{eqnarray} satisfies \begin{eqnarray}\label{tongdiao41} \{N_2,F_2\} + \delta^2 (R_2- [R_2]) - R_2' = 0, \end{eqnarray} \begin{eqnarray*} R_2' &=& \partial_u N_2 \partial_v F_2 - \partial_v N_2 \partial_u F_2 + \partial_y [{R}_1]_2 \partial_x F_2,\\ ~[R_2]&=& \int_{T^m} {R_2}(x,y,u,v,\varepsilon)dx . \end{eqnarray*} Then \begin{eqnarray*}
H_3&=& H_2 \circ \Phi _{F_2}^1\circ\phi\\
&=& N_3(y,u,v) + {P}_3 (x,y,u,v, \varepsilon), \end{eqnarray*} where \begin{eqnarray*} N_3 &=&\langle\omega_3, y\rangle + \frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , M_3 \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) \rangle+ \delta^2h_3 + \delta^2 [{R}_1]_2+ \delta^2 [{R}_2]_2,\\ \omega_3 & =& \omega_2+\delta M_2 \left(
\begin{array}{c}
y_0 \\
v_0 \\
\end{array}
\right) + \delta^2 \left(
\begin{array}{c}
\partial_y h_2 (y_0, v_0) \\
\partial_v h_2 (y_0, v_0) \\
\end{array}
\right)+ \delta^2 \left(
\begin{array}{c}
\partial_y [{R}_2] \\
\partial_v [{R}_2] \\
\end{array}
\right),\\ M_3 &=& M_2 + \delta^2 \partial_{(y,v)}^2 h_2,\\ {P}_3 &=& R_2'\circ \phi + \int_0^1 \{R_{2,t}, F_2\}\circ \Phi_{F_2}^t \circ \phi dt +({{P}_2} - R_2) \circ\Phi_{F_2}^1\circ \phi \\ &~&+ \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right), \partial_{(y,v)}^2 [{R}_2] \left(
\begin{array}{c}
y_0 \\
v_0 \\
\end{array}
\right) \rangle,\\ R_{2,t} &=& t R_2 + (1-t)R_2' + (1-t)[R_2]. \end{eqnarray*} Hence
\begin{eqnarray*}
|\partial_\lambda^l {{P}}_3|\leq c \delta^{\frac{3220}{768}}, ~|l|\leq d.
\end{eqnarray*}
In general, the $\kappa$-th KAM step reads as follows, where $\kappa$ is a given constant. Write \begin{eqnarray*} {P}_\kappa&=& \sum_{k} P_{k} e^{\sqrt{-1} \langle k, x\rangle},\\
R_\kappa&=& \sum_{|k|\leq K_{\kappa}} P_{k} e^{\sqrt{-1} \langle k, x\rangle},\\
{{P}_\kappa} - R_\kappa &=& \sum_{|k|> K_{\kappa}} P_{k} e^{\sqrt{-1} \langle k, x\rangle}. \end{eqnarray*}
We improve the order of ${{P}_\kappa}$ by the symplectic transformation $\Phi_{F_\kappa}^1$, where
\begin{eqnarray}
F_\kappa(x,y,u,v) = \sum _{0<|k|\leq K_{\kappa}} f_{k} e^{\sqrt{-1} \langle k, x\rangle}
\end{eqnarray} satisfies \begin{eqnarray}\label{tongdiao511} \{N_\kappa,F_\kappa\} + \delta^2 (R_\kappa- [R_\kappa] )- R_\kappa' = 0, \end{eqnarray} \begin{eqnarray*} R_\kappa' &=& \partial_u N_\kappa \partial_v F_\kappa - \partial_v N_\kappa \partial_u F_\kappa +\partial_y [R_1]_2 \partial_x F_\kappa+\cdots+ \partial_y [R_{\kappa-1}]_2 \partial_x F_\kappa,\\ ~[R_i]&=& \int_{T^m} {R_i}(x,y,u,v,\varepsilon)dx, ~~~~1\leq i \leq \kappa-1. \end{eqnarray*} Using (\ref{tongdiao511}) and comparing coefficients, we obtain the following nonlinear homological equations \begin{eqnarray}\label{(11)}
-\sqrt{-1} \langle k, \omega + \partial _y \hat{h}_*\rangle f_{k} = P_{k}. \end{eqnarray} It is clear that the homological equations are uniquely solvable on the following domain \begin{eqnarray*}
\Lambda_{\kappa}(g,G) = \{ \omega \in \Lambda(g,G) : |\langle k , \omega \rangle | > \frac{\gamma_\kappa }{ |k|^{\tau} }~~ for ~all~ 0<|k| \leq K_{\kappa}\}. \end{eqnarray*}
Then \begin{eqnarray*}
H_{\kappa+1}&=& H_\kappa \circ \Phi _{F_\kappa}^1 \circ\phi = N_{\kappa+1}(y,u,v) + {P}_{\kappa+1} (x,y,u,v,\varepsilon), \end{eqnarray*} where \begin{eqnarray*} N_{\kappa+1} &=& \langle\omega_{\kappa+1}, y\rangle + \frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , M_{\kappa+1} \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) \rangle+ \delta^2h_\kappa \\ &~&+ \delta^2 [\bar{R}_1]_2+ \delta^2 [\bar{R}_2]_2+ \cdots+ \delta^2 [\bar{R}_\kappa]_2 ,\\ \omega_{\kappa+1} & =& \omega_\kappa+\delta M_\kappa \left(
\begin{array}{c}
y_0 \\
v_0 \\
\end{array}
\right) + \delta^2 \left(
\begin{array}{c}
\partial_y h_\kappa (y_0, v_0) \\
\partial_v h_\kappa (y_0, v_0) \\
\end{array}
\right)+ \delta^2 \left(
\begin{array}{c}
\partial_y [{R}_\kappa] \\
\partial_v [{R}_\kappa] \\
\end{array}
\right),\\ M_{\kappa+1} &=& M_\kappa + \delta^2 \partial_{(y,v)}^2 h_\kappa,\\ {P}_{\kappa+1} &=& R_\kappa'\circ\phi + \int_0^1 \{R_{\kappa,t}, F_\kappa\}\circ \Phi_{F_\kappa}^t \circ\phi dt +({P_\kappa} - R_\kappa) \circ\Phi_{F_\kappa}^1\circ\phi\\ &~&+ \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right), \partial_{(y,v)}^2 [{R}_\kappa] \left(
\begin{array}{c}
y_0 \\
v_0 \\
\end{array}
\right) \rangle,\\ R_{\kappa,t} &=& t R_\kappa + (1-t)R_\kappa' + (1-t)[R_\kappa]. \end{eqnarray*} Hence,
\begin{eqnarray*}
|\partial_\lambda^l {P}_{\kappa+1}|\leq c \delta^{\frac{3}{4} + (\frac{13}{12})^{\kappa+1} 9- 8},~|l| \leq d.
\end{eqnarray*}
Therefore, after $\kappa$ KAM steps, the new Hamiltonian reads
\begin{eqnarray} \label{youyong}
H_{\kappa+1} = N_{\kappa+1}+ \delta^{2} P, \end{eqnarray} where \begin{eqnarray*} N_{\kappa+1} &=& \langle\omega_{\kappa+1}, y\rangle + \frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , M_{\kappa+1} \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) \rangle+ \delta^2h_{\kappa+1} \\ &~&+ \delta^2 [{R}_1]_2+ \delta^2 [{R}_2]_2+ \cdots+ \delta^2 [{R}_\kappa]_2 ,\\ \omega_{\kappa+1} & =& \omega_\kappa+\delta M_\kappa \left(
\begin{array}{c}
y_0 \\
v_0 \\
\end{array}
\right) + \delta^2 \left(
\begin{array}{c}
\partial_y h_\kappa (y_0, v_0) \\
\partial_v h_\kappa (y_0, v_0) \\
\end{array}
\right)+ \delta^2 \left(
\begin{array}{c}
\partial_y [{R}_\kappa] \\
\partial_v [{R}_\kappa] \\
\end{array}
\right),\\ M_{\kappa+1} &=& M_\kappa + \delta^2 \partial_{(y,v)}^2 h_\kappa. \end{eqnarray*}
Let \begin{eqnarray*} \bar{g }&=&\delta^2 [{R}_1]_2+ \delta^2 [{R}_2]_2 +\cdots +\delta^{2} [{R}_{\kappa}]_2\\
&=& \delta^{\frac{1809}{576}} [\bar{R}_1]_2+ \delta^{\frac{4756}{768}} [\bar{R}_2]_2 +\cdots +\delta^{(\frac{13}{12})^{\kappa+1} 9- \frac{21}{4}} [\bar{R}_{\kappa}]_2\\
&=&\sum\limits_{j_1} \delta^{\frac{1809}{576}+ j_1} [\bar{R}_1]_2^{(j_1)}+ \sum\limits_{j_2}\delta^{\frac{4756}{768}+ j_2} [\bar{R}_2]_2^{(j_2)} +\cdots +\sum\limits_{j_\kappa}\delta^{(\frac{13}{12})^{\kappa+1} 9- \frac{21}{4} + j_\kappa} [\bar{R}_{\kappa}]_2^{(j_\kappa)}. \end{eqnarray*}
\begin{definition}\label{HOP} If the following hold:
(1) at the critical points $(y_0, u_0, v_0)$ of $\bar{g}$, \begin{eqnarray*} \det \partial_u^2 \,\delta^{-a+1} \bar{g} = 0; \end{eqnarray*}
(2) at the critical points $(y_0, u_0, v_0)$ of $\bar{g}$, there is a constant $\bar{\sigma}_0 >0$ such that \begin{eqnarray*}
|\det \partial_u^2 \,\delta^{-a} \bar{g} |\geq \bar{\sigma}_0, \end{eqnarray*} then we call $\delta P(x,y,u,v,\delta)$ a perturbation with $a$-order degeneracy at $(y_0, u_0, v_0)$. \end{definition}
\begin{remark}
Under the condition $|\det {\partial_{u}^2 \varepsilon^{- a}P_1(x,y,u,v,\varepsilon)}| > \tilde{\sigma}$, the condition of $a$-order degeneracy for the perturbation is obviously realized. Moreover, since $\bar{g}$ is $T^{m_0}$-periodic in $u$, it has $2^{m_0}$ critical points by the high-order nondegeneracy and Morse theory \emph{(\cite{Milnor})}. \end{remark}
\begin{remark} Assumption $\emph{(2)}$ in the definition of an $a$-order degenerate perturbation is equivalent to the following condition $\bf{(\mathfrak{S}1)}$.
\begin{itemize}
\item[$\bf{(\mathfrak{S}1)}$] At the critical point $(y_0, u_0, v_0)$ of $\bar{g}$, there exists a constant $ c > 0$ such that the minimum $\lambda_{min}^{\varepsilon} (\omega)$ among the absolute values of all eigenvalues of $\partial_u^2 \bar{g}$ satisfies $|\lambda_{min}^{\varepsilon} |\geq c \varepsilon ^a $ for all $\omega \in \Lambda(g,G)$. \end{itemize}
\end{remark}
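A sketch of this equivalence, under the implicit assumption that the entries of $\partial_u^2\, \varepsilon^{-a}\bar{g}$ are bounded above by some constant $C$ uniformly in $\omega$ (so that its eigenvalues satisfy $|\varepsilon^{-a}\lambda_i| \leq m_0 C$): on the one hand, \begin{eqnarray*}
\bar{\sigma}_0 \leq |\det \partial_u^2\, \varepsilon^{-a} \bar{g}| = \prod_{i=1}^{m_0} |\varepsilon^{-a}\lambda_i| \leq (m_0 C)^{m_0-1}\, \varepsilon^{-a}\, |\lambda_{min}^{\varepsilon}|,
\end{eqnarray*} so that $|\lambda_{min}^{\varepsilon}| \geq c\, \varepsilon^{a}$ with $c = \bar{\sigma}_0 (m_0 C)^{-(m_0-1)}$; conversely, if $|\varepsilon^{-a}\lambda_i| \geq c$ for all $i$, then $|\det \partial_u^2\, \varepsilon^{-a} \bar{g}| \geq c^{m_0}$.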
Using the definition of a perturbation with $a$-order degeneracy, we can rewrite the Hamiltonian at the critical point $(y_0,u_0,v_0)$ of $\bar{g}$ as \begin{eqnarray}\label{new221}
H(x,y,u,v)&=& N(y,u,v) +{\delta^{a +1}} \tilde{P}(x,y,u,v,\varepsilon), \end{eqnarray} where \begin{eqnarray*} N&=&\langle\omega_{\kappa+1}, y\rangle + \frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , M_{\kappa+1} \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right)
\rangle+ \delta^2h_\kappa + \frac{\delta^{a}}{2} \langle u, Vu\rangle + \delta^{a}O(|u|^3),\\ \delta^{a + 1} \tilde{P}&=& \delta^{\kappa + 1} P(x,y,u,v,\varepsilon)+ O(\delta^{a+1}), \end{eqnarray*} where $x \in T^m$, $y \in R^m$, $u, v \in R ^{m_0}$, $\omega \in \Lambda(g,G)$, $1 \leq a \leq \kappa$. In the above, all $\omega$-dependence is of class $C^{l_0}$ for some $l_0 \geq d$.
Next we raise the order of $\tilde{P}$ by performing finitely many nonlinear KAM steps. Let $\tilde{\tau}$ be the smallest integer such that $[ (\frac{13}{12})^{\tilde{\tau}} 9 -\frac{21}{4}]\geq\frac{3a+1}{2}$, where $a$ is a constant. After the $\tilde{\tau}$ KAM steps described above, and by {\bf{Remark 3.1}}, at each critical point we obtain the following: \begin{eqnarray} \nonumber H_{\tilde{\tau}}(x,y) &=& \langle\omega_{\tilde{\tau}}, y\rangle + \frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , M_{\tilde{\tau}} \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) \rangle+ \delta^2h_{\tilde{\tau}}\\ \label{keyileba}&~&~ + \frac{\delta^{a}}{2}{\langle u , V_{\tilde{\tau}_1}(\omega) u\rangle}+ \delta^{a}\hat{u}_{\tilde{\tau}_1}(u)+\delta ^{\frac{{3a +1}}{2}} \hat{P}(x,y,u,v,\delta),~~ \end{eqnarray} up to an irrelevant constant, where \begin{eqnarray*} V_{\tilde{\tau}_1} &=& V+ \partial_u^2 \tilde{h},~~~~~\hat{u}_{\tilde{\tau}}(u) = \hat{u} + (\tilde{h} - \langle\partial_u^2 \tilde{h} u, u\rangle),\\ \tilde{h} &=& \delta^{(\frac{13}{12})^{\kappa+1} 9- \frac{21}{4}} [\bar{R}_{\kappa+1}]+\cdots+ \delta^{(\frac{13}{12})^{\tilde{\tau}} 9- \frac{21}{4}} [\bar{R}_{\tilde{\tau}}],\\ \hat{P} &=& \delta P(x,y,u,v,\delta), ~~~~~1\leq a\leq\kappa, \end{eqnarray*}
with $V_{\tilde{\tau}}$ nonsingular. In each KAM step we need a hypothesis of the form $\delta K^{\tau+1} = o(\gamma)$, which obviously holds for finitely many KAM steps. Consider the re-scaling $x\rightarrow x$, $y\rightarrow \delta^{\frac{{a -1}}{2}}y$, $u\rightarrow u$, $v \rightarrow \delta^{\frac{{a -1}}{2}} v$, $H\rightarrow \delta^{\frac{{-a+1}}{2}}H$. Then the re-scaled Hamiltonian reads \begin{eqnarray*} H_{\tau_1}(x,y) &=& \langle\omega_{\tilde{\tau}}, y\rangle + \frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , M_{\tilde{\tau}} \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) \rangle+ \delta^2h_{\tilde{\tau}}\\ &~&~+ \frac{\delta^{\frac{a+1}{2}}}{2}{\langle u , V_{\tilde{\tau}}(\omega)u\rangle}+ \delta^{\frac{a+1}{2}}\hat{u}_{\tilde{\tau}}(u)+\delta ^{a+1} \tilde{P}(x,y,u,v). \end{eqnarray*} Renaming $\delta^{\frac{a+1}{2}}$ as $\delta$, we then have \begin{eqnarray}\label{model31}
H(x,y,u,v) =N(y,u,v) + P(x,y,u,v), \end{eqnarray} with \begin{eqnarray*} N &=& \langle\omega_{\tilde{\tau}}, y\rangle + \frac{\delta}{2} \langle \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) , M_{\tilde{\tau}} \left(
\begin{array}{c}
y \\
v \\
\end{array}
\right) \rangle+ \delta^2h_{\tilde{\tau}}+ \frac{\delta}{2}{\langle u , V_{\tilde{\tau}} (\omega)u\rangle}+ \delta \hat{u}_{\tilde{\tau}}(u),\\
P&=&\delta^2 \tilde{P}(x,y,u,v),~~~~\hat{u}(u)= O(|u|^3), \end{eqnarray*}
where $x \in T^m$, $y \in R^m$, $u, v \in R ^{m_0}$, $\omega \in \Lambda(g,G)$. In the above, all $\omega-$dependence is of class $C^{l_0}$ for some $l_0 \geq d$.
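As a consistency check for the re-scaling above, the exponents can be verified term by term: under $y\rightarrow \delta^{\frac{a-1}{2}}y$, $v\rightarrow \delta^{\frac{a-1}{2}}v$, $H\rightarrow \delta^{\frac{1-a}{2}}H$, \begin{eqnarray*}
\delta^{\frac{1-a}{2}}\,\langle \omega_{\tilde{\tau}}, \delta^{\frac{a-1}{2}} y\rangle &=& \langle \omega_{\tilde{\tau}}, y\rangle,\\
\delta^{\frac{1-a}{2}}\cdot \frac{\delta^{a}}{2}\langle u, V_{\tilde{\tau}}(\omega) u\rangle &=& \frac{\delta^{\frac{a+1}{2}}}{2}\langle u, V_{\tilde{\tau}}(\omega) u\rangle,\\
\delta^{\frac{1-a}{2}}\cdot \delta^{\frac{3a+1}{2}}\, \hat{P} &=& \delta^{a+1}\, \hat{P},
\end{eqnarray*} so that, after renaming $\delta^{\frac{a+1}{2}}$ as $\delta$, the perturbation carries the factor $\delta^{2}$, as in $(\ref{model31})$.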
Actually, we have \begin{eqnarray*}
|\partial_\omega^l {P}| \leq c \delta \gamma^{d+9} s^2 \mu,~~~~~|l|\leq d, \end{eqnarray*} where $\gamma= \varepsilon^{\frac{1 - 3\iota}{3(d+9)}}$, $s = \varepsilon^{\frac{1}{3}}$, $\mu = \varepsilon^\iota$, $\iota \in (0,\frac{1}{3})$. Applying Theorem \ref{shengluede} to (\ref{model31}), we conclude that the system admits a family of invariant tori. By Morse theory and {\bf{Remark 2.1}}, there are $2^{m_0}$ critical points, and consequently there are $2^{m_0}$ families of invariant tori. This completes the proof of Theorem \ref{dingli11}.
\section{Examples}\label{example} It is interesting to know how the critical points change during the KAM iteration. The following examples show that this can be rather complex. \begin{example} \label{071} Consider the following Hamiltonian system \begin{eqnarray}\label{youqudelizi} H(x,y) = \omega y + \frac{\varepsilon}{2}v^2 + \varepsilon^2 \cos u + \varepsilon~ \cos u ~ \sin x ~e^y, \end{eqnarray} where $x\in T^1$, $y$, $u$, $v \in R^1$.
Let the perturbation be $P_1(x,y,u)=\varepsilon^2 \cos u + \varepsilon~ \cos u ~ \sin x~ e^y$. Then $\int_{T^1} P_1 dx = \varepsilon^2 \cos u$, which is of second order in $\varepsilon$. Hence some previous results do not apply. However, applying our result, we can obtain the persistence of the resonant tori.
\emph{Using the above program twice, we have \begin{eqnarray} H_3(x,y,u,v) = \omega y + \frac{\varepsilon}{2}v^2 - \varepsilon^2 \cos u+ O(\varepsilon^3). \end{eqnarray} By \textbf{Theorem \emph{\ref{shengluede}}}, there is a family of invariant tori for the Hamiltonian \emph{(\ref{youqudelizi})}.} \end{example}
\begin{remark} This example shows that during the KAM iteration the critical points do not change. \end{remark}
\begin{example}\label{072} Consider the following Hamiltonian system \begin{eqnarray}\label{050} H(x,y) = \omega y + \frac{\varepsilon}{2}v^2 + \varepsilon^2 \cos (u+\frac{\iota\pi}{4}) + \varepsilon~ \sin u ~ \sin x ~e^y, ~\iota = 0,1,~ \end{eqnarray} where $x\in T^1$, $y$, $u$, $v \in R^1$.
Let $P_1(x,y,u)=\varepsilon^2 \cos (u+\frac{\iota\pi}{4}) + \varepsilon~ \sin u ~ \sin x~ e^y$. Then $\int_{T^1} P_1 dx = \varepsilon^2 \cos (u+\frac{\iota\pi}{4})$, which is of second order in $\varepsilon$; hence some previous results do not apply. However, applying our result, we can obtain the persistence of the resonant tori.
\emph{Using the program mentioned above twice, we have \begin{eqnarray*} H_3(x,y,u,v) = \omega y + \frac{\varepsilon}{2}v^2 + \varepsilon^2 \cos (u+\frac{\iota\pi}{4}) - \frac{\varepsilon^2 \sin^2u e^{2y}}{2\omega} + O(\varepsilon^3), \end{eqnarray*} where $\iota = 0,1.$ By \emph{\textbf{Theorem \ref{shengluede}}}, there is a family of invariant tori for the Hamiltonian system (\ref{050}).} \end{example} \begin{remark} \begin{enumerate}
\item [(1).] When $\iota = 0$, the number of critical points increases during the KAM iteration.
\item [(2).] When $\iota = 1$, the critical points of the unperturbed system are not critical points of the perturbed system. \end{enumerate} \end{remark}
\begin{appendix} \section*{Appendix} \setcounter{equation}{0}
\section{The computing process of Example \ref{071} } Consider the following Hamiltonian system \begin{equation}\label{061} H(x,y) = \omega y + \frac{\varepsilon}{2}v^2 + \varepsilon^2 \cos u + \varepsilon~ \cos u ~ \sin x ~e^y, \end{equation} where $x\in T^1$, $y$, $u$, $v \in R^1$.
Set $P_1(x,y,u)=\varepsilon^2 \cos u + \varepsilon~ \cos u ~ \sin x~ e^y$. Then $[P_1] = \int_{T^1} P_1 dx = \varepsilon^2 \cos u$, which is of second order in $\varepsilon$. We now improve the order of ${{P}_1}$ by the symplectic transformation $\Phi_{F_1}^1$, where
\begin{eqnarray} \label{062}
F_1(x,y,u,v) =a_1(y,u,v) \sin x + b_1(y,u,v) \cos x
\end{eqnarray} satisfies \begin{eqnarray}\label{063} \{N,F_1\} + P_1- [P_1] - P_1' = 0, \end{eqnarray} \begin{eqnarray*} P_1' &=& \partial_u N \partial_v F_1 - \partial_v N \partial_u F_1,\\ N &=& \omega y + \frac{\varepsilon}{2}v^2. \end{eqnarray*} Take \begin{eqnarray*} F_1(x,y,u) = \frac{- \varepsilon ~ \cos u ~ e^y ~ \cos x}{\omega}. \end{eqnarray*} Then \begin{eqnarray*} H_2(x,y,u,v) = N_2(y,u) + P_1'(x,u,v,\varepsilon)+ \int_0^1 \{(1-t)\{N, F_1\}+ P_1, F_1\}\circ \phi_{F_1}^t dt, \end{eqnarray*} where \begin{eqnarray*} N_2(y,u) &=& N(y,u) + \varepsilon^2 \cos u,\\ P_1'(x,u,v) &=& \frac{- \varepsilon^2 ~ v~ \sin u~ e^y~ \cos x}{\omega},\\ P_2&=&\int_0^1 \{(1-t)\{N, F_1\}+ P_1, F_1\}\circ \phi_{F_1}^t dt\\
&=& \frac{\varepsilon^2 \cos^2 u e^{2y} \cos 2x}{2 \omega} + O(\varepsilon^3). \end{eqnarray*} In fact, \begin{eqnarray*} R_t &=& (1-t) \{N, F_1\} + P_1\\ &=& (1-t) (- \varepsilon\cos u e^y \sin x - \frac{\varepsilon^2 v\sin u e^y \cos x}{\omega})+ \varepsilon^2 \cos u + \varepsilon \cos u \sin x e^y,\\ \{R_t, F_1\} &=& \frac{\partial R_t}{\partial x} \frac{\partial F_1}{\partial y} - \frac{\partial R_t }{\partial y} \frac{\partial F_1}{\partial x} + \frac{\partial R_t}{\partial u} \frac{\partial F_1}{\partial v} - \frac{\partial R_t }{\partial v} \frac{\partial F_1}{\partial u}\\ &=& \frac{\varepsilon^2 t \cos^2 u e^{2y} \cos 2x}{\omega} + \varepsilon^3 (1-t) \frac{\sin^2 u e^{2y}(\cos 2x +1)}{2 \omega^2}. \end{eqnarray*} Let \begin{eqnarray*} F_2 = \frac{- \varepsilon^2~ v~ \sin u ~e^y~ \sin x}{\omega^2}+ \frac{\varepsilon^2 \cos^2 u e^{2y} \sin2x}{4\omega^2}, \end{eqnarray*} then \begin{eqnarray*} \{N_2, F_2\} + P_2 - [P_2] - P_2' = 0, \end{eqnarray*} \begin{eqnarray*} P_2' &=& \partial _u N_2 \partial_v F_2 - \partial_v N_2 \partial_u F_2. \end{eqnarray*} With the help of $\Phi_{F_2}^1$, we have \begin{eqnarray*} H_3(x,y,u,v) = N_2(y,u) + P_3(x,y,u,v), \end{eqnarray*} where \begin{eqnarray*} N_2 =\omega y + \frac{\varepsilon}{2}v^2 + \varepsilon^2 \cos u,~~~~P_3= O(\varepsilon^3). \end{eqnarray*}
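As a quick check that this $F_1$ solves $(\ref{063})$: with the bracket convention displayed above, $\partial_x N = \partial_u N = 0$, so \begin{eqnarray*}
\{N, F_1\} = -\omega\, \partial_x F_1 - \varepsilon v\, \partial_u F_1 = -\varepsilon \cos u\, e^y \sin x - \frac{\varepsilon^2 v \sin u\, e^y \cos x}{\omega},
\end{eqnarray*} and therefore $\{N,F_1\} + P_1 - [P_1] = -\frac{\varepsilon^2 v \sin u\, e^y \cos x}{\omega} = P_1'$, as required.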
Hence, during the process of iterations the critical points do not change.
\section{The computing process of Example \ref{072}}\setcounter{equation}{0} Consider the following Hamiltonian system \begin{eqnarray}\label{064} H(x,y) = \omega y + \frac{\varepsilon}{2}v^2 + \varepsilon^2 \cos (u+\frac{\iota\pi}{4}) + \varepsilon~ \sin u ~ \sin x ~e^y, ~~\iota = 0,1, \end{eqnarray} where $x\in T^1$, $y$, $u$, $v \in R^1$.
Since $P_1(x,y,u)=\varepsilon^2 \cos (u+\frac{\iota\pi}{4}) + \varepsilon~ \sin u ~ \sin x~ e^y$, we have $[P_1] = \int_{T^1} P_1 dx = \varepsilon^2 \cos (u+\frac{\iota\pi}{4})$, which is of second order in $\varepsilon$.
To improve the order of ${{P}_1}$ by the symplectic transformation $\Phi_{F_1}^1$, we choose
\begin{eqnarray} \label{065}
F_1(x,y,u,v) =a_1(y,u,v) \sin x + b_1(y,u,v) \cos x
\end{eqnarray} such that \begin{eqnarray*} \{N_1,F_1\} + P_1- [P_1] - P_1' = 0, \end{eqnarray*} \begin{eqnarray*} P_1' &=& \partial_u N_1 \partial_v F_1 - \partial_v N_1 \partial_u F_1,\\ ~[P_1]&=& \int_{T^1} {P_1}(x,y,u,\varepsilon)dx = \varepsilon^2 \cos (u+ \frac{\iota\pi}{4}),\\ N_1 &=& \omega y + \frac{\varepsilon}{2}v^2. \end{eqnarray*} Let \begin{eqnarray*} F_1(x,y,u,v) = \frac{- \varepsilon ~ \sin u ~ e^y ~ \cos x}{\omega}. \end{eqnarray*} Then \begin{eqnarray*} H_2(x,y,u,v) = N_2(y,u) + P_2'(x,u,v,\varepsilon)+ \bar{P}_3(x,y,u,v,\varepsilon), \end{eqnarray*} where \begin{eqnarray*} N_2(y,u) &=& \omega y + \frac{\varepsilon}{2}v^2 + \varepsilon^2 \cos (u + \frac{\iota\pi}{4}),\\ P_2'(x,u,v) &=& \frac{ \varepsilon^2 ~ v~ \cos u~ e^y~ \cos x}{\omega},\\ \bar{P}_3 &=& \int_0^1 \{R_t, F_1\}\circ \phi_{F_1}^t dt,\\ R_t &=& (1-t) \{N_1, F_1\}+ P_1. \end{eqnarray*} Moreover, \begin{eqnarray*} R_t &=& (1-t) \{N_1, F_1\} +P_1 \\ &=& t \varepsilon \sin u e^y \sin x + (1-t) \frac{\varepsilon^2 v \cos u e^y \cos x}{\omega} + \varepsilon^2 \cos (u+ \frac{\iota\pi}{4}),\\ \{R_t, F_1\} &=& \frac{\partial R_t}{\partial x} \frac{\partial F_1}{\partial y} - \frac{\partial R_t }{\partial y} \frac{\partial F_1}{\partial x} + \frac{\partial R_t}{\partial u} \frac{\partial F_1}{\partial v} - \frac{\partial R_t }{\partial v} \frac{\partial F_1}{\partial u}\\ &=& -(t \varepsilon \sin u e^y \cos x- (1-t)\frac{\varepsilon^2 v \cos u e^y \sin x}{\omega} )\frac{\varepsilon \sin u e^y \cos x}{\omega}\\ &~& - (t \varepsilon \sin u e^y \sin x+ (1-t)\frac{\varepsilon^2 v \cos u e^y \cos x}{\omega} )\frac{\varepsilon \sin u e^y \sin x}{\omega}\\ &~&+ (1- t) \frac{\varepsilon^2 \cos u e^y \cos x}{\omega} \frac{\varepsilon\cos u e^y \cos x}{\omega}\\ &=&-\frac{t \varepsilon^2 \sin ^2 u e^{2y}}{\omega} + (1- t) \frac{\varepsilon^3 \cos^2 u e^{2y} \cos^2 x}{\omega^2}. \end{eqnarray*} Hence \begin{eqnarray*}
| \bar{P}_3| \leq c \varepsilon^3. \end{eqnarray*} Set \begin{eqnarray*} F_2 = \frac{ \varepsilon^2~ v~ \cos u ~e^y~ \sin x}{\omega^2}, \end{eqnarray*} then \begin{eqnarray*} \{N_2, F_2\} + P_2 - [P_2] - P_2' = o(\varepsilon^3), \end{eqnarray*} \begin{eqnarray*} [P_2] &=& \int_0^{2\pi} P_2(x,u,v) dx = 0,\\ P_2' &=& \partial _u N_2 \partial_v F_2 - \partial_v N_2 \partial_u F_2\\ &=&-\frac{\varepsilon^4 \sin (u + \frac{\iota\pi}{4}) \cos u e^y \sin x }{\omega^2} + \varepsilon^3 \frac{v^2 \sin u e^y \sin x }{\omega^2}. \end{eqnarray*} With the aid of $\Phi_{F_2}^1$, we have \begin{eqnarray*} H_3(x,y,u,v) = N_2(y,u) + P_3(x,y,u,v), \end{eqnarray*} where \begin{eqnarray*} N_2 &=& \omega y + \frac{\varepsilon}{2}v^2 + \varepsilon^2 \cos (u+\frac{\iota\pi}{4}) - \frac{\varepsilon^2 ~\sin^2u~e^{2y}}{2 \omega},\\ P_3&=& O(\varepsilon^3). \end{eqnarray*} The critical points of $\cos (u + \frac{\iota\pi}{4})$ are $u = - \frac{\iota\pi}{4} + k\pi$, $k \in \mathds{Z}$. Denote \begin{eqnarray*} g(u) = - \cos (u + \frac{\iota\pi}{4}) - \frac{2\sin^2u e^{2y}}{2\omega}. \end{eqnarray*} We have \begin{eqnarray*} g'(u) = - \sin (u + \frac{\iota\pi}{4}) - \frac{4\sin u \cos u e^{2y}}{2\omega}. \end{eqnarray*} Hence, when $\iota =0$, $u = - \frac{\iota\pi}{4} + k\pi$ are critical points of $g(u).$ When $\iota =1$, $g'(-\frac{\pi}{4}+ k \pi) = \sqrt{2} \cos k\pi\neq 0$, i.e. $-\frac{\pi}{4}+ k \pi$ are not critical points of $g(u)$. \end{appendix}
\end{document}
\begin{document}
\begin{frontmatter} \title{Pathwise Convergence of the Hard Spheres Kac Process} \runtitle{Pathwise Convergence of the Kac Process}
\begin{aug} \author{\fnms{Daniel} \snm{Heydecker}\thanksref{t1}\ead[label=e1]{dh489@cam.ac.uk}}
\thankstext{t1}{This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/L016516/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis}
\runauthor{D. Heydecker}
\affiliation{University of Cambridge}
\address{Centre for Mathematical Sciences\\ Wilberforce Road, Cambridge\\ CB3 0WA\\ \printead{e1}\\}
\end{aug}
\begin{abstract} We derive two estimates for the deviation of the $N$-particle, hard-spheres Kac process from the corresponding Boltzmann equation, measured in expected Wasserstein distance. Particular care is paid to the long-time properties of our estimates, exploiting the stability properties of the limiting Boltzmann equation at the level of realisations of the interacting particle system. As a consequence, we obtain an estimate for the propagation of chaos, uniformly in time and with polynomial rates, as soon as the initial data has a $k^\mathrm{th}$ moment, $k>2$. Our approach is similar to Kac's proposal of relating the long-time behaviour of the particle system to that of the limit equation. Along the way, we prove a new estimate for the continuity of the Boltzmann flow measured in Wasserstein distance. \end{abstract}
\begin{keyword}[class=MSC] \kwd[Primary ]{60J25} \kwd{60K35} \kwd[; secondary ]{35Q20} \end{keyword}
\begin{keyword} \kwd{Kac Process} \kwd{Law of Large Numbers} \kwd{Wasserstein Distance} \kwd{Boltzmann Equation} \end{keyword}
\end{frontmatter}
\section{Introduction \& Main Results} Kac \cite{FKT} introduced a Markov model for the behaviour of a dilute gas, corresponding to the spatially homogeneous Boltzmann equation. We consider an ensemble of $N$ indistinguishable particles, with velocities $v_1(t), ..., v_N(t) \in \mathbb{R}^d$ at time $t\ge 0$, which are encoded in the empirical velocity distribution \begin{equation} \mu^N_t=N^{-1}\sum_{i=1}^N \delta_{v_i(t)}. \end{equation} Throughout, unless specified otherwise, we consider only the following example of Kac processes, known as the \emph{hard spheres} kernel, which is one of the two main examples of physical interest. The dynamics are as follows: \begin{enumerate}\item For every (unordered) pair of particles with velocities $v, v_\star \in \text{supp}(\mu^N_t)$, the particles collide at a rate $2|v-v_\star|/N$.
\item When two particles collide, take an independent random variable $\Sigma$, distributed uniformly on $S^{d-1}$. The particles then separate in direction $\Sigma.$
\item The velocities change to $v'(v, v_\star, \Sigma)$ and $v'_\star(v,v_\star, \Sigma)$, given by conservation of energy and momentum as \begin{equation}\label{eq: PCV} v'(v, v_\star, \Sigma)=\frac{v+v_\star+\Sigma|v-v_\star|}{2}; \hspace{0.5cm} v_\star'(v, v_\star, \Sigma)=\frac{v+v_\star-\Sigma|v-v_\star|}{2} \end{equation} The measure changes to \begin{equation} \label{eq: change of measure at collision} \mu \mapsto \mu^{N, v, v_\star, \Sigma} = \mu+\frac{1}{N}(\delta_{v'}+\delta_{v'_\star}-\delta_{v}-\delta_{v_\star}). \end{equation} \end{enumerate} More formally, we consider the space $\mathcal{S}$ of Borel measures on $\mathbb{R}^d$, satisfying \begin{equation} \langle 1, \mu \rangle =1; \hspace{0.5cm} \langle v, \mu \rangle =0; \hspace{0.5cm} \langle |v|^2, \mu \rangle = 1\end{equation} where we have adopted the notational conventions that angle brackets $\langle, \rangle$ denote integration against a measure, and $v$ denotes the identity function on $\mathbb{R}^d$. $\mathcal{S}$ is called the \emph{Boltzmann Sphere}, and consists of those measures with normalised mass, momentum and energy. We write $\mathcal{S}^k$ for the subspace of $\mathcal{S}$ where the $k^\text{th}$ moment $\langle |v|^k, \mu\rangle$ is finite, and define the following family of weights: \begin{equation}\Lambda_k(\mu):=\langle (1+|v|^2)^\frac{k}{2}, \mu\rangle. \end{equation} This leads to a natural family of subspaces: \begin{equation} \label{eq: definition of SKA} \mathcal{S}^k_a := \{\mu \in \mathcal{S}: \Lambda_k(\mu)\leq a\}. \end{equation} For shorthand, we will often write $\Lambda_k(\mu, \nu):=\max(\Lambda_k(\mu), \Lambda_k(\nu))$.
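As a sanity check on the collision rule (\ref{eq: PCV}), the following Python sketch (illustrative only, not part of the paper; all names are ours) computes the post-collision velocities and verifies that momentum and kinetic energy are conserved, as follows from $|\Sigma|=1$:

```python
import math
import random

def collide(v, v_star, sigma):
    """Post-collision velocities from (eq: PCV): v' = (v + v* + Sigma|v - v*|)/2,
    v*' = (v + v* - Sigma|v - v*|)/2; momentum and energy are conserved."""
    speed = math.sqrt(sum((a - b) ** 2 for a, b in zip(v, v_star)))
    v_new = tuple((a + b + s * speed) / 2 for a, b, s in zip(v, v_star, sigma))
    v_star_new = tuple((a + b - s * speed) / 2 for a, b, s in zip(v, v_star, sigma))
    return v_new, v_star_new

def uniform_sphere(d):
    """Uniform direction Sigma on S^{d-1}, via a normalised Gaussian vector."""
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in g))
    return tuple(x / n for x in g)

v, w = (1.0, 0.0, 0.0), (0.0, 1.0, -1.0)
vp, wp = collide(v, w, uniform_sphere(3))
# momentum v + v* is preserved coordinate-wise
assert all(abs((a + b) - (c + e)) < 1e-12 for a, b, c, e in zip(v, w, vp, wp))
```

Conservation of energy follows from the parallelogram identity $|v'|^2+|v_\star'|^2=\tfrac{1}{2}(|v+v_\star|^2+|v-v_\star|^2)=|v|^2+|v_\star|^2$, which the sketch reproduces numerically.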
\\
Let $\mathcal{S}_N$ be the subset of $\mathcal{S}$ consisting of normalised empirical measures on $N$ points; we will typically write $\mu^N$ for a generic element of $\mathcal{S}_N$. Formally, the Kac process is the Markov process on $\mathcal{S}_N$ with kernel \begin{equation} \label{eq: definition of script Q} \mathcal{Q}_N(\mu^N)(A)= N \int_{\mathbb{R}^d \times \mathbb{R}^d \times S^{d-1}} 1(\mu^{N, v, v_\star, \sigma} \in A) |v-v_\star| \mu^N(dv)\mu^N(dv_\star) d\sigma.\end{equation} Note that, since the map $\mu^N \mapsto \mu^{N,v,v_\star, \sigma}$ preserves particle number, momentum, and kinetic energy, $\mathcal{Q}_N(\mu^N)$ is supported on $\mathcal{S}_N$ whenever $\mu^N \in \mathcal{S}_N$. We write $(\mu^N_t)_{t \geq 0}$ for a Kac process on $N$ particles. Observe that the total collision rate is bounded by $2N$, and so for any initial datum $\mu^N_0$, the law of a Kac process started from $\mu^N_0$ exists and is unique, and the process is almost surely non-explosive.
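Since the total collision rate is finite, the jump dynamics generated by $\mathcal{Q}_N$ can be simulated by a standard Gillespie-type scheme: hold for an exponential time with rate equal to the total pairwise rate, select a pair with probability proportional to $2|v_i-v_j|/N$, and apply the collision rule. The following self-contained Python sketch (ours, for illustration; not the paper's construction) implements one jump:

```python
import math
import random

def _collide(v, w, sigma):
    # post-collision velocities from the collision rule (eq: PCV)
    r = math.sqrt(sum((a - b) ** 2 for a, b in zip(v, w)))
    vp = tuple((a + b + s * r) / 2 for a, b, s in zip(v, w, sigma))
    wp = tuple((a + b - s * r) / 2 for a, b, s in zip(v, w, sigma))
    return vp, wp

def kac_step(vels):
    """One jump of the N-particle hard-spheres Kac process: each unordered
    pair (i, j) collides at rate 2|v_i - v_j|/N; returns the holding time."""
    N, d = len(vels), len(vels[0])
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    rates = [2 * math.dist(vels[i], vels[j]) / N for i, j in pairs]
    dt = random.expovariate(sum(rates))              # exponential holding time
    i, j = random.choices(pairs, weights=rates)[0]   # pair chosen prop. to rate
    g = [random.gauss(0.0, 1.0) for _ in range(d)]   # Sigma uniform on S^{d-1}
    n = math.sqrt(sum(x * x for x in g))
    sigma = tuple(x / n for x in g)
    vels[i], vels[j] = _collide(vels[i], vels[j], sigma)
    return dt

random.seed(2)
# initial data on the Boltzmann sphere: zero momentum, sum |v_i|^2 = N = 4
vels = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
t = sum(kac_step(vels) for _ in range(50))
```

Because each jump preserves momentum and energy, the empirical measure stays on $\mathcal{S}_N$ along the whole trajectory, mirroring the remark above that $\mathcal{Q}_N(\mu^N)$ is supported on $\mathcal{S}_N$.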
\paragraph{Measure Solutions to the Boltzmann Equation}Following many previous works, \cite{L&M, M+M, ACE}, we study measure-valued solutions to the Boltzmann equation. We define the Boltzmann collision operator $Q(\mu, \nu)$ for measures $\mu, \nu \in \mathcal{S}$ as \begin{multline} \label{eq: defn of Q} Q(\mu, \nu)=\int_{\mathbb{R}^d\times \mathbb{R}^d\times S^{d-1}} \left\{\delta_{v'}+\delta_{v_\star'}-\delta_v-\delta_{v_\star}\right\}|v-v_\star|d\sigma \mu(dv)\nu(dv_\star). \end{multline} For brevity, we will denote $Q(\mu, \mu)$ by $Q(\mu)$. We say that a family $(\mu_t)_{t\geq 0}$ of measures in $\mathcal{S}$ satisfies the \emph{Boltzmann equation} if, for any bounded measurable $f$ of compact support, \begin{equation} \tag{BE}\label{BE} \forall t \geq 0 \hspace{1cm} \langle f, \mu_t \rangle =\langle f, \mu_0 \rangle +\int_0^t \langle f, Q(\mu_s)\rangle ds. \end{equation} The Boltzmann equation is known to have a unique fixed point $\gamma \in \mathcal{S}$, which is given by the Maxwellian, or Gaussian, density: \begin{equation}
\gamma(dv)=\frac{e^{-\frac{d}{2}|v|^2}}{(2\pi d^{-1})^{d/2}}dv.
\end{equation}
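The fixed point $\gamma$ is the centred Gaussian with covariance $d^{-1}I_d$, so it has unit mass, zero momentum and unit energy, i.e. $\gamma\in\mathcal{S}$. A small illustrative Monte Carlo check (ours, not from the paper) of the energy normalisation $\langle |v|^2,\gamma\rangle=1$:

```python
import random

random.seed(0)
d, n = 3, 200_000
# gamma is the centred Gaussian with covariance I_d / d, so each coordinate
# has variance 1/d and the mean energy <|v|^2, gamma> equals 1
samples = [[random.gauss(0.0, (1.0 / d) ** 0.5) for _ in range(d)]
           for _ in range(n)]
mean_sq = sum(sum(x * x for x in v) for v in samples) / n
```

With $2\times 10^5$ samples, `mean_sq` concentrates around $1$ with standard error of order $10^{-3}$.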
\paragraph{Measuring Convergence to the Boltzmann Equation}To discuss the convergence of Kac's process to the Boltzmann equation, we will work with the following \emph{Wasserstein metric} on $\mathcal{S}$. Consider the Sobolev space of test functions \begin{equation} X=W^{1, \infty}(\mathbb{R}^d)=\{\text{Bounded, Lipschitz functions } f:\mathbb{R}^d \rightarrow \mathbb{R} \}; \end{equation}
\begin{equation} \|f\|_X:=\max\left(\sup_{v}|f|(v), \hspace{0.1cm} \sup_{v\neq w} \frac{|f(v)-f(w)|}{|v-w|}\right). \end{equation} We write $B_X$ for the unit ball of $X$; that is, those functions which are $1$-bounded and $1$-Lipschitz. Given a function $f$ on $\mathbb{R}^d$, we write $\hat{f}$ for the function \begin{equation} \hat{f}(v)=\frac{f(v)}{1+|v|^2}. \end{equation} We write $\mathcal{A}$ for the space of weighted-Lipschitz functions:\begin{equation} \label{eq: defn of script A}\mathcal{A}:=\left\{f: \mathbb{R}^d \rightarrow \mathbb{R}: \hat{f}\in X, \|\hat{f}\|_X\leq 1\right\}.\end{equation} We will also write \begin{equation} \label{eq: defn of script A 0}\mathcal{A}_0=\left\{f: \mathbb{R}^d \rightarrow \mathbb{R}: \hat{f}\in L^\infty(\mathbb{R}^d), \|\hat{f}\|_\infty\leq 1\right\}.\end{equation} The weighted Wasserstein metric $W$ is given by the duality: \begin{equation}\label{eq: definition of W} W(\mu, \nu):= \sup_{f\in\mathcal{A}}|\langle f, \mu-\nu\rangle|. \end{equation}
We make the following remark on alternative possible choices of metric. Our metric $W$ is closely related to the $p$-Wasserstein metrics $W_p$ on the subspaces $\mathcal{S}^p$, given by \begin{equation}\label{eq: definition of Wp} W_p(\mu, \nu)=\inf\left\{\int_{\mathbb{R}^d\times\mathbb{R}^d} |v-w|^p\pi(dv,dw): \hspace{0.2cm} \pi\text{ is a coupling of } \mu \text{ and } \nu\right\}.\end{equation} In the special case $p=1$, the metric $W_1$ is known as the Monge-Kantorovich-Wasserstein (MKW) metric, and can alternatively be given by \begin{equation} W_1(\mu, \nu)=W\left(\frac{\mu}{1+|v|^2}, \frac{\nu}{1+|v|^2}\right).\end{equation} It is straightforward to check that, on the space $\mathcal{S}$, the metrics $W, W_1, W_2$ all induce the same topology, and that for some absolute constant $C$, we have the bound $W_1 \le CW$ on $\mathcal{S}$. Moreover, on the subspaces $\mathcal{S}^k_a$ defined in (\ref{eq: definition of SKA}), with $k>2$, we can find explicit bounds $W \le C W_1^\alpha$, with $\alpha\in(0,1)$.
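For empirical measures on the line with equally many atoms of equal weight, the MKW metric $W_1$ is realised by the monotone coupling, i.e. by matching order statistics. This gives a cheap way to experiment with these distances numerically; a minimal sketch (ours, one-dimensional only, and for $W_1$ rather than the weighted metric $W$):

```python
def w1_empirical(xs, ys):
    """W_1 between two equal-size empirical measures on R: in one dimension
    the optimal coupling matches sorted samples (order statistics)."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

assert w1_empirical([0.0, 1.0], [0.0, 1.0]) == 0.0
assert w1_empirical([0.0, 0.0], [1.0, 1.0]) == 1.0
```

In higher dimensions, and for the weighted metric $W$ itself, no such closed form is available, which is one reason the duality formulation (\ref{eq: definition of W}) is used throughout.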
\\We now state the motivating result of \cite{ACE} on the convergence of the Kac process to the Boltzmann equation: \begin{proposition}\label{thrm: bad convergence theorem} \cite[Theorem 10.1]{ACE} Let $k>2$. We say that a family $(\mu_t)_{t\ge 0}$ is locally $\mathcal{S}^k$-bounded if $\sup_{s\leq t} \hspace{0.1cm} \Lambda_k(\mu_s) <\infty $ for any $t \ge 0$. \\ \\ For any $\mu_0 \in \mathcal{S}^k$, there is a unique locally $\mathcal{S}^k$-bounded solution to the Boltzmann equation (\ref{BE}), starting from $\mu_0$; we write this solution as $(\phi_t(\mu_0))_{t\geq 0}$.
\\ Moreover, for any $\epsilon>0$, $t_\text{fin}<\infty$, $\lambda<\infty$, there exist constants $C(\epsilon, \lambda, k, t_\text{fin})<\infty$ and $\alpha(d,k)>0$ such that, whenever $(\mu^N_t)_{t\geq 0}$ is a Kac process on $N\geq 1$ particles, with $\Lambda_k(\mu^N_0) \leq \lambda, \Lambda_k(\mu_0) \leq \lambda$, we have \begin{equation} \mathbb{P}\left(\sup_{t\leq t_\text{fin}} W(\mu^N_t, \phi_t(\mu_0))>C(W(\mu^N_0, \mu_0)+N^{-\alpha})\right)< \epsilon.\end{equation} For $d\geq 3$ and $k>8$, we can take $\alpha = \frac{1}{d}$. \end{proposition}
While the convergence of the Kac process to the Boltzmann equation is an extensively studied topic, it is most usually approached through the propagation of chaos, discussed below, in contrast to the \emph{pathwise} style of estimate which we seek to emulate here. We note that the existence of solutions is known \cite{L&M} for the case $k=2$, but that nothing is known about the convergence of the Kac process in this case.\\
From existence and uniqueness, we can consider the Boltzmann equation as describing a non-linear semigroup of flow operators $(\phi_t)_{t\geq 0}$ on $ \cup_{k>2} \mathcal{S}^k$. To prove Proposition \ref{thrm: bad convergence theorem}, Norris \cite{ACE} introduces a family of random linear operators $E_{st}$, and develops a representation formula in terms of these operators, which will be reviewed in Sections \ref{sec: continuity of BE}, \ref{sec: LMR}. Crucial to the proof are estimates for the operator norms of $E_{st}$, which are obtained by Gr\"onwall-style estimates. As a result, the constant $C$ depends badly on the terminal time $t_\text{fin}$, with \textit{a priori} exponential growth. Our work was inspired by the observation that strong \emph{stability estimates} for the non-linear semigroup $(\phi_t)$, proven by Mischler and Mouhot \cite{M+M}, allow us to avoid using Gr\"onwall-style estimates, and hence obtain estimates with better long-time properties.
\paragraph{Chaoticity} We will also discuss the notion of chaoticity, which is the usual framework used to analyse the convergence of the Kac process to the Boltzmann equation. In this context, it is natural to preserve the labels on the particles, and to consider the \emph{labelled Kac process} $\mathcal{V}^N_t=(v_1(t),...,v_N(t))$, taking values in the labelled Boltzmann Sphere \begin{equation} \mathbb{S}^{N}=\left\{(v_1, ..., v_N) \in (\mathbb{R}^d)^N: \hspace{0.2cm}\sum_{i=1}^N v_i=0, \hspace{0.2cm}\sum_{i=1}^N |v_i|^2 = N\right\}. \end{equation} We may recover $\mathcal{S}_N$ by taking empirical measures: \begin{equation} \theta_N: \mathbb{S}^N \rightarrow \mathcal{S}_N; \hspace{1cm} (v_1, ..., v_N) \mapsto \frac{1}{N} \sum_{i=1}^N \delta_{v_i}.\end{equation} Moreover, if $\mathcal{V}^N_t$ is a labelled Kac process, then $\mu^N_t=\theta_N(\mathcal{V}^N_t)$ is an unlabelled Kac process. We write $\mathcal{LV}^N_t$ for the law of $(v_1(t),...,v_N(t))$ on $\mathbb{S}^N$. We will measure chaoticity using the following (unweighted) Wasserstein metrics on probability measures on $(\mathbb{R}^d)^l$ for all $l\ge 1$, defined in a similar way to (\ref{eq: definition of W}): \begin{equation}\label{eq: definition of script W} \mathcal{W}_{1,l}\left(\mathcal{L}, \mathcal{L}'\right)=\sup\left\{\int_{(\mathbb{R}^d)^l} f(V)\hspace{0.1cm}(\mathcal{L}(dV)-\mathcal{L}'(dV)) \right\}\end{equation} where the supremum is over all functions $f$ of the form $f=f_1\otimes f_2 \otimes...\otimes f_l$, with each $f_i$ a bounded and Lipschitz test function, $f_i\in B_X$, and the subscript $l$ recalls the relevant dimension. We now recall the following definition from \cite{FKT}: \begin{definition*}[Finite Dimensional Chaos] For each $N$, let $\mathcal{L}^N$ be a law on $\mathbb{S}^N$, which is symmetric under permutations of the indices.
We say that $(\mathcal{L}^N)_{N\ge 2}$ is $\mu$-chaotic, if, for all $l \ge 1$, we have \begin{equation} \label{eq: POC}\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{L}^N], \mu^{\otimes l}\right) \rightarrow 0\end{equation} where $\Pi_l$ denotes the marginal distribution on the first $l$ factors. \end{definition*} A stronger notion, put forward by Mischler and Mouhot \cite{M+M}, is that of \emph{infinite-dimensional chaos}, which allows the number of marginals $l$ to vary with $N$:\begin{equation} \label{eq: IDPOC} \max_{1\le l\le N}\left[\frac{1}{l} \mathcal{W}_{1,l}\left(\Pi_l[\mathcal{L}^N], \mu^{\otimes l}\right)\right] \rightarrow 0.\end{equation} Kac proposed the following \emph{propagation of chaos} property. Let $(\mathcal{V}^N_t)_{t\ge 0}$ be a labelled Kac process, such that the initial distribution $\mathcal{L}\mathcal{V}^N_0$ is $\mu_0$-chaotic. Then, for all times $t\ge 0$, the law $\mathcal{LV}^N_t$ will be $\phi_t(\mu_0)$-chaotic, where $\phi_t(\mu_0)$ is the solution to the Boltzmann equation starting at $\mu_0$. This is the original sense in which Kac proposed to study the convergence of his model to the Boltzmann equation, and has been extensively studied; key previous results in this direction will be discussed in our literature review. \subsection{Main Results} We now state the main results of the paper, concerning the long-time nature of the convergence to the Boltzmann flow. Our first theorem controls the deviation from the Boltzmann flow at a single, deterministic time $t\geq 0$, which we refer to as a \emph{pointwise} estimate. Moreover, this estimate is \emph{uniform in time}.
\begin{thm} \label{thrm: PW convergence} Let $0<\epsilon<\frac{1}{d}$ and let $a\ge 1$. For sufficiently large $k$, depending on $\epsilon, d$, let $(\mu^N_t)_{t\geq 0}$ be a Kac process in dimension $d\geq 3$, and let $\mu_0 \in \mathcal{S}^k$, satisfying the moment bounds \begin{equation} \Lambda_k(\mu^N_0) \leq a; \hspace{1cm} \Lambda_k(\mu_0)\le a. \end{equation} Then for some $C=C(\epsilon, d, k)< \infty$ and $\zeta=\zeta(d)>0$, we have the uniform bound \begin{equation} \sup_{t\geq 0} \hspace{0.1cm} \left\|W\left(\mu^N_t, \phi_t\left(\mu_0\right)\right)\right\|_{L^2(\mathbb{P})} \leq C a \hspace{0.1cm} \left( N^{\epsilon-1/d} +W\left(\mu^N_0, \mu_0\right)^\zeta\right).\end{equation} This generalises, by conditioning, to the case where the initial data $\mu^N_0$ is random, provided that $\mathbb{E}\Lambda_k(\mu^N_0)\le a$. \end{thm} This result is, to the best of our knowledge, new, although an equivalent result is known for Maxwell molecules \cite{CF 2018}. We will see, in Theorem \ref{corr: PW convergence as POC}, that estimates of this form imply the propagation of chaos for hard spheres, in the sense of (\ref{eq: POC}-\ref{eq: IDPOC}), with better rates than found in \cite{M+M} for the hard spheres process.
\\ Our second main theorem controls, in $L^p(\mathbb{P})$, the maximum deviation from the Boltzmann flow up to a time $t_\text{fin}$, in analogy with Proposition \ref{thrm: bad convergence theorem}. We refer to this as a \emph{pathwise, local uniform in time} estimate.
\begin{thm} \label{thrm: Main Local Uniform Estimate} Let $0<\epsilon<\frac{1}{2d}$, $a\ge 1$ and $p\geq 2$. For sufficiently large $k\geq 0$, depending on $\epsilon, d$, let $(\mu^N_t)_{t\geq 0}$ be a Kac process on $N\geq 2$ particles and let $\mu_0 \in \mathcal{S}^k$, with initial moments \begin{equation} \Lambda_{kp}(\mu^N_0)\le a^p ;\hspace{1cm}\Lambda_k(\mu_0)\le a. \end{equation} For some $\alpha=\alpha(\epsilon, d, p)>0$ and $C=C(\epsilon, d, p, k)< \infty$ and $\zeta=\zeta(d)>0$, we can estimate, for all $t_\text{fin}\ge 0$, \begin{equation} \left\|\hspace{0.1cm}\sup_{t\leq t_\text{fin}} \hspace{0.1cm} W\left(\mu^N_t, \phi_t(\mu_0)\right) \hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \leq Ca\left( (1+t_\text{fin})^{1/p}\hspace{0.1cm} N^{-\alpha} + W(\mu^N_0, \mu_0)^\zeta\right).\end{equation} The exponent $\alpha$ is given explicitly by \begin{equation} \alpha = \frac{p'}{2d}-\epsilon \end{equation}where $1<p'\le 2$ is the H\"{o}lder conjugate to $p$. \end{thm} At the end of this section, we will discuss related results, and how they may be compared to this estimate.
\\
An unfortunate feature of these approximation theorems is the dependence on the unknown, and potentially large, moment index $k$; a trivial reformulation which avoids this is to ask instead for an exponential moment bound $\langle e^{z|v|},\mu^N_0\rangle \le b$, for some $z>0$. We will also prove the following variant of the theorems above which allows us to use any moment estimate higher than the second. \begin{thm}[Convergence with few moment estimates]\label{thm: low moment regime} Let $k>2$ and $a\ge 1$. Let $(\mu^N_t)$ be an $N$-particle Kac process, and $\mu_0$ in $\mathcal{S}$ with initial moment estimates \begin{equation} \Lambda_k(\mu^N_0) \le a;\hspace{1cm} \Lambda_k(\mu_0) \le a. \end{equation} There exists $\epsilon=\epsilon(d,k)>0$ and a constant $C=C(d,k)$ such that \begin{equation} \label{eq: pw convergence with few moments}\sup_{t\ge 0} \left\|W\left(\mu^N_t, \phi_t(\mu_0)\right)\right\|_{L^1(\mathbb{P})} \le Ca(N^{-\epsilon}+W(\mu^N_0,\mu_0)^\epsilon).\end{equation} For a local uniform estimate, if $p\ge 2$, then there exists a constant $C=C(d,k,p)$ and $\epsilon=\epsilon(d,k,p)>0$ such that, for all $t_\mathrm{fin}<\infty$, \begin{equation} \left\|\sup_{t\le t_\mathrm{fin}}W\left(\mu^N_t, \phi_t(\mu_0)\right)\right\|_{L^1(\mathbb{P})} \le Ca((1+t_\mathrm{fin})^{1/p}N^{-\epsilon}+W(\mu^N_0,\mu_0)^\epsilon).\end{equation} \end{thm} In the course of proving this result, we will see that the higher moment conditions are only required to obtain the optimal rates on a very short time interval $[0, u_N]$ and, in particular, we can obtain very good time-dependence without higher moment estimates.
\\ We also study the long-time behaviour of the Kac Process. We cannot extend Theorem \ref{thrm: Main Local Uniform Estimate} to control the maximum deviations over all times $t\geq 0$, due to the following recurrence features of the Kac process.
\begin{thm}\label{thrm: No Uniform Estimate} There exists a universal constant $C>0$ such that, for every $N$, for every $k> 2$ and $a>1$, there exists a Kac process $(\mu^N_t)_{t\geq 0}$ with initial moment $\Lambda_{k}(\mu^N_0)\le a$ but, almost surely, \begin{equation} \label{eq: conclusion of 1.5} \limsup_{t\rightarrow \infty} \hspace{0.1cm} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \geq 1-\frac{C}{\sqrt{N}}.\end{equation} Hence we cannot omit the factor of $(1+t_\text{fin})^{1/p}$ in Theorem \ref{thrm: Main Local Uniform Estimate}. \end{thm} In keeping with the terminology above, we say that there is no \emph{pathwise, uniform in time} estimate. In the course of proving Theorem \ref{thrm: No Uniform Estimate}, we will show that the long-time deviation (\ref{eq: conclusion of 1.5}) is typical for the Kac process. We will show that the Kac process returns, infinitely often, to `highly ordered' subsets of $\mathcal{S}_N$, which are far from the Boltzmann flow. However, we make the following remark on the times necessary for such deviations to occur. \begin{corollary}\label{corr: variation 3} Define \begin{equation} T_{N, \epsilon}=\inf\left\{t\geq 0: W(\mu^N_t, \phi_t(\mu_0))>\epsilon \right\}.\end{equation} Let $(\mu^N_t)$ be a family of Kac processes with an initial exponential moment bound: $\langle e^{z|v|}, \mu^N_0\rangle \leq b$, for some $z>0$ and $b>0$. Let $\mu_0\in\mathcal{S}$ satisfy $\langle e^{z|v|}, \mu_0\rangle \le b$, and suppose that $W(\mu^N_0, \mu_0)\rightarrow 0$ in probability.
\\ Let $t_{N, \epsilon, \delta}$ be the quantile constants of $T_{N, \epsilon}$ under $\mathbb{P}$; that is, \begin{equation} \mathbb{P}(T_{N, \epsilon} \leq t_{N, \epsilon, \delta})\geq \delta. \end{equation} Then, for fixed $\epsilon, \delta >0$, $t_{N, \epsilon, \delta}\rightarrow \infty$, faster than any power of $N$. \end{corollary} This follows as an immediate consequence of Theorem \ref{thrm: Main Local Uniform Estimate}. Taken together with Theorem \ref{thrm: No Uniform Estimate}, we see that macroscopic deviations occur, but typically at times growing faster than any power of $N$.
In the course of proving Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}, we will establish the following continuity estimate for the Boltzmann flow $\phi_t$ measured in the Wasserstein distance $W$, which may be of independent interest. \begin{thm} \label{thrm: W-W continuity of phit} There exist constants $k, C, w$ depending only on $d$ such that, whenever $a\ge 1$ and $\mu, \nu \in \mathcal{S}^k_a$, we have the estimate \begin{equation} W\left(\phi_t(\mu), \phi_t(\nu)\right) \le Ce^{wt}a W(\mu, \nu). \end{equation} Moreover, for all $k>2$, there exist constants $C=C(k,d)$ and $\zeta=\zeta(k,d)>0$ such that, whenever $\mu, \nu \in \mathcal{S}^k_a$, we have the estimate \begin{equation} \label{eq: good continuity estimate} \sup_{t\ge 0} \hspace{0.1cm}W\left(\phi_t(\mu),\phi_t(\nu)\right) \le C a W(\mu, \nu)^\zeta.\end{equation} \end{thm} In the second part of the theorem, and in Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} above, the exponent $\zeta$ can be taken to be $\lambda_0/({\lambda_0+2w})$ by making $k$ large enough, where $w$ is as in the first part of the theorem, and $\lambda_0=\lambda_0(d)>0$ is the spectral gap of the linearised Boltzmann operator. While it may be possible to obtain better continuity results, with $\zeta$ close to 1, we will not explore this here.
\\ Due to a result of Sznitman \cite{Sznitman Chaos}, the property of chaoticity is equivalent to convergence of the empirical measures in expected Wasserstein distance $W$. Therefore, as mentioned before, the theorems displayed above are closely related to the propagation of chaos for the hard-spheres Kac process, proven in \cite{M+M}. We now give a chaoticity result which may be derived from the previous theorems. \begin{thm}[Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime} as a Chaos Estimate] \label{corr: PW convergence as POC} We can view Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime} as \emph{Propagation of Chaos} and \emph{Conditional Propagation of Chaos}, as follows.
\\ We denote by $\mathcal{P}_t^N(\mathcal{V}^N, \cdot)$ the transition probabilities of the $N$-particle labelled Kac process, started at $\mathcal{V}^N\in\mathbb{S}^N$. We form the symmetrised version, which we denote $\mathcal{P}^N_t(\mu^N, \cdot)$, by \begin{equation} \mathcal{P}^N_t(\mu^N, A)=\frac{1}{\#\theta_N^{-1}(\mu^N)}\hspace{0.1cm}\sum_{\mathcal{V}^N \in \theta_N^{-1}(\mu^N)}\mathcal{P}^N_t(\mathcal{V}^N, A). \end{equation} Let $k>2$ and $a\ge 1$, and suppose $\mu^N_0 \in \mathcal{S}_N$ satisfies a moment bound $\Lambda_k(\mu^N_0)\le a$. Then we can estimate \begin{equation} \label{eq: CPOC} \sup_{t\ge 0}\hspace{0.1cm}\max_{1\le l\le N} \hspace{0.1cm} \frac{\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{P}^N_t(\mu^N_0, \cdot)], \phi_t(\mu^N_0)^{\otimes l}\right)}{l} \le C\hspace{0.1cm}a \hspace{0.1cm} N^{-\beta} \end{equation} for some constants $C=C(d,k)<\infty$; $\beta=\beta(d,k)>0$. This has the following consequences:
\begin{enumerate}[label=\roman*).]
\item \emph{(Chaotic case)} Let $k, a$ be as above, and suppose $\mu_0\in \mathcal{S}$ satisfies $\Lambda_k(\mu_0)\le a.$
\\ Construct initial data $\mathcal{V}^N_0=(v_1(0),...,v_N(0))$ as follows. Let $u_1, ..., u_N$ be an independent and identically distributed sample from $\mu_0$. Define \begin{equation} \overline{u}_N=\frac{1}{N}\sum_{i=1}^N u_i; \hspace{1cm}s_N=\frac{1}{N}\sum_{i=1}^N|u_i-\overline{u}_N|^2\end{equation} and set \begin{equation} v_i(0)=s_N^{-1/2}(u_i-\overline{u}_N),\hspace{0.3cm} i=1,2,...,N;\hspace{0.6cm} \mathcal{V}^N_0=(v_1(0),...,v_N(0)). \end{equation} Let $\mathcal{V}^N_t$ be a labelled Kac process starting from $\mathcal{V}^N_0$. Then there exist constants $C=C(d,k)<\infty$; $\beta=\beta(d,k)>0$ such that \begin{equation} \sup_{t\ge 0}\hspace{0.1cm}\max_{1\le l\le N} \hspace{0.1cm} \frac{\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{LV}^N_t], \phi_t(\mu_0)^{\otimes l}\right)}{l} \le C\hspace{0.1cm}N^{-\beta}. \end{equation}
\item \emph{(General Case)} Let $a, k$ be as above, and suppose that $(\mathcal{V}^N_t)_{t\ge 0}$ are labelled Kac processes such that the empirical measures $\mu^N_0$ satisfy \begin{equation} \mathbb{E}\Lambda_k(\mu^N_0) \le a.\end{equation} Then we have the estimate \begin{equation} \sup_{t\ge 0}\hspace{0.1cm}\max_{1\le l\le N} \hspace{0.1cm} \frac{\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{LV}^N_t], \mathcal{L}^l_t\right)}{l} \le C\hspace{0.1cm}a\hspace{0.1cm}N^{-\beta} \end{equation} for $C$ and $\beta$ as in the main statement, and where $\mathcal{L}^l_t$ is the probability measure given by \begin{equation} \label{eq: defn of ll} \mathcal{L}^l_t=\mathbb{E}\left[\phi_t(\mu^N_0)^{\otimes l}\right]. \end{equation} \end{enumerate} \end{thm}
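The normalisation used to construct chaotic initial data in point (i.) can be checked directly: centring an i.i.d. sample and rescaling by $s_N^{-1/2}$ places the configuration exactly on the labelled Boltzmann sphere $\mathbb{S}^N$, i.e. $\sum_i v_i(0)=0$ and $\sum_i |v_i(0)|^2=N$. A small illustrative Python sketch (ours, not from the paper):

```python
import random

def chaotic_initial_data(sample):
    """Centre and rescale an i.i.d. sample u_1, ..., u_N so that the
    configuration has zero total momentum and total energy N."""
    N, d = len(sample), len(sample[0])
    u_bar = [sum(u[c] for u in sample) / N for c in range(d)]
    s_N = sum(sum((u[c] - u_bar[c]) ** 2 for c in range(d))
              for u in sample) / N
    scale = s_N ** -0.5
    return [tuple(scale * (u[c] - u_bar[c]) for c in range(d))
            for u in sample]

random.seed(1)
N, d = 500, 3
us = [tuple(random.gauss(0.0, 1.0) for _ in range(d)) for _ in range(N)]
vs = chaotic_initial_data(us)
# the configuration lies on the labelled Boltzmann sphere S^N
assert all(abs(sum(v[c] for v in vs)) < 1e-9 for c in range(d))
```

Both constraints hold exactly (up to rounding), whatever the sampled values, which is why this construction may be thought of as `as close to perfect independence as possible'.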
\begin{remark} \begin{enumerate}[label=\roman*).]
\item Roughly, (\ref{eq: CPOC}) says that, conditional on the observation of the empirical data $\mu^N_0$ at time $0$, the law $\mathcal{L}\mathcal{V}^N_t$ is quantitatively $\phi_t(\mu^N_0)$-chaotic. This may be viewed as propagation of chaos, with the heuristic that `conditional on $\mu^N_0$, $\mathcal{V}^N_0$ is $\mu^N_0$-chaotic'. We term this \emph{conditional propagation of chaos}. In this spirit, we may view the main estimate (\ref{eq: CPOC}) and point (ii.) as a \emph{quenched} and \emph{annealed} pair.
\item The polynomial result obtained here improves on the previously known result \cite[Theorem 6.2]{M+M} for the hard spheres chaos. This improvement is due to the continuity estimate (\ref{eq: good continuity estimate}), which improves on the corresponding estimate in \cite[Equations 6.39, 6.42]{M+M}; we could derive the chaoticity estimate (\ref{eq: CPOC}) by using the estimate (\ref{eq: good continuity estimate}) in the arguments of \cite[Section 6]{M+M}, at the cost of potentially requiring a stronger initial moment control. We will recall the relevant arguments for completeness, and this will be discussed in the literature review.
\item This construction of chaotic initial data in point (i.) is due to \cite[Proposition 9.2]{ACE}, which may be thought of as `as close to perfect independence as possible'. \item We will show that the main point can be deduced from Theorem \ref{thrm: PW convergence} or \ref{thm: low moment regime}. However, we will see in Section \ref{sec: proof of POC} that deriving either of these from this result appears to be no less technical than the main proof presented in Section \ref{sec: proof of pw}. \end{enumerate} \end{remark} In our arguments, we will frequently encounter numerical constants which are ultimately absorbed into the constants $C$ whose dependence is specified in the relevant theorem. To ease notation, we will denote inequality, up to such a constant, by $\lesssim$.
\subsection{Plan of the paper} Our programme will be as follows: \begin{enumerate} [label=\text{\roman*.}] \item In the remainder of this section, we will present a review of known results in the study of the Kac process and similar models. We will then discuss several aspects of our results, and how they may be interpreted.
\item For later convenience, we discuss some classical moment estimates for the Kac process and the Boltzmann equation. These allow us to stochastically control the weights $\Lambda_k$ in appropriate $L^p$ spaces.
\item We cite the analytical \emph{regularity and stability estimates} from Mischler and Mouhot, \cite{M+M}. The stability estimates, in particular, are crucial to obtaining the good time-dependence in Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}. \item As a first application of the stability estimates, we analyse the continuity of the Boltzmann flow $\phi_t$ on subsets $\mathcal{S}^k_a$, with respect to the metric $W$, and uniformly in time. This is the content of Theorem \ref{thrm: W-W continuity of phit}, and allows us to reduce Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} to the special case $\mu_0=\mu^N_0$. \item We use ideas of infinite-dimensional differential calculus, developed by \cite{M+M}, to prove an \emph{interpolation decomposition} of the difference $\mu^N_t - \phi_t(\mu^N_0)$. This is the key identity used for the proofs of Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}, as all of the terms appearing in our formula can be controlled by the stability estimates. \item We then turn to the proof of Theorem \ref{thrm: PW convergence}. The main technical aspect is the control of a family of martingales $(M^{N,f}_t)_{f\in \mathcal{A}}$, uniformly in $f$. This is obtained using a quantitative compactness argument similar to that in \cite{ACE}. \item For a local uniform analysis, we first adapt the ideas of Theorem \ref{thrm: PW convergence} to a local uniform setting to state a local uniform martingale estimate, and deduce a preliminary, weak version of Theorem \ref{thrm: Main Local Uniform Estimate} with worse dependence in $t_\text{fin}$. We then use the stability estimates to `bootstrap' to the improved estimate Theorem \ref{thrm: Main Local Uniform Estimate}, and finally return to prove the local martingale estimate.
\item We next prove Theorem \ref{thm: low moment regime}. The strategy here is to use a localised form of the main argument from \cite{ACE} to control behaviour on a very short time interval $[0, u_N]$, and use the previous results, together with the \emph{moment production} property recalled in Section \ref{sec:moment estimates}, to control behaviour at times larger than $u_N$. \item We prove Theorem \ref{thrm: No Uniform Estimate}, based on relaxation to equilibrium. \item Finally, we prove the chaoticity result Theorem \ref{corr: PW convergence as POC}. This proof follows a similar pattern to the proof in \cite{M+M}, using our estimates. \end{enumerate}
\subsection{Literature Review} We will now briefly discuss related works, to which our results may be compared.
\paragraph{1. Probabilistic Techniques for the Kac Process and Boltzmann Equation} The probabilistic, \emph{pathwise} approach to the Kac process was pioneered by Tanaka \cite{Tanaka 78,Tanaka 02}, who constructed a Markov process describing the velocity of a `typical' particle in the Kac process with Maxwell molecules, and whose law at time $t$ is the solution to the associated Boltzmann equation. This was generalised by Fournier and M\'el\'eard \cite{FM} to include the cases without cutoff, and for non-Maxwellian molecules. A similar idea was used by Rousset \cite{Rousset} to prove convergence to equilibrium as $t\rightarrow \infty$.
\\ Our main convergence results may be compared to the motivating work of Norris \cite{ACE}, of which the main result is recalled in Proposition \ref{thrm: bad convergence theorem} above. Theorem \ref{thrm: Main Local Uniform Estimate} improves on Proposition \ref{thrm: bad convergence theorem} in two notable ways. Firstly, we have much better asymptotic behaviour in the time-horizon $t_\text{fin}$, which was the original motivation for our work. Secondly, we control the deviation in the stronger sense of $L^p$, rather than in probability; this arises as a result of using moment estimates within the framework of a `growth control', rather than excluding events of small probability where the moments are large. We also remark that the analysis of the martingale term in Sections \ref{sec: proof of pw}, \ref{sec: proof of LU} is simplified from the equivalent analysis in \cite[Theorem 1.1]{ACE} by our `interpolation decomposition', Formula \ref{form:newdecomposition}, which removes anticipating behaviour. \paragraph{2. Propagation of Chaos for the Kac Process} The problem of propagation of chaos for the Kac Process and Boltzmann equation has been extensively studied. The earliest results in this direction are due to McKean \cite{McKean 67}, Gr\"unbaum \cite{Gruenbaum} and Sznitman \cite{Sznitman BE}, and prove the qualitative statement (\ref{eq: POC}) for the cases of the hard spheres kernel considered here, or for the related case of Maxwell molecules. Recent work has produced quantitative estimates: Mischler and Mouhot \cite{M+M} showed propagation of infinite-dimensional chaos (\ref{eq: IDPOC}) for both hard spheres and Maxwell molecules. The estimates are uniform in time, with a quantitative estimate going as $(\log N)^{-r}$ for the hard spheres case.
As remarked above, our estimates (Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime}, \ref{corr: PW convergence as POC}) improve this rate; the gain comes from the improvement of Theorem \ref{thrm: W-W continuity of phit} over the corresponding estimate in \cite{M+M}, and will be discussed further below. More recently, \cite{CF 2018} proved a chaoticity estimate for Maxwell molecules in $d=3$, measured in the $L^2(\mathbb{P})$ norm of Wasserstein$_2$ distance (\ref{eq: definition of Wp}), and with an almost optimal rate $N^{\epsilon-1/3}$, which is almost completely analogous to Theorem \ref{thrm: PW convergence}.
\paragraph{3. Propagation of Chaos for Related Models} We also mention other models in kinetic theory for which chaoticity has been studied. Malrieu \cite{Malrieu} studied a McKean-Vlasov model related to granular media equations, and deduced chaoticity for a related system. The main estimate here is a uniform in time estimate, similar in nature to Theorem \ref{thrm: PW convergence}. Similarly, Bolley, Guillin and Malrieu \cite{BGM} have proven propagation of chaos for a particle system associated to a Vlasov-Fokker-Planck equation, through a pointwise convergence result. Most recently, Durmus et al. \cite{Durmus} have proved a uniform in time chaoticity estimate based on a coupling approach, for the case with a confinement potential. These models are amenable to the general framework of \cite{M+M}, and propagation of chaos for them has been proven using the same techniques in a companion paper \cite{M+MCompanion}.
\\We may also compare Theorem \ref{thrm: Main Local Uniform Estimate} to a result of Bolley, Guillin and Villani \cite[Theorem 2.9]{BGV}, which proves exponential concentration of the maximum $\sup_{t\le t_\text{fin}} W(\mu^N_t, \phi_t(\mu))$ about $0$, for McKean-Vlasov dynamics. This improves upon the rates $\mathcal{O}(N^{-\infty})$ which would be obtained using Theorem \ref{thrm: Main Local Uniform Estimate}, but does not produce an explicit $L^p(\mathbb{P})$ bound. More recently, Holding \cite{Holding} proved a result similar to Theorem \ref{thrm: Main Local Uniform Estimate} for McKean-Vlasov systems interacting through a H\"older continuous force, in order to deduce propagation of chaos. However, neither of these results tracks the dependence on the terminal time $t_\text{fin}$, and so may have much weaker time dependence than our result. To the best of our knowledge, no local uniform estimate for the McKean-Vlasov system exists which seeks to optimise time dependence in the spirit of Theorem \ref{thrm: Main Local Uniform Estimate}; the applicability of our methods to this system will be considered in the discussion section below.
\\ The notion of chaoticity has also been studied in more abstract settings. Sznitman \cite{Sznitman Chaos} has studied equivalent conditions for a family of measures to be chaotic, and Gottlieb \cite{Gottlieb} has produced a necessary and sufficient condition for families of Markov chains to propagate chaos.
\paragraph{4. Relaxation to Equilibrium of the Kac Process} Kac \cite{FKT} proposed to relate the asymptotic behaviour of the Boltzmann flow $\phi_t(\mu_0)$ to the asymptotic relaxation to equilibrium of the particle system, and conjectured the existence of a spectral gap for the master equation. This has been extensively studied, and Kac's conjecture on the spectral gap was answered positively \cite{Carlen 00,Janvresse,Carlen 03,Malsen}. However, this is not an entirely satisfactory answer to Kac's question on convergence to equilibrium; for chaotic initial data, this still requires times of order $\mathcal{O}(N)$ to show relaxation to equilibrium. Carlen et al. also considered in a later paper \cite{Carlen 08} the more intricate notion of convergence \emph{in relative entropy}, which somewhat avoids this problem. Mischler and Mouhot \cite{M+M} answered Kac's question, proving relaxation to equilibrium in Wasserstein distance, uniformly in $N$, for the cases of hard spheres and Maxwell molecules.
\\ We remark that our philosophy is similar to Kac's proposal. Rather than investigating the long-time behaviour of the \emph{law} $\mathcal{LV}^N_t$ of the Kac process, our results use the asymptotics of the Boltzmann equation to partially understand the asymptotics of \emph{realisations} of the Kac process. Moreover, Theorem \ref{thrm: No Uniform Estimate} shows that this cannot be extended to completely understand the full, long-time asymptotics in this sense.
\subsection{Discussion of Our Results}\label{sec: discussion of our results} In this subsection, we will discuss the interpretation of our results, especially in view of the framework of chaoticity set out above.
\paragraph{1. Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} as a pathwise interpretation of the Boltzmann Equation} The main philosophy of our approach follows \cite{ACE}, in considering the Kac process as a Markov chain, and adapting techniques \cite{D&N,PDF} from the general scaling limits of Markov processes.
\\ It is instructive to compare this to the case of a particle system evolving under Vlasov dynamics. In this case, we write $\mu^{N,\text{Vl}}_t$ for the $N$-particle empirical measure, evolving under (nonrandom) Hamiltonian dynamics; Dobrushin \cite{Dobrushin} showed that $\mu^{N, \text{Vl}}_t$ is a weak measure solution to the associated mean field PDE, the Vlasov equation. For the case of Kac dynamics, we may interpret Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} as saying that \begin{equation} \forall t\ge 0\hspace{1cm} \mu^N_t=\phi_t(\mu^N_0)+\mathcal{N}^N_t\end{equation} where $\mathcal{N}^N_t$ is a stochastic noise term, which is small in an appropriate sense. This is a general phenomenon in the `fluid limit' scaling of Markov processes \cite{D&N,PDF,ACE}. In this sense, we may interpret the Boltzmann equation in a \emph{pathwise} sense; we stress that this interpretation of the Boltzmann equation does \emph{not} require any chaoticity assumptions on the initial data.
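As a toy illustration of this pathwise picture, the following sketch simulates Kac's one-dimensional caricature of the process (a simplification for illustration only, not the hard spheres kernel studied in this paper), in which a collision rotates a random pair of velocities by a uniform angle. Along any realisation, the empirical energy is exactly conserved, while the empirical fourth moment relaxes towards its Gaussian equilibrium value $3$ up to a stochastic fluctuation, the toy analogue of the noise term $\mathcal{N}^N_t$.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500
# Start far from equilibrium: all speeds equal, unit energy per particle.
v = rng.choice([-1.0, 1.0], size=N)

def kac_step(v, rng):
    """One collision of Kac's caricature: rotate a random pair of
    velocities by a uniform angle, conserving sum(v**2) exactly."""
    i, j = rng.choice(len(v), size=2, replace=False)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    vi, vj = v[i], v[j]
    v[i] = vi * np.cos(theta) + vj * np.sin(theta)
    v[j] = -vi * np.sin(theta) + vj * np.cos(theta)
    return v

e0 = np.sum(v ** 2)
for _ in range(50 * N):
    v = kac_step(v, rng)

energy_error = abs(np.sum(v ** 2) - e0)   # pathwise conserved quantity
m4 = np.mean(v ** 4)                      # relaxes from 1 towards the Gaussian value 3
print(energy_error, m4)
```

No chaoticity of the initial configuration is used anywhere in this simulation, mirroring the remark above.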
\paragraph{2. Theorem \ref{thrm: PW convergence} as Propagation of Chaos} It is natural, and instructive, to compare our chaoticity result Theorem \ref{corr: PW convergence as POC} and our techniques to those of \cite{M+M}, on whose work we build.
\\ In Theorem \ref{corr: PW convergence as POC}, we have improved the rate of chaoticity, from $(\log N)^{-r}$ to a polynomial estimate $N^{-\alpha}$. In proving this result, we will compare our estimates to the estimates of the three error terms $\mathcal{T}_1$, $\mathcal{T}_2$, $\mathcal{T}_3$ in the abstract result \cite[Theorem 3.1]{M+M}:\begin{enumerate}[label=\roman{*}).] \item The first term $\mathcal{T}_1$ is a purely combinatorial term which may be controlled by general, elementary arguments. \item The second error term $\mathcal{T}_2$ may be controlled by $\mathbb{E}W(\phi_t(\mu^N_0), \mu^N_t)$, which is a special case of Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime} with $\mu_0=\mu^N_0$. \item The third error $\mathcal{T}_3$ depends on the continuity of the Boltzmann flow $\phi_t$ in Wasserstein distance, which is controlled by the H\"older estimates Theorem \ref{thrm: W-W continuity of phit}. \end{enumerate} As mentioned above, the improvement over \cite[Theorem 6.2]{M+M} is due to the improved control on $\mathcal{T}_3$, using the estimate (\ref{eq: good continuity estimate}). The controls on $\mathcal{T}_1, \mathcal{T}_2$ are similar to those in \cite{M+M}, and the claimed result (\ref{eq: CPOC}) follows by using our estimates (\ref{eq: pointwise bound on martingale term}, \ref{eq: good continuity estimate}) in the arguments of \cite[Section 6]{M+M}. In order to give a self-contained proof, we will recall the relevant arguments in Section \ref{sec: proof of POC}.
\\ We also remark that we use each of the assumptions (\textbf{A1}-\textbf{5}) from \cite{M+M} in our analysis: \begin{enumerate}[label=\roman{*}).] \item Assumption (\textbf{A1}) corresponds to the moment bounds, which follow from the discussion of moment bounds in Proposition \ref{thrm:momentinequalities}. \item Assumptions (\textbf{A2}i) and (\textbf{A5}) concern the continuity of the Boltzmann flow $\phi_t$, which is addressed in Theorem \ref{thrm: W-W continuity of phit}. Assumption (\textbf{A2}ii) concerns the continuity of the collision operator $Q$, which is discussed in Section \ref{sec: Regularity and Stability Estimates}. \item Assumption (\textbf{A3}) is the convergence of the generators. A special case of this is the content of Lemma \ref{lemma:DAP}, which is used to prove our `interpolation decomposition', Formula \ref{form:newdecomposition}. \item Assumption (\textbf{A4}) is the differential stability of the Boltzmann flow $\phi_t$, recalled in Proposition \ref{thrm: stability for BE}, which is crucial to obtaining estimates with good long-time properties.\end{enumerate} We will also see that, in order to recover Theorem \ref{thrm: PW convergence} from either of the chaoticity results (Theorem \ref{corr: PW convergence as POC} or \cite[Theorem 6.2]{M+M}), we would need to move a supremum over test functions $f$ \emph{inside an expectation}, which corresponds to one of the most technical steps in our proof (Lemmas \ref{thrm: pointwise martingale control}, \ref{thrm: local uniform martingale control}). Moreover, this technique cannot generalise to produce a pathwise, local uniform convergence result analogous to Theorem \ref{thrm: Main Local Uniform Estimate} or Proposition \ref{thrm: bad convergence theorem}.
\paragraph{3. Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} without chaoticity} We also remark that neither of the approximation results Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} requires special preparation of the initial data, beyond a moment estimate; in particular, both are valid even if the initial data $\mathcal{V}^N_0$ are not chaotic. We now give an explicit example of initial data for which this chaoticity property fails. \begin{example}[Non-chaotic initial data]\label{ex: nonchaotic initial data} Assume that $N$ is a multiple of $2^d$. Choose $\Sigma \in S^{d-1}$ uniformly at random, and let $P_1, P_2,...,P_{2^d}$ be the $2^d$ points obtained from $\Sigma$ by all reflections in coordinate axes. Let $\mathcal{V}^N_0$ be obtained by giving $\frac{N}{2^d}$ particles velocity $P_i$, for each $i=1,...,2^d$, such that the resulting law $\mathcal{LV}^N_0$ is symmetric. Then each marginal distribution is the uniform distribution $\text{Uniform}(S^{d-1}) \in \mathcal{S}$, but there exists a constant $\delta>0$, uniform in $N$, such that \begin{equation} W\left(\mu^N_0, \text{Uniform}(S^{d-1})\right) \ge \delta >0\end{equation} almost surely, where $\mu^N_0$ is the empirical measure of $\mathcal{V}^N_0$. In particular, by Sznitman's characterisation, $\mathcal{V}^N_0$ is not $\text{Uniform}(S^{d-1})$-chaotic. \end{example} In cases such as this, we may still understand the Boltzmann equation as `nearly' holding pathwise, in the sense of point 1. Alternatively, we may view the result Theorem \ref{thrm: PW convergence}, and its consequence in Theorem \ref{corr: PW convergence as POC}, as a chaoticity estimate for $\mathcal{V}^N_t$ about $\phi_t(\mu^N_0)$, \emph{conditional on the initial measure $\mu^N_0$}.
\paragraph{4. Theorem \ref{thrm: No Uniform Estimate} in view of the $H$-Theorem} As commented after the statement of Theorem \ref{thrm: No Uniform Estimate}, the key idea of the proof of Theorem \ref{thrm: No Uniform Estimate} is that the Kac process $\mu^N_t$ will, infinitely often, return to `highly ordered' subsets of the state space $\mathcal{S}_N$. However, this appears to contradict a na\"ive statement of Boltzmann's celebrated $H$-Theorem \cite{Boltzmann H Thrm}, that \emph{``entropy increases"}. Indeed, this is highly reminiscent of Zermelo's objection, based on Poincar\'e recurrence of deterministic dynamical systems \cite{Zermelo H Thrm}.
\\ However, our results are compatible with the $H$-Theorem, which is rigorously established in \cite{M+M}. This apparent paradox arises because the $H$-functional, representing the negative of entropy, is a \emph{statistical}, and not \emph{pathwise}, concept; that is, $H_t$ depends on the data $\mathcal{V}^N_t$ through the law $\mathcal{LV}^N_t$, rather than being a random variable depending directly on a particular observation $\mathcal{V}^N_t(\omega)$. In particular, for our case, the time $T_N$ of reaching the `ordered state' is a large, random time, and observing a particular realisation $T_N(\omega)=t$ tells us very little about the general behaviour $\mathcal{LV}^N_t$, and so about the entropy at time $t$. \paragraph{5. Sharpness of our Results} We now discuss how sharp the main results (Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}) are, with regard to the dependence on $N$, and on the terminal time $t_\text{fin}$ in the case of Theorem \ref{thrm: Main Local Uniform Estimate}. \\
\subparagraph{5a. $N$-dependence} It is instructive to first consider the `optimal' case of independent particles, for which the empirical measure converges in Wasserstein distance at rate $N^{-1/d}$. More precisely, for $d\ge 3$, let $\mu \in \mathcal{S}^k_a$ for $k\ge \frac{3d}{d-1}$, and let $\mu^N$ be an empirical measure for $N$ independent draws from $\mu$. Then, for some $C=C(a, k, d)$, we have\begin{equation} \left\|W(\mu^N, \mu)\right\|_{L^2(\mathbb{P})} \le C N^{-1/d}. \end{equation} This is shown in \cite[Proposition 9.3]{ACE}. Moreover, this rate is optimal: if $\mu$ is absolutely continuous with respect to the underlying Lebesgue measure, then the optimal approximation in the $W$ metric is of the order $N^{-1/d}$, for $d\ge 3$. Results of Talagrand (\cite{T1,T2}, and discussion in \cite{Wasserstein}) suggest that this may also be true for higher $L^p$ norms, at least for the simple case of the uniform distribution on $(-1, 1]^d$.
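The decay of the empirical approximation error in $N$ can also be observed numerically. The following rough sketch (an illustration only; we use the uniform distribution on $[0,1]^3$ rather than a measure in $\mathcal{S}^k_a$, and sample sizes chosen arbitrarily) computes the Wasserstein$_1$ distance between two independent $N$-point empirical measures via an optimal matching, and exhibits the decay as $N$ grows.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(42)

def w1_empirical(X, Y):
    """Wasserstein-1 distance between two uniform empirical measures with
    the same number of atoms, computed as an optimal matching."""
    cost = cdist(X, Y)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

d = 3
dists = {}
for N in (64, 512):
    # Two independent N-point samples from Uniform([0,1]^3); their distance
    # decays like N^{-1/d}, the optimal empirical approximation rate for d >= 3.
    X, Y = rng.random((N, d)), rng.random((N, d))
    dists[N] = w1_empirical(X, Y)
print(dists)
```

For equal-size uniform empirical measures the Kantorovich problem is solved by a permutation, which is why an assignment solver suffices here.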
\\ In view of this, we see that the exponent for the pointwise bound is \emph{almost sharp}, in the sense that we obtain exponents $\epsilon-\frac{1}{d}$ which are arbitrarily close to the optimal exponent $-\frac{1}{d}$, but cannot obtain the optimal exponent itself. This appears to be a consequence of using a particular estimate (\ref{eq: stability for BE 1}) from \cite{M+M}, which is `almost Lipschitz' in a similar sense. For the local uniform estimate Theorem \ref{thrm: Main Local Uniform Estimate}, we obtain exponent $-\alpha$, where $\alpha$ is given by \begin{equation} \alpha=-\epsilon+\frac{p'}{2d}; \hspace{1cm} \frac{1}{p}+\frac{1}{p'}=1.\end{equation} In the special case $p=2$, this produces the almost sharp exponent as discussed above. However, for $p>2$, the exponents are bounded away from $-\frac{1}{d}$, and so do not appear to be sharp. \\ \subparagraph{5b. Time Dependence} In light of Theorem \ref{thrm: No Uniform Estimate}, we see that we cannot exclude the factor $(1+t_\text{fin})^{1/p}$ in Theorem \ref{thrm: Main Local Uniform Estimate}. Hence, this time dependence is sharp \emph{among power laws}. However, we do not know what the \emph{true} sharpest time-dependence is. Similar techniques to those of Graversen and Peskir \cite{GP} may be able to provide a sharper bound; we do not explore this here.
\\ We remark that Theorem \ref{thrm: Main Local Uniform Estimate} interpolates between almost optimal $N$ dependence at $p=2$, and almost optimal $t_\text{fin}$ dependence as $p\rightarrow \infty$. Moreover, by taking $p\rightarrow \infty$, we sacrifice optimal dependence in $N$, but the exponent $\alpha(d,p)$ is bounded away from $0$, and so we have good convergence on any polynomial time scale. This is the content of Corollary \ref{corr: variation 3}. \paragraph{6. Further Applicability of our Methods in Kinetic Theory} Finally, we mention other models in kinetic theory which may be amenable to our techniques. \begin{enumerate}[label=\alph{*}).] \item \emph{Sharp $N$ dependence for hard spheres.} We believe that our techniques could be modified to prove an estimate for Theorem \ref{thrm: PW convergence}, and Theorem \ref{thrm: Main Local Uniform Estimate} in the case $p=2$, in order to obtain the optimal rate $N^{-1/d}$ discussed above; however, this would likely come at the cost of poor dependence in time. Since a similar result (Proposition \ref{thrm: bad convergence theorem}) is already known, and since this is not in the spirit of this work, which seeks to optimise time dependence, we will not consider this further. \\ \item \emph{The Kac process on Maxwell Molecules.} In addition to the hard spheres case analysed here, the main collision kernel of physical interest is the case of \emph{Maxwell molecules}, with or without cutoff. Many of the estimates used in our argument for the hard spheres kernel have an analogous version for Maxwell molecules, including the stability estimates proven in \cite{M+M}. 
For this case, a result similar to Theorem \ref{thrm: PW convergence} is already known \cite[Theorem 2]{CF 2018}.\\ \item \emph{McKean-Vlasov Dynamics, and Inelastic Collisions.} Other kinetic systems which may be analysed in the framework of \cite{M+M} include cases of \emph{McKean-Vlasov} dynamics, and \emph{inelastic collisions, coupled to a heat bath}, which have been studied in the functional framework of \cite{M+M} by Mischler, Mouhot and Wennberg in a companion paper \cite{M+MCompanion}. In these cases, the analogous estimates for stability and differentiability, computed in \cite{M+MCompanion}, have potentially poor dependence in time. As a result, our methods would still apply, but with correspondingly poor time dependence.
\\ For the case of McKean-Vlasov dynamics without confinement potential, this is a fundamental limitation; Malrieu \cite{Malrieu} showed that the propagation of chaos is \emph{not} uniform in time. Instead, he proposed to study a \emph{projected} particle system, which satisfies uniform propagation of chaos, and whose limiting flow has exponential convergence to equilibrium \cite[Theorem 6.2]{Malrieu}. This suggests that it may be possible to use the bootstrap method, used in the proof of Theorem \ref{thrm: Main Local Uniform Estimate}, to obtain pathwise estimates with good long-time properties, analogous to Theorem \ref{thrm: Main Local Uniform Estimate}.
\\ We remark that, in the case of McKean-Vlasov dynamics, the presence of Brownian noise may complicate the derivation of the interpolation decomposition (Formula \ref{form:newdecomposition}), which is the key identity required for our argument. \end{enumerate}
\section{Moment estimates}\label{sec:moment estimates}
In order to deal with the appearance of the moment-based weights $\Lambda_k$ in future calculations, we discuss the moment structure of Kac's Process and the Boltzmann Equation. That is, we seek bounds on $\Lambda_k(\mu_t)$ where $\mu_t$ is, correspondingly, either a Kac process, or a solution to the Boltzmann equation.
\\ The results presented here are mostly classical, and the arguments are well-known for the Boltzmann equation. Central to the proof is an inequality due to Povzner \cite{Povzner}, from which Elmroth \cite{Elmroth} deduced global moment bounds for the (function-valued) Boltzmann equation in terms of the moments of the initial data. This conclusion was strengthened to moment \emph{production} by Desvillettes \cite{Desvilettes} provided control of an initial moment $\Lambda_s(\mu_0)$ for any $s>2$\iffalse, which is used to deduce Corollary \ref{thrm: variation 1}\fi. Wennberg \cite{Wennberg,Wennberg Mischler} demonstrated an optimal version of this result, only requiring finite initial energy $\langle |v|^2, \mu_0\rangle$. Bobylev \cite{Bobylev} proved propagation of exponential moments, which may also be applied here as a simplification. These results have been proven for measure-valued solutions of the Boltzmann equation by Lu and Mouhot \cite{L&M}, and the techniques have been applied to the Kac process by Mischler and Mouhot \cite{M+M} and Norris \cite{ACE}. We collect below the precise results which we will use.
\begin{proposition} [Moment Inequalities for the Kac Process and Boltzmann Equation] \label{thrm:momentinequalities} We have the following moment bounds for polynomial velocity moments: \\ \begin{enumerate}[label={(\roman*.)},ref={2.\roman*.}] \item \label{lemma:momentboundpt1} Let $(\mu^N_t)_{t\geq 0}$ be a Kac process on $N\geq 1$ particles, and let $q>2$, $p\ge 2$ with $q\ge p$. Then there exists a constant $C(p,q)<\infty$ such that, for all $t\ge 0$, \begin{equation} \label{eq: pointwise moment bound} \mathbb{E}\left[ \Lambda_q(\mu^N_t) \right] \leq C(1+t^{p-q})\Lambda_p(\mu^N_0)\end{equation} and, for another constant $C=C(q)$, \begin{equation} \label{eq: local uniform moment bound} \mathbb{E}\left(\sup_{0\leq t \leq t_\text{fin}} \Lambda_q(\mu^N_t)\right) \leq (1+C(q)t_\text{fin})\Lambda_q(\mu^N_0).\end{equation}
\item \label{lemma:momentboundpt2} Let $p, q$ be as above, and let $\mu_0\in \cup_{k>2}\mathcal{S}^k$. Then there exists a constant $C=C(p,q)$ such that the solution $\phi_t(\mu_0)$ to (\ref{BE}) satisfies \begin{equation}\label{eq: BE moment bound} \Lambda_q(\phi_t(\mu_0)) \le C(1+t^{p-q})\Lambda_p(\mu_0).\end{equation} \item There exist constants $C_1, C_2 < \infty$ such that, whenever $\mu_0 \in \cup_{k>2} \mathcal{S}^k$, we have, for all $t\ge 0$, the bound \begin{equation} \int_0^t \Lambda_3(\phi_s(\mu_0))ds \le C_1 t+C_2\langle\hspace{0.05cm} (1+|v|^2) \hspace{0.05cm}\log(1+|v|^2),\mu_0\rangle.\end{equation} As a consequence, if $c\ge0$, then there exist $w<\infty, k<\infty$ such that, for all $t\ge 0$, \begin{equation} \exp\left(c\int_0^t \Lambda_3(\phi_s(\mu_0))ds\right)\le e^{wt}\Lambda_k(\mu_0). \end{equation}
\end{enumerate} \end{proposition} The first item is exactly \cite[Proposition 3.1]{ACE}. For the second item, if $\phi_t(\mu_0)$ is locally $\mathcal{S}^k$ bounded for all $k$, then we can apply the same reasoning as the cited proposition to the Boltzmann equation. To remove this condition, we consider the Boltzmann equation started from $\mu_\delta=\phi_\delta(\mu_0)$: thanks to the qualitative moment creation property \cite{Desvilettes, Wennberg Mischler}, the Boltzmann flow started at $\mu_\delta$ is locally $\mathcal{S}^k$ bounded for all $k$, and so the claimed result holds with $\mu_\delta$ in place of $\mu_0$. The claimed result may then be obtained by carefully taking the limit $\delta\downarrow 0$.
\\ The first conclusion of item iii. is proven in \cite[Equation 6.20]{M+M}, and the final point follows using the interpolation inequality, valid for all $\mu \in \mathcal{S}$, \begin{equation} \langle (1+|v|^2) \hspace{0.05cm}\log(1+|v|^2),\mu\rangle \le 8(1+\log \Lambda_5(\mu)). \end{equation} In our estimates for the various terms of the interpolation decomposition, we will frequently encounter the weightings $\Lambda_k(\mu^N_t)$ appearing in the integrand. We refer to points (i-ii.) of Proposition \ref{thrm:momentinequalities}, along with the following lemma, as \emph{growth control} of the weightings, which allows us to control these factors in suitable $L^p$ norms. \begin{lemma} \label{lemma:momentincreaseatcollision} Let $\left(\mu^N_t\right)_{t\geq 0}$ be a Kac process on $N\geq 1$ particles, and fix an exponent $k\geq 2$. Then for any time $t\geq 0$, and any measure $\mu^N$ which can be obtained from $\mu^N_t$ by a collision, \begin{equation} \Lambda_k(\mu^N) \leq 2^{\frac{k}{2}+1} \Lambda_k(\mu^N_t).\end{equation} \end{lemma} \begin{proof} This is immediate, by noting that if $v, v_\star$ are pre-collision velocities leading to post-collision $v', v_\star'$, we have the bound \begin{equation} \begin{split} (1+|v'|^2)^\frac{k}{2} &\le ((1+|v|^2)+(1+|v_\star|^2))^\frac{k}{2} \\ & \le 2^{k/2}((1+|v|^2)^\frac{k}{2}+(1+|v_\star|^2)^\frac{k}{2}). \end{split}\end{equation} Using the same bound for $v_\star'$ leads to the claimed result. \end{proof}
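As a sanity check, the bound of the lemma can be spot-checked numerically on random hard-sphere collisions; the sketch below (an illustration only, with hypothetical parameter choices $N=200$, $k=6$) applies random collisions to an empirical velocity distribution and confirms that the ratio $\Lambda_k(\mu^N)/\Lambda_k(\mu^N_t)$ stays below $2^{k/2+1}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def collide(v, w, sigma):
    """Post-collisional velocities for the hard-sphere collision of v, w with
    scattering direction sigma on the unit sphere (conserves momentum and energy)."""
    centre = 0.5 * (v + w)
    r = 0.5 * np.linalg.norm(v - w)
    return centre + r * sigma, centre - r * sigma

def Lambda(V, k):
    """Empirical moment weight Lambda_k = <(1+|v|^2)^(k/2), mu^N>."""
    return np.mean((1.0 + np.sum(V ** 2, axis=1)) ** (k / 2))

N, d, k = 200, 3, 6
V = rng.normal(size=(N, d)) * 3.0
worst = 0.0
for _ in range(1000):
    i, j = rng.choice(N, size=2, replace=False)
    sigma = rng.normal(size=d)
    sigma /= np.linalg.norm(sigma)
    W = V.copy()
    W[i], W[j] = collide(V[i], V[j], sigma)
    worst = max(worst, Lambda(W, k) / Lambda(V, k))
print(worst, 2 ** (k / 2 + 1))
```

In practice the observed ratio is far below the worst-case constant $2^{k/2+1}$, since a single collision changes only two of the $N$ atoms of the empirical measure.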
A final property of the weighting estimates which will prove useful is the following correlation inequality: \begin{lemma} \label{lemma: correlation of moments} Let $k_1, k_2 \geq 2$, and let $\mu \in \mathcal{S}^{k_1+k_2}$. Then we have \begin{equation} \Lambda_{k_1}(\mu)\Lambda_{k_2}(\mu)\leq \Lambda_{k_1+k_2}(\mu).\end{equation} \end{lemma} \begin{proof} Since the maps $x\mapsto (1+|x|^2)^{k_i/2}$, for $i=1,2$, are both monotonically increasing on $[0, \infty)$, for any $v, v_\star$ we have the bound \begin{equation} \left\{(1+|v|^2)^{k_1/2}-(1+|v_\star|^2)^{k_1/2}\right\}\left\{(1+|v|^2)^{k_2/2}-(1+|v_\star|^2)^{k_2/2}\right\}\geq 0.\end{equation} Integrating both variables with respect to $\mu$ produces the result. \end{proof} \section{Regularity and Stability Estimates} \label{sec: Regularity and Stability Estimates}
In this section, we give precise statements of analytical results concerning the flow maps $(\phi_t)_{t\geq 0}$, and the drift operator $Q$, which will be used in our convergence theorems. We need a combination of \emph{regularity} for the drift map $Q$, which appears in the proof of Lemma \ref{thrm: local uniform martingale control}, and \emph{differentiability and stability} results for the flow maps $(\phi_t)_{t\geq 0}$.
\subsection{Stability Estimates} The key component of our analysis of the Kac process is the \emph{stability} of the limiting Boltzmann equation: the limit flow suppresses errors, rather than allowing them to be amplified exponentially. We begin by defining appropriate linear structures.
\begin{definition}\label{def: weighted normed spaces} Consider the space $Y$ of signed measures, given by \begin{equation} Y=\left\{ \xi: \hspace{0.3cm}\|\xi\|_\mathrm{TV} <\infty;\hspace{0.3cm} \langle 1, \xi \rangle =0\right\}.\end{equation} We equip $Y$ with the total variation norm $\|\cdot\|_\mathrm{TV}$. For real $q\geq 0$, we define the subspace $Y_q$ of measures with finite $q^\text{th}$ moments: \begin{equation} Y_q =\left\{ \xi \in Y: \langle 1+|v|^q, |\xi|\rangle < \infty \right\}. \end{equation} We define the norm with $q$-weighting on $Y_q$ by \begin{equation} \|\xi\|_{\mathrm{TV}+q}=\langle 1+|v|^q, |\xi|\rangle.\end{equation} The notation $\|\cdot\|_{\mathrm{TV}+q}$ is chosen to emphasise that this is a total variation norm, with additional polynomial weighting of order $q$, while avoiding potential ambiguity with the $L^q$ norms of random variables. \end{definition} \begin{remark}\label{rmk: compactness} The total variation norms $\|\cdot\|_{\mathrm{TV}+q}$ appearing in the following analysis are much stronger than the Wasserstein distance appearing in Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}, \ref{thm: low moment regime}. We can understand this as follows. Recalling the definitions of $\mathcal{A}, \mathcal{A}_0$ in (\ref{eq: defn of script A}, \ref{eq: defn of script A 0}), we note that the $\mathrm{TV}+2$ distance is given by a duality \begin{equation} \|\mu-\nu\|_{\mathrm{TV}+2} =\sup_{f\in \mathcal{A}_0} \hspace{0.1cm} |\langle f, \mu-\nu\rangle| \end{equation} and, if we write $\mathcal{A}|_r, \mathcal{A}_0|_r$ for the restriction of functions to $[-r,r]^d$, then the inclusion \begin{equation} \mathcal{A}|_r\subset \mathcal{A}_0|_r\end{equation} is compact in the norm of $\mathcal{A}_0|_r$, by the classical theorem of Arzel\'a-Ascoli. 
This is at the heart of a \emph{quantitative compactness} argument in Lemmas \ref{thrm: pointwise martingale control}, \ref{thrm: local uniform martingale control}, which allows us to take the supremum over $f\in\mathcal{A}$ inside the expectation. \end{remark} We can now state the precise results as they appear in \cite[Lemma 6.6]{M+M}:
\begin{proposition}\label{thrm: stability for BE} Let $\eta \in (0,1)$. Then there are absolute constants $C\in (0, \infty)$ and $\lambda_0>0$ such that, for $k$ large enough (depending only on $\eta$), and all $\mu, \nu \in \mathcal{S}^k$, there is a unique solution $(\xi_t)_{t\geq 0} \subset Y_2$ to the linearised differential equation \begin{equation} \label{eq: definition of the difference term} \xi_0=\nu-\mu; \hspace{0.5cm} \partial_t \xi_t = 2Q(\phi_t(\mu), \xi_t).\end{equation} This solution satisfies the bounds \begin{equation} \label{eq: stability for BE 1} \|\phi_t(\nu)-\phi_t(\mu)\|_{\mathrm{TV}+2} \leq C e^{-\lambda_0 t/2} \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^\eta; \end{equation} \begin{equation} \label{eq: stability for BE 1.5} \|\xi_t\|_{\mathrm{TV}+2} \leq C e^{-\lambda_0 t/2} \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^\eta;\end{equation} \begin{equation}\label{eq: stability for BE 2}
\|\phi_t(\nu)-\phi_t(\mu) - \xi_t \|_{\mathrm{TV}+2} \leq C e^{-\lambda_0 t/2} \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^{1+\eta}.\end{equation} This allows us to define a linear map $\mathcal{D}\phi_t(\mu)$ by \begin{equation} \mathcal{D}\phi_t(\mu)[\nu-\mu]:=\xi_t.\end{equation} This linear map will play the r\^ole of a functional derivative for the Boltzmann flow $\phi_t$ in the calculus developed by \cite{M+M}. \end{proposition}
To obtain estimates with the weighted metric $W$, we will use a version of Proposition \ref{thrm: stability for BE} with the difference $\phi_t(\mu)-\phi_t(\nu)$ measured in stronger norms $\|\cdot \|_{\mathrm{TV}+q}$. The following estimate may be obtained by a simple interpolation between Propositions \ref{thrm:momentinequalities}, \ref{thrm: stability for BE}. \begin{corollary} \label{cor: new stability for BE} Let $q\geq 2$, $\eta \in (0,1)$ and $\lambda<\lambda_0$. Then for all $k$ large enough, depending on $\eta, \lambda$ and $q$, there exists a constant $C$ such that \begin{equation} \forall \mu, \nu \in \mathcal{S}^k, \hspace{1cm}\|\phi_t(\mu)-\phi_t(\nu)\|_{\mathrm{TV}+q} \leq C e^{-\lambda t/2} \Lambda_{k}(\mu, \nu)^\frac{1}{2} \|\mu-\nu\|_\mathrm{TV}^{\eta}. \end{equation} \end{corollary} \iffalse \begin{proof} The case $q=2$ is immediate from Proposition \ref{thrm: stability for BE}; for the rest of the proof, assume that $q>2$. Choose $\eta' \in (0,1)$ and $\delta\in \left(0,\frac{1}{2}\right]$ such that \begin{equation} \eta < \eta'(1-\delta) < 1; \hspace{0.5cm} (1-\delta)\lambda_0 > \lambda.\end{equation} Choose $k_0$ large enough, depending on $\eta'$, such that Proposition \ref{thrm: stability for BE} holds with exponent $\eta'$. Let $k$ be given by \begin{equation} k=k_0+\frac{q}{\delta}. \end{equation} Fix $\mu, \nu \in \mathcal{S}^k$, and $t\geq 0$. For ease of notation, write $\theta$ for the total variation measure $\theta=|\phi_t(\mu)-\phi_t(\nu)|$.
\\ Observe that $(1+|v|^q)\lesssim (1+|v|^2)^{1-\delta}(1+|v|^{q'})$, where $q'=q-2(1-\delta)>0$. Applying H\"{o}lder's inequality, we obtain \begin{equation} \begin{split} \langle 1+|v|^q, \theta\rangle &\lesssim \langle1+|v|^2, \theta\rangle^{1-\delta}\left\langle (1+|v|^{q'})^\frac{1}{\delta}, \theta\right\rangle^\delta \\ &\lesssim \left(e^{-\lambda_0 t/2} \Lambda_{k_0}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^{\eta'}\right)^{1-\delta} \Lambda_{\frac{q'}{\delta}}(\phi_t(\mu), \phi_t(\nu))^\delta \end{split} \end{equation} Since $k\geq \frac{q'}{\delta}+1$, we can apply Proposition \ref{lemma:momentboundpt2} and the correlation property Lemma \ref{lemma: correlation of moments} to obtain \begin{equation} \begin{split} \langle 1+|v|^q, \theta \rangle & \lesssim e^{-\lambda_0 (1-\delta)t/2} \|\mu-\nu\|_\mathrm{TV}^{\eta'(1-\delta)} \Lambda_{k_0}(\mu, \nu)^\frac{1}{2} \Lambda_{\frac{q'}{\delta}}(\mu, \nu)^\frac{1}{2} \\ & \lesssim e^{-\lambda_0(1-\delta) t/2} \|\mu-\nu\|_\mathrm{TV}^{\eta'(1-\delta)} \Lambda_{k}(\mu, \nu)^\frac{1}{2}. \end{split} \end{equation}By the choice of $\delta$, we have the desired bound. \end{proof} \fi We emphasise that the rapid decay is the key property that allows us to obtain good long-time behaviour for our estimates. The pointwise estimate Theorem \ref{thrm: PW convergence} and the initial estimate for pathwise local uniform convergence Lemma \ref{lemma: initial LU bound} would hold for estimates \begin{equation} \label{eq: weaker stability 1} \|\phi_t(\nu)-\phi_t(\mu)\|_{\mathrm{TV}+5} \leq F(t) \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^\eta; \end{equation} \begin{equation}\label{eq: weaker stability 2}
\|\phi_t(\nu)-\phi_t(\mu) - \xi_t \|_{\mathrm{TV}+2} \leq G(t) \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^{1+\eta}\end{equation} for functions $F,G$ such that \begin{equation} \label{eq: weaker stability 3} \left(\int_0^\infty F^2 dt\right)^{1/2}<\infty;\hspace{0.5cm} \int_0^\infty G dt<\infty. \end{equation} The full strength of exponential decay is used to `bootstrap' to the pathwise local uniform estimate Theorem \ref{thrm: Main Local Uniform Estimate}, which provides better behaviour in the time horizon $t_\text{fin}$, with only a logarithmic loss in the number of particles $N$. Provided that $F\rightarrow 0$ as $t\rightarrow \infty$, we could use the same `bootstrap', but with a potentially much larger loss in $N$. \subsection{Regularity Estimates}
For the proof of the local uniform estimate Lemma \ref{thrm: local uniform martingale control}, it will be important to control the continuity of $Q$ \emph{after application of the flow maps} $\phi_t$; for brevity, we will write the composition as $Q_t=Q\circ \phi_t$. We can exploit the stronger $\|\cdot\|_{\mathrm{TV}+2}$ norm in the stability estimates of Proposition \ref{thrm: stability for BE} to prove a strong notion of continuity for $Q_t$, including the dependence on $t$.
\\ It is well known that, for $q\geq 1$, and $\mu, \nu \in \mathcal{S}^{q+1}$, we have the bilinear estimate \begin{equation} \|Q(\mu)-Q(\nu)\|_{\mathrm{TV}+q}\lesssim \Lambda_{q+1}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_{\mathrm{TV}+(q+1)} \end{equation} and, by interpolating, this leads to \begin{equation} \label{eq: holder continuity of Q} \|Q(\mu)-Q(\nu)\|_{\mathrm{TV}+q}\lesssim \Lambda_{3(q+1)}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_{\mathrm{TV}}^\frac{1}{2}. \end{equation} Combining this with the stability estimate in Corollary \ref{cor: new stability for BE}, we deduce the following. For $q\geq 1$, $\eta \in (0,1)$ and $\lambda<\lambda_0$, there exists $k$ such that, for $\mu, \nu \in \mathcal{S}^k$, we have the estimate \begin{equation} \label{eq: Lipschitz continuity of Q}\|Q_t(\mu)-Q_t(\nu)\|_{\mathrm{TV}+q} \lesssim e^{-\lambda t} \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^\eta.\end{equation}
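We sketch, for the reader's convenience, how the interpolation leading to (\ref{eq: holder continuity of Q}) may be carried out; the indices here are indicative only. By Cauchy-Schwarz applied to the total variation measure $|\mu-\nu|$, \begin{equation} \|\mu-\nu\|_{\mathrm{TV}+(q+1)}=\left\langle 1+|v|^{q+1}, |\mu-\nu|\right\rangle \le \|\mu-\nu\|_\mathrm{TV}^\frac{1}{2}\left\langle \left(1+|v|^{q+1}\right)^2, |\mu-\nu|\right\rangle^\frac{1}{2} \lesssim \|\mu-\nu\|_\mathrm{TV}^\frac{1}{2}\hspace{0.1cm}\Lambda_{2(q+1)}(\mu, \nu)^\frac{1}{2} \end{equation} and the resulting product $\Lambda_{q+1}(\mu, \nu)^\frac{1}{2}\Lambda_{2(q+1)}(\mu, \nu)^\frac{1}{2}$ may be absorbed into $\Lambda_{3(q+1)}(\mu, \nu)^\frac{1}{2}$ using the correlation property Lemma \ref{lemma: correlation of moments}.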
\section{Proof of Theorem \ref{thrm: W-W continuity of phit}}\label{sec: continuity of BE} As a first application of the stability estimates, we will now prove Theorem \ref{thrm: W-W continuity of phit}, which establishes a continuity result for the Boltzmann flow $(\phi_t)$ with respect to our weighted Wasserstein metric $W$. For Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}, we wish to approximate a given starting point $\mu_0$ by an empirical measure $\mu^N_0 \in \mathcal{S}_N$ on $N$ points; in this context, the total variation distance is too strong, as there is no discrete approximation to any continuous measure $\mu_0$. We therefore seek a continuity estimate for the Boltzmann flow $\phi_t$, measured in the Wasserstein distance $W$ defined in (\ref{eq: definition of W}), which is uniform in time.
\\ The proof combines a representation formula, and associated estimates, from \cite{ACE}, which establishes the first claim; the second claim will then follow using a long-time estimate recalled in Proposition \ref{thrm: stability for BE}. We first review the definition and the claimed representation formula for the Boltzmann flow. \begin{definition}[Linearised Kac Process]\label{def: LKP} Write $V=\mathbb{R}^d$ and $V^*$ for the signed space $V^*=V\times\{\pm 1\}=V^+\sqcup V^-$. We write $\pi: V^*\rightarrow V$ for the projection onto the first factor, and $\pi_\pm: V^\pm\rightarrow V$ for the obvious bijections. \\ Let $(\rho_t)_{t\ge 0}$ be a family of measures on $V=\mathbb{R}^d$ such that \begin{equation} \langle 1, \rho_t \rangle =1;\hspace{1cm} \langle |v|^2, \rho_t\rangle =1;\end{equation} \begin{equation} \label{eq: integrability for environment} \int_0^t \Lambda_3(\rho_s)ds <\infty \hspace{1cm} \text{for all }t<\infty. \end{equation} The \emph{Linearised Kac Process} \emph{in environment $(\rho_t)_{t\ge 0}$} is the branching process on $V^*$ in which each particle of type $(v,1)$, at rate $2|v-v_\star|\rho_t(dv_\star) d\sigma$, dies, and is replaced by three particles, of types \begin{equation} (v'(v,v_\star,\sigma),1);\hspace{0.5cm}(v_\star'(v,v_\star, \sigma),1);\hspace{0.5cm}(v_\star,-1) \end{equation} where $v', v_\star'$ are the post-collisional velocities given by (\ref{eq: PCV}). The dynamics are identical for particles of type $(v,-1)$, with the signs exchanged.
\\ We write $\Xi^*_t$ for the associated process of unnormalised empirical measures on $V^*$, and define a signed measure $\Xi_t$ on $V$ by including the sign at each particle: \begin{equation} \Xi_t=\Xi^+_t-\Xi^-_t ; \hspace{1cm} \Xi^\pm_t=\Xi^*_t\circ \pi_\pm^{-1}.\end{equation} We can also consider the same branching process, started from a time $s\ge 0$ instead. We write $E$ for the expectation over the branching process, which is not the full expectation in the case where $\rho$ is itself random. When we wish to emphasise the initial velocity $v$ and starting time $s$, we will write $E_{(s,v)}$ when the process is started from $\Xi^*_s=\delta_{(v,1)}$ at time $s$, and $E_v$ in the case $s=0$. \end{definition} Provided that the initial data $\Xi_0$ is finitely supported, one can show that the branching process is almost surely non-explosive, and that \begin{equation}\label{eq: no explosion}E_{v_0} \langle 1+|v|^2, |\Xi_t|\rangle \le (1+|v_0|^2)\exp\left[8\int_0^t \Lambda_3(\rho_s) ds\right]. \end{equation} \begin{remark} We can connect this branching process with a different proof of existence and uniqueness for the difference $\xi_t$ in Proposition \ref{thrm: stability for BE}. For existence, consider the linearised Kac process $(\Xi_t)_{t\ge 0}$ in environment $\rho_t=\phi_t(\mu)$, where particles are initialised at $t=0$ according to a Poisson random measure of intensity \begin{equation} \theta(dv)= \begin{cases} \xi_0^+(dv)=\nu(dv) & \text{on }V^+ \\ \xi_0^-(dv)=\mu(dv) & \text{on }V^-. \end{cases} \end{equation} Let $\xi_t = \mathbb{E}(\Xi_t)$, which may be formalised in the sense of a Bochner integral in the weighted space $(Y_2, \|\cdot\|_{\mathrm{TV}+2})$ defined in (\ref{def: weighted normed spaces}). Then the same proof as for the representation formula \cite[Proposition 4.2]{ACE} shows that $\partial_t \xi_t = 2Q(\phi_t(\mu), \xi_t)$, and that this solution is unique. 
\end{remark} Recall from the introduction that $\mathcal{A}$ is the set of all functions $f$ on $\mathbb{R}^d$ such that $\widehat{f}(v)=(1+|v|^2)^{-1}f(v)$ satisfies \begin{equation} |\widehat{f}(v)|\le 1; \hspace{1cm} \frac{|\widehat{f}(v)-\widehat{f}(w)|}{|v-w|}\le 1 \hspace{1cm}\text{for all }v\neq w. \end{equation} From the bound (\ref{eq: no explosion}), we can now define, for functions of quadratic growth, \begin{equation} \label{eq: defn of f0t} f_{st}(v_0)=E_{(s,v_0)}\left[\langle f, \Xi_t\rangle \right].\end{equation} When we wish to emphasise the environment, we will write $f_{st}[\rho](v_0)$. We now recall the following estimates from \cite{ACE}:
\begin{proposition}[Continuity Estimates for $f_{st}$]\label{prop: continuity for branching process} Fix $t\ge 0$, and let $z_t$ be given by \begin{equation} z_t=3\exp\left[8\int_0^t \Lambda_3(\rho_u)du \right].\end{equation} Then, for $f\in\mathcal{A}$ and $s\le t$, we have $f_{st} \in z_t\hspace{0.1cm} \mathcal{A}$. This is, in our notation, a reformulation of \cite[Proposition 4.3]{ACE}. \end{proposition} The other result which we will use is the representation formula \cite[Proposition 4.2]{ACE}, which expresses the difference of two Boltzmann flows $\phi_t(\mu)-\phi_t(\nu)$ in terms of the functions $f_{0t}$. This may be obtained from the proof of \cite[Proposition 4.2]{ACE} without essential modification, as in the proof of \cite[Theorem 10.1]{ACE}. \begin{proposition}[Representation Formula]\label{prop: bad representation formula} Let $\mu, \nu \in \mathcal{S}^k$ for some $k>2$, and let $(\rho_t)_{t\ge 0}$ be given by \begin{equation} \rho_t=\frac{1}{2}(\phi_t(\mu)+\phi_t(\nu)) \end{equation} where $\phi_t(\mu)$ is the unique, locally $\mathcal{S}^k$-bounded solution to the Boltzmann equation, starting at $\mu$, and similarly for $\nu$. Then, for all $f\in \mathcal{A}$, we have \begin{equation} \label{eq: bad representation formula} \langle f, \phi_t(\mu)-\phi_t(\nu)\rangle =\left\langle f_{0t}[\rho], \mu-\nu\right\rangle.\end{equation} \end{proposition} Note that the moment production property in Proposition \ref{thrm:momentinequalities} guarantees that (\ref{eq: integrability for environment}) holds for this environment. This will allow us to find an estimate for the Boltzmann flow $\phi_t$ which behaves well in short time. 
We now give the proof of Theorem \ref{thrm: W-W continuity of phit}. \begin{proof}[Proof of Theorem \ref{thrm: W-W continuity of phit}] From the representation formula (\ref{eq: bad representation formula}) and the continuity estimate Proposition \ref{prop: continuity for branching process}, for any $f\in \mathcal{A}$, \begin{equation} \label{eq: short time bound on BE} \langle f, \phi_t(\mu)-\phi_t(\nu)\rangle =\langle f_{0t}[\rho],\mu-\nu\rangle \le z_t\hspace{0.1cm} W(\mu, \nu) \end{equation} where $\rho_t=(\phi_t(\mu)+\phi_t(\nu))/2$. It therefore suffices to bound \begin{equation} z_t:=3\exp\left(4\int_0^t \left[\Lambda_3(\phi_s(\mu))+\Lambda_3(\phi_s(\nu))\right] ds \right).\end{equation} Using the logarithmic moment production for the Boltzmann equation recalled in Proposition \ref{thrm:momentinequalities}, there exist constants $k,w$ such that \begin{equation} \begin{split} z_t & \lesssim e^{wt} \Lambda_{k/2}(\mu)\Lambda_{k/2}(\nu) \\ & \lesssim e^{wt} \Lambda_{k/2}(\mu, \nu)^2 \lesssim e^{wt}\Lambda_k(\mu, \nu).\end{split} \end{equation} This proves the first claim. For the second claim, we first deal with the case where $k\ge 3$ is large enough that the above holds, and such that the stability estimate Proposition \ref{thrm: stability for BE} holds with H\"older exponent $\eta=\frac{1}{2}$. Fix $\mu, \nu \in \mathcal{S}^k_a$, and assume without loss of generality that $0<W(\mu, \nu)<1$. From the stability estimate (\ref{eq: stability for BE 1}) we have \begin{equation} \|\phi_t(\mu)-\phi_t(\nu)\|_{\mathrm{TV}+2} \lesssim a^\frac{1}{2} e^{-\lambda_0 t/2} \end{equation} for some constant $\lambda_0>0$. 
It is immediate from the definitions that \begin{equation} W(\mu, \nu)\le \|\mu-\nu\|_{\mathrm{TV}+2} \end{equation} and so combining with the previous result, we have\begin{equation} W\left(\phi_t(\mu), \phi_t(\nu)\right)\lesssim a \min\left(e^{-\lambda_0 t/2}, W(\mu, \nu)e^{wt}\right).\end{equation} The right hand side is maximised when $e^{-\lambda_0 t/2}=W(\mu, \nu)e^{wt}$, which occurs when \begin{equation} t=-\frac{2}{\lambda_0+2w} \hspace{0.1cm} \log \hspace{0.05cm}W(\mu, \nu).\end{equation} Therefore, the maximum value of the right-hand side is \begin{equation} \label{eq: holder continuity} \begin{split} \sup_{t\ge 0} W\left(\phi_t(\mu), \phi_t(\nu)\right) & \lesssim a\exp\left(\frac{\lambda_0}{\lambda_0+2w} \log W(\mu, \nu)\right) \\ & =aW(\mu, \nu)^\zeta\end{split} \end{equation} with \begin{equation} \zeta(d)=\frac{\lambda_0}{\lambda_0+2w}\end{equation} which is the claimed H\"older continuity, for $k$ sufficiently large.
\\ Finally, we deal with the second point for arbitrary $k>2$. This argument uses a localisation principle to control the moments on a very short initial interval $[0,u]$, and may be read as a warm-up to the more involved arguments in the proof of Theorem \ref{thm: low moment regime}.
\\ Let $k_0$ be large enough such that the estimate (\ref{eq: holder continuity}) holds, and let $\zeta_0$ be the resulting exponent. Let $\beta=\frac{k-2}{2}$, let $\mu, \nu$ be as in the statement of the result, and let $u\in (0,1]$ be chosen later. Define \begin{equation} T=\inf\left\{t\ge 0: \Lambda_3(\rho_t)>\frac{\beta t^{\beta-1}+1}{2}\right\}\end{equation} where $\rho_t$ is as above. We now deal with the two cases $T>u, T\le u$ separately.
\\ If $T>u$, then we have the estimate \begin{equation} \begin{split} z_u&:=3\exp\left(4\int_0^u \Lambda_3(\rho_s)ds\right) \\ & \le 3\exp\left(4\int_0^1 \frac{\beta s^{\beta-1}+1}{2} ds\right)\lesssim 1. \end{split} \end{equation} Using the representation formula in Proposition \ref{prop: bad representation formula} as in (\ref{eq: short time bound on BE}), we therefore obtain \begin{equation} \label{eq: v short time BE estimate} \sup_{t\le u} W(\phi_t(\mu),\phi_t(\nu)) \lesssim W(\mu, \nu).\end{equation} Using (\ref{eq: holder continuity}) on $\phi_u(\mu), \phi_u(\nu)$, and using the moment production property recalled in Proposition \ref{thrm:momentinequalities}, we have the estimate \begin{equation} \label{eq: restarted BF estimate}\sup_{t\ge u} W(\phi_t(\mu),\phi_t(\nu)) \lesssim u^{2-k_0}W(\mu, \nu)^{\zeta_0}. \end{equation} We next deal with the case $T\le u$. In this case, comparing the moment production property to the definition of $T$ shows that \begin{equation} T^{\beta-1}\lesssim \Lambda_3(\phi_T(\mu))+\Lambda_3(\phi_T(\nu))\lesssim a T^{k-3} ;\hspace{1cm} T\le u\end{equation} which rearranges to produce the bound $1\lesssim au^{k/2-1}$. In particular, in this case, we have \begin{equation} \label{eq: bad moment case} \sup_{t\ge 0} W(\phi_t(\mu),\phi_t(\nu))\le 4 \lesssim a u^{k/2-1}. \end{equation} Combining estimates (\ref{eq: v short time BE estimate}, \ref{eq: restarted BF estimate}, \ref{eq: bad moment case}), we see that in all cases, \begin{equation} \sup_{t\ge 0} W(\phi_t(\mu), \phi_t(\nu)) \lesssim u^{2-k_0}W(\mu,\nu)^{\zeta_0}+au^{k/2-1}.\end{equation} Now, if we choose $u=\min(1,W(\mu, \nu)^\delta) $ for sufficiently small $\delta>0$, we obtain \begin{equation} \sup_{t\ge 0} W(\phi_t(\mu), \phi_t(\nu)) \lesssim aW(\mu,\nu)^\zeta \end{equation} for a new exponent $\zeta=\zeta(d,k)>0$. \end{proof}
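\begin{remark} We spell out, for completeness, the elementary optimisation behind the choice of $u$ in the preceding proof; the exponent obtained below is indicative only. If $W(\mu, \nu)\le 1$ and $u=W(\mu, \nu)^\delta$, then \begin{equation} u^{2-k_0}W(\mu, \nu)^{\zeta_0}+au^{k/2-1}=W(\mu, \nu)^{\zeta_0-\delta(k_0-2)}+aW(\mu, \nu)^{\delta(k/2-1)} \end{equation} so that any choice $0<\delta<\frac{\zeta_0}{k_0-2}$ yields the claim, with \begin{equation} \zeta=\min\left(\zeta_0-\delta(k_0-2), \hspace{0.2cm} \delta\left(\frac{k}{2}-1\right)\right)>0. \end{equation} In the remaining case $W(\mu, \nu)>1$, the claim is immediate, since $W(\phi_t(\mu), \phi_t(\nu))\le 4$. \end{remark}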
\section{The Interpolation Decomposition for Kac's Process}\label{sec: interpolation decomposition} We introduce a pair of random measures associated to the Markov process $(\mu^N_t)_{t\geq 0}$. The \emph{jump measure} $m^N$ is the un-normalised empirical measure on $(0,\infty) \times \mathcal{S}_N$ of all pairs $(t, \mu^N)$ such that the system collides at time $t$ with new measure $\mu^N$. Its \emph{compensator} $\overline{m}^N$ is the random measure on $(0, \infty)\times \mathcal{S}_N$ given by \begin{equation}\label{eq: definition of mbar} \overline{m}^N(dt,d\mu^N)=\mathcal{Q}_N(\mu^N_{t-}, d\mu^N)dt \end{equation} where $\mathcal{Q}_N(\cdot, \cdot)$ is the transition kernel of the Kac process, given by (\ref{eq: definition of script Q}). The goal of this section is to prove the following `interpolation decomposition' for the difference between Kac's process and the Boltzmann flow, which is the key identity required for the proofs of Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}. This is based on an idea of Norris \cite{PDF}, which was inspired by \cite[Section 3.3]{M+M}.\begin{formula}\label{form:newdecomposition} Let $\mu^N_t$ be a Kac process on $N\geq 2$ particles, and suppose $f \in \mathcal{A}_0$ is a test function. To ease notation, we write \begin{equation} \label{eq: definition of Delta} \Delta(s,t,\mu^N)=\phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_{s-}); \hspace{0.5cm} 0\le s \le t, \hspace{0.2cm}\mu^N \in \mathcal{S}_N; \end{equation} \begin{equation} \label{eq: definition of psi} \psi(u,\mu, \nu)= \phi_{u}(\nu)-\phi_{u}(\mu)-\mathcal{D}\phi_{u}(\mu)[\nu-\mu]; \hspace{0.3cm} u\ge 0,\hspace{0.3cm} \mu, \nu \in \bigcap_{k>2} \mathcal{S}^k\end{equation} where $\mathcal{D}\phi_t$ is the derivative of the Boltzmann flow $\phi_t$, defined in Proposition \ref{thrm: stability for BE}; this makes sense, provided that all moments of $\mu, \nu$ are finite. 
Then we can decompose \begin{equation} \langle f, \mu^N_t -\phi_t(\mu^N_0) \rangle =M^{N,f}_t + \int_0^t \langle f, \rho^N(t-s, \mu^N_s) \rangle ds \end{equation} where \begin{equation} \label{eq: definition of MNFT} M^{N,f}_t=\int_{(0,t]\times \mathcal{S}_N} \langle f, \Delta(s,t,\mu^N) \rangle (m^N-\overline{m}^N)(ds, d\mu^N)\end{equation} and where $\rho^N$ is given in terms of the transition kernel $\mathcal{Q}_N$ (\ref{eq: definition of script Q}) by \begin{equation} \label{eq: definition of rho} \langle f, \rho^N(u, \mu^N)\rangle = \int_{\mathcal{S}_N}\langle f, \psi(u,\mu^N, \nu) \rangle \mathcal{Q}_N(\mu^N, d\nu).\end{equation} \end{formula} \begin{remark} \begin{enumerate}[label=\roman*).] \item This is the key identity needed for Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}; the remainder of the proofs consists of establishing suitable controls on each of the two terms. \item This representation formula offers two major advantages over the equivalent representation formula in \cite{ACE}, which will be recalled in Proposition \ref{prop: very bad rep formula}. \begin{itemize} \item Firstly, all the quantities appearing in our formula are adapted to the natural filtration of $(\mu^N_t)_{t\ge 0}$, and so we can use martingale estimates directly; by contrast, \cite[Proposition 4.2]{ACE} contains anticipating terms. This allows us to prove convergence in $L^p$ spaces, rather than simply in probability. \item Secondly, all terms appearing in our formula may be controlled by the stability estimates (\ref{eq: stability for BE 1}, \ref{eq: stability for BE 2}). This allows us to exploit the stability of the limit equation, at the level of \emph{individual realisations of} the empirical particle system $\mu^N_0$. \end{itemize} \end{enumerate} \end{remark} The main technicality in the proof is deriving a Chapman-Kolmogorov-style equation, which allows us to manipulate the functional derivatives $\mathcal{D}\phi_t$. 
This is the content of the following lemma.
\begin{lemma}[Exchange Lemma]\label{lemma:DAP} Let $\mu^N \in \mathcal{S}_N$ and $f\in \mathcal{A}$. Then for all times $t\geq 0$, we have the equalities \begin{equation}\label{eq:exchangeDandI} \begin{split} &\frac{d}{dt}\langle f, \phi_t(\mu^N)\rangle = \langle f,\mathcal{D}\phi_t(\mu^N)\left[Q(\mu^N)\right]\rangle \\& \hspace{1cm}= \int_{\mathbb{R}^d \times \mathbb{R}^d \times S^{d-1}} \hspace{0.05cm} \langle f,\mathcal{D}\phi_t(\mu^N)[\mu^{N, v, v_\star, \sigma}-\mu^N]\rangle \hspace{0.05cm} |v-v_\star|\hspace{0.05cm}N\hspace{0.05cm}d\sigma \mu^N(dv)\mu^N(dv_\star)\end{split} \end{equation} where $\mu^{N,v,v_\star,\sigma}$ is the post-collision measure given by (\ref{eq: change of measure at collision}), $\mathcal{Q}_N$ is the transition kernel of the Kac process (\ref{eq: definition of script Q}) and where $\mathcal{D}\phi_t$ is the functional derivative given by Proposition \ref{thrm: stability for BE}. \end{lemma}The first equality is familiar from semigroup theory, but is complicated by the non-linearity of the flow maps; we resolve this using the infinite-dimensional differential calculus developed in \cite{M+M}. The second equality can be thought of as a continuity property for the linear map $\mathcal{D}\phi_t(\mu^N)[\cdot]$, and is justified by the explicit construction of the derivative in Proposition \ref{thrm: stability for BE}.
\\ Assuming this for the moment, we now prove the interpolation decomposition Formula \ref{form:newdecomposition}. \begin{proof}[Proof of Formula \ref{form:newdecomposition}] To begin with, we restrict to bounded, measurable $f$. Fix $t\geq 0$, and consider the process $\Gamma^{N,f,t}_s=\langle f, \phi_{t-s}(\mu^N_s)\rangle$, for $0\leq s \leq t$. Then $\Gamma^{N,f,t}$ is c\`{a}dl\`{a}g, and is differentiable on intervals where $\mu^N_s$ is constant. On such intervals, Lemma \ref{lemma:DAP} tells us that
\begin{equation}\begin{split} \frac{d}{ds} & \langle f, \phi_{t-s}(\mu^N_s)\rangle =-\left.\frac{d}{du}\right|_{u=t-s}\langle f, \phi_u(\mu^N_s)\rangle\\[1ex] & = -\int_{\mathbb{R}^d\times \mathbb{R}^d\times S^{d-1}} \langle f, \mathcal{D}\phi_{t-s}(\mu^N_s)[\mu^{N,v,v_\star, \sigma}-\mu^N_s] \rangle |v-v_\star|\hspace{0.05cm}N\hspace{0.05cm} \mu^N_s(dv)\mu^N_s(dv_\star)d\sigma \\[1ex] & = - \int_{\mathcal{S}_N} \langle f, \mathcal{D}\phi_{t-s}(\mu^N_s)[\mu^N-\mu^N_s] \rangle \mathcal{Q}_N(\mu^N_s, d\mu^N)\end{split}\end{equation} where the final equality rewrites the integral in terms of the transition kernel $\mathcal{Q}_N$ of the Kac process, defined in (\ref{eq: definition of script Q}). Writing $\mathcal{I}_t$ for the (finite) set of jumps $\mathcal{I}_t=\{s\le t: \mu^N_s \neq \mu^N_{s-}\}$, the contribution to $\Gamma^{N,f,t}_t-\Gamma^{N,f,t}_0$ from drift between jumps is \begin{equation} \begin{split}&\int_{(0,t]\setminus \mathcal{I}_t} \hspace{0.1cm}\frac{d}{ds}\langle f, \phi_{t-s}(\mu^N_s)\rangle \hspace{0.1cm} ds \\&=-\int_{((0,t]\setminus \mathcal{I}_t)\times\mathcal{S}_N} \langle f, \mathcal{D}\phi_{t-s}(\mu^N_s)[\mu^N-\mu^N_s]\rangle\mathcal{Q}_N(\mu^N_s, d\mu^N) ds.\end{split} \end{equation} Using the definitions (\ref{eq: definition of Delta}, \ref{eq: definition of psi}) of $\psi$ and $\Delta$, the integrand can be expressed as \begin{equation} \langle f, \mathcal{D}\phi_{t-s}(\mu^N_s)[\mu^N-\mu^N_s]\rangle = \langle f,\Delta(s,t,\mu^N)-\psi(t-s, \mu^N_s, \mu^N)\rangle \end{equation} for any $s \not\in \mathcal{I}_t$. Since the set $\mathcal{I}_t$ has $0$ Lebesgue measure, the set $\mathcal{I}_t\times\mathcal{S}_N$ has $0$ measure with respect to $\mathcal{Q}_N(\mu^N_s,d\mu^N)ds$, and so the inclusion of this set does not change the integral. 
Using the definitions (\ref{eq: definition of mbar}, \ref{eq: definition of rho}) of $\overline{m}^N$ and $\rho^N$, we can rewrite the integral as \begin{equation} \label{eq: contribution from drift} \begin{split} &\int_{(0,t]\times \mathcal{S}_N} \langle f, \psi(t-s, \mu^N_s,\mu^N)-\Delta(s, t, \mu^N)\rangle \mathcal{Q}_N(\mu^N_s, d\mu^N) ds \\ =&\int_0^t \langle f, \rho^N(t-s, \mu^N_s)\rangle ds - \int_{(0,t]\times \mathcal{S}_N} \langle f, \Delta(s,t,\mu^N)\rangle \overline{m}^N(ds, d\mu^N). \end{split}\end{equation} On the other hand, at the times when $\mu^N_s$ jumps, we have \begin{equation} \Gamma^{N,f,t}_s-\Gamma^{N,f,t}_{s-}=\langle f, \phi_{t-s}(\mu^N_s)-\phi_{t-s}(\mu^N_{s-})\rangle =\langle f, \Delta(s,t,\mu^N_s)\rangle. \end{equation} Therefore, the contribution to $\Gamma^{N,f,t}_t-\Gamma^{N,f,t}_0$ from jumps is \begin{equation} \label{eq: contribution from jump} \begin{split} \sum_{s\in \mathcal{I}_t} \Gamma^{N,f,t}_s-\Gamma^{N,f,t}_{s-}= \int_{(0,t]\times \mathcal{S}_N}\langle f, \Delta(s,t,\mu^N)\rangle\hspace{0.1cm} m^N(ds, d\mu^N) \\ = M^{N,f}_t+\int_{(0,t]\times \mathcal{S}_N}\langle f, \Delta(s,t,\mu^N)\rangle\hspace{0.1cm} \overline{m}^N(ds, d\mu^N). \end{split} \end{equation} Combining the contributions (\ref{eq: contribution from drift}, \ref{eq: contribution from jump}), we see that \begin{equation} \begin{split} \langle f, \mu^N_t-\phi_t(\mu^N_0)\rangle &= \Gamma^{N,f,t}_t-\Gamma^{N,f,t}_0 \\[1ex]& = \int_{(0,t]\setminus \mathcal{I}_t} \hspace{0.1cm}\frac{d}{ds} \langle f, \phi_{t-s}(\mu^N_s)\rangle ds + \sum_{s\in \mathcal{I}_t} \Gamma^{N,f,t}_s-\Gamma^{N,f,t}_{s-} \\[1ex] & = M^{N,f}_t+\int_0^t\langle f ,\rho^N(t-s, \mu^N_s)\rangle ds \end{split} \end{equation} as desired. \end{proof}
\subsection{Proof of Lemma \ref{lemma:DAP}} In this subsection, we will prove the Chapman-Kolmogorov property Lemma \ref{lemma:DAP}, which is crucial to the interpolation decomposition. We prove the two claimed equalities separately.
\begin{lemma} Let $N\ge 2$ and let $\mu^N \in \mathcal{S}_N$. Then, for all $t>0$ and $f\in \mathcal{A}$, we have the differentiability \begin{equation} \label{eq: first equality of CK} \frac{d}{dt}\langle f, \phi_t(\mu^N)\rangle = \langle f, \mathcal{D}\phi_t(\mu^N)[Q(\mu^N)]\rangle. \end{equation} At $t=0$, this is a one-sided, right differentiability. \end{lemma} The following proof uses ideas of \cite{M+M}, notably the infinite-dimensional differential calculus, and builds on \cite[Lemma 2.11]{M+M}. \begin{proof} Throughout, fix $\mu^N\in \mathcal{S}_N$ and $f\in \mathcal{A}$. Recall, for clarity, the notation $Q_t(\mu)=Q(\phi_t(\mu))$. Using the boundedness of appropriate moments of $\mu^N\in\mathcal{S}_N$, together with the continuity estimate (\ref{eq: holder continuity of Q}), it is straightforward to see that the map $t\mapsto Q_t(\mu^N)$ is H\"older continuous in time, with respect to the weighted norm $\|\cdot\|_{\mathrm{TV}+2}$: for some constant $C_1=C_1(N)$, we have the estimate \begin{equation}\label{eq: time Holder continuity of Qt} \|Q_t(\mu^N)-Q_s(\mu^N)\|_{\mathrm{TV}+2} \le C_1 |t-s|^\frac{1}{2}. \end{equation} From the definition (\ref{BE}) of the Boltzmann dynamics, together with dominated convergence, we have that \begin{equation} \langle f, \phi_t(\mu^N)\rangle =\langle f, \mu^N\rangle +\int_0^t \langle f, Q_s(\mu^N)\rangle ds. \end{equation} Therefore, the map $t\mapsto \langle f, \phi_t(\mu^N)\rangle$ is continuously differentiable in time, with derivative \begin{equation} \frac{d}{dt} \langle f, \phi_t(\mu^N)\rangle=\langle f, Q_t(\mu^N)\rangle\end{equation} where, at $t=0$, this is a one-sided, right derivative. It therefore suffices to show that (\ref{eq: first equality of CK}) holds as a \emph{right} derivative.
\\ Fix $t\ge 0$, and observe that, for $s>0$ small enough, $\nu^N_s=\mu^N+sQ(\mu^N)$ defines a measure $\nu^N_s\in \mathcal{S}$. From the semigroup property, it follows that $\phi_t(\phi_s(\mu^N))=\phi_{t+s}(\mu^N)$, and we can therefore expand \begin{equation} \begin{split} &\big\langle f, \phi_{t+s}(\mu^N)-\phi_t(\mu^N)-s\mathcal{D}\phi_t(\mu^N)[Q(\mu^N)]\big\rangle\\ &=\underbrace{\langle f, \phi_t(\phi_s(\mu^N))-\phi_t(\nu^N_s)\rangle}_{:=\mathcal{T}_1(s)} + \underbrace{\langle f,\phi_t(\nu^N_s)-\phi_t(\mu^N)-s\mathcal{D}\phi_t(\mu^N)[Q(\mu^N)]\rangle}_{:=\mathcal{T}_2(s)}. \end{split}\end{equation} We will now show that each of the two terms $\mathcal{T}_1, \mathcal{T}_2$ is $o(s)$, which implies the result.
\paragraph{Estimate on $\mathcal{T}_1(s)$} Let $\eta\in (\frac{2}{3},1)$, and choose $k$ large enough that the stability estimates (\ref{eq: stability for BE 1}, \ref{eq: stability for BE 2}) hold with exponent $\eta$. As $s\downarrow 0$, the probability measures $\nu^N_s =\mu^N+sQ(\mu^N)$ and $\phi_s(\mu^N)$ are bounded in $\mathcal{S}^k$. Therefore, from (\ref{eq: stability for BE 1}), there exists a constant $C_2=C_2(N)<\infty$ such that, for all $s>0$ small enough, \begin{equation} \label{eq: est of T1, 1} \|\phi_t(\phi_s(\mu^N))-\phi_t(\nu^N_s)\|_{\mathrm{TV}+2} \le C_2\|\phi_s(\mu^N)-\nu^N_s\|^\eta_{\mathrm{TV}+2}.\end{equation} The left-hand side is a bound for $\mathcal{T}_1(s)$. Using the estimate (\ref{eq: time Holder continuity of Qt}) above, we estimate the right-hand side, following \cite[Lemma 2.11]{M+M}: \begin{equation} \label{eq: est of T1, 2}\begin{split} \|\phi_s(\mu^N)-\nu^N_s\|_{\mathrm{TV}+2} &= \left\|\int_0^s (Q_u(\mu^N)-Q_0(\mu^N)) du\right\|_{\mathrm{TV}+2} \\ & \le \int_0^s \|Q_u(\mu^N)-Q_0(\mu^N)\|_{\mathrm{TV}+2} \hspace{0.1cm}du \\ & \le C_1(N)\int_0^s u^\frac{1}{2}du = \frac{2}{3}C_1(N) s^\frac{3}{2}.\end{split} \end{equation} Combining the estimates (\ref{eq: est of T1, 1}, \ref{eq: est of T1, 2}), we see that \begin{equation} \mathcal{T}_1(s) \le C_2\left(\frac{2}{3}C_1\right)^\eta s^\frac{3\eta}{2}.\end{equation} Since we chose $\eta>\frac{2}{3}$, this shows that $\mathcal{T}_1$ is $o(s)$ as $s\downarrow 0$.
\paragraph{Estimate on $\mathcal{T}_2(s)$} Let $\eta$ and $k$ be as above, and recall from (\ref{eq: stability for BE 2}) that $\xi_t$ defines $\mathcal{D}\phi_t(\mu)[\nu-\mu]$. We now apply this estimate to $\mu^N$ and $\nu^N_s$, noting that $\nu^N_s=\mu^N+sQ(\mu^N)$ and $\phi_s(\mu^N)$ are bounded in $\mathcal{S}^k$ as $s\downarrow 0$, and that $\nu^N_s-\mu^N=sQ(\mu^N)$. The bound (\ref{eq: stability for BE 2}) now shows that, for some constants $C_3, C_4<\infty$, \begin{equation} \begin{split} \|\phi_t(\nu^N_s)-\phi_t(\mu^N)-s\mathcal{D}\phi_t(\mu^N)[Q(\mu^N)]\|_{\mathrm{TV}+2} &\le C_3\|\nu^N_s-\mu^N\|_\mathrm{TV}^{1+\eta} \\& = C_3\|sQ(\mu^N)\|_\mathrm{TV}^{1+\eta}\\ & \le C_4 s^{1+\eta}. \end{split}\end{equation} The left-hand side is a bound for $\mathcal{T}_2(s)$, which implies that $\mathcal{T}_2$ is $o(s)$, as desired. Together with the previous estimate on $\mathcal{T}_1$, this concludes the proof.\end{proof}
We now turn to the proof of the second equality in (\ref{eq:exchangeDandI}), that is, \begin{equation}\label{eq:DAP}\begin{split}&\langle f,\mathcal{D}\phi_t(\mu^N)\left[Q(\mu^N)\right]\rangle \\& = \int_{\mathbb{R}^d \times \mathbb{R}^d \times S^{d-1}} \hspace{0.05cm} \langle f,\mathcal{D}\phi_t(\mu^N)[\mu^{N, v, v_\star, \sigma}-\mu^N]\rangle \hspace{0.05cm}N\hspace{0.05cm} |v-v_\star|d\sigma \mu^N(dv)\mu^N(dv_\star). \end{split} \end{equation} Using the definition (\ref{eq: change of measure at collision}), we see that the integral on the right-hand side is equivalent to that defining $Q(\mu^N)$ in (\ref{eq: defn of Q}). However, we \emph{cannot} simply exchange the integration with the linear map $\mathcal{D}\phi_t$, as the construction in Proposition \ref{thrm: stability for BE} does not guarantee that $\mathcal{D}\phi_t(\mu^N)$ is bounded as a linear map. We will instead prove (\ref{eq:DAP}) from the \emph{explicit} way in which $\mathcal{D}\phi_t(\mu^N)$ is constructed in Proposition \ref{thrm: stability for BE}, and show that this construction implies `enough' continuity.
\\ This is closely related to, and may be derived from, condition (\textbf{A3}), convergence of the generators, in \cite{M+M}. We present here a more direct proof, to avoid introducing additional spaces and notation. The crucial observation of our argument is that `enough' small perturbations of a discrete measure $\mu^N\in\mathcal{S}_N$ will remain in $\mathcal{S}$; this is made precise in equation (\ref{eq: proof of dap delta is small}). The same idea is present in the corresponding argument \cite[Section 5.5]{M+M}, but not made explicit.
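\\ To make this observation plausible at a glance, we record the following sketch, in which the total collision rate $R$ is introduced for illustration only. For $\delta\le 1$, the measure $\mu^N+\delta[\mu^{N,v,v_\star,\sigma}-\mu^N]=(1-\delta)\mu^N+\delta\mu^{N,v,v_\star,\sigma}$ is a convex combination of elements of $\mathcal{S}$, and so lies in $\mathcal{S}$. Meanwhile, writing $R=N\int_{\mathbb{R}^d\times\mathbb{R}^d\times S^{d-1}} |v-v_\star| \hspace{0.05cm}d\sigma\hspace{0.05cm} \mu^N(dv)\mu^N(dv_\star)<\infty$, the measure $Q(\mu^N)$ is an integral of the differences $\mu^{N,v,v_\star,\sigma}-\mu^N$ against a measure of total mass $R$, so that \begin{equation} \mu^N+\delta Q(\mu^N) \ge (1-\delta R)\mu^N \ge 0 \hspace{1cm} \text{whenever } \delta\le R^{-1} \end{equation} while the conserved quantities are unaffected, since each $\mu^{N,v,v_\star,\sigma}$ lies in $\mathcal{S}$.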
\\ Before turning to the proof of (\ref{eq:DAP}), we will prove the following auxiliary lemma. In order to justify the exchange of various integrals, we wish to improve the moments of the derivative $\xi_t=\mathcal{D}\phi_t(\mu)[\nu-\mu]$ in Proposition \ref{thrm: stability for BE}. The following argument combines ideas of \cite[Proposition 4.2]{ACE} and \cite[Lemma 6.3]{M+M}. \begin{lemma}\label{lemma: moments of xi} Suppose $\mu, \nu \in \bigcap_{k\ge 2} \mathcal{S}^k$, and let $(\xi_t)_{t\ge 0}$ be the solution to the differential equation (\ref{eq: definition of the difference term}). Then, for all $k\ge 2$, there exists a constant $c=c(k)$ such that, for all $t\ge 0$, \begin{equation} \|\xi_t\|_{\mathrm{TV}+k}\le 2\Lambda_k(\mu,\nu) \exp\left(ct \Lambda_{k+1}(\mu)\right). \end{equation} Moreover, if $k'>2$ is large enough, then we have the continuity estimate, for all $0\le s \le t$, and for some absolute constants $C_1, C_2$, \begin{equation} \|\xi_t-\xi_s\|_{\mathrm{TV}+k}\le C_1\Lambda_{k+k'}(\mu, \nu)^\frac{1}{2}\exp\left(\frac{1}{2}C_2\Lambda_{2(k+1)}(\mu)t\right)(t-s)^\frac{1}{2}.\end{equation} \end{lemma} \begin{proof} Firstly, we observe that, by hypothesis, the map $t\mapsto \xi_t$ is continuous in the norm $\|\cdot\|_{\mathrm{TV}+2}$, and is therefore locally bounded. We have the estimate on total variation \begin{equation}\label{eq: bound TV of Q} \|Q(\phi_t(\mu), \xi_t)\|_\mathrm{TV} \le 4\int_{\mathbb{R}^d\times\mathbb{R}^d}|v-v_\star|\phi_t(\mu)(dv)|\xi_t|(dv_\star) \le 8 \|\xi_t\|_{\mathrm{TV}+2}\end{equation} where we have used the bound $|v-v_\star|\le (1+|v|^2)(1+|v_\star|^2)$. 
Similarly, we estimate \begin{equation} \label{eq: continuity of partial t xi t}\begin{split} &\|Q(\phi_t(\mu), \xi_t)-Q(\phi_s(\mu), \xi_s)\|_\mathrm{TV} \\[0.5ex] & \hspace{1cm}\le \|Q(\phi_t(\mu)-\phi_s(\mu), \xi_t)\|_\mathrm{TV} +\|Q(\phi_s(\mu), \xi_t-\xi_s)\|_\mathrm{TV} \\[0.5ex] & \hspace{1cm} \le 4(\|\xi_t\|_{\mathrm{TV}+2}\hspace{0.05cm}\|\phi_t(\mu)-\phi_s(\mu)\|_{\mathrm{TV}+2}+2\hspace{0.05cm}\|\xi_t-\xi_s\|_{\mathrm{TV}+2}). \end{split} \end{equation} Since $t\mapsto \phi_t(\mu)$ is continuous in $\|\cdot\|_{\mathrm{TV}+2}$, it follows that the map \begin{equation} t\mapsto \partial_t\xi_t=2Q(\phi_t(\mu), \xi_t) \end{equation} is continuous and locally bounded in $\|\cdot\|_{\mathrm{TV}}$. Therefore, for all $t\ge 0$, the measure $\pi_t=\int_0^t |\partial_s\xi_s| ds$ is a finite measure, and $\partial_s \xi_s$ is absolutely continuous with respect to $\pi_t$ for all $0\le s\le t$. Therefore, by a result of Norris \cite[Lemma 11.1]{ACE} on the time variation of signed measures, there exists a measurable map $f: [0,\infty)\times \mathbb{R}^d \rightarrow \{-1,0,1\}$ such that \begin{equation}\label{eq: definition of f} \xi_t=f_t|\xi_t|;\hspace{1cm} |\xi_t|=|\xi_0|+\int_0^t f_s\partial_s\xi_s ds.\end{equation} Writing $\check{f}_s(v)=(1+|v|^k)f_s(v)$, we have the bound \begin{equation} \begin{split} &\hspace{1cm}\label{eq: gain of integrability of xi} \langle 1+|v|^k, |\xi_t|-|\xi_0|\rangle\\ &=\int_0^t ds \int_{\mathbb{R}^d\times\mathbb{R}^d\times S^{d-1}} (\check{f}_s(v')+\check{f}_s(v_\star')-\check{f}_s(v_\star)-\check{f}_s(v))|v-v_\star|\hspace{0.05cm}\phi_s(\mu)(dv_\star)\hspace{0.05cm}\xi_s(dv)d\sigma \\[1ex]& \le \int_0^t ds \int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} (2+|v'|^k+|v_\star'|^k+|v_\star|^k-|v|^k)|v-v_\star|\hspace{0.05cm}\phi_s(\mu)(dv_\star)\hspace{0.05cm}|\xi_s|(dv)d\sigma.\end{split} \end{equation} Now, there exists a constant $C_1=C_1(k)$ such that, for all $v, v_\star, \sigma$, we have the bound \begin{equation} 
|v'|^k+|v_\star'|^k+|v_\star|^k-|v|^k \le C_1(k)(|v|^{k-2}|v_\star|^2+|v_\star|^k).\end{equation} Therefore, for a different constant $C_2=C_2(k)$, \begin{equation} 2|v-v_\star|(2+|v'|^k+|v_\star'|^k+|v_\star|^k-|v|^k) \le C_2(k)(1+|v|^k)(1+|v_\star|^{k+1}).\end{equation} Using the moment bounds in Proposition \ref{thrm:momentinequalities}, we obtain, for some $c=c(k)$, \begin{equation} \begin{split} & \langle 1+|v|^k, |\xi_t|\rangle \le \langle 1+|v|^k, |\xi_0|\rangle \\& \hspace{2cm}+ C_2\int_0^t \int_{\mathbb{R}^d\times\mathbb{R}^d}(1+|v|^k)(1+|v_\star|^{k+1})|\xi_s|(dv)\phi_s(\mu)(dv_\star) ds \\[1ex] &\hspace{1cm} \le \langle1+|v|^k, |\xi_0|\rangle +c\Lambda_{k+1}(\mu)\int_0^t \langle 1+|v|^k, |\xi_s|\rangle ds.\end{split} \end{equation} Gr\"onwall's lemma now gives the claimed moment bound. For the continuity statement, if $k'$ is chosen large enough that (\ref{eq: stability for BE 1.5}) holds for some $\eta<1$, then (\ref{eq: bound TV of Q}) gives the bound \begin{equation}\label{eq: continuity in TV} \|Q(\phi_t(\mu), \xi_t)\|_\mathrm{TV} \le C_3\Lambda_{k'}(\mu, \nu) \end{equation} and therefore, for all $0\le s\le t$, \begin{equation} \label{eq: continuity in TV'} \|\xi_t-\xi_s\|_\mathrm{TV} \le C_3\Lambda_{k'}(\mu, \nu)(t-s).\end{equation} The continuity statement follows by combining (\ref{eq: continuity in TV'}) with the moment bound for $2k$ and the interpolation \begin{equation} \|\xi_t-\xi_s\|_{\mathrm{TV}+k}\le \|\xi_t-\xi_s\|^{1/2}_{\mathrm{TV}}\hspace{0.1cm}\|\xi_t+\xi_s\|_{\mathrm{TV}+2k}^{1/2} \end{equation} and using the correlation property (Lemma \ref{lemma: correlation of moments}) to absorb both moment terms. \end{proof} We can now prove the second claimed equality in Lemma \ref{lemma:DAP}. \begin{lemma} Let $\mu^N \in \mathcal{S}_N$, for $N\ge 2$.
Then we have the equality \begin{equation}\label{eq:DAP2}\begin{split}&\hspace{1cm}\mathcal{D}\phi_t(\mu^N)\left[Q(\mu^N)\right] \\& = \int_{\mathbb{R}^d \times \mathbb{R}^d \times S^{d-1}} \hspace{0.05cm} \mathcal{D}\phi_t(\mu^N)[\mu^{N, v, v_\star, \sigma}-\mu^N] \hspace{0.05cm}N\hspace{0.05cm} |v-v_\star|d\sigma \mu^N(dv)\mu^N(dv_\star) \end{split} \end{equation} where the right-hand side is a Bochner integral in the space $(Y_2,\|\cdot\|_{\mathrm{TV}+2})$. In particular, the equality (\ref{eq:DAP}) holds. \end{lemma} \begin{proof} We exploit the fact that, for $\delta>0$ small enough, we have \begin{equation} \label{eq: proof of dap delta is small} \mu^N + \delta Q(\mu^N)\in \mathcal{S}; \hspace{0.5cm} \forall v, v_\star, \sigma, \hspace{0.2cm}\mu^N +\delta[\mu^{N,v,v_\star,\sigma}-\mu^N] \in \mathcal{S}. \end{equation} We will assume that $\delta>0$ is chosen so that this holds. For $v, v_\star \in \text{Supp}(\mu^N)$ and $\sigma \in S^{d-1}$, we define $\xi^{N,v,v_\star,\sigma}_t$ by the differential equation \begin{equation} \xi^{N,v,v_\star,\sigma}_0 = \delta [\mu^{N, v, v_\star, \sigma}-\mu^N]; \hspace{1cm} \partial_t \xi^{N,v,v_\star,\sigma}_t = 2Q(\phi_t(\mu^N),\xi^{N,v,v_\star,\sigma}_t). \end{equation} From Proposition \ref{thrm: stability for BE}, the solution to this equation exists and is unique. By the characterisation of the derivative $\mathcal{D}\phi_t(\mu^N)$, we also have \begin{equation} \xi^{N,v,v_\star,\sigma}_t = \delta \hspace{0.1cm} \mathcal{D}\phi_t(\mu^N)[\mu^{N,v,v_\star,\sigma}-\mu^N].\end{equation} From Lemma \ref{lemma: moments of xi}, we also have the bound $\| \xi^{N,v,v_\star,\sigma}_s\|_{\mathrm{TV}+4} \leq C$ for all $s\le t$, and for some constant $C=C(\mu^N,N,t)$ independent of $v, v_\star$ and $\sigma$.
In this notation, we wish to establish the equality \begin{equation}\label{eq: desired equality for DAP} \mathcal{D}\phi_t(\mu^N)[Q(\mu^N)]\equalsquestion\int_{\mathbb{R}^d\times\mathbb{R}^d\times S^{d-1}} \xi^{N,v,v_\star,\sigma} |v-v_\star|\mu^N(dv)\mu^N(dv_\star) d\sigma.\end{equation} From the bound above, the right-hand side is well-defined as a Bochner integral in $(Y_2, \|\cdot\|_{\mathrm{TV}+2})$.
\\ Firstly, arguing as in (\ref{eq: bound TV of Q}), for all $t \ge 0$, there is a constant $C=C(\mu^N, N, t)$ such that, for all $v, v_\star, \sigma$ and $s\le t$, we have \begin{equation}\begin{split} &\|Q(\phi_t(\mu^N),\xi^{N,v,v_\star,\sigma}_s)\|_{\mathrm{TV}+3} \le C.\end{split}\end{equation} We now define \begin{equation} \xi_t = \int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} \xi^{N,v,v_\star,\sigma}_t |v-v_\star| d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star) \end{equation} where the right-hand side is a Bochner integral in $(Y_3, \|\cdot\|_{\mathrm{TV}+3})$. From the definition (\ref{eq: defn of Q}) of $Q$, we have \begin{equation} \xi_0 = \delta \hspace{0.1cm}N^{-1}\hspace{0.1cm} Q(\mu^N).\end{equation} Moreover, using Fubini's theorem, we can express \begin{equation} \label{eq: use fubini}\begin{split} &\xi_t-\xi_0 \\[1ex] &=\int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} \left\{ \int_0^t 2Q(\phi_s(\mu^N),\xi^{N,v,v_\star,\sigma}_s) ds\right\} |v-v_\star| d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star) \\[1ex]& =\int_0^t \left\{ \int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} 2Q(\phi_s(\mu^N),\xi^{N,v,v_\star,\sigma}_s) |v-v_\star| d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star) \right\} ds.\end{split}\end{equation} The same argument as in (\ref{eq: bound TV of Q}) shows that, for fixed $\mu\in \mathcal{S}^3$, the map \begin{equation} Q(\mu, \cdot): (Y_3, \|\cdot\|_{\mathrm{TV}+3})\rightarrow (Y_2, \|\cdot\|_{\mathrm{TV}+2});\hspace{1cm}\xi\mapsto Q(\mu, \xi) \end{equation} is a bounded linear map. It follows that, for all $s \ge 0$, \begin{equation} Q(\phi_s(\mu^N), \xi_s)=\int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} Q(\phi_s(\mu^N),\xi^{N,v,v_\star,\sigma}_s) |v-v_\star| d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star) \end{equation} as an equality of Bochner integrals in $(Y_2, \|\cdot\|_{\mathrm{TV}+2})$.
Therefore, (\ref{eq: use fubini}) shows that, for all $t\ge 0$,\begin{equation}\label{eq: integral eqn for xi t} \xi_t=\xi_0+\int_0^t 2Q(\phi_s(\mu^N), \xi_s)ds. \end{equation} From Lemma \ref{lemma: moments of xi}, there exists a constant $C=C(\mu^N,N,t)$ such that, for all $v, v_\star, \sigma$ and $0\le s \le t$, \begin{equation} \|\xi^{N,v,v_\star, \sigma}_t-\xi^{N,v,v_\star, \sigma}_s\|_{\mathrm{TV}+2} \le C(t-s)^\frac{1}{2}\end{equation} and therefore, for a different constant $C'$, \begin{equation} \|\xi_t-\xi_s\|_{\mathrm{TV}+2} \le C'(t-s)^\frac{1}{2}.\end{equation} By the same reasoning as (\ref{eq: continuity of partial t xi t}), we see that the map $t\mapsto 2Q(\phi_t(\mu^N),\xi_t)$ is continuous with respect to the norm $\|\cdot\|_{\mathrm{TV}+2}$, and so we may differentiate (\ref{eq: integral eqn for xi t}) to obtain $\partial_t \xi_t = 2Q(\phi_t(\mu^N),\xi_t)$. From Proposition \ref{thrm: stability for BE}, this uniquely characterises the derivative $\mathcal{D}\phi_t(\mu^N)[\delta \hspace{0.05cm}N^{-1}\hspace{0.05cm} Q(\mu^N)]$. Hence we have the claimed equality\begin{equation} \begin{split} &\mathcal{D}\phi_t(\mu^N)[Q(\mu^N)] =\delta^{-1} N \xi_t \\& =\delta^{-1} \int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} \xi^{N,v,v_\star,\sigma}_t |v-v_\star|\hspace{0.05cm}N\hspace{0.05cm} d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star) \\& =\int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} \mathcal{D}\phi_t(\mu^N)[\mu^{N,v,v_\star,\sigma}-\mu^N] |v-v_\star|\hspace{0.05cm}N\hspace{0.05cm} d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star). \end{split} \end{equation} \end{proof}
\section{Proof of Theorem \ref{thrm: PW convergence}} \label{sec: proof of pw}
The main difficulty in obtaining a pathwise statement is the martingale term $M^{N,f}_t$ in Formula \ref{form:newdecomposition}, which we defined above as \begin{equation} M^{N,f}_t = \int_{(0,t]\times \mathcal{S}_N} \bigg \langle f, \phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_{s-})\bigg\rangle (m^N-\overline{m}^N)(ds, d\mu^N). \end{equation} Recall the definition of $\mathcal{A}$ as those functions $f: \mathbb{R}^d \rightarrow \mathbb{R}$ satisfying \begin{equation} \forall \hspace{0.1cm} v, v'\in \mathbb{R}^d, \hspace{0.5cm}|\hat{f}(v)|\leq 1; \hspace{1cm}|\hat{f}(v)-\hat{f}(v')|\leq |v-v'|.\end{equation} We will be interested in controlling an expression of the form $\sup_{f\in\mathcal{A}} |M^{N,f}_t|$, either pointwise in time, or (pathwise) locally uniformly in time. However, unlike in the finite-dimensional cases in \cite{D&N}, we cannot directly apply estimates from the elementary theory of martingales, as such estimates degrade in large dimensions. Instead, we will use the relative compactness discussed in Remark \ref{rmk: compactness} to argue that this is an \emph{effectively finite-dimensional} problem. More precisely, we show that it can be approximated by a discretised, finite-dimensional martingale problem, with the following trade-off: making the truncation error small requires taking a large (but finite) dimensional approximation. As in \cite{D&N,ACE}, the martingale term is `small' as a function of $N$, but grows with the dimension of the approximation. By optimising over the discretisation, we will be able to balance the two terms and obtain a useful estimate on the family of processes. This is the same approach as that used for an equivalent problem in \cite[Theorem 1.1]{ACE}.
\\ Obtaining the best exponents of $N$ that we have been able to achieve relies on a `hierarchical decomposition'. This approach was inspired by a similar technique used in \cite[Proposition 7.1]{ACE}.
\begin{lemma} \label{thrm: pointwise martingale control} Let $\epsilon>0$, $a\ge 1$ and $0<\lambda<\lambda_0$. Let $k$ be large enough that Corollary \ref{cor: new stability for BE} holds with $q=4$, exponent $\lambda$ and H\"{o}lder exponent $1-\epsilon$. \\ Let $(\mu^N_t)_{t\geq 0}$ be a Kac process in dimension $d\geq 3$, with initial moment $\Lambda_{k}(\mu^N_0) \leq a$. Let $M^{N,f}_t$ be the processes given by (\ref{eq: definition of MNFT}). Then we have, uniformly in $t\geq 0$, \begin{equation} \left\|\hspace{0.1cm} \sup_{f\in \mathcal{A}} \left|M^{N,f}_t\right| \hspace{0.1cm} \right\|_{L^2(\mathbb{P})} \lesssim a^{1/2} \hspace{0.1cm} N^{\epsilon-1/d}.\end{equation} \end{lemma} Once we have obtained the control of the martingale term, the remaining proof of Theorem \ref{thrm: PW convergence} is straightforward. \begin{proof}[Proof of Theorem \ref{thrm: PW convergence}] Take $k=k(\epsilon)$ as in Lemma \ref{thrm: pointwise martingale control}, and such that Proposition \ref{thrm: stability for BE} holds with exponent $\max(1-\epsilon, \frac{1}{2})$.
\\ We first note that it is sufficient to prove the case $\mu_0=\mu^N_0$. Given this case, we use the continuity established in Theorem \ref{thrm: W-W continuity of phit} to estimate the difference \begin{equation} W\left(\phi_t(\mu^N_0), \phi_t(\mu_0)\right) \lesssim a^{1/2} W(\mu^N_0, \mu_0)^\zeta\end{equation} for some $\zeta=\zeta(d,k)$, which implies the claimed result.
\\ From now on, we assume that $\mu_0=\mu^N_0$. From the interpolation decomposition Formula \ref{form:newdecomposition}, we majorise \begin{equation} \label{eq: dominate pointwise bound} W\left(\mu^N_t, \phi_t\left(\mu^N_0\right)\right) \leq \sup_{f\in \mathcal{A}} \left|M^{N,f}_t\right| + \int_0^t \sup_{f\in \mathcal{A}} \hspace{0.1cm} \langle f, \rho^N(t-s, \mu^N_s)\rangle \hspace{0.1cm} ds \end{equation} where, as in (\ref{eq: definition of psi}, \ref{eq: definition of rho}), the integrand is given by \begin{equation} \langle f, \rho^N(t-s, \mu^N_s)\rangle = \int_{\mathcal{S}_N} \langle f, \psi(t-s,\mu^N_s, \nu)\rangle \mathcal{Q}_N(\mu^N, d\nu); \end{equation} \begin{equation} \psi(u,\mu, \nu)=\phi_u(\nu)-\phi_u(\mu)-\mathcal{D}\phi_u(\mu)[\nu-\mu] \end{equation} and $\mathcal{Q}_N$ is the transition kernel (\ref{eq: definition of script Q}) of the Kac process.
\\ The first term of (\ref{eq: dominate pointwise bound}) is controlled in $L^2$ by Lemma \ref{thrm: pointwise martingale control}, and so it remains to bound the second term in $L^2$. Let $s\ge 0$, and let $\mu^N$ be a measure obtained from $\mu^N_s$ by a collision, as in (\ref{eq: change of measure at collision}). Then, using the estimate (\ref{eq: stability for BE 2}), we bound
\begin{equation} \begin{split} \|\psi(t-s, \mu^N_s, \mu^N)\|_{\mathrm{TV}+2} & = \|\phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_s)-\mathcal{D}\phi_{t-s}(\mu^N_s)[\mu^N-\mu^N_s]\|_{\mathrm{TV}+2} \\[2ex] &\lesssim e^{-\lambda_0(t-s)/2} \|\mu^N-\mu^N_s\|^{2-\epsilon}_\mathrm{TV}\hspace{0.1cm}\Lambda_k(\mu^N, \mu^N_s)^\frac{1}{2}. \end{split}\end{equation} By Lemma \ref{lemma:momentincreaseatcollision}, we know that $\Lambda_k(\mu^N)\lesssim \Lambda_k(\mu^N_s)$. Moreover, from the form (\ref{eq: change of measure at collision}) of possible $\mu^N$, we know that \begin{equation} \|\mu^N-\mu^N_s\|_\mathrm{TV}\le \frac{4}{N}\hspace{0.5cm}\text{for }\mathcal{Q}_N(\mu^N_s, \cdot)\text{-almost all }\mu^N. \end{equation} Therefore, almost surely, for all $s$ and $\mathcal{Q}_N(\mu^N_s, \cdot)$-almost all $\mu^N$, we have the bound \begin{equation} \|\psi(t-s,\mu^N_s, \mu^N)\|_{\mathrm{TV}+2} \lesssim e^{-\lambda_0(t-s)/2} N^{\epsilon-2}\hspace{0.1cm}\Lambda_k(\mu^N_s)^\frac{1}{2} \end{equation} where the implied constants are independent of $s, \mu^N_s$. Integrating with respect to $\mathcal{Q}_N(\mu^N_s, d\mu^N)$, we obtain an upper bound for $\langle f, \rho^N(t-s,\mu^N_s)\rangle$:\begin{equation} \label{eq: dominate rho} \begin{split} \sup_{f\in \mathcal{A}}\hspace{0.05cm} \langle f, \rho^N(t-s, \mu^N_s)\rangle &\leq \int_{\mathcal{S}_N} \left\|\psi(t-s, \mu^N_s, \mu^N)\right\|_{\mathrm{TV}+2} \hspace{0.1cm} \mathcal{Q}_N(\mu^N_s, d\mu^N) \\ & \lesssim e^{-\lambda_0(t-s)/2} \hspace{0.1cm} N^{\epsilon-1} \hspace{0.1cm} \Lambda_k(\mu^N_s)^\frac{1}{2}.\end{split} \end{equation} We now take the $L^2$ norm of the second term in (\ref{eq: dominate pointwise bound}).
Using Proposition \ref{lemma:momentboundpt1} to control the moments $\Lambda_k$ appearing in the integral, we obtain \begin{equation} \begin{split} \left\|\hspace{0.1cm} \int_0^t \sup_{f\in \mathcal{A}} \hspace{0.1cm} \langle f, \rho^N(t-s, \mu^N_s)\rangle \hspace{0.1cm} ds \hspace{0.1cm}\right\|_{L^2(\mathbb{P})} &\leq \mathlarger{\mathlarger{\int}}_0^t \hspace{0.1cm} \left\| \sup_{f\in \mathcal{A}} \hspace{0.1cm} \langle f, \rho^N(t-s, \mu^N_s)\rangle \right\|_{L^2(\mathbb{P})} ds \\ & \lesssim \int_0^t e^{-\lambda(t-s)/2}\hspace{0.1cm} N^{\epsilon-1} \hspace{0.1cm} \left\|\Lambda_{k}(\mu^N_s)^\frac{1}{2}\right\|_{L^2(\mathbb{P})} ds \\ & \lesssim N^{\epsilon-1} \hspace{0.1cm} a^{1/2}. \end{split} \end{equation} Noting that the exponent $\epsilon-1 < \epsilon-\frac{1}{d}$, we combine this with Lemma \ref{thrm: pointwise martingale control}, and keep the worse asymptotics. \end{proof} \begin{proof}[Proof of Lemma \ref{thrm: pointwise martingale control}] We begin by reviewing the following estimates for $1$-Lipschitz functions from \cite{ACE}. Following \cite{ACE}, we use angle brackets $\langle f \rangle_C $ to denote the average of a bounded function $f$ over a Borel set $C$ of finite, nonzero measure. \\ Let $f$ be $1$-Lipschitz, and consider $B=[0,2^{-j}]^d$. Then, for some numerical constant $c_d$, we have \begin{equation} \label{eq:scalebound} \forall v \in B, \hspace{0.1cm} |f(v)-\langle f \rangle_B|\le c_d 2^{-j};\hspace{1cm} |\langle f \rangle _B -\langle f \rangle _{2B} | \le c_d 2^{-j}. \end{equation} We note that both of these bounds are linear in the length scale $2^{-j}$ of the box. We deal with the case $N\ge 2^{2d}$.
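\\ (The bounds (\ref{eq:scalebound}) follow from the Lipschitz property alone; a short verification, with the explicit choice $c_d=2\sqrt{d}$: for $v\in B$, \begin{equation*} |f(v)-\langle f \rangle_B| \le \frac{1}{|B|}\int_B |f(v)-f(w)|\hspace{0.05cm}dw \le \mathrm{diam}(B) = \sqrt{d}\hspace{0.1cm}2^{-j}, \end{equation*} and similarly, interpreting $2B$ as the dilated box, $|\langle f \rangle _B -\langle f \rangle _{2B}| \le \mathrm{diam}(2B) \le 2\sqrt{d}\hspace{0.1cm}2^{-j}$.)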
\\ The proof is based on the following `hierarchical' partition of $\mathbb{R}^d$, given in the proof of \cite[Proposition 7.1]{ACE}. \begin{itemize} \item For $j \in \mathbb{Z}$, we take $B_j=(-2^j, 2^j]^d$. \item Set $A_0 = B_0$ and, for $j\geq 1$, $A_j = B_j \setminus B_{j-1}$. \item For $j\geq 1$ and $l \ge 2$, there is a unique partition $\mathcal{P}_{j,l}$ of $A_j$ by $2^{ld}-2^{(l-1)d}$ translates of $B_{j-l}$. \item Similarly, write $\mathcal{P}_{0,l}$ for the unique partition of $A_0$ by $2^{dl}$ translates of $B_{-l}$. \item For $j\ge 0$ and $l\geq 3$, let $B\in \mathcal{P}_{j,l}$. We write $\pi(B)$ for the unique element of $\mathcal{P}_{j,l-1}$ such that $B\subset \pi(B)$.\end{itemize} We deal first with the case $d\geq 3$. Fix discretisation parameters $L, J \ge 1$. Given a test function $f\in \mathcal{A}$, we can decompose \begin{equation} f=\sum_{j=0}^J \hspace{0.2cm} \sum_{l=2}^L \hspace{0.1cm} \sum_{B \in \mathcal{P}_{j,l}} a_B(f)(1+|v|^2) 1_{B}+ \beta(f)\end{equation} where we define \begin{equation} a_B(f)=\begin{cases} \langle \hat{f} \rangle_ B & \text{if } B \in \mathcal{P}_{j,2}, \text{ for some } j \ge 0 \\ \langle \hat{f} \rangle _B - \langle \hat{f} \rangle _{\pi(B)} & \text{if } B \in \mathcal{P}_{j,l}, \text{ for some } j \ge 0, l\ge 3\end{cases} \end{equation} and the equation serves to define the remainder term $\beta(f)$. Write $h_B = 2^{2j}(1+|v|^2)1_B$, for $B\in \mathcal{P}_{j,l}$, and write $M^{N;B}_t = M^{N, h_B}_t$. We can now write \begin{equation}\begin{split} \label{eq: decomposition of MNFT} & M^{N,f}_t= \sum_{j=0}^J\sum_{l=2}^L \sum_{B \in \mathcal{P}_{j,l}} 2^{-2j}a_B(f) M^{N;B}_t +R^{N,f}_t; \end{split} \end{equation} \begin{equation} \label{eq: definition of RNFT} R^{N,f}_t= \int_{(0,t]\times \mathcal{S}_N} \langle \beta(f), \Delta(s,t,\mu^N)\rangle (m^N-\overline{m}^N)(ds, d\mu^N) \end{equation} where $\Delta$, $m^N$ and $\overline{m}^N$ are defined in Section \ref{sec: interpolation decomposition}.
This is the key decomposition in the proof. Roughly speaking: \begin{itemize} \item The martingales $M^{N;B}$ are controlled by a bound (\ref{eq: elltwo martingale control}) from the general theory of Markov chains, \emph{independently of f}. \item The coefficients $a_B$ depend on $f$, but are bounded, uniformly over $f\in \mathcal{A}$. \item On $B_J$, $\beta(f)$ will be small, uniformly in $f$, due to the Lipschitz bound on $f$ and the estimate (\ref{eq:scalebound}). This may be viewed as a \emph{relative compactness} argument, as discussed in Remark \ref{rmk: compactness}: given $\epsilon>0$, one could use this construction to produce a finite $\epsilon$-net for $\mathcal{A}|_{B_J}$ in the norm of $\mathcal{A}_0|_{B_J}$. \item $|\beta(f)|\leq 1$ is bounded on $\mathbb{R}^d\setminus B_J$, and the contribution from this region will be controlled by the moment bounds.\end{itemize} To control the martingale term uniformly in $f$, observe that for $B \in \mathcal{P}_{j,l}$, the bound (\ref{eq:scalebound}) gives $2^{-2j}|a_B(f)|\lesssim 2^{-j-l}$, and $\#\mathcal{P}_{j,l}\le 2^{dl}$. Hence, independently of $f\in \mathcal{A}$, \begin{equation} \left(\sum_{j=0}^J \sum_{B \in \mathcal{P}_{j,l}} (a_B(f)2^{-2j})^2\right) \lesssim 2^{(d-2)l}.\end{equation} Now, by Cauchy-Schwarz, \begin{equation} \label{eq: use of CS} \sup_{f\in \mathcal{A}} \hspace{0.1cm} \left|\sum_{j=0}^J\sum_{l=2}^L \sum_{B \in \mathcal{P}_{j,l}} 2^{-2j}a_B(f) M^{N;B}_t\right| \lesssim \sum_{l=2}^L \left(\sum_{j=0}^J \sum_{B \in \mathcal{P}_{j,l}} \left\{M^{N;B}_t\right\}^2\right)^{1/2}2^{(d/2-1)l}. 
\end{equation} Let $(M^{N;B;t}_s)_{s\leq t}$ be the martingale \begin{equation} \label{eq: MNBTS} M^{N;B;t}_s = \int_{(0,s]\times\mathcal{S}_N} \langle h_B, \Delta(u,t,\mu^N)\rangle (m^N-\overline{m}^N)(du, d\mu^N).\end{equation} We can control the remaining martingale term pointwise in $L^2$ by applying the martingale bound (\ref{eq: elltwo martingale control}) at the terminal time $t$: \begin{equation} \begin{split} & \left\|M^{N;B}_t\right\|_{L^2(\mathbb{P})}^2 = \mathbb{E} \int_{(0,t]\times \mathcal{S}_N} \langle (1+|v|^2)2^{2j}1_{B}, \Delta(s,t,\mu^N)\rangle ^2 \overline{m}^N(ds, d\mu^N) \\& \lesssim \mathbb{E} \left[\int_{(0,t]\times \mathcal{S}_N} \langle (1+|v|^4)1_{B}, |\Delta(s,t,\mu^N)|\rangle ^2 \overline{m}^N(ds, d\mu^N)\right].\end{split} \end{equation} Summing over $B\in \mathcal{P}_{j,l}$ and $j=0, \dots, J$, we use Minkowski's inequality to move the sum inside the integral against $\Delta$, and note that $\sum_j \sum_{B\in\mathcal{P}_{j,l}} h_B \lesssim (1+|v|^4)$. This produces the bound \begin{equation}
\begin{split} &\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\|M^{N;B}_t\right\|_{L^2(\mathbb{P})}^2 \lesssim \mathbb{E} \left[\int_{(0,t]\times \mathcal{S}_N} \langle (1+|v|^4), |\Delta(s,t,\mu^N)|\rangle ^2 \overline{m}^N(ds, d\mu^N) \right] \\[1ex] & = \mathbb{E} \left[\int_{(0,t]\times \mathcal{S}_N} \|\phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_{s-})\|^2_{\mathrm{TV}+4} \hspace{0.1cm} \overline{m}^N(ds, d\mu^N) \right]
\end{split}\end{equation} where the second line follows by the definition of $\Delta$ in (\ref{eq: definition of Delta}). Using the stability estimates in Corollary \ref{cor: new stability for BE} with $q=4$, we find \begin{equation} \begin{split} \sum_{j=0}^J\sum_{B\in\mathcal{P}_{j,l}} \left\|M^{N;B}_t\right\|_{L^2(\mathbb{P})}^2 \lesssim \mathbb{E} \left[\int_{(0,t]\times \mathcal{S}_N} e^{-\lambda(t-s)}\Lambda_k(\mu^N_s, \mu^N) N^{2(\epsilon-1)}\hspace{0.1cm}\overline{m}^N(ds, d\mu^N) \right]. \end{split} \end{equation} For $\overline{m}^N$-almost all $(s, \mu^N)$, we bound $\Lambda_k(\mu^N_s, \mu^N)\lesssim \Lambda_k(\mu^N_s)$ by Lemma \ref{lemma:momentincreaseatcollision}, and $\overline{m}^N(ds, \mathcal{S}_N)\le 2Nds$, to bound the right hand side by \begin{equation} \begin{split} \sum_{j=0}^J\sum_{B\in\mathcal{P}_{j,l}} \left\|M^{N;B}_t\right\|_{L^2(\mathbb{P})}^2 &\lesssim \int_0^t e^{-\lambda(t-s)}N^{2\epsilon-1} \hspace{0.1cm} \mathbb{E}[\Lambda_k(\mu^N_s)]\hspace{0.1cm}ds \\ & \lesssim N^{2\epsilon-1}a\end{split} \end{equation} where the second line follows using the moment estimates for the Kac process, established in Proposition \ref{thrm:momentinequalities}. Therefore, (\ref{eq: use of CS}) gives \begin{equation} \begin{split} \label{eq: pointwise bound on martingale term} \left\|\hspace{0.1cm} \sup_{f\in \mathcal{A}} \hspace{0.1cm} \left|\sum_{j=0}^J\sum_{l=2}^L \sum_{B \in \mathcal{P}_{j,l}} 2^{-2j}a_B(f) M^{N;B}_t\right|\hspace{0.1cm} \right\|_{L^2(\mathbb{P})} & \lesssim N^{\epsilon-1/2} a^{1/2} \sum_{l=2}^L 2^{(d/2-1)l} \\[1ex] & \lesssim N^{\epsilon-1/2} \hspace{0.2cm}2^{(d/2-1)L}\hspace{0.1cm}a^{1/2}. \end{split}
\end{equation} It remains to control $\beta(f)$, uniformly in $f\in\mathcal{A}$, dealing with $B_J$ and $\mathbb{R}^d \setminus B_J$ separately. Fix $f\in \mathcal{A}$ and let $B \in \mathcal{P}_{j,L}$ with $j\le J$. The definition gives $\hat{\beta}(f)= \hat{f} - \langle \hat{f} \rangle _B$ on $B$, and so \begin{equation} \text{On } B, \hspace{0.5cm} |\beta(f)| = (1+|v|^2)|\hat{f}-\langle \hat{f} \rangle _B| \hspace{0.1cm}\lesssim \hspace{0.1cm} (1+|v|^2)2^{j-L}. \end{equation} Since $|v|\ge 2^{j-1}$ on $B$ for $j\ge 1$, and $B \in \mathcal{P}_{j,L}$ is arbitrary, we see that \begin{equation} \text{On } B_J, \hspace{0.5cm} |\beta(f)|\lesssim 2^{-L}(1+|v|^4).\end{equation} On the other hand, the uniform bound $\|\hat{f}\|_\infty \le 1$ implies that \begin{equation} \text{On }B_J^c, \hspace{0.5cm} |\beta(f)|\leq (1+|v|^2) \leq 2^{-2J}(1+|v|^4).\end{equation} Combining, we have the global bound, for all $f\in \mathcal{A}$:\begin{equation} \forall v\in \mathbb{R}^d, \hspace{0.5cm} |\beta(f)| \lesssim (2^{-2J}+2^{-L})(1+|v|^4).\end{equation} Recalling the definition (\ref{eq: definition of Delta}) of $\Delta$, we use the stability estimate in Corollary \ref{cor: new stability for BE}, with $q=4$, and the moment increase bound of Lemma \ref{lemma:momentincreaseatcollision}, as above, to see that almost surely, for $m^N+\overline{m}^N$-almost all $(s, \mu^N)$, we have the bound \begin{equation} \label{eq: majorise integrand of error term} \begin{split} \sup_{f\in \mathcal{A}} \left| \langle \beta(f), |\Delta(s,t,\mu^N)|\rangle \right| & \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}\|\Delta(s,t,\mu^N)\|_{\mathrm{TV}+4} \\ & \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm} e^{-\lambda(t-s)/2}\hspace{0.1cm} N^{\epsilon-1} \hspace{0.1cm} \Lambda_{k}(\mu^N_{s-})^\frac{1}{2} \\ &=:H_s\end{split}\end{equation} where $H_s$ is shorthand for the final expression.
We now use the trivial observation that \begin{equation} \label{eq: dominate integrand and integrator seperately} \sup_{f\in \mathcal{A}} \hspace{0.1cm}\left|R^{N,f}_t\right| \le \int_{(0,t]\times\mathcal{S}_N} \left\{\sup_{f\in \mathcal{A}}\hspace{0.1cm}\left\langle |\beta(f)|, |\Delta(s,t,\mu^N)|\right\rangle\right\}(m^N+\overline{m}^N)(ds, d\mu^N).\end{equation}We split the measure $m^N +\overline{m}^N= (m^N-\overline{m}^N)+2\overline{m}^N$ to obtain a uniform bound for the error terms $R^{N,f}_t$ defined in (\ref{eq: definition of RNFT}): \begin{equation} \label{eq: introduce t1 t2} \begin{split} \left\|\hspace{0.1cm}\sup_{f\in \mathcal{A}} \hspace{0.1cm}R^{N,f}_t\hspace{0.1cm}\right\|_{L^2(\mathbb{P})} & \lesssim \left\| \int_0^t H_s (m^N+\overline{m}^N)(ds,\mathcal{S}_N)\right\|_{L^2(\mathbb{P})} \\[1ex] & \lesssim (2^{-2J}+2^{-L})N^{\epsilon-1}\left[\mathcal{T}_1+\mathcal{T}_2\right]\end{split} \end{equation} where we have written \begin{equation} \mathcal{T}_1=\left\| \int_0^t e^{-\lambda(t-s)/2} \Lambda_{k}(\mu^N_{s-})^\frac{1}{2} \overline{m}^N(ds,\mathcal{S}_N)\right\|_{L^2(\mathbb{P})} \end{equation} \begin{equation} \mathcal{T}_2= \left\| \int_0^t e^{-\lambda(t-s)/2} \Lambda_{k}(\mu^N_{s-})^\frac{1}{2} (m^N-\overline{m}^N)(ds,\mathcal{S}_N)\right\|_{L^2(\mathbb{P})}. \end{equation} $\mathcal{T}_1$ is controlled by dominating $\overline{m}^N(ds, \mathcal{S}_N)\leq 2N ds$ to obtain \begin{equation} \begin{split} \label{eq:dominatembar} \mathcal{T}_1 \lesssim N \left\|\int_0^t e^{-\lambda(t-s)/2} \Lambda_{k}(\mu^N_{s})^\frac{1}{2} ds \right\|_{L^2(\mathbb{P})} & \lesssim N \int_0^t e^{-\lambda(t-s)/2} \|\Lambda_{k}(\mu^N_{s})^\frac{1}{2}\|_{L^2(\mathbb{P})} \hspace{0.1cm} ds \\[1ex] &\lesssim N a^{1/2}. 
\end{split}\end{equation} We control $\mathcal{T}_2$ by It\^{o}'s isometry for $m^N-\overline{m}^N$, which is reviewed in (\ref{eq: QV of M}): \begin{equation} \label{eq:itoisometrycontrol}\begin{split} \mathcal{T}_2^2 &= \mathbb{E} \left\{ \int_0^t e^{-\lambda(t-s)} \Lambda_k(\mu^N_{s-}) \overline{m}^N(ds,\mathcal{S}_N)\right\} \\& \lesssim N \int_0^t e^{-\lambda(t-s)} \mathbb{E}\left\{\Lambda_k(\mu^N_{s-}) \right\} ds \\ & \lesssim N \hspace{0.1cm}a.\end{split} \end{equation} Combining (\ref{eq: introduce t1 t2}, \ref{eq:dominatembar}, \ref{eq:itoisometrycontrol}), we obtain \begin{equation} \label{eq:control of error term pw} \begin{split} \left\|\hspace{0.1cm}\sup_{f\in \mathcal{A}} \hspace{0.1cm}R^{N,f}_t\hspace{0.1cm}\right\|_{L^2(\mathbb{P})} \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}N^{\epsilon}\hspace{0.1cm}a^{1/2}.\end{split} \end{equation} Finally, we combine (\ref{eq: decomposition of MNFT}, \ref{eq: pointwise bound on martingale term}, \ref{eq:control of error term pw}) to obtain \begin{equation} \left\|\hspace{0.1cm} \sup_{f \in \mathcal{A}} \left|M^{N,f}_t\right|\right\|_{L^2(\mathbb{P})} \hspace{0.2cm} \lesssim \hspace{0.2cm} N^{\epsilon} \hspace{0.1cm} a^{1/2} (N^{-1/2}\hspace{0.1cm}2^{(d/2-1)L}+2^{-L}+2^{-2J}). \end{equation} Taking $L=\lfloor \log_2(N)/d \rfloor$, so that $N^{-1/2}\hspace{0.05cm}2^{(d/2-1)L}\le N^{-1/d}$ and $2^{-L}\le 2\hspace{0.05cm}N^{-1/d}$, and letting $J\uparrow \infty$ produces the claimed result. For $d=2$, we replace $2^{(d/2-1)L}$ by $L$ in (\ref{eq: pointwise bound on martingale term}), and optimise as before, absorbing the factors of $(\log N)$ into a slightly larger exponent of $N$. \end{proof} \section{Proof of Theorem \ref{thrm: Main Local Uniform Estimate}} \label{sec: proof of LU}
We now adapt the ideas of Lemma \ref{thrm: pointwise martingale control} to a local uniform setting, working in $L^p$, to prove the local uniform approximation result Theorem \ref{thrm: Main Local Uniform Estimate}. As in the proof above, most of the work is in controlling the martingale term $(M^{N,f}_t)_{f\in \mathcal{A}}$ defined in (\ref{eq: definition of MNFT}), uniformly in $f$; for a pathwise local uniform estimate, we wish to control an expression of the form \begin{equation}\label{eq: local unf mg exp} \left\| \hspace{0.1cm}\sup_{f\in \mathcal{A}} \hspace{0.1cm}\sup_{t\leq t_\text{fin}} \hspace{0.1cm} \left|M^{N,f}_t\right|\right\|_{L^p(\mathbb{P})}. \end{equation} Since we will frequently encounter suprema of processes on compact time intervals, we introduce some notation: for any stochastic process $M$, we write \begin{equation}\label{eq: use of star} M_{\star,t}=\sup_{s\leq t}|M_s|. \end{equation} Proving the sharpest asymptotics in the time horizon $t_\text{fin}$ requires working in $L^p$ instead of $L^2$, for large exponents $p$. This leads to a weaker exponent in $N$: we obtain only $N^{\epsilon-p'/2d}$ instead of $N^{\epsilon -1/d}$, where $p'\leq 2$ is the H\"{o}lder conjugate of $p$. However, by making $p$ large, we are able to obtain estimates which degrade slowly in the time horizon $t_\text{fin}$, with only a factor of $(1+t_\text{fin})^{1/p}$. The exponent for $t_\text{fin}$ can thus be made arbitrarily small, while the resulting exponent for $N$ remains bounded away from $0$ as we make $p$ large. \\ \\ The key result required for the local uniform estimate is the following control of the expression (\ref{eq: local unf mg exp}), in analogy to Lemma \ref{thrm: pointwise martingale control}. \begin{lemma} \label{thrm: local uniform martingale control} Let $\epsilon>0$, $a\ge 1$ and $p\geq 2$, and let $1<p'\le 2$ be the H\"{o}lder conjugate to $p$.
Let $k$ be large enough that Corollary \ref{cor: new stability for BE} holds for $q=5$, with H\"{o}lder exponent $1-\epsilon$, and with some $0<\lambda<\lambda_0.$ \\ Let $(\mu^N_t)_{t\geq 0}$ be a Kac process on $N\geq 2$ particles, with initial moment $\Lambda_{kp}(\mu^N_0)\leq a^p$. Let $M^{N,f}_t$ be the processes given by (\ref{eq: definition of MNFT}), and $M^{N,f}_{\star, t}$ their local suprema, as in (\ref{eq: use of star}). Then, for any time horizon $t_\text{fin}\in [0,\infty)$, we have the control \begin{equation} \left\|\hspace{0.1cm} \sup_{f\in \mathcal{A}} \hspace{0.1cm} \hspace{0.1cm} M^{N,f}_{\star,t_\text{fin}}\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \lesssim a^{1/2} \hspace{0.1cm} N^{-\alpha}\hspace{0.1cm}(\log N)^{1/p'}\hspace{0.1cm}(1+t_\text{fin})^\frac{3p+1}{2p}\end{equation} where $\alpha = \frac{p'}{2d}-\epsilon$.\end{lemma} The proof of this Lemma follows the same ideas as the proof of the equivalent result, Lemma \ref{thrm: pointwise martingale control}, for the pointwise bound. However, in this case, we must modify the argument to work in $L^p$ rather than $L^2$, and also to control all terms uniformly on the compact time interval $[0, t_\text{fin}]$. This will be deferred until the end of this section.
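\\ (To make the dependence on $p$ explicit: since $p'=p/(p-1)$ decreases from $2$ to $1$ as $p$ increases from $2$ to $\infty$, the exponent in Lemma \ref{thrm: local uniform martingale control} satisfies \begin{equation*} \alpha = \frac{p'}{2d}-\epsilon \ \downarrow \ \frac{1}{2d}-\epsilon \qquad \text{as } p\to\infty, \end{equation*} so, as remarked above, the rate in $N$ degrades as $p$ grows, but remains bounded away from $0$ whenever $\epsilon<\frac{1}{2d}$.)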
\\
\\ Following the argument of the pointwise bound in Theorem \ref{thrm: PW convergence}, we can now produce an initial pathwise, local uniform estimate for the case $\mu_0=\mu^N_0$, with worse long-time behaviour. From this, we will `bootstrap' to the desired long-time behaviour in Theorem \ref{thrm: Main Local Uniform Estimate}. \begin{lemma} \label{lemma: initial LU bound} Let $\epsilon>0$, $a\ge 1$ and $p\geq 2$, with H\"older conjugate $p'\le 2$. Choose $k$ large enough that Proposition \ref{thrm: stability for BE} holds with exponent $1-\epsilon$, and that Corollary \ref{cor: new stability for BE} holds with exponent $1-\epsilon$ and $q=5$. Let $(\mu^N_t)_{t\geq 0}$ be a Kac process on $N\geq 2$ particles, with initial moment $\Lambda_{kp}(\mu^N_0)\leq a^p$. Then, for any time horizon $t_\text{fin}\ge 0$, we have the control \begin{equation} \left\| \hspace{0.1cm} \sup_{t\leq t_\text{fin}} \hspace{0.1cm}W\left(\mu^N_t, \phi_t\left(\mu^N_0\right)\right) \hspace{0.1cm} \right\|_{L^p(\mathbb{P})} \lesssim a^{1/2} \hspace{0.1cm}N^{\epsilon-\frac{p'}{2d}}\hspace{0.1cm} (\log N)^{1/p'}\hspace{0.1cm}(1+t_\text{fin})^\frac{3p+1}{2p}.\end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma: initial LU bound}] As in Theorem \ref{thrm: PW convergence}, it remains to control the supremum of the integral term in Formula \ref{form:newdecomposition}: \begin{equation} \sup_{t\leq t_\text{fin}} \int_0^t \sup_{f\in \mathcal{A}} \langle f, \rho^N(t-s, \mu^N_s)\rangle ds \end{equation} where $\rho^N$ is given by (\ref{eq: definition of rho}).
Following the previous calculation (\ref{eq: dominate rho}), we majorise, for $s\leq t\leq t_\text{fin}$, \begin{equation} \label{eq: loc unf dominate rho}\sup_{f\in \mathcal{A}} \langle f, \rho^N(t-s, \mu^N_s)\rangle \lesssim N^{\epsilon-1} \hspace{0.1cm} \sup_{u\leq t_\text{fin}} \left\{ \Lambda_{k}(\mu^N_u)^\frac{1}{2}\right\} \end{equation} from which it follows that \begin{equation} \label{eq: loc unf dominate rho 2} \sup_{t\leq t_\text{fin}} \int_0^t \sup_{f\in \mathcal{A}} \langle f, \rho^N(t-s, \mu^N_s)\rangle ds \lesssim N^{\epsilon-1} \hspace{0.1cm} t_\text{fin} \hspace{0.1cm} \sup_{u\leq t_\text{fin}} \left\{ \Lambda_{k}(\mu^N_u)^\frac{1}{2}\right\}.\end{equation} From the local uniform moment bound established in Proposition \ref{lemma:momentboundpt1}, and the initial moment bound on $\mu^N_0$, \begin{equation} \label{eq: locunf moment bound} \begin{split} \left\|\hspace{0.1cm} \sup_{u\leq t_\text{fin}} \left\{ \Lambda_{k}(\mu^N_u)^\frac{1}{2}\right\} \right\|_{L^p(\mathbb{P})} &\leq \left\| \hspace{0.1cm} \sup_{u\leq t_\text{fin}} \left\{ \Lambda_{k}(\mu^N_u)^\frac{1}{2}\right\} \right\|_{L^{2p}(\mathbb{P})} \leq \mathbb{E}\left[\sup_{u\leq t_\text{fin}} \Lambda_{pk}(\mu^N_u)\right]^{1/2p} \\[1ex] & \lesssim a ^{1/2} \hspace{0.1cm} (1+t_\text{fin})^{1/2p}. \end{split} \end{equation} Combining the estimates (\ref{eq: loc unf dominate rho 2}, \ref{eq: locunf moment bound}), we see that \begin{equation} \left\|\sup_{t\leq t_\text{fin}} \int_0^t \sup_{f\in \mathcal{A}} \langle f, \rho^N(t-s, \mu^N_s)\rangle ds\right\|_{L^p(\mathbb{P})} \lesssim N^{\epsilon-1}\hspace{0.1cm}a^{1/2} \hspace{0.1cm}(1+t_\text{fin})^\frac{2p+1}{2p}.\end{equation} Combining this with Lemma \ref{thrm: local uniform martingale control}, and keeping the worse of the two asymptotics, completes the proof.\end{proof}
We will now show how to `bootstrap' to better dependence on the time horizon $t_\text{fin}$. Heuristically, the proof allows us to replace powers of $t_\text{fin}$ in the initial bound with the same power of $\log N$, and introduce an additional factor of $(1+t_\text{fin})^{1/p}$. As was remarked below Proposition \ref{thrm: stability for BE}, we could derive Theorem \ref{thrm: PW convergence} and Lemma \ref{lemma: initial LU bound} under the milder assumptions \begin{equation} \label{eq: weaker stability 4} \|\phi_t(\nu)-\phi_t(\mu)\|_{\mathrm{TV}+5} \leq F(t) \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^\eta; \end{equation} \begin{equation}\label{eq: weaker stability 5}
\|\phi_t(\nu)-\phi_t(\mu) - \xi_t \|_{\mathrm{TV}+2} \leq G(t) \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^{1+\eta}\end{equation} for functions $F,G$ such that \begin{equation} \label{eq: weaker stability 6} \left(\int_0^\infty F^2 dt\right)^{1/2}<\infty;\hspace{0.5cm} \int_0^\infty G dt<\infty. \end{equation} If we also assume that $F\rightarrow 0$ as $t\rightarrow \infty$, we can use an identical bootstrap argument, with $\log N$ replaced by a power of \begin{equation} \tau_N := \sup\{t: F(t) > N^{-\alpha}\}\end{equation} which produces a potentially larger loss. \emph{Hence, the full strength of exponential decay in Proposition \ref{thrm: stability for BE} is used to control the asymptotic loss due to the bootstrap}.
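To illustrate the role of the decay rate, suppose for instance that $F$ decays only polynomially, say $F(t)\leq Ct^{-m}$ for some $C, m>0$. Then \begin{equation} \tau_N = \sup\{t: F(t)>N^{-\alpha}\} \leq \left(CN^{\alpha}\right)^{1/m}, \end{equation} so the bootstrap may lose a fractional power of $N$, whereas the exponential decay of Proposition \ref{thrm: stability for BE} gives $\tau_N\lesssim \log N$, and hence only a polylogarithmic loss.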
\begin{proof}[Proof of Theorem \ref{thrm: Main Local Uniform Estimate}] As in the proof of Theorem \ref{thrm: PW convergence}, it is sufficient to prove the case $\mu^N_0=\mu_0$. Then, making $k$ larger if necessary, we may use Theorem \ref{thrm: W-W continuity of phit} to control $\sup_{t\ge 0}W(\phi_t(\mu^N_0), \phi_t(\mu_0))$, which proves the general result.
\\ Let $0<\epsilon'<\epsilon$, and choose $k$ such that Lemma \ref{lemma: initial LU bound} holds for $\epsilon'$. Let $\alpha'>\alpha$ be the exponent of $N$ obtained with $\epsilon'$ in place of $\epsilon$. From the stability estimate Proposition \ref{thrm: stability for BE}, we have \begin{equation} \forall \mu, \nu \in \mathcal{S}^k_a, \hspace{0.2cm}\|\phi_t(\mu)-\phi_t(\nu)\|_{\mathrm{TV}+2} \lesssim \Lambda_{k}(\mu, \nu)^\frac{1}{2} e^{-\lambda_0 t/2}.\end{equation} Define $\tau = \tau_N = -2\lambda_0^{-1} \log(N^{-\alpha'})$ and consider $t_\text{fin}> \tau +1 $. Fix a positive integer $n$, and partition the interval $[0, t_\text{fin}]$ as $I_0\cup I_1 \cup \dots \cup I_n$: \begin{equation} I_0=[0,\tau]; \hspace{0.5cm}I_r = \left[\tau+(r-1)\frac{t_\text{fin}-\tau}{n},\tau+r\frac{t_\text{fin}-\tau}{n}\right]=:[s_r+\tau,t_r].\end{equation} Write also $H_r=[s_r, t_r] \supset I_r$. Since the norm $\|\cdot\|_{\mathrm{TV}+2}$ dominates the Wasserstein distance $W$, we have the bound\begin{equation} \label{eq: bootstrap bound}\sup_{t\in I_r} \hspace{0.1cm} W(\mu^N_t, \phi_t(\mu^N_0)) \lesssim \sup_{t\in H_r} \hspace{0.1cm} W(\mu^N_t, \phi_{t-s_r}(\mu^N_{s_r})) + e^{-\lambda_0 \tau/2} \Lambda_{k}(\mu^N_{s_r}, \phi_{s_r}(\mu^N_0))^\frac{1}{2}.\end{equation} We bound the two terms in (\ref{eq: bootstrap bound}) separately. Denote by $(\mathcal{F}^N_t)_{t\geq 0}$ the natural filtration of $(\mu^N_t)_{t\geq 0}$.
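For clarity, we record why $\tau$ is chosen in this way: by definition, $\tau = 2\alpha'\lambda_0^{-1}\log N$, so that the exponential factor produced by the stability estimate satisfies \begin{equation} e^{-\lambda_0 \tau/2} = \exp\left(-\alpha'\log N\right) = N^{-\alpha'}, \end{equation} matching the algebraic rate given by Lemma \ref{lemma: initial LU bound}.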
We control the first term by Lemma \ref{lemma: initial LU bound}, applied to the restarted process $(\mu^N_t)_{t\geq s_r}$: \begin{equation}\begin{split} \left\|\hspace{0.1cm}\sup_{t\in H_r} \hspace{0.1cm} W(\mu^N_t, \phi_{t-s_r}(\mu^N_{s_r})) \right \|_{L^p(\mathbb{P})} ^p = \mathbb{E}\left\{\mathbb{E}\left(\left[\left.\sup_{s_r \leq t \leq t_r} \hspace{0.1cm} W(\mu^N_t, \phi_{t-s_r}(\mu^N_{s_r}))\right]^p \right| \mathcal{F}^N_{s_r} \right) \right\} \\ \lesssim \mathbb{E} \left\{\Lambda_{pk}(\mu^N_{s_r})^{1/2}\right\} \left(1+ \tau + \frac{t_\text{fin}-\tau}{n}\right)^\frac{3p+1}{2} N^{-p\alpha'}\hspace{0.1cm}(\log N)^\frac{p}{p'}.\end{split}\end{equation} We control the moment in the usual way, using Proposition \ref{thrm:momentinequalities}\ref{lemma:momentboundpt1}, to obtain \begin{equation} \label{eq: Bootstrap bound 2} \left\|\hspace{0.1cm}\sup_{t\in H_r} \hspace{0.1cm} W(\mu^N_t, \phi_{t-s_r}(\mu^N_{s_r})) \right \|_{L^p(\mathbb{P})} ^p \lesssim a^{p/2} \left(1+ \tau + \frac{t_\text{fin}-\tau}{n}\right)^\frac{3p+1}{2} N^{-p\alpha'}\hspace{0.1cm}(\log N)^\frac{p}{p'}.\end{equation} We now turn to the second term in (\ref{eq: bootstrap bound}). Using the definition of $\tau$ and the moment estimates (\ref{eq: pointwise moment bound}, \ref{eq: BE moment bound}) in Proposition \ref{thrm:momentinequalities}, \begin{equation} \label{eq: Bootstrap bound 3} \|e^{-\lambda_0 \tau/2} \Lambda_{k}(\mu^N_{s_r}, \phi_{s_r}(\mu^N_0))^\frac{1}{2} \|_{L^p(\mathbb{P})} \lesssim N^{-\alpha'} \hspace{0.1cm} a^{1/2}.
\end{equation} Combining the estimates (\ref{eq: Bootstrap bound 2}, \ref{eq: Bootstrap bound 3}), and absorbing powers of $\tau$ into the powers of $(\log N)$, we obtain \begin{equation} \left\| \hspace{0.1cm} \sup_{t\in I_r} W(\mu^N_t, \phi_t(\mu^N_0)) \right\|_{L^p(\mathbb{P})} \lesssim a^{1/2} \hspace{0.1cm} \left(1+ \frac{t_\text{fin}-\tau}{n}\right)^\frac{3p+1}{2p}\left(N^{-\alpha'} (\log N)^{\frac{3p+1}{2p}+\frac{1}{p'}}\right).\end{equation} Observe that \begin{equation} \left\{ \sup_{\tau \leq t \leq t_\text{fin}} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \right\}^p \leq \mathlarger{\mathlarger{\sum}}_{r=1}^n \left\{ \sup_{t \in I_r} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \right\}^p. \end{equation} Taking expectations and $p^\text{th}$ root, we find that\begin{equation} \begin{split} &\left\|\hspace{0.1cm} \sup_{\tau \leq t \leq t_\text{fin}} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \right\|_{L^p(\mathbb{P})} \\& \hspace{2cm} \lesssim n^\frac{1}{p} \hspace{0.1cm} a^{1/2} \hspace{0.1cm} \left(1+ \frac{t_\text{fin}-\tau}{n}\right)^\frac{3p+1}{2p}\left(N^{-\alpha'} (\log N)^{\frac{3p+1}{2p}+\frac{1}{p'}}\right).\end{split} \end{equation} This is optimised at $n\sim (t_\text{fin}-\tau)$, where we obtain the estimate \begin{equation} \begin{split} \label{eq: tfin > tau} \left\|\hspace{0.1cm} \sup_{\tau \leq t \leq t_\text{fin}} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \right\|_{L^p(\mathbb{P})} &\lesssim a^{1/2} (t_\text{fin}-\tau)^\frac{1}{p}\hspace{0.1cm}\left(N^{-\alpha'} (\log N)^{\frac{3p+1}{2p}+\frac{1}{p'}}\right) \\ &\leq a^{1/2} \hspace{0.1cm} t_\text{fin}^\frac{1}{p}\hspace{0.1cm}\left(N^{-\alpha'} (\log N)^{\frac{3p+1}{2p}+\frac{1}{p'}}\right). 
\end{split} \end{equation} From Lemma \ref{lemma: initial LU bound} applied up to time $\tau=\tau_N$, we have \begin{equation} \begin{split} \label{eq:shorttimecontrolforiteration} \left\|\hspace{0.1cm} \sup_{0 \leq t \leq \tau_N} W\left(\mu^N_t, \phi_t\left(\mu^N_0\right)\right) \right\|_{L^p(\mathbb{P})} &\lesssim a^{1/2} \hspace{0.1cm} N^{-\alpha'} \left(1+\frac{2\alpha'}{\lambda_0} \log(N)\right)^{\frac{3p+1}{2p}}(\log N)^{\frac{1}{p'}} \\ &\lesssim a^{1/2} \hspace{0.1cm}\left(N^{-\alpha} (\log N)^{\frac{3p+1}{2p}+\frac{1}{p'}}\right).\end{split}\end{equation} Combining (\ref{eq: tfin > tau}, \ref{eq:shorttimecontrolforiteration}), and absorbing the powers of $(\log N)$ into $N^{\epsilon-\epsilon'}$, we have \begin{equation} \begin{split} \left\|\hspace{0.1cm} \sup_{0 \leq t \leq t_\text{fin}} W\left(\mu^N_t, \phi_t\left(\mu^N_0\right)\right) \right\|_{L^p(\mathbb{P})} \lesssim a^{1/2} \hspace{0.1cm}(1+t_\text{fin})^\frac{1}{p} \hspace{0.1cm}N^{-\alpha}.\end{split} \end{equation} The case where $t_\text{fin}\leq \tau+ 1$ is essentially identical to (\ref{eq:shorttimecontrolforiteration}). \end{proof} \begin{remark} We note that this `bootstrap' argument would produce the same result with any \emph{polynomial} time dependence in Lemma \ref{lemma: initial LU bound}. As a result, the precise time dependence of Lemmas \ref{thrm: local uniform martingale control}, \ref{lemma: initial LU bound} is uninteresting, and we do not attempt to optimise it. We also remark that this method produces the same long-time behaviour even starting from an exponential estimate, at the cost of a fractional power of $N$. \end{remark}
It remains to prove Lemma \ref{thrm: local uniform martingale control}. We draw attention to the fact that $M^{N,f}$ are \emph{not} themselves martingales, despite the general construction (\ref{eq: M is for martingale}), since the integrand $\phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_{s-})$ depends on the terminal time $t$. We address this by computing an associated family of martingales: \begin{lemma}\label{lemma:newmartingaleconstruction} Let $(M^{N,f}_t)_{t\geq 0}$ be the processes defined in Formula \ref{form:newdecomposition}. Recalling the notation $Q_t=Q\circ \phi_t$, define \begin{equation} \label{eq: defn of chi} \chi(s,t,\mu^N)=Q_{t-s}(\mu^N)-Q_{t-s}(\mu^N_{s-}).\end{equation} Suppose $f$ satisfies a growth condition $|f(v)|\leq (1+|v|^q)$, for some $q\geq 0$. Consider the martingales $Z^{N,f}_t$ given by \begin{equation}Z^{N,f}_t=\int_{(0,t]\times \mathcal{S}_N}\langle f,\mu^N-\mu^N_{s-}\rangle (m^N-\overline{m}^N)(ds, d\mu^N). \end{equation} Then we have the equality \begin{equation}\begin{split} &Z^{N,f}_t=M^{N,f}_t- C^{N,f}_t\\ &= M^{N,f}_t-\int_0^t ds \int_{(0,s]\times \mathcal{S}_N} \langle f, \chi(u,s,\mu^N) \rangle (m^N-\overline{m}^N)(du, d\mu^N).\end{split}\end{equation} \end{lemma} \begin{proof} First, we note that the $Z^{N,f}_t$ are martingales, by standard results for Markov chains (\ref{eq: M is for martingale}). Observe that the integrand in the definition of $C^{N,f}_t$ is bounded, since, whenever $0\leq u \leq s$ and $\mu^N$ is obtained from $\mu^N_{u-}$ by a collision, the estimate (\ref{eq: Lipschitz continuity of Q}) with $\eta=\frac{1}{2}$ gives, for some $k$, \begin{equation} \begin{split} |\langle f, \chi(u,s,\mu^N)\rangle | & \leq \| Q_{s-u}(\mu^N)-Q_{s-u}(\mu^N_{u-})\|_{\mathrm{TV}+q} \\[1ex] & \lesssim\Lambda_{k}(\mu^N, \mu^N_{u-})^\frac{1}{2}N^{-\frac{1}{2}} \lesssim N^{\frac{k-2}{4}} <\infty.
\end{split} \end{equation} Moreover, for initial data $\mu^N \in \mathcal{S}_N$, the Boltzmann flow $(\phi_s(\mu^N))_{s=0}^t$ has uniformly bounded $(q+1)^\text{th}$ moments and so, by approximation, the Boltzmann dynamics (\ref{BE}) extend to $f$. Now, we apply Fubini to the integral: \begin{equation} \begin{split} &C^{N,f}_t \\ &= \int_{(0,t]\times \mathcal{S}_N} \int_0^t ds\hspace{0.1cm} \langle f, Q_{s-u}(\mu^N)-Q_{s-u}(\mu^N_{u-})\rangle \hspace{0.1cm} 1[u \le s \le t] \hspace{0.1cm} (m^N-\overline{m}^N)(du, d\mu^N) \\& = \int_{(0,t]\times \mathcal{S}_N} \left\{ \int_u^t \left(\langle f, Q_{s-u}(\mu^N)\rangle -\langle f, Q_{s-u}(\mu^N_{u-})\rangle \right) ds\right\} (m^N-\overline{m}^N)(du, d\mu^N) \\ & =\int_{(0,t]\times\mathcal{S}_N} \left\{\langle f, \phi_{t-u}(\mu^N)-\phi_{t-u}(\mu^N_{u-})\rangle -\langle f, \mu^N-\mu^N_{u-}\rangle\right\}(m^N-\overline{m}^N)(du, d\mu^N) \\& \hspace{2cm} = M^{N,f}_t - Z^{N,f}_t \end{split}\end{equation} where the third equality is precisely the (extended) Boltzmann dynamics (\ref{BE}) in the variable $s \in[u,t]$. \end{proof}
To prove Lemma \ref{thrm: local uniform martingale control}, we return to the decomposition (\ref{eq: decomposition of MNFT}) used in the proof of Lemma \ref{thrm: pointwise martingale control}. Our first step is to establish a control on \begin{equation} \mathbb{E} \left[ \sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{ M^{N;B}_{\star,t_\text{fin}}\right\}^p\right]\end{equation} where $\star$ denotes the local supremum (\ref{eq: use of star}). We will do so by breaking the supremum into two parts, each of which can be controlled by elementary martingale estimates. Let $(J^{N;B;t}_s)_{0\le s\le t}$ be the process \begin{equation} J^{N;B;t}_s = \int_{(0,s]\times\mathcal{S}_N} \langle h_B, Q_{t-u}(\mu^N)-Q_{t-u}(\mu^N_{u-})\rangle (m^N-\overline{m}^N)(du, d\mu^N)\end{equation} where, as in the proof of Theorem \ref{thrm: PW convergence}, \begin{equation}\label{eq: defn of hB} h_B=2^{2j}(1+|v|^2)1_B; \hspace{1cm} B \in \mathcal{P}_{j,l}. \end{equation} Each process $(J^{N;B;t}_s)_{0\le s\le t}$ is a martingale, by standard results for Markov chains (\ref{eq: M is for martingale}). Writing $Z^{N;B}=Z^{N,h_B}$, Lemma \ref{lemma:newmartingaleconstruction} gives \begin{equation} Z^{N;B}_t = M^{N;B}_t -\int_0^t J^{N;B;s}_s \hspace{0.1cm} ds. \end{equation} \begin{lemma} \label{lemma:break up LU} Let $p\geq 2$, and let $p'$ be the H\"{o}lder conjugate to $p$. In the notation above, we have the comparison \begin{equation} \mathbb{E} \left[ \sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{ \left|M^{N;B}_{\star,t_\text{fin}}\right|\right\}^p\right] \lesssim \mathbb{E} \left[ \sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{\hspace{0.1cm} \left|M^{N;B}_{t_\text{fin}}\right|^p+t_\text{fin}^{p/p'}\int_0^{t_\text{fin}} \left|J^{N;B;t}_t \right|^p dt\right\}\right].
\end{equation} \end{lemma} \begin{proof} For each $B$, we observe that \begin{equation} \begin{split} \sup_{t\le t_\text{fin}} \left|M^{N;B}_t - Z^{N;B}_t\right| \le \int_0^{t_\text{fin}}\left|J^{N;B;s}_s\right| ds\end{split} \end{equation} which implies the two bounds \begin{equation} \label{eq: comparing ZNB and MNB} M^{N;B}_{\star,t_\text{fin}} \le Z^{N;B}_{\star, t_\text{fin}} +\int_0^{t_\text{fin}} \left|J^{N;B;s}_s\right| ds; \hspace{1cm} \left|Z^{N;B}_{t_\text{fin}}\right| \le \left|M^{N;B}_{t_\text{fin}}\right|+\int_0^{t_\text{fin}} \left|J^{N;B;s}_s\right| ds.\end{equation} By Doob's $L^p$ inequality, we have \begin{equation} \label{eq: Doob LP} \begin{split} \left\|\hspace{0.1cm} Z^{N;B}_{\star,t_\text{fin}}\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} &\leq p'\hspace{0.1cm}\left\|\hspace{0.1cm} Z^{N;B}_{t_\text{fin}}\hspace{0.1cm}\right\|_{L^p(\mathbb{P})}.\end{split} \end{equation} Combining (\ref{eq: comparing ZNB and MNB}, \ref{eq: Doob LP}), we obtain \begin{equation} \left\|\hspace{0.1cm} M^{N;B}_{\star,t_\text{fin}}\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \lesssim \left\|\hspace{0.1cm} M^{N;B}_{t_\text{fin}}\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} + \left\|\hspace{0.1cm}\int_0^{t_\text{fin}}\left|J^{N;B;s}_s\right|ds \hspace{0.1cm}\right\|_{L^p(\mathbb{P})}.\end{equation} Using H\"{o}lder's inequality on the integral,\begin{equation} \begin{split} \mathbb{E} \left[\left\{\hspace{0.1cm} M^{N;B}_{\star,t_\text{fin}}\right\}^p \right] &\lesssim \mathbb{E}\left[ \hspace{0.1cm}\left|M^{N;B}_{t_\text{fin}}\right|^p \hspace{0.1cm}\right] + \mathbb{E}\left[\hspace{0.1cm}\left\{\int_0^{t_\text{fin}} \left|J^{N;B;s}_s\right| \hspace{0.1cm} ds \right\}^p\hspace{0.1cm}\right] \\ & \lesssim \mathbb{E}\left[ \hspace{0.1cm}\left|M^{N;B}_{t_\text{fin}}\right|^p \hspace{0.1cm}\right] +t_\text{fin}^{p/p'} \int_0^{t_\text{fin}} \mathbb{E}\left[\hspace{0.1cm}\left|J^{N;B;t}_t\right|^p \hspace{0.1cm}\right]\hspace{0.1cm} dt .\end{split} \end{equation} Summing over $B\in \mathcal{P}_{j,l}$ and
$j=0,1,\dots ,J$, we obtain the desired comparison. \end{proof}
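For completeness, the H\"{o}lder step used in the proof above is the elementary bound: for $g\in L^p([0,t_\text{fin}])$, \begin{equation} \left(\int_0^{t_\text{fin}} |g(s)| \hspace{0.1cm} ds\right)^p \leq \left(\int_0^{t_\text{fin}} 1 \hspace{0.1cm} ds\right)^{p/p'} \int_0^{t_\text{fin}} |g(s)|^p \hspace{0.1cm} ds = t_\text{fin}^{p/p'} \int_0^{t_\text{fin}} |g(s)|^p \hspace{0.1cm} ds. \end{equation}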
\begin{proof}[Proof of Lemma \ref{thrm: local uniform martingale control}] We begin by controlling the integral term in Lemma \ref{lemma:break up LU}. The quadratic variation is given by \begin{equation} \begin{split} \left[J^{N;B;t}\right]_s &= \int_{(0,s]\times \mathcal{S}_N} \langle h_B, \chi(u,t,\mu^N)\rangle^2 m^N(du, d\mu^N) \\& \le \int_{(0,s]\times \mathcal{S}_N} \langle h_B, |\chi(u,t,\mu^N)|\rangle^2 m^N(du, d\mu^N) \end{split} \end{equation} where $h_B$ is as in (\ref{eq: defn of hB}) and $\chi$ is as in (\ref{eq: defn of chi}). Hence, using Burkholder's inequality (Lemma \ref{lemma:Burkholder}), we see that, for all $t\leq t_\text{fin}$, \begin{equation} \begin{split} &\mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{\hspace{0.1cm} \left|J^{N;B;t}_t\right|\right\}^p\right] \\ & \hspace{2cm}\lesssim \mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{\int_{(0,t]\times \mathcal{S}_N}\langle h_B, |\chi(u,t,\mu^N)|\rangle^2 m^N(du, d\mu^N)\right\}^{p/2}\right]. \end{split} \end{equation} Using Minkowski's inequality to move the double sum inside the parentheses, and recalling that $\sum_j \sum_{B\in \mathcal{P}_{j,l}} h_B \lesssim (1+|v|^4)$, we obtain the bound \begin{equation}\label{eq: LU mg bound on J}\begin{split} &\mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{\hspace{0.1cm} \left|J^{N;B;t}_t\right|\right\}^p\right]\\ &\hspace{0.5cm}\lesssim \mathbb{E}\left[ \left\{ \int_{(0,t]\times \mathcal{S}_N} \langle 1+|v|^4, |\chi(u,t,\mu^N)|\rangle^2 m^N(du, d\mu^N) \right\}^{p/2} \right] \\ &\hspace{0.5cm}\lesssim \mathbb{E} \left[\left\{\int_{(0,t]\times \mathcal{S}_N} \|Q_{t-u}(\mu^N)-Q_{t-u}(\mu^N_{u-})\|_{\mathrm{TV}+4}^2 \hspace{0.1cm} m^N(du, d\mu^N)\right\}^{p/2}\right] \end{split} \end{equation} where the second bound uses the definition of $\chi$ (\ref{eq: defn of chi}).
\\ Using the continuity estimate for $Q$ established in (\ref{eq: Lipschitz continuity of Q}), and arguing as in the proof of Lemma \ref{thrm: pointwise martingale control}, we see that almost surely, for $m^N$-almost all $(u, \mu^N)$, we have \begin{equation} \|Q_{t-u}(\mu^N)-Q_{t-u}(\mu^N_{u-})\|_{\mathrm{TV}+4} \lesssim N^{\epsilon-1}\Lambda_k(\mu^N_{u-})^\frac{1}{2}.\end{equation} Therefore, using Cauchy-Schwarz, (\ref{eq: LU mg bound on J}) gives the bound \begin{equation} \label{eq: LU mg bound on J 2} \begin{split} & \mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{\left|J^{N;B;t}_t\right|\right\}^p\right] \\& \hspace{1cm}\lesssim N^{p(\epsilon-1)} \hspace{0.1cm} \mathbb{E}\left[\sup_{t\leq t_\text{fin}} \Lambda_{kp}(\mu^N_t)\right]^{1/2} \left\|m^N\left((0,t_\text{fin}]\times \mathcal{S}_N\right)\right\|_{L^p(\mathbb{P})}^{p/2}.\end{split}\end{equation} The moment term is controlled by the initial moment bound and Proposition \ref{thrm:momentinequalities}: \begin{equation} \label{eq: LU mg moment bound} \mathbb{E}\left[\sup_{t\leq t_\text{fin}} \Lambda_{kp}(\mu^N_t)\right] \lesssim (1+t_\text{fin})\Lambda_{kp}(\mu^N_0)\leq (1+t_\text{fin})a^p.\end{equation} Since the rates of the Kac process are bounded by $2N$, we can stochastically dominate $m^N(dt \times \mathcal{S}_N)$ by a Poisson random measure $\mathfrak{m}^N(dt)$ of rate $2N$. By standard moment estimates for Poisson random variables, it follows that \begin{equation} \label{eq: Poisson} \|m^N((0,t_\text{fin}]\times \mathcal{S}_N) \|_{L^p(\mathbb{P})}\le \|\mathfrak{m}^N((0,t_\text{fin}])\|_{L^p(\mathbb{P})} \lesssim N(1+t_\text{fin}).
\end{equation} Combining (\ref{eq: LU mg bound on J 2}, \ref{eq: LU mg moment bound}, \ref{eq: Poisson}), we have the control of the integrand: \begin{equation} \begin{split} \sup_{t\leq t_\text{fin}} \hspace{0.2cm} \mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{ \left|J^{N;B;t}_t\right|\right\}^p\right] \lesssim N^{p(\epsilon-1/2)}\hspace{0.1cm} a^{p/2} (1+t_\text{fin})^{\frac{p+1}{2}}. \end{split} \end{equation} This gives the following control of the integral term in Lemma \ref{lemma:break up LU}: \begin{equation} \label{eq:control of integral term} \begin{split} t_\text{fin}^{p/p'} \hspace{0.2cm} \mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \mathlarger{\mathlarger{\int}}_0^{t_\text{fin}}\left\{\left|J^{N;B;t}_t\right|\right\}^p dt \right] \lesssim N^{p(\epsilon-1/2)}\hspace{0.1cm} a^{p/2} (1+t_\text{fin})^{\frac{p+3}{2}+\frac{p}{p'}}. \end{split} \end{equation} Since $p'$ is the H\"older conjugate to $p$, we have $\frac{p}{p'}=p-1$, so the exponent of $(1+t_\text{fin})$ above is $\frac{p+3}{2}+\frac{p}{p'}=\frac{3p+1}{2}$.
\\ We now perform a similar analysis for the terms $M^{N;B}_{t_\text{fin}}$ in Lemma \ref{lemma:break up LU}. Let $(M^{N;B;t}_s)_{s\leq t}$ be the martingale defined in (\ref{eq: MNBTS}). The quadratic variation is \begin{equation} \begin{split} \left[M^{N;B;t}\right]_s&=\int_{(0,s]\times\mathcal{S}_N}\langle h_B, \phi_{t-u}(\mu^N)-\phi_{t-u}(\mu^N_{u-})\rangle^2 \hspace{0.1cm} m^N(du, d\mu^N) \\ & \leq \int_{(0,s]\times\mathcal{S}_N}\langle h_B, |\phi_{t-u}(\mu^N)-\phi_{t-u}(\mu^N_{u-})|\rangle^2 \hspace{0.1cm} m^N(du, d\mu^N). \end{split} \end{equation} Arguing via Burkholder's inequality and the stability estimate of Corollary \ref{cor: new stability for BE}, a calculation identical to the one above shows that \begin{equation} \label{eq: control of M at terminal time} \sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\|M^{N;B}_{{t_\text{fin}}}\right\|_{L^p(\mathbb{P})}^p \lesssim N^{p(\epsilon-1/2)}\hspace{0.1cm}a^{p/2}\hspace{0.1cm}(1+t_\text{fin})^{\frac{p+1}{2}}. \end{equation} Hence, by Lemma \ref{lemma:break up LU}, we obtain \begin{equation} \mathbb{E} \left[ \sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{ \left|M^{N;B}_{\star,t_\text{fin}}\right|\right\}^p\right] \lesssim N^{p(\epsilon-1/2)}a^{p/2}(1+t_\text{fin})^\frac{3p+1}{2}.\end{equation} We control the coefficients $2^{-2j}a_B(f)$ as in the argument of Lemma \ref{thrm: pointwise martingale control}.
Using H\"{o}lder's inequality in place of Cauchy-Schwarz, we obtain \begin{equation} \begin{split} \label{eq: control of mg term LU} &\left\|\hspace{0.1cm}\sup_{f\in \mathcal{A}} \hspace{0.1cm} \sup_{t\leq t_\text{fin}} \hspace{0.1cm} \left|\sum_{j=0}^J\sum_{l=2}^L \sum_{B \in \mathcal{P}_{j,l}} 2^{-2j}a_B(f)M^{N;B}_t \hspace{0.1cm} \right| \hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \\& \hspace{2cm} \lesssim \sum_{l=2}^L \left[ \mathbb{E}\sum_{j=0}^J \sum_{B \in \mathcal{P}_{j,l}} \left\{M^{N;B}_{\star,t_\text{fin}}\right\}^p\right]^{1/p}2^{(d/p'-1)l}J^{1/p'} \\& \hspace{2cm} \lesssim \sum_{l=2}^L N^{\epsilon-\frac{1}{2}} \hspace{0.1cm} a^{1/2} \hspace{0.1cm} (1+t_\text{fin})^{\frac{3p+1}{2p}}\hspace{0.1cm} 2^{(d/p'-1)l}J^{1/p'} \\ & \hspace{2cm} \lesssim N^{\epsilon-\frac{1}{2}} \hspace{0.1cm} a^{1/2} \hspace{0.1cm} (1+t_\text{fin})^{\frac{3p+1}{2p}}\hspace{0.1cm}2^{(d/p'-1)L} \hspace{0.1cm} J^{1/p'}. \end{split} \end{equation} Following the argument of Lemma \ref{thrm: pointwise martingale control}, we wish to control the error terms $R^{N,f}_t$ given by (\ref{eq: definition of RNFT}), locally uniformly in time. As in (\ref{eq: majorise integrand of error term}), we majorise, for $m^N+\overline{m}^N$-almost all $(s, \mu^N)$, \begin{equation} \begin{split} \sup_{f\in \mathcal{A}} \left|\langle \beta(f), \phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_{s-})\rangle \right| &\lesssim (2^{-2J}+2^{-L}) \hspace{0.1cm}N^{\epsilon-1} \hspace{0.1cm} \Lambda_{k}(\mu^N_{s-})^\frac{1}{2} \\& =:H'_s. 
\end{split} \end{equation} As in (\ref{eq: dominate integrand and integrator seperately}), we may bound \begin{equation} \label{eq: break up error term} \left\|\hspace{0.1cm} \sup_{f\in\mathcal{A}}\hspace{0.1cm} \sup_{t\le t_\text{fin}}\left|R^{N,f}_t\right|\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \le \left\|\hspace{0.1cm} \int_0^{t_\text{fin}}H'_s(m^N+\overline{m}^N)(ds, \mathcal{S}_N)\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \le \mathcal{T}_1+\mathcal{T}_2 \end{equation} where the two error terms are \begin{equation} \mathcal{T}_1= \left\|\int_0^{t_\text{fin}} H'_s \hspace{0.1cm}\overline{m}^N(ds, \mathcal{S}_N)\right\|_{L^p(\mathbb{P})}\end{equation} and \begin{equation} \mathcal{T}_2=\left\|\int_0^{t_\text{fin}} H'_s \hspace{0.1cm}m^N(ds, \mathcal{S}_N)\right\|_{L^p(\mathbb{P})}. \end{equation} We now deal with the two terms separately. For $\mathcal{T}_1$, we dominate $\overline{m}^N(ds, \mathcal{S}_N)\leq 2N ds$ to see that \begin{equation} \begin{split} \int_0^{t_\text{fin}} H'_s\hspace{0.1cm} \overline{m}^N(ds, \mathcal{S}_N) \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}N^{\epsilon}\hspace{0.1cm}t_\text{fin}\hspace{0.1cm}\left( \hspace{0.1cm} \sup_{s\leq t_\text{fin}} \Lambda_{k}(\mu^N_s)^\frac{1}{2}\right). \end{split} \end{equation} Using the monotonicity of $L^p$ norms, and using the moment control in the usual way, \begin{equation} \begin{split} \label{eq: control of mbar integral} \mathcal{T}_1 &\lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}N^{\epsilon}\hspace{0.1cm}t_\text{fin}\hspace{0.1cm}\mathbb{E}\left[ \hspace{0.1cm} \sup_{s\leq t_\text{fin}} \Lambda_{pk}(\mu^N_s)\right]^\frac{1}{2p} \\[1ex] & \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}N^{\epsilon}\hspace{0.1cm}a^{1/2}\hspace{0.1cm}(1+t_\text{fin})^\frac{2p+1}{2p}. \end{split} \end{equation} For $\mathcal{T}_2$, we dominate $m^N(ds, \mathcal{S}_N)$ by a Poisson random measure $\mathfrak{m}^N(ds)$ of rate $2N$, as above.
Controlling $\mathfrak{m}^N$ as in (\ref{eq: Poisson}), we obtain \begin{equation} \begin{split} \label{eq: control of m integral} \mathcal{T}_2 & \lesssim (2^{-2J}+2^{-L})N^{\epsilon-1} \left\|\int_0^{t_\text{fin}} \Lambda_{k}(\mu^N_{s-})^\frac{1}{2}\mathfrak{m}^N(ds) \right\|_{L^p(\mathbb{P})} \\[1ex]& \lesssim (2^{-2J}+2^{-L})N^{\epsilon-1} \left\|\left(\hspace{0.1cm}\sup_{s\leq t_\text{fin}} \Lambda_{k}(\mu^N_s)^\frac{1}{2}\right)\right\|_{L^{2p}(\mathbb{P})}\left\|\mathfrak{m}^N\left(\left(0, t_\text{fin}\right]\right)\right\|_{L^{2p}(\mathbb{P})} \\[1ex]& \lesssim (2^{-2J}+2^{-L})N^{\epsilon}\hspace{0.1cm}a^{1/2}\hspace{0.1cm}(1+t_\text{fin})^\frac{2p+1}{2p}. \end{split} \end{equation} Combining the local uniform estimates (\ref{eq: control of mg term LU}, \ref{eq: break up error term}, \ref{eq: control of mbar integral}, \ref{eq: control of m integral}) of the terms in the decomposition (\ref{eq: decomposition of MNFT}), we find that \begin{multline*} \left\|\hspace{0.1cm} \sup_{f\in \mathcal{A}} \hspace{0.1cm}M^{N,f}_{\star,t_\text{fin}} \hspace{0.1cm} \right\|_{L^p(\mathbb{P})} \lesssim N^\epsilon a^{1/2} \hspace{0.1cm} (1+t_\text{fin})^\frac{3p+1}{2p}\hspace{0.1cm} \left(N^{-1/2} \hspace{0.1cm} 2^{(d/p'-1)L}J^{1/p'}+2^{-2J}+2^{-L}\right). \end{multline*} Taking $J=\lfloor\frac{p'}{4d}\log_2(N)\rfloor$ and $L=\lfloor \frac{p'}{2d}\log_2(N)\rfloor$ proves the result claimed. \end{proof}
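We remark that the choice of $J$ and $L$ above balances the three error terms: $2^{-2J}\sim 2^{-L}\sim N^{-\frac{p'}{2d}}$, while \begin{equation} N^{-1/2}\hspace{0.1cm}2^{(d/p'-1)L} \sim N^{-\frac{1}{2}+\left(\frac{d}{p'}-1\right)\frac{p'}{2d}} = N^{-\frac{p'}{2d}}, \end{equation} so all three terms in the bracket are of order $N^{-p'/(2d)}$, up to the factor $J^{1/p'}\lesssim (\log N)^{1/p'}$, which yields the rate $\alpha=\frac{p'}{2d}-\epsilon$ claimed in Lemma \ref{thrm: local uniform martingale control}.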
\section{Proof of Theorem \ref{thm: low moment regime}} \label{sec: LMR}
We now turn to the proof of Theorem \ref{thm: low moment regime}, which establishes a convergence estimate in the presence of a $k^\text{th}$ moment bound, for any $k>2$. Our strategy will be to use the ideas of \cite{ACE}, which work well with few moments, to prove convergence on a small initial time interval $[0,u_N]$, for some $u_N$ to be chosen later. Then, thanks to the moment production property recalled in Proposition \ref{thrm:momentinequalities}, we may use Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} to control the behaviour at times $t\ge u_N$. The argument is similar to the final argument in the proof of Theorem \ref{thrm: W-W continuity of phit} given in Section \ref{sec: continuity of BE}, which may be read as a warm-up to this proof.
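In symbols, the splitting at time $u_N$ rests on the semigroup property $\phi_t(\mu_0)=\phi_{t-u_N}(\phi_{u_N}(\mu_0))$ and the triangle inequality for $W$: for $t\geq u_N$, \begin{equation} W\left(\mu^N_t, \phi_t(\mu_0)\right) \leq W\left(\mu^N_t, \phi_{t-u_N}(\mu^N_{u_N})\right) + W\left(\phi_{t-u_N}(\mu^N_{u_N}), \phi_{t-u_N}(\phi_{u_N}(\mu_0))\right) \end{equation} where the first term is controlled by the restarted convergence estimates, and the second by the continuity of the Boltzmann flow, using the moments produced by time $u_N$.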
\\ Throughout, let $k, a, (\mu^N_t), \mu_0$ be as in the statement of the Theorem.
\\ We begin by recalling the representation formula established in \cite[Proposition 4.2]{ACE}, which is a noisy version of Proposition \ref{prop: bad representation formula}. \begin{proposition} \label{prop: very bad rep formula} Let $\mu_0 \in \mathcal{S}^k$ for some $k>2$, and let $\mu^N_t$ be a Kac process on $N$ particles. Let $\rho_t=(\phi_t(\mu_0)+\mu^N_t)/2$, and for $f\in \mathcal{A}, 0\le s\le t$, let $f_{st}$ be the propagation described in Definition \ref{def: LKP} in this environment. Then, for all $t\ge 0$, we have the equality \begin{equation} \begin{split} & \langle f, \mu^N_t-\phi_t(\mu_0)\rangle =\langle f_{0t}, \mu^N_0-\mu_0\rangle \\&\hspace{2cm}+\int_{(0,t]\times \mathcal{S}_N} \langle f_{st},\mu^N-\mu^N_{s-}\rangle (m^N-\overline{m}^N)(ds,d\mu^N) \end{split} \end{equation} where $m^N, \overline{m}^N$ are as defined in Section \ref{sec: interpolation decomposition}.\end{proposition} The major difficulty in using this representation formula is the appearance of an exponentiated random moment in the quantity $z_t$ parametrising the continuity of $f_{st}$. We will use the following proposition, which controls the stochastic integrals on the right-hand side, modulo this difficulty. \begin{proposition} \label{prop: short time mg estimate} Let $\rho_t$ be a potentially random environment such that, for some $\beta>0$, \begin{equation} \label{eq: moment condition for environment} w=\left\|\hspace{0.1cm}\sup_{t\le 1} \left(\frac{ \Lambda_3(\rho_t)}{\beta t^{\beta-1}+1}\right)\hspace{0.1cm}\right\|_{L^\infty(\mathbb{P})}<\infty. \end{equation} For $f\in \mathcal{A}$ and $0\le s\le t\le 1$, let $f_{st}[\rho]$ denote the propagation in this environment, as described in Definition \ref{def: LKP}.
\\ Let $k>2$ and $a\ge 1$, and let $\mu^N_t$ be a Kac process with initial moment $\Lambda_k(\mu^N_0)\le a$, and let $m^N, \overline{m}^N$ be as in Section \ref{sec: interpolation decomposition}. We write \begin{equation} \widetilde{M}^{N,f}_t[\rho]=\int_{(0,t]\times \mathcal{S}_N} \langle f_{st}[\rho], \mu^N-\mu^N_{s-}\rangle (m^N-\overline{m}^N)(ds, d\mu^N).\end{equation} In this notation, we have the bound \begin{equation} \left\|\hspace{0.1cm} \sup_{t\le 1}\hspace{0.1cm} \sup_{f\in \mathcal{A}} \hspace{0.1cm} \widetilde{M}^{N,f}_t[\rho]\right\|_1 \le CaN^{-\eta}\end{equation} for some $C=C(d,k,\beta)$ and $\eta=\eta(d,\beta)>0$. Here, we emphasise that $\|\cdot\|_{L^1(\mathbb{P})}$ refers to the $L^1$ norm with simultaneous expectation over $\mu^N_t$ and the environment $\rho$. \end{proposition} This largely follows from the proof of \cite[Theorem 1.1]{ACE}, and the argument follows a similar pattern to Lemmas \ref{thrm: pointwise martingale control}, \ref{thrm: local uniform martingale control}, using the continuity estimate recalled in Proposition \ref{prop: continuity for branching process} and a similar estimate for the dependence on the initial time $s$. The key difference is that the hypotheses on the environment $\rho$ guarantee an $L^\infty(\mathbb{P})$ control on the quantities \begin{equation} z_1=\exp\left(8\int_0^1 \Lambda_3(\rho_u)du\right); \hspace{0.8cm} y_\beta=z_1\hspace{0.1cm}\sup_{0\le s\le s'\le 1}\left[(s'-s)^{-\beta}\int_s^{s'}\Lambda_3(\rho_u)du\right]\end{equation} which describe the continuity of $f_{st}(v)$ in $v$ and $s$ respectively. By contrast, these are only controlled in probability in \cite[Theorem 1.1]{ACE}; correspondingly, we obtain an $L^1(\mathbb{P})$ estimate rather than an estimate in probability. With this estimate, we turn to the proof of Theorem \ref{thm: low moment regime}.
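Before doing so, we note why the hypothesis (\ref{eq: moment condition for environment}) yields the required $L^\infty(\mathbb{P})$ control: for instance, \begin{equation} \int_0^1 \Lambda_3(\rho_u)\hspace{0.1cm}du \leq w\int_0^1 \left(\beta u^{\beta-1}+1\right)du = 2w \end{equation} almost surely, so that $z_1\leq e^{16w}$; a similar computation, using $(s')^\beta-s^\beta \leq (s'-s)^\beta$ for $\beta\leq 1$, bounds $y_\beta$.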
\begin{proof}[Proof of Theorem \ref{thm: low moment regime}] We first introduce a localisation argument, following the approach of Section \ref{sec: continuity of BE}, which allows us to guarantee that (\ref{eq: moment condition for environment}) holds for the environment $\rho= (\mu^N_t+\phi_t(\mu_0))/2$. Let $\beta=\frac{k-2}{2}$, and let $u_N \le 1$ be a time to be chosen later. Now, define $T_N$ to be the stopping time \begin{equation} T_N=\inf\left\{t\le u_N: \Lambda_3(\rho_t) > \frac{(\beta t^{\beta-1}+1)}{8\sqrt{2}}\right\}. \end{equation} We use the convention that $\inf \emptyset =\infty$, so that if $T_N>u_N$, then $T_N=\infty$. Let $\rho^T$ be the stopped environment $\rho^T_t=\rho_{t\land T_N}$, and write $f_{st}^T$ for the propagation in the stopped environment.
\\ We observe first that on the event $T_N=\infty$, we have the equality $f_{st}^T=f_{st}$ for all $f\in \mathcal{A}, s\le t\le u_N$. Moreover, since $\Lambda_3(\rho_t)$ increases by a factor of at most $4\sqrt{2}$ at jumps by Lemma \ref{lemma:momentincreaseatcollision}, we have the bound, almost surely for all $t\ge 0$, \begin{equation} \Lambda_3(\rho^T_t) \le \frac{(\beta t^{\beta-1}+1)}{2}.\end{equation} Therefore, the stopped environment $\rho^T$ satisfies the bound (\ref{eq: moment condition for environment}) with $w=\frac{1}{2}$. Now, we write $\widetilde{M}^{N,f}_t=\widetilde{M}^{N,f}_t[\rho^T]$ as in the proposition above, and by the representation formula in Proposition \ref{prop: very bad rep formula}, we have the bound, for all $t\le u_N$,\begin{equation}\begin{split} &W\left(\mu^N_t,\phi_t(\mu_0)\right) 1[T_N=\infty] \le CW(\mu^N_0, \mu_0)+\sup_{f\in \mathcal{A}}\hspace{0.2cm}\widetilde{M}^{N,f}_t \end{split}\end{equation} for some absolute constant $C$. By Proposition \ref{prop: short time mg estimate}, we obtain the estimate \begin{equation} \label{eq: shorttime bound 1} \left\|\sup_{t\le u_N} W\left(\mu^N_t, \phi_t(\mu_0)\right)1[T_N=\infty]\right\|_1 \lesssim W(\mu^N_0,\mu_0)+aN^{-\eta}.\end{equation} Let $k_0=k_0(d)$ be large enough that Theorem \ref{thrm: PW convergence} holds with $\epsilon=\frac{1}{2d}$. 
By applying Theorem \ref{thrm: PW convergence}, restarted at time $u_N$, and the moment production property, we obtain \begin{equation} \label{eq: restarted estimate} \begin{split}\sup_{t\ge u_N} \left\|W(\mu^N_t,\phi_{t-u_N}(\mu^N_{u_N}))\right\|_2&\lesssim N^{\epsilon-1/d}\hspace{0.1cm}\mathbb{E}\left[\Lambda_{k_0}(\mu^N_{u_N})\right]^{1/2} \\ & \lesssim N^{\epsilon-1/d}u_N^{1-k_0/2}.\end{split} \end{equation} Using our continuity estimate Theorem \ref{thrm: W-W continuity of phit}, we have the bound for some $\zeta=\zeta(d)$\begin{equation} \begin{split} &\sup_{t\ge u_N} W(\phi_{t-u_N}(\mu^N_{u_N}),\phi_t(\mu_0)) \\&\hspace{1cm}\lesssim W(\mu^N_{u_N},\phi_{u_N}(\mu_0))^\zeta\Lambda_{k_0}(\mu^N_{u_N},\phi_{u_N}(\mu_0))\end{split}\end{equation} and, considering the cases $\{T_N\le u_N\}, \{T_N=\infty\}$ separately, we see that \begin{equation} \begin{split} &\sup_{t\ge u_N} W(\phi_{t-u_N}(\mu^N_{u_N}),\phi_t(\mu_0)) \\&\hspace{2.5cm}\lesssim W(\mu^N_{u_N},\phi_{u_N}(\mu_0))^\zeta\Lambda_{k_0}(\mu^N_{u_N},\phi_{u_N}(\mu_0))1[T_N=\infty]\\[1ex]&\hspace{3.5cm}+ 1[T_N\le u_N].\end{split}\end{equation}To ease notation, we will write $\mathcal{T}_1, \mathcal{T}_2$ for the two terms respectively. We estimate the expectation of $\mathcal{T}_1$ using H\"older's inequality: for some $k_1>k_0$, \begin{equation}\label{eq: holder estimate on BF}\begin{split} &\left\|\mathcal{T}_1\right\|_{L^1(\mathbb{P})} \lesssim\mathbb{E}\left(W(\mu^N_{u_N},\phi_{u_N}(\mu_0))1[T_N=\infty]\right)^\zeta\mathbb{E}\left(\Lambda_{k_1}(\mu^N_{u_N},\phi_{u_N}(\mu_0))\right) \\[1ex] & \hspace{3cm}\lesssim (N^{-\eta}+W(\mu^N_0,\mu_0))^\zeta\hspace{0.1cm} u_N^{1-k_1/2}.\end{split}\end{equation} where $\eta$ is as in (\ref{eq: shorttime bound 1}) with our choice of $\beta$. In order to deal with $\mathcal{T}_2$, we now estimate $\mathbb{P}(T_N\le u_N)$. 
Let $Z_N$ be given by \begin{equation} Z_N=\sum_{l:2^{-l}\le u_N} 2^{(\beta-1)l+1}\beta^{-1}\sup_{t\in [2^{-l},2^{1-l}]}\langle 1+|v|^3, \rho_t\rangle \end{equation} and observe that, for all $t\le u_N$, we have the bound \begin{equation} \langle 1+|v|^3, \rho_t\rangle \le \frac{(\beta t^{\beta-1}+1)Z_N}{2}. \end{equation} Therefore, \begin{equation} \mathbb{P}(T_N\le u_N)\le \mathbb{P}(Z_N>1/8) \le 8 \mathbb{E}[Z_N].\end{equation} Using the moment production property of the Kac process and Boltzmann equation in Proposition \ref{thrm:momentinequalities}, we compute \begin{equation} \mathbb{E}(Z_N)\le \sum_{l: 2^{-l}\le u_N}2^{(\beta-1)l+1}2^{-l(k-3)}\hspace{0.1cm} \beta^{-1}a \lesssim a u_N^\beta \end{equation} and so \begin{equation} \label{eq: estimate on restarted BF} \begin{split} &\left\|\sup_{t\ge u_N} W(\phi_{t-u_N}(\mu^N_{u_N}),\phi_t(\mu_0))\right\|_{L^1(\mathbb{P})} \\[1ex]& \hspace{1.5cm}\lesssim (N^{-\eta}+W(\mu^N_0,\mu_0))^\zeta\hspace{0.1cm} u_N^{1-k_1/2} +au_N^\beta. \end{split} \end{equation} We now return to (\ref{eq: shorttime bound 1}) and observe that \begin{equation} \label{eq: shorttime bound 2} \begin{split}& \left\|\sup_{t\le u_N} W(\mu^N_t, \phi_t(\mu_0))\right\|_1 \\& \hspace{2cm}\lesssim \left\|\sup_{t\le u_N} W\left(\mu^N_t, \phi_t(\mu_0)\right)1[T_N=\infty]\right\|_1 +\mathbb{P}(T_N\le u_N) \\[1ex] & \hspace{2cm} \lesssim W(\mu^N_0,\mu_0)+aN^{-\eta}+au_N^\beta. \end{split} \end{equation} Combining (\ref{eq: restarted estimate}, \ref{eq: estimate on restarted BF}, \ref{eq: shorttime bound 2}) and keeping the worst terms, we have shown that \begin{equation} \sup_{t\ge 0} \left\|W(\mu^N_t, \phi_t(\mu_0))\right\|_{L^1(\mathbb{P})}\lesssim (N^{-\eta}+W(\mu^N_0,\mu_0))^{\delta} u_N^{-\alpha}+ au_N^\beta\end{equation} for some $\eta, \delta, \alpha, \beta>0$, depending on $d, k$. 
If we choose \begin{equation} u_N=(N^{-\eta}+W(\mu^N_0,\mu_0))^{\delta/(\alpha+\beta)}\end{equation} then we finally obtain \begin{equation}\begin{split} \sup_{t\ge 0} \left\|W(\mu^N_t, \phi_t(\mu_0))\right\|_{L^1(\mathbb{P})}& \lesssim a(N^{-\eta}+W(\mu^N_0,\mu_0))^{\beta \delta/(\alpha+\beta)} \\ & \lesssim a\left(N^{-\eta \beta \delta/(\alpha+\beta)}+W(\mu^N_0,\mu_0)^{\beta \delta/(\alpha+\beta)}\right) \\ & \lesssim a\left(N^{-\epsilon}+W(\mu^N_0,\mu_0)^\epsilon\right)\end{split} \end{equation} as desired, for sufficiently small $\epsilon=\epsilon(d,k)>0$. The case for the local uniform estimate is similar, using Theorem \ref{thrm: Main Local Uniform Estimate} in place of Theorem \ref{thrm: PW convergence}. \end{proof}
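\begin{remark} For the reader's convenience, we record the elementary computation behind the choice of $u_N$ above. Writing $w_N=N^{-\eta}+W(\mu^N_0,\mu_0)$ (notation used only in this remark), the choice $u_N=w_N^{\delta/(\alpha+\beta)}$ balances the two competing terms, since \begin{equation} w_N^{\delta}\hspace{0.1cm}u_N^{-\alpha}=w_N^{\delta-\frac{\alpha\delta}{\alpha+\beta}}=w_N^{\frac{\beta\delta}{\alpha+\beta}}; \hspace{0.8cm} au_N^{\beta}=a\hspace{0.1cm}w_N^{\frac{\beta\delta}{\alpha+\beta}}, \end{equation} and since $a\ge 1$, both terms are at most $a\hspace{0.1cm}w_N^{\beta\delta/(\alpha+\beta)}$. \end{remark}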
\section{Proof of Theorem \ref{thrm: No Uniform Estimate}} The proof of Theorem \ref{thrm: No Uniform Estimate} is based on the following heuristic argument: \begin{heuristic} Fix $N$, and consider a Kac process $(\mu^N_t)$ on $N$ particles. As $t\rightarrow \infty$, its law relaxes to the equilibrium distribution $\pi_N$, which is known to be the uniform distribution $\sigma^N$ on $\mathcal{S}_N$. Since this measure assigns non-zero probability to regions $R_N$ at macroscopic distance from the fixed point $\gamma$, given by \begin{equation}
\gamma(dv)=\frac{e^{-\frac{d}{2}|v|^2}}{(2\pi d^{-1})^{d/2}}dv,
\end{equation} the process will almost surely hit $R_N$ on an unbounded set of times. Meanwhile, the Boltzmann flow $\phi_t(\mu_0)$ will converge to $\gamma$. Therefore, at some large time, the particle system $\mu^N_t$ will have macroscopic distance from the Boltzmann flow $\phi_t(\mu^N_0)$.\end{heuristic} The regions $R_N$ which we construct in the proof are those where the energy is concentrated in only a few particles, which might na\"{i}vely be considered `highly ordered, and so low-entropy'. This appears to contradict the principle that entropy should increase; this \emph{apparent} paradox is explained in the discussion section at the beginning of the paper.
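\\ We also note, for consistency of normalisations, that $\gamma$ is the centred Gaussian law on $\mathbb{R}^d$ whose coordinates have variance $d^{-1}$, so that \begin{equation} \langle v, \gamma\rangle = 0; \hspace{0.8cm} \langle |v|^2, \gamma\rangle = d\cdot\frac{1}{d}=1, \end{equation} matching the momentum and energy constraints defining the state space $\mathcal{S}_N$.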
\\
We recall that a \emph{labelled} Kac process is the Markov process of velocities $(v_1(t),...,v_N(t))$ corresponding to the particle dynamics. The state space is the set $\mathbb{S}^{N}=\left\{(v_1, ..., v_N) \in (\mathbb{R}^d)^N: \sum_{i=1}^N v_i=0, \sum_{i=1}^N |v_i|^2 = N\right\}$, which we call the labelled Boltzmann Sphere. We denote by $\theta_N$ the map taking $(v_1,...,v_N)$ to its empirical measure in $\mathcal{S}_N$: \begin{equation} \theta_N: \mathbb{S}^N \rightarrow \mathcal{S}_N; \hspace{1cm} (v_1, ..., v_N) \mapsto \frac{1}{N} \sum_{i=1}^N \delta_{v_i}.\end{equation} Moreover, if $\mathcal{V}^N_t$ is a labelled Kac process, then the empirical measures $\mu^N_t:= \theta_N(\mathcal{V}^N_t)$ form a Kac process in the sense defined in the introduction.
\\ Considered as a $((N-1)d-1)$-dimensional sphere, $\mathbb{S}^N$ has a uniform (Hausdorff) distribution $\gamma^N$. We define the `uniform distribution' $\sigma^N$ on $\mathcal{S}_N$ to be the pushforward of $\gamma^N$ by $\theta_N$: \begin{equation} \label{eq: defn of sigmaN} \sigma^N(A):=\gamma^N\left\{(v_1, ..., v_N)\in \mathbb{S}^N: \theta_N(v_1,...,v_N)\in A\right\}. \end{equation} We will use this definition to transfer the positivity of the measure $\gamma^N$ forward to $\sigma^N$.
\\
As discussed in the literature review, the problem of relaxation to equilibrium for the Kac process is subtle, and has been extensively studied. For our purposes, the following $L^2$ convergence is sufficient: \begin{proposition}\label{prop: relaxation} Suppose that $(\mu^N_t)_{t\geq 0}$ is a hard-spheres Kac process, where the law of the initial data $\mathcal{L}\mu^N_0$ has a density $h^N_0\in L^2(\sigma^N)$ with respect to $\sigma^N$. Then at all times $t\geq 0$, the law $\mathcal{L}\mu^N_t$ has a density $h^N_t\in L^2(\sigma^N)$ with respect to $\sigma^N$, and for some universal constant $\lambda_0>0$, we have \begin{equation} \left\|h^N_t-1\right\|_{L^2(\sigma^N)} \leq e^{-\lambda_0 t}\left\| h^N_0-1\right\|_{L^2(\sigma^N)}. \end{equation} \end{proposition} A version of this, for the labelled Kac process, appears as \cite[Theorem 6.8 and corollary]{M+M}; the result stated above follows by a pushforward argument. This is sufficient to prove the following weak ergodic theorem: \begin{lemma}\label{lemma: ergodic theorem} Let $(\mu^N_t)_{t\geq 0}$ be a hard-spheres Kac process on $N$ particles, started from $\mu^N_0\sim \sigma^N$. Let $R_N\subset \mathcal{S}_N$ be such that $p=\sigma^N(R_N)>0$. Then \begin{equation} \frac{1}{t}\int_0^t 1(\mu^N_s\in R_N)ds \rightarrow p\end{equation} in $L^2$. In particular, almost surely, $\mu^N_t$ visits $R_N$ on an unbounded set of times. \end{lemma} \begin{proof} Observe that \begin{equation} \mathbb{E}\left[\frac{1}{t}\int_0^t 1(\mu^N_s \in R_N) ds\right] = \frac{1}{t}\int_0^t \mathbb{P}(\mu^N_s\in R_N) ds = p\end{equation} so our claim reduces to bounding the variance. \\ \\ For times $t\geq 0$, write $A(t)$ for the event $A(t)=\{\mu^N_t\in R_N\}$; we will compute the covariance of $1_{A(s_1)}$ and $1_{A(s_2)}$, for $0 \leq s_1 \leq s_2$. 
Observe that \begin{equation} \mathbb{E}\left[1_{A(s_1)}(1_{A(s_2)}-p)\right]=p\left(\mathbb{P}\left(A(s_2)|A(s_1)\right)-p\right).\end{equation} Conditional on $A(s_1)$, the law of $\mu^N_{s_1}$ has a conditional density $h^N_{s_1}\propto 1_{R_N}$ with respect to $\sigma^N$. By Proposition \ref{prop: relaxation}, conditional on $A(s_1)$, $\mu^N_{s_2}$ has a density $h^N_{s_2}$, and we can bound \begin{equation}|\mathbb{P}(A(s_2)|A(s_1))-p|\leq \left\|h^N_{s_2}-1\right\|_{L^1(\sigma^N)}\leq \left\|h^N_{s_2}-1\right\|_{L^2(\sigma^N)} \leq C(R_N) e^{-\lambda_0(s_2-s_1)} \end{equation} for some constant $C(R_N)$ independent of time. Hence \begin{equation} \mathbb{E}\left[(1_{A(s_1)}-p)(1_{A(s_2)}-p)\right]=p(\mathbb{P}(A(s_2)|A(s_1))-p) \leq p C(R_N) e^{-\lambda_0(s_2-s_1)}.\end{equation}We can now integrate to bound the variance: \begin{equation} \begin{split} \text{Var}\left(\frac{1}{t}\int_0^t 1(\mu^N_s \in R_N) ds\right) & =\frac{2}{t^2}\int_0^t ds_1 \int_{s_1}^t ds_2 \hspace{0.2cm} \mathbb{E}\left[(1_{A(s_1)}-p)(1_{A(s_2)}-p)\right] \\[1ex] & \leq \frac{2pC}{t^2} \int_0^t ds_1\int_{s_1}^\infty ds_2\hspace{0.2cm} e^{-\lambda_0(s_2-s_1)} \\[1ex] & \leq \frac{2pC}{\lambda_0 t} \rightarrow 0. \end{split} \end{equation} \end{proof} An immediate corollary is that the long-run deviation must be bounded \emph{below} by the essential supremum of the deviation under the invariant measure: \begin{corollary} Let $(\mu^N_t)_{t\geq 0}$ be an $N$-particle Kac process in equilibrium. Then, almost surely, \begin{equation} \begin{split} \limsup_{t\rightarrow \infty}W(\mu^N_t, \gamma) \geq &\left\|W(\cdot, \gamma)\right\|_{L^\infty(\sigma^N)} \\ =& \esssup_{\sigma^N(d\mu)}\hspace{0.05cm} W(\mu, \gamma). \end{split} \end{equation} \end{corollary} \begin{proof} For ease of notation, write $W^*$ for the essential supremum appearing on the right hand side. 
For any $\epsilon>0$, let $R_{N, \epsilon}=\{\mu\in \mathcal{S}_N: W(\mu, \gamma)>W^*-\epsilon\}$; it is immediate that $\sigma^N(R_{N, \epsilon})>0$. By the remark in Lemma \ref{lemma: ergodic theorem}, almost surely, $\mu^N_t$ visits $R_{N, \epsilon}$ on an unbounded set of times, and so \begin{equation} \limsup_{t\rightarrow \infty} W(\mu^N_t, \gamma) \geq W^*- \epsilon. \end{equation} The conclusion now follows on taking an intersection over some sequence $\epsilon_n \downarrow 0$. \end{proof} To prove Theorem \ref{thrm: No Uniform Estimate}, it now only remains to show a lower bound on the essential supremum. \begin{lemma} \label{lemma: construct bad regions} Let $f$ be given by \begin{equation} f(v)=(1+|v|^2)\min\left(\frac{|v|}{\sqrt{N/2}},1\right). \end{equation} Then $f \in \mathcal{A}$, and \begin{equation} \left\|\langle f, \mu-\gamma\rangle\right\|_{L^\infty(\sigma^N)} \geq 1-\frac{C}{\sqrt{N}} \end{equation} for some constant $C=C(d)$. In particular, this is a lower bound for the essential supremum $W^*$, and so for the long-run deviation.\end{lemma}
\begin{proof} It is easy to see that $f \in \mathcal{A}$. Moreover, the region \begin{equation} \widetilde{R}_{N}=\left\{(v_1,...,v_N)\in \mathbb{S}^N: \langle f, \theta_N(v_1,...,v_N) \rangle > 1 \right\}\end{equation} is an open subset of $\mathbb{S}^N$, containing $\left(\sqrt{\frac{N}{2}}e_1,-\sqrt{\frac{N}{2}}e_1,0,\dots,0\right)$. By positivity of the uniform measure $\gamma^N$ on $\mathbb{S}^N$, it follows that $\gamma^N(\widetilde{R}_{N})>0$. The corresponding region in $\mathcal{S}_N$ satisfies \begin{equation} R_{N}=\{\mu^N \in \mathcal{S}_N: \langle f, \mu^N\rangle >1 \} \supset \theta_N(\widetilde{R}_{N}).\end{equation} By the definition (\ref{eq: defn of sigmaN}) of $\sigma^N$, we have \begin{equation} \sigma^N(R_{N}) \geq \gamma^N(\widetilde{R}_{N})>0. \end{equation} For all $\mu^N \in R_{N}$, we have \begin{equation} W(\mu^N, \gamma) \geq \langle f, \mu^N-\gamma \rangle \geq 1-\sqrt{2}N^{-1/2}\langle (1+|v|^2)|v|, \gamma\rangle. \end{equation} Since $R_{N}$ has positive measure, taking $C=\sqrt{2}\langle (1+|v|^2)|v|, \gamma\rangle$, we can conclude that \begin{equation} W^*\geq 1-\frac{C}{\sqrt{N}}. \end{equation} \end{proof}
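\begin{remark} For concreteness, at the configuration $\left(\sqrt{\frac{N}{2}}e_1,-\sqrt{\frac{N}{2}}e_1,0,\dots,0\right)$ appearing in the proof, we have $f\left(\pm\sqrt{\frac{N}{2}}e_1\right)=1+\frac{N}{2}$ and $f(0)=0$, so that \begin{equation} \left\langle f, \theta_N\left(\sqrt{\tfrac{N}{2}}e_1,-\sqrt{\tfrac{N}{2}}e_1,0,\dots,0\right)\right\rangle = \frac{2}{N}\left(1+\frac{N}{2}\right)=1+\frac{2}{N}>1, \end{equation} which verifies directly that $\widetilde{R}_{N}$ is nonempty. \end{remark}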
\begin{proof}[Proof of Theorem \ref{thrm: No Uniform Estimate}] From the previous two lemmas, we know that for all $N\geq 2$, and for $\sigma^N$-almost all $\mu^N$, \begin{equation} \label{eq: ergodicistation} \mathbb{P}_{\mu^N}\left(\limsup_{t\rightarrow \infty} W(\mu^N_t, \gamma) \geq 1-\frac{C}{\sqrt{N}}\right)=1 \end{equation} where $\mathbb{P}_{\mu^N}$ denotes the law of a Kac process started at $\mu^N$. \\ \\ Let $N\geq 2, k> 2$ and $a> 1$. The region $R_{\star, N}$ of the labelled sphere on which $\Lambda_{k}(\theta_N(\mathcal{V}))<a$ is an open set; to conclude that it has positive $\gamma^N$-measure, it suffices to show that it is nonempty.
\\ Let $r$ be a rotation by $\frac{2\pi}{N}$ in the plane corresponding to the first two axes $(e_1,e_2)$. Then the configuration \begin{equation} \mathcal{V}_\star=(e_1, re_1, ..., r^{N-1}e_1) \end{equation} belongs to $\mathbb{S}^N$, and has $\Lambda_{k}(\theta_N(\mathcal{V}_\star))=\frac{1}{N}\sum_{i=1}^N 1^k = 1$. Hence $\mathcal{V}_\star \in R_{\star, N}$, so $R_{\star, N}$ is open and nonempty, and $\gamma^N(R_{\star, N})>0$. The positivity transfers to the corresponding region of $\mathcal{S}_N$: \begin{equation} \sigma^N\left\{\mu^N\in \mathcal{S}_N: \Lambda_{k}(\mu^N)<a\right\}=\gamma^N(R_{\star, N})>0.\end{equation} Hence, for any $N\geq 2$, we can choose an initial datum $\mu^N_0=\mu^N$, with $\Lambda_{k}(\mu^N_0)<a$, such that (\ref{eq: ergodicistation}) holds. Observing that \begin{equation} W(\phi_t(\mu^N_0),\gamma) \leq \|\phi_t(\mu^N_0)-\phi_t(\gamma)\|_{\mathrm{TV}+2} \rightarrow 0\end{equation} it follows that, $\mathbb{P}_{\mu^N}$-almost surely, \begin{equation} \limsup_{t\rightarrow \infty} W(\mu^N_t, \gamma) = \limsup_{t\rightarrow \infty} W(\mu^N_t, \phi_t(\mu^N_0)) \geq 1-\frac{C}{\sqrt{N}}. \end{equation} \end{proof} \begin{remark} \begin{enumerate}[label=\roman{*}).] \item The proof of Lemma \ref{lemma: ergodic theorem} leaves open the possibility that there is a non-empty `exceptional set' of initial data $\mu^N$ where (\ref{eq: ergodicistation}) does not hold. A stronger assertion would be positive Harris recurrence, as defined in \cite{Harris recurrence}, which allows a similar ergodic theorem for \emph{any} initial data $\mu^N$. This is not necessary for our purposes. \item In principle, one could use this to compute the typical time scales necessary for these deviations to occur, and sharper estimates may be obtained by using more detailed forms of relaxation, such as the entropic relaxation considered in \cite{Carlen 08}. This is not necessary for our arguments. \end{enumerate}\end{remark}
\section{Proof of Theorem \ref{corr: PW convergence as POC}} \label{sec: proof of POC}
Finally, we show that Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime} imply the claimed chaoticity estimates in Theorem \ref{corr: PW convergence as POC}. The following proof largely follows that of \cite[Theorem 3.1]{M+M}, using the estimates derived in this paper. As remarked in the introduction, the novelty is the use of the H\"older estimate (\ref{eq: good continuity estimate}) to control the term $\mathcal{T}_3$.
In the following proof, we will use estimates from Theorem \ref{thm: low moment regime}, which allow us to minimise the moment conditions required on the initial data. Better results can be obtained using Theorem \ref{thrm: PW convergence} at the cost of requiring a stronger moment estimate, although these still do not obtain optimal rates. \begin{proof}[Proof of Theorem \ref{corr: PW convergence as POC}] Let $k>2$, and $\epsilon=\epsilon(d,k)>0$ be the resulting exponent from Theorem \ref{thm: low moment regime}. Let $\mu^N_0 \in \mathcal{S}_N$ satisfy $\Lambda_k(\mu^N_0)\le a$.
\\ Recall that we wish to estimate \begin{equation} \frac{\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{P}^N_t(\mu^N_0, \cdot)], \phi_t(\mu^N_0)^{\otimes l}\right)}{l}\end{equation} uniformly in $t\ge 0$ and $l=1,...,N$, where $\mathcal{W}_{1,l}$ is the Wasserstein$_1$ distance on laws, given by (\ref{eq: definition of script W}). Let $\mathcal{V}^N_t$ be a labelled Kac process, and let $\mu^N_t$ be the associated process of empirical measures. Fixing a test function $f\in B_X^{\otimes l}$, we break up the difference as \begin{equation}\begin{split} \label{eq: decomposition for chaos} &\int_{(\mathbb{R}^d)^N} f(V)\left(\Pi_l[\mathcal{P}^N_t(\mu^N_0, \cdot)]-(\phi_t(\mu^N_0))^{\otimes l}\right)(dV) \\ & \hspace{1cm} =\mathbb{E}_{\mu^N_0}\left[\prod_{j=1}^l f_j(v_j(t))\right]-\prod_{j=1}^l\langle f_j, \phi_t(\mu^N_0)\rangle \\[1ex] & \hspace{1cm} = \mathcal{T}_1+\mathcal{T}_2\end{split} \end{equation} where $\mathbb{E}_{\mu^N_0}$ denotes expectation under the law $\mathcal{P}^N_t(\mu^N_0, \cdot)$, and where the two error terms are\begin{equation} \mathcal{T}_1:= \mathbb{E}_{\mu^N_0}\left[\prod_{j=1}^l f_j(v_j(t))-\prod_{j=1}^l \langle f_j, \mu^N_t\rangle\right]; \end{equation} \begin{equation} \mathcal{T}_2:=\mathbb{E}_{\mu^N_0}\left[\prod_{j=1}^l \langle f_j, \mu^N_t\rangle-\prod_{j=1}^l\langle f_j, \phi_t(\mu^N_0)\rangle \right].\end{equation} Now, $\mathcal{T}_1$ is a purely combinatorial term, based on the use of empirical measures, and $\mathcal{T}_2$ may be controlled using the pointwise estimates of Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime}. We will indicate how these terms may be controlled in the simple case $l=2$, and use this to show the full, `infinite dimensional' chaos estimate claimed.
\paragraph{Step 1: Estimate on $\mathcal{T}_1$} Since the law $\mathcal{P}^N_t(\mu^N_0, \cdot)$ is symmetric, we may rewrite \begin{equation} \mathbb{E}_{\mu^N_0}\left[f_1(v_1(t))f_2(v_2(t))\right]=\mathbb{E}_{\mu^N_0}\left[\frac{1}{N(N-1)}\sum_{i\neq j} f_1(v_i(t))f_2(v_j(t))\right] \end{equation} where $N(N-1)$ counts the number of \emph{ordered} pairs of indices $(i,j)$. Similarly, the second term may be written \begin{equation} \mathbb{E}_{\mu^N_0} \left[\langle f_1, \mu^N_t\rangle\langle f_2, \mu^N_t\rangle\right]=\mathbb{E}_{\mu^N_0}\left[\left(\frac{1}{N}\sum_{i=1}^N f_1(v_i(t))\right)\left(\frac{1}{N}\sum_{j=1}^N f_2(v_j(t))\right)\right].\end{equation} Comparing the two terms, and using the bound $\|f_j\|_{L^\infty}\le \|f_j\|_X\le 1$ for $j=1,2$, we obtain the estimate \begin{equation} \begin{split} \left|\mathcal{T}_1\right| &\le \sum_{i\neq j} \left|\frac{1}{N(N-1)}-\frac{1}{N^2}\right|+\sum_{i=1}^N \frac{1}{N^2}. \end{split} \end{equation} Therefore, we have the bound $|\mathcal{T}_1| \le \frac{2}{N}$, uniformly in $f$ and $t$.
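To spell out the arithmetic: each of the $N(N-1)$ off-diagonal terms contributes \begin{equation} \frac{1}{N(N-1)}-\frac{1}{N^2}=\frac{1}{N^2(N-1)}, \end{equation} so the first sum equals $\frac{1}{N}$, while the diagonal sum equals $N\cdot N^{-2}=\frac{1}{N}$, giving the stated bound $|\mathcal{T}_1|\le\frac{2}{N}$.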
\paragraph{Step 2: Estimate on $\mathcal{T}_2$} For the case $l=2$, we break up the product as \begin{equation} \begin{split} &\prod_{j=1}^2 \langle f_j, \mu^N_t\rangle-\prod_{j=1}^2\langle f_j, \phi_t(\mu^N_0)\rangle \\&\hspace{1cm} = \langle f_1, \mu^N_t-\phi_t(\mu^N_0)\rangle \langle f_2, \mu^N_t\rangle + \langle f_1, \phi_t(\mu^N_0)\rangle \langle f_2, \mu^N_t-\phi_t(\mu^N_0)\rangle. \end{split}\end{equation} In each case, the difference term is dominated by a multiple of the Wasserstein distance $W(\mu^N_t, \phi_t(\mu^N_0))$, where $W$ is as in (\ref{eq: definition of W}), and the remaining term is absolutely bounded, by the boundedness of $f_j, j=1,2$. Therefore, we estimate \begin{equation}\label{eq: extensivity}\left|\prod_{j=1}^l \langle f_j, \mu^N_t\rangle-\prod_{j=1}^l\langle f_j, \phi_t(\mu^N_0)\rangle\right| \lesssim W(\mu^N_t, \phi_t(\mu^N_0)). \end{equation} Now, the right-hand side is precisely the term controlled by Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime}, in the special case $\mu_0=\mu^N_0$. By the choice of $\epsilon$ and $k$ above, we obtain the control \begin{equation} \mathcal{T}_2\lesssim \hspace{0.05cm}\Lambda_k(\mu^N_0)^\frac{1}{2}\hspace{0.05cm}N^{-\epsilon} \lesssim \hspace{0.05cm}a\hspace{0.05cm}N^{-\epsilon} \end{equation} for some explicit $\epsilon=\epsilon(d,k)>0$.
\\ We also remark here that this implication, given Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime}, is immediate. However, attempting to reverse this implication, and deduce a theorem similar to Theorem \ref{thrm: PW convergence} from a control of $\mathcal{T}_2$, requires moving the supremum over test functions $f$ \emph{inside} the expectation. This corresponds to the most technical step in our proof (Lemmas \ref{thrm: pointwise martingale control}, \ref{thrm: local uniform martingale control}). Therefore, while it may be possible to deduce a version of Theorem \ref{thrm: PW convergence} from the control of $\mathcal{T}_2$ given by \cite{M+M}, this would scarcely be less technical than the proof given, and would not lead to a proof of Theorem \ref{thrm: Main Local Uniform Estimate}. \paragraph{Step 3: Deduction of Infinite-Dimensional Chaos} Combining the two estimates for the case $l=2$ above, we deduce that there exists $\epsilon=\epsilon(d,k)>0$ such that\begin{equation} \sup_{t\ge 0} \hspace{0.05cm} \mathcal{W}_{1,2}\left(\Pi_2\left[\mathcal{P}^N_t(\mu^N_0, \cdot)\right], \phi_t(\mu^N_0)^{\otimes 2}\right) \lesssim a\hspace{0.05cm}N^{-\epsilon}.\end{equation} To deduce the full statement, we appeal to the following result from \cite{Hauray Mischler}, which may also be found in \cite[Theorem 2.1]{Mischler}. For any probability measure $\mu$ on $\mathbb{R}^d$, and any symmetric distribution $\mathcal{L}^N$ on $(\mathbb{R}^d)^N$, we may estimate \begin{equation} \max_{l\le N}\hspace{0.05cm}\frac{\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{L}^N], \mu^{\otimes l}\right)}{l} \le C\left(\mathcal{W}_{1,2}\left(\Pi_2[\mathcal{L}^N], \mu^{\otimes 2}\right)^{\alpha_1}+N^{-\alpha_2}\right)\end{equation} for some explicit constants $C, \alpha_1, \alpha_2>0$ depending on the dimension $d$. The claimed result (\ref{eq: CPOC}) now follows.
\\ We now turn to the two consequences claimed as a result. \paragraph{i). Chaotic Case} Let $\mu_0 \in \mathcal{S}$ have $k^\mathrm{th}$ moment $\Lambda_k(\mu_0)\le a$, and let $\mathcal{V}^N_0=(v_1(0),...,v_N(0))$ be as described in the statement of the theorem, with associated empirical measure $\mu^N_0$. It is straightforward to show that this construction preserves moments up to a constant: that is, $\mathbb{E}(\Lambda_k(\mu^N_0))\lesssim a.$
\\ For a fixed test function $f\in B_X^{\otimes l}$, we return to the decomposition (\ref{eq: decomposition for chaos}). For this case, where $\mu^N_0\neq \mu_0$, we have a third error term: \begin{equation} \int_{(\mathbb{R}^d)^N} f(V)(\Pi_l[\mathcal{LV}^N_t]-(\phi_t(\mu_0))^{\otimes l})(dV)=\mathcal{T}_1+\mathcal{T}_2+\mathcal{T}_3.\end{equation} Here, $\mathcal{T}_1$ and $\mathcal{T}_2$ are as above, replacing $\mathbb{E}_{\mu^N_0}$ by the full expectation $\mathbb{E}$, and $\mathcal{T}_3$ is an additional error term, from approximating $\mu_0$ by $\mu^N_0$: \begin{equation} \mathcal{T}_3:=\mathbb{E}\left[\prod_{j=1}^l \langle f_j, \phi_t(\mu^N_0)\rangle-\prod_{j=1}^l\langle f_j, \phi_t(\mu_0)\rangle\right].\end{equation} As in the case above, we consider first the case $l=2$. The first two terms $\mathcal{T}_1, \mathcal{T}_2$ may be estimated as above, by conditioning on $(v_1(0),...,v_N(0))$, to conclude that \begin{equation} \mathcal{T}_1+\mathcal{T}_2 \lesssim aN^{-\epsilon}\end{equation} for some $\epsilon>0$, uniformly in $f\in B_X^{\otimes l}$ and $t\ge 0$.
\\ Arguing as in (\ref{eq: extensivity}), we bound \begin{equation} \mathcal{T}_3 \lesssim\mathbb{E}W(\phi_t(\mu^N_0), \phi_t(\mu_0)).\end{equation}We estimate this term using the continuity estimate Theorem \ref{thrm: W-W continuity of phit}. Let $k'\in (2,k)$, and let $\zeta>0$ be the resulting exponent from Theorem \ref{thrm: W-W continuity of phit}; by making $\zeta$ smaller if necessary, we may assume that \begin{equation} \frac{\zeta k}{k-k'} \le 1. \end{equation} From Theorem \ref{thrm: W-W continuity of phit}, we have the estimate \begin{equation} \sup_{t\ge 0} W(\phi_t(\mu^N_0),\phi_t(\mu_0))\lesssim\Lambda_{k'}(\mu^N_0,\mu_0)W(\mu^N_0,\mu_0)^\zeta\end{equation} and we use H\"older's inequality to obtain, uniformly in $t\ge 0$, \begin{equation} \begin{split}\mathbb{E}\left[W(\phi_t(\mu^N_0),\phi_t(\mu_0))\right] &\lesssim \mathbb{E}\left[\Lambda_k(\mu^N_0)\right]^{k'/k}\mathbb{E}\left[W(\mu^N_0,\mu_0)^\frac{\zeta k}{k-k'}\right]^\frac{k-k'}{k} \\[1ex] & \lesssim a^{k'/k} \hspace{0.1cm}\mathbb{E}\left[W(\mu^N_0,\mu_0)\right]^\zeta. \end{split} \end{equation} From \cite[Proposition 9.2]{ACE}, there is a constant $\beta=\beta(d,k)>0$ such that $ \mathbb{E} W(\mu^N_0,\mu_0)\lesssim N^{-\beta}$, so we obtain \begin{equation} \mathbb{E}\left[W(\phi_t(\mu^N_0),\phi_t(\mu_0))\right] \lesssim aN^{-\beta\zeta}.\end{equation} Combining, and since all of our estimates are uniform in $f$ and $t$, we have shown that \begin{equation} \mathcal{W}_{1,2}\left(\Pi_2[\mathcal{LV}^N_t], \phi_t(\mu_0)^{\otimes 2}\right) \lesssim aN^{-\alpha}\end{equation} for some $\alpha=\alpha(d,k)>0$. The improvement to infinite-dimensional chaos is exactly as above. \paragraph{ii). General Case} The general case follows from the first case, by taking expectations over the initial data $\mu^N_0$. 
Indeed, for all $l\le N$, all $f\in B_X^{\otimes l}$ and $t\ge 0$, and for any initial data $(v_1(0),...v_N(0))$ with associated measure $\mu^N_0$, we have the bound \begin{equation} \frac{1}{l}\hspace{0.1cm}\mathbb{E}_{\mu^N_0}\left[f_1(v_1(t))...f_l(v_l(t))-\prod_{j=1}^l \langle f_j, \phi_t(\mu^N_0)\rangle\right] \lesssim \Lambda_k(\mu^N_0)N^{-\epsilon}.\end{equation} Taking expectation over the random initial data $(v_1(0),...,v_N(0))$ produces a full expectation on the left-hand side, and by definition of $\mathcal{L}^l_t$ in (\ref{eq: defn of ll}), \begin{equation}\mathbb{E}\left[\prod_{j=1}^l \langle f_j, \phi_t(\mu^N_0)\rangle\right]=\int_{(\mathbb{R}^d)^l} f(V)\hspace{0.1cm}\mathcal{L}^l_t(dV). \end{equation} Optimising over $f\in B_X^{\otimes l}$, $l\le N$ and $t\ge 0$ proves the claimed result. \end{proof}
\appendix
\section{Calculus of Martingales} \label{sec: Calculus of mgs}
In this appendix we review some basic facts and inequalities for martingales associated to the Kac process. All of these facts are true for general Markov chains; see \cite{D&N}.
Let $\mu^N_t$ be a Kac process, and write $m^N$, $\overline{m}^N$ for the jump measure and compensator defined in Section \ref{sec: interpolation decomposition}. Then, for any bounded and measurable $F^N: [0,T]\times \mathcal{S}_N \rightarrow \mathbb{R}$, the process \begin{equation} \label{eq: M is for martingale} \mathcal{M}^N_t= \int_{(0,t]\times \mathcal{S}_N} \left\{F^N_s(\mu^N)-F^N_s(\mu^N_{s-})\right\} (m^N-\overline{m}^N)(ds, d\mu^N) , \hspace{0.5cm} 0\leq t\leq T\end{equation} is a martingale for the natural filtration $(\mathcal{F}^N_t)_{t\geq 0}$ of the process. We have the $L^2$ control \begin{equation} \label{eq: elltwo martingale control} \left\| \mathcal{M}^N_t\right\|^2_2 = \mathbb{E}\left\{\int_{(0,t]\times \mathcal{S}_N} \left\{F^N_s(\mu^N)-F^N_s(\mu^N_{s-})\right\}^2 \overline{m}^N(ds, d\mu^N)\right\}. \end{equation} We will also use another special case of It\^{o}'s isometry for the measure $m^N-\overline{m}^N$ for a similar form of martingale. If $F^N$ is bounded and measurable on $[0,T]\times \mathcal{S}_N$, then for $t\leq T$, \begin{equation} \label{eq: QV of M}\left\|\int_0^t F^N_s(\mu^N_{s-})(m^N-\overline{m}^N)(ds, \mathcal{S}_N)\right\|^2_2 = \mathbb{E}\left\{\int_0^t F^N_s(\mu^N_s)^2\hspace{0.1cm} \overline{m}^N(ds, \mathcal{S}_N)\right\}.\end{equation} For the local uniform case, Theorem \ref{thrm: Main Local Uniform Estimate}, it will be necessary to control martingales of the form (\ref{eq: M is for martingale}) in general $L^p$ spaces, rather than simply $L^2$. 
Since $\mathcal{M}^N$ of this form are finite variation martingales, the quadratic variation is given by \begin{equation} \left[\mathcal{M}^N\right]_t=\int_{(0,t]\times \mathcal{S}_N} \left\{F^N_s(\mu^N)-F^N_s(\mu^N_{s-})\right\}^2 m^N(ds, d\mu^N), \hspace{0.5cm} 0\leq t\leq T.\end{equation} Our analysis in $L^p$ is based on Burkholder's inequality for c\`{a}dl\`{a}g martingales, which we state here for the class of martingales constructed above: \begin{lemma} \label{lemma:Burkholder} Suppose that $(\mathcal{M}^N_t)_{0\le t\le T}$ is the process given by (\ref{eq: M is for martingale}), and let $p\geq 2$. Then there exists a constant $C=C(p)<\infty$ such that for all $t\leq T$, we have the $L^p$ control \begin{equation} \left\|\hspace{0.1cm}\sup_{s\leq t} \left|\mathcal{M}^N_s\right|\hspace{0.1cm}\right\|^p_p \leq C(p) \mathbb{E}\left[\left(\int_{(0,t]\times \mathcal{S}_N} \left\{F^N_s(\mu^N)-F^N_s(\mu^N_{s-})\right\}^2 m^N(ds, d\mu^N) \right)^{p/2} \right].\end{equation} \end{lemma}
\end{document} |
\begin{document}
\title[]{Cohomogeneity one central K\"ahler metrics in dimension four}
\author[]{Thalia Jeffres and Gideon Maschler} \address{Wichita State University, Wichita, KS} \email{jeffres@math.wichita.edu}
\address{Department of Mathematics and Computer Science\\ Clark University\\ Worcester, MA } \email{gmaschler@clarku.edu}
\maketitle \thispagestyle{empty} \begin{abstract} A K\"ahler metric is called central if the determinant of its Ricci endomorphism is constant \cite{m}. For the case in which this constant is zero, we study on $4$-manifolds the existence of complete metrics of this type which are cohomogeneity one for three unimodular $3$-dimensional Lie groups: $SU(2)$, the group of Euclidean plane motions $E(2)$ and a quotient of the Heisenberg group $\mathrm{nil}_3$ by a discrete subgroup. We obtain a complete classification for $SU(2)$, and some existence results for the other two groups, in terms of specific solutions of an associated ODE system. \end{abstract}
\section{Introduction}
In this paper the term central K\"ahler metric refers to a K\"ahler metric for which the determinant of the Ricci endomorphism is constant. This is a special case of the metric type called central in \cite{m}. Riemannian and hermitian metrics with constant Ricci determinant were considered earlier, see for example \cite{k, l, b-m}.
On a compact K\"ahler manifold there exists a Futaki-type invariant for central K\"ahler metrics \cite{f-t}. An associated functional analogous to the K-energy appears in \cite{c-t,s-w,t,r}. If a compact manifold admits a K\"ahler-Einstein metric, it is shown in \cite{m} that a central K\"ahler metric also exists in any K\"ahler class, and an appropriate notion of uniqueness holds for it as well. If a compact manifold with a definite first Chern class admits a central K\"ahler metric, it also admits a K\"ahler-Einstein metric. It is, as far as we know, an open question whether in the case where the first Chern class has no sign, a similar result holds with the conclusion that the manifold admits a K\"ahler metric with constant Ricci eigenvalues.
On noncompact manifolds the methods for obtaining the above results are unavailable, and existence of complete central K\"ahler metrics does not seem to have been explored. The main purpose of this paper is to demonstrate existence of such metrics which are also invariant under certain cohomogeneity one group actions on $4$-manifolds. For technical reasons our results are limited to central metrics with {\em zero Ricci determinant}, which we call centrally flat, or metrics of zero central curvature. Note that in the rough classification in \cite{m} of compact complex surfaces admitting central K\"ahler metrics, the most difficult and least understood case is the centrally flat one.
It should be noted that the groups we consider are not always compact. More specifically, up to a possible quotient by a discrete subgroup, the groups are three of the six unimodular $3$-dimensional Lie groups. These are (a quotient of) the Heisenberg group $\mathrm{nil}_3$, $SU(2)$ and the group of Euclidean plane motions $E(2)$. For the first and second of these, closely related incomplete central metrics appear in \cite[Thm. 1 and sec. 3.5]{a-m2}.
Our methods involve ODE techniques, and are directly inspired by the papers \cite{d-s1,d-s2} of Dancer and Strachan, and the recent articles \cite{a-m2,mr}. In all of these the K\"ahler-Einstein case is prominent. Another less closely related work is \cite{mr1}, which examines K\"ahler-Ricci solitons for actions of Heisenberg groups also in higher dimensions. For all the metrics we find, completeness holds on manifolds admitting a singular orbit, and the smooth extension of the metric and K\"ahler form to this orbit are shown using the recent systematic approach of Verdiani and Ziller \cite{vz}.
We remark that the need to restrict ourselves to centrally flat metrics is due to the rather unexpected fact that the Center Manifold Theorem applies only in this case to our systems of ODEs. Throughout the paper we are, of course, only interested in centrally flat metrics which are not Ricci-flat.
It is interesting to compare our results to those for K\"ahler-Einstein metrics in the above references. We note first that our results are restricted to metrics which are diagonal in an appropriate coframe containing left invariant $1$-forms for the group. Note that cohomogeneity one K\"ahler-Einstein metrics under $SU(2)$ must be diagonal, but we are not aware of a similar result for central metrics.
For the action of $SU(2)$, we classify the possible cases (Theorem~\ref{thm3}), but our methods yield only complete diagonal centrally flat metrics which are biaxial, meaning that two out of three metric coefficients are equal. In contrast, \cite{d-s1} also find complete triaxial K\"ahler-Einstein metrics (in which the three coefficients are all distinct). Finally, the biaxial centrally flat metrics we find, just like the corresponding K\"ahler-Einstein ones in \cite{d-s1}, can be given in explicit form.
For $E(2)$, we obtain triaxial metrics in non-explicit form, in analogy with the same result in \cite{mr} in the K\"ahler-Einstein case (see Theorem~\ref{thm4}). However, in that article all cases are classified, whereas for centrally flat metrics, we have to exclude one case from consideration, as we only find for it partial information concerning solutions satisfying a certain analyticity property.
The complete centrally flat metrics under the action of the quotient of $\mathrm{nil}_3$ are explicitly given examples. See Theorem~\ref{thm2}.
In sections \ref{sec:sh} and \ref{sec:construct} and the appendix, we recall the ansatz of \cite{mr}, based on the notion of shear operators, and adapt it to the case of central metrics. As in the K\"ahler-Einstein case, this ansatz may include more than just cohomogeneity one examples. Here we include it mainly to connect with that work, and recall its specialization to the cohomogeneity one case in section \ref{sec:coho}. Our main results are given in sections \ref{sec:heis}, \ref{sec:SU2} and \ref{sec:Euc}.
\section{Shear and integrability}\lb{sec:sh}
Let $(M,g,J)$ be an almost hermitian $4$-manifold. We fix a local oriented orthonormal frame denoted \[ \{e_i\}=\{\kk, \tT=J\kk, \xx, \yy=J\xx\}. \] In the frame domain, we have an orthogonal decomposition of the tangent bundle: \[ \text{$TM=\mathcal{V}\oplus\mathcal{H}$, \quad with ${\mathcal{V}}=\mathrm{span}(\kk,\tT),\quad {\mathcal{H}}=\mathrm{span}(\xx,\yy)$.} \]
Let $\mathcal{U}$ stand for either $\mathcal{V}$ or $\mathcal{H}$, and $\pi_{\mathcal{U}^\perp}:TM\to{\mathcal{U}^\perp}$ denote the orthogonal projection. For a vector field $X\in\Gamma(\mathcal{U})$, consider the operator $\pi_{\mathcal{U}^\perp}\circ \n X|_{\mathcal{U}^\perp}:\Gamma(\mathcal{U}^\perp)\to\Gamma({\mathcal{U}^\perp})$, where $\n$ is the Levi-Civita covariant derivative of $g$. Define the {\em shear operator} of $X$ by \[
\text{$S_X$\ :=\ trace-free symmetric part of $\pi_{\mathcal{U}^\perp}\circ \n X|_{\mathcal{U}^\perp}$.} \]
Recall the condition for integrability of $J$ in terms of shear operators given in \cite{a-m, mr}. \begin{thm}\lb{Nij0} Given the above set-up, the almost complex structure $J$ is integrable in the frame domain if and only if \begin{align}\lb{Nij} \mathrm{i})&\ \ J S_{\xx}=S_{\yy} \text{ on ${\mathcal{V}}$.}\nonumber\\ \mathrm{ii})&\ \ J S_{\kk}=S_{\tT} \text{ on ${\mathcal{H}}$.}
\end{align} \end{thm} In applications we will also rely on the following expression for the matrix of the shear operator in a local oriented orthonormal frame $\{v_1,v_2\}$ on $\mathcal{U}^\perp$. \[ [S_X]_{v_1,v_2}= \begin{bmatrix}
- \sigma_1 & \sigma_2\\
\sigma_2 & \sigma_1\\
\end{bmatrix}, \] with {\em shear coefficients}: \begin{equation}\lb{sh-coef} \begin{aligned} 2\sigma_1\ &:=\
\ \ g([X,v_1],v_1)-g([X,v_2],v_2),\\ 2\sigma_2\ &:=\
-g([X,v_1],v_2) - g([X,v_2],v_1). \end{aligned} \end{equation}
One simple case in which integrability holds by Theorem \ref{Nij0} is when all the shears vanish: $S_{e_i}=0$, $i=1,\ldots,4$. We refer to this as the shear-free case.
\section{Shear and K\"ahler metrics}\lb{sec:construct}
We recall here an ansatz for K\"ahler metrics on $4$-manifolds given in \cite{mr}. Let $(M,g,J)$ be an almost hermitian $4$-manifold admitting an orthonormal frame $\{e_i\}=\{\kk, \tT, \xx, \yy\}$, with $J\kk=\tT$, $J\xx=\yy$, defined over an open set $U\subset M$, which satisfies the Lie bracket relations
\begin{align} &[\kk,\tT]=L(\kk+\tT),\qquad &&[\xx,\yy]=N(\kk+\tT),\lb{brack1}\\ &[\kk,\xx]=A\xx+B\yy,\qquad &&[\kk,\yy]=C\xx+D\yy,\lb{brack2}\\ &[\tT,\xx]=E\xx+F\yy,\qquad &&[\tT,\yy]=G\xx+H\yy,\lb{brack3} \end{align}
for smooth functions $A, B, C, D, E, F, G, H, L, N$ on $U$ such that
\begin{align} A&-D=F+G,\qquad B+C=H-E,\lb{rels1}\\ N&=A+D=-(E+H).\lb{rels2} \end{align} Then $(g,J)$ is K\"ahler (see \cite[Prop. 3.1]{mr}). Its Levi-Civita connection over $U$ can be given by setting \begin{equation}\lb{conn} \n_\kk\kk=-L\tT,\qquad \n_\xx\xx=A\kk+E\tT,\qquad \n_\xx\kk=-A\xx+E\yy, \end{equation} and then having all other covariant derivative expressions on frame fields determined by the requirement that $\n$ be torsion-free and make $J$ parallel.
The Ricci form of the K\"ahler metric $g$ was shown in \cite{mr} to take the form
\begin{multline}\lb{ric} \rho=
L(d\hat\kk+d\hat\tT)+(C-H)d\hat\kk+(A-F)d\hat\tT\\ +dL\wedge(\hat\kk+\hat\tT)+d(C-H)\wedge\hat\kk+d(A-F)\wedge\hat\tT, \end{multline} where the hatted quantities denote the dual coframe of $\{e_\ell\}$.
Using formulas \Ref{d-frame} in the appendix for the exterior derivatives of the coframe $1$-forms, as well as $df=d_\kk f\,\hat\kk+d_\tT f\,\hat\tT+d_\xx f\,\hat\xx+d_\yy f\,\hat\yy$, valid for a smooth function $f$ on $M$, we can rewrite this formula in the form \begin{equation*} \rho=\alpha\hat\xx\wedge\hat\yy+\beta\hat\kk\wedge\hat\tT+\gamma\hat\kk\wedge\hat\xx+\delta\hat\kk\wedge\hat\yy+ \phi\hat\tT\wedge\hat\xx+\psi\hat\tT\wedge\hat\yy, \end{equation*} where \begin{align} \alpha&=-N(2L+C-H+A-F),\nonumber\\ \beta&=-L(2L+C-H+A-F)+d_{\kk-\tT}L-d_\tT(C-H)+d_\kk(A-F),\nonumber\\ \gamma&=-d_\xx(L+C-H),\nonumber\\ \delta&=-d_\yy(L+C-H),\nonumber\\ \phi&=-d_\xx(L+A-F),\nonumber\\ \psi&=-d_\yy(L+A-F).\lb{ric-coeff} \end{align}
The central curvature $c$ is defined by the equation \[ \rho^{\wedge 2}=c\,\om^{\wedge 2} . \] If $c$ is constant, we write \( c=\lambda, \) and call the corresponding metric a {\em central metric}. In terms of the Ricci coefficients \Ref{ric-coeff}, we then have \begin{equation}\lb{cent} c=\alpha\beta-\gamma\psi+\delta\phi=\lambda, \end{equation} because $\rho^{\wedge 2}=2c\,\hat\xx\wedge\hat\yy\wedge\hat\kk\wedge\hat\tT$ while $\om^{\wedge 2}=2\hat\xx\wedge\hat\yy\wedge\hat\kk\wedge\hat\tT$. Such a central metric will not be Einstein if either at least one of $\gamma$, $\delta$, $\phi$, $\psi$ is not identically zero or $\alpha$ and $\beta$ are not both equal to the same constant.
We now recall a function built in to our ansatz that gave rise in \cite{mr} to the independent variable in a system of ODEs used in both \cite{mr} and \cite{mr1}.
The Lie bracket relations \Ref{brack1}-\Ref{brack3} imply that the distribution spanned by $\kk+\tT$, $\xx$ and $\yy$ is integrable. Since this distribution is orthogonal to $\kk-\tT$, while the latter vector field has constant length and is easily seen to have geodesic flow, it follows that it is locally a gradient (cf. \cite[Cor. 12.33]{onel}). Thus, there exists a smooth function $\tau$ defined in some open set $V\subset U$, such that \begin{equation}\lb{grad} \kk-\tT=\n\tau. \end{equation} Consider now the six functions $P$, $Q$, $R$, $S$, $L$, $N$, where the last two are as in \Ref{brack1}, and the first four are given in terms of four of the functions in \Ref{brack2}-\Ref{brack3} by \begin{align} P&=(B-C)+(F-G), &&Q=(B-C)-(F-G),\nonumber \\ R&=\sqrt{(B+C)^2+(F+G)^2}, &&S=\tan^{-1}\left(\frac{B+C}{F+G}\right),\lb{chan-var}
\end{align} where $S$ is only defined on the set $\{F+G\ne 0\}$.
In terms of these variables, it is shown in the appendix that in case $A, B, \ldots, H$, $L$ and $N$ each depend only on $\tau$ (i.e. are compositions of functions of one variable with $\tau$), the ansatz equations simplify to five ODEs \Ref{cent-onevar} involving those functions. In particular the ODE giving the central curvature equation takes the form \begin{equation} -N(2L+N-P/2)[-L(2L+N-P/2)+(2L'+N'-P'/2)]=\lambda.\lb{cent-onevar0} \end{equation}
\section{Cohomogeneity one examples}\lb{sec:coho} It was shown in \cite{mr} that the ansatz of section \ref{sec:construct} includes as a special case cohomogeneity one diagonal K\"ahler metrics under the action of a unimodular group in dimension three. In this section we review their construction, and derive the central metric equation for such metrics.
Assume that $(M,g)$ is a $4$-dimensional Riemannian manifold admitting a proper isometric action by a three dimensional Lie group $\mathcal{G}$ with cohomogeneity one having a discrete isotropy group. Assuming also that $\mathcal{G}$ is a unimodular group, we choose a frame of left-invariant vector fields $X_1$, $X_2$, $X_3$, and dual coframe consisting of left-invariant $1$-forms $\sigma_1$, $\sigma_2$, $\sigma_3$. These satisfy \begin{align}
[X_1,X_2] &= -p_3 X_3, &&d\sigma_1 = p_1\sigma_2\wedge\sigma_3,\nonumber \\ [X_2,X_3] &= -p_1 X_1, &&d\sigma_2 = p_2\sigma_3\wedge\sigma_1,\nonumber\\ [X_3,X_1] &= -p_2 X_2, &&d\sigma_3 = p_3\sigma_1\wedge\sigma_2,\lb{X123} \end{align} for some constants $p_1$, $p_2$, $p_3$. Cohomogeneity one metrics for such groups are also described as having Bianchi type A. A diagonal such metric takes the form \begin{equation}\label{bianchiAmet} g = (abc)^2dt^2+a^2\sigma_1^2+b^2\sigma_2^2+c^2\sigma_3^2, \end{equation} for functions $a$, $b$, $c$ of $t$. We note that, considering on $M$ the orthogonal frame $\partial_t,X_1,X_2,X_3$ dual to $dt,\sigma_1,\sigma_2,\sigma_3$, the field $\partial_t$ of course commutes with all $X_i$, $i=1,2,3$.
Following Dancer and Strachan \cite{d-s1}, denoting $w_1=bc$, $w_2=ac$, and $w_3=ab$, we define functions $\alpha$, $\beta$, and $\gamma$ so that \begin{align} w_1'&=p_1w_2w_3+\alpha w_1, \\ w_2'&=p_2w_1w_3+\beta w_2, \\ w_3'&=p_3w_1w_2+\gamma w_3. \end{align} They show that (modulo reordering the frame vectors) the only K\"ahler structures $(M,g,J)$ with $g$ of the form \Ref{bianchiAmet} have complex structure determined by \begin{equation}\label{complexJ} J\partial_t = abX_3 \quad\text{and}\quad JX_1=\frac{a}{b}X_2, \end{equation} and $\alpha$, $\beta$, and $\gamma$ satisfy \[ \alpha=\beta \quad \text{and}\quad \gamma=0.\] The K\"ahler form is then given by \begin{equation}\label{kahlerform} \omega = abc^2dt\wedge\sigma_3+ab\sigma_1\wedge\sigma_2 = w_1w_2dt\wedge\sigma_3+w_3\sigma_1\wedge\sigma_2, \end{equation} and $w_1,w_2,w_3$ satisfy \begin{align} w_1'&=p_1w_2w_3+\alpha w_1,\nonumber \\ w_2'&=p_2w_1w_3+\alpha w_2,\nonumber \\ w_3'&=p_3w_1w_2.\lb{w33} \end{align} This, in terms of $a,b,c$ implies \begin{align} 2a'/a&=-p_1a^2+p_2b^2+p_3c^2,\lb{K1} \\ 2b'/b&=p_1a^2-p_2b^2+p_3c^2, \lb{K2}\\ 2c'/c&=p_1a^2+p_2b^2-p_3c^2+2\alpha.\lb{KKE} \end{align}
We recall the prescription that makes this model fit with the ansatz of Section $3$. The orthonormal frame and dual coframe are given by \begin{align*} \mathbf{k} &= \frac{\sqrt{2}}{2}\left(\frac{1}{c}X_3+\frac{1}{abc}\partial_t \right), & \hat{\mathbf{k}} &= \frac{\sqrt{2}}{2}(c\sigma_3+abcdt), \\ \mathbf{t} &= \frac{\sqrt{2}}{2}\left(\frac{1}{c}X_3-\frac{1}{abc}\partial_t \right), & \hat{\mathbf{t}} &= \frac{\sqrt{2}}{2}(c\sigma_3-abcdt), \\ \mathbf{x} &= \frac{X_1}{a}, & \hat{\mathbf{x}} &= a\sigma_1, \\ \mathbf{y} &= \frac{X_2}{b}, & \hat{\mathbf{y}} &= b\sigma_2. \end{align*} One can easily check that relations \Ref{brack1}-\Ref{brack3} hold with these choices.
Next the functions of the ansatz are given in terms of $a$, $b$, $c$, by \begin{align*} A&=-E=-\frac{a'}{\sqrt{2}a^2bc}=-\frac{1}{a}\frac{da}{d\tau}, & B &=F= -\frac{bp_2}{\sqrt{2}ac}, \\ D&=-H=-\frac{b'}{\sqrt{2}ab^2c}=-\frac{1}{b}\frac{db}{d\tau}, & C&=G=\frac{ap_1}{\sqrt{2}bc}, \\ L&=-\frac{c'}{\sqrt{2}abc^2}=-\frac{1}{c}\frac{dc}{d\tau}, & N &= -\frac{cp_3}{\sqrt{2}ab}. \end{align*} Here the prime denotes differentiation with respect to $t$, while the expressions in terms of $d/d\tau$ hold due to the relation between $\tau$ and $t$ given by \[ \hat{\mathbf{k}}-\hat{\mathbf{t}}=d\tau=\sqrt{2}abcdt, \qquad\frac{d}{d\tau}=\frac{1}{\sqrt{2}abc}\frac{d}{dt}. \]
Finally, we give the functions $P,Q,R,S$ of the change of variables \Ref{chan-var}. \begin{align*} P&=-\sqrt{2}\frac{a^2p_1+b^2p_2}{abc}, & Q &= 0, \\ R&=\frac{a^2p_1-b^2p_2}{abc}, & S &= \frac{\pi}{4}. \end{align*}
From the point of view of the ansatz, the four relations in \Ref{rels1}-\Ref{rels2} that imply the K\"ahler condition impose only two additional relations here, say $A+D=N$ and $B+C=H-E$, giving \begin{align} \frac{a'}a+\frac{b'}b&=p_3c^2,\lb{K3}\\ \frac{b'}b-\frac{a'}a&=p_1a^2-p_2b^2,\lb{another} \end{align} which are equivalent to \Ref{K1}-\Ref{K2}. Our remaining task is to determine how the condition that the metric is central constrains $\alpha$ in \Ref{KKE}.
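The claimed equivalence of \Ref{K3}-\Ref{another} with \Ref{K1}-\Ref{K2} is a short linear algebra fact, and can be confirmed mechanically. The following sympy sketch (ours, purely illustrative and not part of the paper's argument; the symbols $u$, $v$ stand for $a'/a$, $b'/b$) solves the first pair and compares with the second:

```python
import sympy as sp

a, b, c, p1, p2, p3 = sp.symbols('a b c p1 p2 p3', positive=True)
u, v = sp.symbols('u v')  # u = a'/a, v = b'/b

# (K3): u + v = p3 c^2,  (another): v - u = p1 a^2 - p2 b^2
sol = sp.solve([sp.Eq(u + v, p3*c**2), sp.Eq(v - u, p1*a**2 - p2*b**2)], [u, v])

# (K1): 2a'/a = -p1 a^2 + p2 b^2 + p3 c^2,  (K2): 2b'/b = p1 a^2 - p2 b^2 + p3 c^2
K1 = (-p1*a**2 + p2*b**2 + p3*c**2)/2
K2 = (p1*a**2 - p2*b**2 + p3*c**2)/2
```

Both differences `sol[u] - K1` and `sol[v] - K2` simplify to zero.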
The central metric equation \Ref{cent} is given in the variables \Ref{chan-var} by \Ref{cent-onevar0}: \[ -N(2L+N-P/2)[-L(2L+N-P/2)+\frac d{d\tau}(2L+N-P/2)]=\lambda. \] Calculating using the above formulas for $L$, $N$, $P$ and also \Ref{KKE}, we have \begin{align*} 2L+N-P/2&=\frac 1{\sqrt{2}abc}\Big(-2\frac{c'}c-p_3c^2+p_1a^2+p_2b^2\Big)\\ &=\frac 1{\sqrt{2}abc}(-2\alpha), \end{align*} so that \Ref{cent-onevar0} takes the form \begin{align*} \frac{p_3c}{\sqrt{2}ab}\Big(\frac{-2\alpha}{\sqrt{2}abc}\Big) \Big[\frac{c'}{\sqrt{2}abc^2}\frac{-2\alpha}{\sqrt{2}abc}+\frac 1{\sqrt{2}abc}\Big(\frac{-2\alpha}{\sqrt{2}abc}\Big)' \,\Big]=\lambda. \end{align*} A relatively straightforward simplification of this which also uses \Ref{K3} yields the equivalent form \begin{equation}\lb{alpha}
p_3(\alpha^2)'=2c^2\Big(\lambda(ab)^4+p_3^2\alpha^2\Big). \end{equation} This equation, together with \Ref{K1}-\Ref{KKE}, constitutes the ODE system for diagonal Bianchi type A central metrics.
Note that setting $p_3=0$ (hence also $N=0$) forces $\lambda=0$ but no other constraints. On the other hand, setting $\lambda=0$ yields, for $p_3\ne 0$, the equation \[ \alpha'=p_3c^2\alpha, \] which will play a major role in the following sections.
Additionally, one can check that the formula $\alpha=\mp(\sqrt{\lambda}/p_3)(ab)^2$, $\lambda>0$, reduces \Ref{alpha} to an identity, which corresponds to the fact that it is the K\"ahler-Einstein condition (for $p_3\ne 0$, see \cite{mr}). On the other hand, a K\"ahler-Einstein metric with $p_3=0$ must be Ricci flat ($\lambda=0$) and necessarily $\alpha'=0$. But note in general from \Ref{ric-coeff} that $\alpha=0$ is necessarily a Ricci flat case, so such solutions will not concern us.
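The stated reduction of \Ref{alpha} to an identity can also be checked symbolically. In the sympy sketch below (ours, purely illustrative; symbol names are arbitrary), we write $w_3=ab$, use \Ref{K3} in the form $(ab)'=p_3c^2\,ab$, and substitute the candidate $\alpha^2=(\lambda/p_3^2)(ab)^4$:

```python
import sympy as sp

t = sp.symbols('t')
lam, p3 = sp.symbols('lambda p3', positive=True)
w3 = sp.Function('w3')(t)  # w3 = ab
c = sp.Function('c')(t)

w3p = p3*c**2*w3                 # (ab)' = p3 c^2 ab, from (K3)
alpha2 = lam/p3**2 * w3**4       # candidate: alpha^2 = (lambda/p3^2)(ab)^4

# substitute the derivative of w3 and form the residual of equation (alpha)
alpha2p = sp.diff(alpha2, t).subs(sp.Derivative(w3, t), w3p)
residual = sp.simplify(p3*alpha2p - 2*c**2*(lam*w3**4 + p3**2*alpha2))
```

The residual simplifies to zero, as claimed.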
\section{Cohomogeneity one centrally flat metrics under a Heisenberg group quotient action}\lb{sec:heis}
\subsection{The equations} On the Heisenberg group, with $p_1=0$, $p_2=0$ and $p_3=1$, equations \Ref{K1}-\Ref{KKE} and \Ref{alpha} take the form \begin{align} 2\frac{a'}a&=c^2,\lb{K1H}\\ 2\frac{b'}b&=c^2,\nonumber\\ 2\frac{c'}c&=-c^2+2\alpha,\lb{KKEH}\\ \alpha'&=c^2(\alpha+\lambda(ab)^4/\alpha).\lb{KKEH1} \end{align} Since $(a/b)'=0$, the ratio $a/b$ is a first integral; hence $b$ is a constant multiple of $a$, so that potential metrics are so-called biaxial. From now on we assume this constant is equal to $1$. Additionally, we adopt the form used in \cite{a-m2,mr1} by making the change of variables $a^2\,dt=dq$. Then, setting $\phi(q):=a^2$, we see from \Ref{K1H} that \[ \phi'(q)=2a\frac {da}{dq}=2a\frac {da}{dt}\frac{dt}{dq}=2a\frac {da}{dt}\frac{1}{a^2}=c^2. \] It follows that the metric takes the form \begin{equation}\lb{g-phi} g=\phi(q)(\sigma_1^2+\sigma_2^2)+\phi'(q)(\sigma_3^2+dq^2) \end{equation} with K\"ahler form \[ \om=d(\phi(q)\sigma_3). \] We now use a prime exclusively for the derivative with respect to $q$, while $\alpha$ will be considered, depending on the context, as a function of $t$ or a function of $q$. The two equations \Ref{KKEH}-\Ref{KKEH1} then translate as follows: \begin{align*} \frac{\phi''(q)}{\phi'(q)}&=2c\frac{dc}{dt}\frac{dt}{dq}\frac 1{c^2}=\left(-c^2+2\alpha\right)\frac 1{a^2}= -\frac{\phi'(q)}{\phi(q)}+2\frac{\alpha}{\phi(q)},\\ \alpha'(q)&=\frac{d\alpha}{dt}\frac{dt}{dq}=c^2\Big(\alpha+\lambda\frac{a^8}{\alpha}\Big)\frac 1{a^2} =\frac{\phi'(q)}{\phi(q)}\Big(\alpha+\lambda\frac{\phi^4}{\alpha}\Big). \end{align*} Or, simplified, \begin{align} \frac{(\phi^2)''}{(\phi^2)'}&=2\frac{\alpha}{\phi},\lb{Heis1}\\ \alpha'&=\frac{\phi'}{\phi}\Big(\alpha+\lambda\frac{\phi^4}{\alpha}\Big).\lb{Heis2} \end{align}
We now set $\lambda=0$. Then \Ref{Heis2} implies (if $\alpha$ is nonzero) that $\alpha/\phi$ is constant, which again we choose to be $1$. Substituting this into \Ref{Heis1} gives an equation with explicit solution \[ \text{$\phi=C\sqrt{e^{2q}+B}$, \qquad $C>0$, $B$ constants.} \] For simplicity we choose $C=1$ and $B=c_1^2$, $c_1>0$. Then for $q$ real valued, $\phi$ takes values in $(c_1,\infty)$, and \[ \phi'=\frac{e^{2q}}{(e^{2q}+c_1^2)^{1/2}}. \] We show in the next few subsections that $g$ is complete. Here we point out that $g$ is not Ricci flat. In fact, as $p_3$ is nonzero, the formula near the end of section \ref{sec:coho} shows that for Ricci flatness we must have $\alpha=0$, but in this solution $\alpha=\phi\ne 0$.
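That $\phi=\sqrt{e^{2q}+c_1^2}$ does solve \Ref{Heis1} with $\alpha=\phi$ is immediate, and can be confirmed with a two-line sympy check (ours, purely illustrative):

```python
import sympy as sp

q, c1 = sp.symbols('q c1', positive=True)

phi = sp.sqrt(sp.exp(2*q) + c1**2)
alpha = phi  # the first integral alpha/phi, chosen equal to 1

# equation (Heis1) with lambda = 0: (phi^2)''/(phi^2)' = 2*alpha/phi
lhs = sp.diff(phi**2, q, 2) / sp.diff(phi**2, q)
rhs = 2*alpha/phi

residual = sp.simplify(lhs - rhs)
phip = sp.simplify(sp.diff(phi, q))
```

Here both sides equal $2$, and `phip` reproduces the displayed formula for $\phi'$.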
\subsection{Setup}
As in \cite{a-m2}, in order to avail ourselves of the methods of \cite{vz}, we consider a cohomogeneity one action under the quotient $\mathcal{G}=\mathrm{nil_3}/\tilde{\mathbb{Z}}$ of the Heisenberg group by the infinite cyclic group lying in its center, and given by
\[ \tilde{\mathbb{Z}}:= \left\{\begin{bmatrix} 1 & 0 & 2\pi n\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} \Big|\ n\in\mathbb{Z} \right\}.\]
$\mathcal{G}$ has center $K$ isomorphic to $SO(2)$, whose transitive action on the circle $S^1$ extends to a linear action on $V:=\mathbb{R}^2$. We consider the homogeneous vector bundle $M=\mathcal{G}\times_K V$ (in which points of the product are identified according to $(g,v)\sim(gk^{-1},kv)$ for $k\in K$). $\mathcal{G}$ acts on $M$ by left multiplication on the first factor.
The action of $\mathcal{G}$ has trivial isotropy at points of a regular orbit, but isotropy $K$ at a point of the singular orbit $\mathcal{G}/K\approx \mathbb{R}^2$.
\subsection{Length of an escaping curve}
We now choose a left-invariant frame for $\mathcal{G}$ which is given in coordinates $x$, $y$, $z$ by $X_1=\partial_x$, $X_2=\partial_y+x\partial_z$, $X_3=-\partial_z$, to which we will add on $M$ the vector field $\partial_q$. Note that the domain of this coordinate system is open and dense in $M$, and $z$ is bounded due to the fact that we are considering a quotient.
The corresponding coframe consists of $dq$ and the left invariant coframe for the group given by $\sigma_3=xdy-dz$, $\sigma_1=dx$, $\sigma_2=dy$. Given a curve $\gamma(s):I\to M$ of finite length $L(\gamma)$, with coordinate presentation $(x(s),y(s),z(s),q(s))$, we have \begin{align*} \gamma'&=x'\partial_x+y'\partial_y+z'\partial_z+q'\partial_q\\ &=\sigma_1(\gamma')X_1+\sigma_2(\gamma')X_2+\sigma_3(\gamma')X_3+dq(\gamma')\partial_q \end{align*} so that \begin{align*}
g\Big(\gamma',\frac{X_1}{|X_1|}\Big)=x'\sqrt{\phi},\qquad g\Big(\gamma',\frac{X_2}{|X_2|}\Big)=y'\sqrt{\phi},\qquad
g\Big(\gamma',\frac{\partial_q}{|\partial_q|}\Big)=q'\sqrt{\phi'}. \end{align*} It follows that the length of $\gamma$ satisfies the Cauchy-Schwarz estimates \begin{align}
L(\gamma)=\int_I|\gamma'(s)|\,ds&\ge\inf_I(\sqrt{\phi(q)})\Big|\!\int_I x'\,ds\Big|,\lb{x-prime}\\
L(\gamma)&\ge\inf_I(\sqrt{\phi(q)}) \Big|\!\int_I y'\,ds\Big|,\lb{y-prime}\\
L(\gamma)&\ge\Big|\int_I\sqrt{\phi'(q)}q'\,ds\Big|.\lb{q-prime} \end{align} Now the right hand side of \Ref{q-prime} equals \[
\Big|\int_{q(I)}\sqrt{\phi'(q)}\,dq\Big|. \] If $q(I)$ has $q=\infty$ as an endpoint, this integral is infinite, so it follows that this cannot occur if $\gamma$ has finite length. On the other hand the infima in \Ref{x-prime}-\Ref{y-prime} are positive since this holds for any $q\in\mathbb{R}$. It then follows from these two equations that for a finite length curve, $x$ and $y$ are bounded. Thus such a curve can only leave every compact set in $M$ if a sequence of its $q$ values approach $-\infty$. To address this problem we have to attach a ``bolt" to $M$ at $q=-\infty$, that is, a singular orbit for the group action, and see that the metric and K\"ahler form extend smoothly to it.
\subsection{Attaching a bolt}
As $\phi'(q)(dq^2+\sigma_3^2)=\frac{(d\phi)^2}{\phi'(q)}+\phi'(q)\sigma_3^2$, and $\phi'=(\phi^2-c_1^2)/\phi$, the metric $g$ can be written in the form \begin{align*} g&=\frac{\phi}{\phi^2-c_1^2}d\phi^2 +\frac{\phi^2-c_1^2}{\phi}\sigma_3^2 +\phi(\sigma_1^2+\sigma_2^2) \end{align*} defined on the domain $\phi\in (c_1,\infty)$.
To apply the Verdiani-Ziller smoothness conditions \cite{vz} for a metric at a singular orbit, we write the metric near $c_1$ in the form $dr^2+h_r$, where $r=0$ corresponds to $\phi=c_1$. Note that converting equations \Ref{K1H}-\Ref{KKEH1} into this form amounts to dividing their right hand sides by $abc$. From this one can see that a solution can be extended smoothly if $a$, $b$, $\alpha$ are even in $r$ and $c$ is odd in $r$. This means that $\phi=a^2$ and $\phi'=c^2$ are even as functions of $r$.
Computing asymptotically near $c_1$, we have $dr=\sqrt{\frac{\phi}{\phi^2-c_1^2}}d\phi\approx \sqrt{\frac{c_1}{(\phi-c_1)2c_1}}d\phi$ so that $r\approx\sqrt{2(\phi-c_1)}$. Thus near $c_1$ \[ g\approx dr^2+\frac{r^4/4+r^2c_1}{r^2/2+c_1}\sigma_3^2+(r^2/2+c_1)(\sigma_1^2+\sigma_2^2). \]
We compare this with the smoothness conditions in \cite{vz}, which in our case, for $\mathfrak{m}=\mathrm{span}(X_1,X_2)$, $\mathfrak{p}=\mathrm{span}(X_3)$, are, near $r=0$, \begin{align*} &\text{$g(\mathfrak{m},\mathfrak{m})$ is even in $r$,}\\ &\text{$g(\mathfrak{p},\mathfrak{m})=r^2\psi(r^2)$,}\\ &\text{$g(X,X)=\bar a^2r^2+r^4\xi(r^2)$ for $X\in\mathfrak{p}$.} \end{align*}
Only the last condition is not automatic in our case, and in it, $\bar a$ denotes the cardinality of the intersection of the (trivial) stabilizer with $\{\exp(\theta X)\,|\,0\le\theta\le2\pi\}$, with $X$ normalized so that the latter set is a closed one-parameter subgroup. For the group $\mathcal{G}$ and $X=-X_3$ we have $\bar a=1$ and it is thus sufficient to check the form of $g(X,X)$ for this $X$. The coefficient of $\sigma_3^2$ is \[ \frac{r^4/4+r^2c_1}{r^2/2+c_1}=r^2-\frac{1}{4c_1}r^4+O(r^6). \] Thus the conditions for smoothness of the metric are verified.
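The Taylor expansion of the $\sigma_3^2$ coefficient used above can be reproduced with sympy (an illustrative check of ours, not part of the proof):

```python
import sympy as sp

r, c1 = sp.symbols('r c1', positive=True)

# coefficient of sigma_3^2 near the singular orbit, with phi = c1 + r^2/2
coeff = (r**4/4 + r**2*c1) / (r**2/2 + c1)
expansion = sp.series(coeff, r, 0, 6).removeO()
```

The result is $r^2-\tfrac{1}{4c_1}r^4$, in agreement with the displayed formula.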
The K\"ahler form similarly extends smoothly to the singular fiber. In fact, it is \begin{multline*} d\phi\wedge\sigma_3+ \phi(\sigma_1\wedge\sigma_2)= d\left[\left(\sqrt{\phi-c_1}\right)^2\right]\wedge\sigma_3+ \phi(\sigma_1\wedge\sigma_2)\\ \approx 2^{-1}d(r^2)\wedge\sigma_3+ (r^2/2+c_1)(\hat\xx\wedge\hat\yy), \end{multline*} whereas modifying the conditions in \cite{vz} so that they apply to a $2$-form, shows that in our case smoothness requires that near $r=0$ the coefficient of $dr\wedge\sigma_3$ has the form $r\psi(r^2)$ and the coefficient of $\sigma_1\wedge\sigma_2$ is even. The fact that $\phi$ is even and the above form conclude the proof. We thus showed \begin{thm}\lb{thm2} For every $c_1>0$ the metric \[ g=\frac{e^{2q}}{\sqrt{e^{2q}+c_1^2}}(dq^2+\sigma_3^2)+\sqrt{e^{2q}+c_1^2}(\sigma_1^2+\sigma_2^2),\qquad q\in\mathbb{R}, \] defined on the $\mathcal{G}\times_{SO(2)}\mathbb{R}^2$, with $\mathcal{G}\simeq\mathrm{nil}_3/\tilde{\mathbb{Z}}$, is complete and centrally flat. \end{thm} We note that it is not too difficult to classify all complete diagonal cohomogeneity one centrally flat metrics under the action of $\mathcal{G}$. We demonstrate how to carry this out in a more difficult case in the next section.
\section{Centrally flat metrics under the action of the compact group $SU(2)$}\lb{sec:SU2}
\subsection{The equations} We now consider the case where the action is by the compact group \( SU(2) \) for a metric with central curvature $\lambda=0$. With the choices \( p_{1} = p_{2} = p_{3} =1, \) the system \Ref{K1}-\Ref{KKE} and \Ref{alpha} becomes
\begin{eqnarray*}
a' & = & \frac{a}{2} (-a^{2} +b^{2} +c^{2} ) \\
b' & = & \frac{b}{2} (a^{2} -b^{2} +c^{2} ) \\
c' & = & \frac{c}{2} (a^{2} + b^{2} -c^{2} +2\alpha ) \\
\alpha ' & = & c^{2} \alpha . \end{eqnarray*} Note that we must take $\lambda=0$ to ensure that the right-hand side of the system is smooth, which allows us to employ the Center Manifold Theorem.
We observe immediately that this can be reduced to a system of three equations in three unknown functions. From the first and second equations, it follows that \[ \frac{d}{dt} (ab) = abc^{2} .\] Combining this with the fourth equation, we have \[ \frac{d}{dt} \log \frac{ab}{\alpha } = 0 ,\] and so there exists a constant $A$ such that \[ \log \frac{ab}{\alpha } = A, \] and so \( \alpha = e^{-A} (ab). \) With this, the fourth equation can be eliminated, and the third equation rewritten. The system becomes
\begin{align}
a' & = \frac{a}{2} (-a^{2} + b^{2} + c^{2} ) \nonumber \\
b' & = \frac{b}{2} (a^{2} -b^{2} + c^{2} ) \nonumber \\
c' & = \frac{c}{2} (a^{2} + b^{2} + 2e^{-A} ab -c^{2} ).\lb{SU2-ode} \end{align}
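The conservation law $\log(ab/\alpha)=\mathrm{const}$ underlying this reduction can be verified mechanically; the sympy sketch below (ours, purely illustrative) differentiates $\log(ab/\alpha)$ along the original four-equation system:

```python
import sympy as sp

t = sp.symbols('t')
a, b, c, al = [sp.Function(n)(t) for n in ('a', 'b', 'c', 'alpha')]

# right-hand sides of the four-equation system (p1 = p2 = p3 = 1, lambda = 0)
da = a/2*(-a**2 + b**2 + c**2)
db = b/2*(a**2 - b**2 + c**2)
dal = c**2*al

# d/dt log(ab/alpha) = a'/a + b'/b - alpha'/alpha
residual = sp.simplify(da/a + db/b - dal/al)
```

The residual vanishes identically, so $ab/\alpha$ is constant along solutions.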
By a uniqueness argument, if an analytic solution defined on a maximal interval has an initial value in the region \[ \mathcal{R} = \{ (a,b,c ) \in \mathbb{R} ^{3} \mid a, b, c >0 \} , \] then the trajectory will remain in \( \mathcal{R} \) for all values of $t$ for which the solution exists. From now on we only consider such solutions, for which the metric will be defined for values of $t$ in this interval.
\subsection{Linearization at equilibrium solutions and preliminary calculations}
Equilibrium solutions that lie in \( \overline{\mathcal{R} } \) are \( (q, q,0), \, (0,q, q), \) and \( (q, 0,q), \) for \( q\geq 0. \) The coefficient matrix of the linearized system at \( (q, q,0) \) has eigenvalues \( 0, -2q^{2} , q^{2} (1+e^{-A} ). \) At \( (0,q,q), \) the eigenvalues are \( q^{2} , 0, -2q^{2}, \) and at \( (q,0,q), \) the eigenvalues are \( 0, q^{2}, -2q^{2} . \) In all cases, if \( q>0 \) one of these eigenvalues is positive. The Center Manifold Theorem guarantees that near an equilibrium solution for which the linearized system has a simple positive eigenvalue, the system admits an unstable curve.
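The eigenvalue computation at, say, \( (q,q,0) \) can be reproduced with sympy (an illustrative check of ours, not part of the paper):

```python
import sympy as sp

a, b, c, q, A = sp.symbols('a b c q A', positive=True)

# right-hand side of the reduced system (SU2-ode)
F = sp.Matrix([
    a/2*(-a**2 + b**2 + c**2),
    b/2*(a**2 - b**2 + c**2),
    c/2*(a**2 + b**2 + 2*sp.exp(-A)*a*b - c**2),
])

# linearization at the equilibrium (q, q, 0)
J = F.jacobian([a, b, c]).subs({a: q, b: q, c: 0})
eigs = list(J.eigenvals())
```

The eigenvalues come out as \( 0, -2q^2, q^2(1+e^{-A}) \), as stated.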
Next, one verifies \begin{lemma} For any solution to \Ref{SU2-ode}, we have \begin{eqnarray*}
\frac{d}{dt} (ab) & = & (ab) c^{2} \\
\frac{d}{dt} (ac) & = & ac (b^{2} + e^{-A} ab) \\
\frac{d}{dt} (bc) & = & bc (a^{2} + e^{-A} ab) \\
\frac{d}{dt} \Big(\frac{a}{b} \Big) & = & \frac{a}{b} (-a^{2} +b^{2} ) \\
\frac{d}{dt} \Big(\frac{a}{c} \Big) & = & \frac{a}{c} (-a^{2} - e^{-A} ab +c^{2} ) \\
\frac{d}{dt} (a^{2} -b^{2} ) & = & (a^{2} -b^{2} ) (-a^{2} -b^{2} +c^{2} ) \\
\frac{d}{dt} (a^{2} -c^{2} ) & = & -(a^{2} -c^{2} ) (a^{2} -b^{2} +c^{2} ) - 2e^{-A} abc^{2} . \end{eqnarray*}
\end{lemma}
These are straightforward consequences of the equations of the system.
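Each identity in the lemma reduces to an algebraic cancellation; a few of them are verified in the sympy sketch below (ours, purely illustrative):

```python
import sympy as sp

a, b, c, A = sp.symbols('a b c A', positive=True)

# right-hand sides of the reduced system (SU2-ode)
da = a/2*(-a**2 + b**2 + c**2)
db = b/2*(a**2 - b**2 + c**2)
dc = c/2*(a**2 + b**2 + 2*sp.exp(-A)*a*b - c**2)

# three of the lemma's identities, written as (left side) - (right side)
r1 = sp.simplify(da*b + a*db - a*b*c**2)                     # d/dt(ab)
r2 = sp.simplify(da*c + a*dc - a*c*(b**2 + sp.exp(-A)*a*b))  # d/dt(ac)
r3 = sp.simplify((da*b - a*db)/b**2 - (a/b)*(-a**2 + b**2))  # d/dt(a/b)
```

All three residuals vanish; the remaining identities can be checked the same way.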
An immediate implication of these calculations is that in $\mathcal{R}$, the products \( ab, \, ac, \) and \( bc\) are increasing functions. It follows that at any value of $t,$ at most one of the three can be decreasing, and also that all three products have finite, non-negative limits as $t$ approaches the lower endpoint of the maximal interval on which a solution exists. As a further implication, from the equation for \( d/dt (a/b), \) we see by uniqueness that either $a$ is identically equal to $b$ or never equal to $b.$ Since the roles of $a$ and $b$ are interchangeable, we may therefore assume that if $a$ and $b$ do not coincide, it is $a$ that is greater. Finally, this makes $b$ a strictly increasing function.
Now choose an initial value in the region \( \mathcal{R} .\) Local existence theory provides for existence to the initial value problem on a non-empty interval; let \( (\xi , \eta ) \) be the maximal interval of existence for a given initial value. We investigate whether there are trajectories which correspond to complete metrics.
\subsection{A maximal solution interval bounded from below}
We will first show that any candidates for complete metrics correspond to trajectories for which \( \xi = -\infty .\) This explains the attention paid earlier to the unstable curves, for they are such trajectories. We then investigate whether any of these do in fact give rise to complete metrics.
\begin{prop} Trajectories for which \( \xi > -\infty \) correspond to incomplete metrics.
\end{prop}
\begin{proof} The limit of $b$ as \( t\rightarrow \xi \) is zero: If the limit of \( b \) were non-zero, then both $a$ and $c$ would also have limits, and then all three functions could be extended continuously to \( \xi \) itself, violating the maximality of the interval \( (\xi , \eta ). \) Therefore, \[ \lim _{t\rightarrow \xi } b(t) =0. \]
We observed above that at any particular point, at most one of the functions \( a, b, \) or $c$ can be decreasing. However, under the assumption that \( a\geq b,\) the derivative of $b$ is positive, so $b$ increases throughout the entire interval of existence. Moreover, it is not possible that all three increase on all of \( (\xi , \eta ), \) because if they did, then all three could be extended continuously to the lower endpoint $\xi ,$ contradicting the maximality of the interval \( (\xi ,\eta ). \) There remain therefore two possibilities to consider.
{\bf (i)} Suppose that at some point \( u\in (\xi ,\eta ) \) we have \( a'(u) <0. \) Then \( -a^{2} +b^{2} +c^{2} <0 \) at this point. Calculating the derivative of this quantity, we find that \[ \frac{d}{dt} (-a^{2} +b^{2} +c^{2} ) = a^{4} -(b^{2} -c^{2} )^{2} +2e^{-A} abc^{2} > 0. \] In other words, whenever \( -a^{2} +b^{2} +c^{2} \) is negative, the derivative of this quantity is positive. This implies that if $a$ decreases at any point $u,$ then it decreases on all of \( (\xi , u ). \) A calculation also shows that wherever \( a'(t) <0 \) we have \( a''(t) >0, \) and this will be used later.
Since at most one of the three functions can decrease at a point or on an interval, $c$ must be increasing on all of \( (\xi ,u). \) Therefore, \( \lim _{t\to \xi^- } c(t) \) exists. We already know that \( b\rightarrow 0 \) as \( t\searrow \xi .\) If \( a(t) \) approached a finite limit as $t$ approached \( \xi ,\) then all three functions could be continuously extended to \( \xi, \) contradicting the maximality of the interval of existence. Therefore, \[ \lim _{t\to \xi^- } a(t)= \infty . \] Since \( a\rightarrow \infty ,\) but \( ac \) approaches a finite limit, it must be that \( \lim _{t\rightarrow \xi } c(t) = 0. \) It follows, then, that as \( t\searrow \xi ,\) the system can be approximated by \begin{eqnarray*}
a' & = & \frac{a}{2} (-a^{2} ) \\
b' & = & \frac{b}{2} (a^{2} ) \\
c' & = & \frac{c}{2} (a^{2} +2e^{-A} B) , \end{eqnarray*} where \( B = \lim _{t\rightarrow \xi } ab ,\) a non-negative number. This system can be solved explicitly. Solving, we find that \begin{align*} a(t) &\simeq \frac{1}{\sqrt{t-\xi } } ,\\
b(t) &\simeq C\sqrt{t-\xi } ,\\
c(t) &\simeq D \sqrt{t-\xi } , \end{align*} for constants $C$ and $D$ whose values do not affect the completeness question. Regarding that question, we recall that the metric is of the form \[ g = (abc)^{2} dt^{2} + a^{2} \sigma _{1} ^{2} + b^{2} \sigma _{2} ^{2} + c^{2} \sigma _{3} ^{2} . \] Let \( \gamma (s) \) be a curve that is constant in the orbit direction, and with \( t(s) = s. \) Then \[ l(\gamma ) = \lim _{\varepsilon \rightarrow 0} \int _{\xi + \varepsilon } ^{t_{1} } (abc) (s) \ ds < \infty \] by direct calculation. Since the length of this curve is finite, the distance to the boundary at $t=\xi$ is also finite, and the metric is incomplete.
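In more detail, the integration is elementary (a sketch, with the integration constant normalized so that the blow-up occurs at \( \xi \)): from \( a' = -a^{3}/2 \) we get \( (a^{-2})' = -2a^{-3}a' = 1, \) so \( a^{-2} = t-\xi \) and \( a(t) = (t-\xi)^{-1/2}. \) Then \[ (\log b)' = \frac{a^{2}}{2} = \frac{1}{2(t-\xi)} \quad\Longrightarrow\quad b(t) = C\,(t-\xi)^{1/2}, \] and likewise for \( c, \) since the bounded term \( 2e^{-A}B \) does not affect the leading behavior. Hence \( (abc)(t) \simeq CD\,(t-\xi)^{1/2} \) and \( \int_{\xi}^{t_{1}} (abc)(s)\,ds < \infty, \) as claimed.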
{\bf (ii)} Suppose there is a point \( u\in (\xi , \eta ) \) at which \( c'(u) < 0. \) Then \( a^{2} + b^{2} +2e^{-A} ab -c^{2} < 0 \) at that point. As above, we calculate the derivative of this quantity: \[ \frac{d}{dt} (a^{2} + b^{2} +2e^{-A} ab -c^{2} ) = c^{4} - (a^{2} -b^{2} )^{2} . \] At $u,$ we have \[ c^{2} > a^{2} +b^{2} + 2e^{-A} ab > a^{2} +b^{2} , \] and therefore, \[ c^{4} > (a^{2} + b^{2} )^{2} \geq (a^{2} -b^{2} )^{2} . \] We therefore see that at $u$ the derivative of this quantity is positive, implying that \( c' (t) <0 \) on the entire interval \( (\xi ,u). \) We also find that \( c''(t) > 0 \) on this interval. It must be that \( c \rightarrow \infty \) and \( a, \, b \rightarrow 0 \) as \( t\rightarrow \xi .\) Permuting the roles of \( a, \, b, \) and $c$ in the approximated equations that appeared in Case (i), we again find that the metric is incomplete.
\end{proof}
\subsection{Maximal solution intervals of the form $(-\infty,\eta)$} We turn now to those trajectories for which \( \xi = -\infty .\) Analyzing the behavior of these solutions as \( t\rightarrow -\infty ,\) we find which equilibrium points these approach.
\begin{prop} A trajectory for which \( \xi = -\infty \) converges to an equilibrium solution of the form \( (q, q, 0), \) with \( q> 0, \) or \( ( q, 0, q) \) as \( t\rightarrow -\infty .\) \end{prop}
\begin{proof} Since $b$ is increasing, its limit as \( t\rightarrow -\infty \) exists, and so again the behavior of $b$ gives a convenient way to split into cases.
{\bf (1)} Suppose first that \( \lim _{t\rightarrow -\infty } b(t) >0. \) In this case, because $ab$ and $bc$ are also increasing functions and $b$ is bounded away from zero, $a$ and $c$ also have finite limits as \( t\rightarrow -\infty .\) Comparing to the list of possible equilibrium solutions, and remembering that we have assumed without loss of generality that \( a(t) \geq b(t), \) we see that \( (a(t), b(t), c(t) ) \rightarrow (q, q ,0), \) with \( q >0. \) This implies that \[ \lim _{t\rightarrow -\infty } \frac{a(t)}{b(t)} =1. \] From the earlier lemma, \[ \frac{d}{dt} \Big(\frac{a}{b} \Big) = \frac{a}{b} (-a^{2} +b^{2} ) .\] This is non-positive under the assumption that \( a \geq b, \) so \( a(t)/b(t) \geq 1 \) is either strictly decreasing or else identically equal to one. Since its limit as \( t\rightarrow -\infty \) is $1,$ it can only be that \( a(t) \equiv b(t) \) for all \( t\in (-\infty , \eta ). \)
{\bf (2)} Now suppose that \( \lim _{t\rightarrow -\infty } b(t) =0. \) There are several possibilities to consider.
{\bf (i)} Suppose there exists \( u \in (-\infty , \eta ) \) at which \( a'(u) <0. \) The earlier calculation showed that at any point or on any interval where \( -a^{2} +b^{2} +c^{2} <0, \) the derivative of this quantity is positive, and therefore \( -a^{2} +b^{2} +c^{2} \) remains negative on all of \( (-\infty , u). \) Then $a$ is decreasing on all of \( (-\infty ,u), \) and since at most one of the three functions can decrease on an interval, it follows that $b$ and $c$ are non-decreasing. Those same calculations also give us that \( a''(t) >0 \) on this interval and that \[ \lim _{t\rightarrow -\infty } a(t) = \infty .\] Since both \( \lim _{t\rightarrow -\infty } c(t) \) and \( \lim _{t\rightarrow -\infty } ac \) are finite numbers, it must be that \( \lim _{t\rightarrow -\infty } c(t) = 0. \) For large, negative values of $t$ therefore, the first equation can be approximated in the asymptotic sense by \[ a' = \frac{a}{2} (-a^{2} ) .\] By a direct calculation, $a$ diverges at a finite value in the interval of existence, which is a contradiction.
{\bf (ii)} The second possibility is that $c$ decreases at some point \( u\in (-\infty ,\eta ). \) As calculated previously, if \( c'(u) < 0, \) that is, if \( a^{2} + b^{2} +2e^{-A} ab -c^{2} < 0 \) at \( u\in (-\infty ,\eta ), \) then the derivative of this quantity is positive, so the quantity remains negative, and $c$ remains a decreasing function, on all of \( (-\infty ,u ). \) Also, \( c''(t) > 0 \) on this interval, and so \( \lim _{t\rightarrow -\infty } c(t) = \infty . \) Because the product $ac$ has a finite limit, we have \( \lim _{t\rightarrow -\infty } a(t) =0. \) The third equation can be approximated for negative values of $t$ of large magnitude by \[ c' = \frac{c}{2} (-c^{2} ) , \] implying that \( c(t) \rightarrow \infty \) at an interior point, a contradiction.
{\bf (iii)} If all three of \( a, \, b, \) and $c$ are increasing on all of \( (-\infty , \eta ), \) then all three have finite limits as \( t\rightarrow -\infty , \) so \( (a(t), \, b(t), c(t) ) \) converges to an equilibrium solution. Checking the list, the equilibrium solution is \( (q, 0, q ) ,\) with the possibility \( q= 0\) not excluded.
\end{proof}
\subsection{Equilibrium $\mathbf{(q,q,0)}$, $\mathbf{q>0}$} We now investigate the completeness of the metrics that correspond to trajectories converging to equilibrium solutions of the form \( (q, q, 0), \) with \( q>0. \)
So assume the initial value was chosen to lie on the trajectory of an unstable curve of \( (q, q ,0), \) with \( q >0. \)
\subsubsection{The endpoint $\xi=-\infty$}
For large, negative values of $t,$ both $a$ and $b$ can be approximated by \( q ,\) but for \( c(t) ,\) we examine the equation itself, in order to determine the rate at which \( c(t) \rightarrow 0.\) Since \( c^{2} \) is small compared to $c,$ the third equation can be approximated by \[ c' \simeq (1+e^{-A} ) q ^{2} c. \] For the moment, we will write \( \gamma = (1+e^{-A} ) q ^{2} , \) a positive constant. Solving the equation gives \[ c(t) \simeq k e^{\gamma t} , \] where $k$ is a further constant. The metric can then be approximated as \( t\rightarrow -\infty \) by \[ g = q ^{4} k^{2} e^{2\gamma t} dt^{2} + q ^{2} \sigma _{1} ^{2} + q ^{2} \sigma _{2} ^{2} + k^{2} e^{2\gamma t} \sigma _{3} ^{2} . \] Making the change of variables \[ y(t) = \frac{q ^{2} k}{\gamma } e^{\gamma t} ,\] as in \cite{d-s1}, the metric can be written as \[ g = dy^{2} + q ^{2} \sigma _{1} ^{2} + q^{2} \sigma _{2} ^{2} + \frac{\gamma ^{2} }{q^{4} } y^{2} \sigma _{3} ^{2} , \] or \[ g = dy^{2} +q^{2} \sigma _{1} ^{2} + q^{2} \sigma _{2} ^{2} + (1+e^{-A} )^2 y^{2} \sigma _{3} ^{2} , \] where \( t\rightarrow -\infty \) if and only if \( y\rightarrow 0. \) Thus the distance to the boundary corresponding to $\xi=-\infty$ is finite. However, in this case, under certain conditions the metric and K\"ahler form extend smoothly to a singular orbit (a bolt). We now show this.
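We first record the routine verification of this change of variables: \( dy = q^{2}k\,e^{\gamma t}\,dt, \) so \( dy^{2} = q^{4}k^{2}e^{2\gamma t}\,dt^{2}, \) which is exactly the \( dt^{2} \) coefficient above, while \( e^{\gamma t} = \gamma y/(q^{2}k) \) gives \( k^{2}e^{2\gamma t} = (\gamma^{2}/q^{4})\,y^{2} \) for the \( \sigma_{3}^{2} \) coefficient; finally \( \gamma/q^{2} = 1+e^{-A}, \) and the distance to the boundary is \( \int_{0}^{y_{1}} dy = y_{1} < \infty. \)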
\subsubsection{Attaching a singular orbit at $\xi=-\infty$ for equilibrium $(q,q,0)$}
Denoting $r=\int_{-\infty}^t a(s)b(s)c(s)\,ds$, the metric can be transformed to the form $g=dr^2+a^2\sigma_1^2+b^2\sigma_2^2+c^2\sigma_3^2$. Recall that the solutions we are examining satisfy $a=b$ with equilibrium $(q,q,0)$, $q\ne 0$. In that case the Lie algebra of Killing vector fields is one dimension higher, and the metric is preserved by a corresponding action of $U(2)$. The manifold then has the form $M=U(2)\times_{K}(\mathbb{R}^2\setminus\{ 0 \})$, with $K=U(1)\times U(1)$, and we wish to examine whether it is possible to attach smoothly a singular orbit (a so-called bolt) at $r=0$, where the finite distance end of the manifold resides. As the principal orbits are $3$-dimensional, the isotropy group is $H=U(1)$, and $K$ acts on $K/H\approx S^1$ non-effectively. However, one of its factors acts, of course, effectively, and since $H$ embeds as the other factor, for the purposes of examining smooth extendibility one can equally consider just the action of $SU(2)\subset U(2)$, which is still of cohomogeneity one, and regard the manifold as $SU(2)\times_{U(1)}(\mathbb{R}^2\setminus\{ 0 \})$. We will take this point of view in what follows. Note that $U(1)$ acts on $\mathbb{R}^2$ by a restriction of one of the representations of $SU(2)$.
With this change of variables, equations \eqref{SU2-ode} in that case take the form \begin{align*} \frac{da}{dr}&=\frac c{2a},\\
\frac{dc}{dr}&=1+e^{-A}-\frac{c^2}{2a^2}. \end{align*} We see from these equations that $a$ can be smoothly extended as an even function and $c$ as an odd one near $r=0$. Using the notations in \cite{vz}, we denote \[ \mathfrak{k}=\mathrm{span}(X_3),\qquad \mathfrak{m}=\mathrm{span}(X_1,X_2)=\ell_1,\qquad V=\mathrm{span}(\partial_r,X_3):=\ell_{-1}', \] where $X_i$, $i=1,\ldots,3$, denote as usual the vector fields dual to the $\sigma_i$.
Consider now a solution $(a,b,c)$ approaching as $r\to 0$ the equilibrium point $(q,q,0)$, with $a=b$. In $SU(2)$, with our choice of normalization of the Lie algebra basis, we have $\exp(4\pi X_3)=\mathrm{id}$; $2X_3$ generates a $U(1)$-action whose rotational isotropy action on the plane spanned by it and $d/dr$ is by $a_1\theta$, and on $\mathfrak{m}$ by $d_1\theta$, where the constants $a_1$ and $d_1$ are determined as follows. First, \begin{align*}
a_1&=\lim_{r\to 0}\frac{|2X_3|}r=\lim_{r\to 0}\frac {2c}{r}=\lim_{r\to 0}2\frac {dc}{dr}\\ &=2\lim_{r\to 0}\Big(1+e^{-A}-\frac{c^2}{2a^2}\Big)=2(1+e^{-A}-\frac 0{2q^2})=2(1+e^{-A}).\\[2pt] \end{align*} Since $c^2(r)$ is even with no constant term, following \cite{vz}, smooth extendibility requires first of all that $2(1+e^{-A})$ is an integer.
Second, as $[X_3,X_1]= X_2$ and $[X_3,X_2]=- X_1$, we have $d_1=1$. The remaining potentially nontrivial smoothness conditions for the metric in \cite{vz} are \begin{align*} a^2+b^2&=\phi_1(r^2),\\ a^2-b^2&=r^{\frac{2d_1}{a_1}}\phi_2(r^2), \end{align*} for some functions $\phi_1$, $\phi_2$. The first of these clearly holds as $a=b$ can be extended to an even function. The second is also obvious as $a=b$.
We turn to checking that the K\"ahler form extends across the singular orbit at $r=0$. For the isotropy action of $SO(2)$ on $T_pM$, we find that $\partial_r+\frac{i}{r}X_3$ is an eigenvector with eigenvalue $e^{ia_1\theta}$, and $X_1+iX_2$ is an eigenvector with eigenvalue $e^{id_1\theta}$, and likewise for their complex conjugates. Dualizing gives eigenspaces of $T^*_pM$: $dr-ir \sigma_3$ has eigenvalue $e^{ia_1\theta}$, and $\sigma_1-i\sigma_2$ has eigenvalue $e^{id_1\theta}$. Thus the eigenspaces of $\Lambda^2T^*_pM$ are \begin{align*} E_1 &= \mathrm{span}\{r dr\wedge \sigma_3, \sigma_1\wedge\sigma_2\}\\ E_{e^{i(a_1+d_1)\theta}}&=\mathrm{span}\{dr\wedge\sigma_1-r\sigma_3\wedge\sigma_2 +i(-dr\wedge\sigma_2-r\sigma_3\wedge\sigma_1)\}\\ E_{e^{i(a_1-d_1)\theta}}&=\mathrm{span}\{dr\wedge\sigma_1+r\sigma_3\wedge\sigma_2 +i(dr\wedge\sigma_2-r\sigma_3\wedge\sigma_1)\} \end{align*} The smoothness condition is the equivariance condition $\omega(e^{a_1\theta}p)=\exp(\theta X_3)^*\omega$. This requires that the coefficient of \begin{align} E_1 & \text{ is } \phi_1(r^2), \\
E_{e^{\pm i(a_1-d_1)\theta}}& \text{ is } r^{\frac{|a_1-d_1|}{a_1}}\phi_2(r^2),\\
E_{e^{\pm i(a_1+d_1)\theta}}& \text{ is } r^{\frac{|a_1+d_1|}{a_1}}\phi_3(r^2). \end{align} Now we have \begin{align*}\omega&=c\,dr\wedge\sigma_3+ab\sigma_1\wedge\sigma_2\\ &=\frac{c}{r}\cdot rdr\wedge\sigma_3+a^2\sigma_1\wedge \sigma_2. \end{align*} Thus our only nonzero coefficients are in $E_1$, so the smoothness conditions become \begin{align*} \frac cr &= \phi_1(r^2), \\ a^2 &= \phi_2(r^2). \end{align*} Both of these hold trivially from the even/odd extendibility of $a$, $c$. Thus in total, the metric and K\"ahler form extend smoothly across the singular orbit if and only if $2e^{-A}$ is an integer.
\subsubsection{The endpoint $\eta$} Next, we consider the question of completeness as \( t\rightarrow \eta \).
Recalling that in this case \( a \equiv b, \) the system reduces to \begin{align}
a' & = \frac{a}{2} c^{2}\nonumber \\
c' & = \frac{c}{2} (2(1+e^{-A} ) a^{2} -c^{2} ) .\lb{a=b-case} \end{align}
Following \cite{d-s1}, one can write the equations using the variables $w_1=bc$, $w_2=ac$, $w_3=ab$. In the case at hand $w_1=w_2$, and the above two equations are then equivalent to the system \begin{align}
w_{1} ' & = (1+e^{-A} ) w_{1} w_{3},\nonumber \\
w_{3} ' & = w_{1} ^{2} ,\lb{2d-sys} \end{align} with the metric given in terms of \( w_{1} \) and \( w_{3} \) as \[ g = w_{1} ^{2} w_{3} dt^{2} + w_{3} \sigma _{1} ^{2} + w_{3} \sigma _{2} ^{2} + \frac{w_{1} ^{2} }{w_{3} } \sigma _{3} ^{2} . \] A curve \( \gamma :[t_{0} ,\eta ) \rightarrow M \) which is constant in the orbit direction has length \[ l(\gamma ) = \int _{t_{0} } ^{\eta } w_{1} \sqrt{w_{3}} \ dt . \] In order to determine the behavior of \( w_{1} \) and \( w_{3} \) as \( t\rightarrow \eta , \) \ first eliminate \( w_{1} \) to reduce to an equation in \( w_{3} \) alone, which integrated once gives \begin{equation}\lb{w3} w_{3} ' (t) = (1+e^{-A} ) w_{3} ^{2} (t) + \delta , \end{equation} where \( \delta \) is a constant of integration.
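In detail, differentiating \( w_{3}' = w_{1}^{2} \) and substituting the first equation gives \[ w_{3}'' = 2w_{1}w_{1}' = 2(1+e^{-A})\,w_{1}^{2}\,w_{3} = (1+e^{-A})\,\big(w_{3}^{2}\big)', \] and one integration yields \Ref{w3}.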
\begin{lemma} $\eta $ is finite. \end{lemma}
\begin{proof} First observe that \( w_{1} ', \, w_{3} ', \, w_{3} '' >0. \) Next, we see that \( w_{1} \) and \( w_{3} \) become unbounded as \( t\rightarrow \eta ,\) whether \( \eta \) is finite or not. If \( \eta = \infty ,\) then since \( w_{3} ' \) and \( w_{3} '' \) are both positive, $w_{3} $ must become unbounded; since \( w_{3} ' = w_{1} ^{2} ,\) then \( w_{1} \rightarrow \infty \) also. If \( \eta < \infty ,\) then at least one of \( w_{1} , \, w_{3} \) becomes unbounded, because otherwise the maximality of the interval of existence would be contradicted. From the equations, if one becomes unbounded, so does the other.
A comparison argument now shows that in fact \( \eta < \infty .\) Since \( w_{3} \rightarrow \infty \) as \( t \rightarrow \eta , \) there exists an $M$ so that for all \( t\in (M, \eta ), \) \[ (1+e^{-A} )w_{3} ^{2} (t) + \delta > w_{3} ^{2} (t) .\] Choose \( t_{1} \in (M, \eta ), \) and consider the comparison equation \[ w' = w^{2} , \] with initial value \( (t_{1} ,w_{3} (t_{1} ) ). \) The actual solution \( w_{3} \) passes through the point \( (t_{1} , w_{3} (t_{1} ) ) \) and has a steeper derivative at every point, so \( w_{3} \) lies above the comparison function $w.$ The comparison function becomes unbounded at a finite value \( \eta _{c} ; \) its solution is \( w = 1/(\eta _{c} -t) .\) Therefore, \( w_{3} \) also diverges to $\infty$ at some \( \eta \leq \eta _{c} < \infty .\)
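Explicitly, writing \( w_{0} = w_{3}(t_{1}) > 0, \) the comparison solution is \[ w(t) = \frac{w_{0}}{1-w_{0}(t-t_{1})} = \frac{1}{\eta_{c}-t}, \qquad \eta_{c} = t_{1} + \frac{1}{w_{0}}, \] so the blow-up time \( \eta_{c} \) is indeed finite.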
\end{proof}
Since \( w_{3} \rightarrow \infty \) as \( t\rightarrow \eta ,\) a multiple of that same comparison function also serves in the asymptotic sense: \[ \lim _{t\rightarrow \eta } \frac{(1+e^{-A} )w_{3} ^{2} (t) +\delta }{(1+e^{-A} )w_{3} ^{2} (t) } = 1. \] The behavior of \( w_{3} \) can be discerned from that of $w,$ the solution to \( w' = (1+e^{-A} )w^{2} .\) This solution is a multiple of \( 1/(\eta -t) .\) Using the equation \( w_{3} ' = w_{1} ^{2} , \) we see that \[ \int _{t_{1} } ^{\eta } w_{1} \sqrt{w_{3} } \ dt =\infty, \] since the integrand is asymptotically a multiple of $(\eta - t)^{-3/2}$. Thus the metric is complete.
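In more detail: the solution of \( w' = (1+e^{-A})w^{2} \) blowing up at \( \eta \) is \( w(t) = \big[(1+e^{-A})(\eta-t)\big]^{-1}, \) so asymptotically \[ w_{1} = \sqrt{w_{3}'} \simeq \frac{(\eta-t)^{-1}}{\sqrt{1+e^{-A}}}, \qquad w_{1}\sqrt{w_{3}} \simeq \frac{(\eta-t)^{-3/2}}{1+e^{-A}}, \] and the length integral indeed diverges.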
\subsection{Equilibrium $\mathbf{(0,0,0)}$}
\subsubsection{Reduction to an explicit solution} The case of equilibrium $(0,0,0)$ contains many of the ideas already introduced, so we will be brief. First, in order to find the rates of convergence of $a$, $b$, $c$ as $t\to-\infty$, convert the system \Ref{SU2-ode} from the variable $t$ to the variable $r=\int_{-\infty}^tabc\,ds$. For solutions approaching this equilibrium, $a$, $b$, $c$ can be extended smoothly as odd functions of $r$. Writing odd power series expansions of $a$, $b$, $c$ in $r$ and solving for the first order coefficients $\hat{a}$, $\hat{b}$, $\hat{c}$ respectively, yields positive solutions $\hat{a}=\hat{b}=\sqrt{\gamma}/2$, $\hat{c}=\gamma/2$ for $\gamma:=1+e^{-A}$. Hence $a/b$ tends to $1$ as $r\searrow 0$ (or $t\to-\infty$), and as before we must have $a\equiv b$.
Thus in this case the ODE system again becomes \Ref{2d-sys} and equation \Ref{w3} also holds. Now since $w_3'=w_1^2$, $w_3$ is an increasing function converging to $0$ as $t\to-\infty$. Since $w_1=ac\to 0$, $w_3'$ also converges to $0$ as $t\to-\infty$, and thus $w_3'-\gamma w_3^2\to 0$, so that we have $\delta=0$. The ODE system thus simplifies, and its solution is just a case of the one used for comparison in the previous subsection, specifically $a(t)=\sqrt{\frac{1}{-\gamma t}}$, $c(t)=\sqrt{\frac{1}{-t}}$.
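As a sanity check (not part of the argument), one can verify numerically that this explicit pair solves the reduced system \Ref{a=b-case}. The following sketch uses the illustrative choice $A=0$, so that $\gamma=2$, and a finite-difference derivative.

```python
import math

# Hypothetical check, not from the paper: verify that
#   a(t) = (-gamma*t)**(-1/2),  c(t) = (-t)**(-1/2)
# solves the reduced system  a' = (a/2) c^2,  c' = (c/2)(2*gamma*a^2 - c^2),
# with the illustrative choice A = 0, so gamma = 1 + exp(-A) = 2.
gamma = 1.0 + math.exp(0.0)  # = 2

def a(t):
    return (-gamma * t) ** -0.5

def c(t):
    return (-t) ** -0.5

def deriv(f, t, h=1e-6):
    # central finite difference approximation of f'(t)
    return (f(t + h) - f(t - h)) / (2.0 * h)

def residuals(t):
    # residuals of the two ODEs at time t (should be ~0)
    ra = deriv(a, t) - 0.5 * a(t) * c(t) ** 2
    rc = deriv(c, t) - 0.5 * c(t) * (2.0 * gamma * a(t) ** 2 - c(t) ** 2)
    return ra, rc

for t in (-10.0, -1.0, -0.5):
    ra, rc = residuals(t)
    assert abs(ra) < 1e-5 and abs(rc) < 1e-5
```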
\subsubsection{No smooth extension to a singular orbit} The distance to the endpoint $t=-\infty$ in this explicit metric is finite. Hence one needs to investigate whether the metric and K\"ahler form can be extended smoothly to a singular orbit. The singular orbit in this case is just a point, i.e. a ``nut''. But in the $r$ coordinate one easily sees that $a(r)=\sqrt{\gamma}r/2$, $c(r)=\gamma r/2$, and since we cannot have both $a'(0)=1$ and $c'(0)=1$ simultaneously, the metric is not complete. This can alternatively be deduced from the statement of the smoothness condition in \cite{vz} for the case where the isotropy subgroup of the singular fiber is $Sp(1)$ and the generic isotropy subgroup is trivial.
\subsection{Equilibrium $\mathbf{(q,0,q)}$, $\mathbf{q>0}$}
Suppose now that the initial value is chosen to lie on an unstable curve of \( (q , 0, q ), \) with \( q > 0. \)
For large, negative values of $t,$ the functions $a$ and $c$ can be approximated by \( q ; \) to discover the rate of vanishing of $b,$ we examine the equation \[ b' = q ^{2} b. \] This is easily solved to obtain \( b(t) = k e^{q ^{2} t} .\) After a change of variables \( v(t) = k e^{q^{2} t} \) (the same as in \cite{d-s1}), the metric is approximated by \[ g = dv^{2} + q ^{2} \sigma _{1} ^{2} + v^{2} \sigma _{2} ^{2} + q ^{2} \sigma _{3} ^{2} , \] with \( t\rightarrow -\infty \) if and only if \( v\rightarrow 0. \) This metric is incomplete. However, in this case we cannot extend the metric smoothly to a singular orbit. Namely, upon switching to the coordinate $r$, so that the metric has the form $dr^2+h_r$, in the resulting ODE system $c$ can be extended only as an odd function near $r=0$ (whereas $a$ and $b$ can be extended either both as even, or both as odd functions). But an odd function cannot have a nonzero value $q$ at $r=0$. Therefore the corresponding metric is necessarily incomplete, and we do not pursue this case further.
Collecting these investigations, we summarize the findings.
\begin{thm}\lb{thm3} Let $(M,g)$ be a Riemannian $4$-manifold admitting a cohomogeneity one $SU(2)$-action by isometries. Then $g$ is a complete diagonal centrally flat K\"{a}hler metric precisely when it is of the form \Ref{bianchiAmet} with $(a, b, c)$ an unstable solution curve of the system \Ref{SU2-ode} for an equilibrium point $(q,q,0)$, $q>0$, defined on a maximal interval. Such metrics satisfy $a=b$, and $M$ contains a unique singular orbit. \end{thm}
With regard to the explicitness of these solutions, note that one could have proceeded with the system \Ref{a=b-case} by making the change of variables as in section \ref{sec:heis}. This would give a similar explicit solution, where this time \[ \phi(q)=\frac{e^k}{2\gamma}e^{2\gamma q}+B, \text{ with $\gamma=1+e^{-A}$ and constants $k$, $B$.} \] The case of positive $B$ corresponds to a solution converging to equilibrium $(q,q,0)$, $q>0$, whereas $B=0$ yields one converging to $(0,0,0)$.
\section{Centrally flat metrics under the Euclidean Group of plane motions}\lb{sec:Euc} In this section we describe a complete triaxial centrally flat metric with a cohomogeneity one action of the Euclidean group $E(2)$. The method employed is that of the recent \cite{mr}, which in turn was inspired by \cite{d-s1}.
We set $p_2=0$, $p_1=p_3=1$, and $\lambda=0$. Then the Lie algebra spanned by $X_1,X_2,X_3$ is the Lie algebra of the Euclidean group. The equations for zero central curvature are, from \Ref{K1}-\Ref{KKE} and \Ref{alpha} \begin{align} \label{E2ODEa}a'&=\frac{a}{2}(-a^2+c^2), \\ \label{E2ODEb}b'&=\frac{b}{2}(a^2+c^2), \\ \label{E2ODEc}c'&=\frac{c}{2}(a^2-c^2+2\alpha),\\ \lb{E2ODEd} \alpha'&=\alpha c^2. \end{align} These can be reduced to a system of three equations as in section \ref{sec:SU2}, but we will generally stick with the above version.
As in the case of $SU(2)$, the derivatives in this system are given by polynomials in the dependent variables, hence are locally Lipschitz, so that standard ODE theory applies. The symmetries of these equations include, as they are autonomous, constant shifts in $t$. Additionally, the equations possess a scaling symmetry \[ (a(t),b(t),c(t),\alpha(t))\to (ka(k^2t),b(k^2t),kc(k^2t),k^2\alpha(k^2t)), \] taking solutions to solutions.
\subsection{Linearization about Equilibria} The equilibrium solutions are $(q,0,q,0)$ and $(0,p,0,r)$, and we concentrate on the nonzero case. Then the system \Ref{E2ODEa}-\Ref{E2ODEd} has linearization about $(q,0,q,0)$ given by \begin{align*} a'&=-q^2a+q^2c, \\ b'&=q^2b, \\ c'&=q^2a-q^2c+q\alpha, \\ \alpha'&=q^2\alpha, \end{align*} which has one double positive, one negative and one zero eigenvalue for $q> 0$. The linearization about $(0,p,0,r)$ has three zero eigenvalues and one with the sign of $r$.
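The eigenvalue count at $(q,0,q,0)$ is a routine check: ordering the variables as $(b,\alpha,a,c)$, the linearized system is block triangular with diagonal blocks \[ (q^2),\qquad (q^2),\qquad \begin{pmatrix} -q^2 & q^2\\ q^2 & -q^2\end{pmatrix}, \] and the $2\times 2$ block has eigenvalues $0$ and $-2q^2$; hence the spectrum is $q^2$ (twice), $0$, and $-2q^2$, in accordance with the statement above.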
\begin{thm}\lb{thm4} A solution of \Ref{E2ODEa}-\Ref{E2ODEd} yields a complete centrally flat metric of the form \Ref{bianchiAmet} on a cohomogeneity one $E(2)$ $4$-manifold if it is a solution along an unstable curve of an equilibrium point $(q,0,q,0)$, $q>0$. \end{thm} \begin{proof} The proof is broken into three steps. As in the case of $SU(2)$, solutions whose maximal interval has a finite left endpoint do not yield complete metrics. See Proposition~\ref{prop:solutions}. Solutions with maximal interval $(-\infty,\eta)$ are the unstable curves of the equilibrium points $(q,0,q,0)$, and satisfy $0\le c^2-a^2 \le 2\alpha$. Once again, for a geodesic orthogonal to the orbits, $\eta$ is infinitely far away, while $t=-\infty$ is at a finite distance. See Proposition~\ref{prop:estimates}. At $t=-\infty$ the metric and K\"ahler form extend smoothly (Proposition~\ref{prop:bolt}). The proof that all finite length curves remain inside some compact set is as in \cite{mr}.
\end{proof}
We first record in a lemma some relations, easily verifiable via \Ref{E2ODEa}-\Ref{E2ODEc}, which will be used later in the proof.
\begin{lemma}\lb{prelim} For the system \Ref{E2ODEa}-\Ref{E2ODEd}, \begin{align*} (ab)'&=abc^2,\qquad (\alpha c)'=\frac{\alpha c (a^2+c^2+2\alpha)}2, \\ (bc)'&=bc\left(a^2+\alpha\right),\qquad \left(\frac{a}{b}\right)'=-\frac{a^3}{b},\\ (ac)'&=ac\alpha.
\end{align*}
\end{lemma}
\subsection{Solutions} \begin{prop}\label{prop:solutions} There are no complete metrics corresponding to solutions of \Ref{E2ODEa}-\Ref{E2ODEd} with maximal interval $(\xi,\eta)$, when $\xi$ is finite. Furthermore, the unstable curves of the equilibrium points $(q,0,q,0)$ are non-equilibrium solutions with maximal interval $(-\infty,\eta)$ which satisfy $0\le c^2-a^2\le 2\alpha$. \end{prop} \begin{proof} For an initial time $t_0$, let $(\xi,\eta)$ be a maximal solution interval for the initial value problem for \Ref{E2ODEa}-\Ref{E2ODEd} with $a(t_0)=a_0$, $b(t_0)=b_0$, $c(t_0)=c_0$ and $\alpha(t_0)=\alpha_0$.
Uniqueness of solutions to \Ref{E2ODEa}-\Ref{E2ODEd} implies that if any of $a$, $b$, $c$ or $\alpha$ are zero anywhere in $(\xi,\eta)$ then they are zero everywhere. Accordingly we assume that $a$, $b$, $c$ and $\alpha$ are all positive on $(\xi,\eta)$. Then we see from Lemma~\ref{prelim} and \Ref{E2ODEb} that $ab$, $bc$, $ac$, and $b$ are all increasing on $(\xi,\eta)$.
We consider the following cases: \subsubsection*{Case 1: $c_0^2-a_0^2<0$} We first make the following claim.\\[4pt] Claim: In this case $a\to\infty$ as $t\to\xi^+$.\\[3pt] \textit{Proof of claim:} Since \[ (c^2-a^2)'=-(c^2-a^2)(c^2+a^2)+2\alpha c^2, \] if $c^2-a^2<0$ then $(c^2-a^2)'>0$, thus $c^2-a^2<0$ for all $\xi<t<t_0$. Therefore, \begin{align*} a' &= \frac{a}{2}(c^2-a^2), \\ a'' &= \frac{a}{4}[(c^2-a^2)^2-2(c^2-a^2)(c^2+a^2)+4\alpha c^2], \end{align*} showing that $a$ is decreasing and concave up on $(\xi,t_0)$. Next, we always have $b'>0$, while on $(\xi,t_0)$ \[c'=\frac{c}{2}(a^2-c^2+2\alpha)>0, \] i.e. $c$ is increasing on $(\xi,t_0)$. Therefore $b$ and $c$ are bounded on $(\xi,t_0)$. Thus, as $(\xi,\eta)$ is the maximal solution interval, $a$ could be bounded as $t\to\xi^+$ only if $\xi=-\infty$. But since $a$ is concave up, $a\to\infty$ as $t\to\xi^+$ even when $\xi=-\infty$. \qed
Since $ab$ and $ac$ are increasing, they are bounded as $t\to\xi^+$ and $a\to\infty$, so $b\to0$, $c\to0$. Now $\alpha$ is also increasing, so $\alpha\to k$ for some constant $k$ as $t\to\xi^+$. Then as $t\to\xi^+$ the first three equations will take the asymptotic form \begin{align*} a'&=-\frac{1}{2}a^3 \\ b'&=\frac{1}{2}ba^2 \\ c'&=\frac{1}{2}c(a^2+2k) \\ \end{align*} the solution of which has asymptotic form \begin{align*} a&\simeq (t-\xi)^{-\frac{1}{2}},\\ b&\simeq b_1(t-\xi)^{\frac{1}{2}},\\ c&\simeq c_1(t-\xi)^{\frac{1}{2}},\\ \end{align*} for some constants $b_1$ and $c_1$. This shows that $\xi$ is finite in this case and \[ \int_{\xi}^{t_0} abc\,dt <\infty,\] so the metric is not complete.
\subsubsection*{Case 2: $c_0^2-a_0^2>2\alpha_0$} Here we have a similar claim.\\[4pt] Claim: In this case $c\to\infty$ as $t\to\xi^+$.\\[3pt] \textit{Proof of claim:} Analogous to the previous claim.\qed
Since $ac$, $bc$ and $\alpha c$ are increasing (see Lemma~\ref{prelim}), they are bounded as $t\to\xi^+$ and $c\to\infty$, so $a\to0$, $b\to0$ and $\alpha\to 0$. Then as $t\to\xi^+$ the equations take the asymptotic form \begin{align*} a'&=\frac{1}{2}ac^2 \\ b'&=\frac{1}{2}bc^2 \\ c'&=-\frac{1}{2}c^3 \\ \end{align*} which has solution \begin{align*} a&\simeq a_1(t-\xi)^{\frac{1}{2}}\\ b&\simeq b_1(t-\xi)^{\frac{1}{2}}\\ c&\simeq (t-\xi)^{-\frac{1}{2}}\\ \end{align*} for some constants $a_1$ and $b_1$. This shows that $\xi$ is finite in this case and \[ \int_{\xi}^{t_0} abc\,dt <\infty,\] so the metric is not complete.
If $c^2-a^2<0$ or $c^2-a^2>2\alpha$ at any time, then a constant shift in $t$ will give one of the previous cases. In both previous cases, $\xi$ is finite, but we know that the unstable curve of the equilibrium points $(q,0,q,0)$ must have $\xi=-\infty$. The existence of these curves is guaranteed by the center manifold theorem. Therefore we consider the final case:
\subsubsection*{Case 3: $0\le c^2-a^2 \le 2\alpha\textnormal{ for all }t\in(\xi,\eta)$} Here we have a different claim.\\[4pt] Claim: In this case $\xi=-\infty$. \\[3pt] \textit{Proof of claim:} In this case $a$, $b$, and $c$ are all increasing, and therefore all bounded on $(\xi,t_0)$. Since $(\xi,\eta)$ is the maximal solution interval, $\xi=-\infty$. \qed
As $a$, $b$, $c$ and $\alpha$ are all increasing, it must be that they all approach finite non-negative limits as $t\to-\infty$. Thus $(a,b,c,\alpha)$ must approach an equilibrium point. If $(a,b,c,\alpha)\to(0,p,0,r)$ with $p>0$, then $a/b\to 0$ as $t\to-\infty$, but $a/b$ is decreasing and positive (see Lemma~\ref{prelim}), so this cannot happen. On the other hand, if $r>0$ and $p=0$, note first that from \Ref{E2ODEa}-\Ref{E2ODEb} it easily follows that $\alpha=kab$ for some constant $k$ which is positive for a non-equilibrium solution. Then, as $a/\alpha$ approaches $0$ as $t\to-\infty$, so does $1/b$, but $1/b$ approaches $\infty$, which is a contradiction.
Therefore, when $t\to-\infty$ we see that $(a,b,c,\alpha)\to(q,0,q,0)$. \end{proof}
Note that we did not rule out the possibility that $q=0$. However, power series calculations show at least that there are no non-equilibrium trajectories approaching $(0,0,0,0)$ which are analytic, in an appropriate sense, at $t=-\infty$. From now on we will only consider the case $q>0$. The center manifold theorem guarantees that solutions exist and are defined over a maximal interval with left endpoint $-\infty$, while the above proof shows that $a$ and $c$, along with $b$ and $\alpha$ are non-decreasing on this interval.
We will need one more property of the solutions in Case 3. \begin{lemma}\lb{ab-unbd} In Case 3 above, $ab$ is unbounded from above. \end{lemma} \begin{proof} By \Ref{E2ODEc} and \Ref{E2ODEd} \[ c'=\frac c2(a^2-c^2+2\alpha)\le c\,\alpha=c\,\frac{\alpha'}{c^2}=\frac{\alpha'}{c}, \]
so $(c^2)'\le 2\alpha'$ or $c^2|_s^t\le 2\alpha|_s^t$. Taking $s\to-\infty$ gives \begin{equation}\lb{al-c} 2\alpha\ge c^2-q^2. \end{equation} Applying this to \Ref{E2ODEc} gives \[ (\log c^2)'\ge a^2-c^2+c^2-q^2=a^2-q^2. \] As one easily checks, since we are always assuming $\alpha$ is not identically zero, there is no non-equilibrium solution with $a=q$ identically. Thus for some $t_0$ and any $t>t_0$, $a(t)\ge a(t_0)>q$, so on that domain $(\log c^2)'>\epsilon>0$. Thus $c^2$ grows at least exponentially, and hence so does $2\alpha$ by \Ref{al-c}; since $\alpha=kab$ with $k>0$, so does $ab$. This proves the result if $\eta=\infty$. If $\eta$ is finite, then one of $a$, $b$, $c$, $\alpha$ is unbounded, and all are increasing; this proves the claim if $a$ or $b$ is unbounded. If it is $c$, then $\alpha$ is also unbounded by \Ref{al-c} again, and hence so is $ab=\alpha/k$. \end{proof}
\begin{prop}\label{prop:estimates} Let $g$ be a Riemannian metric of the form \Ref{bianchiAmet} on an $E(2)$-manifold $M$, with $a$, $b$, $c$ a solution to \Ref{E2ODEa}-\Ref{E2ODEd} along an unstable curve of an equilibrium point $(q,0,q,0)$, $q>0$, having maximal domain $I=(-\infty,\eta)$. Assume that the latter interval is also the range of the coordinate function $t$ on $M$. For a point $p_0\in M$ with orbit through $p_0$ of principal type and a level set $M^t$ of $t$, \[ \lim_{t\to -\infty}d_g(p_0,M^t)<\infty,\qquad \lim_{t\to \eta}d_g(p_0,M^t)=\infty, \] where $d_g$ is the distance function induced by $g$.
\end{prop} \begin{proof} As in \cite{mr} we note that
the level sets of $t$ are orbits of $\mathcal{G}$, and for $t_0=t(p_0)$, \[d_g(p_0,M^{t_1}) = d_g(M^{t_0},M^{t_1}) \] measures the distance in the quotient manifold $\tilde{M}/\mathcal{G}$, where
\begin{equation}\lb{dist-def} d_g(M^{t_0},M^{t_1}) = \left|\int_{t_0}^{t_1} abc\,dt\right|, \end{equation} and the induced metric on the quotient is $(abc)^2dt^2$.
We omit the proof that $\lim_{t\to -\infty}d_g(p_0,M^t) < \infty$ as it is identical to that in \cite{mr}, and also similar to the case of $SU(2)$.
To understand the behavior at the $\eta$ side of the solution interval, we adopt the change of variable $r=2(ab)^{1/2}$ first appearing in \cite{pp1} and \cite{d-s1}, which is allowable as $ab$ is strictly increasing (Lemma~\ref{prelim}). $r\to\infty$ as $t\to\eta$ since otherwise $ab$ is bounded, contradicting Lemma~\ref{ab-unbd}. Using Lemma~\ref{prelim}, the metric after this change takes the form \begin{equation}\lb{gWV} g=W^{-1}dr^2+\frac{r^2}4(V\sigma_1^2+V^{-1}\sigma_2^2+W\sigma_3^2) \end{equation} with $W=c^2/(ab)$ and $V=a/b$. Additionally, \begin{align*} \frac{dW}{dr}=W'/r'&=\Big(\frac{c^2}{ab}\Big)'(ab)^{-1/2}c^{-2}\\ &=-\frac 4rW+\frac 2r\frac ab+16\frac{\alpha}{r^3}. \end{align*} Now $a/b$ decreases to a finite nonnegative limit $L$ as $r\to\infty$, so that asymptotically \[ \frac{dW}{dr}= -\frac 4rW+\frac 2rL+16\frac{\alpha}{r^3}, \] an equation which, using the aforementioned relation $\alpha=kab=k\frac{r^2}4$, $k>0$ constant, has solution \[ W=L/2+k+\frac p{r^4} \] for an integration constant $p$.
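The last step can be checked directly: with $\alpha=kr^2/4$ the inhomogeneous term is $16\alpha/r^3=4k/r$, and one verifies that $W=L/2+k+p/r^4$ solves the resulting linear equation. A minimal numerical sketch (the constants $L$, $k$, $p$ below are arbitrary sample values, not taken from any particular solution):

```python
# Verify that W(r) = L/2 + k + p/r^4 satisfies
#     dW/dr = -4W/r + 2L/r + 4k/r,
# the asymptotic equation above after substituting 16*alpha/r^3 = 4k/r.
L, k, p = 0.7, 0.3, -2.5  # arbitrary sample constants

def W(r):
    return L / 2 + k + p / r**4

def rhs(r):
    return -4 * W(r) / r + 2 * L / r + 4 * k / r

def dW(r, h=1e-6):
    # central finite difference approximation of dW/dr
    return (W(r + h) - W(r - h)) / (2 * h)

for r in (1.0, 2.0, 5.0, 10.0):
    assert abs(dW(r) - rhs(r)) < 1e-5
```

Since $W\to L/2+k>0$ as $r\to\infty$, the coefficient $W^{-1/2}$ in the distance integrand stays bounded away from zero, which is what makes the distance in \Ref{eta-dist} diverge.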
The metric then has the asymptotic form \Ref{gWV} for $W$ as above and $V=L$. If $L=0$ this asymptotic form is degenerate, but nonetheless one can still use its $dr^2$ component to compute the distance to $M_r:=M_{t(r)}$. Thus the integral of $W^{-1/2}$ in this asymptotic form shows that \begin{equation}\lb{eta-dist} \lim_{r\to\infty}d_g(p_0,M_r)=\infty. \end{equation} This completes the proof. \end{proof}
\subsection{Smooth extension to a singular orbit}
For the case at hand, the cohomogeneity one $4$-manifold with one singular orbit attached can be described as \[ E(2)\times_{SO(2)}\mathbb{R}^2= (0,\infty)\times E(2)\ \amalg\ \{0\}\times \mathbb{R}^2 , \] where the right $SO(2)$-action is $(g, (T,x))\to (Tg,g^{-1}x)$. \begin{prop}\label{prop:bolt} The metric and K\"ahler form corresponding to solutions of \Ref{E2ODEa}-\Ref{E2ODEd} along the unstable curves of the equilibrium points $(q,0,q,0)$, $q>0$, defined on $(-\infty, \eta)$, can be smoothly extended to $M=E(2)\times_{SO(2)}\mathbb{R}^2$, with the two-dimensional singular orbit $E(2)/SO(2)$ defined over $t=-\infty$. \end{prop} \begin{proof}
For any $E(2)$ invariant metric $g$ on $M$, with $r$ the distance along a geodesic perpendicular to the singular orbit, \[ g = dr^2+g_r. \] For a metric $g$ of the form \Ref{bianchiAmet}, as usual, let $r=\int_{-\infty}^t a(s)b(s)c(s)\,ds$, then \[ g = dr^2+a^2\sigma_1^2+b^2\sigma_2^2+c^2\sigma_3^2. \] The ODEs \Ref{E2ODEa}-\Ref{E2ODEd} in this coordinate become \begin{align} \label{rODEa}\frac{da}{dr}&=\frac{a}{2}\left(-\frac{a}{bc}+\frac{c}{ab}\right),\\ \label{rODEb}\frac{db}{dr}&=\frac{1}{2}\left(\frac{a}{c}+\frac{c}{a}\right),\\ \label{rODEc}\frac{dc}{dr}&=\frac{c}{2}\left(\frac{a}{bc}-\frac{c}{ab}+\frac{2\alpha}{abc}\right),\\ \lb{rODEd}\frac{d\alpha}{dr}&=\alpha\frac c{ab}. \end{align} From these it is seen that $a$, $b$, $c$ and $\alpha$ can be extended at $r=0$ so that $a$, $c$ and $\alpha$ are even and $b$ is odd, as functions of $r$. Following the notations of Verdiani and Ziller \cite{vz}, the tangent space for $r\neq 0$ splits as \[ T_p M = \mathbb{R}\partial_r\oplus \mathfrak{k}\oplus\mathfrak{m}, \] where \[\mathfrak{k}=\mathrm{span}\{X_2\}, \] \[ \mathfrak{m}=\mathrm{span}\{X_1,X_3\}=:\ell_1, \] and we set \[ V=\mathrm{span}\{\partial_r,X_2\}=:\ell_{-1}'. \] Now $\exp(\theta X_2)$ acts on both $V$ and $\mathfrak{m}$ as a rotation by $\theta$, so the weights are $a_1=d_1=1$. The smoothness condition for $V$ is that $b$ can be extended to an odd function and $b'(0)=1$. Since we know that $b$ extends to an odd function, it remains to check from \Ref{rODEb} that
\[ \left.\frac{db}{dr}\right|_{r=0}=\frac{1}{2}\left(\frac{q}{q}+\frac{q}{q}\right)=1.\] Since $\ell'_{-1}$ and $\ell_1$ are perpendicular, the smoothness conditions in table C of \cite{vz} are automatically satisfied, while those in table B there, are \begin{align}\label{smooth1}a^2+c^2&=\phi_1(r^2),\\ \label{smooth2}a^2-c^2&=r^2\phi_2(r^2), \end{align} for some smooth functions $\phi_1$ and $\phi_2$. Now to see that \Ref{smooth1} is satisfied, note that \[ a^2+c^2=2ac\frac{db}{dr}.\] Since $a$, $c$, and $\frac{db}{dr}$ are even, it just remains to check \Ref{smooth2}. Solving the equations in their power series expansions in $r$ gives $a=q+o(r^4)$, $c=q+o(r^4)$, so that $a^2-c^2$ has the required form.
Thus $g$ extends to a smooth metric on $M$.
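As a sanity check on the coordinate form \Ref{rODEa}-\Ref{rODEd} of the equations, note that $a\equiv c\equiv q$, $b=r$, $\alpha\equiv 0$ is an exact solution (degenerate, since $\alpha$ is identically zero and hence excluded above, but useful for testing): it reproduces the leading behavior $a,c\to q$, $b'(0)=1$ at the singular orbit. A short numerical sketch with a plain RK4 integrator ($q=1$ and the grid are arbitrary test choices):

```python
# RK4 integration of the system da/dr, db/dr, dc/dr, dalpha/dr above.
# Along the exact degenerate solution a = c = q, b = r, alpha = 0 the
# right-hand sides reduce to (0, 1, 0, 0), so the integrated values
# should reproduce it.  q = 1 and the grid are arbitrary test choices.

def f(r, y):
    a, b, c, al = y
    da = (a / 2) * (-a / (b * c) + c / (a * b))
    db = 0.5 * (a / c + c / a)
    dc = (c / 2) * (a / (b * c) - c / (a * b) + 2 * al / (a * b * c))
    dal = al * c / (a * b)
    return (da, db, dc, dal)

def rk4(y, r0, r1, n):
    h = (r1 - r0) / n
    r = r0
    for _ in range(n):
        k1 = f(r, y)
        k2 = f(r + h / 2, [y[i] + h / 2 * k1[i] for i in range(4)])
        k3 = f(r + h / 2, [y[i] + h / 2 * k2[i] for i in range(4)])
        k4 = f(r + h, [y[i] + h * k3[i] for i in range(4)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(4)]
        r += h
    return y

a, b, c, al = rk4([1.0, 0.1, 1.0, 0.0], 0.1, 1.0, 100)
assert abs(a - 1.0) < 1e-9 and abs(c - 1.0) < 1e-9
assert abs(b - 1.0) < 1e-9 and abs(al) < 1e-12
```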
The derivation that the K\"ahler form also extends smoothly proceeds as in \cite{mr}, so we just state the resulting smoothness conditions:
\begin{align*} cr+ab &= r\phi_2(r^2), \\ cr-ab &= r^3\phi_3(r^2), \end{align*} for some smooth functions $\phi_2$ and $\phi_3$. The first of these is clear from the oddness/evenness properties of $a$, $b$, $c$. The above Taylor series expansions of $a$, $c$, in addition to the one for $b$, namely $b=r+o(r^5)$, easily show that $cr-ab$ has the required form.
\end{proof}
\subsection{Completeness} \begin{prop}\label{prop:complete} For the metrics of Proposition~\ref{prop:bolt}, all finite length curves remain inside some compact set. \end{prop} The proof here is identical to that in \cite{mr}, and will thus be omitted. This completes the proof of Theorem \ref{thm4}.
\section{Outline of the derivation of the ODE and PDE systems} \subsection{Generalized PDEs} Suppose one is given a $4$-manifold with a frame $\kk$, $\tT$, $\xx$, $\yy$ satisfying the Lie bracket relations \Ref{brack1}-\Ref{brack3} for functions $A$, $B$, $C$, $D$, $E$, $F$, $G$, $H$, $L$, $N$ on the frame domain. The dual coframe $\hat\kk$, $\hat\tT$, $\hat\xx$, $\hat\yy$ then satisfies \begin{align} d\kf&=-N\xf\wedge\yf-L\kf\wedge\tf,\nonumber\\ d\tf&=-N\xf\wedge\yf-L\kf\wedge\tf,\nonumber\\ d\xf&=-A\kf\wedge\xf-C\kf\wedge\yf-E\tf\wedge\xf-G\tf\wedge\yf,\nonumber\\ d\yf&=-B\kf\wedge\xf-D\kf\wedge\yf-F\tf\wedge\xf-H\tf\wedge\yf.\lb{d-frame} \end{align} The vanishing of $d^2$ on the coframe $1$-forms gives four equations, two of which are identical. Writing, for example, $dN=d_\kk N\kf+d_\tT N\tf+d_\xx N\xf+d_\yy N\yf$ etc. and separating components yields $12$ scalar equations \begin{align} d_\xv L&=0,\qquad d_\yv L=0,\nonumber\\ d_\yv A&=d_\xv C,\qquad d_\yv B=d_\xv D,\qquad d_\yv E=d_\xv G,\qquad d_\yv F=d_\xv H,\nonumber\\ d_\tv N&=NE+NH+LN,\qquad d_\kv N=NA+ND-LN,\lb{nl}\\ d_\tv A&=d_\kv E-AL+CF-EL-GB,\lb{dta} \\ d_\tv B&=d_\kv F-BL+BE+DF-FL-FA-HB,\lb{dtb} \\ d_\tv C&=d_\kv G+AG-CL+CH-EC-GL-GD,\lb{dtc} \\ d_\tv D&=d_\kv H+BG-DL-FC-HL.\lb{dtd} \end{align} Adding and subtracting the two equations \Ref{nl}, the two equations \Ref{dta} and \Ref{dtd} and the two equations \Ref{dtb}-\Ref{dtc}, while using relations \Ref{rels1}-\Ref{rels2}, yields six equations of which only five are independent. 
The resulting equivalent system is \begin{align} &d_\xv L=0,\qquad d_\yv L=0,\lb{ll}\\ &d_\yv A=d_\xv C,\qquad d_\yv B=d_\xv D,\qquad d_\yv E=d_\xv G,\qquad d_\yv F=d_\xv H,\lb{foursome}\\ &d_{\kv+\tv} N=0,\qquad d_{\kv-\tv} N=2N^2-2LN,\lb{nll} \\ &d_\tv (F+G)=-d_\kv (B+C)-(F+G)L+(B+C)L-2(F+G)B+2(B+C)F,\lb{long1}\\ &d_\kv (F+G)=d_\tv (B+C)+(B+C)L+(F+G)L+F^2-G^2+B^2-C^2,\lb{long2}\\ &d_\tv (B-C)=d_\kv (F-G)-(B-C)L-(F-G)L-(B+C)^2-(F+G)^2.\lb{long3} \end{align} Assume now that $M$ admits a K\"ahler metric making our frame orthonormal, which is additionally central. Then, in addition to the above system, we have equation \Ref{cent}, which we now reproduce:
\begin{align} &-N(2L+C-H+A-F)[-L(2L+C-H+A-F)\nonumber\\ &+d_{\kk-\tT}L-d_\tT(C-H)+d_\kk(A-F)]\nonumber\\ &-d_\xx(L+C-H)d_\yy(L+A-F)+d_\yy(L+C-H)d_\xx(L+A-F)=\lambda.\lb{cent1} \end{align} Equations \Ref{ll}-\Ref{cent1} constitute our system in the general case. With the help of \Ref{ll}, equation \Ref{cent1} can be simplified a little to the form \begin{align} &-N(2L+C-H+A-F)[-L(2L+C-H+A-F)\nonumber\\ &+d_{\kk-\tT}L-d_\tT(C-H)+d_\kk(A-F)]\nonumber\\ &-d_\xx(C-H)d_\yy(A-F)+d_\yy(C-H)d_\xx(A-F)=\lambda.\lb{cent2}\end{align}
\subsection{The equations in new variables} Recall our functions $L$, $N$ along with the four given in \Ref{chan-var} reproduced here. \begin{align} P&=(B-C)+(F-G), &&Q=(B-C)-(F-G),\nonumber \\ R&=\sqrt{(B+C)^2+(F+G)^2}, &&S=\tan^{-1}\left(\frac{B+C}{F+G}\right),\lb{chan-var1}
\end{align} where $S$ is only defined on the set $\{F+G\ne 0\}$.
In terms of these, we have the inverse transformation \begin{align} B&=[(P+Q)+2R\sin S]/4,\qquad C=[-(P+Q)+2R\sin S]/4,\nonumber\\ F&=[(P-Q)+2R\cos S]/4,\qquad G=[-(P-Q)+2R\cos S]/4.\lb{change} \end{align}
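The inverse formulas \Ref{change} can be verified by substituting them back into \Ref{chan-var1}: one finds $B-C=(P+Q)/2$, $F-G=(P-Q)/2$, $B+C=R\sin S$ and $F+G=R\cos S$, so $P$, $Q$, $R$, $S$ are recovered whenever $R>0$ and $S$ lies in the branch $(-\pi/2,\pi/2)$ of $\tan^{-1}$. A quick numerical sketch with arbitrary sample values:

```python
import math

# Sample values (arbitrary), with R > 0 and S in (-pi/2, pi/2).
P, Q, R, S = 1.3, -0.4, 2.0, 0.5

# Inverse transformation (the change of variables above).
B = ((P + Q) + 2 * R * math.sin(S)) / 4
C = (-(P + Q) + 2 * R * math.sin(S)) / 4
F = ((P - Q) + 2 * R * math.cos(S)) / 4
G = (-(P - Q) + 2 * R * math.cos(S)) / 4

# Forward transformation: should recover P, Q, R, S.
assert abs(((B - C) + (F - G)) - P) < 1e-12
assert abs(((B - C) - (F - G)) - Q) < 1e-12
assert abs(math.sqrt((B + C)**2 + (F + G)**2) - R) < 1e-12
assert abs(math.atan((B + C) / (F + G)) - S) < 1e-12
```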
We can write the system \Ref{ll}-\Ref{long3}, \Ref{cent2} in these variables as follows \begin{align} &d_\xv L=0,\qquad d_\yv L=0,\lb{ll1}\\ &d_\yy N+d_\yy(R\cos S)=d_\xx(R\sin S)-d_\xx(P+Q)/2\\ &d_\xx N-d_\xx(R\cos S)=d_\yy(R\sin S)+d_\yy(P+Q)/2\\ &-d_\yy N-d_\yy(R\sin S)=d_\xx(R\cos S)-d_\xx(P-Q)/2\\ &-d_\xx N+d_\xx(R\sin S)=d_\yy(R\cos S)+d_\yy(P-Q)/2\\ &d_{\kv+\tv} N=0,\qquad d_{\kv-\tv} N=2N^2-2LN,\lb{nll1} \\ &d_\tv (R\cos S)=-d_\kv (R\sin S)-RL(\cos S-\sin S)\nonumber\\ &+\frac{1}{2}(P-Q)R\sin S-\frac{1}{2}(P+Q)R\cos S, \lb{trig11}\\ &d_\kv (R\cos S)=d_\tv (R\sin S)+RL(\sin S+\cos S)\nonumber\\ &+\frac{1}{2}(P-Q)R\cos S+\frac{1}{2}(P+Q)R\sin S, \lb{trig21}\\ &\frac{1}{2}d_\tv (P+Q) = \frac{1}{2}d_\kv (P-Q)-PL-R^2,\lb{longy11}\\ &-N(2L+N-P/2)[-L(2L+N-P/2)+d_{\kk-\tT}L-(d_\tT N/2-d_\tT(P+Q)/4)\nonumber\\ &+(d_\kk N/2-d_\kk(P-Q)/4)]-(d_\yy N/2-d_\yy(P-Q)/4)(d_\xx N/2-d_\xx(P+Q)/4)\nonumber\\ &+(d_\yy N/2-d_\yy(P+Q)/4)(d_\xx N/2-d_\xx(P-Q)/4)=\lambda.\lb{cent3} \end{align} The verification is as in \cite{mr}, except that for \Ref{cent3} we used \begin{equation}\lb{intermediate1} A-F=(N-F+G)/2,\qquad C-H=(N-B+C)/2, \end{equation} which follows from \Ref{rels1}-\Ref{rels2}.
Of these equations, \Ref{trig11}-\Ref{trig21} can be simplified as in \cite{mr} to \begin{align}\lb{R-S1} d_\tv R &= -R d_\kv S - RL - \frac{1}{2}(P+Q)R,\qquad d_\kv R = R d_\tv S + RL + \frac{1}{2}(P-Q)R. \end{align} Additionally, \Ref{cent3} can be rewritten as \begin{align} &-N(2L+N-P/2)[-L(2L+N-P/2)+d_{\kk-\tT}L+d_{\kk-\tT} N/2-d_{\kk-\tT}P/4+d_{\kk+\tT}Q/4]\nonumber\\ &+d_\yy N d_\xx Q/4-d_\xx N d_\yy Q/4+d_\yy Q\, d_\xx P/8-d_\yy P\, d_\xx Q/8 =\lambda.\lb{cent4} \end{align}
At this point our derivation splits into cases.
\subsection{The case where all functions depend on $\tau$} Recall that there exists a local function $\tau$ such that $\n\tau=\kk-\tT$. Since \[ d_\xx\tau=0,\qquad d_\yy\tau=0,\qquad d_{\kk+\tT}\tau=0, \] it follows that if $A,\ldots,H,L,N$ are locally compositions of functions of $\tau$, the equations \Ref{ll}-\Ref{long3}, \Ref{cent2} simplify to \Ref{nll}-\Ref{long3} without the first equation in \Ref{nll}, together with \begin{align} &-N(2L+C-H+A-F)[-L(2L+C-H+A-F)\nonumber\\ &+d_{\kk-\tT}L-d_\tT(C-H)+d_\kk(A-F)]=\lambda.\lb{cent6} \end{align} In terms of the variables \Ref{chan-var1} this system takes the form \begin{align*} &d_{\kv-\tv} N=2N^2-2LN,\\ &d_{\kk-\tT}R=R(P+2L),\qquad 0=-R(d_{\kk-\tT}S+Q),\\ &d_{\kk-\tT}P=2LP+2R^2,\\ &-N(2L+N-P/2)[-L(2L+N-P/2)+d_{\kk-\tT}(L+N/2-P/4)]=\lambda, \end{align*} where we have used \Ref{R-S1} as well as $d_{\kk}\tau=1$, $d_{\tT}\tau=-1$. Alternatively, with a prime denoting differentiation with respect to $\tau$, since $d_{\kk-\tT}\tau=2$, we can write the system as \begin{align} &N'=N^2-LN,\nonumber \\ &R'=R(P/2+L),\qquad 0=-R(2S'+Q),\nonumber\\ &P'=LP+R^2,\nonumber\\ &-N(2L+N-P/2)[-L(2L+N-P/2)+(2L'+N'-P'/2)]=\lambda.\lb{cent-onevar} \end{align}
\subsection{The case $N=0$}
When $N=0$, the only equations that become trivial are \Ref{nll1}, but some equations simplify. We only write the resulting system in the variables \Ref{chan-var1}. We have \begin{align} &d_\xv L=0,\qquad d_\yv L=0,\\ &d_\yy(R\cos S)=d_\xx(R\sin S)-d_\xx(P+Q)/2,\\ &-d_\xx(R\cos S)=d_\yy(R\sin S)+d_\yy(P+Q)/2,\\ &-d_\yy(R\sin S)=d_\xx(R\cos S)-d_\xx(P-Q)/2,\\ &d_\xx(R\sin S)=d_\yy(R\cos S)+d_\yy(P-Q)/2,\\ &d_\yy Q\, d_\xx P/8-d_\yy P\, d_\xx Q/8=\lambda,\lb{cent7}\\ &d_{\kk-\tT}R=R(d_{\kk+\tT}S+P+2L),\lb{Skpt}\\ &d_{\kk+\tT}R=-R(d_{\kk-\tT}S+Q),\lb{Skmt}\\ &d_{\kk-\tT}P-d_{\kk+\tT}Q=2LP+2R^2, \end{align} where we have written the central curvature equation in \Ref{cent7}.
We see that the system decouples, in the sense that the first six equations involve only $d_\xx$, $d_\yy$ derivatives, while the last three involve only $d_{\kk\pm\tT}$. For these last three, recall from \cite{mr} that a rotation in the planes spanned by $\xx$, $\yy$ allows us to dispense with $d_{\kk\pm\tT}S$ in \Ref{Skpt}-\Ref{Skmt}, simplifying the equations further.
\end{document} |
\begin{document}
\title{An Integer Programming Approach to the Student-Project Allocation Problem with Preferences over Projects}
\author{ David Manlove\thanks{Supported by grant EP/P028306/1 from the Engineering and Physical Sciences Research Council. Orcid ID: 0000-0001-6754-7308.} \and Duncan Milne \and Sofiat Olaosebikan\thanks{Supported by a College of Science and Engineering Scholarship, University of Glasgow. Orcid ID: 0000-0002-8003-7887.} }
\institute{School of Computing Science, University of Glasgow, \\e-mail: \texttt{David.Manlove@glasgow.ac.uk, Duncan.Milne1@gmail.com, s.olaosebikan.1@research.gla.ac.uk} } \maketitle
\begin{abstract} The Student-Project Allocation problem with preferences over Projects ({\scriptsize SPA-P}) involves sets of students, projects and lecturers, where the students and lecturers each have preferences over the projects. In this context, we typically seek a stable matching of students to projects (and lecturers). However, these stable matchings can have different sizes, and the problem of finding a maximum stable matching ({\scriptsize MAX-SPA-P}) is NP-hard. There are two known approximation algorithms for {\scriptsize MAX-SPA-P}, with performance guarantees of $2$ and $\frac{3}{2}$. In this paper, we describe an Integer Programming (IP) model to enable {\scriptsize MAX-SPA-P} to be solved optimally. Following this, we present results arising from an empirical analysis that investigates how the solution produced by the approximation algorithms compares to the optimal solution obtained from the IP model, with respect to the size of the stable matchings constructed, on instances that are both randomly-generated and derived from real datasets. Our main finding is that the $\frac{3}{2}$-approximation algorithm finds stable matchings that are very close to having maximum cardinality. \keywords{Integer Programming model \and Student-Project Allocation problem \and Blocking pair \and Stable matching \and Maximum cardinality matching \and Empirical analysis} \end{abstract}
\thispagestyle{empty} \setcounter{page}{1} \pagestyle{headings} \section{Introduction} Matching problems, which generally involve the assignment of a set of agents to another set of agents based on preferences, have wide applications in many real-world settings. One such application can be seen in an educational context, e.g., the allocation of pupils to schools, school-leavers to universities and students to projects. In the context of allocating students to projects, university lecturers propose a range of projects, and each student is required to provide a preference over the available projects that she finds acceptable. Lecturers may also have preferences over the students that find their project acceptable and/or the projects that they offer. There may also be upper bounds on the number of students that can be assigned to a particular project, and the number of students that a given lecturer is willing to supervise. The problem then is to allocate students to projects based on these preferences and capacity constraints -- the so-called \textit{Student-Project Allocation problem} ({\scriptsize SPA}) \cite{Man13,RGSA17}.
Two major models of {\scriptsize SPA} exist in the literature: one permits preferences only from the students \cite{AB03,HSVS05,KIMS15}, while the other permits preferences from the students and lecturers \cite{AIM07,Kaz02}. Given the large number of students that are typically involved in such an allocation process, many university departments seek to automate the allocation of students to projects. Examples include the School of Computing Science, University of Glasgow \cite{KIMS15}, the Faculty of Science, University of Southern Denmark \cite{CFG17}, the Department of Computing Science, University of York \cite{Kaz02}, and elsewhere \cite{AB03,RGSA17,HSVS05}.
In general, we seek a \textit{matching}, which is a set of agent pairs who find one another acceptable that satisfies the capacities of the agents involved. For matching problems where preferences exist from the two sets of agents involved (e.g., junior doctors and hospitals in the classical \textit{Hospitals-Residents problem} ({\scriptsize HR}) \cite{GS62}, or students and lecturers in the context of {\scriptsize SPA}), it has been argued that the desired property for a matching one should seek is that of \textit{stability} \cite{Rot84}. Informally, a \textit{stable matching} ensures that no acceptable pair of agents who are not matched together would rather be assigned to each other than remain with their current assignees.
Abraham, Irving and Manlove \cite{AIM07} proposed two linear-time algorithms to find a stable matching in a variant of {\scriptsize SPA} where students have preferences over projects, whilst lecturers have preferences over students. The stable matching produced by the first algorithm is student-optimal (that is, students have the best possible projects that they could obtain in any stable matching) while the one produced by the second algorithm is lecturer-optimal (that is, lecturers have the best possible students that they could obtain in any stable matching).
Manlove and O'Malley \cite{MO08} proposed another variant of {\scriptsize SPA} where both students and lecturers have preferences over projects, referred to as {\scriptsize SPA-P}. In their paper, they formulated an appropriate stability definition for {\scriptsize SPA-P}, and they showed that stable matchings in this context can have different sizes. Moreover, in addition to stability, a very important requirement in practice is to match as many students to projects as possible. Consequently, Manlove and O'Malley \cite{MO08} proved that the problem of finding a maximum cardinality stable matching, denoted {\scriptsize MAX-SPA-P}, is NP-hard. Further, they gave a polynomial-time $2$-approximation algorithm for {\scriptsize MAX-SPA-P}. Subsequently, Iwama, Miyazaki and Yanagisawa \cite{IMY12} described an improved approximation algorithm with an upper bound of $\frac{3}{2}$, which builds on the one described in \cite{MO08}. In addition, Iwama \textit{et al.}~\cite{IMY12} showed that {\scriptsize MAX-SPA-P} is not approximable within $\frac{21}{19} - \epsilon$, for any $\epsilon > 0$, unless P = NP. For the upper bound, they modified Manlove and O'Malley's algorithm \cite{MO08} using Kir\'{a}ly's idea \cite{Kir11} for the approximation algorithm to find a maximum stable matching in a variant of the \textit{Stable Marriage problem}.
Considering the fact that the existing algorithms for {\scriptsize MAX-SPA-P} are only guaranteed to produce an approximate solution, we seek another technique to enable {\scriptsize MAX-SPA-P} to be solved optimally. Integer Programming (IP) is a powerful technique for producing optimal solutions to a range of NP-hard optimisation problems, with the aid of commercial optimisation solvers, e.g., Gurobi \cite{ZZZ20}, GLPK \cite{ZZZ21} and CPLEX \cite{ZZZ22}. These solvers can allow IP models to be solved in a reasonable amount of time, even with respect to problem instances that occur in practical applications.
\paragraph{\textbf{Our Contribution.}} In Sect.~\ref{sect:ip-model}, we describe an IP model to enable {\scriptsize MAX-SPA-P} to be solved optimally, and present a correctness result. In Sect.~\ref{sect:empirical-analysis}, we present results arising from an empirical analysis that investigates how the solution produced by the approximation algorithms compares to the optimal solution obtained from our IP model, with respect to the size of the stable matchings constructed, on instances that are both randomly-generated and derived from real datasets. These real datasets are based on actual student preference data and manufactured lecturer preference data from previous runs of student-project allocation processes at the School of Computing Science, University of Glasgow. We also present results showing the time taken by the IP model to solve the problem instances optimally. Our main finding is that the $\frac{3}{2}$-approximation algorithm finds stable matchings that are very close to having maximum cardinality. The next section gives a formal definition for {\scriptsize SPA-P}.
\section{Definitions and Preliminaries} \label{sect:definitions} We give a formal definition for {\scriptsize SPA-P} as described in the literature \cite{MO08}. An instance $\mathtt{I}$ of {\scriptsize SPA-P} involves a set $\mathcal{S} = \{s_1 , s_2, \ldots , s_{n_1}\}$ of \textit{students}, a set $\mathcal{P} = \{p_1 , p_2, \ldots , p_{n_2}\}$ of \textit{projects} and a set $\mathcal{L} = \{l_1 , l_2, \ldots , l_{n_3}\}$ of \textit{lecturers}. Each lecturer $l_k \in \mathcal{L}$ offers a non-empty subset of projects, denoted by $P_k$. We assume that $P_1, P_2, \ldots, P_{n_3}$ partition $\mathcal{P}$ (that is, each project is offered by exactly one lecturer). Also, each student $s_i \in \mathcal{S}$ has an \textit{acceptable} set of projects $A_i \subseteq \mathcal{P}$. We call a pair $(s_i, p_j) \in \mathcal{S} \times \mathcal{P}$ an \textit{acceptable pair} if $p_j \in A_i$. Moreover $s_i$ ranks $A_i$ in strict order of preference. Similarly, each lecturer $l_k$ ranks $P_k$ in strict order of preference. Finally, each project $p_j \in \mathcal{P}$ and lecturer $l_k \in \mathcal{L}$ has a positive capacity denoted by $c_j$ and $d_k$ respectively.
An \textit{assignment} $M$ is a subset of $\mathcal{S} \times \mathcal{P}$ where $(s_i, p_j) \in M$ implies that $s_i$ finds $p_j$ acceptable (that is, $p_j \in A_i$). We define the \textit{size} of $M$ as the number of (student, project) pairs in $M$, denoted $|M|$. If $(s_i, p_j) \in M$, we say that $s_i$ is \textit{assigned to} $p_j$ and $p_j$ is \textit{assigned} $s_i$. Furthermore, we denote the project assigned to student $s_i$ in $M$ as $M(s_i)$ (if $s_i$ is unassigned in $M$ then $M(s_i)$ is undefined). Similarly, we denote the set of students assigned to project $p_j$ in $M$ as $M(p_j)$. For ease of exposition, if $s_i$ is assigned to a project $p_j$ offered by lecturer $l_k$, we may also say that $s_i$ is \textit{assigned to} $l_k$, and $l_k$ is \textit{assigned} $s_i$. Thus we denote the set of students assigned to $l_k$ in $M$ as $M(l_k)$.
A project $p_j \in \mathcal{P}$ is \textit{full}, \textit{undersubscribed} or \textit{oversubscribed} in $M$ if $|M(p_j)|$ is equal to, less than or greater than $c_j$, respectively. The corresponding terms apply to each lecturer $l_k$ with respect to $d_k$. We say that a project $p_j \in \mathcal{P}$ is \textit{non-empty} if $|M(p_j)| > 0$.
A \textit{matching} $M$ is an assignment such that $|M(s_i)| \leq 1$ for each $s_i \in \mathcal{S}$, $|M(p_j)| \leq c_j$ for each $p_j \in \mathcal{P}$, and $|M(l_k)| \leq d_k$ for each $l_k \in \mathcal{L}$ (that is, each student is assigned to at most one project, and no project or lecturer is oversubscribed). Given a matching $M$, an acceptable pair $(s_i, p_j) \in (\mathcal{S} \times \mathcal{P}) \setminus M$ is a \textit{blocking pair} of $M$ if the following conditions are satisfied: \begin{enumerate} \label{text:blocking-pair-definition} \item[1.] either $s_i$ is unassigned in $M$ or $s_i$ prefers $p_j$ to $M(s_i)$, and $p_j$ is undersubscribed, and either
\begin{enumerate}
\item $s_i \in M(l_k)$ and $l_k$ prefers $p_j$ to $M(s_i)$, or
\item $s_i \notin M(l_k)$ and $l_k$ is undersubscribed, or
\item $s_i \notin M(l_k)$ and $l_k$ prefers $p_j$ to his worst non-empty project,
\end{enumerate}
where $l_k$ is the lecturer who offers $p_j$. \end{enumerate}
If such a pair were to occur, it would undermine the integrity of the matching as the student and lecturer involved would rather be assigned together than remain in their current assignment. With respect to the {\scriptsize SPA-P} instance given in Fig.~\ref{fig:spa-p-instance-1}, $M_1 = \{(s_1, p_3), (s_2, p_1)\}$ is clearly a matching. It is obvious that each of students $s_1$ and $s_2$ is matched to her first ranked project in $M_1$. Although $s_3$ is unassigned in $M_1$, the lecturer offering $p_3$ (the only project that $s_3$ finds acceptable) is assumed to be indifferent among those students who find $p_3$ acceptable. Also $p_3$ is full in $M_1$. Thus, we say that $M_1$ admits no blocking pair. \begin{figure}
\caption{An instance $\mathtt{I}_1$ of {\scriptsize SPA-P}. Each project has capacity $1$, whilst each of lecturer $l_1$ and $l_2$ has capacity $2$ and $1$ respectively.}
\label{fig:spa-p-instance-1}
\end{figure} Another way in which a matching could be undermined is through a group of students acting together. Given a matching $M$, a \textit{coalition} is a set of students $\{s_{i_0}, \ldots, s_{i_{r-1}}\}$, for some $r \geq 2$ such that each student $s_{i_j}$ ($0 \leq j \leq r-1$) is assigned in $M$ and prefers $M(s_{i_{j+1}})$ to $M(s_{i_j})$, where addition is performed modulo $r$. With respect to Fig.~\ref{fig:spa-p-instance-1}, the matching $M_2 = \{(s_1, p_1), (s_2, p_2), (s_3, p_3)\}$ admits a coalition $\{s_1, s_2\}$, as students $s_1$ and $s_2$ would rather permute their assigned projects in $M_2$ so as to be better off. We note that the number of students assigned to each project and lecturer involved in any such swap remains the same after such a permutation. Moreover, the lecturers involved would have no incentive to prevent the switch from occurring since they are assumed to be indifferent between the students assigned to the projects they are offering. If a matching admits no coalition, we define such matching to be \textit{coalition-free}.
Given an instance $\mathtt{I}$ of {\scriptsize SPA-P}, we define a matching $M$ in $\mathtt{I}$ to be \textit{stable} if $M$ admits no blocking pair and is coalition-free. It turns out that with respect to this definition, stable matchings in $\mathtt{I}$ can have different sizes. Clearly, each of the matchings $M_1 = \{(s_1, p_3), (s_2, p_1)\}$ and $M_3 = \{(s_1, p_2), (s_2, p_1), (s_3, p_3)\}$ is stable in the {\scriptsize SPA-P} instance $\mathtt{I}_1$ shown in Fig.~\ref{fig:spa-p-instance-1}. The varying sizes of the stable matchings produced naturally leads to the problem of finding a maximum cardinality stable matching given an instance of {\scriptsize SPA-P}, which we denote by {\scriptsize MAX-SPA-P}. In the next section, we describe our IP model to enable {\scriptsize MAX-SPA-P} to be solved optimally.
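The definitions above (blocking pairs of types 1(a)-(c), and coalitions as cycles in the envy graph) translate directly into a stability checker. The sketch below is a plain Python illustration, not part of the IP model; the instance data is a hypothetical reconstruction consistent with the matchings $M_1$, $M_2$, $M_3$ discussed for Fig.~\ref{fig:spa-p-instance-1} (the figure's preference lists are not reproduced in this text, so the exact lists are assumptions):

```python
def is_stable(M, pref_s, pref_l, cap_p, cap_l):
    """Check stability of matching M (dict: student -> project) for a
    SPA-P instance given by students' and lecturers' preference lists."""
    lec = {p: l for l, ps in pref_l.items() for p in ps}  # p_j -> offering lecturer
    occ_p, occ_l = {}, {}                                 # |M(p_j)|, |M(l_k)|
    for s, p in M.items():
        occ_p[p] = occ_p.get(p, 0) + 1
        occ_l[lec[p]] = occ_l.get(lec[p], 0) + 1
    rank = lambda prefs, p: prefs.index(p)

    # blocking pairs, conditions 1(a)-(c)
    for s, prefs in pref_s.items():
        for p in prefs:
            if M.get(s) == p:
                continue
            if s in M and rank(prefs, M[s]) < rank(prefs, p):
                continue                                  # s prefers M(s) to p
            if occ_p.get(p, 0) >= cap_p[p]:
                continue                                  # p_j must be undersubscribed
            l = lec[p]
            if s in M and lec[M[s]] == l:                 # condition 1(a)
                if rank(pref_l[l], p) < rank(pref_l[l], M[s]):
                    return False
            else:
                if occ_l.get(l, 0) < cap_l[l]:            # condition 1(b)
                    return False
                worst = max((q for q in pref_l[l] if occ_p.get(q, 0) > 0),
                            key=lambda q: rank(pref_l[l], q))
                if rank(pref_l[l], p) < rank(pref_l[l], worst):   # condition 1(c)
                    return False

    # coalitions: a cycle in the envy graph means M is not coalition-free
    adj = {s: [t for t in M if t != s and M[t] in pref_s[s]
               and rank(pref_s[s], M[t]) < rank(pref_s[s], M[s])] for s in M}
    state = dict.fromkeys(M, 0)                           # 0 new, 1 on stack, 2 done

    def cyclic(s):
        state[s] = 1
        if any(state[t] == 1 or (state[t] == 0 and cyclic(t)) for t in adj[s]):
            return True
        state[s] = 2
        return False

    return not any(state[s] == 0 and cyclic(s) for s in M)

# Hypothetical instance in the spirit of Fig. 1 (preference lists assumed):
pref_s = {"s1": ["p3", "p2", "p1"], "s2": ["p1", "p2"], "s3": ["p3"]}
pref_l = {"l1": ["p1", "p2"], "l2": ["p3"]}   # l1 offers p1, p2; l2 offers p3
cap_p = {"p1": 1, "p2": 1, "p3": 1}
cap_l = {"l1": 2, "l2": 1}

assert is_stable({"s1": "p3", "s2": "p1"}, pref_s, pref_l, cap_p, cap_l)
assert not is_stable({"s1": "p1", "s2": "p2", "s3": "p3"},
                     pref_s, pref_l, cap_p, cap_l)        # coalition {s1, s2}
assert is_stable({"s1": "p2", "s2": "p1", "s3": "p3"},
                 pref_s, pref_l, cap_p, cap_l)
```

On this toy data the two stable matchings have sizes $2$ and $3$, illustrating that stable matchings in {\scriptsize SPA-P} can have different sizes.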
\section{An IP model for {\footnotesize MAX-SPA-P}} \label{sect:ip-model} Let $\mathtt{I}$ be an instance of {\scriptsize SPA-P} involving a set $\mathcal{S} = \{s_1 , s_2, \ldots , s_{n_1}\}$ of students, a set $\mathcal{P} = \{p_1 , p_2, \ldots , p_{n_2}\}$ of projects and a set $\mathcal{L} = \{l_1 , l_2, \ldots , l_{n_3}\}$ of lecturers. We construct an IP model $\mathtt{J}$ of $\mathtt{I}$ as follows. Firstly, we create binary variables $x_{i, j} \in \{0, 1\}$ $(1 \leq i \leq n_1, 1 \leq j \leq n_2)$ for each acceptable pair $(s_i, p_j) \in \mathcal{S} \times \mathcal{P}$ such that $x_{i, j}$ indicates whether $s_i$ is assigned to $p_j$ in a solution or not. Henceforth, we denote by $S$ a solution in the IP model $\mathtt{J}$, and we denote by $M$ the matching derived from $S$. If $x_{i,j} = 1$ under $S$ then intuitively $s_i$ is assigned to $p_j$ in $M$, otherwise $s_i$ is not assigned to $p_j$ in $M$. In what follows, we give the constraints to ensure that the assignment obtained from a feasible solution in $\mathtt{J}$ is a matching.
\paragraph{\textbf{Matching Constraints}.} The feasibility of a matching can be ensured with the following three sets of constraints. \begin{align} \label{ineq:studentassignment} \sum\limits_{p_{j} \in A_{i}} x_{i,j} \leq 1 &\qquad (1 \leq i \leq n_1), \\ \label{ineq:projectcapacity} \sum\limits_{i = 1}^{n_1} x_{i,j} \leq c_j & \qquad (1 \leq j \leq n_2), \\ \label{ineq:lecturercapacity} \sum\limits_{i = 1}^{n_1} \; \sum\limits_{p_{j} \in P_k} x_{i,j} \leq d_k & \qquad (1 \leq k \leq n_3)\enspace. \end{align} Note that \eqref{ineq:studentassignment} implies that each student $s_i \in \mathcal{S}$ is not assigned to more than one project, while \eqref{ineq:projectcapacity} and \eqref{ineq:lecturercapacity} imply that the capacity of each project $p_j \in \mathcal{P}$ and each lecturer $l_k \in \mathcal{L}$ is not exceeded.
We define $rank(s_i, p_j)$, the \textit{rank} of $p_j$ on $s_i$'s preference list, to be $r+1$ where $r$ is the number of projects that $s_i$ prefers to $p_j$. An analogous definition holds for $rank(l_k, p_j)$, the \textit{rank} of $p_j$ on $l_k$'s preference list. With respect to an acceptable pair $(s_i, p_j)$, we define $S_{i,j} = \{p_{j'} \in A_i: rank(s_i, p_{j'}) \leq rank(s_i, p_j)\}$, the set of projects that $s_i$ likes as much as $p_j$. For a project $p_j$ offered by lecturer $l_k \in \mathcal{L}$, we also define ${T_{k,j} = \{p_q \in P_k: rank(l_k, p_j) < rank(l_k, p_q)\}}$, the set of projects that are worse than $p_j$ on $l_k$'s preference list.
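For instance, with preference lists stored as ordered arrays, $rank$, $S_{i,j}$ and $T_{k,j}$ can be computed as follows (a sketch; the sample lists are hypothetical):

```python
def rank(pref, p):
    # rank(agent, p) = r + 1, where r = number of entries preferred to p
    return pref.index(p) + 1

def S_ij(A_i, p):
    # projects s_i likes as much as p_j
    return {q for q in A_i if rank(A_i, q) <= rank(A_i, p)}

def T_kj(P_k, p):
    # projects strictly worse than p_j on l_k's list
    return {q for q in P_k if rank(P_k, p) < rank(P_k, q)}

A_1 = ["p3", "p2", "p1"]   # hypothetical preference list of s_1
P_1 = ["p1", "p2"]         # hypothetical preference list of l_1
assert rank(A_1, "p2") == 2
assert S_ij(A_1, "p2") == {"p3", "p2"}
assert T_kj(P_1, "p1") == {"p2"}
```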
In what follows, we fix an arbitrary acceptable pair $(s_i, p_j)$ and we impose constraints to ensure that $(s_i, p_j)$ is not a blocking pair of the matching $M$ (that is, $(s_i, p_j)$ is not a type 1(a), type 1(b) or type 1(c) blocking pair of $M$). Firstly, let $l_k$ be the lecturer who offers $p_j$.
\paragraph{\textbf{Blocking Pair Constraints.}} We define $\theta_{i,j} = 1 - \sum_{p_{j'} \in S_{i, j}}x_{i,j'}$.
Intuitively, $\theta_{i,j} = 1$ if and only if $s_i$ is unassigned in $M$ or prefers $p_j$ to $M(s_i)$. Next we create a binary variable $\alpha_j$ in $\mathtt{J}$ such that $\alpha_j = 1$ corresponds to the case when $p_j$ is undersubscribed in $M$. We enforce this condition by imposing the following constraint. \begin{eqnarray} \label{ineq:project-undersubscribed} c_j \alpha_j \geq c_j - \sum\limits_{i' = 1}^{n_1} x_{i',j}\enspace, \end{eqnarray}
where $\sum_{i' = 1}^{n_1} x_{i',j} = |M(p_j)|$. If $p_j$ is undersubscribed in $M$ then the RHS of \eqref{ineq:project-undersubscribed} is at least $1$, and this implies that $\alpha_j = 1$. Otherwise, $\alpha_j$ is not constrained. Now let $\gamma_{i,j,k} = \sum_{p_{j'} \in T_{k,j}} x_{i,j'}$.
Intuitively, if $\gamma_{i,j,k} = 1$ in $S$ then $s_i$ is assigned to a project $p_{j'}$ offered by $l_k$ in $M$, where $l_k$ prefers $p_j$ to $p_{j'}$. The following constraint ensures that $(s_i, p_j)$ does not form a type 1(a) blocking pair of $M$. \begin{align} \label{ineq:type-a-blockingpair} \Aboxed{ \theta_{i,j} + \alpha_{j} + \gamma_{i,j,k} \leq 2\enspace.} \end{align} Constraint \eqref{ineq:type-a-blockingpair} forces at least one of $\theta_{i,j}$, $\alpha_j$ and $\gamma_{i,j,k}$ to be $0$, i.e.\ they cannot all equal $1$. Thus the pair $(s_i, p_j)$ is not a type 1(a) blocking pair of $M$.
Next we define $\beta_{i,k} = \sum_{p_{j'} \in P_k} x_{i,j'}.$
Clearly, $s_i$ is assigned to a project offered by $l_k$ in $M$ if and only if $\beta_{i,k} = 1$ in $S$. Now we create a binary variable $\delta_k$ in $\mathtt{J}$ such that $\delta_k = 1$ in $S$ corresponds to the case when $l_k$ is undersubscribed in $M$. We enforce this condition by imposing the following constraint. \begin{eqnarray} \label{ineq:lecturer-undersubscribed} d_k\delta_k \geq d_k - \sum\limits_{i' = 1}^{n_1} \; \sum\limits_{p_{j'} \in P_k} x_{i',j'}\enspace, \end{eqnarray}
where $\sum_{i' = 1}^{n_1} \; \sum_{p_{j'} \in P_k} x_{i',j'} = |M(l_k)|$. If $l_k$ is undersubscribed in $M$ then the RHS of \eqref{ineq:lecturer-undersubscribed} is at least $1$, and this implies that $\delta_k = 1$. Otherwise, $\delta_k$ is not constrained. The following constraint ensures that $(s_i, p_j)$ does not form a type 1(b) blocking pair of $M$. \begin{align} \label{ineq:type-b-blockingpair} \Aboxed{ \theta_{i,j} + \alpha_{j} + (1 - \beta_{i,k}) + \delta_k \leq 3\enspace.} \end{align}
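As a sanity check (our own illustration, not part of the model), one can enumerate all $16$ assignments of the four binary indicators and confirm that constraint \eqref{ineq:type-b-blockingpair} excludes exactly the configuration describing a type 1(b) blocking pair:

```python
from itertools import product

def violates_type_1b(theta, alpha, beta, delta):
    """True iff theta + alpha + (1 - beta) + delta <= 3 fails."""
    return theta + alpha + (1 - beta) + delta > 3

# The only excluded configuration: s_i wants p_j (theta = 1), p_j is
# undersubscribed (alpha = 1), s_i is not assigned to a project of l_k
# (beta = 0), and l_k is undersubscribed (delta = 1).
bad = [v for v in product((0, 1), repeat=4) if violates_type_1b(*v)]
assert bad == [(1, 1, 0, 1)]
```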
We define $D_{k,j} = \{p_{j'} \in P_k: rank(l_k, p_{j'}) \leq rank(l_k, p_j)\}$, the set of projects that $l_k$ ranks at least as highly as $p_j$. Next, we create a binary variable $\eta_{j,k}$ in $\mathtt{J}$ such that $\eta_{j,k} = 1$ if $l_k$ is full and prefers $p_j$ to his worst non-empty project in $M$. We enforce this by imposing the following constraint. \begin{eqnarray} \label{ineq:lecturerfullconstraint} d_k\eta_{j,k} \geq d_k - \sum\limits_{i' = 1}^{n_1} \; \sum\limits_{p_{j'} \in D_{k,j}} x_{i',j'}\enspace. \end{eqnarray} Finally, to avoid a type 1(c) blocking pair, we impose the following constraint. \begin{align} \label{ineq:type-c-blockingpair} \Aboxed{ \theta_{i,j} + \alpha_{j} + (1 - \beta_{i,k}) + \eta_{j,k} \leq 3\enspace.} \end{align}
Next, we give the constraints to ensure that the matching obtained from a feasible solution in $\mathtt{J}$ is coalition-free. \paragraph{\textbf{Coalition Constraints.}} First, we introduce some additional notation. Given an instance $\mathtt{I}'$ of {\scriptsize{SPA-P}} and a matching $M'$ in $\mathtt{I}'$, we define the \textit{envy graph} $G(M') = (\mathcal{S}, A)$, where the vertex set $\mathcal{S}$ is the set of students in $\mathtt{I}'$, and the arc set $A = \{(s_i, s_{i'}): s_i \mbox{ prefers } M'(s_{i'}) \mbox{ to } M'(s_i)\}$. It is clear that the matching $M_2 = \{(s_1, p_1), (s_2, p_2), (s_3, p_3)\}$ admits a coalition $\{s_1, s_2\}$ with respect to the instance given in Fig.~\ref{fig:spa-p-instance-1}. The resulting envy graph $G(M_2)$ is illustrated below. \begin{figure}\label{fig:envygraph}
\caption{The envy graph $G(M_2)$.} \end{figure} Clearly, $G(M')$ contains a directed cycle if and only if $M'$ admits a coalition. Moreover, $G(M')$ is acyclic if and only if it admits a topological ordering. Now to ensure that the matching $M$ obtained from a feasible solution $S$ under $\mathtt{J}$ is coalition-free, we will enforce $\mathtt{J}$ to encode the envy graph $G(M)$ and impose the condition that it must admit a topological ordering. In what follows, we build on our IP model $\mathtt{J}$ of $\mathtt{I}$.
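Before encoding this condition in $\mathtt{J}$, we note that the envy graph and its coalition test are easy to realise directly. The following sketch (with hypothetical preference data, not those of Fig.~\ref{fig:spa-p-instance-1}) builds the envy graph of a matching and checks for a directed cycle via Kahn's algorithm:

```python
def envy_graph(matching, prefs):
    """Arcs (s, t): student s prefers t's project to its own.
    matching: dict student -> project; prefs: dict student -> list of
    projects in strict preference order (most preferred first)."""
    def rank(s, p):
        return prefs[s].index(p) if p in prefs[s] else len(prefs[s])
    return {(s, t) for s in matching for t in matching
            if s != t and rank(s, matching[t]) < rank(s, matching[s])}

def has_coalition(matching, prefs):
    """A coalition exists iff the envy graph has a directed cycle;
    here acyclicity is checked by Kahn's algorithm (topological sort)."""
    arcs = envy_graph(matching, prefs)
    indeg = {s: 0 for s in matching}
    for (_, t) in arcs:
        indeg[t] += 1
    queue = [s for s in matching if indeg[s] == 0]
    seen = 0
    while queue:
        s = queue.pop()
        seen += 1
        for (u, t) in arcs:
            if u == s:
                indeg[t] -= 1
                if indeg[t] == 0:
                    queue.append(t)
    return seen < len(matching)   # some vertex is left on a cycle

# Hypothetical preferences under which M_2 admits the coalition {s_1, s_2}:
prefs = {"s1": ["p2", "p1"], "s2": ["p1", "p2"], "s3": ["p3"]}
M2 = {"s1": "p1", "s2": "p2", "s3": "p3"}
```

Here `has_coalition(M2, prefs)` is `True`, while the matching in which $s_1$ and $s_2$ swap projects admits no coalition.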
We create a binary variable $e_{i,i'}$ for each $(s_i, s_{i'}) \in \mathcal{S} \times \mathcal{S}$, $s_i \neq s_{i'}$, such that the $e_{i,i'}$ variables will correspond to the adjacency matrix of $G(M)$. For each $i$ and $i'$ ($1 \leq i \leq n_1, \; 1 \leq i' \leq n_1, \; i \neq i'$) and for each $j$ and $j'$ ($1 \leq j \leq n_2, \; 1 \leq j' \leq n_2$) such that $s_i$ prefers $p_{j'}$ to $p_j$, we impose the following constraint. \begin{align} \label{ineq:envy-variable} e_{i,{i'}} + 1 \geq x_{i,j} + x_{i',j'}\enspace. \end{align}
If $(s_i, p_j) \in M$ and $(s_{i'}, p_{j'}) \in M$ and $s_i$ prefers $p_{j'}$ to $p_j$, then $e_{i,{i'}} = 1$ and we say $s_i$ \textit{envies} $s_{i'}$. Otherwise, $e_{i,{i'}}$ is not constrained. Next we enforce the condition that $G(M)$ must have a topological ordering. To record the label of each vertex in a topological ordering, we create an integer-valued variable $v_i$ corresponding to each student $s_i \in \mathcal{S}$ (and intuitively to each vertex in $G(M)$). We wish to enforce the constraint that if $e_{i,{i'}} = 1$ (that is, $(s_i, s_{i'}) \in A$), then $v_i < v_{i'}$ (that is, the label of vertex $s_i$ is smaller than the label of vertex $s_{i'}$). This is achieved by imposing the following constraint for all $i$ and $i'$ ($1 \leq i \leq n_1, \; 1 \leq i' \leq n_1, \; i \neq i'$). \begin{align} \label{ineq:topological-ordering} \Aboxed{ v_i < v_{i'} + n_1(1 - e_{i,i'})\enspace.} \end{align} Note that the constraints \eqref{ineq:topological-ordering} can all be satisfied if and only if $G(M)$ admits no directed cycle, which in turn implies that $M$ is coalition-free. \paragraph{\textbf{Variables}.} We define a collective notation for each variable involved in $\mathtt{J}$ as follows. \begin{center} \begin{tabular}{lll} $X = \{ x_{i,j}: 1 \leq i \leq n_1, 1 \leq j \leq n_2\}$, & & $\Lambda = \{\alpha_{j}: 1 \leq j \leq n_2\}$, \\ $H = \{ \eta_{j,k}: 1 \leq j \leq n_2, 1 \leq k \leq n_3\}$, & & $\Delta = \{\delta_{k}: 1 \leq k \leq n_3\}$, \\ $E = \{ e_{i,i'}: 1 \leq i \leq n_1, 1 \leq i' \leq n_1\}$, & & $V = \{ v_{i}: 1 \leq i \leq n_1\}$\enspace. \end{tabular} \end{center} \paragraph{\textbf{Objective Function.}} The objective function given below is the sum of all the $x_{i,j}$ binary variables. It seeks to maximize the number of students assigned (that is, the cardinality of the matching).
\begin{align} \label{ineq:objective-function} \Aboxed{\max \; \sum\limits_{i = 1}^{n_1} \; \sum\limits_{p_j \in A_i}x_{i,j}\enspace.} \end{align} Finally, we have constructed an IP model $\mathtt{J}$ of $\mathtt{I}$ comprising the set of integer-valued variables $X, \Lambda, H, \Delta, E \mbox{ and } V$, the set of constraints \eqref{ineq:studentassignment} - \eqref{ineq:topological-ordering} and an objective function \eqref{ineq:objective-function}. Note that $\mathtt{J}$ can then be used to solve {\scriptsize{MAX-SPA-P}} optimally. Given an instance $\mathtt{I}$ of {\scriptsize{SPA-P}} formulated as an IP model $\mathtt{J}$ using the above transformation, we have the following lemmas.
\begin{lemma} \label{lemma:solution-matching}
A feasible solution $S$ to $\mathtt{J}$ corresponds to a stable matching $M$ in $\mathtt{I}$, where $obj(S) = |M|$. \end{lemma}
\begin{proof}
Assume firstly that $\mathtt{J}$ has a feasible solution $S$. Let $M = \{(s_i, p_j) \in \mathcal{S} \times \mathcal{P}: x_{i,j} = 1\}$ be the assignment in $\mathtt{I}$ generated from $S$. Clearly $obj(S) = |M|$. We note that \eqref{ineq:studentassignment} ensures that each student is assigned in $M$ to at most one project. Moreover, \eqref{ineq:projectcapacity} and \eqref{ineq:lecturercapacity} ensure that the capacity of each project and lecturer is not exceeded in $M$. Thus $M$ is a matching. We will prove that \eqref{ineq:project-undersubscribed} - \eqref{ineq:type-c-blockingpair} guarantee that $M$ admits no blocking pair.
Suppose for a contradiction that there exists some acceptable pair $(s_i, p_j)$ that forms a blocking pair of $M$, where $l_k$ is the lecturer who offers $p_j$. This implies that $s_i$ is either unassigned in $M$ or prefers $p_j$ to $M(s_i)$. In either of these cases, $\sum_{p_{j'} \in S_{i,j}} x_{i,j'} = 0$, and thus $\theta_{i,j} = 1$. Moreover, as $(s_i, p_j)$ is a blocking pair of $M$, $p_j$ has to be undersubscribed in $M$, and thus $\sum_{i'=1}^{n_1} x_{i',j} < c_j$. This implies that the RHS of \eqref{ineq:project-undersubscribed} is strictly greater than $0$, and since $S$ is a feasible solution to $\mathtt{J}$, $\alpha_j = 1$.
Now suppose $(s_i, p_j)$ is a type 1(a) blocking pair of $M$. This implies $M(s_i) = p_{j''}$ for some $p_{j''} \in P_k$, where $l_k$ prefers $p_j$ to $p_{j''}$. Thus $\gamma_{i,j,k} = \sum_{p_{j'} \in T_{k,j}} x_{i,j'} = 1$, which implies that the LHS of \eqref{ineq:type-a-blockingpair} is strictly greater than $2$. Thus $S$ is infeasible, a contradiction.
Next suppose $(s_i, p_j)$ is a type 1(b) blocking pair of $M$. This implies $s_i \notin M(l_k)$ and thus $1 - \beta_{i,k} = 1 - \sum_{p_{j'} \in P_k} x_{i,j'} = 1$. Also, $l_k$ has to be undersubscribed in $M$ which implies that the RHS of \eqref{ineq:lecturer-undersubscribed} is strictly greater than $0$, and thus $\delta_k = 1$. Hence the LHS of \eqref{ineq:type-b-blockingpair} is strictly greater than $3$, a contradiction, since $S$ is a feasible solution.
Next suppose $(s_i, p_j)$ is a type 1(c) blocking pair of $M$. This implies that $s_i \notin M(l_k)$ and thus $\beta_{i,k} = 0$. Also $l_k$ is full in $M$ and prefers $p_j$ to $p_z$, where $p_z$ is $l_k$'s worst non-empty project in $M$. This implies that the RHS of \eqref{ineq:lecturerfullconstraint} is strictly greater than $0$, and thus $\eta_{j,k} = 1$. Hence the LHS of \eqref{ineq:type-c-blockingpair} is strictly greater than $3$, and thus $S$ is infeasible, a contradiction.
Finally, we show that \eqref{ineq:envy-variable} and \eqref{ineq:topological-ordering} ensure that $M$ is coalition-free. Suppose for a contradiction that $M$ admits a coalition $\{s_{i_0}, \ldots, s_{i_{r-1}}\}$, for some $r \geq 2$. This implies that for each $t$ $(0 \leq t \leq r-1)$, $s_{i_t}$ prefers $M(s_{i_{t+1}})$ to $M(s_{i_t})$, where addition is taken modulo $r$, and hence $e_{i_t, i_{t+1}} = 1$, by \eqref{ineq:envy-variable}. It follows from \eqref{ineq:topological-ordering} that $v_{i_0} < v_{i_1} < \cdots < v_{i_{r-2}} < v_{i_{r-1}} < v_{i_r} = v_{i_0}$, a contradiction. Hence $M$ is coalition-free, and thus $M$ is a stable matching. \end{proof}
\begin{lemma} \label{lemma:matching-solution}
A stable matching $M$ in $\mathtt{I}$ corresponds to a feasible solution $S$ to $\mathtt{J}$, where $|M| = obj(S)$. \end{lemma}
\begin{proof} Let $M$ be a stable matching in $\mathtt{I}$. First we set all the binary variables involved in $\mathtt{J}$ to $0$. For all $(s_i, p_j) \in M$, we set $x_{i, j} = 1$. Now, since $M$ is a matching, it is clear that \eqref{ineq:studentassignment} - \eqref{ineq:lecturercapacity} are satisfied. For any acceptable pair $(s_i, p_j) \in (\mathcal{S} \times \mathcal{P}) \setminus M$ such that $s_i$ is unassigned in $M$ or prefers $p_j$ to $M(s_i)$, we set $\theta_{i,j} = 1$. For any project $p_j \in \mathcal{P}$ that is undersubscribed in $M$, we set $\alpha_j = 1$ and thus \eqref{ineq:project-undersubscribed} is satisfied. For \eqref{ineq:type-a-blockingpair} not to be satisfied, its LHS must be strictly greater than $2$. This would only happen if there exists $(s_i, p_j) \in (\mathcal{S} \times \mathcal{P}) \setminus M$, where $l_k$ is the lecturer who offers $p_j$, such that $\theta_{i,j} = 1$, $\alpha_j = 1$ and $\gamma_{i,j,k} = 1$. This implies that $s_i$ is assigned in $M$ to a project $p_{j'}$ offered by $l_k$ such that $s_i$ prefers $p_j$ to $p_{j'}$, $p_j$ is undersubscribed in $M$, and $l_k$ prefers $p_j$ to $p_{j'}$. Thus $(s_i, p_j)$ is a type 1(a) blocking pair of $M$, a contradiction to the stability of $M$. Hence \eqref{ineq:type-a-blockingpair} is satisfied.
Now for any lecturer $l_k \in \mathcal{L}$ that is undersubscribed in $M$, we set $\delta_k = 1$. Thus \eqref{ineq:lecturer-undersubscribed} is satisfied. Suppose \eqref{ineq:type-b-blockingpair} is not satisfied. This would only happen if there exists $(s_i, p_j) \in (\mathcal{S} \times \mathcal{P}) \setminus M$, where $l_k$ is the lecturer who offers $p_j$, such that $\theta_{i,j} = 1$, $\alpha_j = 1$, $\beta_{i,k} = 0$ and $\delta_{k} = 1$. This implies that either $s_i$ is unassigned in $M$ or prefers $p_j$ to $M(s_i)$, $s_i \notin M(l_k)$, and each of $p_j$ and $l_k$ is undersubscribed. Thus $(s_i, p_j)$ is a type 1(b) blocking pair of $M$, a contradiction to the stability of $M$. Hence \eqref{ineq:type-b-blockingpair} is satisfied.
Suppose $l_k$ is a lecturer in $\mathcal{L}$ and $p_j$ is any project on $l_k$'s preference list. Let $p_z$ be $l_k$'s worst non-empty project in $M$. If $l_k$ is full in $M$ and prefers $p_j$ to $p_z$, we set $\eta_{j,k} = 1$. Then \eqref{ineq:lecturerfullconstraint} is satisfied. Now suppose \eqref{ineq:type-c-blockingpair} is not satisfied. This would only happen if there exists $(s_i, p_j) \in (\mathcal{S} \times \mathcal{P}) \setminus M$, where $l_k$ is the lecturer who offers $p_j$, such that $\theta_{i,j} = 1, \alpha_j = 1, \beta_{i,k} = 0$ and $\eta_{j,k} = 1$. This implies that either $s_i$ is unassigned in $M$ or prefers $p_j$ to $M(s_i)$, $s_i \notin M(l_k)$, $p_j$ is undersubscribed and $l_k$ prefers $p_j$ to his worst non-empty project in $M$. Thus $(s_i, p_j)$ is a type 1(c) blocking pair of $M$, a contradiction to the stability of $M$. Hence \eqref{ineq:type-c-blockingpair} is satisfied.
We denote by $G(M) = (\mathcal{S}, A)$ the envy graph of $M$. For any two distinct students $s_i$ and $s_{i'}$ in $\mathcal S$ such that $(s_i, p_j) \in M$, $(s_{i'}, p_{j'}) \in M$ and $s_i$ prefers $p_{j'}$ to $p_j$ (that is, $(s_i, s_{i'}) \in A$), we set $e_{i,i'} = 1$. Thus \eqref{ineq:envy-variable} is satisfied. Since $M$ is a stable matching, $M$ is coalition-free. This implies that $G(M)$ is acyclic and has a topological ordering $\sigma : \mathcal{S} \rightarrow \{1, 2, \ldots, n_1\}$. For each $i$ ($1 \leq i \leq n_1$), let $v_i = \sigma(s_i)$. Now suppose \eqref{ineq:topological-ordering} is not satisfied. This implies that there exist vertices $s_i$ and $s_{i'}$ in $G(M)$ such that $v_i \geq v_{i'} + n_1(1 - e_{i,i'})$. This is only possible if $ e_{i,i'} = 1$ since $1 \leq v_i \leq n_1$ and $1 \leq v_{i'} \leq n_1$. Hence $v_i \geq v_{i'}$, a contradiction to the fact that $\sigma$ is a topological ordering of $G(M)$ (since $(s_i, s_{i'}) \in A$ implies $v_i < v_{i'}$). Hence $S$, comprising the above assignment of values to the variables in $X \cup \Lambda \cup H \cup \Delta \cup E \cup V$, is a feasible solution to $\mathtt{J}$; and clearly $|M| = obj(S)$. \end{proof}
Lemmas \ref{lemma:solution-matching} and \ref{lemma:matching-solution} immediately give rise to the following theorem regarding the correctness of $\mathtt{J}$. \begin{restatable}[]{theorem}{correctness} \label{thrm} A feasible solution to $\mathtt{J}$ is optimal if and only if the corresponding stable matching in $\mathtt{I}$ is of maximum cardinality. \end{restatable} \begin{proof}
Let $S$ be an optimal solution to $\mathtt{J}$. Then by Lemma \ref{lemma:solution-matching}, $S$ corresponds to a stable matching $M$ in $\mathtt{I}$ such that $obj(S) = |M|$. Suppose $M$ is not of maximum cardinality. Then there exists a stable matching $M'$ in $\mathtt{I}$ such that $|M'| > |M|$. By Lemma \ref{lemma:matching-solution}, $M'$ corresponds to a feasible solution $S'$ to $\mathtt{J}$ such that $obj(S') = |M'| > |M| = obj(S)$. This is a contradiction, since $S$ is an optimal solution to $\mathtt{J}$. Hence $M$ is a maximum stable matching in $\mathtt{I}$. Similarly, if $M$ is a maximum stable matching in $\mathtt{I}$ then $M$ corresponds to an optimal solution $S$ to $\mathtt{J}$. \end{proof}
\section{Empirical Analysis} \label{sect:empirical-analysis} In this section we present results from an empirical analysis that investigates how the sizes of the stable matchings produced by the approximation algorithms compare to the size of the optimal solution obtained from our IP model, on {\scriptsize SPA-P} instances that are either randomly generated or derived from real datasets.
\subsection{Experimental Setup} There are clearly several parameters that can be varied, such as the number of students, projects and lecturers; the length of the students' preference lists; as well as the total capacities of the projects and lecturers. For each range of values of the first two parameters, we generated a set of random {\scriptsize SPA-P} instances. For each set, we recorded the average size of a stable matching obtained from running the approximation algorithms and the IP model. Further, we recorded the average time taken for the IP model to find an optimal solution.
By design, the approximation algorithms were randomised with respect to the sequence in which students apply to projects, and the choice of students to reject when projects and/or lecturers become full. In the light of this, for each dataset, we also run the approximation algorithms 100 times and record the size of the largest stable matching obtained over these runs. Our experiments therefore involve five algorithms: the optimal IP-based algorithm, the two approximation algorithms run once, and the two approximation algorithms run 100 times.
We performed our experiments on a machine with dual Intel Xeon CPU E5-2640 processors with 64GB of RAM, running Ubuntu 14.04. Each of the approximation algorithms was implemented in Java\footnote{\label{github}https://github.com/sofiat-olaosebikan/spa-p-isco-2018}. For our IP model, we carried out the implementation using the Gurobi optimisation solver in Java\textsuperscript{\ref{github}}. For correctness testing on these implementations, we designed a stability checker which verifies that the matching returned by the approximation algorithms and the IP model does not admit a blocking pair or a coalition.
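For illustration, a much-simplified version of such a blocking-pair check (our own sketch with hypothetical data structures; the full checker in the repository above also verifies the absence of coalitions) is:

```python
def blocks(s, p, M, spref, lpref, offers, cap_p, cap_l):
    """Does the acceptable pair (s, p) block matching M (types 1(a)-(c))?
    M maps each student to a project or None; spref/lpref are ranked
    preference lists; offers maps each project to its lecturer."""
    if p not in spref[s] or M[s] == p:
        return False
    if M[s] is not None and spref[s].index(p) >= spref[s].index(M[s]):
        return False                         # s neither unassigned nor prefers p
    if sum(1 for t in M if M[t] == p) >= cap_p[p]:
        return False                         # p must be undersubscribed
    k = offers[p]
    rank = lpref[k].index
    if M[s] is not None and offers[M[s]] == k:
        return rank(p) < rank(M[s])          # type 1(a): l_k prefers p
    load = [M[t] for t in M if M[t] is not None and offers[M[t]] == k]
    if len(load) < cap_l[k]:
        return True                          # type 1(b): l_k undersubscribed
    return rank(p) < rank(max(load, key=rank))   # type 1(c): l_k full

# Hypothetical toy instance: l1 offers p1 and p2, all capacities 1.
spref = {"s1": ["p1"], "s2": ["p1", "p2"]}
lpref = {"l1": ["p1", "p2"]}
offers = {"p1": "l1", "p2": "l1"}
cap_p, cap_l = {"p1": 1, "p2": 1}, {"l1": 1}
```

In this toy instance, the pair $(s_2, p_1)$ blocks the matching $\{(s_2, p_2)\}$ via condition 1(a), while the matching $\{(s_2, p_1)\}$ admits no blocking pair.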
\subsection{Experimental Results} \label{subsect:experimental-results} \subsubsection{Randomly-generated Datasets.} All the {\scriptsize SPA-P} instances we randomly generated involved $n_1$ students ($n_1$ is henceforth referred to as the size of the instance), $0.5n_1$ projects, $0.2n_1$ lecturers and $1.1n_1$ total project capacity which was randomly distributed amongst the projects. The capacity for each lecturer $l_k$ was chosen randomly to lie between the highest capacity of the projects offered by $l_k$ and the sum of the capacities of the projects that $l_k$ offers.
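The parameter setup just described can be reproduced as follows (a sketch of our generator; the students' preference lists, which ranged over $2$ to $5$ projects per student depending on the experiment, are omitted):

```python
import random

def random_spa_p_parameters(n1, seed=0):
    """n1 students, 0.5*n1 projects, 0.2*n1 lecturers, total project
    capacity 1.1*n1 distributed randomly; each lecturer's capacity lies
    between the largest and the sum of the capacities of his projects."""
    rng = random.Random(seed)
    n2, n3 = n1 // 2, n1 // 5
    total = round(1.1 * n1)
    cap_p = [1] * n2                        # every project has capacity >= 1
    for _ in range(total - n2):             # spread the remaining capacity
        cap_p[rng.randrange(n2)] += 1
    # each lecturer offers at least one project; the rest are spread randomly
    offers = list(range(n3)) + [rng.randrange(n3) for _ in range(n2 - n3)]
    rng.shuffle(offers)                     # offers[j] = lecturer of project j
    cap_l = []
    for k in range(n3):
        caps = [cap_p[j] for j in range(n2) if offers[j] == k]
        cap_l.append(rng.randint(max(caps), sum(caps)))
    return cap_p, offers, cap_l
```

For instance, `random_spa_p_parameters(100)` yields $50$ project capacities summing to $110$ and $20$ lecturer capacities within the stated bounds.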
In the first experiment, we present results obtained from comparing the performance of the IP model, with and without the coalition constraints in place.
\paragraph{\textbf{Experiment 0.}} We increased the number of students $n_1$ while maintaining a ratio of projects, lecturers, project capacities and lecturer capacities as described above. For various values of $n_1 \; (100 \leq n_1 \leq 1000)$ in increments of $100$, we created $100$ randomly-generated instances. Each student's preference list contained a minimum of $2$ and a maximum of $5$ projects. With respect to each value of $n_1$, we obtained the average time taken for the IP solver to output a solution, both with and without the coalition constraints being enforced. The results, displayed in Table~\ref{table:ip-time}, show that the IP solver outputs a solution significantly faster when the coalition constraints are removed than when they are enforced.
In the remaining experiments, we thus remove the constraints that enforce the absence of a coalition in the solution. We are able to do this for the purposes of these experiments because the largest size of a stable matching is equal to the largest size of a matching that potentially admits a coalition but admits no blocking pair\footnote{This holds because the number of students assigned to each project and lecturer in the matching remains the same even after the students involved in such coalition permute their assigned projects.}, and we were primarily concerned with measuring stable matching cardinalities. However, the absence of the coalition constraints should be borne in mind when interpreting the IP solver runtime data in what follows.
In the next two experiments, we discuss results obtained from running the five algorithms on randomly-generated datasets.
\paragraph{\textbf{Experiment 1.}} As in the previous experiment, we maintained the ratio of the number of students to projects, lecturers and total project capacity; as well as the length of the students' preference lists. For various values of $n_1 \; (100 \leq n_1 \leq 2500)$ in increments of $100$, we created $1000$ randomly-generated instances. With respect to each value of $n_1$, we obtained the average sizes of stable matchings constructed by the five algorithms run over the $1000$ instances. The result displayed in Fig.~\ref{experiment1} (and also in Fig.~\ref{experiment2}) shows the ratio of the average size of the stable matching produced by the approximation algorithms with respect to the maximum cardinality matching produced by the IP solver.
Figure~\ref{experiment1} shows that each of the approximation algorithms produces stable matchings of much higher cardinality over multiple runs, compared to running them only once. Also, the average time taken for the IP solver to find a maximum cardinality matching increases as the size of the instance increases, with a running time of less than one second for instance size $100$, increasing roughly linearly to $13$ seconds for instance size $2500$ (see Fig.~\ref{experiment1time}).
\paragraph{\textbf{Experiment 2.}} In this experiment, we varied the length of each student's preference list while maintaining a fixed number of students, projects, lecturers and total project capacity. For various values of $x$ ($2 \leq x \leq 10$), we generated $1000$ instances, each involving $1000$ students, with each student's preference list containing exactly $x$ projects. The result for all values of $x$ is displayed in Fig.~\ref{experiment2}. Figure~\ref{experiment2} shows that as we increase the preference list length, the stable matchings produced by each of the approximation algorithms get close to having maximum cardinality. It also shows that with a preference list length greater than $5$, the $\frac{3}{2}$-approximation algorithm produces an optimal solution, even on a single run. Moreover, the average time taken for the IP solver to find a maximum matching increases as the length of the students' preference lists increases, with a running time of two seconds when each student's preference list is of length $2$, increasing roughly linearly to $17$ seconds when each student's preference list is of length $10$ (see Fig.~\ref{experiment2time}).
\subsubsection{Real Datasets.} The real datasets in this paper are based on actual student preference data and manufactured lecturer data from previous runs of student-project allocation processes at the School of Computing Science, University of Glasgow. Table \ref{table:realdatasets} shows the properties of the real datasets, where $n_1, n_2$ and $n_3$ denote the numbers of students, projects and lecturers respectively, and $l$ denotes the length of each student's preference list. For all these datasets, each project has a capacity of $1$. In the next experiment, we discuss how the lecturer preferences were generated. We also discuss the results obtained from running the five algorithms on the corresponding {\scriptsize SPA-P} instances.
\paragraph{\textbf{Experiment 3.}} We derived the lecturer preference data from the real datasets as follows. For each lecturer $l_k$, and for each project $p_j$ offered by $l_k$, we obtained the number $a_j$ of students that find $p_j$ acceptable. Next, we generated a strict preference list for $l_k$ by arranging $l_k$'s proposed projects in (i) a random manner, (ii) ascending order of $a_j$, and (iii) descending order of $a_j$, where (ii) and (iii) are taken over all projects that $l_k$ offers. Table~\ref{table:realdatasets} shows the sizes of the stable matchings obtained from the five algorithms, where $A, B, C, D$ and $E$ denote the solutions obtained from the IP model, $100$ runs of the $\frac{3}{2}$-approximation algorithm, a single run of the $\frac{3}{2}$-approximation algorithm, $100$ runs of the $2$-approximation algorithm, and a single run of the $2$-approximation algorithm, respectively. The results are essentially consistent with the findings in the previous experiments, that is, the $\frac{3}{2}$-approximation algorithm produces stable matchings whose sizes are close to optimal.
\begin{table}[H] \caption{Results for Experiment 0. Average time (in seconds) for the IP solver to output a solution, both with and without the coalition constraints being enforced.} \centering \setlength{\tabcolsep}{0.2em}
\label{table:ip-time}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline
Size of instance & $100$ & $200$ & $300$ & $400$ & $500$ & $600$ & $700$ & $800$ & $900$ & $1000$\\
\hline
Av.\ time without coalition & $0.12$ & $0.27$ & $0.46$ & $0.69$ & $0.89$ & $1.17$ & $1.50$ & $1.86$ & $2.20$ & $2.61$\\ \hline Av.\ time with coalition & $0.71$ & $2.43$ & $4.84$ & $9.15$ & $13.15$ & $19.34$ & $28.36$ & $38.18$ & $48.48$ & $63.50$\\ \end{tabular} \end{table}
\begin{figure}
\caption{Result for Experiment 1.}
\label{experiment1}
\label{experiment1time}
\label{fig1}
\end{figure}
\begin{figure}
\caption{Result for Experiment 2.}
\label{experiment2}
\label{experiment2time}
\label{fig2}
\end{figure}
\begin{table}[H] \caption{Properties of the real datasets and results for Experiment 3.}
\setlength{\tabcolsep}{0.4em}
\label{table:realdatasets}
\begin{tabular}{c|c|c|c|c||c|c|c|c|c||c|c|c|c|c||c|c|c|c|c} \hline
\multicolumn{5}{c|}{}& \multicolumn{5}{c||}{Random} & \multicolumn{5}{c||}{Most popular} & \multicolumn{5}{c}{Least popular}\\ \hline
Year & $n_1$ & $n_2$ & $n_3$ & $l$ & $A$ & $B$ & $C$ & $D$ & $E$ & $A$ & $B$ & $C$ & $D$ & $E$ & $A$ & $B$ & $C$ & $D$ & $E$\\
\hline \hline
2014 & $55$ & $149$ & $38$ & $6$ & $55$ & $55$ & $55$ & $54$ & $53$ & $55$ & $55$ & $55$ & $54$ & $50$ & $55$ & $55$ & $55$ & $54$ & $52$\\ \hline 2015 & $76$ & $197$ & $46$ & $6$ & $76$ & $76$ & $76$ & $76$ & $72$ & $76$ & $76$ & $76$ & $76$ & $72$ & $76$ & $76$ & $76$ & $76$ & $75$\\ \hline 2016 & $92$ & $214$ & $44$ & $6$ & $84$ & $82$ & $83$ & $77$ & $75$ & $85$ & $85$ & $83$ & $79$ & $76$ & $82$ & $80$ & $77$ & $76$ & $74$\\ \hline 2017 & $90$ & $289$ & $59$ & $4$ & $89$ & $87$ & $85$ & $80$ & $76$ & $90$ & $89$ & $86$ & $81$ & $79$ & $88$ & $85$ & $84$ & $80$ & $77$\\ \end{tabular} \end{table}
\subsection{Discussions and Concluding Remarks} The results presented in this section suggest that even as we increase the number of students, projects and lecturers, and the length of the students' preference lists, each of the approximation algorithms finds stable matchings that are close to having maximum cardinality, well inside their worst-case approximation factors. Perhaps most interesting is the $\frac{3}{2}$-approximation algorithm, which finds stable matchings that are very close in size to optimal, even on a single run. These results also hold for the instances derived from real datasets.
We remark that when we removed the coalition constraints, we were able to run the IP model on an instance size of $10000$, with the solver returning a maximum matching in an average time of $100$ seconds, over $100$ randomly-generated instances. This shows that the IP model (without enforcing the coalition constraints) can be run on {\scriptsize SPA-P} instances that appear in practice, to find maximum cardinality matchings that admit no blocking pair. Coalitions can then be eliminated in polynomial time by repeatedly constructing an \emph{envy graph}, similar to the one described in \cite[p.290]{Man13}, finding a directed cycle and letting the students in the cycle swap projects.
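This post-processing step admits a straightforward implementation. The sketch below (our own, with hypothetical data; preferences as ranked lists) repeatedly finds an envy cycle by depth-first search and rotates the projects along it; each rotation strictly improves every student on the cycle, so the loop terminates:

```python
def find_cycle(adj):
    """Return the vertices of one directed cycle in adj, or None (DFS)."""
    color = {v: 0 for v in adj}           # 0 = new, 1 = on stack, 2 = done
    stack = []
    def dfs(v):
        color[v] = 1
        stack.append(v)
        for t in adj[v]:
            if color[t] == 1:
                return stack[stack.index(t):]   # back edge closes a cycle
            if color[t] == 0:
                c = dfs(t)
                if c is not None:
                    return c
        color[v] = 2
        stack.pop()
        return None
    for v in adj:
        if color[v] == 0:
            c = dfs(v)
            if c is not None:
                return c
    return None

def eliminate_coalitions(matching, prefs):
    """Rotate projects along envy cycles until none remains. Project and
    lecturer loads are unchanged: a rotation only permutes who fills the
    occupied places, so no blocking pair is introduced."""
    def rank(s, p):
        return prefs[s].index(p)
    while True:
        adj = {s: [t for t in matching if t != s and matching[t] in prefs[s]
                   and rank(s, matching[t]) < rank(s, matching[s])]
               for s in matching}
        cycle = find_cycle(adj)
        if cycle is None:
            return matching
        moved = [matching[s] for s in cycle]
        for i, s in enumerate(cycle):     # each student takes the next one's project
            matching[s] = moved[(i + 1) % len(cycle)]

# Hypothetical example: s1 and s2 each prefer the other's project.
prefs = {"s1": ["p2", "p1"], "s2": ["p1", "p2"], "s3": ["p3"]}
M = eliminate_coalitions({"s1": "p1", "s2": "p2", "s3": "p3"}, prefs)
```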
\end{document}
\begin{document}
\title{The extent of saturation of induced ideals}
\begin{abstract}
We construct a model with a saturated ideal ${I}$ over $\mathcal{P}_{\kappa}\lambda$ and study the extent of saturation of $I$. \end{abstract} \section{Introduction} The existence of a saturated ideal on a successor cardinal is a kind of generic large cardinal axiom. The first model with a saturated ideal on $\aleph_1$ was constructed by Kunen~\cite{MR495118}. He established \begin{thm}[Kunen]\label{kunen}
Suppose that $j$ is a huge embedding with critical point $\kappa$. Then there is a poset $P$ such that $P \ast \dot{S}(\kappa,j(\kappa))$ forces that $\aleph_1$ carries a saturated ideal. \end{thm} See Section 2 for the definitions of a saturated ideal and its strengthenings below.
$S(\kappa,j(\kappa))$ denotes the Silver collapse. The poset $P$, called the universal collapse, is useful in constructing a model with a saturated ideal because of its nice absorption property. Indeed, using the method of universal collapses, each of the following was shown to be consistent: \begin{enumerate}
\item (Laver~\cite{MR673792}) $\aleph_1$ carries a strongly saturated ideal.
\item (Foreman--Magidor--Shelah~\cite{MR942519}) $\aleph_1$ carries a layered ideal.
\item (Foreman--Laver~\cite{MR925267}) $\aleph_1$ carries a centered ideal. \end{enumerate} Kunen's proof has been improved in two ways. One is due to Magidor. He used a sequence of local master conditions, while Kunen used a single master condition. Incorporating this improvement, Foreman--Magidor--Shelah proved (2), only assuming the existence of an almost-huge cardinal with target Mahlo, rather than a huge cardinal. The improvement enables us to use the Levy collapse instead of the Silver collapse, and weaken the assumption of Theorem \ref{kunen} to the existence of an almost-huge cardinal. The other improvement is due to Shioya~\cite{shioya}. He pointed out that the diagonal product of Silver collapses has a nice absorption property and works as $P$ in Theorem \ref{kunen}.
In this paper, we construct a model with a saturated ideal on the successor of a given regular cardinal, combining the two improvements. We also study the extent of saturation of our ideal. We will show that
\begin{thm}\label{maintheorem} Suppose that $j$ is an almost-huge embedding with critical point $\kappa$ and $\mu < \kappa \leq \lambda < j(\kappa)$ are regular cardinals. Then $P(\mu,\kappa) \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces that there is a saturated ideal ${I}$ over $\mathcal{P}_\kappa\lambda$ with the following properties:
\begin{enumerate}
\item ${I}$ is $(\lambda^{+},\lambda^{+},<\mu)$-saturated.
\item $I$ is not $(\lambda^{+},\mu,\mu)$-saturated. In particular, $I$ is not strongly saturated.
\item $I$ is layered if and only if $j(\kappa)$ is Mahlo in $V$.
\item $I$ is not centered. In particular, $I$ is not strongly layered. \end{enumerate} \end{thm} Here, $P(\mu,\kappa)$ is the diagonal product of Levy collapses. See Section 4 for the definition of $P(\mu,\kappa)$. Note that an ideal over $\kappa$ can be seen as an ideal over $\mathcal{P}_{\kappa}\kappa$.
Modifying our proof of Theorem~\ref{maintheorem} we can show that \begin{thm} Let $I$ be the saturated ideal on $\aleph_1$ in the model of Theorem \ref{kunen}. Then the following hold: \begin{enumerate}
\item $I$ is $(\aleph_2,\aleph_2,<\aleph_0)$-saturated.
\item $I$ is not $(\aleph_2,\aleph_0,\aleph_0)$-saturated. In particular, $I$ is not strongly saturated.
\item (Foreman--Magidor--Shelah~\cite{MR942519}) $I$ is layered.
\item (Foreman--Laver~\cite{MR925267}) $I$ is not centered. \end{enumerate} \end{thm} Foreman--Laver~\cite{MR925267} claimed (4) without proof.
The structure of this paper is as follows. In Section 2, we recall basic facts about forcing projections and some saturation properties. In Section 3, we introduce two properties of projections, being continuous and being $(\lambda,\lambda,<\mu)$-nice. We also show how these properties are used in studying the saturation properties of quotient forcings. In Section 4, we introduce the diagonal product of Levy collapses $P(\mu,\kappa)$ and study the saturation properties of $P(\mu,\kappa)$ and of Levy collapses. In Section 5, we give the proof of Theorem \ref{maintheorem}.
\section{Preliminaries} In this section, we recall basic facts about forcing and saturated ideals. We use~\cite{MR1994835} as a reference for set theory in general. For more on the topic of saturated ideals, we refer to~\cite{MR2768692}.
Our notation is standard. We use $\kappa,\lambda$ to denote regular cardinals unless otherwise stated. We also use $\mu$ to denote a cardinal, possibly finite, unless otherwise stated. For $\kappa < \lambda$, $E^{\lambda}_\kappa$, $E^{\lambda}_{>\kappa}$ and $E^{\lambda}_{\leq\kappa}$ denote the sets of all ordinals below $\lambda$ of cofinality $\kappa$, $>\kappa$ and $\leq\kappa$, respectively. We also write $[\kappa,\lambda) = \{\xi \mid \kappa \leq \xi < \lambda\}$. By $\mathrm{Reg}$, we mean the class of regular cardinals.
Throughout this paper, we identify a poset $P$ with its separative quotient. Thus, $p \leq q\leftrightarrow \forall r \leq p(r {\parallel} q) \leftrightarrow p \Vdash q \in \dot{G}$, where $\dot{G}$ is the canonical name of the $(V,P)$-generic filter. A projection $\pi:Q \to P$ is an order-preserving mapping such that $\pi(1_Q) = 1_{P}$ and, whenever $q \leq_P \pi(p)$, there is an $r \leq_Q p$ with $\pi(r) \leq_P q$. We say that $P$ is a complete suborder of $Q$, denoted by $P \lessdot Q$, if the identity mapping $i:P \to Q$ is a complete embedding. Whenever a projection $\pi:Q \to P$ is given, for every dense $D$ in $P$, $\pi^{-1}D$ is also dense in $Q$. It follows that $Q$ forces that $\pi``\dot{H}$ generates a $(V,P)$-generic filter, where $\dot{H}$ is the canonical name of the $(V,Q)$-generic filter. The quotient forcing is defined by $P \Vdash Q / \dot{G} = \{q \in Q\mid \pi(q) \in \dot{G}\}$, ordered by $\leq_Q$. In addition, we can define a dense embedding $\tau: Q \to P \ast Q / \dot{G}$ by $\tau(q) = \langle{\pi(q),\hat{q}}\rangle$, where $\hat{q}$ is a $P$-name such that $P \Vdash \pi(q) \in \dot{G} \to \hat{q} = q$ and $\pi(q) \not \in \dot{G} \to \hat{q} = 1$. Thus, $Q \simeq P \ast Q/\dot{G}$. The completion of $P$ is a complete Boolean algebra $B(P)$ such that $P$ is a dense suborder of $B(P)$; it is unique up to isomorphism.
Whenever a projection $\pi:Q \to P$ is given, there is a complete embedding $e:P \to \mathcal{B}(Q)$ defined by $e(p) = \sum\{q\mid \pi(q) \leq p\}$. It is easy to see that $\pi(e(p)) = p$ and $e(\pi(q)) \geq q$. First, we verify some basic properties of $e$ and $\pi$ that will be used in Section 3.
\begin{lem}
If $e:P \to Q$ is a complete embedding between complete Boolean algebras, then the following hold: \begin{enumerate}
\item For every $A \subseteq P$, $e(\prod A) = \prod e``A$.
\item If $e$ is defined by a projection $\pi:Q \to P$, $\pi(e(p) \cdot q) = p\cdot \pi(q)$. \end{enumerate} \end{lem}
\begin{proof}
(1) It is easy to see that $e(\prod A) \leq \prod e``A$. Let us see $\prod e``A \leq e(\prod A)$. By separativity, it suffices to show that $\forall b \leq \prod e``A(b \cdot e(\prod A) \not= 0)$. Let $c \in P$ be a reduct of $b$, that is, $\forall d \leq c(e(d)\cdot b \not= 0)$. For every $d \leq c$ and $a \in A$, $e(d) \cdot e(a)\geq e(d) \cdot b \not = 0$. Thus, $d \cdot a \not= 0$ for all $d \leq c$ in $P$, and hence $c \leq a$. Therefore, $c \leq \prod A$ and $b \cdot e(\prod A) \geq b \cdot e(c) \not= 0$.
(2) Observe that $\pi(e(p) \cdot q) \leq \pi(e(p)) \cdot \pi(q) = p \cdot \pi(q)$. To show $p \cdot \pi(q) \leq \pi(e(p) \cdot q)$, we check that $\forall r \leq p \cdot \pi(q)(r \cdot \pi(e(p) \cdot q) \not= 0)$. For any $r \leq p \cdot \pi(q)$, there is an $s \leq q$ with $\pi(s) \leq r$. Since $\pi(s) \leq r \leq p$, we have $s \leq \sum\{x \mid \pi(x) \leq p\} = e(p)$, and hence $s = q \cdot s \leq q \cdot e(p)$. Therefore $0 \not= \pi(s) \leq \pi(q \cdot e(p)) \cdot r$. \end{proof}
Next, we define the saturation properties that we will deal with in this paper. For cardinals $\mu\leq \kappa \leq \lambda$, we say that $P$ has the $(\lambda,\kappa,<\mu)$-c.c. if, for every $X \in [P]^{\lambda}$, there is a $Y \in [X]^{\kappa}$ such that every $Z \in [Y]^{<\mu}$ has a lower bound. By the $(\lambda,\kappa,\mu)$-c.c., we mean the $(\lambda,\kappa,<\mu^{+})$-c.c. Of course, the $\lambda$-c.c. and $\lambda$-Knasterness are the same as the $(\lambda,2,2)$-c.c. and the $(\lambda,\lambda,2)$-c.c., respectively.
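These chain conditions are monotone in their parameters; the following routine observation (a sketch, not stated explicitly above) records this:

```latex
If $\mu' \leq \mu$ and $\kappa' \leq \kappa$, then the $(\lambda,\kappa,<\mu)$-c.c.\
implies the $(\lambda,\kappa',<\mu')$-c.c. Indeed, given $X \in [P]^{\lambda}$,
take $Y \in [X]^{\kappa}$ witnessing the $(\lambda,\kappa,<\mu)$-c.c.\ and any
$Y' \in [Y]^{\kappa'}$; then every $Z \in [Y']^{<\mu'} \subseteq [Y]^{<\mu}$
has a lower bound. In particular, the $(\lambda,\lambda,2)$-c.c.\ implies the
$\lambda$-c.c.
```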
For a stationary subset $S \subseteq \lambda$, we say that $P$ is $S$-layered if there is an $\subseteq$-increasing sequence $\langle P_\delta \mid \delta < \lambda \rangle$ with the following properties:
\begin{enumerate}
\item $P = \bigcup_{\delta < \lambda}P_\delta$.
\item $P_{\delta} \lessdot P$ and $|P_\delta| < \lambda$ for all $\delta < \lambda$.
\item There is a club $C \subseteq \lambda$ such that $\forall \delta \in S \cap C(P_\delta = \bigcup_{\zeta < \delta}P_{\zeta})$.
\end{enumerate}
We call such a sequence an $S$-layering of $P$. For a later purpose, we introduce the notion of filtration. We say that $\langle P_{\delta} \mid \delta < \lambda \rangle$ is a filtration of $P$ if it is a $\subseteq$-increasing continuous sequence with $P = \bigcup_{\delta < \lambda}P_\delta$ and $|P_{\delta}| < \lambda$ for all $\delta < \lambda$. \begin{lem}\label{layeredchar}
The following are equivalent.
\begin{enumerate}
\item $P$ is $S$-layered.
\item There are a filtration $\langle P_{\delta} \mid \delta < \lambda \rangle$ and a club $C \subseteq \lambda$ such that $P_\delta \lessdot P$ for all $\delta \in S \cap C$.
\item For any filtration $\langle P_{\delta} \mid \delta < \lambda \rangle$, there is a club $C \subseteq \lambda$ such that $P_\delta \lessdot P$ for all $\delta \in S \cap C$.
\end{enumerate} \end{lem} \begin{proof}
In \cite{MR3911105}, it is shown that (2) and (3) are equivalent. We check that (1) and (2) are equivalent. First, we assume (1). Let $\langle P_\alpha \mid \alpha < \lambda \rangle$ be an $S$-layering of $P$. It is easy to see that $\langle \bigcup_{\beta < \alpha}P_{\beta} \mid \alpha < \lambda \rangle$ is a filtration witnessing (2).
Let us see the converse direction. Let $\langle Q_{\alpha} \mid \alpha < \lambda \rangle$ be a filtration of $P$ and $C\subseteq \lambda$ be a club such that $Q_{\alpha} \lessdot P$ for all $\alpha \in S \cap C$. Define $P_{\alpha} = Q_{\min (S\cap C \setminus \alpha)}$ for all $\alpha < \lambda$. It is easy to see that $\langle P_{\alpha} \mid \alpha < \lambda \rangle$ is an $S$-layering of $P$. \end{proof}
Shelah~\cite{MR850051} showed that $S$-layeredness implies the $\lambda$-c.c. Moreover, Cox~\cite{MR3911105} showed that $S$-layeredness implies $\lambda$-Knasterness.
We say that $P$ is $\lambda$-centered if $P$ is a union of $\lambda$-many centered subsets of $P$. Here, a centered subset is an $X \subseteq P$ such that every $Z \in [X]^{<\omega}$ has a lower bound. We call such a family of centered sets a centering of $P$. It is easy to see that every $\lambda$-centered poset has the $(\lambda^{+},\lambda^{+},<\omega)$-c.c., which in turn implies the $\lambda^{+}$-c.c. These properties are preserved by projections. Indeed, \begin{lem}
If $\pi:Q \to P$ is a projection, then the following hold. \begin{enumerate}
\item If $Q$ has the $(\lambda,\lambda,<\mu)$-c.c., then so does $P$.
\item If $Q$ is $S$-layered for some stationary $S \subseteq \lambda$, then so is $P$.
\item If $Q$ is $\lambda$-centered, then so is $P$. \end{enumerate} \end{lem} It is easy to see that the $(\lambda,\lambda,<\mu)$-c.c. and $\lambda$-centeredness are preserved under taking the completion. We need to be careful in the case of layeredness, but there is no harm in the current definition because, in this paper, the cardinalities remain the same after taking completions.
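For instance, item (1) of the lemma above can be verified as follows (a sketch; the argument is standard):

```latex
Given $X = \{p_\alpha \mid \alpha < \lambda\} \in [P]^{\lambda}$, use the
projection to choose $q_\alpha \in Q$ with $\pi(q_\alpha) \leq p_\alpha$
(possible since $\pi(1_Q) = 1_P$). By the $(\lambda,\lambda,<\mu)$-c.c.\ of
$Q$, there is a $K \in [\lambda]^{\lambda}$ such that every
$\{q_\alpha \mid \alpha \in Z\}$ with $Z \in [K]^{<\mu}$ has a lower bound
$q$; since $\pi$ is order-preserving, $\pi(q)$ is then a lower bound of
$\{p_\alpha \mid \alpha \in Z\}$.
```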
In this paper, by an ideal, we mean a fine and normal ideal. For an ideal $I$ over $\mathcal{P}_{\kappa}\lambda$, $\mathcal{P}(\mathcal{P}_{\kappa}\lambda) / I$ denotes the poset $\mathcal{P}(\mathcal{P}_{\kappa}\lambda) \setminus I$ ordered by $A \leq B\leftrightarrow A \setminus B \in I$. We say that $I$ is $(\lambda',\kappa',<\mu')$-saturated if $\mathcal{P}(\mathcal{P}_{\kappa}\lambda) / I$ has the $(\lambda',\kappa',<\mu')$-c.c. We simply say that $I$ is saturated if $I$ is $(\lambda^{+},2,2)$-saturated.
Similarly, we say that $I$ is strongly saturated, layered and centered if $\mathcal{P}(\mathcal{P}_{\kappa}\lambda) / I$ is $(\lambda^{+},\lambda^{+},<\lambda)$-saturated, $S$-layered for some stationary subset $S \subseteq E^{\lambda^{+}}_{\lambda}$ and $\lambda$-centered, respectively. The implications between these properties are as follows:
\begin{center}
\begin{tikzpicture}
\node at (0,0) {{Dense}};
\draw[->] (0,-0.3) to (-2,-0.5);
\draw[->] (0,-0.3) to (0,-0.8);
\draw[->] (0,-0.3) to (2,-0.8);
\node[anchor = east] at (-2.2,-0.5) {Strongly layered};
\node[anchor = east] at (-2.2,-2) {Layered};
\node at (0,-1) {Centered};
\node[anchor = west] at (2.2,-1) {Strongly saturated};
\draw[->] (0,-1.2) to (0,-1.8);
\draw[->] (2,-1.2) to (1,-1.8);
\node at (0,-2) {$(\lambda^{+},\lambda^{+},2)$-saturated};
\draw[->] (0,-2.2) to (0,-2.8);
\node at (0,-3) {Saturated};
\draw[->] (-2.8,-0.8) to (-2.8,-1.7);
\draw[->] (-2,-2) -- node[auto = right]{\tiny{Cox~\cite{MR3911105}}} (-1.7,-2);
\draw[->] (-2.8,-0.8) --node[pos = 0.9, auto = right]{\tiny{Shelah~\cite{MR850051}}} (-1,-1);
\end{tikzpicture} \end{center} Here, $I$ is dense and strongly layered if $\mathcal{P}(\mathcal{P}_{\kappa}\lambda)/I$ has a dense subset of size $\lambda$ and $\mathcal{P}(\mathcal{P}_{\kappa}\lambda)/I$ is $E^{\lambda^{+}}_{\lambda}$-layered, respectively.
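As an illustration, the arrow from dense to centered can be checked directly (a routine verification, sketched under the definitions above):

```latex
If $D \subseteq \mathcal{P}(\mathcal{P}_{\kappa}\lambda)/I$ is dense with
$|D| = \lambda$, then for each $d \in D$ the cone
$C_d = \{A \mid A \geq d\}$ is centered, since any finitely many elements of
$C_d$ have the common lower bound $d$. Density gives
$\mathcal{P}(\mathcal{P}_{\kappa}\lambda)/I = \bigcup_{d \in D} C_d$, so the
quotient is $\lambda$-centered.
```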
\section{Continuity of projections} Our study of ideals will be reduced to that of the quotient forcing induced by projections. In the first half of this section, we give sufficient conditions for the quotient to have the desired properties. \begin{defi}
Suppose $\pi:Q \to P$ is a projection between complete Boolean algebras. We say that $\pi$ is $<\mu$-continuous if $\pi(\prod Z) = \prod \pi``Z$ for all $Z \in [Q]^{<\mu}$ with $\prod^{Q}Z\not= 0$.
We also say that $\pi$ is continuous if $\pi$ is $<\mu$-continuous for all $\mu$. \end{defi} For a projection $\pi:Q \to P$ between posets, we say that $\pi$ is $<\mu$-continuous if the lifting $\pi:\mathcal{B}(Q) \to \mathcal{B}(P)$ is $<\mu$-continuous. For the following lemma, recall that a Boolean algebra $P$ is $(<\mu,\lambda)$-distributive if $P$ adds no new sequences of length $<\mu$ to $\lambda$. \begin{lem}\label{ccquotient}
Suppose that $P$ is $(<\mu,\lambda)$-distributive, $\pi:Q \to P$ is a $<\mu$-continuous projection between complete Boolean algebras, and $Q$ has the $(\lambda,\lambda,<\mu)$-c.c. Then $P \Vdash Q/ \dot{G}$ has the $(\lambda,\lambda,<\mu)$-c.c. \end{lem} \begin{proof}
Let $p$ and $\{\dot{q}_{\alpha} \mid \alpha < \lambda \}$ be arbitrary with $p \Vdash \dot{q}_{\alpha} \in Q / \dot{G}$. For each $\alpha$, we can take $r_{\alpha}$ such that $\pi(r_\alpha) \leq p$ and $\pi(r_\alpha) \Vdash r_{\alpha} \leq \dot{q}_{\alpha}$ in $Q / \dot{G}$. Since $Q$ has the $(\lambda,\lambda,<\mu)$-c.c., there is a $K \in [\lambda]^{\lambda}$ such that $\forall Z \in [K]^{<\mu}(\prod_{\alpha \in Z} r_{\alpha} \not= 0)$. Let $b = || |\{\alpha \in K \mid \pi(r_{\alpha}) \in \dot{G}\}| = \lambda ||$. Since $P$ has the $\lambda$-c.c., $b \not=0$. Let $\dot{K}$ be a $P$-name for $\{\alpha \in K \mid \pi(r_{\alpha}) \in \dot{G}\}$.
We claim that $b$ (note that $b \leq p$) forces that, for every $Z \in [\dot{K}]^{<\mu}$, the set $\{\dot{q}_{\alpha} \mid \alpha \in Z\}$ has a lower bound in $Q / \dot{G}$. By $(<\mu,\lambda)$-distributivity, if $q \leq b$ forces $\dot{Z} \in [\dot{K}]^{<\mu}$ for some $\dot{Z}$, then we may assume that $q \Vdash \dot{Z} = Z$ for some $Z \in [K]^{<\mu}$. For each $\alpha \in Z$, we have $q \Vdash \pi(r_{\alpha}) \in \dot{G}$ and thus $q \leq \pi(r_{\alpha})$. Because $q \leq \prod_{\alpha \in Z} \pi(r_{\alpha}) = \pi(\prod_{\alpha \in Z}r_{\alpha})$, $q$ forces $\prod_{\alpha \in Z}r_{\alpha} \in Q / \dot{G}$. $q$ also forces $\prod_{\alpha \in Z}r_{\alpha} \leq r_{\alpha} \leq \dot{q}_{\alpha}$ for each $\alpha \in Z$. \end{proof}
Next, we consider the case of layeredness. \begin{lem}\label{layeredquotient}
Suppose that $Q$ is $S$-layered for some stationary subset $S \subseteq \lambda$ and $\pi:Q \to P$ is a $2$-continuous projection. Then $P \Vdash Q / \dot{G}$ is $S$-layered. \end{lem} \begin{proof}
We may assume that $P$ and $Q$ are Boolean algebras. Remark that they need not be complete. Let $\langle \mathcal{B}_\delta \mid \delta < \lambda \rangle$ be a filtration of $Q$ such that each $\mathcal{B}_\delta$ is a Boolean subalgebra of $Q$. Because $P$ has the $\lambda$-c.c., $S$ remains stationary in the extension by $P$. It is enough to prove that $\mathcal{B}_\delta \lessdot Q$ implies $P \Vdash \mathcal{B}_\delta / \dot{G} \lessdot Q/\dot{G}$ for each $\delta$.
Let $D = \{q \in Q \mid \exists b \in \mathcal{B}_\delta(b \geq q$ and $b$ is a reduct of $q)\}$. $D$ is dense in $Q$: each $q \in Q$ has a reduct $b \in \mathcal{B}_\delta$, and it is easy to see that $b$ is a reduct of $q \cdot b$. Thus, $q \cdot b \in D$, and $q \cdot b$ extends $q$.
To show $P \Vdash \mathcal{B}_\delta / \dot{G} \lessdot Q/\dot{G}$, take arbitrary $p \in P$ and $q \in Q$ with $p \Vdash q \in Q / \dot{G}$. We may assume $q \in D$. Thus, $q$ has a reduct $b \geq q$. Because $p \leq \pi(q) \leq \pi(b)$, $p \Vdash b \in \mathcal{B}_\delta / \dot{G}$. It suffices to show that $p \Vdash \forall c \in \mathcal{B}_\delta / \dot{G}(c \leq b \to c\cdot q \in Q / \dot{G})$. For any $p' \leq p$ and $c \leq b$ with $p' \Vdash c \in \mathcal{B}_\delta / \dot{G}$, since $b$ is a reduct of $q$ and $\pi$ is $2$-continuous, we have $p' \leq \pi(c)\cdot \pi(q) = \pi(c \cdot q)$. Thus, $p' \Vdash c \cdot q \in Q / \dot{G}$. \end{proof} We get an analogous result for centeredness, although we do not use it in the proof of Theorem~\ref{maintheorem}. \begin{prop}\label{centeredquotient}
For a projection $\pi:Q \to P$, suppose that $\pi$ is $<\omega$-continuous and $P \Vdash Q$ is $\lambda$-centered. Then $P \Vdash Q / \dot{G}$ is $\lambda$-centered. \end{prop} \begin{proof}
We may assume that $P$ and $Q$ are Boolean algebras. Let $G$ be an arbitrary $(V,P)$-generic filter. We work in $V[G]$. Let $\langle F_{\xi} \mid \xi < \lambda \rangle$ be a centering of $Q$. It is enough to prove that if $p_{0},\ldots,p_{n-1} \in F_{\xi} \cap Q / G$ for some $\xi$, then $p_0 \cdot p_1 \cdots p_{n-1} \in Q / G$.
Note that $\pi(p_i) \in G$ for each $i$. Since $F_{\xi}$ is a centered subset, $p_0 \cdot p_1 \cdots p_{n-1} \not= 0$ in $Q$. The $<\omega$-continuity implies $\pi(p_0 \cdot p_1 \cdots p_{n-1})= \pi(p_0) \cdot \pi(p_1) \cdots \pi(p_{n-1}) \in G$, as desired. \end{proof}
The rest of this section is not used in the proof of Theorem \ref{maintheorem}, but is of independent interest. Refining Lemma \ref{ccquotient}, we characterize the $(\lambda,\lambda,<\mu)$-c.c. of the quotient forcing in terms of the following notion. \begin{defi}
For a projection $\pi:Q \to P$ between complete Boolean algebras, we say that $\pi$ is $(\lambda,\lambda,<\mu)$-nice if, for every $X \in [Q]^{\lambda}$, there is a $Y \in [Q]^{\lambda}$ with the following properties: \begin{itemize}
\item There is an injection $f:Y \to X$ such that $y \leq f(y)$ for all $y \in Y$.
\item $\prod Z \not= 0$ and $\pi(\prod Z) = \prod \pi``Z$ for all $Z \in [Y]^{<\mu}$. \end{itemize} \end{defi}
\begin{thm}\label{characterization}
Suppose that $P$ is $(<\mu,\lambda)$-distributive, $\pi:Q \to P$ is a projection between complete Boolean algebras, and $Q$ has the $(\lambda,\lambda,<\mu)$-c.c. Then the following are equivalent. \begin{enumerate}
\item $\pi$ is $(\lambda,\lambda,<\mu)$-nice.
\item $P \Vdash Q/ \dot{G}$ has the $(\lambda,\lambda,<\mu)$-c.c. \end{enumerate} \end{thm} \begin{proof}
The forward direction can be shown as in the proof of Lemma \ref{ccquotient}. We check the converse direction. Let $\{q_{\alpha}\mid \alpha < \lambda\} \subseteq Q$ be arbitrary. We let $b = || |\{\alpha < \lambda \mid \pi(q_{\alpha}) \in \dot{G}\}| = \lambda ||$ and $\dot{K}$ be a $P$-name for $\{\alpha < \lambda \mid \pi(q_{\alpha}) \in \dot{G}\}$. Since $P$ has the $\lambda$-c.c., $b \not= 0$.
By the definition of the quotient forcing, $b \Vdash \{q_{\alpha} \mid \alpha \in \dot{K}\} \subseteq Q/\dot{G}$. Because $P$ forces that $Q/\dot{G}$ has the $(\lambda,\lambda,<\mu)$-c.c. and $Q/\dot{G}$ is upward closed in $Q$, we can choose $\dot{K}'$ such that $b \Vdash \dot{K}' \in [\dot{K}]^{\lambda}$ and $\prod_{\alpha \in Z}q_\alpha\in Q / \dot{G}$ for all $Z \in [\dot{K}']^{<\mu}$.
By the $\lambda$-c.c. of $P$, $K = \{\alpha< \lambda \mid b \cdot ||\alpha \in \dot{K}'||\not=0\}$ is of size $\lambda$. Define $p_{\alpha} = b \cdot ||\alpha \in \dot{K}'||$ for each $\alpha \in K$. Since $P$, being the image of $Q$ under a projection, has the $(\lambda,\lambda,<\mu)$-c.c., there is a $K' \in [K]^{\lambda}$ with $\forall Z \in [K']^{<\mu} (\prod_{\alpha \in Z} p_{\alpha} \not = 0)$.
Observe that for every $Z \in [K']^{<\mu}$, $\prod_{\alpha \in Z} p_{\alpha}$ forces $\prod_{\alpha \in Z}q_{\alpha} \in Q / \dot{G}$, and thus, $\prod_{\alpha \in Z}p_\alpha = \prod_{\alpha \in Z} p_{\alpha} \cdot \pi(\prod_{\alpha \in Z}q_{\alpha})$.
Let $r_{\alpha} = q_{\alpha} \cdot e(p_{\alpha})$, where $e$ is the complete embedding induced by $\pi$. We claim that $\prod_{\alpha \in Z}\pi(r_{\alpha}) = \pi(\prod_{\alpha \in Z} r_{\alpha})$ for every $Z \in [K']^{<\mu}$. This follows from:
\begin{align*}
\textstyle{\prod_{\alpha \in Z} \pi(r_{\alpha})} &= \textstyle{\prod_{\alpha \in Z} p_{\alpha}} \\ &= \textstyle{\prod_{\alpha \in Z}p_{\alpha} \cdot \pi(\prod_{\alpha \in Z}q_{\alpha})} \\ &= \textstyle{\pi(\prod_{\alpha \in Z}q_{\alpha} \cdot \prod_{\alpha \in Z}e(p_{\alpha}))} \\ &= \textstyle{\pi(\prod_{\alpha \in Z}q_{\alpha} \cdot e(p_{\alpha}))} = \textstyle{\pi(\prod_{\alpha \in Z}r_\alpha)}.
\end{align*}
In particular, $\prod_{\alpha \in Z}r_{\alpha} \not= 0$ for each $Z \in [K']^{<\mu}$, since its image $\prod_{\alpha \in Z}p_{\alpha}$ is nonzero. Thus, $\{r_{\alpha} \mid \alpha \in K'\}$ witnesses that $\pi$ is $(\lambda,\lambda,<\mu)$-nice. \end{proof}
In particular, Knasterness of the quotient forcing can be characterized in terms of projections as follows. \begin{coro}\label{knasterchar}
Suppose that $\pi:Q \to P$ is a projection between complete Boolean algebras and $Q$ is $\lambda$-Knaster. Then the following are equivalent. \begin{enumerate}
\item $\pi$ is $(\lambda,\lambda,2)$-nice.
\item $P \Vdash Q / \dot{G}$ is $\lambda$-Knaster. \end{enumerate} \end{coro} We will show that Corollary \ref{knasterchar} is not vacuous, that is, (2) does not hold unconditionally. To see this, we use Todor\v{c}evi\'c's construction of a Suslin tree from a Cohen real. \begin{lem}[Todor\v{c}evi\'c]
There is a sequence $\langle{e_\alpha:\alpha \to \omega \mid \alpha < \omega_1}\rangle$ with the following properties: \begin{enumerate}
\item $\{\xi < \alpha\mid e_\alpha(\xi) \not= e_\beta(\xi)\}$ is finite for all $\alpha < \beta < \omega_1$.
\item $\{\xi < \alpha \mid e_\alpha(\xi) \leq n\}$ is finite for all $\alpha < \omega_1$ and $n < \omega$. \end{enumerate} \end{lem}
\begin{prop}
There is a projection $\pi:Q \to P$ between $\aleph_1$-Knaster posets such that $P \Vdash Q / \dot{G}$ is not $\aleph_1$-Knaster. In particular, $\pi$ is not $(\aleph_1,\aleph_1,2)$-nice. \end{prop} \begin{proof}
Let $C$ be Cohen forcing, that is, $C = {^{<\omega} \omega}$. Let $\dot{c}$ be a $C$-name such that $C \Vdash \dot{c} = \bigcup \dot{G}$. Todor\v{c}evi\'c showed that $C$ forces that the poset $\dot{T} = \{\dot{c} \circ e_\alpha \upharpoonright \beta \mid \beta \leq \alpha <\omega_1\}$, ordered by reverse inclusion, has the $\aleph_1$-c.c. and is not $\aleph_1$-Knaster. We refer to~\cite{MR2355670} for more details.
Let $P = C$, $Q = C \ast \dot{T}$ and $\pi:Q \to P$ be a natural projection. Of course, $P \Vdash Q / \dot{G} \simeq \dot{T}$ is not $\aleph_1$-Knaster.
It remains to show that $Q$ is $\aleph_1$-Knaster. Let $X = \{ \langle p_i,\dot{c} \circ e_{\alpha_i} \upharpoonright \beta_i\rangle \mid i < \omega_1 \}$ be arbitrary. Shrinking $X$, there are $K \in [\omega_1]^{\omega_1}$ and $p$ such that $p_i = p$ for all $i\in K$. For each $i \in K$, $a_i = \{\xi < \alpha_i \mid e_{\alpha_i}(\xi) \leq |p|\}$ is finite. The usual $\Delta$-system argument gives $K' \in [\omega_1]^{\omega_1}$ and $r$ such that $a_i \cap a_j = r$ for all $i<j$ in $K'$. Note that there are at most countably many functions of the form $e_\alpha \upharpoonright r$. Thus there is a $K'' \in [K']^{\omega_1}$ such that $e_{\alpha_i} \upharpoonright r = e_{\alpha_j} \upharpoonright r$ for all $i < j$ in $K''$. We claim that any two elements of $Y = \{\langle p_i,\dot{c} \circ e_{\alpha_i} \upharpoonright \beta_i\rangle \mid i \in K''\}$ are compatible.
Fix a pair $i < j$ in $K''$. For every $\xi$, if $e_{\alpha_i}(\xi), e_{\alpha_j}(\xi) < |p|$ then $\xi \in r$, which in turn implies $e_{\alpha_i}(\xi) = e_{\alpha_j}(\xi)$. This ensures that, for every $\xi$ with $e_{\alpha_i}(\xi) \not= e_{\alpha_j}(\xi)$, one of the following holds: \begin{itemize}
\item $e_{\alpha_i}(\xi),e_{\alpha_j}(\xi) \geq |p|$.
\item $e_{\alpha_i}(\xi) \geq |p|$ and $e_{\alpha_j}(\xi) < |p|$.
\item $e_{\alpha_j}(\xi) \geq |p|$ and $e_{\alpha_i}(\xi) < |p|$. \end{itemize}
Since $\Delta = \{\xi \mid e_{\alpha_{i}}(\xi) \not= e_{\alpha_{j}}(\xi)\}$ is finite, $m = \max ((e_{\alpha_i}``\Delta) \cup (e_{\alpha_{j}}``\Delta)) + 1$ is a natural number.
Define $q \in {^{m}\omega}$ by \begin{center}
$q(n) = \begin{cases}p(n) & n < |p|\\
p(e_{\alpha_j}(\xi)) & \text{there is a }\xi\text{ such that }n = e_{\alpha_i}(\xi)\text{ and }e_{\alpha_j}(\xi)<|p|\\
p(e_{\alpha_i}(\xi)) & \text{there is a }\xi\text{ such that }n = e_{\alpha_j}(\xi)\text{ and }e_{\alpha_i}(\xi)<|p|\\
0 & \text{otherwise}
\end{cases}$ \end{center}
It is easy to see that $\langle q,\dot{c} \circ e_{\alpha_k} \upharpoonright \beta_k\rangle$ is a common extension of $\langle p,\dot{c} \circ e_{\alpha_i} \upharpoonright \beta_i\rangle$ and $\langle p,\dot{c} \circ e_{\alpha_j} \upharpoonright \beta_j\rangle$, where $k \in \{i,j\}$ is such that $\beta_i,\beta_j \leq \beta_k$. \end{proof}
\section{Diagonal product of Levy collapses}
In this paper, we use a slight modification of the Levy collapse. First, we write $[\kappa,\lambda)_{\mu\text{-cl}}$ for the set of all $\mu$-closed cardinals in $[\kappa,\lambda)$. Here, a $\mu$-closed cardinal is a cardinal $\gamma$ with ${\gamma}^{<\mu} = \gamma$. For regular cardinals $\mu < \kappa$, our Levy collapse $\mathrm{Coll}(\mu,<\kappa)$ is the $<\mu$-support product $\prod_{\gamma \in [\mu^{+},\kappa)_{\mu\text{-cl}}}^{<\mu} {^{<\mu}\gamma}$. $\mathrm{Coll}(\mu,<\kappa)$ is $\mu$-directed closed. We remark that our Levy collapse $\mathrm{Coll}(\mu,<\kappa)$ is forcing equivalent to the usual one, and that $\mathrm{Coll}(\mu,<\kappa)$ has the $\kappa$-c.c. if $\kappa$ is inaccessible.
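To illustrate the definition in the simplest case (an illustration only, not used later): since $\gamma^{<\omega} = \gamma$ for every infinite cardinal $\gamma$, every infinite cardinal is $\omega$-closed, and so

```latex
\mathrm{Coll}(\omega,<\kappa)
  \;=\; \prod_{\gamma \in [\omega_{1},\kappa)_{\omega\text{-cl}}}^{<\omega}
        {}^{<\omega}\gamma,
```

the finite-support product of the collapses ${}^{<\omega}\gamma$ over all cardinals $\gamma \in [\omega_{1},\kappa)$.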
For regular cardinals $\mu < \kappa$, the diagonal product of Levy collapses is $P(\mu,\kappa) = \prod_{\alpha \in [\mu,\kappa) \cap \mathrm{Reg}}^{<\mu}\mathrm{Coll}(\alpha,<\kappa)$. It is easy to see that $P(\mu,\kappa)$ is $<\mu$-directed closed. \begin{lem}
If $\kappa$ is inaccessible, then $P(\mu,\kappa)$ has the $(\kappa,\kappa,<\mu)$-c.c. \end{lem} \begin{proof}
For any $X \in [P(\mu,\kappa)]^{\kappa}$, the usual $\Delta$-system argument gives $Y \in [X]^{\kappa}$ and $r$ with the following properties: \begin{itemize}
\item $\{\mathrm{supp}(p) \mid p \in Y\}$ is a $\Delta$-system with root $r$.
\item $r \subseteq \kappa$ is bounded by some regular cardinal $\eta < \kappa$. \end{itemize}
For each $\alpha \in r$ and $p \in Y$, we can see that $p(\alpha)$ is a partial function from $\alpha \times [\alpha,\kappa)_{\alpha^{+}\text{-cl}}$ to $\kappa$. Note that $|\bigcup_{\alpha \in r}\{\alpha\} \times \mathrm{dom}(p(\alpha))| < \eta$ for each $p \in Y$. Again, the usual $\Delta$-system argument gives $Y' \in [Y]^{\kappa}$, $r'$ and $q$ such that \begin{itemize}
\item $\{\bigcup_{\alpha \in r}\{\alpha\} \times \mathrm{dom}(p(\alpha)) \mid p \in Y'\}$ is a $\Delta$-system with root $r'$.
\item $q \in P(\mu,\kappa)$.
\item For all $p \in Y'$ and $\alpha \in r$, $p(\alpha) \upharpoonright \{\xi \mid \langle{\alpha,\xi}\rangle \in r'\} = q(\alpha)$. \end{itemize} It is easy to see that $Y'$ works. \end{proof} From this, $P(\mu,\kappa)$ forces $\mu^{+} = \kappa$ if $\kappa$ is inaccessible. \begin{lem}\label{diagonallayered} Suppose $\kappa$ is inaccessible. \begin{enumerate}
\item If $\kappa$ is Mahlo, then $P(\mu,\kappa)$ is $[\mu,\kappa) \cap \mathrm{Reg}$-layered.
\item If $\kappa$ is not Mahlo, then $P(\mu,\kappa)$ is not $S$-layered for all stationary subsets $S \subseteq \kappa$. \end{enumerate} \end{lem} To prove Lemma \ref{diagonallayered}, we need the following lemma. \begin{lem}\label{regularchar}Suppose $\kappa$ is inaccessible.
\begin{enumerate}
\item $P(\mu,\delta) \lessdot P(\mu,\kappa)$ for all $\delta < \kappa$.
\item There is a club $C \subseteq \kappa$ such that, for all $\delta \in C$, $\bigcup_{\eta < \delta} P(\mu,\eta) \lessdot P(\mu,\kappa)$ if and only if $\delta$ is regular.
\end{enumerate} \end{lem} \begin{proof} (1) is easy. Let us see $(2)$. It is easy to see $P(\mu,\delta) \supseteq \bigcup_{\eta < \delta}P(\mu,\eta)$.
Let $C = \{\delta < \kappa \mid \forall \eta < \delta(\eta^{<\eta} < \delta)$ and $\delta$ is a limit cardinal$\}$. $C$ is a club in $\kappa$. Note that $\sup [\alpha^{+},\delta)_{\alpha\text{-cl}} = \delta$ for each $\delta \in C$ and $\alpha < \delta$.
In the case that $\delta \in C$ is regular, $P(\mu,\delta) = \bigcup_{\eta < \delta}P(\mu,\eta) \lessdot P(\mu,\kappa)$ by (1). If $\delta \in C$ is singular, there is a regular cardinal $\alpha$ with $\mathrm{cf}(\delta) < \alpha < \delta$. Then $\sup[\alpha^{+},\delta)_{\alpha\text{-cl}} = \delta$. Let $\{\delta_i \mid i < \mathrm{cf}(\delta)\} \subseteq [\alpha^{+},\delta)_{\alpha\text{-cl}}$ be a sequence which converges to $\delta$. Define $p \in P(\mu,\delta)$ by: \begin{itemize}
\item $\mathrm{supp}(p) = \{\alpha\}$.
\item $p(\alpha) \in \mathrm{Coll}(\alpha,<\delta)$ is such that
\begin{itemize}
\item $\mathrm{dom}(p(\alpha)) = \{\delta_i \mid i < \mathrm{cf}(\delta)\}$, and
\item $p(\alpha)(\delta_i) = \begin{cases} \{\langle 0,\delta_{i-1}\rangle\} &i\text{ is a successor ordinal} \\ \{\langle 0,0\rangle\} &\text{otherwise} \end{cases}$
\end{itemize} \end{itemize} It is easy to see $p(\alpha) \in \mathrm{Coll}(\alpha,<\delta) \setminus \bigcup_{\eta < \delta}\mathrm{Coll}(\alpha,<\eta)$. In particular, $p$ does not have a reduct in $\bigcup_{\eta < \delta} P(\mu,\eta)$. \end{proof}
\begin{proof}[Proof of Lemma \ref{diagonallayered}]
Let $C$ be the club from Lemma~\ref{regularchar}. For (1), by Lemma \ref{regularchar}, $P(\mu,\kappa)$ is $[\mu,\kappa)\cap \mathrm{Reg}$-layered, as witnessed by $\langle P(\mu,\delta) \mid \delta < \kappa \rangle$.
For (2), by the assumption, there is a club $D \subseteq C$ such that every element of $D$ is singular. Define $Q_\delta = \bigcup_{\eta<\delta}P(\mu,\eta)$. Then $\langle Q_{\delta} \mid \delta < \kappa\rangle$ is a filtration of $P(\mu,\kappa)$. By Lemma \ref{singularcase}, $Q_\delta \not\mathrel{\lessdot} P(\mu,\kappa)$ for all $\delta \in D$. By Lemma \ref{layeredchar}, $P(\mu,\kappa)$ is not $S$-layered for any stationary subset $S \subseteq \kappa$. \end{proof}
The following lemma is contained in the proof of Lemma \ref{regularchar} (2), and is used in the proof of Claim \ref{layeredconclusion}. \begin{lem}\label{singularcase}
For inaccessible $\kappa$, let $C$ be the club from Lemma \ref{regularchar}(2). For every singular $\delta \in C$, there is a $p \in P(\mu,\delta) \setminus \bigcup_{\eta < \delta} P(\mu,\eta)$ with the following properties: \begin{itemize}
\item $\mathrm{supp}(p) \cap (\lambda + 1) = \emptyset$, and,
\item For every $q \in \bigcup_{\eta < \delta} P(\mu,\eta)$, there is an $r \in \bigcup_{\eta < \delta} P(\mu,\eta)$ such that $\mathrm{dom}(r) \cap (\lambda + 1) = \emptyset$, $r \perp p$ in $P(\mu,\delta)$ and $r \cdot q \in \bigcup_{\eta < \delta} P(\mu,\eta)$. \end{itemize} \end{lem} \begin{proof}
The condition $p$ which was defined in the proof of Lemma~\ref{regularchar} works. \end{proof}
The following property of Levy collapses is used in the proof of Claim \ref{centeredconclusion}. \begin{lem}\label{levycentered2}
For inaccessible $\lambda$ and regular $\kappa < \alpha < \lambda$, $\mathrm{Coll}(\kappa,<\lambda)$ forces that $\mathrm{Coll}^{V}(\alpha,<\lambda)$ is not $\kappa$-centered. \end{lem} \begin{proof}
We argue by contradiction: suppose that $\mathrm{Coll}(\kappa,<\lambda)$ forces that $\mathrm{Coll}^{V}(\alpha,<\lambda)$ is $\kappa$-centered. Let $\langle \dot{F}_{\xi} \mid \xi < \kappa \rangle$ be a $\mathrm{Coll}(\kappa,<\lambda)$-name for a centering. We may assume that each $\dot{F}_{\xi}$ is forced to be a filter, because $\prod X \in \mathrm{Coll}^{V}(\alpha,<\lambda)$ for every $X \subseteq \mathrm{Coll}^{V}(\alpha,<\lambda)$ that has a lower bound.
For each $\xi < \kappa$ and $q \in \mathrm{Coll}(\alpha,<\lambda)$, let $\rho(q,\xi)$ be the least cardinal $\delta$ such that there is a maximal anti-chain $\mathcal{A} \subseteq \mathrm{Coll}(\kappa,<\delta)$ such that every $p \in \mathcal{A}$ decides $q \in \dot{F}_{\xi}$. Let $D \subseteq \lambda$ be the club of closure points of the mapping $\delta \mapsto \sup(\{\rho(q,\xi) \mid \xi < \kappa \land q \in {\mathrm{Coll}(\alpha,<\delta)}\}\cup\{\delta^{<\delta}\})$.
Fix a $\delta \in D \cap E^{\lambda}_{\geq \kappa} \cap E^{\lambda}_{<\alpha}$. The following now hold.
\begin{itemize}
\item $|\mathrm{Coll}(\kappa,<\delta)| = \delta$, in particular, $\mathrm{Coll}(\kappa,<\delta) \Vdash (\delta^{+})^V \geq \kappa^{+}$.
\item $\mathrm{Coll}(\alpha,<\delta)$ has an anti-chain of size $\delta^{\mathrm{cf}(\delta)} \geq \delta^{+}$.
\end{itemize}
The first item follows from standard cardinal arithmetic. Let us define an anti-chain in $\mathrm{Coll}(\alpha,<\delta)$ of size $\delta^{\mathrm{cf}(\delta)}$. Note that we can choose a sequence $\langle\delta_{i} \mid i < \mathrm{cf}(\delta)\rangle \subseteq [\alpha^{+},\delta)_{\alpha\text{-cl}}$ which converges to $\delta$. For each $i < \mathrm{cf}(\delta)$, ${^{<\alpha}\delta_{i}}$ has an anti-chain $\{p^{i}_{\xi} \mid \xi < \delta_{i}\}$ of size $\delta_i$. For each $f \in \prod_{i < \mathrm{cf}(\delta)}\delta_{i}$, define $p_f \in \mathrm{Coll}(\alpha,<\delta)$ as follows: \begin{itemize}
\item $\mathrm{supp}(p_f) = \{\delta_i \mid i < \mathrm{cf}(\delta)\}$.
\item $p_f(\delta_i) = p_{f(i)}^i$. \end{itemize} It is easy to see that $\{p_f \mid f \in \prod_{i < \mathrm{cf}(\delta)}\delta_{i}\}$ is an anti-chain of size $\delta^{\mathrm{cf}(\delta)}$.
Let $G$ be an arbitrary $(V,\mathrm{Coll}(\kappa,<\lambda))$-generic filter. $G$ can be factored as $G = G_0 \times G_1$, where $G_0$ is $(V,\mathrm{Coll}(\kappa,<\delta))$-generic. Let us work in $V[G_0]$. Let $Q = (\bigcup_{\zeta < \delta}\mathrm{Coll}(\alpha,<\zeta))^{V}$ and let $F_\xi = \dot{F}_{\xi}^{G}$ in $V[G]$; note that $F_\xi \cap Q \in V[G_0]$ since $\delta \in D$. In particular, $Q$ has a centering $\langle {F}_\xi \cap Q \mid \xi < \kappa \rangle$ in $V[G_0]$. Define $H_\xi = \{q \in \mathrm{Coll}^V(\alpha,<\delta) \mid \forall \gamma \in \mathrm{supp}(q)(q \upharpoonright (\mathrm{supp}(q) \cap \gamma) \in F_{\xi} \cap Q)\}$. We claim that $\langle H_\xi \mid \xi < \kappa \rangle$ is a centering of $\mathrm{Coll}^V(\alpha,<\delta)$. It is easy to see that each $H_\xi$ is a filter. For each $q \in \mathrm{Coll}^V(\alpha,<\delta)$, in $V[G]$ there is a $\xi$ such that $q \in F_\xi$. For every $\gamma \in \mathrm{supp}(q)$, $q \upharpoonright (\mathrm{supp}(q) \cap \gamma) \in Q \cap F_{\xi}$. This already holds in $V[G_0]$, and thus $q \in H_{\xi}$ in $V[G_0]$.
We have shown that $\mathrm{Coll}^V(\alpha,<\delta)$ is $\kappa$-centered in $V[G_0]$, which in turn implies the $\kappa^{+}$-c.c. But $\mathrm{Coll}^V(\alpha,<\delta)$ has an anti-chain of size $(\delta^{\mathrm{cf}(\delta)})^V \geq \kappa^{+}$, as we have seen. This is a contradiction. \end{proof}
\begin{rema}\label{remarklevy}
For inaccessible $\lambda$ and regular $\alpha \leq \kappa$, $\mathrm{Coll}(\kappa,<\lambda)$ forces that $\mathrm{Coll}^{V}(\alpha,<\lambda)$ is $\kappa$-centered. \end{rema} \begin{proof}
We work in the extension by $\mathrm{Coll}(\kappa,<\lambda)$. For all $\gamma \in [\alpha^{+},\lambda)_{\alpha\text{-cl}}$, since $|({^{<\alpha}\gamma})^V| \leq \kappa$, $({^{<\alpha}\gamma})^{V}$ is $\kappa$-centered. By Lemma 4 in~\cite{MR925267}, it follows that $\prod_{\gamma \in [\alpha^{+},\lambda)_{\alpha\text{-cl}}}^{<\alpha}({^{<\alpha}\gamma})^{V}$ is $\kappa$-centered. In particular, $\mathrm{Coll}^{V}(\alpha,<\lambda)$ is $\kappa$-centered. \end{proof} By Lemma \ref{levycentered2} and Remark \ref{remarklevy}, there is no complete embedding from $\mathrm{Coll}(\alpha,<\lambda)$ into $\mathrm{Coll}(\kappa,<\lambda)$ for any $\alpha \in (\kappa,\lambda)\cap \mathrm{Reg}$. On the other hand, the diagonal product of Levy collapses has nice absorption properties, as follows. Lemma \ref{mainprojection} plays an important role in the proof of Theorem \ref{maintheorem}. \begin{lem}\label{mainprojection}
Suppose that $\mu < \kappa \leq \lambda < \nu$ are regular and $\kappa$ and $\nu$ are inaccessible. Then there is a projection $\pi:P(\mu,\nu) \to P(\mu,\kappa) \ast \dot{\mathrm{Coll}}(\lambda,<\nu)$ which is continuous. In addition, $\pi(p) = \langle p,\emptyset\rangle$ for all $p \in P(\mu,\kappa)$. \end{lem}
To prove this, we need the following lemma. \begin{lem}\label{levyprojection}
Suppose that $P$ has the $\kappa$-c.c. and $|P| \leq \kappa$. For inaccessible $\lambda > \kappa$, there is a projection $P \times \mathrm{Coll}(\kappa,<\lambda) \to P \ast \dot{\mathrm{Coll}}(\kappa,<\lambda)$ which is the identity on the first coordinate and continuous. \end{lem}
In the proof of Lemma \ref{levyprojection}, we use term forcing. For a poset $P$ and a $P$-name $\dot{Q}$ for a poset, the term forcing $T(P,\dot{Q})$ is a complete set of representatives of $\{\dot{q} \mid \Vdash \dot{q} \in \dot{Q}\}$ with respect to the canonical equivalence relation. $T(P,\dot{Q})$ is ordered by $\dot{q} \leq \dot{q}'\leftrightarrow \Vdash \dot{q} \leq \dot{q}'$. The following lemma is due to Laver. \begin{lem}[Laver]\label{termforcing}
${\mathrm{id}}:P \times T(P,\dot{Q}) \to P \ast \dot{Q}$ is a projection. \end{lem}
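The verification is short; we sketch it here (the argument is standard):

```latex
The identity is clearly order-preserving. Suppose
$\langle p', \dot{q}' \rangle \leq \langle p, \dot{q} \rangle$ in
$P \ast \dot{Q}$. Choose a representative $\dot{r} \in T(P,\dot{Q})$ such
that $p' \Vdash \dot{r} = \dot{q}'$ and every $p'' \perp p'$ forces
$\dot{r} = \dot{q}$. Then $\Vdash \dot{r} \leq \dot{q}$, so
$\langle p', \dot{r} \rangle \leq \langle p, \dot{q} \rangle$ in
$P \times T(P,\dot{Q})$, while $\langle p', \dot{r} \rangle \leq
\langle p', \dot{q}' \rangle$ in $P \ast \dot{Q}$.
```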
The proof of the following lemma is based on Shioya~\cite{MR4159767}. \begin{lem}\label{termproduct}
Suppose $P$ has the $\kappa$-c.c. and $|P| \leq \kappa$. Then the following hold: \begin{enumerate}
\item If $\gamma$ is $\kappa$-closed, then there is a dense embedding from ${^{<\kappa}\gamma}$ onto $T(P,\dot{{^{<\kappa}\gamma}})$.
\item If $\langle\dot{Q}_{\gamma} \mid \gamma \in I\rangle$ is a sequence of $P$-names for posets, then there is a dense embedding from $\prod_{\gamma \in I}^{<\kappa} T(P,\dot{Q}_{\gamma})$ onto $T(P,\prod_{\gamma \in I}^{<\kappa}\dot{Q}_{\gamma})$. \end{enumerate} \end{lem} \begin{proof}
(1) Note that $D = \{\dot{q} \in T(P,\dot{{^{<\kappa}}\gamma}) \mid \exists \delta(\Vdash \mathrm{dom}(\dot{q}) = \delta)\}$ is dense in $T(P,\dot{{^{<\kappa}}\gamma})$. Indeed, for each $\dot{p} \in T(P,\dot{{^{<\kappa}}\gamma})$, by the $\kappa$-c.c. of $P$, there is a $\delta < \kappa$ with $\Vdash \mathrm{dom}(\dot{p}) < \delta$, and the usual density argument yields a $\dot{q} \in D$ with $\Vdash \dot{q} \leq \dot{p}$.
By the assumption, there is a sequence $\langle \dot{\tau}_\alpha \mid \alpha < \gamma \rangle$ of $P$-names for ordinals below $\gamma$ with the following properties: \begin{itemize}
\item $\Vdash \dot{\tau} \in \gamma$ implies $\Vdash \dot{\tau} = \dot{\tau}_\alpha$ for some $\alpha$.
\item $\not\Vdash \dot{\tau}_\alpha = \dot{\tau}_\beta$ for all $\alpha < \beta$. \end{itemize}
For each $p \in {^{<\kappa}\gamma}$, the mapping which sends $p$ to $\langle{\dot{\tau}_{p(\xi)} \mid \xi \in \mathrm{dom}(p)}\rangle$ is an isomorphism between ${^{<\kappa}\gamma}$ and $D$. This is the required embedding.
(2) Note that $E = \{\dot{q} \in T(P,\prod_{\gamma \in I}^{<\kappa}\dot{Q}_{\gamma}) \mid \exists d \subseteq I( \Vdash \mathrm{supp}(\dot{q}) = d)\}$ is dense in $T(P,\prod_{\gamma \in I}^{<\kappa}\dot{Q}_{\gamma})$, as in (1). The natural isomorphism from $\prod_{\gamma \in I}^{<\kappa} T(P,\dot{Q}_{\gamma})$ onto $E$ works. \end{proof} We can now prove Lemma~\ref{levyprojection}. \begin{proof}[Proof of Lemma~\ref{levyprojection}]
We remark that $P$ does not change the class of all $\kappa$-closed cardinals. The required projection $\pi_0$ is obtained as the composition
\begin{align*}
P \times \mathrm{Coll}(\kappa,<\lambda) &= P \times \textstyle{\prod_{\gamma \in [\kappa^{+},\lambda)_{\kappa\text{-cl}}}^{<\kappa} {^{<\kappa}\gamma}} \\
&\to P \times \textstyle{\prod_{\gamma \in [\kappa^{+},\lambda)_{\kappa\text{-cl}}}^{<\kappa} T(P,\dot{{^{<\kappa}\gamma}})} \\
&\to P \times T(P,\textstyle{\prod_{\gamma \in [\kappa^{+},\lambda)_{\kappa\text{-cl}}}^{<\kappa} \dot{{^{<\kappa}\gamma}}}) \\
&= P \times T(P,\dot{\mathrm{Coll}}(\kappa,<\lambda)) \to P \ast \dot{\mathrm{Coll}}(\kappa,<\lambda).
\end{align*}
The second line and the third line follow by Lemma~\ref{termproduct} (1) and (2), respectively. The last line follows from Lemma~\ref{termforcing}.
For continuity, we may assume that $P$ is a complete Boolean algebra. For $q\in \mathrm{Coll}(\kappa,<\lambda)$, let $\dot{q}$ be a $P$-name such that $\pi_0(\emptyset,q) = \langle \emptyset, \dot{q} \rangle$. Then $\dot{q}$ is a $P$-name such that \begin{itemize}
\item $P \Vdash \mathrm{dom}(\dot{q}) = \mathrm{dom}(q) \subseteq \kappa \times [\kappa^{+},\lambda)_{\kappa\text{-cl}}$.
\item $P \Vdash \dot{q}(\xi,\zeta) = \dot{\tau}_{q(\xi,\zeta)}^{\zeta}$ for all $\langle \xi,\zeta \rangle \in \mathrm{dom}(q)$. \end{itemize} Here, $\langle{\dot{\tau}^{\zeta}_\alpha \mid \alpha < \zeta}\rangle$ is the sequence of $P$-names defined in the proof of Lemma~\ref{termproduct}, and we identify an element of $\mathrm{Coll}(\kappa,<\lambda)$ with a partial function from $\kappa \times [\kappa^{+},\lambda)_{\kappa\text{-cl}}$ to $\lambda$. We have $\pi_0(p,q) = \langle p,\dot{q} \rangle$ for all $\langle p,q \rangle \in P \times \mathrm{Coll}(\kappa,<\lambda)$.
Consider a set $Z = \{ \langle p_i,q_i\rangle \mid i < \nu \} \subseteq P \times \mathrm{Coll}(\kappa,<\lambda)$ such that $\prod Z \in P \times \mathrm{Coll}(\kappa,<\lambda)$. We remark that $\prod Z = \langle \prod_i p_i,\bigcup_i q_i \rangle$. We let $p = \prod_i p_i$ and $q = \bigcup_i q_i$. Our goal is to show that $\pi_0(p,q) = \prod \pi_0``Z$. Note that \begin{itemize}
\item $P \Vdash \dot{q}_i(\xi,\zeta) = \dot{\tau}^{\zeta}_{q_i(\xi,\zeta)} = \dot{\tau}^{\zeta}_{q(\xi,\zeta)} = \dot{q}(\xi,\zeta)$ for each $\langle\xi,\zeta\rangle \in \mathrm{dom}(q_i)$.
\item $P \Vdash \mathrm{dom}(\dot{q}) = \mathrm{dom}(q) = \bigcup_{i < \nu}\mathrm{dom}({q}_i) = \bigcup_{i < \nu}\mathrm{dom}(\dot{q}_i)$. \end{itemize}
Therefore, $P \Vdash \dot{q} = \bigcup_{i}\dot{q}_i = \prod_i \dot{q}_i$. In particular, we have $\pi_0(p,q) = \langle p,\dot{q} \rangle = \prod \pi_0``Z$. \end{proof}
\begin{proof}[Proof of Lemma \ref{mainprojection}] We define $\pi$ as the composition \begin{align*}
P(\mu,\nu) & = \textstyle{\prod^{<\mu}_{\alpha \in [\mu,\nu)\cap \mathrm{Reg}} \mathrm{Coll}(\alpha,<\nu)} \\ &\to \textstyle{\prod^{<\mu}_{\alpha \in [\mu,\kappa)\cap \mathrm{Reg}} \mathrm{Coll}(\alpha,<\nu) \times \mathrm{Coll}(\lambda,<\nu)} \\ &\to \textstyle{\prod^{<\mu}_{\alpha \in [\mu,\kappa)\cap \mathrm{Reg}} \mathrm{Coll}(\alpha,<\kappa) \times \mathrm{Coll}(\lambda,<\nu)} \\ &= P(\mu,\kappa) \times \mathrm{Coll}(\lambda,<\nu) \\ &\to P(\mu,\kappa) \ast \dot{\mathrm{Coll}}(\lambda,<\nu). \end{align*} The last line follows by Lemma \ref{levyprojection}. By the definition of $\pi$, $\pi(p) = \langle p,\emptyset \rangle$ for all $p \in P(\mu,\kappa)$. \end{proof}
\section{Proof of Theorem \ref{maintheorem}} This section is devoted to the \begin{proof}[Proof of Theorem \ref{maintheorem}]
We let $P = P(\mu,\kappa)$. We remark that $j(P) = P(\mu,j(\kappa))$ has the $j(\kappa)$-c.c. by the almost-hugeness. By Lemma \ref{mainprojection}, we have a continuous projection $\pi:j(P) \to P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$.
Note that $P \subseteq V_{\kappa}$, $P \lessdot j(P)$, and $\pi(p) = \langle p,\emptyset \rangle$ for all $p \in P$.
Let $G \ast H$ be an arbitrary $(V,P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa)))$-generic filter. First, we give a saturated ideal on $\mathcal{P}_{\kappa}\lambda$ in $V[G][H]$. Let $\overline{G}$ be an arbitrary $(V,j(P))$-generic filter with $\pi``\overline{G} \subseteq G \ast H$. Note that $j``G = G \subseteq \overline{G}$, which implies that $j$ lifts to $j:V[G] \to M[\overline{G}]$ in $V[\overline{G}]$ with $j(G) = \overline{G}$. By the $j(\kappa)$-c.c. of $j(P)$, ${^{<j(\kappa)}M[\overline{G}]} \subseteq M[\overline{G}]$. Let $m_{\alpha}$ be the coordinate-wise union of $j``(H \cap \mathrm{Coll}(\lambda,<\alpha))$. Then $m_{\alpha} \in \mathrm{Coll}^{M[\overline{G}]}(j(\lambda),<jj(\kappa))$ for all $\alpha < j(\kappa)$ by the closure property of $M[\overline{G}]$ and the directed closedness of $\mathrm{Coll}^{M[\overline{G}]}(j(\lambda),<jj(\kappa))$. By the $j(\kappa)$-c.c., we can choose a list $\langle \dot{X}_{\alpha} \mid \alpha < j(\kappa) \rangle$ of $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$-names of all subsets of $\mathcal{P}_\kappa\lambda$. There is a descending chain $\langle s_{\alpha} \mid \alpha < j(\kappa) \rangle$ with the following properties: \begin{itemize}
\item $s_{\alpha}\leq m_{\alpha}$.
\item $s_{\alpha}$ decides $j``\lambda \in j(\dot{X}_{\alpha})$. \end{itemize}
Let $U = \{\dot{X}^{G \ast H} \mid \exists \beta(s_\beta \Vdash j``\lambda \in j(\dot{X}))\}$. Then $U$ is a $V[G][H]$-normal $V[G][H]$-ultrafilter over $\mathcal{P}_\kappa\lambda$. Because $\overline{G}$ was an arbitrary $(V,j(P))$-generic filter with $\pi``\overline{G} \subseteq G \ast H $, we can take a $j(P) / G \ast H$-name $\dot{U}$ for such an ultrafilter. Let $I$ be defined by
\begin{center}
$X \in I$ if and only if $j(P) / G \ast H \Vdash \mathcal{P}_{\kappa} \lambda \setminus X \in \dot{U}$.
\end{center} The standard argument shows that $I$ is a normal and fine ideal over $\mathcal{P}_{\kappa}\lambda$. To show the $j(\kappa)$-saturation of $I$, let $\langle{X_\xi \mid \xi \in K}\rangle$ be an antichain in $\mathcal{P}(\mathcal{P}_{\kappa}\lambda) / I$. We have the following: \begin{itemize}
\item $||X_\xi \in \dot{U}||\cdot ||X_\zeta \in \dot{U}|| = ||X_\xi \cap X_\zeta \in \dot{U}|| = 0$ for each $\xi \not= \zeta$ in $K$.
\item $||X_{\xi} \in \dot{U}|| \not= 0$ for each $\xi \in K$. \end{itemize}
It follows that $\{||X_{\xi} \in \dot{U}|| \mid \xi \in K\}$ is an anti-chain in $\mathcal{B}(j(P) / G \ast H)$. Note that $j(P) = P(\mu,j(\kappa))$ has the $j(\kappa)$-c.c., and thus $j(P) / G \ast H$ has the $j(\kappa)$-c.c. Therefore $|K| < j(\kappa)$, as desired.
\begin{clam}\label{duality}
$j(P)/G \ast H \simeq \mathcal{P}(\mathcal{P}_\kappa\lambda) / I$. \end{clam} \begin{proof}[Proof of Claim]
The proof is based on Foreman--Magidor--Shelah~\cite{MR942519}. As in the previous argument, let us consider the mapping $\tau:\mathcal{P}(\mathcal{P}_\kappa\lambda) / I \to \mathcal{B}(j(P)/G \ast H)$ that sends $X$ to $||X \in \dot{U}||$. The standard argument shows that $\tau$ is a complete embedding and that $\dot{U}$ is a $j(P)/G \ast H$-name for a $(V[G][H],\mathcal{P}(\mathcal{P}_\kappa\lambda) / I)$-generic filter, namely the one generated by $\tau^{-1}\dot{\overline{G}}$. Here, $\dot{\overline{G}}$ is the canonical name for the $(V[G][H],j(P)/G \ast H)$-generic filter. It is enough to prove that $\tau$ is a dense embedding.
We claim that there is an $f_q$ such that $||\{a \in \mathcal{P}_\kappa\lambda \mid f_q(a) \in G\} \in \dot{U}|| = q$ for every $q \in j(P) / G \ast H$. It follows that the range of $\tau$ is a dense subset in $\mathcal{B}(j(P) / G \ast H)$.
Let $\overline{G}$ be an arbitrary $(V,j(P))$-generic filter with $\pi``\overline{G} \subseteq G \ast H$. Note that $q \in j(P) \cap V_{\beta}$ for some $\beta < j(\kappa)$. Since $j$ is almost-huge, by the elementarity of $j$ we can choose an inaccessible $\alpha < j(\kappa)$ with $\alpha > \beta$. Let $U_\alpha = \dot{U}^{\overline{G}} \cap \mathcal{P}(\mathcal{P}_\kappa\lambda)^{V[G][H\upharpoonright\alpha]}$. By the definition of $\dot{U}$, we can choose a $(V[\overline{G}],\mathrm{Coll}^{M[\overline{G}]}(j(\lambda),<jj(\kappa)))$-generic filter $\overline{H}$ such that: \begin{itemize}
\item $j$ lifts to $j:V[G][H\upharpoonright \alpha] \to M[\overline{G}][\overline{H}\upharpoonright {j(\alpha)}]$ and $j(G) = \overline{G}$.
\item $X \in U_\alpha$ if and only if $j``\lambda \in j(X)$ in $M[\overline{G}][\overline{H}\upharpoonright j(\alpha)]$. \end{itemize} Here, $H\upharpoonright \alpha = H \cap \mathrm{Coll}(\lambda,<\alpha)$ and $\overline{H} \upharpoonright j(\alpha) = \overline{H} \cap {\mathrm{Coll}^{M[\overline{G}]}(j(\lambda),<j(\alpha))}$. We can consider the following commutative diagram of elementary embeddings.
\begin{center}
\begin{tikzpicture}[auto,->]
\node (vtwo) at (0,-1) {$V[G][H\upharpoonright \alpha]$};
\node (mtwo) at (5,-1) {$M[\overline{G}][\overline{H}\upharpoonright j(\alpha)]$};
\node (n) at (2.5,-3) {$N$};
\draw (vtwo) --node {$\scriptstyle {j}$} (mtwo);
\draw (vtwo) --node[swap] {$\scriptstyle i$} (n);
\draw (n) --node[swap] {$\scriptstyle k$} (mtwo);
\end{tikzpicture}
\end{center}
Here, $i:V[G][H\upharpoonright \alpha] \to N \simeq \mathrm{Ult}(V[G][H\upharpoonright \alpha],U_\alpha)$ is the ultrapower mapping and $k$ is defined by $k([f]_{U_\alpha}) = j(f)(j``\lambda)$. It is easy to see that $k$ is an elementary embedding. We claim that $\mathrm{crit}(k) \geq \alpha$. Because $\alpha$ is inaccessible, $\lambda^{+} = \alpha$ in $V[G][H\upharpoonright \alpha]$. We remark that $\mathcal{P}(\lambda)^{V[G][H \upharpoonright \alpha]} \subseteq N$ and $i(\kappa) \geq \alpha = \lambda^{+}$. Indeed, $\mathcal{P}(\lambda)^{V[G][H \upharpoonright \alpha]} \subseteq N$ holds since each $x \subseteq \lambda$ can be written as $\{\xi \in \lambda \mid i(\xi) \in i``\lambda\cap i(x)\}$; that is, $x$ is definable in $N$ by the normality of $U_\alpha$. Since $\kappa = \mu^{+}$ in $V[G][H\upharpoonright \alpha]$, $N \models i(\kappa)$ is the least cardinal greater than $\mu$, so $N$ has no cardinals between $\kappa$ and $\alpha$. On the other hand, $\mathrm{crit}(k) \geq \kappa$ must be a cardinal in $N$. Therefore $\mathrm{crit}(k) \geq \alpha$.
Let us find a name for $q$ in $N$. Since $|V_\beta| = \lambda < \alpha$ holds in $V[G][H \upharpoonright \alpha]$, the same holds in $N$. We can enumerate $i(P) \cap V_\beta^{N}$ as $\langle q_\xi \mid \xi < \lambda \rangle$ in $N$. Since $\mathrm{crit}(k) \geq \alpha > \beta$, $k$ is the identity mapping on $V_\beta$, and thus $k(\langle q_\xi \mid \xi < \lambda \rangle) = \langle q_\xi \mid \xi < \lambda \rangle$. By the elementarity of $k$, $q$ appears in this sequence; that is, there exists a $\xi$ such that $q_\xi = k(q_\xi) = q \in N$. We can choose $x$ with $[{x}]_{U_{\alpha}} = q$. Since $\dot{U}$ is a name for a $(V[G][H],\mathcal{P}(\mathcal{P}_\kappa\lambda)/ I)$-generic filter, $\overline{G}$ was arbitrary, and $I$ is a saturated ideal, there is an $f_q:\mathcal{P}_\kappa\lambda \to V[G][H]$ such that $\mathcal{P}(\mathcal{P}_\kappa\lambda)/I \Vdash q = [f_q]_{\dot{U}_\alpha}$.
Therefore $||\{a \in \mathcal{P}_\kappa \lambda \mid f_q(a) \in G\} \in \dot{U}|| = q$ follows by \begin{align*}
\{a \in \mathcal{P}_\kappa\lambda \mid f_q(a) \in G\} \in U &\Leftrightarrow[f_q]_{U_\alpha} = q \in i(G)(\text{for some }\alpha)\\ &\Leftrightarrow k(q) = q \in k \circ i(G) = \overline{G}. \end{align*} \end{proof} By virtue of Claim \ref{duality}, items (1)--(4) follow from the corresponding claims for the quotient forcing $j(P)/G \ast H$. From now on, we work in $V$.
Items (1) and (2) follow from Claim \ref{ccconclusion}. \begin{clam}\label{ccconclusion}\renewcommand{\labelenumi}{(\roman{enumi})}
$P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces that
\begin{enumerate}
\item $j(P) / \dot{G} \ast \dot{H}$ has the $(j(\kappa),j(\kappa),<\mu)$-c.c.
\item $j(P) / \dot{G} \ast \dot{H}$ does not have the $(j(\kappa),\mu,\mu)$-c.c.
\end{enumerate} \end{clam} \begin{proof}[Proof of Claim]
For (i), we recall that $j(P)$ has the $(j(\kappa),j(\kappa),<\mu)$-c.c. and $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ is $\mu$-closed. By Lemma \ref{ccquotient} and the continuity of $\pi$, it is forced by $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ that $j(P) / \dot{G} \ast \dot{H}$ has the $(j(\kappa),j(\kappa),<\mu)$-c.c.
We prove (ii) by contradiction. Suppose otherwise. We consider a set $X= \{r_\alpha \mid \alpha \in [\lambda^{+}, j(\kappa))\cap \mathrm{Reg}\} \subseteq j(P)$ with $\mathrm{supp}(r_\alpha) = \{\alpha\}$ for every $\alpha$. By the definition of $\pi$, $\pi(r_\alpha) = \langle{\emptyset,\emptyset}\rangle$. Therefore $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa)) \Vdash r_\alpha \in j(P) / \dot{G} \ast \dot{H}$ for every $\alpha$. By the assumption, there are $\dot{Z}$, $r \in j(P)$, and $\langle p,\dot{q} \rangle$ such that $\langle p,\dot{q} \rangle$ forces that \begin{itemize}
\item $r$ is a lower bound of $\{r_\alpha \mid \alpha \in \dot{Z}\}$ in $j(P) / \dot{G} \ast \dot{H}$, and
\item $|\dot{Z}| = \mu$. \end{itemize}
Since $|\mathrm{supp}(r)| < \mu$, we can choose $\beta$ and $\langle p',\dot{q}' \rangle \leq \langle p,\dot{q} \rangle$ such that $\langle p',\dot{q}' \rangle \Vdash \beta \in \dot{Z} \setminus \mathrm{supp}(r)$. Clearly, $\langle p',\dot{q}' \rangle$ cannot force that $r \leq r_\beta$ in $j(P)/\dot{G} \ast \dot{H}$, which is a contradiction. \end{proof}
Note that $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces that $\mathcal{P}(\mathcal{P}_{\kappa}\lambda) / \dot{I}$ is a complete Boolean algebra, because $\dot{I}$ is saturated. The poset $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces $|\mathcal{P}(\mathcal{P}_{\kappa}\lambda) /\dot{I}| \leq 2^{\lambda^{<\kappa}} = 2^{\lambda} = j(\kappa)$, and thus $|\mathcal{B}(j(P) / \dot{G} \ast \dot{H})| \leq j(\kappa)$. Hence the notion of an $S$-layering sequence is meaningful here. Item (3) follows from Claim \ref{layeredconclusion}. \begin{clam}\label{layeredconclusion}\renewcommand{\labelenumi}{(\roman{enumi})} \begin{enumerate}
\item If $j(\kappa)$ is Mahlo, then $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa)) \Vdash j(P) / \dot{G} \ast \dot{H}$ is $S$-layered for some stationary subset $S \subseteq E^{j(\kappa)}_\lambda$.
\item If $j(\kappa)$ is not Mahlo, then $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa)) \Vdash j(P) / \dot{G} \ast \dot{H}$ is not $S$-layered for any stationary subset $S\subseteq \lambda$. \end{enumerate} \end{clam} \begin{proof}[Proof of Claim]
First, we show (i). By Lemma \ref{diagonallayered}, $j(P)$ is $[\mu,j(\kappa))\cap \mathrm{Reg}$-layered. Note that $[\lambda,j(\kappa))\cap \mathrm{Reg}$ remains stationary in the extension. We also remark that $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces $[\lambda,j(\kappa))\cap \mathrm{Reg}^{V}\subseteq E^{j(\kappa)}_{\lambda}$ since $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ has the form $(\kappa$-c.c.$)\ast(\lambda$-closed$)$.
By Lemma~\ref{layeredquotient} and the continuity of $\pi$, $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa)) \Vdash j(P) / \dot{G} \ast \dot{H}$ is $[\lambda,j(\kappa))\cap \mathrm{Reg}^{V}$-layered.
For (ii), we let $Q_\delta = \bigcup_{\eta < \delta}P(\mu,\eta)$ and let $C\subseteq j(\kappa)$ be the club given by Lemma \ref{regularchar}. Then $\langle Q_\delta \mid \delta < j(\kappa) \rangle$ is a filtration and $Q_\delta \not\mathrel{\lessdot} j(P)$ for all singular $\delta \in C$ by Lemma \ref{singularcase}. Since $j(\kappa)$ is not Mahlo, there is a club $D \subseteq C$ such that each element of $D$ is singular. Note that it is forced that $\langle Q_\delta/ \dot{G}\ast \dot{H} \mid \delta < j(\kappa) \rangle$ is a filtration of $j(P) / \dot{G}\ast \dot{H}$. By Lemma \ref{layeredchar}, it is enough to show that $\Vdash Q_\delta / \dot{G}\ast \dot{H} \not\mathrel{\lessdot} j(P) / \dot{G}\ast \dot{H}$ for all $\delta \in D$. Fix $\delta \in D$; Lemma \ref{singularcase} gives a $p \in P(\mu,\delta)$ with certain properties. By $\pi(p) = \langle \emptyset,\emptyset\rangle$, we have $\Vdash p \in P(\mu,\delta) / \dot{G}\ast \dot{H}$. We claim that it is forced that $p$ has no reduction in $Q_\delta/\dot{G} \ast \dot{H}$.
For any $b \in P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ and $q \in Q_\delta$ with $b \Vdash q \in Q_\delta / \dot{G}\ast \dot{H}$, there is an $r \in {Q_\delta}$ such that $\pi(r) = \langle\emptyset,\emptyset\rangle$, $r \perp p$ in $P(\mu,\delta)$, and $r \cdot q \in Q_\delta$. Then the following hold:
\begin{itemize}
\item $b \leq \pi(q) = \pi(q) \cdot \langle\emptyset,\emptyset\rangle = \pi(q) \cdot \pi(r) = \pi(q \cdot r)$.
\item $b \Vdash q \cdot r \leq q$ in $Q_\delta / \dot{G}\ast \dot{H}$ but $(q \cdot r) \perp p$ in $P(\mu,\delta)/ \dot{G}\ast \dot{H}$.
\end{itemize}
Thus, $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces that $q \in Q_{\delta} / \dot{G} \ast \dot{H}$ is not a reduction of $p$, as desired. \end{proof} Item (4) follows from Claim \ref{centeredconclusion}. \begin{clam}\label{centeredconclusion}
$P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa)) \Vdash j(P) / \dot{G} \ast \dot{H}$ is not $\lambda$-centered. \end{clam} \begin{proof}[Proof of Claim]
Define \begin{itemize}
\item $P_{\ast} = \prod_{\alpha \in [\mu,\lambda + 1)\cap \mathrm{Reg}}^{<\mu}\mathrm{Coll}(\alpha,<j(\kappa))$ and
\item $P^{\ast} = {\prod^{<\mu}_{\alpha \in [\lambda + 1,j(\kappa))\cap \mathrm{Reg}}\mathrm{Coll}(\alpha,<j(\kappa))}$. \end{itemize}
We have $j(P) \simeq P_* \times P^*$. By the definition of $\pi$, it follows that $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces $j(P)/ \dot{G} \ast \dot{H} \simeq (P_* / \dot{G} \ast \dot{H}) \times P^*$. Thus, it suffices to show that $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces that $P^{*}$ is not $\lambda$-centered.
Fix $\alpha \in [\lambda^{+},j(\kappa)) \cap \mathrm{Reg}$. Lemma \ref{levyprojection} shows that $P$ forces that $\mathrm{Coll}^{V}(\alpha,<j(\kappa))$ projects onto $\mathrm{Coll}^{V^{P}}(\alpha,<j(\kappa))$. Since $P^{\ast}$ projects onto $\mathrm{Coll}^{V}(\alpha,<j(\kappa))$, in the extension by $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$, if $P^{\ast}$ is $\lambda$-centered then so is $\mathrm{Coll}^{V^P}(\alpha,<j(\kappa))$.
But, by Lemma \ref{levycentered2}, $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces that $\mathrm{Coll}^{V^{P}}(\alpha,<j(\kappa))$ is not $\lambda$-centered for all $\alpha > \lambda$. In particular, $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces that $P^*$ is not $\lambda$-centered. \end{proof} This completes the proof. \end{proof}
Magidor~\cite{MR526312} gave a model with a normal and countably complete saturated ideal over $[\aleph_3]^{\aleph_1}$ using the universal collapse. We get the same result using the diagonal product of Levy collapses. Indeed, \begin{thm}
Suppose that $j$ is a huge embedding with critical point $\kappa$, GCH holds, and $\mu < \kappa < \lambda < j(\kappa)$ are regular cardinals. Then $P(\mu,\kappa) \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ forces that there is a normal and $\kappa$-complete ideal ${I}$ over $[j(\kappa)]^{\kappa}$ with the following properties: \begin{enumerate}
\item $\mathcal{P}([j(\kappa)]^{\kappa})/I$ has the $(j(\kappa),j(\kappa),<\mu)$-c.c.
\item $\mathcal{P}([j(\kappa)]^{\kappa})/I$ does not have the $(j(\kappa),\mu,\mu)$-c.c.
\item $\mathcal{P}([j(\kappa)]^{\kappa})/I$ is $S$-layered poset for some stationary subset $S \subseteq E^{j(\kappa)}_{\lambda}$. In particular, it has a dense subset of size $j(\kappa)$.
\item $\mathcal{P}([j(\kappa)]^{\kappa})/I$ is not $\lambda$-centered. \end{enumerate} \end{thm} \begin{proof}
Let $P = P(\mu,\kappa)$, and let $G \ast H$ be a $(V,P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa)))$-generic filter. By the proof of Theorem \ref{maintheorem}, it is enough to construct an ideal $I$ over $[j(\kappa)]^{\kappa}$ with $\mathcal{P}([j(\kappa)]^{\kappa}) / I \simeq j(P) / G \ast H$.
Let $\pi:j(P) \to P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ be the projection of Lemma \ref{mainprojection} and let $\overline{G}$ be an arbitrary $(V,j(P))$-generic filter with $\pi `` \overline{G} \subseteq G \ast H$. In $V[\overline{G}]$, $j$ lifts to $j:V[G] \to M[\overline{G}]$. By the GCH, we can choose a list $\langle \dot{X}_{\alpha} \mid \alpha < j(\kappa)^{+} \rangle$ of $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$-names of all subsets of $[j(\kappa)]^{\kappa}$. Note that ${^{j(\kappa)}}M[\overline{G}] \cap V[\overline{G}] \subseteq M[\overline{G}]$. By $\lambda > \kappa$ and the closure property of $M[\overline{G}]$, $\mathrm{Coll}^{M[\overline{G}]}(j(\lambda),<jj(\kappa))$ is $j(\kappa)^{+}$-directed closed in $V[\overline{G}]$. Let $m$ be the coordinate-wise union of $j ``H$. Then we have $m \in \mathrm{Coll}^{M[\overline{G}]}(j(\lambda),<jj(\kappa))$ by $|j ``H| = j(\kappa)$. We can construct a descending sequence $\langle s_{\alpha} \mid \alpha < j(\kappa)^{+} \rangle$ such that \begin{itemize}
\item $s_{\alpha} \leq m$.
\item $s_{\alpha}$ decides $j``j(\kappa) \in j(\dot{X}_{\alpha})$. \end{itemize}
Let $U = \{\dot{X}_{\alpha}^{G \ast H} \mid s_{\alpha} \Vdash j``j(\kappa) \in j(\dot{X}_{\alpha})\}$. There is a $j(P)/{G} \ast {H}$-name $\dot{U}$ for ${U}$. In $V[G][H]$, let $I$ be the ideal induced by $\dot{U}$; that is, $X \in I$ if and only if $j(P) / G \ast H \Vdash [j(\kappa)]^{\kappa} \setminus X \in \dot{U}$. Then $I$ is as required. \end{proof}
\begin{rema} In the proof of Theorem \ref{maintheorem}, we proved that it is forced by $P(\mu,\kappa) \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$ that $P^{\ast}$ is not $\lambda$-centered. On the other hand, $P_{\ast} / \dot{G} \ast \dot{H}$ is $\lambda$-centered. Indeed, in the extension by $P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))$, we have \begin{center}
$P_{\ast} = (\prod^{<\mu}_{\alpha \in [\mu,\lambda +1)\cap \mathrm{Reg}} \mathrm{Coll}(\alpha,<j(\kappa)))^{V} = \prod^{<\mu}_{\alpha \in [\mu,\lambda+1)\cap \mathrm{Reg}^V} \mathrm{Coll}^{V}(\alpha,<j(\kappa))$. \end{center} Remark \ref{remarklevy} and Lemma 4 in~\cite{MR925267} show $P_{\ast}$ is $\lambda$-centered. By the proof of Lemma~\ref{centeredquotient}, we have that ${P \ast \dot{\mathrm{Coll}}(\lambda,<j(\kappa))}$ forces $P_\ast / \dot{G} \ast \dot{H}$ is $\lambda$-centered, as desired. \end{rema}
We conclude this paper with the following question. \begin{ques}
Is it consistent that there is an ideal over $\aleph_1$ which is $\aleph_2$-saturated but not $(\aleph_2,\aleph_2,2)$-saturated? \end{ques} Note that every $\aleph_2$-saturated ideal over $\aleph_1$ is $(\aleph_2,\aleph_1,2)$-saturated under $\mathrm{CH}$. It is known that $\mathrm{CH}$ implies the partition relation $\aleph_2 \to (\aleph_2,\aleph_1)$ (see Theorem 3.10 in~\cite{MR2768681}). That is, for every $c:[\aleph_2]^{2} \to 2$, if there is no $H_0 \in [\aleph_2]^{\aleph_2}$ with $c ``[H_0]^2 = \{0\}$, then there is an $H_1 \in [\aleph_2]^{\aleph_1}$ with $c ``[H_1]^{2} = \{1\}$. Indeed, for an $\aleph_2$-saturated ideal $I$ over $\aleph_1$ and $I$-positive sets $\{A_\xi \mid \xi < \aleph_2\}$, define $c:[\aleph_2]^{2} \to 2$ by $c(\xi,\zeta) = 0 \leftrightarrow A_\xi \cap A_{\zeta} \in I$. By the saturation of $I$, there is no $H_0 \in [\aleph_2]^{\aleph_2}$ with $c ``[H_0]^2 = \{0\}$. Hence we can take an $H_1 \in [\aleph_2]^{\aleph_1}$ with $c ``[H_1]^{2} = \{1\}$. For all $\xi < \zeta$ in $H_1$, since $c(\xi,\zeta) = 1$, we have $A_\xi \cap A_{\zeta} \not\in I$.
\end{document} |
\begin{document}
\title{\bf On certain topological indices of graphs}
\author{
Arber Avdullahu\\
University of Primorska, Koper, Slovenia \\
\texttt{arbr.avdullahu@gmail.com}
\and
Slobodan Filipovski\footnote{Supported in part by the Slovenian Research Agency (research program P1-0285 and Young Researchers Grant).} \\
University of Primorska, Koper, Slovenia \\
\texttt{slobodan.filipovski@famnit.upr.si}
}
\date{} \maketitle \begin{abstract} In this paper we give new bounds for several vertex-based and edge-based topological indices of graphs: the Albertson irregularity index, the degree variance, the Mostar index, and the first Zagreb index. Moreover, we give a new upper bound for the energy of graphs via the $IRB$ index. Most of our results rely on a well-known characterization of the Laplacian spectral radius. \end{abstract}
\section{Introduction} \quad Let $G$ be an undirected graph with $n$ vertices and $m$ edges, without loops or multiple edges. The degree of a vertex $v$, denoted $\deg(v)$, is the number of vertices adjacent to $v$. Since $\sum_{v\in V(G)} \deg(v) =2m$, the average vertex degree is $\overline{d}(G)=\frac{2m}{n}.$ A graph is $k$-regular if every vertex has degree $k$; otherwise, the graph is said to be irregular. Beyond simple measures of range, such as the difference between the maximum degree $\Delta$ and the minimum degree $\delta$, the average degree allows us to compute more refined measures of the heterogeneity in connectivity across vertices (e.g., the extent to which there is a big spread between well-connected and poorly connected vertices in the graph).
One such measure was proposed by Tom Snijders in \cite{snijders}, called the degree variance of the graph. This vertex-based measure is defined as the average squared deviation between the degree of each vertex and the average degree: \begin{equation}\label{varr} Var(G)=\frac{\sum_{u\in V(G)}\left(\deg(u)-\overline{d}(G)\right)^{2}}{n}=\frac{\sum_{u\in V(G)}\left(\deg(u)-\frac{2m}{n}\right)^{2}}{n}. \end{equation}
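To make the definition concrete, the degree variance can be computed directly from an adjacency-list representation of a graph; the following small Python sketch (ours, purely illustrative) does so for the path $P_4$:

```python
# Illustrative computation of the degree variance Var(G) of (1).
# The adjacency-list format {vertex: set of neighbours} is our own choice.

def degree_variance(adj):
    """Var(G) = (1/n) * sum_v (deg(v) - 2m/n)^2 for a simple graph."""
    n = len(adj)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    m = sum(deg.values()) / 2          # handshake lemma: sum of degrees = 2m
    d_bar = 2 * m / n                  # average degree
    return sum((deg[v] - d_bar) ** 2 for v in adj) / n

# Path P_4 with degree sequence (1, 2, 2, 1): average degree 1.5
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(degree_variance(P4))  # 0.25
```

For a regular graph the same function returns $0$, since every degree equals the average.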
Note that the degree variance of a regular graph is always zero.\\ Among the oldest and most studied topological indices are two classical vertex-degree-based ones: the first and the second Zagreb index. The Zagreb indices were introduced by Gutman et al. in \cite{gutman, gutman2}; they are important molecular descriptors closely correlated with many chemical properties. The first Zagreb index $M_{1}(G)$ is defined as \begin{equation} M_{1}(G)=\sum_{v\in V(G)}\deg(v)^{2}. \end{equation} A general edge-additive index is defined as the sum, over all edges, of edge effects. These effects can be of various types, but the most common ones are defined in terms of some property of the end-vertices of the considered edge. In many cases the edge contribution measures how similar its end-vertices are. In 1997, Albertson [1] introduced the irregularity index of a connected graph $G$, defined by
\begin{equation}\label{irr} Irr(G)=\sum_{(u,v)\in E(G)}|\deg(u)-\deg(v)|. \end{equation}
This index has been of interest to mathematicians, chemists and scientists from related fields, since the Albertson irregularity index plays a major role in irregularity measures of graphs \cite{dimitrov,abdo,chen,reti}, in predicting the biological activities and properties of chemical compounds in QSAR/QSPR modeling, and in the quantitative characterization of network heterogeneity. Due to their simple computation, the degree variance $Var(G)$ and the Albertson index $Irr(G)$ belong to the family of widely used irregularity indices. Another bond-additive index studied in this paper is the $IRB$ index, defined as \begin{equation}\label{irb} IRB(G)=\sum_{e=(u,v)\in E(G)}(\sqrt{\deg(u)}-\sqrt{\deg(v)})^{2}. \end{equation}
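The degree-based indices $M_1$, $Irr$ and $IRB$ defined above are equally easy to evaluate; here is an illustrative Python sketch (ours, not from the paper), using the star $K_{1,3}$ as a test case:

```python
import math

def edges(adj):
    """List each undirected edge (u, v) exactly once, with u < v."""
    return [(u, v) for u in adj for v in adj[u] if u < v]

def first_zagreb(adj):                 # M_1(G) = sum_v deg(v)^2
    return sum(len(adj[v]) ** 2 for v in adj)

def albertson(adj):                    # Irr(G) = sum over edges of |deg(u) - deg(v)|
    return sum(abs(len(adj[u]) - len(adj[v])) for u, v in edges(adj))

def irb(adj):                          # IRB(G) = sum over edges of (sqrt deg(u) - sqrt deg(v))^2
    return sum((math.sqrt(len(adj[u])) - math.sqrt(len(adj[v]))) ** 2
               for u, v in edges(adj))

# Star K_{1,3}: M_1 = 3^2 + 3*1^2 = 12 and Irr = 3*|3 - 1| = 6
K13 = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(first_zagreb(K13), albertson(K13))  # 12 6
```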
The Mostar index is a recently introduced bond-additive distance-based graph invariant that measures the degree of peripherality of individual edges and of the graph as a whole. It was introduced in \cite{mostar}, and independently in \cite{reti}. For an edge $e=(u,v)\in E(G)$, let $n_{e}(u)$ be the number of vertices of $G$ closer to $u$ than to $v$. The Mostar index of $G$ is defined as
\begin{equation}\label{most} Mo(G)=\sum_{e=(u,v)\in E(G)} | n_{e}(u)-n_{e}(v)|. \end{equation}
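Since $n_e(u)$ depends only on graph distances, the Mostar index of a small connected graph can be computed with breadth-first search; the following Python sketch (ours, not from the paper) evaluates the defining sum directly:

```python
from collections import deque

def bfs_distances(adj, src):
    """Distances from src in an unweighted connected graph, by BFS."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def mostar(adj):
    dist = {v: bfs_distances(adj, v) for v in adj}
    total = 0
    for u in adj:
        for v in adj[u]:
            if u < v:  # visit each undirected edge once
                n_u = sum(1 for w in adj if dist[u][w] < dist[v][w])
                n_v = sum(1 for w in adj if dist[v][w] < dist[u][w])
                total += abs(n_u - n_v)
    return total

# Star K_{1,3}: every edge contributes |3 - 1| = 2, so Mo = 6
K13 = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(mostar(K13))  # 6
```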
In this paper we present an inequality between the degree variance and the Albertson irregularity index (Theorem 2.2). As a consequence of this result, we improve the well-known lower bound for the first Zagreb index, $M_{1}(G)\geq \frac{4m^{2}}{n}.$ Using the $IRB$ index we provide an upper bound for the energy of graphs, which improves the well-known upper bound $\sqrt{2mn}$ (Theorem 2.5).
In Section 3 we focus on the Mostar index. In Subsection 3.1 we consider bipartite graphs of diameter three and derive an upper bound for $Mo(G)$ which depends on the orders of the partite sets. We finish the paper with a general upper bound for the Mostar index in terms of $m$ and $n$ (Theorem 3.11).
Almost all results in this paper rely on the same key ingredient, a well-known characterization of the Laplacian spectral radius (Lemma 2.1).
\section{Laplacian matrices and their applications in estimating certain graph invariants}
\quad Most of the results in this paper are based on a well-known upper bound for the spectral radius of the Laplacian matrix. Let $G=(V, E)$ be a graph whose vertices are labelled $\{1,2,\ldots, n\}$, and let $L$ be the Laplacian matrix of $G$, defined as follows: \begin{equation*} \label{laplacian} L_{ij} = \left\{
\begin{array}{lc} -1, & \mbox{ if } (i,j)\in E \\
0, & \mbox{ if } (i,j)\notin E \; \mbox {and } i\neq j\\
-\sum_{k\neq i} L_{ik}, & \mbox{ if } i=j.
\end{array}
\right.
\end{equation*}
It is well-known that $L$ is a positive semidefinite matrix. More about the properties of the Laplacian matrices can be found in \cite{meris, mohar}. In our proofs we use the fact that the largest eigenvalue $\lambda_{max}$ of $L$ satisfies $\lambda_{max}\leq n$, that is, $\frac{\lambda_{\max}}{n}\leq 1.$ \\ For a graph $G$ on $n$ vertices we identify a vector $x\in \mathbb{R}^{n}$ with a function $x: V(G)\rightarrow \mathbb{R}.$ The quadratic form defined by $L$ has the following expression \begin{equation}\label{quadratic} x^{T}Lx=\sum_{(u,v)\in E(G)}(x(u)-x(v))^{2}. \end{equation} We also need Fiedler's \cite{eigenvalue} characterization of $\lambda_{max}.$ \begin{lemma} $$\lambda_{max}=2n\max_{x}\frac{\sum_{(u,v)\in E(G)}(x(u)-x(v))^{2}}{\sum_{u\in V(G)}\sum_{v\in V(G)}(x(u)-x(v))^{2}}$$ where $x$ is a nonconstant vector. \end{lemma}
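As a numerical sanity check of the quadratic-form identity (\ref{quadratic}) — an illustration of ours, not part of the paper — one can build $L$ from an adjacency list and compare $x^{T}Lx$ with the sum over edges:

```python
def laplacian(adj):
    """Laplacian matrix of a simple graph with vertices 0, ..., n-1."""
    n = len(adj)
    L = [[0] * n for _ in range(n)]
    for u in adj:
        L[u][u] = len(adj[u])          # degree on the diagonal
        for w in adj[u]:
            L[u][w] = -1               # -1 for each adjacent pair
    return L

def quad_form(L, x):                   # x^T L x
    n = len(x)
    return sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))

def edge_sum(adj, x):                  # sum over edges of (x(u) - x(v))^2
    return sum((x[u] - x[v]) ** 2 for u in adj for v in adj[u] if u < v)

# Path P_4 and an arbitrary test vector: both sides equal 6
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
x = [0, 1, 3, 2]
print(quad_form(laplacian(P4), x), edge_sum(P4, x))  # 6 6
```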
\subsection{An inequality between the degree variance and the irregularity index of graphs}
\begin{theorem} Let $G$ be a connected graph on $n$ vertices and $m$ edges. Then \begin{equation}\label{teorema}Var(G)\geq \frac{Irr^{2}(G)}{mn^{2}}. \end{equation} In particular, equality holds if $G$ is a regular graph, in which case both sides vanish. \end{theorem} \begin{proof}
From the inequality between arithmetic and quadratic mean for the numbers $|\deg(u)-\deg(v)|$, where $(u,v)\in E(G),$ we get \begin{equation} \begin{gathered}
Irr(G)=\sum_{(u,v)\in E(G)}|\deg(u)-\deg(v)|\leq m\cdot \sqrt{\frac{\sum_{(u,v)\in E(G)}(\deg(u)-\deg(v))^{2}}{m}}\\ =\sqrt{m}\sqrt{\sum_{(u,v)\in E(G)}(\deg(u)-\deg(v))^{2}}= \sqrt{m}\sqrt{\sum_{(u,v)\in E(G)}((\deg(u)-\frac{2m}{n})-(\deg(v)-\frac{2m}{n}))^{2}}. \end{gathered}\label{hm} \end{equation}
For the graph $G$ we define a vector $x \in \mathbb{R}^{n}$ such that $x(u)=\deg(u)-\frac{2m}{n},$ where $u\in V(G).$ The inequality in (\ref{hm}) becomes \begin{gather*} Irr(G)\leq \sqrt{m} \sqrt{\sum_{(u,v)\in E(G)}\left((\deg(u)-\frac{2m}{n})-(\deg(v)-\frac{2m}{n})\right)^{2}}= \sqrt{m}\sqrt{\sum_{(u,v)\in E(G)}(x(u)-x(v))^{2}}. \end{gather*} Using arithmetic calculations it is easy to see that: \begin{equation}\label{eq:1} \frac{1}{2}\cdot \sum_{u\in V(G)}\sum_{v\in V(G)}(x(u)-x(v))^{2} = n\cdot \sum_{u\in V(G)}x(u)^{2}-(\sum_{v\in V(G)}x(v))^{2} \end{equation} Now, applying Lemma 2.1 and equation (\ref{eq:1}) we obtain \begin{gather*} Irr(G)\leq \sqrt{m}\sqrt{\frac{\lambda_{\max}}{2n}\cdot \sum_{u\in V(G)}\sum_{v\in V(G)}(x(u)-x(v))^{2}}= \sqrt{m}\sqrt{\frac{\lambda_{\max}}{n}[n\sum_{u\in V(G)}x(u)^{2}-(\sum_{v\in V(G)}x(v))^{2}]}. \end{gather*} From $\sum_{u\in V(G)}x(u)^{2}=\sum_{u\in V(G)} (\deg(u)-\frac{2m}{n})^{2}=n\cdot Var(G)$ and from $\sum_{v\in V(G)}x(v)=\sum_{v\in V(G)}(\deg(v)-\frac{2m}{n})=\sum_{v\in V(G)}(\deg(v)) - 2m=0$ we get
\begin{equation}\label{ravenka} Irr(G)\leq\sqrt{m}\sqrt{\frac{\lambda_{\max}}{n}\cdot n^{2}Var(G)}\leq n\sqrt{m}\sqrt{Var(G)}. \end{equation} From (\ref{ravenka}) we get $$Var(G)\geq \frac{Irr^{2}(G)}{mn^{2}}.$$ If $G$ is a regular graph, then $Var(G)=Irr(G)=0$, so equality holds in (\ref{teorema}). \qed
\end{proof}
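As a quick numerical sanity check (an illustration, not part of the proof), the inequality can be verified on small graphs: the path $P_{4}$ gives a strict inequality, while the regular cycle $C_{4}$ gives the equality case with both sides equal to zero.

```python
# Verify Var(G) >= Irr(G)^2 / (m n^2) on the path P4 and the cycle C4.

def degree_stats(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m = len(edges)
    var = sum((d - 2 * m / n) ** 2 for d in deg) / n      # degree variance
    irr = sum(abs(deg[u] - deg[v]) for u, v in edges)     # Albertson index
    return var, irr, m

# P4 (non-regular): strict inequality, 0.25 > 1/12.
var_p4, irr_p4, m_p4 = degree_stats(4, [(0, 1), (1, 2), (2, 3)])
assert var_p4 > irr_p4 ** 2 / (m_p4 * 4 ** 2)

# C4 (regular): both sides vanish.
var_c4, irr_c4, m_c4 = degree_stats(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
assert var_c4 == 0 and irr_c4 == 0
```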
\subsection{A new lower bound for the first Zagreb index} \quad As a consequence of Theorem 2.2, we improve the well-known lower bound for the first Zagreb index, $M_{1}(G)\geq \frac{4m^{2}}{n}$. \begin{corollary} Let $G$ be a graph with $m$ edges and $n$ vertices. Then $$M_{1}(G)\geq \frac{4m^{2}}{n}+\frac{Irr^{2}(G)}{mn}.$$ \end{corollary} \begin{proof} Recall that the first Zagreb index of the graph $G$ is defined as \begin{equation}\label{eden} M_{1}(G)=\sum_{u\in V(G)} \deg(u)^{2}.\end{equation} It is easy to show that \begin{equation}\label{dva}Var(G)=\frac{M_{1}(G)}{n}-\frac{4m^{2}}{n^{2}}.\end{equation} From Theorem 2.2 and (\ref{dva}) we obtain the required inequality. \qed \end{proof} \begin{remark} Let $G$ be a graph on $n$ vertices and $m$ edges, and let $d_{1},d_{2},\ldots, d_{n}$ be its vertex-degrees. Corollary 2.3 is equivalent to the inequality
$$d_{1}^{2}+d_{2}^{2}+\ldots+d_{n}^{2}\geq \frac{(d_{1}+d_{2}+\ldots+d_{n})^{2}}{n}+\frac{(\sum_{i\sim j}|d_{i}-d_{j}|)^{2}}{mn},$$ which improves the inequality between the quadratic and arithmetic means of the positive numbers $d_{1},d_{2},\ldots, d_{n}$. \end{remark}
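The improved lower bound can be checked numerically; the short Python sketch below (an illustration, not part of the paper) evaluates both sides on the path $P_{4}$.

```python
# Check Corollary 2.3, M1(G) >= 4m^2/n + Irr(G)^2/(mn), on the path P4.

n, edges = 4, [(0, 1), (1, 2), (2, 3)]
deg = [0] * n
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
m = len(edges)
M1 = sum(d * d for d in deg)                          # first Zagreb index
irr = sum(abs(deg[u] - deg[v]) for u, v in edges)     # Albertson index
assert M1 >= 4 * m ** 2 / n + irr ** 2 / (m * n)      # 10 >= 9 + 1/3
```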
\subsection{A new upper bound for the energy of graphs} \quad The \emph{adjacency matrix} $A=A(G)$ of the graph $G$ is the $n\times n$ matrix $[a_{ij}]$ with $a_{ij}=1$ if $v_{i}$ is adjacent to $v_{j},$ and $a_{ij}=0$ otherwise. The eigenvalues $\lambda_{1}, \lambda_{2}, \ldots , \lambda_{n}$ of the graph $G$ are the eigenvalues of its adjacency matrix $A$. Since $A$ is a symmetric matrix with zero trace, these eigenvalues are real and their sum is equal to zero.
The \emph{energy }of $G$, denoted by $E(G)$, was first defined by I. Gutman in \cite{ivan0} as the sum of the absolute values of its eigenvalues. Thus,
\begin{equation}\label{energija} E(G)=\sum_{i=1}^{n} |\lambda_{i}|. \end{equation} This concept arose in theoretical chemistry, since it can be used to approximate the total $\pi$-electron energy of a molecule. The first result relating the energy of a graph with its order and size is the following upper bound obtained in 1971 by McClelland \cite{pi}: \begin{equation}\label{bound2} E(G)\leq \sqrt{2mn}. \end{equation} Since then, numerous other bounds for $E(G)$ were discovered, see \cite{babic, ivan5, jahan1, li}. In \cite{kulen1} Koolen and Moulton improved the bound (\ref{bound2}) as follows: If $2m>n$ and $G$ is a graph with $n$ vertices and $m$ edges, then \begin{equation}\label{kulen} E(G)\leq \frac{2m}{n}+\sqrt{(n-1)\left(2m-\left(\frac{2m}{n}\right)^{2}\right)}. \end{equation}
We improve the bound in (\ref{bound2}) by using the $IRB$ index of a given graph and the technique presented in this paper. Recall that the $IRB$ index of the graph $G$ is defined as \\ $IRB(G)=\sum_{(u,v)\in E(G)}(\sqrt{\deg(u)}-\sqrt{\deg(v)})^2$.
\begin{theorem} Let $G$ be a connected graph on $n$ vertices and $m$ edges. Then $$E(G) \leq \sqrt{2mn - IRB(G)}.$$ \end{theorem} \begin{proof} For the graph $G$ we define a vector $x \in \mathbb{R}^{n}$ such that $x(u)=\sqrt{\deg(u)},$ where $u\in V(G).$ Then we rewrite $IRB(G) = \sum_{(u,v)\in E(G)}(\sqrt{\deg(u)}-\sqrt{\deg(v)})^2=\sum_{(u,v)\in E(G)} (x(u)-x(v))^{2}.$ Applying Lemma 2.1 and equation (\ref{eq:1}) we obtain \begin{gather*} IRB(G)\leq \frac{\lambda_{\max}}{n}\cdot( n\sum_{u\in V(G)}x(u)^{2}-(\sum_{v\in V(G)}x(v))^{2}) \\ \leq n\sum_{u\in V(G)}x(u)^{2}-(\sum_{v\in V(G)}x(v))^{2} = n\sum_{u\in V(G)}\deg(u)-(\sum_{v\in V(G)}\sqrt{\deg(v)})^{2} = 2mn - \Big(\sum_{v\in V(G)}\sqrt{\deg(v)}\Big)^{2}. \end{gather*} In \cite{ariz} it was proven that $E(G)\leq \sum_{u\in V(G)} \sqrt{\deg(u)}.$ Combining the two estimates we get $E^{2}(G)\leq \big(\sum_{u\in V(G)}\sqrt{\deg(u)}\big)^{2}\leq 2mn-IRB(G),$ that is, $$E(G) \leq \sqrt{2mn - IRB(G)}.$$ \qed \end{proof}
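The key intermediate estimate of the proof, $IRB(G)\leq 2mn-(\sum_{u}\sqrt{\deg(u)})^{2}$, can be verified numerically without computing eigenvalues. The sketch below (an illustration, not part of the paper) checks it on the path $P_{4}$; combined with $E(G)\leq\sum_{u}\sqrt{\deg(u)}$ this is exactly Theorem 2.5.

```python
import math

# Check IRB(G) <= 2mn - (sum of sqrt-degrees)^2 on the path P4.

n, edges = 4, [(0, 1), (1, 2), (2, 3)]
deg = [0] * n
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
m = len(edges)
irb = sum((math.sqrt(deg[u]) - math.sqrt(deg[v])) ** 2 for u, v in edges)
s = sum(math.sqrt(d) for d in deg)       # sum of square roots of degrees
assert irb <= 2 * m * n - s ** 2 + 1e-12   # approx 0.343 <= 24 - 23.314
```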
Unfortunately we are not able to compare the bounds in Theorem 2.5 and (\ref{kulen}).
\section{Mostar index} \quad As we mentioned in the introduction, the Mostar index is a new bond-additive structural invariant which measures the peripherality in graphs. Moreover, the Mostar index can be used to measure how much a given graph $G$ deviates from being distance-balanced. This index was introduced by Došlić et al.\ in \cite{mostar} and has been studied in several publications; see, for example, \cite{mostar,gao,reti}. In \cite{mostar} it was proven that $Mo(P_{n})\leq Mo(T_{n})\leq (n-1)(n-2)=Mo(S_{n}),$ where $P_{n}, T_{n}$ and $S_{n}$ are paths, trees and stars on $n$ vertices, respectively. We list several known results. \begin{proposition}\cite{gao} Let $G$ be a graph of diameter $2$. Then $Mo(G)=Irr(G).$ \end{proposition} \begin{proposition}\cite{gao} Let $T$ be a tree. Then $Mo(T)$ and $Irr(T)$ have the same parity. \end{proposition} \begin{theorem}\cite{gao} Let $T_{n}$ be a tree on $n$ vertices. Then $$Mo(T_{n})\geq Irr(T_{n})$$ with equality if and only if $T_{n}$ is isomorphic with $S_{n}$. \end{theorem} \begin{theorem} \label{retii}\cite{reti} If $G$ is a connected graph of order $n\geq 3$ and size $m$, then $$0\leq Mo(G)\leq m(n-2)$$ with the left equality if and only if $G$ is a distance-balanced graph and with the right equality if and only if $G$ is isomorphic to the star graph $S_{n}.$ \end{theorem}
In this section we derive several new upper bounds for the Mostar index. \subsection{Bipartite graphs with diameter three} \quad In \cite{gao} it was proven that graphs with diameter two have equal Mostar and Albertson indices, that is, $Mo(G)=Irr(G).$ In this subsection we consider bipartite graphs with diameter three. Let $G=(V, E)$ be a bipartite graph with diameter three, of order $n$ and size $m$. Let $V_{1}$ and $V_{2}$ be the partite sets of $G$, that is, $V=V_{1}\cup V_{2}.$
We suppose that $|V_{1}|=n_{1}$ and $|V_{2}|=n_{2}.$ Let $e=(u,v)\in E(G)$ with $u\in V_{1}$ and $v\in V_{2}.$ Since $G$ is bipartite with diameter three, a vertex $w\in V_{1}$ is closer to $v$ than to $u$ precisely when $w\in N(v)\setminus\{u\}$, and a vertex $w\in V_{2}$ is closer to $u$ than to $v$ precisely when $w\in N(u)\setminus\{v\}$. Hence $n_{e}(u)=n_{1}-(\deg(v)-1)+(\deg(u)-1)=n_{1}+\deg(u)-\deg(v)$ and, symmetrically, $n_{e}(v)=n_{2}+\deg(v)-\deg(u).$ Thus
\begin{equation} \label{mostar} Mo(G)=\sum_{(u,v)\in E(G)}|n_{e}(u)-n_{e}(v)|=\sum_{(u,v)\in E(G)}|(n_{1}+2\deg(u))-(n_{2}+2\deg(v))|. \end{equation}
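The counts $n_{e}(u)=n_{1}+\deg(u)-\deg(v)$ and $n_{e}(v)=n_{2}+\deg(v)-\deg(u)$ can be sanity-checked by breadth-first search. The Python sketch below (an illustration, not part of the paper) does so on the path $P_{4}$, which is bipartite with diameter three.

```python
from collections import deque

def dists(n, adj, s):
    """BFS distances from vertex s."""
    d = [None] * n
    d[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if d[w] is None:
                d[w] = d[u] + 1
                q.append(w)
    return d

# P4 as a bipartite graph of diameter three: V1 = {0, 2}, V2 = {1, 3}.
n, edges = 4, [(0, 1), (1, 2), (2, 3)]
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
deg = [len(a) for a in adj]
n1 = n2 = 2
V1 = {0, 2}
for u, v in edges:
    if u not in V1:
        u, v = v, u                          # orient the edge so that u is in V1
    du, dv = dists(n, adj, u), dists(n, adj, v)
    ne_u = sum(du[w] < dv[w] for w in range(n))
    ne_v = sum(dv[w] < du[w] for w in range(n))
    assert ne_u == n1 + deg[u] - deg[v]      # formula from the text
    assert ne_v == n2 + deg[v] - deg[u]
```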
Using Lemma 2.1 we derive the following result.
\begin{theorem} Let $G$ be a bipartite graph on $n$ vertices, $m$ edges and with diameter three. Let $V_{1}$ and $V_{2}$ be the partite sets of $G$ such that $|V_{1}|=n_{1}$ and $|V_{2}|=n_{2}.$ Then $$Mo(G)\leq \sqrt{m}\sqrt{\frac{\lambda_{\max}}{n}(n_{1}n_{2}n^{2}-4mn^{2}+4M_{1}(G)n-4n_{1}^{2}n_{2}^{2}-16m^{2}+16n_{1}n_{2}m)}.$$ \end{theorem} \begin{proof} For the graph $G$ we define a vector $x \in \mathbb{R}^{n}$ as follows:
\begin{equation*} \label{laplacian} x(w) = \left\{
\begin{array}{lc} n_{1}+2\deg(w), & \mbox{ if } w \in V_{1} \\
n_{2}+2\deg(w), & \mbox{ if } w \in V_{2} .\\
\end{array}
\right.
\end{equation*} From (\ref{mostar}) and from the inequality between arithmetic and quadratic mean we get $$Mo(G)\leq \sqrt{m} \sqrt{\sum_{(u,v)\in E(G)}((n_{1}+2\deg(u))-(n_{2}+2\deg(v)))^{2}}=\sqrt{m}\sqrt{\sum_{(u,v)\in E(G)}(x(u)-x(v))^{2}}.$$ Now, applying Lemma 2.1 we obtain $$Mo(G)\leq \sqrt{m}\sqrt{\frac{\lambda_{\max}}{2n}\cdot \sum_{u\in V(G)}\sum_{v\in V(G)}(x(u)-x(v))^{2}}=\sqrt{m}\sqrt{\frac{\lambda_{\max}}{n}[n\sum_{u\in V(G)}x(u)^{2}-(\sum_{v\in V(G)}x(v))^{2}]}=$$ $$=\sqrt{m}\sqrt{\frac{\lambda_{\max}}{n}\left(n(n_{1}^{3}+n_{2}^{3}+4M_{1}(G)+4n_{1}m+4n_{2}m)-(n^{2}-2n_{1}n_{2}+4m)^{2}\right)}=$$ $$\sqrt{m}\sqrt{\frac{\lambda_{\max}}{n}(n_{1}n_{2}n^{2}-4mn^{2}+4M_{1}(G)n-4n_{1}^{2}n_{2}^{2}-16m^{2}+16n_{1}n_{2}m)}.$$
\qed
\end{proof}
\begin{remark} If $n_{1}=n_{2}$, then $Mo(G)=2\sum_{(u,v)\in E(G)}| \deg(u)-\deg(v)|=2Irr(G).$ Setting $n_{1}=n_{2}=\frac{n}{2}$ in Theorem 3.5 we get $$Mo(G)=2\cdot Irr(G)\leq 2\sqrt{m(nM_{1}(G)-4m^{2})\frac{\lambda_{\max}}{n}},$$ which matches Goldberg's bound given in \cite{gold}. \end{remark}
\begin{corollary} Let $G$ be a bipartite graph with diameter three such that $|V_{1}|=n_{1}$ and $|V_{2}|=n_{2}$. Then $$Mo(G)\leq \sqrt{\frac{n_{1}^{2}n_{2}^{2}n^{2}}{3}+\left(\frac{2n_{1}^{2}n_{2}^{2}}{27}+\frac{n_{1}n_{2}n^{2}}{18}\right)\sqrt{4n_{1}^{2}n_{2}^{2}+3n_{1}n_{2}n^{2}}-\frac{4n_{1}^{3}n_{2}^{3}}{27}}.$$ \end{corollary} \begin{proof} Clearly, $G$ is a triangle-free graph, and its first Zagreb index satisfies $M_{1}(G)\leq mn.$ Using this bound in Theorem 3.5 we get $$Mo(G)\leq \sqrt{m(n_{1}n_{2}n^{2}-4n_{1}^{2}n_{2}^{2}-16m^{2}+16n_{1}n_{2}m)}.$$ Let $f(x)=-16x^{3}+16n_{1}n_{2}x^{2}+(n_{1}n_{2}n^{2}-4n_{1}^{2}n_{2}^{2})x$ be a function defined on the interval $[1, n_{1}n_{2}].$ Clearly $m\in [1, n_{1}n_{2}].$ Since $f'(x)=-48x^{2}+32n_{1}n_{2}x+n_{1}n_{2}n^{2}-4n_{1}^{2}n_{2}^{2}$ we get \begin{equation} \label{max}f'(x)\geq 0 \Leftrightarrow x\in [\frac{n_{1}n_{2}}{3}-\frac{\sqrt{4n_{1}^{2}n_{2}^{2}+3n_{1}n_{2}n^{2}}}{12}, \frac{n_{1}n_{2}}{3}+\frac{\sqrt{4n_{1}^{2}n_{2}^{2}+3n_{1}n_{2}n^{2}}}{12}]. \end{equation} From (\ref{max}) we conclude that $f(x)$ achieves its maximum (on $[1,n_{1}n_{2}]$) at $m_{\max}=\frac{n_{1}n_{2}}{3}+\frac{\sqrt{4n_{1}^{2}n_{2}^{2}+3n_{1}n_{2}n^{2}}}{12}.$ Hence $$Mo(G)\leq \sqrt{f(m)}\leq \sqrt{f(m_{\max})}=\sqrt{m_{\max}(n_{1}n_{2}n^{2}-4n_{1}^{2}n_{2}^{2}-16m_{\max}^{2}+16n_{1}n_{2}m_{\max})}=$$ $$=\sqrt{\frac{n_{1}^{2}n_{2}^{2}n^{2}}{3}+\left(\frac{2n_{1}^{2}n_{2}^{2}}{27}+\frac{n_{1}n_{2}n^{2}}{18}\right)\sqrt{4n_{1}^{2}n_{2}^{2}+3n_{1}n_{2}n^{2}}-\frac{4n_{1}^{3}n_{2}^{3}}{27}}.$$
\qed \end{proof}
In chemical graph theory, the Szeged index is another edge-based topological index of molecules. This index was introduced by Gutman in \cite{gutman10}. The Szeged index of a connected graph $G$ is defined as \begin{equation}\label{seged} Sz(G)=\sum_{e=(u,v)\in E(G)} n_{e}(u)\cdot n_{e}(v). \end{equation} In the next result we give a relation between the Mostar and Szeged indices for bipartite graphs. \begin{proposition}Let $G$ be a bipartite graph on $n$ vertices and $m$ edges. Then \begin{equation}\label{bound}Mo(G)\leq \sqrt{m^{2}n^{2}-4mSz(G)}. \end{equation} \end{proposition} \begin{proof} Since $G$ is a bipartite graph, $n_{e}(u)+n_{e}(v)=n$ for every edge $e=(u,v).$ Thus \begin{equation}\label{haha} (n_{e}(u)-n_{e}(v))^{2}=n^{2}-4n_{e}(u)n_{e}(v). \end{equation}
From the inequality between quadratic and arithmetic mean for the numbers $|n_{e}(u)-n_{e}(v)|$ we get
$$\frac{Mo^{2}(G)}{m}=\frac{\left(\sum_{e\in E(G)}|n_{e}(u)-n_{e}(v)|\right)^{2}}{m}\leq \sum_{e\in E(G)}(n_{e}(u)-n_{e}(v))^{2}=mn^{2}-4Sz(G).$$ Thus $Mo^{2}(G)\leq m^{2}n^{2}-4mSz(G).$ \qed \end{proof}
\begin{remark} When $n_{e}(u)+n_{e}(v)=n$, the product $n_{e}(u)n_{e}(v)$ achieves its minimum $n-1$ (when one of the numbers is $n-1$ and the other is $1$). Thus $Sz(G)\geq m(n-1).$ Substituting this into (\ref{bound}) we get $Mo^{2}(G)\leq m^{2}n^{2}-4mSz(G)\leq m^{2}n^{2}-4m\cdot m\cdot(n-1)=m^{2}(n-2)^{2}.$ Hence $$Mo(G)\leq m(n-2),$$ which is the trivial upper bound for $Mo(G).$ We note that the equality holds for the complete bipartite graph $K_{1,n-1}.$ \qed \end{remark}
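The Mostar--Szeged relation can be verified numerically as well; the sketch below (an illustration, not part of the paper) computes both indices by breadth-first search on the bipartite path $P_{4}$ and checks the bound.

```python
from collections import deque

def dists(n, adj, s):
    """BFS distances from vertex s."""
    d = [None] * n
    d[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if d[w] is None:
                d[w] = d[u] + 1
                q.append(w)
    return d

n, edges = 4, [(0, 1), (1, 2), (2, 3)]     # P4, a bipartite graph
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
mo = sz = 0
for u, v in edges:
    du, dv = dists(n, adj, u), dists(n, adj, v)
    ne_u = sum(du[w] < dv[w] for w in range(n))
    ne_v = sum(dv[w] < du[w] for w in range(n))
    mo += abs(ne_u - ne_v)                 # Mostar index contribution
    sz += ne_u * ne_v                      # Szeged index contribution
m = len(edges)
assert mo ** 2 <= m ** 2 * n ** 2 - 4 * m * sz     # 16 <= 24
```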
\subsection{A new upper bound for the Mostar index}
\quad In the last part of the paper we consider graphs that are not necessarily triangle-free. For an edge $(u,v)$ of $G$ we define $n_{uv}=|\{w\;|\; d(w,v)=d(w,u)\}|.$ It can be noted that $0\leq n_{uv}\leq n-2.$ It is easy to see that for any edge $(u,v)$ of $G$, $n_{e}(u)+n_{e}(v)=n-n_{uv}.$ \\Denote by $t(G)$ the number of triangles of a graph $G$. We will use the following result: \begin{proposition} For the number of triangles of $G$ it holds $$\frac{m(4m-n^{2})}{n}\leq 3t(G)\leq \sum_{(u,v)\in E(G)} n_{uv}.$$ \end{proposition} The first inequality is due to Bollob\' {a}s \cite{bolobas}. For the second inequality, note that every triangle containing the edge $(u,v)$ contributes a vertex $w$ with $d(w,u)=d(w,v)=1$ to $n_{uv}$; since each triangle contains three edges, it is counted three times in $\sum_{(u,v)\in E(G)} n_{uv}$, which gives $3t(G)\leq \sum_{(u,v)\in E(G)} n_{uv}$.
\begin{theorem} Let $G$ be a graph on $n$ vertices and $m$ edges. Then $$Mo(G)\leq \sqrt{m^{2}(n-2)^{2}-\frac{m^{2}(n-2)(4m-n^{2})}{n}}.$$ \end{theorem} \begin{proof} We have already shown that $ Mo^{2}(G)\leq m \sum_{(u,v)\in E(G)}(n_{e}(u)-n_{e}(v))^{2}.$ Thus $$Mo^{2}(G)\leq m\sum_{(u,v)\in E(G)}(n_{e}(u)+n_{e}(v))^{2}-4m\sum_{(u,v)\in E(G)}n_{e}(u)n_{e}(v)\leq$$ $$\leq m\sum _{(u,v)\in E(G)}(n-n_{uv})^{2}-4m \sum_{(u,v)\in E(G)} (n-1-n_{uv})$$ $$=m^{2}n^{2}-4m^{2}(n-1)-m\sum_{(u,v)\in E(G)}n_{uv}(2n-n_{uv}-4)$$ $$\leq m^{2}(n-2)^{2}-m\sum_{(u,v)\in E(G)}n_{uv}(2n-4-(n-2))=$$ $$=m^{2}(n-2)^{2}-m(n-2)\sum_{(u,v)\in E(G)}n_{uv}$$ $$\leq m^{2}(n-2)^{2}-\frac{m^{2}(n-2)(4m-n^{2})}{n}.$$ \qed
\end{proof} \begin{remark} In the above theorem we can assume that $m > \frac{n^{2}}{4}.$ By Mantel's theorem, this implies that $G$ is not a triangle-free graph. Under this assumption the subtracted term is positive, which makes our bound better than the trivial bound $m(n-2)$ given in Theorem 3.4. \end{remark}
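As a final numerical sanity check (an illustration, not part of the paper), the bound above can be evaluated on a dense graph with $m>n^{2}/4$; we use $K_{5}$ minus one edge, which contains many triangles.

```python
import math
from collections import deque

def dists(n, adj, s):
    """BFS distances from vertex s."""
    d = [None] * n
    d[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if d[w] is None:
                d[w] = d[u] + 1
                q.append(w)
    return d

# K5 minus the edge (3, 4): n = 5, m = 9 > n^2/4 = 6.25.
n = 5
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if (u, v) != (3, 4)]
m = len(edges)
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
mo = 0
for u, v in edges:
    du, dv = dists(n, adj, u), dists(n, adj, v)
    ne_u = sum(du[w] < dv[w] for w in range(n))
    ne_v = sum(dv[w] < du[w] for w in range(n))
    mo += abs(ne_u - ne_v)
bound = math.sqrt(m ** 2 * (n - 2) ** 2 - m ** 2 * (n - 2) * (4 * m - n ** 2) / n)
assert mo <= bound      # 6 <= sqrt(194.4), well below the trivial bound m(n-2) = 27
```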
\section*{Acknowledgment}
The authors would like to thank Dr. Ademir Hujdurović for introducing us to the Mostar index.
\end{document} |
\begin{document}
\title{A Trust Region Method for Finding Second-Order Stationarity in Linearly Constrained Non-Convex Optimization}
\begin{abstract}
\noindent Motivated by the TRACE algorithm~\cite{curtis2017trust}, we propose a trust region algorithm for finding second order stationary points of a linearly constrained non-convex optimization problem. We show the convergence of the proposed algorithm to ($\epsilon_g, \epsilon_H$)-second order stationary points in $\widetilde{\mathcal{O}}\left(\max\left\{\epsilon_g^{-3/2}, \epsilon_H^{-3}\right\}\right)$ iterations. This iteration complexity is achieved for general linearly constrained optimization without cubic regularization of the objective function.
\end{abstract}
\section{Introduction}
Due to its wide application in machine learning, solving non-convex optimization problems has attracted significant attention in recent years \cite{anandkumar2016efficient,cartis2013evaluation, cartis2015evaluation, cartis2017second, curtis2017inexact,curtis2017trust, lu2012trust}. While this topic has been studied for decades, recent applications and modern analytical and computational tools revived this area of research. In particular, a wide variety of numerical methods for solving non-convex problems have been proposed in recent years \cite{lee2016gradient, hong2018gradient, nesterov2006cubic, cartis2012adaptive, cartis2011adaptive, cartis2011adaptive-2,nouiehed2019solving}. \\
For general non-convex optimization problems, it is well-known that computing a local optimum is NP-Hard \cite{murty1987some}. Given this hardness result, recent focus has been shifted toward computing (approximate) first and second-order stationary points of the objective function. The latter set of points provides stronger guarantees compared to the former as it constitutes a smaller subset of points that includes local and global optima. Therefore, when applied to problems with ``\textit{nice}'' geometrical properties, the set of second order stationary points could even coincide with the set of global optima -- see \cite{bandeira2016low, barazandeh2018behavior, boumal2016nonconvex, nouiehed2018learning, sun2016geometric, ge2015escaping,sun2017complete-2, sun2017complete} for examples of such objective functions.\\
Convergence to second-order stationarity in the smooth unconstrained setting has been thoroughly investigated in the optimization literature \cite{ge2016matrix, curtis2017exploiting, cartis2011adaptive, cartis2011adaptive-2, nesterov2006cubic, curtis2017inexact, curtis2017trust, ge2015escaping}. As a second-order algorithm, \cite{nesterov2006cubic} proposed a cubic regularization method that converges to approximate second-order stationarity in a finite number of steps. More recently, \cite{cartis2011adaptive, cartis2011adaptive-2} proposed the Adaptive Regularization Cubic algorithm (ARC) that computes an approximate solution for a local cubic model at each iteration. They established convergence to first and second order stationary points with optimal complexity rates. Motivated by these rates, \cite{curtis2017trust} proposed an adaptive trust region method, entitled TRACE, and established iteration complexity bounds for finding $\epsilon$-first-order stationarity with worst-case iteration complexity ${\cal O}(\epsilon^{-3/2})$; and for finding $(\epsilon_g,\epsilon_H)$-second-order stationarity with worst-case complexity ${\cal O}(\max\{\epsilon_g^{-3/2}, \epsilon_H^{-3}\})$. This method alters the acceptance criteria adopted by traditional trust region methods, and implements a new mechanism for updating the trust region radius. A more recent second-order algorithm that uses a dynamic choice of direction and step-size was proposed in \cite{curtis2017exploiting}. This method computes first and second order descent directions and chooses the direction that predicts a more significant reduction in the objective value. All of the above methods satisfy the set of generic conditions of a general framework proposed in \cite{curtis2017inexact}.\\
Recent results show that for smooth unconstrained optimization problems, even first order methods can converge to second-order stationarity, almost surely. For instance, \cite{ge2015escaping} shows that noisy stochastic gradient descent escapes \textit{strict saddle points} with probability one. Therefore, when applied to problems satisfying the \textit{strict saddle property} this method converges to a local minimum. A similar result was shown for the vanilla gradient descent algorithm in \cite{lee2016gradient}.
A negative result provided by \cite{du2017gradient} shows that vanilla gradient descent can take an exponential number of steps to converge to second-order stationarity. This computational inefficiency can be overcome by a smart perturbed form of gradient descent proposed in \cite{jin2017escape}. \\
Most of the above results can be extended to smooth constrained optimization in the presence of simple manifold constraints. In this case, \cite{lee2017first} shows that manifold gradient descent converges to second-order stationarity, almost surely. More recently, \cite{hong2018gradient} established similar results for gradient primal-dual algorithms applied on linearly constrained optimization problems. When the constraints are of non-manifold type, projected gradient descent is a natural replacement of gradient descent. As a negative result, \cite{nouiehed2018convergence} constructs an example, with a single linear constraint, showing that there is a positive probability that projected gradient descent with random initialization converges to a strict saddle point. This raises the question of
\textit{whether there exists a first-order method that can converge to second-order stationarity in the presence of inequality constraints.} To our knowledge, no affirmative answer has been given to this question to date.\\
The answer to the question above is obvious when replacing first-order methods with second-order methods. In fact, convergence to second-order stationarity in the presence of convex constraints has been established by adapting many of the aforementioned second-order algorithms \cite{cartis2012adaptive, cartis2015evaluation, cartis2014complexity}. The work in \cite{cartis2012adaptive} adapts the ARC algorithm and shows convergence to $\epsilon_g$-first-order stationarity in at most ${\cal O}(\epsilon_g^{-3/2})$ iterations. \cite{birgin2018regularization} uses an active-set method and cubic regularization to achieve this rate for special types of constraints. The work in \cite{bian2015complexity} uses an interior point method to achieve second order stationarity in ${\cal O}\big(\max\{\epsilon_g^{-3/2}, \epsilon_H^{-3}\}\big)$ iterations for box constraints. For general constraints, \cite{cartis2017second} proposed a \textit{conceptual} trust region algorithm that can compute an $\epsilon$-${q^{th}}$ stationary point in at most ${\cal O}(\epsilon^{-q-1})$ iterations.
More recently, \cite{mokhtari2018escaping} proposed a general framework for computing $(\epsilon_g, \epsilon_H)$-second-order stationary points for convex-constrained optimization problems with worst-case complexity ${\cal O}\big(\max\{\epsilon_g^{-2}, \epsilon_H^{-3}\}\big)$. In particular, this framework allows for using Frank-Wolfe or projected gradient descent to converge to an approximate first-order stationary point, and then computes a second-order descent direction if it exists. The iteration complexity bounds computed for these methods hide the per-iteration complexity of solving the quadratic or cubic sub-problems. As shown in \cite{nouiehed2018convergence}, for linearly constrained non-convex problems, even checking whether a given point is an \textit{approximate} second-order stationary point is NP-Hard. Despite this hardness result, \cite{nouiehed2018convergence} proposed a second-order Frank-Wolfe algorithm that adapts the dynamic method introduced in \cite{curtis2017exploiting}, and identified instances for which solving the constrained quadratic sub-problem can be done efficiently. The algorithm converges to approximate first and second-order stationarity with a worst-case complexity similar to \cite{curtis2017exploiting}.
However, second-order information as utilized in the adapted ARC algorithm yields better iteration complexity rates. Motivated by this result, in this paper, we propose a trust region algorithm, entitled LC-TRACE, that adapts TRACE to linearly-constrained non-convex problems. We establish the convergence of our algorithm to $(\epsilon_g,\epsilon_H)$-second order stationarity in at most $\widetilde{\cal O}(\max\{\epsilon_g^{-3/2}, \epsilon_H^{-3}\})$ iterations. \\
The remainder of this paper is organized as follows. In Section~\ref{sec:FirstSecondOS}, we first review and define the concepts of first and second order stationarity. Then, we review some of our previous results in Section~\ref{sec:ReviewPrevious}. Finally, in Section~\ref{sec:LC-TRACE}, we propose and analyze the LC-TRACE algorithm.
\section{First and Second Order Stationarity Definitions}
\label{sec:FirstSecondOS}
To understand the definition of first and second order stationarity, let us start by considering the unconstrained optimization problem
\begin{equation}\label{eq:General-Optimization-Prob}
\underset{\mathbf{x} \in \mathbb{R}^{n}}{\min} \, \, f(\mathbf{x}),
\end{equation}
where $f: \mathbb{R}^n \mapsto \mathbb{R}$ is a twice continuously differentiable function. We say a point $\bar\mathbf{x}$ is a first order stationary point (FOSP) of~\eqref{eq:General-Optimization-Prob} if $\nabla f(\bar\mathbf{x}) = \mathbf{0}$. Similarly, a point $\bar\mathbf{x}$ is said to be a second-order stationary point (SOSP) of~\eqref{eq:General-Optimization-Prob} if $\nabla f(\bar\mathbf{x}) = \mathbf{0}$ and $\nabla^2 f(\bar\mathbf{x}) \succeq 0$. In practice, most of the algorithms used for finding stationary points are iterative. Therefore, we define the concept of approximate first and second order stationarity. We say a point $\bar\mathbf{x}$ is an
$\epsilon_g$-first-order stationary point if
\begin{equation}\label{eq:FOSUnconstrained}
\|\nabla f(\bar\mathbf{x})\|_2 \leq \epsilon_g.
\end{equation}
Moreover, we say a point $\bar\mathbf{x}$ is an
$(\epsilon_g, \epsilon_H)$-second-order stationary point if
\begin{equation}\label{eq:SoSUnconstrained}
\|\nabla f(\bar\mathbf{x})\|_2 \leq \epsilon_g \mbox{ and } \nabla^2 f(\bar\mathbf{x}) \succeq -\epsilon_H\mathbf{I}.
\end{equation}
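For intuition, the two unconstrained conditions above are straightforward to test numerically. The Python sketch below (an illustration with a hand-picked saddle function, not part of the paper) checks them for $f(x,y)=x^{2}-y^{2}$ at the origin: the gradient vanishes, so the origin is $\epsilon_g$-first-order stationary, but the Hessian eigenvalue $-2$ rules out $(\epsilon_g,\epsilon_H)$-second-order stationarity whenever $\epsilon_H<2$.

```python
import math

# f(x, y) = x^2 - y^2 has a strict saddle at the origin.

def grad(x, y):
    return (2 * x, -2 * y)

def hess_min_eig():
    # Hessian is constant, diag(2, -2).  For a symmetric 2x2 matrix
    # [[a, b], [b, c]] the smaller eigenvalue is
    # (a + c)/2 - sqrt(((a - c)/2)^2 + b^2).
    a, b, c = 2.0, 0.0, -2.0
    return (a + c) / 2 - math.sqrt(((a - c) / 2) ** 2 + b ** 2)

gx, gy = grad(0.0, 0.0)
assert math.hypot(gx, gy) <= 1e-8    # first-order condition holds
assert hess_min_eig() == -2.0        # second-order condition fails for eps_H < 2
```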
We now extend these definitions to the constrained optimization problem
\begin{equation}\label{eq:General-Cons-Optimization-Prob}
\underset{\mathbf{x} \in \mathcal{P}}{\min} \, \, f(\mathbf{x}),
\end{equation}
where $\mathcal{P}\subseteq \mathbb{R}^n$ is a closed convex set. As defined in \cite{bertsekas1999nonlinear}, we say $\bar{\mathbf{x}} \in {\cal P}$ is a FOSP of \eqref{eq:General-Cons-Optimization-Prob} if
\begin{equation}\label{eq:FOSConstrained}
\langle \nabla f(\bar{\mathbf{x}}), \mathbf{x} - \bar{\mathbf{x}} \rangle \geq 0 \quad \forall \, \mathbf{x} \in \mathcal{P}.
\end{equation}
Similarly, we say a point $\bar{\mathbf{x}}$ is a SOSP of the optimization problem \eqref{eq:General-Cons-Optimization-Prob} if $\bar{\mathbf{x}} \in {\cal P}$ is a first order stationary point and
\begin{equation}\label{eq:SoSConstrained}
0\leq \mathbf{d}^T \nabla^2f(\bar{\mathbf{x}}) \mathbf{d},\quad \forall \, \mathbf{d} \,\,\rm s.t.\,\, \langle\mathbf{d},\nabla f(\bar{\mathbf{x}}) \rangle =0\textrm{ and } \bar{\mathbf{x}} + \mathbf{d} \in \mathcal{P}.
\end{equation}
Notice that when ${\cal P}=\mathbb{R}^n$, the definitions above obviously correspond to the definitions in the unconstrained case.\\
Motivated by \eqref{eq:FOSConstrained} and \eqref{eq:SoSConstrained}, given a feasible point $\mathbf{x}$, we define the following first and second order stationarity measures
\begin{equation}\label{eq:X_k}
\begin{split}
{\cal X}(\mathbf{x}) \triangleq - \;\min_{\mathbf{s}}\quad &\langle \nabla f(\mathbf{x}), \mathbf{s} \rangle \\
\rm s.t. \quad & \mathbf{x} + \mathbf{s} \in {\cal P}, \, \|\mathbf{s}\|\leq 1.
\end{split}
\end{equation}
and
\begin{equation}\label{eq:psi_k}
\begin{split}
{\cal \psi}(\mathbf{x}) \triangleq - \; \min_{\mathbf{d}}\quad &\mathbf{d}^T \nabla^2f(\mathbf{x})\mathbf{d} \, \\
\rm s.t. \quad & \mathbf{x} + \mathbf{d} \in {\cal P}, \, \|\mathbf{d}\|\leq 1\\ &\langle \nabla f(\mathbf{x}), \mathbf{d} \rangle \leq 0.
\end{split}
\end{equation}
Notice that since $\mathbf{x}$ is feasible, ${\cal X}(\mathbf{x}) \geq 0 $ and ${\cal \psi}(\mathbf{x}) \geq 0$. Moreover, these optimality measures, which are also used in \cite{nouiehed2018convergence}, can be linked to the standard definitions in \cite{bertsekas1999nonlinear} by the following lemma.
\begin{lemma}[\cite{nouiehed2018convergence}] \label{lem:StationarityContinuous}
The first and second order stationarity measures ${\cal X}(\cdot)$ and ${\cal \psi}(\cdot)$ are continuous in $\mathbf{x}$. Moreover, if $\bar\mathbf{x} \in {\cal P}$ then
\begin{itemize}
\item ${\cal X} (\bar\mathbf{x}) = 0$ if and only if $\bar\mathbf{x}$ is a first order stationary point.
\item ${\cal X}(\bar{\mathbf{x}}) = {\cal \psi}(\bar{\mathbf{x}}) = 0$ if and only if $\bar\mathbf{x}$ is a second order stationary point.
\end{itemize}
\end{lemma}
Using this lemma, we define the approximate first and second order stationarity.
\begin{definition}\label{def:EpsFOSSOS}
\textbf{Approximate Stationary Point:} For problem \eqref{eq:General-Cons-Optimization-Prob}, \begin{itemize}
\item A point $\bar\mathbf{x} \in {\cal P}$ is said to be an $\epsilon_g$-first order stationary point if ${\cal X}(\bar\mathbf{x}) \leq \epsilon_g$.
\item A point $\bar\mathbf{x} \in {\cal P}$ is said to be an $(\epsilon_g, \epsilon_H)$-second order stationary point if ${\cal X}(\bar\mathbf{x}) \leq \epsilon_g$ and ${\cal \psi}(\bar{\mathbf{x}}) \leq \epsilon_H$.
\end{itemize}
\end{definition}
In the unconstrained scenario, these definitions correspond to the standard definitions \eqref{eq:FOSUnconstrained} and \eqref{eq:SoSUnconstrained}.
\begin{remark}
Notice that our definition of $(\epsilon_g,\epsilon_H)$-second order stationarity differs from the definition in \cite{mokhtari2018escaping}. In particular, there are two major differences:
\begin{itemize}
\item[1)] The definition used for approximate first and second order stationarity in \cite{mokhtari2018escaping} does not include the normalization constraints $\|\mathbf{s}\|\leq 1$ and $\|\mathbf{d}\|\leq 1$ in \eqref{eq:X_k} and \eqref{eq:psi_k}.
\item[2)] The second order optimality measure in \cite{mokhtari2018escaping} is defined based on using equality constraint $\langle \nabla f(\mathbf{x}), \mathbf{d} \rangle = 0$ in~\eqref{eq:psi_k} instead of the inequality constraint $\langle \nabla f(\mathbf{x}), \mathbf{d} \rangle \leq 0$.
\end{itemize}
To understand the necessity of using normalization, consider the optimization problem $\min x^2$ and the point $\bar{x} = \epsilon$ with $\epsilon$ arbitrarily small. Clearly, $\bar{x}$ is close to optimal, yet without the normalization constraint the optimality measure \eqref{eq:X_k} would not reflect this approximate optimality.
To understand the importance of using inequality constraint $\langle \nabla f(\mathbf{x}), \mathbf{d} \rangle \leq 0$ instead of equality constraint in \eqref{eq:psi_k}, consider the scalar optimization problem
\begin{align}
\min_x \;\; &-\frac{1}{2} x^2 \nonumber\\
\rm s.t. \quad &0\leq x\leq 10. \nonumber
\end{align}
Let us look at the point $\bar{x} = \epsilon >0$. Using second order information, one can say that $\bar{x}$ is not a reasonable point at which to terminate an algorithm. This is because the Hessian provides a descent direction with a large amount of improvement in the second order approximation of the objective value. This fact is also reflected in the value of $\psi(\bar{x})= 1$. However, if we had used the equality constraint $\langle \nabla f(\mathbf{x}), \mathbf{d} \rangle = 0$ in the definition of $\psi(\cdot)$ in \eqref{eq:psi_k}, then the value of $\psi(\cdot)$ would have been zero.
\end{remark}
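The scalar example in the remark can be reproduced by brute force. The sketch below (an assumed grid discretization for illustration, not part of the paper) evaluates both variants of the measure for $f(x)=-\frac{1}{2}x^{2}$ on $[0,10]$ at $\bar{x}=0.01$: the inequality-constrained version detects the second-order descent direction, the equality-constrained one does not.

```python
# Brute-force evaluation of psi at x_bar = eps for f(x) = -x^2/2 on [0, 10].

eps = 0.01                                   # x_bar = eps > 0
grad_f = -eps                                # f'(x) = -x
hess_f = -1.0                                # f''(x) = -1

def psi(equality):
    best = 0.0
    for k in range(-1000, 1001):
        d = k / 1000.0                       # grid over |d| <= 1
        if not (0.0 <= eps + d <= 10.0):
            continue                         # x_bar + d must stay feasible
        lin = grad_f * d
        if (equality and abs(lin) > 1e-12) or (not equality and lin > 0):
            continue                         # linear constraint on d
        best = max(best, -hess_f * d * d)    # psi = - min d^T f''(x) d
    return best

assert abs(psi(equality=False) - 1.0) < 1e-9   # inequality version: psi = 1
assert psi(equality=True) == 0.0               # equality version: psi = 0
```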
\begin{remark}
There are other definitions of second order stationarity in the literature. For example, the works~\cite{bian2015complexity, birgin2018regularization} use a scaled version of the Hessian in different directions to define second order stationarity for box constraints. Recently, \cite{o2019log} carefully revised it to account for the coordinates which are very far from the boundary. Another related definition of second order stationary, which leads to a practical perturbed gradient descent algorithm, is provided in \cite{SongTao2019} for general linearly constrained optimization problems.
\end{remark}
\section{Finding second-order stationary points for constrained optimization}
\label{sec:ReviewPrevious}
Consider the quadratic co-positivity problem
\begin{equation}\label{co-positivity-problem}
\min_{\mathbf{x} \in \mathbb{R}^n} \quad \frac{1}{2} \mathbf{x}^T \mathbf{Q}\mathbf{x} \quad \quad \rm s.t. \quad \mathbf{x} \geq \mathbf{0}, \,\,\|\mathbf{x}\| \leq 1.
\end{equation}
Clearly, checking whether $\bar{\mathbf{x}} = \mathbf{0}$ is a second order stationary point of~\eqref{co-positivity-problem} is equivalent to checking its local optimality, which is an NP-hard problem \cite{murty1987some}. This observation shows that checking exact second order stationarity is hard. The following result, borrowed from~\cite{nouiehed2018convergence}, shows that even checking \textit{approximate} second order stationarity is NP-hard.
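To make the source of hardness tangible, the following sketch brute-forces tiny instances of problem~\eqref{co-positivity-problem} by sampling the nonnegative part of the unit sphere. This is purely illustrative (the function name, sample count, and test matrices are our own choices); the hardness result above says that no reliable polynomial-time procedure exists in general.

```python
import numpy as np

def sampled_min_on_cone(Q, num_samples=50_000, seed=0):
    """Approximate the minimum of 0.5 x^T Q x over {x >= 0, ||x|| <= 1}
    by sampling the nonnegative part of the unit sphere.  The feasible
    point x = 0 (value 0) is included explicitly."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    X = np.abs(rng.standard_normal((num_samples, n)))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # nonnegative unit vectors
    vals = 0.5 * np.einsum('ij,jk,ik->i', X, Q, X)  # 0.5 x^T Q x per sample
    return min(0.0, vals.min())

# Entrywise nonnegative Q is copositive: the origin attains the minimum.
Q_cop = np.array([[2.0, 1.0], [1.0, 3.0]])
# A negative diagonal entry destroys copositivity: e_2 gives a negative value.
Q_bad = np.array([[2.0, 0.0], [0.0, -1.0]])
print(sampled_min_on_cone(Q_cop))  # 0.0
print(sampled_min_on_cone(Q_bad))  # close to -0.5
```

The exponential blow-up hidden in such enumeration or sampling schemes is exactly what the theorem below rules out removing.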
\begin{theorem} [Theorem~6 in \cite{nouiehed2018convergence}]
There is no algorithm that can check whether $\mathbf{x} = \mathbf{0}$ is an $(\epsilon_g,\epsilon_H)$-second order stationary point in time polynomial in $(n, 1/{\epsilon_H})$, unless P = NP.
\end{theorem}
This hardness result implies that we should not expect an efficient algorithm for finding second order stationary points of non-convex problems. However, for this problem, \textit{the source of hardness stems from the number of linear inequality constraints}. In fact, when there is a small constant number of linear constraints (say $m$ fixed constraints), \cite{hsia2013trust} proposed a backtracking approach that efficiently solves this constrained quadratic optimization problem. Although the proposed method is exponential in $m$, as it uses an exhaustive search over the set of active constraints, it can still be used when $m$ is small. Motivated by this observation, in the next section we describe our LC-TRACE algorithm and analyze its iteration complexity for finding second order stationary points of linearly constrained non-convex optimization problems. A core assumption in our algorithm is that a certain quadratic objective can be minimized subject to the existing linear constraints (for example, when $m$ is small).
\section{A Trust Region Algorithm for Solving Linearly-Constrained Smooth Non-Convex Optimization Problems}\label{sec:LC-TRACE}
Consider the optimization problem
\begin{equation}\label{Linearly-Cons-Prob}
\begin{split}
\min_{\mathbf{x} \in \mathbb{R}^{ n}} \, \, & f(\mathbf{x}), \\
\rm s.t. \quad & \mathbf{A} \mathbf{x} \leq \mathbf{b},
\end{split}
\end{equation}
where $\mathbf{A} \in \mathbb{R}^{m\times n}$ and $\mathbf{b} \in \mathbb{R}^m$.
In this section, we propose a trust region algorithm, called LC-TRACE (Linearly Constrained TRACE), that adapts TRACE \cite{curtis2017trust} to the above linearly-constrained non-convex problem. We establish its convergence to $\epsilon_g$-first order stationarity with iteration complexity of order $\widetilde{\cal O}(\epsilon_g^{-3/2})$. This method is then used to develop an algorithm that converges to~$(\epsilon_g, \epsilon_H)$-second-order stationarity with iteration complexity $\widetilde{\cal O}\big(\max\{\epsilon_g^{-3/2}, \epsilon_H^{-3}\}\big)$.\\
LC-TRACE is different from the traditional trust region method proposed in \cite{cartis2017second} for constrained optimization. More specifically, LC-TRACE utilizes the mechanisms of TRACE \cite{curtis2017trust} to provide a faster convergence rate than \cite{cartis2017second}. The improved convergence rate matches the rates achieved by adapted ARC \cite{cartis2012adaptive} and TRACE \cite{curtis2017trust}, up to logarithmic factors. Since applying TRACE directly to constrained optimization fails (as will be discussed later), we introduce modifications that adapt this method to linearly constrained problems. Our modifications are not the result of a ``simple extension" of the unconstrained method to the constrained setting. Before explaining LC-TRACE, let us first provide an overview of the classical trust region and TRACE algorithms.
\subsection{Background on Traditional Trust Region Algorithm and TRACE}
In traditional trust region methods, the trial step $\mathbf{s}_k$ at iteration $k$ is computed by solving the standard trust region sub-problem
\begin{equation}\label{sub-problem}
\displaystyle{ \operatornamewithlimits{\mbox{min}}_{\mathbf{s} \in \mathbb{R}^n }} \, \, q_k(\mathbf{s}), \quad \mbox{s.t. } \|\mathbf{s}\|_2 \leq \delta_k,
\end{equation}
where $q_k: \mathbb{R}^n \to \mathbb{R}$ is the second-order Taylor approximation of $f$ around $\mathbf{x}_k$, i.e.,
\[
q_k(\mathbf{s}) \triangleq f_k + \mathbf{g}_k^T\mathbf{s} + \dfrac{1}{2}\mathbf{s}^T\mathbf{H}_k \mathbf{s}.
\]
Here $f_k = f(\mathbf{x}_k)$, $\mathbf{g}_k = \nabla f(\mathbf{x}_k)$, and $\mathbf{H}_k = \nabla^2 f(\mathbf{x}_k)$. Based on the resulting trial step, an acceptance criterion is used to either \textit{accept} or \textit{reject} the step. In particular, if the ratio of actual-to-predicted reduction
\[ \dfrac{f_k - f(\mathbf{x}_k + \mathbf{s}_k)}{f_k - q_k(\mathbf{s}_k)}\]
is greater than a prescribed constant, the step is accepted; otherwise it is rejected. The iterate $\mathbf{x}_{k+1}$ and the trust region radius are updated accordingly. Traditional trust region methods use a geometric update rule for the trust region radius $\delta_k$, i.e., $\delta_{k+1}$ is a constant factor times $\delta_k$. The TRACE algorithm, on the other hand, modifies the acceptance criterion and this geometric update rule for $\delta_k$ to match the rate achieved by the ARC algorithm \cite{cartis2011adaptive, cartis2011adaptive-2}. In particular, the authors in \cite{curtis2017trust} observed that ARC computes a positive sequence of cubic regularization coefficients $\sigma_k \in [\underline{\sigma}, \, \overline{\sigma}]$ that satisfy
\begin{equation}\label{eq:alg-conditions}
f_k - f_{k+1} \geq c_1\sigma_{k}\|\mathbf{s}_k\|_2^3 \quad \mbox{and} \quad \|\mathbf{s}_k\|_2 \geq \Big(\dfrac{c_2}{\overline{\sigma} + c_3}\Big)^{1/2}\|\mathbf{g}_{k+1}\|_2^{1/2},
\end{equation}
for some given positive constants $c_1, c_2, c_3$. TRACE employs a modified acceptance criterion and a new mechanism for updating the trust region radius in order to satisfy the conditions in \eqref{eq:alg-conditions}. Some of these ideas are discussed next.\\
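For reference, the classical accept/reject loop described above can be sketched in a few lines. This is a minimal illustration only: the Cauchy-point sub-problem solver, the factor-of-two radius updates, and the toy objective are our own choices and are not part of TRACE.

```python
import numpy as np

def cauchy_point(g, H, delta):
    """Approximately solve min g.s + 0.5 s.H.s, ||s|| <= delta, by
    minimizing along the steepest-descent direction (Cauchy point)."""
    gn = np.linalg.norm(g)
    if gn == 0:
        return np.zeros_like(g)
    gHg = g @ H @ g
    tau = 1.0 if gHg <= 0 else min(gn**3 / (delta * gHg), 1.0)
    return -tau * (delta / gn) * g

def trust_region(f, grad, hess, x, delta=1.0, eta=0.1, iters=50):
    """Classical trust region: accept if actual/predicted reduction
    exceeds eta, then grow or shrink the radius geometrically."""
    for _ in range(iters):
        g, H = grad(x), hess(x)
        s = cauchy_point(g, H, delta)
        pred = -(g @ s + 0.5 * s @ H @ s)   # predicted reduction
        if pred == 0:
            break
        rho = (f(x) - f(x + s)) / pred      # actual-to-predicted ratio
        if rho > eta:
            x = x + s                       # accept the step
            if rho > 0.75:
                delta *= 2.0                # very successful: expand
        else:
            delta *= 0.5                    # reject: contract
    return x

# toy problem: min (x1 - 1)^2 + 2 (x2 + 0.5)^2
f = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 0.5)**2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 0.5)])
hess = lambda x: np.diag([2.0, 4.0])
print(trust_region(f, grad, hess, np.array([5.0, 5.0])))  # ≈ [1, -0.5]
```

TRACE replaces the ratio test and the geometric radius update in this loop, as described next.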
\textbf{Sufficient Decrease Acceptance Criteria.}
TRACE defines the ratio
\begin{equation}\label{eq:rho-k}
\rho_k \triangleq \dfrac{f_k - f(\mathbf{x}_k + \mathbf{s}_k)}{\|\mathbf{s}_k\|_2^3},
\end{equation}
as a measure to decide whether to \textit{accept} or \textit{reject} a trial step. For some prescribed $\rho \in (0,1)$, a trial step $\mathbf{s}_k$ can only be accepted if $\rho_k \geq \rho$. Noticing that a small $\|\mathbf{s}_k\|_2$ may satisfy the first condition in \eqref{eq:alg-conditions} but not the second, the developers of TRACE realized that an acceptance criterion involving only \eqref{eq:rho-k} is not sufficient. To avoid such cases, TRACE defines a sequence $\{\sigma_k\}$ that estimates an upper bound for the ratio $\lambda_k/\|\mathbf{s}_k\|_2$ used for acceptance. Here $\{\lambda_k\}$ is the sequence of dual variables corresponding to the constraint $\|\mathbf{s}\|_2 \leq \delta_k$ in sub-problem~\eqref{sub-problem}. In short, TRACE accepts a trial pair $(\mathbf{s}_k, \lambda_k)$ if it satisfies the following conditions:
\begin{equation}\label{acceptance-criteria}
\rho_k \geq \rho \quad \mbox{and} \quad \lambda_k /\|\mathbf{s}_k\|_2 \leq \sigma_k.
\end{equation}
\textbf{Trust Region Radius Update Procedure.} In contrast to the linear update rule utilized in traditional trust region algorithms, TRACE uses a CONTRACT subroutine that allows for sub-linear updates. In particular, this subroutine compares the radius obtained by the linear update scheme to the norm of the trial step computed using
\begin{equation}\label{reg-sub-problem}
\displaystyle{ \operatornamewithlimits{\mbox{min}}_{\mathbf{s} \in \mathbb{R}^n }} \, \, f_k + \mathbf{g}_k^T\mathbf{s} + \dfrac{1}{2}\mathbf{s}^T(\mathbf{H}_k + \lambda \mathbf{I})\mathbf{s},
\end{equation}
for a carefully chosen $\lambda$. If the norm of this trial step falls within a desired range, then it is chosen to be the new trust region radius. This subroutine is called at iteration $k$ if $\rho_k < \rho$.\\
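In the unconstrained case, whenever $\lambda$ is large enough that $\mathbf{H}_k + \lambda\mathbf{I} \succ 0$, the minimizer of \eqref{reg-sub-problem} is $\mathbf{s}(\lambda) = -(\mathbf{H}_k + \lambda\mathbf{I})^{-1}\mathbf{g}_k$, and $\|\mathbf{s}(\lambda)\|_2$ shrinks as $\lambda$ grows; this monotonicity is what makes the subroutine a contraction mechanism. A small numerical illustration (the data $\mathbf{g}$, $\mathbf{H}$ and the sampled $\lambda$ values are arbitrary choices of ours):

```python
import numpy as np

def reg_step(g, H, lam):
    """Minimizer of g^T s + 0.5 s^T (H + lam I) s, assuming H + lam*I
    is positive definite, i.e. s(lam) = -(H + lam I)^{-1} g."""
    return -np.linalg.solve(H + lam * np.eye(len(g)), g)

g = np.array([1.0, -2.0])
H = np.array([[2.0, 0.0], [0.0, 1.0]])
norms = [np.linalg.norm(reg_step(g, H, lam)) for lam in (0.5, 1.0, 4.0)]
print(norms)  # strictly decreasing: larger lam gives a shorter step
```

As discussed next, this clean inverse relationship between $\lambda$ and the step norm breaks down once linear constraints are present.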
TRACE is designed to solve unconstrained smooth optimization problems. A direct implementation of this algorithm fails in the constrained setting. In the next section, we describe \textit{two fundamental difficulties} introduced in the presence of constraints and discuss the necessary modifications.
\subsection{Difference Between LC-TRACE and TRACE}\label{Section:Diff-Bw-Alg}
In the constrained setting, we define the trust region sub-problem and its regularized Lagrangian form as
\begin{equation}\label{sub-problem-cons}
Q_k \triangleq \displaystyle{ \operatornamewithlimits{\mbox{min}}_{\mathbf{s} \in \mathbb{R}^n }} \, \, q_k(\mathbf{s}), \quad \mbox{s.t. } \begin{cases}
\mathbf{A}\mathbf{s} \leq \mathbf{b} - \mathbf{A}\mathbf{x}_k\\
\|\mathbf{s}\|_2 \leq \delta_k
\end{cases}
\end{equation}
and
\begin{equation}\label{reg-sub-problem-cons}
Q_k(\lambda) \triangleq \displaystyle{ \operatornamewithlimits{\mbox{min}}_{\mathbf{s} \in \mathbb{R}^n }} \, \, f_k + \mathbf{g}_k^T\mathbf{s} + \dfrac{1}{2}\mathbf{s}^T(\mathbf{H}_k + \lambda \mathbf{I})\mathbf{s}, \quad \mbox{s.t. }
\mathbf{A}\mathbf{s} \leq \mathbf{b} - \mathbf{A}\mathbf{x}_k.
\end{equation}
A major difficulty introduced by the constraints is related to the optimality conditions of the sub-problem. In the unconstrained case, it is known that $\mathbf{H}_k + \lambda_k\mathbf{I} \succeq 0$ at every iteration \cite[Corollary~7.2.2]{conn2000trust} for the optimal Lagrange multiplier~$\lambda_k$. Combined with the fact that $\lambda > \lambda_k$ in the CONTRACT subroutine of TRACE, this implies that $Q_k({\lambda})$ is a strongly convex quadratic optimization problem with a unique global minimizer. Let $\mathbf{s}^{*}(\lambda)$ be the solution of $Q_k(\lambda)$. It follows that the function $\mathbf{s}^{*}(\lambda)$ is continuous in $\lambda$ in the unconstrained scenario. However,
in the linearly constrained scenario, the regularized sub-problem \eqref{reg-sub-problem-cons} might have multiple optimal solutions. Moreover, $\mathbf{s}^*(\lambda)$ and the ratio $\lambda/\|\mathbf{s}^{*}(\lambda)\|_2$, which are core quantities in TRACE, might not even be continuous. To clarify this difficulty, consider the following simple example
\begin{equation}\label{example}
Q(\lambda) = \underset{s_1 \leq 5, \, s_2 \geq 0 }{\min} \, \, s_1^2 - s_2^2 + \lambda(s_1^2 + s_2^2) \quad \mbox{s.t. } s_2 - 3s_1 \leq -12.
\end{equation}
It is not hard to see that the optimal solution of~\eqref{example} is given by
\[ s^{*}(\lambda) = \begin{cases} \begin{array}{cc}
(5,3) \quad & \mbox{if } \lambda<0 \\
(5,3); \, (4,0) \quad & \mbox{if } \lambda=0 \\
(4,0) \quad & \mbox{Otherwise.}
\end{array}\end{cases}.\]
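This jump can be confirmed numerically by brute force: the feasible set of \eqref{example} is the triangle $\{4 \leq s_1 \leq 5,\; 0 \leq s_2 \leq 3s_1 - 12\}$, so a fine grid search recovers the minimizer on either side of $\lambda = 0$ (the grid resolution below is an arbitrary choice of ours):

```python
import numpy as np

def argmin_example(lam, step=0.01):
    """Grid-search minimizer of s1^2 - s2^2 + lam*(s1^2 + s2^2), written
    as (1+lam)*s1^2 + (lam-1)*s2^2, over the feasible triangle
    {4 <= s1 <= 5, 0 <= s2 <= 3*s1 - 12} of the example."""
    best, best_val = None, float('inf')
    for s1 in np.arange(4.0, 5.0 + step / 2, step):
        for s2 in np.arange(0.0, 3 * s1 - 12 + step / 2, step):
            val = (1 + lam) * s1**2 + (lam - 1) * s2**2
            if val < best_val:
                best, best_val = (s1, s2), val
    return best

print(argmin_example(-0.1))  # ≈ (5, 3)
print(argmin_example(+0.1))  # ≈ (4, 0)
```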
Thus, a small increase in $\lambda$ may lead to a huge change in the ratio $\lambda / \|s^{*}(\lambda)\|_2$. Therefore, the luxury of an arbitrary choice for the bounds $\underline{\sigma}$ and $\bar{\sigma}$ of the ratio $\lambda/\|\mathbf{s}\|_2$ is not present in the constrained case. In LC-TRACE, we resolve this issue by defining
\begin{equation}\label{sigma-min-max}
\underline{\sigma} = \dfrac{\epsilon}{C_{min} + \max\{\lambda_{max}, \lambda_0\}} \quad \mbox{and} \quad \bar{\sigma} = 2\Delta,
\end{equation}
and altering the update rule of $\lambda$ in the CONTRACT sub-routine. Here $\epsilon>0$ is the threshold used for the termination of the algorithm, $C_{min}$ is defined in Lemma \ref{LXksk}, $\lambda_{max}$ is defined in Lemma \ref{LBG}, and $\Delta$ is defined in Lemma~\ref{L3.11}. \\
Another major difficulty in the constrained scenario is related to the standard trust region theory on the relationship between sub-problem solutions and their corresponding dual variables. In the unconstrained case, $\lambda_1 > \lambda_2$ implies $\|\mathbf{s}^{*}(\lambda_1)\|_2 < \|\mathbf{s}^{*}(\lambda_2)\|_2$; see \cite[Chapter~7]{conn2000trust}. This relationship was used in \cite{curtis2017trust} to show that the CONTRACT subroutine reduces the radius of the trust region. However, it can be seen from example~\eqref{example} that this relation may not hold in the constrained case. To account for this issue, we modify the CONTRACT sub-routine to guarantee a reduction in the trust region radius (see Lemma \ref{L3.4}). In summary, the differences between LC-TRACE and TRACE are mainly in the CONTRACT sub-routine. Next, we describe the steps of the algorithm.
\subsection{Description of LC-TRACE}
Our proposed algorithm LC-TRACE has two main building blocks: \textit{First-Order-LC-TRACE} and \textit{Second-Order-LC-TRACE}. We first present First-Order-LC-TRACE, which converges to $\epsilon_g$-first order stationarity in $\mathcal{\tilde{O}} (\epsilon_g^{-3/2})$ iterations. Then, we use this algorithm in Second-Order-LC-TRACE to find an $(\epsilon_g,\epsilon_H)$-second order stationary point in $\mathcal{\tilde{O}}(\max\{\epsilon_g^{-3/2},\epsilon_H^{-3}\})$ iterations.
The First-Order-LC-TRACE algorithm is outlined in Algorithm~\ref{alg-LC}. At each iterate $\mathbf{x}_k$, this iterative algorithm computes the values $\mathbf{s}_k$, $\lambda_k$, and $\rho_k$ by solving the optimization problem~\eqref{sub-problem-cons} and using equation~\eqref{eq:rho-k}. Depending on the obtained values, it decides either to \textit{accept} the trial point $\mathbf{s}_k$ or to reject it. When rejecting the trial point, it invokes either the \textit{contraction} or the \textit{expansion} procedure. Thus, the main decisions are: acceptance, contraction, or expansion. We distinguish the iterations by partitioning the set of iteration numbers into what we refer to as the sets of accepted $({\cal A})$, contraction $({\cal C})$, and expansion $({\cal E})$ steps:
\begin{flalign*}
&{\cal A} \triangleq \{k \in \mathbb{N} : \rho_k \geq \rho \mbox{ and either } \lambda_k \leq {\sigma}_k\|\mathbf{s}_k\|_2 \mbox{ or } \|\mathbf{s}_k\|_2 = \Delta_k \}, \\
&{\cal C} \triangleq \{k \in \mathbb{N}: \rho_k < \rho\}, \mbox{ and}\\
&{\cal E} \triangleq \{k \in \mathbb{N} : k \notin {\cal A} \, \cup \, {\cal C}\}.
\end{flalign*}
Hence, step $k$ is accepted if the computed pair $(\mathbf{s}_k,\lambda_k)$ satisfies the sufficient decrease criteria $\rho_k \geq \rho$, and either the norm of $\mathbf{s}_k$ is large enough $(\|\mathbf{s}_k\|_2=\Delta_k)$ or the ratio $\lambda_k/\|\mathbf{s}_k\|_2$ is smaller than an upper-bound $\sigma_k$. We also partition the set of accepted steps into two disjoint subsets
\[{\cal A}_{\Delta} \triangleq \{k \in {\cal A} : \|\mathbf{s}_k\|_2 = {\Delta}_k \} \mbox{ and } {\cal A}_{\sigma} \triangleq \{k \in {\cal A} : k \notin {\cal A}_{\Delta}\}.
\]
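The three-way partition above translates directly into a predicate on the quantities computed at iteration $k$; a minimal sketch (the function and argument names are ours, not part of the algorithm's specification):

```python
def classify_step(rho_k, rho, lam_k, s_norm, sigma_k, Delta_k):
    """Classify one iteration as accepted ('A'), contraction ('C'),
    or expansion ('E'), following the partition of iteration indices."""
    if rho_k >= rho and (lam_k <= sigma_k * s_norm or s_norm == Delta_k):
        return 'A'   # sufficient decrease plus ratio or radius condition
    if rho_k < rho:
        return 'C'   # insufficient decrease: contract
    return 'E'       # decrease ok but ratio too large and ||s|| < Delta

print(classify_step(0.5, 0.1, 1.0, 2.0, 1.0, 5.0))   # A
print(classify_step(0.01, 0.1, 1.0, 2.0, 1.0, 5.0))  # C
print(classify_step(0.5, 0.1, 9.0, 2.0, 1.0, 5.0))   # E
```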
The sequence $\{\Delta_k\}$ is used in the algorithm as an upper bound on $\|\mathbf{s}_k\|_2$. From Steps~\ref{Delta-acc}, \ref{Delta-cont}, and \ref{Delta-exp}, we notice that this sequence is non-decreasing. We now describe the update mechanism used in a contraction step of First-Order-LC-TRACE, which is the main difference between TRACE and our proposed algorithm.\\
When the CONTRACT subroutine is called, two different cases may occur in Algorithm~\ref{alg-contract}. The first case is reached whenever the condition in Step~\ref{Cond-lower-bound-no} of the CONTRACT subroutine tests true. In that case, we carefully choose $\lambda > \lambda_k$
to ensure that the pair $(\mathbf{s}, \lambda)$ with $\mathbf{s}$ being the solution of $Q_k(\lambda)$ satisfies
\begin{equation*}
\underline{\sigma}\leq \lambda/\|\mathbf{s}\|_2 \leq \overline{\sigma},
\end{equation*}
where $\underline{\sigma}$ and $\bar{\sigma}$ are prescribed positive constants defined in \eqref{sigma-min-max}. The second case is reached whenever the condition in Step~\ref{Cond-lower-bound-no} tests false. In that case, we choose $\lambda \in (\lambda_k, C\lambda_k]$, where $C>1$ is a constant scalar, to ensure that the pair $(\mathbf{s}, \lambda)$ with $\mathbf{s}$ being the solution of $Q_k(\lambda)$ satisfies the following
\begin{equation*}
\dfrac{\lambda}{\|\mathbf{s}\|_2} < \max\Bigg\{\bar{\sigma}, \Big(\dfrac{\gamma_{\lambda}}{\gamma_C}\Big)\dfrac{H_{Lip} + 2\rho}{2\kappa}\Bigg\},
\end{equation*}
where $\kappa \in (0,1]$ is a constant scalar, and $H_{Lip}$ is defined in Assumption \ref{Assumption1-LC}. In what follows, we first present our results on the convergence of the First-Order-LC-TRACE algorithm and its iteration complexity.
\begin{algorithm}
\caption{First-Order-LC-TRACE}\label{alg-LC}
\textbf{Require:} an acceptance constant $\rho \in (0,1)$.\\
\textbf{Require:} update constants $\{\gamma_C, \gamma_E, \gamma_{\lambda} \}$ with $\gamma_C \in (0,1)$ and $\gamma_{\lambda}, \gamma_E >1$.\\
\textbf{Require:} ratio bound constants $\underline{\sigma}$ and $\overline{\sigma}$ defined in \eqref{sigma-min-max}.
\hrulefill
\begin{algorithmic}[1]
\Procedure{First-Order-LC-TRACE}{}
\State Choose a feasible point $\mathbf{x}_0$, a pair $\{\delta_0 , \Delta_0 \}$ with $0<\delta_0 \leq \Delta_0$, and $\sigma_0$ with $\sigma_0 \geq \underline{\sigma}$. \label{initial}
\State Compute $(\mathbf{s}_0 , \lambda_0)$ by solving $Q_0$, then compute $\rho_0$ using the definition in \eqref{eq:rho-k} \label{compute_s0}.
\For{$k=0, 1, 2, \ldots$}
\If{$\rho_k \geq \rho$ and either $\lambda_k/\|\mathbf{s}_k\|_2 \leq \sigma_k$ or $\|\mathbf{s}_k\|_2=\Delta_k$} \label{Cond-acc} \hspace{0.7 cm}(Acceptance)
\State set $\mathbf{x}_{k+1} \gets \mathbf{x}_k + \mathbf{s}_k$ \label{x-acc}
\State set $\Delta_{k+1} \gets \max\{\Delta_k, \, \gamma_E\|\mathbf{s}_k\|_2\}$ \label{Delta-acc}
\State set $\delta_{k+1} \gets \min\{\Delta_{k+1}, \max\{\delta_k, \, \gamma_E \|\mathbf{s}_k\|_2\}\}$ \label{delta-acc}
\State set $\sigma_{k+1} \gets \max\{\sigma_k , \, \lambda_k/\|\mathbf{s}_k\|_2 \}$ \label{sigma-acc}
\ElsIf{$\rho_k<\rho$} \label{Cond-cont}\hspace{6. cm}(Contraction)
\State set $\mathbf{x}_{k+1} \gets \mathbf{x}_k$ \label{x-cont}
\State set $\Delta_{k+1} \gets \Delta_k$ \label{Delta-cont}
\State set $\delta_{k+1} \gets$ CONTRACT$(\mathbf{x}_k, \delta_k, \sigma_k, \mathbf{s}_k, \lambda_k)$ defined in Algorithm~\ref{alg-contract}\label{delta-cont}
\ElsIf{$\rho_k \geq \rho$, $\lambda_k/\|\mathbf{s}_k\|_2 > \sigma_k$, and $\|\mathbf{s}_k\|_2<\Delta_k$} \label{Cond-exp} \hspace{1 cm}(Expansion)
\State set $\mathbf{x}_{k+1} \gets \mathbf{x}_k$ \label{x-exp}
\State set $\Delta_{k+1} \gets \Delta_k$ \label{Delta-exp}
\State set $\delta_{k+1} \gets \min \{\Delta_{k+1} ,\, \lambda_k /\sigma_k \}$ \label{delta-exp}
\State set $\sigma_{k+1} \gets \sigma_k$ \label{sigma-exp}
\EndIf
\State Compute $(\mathbf{s}_{k+1}, \lambda_{k+1})$ by solving $Q_{k+1}$, then compute $\rho_{k+1}$ using \eqref{eq:rho-k} \label{compute-sk}
\If{$\rho_k < \rho$} \label{Cond-rho_k}
\State set $\sigma_{k+1} \gets \max\{\sigma_k, \, \lambda_{k+1}/\|\mathbf{s}_{k+1}\|_2 \}$ \label{Sigma-rho_k}
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{CONTRACT Sub-routine}\label{alg-contract}
\textbf{Require:} update constant $\gamma_C \in (0,1)$.\\
\textbf{Require:} ratio bound constants $\underline{\sigma}$ and $\overline{\sigma}$ defined in \eqref{sigma-min-max}.
\hrulefill
\begin{algorithmic}[1]
\Procedure{CONTRACT$(\mathbf{x}_k, \delta_k, \sigma_k, \mathbf{s}_k, \lambda_k)$}{}
\State set $\bar{\lambda} \gets \lambda_k + \underline{\sigma}\Delta_k$ and set $\bar{\mathbf{s}}$ as the solution of $Q_k(\bar{\lambda})$.\label{lambda-update-sigma}
\If{$\|\bar{\mathbf{s}}\|_2 < \|\mathbf{s}_k\|_2$ and $\lambda_k < \underline{\sigma}\|\mathbf{s}_k\|_2$} \label{Cond-lower-bound-no}
\State set $\lambda \gets \bar{\lambda} + H_{max} + \big(\underline{\sigma}{\cal X}_k\big)^{1/2}$ and set $\mathbf{s}$ as the solution of $Q_k(\lambda)$. \label{lambda-update-H}
\If{$\lambda/\|\mathbf{s}\|_2 \leq \overline{\sigma}$} \label{Cond-upper-bound-yes}
\State \textbf{return} $\delta_{k+1} \gets \|\mathbf{s}\|_2$ \label{return-1}
\Else \label{Cond-upper-bound-no}
\State set $\lambda \gets \bar{\lambda}$ \label{lambda-update-2}
\State \textbf{return} $\delta_{k+1} \gets \|\bar{\mathbf{s}}\|_2$ \label{return-2}
\EndIf
\Else \label{Cond-lower-bound-yes}
\If{$\|\bar{\mathbf{s}}\|_2 = \|\mathbf{s}_k\|_2$} \label{update-lambda-no-update-s}
\State set $\lambda \gets \gamma_{\lambda}\bar{\lambda}$ and set $\mathbf{s}$ as the solution of $Q_k(\lambda)$ \label{lambda-bar-update-linear}
\Else \label{update-lambda-update-s}
\State set $\lambda \gets \gamma_{\lambda}\lambda$ and set $\mathbf{s}$ as the solution of $Q_k(\lambda)$ \label{lambda-update-linear}
\EndIf
\While{$\|\mathbf{s}\|_2 = \|\mathbf{s}_k\|_2$} \label{while-loop}
\State $\lambda \gets \gamma_{\lambda}\lambda$ and set $\mathbf{s}$ as the solution of $Q_k(\lambda)$ \label{lambda-update-linear-while}
\EndWhile
\If{ $\|\mathbf{s}\|_2 \geq \gamma_C \|\mathbf{s}_k\|_2$} \label{Cond-s-large}
\State \textbf{return} $\delta_{k+1} \gets \|\mathbf{s}\|_2$ \label{return-3}
\Else \label{Cond-s-small}
\State \textbf{return} $\delta_{k+1} \gets \gamma_C \|\mathbf{s}_k\|_2$ \label{return-4}
\EndIf
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Convergence of First-Order-LC-TRACE to First-order Stationarity}
Throughout this section, we make the following assumptions that are standard for global convergence theory of trust region methods.
\begin{assumption}\label{Assumption1-LC}
The objective function $f$ is twice continuously differentiable and bounded below by a scalar $f_{min}$ on ${\cal P}$. We assume that the functions $\mathbf{g}(\cdot) \triangleq \nabla f(\cdot)$ and $\mathbf{H}(\cdot) \triangleq \nabla^2 f(\cdot)$ are Lipschitz continuous on the path defined by the iterates computed by the algorithm, with Lipschitz constants $L$ and $H_{Lip}$, respectively. Furthermore, we assume the gradient sequence $\{\mathbf{g}_k \}$ is bounded in norm, that is, there exists a scalar constant $g_{max} > 0$ such that $\|\mathbf{g}_k\|_2 \triangleq \|\nabla f(\mathbf{x}_k)\|_2 \leq g_{max}$ for all $k \in \mathbb{N}$. Moreover, we assume that the Hessian sequence $\{\mathbf{H}_k\}$ is bounded in norm, that is, there exists a scalar constant $H_{max}>0$ such that $\|\mathbf{H}_k\|_2 \triangleq \|\nabla^2 f(\mathbf{x}_k)\|_2 \leq H_{max}$ for all $k \in \mathbb{N}$.
\end{assumption}
We next state the main results for the convergence of First-Order-LC-TRACE.
\begin{theorem}\label{first-order-convergence}
Under Assumption~\ref{Assumption1-LC}, any limit point of the iterates generated by the First-Order-LC-TRACE algorithm is a first-order stationary point.
\end{theorem}
\begin{proof}
The proof of the Theorem is relegated to Appendix~\ref{first-order-convergence-app}.
\end{proof}
Unfortunately, Assumption~\ref{Assumption1-LC} is not sufficient to obtain the desired rate of convergence in the presence of constraints; in particular,
Assumption \ref{Assumption1-LC} may not ensure a model decrease of the form
\begin{equation}\label{eq:Model_Suff_Decrease}
f_k - q_k(\mathbf{s}_k) = -\mathbf{g}_k^T\mathbf{s}_k - \dfrac{1}{2}\mathbf{s}_k^T\mathbf{H}_k \mathbf{s}_k \geq \kappa\lambda_k \|\mathbf{s}_k\|_2^2,
\end{equation}
for some constant $\kappa \in (0,1)$. To understand this, let us first review the same result for the unconstrained scenario: it is known that $\mathbf{H}_k + \lambda_k\mathbf{I} \succeq 0$ at every iteration \cite[Corollary~7.2.2]{conn2000trust}. Thus, by Lemma \ref{L3.3}, we get
\begin{equation}\label{eq:easy-case}
f_k - q_k(\mathbf{s}_k) = -\mathbf{g}_k^T\mathbf{s}_k - \dfrac{1}{2}\mathbf{s}_k^T\mathbf{H}_k \mathbf{s}_k \geq \dfrac{1}{2}\lambda_k \|\mathbf{s}_k\|_2^2.
\end{equation}
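Indeed, \eqref{eq:easy-case} can be verified directly: the stationarity condition of the unconstrained sub-problem gives $\mathbf{g}_k = -(\mathbf{H}_k + \lambda_k\mathbf{I})\mathbf{s}_k$, so that
\[
-\mathbf{g}_k^T\mathbf{s}_k - \dfrac{1}{2}\mathbf{s}_k^T\mathbf{H}_k \mathbf{s}_k = \dfrac{1}{2}\mathbf{s}_k^T(\mathbf{H}_k + \lambda_k\mathbf{I})\mathbf{s}_k + \dfrac{1}{2}\lambda_k \|\mathbf{s}_k\|_2^2 \geq \dfrac{1}{2}\lambda_k \|\mathbf{s}_k\|_2^2,
\]
where the inequality uses $\mathbf{H}_k + \lambda_k\mathbf{I} \succeq 0$.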
However, in contrast to the unconstrained case, there is no guarantee that the step $\mathbf{s}_k$ satisfies \eqref{eq:Model_Suff_Decrease} in the constrained scenario. More specifically, in the presence of constraints, the condition is not guaranteed when the step $\mathbf{s}_k$ is a first-order ascent direction with negative curvature. To account for this case, we make the following assumption.
\begin{assumption}\label{Assumption 3}
If $\mathbf{g}_k^T\mathbf{s}_k \geq 0$ and $\mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k \leq 0$, there exists a sequence of feasible points $\{\mathbf{x}_{k,i}\}_{i=0}^{l_k}$ with $0 \leq l_k \leq \bar{l}$, $\mathbf{x}_{k,0}=\mathbf{x}_k$, $\mathbf{s}_{k,i}=\mathbf{x}_{k,i}-\mathbf{x}_k$ and $\mathbf{x}_{k,l_k} = \mathbf{x}_k + \mathbf{s}_k$ such that for $i = 1,\ldots, l_k$,
\[\arraycolsep=1pt\def\arraystretch{1.4}
\begin{array}{l}
q_k(\mathbf{s}_{k,i}) \leq q_k(\mathbf{s}_{k,i-1});\\
\mathbf{g}_k^T(\mathbf{x}_{k,i} - \mathbf{x}_{k,i-1}) + \mathbf{s}_{k,i}^T\mathbf{H}_k(\mathbf{x}_{k,i} - \mathbf{x}_{k,i-1}) \leq -\lambda_k \mathbf{s}_{k,i}^T(\mathbf{x}_{k,i} - \mathbf{x}_{k,i-1});\\
\mathbf{g}_k^T(\mathbf{x}_{k,i} - \mathbf{x}_{k,i-1}) + \mathbf{s}_{k,i-1}^T\mathbf{H}_k(\mathbf{x}_{k,i} - \mathbf{x}_{k,i-1}) \leq -\lambda_k \mathbf{s}_{k,i-1}^T(\mathbf{x}_{k,i} - \mathbf{x}_{k,i-1}).
\end{array}\]
\end{assumption}
This assumption was also used in \cite{cartis2012adaptive} to show that the adaptive ARC algorithm reaches an $\epsilon$-first order stationary point within ${\cal O}(\epsilon^{-3/2})$ iterations. As mentioned in \cite{cartis2012adaptive}, this assumption holds if $\mathbf{x}_{k,l_k}$ is the first minimizer of the model $q_{\lambda_k}$ along the piecewise linear path ${\cal P}_k \triangleq \bigcup\limits_{i=1}^{l_k}[\mathbf{x}_{k,i-1}, \mathbf{x}_{k,i}].$
Using Assumption~\ref{Assumption 3}, we obtain the desired model decrease \eqref{eq:Model_Suff_Decrease} and arrive at the following theorem.
\begin{theorem}\label{epsilon-first-order-convergence-complexity}
Under Assumptions \ref{Assumption1-LC} and \ref{Assumption 3}, for any given scalar $\epsilon \in (0,\infty)$, the total number of sub-problem routines of First-Order-LC-TRACE required to reach an $\epsilon$-first order stationary point of~\eqref{Linearly-Cons-Prob} is ${{\cal O}}(\epsilon^{-3/2} \log^3(1/\epsilon))$.
\end{theorem}
\begin{proof}
The proof of the Theorem is relegated to Appendix~\ref{epsilon-first-order-convergence-complexity-app}.
\end{proof}
In the next section, we use this first order result to develop an algorithm for finding second order stationary points.
\section{Second-Order-LC-TRACE Algorithm}
Leveraging the convergence result of First-Order-LC-TRACE, we propose Algorithm~\ref{alg-GLC-cap} for converging to second order stationary points.
\begin{algorithm}[H]
\caption{Second-Order-LC-TRACE}\label{alg-GLC-cap}
\textbf{Require:} The constants $\tilde{L}\triangleq \max\{L, g_{max}\}$, $\tilde{H} \triangleq \max\{H_{Lip}, H_{max}\}$, $\epsilon_g >0$, $\epsilon_H >0$.
\hrulefill
\begin{algorithmic}[1] \label{alg-GLC}
\Procedure{}{}
\State Choose a feasible point $\mathbf{x}_0$.
\State Compute ${\cal X}_0$ and $\psi_0$ by solving \eqref{eq:X_k} and \eqref{eq:psi_k}, respectively.
\For{$k=0, 1, 2, \ldots$}
\If{${\cal X}_k > \epsilon_g$}\label{alg-first-order-not-reached}
\State Compute $\mathbf{x}_{k+1}$ by running one iteration of First-Order-LC-TRACE starting with $\mathbf{x}_k$.\label{alg-use-LCTRACE}
\Else \label{alg-first-order-reached}
\State Compute $\widehat{\mathbf{d}}_k$ and $\psi_k$ by solving \eqref{eq:psi_k}.\label{alg-compute-d}
\State set $\mathbf{x}_{k+1} \gets \mathbf{x}_k + \dfrac{2\psi_k}{\tilde{H}}\widehat{\mathbf{d}}_k$. \label{alg-second-order}
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
We now show that this algorithm can find an $(\epsilon_g, \epsilon_H)$-second-order stationary point of problem~\eqref{Linearly-Cons-Prob}.
\begin{theorem}\label{epsilon-second-order-convergence-complexity}
Under Assumptions \ref{Assumption1-LC} and \ref{Assumption 3}, for any given scalars $\epsilon_g>0$ and $\epsilon_H>0$, the total number of iterations required to reach an $(\epsilon_g, \epsilon_H)$-second-order stationary point of~\eqref{Linearly-Cons-Prob} when running Algorithm~\ref{alg-GLC-cap} is ${{\cal O}}\Big(\log^3(\epsilon_g^{-1})\max\big\{\epsilon_g^{-3/2} , \epsilon_H^{-3}\big\}\Big)$.
\end{theorem}
\begin{proof}
The proof of the Theorem is relegated to Appendix~\ref{epsilon-second-order-convergence-complexity-app}.
\end{proof}
\appendix
\section{Proofs for Section~\ref{sec:LC-TRACE}}
Consider the following optimization problem
\begin{equation}
\displaystyle{ \operatornamewithlimits{\mbox{minimize}}_{\mathbf{x} \in {\cal P}}} \, \, f(\mathbf{x}),
\end{equation}
where ${\cal P} \triangleq \{\mathbf{x} \in \mathbb{R}^n \, | \, \mathbf{A}\mathbf{x} \leq \mathbf{b} \}$ is a polyhedron with a finite number of linear constraints. In this section we generalize results from \cite{curtis2017trust} to account for the linear constraints. For the sake of completeness, some lemmas and proofs are restated from \cite{curtis2017trust}.
Recall the sub-problem $Q_k$ with trust region $\delta_k$,
\[Q_k \triangleq \displaystyle{ \operatornamewithlimits{\mbox{min}}_{\mathbf{s} }} \, \, q_k(\mathbf{s}) \triangleq f_k + \mathbf{g}_k^T\mathbf{s} + \dfrac{1}{2}\mathbf{s}^T\mathbf{H}_k \mathbf{s}, \quad \mbox{subject to } \begin{cases}
\mathbf{A}\mathbf{s} \leq \mathbf{b} - \mathbf{A}\mathbf{x}_k\\
\|\mathbf{s}\|_2 \leq \delta_k
\end{cases}.
\]
Let $\boldsymbol{\lambda}_k^C$ be the multiplier corresponding to the linear constraint $\mathbf{A} \mathbf{s} \leq \mathbf{b} - \mathbf{A} \mathbf{x}_k$, and $\lambda_k$ be the multiplier for the trust region constraint $\|\mathbf{s}\|_2 \leq \delta_k$. The first order KKT optimality conditions for the above problem are stated below \cite{bertsekas1999nonlinear}:
\begin{flalign}
&\mathbf{g}_k + (\mathbf{H}_k + \lambda_k \mathbf{I})\mathbf{s}_k + \mathbf{A}^T \boldsymbol{\lambda}_k^C = \mathbf{0}, \label{CC1}\\
&\mathbf{0} \leq \boldsymbol{\lambda}_k^C \perp \mathbf{b} - \mathbf{A}\mathbf{x}_k - \mathbf{A} \mathbf{s}_k \geq \mathbf{0}, \label{CC2}\\
&0 \leq \lambda_k \perp \delta_k - \|\mathbf{s}_k\|_2 \geq 0. \label{CC3}
\end{flalign}
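To illustrate conditions \eqref{CC1}--\eqref{CC3}, consider a hand-picked scalar instance in which only the linear constraint is active at the solution, so the trust-region multiplier vanishes and the linear multiplier is determined by stationarity (all numbers below are our own illustrative choices):

```python
# Scalar instance of the sub-problem Q_k:
#   min  -4 s + s^2   s.t.  s <= 1  (linear),  |s| <= 3  (trust region).
# The unconstrained minimizer is s = 2, so the linear constraint is active
# at the solution s* = 1 while the trust region is inactive (lambda = 0).
g, H, a, b, delta = -4.0, 2.0, 1.0, 1.0, 3.0
s_star, lam = 1.0, 0.0
lamC = -(g + (H + lam) * s_star) / a   # linear multiplier from stationarity

stationarity = g + (H + lam) * s_star + a * lamC   # should vanish
comp_linear  = lamC * (b - a * s_star)             # complementary slackness
comp_tr      = lam * (delta - abs(s_star))         # trust-region complementarity
print(lamC, stationarity, comp_linear, comp_tr)  # 2.0 0.0 0.0 0.0
```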
\subsection{Proof of Theorem \ref{first-order-convergence}}\label{first-order-convergence-app}
To show convergence to first-order stationarity, we first provide in Lemma~\ref{L3.3} a sufficient decrease condition. Then, in Lemma \ref{L3.10} we show that the number of accepted steps $|{\cal A}|$ is infinite. Combining these two results with the assumption that $f$ is bounded below, we obtain the desired convergence result. In practice, the algorithm terminates when ${\cal X}_k$ falls below a prescribed positive threshold $\epsilon >0$. Hence, we assume, without loss of generality, that ${\cal X}_k \geq \epsilon$ for all $k\in \mathbb{N}$.
\begin{lemma}\label{L3.3}
For any $k \in \mathbb{N}$, the trial step $\mathbf{s}_k$ and dual variable $\lambda_k$ satisfy
\begin{equation}\label{3.2}
f_k - q_k (\mathbf{s}_k) \geq \dfrac{1}{2} \mathbf{s}_k^T(\mathbf{H}_k + \lambda_k \mathbf{I})\mathbf{s}_k +\dfrac{1}{2}\lambda_k \|\mathbf{s}_k\|_2^2.
\end{equation}
In addition, for any $k \in \mathbb{N}_{+}$, the trial step $\mathbf{s}_k$ satisfies
\begin{equation}\label{3.3}
f_k - q_k(\mathbf{s}_k) \geq C{\cal X}_k \min \Big\{ \delta_k, \, \dfrac{{\cal X}_k}{\|\mathbf{H}_k\|_2}, 1\Big\}.
\end{equation}
\end{lemma}
\begin{proof}
By definition of $q_k$,
\begin{flalign}\label{eq: proof 3.3}
f_k - q_k(\mathbf{s}_k) &= -\mathbf{g}_k^T\mathbf{s}_k - \dfrac{1}{2} \mathbf{s}_k^T \mathbf{H}_k\mathbf{s}_k \nonumber\\
& = \mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k + \lambda_k\|\mathbf{s}_k\|_2^2 + \mathbf{s}_k^T\mathbf{A}^T\boldsymbol{\lambda}_k^C - \dfrac{1}{2}\mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k \nonumber\\
& = \dfrac{1}{2} \mathbf{s}_k^T(\mathbf{H}_k + \lambda_k\mathbf{I})\mathbf{s}_k + \dfrac{1}{2}\lambda_k\|\mathbf{s}_k\|_2^2 + \mathbf{s}_k^T\mathbf{A}^T\boldsymbol{\lambda}_k^C \nonumber \\
& \geq \dfrac{1}{2} \mathbf{s}_k^T(\mathbf{H}_k + \lambda_k\mathbf{I})\mathbf{s}_k + \dfrac{1}{2}\lambda_k\|\mathbf{s}_k\|_2^2,
\end{flalign}
where the second equality follows by KKT condition \eqref{CC1}, and the last inequality follows from the feasibility of $\mathbf{x}_k$ and the complementary slackness \eqref{CC2}.
Also, using \cite[Theorem~12.2.2]{conn2000trust}, we obtain
\[f_k - q_k(\mathbf{s}_k) \geq C{\cal X}_k \min \Big\{ \delta_k, \, \dfrac{{\cal X}_k}{\|\mathbf{H}_k\|_2}, 1\Big\}.\]
\end{proof}
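The identity established in \eqref{eq: proof 3.3} is easy to check numerically. The following sketch is illustrative only; it assumes for simplicity that no linear constraint is active, so $\boldsymbol{\lambda}_k^C = \mathbf{0}$ and \eqref{CC1} reduces to $\mathbf{g}_k = -(\mathbf{H}_k + \lambda_k\mathbf{I})\mathbf{s}_k$. Under that assumption, the model decrease equals $\frac{1}{2}\mathbf{s}_k^T(\mathbf{H}_k+\lambda_k\mathbf{I})\mathbf{s}_k + \frac{1}{2}\lambda_k\|\mathbf{s}_k\|_2^2$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
H = rng.standard_normal((n, n))
H = (H + H.T) / 2           # symmetric Hessian approximation H_k
s = rng.standard_normal(n)  # trial step s_k
lam = 1.7                   # trust region dual variable lambda_k

# KKT stationarity with no active linear constraints: g_k = -(H_k + lam I) s_k
g = -(H + lam * np.eye(n)) @ s

# Model decrease f_k - q_k(s_k) = -g_k^T s_k - (1/2) s_k^T H_k s_k
decrease = -g @ s - 0.5 * s @ H @ s
# Right-hand side of the identity derived in the proof
rhs = 0.5 * s @ (H + lam * np.eye(n)) @ s + 0.5 * lam * (s @ s)

assert abs(decrease - rhs) < 1e-10
```

With an active-constraint term $\mathbf{s}_k^T\mathbf{A}^T(\boldsymbol{\lambda}_k^C) \geq 0$, the equality becomes the inequality \eqref{3.2}.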
To prove that the set ${\cal A}$ is infinite, we need some intermediate lemmas. The next result shows that the trust region radius is reduced whenever the CONTRACT subroutine is called.
\begin{lemma}\label{L3.4}
For any $k \in \mathbb{N}$, if $k \in {\cal C}$, then $\delta_{k+1} < \delta_k$.
\end{lemma}
\begin{proof}
Suppose that $k \in {\cal C}$. We prove the result by considering the various cases that may occur within the CONTRACT subroutine. If Step~\ref{return-4} is reached, the subroutine returns $\delta_{k+1} = \gamma_C\|\mathbf{s}_k\|_2 < \delta_k$. Otherwise, if Step~\ref{return-1} is reached, the subroutine returns $\delta_{k+1} = \|\mathbf{s}\|_2$ where $\mathbf{s}$ solves $Q_k(\lambda)$ for $\lambda \geq \bar{\lambda}$. Hence,
\[
\delta_{k+1} = \|\mathbf{s}\|_2 < \|\mathbf{s}_k\|_2 \leq \delta_k,
\]
where the strict inequality follows from Step~\ref{Cond-lower-bound-no}. Similarly, if Step~\ref{return-2} is reached, the subroutine returns $\delta_{k+1} = \|\bar{\mathbf{s}}\|_2$ where $\bar{\mathbf{s}}$ solves $Q_k(\bar\lambda)$. Hence,
\[
\delta_{k+1} = \|\bar{\mathbf{s}}\|_2 < \|\mathbf{s}_k\|_2 \leq \delta_k.
\]
Otherwise, Step~\ref{return-3} is reached, in which case the subroutine returns $\delta_{k+1} = \|\mathbf{s}\|_2$ where $\mathbf{s}$ solves $Q_k(\lambda)$ for some $\lambda > \lambda_k$. The result follows from the while loop condition in Step~\ref{while-loop} together with the inverse relationship between $\lambda$ and $\|\mathbf{s}\|_2$.
\end{proof}
We now show that for all iterations $k$, the trust region radius $\delta_k$ is bounded above by a non-decreasing sequence $\{\Delta_k\}$. Moreover, we show that $\delta_{k+1}\geq \delta_k$ for all $k \in {\cal A} \cup {\cal E}$.
\begin{lemma}\label{L3.5-3.6}
For any $k \in \mathbb{N}$, there holds $\delta_k \leq \Delta_k \leq \Delta_{k+1}$. Moreover, $\delta_{k+1}\geq \delta_k$ for all $k \in {\cal A} \cup {\cal E}$.
\end{lemma}
\begin{proof}
The fact that $\Delta_k \leq \Delta_{k+1}$ for all $k \in \mathbb{N}$ follows from the computations in Steps \ref{Delta-acc}, \ref{Delta-cont}, and \ref{Delta-exp} of Algorithm~\ref{alg-LC}. It remains to show that $\delta_k \leq \Delta_k$ for all $k \in \mathbb{N}$. We prove this by induction on $k$.
The inequality holds for $k = 0$ by the initialization of quantities in Step~\ref{initial} of Algorithm~\ref{alg-LC}. Assume the induction hypothesis holds for iteration $k$. By the computations in Steps~\ref{Delta-acc}, \ref{delta-acc}, \ref{Delta-exp}, \ref{delta-exp} and by Lemma \ref{L3.4}, the result holds for iteration~$k+1$. We next show that $\delta_{k+1}\geq \delta_k$ for all $k \in {\cal A} \cup {\cal E}$.
Suppose $k \in {\cal A}$. It follows from Steps~\ref{Delta-acc} and \ref{delta-acc} that
\[\delta_{k+1} = \min\{\max\{\Delta_k,\gamma_E\|\mathbf{s}_k\|_2\} , \max\{\delta_k, \gamma_E\|\mathbf{s}_k\|_2\}\} \geq \delta_k.\]
Here the inequality follows since $\delta_k \leq \Delta_k \leq \Delta_{k+1}$. Now suppose $k \in {\cal E}$. By the conditions indicated in Step~\ref{Cond-exp}, we have $\lambda_k > \sigma_k\|\mathbf{s}_k\|_2 \geq 0$. It follows by \eqref{CC3} that $\|\mathbf{s}_k\|_2 =\delta_k$. We obtain
\[\delta_{k+1} = \min\{\Delta_{k+1}, \lambda_k/\sigma_k\} \geq \min\{\delta_k , \|\mathbf{s}_k\|_2\} = \delta_k,\]
where the inequality follows since $\delta_k \leq \Delta_k \leq \Delta_{k+1}$.
\end{proof}
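Several of the arguments above and below invoke ``the relationship between the trust region radius and its corresponding multiplier,'' namely that the norm of the step $\mathbf{s}(\lambda)$ solving the regularized system $(\mathbf{H}_k + \lambda\mathbf{I})\mathbf{s} = -\mathbf{g}_k$ is non-increasing in $\lambda$. The sketch below is a minimal numerical illustration of this standard fact, ignoring the linear constraints; all names are illustrative and not part of the algorithm's subroutines:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
H = rng.standard_normal((n, n))
H = (H + H.T) / 2               # symmetric, possibly indefinite H_k
g = rng.standard_normal(n)      # gradient g_k

# Shift so that H + lam I is positive definite for every lam we sample
lam_min = max(0.0, -np.linalg.eigvalsh(H).min())

def step_norm(lam):
    """Norm of s(lam) solving (H + lam I) s = -g."""
    return np.linalg.norm(np.linalg.solve(H + lam * np.eye(n), -g))

lams = lam_min + 0.1 + np.linspace(0.0, 10.0, 50)
norms = [step_norm(l) for l in lams]

# ||s(lam)|| is monotonically non-increasing in lam
assert all(a >= b - 1e-12 for a, b in zip(norms, norms[1:]))
```

In the eigenbasis of $\mathbf{H}_k$ one has $\|\mathbf{s}(\lambda)\|_2^2 = \sum_i c_i^2/(d_i+\lambda)^2$, which makes the monotonicity explicit.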
The next result shows that an expansion step never immediately follows a contraction or expansion step; in particular, two consecutive expansion steps cannot occur.
\begin{lemma}\label{L3.7}
For any $k \in \mathbb{N}$, if $k \in {\cal C} \cup {\cal E}$, then $k+1 \notin {\cal E}$.
\end{lemma}
\begin{proof}
Observe that if $\lambda_{k+1}= 0$, then the conditions in Step~\ref{Cond-exp} of Algorithm~\ref{alg-LC} ensure that
$(k+1) \notin {\cal E}$. Thus, by \eqref{CC3}, we may proceed under the assumptions that $\|\mathbf{s}_{k+1}\|_2 = \delta_{k+1}$ and $\lambda_{k+1}> 0$.
Suppose that $k \in {\cal C}$, i.e., $\rho_k< \rho$. It follows that Step~\ref{Sigma-rho_k} sets $\sigma_{k+1} \geq \lambda_{k+1}/\|\mathbf{s}_{k+1}\|_2$. Therefore, if $\rho_{k+1}\geq \rho$, then $(k+1) \in {\cal A}$; otherwise, $\rho_{k+1} <\rho$, which implies that $(k + 1) \in {\cal C}$. In either case, $(k+1) \notin {\cal E}$.
Now suppose that $k \in {\cal E}$. It follows that
\begin{equation}\label{eq:proofL3.7}
\lambda_k > \sigma_k \|\mathbf{s}_k\|_2, \quad \delta_{k+1} = \min\{\Delta_k, \lambda_k/\sigma_k\}, \quad \mbox{and} \quad \sigma_{k+1} = \sigma_{k}.
\end{equation}
Combined with \eqref{CC3}, we get $\|\mathbf{s}_k\|_2= \delta_k$. We now consider two different cases:
\begin{enumerate}
\item Suppose $\Delta_k \geq \lambda_k/\sigma_k$. It follows from \eqref{eq:proofL3.7} that
\begin{equation}\label{eq:proof3.7-2}
\delta_{k+1} = \lambda_k/\sigma_k > \|\mathbf{s}_k\|_2 = \delta_k.
\end{equation}
Therefore, by the relationship between the trust region radius and its corresponding multiplier, we get $\lambda_{k+1} \leq \lambda_k$. Combined with \eqref{eq:proofL3.7} and \eqref{eq:proof3.7-2}, we obtain
\[\lambda_{k+1} \leq \lambda_k = \sigma_k\delta_{k+1} = \sigma_{k+1}\|\mathbf{s}_{k+1}\|_2.\]
Hence $(k + 1) \notin {\cal E}$.
\item Suppose $\Delta_k < \lambda_k / \sigma_k$. Using \eqref{eq:proofL3.7}
\[\|\mathbf{s}_{k+1}\|_2= \delta_{k+1} = \Delta_k = \Delta_{k+1},\]
where the last equality holds by Step~\ref{Delta-exp}.
If $\rho_{k+1} \geq \rho$, then $(k + 1) \in {\cal A}_{\Delta}\subseteq {\cal A}$. Otherwise, $\rho_{k+1} < \rho$, from which it follows that $(k + 1) \in {\cal C}$.
\end{enumerate}
Hence, in both cases $(k + 1) \notin {\cal E}$.
\end{proof}
Next, we show that if the dual variable $\lambda_k$ for the trust region constraint is sufficiently large, then the constraint is active and the sufficient decrease criterion is met.
\begin{lemma}\label{L3.8}
For any $k \in \mathbb{N}$, if the trial step $\mathbf{s}_k$ and dual variable $\lambda_k$ satisfy
\begin{equation}\label{3.6}
\lambda_k \geq g_{Lip} + H_{max} + \rho \|\mathbf{s}_k\|_2,
\end{equation}
then $\|\mathbf{s}_k\|_2 = \delta_k$ and $\rho_k \geq \rho$.
\end{lemma}
\begin{proof}
By the mean value theorem and the definition of the model $q_k$, there exists a point $\bar{\mathbf{x}}_k \in \mathbb{R}^n$ on the line segment $[\mathbf{x}_k , \mathbf{x}_k + \mathbf{s}_k ]$ such that
\begin{equation}\label{3.7}
\begin{array}{l l}
q_k (\mathbf{s}_k ) - f (\mathbf{x}_k + \mathbf{s}_k) = & \big(\mathbf{g}_k - \mathbf{g}(\bar{\mathbf{x}}_k)\big)^T \mathbf{s}_k + \dfrac{1}{2} \mathbf{s}_k^T \mathbf{H}_k \mathbf{s}_k\\
& \geq -\|\mathbf{g}_k - \mathbf{g}(\bar{\mathbf{x}}_k)\|_2\|\mathbf{s}_k\|_2 - \dfrac{1}{2}\|\mathbf{H}_k\|_2\|\mathbf{s}_k\|_2^2.
\end{array}
\end{equation}
Therefore,
\begin{flalign*}
f_k - f(\mathbf{x}_k + \mathbf{s}_k) &= f_k - q_k(\mathbf{s}_k) + q_k(\mathbf{s}_k) - f(\mathbf{x}_k + \mathbf{s}_k)\\
&\geq \dfrac{1}{2}\mathbf{s}_k^T\mathbf{H}_k \mathbf{s}_k + \lambda_k \|\mathbf{s}_k\|_2^2 - \|\mathbf{g}_k - \mathbf{g}(\bar{\mathbf{x}}_k)\|_2\|\mathbf{s}_k\|_2 - \dfrac{1}{2}\|\mathbf{H}_k\|_2\|\mathbf{s}_k\|_2^2\\
&\geq -\|\mathbf{H}_k\|_2\|\mathbf{s}_k\|_2^2 + \lambda_k \|\mathbf{s}_k\|_2^2 - g_{Lip}\|\mathbf{s}_k\|_2^2\\
&\geq (\lambda_k - g_{Lip} - H_{max})\|\mathbf{s}_k\|_2^2\\
&\geq \rho \|\mathbf{s}_k\|_2^3.
\end{flalign*}
Here the first inequality follows from Lemma~\ref{L3.3} and expression \eqref{3.7}, the second from the Lipschitz continuity of the gradient (since $\|\bar{\mathbf{x}}_k - \mathbf{x}_k\|_2 \leq \|\mathbf{s}_k\|_2$), and the last from \eqref{3.6}. Dividing by $\|\mathbf{s}_k\|_2^3$ yields $\rho_k \geq \rho$. The equality $\|\mathbf{s}_k\|_2=\delta_k$ follows directly from \eqref{3.6} and \eqref{CC3}.
\end{proof}
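The mechanism behind Lemma~\ref{L3.8} can be observed on a convex quadratic, for which the second-order model with the exact Hessian reproduces $f$ exactly, so any $\lambda_k$ satisfying \eqref{3.6} forces $\rho_k \geq \rho$. The following is a hypothetical unconstrained sketch; the constructions are for illustration only and are not part of the algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
Q = A @ A.T                           # positive semidefinite Hessian, H_k = Q
g_lip = h_max = np.linalg.norm(Q, 2)  # gradient Lipschitz constant and Hessian bound
rho = 0.5

x = rng.standard_normal(n)
s = rng.standard_normal(n)
lam = g_lip + h_max + rho * np.linalg.norm(s)  # condition (3.6)
g = -(Q + lam * np.eye(n)) @ s                 # KKT stationarity, no active constraints
c = g - Q @ x                                  # choose c so that grad f(x) = Qx + c = g

f = lambda y: 0.5 * y @ Q @ y + c @ y
rho_k = (f(x) - f(x + s)) / np.linalg.norm(s) ** 3
assert rho_k >= rho   # actual-to-cubed-step ratio meets the acceptance threshold
```

Here $f(\mathbf{x}) - f(\mathbf{x}+\mathbf{s}) = \frac{1}{2}\mathbf{s}^T\mathbf{Q}\mathbf{s} + \lambda\|\mathbf{s}\|_2^2 \geq \rho\|\mathbf{s}\|_2^3$, matching the chain of inequalities in the proof.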
We now use the previous results to show that if all steps from some iteration onward are contraction steps, then the sequence of trust region radii converges to zero and the sequence of dual variables diverges to infinity.
\begin{lemma}\label{L3.9}
If $k \in {\cal C}$ for all $k \geq k_0$, then $\{\delta_k\} \rightarrow 0$ and $\{\lambda_k\} \rightarrow \infty$.
\end{lemma}
\begin{proof}
Assume, without loss of generality, that $k \in {\cal C}$ for all $k \in \mathbb{N}$. It follows from Lemma \ref{L3.4} that $\{\delta_k\}$ is monotonically strictly decreasing. Combined with the fact that $\{\delta_k\}$ is bounded below by zero, we have that $\{\delta_k\}$ converges. We may now observe that if Step~\ref{return-4} of the CONTRACT subroutine is reached infinitely often, then clearly,
$\{\delta_k\} \rightarrow 0$. Hence, it follows by the relationship between the trust region radius and its corresponding multiplier that $\{\lambda_k\} \rightarrow \infty$. Therefore, let us assume that Step~\ref{return-4} of the CONTRACT subroutine does not occur infinitely often, i.e., that there exists $k_{\cal C} \in \mathbb{N}$ such that Step~\ref{return-1}, \ref{return-2}, or \ref{return-3} is reached for all $k \geq k_{\cal C}$.
Consider iteration $k_{\cal C}$. Steps \ref{lambda-update-sigma}, \ref{lambda-update-H}, \ref{lambda-bar-update-linear}, \ref{lambda-update-linear}, \ref{lambda-update-linear-while} in the CONTRACT subroutine will set
\[\lambda_{k+1} =\lambda \geq \min\{ \lambda_k + \underline{\sigma}\Delta_k, \gamma_{\lambda} \lambda_k\} >\lambda_k \quad \mbox{ for all } k \geq k_{\cal C} +1.\]
Therefore, since $k \in {\cal C}$ for all $k \geq k_{\cal C}$, we have $\mathbf{x}_k = \mathbf{x}_{k_{\cal C}}$ (and so ${\cal X}_k = {\cal X}_{k_{\cal C}}$) for all $k \geq k_{\cal C}$, which implies that $\{\lambda_k\}\rightarrow \infty$. It follows by the relationship between the trust region radius and its corresponding multiplier that $\|\mathbf{s}_k\|_2 = \delta_k \rightarrow 0$.
\end{proof}
We now prove that the set of accepted steps is infinite.
\begin{lemma}\label{L3.10}
The set ${\cal A}$ has infinite cardinality.
\end{lemma}
\begin{proof}
To derive a contradiction, suppose that $|{\cal A}| < \infty$. We claim that this implies $|{\cal C}| = \infty$. Indeed, if $|{\cal C}| <\infty$, then there exists some $k_{\cal E} \in \mathbb{N}$ such that $k \in {\cal E}$ for all $k \geq k_{\cal E}$, which contradicts Lemma \ref{L3.7}. Thus, $|{\cal C}|= \infty$. Combining this with the result of Lemma \ref{L3.7}, we conclude that there exists some $k_{\cal C} \in \mathbb{N}_{+}$ such that $k \in {\cal C}$ for all $k \geq k_{\cal C}$. It follows from Lemma \ref{L3.9} that $\|\mathbf{s}_k\|_2 \leq \delta_k \rightarrow 0$ and $\{\lambda_k\} \rightarrow \infty$. In combination with Lemma \ref{L3.8}, we conclude that there exists some $k \geq k_{\cal C}$ such that $\rho_k \geq \rho$, which contradicts the fact that $k \in {\cal C}$ for all $k \geq k_{\cal C}$. Having arrived at a contradiction under the supposition that $|{\cal A}| <\infty$, the result follows.
\end{proof}
We now provide an upper bound for the sequence $\{\Delta_k\}$ and the trial steps $\{\mathbf{s}_k\}$. Moreover, we show that the number of ${\cal A}_{\Delta}$ steps computed by the algorithm is finite.
\begin{lemma}\label{L3.11}
There exist a scalar constant $\Delta>0$ and an index $k_{\mathcal{A}}\in \mathbb{N}$ such that $\Delta_k = \Delta$ for all $k \geq k_{\mathcal{A}}$. Moreover, the set ${\cal A}_{\Delta}$ has finite cardinality, and there exists a scalar
constant $s_{max}>0$ such that $\|\mathbf{s}_k\|_2 \leq s_{max}$ for all $k \in \mathbb{N}$.
\end{lemma}
\begin{proof}
For all $k \in {\cal A}$, we have $\rho_k \geq \rho$, which implies by Step~\ref{x-acc} of Algorithm~\ref{alg-LC} that
\[f(\mathbf{x}_k) - f(\mathbf{x}_{k+1}) \geq \rho\|\mathbf{s}_k\|_2^3.\]
Combining this with Lemma \ref{L3.10} and the fact that $f$ is bounded below, it follows that $\{\mathbf{s}_k\}_{k\in {\cal A}} \rightarrow 0$. In particular, there
exists $k_{\cal A} \in \mathbb{N}$ such that for all $k \in {\cal A}$ with $k \geq k_{\cal A}$, we have
\begin{equation}\label{3.8}
\gamma_E\|\mathbf{s}_k\|_2 \leq \Delta_0 \leq \Delta_k,
\end{equation}
where the latter inequality follows from Lemma \ref{L3.5-3.6}. Combined with the update in Steps \ref{Delta-acc}, \ref{Delta-cont} and \ref{Delta-exp} of LC-TRACE, we get
\[\Delta_{k+1} = \Delta_k \mbox{ for all } k \geq k_{\cal A}.\]
This proves the first part of the lemma. The second part also follows from \eqref{3.8} which implies that $\|\mathbf{s}_k\|_2 < \Delta_k$ for all $k \in {\cal A}$ with $k \geq k_{\cal A}$. Finally, the last part of the lemma follows from the first part and the fact that Lemma \ref{L3.5-3.6} ensures $\|\mathbf{s}_k\|_2 \leq \delta_k \leq \Delta_k = \Delta$ for all sufficiently large $k \in \mathbb{N}$.
\end{proof}
We now show that there exists a uniform upper bound on the term ${\|\mathbf{g}_k +\mathbf{A}^T\bm{\lambda}_k^C \|_2}$.
\begin{lemma}\label{LBG}
For all $k \in \mathbb{N}$, ${\|\mathbf{g}_k +\mathbf{A}^T\bm{\lambda}_k^C \|_2} \leq G_{max}$, where $G_{max}>0$ is a constant scalar. Moreover,
\[\lambda_k \leq \max\{\lambda_0, \lambda_{max}\} \quad \forall \, k \in \mathbb{N},\]
where $\lambda_{max} \triangleq \max\{g_{Lip} + 2H_{max} + (\rho +\underline{\sigma}) \Delta + (\underline{\sigma} g_{max})^{1/2}, \, \gamma_{\lambda}(g_{Lip} + H_{max} + \rho\Delta)\}$.
\end{lemma}
\begin{proof}
By \eqref{CC1} and Lemma \ref{L3.5-3.6},
\[{\|\mathbf{g}_k +\mathbf{A}^T\bm{\lambda}_k^C \|_2} = \|\mathbf{H}_k \mathbf{s}_k + \lambda_k \mathbf{s}_k\|_2 \leq (H_{max} + \lambda_k) \delta_k \leq (H_{max} + \lambda_k) \Delta.\]
Thus, it suffices to find a constant upper bound for $\lambda_k$ to get the desired result.
If $\|\mathbf{s}_{k+1}\|_2 < \delta_{k+1}$, then by \eqref{CC3}, $\lambda_{k+1} = 0$. Therefore, we may proceed under the assumption that $\|\mathbf{s}_{k+1}\|_2 = \delta_{k+1}$.
Suppose $k \in {\cal C}$. Then, by Lemma \ref{L3.8}, $\lambda_{k} < g_{Lip} + H_{max} + \rho \Delta$.\\
If Step~\ref{Cond-lower-bound-no} in the CONTRACT subroutine tests true, we get
\begin{equation}\label{LBGex1}
\begin{array}{ll}
\lambda_{k+1} & \leq \lambda_k +H_{max} + \underline{\sigma}\Delta_k + (\underline{\sigma}{\cal X}_k)^{1/2}\\
& \leq g_{Lip} + 2H_{max} + (\rho +\underline{\sigma}) \Delta + (\underline{\sigma} g_{max})^{1/2}.
\end{array}
\end{equation}
Otherwise, if Step~\ref{Cond-lower-bound-no} tests false, we claim that
\begin{equation}\label{LBGex2}
\lambda_{k+1} \leq \gamma_{\lambda}(g_{Lip} + H_{max} + \rho \Delta).
\end{equation}
To show our claim, we assume the contrary, i.e., $\lambda_{k+1} > \gamma_{\lambda}(g_{Lip} + H_{max} + \rho \Delta)$. Then the condition of the while loop in Step~\ref{while-loop} of the CONTRACT subroutine tested true for some $\hat{\mathbf{s}}$ that solves $Q_{k}(\hat{\lambda})$ for some $\hat{\lambda} \geq g_{Lip} + H_{max} + \rho \Delta$.
There exists $\hat{\mathbf{x}}$ on the line segment $[\mathbf{x}_{k} , \mathbf{x}_{k} + \hat{\mathbf{s}}]$ such that
\begin{equation}\label{eq:l3.11.3}
q_k(\hat{\mathbf{s}}) - f(\mathbf{x}_k + \hat{\mathbf{s}}) = \big(\mathbf{g}_k - \mathbf{g}(\hat{\mathbf{x}})\big)^T\hat{\mathbf{s}} +\dfrac{1}{2}\hat{\mathbf{s}}^T\mathbf{H}_k\hat{\mathbf{s}} \geq -g_{Lip}\|\hat{\mathbf{s}}\|_2^2 - \dfrac{1}{2}H_{max}\|\hat{\mathbf{s}}\|_2^2.
\end{equation}
Therefore,
\begin{flalign*}
\dfrac{f_k - f(\mathbf{x}_k + \hat{\mathbf{s}})}{\|\hat{\mathbf{s}}\|_2^3} &= \dfrac{f_k - q_k(\hat{\mathbf{s}}) + q_k(\hat{\mathbf{s}}) - f(\mathbf{x}_k + \hat{\mathbf{s}})}{\|\hat{\mathbf{s}}\|_2^3}\\
&\geq \dfrac{-\|\mathbf{H}_k\|_2 + 2\hat{\lambda} -2g_{Lip} - H_{max}}{2\|\hat{\mathbf{s}}\|_2}\\
&\geq \dfrac{\hat{\lambda} - g_{Lip} - H_{max}}{\|\hat{\mathbf{s}}\|_2}\\
& \geq \rho,
\end{flalign*}
where the first inequality holds by Lemma \ref{L3.3} and \eqref{eq:l3.11.3}, the second uses $\|\mathbf{H}_k\|_2 \leq H_{max}$, and the last uses $\hat{\lambda} \geq g_{Lip} + H_{max} + \rho \Delta$ together with $\|\hat{\mathbf{s}}\|_2 \leq \Delta$. Since
\[\rho_{k} = \dfrac{f_k - f(\mathbf{x}_k + \mathbf{s}_k)}{\|\mathbf{s}_k\|_2^3} < \rho,\] it follows that $\|\hat{\mathbf{s}}\|_2 \neq \|\mathbf{s}_k\|_2$, which contradicts the condition of the while loop in Step~\ref{while-loop}, which tested true for $\hat{\mathbf{s}}$ generated by solving $Q_k(\hat\lambda)$.
Combining \eqref{LBGex1} and \eqref{LBGex2}, we get that for all $k \in {\cal C}$
\begin{equation}\label{LBGex3}
\lambda_{k+1} \leq \lambda_{max},
\end{equation}
where $\lambda_{max} \triangleq \max\{g_{Lip} + 2H_{max} + (\rho +\underline{\sigma}) \Delta + (\underline{\sigma} g_{max})^{1/2}, \, \gamma_{\lambda}(g_{Lip} + H_{max} + \rho\Delta)\}$. Now, suppose that $k \in {\cal A} \cup {\cal E}$. By Lemma \ref{L3.5-3.6}, we have $\|\mathbf{s}_{k+1}\|_2 = \delta_{k+1} \geq \delta_k \geq \|\mathbf{s}_k\|_2.$ Hence, by the relationship between the trust region radius and its corresponding multiplier, we obtain
\begin{equation}\label{LBGex4}
\lambda_{k+1} \leq \lambda_k.
\end{equation}
Let $k_{\cal C} \triangleq \min\{k \in \mathbb{N} \, |\, k \in {\cal C}\}$ be the first contract step. By \eqref{LBGex4}, $\lambda_k \leq \lambda_0$ for all $k \leq k_{\cal C}$. Moreover, using \eqref{LBGex3} and \eqref{LBGex4},
\[\lambda_k \leq \lambda_{max} \quad \forall \, k > k_{\cal C}.\]
Combining these results yields
\[\lambda_k \leq \max\{\lambda_0, \lambda_{max}\} \quad \forall \, k \in \mathbb{N},\]
which completes the proof.
\end{proof}
Notice that in the proof of Lemma \ref{LBG} we have shown that the dual variables $\lambda_k$ admit a uniform upper bound. Our next result shows that the ratio ${\cal X}_k/\|\mathbf{s}_k\|_2$ is bounded above by $C_{min} + \lambda_k$, where $C_{min}$ is a scalar constant.
\begin{lemma}\label{LXksk}
For any $k \in \mathbb{N}$, it holds
\begin{equation}\label{eq:X_ks_k}
{\cal X}_k \leq (C_{min} + \lambda_k)\|\mathbf{s}_k\|_2,
\end{equation}
where $C_{min} \triangleq H_{max} + G_{max} + g_{max}$ is a scalar constant.
\end{lemma}
\begin{proof}
Let $\xi_{k,1}$ be the largest singular value of $\mathbf{H}_k$. For all $\mathbf{d}$ satisfying $\mathbf{A}\mathbf{d} \leq \mathbf{b} - \mathbf{A}\mathbf{x}_k$, we have
\begin{flalign}\label{eq:proofL3.12}
\mathbf{g}_k^T\mathbf{d} &= -\mathbf{d}^T(\mathbf{H}_k + \lambda_k\mathbf{I})\mathbf{s}_k - (\boldsymbol{\lambda}_k^C)^T\mathbf{A}\mathbf{d} \nonumber \\
&\geq -\mathbf{d}^T(\mathbf{H}_k + \lambda_k\mathbf{I})\mathbf{s}_k - (\boldsymbol{\lambda}_k^C)^T\mathbf{A}\mathbf{s}_k \nonumber \\
&\geq -(\xi_{k,1} + \lambda_k)\|\mathbf{d}\|_2\|\mathbf{s}_k\|_2 - (\boldsymbol{\lambda}_k^C)^T\mathbf{A}\mathbf{s}_k,
\end{flalign}
where the first equality holds by \eqref{CC1}, and the first inequality holds by complementary slackness \eqref{CC2}. Minimizing over all such $\mathbf{d}$, we obtain
\begin{flalign*}
\underset{\mathbf{A}\mathbf{d} \leq \mathbf{b} - \mathbf{A}\mathbf{x}_k, \, \, \|\mathbf{d}\|\leq 1}{\min} \,\,\mathbf{g}_k^T\mathbf{d} &\geq -(\xi_{k,1} + \lambda_k)\|\mathbf{s}_k\|_2-\|(\boldsymbol{\lambda}_k^C)^T\mathbf{A}\|_2\|\mathbf{s}_k\|_2\\
&\geq -(\xi_{k,1} + \lambda_k)\|\mathbf{s}_k\|_2-(\|\mathbf{g}_k + \mathbf{A}^T\boldsymbol{\lambda}_k^C\|_2+ \|\mathbf{g}_k\|_2)\|\mathbf{s}_k\|_2\\
&\geq -(H_{max} + \lambda_k)\|\mathbf{s}_k\|_2 - (G_{max} + g_{max})\|\mathbf{s}_k\|_2,
\end{flalign*}
where the last inequality uses Lemma \ref{LBG}. Then the definition of ${\cal X}_k$ yields
\begin{flalign*}
{\cal X}_k &\leq ( H_{max} + \lambda_k + G_{max} + g_{max})\|\mathbf{s}_k\|_2\\
&=(C_{min}+\lambda_k)\|\mathbf{s}_k\|_2,
\end{flalign*}
where $C_{min} \triangleq H_{max} + G_{max} + g_{max}$ is a scalar constant.
\end{proof}
We now show that the limit inferior of the stationarity measure ${\cal X}_k$ equals zero.
\begin{lemma}\label{L3.13}
There holds
\[\liminf_{k \rightarrow \infty} {\cal X}_k = 0.\]
\end{lemma}
\begin{proof}
Suppose, to the contrary, that there exists a scalar constant ${\cal X}_{min} > 0$ such that ${\cal X}_k \geq {\cal X}_{min}$ for all $k \in \mathbb{N}$. Then, by Lemmas \ref{LBG} and \ref{LXksk}, for the scalar
\[s_{min} = \dfrac{{\cal X}_{min}}{C_{min} + \max\{\lambda_{max}, \lambda_0\}}\,,\]
we have $\|\mathbf{s}_k\|_2 \geq s_{min} > 0$ for all $k \in \mathbb{N}$. Moreover, for all $k \in {\cal A}$ we have $f_k - f_{k+1} \geq \rho\|\mathbf{s}_k\|_2^3 > 0$. Given the lower boundedness of $f$ and Lemma \ref{L3.10}, which ensures the infinite cardinality of the set ${\cal A}$, we have $\{\mathbf{s}_k \}_{k \in {\cal A}} \rightarrow 0$. This contradicts the existence of $s_{min} > 0$.
\end{proof}
\begin{theorem}\label{L3.14}
Under Assumption \ref{Assumption1-LC}, it holds that
\begin{equation}\label{3.11}
\lim_{k \in \mathbb{N}, \, k \rightarrow \infty} \, {\cal X}_k =0.
\end{equation}
\end{theorem}
\begin{proof}
Suppose, to the contrary, that \eqref{3.11} does not hold. Combined with Lemmas \ref{L3.10} and \ref{L3.13}, this implies that there exists an infinite subsequence $\{t_i\} \subseteq {\cal A}$ (indexed over $i \in \mathbb{N}$) such that ${\cal X}_{t_i} \geq 2\epsilon_{\cal X}$ for some $\epsilon_{\cal X} > 0$ and all $i \in \mathbb{N}$. Additionally, Lemmas \ref{L3.10} and \ref{L3.13} imply that there exists an infinite subsequence $\{l_i\} \subseteq {\cal A}$ such that
\begin{equation}\label{3.12}
{\cal X}_k \geq \epsilon_{\cal X} \mbox{ and } {\cal X}_{l_i} < \epsilon_{\cal X} \quad \forall \, \, i\in \mathbb{N}, \,\, k \in \mathbb{N}, \, \, t_i \leq k <l_i.
\end{equation}
We claim that for all $k \in \mathbb{N}_{+}$, the trial step $\mathbf{s}_k$ satisfies the following
\begin{equation}\label{eq:Lproof3.14}
\|\mathbf{s}_k\|_2 \geq \min \Big\{\delta_k, \dfrac{{\cal X}_k}{C_{min}} \Big\}.
\end{equation}
The proof of this claim follows directly from Lemma \ref{LXksk}. If $\|\mathbf{s}_k\|_2 = \delta_k$, the result trivially holds. Otherwise, by the KKT condition \eqref{CC3}, $\lambda_k=0$, which, combined with Lemma \ref{LXksk}, proves our claim.
We now restrict our attention to indices in the infinite index set
\[{\cal K} \triangleq \{k \in {\cal A}: t_i \leq k < l_i \mbox{ for some } i \in \mathbb{N}\}.\]
Observe from \eqref{3.12} and \eqref{eq:Lproof3.14} that
\begin{equation}\label{3.13}
f_k - f_{k+1} \geq \rho \|\mathbf{s}_k\|_2^3 \geq \rho \Big(\min\Big\{\delta_k , \dfrac{\epsilon_{\cal X}}{C_{min}}\Big\}\Big)^{3}.
\end{equation}
Since $\{ f_k \}$ is monotonically decreasing and bounded below, we know that $f_k \rightarrow \underline{f}$ for some $\underline{f} \in \mathbb{R}$. When combined with \eqref{3.13}, we obtain
\begin{equation}\label{3.14}
\lim_{k \in {\cal K}, k \rightarrow \infty} \delta_k =0.
\end{equation}
Using this fact and Lemma \ref{L3.3}, we have for all sufficiently large $k \in {\cal K}$ that
\begin{flalign*}
f_k - f_{k+1} & =f_k - q_k(\mathbf{s}_k) +q_k(\mathbf{s}_k) - f_{k+1}\\
&\geq C{\cal X}_k \min \Big\{\delta_k, \dfrac{{\cal X}_k}{\|\mathbf{H}_k\|_2}, 1 \Big\} - (g_{Lip} + \dfrac{1}{2}H_{max})\|\mathbf{s}_k\|_2^2\\
& \geq C\epsilon_{\cal X} \min \Big\{\delta_k, \dfrac{\epsilon_{\cal X}}{H_{max}}, 1 \Big\} - (g_{Lip} + \dfrac{1}{2}H_{max})\|\mathbf{s}_k\|_2^2\\
& \geq C\epsilon_{\cal X}\delta_k - (g_{Lip} + \dfrac{1}{2}H_{max})\delta_k^2\\
& \geq \dfrac{C}{2}\epsilon_{\cal X}\delta_k.
\end{flalign*}
Consequently, for all sufficiently large $i \in \mathbb{N}$, we have
\begin{flalign*}
\|\mathbf{x}_{t_i} - \mathbf{x}_{l_i}\|_2 &\leq \sum_{k \in {\cal K}, k=t_i}^{l_i -1} \|\mathbf{x}_k - \mathbf{x}_{k+1}\|_2\\
& \leq \sum_{k \in {\cal K}, k=t_i}^{l_i -1} \delta_k \leq \sum_{k \in {\cal K}, k=t_i}^{l_i -1} \dfrac{2}{C\epsilon_{\cal X}} (f_k - f_{k+1}) = \dfrac{2}{C\epsilon_{\cal X}}(f_{t_i} - f_{l_i}).
\end{flalign*}
Since $\{ f_{t_i} - f_{l_i}\} \rightarrow 0$, we get $\{\|\mathbf{x}_{t_i} -\mathbf{x}_{l_i}\|_2\}\rightarrow 0$, which, in turn, implies that $\{{\cal X}_{t_i} - {\cal X}_{l_i}\} \rightarrow 0$. This contradicts \eqref{3.12}.
\end{proof}
\subsection{Proof of Theorem \ref{epsilon-first-order-convergence-complexity}}\label{epsilon-first-order-convergence-complexity-app}
In this section we show that the number of iterations required to reach an $\epsilon$-first-order stationary point is ${\cal O}(\epsilon^{-3/2}\log^3 \epsilon^{-1})$. To that end, we start by showing the desired model decrease under Assumption \ref{Assumption 3}.
\begin{lemma}\label{L4.5_ACR}
Consider the directions $\mathbf{s}$, $\mathbf{s}^{+}$ and points $\mathbf{x} = \mathbf{x}_k + \mathbf{s}$, $\mathbf{x}^{+} = \mathbf{x}_k + \mathbf{s}^{+}$. If for some $\bar{\kappa} \in (0,1]$
\begin{flalign}
&f_k -q_k(\mathbf{s}) \geq \bar{\kappa} \lambda_k \|\mathbf{s}\|_2^2, \label{eq:l1}\\
&q_k(\mathbf{s}^{+}) \leq q_k (\mathbf{s}), \label{eq:l2}\\
&\mathbf{g}_k^T(\mathbf{s}^{+} - \mathbf{s}) + (\mathbf{s}^{+})^T\mathbf{H}_k(\mathbf{s}^{+} - \mathbf{s}) \leq -\lambda_k (\mathbf{s}^{+})^T(\mathbf{s}^{+} - \mathbf{s}), \label{eq:l3}\\
&\mathbf{g}_k^T(\mathbf{s}^{+} - \mathbf{s}) + \mathbf{s}^T\mathbf{H}_k(\mathbf{s}^{+} - \mathbf{s}) \leq -\lambda_k \mathbf{s}^T(\mathbf{s}^{+} - \mathbf{s}), \label{eq:l4}
\end{flalign}
then
\[f_k -q_k(\mathbf{s}^+) \geq \dfrac{1}{3}\bar{\kappa} \lambda_k\|\mathbf{s}^{+}\|_2^2.\]
\end{lemma}
\begin{proof}
Suppose that for a given constant scalar $\alpha \in (0,1)$, $\|\mathbf{s}\|_2 \geq \alpha \|\mathbf{s}^{+}\|_2$. Then, it directly follows by~\eqref{eq:l1} and \eqref{eq:l2} that
\begin{equation}\label{eq:L4.5_ARC_1}
f_k - q_k(\mathbf{s}^{+}) = f_k - q_k(\mathbf{s}) + q_k(\mathbf{s}) - q_k(\mathbf{s}^{+})\geq \bar{\kappa}\lambda_k\|\mathbf{s}\|_2^2 \geq \bar{\kappa} \alpha^2 \lambda_k\|\mathbf{s}^{+}\|_2^2.
\end{equation}
Now consider the case that $\|\mathbf{s}\|_2 < \alpha \|\mathbf{s}^{+}\|_2$. First note that by \eqref{eq:l3},
\begin{equation}\label{eq:l4.5-1}
\begin{array}{ll}
0 & \geq (\mathbf{g}_k + \mathbf{H}_k \mathbf{s}^{+})^T(\mathbf{s}^{+} - \mathbf{s}) +\lambda_k (\mathbf{s}^{+})^T(\mathbf{s}^{+} - \mathbf{s})\\
& = (\mathbf{g}_k + \mathbf{H}_k \mathbf{s})^T(\mathbf{s}^{+} - \mathbf{s}) +(\mathbf{s}^{+} - \mathbf{s})^T\mathbf{H}_k(\mathbf{s}^{+} - \mathbf{s}) +\lambda_k (\mathbf{s}^{+})^T(\mathbf{s}^{+} - \mathbf{s}).
\end{array}
\end{equation}
Also, by \eqref{eq:l4}
\begin{equation}\label{eq:l4.5-2}
(\mathbf{g}_k + \mathbf{H}_k \mathbf{s})^T(\mathbf{s}^{+} - \mathbf{s}) + \lambda_k\mathbf{s}^T(\mathbf{s}^{+} - \mathbf{s}) \leq 0.
\end{equation}
Adding \eqref{eq:l4.5-1} and \eqref{eq:l4.5-2}, we get
\begin{equation}\label{eq:l11}
\begin{array}{ll}
q_k(\mathbf{s}^{+}) - q_k(\mathbf{s}) &= (\mathbf{g}_k + \mathbf{H}_k \mathbf{s})^T(\mathbf{s}^{+} - \mathbf{s}) + \dfrac{1}{2}(\mathbf{s}^{+} - \mathbf{s})^T\mathbf{H}_k (\mathbf{s}^{+} - \mathbf{s})\\
&\leq -\dfrac{1}{2}\lambda_k (\|\mathbf{s}^{+}\|_2^2 - \|\mathbf{s}\|_2^2).
\end{array}
\end{equation}
Since $\|\mathbf{s}\|_2 < \alpha \|\mathbf{s}^{+}\|_2$, it follows that
\begin{equation}\label{eq:L4.5_ARC_2}
f_k - q_k(\mathbf{s}^{+}) \geq q_k(\mathbf{s}) - q_k(\mathbf{s}^{+}) \geq \dfrac{1}{2}\lambda_k (\|\mathbf{s}^{+}\|_2^2 - \|\mathbf{s}\|_2^2) \geq \dfrac{1}{2}\lambda_k\bar{\kappa} \|\mathbf{s}^{+}\|_2^2 (1- \alpha^2),
\end{equation}
where the first inequality holds by \eqref{eq:l1}, the second inequality holds by \eqref{eq:l11}, and the last inequality holds because $\|\mathbf{s}\|_2 < \alpha \|\mathbf{s}^{+}\|_2$ and $\bar{\kappa} \leq 1$. We now choose the value of $\alpha$ for which the lower bounds~\eqref{eq:L4.5_ARC_1} and \eqref{eq:L4.5_ARC_2} are equal, i.e., $\alpha^2 = \dfrac{1}{2}\left(1 - \alpha^2\right)$, equivalently $\alpha = 1/\sqrt{3}$. With this choice, both cases yield $f_k - q_k(\mathbf{s}^{+}) \geq \frac{1}{3}\bar{\kappa}\lambda_k\|\mathbf{s}^{+}\|_2^2$.
\end{proof}
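The balancing argument at the end of the proof can be checked directly: the two lower bounds, $\bar{\kappa}\alpha^2\lambda_k\|\mathbf{s}^{+}\|_2^2$ and $\frac{1}{2}\bar{\kappa}(1-\alpha^2)\lambda_k\|\mathbf{s}^{+}\|_2^2$, coincide exactly when $\alpha^2 = 1/3$, which produces the constant $1/3$ in the statement of the lemma:

```python
import math

# The proof balances two lower bounds on f_k - q_k(s+), both in units of
# kappa_bar * lambda_k * ||s+||^2:
#   (i)  alpha**2            (case ||s|| >= alpha * ||s+||)
#   (ii) (1 - alpha**2) / 2  (case ||s|| <  alpha * ||s+||)
# They coincide when alpha**2 = (1 - alpha**2) / 2, i.e. alpha = 1/sqrt(3).
alpha = 1.0 / math.sqrt(3.0)
assert abs(alpha**2 - 0.5 * (1.0 - alpha**2)) < 1e-12
assert abs(alpha**2 - 1.0 / 3.0) < 1e-12  # common value 1/3, the constant in the lemma
```

Any other choice of $\alpha$ would make one of the two case bounds strictly weaker, so $\alpha = 1/\sqrt{3}$ maximizes the guaranteed decrease.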
We next show that sufficient model decrease is satisfied when either $\mathbf{g}_k^T\mathbf{s}_k \leq 0$ or $\mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k \geq 0$.
\begin{lemma}\label{L4.4_ACR}
Suppose that $\mathbf{g}_k^T\mathbf{s}_k \leq 0$ or $\mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k\geq 0$. Then,
\begin{equation}
f_k - q_k(\mathbf{s}_k) = -\mathbf{g}_k^T\mathbf{s}_k - \dfrac{1}{2}\mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k \geq \dfrac{1}{2}\lambda_k\|\mathbf{s}_k\|_2^2.
\end{equation}
\end{lemma}
\begin{proof}
First notice that since the origin is feasible in $Q_k$,
\[\mathbf{g}_k^T\mathbf{s}_k + \dfrac{1}{2}\mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k \leq 0.\]
Hence,
\[\mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k\geq 0 \Rightarrow \mathbf{g}_k^T\mathbf{s}_k \leq 0.\]
On the other hand, if $\mathbf{g}_k^T\mathbf{s}_k \leq 0$, by \eqref{CC1},
\begin{equation}\label{eq:lemma4.4-1}
2\big(\mathbf{g}_k^T \mathbf{s}_k + \dfrac{1}{2}\mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k + \dfrac{1}{2}\lambda_k\|\mathbf{s}_k\|_2^2\big) = \mathbf{g}_k^T\mathbf{s}_k - \mathbf{s}_k^T \mathbf{A}^T \boldsymbol{\lambda}_k^C \leq \mathbf{g}_k^T\mathbf{s}_k \leq 0,
\end{equation}
where the first inequality holds due to the complementary slackness condition~\eqref{CC2}.
\end{proof}
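As a sanity check on the algebra in \eqref{eq:lemma4.4-1}: in the simplified case $\boldsymbol{\lambda}_k^C = \mathbf{0}$, the KKT condition gives $\mathbf{g}_k = -(\mathbf{H}_k + \lambda_k\mathbf{I})\mathbf{s}_k$ and the identity $f_k - q_k(\mathbf{s}_k) = \frac{1}{2}\lambda_k\|\mathbf{s}_k\|_2^2 - \frac{1}{2}\mathbf{g}_k^T\mathbf{s}_k$ holds exactly, so $\mathbf{g}_k^T\mathbf{s}_k \leq 0$ immediately yields the claimed bound. An illustrative numerical check (names are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
H = rng.standard_normal((n, n))
H = (H + H.T) / 2               # possibly indefinite H_k
s = rng.standard_normal(n)      # trial step s_k
lam = 2.0                       # dual variable lambda_k
g = -(H + lam * np.eye(n)) @ s  # KKT stationarity with no active linear constraints

# Identity in the unconstrained case:
#   f_k - q_k(s_k) = -g^T s - (1/2) s^T H s = (1/2) lam ||s||^2 - (1/2) g^T s,
# so g^T s <= 0 immediately gives f_k - q_k(s_k) >= (1/2) lam ||s||^2.
decrease = -g @ s - 0.5 * s @ H @ s
assert abs(decrease - (0.5 * lam * (s @ s) - 0.5 * (g @ s))) < 1e-10
```

With active constraints, the extra term $-\frac{1}{2}\mathbf{s}_k^T\mathbf{A}^T\boldsymbol{\lambda}_k^C \leq 0$ only strengthens the inequality, as in the proof above.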
\begin{lemma}\label{L4.6_ACR}
Suppose Assumption \ref{Assumption 3} holds at iteration $k$. Then there exists a constant $\kappa>0$, independent of $k$, such that
\[f_k - q_k(\mathbf{s}_k) \geq \kappa \lambda_k \|\mathbf{s}_k\|_2^2.\]
\end{lemma}
\begin{proof}
If $\mathbf{g}_k^T \mathbf{s}_k \leq 0$ or $\mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k \geq 0$, the result follows by Lemma \ref{L4.4_ACR}. Thus, we may proceed under the assumption that $\mathbf{g}_k^T\mathbf{s}_k > 0$ and $\mathbf{s}_k^T\mathbf{H}_k\mathbf{s}_k < 0$. Using Assumption~\ref{Assumption 3}, we proceed by induction on $l_k$. If $l_k = 1$, the last condition in Assumption~\ref{Assumption 3} implies that $\mathbf{g}_k^T \mathbf{s}_k \leq 0$, so Lemma~\ref{L4.4_ACR} yields the desired result. Now assume the result holds for $l_k = i<\bar{l}$; we show that it holds for $l_k = i+1$.
By the induction step and Assumption \ref{Assumption 3}, we have
\[f_k - q_k(\mathbf{s}_{k,i}) \geq \bar{\kappa} \lambda_k\|\mathbf{s}_{k,i}\|_2^2,\]
\[q_k(\mathbf{s}_{k,i+1}) \leq q_k(\mathbf{s}_{k,i}),\]
\[\big\langle \mathbf{g}_k + \mathbf{H}_k\mathbf{s}_{k,i+1} , \mathbf{x}_{k,i+1} - \mathbf{x}_{k,i} \big\rangle \leq - \lambda_k\mathbf{s}_{k,i+1}^T(\mathbf{x}_{k,i+1} - \mathbf{x}_{k,i}),\]
\[ \big\langle \mathbf{g}_k + \mathbf{H}_k\mathbf{s}_{k,i} , \mathbf{x}_{k,i+1} - \mathbf{x}_{k,i} \big\rangle \leq - \lambda_k\mathbf{s}_{k,i}^T(\mathbf{x}_{k,i+1} - \mathbf{x}_{k,i}).\]
Then, applying Lemma~\ref{L4.5_ACR} with $\mathbf{x} = \mathbf{x}_{k,i}$ and $\mathbf{x}^{+} = \mathbf{x}_{k,i+1}$, we obtain
\[f_k - q_k(\mathbf{s}_{k,i+1}) \geq \bar{\kappa}^{+} \lambda_k \|\mathbf{s}_{k,i+1}\|_2^2,\]
with $\bar{\kappa}^{+} = \bar{\kappa}/3 \in (0,1)$ independent of $k$. Since $l_k \leq \bar{l}$, iterating this argument at most $\bar{l}$ times yields the result with a constant $\kappa > 0$ independent of $k$.
\end{proof}
Our next result provides a bound on the ratio $\lambda_{k+1} / \|\mathbf{s}_{k+1}\|_2$ when $k \in {\cal C}$.
\begin{lemma}\label{L3.17}
Suppose that Assumption \ref{Assumption 3} holds at iteration $k \in {\cal C}$. Then,
\begin{itemize}
\item If Step~\ref{return-1}, \ref{return-2}, or \ref{return-3} of Algorithm~\ref{alg-contract} is reached, then
\[ \underline{\sigma} \leq \dfrac{\lambda_{k+1}}{\|\mathbf{s}_{k+1}\|_2}\leq \max \Big\{\overline{\sigma},\Big(\dfrac{\gamma_{\lambda}}{\gamma_{C}} \Big)\dfrac{H_{Lip} + 2 \rho}{2 \kappa}\Big\}.\]
\item If Step~\ref{return-4} of Algorithm~\ref{alg-contract} is reached, then
\[ \dfrac{\lambda_{k+1}}{\|\mathbf{s}_{k+1}\|_2}\leq \max \Big\{\overline{\sigma},\Big(\dfrac{\gamma_{\lambda}}{\gamma_{C}} \Big)\dfrac{H_{Lip} + 2 \rho}{2 \kappa}\Big\}.\]
\end{itemize}
\end{lemma}
\begin{proof}
Let $k \in {\cal C}$ and consider the three possible cases. The first two correspond to situations in which the condition in Step~\ref{Cond-lower-bound-no} in the CONTRACT subroutine tests true.
\begin{itemize}
\item Suppose that Step~\ref{return-1} is reached. Then, $\delta_{k+1}= \|\mathbf{s}\|_2$ where $(\lambda, \mathbf{s})$ is computed in Step~\ref{lambda-update-H}. It follows that Step~\ref{Sigma-rho_k} in Algorithm~\ref{alg-LC} will then produce the primal-dual pair $(\mathbf{s}_{k+1}, \lambda_{k+1})=(\mathbf{s}, \lambda)$ with $\lambda >0$. Since the condition in Step~\ref{Cond-upper-bound-yes} tested true, we have
\begin{equation}\label{L3.17-1}
\underline{\sigma} \leq \dfrac{\lambda_k + \underline{\sigma}{\Delta_k}}{\Delta_k} \leq \dfrac{\lambda_{k+1}}{\|\mathbf{s}_{k+1}\|_2} = \dfrac{\lambda}{\|\mathbf{s}\|_2} \leq \overline{\sigma},
\end{equation}
where the second inequality holds since $\|\mathbf{s}_{k+1}\|_2 =\delta_{k+1} \leq \|\mathbf{s}_k\|_2 \leq \Delta_k$.
\item Suppose that Step~\ref{return-2} is reached. Then, $\delta_{k+1} = \|\bar{\mathbf{s}}\|_2 $ where $(\bar\lambda, \bar{\mathbf{s}})$ is computed in Step~\ref{lambda-update-sigma}. As in the previous case, Step~\ref{Sigma-rho_k} in Algorithm~\ref{alg-LC} will produce the primal-dual pair $(\mathbf{s}_{k+1}, \lambda_{k+1})= (\bar{\mathbf{s}}, \bar\lambda)$ with $\bar\lambda = \lambda_k + \underline{\sigma}\Delta_k$. We first show that $\|\bar{\mathbf{s}}\|_2 \geq \underline{\sigma}$. Assume the contrary; then, by Lemma \ref{LXksk} and the fact that ${\cal X}_{k+1} ={\cal X}_k$ for all $k \in {\cal C}$,
\[{\cal X}_k \leq (C_{min} + \max\{\lambda_{max} , \lambda_0\})\|\bar{\mathbf{s}}\|_2 < (C_{min} + \max\{\lambda_{max} , \lambda_0\})\underline{\sigma} = \epsilon,\]
which contradicts our assumption that ${\cal X}_k \geq \epsilon$. Here the last equality uses the definition \eqref{sigma-min-max} of $\underline{\sigma}$. Combined with Lemma \ref{L3.5-3.6}, we obtain
\[\underline{\sigma} \leq \|\bar{\mathbf{s}}\|_2 \leq \|\mathbf{s}_k\|_2 \leq \Delta_k.\]
Therefore,
\begin{equation}\label{L3.17-2}
\underline{\sigma} \leq \dfrac{ \lambda_k + \underline{\sigma}\Delta_k}{\Delta_k} \leq \dfrac{\bar\lambda}{\|\bar\mathbf{s}\|_2} \leq \dfrac{\lambda_k + \underline{\sigma}\Delta_k}{\underline{\sigma}} \leq \dfrac{\underline{\sigma}(\|\mathbf{s}_k\|_2 + \Delta_k)}{\underline{\sigma}} \leq 2\Delta_k \leq \bar{\sigma},
\end{equation}
where the fourth inequality holds by the condition of Step~\ref{Cond-lower-bound-no} and the last inequality holds by Lemmas \ref{L3.5-3.6}, \ref{L3.11} and the definition of $\bar{\sigma}$ in \eqref{sigma-min-max}.\\
The remaining cases correspond to situations in which the condition in Step~\ref{Cond-lower-bound-no} tests false. It follows by Steps~\ref{lambda-update-sigma} and \ref{Cond-lower-bound-no} that
\begin{equation}\label{3.17}
\underline{\sigma} \leq \dfrac{\lambda}{\|\mathbf{s}\|_2}.
\end{equation}
Next, using the argument of Lemma \ref{LBG}, we claim that
\begin{equation}\label{3.17-2}
\lambda_{k+1} \leq \gamma_{\lambda}\Big(\dfrac{H_{Lip} + 2\rho}{2 \kappa}\Big)\|\mathbf{s}_k\|_2.
\end{equation}
To show our claim, we assume the contrary, i.e. $\lambda_{k+1} > \gamma_{\lambda}\Big(\dfrac{H_{Lip} + 2\rho}{2 \kappa}\Big)\|\mathbf{s}_k\|_2.$ Then the condition of the while loop in Step~\ref{while-loop} must have tested true for some $\hat{\mathbf{s}}$ computed by solving $Q_k(\hat{\lambda})$ with $\hat{\lambda} \geq \Big(\dfrac{H_{Lip} + 2\rho}{2 \kappa}\Big)\|\mathbf{s}_k\|_2$.
There exists $\hat{\mathbf{x}}$ on the line segment $[\mathbf{x}_{k} , \mathbf{x}_{k} + \hat{\mathbf{s}}]$ such that
\begin{equation}\label{eq:l-24-1}
q_k(\hat{\mathbf{s}}) - f(\mathbf{x}_k + \hat{\mathbf{s}}) = \dfrac{1}{2}\hat{\mathbf{s}}^T\big(\mathbf{H}_k - \mathbf{H}(\hat{\mathbf{x}})\big)\hat{\mathbf{s}} \geq -\dfrac{1}{2}H_{Lip}\|\hat{\mathbf{s}}\|_2^3.
\end{equation}
Therefore,
\begin{flalign*}
\dfrac{\hat{f} - f(\mathbf{x}_k + \hat{\mathbf{s}})}{\|\hat{\mathbf{s}}\|_2^3} &= \dfrac{\hat{f} - q_k(\hat{\mathbf{s}}) + q_k(\hat{\mathbf{s}}) - f(\mathbf{x}_k + \hat{\mathbf{s}})}{\|\hat{\mathbf{s}}\|_2^3}\\
&\geq \dfrac{\kappa \hat{\lambda}\|\hat{\mathbf{s}}\|_2^2 - 0.5H_{Lip}\|\hat{\mathbf{s}}\|_2^3}{\|\hat{\mathbf{s}}\|_2^3}\\
&\geq \dfrac{\|\mathbf{s}_k\|_2\rho }{\|\hat{\mathbf{s}}\|_2}\\
& \geq \rho,
\end{flalign*}
where the first inequality holds by Lemma \ref{L4.6_ACR} and \eqref{eq:l-24-1}, and the last inequality holds since $\|\mathbf{s}_k\|_2 \geq \|\hat{\mathbf{s}}\|_2$. However, $\rho_{k} = \dfrac{f_k - f(\mathbf{x}_k + \mathbf{s}_k)}{\|\mathbf{s}_k\|_2^3} < \rho$. It follows that $\|\hat{\mathbf{s}}\|_2 \neq \|\mathbf{s}_k\|_2$, which contradicts the condition of the while loop in Step~\ref{while-loop} for $\hat{\mathbf{s}}$ computed by solving $Q_k(\hat{\lambda})$.
\item Suppose that Step~\ref{return-3} is reached. Then, $\delta_{k+1} = \|\mathbf{s}\|_2$. It follows that Step~\ref{Sigma-rho_k} in Algorithm~\ref{alg-LC} will produce the primal-dual pair $(\mathbf{s}_{k+1}, \lambda_{k+1})$ solving $Q_{k+1}$ such that $\mathbf{s}_{k+1} = \mathbf{s}$ and
$\lambda_{k+1} = \lambda$. In conjunction with \eqref{3.17}, \eqref{3.17-2}, and the condition in Step~\ref{Cond-s-large} of the CONTRACT sub-routine, we observe that
\begin{equation}\label{L3.17-3}
\underline{\sigma} \leq \dfrac{\lambda_{k+1}}{\|\mathbf{s}_{k+1}\|_2}= \dfrac{\lambda}{\|\mathbf{s}\|_2} \leq \Big(\dfrac{\gamma_{\lambda}}{\gamma_{C}} \Big)\dfrac{H_{Lip} + 2 \rho}{2 \kappa}.
\end{equation}
\item Suppose that Step~\ref{return-4} is reached. Then, $\delta_{k+1}= \gamma_C\|\mathbf{s}_k\|_2$. It follows that Step~\ref{Sigma-rho_k} in Algorithm~\ref{alg-LC} will produce the primal-dual pair $(\mathbf{s}_{k+1}, \lambda_{k+1}) = (\mathbf{s}, \lambda)$. If $\|\mathbf{s}\|_2 < \delta_{k+1} = \gamma_C\|\mathbf{s}_k\|_2$, then $\lambda_{k+1} = 0$. Otherwise, $\|\mathbf{s}\|_2 = \delta_{k+1} = \gamma_C\|\mathbf{s}_k\|_2$ and $\lambda_{k+1} > 0$. Combined with \eqref{3.17} and \eqref{3.17-2}, we obtain
\[ \dfrac{\lambda_{k+1}}{\|\mathbf{s}_{k+1}\|_2} = \dfrac{\lambda}{\|\mathbf{s}\|_2} \leq \Big(\dfrac{\gamma_{\lambda}}{\gamma_C} \Big)\Big(\dfrac{H_{Lip} + 2\rho}{2 \kappa}\Big).\]
\end{itemize}
The result follows since we have obtained the desired inequalities in all cases.
\end{proof}
We now provide an upper bound for the sequence $\{\sigma_{k}\}$.
\begin{lemma}\label{L3.18}
Assume Assumption \ref{Assumption 3} holds. There exists a scalar constant $\sigma_{max} > 0$ such that
\[\sigma_{k} \leq \sigma_{max} \quad \forall \, \,k \in \mathbb{N}.\]
\end{lemma}
\begin{proof}
First note that by Lemma~\ref{L3.11}, the cardinality of the set ${\cal A}_{\Delta}$ is finite. Hence, there exists $k_{{\cal A}} \in \mathbb{N}$ such that $k \notin {\cal A}_{\Delta}$ for all $k \geq k_{{\cal A}}$. We continue by showing that $\sigma_k$ is bounded above for all $k \geq k_{{\cal A}}$. Consider the following three cases:
\begin{itemize}
\item If $k \in {\cal A}_{\sigma}$, then by definition $\lambda_k \leq \sigma_k \|\mathbf{s}_k\|_2$, which implies by Step~\ref{sigma-acc} of Algorithm~\ref{alg-LC} that
\[ \sigma_{k+1}= \max\{\sigma_k , \lambda_k /\|\mathbf{s}_k\|_2 \} = \sigma_k.\]
\item If $k \in {\cal C}$, by Step~\ref{Sigma-rho_k} of Algorithm~\ref{alg-LC} and Lemma \ref{L3.17}, it follows that
\begin{equation}\label{expL3.18}
\sigma_{k+1} = \max \Big\{ \sigma_k , \dfrac{\lambda_{k+1}}{\|\mathbf{s}_{k+1}\|_2}\Big\} \leq \max \Big\{\sigma_k, \overline{\sigma},\Big(\dfrac{\gamma_{\lambda}}{\gamma_{C}} \Big)\dfrac{H_{Lip} + 2 \rho}{2 \kappa} \Big\}.
\end{equation}
\item If $k \in {\cal E}$, then Step~\ref{sigma-exp} of Algorithm~\ref{alg-LC} implies that $\sigma_{k+1} = \sigma_k$.
\end{itemize}
Combining the results of these three cases, the desired result follows.
\end{proof}
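For concreteness, the three cases above show that $\sigma_{k+1} \leq \max \big\{\sigma_k, \overline{\sigma},\big(\frac{\gamma_{\lambda}}{\gamma_{C}} \big)\frac{H_{Lip} + 2 \rho}{2 \kappa} \big\}$ for all $k \geq k_{{\cal A}}$, so that, by induction, one admissible choice of the constant is
\[
\sigma_{max} \triangleq \max \Big\{\sigma_{k_{{\cal A}}}, \overline{\sigma},\Big(\dfrac{\gamma_{\lambda}}{\gamma_{C}} \Big)\dfrac{H_{Lip} + 2 \rho}{2 \kappa} \Big\}.
\]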
We now establish a lower bound on the norm of the trial steps $\mathbf{s}_k$ when $k \in {\cal A}_{\sigma}$.
\begin{lemma}\label{L3.19}
For all $k \in {\cal A}_{\sigma}$, the accepted step $\mathbf{s}_k$ satisfies
\[\|\mathbf{s}_k\|_2 \geq (H_{Lip} + \sigma_{max})^{-1/2}{\cal X}_{k+1}^{1/2}.\]
\end{lemma}
\begin{proof}
For all $k \in {\cal A}_{\sigma}$, there exists $\bar{\mathbf{x}}_k$ on the line segment $[\mathbf{x}_k , \mathbf{x}_k + \mathbf{s}_k]$ such that
\begin{flalign*}
\mathbf{g}_{k+1}^T\mathbf{d} & = \mathbf{g}_{k+1}^T\mathbf{d} - \mathbf{g}_k^T\mathbf{d} - \mathbf{d}^T(\mathbf{H}_k + \lambda_k\mathbf{I})\mathbf{s}_k - \mathbf{d}^T \mathbf{A}^T \boldsymbol{\lambda}_k^C\\
& = \big(\mathbf{g}(\mathbf{x}_k + \mathbf{s}_k) - \mathbf{g}_k \big)^T\mathbf{d} - \mathbf{d}^T(\mathbf{H}_k + \lambda_k\mathbf{I})\mathbf{s}_k - \mathbf{d}^T\mathbf{A}^T\boldsymbol{\lambda}_k^C\\
& = \mathbf{d}^T(\mathbf{H}(\bar{\mathbf{x}}_k) - \mathbf{H}_k)\mathbf{s}_k - \lambda_k\mathbf{d}^T\mathbf{s}_k - \mathbf{d}^T\mathbf{A}^T\boldsymbol{\lambda}_k^C\\
& \geq -H_{Lip}\|\mathbf{s}_k\|_2^2\|\mathbf{d}\|_2 - \lambda_k\|\mathbf{s}_k\|_2\|\mathbf{d}\|_2 - \mathbf{d}^T\mathbf{A}^T\boldsymbol{\lambda}_k^C\\
& \geq -H_{Lip}\|\mathbf{s}_k\|_2^2 - \sigma_k\|\mathbf{s}_k\|_2^2 - \mathbf{d}^T\mathbf{A}^T\boldsymbol{\lambda}_k^C \quad \mbox{ for all } \mathbf{d} \mbox{ with } \|\mathbf{d}\|_2\leq 1,
\end{flalign*}
where the first equality follows from \eqref{CC1} and the last inequality follows since $\lambda_k \leq \sigma_k \|\mathbf{s}_k\|_2$ for all $k \in {\cal A}_{\sigma}$. Thus
\begin{equation}\label{eq:proofL17}
\min_{\mathbf{d} \, \in \, {\cal D}_{k+1}} \, \mathbf{g}_{k+1}^T\mathbf{d} \geq -H_{Lip}\|\mathbf{s}_k\|_2^2 - \sigma_k\|\mathbf{s}_k\|_2^2 - \max_{\mathbf{d} \, \in \, {\cal D}_{k+1}} \, \mathbf{d}^T\mathbf{A}^T\boldsymbol{\lambda}_k^C,
\end{equation}
where ${\cal D}_{k+1} \triangleq \{ \mathbf{d} \in \mathbb{R}^n \, | \, \|\mathbf{d}\|_2 \leq 1; \, \, \, \mathbf{A}\mathbf{d} \leq \mathbf{b} - \mathbf{A}\mathbf{x}_{k+1}\}$. Note that since $k \in {\cal A}_{\sigma}$, the updated iterate is $\mathbf{x}_{k+1}= \mathbf{x}_k + \mathbf{s}_k$. Now, let ${\cal I}_k \triangleq \{ i \, | \, \mathbf{a}_i^T\mathbf{s}_k = \mathbf{b}_i - \mathbf{a}_i^T\mathbf{x}_k \}$; then
\[\mathbf{A}_{{\cal I}_k} \mathbf{d} \leq \mathbf{b}_{{\cal I}_k} - \mathbf{A}_{{\cal I}_k}\mathbf{x}_{k+1} = \mathbf{b}_{{\cal I}_k} - \mathbf{A}_{{\cal I}_k}\mathbf{x}_{k} - \mathbf{A}_{{\cal I}_k}\mathbf{s}_{k} =\mathbf{0} \Rightarrow (\boldsymbol{\lambda}_k^C)^T\mathbf{A}\mathbf{d} = (\boldsymbol{\lambda}_k^C)_{{\cal I}_k}^T\mathbf{A}_{{\cal I}_k}\mathbf{d} \leq 0,\]
for all $\mathbf{d} \in {\cal D}_{k+1}$.
Substituting in \eqref{eq:proofL17}, we obtain
\[{\cal X}_{k+1} \leq H_{Lip}\|\mathbf{s}_k\|_2^2 + \sigma_k\|\mathbf{s}_k\|_2^2,\]
which along with Lemma \ref{L3.18}, implies the result.
\end{proof}
We are now ready to compute a worst-case upper bound on the number of steps in ${\cal A}_{\sigma}$ for which the first-order criticality measure ${\cal X}_k$ is larger than a prescribed $\epsilon >0$.
\begin{lemma}\label{L3.20}
Assume Assumption \ref{Assumption 3} holds. For a scalar $\epsilon \in (0,\infty)$, the total number of elements in the index set
\[ {\cal K}_{\epsilon}\triangleq \{k \in \mathbb{N}_{+} : k\geq 1; \, (k-1) \in {\cal A}_{\sigma}; \, {\cal X}_k >\epsilon\}\]
is at most
\begin{equation}\label{3.18}
\Bigg\lceil \Bigg( \dfrac{f_0 - f_{min}}{\rho(H_{Lip} + \sigma_{max})^{-3/2}} \Bigg)\epsilon^{-3/2}\Bigg\rceil \triangleq K_{\sigma}(\epsilon) \geq 0.
\end{equation}
\end{lemma}
\begin{proof}
By Lemma \ref{L3.19}, we have for all $k \in {\cal K}_{\epsilon}$ that
\begin{flalign*}
f_{k-1} - f_k &\geq \rho \|\mathbf{s}_{k-1}\|_2^3\geq \rho (H_{Lip} + \sigma_{max})^{-3/2}{\cal X}_k^{3/2} \geq \rho (H_{Lip} + \sigma_{max})^{-3/2}\epsilon^{3/2}.
\end{flalign*}
In addition, we have by Theorem \ref{L3.14} that $|{\cal K}_{\epsilon}| < \infty$. Hence, we have that
\begin{flalign*}
f_0 - f_{min} &\geq \sum_{k \in {\cal K}_{\epsilon}} (f_{k-1}-f_k) \geq |{\cal K}_{\epsilon}|\rho(H_{Lip} + \sigma_{max})^{-3/2}\epsilon^{3/2}.
\end{flalign*}
Rearranging this inequality to yield an upper bound for $|{\cal K}_{\epsilon}|$, we obtain the desired result.
\end{proof}
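Explicitly, the rearrangement reads
\[
|{\cal K}_{\epsilon}| \leq \Bigg( \dfrac{f_0 - f_{min}}{\rho(H_{Lip} + \sigma_{max})^{-3/2}} \Bigg)\epsilon^{-3/2} \leq K_{\sigma}(\epsilon).
\]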
It remains to compute a worst-case upper bound for the number of iterations in ${\cal A}_{\Delta}$ for which ${\cal X}_k$ is larger than a prescribed $\epsilon >0$, and the number of contraction and expansion iterations that may occur between two acceptance steps. We compute these bounds separately in Lemmas \ref{L3.21} and \ref{L3.24}.
\begin{lemma}\label{L3.21}
The cardinality of the set ${\cal A}_{\Delta}$ is upper-bounded by
\begin{equation}\label{3.19}
\Bigg\lceil \dfrac{f_0 - f_{min}}{\rho \Delta_0^3} \Bigg\rceil \triangleq K_{\Delta} \geq 0.
\end{equation}
\end{lemma}
\begin{proof}
For all $k \in {\cal A}_{\Delta}$, it follows by Lemma \ref{L3.5-3.6} that
\[f_k - f_{k+1} \geq \rho \|\mathbf{s}_k\|_2^3 = \rho\Delta_k^3 \geq \rho \Delta_0^3. \]
Hence, we have that
\[f_0 - f_{min} \geq \sum_{k=0}^{\infty} (f_k - f_{k+1}) \geq \sum_{k \in {\cal A}_{\Delta}} (f_k - f_{k+1}) \geq |{\cal A}_{\Delta}| \rho \Delta_0^3,\]
from which the desired result follows.
\end{proof}
So far, we have obtained upper bounds on the number of acceptance iterations. To obtain upper bounds on the number of contraction and expansion iterations that may occur until the next accepted step, let us define, for a given $\hat{k} \in {\cal A}$,
\[k_{{\cal A}}(\hat{k}) \triangleq \min \{k \in {\cal A}: k > \hat{k}\}, \]
\[{\cal I}(\hat{k}) \triangleq \{k \in \mathbb{N}_{+}: \hat{k} <k<k_{{\cal A}}(\hat{k})\}. \]
Using this notation, the following result shows that the number of expansion iterations between the first iteration and the first accepted step, or between consecutive accepted steps, is never greater than one. Moreover, when such an expansion iteration occurs, it must take place immediately after the accepted step.
\begin{lemma}\label{L3.22}
For any $\hat{k} \in \mathbb{N}_{+}$, if $\hat{k} \in {\cal A},$ then ${\cal E} \cap {\cal I}(\hat{k}) \subseteq \{\hat{k}+1\}$.
\end{lemma}
\begin{proof}
By the definition of $k_{\cal A}(\hat{k})$, we have under the conditions of the lemma that ${\cal I}(\hat{k}) \cap {\cal A} = \emptyset $, which means that ${\cal I}(\hat{k}) \subseteq{\cal C} \, \cup \, {\cal E}$. It then follows from Lemma \ref{L3.7} that ${\cal E} \, \cap \, {\cal I}(\hat{k}) \subseteq \{\hat{k}+1\}$, as desired.
\end{proof}
\begin{lemma}\label{L3.23}
For any $k \in \mathbb{N}_{+}$, if $k \in {\cal C}$ and Step~\ref{return-3} in Algorithm~\ref{alg-contract} is reached, then
\[\dfrac{\lambda_{k+1}}{\|\mathbf{s}_{k+1}\|_2} \geq \gamma_{\lambda}\Big( \dfrac{\lambda_k}{\|\mathbf{s}_k\|_2} \Big). \]
\end{lemma}
\begin{proof}
If Step~\ref{return-3} is reached, then $\|\mathbf{s}\|_2 \geq \gamma_C\|\mathbf{s}_k\|_2$. It follows that Step~\ref{Sigma-rho_k} in Algorithm~\ref{alg-LC} will produce the primal-dual pair $(\mathbf{s}_{k+1}, \lambda_{k+1})$ solving $Q_{k+1}$ such that $\|\mathbf{s}_{k+1}\|_2 = \delta_{k+1}< \|\mathbf{s}_k\|_2 =\delta_k $ and $\lambda_{k+1} \geq \gamma_{\lambda}\lambda_k$, i.e.,
\begin{equation}\label{3.20}
\dfrac{\lambda_{k+1}}{\|\mathbf{s}_{k+1}\|_2} \geq \dfrac{\gamma_{\lambda}\lambda_k}{\|\mathbf{s}_k\|_2}.
\end{equation}
\end{proof}
\begin{lemma}\label{L3.24}
Assume Assumptions \ref{Assumption1-LC} and \ref{Assumption 3} hold. For any $\hat{k} \in \mathbb{N}_{+}$, if $\hat{k} \in {\cal A}$, then
\[ |{\cal C} \, \cap \, {\cal I}(\hat{k})| \leq K_{{\cal C}},\]
where
\[
K_\mathcal{C} \triangleq 1+ \left\lceil
\left(2 + \dfrac{1}{\log(\gamma_{\lambda})}\log \left( \dfrac{\sigma_{max}}{\underline{\sigma}}\right)\right)\dfrac{\log\left(\epsilon^{-1}\Delta\left(C_{min} + \max\{\lambda_{max}, \lambda_0\}\right)\right)}{\log(1/\gamma_C)} \right\rceil.
\]
\end{lemma}
\begin{proof}
The result holds trivially if $|{\cal C} \cap {\cal I}(\hat{k})| = 0$. Thus, we may assume $|{\cal C} \cap {\cal I}(\hat{k})| \geq 1$. To proceed with our proof, we first claim that the number of iterations $k \in {\cal C}\cap {\cal I}(\hat{k})$ for which Step~\ref{return-4} of the CONTRACT subroutine is reached, which we denote by $K_{{\cal C},1}$, satisfies
\[K_{{\cal C},1} \leq \dfrac{\log\big(\epsilon^{-1}\Delta\big(C_{min} + \max\{\lambda_{max}, \lambda_0\}\big)\big)}{\log(1/\gamma_C)}.\]
By Lemma \ref{LXksk}, and the assumption that ${\cal X}_{k_{\cal A}(\hat{k})} \geq \epsilon$ (optimality not reached yet),
\[(C_{min} + \max\{\lambda_{max}, \lambda_0\})^{-1}\epsilon \leq \|\mathbf{s}_{k_{\cal A}(\hat{k})-1}\|_2 \leq \delta_{k_{\cal A}(\hat{k})-1} \leq \gamma_C^{K_{{\cal C},1}}\Delta,\]
where the last inequality holds by Lemma~\ref{L3.5-3.6}, Lemma~\ref{L3.4}, and the fact that each time Step~\ref{return-4} is reached the radius of the trust region is multiplied by $\gamma_C$. The proof of the claim follows by rearranging the inequality to yield an upper bound for $K_{{\cal C},1}$.\\
It remains to bound the number of iterations between consecutive elements of ${\cal C}\cap {\cal I}(\hat{k})$ at which Step~\ref{return-4} is reached. For a given $\hat{k} \in {\cal A}$, we define
\[{\cal I}_{\cal C}(\hat{k})\triangleq \{k \in {\cal C}\cap {\cal I}(\hat{k}): \,\, \mbox{Step~\ref{return-4}} \mbox{ is reached in Algorithm~\ref{alg-contract}}\}. \]
Let $k_{{\cal C},1}(\hat{k})$ and $k_{{\cal C},2}(\hat{k})$ be any two consecutive indices in ${\cal I}_{\cal C}(\hat{k})$ with $k_{{\cal C},2}(\hat{k}) - k_{{\cal C},1}(\hat{k}) > 2$. By Lemma~\ref{L3.17}, we have
\[\dfrac{\lambda_{k}}{\|\mathbf{s}_{k}\|_2} \geq \underline{\sigma}, \quad \forall \; k_{{\cal C},1}(\hat{k}) + 2 \leq k \leq k_{{\cal C},2}(\hat{k}),\]
which implies that Step~\ref{return-3} of the CONTRACT subroutine is reached for every $k_{{\cal C},1}(\hat{k}) + 2 < k < k_{{\cal C},2}(\hat{k})$. By Lemmas \ref{L3.18} and \ref{L3.23}, we then get
\[\sigma_{max} \geq \dfrac{\lambda_{k_{{\cal C},2}(\hat{k})}}{\|\mathbf{s}_{k_{{\cal C},2}(\hat{k})}\|_2} \geq \underline{\sigma}\gamma_{\lambda}^{k_{{\cal C},2}(\hat{k}) - k_{{\cal C},1}(\hat{k}) - 2}.\]
Hence,
\[k_{{\cal C},2}(\hat{k}) - k_{{\cal C},1}(\hat{k}) \leq \dfrac{1}{\log(\gamma_{\lambda})}\log \big( \dfrac{\sigma_{max}}{\underline{\sigma}}\big)+2.\]
Now let $k_{{\cal C},last}(\hat{k})$ be the last element of ${\cal I}_{\cal C}(\hat{k})$. Similarly, we can show that
\[k_{{\cal A}}(\hat{k}) - k_{{\cal C},last}(\hat{k}) \leq \dfrac{1}{\log(\gamma_{\lambda})}\log \big( \dfrac{\sigma_{max}}{\underline{\sigma}}\big)+2.\]
The desired result follows since $|{\cal C} \, \cap \, {\cal I}(\hat{k})| \leq 1 + K_{{\cal C},1}\left( \dfrac{1}{\log(\gamma_{\lambda})}\log \big( \dfrac{\sigma_{max}}{\underline{\sigma}}\big)+2\right) $.
\end{proof}
Notice that since $\underline{\sigma}^{-1} = {\cal O}(\epsilon^{-1})$, the number of contraction steps $K_{\cal C}$ is of order ${\cal O}(\log^2 \epsilon^{-1})$. Due to the while loop in Step~\ref{while-loop} of the CONTRACT subroutine, completing a contraction step may require solving more than one sub-problem. Our next result provides an upper bound on the number of sub-problem routine calls required in one contraction step.
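To sketch the order estimate: by the definition \eqref{sigma-min-max}, $\underline{\sigma}$ is proportional to $\epsilon$, so
\[
\log\Big(\dfrac{\sigma_{max}}{\underline{\sigma}}\Big) = {\cal O}(\log \epsilon^{-1}) \quad \mbox{and} \quad \log\big(\epsilon^{-1}\Delta(C_{min} + \max\{\lambda_{max}, \lambda_0\})\big) = {\cal O}(\log \epsilon^{-1}),
\]
and the product of these two factors in the definition of $K_{\cal C}$ is therefore ${\cal O}(\log^2 \epsilon^{-1})$.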
\begin{lemma}\label{K_C^1}
Assume Assumptions \ref{Assumption1-LC} and \ref{Assumption 3} hold. For a scalar $\epsilon \in (0,\infty)$, the total number of sub-problems we are required to solve in a step $k \in {\cal C}$ with ${\cal X}_k \geq \epsilon$, is at most
\[K_{\cal C}^{1} \triangleq \dfrac{1}{\log(\gamma_{\lambda})}\log \Big(\big(C_{min} + \max\{\lambda_{max}, \lambda_0\}\big)\dfrac{H_{max} + g_{Lip} + \rho \Delta}{\underline{\sigma}\epsilon}\Big).\]
\end{lemma}
\begin{proof}
We prove the result by contradiction. Assume the contrary,
\begin{flalign*}
\lambda &\geq \lambda_k\gamma_{\lambda}^{\frac{1}{\log(\gamma_{\lambda})}\log \big(\big(C_{min} + \max\{\lambda_{max}, \lambda_0\}\big) \frac{H_{max} + g_{Lip} + \rho \Delta}{\underline{\sigma}\epsilon}\big)} = \lambda_k\big(C_{min} + \max\{\lambda_{max}, \lambda_0\}\big) \dfrac{H_{max} + g_{Lip} + \rho \Delta}{\underline{\sigma}\epsilon}\\
&\geq \underline{\sigma}\|\mathbf{s}_k\|_2 \big(C_{min} + \max\{\lambda_{max}, \lambda_0\}\big) \dfrac{H_{max} + g_{Lip} + \rho \Delta}{\underline{\sigma}\epsilon}\\
&\geq H_{max} + g_{Lip} + \rho \Delta,
\end{flalign*}
where the last inequality holds by Lemma \ref{LXksk} and the fact that ${\cal X}_k \geq \epsilon$. Hence, by Lemma \ref{L3.8}
\[\dfrac{f_k - f(\mathbf{x}_k + \mathbf{s})}{\|\mathbf{s}\|_2^3} \geq \rho > \dfrac{f_k - f(\mathbf{x}_k + \mathbf{s}_k)}{\|\mathbf{s}_k\|_2^3},\]
where $\mathbf{s}$ is computed by solving $Q_k(\lambda)$ and the strict inequality holds since $k \in {\cal C}$. We conclude that $\|\mathbf{s}_k\|_2 \neq \|\mathbf{s}\|_2$, which contradicts the condition of the while loop. This completes the proof.
\end{proof}
Notice that from the definition of $\underline{\sigma}$ in the algorithm, $K_{\cal C}^1 = {\cal O}(\log \epsilon^{-1})$.
\begin{theorem}\label{L3.25}
Under Assumptions \ref{Assumption1-LC} and \ref{Assumption 3}, for a scalar $\epsilon \in (0,\infty)$, the total number of elements in the index set
\[ \{k \in \mathbb{N}_{+} : {\cal X}_k >\epsilon\}\]
is at most
\begin{equation}\label{3.22}
K(\epsilon) \triangleq 1 + (K_{\sigma}(\epsilon) + K_{\Delta})(1 + K_{{\cal C}}K_{\cal C}^1),
\end{equation}
where $K_{\sigma}(\epsilon)$, $K_{\Delta}$, $K_{\cal C}^{1}$, and $K_{{\cal C}}$ are defined in Lemmas \ref{L3.20}, \ref{L3.21}, \ref{L3.24}, and \ref{K_C^1} respectively. Hence, for $\epsilon_g>0$, the number of sub-problem routines required for First-Order-LC-TRACE to find an $\epsilon_g$-first-order stationary point is at most $K(\epsilon_g) = {\cal O}\Big(\epsilon_g^{-3/2}\log^3 \epsilon_g^{-1}\Big)$.
\end{theorem}
\begin{proof}
Without loss of generality, we may assume that at least one iteration is performed. Lemmas \ref{L3.20} and \ref{L3.21} guarantee that the total number of elements in the index set $\{k \in {\cal A} \, | \, k \geq 1, \, {\cal X}_k > \epsilon\}$ is at most $K_{\sigma}(\epsilon)+K_{\Delta}$. Also, immediately prior to each of the corresponding accepted steps, Lemmas \ref{L3.22}, \ref{L3.24}, and \ref{K_C^1} guarantee that at most $1 + K_{{\cal C}}K_{{\cal C}}^1$ sub-problem routine calls are required in expansion and contraction. Accounting for the first iteration, the desired result follows.
\end{proof}
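For the order estimate in Theorem \ref{L3.25}, one combines the individual bounds:
\[
K(\epsilon_g) = 1 + \big(\underbrace{K_{\sigma}(\epsilon_g) + K_{\Delta}}_{{\cal O}(\epsilon_g^{-3/2})}\big)\big(1 + \underbrace{K_{{\cal C}}}_{{\cal O}(\log^{2} \epsilon_g^{-1})}\underbrace{K_{{\cal C}}^{1}}_{{\cal O}(\log \epsilon_g^{-1})}\big) = {\cal O}\big(\epsilon_g^{-3/2}\log^{3}\epsilon_g^{-1}\big).
\]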
\subsection{Proof of Theorem~\ref{epsilon-second-order-convergence-complexity}}\label{epsilon-second-order-convergence-complexity-app}
\begin{proof}
Let us first define
\[ {\cal K}_{\epsilon}\triangleq \{k \in \mathbb{N}_{+} : k\geq 1; \, (k-1) \in {\cal A}_{\sigma}; \, {\cal X}_k >\epsilon\}\]
and
\[{\cal V} \triangleq \{k \, | \, \mbox{Step~\ref{alg-first-order-reached} in Algorithm~\ref{alg-GLC-cap} is reached at iteration } k\}.\]
To proceed with our proof, we first show that if $k \in {\cal V} \, \cup \, {\cal K}_{\epsilon}$, then the following reduction bound on the objective value holds:
\begin{equation}
\label{eq:Reduction}
f_{k-1} - f_{k+1} \geq \min\left\{\dfrac{\rho {\cal X}_k^{3/2} }{(H_{Lip} + \sigma_{max})^{3/2}}, \dfrac{2\psi_k^3}{3\tilde{H}^2}\right\}.
\end{equation}
If $k \in {\cal V}$, then using the second-order descent lemma, we obtain
\begin{flalign}
f_{k+1} &\leq f_k + \langle \mathbf{g}_k, \mathbf{x}_{k+1}- \mathbf{x}_k \rangle + \dfrac{1}{2}(\mathbf{x}_{k+1}- \mathbf{x}_k)^T \mathbf{H}_k (\mathbf{x}_{k+1}- \mathbf{x}_k) + \dfrac{H_{Lip}}{6}\|\mathbf{x}_{k+1}- \mathbf{x}_k\|_2^3 \nonumber\\
&\leq f_k + \langle \mathbf{g}_k, \mathbf{x}_{k+1}- \mathbf{x}_k \rangle + \dfrac{1}{2}(\mathbf{x}_{k+1}- \mathbf{x}_k)^T\mathbf{H}_k (\mathbf{x}_{k+1}- \mathbf{x}_k) + \dfrac{\tilde{H}}{6}\|\mathbf{x}_{k+1}- \mathbf{x}_k\|_2^3 \nonumber \\
&= f_k + \dfrac{2\psi_k}{\tilde{H}}\langle \mathbf{g}_k, \widehat{\mathbf{d}}_k \rangle + \dfrac{2\psi_k^2}{\tilde{H}^2}(\widehat{\mathbf{d}}_k)^T \mathbf{H}_k (\widehat{\mathbf{d}}_k) + \dfrac{4\psi_k^3}{3\tilde{H}^2}\|\widehat{\mathbf{d}}_k\|_2^3 \nonumber\\
&\leq f_k - \dfrac{2\psi_k^3}{\tilde{H}^2} + \dfrac{4\psi_k^3}{3\tilde{H}^2} \nonumber\\
&= f_k - \dfrac{2\psi_k^3}{3\tilde{H}^2}.\label{reduction-d-LC}
\end{flalign}
Now if a first-order stationary point was reached at $k-1$, i.e. $k-1 \in {\cal V}$, it directly follows from~\eqref{reduction-d-LC} that
\begin{equation}\label{reduction-second-order-1}
f_{k-1} - f_{k+1} \geq f_k - f_{k+1} \geq \dfrac{2\psi_k^3}{3\tilde{H}^2}, \quad \forall \,\, k \in {\cal V}, \, k-1 \in {\cal V}.
\end{equation}
Otherwise, Step~\ref{alg-first-order-not-reached} of Algorithm~\ref{alg-GLC-cap} was reached at iteration $k-1$ and First-Order-LC-TRACE was called. The latter algorithm is by definition monotone, i.e., it generates a sequence of iterates for which the corresponding sequence of objective values is nonincreasing. This property combined with~\eqref{reduction-d-LC} implies that
\begin{equation}\label{reduction-second-order-2}
f_{k-1} - f_{k+1} \geq f_k - f_{k+1} \geq \dfrac{2\psi_k^3}{3\tilde{H}^2}, \quad \forall \,\, k \in {\cal V}, \, \mbox{First-Order-LC-TRACE called at } k-1.
\end{equation}
Combining~\eqref{reduction-second-order-1} and \eqref{reduction-second-order-2}, we get
\begin{equation}\label{reduction-second-order}
f_{k-1} - f_{k+1} \geq f_k - f_{k+1} \geq \dfrac{2\psi_k^3}{3\tilde{H}^2}, \quad \forall \,\, k \in {\cal V}.
\end{equation}
We next show a lower bound on the reduction of the objective value if $k \in {\cal K}_{\epsilon}$. As in the proof of Lemma~\ref{L3.20}, we have
\begin{equation}\label{reduction-A-sigma}
f_{k-1} - f_{k+1} \geq f_{k-1} - f_{k} \geq \dfrac{\rho {\cal X}_k^{3/2} }{(H_{Lip} + \sigma_{max})^{3/2}},
\end{equation}
where the first inequality again holds by the monotonicity of First-Order-LC-TRACE and~\eqref{reduction-d-LC}. Combining~\eqref{reduction-second-order} and \eqref{reduction-A-sigma}, we get
\[
f_{k-1} - f_{k+1} \geq \min\left\{\dfrac{\rho {\cal X}_k^{3/2} }{(H_{Lip} + \sigma_{max})^{3/2}}, \dfrac{2\psi_k^3}{3\tilde{H}^2}\right\}, \quad \forall k \in \mathcal{K}_{\epsilon} \cup \mathcal{V}.
\]
By summing over the iterations, we get
\begin{flalign*}
2(f_0 - f_{min}) &\geq \sum_{k \in {\cal K}_{\epsilon} \cup {\cal V}, k \geq 1} (f_{k-1}-f_{k+1}) \geq |{\cal K}_{\epsilon} \cup {\cal V}|\min\left\{\dfrac{\rho {\cal X}_k^{3/2} }{(H_{Lip} + \sigma_{max})^{3/2}}, \dfrac{2\psi_k^3}{3\tilde{H}^2}\right\}.
\end{flalign*}
Rearranging this inequality yields
\begin{equation}\label{reduction-1}
|{\cal K}_{\epsilon} \cup {\cal V}| \leq \dfrac{2(f_0 - f_{min})}{\min\left\{\dfrac{\rho {\cal X}_k^{3/2} }{(H_{Lip} + \sigma_{max})^{3/2}}, \dfrac{2\psi_k^3}{3\tilde{H}^2}\right\}}.
\end{equation}
Let ${\cal H}(\epsilon_g, \epsilon_H) \triangleq \{k \, \, | \, \, {\cal X}_k >\epsilon_g,\,\, \psi_k >\epsilon_H\}.$ Using Lemmas \ref{L3.22}, \ref{L3.24} and \ref{K_C^1}, the number of sub-problem routine calls required in expansion and contraction between two acceptance steps is upper bounded by $1 + K_{{\cal C}}K_{\cal C}^1$. Hence,
\[|{\cal H}(\epsilon_g, \epsilon_H)| \leq (|{\cal K}_{\epsilon} \cup {\cal V}| + K_{\Delta})(1 + K_{{\cal C}}K_{\cal C}^1),\]
which concludes our proof.
\end{proof}
\end{document}
\begin{document}
\title{An energy model for harmonic functions with junctions} \author{D. De Silva} \address{Department of Mathematics, Barnard College, Columbia University, New York, NY 10027} \email{\tt desilva@math.columbia.edu} \author{O. Savin} \address{Department of Mathematics, Columbia University, New York, NY 10027}\email{\tt savin@math.columbia.edu} \begin{abstract}We consider an energy model for harmonic graphs with junctions and study the regularity properties of minimizers and their free boundaries. \end{abstract}
\maketitle \section{Introduction}
In this paper we propose an energy model for $N$ harmonic graphs with junctions and study the regularity properties of the minimizers and their free boundaries.
The physical motivation is the following. Let $\Omega$ be a bounded domain in $\mathbb R^n$ and consider $N \ge 2$ elastic membranes which are represented by the graphs of $N$ real-valued functions $$\{(x,u_i(x))| \quad x \in \Omega\} \quad \subset \mathbb R^{n+1}, \quad \quad i=1,\ldots,N,$$ which are in contact with one another. In the most simplified form, the elastic membranes are modeled by harmonic graphs. The non-penetration condition implies the functions are ordered in the vertical direction, so we assume that $$ u_1 \ge u_2 \ge \ldots \ge u_N.$$ Suppose all membranes coincide initially, say $u_1=u_2=\ldots =u_N=0$, and then we move them continuously by pulling apart their boundary data $t \varphi_i$ on $\partial \Omega$ with $t \ge 0$, and $ \varphi_1>\varphi_2> \ldots > \varphi_N$. We consider the physical situation in which the membranes do not separate strictly in the whole cylinder $\Omega \times \mathbb R$ instantaneously for all small $t>0$, but rather in a continuous way. In other words, for small $t$, the strict separation happens only near the boundary, while the functions still coincide in the interior of $\Omega$. This means the membranes {\it stick} to each other and we need to spend energy to physically separate them (say, for example, if the membranes are wet, or the elastic material has some adhesive properties).
\subsection{The model} In view of the discussion above, a natural energy functional associated to a system of adhesive elastic membranes is given by \begin{equation}\label{JI}
J_N(U, \Omega): = \int_\Omega (|\nabla U|^2 + W(U)) dx, \quad U:=(u_1,\ldots,u_N), \quad N\geq 2, \end{equation}
with \begin{equation}\label{WI}|\nabla U|^2:= \sum_{i=1}^N |\nabla u_i|^2, \quad W(U)= \# \{ u_1,\ldots,u_N\}, \end{equation} and $J_N$ is defined over the class of admissible vector-valued functions
\begin{equation}\label{orderI} \mathcal A:= \left \{ U \in H^1(\Omega) | \quad u_1 \geq u_2 \geq \ldots \geq u_{N} \right \}.\end{equation} Here $\Omega$ is a bounded domain in $\mathbb R^n$ with Lipschitz boundary, and the potential term $W(U)$ represents the cardinality of the set $\{ u_1,\ldots,u_N\}$ (that is the number of distinct elements in the set). Clearly $W(U)$ is minimized when all $u_i$'s coincide. On the other hand, the Dirichlet integral is minimized when each $u_i$ is harmonic, and by the maximum principle they belong to the admissible cone $\mathcal A$ provided the boundary data is in $\mathcal A$ as well. The presence of the potential term has the effect of collapsing some of these graphs that are close to each other in a certain region, and we expect an optimal configuration consisting of piecewise harmonic graphs with junctions.
The functional $J_N$ is lower semicontinuous and the existence of minimizers with given boundary data $\Phi \in \mathcal A$ on $\partial \Omega$ follows easily from the direct method of the calculus of variations. We remark that $J_N$ is not convex, and the uniqueness of minimizers can fail.
One of the interesting questions about minimizers $U$ of \eqref{JI} regards the geometry of the graphs near junctions where the membranes separate. Precisely, there are possibly $N-1$ free boundaries originating from this minimization problem, which are denoted by $$\Gamma_i:= \partial \{u_i>u_{i+1}\} \cap \Omega, \quad i=1,\ldots, N-1.$$ The sets $\{u_i>u_{i+1}\}$ have no a priori constraints with respect to one another, which means the $\Gamma_i$'s can intersect and cross each other and possibly have complicated geometries. The functions $u_i$ are piecewise harmonic, that is, they are harmonic in the interior of the regions carved out by the collection of the free boundaries. The problem is interesting even in dimension $n=1$, in which the $\Gamma_i$'s consist of points and the $u_i$'s are piecewise linear, and a certain balancing condition needs to hold at the junction points. The physical situation can be described as the equilibrium configurations of $N$ tapes that stick to one another, see Figure \ref{fig0}.
\begin{figure}
\caption{A minimizer for $N=4$, $n=1$.}
\label{fig0}
\end{figure}
A first observation about minimizers $U$ of \eqref{JI} is that the average of the $u_i$'s is harmonic in $\Omega$, see Lemma \ref{harmonic}. Moreover, minimizers remain invariant under the operation of adding the same harmonic function to each component of $U$. In view of this, one can reduce the problem to the $0$ average situation $$ \sum u_i=0 \quad \mbox{in $\Omega$},$$ which means that we deal with a system involving only $N-1$ unknowns. Then, the case $N=2$ corresponds to the scalar minimization problem
$$ \min_{u_1 \ge 0} \int_{\Omega} \left(2 |\nabla u_1|^2 + \chi_{\{u_1>0\}} \right)dx,$$ which is the classical one-phase free boundary problem introduced by Alt and Caffarelli in \cite{AC}. Here $\chi_E$ denotes the characteristic function of a set $E$.
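This reduction can be verified directly: in the zero-average case $u_2 = -u_1$ with $u_1 \geq 0$, one computes
\[
|\nabla U|^2 = |\nabla u_1|^2 + |\nabla(-u_1)|^2 = 2|\nabla u_1|^2, \qquad \chi_{\{u_1 > u_2\}} = \chi_{\{2u_1 > 0\}} = \chi_{\{u_1 > 0\}},
\]
which recovers the one-phase functional above after subtracting the constant $1$ from $W$.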
The one-phase problem appears in cavitation flows in fluid dynamics, flame propagation, etc., and it has been extensively studied over the past four decades. We refer to the books of Caffarelli and Salsa \cite{CS} and Velichkov \cite{V} for the mathematical treatment of this problem. Concerning the regularity of the free boundary $\Gamma_1$, it is an analytic hypersurface outside a closed singular set of dimension at most $n-5$ (see \cite{JS}), and there are examples of free boundaries with point singularities in dimension $n=7$ (see \cite{DJ}).
For $N\ge 3$ we subtract 1 from $W$, which does not affect the problem, and rewrite the potential term in the form $$ W(U)= \sum_{i=1}^{N-1} \chi_{\{u_i>u_{i+1} \}}.$$ In this way, the model we propose can be viewed as a system of $N-1$ coupled one-phase free boundary problems that interact in the vertical direction.
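This rewriting rests on the elementary identity, valid for ordered components $u_1 \geq u_2 \geq \ldots \geq u_N$,
\[
\#\{u_1, \ldots, u_N\} = 1 + \sum_{i=1}^{N-1}\chi_{\{u_i > u_{i+1}\}},
\]
since the number of distinct values among ordered reals exceeds by one the number of strict consecutive inequalities.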
\subsection{Some related works} There are several works in the literature that involve free boundaries and energy functionals for vector-valued functions similar to the one we propose in \eqref{JI}. For example Mazzoleni, Terracini, Velichkov \cite{MTV1,MTV2}, Caffarelli, Shahgholian, Yeressian \cite{CSY}, and De Silva, Tortone \cite{DT} considered the vectorial one-phase problem with $W(U)=\chi_{\{|U|>0\}}$ and unconstrained $U$, which is relevant in the study of cooperative systems of species or in optimization problems for spectral functions, see also \cite{KL1,KL2}. On the other hand, in \cite{ASUW} the authors studied a vectorial version of the obstacle problem by taking $W(U)=|U|$. In other cases motivated by strongly competitive systems or optimal partition problems, the components $u_i \ge 0$ have disjoint supports which means that the vector $U$ is restricted to the union of the nonnegative coordinate axes. For this situation we refer to \cite{CL} where Caffarelli and Lin investigated harmonic maps onto such singular spaces, see also \cite{TT}.
A minimization problem closely related to our model, which involves the constraint $U \in \mathcal A$ as in \eqref{orderI} and the potential $W(U)=F \cdot U$, was introduced by Chipot and Vergara-Caffarelli in \cite{CV}. It describes the equilibrium configuration of $N$ elastic membranes interacting with each other under the action of external forces $F$. More recently, the second author and H. Yu established the optimal regularity for this problem and studied the free boundary regularity in a series of papers \cite{SY1, SY2,SY3}. While these results motivated the current work, the two models have quite different qualitative properties, as can be seen from simple one-dimensional examples as well as from the theorems in the next section.
Some of our results that involve the junction of $N=3$ membranes resemble the soap-film-like configurations for minimal surfaces in two and three dimensions that were first studied by J. Taylor in \cite{T}. A crucial difference with respect to soap films is that our energy model counts a common surface where more than two graphs coincide according to its multiplicity, and the collapsing phenomenon is due only to the presence of the potential energy $W$.
\subsection{Main results}
In this section we state our main results. We recall that \begin{equation}\label{J}
J_N(U, \Omega): = \int_\Omega (|\nabla U|^2 + W(U)) dx, \quad U=(u_1,\ldots,u_N), \quad N\geq 2, \end{equation}
with \begin{equation}\label{W}|\nabla U|^2 = \sum_{i=1}^N |\nabla u_i|^2, \quad W(U):= \sum_{i=1}^{N-1} \chi_{\{u_i>u_{i+1} \}}, \end{equation} is defined over the class of admissible vector-valued functions
\begin{equation}\label{order} \mathcal A:= \left \{ U \in H^1(\Omega) | \quad u_1 \geq u_2 \geq \ldots \geq u_{N} \right \}.\end{equation}
Positive constants depending only on $N$ and the ambient dimension $n$ are called universal. The first result gives the optimal regularity of minimizers. \begin{thm}[Optimal regularity]\label{nonCI} Let $U$ minimize $J_N$ in $B_1$. Then $U \in C^{0,1}(B_{1/2})$ and
\begin{equation*}\|U\|_{C^{0,1}(B_{1/2})} \le C(1+ \|U\|_{L^2(B_1)}), \end{equation*} for $C>0$ universal. \end{thm}
We study the regularity of the $N-1$ free boundaries $$\Gamma_i:= \partial \{u_i > u_{i+1} \}, \quad \quad i\in \{1,\ldots,N-1\},$$ by performing a blow-up analysis at junction points. After rescaling, we may assume that we are in the $0$-average situation, and the origin is a junction point where all membranes coincide \begin{equation}\label{Res}
\sum u_i =0, \quad \quad 0 \in \partial \{|U|>0\}. \end{equation} By employing the Weiss monotonicity formula (Proposition \ref{WMF}) and a non-degeneracy property of minimizers, we deduce that blow-ups are homogeneous of degree one. \begin{thm} \label{Bup} Let $U$ minimize $J_N$ in $B_1$, and assume \eqref{Res} holds. Then
$$ \max_{B_r} |U| \ge c r,$$ and there exists a sequence of $r_j \to 0$ such that $$ U_{r_j}(x):= \frac{U(r_j x)}{r_j} $$
converges uniformly on compact sets of $\mathbb R^n$ to a cone $\bar U$, i.e., a minimizer which is homogeneous of degree one, with $0 \in \partial \{|\bar U| >0\}$. \end{thm}
The minimizing cones in dimension $n=1$ must coincide on one side of $0$ and have two branches on the other side. Precisely, for each $k \le N-1$, we define $U_{0,k}$ to be the vector whose components are given by $$ u_i=\left(\frac 1k - \frac 1N \right)^{1/2} x^+ \quad \quad \mbox{for $i \le k$}, \quad u_i=-\left(\frac {1}{N-k} - \frac 1N \right)^{1/2} x^+ \quad \quad \mbox{for $i > k$} .$$ \begin{prop}[1d cones]\label{P1dI} The only minimizing cones in dimension $n=1$ that satisfy \eqref{Res} are given by $U_{0,k}$ (up to a reflection with respect to $0$). \end{prop}
The classification of 1d cones combined with an $\epsilon$-regularity theorem and a dimension reduction argument gives the following general partial regularity result.
\begin{thm}[Partial regularity]\label{prI} Let $U$ be a minimizer in $B_1$. The free boundaries $\Gamma_i$ are analytic and disjoint from one another outside a closed set $\Sigma$ of singular points of Hausdorff dimension $n-2$, and $$\mathcal H^{n-1}(\Gamma_i \cap B_{1/2}) \le C,$$ with $C$ universal. \end{thm}
The remaining results focus on the intersection points of two distinct free boundaries which, by Theorem \ref{prI}, form a set of dimension at most $n-2$. For this part we restrict to the simplest case, that of $N=3$ membranes.
In dimension $n=2$ we define $V_0$ as the cone whose components are given by $$u_1(x):= \frac{1}{\sqrt 6}\max \left \{ x \cdot e_{ \frac \pi 6}, \, 2 x \cdot e_{- \frac \pi 6}, \, 0 \right \},\quad \quad e_\theta:=(\cos \theta, \sin \theta),$$ $$ u_3(x):=-u_1(x_1,-x_2), \quad \quad u_2 :=- u_1-u_3.$$ \begin{thm}\label{m2d} Let $N=3$. Then, up to rotations and reflections, $V_0$ is the unique minimizing cone in dimension $n=2$ which satisfies $0 \in \Gamma_1 \cap \Gamma_2$ and \eqref{Res}.
\end{thm} The proof of Theorem \ref{m2d} is indirect, through an elimination process. It turns out that there are two possible cone candidates, $V_0$ and $V_s$, which are critical for the energy $J_N$. We show, however, that $V_s$ is not minimizing, and also that the set of cones satisfying $0 \in \Gamma_1 \cap \Gamma_2$ and \eqref{Res} is nonempty; see Section \ref{Sec7}.
The next result gives the regularity of the free boundaries in two dimensions near an intersection point, see Figure \ref{fig1}.
\begin{thm}\label{T2dI} Let $N=3$ and let $U$ be a minimizer of $J_N$ in $\Omega \subset \mathbb R^2$. Then $\Gamma_1$ and $\Gamma_2$ are piecewise $C^{1,\alpha}$ curves in a neighborhood of any intersection point $x_0 \in \Gamma_1 \cap \Gamma_2$, with $\alpha \sim 0.36$. \end{thm}
\begin{figure}
\caption{ Free boundary geometry at an intersection point in $\mathbb R^2$}
\label{fig1}
\end{figure}
Theorem \ref{T2dI} can be extended to arbitrary dimensions. For this we define regular intersection points $$x_0 \in Reg(\Gamma_1 \cap \Gamma_2),$$ as those points for which there exists a blow-up cone at $x_0$ which is a rotation of a two-dimensional cone extended trivially in the remaining variables.
\begin{thm}\label{TndI} $Reg(\Gamma_1 \cap \Gamma_2)$ is locally a $C^{1,\alpha}$-smooth submanifold of codimension two. Near such an intersection point, each of the free boundaries $\Gamma_1$ and $\Gamma_2$ consists of two piecewise $C^{1,\alpha}$ hypersurfaces which intersect on $Reg(\Gamma_1 \cap \Gamma_2)$ at a $120^ \circ$ angle. \end{thm}
As a consequence we obtain a general partial regularity result for the free boundary. We define $Reg(\Gamma_i)$ as the collection of points $x_0 \in \Gamma_i$ that have a blow-up profile which is a rotation of a one-dimensional cone extended trivially in the remaining variables.
\begin{thm}[Partial regularity]\label{TPRI} Let $N=3$ and $U$ be a minimizer of $J_N$ in $B_1$. Then $$ \Gamma_1 \cup \Gamma_2 = Reg(\Gamma_1) \cup Reg(\Gamma_2)\cup Reg (\Gamma_1 \cap \Gamma_2) \cup \Sigma',$$ with $Reg(\Gamma_i)$ locally an analytic hypersurface, $Reg (\Gamma_1 \cap \Gamma_2)$ locally a $C^{1,\alpha}$ submanifold of codimension two, and $\Sigma'$ a closed singular set of Hausdorff dimension $n-3$. \end{thm}
The study of regular intersection points leads to a transmission-type problem appearing in the linearization, which is new and interesting in its own right, see \eqref{a1}-\eqref{a4} in Section \ref{Sec8}. The value of $\alpha$ in the theorems above is dictated by the spectrum of the linearized problem. The novelty is that the transmission condition does not occur along a hypersurface but involves a region of full dimension where two functions interact. We refer the reader to Sections \ref{Sec8} and \ref{Sec9} for more details.
The paper is organized as follows. In Section \ref{Sec3} we collect some general facts about minimizers and prove the optimal regularity result Theorem \ref{nonCI}. In Section \ref{Sec4} we obtain the Weiss monotonicity formula and prove Theorem \ref{Bup}. We classify one-dimensional cones in Section \ref{Sec5} and then establish Theorem \ref{prI} in Section \ref{Sec6}. The last three sections are devoted to the study of two dimensional cones and regular intersection points for $N=3$ membranes.
\section{Lipschitz continuity and non-degeneracy of minimizers} \label{Sec3}
\subsection{Preliminaries.} Recall that, throughout this note, constants depending only on $n$ and $N$ are called universal. Also, whenever this does not create confusion, the dependence of $J_N$ on the domain is omitted.
We start by proving some basic facts about minimizers.
\begin{lem}[Lower semicontinuity] If $U_m \to U$ in $L^2(\Omega)$, then $$\liminf J_N(U_m,\Omega) \ge J_N(U,\Omega).$$ \end{lem}
\begin{proof}The Dirichlet energy is lower semicontinuous under $L^2$ convergence. Since $W: \mathbb R^N \to \mathbb R$ is lower semicontinuous, it follows (by Fatou's lemma, after passing to an a.e. convergent subsequence) that the potential term in $J_N$ is lower semicontinuous as well.
\end{proof} As a consequence we obtain the existence of minimizers with boundary data in $H^1(\Omega)$.
\begin{prop} Given $\Phi \in \mathcal A$, there exists a minimizer $U$ of $J_N$ in $\Omega$ with boundary data $\Phi$ on $\partial \Omega$.
\end{prop}
Next we show that the sum of the $u_i$'s is harmonic.
\begin{lem}\label{harmonic} If $U$ is a minimizer of $J_N$ in $\Omega$, then $\sum_{i=1}^N u_i$ is harmonic in $\Omega.$ \end{lem} \begin{proof} Let $\psi \in C_0^1(\Omega)$, and $\Psi=(\psi, \ldots, \psi)$. Then $$W(U)= W(U+\epsilon \Psi)$$ and
$$\frac{J_N(U+\epsilon \Psi) - J_N(U)}{\epsilon} = \frac{1}{\epsilon}\int_{\Omega} ( |\nabla (U +\epsilon \Psi)|^2 - |\nabla U|^2) dx$$ $$= 2 \int_{\Omega} \nabla \Big(\sum_{i=1}^N u_i\Big) \cdot \nabla \psi \, dx + O(\epsilon),$$ from which our claim follows by letting $\epsilon \to 0^{\pm}$.\end{proof}
Similarly, the problem remains invariant with respect to the addition of a harmonic function $\psi$ to each component, that is, $$J_N(U + \psi(1,\ldots,1))= J_N(U) + C(\Psi,\Phi), \quad \Psi=\psi(1,\ldots, 1),$$ where, expanding the square and integrating the cross term by parts (using that $\psi$ is harmonic, $U=\Phi$ on $\partial\Omega$, and $W(U+\Psi)=W(U)$), $$C(\Psi,\Phi)=\int_\Omega |\nabla \Psi|^2 dx + 2\int_{\partial \Omega} \Phi \cdot \Psi_\nu d \sigma$$ depends only on $\psi$ and the boundary data $\Phi$ of $U$. Therefore, at times we may assume that $U$ satisfies \begin{equation}\label{zero} \sum_{i=1}^N u_i=0. \end{equation}
\begin{lem}\label{sub-super}Let $U$ minimize $J_N$ in $\Omega$. Then $u_1$ is subharmonic and $u_N$ is superharmonic in $\Omega$. \end{lem} \begin{proof} We prove the first claim, the second one can be proved similarly. Let $v_1$ be the harmonic replacement of $u_1$ in $B_r \subset \Omega.$ We wish to prove that $u_1 \leq v_1$ in $B_r$. Indeed, let $$\tilde u_k = \min\{u_k, v_1\}, \quad \tilde U = (\tilde u_1,\ldots, \tilde u_N).$$ Clearly, $$W(U) \geq W(\tilde U),$$ hence by minimality
\begin{equation}\label{smaller}\int_{B_r} |\nabla U|^2 dx \leq \int_{B_r} |\nabla \tilde U|^2 dx.\end{equation} On the other hand, since $$v_1 = u_1 \geq u_k, \quad \forall k=1, \ldots N, \quad \text{on $\partial B_r$},$$ we have $$w_k:=(u_k-v_1)^+ \in H_0^1(B_r).$$ After integration by parts and using that $v_1$ is harmonic, we get
$$\int_{B_r} (|\nabla u_k|^2 - |\nabla \tilde u_k|^2) dx = \int_{B_r}( |\nabla (v_1+w_k)|^2 - |\nabla v_1|^2)dx \geq 0,$$ with strict inequality unless $w_k \equiv 0$. Thus, in view of \eqref{smaller}, $w_k \equiv 0$ in $B_r$ for all $k$'s, which gives the desired claim that $u_1 \leq v_1$ in $B_r.$
Since this argument can be repeated for any ball included in $\Omega$, we conclude that $u_1$ is subharmonic. \end{proof}
Next, we prove the following obvious yet useful fact.
\begin{lem}\label{induction} Let $U$ minimize $J_N$ in $\Omega$ and assume that $$u_{k} > u_{k+1} \quad \text{in $\Omega$, for some $k \in \{1, \ldots, N-1\}$}.$$ Then, $\underline U := (u_1,\ldots, u_{k})$ minimizes $J_k$ in $\Omega$ and $\bar U := (u_{k+1}, \ldots, u_{N})$ minimizes $J_{N-k}$. \end{lem} \begin{proof}The proof is straightforward. Assume by contradiction that $\underline{V}$ minimizes $J_k$ in $\Omega' \subset \subset \Omega$ with boundary data $\underline U$ and \begin{equation}\label{bar}J_k(\underline V) < J_k(\underline U).\end{equation} Let $\bar V$ minimize $J_{N-k}$ in $\Omega'$ with boundary data $\bar U$, and set $$V= (\underline V, \bar V).$$ Since, by Lemma \ref{sub-super}, $\underline{v}_k$ is superharmonic while $\bar v_{k+1}$ is subharmonic, and one is strictly on top of the other on the boundary, we conclude that $\underline v_k \geq \bar v_{k+1}$ in $\Omega'$, hence $V$ is an admissible competitor. Furthermore, in view of \eqref{bar} and our assumption,
$$J_N(V) \leq J_k(\underline V)+ J_{N-k}(\bar V)+ |\Omega'| < J_k(\underline U)+ J_{N-k}(\bar U)+ |\Omega'|= J_N(U)$$ and we reach a contradiction. \end{proof}
\subsection{Lipschitz Continuity.} We now turn to the proof of Lipschitz continuity. We first establish that minimizers are H\"older continuous, by treating the potential term $W(U)$ as a perturbation. The proof is standard, but we provide the details for completeness.
After multiplying $U$ by $\delta^{1/2}$, we may assume that $U$ minimizes
$$J_N^\delta(U):=\int_\Omega (|\nabla U|^2 + \delta W(U)) dx,$$ with $\delta>0$ small.
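For the reader's convenience, we record why this normalization is harmless: since $W$ depends only on the signs of the differences $u_i-u_{i+1}$, it is invariant under multiplication by positive constants, so if $U$ minimizes $J_N$ then $\tilde U := \delta^{1/2} U$ satisfies
$$ J_N^\delta(\tilde U)=\int_\Omega \left(|\nabla \tilde U|^2 + \delta\, W(\tilde U)\right) dx = \delta \int_\Omega \left(|\nabla U|^2 + W(U)\right) dx = \delta\, J_N(U),$$
hence $\tilde U$ minimizes $J_N^\delta$ among its own competitors.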
Without loss of generality, we assume that we deal with minimizers in $B_1$. H\"older continuity follows with standard arguments from the next proposition.
From now on, in the body of the proofs $c,C>0$ are universal constants possibly changing from step to step.
\begin{prop}\label{prop1} Let $U$ be a minimizer of $J_N^\delta$ in $B_1$. For every $\alpha \in (0,1),$ there exist $\delta_0, \rho>0$ depending on $\alpha$, such that, if $\delta \leq \delta_0$ and \begin{equation}\label{ass1}
\fint_{B_1} |\nabla U|^2 dx \leq 1, \end{equation} then \begin{equation}
\rho^{2-2\alpha}\fint_{B_{\rho}} |\nabla U|^2 dx \leq 1. \end{equation} \end{prop} \begin{proof} Let $V$ be the harmonic replacement of $U$ in $B_1$. Then, by the maximum principle, $v_i \geq v_{i+1}$ for $i=1,\ldots, N-1$, and $V$ is an admissible competitor for the minimization of $J_N^\delta$. Thus, \begin{align}\label{new}
\nonumber \int_{B_1} |\nabla (U-V)|^2 dx & \leq J_N^\delta(U) + \int_{B_1} (|\nabla V|^2 - 2 \nabla U \cdot \nabla V) dx\\
\nonumber & = J_N^\delta(U) - \int_{B_1} |\nabla V|^2 dx \\
& \leq J_N^\delta(V) - \int_{B_1} |\nabla V|^2 dx \\ \nonumber &=\int \delta W(V) dx \leq \delta N.
\end{align}
Hence,
\begin{equation}\label{1}
\fint_{B_\rho} |\nabla (U-V)|^2 dx \leq \delta N \rho^{-n}.
\end{equation}
On the other hand, since $|\nabla V|^2$ is subharmonic in $B_1$ and by \eqref{ass1}
$$\int_{B_1} |\nabla V|^2 dx \leq \int_{B_1} |\nabla U|^2 dx\leq C,$$ we conclude that
$$|\nabla V| \leq C, \quad \text{in $B_{1/2}$}.$$ In particular,
$$\fint_{B_\rho} |\nabla V|^2 dx\leq C_0,$$ for $C_0$ universal. Combining this last inequality with \eqref{1} we conclude that,
$$\fint_{B_\rho} |\nabla U|^2 dx \leq 2 \delta N \rho^{-n} + 2 C_0 \leq 4 C_0,$$ as long as $$\delta \leq \frac {C_0}{N} \rho^n.$$ Thus,
$$\rho^{2-2\alpha}\fint_{B_\rho} |\nabla U|^2 dx \leq 4C_0 \rho^{2-2\alpha} \leq 1,$$ by choosing $\rho$ such that, $$\rho^{2-2\alpha} \leq (4C_0)^{-1}.$$ \end{proof}
The next corollary now follows via standard arguments.
\begin{cor}\label{holder} Let $U$ be a minimizer of $J_N$ in $B_1$. Then $U \in C_{loc}^{0,\alpha} (B_1)$, for any $\alpha \in (0,1)$, and $$[U]_{C^\alpha(B_{1/2})} \le C(1+\|\nabla U \|_{L^2(B_1)}),$$ with $C=C(\alpha,n,N)$.\end{cor}
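For the reader's convenience, we sketch the standard iteration behind Corollary \ref{holder}. After normalizing so that \eqref{ass1} holds, one applies Proposition \ref{prop1} inductively to the rescalings
$$U_k(x):=\rho^{-k\alpha}\, U(\rho^k x),$$
which, since $W$ is invariant under multiplication by positive constants, minimize $J_N^{\delta_k}$ in $B_1$ with $\delta_k=\delta\, \rho^{2k(1-\alpha)} \le \delta_0$. This yields
$$\fint_{B_{\rho^k}} |\nabla U|^2\, dx \le \rho^{k(2\alpha-2)}, \quad \quad \forall k \ge 0,$$
and the $C^{0,\alpha}$ estimate then follows from Morrey's embedding and a covering argument.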
Prior to proving Lipschitz continuity, we introduce the following definition.
\begin{defn}\label{conn} We say that $U$ is disconnected in $B_\rho$ if there exists $k \in \{2,\ldots, N\}$ such that $u_{k-1}>u_k$ in $B_\rho$. Otherwise we say that $U$ is connected in $B_\rho$. \end{defn}
Our main Lipschitz regularity result reads as follows.
\begin{lem}[Uniform Lipschitz estimate]\label{uniLip}Let $U$ minimize $J_N$ in $B_1$ and assume that $\sum_{i=1}^N u_i=0$ and $U$ is connected in $B_{1/2}$. Then \begin{equation}\label{con1}\|U\|_{C^{0,1}(B_{1/2})} \le C,\end{equation} for $C>0$ universal. \end{lem}
Then, the next theorem is an immediate corollary. \begin{thm}\label{nonC} Let $U$ minimize $J_N$ in $B_1$. Then, $U \in C^{0,1}(B_{1/2})$ and
\begin{equation}\label{in1}\|U\|_{C^{0,1}(B_{1/2})} \le C(1+ \|U\|_{L^2(B_1)}), \end{equation} for $C>0$ universal. \end{thm}
We use Lemma \ref{uniLip} to prove Theorem \ref{nonC} by induction on $N$.
The case $N=1$ corresponds to harmonic functions and it is obvious.
Next, assume the claim holds for $j$ membranes, for all $j \leq N-1$. It suffices to prove \eqref{in1} in the case of $N$ membranes with $\sum u_i=0$.
Indeed, in the general case we let $h:=\frac{1}{N}\sum_i u_i$, which is harmonic, and write $U= H + V$ with $H:= (h,\ldots, h)$ and $\sum_i v_i=0$. Since $H$ satisfies \eqref{in1} and, by the pointwise orthogonality $H \cdot V = h \sum_i v_i =0$,
$$\|H+V\|^2_{L^2} = \|H\|^2_{L^2} + \|V\|^2_{L^2},$$ it suffices to show that $V$ satisfies \eqref{in1} as well.
If the $N$ membranes are connected in $B_{1/2}$ then the claim follows from Lemma \ref{uniLip}. If they are disconnected in $B_{1/2}$, then in view of Lemma \ref{induction}, by the induction hypothesis we have
\begin{equation}\label{in2}\|U\|_{C^{0,1}(B_{1/4}(x_0))} \le C(1+ \|U\|_{L^2(B_{1/2})}), \quad \forall x_0 \in B_{1/4},\end{equation} which gives the desired claim.
\textit{Proof of Lemma $\ref{uniLip}$}. We argue by induction. In the case $N=1$, $u_1 \equiv 0$ and the statement is trivial. Assume that \eqref{con1} holds when $U$ minimizes $J_k$, $k<N.$ Then, by the argument above, the estimate \eqref{in1} also holds for minimizers of $J_k$, $k<N.$
Let $V$ be the harmonic replacement of $U$ in $B_1$; by the maximum principle $v_i \ge v_{i+1}$, and moreover \begin{equation}\label{vi}\sum_{i=1}^N v_i=0.\end{equation} Then, arguing as in \eqref{new} in Proposition \ref{prop1}, and by Poincar\'e's inequality, we obtain
\begin{equation}\label{UV}\|U-V\|_{L^2(B_{1})},\|\nabla (U-V)\|_{L^2(B_1)} \le C.\end{equation} Set $$\mu:= \max_i \, \, \{v_i(0)\} = v_1(0).$$ By Harnack inequality, $$ v_i -v_{i+1} \geq c (v_i -v_{i+1})(0) \quad \text{in $B_{7/8}$}, \quad \forall i=1,\ldots, N-1,$$ and in view of \eqref{vi} simple arithmetic gives that for some $k \in \{1,\ldots, N-1\}$, \begin{equation}\label{k}v_k -v_{k+1} \geq c (v_k -v_{k+1})(0) \geq c \mu \quad \text{in $B_{7/8}$}.\end{equation} Furthermore, Harnack inequality for $v_1 \geq 0$ and $-v_N \geq 0$ also guarantees that
\begin{equation}\label{|V|}|V|= \max_i\{|v_i|\} = \max\{v_1, -v_N\} \leq C \mu \quad \text{in $B_{7/8}$}.\end{equation}
We now wish to show that $\mu$ is bounded by a universal constant. Indeed, if $\mu \gg 1$ then, by the second inequality in \eqref{UV}, \eqref{|V|}, and the fact that $V$ is harmonic, we get $$ \|\nabla U\|_{L^2(B_{7/8})} \le C (\mu + 1) \le C \mu.$$
Hence, by Corollary \ref{holder} and the fact that $V$ is harmonic and satisfies \eqref{|V|}, we get in $B_{3/4}$, $$[U]_{C^\alpha} \le C \mu, \quad [U-V]_{C^\alpha} \le C \mu.$$ Thus, if at $x_0\in B_{1/2}$, $|(U-V)(x_0)|=t$, then $|U-V| \ge t/2$ in a small neighborhood of size $r \sim (t/\mu)^{1/\alpha}$. By the first inequality in \eqref{UV} we get that $t \leq C \mu^{n/(2\alpha+n)}.$ Hence,
$$ |U-V| \le C \mu^{1-\delta} \quad \text{in $B_{1/2}$},$$ for some $0<\delta<1.$ In view of \eqref{k} $$u_k -u_{k+1} \geq c\mu -C \mu^{1-\delta}>0 \quad \mbox{in} \quad B_{1/2}, $$ and we contradict that $U$ is connected in $B_{1/2}$. Thus $\mu \le C$ as desired, and it follows from the estimates above that
\begin{equation}\label{h1}\|U\|_{H^1(B_{1/2})} \le C.\end{equation} This proves the statement for the $H^1$ norm instead of the $C^{0,1}$ norm.
In order to conclude our proof, it suffices to obtain a scale invariant version of this $H^1$ estimate, i.e.
$$ \fint_{B_{r_k}} |\nabla U|^2 \leq C, \quad r_k=2^{-k}, \quad \forall k\geq 1.$$ In terms of the rescalings $$\tilde U_r(x):=U(rx)/r, \quad x \in B_1,$$ this is equivalent to
$$\fint_{B_{1}} |\nabla \tilde U_{r_k}|^2 \leq C, \quad \forall k \geq 1.$$
Let $m \geq 1$ be the first value for which $U$ is connected in $B_{r_m}$ but not connected in $B_{r_{m+1}}.$ If no such $m$ exists, then \eqref{h1} holds for all $\tilde U_{r_k}$ and the desired statement follows. Otherwise, in view of \eqref{h1}, \begin{equation}\label{usubj}
\|\tilde U_{r_k}\|_{H^1(B_{1/2})} \leq C, \quad \forall k \leq m. \end{equation} Since $U$ is disconnected in $B_{r_{m+1}}$, we can use the induction hypothesis and conclude that
$$\|\tilde U_{r_m}\|_{C^{0,1}(B_{1/4})} \leq C(1+ \|\tilde U_{r_m}\|_{L^2(B_{1/2})}) \leq C.$$ Then \eqref{usubj} holds for all $k \ge m$ as well, and we reach the conclusion. \qed
Having established the continuity results above, we can obtain the following compactness property of minimizers.
\begin{prop}[Compactness of minimizers] If $U_m$ is a sequence of minimizers in $B_1$ and $U_m \to \bar U$ uniformly on compact sets, then $\bar U$ is a minimizer and $$ J_N(U_m,B_r) \to J_N(\bar U, B_r) \quad \quad \forall r \in (0,1).$$ \end{prop}
\begin{proof} Since the $U_m$'s are continuous, $\bar U$ is continuous and the $U_m$'s are uniformly bounded on compact sets. In view of the Lipschitz continuity Theorem \ref{nonC}, we conclude that the $U_m$'s and $\bar U$ are uniformly Lipschitz on compact sets.
Let $V$ be a competitor which agrees with $\bar U$ in $B_1 \setminus B_{r}$. Let $0\leq \varphi \leq 1$ be a radial cutoff function which is 1 in $B_{r}$ and vanishes outside $B_{r+\delta}$. Then $$ V_m:= \varphi \, V + (1- \varphi)U_m $$ agrees with $U_m$ in $B_1 \setminus B_{r+\delta}$, and it is an admissible competitor as well. Thus, minimality implies that \begin{equation}\label{up}J_N(U_m,B_{r+\delta}) \le J_N(V_m, B_{r+\delta}).\end{equation} Since $$ \nabla V_m = (V-U_m) \nabla \varphi + \varphi \nabla V + (1-\varphi) \nabla U_m,$$
$$ |\nabla U_m|, |\nabla {\bar U}| \le K , \quad \quad W(U_m), W(V_m) \le N, \quad |\nabla \varphi| \le C \delta^{-1},$$ we have that in the annulus $\mathcal A_\delta:=B_{r+\delta}\setminus B_r$,
$$ J_N(V_m, \mathcal A_\delta ) - J_N(U_m,\mathcal A_\delta) \le C(1+K^2)|\mathcal A_\delta| + C \delta^{-1} \|U_m-\bar U\|^2_{L^\infty(\mathcal A_\delta)}.$$ The right hand side can be made arbitrarily small by first choosing $\delta$ small depending on $K$ and then $m$ large. On the other hand, in view of \eqref{up} we have $$J_N(U_m, B_r) \leq J_N(V_m, \mathcal A_\delta ) - J_N(U_m,\mathcal A_\delta) + J_N(V,B_r).$$ In conclusion $$\limsup J_N(U_m,B_r) \le J_N(V, B_r),$$ while the lower semicontinuity of the energy gives $$ J_N(\bar U,B_r) \le \liminf J_N(U_m,B_r) .$$ \end{proof}
\subsection{Non-degeneracy}
\begin{lem}[Caccioppoli's estimate]\label{CE}Let $U$ minimize $J_N$ in $B_1.$ Then,
$$ \int_{B_{1/2}} (|\nabla U|^2 + W(U)) \, dx \le C \int_{B_1} |U|^2 dx,$$ for $C>0$
universal.\end{lem} \begin{proof}Let $0 \leq \phi \leq 1$ be a smooth bump function, which equals 1 in $B_{1/2}$ and $0$ outside of $B_{3/4}$. Set, $$\tilde u_k:= u_k(1-\phi), \quad k=1,\ldots, N.$$ Then $\tilde U=(\tilde u_1,\ldots, \tilde u_N)$ is an admissible competitor and by minimality,
$$\int_{B_1}( |\nabla U|^2+ W(U))\; dx \leq \int_{B_1} (|\nabla \tilde U|^2+ W(\tilde U))\; dx.$$ Since $$\text{$W(\tilde U) \leq W(U)$ and $W(\tilde U)=0$ in $B_{1/2}$,}$$ it follows that
\begin{equation}\label{Ca}\int_{B_1}(|\nabla U|^2 + W(U)\chi_{B_{1/2}})\; dx\leq \int_{B_1}|\nabla \tilde U|^2\; dx.\end{equation} Moreover, \begin{align*}
\int_{B_1}|\nabla \tilde U|^2\; dx & = \int_{B_1}((1-\phi)^2|\nabla U|^2 + |U|^2 |\nabla \phi|^2 - (1-\phi)\nabla |U|^2 \cdot \nabla \phi) \, \, \; dx \\
& \leq \int_{B_1\setminus B_{1/2}}|\nabla U|^2 \; dx + \int_{B_1}|U|^2(1-\phi)\Delta \phi\; dx, \end{align*} where in the last inequality we integrated by parts. Using this in \eqref{Ca} we get,
$$\int_{B_{1/2}} (|\nabla U|^2 + W(U)) \; dx \leq C\int_{B_1} |U|^2\;dx$$ with $C=C(n)$ as desired. \end{proof}
A direct consequence of Caccioppoli's estimate is that $|U|$ cannot be small and strictly positive in $B_1$.
\begin{cor}\label{CorC1}
If $\sum_{i=1}^N u_i=0$ and $B_r(x_0) \subset \{|U| >0\}$ then
$$ \max_{B_{r}(x_0)} |U| \ge c r,$$ for some $c>0$ universal. \end{cor}
Indeed, by scaling we may assume that $x_0=0$, $r=1$. Then $|U|>0$, together with $\sum_{i=1}^N u_i=0$, implies that $u_1 > 0 > u_N$, hence $W(U)\ge 1$ in $B_1$, and the conclusion follows from the estimate in Lemma \ref{CE}.
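Explicitly, since $W(U) \ge 1$ in $B_{1/2}$, Lemma \ref{CE} gives
$$|B_{1/2}| \le \int_{B_{1/2}} W(U)\, dx \le C \int_{B_1} |U|^2\, dx \le C\, |B_1| \max_{B_1}|U|^2,$$
which yields $\max_{B_1}|U| \ge c$.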
We state a stronger version of non-degeneracy for $|U|$ than in Corollary \ref{CorC1}. \begin{lem}[Non-degeneracy]\label{NDeg}
Let $U$ be a minimizer in $B_1$. If $\sum_{i=1}^N u_i=0$ and $0 \in \partial \{|U|>0\}$ then $$\max _{B_r} |U| \ge c r, \quad \quad \forall r <1.$$ \end{lem}
\begin{proof} We use the previous corollary and a standard ball sequence construction for the classical one-phase problem. Notice that $\sum_{i=1}^N u_i=0$ and $u_1 \ge \ldots \ge u_N$ imply that
$$c_0 |U| \le u_1 \le |U|,$$ with $c_0>0$ depending only on $N$, and recall that by Lemma \ref{sub-super} $u_1$ is a subharmonic function.
Assume that $r=1$. We construct inductively a sequence of points $x_k \in \{|U|>0\}$ such that $x_0$ is sufficiently close to the origin and
1) $u_1(x_{k+1}) \ge (1+ c_1) u_1(x_k)$
2) $$ c_2 r_k \le u_1(x_k) \le C_2 r_k, \quad \quad |x_{k+1}-x_k| \le 4 r_k, \quad \quad r_k:=d(x_k, \{|U|=0\}).$$
The inequality $u_1(x_k) \le |U(x_k)| \le C_2 r_k$ is a consequence of Lemma \ref{uniLip}.
Assume that $x_1,\ldots,x_k$ have been constructed, and let $$y_k \in \partial \{|U|=0\} \cap \partial B_{r_k}(x_k).$$ Since $u_1 \ge 0$ is subharmonic in $B_{r_k}(x_k)$, $u_1(x_k) \ge c_2 r_k$ and, by Lemma \ref{uniLip}, $$u_1 (x) \le |U(x)| \le C_2 |x-y_k|,$$
it follows from the mean value inequality that there exists $z_k \in B_{r_k}(x_k)$ such that $u_1(z_k) \ge (1+ c_1) u_1(x_k)$ for some $c_1>0$ depending on $c_2$, $C_2$ and $n$. We pick $x_{k+1} \in B_{t_k }(z_k)$ to be the point where the maximum value of $u_1$ occurs, where $t_k$ represents the distance from $z_k$ to the set $\{|U|=0\}$. Then \begin{align*} u_1(x_{k+1}) & = \max_{B_{t_k }(z_k)} u_1 \\
& \ge c_0 \, \max_{B_{t_k }(z_k)} |U| \\ & \ge c_0 \, c \, t_k \\
& \ge c_2 \, d(x_{k+1},\{|U|=0\}), \end{align*} where in the second inequality we have used Corollary \ref{CorC1}, and we chose $c_2$ small depending on $c_0$ and $c$. This proves the existence of the sequence $x_k$.
Then we have $$(1+ c_1) u_1(x_k) \le u_1(x_{k+1}) \le C_2 r_{k+1} \le C_3 r_k \le C_4 u_1(x_k),$$ hence $u_1(x_k)$ is a sequence that grows at a geometric rate. The conclusion of the lemma follows easily since $u_1(x_k) \sim r_k$.
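In more detail (a sketch of this last step): since $|x_{j+1}-x_j| \le 4 r_j \le C u_1(x_j)$ and $u_1(x_j)$ grows geometrically, we have
$$\sum_{j \le k} |x_{j+1}-x_j| \le C \sum_{j\le k} u_1(x_j) \le C' u_1(x_k).$$
Given $r<1$, choose $x_0$ with $|x_0| \le r/4$ and let $k$ be the first index with $x_{k+1} \notin B_{r/2}$ (such $k$ exists since $u_1$ is bounded in $B_{3/4}$ while $u_1(x_k)$ grows geometrically). Then $x_k \in B_{r/2}$ and
$$\frac r4 \le |x_{k+1}-x_0| \le C' u_1(x_k) \le C' \max_{B_r} |U|,$$
which gives the desired bound $\max_{B_r}|U| \ge c\, r$.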
\end{proof}
Next we show a weak non-degeneracy property for the difference of two consecutive membranes. It is not clear whether the strong non-degeneracy holds for two consecutive membranes as well. This lemma will not be used in the remainder of the paper.
\begin{lem}If $B_r(0) \subset \{u_k > u_{k+1}\}$ is tangent to $\Gamma_k$, then $(u_k - u_{k+1}) (0) \ge c r$. \end{lem} \begin{proof} We only sketch the proof. After rescaling, we can assume that $U$ minimizes $J_N$ in $B_1$, $\sum u_i =0$, and \begin{equation}\label{pos}u_k - u_{k+1} >0 \quad \text{in $B_1$}.\end{equation} We wish to show that $$(u_k - u_{k+1})(0) \geq c_0,$$ for some $c_0$ universal.
We may also assume that $\bar U=(u_1,\ldots,u_k)$ and $\underline U=(u_{k+1},\ldots,u_N)$ are connected in $B_{\delta}$ for some $\delta>0$ small to be chosen later (see Definition \ref{conn}). Otherwise we can reduce the problem to one in $B_\delta$ with a strictly smaller number of membranes. Let $\bar h$, $\underline h$ denote the averages of the membranes in $\bar U$ and $\underline U$, respectively.
Then $\bar h$, $\underline h$ are harmonic in $B_1$ and by Lemma \ref{uniLip} we know that
$$ \|u_i - \bar h\|_{C^{0,1}(B_{1/2})} \le C, \quad \quad i \in \{1,\ldots,k\}.$$ Since $\bar U$ is connected in $B_\delta$ we find that
$$ \|u_i - \bar h\|_{L^\infty(B_\delta)} \le C \delta,$$ hence $$ u_k \le \bar h, \quad \quad \bar h(0) \le u_k(0) + C \delta.$$ Similarly we obtain $$ u_{k+1} \ge \underline h, \quad \quad \underline h(0) \ge u_{k+1}(0) - C \delta.$$ Assume by contradiction that \begin{equation}\label{con15}(u_k - u_{k+1})(0) \leq \delta. \end{equation} Since $\bar h > \underline h$, by the Harnack inequality we find $$ \bar h - \underline h \le C \delta \quad \mbox{in $B_{1/2}$.}$$ This implies that $u_1 \le u_N + C \delta$, and hence $|U| \le C \delta$, in $B_{1/2}$. On the other hand, by \eqref{pos} we have $B_{1/4} \subset \{|U|>0\}$, so Corollary \ref{CorC1} gives $\max_{B_{1/4}} |U| \ge c$, a contradiction for $\delta$ small.
\end{proof}
\section{Monotonicity formula}\label{Sec4}
In this section we present the Weiss type monotonicity formula associated to our energy functional $J_N$ and prove Theorem \ref{Bup}. This is a standard tool in the analysis of the partial regularity of free boundaries arising in minimization problems, see for example \cite{W}.
\begin{prop}[Weiss monotonicity formula] \label{WMF}Assume that $U$ is a critical point of $J_N$ in $B_1$ with respect to continuous deformations in $\mathbb R^{n+1}$. Then for $r<1$,
$$\Phi(r):=r^{-n} \int_{B_r} \left(|\nabla U|^2+W(U)\right)dx - r^{-n-1} \int_{\partial B_r} |U|^2 d \sigma, $$ is monotone increasing, and $$\Phi'(r) = 2 \, r^{-n} \int_{\partial B_r}(r^{-1}U- U_\nu)^2 d \sigma \, \, \, \ge 0.$$ The functional $\Phi$ is constant in $r$ if and only if $U$ is ``a cone'', i.e., homogeneous of degree one, and in that case the constant (the energy of $U$) is given by $$\Phi(U)=\int_{B_1} W(U) dx.$$
\end{prop}
First we specify what it means for $U$ to be critical with respect to continuous deformations in $\mathbb R^{n+1}$. Given a smooth diffeomorphism $\Phi: \mathbb R^{n+1} \to \mathbb R^{n+1}$ with compact support in $\Omega \times \mathbb R$, we deform the $N$ graphs of $U$ by the map $$X \mapsto X + t \Phi(X), \quad \quad \mbox{$t$ small,} \quad X \in \mathbb R^{n+1},$$
onto the graphs of another admissible function that we denote by $U_t$. Then we say that $U$ is critical if $$\frac{d}{dt} J_N(U_t,\Omega) \, |_{t=0} =0. $$ If $\Phi$ is independent of the $x_{n+1}$ variable then $U$ becomes critical for $J_N$ with respect to the standard domain deformations in $\mathbb R^n$.
We start with the following lemma, in which we compute the first variation for critical points with respect to domain deformations.
\begin{lem}[First variation]\label{FV} If $U$ is a critical point of $J_N$ with respect to domain variations then \begin{equation}\label{fv}
\int_{B_1} \left(-2 \, \psi^k_l U_k \cdot U_l + (|\nabla U|^2 + W(U)) div \, \Psi \right)\, \, dx =0, \end{equation} for any Lipschitz map $\Psi=(\psi^1,\ldots,\psi^n): \mathbb R^n \to \mathbb R^n$ with compact support in $B_1$. Here $U_k:=\partial_{x_k} U$, $\psi^k_l:=\partial_{x_l} \psi^k$, and repeated indices are summed.
In particular, for a.e. $r \in (0,1)$ \begin{equation}\label{div}
\frac{d}{dr} (r^{-n} J_N(U,B_r)) = 2 r^{-n} \int_{\partial B_r} |U_\nu| ^2 d \sigma -2 \, r^{-n-1} \int_{B_r}|\nabla U|^2 dx. \end{equation} \end{lem}
\begin{proof} Let $\Psi: \mathbb R^n \to \mathbb R^n$ be a Lipschitz map with compact support in $B_1$, and consider the domain deformation $$x \mapsto x + \epsilon \, \Psi(x) =: y.$$ Then $$ D_x y = I + \epsilon \, D \Psi, \quad \quad D_y x = (D_x y)^{-1}= I - \epsilon D \Psi + O(\epsilon^2), $$ $$ dy=(1+ \epsilon \, div \, \Psi + O(\epsilon^2))\, dx,$$ and \begin{align*}
&J_N(U(y), B_1) = \int_{B_1} |\nabla_y U|^2 + W(U(y)) \, dy \\ &=\int_{B_1} \left(\nabla_x U (I - \epsilon \, D \Psi)(I - \epsilon D \Psi)^T (\nabla_x U)^T + W(U)\right) (1+ \epsilon \, div \, \Psi) \, \, dx + O(\epsilon^2)\\
& = J_N(U,B_1) + \epsilon \int_{B_1} \left(-2 \psi^k_l U_k \cdot U_l + (|\nabla U|^2 + W(U)) div \, \Psi \right)\, \, dx + O(\epsilon^2). \end{align*}
Here $O(\epsilon^2)$ depends on $\|U\|^2_{H^1}$ and $\|D\Psi\|_{L ^\infty}$.
We take in \eqref{fv} $$\Psi(x)= \psi_\delta(|x|) x, $$ with $\psi_\delta$ a cutoff function which is $1$ in $[0,r-\delta]$ and $0$ in $[r , \infty)$; letting $\delta \to 0$, we obtain
$$\int_{B_r} \left((n-2)|\nabla U|^2 + n W(U)\right) dx = r \int_{\partial B_r} \left(|\nabla U|^2 + W (U) - 2 |U_\nu|^2\right) \, d \sigma,$$ which gives \eqref{div}. \end{proof}
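For completeness, we record how the last displayed identity in the proof yields \eqref{div}: for a.e. $r \in (0,1)$,
$$\frac{d}{dr}\left(r^{-n} J_N(U,B_r)\right) = -n\, r^{-n-1} J_N(U,B_r) + r^{-n}\int_{\partial B_r} \left(|\nabla U|^2 + W(U)\right) d\sigma,$$
and substituting the identity in the boundary term gives
$$\frac{d}{dr}\left(r^{-n} J_N(U,B_r)\right) = r^{-n-1}\left( (n-2)\int_{B_r} |\nabla U|^2\, dx + n \int_{B_r} W(U)\, dx + 2r\int_{\partial B_r} |U_\nu|^2\, d\sigma - n\, J_N(U,B_r)\right),$$
which simplifies to \eqref{div} since the $W(U)$ terms cancel.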
Let us consider continuous deformations of the graph of $U$ in the vertical direction $$ (x, U) \mapsto (x, U + t \varphi(x) U)=: (x,U_t(x))$$ with $\varphi$ a smooth function with compact support. Notice that these deformations are admissible for all $t$ close to $0$, and in addition $$ W(U)=W(U_t).$$ Then, if $U$ is critical for $J_N$ with respect to the family $U_t$ we find \begin{equation}\label{div2} 0 = \int_{B_1} \nabla U \cdot \nabla (\varphi U) dx, \end{equation}
which means that $$U \cdot \triangle U =0 $$ holds in the distribution sense. We take in \eqref{div2} $\varphi(x)=\psi_\delta(|x|)$ as in the proof above and obtain that \begin{equation}\label{div3}
\int_{B_r} |\nabla U|^2 dx = \int_{\partial B_r} U \cdot U_\nu \, d\sigma \quad \quad \mbox{for a.e. $r$.} \end{equation}
\begin{proof}[Proof of Proposition \ref{WMF}]
We compute
$$ \frac{d}{dr} \left(r^{-n-1} \int_{\partial B_r} |U|^2 d \sigma \right) = 2 r^ {-n-1} \int _{\partial B_r} \left(U \cdot U_\nu - \frac{|U|^2}{r}\right) \, d \sigma,$$ which together with \eqref{div} and \eqref{div3} gives the formula for $\Phi'(r)$.
If $U$ is a cone, then $\Phi(r)$ is a constant which we denote by $\Phi(U)$, and since
$$ \int_{B_1} |\nabla U|^2 dx=\int_{\partial B_1} U \cdot U_\nu d \sigma = \int_{\partial B_1}| U|^2 d \sigma,$$ we find $$ \Phi(U)= \int_{B_1} W(U) dx.$$ \end{proof}
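Explicitly, if (as in Proposition \ref{WMF}) $\Phi$ denotes the Weiss energy, which we take here in the normalization $$\Phi(r)= r^{-n} J_N(U,B_r) - r^{-n-1}\int_{\partial B_r}|U|^2 \, d\sigma,$$ then \eqref{div}, \eqref{div3} and the derivative computed in the proof combine, after completing the square, into
$$ \Phi'(r)= 2 r^{-n} \int_{\partial B_r} \left|U_\nu - \frac{U}{r}\right|^2 d\sigma \ge 0 \quad \mbox{for a.e. $r$,}$$
with equality for all $r$ precisely when $U$ is homogeneous of degree one.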
Having established Weiss' formula, we can now introduce the notion of blow-up cones. Let $U$ be a minimizer in $\Omega$ and assume $U(x_0)=0$. The Lipschitz continuity Theorem \ref{nonC} shows that $\Phi(r)$ is bounded below, hence it has a limit as $r \to 0$. For a sequence $r_m \to 0$, consider the rescalings $$U_{r_m}(x):= \frac{U(x_0+r_mx)}{r_m}.$$ Then, according to Theorem \ref{nonC}, up to extracting a subsequence, $U_{r_m} \to U_0$ uniformly on compact sets of $\mathbb R^n$. The limit $U_0$ is called a blow-up limit, and by our compactness theorem $U_0$ is a minimizer. It easily follows from Weiss' formula that $U_0$ is homogeneous of degree 1.
This discussion and the non-degeneracy Lemma \ref{NDeg} give Theorem \ref{Bup}.
\section{One dimensional minimizers}\label{Sec5}
We show that the minimizing cones to $J_N$ in dimension $n=1$ are given by $$u_1=u_2=..=u_k \ge u_{k+1}=..=u_N, \quad \quad k=1,\ldots, N-1,$$
with $u_k=u_{k+1}$ for $x\leq 0$ and $u_k >u_{k+1}$ for $x>0$. More precisely, for $k \le N-1,$ define $U_{0,k}$ as $$ u_i=\left(\frac 1k - \frac 1N \right)^{1/2} x^+ \quad \quad \mbox{for $i \le k$}, \quad u_i=-\left(\frac {1}{N-k} - \frac 1N \right)^{1/2} x^+ \quad \quad \mbox{for $i > k$} .$$
Notice that $$ |\nabla U_{0,k}|^2=1=W(U_{0,k}), \quad x>0,$$ and $$ \Phi(U_{0,k})=1,$$ where we are using the notation $$\Phi(U):= \int_{-1}^1 W(U) \;dx.$$ \begin{prop}\label{P1d} The only minimizing cones $U$ in dimension $n=1$ are given by $U(x)=L(x) + U_{0,k}(x)$, with $L(x)= (a\cdot x, \ldots, a\cdot x), a \in \mathbb R^n$ (up to a reflection with respect to 0). If $\sum u_i = 0$ then $U=U_{0,k}.$ \end{prop}
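The identity $|\nabla U_{0,k}|^2=1$ for $x>0$ follows by summing the squared slopes with their multiplicities,
$$ k \left(\frac 1k - \frac 1N\right) + (N-k) \left(\frac{1}{N-k} - \frac 1N\right) = 2 - \frac kN - \frac{N-k}{N}=1,$$
and $\sum u_i =0$ since
$$ k \left(\frac 1k - \frac 1N\right)^{1/2} = (N-k) \left(\frac{1}{N-k} - \frac 1N\right)^{1/2} = \left(\frac{k(N-k)}{N}\right)^{1/2}.$$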
Toward the proof of Proposition \ref{P1d}, we establish the next lemmas.
\begin{lem}[Inwards perturbations]\label{L1da} Let $U$ be a minimizer to $J_N$ in an interval around $0$ in dimension $n=1$. Assume that a number of membranes $u_i$ with indices $$i \in I=\{ j+1,..,k \},$$ coincide at $0$, and on a small interval $(0,\delta)$ to the right of $0$
a) $u_i$ are linear,
b) the family $u_i$, $i \in I$, is strictly separated from the remaining membranes, i.e. $u_{j}>u_{j+1}$, $u_k > u_{k+1}$.
Then, in $(0,\delta)$ the slopes satisfy the inequality
$$ \sum_{i \in I} |\nabla (u_i - u_I)|^2 \ge W (\{u_{j+1},..,u_k \}), \quad \quad u_I:= \frac{1}{k-j} \sum_{i\in I} u_i.$$ \end{lem}
\begin{proof} After subtracting a linear function and after a dilation we reduce to the case $$\delta =1, \quad u_I=0.$$ Also, after relabeling the membranes we may assume that $$I=\{1,2,.., N\},$$ and $U$ minimizes the energy in $[0,1]$ among all admissible competitors $V$ with the same boundary data as $U$ in $[0,1]$ and with $v_1 \le u_1$ and $v_N \ge u_N$.
We pick $$ V_\epsilon (x):=\left \{ \begin{array}{l} U(\frac {x-\epsilon}{1-\epsilon}) \quad \mbox{if} \quad x \in [\epsilon,1] \\
\
\\
0 \quad \mbox{if} \quad x \in [0,\epsilon],
\end{array} \right . $$ and find that
$$ J_N(V_\epsilon)= \int_0^1\left( \frac{1}{1-\epsilon} |\nabla U|^2 + (1-\epsilon) W(U)\right) dx. $$ The minimality implies
$$ 0 \le \frac{d}{d \epsilon} J_N(V_\epsilon)|_{\epsilon=0} = \int_0^1 (|\nabla U|^2 - W(U)) dx,$$
which gives the desired conclusion $$ |\nabla U|^2 \ge W(U).$$
\end{proof}
\begin{rem}\label{R1d} If in addition to the hypotheses of Lemma \ref{L1da} we assume that the $u_i$ with $i \in I$ coincide in $[-\delta,0]$, and the separation with respect to the remaining membranes holds in $[-\delta,\delta]$, then the inequality becomes an equality:
\begin{equation}\label{EL1d}
\sum_{i \in I} |\nabla (u_i - u_I)|^2 = W (\{u_{j+1},..,u_k \}), \quad \quad u_I:= \frac{1}{k-j} \sum_{i\in I} u_i.
\end{equation}
Indeed, in the proof above we may replace $\epsilon$ by $-\epsilon$, and the corresponding functions are still admissible.
This equality can be interpreted as the Euler-Lagrange equation at the branching points, which expresses the equipartition of energy. \end{rem}
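For later use we record the value of the left-hand side of \eqref{EL1d} when the family in $I$ splits into two groups of sizes $p$ and $q$, with slopes $a$ and $b$ respectively. Since $u_I$ then has slope $\frac{pa+qb}{p+q}$,
$$ \sum_{i \in I} |\nabla (u_i-u_I)|^2 = p \left(\frac{q(a-b)}{p+q}\right)^2 + q \left(\frac{p(a-b)}{p+q}\right)^2 = \frac{pq}{p+q}\, (a-b)^2,$$
so the inequality of Lemma \ref{L1da} takes the form $(a-b)^2 \ge \left(\frac 1p + \frac 1q\right) W$.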
\begin{lem}\label{L1db}Let $U$ be a minimizer to $J_N$ in $(-\delta,\delta)$ in dimension $n=1$ with $$u_1=u_2=..=u_k, \quad \mbox{for some $k \ge 2$.}$$ Assume that
a) $u_1$ is linear in $[0,\delta)$ and $(-\delta,0]$, and let $a^+$, respectively $a^-$ denote its slopes;
b) $u_k > u_{k+1}$ in $(0,\delta)$.
Then
$$ 0 \le a^+-a^- \le \frac{1}{\sqrt k}.$$
\end{lem}
\begin{proof} As above, after subtracting a linear function we may assume that $\delta=1$ and $a^-=0$. Then $a^+ \ge 0$ since $u_1$ is subharmonic, according to Proposition \ref{sub-super}.
The top $k$ membranes vanish to the left of $0$, and we can use as an admissible competitor for these functions the same one used in the proof of Lemma \ref{L1da} with $\epsilon$ replaced by $-\epsilon$. Precisely, let $$ v_{\epsilon,i}:= u_i\left(\frac {x+\epsilon}{1+\epsilon}\right) \chi_{[-\epsilon,1]} \quad \mbox{if $i \le k$,}\quad \mbox{and} \quad v_{\epsilon,i} := u_i \quad \mbox{if $i > k$},$$ and notice that \begin{align*}
J_N(V_\epsilon) - J_N(U) & = \int_{-1}^1 \left(\sum_{i=1}^k (|\nabla v_{\epsilon,i}|^2 - |\nabla u_i|^2) + \chi_{\{v_{\epsilon,k} > u_{k+1}\} \cap [-\epsilon,0]} \right)dx\\
& \le \epsilon - \frac{\epsilon}{1+\epsilon} \int_0^1 \sum_{i=1}^k |\nabla u_i|^2 dx, \end{align*} which, as $\epsilon \to 0$, gives the desired inequality $$ k (a^+)^2 \le 1.$$ \end{proof}
We are now ready to provide the proof of Proposition \ref{P1d}.
\begin{proof}[Proof of Proposition $\ref{P1d}$] Let $U$ be a nonzero minimizing cone of average 0 in dimension $n=1$ and let $$a_1^+ > a_2^+>...>a_l^+$$ be the slopes of the membranes on $[0, \infty)$ and denote by $k_1^+,..,k_l^+$ their multiplicities, that is, $k^+_j$ is the number of membranes with slope $a^+_j$. Since $U$ has average $0$ we have $a_1^+ \ge 0 \ge a_l^+$. Similarly on $(-\infty,0]$ we have slopes $-a_1^-,-a_2^-,..,-a_m^-$ with $$a_1^->...>a_m^-, \quad \quad a_1^- \ge 0 \ge a_m^-,$$ with multiplicities $k_1^-,..,k_m^-$. We apply Lemma \ref{L1da} to the collection of membranes corresponding to two consecutive slopes $a_i$ and $a_{i+1}$ (here we drop the $+$ or $-$ superscript for simplicity of notation) and obtain $$ (a_i -a_{i+1})^2 \left ( k_i \frac{k_{i+1}^2}{(k_i+k_{i+1})^2} + k_{i+1} \frac{k_{i}^2}{(k_i+k_{i+1})^2}\right) \ge 1,$$ or $$ (a_i - a_{i+1})^2 \ge \frac{1}{k_i} + \frac{1}{k_{i+1}}.$$ Next assume that \begin{equation}\label{999} k_1^+ \le k_1^-. \end{equation}
We can apply Lemma \ref{L1db} to the top $k_1^+$ membranes and obtain $$ a_1^+ + a_1^- \le \sqrt {\frac{1}{k_1^+}}.$$ If $l=1$, then $k_1^+ =N$ and therefore $k_1^-=N$ as well. This means $l=m=1$, hence $U \equiv 0$, which contradicts our hypothesis. In conclusion, $l \ge 2$ and then the inequalities above (with $i=1$) imply \begin{equation}\label{1000}
a_2^+ < - a_1^- \le 0,
\end{equation} and since $a_l^+ \le a_2^+$, we find \begin{equation}\label{1001}
a_1^- + a_l^+ <0.
\end{equation} We claim that \begin{equation}\label{1002} k_l^+ < k_m^-.
\end{equation}
Indeed, otherwise $k_l^+ \ge k_m^-$ and we can argue as in \eqref{1001} for the membranes in the reversed order (notice that $(u_1,..,u_N)$ is a minimizer if and only if $(-u_N,..,-u_1)$ is a minimizer). We obtain $$-a_1^- -a_l^+ <0,$$ and contradict \eqref{1001}, hence the claim \eqref{1002} is proved.
If $l \ge 3$ then we may apply \eqref{1000} in the reversed order (see \eqref{1002}) and obtain $-a_2^+ < 0$, a contradiction.
In conclusion $l=2$, and \eqref{999}, \eqref{1002} imply that $m=1$ and $U=U_{0,k}$ for some $k$.
It remains to show that $U_{0,k}$ is a minimizer. Let $V$ be a perturbation of $U$ which coincides with $U$ outside a compact interval that contains the origin, say $[-1,1]$. Let $\rho$ be such that $u_1 >0$ in $(\rho,1]$ and $u_1(\rho)=0$ (and therefore $U(\rho)=0$). Then $W(V) \ge 1$ in $[\rho,1]$, hence $$ J_N(V,[-1,1]) \ge J_N(\bar V, [-1,1]) $$ where $\bar V$ is the harmonic replacement of $V$ in $[\rho,1]$, with $\bar V$ extended to be $0$ in $[-1,\rho]$. On the other hand, the right hand side is minimized when $\rho=0$, as can be seen from Remark \ref{R1d}.
\end{proof}
If we consider minimality in the more restrictive class of continuous deformations of the graph of $U$ in $\mathbb R \times \mathbb R$ (which do not change the topology of the junctions), then there are other cones in 1D. For example, for $N=3$, $$ u_1= x^+ , \quad u_2=0, \quad u_3=-x^+,$$ is a {\it minimizing} cone with a triple junction at $0$. Notice that the Euler-Lagrange equation \eqref{EL1d} is satisfied.
The minimizer in $H^1$ with the same boundary data in $[-1,1]$ of the example above is (see Figure \ref{fig3})
$$ u_1= \lambda \left (x -1 + \frac 1 \lambda \right )^+, \quad u_2= - \frac 12 u_1 + \mu \left(x-1 + \frac {1}{2 \mu}\right )^+, \quad u_3=-u_1-u_2, $$
$$ \lambda=\frac { \sqrt 2}{\sqrt 3}, \quad \mu = \frac {1}{ \sqrt 2 }.$$
\begin{figure}
\caption{A minimizer for $N=3$, $n=1$.}
\label{fig3}
\end{figure}
\section{Free boundary regularity near one-dimensional cones}\label{Sec6}
In this section we prove the partial regularity result for the free boundaries $\Gamma_i$ in dimension $n \geq 2.$ For convenience we restate Theorem \ref{prI} from the Introduction.
\begin{thm}[Partial regularity]\label{pr} Let $U$ be a minimizer in $B_1$. The free boundaries $\Gamma_i$'s are analytic and disjoint from one another outside a closed set $\Sigma$ of singular points of Hausdorff dimension $n-2$, and $$\mathcal H^{n-1}(\Gamma_i \cap B_{1/2}) \le C.$$ \end{thm}
In order to obtain Theorem \ref{pr}, we first establish a series of lemmas. We begin with a dimension reduction lemma for minimizers depending on fewer variables.
\begin{lem}$U(x)$ is a minimizer for $J_N$ in $B_1 \subset \mathbb R^n$ if and only if its trivial extension in one more variable is a minimizer for $J_N$ in $B_1 \times \mathbb R \subset \mathbb R^{n+1}$. \end{lem}
The proof of this lemma is standard and we omit the details.
Next, we obtain a lower bound on the energy of nontrivial cones. Recall that cones are 1-homogeneous functions.
\begin{lem}[Least energy cones]\label{L41} Let $U$ be a nonzero cone in $\mathbb R^n$, with $\sum u_i=0$. Then
$$ \Phi(U) \ge \frac 12 |B_1|,$$ with equality if and only if $$U= U_{0,k}(x \cdot \nu),$$ for some $k <N$, and some unit direction $\nu$. \end{lem}
\begin{proof} We prove this by induction on the dimension $n$. The case $n=1$ was addressed in the previous section.
If $\{|U|=0\}$ does not contain a ray, then $W(U) \ge 1$, and $\Phi(U) \ge |B_1| > \frac 12 |B_1|$. If $\{|U|=0\}$ contains a ray, then we can choose $x_0 \in \partial\{|U|>0\} \cap \partial B_1$, and by the monotonicity formula obtain $$ \Phi(U)= \lim_{r \to \infty} \Phi_{x_0}(U,r) \ge \Phi_{x_0}(U,0+),$$ with equality only if $U$ is constant in the $x_0$ direction. Here $\Phi_{x_0}(U,0+)$ denotes the limit of the $\Phi$ functional centered at $x_0$, and its value coincides with the energy of a blow-up cone at $x_0$. This cone is constant in the $x_0$ direction, and by the previous lemma it is the extension of an $(n-1)$-dimensional cone to $\mathbb R^n$. Now the conclusion follows easily from the induction hypothesis. \end{proof}
We state the $\epsilon$-regularity theorem for our problem.
\begin{prop} \label{GK}Assume that $U$ is a minimizer of $J_N$ in $B_1$ with $$0 \in \partial\{|U|>0\}, \quad \quad \sum u_i=0,$$ and
$$ \Phi(U,B_1) \le \frac 12 |B_1| + \delta,$$ for some small $\delta$ universal. Then, there exists $k<N$ such that $\Gamma_k \cap B_{1/2}$ is an analytic hypersurface and $\Gamma_i \cap B_{1/2}=\emptyset$ if $i \ne k$. \end{prop}
We start by proving the following result.
\begin{lem}\label{L42} Assume that $U$ satisfies the hypotheses of Proposition $\ref{GK}$. There exists $k$ such that in $B_{3/4}$ we have $u_1=..=u_k$ and $u_{k+1}=..=u_N$, i.e. $\Gamma_i \cap B_{3/4}=\emptyset$ for all $i \ne k$. \end{lem}
\begin{proof}
First we show that there exists a unit direction $\nu$ such that in $B_{7/8}$, \begin{equation}\label{4000}
|U-U_{0,k}(x \cdot \nu)| \le \eta(\delta) \quad \quad \mbox{ with $\eta(\delta) \to 0$ as $\delta \to 0$.} \end{equation} This follows by compactness from Lemma \ref{L41}. Indeed, if $U_m$ is a sequence of minimizers in $B_1$ which satisfy the hypotheses with $\delta=\delta_m \to 0$, then up to subsequences, $U_m \to \bar U$ uniformly on compact sets of $B_{7/8}$ and
$$0 \in \partial\{|\bar U|>0\}, \quad \quad \Phi(\bar U,B_{7/8}) = \lim \Phi(U_m,B_{7/8}) \le \frac 12 |B_1|.$$ Lemma \ref{L41} implies that $\bar U$ is a cone of least energy, i.e. $\bar U = U_{0,k} (x \cdot \nu)$ for some $k$, and \eqref{4000} is proved.
Conclusion \eqref{4000} holds also for the rescalings $U_r(x) =r^{-1}U(xr)$, with $k$ and $\nu$ depending on $r$. However, the continuity of the $U_r$'s implies that $k$ is in fact independent of $r$, provided that we choose $\eta(\delta)$ sufficiently small.
On the other hand, \eqref{4000} and the non-degeneracy of $|U|$ give that $\{|U|=0\}$ and the half-space $x \cdot \nu <0$ are close in the Hausdorff distance sense in $B_{3/4}$. The same argument as above can be applied at any point $x_0 \in \partial \{|U|>0\} \cap B_{3/4}$ instead of the origin, and the conclusion easily follows.
\end{proof}
\begin{proof}[Proof of Proposition $\ref{GK}$]
Let us assume that $\sum u_i =0$. Then, by Lemma \ref{L42}, $\Gamma_i \cap B_{3/4}= \emptyset$ if $i \ne k$, and $u_k$ is the minimizer for the scalar Bernoulli problem
$$ \int_{B_{3/4}} \left(\frac{Nk}{N-k}|\nabla u|^2 + \chi_{\{u>0\}}\right) dx,$$
with sufficiently flat free boundary in $B_{3/4}$. The result follows from the classical work of Alt and Caffarelli \cite{AC}.
\end{proof}
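We remark that the coefficient $\frac{Nk}{N-k}$ arises from the constraint $\sum u_i=0$: writing $u_1=..=u_k=u$ and $u_{k+1}=..=u_N=-\frac{k}{N-k}\, u$ in $B_{3/4}$, we obtain
$$ \sum_{i=1}^N |\nabla u_i|^2 = k |\nabla u|^2 + (N-k)\, \frac{k^2}{(N-k)^2}\, |\nabla u|^2 = \frac{Nk}{N-k} |\nabla u|^2,$$
while, in this region, $W(U)$ reduces to $\chi_{\{u>0\}}$.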
\begin{defn} We say that $x_0$ is a regular point of $\Gamma_k$ if, when restricting to the collections of membranes that coincide at $x_0$, the corresponding blow-up cone is one-dimensional.
\end{defn}
\begin{proof}[Proof of Theorem \ref{pr}]
If at $x_0 \in \Gamma_k$ we have $$u_l >u_{l+1}=...=u_m>u_{m+1},$$ with $l+1\le k <m$, then $$\tilde U:=(u_{l+1},...,u_m)$$ locally minimizes the energy $J_{m-l}$ involving $m-l$ membranes. After subtracting the average from $\tilde U$, we reduce to the case $\tilde U(x_0)=0$. If the blow-up cone at $x_0$ is one-dimensional, then $x_0$ is a regular point.
Then, by Proposition \ref{GK}, $\Gamma_k$ is an analytic surface and $\Gamma_i=\emptyset$ near $x_0$ for $i \in \{l+1,...,m \}$, $i \ne k$. Furthermore, Proposition \ref{GK} implies that if $U_m \to U$ then $\Gamma_{m,k}$ converges to $\Gamma_k$ in $C^{1,\alpha}$ sense near $x_0$.
Now the theorem follows from the standard dimension reduction argument of Federer, see \cite{F}. We omit the details.
\end{proof}
\section{Two dimensional cones for $N=3$ membranes}\label{Sec7}
In this section we study nontrivial cones $U$ which are not one-dimensional. We restrict to the case $N=3$ membranes in $n=2$ dimensions.
We introduce two cones $V_0$ and $V_s$ which turn out to be the only one-homogeneous functions which minimize the energy locally at points outside the origin.
We use polar coordinates $(r, \theta)$, and view sectors as intervals in the variable $\theta$. We also use the notation $$ e_\theta=(\cos \theta, \sin \theta).$$
\begin{defn} The cone $V_0=(v_{0,1}, v_{0,2}, v_{0,3})$ is defined as $$v_{0,1}(x)= \left\{ \begin{array}{l} \sqrt{\frac 23} \, \, x \cdot e_{-\frac \pi 6}, \quad \quad \quad \quad \quad \quad \mbox{if} \quad \theta \in [-\frac 23 \pi, \frac 16 \pi ], \\
\
\\
\sqrt{\frac 1 6} \, \, x \cdot e_{\frac \pi 6}, \quad \quad \quad \quad \quad \quad \mbox{if} \quad \theta \in [\frac 1 6 \pi,\frac 23 \pi],\\
\
\\
0 \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \mbox{otherwise},\\
\end{array} \right . $$
$$ v_{0,3}(x)= - v_{0,1}(x_1,-x_2), \quad \quad v_{0,2}=-v_{0,1}-v_{0,3}.$$
The cone $V_s=(v_{s,1},v_{s,2}, v_{s,3})$ is defined as
$$ v_{s,1}(x)= \frac{1}{\sqrt {10} }\, \max \left \{ |x_1|, 2 |x_2| \right \}, $$ $$ v_{s,3}(x)= - v_{s,1}(x_2,x_1), \quad \quad v_{s,2}=-v_{s,1}-v_{s,3}. $$ \end{defn}
\begin{figure}
\caption{ The free boundaries for the cones $V_0$ and $V_s$}
\label{fig1}
\end{figure}
\begin{prop} Let $N=3$ and $U$ be a cone with $\sum u_i=0$ which minimizes $J_N$ locally at points on the unit circle $\partial B_1$, and assume that $U$ is not one-dimensional. Then, up to rotations $U=V_0$, or $U=V_s$.
\end{prop}
\begin{proof} By the partial regularity Theorem \ref{pr}, the free boundaries $\Gamma_1$ and $\Gamma_2$ of $U$ consist of at most a finite number of distinct rays. Each $u_i$ is linear in the sectors determined by these rays, and on $\partial B_1$, the membranes behave as in the 1 dimensional case at the points on $\Gamma_i \cap \partial B_1$. More precisely, if at such a point all three membranes coincide, i.e. $U=0$, then on one side they all coincide and on the other they split with multiplicities 1 and 2, and slopes $\sqrt {\frac 23}$ and $\sqrt{ \frac 16} $ respectively. If at such a point only two membranes coincide, then on one side the two membranes coincide, while on the other they split and the slopes have a jump of size $\pm \frac{1}{\sqrt 2}$.
Let $S$ be a sector where $u_1>u_2$, which, after a rotation, corresponds to an interval $(0, \theta_0)$ in polar coordinates. Then $$ u_1 = f \cdot x \ge 0 \quad \quad \mbox{in $S$,} $$ for some non-zero vector $f$. Notice that $\theta_0 \le \pi$.
We distinguish 2 cases depending on whether or not $u_1$ vanishes on $\partial S$.
\
{\it Case 1:} $u_1 >0$ on $\partial S \setminus \{0\}$.
In this case we show that $f$ bisects the sector $S$, and $\theta_0> 2 \pi /3$. Moreover, $f$ also bisects the sector $\{u_2=u_3\} \cap S$.
\
From the 1 dimensional analysis, in the connected sector $\{\theta \in (0, \theta_1) \}\subset S$ where $u_1>u_2>u_3,$ we have $$ u_2 = (f - \sqrt 2 \, e_{\frac \pi 2}) \cdot x, \quad \quad u_3 = (\sqrt 2 \, e_{\frac \pi 2} - 2 f) \cdot x.$$ Notice that $\theta_1 < \theta_0$, since otherwise $\theta_0=\pi$ and we contradict that $u_1>0$ on $\partial S \setminus \{0\}$. The two linear functions corresponding to $u_2$ and $u_3$ in $[0, \theta_1]$ intersect on the line $$ (3f - 2 \sqrt 2 \, e_{\frac \pi 2}) \cdot x =0,$$ hence the ray $\{\theta=\theta_1\} \subset \Gamma_2$ is obtained by intersecting this line with $S$.
The Euler-Lagrange equation on $\Gamma_2$ implies that \begin{equation}\label{4004}
|3f-2 \sqrt 2 \, e_ {\frac \pi 2}|= \sqrt 2,
\end{equation} i.e. $ \frac{3}{\sqrt 2} f $ belongs to the unit circle centered at $2 e_{\frac \pi 2}$, which means that the angle between $f$ and $e_ {\frac \pi 2}$ is at most $\pi /6$.
There is one more unit direction besides $e_{\frac \pi 2}$ which satisfies the equality above, and it must be the inner normal $e_{\theta_0 - \frac \pi 2}$ to $S$ on the other ray of $\partial S$. Clearly the vector $f$ makes the same angle with these 2 inner directions to $S$, hence $f$ bisects $S$. Notice that the sector $S$ is uniquely determined by $f$.
We remark that the angle between $f$ and $e_{\frac \pi 2}$ cannot be $\pi/6$. Otherwise the ray $\theta=\theta_0$ has the direction of $f$, which means that the set $\{u_2=u_3\} \cap S$ consists of a single ray which bisects $S$, and then $U$ cannot be locally minimizing by the one-dimensional analysis.
\
{\it Case 2:} $u_1$ vanishes on the ray $\theta=0$.
In this case we show that $U$ coincides with a $\frac \pi 6$ rotation of $V_0$.
\
From the one-dimensional analysis, $$ u_1=\sqrt{\frac 23} \, \, (x \cdot e_{\frac \pi 2})^+, \quad u_2=u_3 = - \frac 12 u_1,$$ on a sector near the ray $\theta=0$. Let $\{\theta \in (0,\theta_1]\}$ be a connected sector in $S$ where $u_2=u_3$. We claim that $$\theta_1 < \theta_0.$$ Otherwise $u_2=u_3=- \frac 12 u_1$ throughout $S$, which means that $S$ is a half-space, i.e. $\theta_0= \pi$. Since $u_1$ is subharmonic and vanishes near the endpoints of the interval $[\pi,2\pi]$, we conclude that it vanishes in the whole interval, i.e. the formulas above hold in $\mathbb R^2$ and $U$ is one-dimensional, a contradiction.
We remark that $u_1>u_2>u_3$ in the remaining sector $\theta \in (\theta_1,\theta_0)$ of $S$. Then, in this sector we have $$ u_1=\sqrt{\frac 23} (x \cdot e_ {\frac \pi 2})^+, \quad u_2= -\frac 12 u_1 + \frac{1}{\sqrt 2} (x \cdot e)^+, \quad u_3= -\frac 12 u_1 - \frac{1}{\sqrt 2} (x \cdot e)^+, $$ with $e:=e_{\theta_1+ \frac \pi 2}$.
The Euler-Lagrange equation at $\theta=\theta_0$ gives
$$ \left|\frac 32 \sqrt{\frac 23} e_{\frac \pi 2} - \frac{1}{\sqrt 2} e \right|=\sqrt 2 $$ which means that $e = e_\pi $, i.e. $\theta_1=\pi/2$. Then the ray $\theta=\theta_0$ is given by the direction of $$ \frac{\sqrt 3}{2} e+ \frac 12 e_{\frac \pi 2},$$ along which $u_1$ and $u_2$ intersect, i.e. $\theta_0=5 \pi /6$.
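Indeed, since $\frac 32 \sqrt{\frac 23}= \sqrt{\frac 32}$, squaring the Euler-Lagrange condition gives
$$ \left|\sqrt{\frac 32}\, e_{\frac \pi 2} - \frac{1}{\sqrt 2}\, e\right|^2 = \frac 32 + \frac 12 - \sqrt 3 \, e_{\frac \pi 2} \cdot e = 2,$$
hence $e \perp e_{\frac \pi 2}$, and since $e=e_{\theta_1 + \frac \pi 2}$ with $\theta_1 \in (0, \theta_0)$ we must have $e=e_\pi$.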
The direction $\theta=\theta_0$ is also that of the gradient of $-u_3$ in the connected sector $S':=(\theta_1,\theta_2)$ where $u_2>u_3$. We claim that $U$ vanishes on $\theta = \theta_2.$ Otherwise we are in case 1) for $-u_3$ in the sector $S'$, and deduce that the angle of $S'$ is $2(\theta_0-\theta_1)=2 \pi /3$, which is a contradiction.
Thus we are in case 2) for $-u_3$ and $S'$, which means that $\theta_2=\theta_0+ \pi/2$. In the remaining sector $\theta \in [\theta_2,2\pi]$, $u_1$ (and therefore $U$) must vanish since it is $0$ on the boundary and it is subharmonic in the interior. In conclusion $U$ is a $\pi /6$ rotation of $V_0$.
\
If $u_1$ and $u_3$ do not vanish on any ray, then we are in Case 1) for each sector of $U$, and then the connected sectors of $\{u_1>u_2\}$ and $\{u_1=u_2\}$ occur in a periodic pattern along the circle. Since $\theta_0 \in (\frac 23 \pi, \pi)$, we can only have four such sectors. Then the angle between the bisectors of two such consecutive sectors, $f$ for $\{u_1>u_2\}$ and $f - \frac{1}{\sqrt 2} e_ {\frac \pi 2}$ for $\{u_1=u_2\}$, must be $- \pi /2$. This together with \eqref{4004} determines $f$ uniquely as $$ f= \frac{\sqrt 2}{5} (e_0 + 2 e_{\frac \pi 2}),$$ and then it follows that $U$ is a rotation of $V_s$. \end{proof}
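One can check directly that $f= \frac{\sqrt 2}{5} (e_0 + 2 e_{\frac \pi 2})$ is compatible with both constraints:
$$ |3f - 2\sqrt 2 \, e_{\frac \pi 2}|^2 = \left|\frac{3 \sqrt 2}{5}\, e_0 - \frac{4 \sqrt 2}{5}\, e_{\frac \pi 2}\right|^2= \frac{18}{25}+\frac{32}{25}=2,$$
so \eqref{4004} holds, and
$$ f \cdot \left(f - \frac{1}{\sqrt 2}\, e_{\frac \pi 2}\right) = |f|^2 - \frac{1}{\sqrt 2}\, f \cdot e_{\frac \pi 2} = \frac 25 - \frac 25 =0,$$
so the two consecutive bisectors are indeed perpendicular.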
Next we show that we can lower the energy of $V_s$ by using a compact perturbation near the origin. Our proof does not use the precise form of $V_s$ but only that it has an axis of symmetry.
\begin{prop} $V_s$ is not a minimizer of $J_N$ in $B_1$. \end{prop}
\begin{proof} Assume by contradiction that $V_s$ is a minimizer of $J_N$ in $B_1$.
We take the domain deformation
$$x \mapsto y:=\Psi(x)=x + \epsilon \varphi(|x|) e_1,$$
with $\varphi(|x|)$ a radial function with compact support in $B_1$, and with $\|\epsilon D \Psi \|_{L^\infty} <1$. Then, using the Lipschitz continuity of $V_s$, we find as in the proof of Lemma \ref{FV} that $$ U(y):=V_s(x(y)),$$ satisfies
$$ J_N(U,B_1) = J_N(V_s,B_1) + \int_{B_1} L(\epsilon D \Psi, V_s) + O(\epsilon^2 |D \Psi|^2) dx,$$ with $L$ linear in the first argument. Since $V_s$ is minimizing, it follows that $$\int_{B_1} L(\epsilon D \Psi, V_s) dx =0,$$ hence
$$J_N(U,B_1) - J_N(V_s,B_1) \le C \epsilon^2 \int_{B_1} |\nabla \varphi|^2 dx =o(\epsilon^2)$$ provided that we choose the logarithmic cutoff $$\varphi(r):= \frac{\max\{\log r, \log (2\epsilon)\}}{\log (2\epsilon)}.$$
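With this choice of $\varphi$, in dimension $n=2$,
$$ \int_{B_1} |\nabla \varphi|^2 dx = \frac{1}{(\log (2\epsilon))^2} \int_{2 \epsilon}^1 \frac{2 \pi r}{r^2}\, dr= \frac{2 \pi}{|\log (2\epsilon)|},$$
so indeed $\epsilon^2 \int_{B_1} |\nabla \varphi|^2 dx = O\left(\epsilon^2 / |\log \epsilon|\right)= o(\epsilon^2)$.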
Let $$U_1 := U(-|x_1|,x_2), \quad \quad U_2:=U(|x_1|, x_2)$$ and notice that $U_1$, $U_2$ have the same boundary data as $U$, hence $$J_N(V_s,B_1) \le J_N(U_1,B_1), \quad J_N(V_s,B_1) \le J_N(U_2,B_1) $$ $$ J_N(U_1,B_1) + J_N(U_2, B_1) = 2 J_N(U,B_1) \le 2 J_N(V_s , B_1) + o(\epsilon^2),$$ which gives $$J_N(U_1,B_1) \le J_N (V_s,B_1) + o(\epsilon^2).$$ Notice that in $B_{2 \epsilon}$, $U$ is a translation of $V_s$ by the vector $\epsilon e_1$, and $U_1$ and $U_2$ are obtained by reflecting this translation with respect to the line $x_1=0$.
We claim that the minimizer $\bar U_1$ with the boundary data of $U_1$ in $B_\epsilon$ decreases the energy $J_N(U_1,B_\epsilon)$ by $c \epsilon^2$, $c>0$. By scaling with a factor $\epsilon^{-1}$, it is equivalent to prove the claim for $\epsilon=1$. This means that we first translate $V_s$ by $e_1$ and take its values in $B_1 \cap \{ x_1<0\}$, and then reflect them evenly with respect to $\{x_1=0\}$. We need to show that the resulting function does not minimize $J_N$ in $B_1$. Indeed, it suffices to look at this function on the $x_1$ axis. On this line we have $u_1=u_2 >0$, $u_3<0$, and at the origin $u_3$ is not smooth; therefore the function is not a minimizer.
\end{proof}
Next we prove that $V_0$ is a minimizing cone in $\mathbb R^2$. \begin{prop}\label{PV0} $V_0$ is a minimizer of $J_N$. \end{prop}
First we state a result about the continuity of $U$ near the boundary.
\begin{lem} Assume that $\Omega$ is a Lipschitz domain and the boundary data of $U$ is Lipschitz. Then $U \in C^\alpha(\overline \Omega)$, for some $\alpha>0$. \end{lem}
The proof is standard and follows as in the interior case. We omit the details.
\begin{lem}\label{L7.6} Assume that $0 \in \partial \Omega$ and $\Omega$ is a conical domain near the origin. If the boundary data of $U$ is homogeneous of degree one near $0$, then the rescalings $U_r(x)=U(rx)/r$ of $U$ converge on subsequences as $ r \to 0$ to a global minimizing cone (defined in the conical domain). \end{lem}
This follows from the Weiss monotonicity formula which still applies in this setting. We only use Lemma \ref{L7.6} when $\Omega = B_1^+ \subset \mathbb R^2$.
\begin{lem}\label{LBd0} Assume that $N=3$, $\Omega=B_1^+=B_1 \cap \{ x_n >0\}$, $\sum u_i =0$, and $U$ vanishes on $x_n=0$. If
$$ \max\{u_1, |u_3|\} \le \frac{1}{ \sqrt 6} \, x_n,$$ then $U \equiv 0$ in a neighborhood of the origin. \end{lem}
\begin{proof}
By Lemma \ref{L7.6} and non-degeneracy, it suffices to prove the statement when $U$ is a cone. Since $u_1 \ge 0$ is subharmonic, one-homogeneous, and vanishes on $x_n=0$, it must be a linear function in the $x_n$ variable. The same is true for $u_3$ and therefore also for $u_2$. By dimension reduction, it suffices to show the statement in dimension $n=1$. The growth hypothesis implies that the slopes of $u_1$, $u_2$ and $u_3$ at the origin are strictly less than $1 / \sqrt 6$, and they must vanish by Lemma \ref{L1da}. \end{proof}
\begin{proof}[Proof of Proposition $\ref{PV0}$]
We prove that $V_0$ is a minimizing cone by constructing a minimizer $U: \Omega \to \mathbb R^3$ for which $\Gamma_1 \cap \Gamma_2 \cap \Omega \ne \emptyset$. By the results above, $V_0$ is the only possible blow-up profile at an intersection point, and it is therefore minimizing.
Let $\Omega:= [-M, M] \times [-1, 1]$, for some $M$ large to be specified later, and let $U$ be a minimizer of $J_N$ with the boundary data given by $$ \varphi(x_1) U_{0,1}(x_2) + (1-\varphi(x_1)) U_{0,2}(x_2),$$ where $U_{0,1}$, $U_{0,2}$ are the one-dimensional solutions in the case $N=3$, and $\varphi(x_1)$ is a smooth non-decreasing function which is equal to 0 when $x_1\le -1$ and equal to $1$ when $x_1 \ge 1$.
Thus $U=0$ on $\partial \Omega \cap \{x_2\le 0\}$, and $U \ne 0$ on the remaining part of the boundary. Moreover, since $u_1$, $-u_3$ are subharmonic we have
$$ \max\{u_1, - u_3\} \le \frac{1}{\sqrt 6} (1+x_2),$$
and by Lemma \ref{LBd0} we find that the boundary points
$(-M,M) \times \{-1\} $ are interior to the coincidence set $\{|U|=0\}$. We have $$ J_N(U, \Omega) \le(M-1) \left(J_N(U_{0,1}, [-1,1]) + J_N(U_{0,2}, [-1,1])\right) + C,$$ and since $U_{0,2}$ is minimizing in one dimension we find $$ J_N(U, \Omega \cap \{x_1 < -1\}) \ge (M-1)J_N(U_{0,2}, [-1,1]),$$ thus $$ J_N(U, \Omega \cap \{x_1 > 1\}) \le (M-1)J_N(U_{0,1}, [-1,1]) + C.$$
On the other hand for each $y \in (1,M)$ for which $$\|U(y, \cdot) - U_{0,1}(\cdot)\|_{L^\infty[-1,1]} \ge \delta,$$ we have $$ J_N(U, \mathcal D) \ge J_N(U_{0,2}(x_2), \mathcal D) + \sigma(\delta), \quad \mathcal D:=[y-\delta,y+ \delta] \times [-1,1], $$ for some $\sigma(\delta)>0$, hence if $M= M(\delta)$ is large enough we can find a $y \in (M/4,M/2)$ such that
$$ |U - U_{0,1}(x_2)| \le \delta \quad \mbox{in} \quad \mathcal R:=[y-1,y+1] \times [-1,1],$$
which, by Proposition \ref{GK}, gives that the coincidence set $\{|U|=0\}$ has $\Gamma_1$ as its boundary in $\mathcal R$, while $\Gamma_2 \cap \mathcal R= \emptyset$. The same conclusion holds for some $y \in (-M/2,-M/4)$ with the roles of $\Gamma_1$ and $\Gamma_2$ interchanged.
Now we can conclude that $\{|U|=0\}$ has a boundary point in $\Gamma_1 \cap \Gamma_2 \cap \Omega$. Indeed, otherwise $\partial \{ |U|=0\} \cap \Omega$ is a smooth curve which locally lies in either $\Gamma_1$ or $\Gamma_2$ and which does not intersect the lines $x_2=1$ or $x_2=-1$ in the region $|x_1| \le M/2$. This is topologically impossible, and we reach a contradiction.
Finally we show that $V_0$ is the unique minimizer with its own boundary data. If $U$ is another minimizer with the same energy and boundary data in $B_1$, then $U$, extended by $V_0$ outside $B_1$, is a minimizer in any compact subset of $\mathbb R^n$. As above, we find that there exists $x_0 \in \Gamma_1 \cap \Gamma_2$, and since the tangent cone at $x_0$ has the same energy as the cone at infinity, we find that $U$ is a cone with vertex at $x_0$, i.e. $U = V_0$. \end{proof}
\section{Regularity of cones near $V_0$}\label{Sec8}
In this section we assume that we are in $n=2$ dimensions. We establish the uniqueness of the blow-up cone at a point in $\Gamma_1 \cap \Gamma_2$ and obtain the regularity of the two free boundaries near such a point. \begin{thm}\label{T2d} Assume $N=3$ and $U$ minimizes $J_N$ in $B_1$. If $V_0$ is the blow-up profile of $U$ at $0$, then $\Gamma_1$ and $\Gamma_2$ are piecewise $C^{1,\alpha}$ curves. \end{thm}
Since $V_0$ is the only possible blow-up profile for $\Gamma_1 \cap \Gamma_2$ in two dimensions, we know that $U$ is well-approximated by rotations of $V_0$ at all scales. Precisely, we may assume that for each $r \in (0,1]$, there exists a rotation matrix $O_r$ such that \begin{equation}\label{6009}
\|U - V_0(O_r x)\|_{L^\infty(B_r)} \le \epsilon r, \end{equation} for some $\epsilon$ sufficiently small. Our goal is to show that the value of $\epsilon$ improves in a $C^\alpha$ fashion with respect to $r$.
Assume that $O_1=I,$ and set $$w_1:= \frac 1 \epsilon \left(u_1- \sqrt {\frac 23}\, \, x \cdot e_{-\frac \pi 6}\right), \quad \mbox{defined in $\{u_1>u_2\}$,} $$ $$w_3:= - \frac 1 \epsilon \left(\sqrt {\frac 23} \, \, x \cdot e_{\frac \pi 6} +u_3 \right), \quad \mbox{defined in $\{u_2>u_3\}$.} $$ We claim that, as $\epsilon \to 0$, the graphs of the functions $(w_1,w_3)$ converge uniformly on compact sets of $$(B_1\setminus \{0\}) \times \mathbb R \subset \mathbb R^{2+1}$$ to the graphs of a limiting pair $(\bar w_1, \bar w_3)$ with $\bar w_1$ defined in the sector $\mathcal S_1$ given by $$\mathcal S_1:=\left\{ (r,\theta) \in (0,1) \times (-\frac {2 \pi}{3}, \frac \pi 6)\right\},$$ \begin{figure}
\caption{The sector $\mathcal S_1$}
\label{fig5}
\end{figure} and $\bar w_1$ satisfies \begin{align} \label{a1}\triangle \bar w_1 & =0 \quad \quad \quad \mbox{in $\mathcal S_1$,}\\ \label{a2}\partial _\nu \bar w_1 & =0 \quad \quad \quad \mbox{on} \quad \partial \mathcal S_1 \cap \left\{ \theta=-\frac{2 \pi}{3}\right\},\\ \label{a3}\partial _\nu \bar w_1& =\frac 12 \partial_\nu \bar w_3 \quad \mbox{on} \quad \partial \mathcal S_1 \cap \left\{ \theta=\frac{\pi}{6}\right\}, \end{align} while $\bar w_3$ is defined in $\mathcal S_3$ with $$\mathcal S_3:=\left\{ (r,\theta) \in (0,1) \times (- \frac \pi 6,\frac {2 \pi}{3})\right\},$$ and \begin{align} \label{a3.5}\triangle \bar w_3&=0 \quad \quad \quad \mbox{in $\mathcal S_3$,}\\ \partial _\nu \bar w_3 &=0 \quad \quad \quad \mbox{on} \quad \partial \mathcal S_3 \cap \left\{ \theta=\frac{2 \pi}{3}\right\},\\ \label{a4}\partial _\nu \bar w_3&=\frac 12 \partial_\nu \bar w_1 \quad \mbox{on} \quad \partial \mathcal S_3 \cap \left\{\theta=-\frac{\pi}{6}\right\}. \end{align}
Here the functions $\bar w_1$, $\bar w_3$ are bounded between $-1$ and $1$, and their first derivatives are continuous up to the boundary at the points belonging to the lateral sides of the sector. The equations imply that they are in fact smooth up to the boundary at these points.
Moreover, $\bar w_1$, $\bar w_3$ are continuous at $0$, \begin{equation}\label{6010} \bar w_1(0)=\bar w_3(0)=0, \end{equation} and the convergence of $W=(w_1,w_3)$ to $\bar W=(\bar w_1, \bar w_3)$ is uniform also in a neighborhood of $0$.
Indeed, by the results of Section \ref{Sec6}, $u_1$ solves a Bernoulli one-phase problem near the line $\theta =-\frac 23 \pi$ which is an $\epsilon$-perturbation of the one-dimensional solution of slope $ \sqrt{2/3}$. Now we can apply the results for the scalar one-phase problem in [D], and find that $w_1$ converges on subsequences as $\epsilon \to 0$ to a limiting function $\bar w_1$ which satisfies \eqref{a1}-\eqref{a2}. On the other hand $$u_1-u_2=2 u_1 + u_3 = \sqrt{\frac 23} \, (2 e_{-\frac \pi 6} - e_{\frac \pi 6}) \cdot x+ \epsilon(2w_1-w_3)$$ solves a Bernoulli one-phase problem near the line $\theta =\frac \pi 6 $ which is an $\epsilon$-perturbation of a one-dimensional solution. The same argument as above gives \eqref{a3}. Similarly we obtain the equations \eqref{a3.5}-\eqref{a4} for $\bar w_3$.
Finally, from \eqref{6009} we have $|O_r-O_{r/2}| \le C \epsilon$; summing these differences over the dyadic scales between $r$ and $1$ gives $|O_r -I| \le C \epsilon |\log r|$, which means
$$\|U - V_0(x)\|_{L^\infty(B_r)} \le C \epsilon r |\log r|.$$
This implies that $$|w_i| \le C |x| \, |\log |x|| \quad \mbox{if} \quad |x| \in [\epsilon^2 ,1],$$ hence $\bar w_i$ has a $C r |\log r|$ modulus of continuity at $0$ and satisfies \eqref{6010}.
Next we study the linear system \eqref{a1}-\eqref{a4} and notice that it appears as the Euler-Lagrange system for the quadratic functional \begin{align*}
Q(w_1,w_3): = \frac 32 \int_{\mathcal S_1 \setminus \mathcal S_3} & |\nabla w_1|^2 dx + \frac 32 \int_{\mathcal S_3 \setminus \mathcal S_1}|\nabla w_3|^2 dx \\
&+ \int_{\mathcal S_1 \cap \mathcal S_3} |\nabla w_1|^2 + |\nabla w_3|^2 + |\nabla ( w_1- w_3)|^2 dx, \end{align*} acting on pairs $$(w_1,w_3) \in H^1(\mathcal S_1) \times H^1(\mathcal S_3).$$ For simplicity of notation we denote $$ W=(w_1,w_3), \quad \nabla W=(\nabla w_1,\nabla w_3),$$
$$ L^2(\mathcal S):=\{ W| \quad w_i \in L^2(\mathcal S_i) \}, \quad H^1(\mathcal S):=\{ W| \quad w_i \in H^1(\mathcal S_i) \},$$ and define the inner product on $L^2 (\mathcal S)$ as \begin{align*} \langle W,V \rangle: = \frac 3 2 \int_{\mathcal S_1 \setminus \mathcal S_3} & w_1 v_1 dx + \frac 3 2 \int_{\mathcal S_3 \setminus \mathcal S_1} w_3 v_3 dx \\ &+ \int_{\mathcal S_1 \cap \mathcal S_3} w_1 v_1 + w_3 v_3 + ( w_1- w_3)(v_1-v_3) dx. \end{align*} The norm induced by the inner product is equivalent to the standard $L^2$ norm on $\mathcal S$. We also define $\langle \nabla W, \nabla V \rangle$ as above, by replacing the terms $w_i v_j$ by $\nabla w_i \cdot \nabla v_j$. With this notation $$ Q(W) = \langle \nabla W, \nabla W \rangle.$$
We consider minimizers of $Q$ which have fixed boundary data on $\partial B_1$ or, in other words, we consider {\it harmonic maps} induced by $\langle \cdot, \cdot \rangle$. We establish the $C^{1,\alpha}$ regularity of minimizers of $Q$ near the origin.
\begin{prop}\label{C1a} Assume that $W \in H^1(\mathcal S)$ is a minimizer of $Q$ among competitors with the same boundary data on $\mathcal S \cap \partial B_1$. Then $W \in C^{1,\alpha_0}(\mathcal S)$ and
$$ W(x)= W(0)+ q (e_{\frac \pi 3} \cdot x, - e_{-\frac \pi 3} \cdot x ) + O(|x|^{1+\alpha_0}),$$ for some explicit $\alpha_0 \in (0,1)$. \end{prop}
The precise formula of $\alpha_0$ is given by $$ \cos (\frac \pi 3 (1+\alpha_0))= \frac {\sqrt {17}-3}{8},$$ and $\alpha_0 \simeq 0.36$.
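As a sanity check (not part of the proof), the value of $\alpha_0$ can be recovered numerically from the displayed equation alone; the following minimal sketch assumes nothing beyond that equation.

```python
import math

# Solve cos(pi/3 * (1 + alpha0)) = (sqrt(17) - 3)/8 for alpha0 in (0, 1)
rhs = (math.sqrt(17) - 3) / 8
alpha0 = (3 / math.pi) * math.acos(rhs) - 1

print(alpha0)  # approximately 0.365, consistent with alpha0 ~ 0.36
```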
First we show that a classical solution to the system \eqref{a1}-\eqref{a4} which remains bounded near the origin minimizes the energy $Q$.
\begin{lem}\label{L5.3}Assume that $\bar W$ is a bounded solution of \eqref{a1}-\eqref{a4}. Then it minimizes the energy $Q$ with respect to perturbations in $H_0^1(\mathcal S)$. \end{lem}
\begin{proof} Indeed, we notice that if $V \in C^1$ vanishes near $\partial B_1$ and the origin $0$, then \begin{align*}
\langle \nabla \bar W, \nabla V \rangle & = \langle \triangle \bar W, V \rangle \\ & + \int_{\mathcal S_3 \cap \partial \mathcal S_1 } - \frac 32 v_3 (\bar w_3)_\nu + v_1 (\bar w_1)_\nu + v_3 (\bar w_3)_\nu +(v_1-v_3) (\bar w_1-\bar w_3)_\nu \, \, d \sigma \\ & + \int_{\mathcal S_1 \cap \partial \mathcal S_3 } - \frac 32 v_1 (\bar w_1)_\nu + v_3 (\bar w_3)_\nu + v_1 (\bar w_1)_\nu +(v_3-v_1) (\bar w_3- \bar w_1)_\nu \, \, d \sigma \\ &=0. \end{align*}
Let $W_0$ be the minimizer of $Q$ in $\mathcal S \cap B_r$ with the same boundary data as $\bar W$ on $\mathcal S \cap \partial B_r$ for some $ r \in (0,1)$. We show that $\bar W$ and $W_0$ coincide in $B_r$.
The computation above gives $$\langle \nabla (\bar W-W_0), \nabla V \rangle =0,$$ for any $V \in H^1(\mathcal S)$ which vanishes on $\partial B_1$ and in a neighborhood of $0$.
We choose $V= \psi ^2 (\bar W - W_0),$ with $\psi$ a radial cutoff function which is 1 outside $B_\delta$ and vanishes near $0$, and obtain the Caccioppoli inequality \begin{equation}\label{CCI}
\langle \psi \nabla (\bar W-W_0), \psi \nabla (\bar W-W_0) \rangle \le C \langle |\nabla \psi|(\bar W-W_0),|\nabla \psi|(\bar W-W_0) \rangle.
\end{equation}
Since $\bar W-W_0$ is bounded near the origin and $\|\nabla \psi\|_{L^2}$ can be made arbitrarily small, we obtain $$ \langle \nabla (\bar W-W_0), \nabla (\bar W-W_0)\rangle =0,$$ as $\delta \to 0$, which gives the desired conclusion.
\end{proof}
Theorem \ref{T2d} follows from the estimate of Proposition \ref{C1a} which applies to $\bar W$.
\begin{proof}[Proof of Theorem $\ref{T2d}$] Since $\bar W(0)=0$, for all $\rho \le 1/2$ we have
$$\|\bar W - q (e_{\frac \pi 3} \cdot x, - e_{-\frac \pi 3} \cdot x ) \|_{L^\infty(B_\rho)} \le C \rho ^{1+\alpha_0},$$
with $|q| \le C$, and $C$ universal.
The uniform convergence of the $W$'s to $\bar W$ and the inequality above imply
$$\|U - V_0 (O x)\|_{L^\infty(B_\rho)} \le C \rho^{1+\alpha_0} \epsilon \le \epsilon \rho^{\alpha},$$ with $O$ the rotation of angle $\epsilon \sqrt{ \frac 32} \, q$, provided that $\alpha < \alpha_0$ and $\rho$ is chosen sufficiently small depending on $\alpha$. This means that $U$ is approximated in a $C^{1,\alpha}$ fashion by rotations of $V_0$, and the desired conclusion follows by iterating this result. \end{proof}
It remains to prove Proposition \ref{C1a}.
\begin{proof}[Proof of Proposition $\ref{C1a}$]
We solve the Dirichlet problem for minimizers of $Q$ by the method of Fourier series.
We investigate the eigenvalues and eigenfunctions of the corresponding $Q$ operator on the unit circle $\partial B_1$. Precisely, let
$$\mathcal S_i':=\mathcal S_i \cap \partial B_1,$$
and notice that $$\mathcal S_1'=(-b, a), \quad \mathcal S_3'=(-a, b),$$
where $a:=\frac {\pi}{6}$ and $b:=\frac{2\pi}{3}$.
We define the corresponding spaces $L^2(\mathcal S')$, $H^1(\mathcal S')$, and the inner product
$ \langle W,V \rangle $ as above. The eigenfunctions $\Phi_k$ and eigenvalues $\lambda_k$ are defined inductively through the
Rayleigh quotient formula
$$ \lambda_{k}:=\min_{W \in (span\{\Phi_1,..,\Phi_{k-1}\})^\perp} \frac{\langle \nabla W,\nabla W \rangle}{\langle W,W \rangle},$$
and $\Phi_k$ is an element $W$ of $H^1(\mathcal S')$ of unit norm in $L^2(\mathcal S')$ at which the minimum is realized.
Then $\{\Phi_k\}_{k=1}^\infty$ is an orthonormal basis of $L^2(\mathcal S')$.
The minimizer of the $Q$ functional with boundary data
$\Phi_k(\theta)$ on
$\partial B_1$ is
$$ r^{ \sqrt { \lambda_k }} \Phi_k(\theta).$$
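One can verify directly that these separated solutions are harmonic away from the interfaces: if $\Phi''+\lambda\Phi=0$, then $r^{\sqrt{\lambda}}\Phi(\theta)$ solves Laplace's equation in polar coordinates. The finite-difference check below is purely illustrative, with an arbitrarily chosen value of $\lambda$.

```python
import math

lam = 1.87  # arbitrary parameter playing the role of an eigenvalue; any lam > 0 works
s = math.sqrt(lam)

def w(x, y):
    # separated solution r^s * Phi(theta) with Phi(theta) = cos(s*theta),
    # which solves Phi'' + lam * Phi = 0
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    return r**s * math.cos(s * theta)

def laplacian(f, x, y, h=1e-4):
    # standard 5-point finite-difference approximation of the Laplacian
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

print(abs(laplacian(w, 0.5, 0.3)))  # numerically close to zero
```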
We write the Euler-Lagrange equations for an eigenfunction $\Phi_k=W$ of eigenvalue $\lambda_k=\lambda$ and obtain
\begin{equation}\label{6000}
w_1 '' + \lambda w_1 = 0 \quad \mbox{in} \quad (-b,-a) \cup(-a,a),
\end{equation}
$$ w_1'(-b)_+=0, $$
\begin{equation}\label{6001}
\frac 32 w_1'(-a)_- + w_3'(-a)_+ - 2w_1'(-a)_+ =0, \quad w_1'(-a)_+ - 2 w_3'(-a)_+ =0,
\end{equation} together with similar equations for $w_3$. Here $w'_1(-a)_{\pm}$ denote the left and right limits of $w_1'$ at $-a$. From \eqref{6001} we find $$w_1'(-a)_-=w_1'(-a)_+=2w_3'(-a)_+,$$ which means that equation \eqref{6000} holds in the full interval $(-b,a)$. In conclusion $$w_1(\theta) = \mu_1 \cos (\sqrt \lambda (\theta + b)), \quad w_3(\theta)= \mu_3 \cos (\sqrt \lambda (\theta-b) ),$$ and $$ w_1'(-a)= 2 w_3'(-a), \quad w_3'(a)= 2 w_1'(a).$$ This implies that $$ \sin (\sqrt \lambda (b-a)) = \pm 2 \sin(\sqrt \lambda(b+a)) $$ or \begin{equation}\label{6011}
|\sin (3 t)| = 2 |\sin (5t)|, \quad t:= \frac{\sqrt{\lambda}}{6} \pi.
\end{equation} The solutions of this equation are periodic in $t$ with period $\pi$. In the interval $[0,\pi)$ we have 9 solutions, two in each interval of length $\pi /5$. The first one is $t_0=0$, then $$t_1=\frac{\pi}{6}, \quad t_2 \in (\frac \pi 5,\frac \pi 3), \quad t_3 \in (\frac \pi 3,\frac {2 \pi}{ 5}) \quad \quad \mbox{ etc.} $$ The solutions can be computed explicitly since, after dividing by $\sin t$ in \eqref{6011}, we end up with two quadratic equations in $\beta:= 4 \cos (2t)$, $$ \beta^2 + 2 \beta - 4 = \pm (\beta +2).$$ The corresponding eigenvalues and eigenfunctions are \begin{align*} \lambda_1 &=0, \quad \quad \quad W= (1,0), \\ \lambda_2 &=0, \quad \quad \quad W=(0,1), \\ \lambda_3 & =1, \quad \quad \quad W=\left(\cos (\theta + b), - \cos (\theta - b)\right),\\ \sqrt{\lambda_4} & \in (\frac 65,2), \quad W=\left( \cos\left( \sqrt{\lambda_4} (\theta + b)\right), \cos \left(\sqrt {\lambda_4}(\theta - b) \right)\right), \end{align*} and $\sqrt{\lambda_5} \in (2, \frac{12}{5})$ etc. The eigenvalues corresponding to $t / \pi \in \mathbb Z$ have multiplicity 2, while the others are simple; notice that $\sqrt {\lambda_k} \sim \frac 35 k $ for $k$ large.
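The reduction above admits a quick numerical check (illustrative only): $t_1=\pi/6$ solves \eqref{6011} exactly, and the root $\beta=(\sqrt{17}-3)/2$ of the quadratic $\beta^2+3\beta-2=0$ (the `$-$' case) produces a solution $t_2$ with $6t_2/\pi \in (\frac 65, 2)$, i.e. the value of $\sqrt{\lambda_4}$.

```python
import math

def F(t):
    # residual of the eigenvalue equation |sin(3t)| = 2|sin(5t)|
    return abs(math.sin(3 * t)) - 2 * abs(math.sin(5 * t))

# t1 = pi/6 is an exact solution (it gives sqrt(lambda_3) = 1):
print(F(math.pi / 6))  # zero up to rounding

# the '-' quadratic beta^2 + 3*beta - 2 = 0, with beta = 4*cos(2t):
beta = (math.sqrt(17) - 3) / 2
t2 = 0.5 * math.acos(beta / 4)
sqrt_lam4 = 6 * t2 / math.pi
print(sqrt_lam4)  # approximately 1.365, i.e. 1 + alpha0
```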
If $\Phi(\theta)$ is the boundary data of $W$, then we decompose it in $L^2(\mathcal S')$ as $$ \Phi = \sum \sigma_k \Phi_k$$ with $\sum \sigma_k^2 < \infty$, and write the solution $W$ in $\mathcal S$ in polar coordinates as the series $$W= \sum \sigma_k r^{\sqrt {\lambda_k}} \Phi_k(\theta),$$ which converges uniformly in compact intervals of $ r \in [0,1)$.
This gives the desired conclusion with $$1+\alpha_0=\sqrt {\lambda_4},$$ and $$ 4 \cos (\frac \pi 3 \sqrt {\lambda_4}) = \frac{\sqrt {17} - 3}{2}.$$ We remark that the condition that $\Phi$ is the trace of a function in $H^1(\mathcal S)$ is equivalent to $$Q(W)= \langle \nabla W,\nabla W \rangle = \sum \sqrt{\lambda_k} \, \, \sigma_k^2 < \infty.$$ \end{proof}
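For the reader's convenience, the identity $Q(W)=\sum \sqrt{\lambda_k}\,\sigma_k^2$ used at the end of the proof above can be seen mode by mode: since $\langle \Phi_k,\Phi_k\rangle_{\mathcal S'}=1$ and $\langle \Phi_k',\Phi_k'\rangle_{\mathcal S'}=\lambda_k$, for a single mode $w=\sigma_k r^{s}\Phi_k(\theta)$ with $s=\sqrt{\lambda_k}$ we have
$$ Q(w)=\sigma_k^2\int_0^1\left(s^2+\lambda_k\right)r^{2s-2}\, r\,dr=\sigma_k^2\,\frac{s^2+\lambda_k}{2s}=\sqrt{\lambda_k}\,\sigma_k^2,$$
while the cross terms between different modes vanish by the orthogonality of the eigenfunctions.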
\section{Regularity for $\Gamma_1 \cap \Gamma_2$ in higher dimensions}\label{Sec9}
In this section we prove a version of Theorem \ref{T2d} in arbitrary dimensions. First we recall the following definitions.
\begin{defn} Let $U$ be a minimizer of $J_N$ with $N=3$. We say that $$x_0 \in Reg(\Gamma_1 \cap \Gamma_2)$$ if there exists a blow-up profile at $x_0$ which is a rotation of a two-dimensional cone $V_0$ extended trivially in the remaining variables.
We say that $$x_0 \in Reg (\Gamma_k)$$ if $x_0\in \Gamma_k$ and there exists a blow-up cone at $x_0$ which is one-dimensional. \end{defn}
For convenience we restate Theorems \ref{TndI} and \ref{TPRI} from the Introduction which will be proved in this section. \begin{thm}\label{Tnd} $Reg(\Gamma_1 \cap \Gamma_2)$ is locally a $C^{1,\alpha}$-smooth manifold of codimension two. Near such an intersection point, each of the free boundaries $\Gamma_1$ and $\Gamma_2$ consists of two piecewise $C^{1,\alpha}$ hypersurfaces which intersect on $Reg(\Gamma_1 \cap \Gamma_2)$. \end{thm}
As a consequence we obtain the following partial regularity result.
\begin{thm}[Partial regularity]\label{TPR} Let $N=3$ and $U$ be a minimizer of $J_N$ in $B_1$. Then
$$ \partial \{|U|>0\}=Reg(\Gamma_1) \cup Reg(\Gamma_2)\cup Reg (\Gamma_1 \cap \Gamma_2) \cup \Sigma',$$ with $\Sigma'$ a closed singular set of Hausdorff dimension $n-3$ and $$ \mathcal H^{n-1} (Reg(\Gamma_i)\cap B_{1/2}) \le C, \quad \quad \mathcal H^{n-2}(Reg(\Gamma_1 \cap \Gamma_2)\cap B_{1/2}) \le C.$$
\end{thm}
Theorems \ref{Tnd} and \ref{TPR} are deduced from the next proposition which will be proved later in the section. It states that a minimizer $U$ which is approximated in each ball $B_r$ with $r \in (0,1]$, by a rotation of $V_0$, must be a $C^{1,\alpha}$ deformation of $V_0$. \begin{prop}\label{C1alphand} Let $U$ be a minimizer in $B_1$. Assume that for each $r \in (0,1]$ there exists a pair of orthonormal vectors $\nu^1_r,\nu^2_r$ such that
$$ \|U-V_0(\nu_r^1 \cdot x, \nu_r^2 \cdot x) \|_{L^\infty (B_r)} \le \epsilon r,$$ for some $\epsilon \le \epsilon_0(n)$ small universal. Then there exists $\nu_0^1, \nu_0^2$ such that
$$ \|U-V_0(\nu_0^1 \cdot x, \nu_0^2 \cdot x) \|_{L^\infty (B_r)} \le C \epsilon r^{1+\alpha}.$$ \end{prop}
A direct consequence of this result is that the only minimal cones sufficiently close to $V_0$ are its rotations.
\begin{cor}\label{Cco}
Assume that $U$ is a minimal cone and $\|U-V_0\|_{L^\infty (B_1)} \le \epsilon_0$. Then $U$ is a rotation of $V_0$. \end{cor}
We show that the hypotheses of Proposition \ref{C1alphand} can be relaxed to require only that $U$ is approximated by $V_0$ in $B_1$ and that the energy of the blow-up cones at the origin does not go below the energy of $V_0$.
\begin{lem} \label{L6p5} Let $U$ be a minimizer in $B_1$ with $0 \in \partial \{|U|>0\}$. Assume that $$\|U-V_0\|_{L^\infty (B_1)} \le \epsilon, \quad \mbox{and} \quad \Phi_U(0+) \ge \Phi(V_0).$$ Then the only possible blow-up cones for $U$ at $0$ are rotations of $V_0$. \end{lem}
\begin{proof} First we notice that the hypotheses imply that $$\Phi_U(\frac 12) \le \Phi(V_0) + C \epsilon,$$ and then \begin{equation}\label{6020} 0 \le \Phi_U(r)-\Phi(V_0) \le C \epsilon, \quad \forall r \in (0, \frac 12). \end{equation} Assume by contradiction that the conclusion does not hold for a sequence $U_n$, and $\epsilon=\epsilon_n \to 0$. Then, by Proposition \ref{C1alphand}, we can find appropriate dilations $\tilde U_n:= r_n^{-1} U_n(r_n x)$ such that $$ dist (\tilde U_n, \mathcal V_0) = \epsilon_0,$$ where $\mathcal V_0$ represents the collection of cones obtained by rotations of $V_0$ and the distance between $\tilde U_n$ and the elements of $\mathcal V_0$ is measured in $L^\infty(B_1)$. As $n \to \infty$ we may extract a subsequence of the $\tilde U_n$'s converging in $L^\infty(B_1)$ to a limiting function $\bar U$. Then $\bar U$ must be a cone, since its energy $\Phi_{\bar U}$ is constant in the radial variable by \eqref{6020}. The distance from $\bar U$ to $\mathcal V_0$ is $\epsilon_0$, and we contradict Corollary \ref{Cco}.
\end{proof}
Next we use the dimension reduction argument to show that the set of free boundary points $\partial \{|U|>0\}$ whose tangent cones have energy strictly between that of the one-dimensional solutions $U_{0,k}$ and that of the two-dimensional solution $V_0$ has Hausdorff dimension $n-3$. \begin{lem}\label{L6p6}
The set $$ \mathcal A :=\{x \in \partial \{|U|>0\}\cap B_1| \quad \Phi_{U,x}(0+) \in (\Phi(U_{0,k}), \Phi(V_0)) \},$$ has Hausdorff dimension $n-3$. \end{lem}
Here $\Phi_{U,x}(r)$ denotes the Weiss energy of $U$ in a ball of radius $r$ centered at $x$, and $\Phi_{U,x}(0+)$ its limit as $r \to 0$. The continuity of $\Phi_{U,x}(r)$ with respect to $x$, for fixed $r$, shows that $$ \Phi_{U,x}(r) < \Phi(V_0),$$
for all $x \in \partial \{ |U|>0\}$ near a point in $\mathcal A$ and $r$ sufficiently small. Thus, it suffices to prove the following lemma. \begin{lem} Fix $\delta>0$, and assume that $U$ is a minimizer in $B_3$ and
$$ \Phi_{U,x}(r) \le \Phi(V_0) - \delta, \quad \quad \forall x \in \partial \{|U|>0\} \cap \overline B_{1}, \quad \forall r \le 1.$$ Then $$\mathcal H^{n-3+\delta} (\mathcal A \cap \overline B_1) =0.$$
\end{lem} We remark that $\mathcal A \cap \overline B_1$ is a closed set by the regularity result in Proposition \ref{GK}.
\begin{proof} The proof follows the standard dimension reduction argument of Federer \cite{F}. Notice that in dimension $n=3$ the set $\mathcal A \cap \overline B_{1}$ is discrete by Proposition \ref{GK}, and the statement is obvious.
We prove the statement by induction on the dimension $n \ge 3$, in two steps. We only sketch the main ideas and leave the details to the interested reader.
{\it Step 1:} If the result holds in dimension $n$, then it holds for any $(n+1)$-dimensional cone $U$ with $\Phi(U) \le \Phi(V_0) - \delta$.
{\it Step 2:} If the result holds for cones in dimension $n$, then it holds for any minimizer in dimension $n$.
For Step 1, assume $U$ is a cone in $\mathbb R^{n+1}$. We take its restriction to a ball $B_r(x_0)$ with $x_0 \in \mathcal A \cap \partial B_1$ and then normalize it to the unit ball after a translation and dilation. The resulting function is well approximated, uniformly as $r \to 0$, by a minimizer that is constant in the $x_0$-direction, for which the induction hypothesis holds. By compactness, this means that there exists $r_0(n)>0$ small such that if $r \le r_0$
then $\mathcal A \cap \partial B_1 \cap B_r(x_0)$ can be covered by a finite collection of balls of radii $r_i$ and centers on $\mathcal A \cap \partial B_1$ with \begin{equation}\label{n-3d} \sum r_i^{n-3 + \delta} \le \frac 12 r^{n-3+\delta}. \end{equation}
Step 1 follows by iterating this result a number of times.
Step 2 is a consequence of the fact that around each point in $\mathcal A$, $U$ is well-approximated by cones at all small scales, and the conclusion holds for these cones by Step 1. Precisely, by compactness, for each $x_0 \in \mathcal A \cap \overline B_1$ there exists $\delta(x_0)>0$ such that if $r \le \delta(x_0)$ the set $\mathcal A \cap B_r (x_0)$ can be covered by a finite collection of balls of radii $r_i$ that satisfy inequality \eqref{n-3d}.
Let $\mathcal A_k$ denote the set of points $x_0$ in $\mathcal A$ with the property that $\delta(x_0) \ge \frac 1 k$, and notice that $\mathcal A \subset \cup \mathcal A_k$. On the other hand $\mathcal H^{n-3+\delta} (\mathcal A_k \cap \overline B_1) =0$ since, as in Step 1, we can iterate \eqref{n-3d} for $\mathcal A_k$. Thus, the desired conclusion holds for $\mathcal A$ as well. \end{proof}
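The quantitative mechanism behind Steps 1 and 2 can be illustrated by a toy computation (a hypothetical refinement rule, not the actual construction in the proof): if every ball of radius $r$ in a cover can be refined into balls of radii $r_i$ satisfying \eqref{n-3d}, then iterating the refinement makes the $(n-3+\delta)$-dimensional pre-measure of the cover decay geometrically, which forces $\mathcal H^{n-3+\delta}=0$.

```python
# Toy illustration: each ball of radius r is replaced by 3 balls of radii
# r_i chosen so that sum(r_i**s) = 0.5 * r**s, mimicking inequality (n-3d).
s = 1.5  # plays the role of the exponent n - 3 + delta

def refine(radii):
    out = []
    for r in radii:
        ri = r * (0.5 / 3) ** (1 / s)  # 3 * ri**s = 0.5 * r**s
        out.extend([ri] * 3)
    return out

cover = [1.0]  # start with a single ball of radius 1
for _ in range(10):
    cover = refine(cover)

# the s-dimensional pre-measure of the cover halves at every step
print(sum(r ** s for r in cover))  # 0.5**10, roughly 1e-3
```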
In view of Lemma \ref{L6p5} and Lemma \ref{L6p6} we obtain a stronger version of Proposition \ref{C1alphand} in which the only hypothesis is that $U$ is approximated by $V_0$ in $B_1$. \begin{prop}\label{PLa}
Assume that $U$ is a minimizer in $B_1$, $0 \in \partial \{|U|>0\}$, and $$\|U-V_0(x_1,x_2)\|_{L^\infty(B_1)} \le \epsilon_0,$$ for some $\epsilon_0$ small universal. Then there exist $\nu_0^1, \nu_0^2$ such that
$$ \|U-V_0(\nu_0^1 \cdot x, \nu_0^2 \cdot x) \|_{L^\infty (B_r)} \le r^{1+\alpha}.$$ In particular, $\Gamma_1 \cap \Gamma_2 \cap B_{1/2}$ consists only of regular intersection points. \end{prop}
\begin{proof} For each $$x \in \mathcal D:=\{x_1=x_2=0\} \cap B_{1/2},$$ we look at the two-dimensional plane through $x$ generated by the first two coordinate directions. By topological considerations in this plane, the set $\Gamma_1 \cap \Gamma_2 $ contains at least one point $\bar x$ in the disk of radius $C \epsilon_0$ centered at $x$.
Indeed, our hypothesis and Proposition \ref{GK} imply that in the two-dimensional annulus $ C \epsilon_0 \le r \le 2 C \epsilon_0$ the open sets $\{u_1 >u_2\}$ and $\{u_2 > u_3 \}$ are $C^1$ perturbations of the sectors $\mathcal S_1$ and $\mathcal S_3$ defined in Section \ref{Sec8}. By two-dimensional topology, the boundaries of these two open sets must intersect in the disk of radius $C \epsilon_0$.
Since $dim(\mathcal A) \le n-3$, we find that $$ \bar x \notin \mathcal A \quad \mbox{ for $\mathcal H^{n-2}$ a.e. $x \in \mathcal D$,} $$
hence $\Phi_{U,\bar x}(0+) \ge \Phi(V_0)$ and we can apply Proposition \ref{C1alphand} at $\bar x$.
The conclusion follows since the set of such $x$'s is dense in $\mathcal D$.
\end{proof}
Theorem \ref{Tnd} follows easily from Proposition \ref{PLa} and we omit the details.
Regarding Theorem \ref{TPR}, we notice that
$$\Sigma':= \partial \{|U| >0\} \setminus \left(Reg(\Gamma_1) \cup Reg(\Gamma_2) \cup Reg(\Gamma_1 \cap \Gamma_2) \right) ,$$ is a closed set according to Proposition \ref{PLa} and Theorem \ref{pr}. The dimension reduction argument as in Lemma \ref{L6p6} implies that $dim (\Sigma') \le n-3$, and the rest of Theorem \ref{TPR} follows by standard techniques.
It remains to prove Proposition \ref{C1alphand}. The considerations at the beginning of the previous section remain valid, and they reduce the proof of Proposition \ref{C1alphand} to the validity of $C^{1,\alpha}$ estimates for bounded solutions of the elliptic system \eqref{a1}-\eqref{a4}.
The rest of the section is devoted to establishing Proposition \ref{C1a} in arbitrary dimensions, see Proposition \ref{C1an} below.
We introduce some notation. We denote by $$x=(x',x''), \quad x'=(x_1,x_2), \quad x''=(x_3,..,x_n),$$ $$(r,\theta) \quad \mbox{the polar coordinates for $x'$},$$ $$ \mathcal S_1 := \{\theta \in (-4a, a)\} \cap B_1, \quad \mathcal S_3 := \{\theta \in (-a, 4a)\} \cap B_1, \quad a:= \frac \pi 6,$$ and recall the definitions of $L^2 (\mathcal S)$, $H^1(\mathcal S)$, $\langle W,V \rangle$, $\langle \nabla W, \nabla V \rangle$, and $Q$ from the previous section. We establish the $C^{1,\alpha}$ regularity of minimizers of $Q$.
\begin{prop}\label{C1an} Assume that $W \in H^1(\mathcal S)$ is a minimizer of $Q$ among functions with the same trace on $\partial B_1$. Then $W \in C^{1,\alpha}(\mathcal S)$ and
$$ W(x)= W(0)+ \left((qe_{\frac \pi 3},\nu_1'') \cdot x, (- qe_{-\frac \pi 3},\nu_3'') \cdot x \right) + O(|x|^{1+\alpha}),$$ for some $\alpha \in (0,1)$. \end{prop}
As in the previous section, Proposition \ref{C1alphand} follows from Proposition \ref{C1an} provided we show that bounded solutions of the system \eqref{a1}-\eqref{a4} do minimize the energy, as in Lemma \ref{L5.3} for $n=2$.
\begin{lem}\label{L5.3n}Assume that $\bar W \in L^\infty$ solves the system \eqref{a1}-\eqref{a4} in the classical sense in the domain $$\overline{\mathcal S} \setminus (\{x'=0\} \cup \partial B_1).$$ Then it minimizes the energy $Q$ with respect to perturbations in $H^1(\mathcal S)$ which vanish near $\partial B_1$. \end{lem}
\begin{proof} The proof is essentially the same as the one of Lemma \ref{L5.3}. However, in order to justify the existence of a minimizer $W_0$ with the same boundary data as $\bar W$ on $\partial B_r$ we need to show first that $\bar W \in H^1(\mathcal S \cap B_r)$.
Notice that for any $V \in C^1$ that vanishes near $\{x'=0\} \cup \partial B_1$, we have $$ \langle \nabla \bar W, \nabla V \rangle =0.$$ Then the Caccioppoli inequality
$$\langle \varphi \nabla \bar W, \varphi \nabla \bar W \rangle \le C \langle |\nabla \varphi| \bar W, |\nabla \varphi| \bar W \rangle,$$ holds if $\varphi$ vanishes near $\{x'=0\} \cup \partial B_1$. We choose $$\varphi(x)= \psi(x') \eta(x),$$
with $\psi$ a radial cutoff function which vanishes near the origin and $\psi=1$ when $|x'| \ge \delta$, and $\eta$ a cutoff function which vanishes near $\partial B_1$. Since $\bar W \in L^\infty$, it follows that we can take $\psi \equiv 1$ in the limit as $\delta \to 0$, i.e. $$\bar W \in H^1 (\mathcal S \cap B_r) \quad \mbox{ for any $r<1$.}$$ We define $W_0$ to be the minimizer of $Q$ in $\mathcal S \cap B_r$ with the same boundary data as $\bar W$ on $\partial B_r$. Now the proof of Lemma \ref{L5.3} applies, by taking $\psi=\psi(x')$ depending only on the variable $x'$. \end{proof}
There are several ways to prove Proposition \ref{C1an}. Here we take advantage of the product structure of the problem and reduce it back to the two-dimensional case. The system is invariant under translations in the $x''$ variable, so we can estimate higher order derivatives $D_{x''}^\beta W$ through successive iterations. Then $$ \triangle W =0 \quad \Longrightarrow \quad \triangle_{x'} W = - \triangle_{x''} W,$$ and the right hand side is well behaved.
We start with some preliminary estimates.
\begin{lem}\label{Ave} Let $W$ be a minimizer of $Q$ in $H^1(\mathcal S)$. Then,
a) (Caccioppoli inequality) If $\varphi \in C_0^1(B_1)$ then
$$\langle \varphi \nabla W, \varphi \nabla W \rangle \le C \langle |\nabla \varphi| W, |\nabla \varphi| W \rangle,$$
b) $W$ is smooth up to the boundary of $\mathcal S$ away from $\partial B_1\cup \{x'=0\}$, and the Euler-Lagrange equations \eqref{a1}-\eqref{a4} are satisfied in the classical sense,
c) $W \in L^\infty(\mathcal S \cap B_{1/2})$.
\end{lem}
\begin{proof} Part a) is standard and we skip the details.
For part b) we remark that in a ball $B_\delta(x_0)$ near a point $x_0 \in \mathcal S_1 \cap \partial \mathcal S_3$ the energy can be written as
$$ \frac 3 2 \int_{B_\delta(x_0)} |\nabla w_1|^2 dx + \frac 1 2 \int_{B_\delta(x_0) \cap \mathcal S_3}| \nabla (2 w_3- w_1)|^2 dx.$$ This shows that $w_1$ is harmonic near $x_0$, and $2 w_3 - w_1$ can be extended harmonically to the whole $B_\delta(x_0)$ by even reflection across $\partial \mathcal S_3$. Hence $W$ and its derivatives can be bounded in $\mathcal S \cap B_{1/2}$ away from the codimension two edge $\{x'=0\}$ in terms of the $L^2(\mathcal S)$ norm of $W$.
For part c) we first show that \begin{equation}\label{6012}
\fint _{ \mathcal S \cap \partial B_r} |W|^2 d\sigma, \end{equation} remains bounded for all $r$ small. For this we prove a mean value inequality with respect to the $L^2(\mathcal S)$ norm: \begin{equation}\label{6013} \langle W(rx), W(rx) \rangle_{\mathcal S \cap \partial B_1}, \end{equation} is monotone increasing in $r$, where $\langle \cdot ,\cdot \rangle_{\mathcal S \cap \partial B_r}$ denotes the inner product induced on the sphere $\partial B_r$.
Let $\varphi \in C_0^1(B_1)$, $\varphi \ge 0$, and then $$ \langle \nabla W, \nabla (\varphi W) \rangle =0,$$or $$ 0 \le \langle \nabla W, \varphi \nabla W \rangle \le \langle - \nabla \varphi \cdot \nabla W, W \rangle.$$ We take $\varphi \to \chi_{B_r}$ and obtain $$ 0 \le \langle W_\nu , W \rangle_{\mathcal S \cap \partial B_r}.$$ This means that the derivative in $r$ of the expression in \eqref{6013} is nonnegative, and the claim \eqref{6012} is proved.
From \eqref{6012} and part b) applied in $\mathcal S \cap B_r$, we deduce that $|W|$ is bounded in $B_{1/2} \cap \mathcal S \cap \{x''=0 \}$ by a multiple of its $L^2(\mathcal S)$ norm. The conclusion follows by translating the origin at the other points in $\{x'=0\} \cap B_{1/2}$. \end{proof}
\begin{lem}\label{L6.5} Assume that $W \in H^1(\mathcal S)$ is a minimizer of $Q$. Then
$$\|D_{x''}^\beta W\|_{L^\infty(\mathcal S \cap B_{1/2})} \le C(\beta) \|W\|_{L^2(\mathcal S)},$$ and, for each fixed $x''$, the function $W(\cdot, x'')$ minimizes the two-dimensional energy $$\langle \nabla W , \nabla W\rangle_{x'} + \int_{\mathcal S_1} f_1 w_1 dx' + \int_{\mathcal S_3} f_3 w_3 dx',$$ for some bounded functions $f_1$, $f_3$. \end{lem}
\begin{proof} The discrete differences of $W$ in the $x''$-directions are minimizers of $Q$. By iterating the Caccioppoli inequality we obtain that $D_{x''}^\beta W$ minimizes $Q$ and
$$ \|D_{x''}^\beta W\|_{H^1(\mathcal S \cap B_{1/2})} \le C(\beta) \|W\|_{L^2(\mathcal S)}.$$ The $L^\infty$ bound follows from Lemma \ref{Ave}, part c). This means that $W(x',0)$ satisfies, in two dimensions, $$\triangle _{x'} W \in L^\infty,$$ and the boundary conditions in \eqref{a1}-\eqref{a4} hold in the classical sense on $\partial \mathcal S \setminus \{0\}$. The conclusion follows as in Lemma \ref{L5.3} since $W \in L^\infty$. \end{proof}
The results of the previous section imply the $C^{1,\alpha}$ estimates in this inhomogeneous setting.
\begin{lem}\label{C1a2} Assume that $W \in H^1(\mathcal S)$ is a minimizer of the two-dimensional energy $$\langle \nabla W , \nabla W\rangle + \int_{\mathcal S_1} f_1 w_1 dx + \int_{\mathcal S_3} f_3 w_3 dx,$$ for some bounded functions $f_i$. Then $W \in C^{1,\alpha}(\mathcal S)$ and
$$ |W(x)- W(0)- q (e_{\frac \pi 3} \cdot x, - e_{-\frac \pi 3} \cdot x ) | \le C(1+ \|f_1\|_{L^\infty} + \|f_3\|_{L^\infty}) \|W\|_{L^2} \, \, |x|^{1+\alpha},$$ in $ \mathcal S \cap B_{1/2},$ for some universal constant $C$. \end{lem}
Now we can prove the $C^{1,\alpha}$ estimate in arbitrary dimensions.
\begin{proof}[Proof of Proposition $\ref{C1an}$] By Lemma \ref{L6.5} and Lemma \ref{C1a2} (applied to $W$ and $D_{x''} W$) we find that
$$W(x',0)=W(0) + q (e_{\frac \pi 3} \cdot x', - e_{-\frac \pi 3} \cdot x ') + O(|x'|^{1+\alpha}),$$
$$ D_{x''}W(x',0)= (\nu_1'',\nu_3'') + O(|x'|), \quad \quad |D^2_{x''} W| \le C,$$ which gives the desired conclusion. \end{proof}
Finally we prove the Schauder estimates in the two-dimensional inhomogeneous setting.
\begin{proof}[Proof of Lemma \ref{C1a2}]
The proof is standard and uses Campanato iterations. We sketch some of the details.
Assume that $\|f_i\|_{L^\infty} \le \delta$, $\|W\|_{H^1} \le \delta$. It suffices to show inductively in $k$ that for each $r=\rho^k$ there exist {\it linear} functions at $0$ of the type $$ L_r=\left (d_1 + q e_{\frac \pi 3} \cdot x, d_3 - q e_{-\frac \pi 3} \cdot x\right),$$ depending on $r$ which approximate $W$ in $B_r$ such that
$$\left(\fint_{\mathcal S \cap B_r} |W-L_r|^2 dx \right)^{1/2}\le r^{1+\alpha}.$$ Indeed, the rescaled function $$\tilde W(x) = \frac {1}{ r^{1+\alpha}} (W-L_r)(rx),$$ satisfies
$$\fint_{\mathcal S \cap B_1} |\tilde W|^2 dx \le 1,$$
and $\tilde W$ is a minimizer for a functional as above with $$\|\tilde f_i\|_{L^\infty} \le \delta r^{1-\alpha} \le \delta.$$ The Caccioppoli inequality for $\tilde W$ gives
$$ \|\tilde W\|_{H^1(\mathcal S \cap B_{1/2})} \le C. $$
Let $W_0$ be the minimizer of the $Q$ functional with the same boundary data as $\tilde W$ on $\mathcal S \cap \partial B_{1/2}$, and notice that $$\| W_0\|_{H^1(\mathcal S \cap B_{1/2})} \le C .$$Then in $\mathcal S \cap B_{1/2}$ we have \begin{align*} \langle \nabla(\tilde W-W_0),\nabla(\tilde W - W_0) \rangle& =\langle \nabla \tilde W, \nabla (\tilde W - W_0) \rangle- \langle \nabla W_0,\nabla (\tilde W - W_0) \rangle \\ & = \frac 12 \int_{\mathcal S_1} \tilde f_1 (\tilde w_1 - w_{0,1}) dx + \frac 12 \int_{\mathcal S_3} \tilde f_3 (\tilde w_3 - w_{0,3}) dx\\
& \le C \delta \|\tilde W-W_0\|_{L^2}. \end{align*} By the Poincar\'e inequality it follows that
$$\|\tilde W - W_0\|^2_{L^2(\mathcal S \cap B_{1/2})} \le C' \delta^2.$$ The $C^{1,\alpha_0}$ regularity of $W_0$ (see Proposition \ref{C1a}) gives
$$\left(\fint_{\mathcal S \cap B_\rho} |W_0-L_0|^2 dx \right)^{1/2}\le C \rho ^{1+\alpha_0},$$ for some {\it linear} function $L_0$ at $0$. The last two inequalities imply the inductive result for $W$ with $r=\rho^{k+1}$ by first choosing $\rho$ sufficiently small depending on $\alpha< \alpha_0$, and then $\delta$ small depending on $\rho$. \end{proof}
\end{document}
\begin{document}
\renewcommand{\thefootnote}{} \date{}
\def\thebibliography#1{\noindent{\normalsize\bf References}
\list{{\bf
\arabic{enumi}}.}{\settowidth\labelwidth{[#1]}\leftmargin\labelwidth
\advance\leftmargin\labelsep
\usecounter{enumi}}
\def\hskip .11em plus .33em minus .07em{\hskip .11em plus .33em minus .07em}
\sloppy\clubpenalty4000\widowpenalty4000
\sfcode`\.=1000\relax}
\baselineskip=12pt \title{\vspace*{-1cm} {\large BAND DESCRIPTION OF KNOTS AND VASSILIEV INVARIANTS}}
\author{{\normalsize KOUKI TANIYAMA}\\ {\small Department of Mathematics, College of Arts and Sciences}\\[-1.5mm] {\small Tokyo Woman's Christian University}\\[-1.5mm] {\small Zempukuji 2-6-1, Suginamiku, Tokyo 167-8585, Japan}\\[-1.5mm] {\small e-mail: taniyama@twcu.ac.jp}\\[3mm] {\normalsize AKIRA YASUHARA }\\ {\small Department of Mathematics, Tokyo Gakugei University}\\[-1.5mm] {\small Nukuikita 4-1-1, Koganei, Tokyo 184-8501, Japan}\\ {\small {\em Current address}: }\\[-1.5mm] {\small Department of Mathematics, The George Washington University}\\[-1.5mm] {\small Washington, DC 20052, USA}\\[-1.5mm] {\small e-mail: yasuhara@u-gakugei.ac.jp}\\ }
\maketitle
\vspace*{-5mm}
{\small \begin{quote} \begin{center}A{\sc bstract}\end{center} \baselineskip=10pt \hspace*{1em}In 1993 K. Habiro defined the {\it $C_k$-move} of oriented links and around 1994 he proved that two oriented knots are transformed into each other by $C_k$-moves if and only if they have the same Vassiliev invariants of order $\leq k-1$. In this paper we define a Vassiliev invariant of type $(k_1,...,k_l)$, and show that, for $k=k_1+\cdots+k_l$, two oriented knots are transformed into each other by $C_k$-moves if and only if they have the same Vassiliev invariants of type $(k_1,...,k_l)$. We introduce the concept of \lq band description of knots' and give a diagram-oriented proof of this theorem. When $k_1=\cdots=k_l=1$, the Vassiliev invariant of type $(k_1,...,k_l)$ coincides with the Vassiliev invariant of order $\leq l-1$ in the usual sense. As a special case, we have Habiro's theorem stated above. \end{quote}}
\baselineskip=12pt
\footnote{{\em 2000 Mathematics Subject Classification}: 57M25} \footnote{{\em Keywords and Phrases}: knot, $C_n$-move, Vassiliev invariant, finite type invariant, band description}
\noindent {\bf Introduction}
In 1993 K. Habiro defined the {\it $C_k$-move} of oriented links for each natural number $k$ \cite{Habiro2}. A $C_k$-move is a kind of local move of oriented links. Around 1994 he proved that two oriented knots have the same Vassiliev invariants of order $\leq k-1$ if and only if they are transformed into each other by $C_k$-moves. Thus he succeeded in deducing a geometric conclusion from an algebraic condition. However, this theorem appears only in his recent paper \cite{Habiro1}. In \cite{Habiro1} he develops his original clasper theory and obtains the theorem as a consequence of clasper theory. We note that the \lq if' part of the theorem is also shown in \cite{Gusarov2}, \cite{Ohyama}, \cite{Stanford0} and \cite{T-Y}, and in \cite{Stanford3} T. Stanford gives another characterization of knots with the same Vassiliev invariants of order $\leq k-1$.
In this paper we define a Vassiliev invariant of type $(k_1,...,k_l)$, and show that, for $k=k_1+\cdots+k_l$, two oriented knots are transformed into each other by $C_k$-moves if and only if they have the same Vassiliev invariants of type $(k_1,...,k_l)$. When $k_1=\cdots=k_l=1$, the Vassiliev invariant of type $(k_1,...,k_l)$ coincides with the Vassiliev invariant of order $\leq l-1$ in the usual sense. As a special case, we have Habiro's theorem. We use the concept of \lq band description of knots' and give a diagram-oriented proof of the theorem. The proof is elementary and completely self-contained. Note that the prototypes of band description appear in \cite{Suzuki}, \cite{Yamamoto} and \cite{Yasuhara}. In particular in \cite{Yasuhara} the second author showed that any knot can be expressed as a band sum of a trivial knot and some Borromean rings. The concept of band description is a development of this fact. More generally the authors defined \lq band description of spatial graphs' in \cite{T-Y}. The concept of Vassiliev invariant of type $(k_1,...,k_l)$ is defined in \cite{T-Y}. A result related to the case $k_1=\cdots=k_l=2$ is shown in \cite{Stanford4}.
\noindent {\bf 1. Definitions and Main Result}
Throughout this paper we work in the piecewise linear category.
A {\it tangle} $T$ is a disjoint union of properly embedded arcs in the unit $3$-ball $B^{3}$. A tangle $T$ is {\it trivial} if there exists a properly embedded disk in $B^3$ that contains $T$. A {\it local move} is a pair of trivial tangles $(T_{1},T_{2})$ with $\partial T_{1}=\partial T_{2}$ such that for each component $t$ of $T_1$ there exists a component $u$ of $T_2$ with $\partial t=\partial u$. Such a pair of components is called a {\it corresponding pair}. Two local moves $(T_{1},T_{2})$ and $(U_{1},U_{2})$ are {\it equivalent}, denoted by $(T_{1},T_{2})\cong (U_{1},U_{2})$, if there is an orientation preserving self-homeomorphism $\psi :B^{3}\rightarrow B^{3}$ such that $\psi (T_{i})$ and $U_{i}$ are ambient isotopic in $B^3$ relative to $\partial B^{3}$ for $i=1,2$. Here $\psi (T_{i})$ and $U_{i}$ are {\it ambient isotopic in $B^3$ relative to $\partial B^{3}$} if $\psi (T_{i})$ is deformed to $U_{i}$ by an ambient isotopy of $B^3$ that is fixed pointwise on $\partial B^3$.
Let $(T_{1},T_{2})$ be a local move, $t_{1}$ a component of $T_1$ and $t_2$ a component of $T_2$ with $\partial t_{1}=\partial t_{2}$. Let $N_1$ and $N_2$ be regular neighbourhoods of $t_1$ and $t_2$ in $(B^3-T_1)\cup t_1$ and $(B^3-T_2)\cup t_2$ respectively such that $N_{1}\cap \partial B^{3}=N_{2}\cap \partial B^{3}$. Let $\alpha$ be a disjoint union of properly embedded arcs in $B^{2}\times [0,1]$ as illustrated in Fig. 1.1. Let $\psi_{i}:B^{2}\times [0,1]\rightarrow N_{i}$ be a homeomorphism with $\psi_{i}(B^{2}\times \{ 0,1\} )=N_{i}\cap \partial B^{3}$ for $i=1,2$. Suppose that $\psi_{1}(\partial \alpha )=\psi_{2}(\partial \alpha )$ and $\psi_{1}(\alpha )$ and $\psi_{2}(\alpha )$ are ambient isotopic in $B^{3}$ relative to $\partial B^3$. Then we say that a local move $((T_{1}-t_{1})\cup \psi_{1}(\alpha ), (T_{2}-t_{2})\cup \psi_{2}(\alpha ))$ is a {\it double of $(T_{1},T_{2})$ with respect to the components $t_1$ and $t_2$}. Note that a double of $(T_{1},T_{2})$ with respect to $t_1$ and $t_2$ is well-defined up to equivalence.
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.2\linewidth] {band1-1.eps}
Fig. 1.1 \end{center}
A {\it $C_{1}$-move} is a local move $(T_1,T_2)$ as illustrated in Fig. 1.2. A double of a $C_{k}$-move is called a {\it $C_{k+1}$-move}. Note that for each natural number $k$ there are only finitely many $C_{k}$-moves up to equivalence. We note that by the definition a $C_k$-move is {\it Brunnian}. That is, if $(T_1,T_2)$ is a $C_k$-move and $t_1$, $t_2$ components of $T_1$ and $T_2$ respectively with $\partial t_1=\partial t_2$, then the tangles $T_1-t_1$ and $T_2-t_2$ are ambient isotopic in $B^3$ relative to $\partial B^3$.
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.3\linewidth] {band1-2.eps}
Fig. 1.2 \end{center}
Let $(T_{1},T_{2})$ be a local move. Then $(T_{2},T_{1})$ is also a local move. We call $(T_{2},T_{1})$ the {\it inverse} of $(T_{1},T_{2})$. It is easy to see that the inverse of a $C_{1}$-move is equivalent to itself. Then it follows inductively that the inverse of a $C_{n}$-move is equivalent to a $C_{n}$-move (but possibly not equivalent to itself).
Let $K_1$ and $K_2$ be oriented knots in the oriented three-sphere $S^3$. We say that $K_1$ and $K_2$ are {\it related by a local move $(T_{1},T_{2})$} if there is an orientation preserving embedding $h:B^{3}\rightarrow S^{3}$ such that $K_{i}\cap h(B^{3})=h(T_{i})$ for $i=1,2$ and $K_{1}-h(B^{3})=K_{2}-h(B^{3})$ together with orientations. Then we also say that $K_2$ is obtained from $K_1$ by an {\it application of $(T_{1},T_{2})$}. Two oriented knots $K_{1}$ and $K_{2}$ are {\it $C_{k}$-equivalent} if $K_{1}$ and $K_{2}$ are related by a finite sequence of $C_{k}$-moves and ambient isotopies. This relation is an equivalence relation on knots. It is known that $C_k$-equivalence implies $C_{k-1}$-equivalence \cite{Habiro1}, see Remark 2.4.
Let $l$ be a positive integer and $k_1,...,k_l$ positive integers. Suppose that for each $P\subset\{1,...,l\}$ an oriented knot $K_P$ in $S^3$ is assigned. Suppose that there are orientation preserving embeddings $h_i:B^3\rightarrow S^3$ $(i=1,...,l)$ such that \\ (1) $h_i(B^3)\cap h_j(B^3)=\emptyset$ if $i\neq j$,\\ (2) $K_P-\bigcup_{i=1}^l h_i(B^3)=K_{P'}-\bigcup_{i=1}^l h_i(B^3)$ together with orientation for any subsets $P,P'\subset \{1,...,l\}$,\\ (3) $(h_i^{-1}(K_\emptyset),h_i^{-1}(K_{\{1,...,l\}}))$ is a $C_{k_i}$-move $(i=1,...,l)$, and\\ (4) $K_P\cap h_i(B^3)=\left\{ \begin{array}{ll} K_{\{1,...,l\}}\cap h_i(B^3) & \mbox{if $i\in P$},\\ K_\emptyset\cap h_i(B^3) & \mbox{otherwise}. \end{array} \right.$\\
Then we call the set of knots $\{K_P|P\subset\{1,...,l\}\}$ a {\em singular knot of type $(k_1,...,k_l)$}. Let ${\cal K}$ be the set of all oriented knot types in $S^3$ and ${\Bbb Z}{\cal K}$ the free abelian group generated by ${\cal K}$. We sometimes identify a knot and its knot type without explicit mention. For a singular knot
$K=\{K_P|P\subset\{1,...,l\}\}$ of type $(k_1,...,k_l)$, we define an element $\kappa(K)$ of ${\Bbb Z}{\cal K}$ by
\[\kappa(K)=\sum_{P\subset\{1,...,l\}}(-1)^{| P|}K_P.\] Let ${\cal V}(k_1,...,k_l)$ be the subgroup of ${\Bbb Z}{\cal K}$ generated by all $\kappa(K)$ where $K$ varies over all singular knots of type $(k_1,...,k_l)$.
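For example, when $l=2$ the sum runs over the four subsets of $\{1,2\}$ and $\kappa(K)$ is a second difference of the knots $K_P$; the following display is our own illustration of the definition above.

```latex
% The case l=2 of the alternating sum defining kappa(K):
% the four knots K_P are indexed by the subsets of {1,2}.
\[
\kappa(K)
  = K_{\emptyset}
  - K_{\{1\}}
  - K_{\{2\}}
  + K_{\{1,2\}} .
\]
% When k_1=k_2=1 each local modification is a crossing change,
% so this is the usual second resolution of a knot with two
% singular crossings.
```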
Let $K_1\#K_2$ be the composite knot of two knots $K_1$ and $K_2$. Then $K_1\#K_2-K_1-K_2\in{\Bbb Z}{\cal K}$ is called a {\em composite relator} following Stanford \cite{Stanford3}. Let ${\cal R}_\#$ be the subgroup of ${\Bbb Z}{\cal K}$ generated by all composite relators.
Let $\iota:{\cal K}\rightarrow{\Bbb Z}{\cal K}$ be the natural inclusion map. Let $\pi_{(k_1,...,k_l)}:{\Bbb Z}{\cal K}\rightarrow {\Bbb Z}{\cal K}/{\cal V}(k_1,...,k_l)$ and $\lambda_{(k_1,...,k_l)}:{\Bbb Z}{\cal K}/{\cal V}(k_1,...,k_l)\rightarrow{\Bbb Z}{\cal K}/({\cal V}(k_1,...,k_l)+{\cal R}_\#)$ be the quotient homomorphisms. Then the composite maps $\pi_{(k_1,...,k_l)}\circ\iota: {\cal K}\rightarrow {\Bbb Z}{\cal K}/{\cal V}(k_1,...,k_l)$ and $\lambda_{(k_1,...,k_l)}\circ\pi_{(k_1,...,k_l)}\circ\iota: {\cal K}\rightarrow {\Bbb Z}{\cal K}/({\cal V}(k_1,...,k_l)+{\cal R}_\#)$ are called the {\em universal Vassiliev invariant of type $(k_1,...,k_l)$} and the {\em universal additive Vassiliev invariant of type $(k_1,...,k_l)$} respectively. We denote them by $v_{(k_1,...,k_l)}$ and $w_{(k_1,...,k_l)}$ respectively.
Since a $C_1$-move is a crossing change we have that a singular knot of type $(\underbrace{1,...,1}_{l})$ is essentially the same as a singular knot with $l$ crossing vertices in the usual sense. Therefore we have that $\displaystyle v_{(1,...,1)}$ is the universal Vassiliev invariant of order $\leq l-1$ and $\displaystyle w_{(1,...,1)}$ is the universal additive Vassiliev invariant of order $\leq l-1$. Note that $\displaystyle v_{(1,...,1)}(K_1)=\displaystyle v_{(1,...,1)}(K_2)$ if and only if $v(K_1)=v(K_2)$ for any Vassiliev invariant $v$ of order $\leq l-1$. Similarly $w_{(1,...,1)}(K_1)=\displaystyle w_{(1,...,1)}(K_2)$ if and only if $w(K_1)=w(K_2)$ for any additive Vassiliev invariant $w$ of order $\leq l-1$. We also note that $v_{(2,...,2)}$ is essentially the same as that defined in \cite{Mellor}, \cite{Stanford4}. In \cite{T-Y} the authors defined a finite type invariant of order $(k;n)$, which is essentially the same as $\displaystyle v_{(\scriptsize\underbrace{n-1,...,n-1}_{k+1})}$.
Now we state our main results.
\noindent {\bf Theorem 1.1.} {\em Let $k_1,...,k_l$ be positive integers and $k=k_1+\cdots+k_l$. Then ${\cal V}(k)\subset {\cal V}(k_1,...,k_l)$.}
\noindent {\bf Theorem 1.2.} {\em Let $k_1,...,k_l$ be positive integers and $k=k_1+\cdots+k_l$. Then ${\cal V}(k_1,...,k_l)\subset {\cal V}(k)+{\cal R}_\#$.}
By Theorems 1.1 and 1.2, we have the following corollary.
\noindent {\bf Corollary 1.3.} {\em Let $k_1,...,k_l$ be positive integers and $k=k_1+\cdots+k_l$. Then ${\cal V}(k_1,...,k_l)+{\cal R}_\#= {\cal V}(k)+{\cal R}_\#$. $\Box$}
The following theorem was proved by Habiro \cite{Habiro1}, and the authors gave a proof by using band description \cite{T-Y}. In section 3, we give the same proof as in \cite{T-Y} for the convenience of the reader.
\noindent {\bf Theorem 1.4.} (Habiro \cite{Habiro1}) {\it The $C_k$-equivalence classes of oriented knots in $S^3$ form an abelian group under connected sum of oriented knots.}
We denote this group by ${\cal K}/C_k$.
\noindent {\bf Theorem 1.5.} {\it Let $\eta_k:{\cal K}/C_k\rightarrow{\Bbb Z}{\cal K}/({\cal V}(k)+{\cal R}_\#)$ be a map induced by the inclusion $\iota$. Then $\eta_k$ is an isomorphism.}
Let $K_1$ and $K_2$ be oriented knots and $k=k_1+\cdots+k_l$. If $K_1$ and $K_2$ are $C_k$-equivalent, then by Theorem 1.1, $K_1-K_2\in{\cal V}(k)\subset {\cal V}(k_1,...,k_l)$. Therefore we have $v_{(k_1,...,k_l)}(K_1)=v_{(k_1,...,k_l)}(K_2)$. On the other hand, if $v_{(k_1,...,k_l)}(K_1)=v_{(k_1,...,k_l)}(K_2)$, then $w_{(k_1,...,k_l)}(K_1)=w_{(k_1,...,k_l)}(K_2)$. Then by Corollary 1.3, $K_1-K_2\in{\cal V}(k)+{\cal R}_\#$. Then by Theorem 1.5 we have that $K_1$ and $K_2$ are $C_k$-equivalent. Hence we have the following theorem.
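Schematically, the argument in the preceding paragraph establishes Theorem 1.6 as a cycle of implications among conditions (1), (2) and (3); the following display is our own summary, using only the results already cited.

```latex
% Cycle of implications proving Theorem 1.6:
% (1) C_k-equivalence of K_1 and K_2,
% (2) equality of v_{(k_1,...,k_l)},
% (3) equality of w_{(k_1,...,k_l)}.
\[
(1)\ \stackrel{\mbox{\scriptsize Thm.\ 1.1}}{\Longrightarrow}\ (2)
\ \Longrightarrow\ (3)
\ \stackrel{\mbox{\scriptsize Cor.\ 1.3,\ Thm.\ 1.5}}{\Longrightarrow}\ (1).
\]
% The middle implication holds because w_{(k_1,...,k_l)} factors
% through v_{(k_1,...,k_l)} via the quotient map lambda_{(k_1,...,k_l)}.
```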
\noindent {\bf Theorem 1.6.} {\em Let $k_1,...,k_l$ be positive integers and $k=k_1+\cdots+k_l$. Let $K_1$ and $K_2$ be oriented knots in $S^3$. Then the following conditions are mutually equivalent.
{\rm (1)} $K_1$ and $K_2$ are $C_k$-equivalent,
{\rm (2)} $v_{(k_1,...,k_l)}(K_1)=v_{(k_1,...,k_l)}(K_2)$,
{\rm (3)} $w_{(k_1,...,k_l)}(K_1)=w_{(k_1,...,k_l)}(K_2)$. $\Box$}
As a special case of Theorem 1.6 we have the following theorem.
\noindent{\bf Theorem 1.7.} (Habiro \cite{Habiro1}) {\it Two oriented knots $K_1$ and $K_2$ are $C_{k}$-equivalent if and only if their values of the universal {\rm(}additive{\rm)} Vassiliev invariant of order $\leq k-1$ are equal. $\Box$}
In the remainder of this paper we prove Theorems 1.1, 1.2, 1.4 and 1.5. We give a proof of Theorem 1.1 in section 2 and proofs of Theorems 1.2, 1.4 and 1.5 in section 3. The reader may read sections 2 and 3 in either order, since they are independent except for Remark 2.4.
\noindent {\bf 2. $C_k$-moves}
A graph is called a {\em tree} if it is connected and simply connected as a topological space. A graph is {\it uni-trivalent} if each vertex has degree one or three. Let $G$ be a uni-trivalent tree embedded on the unit disk $D^2$ such that $G\cap\partial D^2$ is exactly the set of degree-one vertices $\{v_1,...,v_{k+1}\}$ of $G$. Suppose that an edge $e$ of $G$ is specified. We will assign a $C_k$-move to $G$ with respect to $e$ and a pair of corresponding components of it to each $v_i$ as follows. If $G$ is a tree with just two vertices $v_1,v_2$ and one edge $e$ joining them then we assign a diagram of a $C_1$-move as illustrated in Fig. 2.1. Suppose that for each uni-trivalent tree with $k$ degree-one vertices and a specified edge, a diagram of a $C_{k-1}$-move is assigned. Suppose that $v_k$ and $v_{k+1}$ are degree-one vertices of $G$ such that there is a degree-three vertex $u$ of $G$ that is adjacent to both of $v_k$ and $v_{k+1}$. Suppose that neither $uv_k$ nor $uv_{k+1}$ is the specified edge $e$. Let $G'$ be the uni-trivalent tree obtained from $G$ by deleting $v_{k+1}$ and the edge $uv_{k+1}$ and forgetting $u$. Let ${\cal D}'$ be a diagram of the $C_{k-1}$-move assigned to $G'$ with respect to $e$. Let $t_1$ and $t_2$ be the pair of corresponding components assigned to $v_k$. Then we replace their parts in ${\cal D}'$ as in the diagram illustrated in Fig. 2.2. We assign the new pairs of corresponding components to $v_k$ and $v_{k+1}$ respecting their cyclic order on $\partial D^2$. Thus we have assigned a $C_k$-move to $G$ with respect to the specified edge. See for example Fig. 2.3. Note that any $C_k$-move is assigned to a uni-trivalent tree with respect to a specified edge up to equivalence of local moves.
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.5\linewidth] {band2-1.eps}
Fig. 2.1 \end{center}
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.5\linewidth] {band2-2.eps}
Fig. 2.2 \end{center}
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth] {band2-3.eps}
Fig. 2.3 \end{center}
\noindent{\bf Lemma 2.1.} {\it Let $(T_1,T_2)$ be a $C_k$-move assigned to a uni-trivalent tree $G$ with respect to a specified edge $e$. Suppose that an edge $e'$ is adjacent to $e$. Then there is a re-embedding $f:G\rightarrow D^2$ such that the $C_k$-move assigned to $f(G)$ with respect to $f(e')$ is equivalent to $(T_1,T_2)$.}
\noindent{\bf Proof.} Let $v$ be the common vertex of $e$ and $e'$. First suppose that $v$ is the only degree-three vertex of $G$. Then we have the result by the deformation illustrated in Fig. 2.4. By taking appropriate doubles we have the general case. $\Box$
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.65\linewidth] {band2-4.eps}
Fig. 2.4 \end{center}
A {\it path} of $G$ is a subgraph of $G$ homeomorphic to a closed interval. Let $\sigma(G)$ be the maximal number of edges of a path of $G$ and call it the {\it diameter} of $G$. We say that a $C_k$-move $(T_1,T_2)$ is {\it one-branched} if it is assigned to a uni-trivalent tree of diameter $k$. Note that if $G$ is a uni-trivalent tree with $k+1$ degree-one vertices and $\sigma(G)=k$, then all of the degree-three vertices lie on the path with $k$ edges.
Let $S_1$ and $S_2$ be tangles. We say that $S_1$ and $S_2$ are {\it related by a local move $(T_{1},T_{2})$} if there is an orientation preserving embedding $h:B^{3}\rightarrow {\rm int}B^{3}$ such that $S_{i}\cap h(B^{3})=h(T_{i})$ for $i=1,2$ and $S_{1}-h(B^{3})=S_{2}-h(B^{3})$. Then we say that $S_2$ is obtained from $S_1$ by an {\it application of $(T_{1},T_{2})$}.
The following result was shown by Habiro \cite{Habiro2}. Since the article \cite{Habiro2} is written in Japanese, we give a proof by using our terms.
\noindent{\bf Lemma 2.2.} (Habiro \cite{Habiro2}) {\it Let $(T_1,T_2)$ be a $C_k$-move. Then $T_1$ and $T_2$ are related by a finite sequence of one-branched $C_k$-moves and ambient isotopies relative to $\partial B^3$.}
\noindent{\bf Proof.} Suppose that $(T_1,T_2)$ is assigned to a uni-trivalent tree $G$. If $\sigma(G)=k$ then $(T_1,T_2)$ itself is a one-branched $C_k$-move. Therefore we may suppose that $\sigma(G)<k$. It is sufficient to show that $T_1$ and $T_2$ are related by $C_k$-moves each of which is assigned to a uni-trivalent tree of diameter $\sigma(G)+1$. Let $P$ be a path of $G$ containing $\sigma(G)$ edges. Since $\sigma(G)<k$, there are degree-three vertices of $G$ that are not on $P$. Let $v$ be one of them such that $v$ is adjacent to a vertex $w$ on $P$. Let $w_1$ and $w_2$ be the vertices on $P$ adjacent to $w$. Let $v_1$ and $v_2$ be the other vertices adjacent to $v$. By Lemma 2.1 we may assume that the specified edge is $ww_1$. We temporarily forget the embedding of $G$ into $D^2$ and define two abstract graphs $G_1$ and $G_2$ from $G$ as follows. Let $G_i$ be the uni-trivalent tree obtained from $G$ by deleting the edge $vv_i$, forgetting $v$, adding a vertex $u$ on $ww_2$ and adding an edge $uv_i$ $(i=1,2)$. Then we have $\sigma(G_1)=\sigma(G_2)=\sigma(G)+1$. We will show that $T_1$ and $T_2$ are related by a $C_k$-move assigned to some embedding of $G_1$ and a $C_k$-move assigned to some embedding of $G_2$. In the following we consider the simplest case, illustrated in Fig. 2.5. General cases follow immediately by taking appropriate doubles. The $C_k$-move of Fig. 2.5 is assigned to the graph of Fig. 2.5 with respect to $ww_1$. Note that the $C_k$-move of Fig. 2.5 is equivalent to the local move of Fig. 2.6. Then it is realized by the two local moves of Fig. 2.7 (a) and (b). Each of them is equivalent to a desired $C_k$-move. Fig. 2.8 indicates this for case (a). The case (b) is similar and we omit it. $\Box$
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.65\linewidth] {band2-5.eps}
Fig. 2.5 \end{center}
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.35\linewidth] {band2-6.eps}
Fig. 2.6 \end{center}
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.75\linewidth] {band2-7.eps}
Fig. 2.7 \end{center}
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.75\linewidth] {band2-8.eps}
Fig. 2.8 \end{center}
\noindent{\bf Corollary 2.3.} (Habiro \cite{Habiro2}) {\it Two oriented knots $K_1$ and $K_2$ are $C_k$-equivalent if and only if they are related by a finite sequence of one-branched $C_k$-moves and ambient isotopies. $\Box$}
Let $T_0$ be a $j$-component tangle. Let ${\cal T}(T_0)$ be the set of all tangles that are homotopic to $T_0$ relative to $\partial B^3$. Let ${\Bbb Z}{\cal T}(T_0)$ be the free abelian group generated by ${\cal T}(T_0)$. Let $k_1,...,k_l$ be natural numbers. We define a {\it singular tangle of type $(k_1,...,k_l)$} and a subgroup ${\cal V}(k_1,...,k_l)(T_0)$ of ${\Bbb Z}{\cal T}(T_0)$ as follows. Suppose that for each $P\subset\{1,...,l\}$ a tangle $T_P\in{\cal T}(T_0)$ is assigned. Suppose that there are orientation preserving embeddings $h_i:B^3\rightarrow {\rm int}B^3$ $(i=1,...,l)$ such that \\ (1) $h_i(B^3)\cap h_j(B^3)=\emptyset$ if $i\neq j$,\\ (2) $T_P-\bigcup_{i=1}^l h_i(B^3)=T_{P'}-\bigcup_{i=1}^l h_i(B^3)$ for any subsets $P,P'\subset \{1,...,l\}$,\\ (3) $(h_i^{-1}(T_\emptyset),h_i^{-1}(T_{\{1,...,l\}}))$ is a $C_{k_i}$-move $(i=1,...,l)$, and\\ (4) $T_P\cap h_i(B^3)=\left\{ \begin{array}{ll} T_{\{1,...,l\}}\cap h_i(B^3) & \mbox{if $i\in P$},\\ T_\emptyset\cap h_i(B^3) & \mbox{otherwise}. \end{array} \right.$\\
Then we call the set of tangles $\{T_P|P\subset\{1,...,l\}\}$ a {\em singular tangle of type
$(k_1,...,k_l)$}. For a singular tangle $T=\{T_P|P\subset\{1,...,l\}\}$ of type $(k_1,...,k_l)$, we define an element $\kappa(T)$ of ${\Bbb Z}{\cal T}(T_0)$ by
\[\kappa(T)=\sum_{P\subset\{1,...,l\}}(-1)^{|P|}T_P.\] Let ${\cal V}(k_1,...,k_l)(T_0)$ be the subgroup of ${\Bbb Z}{\cal T}(T_0)$ generated by all $\kappa(T)$ where $T$ varies over all singular tangles of type $(k_1,...,k_l)$.
\noindent{\bf Proof of Theorem 1.1.} By Corollary 2.3 it is sufficient to show that if $(T_1,T_2)$ is a one-branched $C_k$-move, then $T_1-T_2\in{\cal V}(k_1,...,k_l)(T_1)$. We first show that $T_1-T_2\in{\cal V}(k-k_l,k_l)(T_1)$. By Lemma 2.1 we have that any one-branched $C_k$-move is equivalent to a move illustrated in Fig. 2.9 (a) or (b). Therefore we may assume that $T_1$ and $T_2$ are as illustrated in Fig. 2.9 (a) or (b). Suppose that $T_1$ and $T_2$ are as in Fig. 2.9 (a) (resp. (b)). Let $T_3,T_4,T_5$ and $T_6$ be tangles as illustrated in Fig. 2.10 (a) (resp. (b)). Note that $T_3$ and $T_4$ are ambient isotopic relative to $\partial B^3$. Then we have that $T_1-T_2=(T_1-T_5-T_3+T_6)+(T_5-T_2-T_6+T_4)$. Note that $T_1$ and $T_3$, $T_2$ and $T_4$, and $T_5$ and $T_6$ are related by a (one-branched) $C_{k_l}$-move. Similarly $T_1$ and $T_5$, $T_3$ and $T_6$, $T_5$ and $T_2$, and $T_6$ and $T_4$ are related by a one-branched $C_{k-k_l}$-move. It is not hard to see that both $T=\{T_1,T_5,T_3,T_6\}$ and $T'=\{T_5,T_2,T_6,T_4\}$ are singular tangles of type $(k-k_l,k_l)$ and that $\varepsilon\kappa(T)=T_1-T_5-T_3+T_6$ and $\varepsilon'\kappa(T')=T_5-T_2-T_6+T_4$, where $\varepsilon=\pm 1$, and $\varepsilon'=\pm 1$. Thus $T_1-T_2\in {\cal V}(k-k_l,k_l)(T_1)$.
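The identity $T_1-T_2=(T_1-T_5-T_3+T_6)+(T_5-T_2-T_6+T_4)$ used above is a simple telescoping in ${\Bbb Z}{\cal T}(T_1)$; the following verification is our own expansion of the cancellation.

```latex
% Verification of the decomposition of T_1 - T_2 used in the proof:
\[
(T_1 - T_5 - T_3 + T_6) + (T_5 - T_2 - T_6 + T_4)
  = T_1 - T_2 - T_3 + T_4
  = T_1 - T_2 ,
\]
% where the last equality holds because T_3 and T_4 are ambient
% isotopic relative to the boundary of B^3, hence represent the
% same generator of the free abelian group Z T(T_1).
```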
Similarly we have that if $(T'_1,T'_2)$ is a one-branched $C_{k-k_l}$-move, then $T'_1-T'_2\in{\cal V}(k-k_l-k_{l-1},k_{l-1})(T'_1)$. By substituting this into the one-branched $C_{k-k_l}$-moves in $B^3$ relating $T_1$ and $T_5$, $T_3$ and $T_6$, $T_5$ and $T_2$, and $T_6$ and $T_4$, we have that $T_1-T_2\in{\cal V}(k-k_l-k_{l-1},k_{l-1},k_l)(T_1)$. Repeating this argument we finally have the desired conclusion. $\Box$
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth] {band2-9.eps}
Fig. 2.9 \end{center}
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth] {band2-10.eps}
Fig. 2.10 \end{center}
\noindent {\bf Remark 2.4.} In the proof of Theorem 1.1, since $T_1$ and $T_5$, and $T_5$ and $T_2$ are related by a one-branched $C_{k-k_l}$-move, a one-branched $C_k$-move is realized by two applications of one-branched $C_{k-k_l}$-moves. By Lemma 2.2, for any positive integers $k,k'$ $(k'<k)$, a $C_k$-move is realized by finitely many $C_{k'}$-moves. Hence $C_k$-equivalence implies $C_{k'}$-equivalence.
\noindent {\bf 3. Band description}
A {\it $C_{1}$-link model} is a pair $(\alpha,\beta)$ where $\alpha$ is a disjoint union of properly embedded arcs in $B^3$ and $\beta$ is a disjoint union of arcs on $\partial B^3$ with $\partial \alpha=\partial \beta$ as illustrated in Fig. 3.1. Suppose that a $C_{k}$-link model $(\alpha, \beta)$ is defined where $\alpha$ is a disjoint union of $k+1$ properly embedded arcs in $B^3$ and $\beta$ is a disjoint union of $k+1$ arcs on $\partial B^3$ with $\partial \alpha=\partial \beta$ such that $\alpha \cup \beta$ is a disjoint union of $k+1$ circles. Let $\gamma$ be a component of $\alpha \cup \beta$ and $W$ a regular neighbourhood of $\gamma$ in $(B^3-(\alpha \cup \beta))\cup\gamma$. Let $V$ be an oriented solid torus, $D$ a disk in $\partial V$, $\alpha_{0}$ properly embedded arcs in $V$ and $\beta_{0}$ arcs on $D$ as illustrated in Fig. 3.2. Let $\psi :V\rightarrow W$ be an orientation preserving homeomorphism such that $\psi (D)=W\cap \partial B^{3}$ and $\psi (\alpha_{0}\cup \beta_{0})$ bounds disjoint disks in $B^3$. Then we call the pair $((\alpha -\gamma )\cup \psi (\alpha_{0}), (\beta -\gamma )\cup\psi (\beta_{0}))$ a {\it $C_{k+1}$-link model.} We also say that the pair $((\alpha -\gamma )\cup \psi (\alpha_{0}), (\beta -\gamma )\cup\psi (\beta_{0}))$ is a {\it double} of $(\alpha,\beta)$ with respect to the component $\gamma$. A {\it link model} is a $C_{k}$-link model for some $k$.
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.17\linewidth] {band3-1.eps}
Fig. 3.1 \end{center}
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.2\linewidth] {band3-2.eps}
Fig. 3.2 \end{center}
Let $(\alpha_{1},\beta_{1}),...,(\alpha_{l},\beta_{l})$ be link models. Let $K$ be an oriented knot (resp. a tangle). Let $\psi_{i}:B^{3}\rightarrow S^3$ (resp. $\psi_{i}:B^{3}\rightarrow {\rm int}B^3$) be an orientation preserving embedding for $i=1,..., l$ and $b_{1,1},b_{1,2},...,b_{1,\rho(1)},b_{2,1},b_{2,2},...,b_{2,\rho(2)}, ...,b_{l,1},b_{l,2},...,b_{l,\rho(l)}$ mutually disjoint disks embedded in $S^3$ (resp. $B^3$). Suppose that they satisfy the following conditions:\\ (1) $\psi_{i}(B^{3})\cap \psi_{j}(B^{3})=\emptyset$ if $i\neq j$,\\ (2) $\psi_{i}(B^{3})\cap K=\emptyset$ for each $i$,\\ (3) $b_{i,k}\cap K=\partial b_{i,k}\cap K$ is an arc for each $i,k$,\\ (4) $b_{i,k}\cap (\bigcup_{j=1}^{l} \psi_{j}(B^{3}))= \partial b_{i,k}\cap \psi_{i}(B^{3})$ is a component of $\psi_{i}(\beta_{i})$ for each $i,k$,\\ (5) $(\bigcup_{k=1}^{\rho(i)}b_{i,k})\cap \psi_{i}(B^{3}) =\psi_{i}(\beta_{i})$ for each $i$.\\ Let $J$ be an oriented knot (resp. a tangle) defined by \[ J=K\cup (\bigcup_{i,k}\partial b_{i,k})\cup (\bigcup_{i=1}^{l}\psi_{i}(\alpha_{i})) - \bigcup_{i,k}{\rm int}(\partial b_{i,k}\cap K) - \bigcup_{i=1}^{l}\psi_{i}({\rm int}\beta_{i}), \] where the orientation of $J$ coincides with that of $K$ on $K-\bigcup_{i,k}b_{i,k}$ if $K$ is oriented. Then we say that $J$ is a {\it band sum} of $K$ and link models $(\alpha_{1},\beta_{1}),..., (\alpha_{l},\beta_{l})$. We call each $b_{i,k}$ a {\it band}. Each image $\psi_{i}(B^{3})$ is called a {\it link ball}. In particular if $(\alpha_i,\beta_i)$ is a $C_k$-link model then $b_{i,k}$ is called a {\it $C_k$-band} and $\psi_{i}(B^{3})$ is called a {\it $C_k$-link ball}. We set ${\cal B}_i=((\alpha_i,\beta_i),\psi_i,\{b_{i,1},...,b_{i,\rho(i)}\})$ and call ${\cal B}_i$ a {\it chord}. In particular ${\cal B}_i$ is called a {\it $C_k$-chord} when $(\alpha_i,\beta_i)$ is a $C_k$-link model. We denote $J$ by $J=\Omega(K;\{{\cal B}_1,...,{\cal B}_l\})$ and call it a {\it band description} of $J$.
We also say that $J$ is a band sum of $K$ and chords ${\cal B}_1,...,{\cal B}_l$.
\noindent{\bf Sublemma 3.1.} {\it Let $(T_1,T_2)$ be a $C_k$-move. Then there is a $C_k$-link model $(\alpha,\beta)$ such that $(T_1,T_2)\cong(\alpha,\hat{\beta})$ where $\hat{\beta}$ is a slight push in of $\beta$.}
\noindent{\bf Proof.} It is clearly true for $k=1$. Suppose that $(T_1,T_2)$ is a double of a $C_{k-1}$-move $(U_1,U_2)$ with respect to the components $u_1$ and $u_2$ and $(\alpha',\beta')$ is a $C_{k-1}$-link model such that $(U_1,U_2)\cong(\alpha',\hat{\beta'})$. Then by the deformation illustrated in Fig. 3.3, we have that there is a double $(\alpha,\beta)$ of $(\alpha',\beta')$ with respect to the component which corresponds to $u_1$ and $u_2$ such that $(\alpha,\hat{\beta})$ is equivalent to a double of $(\alpha',\hat{\beta'})$ and therefore equivalent to $(T_1,T_2)$. $\Box$
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth] {band3-3.eps}
Fig. 3.3 \end{center}
Fig. 3.3 indicates that for a double $(\alpha,\beta)$ of $(\alpha',\beta')$, $(\alpha,\hat{\beta})$ is equivalent to a double of $(\alpha',\hat{\beta'})$. Thus we have
\noindent {\bf Sublemma 3.2.} {\it If $(\alpha,\beta)$ is a $C_k$-link model, then $(\alpha,\hat{\beta})$ is equivalent to a $C_k$-move. $\Box$}
\noindent{\bf Sublemma 3.3.} {\it Let $(T_1,T_2)$ be a $C_k$-move. Then there is a $C_k$-link model $(\alpha,\beta)$ such that a band sum of $T_1$ and $(\alpha,\beta)$ is ambient isotopic to $T_2$ relative to $\partial B^3$.}
\noindent{\bf Proof.} Since the inverse $(T_2,T_1)$ is also a $C_k$-move we have by Sublemma 3.1 that there is a $C_k$-link model $(\alpha,\beta)$ such that $(T_2,T_1)\cong(\alpha,\hat{\beta})$. It is easy to see that $\alpha$ is a band sum of $\hat{\beta}$ and $(\alpha,\beta)$ up to ambient isotopy relative to $\partial B^3$. Therefore we have the result. $\Box$
From now on we consider knots up to ambient isotopy of $S^3$ and tangles up to ambient isotopy of $B^3$ relative to $\partial B^3$ without explicit mention. As an immediate consequence of Sublemmas 3.2 and 3.3 we have
\noindent{\bf Sublemma 3.4.} {\it Let $K$ and $J$ be oriented knots. Then $K$ and $J$ are related by a $C_k$-move if and only if $J$ is a band sum of $K$ and a $C_k$-link model. $\Box$}
Let $K$ be a knot and $J=\Omega(K;\{{\cal B}_1,...,{\cal B}_{l}\})$ a band sum of $K$ and $C_{k_i}$-chords ${\cal B}_i$ $(i=1,...,l)$. We define an element $\kappa(J)$ of ${\Bbb Z}{\cal K}$ by
\[\kappa(J)=\sum_{P\subset\{1,...,l\}}(-1)^{|P|} \Omega\left(K;\bigcup_{i\in P}\{{\cal B}_{i}\}\right).\] By Sublemmas 3.2 and 3.3 we have that the subgroup ${\cal V}(k_1,...,k_l)$ of ${\Bbb Z}{\cal K}$ is generated by all $\kappa(J)$ where $J$ varies over all band sums of knots and $C_{k_i}$-chords $(i=1,...,l)$.
\noindent{\bf Sublemma 3.5.} {\it Let $K$, $J$ and $I$ be oriented knots. Suppose that $J=\Omega(K;\{{\cal B}_1,...,{\cal B}_l\})$ for some chords ${\cal B}_1,...,{\cal B}_l$ and $I=\Omega(J;\{{\cal B}\})$ for some $C_k$-chord ${\cal B}$. Then there is a $C_k$-chord ${\cal B}'$ such that $I=\Omega(K;\{{\cal B}_1,...,{\cal B}_l,{\cal B}'\})$. Moreover, if there is a subset $P$ of $\{1,...,l\}$ such that the link ball and the bands of $\cal B$ intersect neither the link ball nor the bands of ${\cal B}_j$ for any $j\in\{1,...,l\}\setminus P$, then $\Omega(\Omega(K;\bigcup_{i\in P}\{{\cal B}_i\});\{{\cal B}\})= \Omega(K;(\bigcup_{i\in P}\{{\cal B}_i\})\cup\{{\cal B}'\})$.}
\noindent{\bf Proof.} If the bands and the link ball of ${\cal B}$ are disjoint from those of ${\cal B}_1,...,{\cal B}_l$ then we have that $I=\Omega(K;\{{\cal B}_1,...,{\cal B}_l,{\cal B}\})$. If not then we deform $I$ up to ambient isotopy as follows. First we thin and shrink the bands and the link ball of ${\cal B}$ so that they are thin enough and small enough respectively. If the link ball of ${\cal B}$ intersects the bands or the link balls of ${\cal B}_1,...,{\cal B}_l$ then we move the link ball of ${\cal B}$ so that it does not intersect them. Then we slide the bands of ${\cal B}$ along $J$ so that the intersection of the bands with $J$ is disjoint from the bands and the link balls of ${\cal B}_1,...,{\cal B}_l$. Then we sweep the bands of ${\cal B}$ out of the link balls of ${\cal B}_1,...,{\cal B}_l$. Note that this is always possible since the tangles are trivial. Finally we sweep the intersection of the bands of ${\cal B}$ and the bands of ${\cal B}_1,...,{\cal B}_l$ out of the intersection of the bands of ${\cal B}_1,...,{\cal B}_l$ and $K$. Let ${\cal B}'$ be the result of the deformation of ${\cal B}$ described above. Then it is not hard to see that ${\cal B}'$ is a desired chord. $\Box$
By repeated applications of Sublemmas 3.4 and 3.5 we immediately have the following lemma.
\noindent{\bf Lemma 3.6.} {\it Let $k$ be a positive integer and let $K$ and $J$ be oriented knots. Then $K$ and $J$ are $C_{k}$-equivalent if and only if $J$ is a band sum of $K$ and some $C_{k}$-link models. $\Box$}
As in the definition of $C_k$-move we define iterated double strings as follows. A {\it $0$-double pattern} is $\{{\bf o}\}\times[0,1]$ in $B^2\times [0,1]$ where ${\bf o}$ is the center of $B^2$. Suppose that a $k$-double pattern $A$ in $B^2\times [0,1]$ is defined. Let $N$ be a regular neighbourhood of a component $\gamma$ of $A$ in $(B^2\times [0,1]-A)\cup\gamma$ such that $N\cap(\partial B^2\times[0,1])=\emptyset$. Let $\psi:B^2\times[0,1] \rightarrow N$ be a homeomorphism with $\psi(B^2\times\{0,1\})=N\cap(B^2\times\{0,1\})$. Let $\alpha$ be a disjoint union of properly embedded arcs in $B^2\times[0,1]$ as illustrated in Fig. 1.1. Then $(A-\gamma)\cup\psi(\alpha)$ is called a {\it $(k+1)$-double pattern}. Let $N$ be a regular neighbourhood of a properly embedded arc in $B^3$. Let $\psi:B^2\times[0,1]\rightarrow N$ be a homeomorphism. Then the image $\psi(A)$ of a $k$-double pattern $A$ is called a {\it $k$-double string}. Note that a \lq crossing change' between a $k$-double string and a $j$-double string is equivalent to a $C_{k+j+1}$-move.
\noindent{\bf Sublemma 3.7.} {\it Let $(\alpha,\beta)$ be a $C_k$-link model and $\gamma$ a component of $\alpha\cup\beta$. Let $D$ be a disk in $B^3$ such that $\partial D=\gamma$ and ${\rm int}D\cap \partial B^3=\emptyset$. Let $\delta$ be a properly embedded unknotted arc in $B^3$ that intersects $D$ transversally at one point in ${\rm int}D$. Let $N$ be a regular neighbourhood of $\delta$ in $B^3-\gamma$. Then there exist a $(k-1)$-double string ${\cal D}$ in $N$ and an orientation preserving homeomorphism $\varphi:B^3\rightarrow B^3$ such that
$\varphi|_{\gamma\cap\beta}={\rm id}|_{\gamma\cap\beta}$, and $\varphi(\alpha\cup\gamma)={\cal D}\cup\gamma$. }
\noindent In the sublemma above, we note that ${\cal D}\cup(\alpha\cap \gamma)$ is a $k$-double string.
\noindent{\bf Proof.} The case $k=1$ is clear. Suppose that the claim holds for $k-1$. Let $(\alpha,\beta)$ be a double of a $C_{k-1}$-link model $(\alpha',\beta')$ with respect to the component $\gamma'$ of $\alpha'\cup\beta'$.
If $\gamma$ is already a component of $\alpha'\cup\beta'$, then by the deformation of $\alpha$ in a regular neighbourhood of $\gamma'$ as illustrated in Fig. 3.4 and by the assumption, we have the result.
Next suppose that $\gamma$ is not a component of $\alpha'\cup\beta'$. In other words, $\gamma$ is contained in a regular neighbourhood $W$ of $\gamma'$. Let $D'$ be a disk in $B^3$ such that $\partial D'=\gamma'$ and ${\rm int}D'\cap \partial B^3=\emptyset$. Let $\delta'$ be a properly embedded unknotted arc in $B^3$ that intersects $D'$ transversally at one point in ${\rm int}D'$. Let $N'$ be a regular neighbourhood of $\delta'$ in $B^3-\gamma'$. Then by the induction hypothesis there exist a $(k-2)$-double string ${\cal D}'$ in $N'$ and an orientation preserving homeomorphism $\varphi':B^3\rightarrow B^3$ such that
$\varphi'|_{\gamma'\cap\beta'}={\rm id}|_{\gamma'\cap\beta'}$ and $\varphi'(\alpha'\cup\gamma')={\cal D}'\cup\gamma'$. By modifying $\varphi'$ if necessary we may suppose that
$\varphi'|_{W\cap\partial B^3}={\rm id}|_{W\cap\partial B^3}$. Then we have that $\varphi'(\alpha\cup\gamma)$ is as illustrated in Fig. 3.5 (a). Then by the deformation illustrated in Fig. 3.5 we have the result. $\Box$
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.4\linewidth] {band3-4.eps}
Fig. 3.4 \end{center}
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth] {band3-5.eps}
Fig. 3.5 \end{center}
\noindent{\bf Lemma 3.8.} {\it
Let $K$, $J=\Omega(K;\{{\cal B}_1,...,{\cal B}_l,{\cal B}_0\})$
and $I=\Omega(K;\{{\cal B}_1,...,{\cal B}_l,{\cal B}'_0\})$ be
oriented knots, where ${\cal B}_1,...,{\cal B}_l$ are chords and ${\cal B}_0,{\cal B}'_0$ are $C_k$-chords. Suppose that $J$ and $I$ differ locally as illustrated in Fig.~3.6 {\rm (a), (b)}, i.e., $I$ is obtained from $J$ by a crossing change between $K$ and a band of ${\cal B}_0$. Then $J$ and $I$ are related by a $C_{k+1}$-move. Moreover, there is a $C_{k+1}$-chord $\cal B$ such that $\Omega(K;(\bigcup_{i\in P}\{{\cal B}_i\})\cup\{{\cal B}_0\})= \Omega(K;(\bigcup_{i\in P}\{{\cal B}_i\})\cup\{{\cal B}'_0,{\cal B}\})$ for any subset $P$ of $\{1,...,l\}$. }
\noindent{\bf Proof.} By shrinking the band and pulling the link ball as illustrated in Fig. 3.6, it suffices to consider the case where $K$ is near the link ball. Then by Sublemma 3.7 we deform the strings in the link ball without disturbing a neighbourhood of the band. Then the crossing change is realized by a $C_{k+1}$-move. See Fig. 3.7. In Fig. 3.7, there is a $3$-ball $B$ in $S^3$ and a homeomorphism $h:B\rightarrow B^3$ such that $(h(J), h(I))$ is a $C_{k+1}$-move and $B$ is disjoint from the link ball and the bands of any chord ${\cal B}_i$ $(i=1,...,l)$. Thus by Sublemmas 3.3 and 3.5, we have the latter assertion. $\Box$
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth] {band3-6.eps}
Fig. 3.6 \end{center}
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.4\linewidth] {band3-7.eps}
Fig. 3.7 \end{center}
\noindent{\bf Lemma 3.9.} {\it Let $K$, $J=\Omega(K;\{{\cal B}_1,...,{\cal B}_l,{\cal B}_{0j}, {\cal B}_{0k}\})$ and $I=\Omega(K;\{{\cal B}_1,...,{\cal B}_l, {\cal B}'_{0j},{\cal B}'_{0k}\})$ be oriented knots, where ${\cal B}_1,...,{\cal B}_l$ are chords and ${\cal B}_{0j},{\cal B}'_{0j}$ $($resp. ${\cal B}_{0k},{\cal B}'_{0k})$ are $C_j$-chords $($resp. $C_k$-chords$)$. Suppose that $J$ and $I$ differ locally as illustrated in Fig. {\rm3.8}. Then $J$ and $I$ are related by a $C_{j+k}$-move. Moreover, there is a $C_{j+k}$-chord $\cal B$ such that $\Omega(K;(\bigcup_{i\in P}\{{\cal B}_i\})\cup\{{\cal B}_{0j}, {\cal B}_{0k}\})=\Omega(K;(\bigcup_{i\in P}\{{\cal B}_i\})\cup \{{\cal B}'_{0j},{\cal B}'_{0k},{\cal B}\})$ for any subset $P$ of $\{1,...,l\}$. }
\noindent We call the change from $J$ to $I$ in Lemma 3.9 a {\it band exchange}.
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth] {band3-8.eps}
Fig. 3.8 \end{center}
\noindent{\bf Proof.} First we deform the strings in the link balls as stated in Sublemma 3.7, then we slide one of the two bands along the other band, and then perform a $C_{j+k}$-move. See Fig. 3.9. In Fig. 3.9, there is a $3$-ball $B$ in $S^3$ and a homeomorphism $h:B\rightarrow B^3$ such that $(h(J), h(I))$ is a $C_{j+k}$-move and $B$ is disjoint from the link ball and the bands of any chord ${\cal B}_i$ $(i=1,...,l)$. Thus by Sublemmas 3.3 and 3.5, we have the latter assertion. $\Box$
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth] {band3-9.eps}
Fig. 3.9 \end{center}
\noindent {\bf Proof of Theorem 1.2.} Let $k_1,...,k_l$ $(l\geq 2)$ be positive integers and $k=k_1+\cdots+ k_l$. Let $K_0$ be a knot and $K_1$ a band sum of $K_0$ and $C_{k_j}$-chords ${\cal B}_{k_j,j}$ $(j=1,...,l)$. It is sufficient to show that
\[\mu_k\left(\sum_{P\subset\{1,...,l\}}(-1)^{|
P|} \Omega\left(K_0;\bigcup_{j\in P}\{{\cal B}_{k_j,j}\}\right)\right) =0\in {\Bbb Z}{\cal K}/({\cal V}(k)+{\cal R}_\#),\] where $\mu_k:{\Bbb Z}{\cal K}\rightarrow {\Bbb Z}{\cal K}/({\cal V}(k)+{\cal R}_\#)$ is the quotient homomorphism.
Set \[K_P=\Omega\left(K_0;\bigcup_{j\in P}\{{\cal B}_{k_j,j}\}\right).\] {\em In the following we consider knots up to $C_{k}$-equivalence and by a symbol $K_1$ $($resp. $K_P)$ we express a knot that is $C_{k}$-equivalent to $K_1$ $($resp. $K_P)$.} We will deform the band description of $K_1$ step by step. At each step $K_1$ is expressed as a band sum of $K_0$ and some chords such that each $K_P$ is a band sum of $K_0$ and some subset of the chords of $K_1$. To be more precise, \[K_1=\Omega\left(K_0;\bigcup_{i,j}\{{\cal B}_{i,j}\}\right)\] at each step, where ${\cal B}_{i,j}$ is a $C_i$-chord for some $i$ with $1\leq i<k$ and it has an associated subset $\omega({\cal B}_{i,j})\subset\{1,...,l\}$ with $\sum_{t\in\omega({\cal B}_{i,j})}k_t\leq i$ such that for each $P\subset\{1,...,l\}$ \[(*)\ \ K_P=\Omega\left(K_0;\bigcup_{\omega({\cal B}_{i,j}) \subset P}\{{\cal B}_{i,j}\}\right).\] A chord ${\cal B}_{i,j}$ is called a {\it local chord} if there is a 3-ball $B$ such that $B$ contains all of the bands and the link ball of ${\cal B}_{i,j}$, $B$ does not intersect any other bands and link balls, and $(B,B\cap K_0)$ is a trivial ball-arc pair. Such a local chord ${\cal B}_{i,j}$ represents a knot $K_{i,j}$ connected summed to $K_0$. The final goal of the step-by-step deformation is a band sum of $K_0$ and some local chords ${\cal B}_{i,j}$'s so that $K_1$ is a connected sum of $K_0$ and the $K_{i,j}$'s. Since $\mu_k(K\#K')=\mu_k(K+K')\in {\Bbb Z}{\cal K}/({\cal V}(k)+{\cal R}_\#)$, we have \[\begin{array}{rl}
&\displaystyle{\mu_k\left(
\sum_{P\subset\{1,...,l\}}
(-1)^{| P|}\Omega\left(K_0;\bigcup_{\omega({\cal B}_{i,j})\subset P}
\{{\cal B}_{i,j}\}\right)\right)}\\
=&\displaystyle{\mu_k\left(
\sum_{P\subset\{1,...,l\}}(-1)^{| P|}\left(K_0
+\sum_{\omega({\cal B}_{i,j})\subset P}K_{i,j}\right)\right)}\\ =&\displaystyle{\mu_k\left(
\sum_{P\subset\{1,...,l\}}(-1)^{| P|}K_0+
\sum_{P\subset\{1,...,l\}}(-1)^{| P|}\left(\sum_{\omega
({\cal B}_{i,j})\subset P}K_{i,j}\right)\right)}\\ =&\displaystyle{\mu_k\left(
0+\sum_{i,j}\left(\sum_{P\subset\{1,...,l\},\omega({\cal B}_{i,j})
\subset P}(-1)^{| P|}K_{i,j}\right)\right)}. \end{array}\] We consider the coefficient of $K_{i,j}$. Since $\sum_{t\in\omega({\cal B}_{i,j})}k_t< k$, $\omega({\cal B}_{i,j})$ is a proper subset of $\{1,...,l\}$. We may assume that $\omega({\cal B}_{i,j})$ does not contain $a\in\{1,...,l\}$. Then we have that \[\begin{array}{rl} \displaystyle{ \sum_{P\subset\{1,...,l\},\omega({\cal B}_{i,j})\subset P}
(-1)^{| P|}}= &\displaystyle{ \sum_{P\subset\{1,...,l\}\setminus\{a\},\omega({\cal B}_{i,j})
\subset P}(-1)^{| P|}}\\ &\hspace*{3em}+
\displaystyle{\sum_{P\subset\{1,...,l\}\setminus\{a\},\omega({\cal B}_{i,j})\subset P}(-1)^{| P\cup\{a\}|}=0.} \end{array}\] Thus, we have the conclusion if we can get a desired band description.
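The vanishing of this alternating sum is ordinary inclusion--exclusion over the subsets $P$. The following quick numerical check is illustrative only (function names are ours, not part of the proof):

```python
from itertools import combinations

def alternating_sum(l, omega):
    """Sum of (-1)^|P| over all subsets P of {1,...,l} containing omega."""
    rest = sorted(set(range(1, l + 1)) - omega)
    total = 0
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            P = omega | set(extra)
            total += (-1) ** len(P)
    return total

# For any proper subset omega of {1,...,l} the sum vanishes,
# which kills the coefficient of K_{i,j} in the computation above.
print(alternating_sum(4, {1, 2}))        # 0 (proper subset)
print(alternating_sum(3, {1, 2, 3}))     # -1 (full set, no cancellation)
```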
Now we will deform the band description of $K_1$ into a desired form. We first set $\omega({\cal B}_{k_j,j})=\{j\}$ for $j=1,...,l$. Then we have $\sum_{t\in\omega({\cal B}_{k_j,j})}k_t=k_j$ and \[K_P=\Omega\left(K_0;\bigcup_{\omega({\cal B}_{k_j,j})\subset P}\{{\cal B}_{k_j,j}\}\right).\] Note that a crossing change between bands or a self-crossing change of a band can be realized by crossing changes between $K_0$ and a band as illustrated in Fig. 3.10. Therefore we can deform each chord into a local chord by band exchanges and crossing changes between $K_0$ and bands.
When we perform a crossing change between $K_0$ and a $C_p$-band of a $C_p$-chord ${\cal B}_{p,q}$ with $p\leq k-2$ we introduce a new $C_{p+1}$-chord ${\cal B}_{p+1,r}$ and we set $\omega({\cal B}_{p+1,r})=\omega({\cal B}_{p,q})$ so that the condition $(*)$ still holds for all subsets $P$ of $\{1,...,l\}$. To do this we use Lemma 3.8. When we perform a band exchange between a $C_p$-chord ${\cal B}_{p,q}$ and a $C_r$-chord ${\cal B}_{r,s}$ with $p+r\leq k-1$ we introduce a new $C_{p+r}$-chord ${\cal B}_{p+r,n}$ and set $\omega({\cal B}_{p+r,n})=\omega({\cal B}_{p,q})\cup\omega({\cal B}_{r,s})$ so that the condition $(*)$ still holds for all subsets $P$ of $\{1,...,l\}$. To do this we use Lemma 3.9. Note that the condition $\sum_{t\in\omega({\cal B}_{i,j})}k_t\leq i$ still holds for all chords.
By Lemma 3.8, a crossing change between $K_0$ and a $C_{k-1}$-band is realized by a $C_{k}$-move and therefore does not change the $C_{k}$-equivalence class. Similarly, by Lemma 3.9, a band exchange between a $C_p$-chord ${\cal B}_{p,q}$ and a $C_r$-chord ${\cal B}_{r,s}$ with $p+r\geq k$ is realized by a $C_{p+r}$-move. Therefore, by Remark 2.4, it also does not change the $C_{k}$-equivalence class.
We note that the process terminates because a deformation of a $C_p$-chord does not produce $C_q$-chords for $q\leq p$. $\Box$
\begin{center} \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth] {band3-10.eps}
Fig. 3.10 \end{center}
\noindent {\bf Proof of Theorem 1.4.} It is clear that if $K_1$ and $K_2$ are $C_k$-equivalent and if $J_1$ and $J_2$ are $C_k$-equivalent then the connected sums $K_1\#J_1$ and $K_2\#J_2$ are $C_k$-equivalent. Thus the binary operation is well-defined. We note that the ambient isotopy classes of oriented knots form a commutative monoid under connected sum of knots, with the trivial knot as unit element. Therefore it is sufficient to show the existence of inverse elements. Let $K$ be an oriented knot. First we note that $K$ itself is $C_1$-equivalent to a trivial knot. Suppose that there is an oriented knot $J$ such that $K\#J$ is $C_{k-1}$-equivalent to a trivial knot $O$. Then by Lemma 3.6 we have that $O$ is a band sum of $K\#J$ and some $C_{k-1}$-link models. We choose a 3-ball $B$ in $S^3$ such that $B\cap K\#J$ is an unknotted arc in $B$. By an ambient isotopy we deform the link balls into $B$ and slide the ends of the bands into $B$. Then using Lemma 3.8 we deform $O$ up to $C_k$-equivalence so that the whole of the bands are contained in $B$. Then we have that the result is a connected sum of $K\#J$ and some knot $L$. Namely $K\#J\#L$ is $C_k$-equivalent to $O$. Thus $J\#L$ is the desired knot. $\Box$
\noindent {\bf Proof of Theorem 1.5.} Let $\xi_k:{\Bbb Z}{\cal K}/({\cal V}(k)+{\cal R}_\#)\rightarrow{\cal K}/C_k$ be the homomorphism defined by $\xi_k(K)=[K]_{C_k}$ for $K\in{\cal K}$, where $[K]_{C_k}$ denotes the $C_k$-equivalence class of $K$. It follows from Theorem 1.4 that both $\eta_k$ and $\xi_k$ are well-defined homomorphisms. Then it is clear that both $\xi_k\circ\eta_k$ and $\eta_k\circ\xi_k$ are identities. Therefore both $\eta_k$ and $\xi_k$ are isomorphisms. $\Box$
\end{document}
\begin{document}
\title{Computing invariants of cubic surfaces}
\begin{abstract} We report on the computation of invariants, covariants, and contravariants of cubic surfaces. All algorithms are implemented in the computer algebra system {\tt magma}. \end{abstract}
\section{Introduction}
Given two hypersurfaces of the same degree in projective space over an algebraically closed field, one may ask for the existence of an automorphism of the projective space that maps one of the hypersurfaces to the other. It turns out that if the hypersurfaces are stable~\cite[Def.~1.7]{MFK} in the sense of geometric invariant theory, such an isomorphism exists if and only if all the invariants of the hypersurfaces coincide~\cite{Mu}.
Aside from cubic curves in $\mathop{\text{\bf P}}\nolimits^2$ and quartic surfaces in $\mathop{\text{\bf P}}\nolimits^3$, an isomorphism between smooth hypersurfaces of degree $d \geq 3$ always extends to an automorphism of the ambient projective space~\cite[Th.~2]{MM}. Thus, the invariants may be used to test abstract isomorphy.
If the base field is not algebraically closed, two varieties with equal invariants can differ by a twist. A necessary condition for the existence of a non-trivial twist is that the variety has a non-trivial automorphism.
In this article, we focus on the case of cubic surfaces. For them, it was proven by Clebsch~\cite{Cl} that the ring of invariants of even weight is generated by five invariants of degrees 8, 16, 24, 32, and 40. Later, Salmon~\cite{Sa3} worked out explicit formulas for these invariants based on the pentahedral representation of the cubic surface, introduced by Sylvester~\cite{Sy}. Using modern computer algebra, it is possible to compute the pentahedral representation of a given cubic surface and to deduce the invariants from this~\cite{EJ1}.
We describe a different approach to compute the Clebsch-Salmon invariants, linear covariants, and some contravariants of cubic surfaces, based on the Clebsch transfer principle. Using this, we also compute an invariant of degree 100~\cite[Sec.~9.4.5]{Do} and odd weight that vanishes if and only if the cubic surface has a non-trivial automorphism. The square of this invariant is a polynomial expression in Clebsch's invariants.
This answers the question of isomorphy for all stable cubic surfaces over algebraically closed fields and for all surfaces over non-closed fields for which the degree 100 invariant does not vanish.
All algorithms are implemented in the computer algebra system {\tt magma}~\cite{BCP}.
\section{The Clebsch-Salmon invariants}
\begin{dfn} \label{def_inv} Let $K$ be a field of characteristic zero and $K[X_1,\ldots,X_n]^{(d)}$ the $K$-vector space of all homogeneous forms of degree $d$. Further, we fix the left group action \begin{eqnarray*} \mathop{\text{\rm GL}}\nolimits_n(K) \times K[X_1,\ldots,X_n] \rightarrow K[X_1,\ldots,X_n],\quad (M,f) \mapsto M \cdot f, \end{eqnarray*} with $(M \cdot f)(X_1,\ldots,X_n) := f((X_1,\ldots,X_n) \, M)$. Finally, on the polynomial ring $K[Y_1,\ldots,Y_n]$, we choose the action \begin{eqnarray*} \mathop{\text{\rm GL}}\nolimits_n(K) \times K[Y_1,\ldots,Y_n] \rightarrow K[Y_1,\ldots,Y_n], \quad (M,f) \mapsto M \cdot f, \end{eqnarray*} given by $(M \cdot f)(Y_1,\ldots,Y_n) := f((Y_1,\ldots,Y_n) \left(M^{-1}\right)^\top)$. \begin{enumerate} \item An {\em invariant $I$ of degree $D$ and weight $w$} is a map $K[X_1,\ldots,X_n]^{(d)} \rightarrow K$ that may be given by a homogeneous polynomial of degree $D$ in the coefficients of $f$ and satisfies $$ I(M \cdot f) = \det(M)^w \cdot I(f), $$ for all $M \in \mathop{\text{\rm GL}}\nolimits_n(K)$ and all forms $f \in K[X_1,\ldots,X_n]^{(d)}$. \item A {\em covariant $C$ of degree $D$, order $p$, and weight $w$} is a map $$ K[X_1,\ldots,X_n]^{(d)} \rightarrow K[X_1,\ldots,X_n]^{(p)} $$ such that each coefficient of $C(f)$ is a homogeneous degree $D$ polynomial in the coefficients of $f$ and that satisfies $$ C(M \cdot f) = \det(M)^w \cdot M \cdot (C(f)), $$ for all $M \in \mathop{\text{\rm GL}}\nolimits_n(K)$ and all forms $f \in K[X_1,\ldots,X_n]^{(d)}$. \item A {\em contravariant $c$ of degree $D$, order $p$, and weight $w$} is a map $$ K[X_1,\ldots,X_n]^{(d)} \rightarrow K[Y_1,\ldots,Y_n]^{(p)} $$ such that each coefficient of $c(f)$ is a homogeneous degree $D$ polynomial in the coefficients of $f$ and that satisfies $$ c(M \cdot f) = \det(M)^w \cdot M \cdot c(f), $$ for all $M \in \mathop{\text{\rm GL}}\nolimits_n(K)$ and all forms $f \in K[X_1,\ldots,X_n]^{(d)}$. 
Note that the right hand side uses the action on $K[Y_1,\ldots,Y_n]$. \end{enumerate} \end{dfn}
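To make the conventions of the definition concrete: for binary quadratics the discriminant $b^2-4ac$ is an invariant of degree $2$ and weight $2$ under the action $(M\cdot f)(X_1,\ldots,X_n)=f((X_1,\ldots,X_n)\,M)$. A small symbolic check of the transformation law (an illustrative sketch, assuming sympy is available; all names are ours):

```python
import sympy as sp

X, Y = sp.symbols('X Y')
a, b, c = sp.symbols('a b c')
m11, m12, m21, m22 = sp.symbols('m11 m12 m21 m22')

f = a*X**2 + b*X*Y + c*Y**2

def disc(g):
    """Discriminant b^2 - 4ac read off a binary quadratic g."""
    p = sp.Poly(sp.expand(g), X, Y)
    return p.coeff_monomial(X*Y)**2 - 4*p.coeff_monomial(X**2)*p.coeff_monomial(Y**2)

# The action (M.f)(X,Y) = f((X,Y) M) from the definition
M = sp.Matrix([[m11, m12], [m21, m22]])
Xp, Yp = sp.Matrix([[X, Y]]) * M
g = sp.expand(f.subs({X: Xp, Y: Yp}, simultaneous=True))

# Invariant of degree 2 and weight 2: disc(M.f) = det(M)^2 * disc(f)
lhs = sp.expand(disc(g))
rhs = sp.expand(M.det()**2 * disc(f))
print(sp.expand(lhs - rhs) == 0)  # True
```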
\begin{rems} \begin{enumerate} \item The set of all invariants is a commutative ring and an algebra over the base field. \item The set of all covariants (resp. contravariants) is a commutative ring and a module over the ring of invariants. \item Geometrically, the vanishing locus of $f$ or a covariant $C(f)$ is a subset of the projective space whereas the vanishing locus of a contravariant $c(f)$ is a subset of the dual projective space. Replacing the matrix by the transpose inverse matrix gives the action on the dual space in a naive way. \end{enumerate} \end{rems}
\begin{exas} \begin{enumerate} \item The discriminant of binary forms of degree $d$ is an invariant of degree $2d - 2$ and weight $d(d-1)$~\cite[Chap.~2]{Ol}. \item Let $f$ be a form of degree $d > 2$ in $n$ variables. Then the {\it Hessian}~$H$ defined by $$ H(f) := \det \left(\frac{\partial^2 f}{\partial X_i \, \partial X_j} \right)_{i,j =1,\ldots,n} $$ is a covariant of degree $n$, order $(d-2) n$, and weight $2$. \item Let a smooth plane curve $V \subset \mathop{\text{\bf P}}\nolimits^2$ be given by a ternary form $f$ of degree $d$. Mapping $f$ to the form that defines the dual curve~\cite[Sec.~1.2.2]{Do} of $V$ is an example of a contravariant of degree $2d - 2$ and order $d(d-1)$. \end{enumerate} \end{exas}
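The weight-$2$ covariance of the Hessian in the second example can also be verified symbolically. A sketch with a sample ternary cubic and a concrete matrix (illustrative only, assuming sympy; the chosen $f$ and $M$ are ours):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def hessian_det(g, vs):
    """H(g): determinant of the matrix of second partials, as in the example."""
    H = sp.Matrix(3, 3, lambda i, j: sp.diff(g, vs[i], vs[j]))
    return sp.expand(H.det())

f = x**3 + 2*y**3 + z**3 + x*y*z                   # a sample ternary cubic
M = sp.Matrix([[1, 2, 0], [0, 1, 3], [1, 0, 1]])   # det(M) = 7

# (M.f)(x,y,z) = f((x,y,z) M)
xs = list(sp.Matrix([[x, y, z]]) * M)
sub = dict(zip((x, y, z), xs))
Mf = sp.expand(f.subs(sub, simultaneous=True))

# Covariance of weight 2: H(M.f) = det(M)^2 * (M . H(f))
lhs = hessian_det(Mf, (x, y, z))
rhs = sp.expand(M.det()**2 * hessian_det(f, (x, y, z)).subs(sub, simultaneous=True))
print(sp.expand(lhs - rhs) == 0)  # True
```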
\subsection*{Salmon's formulas} A cubic surface given by a system of equations of the shape $$ a_0 X_0^3 + a_1 X_1^3 +a_2 X_2^3 +a_3 X_3^3 +a_4 X_4^3 = 0 , \quad X_0 + X_1 + X_2 + X_3 + X_4 = 0 $$ is said to be in {\it pentahedral form}. The coefficients $a_0,\ldots,a_4$ are called the pentahedral coefficients of the surface. The cubic surfaces that have a pentahedral form are a Zariski open subset in the Hilbert scheme of all cubic surfaces. Thus, it suffices to give the invariants for these surfaces. For this, we denote by $\sigma_1,\ldots,\sigma_5$ the elementary symmetric functions of the pentahedral coefficients. Then the Clebsch-Salmon invariants (as mentioned in the introduction) of the cubic surface are given by~\cite[\S~467]{Sa3}, \begin{eqnarray*} I_8 = \sigma_4^2 - 4 \sigma_3 \sigma_5, \quad I_{16} = \sigma_1 \sigma_5^3, \quad I_{24} = \sigma_4 \sigma_5^4, \quad I_{32} = \sigma_2 \sigma_5^6, \quad I_{40} = \sigma_5^8\, . \end{eqnarray*} Further, Salmon lists four linear covariants of degrees 11, 19, 27, and 43~\cite[\S~468]{Sa3} \begin{align*} L_{11} &= \sigma_5^2 \sum_{i=0}^4 a_i x_i, & L_{19} &= \sigma_5^4 \sum_{i=0}^4 \frac{1}{a_i} x_i, \\ L_{27} &= \sigma_5^5 \sum_{i=0}^4 a_i^2 x_i, & L_{43} &= \sigma_5^8 \sum_{i=0}^4 a_i^3 x_i \,. \end{align*} Finally, the $4 \times 4$-determinant of the matrix formed by the coefficients of these linear covariants of a cubic surface in $\mathop{\text{\bf P}}\nolimits^3$ is an invariant $I_{100}$ of degree 100. It vanishes if and only if the surface has Eckardt points or equivalently a non-trivial automorphism group~\cite[Sec.~9.4.5, Table~9.6]{Do}. The square of $I_{100}$ can be expressed in terms of the other invariants above. For a modern view on these invariants, we refer to~\cite[Sec.~9.4.5]{Do}.
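Salmon's formulas above are directly computable from the pentahedral coefficients via the elementary symmetric functions. A minimal sketch (function names are ours, not from the literature):

```python
from itertools import combinations
from math import prod

def elementary_symmetric(a, k):
    """k-th elementary symmetric function of the sequence a."""
    return sum(prod(c) for c in combinations(a, k))

def clebsch_salmon(a):
    """Salmon's formulas for I_8,...,I_40 from pentahedral
    coefficients a = [a_0,...,a_4]."""
    s = [elementary_symmetric(a, k) for k in range(6)]  # s[0] = 1
    I8  = s[4]**2 - 4*s[3]*s[5]
    I16 = s[1]*s[5]**3
    I24 = s[4]*s[5]**4
    I32 = s[2]*s[5]**6
    I40 = s[5]**8
    return I8, I16, I24, I32, I40

# Example: a_i = 1 for all i (the Clebsch diagonal cubic)
print(clebsch_salmon([1, 1, 1, 1, 1]))  # (-15, 5, 5, 10, 1)
```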
\section{Transvection} One classical approach to write down invariants is to use the transvection (called \"Uberschiebung in German). This is part of the so-called symbolic method~\cite[Chap.~8,~\S2]{We},~\cite[App.~B.2]{Hu}. We illustrate it in the case of ternary forms.
\begin{dfn} Let $K[X_1,\ldots,X_n,Y_1,\ldots,Y_n,Z_1,\ldots,Z_n]$ be the polynomial ring in $3 n$ variables. For $i,j,k \in \{1,\ldots,n\}$, we denote by $(i\, j\, k)$ the differential operator \begin{eqnarray*} (i\, j\, k) := \det \left( \begin{array}{ccc} \frac{\partial}{\partial X_i} & \frac{\partial}{\partial X_j} & \frac{\partial}{\partial X_k} \\ \frac{\partial}{\partial Y_i} & \frac{\partial}{\partial Y_j} & \frac{\partial}{\partial Y_k} \\ \frac{\partial}{\partial Z_i} & \frac{\partial}{\partial Z_j} & \frac{\partial}{\partial Z_k} \\ \end{array} \right)
\, . \end{eqnarray*} \end{dfn}
\begin{exa} Using this notation, the {\em Aronhold invariants} $S$ and $T$ of the ternary cubic form $f$ are given by \begin{align*} S(f) &:= (1\, 2 \, 3) (2 \, 3 \, 4) (3 \, 4 \, 1) (4 \, 1 \, 2) f(X_1,Y_1,Z_1) \cdots f(X_4,Y_4,Z_4), \\ T(f) &:= (1 \, 2 \, 3) (1 \, 2 \, 4) (2 \, 3 \, 5) (3 \, 1 \, 6) (4 \, 5 \, 6)^2 f(X_1,Y_1,Z_1) \cdots f(X_6,Y_6,Z_6)\, . \end{align*} The first one is of degree and weight~$4$, the second one is of degree and weight~$6$. Using $S$ and $T$, one can write down the discriminant of a ternary cubic as $\Delta := S^3 - 6 T^2$. The discriminant vanishes if and only if the corresponding cubic curve is singular.
See~\cite[Sec.~V]{Sa2} for a historical and~\cite[Sec.~3.4.1]{Do} for modern references concerning invariants of ternary cubic forms. \end{exa}
\begin{rem} One can use the transvection to write down invariants of quaternary forms, as well. For example, if $f$ is a quartic form in four variables then $$ (1\, 2\, 3\, 4)^4 f(X_1,Y_1,Z_1,W_1) \cdots f(X_4,Y_4,Z_4,W_4) $$ is an invariant of degree 4. Here, $(1\, 2\, 3 \, 4)$ denotes the differential operator $$ (1\, 2\, 3 \, 4) := \det \left( \begin{array}{cccc} \frac{\partial}{\partial X_1} & \frac{\partial}{\partial X_2} & \frac{\partial}{\partial X_3} & \frac{\partial}{\partial X_4} \\ \frac{\partial}{\partial Y_1} & \frac{\partial}{\partial Y_2} & \frac{\partial}{\partial Y_3} & \frac{\partial}{\partial Y_4} \\ \frac{\partial}{\partial Z_1} & \frac{\partial}{\partial Z_2} & \frac{\partial}{\partial Z_3} & \frac{\partial}{\partial Z_4} \\ \frac{\partial}{\partial W_1} & \frac{\partial}{\partial W_2} & \frac{\partial}{\partial W_3} & \frac{\partial}{\partial W_4} \end{array} \right)\, . $$ For a quaternary cubic form, one can apply this to its Hessian to get an invariant of degree 16. However, a direct evaluation of such formulas for forms in four variables is too slow in practice. The reason is that both the differential operators and the product $f(X_1,Y_1,Z_1,W_1) \cdots f(X_4,Y_4,Z_4,W_4)$ usually have many terms. \end{rem}
\section{The Clebsch transfer principle} We refer to~\cite[Sec.~3.4.2]{Do} for a detailed and modern description of the Clebsch transfer principle. The basic idea is to compute a contravariant of a form of degree $d$ in $n$ variables out of an invariant of a form of degree $d$ in $(n-1)$ variables.
\begin{dfn} \begin{enumerate} \item We consider the vector space $V = K^n$ and choose the volume form given by the determinant. We have the following isomorphism $$ \Phi \colon \Lambda^{n-1} V \rightarrow V^*,\quad v_1 \wedge \dots \wedge v_{n-1} \mapsto (v \mapsto \det (v,v_1,\ldots,v_{n-1}))\,. $$ \item Let $I$ be a degree $D$, weight $w$ invariant on $K[U_1,\ldots,U_{n-1}]^{(d)}$. Then the {\it Clebsch transfer} of $I$ is the contravariant $\tilde{I}$ of degree $D$ and order $w$ $$ \tilde{I} \colon K[X_1,\ldots,X_{n}]^{(d)} \rightarrow K[Y_1,\ldots,Y_n]^{(w)}, $$ given by $$ \tilde{I}(f) \colon (K^n)^* \rightarrow K,\quad l \mapsto I(f(U_1 v_1 + \cdots + U_{n-1} v_{n-1}))\, . $$ Here, $v_1,\ldots,v_{n-1}$ are given by $v_1 \wedge \ldots \wedge v_{n-1} = \Phi^{-1}(l)$. Note that $\tilde{I}(f)$, as defined, is indeed a polynomial mapping and homogeneous of degree~$w$. \end{enumerate} \end{dfn}
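The isomorphism $\Phi$ above is easy to realize in coordinates for $n=4$: the covector $\Phi(v_1\wedge v_2\wedge v_3)$ has $i$-th component $\det(e_i,v_1,v_2,v_3)$. An illustrative sketch, assuming sympy (vectors written as rows; names are ours):

```python
import sympy as sp

def phi(v1, v2, v3):
    """Phi(v1 ^ v2 ^ v3) in (K^4)^*: the covector l with
    l(v) = det(v, v1, v2, v3), by multilinearity in the first row."""
    e = sp.eye(4)
    return [sp.Matrix.vstack(e.row(i), v1.T, v2.T, v3.T).det()
            for i in range(4)]

v1 = sp.Matrix([1, 0, 2, 1])
v2 = sp.Matrix([0, 1, 1, 0])
v3 = sp.Matrix([3, 0, 0, 1])
l = phi(v1, v2, v3)

# Defining property, checked on a test vector v:
v = sp.Matrix([2, 1, 0, 5])
lv = sum(li * vi for li, vi in zip(l, v))
print(lv == sp.Matrix.vstack(v.T, v1.T, v2.T, v3.T).det())  # True
```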
\begin{exa} Denote by $S$ and $T$ the invariants of ternary cubic forms, introduced above. Then $\tilde{S}$ is a degree 4, order 4 contravariant of quaternary cubic forms. Further, $\tilde{T}$ is a contravariant of degree 6 and order 6.
The discriminant of a cubic curve is given by $\Delta = S^3 - 6 T^2$. It vanishes if and only if the cubic curve is singular. Thus, the dual surface of the smooth cubic surface $V(f)$ is given by $\tilde{\Delta}(f) = \tilde{S}(f)^3 - 6 \tilde{T}(f)^2 = 0$. \end{exa}
\begin{rem} By definition, the dual surface of a smooth surface $V(f) \subset \mathop{\text{\bf P}}\nolimits^3$ is the set of all tangent hyperplanes of $V(f)$. A plane $P \in (\mathop{\text{\bf P}}\nolimits^3)^*$ is tangent if and only if the intersection $V(f) \cap P$ is singular. Thus, $P$ is a point on the dual surface if and only if $\tilde{\Delta}(f)(P) = 0$. Here, $\Delta$ is the discriminant of ternary forms of the same degree as $f$. \end{rem}
\begin{rem} For a given cubic form $f \in K[X,Y,Z,W]$, we compute $\tilde{S}(f)$ by interpolation as follows: \begin{enumerate} \item Choose 35 vectors $p_1,\ldots,p_{35} \in \left(K^4\right)^*$ in general position. \item Compute $\Phi^{-1}(p_i)$, for $i = 1,\ldots,35$. \item Compute $s_i := S(f(U_1 v_1 + U_2 v_2 + U_3 v_3))$, for $v_1 \wedge v_2 \wedge v_3 = \Phi^{-1}(p_i)$ and all $i = 1,\ldots,35$. \item Compute the degree $4$ form $\tilde{S}(f)$ by interpolating the arguments $p_i$ and the values $s_i$. \end{enumerate} We can compute $\tilde{T}(f)$ in the same way. The only modification necessary is to increase the number of vectors, as the space of sextic forms is of dimension 84. \end{rem}
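The point counts in the interpolation (35 for $\tilde{S}(f)$, 84 for $\tilde{T}(f)$) are just the dimensions of the spaces of quartic and sextic forms in four variables; a one-line sanity check (illustrative, names ours):

```python
from math import comb

def dim_forms(n_vars, degree):
    """Dimension of the space of degree-d forms in n variables:
    the number of monomials, C(d + n - 1, n - 1)."""
    return comb(degree + n_vars - 1, n_vars - 1)

print(dim_forms(4, 4))  # 35: quartic forms in 4 variables, for S-tilde
print(dim_forms(4, 6))  # 84: sextic forms in 4 variables, for T-tilde
```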
\section{Action of contravariants on covariants and vice versa}
\begin{enumerate} \item Recall that the rings $K[X_1,\ldots, X_n]$ and $K[Y_1,\ldots, Y_n]$ are equipped with
$\mathop{\text{\rm GL}}\nolimits_n(K)$-actions, as introduced in Definition~\ref{def_inv}. \item The ring of differential operators $$\calD := K\left[\frac{\partial }{\partial X_1},\ldots,\frac{\partial }{\partial X_n}\right]$$ acts on $K[X_1,\ldots,X_n]$. \item The $\mathop{\text{\rm GL}}\nolimits_n(K)$-action on $\calD$ given by $$ M \cdot \left(\frac{\partial }{\partial v} \right) := \frac{\partial }{\partial (v \cdot M^{-1})} \mbox{ for all } v \in K^n $$ results in the equality $$ M \cdot \left(\frac{\partial f}{\partial v}\right) = \left( M \cdot \frac{\partial}{\partial v} \right) \left(M \cdot f \right), $$ for all $f \in K[X_1,\ldots,X_n]$ and all $v \in K^n$. \item The map $$ \psi \colon K[Y_1,\ldots,Y_n] \rightarrow K\left[\frac{\partial }{\partial X_1},\ldots,\frac{\partial }{\partial X_n}\right], \quad Y_i \mapsto \frac{\partial }{\partial X_i} $$ is an isomorphism of rings. Further, for each $M \in \mathop{\text{\rm GL}}\nolimits_n(K)$, we have the following commutative diagram \begin{eqnarray*} \diagram K[Y_1,\ldots,Y_n] \rrto^{\psi~~~} \dto_M & & \calD
\dto^{M} \\ K[Y_1,\ldots,Y_n] \rrto^{\psi~~~} & &
\calD \enddiagram_{\displaystyle .} \end{eqnarray*} In other words, $\psi$ is an isomorphism of $\mathop{\text{\rm GL}}\nolimits_n(K)$-modules. \item Let $C$ be a covariant and $c$ a contravariant on $K[X_1,\ldots,X_n]^{(d)}$. Denote the order of $C$ by $P$ and the order of $c$ by $p$. For $P \geq p$, we define \begin{eqnarray*} &c \vdash C \colon K[X_1,\ldots,X_n]^{(d)} \rightarrow K[X_1,\ldots,X_n]^{(P-p)},\quad f \mapsto \psi(c(f)) \left(C(f)\right)\, . \end{eqnarray*} The notation $\vdash$ follows~\cite[p.~304]{Hu}. \item Assume $c \vdash C$ not to be zero. If $p < P$ then $c \vdash C$ is a covariant of order $P - p$. If $p = P$ then $c \vdash C$ is an invariant. In both cases, the degree of $c \vdash C$ is the sum of the degrees of $c$ and $C$. \item Similarly to $\psi$, one can introduce a map $$\widehat{\psi} \colon K[X_1,\ldots,X_n] \rightarrow K\left[\frac{\partial }{\partial Y_1},\ldots,\frac{\partial }{\partial Y_n} \right],\quad X_i \mapsto \frac{\partial }{\partial Y_i}\,. $$ As above, $\widehat{\psi}$ is an isomorphism of rings and $\mathop{\text{\rm GL}}\nolimits_n(K)$-modules. Let $C$ be a covariant and $c$ a contravariant
on $K[X_1,\ldots,X_n]^{(d)}$. We define $C \vdash c$ by $$ (C \vdash c)(f) := \widehat{\psi}(C(f)) \left(c(f)\right)\, . $$ \item Assume $C \vdash c$ not to be zero. If $p > P$ then $C \vdash c$ is a contravariant of order $p - P$. If $p = P$ then $C \vdash c$ is an invariant. In both cases, the degree of $C \vdash c$ is the sum of the degrees of $C$ and $c$. \end{enumerate}
\section{Explicit invariants of cubic surfaces}
\begin{rems} \begin{enumerate} \item It is well known that the ring of invariants of quaternary cubic forms is generated by the six invariants of degrees 8, 16, 24, 32, 40, and 100~\cite[Sec.~9.4.5]{Do}. The first five generators are primary invariants~\cite[Def.~2.4.6]{DK}. Thus, the vector spaces of all invariants of degrees 8, 16, 24, 32 and 40 are of dimensions 1, 2, 3, 5, and 7. In general, these dimensions are encoded in the Molien series, which can be computed efficiently using character theory~\cite[Ch.~4.6]{DK}. \item In the lucky case that one is able to write down a basis of the vector space of all invariants of a given degree $d$, one can find an expression of a given invariant of degree $d$ by linear algebra. This requires that the invariant is known for sufficiently many surfaces. For cubic surfaces, this is provided by the pentahedral equation. \item Applying the methods above, we can write down many invariants for quaternary cubic forms. We start with the form $f$, its Hessian covariant $H(f)$, and the contravariant $\tilde{S}(f)$. Then we apply known covariants to contravariants and vice versa. Further, one can multiply two covariants or contravariants to get a new one. For efficiency, it is useful to keep the orders of the covariants and contravariants as small as possible. This way, they will not consist of too many terms.
\end{enumerate} \end{rems}
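As an aside, the linear-algebra step described in item 2 can be sketched numerically. The following fragment is a minimal illustration, assuming only that a basis of the invariants of the given degree and the target invariant can be evaluated on sufficiently many surfaces; the evaluation matrix below is a random stand-in, not actual invariants of cubic surfaces.

```python
import numpy as np

# Hypothetical illustration of the linear-algebra step: if b_1, ..., b_k is a
# known basis of the invariants of degree d, and both the basis and a target
# invariant I can be evaluated on enough sample surfaces, the coefficients of
# I in the basis follow from a linear system. The evaluations here are random
# stand-ins, not actual invariants of cubic surfaces.
rng = np.random.default_rng(0)
k, n_samples = 3, 10
B = rng.standard_normal((n_samples, k))   # B[i, j] = b_j evaluated on surface i
coeffs_true = np.array([2.0, -1.0, 0.5])  # pretend expansion of the target
I = B @ coeffs_true                       # target invariant on each surface
coeffs, *_ = np.linalg.lstsq(B, I, rcond=None)
assert np.allclose(coeffs, coeffs_true)
```

In practice the sample surfaces must be generic enough for the evaluation matrix to have full column rank; the pentahedral family provides such samples.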
\begin{prop} Let $f$ be a quaternary cubic form. With
\begin{align*} C_{4,0,4} &:= \tilde{S}(f), & C_{4,4} &:= H(f), \\ C_{6,2} &:= C_{4,0,4} \vdash f^2, & C_{9,3} &:= C_{4,0,4} \vdash (f \cdot C_{4,4}), \\ C_{10,0,2} &:= C_{6,2} \vdash C_{4,0,4}, & C_{11,1a} &:= C_{10,0,2} \vdash f, \\ C_{13,0,1} &:= C_{9,3} \vdash C_{4,0,4}, & C_{14,2} &:= C_{10,0,2} \vdash C_{4,4}, \\ C_{14,2a} &:= C_{13,0,1} \vdash f, & C_{19,1a} &:= C_{13,0,1} \vdash C_{6,2}, \end{align*}
the following expressions \begin{align*} I_8 &:= \frac{1}{2^{11} \cdot 3^9} C_{4,0,4} \vdash C_{4,4},\\ I_{16} &:= \frac{1}{2^{30} \cdot 3^{22}} C_{6,2} \vdash C_{10,0,2}, \\ I_{24} &:= \frac{1}{2^{41} \cdot 3^{33}} C_{10,0,2} \vdash C_{14,2}, \\ I_{32a} &:= C_{10,0,2} \vdash C_{11,1a}^2, \\ I_{32} &:= \frac{2}{5}(I_{16}^2 - \frac{1}{2^{60} \cdot 3^{44}} \cdot I_{32a}), \\ I_{40a} &:= C_{4,0,4} \vdash (C_{11,1a}^2 \cdot C_{14,2}), \\ I_{40} &:= \frac{-1}{100} \cdot I_8 \cdot I_{32} - \frac{1}{50} \cdot I_{16} \cdot I_{24}
- \frac{1}{2^{72} \cdot 3^{53} \cdot 5^2} I_{40a}, \end{align*} give the Clebsch-Salmon invariants $I_8,\ I_{16},\ I_{24},\ I_{32},$ and $I_{40}$. Further, with \begin{align*}
C_{11,1} :=& \frac{1}{2^{20} 3^{15}} C_{11,1a}, \\
C_{19,1} :=& \frac{1}{2^{33} \cdot 3^{24} \cdot 5} (C_{19,1a} + 2^{32} \cdot 3^{24} \cdot I_8 \cdot C_{11,1a}), \\
C_{27,1a} :=& \frac{1}{2^{42} 3^{33}} C_{13,0,1} \vdash C_{14,2a}, \\
C_{27,1} :=& I_{16} \cdot C_{11,1} + \frac{1}{200}(C_{27,1a} - 2 \cdot I_8^2 \cdot C_{11,1} - 10 \cdot I_8 \cdot C_{19,1}), \\
C_{43,1a} :=& \frac{1}{2^{68} \cdot 3^{53}} C_{13,0,1} \vdash ( C_{13,0,1} \vdash (C_{13,0,1} \vdash C_{4,4})), \\
C_{43,1} :=& \frac{-1}{1000} C_{43,1a} - \frac{1}{200} \cdot I_8^2 \cdot C_{27,1} + I_{16} \cdot C_{27,1} \\
& + \frac{1}{1000} \cdot I_8^3 \cdot C_{19,1} -\frac{1}{10} \cdot I_8 \cdot I_{16} \cdot C_{19,1} - I_{24} \cdot C_{19,1} \\
& + \frac{1}{200} \cdot I_8^2 \cdot I_{16} \cdot C_{11,1} + \frac{3}{20} \cdot I_8 \cdot I_{24} \cdot C_{11,1}, \end{align*} $C_{11,1},\ C_{19,1},\ C_{27,1},$ and $C_{43,1}$ are Salmon's linear covariants. Here, we use the first index to indicate the degree of an invariant, covariant, or contravariant. The second index is the order of a covariant, whereas the third index is the order of a contravariant. Finally, we can compute $I_{100}$ as the determinant of the 4 linear covariants. \end{prop}
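The final step, computing $I_{100}$ as the determinant of the four linear covariants, amounts to taking the determinant of the $4\times 4$ coefficient matrix of four linear forms. A sketch with placeholder forms, not the actual covariants $C_{11,1},\ldots,C_{43,1}$:

```python
import sympy as sp

# Placeholder sketch: the determinant of four linear forms in x, y, z, w is
# the determinant of their 4x4 coefficient matrix. The forms below are
# arbitrary stand-ins, not the actual covariants C_{11,1}, ..., C_{43,1}.
x, y, z, w = sp.symbols('x y z w')
forms = [x + 2*y, 3*y - z, z + w, x - w]
M = sp.Matrix([[f.coeff(v) for v in (x, y, z, w)] for f in forms])
det = M.det()
assert det == -1
```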
\begin{proof} The following {\tt magma} script shows in approximately one second of CPU time that the algorithm as described above coincides with Salmon's formulas for the pentahedral family, as the last two comparisons result in {\tt true}. \begin{verbatim} r5 := PolynomialRing(Integers(),5); ff5<a,b,c,d,e> := FunctionField(Rationals(),5); r4<x,y,z,w> := PolynomialRing(ff5,4);
lfl := [x,y,z,w,-x-y-z-w]; col := [ff5.i : i in [1..5]]; f := a*x^3 + b*y^3 + c*z^3 + d*w^3 + e*(-x-y-z-w)^3;
sy_f := [ElementarySymmetricPolynomial(r5,i) : i in [1..5]]; sigma := [Evaluate(sf,col) : sf in sy_f];
I_8 := sigma[4]^2 - 4 *sigma[3] * sigma[5]; I_16 := sigma[1] * sigma[5]^3; I_24 := sigma[4] * sigma[5]^4; I_32 := sigma[2] * sigma[5]^6; I_40 := sigma[5]^8;
L_11 := sigma[5]^2 * &+[ col[i] * lfl[i] : i in [1..5]]; L_19 := sigma[5]^4 * &+[ 1/col[i] * lfl[i] : i in [1..5]]; L_27 := sigma[5]^5 * &+[ col[i]^2 * lfl[i] : i in [1..5]]; L_43 := sigma[5]^8 * &+[ col[i]^3 * lfl[i] : i in [1..5]];
inv := ClebschSalmonInvariants(f); cov := LinearCovariantsOfCubicSurface(f);
inv eq [I_8, I_16, I_24, I_32, I_40]; cov eq [L_11, L_19, L_27, L_43]; \end{verbatim} \end{proof}
\section{Performance test} Computing the Clebsch-Salmon invariants, following the approach above, for 100 cubic surfaces chosen at random with two-digit integer coefficients takes about 3 seconds of CPU time. Most of the time is used for the direct evaluation of the invariant $S$ of ternary cubics by transvection. Note that computing the contravariant $\tilde{S}$ by interpolation requires 35 evaluations of the invariant $S$ of a ternary cubic. Computing both contravariants $\tilde{S}$ and $\tilde{T}$ and the dual surface takes about 18 seconds of CPU time for the same 100 randomly chosen surfaces.
For comparison, the computation of the pentahedral form by inspecting the singular points of the Hessian takes about 10 seconds per example~\cite[Sec.~5.11]{EJ1}.
All computations are done on one core of an Intel i5-2400 processor running at 3.1 GHz.
\end{document}
\begin{document}
\title[Lyapunov Functionals and Local Dissipativity] {Lyapunov Functionals and Local Dissipativity for the Vorticity Equation in $\mathrm{L^{p}}$ and Besov Spaces}
\author[Utpal Manna]{Utpal Manna*\\ Department of Mathematics, University of Wyoming,\\ Laramie, Wyoming 82071, USA\\ e-mail: utpal@uwyo.edu\\ \\} \thanks{* This research is supported by Army Research Office, Probability and Statistics Program, grant number DODARMY1736}
\author[S.S. Sritharan]{\\ \\S.S. Sritharan*\\Department of Mathematics, University of Wyoming,\\ Laramie, Wyoming 82071, USA\\ e-mail: sri@uwyo.edu}
\subjclass[2000] {35Q35, 47H06, 76D03, 76D05}
\keywords{Vorticity equation, Lyapunov function, Dissipative operator, Littlewood-Paley decomposition, Besov Spaces}
\begin{abstract} In this paper we establish the local Lyapunov property of certain $\mathrm{L}^{p}$ and Besov norms of the vorticity field. By studying the vorticity equation, we resolve in part certain open problems posed by Tosio Kato for the three-dimensional Navier-Stokes equation. The local dissipativity of the sum of the linear and nonlinear operators of the vorticity equation is established. One of the main techniques used here is Littlewood-Paley analysis. \end{abstract}
\maketitle
\section{Introduction}
Stability and control of a dynamical system are often studied using Lyapunov functions~\cite{FrKo96, La66, La62}. The local Lyapunov property we study in this paper can thus be of interest for the understanding, control and stabilization of turbulent fields~\cite{Sr98}. This property also sheds some light on the research on global Navier-Stokes solutions in super-critical spaces (for definitions and examples of these spaces see~\cite{Ca95} and~\cite{CaMe95}). Weak solutions of the Navier-Stokes equation satisfy the energy inequality, which in turn implies that the $\mathrm{L}^2$-norm of the velocity decreases in time~\cite{La69}. This idea was generalized by Tosio Kato~\cite{Ka90}, who proved that for every solution of the Navier-Stokes equation in $\mathbb{R}^m$ ($m\geq 3$) there exists a large number of Lyapunov functions, which decrease monotonically in time if the solution has small $\mathrm{L}^m(\mathbb{R}^m)$-norm. More specifically, Kato proved the local Lyapunov property in the $\mathrm{L}^p$-norm for $1<p<\infty$ and in the $\mathrm{W}^{s,p}$-norm for $s>0,\ 2\leq p<\infty$. He also noted that for any Lyapunov function $\mathfrak{L}(u)$ and any monotone increasing function $\Phi$, $\Phi(\mathfrak{L}(u))$ is again a Lyapunov function. Moreover, Kato proved the local dissipativity of the sum of the linear and nonlinear operators of the Navier-Stokes equation in the $\mathrm{L}^p$-norm for $2\leq p<\infty$. However, the local dissipativity in the $\mathrm{W}^{s,p}$-norm for $s > 0$ has remained an open problem.
Cannone and Planchon~\cite{CaPl00} proved the Lyapunov property for the $3$-D Navier-Stokes equation in Besov spaces. In particular, they proved that if $p, q \geq 2$ and $\frac{3}{p}+\frac{2}{q}>1$, then as long as the $\dot{B}_{\infty}^{-1, \infty}$-norm of the velocity is small, the $\dot{B}_{p}^{-1+3/p, q}$-norm of the velocity decreases in time.
In~\cite{KoTa01} Koch and Tataru considered the local and global (in time) well-posedness for the incompressible Navier-Stokes equation and proved the existence and uniqueness of a global mild solution in $BMO^{-1}$ provided that the initial data is small enough in this space. Due to Cannone and Planchon~\cite{CaPl00}, the existence of Lyapunov functions for small $\dot{B}_{\infty}^{-1, \infty}$-norm is known, but the global solvability of the Navier-Stokes equation in this space remains an open problem. Noting the embedding $BMO^{-1}\subset \dot{B}_{\infty}^{-1, \infty}$, $BMO^{-1}$ is thus the largest space of initial data for which a global mild solution has been shown to exist.
Recently, P. G. Lemari\'{e}-Rieusset~\cite{Lr02} has extended the result of Cannone and Planchon~\cite{CaPl00} to a larger class of initial data. He proved that for initial data $u_{0} \in \dot{B}_{p}^{s,q} \cap BMO^{-1}$ where $s > -1$, $p\geq 2$, $q\geq 1$ and $s + \frac{2}{q} > 0$, there exists a constant $C_{0} > 0$, independent of $p$ and $q$, such that if $u$ is a Koch-Tataru solution of the Navier-Stokes equation satisfying $\sup_{t}\parallel u(t) \parallel_{\dot{B}_{\infty}^{-1, \infty}} < C_{0}$, then $ t\rightarrow\parallel u(t)\parallel_{\dot{B}_{p}^{s,q}}$ is a Lyapunov function.
Local monotonicity of a different type has been used in proving solvability in unbounded domains for the Navier-Stokes equations in $2$-D~\cite{MeSr00}, in $3$-D~\cite{BaSr01} and for the modified $2$-D Navier-Stokes equations with artificial compressibility~\cite{MaMeSr06}. Local monotonicity has also been useful in control theory~\cite{BaSr01}. For extensive theories and applications of dissipative and accretive operators see Barbu~\cite{Ba76} and Browder~\cite{Br76}.
In this paper, we achieve a partial resolution to the open problems posed by Kato~\cite{Ka90} for the Navier-Stokes equation by studying the vorticity equation: \begin{equation}\label{e1.1}\left\{\begin{aligned} &\partial_{t} \omega\ - \nu\triangle\omega+ u\cdot\nabla\omega - \omega\cdot\nabla u = 0,\ \text{in} \ R^{m}\times R_{+},\\ &\nabla\cdot\omega =0,\ \text{in} \ R^{m}\times R_{+},\\ &\omega(x,0)= \omega_{0}(x), \ x \in \ R^{m}. \end{aligned}\right.\end{equation}
To be specific, we have proved that the vorticity equation has a family of Lyapunov functions in $\mathrm{L}^{p}(\mathbb{R}^m)$ for $2\leq p<\infty$ and $m \geq 3$, provided that the $\mathrm{L}^{m}$-norm of the velocity is small enough. We then prove that the $\dot{B}_{p}^{-1+3/p, q}$-norm of the vorticity is a Lyapunov function for the $3$-D vorticity equation, provided that the velocity and the vorticity are small in the $\dot{B}_{\infty}^{-1, \infty}$-norm and the $\dot{B}_{\infty}^{-2, \infty}$-norm respectively.
We have also proved the dissipativity of the sum of the linear and nonlinear operators of the vorticity equation \eqref{e1.1} in $\mathrm{L}^{p}$ for $2\leq p<\infty$, which in part answers the open problem of Kato for the local dissipativity of the Navier-Stokes operators in $\mathrm{W}^{1,p}$-norm.
In Sections 2 and 3 we recall some basic facts concerning the Littlewood-Paley decomposition, homogeneous Besov spaces and the paraproduct rule. The main results are presented in Section 4.
\section{Some Definitions and Estimates} \begin{defn}($Duality\ Map$)
The mapping $G: X\rightarrow 2^{X^{\star}}$ is
called the duality mapping of the space $X$ if $$ G(x) = \{x^{\star}\in X^{\star};\ \langle x, x^{\star}\rangle =\ \parallel x \parallel_{X}^{2}\ =\ \parallel x^{\star}\parallel_{X^{\star}}^{2} \}.$$ \end{defn} \begin{rem} The duality map for $\mathrm{L}^{p}$ is given by $$G(x)= \frac{x\mid x\mid^{p-2}}{\parallel x\parallel_{p}^{p-2}}.$$ \end{rem}
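The duality-map identities $\langle x, G(x)\rangle = \parallel x\parallel_{p}^{2} = \parallel G(x)\parallel_{p'}^{2}$ can be checked numerically in the discrete ($\ell^{p}$) analogue; the sketch below applies the same formula $G(x)= x\mid x\mid^{p-2}/\parallel x\parallel_{p}^{p-2}$ componentwise to a random vector.

```python
import numpy as np

# Discrete (l^p) check of the duality-map identities: with
# G(x) = x|x|^{p-2} / ||x||_p^{p-2}, one has <x, G(x)> = ||x||_p^2 and
# ||G(x)||_{p'} = ||x||_p, where 1/p + 1/p' = 1.
rng = np.random.default_rng(1)
p = 4.0
pp = p / (p - 1.0)                         # conjugate exponent p'
x = rng.standard_normal(100)
norm_p = np.sum(np.abs(x) ** p) ** (1 / p)
G = x * np.abs(x) ** (p - 2) / norm_p ** (p - 2)
assert np.isclose(np.dot(x, G), norm_p ** 2)
assert np.isclose(np.sum(np.abs(G) ** pp) ** (1 / pp), norm_p)
```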
\begin{defn}($Dissipative\ Operator$) An operator $\textit{A}$ is said to be dissipative
if $$\langle \textit{A} x - \textit{A} y, \ G(x-y) \rangle \leq 0,\qquad \ \forall x, y \in \emph{D}(\textit{A}).$$
An operator $\textit{A}$ is said to be accretive if $- \textit{A}$ is
dissipative.\\
See~\cite{Ba76} and~\cite{Br76} for extensive theories and applications of nonlinear operators in Banach spaces. \end{defn}
\begin{defn}($Lyapunov\ Function$) Let $v$ be a solution of the Navier-Stokes equation. Then any function $\mathfrak{L}(v)(t)$ monotonically decreasing in time is called a Lyapunov function associated to $v$. \end{defn} The most well-known example is certainly provided by
energy~\cite{La69}
$$ E(v)(t) = \frac{1}{2}\parallel v(t)\parallel_{2}^{2}.$$ The energy equality for the Navier-Stokes equation yields \begin{align} \frac{d}{dt}E(t) + \nu\parallel\nabla v(t)\parallel_{2}^{2}\ = 0,\nonumber \end{align} which proves that $E(t)$ is a Lyapunov function.
Let us now recall two lemmas due to Kato~\cite{Ka90}.
\begin{lem} Let $2\leq p <\infty\ \text{and}\ \phi\in \mathrm{W}^{1,p}$. Define \begin{align}\label{e2.1} &Q_{p}(\phi) = \int_{\partial\phi(x)\neq 0}\mid\phi(x)\mid^{p-2}\mid\nabla\phi(x)\mid^{2}\/\mathrm{d}\/ x\ \geq 0. \end{align} Then \begin{align}\label{e2.2}
CQ_{p}(\phi) \leq -\langle\mid\phi\mid^{p-2}\phi, \Delta\phi\rangle \ < \infty, \end{align} where $C$ denotes a positive constant. \end{lem}
\begin{lem} Let $2\leq p <\infty$ and $\phi\in \mathrm{W}^{1,p}$. Then \begin{align}\label{e2.3}
\parallel \phi \parallel_{\frac{mp}{m-2}} \ \leq CQ_{p}(\phi)^\frac{1}{p}. \end{align} \end{lem}
\begin{lem} Let $u$ be the velocity field obtained from $\omega$ via the Biot-Savart law: \begin{align}\label{e2.4} u(x) = -\frac{\Gamma(m/2+1)}{m(m-2)\pi^{m/2}}\int_{\mathbb{R}^{m}}\frac{(x-y)}{\mid x-y
\mid^{m}}\ \times \omega(y) \/\mathrm{d}\/ y, \qquad x \in {\mathbb{R}^{m}},
m\geq 3. \end{align} \textbf{(a)} Assume that $ 1 < p <\infty $. Then for every divergence-free vector field $u$ whose gradient is in $\mathrm{L}^{p}$, there exists a $ C > 0 ,$ depending on $p$, such that \begin{align}\label{e2.5}
\parallel\nabla u \parallel_{p}\leq \ \ C\parallel\omega\parallel_{p}. \end{align} \textbf{(b)} If $\omega\in \mathrm{L}^{1}(\mathbb{R}^{m}) \cap \mathrm{L}^{p}(\mathbb{R}^{m})$, $\frac{m}{m-1} \ < \ p \ \leq \ \infty $, then \begin{align}\label{e2.6} \parallel u \parallel_{\mathrm{L}^{p}(\mathbb{R}^{m})} \ \leq C\big(\parallel\omega\parallel_{\mathrm{L}^{1}(\mathbb{R}^{m})} \ + \ \parallel\omega\parallel_{\mathrm{L}^{p}(\mathbb{R}^{m})}\big). \end{align}
\end{lem} \begin{proof} \textbf{(a)} See Theorem 3.1.1 in~\cite{Ch98}.\\ \textbf{(b)} The proof is due to Ying and Zhang~\cite{LyZp96}, Lemma 3.3.1. \end{proof}
\section{Littlewood-Paley Decomposition and \\ Besov Spaces}
In this section, we recall some classical results concerning the homogeneous Besov spaces in terms of the Littlewood-Paley decomposition. Several related embedding relations and inequalities will also be given here. For more details the reader is referred to the books~\cite{JbLo76},~\cite{Ca04},~\cite{Ch98},~\cite{Pe76},~\cite{Tr83} for a comprehensive treatment. \subsection{Littlewood-Paley Decomposition:} Let us start with the Littlewood-Paley decomposition in $\mathbb{R}^{3}$. To this end, we take an arbitrary function $\psi$ in the Schwartz class $\mathcal{S}(\mathbb{R}^{3}) $ whose Fourier transform $\hat{\psi}$ is such that \begin{eqnarray} supp\ \hat{\psi} \subset \{\xi, \frac{1}{2}\leq \mid\xi\mid\leq 2\}, \end{eqnarray} and \begin{align} \forall\xi\neq 0, \quad\sum_{j \in\mathbb{Z}} \hat{\psi}(\frac{\xi}{2^{j}}) = 1.\nonumber \end{align} Let us define $\varphi$ by \begin{align} \hat{\varphi}(\xi) = 1 - \sum_{j\geq 0}\hat{\psi}(\frac{\xi}{2^{j}}),\nonumber \end{align} and hence \begin{eqnarray} supp\ \hat{\varphi} \subset \{\xi, \mid\xi\mid\leq 1\}. \end{eqnarray} For $j\in \mathbb{Z}$, we write $\varphi_{j}(x) = 2^{3j}\varphi(2^{j}x)$. We denote by $S_{j}$ and $\triangle_{j}$, the convolution operators with $\varphi_{j}$ and $\psi_{j}$ respectively. Hence \begin{align} S_{j}(f) = f\star\varphi_{j},\nonumber \end{align} and \begin{align} \triangle_{j}f = \psi_{j}\star f,\quad\text{where}\ \psi_{j}(x) = 2^{3j}\psi(2^{j}x).\nonumber \end{align}
Then \begin{align}
S_{j} = \sum_{p<j} \triangle_{p}\quad\text{and}\quad
I = \sum_{j\in \mathbb{Z}}\ \triangle_{j}.\nonumber \end{align} The dyadic decomposition \begin{eqnarray}\label{e3.3}
u = \sum_{j\in \mathbb{Z}}\ \triangle_{j} u, \end{eqnarray} is called the homogeneous Littlewood-Paley decomposition of $u$ and converges only in the quotient space $\mathcal{S}^{'}/_{\mathcal{P}}$ where $\mathcal{S}^{'}$ is the space of tempered distributions and $\mathcal{P}$ is the space of polynomials. Now let us mention here the following quasi-orthogonality properties of the dyadic decomposition~\cite{Ch98} (proposition 2.1.1): \begin{eqnarray}\label{e3.4} \triangle_{p}\triangle_{q}u = 0 \quad\text{if}\quad\mid p-q \mid \geq 2, \end{eqnarray} \begin{eqnarray}\label{e3.5} \triangle_{p}(S_{q-2}u\triangle_{q}u) = 0 \quad\text{if}\quad\mid p-q \mid \geq 4. \end{eqnarray} \subsection{Besov Spaces:} Let $0<p, q\leq\infty$ and $s\in \mathbb{R}$. Then a tempered distribution $f$ belongs to the homogeneous Besov space $\dot{B}_{p}^{s,q}$ if and only if \begin{eqnarray}\label{e3.6} \Big(\sum_{j\in\mathbb{Z}}\ 2^{jsq}\parallel\triangle_{j}f\parallel_{p}^{q}\Big)^{\frac{1}{q}} < \infty \end{eqnarray} and $f = \sum_{j\in \mathbb{Z}}\triangle_{j}f$ in $ \mathcal{S}^{'}/_{\mathcal{P}_{m}}$ where $\mathcal{P}_{m}$ is the space of polynomials of degree $\leq m$ and $m = [s-\frac{d}{p}]$, the integer part of $s-\frac{d}{p}$.
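A discrete sketch of the decomposition \eqref{e3.3}: on a periodic grid one can mimic the dyadic blocks with sharp Fourier cutoffs on the annuli $2^{j}\leq\mid\xi\mid<2^{j+1}$ (in place of the smooth bump $\psi$), in which case the blocks partition the frequencies exactly and resum to $f$; the smooth version converges only in $\mathcal{S}^{'}/_{\mathcal{P}}$.

```python
import numpy as np

# Sharp-cutoff sketch of a Littlewood-Paley decomposition on the circle:
# a low block |xi| < 1 plus dyadic annuli 2^j <= |xi| < 2^{j+1}. Since the
# sharp cutoffs partition the frequencies, the blocks resum to f exactly.
N = 256
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
f = np.sin(3 * x) + 0.5 * np.cos(17 * x) + np.cos(60 * x)
fhat = np.fft.fft(f)
xi = np.abs(np.fft.fftfreq(N, d=1.0 / N))      # integer frequencies |xi|

blocks = [np.fft.ifft(np.where(xi < 1, fhat, 0)).real]   # low part (the mean)
j = 0
while 2 ** j <= xi.max():
    mask = (xi >= 2 ** j) & (xi < 2 ** (j + 1))
    blocks.append(np.fft.ifft(np.where(mask, fhat, 0)).real)
    j += 1

assert np.allclose(sum(blocks), f)
```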
Besov space is a quasi-Banach space~\cite{RuSi96}. Here we recall the following standard embedding rules~\cite{Tr83} (chapter 2.7):\\ If $s_{1} > s_{2}$ and $p_{2}\geq p_{1}\geq 1$ such that $s_{1}-\frac{d}{p_{1}} = s_{2}-\frac{d}{p_{2}}$, then \begin{eqnarray} \dot{B}_{p_{1}}^{s_{1},q}\hookrightarrow \dot{B}_{p_{2}}^{s_{2},q}. \end{eqnarray} Moreover if $q_{1} < q_{2}$ then \begin{eqnarray} \dot{B}_{p}^{s,q_{1}}\hookrightarrow \dot{B}_{p}^{s,q_{2}}. \end{eqnarray} The above mentioned embeddings are also valid for inhomogeneous Besov spaces. For more embedding theorems and their proofs we refer the readers to~\cite{Pe76} and~\cite{Tr83}.
Next let us recall the following result from Chapter 3 in
Triebel~\cite{Tr83}:
\begin{lem} Let $1\leq p, q\leq\infty$ and $s<0$. Then $\forall f\in \dot{B}_{p}^{s,q}$ we have, \begin{eqnarray}\label{e3.9} \Big(\sum_{j\in\mathbb{Z}}\ 2^{jsq}\parallel\triangle_{j}f\parallel_{p}^{q}\Big)^{\frac{1}{q}} < \infty \ \Leftrightarrow \ \Big(\sum_{j\in\mathbb{Z}}\ 2^{jsq}\parallel S_{j}f\parallel_{p}^{q}\Big)^{\frac{1}{q}} < \infty. \end{eqnarray} \end{lem}
Now we recall the following versions of Bernstein inequalities (chapter 3 in~\cite{Lr02}):
\begin{lem} Let $1\leq p\leq\infty$. Then there exist constants $C_{0}, C_{1}, C_{2} > 0$ such that \\ \textbf{(a)} If $f$ has its frequency in a ball $\mathbb{B}(0, \lambda)$ $(supp\ \mathcal{F}(f) \subset \mathbb{B}(0, \lambda))$ then \begin{eqnarray}\label{e3.10} \parallel (-\triangle)^{\frac{s}{2}} f\parallel_{p} \ \leq C_{0} \ \lambda^{\mid s\mid}\parallel f\parallel_{p}. \end{eqnarray} \textbf{(b)} If $f$ has its frequency in an annulus $\mathbb{C}(0, A\lambda, B\lambda)$ \\ $(supp\ \mathcal{F}(f) \subset \{\xi, A\lambda \leq \mid\xi\mid \leq B\lambda\} )$ then \begin{eqnarray}\label{e3.11} C_{1} \ \lambda^{\mid s\mid}\parallel f\parallel_{p} \ \leq \ \parallel (-\triangle)^{\frac{s}{2}} f\parallel_{p} \ \leq C_{2} \ \lambda^{\mid s\mid}\parallel f\parallel_{p}. \end{eqnarray} \end{lem}
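The scaling in the Bernstein inequalities can be illustrated numerically: for a single Fourier mode $\mid\xi\mid = k$, the operator $(-\triangle)^{s/2}$ acts as multiplication by $k^{s}$, so both inequalities in (b) hold with $C_{1}=C_{2}=1$. A sketch with a spectral implementation of $(-\triangle)^{s/2}$ on a periodic grid:

```python
import numpy as np

# For f supported on a single frequency |xi| = k, the Fourier multiplier
# |xi|^s of (-Delta)^{s/2} reduces to the constant k^s, which is the
# equality case of the two-sided Bernstein estimate.
N, k, s = 256, 16, 1.5
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
f = np.cos(k * x)
xi = np.abs(np.fft.fftfreq(N, d=1.0 / N))
g = np.fft.ifft(xi ** s * np.fft.fft(f)).real  # (-Delta)^{s/2} f via Fourier
assert np.allclose(g, k ** s * f)
```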
Now let us state here the modified Poincar\'{e} type inequality given by Planchon~\cite{Pl00}.
\begin{lem} Let $f\in \mathcal{S}$, the Schwartz space, whose Fourier transform is supported outside the ball $\mathbb{B}(0,1)$. Then for $p\geq 2$, \begin{eqnarray}\label{e3.12} \int\mid f\mid^{p}\/\mathrm{d}\/ x\ \leq \ C_{p}\int\mid\nabla f\mid^{2} \mid f\mid ^{p-2}\/\mathrm{d}\/ x. \end{eqnarray} \end{lem}
\subsection{The Paraproduct rule:} Another important tool in Littlewood-Paley analysis is the paraproduct operator introduced by J. M. Bony~\cite{Bo81}. The idea of the paraproduct enables us to define a new product between distributions which turns out to be continuous in many functional spaces where the pointwise multiplication does not make sense. This is a powerful tool for the analysis of nonlinear partial differential equations. \\ Let $f, g\in \mathcal{S}^{'}$. Then using the formal Littlewood-Paley decomposition, \begin{eqnarray}
f = \sum_{j\in \mathbb{Z}} \triangle_{j} f, \qquad g = \sum_{j\in \mathbb{Z}} \triangle_{j}
g.\nonumber \end{eqnarray} Hence \begin{eqnarray} fg &=& \sum_{j, l}\triangle_{j} f \triangle_{l}
g\nonumber\\
&=& \sum_j\sum_{l<j-2}\triangle_{j} f \triangle_{l}
g + \sum_j\sum_{l>j+2}\triangle_{j} f \triangle_{l}
g + \sum_j\sum_{\mid l-j\mid\leq 2}\triangle_{j} f \triangle_{l}
g \nonumber\\
&=& \sum_j\sum_{l<j-2}\triangle_{j} f \triangle_{l}
g + \sum_l\sum_{j<l-2}\triangle_{j} f \triangle_{l}
g + \sum_j\sum_{\mid l-j\mid\leq 2}\triangle_{j} f \triangle_{l}
g \nonumber\\ &=& \sum_{j}\triangle_{j}f S_{j-2}g + \sum_{j}\triangle_{j}g S_{j-2}f + \sum_{\mid l-j\mid \leq 2}\triangle_{j}f \triangle_{l}g\nonumber. \end{eqnarray} In other words, the product of two tempered distributions is decomposed into two homogeneous paraproducts, respectively \begin{eqnarray} \dot\pi(f,g) = \sum_{j}\triangle_{j}f S_{j-2}g \qquad \text{and} \qquad \dot\pi(g,f) = \sum_{j}\triangle_{j}g S_{j-2}f \nonumber, \end{eqnarray} plus a remainder \begin{eqnarray} R(f,g) = \sum_{\mid l-j\mid \leq 2}\triangle_{j}f \triangle_{l}g\nonumber. \end{eqnarray}
$\dot\pi$ is called the homogeneous paraproduct operator and the convergence of the above series holds true in the quotient space $\mathcal{S}^{'}/_{\mathcal{P}}$. Finally, using the quasi-orthogonality properties \eqref{e3.4} and \eqref{e3.5} and neglecting some non-diagonal terms for simplicity (the contributions of these non-diagonal terms are absorbed by the terms being kept, and neglecting them does not affect the convergence of the paraproducts~\cite{Ca95, CaMe95}), we obtain \begin{eqnarray}\label{e3.13} \triangle_{j}(fg) = \triangle_{j}f S_{j-2}g + \triangle_{j}g S_{j-2}f + \triangle_{j}\big(\sum_{k\geq j}\triangle_{k}f \triangle_{k}g\big). \end{eqnarray} We refer the reader to~\cite{Ca95},~\cite{Ch98},~\cite{Me81},~\cite{RuSi96} for extensive studies on paraproducts.
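The regrouping of $\sum_{j,l}\triangle_{j}f\,\triangle_{l}g$ into the two paraproducts and the remainder is an exact algebraic identity, valid for any partition of $f$ and $g$ into frequency blocks. A discrete sketch using sharp dyadic cutoffs (list index $0$ playing the role of the low-frequency part, so the indexing is shifted relative to $j\in\mathbb{Z}$):

```python
import numpy as np

def dyadic_blocks(f):
    """Sharp dyadic blocks of f; index 0 is the low part |xi| < 1."""
    n = len(f)
    fhat = np.fft.fft(f)
    xi = np.abs(np.fft.fftfreq(n, d=1.0 / n))
    masks = [xi < 1]
    j = 0
    while 2 ** j <= xi.max():
        masks.append((xi >= 2 ** j) & (xi < 2 ** (j + 1)))
        j += 1
    return [np.fft.ifft(np.where(m, fhat, 0)).real for m in masks]

N = 128
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
f = np.sin(2 * x) + np.cos(9 * x)
g = np.cos(5 * x) + 0.3 * np.sin(20 * x)

F, G = dyadic_blocks(f), dyadic_blocks(g)
J = len(F)
S = lambda blocks, j: sum(blocks[:max(j, 0)], np.zeros(N))  # S_j = sum_{p<j}
pi_fg = sum(F[j] * S(G, j - 2) for j in range(J))           # paraproduct pi(f,g)
pi_gf = sum(G[j] * S(F, j - 2) for j in range(J))           # paraproduct pi(g,f)
R = sum(F[j] * G[l] for j in range(J) for l in range(J) if abs(j - l) <= 2)
assert np.allclose(pi_fg + pi_gf + R, f * g)
```

The assertion holds because the three index sets $\{l\leq j-3\}$, $\{j\leq l-3\}$ and $\{\mid j-l\mid\leq 2\}$ partition all pairs $(j,l)$, independently of the particular cutoffs chosen.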
\section{Main Results}
\begin{thm}[Local Lyapunov Property in $\mathrm{L}^p$] Let $m\geq 3$, $2\leq p <\infty$. Let $\omega$ be the solution of the vorticity equation \eqref{e1.1} such that \begin{align} & u\in \mathrm{C}{([0,T];\mathrm{L}^{m}\cap \mathrm{L}^{p})}, \qquad\nabla u\in \mathrm{L}^{1}_{loc}((0, T); \mathrm{L}^{p}), \nonumber \end{align} and \begin{align} &\omega\in \mathrm{C}{([0,T];\mathrm{L}^{m}\cap \mathrm{L}^{p})},\qquad\nabla\omega\in \mathrm{L}^{1}_{loc}((0, T); \mathrm{L}^{p}),\quad\text{for} \ 0 < T \leq\infty.\nonumber \end{align} Then \begin{eqnarray}\label{e4.1} \partial_{t}\parallel\omega(t)\parallel_{p}^{p} &\leq& -C(\nu - K \parallel u(t)\parallel_{m})Q_{p}(\omega(t)) , \qquad 0 < t < T, \end{eqnarray} where $K$ denotes a positive constant depending upon $m$ and $p$. \end{thm} This implies that, for small $\mathrm{L}^m$-norm of the velocity, $t\rightarrow\parallel\omega(t)\parallel_{\mathrm{L}^p}$ is a Lyapunov function.
\begin{proof} Consider, \begin{eqnarray} \partial_{t}\parallel\omega\parallel_{p}^{p} \nonumber &=& \frac{\partial}{\partial t}\int\mid\omega\mid^{p}\/\mathrm{d}\/ x \ = \ \frac{\partial}{\partial t}\int\mid\omega^{2}\mid^{p/2}\/\mathrm{d}\/ x\nonumber \\ &=& p \int \mid\omega\mid^{p-2}\omega\cdot\frac{\partial \omega}{\partial t}\/\mathrm{d}\/ x \ = \ p\langle\mid\omega\mid^{p-2}\omega, \partial_{t}\omega\rangle\nonumber \\ &=& p\langle\mid\omega\mid^{p-2}\omega,\ \nu\triangle\omega - u\cdot\nabla\omega + \omega\cdot\nabla u\rangle\nonumber \\ &=& \nu p \langle\mid\omega\mid^{p-2}\omega, \ \triangle\omega\rangle \ - \ p \langle\mid\omega\mid^{p-2}\omega, \ u\cdot\nabla\omega\rangle\nonumber\\ &&\quad + \ p \langle\mid\omega\mid^{p-2}\omega, \ \omega\cdot\nabla u\rangle.\label{e4.2} \end{eqnarray} Using Lemma 2.5 on the first term of the right hand side, we have from \eqref{e4.2} \begin{eqnarray}\label{e4.3} \partial_{t}\parallel\omega\parallel_{p}^{p}\ &\leq& -C \nu Q_{p}(\omega) - \ p \langle\mid\omega\mid^{p-2}\omega, \ u\cdot\nabla\omega\rangle \ + \ p \langle\mid\omega\mid^{p-2}\omega, \ \omega\cdot\nabla u\rangle\nonumber\\ \end{eqnarray} Now we need to estimate the second and the third terms of the right hand side of the equation \eqref{e4.3}.
Using the fact that $\mathop{\mathrm{Div}} u = 0$, we have \begin{eqnarray}\label{e4.4} u\cdot\nabla\omega \ = \ u_{i}\frac{\partial\omega_{j}}{\partial x_{i}} \ = \ \frac{\partial}{\partial x_{i}}(u_{i}\omega_{j}) \ - \ \omega_{j}\frac{\partial u_{i}}{\partial x_{i}} \ = \nabla\cdot(u\otimes\omega), \end{eqnarray} where $\otimes$ represents the tensor product.\\ Then \begin{eqnarray} \mid\langle\mid\omega\mid^{p-2}\omega, \ u\cdot\nabla\omega\rangle\mid &=& \mid\langle\mid\omega\mid^{p-2}\omega, \ \nabla\cdot(u\otimes\omega)\rangle\mid\nonumber \\ &=& \mid\langle\nabla(\mid\omega\mid^{p-2}\omega), \
u\otimes\omega\rangle\mid\nonumber \\ &\leq & \langle\mid\nabla(\mid\omega\mid^{p-2}\omega)\mid, \
\mid u \otimes\omega\mid\rangle.\label{e4.5} \end{eqnarray} Notice that $ \mid\nabla\mid\omega\mid^{p-2}\omega\mid
\ \leq\ C\mid\omega\mid^{p-2}\mid\nabla\omega\mid.$
Hence using this and H\"{o}lder's inequality in \eqref{e4.5} we have \begin{eqnarray} \mid\langle\mid\omega\mid^{p-2}\omega, \ u\cdot\nabla\omega\rangle\mid &\leq& \langle\mid\omega\mid^{p-2}\mid\nabla\omega\mid, \ \mid u
\otimes\omega\mid\rangle\nonumber\\
&\leq &
\parallel\ \mid\omega\mid^{p-2}\mid\nabla\omega\mid \ \parallel_{q} \
\parallel u\otimes\omega\parallel_{q'},\quad \text{where}\
\frac{1}{q}+\frac{1}{q'}=1.\nonumber\\ \label{e4.6} \end{eqnarray} Now \begin{align} \parallel \ \mid\omega\mid^{p-2}\mid\nabla\omega\mid \ \parallel_{q}^{q} &=\int \mid\omega\mid^{q(p-2)} \ \mid\nabla\omega\mid^{q}\/\mathrm{d}\/ x\nonumber \\&=\int \mid\omega\mid^{q(p-2)/2} \ \big(\ \mid\omega\mid^{p-2} \ \mid\nabla\omega\mid^{2}\ \big)^{q/2}\/\mathrm{d}\/ x.\nonumber \end{align} Since $\frac{2-q}{2}+\frac{q}{2}=1$, H\"{o}lder inequality yields \begin{align} &\parallel \ \mid\omega\mid^{p-2}\mid\nabla\omega\mid \ \parallel_{q}^{q}\nonumber \\&\leq\Big[\int \big(\ \mid\omega\mid^{q(p-2)/2}\ \big)^{2/(2-q)}\/\mathrm{d}\/ x \Big]^\frac{2-q}{2} \ \Big[\int \Big\{\big(\ \mid\omega\mid^{p-2} \ \mid\nabla\omega\mid^{2}\big)^{q/2} \Big\}^{2/q}\/\mathrm{d}\/ x \Big ]^\frac{q}{2} \nonumber \\&=\Big[\int\mid\omega\mid^{q(p-2)/(2-q)}\/\mathrm{d}\/ x\Big]^{(2-q)/2} \ \Big[ \int\mid\omega\mid^{p-2}\ \mid\nabla\omega\mid^{2}\/\mathrm{d}\/ x\Big]^{q/2}\nonumber \\&=\ \parallel\omega\parallel_{r}^{q(p-2)/2} \ Q_{p}(\omega)^{q/2}, \quad \text{where} \ r = \frac{q(p-2)}{(2-q)}. \nonumber \end{align} Hence \begin{align}\label{e4.7} \parallel \ \mid\omega\mid^{p-2}\mid\nabla\omega\mid \ \parallel_{q}\ \leq\ \parallel\omega\parallel_{r}^{(p-2)/2} \ Q_{p}(\omega)^{1/2}. \end{align} Again by H\"{o}lder, \begin{align}\label{e4.8} \parallel u\otimes\omega\parallel_{q'}\ \leq\ C\parallel u\parallel_{m} \ \parallel\omega\parallel_{r},\quad \text{since} \ \frac{1}{q'}=\frac{1}{m}+\frac{1}{r}. \end{align} Now from the relations \begin{eqnarray} \frac{1}{q} + \frac{1}{q'} = 1 ,\ r = \frac{q(p-2)}{(2-q)}\ \text{and} \ \frac{1}{q'} = \frac{1}{m} + \frac{1}{r}, \nonumber \end{eqnarray} we find that \begin{eqnarray}\label{e4.9} \qquad\ r = \frac{mp}{(m-2)}. 
\end{eqnarray} Using equations \eqref{e4.7} and \eqref{e4.8} in \eqref{e4.6} we have \begin{eqnarray} \mid\langle\mid\omega\mid^{p-2}\omega, \ u\cdot\nabla\omega\rangle\mid &\leq & C\parallel\omega\parallel_{r}^{(p-2)/2} \ Q_{p}(\omega)^{1/2} \parallel u\parallel_{m}\ \parallel\omega\parallel_{r}\nonumber \\&=& C\parallel u\parallel_{m}\ \parallel\omega\parallel_{r}^{p/2}\ Q_{p}(\omega)^{1/2}.\nonumber \end{eqnarray} Applying Lemma 2.6 to the above equation we obtain \begin{eqnarray}\label{e4.10} \mid\langle\mid\omega\mid^{p-2}\omega, \ u\cdot\nabla\omega\rangle\mid &\leq& C\parallel u\parallel_{m}\ Q_{p}(\omega). \end{eqnarray} The third term in the equation \eqref{e4.3} can be estimated by using the fact that $\mathop{\mathrm{Div}} \omega = 0$, along with techniques similar to those used to estimate the second term.
Thus we get \begin{eqnarray}\label{e4.11} \mid\langle\mid\omega\mid^{p-2}\omega,\ \omega\cdot\nabla\ u \rangle\mid &\leq & C\parallel u\parallel_{m}\ Q_{p}(\omega). \end{eqnarray} Combining \eqref{e4.10} and \eqref{e4.11} with \eqref{e4.3} we get the desired result \eqref{e4.1}. \end{proof}
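The exponent bookkeeping leading to \eqref{e4.9} can be verified symbolically: inverting $r = q(p-2)/(2-q)$ gives $q = 2r/(p-2+r)$, and matching the two expressions for $1/q'$, namely $1-1/q$ and $1/m+1/r$, forces $r = mp/(m-2)$. A sketch:

```python
import sympy as sp

# Symbolic check of the Hoelder-exponent bookkeeping in the proof above:
# the relations 1/q + 1/q' = 1, r = q(p-2)/(2-q), 1/q' = 1/m + 1/r
# determine r = mp/(m-2).
p, m, r = sp.symbols("p m r", positive=True)
q = 2*r / (p - 2 + r)                 # inversion of r = q(p-2)/(2-q)
eq = sp.Eq(1 - 1/q, 1/m + 1/r)        # the two expressions for 1/q'
sol = sp.solve(eq, r)
assert any(sp.simplify(s - m*p/(m - 2)) == 0 for s in sol)
```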
\begin{thm}[Local Lyapunov Property in Besov Spaces]
Let the initial data $\omega_{0}$ for the $3$-D vorticity equation be in $\dot{B}_{p}^{s,q}$ where $ s= \frac{3}{p}-1$, $p, q \geq 2$, and $\frac{3}{p}+\frac{2}{q}>1$. Then there exist small constants $\varepsilon_{1}> 0$ and $\varepsilon_{2}> 0$ such that if the velocity field satisfies $\sup_{t}\parallel u(t) \parallel_{\dot{B}_{\infty}^{-1, \infty}} < \varepsilon_{1}$ and the vorticity field satisfies $\sup_{t}\parallel \omega (t)\parallel_{\dot{B}_{\infty}^{-2, \infty}} < \varepsilon_{2}$, then $ t\rightarrow\parallel\omega(t)\parallel_{\dot{B}_{p}^{s,q}}$ is a Lyapunov function. \end{thm}
\begin{proof} Let us consider \begin{align} F(u,w) = u\cdot\nabla\omega - \omega\cdot\nabla u.\nonumber \end{align} Multiply the equation \eqref{e1.1} by $\triangle_{j}$ to get, \begin{align}\label{e4.12} \partial_{t}(\triangle_{j}\omega)\ - \nu\triangle(\triangle_{j}\omega)\ + \triangle_{j}(F(u,w)) = 0. \end{align} Now, \begin{align} \partial_{t}\parallel\triangle_{j}\omega\parallel_{p}^{p}&= \frac{\partial}{\partial t}\int\mid\triangle_{j}\omega\mid^{p}\/\mathrm{d}\/ x \ = \ \frac{\partial}{\partial t}\int\mid(\triangle_{j}\omega)^{2}\mid^{p/2}\/\mathrm{d}\/ x\nonumber \\ &=p \int \mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega\cdot\frac{\partial(\triangle_{j}\omega)}{\partial t}\/\mathrm{d}\/ x\nonumber\\ &=p\langle\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega, \partial_{t}(\triangle_{j}\omega)\rangle.\nonumber \end{align} Hence using \eqref{e4.12} we have from the above equation \begin{align} \partial_{t}\parallel\triangle_{j}\omega\parallel_{p}^{p} &=p\langle\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega,\ \nu\triangle(\triangle_{j}\omega) - \triangle_{j}(F(u,w)) \rangle\nonumber\\ &=\nu p \langle\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega, \ \triangle(\triangle_{j}\omega)\rangle \nonumber\\ &\qquad -p \langle\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega, \triangle_{j}(F(u,w)) \rangle.\nonumber \end{align} Applying the Lemma 2.5 on the first term on the right hand side of the above equation we obtain \begin{align} \partial_{t}\parallel\triangle_{j}\omega\parallel_{p}^{p} &\leq -\nu p\int\mid\triangle_{j}\omega\mid^{p-2}\mid\nabla\triangle_{j}\omega\mid^{2}dx \nonumber\\ &\qquad -p \langle\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega, \triangle_{j}(F(u,w)) \rangle.\nonumber \end{align} Hence, \begin{align} \partial_{t}\parallel\triangle_{j}\omega\parallel_{p}^{p} + &\nu p\int\mid\triangle_{j}\omega\mid^{p-2}\mid\nabla\triangle_{j}\omega\mid^{2}\/\mathrm{d}\/ x\nonumber\\ &\leq -p 
\int\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega \triangle_{j}(F(u,w))\/\mathrm{d}\/ x, \nonumber \end{align} which is equivalent of considering the equation \begin{align} \frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{p} + & \nu p\int\mid\triangle_{j}\omega\mid^{p-2}\mid\nabla\triangle_{j}\omega\mid^{2}\/\mathrm{d}\/ x\nonumber\\ &\leq p \int\mid\triangle_{j}\omega\mid^{p-1}\mid\triangle_{j}(F(u,w))\mid dx. \nonumber \end{align} Using Lemma 3.3 we replace the second term to get \begin{align}\label{e4.13} &\frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{p} + \ \tilde{C}_p\nu p\ 2^{2j} \parallel\triangle_{j}\omega\parallel_{p}^{p} \ \leq p \int\mid\triangle_{j}\omega\mid^{p-1}\mid\triangle_{j}(F(u,w))\mid \/\mathrm{d}\/ x,\\ &\text{where $\tilde{C}_p$ is positive constant depending on $p$}.\nonumber \end{align} Now, \begin{align} \mid\triangle_{j}(F(u,w))\mid\nonumber &= \mid\triangle_{j}(u\cdot\nabla\omega - \omega\cdot\nabla u)\mid \nonumber \\ &\leq\mid\triangle_{j}(u\cdot\nabla\omega)\mid + \mid\triangle_{j}(\omega\cdot\nabla u)\mid.\nonumber \end{align} Moreover \begin{align} u\cdot\nabla\omega \ = \ u_{i}\frac{\partial\omega_{j}}{\partial x_{i}} \ = \ \frac{\partial}{\partial x_{i}}(u_{i}\omega_{j}) \ - \ \omega_{j}\frac{\partial u_{i}}{\partial x_{i}} \ = \nabla\cdot(u\otimes\omega), \quad \text{since}\ \mathop{\mathrm{Div}} u = 0\nonumber, \end{align} and similarly $\omega\cdot\nabla u = \nabla\cdot(\omega\otimes u),$ where $\otimes$ represents the usual tensor product. \\ Since the terms $\nabla\cdot(u\otimes\omega)$ and $\nabla\cdot(\omega\otimes u)$ behave in similar fashion, we have from equation \eqref{e4.13} \begin{align}\label{e4.14} \frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{p} + \ \tilde{C}_p\nu p\ 2^{2j} \parallel\triangle_{j}\omega\parallel_{p}^{p} \ \leq\ 2p \int\mid\triangle_{j}\omega\mid^{p-1}\mid\triangle_{j}\nabla\cdot(u\otimes\omega)\mid \/\mathrm{d}\/ x. 
\end{align} Now using the paraproduct rule \eqref{e3.13}, we have \begin{align} \triangle_{j}\nabla\cdot(u\otimes\omega)\nonumber &= \nabla\triangle_{j}(u\otimes\omega)\nonumber \\ &= \nabla\big(\triangle_{j}u \ S_{j-2}\omega\big) + \nabla\big(\triangle_{j}\omega \ S_{j-2}u\big) + \nabla\big( \triangle_{j}\big(\sum_{k\geq j}\triangle_{k}u\ \triangle_{k}\omega\big)\big).\label{e4.15} \end{align} Using \eqref{e4.15} in \eqref{e4.14} we obtain, \begin{align} &\frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{p} + \ \tilde{C}_p \nu p\ 2^{2j} \parallel\triangle_{j}\omega\parallel_{p}^{p}\nonumber\\ &\quad\leq 2p \int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}u \ S_{j-2}\omega\big)\mid \/\mathrm{d}\/ x \nonumber \\ &\quad\quad + 2p \int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\omega \ S_{j-2}u\big)\mid \/\mathrm{d}\/ x \nonumber \\ &\quad\quad + 2p \int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\big(\sum_{k\geq j}\triangle_{k}u\ \triangle_{k}\omega\big)\big)\mid \/\mathrm{d}\/ x.\label{e4.16} \end{align} We need to estimate each of the terms on the right hand side of \eqref{e4.16} separately.
First consider the term \begin{align} \int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\omega \ S_{j-2}u\big)\mid \/\mathrm{d}\/ x,\nonumber \end{align}
and apply H\"{o}lder's Inequality to get, \begin{align} \int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\omega \ S_{j-2}u\big)\mid \/\mathrm{d}\/ x\leq \ \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ \parallel\nabla\big(\triangle_{j}\omega \ S_{j-2}u\big)\parallel_{p}.\nonumber \end{align} With the help of Lemma 3.2 we obtain \begin{align} &\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\omega \ S_{j-2}u\big)\mid \/\mathrm{d}\/ x\nonumber\\ &\quad\leq \ C_1 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ 2^{j} \parallel\triangle_{j}\omega\ S_{j-2}u\parallel_{p}\nonumber\\ &\quad=\ C_1 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ 2^{j} \parallel\big(2^{j}\triangle_{j}\omega\big)\big(2^{-j}S_{j-2}u\big)\parallel_{p}\nonumber \\ &\quad\leq\ C_1 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ 2^{j} \parallel 2^{j}\triangle_{j}\omega\parallel_{p} \ \sup_{j}\big(2^{-j}\parallel S_{j-2}u\parallel_{\infty}\big)\nonumber \\ &\quad=\ C_1\ 2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p}\ \sup_{j}\big(2^{-j}\parallel S_{j-2}u\parallel_{\infty}\big),\label{e4.17}\\ &\text{where $C_1$ is a positive constant.}\nonumber \end{align}
Now from Lemma 3.1, for $s = -1$ and $p = q = \infty$, the sequence $\big(2^{-j}\parallel\triangle_{j} u\parallel_{\mathrm{L}^{\infty}}\big)_{j}$ belongs to $l^{\infty}$ if and only if $\big(2^{-j}\parallel S_{j} u\parallel_{\mathrm{L}^{\infty}}\big)_{j}$ does, and the corresponding suprema are comparable, i.e. \begin{align} \parallel u(x,t)\parallel_{\dot{B}_{\infty}^{-1, \infty}}\ \simeq \ \sup_{j} 2^{-j}\parallel S_{j} u\parallel_{\infty}. \end{align} Then, using the conditions assumed in the theorem and absorbing the comparability constant into $\varepsilon_{1}$, we get \begin{align} &\qquad\parallel u(x,t)\parallel_{\dot{B}_{\infty}^{-1, \infty}} \ \leq \ \sup_{t}\parallel u(x,t)\parallel_{\dot{B}_{\infty}^{-1, \infty}}\ \leq\varepsilon_{1},\nonumber\\ &\Rightarrow \ \sup_{j} 2^{-j}\parallel S_{j} u\parallel_{\infty}\ \leq\varepsilon_{1}. \end{align} So finally \eqref{e4.17} yields \begin{align}\label{e4.20} \int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\omega \ S_{j-2}u\big)\mid \/\mathrm{d}\/ x \ \leq C_1\varepsilon_{1} 2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p}.
\end{align} Now let us consider the term: \begin{align} \int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}u \ S_{j-2}\omega\big)\mid \/\mathrm{d}\/ x.\nonumber \end{align} As before, H\"{o}lder's Inequality and Lemma 3.2 yield \begin{align} &\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}u \ S_{j-2}\omega\big)\mid \/\mathrm{d}\/ x\nonumber\\ &\quad\leq \ \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ \parallel\nabla\big(\triangle_{j} u \ S_{j-2}\omega\big)\parallel_{p}\nonumber\\ &\quad\leq \ C_2 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ 2^{j} \parallel\triangle_{j} u\ S_{j-2}\omega\parallel_{p}\nonumber\\ &\quad=\ C_2 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ 2^{j} \parallel\big(2^{2j}\triangle_{j} u\big)\big(2^{-2j}S_{j-2}\omega\big)\parallel_{p}\nonumber\\ &\quad\leq\ C_2 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ 2^{j} \parallel 2^{2j}\triangle_{j} u\parallel_{p} \ \sup_{j}\big(2^{-2j}\parallel S_{j-2}\omega\parallel_{\infty}\big),\label{e4.21}\\ &\text{where $C_2$ is a positive constant.}\nonumber \end{align} From Lemma 3.2, equation \eqref{e3.11}, we obtain \begin{align}\label{e4.22} 2^{j}\parallel\triangle_{j} u\parallel_{p}\ \leq \ \parallel\nabla\triangle_{j} u\parallel_{p}. \end{align} The above equation and \eqref{e2.5} in Lemma 2.7 yield \begin{align}\label{e4.24} \parallel\triangle_{j} u\parallel_{p}\ \leq \ 2^{-j}\parallel \triangle_{j}\omega\parallel_{p}. \end{align} Now applying Lemma 3.1, for $s = -2$ and $p = q = \infty$, and proceeding as before, we obtain \begin{align}\label{e4.25} \sup_{j} 2^{-2j}\parallel S_{j}\omega\parallel_{\infty}\ \leq\varepsilon_{2}. \end{align} Using \eqref{e4.24} and \eqref{e4.25} in \eqref{e4.21} we have \begin{align}\label{e4.26} \int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}u \ S_{j-2}\omega\big)\mid \/\mathrm{d}\/ x \leq C_2\varepsilon_{2} 2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p}.
\end{align} Next we estimate the last term \begin{align} &\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\big(\sum_{k\geq j}\triangle_{k}u\ \triangle_{k}\omega\big)\big)\mid \/\mathrm{d}\/ x\nonumber\\ &\quad\leq \ \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \parallel\nabla\big(\triangle_{j}\big(\sum_{k\geq j}\triangle_{k}u\ \triangle_{k}\omega\big)\big)\parallel_{p}\nonumber\\ &\quad\leq\ C_3 2^{j}\parallel\triangle_{j}\omega\parallel_{p}^{p-1} \parallel\triangle_{j}\big(\sum_{k\geq j}\triangle_{k}u\ \triangle_{k}\omega\big)\parallel_{p}\nonumber\\ &\text{where $C_3$ is a positive constant.}\nonumber \end{align} Using Young's Inequality as in~\cite{CaMe95}, we have \begin{align} &\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\big(\sum_{k\geq j}\triangle_{k}u\ \triangle_{k}\omega\big)\big)\mid \/\mathrm{d}\/ x\nonumber\\&\quad\leq\ C_p 2^{j}\parallel\triangle_{j}\omega\parallel_{p}^{p-1} \big(\sum_{k\geq j}\parallel\triangle_{k}u\parallel_{p}\parallel \triangle_{k}\omega\parallel_{p}\big),\label{e4.27}\\ &\text{where $C_p$ is a positive constant depending on $p$.}\nonumber \end{align} Now \begin{align} \parallel\triangle_{j}\omega\parallel_{p}^{p-1} &=\ 2^{2j} \big(2^{-2j}\parallel\triangle_{j}\omega\parallel_{p}\big)\ \parallel\triangle_{j}\omega\parallel_{p}^{p-2}\nonumber \\ &\leq\ 2^{2j} \big(\sup_{j} 2^{-2j}\parallel\triangle_{j}\omega\parallel_{\infty}\big) \ \parallel\triangle_{j}\omega\parallel_{p}^{p-2}\nonumber \\ &=\ 2^{2j} \parallel\omega(x,t)\parallel_{\dot{B}_{\infty}^{-2, \infty}}\ \parallel\triangle_{j}\omega\parallel_{p}^{p-2}\nonumber \\ &\leq\ \varepsilon_{2}\ 2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p-2}.\label{e4.28} \end{align} Using \eqref{e4.24} and \eqref{e4.28} in \eqref{e4.27} we have, \begin{align}\label{e4.29} &\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\big(\sum_{k\geq j}\triangle_{k}u\ \triangle_{k}\omega\big)\big)\mid\/\mathrm{d}\/ x\nonumber\\ &\quad\leq \ C_p
\varepsilon_{2}\ 2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p-2} \big(\sum_{k\geq j}\parallel\triangle_{k} \omega\parallel_{p}^{2}\big). \end{align} Now combining all results from \eqref{e4.20}, \eqref{e4.26} and \eqref{e4.29} and neglecting the constants $C_1, C_2, C_p, \tilde{C}_p$, we obtain from \eqref{e4.16} \begin{align} \frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{p} + \nu p\ 2^{2j} \parallel\triangle_{j}\omega\parallel_{p}^{p}\ &\leq\ 2p\varepsilon _{1}\ 2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p} +\ 2p\varepsilon_{2}\ 2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p} \nonumber\\ &\quad +\ 2p\varepsilon_{2}\ 2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p-2} \big(\sum_{k\geq j}\parallel\triangle_{k} \omega\parallel_{p}^{2}\big).\nonumber \end{align} Simplifying, we get \begin{align}\label{e4.30} \frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{2} +\ p \big(\nu - 2\varepsilon_{1} - 2\varepsilon_{2}\big)2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{2} \ &\leq\ 2p\varepsilon_{2}\ 2^{2j}\big(\sum_{k\geq j}\parallel\triangle_{k} \omega\parallel_{p}^{2}\big). \end{align} The rest of the construction is motivated by~\cite{CaPl00}. Multiplying both sides of \eqref{e4.30} by $2^{jqs}\parallel\triangle_{j}\omega\parallel_{p}^{q-2}$, we get \begin{align} &\frac{d}{dt}\big(2^{jqs}\parallel\triangle_{j}\omega\parallel_{p}^{q}\big) +\ p \big(\nu - 2\varepsilon_{1} - 2\varepsilon_{2}\big)2^{j(qs+2)}\parallel\triangle_{j}\omega\parallel_{p}^{q}\nonumber\\ &\quad\leq \ 2p\varepsilon_{2}\ 2^{j(qs+2)}\parallel\triangle_{j}\omega\parallel_{p}^{q-2}\big(\sum_{k\geq j}\parallel\triangle_{k} \omega\parallel_{p}^{2}\big).\nonumber \end{align} Let $ s= \frac{3}{p}-1$, $p, q \geq 2$, and $\frac{3}{p}+\frac{2}{q}>1$. Then $r = \frac{2}{q} + s > 0$ and $qs + 2 = rq$.
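Indeed, these exponent relations can be checked directly from the definitions of $s$ and $r$:
\begin{align*}
rq \ = \ \Big(\frac{2}{q}+s\Big)q \ = \ 2+qs,
\qquad
r \ = \ \frac{2}{q}+\frac{3}{p}-1 \ > \ 0 \ \Longleftrightarrow \ \frac{3}{p}+\frac{2}{q} \ > \ 1.
\end{align*}
This bookkeeping is used repeatedly in what follows.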
\\ Then \begin{align} &\frac{d}{dt}\big(2^{jqs}\parallel\triangle_{j}\omega\parallel_{p}^{q}\big) + \ p \big(\nu - 2\varepsilon_{1} - 2\varepsilon_{2}\big)2^{rqj}\parallel\triangle_{j}\omega\parallel_{p}^{q}\nonumber\\ &\quad\leq \ 2p\varepsilon_{2}\ 2^{(q-2)rj}\parallel\triangle_{j}\omega\parallel_{p}^{q-2} 2^{2rj}\big(\sum_{k\geq j}\parallel\triangle_{k} \omega\parallel_{p}^{2}\big).\label{e4.31} \end{align} Let $ f_{j} = 2^{js}\parallel\triangle_{j}\omega\parallel_{p}$ and $g_{j} = 2^{jr}\parallel\triangle_{j}\omega\parallel_{p}$. Then, summing \eqref{e4.31} over $j$, we have \begin{align} \frac{d}{dt}\big(\sum_{j}f_{j}^{q}\big) + \ p \big(\nu - 2\varepsilon_{1} - 2\varepsilon_{2}\big)\sum_{j}g_{j}^{q}&\leq\ 2p\varepsilon_{2} \sum_{j}g_{j}^{q-2} \ 2^{2rj}\big(\sum_{k\geq j}\parallel\triangle_{k} \omega\parallel_{p}^{2}\big)\nonumber\\ &=\ 2p\varepsilon_{2}\sum_{k=1}^{\infty}\sum_{j=1}^{k}\ g_{j}^{q-2} \ 2^{2rj}\parallel\triangle_{k} \omega\parallel_{p}^{2}\nonumber\\ &=\ 2p\varepsilon_{2}\sum_{k=1}^{\infty}\sum_{j=1}^{k}\ g_{j}^{q-2} \ 2^{2rj} \ 2^{-2rk}\ g_{k}^{2}.\label{e4.32} \end{align} Define $h_{k}$ by $$\sum_{j=1}^{k}\ g_{j}^{q-2} \ 2^{2rj} = 2^{2rk}\ h_{k}^{q-2}.$$ Since $h_{k}^{q-2} = \sum_{j=1}^{k} 2^{-2r(k-j)}\ g_{j}^{q-2}$ is the convolution of $\big(g_{j}^{q-2}\big)_{j}$ with the summable sequence $\big(2^{-2rn}\big)_{n\geq 0}$, Young's inequality for sequences gives (neglecting the constant, as before) \begin{align}\label{e4.33} \sum_{k} h_{k}^{q} \ \leq \sum_{j} g_{j}^{q}. \end{align} So, with the help of H\"{o}lder's inequality and \eqref{e4.33}, \eqref{e4.32} yields \begin{align} &\frac{d}{dt}\big(\sum_{j}f_{j}^{q}\big) + \ p \big(\nu - 2\varepsilon_{1} - 2\varepsilon_{2}\big)\sum_{j}g_{j}^{q}\nonumber\\ &\quad\leq\ 2p\varepsilon_{2}\sum_{k} h_{k}^{q-2} g_{k}^{2}\nonumber\\ &\quad\leq\ 2p\varepsilon_{2}\Big(\sum_{k}\big(h_{k}^{q-2}\big)^{\frac{q}{q-2}}\Big)^{\frac{q-2}{q}} \ \Big(\sum_{k}\big(g_{k}^{2}\big)^{\frac{q}{2}}\Big)^{\frac{2}{q}},\quad\text{since}\ \ \frac{q-2}{q}+\frac{2}{q} =
1,\nonumber\\ &\quad\leq\ 2p\varepsilon_{2} \big(\sum_{k} g_{k}^{q}\big)^{\frac{q-2}{q}}\ \big(\sum_{k} g_{k}^{q}\big)^\frac{2}{q}\nonumber\\ &\quad=\ 2p\varepsilon_{2}\sum_{k} g_{k}^{q}. \end{align} Hence, \begin{align} \frac{d}{dt}\big(\sum_{j}f_{j}^{q}\big) + \ p \big(\nu - 2\varepsilon_{1} - 4\varepsilon_{2}\big)\sum_{j}g_{j}^{q}\ \leq 0. \end{align} Using the definition of Besov spaces in \eqref{e3.6}, we can write \begin{align} \frac{d}{dt}\big(\parallel\omega(x,t)\parallel_{\dot{B}_{p}^{s, q}}^{q}\big) + \ p \big(\nu - 2\varepsilon_{1} - 4\varepsilon_{2}\big)\parallel\omega(x,t)\parallel_{\dot{B}_{p}^{r, q}}^{q} \ \leq 0.\nonumber \end{align} Hence $ t\mapsto\parallel\omega(t)\parallel_{\dot{B}_{p}^{s,q}}$ is a Lyapunov function whenever $\nu > 2\varepsilon_{1} + 4\varepsilon_{2}$, i.e.\ for small $\varepsilon_{1}$, $\varepsilon_{2}$ and comparatively large $\nu$. \end{proof}
Now let us prove the dissipativity of the sum of the linear and nonlinear operators of the vorticity equation in $\mathrm{L}^{p}(\mathbb{R}^{m})$.
We write \eqref{e1.1} in the form $\partial_{t}\omega = \emph{A} (u,\omega)$, where $\emph{A} : u, \omega \mapsto \emph{A} (u,\omega) = \nu\triangle\omega - u\cdot\nabla\omega + \omega\cdot\nabla u$ is a nonlinear operator. We know that $ G(\omega) = \mid\omega\mid^{p-2}\omega$ \ is the duality map from $\mathrm{L}^{p}$ to $\mathrm{L}^{p^{\prime}}$. In Theorem 4.1 we proved that \begin{align}\label{e4.36} \langle\emph{A} (u,\omega), G(\omega)\rangle\ \leq\ -C(\nu - K \parallel u(t)\parallel_{m})Q_{p}(\omega(t)). \end{align} Here we will prove a stronger property than \eqref{e4.36}.
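For the reader's convenience we record the classical properties of the duality map on $\mathrm{L}^{p}$, stated in the notation of this section, which are used implicitly below:
\begin{align*}
\langle\omega, G(\omega)\rangle \ = \ \int\mid\omega\mid^{p}\/\mathrm{d}\/ x \ = \ \parallel\omega\parallel_{p}^{p},
\qquad
\parallel G(\omega)\parallel_{p^{\prime}} \ = \ \Big(\int\mid\omega\mid^{(p-1)p^{\prime}}\/\mathrm{d}\/ x\Big)^{1/p^{\prime}} \ = \ \parallel\omega\parallel_{p}^{p-1}.
\end{align*}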
\begin{thm}[Local Dissipativity in $\mathrm{L}^p$] Let $m\geq 3$ and $2\leq p <\infty$, and suppose that $(\omega - \tilde{\omega})\in {\mathrm{L}^{1}}(\mathbb{R}^{m}) \cap {\mathrm{L}^{r}}(\mathbb{R}^{m})$ for $r = \frac{mp}{m-2}$. Then \begin{align} &\langle\emph{A}(u,\omega) - \emph{A}(v,\tilde{\omega}), G(\omega - \tilde{\omega})\rangle\nonumber\\ &\quad\leq\ -C\big(\big(\nu - K (\parallel u \parallel _{m} +\parallel v \parallel _{m} + \parallel\omega \parallel_{m}+\parallel \tilde{\omega} \parallel _{m})\big)Q_{p}(\omega - \tilde{\omega})\nonumber \\ &\quad\quad\ -K(\parallel\omega \parallel_{m}+\parallel \tilde{\omega} \parallel _{m})\parallel\omega - \tilde{\omega}\parallel_{\mathrm{L}^{1}}Q_{p}(\omega - \tilde{\omega})^{1/p'}\big),\label{e4.37} \end{align} where $\frac{1}{p} + \frac{1}{p'} = 1.$ \end{thm}
Hence, in the light of \eqref{e2.6}, we note that if $\omega$ and $\tilde{\omega}$ are small in $\mathrm{L}^1\cap\mathrm{L}^m$, then $$\langle\emph{A}(u,\omega) - \emph{A}(v,\tilde{\omega}), G(\omega - \tilde{\omega})\rangle\ \leq\ 0,$$ which is a local dissipativity property for $\emph{A}(\cdot, \cdot)$.
\begin{proof} It is clear that \begin{align} &\langle\emph{A}(u,\omega) - \emph{A}(v,\tilde{\omega}),\ G(\omega - \tilde{\omega})\rangle\nonumber\\ &\quad= \langle\nu\triangle\omega - u\cdot\nabla\omega + \omega\cdot\nabla u -\nu\triangle\tilde{\omega} + v\cdot\nabla\tilde{\omega} - \tilde{\omega}\cdot\nabla v, \ G(\omega - \tilde{\omega})\rangle\nonumber\\ &\quad= \nu\langle\triangle(\omega-\tilde{\omega}),\ G(\omega - \tilde{\omega})\rangle - \langle u\cdot\nabla\omega - v\cdot\nabla\tilde{\omega},\ G(\omega - \tilde{\omega})\rangle\nonumber\\ &\quad\quad + \langle\omega\cdot\nabla u - \tilde{\omega}\cdot\nabla v,\ G(\omega - \tilde{\omega})\rangle.\label{e4.38} \end{align} According to Lemma 2.5 \begin{align}\label{e4.39} \nu\langle\triangle(\omega-\tilde{\omega}),\ G(\omega - \tilde{\omega})\rangle\ \leq\ -C\nu Q_{p}(\omega - \tilde{\omega}). \end{align} Now we need to estimate the second and third terms of the right hand side of \eqref{e4.38}.\\ Notice that \begin{align} &\mid\langle u\cdot\nabla\omega - v\cdot\nabla\tilde{\omega},\ G(\omega - \tilde{\omega})\rangle\mid\nonumber\\ &\quad=\ \mid\langle u\cdot\nabla\omega - v\cdot\nabla\omega + v\cdot\nabla\omega - v\cdot\nabla\tilde{\omega},\ G(\omega - \tilde{\omega})\rangle\mid\nonumber\\ &\quad\leq\ \mid\langle (u-v)\cdot\nabla\omega, \ G(\omega - \tilde{\omega})\rangle\mid\ + \mid\langle v\cdot\nabla(\omega - \tilde{\omega}),\ G(\omega - \tilde{\omega})\rangle\mid.\label{e4.40} \end{align} Let us denote $\omega^{*} = \omega - \tilde{\omega}$. Then with the help of \eqref{e4.10} we obtain \begin{align}\label{e4.41} \mid\langle v\cdot\nabla(\omega - \tilde{\omega}),\ G(\omega - \tilde{\omega})\rangle\mid\ &\leq\ C\parallel v\parallel_{m}\ Q_{p}(\omega - \tilde{\omega}). 
\end{align} Since $\mathop{\mathrm{Div}}(u-v) = 0$, we have \begin{align} \mid\langle (u-v)\cdot\nabla\omega, \ G(\omega - \tilde{\omega})\rangle\mid\ &=\ \mid\langle\nabla\cdot((u-v)\otimes\omega), \ G(\omega^{*})\rangle\mid\nonumber\\ &=\ \mid\langle\nabla\cdot((u-v)\otimes\omega),\ \mid\omega^{*}\mid^{p-2}\omega^{*} \rangle\mid\nonumber. \end{align} Integrating by parts we get, \begin{align} \mid\langle (u-v)\cdot\nabla\omega, \ G(\omega - \tilde{\omega})\rangle\mid\ &=\ \mid\langle (u-v)\otimes\omega ,\ \nabla(\mid\omega^{*}\mid^{p-2}\omega^{*}) \rangle\mid\nonumber\\ &\leq\langle\mid(u-v)\otimes\omega\mid, \ \mid\nabla(\mid\omega^{*}\mid^{p-2}\omega^{*})\mid\rangle\nonumber\\ &\leq\langle\mid(u-v)\otimes\omega\mid, \ \mid\omega^{*}\mid^{p-2}\mid\nabla\omega^{*}\mid\rangle.\nonumber \end{align} Now using H\"{o}lder's inequality we obtain, \begin{align}\label{e4.42} \mid\langle (u-v)\cdot\nabla\omega, \ G(\omega - \tilde{\omega})\rangle\mid\ &\leq\parallel (u-v)\otimes\omega\parallel_{q'} \ \parallel\ \mid\omega^{*}\mid^{p-2}\mid\nabla\omega^{*}\mid\ \parallel_{q}, \end{align} where $\frac{1}{q} + \frac{1}{q'} = 1$.
Using \eqref{e4.7} and H\"{o}lder's inequality one more time, we have \begin{align}\label{e4.43} \mid\langle (u-v)\cdot\nabla\omega, \ G(\omega - \tilde{\omega})\rangle\mid\ &\leq\ C\parallel u-v\parallel_{r} \ \parallel\omega\parallel_{m}\ \parallel\omega^{*}\parallel_{r}^{(p-2)/2} \ Q_{p}(\omega^{*})^{1/2}, \end{align} where $\frac{1}{q'} = \frac{1}{r} + \frac{1}{m}$ and $r = \frac{mp}{m-2}$.
Notice that if $K$ is the Biot-Savart kernel then $u-v = K*\omega - K*\tilde{\omega} = K*(\omega - \tilde{\omega}) = K*\omega^{*}$. Hence, using Lemma 2.7, \eqref{e2.6}, we get from \eqref{e4.43} \begin{align} &\mid\langle (u-v)\cdot\nabla\omega, \ G(\omega - \tilde{\omega})\rangle\mid\nonumber\\ &\quad\leq\ C\ (\parallel\omega^{*}\parallel_{\mathrm{L}^{1}}+\parallel\omega^{*}\parallel_{\mathrm{L}^{r}})\ \parallel\omega\parallel_{m} \ \parallel\omega^{*}\parallel_{r}^{(p-2)/2} \ Q_{p}(\omega^{*})^{1/2}\nonumber\\ &\quad=\ C\parallel\omega^{*}\parallel_{r}^{p/2} \ \parallel\omega\parallel_{m} \ Q_{p}(\omega^{*})^{1/2}\nonumber \\ &\quad\quad+ C\parallel\omega^{*}\parallel_{\mathrm{L}^{1}} \ \parallel\omega^{*}\parallel_{r}^{(p-2)/2} \ \parallel\omega\parallel_{m} \ Q_{p}(\omega^{*})^{1/2}.\label{e4.44} \end{align} With the help of Lemma 2.6, equation \eqref{e4.44} yields \begin{align} &\mid\langle (u-v)\cdot\nabla\omega, \ G(\omega - \tilde{\omega})\rangle\mid\nonumber\\ &\quad\leq\ C\parallel\omega\parallel_{m} \ Q_{p}(\omega^{*}) + \ C\parallel\omega^{*}\parallel_{\mathrm{L}^{1}} \ \parallel\omega\parallel_{m} \ Q_{p}(\omega^{*})^{(p-1)/p}\nonumber\\ &\quad=\ C\parallel\omega\parallel_{m} \ Q_{p}(\omega -\tilde{\omega}) + \ C\parallel\omega - \tilde{\omega}\parallel_{\mathrm{L}^{1}} \ \parallel\omega\parallel_{m} \ Q_{p}(\omega -\tilde{\omega})^{1/p'}.\label{e4.45} \end{align} Thus substituting the results from \eqref{e4.41} and \eqref{e4.45} in \eqref{e4.40} we have \begin{align} \mid\langle u\cdot\nabla\omega - v\cdot\nabla\tilde{\omega},\ G(\omega - \tilde{\omega})\rangle\mid &\leq\ C\ (\ \parallel v \parallel_{m} + \parallel\omega\parallel_{m}\ ) \ Q_{p}(\omega -\tilde{\omega})\nonumber \\ &\quad + \ C\parallel\omega - \tilde{\omega}\parallel_{\mathrm{L}^{1}} \ \parallel\omega\parallel_{m} \ Q_{p}(\omega -\tilde{\omega})^{1/p'},\label{e4.46} \end{align} where $C$ is a positive constant depending upon $m$ and $p$.
Next we estimate the third term of the equation \eqref{e4.38}. We notice that \begin{align} &\mid\langle\omega\cdot\nabla u - \tilde{\omega}\cdot\nabla v,\ G(\omega - \tilde{\omega})\rangle\mid\nonumber\\ &\quad=\ \mid\langle\omega\cdot\nabla u - \tilde{\omega}\cdot\nabla u + \tilde{\omega}\cdot\nabla u - \tilde{\omega}\cdot\nabla v, \ G(\omega - \tilde{\omega})\rangle\mid\nonumber\\ &\quad\leq\ \mid\langle (\omega - \tilde{\omega})\cdot\nabla u, \ G(\omega - \tilde{\omega})\rangle\mid + \mid\langle\tilde{\omega}\cdot\nabla (u-v),\ G(\omega - \tilde{\omega})\rangle\mid.\label{e4.47} \end{align} Here we proceed in a similar way as before to get \begin{align}\label{e4.48} \mid\langle (\omega - \tilde{\omega})\cdot\nabla u, \ G(\omega - \tilde{\omega})\rangle\mid\ \leq\ C\parallel u\parallel_{m} \ Q_{p}(\omega - \tilde{\omega}), \end{align} and \begin{align} \mid\langle\tilde{\omega}\cdot\nabla (u-v),\ G(\omega - \tilde{\omega})\rangle\mid\ &\leq\ C\parallel\tilde{\omega}\parallel_{m} \ Q_{p}(\omega -\tilde{\omega})\nonumber\\ &\quad+\ C\parallel\omega - \tilde{\omega}\parallel_{\mathrm{L}^{1}} \ \parallel\tilde{\omega}\parallel_{m} \ Q_{p}(\omega -\tilde{\omega})^{1/p'}. \end{align} Thus \eqref{e4.47} yields \begin{align} \mid\langle\omega\cdot\nabla u - \tilde{\omega}\cdot\nabla v,\ G(\omega - \tilde{\omega})\rangle\mid &\leq\ C\ (\ \parallel u \parallel_{m} + \parallel\tilde{\omega}\parallel_{m}\ ) \ Q_{p}(\omega -\tilde{\omega})\nonumber\\ &\quad+\ C\parallel\omega - \tilde{\omega}\parallel_{\mathrm{L}^{1}} \ \parallel\tilde{\omega}\parallel_{m} \ Q_{p}(\omega -\tilde{\omega})^{1/p'},\label{e4.50} \end{align} where $C$ is a positive constant depending upon $m$ and $p$.
Hence \eqref{e4.39}, \eqref{e4.46} and \eqref{e4.50} yield the desired result \eqref{e4.37} from \eqref{e4.38}. \end{proof}
\end{document}
\begin{document}
\title[Stationary measures on homogeneous bundles over flag varieties]{Stationary measures for $\SL_2(\R)$-actions on homogeneous bundles over flag varieties}
\author{Alexander Gorodnik} \address{Institut f\"{u}r Mathematik, Universit\"{a}t Z\"{u}rich, 8057 Z\"{u}rich, Switzerland} \email{alexander.gorodnik@math.uzh.ch, lijialun36@gmail.com, sertcagri@gmail.com} \thanks{}
\author{Jialun Li}
\email{} \thanks{}
\author{Cagri Sert}
\email{} \thanks{ A.G. and J.L. were supported by the SNF grant 200021--182089; C.S. was supported by SNF Ambizione grant 193481}
\begin{abstract} Let $G$ be a real semisimple Lie group with finite centre and without compact factors, $Q<G$ a parabolic subgroup and $X$ a homogeneous space of $G$ admitting an equivariant projection on the flag variety $G/Q$ with fibres given by copies of lattice quotients of a semisimple factor of $Q$. Given a probability measure $\mu$, Zariski-dense in a copy of $H=\SL_2(\R)$ in $G$, we give a description of $\mu$-stationary probability measures on $X$ and prove corresponding equidistribution results. Contrary to the results of Benoist--Quint corresponding to the case $G=Q$, the type of stationary measures that $\mu$ admits depends strongly on the position of $H$ relative to $Q$. We describe possible cases and treat all but one of them, among others using ideas from the works of Eskin--Mirzakhani and Eskin--Lindenstrauss. \end{abstract}
\maketitle
\section{Introduction}
Let $G$ be a real Lie group and $R<G$ a closed subgroup. The actions of subgroups of $G$ on the homogeneous space $X=G/R$ constitute a natural class of dynamical systems whose (topological, statistical etc.) properties are of key relevance to various problems in mathematics. Accordingly, the study of such dynamical systems has a rich history; it has prompted the introduction of various new techniques and produced major results. The nature of these systems varies according to the acting subgroup and the group $R$ that is factored out, ranging over classes such as partially hyperbolic, parabolic, or proximal dynamics.
One type of ambient homogeneous space $X$ is obtained by considering quotients by discrete subgroups $R=\Lambda<G$. For the actions of connected subgroups of $G$ generated by unipotents (e.g.~ semisimple subgroups) on these quotients $X$, definitive results settling conjectures of Raghunathan and Dani were obtained by Ratner \cite{ratner.measure.class,ratner.topological}. Her results can be considered as a vast generalization of classical results on vector flows on the tori $\mathbb{T}^d$ and have far-reaching consequences. A key step in Ratner's works --- corresponding to Dani's conjecture --- is the classification of measures invariant under unipotent flows. To obtain this result, Ratner introduced an important technique, the polynomial drift argument.
Setting aside the actions of commuting diagonal flows, a next major step for actions on quotients $X$ by lattices $\Lambda<G$ was reached by the seminal work of Benoist--Quint \cite{BQ1,bq.non-escape,BQ2,BQ3}. Their work describes the dynamics of actions by subgroups $\Gamma$ whose algebraic (Zariski) closure is semisimple but which themselves can be genuinely irregular, e.g.~ discrete. Focusing on stationary measures of random walks on quotients $X$, they developed the exponential drift argument to obtain a description of all stationary measures. The drift argument of Benoist--Quint requires a \textit{precise control of random matrix products} (e.g.~ a local limit theorem), a feature not readily available without a semisimplicity assumption. Benoist--Quint's ideas were then remarkably modified in a non-homogeneous setting by Eskin--Mirzakhani \cite{eskin-mirzakhani}, who managed to set up a much more flexible argument, bypassing for example the need for a local limit theorem. This development enabled further extensions of Benoist--Quint's results in several directions by Eskin--Lindenstrauss \cite{eskin-lindenstraus.short,eskin-lindenstrauss.long}. Some of our arguments in this article (e.g.~ the six-point drift argument) draw on the ideas of the latter works \cite{eskin-mirzakhani,eskin-lindenstraus.short} and in fact can be seen as slightly modified and simpler versions of them.
Continuing to expound elements of our setting, a second type of homogeneous space is obtained by considering quotients by parabolic subgroups $R=Q<G$, giving rise to flag varieties $X=G/Q$. The dynamics on these quotients are quite different from those on quotients by discrete subgroups; in particular, when the acting group has semisimple (non-compact) Zariski-closure, the action is proximal (if the group is split) and the space supports no invariant measures. Starting with the pioneering works of Furstenberg \cite{furstenberg.poisson, furstenberg.nc, furstenberg.boundary.theory} on random matrix products and boundary theory, a thorough qualitative description of the dynamics was established by Guivarc'h--Raugi \cite{guivarch-raugi.isom.ext} and Benoist--Quint \cite{BQ.compositio}.
The homogeneous space $X=G/R$ considered in this article is a combination of the two types of classical homogeneous spaces discussed above; it has the structure of a fibre bundle over the flag variety $G/Q$ with fibres given by a homogeneous space $S/\Lambda$, where $S$ is a semisimple group and $\Lambda$ is a discrete subgroup. To illustrate and motivate this structure, recall that a standard example for the first kind of spaces (obtained as quotients by discrete subgroups) is provided by the space $X_{d,d}$ of rank-$d$ lattices in $\R^d$ up to homothety, which can be identified with $\PGL_d(\R)/\PGL_d(\Z)$. Now when one considers more generally the space $X_{k,d}$ of rank-$k$ lattices in $\R^d$ up to homothety, if $k \neq d$, then $X_{k,d}$ has a natural structure of a bundle over the Grassmannian of $k$-dimensional subspaces of $\R^d$ (which is a standard example of a space realized as a quotient by a parabolic subgroup) with fibres given by copies of $\PGL_k(\R)/\PGL_k(\Z)$.
The study of dynamics on these quotients was initiated by the work of Sargent--Shapira \cite{sargent-shapira}. Generalizing arguments of Benoist--Quint \cite{BQ1,bq.non-escape}, they managed to describe the dynamics on the space $X_{2,3}$ when the acting probability measure is Zariski-dense in $\SL_3(\R)$ or in an irreducible copy of $\PGL_2(\R)$. Remarkably, they discovered\footnote{interestingly, with a computer experiment} a somewhat unexpected phenomenon (a $\Gamma_\mu$-invariant section, see \cite{sargent-shapira}) in the latter case, a precise understanding of which was an initial motivation for our work. The goal of the current article is, more generally, to obtain measure classification and equidistribution results in all possible situations\footnote{We manage this except in one case, Case 2.3.b, see Figure \ref{figure.cases} and the discussion below.} when the acting probability measure is Zariski-dense in a copy of $\SL_2(\R)$ or $\PGL_2(\R)$, under minimal assumptions on the ambient space $G/R$.
Among others, the results of our work show that, in contrast to the type of results obtained by Benoist--Quint, a variety of dynamical situations is possible even for the actions of groups such as $\SL_2(\R)$ (see Figure \ref{figure.cases}). Moreover, for some of these cases the exponential drift argument of Benoist--Quint is not applicable as such, and indeed we develop a different drift argument inspired by those of Eskin--Mirzakhani \cite{eskin-mirzakhani} and Eskin--Lindenstrauss \cite{eskin-lindenstraus.short}. Alternatively, we also demonstrate that a precise control on random matrix products (such as a uniform renewal theorem) can be used to obtain measure classification. Even though the actions on fibres and base individually are well understood thanks to by-now classical works, the description of dynamics on these homogeneous spaces for a general acting group remains a challenge.
\begin{center} * \nopagebreak * * \end{center}
We now proceed with introducing the notation needed to state our results. In the sequel, the meaning of the following groups, spaces, measures etc.~ will be fixed unless otherwise stated. Let $G$ be a semisimple real Lie group with finite centre and $Q<G$ a parabolic subgroup. Let $R_0 \unlhd Q$ be a normal algebraic subgroup and $R<Q$ be a closed subgroup containing $R_0$ such that $S:=Q/R_0$ is semisimple with finite centre and without compact factors and $\Lambda:=R/R_0$ is a discrete subgroup of $S$. We denote by $X$ the quotient space $G/R$ which will serve as the ambient space. A guiding example is provided by the homothety classes of rank-$k$ lattices in $\R^d$, see Example \ref{ex.ss} for a detailed description of these groups in that case.
All probability measures considered in this article will be Borel probability measures. We will denote by $\mu$ a probability measure on $G$. A measure $\mu$ on $G$ is said to have finite first moment if for some (equivalently, any) irreducible finite-dimensional faithful linear representation $V$ of $G$, we have $\int \log \|g\|\,d\mu(g)<\infty$, where $\|\cdot\|$ is any choice of operator norm on $\Endo(V)$. The group generated by the support of $\mu$ will be denoted $\Gamma_\mu$ and $H$ will denote the Zariski-closure of $\Gamma_\mu$ --- we will simply say that $\mu$ is Zariski-dense in $H$. Recall that a measure $\nu$ on $X$ is said to be $\mu$-stationary if it satisfies $\mu \ast \nu=\int g_\ast \nu \,d\mu(g)=\nu$, where $g_\ast \nu$ is the pushforward of $\nu$ by $g \in G$. By a stationary measure, we will understand a stationary probability measure. A $\mu$-stationary measure is said to be ergodic if it is an extremal point in the compact convex set $P_\mu(X)$ of $\mu$-stationary measures on $X$. Finally, we will always suppose that $H$ is isomorphic to either $\SL_2(\R)$ or $\PGL_2(\R)$. The intersection of $H$ and the parabolic group $Q$ will be denoted $Q_H$.
\begin{wrapfigure}{r}{4cm}\label{figure.bundle}
\begin{tikzpicture}
\coordinate[label = above :{${}$}] (0) at (-0.7,0);
\coordinate[label = above :{$X\simeq G/R$}] (1) at (1, 1.8);
\coordinate (2) at (1, 1.8);
\coordinate (3) at (1, 0);
\coordinate[label = below :{$G/Q$}] (4) at (1, 0);
\coordinate[label = right :{$\simeq S/\Lambda$}] (5) at (1.2, 1);
\draw[->] (2) -- (3);
\end{tikzpicture}
\end{wrapfigure}
Before proceeding, in order to conceptually expose our results, we discuss the fibre bundle structure and the various possible situations that arise; see the guiding Figure \ref{figure.cases}. Since the factored-out subgroup $R$ is contained in the parabolic $Q$, the space $X$ has a natural $G$-equivariant projection $\pi$ onto the flag variety $G/Q$. The fibres of $\pi$ are copies of the quotient $Q/R$, and by construction, we have $Q/R \simeq (Q/R_0)/(R/R_0)=S/\Lambda$. Since this projection is $G$-equivariant, and hence $\Gamma_\mu$-equivariant, any $\mu$-stationary measure $\nu$ on $X$ projects down to a $\mu$-stationary measure $\overline{\nu}:=\pi_\ast \nu$ on $G/Q$. It follows that a first rough classification of stationary measures is provided by the classification in the base $G/Q$. Thanks to the results of Guivarc'h--Raugi \cite{guivarch-raugi} and Benoist--Quint \cite{BQ.compositio} (see \S \ref{subsub.stat.meas.base}), there are two types of projections giving rise to \textbf{Case 1} and \textbf{Case 2}, respectively, Dirac measures and Furstenberg measures on the base. In Case 1, the works of Benoist--Quint \cite{BQ2,BQ3} and Eskin--Lindenstrauss \cite{eskin-lindenstraus.short,eskin-lindenstrauss.long} directly apply and hence we will not comment on it further here (see \S \ref{subsub.dirac.base}).
If a stationary measure $\nu$ is in Case 2, up to replacing $Q$ by a conjugate, $Q_H$ is a parabolic subgroup of $H$ and the projection $\overline{\nu}$ of $\nu$ is the Furstenberg measure on $\calC:=H/Q_H$ in $G/Q$ (see \S \ref{subsub.furstenberg.base}). We will denote this projection as $\overline{\nu}_F$. In this case, the group $H$ preserves a subbundle of $X$, namely $\pi^{-1}(H/Q_H)$ which we will denote as $X_\mathcal{C}$ for brevity.
\begin{figure}
\caption{List of all possible cases for $H \acts X_\mathcal{C}$}
\label{figure.cases}
\end{figure}
A convenient way (which we will follow) to read the stationary measures and the action on $X_\mathcal{C}$ is to choose natural Borel trivializations of the bundle $X_\mathcal{C}$. By working with a class of trivializations (those induced by sections $G/Q \to G$) which we call standard trivializations (see \S \ref{sec.prelim}), we will consider the identifications $X \simeq G/Q \times S/\Lambda$ and $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$ where the latter identification is made equivariant by $H$-acting on the right-hand-side via a cocycle $\alpha: H \times H/Q_H \to S$.
Now, according to the algebraic relations between $Q_H$ and the group $R_0 \unlhd Q$, we distinguish three (exhaustive) possibilities giving rise to different dynamics on fibres via the cocycle $\alpha$. \textbf{Case 2.1} is the case when $Q_H^o<R_0$. In this situation, we have trivial dynamics on the fibre and every stationary measure on $X$ is a copy of the Furstenberg measure (see \S \ref{subsec.case.2.1}). \textbf{Case 2.2} is when $Q^o_H \cap R_0$ is neither $Q^o_H$ nor $\{\id\}$. Since $R_0$ is a normal subgroup of $Q$, the intersection $Q^o_H\cap R_0$ must be the unipotent radical $R_u(Q_H^o)$. In this situation, up to a judicious choice of trivialization, the cocycle $\alpha$ takes values in a rank-one diagonal subgroup of $S$ and we obtain a classification of stationary measures (Theorem \ref{thm.measure.class.geod}) as product measures on $H/Q_H \times S/\Lambda$ whose second factor is invariant under the diagonal flow. This is the result for which we develop our drift argument; we also give an alternative proof, under a stronger moment assumption, using a uniform quantitative renewal theorem for random matrix products. Finally, the remaining \textbf{Case 2.3} occurs when $Q_H^o \cap R_0=\{\id\}$. In this case we restrict the ambient group $G$ to be $\SL_n(\R)$ or $\PGL_n(\R)$. When the associated linear or projective $H$-action is irreducible (\textbf{Case 2.3.a}), we prove that the cocycle $\alpha$ comes from an algebraic morphism $H \to S$ (what we call a decomposable action, see \S \ref{subsub.bundle.dec}), allowing us to reduce the analysis to the work of Benoist--Quint and Eskin--Lindenstrauss again (Theorem \ref{thm.irreducible.H.decompsable}). Our result for this case allows an interpretation of the aforementioned phenomenon appearing in the work of Sargent--Shapira \cite{sargent-shapira} and generalizes it. Finally, \textbf{Case 2.3.b} occurs when the $H$-action is reducible.
The description of the dynamics in this case remains open; we provide a conjecture (Conjecture \ref{conjecture}) expressing our expectation.
We now state our measure classification results in Case 2.2 and Case 2.3.a followed by the corresponding equidistribution results.
\begin{theorem}[Case 2.2:~Diagonal flow invariance and product structure]\label{thm.measure.class.geod} Let the space $X$ and groups $G,Q,R_0,R,S\simeq Q/R_0,\Lambda\simeq R/R_0$ and $H$ be as defined above. Suppose that $Q_H<H$ is a parabolic subgroup and $Q_H^o \cap R_0=R_u(Q_H^o)$. Let $\mu$ be a Zariski-dense probability measure on $H$ with finite first moment. Then, there exist a standard trivialization $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$ and a one-dimensional connected diagonal subgroup $D$ of $S$ satisfying the following. Let $\nu$ be a $\mu$-stationary and ergodic probability measure on $X_\calC$. Then, there exists a $D$-invariant and ergodic probability measure $\tilde{\nu}$ on $S/\Lambda$ such that we have $\nu=\overline{\nu}_F\otimes \tilde{\nu}$. \end{theorem}
The hypotheses of this result entail a lack of expansion on the fibres, whose existence is a key feature exploited in Benoist--Quint's exponential drift argument. Instead, we adapt a drift argument inspired by the works of Eskin--Mirzakhani \cite{eskin-mirzakhani} and Eskin--Lindenstrauss \cite{eskin-lindenstraus.short} that exploits the interaction between different fibres. The commutativity of the target group of the cocycle considerably simplifies the steps compared to the previous works \cite{eskin-mirzakhani,eskin-lindenstraus.short}. Moreover, if $\mu$ is supposed to have a finite exponential moment, taking advantage of the special setting, we give an alternative proof using a uniform quantitative renewal theorem due to Li \cite{jialun.ens} and Li--Sahlsten \cite{jialun.advances}. We defer further comments to the more detailed discussion in \S \ref{subsec.intro.proofs}.
\begin{remark}\label{rk.conversely.to.diagonal.thm} The converse to Theorem \ref{thm.measure.class.geod} is also true in the sense that when $Q_H<H$ is a parabolic subgroup such that $Q_H^o \cap R_0=R_u(Q_H^o)$, there exist a standard trivialization $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$ and an index-two extension $D^{\pm}$ of $D$ such that for any $D^\pm$-invariant probability measure $\tilde{\nu}$, the measure $\overline{\nu}_F \otimes \tilde{\nu}$ is a $\mu$-stationary probability measure on $X$. \end{remark}
We now continue with Case 2.3.a. We first introduce the following definition to state our result.
\begin{definition}\label{def.decomposable} An $H$-homogeneous subbundle $X_\mathcal{C}$ of $X$ is said to be \textbf{decomposable} if $X_\mathcal{C}$ is isomorphic as $H$-space to $\mathcal{C} \times S/\Lambda$, where the latter is endowed with the $H$-action $h(c,f)=(hc,\rho(h)f)$ and $\rho:H \to S$ is a morphism extending $Q_H \hookrightarrow Q \twoheadrightarrow S $. \end{definition}
Our next measure classification result\footnote{After we obtained our results, we were informed by Uri Shapira that in a sequel to \cite{sargent-shapira}, together with Uri Bader and Oliver Sargent, they independently obtain the same classification of stationary measures (and also introduce a notion similar to Definition \ref{def.decomposable}). Our proof ideas for this case seem to be similar. They also obtain equidistribution results in some situations of Case 2.3.a for random walks starting outside the bundle $X_\mathcal{C}$. We thank Uri Shapira for related and kind discussions.} is the following.
\begin{theorem}[Decomposable action]\label{thm.irreducible.H.decompsable} Let $G=\PGL_n(\R)$, the space $X$ and groups $Q,R_0,R,S\simeq Q/R_0,\Lambda\simeq R/R_0$ and $H$ be as defined above.
Suppose that the $H$-action on $\P(\R^{n})$ is irreducible. Then, there exists a unique compact $H$-orbit $\calC$ in $G/Q$ and the $H$-action on $X_\mathcal{C}$ is decomposable. In particular, given a Zariski-dense probability measure $\mu$ on $H$ with finite first moment, we have a bijection \begin{equation}\label{eq.bijection.in.thm.SL2.dec}
P_\mu^{\erg}(X_{\mathcal{C}}) \simeq P_\mu^{\erg}(S/\Lambda), \end{equation} where the action of $\mu$ on $S/\Lambda$ comes from the morphism $\rho: H \to S$ in Definition \ref{def.decomposable}. \end{theorem}
This result provides, in a more general setting, a conceptual explanation for the existence of the invariant section discovered by Sargent--Shapira (see the relevant discussion in \cite{sargent-shapira}) and allows us to deduce affirmative answers to parts (1), (2) and (6) of \cite[Problem 1.13]{sargent-shapira}.
\begin{remark} 1. Case 2.3.a is the main particular case of this theorem. In the above statement, if we suppose that $R_0 \neq Q$, then one can verify that $Q_H^\circ\cap R_0=\{\id\}$ and we are in Case 2.3.a.\\ 2. It might be possible to generalize the setting of the above theorem to the case where $G$ is a simple $\R$-split linear Lie group and $H<G$ is the image of a principal $\SL_2(\R)$ in $G$ in the sense of Kostant \cite{kostant}. \end{remark}
In view of the measure classification results of Benoist--Quint and Eskin--Lindenstrauss \cite[Theorem 1.3]{eskin-lindenstrauss.long} on quotients by discrete subgroups, i.e.~the right-hand side of \eqref{eq.bijection.in.thm.SL2.dec}, the following is a consequence of Theorem \ref{thm.irreducible.H.decompsable} (and Proposition \ref{prop.decomposable.measure.class}). Recall that a homogeneous measure $\tilde{\nu}$ on $S/\Lambda$ is a probability measure supported on a closed orbit of its stabilizer $S_0<S$. We also say that such a measure is $S_0$-homogeneous.
\begin{corollary}[Homogeneous fibres]\label{corol.homogeneous.fibres} Keep the setting of Theorem \ref{thm.irreducible.H.decompsable}. Let $\nu$ be a $\mu$-stationary and ergodic probability measure on $X_\mathcal{C}$. There exists a trivialization $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$ in whose coordinates $\nu$ is a product measure $\overline{\nu}_F \otimes \tilde{\nu}$, where $\tilde{\nu}$ is $S_0$-homogeneous. \end{corollary}
\begin{remark} Consider any standard trivialization $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$. Let $\nu=\int \delta_\theta \otimes \nu_\theta d\overline{\nu}_F(\theta)$ be the disintegration of $\nu$ over the base $H/Q_H$. Then, there exists a closed subgroup $S_0<S$ such that for $\overline{\nu}_F$-a.e.~$\theta \in H/Q_H$, the fibre measure $\nu_\theta$ is $S_\theta$-homogeneous, where $S_\theta$ is a conjugate of $S_0$. \end{remark}
The last remaining possibility for the action of $H$ on an $H$-invariant subbundle $X_\mathcal{C}$ is Case 2.3.b, which occurs when $Q_H$ is a parabolic subgroup of $H$ and $Q_H^o \cap R_0=\{\id\}$ but the $H$-action on $\P(\R^n)$ is not irreducible (see Example \ref{ex.to.be.treated}). In the statement below, we conjecture that the fibre measures are homogeneous, supposing only that the natural morphism $Q_H \to S$ has finite kernel (equivalently, $Q_H^o \cap R_0=\{\id\}$); in other words, the conclusion of Corollary \ref{corol.homogeneous.fibres} holds without the irreducibility assumption.
\begin{conjecture}[Homogeneous fibres]\label{conjecture} Let $G=\PGL_n(\R)$, the space $X$ and groups $Q,R_0,R,S\simeq Q/R_0,\Lambda\simeq R/R_0$ and $H$ be as defined above. Suppose we are in Case 2.3, i.e.~ $Q_H$ is a parabolic subgroup of $H$ and $Q_H^o \cap R_0=\{\id\}$. Then the conclusion of Corollary \ref{corol.homogeneous.fibres} holds. \end{conjecture}
We now turn to the equidistribution aspect of random walks on $H$-subbundles $X_\mathcal{C}$ of $X$. We keep the same setting as in the measure classification part above; we suppose in addition that $\Lambda$ is a lattice in $S$ (except for Theorem \ref{thm.equidist.geod} below). As usual, let $\mu$ be a probability measure on $G$ that is Zariski-dense in a copy $H$ of $\SL_2(\R)$ or $\PGL_2(\R)$ in $G$. Given a point $x \in X_\mathcal{C}$, we are interested in describing the asymptotic behaviour of the averaged distribution $\frac{1}{n} \sum_{k=1}^n \mu^{\ast k}\ast \delta_x$ of the random walk on $X$ starting from $x$ up to step $n$.
In all cases in which we treat the measure classification problem, i.e.~all cases except Case 2.3.b, it is possible to address the equidistribution problem. In fact, Case 1 (Dirac measures on the base) is precisely the setting of Benoist--Quint \cite{BQ3}, so the corresponding equidistribution results (see \cite{prohaska-sert-shi, benard-desaxce}, extending the original results with respect to moment hypotheses) directly apply; we do not comment on it further here. Case 2.1 boils down to equidistribution towards the Furstenberg measure; even quantitative statements are known for this case, see \S \ref{subsub.equidist.trivial.fibre}. Finally, thanks to the decomposability obtained in Theorem \ref{thm.irreducible.H.decompsable}, it is not hard to see that Case 2.3.a also boils down to the setting of Benoist--Quint, see Proposition \ref{prop.equidist.H.irred}.
Therefore the only case that needs to be handled is Case 2.2, i.e.~diagonal fibre action. In this case, we will observe that (see Lemma \ref{lemma.its.iwasawa}) we have a standard trivialization $X_\calC\simeq \calC\times_\alpha S/\Lambda$ such that the action $\alpha$ of $H$ on the fibre $S/\Lambda$ is by a one-dimensional diagonal subgroup $D$ of $S$ through the Iwasawa cocycle $\sigma$ up to a sign. It is well-known that the $D$-orbits of different points on $S/\Lambda$ can exhibit very different statistical behaviours, i.e.~ not characterized by a single $D$-invariant measure. Given the existence of this chaotic behaviour, the most one can hope to establish is that the statistical behaviour of the $\mu$-random walk in the fibres matches that of the $D$-flow. This is the content of the following result. In the statement, the equidistribution of a $D$-orbit is understood with respect to a Haar/Lebesgue measure on $D$.
\begin{theorem}[Diagonal fibre action: equidistribution]\label{thm.equidist.geod} Keep the hypotheses and notation of Theorem \ref{thm.measure.class.geod} and let $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$ be the trivialization given by Theorem \ref{thm.measure.class.geod}. Suppose in addition that the measure $\mu$ has finite exponential moment and $\Gamma_\mu$ is contained in the identity component of $H\simeq\PGL_2(\R)$. Then, the $D$-orbit of $z \in S/\Lambda$ equidistributes to a probability measure $m$ on $S/\Lambda$ if and only if for any $x=(\theta,z)\in X_\calC$, we have the convergence \[ \frac{1}{n}\sum_{k=1}^n\mu^{*k}*\delta_x\rightarrow \bar{\nu}_F\otimes m \quad \text{as} \; \; n \to \infty. \] \end{theorem}
\begin{remark} For $H\simeq \SL_2(\R)$, we have a similar equidistribution result, but the statement is more complicated; see \S\ref{subsec.equidist.diagonal} for more details. \end{remark}
\begin{remark}[Alternative proof for Theorem \ref{thm.measure.class.geod}] The results we establish to prove Theorem \ref{thm.equidist.geod} allow us to obtain a different proof of Theorem \ref{thm.measure.class.geod} under the additional finite exponential moment condition: let $\nu$ be a $\mu$-stationary and ergodic measure. By the Chacon--Ornstein ergodic theorem, there exists $x$ such that $\frac{1}{n}\sum_{k=1}^n\mu^{*k}*\delta_x\rightarrow \nu$ as $n \to \infty$. From the proof of Theorem \ref{thm.equidist.geod}, we actually obtain that $\frac{1}{n}\sum_{k=1}^n\mu^{*k}*\delta_x$ and $\bar\nu_F\otimes \frac{1}{t}\int_0^t \delta_{\alpha(s)z}\, ds$ have the same limit as $t,n \to \infty$, where $(\alpha(s))_{s \in \R}$ is the flow of $D$ and $x=(\theta,z)$. Therefore $\bar\nu_F\otimes \frac{1}{t}\int_0^t \delta_{\alpha(s)z}\, ds\to\nu$, which implies $\frac{1}{t}\int_0^t \delta_{\alpha(s)z}\, ds\to m$ for some $D$-invariant probability measure $m$, and hence the conclusion of Theorem \ref{thm.measure.class.geod}. \end{remark}
\subsection{Ideas of proofs}\label{subsec.intro.proofs} Before finishing the introduction, we give a brief overview of the ideas of proofs used to obtain the main results of this paper.\\[-4pt]
\textbullet ${}$ \textbf{Case 2.2 (measure classification): Drift argument.} The basic idea in Case 2.2 is to use the non-triviality of the fibre bundle structure of $X_\mathcal{C}$ over $\mathcal{C}$ to obtain invariance of the measures.
More concretely, for each cocycle $\alpha:H\times \calC\to S$, one can try to define a cross-ratio for quadruples $a,a',b,b'\in H^\N$ by \[C_\alpha(a,a',b,b')=\lim_{n,m\to \infty}\alpha(a'^n,\xi(b))\alpha(a^{m},\xi(b))^{-1}\alpha(a'^n,\xi(b'))^{-1}\alpha(a^m,\xi(b')) \] with suitable limits of $n,m$, where $\xi$ is some map from $H^\N$ to $\calC$. If the cocycle $\alpha$ is cohomologous to a morphism from $H$ to $S$, that is, the bundle structure is trivial, then any reasonable definition of cross-ratio will yield no information. This corresponds to the \textit{decomposable} action in Theorem \ref{thm.irreducible.H.decompsable}. Otherwise, if the bundle structure is not trivial, as in Case 2.2, then the cross-ratio is non-trivial for generic quadruples $a,a',b,b'$ and yields certain information on the relation between the asymptotic behaviour of products corresponding to those points. In this case, we adapt the drift argument of \cite{eskin-lindenstraus.short} to a ``six-point drift argument'' to exploit this information and obtain invariance under a limit cross-ratio. This six-point drift argument is very different from the drift argument in \cite{BQ1} or \cite{sargent-shapira}; we do not use expansion in some tangent directions (indeed, in Case 2.2, we have no expansion). It is really the non-triviality of the cross-ratio, or equivalently of the bundle structure, that helps us to obtain the invariance of the measures.
\begin{figure}
\caption{Six points drift argument}
\label{fig.drift}
\label{tikzpic}
\end{figure}
\textbullet ${}$ \textbf{Case 2.3 (measure classification): Decomposable action.} In some sense, the key difficulty in this case is to suspect that the action might be decomposable in our setting. Once one has this possibility in mind, one can use ideas about cocycles going back to Mackey \cite{mackey49, mackey58} (see Varadarajan \cite[\S 5]{varadarajan.quantum} or Zimmer \cite{zimmer.book} for precise expressions) to establish this decomposability. The latter expresses a certain algebraic structure in, or equivalently triviality of, the fibre bundle $X_\mathcal{C}$; in more concrete terms, it boils down to an extension of a natural morphism $Q_H \to S$ to a larger group (the only possibility being $H$ in our setting). Once this is established, one reduces the situation to a trivial bundle structure and hence can bring in the results of Benoist--Quint and Eskin--Lindenstrauss.\\[-3pt]
\textbullet ${}$ \textbf{Case 2.2 (Equidistribution): Uniform quantitative renewal theorem.} The key point that enables us to obtain the equidistribution result (Theorem \ref{thm.equidist.geod}), and an alternative proof of Theorem \ref{thm.measure.class.geod} under a stronger moment assumption, is that in this particular situation we have precise control of random matrix products in the form of a uniform quantitative renewal theorem and exponential large deviation estimates. More precisely, under a suitable trivialization, the Ces\`{a}ro average can be expressed as \[\frac{1}{n}\sum_{k=1}^n\mu^{*k}*\delta_x=\frac{1}{n}\sum_{k=1}^n\int \delta_{(g\theta,\,\alpha(\sigma_\chi(g,\theta))z)} \ d\mu^{*k}(g), \] where $x=(\theta,z)$ and $\sigma_\chi$ is some cocycle from $H\times \calC$ to $\R$. This is very similar to the renewal sum $\sum_{k=1}^\infty \int \delta_{(g\theta,\,\sigma_\chi(g,\theta)-t)}\ d\mu^{*k}(g)$, which converges to the product measure $\bar{\nu}_F\otimes \mathrm{Leb}_{\R^+}$ when tested against compactly supported continuous functions. Combined with exponential large deviation estimates and good error estimates from the uniform quantitative renewal theorem, we can prove the equidistribution result.
This article is organized as follows. Section \ref{sec.prelim} contains some preliminary tools about fibred dynamics, cocycles and stationary measures. Section \ref{sec.meas.class} is devoted to proving the measure classification results; Theorem \ref{thm.measure.class.geod} and Theorem \ref{thm.irreducible.H.decompsable} are proved therein. Finally, Section \ref{sec.equidist} contains the equidistribution results, in particular the proof of Theorem \ref{thm.equidist.geod}.
\section{Preliminaries: Cocycles, decomposable actions and stationary measures}\label{sec.prelim}
This section contains a collection of preliminaries for the proofs in the subsequent parts. We adopt a general setting. In \S \ref{subsec.cocycles}, after a discussion of cocycles induced by trivializations or sections, we introduce the notion of a decomposable action and present an important criterion for decomposability. \S \ref{subsec.stat.measures} contains a discussion of stationary measures and their decompositions, due to Furstenberg. Finally, in \S \ref{subsec.meas.class.sec2}, we single out a description of stationary measures for decomposable actions.
\subsection{Generalities on cocycles and decomposable actions}\label{subsec.cocycles} Let $G$ be a locally compact and second countable (lcsc) group and $X$ a standard Borel space endowed with a Borel $G$-action $G \times X \to X$. Let $Q<G$ be a closed subgroup and suppose we have a measurable $G$-equivariant surjection $\pi: X \to G/Q$. We shall refer to such a $G$-space $X$ as a fibre bundle over $G/Q$. A fruitful way to describe the $G$-action on such bundles $X$ is by using the notion of cocycles. This approach -- going back to the work of Mackey \cite{mackey49, mackey58} (see Varadarajan \cite[\S 5]{varadarajan.quantum}) on induction of unitary representations -- will be instrumental to our considerations.
\subsubsection{Cocycles defined by actions and vice versa}
Given a bundle over $G/Q$ with $G$-action, let $F$ be a copy of the fibre above $Q$, i.e.~of the Borel set $\pi^{-1}(Q)$ endowed with its natural $Q$-action. Recall that a cocycle $\alpha:G \times G/Q \to Q$ is a map satisfying $\alpha(g_1g_2,f)=\alpha(g_1,g_2f) \alpha(g_2,f)$ for every $g_1,g_2 \in G$ and $f \in G/Q$ (this corresponds to what is called a strict cocycle in \cite{varadarajan.quantum}). A $(G,Q)$-bundle trivialization of $X$ is a Borel isomorphism $\phi=(\phi_1,\phi_2):X \simeq G/Q \times F$ such that $\phi_1$ is $G$-equivariant and $\phi_2$ is equivariant with respect to a $Q$-valued cocycle $\alpha: G \times G/Q \to Q$, i.e.~$\phi(gx)=(g\phi_1(x),\alpha(g,\phi_1(x))\phi_2(x))$. By using the cocycle relation, one sees that any cocycle $\alpha: G \times G/Q \to Q$ endows the space $G/Q \times F$ with a $G$-action. We shall denote the space $G/Q \times F$ endowed with the $G$-action induced by a $Q$-valued cocycle $\alpha$ by $G/Q \times_\alpha F$. Therefore, a $(G,Q)$-bundle trivialization is a $G$-equivariant isomorphism between $X$ and $G/Q \times_\alpha F$ for some $Q$-valued cocycle $\alpha$.
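The fact that a cocycle induces a genuine $G$-action can be checked directly: the cocycle relation is precisely the associativity of the candidate action. Setting $g\cdot(f,y):=(gf,\alpha(g,f)y)$ on $G/Q \times F$, for all $g_1,g_2\in G$ we get
\[
g_1\cdot\big(g_2\cdot(f,y)\big)=\big(g_1g_2f,\ \alpha(g_1,g_2f)\alpha(g_2,f)y\big)=\big(g_1g_2f,\ \alpha(g_1g_2,f)y\big)=(g_1g_2)\cdot(f,y),
\]
so that $G/Q\times_\alpha F$ is indeed a $G$-space.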
In the rest of this paper, the ambient space $X$ which we will work with will be, in particular, a homogeneous space of a lcsc group $G$. For $x \in X$, we denote by $G_x$ the stability group $\{g \in G : gx=x\}$ so that we have a $G$-equivariant identification $X \simeq G/G_x$. We will suppose that the stability group $G_{x_0}=:R$ of a base point $x_0 \in X$ is contained in $Q$ so that we have a continuous $G$-equivariant surjection $\pi: G/R \simeq X \to G/Q$ turning the homogeneous space $X$ into a bundle over $G/Q$ with $G$-action. The choice of the base point $x_0$ identifies the fibre $\pi^{-1}(Q)$ with the $Q$-homogeneous space $Q/R$. In this setting, any Borel section $s:G/Q \to G$ yields a trivialization of $X$ given by the Borel isomorphism \begin{equation}\label{eq.trivialization} \begin{aligned} X &\to G/Q \times Q/R\\ x &\mapsto (\pi(x), s(\pi(x))^{-1}x). \end{aligned} \end{equation} Here the Borel section $s$ is a section of the principal $Q$-bundle $G\rightarrow G/Q$, i.e.~ $s(gQ)Q=gQ$. Such a section $s$ induces a Borel cocycle $\alpha:G \times G/Q \to Q$ by setting \begin{equation}\label{eq.construct.cocycle} \alpha(g,hQ):=s(ghQ)^{-1}g s(hQ), \end{equation} which makes the trivialization \eqref{eq.trivialization} into a $(G,Q)$-bundle trivialization.
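Let us verify that the formula \eqref{eq.construct.cocycle} indeed defines a cocycle. First, $\alpha$ takes values in $Q$: since $s(ghQ)Q=ghQ$, we have $s(ghQ)^{-1}gs(hQ)Q=s(ghQ)^{-1}ghQ=Q$. Moreover, for $g_1,g_2\in G$ and $hQ\in G/Q$,
\[
\alpha(g_1,g_2hQ)\,\alpha(g_2,hQ)=s(g_1g_2hQ)^{-1}g_1 s(g_2hQ)\cdot s(g_2hQ)^{-1}g_2 s(hQ)=s(g_1g_2hQ)^{-1}g_1g_2\, s(hQ)=\alpha(g_1g_2,hQ),
\]
the middle factors cancelling since $s$ is evaluated at the same coset.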
For a $G$-homogeneous bundle $X$, we will say that a $(G,Q)$-bundle trivialization $X \simeq G/Q \times_\alpha Q/R$ is \textit{standard} if the morphism $\rho_\alpha: Q \to Q$ given by $\rho_\alpha(q)=\alpha(q,Q)$ is conjugate to the identity morphism $Q \to Q$. A trivialization given by a choice of section as above is standard. Conversely, a standard trivialization is induced by a choice of section $s:G/Q \to G$.
In the sequel, we will assume further structure on the groups making up the space $X$. Namely, we will suppose that there exists a closed normal subgroup $R_0$ of $Q$ which is also a subgroup of $R$ so that writing $S:=Q/R_0$ and $\Lambda=R/R_0$, we have an identification of the fibre $Q/R$ with $S/\Lambda$. The reason for this is that starting from Section \ref{sec.meas.class}, the group $\Lambda$ will be a discrete subgroup of $S$ which will make the action on $Q/R \simeq S/\Lambda$ more tractable. Composing a cocycle $\tilde{\alpha}:G \times G/Q \to Q$ with the epimorphism $\pi_1:Q \to S=Q/R_0$, we obtain an $S$-valued cocycle $\alpha: G \times G/Q \to S$. Since the action of $Q$ on the fibre $Q/R$ factors through $S$, the $S$-valued cocycle $\alpha=\pi_1\circ\tilde{\alpha}$ is sufficient to reconstruct the bundle. From now on, we will mainly consider $S$-valued cocycles.
\subsubsection{Equivalence of cocycles}\label{subsub.equiv.cocycles} Let $G'$ be a lcsc group. Two $G'$-valued cocycles $\alpha, \beta: G \times G/Q \to G'$ are said to be equivalent, denoted $\alpha \sim \beta$, if there exists a Borel map $\phi:G/Q \to G'$ such that for every $x \in G/Q$ and $g \in G$, we have \begin{equation}\label{eq.eq.cocycles} \alpha(g,x)=\phi(gx)^{-1} \beta(g,x) \phi(x). \end{equation} It is clear that for two $S$-valued cocycles $\alpha$ and $\beta$ over $G/Q$, if $\alpha \sim \beta$, then the associated $G$-spaces $G/Q \times_\alpha Q/R$ and $G/Q \times_\beta Q/R$ are isomorphic. Accordingly, $S$-valued cocycles obtained from different sections $s:G/Q \to G$ via \eqref{eq.construct.cocycle} are equivalent.
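The last assertion can be seen concretely: if $\tilde\alpha_s$ and $\tilde\alpha_{s'}$ are the $Q$-valued cocycles induced via \eqref{eq.construct.cocycle} by two sections $s,s':G/Q \to G$, then $\psi(hQ):=s'(hQ)^{-1}s(hQ)$ lies in $Q$ for every $hQ \in G/Q$, and
\[
\tilde\alpha_s(g,hQ)=s(ghQ)^{-1}s'(ghQ)\cdot s'(ghQ)^{-1}gs'(hQ)\cdot s'(hQ)^{-1}s(hQ)=\psi(ghQ)^{-1}\,\tilde\alpha_{s'}(g,hQ)\,\psi(hQ),
\]
which is exactly of the form \eqref{eq.eq.cocycles}; composing with the epimorphism $\pi_1:Q \to S$ then gives the equivalence of the associated $S$-valued cocycles via $\phi=\pi_1\circ\psi$.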
In the sequel, we will be interested in actions of subgroups $H$ of $G$ on a $G$-homogeneous bundle $X$ and the associated subbundles. We will use similar terminology for cocycles restricted to $H$. Let $H$ be a closed subgroup of $G$ and $\mathcal{C} \subseteq G/Q$ an $H$-homogeneous closed subset of $G/Q$. Up to replacing $Q$ by a conjugate, suppose $\mathcal{C}=HQ$ so that $\mathcal{C} \simeq H/Q_H$ where $Q_H:=H \cap Q$. In that case $X_\mathcal{C}:=\pi^{-1}(\mathcal{C}) \subseteq X$ is an $H$-invariant closed (not necessarily $H$-homogeneous) subset of $X$ giving rise to a bundle over $\mathcal{C}$ with $H$-action.
\begin{example}\label{ex.ss} The above situation appears in (and is motivated by) the setting of the work of Sargent--Shapira \cite{sargent-shapira}: Let $X$ be the set of homothety-equivalence classes of rank-2 lattices in $\mathbb{R}^3$. The set $X$ has a natural lcsc topology and the group $G=\SL_3(\R)$ acts continuously and transitively on $X$. The connected component $R_0$ of the stabilizer $R$ of a point $x \in X$ is the solvable radical of a maximal parabolic subgroup $Q$ of $G$, and we have a surjective map $S = Q/R_0 \to Q/R\simeq S/\Lambda\simeq \SL_2(\R)/\SL_2(\Z)$, where $R/R_0 \simeq \Lambda=\PGL_2(\Z)$. One can then consider the action of the subgroup $H=\SO(2,1)<G$ on $X$, which has a unique minimal invariant subset $\mathcal{C}$ in $G/Q$. The group $Q_H=H \cap Q$ corresponds to a Borel subgroup of $H$. \end{example}
\subsubsection{Induced morphisms and decomposable subbundle actions}\label{subsub.bundle.dec}
Recall the notion of a decomposable bundle in Definition \ref{def.decomposable}. We provide a criterion to ensure that an $H$-homogeneous bundle $X_\mathcal{C}$ is decomposable.
\begin{proposition}\label{prop.decomposable} If the morphism $Q_H \hookrightarrow Q \twoheadrightarrow S $ extends to a morphism $H \to S$, then $X_\mathcal{C}$ is decomposable. \end{proposition}
The following statement is a version of \cite[Proposition 4.2.16]{zimmer.book} and provides a useful characterization of a decomposable action in our setting. Let $\Co(H,Q_H,S)$ denote the set of Borel cocycles $H \times H/Q_H \to S$. Given $\alpha \in \Co(H,Q_H,S)$, the map $\rho_\alpha$ defined by $\rho_\alpha(p):=\alpha(p,Q_H)$ for $p \in Q_H$ is a Borel (hence continuous) morphism $Q_H \to S$.
The proof is based on an important observation of Mackey which characterizes equivalence classes of Borel cocycles in $\Co(H,P,G')$ by conjugacy classes of the induced morphisms $P \to G'$, where conjugacy is by an element of the target group $G'$. We have the following result from \cite[Theorem 5.27]{varadarajan.quantum}, which we adapt to our setting.
\begin{lemma}\label{lemma.cocycle.bijection} The map \begin{equation*} \begin{aligned} \Co(H,P,G') &\to \Hom(P,G')\\ \alpha & \mapsto \rho_\alpha \end{aligned} \end{equation*} is a surjective map that descends to a bijection when we quotient $\Co(H,P,G')$ by equivalence of cocycles and $\Hom(P,G')$ by conjugation in $G'$. \end{lemma} \begin{proof} Fix a Borel section $s:H/P \to H$ with $s(P)=\id$. Given a morphism $\rho:P \to G'$, the map given by \begin{equation}\label{eq.morphism.to.cocycle} \alpha(h_1,hP)=\rho(s(h_1hP)^{-1}h_1s(hP)) \end{equation} is well-defined since $s(h_1hP)^{-1}h_1s(hP) \in P$, and it is a cocycle: the factorization $s(h_1h_2hP)^{-1}h_1h_2s(hP)=[s(h_1h_2hP)^{-1}h_1s(h_2hP)][s(h_2hP)^{-1}h_2s(hP)]$ into two elements of $P$, together with the multiplicativity of $\rho$, yields the cocycle relation. Its restriction to $P \simeq P \times \{P\}$ recovers $\rho:P \to G'$, using $s(P)=\id$. This shows that the map is surjective.
Note that if two cocycles $H \times H/P \to G'$ are equivalent, then it is clear that the morphisms $P \to G'$ that they induce are conjugate by an element of $G'$. In particular, the map $\alpha \mapsto \rho_\alpha$ descends to a (surjective) map on the equivalence classes of cocycles.
To show that this map is a bijection, let $\alpha$ and $\beta$ be two cocycles $H \times H/P \to G'$ and suppose that $\rho_\alpha(.)=l\rho_\beta(.)l^{-1}$ for some element $l \in G'$. By a direct computation using the cocycle relation, we have \begin{equation}\label{eq.section.inout} \alpha(s(h_1hP),P)^{-1}\alpha(h_1,hP)\alpha(s(hP),P)=\rho_\alpha(s(h_1hP)^{-1}h_1s(hP)). \end{equation} Writing the analogue of \eqref{eq.section.inout} for $\beta$ (substituting $\rho_\beta$ for $\rho_\alpha$) and combining it with \eqref{eq.section.inout}, we get $$ \alpha(h_1,hP)=\alpha(s(h_1hP), P)\,l\,\beta(s(h_1hP),P)^{-1}\beta(h_1,hP)\beta(s(hP), P)\,l^{-1}\,\alpha(s(hP),P)^{-1}. $$ This shows that $\alpha \sim \beta$ via the fibre automorphisms given by the map $$hP \mapsto \beta(s(hP), P)\,l^{-1}\,\alpha(s(hP), P)^{-1},$$ which proves the claim. \end{proof}
We say that a cocycle $\alpha: H \times H/P \to S$ is of \textit{morphism type} with morphism $\rho$ if $\rho:H \to S$ is a Borel morphism such that $\alpha(h_1,hP)=\rho(h_1)$ for every $h_1,h \in H$. We can now give the proof of the decomposability criterion.
\begin{proof}[Proof of Proposition \ref{prop.decomposable}] Choose a Borel section $s: G/Q \to G$ with $s(Q)=\id$. Let $\tilde\alpha:G \times G/Q \to Q$ be the associated cocycle. Recall $\pi_1$ is the quotient map $Q\to S=Q/R_0$. The associated morphism $\rho_{\tilde\alpha}:Q \to Q$ is then the identity map and hence the map $\pi_1\circ \rho_{\tilde\alpha}:Q_H\to Q\to S$ extends to a morphism $\tau:H \to S$ by assumption. We thus get a morphism-type cocycle $\beta:H \times H/Q_H \to S$ by taking $\beta(h,c)=\tau(h)$. By Lemma \ref{lemma.cocycle.bijection}, the cocycle $\alpha:=\pi_1\circ \tilde\alpha$ restricted to $H \times H/Q_H$ and $\beta$ are equivalent. Since equivalent cocycles induce isomorphic bundles, we are done. \end{proof}
\subsection{Skew-product systems and stationary measures}\label{subsec.stat.measures}
Let $H^\Z$ be the set of two-sided sequences of elements of $H$. We denote an element (bi-infinite word) of $H^\Z$ by $w=(b,a)$ where, by convention, we consider $b=(\ldots, b_{-2},b_{-1}) \in H^{-\N^\ast}$ and $a=(a_0,a_1,\ldots) \in H^\N$. The sequence $b$, also denoted $w^-$, will be referred to as the past of $w$ and $a$, also denoted $w^+$, as the future of $w$. We denote by $T$ the shift map on $H^\Z$ taking one step towards the future, i.e.~$T(b,a)=(ba_0,Ta)$, where $ba_0$ is the concatenation $(\ldots, b_{-1},a_0)$ and $Ta$ is the image of $a$ under the usual shift map, also denoted $T$, on $H^\N$. Accordingly, the inverse of $T$ is given by $T^{-1}(b,a)=(T^{-1}b,b_{-1}a)$, where $T^{-1}b=(\ldots, b_{-2})$.
Let $\mu$ be a probability measure on $H$ and $\nu$ a $\mu$-stationary probability measure on a locally compact and second countable $H$-space $Y$. It follows from the martingale convergence theorem that the weak-$\ast$ limit as $n\to \infty$ of $b_{-1}\ldots b_{-n} \nu$ exists for $\mu^{\Z}$-almost every $w$; it will be denoted by $\nu_{w}$, or sometimes $\nu_b$, since it only depends on the past of $w$. These limit measures satisfy a key equivariance property which says that for $\mu^{-\N^\ast}$-a.e. $b \in H^{-\N^\ast}$, we have $\nu_{T^{-1}b}=b_{-1}^{-1}\nu_b$ or, equivalently, for $\mu^{-\N^\ast}$-a.e. $b \in H^{-\N^\ast}$ and $\mu$-a.e. $a_0 \in H$, we have $\nu_{ba_{0}}=a_0\nu_b$.
Stationary measures can also be seen as part of invariant measures on skew-product systems. Let $\hat{Y}$ denote the product $H^\Z \times Y$ and $\hat{T}$ the skew-shift given by $\hat{T}((b,a),y)=(T(b,a), a_0 y)$. A basic fact (see e.g.~ \cite[Chapter 2]{bq.book}) is that given a probability measure $\mu$ on $H$, any $\mu$-stationary measure on $Y$ gives rise to a $\hat{T}$-invariant measure on $\hat{Y}$ that projects onto $\mu^\Z$ on the $H^\Z$ factor: indeed given a $\mu$-stationary measure $\nu$, the measure $$\hat{\nu}=\int \delta_{w} \otimes \nu_{w} d\mu^\Z(w)$$ defines a $\hat{T}$-invariant measure. The measure $\hat{\nu}$ is $\hat{T}$-ergodic if and only if $\nu$ is $\mu$-ergodic \cite[Chapter 2.6]{bq.book}.
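For the reader's convenience, we include the short verification (a standard computation we add here, using only the equivariance property $\nu_{ba_0}=a_0\nu_b$ of the limit measures recalled above, together with the fact that $\nu_w$ depends only on the past $b$ of $w$) that $\hat{\nu}$ is indeed $\hat{T}$-invariant:

```latex
\hat{T}_\ast\hat{\nu}
  = \int \delta_{Tw} \otimes a_0\nu_{w}\, d\mu^{\Z}(w)
  = \int \delta_{Tw} \otimes \nu_{Tw}\, d\mu^{\Z}(w)
  = \int \delta_{w} \otimes \nu_{w}\, d\mu^{\Z}(w)
  = \hat{\nu}.
```

Indeed, the past of $Tw$ is $ba_0$, so $\nu_{Tw}=\nu_{ba_0}=a_0\nu_b=a_0\nu_w$, and the last substitution $w \mapsto T^{-1}w$ uses the $T$-invariance of $\mu^{\Z}$.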
\subsubsection{Stationary measures on the flag varieties}\label{subsub.stat.meas.flag}
Let $H$ be a real semisimple linear Lie group and $P$ a parabolic subgroup. According to a fundamental result of Furstenberg \cite{furstenberg.boundary.theory} (generalized to the current form by Guivarc'h--Raugi \cite{guivarch-raugi} and Goldsheid--Margulis \cite{goldsheid-margulis}), for any Zariski-dense probability measure $\mu$ on $H$, there exists a unique $\mu$-stationary probability measure on $H/P$. We shall refer to this measure as the Furstenberg measure and denote it by $\overline{\nu}_F$.
Recall that a stationary probability measure $\nu$ is said to be \textbf{$\mu$-proximal} if the limit measures $\nu_b$ are Dirac measures $\mu^{-\N^\ast}$-a.s. We will also say that the $H$-action on a space $Y$ is \textbf{$\mu$-proximal} if every $\mu$-stationary and ergodic probability measure $\nu$ on $Y$ is $\mu$-proximal. For a Zariski-dense probability measure $\mu$, the Furstenberg measure (and hence the $H$-action on $H/P$) is $\mu$-proximal.
If $H$ acts $\mu$-proximally on a space $Y$, then every $\mu$-stationary probability measure $\nu$ induces a boundary map $w=(b,a) \mapsto \xi(w)=\xi(b)$ defined $\mu^{\Z}$-a.s.\ satisfying $\nu_w=\delta_{\xi(w)}$ and the equivariance property $b_{-1}\xi(T^{-1}w)=\xi(w)$. Conversely, a boundary map $\xi$ with the last equivariance property induces a $\mu$-proximal stationary probability measure. We will use the shorthand $P_{\mu}^{\erg}(Y)$ to denote the set of $\mu$-stationary and ergodic probability measures on $Y$.
\subsubsection{Limit measures on the fibre}\label{subsub.fibre.measures} Let $Y_0$ and $Y=Y_0 \times F$ be $H$-spaces such that the projection $Y \to Y_0$ is $H$-equivariant. Let $\nu$ be a $\mu$-stationary probability measure on $Y$ such that its projection $\overline{\nu}$ on $Y_0$ (which is automatically $\mu$-stationary) is $\mu$-proximal. Then, for $\mu^{-\N^\ast}$-almost every $b$, the limit measure $\nu_{b}$ on $Y \simeq Y_0 \times F$ is of the form $\delta_{\xi(b)}\otimes \tilde{\nu}_{b}$, where $\xi: H^{-\N^\ast} \to Y_0$ is a measurable equivariant map (i.e.~ $\mu^{\Z}$-a.s. $\xi(ba_0)=a_0\xi(b)$) and $\tilde{\nu}_{w}=\tilde{\nu}_{b}$ is a probability measure on $F$.
\subsection{Measure classification for product systems with equivariant projections}\label{subsec.meas.class.sec2}
In the following result, we record, in a general setting, a description of stationary measures for actions on product spaces with equivariant projections on both factors. It is based on the Furstenberg decomposition of a stationary measure into its limit measures, i.e.~ $\nu=\int \nu_b d\mu^{-\N^\ast}(b)$.
\begin{proposition}\label{prop.decomposable.measure.class} Let $H$ be a lcsc group and let $Y_0$ and $F$ be lcsc $H$-spaces. Consider the $H$-action on $Y=Y_0 \times F$ for which both projections $Y \to Y_0$ and $Y \to F$ are $H$-equivariant. Let $\mu$ be a probability measure on $H$ such that the $H$-action on $Y_0$ or $F$ is $\mu$-proximal. Then, we have $$ P_\mu^{\erg}(Y) \simeq P_\mu^{\erg}(Y_0)\times P_\mu^{\erg}(F). $$ More precisely, the map \begin{equation}\label{eq.bijection.map.decomposable} \nu \mapsto (\overline{\nu},\nu^F) \end{equation} is a bijection, where $\overline{\nu}$ and $\nu^F$ denote, respectively, the pushforwards of $\nu$ under the projections $Y \to Y_0$ and $Y \to F$. \end{proposition}
The assumption of proximality induces a certain disjointness between the two factors; it is clear that without such an assumption the conclusion fails (e.g.~ if $Y_0$ and $F$ have a common non-trivial factor).
\begin{proof} Note that since the projections $Y \to Y_0$ and $Y \to F$ are both $H$-equivariant, the pushforward measures $\overline{\nu}$ and $\nu^F$ are both $\mu$-stationary. Moreover, it is clear that if $\nu$ is ergodic, then so are $\overline{\nu}$ and $\nu^F$. Let us show that the map $P_\mu^{\erg}(Y) \to P_\mu^{\erg}(Y_0)\times P_\mu^{\erg}(F)$ given by $\nu \mapsto (\overline{\nu},\nu^F)$ yields the desired bijection.
Without loss of generality, let us suppose that the $H$-action on $Y_0$ is $\mu$-proximal and show that the above map is injective. Given $\nu \in P_\mu^{\erg}(Y)$, since the projections to each factor commute with the $H$-action, we have $\mu^{-\N^\ast}$-a.s.\ $\overline{\nu_b}=(\overline{\nu})_b$ and $(\nu_b)^F=(\nu^F)_b$. Moreover, since the $H$-action on $Y_0$ is $\mu$-proximal and $\overline{\nu}$ is ergodic, there exists a boundary map $\xi:B \to Y_0$ such that $(\overline{\nu})_b=\delta_{\xi(b)}$ for $\mu^{-\N^\ast}$-a.e.~ $b \in H^{-\N^\ast}$. Therefore, the probability measure $\nu_b$ is given by $\delta_{\xi(b)} \otimes (\nu^F)_b$ and hence, by the Furstenberg decomposition, we can recover the measure $\nu$ as $\nu=\int \delta_{\xi(b)} \otimes (\nu^F)_b \, d\mu^{-\N^\ast}(b)$. This shows that the map $\nu \mapsto (\overline{\nu},\nu^F)$ is injective.
Surjectivity does not use the $\mu$-proximality assumption and follows from the fact that both pushforwards $\overline{\nu}$ and $\nu^F$ are $\mu$-stationary and ergodic. Indeed, one readily checks that $\int (\overline{\nu})_b \otimes (\nu^F)_b \, d\mu^{-\N^\ast}(b)$ is a $\mu$-stationary and ergodic probability measure on $Y$. \end{proof}
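To spell out the stationarity part of the last verification (a sketch we add, in the notation of the proof): writing $\lambda=\int (\overline{\nu})_b \otimes (\nu^F)_b \, d\mu^{-\N^\ast}(b)$ and using the equivariance $a_0(\overline{\nu})_b=(\overline{\nu})_{ba_0}$ and $a_0(\nu^F)_b=(\nu^F)_{ba_0}$ of the limit measures, we have

```latex
\mu \ast \lambda
  = \int\!\!\int a_0\big((\overline{\nu})_b \otimes (\nu^F)_b\big)\, d\mu(a_0)\, d\mu^{-\N^\ast}(b)
  = \int\!\!\int (\overline{\nu})_{ba_0} \otimes (\nu^F)_{ba_0}\, d\mu(a_0)\, d\mu^{-\N^\ast}(b)
  = \lambda,
```

where the last equality holds because appending a $\mu$-distributed letter $a_0$ to the past $b$ preserves the Bernoulli measure $\mu^{-\N^\ast}$.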
\section{$\SL_2(\R)$-Zariski closure: measure classification}\label{sec.meas.class}
We now begin the main part on classifying stationary measures on homogeneous bundles over flag varieties. Following the scheme outlined in the Introduction, in \S \ref{sub.base.and.cases} we start by distinguishing Case 1 (Dirac base) and Case 2 (Furstenberg base) according to the classification on the base, followed by a precise description of the various possibilities that occur in Case 2 (see Figure \ref{figure.cases}). In the remaining subsections, we focus on Case 2. \S \ref{subsec.case.2.1} treats the trivial fibre case. In \S \ref{subsec.case.2.2.diag}, we treat the diagonal fibre action case and prove Theorem \ref{thm.measure.class.geod}. Finally, in \S \ref{subsec.remaining.case}, we prove Theorem \ref{thm.irreducible.H.decompsable} and provide an example for Case 2.3.b.
\subsection{The setting, classification on the base and the cases}\label{sub.base.and.cases} Let us start by recalling the notations from the introduction. Let $G$ be a semisimple real Lie group with finite centre and $Q<G$ a parabolic subgroup. Let $R_0 \unlhd Q$ be a normal algebraic subgroup and $R<Q$ be a closed subgroup containing $R_0$ such that $S:=Q/R_0$ is semisimple with finite centre and without compact factors and $\Lambda:=R/R_0$ is a discrete subgroup of $S$. We denote by $X$ the quotient space $G/R$. As explained in the introduction, the space $X$ has a natural $G$-equivariant projection to $G/Q$ endowing it with a fibre-bundle structure over the flag variety $G/Q$ with fibres given by copies of $S/\Lambda$.
A convenient way to find such a subgroup $R_0 \unlhd Q$ as above is to consider the refined Langlands decomposition $Q=S_Q E_QA_QN_Q$ of $Q$ (see e.g.~ \cite[VII,7]{knapp.book}), where $S_Q$ is a semisimple subgroup of $Q$ without compact factors, $E_Q$ is a compact subgroup commuting with $S_Q$, $A_Q$ is a maximal $\R$-split diagonalizable subgroup commuting with $S_QE_Q$, and $N_Q$ is the unipotent radical of $Q$. One can then take the normal subgroup $R_0$ of $Q$ to be of the form $S'_QE_Q A_Q N_Q$, where $S'_Q$ is a simple factor of $S_Q$ or the trivial group.
Let $\mu$ be a probability measure on $G$ with finite first moment, $\Gamma_\mu$ the closed semigroup generated by the support of $\mu$ and suppose that the Zariski-closure $\overline{\Gamma}_\mu^Z$ of $\Gamma_\mu$ is a copy of either $\PGL_2(\R)$ or $\SL_2(\R)$ in $G$. We will denote this Zariski closure by $H$. On our way to establishing a description of $\mu$-stationary probability measures $\nu$ on $X$, we remark that a measure $\nu$ on $X$ determines, and in turn is determined by, its projection on $G/Q$ via $\pi: X \to G/Q$ and the fibre measures of this projection. Therefore, we proceed by first discussing the possible $\mu$-stationary measures on the base $G/Q$.
\subsubsection{Stationary measures on the base}\label{subsub.stat.meas.base} Since the projection $X \to G/Q$ is $G$-equivariant, any $\mu$-stationary probability measure $\nu$ on $X$ projects down to a $\mu$-stationary probability measure $\overline{\nu}$ on the base $G/Q$. The description of stationary measures on the base is handled by Guivarc'h--Raugi \cite{guivarch-raugi} and by Benoist--Quint \cite{BQ.compositio}. We have the following.
\begin{lemma}\label{lemma.base.cases} There exists a bijection between $\mu$-stationary and ergodic probability measures on $G/Q$ and compact $H$-orbits on $G/Q$. \end{lemma}
\begin{proof} It is clear that any compact $H$-orbit carries a $\mu$-stationary and ergodic probability measure. Conversely, let $\nu$ be a $\mu$-stationary and ergodic probability measure. By the Chacon--Ornstein ergodic theorem, there exists $x \in G/Q$ such that $\frac{1}{N} \sum_{k=1}^N \mu^{\ast k} \ast \delta_x \to \nu$ as $N \to \infty$. So, in particular, $\nu$ is supported in the compact set $\overline{\Gamma_\mu x}$. Since $G/Q$ is a flag variety, the orbit $Hx$ is locally closed (see e.g.~\cite[Theorem 3.1.1]{zimmer.book}). Hence the compact set $\overline{\Gamma_\mu x}$ is contained in $Hx$. Moreover, since $Hx$ is locally compact, up to conjugating $Q$, it is ($H$-equivariantly) homeomorphic to $H/Q_H$ where $Q_H=H \cap Q$. It now follows from \cite[Proposition 5.5]{BQ.compositio} that $Q_H$ is cocompact in $H$ and that $\nu$ is the unique $\mu$-stationary and ergodic probability measure supported in $H/Q_H \simeq Hx$. This concludes the proof. \end{proof}
It follows from this result that there are two types of $\mu$-stationary and ergodic probability measures on the base $G/Q$. The first type, which we will refer to as \textbf{Case 1}, consists of Dirac measures. This happens if and only if $H$ is contained in a conjugate of $Q$. The second type (\textbf{Case 2}) consists of the Furstenberg measure supported in a copy of $H/P$ in $G/Q$, where $P$ is a parabolic subgroup of $H$. This happens if and only if $H$ intersects a conjugate of $Q$ in a parabolic subgroup. Note that both types of stationary measures can be simultaneously present in a $G$-homogeneous bundle $X$.
Our study will primarily concern the analysis of stationary probability measures falling in Case 2. Indeed, as we now discuss, Case 1 is handled precisely by the seminal works of Benoist--Quint \cite{BQ1,BQ2} and Eskin--Lindenstrauss \cite{eskin-lindenstrauss.long}.
\subsubsection{Case 1: Dirac base}\label{subsub.dirac.base}
Let us observe that the results of Benoist--Quint \cite{BQ1,BQ2} and Eskin--Lindenstrauss \cite{eskin-lindenstrauss.long} imply the following.
\begin{proposition} Keep the setting above and let $\nu$ be a $\mu$-stationary and ergodic probability on $X$ whose projection onto $G/Q$ is a Dirac measure. Then the fibre measure $\nu^F$ of $\nu$ on $(Q/R_0)/(R/R_0)\simeq S/\Lambda$ is homogeneous. \end{proposition}
This result follows from a direct application of \cite[Theorem 1.3]{eskin-lindenstrauss.long}, which extends the main measure classification results of Benoist--Quint \cite{BQ2} with regard to the moment assumption and in that the group $\Lambda$ is only required to be discrete.
\begin{proof} By assumption, the projection $\overline{\nu}$ of $\nu$ is a Dirac measure $\delta_{gQ}$ on $G/Q$. Replacing $Q$ by a conjugate if necessary, we can suppose that $H<Q$ and $g=\id$. The fibre of the map $X \to G/Q$ above $\id Q$ identifies $Q$-equivariantly with $Q/R \simeq (Q/R_0)/(R/R_0)$. Identifying $\mu$ with its image in $HR_0/R_0<Q/R_0$, the measure $\nu^F$ is $\mu$-stationary and ergodic. Therefore, it follows from \cite[Theorem 1.3]{eskin-lindenstrauss.long} that $\nu^F$ is a homogeneous measure. \end{proof}
\subsubsection{Case 2: Furstenberg base}\label{subsub.furstenberg.base}
The rest of this section is devoted to the analysis of the remaining case, i.e.~ the description of a $\mu$-stationary and ergodic probability on $X$ whose projection to $G/Q$ is non-atomic (and which is consequently the Furstenberg measure on a copy of $H/P$ in $G/Q$, where $P$ is a parabolic subgroup of $H$, see Lemma \ref{lemma.base.cases}). In this case $H$ intersects a conjugate $gQg^{-1}$ of $Q$ in a parabolic subgroup. By conjugating $Q$ if necessary, we can and will suppose that $g=\id$, i.e.~ $P=Q_H:=H \cap Q$ and $\nu$ lives in $\pi^{-1}(H/P)$. As before, the analysis of stationary measures will vary depending on the relative position of $H$ with respect to the parabolic group $Q$ within the ambient group $G$. We will distinguish three cases that will be dealt with separately in the following subsections.\\
\noindent \textbullet ${}$ \textit{Case 2.1}: Trivial fibre action. This is a trivial case that occurs when the parabolic subgroup $Q_H^o$ of $H$ is contained in $R_0$.\\[4pt] \textbullet ${}$ \textit{Case 2.2}: Diagonal fibre action. This is the case when $Q_H^o \cap R_0$ is a proper non-trivial subgroup of $Q_H^o$. As we shall see, this positioning gives rise to a situation where the $S$-valued cocycle describing the fibre action takes values in a diagonal subgroup of $S$.\\[4pt] \textbullet ${}$ \textit{Case 2.3}: This is the remaining case, i.e.~ the case where $Q_H^o \cap R_0$ is trivial. Interestingly, the analysis in this case depends on further properties of $H$ relative to $G$ that we will explain. Accordingly, our analysis will involve two subcases (\textit{2.3.a} and \textit{2.3.b}). In this paper, we will not be able to give the full description of stationary measures in the last of these two subcases.
\subsection{Case 2.1: Trivial fibre action}\label{subsec.case.2.1}
We express the description in this simple case in the following result.
\begin{proposition}\label{prop.trivial.fibre.measure.class} Let the space $X$ and the groups $G,Q,R_0,R,S\simeq Q/R_0,\Lambda\simeq R/R_0$ and $H$ be as defined before. Suppose that $Q_H^o$ is contained in $R_0$. Then, there exists a standard trivialization $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$ such that any $\mu$-stationary and ergodic probability measure $\nu$ on $X_\mathcal{C}$ can be written as a product $\overline{\nu}_F \otimes \delta_{q\Lambda}$ of the Furstenberg measure $\overline{\nu}_F$ with a Dirac measure $\delta_{q\Lambda}$ for some $q \in Q$. \end{proposition}
\begin{proof} Start by noting that since $Q_H$ is a (Zariski) connected algebraic group, $Q_H^o<R_0$ implies that $Q_H<R_0$. Now fix any standard trivialization $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$ and let $\beta$ be the associated cocycle. The hypothesis $Q_H<R_0$ then entails that the associated morphism $\rho_\beta:Q_H \to S$ has trivial image. In particular, $\rho_\beta$ extends trivially to a morphism $H \to S$ and hence, by Proposition \ref{prop.decomposable}, the $H$-action on $X_\mathcal{C}$ is decomposable. Therefore there exists a standard trivialization $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$ for which the associated cocycle is of morphism type with trivial morphism. The result follows.
\end{proof}
Here are two examples where the trivial fibre action situation arises.
\begin{example}\label{ex.case.2.1} 1. (Trivial example) Let $G=\SL_4(\R)$ and let $Q$ be the minimal parabolic subgroup preserving the standard full flag in $\R^4$. Let $H$ be the copy of $\SL_2(\R)$ in the top-left corner, i.e.~ acting on the plane generated by the standard basis elements $e_1$ and $e_2$. In this case, $R$ is necessarily equal to $R_0$, which is $Q$ itself.
2. Let $G=\SL_4(\R)$, let $Q$ be the parabolic subgroup stabilizing the plane generated by the first two vectors $e_1,e_2$ of the standard basis of $\R^4$, and let $H$ be the image of the reducible representation of $\SL_2(\R)$ given by the sum of two copies of the standard representation, acting on the planes generated by the basis vectors $e_1,e_4$ and $e_2,e_3$: $$ Q= \left \{ \begin{pmatrix} \ast & \ast & \ast & \ast \\ \ast & \ast & \ast & \ast \\ 0 & 0 & \ast & \ast \\ 0 & 0 & \ast & \ast \\ \end{pmatrix} \right \}, \quad \quad H = \left \{ \begin{pmatrix} a & 0 & 0 & b \\ 0 & a & b & 0 \\ 0 & c & d & 0 \\ c & 0 & 0 & d \\ \end{pmatrix} \;\middle|\; \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \SL_2(\R) \right \}. $$ We can take $R$ to be the group generated by $R_0$, the solvable radical of $Q$, together with $\SL_2(\Z) \times \SL_2(\Z)$ acting in the standard way on the planes generated by $e_1,e_2$ and $e_3,e_4$. \end{example}
\begin{remark}
Case 1 and Case 2.1 work more generally: for these two cases we only need the assumption that $H$ is a real semisimple linear Lie group. \end{remark}
\subsection{Case 2.2: Diagonal fibre action}\label{subsec.case.2.2.diag}
The main goal of this part is to prove Theorem \ref{thm.measure.class.geod}. We start by discussing an example to which this result applies.
\begin{example}\label{ex.reducible} We start by recalling Example \ref{ex.ss}. Let $X$ be the space of $2$-lattices in $\R^3=V$ up to homotheties of $V$. The space $X$ with its natural topology admits a continuous transitive action of $G=\SL_3(\R)$. Let $y_0<V$ be the copy of $\R^2$ generated by $e_1,e_2$, where the $e_i$ denote the standard basis vectors of $V$, let $x_0$ be the class of $\Z^2$ in $y_0$, let $R$ be its stabilizer in $G$, and let $Q$ be the parabolic subgroup of $G$ stabilizing $y_0$. Note that the connected component $R_0$ of $R$ is the solvable radical of $Q$ and we have $R<Q$, so that the space $X$ is a $G$-homogeneous bundle over $G/Q$ with fibres $Q/R \simeq \PGL_2(\R)/\PGL_2(\Z)$. Let $\pi:X \to G/Q$ denote the natural projection associating to the class of a $2$-lattice the $2$-plane that it generates.
Let $H$ be a copy of $\SL_2(\R)$ given by the classes of matrices of the form $ \begin{pmatrix} 1 & 0 & 0 \\ 0 & a & b \\ 0 & c & d \end{pmatrix}$, where $ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \SL_2(\R)$. This configuration falls into Case 2.2: indeed, $Q_H=H \cap Q$ is a parabolic subgroup of $H$ and $Q_H^\circ \cap R_0$ is the unipotent radical of $Q_H^\circ$.
One can also see explicitly how the one-dimensional split subgroup of $S$, for which Theorem \ref{thm.measure.class.geod} proves invariance, appears: let $\mathcal{C}$ be the $H$-invariant circle of $2$-planes given by $\langle e_1, t e_2+se_3 \rangle$ for $t,s \in \R$, and let $X_{\mathcal{C}}$ be the bundle over $\calC$ given by the closed subset of $X$ consisting of $2$-lattices contained in subspaces belonging to $\mathcal{C}$. We can choose an explicit Borel section $s:H/Q_H \to H$ as follows to obtain a standard trivialization of $X_\mathcal{C}$: to a vector space $y=\langle e_1, t e_2+se_3 \rangle \in \mathcal{C} \simeq H/Q_H$, we associate the class of the matrix $R_{\theta(y)}:=\begin{pmatrix} \cos \theta(y) & -\sin \theta(y) \\ \sin \theta(y) & \cos \theta(y) \end{pmatrix} $ with $\theta(y) \in [0,\pi)$ chosen so that $R_{\theta(y)}$, seen in $H$, sends $y_0$ to $y$. The resulting trivialization reads \begin{equation*} \begin{aligned} X_\mathcal{C} &\simeq H/Q_H \times F\\ x &\mapsto (\pi(x), R_{\theta(\pi(x))}^{-1}(x)). \end{aligned} \end{equation*} A straightforward calculation shows that the $S \simeq \PGL_2(\R)$-valued cocycle $H \times H/Q_H \to S$ given by this trivialization takes values in the full diagonal subgroup $D^{\pm}<\PGL_2(\R)$ and coincides, up to a sign, with the Iwasawa cocycle of $H$, which will be defined in the following subsection. It then follows from Theorem \ref{thm.measure.class.geod}, Remark \ref{rk.conversely.to.diagonal.thm} and the uniqueness of the $\mu$-stationary measure on $H/Q_H$ that there is a bijection between probability measures on $\PGL_2(\R)/\PGL_2(\Z)$ invariant under the diagonal flow (or under an index-two extension of it) and $\mu$-stationary probability measures on $X_\mathcal{C}$. The difference between diagonal invariance and the index-two extension is a minor one related to the sign group; this is discussed further below in \S \ref{subsub.iwasawa.sign}.
Note finally that Case 1 also appears within this same example, namely the singleton corresponding to the two-plane generated by $\{e_2,e_3\}$ is $H$-invariant. \end{example}
The rest of Subsection \ref{subsec.case.2.2.diag} is devoted to the proof of Theorem \ref{thm.measure.class.geod}.
\subsubsection{Iwasawa cocycle and representation theory}\label{subsub.iwasawa}
Let $H$ denote either the group $\SL_2(\R)$ or $\PGL_2(\R)$. Let $K$ be a maximal compact subgroup of $H$ and $P$ be a minimal parabolic subgroup so that we have the decomposition $H=K^\circ P$. Let $D$ be the maximal connected diagonal subgroup of $P$, $N$ the unipotent radical of $P$ and $M=K \cap P$. Let $H/P$ be the flag variety of $H$. Given $h \in H$ and $\xi=kP \in H/P$, we denote by $\sigma(h,\xi)$ the unique element of $D$ such that \begin{equation}\label{eq.iwasawa.characterizing} hk \in K \sigma(h,\xi) N. \end{equation} This map $\sigma: H \times H/P \to D$ defines a continuous cocycle (see \cite[Lemma 8.2]{bq.book}), called the \textit{Iwasawa cocycle}. The morphism $\rho_\sigma$ associated with the Iwasawa cocycle is simply the projection $P \to D \simeq P/MN$.
An alternative way to construct the Iwasawa cocycle, more in the spirit of Section \ref{sec.prelim}, is as follows. Consider $H$ as a fibre bundle over $H/P$ and let $s:H/P \to H$ be a section given by the Iwasawa decomposition, namely $s(kP)\in kM$ for $k \in K^\circ$. We then get a trivialization $H \simeq H/P \times P$ (see \eqref{eq.trivialization}) and an associated cocycle $\tilde{\sigma}:H \times H/P \to P$ (see \eqref{eq.construct.cocycle}). It is not hard to verify that the cocycle obtained by composing $\tilde{\sigma}$ with the projection $P \to P/MN \simeq D$ satisfies the characterizing property \eqref{eq.iwasawa.characterizing} of the Iwasawa cocycle. Regarding the section $s$: in the $\PGL_2(\R)$ case, we can define it canonically so that it takes values in $K^\circ$; in the $\SL_2(\R)$ case, we need to make a choice in $kM$ so that $s$ is a Borel section. Even though the cocycle $\tilde{\sigma}$ depends on $s$, by \eqref{eq.construct.cocycle}, since the ambiguity $M$ is contained in the centre, the Iwasawa cocycle does not depend on the choice of the value of $s(kP)$ in $kM$.
In the course of our proofs, sometimes it will be more convenient to switch to the additive notation for cocycles. Let $\mathfrak{d}$ be the Lie algebra of $D$. For a $D$-valued cocycle $\alpha$, we will denote by $\overline{\alpha}$, the $\mathfrak{d}$-valued cocycle obtained by composing $\alpha$ with the logarithm map $D \to \mathfrak{d}$.
Given an algebraic irreducible representation $\rho: H \to \GL(V)$ of $H$ in a finite dimensional real vector space $V$ and a character $\chi$ of $D$, the associated weight space is $V^\chi=\{v \in V : \rho(a) v= \chi(a)v \; \; \text{for every} \; a \in D \}$. The set of characters $\chi$ for which $V^\chi \neq \{0\}$ is called the set of (restricted) weights of $(V,\rho)$ and is denoted $\Sigma(\rho)$. For a character $\chi$ of $D$, we denote by $\overline{\chi}$ the corresponding additive character on $\mathfrak{d}$. The set $\Sigma(\rho)$ is endowed with an order: $\overline{\chi}_1 \geqslant \overline{\chi}_2$ if and only if $\overline{\chi}_1 - \overline{\chi}_2$ is a sum of positive roots of $H$ in $\mathfrak{d}$. The set $\Sigma(\rho)$ has a largest element $\chi$, called the highest weight of $\rho$. The corresponding eigenspace is the subspace $V^N$ of $N$-fixed vectors. Since $H$ is $\R$-split, this is a line in $V$. For an element $\eta=gP$ of the flag variety $H/P$, we denote by $V_\eta$ the line $gV^N$ in $V$, thereby defining a map $H/P \to \mathbb{P}(V)$.
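As a concrete illustration (our addition, anticipating the case relevant below), take $H=\SL_2(\R)$, $D=\{\mathrm{diag}(e^t,e^{-t}) : t \in \R\}$, $N$ the group of upper unitriangular matrices, and $(V,\rho)$ the standard representation on $V=\R^2$. Then

```latex
V = V^{\chi} \oplus V^{-\chi}, \qquad
V^{\chi} = \R e_1, \quad V^{-\chi} = \R e_2, \qquad
\overline{\chi}(t) = t,
```

so $V^N=\R e_1$ is the highest weight line, and the map $H/P \to \mathbb{P}(V)$, $gP \mapsto gV^N$, is the usual identification of the flag variety of $\SL_2(\R)$ with the real projective line.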
The following lemma will be important for our considerations; it will allow us to control the Iwasawa cocycle.
\begin{lemma}\cite[Lemma 6.33]{bq.book}\label{lemma.iwasawa.norm}
Let $(V,\rho)$ be an algebraic irreducible representation of $H$ with the highest weight $\chi$. Then, there exists a $K$-invariant Euclidean norm $\|.\|$ on $V$ such that for every element $a \in D$, $\rho(a)$ is a symmetric endomorphism of $V$. Moreover, for every $\eta \in H/P$, non-zero $v \in V_\eta$ and $h \in H$, we have $$
\overline{\chi}(\overline{\sigma}(h,\eta))=\log \frac{\|\rho(h)v\|}{\|v\|}. $$ \end{lemma}
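In the case $H=\SL_2(\R)$ with the standard representation, both the cocycle identity $\overline{\sigma}(gh,\eta)=\overline{\sigma}(g,h\eta)+\overline{\sigma}(h,\eta)$ and the norm formula of the lemma can be checked numerically: the Iwasawa ($KAN$) decomposition is then a QR decomposition normalized to have positive diagonal, and $\overline{\sigma}$ is the logarithm of the top diagonal entry. The following sketch (our illustration; all names are ours and nothing here is taken from the paper) carries this out on random matrices.

```python
import numpy as np

def iwasawa(g):
    """KAN decomposition of g in SL_2(R): g = k a n with k in SO(2),
    a positive diagonal, n upper unitriangular (QR with positive diagonal)."""
    k, r = np.linalg.qr(g)
    s = np.sign(np.diag(r))       # flip signs so that diag(r) > 0
    k, r = k * s, (r.T * s).T     # k @ r is unchanged since s * s = 1
    a = np.diag(np.diag(r))
    return k, a, np.linalg.inv(a) @ r

def sigma(h, theta):
    """Additive Iwasawa cocycle sigma_bar(h, eta) at eta = k_theta P,
    returned together with an angle representing the image point h.eta."""
    k = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    k_h, a, _ = iwasawa(h @ k)
    return np.log(a[0, 0]), np.arctan2(k_h[1, 0], k_h[0, 0])

def random_sl2(rng):
    """A random matrix of determinant one."""
    m = rng.normal(size=(2, 2))
    if np.linalg.det(m) < 0:
        m[:, 0] *= -1.0
    return m / np.sqrt(np.linalg.det(m))

rng = np.random.default_rng(0)
g, h, theta = random_sl2(rng), random_sl2(rng), rng.uniform(0.0, np.pi)

# cocycle identity: sigma_bar(gh, eta) = sigma_bar(g, h.eta) + sigma_bar(h, eta)
t_h, eta_h = sigma(h, theta)
t_g, _ = sigma(g, eta_h)
t_gh, _ = sigma(g @ h, theta)
assert abs(t_gh - (t_g + t_h)) < 1e-8

# norm formula: chi_bar(sigma_bar(h, eta)) = log ||h v|| for a unit v spanning V_eta
v = np.array([np.cos(theta), np.sin(theta)])
assert abs(t_h - np.log(np.linalg.norm(h @ v))) < 1e-8
```

The identity $t_{gh}=t_g+t_h$ holds exactly (up to rounding) by the uniqueness of the $KAN$ decomposition, since the $D$-part of $(gh)k$ is the product of the $D$-parts of $g(k_h)$ and $hk$.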
\subsubsection{Iwasawa cocycle, sign group and standard trivialization}\label{subsub.iwasawa.sign}
The goal of this part is to obtain a lemma (Lemma \ref{lemma.its.iwasawa} below) which, for a standard trivialization, expresses the action of $H$ on the fibres of the subbundle $X_{H/Q_H}$ of $X \to G/Q$ in terms of the Iwasawa cocycle of the group $H$, up to a sign. Recall that in Case 2.2, $R_0 \cap Q_H^\circ$ is a non-trivial proper subgroup of $Q_H^\circ$. Since $R_0$ is normal in $Q$, $R_0 \cap Q_H^\circ$ is also normal in $Q_H^\circ$. It follows that this intersection is the unipotent radical of $Q_H^\circ$.
Therefore the image $Q_H^\circ/(Q_H^\circ\cap R_0)$ of $Q_H^\circ$ in $S$ is a connected split torus, which we will denote by $D$. Let $D^{\pm}$ be the algebraic $\R$-split torus in $S$ containing $D$. Then $Q_H/(Q_H\cap R_0)$, the image of $Q_H$ in $S$, is contained in $D^{\pm}$. The group $D^{\pm}\simeq \R^\ast$ is isomorphic to $D\times(\Z/2\Z)\simeq \R_{>0}\times \{\pm 1\}$. In order to treat the sign problem of the cocycle in $D^\pm$, we need to pass to the two-fold cover $K$ of $H/Q_H$ to recover the sign information. Here we need to distinguish two cases, in a similar way for both $H \simeq \SL_2(\R)$ and $H \simeq \PGL_2(\R)$.
The case $H \simeq \SL_2(\R)$: Let $V=\R^2$. A convex cone in $V$ is called proper if it does not contain a line. From the Iwasawa cocycle, or simply from the action of $\SL_2(\R)$ on $\mathbb{S}^1\subset V$, we obtain an action of $\SL_2(\R)$ on $K\simeq \mathbb{S}^1$. Guivarc'h and Le Page \cite[Proposition 2.14]{GLP} proved that if $\Gamma_\mu$ preserves a closed proper convex cone in $V$, then there exist two $\mu$-stationary and ergodic measures $\nu_1$ and $\nu_2$ on the circle $K$. The supports of these two measures are mapped onto each other by the antipodal map, and we denote them by $\Lambda_1$ and $-\Lambda_1$, respectively. Otherwise, there exists a unique $\mu$-stationary measure on $K$. We now distinguish two cases depending on the action of $\Gamma_\mu$ on $K$. \begin{itemize}
\item Case 2.2.a: $\Gamma_\mu$ preserves a closed proper convex cone in $V$. In this case we take a section $s:K/M\to K$ such that $s$ takes values in a half circle containing $\Lambda_1$.
\item Case 2.2.b: Otherwise. We just take a section $s:K/M\to K$. There is no better choice in this case. \end{itemize}
The case $H\simeq \PGL_2(\R)$: The maximal compact subgroup $K$ has two connected components, and each component is isomorphic to $H/Q_H$. In this case, we take the section $s:K/M\to K$ with values in the identity component of $K$. \begin{itemize}
\item Case 2.2.a: $\Gamma_\mu$ is contained in the identity component $\PGL_2(\R)^\circ$.
\item Case 2.2.b: Otherwise. In this case, we have a unique $\mu$-stationary measure on $K$, which has weight $1/2$ on each connected component. \end{itemize}
We mention that, unlike the other main cases (those appearing in Figure \ref{figure.cases}), the distinction between Cases 2.2.a and 2.2.b depends on $\Gamma_\mu$ rather than on the Zariski closure $H$ itself.
For $H$ equal to either $\SL_2(\R)$ or $\PGL_2(\R)$, from now on we distinguish Case 2.2.a and Case 2.2.b, and choose a Borel section $s:H/Q_H\simeq K/M\to K<H$ as specified above. We define a sign function on $K$ by \[\mathrm{sg}(k):=k^{-1}s(kM)\in M\simeq \Z/2\Z. \] We define a sign cocycle with respect to the section $s$ for $g\in H$ and $\eta\in K/M$ by \[\mathrm{sg}(g,\eta):=\mathrm{sg}(k)\mathrm{sg}(k_g)=k^{-1}s(kM)k_g^{-1}s(k_gM), \] where $k$ is a preimage of $\eta$ in $K$ and $k_g$ is the $K$-part of $gk\in k_g\sigma(g,k)N$ in the Iwasawa decomposition. The value of $\mathrm{sg}$ does not depend on the choice of preimage $k$.
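Although not spelled out above, the map $\mathrm{sg}(\cdot,\cdot)$ is indeed a cocycle; here is a short verification we add for completeness, in the notation just introduced (with $k$ a preimage of $\eta$ and $k_h$, $k_{gh}$ the $K$-parts of $hk$ and $(gh)k$ in the Iwasawa decomposition). By uniqueness of the Iwasawa decomposition, the $K$-part of $g k_h$ equals $k_{gh}$, and since $\mathrm{sg}$ takes values in $M\simeq \Z/2\Z$, so that $\mathrm{sg}(k_h)^2=\id$, we get

```latex
\mathrm{sg}(g,h\eta)\,\mathrm{sg}(h,\eta)
  = \big(\mathrm{sg}(k_h)\,\mathrm{sg}(k_{gh})\big)\big(\mathrm{sg}(k)\,\mathrm{sg}(k_h)\big)
  = \mathrm{sg}(k)\,\mathrm{sg}(k_{gh})
  = \mathrm{sg}(gh,\eta).
```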
In Case 2.2.b, with this sign function, we can recover the sign in $D^\pm$ of the cocycle $\alpha$. Recall the quotient map from $Q_H$ to $S$, whose image is $Q_H/(Q_H\cap R_0)<D^{\pm}$. If $Q_H/(Q_H\cap R_0)=D$, then there is no ambiguity about the sign. In the following, in order to simplify the notation, we suppose that we are in the case $Q_H/(Q_H\cap R_0)=D^\pm$. The proof in the case $Q_H/(Q_H\cap R_0)=D$ is simpler: the sign cocycle disappears, or equivalently, is constantly equal to the identity.
\begin{lemma}\label{lem:sign}\label{lemma.its.iwasawa} Under the above choice of the section $s$, for $g\in H$ and $\eta\in H/Q_H$, as an element in $D^\pm$, we have \[\alpha(g,\eta)=(\sigma(g,\eta),\mathrm{sg}(g,\eta)). \] In particular, for Case 2.2.a, when $\eta$ is in the support of the Furstenberg measure and $g\in\Gamma_\mu$, the cocycle $\alpha$ coincides with the Iwasawa cocycle. \end{lemma} \begin{proof} By definition of the Borel section $s$ and cocycle $\alpha$, \[\alpha(g,\eta)=s(g\eta)^{-1}gs(\eta)R_0\in S=Q/R_0. \] The Iwasawa cocycle is defined by \[\sigma(g,\eta)=k_g^{-1}gkN. \] Recall that $k$ is a preimage of $\eta$ in $K$ and $k_g$ is the $K$-part of $gk\in k_g\sigma(g,k)N$ in the Iwasawa decomposition. The difference of the sign comes from the product of the differences of the signs of $k,s(\eta)$ and $k_g,s(g\eta)$. By definition of the sign cocycle, we obtain the formula for $\alpha(g,\eta)$.
Regarding the second statement, in the case $H \simeq \PGL_2(\R)$, it is a consequence of the fact that in Case 2.2.a the elements of $\Gamma_\mu$ are represented by matrices of positive determinant. For $H \simeq \SL_2(\R)$, since the action of $\Gamma_\mu$ preserves $\Lambda_1$ inside $K$, if we take $k$ in $\Lambda_1$, then $k_g$ is still in $\Lambda_1$ for $g\in \Gamma_\mu$. In this case we obtain that the sign cocycle $\mathrm{sg}$ is identically equal to $\id$ for the $\Gamma_\mu$-action, whence the claim. \end{proof}
It follows from this lemma that, with our choice of the section $s:G/Q \to G$, the cocycle $\alpha$ of the associated standard trivialization, projected to $D$, is equal to the Iwasawa cocycle $H \times H/Q_H \to D$. In the rest of this part (Case 2.2), we will work with this choice of coordinates on $X$ (i.e.~ the trivialization induced by the section $s$). We will identify the space $Q/R$ with the quotient $S/\Lambda$, where $S\simeq Q/R_0$ and $\Lambda$ is the lattice $R/R_0$. To lighten notation, we will sometimes write $F=S/\Lambda$.
\subsubsection{Limit measures on the fibre}
Let $\mu$ be a Zariski-dense probability measure on $H$ and $\nu$ be a $\mu$-stationary probability measure on $H/Q_H \times_\alpha F$. Recall from \S \ref{subsub.stat.meas.flag} that $\mu$ admits a unique stationary probability measure $\overline{\nu}_F$ on $H/Q_H$ which is also $\mu$-proximal (the Furstenberg measure). It follows (see \S \ref{subsub.fibre.measures}) that for $\mu^{-\N^\ast}$-almost every $w^-$, the measure $\nu_{w^-}$ on $H/Q_H \times_\alpha F$ is of the form $\delta_{\xi(w^-)}\otimes \tilde{\nu}_{w^-}$, where $\xi: H^{-\N^\ast} \to H/Q_H$ is a measurable equivariant map (i.e.~ $\xi(ba_0)=a_0\xi(b)$ for $\mu^{-\N^\ast}$-a.e.~ $b$ and $\mu$-a.e.~ $a_0$) and $\tilde{\nu}_{w^-}=\tilde{\nu}_{b}$ is a probability measure on $F$. In view of the equivariance property of $\nu_{w^-}$ and the fact that the action on the $F$-coordinate is given by the cocycle $\alpha$ over $H/Q_H$, the measure $\tilde{\nu}_{b}$ satisfies the following equivariance formula for $\mu^{-\N^\ast}$-a.e.~ $b \in H^{-\N^\ast}$ and $\mu$-a.e.~ $a_0 \in H$, \begin{equation}\label{eq.equiv.cocycle} \tilde{\nu}_{ba_0}=\alpha(a_0,\xi(b))\tilde{\nu}_b. \end{equation} In the sequel, to simplify notation, we will also write $\nu_{w^-}$ (or $\nu_w$) for the fibre measure $\tilde{\nu}_{w^-}$; this should not cause confusion.
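For later use, it may help to record the iterated form of \eqref{eq.equiv.cocycle}. This is a sketch: iterating $n$ times along the cocycle identity $\alpha(gh,\eta)=\alpha(g,h\eta)\alpha(h,\eta)$ and the equivariance of $\xi$, one obtains, with the shorthand $a^n=a_{n-1}\cdots a_0$, for $\mu^\Z$-a.e.~ $w=(b,a)$ and every $n\in\N$,

```latex
\[
  \nu_{T^n w} \;=\; \alpha\big(a^n,\xi(b)\big)\,\nu_{w}.
\]
```

This is the form in which the equivariance formula enters the drift argument below.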
We start with a first claim which will allow us to focus attention on a single generic fibre measure $\nu_{w^-}$.
\noindent \textbf{Claim 0:} To prove Theorem \ref{thm.measure.class.geod}, it suffices to show that for $\mu^{-\N^\ast}$-a.e. $w^-$, the measure $\nu_{w^-}$ on $S/\Lambda$ is $D$-invariant.
\textit{Proof of Claim 0:} If we are in Case 2.2.a, by Lemma \ref{lemma.its.iwasawa}, the cocycle $\alpha$ actually takes values in $D$. From the equivariance formula \eqref{eq.equiv.cocycle} together with the assumed $D$-invariance of the measures $\nu_{w^-}$, it follows that the map $w^- \mapsto \nu_{w^-}$ is invariant under the inverse of the shift $T$ and hence is almost surely constant, by ergodicity of the map $T^{-1}$.
For Case 2.2.b, we need to consider an extension by $\Z/2\Z=\{\pm 1 \} \simeq M$. We define $T^\mathrm{sg}$ on $H^{\Z}\times \Z/2\Z$ by \[T^\mathrm{sg}(w,j)=(Tw, \mathrm{sg}(w_0,\xi(w^-))j), \] where $w\in H^\Z$ and $j\in \Z/2\Z$. For $\mu^\Z$-a.e. $w\in H^\Z$, we define \[\nu_{w,1}=\nu_{w^-},\ \nu_{w,-1}=(-1)_*\nu_{w^-}, \] where $(-1)_*$ is understood as the action of $-\id\in D^\pm$. Then the formula \eqref{eq.equiv.cocycle} and Lemma \ref{lem:sign} imply \[ {\nu}_{T^\mathrm{sg}(w,j)}=\sigma(w_0,\xi(w^-)){\nu}_{w,j}. \] Since the Iwasawa cocycle $\sigma$ takes values in $D$, running the same argument as in Case 2.2.a shows that the measures $\nu_{w,j}$ are $D$-invariant, provided the measure $\beta^\mathrm{sg}:=\mu^\Z\otimes ((\delta_1+\delta_{-1})/2)$ is $T^\mathrm{sg}$-ergodic. We now proceed to prove this.
We consider the $\mu^\Z$-a.e.~ defined map $p$ from $H^\Z\times \Z/2\Z$ to $ H^\Z\times K$, by letting \[ p(w,j)=(w,k_{w^-} ),\text{ where } \mathrm{sg}(k_{w^-})=j, k_{w^-}Q_H=\xi(w^-). \] Let $\tilde T^\mathrm{sg}(w,k)=(Tw, w_0k)$. The pushforward of the measure $\beta^\mathrm{sg}$ under $p$ is the measure \[\tilde\beta^\mathrm{sg}:=\int_{H^\Z} \delta_w\otimes ((\delta_{k_{w^-}}+\delta_{-k_{w^-}})/2)\ d\mu^\Z(w) \] on $H^\Z\times K$. Then $p$ is a semiconjugacy from $(H^\Z\times\Z/2\Z,T^\mathrm{sg},\beta^\mathrm{sg})$ to $(H^\Z\times K,\tilde T^\mathrm{sg},\tilde \beta^\mathrm{sg})$. The fibre measure $(\delta_{k_{w^-}}+\delta_{-k_{w^-}})/2 $ is in fact the measure $(\nu_K)_w$ for the unique $\mu$-stationary measure $\nu_K$ on $K$, and $(\nu_K)_w$ is the limit of $b_{-1}\cdots b_{-n}\nu_K$ for $\mu^\Z$-a.e.~ $w$. (This measure $(\nu_K)_w$ is a lift of the Dirac mass $\delta_{\xi(w^-)}$ on $H/Q_H$; since $\nu_K$ is unique, one can verify that the limiting measure has equal mass on the two preimages.) By \cite[Section 2.6]{bq.book}, since $\nu_K$ is $\mu$-ergodic, we know that $\tilde\beta^\mathrm{sg}$ is $\tilde T^\mathrm{sg}$-ergodic. Then, from the semiconjugacy $p$, we obtain that $\beta^\mathrm{sg}$ is $T^\mathrm{sg}$-ergodic. The proof is complete. \qed
\begin{remark}[$D^{\pm}$-invariance in Case 2.2.b] In Case 2.2.b, the argument above implies that $\nu_{w,1}=\nu_{w,-1}$ for $\mu^\Z$-a.e.~ $w$, so the fibre measures are in fact $D^\pm$-invariant. We will also see later, in the equidistribution part, that the limiting measure is $D^\pm$-invariant. \end{remark}
\subsubsection{Dynamically defined norms}
To obtain the required $D$-invariance for a typical limit measure $\nu_{w^-}$ on the fibre, using the equivariance formula \eqref{eq.equiv.cocycle}, we will be passing to a limit of cocycle differences of type $\alpha(a_m'\ldots a_0', \xi(b))\alpha(a_n \ldots a_0, \xi(b))^{-1}$ for various sequences $b$ and $a$ as well as carefully chosen times $m,n \in \N$. The choice of times and sequences will be made so that the sequences land in some nice compact subset of the shift space and, simultaneously, the cocycle differences are controlled. An important tool for this purpose will be the dynamically defined norms given by the next result.
We fix an irreducible algebraic representation $V$ of $H$, where $V$ is a finite-dimensional real vector space. Endow it with a $K$-invariant Euclidean structure and let $\|\cdot\|$ be the associated Euclidean norm on $V$. Here and below, we will also use the shorthand $a^n$ to denote the finite product $a_{n-1} \ldots a_0$ of the corresponding sequence $(a_0,\ldots, a_{n-1}) \in H^n$. We have
\begin{proposition}\label{prop.dynnorm} \cite[Proposition 2.3]{eskin-lindenstraus.short}
There exist a constant $\kappa>1$, a measurable map $w \mapsto \|.\|_w$ from $H^\Z$ into the space of Euclidean norms on $V$, and a $T$-invariant full measure subset $\Psi$ of $H^{\Z}$ such that for every $w=(b,a) \in \Psi$ and $n\in\N$, letting
\[\lambda_1(w,n):=\log\frac{\|a^nv_b \|_{T^n w}}{\|v_b \|_w}, \] we have \[\lambda_1(w,n)\in [1/\kappa,\kappa]n. \] In particular, due to the cocycle property, for $w\in\Psi$ and $m>n$ in $\N$, \begin{equation}\label{eq.lip.dynnorm} \lambda_1(w,m)-\lambda_1(w,n)\in[1/\kappa,\kappa](m-n) . \end{equation} \end{proposition}
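For the reader's convenience, here is a sketch of how \eqref{eq.lip.dynnorm} follows from the cocycle structure, using the homogeneity of the ratio defining $\lambda_1$, the fact that $a^n v_b$ is proportional to $v_{T^n w}$, and the $T$-invariance of $\Psi$:

```latex
\[
  \lambda_1(w,m)-\lambda_1(w,n)
  \;=\;\log\frac{\|a_{m-1}\cdots a_n\,(a^n v_b)\|_{T^m w}}{\|a^n v_b\|_{T^n w}}
  \;=\;\lambda_1(T^n w,\,m-n)
  \;\in\;[1/\kappa,\kappa](m-n).
\]
```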
We note at this point that the finite first moment assumption in Theorem \ref{thm.measure.class.geod} is required in the proof of the previous proposition in \cite{eskin-lindenstraus.short}.
This norm $\|.\|_w$ is called the \textit{dynamically defined norm}. It is chosen with respect to the dynamics so that Proposition \ref{prop.dynnorm} holds. Due to the measurability of $w \mapsto \|.\|_w$, we can compare the dynamically defined norms with the original norm on a large-measure subset of $H^\Z$. \begin{lemma}\cite[Lemma 2.7]{eskin-lindenstraus.short} \label{lemma.comparison.dyn.norm} For every $\delta>0$, there exist a compact subset $K(\delta)$ of $\Psi$ with $\mu^{\Z}(K(\delta))>1-\delta/10$ and a constant $C(\delta)>0$ such that for every $v\in V$ and $w\in K(\delta)$,
\[ 1/C(\delta)\leqslant \frac{\|v\|_w}{\|v\|}\leqslant C(\delta). \] \end{lemma}
We denote by $\chi$ the highest weight of the representation from Lemma \ref{lemma.iwasawa.norm}. Combining Lemmas \ref{lemma.iwasawa.norm} and \ref{lemma.comparison.dyn.norm}, and Proposition \ref{prop.dynnorm}, we deduce the following
\begin{corollary}\label{corol.alpha.close.to.dynamical.cocycle} For every $\delta>0$, there exists a compact subset $K(\delta)$ of $H^\Z$ with $\mu^{\Z}(K(\delta))>1-\delta/10$ and a constant $C(\delta)>0$ such that for every $w \in \Psi$ and $n \in \N$ such that $w$ and $T^n w$ are both in $ K(\delta)$, we have \begin{equation}\label{eq.comparison.dyn.norm}
|\overline{\chi}(\overline{\alpha}(a_{n-1}\ldots a_0,\xi(b)))-\lambda_1(w,n)|\leqslant C(\delta). \end{equation} \end{corollary}
\begin{proof}
Recall from \S \ref{subsub.stat.meas.flag} that given a Zariski-dense probability measure $\mu$ on $H$, we have a map $\xi: H^\Z \to H/P$ defined for $\mu^\Z$-a.e.~ $w=(b,a)$ satisfying $(\overline{\nu}_F)_w=\delta_{\xi(w)}$ and the equivariance property $b_{-1}\xi(T^{-1}w)=\xi(w)$. Recall also (see \S \ref{subsub.iwasawa}) that there exists an $H$-equivariant map $H/P \to \P(V)$ given by $hP \mapsto hV^N$, where $N$ is the unipotent radical of $P$. The image of $\overline{\nu}_F$ under this map is the unique $\mu$-stationary and proximal measure on $\P(V)$. It follows that the line $\R v_w$ is the image of $\xi(w)$ under the map $hP \mapsto hV^N$. Therefore, Lemma \ref{lemma.iwasawa.norm} implies that we have $\overline{\chi}(\overline{\alpha}(h,\xi(w)))=\log \frac{\|h v_w\|}{\|v_w\|}$ for $\mu^\Z$-a.e.~ $w \in H^\Z$.
Given $\delta>0$, let $K(\delta)$ and $C(\delta)>1$ be as given by Lemma \ref{lemma.comparison.dyn.norm}, with $C(\delta)$ increased if necessary so that $2 \log C(\delta) \leqslant C(\delta)$. Then, if $w$ and $T^n w $ belong to $K(\delta)$, since $a^n v_w$ is proportional to $v_{T^nw}$, Lemma \ref{lemma.comparison.dyn.norm} shows that both $\frac{\|a^n v_w\|_{T^nw}}{\|a^nv_w\|}$ and $\frac{\|v_w\|_w}{\|v_w\|}$ belong to $[1/C(\delta), C(\delta)]$. The corollary follows. \end{proof}
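For completeness, the estimate concluding the proof of Corollary \ref{corol.alpha.close.to.dynamical.cocycle} can be written out; under the identification $\overline{\chi}(\overline{\alpha}(a^n,\xi(w)))=\log\frac{\|a^n v_w\|}{\|v_w\|}$ from the first paragraph, the triangle inequality gives

```latex
\[
  \big|\overline{\chi}(\overline{\alpha}(a^n,\xi(w)))-\lambda_1(w,n)\big|
  \;\leqslant\;
  \left|\log\frac{\|a^n v_w\|_{T^n w}}{\|a^n v_w\|}\right|
  +\left|\log\frac{\|v_w\|_w}{\|v_w\|}\right|
  \;\leqslant\; 2\log C(\delta)\;\leqslant\; C(\delta),
\]
```

which is precisely \eqref{eq.comparison.dyn.norm}.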
\subsubsection{Divergence estimates}
We also need the following lemma which essentially follows from Oseledets' theorem and Lemma \ref{lemma.comparison.dyn.norm}.
\begin{lemma}\label{lemma.div.est} \cite[Lemma 3.5]{eskin-lindenstraus.short} For every $\delta>0$ and $t_0 \in \N$, there exist a compact subset $K=K'(\delta,t_0)$ of $H^\Z$ with $\mu^\Z(K)>1-\delta/10$ and a constant $C=C(\delta,t_0)>0$ with the following property: for every $w \in K$, $w' \in W^-_1(w) \cap K$ and $t>0$ such that $T^t w \in K$ and $T^tw' \in T^{[-t_0,t_0]}K$, we have $$
|\lambda_1(w,t)-\lambda_1(w',t)| \leqslant C.$$ \end{lemma}
Here $W^-_1(w)$ is the local stable leaf of $w$ in the shift space $H^\Z$, i.e.~ $W^-_1(w)=\{w' \in H^\Z : w'_k=w_k, \; \forall k \geqslant 0\}$.
\subsubsection{Non-degeneracy of the stationary measure on projective space}
\begin{theorem}\cite[Theorem 3.1]{bougerol.lacroix}\label{thm.random.matrix.product}
Let $\mu$ be a Zariski-dense probability measure on a linear semisimple $\R$-split group $H$ and let $V$ be an irreducible algebraic representation of $H$. Then, for $\mu^{-\N^\ast}$-a.e.~ $b=(b_{-1},b_{-2},\ldots)$, any limit point $\hat{\pi}_b$ of the sequence $\frac{b_{-1}\ldots b_{-n}}{\| b_{-1}\ldots b_{-n}\|}$ in $\Endo(V)$ has rank one, and all such limit points have the same image. Moreover, for any hyperplane $W<V$, the set of $b \in H^{-\N^\ast}$ such that $\mathrm{Im}(\hat{\pi}_b) \subset W$ has zero measure. \end{theorem}
The image of any such limit point will be denoted $\R v_b$, i.e.~ $v_b \in V$ denotes a choice of a non-zero unit vector (for the norm $\|.\|$) in the image line.
We record the following statement which follows from Theorem \ref{thm.random.matrix.product}.
\begin{lemma}\label{lemma.random.lin.form}
For $\mu^\N$-a.e.~ $a \in H^\N$, there exists a linear form $\varphi_a$ of unit norm on $V$ with the following property. For every such $a$ and every $\delta>0$, there exist $\epsilon>0$ and a compact subset $K_a(\delta)$ of $H^{-\N^\ast}$ with $\mu^{-\N^\ast}(K_a(\delta))>1-\delta/10$ such that if $b, b' \in K_a(\delta)$, then $|\varphi_a(v_b)|>\epsilon$ and $$
\lim_{n \to \infty} \frac{\|a_{n}\ldots a_0 v_b\|}{\|a_{n}\ldots a_0 v_{b'}\|} =\frac{|\varphi_a(v_b)|}{|\varphi_a(v_{b'})|}. $$ Moreover, for any linear form $\varphi$ on $V$, the set of $a\in H^\N$ such that $\varphi_a\in \R \varphi$ has zero measure. \end{lemma}
\begin{proof}
Applying Theorem \ref{thm.random.matrix.product} to the sequence of transposes of the $a_i$'s, we get that for $\mu^\N$-a.e.~ $a=(a_0,a_1,\ldots)$, any limit point of the sequence $\frac{a_n \ldots a_0}{\|a_n \ldots a_0\|}=(\frac{a_0^t \ldots a_n^t}{\|a_0^t \ldots a_n^t\|})^t$ of linear transformations is a rank-one linear map; denote such a limit point by $\pi_a$. Note that the kernel of $\pi_a$ does not depend on the choice of the limit rank-one transformation. We then define $\varphi_a$ to be the unit-norm linear form given by orthogonal projection onto the line orthogonal to $\ker\pi_a$. Indeed, by the transpose relation, we have \[\ker \varphi_a=\ker\pi_a=(\im{\hat{\pi}_a})^\perp, \]
where $\hat{\pi}_a$ is a limit point of the sequence $\frac{a_0^t \ldots a_n^t}{\|a_0^t \ldots a_n^t\|}$ and $\pi_a=\hat{\pi}_a^t$. The last claim of Theorem \ref{thm.random.matrix.product} then yields the last claim of this lemma.
Applying Theorem \ref{thm.random.matrix.product} to $b$, the last claim of that theorem implies that the $\mu^{-\N^\ast}$-measure of the set of $b$'s such that $\ker \pi_a$ contains $v_b$ is zero. Therefore, given a typical $a$ (i.e.~ one in a set of full measure) and $\delta>0$, there exist $\epsilon>0$ and a compact set $K_a(\delta)$ in $H^{-\N^\ast}$ such that $\mu^{-\N^\ast}(K_a(\delta))>1-\delta/10$ and for every $b \in K_a(\delta)$, we have $d(\ker \pi_a, v_b)>\epsilon$, where $d$ denotes the projective distance induced by $\|.\|$; that is, $d(\ker \pi_a, v_b)=\frac{|\varphi_a(v_b)|}{\|\varphi_a\|\|v_b\|}$. The limit formula then follows from the rank-one convergence. \end{proof}
\subsubsection{Relative density of typical points} We also need one more lemma that will be used to spread the initial invariance obtained via the drift argument for a word $\hat{\omega}$ to a set of words with positive measure.
\begin{lemma}\label{lemma.relative.density.typical.orbit} Let $X$ be a separable metric space, $m$ a Borel probability measure on $X$ and $T$ a measurable measure-preserving and ergodic transformation $X \to X$. Then, for any measurable subset $K$ of $X$ with positive $m$-measure, there exists a set $\dot{K}$ with $m(K\setminus \dot{K})=0$ such that \[K_1:=\{x\in K : \overline{K\cap T^{\N}x}\supset \dot{K} \}\]
is a conull subset of $K$, i.e.~ $m(K\setminus K_1)=0$.
\end{lemma} \begin{proof}
Consider the induced system $(K_0,m|_{K_0}, T^K)$, where $K_0$ is the conull subset of $K$ consisting of points that return to $K$ infinitely often (Poincar\'e recurrence) and $T^K$ is the first return map to $K$.
By ergodicity of $T$, the induced map $T^K$ is also ergodic with respect to the measure $m|_{K_0}$. By Birkhoff's theorem, for $m|_{K_0}$-a.e.~ $x \in K_0$, the orbit $\{(T^K)^nx\}_{n\in\N}$ equidistributes towards the normalized measure $m|_{K_0}/m(K_0)$.
So for $m|_{K_0}$ a.e.~ $x$, we have
\[ \overline{K\cap T^{\N}x}= \overline{ (T^K)^{\N}x}\supset \Supp{m|_{K_0}}=:\dot{K}. \] The proof is complete. \end{proof}
We are now ready to start the proof of Theorem \ref{thm.measure.class.geod}.
\begin{proof}[Proof of Theorem \ref{thm.measure.class.geod}]
\textbf{Choosing parameters and sets}: Let $\delta \in (0,1/10)$ be a small enough positive constant. Let $\Psi$ be a $T$-invariant full measure set contained in the intersection of the full-measure subset of $H^\Z$ given by Proposition \ref{prop.dynnorm} with the full-measure subset on which the map $\omega \mapsto \nu_\omega$ is defined. Denote by $C(\delta)$ the constant given by Corollary \ref{corol.alpha.close.to.dynamical.cocycle} and by $K(\delta) \subseteq \Psi$ a compact set, chosen using the same corollary, satisfying $\mu^\Z(K(\delta))>1-\delta/20$. Fix a compact subset $K_{cont}$ of $H^\Z$ of $\mu^\Z$-measure $>1-\delta/20$ on which the map $w \mapsto \nu_w$, from $H^\Z$ to the space of probability measures on $F$, is continuous. Let $$K_0(\delta)=K(\delta) \cap K_{cont} \cap K'(\delta,1),$$ where $K'(\delta,1)$ is the compact subset of $\Psi$ obtained from Lemma \ref{lemma.div.est} and let $C$ be the positive constant given by the same lemma.
Now applying Lemma \ref{lemma.relative.density.typical.orbit} to the shift system $X=H^\Z$ and $m=\mu^\Z$ with $K=K_{0}(\delta)$, by regularity of $\mu^\Z$, we can find a compact subset $K_0'(\delta)$ of $K_{0}(\delta)$ with $$ \mu^\Z(K_{0}(\delta))-\mu^\Z(K_0'(\delta))<\delta/20 $$ such that for every $w \in K_0'(\delta)$, the closure of the intersection of the $T$-orbit of $w$ with $K_0(\delta)$ contains a $\mu^\Z$-conull subset of $K_0(\delta)$. Finally, fix a compact subset $\underline{K}$ of $H$ with sufficiently large $\mu$-measure, so that $$ \hat{\underline{K}}:=\{w \in H^\Z : (w_{-C_2},\ldots,w_0,\ldots,w_{C_2}) \in \underline{K}^{2C_2+1}\} $$ has $\mu^\Z$-measure $>1-\delta/20$, where $C_2=[\kappa(2\kappa +C)]+1$, with $\kappa$ as given by Proposition \ref{prop.dynnorm}. We now let $$K_{0}''(\delta)=K_{0}(\delta) \cap K_0'(\delta) \cap \hat{\underline{K}}.$$
Let $N(\delta) \in \N$ be a constant so that there exists a compact subset \begin{equation}\label{equ:Kgen} K^{gen}(\delta) \subset \{w \in H^\Z : \frac{1}{n}\#\{k=1,\ldots,n : T^k w \in K_0''(\delta)\}>1-\delta/2, \; \forall n \geqslant N(\delta)\} \end{equation} with $\mu^\Z$-measure $\geqslant 1-\delta/10$. The existence of this set is ensured by Birkhoff's ergodic theorem: for $\mu^{\Z}$-a.e.~ $w\in H^{\Z}$, \[\lim_{n\rightarrow\infty}\frac{1}{n}\#\{k=1,\cdots,n : T^kw\in K_0''(\delta) \}= \mu^{\Z}(K_0''(\delta))>1-4\delta/10, \] so we can find a large enough constant $N(\delta)$ such that \eqref{equ:Kgen} holds. Let $$K_{00}(\delta)=K_0''(\delta) \cap K^{gen}(\delta).$$
For an element $a \in H^\N$ and a subset $K$ of $H^\Z$, let $K_a^-$ denote the set $\{b \in H^{-\N^\ast} : (b,a) \in K\}$. By Markov's inequality, if $\mu^\Z (K)>1-\delta$, then the set $$K^+=\{a \in H^{\N} \, | \, \mu^{-\N^{\ast}}(K_a^-) \geqslant 1-\sqrt{\delta}\}$$ satisfies $\mu^{\N}(K^{+}) \geqslant 1- \sqrt{\delta}$. Specializing to $K=K_{00}(\delta)$, we fix two elements $a,a' \in K^+$, so that the set $$K_{a,a'}^-:=K_a^- \cap K_{a'}^-$$ has $\mu^{-\N^\ast}$-measure larger than $1-2\sqrt{\delta}$.
For each $t \in \N$ and $w \in \Psi$, let $n_t(w)=\min\{n : \lambda_1(w,n) \geqslant t\}$. We then have \begin{equation}\label{equ:stopping}
|\lambda_1(w, n_t(w))-t| \leqslant \kappa, \end{equation} which is due to Proposition \ref{prop.dynnorm}.
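Indeed, \eqref{equ:stopping} is immediate from the minimality of $n_t(w)$ together with the fact that, by \eqref{eq.lip.dynnorm}, $\lambda_1$ increases by at most $\kappa$ at each step:

```latex
\[
  t \;\leqslant\; \lambda_1(w, n_t(w))
    \;\leqslant\; \lambda_1(w, n_t(w)-1) + \kappa
    \;<\; t + \kappa .
\]
```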
\textbf{Drift argument}: The output of this part is \begin{proposition}\label{prop:drift} For two futures $a,a'\in K_{00}^+(\delta)$ and two pasts $b,b' \in K_{a,a'}^-$, there exist sequences of natural numbers $m_\ell, m_\ell'\to\infty$ as $\ell\to\infty$, a point $\hat{\omega}\in K_0''(\delta)$ and $s(b,a,a'),\ s(b',a,a')\in D^{\pm}$ such that $\nu_{T^{m_\ell}(b,a)}\to \nu_{\hat\omega}$ and
$$\nu_{\hat{\omega}}=s(b,a,a')^{-1}s(b',a,a')\nu_{\hat{\omega}},$$
where the element $s(b,a,a')$ is given by \[ s(b,a,a')=\lim_{\ell\to\infty}\alpha(a_{m_{{\ell}}'}' \ldots a_0', \xi(b))\alpha(a_{m_{\ell}} \ldots a_0, \xi(b))^{-1},
\] and similarly for $s(b',a,a')$. \end{proposition}
We start the drift argument here. Set $w=(b,a)$, $w'=(b,a')$, $w''=(b',a)$, and $w'''=(b',a')$. \begin{claim} There are constants $p(\delta)$ with $p(\delta)=O(\delta)$ as $\delta \to 0$ and $N_1(\delta) \in \N$ such that for any $\zeta, \zeta' \in K^{gen}(\delta) \cap \Psi$ and $T \geqslant N_1(\delta)$, we have \begin{equation}\label{eq.often.common.rec}
\#\{t=1,\ldots, T : T^{n_t(\zeta)}\zeta' \notin K_0''(\delta)\}<p(\delta)T. \end{equation} \end{claim}
\begin{proof}
By \eqref{equ:Kgen}, for $n\geqslant N(\delta)$ we have \[\#\{k=1,\cdots, n : T^k\zeta'\notin K_0''(\delta) \}<\delta n/2. \] By the Lipschitz property \eqref{eq.lip.dynnorm}, for $\zeta\in\Psi$ we have $n_T(\zeta)\in [1/\kappa,\kappa] T$. Hence, for $T>N_1(\delta)=\kappa N(\delta)$, using \eqref{eq.lip.dynnorm} once more, we have \[\#\{t=1,\ldots, T : T^{n_t(\zeta)}\zeta' \notin K_0''(\delta)\}\leqslant \kappa \#\{k=1,\ldots, n_T(\zeta) : T^{k}\zeta' \notin K_0''(\delta)\}<\delta\kappa^2T/2, \] proving the claim \eqref{eq.often.common.rec} with $p(\delta)=\delta \kappa^2/2$. \end{proof}
Therefore, choosing $\delta>0$ small enough that $p(\delta)<1/16$ and applying \eqref{eq.often.common.rec} to all sixteen choices of $\omega,\omega' \in \{w, w', w'',w'''\}$ (so that the exceptional sets of times cannot cover $\{1,\ldots,T\}$ for $T$ large), we find a sequence of positive integers $t_\ell$ tending to infinity as $\ell \to \infty$ such that for every $\ell \in \N$ and $\omega,\omega' \in \{w, w', w'',w'''\}$, we have \begin{equation}\label{eq.common.return.times} T^{n_{t_\ell}(\omega)}\omega' \in K_0''(\delta). \end{equation}
\begin{claim} For every $\ell \in \N$, \begin{equation}\label{eq.control.nt.difference}
|n_{t_\ell}(w)-n_{t_\ell}(w'')| \, \, \, \text{and} \, \, \, |n_{t_\ell}(w')-n_{t_\ell}(w''')| \, \, \, \text{are bounded above by} \, \, \kappa(2\kappa+C), \end{equation} where $C=C(\delta,1)$ is the constant given by Lemma \ref{lemma.div.est}. \end{claim}
\begin{proof}
Indeed, by construction, we have $w'' \in W^-_1(w)$, $w''' \in W^-_1(w')$. Moreover, thanks to \eqref{eq.common.return.times} and the fact that $K_0''(\delta)$ is contained in $K'(\delta,1)$, we can apply Lemma \ref{lemma.div.est} and deduce that $$|\lambda_1(w,n_{t_\ell}(w))-\lambda_1(w'',n_{t_\ell}(w))| \leqslant C.$$
On the other hand, by \eqref{equ:stopping}, we have $$|\lambda_1(w,n_{t_\ell}(w))-t_\ell| \leqslant \kappa,\ |\lambda_1(w'',n_{t_\ell}(w''))-t_\ell| \leqslant \kappa,$$ so that $$|\lambda_1(w,n_{t_\ell}(w))-\lambda_1(w'',n_{t_\ell}(w''))| \leqslant 2\kappa.$$ This implies that $$|\lambda_1(w'',n_{t_\ell}(w''))-\lambda_1(w'',n_{t_\ell}(w))| \leqslant 2 \kappa+C.$$
Referring once more to the Lipschitz property \eqref{eq.lip.dynnorm}, we deduce that $|n_{t_\ell}(w'')-n_{t_\ell}(w)| \leqslant \kappa(2\kappa+C)$ as claimed. Clearly, the same argument applies to $n_{t_\ell}(w')$ and $n_{t_\ell}(w''')$ proving \eqref{eq.control.nt.difference}. \end{proof}
It then follows by \eqref{eq.comparison.dyn.norm}, construction of $K_0''(\delta)$, and \eqref{equ:stopping} that for every $\ell \in \N$, we have \begin{equation}\label{eq.bdd.drift.ww'} \begin{split}
&|\overline{\chi}(\overline{\alpha}(a_{n_{t_\ell}(w')}' \ldots a_0', \xi(b)))- \overline{\chi}(\overline{\alpha}(a_{n_{t_\ell}(w)} \ldots a_0, \xi(b)))| \\
&\leqslant |\lambda_1(w',n_{t_\ell}(w'))-\lambda_1(w,n_{t_\ell}(w))|+2C(\delta) \leqslant 2 \kappa + 2 C(\delta), \end{split} \end{equation} and similarly, \begin{equation}\label{eq.bdd.drift.w''w'''}
|\overline{\chi}(\overline{\alpha}(a_{n_{t_\ell}(w''')}' \ldots a_0', \xi(b')))- \overline{\chi}(\overline{\alpha}(a_{n_{t_\ell}(w'')} \ldots a_0, \xi(b')))| \leqslant 2 \kappa + 2 C(\delta). \end{equation}
Now thanks to the fact that $\nu_w=\nu_{w'}$ (since $w$ and $w'$ have the same past), using the equivariance relation \eqref{eq.equiv.cocycle} at times $n_{t_\ell}(w)$ and $n_{t_{\ell}}(w')$, we get \begin{equation}\label{eq.use.equiv.1} \nu_{T^{n_{t_{\ell}}(w')}w'}=\underbrace{\left(\alpha(a_{n_{t_{\ell}}(w')}' \ldots a_0', \xi(b))\alpha(a_{n_{t_\ell}(w)} \ldots a_0, \xi(b))^{-1}\right)}_{=:D_\ell}\underbrace{\alpha(a_{n_{t_\ell}(w)} \ldots a_0, \xi(b)) \nu_w}_{\nu_{T^{n_{t_{\ell}}(w)}w}}, \end{equation} and similarly, \begin{equation}\label{eq.use.equiv.1'} \nu_{T^{n_{t_{\ell}}(w''')}w'''}=\underbrace{\left(\alpha(a_{n_{t_{\ell}}(w''')}' \ldots a_0', \xi(b'))\alpha(a_{n_{t_\ell}(w'')}\ldots a_0, \xi(b'))^{-1}\right)}_{=:D'_\ell}\underbrace{\alpha(a_{n_{t_\ell}(w'')} \ldots a_0, \xi(b')) \nu_{w''}}_{\nu_{T^{n_{t_{\ell}}(w'')}w''}}. \end{equation}
Here, thanks to \eqref{eq.bdd.drift.ww'} and \eqref{eq.bdd.drift.w''w'''} respectively, the sequences $D_\ell$ and $D_\ell'$ are bounded. Moreover, by construction (see \eqref{eq.common.return.times}), $T^{n_{t_{\ell}}(\zeta')}\zeta$ belongs to the compact continuity set $K_{cont}$ for every $\zeta,\zeta' \in \{w,w',w'',w'''\}$. In particular, there exists a subsequence of $t_\ell$ such that for any such $\zeta,\zeta'$ and for some $\hat{\omega} \in H^\Z$, we have \begin{equation}\label{eq.use.continuity1} T^{n_{t_\ell}(\zeta')}\zeta \to_{\ell \to \infty} \hat{\omega} \implies \nu_{T^{n_{t_{\ell}}(\zeta')}\zeta} \to_{\ell \to \infty} \nu_{\hat{\omega}}. \end{equation}
We now modify the times. Let $m_{t_\ell}(w'')=m_{t_\ell}(w)=\max\{n_{t_\ell}(w),n_{t_\ell}(w'') \}$ and similarly $m_{t_\ell}(w''')=m_{t_\ell}(w')=\max\{n_{t_\ell}(w'),n_{t_\ell}(w''') \}$. By construction of the $n_{t_\ell}$'s and $m_{t_\ell}$'s, the convergence property is unaffected: see \eqref{eq.use.continuity1} and the choice of $\zeta,\zeta'\in\{w,w',w'',w''' \}$.
Moreover, thanks to the equivariance property, we still have the relations \eqref{eq.use.equiv.1} and \eqref{eq.use.equiv.1'} for these modified times $m_{t_\ell}$'s. Finally, the differences between $n_{t_\ell}(w)$ and $n_{t_\ell}(w'')$, and similarly, between $n_{t_\ell}(w')$ and $n_{t_\ell}(w''')$ are bounded, (see \eqref{eq.control.nt.difference}). Since we have chosen $K_0''(\delta)$ so that it is contained in the set $ \hat{\underline{K}}$, the modified differences --- as appearing in \eqref{eq.use.equiv.1} and \eqref{eq.use.equiv.1'} after modifying the times --- $D_\ell$ and $D_\ell'$ are still bounded.
Notice that since $w$ and $w''$ (and similarly, $w'$ and $w'''$) have the same futures, for any sequence of $\ell$'s such that $T^{m_{t_\ell}(w)}w \to \hat{w}$ for some $\hat{w}$, we also have $T^{m_{t_\ell}(w'')}w''=T^{m_{t_\ell}(w)}w''\to \hat{w}$ (and similarly for the pair $w'$ and $w'''$). In conclusion, passing to a subsequence of $t_\ell$'s (that we still denote by $t_\ell$) so that we have\\[2pt] \indent \textbullet ${}$ $D_\ell \to s(b,a,a')$ for some $s(b,a,a') \in D^{\pm}$ and similarly, $D'_\ell \to s(b',a,a')$ for some $s(b',a,a') \in D^{\pm}$, and\\[2pt] \indent \textbullet ${}$ $T^{m_{t_\ell}(w)}w \to \hat{w} \in K_0''(\delta)$ and $T^{m_{t_\ell}(w')}w' \to \hat{w}' \in K_0''(\delta)$,\\[2pt] we deduce from \eqref{eq.use.equiv.1} and \eqref{eq.use.equiv.1'} that $$ \nu_{\hat{w}'}=s(b,a,a')\nu_{\hat{w}} \quad \text{and} \quad \nu_{\hat{w}'}=s(b',a,a')\nu_{\hat{w}}, $$ and hence we get \begin{equation}\label{eq.take.square.if.pgl} \nu_{\hat{w}}=s(b,a,a')^{-1}s(b',a,a') \nu_{\hat{w}}. \end{equation}
By letting $m_\ell=m_{t_\ell}(w)$ and $m_\ell'=m_{t_\ell}(w')$, we obtain Proposition \ref{prop:drift} stated in the beginning of this part.
\textbf{From invariance of one typical point to the full set}: By the equivariance property and commutativity, for every $t \in \N$, the measure $\nu_{T^t \hat{w}}$ is also invariant under $s(b,a,a')^{-1}s(b',a,a')$. On the other hand, we have $\hat{w} \in K_0''(\delta)$, and recall that the latter set is contained in $K_0'(\delta)$. So letting $K_{acc}$ be the set of elements $\omega$ of $K_0(\delta)$ for which there exists a sequence $n_m \to \infty$ with $T^{n_m}\hat{w} \in K_0(\delta)$ and $T^{n_m}\hat{w} \to \omega$, by the definition of $K_0'(\delta)$, the $\mu^\Z$-measure of $K_{acc}$ is positive. Since $K_0(\delta)$ is contained in the continuity set, this implies that for every $\omega \in K_{acc}$, the measure $\nu_{\omega}$ is invariant under $s(b,a,a')^{-1}s(b',a,a')$. By ergodicity and commutativity (the set of $\omega$ such that $\nu_{\omega}$ is invariant under a given element of $D^{\pm}$ is shift-invariant, thanks to the equivariance formula \eqref{eq.equiv.cocycle}), this entails that $$ \nu_{\omega}=s(b,a,a')^{-1}s(b',a,a')\nu_{\omega} \quad \text{for $\mu^\Z$-a.e. $\omega \in H^\Z$}. $$
\textbf{Constructing arbitrarily small drift}: Since for $\mu^\Z$-a.e.~ $\omega$ the stability group of $\nu_{\omega}$ is closed, to verify the hypothesis of Claim 0 (and hence prove Theorem \ref{thm.measure.class.geod}), it suffices to find sequences $\delta_n>0$, couples of futures $a_n,a'_n \in K^+:= K^+_{00}(\delta_n)$ and couples of pasts $b_n,b'_n \in K_{a_n,a'_n}^{-}$ such that \begin{equation}\label{eq.completes.drift} \id \neq (s(b_n,a_n,a'_n)^{-1}s(b'_n,a_n,a'_n))^2 \to \id \end{equation} as $n \to \infty$. Here we take the square to ensure that the invariance is by an element of $D$ rather than $D^{\pm}$.
\begin{comment}
To prove \eqref{eq.completes.drift}, we now turn to the construction of the limiting elements $s(b,a,a')$. For every $\delta>0$ small enough, for any $a,a'\in K_{00}^+(\delta)$ and any $b\in K_{a,a'}^-$,
we can find a sequences of natural numbers, $t_\ell$, $m_{t_\ell}(a)$, and $m_{t_\ell}(a')$ such that for any $b \in B_{fp}$ \begin{equation}\label{eq.with.m.pasts} s(b,a,a')=\lim_{\ell \to \infty}\alpha(a_{m_{t_{\ell}}(a')}' \ldots a_0', \xi(b))\alpha(a_{m_{t_\ell}(a)} \ldots a_0, \xi(b))^{-1}. \end{equation}
Therefore, for any $b,b' \in B_{fp}$, we can express the difference $s(b,a,a')^{-1}s(b',a,a')$ as
\begin{equation}\label{eq.cross.ratio.limit}
s(b,a,a')^{-1}s(b',a,a')=\lim_{\ell \to \infty} \log \left(\frac{\|a'_{m_{t_\ell}(a')} \ldots a'_0 v_{b'}\|\|a_{m_{t_\ell}(a)} \ldots a_0 v_{b}\|}{\|a'_{m_{t_\ell}(a')} \ldots a'_0 v_{b}\|\|a_{m_{t_\ell}(a)} \ldots a_0 v_{b'}\|} \right). \end{equation} \end{comment}
Recall that $a,a'$ are two different points in $K_{00}^+(\delta)$. Due to Lemma \ref{lemma.random.lin.form} and the fact that $K_{00}^+(\delta)$ has positive measure, we can suppose that the corresponding linear forms $\varphi_a$ and $\varphi_{a'}$ are not colinear. The set $K_{a,a'}^-$ has measure greater than $1-2\sqrt{\delta}$. Now, given $\delta'>0$, consider the compact set $K_{a,a'}(\delta'):=K_a(\delta')\cap K_{a'}(\delta')$ given by Lemma \ref{lemma.random.lin.form}. Clearly, if $\delta$ and $\delta'$ are small enough, the set $K_{a,a'}(\delta') \cap K_{a,a'}^-$ has positive measure, bounded below by $1-2\delta'-2\sqrt{\delta}$. On the other hand, by the \textit{drift argument} (Proposition \ref{prop:drift}), for every $b,b' \in K_{a,a'}(\delta') \cap K_{a,a'}^-$, there exist sequences of natural numbers $m_\ell, m_\ell'$ tending to infinity as $\ell \to \infty$ such that \begin{equation}\label{eq.cross.ratio.limit}
\overline{\chi}\left(\log(s(b,a,a')^{-1}s(b',a,a'))\right)=\lim_{\ell \to \infty} \log \left(\frac{\|a'_{m_{\ell}'} \ldots a'_0 v_{b'}\|\|a_{m_{\ell}} \ldots a_0 v_{b}\|}{\|a'_{m_{\ell}'} \ldots a'_0 v_{b}\|\|a_{m_{\ell}} \ldots a_0 v_{b'}\|} \right). \end{equation} By Lemma \ref{lemma.random.lin.form}, we have for two linear forms $\varphi=\varphi_a$ and $\varphi'=\varphi_{a'}$ of unit norm on $V$ that \begin{equation}\label{eq.loglin.form.ratio}
\overline{\chi}\left(\log(s(b,a,a')^{-1}s(b',a,a'))\right)= \log \frac{|\varphi'(v_{b'})\varphi(v_b)|}{|\varphi'(v_b)\varphi(v_{b'})|} \end{equation}
and $|\varphi(v_b)|,|\varphi'(v_b)|,|\varphi(v_{b'})|,|\varphi'(v_{b'})|>\epsilon'>0$, where $\epsilon'=\epsilon'(\delta')$ is given by Lemma \ref{lemma.random.lin.form}.
Let $\epsilon>0$ be given.
Since $\mu^{-\N^\ast}(K_{a,a'}(\delta') \cap K_{a,a'}^-)>0$ and the Furstenberg measure is atomless, we can find two different points $b,b'$ in $K_{a,a'}(\delta') \cap K_{a,a'}^-$ with $v_b\wedge v_{b'}\neq 0$ and $d(v_b,v_{b'})<\epsilon$.
\begin{claim}
If $2\epsilon<(\epsilon')^2$, the drift element associated to $a,a',b,b'$ (as in \eqref{eq.loglin.form.ratio}) is non-trivial and has size $O_{\epsilon'}(\epsilon)$. \end{claim} \begin{proof} This is because \[\frac{\varphi(v_b)\varphi'(v_{b'})}{\varphi(v_{b'})\varphi'(v_b)}-1=\frac{(\varphi,\varphi')(v_{b}\wedge v_{b'})}{\varphi(v_{b'})\varphi'(v_b)}, \] where $(\varphi,\varphi')$ is a linear form on $\wedge^2 V$ given by \[(\varphi,\varphi')(v\wedge v')=\varphi(v)\varphi'(v')-\varphi(v')\varphi'(v). \] Non-triviality follows from the choice of $a,a'$ and $b,b'$: the forms $\varphi$ and $\varphi'$ are not colinear and $v_b\wedge v_{b'}\neq 0$.
By taking $\epsilon<(\epsilon')^2/2$ and using that, for unit vectors, the projective distance is $d(v_b,v_{b'})=\|v_b\wedge v_{b'}\|$, so that $\|v_{b'}\wedge v_b\|\leqslant\epsilon$, we have
\[ \frac{|(\varphi,\varphi')(v_{b'}\wedge v_b)|}{|\varphi(v_{b'})\varphi'(v_b)|}\leqslant \frac{\|v_{b'}\wedge v_b\|}{| \varphi(v_{b'})\varphi'(v_b)|}\leqslant \epsilon/(\epsilon')^2<1/2. \]
Applying the inequality $|\log(1+t)|\leqslant 2|t|$ for $|t|<1/2$, we obtain
\[\left|\log \frac{\varphi'(v_{b'})\varphi(v_b)}{\varphi'(v_b)\varphi(v_{b'})}\right|\leqslant 2 \frac{|(\varphi,\varphi')(v_{b'}\wedge v_b)|}{|\varphi(v_{b'})\varphi'(v_b)|}\leqslant 2\epsilon/(\epsilon')^2. \] The proof of the claim is complete. \end{proof}
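The elementary inequality invoked above is easily checked: for $|t|<1/2$,

```latex
\[
  |\log(1+t)|
  \;=\;\left|\int_0^t \frac{\mathrm{d}s}{1+s}\right|
  \;\leqslant\;\frac{|t|}{1-|t|}
  \;\leqslant\; 2|t| .
\]
```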
Fixing $\epsilon'>0$ and choosing $\epsilon$ arbitrarily small --- i.e.~ taking a sequence $\epsilon_n \to 0$ and associated couples $b_n,b'_n \in K_{a,a'}(\delta') \cap K_{a,a'}^-$ --- we obtain \eqref{eq.completes.drift} and conclude the proof. \end{proof}
\begin{comment}\label{rk.when.pgl2.end.of.proof} When $\Gamma_\mu \notin H^\circ$ (cf.~ Remark \ref{rk.when.pgl2.start.of.proof}), with an adaptation of Corollary \ref{corol.alpha.close.to.dynamical.cocycle} to include the signs of the cocycle $\alpha$, the only new point will be that the elements $s(b,a,a'), s(b',a,a')$, etc.~ will belong to the product group $D\times (\Z/2\Z)$. This problem disappears by simply taking square in \eqref{eq.take.square.if.pgl}; the rest (i.e.~ \eqref{eq.completes.drift}) then goes through. \end{comment}
\subsection{Case 2.3: The remaining case}\label{subsec.remaining.case}
In this part, we will restrict ourselves to a slightly more specific situation: we will assume that the ambient group $G$ is $\PGL_n(\R)$. The subgroups $H,Q,R,R_0$ have the same meaning as before. The group $S = Q/R_0$ is a quotient of a product of $\PGL_{k_i}(\R)$'s.
We are in Case 2.3, so we suppose $H$ is positioned so that $Q_H:=Q \cap H$ is a parabolic subgroup of $H$ and that $Q_H^\circ \cap R_0$ is trivial. In light of Proposition \ref{prop.decomposable} and Definition \ref{def.decomposable} of a decomposable action, it might be tempting at first sight to think that in Case 2.3 the morphism extends and we are in the decomposable situation. However, this turns out not to be the case: whether the morphism can extend depends, for example, on the irreducibility of the action of $H$ on $\P(\R^n)$. We signal at this point that in this paper we do not obtain a characterization of when we are in the decomposable case; as we shall see (Case 2.3.a), if $H$ acts projectively irreducibly on $\R^n$, we are able to ensure this. Without this irreducibility assumption (Case 2.3.b), the description of what may happen is widely open; we content ourselves with some examples.
\subsubsection{Case 2.3.a: Irreducible $H$}
In this case, the irreducibility of $H$ implies that there is a unique $H$-compact orbit $\calC$ on the flag variety $G/Q$.
\begin{comment} If $Q$ is a minimal parabolic subgroup, then we are in Case 2.1 and the fibre $Q/R$ is trivial. So by Proposition \ref{prop.trivial.fibre.measure.class}, the same result is also true and the space $P_\mu^{\erg}(X_\calC)$ is just composed of one point, the Furstenberg measure $\bar\nu_F$.
\begin{remark} The last assertion \eqref{eq.bijection.in.thm.SL2.dec} above is a consequence of the first assertion (decomposability), combined with Proposition \ref{prop.decomposable.measure.class} and the fact that $H/P$ has a unique $\mu$-stationary probability measure. Moreover, the bijection \eqref{eq.bijection.in.thm.SL2.dec} is explicit in a certain standard trivialization and is given by the map \eqref{eq.bijection.map.decomposable}. This is useful since, under a moment assumption on $\mu$, $\mu$-stationary and ergodic measures on $S/\Lambda$ (i.e.~ elements of $P_\mu^{\erg}(S/\Lambda)$) are completely described by the works of Benoist--Quint \cite{BQ1,BQ2} and Eskin--Lindenstrauss \cite{eskin-lindenstrauss.long}. \end{remark}
\end{comment}
\begin{proof}[Proof of Theorem \ref{thm.irreducible.H.decompsable}] If $R_0=Q$, then the fiber is trivial and we are in Case 2.1.
Otherwise, $S\simeq Q/R_0$ is a nontrivial semisimple group. In this situation, one can verify directly (for example, by the explicit computation given below) that $Q_H\cap R_0$ is trivial, so we are in Case 2.3.
Let $D<Q_H$ be a rank-one $\R$-split torus in $Q_H<H$ and let $U$ be the unipotent radical of $Q_H$. We denote by $\mathfrak{u}$, $\mathfrak{d}$, and $\mathfrak{h}$ the Lie algebras of $U$, $D$, and $H$, respectively. Fix a Weyl chamber $\mathfrak{d}^+$ in $\mathfrak{d}$ and two elements $x \in \mathfrak{d}^+$ and $e \in \mathfrak{u}$ such that $[x,e]=2e$, where $[\cdot,\cdot]$ denotes the Lie bracket in $\mathfrak{h}$. We consider the Lie algebra representation of $\mathfrak{h}$ induced by the irreducible representation of $H$ coming from the embedding $H < \PGL_n(\R)$. By the representation theory of $\mathfrak{sl}_2(\R)$, the space $\R^n$ decomposes as a string of one-dimensional weight spaces of $\mathfrak{d}$, which we denote by $V_1,V_2,\ldots,V_n$, ordered increasingly with respect to the order on the weights of $D$ coming from the choice of $\mathfrak{d}^+$. The elements of $\mathfrak{u}$ act as raising operators, i.e.~ for any non-zero $e' \in \mathfrak{u}$, we have $e' V_i =V_{i+1}$ if $i\neq n$ and $e'V_n=0$.
Let $W_1<W_2<\ldots<W_j=\R^n$ be the maximal flag preserved by $Q$. Since the diagonal subgroup $D$ is contained in $Q_H$, each space $W_i$ is also preserved by $\mathfrak{d}$ and hence is a sum of weight spaces $V_l$. Moreover, since $U$ is also contained in $Q$ (and hence preserves the $W_i$'s) and $\mathfrak{u}$ acts by raising operators for $\mathfrak{d}^+$ in the Lie algebra representation, it follows that $W_i=V_n \oplus \ldots \oplus V_{n-k_i+1}$, where $k_1<k_2<\ldots<k_j=n$ are the dimensions of $W_1,W_2,\ldots,W_j$, respectively. We also set $k_0=0$ and $m_i=k_i-k_{i-1}$ for $i=1,\ldots,j$. The group $S$ is then a quotient of the product $\prod_{i=1}^j S_i$, where $S_i \simeq \PGL_{m_i}(\R)$. Indeed, $\prod_i S_i \simeq \prod_i\PGL_{m_i}(\R)=Q/R_0'$, where $R_0'$ is the solvable radical of $Q$, and we have a natural projection $\prod_i S_i=Q/R_0'\to S=Q/R_0$. The projection of $Q_H$ to $S=Q/R_0$ factors through $\prod_i S_i=Q/R_0'$. Therefore, to extend the morphism to $S$, we only need to extend the morphism from $Q_H$ to $\prod_i S_i$.
Let $\mathfrak{s}_i$ be the Lie algebra of $S_i$. The Lie algebra morphism from the Lie algebra of $Q_H$ to $\mathfrak{s}_i$ coming from the morphism $Q_H \to \prod_i S_i$ is the morphism obtained by extending $$ x \mapsto \begin{pmatrix} m_i-1 & & & & \\ & m_i-3 & & & \\ & & \ddots & & \\ & & & -m_i+3 & \\ & & & & -m_i+1 \end{pmatrix}_{m_i \times m_i}, \qquad e \mapsto \begin{pmatrix} 0 & 1 & & & \\ & 0 & 1 & & \\ & & \ddots & \ddots & \\ & & & 0 & 1 \\ & & & & 0 \end{pmatrix}_{m_i \times m_i}. $$ To extend this morphism to $\mathfrak{h} \to \mathfrak{s}_i$, let $f$ be an element of $\mathfrak{h}$ such that $(e,x,f)$ is an $\mathfrak{sl}_2$-triple, i.e.~ $[x,f]=-2f$ and $[e,f]=x$. Mapping the element $f$ to the element $$ \begin{pmatrix} 0 & & & & \\ 1\cdot(m_i-1) & 0 & & & \\ & 2\cdot(m_i-2) & 0 & & \\ & & \ddots & \ddots & \\ & & & (m_i-1)\cdot 1 & 0 \end{pmatrix}_{m_i \times m_i} $$ of $\mathfrak{s}_i=\mathfrak{pgl}_{m_i}(\R)$, whose subdiagonal entries are $l(m_i-l)$ for $l=1,\ldots,m_i-1$, a direct calculation (see e.g.~ \cite[\S 3.7]{neil-ginzburg.book}) shows that we obtain a Lie algebra morphism $\mathfrak{h} \to \mathfrak{s}_i$ for each $i=1,\ldots,j$. We hence get a morphism $\mathfrak{h} \to \bigoplus_i\mathfrak{s}_i$ which gives rise to an algebraic morphism $H \to \prod_i S_i$ extending the initial morphism $Q_H \to \prod_i S_i$. (For the $\PGL_2(\R)$ case, notice that an irreducible algebraic representation from $\SL_2(\R)$ to $\PGL_{m_i}(\R)$ always induces a representation of $\PGL_2(\R)$.)
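For the reader's convenience, we illustrate the smallest nontrivial case: for $m_i=3$ the above formulas give \[ x\mapsto \begin{pmatrix} 2&0&0\\ 0&0&0\\ 0&0&-2 \end{pmatrix},\qquad e\mapsto \begin{pmatrix} 0&1&0\\ 0&0&1\\ 0&0&0 \end{pmatrix},\qquad f\mapsto \begin{pmatrix} 0&0&0\\ 2&0&0\\ 0&2&0 \end{pmatrix}, \] and one verifies directly that $[x,e]=2e$, $[x,f]=-2f$ and $[e,f]=\mathrm{diag}(2,0,-2)=x$, so the three relations of an $\mathfrak{sl}_2$-triple indeed hold.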
Therefore Proposition \ref{prop.decomposable} yields that the $H$-action on $X_\mathcal{C}$ is decomposable. The last assertion then follows from Proposition \ref{prop.decomposable.measure.class} and uniqueness of the $\mu$-stationary probability measure (the Furstenberg measure) on $H/P$. \end{proof}
We single out the following consequence which gives a generalization (and an explanation) of the phenomenon of embedding of the Furstenberg boundary in the fibre bundle $X$. This phenomenon was discovered in the work of Sargent--Shapira \cite{sargent-shapira} when $X$ is the space of $2$-lattices inside $\R^3$. \begin{example}[Rank-$k$ lattices in $n$-space] Let $G=\PGL_n(\R)$ and let $Q$ be the stabilizer of a $k$-space $W$ in $\R^{n}$. Let $R$ be the stabilizer in $Q$ of the homothety class of a lattice in $W$, and $R_0$ the connected component of $R$. In this case, the bundle $G/R$ over $G/Q$ will be denoted by $X_{n,k}$. It is actually the space of homothety-equivalence classes of rank-$k$ lattices in $\R^n$. Recall that $H$ is a copy of $\SL_2(\R)$ or $\PGL_2(\R)$ acting irreducibly on $\P(\R^{n})$ and $\mathcal{C} \subset G/Q$ is the unique compact $H$-orbit in $G/Q$. That is, $\calC=HQ\subset G/Q$. It is then easy to check that we are in the setting of Theorem \ref{thm.irreducible.H.decompsable} and therefore we get a trivialization $(X_{n,k})_\mathcal{C} \overset{\phi}{\simeq} H/Q_H \times S/\Lambda$, where $S=\PGL_k(\R)$ and $\Lambda=\PGL_k(\Z)$, such that the associated cocycle $H \times H/Q_H \to S$ is morphism-type, i.e.~ it does not depend on the $H/Q_H$ coordinate (in particular, it is a morphism $\rho: H \to S$). In the statement below, the $\mu$-action on $S/\Lambda$ is defined via $\rho$. \end{example}
\begin{corollary} Keep the above setting. In particular, let $H$ be an algebraic subgroup of $G$ isomorphic to $\SL_2(\R)$ or $\PGL_2(\R)$ and acting irreducibly on $\P(\R^n)$. Let $(X_{n,k})_\calC$ be the sub-bundle of $X_{n,k}$ over the base $\calC\subset G/Q$. Then, we have \[P_\mu^{erg}((X_{n,k})_\calC)\simeq P_\mu^{erg}(S/\Lambda). \]
\end{corollary}
\begin{comment}
Let $x_1$ and $x_2$ be the equivalences classes of a lattice in $W$ and a lattice in $V/W$, $x_0$ be the tuple $(x_1,x_2)$ so that the stabilizer of $x_0$ is the group $R$ and the stabilizer of $x_1$ is the group $R_1$. Via the choices of $x_0$ and $x_1$, we can identify $X$ and $X_{n,k}$ with $G/R$ and $G/R_1$, respectively. We obtain a projection $X \to X_{n,k}$ from the natural projection $G/R \to G/R_1$. Recall that $\phi=(\pi,\phi_2)$ is the standard trivialization $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$ given by the application of Theorem \ref{thm.irreducible.H.decompsable}. The map $\phi_2: X_\mathcal{C} \to S / \Lambda \simeq Q/R$ is given by $\phi_2(gR)=s(gQ)^{-1}gR$, where $s$ is a Borel section of the map $G\rightarrow G/Q$. In particular, for any $f \in R_1$, we have $\phi_2(gfR)R_1=\phi_2(gR)R_1$ so that the following diagram commutes and is $H$-equivariant. \begin{center}
\begin{tikzcd}
X_\calC \arrow[d, ""' ] \arrow[r, "" ] & \calC\times S/\Lambda \arrow[d,""] \\
(X_{n,k})_\calC \arrow[r, " "'] & \calC\times S_1/\Lambda_1
\end{tikzcd}
\end{center}
It follows that the the bundle $(X_{n,k})_\mathcal{C}$ is $H$-equivariantly mapped to $\mathcal{C} \times S_1 / \Lambda_1$ where the latter is endowed with the diagonal action induced by the morphism $\rho$ precomposed with the projection $S \to S_1$. The statement now follows from Proposition \ref{prop.decomposable.measure.class}. \end{comment}
\begin{comment} Since $H$ acts irreducibly on $\R^n$, up to conjugate, we can always suppose that $Q$ is the parabolic group fixing the $k$ plane in $\R^n$ generated by $k$ vectors $e_1,\cdots, e_k$ and these vectors are the highest $k$ vectors of $H$ up to a choice of roots with respect to the diagonal subgroup of $Q_H$. Let $x_0$ be base point given by the rank-$k$ lattice generated by vectors $e_1,\cdots, e_k$. Then the stabilizer of $G$ at $x_0$ is $R_1:=\Lambda_1S_{2,Q}A_QN_Q$, where $\Lambda_1\simeq \SL_k^{\pm}(\Z)$ and $S_{2,Q}\simeq \SL_{n-k}(\R)$.
Now with the notation in the paper, let $X=G/R_0$, where $R_0=A_QN_Q$. We have $X=G/R_0\to X_{n,k}=G/R_1\to G/Q$. Since $H$-acts irreducibly, we can apply Theorem \ref{thm.irreducible.H.decompsable} to obtain that the action of $H$ on $X_\calC$ is decomposable.
Let $\phi:X_\calC\to \calC\times S/\Lambda$ be a trivilization $\phi(x)=(\pi(x),\phi_2(x))$ such that \begin{equation}\label{eq:r1}
\phi_2(gR)R_1=\phi_2(gfR)R_1 \end{equation}
for $gR\in X_\calC$ and $f\in R_1$, which is always possible by taking a standard trivialization. For an element in $(X_{n,k})_\calC$, we can take a lift in $X_\calC$, then map to $\calC\times S/\Lambda$ and project down to $\calC\times S_1/\Lambda_1$. Due to relation \eqref{eq:r1}, this map doesn't depend on the choice of the lift, hence it is well-defined.
\begin{center}
\begin{tikzcd}
X_\calC \arrow[d, ""' ] \arrow[r, "" ] & \calC\times S/\Lambda \arrow[d,""] \\
(X_{n,k})_\calC \arrow[r, " "'] & \calC\times S_1/\Lambda_1
\end{tikzcd}
\end{center} By Theorem \ref{thm.irreducible.H.decompsable} and Proposition \ref{prop.decomposable}, we can modify the trivialization such $H$ acts diagonally on the product space. The modification is in fact given by conjugate the associated cocycle by a Borel map $s$ from $\calC$ to $S$. The resulting map $\phi'(x)=(\pi(x), s(\pi(x))\phi_2(x))$. The second coordinate still satisfies \eqref{eq:r1}, due to $\pi(gR)=\pi(gfR)$ for $gR\in X_\calC$ and $f\in R_1$. So the trivilization $\phi'$ passes to $(X_{n,k})_\calC$ and we obtain that $(X_{n,k})_\calC$ is isomorphic as $H$-space $\calC\times S_1/\Lambda_1$ with some $H$-action $h(c,f)=(hc,\rho(h)f)$. Then we can use Proposition \ref{prop.decomposable.measure.class} to conclude.
\end{comment}
\subsubsection{Case 2.3.b: Reducible $H$}
Below, we give an example for Case 2.3.b and justify that for this example it is not possible to extend the morphism $Q_H \to S$.
\begin{example}\label{ex.to.be.treated} Let $G=\PGL_4(\R)$ and $Q$ be the parabolic subgroup given by the stabilizer of the 3-plane generated by the standard basis vectors $\{e_1,e_2,e_3\}$. We take $R_0$ to be the solvable radical of $Q$ and $R$ to be the stabilizer of the $3$-lattice generated by $\{e_1,e_2,e_3\}$. Finally, we take $H$ to be the copy of $\PGL_2(\R)$ in $G$ given by \begin{equation}\label{eq.2.3.b.embedding} \left \{ \begin{pmatrix} a^2 & ab & 0 & b^2 \\ 2ac & ad+bc & 0 & 2bd \\ 0 & 0 & 1 & 0 \\ c^2 & cd & 0 & d^2 \\
\end{pmatrix} | \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \SL_2^{\pm}(\R) \right \}. \end{equation} We claim that this configuration falls into Case 2.3.b. Indeed, the intersection $Q_H=Q \cap H$ is given by the image of the upper-triangular subgroup of $\PGL_2(\R)$ in the embedding \eqref{eq.2.3.b.embedding} described above; in other words $$ Q_H= \left \{ \begin{pmatrix} a^2 & ab & 0 & b^2 \\ 0 & \pm 1 & 0 & \pm 2ba^{-1} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & a^{-2} \\
\end{pmatrix} | \; a \neq 0 \right \}. $$ So $Q_H$ is a parabolic subgroup of $H$ and therefore we are in Case 2. It is easy to see that the intersection $Q_H \cap R_0$ is trivial, hence we are in Case 2.3. Finally, the $H$-representation described in \eqref{eq.2.3.b.embedding} is clearly not irreducible, justifying the claim.
Now note that $S=Q/R_0$ is the group $\PGL_3(\R)$ and the projection $Q \to S$ is given by the projectivization of the top-left 3-by-3 block in $Q$. It follows that the morphism $Q_H \to S$ is given by \begin{equation}\label{eq.not.extend} \PGL_2(\R) \ni \begin{pmatrix} a & b \\ 0 & \pm a^{-1} \end{pmatrix} \mapsto \begin{pmatrix} a & b & 0 \\ 0 & \pm a^{-1} & 0 \\ 0 & 0 & a^{-1} \end{pmatrix} \in \PGL_3(\R). \end{equation} However, it is not hard to see that the morphism \eqref{eq.not.extend} from the upper-triangular subgroup of $\PGL_2(\R)$ to $\PGL_3(\R)$ is not the restriction of a morphism $\PGL_2(\R) \to \PGL_3(\R)$. One can either use the classification of $\SL_2(\R)$-representations to see this, or verify the claim by direct computation: denote by $x$ and $e$ a pair of elements of $\mathfrak{sl}_2(\R)$ in the Lie algebra of the upper-triangular group satisfying $[x,e]=2e$. Let $\overline{x}$ and $\overline{e}$ be their images in $\mathfrak{pgl}_3(\R)$ under the Lie algebra representation induced by \eqref{eq.not.extend}. One then checks by direct computation that it is not possible to find an element $\overline{f}$ in $\mathfrak{pgl}_3(\R)$ satisfying $[\overline{x},\overline{f}]=-2\overline{f}$ and $[\overline{e},\overline{f}]=\overline{x}$.
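Let us sketch this computation. In suitable representatives, the induced Lie algebra map sends $x=\mathrm{diag}(1,-1)$ to $\overline{x}=\mathrm{diag}(1,-1,-1)$ and $e=E_{12}$ to $\overline{e}=E_{12}$, where $E_{kl}$ denotes the elementary matrix. Since the eigenvalue of $\mathrm{ad}(\overline{x})$ on $E_{21}$ and $E_{31}$ is $-2$, while on all the other root spaces and on the diagonal it is $2$ or $0$, any solution of $[\overline{x},\overline{f}]=-2\overline{f}$ has a representative of the form $\overline{f}=aE_{21}+bE_{31}$. Then \[ [\overline{e},\overline{f}]=a(E_{11}-E_{22})-bE_{32}, \] and requiring this to equal $\mathrm{diag}(1,-1,-1)+cI$ for some $c\in\R$ forces $b=0$ together with $a=1+c$, $-a=-1+c$ and $0=-1+c$; the last two equations give $c=1$ and $a=0$, contradicting the first. Hence no such $\overline{f}$ exists.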
\end{example}
\section{$\SL_2(\R)$-Zariski closure: equidistribution}\label{sec.equidist}
In this part, we study the equidistribution of the averaged measures $\frac{1}{n}\sum_{1\leqslant k\leqslant n}\mu^{*k}*\delta_x$ for $x$ inside the bundle $X_\calC$. In fact, as we briefly explain in \S \ref{subsec.equidist.trivial} below, all cases except the diagonal fibre action (Case 2.2) boil down to the corresponding results of Benoist--Quint \cite{BQ1,BQ3}. The part \S \ref{subsec.equidist.diagonal} is devoted to the diagonal fibre action case.
\subsection{Equidistribution from Benoist--Quint}\label{subsec.equidist.trivial}
In each case below, we keep the corresponding assumptions from \S \ref{sub.base.and.cases}.
\subsubsection{Case 1 (Dirac Base)} Recall that Case 1 corresponds to the situation when the acting group $H$ is contained in the parabolic $Q$ of $G$. As explained in \S \ref{subsub.dirac.base}, it follows that $H$ fixes a point in $G/Q$ and hence stabilizes the fibre above the fixed point. Therefore, up to conjugating $Q$, we are left with studying the associated $\mu$-random walk on the fibre $S/\Lambda$, where the probability measure $\mu$ is seen as a Zariski-dense measure on a copy of $\SL_2(\R)$ inside $S$. This is then a particular situation of the setting treated in Benoist--Quint's work \cite{BQ1,BQ3}. Consequently, the corresponding equidistribution results apply. We do not state the result here as it would be a repetition. We refer the reader to the more recent \cite[Theorem 1.5]{prohaska-sert-shi}, where the compact support assumption of \cite{BQ3} is relaxed to a finite exponential moment.
\subsubsection{Case 2.1 (Trivial fiber action)}\label{subsub.equidist.trivial.fibre}
Recall from Proposition \ref{prop.trivial.fibre.measure.class} that in this case the $H$-action on $X_\mathcal{C}$ is decomposable with trivial morphism, i.e.~ there exists a standard trivialization $X \simeq G/Q \times Q/R$ for which the associated cocycle restricted to $\mathcal{C}$ is the trivial morphism. Therefore, in this case we have $\mu^{\ast k} \ast \delta_{(\theta,f)}=\int \delta_{g \theta} d\mu^{\ast k}(g) \otimes \delta_f$; in other words, the equidistribution problem reduces to the one in $\mathcal{C} \subseteq G/Q$. It is well-known that, by the spectral gap property, we have the convergence $\int \delta_{g \theta} d\mu^{\ast k}(g) \to \overline{\nu}_F$, moreover with exponential speed estimates with respect to a class of H\"{o}lder functions. We omit the statement to avoid repetition; see \cite[Ch.~ V, Theorem 4.3]{bougerol.lacroix}.
\subsubsection{Case 2.3.a (Irreducible $H$-action)}
\begin{proposition}\label{prop.equidist.H.irred} Keep the setting of Theorem \ref{thm.irreducible.H.decompsable} and suppose moreover that the measure $\mu$ on $H$ has finite exponential moment. Then, there exists a standard trivialization $X \simeq G/Q \times S/\Lambda$ such that for every $x \in X_\mathcal{C}$, the limit as $n$ tends to infinity of $\frac{1}{n}\sum_{k=1}^n \mu^{\ast k}\ast \delta_x$ exists and equals a product $\overline{\nu}_F \otimes \nu^F$, where $\overline{\nu}_F$ is the Furstenberg measure on $H/Q_H$ and $\nu^F$ is a homogeneous probability measure on $S/\Lambda$. \end{proposition}
As we shall see, the statement follows as a consequence of the decomposability of the $H$-action (Theorem \ref{thm.irreducible.H.decompsable}) combined with the Benoist--Quint equidistribution results \cite{BQ1,BQ3}. We note however that we do not treat the question of equidistribution of trajectories of points $x \in X \setminus X_\mathcal{C}$. For such points, already at the level of the base space $G/Q$, the corresponding equidistribution question does not seem to be well-understood in all cases (cf.~ \cite{BQ.compositio}).
\begin{remark}
The conclusion of Proposition \ref{prop.equidist.H.irred} also holds if we replace the Ces\`{a}ro average $\frac{1}{n}\sum_{k=1}^n \mu^{k}\ast \delta_x $ by the sequence of empirical measures. More precisely, for every $x \in X_\mathcal{C}$, for $\mu^{\N}$-a.e.~ $a \in H^{\N}$, the sequence $\frac{1}{n}\sum_{k=0}^{n-1} \delta_{a_k\ldots a_0 x}$ converges to a product measure of the same form as in Theorem \ref{thm.irreducible.H.decompsable}. This follows in the same way, using in addition Breiman's law of large numbers (see e.g.~ \cite[Corollary 3.3]{BQ3}) and the corresponding empirical measure equidistribution results of Benoist--Quint. \end{remark}
\begin{proof} It is clear that any limit point $\nu$ of $\frac{1}{n}\sum_{k=1}^n \mu^{\ast k}\ast \delta_x$ is a $\mu$-stationary probability measure. By Theorem \ref{thm.irreducible.H.decompsable}, there exists a standard trivialization yielding $H$-equivariant projections $\pi_1:X \to G/Q$ and $\pi_2:X \to S/\Lambda$, where equivariance in the latter is with respect to a morphism $H \to S$. As a result, a limit point $\nu$ projects via $\pi_1$ and $\pi_2$ to limit points of $\frac{1}{n}\sum_{k=1}^n \mu^{\ast k}\ast \delta_{\pi_1x}$ and $\frac{1}{n}\sum_{k=1}^n \mu^{\ast k}\ast \delta_{\pi_2x}$, respectively. However, by the uniqueness of the Furstenberg measure, the first sequence converges to the Furstenberg measure $\overline{\nu}_F$. Moreover, by \cite[Theorem 1.5]{prohaska-sert-shi}, the second sequence also admits a limit $\nu^{F}$ which is a homogeneous probability measure on $S/\Lambda$. Since the factor $H/Q_H$ is $\mu$-proximal, it follows by the bijection in Proposition \ref{prop.decomposable.measure.class} that $\nu$ is the unique coupling of $\overline{\nu}_F$ and $\nu^{F}$, i.e.~ the product $\overline{\nu}_F \otimes \nu^{F}$.\end{proof}
\subsection{Equidistribution for diagonal fiber actions (Case 2.2)}\label{subsec.equidist.diagonal}
As mentioned above, unlike the previous cases, the equidistribution problem for the diagonal fiber actions case does not boil down to the corresponding work of Benoist--Quint and we now proceed with our result in this case.
Recall from Case 2.2 and Lemma \ref{lemma.its.iwasawa} that we have a standard trivialization $X_\calC\simeq \calC\times_\alpha S/\Lambda$ such that the action of $H$ on the fibre $S/\Lambda$ is by a one-dimensional split subgroup $D^\pm$ of $S$ through the Iwasawa cocycle $\alpha$ up to a sign.
The main statement for the $\PGL_2(\R)$ case is given in the introduction. Here is the statement for the $\SL_2(\R)$ case.
\begin{theorem}\label{thm.equidist.geod.sl} Keep the hypotheses and notation of Theorem \ref{thm.measure.class.geod} and let $X_\mathcal{C} \simeq H/Q_H \times S/\Lambda$ be the trivialization given by Theorem \ref{thm.measure.class.geod}. Suppose in addition that $H\simeq \SL_2(\R)$ and that the measure $\mu$ has finite exponential moment. Suppose $\Gamma_\mu$ preserves a proper closed cone in $\R^2$. Then, the $D$-orbit of $z \in S/\Lambda$ equidistributes to a probability measure $m$ on $S/\Lambda$ if and only if for any $x=(\theta,z)\in X_\calC$ with $\theta$ inside the support of the Furstenberg measure, we have the convergence \[ \frac{1}{n}\sum_{k=1}^n\mu^{*k}*\delta_x\rightarrow \bar{\nu}_F\otimes m \quad \text{as} \; \; n \to \infty. \] If $\Gamma_\mu$ does not preserve a proper closed cone in $\R^2$, then the $D^\pm$-orbit of $z \in S/\Lambda$ equidistributes to a probability measure $m$ on $S/\Lambda$ if and only if for any $x=(\theta,z)\in X_\calC$, we have the convergence \[ \frac{1}{n}\sum_{k=1}^n\mu^{*k}*\delta_x\rightarrow \bar{\nu}_F\otimes m \quad \text{as} \; \; n \to \infty. \] \end{theorem}
\subsubsection{Equidistribution result on $K\times \R$}
For $H\simeq \PGL_2(\R)$, if $\mu$ is supported on $\PSL_2(\R)$, then there is no sign issue thanks to the choice of the section $s$ (taking values in $K^o$), and we only need to prove an equidistribution result on $K^o\times \R$. The proof is the same as in the $\SL_2(\R)$ case. We will comment at the end on the changes needed to handle the $\PSL_2(\R)$ case (i.e.~ Theorem \ref{thm.equidist.geod}).
In order to treat the sign part in the cocycle $\alpha$, we start with an equidistribution result on $K\times \R$ instead of $H/Q_H\times \R$.
Recall that for $H \simeq \SL_2(\R)$, when $\Gamma_\mu$ preserves a closed proper cone (Case 2.2.a), we have two $\mu$-stationary and ergodic measures $\nu_1,\nu_2$ on $\mathbb{S}^1$, both lifts of the Furstenberg measure on the projective space $\P(V)$. In this case, there exist two continuous non-negative functions $p_1$ and $p_2$ on $\mathbb{S}^1$ (for the characterization of $p_1$ and $p_2$, see \cite[Theorem 2.16]{GLP}) such that $p_1+p_2=1$, $p_i|_{\Supp \nu_j}=\delta_{i,j}$, where $\delta_{i,j}$ is the Kronecker symbol, and for $j=1,2$ and $x\in \mathbb{S}^1$, we have \[ p_j(x)=\int p_j(gx)\,d\mu(g). \] Otherwise (Case 2.2.b), there exists a unique $\mu$-stationary measure $\nu_K$ on $\mathbb{S}^1$.
Let us define the following measures $\nu_x$: \begin{definition}\label{defi:nux}
For $x\in \mathbb{S}^1$, we define
\[\nu_x :=p_1(x)\nu_1+p_2(x)\nu_2 \, \text{ in Case 2.2.a, otherwise } \nu_x=\nu_K.\] \end{definition} According to \cite[Theorem 2.16]{GLP}, these measures $\nu_x$ are the limit distributions for the random walk on $\mathbb{S}^1$ starting from $x$, following the law of $\mu$.
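Note that, in Case 2.2.a, the harmonicity of the functions $p_j$ recorded above yields in one line that the map $x\mapsto \nu_x$ is itself $\mu$-harmonic: \[ \int \nu_{gx}\, d\mu(g)=\sum_{j=1,2}\Big(\int p_j(gx)\, d\mu(g)\Big)\nu_j=\sum_{j=1,2}p_j(x)\,\nu_j=\nu_x, \] consistently with the interpretation of $\nu_x$ as the limit distribution of the random walk started at $x$. In Case 2.2.b, the same property is immediate since $\nu_x=\nu_K$ for all $x$.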
For the probability measure $\mu$, let $\lambda_\mu$ be its Lyapunov exponent, defined as the almost sure limit of $\frac{1}{n}\log\|g_1\cdots g_n\|$, where $g_1,\cdots,g_n$ are i.i.d.~ random variables with distribution $\mu$. Let $\sigma_\chi(g,x)=\bar\chi(\bar\sigma(g,\eta))$ for $g\in H$, $x\in \mathbb{S}^1$ and $\eta=\R x\in H/Q_H$, where $\bar\chi(\bar\sigma(g,\eta))$ is defined in Lemma \ref{lemma.iwasawa.norm}. Clearly, $\sigma_\chi$ does not depend on the lift $x$ of $\eta$ to $\mathbb{S}^1$, so we sometimes equivalently write $\eta$ in the second coordinate to ease the notation.
\begin{proposition}\label{prop.equidistribution} Under the same assumptions as in Theorem \ref{thm.equidist.geod.sl}, there exist $\gamma >0$ and $\eta>0$ such that the following holds. For $n\in\N$, $t=\lambda_\mu n$, $\lambda_\mu/2>\eps_1>2/n$, for any $\varphi\in C^3(\mathbb{S}^1\times \R)$ and for $w \in \mathbb{S}^1$ \begin{equation} \label{eq.renewal+LDP} \begin{aligned} \frac{1}{n}\sum_{k=1}^n\int \varphi(gw ,\sigma_\chi(g,w ))\ d\mu^{*k}(g)&=\frac{1}{t}\int_{\mathbb{S}^1}\int_0^t \varphi(y,s)\ d s\ d \nu_w (y)\\
&+O(e^{-\eta\eps_1 n}|\varphi|_{C^3}+|\varphi|_\infty\eps_1+\frac{C|\varphi|_\infty}{n(1-e^{-c})}), \end{aligned} \end{equation} where the constants $C,c>0$ come from the large deviation estimates with rate $\varepsilon_1$ (see Theorem \ref{thm:LDP}). \end{proposition}
The proof of Proposition \ref{prop.equidistribution} mainly uses the renewal theorem to get the equidistribution and large deviation bounds to get some control of the error.
\begin{remark}[Error term]
In order to get a rate in the convergence, we need to know the dependence of the constants $C$ and $c>0$ on $\eps_1$. When $\mu$ has bounded support, both constants can be estimated with $C$ bounded and $c$ quadratic in $\varepsilon_1$, see \cite[Proposition 1.13]{aoun-sert.concentration} which provides subgaussian concentration estimates. In this case, we can get an explicit error term $O(n^{-1/3}|\varphi|_{C^3})$ in Proposition \ref{prop.equidistribution}. With exponential moment, $c$ can still be shown to be quadratic in $\varepsilon_1$ (locally). On the other hand, it might also be possible to use large deviations bounds in a more clever way to get a better error term. \end{remark}
We now proceed to prove Proposition \ref{prop.equidistribution}. For a function $\varphi$ on $\mathbb{S}^1\times \R$, we define its $L^1C^\gamma$ norm by
\[|\varphi|_{L^1C^\gamma}=\int_{\R}\|\varphi(\cdot ,s)\|_{C^\gamma(\mathbb{S}^1)}ds,\ \] and its $W^{1,2}C^\gamma$ norm by
\[|\varphi|_{W^{1,2}C^\gamma}=|\partial_{ss}\varphi|_{L^1C^\gamma}+|\varphi|_{L^1C^\gamma}, \] where $C^\gamma$ is the $\gamma$-H\"older norm. The first ingredient of the proof of Proposition \ref{prop.equidistribution} is the following uniform quantitative renewal theorem which was first proven in \cite{jialun.ens}. We borrow the current version from \cite{jialun.advances}. \begin{theorem}\cite[Proposition 5.4]{jialun.advances}\label{thm.renewal} Under the same assumptions as in Theorem \ref{thm.equidist.geod.sl}, we have the following. For a compactly supported $C^3$ function $f$ on $\mathbb{S}^1\times\R$, define the renewal sum for $w \in \mathbb{S}^1$ and $t\in\R^+$ by \[Rf(w ,t)=\sum_{k=1}^\infty \int f(gw ,\sigma_\chi(g,w )-t)d\mu^{*k}(g). \] Then, there exists $\eta>0$ such that \begin{equation}\label{equ_renewal}
Rf(w ,t)=\frac{1}{\lambda_\mu}\int_{\mathbb{S}^1} \int_{-t}^\infty f(y,u)\ d Leb(u)\ d\nu_w (y)+O(e^{-\eta (t-|\Supp f|)}|f|_{W^{1,2}C^\gamma}), \end{equation} where
$$|\Supp f|=\sup\{|s| : (w ,s)\in\Supp f \text{ for some }w \in \mathbb{S}^1\}.$$ \end{theorem}
A crucial point in this theorem is that the error term is of the form $e^{-\eta(t-|\Supp{f}|)}$, which enables us to take $f$ with support of size $(1-\eps)t$.
The second ingredient of the proof of Proposition \ref{prop.equidistribution} is the following large deviation estimate; we borrow the precise statement from \cite[Thm. 13.11 (iii)]{bq.book}.
\begin{theorem}[Le Page]\label{thm:LDP} For every $\eps_1>0$, there exist constants $C>0$ and $c>0$ such that
\[\mu^{*n}\{g\in G : |\sigma_\chi(g,w )-\lambda_\mu n|\geqslant \eps_1 n \}\leqslant Ce^{-cn} \] for all $n\in\N$ and $w\in \mathbb{S}^1$. \end{theorem}
We can now give
\begin{proof}[Proof of Proposition \ref{prop.equidistribution}]
We fix $n \in \N$ large enough so that $\lambda_\mu \geqslant 5/n$ (recall that the Lyapunov exponent $\lambda_\mu$ is positive, by a well-known result of Furstenberg), and fix $\varepsilon_1$, $\varphi$ and $w$ as in the statement. We will estimate the left-hand side of \eqref{eq.renewal+LDP} separately for $\sigma_\chi(g,w )$ inside the three intervals $[(\lambda_\mu-\eps_1)n,\infty)$, $[\eps_1n,(\lambda_\mu-\eps_1)n]$ and $(-\infty,\eps_1n]$. The second interval will give us the main term; the other intervals will yield the error term. Take a smooth cutoff $\chi$ which equals $1$ on $[\eps_1n,(\lambda_\mu -\eps_1)n]$ and equals $0$ outside of $[\eps_1n-1,(\lambda_\mu -\eps_1)n+1]$, so that we have $\mathds{1}-\chi\leqslant\mathds{1}_{s<\eps_1n}+\mathds{1}_{s>(\lambda_\mu-\eps_1)n} $. Then, we can write \begin{equation}\label{eq:split} \begin{split}
&\left|\frac{1}{n}\sum_{k=1}^n\int \varphi(gw ,\sigma_\chi(g,w ))\ d\mu^{*k}(g)-\frac{1}{n}\sum_{k=1}^n\int \varphi(gw ,\sigma_\chi(g,w ))\chi(\sigma_\chi(g,w ))\ d\mu^{*k}(g)\right|\\
\leqslant & \frac{1}{n}\left|\sum_{k=1}^n\int \varphi(gw ,\sigma_\chi(g,w ))\mathds{1}_{\sigma_\chi(g,w )<\eps_1n}\ d\mu^{*k}(g)\right|\\
&+\frac{1}{n}\left|\sum_{k=1}^n\int \varphi(gw ,\sigma_\chi(g,w ))\mathds{1}_{\sigma_\chi(g,w )>(\lambda_\mu-\eps_1)n}\ d\mu^{*k}(g)\right|. \end{split} \end{equation}
\textbf{Main term: } Let $t=n\lambda_\mu$ and $f(w ,s)=\varphi(w ,s+t)\chi(s+t)$. Then by \eqref{equ_renewal} \begin{equation}\label{eq:main-renewal} \begin{split} &\frac{1}{n}\sum_{k=1}^\infty\int \varphi(gw ,\sigma_\chi(g,w ))\chi(\sigma_\chi(g,w ))\ d\mu^{*k}(g)\\
=&\frac{1}{n}\sum_{k=1}^\infty\int f(gw ,\sigma_\chi(g,w )-t)\ d\mu^{*k}(g)\\
=&\frac{1}{t}\int_{\mathbb{S}^1}\int_{-t}^\infty f\ dLeb\ d\nu_w +\frac{1}{n}O\big(e^{-\eta(t-|\Supp f|)}|f|_{W^{1,2}C^\gamma}\big), \end{split} \end{equation}
where in the error term, we have $t-|\Supp f|=t-(t-\eps_1 n)=\eps_1n$. For the main term, using the formula of $f$, we have \begin{align*} &\frac{1}{n\lambda_\mu}\int_{\mathbb{S}^1}\int_0^\infty \varphi(y,s)\chi(s)\ dLeb(s)d\nu_w(y)\\
=&\frac{1}{n\lambda_\mu}\int_{\mathbb{S}^1}\int_{\eps_1n}^{(\lambda_\mu-\eps_1)n} \varphi(y,s)\chi(s)\ dLeb(s)d\nu_w(y) +O\left(|\varphi|_\infty\frac{2}{n\lambda_\mu}\right)\\
=&\frac{1}{t}\int_{\mathbb{S}^1}\int_0^{t}\varphi(y,s)\ dLeb(s)\ d\nu_w (y)+|\varphi|_\infty O(\eps_1+\frac{1}{n}). \end{align*} For the error term in \eqref{eq:main-renewal}, we have
\[ \frac{1}{n}|f|_{W^{1,2}C^\gamma}\lesssim \sup_s\{\|\varphi(\cdot,s)\|_{C^\gamma}\}+\sup_s\{ \|\partial_{ss}\varphi(\cdot,s)\|_{C^\gamma}\}\lesssim |\varphi|_{C^3}, \] where the implicit constants depend only on $\lambda_\mu$ and the cutoff $\chi$, since the support of $f$ has length at most $t=\lambda_\mu n$.
Now, we give an upper bound of the sum over ${k> n}$: \begin{align*} &\frac{1}{n}\sum_{k>n}^\infty\int \varphi(gw ,\sigma_\chi(g,w ))\chi(\sigma_\chi(g,w ))d\mu^{*k}(g) \\
\leqslant & |\varphi|_\infty \frac{1}{n}\sum_{k>n}^\infty\mu^{*k}(\{g : \sigma_\chi(g,w )<(\lambda_\mu-\eps_1)n+1\}). \end{align*} Due to the assumption $\eps_1 n\geqslant 2$, we obtain \[\sigma_\chi(g,w )-\lambda_\mu n\leqslant -\eps_1n+1\leqslant -\eps_1n/2. \] We use the large deviation estimate (Theorem \ref{thm:LDP}) to obtain \begin{equation*} \frac{1}{n}\sum_{k>n}^\infty\mu^{*k}(\{g : \sigma_\chi(g,w )<(\lambda_\mu-\eps_1)n+1\})\leqslant \frac{1}{n}\sum_{k>n}Ce^{-ck}\leqslant \frac{Ce^{-cn}}{n(1-e^{-c})}, \end{equation*} where the constants $C,c$ depend on $\eps_1$.
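For completeness, the geometric tail above evaluates as

```latex
\sum_{k>n}e^{-ck}=\frac{e^{-c(n+1)}}{1-e^{-c}}\leqslant \frac{e^{-cn}}{1-e^{-c}}.
```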
Collecting the above estimates, we obtain \begin{equation}\label{eq:second} \begin{split} \frac{1}{n}\sum_{k=1}^n\int \varphi(gw ,\sigma_\chi(g,w ))\chi(\sigma_\chi(g,w ))d\mu^{*k}(g)=&\frac{1}{t}\int_{\mathbb{S}^1}\int_0^t \varphi(y,s)\ dLeb(s)\ d\nu_w (y)\\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! +O\left(e^{-\eta\eps_1 n}|\varphi|_{C^3}+|\varphi|_\infty \left(\eps_1+\frac{1}{n}+\frac{Ce^{-cn}}{n(1-e^{-c})}\right)\right). \end{split} \end{equation}
\textbf{Error term I: } For $k<n_0:=\frac{\lambda_\mu-\eps_1}{\lambda_\mu+\eps_1}n$, we have $k(\lambda_\mu+\eps_1)<(\lambda_\mu-\eps_1)n$. By the large deviation estimates (Theorem \ref{thm:LDP}), we have \begin{align*} &\frac{1}{n}\sum_{k=1}^{n_0}\int \varphi(gw ,\sigma_\chi(g,w ))\mathds{1}_{\sigma_\chi(g,w )\geqslant (\lambda_\mu-\eps_1)n}d\mu^{*k}(g)\\
\leqslant& |\varphi|_\infty \frac{1}{n}\sum_{k=1}^{n_0}\mu^{*k}(\{g\in G : \sigma_\chi(g,w )>(\lambda_\mu+\eps_1)k \})\\
\leqslant & |\varphi|_\infty C\frac{1}{n}\sum_{k=1}^{n_0} e^{-ck}\leqslant \frac{|\varphi|_\infty C}{n(1-e^{-c})}. \end{align*}
For the part $n_0\leqslant k\leqslant n$, we simply bound the integrand by $|\varphi|_\infty$ to obtain
\[\frac{1}{n}\sum_{k=n_0}^{n}\int \varphi(gw ,\sigma_\chi(g,w ))\mathds{1}_{\sigma_\chi(g,w )\geqslant (\lambda_\mu-\eps_1)n}d\mu^{*k}(g)\leqslant |\varphi|_\infty\frac{n-n_0}{n}=|\varphi|_\infty\frac{2\eps_1}{\lambda_\mu+\eps_1}. \] Thus, we have \begin{equation}\label{eq:first}
\frac{1}{n}\sum_{k=1}^{n}\int \varphi(gw ,\sigma_\chi(g,w ))\mathds{1}_{\sigma_\chi(g,w )\geqslant (\lambda_\mu-\eps_1)n}d\mu^{*k}(g)\leqslant \frac{|\varphi|_\infty C}{n(1-e^{-c})}+|\varphi|_\infty\frac{2\eps_1}{\lambda_\mu+\eps_1}. \end{equation}
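For the record, the fraction $\frac{n-n_0}{n}$ appearing above is computed directly from the definition of $n_0$:

```latex
\frac{n-n_0}{n}=1-\frac{\lambda_\mu-\eps_1}{\lambda_\mu+\eps_1}=\frac{2\eps_1}{\lambda_\mu+\eps_1}.
```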
\textbf{Error term II: }If $k>n_1:=\eps_1n/(\lambda_\mu-\eps_1)$, then we have $\eps_1n<k(\lambda_\mu-\eps_1)$ and hence we can apply the large deviation estimate to obtain \begin{align*} &\frac{1}{n}\sum_{k=n_1}^n\int \varphi(gw ,\sigma_\chi(g,w ))\mathds{1}_{\sigma_\chi(g,w )\leqslant \eps_1 n}d\mu^{*k}(g)\\
\leqslant & |\varphi|_\infty \frac{1}{n}\sum_{k=n_1}^n\mu^{*k}(\{g\in G : \sigma_\chi(g,w )<(\lambda_\mu-\eps_1)k \})\leqslant \frac{C|\varphi|_\infty e^{-cn_1}}{n(1-e^{-c})}. \end{align*} For the part $k\leqslant n_1$, \begin{align*}
\frac{1}{n}\sum_{k=1}^{n_1}\int \varphi(gw ,\sigma_\chi(g,w ))\mathds{1}_{\sigma_\chi(g,w )\leqslant \eps_1 n}d\mu^{*k}(g)\leqslant|\varphi|_\infty\frac{n_1}{n}=|\varphi|_\infty\frac{\eps_1}{\lambda_\mu-\eps_1}. \end{align*} Thus, we have \begin{equation}\label{eq:thrid}
\frac{1}{n}\sum_{k=1}^{n}\int \varphi(gw ,\sigma_\chi(g,w ))\mathds{1}_{\sigma_\chi(g,w )\leqslant \eps_1 n}d\mu^{*k}(g)\leqslant|\varphi|_\infty\frac{\eps_1}{\lambda_\mu-\eps_1}+\frac{C|\varphi|_\infty e^{-cn_1}}{n(1-e^{-c})}. \end{equation} Finally, combining \eqref{eq:split}, \eqref{eq:second}, \eqref{eq:first} and \eqref{eq:thrid}, we obtain \begin{align*} \frac{1}{n}\sum_{k=1}^\infty\int \varphi(gw ,\sigma_\chi(g,w ))\ d\mu^{*k}(g)=\frac{1}{t}\int_{\mathbb{S}^1}\int_0^t \varphi(y,s)\ dLeb(s)\ d\nu_w (y)\\
+O\left(e^{-\eta\eps_1 n}|\varphi|_{C^3}+|\varphi|_\infty\frac{\eps_1}{\lambda_\mu-\eps_1}+\frac{C|\varphi|_\infty}{n(1-e^{-c})}\right). \end{align*} \end{proof}
\subsubsection{Equidistribution on $X_\calC$} We now use the equidistribution on $K\times \R$ (Proposition \ref{prop.equidistribution}) to deduce the equidistribution on $X_\calC$, that is, to prove Theorem \ref{thm.equidist.geod.sl}.
\begin{proof}[Proof of Theorem \ref{thm.equidist.geod.sl}] Let $K\times_\sigma D$ be the fiber bundle with the $H$-action given by $h(k,d)=(hk,\sigma(h,k)d)$, where we identify $K \simeq \mathbb{S}^1 \simeq H/AN$. We define a map $p$ from $K\times_\sigma D$ to $H/Q_H\times_\alpha D^\pm$ by \[ p(k,d)=(kM,\mathrm{sg}(k)d), \] where $\mathrm{sg}(k)$ is the sign element in $M$. By Lemma \ref{lem:sign}, we have \begin{lemma}\label{lem.hequivariant} The map $p$ is an $H$-equivariant map from $K\times_\sigma D$ to $H/Q_H\times_\alpha D^\pm$. \end{lemma}
We denote by $\calG(r,\mathrm{sg}(w))$ the element $(e^r,\mathrm{sg}(w))\in D^\pm$ and we use the additive parameter $r\in \R$. Under this parametrization, the equidistribution of the $D$- or $D^\pm$-orbit of $z \in S/\Lambda$ towards some measure $m$ means, respectively, that the measure $\frac{1}{t}\int_0^t\delta_{\calG(r,1)z}\ dr$ or the measure $\frac{1}{2t}\int_0^t\big(\delta_{\calG(r,1)z}+\delta_{\calG(r,-1)z}\big)\ dr$ converges to $m$ as $t \to \infty$.
Let $\psi$ be a $C^3$ function on $X_\mathcal{C} \simeq H/Q_H \times S /\Lambda$ and $z \in S/\Lambda$. Set $\varphi(w,r):=\psi(\eta,\calG(r,\mathrm{sg}(w))z)$, where $\eta$ is the projection of $w$ on $H/Q_H \simeq K/M$. Thanks to Lemma \ref{lem.hequivariant}, we have the relation \begin{equation} \varphi(g s(\eta),\sigma_\chi(g,\eta))=\psi(g\eta,\alpha(g,\eta)z)=\psi(gx), \end{equation} for $x=(\eta,z) \in X_\mathcal{C}$ and where, we recall, $s:K/M \to K$ is the section. Therefore, we have \begin{equation}\label{eq.corresp.phi.psi} \frac{1}{n}\sum_{1\leqslant k\leqslant n}\mu^{*k}*\delta_x(\psi)=\frac{1}{n}\sum_{k=1}^n\int \varphi(g s(\eta) ,\sigma_\chi(g,\eta))\ d\mu^{*k}(g). \end{equation}
We know that for any $\psi\in C^3(X_\calC)$ with bounded $C^3$ norm, with a suitable choice of $\eps_1$ depending on $n$, thanks to Proposition \ref{prop.equidistribution} and the relation \eqref{eq.corresp.phi.psi}, for $t=\lambda_\mu n$, we have \[ \frac{1}{n}\sum_{1\leqslant k\leqslant n}\mu^{*k}*\delta_x(\psi)- \frac{1}{t}\int_{\mathbb{S}^1}\int_0^t\varphi(k',r)\ dr\ d\nu_{s(\eta)}(k')\rightarrow 0 \] as $n$, equivalently $t$, tends to $\infty$.
On the other hand, by construction of the function $\varphi(\cdot,\cdot)$ and the measures $\nu_w$ for $w \in \mathbb{S}^1$, we have \begin{align*} \int_{\mathbb{S}^1} \varphi(k',r)\ d\nu_w(k')=& p_1(w)\int_{\mathbb{S}^1} \psi(k'M,\calG(r,\mathrm{sg}(k'))z)\ d\nu_1(k')\\ &+p_2(w)\int_{\mathbb{S}^1} \psi(k'M,\calG(r,\mathrm{sg}(k'))z) d\nu_2(k')\\ =&p_1(w)\int \psi(\eta,\calG(r,1)z)\ d\overline{\nu}_F(\eta)+p_2(w)\int \psi(\eta,\calG(r,-1)z)\ d\overline{\nu}_F(\eta). \end{align*}
We get that \begin{equation*}
\frac{1}{n}\sum_{1\leqslant k\leqslant n}\mu^{*k}*\delta_x(\psi)- \frac{1}{t}\int \int_0^t p_1(s(\eta))\psi(\eta,\calG(r,1)z)+p_2(s(\eta))\psi(\eta,\calG(r,-1)z)\ dr\ d \overline{\nu}_F(\eta)\rightarrow 0.
\end{equation*}
Recall that for Case 2.2.a, the section $s$ is chosen so that its image contains the support of $\nu_1$. In particular, $p_1(s(\eta))=1$ and $p_2(s(\eta))=0$ for every $\eta$ in the support of the Furstenberg measure $\overline{\nu}_F$. Therefore, in Case 2.2.a, if the $\eta$ coordinate of $x=(\eta,z)$ belongs to the support of $\overline{\nu}_F$, then we have \begin{equation}\label{eq.conv1}
\frac{1}{n}\sum_{1\leqslant k\leqslant n}\mu^{*k}*\delta_x(\psi)- \frac{1}{t}\int \int_0^t \psi(\eta,\calG(r,1)z)\ dr\ d\bar\nu_F(\eta)\rightarrow 0.
\end{equation} In Case 2.2.b, by a similar computation and using the fact that the unique measure $\nu_K$ on $\mathbb{S}^1$ writes as $\nu_K=\frac{1}{2} \int (\delta_w + \delta_{-w}) d\overline{\nu}_F(\R w)$, we have \begin{equation}\label{eq.conv2}
\frac{1}{n}\sum_{1\leqslant k\leqslant n}\mu^{*k}*\delta_x(\psi)- \frac{1}{2t}\int \int_0^t (\psi(\eta,\calG(r,1)z)+\psi(\eta,\calG(r,-1)z)) \ dr\ d\overline{\nu}_F(\eta)\rightarrow 0.
\end{equation} By density of $C^3(X_\calC)$ in $C(X_\calC)$, we deduce from \eqref{eq.conv1} and \eqref{eq.conv2} that $ \frac{1}{n}\sum_{1\leqslant k\leqslant n}\mu^{*k}*\delta_x$ converges weakly to a measure $\overline{\nu}_F\otimes m$ if and only if the $D$-orbit or the $D^{\pm}$-orbit (respectively in Case 2.2.a or Case 2.2.b) starting at $z \in S/\Lambda$ equidistributes towards the measure $m$. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm.equidist.geod}] For $\Gamma_\mu< \PSL_2(\R)\simeq \PGL_2(\R)^o$, we take the lift $\mu_1$ on $\SL_2(\R)$ of $\mu$ on $\PSL_2(\R)$ with equal probability on two preimages of each element. Then we can apply Theorem \ref{thm.renewal} to this new measure $\mu_1$. Here the element $\diag(-1,-1)$ (which is the non-trivial element of $M$ in the case of $H \simeq \SL_2(\R)$) maps to identity in $G$, which acts trivially. Then the same argument as in the proof of Theorem \ref{thm.renewal} readily yields Theorem \ref{thm.equidist.geod}. \end{proof}
\end{document}
\begin{document}
\twocolumn[ \icmltitle{CDT: Cascading Decision Trees for Explainable Reinforcement Learning}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist} \icmlauthor{Zihan Ding}{work} \icmlauthor{Pablo Hernandez-Leal}{bor} \icmlauthor{Gavin Weiguang Ding}{bor} \icmlauthor{Changjian Li}{bor} \icmlauthor{Ruitong Huang}{bor}
\end{icmlauthorlist}
\icmlaffiliation{work}{Work done as an intern at Borealis AI.} \icmlaffiliation{bor}{Borealis AI, Toronto, Canada}
\icmlcorrespondingauthor{Zihan Ding}{zhding@mail.ustc.edu.cn}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in ]
\printAffiliationsAndNotice{\icmlEqualContribution}
\begin{abstract} Deep Reinforcement Learning (DRL) has recently achieved significant advances in various domains. However, explaining the policy of RL agents still remains an open problem due to several factors, one being the complexity of explaining neural network decisions. Recently, a group of works have used decision-tree-based models to learn explainable policies. Soft decision trees (SDTs) and discretized differentiable decision trees (DDTs) have been demonstrated to achieve good performance while keeping the policy explainable. In this work, we further improve the results for tree-based explainable RL in both performance and explainability. Our proposal, Cascading Decision Trees (CDTs), applies representation learning on the decision path to allow richer expressivity. Empirical results show that in both situations, where CDTs are used as policy function approximators or as imitation learners to explain black-box policies, CDTs can achieve better performance with more succinct and explainable models than SDTs. As a second contribution, our study reveals a limitation of explaining black-box policies via imitation learning with tree-based explainable models: its inherent instability. \end{abstract}
\section{Introduction} \label{intro}
Explainable Artificial Intelligence (XAI), and especially Explainable Reinforcement Learning (XRL)~\citep{puiutta2020explainable}, has been attracting increasing attention recently. How to interpret the action choices of reinforcement learning (RL) policies remains a critical challenge, especially given the growing trend of applying RL in domains involving transparency and safety~\citep{cheng2019end, junges2016safety}. Currently, many state-of-the-art DRL agents use neural networks (NNs) as their function approximators. While NNs are considered stronger function approximators (yielding better performance), RL agents built on top of them generally lack interpretability~\citep{lipton2018mythos}. Indeed, interpreting the behavior of NNs themselves remains an open problem in the field~\citep{montavon2018methods, albawi2017understanding}.
In contrast, traditional decision trees (DTs), with hard decision boundaries, are usually regarded as models whose interpretations are readable for humans, since humans can follow the decision making process by visualizing the decision path. However, DTs may suffer from weak expressivity and therefore low accuracy.
An early approach to reduce the hardness of DTs was the soft/fuzzy DT (abbreviated as SDT) proposed by \cite{suarez1999globally}. Recently, differentiable SDTs~\citep{frosst2017distilling} have shown both improved interpretability and better function approximation; they lie between traditional DTs and neural networks.
Differentiable DTs have been adopted for interpreting RL policies in two slightly different settings: an imitation learning setting~\citep{coppens2019distilling, liu2018toward}, in which imitators with interpretable models are learned from RL agents with black-box models, or a full RL setting~\citep{silva2019optimization}, where the policy is directly represented as an interpretable model, \emph{e.g.}, a DT. However, the DTs in these methods only conduct partitions in raw feature spaces without representation learning, which can lead to complicated combinations of partitions and thereby hinder both model interpretability and scalability. Even worse, some methods have axis-aligned partitions (univariate decision nodes)~\citep{wu2017beyond, silva2019optimization} with much lower model expressivity.
In this paper, we propose Cascading Decision Trees (CDTs), which strike a balance between model interpretability and accuracy, that is, they perform adequate representation learning based on interpretable models (\emph{e.g.}, linear models). Our experiments show that CDTs have a significantly smaller number of parameters (and a more compact tree structure) and better performance than related works. The experiments are conducted on RL tasks, in either imitation-learning or full RL settings. We also demonstrate that the imitation-learning approach is less reliable for interpreting RL policies with DTs, since the imitating DTs may be markedly different across runs, which also leads to divergent feature importances and tree structures.
\section{Related Works}
A series of works along the direction of differentiable DTs were developed in the past two decades~\citep{irsoy2012soft, laptev2014convolutional}. Recently, \citet{frosst2017distilling} proposed to distill a SDT from a neural network; their approach was only tested on MNIST digit classification tasks. \citet{wu2017beyond} further proposed the tree regularization technique, favoring models with decision boundaries closer to compact DTs for achieving interpretability. To further boost the prediction accuracy of tree-based models, two main extensions of the single SDT were proposed: (1) ensembles of trees, or (2) unification of NNs and DTs.
An ensemble of decision trees is a common technique for increasing the accuracy or robustness of predictions, and it can be incorporated into SDTs~\citep{rota2014neural, kontschieder2015deep, kumar2016ensemble}, giving rise to neural decision forests. Since more than one tree needs to be considered during inference, this might complicate interpretability. A common solution is to transform the decision forest into a single tree~\citep{sagi2020explainable}.
As for the unification of NNs and DTs, \citet{laptev2014convolutional} propose convolutional decision trees for feature learning from images. Adaptive Neural Trees (ANTs)~\citep{tanno2018adaptive} incorporate representation learning in decision nodes of a differentiable tree with nonlinear transformations like convolutional neural networks (CNNs). The nonlinear transformations of an ANT, not only in the routing functions on its decision nodes but also in its feature spaces, guarantee prediction performance in classification tasks on the one hand, but hinder the potential interpretability of such methods on the other. \cite{wan2020nbdt} propose the neural-backed decision tree (NBDT), which transfers the final fully connected layer of a NN into a DT with induced hierarchies for ease of interpretation, but shares the convolutional backbone with normal deep NNs, yielding state-of-the-art performance on CIFAR10 and ImageNet classification tasks.
However, these advanced methods either employ multiple trees with multiplicative numbers of model parameters, or heavily incorporate deep learning models like CNNs in the DTs. Their interpretability is severely hindered due to their model complexity.
To interpret an RL agent, \citet{coppens2019distilling} propose distilling the RL policy into a differentiable DT by imitating a pre-trained policy. Similarly, \citet{liu2018toward} apply an imitation learning framework but to the $Q$ value function of the RL agent. They also propose Linear Model U-trees (LMUTs) which allow linear models in leaf nodes.
\citet{silva2019optimization} propose to apply differentiable DTs directly as function approximators for either the $Q$ function or the policy in RL. They apply a discretization process and a rule-list tree structure to simplify the trees for improved interpretability. The VIPER method proposed by \cite{bastani2018verifiable} also distills a policy represented as a NN into a DT policy with theoretically verifiable capability, but only in imitation learning settings and for nonparametric DTs.
Our proposed CDT is distinguished from the other main categories of differentiable-DT methods for XRL in the following ways: (i) Compared with SDTs~\citep{frosst2017distilling}, partitions in a CDT happen not only in the original input space, but also in transformed spaces by leveraging intermediate features. This is well documented in recent works~\citep{kontschieder2015deep, xiao2017ndt, tanno2018adaptive} to improve model capacity, and it can be further extended into hierarchical representation learning with advanced feature learning modules like CNNs~\citep{tanno2018adaptive}. (ii) Compared with the work by \citet{coppens2019distilling}, space partitions are not limited to axis-aligned ones (which hinder the expressivity of trees of a given depth), but are achieved with linear models of features as the routing functions. Moreover, the adopted linear models are not a restriction (only an example), and other interpretable transformations are also allowed in our CDT method. (iii) Compared with ANTs~\citep{tanno2018adaptive}, our CDT method unifies the decision making process based on different intermediate features in a single decision making tree, which follows a low-rank decomposition of a large matrix with linear models. It thus greatly improves model simplicity for achieving interpretability.
\section{Method}
\subsection{Soft Decision Tree (SDT)}
A SDT is a differentiable DT with a probabilistic decision boundary at each node. Consider a DT of depth $D$; each node in the SDT can be represented as a weight vector (with the bias as an additional dimension) $\boldsymbol{w}^j_i$, where $i$ and $j$ indicate the index of the layer and the index of the node in that layer respectively, as shown in Fig.~\ref{fig:sdt_node}. The corresponding node is represented as $n_u$, where $u=2^{i-1}+j$ uniquely indexes the node. \begin{figure}
\caption{A SDT node. $\sigma(\cdot)$ is the sigmoid function with function values on decision nodes as input.}
\label{fig:sdt_node}
\end{figure}
The decision path for a single instance can be represented as a set of nodes $\mathcal{P}\subset\mathcal{N}$, where $\mathcal{N}$ is the set of all nodes on the tree. We have $\mathcal{P} = \argmax_{\{u\}} \prod_{i=1}^D p^{\lfloor j/2 \rfloor \rightarrow j}_{i-1\rightarrow i}$, where $p^{\lfloor j/2 \rfloor \rightarrow j}_{i-1\rightarrow i}$ is the probability of going from node $n_{2^{i-2}+\lfloor j/2 \rfloor}$ to $n_{2^{i-1}+j}$. The $\{u\}$ indicates that the $\arg\max$ is taken over a set of nodes rather than a single one. Note that $p^{\lfloor j/2 \rfloor \rightarrow j}_{i-1\rightarrow i}$ will always be 1 for a hard DT~\citep{safavian1991survey}. Therefore the path probability to a specific node $n_u$ in layer $i$ is $P^u=\prod_{i^\prime=1}^{i} p^{\lfloor j^\prime/2 \rfloor \rightarrow j^\prime}_{i^\prime-1\rightarrow i^\prime}$, where $j^\prime$ is the within-layer index of the node of $\mathcal{P}$ in layer $i^\prime$. In the following, we refer to all DTs using probabilistic decision paths as SDT-based methods, abbreviated as SDT.
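To make the routing concrete, here is a minimal sketch of a soft decision node and the resulting path probability (illustrative code, not the authors' implementation; node indexing and data layout are our own choices):

```python
import numpy as np

def node_prob(w, x):
    """Probability of routing left at a soft decision node: sigmoid(w . [1, x])."""
    z = w[0] + np.dot(w[1:], x)  # w[0] is the bias dimension
    return 1.0 / (1.0 + np.exp(-z))

def path_prob(weights, x, path):
    """Probability of following a root-to-node path in an SDT.

    `weights` maps internal-node indices to weight vectors; `path` is a list
    of (node_index, go_left) pairs along the path.
    """
    p = 1.0
    for u, go_left in path:
        p_left = node_prob(weights[u], x)
        p *= p_left if go_left else (1.0 - p_left)
    return p
```

With every routing probability equal to $1$, this reduces to an ordinary hard decision path, matching the remark above.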
\citet{silva2019optimization} propose to discretize the learned differentiable SDTs into univariate DTs for improving interpretability. Specifically, for a decision node with a $(k+1)$-dimensional vector $\boldsymbol{w}$ (the first dimension $w_1$ is the bias term), the discretization process (i) selects the index of the largest weight dimension as $k^*=\argmax_k w_k$ and (ii) divides $w_1$ by $w_{k^*}$, to construct a univariate hard DT. Unless otherwise specified, the default discretization process in our experiments for both SDTs and CDTs also follows this manner. The SDTs are therefore the same as the DDTs in \citet{silva2019optimization}.
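This discretization step can be sketched in a few lines (an illustration under our reading of the procedure; names are ours):

```python
import numpy as np

def discretize(w):
    """Turn a soft multivariate node into a univariate hard split.

    `w` is a (k+1)-dimensional vector whose first entry is the bias term.
    Following the procedure above: (i) pick k* = argmax_k w_k over the
    non-bias weights, (ii) divide the bias by w_{k*}.
    """
    feat = w[1:]
    k_star = int(np.argmax(feat))   # index of the largest weight dimension
    bias = w[0] / feat[k_star]      # rescaled bias for the univariate test
    return k_star, bias
```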
\subsection{Cascading Decision Tree (CDT)} \subsubsection{Motivating Examples} Before introducing our method, we first show two simple examples to demonstrate our motivations for proposing CDT.
\textbf{Multivariate \emph{v.s.} Univariate Decision Nodes} People have proposed a variety of desiderata for interpretability~\citep{lipton2018mythos}, including trust, causality, transferability, informativeness, etc. Here we summarize them into two aspects: (1) interpretable meta-variables that can be directly understood; (2) model simplicity. Human-understandable variables with simple model structures comprise most of the models interpretable by humans, either in the form of physical and mathematical principles or of human intuitions, which is also in accordance with Occam's razor. \begin{figure}
\caption{Comparison of two different tree structures on a binary classification problem.}
\label{fig:2trees}
\end{figure} As for model simplicity, a simple model is in most cases more interpretable than a complicated one. Different metrics can be applied to measure model complexity~\citep{murray2007reducing, molnar2019quantifying}, like the number of model parameters, model capacity, computational complexity, non-linearity, etc. There are several ways to reduce model complexity: projecting the model from a large space into a small sub-space, model distillation, merging replicates in the model, etc. Feature importance~\citep{schwab2019cxplain} (\emph{e.g.}, estimated through the sensitivity of model outputs with respect to inputs) is one type of method for projecting a complicated high-dimensional parameter space into a scalar space across feature dimensions. The CDT method proposed in this paper improves model simplicity by merging replicates through representation learning.
To clarify our choice of model in terms of simplicity, we show an example in Fig.~\ref{fig:2trees} comparing a multivariate DT and a univariate DT on a binary classification task. The multivariate DT is structurally simpler than the univariate one and has fewer parameters, which makes it potentially more interpretable. For even more complex cases, the multivariate tree structure is more likely to achieve the necessary space partitioning with a simpler model structure.
\textbf{Intermediate Feature} As shown in Fig.~\ref{fig:lunarlander_hdt}, we analyzed the heuristic solution\footnote{In the code repository of OpenAI Gym: https://github.com/openai/gym/blob/master/gym/envs/box2d/lunar\newline\_lander.py} of \textit{LunarLander-v2} and found that it contains duplicative structures after being transformed into a decision tree, which can be leveraged to simplify the models to be learned. Specifically, the two green modules $\mathcal{F}_1$ and $\mathcal{F}_2$ in the tree are basically assigning different values to two intermediate variables ($ht$ and $at$) under different cases, while the grey module $\mathcal{D}$ takes the intermediate variables to achieve action selection. The modules $\mathcal{F}_2$ and $\mathcal{D}$ are used repeatedly on different branches on the tree, which forms a duplicative structure. This can help with the simplicity and interpretability of the model, which motivates our idea of CDT methods for XRL. \begin{figure}
\caption{The heuristic decision tree for \textit{LunarLander-v2}, with the left branch and right branch of each node yielding the "True" and "False" cases respectively. $x$ is an 8-dimensional observation, $a$ is the univariate discrete action given by the agent. $at, ht$ are two intermediate variables, corresponding to the "angle-to-do" and "hover-to-do" in the heuristic solution.}
\label{fig:lunarlander_hdt}
\end{figure}
\subsubsection{Methods} We propose CDT as an extension based on SDT with multivariate decision nodes, allowing it to have the capability of representation learning as well as decision making in transformed spaces. In a simple CDT architecture as shown on the left of Fig.~\ref{fig:cascade_merge}, a feature learning tree $\mathcal{F}$ is cascaded with a decision making tree $\mathcal{D}$.
In tree $\mathcal{F}$, each decision node is a simple function of raw feature vector $\boldsymbol{x}$ given learnable parameters $\boldsymbol{w}$: $\phi(\boldsymbol{x};\boldsymbol{w})$, while each leaf of it is a feature representation function: $\boldsymbol{f}=f(\boldsymbol{x};\tilde{\boldsymbol{w}})$ parameterized by $\tilde{\boldsymbol{w}}$. In tree $\mathcal{D}$, each decision node is a simple function of learned features $\boldsymbol{f}$ rather than raw features $\boldsymbol{x}$ given learnable parameters $\boldsymbol{w}^\prime$: $\psi(\boldsymbol{f};\boldsymbol{w}^\prime)$. The output distribution of $\mathcal{D}$ is another parameterized function $p(\cdot;\tilde{\boldsymbol{w}}^\prime)$ independent of either $\boldsymbol{x}$ or $\boldsymbol{f}$. For simplicity and interpretability, all functions $\phi, f \text{ and } \psi$ are linear functions in our examples, but they are free to be extended with other interpretable models.
Specifically, we provide detailed mathematical relationships based on linear functions as follows. For an environment with input state vector $\boldsymbol{x}$ and output discrete action dimension $O$, suppose that our CDT has intermediate features of dimension $K$ (not the number of leaf nodes on $\mathcal{F}$, but for each leaf node), we have the probability of going to the left/right path on the $u$-th node on $\mathcal{F}$:
\begin{align}
p^{\text{Go Left}}_u = \sigma({\boldsymbol{w}_{u}\cdot \boldsymbol{x}}), \quad p^{\text{Go Right}}_u = 1-p^{\text{Go Left}}_u,
\label{eq:1} \end{align}
which is the same as in SDTs. Then we have the linear feature representation function for each leaf node on $\mathcal{F}$, which transforms the basis of the representation space with: \begin{figure}
\caption{CDT methods. I. is a simple CDT architecture, consisting of a feature learning tree $\mathcal{F}$ and a decision making tree $\mathcal{D}$; II. shows two possible types of hierarchical CDT architectures, where (a) is an example architecture with hierarchical representation learning using three cascading $\mathcal{F}$ before one $\mathcal{D}$, and (b) is an example architecture with two $\mathcal{F}$ in parallel, potentially with different dimensions of $x$ as inputs.}
\label{fig:cascade_merge}
\end{figure} \begin{align}
f_k = \tilde{\boldsymbol{w}}_{k}\cdot \boldsymbol{x}, k=0,1, ..., K-1
\label{eq:2} \end{align}
which gives the $K$-dimensional intermediate feature vector $\boldsymbol{f}$ for each possible path. Due to the symmetry of all internal layers within a tree, all internal nodes satisfy the formulas in Eqs.~(\ref{eq:1}) and (\ref{eq:2}). Tree $\mathcal{D}$ is also a SDT, but with the raw input $\boldsymbol{x}$ replaced by the learned representation $\boldsymbol{f}$ for each node $u^\prime$ in $\mathcal{D}$:
\begin{align}
p^{\text{Go Left}}_{u^\prime} = \sigma(\boldsymbol{w}^\prime_{u^\prime}\cdot \boldsymbol{f}), \quad p^{\text{Go Right}}_{u^\prime} = 1-p^{\text{Go Left}}_{u^\prime},
\label{eq:3} \end{align}
Finally, the output distribution is feature-independent, which gives the probability mass values across output dimension $O$ for each leaf of $\mathcal{D}$ as: \begin{align}
p_{k^\prime} = \frac{\exp (\tilde{\boldsymbol{w}}^\prime_{k^\prime})}{\sum_{j=0}^{O-1} \exp (\tilde{\boldsymbol{w}}^\prime_{j})}, \quad k^\prime=0,1,...,O-1 \end{align}
Suppose we have a CDT of depth $N_1$ for $\mathcal{F}$ and depth $N_2$ for $\mathcal{D}$. The probability of going from the root of either $\mathcal{F}$ or $\mathcal{D}$ to its $u$-th leaf node satisfies the derivation for SDTs above:
$P^u=\prod_{i^\prime=1}^{i} p^{\lfloor j^\prime/2 \rfloor \rightarrow j^\prime}_{i^\prime-1\rightarrow i^\prime}$, where the product runs over the nodes of the path $\mathcal{P}$. Therefore the overall path probability of starting from the root of $\mathcal{F}$, reaching the $u_1$-th leaf node of $\mathcal{F}$ and then the $u_2$-th leaf node of $\mathcal{D}$ is:
\begin{align}
P = P^{u_1}P^{u_2} \end{align}
Each leaf of the feature learning tree represents one possible assignment of intermediate feature values, while all leaves share the subsequent decision making tree. During the inference process, we simply take the leaf on $\mathcal{F}$ or $\mathcal{D}$ with the largest probability to assign values to the intermediate features (in $\mathcal{F}$) or derive the output probability (in $\mathcal{D}$), which may sacrifice a little accuracy but increases interpretability. The detailed architecture of CDT with the relationships among variables is plotted in the figures in Appendix A.
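A hard-inference forward pass of the simple CDT (route through $\mathcal{F}$, apply the chosen leaf's linear map $\boldsymbol{f}=\mathbf{T}\boldsymbol{x}$, route through $\mathcal{D}$, read off the leaf distribution) can be sketched as follows; trees are stored as heap-ordered arrays and all names are illustrative, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hard_route(decision_w, v):
    """Follow the most probable path through a full binary soft tree.

    `decision_w` holds one row [bias, weights...] per internal node in heap
    order (root at index 0); returns the index of the reached leaf.
    """
    n_internal = len(decision_w)
    u = 0
    while u < n_internal:
        p_left = sigmoid(decision_w[u][0] + decision_w[u][1:] @ v)
        u = 2 * u + 1 if p_left >= 0.5 else 2 * u + 2
    return u - n_internal

def cdt_forward(Fw, F_leaf_T, Dw, D_leaf_logits, x):
    """Hard-inference pass of a simple CDT: feature learning tree F, then
    decision making tree D acting on the learned features."""
    leaf_F = hard_route(Fw, x)
    f = F_leaf_T[leaf_F] @ x              # K-dimensional intermediate features
    leaf_D = hard_route(Dw, f)
    logits = D_leaf_logits[leaf_D]
    e = np.exp(logits - logits.max())     # softmax over the O actions
    return e / e.sum()
```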
\textbf{Model Simplicity}
We analyze the simplicity of CDT compared with SDT in terms of the number of learnable parameters in the model. The reason is that, to increase interpretability, we need to simplify the tree structure or reduce the number of parameters (weights and biases) in the tree.
We can analyze the model simplicity of CDT against a normal SDT with linear functions from a matrix decomposition perspective. Suppose we need a total of $M$ multivariate decision nodes in the $R$-dimensional raw input space $\mathbb{X}$ to successfully partition the space for high-performance prediction, which can be written as a matrix $\mathbf{W}^x_{M\times R}$. CDT tries to achieve the same partitions through learning a transformation matrix $\mathbf{T}_{K\times R}: \mathbb{X}\rightarrow \mathbb{F}$ for all leaf nodes in $\mathcal{F}$ and a partition matrix $\mathbf{W}^f_{M\times K}$ for all internal nodes in $\mathcal{D}$ in the $K$-dimensional feature space $\mathbb{F}$, such that: \begin{align}
\mathbf{W}^x \boldsymbol{x} &= \mathbf{W}^f \boldsymbol{f} =\mathbf{W}^f \mathbf{T} \boldsymbol{x}\\
\Rightarrow \mathbf{W}^x & =\mathbf{W}^f \mathbf{T}
\label{equ:matrix_decompos} \end{align} Therefore the number of model parameters to be learned with CDT is reduced by $M\times R - (M\times K + K \times R)$ compared against a standard SDT of the same total depth, and it is a positive value as long as $K<\frac{M\times R}{M+R}$, while keeping the model expressivity.
A detailed quantitative analysis of model parameters for CDT and SDT is provided in Appendix B.
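A quick numeric check of this parameter-counting argument (our own sketch; only the partition-related parameters from the decomposition above are counted):

```python
def sdt_partition_params(M, R):
    # M multivariate decision nodes, each a linear split in R-dimensional input space
    return M * R

def cdt_partition_params(M, R, K):
    # transformation matrix T (K x R) plus partition matrix W^f (M x K)
    return M * K + K * R

# The saving M*R - (M*K + K*R) is positive exactly when K < M*R / (M + R).
M, R = 15, 8
for K in range(1, R + 1):
    saving = sdt_partition_params(M, R) - cdt_partition_params(M, R, K)
    assert (saving > 0) == (K < M * R / (M + R))
```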
\textbf{Hierarchical CDT}
From the above, a simple CDT architecture as in Fig.~\ref{fig:cascade_merge}, with a single feature learning model $\mathcal{F}$ and a single decision making model $\mathcal{D}$, can achieve intermediate feature learning with a significant reduction in model complexity compared with a traditional SDT. However, the intermediate features learned with a single $\mathcal{F}$ may be insufficient to capture complex structures in advanced tasks; therefore we further extend the simple CDT architecture into more hierarchical ones. As shown on the right side of Fig.~\ref{fig:cascade_merge}, two potential types of hierarchical CDT are displayed: (a) a hierarchical feature abstraction module with three feature learning models $\{\mathcal{F}_1, \mathcal{F}_2, \mathcal{F}_3\}$ in a cascading manner before inputting to the decision module $\mathcal{D}$; (b) a parallel feature extraction module with two feature learning models $\{\mathcal{F}_1, \mathcal{F}_2\}$ before concatenating all learned features into $\mathcal{D}$.
One needs to bear in mind that as the model structure becomes more complicated, the interpretability of the model decreases due to the loss of simplicity. We therefore did not apply hierarchical CDTs in our experiments, so as to maintain interpretability. However, when the model capacity and prediction accuracy need to be increased, the hierarchical structure is one of the best ways to preserve as much simplicity as possible, so it can be applied when necessary.
\section{Experiments} We compare CDT and SDT in two settings for interpreting RL agents: (1) the imitation learning setting, where the RL agent, parameterized with a black-box model (\emph{e.g.}, a neural network), first generates a state-action dataset for imitators to learn from, and the interpretation is derived from the imitators; (2) the full RL setting, where the RL agent is trained directly with the policy represented by interpretable models like CDTs or SDTs, so that the interpretation can be derived by inspecting those models directly. The environments are \textit{CartPole-v1}, \textit{LunarLander-v2} and \textit{MountainCar-v0} in OpenAI Gym~\citep{brockman2016openai}. The depth of a CDT is denoted ``$d_1+d_2$'' in the following sections, where $d_1$ is the depth of the feature learning tree $\mathcal{F}$ and $d_2$ is the depth of the decision making tree $\mathcal{D}$. Each setting is trained for five runs in imitation learning and three runs in RL.
Both the fidelity and the stability of mimic models reflect their reliability as interpretable models. Fidelity is the accuracy of the mimic model \emph{w.r.t.} the original model; it estimates the similarity between the mimic model and the original one in terms of prediction results. However, fidelity is not sufficient for reliable interpretations: an unstable family of mimic models will lead to inconsistent explanations of the original black-box model. Assessing the stability of a mimic model requires a deeper look into the model itself and comparisons across several runs. Previous research~\citep{bastani2017interpreting} has investigated the fidelity and stability of decision trees as mimic models, where stability is estimated via the fraction of equivalent nodes in different random decision trees trained under the same settings. In our experiments, the stability analysis is conducted by comparing tree weights of different instances in imitation learning settings.
\subsection{Imitation Learning} \textbf{Performance.} The datasets for imitation learning are generated with heuristic agents for the environments \textit{CartPole-v1} and \textit{LunarLander-v2}, containing 10000 episodes of state-action data for each environment. See Appendix C for other training details. The results are provided in Tables~\ref{tab:compare_cdt_sdt1} and \ref{tab:compare_cdt_sdt2}.
\begin{table}[htbp]
\scriptsize \centering
\begin{tabular}{|p{7mm}|p{4mm}|p{10mm}|p{14mm}|p{16mm}|p{6mm}|} \hline Tree Type & Depth & Discretized & Accuracy (\%) & Episode Reward & \# of Params \\ \hline \multirow{ 6}{*}{SDT} & \multirow{ 2}{*}{2} & \ding{55} & 94.1\tiny{$\pm0.01$} & 500.0\tiny{$\pm0.0$} & 23\\ \cline{3-6} &&\ding{51} & 49.7\tiny{$\pm0.02$} & 39.9\tiny{$\pm7.6$} & 14\\ \cline{2-6} & \multirow{ 2}{*}{3} & \ding{55} & 94.5\tiny{$\pm0.1$} & 500.0\tiny{$\pm0.0$} & 51 \\ \cline{3-6} &&\ding{51} & 50.0 \tiny{$\pm0.01$} & 42.5\tiny{$\pm7.3$} & 30\\ \cline{2-6} & \multirow{ 2}{*}{4} & \ding{55} & 94.3\tiny{$\pm0.3$} & 500.0\tiny{$\pm0.0$} & 107\\ \cline{3-6} &&\ding{51} & 50.1 \tiny{$\pm0.1$} & 40.4\tiny{$\pm7.8$} & 62 \\ \hline
\multirow{ 12}{*}{\textbf{CDT}} & \multirow{ 4}{*}{1+2} & \ding{55} & \textbf{95.4}\tiny{$\pm1.1$} & \textbf{500.0}\tiny{$\pm0.0$} & 38\\ \cline{3-6} &&$\mathcal{F}$ only & \textbf{94.4}\tiny{$\pm0.8$} & \textbf{500.0}\tiny{$\pm0.0$} & 35\\ \cline{3-6} &&$\mathcal{D}$ only & 84.1\tiny{$\pm2.8$} & 500.0\tiny{$\pm0.0$} & 35\\ \cline{3-6} &&$\mathcal{F} + \mathcal{D}$ & \textbf{83.8}\tiny{$\pm2.6$} & 497.8\tiny{$\pm8.4$} & \textbf{32}\\ \cline{2-6} & \multirow{ 4}{*}{2+1} & \ding{55} & \textbf{95.6}\tiny{$\pm0.1$} & \textbf{500.0}\tiny{$\pm0.0$} & 54 \\ \cline{3-6} && $\mathcal{F}$ only & \textbf{92.7}\tiny{$\pm0.4$} & \textbf{500.0}\tiny{$\pm0.0$} & 45\\ \cline{3-6} && $\mathcal{D}$ only & 88.4\tiny{$\pm1.3$} & 500.0\tiny{$\pm0.0$} & 53\\ \cline{3-6} && $\mathcal{F} + \mathcal{D}$ & \textbf{89.0}\tiny{$\pm0.4$} & \textbf{500.0}\tiny{$\pm0.0$} & \textbf{44}\\ \cline{2-6} & \multirow{ 4}{*}{2+2} & \ding{55} & \textbf{96.6}\tiny{$\pm0.9$} & \textbf{500.0}\tiny{$\pm0.0$} & 64\\ \cline{3-6} &&$\mathcal{F}$ only & 91.6\tiny{$\pm1.3$} & 500.0\tiny{$\pm0.0$} & 55\\ \cline{3-6} &&$\mathcal{D}$ only & 82.9\tiny{$\pm3.7$} & 494.8\tiny{$\pm19.8$} & 61\\ \cline{3-6} &&$\mathcal{F} + \mathcal{D}$ & 81.9\tiny{$\pm1.8$} & 488.8\tiny{$\pm31.4$} & 52\\ \hline
\end{tabular} \caption{Comparison of CDT and SDT with imitation-learning settings on \textit{CartPole-v1}.} \label{tab:compare_cdt_sdt1} \end{table}
CDTs perform consistently better than SDTs both before and after the discretization process in terms of prediction accuracy, across different tree depths. Additionally, for a similarly accurate model, a CDT always has a much smaller number of parameters than an SDT, which improves its interpretability as shown in later sections. However, although less severely than SDTs, CDTs also suffer from performance degradation after discretization, which can lead to unstable and unexpected models. We claim that this is a general drawback of tree-based methods with soft decision boundaries in XRL with imitation-learning settings, which is studied further in the following.
\begin{table}[htbp]
\scriptsize \centering
\begin{tabular}{|p{7mm}|p{4mm}|p{10mm}|p{14mm}|p{16mm}|p{6mm}|} \hline Tree Type & Depth & Discretized & Accuracy (\%) & Episode Reward & \# of Params \\ \hline \multirow{ 8}{*}{SDT} & \multirow{ 2}{*}{4} & \ding{55} & 85.4\tiny{$\pm0.4$} & 58.2\tiny{$\pm246.1$} & 199\\ \cline{3-6} &&\ding{51} & 54.8\tiny{$\pm10.1$} & -237.1\tiny{$\pm121.9$} & 94\\ \cline{2-6} & \multirow{ 2}{*}{5} & \ding{55} & 87.6\tiny{$\pm0.5$} & 191.3\tiny{$\pm143.8$} & 407 \\ \cline{3-6} &&\ding{51} & 51.6\tiny{$\pm4.5$} & -93.7\tiny{$\pm102.9$} & 190\\ \cline{2-6} & \multirow{ 2}{*}{6} & \ding{55} & 88.7\tiny{$\pm1.3$} & 193.4\tiny{$\pm161.4$} & 823\\ \cline{3-6} &&\ding{51} & 60.2\tiny{$\pm3.9$} & -172.4\tiny{$\pm122.0$} & 382\\ \cline{2-6} & \multirow{ 2}{*}{7} & \ding{55} & 88.9\tiny{$\pm0.5$} & 194.2\tiny{$\pm138.8$} & 1655\\ \cline{3-6} &&\ding{51} & 62.7\tiny{$\pm2.8$} & -233.4\tiny{$\pm62.4$} & 766\\ \hline
\multirow{ 16}{*}{\textbf{CDT}} & \multirow{ 4}{*}{2+2} & \ding{55} & \textbf{88.2}\tiny{$\pm1.6$} & \textbf{107.4}\tiny{$\pm190.7$} & \textbf{116}\\ \cline{3-6} &&$\mathcal{F}$ only & \textbf{78.0}\tiny{$\pm2.4$} & -126.9\tiny{$\pm237.0$} & \textbf{95}\\ \cline{3-6} &&$\mathcal{D}$ only & 68.3\tiny{$\pm10.3$} & -301.6\tiny{$\pm136.8$} & 113\\ \cline{3-6} &&$\mathcal{F}+\mathcal{D}$ & \textbf{64.4}\tiny{$\pm12.1$} & -229.7\tiny{$\pm256.0$} & \textbf{92}\\ \cline{2-6} & \multirow{ 4}{*}{2+3} & \ding{55} & \textbf{88.3}\tiny{$\pm1.7$} & \textbf{168.5}\tiny{$\pm169.0$} & \textbf{144} \\ \cline{3-6} &&$\mathcal{F}$ only & 70.2\tiny{$\pm2.3$} & -9.7\tiny{$\pm159.2$} & 123\\ \cline{3-6} &&$\mathcal{D}$ only & 40.7\tiny{$\pm11.9$} & -106.3\tiny{$\pm187.7$} & 137\\ \cline{3-6} &&$\mathcal{F}+\mathcal{D}$ & 35.9\tiny{$\pm1.5$} & -130.2\tiny{$\pm135.9$} & 116\\ \cline{2-6} & \multirow{ 4}{*}{3+2} & \ding{55} & \textbf{90.4}\tiny{$\pm1.7$} & \textbf{199.5}\tiny{$\pm123.7$} & 216\\ \cline{3-6} &&$\mathcal{F}$ only & 72.2\tiny{$\pm8.3$} & -14.2\tiny{$\pm175.6$} & 167\\ \cline{3-6} &&$\mathcal{D}$ only & \textbf{78.1}\tiny{$\pm2.5$} & \textbf{150.8}\tiny{$\pm148.1$} & 209\\ \cline{3-6} &&$\mathcal{F}+\mathcal{D}$ & \textbf{64.6}\tiny{$\pm4.7$} & \textbf{7.1}\tiny{$\pm173.6$} & \textbf{160}\\ \cline{2-6} & \multirow{ 4}{*}{3+3} & \ding{55} & \textbf{90.4}\tiny{$\pm1.2$} & \textbf{173.0}\tiny{$\pm124.5$} & 244\\ \cline{3-6} &&$\mathcal{F}$ only & 72.0\tiny{$\pm1.2$} & -55.3\tiny{$\pm178.6$} & 195\\ \cline{3-6} &&$\mathcal{D}$ only & 58.7\tiny{$\pm8.6$} & -91.5\tiny{$\pm97.0$} & 237 \\ \cline{3-6} &&$\mathcal{F}+\mathcal{D}$ & 46.8\tiny{$\pm5.6$} & -210.5\tiny{$\pm121.9$} & 188 \\ \hline
\end{tabular} \caption{Comparison of CDT and SDT with imitation-learning settings on \textit{LunarLander-v2}.} \label{tab:compare_cdt_sdt2} \end{table}
\textbf{Stability.} \label{sec:stablility} To investigate the stability of imitation learners for interpreting the original agents, we measure the normalized weight vectors from different imitation-learning trees. For SDTs, the weight vectors are the linear weights on inner nodes, while for CDTs $\{\tilde{\boldsymbol{w}}, \tilde{\boldsymbol{w}}^\prime \}$ are considered. Through the experiments, we would like to show how unstable the imitators $\{\mathbf{L}\}$ are. We have a tree agent $\mathbf{X}\in\{\mathbf{L'}, \mathbf{H}, \mathbf{R}\}$, where $\mathbf{L'}$ is another imitator tree agent trained under the same setting, $\mathbf{R}$ is a random tree agent, and $\mathbf{H}$ is a heuristic tree agent (used for generating the training dataset). The distances of tree weights between two agents $\mathbf{L, X}$ are calculated with the following formula: \begin{align*}
D(\mathbf{L}, \mathbf{X}) &= \frac{1}{2N}\sum^N_{n=1} \min_{m=1,2,...,M} || \boldsymbol{l}_m-\boldsymbol{x}_n ||_1 \\
&+ \frac{1}{2M}\sum^M_{m=1} \min_{n=1,2,...,N} || \boldsymbol{l}_m-\boldsymbol{x}_n ||_1 \numberthis \end{align*} where $M$ is the number of the imitation learners, and $N$ is the number of another set of imitation learners ($\mathbf{L'}$), the number of heuristic agents ($\mathbf{H}$), or the number of random tree agents ($\mathbf{R}$), depending on the specific choice of $\mathbf{X}$. $\overline{D(\mathbf{L}, \mathbf{X})}$ is averaged over all possible $\mathbf{L}$s and $\mathbf{X}$s with the same setting. Since we have the heuristic agent for the \textit{LunarLander-v2} environment and we transform it into a multivariate DT agent, we obtain the decision boundaries of the tree on all its nodes; we therefore also compare the decision boundaries of the heuristic tree agent $\mathbf{H}$ with those of the learned tree agent $\mathbf{L}$. For \textit{CartPole-v1}, however, no official heuristic agent in the form of a decision tree is available. For the decision making trees in CDTs, we transform the weights back into the input feature space to make a fair comparison with SDTs and the heuristic tree agent. The results are displayed in Table~\ref{tab:stability}; all trees use intermediate features of dimension 2 for both environments. In terms of stability, CDTs generally perform similarly to SDTs, and even better on the \textit{CartPole-v1} environment.
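The symmetric nearest-neighbor distance defined above can be sketched in a few lines of NumPy; the array layout (one normalized weight vector per row) and the function name are our assumptions:

```python
import numpy as np

def tree_distance(L, X):
    """Symmetric Chamfer-style L1 distance between two sets of normalized
    tree weight vectors, following the distance formula in the text.
    L: (M, d) array of weight vectors; X: (N, d) array."""
    # Pairwise L1 distances, shape (M, N): pairwise[m, n] = ||l_m - x_n||_1.
    pairwise = np.abs(L[:, None, :] - X[None, :, :]).sum(axis=-1)
    # For each x_n, its nearest l_m; for each l_m, its nearest x_n.
    term_x = pairwise.min(axis=0).mean()   # (1/N) sum_n min_m ||l_m - x_n||_1
    term_l = pairwise.min(axis=1).mean()   # (1/M) sum_m min_n ||l_m - x_n||_1
    return 0.5 * (term_x + term_l)

# Identical weight sets are at distance zero.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
print(tree_distance(W, W))  # 0.0
```

Taking the minimum over the other set on each side makes the measure insensitive to the ordering of nodes within a tree, which is why it suits comparing independently trained trees.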
\begin{table}
\scriptsize \centering
\begin{tabular}{ |p{5mm}|p{14.5mm}|p{8mm}|p{9mm}|p{12mm}|p{9mm}| }
\hline
Tree Type & Env & Depth & $\overline{D(\mathbf{L}, \mathbf{L'})}$ & $\overline{D(\mathbf{L}, \mathbf{R})}$ & $\overline{D(\mathbf{L}, \mathbf{H})}$ \\[0.8ex] \hline
\multirow{ 2}{*}{SDT} & CartPole-v1 & 3 & 0.21 & 0.90$\pm 0.10$ & - \\ \cline{2-6}
& \multirow{1}{*}{LunarLander-v2} & 4 & 0.50 & 0.92$\pm 0.05$ & 0.84\\ \hline
\multirow{ 4}{*}{\textbf{CDT}} & \multirow{ 2}{*}{CartPole-v1} & 1+2 & 0.07 & 1.05$\pm0.15$ & - \\ \cline{3-6}
& & 2+2 & 0.19 & 1.03$\pm0.10$ & - \\ \cline{2-6}
& \multirow{ 2}{*}{LunarLander-v2} & 2+2 & 0.63 & 1.01$\pm0.10$ & 0.98\\ \cline{3-6}
& & 3+3 & 0.53 & 0.83$\pm0.06$ & 0.86 \\
\hline \end{tabular} \caption{Tree Stability Analysis. $\overline{D(\mathbf{L}, \mathbf{L'})}$, $\overline{D(\mathbf{L}, \mathbf{R})}$ and $\overline{D(\mathbf{L}, \mathbf{H})}$ are average values of distance between an imitator $\mathbf{L}$ and another imitator $\mathbf{L}^\prime$, or a random agent $\mathbf{R}$, or a heuristic agent $\mathbf{H}$ with metric $D$. CDTs are generally more stable, but still with large variances over different imitators.} \label{tab:stability} \end{table}
We further evaluate the feature importance with at least two different methods on SDTs to demonstrate the instability of imitation learning settings for XRL; see Appendix D.1. We also display all trees (CDTs and SDTs) for both environments in Appendix D.3. Significant differences can be found across different runs for the same tree structure under the same training setting, which testifies to the unstable and unrepeatable nature of interpreting imitators instead of the original agents.
\textbf{Conclusion.} We claim that the current imitation-learning setting with tree-based models is not suitable for interpreting the original RL agent, with the following evidence derived from our experiments: (i) The discretization process usually degrades the performance (prediction accuracy) of the agent significantly, especially for SDTs. Although CDTs alleviate the problem to a certain extent, the performance degradation is still not negligible, therefore the imitators are not expected to be alternatives for interpreting the original agents; (ii) With the stability analysis in our experiments, we find that different imitators will display different tree structures even if they follow the same training setting on the same dataset, which leads to significantly different decision paths and local feature importance assignments.
\subsection{Reinforcement Learning}
\textbf{Performance.} We evaluate the learning performance of different DTs and NNs as policy function approximators in RL, as shown in Fig.~\ref{fig:rl_compare}. Every setting is trained for three runs. We use the Proximal Policy Optimization~\citep{schulman2017proximal} algorithm in our experiments. The multilayer perceptron (MLP) model is a two-layer NN with 128 hidden units. The SDT has a depth of 3 for \textit{CartPole-v1} and 4 for \textit{LunarLander-v2}. The CDT has depths of 2 and 2 for the feature learning tree and the decision making tree respectively on \textit{CartPole-v1}, and depths of 3 and 3 for \textit{LunarLander-v2}. Therefore, for each environment, the SDTs and CDTs have a similar number of model parameters, while the MLP model has at least 6 times more parameters. Detailed training settings are provided in Appendix E. From Fig.~\ref{fig:rl_compare}, we can see that CDTs at least outperform SDTs as policy function approximators for RL in terms of both sampling efficiency and final performance, although they may not learn as fast as general MLPs with a significantly larger number of parameters. For the \textit{MountainCar-v0} environment, the MLP model has two layers with 32 hidden units, the SDT has a depth of 3, and the CDT has depths of 2 and 2 for the feature learning tree and the decision making tree respectively, with one-dimensional intermediate features. The learning performance is less stable due to the sparse reward signals and large variances in exploration. However, with a CDT for policy function approximation, there are still near-optimal agents after training, with or without state normalization.
\begin{figure}
\caption{Comparison of SDTs and CDTs on three environments in terms of average rewards in RL setting: (a)(c)(e) use normalized input states while (b)(d)(f) use unnormalized ones.}
\label{fig:rl_compare}
\end{figure}
\textbf{Tree Depth.} The depths of DTs are also investigated for both SDT and CDT, because deeper trees tend to have more model parameters and therefore put more emphasis on accuracy than on interpretability. Fig.~\ref{fig:rl_compare_depth_norm} shows the learning curves of SDTs and CDTs in RL with different tree depths for the two environments, using normalized states as input; the comparison with unnormalized states is in Appendix F, with similar results. From the comparisons, we can see that deeper trees generally learn faster and reach even better final performance for both CDTs and SDTs, but CDTs are less sensitive to tree depth than SDTs.
\begin{figure}
\caption{Comparison of SDTs and CDTs with different depths (state normalized). (a) and (b) are trained on \textit{CartPole-v1}, while (c) and (d) are on \textit{LunarLander-v2}.}
\label{fig:rl_compare_depth_norm}
\end{figure}
\textbf{Interpretability.} We display the learned CDTs in RL settings for the three environments, compared against heuristic solutions or SDTs. A heuristic solution\footnote{Provided by Zhiqing Xiao on the OpenAI Gym Leaderboard: https://github.com/openai/gym/wiki/Leaderboard} for \textit{CartPole-v1} is: if $3\theta+\dot{\theta}>0$, push right; otherwise, push left. As shown in Fig.~\ref{fig:cartpole_plot}, in our learned CDT of depth 1+2, the weights of the two-dimensional intermediate features ($f[0]$ and $f[1]$) are much larger on the last two dimensions of the observation than on the first two, so we can approximately ignore the first two dimensions due to their low importance in the decision making process. We thus get similar intermediate features for the two cases in two dimensions, which are approximately $w_1x[2]+w_2x[3]\rightarrow w\theta + \dot{\theta}$ after normalization $(w>0)$. The decision making tree in the learned CDT then gives a solution close to the heuristic one: if $w\theta+\dot{\theta}<0$, push left; otherwise, push right. The original CDT before discretization and an SDT for comparison are provided in Appendix G.
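For concreteness, the quoted heuristic rule can be written as a tiny policy function; the observation layout follows Gym's \textit{CartPole-v1} and the function name is ours:

```python
def heuristic_cartpole_action(obs):
    """Heuristic CartPole-v1 policy quoted in the text:
    push right (action 1) if 3*theta + theta_dot > 0, else push left (0).
    obs = [cart_position, cart_velocity, pole_angle, pole_angular_velocity]."""
    theta, theta_dot = obs[2], obs[3]
    return 1 if 3.0 * theta + theta_dot > 0 else 0

# Pole tilting right with no angular velocity -> push right.
print(heuristic_cartpole_action([0.0, 0.0, 0.05, 0.0]))  # 1
```

The learned CDT recovers the same functional form $w\theta+\dot{\theta}$ up to the sign convention and the value of $w$.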
For \textit{MountainCar-v0}, due to the complexity of the landscape shown in Fig.~\ref{fig:mountaincar_plot}, interpreting the learned model is even harder. However, through the CDT we can see that the agent learns intermediate features as combinations of car position and velocity, potentially an estimated future or previous position, and makes action decisions based on them. The original CDT before discretization has depth 2+2 with one-dimensional intermediate features, and its structure is shown in Appendix G. For \textit{LunarLander-v2}, as in Fig.~\ref{fig:lunarlander_plot}, the learned CDT agent captures some important feature combinations, like the angle with angular speed and X-Y coordinate relationships, for decision making. \begin{figure}
\caption{Top: learned CDT (after discretization). Bottom: game scene of \textit{CartPole-v1}. }
\label{fig:cartpole_plot}
\end{figure}
\begin{figure}
\caption{Left: learned CDT (after discretization). Right: game scene of \textit{MountainCar-v0}.}
\label{fig:mountaincar_plot}
\end{figure}
\begin{figure}
\caption{Learned CDT (after discretization) for \textit{LunarLander-v2} with game scene at the right bottom corner.}
\label{fig:lunarlander_plot}
\end{figure}
\section{Conclusion}
In this work, we have proposed a new differentiable DT architecture, the Cascading Decision Tree (CDT). A simple CDT cascades a feature learning DT and a decision making DT into a single model. Our experiments show that, compared with traditional differentiable DTs (\emph{i.e.}, DDTs or SDTs), CDTs achieve better function approximation in both imitation learning and full RL settings with a significantly reduced number of model parameters, while better preserving the tree prediction accuracy after discretization. We also corroborate, qualitatively and quantitatively, that SDT-based methods in the imitation-learning setting may not be appropriate for achieving interpretable RL agents, due to the instability of tree structures among different imitators, even when they have similar performances. Finally, we contrast the interpretability of learned DTs in RL settings, especially for the intermediate features. Our analysis supports that CDTs lend themselves to being extended further into hierarchical architectures with more interpretable modules, due to the richer expressivity allowed via representation learning. More work needs to be done to fully realize the potential of our method, including the investigation of hierarchical CDT settings and well-regularized intermediate features for further interpretability. Additionally, since the present experiments use linear transformations in the feature space, non-linear transformations are expected to be leveraged for tasks with higher complexity or continuous action spaces while preserving interpretability.
\onecolumn \begin{center}
\Large
\textbf{Supplementary Material} \end{center} \begin{appendices}
\section{Detailed Simple CDT Architecture} \label{subsec:appendix_cdt} \begin{figure}
\caption{A detailed architecture of simple CDT: the feature learning tree is parameterized by $w$ and $b$, and its leaves are parameterized by $\tilde{w}$; the inner nodes of the decision making tree are parameterized by $w^\prime$ and $b^\prime$, while the leaves are parameterized by $\tilde{w}^\prime$.}
\label{fig:cascade}
\end{figure}
\section{Quantitative Analysis of Model Parameters} \label{app:model_parameters} Consider the case where the raw input feature dimension is $R$ and the intermediate feature dimension is chosen as $K<R$. We compare a CDT with two cascading trees of depths $d_1$ and $d_2$ against an SDT of depth $d$. Supposing the output dimension is $O$, the number of parameters in the CDT is: \begin{equation}
N(CDT)=[(R+1)(2^{d_1}-1)+K\cdot R\cdot2^{d_1}]+[(K+1)(2^{d_2}-1)+O\cdot2^{d_2}]
\label{equ:cdt_parameters} \end{equation} while the number of parameters in SDT is: \begin{equation}
N(SDT)=(R+1)(2^d-1)+O\cdot2^d
\label{equ:sdt_parameters} \end{equation}
Considering an example for Eq.~\ref{equ:cdt_parameters} and Eq.~\ref{equ:sdt_parameters} with an SDT of depth 5 and a CDT with $d_1=2, d_2=3$, raw feature dimension $R=8$, intermediate feature dimension $K=4$, and output dimension $O=4$, we get $N(CDT)=222$ and $N(SDT)=407$. This indicates a reduction of around $45\%$ of the parameters in this case, which significantly improves interpretability.
In another example, with $R=8, K=4, O=4$, the numbers of parameters in CDT and SDT models are compared in Fig.~\ref{fig:params_comparison}, assuming $d=d_1+d_2$ for total depths ranging from 2 to 20. The ratio of the numbers of model parameters is $\frac{N(CDT)}{N(SDT)}$.
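Eq.~\ref{equ:cdt_parameters} and Eq.~\ref{equ:sdt_parameters} can be checked directly with a short script; the helper names below are ours:

```python
def n_cdt(R, K, O, d1, d2):
    """Parameter count of a CDT, per the CDT parameter equation:
    feature learning tree of depth d1 plus decision making tree of depth d2."""
    feature_tree = (R + 1) * (2**d1 - 1) + K * R * 2**d1
    decision_tree = (K + 1) * (2**d2 - 1) + O * 2**d2
    return feature_tree + decision_tree

def n_sdt(R, O, d):
    """Parameter count of an SDT of depth d, per the SDT parameter equation."""
    return (R + 1) * (2**d - 1) + O * 2**d

# Worked example from the text: R=8, K=4, O=4 with d1=2, d2=3 vs. an SDT of d=5.
print(n_cdt(8, 4, 4, 2, 3), n_sdt(8, 4, 5))  # 222 407
```

The same formula reproduces the parameter counts listed in the experiment tables, \emph{e.g.} 51 parameters for an SDT of depth 3 on \textit{CartPole-v1} ($R=4$, $O=2$).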
\begin{figure}
\caption{Comparison of numbers of model parameters in CDTs and SDTs. The left vertical axis is the number of model parameters in $\log$-scale. The right vertical axis is the ratio of model parameter numbers. CDT has a decreasing ratio of model parameters against SDT as the total depth of model increases. }
\label{fig:params_comparison}
\end{figure}
\section{Hyperparameters in Imitation Learning} \label{app:il_params} \begin{table}[H] \centering
\begin{tabular}{ m{2cm} m{3cm} m{2cm} m{2cm} }
\hline
Tree Type & Env & Hyperparameter & Value \\ \hline
\multirow{ 6}{*}{Common} & \multirow{ 3}{*}{CartPole-v1} & learning rate & $1\times 10^{-3}$\\ \cline{3-4}
& & batch size & 1280\\ \cline{3-4}
& & epochs & 80 \\ \cline{2-4}
& \multirow{ 3}{*}{LunarLander-v2} & learning rate & $1\times 10^{-3}$\\ \cline{3-4}
& & batch size & 1280\\ \cline{3-4}
& & epochs & 80 \\ \hline \hline
\multirow{ 2}{*}{SDT} & CartPole-v1 & depth & 3\\ \cline{2-4}
& \multirow{ 1}{*}{LunarLander-v2} & depth & 4\\ \hline
\multirow{ 6}{*}{CDT} & \multirow{ 3}{*}{CartPole-v1} & FL depth & 2 \\ \cline{3-4}
& & DM depth & 2 \\ \cline{3-4}
& & \# intermediate variables & 2 \\ \cline{2-4}
& \multirow{ 3}{*}{LunarLander-v2} & FL depth & 3 \\ \cline{3-4}
& & DM depth & 3 \\ \cline{3-4}
& & \# intermediate variables & 2 \\
\hline \end{tabular} \caption{Imitation learning hyperparameters. The ``Common'' hyperparameters are shared by both SDT and CDT.} \label{tab:rl_params} \end{table}
\section{Additional Imitation Learning Results for Stability Analysis} \label{app:additional_il}
Both the fidelity and the stability of mimic models reflect their reliability as interpretable models. Fidelity is the accuracy of the mimic model \emph{w.r.t.} the original model; it estimates the similarity between the mimic model and the original one in terms of prediction results. However, fidelity is not sufficient for reliable interpretations: an unstable family of mimic models will lead to inconsistent explanations of the original black-box model. Assessing the stability of a mimic model requires a deeper look into the model itself and comparisons across several runs. Previous research~\citep{bastani2017interpreting} has investigated the fidelity and stability of decision trees as mimic models, where stability is estimated via the fraction of equivalent nodes in different random decision trees trained under the same settings. In our tests, apart from evaluating the tree weights of different imitators, we also use the feature importance given by different differentiable DT instances with the same architecture and training setting to measure stability.
\subsection{Feature Importance Assignment on Trees} \label{app:feature_importance}
For differentiable DT methods (e.g. CDT and SDT), since the decision boundaries within each node are linear combinations of features, we can simply take the weight vector $\boldsymbol{w}^j_i$ as the importance assignment for those features within each node.
After training the tree, a \textit{local explanation} is straightforward to derive from the inference process of a single instance and its decision path on the tree. A \textit{global explanation} can be the average local explanation across instances, \emph{e.g.}, in one or several episodes under the RL settings. Here we list several ways of assigning importance values to input features with SDT, to derive the feature importance vector $\boldsymbol{I}$ with the same dimension as the decision node vectors $\boldsymbol{w}$ and the input features:
For \textit{local explanation}: \begin{itemize}
\item \RomanNumeralCaps{1}. A trivial way of feature importance assignment on SDT would be simply adding up all weight vectors of nodes on the decision path:
$\boldsymbol{I}(\boldsymbol{x}) = \sum_{i,j}\boldsymbol{w}^j_i(\boldsymbol{x})$
\item \RomanNumeralCaps{2}. The second way is a weighted average of the decision vectors, \emph{w.r.t.} the confidence of the decision boundaries for a specific instance. Considering the soft decision boundary on each node, we assume that the more confidently a boundary partitions the data point into a specific region of the space, the more reliably we can assign feature importance according to that boundary.
The \textit{confidence} of a decision boundary can be positively correlated with the distance from the data point to the boundary, or with the probability of the data point falling into one side of the boundary; the latter is straightforward in our settings. We define the confidence as $p(x)=p^{\lfloor j/2 \rfloor \rightarrow j}_{i-1\rightarrow i}(x)$, which is the probability of choosing node $j$ in the $i$-th layer from its parent on instance $x$'s decision path. It indicates how far the data point is from the middle of the soft boundary in a probabilistic view.
Therefore the importance value is derived via multiplying the confidence value with each decision node vector:
$\boldsymbol{I}(\boldsymbol{x}) = \sum_{i,j}p^{\lfloor j/2 \rfloor \rightarrow j}_{i-1\rightarrow i}(\boldsymbol{x}) \boldsymbol{w}^j_i(\boldsymbol{x})$.
Fig.~\ref{fig:boundary2} illustrates the reason for using the decision confidence (\emph{i.e.}, probability) as a weight for assigning feature importance: the probability of belonging to one category is positively correlated with the distance from the instance to the decision boundary. Therefore, when multiple boundaries partition the space (\emph{e.g.}, two in the figure), we assign smaller confidence, in determining feature importance, to the boundaries closer to the data point, since across a closer boundary the data point is more easily perturbed into the contrary category and is less certain to remain in the original one.
\begin{figure}
\caption{Multiple soft decision boundaries (dashed lines) partition the space. The dots represent input data points, and different colored regions indicate different partitions in the input space. The boundaries closer to the instance are less important in determining the feature importance since they are less distinctive for the instance.}
\label{fig:boundary2}
\end{figure}
\item \RomanNumeralCaps{3}. Since the tree we use is differentiable, we can also apply gradient-based methods for feature importance assignment, which is: $\boldsymbol{I}(\boldsymbol{x}) = \frac{\partial{y}}{\partial{\boldsymbol{x}}}$, where $ y=\text{SDT}(\boldsymbol{x})$.
\end{itemize}
For \textit{global explanation}: \begin{itemize} \item We can simply average the feature importance at each time step (\emph{i.e.}, local explanation) to get global feature importance over an episode or across episodes, where the local explanations can be derived in either of the above ways. \end{itemize}
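As a sketch, local explanations \RomanNumeralCaps{1} and \RomanNumeralCaps{2} amount to a (possibly confidence-weighted) sum of the weight vectors along the decision path; the helper below assumes the path weights and branch probabilities have already been collected, and its name is ours:

```python
import numpy as np

def local_importance(path_weights, path_probs=None):
    """Local feature-importance sketch for a soft decision tree.
    path_weights: (n_nodes, d) weight vectors of the nodes on the decision path.
    path_probs: optional (n_nodes,) confidences p of taking each branch.
    With path_probs=None this is method I (plain sum of weights);
    otherwise method II (confidence-weighted sum)."""
    W = np.asarray(path_weights, dtype=float)
    if path_probs is None:
        return W.sum(axis=0)
    p = np.asarray(path_probs, dtype=float)
    return (p[:, None] * W).sum(axis=0)

path = [[1.0, 0.0], [0.5, 0.5]]
print(local_importance(path))              # method I:  [1.5 0.5]
print(local_importance(path, [1.0, 0.8]))  # method II: [1.4 0.4]
```

Method \RomanNumeralCaps{3} additionally requires differentiating the tree output \emph{w.r.t.} the input, so it depends on the autodiff framework used for training and is not sketched here.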
\subsection{Results of Feature Importance in Imitation Learning} To test the stability of applying the SDT method with imitation learning from a given agent, we compare the SDT agents of different runs and the original agents using certain metrics. The agent to be imitated is a heuristic decision tree (HDT) agent, and the metric for evaluation is the assigned feature importance across an episode on each feature dimension. As described in the previous section, the feature importance for a local explanation can be derived in three ways, which work for both HDT and SDT here. The environment is \textit{LunarLander-v2}, with an 8-dimensional observation, in our experiments.
Since SDTs from different runs may predict different actions, even if they are trained under the same setting and long enough to achieve similarly high accuracies, we conduct comparisons not only for an online decision process during one episode, but also on an offline state dataset pre-collected by the HDT agent. We hope this alleviates the accumulating differences in trajectories caused by consecutively different actions made by different agents, and gives a fairer comparison of the decision process (or feature importance) for the same trajectory.
\textbf{Different Tree Depths.} First, the comparison of feature importance (adding up node weights on the decision path) for the HDT and learned SDTs of different depths in an online decision episode is shown in Fig.~\ref{fig:stability_online}. All SDT agents are trained for 40 epochs to convergence. The accuracies of the three trees are $87.35\%, 95.23\%, 97.50\%$, respectively.
\begin{center} \begin{figure}
\caption{Comparison of feature importance (local explanation \RomanNumeralCaps{1}) for SDT of depth 3, 5, 7 with HDT on an episodic decision making process.}
\label{fig:stability_online}
\end{figure} \end{center}
From Fig.~\ref{fig:stability_online} we can see significant differences among SDTs with different depths, as well as between them and the HDT, even on the episode with the same random seed, which indicates that the depth of an SDT affects not only the model prediction accuracy but also the decision making process.
\textbf{Same Tree with Different Runs.} We compare the feature importance on an offline dataset containing the states the HDT agent encounters in one episode. All SDT agents have a depth of 5 and are trained for 80 epochs to convergence. The three agents have testing accuracies of $95.88\%, 97.93\%, \text{ and } 97.79\%$ respectively after training.
The feature importance values are evaluated with the different approaches mentioned above (\textit{local explanation} \RomanNumeralCaps{1}, \RomanNumeralCaps{2} and \RomanNumeralCaps{3}) on the same offline episode, as shown in Fig.~\ref{fig:feature_importance_compare}. In the results, \textit{local explanations} \RomanNumeralCaps{2} and \RomanNumeralCaps{3} look similar, since most nodes on the highest-probability decision path have probabilities close to 1 (\emph{i.e.}, close to a hard decision boundary) when going to their child nodes.
From Fig.~\ref{fig:feature_importance_compare}, considerable differences can also be spotted across different runs for the local explanations, even though the SDTs have similar prediction accuracies, no matter which metric is applied.
\begin{figure}
\caption{Comparison of feature importance for three SDTs (depth=5, trained under the same setting) with three different local explanations. All runs are conducted on the same offline episode.}
\label{fig:feature_importance_compare}
\end{figure}
\subsection{Tree Structures in Imitation Learning} \label{app:tree_structures} In this section we display the agents trained with CDTs and SDTs on both \textit{CartPole-v1} and \textit{LunarLander-v2}, before and after tree discretization, in Fig.~\ref{fig:sdt_vis}, \ref{fig:sdt_vis_dis}, \ref{fig:sdt_vis_cartpole}, \ref{fig:sdt_vis_dis_cartpole}, \ref{fig:cdt_vis}, \ref{fig:cdt_vis_cartpole}, \ref{fig:cdt_vis_dis_cartpole}. Each figure contains trees trained in four runs with the same setting. Each sub-figure contains one learned tree (plus an input example and its output) with an inference path (\emph{i.e.}, the solid lines) for the same input instance. The lines and arrows indicate the connections among tree nodes. The colors of the squares on tree nodes show the values of the weight vectors for each node. For feature-learning trees in CDTs, the leaf nodes are colored with the feature coefficients. The output leaf nodes of both SDTs and decision-making trees in CDTs are colored with the output categorical distributions. Three color bars are displayed on the left side for inputs, tree inner nodes, and output leaves, respectively, as demonstrated in Fig.~\ref{fig:sdt_vis}; the same convention holds for the remaining tree plots. The digits on top of each node represent the output action categories.
Among all the learned tree structures, significant differences can be observed in the weight vectors, as well as in the intermediate features of CDTs, even though the four trees are trained under the same setting. This leads to considerably different explanations and feature importance assignments across trees.
\begin{figure}
\caption{Comparison of four runs with the same setting for SDT (before discretization) imitation learning on \textit{LunarLander-v2}. The dashed lines with different colors on the top-left diagram indicate the valid regions for each color bar; this is the default setting for the remaining diagrams.}
\label{fig:sdt_vis}
\end{figure}
\begin{figure}
\caption{Comparison of four runs with the same setting for SDT (after discretization) imitation learning on \textit{LunarLander-v2}.}
\label{fig:sdt_vis_dis}
\end{figure}
\begin{figure}
\caption{Comparison of four runs with the same setting for SDT (before discretization) imitation learning on \textit{CartPole-v1}.}
\label{fig:sdt_vis_cartpole}
\end{figure}
\begin{figure}
\caption{Comparison of four runs with the same setting for SDT (after discretization) imitation learning on \textit{CartPole-v1}.}
\label{fig:sdt_vis_dis_cartpole}
\end{figure}
\begin{figure}
\caption{Comparison of four runs with the same setting for CDT (before discretization) imitation learning on \textit{LunarLander-v2}: feature learning trees (top) and decision making trees (bottom).}
\label{fig:cdt_vis}
\end{figure}
\begin{figure}
\caption{Comparison of four runs with the same setting for CDT (after discretization) imitation learning on \textit{LunarLander-v2}: feature learning trees (top) and decision making trees (bottom).}
\label{fig:cdt_vis_dis}
\end{figure}
\begin{figure}
\caption{Comparison of four runs with the same setting for CDT (before discretization) imitation learning on \textit{CartPole-v1}: feature learning trees (top) and decision making trees (bottom).}
\label{fig:cdt_vis_cartpole}
\end{figure}
\begin{figure}
\caption{Comparison of four runs with the same setting for CDT (after discretization) imitation learning on \textit{CartPole-v1}: feature learning trees (top) and decision making trees (bottom).}
\label{fig:cdt_vis_dis_cartpole}
\end{figure}
\section{Training Details in Reinforcement Learning} \label{app:rl_params} \begin{table}[H] \centering
\begin{tabular}{ m{2cm} m{3cm} m{4cm} m{2cm} }
\hline
Tree Type & Env & Hyperparameter & Value \\ \hline
\multirow{ 24}{*}{Common} & \multirow{ 8}{*}{CartPole-v1} & learning rate & $5\times 10^{-4}$\\ \cline{3-4}
& & $\gamma$ & 0.98\\ \cline{3-4}
& & $\lambda$ & 0.95\\ \cline{3-4}
& & $\epsilon$ & 0.1\\ \cline{3-4}
& & update iteration & 3\\ \cline{3-4}
& & hidden dimension (value) & 128\\ \cline{3-4}
& & episodes & 3000 \\ \cline{3-4}
&& time horizon & 1000 \\ \cline{2-4}
& \multirow{ 8}{*}{LunarLander-v2} & learning rate & $5\times 10^{-4}$\\ \cline{3-4}
& & $\gamma$ & 0.98\\ \cline{3-4}
& & $\lambda$ & 0.95\\ \cline{3-4}
& & $\epsilon$ & 0.1\\ \cline{3-4}
& & update iteration & 3\\ \cline{3-4}
& & hidden dimension (value) & 128\\ \cline{3-4}
& & episodes & 5000 \\ \cline{3-4}
&& time horizon & 1000 \\ \cline{2-4}
& \multirow{ 8}{*}{MountainCar-v0} & learning rate & $5\times 10^{-3}$\\ \cline{3-4}
& & $\gamma$ & 0.999\\ \cline{3-4}
& & $\lambda$ & 0.98\\ \cline{3-4}
& & $\epsilon$ & 0.1\\ \cline{3-4}
& & update iteration & 10\\ \cline{3-4}
& & hidden dimension (value) & 32\\ \cline{3-4}
& & episodes & 5000 \\ \cline{3-4}
&& time horizon & 1000 \\ \hline \hline
\multirow{ 3}{*}{MLP} & CartPole-v1 & hidden dimension (policy) & 128\\ \cline{2-4}
& \multirow{ 1}{*}{LunarLander-v2} & hidden dimension (policy) & 128\\ \cline{2-4}
& \multirow{ 1}{*}{MountainCar-v0} & hidden dimension (policy) & 32\\ \hline
\multirow{ 2}{*}{SDT} & CartPole-v1 & depth & 3\\ \cline{2-4}
& \multirow{ 1}{*}{LunarLander-v2} & depth & 4\\ \cline{2-4}
& \multirow{ 1}{*}{MountainCar-v0} & depth & 3\\ \hline
\multirow{ 6}{*}{CDT} & \multirow{ 3}{*}{CartPole-v1} & FL depth & 2 \\ \cline{3-4}
& & DM depth & 2 \\ \cline{3-4}
& & \# intermediate variables & 2 \\ \cline{2-4}
& \multirow{ 3}{*}{LunarLander-v2} & FL depth & 3 \\ \cline{3-4}
& & DM depth & 3 \\ \cline{3-4}
& & \# intermediate variables & 2 \\\cline{2-4}
& \multirow{ 3}{*}{MountainCar-v0} & FL depth & 2 \\ \cline{3-4}
& & DM depth & 2 \\ \cline{3-4}
& & \# intermediate variables & 1 \\
\hline \end{tabular} \caption{RL hyperparameters. The ``Common'' hyperparameters are shared by both SDT and CDT.} \label{tab:rl_params} \end{table}
To normalize the states\footnote{We found that state normalization can sometimes affect the learning performance significantly, especially in RL settings.}, we collect 3000 episodes of samples for each environment with a well-trained policy and compute their mean and standard deviation. During training, each state input is normalized by subtracting the mean and dividing by the standard deviation.
The hyperparameters for RL are provided in Table~\ref{tab:rl_params} for MLP, SDT, and CDT on three environments.
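The normalization step described above can be sketched as follows; the collected states here are random placeholders for states gathered with a well-trained policy.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder for states collected over 3000 episodes (4-dim states, toy values).
collected_states = rng.normal(loc=3.0, scale=2.0, size=(3000, 4))

# Per-dimension statistics used for standardization.
mean = collected_states.mean(axis=0)
std = collected_states.std(axis=0)

def normalize(state):
    """Subtract the mean and divide by the standard deviation."""
    return (state - mean) / std

normalized = normalize(collected_states)
print(normalized.mean(axis=0), normalized.std(axis=0))
```

Applied to the collected data itself, the output is standardized to zero mean and unit variance per dimension.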
\section{Additional Reinforcement Learning Results} \label{app:additional_rl} \begin{figure}
\caption{Comparison of SDTs and CDTs with different depths (state unnormalized). (a) and (b) are trained on \textit{CartPole-v1}, while (c) and (d) are on \textit{LunarLander-v2}.}
\label{fig:rl_compare_depth}
\end{figure} Fig.~\ref{fig:rl_compare_depth} compares the learning curves of SDTs and CDTs with different depths, under the RL settings without state normalization. The results are similar to those with state normalization in the main paper.
\section{Trees Structures Comparison} \label{app:tree_structure_compare} \begin{figure}
\caption{The learned CDT (before discretization) of depth 1+2 for \textit{CartPole-v1}. }
\label{fig:cartpole_plot_cdt}
\end{figure}
\begin{figure}
\caption{The learned SDT (before discretization) of depth 3 for \textit{CartPole-v1}. }
\label{fig:cartpole_plot_sdt}
\end{figure}
\begin{figure}
\caption{The learned CDT (before discretization) of depth 2+2 for \textit{MountainCar-v0}. }
\label{fig:mountaincar_plot_cdt}
\end{figure}
\end{appendices}
\end{document} |
\begin{document}
\title{Deep Variational Inverse Scattering}
\author{\IEEEauthorblockN{ AmirEhsan Khorashadizadeh\IEEEauthorrefmark{1}, Ali Aghababaei\IEEEauthorrefmark{2}, Tin Vlašić\IEEEauthorrefmark{3}, Hieu Nguyen\IEEEauthorrefmark{1}, Ivan Dokmanić\IEEEauthorrefmark{1} }
\IEEEauthorblockA{\IEEEauthorrefmark{1} Department of Mathematics and Computer Science, University of Basel, Basel, Switzerland} \IEEEauthorblockA{\IEEEauthorrefmark{2} Department of Electrical Engineering, Sharif University of Technology} \IEEEauthorblockA{\IEEEauthorrefmark{3} Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia} \IEEEauthorblockA{ \emph{*amir.kh@unibas.ch} } }
\maketitle
\begin{abstract}
Inverse medium scattering solvers generally reconstruct a single solution without an associated measure of uncertainty. This is true both for the classical iterative solvers and for the emerging deep-learning methods. But ill-posedness and noise can make this single estimate inaccurate or misleading. While deep networks such as conditional normalizing flows can be used to sample posteriors in inverse problems, they often yield low-quality samples and uncertainty estimates. In this paper, we propose U-Flow, a Bayesian U-Net based on conditional normalizing flows, which generates high-quality posterior samples and estimates physically-meaningful uncertainty. We show that the proposed model significantly outperforms the recent normalizing flows in terms of posterior sample quality while having comparable performance with the U-Net in point estimation. Our implementation is available at \url{https://github.com/swing-research/U-Flow}.
\end{abstract}
\vskip0.5\baselineskip \begin{IEEEkeywords} Bayesian inference, conditional normalizing flow, inverse scattering, U-Net. \end{IEEEkeywords}
\section{Introduction} In inverse medium scattering, the goal is to determine the properties of an unknown model from the measured scattered fields. The unknown model parameters describe the medium containing the scatterers. Inverse scattering has numerous applications, including radar imaging~\cite{cheney2009fundamentals}, through-wall imaging~\cite{amin2017through}, medical ultrasound imaging, geophysics~\cite{persico2014introduction}, and nondestructive testing~\cite{zoughi2000microwave}. We consider the reconstruction of a finite number of model parameters from the scattered fields. The forward model, given by a partial differential equation, is a mathematical operator mapping the model parameters to the measurements. In our case, the forward model is the time-harmonic wave equation and is nonlinear: even though the equation is linear in the source term, it is nonlinear in the model parameters. The nonlinearity is due to multiple scattering and complicates the inversion; the problem becomes more nonlinear as the contrast increases~\cite{chen2018computational}. Moreover, while inverse medium scattering is well-posed and Lipschitz stable for continuous measurements~\cite{nachman1996global}, it is an ill-posed inverse problem for a sparse, finite set of measurements: there are many plausible reconstructions that fit the measurements to within the noise level.
This variety of solutions suggests using methods that recover more than a single reconstruction~\cite{sun2021deep, khorashadizadeh2022conditional}. A probabilistic characterization of the solutions enables a more reliable interpretation of the reconstructions and provides an important measure of uncertainty as the problem becomes more ill-posed. Estimating uncertainty is paramount in safety-critical tasks such as medical imaging \cite{begoli2019need}, nuclear stockpile stewardship \cite{stoyer2009science,brown2015uncertainty}, and, more recently, self-driving vehicles \cite{arnez2020comparison,michelmore2018evaluating}.
It is natural to use a Bayesian approach via modeling the posterior over reconstructions. We assume that the measurements $y$ and the unknown model parameters $x$ are realizations of random vectors, ${Y \in \mathcal{Y}}$, ${X \in \mathcal{X}}$ with a joint distribution $p_{X,Y}$. The posterior distribution $p_{X|Y}$ is the conditional probability of the parameters given the observed measurements and can be expressed using Bayes rule as \begin{equation}
p_{X|Y}(x|y) = \dfrac{p_{Y|X}(y|x) p_X(x)}{\int_x p_{X,Y}(x,y) dx}. \end{equation} In real-world high-dimensional imaging problems, computing the evidence $\int_x p_{X,Y}(x,y) dx$ is intractable. Moreover, the prior distribution $p_X$ of the unknown parameters is unknown and must be estimated.
There are a number of approaches to approximate or sample the posterior. Tarantola \cite{tarantola2005inverse}, as well as Stuart \cite{stuart2010inverse}, provided a comprehensive review of inverse problems from a statistical point of view. Traditional approaches include variants of Markov chain Monte Carlo \cite{martin2012stochastic,zhao2019gradient} which exploit the operator structure. The main challenge is the large number of required forward simulations. To alleviate the computational cost, a class of methods \cite{cui2015data,peherstorfer2018survey} employs data-driven model reduction. More recently, neural networks have shown promising results for posterior approximation. The variational U-Net was proposed by Esser \emph{et al.}~\cite{esser2018variational} to generate images from poses and was exploited by Jin \emph{et al.}~\cite{jin2019fast} for reservoir simulations, where the network is trained with the evidence lower bound (ELBO), similar to variational autoencoders~\cite{kingma2013auto}. Bayesian convolutional neural networks have been used for posterior sampling in several computational imaging problems~\cite{siahkoohi2022deep,wei2020uncertainty}.
Conditional normalizing flows~\cite{ardizzone2019guided} are a class of deep generative models for approximating the posterior. They provide efficient posterior sampling, likelihood estimation, and uncertainty quantification~\cite{siahkoohi2020faster,zhao2022bayesian}. However, they have drawbacks: they require significant memory and are slow to train. Moreover, they lack any architectural regularization of the posterior samples, which yields low-quality reconstructions in ill-posed inverse problems. Conditional injective flows~\cite{khorashadizadeh2022conditional} remedy these drawbacks by using a low-dimensional latent space; they can be trained quickly and have a low memory footprint, leading to better posterior samples. Still, the quality of the reconstructions is inferior to that of highly successful image-to-image regression models such as the U-Net~\cite{ronneberger2015u}, especially for non-linear inverse problems.
In this paper, we propose U-Flow, a Bayesian U-Net based on conditional normalizing flows. U-Flow benefits from favorable aspects of the U-Net: it yields high-quality reconstructions even for non-linear inverse problems, while enabling regularized posterior sampling and meaningful uncertainty estimates. We show that the MMSE estimate from U-Flow has comparable quality to that of U-Net and it significantly outperforms conditional injective flows in posterior sampling and uncertainty quantification.
\section{Wave Scattering Model}
We use the Helmholtz (time-harmonic wave) equation as the forward model,\footnote{Here we numerically solve the equation using the \texttt{j-wave} package \cite{stanziola2022jwave}.} \begin{equation}\label{eq:helmholtz_eq}
\triangle u - \dfrac{\omega^2}{c^2} u = -i g ~\text{in}~ \Omega. \end{equation} The medium is characterized by the wave speed $c$. In this work, we restrict ourselves to structural heterogeneity in the wave speed, while leaving the density constant and the attenuation zero. The source $g$ is a point source. The domain is unbounded; thus, we append perfectly matched layers \cite{bermudez2007optimal} to the computational domain. We use the default GMRES method to solve \eqref{eq:helmholtz_eq}.
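A toy one-dimensional finite-difference analogue of this forward model illustrates the setup; the sketch below uses Dirichlet boundaries instead of PML and a dense direct solve instead of GMRES, and the grid, source position, and scatterer are illustrative choices rather than the paper's \texttt{j-wave} configuration.

```python
import numpy as np

# 1-D discretization of  u'' - (omega^2 / c^2) u = -i g  on n grid points
# (sign convention follows the equation above; Dirichlet boundaries, no PML).
n, dx = 200, 1e-3
omega, c0 = 7e5, 1540.0
c = np.full(n, c0)
c[80:120] = 2.0 * c0                      # a simple high-contrast scatterer

g = np.zeros(n, dtype=complex)
g[10] = 1.0 / dx                          # discrete point source

# Assemble (D2 - diag(omega^2 / c^2)) u = -1j * g and solve directly.
main = -2.0 / dx**2 - omega**2 / c**2
off = np.ones(n - 1) / dx**2
A = np.diag(main.astype(complex)) + np.diag(off, 1) + np.diag(off, -1)
u = np.linalg.solve(A, -1j * g)

print(np.abs(u).max())
```

Note that the system is linear in the source $g$ but the map $c \mapsto u$ is nonlinear, which is exactly the nonlinearity in the model parameters discussed above.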
\subsection{Measurement Setup}
\begin{figure}
\caption{Examples of the inverse medium scattering problem. Top row: full-view measurement. Bottom row: limited-view measurement. We add $30$~dB noise to the measurements. The backprojections are computed via automatic differentiation.}
\label{fig:measurement_setup}
\end{figure}
To image the heterogeneity, we place the (collocated) sources and sensors on a circle. Each source takes a turn to emit a circular wave that scatters through the medium and gets measured at all the sensors. Thus our measurement data are square matrices of complex-valued entries.
The inverse problem is to recover the medium from the measurement data. To set up the notation, let us denote the unknown parameters as ${{\mathbf{x}}:=c|_{\Omega}}$ and the (backprojected) measurements as ${{\mathbf{y}}:=u|_{\partial \Omega}}$. Fig.~\ref{fig:measurement_setup} illustrates two examples of the measurement setup.
The \verb+j-wave+ package \cite{stanziola2022jwave} gives easy access to backprojection (BP) images thanks to automatic differentiation. The BP is obtained by applying the Jacobian of the discrete forward model to the measurement mismatch. This automatic differentiation is the main component of the discretize-then-optimize regime. The advantage is that the Jacobian exactly matches the numerical model being used, and the approach easily extends to other medium parameters such as density and attenuation.
The BP image is computed by taking the derivative of the measurement loss $$
{\mathbf{y}} \mapsto \dfrac{\partial }{\partial {\mathbf{x}}} \|{\mathbf{y}} - \hat{{\mathbf{y}}}({\mathbf{x}})\|^2_2, $$ where $\hat{{\mathbf{y}}}({\mathbf{x}})$ denotes the simulated measurements for the model ${\mathbf{x}}$. With a slight abuse of notation, we will use these BP images, denoted by ${\mathbf{y}}$, to estimate the posterior distribution of the medium wave speed.
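As a sketch of this backprojection computation, consider a toy \emph{linear} forward model in place of the Helmholtz solver; the gradient of the measurement loss then has the closed form $-2A^\top(\mathbf{y}-A\mathbf{x})$, which plays the role of the BP image. All dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(32, 128))            # toy forward model: 32 measurements, 128 unknowns
x_true = rng.normal(size=128)
y = A @ x_true                            # noiseless measurements

def measurement_loss_grad(x):
    """Gradient of x -> ||y - A x||^2, i.e. the backprojection image."""
    residual = y - A @ x
    return -2.0 * A.T @ residual

bp = measurement_loss_grad(np.zeros(128))  # BP evaluated at x = 0
print(bp.shape)
```

In the actual pipeline this gradient is obtained by automatic differentiation through the discrete PDE solver rather than from a closed form.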
\subsection{Training Data}
We generate 4000 medium samples, each containing a set of random ellipsoidal scatterers, along with their discrete boundary measurements. The computation domain is a $128\times 128$ grid with resolution $\Delta x = 10^{-3}~\mathrm{m}$. The background wave speed is $1540~\mathrm{m/s}$ and the maximum contrast is close to $4$ times the background. The angular frequency is ${\omega=7\cdot 10^{5}~\mathrm{s}^{-1}}$, which corresponds to $13$ grid points per wavelength. We use 32 complex-valued measurements corrupted with Gaussian noise.
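The medium-generation step can be sketched as follows; the grid size, background speed, and maximum contrast follow the text, while the number of ellipses, their axes, and their placement are illustrative assumptions (2-D ellipses stand in for the ellipsoidal scatterers).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128                                   # 128 x 128 grid, as in the text
yy, xx = np.mgrid[0:n, 0:n]

def random_medium(max_scatterers=4):
    """One training sample: random ellipses over a constant background."""
    c = np.full((n, n), 1540.0)           # background wave speed [m/s]
    for _ in range(rng.integers(1, max_scatterers + 1)):
        cx, cy = rng.uniform(20, n - 20, size=2)   # ellipse center
        ax, ay = rng.uniform(5, 20, size=2)        # semi-axes (grid units)
        speed = rng.uniform(1540.0, 4 * 1540.0)    # contrast up to ~4x
        mask = ((xx - cx) / ax) ** 2 + ((yy - cy) / ay) ** 2 <= 1.0
        c[mask] = speed
    return c

medium = random_medium()
print(medium.shape, medium.min(), medium.max())
```

Repeating this 4000 times and running the forward solver on each sample would produce the paired training set used below.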
\section{Amortized Inference for Inverse Problems}
Variational inference~\cite{jordan1999introduction, rezende2015variational,papamakarios2021normalizing} is a technique for approximating a posterior distribution $p_{X|Y}(\cdot \,|\, y)$ using a parametrized family of normalized densities $q_\theta$. To determine $\theta$, we minimize the \textit{forward} Kullback–Leibler (KL) divergence~\cite{papamakarios2021normalizing,bishop2006pattern}, \begin{equation}
\theta^\ast(y) = \argmin_{\theta \in \Theta} ~ \text{KL}(p_{X|Y}( \cdot \, | \, y) \| q_\theta ). \end{equation} This expression implies that for each measurement ${\mathbf{y}}$ we have to solve a separate optimization problem to obtain $\theta^\ast({\mathbf{y}})$. Since we are interested in the measurements of many samples ${\mathbf{x}}$, this would require many such minimizations. Thus, we choose to \textit{amortize} inference by defining a conditional family $q_\theta(x|y)$. Amortized inference minimizes the KL divergence averaged over all samples simultaneously~\cite{ whang2021composing, khorashadizadeh2022conditional}, \begin{equation} \label{eq:amortized inference} \begin{aligned}
\theta^*
&= \argmin_{\theta} ~ \mathbb{E}_{Y \sim p_Y} \text{KL}(p_{X|Y}( \, \cdot\, |Y) \| q_\theta(\, \cdot\, |Y)) \\
&= \argmax_{\theta} ~ \mathbb{E}_{X, Y \sim p_{X, Y}} \log q_{\theta}(X|Y). \end{aligned} \end{equation}
The expectation over the joint density $p_{X,Y}$ is estimated by the empirical expectation over the training data $\{({\mathbf{x}}_i,{\mathbf{y}}_i)\}_{i = 1}^N$. In this paper, the variational approximator $q_\theta$ is a neural network.
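The identity above (minimizing the averaged forward KL equals maximizing the expected conditional log-likelihood) can be checked on a tiny discrete example; the joint table and the one-parameter family $q_\theta$ below are illustrative choices.

```python
import numpy as np

# Joint p(x, y) over two binary variables (rows: x, columns: y).
p_xy = np.array([[0.3, 0.1],
                 [0.2, 0.4]])
p_y = p_xy.sum(axis=0)
p_x_given_y = p_xy / p_y

def q(theta):
    """q_theta(x|y): Bernoulli(theta) for y=0 and Bernoulli(1-theta) for y=1."""
    return np.array([[theta, 1 - theta],
                     [1 - theta, theta]])

def avg_kl(theta):
    """E_Y KL(p(.|Y) || q_theta(.|Y))."""
    qm = q(theta)
    return np.sum(p_y * np.sum(p_x_given_y * np.log(p_x_given_y / qm), axis=0))

def avg_loglik(theta):
    """E_{X,Y} log q_theta(X|Y)."""
    return np.sum(p_xy * np.log(q(theta)))

thetas = np.linspace(0.01, 0.99, 981)
best_kl = thetas[np.argmin([avg_kl(t) for t in thetas])]
best_ll = thetas[np.argmax([avg_loglik(t) for t in thetas])]
print(best_kl, best_ll)
```

Both criteria are optimized by the same $\theta$, since the averaged KL equals a $\theta$-independent constant minus the expected log-likelihood.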
\section{U-Flow} \label{sec:U-Flow} \begin{figure*}\label{fig:uflow_diagram}
\end{figure*}
We propose U-Flow, a probabilistic model combining U-Net~\cite{ronneberger2015u} and conditional normalizing flows~\cite{ardizzone2019guided} to approximate the posterior distribution.
\subsection{U-Net} \label{sec:Unet} Ronneberger \emph{et al.}~\cite{ronneberger2015u} originally developed the U-Net for medical image segmentation. It has since been adapted to many image-to-image tasks, often achieving (near-)state-of-the-art performance. The U-Net is an encoder-decoder network $$\text{UNet}_\phi \eqdef \text{dec} \circ \text{enc}.$$ The encoder and the decoder are both convolutional neural networks with pooling layers. The encoder takes the measurements ${\mathbf{y}}$ as input and produces features at different scales. The decoder then takes the computed features and reconstructs the target signal ${\mathbf{x}}$.
The encoder and decoder are jointly optimized using the mean-square error (MSE) loss \begin{equation}
\phi^* = \argmin_{\phi} ~ \dfrac{1}{N} \sum_{i = 1}^N \| {\mathbf{x}}_i - \text{UNet}_{\phi}({\mathbf{y}}_i) \|_2^2.
\label{eq: unet_loss} \end{equation}
While the U-Net produces high-quality reconstructions, its output is a single estimate. In this paper, we present a probabilistic version of the U-Net for amortized Bayesian inference.
\subsection{Conditional Normalizing Flows} \label{sec:c-flow} Normalizing flows~\cite{dinh2016density,kingma2018glow,kothari2021trumpets} are a class of likelihood-based generative models. They transform a simple, known distribution into an unknown data distribution through a sequence of invertible mappings. Invertibility enables efficient likelihood estimation and maximum-likelihood (ML) parameter fitting. Vlašić \emph{et al.}~\cite{vlasic2022implicitObstacleScattering} demonstrated the effectiveness of normalizing-flow-based generative models in regularizing the inverse obstacle scattering problem. Recent works proposed conditional versions of normalizing flows~\cite{ardizzone2019guided} that allow for the approximation of posterior distributions. However, regular conditional flows require the latent-space dimension to equal the data dimension, which leads to large networks and slow training. Moreover, as the range of a conditional flow covers the entire space, the posterior samples are not constrained to an image distribution and are often of low quality in ill-posed nonlinear inverse problems~\cite{khorashadizadeh2022conditional}.
\subsection{Our Approach} \label{sec:our_approach}
Since the input and output of flow models must have the same dimension, it is opportune to use flows to model low-dimensional latent spaces rather than images directly. We use a flow to approximate the posterior distribution of the coarsest scale of the U-Net. Concretely, as shown in Fig.~\ref{fig:uflow_diagram}, the encoder of the U-Net takes the BPs and produces features at six scales, $\text{enc}({\mathbf{y}}) = ({\mathbf{s}}^1,{\mathbf{s}}^2, \dots, {\mathbf{s}}^6)$ for ${{\mathbf{y}} \in \mathbb{R}^{D}}$. These multiscale features are fed to the decoder to reconstruct a \textit{single} estimate of the target signal, ${\hat{{\mathbf{x}}}({\mathbf{y}}) = \text{dec}({\mathbf{s}}^6, {\mathbf{s}}^5, \dots, {\mathbf{s}}^1)}$. To generate posterior samples, we let a flow model learn the conditional distribution $p_{{\mathbf{s}}^6|Y}$ of the features at the coarsest scale, where ${{\mathbf{s}}^6 \in \mathbb{R}^d}$ and ${d \ll D}$. We first train a U-Net with the loss in \eqref{eq: unet_loss} and compute the coarsest-scale features of the BP samples in the training set. Having obtained a paired training set $\{({\mathbf{s}}^6_i, {\mathbf{y}}_i)\}_{i = 1}^N$ of features and BPs, we then train a flow model. We use the conditional version of Glow~\cite{kingma2018glow}, where we deploy the conditional coupling blocks proposed in~\cite{ardizzone2019guided} to condition the generation on backprojections. The conditional flow model is trained using the amortized inference loss~\eqref{eq:amortized inference} as \begin{equation}
\theta^* = \argmin_{\theta} \dfrac{1}{N}\sum_{i=1}^N \left(-\log p_Z({\mathbf{z}}_i) + \log|\det J_{f_{\theta}}|\right),
\label{eq: U-Flow loss} \end{equation}
where ${{\mathbf{z}}_i = f_{\theta}^{-1}({\mathbf{s}}_i^6 , {\mathbf{y}}_i)}$, $J_{f_{\theta}}$ is the Jacobian matrix of $f_\theta$, and $p_Z$ is a multivariate Gaussian distribution. In other words, instead of directly approximating $p_{X|Y}$, we approximate the posterior distribution $p_{{\mathbf{s}}^6|Y}$ of the features at the lowest scale of the U-Net using conditional normalizing flows, as shown in Fig.~\ref{fig:uflow_diagram}. Once the conditional flow model is trained, we can generate posterior samples for each BP ${\mathbf{y}}^*$, \begin{equation}
{\mathbf{x}}_{\text{post}} ({\mathbf{y}}^*) = \text{dec}(f_\theta({\mathbf{z}}), {\mathbf{s}}^5, ..., {\mathbf{s}}^1)
\label{eq: U-Flow posterior} \end{equation} where ${{\mathbf{z}} \sim \mathcal{N}(0,I)}$ and ${({\mathbf{s}}^1, ..., {\mathbf{s}}^5, \cdot) = {\text{enc}({\mathbf{y}}^*)}}$. The key advantage of the proposed model is that the posterior samples have a low-dimensional structure, which acts as a strong regularizer for ill-posed inverse problems.
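The posterior-sampling scheme above can be sketched structurally as follows; all networks here are random linear placeholders rather than the trained U-Net and Glow model, so only the data flow (fixed fine-scale features, resampled coarsest scale) is meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 8, 64                               # coarsest-scale dim d << measurement dim D

enc_fixed = rng.normal(size=(5, 16, D))    # placeholder encoders for s^1..s^5
dec_W = rng.normal(size=(D, 5 * 16 + d))   # placeholder decoder

def flow(z, y):
    """Toy conditional 'flow': an affine map whose shift depends on y."""
    shift = 0.1 * y[:d]
    return shift + 1.5 * z

def sample_posterior(y, n_samples=25):
    """Keep s^1..s^5 fixed; resample s^6 through the conditional flow."""
    s_fixed = np.concatenate([Wi @ y for Wi in enc_fixed])
    samples = []
    for _ in range(n_samples):
        z = rng.normal(size=d)             # z ~ N(0, I)
        s6 = flow(z, y)
        samples.append(dec_W @ np.concatenate([s_fixed, s6]))
    return np.stack(samples)

y = rng.normal(size=D)
post = sample_posterior(y)
print(post.shape)
```

Because only the $d$-dimensional coarsest scale is resampled, all posterior samples share the decoder's low-dimensional structure, which is the regularization effect described above.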
\section{Results} \begin{figure*}
\caption{Performance of U-Flow over inverse medium scattering}
\label{fig:side-view}
\label{fig:full-view}
\label{fig:main results}
\end{figure*}
\begin{figure}
\caption{Performance comparison of U-Flow and C-Trumpet~\cite{khorashadizadeh2022conditional} for the limited-view problem (no sensors on the left-hand side); U-Flow outperforms C-Trumpet in both posterior sampling and uncertainty quantification by assigning more uncertainty in the left part of the object (red regions).}
\label{fig:performance comparison}
\end{figure}
We trained U-Flow for 600 epochs in total: 300 for the U-Net and 300 for the conditional flow model. We used the Adam optimizer~\cite{kingma2014adam} with the learning rate set to $10^{-4}$. The conditional flow model was composed of $24$ Glow blocks, each containing activation normalization, a $1 \times 1$ convolution, and a conditional coupling layer. We compared U-Flow with C-Trumpet~\cite{khorashadizadeh2022conditional}, a conditional injective flow well-suited for solving ill-posed inverse problems. In our experiments, U-Flow and C-Trumpet had $9$M and $14$M trainable parameters, respectively. The MMSE estimate was calculated by averaging $25$ posterior samples. UQ was performed by dividing the pixel-wise standard deviation over the $25$ posterior samples by the MMSE estimate, which yields the relative error.
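The MMSE and UQ computations just described amount to simple pixel-wise statistics over posterior samples; the sketch below uses random placeholder samples in place of the model's output.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder for 25 posterior samples of a 128 x 128 wave-speed map.
posterior_samples = 1540.0 + 100.0 * rng.random((25, 128, 128))

mmse = posterior_samples.mean(axis=0)             # MMSE estimate: sample mean
uq = posterior_samples.std(axis=0) / mmse         # relative pixel-wise uncertainty
print(mmse.shape, uq.shape)
```

High values of `uq` mark regions where the posterior samples disagree, such as the unobserved side of the medium in the limited-view setting.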
Fig.~\ref{fig:side-view} illustrates the performance of U-Flow on inverse medium scattering with a limited-view sensing configuration, where the receivers and incident waves are located on the \textit{right-hand} side of the observed medium. This experiment shows that U-Flow can generate various posterior samples and capture a physically meaningful UQ. Notice that U-Flow assigned more uncertainty to the \textit{left-hand} side of the medium (red region). The results for the full-view configuration are shown in Fig.~\ref{fig:full-view}. As expected, the full-view configuration yields better posterior samples than the limited-view configuration.
Fig.~\ref{fig:performance comparison} compares the performance of U-Flow and C-Trumpet. The experiment shows that U-Flow significantly outperforms C-Trumpet both in posterior sampling and UQ. Table~\ref{tab:quantitative results} gives a quantitative comparison of U-Flow with baselines, including the basic U-Net~\cite{ronneberger2015u}. U-Flow exhibits comparable results to the U-Net while giving access to posterior samples and UQ.
\begin{table} \renewcommand{\arraystretch}{1.3} \caption{SNR of the MMSE estimate (computed over 25 posterior samples for flow-based models) of different models on inverse medium scattering in two setups} \label{tab:quantitative results} \centering \resizebox{0.45\textwidth}{!}{ \begin{tabular}{@{}lcccc@{}} \hline & BP & C-Trumpets~\cite{khorashadizadeh2022conditional} & U-Net~\cite{ronneberger2015u} & U-Flow \\ \hline {\textit{Full-view}} & -17.8 & 5.7 & \textbf{11.3} & \textbf{11.3} \\ {\textit{Side-view}} & -17.7 & 5.2 & \textbf{9.7} & 9.4 \\ \hline \end{tabular}} \end{table}
\section{Conclusion}
We demonstrated that the dichotomy between high-quality inversions without UQ from standard point-estimate networks and low-quality inversions with UQ from various conditional generative models is a false one. By combining a low-dimensional flow with a U-Net, we get the best of both worlds. The reconstructions are very fast---orders of magnitude faster than with standard iterative methods. It will be interesting to see how the proposed model performs on other inverse problems.
\section*{Acknowledgment} A.~K., H.~N., and I.~D. acknowledge support from the European Research Council under Starting Grant 852821--SWING.
\end{document} |
\begin{document}
\selectlanguage{english} \title{Subcritical sharpness for multiscale Boolean percolation} \begin{abstract}
We consider a multiscale Boolean percolation on $\mathbb{R}^d$ with radius distribution $\mu$ on $[1,+\infty)$, $d\ge 2$. The model is defined by superposing the original Boolean percolation model with radius distribution $\mu$ with a countable number of scaled independent copies. The $n$-th copy is a Boolean percolation with radius distribution $\mu|_{[1,\kappa]}$ rescaled by $\kappa^{n}$. We prove that under some regularity assumption on $\mu$, the subcritical phase of the multiscale model is sharp for $\kappa$ large enough. Moreover, we prove that the existence of an unbounded connected component depends only on the fractal part (and not on the balls with radius larger than $1$). \end{abstract}
\section{Introduction} \paragraph{Overview} Boolean percolation was introduced by Gilbert in \cite{gilbert} as a continuous version of Bernoulli percolation, introduced by Broadbent and Hammersley \cite{BroadbentHammersley}. We consider a Poisson point process of intensity $\lambda>0$ on $\mathbb{R}^d$ and on each point, we center a ball of potentially random radius. In Boolean percolation we are interested in the connectivity properties of the occupied set: it is defined as the subset of $\mathbb R^d$ consisting of all the points covered by at least one ball. This model undergoes a phase transition in $\lambda$ for the existence of an unbounded connected component of balls. For $\lambda<\lambda_c$, all the connected components are bounded, and for $\lambda>\lambda_c$, there exists at least one unbounded connected component.
\paragraph{Boolean model}
Let $d\geq 2$. Denote by $\|\cdot\|$ the $\ell_2$-norm on $\mathbb{R}^ d$. For $r>0$ and $x\in\mathbb{R}^d$, set \[\mathrm B^x_r:=\{y\in\mathbb{R}^d:\,\|y-x\|\leq r\}\quad \text{and}\quad \partial \mathrm B^x_r:=\{y\in\mathbb{R}^d:\,\|y-x\|= r\}\] for the closed ball of radius $r$ centered at $x$ and its boundary. For short, we will write $\mathrm B_r$ for $\mathrm B_ r^0$. For a subset $\eta$ of $ \mathbb{R}^d\times\mathbb{R}_+$, we define \[\mathcal{O}(\eta):=\bigcup_{(z,r)\in\eta}\mathrm B_r^z.\] Let $\mu$ be a distribution on $\mathbb R_+$ representing the distribution of the radii. Let $\eta$ be a Poisson point process of intensity $\lambda dz\otimes \mu$ where $dz$ is the Lebesgue measure on $\mathbb{R}^d$. Write $\mathbb{P}_{\lambda,\mu}$ for the law of $\eta$ and $\mathbb{E}_{\lambda,\mu}$ for the expectation under the law $\mathbb{P}_{\lambda,\mu}$.
We say that two points $x$ and $y$ in $\mathbb{R}^d$ are connected by $\eta$ if there exists a continuous path in $\mathcal{O} (\eta)$ that joins $x$ to $y$. We say that two sets $A$ and $B$ are connected if there exist $x\in A$ and $y\in B$ such that $x$ and $y$ are connected by $\eta$. We denote this event by $\{A\longleftrightarrow B\}$.
Define for every $\lambda\ge 0$ and $\mu$, the probability of percolation $$\theta_{\mu} (\lambda):=\lim_{r\rightarrow \infty}\mathbb{P}_{\lambda,\mu}\left( 0\longleftrightarrow \partial \mathrm B_r\right).$$ We define the critical parameter associated to the existence of an infinite connected component: $$\lambda_c(\mu):=\sup\left\{\lambda\geq 0: \theta_{\mu}(\lambda)=0\right\}.$$
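As an illustration, the connection probability $\mathbb{P}_{\lambda,\mu}\left(0\longleftrightarrow \partial \mathrm B_r\right)$ can be estimated by Monte Carlo simulation of the Boolean model; the sketch below takes $d=2$ and deterministic unit radii ($\mu=\delta_1$), and all parameter values (intensity, radius $r$, window margin, number of trials) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def find(parent, i):
    # path-halving find for a simple union-find over the balls
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def connects(lam, r, margin=2.0):
    """One sample of the event {0 <-> dB_r} in the window [-L, L]^2."""
    L = r + margin
    n = rng.poisson(lam * (2 * L) ** 2)      # Poisson number of centers
    if n == 0:
        return False
    z = rng.uniform(-L, L, size=(n, 2))
    rho = np.ones(n)                         # mu = delta_1: unit radii
    parent = list(range(n))
    dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    for i, j in np.argwhere(np.triu(dist <= rho[:, None] + rho[None, :], k=1)):
        parent[find(parent, i)] = find(parent, j)   # merge overlapping balls
    norms = np.linalg.norm(z, axis=1)
    roots0 = {find(parent, i) for i in np.flatnonzero(norms <= rho)}       # balls covering 0
    rootsb = {find(parent, i) for i in np.flatnonzero(norms + rho >= r)}   # balls reaching dB_r
    return bool(roots0 & rootsb)

theta_hat = np.mean([connects(lam=1.0, r=4.0) for _ in range(30)])
print(theta_hat)
```

Repeating this for increasing $r$ gives a crude numerical probe of the phase transition in $\lambda$: the estimate decays to $0$ for small $\lambda$ and stabilizes at a positive value for large $\lambda$.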
We will work with measures $\mu$ such that \begin{align}\label{cond:mu} \int_{0}^\infty t^dd\mu(t)<\infty. \end{align} Hall proved in \cite{hall} that this condition is necessary to prevent the whole space from being covered. Under the minimal assumption \eqref{cond:mu}, Gou\'er\'e proved in \cite{gouere} that $0<\lambda_c(\mu)<\infty$. We also define the following critical parameter: \[\widehat \lambda_c(\mu):=\inf\left\{\lambda\geq 0:\,\inf_{r>0}\mathbb{P}_{\lambda,\mu}(\mathrm B_r\longleftrightarrow\partial \mathrm B_{2r})>0\right\}.\] Knowing that $\lambda<\widehat \lambda_c(\mu)$ makes it possible to carry out renormalization arguments and to deduce many properties (see \cite{DCRT,GouereTheret}). Hence, the equality $\widehat \lambda_c(\mu) =\lambda_c(\mu)$ implies that we have good control of the subcritical regime. When this equality holds, we say that we have subcritical sharpness. This equality has been proved under moment conditions on $\mu$ (see \cite{ATT18,DCRT,ziesche}) and for almost all power-law distributions (see \cite{dembintassion}).
\paragraph{Multiscale Boolean percolation}
The model of multiscale Boolean percolation consists of an infinite superposition of independent copies of Boolean percolation at different scales. Let $\mu$ be a probability distribution on $[1,+\infty)$ that satisfies \eqref{cond:mu}. Let $\kappa>1$ and $\lambda>0$. For a set $E\subset \mathbb{R}^d\times\mathbb{R}_+$, write $E/\kappa$ for the set $\{x/\kappa,\,x\in E\}$. We define $$\eta_\kappa(\lambda):=\eta ^{(0)}(\lambda)\cup\bigcup_{i=1}^\infty \frac{1}{\kappa^i }(\eta^{(i)}(\lambda)\cap (\mathbb{R}^d\times [1,\kappa]))$$ where the $(\eta^{(i)}(\lambda))_{i\geq 0}$ are i.i.d.\ Poisson point processes of intensity $\lambda\, dz\otimes \mu$. Note that the occupied set $\mathcal{O}(\eta_\kappa(\lambda))$ is almost surely dense in $\mathbb{R}^d$: infinitely many small balls fall in any neighborhood of a given point. Yet this does not imply that there exists an unbounded connected component, as it does not prevent the existence of a blocking surface of null Lebesgue measure.
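The construction above can be sketched directly. The snippet below (illustrative, not from the paper) samples the first few scales of $\eta_\kappa(\lambda)$ in a window, for $d=2$ and a hypothetical choice of $\mu$ (uniform on $[1,2]$); it keeps, at each scale $i\ge 1$, only the radii in $[1,\kappa]$ before rescaling, as in the definition.

```python
import math
import random

def sample_poisson(rng, mean):
    # Knuth's method, splitting large means so exp(-mean) stays representable
    n = 0
    while mean > 30.0:
        n += sample_poisson(rng, 30.0)
        mean -= 30.0
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return n + k
        k += 1

def sample_eta_kappa(lam, kappa, box, levels, mu_sampler, rng):
    """Sample the scales 0..levels of eta_kappa(lambda) whose rescaled
    centres fall in [-box, box]^2 (d = 2).  `mu_sampler(rng)` draws one
    radius from mu, assumed supported on [1, +infinity).  Returns a list
    of (x, y, radius) triples."""
    balls = []
    for i in range(levels + 1):
        scale = kappa ** i
        side = 2 * box * scale            # pre-rescaling window for copy i
        n = sample_poisson(rng, lam * side * side)
        for _ in range(n):
            radius = mu_sampler(rng)
            if i >= 1 and not (1.0 <= radius <= kappa):
                continue                  # truncation at scales i >= 1
            x = rng.uniform(-box * scale, box * scale)
            y = rng.uniform(-box * scale, box * scale)
            balls.append((x / scale, y / scale, radius / scale))
    return balls
```

With $\kappa=2$, the scale-$i$ radii land in $[2^{-i},2^{1-i}]$, so the radius ranges of the different scales are disjoint, which is exactly the feature distinguishing this definition from the earlier one discussed in the Background paragraph.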
We are interested in the percolation properties of $\mathcal{O}(\eta_\kappa(\lambda))$. Let $\mu_\kappa$ be the distribution such that $\eta_\kappa(\lambda)$ is a Poisson point process of intensity $\lambda\, dz\otimes \mu_\kappa$. The distribution $\mu_\kappa$ has infinite mass but is $\sigma$-finite. We will give its explicit expression later.
We will here work under the following assumption \begin{equation}\label{hyp}
\exists \kappa_0>1\quad \forall \kappa\ge \kappa_0 \qquad \sup_{a\ge \kappa}\sup_{r\ge 1}\frac{a^d\mu([a r,a\kappa])}{\mu([r,\kappa])}\le 1\, \end{equation} with the convention $0/0=0$. This assumption is in particular satisfied for distributions with compact support or distributions of the form $f(r)r^{-(d+1+\delta)}\mathds{1}_{r\ge 1}dr$ where $f$ is a non-increasing function such that $0<\inf f <\sup f <\infty$ and $\delta>0$. The following theorem is the main result of the paper. It states that there is subcritical sharpness for the fractal distribution $\mu_\kappa$ and that the existence of an unbounded connected component does not depend on the large balls.
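As a sanity check on the examples just mentioned: for the pure power law $\mu(dr)=r^{-(d+1+\delta)}\mathds{1}_{r\ge 1}\,dr$ (that is, $f\equiv 1$), the masses are explicit, $\mu([x,y])=\frac{x^{-(d+\delta)}-y^{-(d+\delta)}}{d+\delta}$ for $1\le x\le y$, and the ratio in \eqref{hyp} collapses to $a^{-\delta}\le 1$. The short script below (illustrative, not from the paper) evaluates the ratio on a grid:

```python
def mass(x, y, d, delta):
    """mu([x, y]) for mu(dr) = r**-(d+1+delta) dr on [1, +infinity)."""
    x = max(x, 1.0)
    if y <= x:
        return 0.0
    e = d + delta
    return (x ** -e - y ** -e) / e

def hyp_ratio(a, r, kappa, d, delta):
    """The quantity a^d mu([a r, a kappa]) / mu([r, kappa]) from the
    assumption, with the convention 0/0 = 0."""
    den = mass(r, kappa, d, delta)
    if den == 0.0:
        return 0.0
    return a ** d * mass(a * r, a * kappa, d, delta) / den

d, delta, kappa = 2, 0.5, 4.0
worst = max(hyp_ratio(a, r, kappa, d, delta)
            for a in (kappa, 10.0, 100.0)
            for r in (1.0, 1.5, 2.0, 3.9, 5.0))
```

Since the ratio equals $a^{-\delta}$ for $1\le r\le\kappa$ (and $0$ for $r>\kappa$ by the convention), the supremum is $\kappa^{-\delta}<1$, as required by \eqref{hyp}.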
\begin{thm}\label{thm:main} Let $\mu$ be a distribution that satisfies assumption \eqref{hyp}, and let $\kappa_0$ be as in \eqref{hyp}. For any $\kappa\ge\kappa_0$, we have \[\lambda_c(\mu_\kappa) =\widehat\lambda_c(\mu_\kappa)=\lambda_c(\mu_\kappa|_{[0,1]}).\] \end{thm}
\paragraph{Idea of the proof}
The proof relies on the following key observation. Thanks to condition \eqref{hyp}, for $\kappa\ge \kappa_0$, we can prove that the Poisson model with intensity $\lambda dz\otimes \mu_\kappa|_{[0,1]}$ stochastically dominates the Poisson model with intensity $\lambda dz\otimes \mu_\kappa|_{[0,\kappa^j]}$ rescaled by $\kappa^j$. Since the support of the distribution $ \mu_\kappa|_{[0,1]}$ is bounded, it is possible to prove subcritical sharpness for this distribution using the standard $\varphi_p(S)$ argument introduced by Duminil-Copin--Tassion in \cite{D-CT} in the context of standard percolation and generalized in the context of Boolean percolation by Ziesche \cite{ziesche}. Using this argument, we can prove that when $\lambda<\lambda_c( \mu_\kappa|_{[0,1]})$, there is exponential decay of the probability of connection. Together with the stochastic domination, we can prove that when $\lambda<\lambda_c( \mu_\kappa|_{[0,1]})$ we have \[\inf_{r>0}\mathbb{P}_{\lambda,\mu_\kappa}(\mathrm B_r\longleftrightarrow\partial \mathrm B_{2r})=0\]
and hence $\lambda\le\widehat \lambda_c(\mu_\kappa)$. This yields $\lambda_c( \mu_\kappa|_{[0,1]})\le \widehat \lambda_c(\mu_\kappa)$. The coincidence of the three critical points then follows from this inequality together with the general facts $\widehat \lambda_c(\mu_\kappa)\le\lambda_c(\mu_\kappa)$ and $\lambda_c(\mu_\kappa)\le\lambda_c( \mu_\kappa|_{[0,1]})$.
\paragraph{Background} In previous works on multiscale Boolean percolation, a slightly different definition was used. Define for $\kappa\ge 1$ $$\widetilde \eta _\kappa(\lambda):=\eta ^{(0)}(\lambda)\cup\bigcup_{i=1}^\infty \frac{\eta^{(i)}(\lambda)}{\kappa^i }$$ where the $(\eta^{(i)}(\lambda))_{i\geq 0}$ are i.i.d.\ Poisson point processes of intensity $\lambda\, dz\otimes \mu$. Let $\widetilde\mu_\kappa$ be the distribution such that $\widetilde\eta_\kappa(\lambda)$ is a Poisson point process of intensity $\lambda\, dz\otimes\widetilde \mu_\kappa$. With this definition, the ranges of the radii of the different scaled copies are no longer disjoint, and condition \eqref{cond:mu} is not enough to ensure that the multiscale Boolean model exhibits a non-trivial phase transition. Gou\'er\'e proved in \cite{Gouere2009} that $\lambda_c(\widetilde\mu_\kappa)>0$ if and only if \begin{equation}\label{cond:mu2}
\int_{t\ge 1}t^d\log(t)\,d\mu(t)<\infty. \end{equation} If this condition is not satisfied, the balls with radius greater than $1$ have infinite mass and $\lambda_c(\widetilde\mu_\kappa)=0$. \begin{rk}Note that in our definition of multiscale percolation, the ranges of the radii of the different scaled copies are disjoint. This enables us to remove assumption \eqref{cond:mu2}. \end{rk}
The Boolean multiscale model was first studied for the distribution $\mu=\delta_1$ by Menshikov--Popov--Vachkovskaia in \cite{Menshikov2001}. They proved that for $\lambda<\lambda_c(\delta_1)$ and $\kappa$ large enough the multiscale model does not percolate.
They later extended their result in \cite{Menshikov2003} to more general distributions $\mu$ that satisfy the following self-similarity condition \begin{equation*}
\lim_{a\rightarrow \infty}\sup_{r\ge 1}\frac{a^d\mu([a r,+\infty))}{\mu([r,+\infty))}=0\, \end{equation*} and for $\lambda>0$ such that \begin{equation}\label{cond:lambda}
\lim_{r\rightarrow\infty }r^d\mathbb{P}_{\lambda,\mu}(\mathrm B_r\longleftrightarrow\partial \mathrm B_{2r})=0. \end{equation} Note that the condition \eqref{cond:lambda} is quite restrictive since for distributions $\mu$ with an infinite $2d$-moment, there exists no such positive $\lambda$.
The condition \eqref{cond:lambda} was relaxed later by Gouéré in \cite{Gouere2014}, who proved that under the assumption \eqref{cond:mu2}, for $\lambda<\widehat\lambda_c(\mu)$ and $\kappa$ large enough, the multiscale model does not percolate.
\section {Proofs} \subsection{Proof of Theorem \ref{thm:main}} In this section, we prove the main theorem. We will need the two following propositions. The first one is an adaptation of \cite{ziesche}; the only difference is that the intensity measure is not finite but only locally finite.
\begin{prop}\label{prop:ziesche}Let $\kappa>1$ and $\lambda<\lambda_c(\mu_\kappa|_{(0,1]}) $. There exists $c_\kappa>0$ depending on $\kappa$ and $\lambda$ such that for all $l>1$ \begin{equation}\label{eq:ineqziesche}
\mathbb{P}_{\lambda,\mu_\kappa|_{(0,1]}}(\mathrm B_{1}\longleftrightarrow \partial\mathrm B_{l})\leq \exp(-c_\kappa l). \end{equation} \end{prop} The following proposition is the key observation to prove subcritical sharpness. \begin{prop}\label{prop}Let $\mu$ be a distribution that satisfies hypothesis \eqref{hyp}, and let $\kappa\ge \kappa_0$. For any $j\geq 1$, $l>1$ and $\lambda\geq 0$, we have \begin{align*}
\mathbb{P}_{\lambda,\mu_\kappa|_{[0,\kappa^j]}}(\mathrm B_{\kappa^j}\longleftrightarrow \partial\mathrm B_{l\kappa^j})\leq\mathbb{P}_{\lambda,\mu_\kappa|_{(0,1]}}(\mathrm B_1\longleftrightarrow \partial\mathrm B_l). \end{align*} \end{prop}
Before proving these two propositions, let us prove the main theorem. \begin{proof}[Proof of Theorem \ref{thm:main}]
Let $\lambda<\lambda_c(\mu_\kappa|_{(0,1]})$. Let $j,l\geq 1$. We have \begin{align}\label{eq:1}
\mathbb{P}_{\lambda,\mu_\kappa}(\mathrm B_{l\kappa ^j}\longleftrightarrow \partial\mathrm B_{2l\kappa ^j})&\leq \mathbb{P}_{\lambda,\mu_\kappa|_{(0,\kappa^j ]}}(\mathrm B_{l\kappa ^j}\longleftrightarrow \partial\mathrm B_{2l\kappa ^j})\nonumber\\&\quad+\mathbb{P}_{\lambda,\mu_\kappa}\left(\exists (x,r)\in\eta_\kappa (\lambda):\,r\geq \kappa^j, \,\mathrm B_r^x\cap \mathrm B_{2l\kappa ^j}\neq\emptyset\right). \end{align} Let us start by estimating the second term of the inequality. Since balls of radius at least $\kappa^j\ge 1$ only come from the copy $\eta^{(0)}(\lambda)$, whose radius distribution is $\mu$, we have \begin{align*} \mathbb{P}_{\lambda,\mu_\kappa}\left(\exists (x,r)\in\eta_\kappa (\lambda):\,r\geq \kappa^j, \,\mathrm B_r^x\cap \mathrm B_{2l\kappa ^j}\neq\emptyset\right)=1-\exp(-\lambda\,(dz\otimes\mu)(E)) \end{align*}
where $E:=\{(x,r): \|x\|_2\leq 2l\kappa ^j+r,\,r\geq \kappa ^j\}$. Using $r\ge\kappa^j$ and $l\ge 1$, we have \begin{align*} (dz\otimes\mu)(E)&=\int_{r\geq \kappa ^ j}\alpha_d (2l\kappa^j +r)^d\,d\mu(r)\le \alpha_d (4l)^d\int_{r\ge \kappa^j}r^d\,d\mu(r) \end{align*} where $\alpha_d$ is the volume of the unit ball in $\mathbb{R}^d$. This yields \begin{equation}\label{eq:2} \mathbb{P}_{\lambda,\mu_\kappa}\left(\exists (x,r)\in\eta_\kappa (\lambda):\,r\geq \kappa^j, \,\mathrm B_r^x\cap \mathrm B_{2l\kappa ^j}\neq\emptyset\right)\leq \lambda \alpha_d (4l)^d\int_{r\ge \kappa^j}r^d\,d\mu(r). \end{equation} Let us now control the first term. There exists a constant $c_d$ depending only on $d$ such that we can cover $\partial \mathrm B_{l\kappa^j}$ by at most $c_d l^{d-1}$ balls of radius $\kappa^j$ centered at $\partial \mathrm B_{l\kappa^j}$. By a union bound, we get \begin{equation}\label{eq:3} \begin{split}
\mathbb{P}_{\lambda,\mu_\kappa|_{(0,\kappa^j ]}}(\mathrm B_{l\kappa ^j}\longleftrightarrow \partial\mathrm B_{2l\kappa ^j})&\leq
c_dl^{d-1}\mathbb{P}_{\lambda,\mu_\kappa|_{(0,\kappa^j]}}(\mathrm B_{\kappa^j}\longleftrightarrow \partial\mathrm B_{l\kappa^j})\\ &\leq c_d l^{d-1}\exp(-c_\kappa l) \end{split} \end{equation} where in the last inequality we used Propositions \ref{prop} and \ref{prop:ziesche}. Combining inequalities \eqref{eq:1}, \eqref{eq:2} and \eqref{eq:3}, we obtain \begin{align*} \mathbb{P}_{\lambda,\mu_\kappa}(\mathrm B_{l\kappa ^j}\longleftrightarrow \partial\mathrm B_{2l\kappa ^j})\leq c_d l^{d-1}\exp(-c_\kappa l)+\lambda \alpha_d (4l)^d\int_{r\ge \kappa^j}r^d\,d\mu(r). \end{align*} Let $\varepsilon>0$. We first choose $l$ large enough depending on $c_\kappa$ and $\varepsilon$, and then $j$ large enough depending on $\kappa$, $\varepsilon$ and $l$, so that \begin{align*} \mathbb{P}_{\lambda,\mu_\kappa}(\mathrm B_{l\kappa ^j}\longleftrightarrow \partial\mathrm B_{2l\kappa ^j})\leq \varepsilon \end{align*} where we recall that, since $\mu$ has a finite $d$-moment, \[\lim_{j\rightarrow\infty}\int_{r\ge \kappa^j}r^d\,d\mu(r)=0.\] It follows that \begin{align*} \inf_{r>0}\mathbb{P}_{\lambda,\mu_\kappa}(\mathrm B_r\longleftrightarrow\partial \mathrm B_{2r})=0 \end{align*}
so that $\lambda\leq \widehat \lambda_c(\mu_\kappa)$. Hence, \[ \widehat \lambda_c(\mu_\kappa)\geq \lambda_c(\mu_\kappa|_{(0,1]})\geq \lambda_c(\mu_\kappa).\] The result follows from the fact that $\widehat \lambda_c(\mu_\kappa)\le \lambda_c(\mu_\kappa)$. \end{proof} \subsection{Proof of Propositions \ref{prop:ziesche} and \ref{prop}} Let $m>0$. Let $h_m$ be the contraction by $m$, that is, $h_m(x):=x/m$ for $x\in\mathbb{R}$. Set $$\mathcal{T}_m\mu:=m^dh_m*\mu$$ where $h_m*\mu$ is the pushforward of $\mu$ by $h_m$. We will need the following lemma, which characterizes the distribution of a Poisson point process contracted in space. \begin{lem}\label{lem:rescale}Let $m>0$ and $\lambda>0$. Let $\nu$ be a distribution on $\mathbb{R}_+$. Let $\eta$ be a Poisson point process of intensity $\lambda\, dz\otimes \nu$. Then $\eta/m$ is a Poisson point process of intensity $\lambda\, dz\otimes \mathcal{T}_m\nu$. \end{lem} From this lemma, we can deduce the following straightforward corollary.
\begin{cor}\label{cor}Let $\kappa\ge 1$. We have $$\mu_\kappa=\mu+\sum_{j=1}^\infty \mathcal{T}_{\kappa^j}\bigl(\mu|_{[1,\kappa]}\bigr).$$
\end{cor} \begin{proof}[Proof of Lemma \ref{lem:rescale}]It is clear that $\eta/m$ is still a Poisson point process; we only need to prove that its intensity is $\lambda\, dz\otimes \mathcal{T}_m\nu$. Let $E\subset \mathbb{R}^d\times \mathbb{R}_+$ be measurable. We claim that \begin{equation} (dz\otimes \nu)(mE)=(dz\otimes \mathcal{T}_m\nu)(E). \end{equation} Indeed, substituting $z=mz'$ and $r=mr'$, we have \begin{align*} (dz\otimes \nu)(mE)=\int\mathbf{1}_{\{(z,r)\in mE\}}\,dz\, d\nu(r)&=\int \mathbf{1}_{\{(z',r')\in E\}}\, m^{d}\,dz'\,d(h_m*\nu)(r')\\&=(dz\otimes \mathcal{T}_m\nu)(E). \end{align*} \end{proof}
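For completeness, the computation behind Corollary \ref{cor} is the superposition of intensities: by Lemma \ref{lem:rescale} applied with $m=\kappa^j$ to the truncated copy $\eta^{(j)}(\lambda)\cap(\mathbb{R}^d\times[1,\kappa])$, the $j$-th rescaled layer is a Poisson point process of intensity $\lambda\, dz\otimes\mathcal{T}_{\kappa^j}\bigl(\mu|_{[1,\kappa]}\bigr)$, and since the layers are independent, their intensities add up:

```latex
\lambda\, dz\otimes\mu_\kappa
  \;=\;
\lambda\, dz\otimes\mu
  \;+\;
\sum_{j=1}^{\infty}\lambda\, dz\otimes\mathcal{T}_{\kappa^{j}}\bigl(\mu|_{[1,\kappa]}\bigr)
  \;=\;
\lambda\, dz\otimes
  \Bigl(\mu+\sum_{j=1}^{\infty}\mathcal{T}_{\kappa^{j}}\bigl(\mu|_{[1,\kappa]}\bigr)\Bigr).
```

In particular, restricting to $[1,\kappa]$ before rescaling puts the radii of the $j$-th layer in $[\kappa^{-j},\kappa^{1-j}]$, so the layers carry disjoint ranges of radii.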
Thanks to Corollary \ref{cor}, we can now prove Proposition \ref{prop}.
\begin{proof}[Proof of Proposition \ref{prop}]Thanks to Lemma \ref{lem:rescale}, we have for $l>1$ and $j\ge 0$ \begin{equation*}
\mathbb{P}_{\lambda,\mu_\kappa|_{(0,\kappa^j]}}(\mathrm B_{\kappa^j}\longleftrightarrow \partial\mathrm B_{l\kappa^j})=\mathbb{P}_{\lambda,\mathcal{T}_{\kappa^j}\mu_\kappa|_{(0,\kappa^j]}}(\mathrm B_{1}\longleftrightarrow \partial\mathrm B_{l}). \end{equation*} To complete the proof, let us prove the following inequality \begin{align*}
\mathbb{P}_{\lambda,\mathcal{T}_{\kappa^j}\mu_\kappa|_{(0,\kappa^j]}}(\mathrm B_{1}\longleftrightarrow \partial\mathrm B_{l})\leq \mathbb{P}_{\lambda,\mu_\kappa|_{(0,1]}}(\mathrm B_{1}\longleftrightarrow \partial\mathrm B_{l}) . \end{align*} Using Corollary \ref{cor}, we have \begin{align*}
\mathcal{T}_{\kappa^j}\mu_\kappa|_{(0,\kappa^j]}&=\mathcal{T}_{\kappa^j}\mu|_{[1,\kappa^j]}+\sum_{k=1}^\infty\mathcal{T}_{\kappa^j}\mathcal{T}_{\kappa^k}\mu|_{[1,\kappa]}\\
&= \sum_{k=1}^{j}\mathcal{T}_{\kappa^{k}}\mathcal{T}_{\kappa^{j-k}}\mu|_{[\kappa^{j-k},\kappa ^{j-k+1}]}+\sum_{k=j+1}^\infty\mathcal{T}_{\kappa^k}\mu|_{[1,\kappa]}. \end{align*}
Let us prove that for any $k\ge 1$ we have $\mathcal{T}_{\kappa^{k}}\mu|_{[\kappa^{k},\kappa ^{k+1}]}\preceq \mu|_{[1,\kappa]}$ (for $k=0$ this holds with equality), where we write $\mu\succeq \nu$, or equivalently $\nu\preceq\mu$, when $\mu$ stochastically dominates $\nu$, that is, for every $r>0$ we have $\mu([r,+\infty))\ge \nu([r,+\infty))$. Let $\kappa_0$ be as in hypothesis \eqref{hyp} and let $\kappa\ge \kappa_0$. By hypothesis \eqref{hyp}, we have for $r\in[1,\kappa]$
\[\mathcal{T}_{\kappa ^{k}}\mu|_{[\kappa^{k},\kappa^{k+1}]}([r,\kappa])=\kappa ^{dk}\mu([\kappa^{k}r,\kappa ^{k+1}])\le \mu([r,\kappa]).\] This yields \begin{equation*}
\mathcal{T}_{\kappa^j}\mu_\kappa|_{(0,\kappa^j]}
\preceq \sum_{k=1}^{j}\mathcal{T}_{\kappa^{k}}\mu|_{[1,\kappa]}+\sum_{k=j+1}^\infty\mathcal{T}_{\kappa^k}\mu|_{[1,\kappa]}=\mu_\kappa|_{(0,1]}. \end{equation*} Hence, we have \begin{align*}
\mathbb{P}_{\lambda,\mathcal{T}_{\kappa^j}\mu_\kappa|_{(0,\kappa^j]}}(\mathrm B_{1}\longleftrightarrow \partial\mathrm B_{l})\leq \mathbb{P}_{\lambda,\mu_\kappa|_{(0,1]}}(\mathrm B_{1}\longleftrightarrow \partial\mathrm B_{l}) . \end{align*} This completes the proof. \end{proof}
Finally, let us explain how the proof of Ziesche \cite{ziesche} can be extended to the general case of a $\sigma$-finite intensity measure (Proposition \ref{prop:ziesche}). \begin{proof}[Sketch of the proof of Proposition \ref{prop:ziesche}] First note that $\lambda\, dz\otimes \mu_\kappa$ is a $\sigma$-finite measure on $\mathbb{R}^ d\times \mathbb{R}_+\setminus\{0\}$, hence $s$-finite, that is, it can be written as a countable sum of finite measures. The Mecke equation (see Theorem 4.1 in \cite{last_penrose_2017}) and the Margulis-Russo formula (see \cite{Last14}) both hold for intensity measures that are $s$-finite. Denote by $\mathcal B(\mathbb{R}^ d)$ the Borel subsets of $\mathbb{R}^ d$. For each $S\in\mathcal B(\mathbb{R}^ d)$ such that $\mathrm B_1\subset S$, we define \begin{equation}
\varphi_{\lambda}(S):=\lambda\int_{r\in(0,1]}\int_{z\in\mathbb{R}^d}\mathbf{1}_{\mathrm B_r^z\cap\partial S\neq\emptyset}\,\mathbb{P}_{\lambda,\mu_\kappa|_{(0,1]}}\left(\mathrm B_1\stackrel{\mathcal{O}(\{(w,s)\in\eta: \mathrm B_s^w\subset S\})}{\longleftrightarrow }\mathrm B_r^z\right) dz\,d\mu_\kappa(r). \end{equation}
This corresponds to the expected number of open balls intersecting the boundary of $S$ that are connected to $\mathrm B_1$ inside $S$. The arguments of Ziesche hold in this context; in particular, when $\lambda<\lambda_c(\mu_\kappa|_{[0,1]})$, there exists $S\in\mathcal B(\mathbb{R}^ d)$ such that $\mathrm B_1\subset S$ and $\varphi_{\lambda}(S)<1$. We deduce the existence of $c_\kappa>0$ depending on $\kappa $ and $\lambda$ such that inequality \eqref{eq:ineqziesche} holds.
\end{proof}
\paragraph{Acknowledgements} The author would like to thank Vincent Tassion for fruitful discussions that initiated this project. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No 851565).
\end{document} |
\begin{document}
\title{Existence of solution for Hilfer fractional differential equations with boundary value conditions \thanks{ Mathematics Subject Classifications: 34A08, 26A33, 34A12,34A40.}}
\date{} \author{Mohammed S. Abdo\thanks{ Department of Mathematics, Dr.Babasaheb Ambedkar Marathwada University, Aurangabad, (M.S) \textrm{431001}, India}\ , Satish K. Panchal\thanks{ Department of Mathematics, Dr.Babasaheb Ambedkar Marathwada University, Aurangabad, (M.S) \textrm{431001}, India}\ , Sandeep P. Bhairat \thanks{ Faculty of Engineering Mathematics, Institute of Chemical Technology Mumbai, Marathwada Campus, Jalna (M.S), India. Corresponding author email: sp.bhairat@marj.ictmumbai.edu.in}} \maketitle
\begin{abstract} In this paper, we consider a class of nonlinear fractional differential equations involving the Hilfer derivative with boundary conditions. First, we obtain an integral equation equivalent to the given boundary value problem in a weighted space of continuous functions. Then we obtain existence results for the given problem under a new approach and minimal assumptions on the nonlinear function $f$. The technique used in the analysis relies on a variety of tools, including Schauder's, Schaefer's and Krasnosel'skii's fixed point theorems. We demonstrate our results through illustrative examples. \end{abstract}
\section{Introduction}
Fractional calculus (FC) plays an increasingly vital role in applied mathematics and the engineering sciences, blurring the boundaries between scientific disciplines and real-world applications through a resurgence of interest in modern as well as classical techniques of applied analysis; see \cite{rms,kd,rh,hi,kst}. The development of FC is a natural consequence of the high level of activity on the research frontier of applied analysis.
Fractional differential equations (FDEs) occur naturally in many situations and have been studied intensively with initial and boundary conditions over the last three decades. The existence of solutions to such initial value problems (IVPs) and boundary value problems (BVPs) is crucial for further qualitative studies and applications. In recent years, an increasing interest in the analysis of Hilfer FDEs has developed in the literature \cite{as,SP1,SPN,SP5,DB1,DBN,sp1,db,fk,hlt,rk,rc,sup,zrs,zt,dv,wz,zx}. We mention here some works on Hilfer fractional differential equations.
One of the first works in this direction with an initial value condition was the paper by K. M. Furati et al. \cite{fk}. They studied the Hilfer FDE \begin{equation}\label{f1} D_{a^{+}}^{\alpha,\beta }y(x)=f \left(x,y(x)\right), \qquad x>a,\,0<\alpha<1,\,0\leq\beta\leq1, \end{equation} with the initial condition \begin{equation}\label{f2} I_{a^{+}}^{1-\gamma }y(a^+)=y_a, \quad y_a\in\mathbb{R}, \,\, \gamma=\alpha+\beta(1-\alpha), \end{equation} where $D_{a^{+}}^{\alpha,\beta}$ is the Hilfer fractional derivative of order $\alpha\in(0,1)$ and type $\beta\in[0,1],$ and $I_{a^{+}}^{1-\gamma}$ is the Riemann-Liouville fractional integral of order $1-\gamma.$ The existence and uniqueness of the solution to IVP \eqref{f1}-\eqref{f2} was proved in a weighted space of continuous functions by using the Banach fixed point theorem. For details, see \cite{fk,zt}.
In 2015, J. Wang and Y. Zhang investigated the existence of solutions to the nonlocal IVP for Hilfer FDEs: \begin{align}\label{l1} D_{a^{+}}^{\alpha,\beta }u(t)&=f \left(t,u(t)\right), \qquad 0<\alpha<1,\, 0\leq\beta\leq1 ,\,t\in (a,b],\\ I_{a^{+}}^{1-\gamma }u(a^+)&=\sum_{i=1}^{m}\lambda_{i}u(\tau_{i}),\qquad \tau _{i}\in (a,b],\ \alpha\leq\gamma=\alpha+\beta-\alpha\beta.\label{l11} \end{align} For details, see \cite{wz}.
Later, H. Gu and J. J. Trujillo \cite{ht} studied the existence of mild solutions of the Hilfer evolution equation: \begin{align}\label{l2} D_{0^{+}}^{\nu,\mu }x(t)&=Ax(t)+f\left(t,x(t)\right),\qquad 0<\nu<1,\, 0\leq\mu\leq1 ,\,t\in (0,b],\\ I_{0^{+}}^{1-\gamma}x(0)&=x_0,\qquad x_0\in\mathbb{R},\,\gamma=\nu+\mu-\nu\mu.\label{l22} \end{align}
They utilized the method of measures of noncompactness and established sufficient conditions ensuring the existence of a mild solution to the Hilfer evolution IVP \eqref{l2}-\eqref{l22}. The state $x(t)$ takes values in a Banach space $X$ with norm $|\cdot|$, and $A$ is the infinitesimal generator of a $C_0$-semigroup on $X.$
In 2016, R. Kamocki et al. \cite{rc} considered the fractional Cauchy problem involving the Hilfer derivative \begin{align}\label{l3} D_{a^{+}}^{\alpha,\beta }y(t)&=g \left(t,y(t)\right), \qquad 0<\alpha<1,\,0\leq\beta\leq1,\,t\in [a,b],\,b>a,\\ I_{a^{+}}^{1-\gamma }y(a)&=c,\qquad c\in {\mathbb{R}}^n,\ \gamma=\alpha+\beta-\alpha\beta,\label{l33} \end{align} and proved the existence and uniqueness of its solution in the space of continuous functions by using the Banach contraction theorem. They used the Bielecki norm without partitioning the interval and obtained solutions to both the homogeneous and nonhomogeneous Cauchy problems.
In the last two years, a series of works on Hilfer FDEs has been published. S. Abbas et al. \cite{as} studied the existence and stability for Hilfer FDEs of the form: \begin{align}\label{l4} D_{0}^{\alpha,\beta}u(t)&=f\left(t,u(t),D_{0}^{\alpha,\beta}u(t)\right), \qquad 0<\alpha<1,\,0\leq\beta\leq1,\,t\in [0,\infty),\\ I_{0}^{1-\gamma}u(0)&=\phi,\qquad \phi\in {\mathbb{R}},\ \gamma=\alpha+\beta-\alpha\beta,\label{l44} \end{align} with the uniform norm on the weighted space of bounded continuous functions. They discussed existence, uniqueness and asymptotic stability of the solution to the IVP by using Schauder's fixed point theorem. Further, they obtained Ulam-type stabilities for Hilfer FDEs in Banach spaces using the measure of noncompactness and M\"onch's fixed point theorem. They also derived some results on the existence of weak solutions to \eqref{f1}-\eqref{f2}.
Z. Gao and X. Yu \cite{zx} discussed the existence of solutions to the Hilfer integral BVP for relaxation FDEs: \begin{align}\label{l5} D_{0^{+}}^{\nu,\mu }x(t)&=cx(t)+f\left(t,x(t)\right),\qquad c<0,\,0<\nu<1,\, 0\leq\mu\leq1,\,t\in (0,b],\\ I_{0^{+}}^{1-\gamma }x(0^+)&=\sum_{i=1}^{m}\lambda_{i}x(\tau_{i}),\qquad \tau _{i}\in (0,b),\ 0\leq\gamma=\nu+\mu-\nu\mu. \label{l55} \end{align} By utilizing properties of the Mittag-Leffler function and fixed point theory, they established three existence results for the solution of the Hilfer integral BVP \eqref{l5}-\eqref{l55}, similar to the results in \cite{hlt}.
Bhairat et al.\ in \cite{db} generalized the IVP \eqref{f1}-\eqref{f2} to $\alpha\in(n-1,n).$
First, they derived an equivalent integral representation in a weighted space of continuous functions. Then, by employing the method of successive approximations, the existence, uniqueness and continuous dependence of the solution were obtained. Further, in \cite{sp1}, Bhairat studied the singular IVP for the Hilfer FDE: \begin{align}\label{l7} D_{a^+}^{\alpha,\beta}x(t)=f(t,x(t)),&\quad 0<\alpha<1,\, 0\leq\beta\leq1,\quad t>{a},\\ \displaystyle\lim_{t\to{a^{+}}}{(t-a)}^{1-\gamma}x(t)=x_0,&\qquad \gamma=\alpha+\beta(1-\alpha). \end{align} Using properties of Euler's beta and gamma functions and Picard's iterative technique, the existence and uniqueness of the solution to the singular IVP were obtained. Some existence results for Hilfer fractional implicit differential equations with nonlocal initial conditions can be found in \cite{as,dv}.
Recently, Asawasamrit et al.\ \cite{sup} studied the nonlocal BVP: \begin{equation}\label{l8} D^{\alpha,\beta}x(t)=f(t,x(t)),\quad 1<\alpha<2,\, 0\leq\beta\leq1,\quad t\in[a,b], \end{equation} with the integral boundary conditions \begin{equation}\label{l9} x(a)=0,\quad x(b)=\sum_{i=1}^{m}\delta_{i}I^{\phi_i}x(\xi_i),\qquad \phi_{i}>0,\,\delta_i\in\mathbb{R},\, \xi_i\in[a,b]. \end{equation} The Banach contraction mapping principle, the Banach fixed point theorem with the H\"older inequality, nonlinear contractions, Krasnosel'skii's fixed point theorem, and the nonlinear Leray-Schauder alternative are employed to prove the existence of solutions to the integral BVP.
Motivated by the aforesaid works, in this paper we consider the following BVP for a class of Hilfer FDEs: \begin{equation} D_{a^{+}}^{\alpha ,\beta }z(t)=f\big(t,z(t)\big),\text{ \ }0<\alpha <1,\,0\leq \beta \leq 1,\,t\in (a,b],\qquad \ \qquad \qquad \label{e8.1} \end{equation} \begin{equation} I_{a^{+}}^{1-\gamma }\big[cz(a^{+})+dz(b^{-})\big]=e_{k},\text{\ \ \ }\gamma =\alpha +\beta (1-\alpha ),\, e_k\in\mathbb{R}, \label{e8.2} \end{equation} where $f:(a,b]\times \mathbb{R}\rightarrow \mathbb{R}$ is a function such that $ f(t,z)\in C_{1-\gamma }[a,b]$ for any $z\in C_{1-\gamma }[a,b]$, and $ c,d,e_{k}\in \mathbb{R}$ with $c+d\neq 0$. We obtain several existence results by Schauder's, Schaefer's and Krasnosel'skii's fixed point theorems.
The paper is organized as follows: Some preliminary concepts related to our problem are listed in Section 2 which are useful in the sequel. In Section 3, we first establish an equivalent integral equation of BVP \eqref{e8.1}-\eqref{e8.2} and then study the existence results. Illustrative examples are provided in the last section.
\section{Preliminaries}
In this section, we list some definitions, lemmas and weighted spaces which are useful in the sequel.
Let $-\infty <a<b<+\infty .$ Let $C[a,b]$, $AC[a,b]$ and $C^{n}[a,b]$ be the spaces of continuous, absolutely continuous, and $n$-times continuously differentiable functions on $[a,b],$ respectively. Here $ L^{p}(a,b),\,p\geq 1,$ is the space of Lebesgue integrable functions on $ (a,b). $ Furthermore, we recall the following weighted spaces \cite{fk}: \begin{gather*} C_{\gamma }[a,b]=\{g:(a,b]\rightarrow \mathbb{R}:(t-a)^{\gamma }g(t)\in C[a,b]\},\quad 0\leq \gamma <1, \\ C_{\gamma }^{n}[a,b]=\{g:(a,b]\rightarrow \mathbb{R},g\in C^{n-1}[a,b]:g^{(n)}(t)\in C_{\gamma }[a,b]\},\,n\in \mathbb{N}. \end{gather*}
\begin{definition} (\cite{kst}) Let $g:[a,\infty )\rightarrow \mathbb{R}$ be a real-valued continuous function. The left-sided Riemann-Liouville fractional integral of $g$ of order $\alpha >0$ is defined by \begin{equation} I_{a^{+}}^{\alpha }g(t)=\frac{1}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}g(s)ds,\quad t>a, \label{d1} \end{equation} where $\Gamma (\cdot )$ is Euler's gamma function and $a\in \mathbb{R},$ provided the right-hand side is pointwise defined on $(a,\infty ).$ \end{definition}
\begin{definition} (\cite{kst}) Let $g:[a,\infty )\rightarrow \mathbb{R}$ be a real-valued continuous function. The left-sided Riemann-Liouville fractional derivative of $g$ of order $\alpha >0$ is defined by \begin{equation} D_{a^{+}}^{\alpha }g(t)=\frac{1}{\Gamma (n-\alpha )}\frac{d^{n}}{dt^{n}} \int_{a}^{t}(t-s)^{n-\alpha -1}g(s)ds, \label{d2} \end{equation} where $n=[\alpha ]+1$ and $[\alpha ]$ denotes the integer part of $\alpha .$ \end{definition}
\begin{definition} \label{7} (\cite{rh}) The left-sided Hilfer fractional derivative of a function $g\in L^{1}(a,b)$ of order $0<\alpha <1$ and type $0\leq \beta \leq 1$ is denoted by $D_{a^{+}}^{\alpha ,\beta }$ and defined by \begin{equation} D_{a^{+}}^{\alpha ,\beta }g(t)=I_{a^{+}}^{\beta (1-\alpha )}DI_{a^{+}}^{(1-\beta )(1-\alpha )}g(t),\qquad D=\frac{d}{ dt}, \label{d3} \end{equation} where $I_{a^{+}}^{\alpha }$ and $D_{a^{+}}^{\alpha }$ are the Riemann-Liouville fractional integral and derivative defined by \eqref{d1} and \eqref{d2}, respectively. \end{definition}
\begin{remark} \label{rem8.a} From Definition \ref{7}, we observe that:
\begin{itemize} \item[(i)] The operator $D_{a^{+}}^{\alpha ,\beta }$ can be written as \begin{equation*} D_{a^{+}}^{\alpha ,\beta }=I_{a^{+}}^{\beta (1-\alpha )}DI_{a^{+}}^{(1-\gamma )}=I_{a^{+}}^{\beta (1-\alpha )}D^{\gamma },~~~~~~~~\gamma =\alpha +\beta -\alpha \beta \text{.} \end{equation*}
\item[(ii)] The Hilfer fractional derivative can be regarded as an interpolator between the Riemann-Liouville derivative ($\beta =0$) and Caputo derivative ($\beta =1$) as \begin{equation*} D_{a^{+}}^{\alpha ,\beta }= \begin{cases} DI_{a^{+}}^{(1-\alpha )}=~D_{a^{+}}^{\alpha },~~~~~~~~~~if~\beta =0; \\ I_{a^{+}}^{(1-\alpha )}D=~^{c}D_{a^{+}}^{\alpha },~~~~~~~~if~\beta =1. \end{cases} \end{equation*}
\item[(iii)] In particular, if $0<\alpha <1,$ $0\leq \beta \leq 1$ and $ \gamma =\alpha +\beta -\alpha \beta ,$ then \begin{equation*} (D_{a^{+}}^{\alpha ,\beta }g)(t)=\Big(I_{a^{+}}^{\beta (1-\alpha )}\frac{d}{ dt}\Big(I_{a^{+}}^{(1-\beta )(1-\alpha )}g\Big)\Big)(t). \end{equation*} One has, \begin{equation*} (D_{a^{+}}^{\alpha ,\beta }g)(t)=\Big(I_{a^{+}}^{\beta (1-\alpha )}\Big( D_{a^{+}}^{\gamma }g\Big)\Big)(t), \end{equation*} where $\Big(D_{a^{+}}^{\gamma }g\Big)(t)=\frac{d}{dt}\Big( I_{a^{+}}^{(1-\beta )(1-\alpha )}g\Big)(t).$ \end{itemize} \end{remark}
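A computation worth recording here, as it explains why the kernel $(t-a)^{\gamma-1}$ appears in inversion results such as Lemma \ref{Le2}: the power function $(t-a)^{\gamma-1}$ is annihilated by the Hilfer derivative. This uses only the standard power rule for Riemann-Liouville integrals; with $\gamma=\alpha+\beta-\alpha\beta$ one has $(1-\beta)(1-\alpha)=1-\gamma$, so

```latex
I_{a^{+}}^{(1-\beta)(1-\alpha)}(t-a)^{\gamma-1}
  = I_{a^{+}}^{1-\gamma}(t-a)^{\gamma-1}
  = \frac{\Gamma(\gamma)}{\Gamma(1)}\,(t-a)^{0}
  = \Gamma(\gamma),
\qquad\text{hence}\qquad
D_{a^{+}}^{\alpha,\beta}(t-a)^{\gamma-1}
  = I_{a^{+}}^{\beta(1-\alpha)}\,\frac{d}{dt}\,\Gamma(\gamma) = 0 .
```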
\begin{definition} (\cite{fk}) For $0<\alpha <1$ and $0\leq \beta \leq 1,$ the weighted space $ C_{1-\gamma }^{\alpha ,\beta }[a,b]$ is defined by \begin{equation} C_{1-\gamma }^{\alpha ,\beta }[a,b]=\big\{g\in {C_{1-\gamma }[a,b]} :D_{a^{+}}^{\alpha ,\beta }g\in {C_{1-\gamma }[a,b]}\big\},\quad \gamma =\alpha +\beta (1-\alpha ). \label{w1} \end{equation} Clearly, $D_{a^{+}}^{\alpha ,\beta }g=I_{a^{+}}^{\beta (1-\alpha )}D_{a^{+}}^{\gamma }g$ and $C_{1-\gamma }^{\gamma }[a,b]\subset C_{1-\gamma }^{\alpha ,\beta }[a,b]$ for $\gamma =\alpha +\beta -\alpha \beta $, $ 0<\alpha <1,\,0\leq \beta \leq 1.$ We equip the space $C_{\gamma }^{n}[a,b]$ with the norm \begin{equation} {\Vert g\Vert }_{C_{\gamma }^{n}}=\sum_{k=0}^{n-1}{\Vert g^{(k)}\Vert }_{C}+{ \Vert g^{(n)}\Vert }_{C_{\gamma }}. \label{n1} \end{equation} \end{definition}
\begin{lemma} \label{def8.5} (\cite{kd}) If $\alpha >0$, $\beta >0$ and $g\in L^{1}(a,b)$, then for a.e.\ $t\in \lbrack a,b]$,\newline $\Big(I_{a^{+}}^{\alpha }I_{a^{+}}^{\beta }g\Big)(t)=\Big(I_{a^{+}}^{\alpha +\beta }g\Big)(t)$ and $\Big(D_{a^{+}}^{\alpha }I_{a^{+}}^{\alpha }g\Big) (t)=g(t).$\newline In particular, if $g\in C_{\gamma }[a,b]$ or $g\in C[a,b]$, then these properties hold for each $t\in (a,b]$ or $t\in \lbrack a,b]$, respectively. \end{lemma}
\begin{lemma} \label{Le1}(\cite{fk}) For $t>a,$ we have
\begin{description} \item[(i)] $I_{a^{+}}^{\alpha }(t-a)^{\beta -1}=\frac{\Gamma (\beta )}{ \Gamma (\beta +\alpha )}(t-a)^{\beta +\alpha -1},\quad \alpha \geq 0,\beta >0.$\newline
\item[(ii)] $D_{a^{+}}^{\alpha }(t-a)^{\alpha -1}=0,\quad \alpha \in (0,1).$ \end{description} \end{lemma}
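Lemma \ref{Le1}(i) is easy to check numerically. The sketch below is illustrative only: the parameters $\alpha=0.6$, $\beta=0.7$, $t=2$ are arbitrary choices with $\alpha,\beta\ge 1/2$, so that the substitution $s=a+(t-a)\sin^2\theta$ removes the endpoint singularities of the integrand $(t-s)^{\alpha-1}(s-a)^{\beta-1}$, and the midpoint rule applies.

```python
import math

def frac_int_of_power(alpha, beta, t, a=0.0, n=200_000):
    """Midpoint-rule evaluation of I_{a+}^alpha (t - a)**(beta - 1) at t.
    With s = a + (t - a) * sin(theta)**2 the integrand becomes bounded
    for alpha, beta >= 1/2."""
    h = (math.pi / 2) / n
    acc = 0.0
    for k in range(n):
        th = (k + 0.5) * h
        s, c = math.sin(th), math.cos(th)
        # (1 - v)**(alpha - 1) * v**(beta - 1) dv with v = sin(theta)**2
        acc += (c * c) ** (alpha - 1) * (s * s) ** (beta - 1) * 2.0 * s * c
    beta_integral = acc * h          # numerical value of B(beta, alpha)
    return (t - a) ** (alpha + beta - 1) * beta_integral / math.gamma(alpha)

alpha, beta, t = 0.6, 0.7, 2.0
numeric = frac_int_of_power(alpha, beta, t)
exact = math.gamma(beta) / math.gamma(beta + alpha) * t ** (alpha + beta - 1)
```

The agreement reflects the identity $I_{a^+}^{\alpha}(t-a)^{\beta-1}=\frac{B(\beta,\alpha)}{\Gamma(\alpha)}(t-a)^{\beta+\alpha-1}$ together with $B(\beta,\alpha)=\frac{\Gamma(\beta)\Gamma(\alpha)}{\Gamma(\beta+\alpha)}$.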
\begin{lemma} \label{def8.8} (\cite{fk}) Let $\alpha >0$, $\beta >0$ and $\gamma =\alpha +\beta -\alpha \beta .$ If $g\in C_{1-\gamma }^{\gamma }[a,b]$, then \newline \begin{equation*} I_{a^{+}}^{\gamma }D_{a^{+}}^{\gamma }g=I_{a^{+}}^{\alpha }D_{a^{+}}^{\alpha ,\beta }g,~D_{a^{+}}^{\gamma }I_{a^{+}}^{\alpha }g=D_{a^{+}}^{\beta (1-\alpha )}g. \end{equation*} \end{lemma}
\begin{lemma} \label{Le2} (\cite{fk}) Let $0<\alpha <1,$ $0\leq \beta \leq 1$ and $g\in C_{1-\gamma }[a,b].$ Then \begin{equation*} I_{a^{+}}^{\alpha }D_{a^{+}}^{\alpha ,\beta }g(t)=g(t)-\frac{ (t-a)^{\alpha +\beta (1-\alpha )-1}}{\Gamma (\alpha +\beta (1-\alpha ))} I_{a^{+}}^{(1-\beta )(1-\alpha )}g(a)\quad \text{for all}\quad t\in (a,b]. \end{equation*} Moreover, if $\ \gamma =\alpha +\beta -\alpha \beta ,$ $g\in C_{1-\gamma }[a,b]$ and $I_{a^{+}}^{1-\gamma }g\in C_{1-\gamma }^{1}[a,b],$ then \begin{equation*} I_{a^{+}}^{\gamma }D_{a^{+}}^{\gamma }g(t)=g(t)-\frac{ (t-a)^{\gamma -1}}{\Gamma (\gamma)}I_{a^{+}}^{1-\gamma }g(a)\quad \text{for all}\quad t\in (a,b]. \end{equation*} \end{lemma}
\begin{lemma} \label{def8.7} (\cite{fk}) If $0\leq \gamma <\alpha $ and $g\in C_{\gamma }[a,b]$, then \begin{equation*} (I_{a^{+}}^{\alpha }g)(a)=\lim_{t\rightarrow a^{+}}I_{a^{+}}^{\alpha }g(t)=0. \end{equation*} \end{lemma}
\begin{lemma} \label{le}(\cite{fk}) Let $\gamma =\alpha +\beta -\alpha \beta $ where $ 0<\alpha <1$ and $0\leq \beta \leq 1.$ Let $f:(a,b]\times \mathbb{R}\rightarrow \mathbb{R}$ be a function such that $f(t,z)\in C_{1-\gamma }[a,b]$ for any $z\in C_{1-\gamma }[a,b].$ If $z\in C_{1-\gamma }^{\gamma }[a,b],$ then $z$ satisfies IVP \eqref{f1}-\eqref{f2} if and only if $z$ satisfies the Volterra integral equation \begin{equation} z(t)=\frac{y_a}{\Gamma (\gamma)}(t-a)^{\gamma -1}+\frac{ 1}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}f(s,z(s))ds,\quad t>a. \label{s3} \end{equation} \end{lemma}
\section{Existence of solution}
In this section we prove the existence of solutions to the BVP \eqref{e8.1}-\eqref{e8.2} in $C_{1-\gamma }^{\alpha ,\beta }[a,b].$
\begin{lemma} \label{lee(1)} Let $0<\alpha <1$, $0\leq \beta \leq 1$ where $ \gamma =\alpha +\beta -\alpha \beta $, and let $f:(a,b]\times \mathbb{R} \rightarrow \mathbb{R}$ be a function such that $f(t,z)\in C_{1-\gamma }[a,b] $ for any $z\in C_{1-\gamma }[a,b].$ If $z\in C_{1-\gamma }^{\gamma }[a,b],$ then $z$ satisfies BVP \eqref{e8.1}-\eqref{e8.2} if and only if $z$ satisfies the integral equation \begin{eqnarray} z(t) &=&\frac{(t-a)^{\gamma -1}}{\Gamma (\gamma)}\frac{ e_{k}}{d\left( 1+\frac{c}{d}\right) }-\frac{1}{\left( 1+\frac{c}{d}\right) } \frac{(t-a)^{\gamma -1}}{\Gamma (\gamma)} \notag \\ &&\times \frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha-\gamma}f(s,z(s))ds \notag \\ &&+\frac{1}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}f(s,z(s))ds. \label{ee3} \end{eqnarray} \end{lemma}
Proof: \ In view of Lemma \ref{le}, the solution of \eqref{e8.1} can be written as \begin{equation} z(t)=\frac{I_{a^{+}}^{1-\gamma }z(a^{+})}{\Gamma (\gamma)} (t-a)^{\gamma -1}+\frac{1}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}f(s,z(s))ds,\quad t>a. \label{e8.3} \end{equation} \
Applying $I_{a^{+}}^{1-\gamma }$ on both sides of \eqref{e8.3}, we have \begin{eqnarray*} I_{a^{+}}^{1-\gamma }z(t) &=&I_{a^{+}}^{1-\gamma }z(a^{+})+ \frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{t}(t-s)^{\alpha-\gamma}f(s,z(s))ds \\ &=&I_{a^{+}}^{1-\gamma }z(a^{+})+I_{a^{+}}^{1-\beta (1-\alpha )}f(t,z(t)). \end{eqnarray*} Taking the limit $t\rightarrow a^{+}$ and using Lemma \ref{def8.7} with $1-\gamma <1-\beta (1-\alpha )$, the integral term vanishes, and we obtain \begin{equation} \lim_{t\rightarrow a^{+}}I_{a^{+}}^{1-\gamma }z(t)=I_{a^{+}}^{1-\gamma }z(a^{+}). \label{e8.4a} \end{equation} Taking instead the limit $t\rightarrow b^{-}$, we obtain \begin{equation} I_{a^{+}}^{1-\gamma }z(b^{-})=I_{a^{+}}^{1-\gamma }z(a^{+})+\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha-\gamma}f(s,z(s))ds. \label{e8.4b} \end{equation}
From the boundary condition (\ref{e8.2}), we have \ \begin{equation} I_{a^{+}}^{1-\gamma }z(b^{-})=\frac{e_{k}}{d}-\frac{c}{d}I_{a^{+}}^{1-\gamma }z(a^{+}). \label{e3} \end{equation} Comparing the equations (\ref{e8.4b}) and (\ref{e3}), and using (\ref{e8.4a} ), we get \begin{equation} I_{a^{+}}^{1-\gamma }z(a^{+})=\frac{1}{\left( 1+\frac{c}{d}\right) }\left( \frac{e_{k}}{d}-\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{ \alpha-\gamma}f(s,z(s))ds\right) . \label{e4} \end{equation}
Substituting \eqref{e4} into \eqref{e8.3}, we obtain \begin{eqnarray} z(t) &=&\frac{(t-a)^{\gamma -1}}{\Gamma (\gamma)}\frac{ e_{k}}{d\left( 1+\frac{c}{d}\right) }-\frac{1}{\left( 1+\frac{c}{d}\right) } \frac{(t-a)^{\gamma -1}}{\Gamma (\gamma)} \notag \\ &&\times \frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha-\gamma}f(s,z(s))ds \notag \\ &&+\frac{1}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}f(s,z(s))ds. \label{E5} \end{eqnarray}
Conversely, applying $I_{a^{+}}^{1-\gamma }$ on both sides of \eqref{ee3} and using Lemmas \ref{Le1} and \ref{def8.5}, with some simple calculations we get \begin{eqnarray*} &&cI_{a^{+}}^{1-\gamma }z(a^{+})+dI_{a^{+}}^{1-\gamma }z(b^{-}) \\ &=&\frac{c}{\left( 1+\frac{c}{d}\right) }\left( \frac{e_{k}}{d}-\frac{1}{ \Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha-\gamma}f(s,z(s))ds\right) \\ &&+\frac{d}{\left( 1+\frac{c}{d}\right) }\left( \frac{e_{k}}{d}-\frac{1}{ \Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha-\gamma}f(s,z(s))ds\right) \\ &&+\frac{d}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha-\gamma}f(s,z(s))ds \\ &=&\frac{ce_{k}}{\left( 1+\frac{c}{d}\right) d}+\frac{de_{k}}{\left( 1+\frac{ c}{d}\right) d}-\left( \frac{c}{\left( 1+\frac{c}{d}\right) }+\frac{d}{ \left( 1+\frac{c}{d}\right) }-d\right) \\ &&\times \frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha-\gamma}f(s,z(s))ds \\ &=&e_{k}, \end{eqnarray*} which shows that the boundary condition \eqref{e8.2} is satisfied. \newline
Next, applying $D_{a^{+}}^{\gamma }$ on both sides of \eqref{ee3} and using Lemmas \ref{Le1} and \ref{def8.8}, we have \begin{equation} D_{a^{+}}^{\gamma }z(t)=D_{a^{+}}^{\beta (1-\alpha )}f\big(t,z(t)\big). \label{e8.9} \end{equation}
Since $z\in C_{1-\gamma }^{\gamma }[a,b]$, by definition of $C_{1-\gamma }^{\gamma }[a,b]$ we have $D_{a^{+}}^{\gamma }z\in C_{1-\gamma }[a,b]$; therefore $D_{a^{+}}^{\beta (1-\alpha )}f=DI_{a^{+}}^{1-\beta (1-\alpha )}f\in C_{1-\gamma }[a,b].$ For $f\in C_{1-\gamma }[a,b]$, it is clear that $I_{a^{+}}^{1-\beta (1-\alpha )}f\in C_{1-\gamma }[a,b]$. Hence $f$ and $I_{a^{+}}^{1-\beta (1-\alpha )}f$ satisfy the hypotheses of Lemma \ref{Le2}. \newline Now, applying $I_{a^{+}}^{\beta (1-\alpha )}$ on both sides of \eqref{e8.9}, we have \begin{equation*} I_{a^{+}}^{\beta (1-\alpha )}D_{a^{+}}^{\gamma }z(t)=I_{a^{+}}^{\beta (1-\alpha )}D_{a^{+}}^{\beta (1-\alpha )}f\big(t,z(t)\big). \end{equation*} Using Remark~\ref{rem8.a} (i), Eq.~(\ref{e8.9}) and Lemma \ref{Le2}, we get \begin{equation*} I_{a^{+}}^{\beta (1-\alpha )}D_{a^{+}}^{\gamma }z(t)=f\big(t,z(t)\big)- \frac{I_{a^{+}}^{1-\beta (1-\alpha )}f\big(a,z(a)\big)}{\Gamma (\beta (1-\alpha ))}(t-a)^{\beta (1-\alpha )-1},\quad \text{for all}\quad t\in (a,b]. \end{equation*} By Lemma \ref{def8.7}, we have $I_{a^{+}}^{1-\beta (1-\alpha )}f\big(a,z(a)\big)=0$. Therefore $D_{a^{+}}^{\alpha ,\beta }z(t)=f\big(t,z(t)\big)$. This completes the proof.
Let us introduce the hypotheses needed to prove the existence of solutions for the problem at hand.
\begin{itemize} \item[ (H1)] $f:(a,b]\times
\mathbb{R}
\rightarrow
\mathbb{R}
$ is a function such that $f(\cdot ,z(\cdot ))\in C_{1-\gamma }^{\beta (1-\alpha )}[a,b]$ for any $z\in C_{1-\gamma }[a,b]$ and there exist two constants $N,\zeta >0$ such that \begin{equation*} \left\vert f\big(t,z\big)\right\vert \leq N\big(1+\zeta \left\Vert z\right\Vert _{C_{1-\gamma }}\big). \end{equation*}
\item[ (H2)] The inequality \begin{equation} \mathcal{G}:=\frac{\Gamma (\gamma)}{\Gamma (\alpha +1)}\left[ (b-a)^{\alpha }+(b-a)^{\alpha +1-\gamma }\right] N\zeta <1 \label{rr1} \end{equation} holds. \end{itemize}
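To make \eqref{rr1} concrete (an informal numerical sketch, with the parameter values borrowed from the example in Section \ref{Sec5}): for $\alpha =1/2$, $\beta =1/3$ (so $\gamma =2/3$), $a=0$, $b=1$, $N=1$ and $\zeta =1/16$, the constant $\mathcal{G}$ can be evaluated directly with math.gamma.

```python
import math

alpha, beta = 0.5, 1.0 / 3.0          # parameter values from the example section
gamma_ = alpha + beta - alpha * beta  # gamma = 2/3
a, b = 0.0, 1.0
N, zeta = 1.0, 1.0 / 16.0

# G = Gamma(gamma)/Gamma(alpha+1) * [ (b-a)^alpha + (b-a)^(alpha+1-gamma) ] * N * zeta
G = (math.gamma(gamma_) / math.gamma(alpha + 1.0)
     * ((b - a) ** alpha + (b - a) ** (alpha + 1.0 - gamma_)) * N * zeta)
print(G)  # ~0.19, so (H2) holds for these values
assert G < 1
```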
Now, we are ready to present the existence result for the BVP \eqref{e8.1}-\eqref{e8.2}, which is based on Schauder's fixed point theorem (see \cite{gd}).
\begin{theorem} \label{th8.1} Assume that the hypotheses (H1) and (H2) are satisfied. Then the Hilfer boundary value problem \eqref{e8.1}-\eqref{e8.2} has at least one solution in $C_{1-\gamma }^{\gamma }[a,b]\subset C_{1-\gamma }^{\alpha ,\beta }[a,b]$. \end{theorem}
\begin{proof} Define the operator ${\large \mathcal{T}}:C_{1-\gamma }[a,b]\longrightarrow C_{1-\gamma }[a,b]$ by \begin{eqnarray} \left( {\large \mathcal{T}}z\right) (t) &=&\frac{(t-a)^{\gamma-1}}{\Gamma (\gamma)}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }-\frac{1 }{\left( 1+\frac{c}{d}\right) }\frac{(t-a)^{\gamma -1}}{\Gamma (\gamma)} \notag \\ &&\times \frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{-\gamma +\alpha}f(s,z(s))ds \notag \\ &&+\frac{1}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}f(s,z(s))ds. \label{e8.10} \end{eqnarray} Let $\mathbb{B}_{r}=\left\{ z\in C_{1-\gamma }[a,b]:\left\Vert z\right\Vert _{C_{1-\gamma }}\leq r\right\} $ with ${\large r\geq }\frac{\Omega }{1- \mathcal{G}},$ for $\mathcal{G<}1,$ where \begin{eqnarray*} \Omega &:&=\bigg[\frac{1}{\Gamma (\gamma)} \frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }+\left\vert \frac{1}{1+\frac{c}{d} }\right\vert \frac{1}{\Gamma (\gamma)} \\ &&\times \left[ \frac{(b-a)^{\alpha +1-\gamma }}{\Gamma (2-\gamma +\alpha) }+\frac{(b-a)^{2\alpha +1-\gamma }}{\Gamma (\alpha +1)}\right] N\bigg]. \end{eqnarray*} The proof will be given by the following three steps:
Step 1: We show that $\mathcal{T}(\mathbb{B}_{r})\subset \mathbb{B}_{r}$. By hypothesis (H1), we have \begin{align*} & \left\vert (\mathcal{T}z)(t)(t-a)^{1-\gamma }\right\vert \\ & \leq \left\vert \frac{1}{\Gamma (\gamma)} \frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }\right\vert \\ & +\left\vert \frac{1}{1+\frac{c}{d}}\frac{1}{\Gamma (\gamma)}\right\vert \frac{1}{\Gamma (1-\gamma +\alpha )} \int_{a}^{b}(b-s)^{\alpha -\gamma }N(1+\zeta \left\vert z\right\vert )ds \\ & +\frac{(t-a)^{1-\gamma }}{\Gamma (\alpha )} \int_{a}^{t}(t-s)^{\alpha -1}N(1+\zeta \left\vert z\right\vert )ds \\ & \leq \frac{1}{\Gamma (\gamma)}\frac{e_{k}}{ d\left( 1+\frac{c}{d}\right) }+\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\gamma)} \\ & \times \frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha-\gamma}N(1+\zeta (s-a)^{\gamma -1}\Vert z\Vert _{C_{1-\gamma }})ds \\ & +\frac{(t-a)^{1-\gamma }}{\Gamma (\alpha )} \int_{a}^{t}(t-s)^{\alpha -1}N(1+\zeta (s-a)^{\gamma -1}\Vert z\Vert _{C_{1-\gamma }})ds. \end{align*} Note that, for any $z\in \mathbb{B}_{r}$ and each $t\in (a,b]$, we get \begin{eqnarray*} &&\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha -\gamma }N(1+\zeta (s-a)^{\gamma -1}\Vert z\Vert _{C_{1-\gamma }})ds \\ &\leq &\left[ \frac{(b-a)^{1-\gamma }}{\Gamma (2-\gamma +\alpha )}+\frac{ \zeta r\Gamma (\gamma)}{\Gamma (\alpha +1)}\right] N(b-a)^{\alpha }, \end{eqnarray*} and \begin{eqnarray*} &&\frac{(t-a)^{1-\gamma }}{\Gamma (\alpha )} \int_{a}^{t}(t-s)^{\alpha -1}N(1+\zeta (s-a)^{\gamma -1}\Vert z\Vert _{C_{1-\gamma }})ds \\ &\leq &\left[ \frac{(t-a)^{\alpha }}{\Gamma (\alpha +1)}+\frac{\zeta r\Gamma (\gamma)}{\Gamma (\alpha +1)}\right] N(t-a)^{\alpha +1-\gamma }. \end{eqnarray*} Hence \begin{eqnarray*} \left\vert (\mathcal{T}z)(t)(t-a)^{1-\gamma }\right\vert &\leq &\frac{1}{\Gamma (\gamma)}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }+\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\gamma)} \\ &&\times \left[ \frac{(b-a)^{1-\gamma }}{\Gamma (2-\gamma +\alpha)}+\frac{ \zeta r\Gamma (\gamma)}{\Gamma (\alpha +1)}\right] N(b-a)^{\alpha } \\ &&+\left[ \frac{(t-a)^{\alpha }}{\Gamma (\alpha +1)}+\frac{\zeta r\Gamma (\gamma)}{\Gamma (\alpha +1)}\right] N(t-a)^{\alpha +1-\gamma }, \end{eqnarray*} which yields \begin{eqnarray*} \Vert \mathcal{T}z\Vert _{C_{1-\gamma }} &\leq &\frac{ 1}{\Gamma (\gamma)}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) } +\left\vert \frac{1}{1+\frac{c}{d}}\right\vert\frac{ 1}{\Gamma (\gamma)} \\ &&\times \bigg[\frac{(b-a)^{\alpha +1-\gamma }}{\Gamma (2-\gamma +\alpha)} +\frac{(b-a)^{2\alpha +1-\gamma }}{\Gamma (\alpha +1)}\bigg]N \\ &&+\frac{N\zeta r\Gamma (\gamma)}{\Gamma (\alpha +1)}\bigg[ (b-a)^{\alpha }+(b-a)^{\alpha +1-\gamma }\bigg]. \end{eqnarray*} In light of hypothesis (H2) and the definition of $r$, we get $\Vert \mathcal{T}z\Vert _{C_{1-\gamma }}\leq \mathcal{G}r+(1-\mathcal{G})r=r,$ that is, $\mathcal{T}(\mathbb{B}_{r})\subset \mathbb{B}_{r}.$
We shall prove that ${\large \mathcal{T}}$ is completely continuous.
Step 2: The operator $\mathcal{T}$ is continuous. Suppose that $\{z_{n}\}$ is a sequence such that $z_{n}\rightarrow z$ in $\mathbb{B}_{r}$ as $n\rightarrow \infty $. Then for each $t\in (a,b],$ we have \begin{eqnarray*} &&\left\vert \big((\mathcal{T}z_{n})(t)-(\mathcal{T}z)(t)\big) (t-a)^{1-\gamma }\right\vert \\ &\leq &\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\gamma)}\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha -\gamma }\left\vert f\big(s,z_{n}(s)\big)-f\big(s,z(s)\big)\right\vert ds \\ &&+\frac{(t-a)^{1-\gamma }}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}\left\vert f\big(s,z_{n}(s)\big)-f\big(s,z(s)\big)\right\vert ds \\ &\leq &\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\gamma)}\frac{\Gamma (\gamma)}{\Gamma (\alpha +1)}(b-a)^{\alpha }\left\Vert f\big(\cdot ,z_{n}(\cdot )\big)-f\big(\cdot ,z(\cdot )\big) \right\Vert _{C_{1-\gamma }} \\ &&+\frac{\Gamma (\gamma)(t-a)^{\alpha }}{\Gamma (\gamma +\alpha )}\left\Vert f\big(\cdot ,z_{n}(\cdot )\big)-f\big(\cdot ,z(\cdot )\big)\right\Vert _{C_{1-\gamma }}. \end{eqnarray*} Since $f$ is continuous on $(a,b]$ and $z_{n}\rightarrow z$, this implies \begin{equation*} \Vert \mathcal{T}z_{n}-\mathcal{T}z\Vert _{C_{1-\gamma }}\longrightarrow 0\quad \text{as}\quad n\longrightarrow \infty , \end{equation*} which means that the operator $\mathcal{T}$ is continuous on $\mathbb{B}_{r}$. \newline
Step 3: We prove that $\mathcal{T}(\mathbb{B}_{r})$ is relatively compact. From Step 1, we have $\mathcal{T}(\mathbb{B}_{r})\subset \mathbb{B}_{r}$, so $\mathcal{T}(\mathbb{B}_{r})$ is uniformly bounded. Moreover, we show that the operator $\mathcal{T}$ is equicontinuous on $\mathbb{B}_{r}$. Indeed, for any $a<t_{1}<t_{2}<b$ and $z\in \mathbb{B}_{r}$, the first two terms of $(t-a)^{1-\gamma }\big(\mathcal{T}z\big)(t)$ are independent of $t$, hence they cancel in the difference and we get \begin{eqnarray*} &&\left\vert (t_{2}-a)^{1-\gamma }\big(\mathcal{T}z\big) (t_{2})-(t_{1}-a)^{1-\gamma }\big(\mathcal{T}z\big) (t_{1})\right\vert \\ &\leq &\dfrac{1}{\Gamma (\alpha )}\left\vert (t_{2}-a)^{1-\gamma }\int_{a}^{t_{2}}(t_{2}-s)^{\alpha -1}f\big(s,z(s)\big)ds\right. \\ &&\left. -(t_{1}-a)^{1-\gamma }\int_{a}^{t_{1}}(t_{1}-s)^{\alpha -1}f\big( s,z(s)\big)ds\right\vert \\ &\leq &\dfrac{\Vert f\Vert _{C_{1-\gamma }}}{\Gamma (\alpha )}\left\vert (t_{2}-a)^{1-\gamma }\int_{a}^{t_{2}}(t_{2}-s)^{\alpha -1}(s-a)^{\gamma -1}ds\right. \\ &&\left. -(t_{1}-a)^{1-\gamma }\int_{a}^{t_{1}}(t_{1}-s)^{\alpha -1}(s-a)^{\gamma -1}ds\right\vert \\ &\leq &\dfrac{\Vert f\Vert _{C_{1-\gamma }}}{\Gamma (\alpha )}\mathcal{B}(\gamma ,\alpha )\left\vert (t_{2}-a)^{\alpha }-(t_{1}-a)^{\alpha }\right\vert , \end{eqnarray*}
which tends to zero as $t_{2}\rightarrow t_{1}$, independently of $z\in \mathbb{B}_{r}$, where $\mathcal{B}(\cdot ,\cdot )$ is the Beta function. Thus we conclude that $\mathcal{T}(\mathbb{B}_{r})$ is equicontinuous and therefore $\mathcal{T}(\mathbb{B}_{r})$ is relatively compact. As a consequence of Steps 1 to 3 together with the Arzel\`{a}-Ascoli theorem, we conclude that $\mathcal{T}:\mathbb{B}_{r}\rightarrow \mathbb{B}_{r}$ is a completely continuous operator.
An application of Schauder's fixed point theorem shows that there exists at least one fixed point $z$ of $\mathcal{T}$ in $C_{1-\gamma }[a,b]$. This fixed point $z$ is a solution to (\ref{e8.1})-(\ref{e8.2}) in $C_{1-\gamma }^{\gamma }[a,b]$, and the proof is complete. \end{proof}
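As an informal numerical sanity check (not part of the proof): the uniform bounds in Steps 1-3 rest on the Beta integral $\int_{0}^{1}u^{\gamma -1}(1-u)^{\alpha -1}du=\Gamma (\gamma )\Gamma (\alpha )/\Gamma (\gamma +\alpha )$, obtained from $\int_{a}^{t}(t-s)^{\alpha -1}(s-a)^{\gamma -1}ds$ via the substitution $s=a+(t-a)u$. The sketch below, with the illustrative values $\gamma =2/3$, $\alpha =1/2$ taken from the example in Section \ref{Sec5}, reproduces this identity with a midpoint rule (which avoids the integrable endpoint singularities).

```python
import math

gamma_, alpha = 2.0 / 3.0, 0.5  # illustrative values from the example section
n = 400_000                     # midpoints avoid the integrable endpoint singularities

h = 1.0 / n
# B(gamma, alpha) = integral_0^1 u^(gamma-1) (1-u)^(alpha-1) du
quad = sum(((i + 0.5) * h) ** (gamma_ - 1.0) * (1.0 - (i + 0.5) * h) ** (alpha - 1.0)
           for i in range(n)) * h
closed = math.gamma(gamma_) * math.gamma(alpha) / math.gamma(gamma_ + alpha)
print(quad, closed)
assert abs(quad - closed) < 1e-2
```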
We next establish an existence result by means of Schaefer's fixed point theorem. To this end, we replace hypothesis (H1) with the following one:
\begin{itemize} \item[ (H3)] $f:(a,b]\times
\mathbb{R}
\rightarrow
\mathbb{R}
$ is a function such that $f(\cdot ,z(\cdot ))\in C_{1-\gamma }^{\beta (1-\alpha )}[a,b]$ for any $z\in C_{1-\gamma }[a,b]$ and there exists a function $\eta \in C_{1-\gamma }[a,b]$ such that \begin{equation*} \left\vert f\big(t,z\big)\right\vert \leq \eta (t),\text{ for all }t\in (a,b],\text{ }z\in
\mathbb{R}
. \end{equation*} \end{itemize}
\begin{theorem} \label{th3.3} Assume that (H3) holds. Then the Hilfer boundary value problem \eqref{e8.1}-\eqref{e8.2} has at least one solution in $C_{1-\gamma }^{\gamma }[a,b]\subset C_{1-\gamma }^{\alpha ,\beta }[a,b]$. \end{theorem}
\begin{proof} As in the proof of Theorem \ref{th8.1}, one can repeat Steps 1 to 3 and show that the operator $\mathcal{T}:C_{1-\gamma }[a,b]\longrightarrow C_{1-\gamma }[a,b]$ defined in (\ref{e8.10}) is completely continuous. It remains to prove that \begin{equation*} \Delta =\left\{ z\in C_{1-\gamma }[a,b]:z=\lambda \mathcal{T}z, \text{ for some }\lambda \in (0,1)\right\} \end{equation*} is a bounded set. Let $z\in \Delta $ and $\lambda \in (0,1)$ be such that $ z=\lambda \mathcal{T}z.$ By hypothesis (H3) and Eq.~(\ref{e8.10}), for all $t\in (a,b]$ we have \begin{align*} & \left\vert \mathcal{T}z(t)(t-a)^{1-\gamma }\right\vert \\ & \leq \frac{1}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }+\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\gamma )}\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha -\gamma }\eta (s)ds \\ & +\frac{(t-a)^{1-\gamma }}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}\eta (s)ds \\ & \leq \frac{1}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }+\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\gamma )}\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha -\gamma }(s-a)^{\gamma -1}\Vert \eta \Vert _{C_{1-\gamma }}ds \\ & +\frac{(t-a)^{1-\gamma }}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}(s-a)^{\gamma -1}\Vert \eta \Vert _{C_{1-\gamma }}ds.
\end{align*} That is, \begin{eqnarray} \Vert \mathcal{T}z\Vert _{C_{1-\gamma }} &\leq &\frac{1}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) } \notag \\ &&+\left[ \left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\alpha +1)}+\frac{\Gamma (\gamma )}{\Gamma (\gamma +\alpha )}\right] (b-a)^{\alpha }\Vert \eta \Vert _{C_{1-\gamma }} \notag \\ &:&=\ell . \label{ss3} \end{eqnarray}
Since $\lambda \in (0,1)$, we have $\Vert z\Vert _{C_{1-\gamma }}=\lambda \Vert \mathcal{T}z\Vert _{C_{1-\gamma }}<\Vert \mathcal{T}z\Vert _{C_{1-\gamma }}\leq \ell $ by Eq.~(\ref{ss3}), which proves that $\Delta $ is bounded. An application of Schaefer's fixed point theorem completes the proof. \end{proof}
Finally, we present an existence result for the problem \eqref{e8.1}-\eqref{e8.2} based on Krasnosel'skii's fixed point theorem. To this end, we replace hypothesis (H1) with the following one:
\begin{itemize} \item[ (H4)] $f:(a,b]\times
\mathbb{R}
\rightarrow
\mathbb{R}
$ is a function such that $f(\cdot ,z(\cdot ))\in C_{1-\gamma }^{\beta (1-\alpha )}[a,b]$ for any $z\in C_{1-\gamma }[a,b]$ and there exists a constant $L>0$ such that \begin{equation*} \left\vert f\big(t,z\big)-f\big(t,w\big)\right\vert \leq L\left\vert z-w\right\vert ,\text{ }\forall t\in (a,b],\text{ }z,w\in
\mathbb{R}
\end{equation*} We also consider the following hypothesis:
\item[ (H5)] The inequality \begin{equation*} \mathcal{W}:=\left[ \left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\alpha +1)}+\frac{\Gamma (\gamma )}{\Gamma (\gamma +\alpha )}\right] (b-a)^{\alpha }L<1 \end{equation*} holds. \end{itemize}
\begin{theorem} \label{th3.4} Assume that hypotheses (H4) and (H5) are satisfied, and that \begin{equation} \left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{(b-a)^{\alpha }}{\Gamma (\alpha +1)}L<1. \label{e2} \end{equation} Then the Hilfer boundary value problem \eqref{e8.1}-\eqref{e8.2} has at least one solution in $C_{1-\gamma }^{\gamma }[a,b]\subset C_{1-\gamma }^{\alpha ,\beta }[a,b]$. \end{theorem}
\begin{proof} Consider the operator $\mathcal{T}$ defined as in Theorem \ref{th8.1}.
We decompose $\mathcal{T}$ into the sum of two operators $\mathcal{T}_{1}+\mathcal{T}_{2}$ as follows: \begin{equation*} \mathcal{T}_{1}z(t)=-\frac{1}{\left( 1+\frac{c}{d}\right) }\frac{(t-a)^{\gamma -1}}{\Gamma (\gamma )}\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha -\gamma }f(s,z(s))ds \end{equation*} and \begin{equation*} \mathcal{T}_{2}z(t)=\frac{(t-a)^{\gamma -1}}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }+\frac{1}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}f(s,z(s))ds. \end{equation*}
Set $\widetilde{f}(s)=f(s,0)$ and consider the ball $\mathbb{B}_{\epsilon }=\{z\in C_{1-\gamma }[a,b]:\left\Vert z\right\Vert _{C_{1-\gamma }}\leq \epsilon \}$ with $\epsilon \geq \frac{\Lambda }{1-\mathcal{W}}$, $\mathcal{W}<1$, where \begin{eqnarray} \Lambda &=&\left[ \left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\alpha +1)}+\frac{\Gamma (\gamma )}{\Gamma (\gamma +\alpha )}\right] (b-a)^{\alpha }\left\Vert \widetilde{f}\right\Vert _{C_{1-\gamma }} \notag \\ &&+\frac{1}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }. \label{E1} \end{eqnarray}
The proof will be given in three stages.
\textbf{Stage 1:} We prove that $\mathcal{T}_{1}z+\mathcal{T}_{2}w\in \mathbb{B}_{\epsilon }$ for every $z,w\in \mathbb{B}_{\epsilon }.$
By assumption (H4), for every $z\in \mathbb{B}_{\epsilon }$ and $t\in (a,b]$, we have \begin{eqnarray*} &&\left\vert (t-a)^{1-\gamma }\mathcal{T}_{1}z(t)\right\vert \\ &\leq &\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\gamma )}\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha -\gamma }\bigg[\left\vert f(s,z(s))-f(s,0)\right\vert +\left\vert f(s,0)\right\vert \bigg]ds \\ &\leq &\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\gamma )}\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha -\gamma }(s-a)^{\gamma -1}\bigg[L\left\Vert z\right\Vert _{C_{1-\gamma }}+\left\Vert \widetilde{f}\right\Vert _{C_{1-\gamma }}\bigg]ds \\ &\leq &\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{(b-a)^{\alpha }}{\Gamma (\alpha +1)}\bigg[L\epsilon +\left\Vert \widetilde{f}\right\Vert _{C_{1-\gamma }}\bigg]. \end{eqnarray*}
This gives \begin{eqnarray} \left\Vert \mathcal{T}_{1}z\right\Vert _{C_{1-\gamma }} &\leq &\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{(b-a)^{\alpha }}{\Gamma (\alpha +1)}\bigg[L\epsilon +\left\Vert \widetilde{f}\right\Vert _{C_{1-\gamma }}\bigg]. \label{q1} \end{eqnarray}
For the operator $\mathcal{T}_{2}$, we have \begin{eqnarray*} &&\left\vert (t-a)^{1-\gamma }\mathcal{T}_{2}w(t)\right\vert \\ &\leq &\frac{1}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }+\frac{(t-a)^{1-\gamma }}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}\bigg[\left\vert f(s,w(s))-f(s,0)\right\vert +\left\vert f(s,0)\right\vert \bigg]ds \\ &\leq &\frac{1}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }+\frac{(t-a)^{1-\gamma }}{\Gamma (\alpha )}\int_{a}^{t}(t-s)^{\alpha -1}(s-a)^{\gamma -1}\bigg[L\left\Vert w\right\Vert _{C_{1-\gamma }}+\left\Vert \widetilde{f}\right\Vert _{C_{1-\gamma }}\bigg]ds. \end{eqnarray*}
For every $w\in \mathbb{B}_{\epsilon }$ and $t\in (a,b]$, this gives \begin{eqnarray} \left\Vert \mathcal{T}_{2}w\right\Vert _{C_{1-\gamma }} &\leq &\frac{1}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) } \notag \\ &&+\frac{\Gamma (\gamma )(b-a)^{\alpha }}{\Gamma (\gamma +\alpha )}\bigg[L\epsilon +\left\Vert \widetilde{f}\right\Vert _{C_{1-\gamma }}\bigg]. \label{q2} \end{eqnarray}
From Eqs.~(\ref{q1}) and (\ref{q2}), using hypothesis (H5) with Eq.~(\ref{E1}), we get \begin{eqnarray*} \left\Vert \mathcal{T}_{1}z+\mathcal{T}_{2}w\right\Vert _{C_{1-\gamma }} &\leq &\left\Vert \mathcal{T}_{1}z\right\Vert _{C_{1-\gamma }}+\left\Vert \mathcal{T}_{2}w\right\Vert _{C_{1-\gamma }} \\ &\leq &\left[ \left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\alpha +1)}+\frac{\Gamma (\gamma )}{\Gamma (\gamma +\alpha )}\right] (b-a)^{\alpha }\bigg[L\epsilon +\left\Vert \widetilde{f}\right\Vert _{C_{1-\gamma }}\bigg] \\ &&+\frac{1}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) } \\ &=&\mathcal{W}\epsilon +\Lambda \leq \mathcal{W}\epsilon +(1-\mathcal{W})\epsilon =\epsilon . \end{eqnarray*}
This proves that $\mathcal{T}_{1}z+\mathcal{T}_{2}w\in \mathbb{B}_{\epsilon }$ for every $z,w\in \mathbb{B}_{\epsilon }.$
\textbf{Stage 2:} We prove that the operator $\mathcal{T}_{1}$ is a contraction mapping on $\mathbb{B}_{\epsilon }.$
For any $z,w\in \mathbb{B}_{\epsilon }$ and $t\in (a,b]$, by assumption (H4) we have \begin{eqnarray*} &&\left\vert (t-a)^{1-\gamma }\mathcal{T}_{1}z(t)-(t-a)^{1-\gamma }\mathcal{T}_{1}w(t)\right\vert \\ &\leq &\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\gamma )}\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha -\gamma }\left\vert f(s,z(s))-f(s,w(s))\right\vert ds \\ &\leq &\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{1}{\Gamma (\gamma )}\frac{1}{\Gamma (1-\gamma +\alpha )}\int_{a}^{b}(b-s)^{\alpha -\gamma }(s-a)^{\gamma -1}L\left\vert (s-a)^{1-\gamma }\big(z(s)-w(s)\big)\right\vert ds \\ &\leq &\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{(b-a)^{\alpha }}{\Gamma (\alpha +1)}L\left\Vert z-w\right\Vert _{C_{1-\gamma }}. \end{eqnarray*} This gives \begin{eqnarray*} \left\Vert \mathcal{T}_{1}z-\mathcal{T}_{1}w\right\Vert _{C_{1-\gamma }} &\leq &\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \frac{(b-a)^{\alpha }}{\Gamma (\alpha +1)}L\left\Vert z-w\right\Vert _{C_{1-\gamma }}. \end{eqnarray*}
Hence the operator $\mathcal{T}_{1}$ is a contraction mapping by assumption \eqref{e2}.
\textbf{Stage 3:} We show that the operator ${\large \mathcal{T}}_{2}$ is completely continuous on $\mathbb{B}_{\epsilon }.$
Firstly, from the continuity of $f$, we conclude that the operator $\mathcal{T}_{2}:\mathbb{B}_{\epsilon }\rightarrow C_{1-\gamma }[a,b]$ is continuous on $\mathbb{B}_{\epsilon }$. Next, we show that there exists some $\epsilon ^{\prime }>0$ such that $\left\Vert \mathcal{T}_{2}z\right\Vert _{C_{1-\gamma }}\leq \epsilon ^{\prime }$ for all $z\in \mathbb{B}_{\epsilon }$. According to Stage 1, for $z\in \mathbb{B}_{\epsilon }$ we know that \begin{eqnarray*} \left\Vert \mathcal{T}_{2}z\right\Vert _{C_{1-\gamma }} &\leq &\frac{1}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }+\frac{\Gamma (\gamma )(b-a)^{\alpha }}{\Gamma (\gamma +\alpha )}\bigg[L\epsilon +\left\Vert \widetilde{f}\right\Vert _{C_{1-\gamma }}\bigg], \end{eqnarray*} where the right-hand side is independent of $t$ and $z$. Hence, taking \begin{equation*} \epsilon ^{\prime }=\frac{1}{\Gamma (\gamma )}\frac{e_{k}}{d\left( 1+\frac{c}{d}\right) }+\frac{\Gamma (\gamma )(b-a)^{\alpha }}{\Gamma (\gamma +\alpha )}\bigg[L\epsilon +\left\Vert \widetilde{f}\right\Vert _{C_{1-\gamma }}\bigg], \end{equation*} we get $\left\Vert \mathcal{T}_{2}z\right\Vert _{C_{1-\gamma }}\leq \epsilon ^{\prime }$, so $\mathcal{T}_{2}(\mathbb{B}_{\epsilon })$ is uniformly bounded.
Finally, to prove that $\mathcal{T}_{2}$ is equicontinuous on $\mathbb{B}_{\epsilon }$, for any $z\in \mathbb{B}_{\epsilon }$ and $t_{1},t_{2}\in (a,b]$ with $t_{1}<t_{2}$, we note that the first term of $(t-a)^{1-\gamma }\mathcal{T}_{2}z(t)$ is independent of $t$, hence \begin{eqnarray*} &&\left\vert (t_{2}-a)^{1-\gamma }\mathcal{T}_{2}z(t_{2})-(t_{1}-a)^{1-\gamma }\mathcal{T}_{2}z(t_{1})\right\vert \\ &=&\left\vert \frac{(t_{2}-a)^{1-\gamma }}{\Gamma (\alpha )}\int_{a}^{t_{2}}(t_{2}-s)^{\alpha -1}f(s,z(s))ds-\frac{(t_{1}-a)^{1-\gamma }}{\Gamma (\alpha )}\int_{a}^{t_{1}}(t_{1}-s)^{\alpha -1}f(s,z(s))ds\right\vert \\ &\leq &\left\vert \frac{(t_{2}-a)^{1-\gamma }}{\Gamma (\alpha )}\int_{a}^{t_{2}}(t_{2}-s)^{\alpha -1}(s-a)^{\gamma -1}\left\Vert f\right\Vert _{C_{1-\gamma }}ds\right. \\ &&\left. -\frac{(t_{1}-a)^{1-\gamma }}{\Gamma (\alpha )}\int_{a}^{t_{1}}(t_{1}-s)^{\alpha -1}(s-a)^{\gamma -1}\left\Vert f\right\Vert _{C_{1-\gamma }}ds\right\vert \\ &=&\frac{\mathcal{B}(\gamma ,\alpha )}{\Gamma (\alpha )}\left\Vert f\right\Vert _{C_{1-\gamma }}\left\vert (t_{2}-a)^{\alpha }-(t_{1}-a)^{\alpha }\right\vert . \end{eqnarray*} Observe that the right-hand side of the above inequality is independent of $z$, so \begin{equation*} \left\vert (t_{2}-a)^{1-\gamma }\mathcal{T}_{2}z(t_{2})-(t_{1}-a)^{1-\gamma }\mathcal{T}_{2}z(t_{1})\right\vert \rightarrow 0,\text{ as }|t_{2}-t_{1}|\rightarrow 0. \end{equation*} This proves that $\mathcal{T}_{2}$ is equicontinuous on $\mathbb{B}_{\epsilon }$. In view of the Arzel\`{a}-Ascoli theorem, it follows that $\mathcal{T}_{2}(\mathbb{B}_{\epsilon })$ is relatively compact. As a consequence of Krasnosel'skii's fixed point theorem, we conclude that the problem \eqref{e8.1}-\eqref{e8.2} has at least one solution. \end{proof}
\begin{corollary} Assume that hypotheses (H4) and (H5) are satisfied. Then the Hilfer boundary value problem \eqref{e8.1}-\eqref{e8.2} has a unique solution in $C_{1-\gamma }^{\gamma }[a,b]\subset C_{1-\gamma }^{\alpha ,\beta }[a,b]$. \end{corollary}
\section{An example \label{Sec5}}
Consider the Hilfer fractional differential equation with boundary condition \begin{equation} \begin{cases} D_{a^{+}}^{\alpha ,\beta }z(t)=f\big(t,z(t)\big),t\in (0,1],0<\alpha <1,0\leq \beta \leq 1, \\ I_{a^{+}}^{1-\gamma }\big[\frac{1}{4}z(0^{+})+\frac{3}{4}z(1^{-})\big]=\frac{ 2}{5},~~\alpha \leq \gamma =\alpha +\beta -\alpha \beta , \end{cases} \label{3} \end{equation} where $\alpha =\frac{1}{2},\beta =\frac{1}{3}$, $\gamma =\frac{2}{3}$, $c= \frac{1}{4}$, $d=\frac{3}{4}$, $e_{1}=\frac{2}{5},$ and \begin{equation*} f\big(t,z(t)\big)=t^{\frac{-1}{6}}+\frac{1}{16}t^{\frac{5}{6}}\sin z(t). \end{equation*} Clearly, $t^{\frac{1}{3}}f\big(t,z(t)\big)=t^{\frac{1}{6}}+\frac{1}{16}t^{ \frac{7}{6}}\sin z(t)\in C[0,1],$ hence $f\big(t,z(t)\big)\in C_{\frac{1}{3} }[0,1].$ Observe that, for any $z\in
\mathbb{R}
^{+}$ and $t\in (0,1],$
\begin{eqnarray*} \left\vert f\big(t,z(t)\big)\right\vert &\leq &t^{-\frac{1}{6}}\left( 1+\frac{1}{16}t^{\frac{2}{3}}\left\vert t^{\frac{1}{3}}z(t)\right\vert \right) \\ &\leq &t^{-\frac{1}{6}}\left( 1+\frac{1}{16}\left\Vert z\right\Vert _{C_{\frac{1}{3}}}\right) . \end{eqnarray*}
Therefore, condition (H1) is satisfied with $N=1$ and $\zeta =\frac{1}{16}$. It is easy to check that (H2) is satisfied too. Indeed, by some calculations, we get \begin{equation} \mathcal{G}=\frac{\Gamma (\gamma -n+1)}{\Gamma (\alpha +1)}\left[ (b-a)^{\alpha }+(b-a)^{\alpha +n-\gamma }\right] N\zeta \simeq 0.19<1. \notag \end{equation}
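This numerical value is easy to reproduce. The following sketch (assuming, as the example suggests, $n=1$, $a=0$, $b=1$) evaluates $\mathcal{G}$ with Python's `math.gamma`:

```python
import math

# Parameters of the example (assumed: n = 1, a = 0, b = 1)
alpha, beta_ = 1 / 2, 1 / 3
gamma_ = alpha + beta_ - alpha * beta_   # = 2/3
n, a, b = 1, 0.0, 1.0
N, zeta = 1.0, 1 / 16

# G = Gamma(gamma-n+1)/Gamma(alpha+1) * [(b-a)^alpha + (b-a)^(alpha+n-gamma)] * N * zeta
G = (math.gamma(gamma_ - n + 1) / math.gamma(alpha + 1)
     * ((b - a) ** alpha + (b - a) ** (alpha + n - gamma_)) * N * zeta)
print(round(G, 2))  # prints 0.19
```

The value $\mathcal{G}\simeq 0.19<1$ confirms the bound claimed above.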
An application of Theorem \ref{th8.1} implies that problem (\ref{3}) has a solution in $C_{\frac{1}{3}}^{\frac{2}{3}}([0,1])$.
Moreover, if $f\big(t,z(t)\big)=t^{-\frac{1}{6}}+\frac{1}{16}t^{\frac{5}{6}}\sin (t)$, then $\left\vert f\big(t,z(t)\big)\right\vert \leq t^{-\frac{1}{6}}+\frac{1}{16}t^{\frac{5}{6}}=\eta (t)\in C_{1-\gamma }[0,1]$. Therefore (H3) holds. An application of Theorem \ref{th3.3} implies that problem (\ref{3}) has a solution in $C_{\frac{1}{3}}^{\frac{2}{3}}([0,1])$.
Finally, if $f\big(t,z(t)\big)=t^{-\frac{1}{6}}+\frac{1}{16}t^{\frac{5}{6}}\sin z(t)$, then for $z,w\in \mathbb{R}^{+}$ and $t\in (0,1]$, we get \begin{equation*} \left\vert f\big(t,z(t)\big)-f\big(t,w(t)\big)\right\vert \leq \frac{1}{16}\left\vert z-w\right\vert . \end{equation*} Thus, hypothesis (H4) is satisfied with $L=\frac{1}{16}$. It is easy to check that hypothesis (H5) and inequality (\ref{e2}) are satisfied. Indeed, by some calculations, we get \begin{eqnarray*} \mathcal{W} &:=&\bigg[\left\vert \frac{1}{1+\frac{c}{d}}\right\vert \sum_{k=1}^{n}\frac{(b-a)^{n-k}}{\Gamma (\gamma -k+1)}+\frac{\mathcal{B}(\gamma -n,\alpha +1)}{\Gamma (\gamma -n)}\bigg] \\ &&\times \frac{\Gamma (\gamma -n)(b-a)^{\alpha }}{\mathcal{B}(\gamma -n,1)\Gamma (\alpha +1)}L\simeq 0.14<1 \end{eqnarray*} and \begin{equation} \left\vert \frac{1}{1+\frac{c}{d}}\right\vert \sum_{k=1}^{n}\frac{(b-a)^{n-k+\alpha }}{\Gamma (\gamma -k+1)}\frac{\Gamma (\gamma -n+1)}{\Gamma (\alpha +1)}L\simeq 0.05<1. \notag \end{equation} An application of Theorem \ref{th3.4} implies that problem (\ref{3}) has a solution in $C_{\frac{1}{3}}^{\frac{2}{3}}[0,1]$.
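Both displayed quantities can be reproduced directly. The sketch below uses the example's data and, as assumptions, $n=1$, $a=0$, $b=1$; note that $\Gamma$ and $\mathcal{B}$ are evaluated at the negative non-integer argument $\gamma -n=-\frac{1}{3}$, which `math.gamma` handles:

```python
import math

def beta(x, y):
    # Euler Beta function via Gamma; valid for negative non-integer arguments
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

# Data of the example (assumed: n = 1, a = 0, b = 1)
alpha, gamma_ = 1 / 2, 2 / 3
n, a, b, L = 1, 0.0, 1.0, 1 / 16
c, d = 1 / 4, 3 / 4

s1 = sum((b - a) ** (n - k) / math.gamma(gamma_ - k + 1) for k in range(1, n + 1))
W = ((abs(1 / (1 + c / d)) * s1 + beta(gamma_ - n, alpha + 1) / math.gamma(gamma_ - n))
     * math.gamma(gamma_ - n) * (b - a) ** alpha
     / (beta(gamma_ - n, 1) * math.gamma(alpha + 1)) * L)

bound2 = (abs(1 / (1 + c / d))
          * sum((b - a) ** (n - k + alpha) / math.gamma(gamma_ - k + 1)
                for k in range(1, n + 1))
          * math.gamma(gamma_ - n + 1) / math.gamma(alpha + 1) * L)

print(round(W, 2), round(bound2, 2))  # prints 0.14 0.05
```

Both values stay below $1$, as required.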
\section{Conclusion}
We have obtained existence results for solutions of a boundary value problem for Hilfer fractional differential equations, based on the reduction of fractional differential equations to integral equations. The techniques employed, namely fixed point theorems, are quite general and effective. We trust that the results reported here will have a positive impact on the development of further applications in engineering and applied sciences.
\end{document}
\begin{document}
\maketitle
\begin{abstract} A Weyl arrangement is the hyperplane arrangement defined by a root system.
Saito proved that every Weyl arrangement is free. The Weyl subarrangements of type $A_{\ell}$ are represented by simple graphs, and Stanley gave a characterization of freeness for this type of arrangements in terms of their graphs. In addition, the Weyl subarrangements of type $B_{\ell}$ can be represented by signed graphs. A characterization of freeness for them is not known; however, characterizations of freeness for a few restricted classes are known. For instance, Edelman and Reiner characterized the freeness of the arrangements between type $ A_{\ell-1} $ and type $ B_{\ell} $. In this paper, we give a characterization of the freeness and supersolvability of the Weyl subarrangements of type $B_{\ell}$ under a certain assumption. \end{abstract}
{\footnotesize \textit{Keywords}: Hyperplane arrangement, Graphic arrangement, Weyl arrangement, Free arrangement, Supersolvable arrangement, Chordal graph, Signed graph }
{\footnotesize \textit{2010 MSC}: 52C35, 32S22, 05C15, 05C22, 20F55, 13N15 }
\section{Introduction}\label{sec:introduction} A (central) \textbf{hyperplane arrangement} (or simply an arrangement) is a finite collection of linear hyperplanes, that is, subspaces of codimension one in a vector space. In this paper, we will focus on \textbf{freeness} and \textbf{supersolvability} of arrangements. (See Section \ref{sec:arrangement} for the definitions and results of free and supersolvable arrangements.)
A Weyl arrangement is a collection of the hyperplanes orthogonal to the roots of a root system.
Saito \cite{saito1977unifomization, saito1980theory} proved that Weyl arrangements are free.
For a root system of type $ A_{\ell-1} $, the corresponding Weyl arrangement is also known as the braid arrangement. We can associate to each simple graph $ G $ on vertex set $ \{1, \dots, \ell \} $ a subarrangement of the braid arrangement \begin{align*}
\mathcal{A}(G)\coloneqq \Set{\{x_{i}-x_{j}=0\} | \{i,j\} \text{ is an edge of } G}, \end{align*} where $ x_{1}, \dots, x_{\ell} $ denote the coordinates and $ \{\alpha = 0\} $ denotes the hyperplane $ \Ker(\alpha) $ for a linear form $ \alpha $. The arrangement $ \mathcal{A}(G) $ is called a \textbf{graphic arrangement}. Obviously, every subarrangement of a braid arrangement is a graphic arrangement. A simple graph $ G $ is called \textbf{chordal} if every cycle of length at least four has a \textbf{chord}, which is an edge of $ G $ that is not part of the cycle but connects two vertices of the cycle. Stanley characterized freeness and supersolvability of graphic arrangements as follows: \begin{theorem}[Stanley {\cite[Corollary 4.10]{stanley2007introduction}} ]\label{Stanley graphic arrangement} For a simple graph $ G $, the following are equivalent: \begin{enumerate}[(1)] \item $ G $ is chordal. \item $ \mathcal{A}(G) $ is supersolvable. \item $ \mathcal{A}(G) $ is free. \end{enumerate} \end{theorem} Thus, the freeness of Weyl subarrangements of type $ A_{\ell} $ is completely characterized in combinatorial terms. For a Weyl arrangement of arbitrary type, only a sufficient condition for freeness is known \cite{abe2016freeness-jotems}. Apart from the type $A_{\ell}$ case, no characterization of the freeness of Weyl subarrangements is known.
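Stanley's theorem reduces freeness of graphic arrangements to a purely combinatorial test. As an illustration (not part of the paper), chordality can be checked with a maximum cardinality search: by a classical result of Tarjan and Yannakakis, the visiting order of an MCS has the property that the earlier neighbours of each vertex form a clique exactly when the graph is chordal.

```python
def is_chordal(vertices, edges):
    """Chordality test via maximum cardinality search (MCS)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    weight = {v: 0 for v in vertices}
    order, unvisited = [], set(vertices)
    while unvisited:
        # visit an unvisited vertex with the most visited neighbours
        v = max(unvisited, key=lambda u: weight[u])
        order.append(v)
        unvisited.remove(v)
        for w in adj[v] & unvisited:
            weight[w] += 1
    pos = {v: i for i, v in enumerate(order)}
    # the order is a perfect elimination ordering iff, for each vertex,
    # its earlier neighbours form a clique
    for i, v in enumerate(order):
        earlier = [w for w in adj[v] if pos[w] < i]
        if any(u not in adj[w] for u in earlier for w in earlier if u != w):
            return False
    return True

# A 4-cycle is not chordal, so A(C_4) is non-free; adding a chord makes it free.
C4 = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(is_chordal([1, 2, 3, 4], C4))             # prints False
print(is_chordal([1, 2, 3, 4], C4 + [(1, 3)]))  # prints True
```

Brute-force verification of the clique condition suffices here; for large graphs the standard linear-time refinement would be used instead.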
In this paper, we study the freeness of Weyl subarrangements of type $ B_{\ell} $. For our purpose, we define a signed graph and the corresponding arrangement as follows. See Zaslavsky \cite{zaslavsky1982signed-dam} for a general treatment of signed graphs. \begin{definition} A \textbf{signed graph} is a tuple $ G=(G^{+},G^{-},L_{G}) $, where $ G^{+}=(V_{G},E_{G}^{+}) $ and $ G^{-}=(V_{G},E_{G}^{-}) $ are simple graphs on the same vertex set $ V_{G} $ and $ L_{G} $ is a subset of $ V_{G} $, which is called the set of \textbf{loops}. An element of $ E_{G}^{+} $ (resp. $ E_{G}^{-} $) is called a \textbf{positive edge} (resp. a \textbf{negative edge}). When we do not consider loops, we write a signed graph as $ (G^{+}, G^{-}) $. \end{definition} \begin{definition} Let $ G=(G^{+},G^{-},L_{G}) $ be a signed graph on vertex set $ \{1, \dots, \ell\} $. Define the \textbf{signed-graphic arrangement} $ \mathcal{A}(G) $ in the $ \ell $-dimensional vector space over $ \mathbb{R} $ (or any field of characteristic zero) by \begin{multline*}
\mathcal{A}(G) \coloneqq \Set{\{x_{i}-x_{j}=0\} | \{i,j\} \in E_{G}^{+}} \\
\cup \Set{\{x_{i}+x_{j}=0\} | \{i,j\} \in E_{G}^{-}}
\cup \Set{ \{x_{i}=0\} | i \in L_{G}}. \end{multline*} \end{definition} Note that every simple graph $ G $ on $ \ell $ vertices can be naturally regarded as a signed graph $ G=(G, \overline{K}_{\ell}, \varnothing) $, where $ \overline{K}_{\ell} $ denotes the edgeless graph on $ \ell $ vertices, and hence a signed-graphic arrangement is a generalization of a graphic arrangement.
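For experimentation, a signed-graphic arrangement is conveniently encoded by the normal vectors of its hyperplanes. A minimal sketch (the encoding and function name are ours, not from the paper):

```python
def signed_graphic_arrangement(l, pos_edges, neg_edges, loops):
    """Normals (as coefficient tuples in R^l) of A(G) for a signed graph
    G = (G^+, G^-, L_G) on the vertex set {1, ..., l}."""
    normals = []
    for (i, j) in pos_edges:   # hyperplane x_i - x_j = 0
        v = [0] * l
        v[i - 1], v[j - 1] = 1, -1
        normals.append(tuple(v))
    for (i, j) in neg_edges:   # hyperplane x_i + x_j = 0
        v = [0] * l
        v[i - 1], v[j - 1] = 1, 1
        normals.append(tuple(v))
    for i in loops:            # hyperplane x_i = 0
        v = [0] * l
        v[i - 1] = 1
        normals.append(tuple(v))
    return normals

# The Weyl arrangement of type B_2 = A(K_2, K_2, {1, 2}): four hyperplanes.
print(signed_graphic_arrangement(2, [(1, 2)], [(1, 2)], [1, 2]))
# prints [(1, -1), (1, 1), (1, 0), (0, 1)]
```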
We call the signed graph $ B_{\ell}\coloneqq(K_{\ell},K_{\ell},\{1,\dots,\ell\}) $ the \textbf{complete signed graph with loops}, where $ K_{\ell} $ denotes the complete graph on $ \ell $ vertices. The corresponding signed-graphic arrangement $ \mathcal{A}(B_{\ell}) $ is the Weyl arrangement of type $ B_{\ell} $. Moreover, every subarrangement of $ \mathcal{A}(B_{\ell}) $ is a signed-graphic arrangement of some signed graph, and vice versa.
Zaslavsky (Theorem \ref{Zaslavsky SS}) characterized supersolvability of signed-graphic arrangements. Edelman and Reiner characterized freeness and supersolvability of arrangements between type $ A_{\ell-1} $ and type $ B_{\ell} $. The signed graphs $ G $ corresponding to these arrangements satisfy $ G^{+} = K_{\ell} $. Figure \ref{Fig:known results} summarizes the known results for supersolvability and freeness.
\begin{figure}
\caption{Summary of known results for freeness and supersolvablility}
\label{Fig:known results}
\end{figure}
A \textbf{cycle} of length $ k \geq 3 $ (a $ k $-cycle for short) is a sequence of distinct vertices $ v_{1}, \dots , v_{k} $ with edges $ \{v_{1},v_{2}\}, \dots , \{v_{k-1}, v_{k}\}, \{v_{k},v_{1}\} $ (positive or negative edges are allowed). A cycle is called \textbf{balanced} if it has an even number of negative edges. Otherwise, we call it \textbf{unbalanced}. A signed graph $ G $ is called \textbf{balanced chordal} if every balanced cycle of length at least four has a \textbf{balanced chord}, which is an edge of $ G $ that is not part of the cycle but connects two vertices of the cycle, and moreover separates the cycle into two balanced cycles. Note that balanced chordality has nothing to do with the loop set.
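Balance is a parity condition, so both balance of a cycle and the balanced-chord condition are immediate to test. A small illustrative sketch (the sign encoding by $\pm 1$ is ours):

```python
def is_balanced(signs):
    """A cycle is balanced iff it contains an even number of negative edges."""
    return sum(s < 0 for s in signs) % 2 == 0

def splits_into_balanced(cycle_signs, i, j, chord_sign):
    """Does a chord (with sign chord_sign) between vertices i and j split the
    cycle into two balanced cycles?  Edge k joins vertex k to vertex k+1
    (indices modulo the length); we assume 0 <= i < j < len(cycle_signs)."""
    seg1 = cycle_signs[i:j] + [chord_sign]
    seg2 = cycle_signs[j:] + cycle_signs[:i] + [chord_sign]
    return is_balanced(seg1) and is_balanced(seg2)

# A balanced 4-cycle with signs (-, +, -, +):
cycle = [-1, +1, -1, +1]
print(is_balanced(cycle))                     # prints True
# A negative chord between vertices 0 and 2 is a balanced chord ...
print(splits_into_balanced(cycle, 0, 2, -1))  # prints True
# ... while a positive chord between the same vertices is not.
print(splits_into_balanced(cycle, 0, 2, +1))  # prints False
```

Since the parities of the two pieces sum to that of the whole cycle (plus twice the chord's contribution), a chord of a balanced cycle yields either two balanced or two unbalanced cycles, never one of each.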
Our main result is as follows: \begin{theorem}\label{main theorem} Let $ G=(G^{+},G^{-}) $ be a signed graph on vertex set $ V_{G} $. Assume that $ G^{+} \supseteq G^{-} $. Then the following are equivalent: \begin{enumerate}[(1)] \item \label{main theorem 1} $ G $ is balanced chordal. \item \label{main theorem 2} $ \mathcal{A}(G^{+}, G^{-}, V_{G}) $ is free. \item \label{main theorem 3} $ \mathcal{A}(G^{+}, G^{-}, L) $ is free for some loop set $ L \subseteq V_{G} $. \end{enumerate} \end{theorem}
The organization of this paper is as follows. In Section \ref{sec:arrangement}, we give basic definitions and results for free and supersolvable arrangements. In Section \ref{sec:simple graphs}, we introduce theorems for simple graphs which are required in this paper. In Section \ref{sec:signed graphs}, we study signed graphs and the corresponding signed-graphic arrangements. In Section \ref{sec:divisional vertex}, we introduce the notion of divisional edges and vertices of signed graphs. In Section \ref{sec:proof}, we introduce several lemmas and give a proof of Theorem \ref{main theorem}.
\section{Review of free and supersolvable arrangements}\label{sec:arrangement} In this section, we review some basic concepts on arrangements. A standard reference for the theory of arrangements is \cite{orlik1992arrangements}. Throughout this section, the ambient space of an arrangement is the $ \ell $-dimensional vector space $ \mathbb{K}^{\ell} $ over an arbitrary field $ \mathbb{K} $. Let $S$ be the symmetric algebra of $ (\mathbb{K}^{\ell})^{\ast} $, which can be identified with the polynomial ring $\mathbb{K}[x_{1},\ldots ,x_{\ell}]$, where $\{x_{1},\ldots ,x_{\ell}\}$ is a basis for $(\mathbb{K}^{\ell})^{*}$. Let $\Der (S)$ denote the module of derivations of $S$: \[ \Der(S) \coloneqq \{ \theta \colon S \rightarrow S \mid \theta \text{ is } \mathbb{K} \text{-linear}, \\ \theta (fg)=\theta (f)g+f\theta (g) \text{ for } f,g \in S \}. \] For an arrangement $\mathcal{A}$ in $ \mathbb{K}^{\ell} $, the module of logarithmic derivations $D(\mathcal{A})$ of $\mathcal{A}$ is defined by \begin{align*} D(\A) &\coloneqq \{ \theta \in \Der(S) \mid \theta (Q(\mathcal{A})) \in Q(\mathcal{A})S \} \\ &= \{ \theta \in \Der(S) \mid \theta(\alpha_{H}) \in \alpha_{H}S \text{ for } H \in \mathcal{A} \}, \end{align*} where $\alpha_{H}$ is a linear form such that $\Ker (\alpha_{H})=H$ and $Q(\mathcal{A}) \coloneqq \prod_{H \in \mathcal{A}}\alpha_{H}$ is the defining polynomial of $\mathcal{A}$. \begin{definition} An arrangement $ \mathcal{A} $ is called free if $ D(\mathcal{A}) $ is a free $ S $-module. \end{definition} When $\mathcal{A}$ is free, the module $D(\mathcal{A})$ has a homogeneous basis $\{ \theta_{1},\ldots ,\theta_{\ell} \}$ and the degrees $ \deg \theta_{1},\ldots ,\deg \theta_{\ell} $ are called the \textbf{exponents} of $\mathcal{A}$.
The rank of an arrangement $ \mathcal{A} $, denoted by $ \rank(\mathcal{A}) $, is the codimension of $ \bigcap_{H \in \mathcal{A}}H $. The \textbf{intersection lattice} $L(\mathcal{A})$ is the set of all intersections of hyperplanes in $\mathcal{A}$, which is partially ordered by reverse inclusion: $ X \leq Y \Leftrightarrow Y \subseteq X $. We say that $\mathcal{A}$ is \textbf{supersolvable} if $L(\mathcal{A})$ is supersolvable as defined by Stanley \cite{stanley1972supersolvable}. In this paper, we omit the definition in detail. However, supersolvability of arrangements is characterized as follows: \begin{theorem}[{Bj{\"o}rner-Edelman-Ziegler \cite[Theorem 4.3]{bjorner1990hyperplane-dcg}}]\label{BEZ} An arrangement $ \A $ is supersolvable if and only if there exists a filtration \begin{align*} \A = \A_{r} \supseteq \A_{r-1} \supseteq \dots \supseteq \A_{1} \end{align*} such that \begin{enumerate}[(1)] \item $ \rank(\A_{i}) = i \quad (i=1,2,\dots, r) $. \item For every $ i \geq 2 $ and any distinct hyperplanes $ H, H^{\prime} \in \A_{i} \setminus \mathcal{A}_{i-1} $, there exists some $ H^{\prime\prime} \in \A_{i-1} $ such that $ H \cap H^{\prime} \subseteq H^{\prime\prime} $. \end{enumerate} \end{theorem}
Jambu and Terao revealed a relation between supersolvability and freeness. \begin{theorem}[Jambu-Terao {\cite[Theorem 4.2]{jambu1984free-aim}}]\label{JT SS=>free} Every supersolvable arrangement is free. \end{theorem} Supersolvability is a combinatorial property, that is, it is determined by the intersection lattice. It is conjectured that freeness of arrangements for a fixed field is also a combinatorial property (Terao Conjecture \cite{terao1983exponents}).
An arrangement $ \mathcal{A} $ is said to be \textbf{independent} if $ \rank(\mathcal{A}) = |\mathcal{A}| $.
Call $ \mathcal{A} $ \textbf{dependent} if $ \rank(\mathcal{A}) < |\mathcal{A}| $. It is easy to show that every independent arrangement is supersolvable and hence free. An arrangement $ \mathcal{A} $ is called \textbf{generic} if every subarrangement of cardinality $ \rank(\mathcal{A}) $ is independent. The following theorem states that, except for trivial cases, generic arrangements are non-free. \begin{theorem}[Rose-Terao {\cite{rose1991free-joa}}, Yuzvinsky {\cite{yuzvinsky1991free-joa}}]\label{RT_Y generic} Let $ \mathcal{A} $ be a generic arrangement.
Suppose that $ |\mathcal{A}| > \rank(\mathcal{A}) \geq 3 $. Then $ \mathcal{A} $ is non-free. \end{theorem}
An arrangement $ \mathcal{A} $ is called a \textbf{circuit} if $ \mathcal{A} $ is minimally dependent, that is, $ \mathcal{A} $ is dependent but $ \mathcal{A} \setminus \{H\} $ is independent for any $ H \in \mathcal{A} $. This terminology stems from matroid theory. We obtain the following corollary of Theorem \ref{RT_Y generic}. \begin{corollary}\label{circuit nonfree} If an arrangement $ \mathcal{A} $ is a circuit, then $ \mathcal{A} $ is generic.
Moreover, if $ |\mathcal{A}| \geq 4 $, then $ \mathcal{A} $ is non-free. \end{corollary}
For every arrangement $ \mathcal{A} $, the one-variable \textbf{M{\"o}bius function} $ \mu(X) $ on $ L(\mathcal{A}) $ is defined recursively by \begin{align*} \sum_{Y \leq X} \mu(Y) = \delta_{\hat{0} \, X}, \end{align*} where $ \delta_{\hat{0} \, X} $ denotes the Kronecker delta and $ \hat{0} $ denotes the minimal element of $ L(\mathcal{A}) $, namely the ambient space. Moreover, we can associate to $ \mathcal{A} $ a polynomial $ \chi(\mathcal{A}, t) \in \mathbb{Z}[t] $, called the \textbf{characteristic polynomial}, defined by \begin{align*} \chi(\mathcal{A},t) \coloneqq \sum_{X \in L(\mathcal{A})} \mu(X)t^{\dim X}. \end{align*} The characteristic polynomial of an arrangement is one of the most important invariants. Terao \cite{terao1981generalized-im} showed that the characteristic polynomial of a free arrangement can be factored into a product of linear factors over $ \mathbb{Z} $ with non-negative roots.
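For small arrangements, $\chi(\mathcal{A},t)$ can be computed by brute force from Whitney's theorem, $\chi(\mathcal{A},t)=\sum_{\mathcal{S}\subseteq \mathcal{A}}(-1)^{|\mathcal{S}|}t^{\ell -\operatorname{rank}(\mathcal{S})}$, a standard identity not stated above. A sketch using exact rational arithmetic (illustrative only):

```python
from fractions import Fraction
from itertools import combinations

def rank(rows):
    """Exact rank of an integer matrix via Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def char_poly_coeffs(normals, l):
    """Coefficients {power: coefficient} of chi(A, t), via Whitney's theorem."""
    coeffs = {}
    for k in range(len(normals) + 1):
        for S in combinations(normals, k):
            p = l - rank(list(S))
            coeffs[p] = coeffs.get(p, 0) + (-1) ** k
    return {p: c for p, c in coeffs.items() if c != 0}

# Braid arrangement A(K_3) in R^3: chi = t(t-1)(t-2) = t^3 - 3t^2 + 2t
normals = [(1, -1, 0), (1, 0, -1), (0, 1, -1)]
print(char_poly_coeffs(normals, 3))  # prints {3: 1, 2: -3, 1: 2}
```

The linear factors $t$, $t-1$, $t-2$ with non-negative roots illustrate Terao's factorization theorem for this free arrangement.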
For an element $ X \in L(\mathcal{A}) $, we define the \textbf{localization} $ \mathcal{A}_{X} $ and the \textbf{restriction} $ \mathcal{A}^{X} $ by \begin{align*}
\mathcal{A}_{X} &\coloneqq \Set{H \in \mathcal{A} | H \supseteq X }, \\
\mathcal{A}^{X} &\coloneqq \Set{H \cap X | H \in \mathcal{A} \setminus \mathcal{A}_{X}}. \end{align*}
\begin{proposition}[Stanley {\cite[Proposition 3.2]{stanley1972supersolvable}}]\label{Stanley SS lattice} Every localization and every restriction of a supersolvable arrangement is supersolvable. \end{proposition}
The following results are quite useful for determining whether an arrangement is free or not. \begin{proposition}[Orlik-Terao {\cite[Theorem 4.37]{orlik1992arrangements}}]\label{OT_localization} Every localization of a free arrangement is free. \end{proposition}
\begin{theorem}[Division Theorem, Abe {\cite[Theorem 1.1]{abe2016divisionally-im}}]\label{Abe Division Theorem} Let $ \mathcal{A} $ be an arrangement. Assume that there exists a hyperplane $ H \in \mathcal{A} $ such that $ \chi(\mathcal{A}^{H}, t) $ divides $ \chi(\mathcal{A},t) $ and $ \mathcal{A}^{H} $ is free. Then $ \mathcal{A} $ is free. \end{theorem}
\section{Preliminaries from simple graphs}\label{sec:simple graphs} In this section, we will introduce some basic notions about simple graphs.
\subsection{Threshold graphs}\label{subsec:threshold graph} As mentioned in Section \ref{sec:introduction}, Edelman and Reiner characterized freeness of subarrangements between type $ A_{\ell-1} $ and $ B_{\ell} $. The characterization requires the notion of threshold graphs.
\begin{definition} Threshold graphs are defined recursively by the following construction: \begin{enumerate}[(1)] \item The single-vertex graph $ K_{1} $ is threshold. \item The graph obtained by adding an isolated vertex to a threshold graph is threshold. \item The graph obtained by adding a dominating vertex to a threshold graph is threshold, where a \textbf{dominating vertex} is a vertex which is adjacent to all other vertices. \end{enumerate} \end{definition}
The \textbf{degree} of a vertex $ v $ in a simple graph $ G $ (denoted $ \deg_{G}(v) $) is the number of incident edges. \begin{definition} An ordering $ (v_{1}, \dots, v_{\ell}) $ of the vertices of a simple graph $ G $ is called a \textbf{degree ordering} if $ \deg_{G}(v_{1}) \geq \dots \geq \deg_{G}(v_{\ell}) $. An \textbf{initial segment} of an ordering $ (v_{1}, \dots, v_{\ell}) $ is a set $ \{v_{1}, \dots, v_{k}\} $ for some $ k $. \end{definition} A simple graph may admit several degree orderings. However, for a threshold graph, there is only one degree ordering up to an automorphism \cite{hammer1981threshold-sjoadm}. We will need the following characterization of threshold graphs. \begin{theorem}[Golumbic {\cite[Corollary 5]{golumbic1978trivially-dm}}]\label{Golumbic} Threshold graphs have a forbidden induced subgraph characterization. Namely, a simple graph is threshold if and only if it contains no induced subgraph isomorphic to $ 2K_{2} $, $ C_{4} $, or $ P_{4} $ (See Figure \ref{Fig:threshold}). \end{theorem}
\begin{figure}
\caption{Forbidden induced subgraphs for threshold graphs}
\label{Fig:threshold}
\end{figure}
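The recursive construction also yields a simple recognition procedure: a graph is threshold exactly when one can repeatedly delete an isolated or a dominating vertex until no vertex remains. An illustrative sketch:

```python
def is_threshold(vertices, edges):
    """Recognize threshold graphs by reversing the recursive construction:
    repeatedly remove an isolated or a dominating vertex."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(vertices)
    while remaining:
        for v in remaining:
            deg = len(adj[v] & remaining)
            if deg == 0 or deg == len(remaining) - 1:
                break  # v is isolated or dominating in the remaining graph
        else:
            return False  # no removable vertex exists
        remaining.remove(v)
    return True

# P_4 (a path on four vertices) is a forbidden induced subgraph, hence not threshold.
print(is_threshold([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))  # prints False
# A star K_{1,3} is threshold: start from K_1, add isolated vertices, then a dominating one.
print(is_threshold([1, 2, 3, 4], [(1, 2), (1, 3), (1, 4)]))  # prints True
```

The greedy deletion is safe because threshold graphs are closed under taking induced subgraphs.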
\subsection{Menger's theorem} Given a simple graph $ G=(V_{G},E_{G}) $ and a subset $ W \subseteq V_{G} $, let $ G[W] $ denote the subgraph induced by $ W $ and let $ G \setminus W $ denote $ G[V_{G} \setminus W] $. For non-adjacent vertices $ a,b $ which belong to the same connected component of $ G $, a non-empty subset $ S \subseteq V_{G} $ is called an \textbf{$ (a,b) $-separator} if the vertices $ a,b $ belong to distinct connected components of $ G \setminus S $. An $ (a,b) $-separator is \textbf{minimal} if no proper subset of it is an $ (a,b) $-separator. Moreover, a subset $ S \subseteq V_{G} $ is called a \textbf{minimal vertex separator} if $ S $ is a minimal $ (a,b) $-separator for some vertices $ a,b $. Note that a proper subset of a minimal vertex separator can be a minimal vertex separator for other vertices.
Let $ p_{1}, \dots, p_{n} $ be paths from a vertex $ a $ to another vertex $ b $. The paths $ p_{1}, \dots, p_{n} $ are \textbf{internally disjoint} if they do not have any internal vertex in common. A path is called \textbf{induced} if it is an induced subgraph, that is, there are no edges connecting non-consecutive vertices in the path. \begin{theorem}[Menger {\cite{menger1927zur-fm}}]\label{Menger} Let $ a,b $ be non-adjacent vertices of a connected graph. Then the minimum of the cardinalities of minimal $ (a,b) $-separators equals the maximum number of internally disjoint induced paths from $ a $ to $ b $. \end{theorem} In this paper, the following corollary is required: \begin{corollary}\label{Menger cor} Let $ a,b $ be non-adjacent vertices of a connected graph $ G $ and $ S $ a minimal $ (a,b) $-separator of minimal cardinality. Let $ u,v $ be distinct vertices in $ S $. Then there exists a cycle of $ G $ such that it contains the vertices $ a,b,u,v $, it intersects $ S $ at $ \{u,v\} $, and it consists of two induced paths from $ a $ to $ b $. \end{corollary} \begin{proof}
By Theorem \ref{Menger}, there exists a set $ \Set{p_{s} | s \in S} $ of internally disjoint induced paths from $ a $ to $ b $, where the path $ p_{s} $ intersects $ S $ at $ \{ s \} $ for every $ s \in S $. The cycle obtained by connecting $ p_{u} $ and $ p_{v} $ at their endvertices is the desired cycle. \end{proof}
\subsection{Chordal graphs and their clique-separator graph}\label{subsec:clique-separator graph} A subset $ C \subseteq V_{G} $ is called a \textbf{clique} of $ G $ if $ G[C] $ is a complete graph. A vertex $ v \in V_{G} $ is called \textbf{simplicial} if the neighborhood of $ v $ is a clique. An ordering $ (v_{1}, \dots, v_{\ell}) $ of the vertices is called a \textbf{perfect elimination ordering} if $ v_{i} $ is simplicial in $ G[\{v_{1}, \dots, v_{i}\}] $ for every $ i \in \{1,\dots, \ell\} $. Recall that a chordal graph is a graph whose cycles of length at least four have chords. The following theorems are basic results for chordal graphs. \begin{theorem}[Dirac {\cite[Theorem 1]{dirac1961rigid-aadmsduh}}]\label{Dirac minmal vertex separator} A simple graph is chordal if and only if every minimal vertex separator is a clique. \end{theorem} \begin{theorem}[Dirac {\cite[Theorem 4]{dirac1961rigid-aadmsduh}}]\label{Dirac simplicial} Every chordal graph is complete or has at least two non-adjacent simplicial vertices. \end{theorem} \begin{theorem}[Fulkerson-Gross {\cite[Section 7]{fulkerson1965incidence-pjom}}]\label{FG} A simple graph is chordal if and only if it admits a perfect elimination ordering. \end{theorem}
Ibarra \cite{ibarra2009clique-separator-dam} introduced the clique-separator graph of a chordal graph to describe its structure. \begin{definition} Let $ G $ be a chordal graph. The clique-separator graph $ \mathcal{G} $ of $ G $ consists of the following nodes, (directed) arcs, and (undirected) edges. \begin{itemize} \item $ \mathcal{G} $ has a clique node $ C $ for each maximal clique $ C $ of $ G $. \item $ \mathcal{G} $ has a separator node $ S $ for each minimal vertex separator $ S $ of $ G $. \item An arc of $ \mathcal{G} $ is from a separator node to another separator node. The tuple $ (S,S^{\prime}) $ of separator nodes is an arc of $ \mathcal{G} $ if $ S \subsetneq S^{\prime} $ and there exists no separator node $ S^{\prime\prime} $ such that $ S \subsetneq S^{\prime\prime} \subsetneq S^{\prime} $. \item An edge of $ \mathcal{G} $ is between a clique node and a separator node. For a clique node $ C $ and a separator node $ S $, the set $ \{C, S\} $ is an edge of $ \mathcal{G} $ if $ S \subsetneq C $ and there exists no separator node $ S^{\prime} $ such that $ S \subsetneq S^{\prime} \subsetneq C $. \end{itemize} \end{definition} In the rest of this subsection, $ G, \mathcal{G} $ denote a chordal graph and its clique-separator graph, respectively. Note that, following Ibarra, we use the term ``vertex'' for $ G $ and ``node'' for $ \mathcal{G} $. Figure \ref{Fig:clique-separator graph} shows an example of a chordal graph and its clique-separator graph, where $ C_{X}, S_{X} $ denote the maximal clique and the minimal vertex separator on a set $ X $ of labels of vertices, respectively. The clique-separator graph has many remarkable properties which describe the structure of a chordal graph. We now introduce those properties required in this paper.
\begin{figure}
\caption{A chordal graph and its clique-separator graph}
\label{Fig:clique-separator graph}
\end{figure}
\begin{proposition}[Ibarra {\cite[p.1739 Observation]{ibarra2009clique-separator-dam}}]\label{Ibarra observation} For every separator node $ S $ of $ \mathcal{G} $, one of the following holds. \begin{enumerate}[(1)] \item There exist distinct clique nodes $ C, C^{\prime} $ such that $ \{S,C\} $ and $ \{S,C^{\prime}\} $ are edges of $ \mathcal{G} $. \item There exist distinct separator nodes $ S^{\prime},S^{\prime\prime} $ such that $ (S,S^{\prime}), (S,S^{\prime\prime}) $ are arcs of $ \mathcal{G} $. \item There exist a clique node $ C $ and a separator node $ S^{\prime} $ such that $ \{S,C\} $ is an edge of $ \mathcal{G} $ and $ (S,S^{\prime}) $ is an arc of $ \mathcal{G} $. \end{enumerate} \end{proposition}
A \textbf{box} of the clique-separator graph $ \mathcal{G} $ is a connected component of the undirected graph obtained by deleting the arcs of $ \mathcal{G} $. Let $ \mathcal{G}^{c} $ denote the directed graph obtained by contracting each box into a single node and replacing multiple arcs by a single arc.
\begin{theorem}[Ibarra {\cite[Theorem 3(2)]{ibarra2009clique-separator-dam}}]\label{Ibarra2} The following hold: \begin{enumerate}[(1)] \item \label{Ibarra2-1} Every box is a tree with the \textbf{clique intersection property}: for any distinct nodes $ N,N^{\prime} $ of a box and any internal node $ N^{\prime\prime} $ between $ N $ and $ N^{\prime} $, we have $ N \cap N^{\prime} \subseteq N^{\prime\prime} $. \item \label{Ibarra2-2} The separator nodes in each box form an antichain, that is, there are no inclusions among the separator nodes. \item \label{Ibarra2-3} $ \mathcal{G}^{c} $ is a directed acyclic graph. \end{enumerate} \end{theorem}
Every directed acyclic finite graph has a sink, namely a vertex with no outgoing arcs from it. By Theorem \ref{Ibarra2}(\ref{Ibarra2-3}), the directed graph $ \mathcal{G}^{c} $ has a sink, which we call a \textbf{sink box} of $ \mathcal{G} $.
\begin{proposition}\label{sink_box_leaf} Let $ B $ be a sink box of $ \mathcal{G} $. Then $ B $ is a tree whose leaves are clique nodes. \end{proposition} \begin{proof} From Theorem \ref{Ibarra2}(\ref{Ibarra2-1}), the box $ B $ is a tree. Let $ S $ be a separator node of $ B $. Since $ B $ is a sink, there are no outgoing arcs from $ S $. Hence, by Proposition \ref{Ibarra observation}, there exist at least two clique nodes adjacent to $ S $. Therefore $ S $ cannot be a leaf of $ B $. Thus every leaf of $ B $ is a clique node. \end{proof} For every separator node $ S $ of $ \mathcal{G} $, define \begin{align*}
\Preds(S) \coloneqq \Set{S^{\prime} | \text{$ S^{\prime} $ is a separator node of $ \mathcal{G} $ such that $ S^{\prime} \subseteq S $}}. \end{align*} For a subgraph $ \mathcal{F} $ of $ \mathcal{G} $, let $ G[\mathcal{F}] $ denote the subgraph of $ G $ induced by \begin{align*}
\Set{v \in V_{G} | \text{$ v $ belongs to some node of $ \mathcal{F} $}}. \end{align*}
\begin{theorem}[Ibarra {\cite[Theorem 3(1)]{ibarra2009clique-separator-dam}}]\label{Ibarra 1} Let $ S $ be a separator node of $ \mathcal{G} $. Suppose that $ \mathcal{G}_{1}, \dots, \mathcal{G}_{k} $ are the connected components of $ \mathcal{G} \setminus \Preds(S) $. Then the subgraphs $ G[\mathcal{G}_{1}] \setminus S, \dots, G[\mathcal{G}_{k}] \setminus S $ are the connected components of $ G \setminus S $. \end{theorem}
\begin{corollary}\label{Ibarra 1 cor} Let $ P $ be a path of a sink box of $ \mathcal{G} $ from a clique node to another clique node indicated as follows: \begin{center} \begin{tikzpicture} \draw (0,0) node[](C1){$ C_{1} $}; \draw (1,0) node[](S2){$ S_{2} $}; \draw (2,0) node[](C2){$ C_{2} $}; \draw (4.6,0) node[](Sk){$ S_{k} $}; \draw (5.6,0) node[](Ck){$ C_{k} $}; \draw (C1)--(S2)--(C2)--(2.8,0); \draw[dotted] (2.8,0)--(3.8,0); \draw (3.8,0)--(Sk)--(Ck); \end{tikzpicture} \end{center} Take vertices $ a \in C_{1}\setminus S_{2} $ and $ b \in C_{k} \setminus S_{k} $. Then the set of minimal $ (a,b) $-separators of $ G $ coincides with $ \{S_{2}, \dots, S_{k}\} $. \end{corollary} \begin{proof} By Theorem \ref{Ibarra 1}, each $ S_{i} $ is a minimal $ (a,b) $-separator, and the other separator nodes cannot separate $ a $ from $ b $. \end{proof}
\section{Signed graphs and signed-graphic arrangements}\label{sec:signed graphs} In this section, we introduce various notions about signed graphs which are related to freeness of signed-graphic arrangements.
\subsection{Necessary conditions for freeness} A \textbf{path} is a sequence $ v_{1} \dots v_{k} $ of distinct vertices with edges $ \{v_{1},v_{2}\}, \dots, \{v_{k-1},v_{k}\} $, where the sign of each edge may be either positive or negative. A signed graph is called \textbf{connected} if any two distinct vertices are connected by a path.
Recall that a cycle of length $ k \geq 3 $ is a sequence of distinct vertices $ v_{1} \dots v_{k} $ with edges $ \{v_{1},v_{2}\}, \dots, \{v_{k-1},v_{k}\}, \{v_{k},v_{1}\} $ and that a cycle is called \textbf{balanced} if it has an even number of negative edges. Moreover, we define cycles of length $ 1 $ and $ 2 $ as follows: the $ 1 $-cycle is a vertex with a loop and the $ 2 $-cycle is a pair of vertices $ \{u,v\} $ with both the positive and the negative edge between them. Figure \ref{Fig:unbalanced 1,2-cycles} illustrates these cycles, where the dashed edge denotes the negative edge. The $ 2 $-cycle has exactly one negative edge and hence it is unbalanced. Every loop corresponds to a hyperplane of the form $ \{x=0\} $, which is equal to $ \{x=-x\} $. Therefore it is natural to regard a loop as negative. Thus, we define the $ 1 $-cycle to be unbalanced. \begin{figure}
\caption{The unbalanced $ 1 $-cycle and the unbalanced $ 2 $-cycle}
\label{Fig:unbalanced 1,2-cycles}
\end{figure}
Every arrangement determines a matroid on itself. Hence a signed-graphic arrangement $ \mathcal{A}(G) $ determines a matroid on itself and hence on the edge set $ E_{G} \coloneqq E_{G}^{+} \sqcup E_{G}^{-} \sqcup L_{G} $ of the corresponding signed graph $ G $. Zaslavsky has classified circuits (minimal dependent sets) of the matroid, which are called \textbf{frame circuits}, as follows (Figure \ref{Fig:exam_frame_circuits} shows examples of frame circuits): \begin{proposition}[Zaslavsky {\cite[8B.1]{zaslavsky1982signed-dam}},{\cite[Corollary 3.2]{zaslavsky2012signed-jociss}}]\label{Zaslavsky frame circuit} A connected signed graph is a frame circuit if and only if it is one of the following graphs: \begin{enumerate}[(1)] \item A balanced cycle. \item A pair of disjoint unbalanced cycles together with a path which connects them (called a \textbf{loose handcuff}). \item A pair of disjoint unbalanced cycles which intersect in precisely one vertex (called a \textbf{tight handcuff}). \end{enumerate} \end{proposition} \begin{figure}
\caption{Examples of frame circuits}
\label{Fig:exam_frame_circuits}
\end{figure}
The proposition below is a paraphrase of Corollary \ref{circuit nonfree} for signed-graphic arrangements. \begin{proposition}\label{frame circuit nonfree} Let $ G $ be a frame circuit. Then $ \mathcal{A}(G) $ is generic.
Moreover, if $ |E_{G}| \geq 4 $, then $ \mathcal{A}(G) $ is non-free. \end{proposition}
In order to show that a signed-graphic arrangement is non-free, it is important to describe the localizations of a signed-graphic arrangement in terms of signed graphs.
A signed graph $ F=(F^{+},F^{-},L_{F}) $ is said to be a \textbf{subgraph} of a signed graph $ G=(G^{+},G^{-},L_{G}) $ if $ F^{+}, F^{-} $ are subgraphs of $ G^{+},G^{-} $, respectively, on the same vertex set $ V_{F} \subseteq V_{G} $ and $ L_{F} \subseteq L_{G} $. The signed-graphic arrangement $ \mathcal{A}(F) $ corresponding to the subgraph $ F $ can be naturally regarded as a subarrangement of $ \mathcal{A}(G) $.
A subarrangement of an arrangement $ \mathcal{A} $ is a localization if and only if it is a flat of the matroid on $ \mathcal{A} $. Hence we have the following proposition: \begin{proposition}[{\cite[Proposition 1.4.11(ii)]{oxley2011matroid}}]\label{matroid_flat} Let $ F $ be a subgraph of a signed graph $ G $. The signed-graphic arrangement $ \mathcal{A}(F) $ is a localization of $ \mathcal{A}(G) $ if and only if \begin{align*}
\Set{ e \in E_{G} | \text{$ e $ and some edges in $ F $ form a frame circuit} } \subseteq E_{F}. \end{align*} \end{proposition}
\begin{example} Consider the signed graphs in Figure \ref{Fig:examples non-free}. \begin{figure}
\caption{Examples of signed graphs corresponding to non-free arrangements}
\label{Fig:examples non-free}
\end{figure} The graph $ G_{1} $ has the balanced cycle $ F_{1} $ and $ G_{2} $ has the tight handcuff $ F_{2} $. By Proposition \ref{matroid_flat}, we have that $ \mathcal{A}(F_{1}) $ and $ \mathcal{A}(F_{2}) $ are localizations of $ \mathcal{A}(G_{1}) $ and $ \mathcal{A}(G_{2}) $, respectively. Hence, by Propositions \ref{Zaslavsky frame circuit}, \ref{frame circuit nonfree} and \ref{OT_localization}, these four arrangements are non-free. A direct computation or the result of Edelman and Reiner (Theorem \ref{ER free}) shows that $ \mathcal{A}(G_{3}) $ is non-free. However, every proper localization of $ \mathcal{A}(G_{3}) $ is free. Hence it is not enough to focus on circuits when determining freeness of signed-graphic arrangements, which is different from the case of graphic arrangements. \end{example}
Given a signed graph $ G=(G^{+},G^{-},L_{G}) $, the simple graph $ G^{+} $ can be regarded as a subgraph of $ G $. \begin{proposition}\label{G+ is a localization} The graphic arrangement $ \mathcal{A}(G^{+}) $ is a localization of the signed-graphic arrangement $ \mathcal{A}(G) $. Moreover, if $ \mathcal{A}(G) $ is free, then $ \mathcal{A}(G^{+}) $ is free, or equivalently $ G^{+} $ is chordal. \end{proposition} \begin{proof} By Propositions \ref{Zaslavsky frame circuit} and \ref{matroid_flat}, we have that $ \mathcal{A}(G^{+}) $ is a localization of $ \mathcal{A}(G) $. If $ \mathcal{A}(G) $ is free, then $ \mathcal{A}(G^{+}) $ is free by Proposition \ref{OT_localization}. Using Theorem \ref{Stanley graphic arrangement}, we have that $ \mathcal{A}(G^{+}) $ is free if and only if $ G^{+} $ is chordal. \end{proof}
For a signed graph $ G=(G^{+},G^{-},L_{G}) $ and a subset $ W \subseteq V_{G} $, define the \textbf{subgraph induced by $ W $} by $ G[W] \coloneqq (G^{+}[W], G^{-}[W], W \cap L_{G}) $. A subgraph $ F $ is called an \textbf{induced subgraph} of $ G $ if $ F=G[W] $ for some $ W \subseteq V_{G} $. Using Propositions \ref{Zaslavsky frame circuit} and \ref{matroid_flat}, we obtain the following: \begin{proposition}\label{ind_sub is a localization} Let $ F $ be an induced subgraph of a signed graph $ G $. Then $ \mathcal{A}(F) $ is a localization of $ \mathcal{A}(G) $. In particular, if $ \mathcal{A}(G) $ is free, then $ \mathcal{A}(F) $ is free. \end{proposition}
Given a signed graph $ G $, we can obtain a new signed graph $G^\nu$ using a \textbf{switching function} $\nu\colon V_G\to \{\pm 1\}$. The signed graph $ G^{\nu} $ consists of the following data: \begin{itemize} \item $V_{G^\nu} \coloneqq V_{G}$.
\item $E_{G^\nu}^{+} \coloneqq \Set{\{u,v\}\in E_{G}^{+} | \nu(u)=\nu(v)} \cup \Set{\{u,v\}\in E_{G}^{-} | \nu(u)\ne\nu(v)}$.
\item $E_{G^\nu}^{-} \coloneqq \Set{\{u,v\}\in E_{G}^{+} | \nu(u)\ne\nu(v)} \cup \Set{\{u,v\}\in E_{G}^{-} | \nu(u)=\nu(v)}$. \item $L_{G^\nu} \coloneqq L_{G}$. \end{itemize} Note that a switching acts on the signed-graphic arrangement as the coordinate change $ x_{i} \mapsto \nu(i)x_{i} $ for each $ i \in V_{G} $. Therefore a switching preserves freeness of signed-graphic arrangements. Moreover, a switching preserves the balance of cycles.
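For a concrete example, consider the $ 3 $-cycle on $ \{v_{1},v_{2},v_{3}\} $ with $ E^{-}=\{\{v_{1},v_{2}\},\{v_{1},v_{3}\}\} $ and $ E^{+}=\{\{v_{2},v_{3}\}\} $; it is balanced, having two negative edges. Switching with $ \nu(v_{1})=-1 $ and $ \nu(v_{2})=\nu(v_{3})=+1 $ changes the sign of exactly those edges whose endvertices have different $ \nu $-values:

```latex
\begin{align*}
&\text{before: } E^{+}=\{\{v_{2},v_{3}\}\},\quad
  E^{-}=\{\{v_{1},v_{2}\},\{v_{1},v_{3}\}\}
  \quad\text{(two negative edges: balanced)},\\
&\text{after: }\ E^{+}=\{\{v_{1},v_{2}\},\{v_{1},v_{3}\},\{v_{2},v_{3}\}\},\quad
  E^{-}=\varnothing
  \quad\text{(no negative edges: balanced)}.
\end{align*}
```

The corresponding arrangements are related by the coordinate change $ x_{1} \mapsto -x_{1} $, so both balance and freeness are preserved.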
Recall that a \textbf{balanced chord} of a balanced cycle is an edge that is not part of the cycle but connects two vertices of the cycle, and that separates the cycle into two balanced cycles. A signed graph is called \textbf{balanced chordal} if every balanced cycle of length at least $ 4 $ has a balanced chord. Note that balanced chordality does not depend on the loops. \begin{lemma}\label{free => balanced chordal} Suppose that a signed-graphic arrangement $ \mathcal{A}(G) $ is free. Then $ G $ is balanced chordal. \end{lemma} \begin{proof} Assume that $ G $ is not balanced chordal. Then there exists a balanced cycle $ C $ of length at least four with no balanced chords. Let $ \nu $ be a switching function on $ G $ such that $ C^{\nu} $ consists of positive edges. Since a switching preserves the balance of cycles, we have that $ C^{\nu} $ is a chordless cycle of $ (G^{\nu})^{+} $. Therefore $ \mathcal{A}(G^{\nu}) $ is non-free by Proposition \ref{G+ is a localization}. Since switching preserves freeness, we have that $ \mathcal{A}(G) $ is also non-free. This proves the assertion. \end{proof}
\subsection{Coloring and the chromatic polynomials} One of the remarkable properties of the chromatic polynomial of a simple graph is that it coincides with the characteristic polynomial of the corresponding graphic arrangement. The notions of coloring and the chromatic polynomial were extended to signed graphs by Zaslavsky \cite{zaslavsky1982signed-dm}.
\begin{definition} For a positive integer $ k $, let $ \Lambda_{k} \coloneqq \{0, \pm 1, \dots, \pm k \} $. A \textbf{proper $ k $-coloring} of a signed graph $ G $ is a function $ \gamma \colon V_{G} \rightarrow \Lambda_{k} $ such that \begin{enumerate}[(1)] \item $ \gamma(u) \neq \gamma(v) $ if $ \{u,v\} \in E_{G}^{+} $. \item $ \gamma(u) \neq -\gamma(v) $ if $ \{u,v\} \in E_{G}^{-} $. \item $ \gamma(v) \neq 0 $ if $ v \in L_{G} $. \end{enumerate} \end{definition}
\begin{theorem}[Zaslavsky {\cite[Theorem 2.2]{zaslavsky1982signed-dm}}] There exists a polynomial $ \chi(G,t) $ such that $ \chi(G,2k+1) $ is equal to the number of proper $ k $-colorings of $ G $ for every $k \geq 1$. We call $ \chi(G,t) $ the \textbf{chromatic polynomial} of $ G $. \end{theorem} \begin{theorem}[Zaslavsky {\cite[Lemma 4.3]{zaslavsky2012signed-jociss}}]\label{Zaslavsky chromatic characteristic} The chromatic polynomial of $ G $ and the characteristic polynomial of $ \mathcal{A}(G) $ coincide. Namely $ \chi(G,t)=\chi(\mathcal{A}(G),t) $. \end{theorem} Our method for proving freeness is Abe's division theorem (Theorem \ref{Abe Division Theorem}). Thanks to Theorem \ref{Zaslavsky chromatic characteristic}, we can describe the characteristic polynomial of a signed-graphic arrangement in terms of colorings of the corresponding signed graph. Recall that $ B_{n} = (K_{n},K_{n},\{1,\dots,n\}) $ denotes the complete signed graph with loops. The purpose of the rest of this subsection is to prove Lemma \ref{gluing}, which describes the chromatic polynomial of a signed graph obtained by gluing two signed graphs together along a complete signed graph with loops.
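As an illustration, one can compute the chromatic polynomial of $ B_{n} $ itself directly from the definition of a proper coloring: coloring the vertices one at a time, the first vertex has $ 2k $ admissible colors (any nonzero element of $ \Lambda_{k} $), and each subsequent vertex must additionally avoid the colors $ \pm\gamma(w) $ of all previously colored vertices $ w $. Hence

```latex
\begin{align*}
\chi(B_{n},2k+1) &= 2k(2k-2)\cdots(2k-2n+2)
                  = \prod_{i=1}^{n}\bigl((2k+1)-(2i-1)\bigr),\\
\text{so that}\quad \chi(B_{n},t) &= (t-1)(t-3)\cdots(t-(2n-1)).
\end{align*}
```

This agrees with the well-known characteristic polynomial of the Coxeter arrangement of type $ B_{n} $.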
\begin{proposition}\label{coloring1} Let $ G $ be a signed graph which contains $ B_{n} $ as a subgraph. For two proper $ k $-colorings $ \gamma_{1}, \gamma_{2} $ of $ B_{n} $, let $ C_{1}, C_{2} $ be the set of proper $ k $-colorings of $ G $ whose restrictions to $ B_{n} $ are equal to $ \gamma_{1},\gamma_{2} $, respectively.
Then $ |C_{1}|=|C_{2}| $. \end{proposition} \begin{proof}
For each $ i \in \{1,2\} $, the coloring $ \gamma_{i} $ is injective and the image $ \gamma_{i}(V_{B_{n}}) $ is disjoint from $ -\gamma_{i}(V_{B_{n}}) \coloneqq \Set{-\gamma_{i}(v) | v \in V_{B_{n}}} $. We define a bijection $ \phi \colon \gamma_{1}(V_{B_{n}}) \cup (-\gamma_{1}(V_{B_{n}})) \rightarrow \gamma_{2}(V_{B_{n}}) \cup (-\gamma_{2}(V_{B_{n}})) $ by \begin{align*} \phi(\pm a) \coloneqq \pm (\gamma_{2} \circ \gamma_{1}^{-1})(a) \quad \text{for } a \in \gamma_{1}(V_{B_{n}}). \end{align*} Then there exists a bijection $ \psi \colon \Lambda_{k} \rightarrow \Lambda_{k} $ such that $ \psi(-a)=-\psi(a) $ for all $ a \in \Lambda_{k} $ and $ \psi $ extends $ \phi $.
Define a map $ F \colon C_{1} \rightarrow C_{2} $ by $ F(\gamma) \coloneqq \psi \circ \gamma $. We show that $ F $ is well-defined, that is, $ F(\gamma) \in C_{2} $ for any $ \gamma \in C_{1} $. First we show that $ F(\gamma) $ is a proper coloring of $ G $. This follows by the following arguments. \begin{align*} \{u,v\} \in E_{G}^{+} &\Rightarrow \gamma(u) \neq \gamma(v) \Rightarrow (\psi \circ \gamma)(u) \neq (\psi \circ \gamma)(v) \\ &\Rightarrow F(\gamma)(u) \neq F(\gamma)(v). \\ \{u,v\} \in E_{G}^{-} &\Rightarrow \gamma(u) \neq -\gamma(v) \Rightarrow \psi(\gamma(u)) \neq \psi(-\gamma(v)) \Rightarrow (\psi \circ \gamma)(u) \neq -(\psi \circ \gamma)(v) \\ &\Rightarrow F(\gamma)(u) \neq -F(\gamma)(v). \\ v \in L &\Rightarrow \gamma(v) \neq 0 \Rightarrow \psi(\gamma(v)) \neq \psi(0) \Rightarrow (\psi \circ \gamma)(v) \neq 0 \\ &\Rightarrow F(\gamma)(v) \neq 0. \end{align*} Next we show that the restriction of $ F(\gamma) $ to $ V_{B_{n}} $ equals $ \gamma_{2} $. For each vertex $ v \in V_{B_{n}} $ \begin{align*} F(\gamma)(v) = (\psi \circ \gamma)(v) = \psi(\gamma(v)) = \psi(\gamma_{1}(v)) = \gamma_{2}(v). \end{align*} Hence we may conclude that $ F(\gamma) \in C_{2} $.
The map $ F $ is injective since $ \psi $ is bijective. Therefore we have $ |C_{1}| \leq |C_{2}| $. By the symmetric argument we obtain $ |C_{2}| \leq |C_{1}| $, and hence $ |C_{1}|=|C_{2}| $. \end{proof} \begin{proposition}\label{coloring2} Let $ G $ be a signed graph which contains $ B_{n} $ as a subgraph. For a proper $ k $-coloring $ \gamma $ of $ B_{n} $, let $ C $ denote the set of proper $ k $-colorings of $ G $ whose restrictions to $ B_{n} $ are equal to $ \gamma $. Then \begin{align*}
|C|=\frac{\chi(G,2k+1)}{\chi(B_{n},2k+1)}. \end{align*} \end{proposition} \begin{proof} Let $ \gamma_{1} \coloneqq \gamma, \gamma_{2}, \dots, \gamma_{m} $ be the proper $ k $-colorings of $ B_{n} $, where $ m = \chi(B_{n},2k+1) $. For each $ i \in [m] $, let $ C_{i} $ denote the set of proper $ k $-colorings of $ G $ whose restrictions to $ B_{n} $ are equal to $ \gamma_{i} $.
Then we have that $ \chi(G,2k+1)=\sum_{i=1}^{m}|C_{i}| $.
By Proposition \ref{coloring1}, the cardinalities $ |C_{i}| $ are all equal; in particular, $ |C_{i}|=|C| $ for every $ i $.
Hence $ \chi(G,2k+1)=m|C|=\chi(B_{n},2k+1)|C| $. Thus the assertion holds. \end{proof}
\begin{lemma}\label{gluing} Let $ G $ be a signed graph obtained by gluing two signed graphs $ G_{1}, G_{2} $ along a complete signed graph with loops $ B_{n} $. Namely, $ G_{1} $ and $ G_{2} $ are induced subgraphs of $ G $ such that $ G_{1} \cup G_{2} = G $ and $ G_{1} \cap G_{2} = B_{n} $. Then the following hold: \begin{enumerate}[(1)] \item \label{gluing 1} ${\displaystyle \chi(G,t) = \frac{\chi(G_{1},t)\chi(G_{2},t)}{\chi(B_{n},t)}.}$ \item \label{gluing 2} If $ G_{1} $ and $ G_{2} $ are balanced chordal, then $ G $ is also balanced chordal. \end{enumerate} \end{lemma} \begin{proof} (\ref{gluing 1}) For each proper $ k $-coloring of $ G_{1} $, the number of proper $ k $-colorings of $ G_{2} $ whose restriction to $ B_{n} $ agrees with it is $ \chi(G_{2},2k+1)/\chi(B_{n},2k+1) $ by Proposition \ref{coloring2}. Hence we have \begin{align*} \chi(G,2k+1) = \frac{\chi(G_{1},2k+1) \chi(G_{2},2k+1)}{\chi(B_{n},2k+1)}. \end{align*} Since this equality holds for any positive integer $ k $, the assertion holds.
(\ref{gluing 2}) Suppose that $ G_{1} $ and $ G_{2} $ are balanced chordal. Take a balanced cycle $ C $ of $ G $ of length at least $ 4 $. If $ V_{C} \subseteq V_{G_{1}} $ or $ V_{C} \subseteq V_{G_{2}} $, then $ C $ has a balanced chord since $ G_{1},G_{2} $ are balanced chordal. Now, suppose that $ C $ has vertices in both $ V_{G_{1}}\setminus V_{B_{n}} $ and $ V_{G_{2}}\setminus V_{B_{n}} $. Since every path between these two sets passes through $ V_{B_{n}} $, the cycle $ C $ has two non-consecutive vertices in $ B_{n} $. One of the positive and the negative edges between these vertices is a balanced chord of $ C $, since exactly one choice of sign separates $ C $ into two balanced cycles. Therefore $ G $ is also balanced chordal. \end{proof}
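For a small instance of Lemma \ref{gluing}(\ref{gluing 1}), glue two copies of $ B_{2} $ along a common $ B_{1} $, that is, a shared vertex with a loop. Using $ \chi(B_{1},t)=t-1 $ and $ \chi(B_{2},t)=(t-1)(t-3) $, the formula gives

```latex
\begin{align*}
\chi(G,t)=\frac{\chi(B_{2},t)\,\chi(B_{2},t)}{\chi(B_{1},t)}
         =\frac{(t-1)^{2}(t-3)^{2}}{t-1}
         =(t-1)(t-3)^{2}.
\end{align*}
```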
\subsection{Edge contractions} We define an edge contraction of a signed graph $ G $ so that the corresponding signed-graphic arrangement becomes the restriction with respect to the hyperplane corresponding to the edge.
Let $ \{v,w\} $ be a positive edge of $ G $. We may define the contraction of the edge in the same way as for simple graphs. However, in order to clarify the notion, we fix a direction $ (v,w) $ of the edge $ \{v,w\} $. The contraction $ G/(v,w) $ consists of the following data: \begin{itemize} \item $ V_{G/(v,w)} \coloneqq V_{G} \setminus \{v\} $.
\item $ E_{G/(v,w)}^{+} \coloneqq E_{G \setminus \{v\}}^{+} \cup \Set{\{u,w\} | \{u,v\} \in E_{G}^{+}} $.
\item $ E_{G/(v,w)}^{-} \coloneqq E_{G \setminus \{v\}}^{-} \cup \Set{\{u,w\} | \{u,v\} \in E_{G}^{-}} $. \item $ L_{G/(v,w)} \coloneqq \begin{cases} L_{G\setminus \{v\}} \cup \{w\} & \text{ if } v \in L_{G} \text{ or } \{v,w\} \in E_{G}^{-}, \\ L_{G\setminus\{v\}} & \text{ otherwise. } \end{cases} $ \end{itemize} It is easy to see that the resulting graph is independent of the choice of direction. Namely, $ G/(v,w) $ and $ G/(w,v) $ are isomorphic.
For a negative edge $ \{v,w\} $, the contraction $ G/(v,w) $ consists of the following data: \begin{itemize} \item $ V_{G/(v,w)} \coloneqq V_{G} \setminus \{v\} $.
\item $ E_{G/(v,w)}^{+} \coloneqq E_{G \setminus \{v\}}^{+} \cup \Set{\{u,w\} | \{u,v\} \in E_{G}^{-}} $.
\item $ E_{G/(v,w)}^{-} \coloneqq E_{G \setminus \{v\}}^{-} \cup \Set{\{u,w\} | \{u,v\} \in E_{G}^{+}} $. \item $ L_{G/(v,w)} \coloneqq \begin{cases} L_{G\setminus \{v\}} \cup \{w\} & \text{ if } v \in L_{G} \text{ or } \{v,w\} \in E_{G}^{+}, \\ L_{G\setminus\{v\}} & \text{ otherwise. } \end{cases} $ \end{itemize} In this case, the contractions $ G/(v,w) $ and $ G/(w,v) $ are not isomorphic but switching equivalent, that is, there exists a switching function $ \nu $ on $ G/(v,w) $ such that $ (G/(v,w))^{\nu} $ and $ G/(w,v) $ are isomorphic (see, for example, \cite[Lemma 2.8]{zaslavsky2012signed-jociss}).
We can define the contraction for a loop. However, it is not required in this paper. From the construction of the contraction, we obtain the following proposition: \begin{proposition}\label{contraction restriction} Let $ \{v,w\} $ be a positive or negative edge of a signed graph $ G $ and $ H $ the corresponding hyperplane in $ \mathcal{A}(G) $. Then $ \mathcal{A}(G/(v,w)) = \mathcal{A}(G)^{H} $. \end{proposition}
\subsection{Simplicial extensions} As mentioned in Theorems \ref{Stanley graphic arrangement} and \ref{FG}, the freeness of graphic arrangements is characterized by the existence of a perfect elimination ordering. In this subsection, we extend this concept and introduce signed-simplicial vertices and signed elimination orderings.
A signed graph $ G $ is called \textbf{balanced} if every cycle of $ G $ is balanced. Note that there is no relationship between balanced chordality and being balanced.
Define the \textbf{rank} of $ G $ by $ \rank(G) \coloneqq |V_{G}|-b(G) $, where $ b(G) $ denotes the number of balanced connected components of $ G $. \begin{theorem}[Zaslavsky {\cite[Theorem 3.5]{zaslavsky2012signed-jociss}}]\label{Zaslavsky rank} Given a signed graph $ G $, we have $ \rank(\mathcal{A}(G)) = \rank(G) $. \end{theorem}
\begin{definition} A vertex $ v $ of a signed graph $ G $ is called \textbf{signed simplicial} if the following statements hold: \begin{enumerate}[(1)] \item If $ \{u_{1},v\}, \{u_{2},v\} \in E_{G}^{+} $ or $ \{u_{1},v\}, \{u_{2},v\} \in E_{G}^{-} $ then $ \{u_{1},u_{2}\} \in E_{G}^{+} $. \item If $ \{u_{1},v\} \in E_{G}^{+} $ and $ \{u_{2},v\} \in E_{G}^{-} $ then $ \{u_{1},u_{2}\} \in E_{G}^{-} $. \item If $ \{u,v\} \in E_{G}^{+} \cup E_{G}^{-} $ with $ v \in L_{G} $, or $ \{u,v\} \in E_{G}^{+} \cap E_{G}^{-} $ then $ u \in L_{G} $. \end{enumerate} \end{definition}
Adding a signed-simplicial vertex affects a signed graph as follows: \begin{proposition}\label{add ss} Let $ v $ be a signed-simplicial vertex of a signed graph $ G $ and let $ F \coloneqq G \setminus \{v\} $. Then the following hold. \begin{enumerate}[(1)] \item \label{add ss 1} Suppose that $ v $ has an adjacent vertex $ w $. Then $ G/(v,w) = F $. \item \label{add ss 2} If $ v $ has a loop or an adjacent vertex, then $ \rank(G) = \rank(F)+1 $. \item \label{add ss 3} Let $ d $ denote the degree of $ v $, that is, the number of edges and loops incident to $ v $. Then $ \chi(G,t) = (t-d)\chi(F,t)$. \item \label{add ss 4} $ G $ is balanced chordal if and only if $ F $ is balanced chordal. \end{enumerate} \end{proposition} \begin{proof} (\ref{add ss 1}) We need to show that $ E_{G/(v,w)}^{+} = E_{F}^{+}, E_{G/(v,w)}^{-} = E_{F}^{-} $, and $ L_{G/(v,w)} = L_{F} $. Suppose that $ \{v,w\} $ is a positive edge. If $ \{u,v\} \in E_{G}^{+} $, then we have $ \{ u,w \} \in E_{G}^{+} $.
Hence $ \Set{\{u,w\} | \{u,v\} \in E_{G}^{+}} \subseteq E_{F}^{+} $. Therefore $ E_{G/(v,w)}^{+} = E_{F}^{+} $. The other cases are similar.
(\ref{add ss 2})
Since $ \rank(G)=|V_{G}|-b(G) $ and $ \rank(F)=|V_{G}|-1-b(F) $, it suffices to show that $ b(G)=b(F) $. When $ v $ is isolated and has a loop, the equation $ b(G)=b(F) $ holds since the loop graph ($ 1 $-cycle) is unbalanced. From now on, we assume that $ v $ has an adjacent vertex. Without loss of generality, we may assume that $ G $ is connected since removing the vertex $ v $ affects only the connected component of $ G $ including $ v $. In this case, $ F $ is also connected since $ v $ is signed simplicial. Therefore it suffices to show that $ F $ is balanced if and only if $ G $ is balanced.
When $ G $ is balanced, $ F $ is balanced trivially. To show the converse, suppose that $ F $ is balanced. Let $ C $ be a cycle of $ G $ containing $ v $. It is sufficient to show that $ C $ is balanced. If the length of $ C $ is $ 1 $ or $ 2 $, then $ F $ has a loop since $ v $ is signed simplicial, which is a contradiction. Hence the length of $ C $ is at least $ 3 $. Write $ C $ as a sequence of vertices $ v=v_{1},v_{2}, \dots, v_{k} $. Assume that $ C $ is unbalanced. Since $ v $ is signed simplicial, there is an edge $ e=\{v_{2},v_{k}\} $ forming a balanced $ 3 $-cycle together with the edges $ \{v_{k},v_{1}\}, \{v_{1},v_{2}\} $ in the cycle $ C $. Hence the cycle in $ F $ consisting of $ e $ and the edges $ \{v_{2},v_{3}\}, \dots, \{v_{k-1},v_{k}\} $ in the cycle $ C $ is unbalanced, which is a contradiction. As a result, $ C $ is balanced.
(\ref{add ss 3}) Let $ \gamma $ be a proper $ k $-coloring on $ G\setminus\{v\} $, where $ k $ is sufficiently large. It is sufficient to show that the number of proper $ k $-colorings on $ G $ which are extensions of $ \gamma $ is $ (2k+1)-d $.
First, suppose that $ v $ has no loop. By the definition of a proper coloring, the color of $ v $ cannot belong to the following set: \begin{align*}
\Set{\gamma(w) | \{v,w\} \in E_{G}^{+}} \cup \Set{-\gamma(w) | \{v,w\} \in E_{G}^{-}}. \end{align*} We show that the cardinality of this set coincides with the degree of $ v $, that is, the colors corresponding to the incident edges of $ v $ are different from each other.
Suppose that $ \{v,w_{1}\}, \{v,w_{2}\} \in E_{G}^{+} $, and $ w_{1} \neq w_{2} $. Since $ v $ is signed simplicial, we have $ \{w_{1},w_{2}\} \in E_{G}^{+} $ and hence $ \gamma(w_{1}) \neq \gamma(w_{2}) $. When $ \{v,w_{1}\}, \{v,w_{2}\} \in E_{G}^{-} $ with $ w_{1} \neq w_{2} $ or $ \{v,w_{1}\} \in E_{G}^{+}, \{v, w_{2}\} \in E_{G}^{-} $ with $ w_{1} \neq w_{2} $, one can prove the assertion in the same way. Next, assume that $ \{v,w\} \in E_{G}^{+} $ and $ \{v,w\} \in E_{G}^{-} $. Then we have $ w \in L_{G} $ by the definition of a signed-simplicial vertex. Hence $ \gamma(w) \neq 0 $. In other words, $ \gamma(w) \neq -\gamma(w) $.
Now, we consider the case where $ v $ has a loop. In this case, the color of $ v $ can be neither a member of the set above nor $ 0 $. Since $ v $ is signed simplicial, every adjacent vertex of $ v $ admits a loop, and hence $0$ does not belong to the set above. Therefore the number of forbidden colors of $ v $ coincides with the degree of $ v $.
(\ref{add ss 4}) If $ G $ is balanced chordal, then $ F $ is also balanced chordal, since $ F $ is an induced subgraph of $ G $. In order to prove the converse, suppose that $ F $ is balanced chordal. Let $ C $ be a balanced cycle of $ G $ which is of length at least $ 4 $ and contains $ v $. Write $ C $ as a sequence of vertices $ v=v_{1}, v_{2}, \dots, v_{k} $. Since $ v $ is signed simplicial, we have $ \{v_{2},v_{k}\} $ is an edge and the $ 3 $-cycle $ v_{1},v_{2},v_{k} $ is balanced. Hence the $ (k-1) $-cycle $ v_{2},\dots, v_{k} $ is also balanced. Therefore the edge $ \{v_{2},v_{k}\} $ is a balanced chord of $ C $. \end{proof}
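For example, every vertex $ v $ of $ B_{n} $ is signed simplicial: conditions (1) and (2) hold because $ B_{n} $ contains all positive and all negative edges, and condition (3) holds because every vertex has a loop. The degree of $ v $ is $ d = 2(n-1)+1 = 2n-1 $ and $ B_{n}\setminus\{v\}=B_{n-1} $, so Proposition \ref{add ss}(\ref{add ss 3}) yields the recursion

```latex
\begin{align*}
\chi(B_{n},t)=(t-(2n-1))\,\chi(B_{n-1},t),
\qquad \chi(B_{1},t)=t-1,
\end{align*}
```

and hence $ \chi(B_{n},t)=(t-1)(t-3)\cdots(t-(2n-1)) $.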
\begin{definition} An ordering $ (v_{1}, \dots, v_{\ell}) $ of the vertices of a signed graph $ G $ is said to be a \textbf{signed elimination ordering} if $ v_{k} $ is signed simplicial in $ G[\{v_{1}, \dots, v_{k}\}] $ for each $ k \in \{1,\dots, \ell \} $. \end{definition} Let $ G $ be a signed graph and $ F $ an induced subgraph. We say that $ G $ is a \textbf{simplicial extension} of $ F $ if there exists an ordering $(v_{1}, \ldots ,v_{m})$ of the vertices in $V_{G} \setminus V_{F}$ such that each $v_{i}$ is signed simplicial in $G[V_{F} \cup \{ v_{1}, \ldots ,v_{i} \}]$. Zaslavsky characterized supersolvability of signed-graphic arrangements as follows: \begin{theorem}[{Zaslavsky \cite[Theorem 2.2]{zaslavsky2001supersolvable-ejoc}}]\label{Zaslavsky SS} A signed-graphic arrangement $ \mathcal{A}(G) $ is supersolvable if and only if one of the following conditions holds: \begin{enumerate}[(1)] \item $ G $ has a signed elimination ordering. \item $ G $ is a simplicial extension of one of the following: \begin{enumerate}[(i)] \item The graph $ D_{3} $ shown in Figure \ref{Fig:D3}. \item A signed graph $ F=(F^{+},F^{-},\varnothing) $ in which all edges in $ F^{-} $ are incident to a single vertex $ v $, the set of neighbors of $ v $ in $ F^{-} $ induces a complete subgraph in $ F^{+} $, and $ F^{+} $ has a perfect elimination ordering. \end{enumerate} \end{enumerate} \end{theorem} \begin{figure}
\caption{The graph $ D_{3} $}
\label{Fig:D3}
\end{figure} By Theorems \ref{Zaslavsky SS}, \ref{JT SS=>free}, and Lemma \ref{free => balanced chordal}, we obtain the following implications for a signed graph $ G $: \begin{align*} G \text{ has a signed elimination ordering} &\Rightarrow \mathcal{A}(G) \text{ is supersolvable} \\ &\Rightarrow \mathcal{A}(G) \text{ is free} \Rightarrow G \text{ is balanced chordal}. \end{align*} However, in contrast to graphic arrangements, these four conditions are not equivalent to each other (see \cite[Lemma 4.5(c)]{edelman1994free}). These conditions are related to simplicial extensions as follows: \begin{proposition}\label{ss extension} Let $ G $ be a simplicial extension of a signed graph $ F $. Then the following hold: \begin{enumerate}[(1)] \item \label{ss extension1} $ G $ has a signed elimination ordering if and only if $ F $ has a signed elimination ordering. \item \label{ss extension2} $ \mathcal{A}(G) $ is supersolvable if and only if $ \mathcal{A}(F) $ is supersolvable. \item \label{ss extension3} $ \mathcal{A}(G) $ is free if and only if $ \mathcal{A}(F) $ is free. \item \label{ss extension4} $ G $ is balanced chordal if and only if $ F $ is balanced chordal. \end{enumerate} \end{proposition} \begin{proof} Without loss of generality, we may assume that $ F=G\setminus\{v\} $, where $ v $ is a signed-simplicial vertex.
(\ref{ss extension1}) This is clear from the definition of a signed elimination ordering.
(\ref{ss extension2}) If $ \mathcal{A}(G) $ is supersolvable, then $ \mathcal{A}(F) $ is supersolvable by Propositions \ref{ind_sub is a localization} and \ref{Stanley SS lattice}. To show the converse, suppose that $ \mathcal{A}(F) $ is supersolvable. If $ v $ has no loop and is isolated, then $ \mathcal{A}(F) = \mathcal{A}(G) $, and hence $ \mathcal{A}(G) $ is supersolvable. Suppose that $ v $ has a loop or an adjacent vertex. By Theorem \ref{Zaslavsky rank} and Proposition \ref{add ss}(\ref{add ss 2}), we have $ \rank(\mathcal{A}(G))=\rank(\mathcal{A}(F))+1 $.
We show that for distinct hyperplanes $ H,H^{\prime} \in \mathcal{A}(G)\setminus\mathcal{A}(F) $, there exists $ H^{\prime\prime} \in \mathcal{A}(F) $ such that $ H \cap H^{\prime} \subseteq H^{\prime\prime} $. Suppose that $ H,H^{\prime} $ correspond to positive edges $ \{u_{1},v\},\{u_{2},v\} \in E_{G}^{+} $. Since $ v $ is signed simplicial, we have $ \{u_{1},u_{2}\} \in E_{G}^{+} $. Let $ H^{\prime\prime} $ be the hyperplane corresponding to the edge $ \{u_{1},u_{2}\} $. Then we have $ H \cap H^{\prime} \subseteq H^{\prime\prime} \in \mathcal{A}(F) $. The other cases are similar. Using Theorem \ref{BEZ}, we conclude that $ \mathcal{A}(G) $ is supersolvable.
(\ref{ss extension3}) If $ \mathcal{A}(G) $ is free, then $ \mathcal{A}(F) $ is free by Propositions \ref{ind_sub is a localization} and \ref{OT_localization}. We prove the converse. If $ v $ is isolated, then $ \mathcal{A}(G) $ is a product of $ \mathcal{A}(F) $ and a $ 1 $-dimensional arrangement. Hence $ \mathcal{A}(G) $ is free. Assume that $ v $ has an adjacent vertex $ w $. Let $ H $ be the hyperplane corresponding to $ \{v,w\} $. Then we have \begin{align*} \mathcal{A}(G)^{H} = \mathcal{A}(G/(v,w)) = \mathcal{A}(F) \end{align*} by Propositions \ref{contraction restriction} and \ref{add ss}(\ref{add ss 1}). Using Proposition \ref{add ss}(\ref{add ss 3}), and Theorems \ref{Zaslavsky chromatic characteristic} and \ref{Abe Division Theorem}, we have that $ \mathcal{A}(G) $ is free.
(\ref{ss extension4}) This follows by Proposition \ref{add ss}(\ref{add ss 4}). \end{proof}
\section{Divisional edges and vertices}\label{sec:divisional vertex} In this section, we introduce the notions of divisional edges and vertices of a signed graph. These will play an important role in the proof of our main theorem.
\begin{definition} An edge $ \{v,w\} $ of a signed graph $ G $ is said to be \textbf{divisional} if $ \chi(G/(v,w),t) $ divides $ \chi(G,t) $. Note that, since $ G/(v,w) $ and $ G/(w,v) $ are switching equivalent and hence their chromatic polynomials coincide, the definition of a divisional edge is independent of the choice of direction. The endvertices $ v,w $ are called \textbf{divisional vertices}. \end{definition}
A motivation for introducing divisional edges is the lemma below. \begin{lemma}\label{division theorem for signed graph} Let $ e $ be a divisional edge of a signed graph $ G $. Suppose that $ \mathcal{A}(G/e) $ is free. Then $ \mathcal{A}(G) $ is free. \end{lemma} \begin{proof} This follows by Proposition \ref{contraction restriction} and Theorem \ref{Abe Division Theorem}. \end{proof}
To use this lemma, we need to know which edges are divisional. The following proposition gives a sufficient condition. \begin{proposition}\label{ss => divisional} Every incident edge of a signed-simplicial vertex is divisional. In other words, a signed-simplicial vertex and its neighbors are divisional. \end{proposition} \begin{proof} This follows by Proposition \ref{add ss} (\ref{add ss 1}) and (\ref{add ss 3}). \end{proof}
\section{Proof of Theorem \ref{main theorem}}\label{sec:proof} In this section, we focus on the signed-graphic arrangement $ \mathcal{A}(G) $ with $ G^{+} \supseteq G^{-} $ and prove Theorem \ref{main theorem}.
\subsection{The case $ G^{+} $ is complete} As mentioned in Section \ref{sec:introduction}, Edelman and Reiner characterized freeness of the signed-graphic arrangements corresponding to a signed graph $ G=(K_{\ell},G^{-},L_{G}) $. See Subsection \ref{subsec:threshold graph} for the terminology of threshold graphs. \begin{theorem}[Edelman-Reiner {\cite[Theorem 4.6]{edelman1994free}}]\label{ER free} A signed-graphic arrangement $ \mathcal{A}(K_{\ell},G^{-},L_{G}) $ is free if and only if $ G^{-} $ is threshold and $ L_{G} $ is an initial segment of some degree ordering of $ G^{-} $. \end{theorem} When $ \mathcal{A}(K_{\ell},G^{-},L_{G}) $ is free, the signed graph $ (K_{\ell},G^{-},L_{G}) $ is balanced chordal by Lemma \ref{free => balanced chordal}. In fact, the following proposition holds. \begin{proposition}\label{bc threshold} Let $ G=(K_{\ell},G^{-}) $. Then $ G $ is balanced chordal if and only if $ G^{-} $ is threshold. \end{proposition} \begin{proof} Assume that $ G^{-} $ is not threshold. By Theorem \ref{Golumbic}, the signed graph $ G $ has one of the three graphs on the left in Figure \ref{Fig:bc-threshold} as an induced subgraph. \begin{figure}
\caption{Forbidden induced subgraphs and their balanced cycle}
\label{Fig:bc-threshold}
\end{figure} Each of these graphs contains the balanced cycle depicted in the rightmost graph of Figure \ref{Fig:bc-threshold}, which has no balanced chords. Hence $ G $ is not balanced chordal.
Suppose that $ G^{-} $ is threshold. Take a balanced cycle $ C $ of length at least $ 4 $. If $ C $ contains edges $ \{u,v\},\{v,w\} \in E_{G}^{+} $ or $ \{u,v\},\{v,w\} \in E_{G}^{-} $, then there is a balanced chord $ \{u,w\} \in E_{G}^{+} $ since $ G^{+}=K_{\ell} $ is complete. Therefore we may assume that the edges of $ C $ alternate in sign. Let $ \{v_{1},v_{2}\}, \{v_{2},v_{3}\}, \{v_{3},v_{4}\} $ be consecutive edges of $ C $, where $ \{v_{1},v_{2}\}, \{v_{3},v_{4}\} \in E_{G}^{-} $ and $ \{v_{2},v_{3}\} \in E_{G}^{+} $. Put $ F=G[\{v_{1},v_{2},v_{3},v_{4}\}] $. Then $ F^{-} $ is threshold since $ G^{-} $ is threshold. There are no isolated vertices in $ F^{-} $. Hence $ F^{-} $ has a dominating vertex. Therefore $ \{v_{1}, v_{3}\} \in E_{G}^{-} $ or $ \{v_{2},v_{4}\} \in E_{G}^{-} $, which is a balanced chord of $ C $. Thus $ G $ is balanced chordal. \end{proof}
The following proposition is a translation of a result of Edelman and Reiner into the terminology of signed graphs. \begin{proposition}[Edelman-Reiner {\cite[Lemma 4.7]{edelman1994free}}]\label{ER SEO} Let $ G=(K_{\ell},G^{-},L_{G}) $ be a signed graph with $ G^{-} $ threshold. Suppose that $ L_{G} $ is an initial segment of some degree order $ (v_{1}, \dots, v_{\ell}) $ of $ G^{-} $ and that $ L_{G} $ contains at least one endvertex from every edge of $ G^{-} $. Then $ (v_{1}, \dots, v_{\ell}) $ is a signed elimination ordering. \end{proposition} \begin{corollary}\label{deg minimal ss} Let $ G=(K_{\ell},G^{-},V_{G}) $ with $ G^{-} $ threshold. Then a vertex $ v $ such that $ \deg_{G^{-}}(v) $ is minimal is signed simplicial. In particular, every vertex of $ G $ is divisional. \end{corollary} \begin{proof} This follows by Propositions \ref{ER SEO} and \ref{ss => divisional}. \end{proof}
When $ G^{+} $ is complete and every vertex has a loop, the situation is almost as simple as in the case of simple graphs. \begin{proposition}\label{G+ comp and full loops} Let $ G=(K_{\ell},G^{-},V_{G}) $. The following are equivalent. \begin{enumerate}[(1)] \item $ G $ is balanced chordal. \item $ G^{-} $ is threshold. \item $ G $ has a signed elimination ordering. \item $ \mathcal{A}(G) $ is supersolvable. \item $ \mathcal{A}(G) $ is free. \end{enumerate} \end{proposition} \begin{proof} $ (1) \Leftrightarrow (2) $ By Proposition \ref{bc threshold}.
$ (2) \Rightarrow (3) $ By Proposition \ref{ER SEO}.
$ (3) \Rightarrow (4) $ By Theorem \ref{Zaslavsky SS}.
$ (4) \Rightarrow (5) $ By Theorem \ref{JT SS=>free}.
$ (5) \Rightarrow (1) $ By Lemma \ref{free => balanced chordal}. \end{proof}
\subsection{Lemmas} In this subsection, we assume that a signed graph $ G $ satisfies the following conditions: \begin{itemize} \item $ G^{+} \supseteq G^{-} $. \item $ G^{+} $ is non-complete. \item $ G $ is balanced chordal, and hence $ G^{+} $ is chordal. \item $ L_{G}=V_{G} $, that is, every vertex admits a loop. \end{itemize}
\begin{lemma}\label{negative chord} Take a minimal vertex separator $ S $ of $ G^{+} $ and distinct vertices $ u,v \in S $. Let $ A,B $ be vertex sets of distinct connected components of $ G \setminus S $. Assume that there exists a cycle $ C $ satisfying the following properties: \begin{enumerate}[(i)] \item \label{negative chord 1} $ V_{C} \cap S = \{u,v\} $. \item \label{negative chord 2} $ C $ has exactly two negative edges $ e,e^{\prime} $. \item \label{negative chord 3} One of the endvertices of $ e,e^{\prime} $ belongs to $ A,B $, respectively. \end{enumerate} Then we have $ \{u,v\} \in E_{G}^{-} $. \end{lemma} \begin{proof} We may assume that $ C $ has minimal length among the cycles satisfying the conditions. Clearly $ C $ is a balanced cycle of length at least four. Hence $ C $ must have a balanced chord. The minimality of $C$ implies $ V_{C} \subseteq A \cup S \cup B $ since $ S $ is a clique of $ G^{+} $ by Theorem \ref{Dirac minmal vertex separator}. There are no edges between a vertex in $ A $ and a vertex in $ B $ since $ S $ is a minimal vertex separator of $ G^{+} $ and $ G^{+} \supseteq G^{-} $. The minimality of $ C $ implies that $ C $ has no balanced chords between a vertex in $ A $ and another vertex in $ A \cup S $. Similarly, $ C $ has no balanced chords between a vertex in $ B $ and a vertex in $ S \cup B $. Hence only the edge $ \{u,v\} $ can be a balanced chord. The positive edge $ \{u,v\} \in E_{G}^{+} $ is not a balanced chord of $ C $. Therefore we have $ \{u,v\} \in E_{G}^{-} $. \end{proof}
Let $ \mathcal{G} $ be the clique-separator graph of $ G^{+} $ (see Subsection \ref{subsec:clique-separator graph} for results about clique-separator graphs). Every sink box has at least two clique nodes since $ G^{+} $ is non-complete. Let $ P $ be a path in a sink box from a clique node to another clique node, and write $ P = C_{1}S_{2}C_{2} \cdots S_{k}C_{k} $, where the $ C_{j} $ are clique nodes and the $ S_{j} $ are separator nodes. Take vertices $ a \in C_{1} \setminus S_{2} $ and $ b \in C_{k}\setminus S_{k} $. By Corollary \ref{Ibarra 1 cor}, the set of minimal $ (a,b) $-separators of $ G^{+} $ coincides with $ \{S_{2}, \dots, S_{k}\} $. Let $ S_{i} $ be a minimal $ (a,b) $-separator of minimal cardinality. For $ v \in V_{G} $ and $ A \subseteq V_{G} $, let $ N_{G^{-}[A]}(v) $ denote the set of vertices in $ A $ connected to $ v $ by a negative edge. \begin{lemma}\label{separator comp signed} Suppose that $ N_{G^{-}[C_{1}]}(a) \not \subseteq S_{i} $ and $ N_{G^{-}[C_{k}]}(b) \not \subseteq S_{i} $. Then $ G[S_{i}] $ is a complete signed graph with loops. \end{lemma} \begin{proof} From the assumptions, there exist vertices $ a^{\prime} \in C_{1}\setminus S_{i}, b^{\prime} \in C_{k}\setminus S_{i} $ such that $ \{a,a^{\prime}\} \in E_{G}^{-} $ and $ \{b,b^{\prime}\} \in E_{G}^{-} $. Moreover, by our hypothesis, every vertex has a loop and $ G^{+}[S_{i}] $ is a complete simple graph by Theorem \ref{Dirac minmal vertex separator}. Therefore we only need to show that $ G^{-}[S_{i}] $ is also a complete simple graph. Take distinct vertices $ u,v \in S_{i} $. By Corollary \ref{Menger cor} and Theorem \ref{Ibarra 1}, we obtain a cycle $ C $ of $ G^{+}[P] $ such that $ C $ contains the vertices $ a,b,u,v $ and intersects $ S_{i} $ at $ \{u,v\} $. We modify the cycle $ C $ as follows (see Figure \ref{Fig:modification}). \begin{figure}
\caption{The modification of a cycle}
\label{Fig:modification}
\end{figure} Let $ a^{\prime\prime} $ be a vertex adjacent to $ a $ in $ C $. By Theorem \ref{Ibarra 1}, we have $ a^{\prime\prime} \in C_{1} $. If $ G $ has the negative edge $ \{a,a^{\prime\prime}\} $, then we replace the positive edge $ \{a,a^{\prime\prime}\} $ of $ C $ by the negative edge $ \{a,a^{\prime\prime}\} $. When $ G $ does not have the negative edge $ \{a,a^{\prime\prime}\} $, we replace the positive edge $ \{a,a^{\prime\prime}\} $ of $ C $ by the path consisting of the negative edge $ \{a,a^{\prime}\} $ and the positive edge $ \{a^{\prime},a^{\prime\prime}\} $. We make a similar modification with respect to the vertex $ b $. As a result, our cycle $ C $ has been modified so that it satisfies the assumptions in Lemma \ref{negative chord}. Therefore we have $ \{u,v\} \in E_{G}^{-} $ and hence $ G[S_{i}] $ is a complete signed graph with loops. \end{proof}
\begin{lemma}\label{dominating} Suppose that $ N_{G^{-}[C_{1}]}(a) \subseteq S_{i} $. Assume that one of the following conditions holds: \begin{enumerate}[(i)] \item \label{dominating i} $ N_{G^{-}[C_{k}]}(b) \not\subseteq S_{i} $. \item \label{dominating ii} $ N_{G^{-}[C_{1}]}(a) \subseteq N_{G^{-}[C_{k}]}(b) $. \item \label{dominating iii} $ N_{G^{-}[C_{k}]}(b) \subseteq S_{i} $, $ N_{G^{-}[C_{1}]}(a) \not\subseteq N_{G^{-}[C_{k}]}(b) $, and $ N_{G^{-}[C_{1}]}(a) \not\supseteq N_{G^{-}[C_{k}]}(b) $. \end{enumerate} Then the following hold: \begin{enumerate}[(1)] \item \label{dominating 1} Every vertex in $ N_{G^{-}[C_{1}]}(a) $ is dominating in $ G^{-}[S_{i}] $, that is, for any $ u \in N_{G^{-}[C_{1}]}(a) $ and any other vertex $ v \in S_{i} $, the negative edge $ \{u,v\} $ exists. \item \label{dominating 2} Every vertex in $ N_{G^{-}[C_{1}]}(a) $ is dominating in $ G^{-}[S_{2}] $. \end{enumerate} \end{lemma} \begin{proof} (\ref{dominating 1}) Take vertices $ u \in N_{G^{-}[C_{1}]}(a) $ and $ v \in S_{i} \setminus \{u\} $. We will show that $ \{u,v\} \in E_{G}^{-} $.
First, we assume (\ref{dominating i}). Then there exists a vertex $ b^{\prime} \in C_{k}\setminus S_{i} $ such that $ \{b,b^{\prime}\} \in E_{G}^{-} $. By Corollary \ref{Menger cor}, we obtain a cycle $ C $ of $ G^{+} $ containing $ a,b,u,v $ and intersecting $ S_{i} $ at $ \{u,v\} $. Replace the positive edge $ \{a,u\} $ of $ C $ by the negative edge $ \{a,u\} $ and make a modification with respect to $ b $ as in Figure \ref{Fig:modification}. Then our modified cycle satisfies the conditions in Lemma \ref{negative chord} and hence we have $ \{u,v\} \in E_{G}^{-} $.
Second, assume (\ref{dominating ii}). Note that $ u \in N_{G^{-}[C_{1}]}(a) $ together with (\ref{dominating ii}) implies $ \{a,u\}, \{u,b\} \in E_{G}^{-} $. By Corollary \ref{Menger cor} again, we obtain a cycle $ C $ of $ G^{+} $ containing $ a,b,u,v $ and intersecting $ S_{i} $ at $ \{u,v\} $. Replace the positive edges $ \{a,u\}, \{u,b\} $ of $ C $ by the negative edges $ \{a,u\}, \{u,b\} $. The modified cycle satisfies the conditions in Lemma \ref{negative chord} and hence we have $ \{u,v\} \in E_{G}^{-} $.
Finally, assume (\ref{dominating iii}). If $ u \in N_{G^{-}[C_{1}]}(a) \cap N_{G^{-}[C_{k}]}(b) $, then $ u $ is dominating in $ G^{-}[S_{i}] $ as in the case (\ref{dominating ii}). We assume that $ u \in N_{G^{-}[C_{1}]}(a) \setminus N_{G^{-}[C_{k}]}(b) $ and $ v \in N_{G^{-}[C_{k}]}(b) \setminus N_{G^{-}[C_{1}]}(a) $. Using Corollary \ref{Menger cor}, we obtain a cycle $ C $ of $ G^{+} $ containing $ a,b,u,v $ and intersecting $ S_{i} $ at $ \{u,v\} $. Replace the positive edges $ \{a,u\},\{v,b\} $ of $ C $ by the negative edges $ \{a,u\},\{v,b\} $. The modified cycle satisfies the conditions in Lemma \ref{negative chord} and hence we have $ \{u,v\} \in E_{G}^{-} $. Take a vertex $ w \in S_{i}\setminus \{u,v\} $. By Theorem \ref{Menger}, there exists an induced path $ p $ of $ G^{+} $ from $ a $ to $ b $ that intersects $ S_{i} $ at $ \{w\} $. Make a cycle $ C $ by connecting the path $ p $ and the path $ bvua $, where $ \{b,v\}, \{u,a\} $ are negative and $ \{v,u\} $ is positive (see Figure \ref{Fig:the balanced cycle}). \begin{figure}
\caption{The balanced cycle (the dotted segments form the induced path $ p $)}
\label{Fig:the balanced cycle}
\end{figure} The balanced cycle $ C $ is of length at least five and hence must have a balanced chord. There are no edges connecting non-consecutive vertices in $ p $ since $ p $ is induced, and there are no negative edges $ \{u,b\}, \{v,a\} $ by the choice of $ u,v $. Hence the only candidates for balanced chords are the negative edges connecting an internal vertex of $ p $ to $ u $ or $ v $. One can prove that such negative edges exist by considering smaller balanced cycles. In particular, we have $ \{u,w\} \in E_{G}^{-} $. Therefore we conclude that $ u $ is dominating in $ G^{-}[S_{i}] $.
(\ref{dominating 2}) We proceed by induction on $ k $. When $ k=2 $, the assertion follows from (\ref{dominating 1}) since $ S_{2} $ is the unique minimal $ (a,b) $-separator. We assume that $ k \geq 3 $. If $ i=2 $, the assertion holds by (\ref{dominating 1}). Hence we assume that $ i \geq 3 $. By Theorem \ref{Ibarra2}(\ref{Ibarra2-2}), the separators form an antichain and hence there exists a vertex $ b^{\prime} \in S_{i} \setminus S_{i-1} $. Consider the subpath $ P^{\prime} $ of $ P $ from $ C_{1} $ to $ C_{i-1} $. We have that $ b^{\prime} \in S_{i} \setminus S_{i-1} \subseteq C_{i-1}\setminus S_{i-1} $.
Let $ S_{j} \ (2 \leq j \leq i-1) $ be a minimal $ (a,b^{\prime}) $-separator of minimal cardinality. Using the clique intersection property (Theorem \ref{Ibarra2}(\ref{Ibarra2-1})), we have that $ N_{G^{-}[C_{1}]}(a) \subseteq C_{1} \cap S_{i} \subseteq S_{j} $.
By (\ref{dominating 1}), every vertex $ u \in N_{G^{-}[C_{1}]}(a) $ is dominating in $ G^{-}[S_{i}] $. In particular, we have $ \{u,b^{\prime}\} \in E_{G}^{-} $. Hence $ N_{G^{-}[C_{1}]}(a) \subseteq N_{G^{-}[C_{i-1}]}(b^{\prime}) $ since $ N_{G^{-}[C_{1}]}(a) \subseteq S_{i} \subseteq C_{i-1} $.
Therefore the subpath $ P^{\prime} $ and the vertices $ a,b^{\prime} $ satisfy the hypotheses of the assertion. By the induction hypothesis, we conclude that the assertion is true. \end{proof}
\begin{lemma}\label{comp_signed or ss} One of the following holds: \begin{enumerate}[(1)] \item \label{comp_signed or ss 1} $ G $ has a signed-simplicial vertex. \item \label{comp_signed or ss 2} There exists a minimal vertex separator $ S $ of $ G^{+} $ such that $ G[S] $ is a complete signed graph with loops. \end{enumerate} \end{lemma} \begin{proof} By Proposition \ref{sink_box_leaf}, every sink box of $ \mathcal{G} $ has at least two clique nodes which are leaves. We assume that the clique nodes $ C_{1},C_{k} $ of our path $ P $ are leaves of a sink box. Moreover, suppose that $ a $ and $ b $ are chosen so that the degrees $ \deg_{G^{-}[C_{1}]}(a) $ and $ \deg_{G^{-}[C_{k}]}(b) $ are minimal over $ C_{1}\setminus S_{2} $ and $ C_{k}\setminus S_{k} $, respectively. If $ N_{G^{-}[C_{1}]}(a) \not\subseteq S_{i} $ and $ N_{G^{-}[C_{k}]}(b) \not\subseteq S_{i} $, then $ G[S_{i}] $ is a complete signed graph with loops by Lemma \ref{separator comp signed}. Thus (\ref{comp_signed or ss 2}) holds.
Now, we may assume that $ N_{G^{-}[C_{1}]}(a) \subseteq S_{i} $ by symmetry. Then we have $ N_{G^{-}[C_{1}]}(a) \subseteq S_{2} $ by the clique intersection property (see Theorem \ref{Ibarra2}(\ref{Ibarra2-1})). We will show that $ \deg_{G^{-}[C_{1}]}(a) $ is minimal in $ C_{1} $. In order to do that, take a vertex $ u \in S_{2} $ and compare the degrees.
First, assume that $ u \in N_{G^{-}[C_{1}]}(a) $. Since one of the conditions in Lemma \ref{dominating} holds, the vertex $ u $ is dominating in $ G^{-}[S_{2}] $ by Lemma \ref{dominating}(\ref{dominating 2}). Then we have \begin{align*}
\deg_{G^{-}[C_{1}]}(u) \geq |\{a\} \cup (S_{2} \setminus \{u\})| = |S_{2}| \geq |N_{G^{-}[C_{1}]}(a)| = \deg_{G^{-}[C_{1}]}(a). \end{align*}
Second, suppose that $ u \in S_{2} \setminus N_{G^{-}[C_{1}]}(a) $. There exists a negative edge from each vertex in $ N_{G^{-}[C_{1}]}(a) $ to $ u $ since every vertex in $ N_{G^{-}[C_{1}]}(a) $ is dominating in $ G^{-}[S_{2}] $. Hence we have \begin{align*}
\deg_{G^{-}[C_{1}]}(u) \geq |N_{G^{-}[C_{1}]}(a)| = \deg_{G^{-}[C_{1}]}(a). \end{align*}
Thus $ \deg_{G^{-}[C_{1}]}(a) $ is minimal in $ C_{1} $. By Proposition \ref{bc threshold} and Corollary \ref{deg minimal ss}, we have that the vertex $ a $ is signed simplicial in $ G[C_{1}] $. Since $ a \in C_{1}\setminus S_{2} $ and $ C_{1} $ is a leaf of a sink box, the separator $ S_{2} $ separates $ a $ from every vertex in $ V_{G} \setminus C_{1} $ by Theorem \ref{Ibarra 1}. Therefore all vertices adjacent to $ a $ belong to $ C_{1} $, and hence $ a $ is a signed-simplicial vertex of $ G $. \end{proof}
\begin{example} By Theorems \ref{Dirac minmal vertex separator} and \ref{Dirac simplicial}, every chordal simple graph has a simplicial vertex and all of its minimal vertex separators are cliques. Regarding our case, there are signed graphs which satisfy only one of the conditions in Lemma \ref{comp_signed or ss}. The left graph in Figure \ref{example bc comp ss} has signed-simplicial vertices $ b,e $, and its unique minimal vertex separator $ \{a,c,d\} $ does not induce a complete signed graph with loops. The right graph has no signed-simplicial vertices, and the minimal vertex separator $ \{c,d\} $ induces $ B_{2} $. \begin{figure}
\caption{Examples for Lemma \ref{comp_signed or ss}}
\label{example bc comp ss}
\end{figure} \end{example}
The following lemma is a generalization of Theorem \ref{Dirac simplicial}. \begin{lemma}\label{two non-adj div} The signed graph $ G $ has at least two non-adjacent divisional vertices $ v_{1},v_{2} $ such that the contractions $ G/e_{i} $ with respect to some incident divisional edges $ e_{i}=(v_{i},v_{i}^{\prime}) $ satisfy the following conditions: \begin{enumerate}[(i)] \item $ (G/e_{i})^{+} \supseteq (G/e_{i})^{-} $. \item $ G/e_{i} $ is balanced chordal. \end{enumerate} \end{lemma} \begin{proof} Without loss of generality, we may assume that $ G $ is connected. We proceed by induction on the number of vertices of $ G $.
If $ |V_{G}| $ equals $ 1 $ or $ 2 $, then $ G^{+} $ must be complete, and hence we have nothing to prove.
Assume that $ |V_{G}| \geq 3 $. By Lemma \ref{comp_signed or ss}, our graph $ G $ has a signed-simplicial vertex or a separator $ S $ such that $ G[S] $ is a complete signed graph with loops.
First, suppose that $ G $ has a signed-simplicial vertex $ v_{1} $. Let $ F \coloneqq G\setminus \{v_{1}\} $, which is connected since $ G $ is connected and $ v_{1} $ is signed simplicial. The vertex $ v_{1} $ is divisional by Proposition \ref{ss => divisional} and every incident edge $ e_{1}=(v_{1},v_{1}^{\prime}) $ is divisional. Moreover, by Proposition \ref{add ss}(\ref{add ss 1}), the contraction $ G/e_{1} = F $ satisfies the conditions. We will show that there exists a vertex of $ F $ which is non-adjacent to $ v_{1} $ and divisional in $ F $. If $ F^{+} $ is complete, then every vertex in $ F $ is divisional in $ F $ by Propositions \ref{G+ comp and full loops} and \ref{ss => divisional} and the contraction of $ F $ with respect to an incident divisional edge satisfies the conditions. Since $ G^{+} $ is non-complete, there exists a vertex in $ F $ which is non-adjacent to $ v_{1} $. When $ F^{+} $ is non-complete, by the induction hypothesis, $ F $ has two non-adjacent divisional vertices such that the conditions are satisfied. Since $ v_{1} $ is signed simplicial, one of them is non-adjacent to $ v_{1} $. Thus, in both cases, there exists a vertex $ v_{2} $ of $ F $ which is non-adjacent to $ v_{1} $ and there exists an incident divisional edge $ e_{2}=(v_{2},v_{2}^{\prime}) $ such that $ F/e_{2} $ satisfies the conditions.
Now, we show that $ v_{2} $ is also divisional in $ G $ and $ G/e_{2} $ satisfies the conditions. By the definition of a divisional edge, there exists a non-negative integer $ d^{\prime} $ such that \begin{align*} \chi(F,t) = (t-d^{\prime})\chi(F/e_{2},t). \end{align*} Since $ v_{1},v_{2} $ are non-adjacent, we have that $ v_{1} $ is signed simplicial in $ G/e_{2} $. Let $ d $ denote the degree of $ v_{1} $ in $ G $ (or equivalently in $ G/e_{2} $). By Proposition \ref{add ss}(\ref{add ss 3}), we have \begin{align*} \chi(G/e_{2},t)=(t-d)\chi((G/e_{2})\setminus \{v_{1}\},t)=(t-d)\chi(F/e_{2},t). \end{align*} By Proposition \ref{add ss}(\ref{add ss 3}) again, we have that \begin{align*} \chi(G,t) &= (t-d)\chi(G \setminus\{v_{1}\},t) = (t-d)\chi(F,t) \\ &= (t-d)(t-d^{\prime})\chi(F/e_{2},t) = (t-d^{\prime})\chi(G/e_{2},t). \end{align*} Thus $ e_{2} $ is a divisional edge in $ G $. Moreover, since $ v_{1} $ is signed simplicial in $ G/e_{2} $ and $ (G/e_{2}) \setminus \{v_{1}\} = F/e_{2} $ is balanced chordal, we have $ G/e_{2} $ is also balanced chordal by Proposition \ref{add ss}(\ref{add ss 4}). The condition $ (G/e_{2})^{+} \supseteq (G/e_{2})^{-} $ follows immediately from the condition $ (F/e_{2})^{+} \supseteq (F/e_{2})^{-} $.
Next, we suppose that $ G $ has a minimal vertex separator $ S $ such that $ G[S] $ is a complete signed graph with loops. Let $ A $ be the vertex set of a connected component of $ G \setminus S $. Put $ G_{1} \coloneqq G[A\cup S] $. Either $ G_{1}^{+} $ is complete or, by the induction hypothesis, $ G_{1} $ has two non-adjacent divisional vertices such that the conditions are satisfied. In both cases, there exist a vertex $ v \in A $ and an incident divisional edge $ e=(v,v^{\prime}) $ of $ G_{1} $ such that $ G_{1}/e $ satisfies the conditions.
We show that $ v $ is divisional in $ G $. By the definition of a divisional edge, there exists a non-negative integer $ d $ such that $ \chi(G_{1},t)=(t-d)\chi(G_{1}/e,t) $. Let $ G_{2} \coloneqq G \setminus A $. Since $ v \in A $, we have \begin{align*} G = G_{1} \cup G_{2} &\text{ and } G_{1} \cap G_{2} = G[S], \\ G/e = (G_{1}/e) \cup G_{2} &\text{ and } (G_{1}/e) \cap G_{2} = G[S]. \end{align*} By Lemma \ref{gluing}, \begin{align*} \chi(G,t) &= \frac{\chi(G_{1},t)\chi(G_{2},t)}{\chi(G[S],t)} = \frac{(t-d)\chi(G_{1}/e,t)\chi(G_{2},t)}{\chi(G[S],t)} = (t-d)\chi(G/e,t). \end{align*} Thus $ e $ is divisional in $ G $, and hence $ v $ is divisional in $ G $. Moreover, since $ G_{1}/e $ and $ G_{2} $ are balanced chordal, we have that $ G/e $ is also balanced chordal by Lemma \ref{gluing}. The condition $ (G/e)^{+} \supseteq (G/e)^{-} $ is obvious.
We have shown that every connected component of $ G \setminus S $ has a vertex that is divisional in $ G $ and has an incident divisional edge satisfying the conditions. Hence $ G $ has at least two non-adjacent divisional vertices satisfying the desired property. \end{proof}
\subsection{Proof of Theorem \ref{main theorem}} We are now ready to give a proof of Theorem \ref{main theorem}. \begin{proof}[Proof of Theorem \ref{main theorem}] $ (\ref{main theorem 1}) \Rightarrow (\ref{main theorem 2}) $ If $ G^{+} $ is complete, then $ \mathcal{A}(G) $ is free by Proposition \ref{G+ comp and full loops}. Suppose that $ G^{+} $ is non-complete.
We proceed by induction on the number of vertices $ |V_{G}| $.
We may assume that $ |V_{G}| \geq 3 $. By Lemma \ref{two non-adj div}, we have a divisional edge $ e $ such that $ (G/e)^{+} \supseteq (G/e)^{-} $ and $ G/e $ is balanced chordal. Moreover, every vertex in $ G/e $ admits a loop. Therefore, by the induction hypothesis, we have that $ \mathcal{A}(G/e) $ is free. Using Lemma \ref{division theorem for signed graph}, we conclude that $ \mathcal{A}(G) $ is free.
$ (\ref{main theorem 2}) \Rightarrow (\ref{main theorem 3}) $ This is trivial.
$ (\ref{main theorem 3}) \Rightarrow (\ref{main theorem 1}) $ This follows by Lemma \ref{free => balanced chordal}. \end{proof}
\section{Open Problems} Determining the freeness of arbitrary signed-graphic arrangements appears to be quite difficult. In this paper, we consider the problem under the assumption $ G^{+} \supseteq G^{-} $. However, the behavior of loop sets that admit freeness remains unclear. \begin{problem} Assume that a balanced-chordal signed graph $ G=(G^{+},G^{-}) $ satisfies $ G^{+} \supseteq G^{-} $. Find a necessary and sufficient condition on a loop set $ L $ for $ \mathcal{A}(G^{+},G^{-},L) $ to be free. \end{problem}
For a free arrangement $ \mathcal{A} $, the multiset of degrees of a homogeneous basis for $ D(\mathcal{A}) $ is called the set of \textbf{exponents}. The exponents of a free graphic arrangement can be described in terms of the graph.
Namely, if $ (v_{1}, \dots, v_{\ell}) $ is a perfect elimination ordering of a chordal graph $ G $, then the multiset $ \Set{\deg_{G[\{v_{1},\dots, v_{i} \}]}(v_{i}) | 1 \leq i \leq \ell} $ coincides with the exponents of $ \mathcal{A}(G) $.
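As a concrete illustration (a well-known instance, included here for the reader's convenience), consider the complete graph:

```latex
% Standard example: for G = K_\ell, every vertex ordering (v_1, ..., v_\ell)
% is a perfect elimination ordering, and v_i has degree i-1 in the induced
% subgraph K_\ell[{v_1, ..., v_i}]. Hence the multiset above is
\[
  \Set{\deg_{K_{\ell}[\{v_{1},\dots, v_{i} \}]}(v_{i}) | 1 \leq i \leq \ell}
  \;=\; \{0,1,\dots,\ell-1\},
\]
% recovering the exponents of the graphic (braid) arrangement A(K_\ell).
```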
\begin{problem} Is there any graphical interpretation of the exponents of free signed-graphic arrangements? \end{problem}
\end{document} |
\begin{document}
\title{Bounding the number of odd paths in planar graphs via convex optimization}
\begin{abstract}
Let $N_{\mathcal{P}}(n,H)$ denote the maximum number of copies of $H$ in an $n$ vertex planar graph.
The problem of bounding this function for various graphs $H$ has been extensively studied since the 70's.
A special case that received a lot of attention recently is when $H$ is the path on $2m+1$ vertices, denoted $P_{2m+1}$.
Our main result in this paper is that
$$
N_{\mathcal{P}}(n,P_{2m+1})=O(m^{-m}n^{m+1})\;.
$$
This improves upon the previously best known bound by a factor of $e^{m}$,
which is best possible up to the hidden constant, and makes a significant step towards resolving conjectures
of Ghosh et al. and of Cox and Martin. The proof uses graph theoretic arguments together with (simple) arguments
from the theory of convex optimization.
\end{abstract}
\section{Introduction}
In this paper we study the following extremal problem: given a fixed graph $H$, what is the maximum number of copies of $H$
that can be found in an $n$ vertex planar graph? We denote this maximum by $N_{\mathcal{P}}(n,H)$.
The investigation of this problem was initiated by Hakimi and Schmeichel \cite{HakSch1979} in the 70's.
They considered the case when $H$ is a cycle of length $m$, denoted $C_m$.
They determined $N_{\mathcal{P}}(n,C_3)$ and $N_{\mathcal{P}}(n,C_4)$ exactly, and for general $m\geq 3$ proved that $N_{\mathcal{P}}(n,C_{m})=\Theta(n^{\lfloor{m}/{2}\rfloor})$.
Following this result, Alon and Caro \cite{AloCar1984} determined $N_{\mathcal{P}}(n,K_{2,m})$ exactly for all $m$, where $K_{2,m}$ is the complete $2$-by-$m$ bipartite graph. In a series of works \cite{Epp1993,GyoPauSalTomZam2021,HuyWoo2022,Wor1986}, which culminated with a recent paper of Huynh, Joret and Wood \cite{HuyJorWoo2020}, the asymptotic value of $N_{\mathcal{P}}(n,H)$ was determined up to a constant factor\footnote{This line of research was also generalized to other families of sparse host graphs, e.g.\ graphs that are embeddable in a surface of genus $g$, $d$-degenerate graphs, and more. In fact, the main result of \cite{HuyJorWoo2020} also determines (up to constant factors) the maximum number of copies of a given graph in an $n$ vertex graph which is embeddable in a surface of genus $g$. A recent far reaching generalization of \cite{HuyJorWoo2020} can be found in Liu \cite{Liu2021} where the order of magnitude of the maximum number of copies of a given graph in a `nowhere dense' graph was computed up to constant factors.} (depending on $H$) for every fixed $H$.
The next natural question following the result of \cite{HuyJorWoo2020} is to determine the asymptotic growth of $N_{\mathcal{P}}(n,H)$ up to\footnote{We use the standard notation $o(1)$ to denote a quantity tending to $0$ when $n$ tends to infinity and $H$ is fixed. Similarly, when we write $o(n^k)$ we mean $o(1)\cdot n^k$. } $1+o(1)$, or more ambitiously, to determine its exact value. This line of research was initiated by Gy\H{o}ri, Paulos, Salia, Tompkins and Zamora \cite{GyoPauSalTomZam2019,GyoPauSalTomZam2021}, who showed that for large enough $n$ we have $N_{\mathcal{P}}(n,P_4)=7n^{2}-32n+27$ and $N_{\mathcal{P}}(n,C_5)=2n^2-10n+12$, where $P_m$ denotes the path with $m$ vertices (and $m-1$ edges).
We note that the result of Alon and Caro \cite{AloCar1984} implies that $N_{\mathcal{P}}(n,K_{1,2})=N_{\mathcal{P}}(n,P_3)= n^2+3n-16$.
Addressing the problem of finding the asymptotic value of $N_{\mathcal{P}}(n,P_{m})$ up to $1+o(1)$, Ghosh, Gy\H{o}ri, Martin, Paulos, Salia, Xiao and Zamora \cite{GhoGyoMarPauSalXiaZam2021} showed that $N_{\mathcal{P}}(n,P_{5})=(1+o(1))n^3$. They also raised the following conjecture\footnote{They also conjectured that the second order term is $O(n^{m})$.}
regarding the asymptotic value of $N_{\mathcal{P}}(n,P_{2m+1})$ for arbitrary $m \geq 2$:
\begin{equation}\label{eq-Conjecture}
N_{\mathcal{P}}(n,P_{2m+1})=(4m^{-m}+o(1))n^{m+1}\;.
\end{equation}
We note that the lower bound in \eqref{eq-Conjecture} is easy. Indeed, start with a cycle of length $2m$, and then replace every second vertex with an independent set consisting of $(n-m)/m$ vertices, each with the same neighborhood as the original vertex it replaced.
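To spell out the count behind this lower bound (our sketch; the text above leaves it implicit), label the $m$ remaining cycle vertices $u_1,\dots,u_m$ and the independent sets $W_1,\dots,W_m$, with $W_i$ placed between $u_i$ and $u_{i+1}$:

```latex
% Sketch of the count in the blown-up cycle. Every edge joins a hub u_j to a
% blob W_i, so a path on 2m+1 vertices alternates blob-hub-...-blob, and its
% m hubs are traversed consecutively around the cycle. Counting traversals:
% m choices for the first hub, 2 for the direction, (n/m)^{m-1} choices for
% the m-1 internal blob vertices (their blobs are forced), and each endpoint
% may lie in either blob adjacent to its end hub, giving (2n/m)^2 choices.
% Each path is traversed twice, so
\[
  N_{\mathcal{P}}(n,P_{2m+1}) \;\geq\;
  \frac{1}{2}\cdot m\cdot 2\cdot\Bigl(\frac{n}{m}\Bigr)^{m-1}
  \Bigl(\frac{2n}{m}\Bigr)^{2}(1+o(1))
  \;=\; \bigl(4m^{-m}+o(1)\bigr)\,n^{m+1}.
\]
```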
In a very recent paper, Cox and Martin \cite{CoxMar2021_1} introduced an analytic approach for proving \eqref{eq-Conjecture}.
They showed that
\begin{equation}\label{eqcm}
N_{\mathcal{P}}(n,P_{2m+1})\leq (\rho(m)/2+o(1))n^{m+1}\;,
\end{equation}
where $\rho(m)$ is the solution to a certain convex optimization problem, which we define precisely in Section \ref{section3}.
They further conjectured that
\begin{equation}\label{CoxMartinConj}
\rho(m)\leq 8m^{-m}\;,
\end{equation}
which, if true, implies \eqref{eq-Conjecture}. In the same paper, they verified their conjecture for $m=3$ by showing that $\rho(3)=8/27$, which confirms \eqref{eq-Conjecture} for $m=3$.
Using the same approach they also improved the known asymptotic value of $N_{\mathcal{P}}(n,P_{2m+1})$ by showing that
\[
N_{\mathcal{P}}(n,P_{2m+1})\leq \left(\frac{1}{2\cdot(m-1)!}+o(1)\right)n^{m+1}\;.
\]
Note that this bound is roughly $e^{m}$ larger than the one conjectured in (\ref{eq-Conjecture}).
Our main result in this paper, Theorem \ref{thm:number of path in planar graphs} below, makes a significant step towards the resolution of the Cox--Martin and Ghosh et al.\ conjectures, by establishing \eqref{CoxMartinConj} up to an absolute constant.
\begin{theorem}\label{thm:number of path in planar graphs}
There is an absolute constant $C$ so that for every fixed $m\geq 2$ and large enough $n$, we have
\[
N_{\mathcal{P}}(n,P_{2m+1})\leq Cm^{-m}n^{m+1}\;.
\]
\end{theorem}
As noted after \eqref{eq-Conjecture}, the above bound is best possible up to the value of $C$. Furthermore, as can be seen in the proof of Theorem \ref{thm:number of path in planar graphs}, the constant $C$ we obtain is $10^4$ (which can certainly be improved).
\subsection{Related work and paper overview}
In addition to studying $N_{\mathcal{P}}(n,P_{2m+1})$, Cox and Martin \cite{CoxMar2021_1} also introduced an analytic method for bounding the maximum number of even cycles in planar graphs.
Similar to the case of odd paths discussed above, they showed that $N_{\mathcal{P}}(n,C_{2m})\leq (\beta(C_m)+o(1)) n^{m}$, where $\beta(C_m)$ is an optimization problem, similar to the one we study in Section \ref{section2}.
They conjectured that $\beta(C_m)=m^{-m}$, a bound which implies by their framework that $N_{\mathcal{P}}(n,C_{2m})\leq (1+o(1)) (n/m)^{m}$.
Observe that the example we mentioned after \eqref{eq-Conjecture} shows that this bound is best possible.
Towards their conjecture, Cox and Martin \cite{CoxMar2021_1} proved that $\beta(C_m)\leq 1/m!$.
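For comparison, a simple check (ours, not stated explicitly above) shows that the conjectured value cannot be lowered:

```latex
% The uniform measure \mu on the edges of C_m gives each edge weight 1/m.
% The only copy of C_m in C_m is the cycle itself, so
\[
  \beta(C_{m}) \;\geq\; \beta(\mu;C_{m})
  \;=\; \Bigl(\frac{1}{m}\Bigr)^{m} \;=\; m^{-m}.
\]
```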
Using the ideas in this paper, one can significantly improve this bound. In particular, using Lemma \ref{lem:number of P_m in an n vertex graph} in Section \ref{section2}, it is not hard to show that for some absolute constant $C$, we have
\begin{equation}\label{eqcycles}
\beta(C_m)\leq Cm^{-m}\;.
\end{equation}
In an independent work, Lv, Gy\H{o}ri, He, Salia, Tompkins and Zhu \cite{LvGyoHeSalTomZhu2022} confirmed the conjecture of Cox and Martin by showing that one can in fact obtain $C=1$ in (\ref{eqcycles}). We thus do not include the proof of (\ref{eqcycles}).
We should point out that the reason why studying $N_{\mathcal{P}}(n,P_{2m+1})$ appears to be much harder than $N_{\mathcal{P}}(n,C_{2m})$ is that
as opposed to $\beta(C_m)$, which is an optimization problem involving a single graph, $\rho(m)$ is an optimization problem which involves several multigraphs. To overcome this difficulty
we first study in Section \ref{section2} an optimization problem, denoted $\beta(P_m)$,
which is the analogue of $\beta(C_m)$ for the setting of $P_m$. The main advantage of first studying $\beta(P_m)$ is that it allows us
to employ a weight shifting argument, which does not seem to be applicable to $\rho(m)$.
Our main result in that section is a nearly tight bound for $\beta(P_{m})$.
However, as opposed to the case of $N_{\mathcal{P}}(n,C_{2m})$, a bound for $\beta(P_{m})$ does not immediately translate into a bound for $N_{\mathcal{P}}(n,P_{2m+1})$. Hence, in Section \ref{section3} we present the main novel part of this paper, showing how one can transfer any bound for $\beta(P_{m})$ into a bound for $\rho(m)$, thus proving Theorem \ref{thm:number of path in planar graphs}. To this end we use simple arguments from the theory of convex optimization, which allow us to exploit the fact that $\rho(m)$ is a low degree polynomial.
The key lemmas leading to the proof of Theorem \ref{thm:number of path in planar graphs} are Lemmas \ref{thm:main theorem} and \ref{prop:rho to beta} for which we obtain bounds that are optimal up to constant factors. Moreover, if one can improve these bounds to the optimal conjectured ones, then this will give the conjectured inequality (\ref{eq-Conjecture}). We believe that with more care, it is possible to use the ideas in this paper to improve the bound for $\beta(P_m)$ in Lemma \ref{thm:main theorem} to the conjectured one.
In contrast, because of the complex structure of $\rho(m)$, it seems that in order to improve the bound in Lemma \ref{prop:rho to beta} to the conjectured bound, a new idea is needed.
\section{A variant of $\rho(m)$}
\label{section2}
Our goal in this section is to prove Lemma \ref{thm:main theorem} regarding the optimization problem $\beta(P_m)$.
This lemma will be used in the next section in the proof of Theorem \ref{thm:number of path in planar graphs}.
The proof of Lemma \ref{thm:main theorem} will employ a subtle weight shifting argument.
We first recall several definitions from \cite{CoxMar2021_1}. In what follows we write $[n]$ to denote the set $\{1,\ldots,n\}$ and $K_n$ to denote the complete graph on $[n]$.
\begin{definition}
Let $n>0$ be an integer, and let $\mu$ be a probability measure on the edges of $K_{n}$.
\begin{enumerate}
\item For any $x\in [n]$ we define the weighted degree of $x$ to be
\[
\bar{\mu}(x)=\sum_{y\in [n]\setminus\{x\}}\mu(x,y)\;.
\]
\item For any subgraph $H\subseteq K_n$ we define the weight of $H$ to be
\[
\mu(H)=\prod_{e\in E(H)} \mu(e)\;.
\]
\item For any graph $H$ with no isolated vertices, define
\[
\beta(\mu;H)=\sum_{H'\in \mathbf{C}(H,n)}\mu(H')\;,
\]
where $\mathbf{C}(H,n)$ is the set of all (non-induced and unlabeled) copies of $H$ in $K_n$.
Further, we define
\[
\beta(H)=\sup_{\mu} \beta(\mu;H)\;,
\]
where the supremum is taken over all $n'$ and all probability measures $\mu$ on the edges of $K_{n'}$.
\end{enumerate}
\end{definition}
Intuitively, the function $\beta(\mu;H)$ is the probability of hitting a (non-induced and unlabeled) copy of $H$ if $|E(H)|$ independent edges were chosen according to $\mu$.
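As a quick sanity check of this definition (our illustration, not code from the paper), one can compute $\beta(\mu;P_3)$ by brute force for the uniform measure on the edges of $K_4$: a copy of $P_3$ is a center vertex together with two distinct neighbors, so there are $4\binom{3}{2}=12$ copies, each of weight $(1/6)^2$, giving $\beta(\mu;P_3)=12/36=1/3$.

```python
# Brute-force evaluation of beta(mu; P_3) on K_4 under the uniform edge
# measure. This is an illustrative check of the definition, not the paper's
# optimization machinery.
from itertools import combinations

n = 4
edges = list(combinations(range(1, n + 1), 2))
mu = {e: 1.0 / len(edges) for e in edges}   # uniform probability measure on E(K_4)

def beta_P3(mu, n):
    """Sum of mu(H') over all unlabeled copies H' of P_3 in K_n."""
    total = 0.0
    for center in range(1, n + 1):
        others = [v for v in range(1, n + 1) if v != center]
        for a, b in combinations(others, 2):   # two distinct neighbors of the center
            e1 = tuple(sorted((center, a)))
            e2 = tuple(sorted((center, b)))
            total += mu[e1] * mu[e2]
    return total

print(beta_P3(mu, n))  # 12 copies, each of weight (1/6)^2, i.e. 1/3
```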
\begin{lemma}\label{thm:main theorem}
For any integer $m\geq 2$ we have
\begin{equation*}
\beta(P_{m})\leq \frac{2e^2}{m^{m-2}}\;.
\end{equation*}
\end{lemma}
We remark that this lemma is optimal up to the constant factor $2e^2$. To see this, consider the uniform distribution over the edges of $C_{m}$, which shows that $\beta(P_{m}) \geq 1/m^{m-2}$. It seems reasonable to conjecture that $\beta(P_{m}) = 1/m^{m-2}$.
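The lower-bound computation behind this remark is a one-liner, which we spell out for completeness:

```latex
% Let \mu be the uniform measure on the m edges of C_m, so \mu(e) = 1/m.
% Deleting any one edge of C_m leaves a copy of P_m (a path on m vertices,
% hence m-1 edges), and these m paths are the only copies of P_m in C_m. So
\[
  \beta(P_{m}) \;\geq\; \beta(\mu;P_{m})
  \;=\; m\cdot\Bigl(\frac{1}{m}\Bigr)^{m-1}
  \;=\; \frac{1}{m^{m-2}}.
\]
```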
The key step in the proof of Lemma \ref{thm:main theorem} is Lemma \ref{lem:number of P_m in an n vertex graph} below. To state this lemma, we first need the following definitions.
\begin{definition}
For every $k,\ell \geq 0$ we define $P_{(k,\ell)}$ to be the disjoint union of $P_{k+1}$ and $P_{\ell+1}$.
\end{definition}
From now on, we will not only deal with probability measures but also with bounded measures. Therefore, we will frequently write \emph{measure} to denote a bounded measure. Moreover, for a measure $\mu$ we will denote its total mass by $w(\mu)$.
\begin{definition}\label{defbetastar}
Suppose $\mu$ is a measure on the edges of $K_n$ and $s,t \geq 0$. Define
\[
\beta^*(\mu;P_{(s,t)}) = \sum _{P\in\mathbf{C}^*(P_{(s,t)},n)}\mu(P)\;,
\]
where $\mathbf{C}^*(P_{(s,t)},n)$ is the set of copies of $P_{(s,t)}$ in $K_{n}$ where the path of length $s$ starts with the vertex $n$, and the path of length $t$ starts with the vertex $1$. Further, for every $w>0$ we define
\[
\beta_{w,n}^*(P_{(s,t)})=\sup_{\mu} \beta^*(\mu;P_{(s,t)})\;,
\]
where the supremum is taken over all measures $\mu$ on the edges of $K_{n}$ with $w(\mu)=w$.
\end{definition}
We remark that for any measure $\mu$ on the edges of $K_n$, we have $\beta^*(\mu;P_{(0,0)})=1$. This is because $\mathbf{C}^*(P_{(0,0)},n)$ consists of a single graph, the independent set $I_2=\{1,n\}$, and because $\mu(I_2)=1$ (an empty product). This clearly implies that $\beta_{w,n}^*(P_{(0,0)})=1$ for every $w$ and $n$.
\begin{lemma}\label{lem:number of P_m in an n vertex graph}
For every $0\leq \ell \leq m \leq n$ we have
\[
\beta_{1,n}^*(P_{(\ell,m-\ell)})\leq \frac{1}{m^{m}}\;.
\]
\end{lemma}
\begin{claim}\label{claim_double_path}
Suppose that $t$ is a non-negative integer, $s,n$ are positive integers, and $w\geq 0$. Then, there exists a measure $\mu$ on the edges of $K_n$ with $w(\mu)=w$, satisfying:
\begin{enumerate}
\item $\beta^*(\mu;P_{(s,t)})=\beta^*_{w,n}(P_{(s,t)})$, and
\item for all $q\neq n-1$ we have $\mu(q,n)=0$.
\end{enumerate}
\end{claim}
\begin{proof}
The main idea in the proof is the introduction of the notion of a $w$-useful measure.
We say that a measure $\mu$ on the edges of $K_n$ with $w(\mu)=w$ is \emph{$w$-optimal} if
\[
\beta^*(\mu;P_{(s,t)})=\beta_{w,n}^*(P_{(s,t)})\;.
\]
We further say that $\mu$ is \emph{$w$-useful} if $\mu$ is $w$-optimal and
\[
\max_{k\in[n-1]} \mu(n,k) = \sup_{\eta,k} \eta(n,k)\;,
\]
where the supremum is taken over all $k\in [n-1]$ and all measures $\eta$ which are $w$-optimal. Let us see why such a $w$-useful measure exists.
Note that there is a natural bijection between measures $\mu$ with $w(\mu)=w$, and vectors in the simplex $\Delta=\{x\in \mathbb{R}^{\binom{n}{2}}:x_i \geq 0 ~\mbox{and}~\sum_{i=1}^{\binom{n}{2}} x_i=w\}$. Thus, to show that a $w$-useful measure exists we think of $\mu$ as a vector in $\Delta$.
Recalling that
\[
\beta^*(\mu;P_{(s,t)}) = \sum _{P\in\mathbf{C}^*(P_{(s,t)},n)}\mu(P)= \sum_{P\in\mathbf{C}^*(P_{(s,t)},n)}\prod_{e\in E(P)} \mu(e)\;,
\]
we see that $\beta^*(\mu;P_{(s,t)})$ is an $\binom{n}{2}$-variate polynomial, with variables $\mu(e)$ for all $e\in E(K_{n})$. Under these notations, $w$-optimal measures are maximal points of the polynomial $\beta^*(\mu;P_{(s,t)})$ in $\Delta$. Since $\Delta$ is compact and $\beta^*(\mu;P_{(s,t)})$ is continuous, we deduce that $O_w$, the set of all $w$-optimal measures, is non-empty. Moreover, $O_w$ is a compact set, since it is closed (as the preimage of a closed set under the continuous function $\beta^*(\mu;P_{(s,t)})$) and bounded (as it is contained in $\Delta$).
Setting $f(\mu)=\max_{k\in[n-1]}\mu(n,k)$, we find that $\mu$ is a $w$-useful measure if and only if it is a maximal point of $f$ within $O_w$. Since $O_w$ is compact and $f$ is continuous, a $w$-useful measure exists.
We now prove that the existence of $w$-useful measures implies the claim. Indeed, let $\mu$ be a $w$-useful measure. Assume without loss of generality\footnote{If this is not the case, we can permute the vertices and end up with such a measure.} that $\mu(n-1,n)\geq 0$ is maximal among all $\mu(k,n)$. We claim that $\mu$ is as required. The first condition follows immediately from the fact that any $w$-useful measure is also $w$-optimal.
Assume towards a contradiction that the second condition fails, that is, that there exists a $q\neq n-1$ with $\mu(q,n)>0$. We will now show that there is a measure $\mu'$ satisfying $w(\mu')=w$ which will either
contradict the fact that $\mu$ is $w$-optimal or the fact that it is $w$-useful.
We define $\mu'$ as follows:
We first set $\mu'(e)=\mu(e)$ for every edge other than the two edges $\{n-1,n\}$ and $\{q,n\}$.
Define $W_q$ to be the weight (under $\mu$) of all copies of $P_{(s-1,t)}$, not containing $n$, such that the path of length $s-1$ starts with $q$, and the path of length $t$ starts with $1$. Define $W_{n-1}$ analogously. Then, we define
\[
\mu'(n,n-1)=\begin{cases}
\mu(n-1,n)+\mu(q,n) & \text{if } W_{n-1}\geq W_q\;,\\
0 & \text{else}\;,
\end{cases}
\]
and
\[
~~~~~~\mu'(q,n)=\begin{cases}
0 & \text{if } W_{n-1}\geq W_q\;,\\
\mu(q,n)+\mu(n-1,n) & \text{else\;.}
\end{cases}
\]
To see that we indeed get a contradiction, assume first that $W_{n-1}\geq W_q$. Since a copy of $P_{(s,t)}$ in $\mathbf{C}^{*}(P_{(s,t)},n)$ uses at most one of the edges $\{n-1,n\}$ and $\{q,n\}$, decreasing the value of $\{q,n\}$ by some $\varepsilon$ while increasing that of $\{n-1,n\}$ by the same $\varepsilon$ increases the total weight of copies of $P_{(s,t)}$ by $\varepsilon(W_{n-1}-W_q)$. We thus infer that \begin{align*}
\beta^*(\mu';P_{(s,t)}) = \beta^*(\mu;P_{(s,t)})+\mu(q,n)(W_{n-1}-W_q) \geq \beta^*(\mu;P_{(s,t)})\;. \end{align*} Since $\mu'(n,n-1)>\mu(n,n-1)$ we see that $\mu'$ witnesses the fact that $\mu$ is not $w$-useful.
If on the other hand $W_{q}>W_{n-1}$, then
\begin{align*}
\beta^*(\mu';P_{(s,t)}) = \beta^*(\mu;P_{(s,t)})+\mu(n-1,n)(W_{q}-W_{n-1})> \beta^*(\mu;P_{(s,t)})=\beta_{w,n}^*(P_{(s,t)})\;,
\end{align*}
so $\mu'$ witnesses the fact that $\mu$ is not $w$-optimal.
\end{proof}
\begin{claim}\label{claim_induction}
Suppose $s,t$ are non-negative integers, $n$ is a positive integer, and $w\geq 0$. Then, there are $w_1,\ldots ,w_s\geq 0$ such that $\sum_{i=1}^{s}w_i\leq w$ and such that
\[
\beta_{w,n}^*(P_{(s,t)})\leq \beta_{w',n-s}^*(P_{(0,t)}) \cdot \prod_{i=1}^{s}w_i \;,
\]
where $w'=w-\sum_{i=1}^{s}w_i$.
\end{claim}
\begin{proof}
First, if $s+t+1\geq n$ then the claim is trivial, as $\mathbf{C}^*(P_{(s,t)},n)=\emptyset$. So we assume for the rest of the proof that $s+t+2\leq n$.
Let $\mu_0$ be a measure on the edges of $K_n$ as guaranteed by Claim \ref{claim_double_path}. Since $\beta_{w,n}^*(P_{(s,t)})=\beta^*(\mu_0;P_{(s,t)})$, it is enough to prove that there are $w_1,\ldots,w_s\geq 0$ such that $\sum_{i=1}^{s}w_i\leq w$ and
\begin{equation}\label{eqbeta1}
\beta^*(\mu_0;P_{(s,t)})\leq \beta_{w',n-s}^*(P_{0,t})\cdot \prod_{i=1}^{s}w_i\;,
\end{equation}
where $w'=w-\sum_{i=1}^{s}w_i$. We define inductively a sequence of reals $w_1,\ldots ,w_s\geq 0$ with $\sum_{i=1}^{s}w_i\leq w$, along with measures $\mu_1,\ldots ,\mu_s$ on the edges of $K_{n-1},\ldots ,K_{n-s}$, respectively, such that the following holds for all $1 \leq j \leq s$, where we set $w'_{j}=w-\sum_{i=1}^{j}w_i$:
\begin{enumerate}[label=(\roman*)]
\item\label{1} $w(\mu_j)=w'_{j}$,
\item\label{2} $w_{j}=\mu_{j-1}(n-j+1,n-j)$,
\item\label{3} for all $q\in[n-j-2]$ we have $\mu_j(n-j,q)=0$,
\item\label{4} $\beta^*(\mu_j;P_{(s-j,t)})=\beta_{w'_{j},n-j}^*(P_{(s-j,t)})$, and
\item\label{5} $\beta^*(\mu_{j-1};P_{(s-j+1,t)})\leq w_j \cdot \beta^*(\mu_{j};P_{(s-j,t)})$.
\end{enumerate}
Indeed, assuming $w_1,\ldots, w_j$ and $\mu_1,\ldots,\mu_j$ have already been chosen, we now choose $w_{j+1}$ and $\mu_{j+1}$.
We first set $w_{j+1}=\mu_j(n-j,n-j-1)\geq 0$ so that the second condition holds.
Further, set $\mu'_{j+1}=\mu_{j}|_{n-j-1}$, the restriction of $\mu_{j}$ to the edges of $K_{n-j-1}$.
Observe that by the induction hypothesis on $\mu_{j}$, we have $\mu_{j}(n-j,q)=0$ for all $q\neq n-j-1$.
Hence
\[
w(\mu'_{j+1})=w(\mu_{j})-\sum_k\mu_{j}(n-j,k)=w(\mu_{j})-w_{j+1}=w'_{j+1}\;,
\]
and
\begin{align}\label{eq1-lemma}
\beta^*(\mu_j;P_{(s-j,t)})= w_{j+1}\cdot \beta^*(\mu'_{j+1};P_{(s-j-1,t)})\leq w_{j+1}\cdot \beta_{w'_{j+1},n-j-1}^*(P_{(s-j-1,t)})\;.
\end{align}
Let $\mu_{j+1}$ be the measure given by Claim \ref{claim_double_path} applied with $P_{(s-j-1,t)}$ and total mass $w'_{j+1}$.
We claim that $\mu_{j+1}$ satisfies the inductive properties. The fact that it satisfies the first condition is immediate from its definition.
To see that $\mu_{j+1}$ satisfies the last three conditions, note that by Claim \ref{claim_double_path} the measure $\mu_{j+1}$ satisfies
\begin{equation}\label{eq2-lemma}
\beta^*(\mu_{j+1};P_{(s-j-1,t)})=\beta_{w'_{j+1},n-j-1}^*(P_{(s-j-1,t)})\;,
\end{equation}
and $\mu_{j+1}(n-j-1,q)=0 $ for all $q\neq n-j-2$. Finally, combining \eqref{eq1-lemma} and \eqref{eq2-lemma} we obtain
\[
\beta^*(\mu_j;P_{(s-j,t)})\leq w_{j+1}\cdot\beta^*(\mu_{j+1};P_{(s-j-1,t)})\;,
\]
thus verifying the last three properties.
Repeatedly applying property \ref{5} we deduce that
$$
\beta^*(\mu_0;P_{(s,t)}) \leq \beta^*(\mu_s;P_{(0,t)})\cdot \prod_{i=1}^{s}w_i\;.
$$ Since $\beta^*(\mu_s;P_{(0,t)})= \beta^*_{w'_s,n-s}(P_{(0,t)})=\beta^*_{w',n-s}(P_{(0,t)})$ (by property \ref{4} and the definition of $w'$) we have thus proved (\ref{eqbeta1}) and the proof is complete.
\end{proof}
We now use Claim \ref{claim_induction} to prove Lemma \ref{lem:number of P_m in an n vertex graph}.
\begin{proof}[Proof of Lemma \ref{lem:number of P_m in an n vertex graph}]
Claim \ref{claim_induction} applied with $s=\ell,t=m-\ell$ and with $w=1$ asserts that there are $w_1,\ldots ,w_\ell\geq 0$ such that $\sum_{i=1}^{\ell}w_i\leq 1$ and such that
\begin{equation}\label{eq1-main-lemma}
\beta_{1,n}^*(P_{(\ell,m-\ell)})\leq \beta_{w',n-\ell}^*(P_{(0,m-\ell)}) \cdot \prod_{i=1}^{\ell}w_i \;,
\end{equation}
where $w'=1-\sum_{i=1}^{\ell}w_i$.
Clearly, for all integers $s,t,k$ and $w\geq 0$ we have $\beta_{w,k}^*(P_{(s,t)})=\beta_{w,k}^*(P_{(t,s)})$.
Hence, using Claim \ref{claim_induction} with $s=m-\ell,t=0$ and with $w=w'$, we obtain a sequence $w_{\ell+1},\ldots,w_{m}$ of non-negative reals, such that $\sum_{i=\ell+1}^m w_i\leq w'$ and such that
\begin{equation}\label{eq2-main-lemma}
\beta_{w',n-\ell}^*(P_{(0,m-\ell)})=\beta_{w',n-\ell}^*(P_{(m-\ell,0)})\leq \beta_{w'',n-m}^*(P_{(0,0)})\cdot \prod_{i=\ell+1}^{m}w_i =\prod_{i=\ell+1}^m w_i\;,
\end{equation}
where $w''=w'-\sum_{i=\ell+1}^m w_i$, and we used the fact that $\beta_{w'',n-m}^*(P_{(0,0)})=1$ (see the remark after Definition \ref{defbetastar}).
Combining \eqref{eq1-main-lemma} and \eqref{eq2-main-lemma}, we infer that there are $w_1,\ldots,w_m\geq 0$ with $\sum_{i=1}^{m}w_i\leq 1$ such that
\[
\beta_{1,n}^*(P_{(\ell,m-\ell)})\leq \prod_{i=1}^{m}w_i\leq \left(\frac{\sum_{i=1}^{m}w_i}{m}\right)^m\leq \frac{1}{m^m}\;,
\]
where the second inequality is the AM-GM inequality, and the last inequality follows from the properties of the sequence $w_1,\ldots ,w_m$.
\end{proof}
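The final chain of inequalities is elementary, and easy to check numerically. A minimal sketch (the weights below are arbitrary choices of ours, rescaled so that their sum is at most $1$):

```python
import math
import random

random.seed(0)
m = 8
w = [random.random() for _ in range(m)]
total = sum(w)
w = [x / total * 0.9 for x in w]          # rescale so that sum(w) = 0.9 <= 1

prod = math.prod(w)
am_gm = (sum(w) / m) ** m                 # AM-GM upper bound on the product
assert prod <= am_gm + 1e-15              # product <= (mean)^m
assert am_gm <= 1 / m ** m                # since sum(w) <= 1
```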
To deduce Lemma \ref{thm:main theorem} from the above claims, we recall a definition and a lemma from Cox and Martin \cite{CoxMar2021_1} which we specialize here to the case of $P_m$.
\begin{definition}
For an integer $n$, we denote by $\Opt(n;H)$ the set of all probability measures $\mu$ on the edges of $K_{n}$ satisfying
\[
\beta(\mu; P_m) = \sup_{\eta} \beta(\eta;P_m)\;,
\]
where the supremum is taken over all probability measures $\eta$ on the edges of $K_{n}$.
\end{definition}
\begin{lemma}[Lemma 4.5 in \cite{CoxMar2021_1}]\label{lem:inequalities regarding the masses}
For every $n\geq m\geq 2$ and $\mu \in \Opt(n; P_m)$, we have the following for all $x \in [n]$:
\[
\bar{\mu}(x)\cdot (m-1)\cdot \beta(\mu;P_m)=\sum_{\substack{P\in \mathbf{C}(P_m,n)\\ V(P)\ni x}}\deg_{P}(x)\mu (P)\;.
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{thm:main theorem}]
Suppose $n \geq m$ and take any $\mu\in \Opt(n;P_m)$. We will next show that $\beta(\mu;P_m)\leq \frac{2e^2}{m^{m-2}}\leq \frac{20}{m^{m-2}}$, thus completing the proof.
Let $x\in [n]$ be such that $\bar{\mu}(x)\neq 0$. By Lemma \ref{lem:inequalities regarding the masses}, and since $\deg_P(x)\leq 2$ for every path $P$, we have
\begin{align}\label{eq3}
\bar{\mu}(x)\cdot (m-1)\cdot \beta(\mu;P_m)\leq 2\sum_{\substack{P\in \mathbf{C}(P_m,n)\\ V(P)\ni x}}\mu (P)\;.
\end{align}
Given distinct $s,t\in[n]$ and $0 \leq \ell \leq m-2$ we define $\mathbf{C}^*(s,t,\ell)$ to be the set of all copies of $P_{(\ell,m-\ell-2)}$ in $K_n$, where the path of length $\ell$ starts with $s$ and the path of length $m-\ell-2$ starts with $t$.
We have
\begin{align}
\sum_{\substack{P\in \mathbf{C}(P_m,n)\\ V(P)\ni x}}\mu (P)&= \sum_{y\in [n]\setminus \{x\}}\mu(x,y)\sum_{\ell=0}^{m-2}\sum_{\substack{P \in \mathbf{C}^*(x,y,\ell)}}\mu (P)\nonumber\\
&\leq \sum_{y\in [n]\setminus \{x\}}\mu(x,y)\sum_{\ell=0}^{m-2}\beta^*_{1,n}(P_{(\ell,m-\ell-2)})\nonumber\\
&\leq \frac{m-1}{(m-2)^{m-2}}\sum_{y\in [n]\setminus \{x\}}\mu(x,y)=\frac{\bar{\mu}(x)(m-1)}{(m-2)^{m-2}}\label{eq4}\;,
\end{align}
where the second inequality holds by the definition\footnote{We rely on the fact that although $\beta^*_{w,n}(P_{(s,t)})$ was defined with respect to paths starting at vertices $1$ and $n$, we could have chosen any pair of vertices in $[n]$ (in the above proof we use $x,y$). } of $\beta^*_{1,n}(P_{(s,t)})$, and the third inequality holds by Lemma \ref{lem:number of P_m in an n vertex graph}.
Recalling that $\bar{\mu}(x)>0$ and combining \eqref{eq3} and \eqref{eq4} we infer that
\[
\beta(\mu;P_m)\leq \frac{2}{(m-2)^{m-2}}\leq \frac{2e^2}{m^{m-2}}\;.\qedhere
\]
\end{proof}
\section{Proving the main result}
\label{section3}
We start this section with stating the optimization problem of Cox and Martin \cite{CoxMar2021_1}.
\begin{definition}
Let $n$ be an integer and let $\mu$ be a probability measure on the edges of $K_{n}$. Then, for any integer $m\geq 2$, letting $(n)_m$ be the set of all ordered $m$-tuples of distinct elements from $[n]$, define
\[
\rho (\mu;m)=\sum_{x\in (n)_m}\bar{\mu}(x_1)\left(\prod_{i=1}^{m-1}\mu (x_i,x_{i+1})\right)\bar{\mu}(x_m)\;.
\]
Furthermore, define
\[
\rho_n(m)=\sup_{\mu}\rho(\mu;m)\quad \text{and}\quad \rho(m)=\sup_{n\in \mathbb{N}}\rho_n(m)\;,
\]
where the supremum in the definition of $\rho_n(m)$ is taken over all probability measures $\mu$ on the edges of $K_{n}$.
\end{definition}
Note that if we expand the products in the definition of $\rho(m)$ we see that $\rho(m)$ is very similar to $\beta(P_{m+2})$. The crucial difference is that in $\rho(m)$ we count the total weight of walks of a very special structure. These walks are formed by first choosing distinct $x_2,\ldots ,x_{m+1}$ to be a copy of $P_{m}$, and then choosing \emph{arbitrary} $x_1\neq x_2$ and $x_{m+2}\neq x_{m+1}$ (so we allow $x_1=x_{m+1}$ and/or $x_1,x_{m+2}\in \{x_2,\ldots ,x_{m+1}\}$). For example, a walk of this type might be $(1,2,1,2)$ or $(1,2,1,3,1)$.
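For concreteness, $\rho(\mu;m)$ can be evaluated on small instances by direct enumeration of the tuples in $(n)_m$. In the sketch below (helper names are ours), $\bar\mu(x)$ is computed as the total weight of the edges at $x$:

```python
import itertools

def rho(mu, n, m):
    """Evaluate rho(mu; m) on K_n by enumerating all ordered m-tuples
    of distinct vertices; mu maps frozenset({u, v}) to an edge weight."""
    mubar = {x: sum(mu[frozenset((x, y))]
                    for y in range(1, n + 1) if y != x)
             for x in range(1, n + 1)}
    total = 0.0
    for x in itertools.permutations(range(1, n + 1), m):
        w = mubar[x[0]] * mubar[x[-1]]            # free endpoints x_1, x_{m+2}
        for u, v in zip(x, x[1:]):                # the path x_2, ..., x_{m+1}
            w *= mu[frozenset((u, v))]
        total += w
    return total
```

For the uniform probability measure on the edges of $K_3$, this returns $\rho(\mu;2)=8/9$ and $\rho(\mu;3)=8/27=8/3^3$.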
Our main task in this section is to prove the following lemma.
\begin{lemma}\label{prop:rho to beta}
For all integers $n\geq m\geq 2$ we have
\[
\rho_n(m)\leq \frac{1152}{m^2}\cdot \beta(P_{m})\;.
\]
\end{lemma}
The constant $1152$ in the above lemma is clearly not optimal. We did not make any attempt to improve it, as it seems that a new idea is required to obtain the optimal one.
A simple lower bound for $\rho_n(m)$ is $8/m^{m}$, which is achieved by the uniform distribution on the edges of $C_{m}$.
As we mentioned in the previous section, it seems reasonable to conjecture that $\beta(P_{m})=1/m^{m-2}$.
Therefore, a natural conjecture is that in Lemma \ref{prop:rho to beta} the optimal constant is $8$.
Let us first deduce Theorem \ref{thm:number of path in planar graphs} from Lemmas \ref{thm:main theorem} and \ref{prop:rho to beta}.
\begin{proof}[Proof of Theorem \ref{thm:number of path in planar graphs}]
Lemma 2.3 in Cox and Martin \cite{CoxMar2021_1} asserts that for all $m\geq 2$ we have
\[
N_{\mathcal{P}}(n,P_{2m+1})\leq (\rho(m)/2+o(1)) n^{m+1}\;.
\]
Furthermore, since Lemma \ref{prop:rho to beta} holds for all $n$, we deduce that $\rho(m)\leq \frac{1152}{m^2}\cdot\beta(P_m)$.
Together with Lemma \ref{thm:main theorem}, this gives Theorem \ref{thm:number of path in planar graphs} as then
\begin{align*}
N_{\mathcal{P}}(n,P_{2m+1})&\leq (\rho(m)/2+o(1)) n^{m+1}\leq (576\beta(P_m)m^{-2}+o(1)) n^{m+1}\\
&\leq (10^4m^{-m}+o(1)) n^{m+1}\;.\qedhere
\end{align*}
\end{proof}
Before proving Lemma \ref{prop:rho to beta}, let us recall a special case of the Karush--Kuhn--Tucker (KKT) conditions (see Corollaries 9.6 and 9.10 in \cite{Gul2010}).
\begin{theorem}[Special case of the KKT conditions]\label{thm:KKT}
Let $f\colon \mathbb{R}^n\to \mathbb{R}$ be a continuously differentiable function, and consider the optimization problem
\begin{align*}
\max_{x\in \Delta} f(x)\;, \text{ where } \Delta=\left\{x:\sum_{i=1}^{n} x_i=1\text{ and } x_1,\ldots ,x_n\geq 0\right\}\;.
\end{align*}
If $\mathbf{x}^*$ achieves this maximum, then there is some $\lambda\in \mathbb {R}$ such that, for each $i\in [n]$, either
\[
\mathbf{x}^*_i=0,\quad\text{or}\quad \frac{\partial f}{\partial x_i}(\mathbf{x}^*)=\lambda\;.
\]
\end{theorem}
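As a toy illustration of Theorem \ref{thm:KKT}, consider maximizing $f(x_1,x_2,x_3)=x_1x_2$ over the simplex in $\mathbb{R}^3$. The maximum is attained at $\mathbf{x}^*=(1/2,1/2,0)$, where
\[
\frac{\partial f}{\partial x_1}(\mathbf{x}^*)=x_2^*=\frac{1}{2}\;,\qquad \frac{\partial f}{\partial x_2}(\mathbf{x}^*)=x_1^*=\frac{1}{2}\;,\qquad \mathbf{x}^*_3=0\;,
\]
so the conclusion of the theorem holds with $\lambda=1/2$: each coordinate is either zero or has partial derivative equal to $\lambda$.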
\begin{proof}[Proof of Lemma \ref{prop:rho to beta}.]
Let $\mathbf{P^*}$ be the set of walks $(x_1,x_2,\ldots,x_{m+2})$ on $[n]$ constructed as follows: first, choose $(x_2,x_3,\ldots,x_{m+1})$ to be a path (i.e.\ a non-induced and \emph{labeled} copy of $P_{m}$), and then complete the walk by choosing an arbitrary $x_1\neq x_2$ and an arbitrary $x_{m+2}\neq x_{m+1}$.
Further, for any $i\neq j\in [n]$ we let $\mathbf{P^*}(\{i,j\})$ be the set of all walks $(x_1,x_2,\ldots,x_{m+2})\in \mathbf{P^*}$ such that there is $k$ with $\{x_{k},x_{k+1}\}=\{i,j\}$.
Define $f\colon \mathbb{R}^{\binom{[n]}{2}}\to \mathbb{R}$ by
\[
f(\mathbf{x})=\sum_{p\in (n)_m}\left(\sum_{p_0\in [n]\setminus\{p_1\}}\mathbf{x}_{p_0,p_1}\right)\left(\prod_{i=1}^{m-1}\mathbf{x}_{p_i,p_{i+1}}\right)\left(\sum_{p_{m+1}\in [n]\setminus \{p_m\}}\mathbf{x}_{p_m,p_{m+1}}\right)\;.
\]
Suppose $\mu$ is a probability measure on the edges of $K_n$ with $\rho(\mu;m)=\rho_n(m)$. When viewing $\mu$ as a vector in $\mathbb{R}^{\binom{[n]}{2}}$, we have $f(\mu)=\rho(\mu;m)$, and moreover,
\[
f(\mu)=\max_{\mathbf{x}\in \Delta} f(\mathbf{x})\;, \text{ where } \Delta=\left\{\mathbf{x}:\sum_{i=1}^{\binom{n}{2}} \mathbf{x}_i=1\text{ and } \mathbf{x}_1,\ldots ,\mathbf{x}_{\binom{n}{2}}\geq 0\right\}\;.
\]
By the maximality of $\mu$ and by Theorem \ref{thm:KKT} (the KKT conditions), there is a non-negative\footnote{As the polynomial has only positive coefficients, $\lambda$ must be non-negative.} real $\lambda$ such that for all $\{i,j\}\in \binom{[n]}{2}$ we have
\[
\mu(i,j)=0 \quad \text{or}\quad \frac{\partial f(\mathbf{x})}{\partial \mathbf{x}_{i,j}}(\mu)=\lambda\;.
\]
Note that the degree of each variable $\mathbf{x}_{i,j}$ in every monomial of $f(\mathbf{x})$ is at most\footnote{The only case where it is $3$ is when $m=2$ and we consider a walk using one edge three times, e.g.,\ the walk $(1,2,1,2)$.} $3$. Thus, for every $\{i,j\}\in \binom{[n]}{2}$ we have
\begin{align}\label{eq1-rho}
\lambda \cdot \mu(i,j)=\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}_{i,j}}(\mu)\cdot\mu(i,j)\leq 3\sum_{P\in \mathbf{P^*}(\{i,j\})}\mu (P)\;.
\end{align}
We also have the following:
\begin{align}
\lambda &= \sum_{\{i,j\}\in \binom{[n]}{2}}\lambda \cdot \mu(i,j)\nonumber\\
&=\sum_{\{i,j\}\in \binom{[n]}{2}}\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}_{i,j}}(\mu)\cdot \mu(i,j)\nonumber\\
&\geq \sum_{\{i,j\}\in \binom{[n]}{2}}\sum_{P\in \mathbf{P^*}(\{i,j\})}\mu(P)\nonumber\\
&=\sum_{P\in \mathbf{P^*}}\mu(P)\sum_{\{i,j\}\in \binom{[n]}{2}}\mathbbm{1} (\{i,j\}\in E(P))\nonumber\\
&\geq (m-1)\rho(\mu;m)\label{eq2-rho}\;,
\end{align}
where the first equality holds as $\mu$ is a probability measure, the second equality holds by the definition of $\lambda$, and the last inequality holds as there are at least $m-1$ distinct edges in each walk in $\mathbf{P^*}$. Combining \eqref{eq1-rho} and \eqref{eq2-rho} we have the following for all $i\in [n]$:
\begin{align*}
(m-1)\cdot \bar{\mu}(i)\cdot \rho(\mu;m) &= \sum_{j\in [n]\setminus\{i\}}(m-1)\mu(i,j)\rho(\mu;m)\\
&\leq 3 \sum_{j\in [n]\setminus\{i\}} \sum_{P\in \mathbf{P^*}(\{i,j\})}\mu(P)\\
&=3\sum_{P\in \mathbf{P^*}}\mu (P)\sum_{j\in [n]\setminus\{i\}} \mathbbm{1}(\{i,j\}\in E(P))\\
&=3\sum_{P\in \mathbf{P^*}}\deg_P(i)\mu (P)\\
&\leq 12\sum_{P\in \mathbf{P^*}}\mu (P)= 12\cdot \rho(\mu;m)\;,
\end{align*}
where the last inequality follows as for every $P\in \mathbf{P^*}$ and $i\in P$ we have\footnote{An example being the walk $(1,2,1,3,1)$.} $\deg_P(i)\leq 4$.
Dividing both sides by $(m-1)\cdot \rho(\mu;m)$ we obtain that for all $i$ we have $\bar{\mu}(i)\leq \frac{12}{m-1}$. Therefore, as $m\geq 2$ we have
\begin{align*}
\rho (\mu;m)&=\sum_{x\in (n)_m}\bar{\mu}(x_1)\left(\prod_{i=1}^{m-1}\mu (x_i,x_{i+1})\right)\bar{\mu}(x_m)\\
&\leq \frac{144}{(m-1)^2}\sum_{x\in (n)_m}\prod_{i=1}^{m-1}\mu (x_i,x_{i+1})\leq \frac{1152}{m^2}\cdot \beta(P_{m})\;.\qedhere
\end{align*}
\end{proof}
\end{document}
\begin{document}
\title{State disturbance and pointer shift in protective quantum measurements} \author{Maximilian Schlosshauer} \affiliation{\small Department of Physics, University of Portland, 5000 North Willamette Boulevard, Portland, Oregon 97203, USA}
\begin{abstract} We investigate the disturbance of the state of a quantum system in a protective measurement for finite measurement times and different choices of the time-dependent system--apparatus coupling function. The ability to minimize this state disturbance is essential to protective measurement. We show that for a coupling strength that remains constant during the measurement interaction of duration $T$, the state disturbance scales as $T^{-2}$, while a simple smoothing of the coupling function significantly improves the scaling behavior to $T^{-6}$. We also prove that the shift of the apparatus pointer in the course of a protective measurement is independent of the particular time dependence of the coupling function, suggesting that the guiding principle for choosing the coupling function should be the minimization of the state disturbance. Our results illuminate the dynamics of protective measurement under realistic circumstances and may aid in the experimental realization of such measurements.\\[-.1cm]
\noindent Journal reference: \emph{Phys.\ Rev.\ A\ }\textbf{90}, 052106 (2014) \end{abstract}
\pacs{03.65.Ta, 03.65.Wj}
\maketitle
\section{Introduction}
Protective measurement \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az,Vaidman:2009:po} is a quantum measurement scheme in which an apparatus is weakly coupled to a quantum system for an extended period of time. If the system starts out in a nondegenerate eigenstate of its Hamiltonian and the interaction is sufficiently weak and long, then expectation values of observables of the system can be measured without appreciably disturbing the state of the system. Since measurement of a sufficient number of expectation values allows one to reconstruct a quantum state, protective measurement, if suitably implemented (see Ref.~\cite{Dass:1999:az} for a discussion of constraints and complications), may enable reconstruction of the quantum state of an individual system. This provides a perspective on state reconstruction different from that associated with conventional ensemble state tomography based on strong \cite{Vogel:1989:uu,Smithey:1993:lm,Breitenbach:az,White:1999:az} or weak \cite{Aharonov:1988:mz,Lundeen:2011:ii,Lundeen:2012:rr} measurements.
Only an infinitely weak or infinitely slowly changing measurement interaction will not disturb the state of a protectively measured system; this follows directly from perturbation theory and the quantum adiabatic theorem \cite{Born:1928:yf}. Outside these limiting cases, however, protective measurement, if it is to yield new information, cannot avoid disturbing the state of the system, in agreement with general results concerning the fundamental tradeoff between quantum state disturbance and information gain \cite{Fuchs:1996:op} and the independence of the maximum possible information gain in a quantum measurement from the method of measurement \cite{Ariano:1996:om}. From a fundamental point of view, this inevitable state disturbance disproves suggestions \cite{Aharonov:1993:qa,Aharonov:1993:jm,Gao:2013:om} that protective measurement permits measurement of the quantum state in a manner akin to the measurement of a classical state, and it bears on the meaning of the wavefunction (see Refs.~\cite{Alter:1997:oo,Dass:1999:az,Schlosshauer:2014:tp} for discussions of this important foundational point).
This limitation, however, does not invalidate the potential practical usefulness of protective measurement. Implementation of protective measurement would be interesting and important both from a fundamental point of view (as the realization of a new quantum measurement scheme) and from a practical point of view (enabling quantum state tomography for single systems). Just like traditional ensemble quantum state tomography, protective measurement provides a way of (approximately) reconstructing a quantum state. The fidelity of any such reconstruction can be measured in terms of the disturbance of the initial state of the system incurred during the measurement. At the heart of protective measurement is the idea that this state disturbance can be made arbitrarily small, such that repeated measurements on the same system permit reconstruction of its initial state with arbitrarily high fidelity \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az}. Therefore, for practical implementations of protective measurement it is essential to gain a precise and quantitative understanding of how one may reduce the state disturbance incurred during a protective measurement while simultaneously maintaining appreciable information gain.
Despite its significance, however, the problem of state disturbance in protective measurement has not yet been adequately studied. Instead, the existing literature (see, e.g., Refs.~\cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Vaidman:2009:po}) has relied on the consideration of mathematical limits involving infinitely long, infinitely weak, and/or infinitely slowly changing (adiabatic) measurement interactions, for which the state of the system can be shown to remain unchanged during the measurement. This, however, leaves open the important question of precisely how much the initial state will be disturbed in the physically relevant case of finite measurement times and interaction strengths, and how this disturbance depends on the particular choice of the coupling function describing the time dependence of the system--apparatus interaction.
This paper addresses this question. We study the state disturbance in a protective measurement for different coupling functions and make precise the dependence of the state disturbance on the physical parameters of the system and the measurement interaction. In particular, we show how a careful choice of the coupling function can dramatically reduce the state disturbance. In turn, this raises the question of whether and how the information gain during the measurement, represented by the shift of the apparatus pointer to a position indicating the expectation value of the measured observable of the system, depends on the particular choice of the coupling function. We show that, to a good approximation, the shift of the apparatus pointer is in fact independent of the choice of the coupling function (under the customary assumption that the coupling function is appropriately normalized).
This paper is organized as follows. In Sec.~\ref{sec:prot-meas} we introduce the basic concepts of protective measurement and develop a framework, based on time-dependent perturbation theory, for describing the dynamics of protective measurement for arbitrary time-dependent system--apparatus coupling strengths, finite measurement times, and up to any order in the interaction. In Sec.~\ref{sec:state-disturbance} we investigate the disturbance of the initial state in the course of a protective measurement for several different coupling functions. In particular, we investigate how this disturbance depends on the time dependence of the coupling and the duration of the measurement. In Sec.~\ref{sec:pointer-shift} we derive an expression for the pointer shift for arbitrary time-dependent coupling functions and show that it is generic. We discuss our results in Sec.~\ref{sec:discussion}. In Appendix~\ref{sec:high-order-corr} we investigate the influence of higher-order perturbative corrections.
\section{\label{sec:prot-meas}Protective measurement}
Following the standard framework for protective measurement (see, e.g., Refs.~\cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az}), we consider a system $S$ and apparatus $A$ with time-independent self-Hamiltonians $\op{H}_S$ and $\op{H}_A$, respectively, where the spectrum of $\op{H}_S$ is assumed to be nondegenerate. We let the system and apparatus interact via the time-dependent interaction Hamiltonian
\begin{equation}\label{eq:lalaa} \op{H}_\text{int}(t) = g(t)\op{O} \otimes \op{P}. \end{equation}
Here, $g(t)$ is a non-negative function representing a time-dependent coupling strength. We let the interaction start at $t=-T/2$ and conclude at $t=T/2$, so $g(t)= 0$ outside the interval $[-T/2,T/2]$ and the total measurement time is $T$. We also normalize $g(t)$ according to
\begin{equation}\label{eq:normal} \int_{-T/2}^{T/2} \text{d} t\, g(t) =1. \end{equation}
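For instance, both the constant coupling $g(t)=1/T$ on $[-T/2,T/2]$ discussed below and a smoothly switched coupling obey Eq.~\eqref{eq:normal}; one possible smooth choice (shown here purely for illustration, and not necessarily the smoothing studied later) is
\[
g(t)=\frac{2}{T}\cos^2\!\left(\frac{\pi t}{T}\right), \qquad \int_{-T/2}^{T/2} \text{d} t\, \frac{2}{T}\cos^2\!\left(\frac{\pi t}{T}\right) = \frac{2}{T}\cdot\frac{T}{2}=1,
\]
which, unlike the constant coupling, vanishes continuously at $t=\pm T/2$.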
The normalization effectively links the interaction strength to the duration of the interaction $T$: The larger $T$ is, the weaker the average strength of the interaction. The system observable $\op{O}$ can be freely chosen and need not commute with $\op{H}_S$. The operator $\op{P}$ denotes the momentum conjugate of the pointer variable $\op{X}$ of the apparatus. In what follows we adopt the customary assumption that $\op{P}$ commutes with $\op{H}_A$, i.e., that the interaction Hamiltonian is diagonal in the energy eigenbasis of the apparatus. While this assumption is not necessary for a protective measurement to obtain \cite{Dass:1999:az}, it simplifies the subsequent calculations and is harmless in the context of the present paper, which focuses on the effect of the measurement on the state of the system. In this case, the operator $\op{P}$ is a constant of motion of the total Hamiltonian $\op{H}(t)=\op{H}_S+\op{H}_A+\op{H}_\text{int}(t)$ and there exists a set of simultaneous orthonormal eigenstates $\{\ket{A_i}\}$ of $\op{P}$ and $\op{H}_A$ with spectra $\{a_i\}$ and $\{\epsilon_i\}$ obeying
\begin{equation}\label{eq:haaz} \op{P}\ket{A_i}=a_i\ket{A_i}, \qquad \op{H}_A\ket{A_i}=\epsilon_i\ket{A_i}. \end{equation}
In a standard impulsive (strong) measurement \cite{vonNeumann:1955:ii}, the interaction time $T$ is short and thus $g(t)$ is large for $t \in [-T/2,T/2]$. In this case, the evolution is dominated by the interaction Hamiltonian, leading to strong entanglement between the system and apparatus and therefore to a significant disturbance of the initial state of the system by the measurement. The idea of protective measurement is to minimize the state disturbance, and yet to obtain meaningful information about the system in a single measurement, by making the interaction weak while leaving it turned on for a long time $T$, so as to ensure an appreciable shift of the apparatus pointer. This long-interaction limit is in contrast with the weak-measurement protocol \cite{Aharonov:1988:mz}, where the measurement interaction is both weak and short and the insignificant pointer shift arising from a single measurement is compensated for by repeating the measurement on a large ensemble of systems.
A protective measurement proceeds from the assumption that the system $S$ starts (at $t=-T/2$) in a nondegenerate eigenstate $\ket{n}$ of $\op{H}_S$ with eigenvalue $E_n$. The initial composite state of system and apparatus is taken to be the product state
\begin{equation}\label{eq:1fbhjsbfkj} \ket{\Psi(-T/2)} = \ket{n}\ket{\phi(x_0)} = \ket{n} \sum_i \braket{A_i}{\phi(x_0)} \ket{A_i}. \end{equation}
Here $\ket{\phi(x_0)}$ is a Gaussian wave packet of eigenstates of the pointer variable $\op{X}$ centered on $x_0$, representing the premeasurement ready state of the apparatus pointer, where the momentum of the pointer is assumed to be bounded. What we are interested in is the final composite state $\ket{\Psi(T/2)}$ at the conclusion of the measurement interaction at time $t=T/2$. We will now discuss this problem, first by reviewing the case of constant $g(t)=1/T$ for $t \in [-T/2,T/2]$ and then by developing the solution for arbitrary time-dependent $g(t)$.
\subsection{Constant coupling}
Existing studies of protective measurement (see, e.g., Refs.~\cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az,Vaidman:2009:po,Gao:2013:om}) have considered two limiting scenarios. In the first scenario, the measurement interaction is assumed to be turned on and off infinitely slowly such that, according to the quantum adiabatic theorem \cite{Born:1928:yf}, the system remains in an eigenstate of the time-dependent Hamiltonian at all times. Since at the end of the interaction the Hamiltonian returns to its pre-measurement form, i.e., $\op{H}(T/2)=\op{H}(-T/2)$, the system will also deterministically return to its initial state. In the second scenario, which is the one that is typically used in expositions of protective measurement (see, e.g., Refs.~\cite{Dass:1999:az,Vaidman:2009:po}), the interaction is discontinuously turned on at the initial time $t=-T/2$, kept constant at strength $1/T$ during the duration $T$ of the measurement, and then discontinuously turned off again at $t=T/2$. This corresponds to the choice
\begin{equation}\label{eq:igv34763578} g(t) = \begin{cases} 1/T, & -T/2 \le t \le T/2 \\ 0 & \text{otherwise}. \end{cases} \end{equation}
In the following we will refer to this choice of $g(t)$ as constant coupling. The total Hamiltonian $\op{H}$ is now effectively time independent, which allows the problem to be treated using time-independent perturbation theory.
We briefly review the derivation (see, e.g., Ref.~\cite{Dass:1999:az} for details). We consider $\op{H}_\text{int}=(1/T)\op{O} \otimes \op{P}$ as a (small) time-independent perturbation to $\op{H}_0=\op{H}_S+\op{H}_A$. Starting from the initial state $\ket{\Psi(t=-T/2)}=\ket{n}\ket{\phi(x_0)}$ [see Eq.~\eqref{eq:1fbhjsbfkj}], the final composite state at the conclusion of the measurement interaction at time $t=T/2$ is
\begin{align} \ket{\Psi(t=T/2)} &=\text{e}^{-\text{i} \op{H} T/\hbar} \ket{n}\ket{\phi(x_0)} \notag \\ &= \sum_{m,i} \text{e}^{-\text{i} \widetilde{E} (m,a_i) T/\hbar} \braket{E_m^S(a_i)}{n} \notag \\ &\quad \times \braket{A_i}{\phi(x_0)} \ket{E_m^S(a_i)}\ket{A_i},\label{eq:jkbknjd} \end{align}
where $\ket{E_m^S(a_i)}$ are the eigenstates of the system-dependent part of $\op{H}$ defined by $\op{H}'_S(a_i) = \op{H}_S+\frac{1}{T} (a_i \op{O})$ and
\begin{align}\label{eq:fnsnlk1} \widetilde{E}(m,a_i) &= \bra{E_m^S(a_i)} \op{H}_S \ket{E_m^S(a_i)} + \epsilon_i \notag \\ &\quad + \frac{1}{T}a_i \bra{E_m^S(a_i)}\op{O} \ket{E_m^S(a_i)} \end{align}
are the eigenvalues of the eigenstates $\ket{E_m^S(a_i)}\ket{A_i}$ of $\op{H}$. Using zeroth-order time-independent perturbation theory corresponding to the limit $T\rightarrow\infty$, one expands the eigenvalues $\widetilde{E} (m,a_i)$ to first order in $1/T$ (such that the argument of the exponential $\text{e}^{-\text{i} \widetilde{E} (m,a_i) T/\hbar}$ is of zeroth order in $1/T$),
\begin{equation}\label{eq:fnsnlk11} \widetilde{E} (m,a_i) \approx E_m + \epsilon_i + \frac{1}{T}a_i \bra{m}\op{O} \ket{m}, \end{equation}
and replaces the exact eigenstates $\ket{E_m^S(a_i)}$ by their zeroth-order approximations $\ket{m}$. Reintroducing the operators $\op{H}_A$ and $\op{P}$ in the exponent of the time-evolution operator, the final zeroth-order system--apparatus state is therefore
\begin{align}\label{eq:isvgaazxxx} \ket{\Psi^{(0)}(t=T/2)} &= \text{e}^{-\text{i} E_nT/\hbar} \ket{n} \text{e}^{-\text{i} \op{H}_A T/\hbar}\notag \\ &\quad \times \text{e}^{-\text{i} \op{P} \bra{n} \op{O} \ket{n}/\hbar} \ket{\phi(x_0)}. \end{align}
The operator $\text{e}^{-\text{i} \op{P} \bra{n} \op{O} \ket{n}/\hbar}$ shifts the center of the wave packet $\ket{\phi(x_0)}$ by an amount equal to $\bra{n} \op{O} \ket{n}$. In this way, information about the expectation value of $\op{O}$ in the initial state $\ket{n}$ becomes encoded in the pointer position and the final composite state, to zeroth order, is
\begin{align}\label{eq:gctctwg} \ket{\Psi^{(0)}(t=T/2)} = \text{e}^{-\text{i} E_nT/\hbar} \ket{n} \text{e}^{-\text{i} \op{H}_A T/\hbar}\ket{\phi(x_0+\langle \op{O} \rangle_n)}. \end{align}
In this strict limit $T \rightarrow \infty$ (corresponding to an infinitely weak interaction), the initial state of the system remains unchanged and there is no entanglement between the system and apparatus. It is in this sense that the state of the system is protected. The protection is provided by the dominant Hamiltonian $\op{H}_S$ such that the interaction Hamiltonian $\op{H}_\text{int}$ can be treated as a small perturbation whose effect on the system is negligible in the limit $T \rightarrow \infty$, even though it still induces a finite pointer shift in the apparatus. Note that once the state is appropriately protected, information about the expectation value of \emph{any} observable $\op{O}$ can be obtained. This permits, at least in principle, sequential protective measurements of many different observables using the same protection potential.
\subsection{Time-dependent coupling}
Clearly, the limit $T \rightarrow \infty$ is not physically realizable and it is therefore important to understand and explore protective measurement in the practically relevant case of finite $T$. Furthermore, rather than being restricted to the constant coupling given by Eq.~\eqref{eq:igv34763578}, we would like to consider arbitrary time-dependent coupling functions $g(t)$. To this end, we will now treat the evolution of the initial state $\ket{\Psi(-T/2)} = \ket{n}\ket{\phi(x_0)}$ [Eq.~\eqref{eq:1fbhjsbfkj}] for arbitrary $g(t)$ by using time-dependent perturbation theory, regarding $\op{H}_\text{int}(t)=g(t)\op{O} \otimes \op{P}$ as a time-dependent perturbation to $\op{H}_0=\op{H}_S+\op{H}_A$.
As before, we shall assume $[\op{P},\op{H}_A]=0$, i.e., the perturbation commutes with the unperturbed Hamiltonian in the apparatus subspace. Then the perturbation does not connect the different energy levels $\ket{A_i}$ of the apparatus. This can also be seen from considering the evolution operator in the interaction picture, which we may symbolically write as a time-ordered exponential,
\begin{align} \op{U}_I(-T/2,T/2)&=\mathcal{T} \exp\left[-\frac{\text{i}}{\hbar} \int_{-T/2}^{T/2} \text{d} t\, \op{H}_{\text{int},I}(t)\right], \end{align}
where $\mathcal{T}$ is the time-ordering operator and the subscript $I$ denotes interaction-picture quantities. Since the interaction Hamiltonian in the interaction picture is
\begin{align} \op{H}_{\text{int},I}(t) &= g(t)\text{e}^{\text{i} (\op{H}_S+\op{H}_A) t/\hbar}(\op{O} \otimes \op{P}) \text{e}^{-\text{i} (\op{H}_S+\op{H}_A) t/\hbar} \notag\\ &= g(t) \op{O}_I(t) \otimes\op{P}, \end{align}
the evolution operator becomes
\begin{multline} \op{U}_I(-T/2,T/2)\\ =\mathcal{T} \exp\left[-\frac{\text{i}}{\hbar} \left(\int_{-T/2}^{T/2} \text{d} t\, g(t) \op{O}_I(t)\right) \otimes \op{P}\right], \end{multline}
which is diagonal in the energy eigenbasis $\{\ket{A_i}\}$ of the apparatus.
The final composite state at the conclusion of the measurement may then be written as
\begin{align}\label{eq:1fbhjsbfk4554j} \ket{\Psi(T/2)} &= \sum_m \text{e}^{-\text{i} (E_n+E_m) T/2\hbar} \sum_i C^{(i)}_{mn}(T) \ket{m} \text{e}^{-\text{i} \epsilon_i T/\hbar} \notag\\ & \quad \times \braket{A_i}{\phi(x_0)} \ket{A_i}, \end{align}
where the interaction-picture amplitude $C^{(i)}_{mn}(T)$ is given by
\begin{equation}\label{eq:fgjf1} C^{(i)}_{mn}(T) = \bra{m} \left\{ \mathcal{T} \exp\left(-\frac{\text{i}}{\hbar} a_i \int_{-T/2}^{T/2} \text{d} t\, g(t) \op{O}_I(t) \right) \right\} \ket{n}. \end{equation}
Note that we have used $T$ as the argument of $C^{(i)}_{mn}(T)$ in order to indicate that $C^{(i)}_{mn}(T)$ is the amplitude at the conclusion of a measurement interaction of duration $T$. From here on, we will drop the subscript $n$ in $C^{(i)}_{mn}(T)$, $C^{(i)}_{mn}(T)\equiv C^{(i)}_m(T)$, since we will be assuming throughout this paper that the system starts out in the state $\ket{n}$.
We now express the amplitude $C^{(i)}_{m}(T)$ as a perturbative expansion (Dyson series),
\begin{align}\label{eq:y89ggy892} C^{(i)}_{m}(T) &= \sum_{\ell=0}^\infty a_i^{\ell} A^{(\ell)}_{m}(T), \end{align}
where $A^{(\ell)}_{m}(T)$ is the expression for the $\ell$th-order correction to the zeroth-order amplitude $A^{(0)}_{m}(T)=\delta_{mn}$ \cite{Sakurai:1994:om},
\begin{align}\label{eq:g8fbvsv1} A^{(\ell)}_{m}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^\ell \sum_{k_1,k_2,\hdots,k_{\ell-1}} \bra{m} \op{O} \ket{k_1} \bra{k_1} \op{O} \ket{k_2} \notag \\ & \quad\times\bra{k_2} \op{O} \ket{k_3} \cdots \bra{k_{\ell-1}} \op{O} \ket{n} \notag \\ & \quad \times\int_{-T/2}^{T/2} \text{d} t' \, \text{e}^{\text{i} \omega_{mk_1} t'} g(t') \notag \\ & \quad \times \int_{-T/2}^{t'} \text{d} t'' \,\text{e}^{\text{i} \omega_{k_1k_2} t''} g(t'') \cdots \notag \\ & \quad \times \int_{-T/2}^{t^{(\ell-1)}} \text{d} t^{(\ell)} \,\text{e}^{\text{i} \omega_{k_{\ell-1} n} t^{(\ell)}} g(t^{(\ell)}). \end{align}
Here we have introduced $\omega_{mn}\equiv (E_m-E_n)/\hbar$, which is the frequency (i.e., the inverse of the time scale) associated with the transition $\ket{n} \rightarrow \ket{m}$. Specifically, the first-order correction is
\begin{align}\label{eq:87gr782} A^{(1)}_{m}(T) &= -\frac{\text{i}}{\hbar} \bra{m} \op{O} \ket{n} \int_{-T/2}^{T/2} \text{d} t\, \text{e}^{\text{i} \omega_{mn} t} g(t), \end{align}
and the second-order correction is
\begin{align}\label{eq:7h7grhgrh771} A^{(2)}_{m}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^2 \sum_k \bra{m} \op{O} \ket{k} \bra{k} \op{O} \ket{n} \notag \\ & \quad \times \int_{-T/2}^{T/2} \text{d} t \, \text{e}^{\text{i} \omega_{mk} t} g(t) \int_{-T/2}^{t} \text{d} t' \,\text{e}^{\text{i} \omega_{kn} t'} g(t'). \end{align}
Using Eq.~\eqref{eq:y89ggy892}, we can then write the final composite state \eqref{eq:1fbhjsbfk4554j} as
\begin{align} \ket{\Psi(T/2)} &= \sum_{m} \text{e}^{-\text{i} (E_n+E_m) T/2\hbar} \sum_{\ell=0}^\infty A^{(\ell)}_{m}(T) \ket{m} \notag \\ & \quad \times \left(\sum_{i}\text{e}^{-\text{i} \epsilon_i T/\hbar}a_i^\ell \braket{A_i}{\phi(x_0)} \ket{A_i}\right). \end{align}
We see that the interesting time-dependent dynamics are contained in the amplitude contributions $A^{(\ell)}_{m}(T)$, which specify how the initial state $\ket{n}$ of the system changes in the course of the protective measurement. In particular, these contributions tell us about the disturbance of this initial state, in the sense that they quantify the mixing of other states $\ket{m}$, that is to say, the probabilities of finding the system in a state $\ket{m}\not=\ket{n}$ at the conclusion of the measurement. It is the dependence of these terms $A^{(\ell)}_{m}(T)$ on the choice of $g(t)$, $T$, and the frequency parameters $\omega_{jk}$ that will be the focus of our investigation.
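To make the role of these amplitude contributions concrete, the following Python sketch (an illustrative numerical experiment, not part of the formal development; the two-level system, the choice $\op{O}=\sigma_x$, the constant coupling $g(t)=1/T$, and all parameter values are assumptions made here) integrates the interaction-picture Schr\"odinger equation exactly and compares the resulting transition amplitude with the first-order Dyson term:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x)/x

def exact_transition_amplitude(a, w, T=1.0, n=20000):
    """RK4 integration of i dc/dt = a g(t) O_I(t) c for a two-level system with
    H0/hbar = diag(0, w), O = sigma_x, constant g(t) = 1/T (hbar = 1).
    Returns |c_1(T/2)| starting from c = (1, 0) at t = -T/2."""
    def f(t, c):
        ph = complex(math.cos(w*t), math.sin(w*t))       # e^{i w t}
        return (-1j*(a/T)*ph.conjugate()*c[1],           # (O_I)_{01} = e^{-i w t}
                -1j*(a/T)*ph*c[0])                       # (O_I)_{10} = e^{+i w t}
    h, t, c = T/n, -T/2, (1.0 + 0j, 0.0 + 0j)
    for _ in range(n):
        k1 = f(t, c)
        k2 = f(t + h/2, (c[0] + h/2*k1[0], c[1] + h/2*k1[1]))
        k3 = f(t + h/2, (c[0] + h/2*k2[0], c[1] + h/2*k2[1]))
        k4 = f(t + h, (c[0] + h*k3[0], c[1] + h*k3[1]))
        c = (c[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             c[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        t += h
    return abs(c[1])

a, w, T = 0.01, 10.0, 1.0
# For weak coupling the first-order Dyson term dominates: |c_1| ~ a |sinc(wT/2)|
assert abs(exact_transition_amplitude(a, w, T) - a*abs(sinc(w*T/2))) < 1e-5
```

Reducing $a$ further improves the agreement, since for this two-level example the next contribution to the transition amplitude is of third order in the coupling.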
\section{\label{sec:state-disturbance}State disturbance}
In a protective measurement, the goal is to minimize the transition probabilities out of the initial state $\ket{n}$, in order to minimize the disturbance of this state in the course of the measurement interaction. To study this disturbance, we now quantitatively investigate the transition probabilities for $\ket{n} \rightarrow \ket{m} \not=\ket{n}$ for particular choices of $g(t)$ and explore the dependence of these probabilities on the total duration $T$ of the measurement interaction.
As before, we let the interaction start at $t=-T/2$ and conclude at $t=T/2$, so $g(t)= 0$ for $t<-T/2$ and $t>T/2$. We let $g(t)$ be normalized according to Eq.~\eqref{eq:normal}. We also take $g(t)$ to be an even function, i.e., we let the turn-on and turnoff time dependence be symmetric with respect to $t=0$. Then the first-order transition amplitude at the conclusion of the measurement is, from Eq.~\eqref{eq:87gr782},
\begin{align}\label{eq:1ldkf000} A^{(1)}_{m}(T) &= -\frac{\text{i}}{\hbar} \bra{m} \op{O} \ket{n} \int_{-T/2}^{T/2} \text{d} t\, \cos(\omega_{mn} t) g(t), \end{align}
and thus the corresponding transition probability $\mathcal{P}^{(1)}_m(T)$ is
\begin{align}\label{eq:bddfhdfh} \mathcal{P}^{(1)}_m(T) &= \abs{A^{(1)}_{m}(T)}^2 \notag \\ &= \frac{1}{\hbar^2} \abs{ \bra{m} \op{O} \ket{n}}^2 \abs{\int_{-T/2}^{T/2} \text{d} t\, \cos(\omega_{mn} t) g(t)}^2. \end{align}
Strictly speaking, $A^{(1)}_{m}(T)$ is not the full expression for the first-order transition amplitude but only the $T$-dependent part pertaining to the system; Eq.~\eqref{eq:1fbhjsbfk4554j} shows that the full expression (neglecting phase factors) is $\sum_i a_i A^{(1)}_{m}(T) \braket{A_i}{\phi(x_0)}$. The additional terms, however, pertain solely to properties of the apparatus and are independent of the measurement time $T$. Since we are chiefly interested in the dependence of the disturbance of the system on $T$, in what follows we can focus on the amplitude $A^{(1)}_{m}(T)$ and the corresponding transition probability $\mathcal{P}^{(1)}_m(T)$ as given by Eqs.~\eqref{eq:1ldkf000} and \eqref{eq:bddfhdfh}.
We would like to investigate the dependence of the probability $\mathcal{P}^{(1)}_m(T)$ [Eq.~\eqref{eq:bddfhdfh}] on $g(t)$ and $T$, so the relevant quantity of interest is the Fourier transform of $g(t)$,
\begin{align}\label{eq:lvdgs} \widetilde{g}(\omega_{mn}; T) &= \int_{-T/2}^{T/2} \text{d} t\, \cos(\omega_{mn} t) g(t), \end{align}
which is a function in frequency space, with all frequencies measured relative to the initial-state frequency $\omega_n=E_n/\hbar$. For the cases studied below, $\widetilde{g}(\omega_{mn};T)$ is a function of $\omega_{mn}T$, so $\widetilde{g}(\omega_{mn};T) \equiv \widetilde{g}(\omega_{mn}T)$. Note that $\omega_{mn}T$ is a dimensionless quantity that measures the ratio of the total measurement time $T$ to the internal time scale $\omega_{mn}^{-1}$ associated with the transition $\ket{n}\rightarrow\ket{m}$.
In a protective measurement, we would like to minimize the transition probability, which, as Eq.~\eqref{eq:bddfhdfh} shows, is proportional to $\abs{\widetilde{g}(\omega_{mn}T)}^2$. In light of the general relationship between a function and its Fourier transform, we expect that this goal can be accomplished by increasing $T$, i.e., by increasing the width of $g(t)$, and by making $g(t)$ smoother and less rapidly changing. We will now verify these intuitions for different choices of $g(t)$: constant $g(t)$ as given by Eq.~\eqref{eq:igv34763578} (Sec.~\ref{sec:time-indep-coupl}), constant $g(t)$ with a linear turn-on and turnoff (Sec.~\ref{sec:turnonoff}), and a smoothly varying $g(t)$ following a raised-cosine function (Sec.~\ref{sec:smoothly-vary-coupl}).
\subsection{\label{sec:time-indep-coupl}Constant coupling}
\begin{figure*}
\caption{(Color online) Normalized time-dependent system--apparatus coupling functions $g(t)$. The horizontal axis is in units of the measurement time $T$ and the vertical axis is in units of $1/T$. (a) Constant system--apparatus coupling $g(t)$ as defined in Eq.~\eqref{eq:igv34763578}. (b) Constant system--apparatus coupling with a linear turn-on and turnoff, each of duration $\Delta T$, shown here for $\Delta T/T=0.2$. (c) Triangular system--apparatus coupling, corresponding to a linear turn-on and turnoff, each of duration $\Delta T=T/2$. (d) System--apparatus coupling following a raised-cosine function as defined in Eq.~\eqref{eq:jfkhjkvhjkvhjkv11881}.}
\label{fig:g}
\end{figure*}
First, we look at the case of constant $g(t)$ given by Eq.~\eqref{eq:igv34763578}, shown in Fig.~\ref{fig:g}(a). We may write $g(t)$ as $g(t) = \Pi(-T/2,T/2)$, where $\Pi(t_0,t_1)$ is the unit-area boxcar function defined by
\begin{align}\label{eq:igv} \Pi(t_0,t_1) = \begin{cases} \frac{1}{t_1-t_0}, & t_0 \le t \le t_1 \quad (t_1>t_0) \\ 0 & \text{otherwise}. \end{cases} \end{align}
We expect that the case of constant $g(t)$ is suboptimal in the sense that approximating the sharp corners of $g(t)$, which correspond to an infinitely fast turn-on and turnoff of the interaction, will require a broad spectrum of Fourier frequency components, leading to a large domain over which the Fourier transform $\widetilde{g}(\omega_{mn}T)$ of $g(t)$ exhibits a non-negligible amplitude. The Fourier transform $\widetilde{g}(\omega_{mn}T)$ [Eq.~\eqref{eq:lvdgs}] is readily evaluated,
\begin{align}\label{eq:dklvdflk1} \widetilde{g}(\omega_{mn}T) &= \frac{1}{T} \int_{-T/2}^{T/2} \text{d} t\, \cos(\omega_{mn} t) = \mathrm{sinc}\left(\omega_{mn}T/2\right), \end{align}
where $\mathrm{sinc}(x)=\sin(x)/x$ is the sinc function. Thus, the first-order transition amplitude is
\begin{align}\label{eq:dklvdflk2} A^{(1)}_{m}(T) &= -\frac{\text{i}}{\hbar} \bra{m} \op{O} \ket{n} \mathrm{sinc}\left(\omega_{mn}T/2\right), \end{align}
and the corresponding transition probability is
\begin{align}\label{eq:ubdv1} \mathcal{P}^{(1)}_{m}(T) &= \frac{1}{\hbar^2}\abs{ \bra{m} \op{O} \ket{n}}^2 \mathrm{sinc}^2\left(\omega_{mn}T/2\right). \end{align}
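The closed form in Eq.~\eqref{eq:dklvdflk1} can be confirmed by direct integration. The following Python sketch (a consistency check only, not part of the derivation; the trapezoid-rule integrator and the test values of $\omega_{mn}T$ are arbitrary choices) compares the numerically computed transform of the boxcar coupling with $\mathrm{sinc}(\omega_{mn}T/2)$:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x)/x

def fourier(g, T, w, n=20000):
    """Trapezoid rule for the cosine transform of g over [-T/2, T/2]."""
    h = T/n
    s = 0.5*math.cos(w*T/2)*(g(-T/2) + g(T/2))   # endpoint terms (cos is even)
    for k in range(1, n):
        t = -T/2 + k*h
        s += math.cos(w*t)*g(t)
    return s*h

T = 1.0
g_const = lambda t: 1.0/T                        # boxcar (constant) coupling
for wT in (0.5, 3.0, 12.0):
    assert abs(fourier(g_const, T, wT/T) - sinc(wT/2)) < 1e-6
```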
\begin{figure}
\caption{(Color online) Dependence of $\abs{\widetilde{g}(\omega_{mn}T)}^2$, and hence of the first-order transition probability $\mathcal{P}^{(1)}_m(T)$, on $\omega_{mn}T$ for constant coupling (solid line), triangular coupling (dashed line), and raised-cosine coupling (dotted line). (a) Behavior in the regime $T \gtrsim \omega_{mn}^{-1}$. (b) Decay of the envelopes in the large-$T$ regime $T \gg \omega_{mn}^{-1}$.}
\label{fig:ftcomp}
\end{figure}
The dependence of $\abs{\widetilde{g}(\omega_{mn}T)}^2$, and therefore of $\mathcal{P}^{(1)}_{m}(T)$, on $\omega_{mn}T$ is shown in Fig.~\ref{fig:ftcomp} (solid line) separately for two regimes. Figure~\ref{fig:ftcomp}(a) shows the behavior for $T \gtrsim \omega_{mn}^{-1}$, while Fig.~\ref{fig:ftcomp}(b) shows the decay of the envelope of $\abs{\widetilde{g}(\omega_{mn}T)}^2$ in the large-$T$ regime $T \gg \omega_{mn}^{-1}$ typically relevant to protective measurement. Assuming $T \gtrsim \omega_{mn}^{-1}$, the envelope of the function $\mathrm{sinc}^2 \left(\omega_{mn}T/2\right)$ decays as $(\omega_{mn}T/2)^{-2}$. Then, disregarding the oscillations of this function, we can approximate the transition-probability function \eqref{eq:ubdv1} by its envelope,
\begin{align} \mathcal{P}^{(1)}_{m}(T) &\approx \frac{1}{\hbar^2}\abs{ \bra{m} \op{O} \ket{n}}^2 \frac{1}{(\omega_{mn}T/2)^2}, \end{align}
which is shown in Fig.~\ref{fig:ftcomp}(b). This demonstrates that in order to avoid appreciable state disturbance, i.e., $\mathcal{P}^{(1)}_{m}(T) \ll 1$, we must have
\begin{align}\label{eq:ubdv} T \gg \frac{\abs{ \bra{m} \op{O} \ket{n}}}{\abs{E_m-E_n}} \qquad \text{for all $m\not= n$}. \end{align}
For fixed (nonzero) values of the matrix elements $\bra{m} \op{O} \ket{n}$ and for an initial state equal to any energy eigenstate of the system, this condition simply means that $T$ must be significantly larger than the time scale set by the frequencies of the transitions between the different energy levels in the system. Not surprisingly, a condition of the form given in Eq.~\eqref{eq:ubdv} also follows from time-independent perturbation theory by imposing the requirement that the first-order state correction be small.
We may also look at the width of the main peak (around $\omega_{mn}T=0$) of the function $\abs{\widetilde{g}(\omega_{mn} T)}^2=\mathrm{sinc}^2 \left(\omega_{mn}T/2\right)$. Note that in Fig.~\ref{fig:ftcomp}, $\abs{\widetilde{g}(\omega_{mn} T)}^2$ is plotted only for positive $\omega_{mn}T$ since $\abs{\widetilde{g}(\omega_{mn} T)}^2$ is even and thus only half of the main peak centered at $\omega_{mn} T=0$ is shown; we shall nonetheless refer to it as the central peak in the following. The function $\mathrm{sinc}^2(cx)$ has a full width at half maximum (FWHM) of $R(c) \simeq 2.78c^{-1}$ and thus the FWHM of the central peak of $\abs{\widetilde{g}(\omega_{mn} T)}^2$ is
\begin{align}\label{eq:ubdv1jhfdds33sf} R\simeq 5.56. \end{align}
This means that it suffices for $T$ to be equal to a few multiples of the transition time scale $\omega_{mn}^{-1}$ to reach the region outside the central peak of the transition probability $\mathcal{P}^{(1)}_{m}(T)$, as can also be seen from Fig.~\ref{fig:ftcomp}(a).
Incidentally, the expression for $\mathcal{P}^{(1)}_{m}(T)$ [Eq.~\eqref{eq:ubdv1}] provides a physical illustration of why the assumption of a nondegenerate spectrum of $\op{H}_S$ is important to a proper protective measurement (see Ref.~\cite{Dass:1999:az} for an analysis of the issue of degeneracies in protective measurement). Namely, suppose there exists a state $\ket{m}\not=\ket{n}$ with energy $E_m=E_n$. Then $\omega_{mn}=0$ and
\begin{align}\label{eq:ubdv1jhf} \mathcal{P}^{(1)}_{m}(T) &= \frac{1}{\hbar^2}\abs{ \bra{m} \op{O} \ket{n}}^2. \end{align}
Assuming the matrix element $\bra{m} \op{O} \ket{n}$ does not vanish, this means that the probability for such an energy-conserving transition would have a nonzero value independent of $T$. In a protective measurement, however, the goal is to make this probability arbitrarily small by increasing $T$. The same observation also holds for the time-dependent couplings $g(t)$ studied below; in each case the Fourier transform of $g(t)$, and therefore the first-order transition probability $\mathcal{P}^{(1)}_{m}(T)$, becomes independent of $T$ if $\omega_{mn}=0$.
\subsection{\label{sec:turnonoff}Constant coupling with linear turn-on and turnoff}
Since the constant-coupling function $g(t)=1/T$ [Eq.~\eqref{eq:igv34763578}] exhibits a sharp step discontinuity at $\pm T/2$, we will now replace this discontinuity with a linear turn-on and turnoff of the measurement interaction over a period $\Delta T \le T/2$, and we will explore how this choice can help reduce the amount of state disturbance. Then $g(t)$ takes the shape of an isosceles trapezoid with base lengths $T$ and $T-2\Delta T$ [see Fig.~\ref{fig:g}(b)]. This function is equal to the convolution of two unit-area boxcar functions [see Eq.~\eqref{eq:igv}] of widths $\Delta T$ and $T-\Delta T$ centered at zero,
\begin{align}\label{eq:178y45y} g(t) &= \Pi\left(-\Delta T/2,\Delta T/2\right) \notag \\ &\quad * \, \Pi\left[-(T-\Delta T)/2,(T-\Delta T)/2\right]. \end{align}
Since the Fourier transform of the convolution of two functions is equal to the product of the Fourier transforms of each function, we have
\begin{align}\label{eq:hjvsjbfhj} \widetilde{g}(\omega_{mn} T) &= \mathrm{sinc}\left(\omega_{mn}\Delta T/2\right) \mathrm{sinc}\left[\omega_{mn}(T-\Delta T)/2\right], \end{align}
and therefore the first-order transition probability is
\begin{align}\label{eq:hjvsjbfhjdfh} \mathcal{P}^{(1)}_{m}(T) &=\frac{1}{\hbar^2}\abs{ \bra{m} \op{O} \ket{n}}^2 \bigl\{\mathrm{sinc}\left(\omega_{mn}\Delta T/2\right) \notag \\ &\quad\,\, \times \mathrm{sinc}\left[\omega_{mn}(T-\Delta T)/2\right]\bigr\}^2. \end{align}
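The convolution-theorem result, Eq.~\eqref{eq:hjvsjbfhj}, can also be checked numerically. The following Python sketch (illustrative only; the trapezoid profile is built directly from the ramp description above, and the parameter values are arbitrary) compares the directly computed transform of the trapezoidal coupling with the product of the two sinc factors:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x)/x

def g_trap(t, T, dT):
    """Unit-area isosceles trapezoid: linear ramps of width dT, flat top 1/(T-dT)."""
    h0, a = 1.0/(T - dT), abs(t)
    if a > T/2:
        return 0.0
    if a > T/2 - dT:
        return h0*(T/2 - a)/dT   # linear turn-on/turnoff ramp
    return h0

def fourier(g, T, w, n=40000):
    """Trapezoid rule for the cosine transform of g over [-T/2, T/2]."""
    h = T/n
    s = 0.5*math.cos(w*T/2)*(g(-T/2) + g(T/2))
    for k in range(1, n):
        t = -T/2 + k*h
        s += math.cos(w*t)*g(t)
    return s*h

T, dT = 1.0, 0.2
for wT in (1.0, 4.0, 9.0):
    w = wT/T
    product = sinc(w*dT/2)*sinc(w*(T - dT)/2)
    assert abs(fourier(lambda t: g_trap(t, T, dT), T, w) - product) < 1e-5
```

Setting `dT = T/2` reproduces the triangular limiting case, for which the product collapses to $\mathrm{sinc}^2(\omega_{mn}T/4)$.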
Increasing $\Delta T$ increases the rate of decay of the envelope of this function, but it also increases the FWHM. We shall explore two limiting cases. Setting $\Delta T =0$ corresponds to the case of constant $g(t)=1/T$ discussed in Sec.~\ref{sec:time-indep-coupl} and Eq.~\eqref{eq:hjvsjbfhj} becomes $\widetilde{g}(\omega_{mn} T) = \mathrm{sinc} \left(\omega_{mn}T/2\right)$, in agreement with Eq.~\eqref{eq:dklvdflk1}. The other limiting case corresponds to setting $\Delta T =T/2$. Then $g(t)$ becomes the unit-area triangle function shown in Fig.~\ref{fig:g}(c) and Eq.~\eqref{eq:hjvsjbfhj} gives
\begin{align}\label{eq:hjvsjbfhjdkhsf2} \widetilde{g}(\omega_{mn} T) &= \mathrm{sinc}^2\left(\omega_{mn}T/4\right). \end{align}
Note that this triangle function $g(t)$ reaches a maximum value of $2/T$ at $t=0$, which is twice the value for constant $g(t)$ [see Eq.~\eqref{eq:igv34763578}]. The corresponding first-order transition probability is
\begin{align}\label{eq:hjvsjb44fh} \mathcal{P}^{(1)}_{m}(T) = \frac{1}{\hbar^2}\abs{ \bra{m} \op{O} \ket{n}}^2 \mathrm{sinc}^4\left(\omega_{mn}T/4\right). \end{align}
The dependence of this function on $\omega_{mn}T$ is shown in Fig.~\ref{fig:ftcomp} (dashed line). The envelope decays as $1/(\omega_{mn}T/4)^4$, two powers of $T$ faster than in the case of constant $g(t)$. This is a significant improvement in the decay rate. The longer we make the turn-on and turnoff periods $\Delta T$, i.e., the more we approach the limiting case of the triangle function $g(t)$ shown in Fig.~\ref{fig:g}(c), the more quickly the envelope of the transition probability decreases with $T$. Note that the condition on the relationship between $T$ and $\omega_{mn}^{-1}$ stated in Eq.~\eqref{eq:ubdv} still applies here; the only difference is that $\abs{ \bra{m} \op{O} \ket{n}}$ in Eq.~\eqref{eq:ubdv} is replaced by $\abs{ \bra{m} \op{O} \ket{n}}^{1/2}$.
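The quoted envelope is easy to verify: since $\abs{\sin x}\le 1$, $\mathrm{sinc}^4(\omega_{mn}T/4)$ is bounded by $(4/\omega_{mn}T)^4$ and saturates the bound wherever $\abs{\sin(\omega_{mn}T/4)}=1$. A minimal Python check (the sample points are arbitrary choices):

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x)/x

# sinc^4(x/4) never exceeds its envelope (4/x)^4 ...
for x in (20.0, 57.0, 123.4):
    assert sinc(x/4)**4 <= (4.0/x)**4 + 1e-12
# ... and saturates it at x = 4(k + 1/2)*pi, where |sin(x/4)| = 1
x = 4*(5 + 0.5)*math.pi
assert sinc(x/4)**4 > 0.99*(4.0/x)**4
```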
Finally, since the FWHM of the function $\mathrm{sinc}^4(cx)$ is $R(c) \simeq 2.00c^{-1}$, the FWHM of the central peak of $\abs{\widetilde{g}(\omega_{mn} T)}^2$ is [see also Fig.~\ref{fig:ftcomp}(a)]
\begin{align}\label{eq:643tfguwd} R\simeq 8.00. \end{align}
Given that in a protective measurement we typically have $\omega_{mn}T \gg 1$ and are therefore far away from the central peak, the increase in the FWHM compared to the case of constant $g(t)$ (which was $R\simeq 5.56$) can be considered irrelevant.
\subsection{\label{sec:smoothly-vary-coupl}Smoothly varying coupling}
We saw in Sec.~\ref{sec:turnonoff} that inclusion of linear turn-on and turnoff periods significantly decreases the state disturbance. Even a triangular $g(t)$, however, has sharp corners at the turn-on and turnoff points $t=\pm T/2$ as well as at the center $t=0$; these corners can be expected to contribute additional Fourier components and therefore increase the bandwidth. In the following we shall therefore consider a smoothly varying coupling function $g(t)$ without sharp corners. We choose the raised-cosine function with unit area shown in Fig.~\ref{fig:g}(d),
\begin{equation}\label{eq:jfkhjkvhjkvhjkv11881} g(t)=\begin{cases} \frac{1}{T}\left[ 1+\cos(2\pi t/T)\right], & -T/2 \le t \le T/2 \\ 0 & \text{otherwise}. \end{cases} \end{equation}
As in the case of triangular $g(t)$, this function has a maximum value of $2/T$ at $t=0$. The Fourier transform is
\begin{equation}\label{eq:12786svjj} \widetilde{g}(\omega_{mn}T)=\frac{1}{1-(\omega_{mn} T/ 2\pi)^2} \mathrm{sinc}(\omega_{mn} T/2). \end{equation}
Comparing this result to the Fourier transform of constant $g(t)=1/T$ [see Eq.~\eqref{eq:dklvdflk1}], we see that we have gained an extra factor of $\left[1-(\omega_{mn} T/ 2\pi)^2\right]^{-1}$. The first-order transition probability is
\begin{align} \mathcal{P}^{(1)}_m(T) &= \frac{1}{\hbar^2} \abs{ \bra{m} \op{O} \ket{n}}^2 \notag \\ &\quad \times \left[\frac{1}{1-(\omega_{mn} T/ 2\pi)^2} \mathrm{sinc}(\omega_{mn} T/2) \right]^2. \end{align}
Its dependence on $\omega_{mn}T$ is shown in Fig.~\ref{fig:ftcomp} (dotted line). Note that it decays as $1/T^6$ for large $\omega_{mn} T$. This is to be compared to the $1/T^2$ dependence for constant $g(t)$ [see Eq.~\eqref{eq:ubdv1}] and the $1/T^4$ dependence for triangular $g(t)$ [see Eq.~\eqref{eq:hjvsjb44fh}]. Thus, choosing a smoothly changing $g(t)$ provides a decisive advantage in reducing the state disturbance, in agreement with what one would generally expect in light of the quantum adiabatic theorem.
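The closed form in Eq.~\eqref{eq:12786svjj} can again be checked by direct integration. The following Python sketch (a consistency check only; the test values deliberately avoid the removable singularity at $\omega_{mn}T=2\pi$) compares the numerically computed transform of the raised-cosine coupling with the analytic expression:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x)/x

T = 1.0
def g_rc(t):
    """Unit-area raised cosine on [-T/2, T/2]."""
    return (1.0 + math.cos(2*math.pi*t/T))/T

def fourier(w, n=20000):
    """Trapezoid rule for the cosine transform of g_rc over [-T/2, T/2]."""
    h = T/n
    s = 0.0                                   # g_rc vanishes at t = +/- T/2
    for k in range(1, n):
        t = -T/2 + k*h
        s += math.cos(w*t)*g_rc(t)
    return s*h

for wT in (1.0, 4.0, 9.0):
    closed_form = sinc(wT/2)/(1.0 - (wT/(2*math.pi))**2)
    assert abs(fourier(wT/T) - closed_form) < 1e-6
```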
Since the FWHM of the function
\begin{equation} f(x)=\left[\frac{1}{1-(cx/\pi)^2}\mathrm{sinc}(cx)\right]^2 \end{equation}
is $R(c) \simeq 4.53c^{-1}$, the FWHM of $\abs{\widetilde{g}(\omega_{mn}T)}^2$, with $\widetilde{g}(\omega_{mn}T)$ given by Eq.~\eqref{eq:12786svjj}, is
\begin{equation} R \simeq 9.06. \end{equation}
Table~\ref{tab:comp} summarizes the results for the scaling behavior and FWHM of the transition probability $\mathcal{P}^{(1)}_m(T)$ for three different choices of $g(t)$. Note the dramatic difference in the falloff of the probability with $T$. By contrast, the FWHM is of order unity in all cases and the differences in the specific values are insignificant.
\begin{table} \begin{ruledtabular} \begin{tabular}{ccc} & Transition probability & FWHM of $\mathcal{P}^{(1)}_m(T)$ \\ Coupling $g(t)$ & $\mathcal{P}^{(1)}_m(T)$ & (in units of $\omega_{mn}T$) \\\hline constant & $O(1/T^2)$ & $5.56$ \\ triangle & $O(1/T^4)$ & $8.00$ \\ raised cosine & $O(1/T^6)$ & $9.06$ \\ \end{tabular} \end{ruledtabular} \caption{\label{tab:comp}Comparison of the dependence of the transition probability $\mathcal{P}^{(1)}_m(T)$ and its peak width (FWHM) on the measurement time $T$ for three different choices of the coupling function $g(t)$.} \end{table}
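The FWHM values in Table~\ref{tab:comp} can be reproduced by bisection on the three transition-probability profiles. The Python sketch below (illustrative only; the bisection brackets are chosen by inspection of the monotone central peaks) recovers the tabulated widths to within rounding:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x)/x

def fwhm(f, hi):
    """FWHM of an even peak with f(0) = 1, f decreasing through 1/2 on [0, hi]."""
    lo = 0.0
    for _ in range(100):
        mid = 0.5*(lo + hi)
        if f(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 2.0*lo                 # the peak is symmetric about x = 0

p_const = lambda x: sinc(x/2)**2                                # constant coupling
p_tri = lambda x: sinc(x/4)**4                                  # triangle
p_rc = lambda x: (sinc(x/2)/(1.0 - (x/(2*math.pi))**2))**2      # raised cosine

assert abs(fwhm(p_const, 4.0) - 5.56) < 0.02
assert abs(fwhm(p_tri, 6.0) - 8.00) < 0.05
assert abs(fwhm(p_rc, 6.0) - 9.06) < 0.05
```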
\subsection{Higher-order contributions}
To complete our analysis, we may also look at the higher-order contributions $A^{(\ell \ge 2)}_{m}(T)$ to the transition amplitude $A_{m}(T)$, which are given by Eq.~\eqref{eq:g8fbvsv1}. As shown in Appendix~\ref{sec:high-order-contr}, while such higher-order contributions contain terms that are of the same order in $1/T$ and show a $T$ dependence similar to that of the first-order amplitude $A^{(1)}_{m}(T)$, these terms become exponentially suppressed with increasing order. Therefore, the first-order amplitude $A^{(1)}_{m}(T)$ discussed here provides a good representation and approximation of the full transition amplitude and of the state disturbance incurred in a protective measurement.
\section{\label{sec:pointer-shift}Pointer shift}
In Sec.~\ref{sec:state-disturbance} we showed how choosing a smoothly varying coupling function $g(t)$ can significantly improve the rate at which the state disturbance decreases as the measurement time $T$ is increased. Based on this result alone, we would conclude that the optimal choice of $g(t)$ is a functional form that minimizes the state disturbance in this sense. However, there is another concern to be taken into account in protective measurement, namely, the shift of the apparatus pointer. Specifically, the question is whether and how the particular choice of $g(t)$ may influence the amount by which the apparatus pointer will move during the measurement time $T$.
We will now show that in the relevant large-$T$ case given by Eq.~\eqref{eq:ubdv}, the pointer shift is independent of the particular form of $g(t)$. To this end, we quantify the pointer shift associated with the initial state $\ket{n}$ of the system by perturbatively studying the amplitude of this state at the conclusion of the protective measurement. As before, we let the interaction start at $t=-T/2$ and end at $t=T/2$. From Eq.~\eqref{eq:87gr782} the first-order contribution to the amplitude is
\begin{equation} A^{(1)}_{n}(T) = -\frac{\text{i}}{\hbar} \bra{n} \op{O} \ket{n} \int_{-T/2}^{T/2} \text{d} t\, g(t) = -\frac{\text{i}}{\hbar} \bra{n} \op{O} \ket{n}, \end{equation}
where the last step follows from the normalization of $g(t)$ according to Eq.~\eqref{eq:normal}. Thus, $A^{(1)}_{n}(T)$ is independent of the particular functional form of $g(t)$. We will now investigate the higher-order terms $A^{(\ell \ge 2)}_{n}(T)$ given by Eq.~\eqref{eq:g8fbvsv1} with $m=n$,
\begin{align}\label{eq:jldfjkl2} A^{(\ell)}_{n}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^\ell \sum_{k_1,k_2,\hdots,k_{\ell-1}} \bra{n} \op{O} \ket{k_1} \bra{k_1} \op{O} \ket{k_2} \notag \\ &\quad \times \bra{k_2} \op{O} \ket{k_3} \cdots \bra{k_{\ell-1}} \op{O} \ket{n}\notag \\ &\quad \times\int_{-T/2}^{T/2} \text{d} t' \, \text{e}^{\text{i} \omega_{nk_1} t'} g(t') \notag \\ &\quad \times\int_{-T/2}^{t'} \text{d} t'' \,\text{e}^{\text{i} \omega_{k_1k_2} t''} g(t'')\cdots \notag \\ &\quad \times\int_{-T/2}^{t^{(\ell-1)}} \text{d} t^{(\ell)} \,\text{e}^{\text{i} \omega_{k_{\ell-1} n} t^{(\ell)}}g(t^{(\ell)}). \end{align}
Let us consider the term for which all $k_j=n$, $1 \le j \le \ell-1$,
\begin{align}\label{eq:1aa} A^{(\ell)}_{n,\{k_j=n\}}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^\ell \bra{n} \op{O} \ket{n}^\ell \int_{-T/2}^{T/2} \text{d} t' \, g(t') \notag \\ &\quad \times\int_{-T/2}^{t'} \text{d} t'' \, g(t'') \cdots \int_{-T/2}^{t^{(\ell-1)}} \text{d} t^{(\ell)} \, g(t^{(\ell)}). \end{align}
In Appendix~\ref{sec:eval-point-shift} we show that the multiple integral is equal to $1/\ell !$ and thus Eq.~\eqref{eq:1aa} becomes
\begin{align}\label{eq:dsfdf1aa} A^{(\ell)}_{n,\{k_j=n\}}(T) &= \frac{1}{\ell !}\left(-\frac{\text{i}}{\hbar}\right)^\ell \bra{n} \op{O} \ket{n}^\ell. \end{align}
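The identity used here, namely that the $\ell$-fold time-ordered integral of a normalized $g$ equals $1/\ell!$ independently of the shape of $g$, is easily confirmed numerically. The following Python sketch (a consistency check only; the raised-cosine coupling is just one convenient normalized choice) builds the nested integrals by cumulative quadrature:

```python
import math

T = 1.0
def g(t):
    """A normalized coupling; any g with unit area gives the same result."""
    return (1.0 + math.cos(2*math.pi*t/T))/T

def nested(order, n=4000):
    """Time-ordered integral: F_0 = 1, F_l(t) = int_{-T/2}^t g(s) F_{l-1}(s) ds;
    returns F_order(T/2)."""
    h = T/n
    ts = [-T/2 + k*h for k in range(n + 1)]
    gs = [g(t) for t in ts]
    F = [1.0]*(n + 1)
    for _ in range(order):
        G = [0.0]*(n + 1)
        for k in range(1, n + 1):            # cumulative trapezoid rule
            G[k] = G[k-1] + 0.5*h*(gs[k]*F[k] + gs[k-1]*F[k-1])
        F = G
    return F[n]

for l in (1, 2, 3, 4):
    assert abs(nested(l) - 1.0/math.factorial(l)) < 1e-5
```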
To find the total amplitude that also includes the effect on the apparatus subspace, we use Eq.~\eqref{eq:y89ggy892} and sum $A^{(\ell)}_{n,\{k_j=n\}}(T)$ over all orders $\ell$,
\begin{align}\label{eq:vcsuaadfv} \sum_{\ell=0}^\infty a_i^{\ell} A^{(\ell)}_{n,\{k_j=n\}}(T) &= \sum_{\ell=0}^\infty \frac{1}{\ell !} \left( -\frac{\text{i}}{\hbar} a_i \bra{n} \op{O} \ket{n} \right)^\ell \notag \\ &= \text{e}^{-\text{i} a_i\bra{n} \op{O} \ket{n}/\hbar} \notag \\ &= \bra{A_i} \text{e}^{-\text{i} \op{P}\bra{n} \op{O} \ket{n}/\hbar} \ket{A_i}, \end{align}
where $\text{e}^{-\text{i} \op{P}\bra{n} \op{O} \ket{n}/\hbar}$ is the familiar result for the pointer-shift operator in zeroth-order protective measurement obtained from time-independent perturbation theory [see Eq.~\eqref{eq:isvgaazxxx}]. Note that we must include all orders $\ell$ irrespective of the size of $T$ because each of the $\ell$ matrix elements $\bra{k_j} g(t)\op{O} \ket{k_{j'}}$ is of order $1/T$ and the integrals combined are of order $T^\ell$.
Note that if $g(t)$ is not normalized, i.e., if $G\equiv\int_{-T/2}^{T/2} \text{d} t \, g(t) \not= 1$, then the multiple integral in Eq.~\eqref{eq:1aa} is equal to $G^\ell/\ell !$ and Eq.~\eqref{eq:vcsuaadfv} instead reads
\begin{align}\label{eq:dsfdf1aaadsdd} \sum_{\ell=0}^\infty a_i^\ell A^{(\ell)}_{n,\{k_j=n\}}(T) &=\text{e}^{-\text{i} G a_i\bra{n} \op{O} \ket{n}/\hbar} \notag \\ &= \bra{A_i}\text{e}^{-\text{i} G\op{P}\bra{n} \op{O} \ket{n}/\hbar}\ket{A_i}. \end{align}
Thus, the pointer shift is proportional to both $\bra{n} \op{O} \ket{n}$ and $G$, the area under the $g(t)$ graph.
As shown in Appendix~\ref{sec:corr-constant-coupling}, for constant $g(t)$ the $k_j\not=n$ contributions to Eq.~\eqref{eq:jldfjkl2} are of order $1/T$ or higher. As far as the problem of the pointer shift is concerned, they may therefore be neglected in the large-$T$ limit of protective measurement given by Eq.~\eqref{eq:ubdv}. Choosing time-varying coupling functions $g(t)$ can only further diminish the relevance of these terms since we know from our analysis of state disturbance in Sec.~\ref{sec:state-disturbance} that such couplings will significantly increase the rate of amplitude decay with $T$.
\section{\label{sec:discussion}Discussion and conclusions}
This paper provides a quantitative and detailed analysis of the state disturbance incurred during a protective measurement under physically meaningful conditions, supplying knowledge that is crucial not only to a practical implementation of protective measurement, but also to a deeper understanding of the theory and dynamics of protective measurement. We have departed from the prevailing mathematical idealization of infinitely weak or perfectly adiabatic protective measurements and instead investigated the amount of state disturbance introduced by protective measurements characterized by a finite time-dependent system--apparatus coupling strength $g(t)$ and finite duration $T$. In studying the state disturbance, we have focused on the first-order transition probabilities obtained from time-dependent perturbation theory, which quantify the mixing of states of the system different from the initial state and are proportional to the squared Fourier transform of $g(t)$. For the functions $g(t)$ studied here, their Fourier transforms, and thus the corresponding transition probabilities, are functions of the dimensionless quantity $\omega_{mn}T$, which measures the ratio of the measurement time $T$ to the internal time scale $\omega_{mn}^{-1}$ associated with the transition $\ket{n}\rightarrow\ket{m}$, with $\ket{n}$ denoting the initial state of the system.
The choice of constant $g(t)=1/T$ [with $g(t)=0$ outside the measurement interval] commonly considered in the literature (see, e.g., Refs.~\cite{Dass:1999:az,Vaidman:2009:po,Gao:2013:om}) is found to be an essentially worst-case scenario for state disturbance, owing to an infinitely fast turn-on and turnoff of the interaction that leads to a broad Fourier spectrum. In agreement with what one would expect from first-order time-independent perturbation theory, the envelope of the corresponding transition probabilities decays as $1/T^2$. The condition for these probabilities to be small is that the measurement time $T$ must be (ideally significantly) larger than the longest internal time scale $\tau_\text{max} = \max_{m\not=n}\{\omega_{mn}^{-1}\}$ associated with perturbation-induced transitions out of the initial state $\ket{n}$. Given that typical atomic time scales are very short, this means that it need not be difficult in practice to choose the measurement time $T$ long enough to ensure sufficiently small state disturbance (see also Ref.~\cite{Dickson:1995:lm} for a similar argument).
We have shown that any smoothing of the coupling function $g(t)$ dramatically improves the rate of envelope decay of the transition probabilities with $T$ [see Fig.~\ref{fig:ftcomp}(b)]. For example, introduction of linear turn-on and turnoff periods increases the envelope decay rate to $1/T^\beta$, with $2 < \beta \le 4$. The longer we make the turn-on and turnoff periods, the more quickly the envelope of the transition probability decreases with $T$. The value $\beta=4$ is reached if the lengths of the turn-on and turnoff periods are maximized to $T/2$, i.e., if the interaction strength is linearly ramped up to its maximum value of $2/T$ and then immediately linearly decreased back down to zero, corresponding to a triangular pulse. Smoothing of $g(t)$ by using a raised-cosine function for $g(t)$ provides a further significant improvement, leading to a $1/T^6$ envelope decay of the transition probabilities.
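This hierarchy of decay rates is easy to check numerically. The pure-Python sketch below (an illustration, not part of the paper's calculations; the $u$ bands are arbitrary choices) evaluates the Fourier transform $\widetilde{g}(\omega;T)$, which for each rescaled pulse shape depends only on $u=\omega T$, and estimates the amplitude-envelope exponent $p$ in $|\widetilde{g}|\sim u^{-p}$; the transition probabilities then fall off as $1/T^{2p}$, i.e., $1/T^2$, $1/T^4$, and $1/T^6$ for the rectangular, triangular, and raised-cosine pulses.

```python
import cmath
import math

def ft(h, u, n=2000):
    """F(u) = the integral of h(s) e^{ius} over s in [-1/2, 1/2], composite Simpson (n even)."""
    step = 1.0 / n
    total = h(-0.5) * cmath.exp(-0.5j * u) + h(0.5) * cmath.exp(0.5j * u)
    for k in range(1, n):
        s = -0.5 + k * step
        total += (4 if k % 2 else 2) * h(s) * cmath.exp(1j * u * s)
    return total * step / 3

# Rescaled pulse shapes h(s) = T g(sT) on s in [-1/2, 1/2], each normalized to unit area,
# so that the Fourier transform is a function of u = omega*T alone.
pulses = {
    "rect": lambda s: 1.0,                             # constant coupling
    "tri":  lambda s: 2.0 * (1.0 - 2.0 * abs(s)),      # triangular pulse
    "rcos": lambda s: 1.0 + math.cos(2 * math.pi * s)  # raised cosine
}

def band_max(h, lo, hi, du=0.5):
    """Max of |F(u)| over a band of u; tracks the envelope near u = lo."""
    best, u = 0.0, lo
    while u <= hi:
        best = max(best, abs(ft(h, u)))
        u += du
    return best

# Envelope exponent p in |F(u)| ~ u^{-p}, estimated from two bands a factor 5 apart.
exponents = {}
for name, h in pulses.items():
    exponents[name] = math.log(band_max(h, 20, 40) / band_max(h, 100, 200)) / math.log(5)
print(exponents)  # roughly {'rect': 1, 'tri': 2, 'rcos': 3}
```

The estimated exponents come out close to $1$, $2$, and $3$, matching the $\beta=2$, $4$, and $6$ probability decays quoted above.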
For all the choices of $g(t)$ considered in this paper, the width of the central dominant peak of the transition probability for $\ket{n}\rightarrow\ket{m}$, considered as a function of $\omega_{mn}T$, has a similar value lying between about 5 and 10. It is therefore sufficient for $T$ to be equal to just a few multiples of the internal time scales $\omega_{mn}^{-1}$ to reach a region outside the central peak where the transition probabilities become relatively small. The similarity of these values for the peak width suggests that we should let the choice of $g(t)$ chiefly be guided by the goal of maximizing the decay rate of the transition-probability envelope, which means choosing a smoothly varying $g(t)$ such as the raised-cosine function considered here. In the limit $T \rightarrow \infty$, both the raised-cosine function and triangular $g(t)$ change infinitely slowly and the transition probabilities are zero, providing a concrete illustration of the quantum adiabatic theorem \cite{Born:1928:yf}.
Interestingly, we found that higher-order corrections to the transition amplitude contain terms that exhibit the same scaling behavior with $T$ as the first-order contribution (e.g., proportional to $1/T$ for constant coupling). We attributed this observation to the particular way in which the duration $T$ of the protective-measurement interaction tunes the strength of the interaction via the normalization condition $\int \text{d} t\, g(t)=1$. Such terms, however, become exponentially suppressed as one moves to higher-order corrections, suggesting that the first-order transition amplitude is indeed the dominant and appropriate quantity for measuring the state disturbance.
We also showed that the total pointer shift incurred during the protective measurement is independent of the functional form of $g(t)$. To be sure, there are higher-order corrections to the evolution of the wave packet of the apparatus pointer (corresponding to effects such as additional spreading and distortion of the wave packet) whose precise magnitudes and dynamics will depend on the choice of $g(t)$. However, such corrections become insignificant in the case $T \gg \tau_\text{max}$ relevant to protective measurement. Moreover, their influence can be expected to be minimized by using a coupling function $g(t)$ that minimizes the state disturbance. Choosing a smoothly varying coupling function, such as the triangle function or the raised-cosine function discussed in this paper, has therefore two benefits: It reduces the disturbance of the initial state and it improves the approximation of treating the evolution of the pointer wave packet as a simple combination of free spreading and a shift of the center of the wave packet by an amount given by the expectation value of the measured observable $\op{O}$ in the initial state of the system.
While theoretical schemes for state reconstruction using protective measurements have been described for specific systems and system--apparatus interactions \cite{Aharonov:1993:jm,Anandan:1993:uu,Nussinov:1998:yy,Dass:1999:az}, the experimental realization of protective measurements remains an open challenge. We hope that our analysis of state disturbance, as well as our framework for treating arbitrary time-dependent coupling functions, may aid in the implementation of protective measurements. Since the system--apparatus interaction in weak measurements \cite{Aharonov:1988:mz} is of the same structure as in protective measurement, only of much shorter duration, our results may also be of interest to the theory and implementation of weak measurements.
\begin{acknowledgments} This research was supported by a University of Portland Arthur Butine Grant. \end{acknowledgments}
\appendix
\section{\label{sec:high-order-corr}Higher-order corrections to the state-vector amplitude}
Here we will investigate the contributions of the higher-order corrections $A^{(\ell \ge 2)}_{m}(T)$ [see Eq.~\eqref{eq:g8fbvsv1}] to the amplitude $A_{m}(T)$ and discuss their influence on the state disturbance (Appendix~\ref{sec:high-order-contr}) and the evolution of the pointer wave packet (Appendix~\ref{sec:corr-constant-coupling}).
\subsection{\label{sec:high-order-contr}State disturbance}
The higher-order contributions $A^{(\ell)}_{m}(T)$, $\ell \ge 2$, to the transition amplitude $A_{m}(T)$ are given by Eq.~\eqref{eq:g8fbvsv1} with $m\not= n$,
\begin{align}\label{eq:g8f4656bvsv1dfhjjhfssd} A^{(\ell)}_{m}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^\ell \sum_{k_1,k_2,\hdots,k_{\ell-1}} \bra{m} \op{O} \ket{k_1} \bra{k_1} \op{O} \ket{k_2} \cdots \notag \\ & \quad \times \bra{k_{\ell-1}} \op{O} \ket{n} \int_{-T/2}^{T/2} \text{d} t' \, \text{e}^{\text{i} \omega_{mk_1} t'} g(t') \notag \\ & \quad \times\int_{-T/2}^{t'} \text{d} t'' \,\text{e}^{\text{i} \omega_{k_1k_2} t''} g(t'') \cdots \notag \\ & \quad \times\int_{-T/2}^{t^{(\ell-1)}} \text{d} t^{(\ell)} \,\text{e}^{\text{i} \omega_{k_{\ell-1} n} t^{(\ell)}} g(t^{(\ell)}). \end{align}
This amplitude represents multistep transitions in which the system transitions from the initial state $\ket{n}$ to the final state $\ket{m}$ via up to $\ell-1$ intermediate virtual states $\ket{k_i}$, $1 \le i \le \ell-1$, summed over all possible transition times and intermediate states.
We first consider the terms in the sum in Eq.~\eqref{eq:g8f4656bvsv1dfhjjhfssd} for which all indices $k_i$ take on distinct values; let us denote such a term by $\alpha^{(\ell)}_{k_1\cdots k_{\ell-1}}(T)$. (As always, we also assume that all transition frequencies $\omega_{k_ik_j}$ are nonzero for $k_i\not= k_j$, i.e., that the spectrum is nondegenerate.) To estimate the influence of these terms, we approximate the frequencies $\omega_{k_ik_j}$ by a typical value $\bar{\omega}$ such that we can write $\alpha^{(\ell)}_{k_1\cdots k_{\ell-1}}(T)$ as
\begin{align}\label{eq:g8fbvsv1dfhjjhfd} \alpha^{(\ell)}_{k_1\cdots k_{\ell-1}}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^\ell \bra{m} \op{O} \ket{k_1} \bra{k_1} \op{O} \ket{k_2} \cdots \bra{k_{\ell-1}} \op{O} \ket{n}\notag \\ &\quad\times \int_{-T/2}^{T/2} \text{d} t' \, \text{e}^{\text{i} \bar{\omega} t'} g(t') \notag \\ &\quad\times\int_{-T/2}^{t'} \text{d} t'' \,\text{e}^{\text{i} \bar{\omega} t''} g(t'') \cdots \notag \\ &\quad\times \int_{-T/2}^{t^{(\ell-1)}} \text{d} t^{(\ell)} \,\text{e}^{\text{i} \bar{\omega} t^{(\ell)}} g(t^{(\ell)}). \end{align}
Since the integrands are symmetric under an exchange of the time variables, we can employ the strategy described in Appendix~\ref{sec:eval-point-shift} below. Namely, we can replace the upper integration limits by $T/2$ and compensate for this modification by an overall multiplicative factor of $1/\ell!$,
\begin{align}\label{eq:g8fbvsv1dfhjjhfssd} \alpha^{(\ell)}_{k_1\cdots k_{\ell-1}}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^\ell \bra{m} \op{O} \ket{k_1} \bra{k_1} \op{O} \ket{k_2}\cdots \bra{k_{\ell-1}} \op{O} \ket{n} \notag \\ &\quad\times \frac{1}{\ell!} \left[ \int_{-T/2}^{T/2} \text{d} t' \, \text{e}^{\text{i} \bar{\omega} t'} g(t') \right]^\ell. \end{align}
The term in square brackets is simply the Fourier transform $\widetilde{g}(\bar{\omega};T)$ of $g(t)$ [compare Eq.~\eqref{eq:lvdgs}]. Thus, our analysis of the first-order contributions in Secs.~\ref{sec:time-indep-coupl}--\ref{sec:smoothly-vary-coupl} can be directly applied to determining the dependence of Eq.~\eqref{eq:g8fbvsv1dfhjjhfssd} on the measurement time $T$. Specifically, if the first-order transition amplitude $A^{(1)}_{m}(T)$ follows a $1/T^\beta$ dependence, then the contribution to the transition amplitude $A^{(\ell)}_{m}(T)$ made by Eq.~\eqref{eq:g8fbvsv1dfhjjhfssd} will be on the order of $1/T^{\beta \ell}$. Since $\ell \ge 2$, such contributions are negligible in the large-$T$ limit relevant to protective measurement.
It follows that contributions whose dependence on $T$ is comparable to that of the first-order contribution $A^{(1)}_{m}(T)$ can only arise, if at all, for transitions involving fewer than $\ell-1$ intermediate virtual states $\ket{k\not=m,n}$. To explore this possibility, let us consider the terms in which no such distinct virtual intermediate transitions are present. Formally, this corresponds to setting $k_j=m$ for $1 \le j \le i$ and $k_j=n$ for $i+1 \le j \le \ell-1$ in Eq.~\eqref{eq:g8f4656bvsv1dfhjjhfssd}, with $0 \le i \le \ell-1$. There are $\ell$ such terms, which all give rise to the same dependence on $T$. Let us examine a representative term,
\begin{align}\label{eq:ub22dv2hh437tyr67} \alpha^{(\ell)}_{mn\cdots n}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^\ell \bra{m} \op{O} \ket{n} \bra{n} \op{O} \ket{n}^{\ell-1} \notag \\ & \quad\times \int_{-T/2}^{T/2} \text{d} t' \, \text{e}^{\text{i} \omega_{mn} t'} g(t') \int_{-T/2}^{t'} \text{d} t'' \,g(t'') \cdots \notag \\ &\quad\times \int_{-T/2}^{t^{(\ell-1)}} \text{d} t^{(\ell)} \, g(t^{(\ell)}). \end{align}
Again using the strategy described in Appendix~\ref{sec:eval-point-shift}, we rewrite this equation as
\begin{align}\label{eq:ubh437tyr6aaww7aa} \alpha^{(\ell)}_{mn\cdots n}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^\ell \bra{m} \op{O} \ket{n} \bra{n} \op{O} \ket{n}^{\ell-1}\notag \\ & \quad \times \int_{-T/2}^{T/2} \text{d} t' \, \text{e}^{\text{i} \omega_{mn} t'} g(t') \frac{1}{(\ell-1)!}\left[G(t')\right]^{\ell-1}, \end{align}
where $G(t) = \int_{-T/2}^{t} \text{d} t' \,g(t')$ measures the area under the $g(t)$ curve in the interval $[-T/2,t]$, with $-T/2 \le t \le T/2$; this area is a non-negative dimensionless number of order unity. Thus, $\alpha^{(\ell)}_{mn\cdots n}(T)$ is of the same order in $1/T$ as the integral $\int_{-T/2}^{T/2} \text{d} t' \, \text{e}^{\text{i} \omega_{mn} t'} g(t')$, which is the Fourier transform of $g(t)$. We know from Secs.~\ref{sec:time-indep-coupl}--\ref{sec:smoothly-vary-coupl} that this Fourier transform determines the $T$ dependence of the first-order transition amplitude $A^{(1)}_{m}(T)$. Therefore, we can conclude that $\alpha^{(\ell)}_{mn\cdots n}(T)$ [Eq.~\eqref{eq:ub22dv2hh437tyr67}] has a $T$ dependence similar to that of $A^{(1)}_{m}(T)$. Specifically, just like $A^{(1)}_{m}(T)$, it scales as $1/T$ for constant coupling (Sec.~\ref{sec:time-indep-coupl}), $1/T^2$ for constant coupling with linear turn-on and turnoff (Sec.~\ref{sec:turnonoff}), and $1/T^3$ for the raised-cosine coupling function (Sec.~\ref{sec:smoothly-vary-coupl}).
Instead of simply estimating the influence of $G(t)$ in Eq.~\eqref{eq:ubh437tyr6aaww7aa} by unity, as we have just done, let us also explicitly evaluate this function for the case of constant $g(t)$. To simplify notation, let us take $g(t)=1/T$ in the interval $[0,T]$ rather than $[-T/2,T/2]$. Then $G(t)=t/T$ and Eq.~\eqref{eq:ubh437tyr6aaww7aa} becomes
\begin{align}\label{eq:ubh437tyh7r67aaxxx} \alpha^{(\ell)}_{mn\cdots n}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^\ell \bra{m} \op{O} \ket{n} \bra{n} \op{O} \ket{n}^{\ell-1} \frac{1}{(\ell-1)!} \frac{1}{T^{\ell-1}} \notag \\ &\times \int_{0}^{T} \text{d} t \, \text{e}^{\text{i} \omega_{mn} t} g(t) t^{\ell-1}. \end{align}
The integral is the Fourier transform of the function $g(t) t^{\ell-1}$, which is given by
\begin{equation} \int_{0}^{T} \text{d} t \, \text{e}^{\text{i} \omega_{mn} t} g(t) t^{\ell-1} = \text{i}^{\ell-1} \frac{\text{d}^{\ell-1} \widetilde{g}(\omega_{mn};T)}{\text{d} \omega_{mn}^{\ell-1}}, \end{equation}
where $\widetilde{g}(\omega_{mn};T)=\int_{0}^{T} \text{d} t \, \text{e}^{\text{i} \omega_{mn} t} g(t)$ is the Fourier transform of $g(t)$. By the shift property of the Fourier transform, $\widetilde{g}(\omega_{mn};T)$ is equal to the Fourier transform of the original $g(t)$ defined on the interval $[-T/2,T/2]$ save for an overall phase factor $\text{e}^{-\text{i} \omega_{mn}T/2}$ [this result holds for arbitrary functions $g(t)$, including the ones considered in Secs.~\ref{sec:turnonoff} and \ref{sec:smoothly-vary-coupl}]. Thus, the $T$ dependence of $\alpha^{(\ell)}_{mn\cdots n}(T)$ [Eq.~\eqref{eq:ubh437tyh7r67aaxxx}] is given by
\begin{equation}\label{eq:ubh437tyh7r67aax666} \gamma_\ell(\omega_{mn};T)=\frac{1}{T^{\ell-1}} \frac{\text{d}^{\ell-1} \widetilde{g}(\omega_{mn};T)}{\text{d} \omega_{mn}^{\ell-1}}, \end{equation}
where we can now take $\widetilde{g}(\omega_{mn};T)$ to denote the Fourier transform of $g(t)$ defined on the original interval $[-T/2,T/2]$. For constant $g(t)$, $\widetilde{g}(\omega_{mn};T)\equiv \widetilde{g}(\omega_{mn}T) = \mathrm{sinc}\left(\omega_{mn}T/2\right)$ [see Eq.~\eqref{eq:dklvdflk1}]. In the expression for $\frac{\text{d}^n}{\text{d} x^n} \mathrm{sinc}(ax)$, the term to leading order in $a$ is equal to $\pm a^n \mathrm{sinc}(ax)$ if $n$ is even and equal to $\pm a^n \frac{\cos(ax)}{ax}$ if $n$ is odd. Therefore, to leading order in $1/T$, Eq.~\eqref{eq:ubh437tyh7r67aax666} can be approximated by
\begin{equation}\label{eq:ubh437tyx666} \gamma_\ell(\omega_{mn};T)= \begin{cases} \pm \mathrm{sinc}\left(\omega_{mn}T/2\right), & \ell=3,5,7,\hdots \\ \pm\frac{\cos \left(\omega_{mn}T/2\right)}{\omega_{mn}T/2}, & \ell=2,4,6,\hdots\,. \end{cases} \end{equation}
When $T$ is significantly larger than the internal time scale $\omega_{mn}^{-1}$, i.e., when $\omega_{mn}T \gg 1$ (which is the case relevant to protective measurement), both $\mathrm{sinc}\left(\omega_{mn}T/2\right)$ and $\frac{\cos \left(\omega_{mn}T/2\right)}{\omega_{mn}T/2}$ exhibit similar behavior; in particular, their envelopes decay as $1/T$. We have thus confirmed our previous result that the term $\alpha^{(\ell)}_{mn\cdots n}(T)$, Eq.~\eqref{eq:ub22dv2hh437tyr67}, exhibits essentially the same $T$ dependence as the first-order amplitude $A^{(1)}_{m}(T)$.
Note, however, that the factor $1/(\ell-1)!$ appearing in Eqs.~\eqref{eq:ubh437tyr6aaww7aa} and \eqref{eq:ubh437tyh7r67aaxxx} exponentially damps such first-order-like terms with increasing order $\ell$ (while their number grows only linearly with $\ell$). This implies that their contributions rapidly become insignificant with increasing $\ell$. Also, since we have shown that their $T$ dependence is well approximated by the Fourier transform of $g(t)$, which determines the $T$ dependence of the first-order amplitude $A^{(1)}_{m}(T)$, it follows that, to leading order in $1/T$, $A^{(1)}_{m}(T)$ is indeed a good representation of the dependence of the overall transition probability on $T$. This result justifies our focus on $A^{(1)}_{m}(T)$ in Secs.~\ref{sec:time-indep-coupl}--\ref{sec:smoothly-vary-coupl}.
So far we have considered the two limiting cases of a maximum number of virtual transitions $\ket{k\not=m,n}$ and no virtual intermediate transitions at all. Applying our above reasoning to terms corresponding to the intermediate regime, in which there is a nonzero but nonmaximal number of virtual transitions, it is readily seen that such terms must be of order $1/T^{2\beta}$ or higher, where $\beta$ is the scaling parameter for the envelope decay of the first-order amplitude $A^{(1)}_{m}(T)$. Therefore, these terms can be neglected compared to the $O(1/T^\beta)$ terms arising from $A^{(1)}_{m}(T)$ and from the first-order-like contributions to the higher-order amplitudes $A^{(\ell)}_{m}(T)$ discussed above.
As a final remark, it may seem surprising that higher-order corrections to the transition amplitudes can give rise to contributions that are only of first order in the perturbation. The reason is that in protective measurement, due to the normalization of $g(t)$ [see Eq.~\eqref{eq:normal}], the strength $g(t)$ of the perturbation is linked to the duration $T$ of the perturbation (i.e., the total measurement time) and thus to the final time at which the amplitude is evaluated. For example, in the case of constant $g(t)$, the strength is precisely the inverse of $T$, $g(t)=1/T$. This interdependence between strength and duration effectively leads, for certain terms in higher-order amplitudes, to a reduction in the order of the strength parameter $1/T$.
\subsection{\label{sec:corr-constant-coupling}Pointer evolution}
We will investigate the influence of the $k_j\not=n$ contributions to Eq.~\eqref{eq:jldfjkl2} for the case of constant $g(t)$ [see Eq.~\eqref{eq:igv34763578}]. First, we look at the second-order term
\begin{align} A^{(2)}_{n}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^2 \sum_{k} \abs{\bra{n} \op{O} \ket{k}}^2 \frac{1}{T^2} \notag \\ &\quad\times \int_{-T/2}^{T/2} \text{d} t \, \text{e}^{\text{i} \omega_{nk} t} \int_{-T/2}^{t} \text{d} t' \,\text{e}^{\text{i} \omega_{kn} t'}. \end{align}
The $k\not= n$ contributions are
\begin{align} A^{(2)}_{n,k\not= n}(T) &= -\frac{\text{i}}{\hbar^2} \sum_{k \not= n} \frac{\abs{\bra{n} \op{O} \ket{k}}^2}{\omega_{nk} T} \notag \\ &\quad+ \frac{1}{\hbar^2} \sum_{k \not= n} \frac{\abs{\bra{n} \op{O} \ket{k}}^2}{(\omega_{nk} T)^2} \text{e}^{\text{i} \omega_{nk} T} \notag \\ &\quad- \frac{1}{\hbar^2} \sum_{k \not= n} \frac{\abs{\bra{n} \op{O} \ket{k}}^2}{(\omega_{nk} T)^2}. \end{align}
The first term is proportional to the second-order energy shift familiar from time-independent perturbation theory \cite{Sakurai:1994:om}
\begin{equation} \Delta E_n^{(2)} = \frac{1}{T^2}\sum_{k \not= n} \frac{\abs{\bra{n} \op{O} \ket{k}}^2}{\hbar\omega_{nk}}, \end{equation}
the second term represents the contribution from the mixing of the other unperturbed states $\ket{m}\not=\ket{n}$, and the third term ensures wave-function normalization to second order in the perturbation. Then, from Eq.~\eqref{eq:y89ggy892} and to leading order in $1/T$, the combination of the zeroth-order, first-order, and second-order contributions to the amplitude gives
\begin{align}\label{eq:imimdfkjlkdfdfjkfqlj} C^{(i)}_{n}(T) &\approx A^{(0)}_{n}(T)+a_iA^{(1)}_{n}(T)+a_i^2A^{(2)}_{n}(T)\notag\\ &= 1 -\frac{\text{i}}{\hbar} a_i\bra{n} \op{O} \ket{n} - \frac{1}{2\hbar^2} a_i^2 \bra{n} \op{O} \ket{n}^2 \notag \\ &\quad - \frac{\text{i}}{\hbar}a_i^2\Delta E_n^{(2)} T , \end{align}
where the first three terms on the right-hand side are the contributions to the pointer shift [see Eqs.~\eqref{eq:dsfdf1aa} and \eqref{eq:vcsuaadfv}]. The last term in Eq.~\eqref{eq:imimdfkjlkdfdfjkfqlj} can be thought of as the leading-order term in the expansion of the exponential $\exp\left(- \frac{\text{i}}{\hbar} a_i^2\Delta E_n^{(2)}T \right)$. This exponential arises in the context of time-independent perturbation theory if we use the second-order perturbative approximation of the exact energy eigenvalues $\widetilde{E}(n,a_i)$ of the full Hamiltonian $\op{H}$ [compare Eq.~\eqref{eq:fnsnlk11}],
\begin{align}\label{eq:378hgrr} \widetilde{E}(n,a_i) &\approx E_n + \epsilon_i + \frac{1}{T}a_i \bra{n}\op{O} \ket{n} + \frac{1}{T^2} a_i^2 \sum_{k\not= n} \frac{ \abs{\bra{n}\op{O}\ket{k}}^2}{\hbar\omega_{nk}}, \end{align}
and then employ this approximation to replace the exact time-evolution term $\text{e}^{-\text{i} \widetilde{E}(n,a_i) T/\hbar}$ for the corresponding exact eigenstate (i.e., the state shifted from $\ket{n}$ by the perturbation) by
\begin{align} \text{e}^{-\text{i} \widetilde{E}(n,a_i) T/\hbar} &\approx \text{e}^{ -\text{i} E_n T/\hbar}\text{e}^{ -\text{i} \epsilon_i T/\hbar} \text{e}^{ -\text{i} a_i \bra{n}\op{O} \ket{n}/\hbar} \notag \\ &\quad\times \text{e}^{- \text{i} a_i^2 \Delta E_n^{(2)}T/\hbar}. \end{align}
Reintroducing the operator $\op{P}$, the last two terms on the right-hand side are equivalent to $\text{e}^{ -\text{i} \op{P} \bra{n}\op{O} \ket{n}/\hbar}\text{e}^{- \text{i} \op{P}^2 \Delta E_n^{(2)}T/\hbar}$, where $\text{e}^{ -\text{i} \op{P} \bra{n}\op{O} \ket{n}/\hbar}$ is the familiar pointer-shift operator. Since the argument of the term $\text{e}^{- \text{i} \op{P}^2 \Delta E_n^{(2)}T/\hbar}$ is proportional to $\op{P}^2$, it represents a contribution to the kinetic energy of the pointer; it induces spreading of the pointer wave packet (in addition to the free spreading) without shifting its center.
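As a numerical cross-check of the $k\not=n$ terms above: carrying out the inner integral analytically gives $I(\omega,T) \equiv \int_{-T/2}^{T/2}\text{d}t\,\text{e}^{\text{i}\omega t}\int_{-T/2}^{t}\text{d}t'\,\text{e}^{-\text{i}\omega t'} = \text{i}T/\omega + (1-\text{e}^{\text{i}\omega T})/\omega^2$, which, multiplied by $-\abs{\bra{n}\op{O}\ket{k}}^2/\hbar^2T^2$, reproduces the three $k\not=n$ terms term by term. A sketch comparing this closed form with direct numerical integration (the values of $\omega=\omega_{nk}$ and $T$ are arbitrary):

```python
import cmath

def I_numeric(w, T, n=40000):
    """Ordered double integral of e^{iwt} e^{-iwt'} over t' < t, by midpoint sums."""
    dt = T / n
    inner = 0j   # running value of the inner integral up to the current t
    total = 0j
    for k in range(n):
        t = -T / 2 + (k + 0.5) * dt
        total += cmath.exp(1j * w * t) * inner * dt
        inner += cmath.exp(-1j * w * t) * dt
    return total

def I_closed(w, T):
    """Closed form obtained by doing the inner integral analytically."""
    return 1j * T / w + (1 - cmath.exp(1j * w * T)) / w**2

w, T = 3.0, 2.0  # arbitrary sample values
print(abs(I_numeric(w, T) - I_closed(w, T)))  # small; shrinks further as n grows
```

The residual difference is set by the $O(\text{d}t)$ lag of the running inner sum and vanishes as the grid is refined.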
A similar analysis may be applied to the higher-order amplitudes $A^{(\ell)}_{n}(T)$ given by Eq.~\eqref{eq:jldfjkl2} with $\ell \ge 3$. We have already discussed the term in $A^{(\ell)}_{n,\{k_j=n\}}(T)$ corresponding to setting all $k_j=n$, $1\le j \le \ell-1$, in the sum appearing in Eq.~\eqref{eq:jldfjkl2} and we have seen that this term represents a contribution to the pointer shift. Furthermore, in our analysis of higher-order corrections in Appendix~\ref{sec:high-order-contr}, we found that contributions of order $1/T$ arise from amplitudes representing a single transition (in that case, a direct transition $\ket{n}\rightarrow \ket{m}$ without intermediate virtual transitions). The analogous result holds here, but since the initial and final states are now the same, the $1/T$ terms correspond to a single virtual transition followed by a return to the initial state, i.e., the two-step transition $\ket{n}\rightarrow \ket{k} \rightarrow \ket{n}$ with $k\not =n$. Formally, these are the terms in the sum in Eq.~\eqref{eq:jldfjkl2} with a single $k_j\not=n$ and all other $k_{j'}=n$, with $j'\not=j$ and $1\le j' \le \ell-1$. They provide additional contributions to the evolution of the pointer wave packet (in the form of distortions, spreading, etc.) and are of order $1/T$ or higher. Thus, in the large-$T$ case relevant to protective measurements, they can be neglected. Additionally, as shown in Appendix~\ref{sec:high-order-contr}, they become exponentially suppressed with increasing order $\ell$.
\section{\label{sec:eval-point-shift}Evaluation of the pointer-shift integral}
We show here that the multiple integral appearing in Eq.~\eqref{eq:1aa} is equal to $1/\ell !$. To see the idea, let us focus on the second-order double integral $\int_{-T/2}^{T/2} \text{d} t' \, g(t') \int_{-T/2}^{t'} \text{d} t'' \, g(t'')$. This double integral is equal to the volume $V$ under the surface given by $S(t',t'')=g(t')g(t'')$ over the region of an isosceles right triangle defined by $t' \in [-T/2, T/2]$ and $t'' \in [-T/2,t']$. Since $g(t') g(t'') = g(t'') g(t')$, the surface is symmetric about the hypotenuse of the triangle. Therefore, the volume $V$ is one-half of the volume $V_\square$ under the surface $S(t',t'')$ over the square region defined by $t' \in [-T/2, T/2]$ and $t'' \in [-T/2,T/2]$. Since $g(t)$ is normalized over $[-T/2, T/2]$, i.e., $\int_{-T/2}^{T/2} \text{d} t \, g(t) = 1$, we have $V_\square=1$ and therefore $\int_{-T/2}^{T/2} \text{d} t' \, g(t') \int_{-T/2}^{t'} \text{d} t'' \, g(t'') = \frac{1}{2}$.
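A direct numerical check of this volume argument, using a raised-cosine pulse as an (arbitrary) example of a normalized coupling function:

```python
import math

T = 2.0
def g(t):
    """A normalized coupling function: raised cosine with unit area on [-T/2, T/2]."""
    return (1 + math.cos(2 * math.pi * t / T)) / T

n = 20000
dt = T / n
inner = 0.0   # running value of the inner integral of g up to the current t'
V = 0.0       # ordered double integral
for k in range(n):
    t = -T / 2 + (k + 0.5) * dt
    V += g(t) * inner * dt
    inner += g(t) * dt
print(V, inner)  # V ≈ 0.5, and inner ≈ 1 confirms the normalization of g
```

Swapping in any other normalized $g(t)$ (rectangular, triangular, ...) leaves the result at $1/2$, as the symmetry argument requires.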
We can formalize this argument and extend it to higher orders as follows (the procedure described here is similar to that used when expressing the Dyson series for the evolution operator as a time-ordered exponential). By simply exchanging the variables $t'$ and $t''$, we may write
\begin{multline}\label{eq:1asdsda} \int_{-T/2}^{T/2} \text{d} t' \, g(t') \int_{-T/2}^{t'} \text{d} t'' \, g(t'') \\ = \frac{1}{2} \biggl[ \int_{-T/2}^{T/2} \text{d} t' \int_{-T/2}^{t'} \text{d} t'' \, g(t') g(t'') \\+ \int_{-T/2}^{T/2} \text{d} t'' \int_{-T/2}^{t''} \text{d} t' \, g(t'') g(t') \biggr]. \end{multline}
In the second double integral on the right-hand side, the same region can be covered by letting $t'$ range from $-T/2$ to $T/2$ and letting $t''$ range from $t'$ to $T/2$,
\begin{multline}\label{eq:1asdsdsdda} \int_{-T/2}^{T/2} \text{d} t' \, g(t') \int_{-T/2}^{t'} \text{d} t'' \, g(t'')\\ = \frac{1}{2} \biggl[ \int_{-T/2}^{T/2} \text{d} t' \int_{-T/2}^{t'} \text{d} t'' \, g(t') g(t'') \\+ \int_{-T/2}^{T/2} \text{d} t' \int_{t'}^{T/2} \text{d} t'' \, g(t'') g(t') \biggr]. \end{multline}
Thus, the double integrals can be combined into
\begin{multline}\label{eq:1asdsdsdda33} \int_{-T/2}^{T/2} \text{d} t' \, g(t') \int_{-T/2}^{t'} \text{d} t'' \, g(t'') \\= \frac{1}{2} \int_{-T/2}^{T/2} \text{d} t' \int_{-T/2}^{T/2} \text{d} t'' \, g(t') g(t'') = \frac{1}{2}, \end{multline}
where the last step follows from the fact that $g(t)$ is normalized [see Eq.~\eqref{eq:normal}].
For arbitrary orders $\ell$, the multiple integral $\int_{-T/2}^{T/2} \text{d} t' \, g(t') \int_{-T/2}^{t'} \text{d} t'' \, g(t'') \cdots \int_{-T/2}^{t^{(\ell-1)}} \text{d} t^{(\ell)} \, g(t^{(\ell)})$ is equal to the volume under the $\ell$-dimensional hypersurface in $\mathbb{R}^{\ell+1}$ given by $S(t',t'', \hdots, t^{(\ell)})=g(t')g(t'')\cdots g(t^{(\ell)})$ over the time-ordered region (an $\ell$-dimensional simplex), and we can apply the same strategy as in the $\ell=2$ case just described. Geometrically, by the permutation symmetry of $S$, the volume over this simplex is $1/\ell !$ of the volume over the $\ell$-dimensional cube, which is $\left[\int_{-T/2}^{T/2} \text{d} t \, g(t)\right]^\ell =1$. Hence
\begin{widetext} \begin{equation}\label{eq:df1gfjknkjhaa} \int_{-T/2}^{T/2} \text{d} t' \, g(t') \int_{-T/2}^{t'} \text{d} t'' \, g(t'') \cdots \int_{-T/2}^{t^{(\ell-1)}} \text{d} t^{(\ell)} \, g(t^{(\ell)}) = \frac{1}{\ell !}. \end{equation} \end{widetext}
Instead of using this geometric argument, in analogy with Eq.~\eqref{eq:1asdsda} we could alternatively expand the multiple integral into $\ell !$ terms corresponding to the $\ell !$ possible permutations of the variables $(t',t'', \hdots, t^{(\ell)})$. Redefining the integral limits in a manner analogous to Eq.~\eqref{eq:1asdsdsdda} and using the fact that $S(t',t'', \hdots, t^{(\ell)})$ is invariant under any permutations of the variables $(t',t'', \hdots, t^{(\ell)})$, we again obtain Eq.~\eqref{eq:df1gfjknkjhaa}.
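The permutation argument also suggests a simple Monte Carlo check: since $g(t)$ is normalized and non-negative, it can be read as a probability density, and the ordered $\ell$-fold integral is then exactly the probability that $\ell$ independent draws $t', t'', \hdots, t^{(\ell)}$ from this density come out in decreasing order, which is $1/\ell !$ by symmetry. A sketch (the raised-cosine density and the parameter values are illustrative choices):

```python
import math
import random

random.seed(1)
T, ell, trials = 2.0, 3, 200000

def sample_g():
    """Rejection-sample from the density g(t) = (1 + cos(2 pi t / T)) / T on [-T/2, T/2]."""
    while True:
        t = random.uniform(-T / 2, T / 2)
        if random.uniform(0, 2 / T) < (1 + math.cos(2 * math.pi * t / T)) / T:
            return t

# Fraction of draws (t', t'', ..., t^(ell)) that happen to be strictly decreasing.
hits = 0
for _ in range(trials):
    ts = [sample_g() for _ in range(ell)]
    if all(ts[i] > ts[i + 1] for i in range(ell - 1)):
        hits += 1

estimate = hits / trials
print(estimate, 1 / math.factorial(ell))  # the two numbers agree closely
```

For $\ell=3$ the estimate converges to $1/3! \approx 0.1667$ at the usual $1/\sqrt{N}$ Monte Carlo rate, independently of the chosen density.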
\end{document}
\begin{document}
\title{Nondestructive photon counting in waveguide QED} \author{Daniel Malz} \author{J.\ Ignacio Cirac} \affiliation{Max Planck Institute for Quantum Optics, Hans-Kopfermann-Stra{\ss}e 1, D-85748 Garching, Germany} \affiliation{Munich Center for Quantum Science and Technology, Schellingstra{\ss}e 4, D-80799 München, Germany} \date{\today} \pacs{}
\begin{abstract}
Number-resolving single-photon detectors represent a key technology for a host of quantum optics protocols, but despite significant efforts,
state-of-the-art devices are limited to few photons.
In contrast, state-dependent atom counting in arrays can be done with extremely high fidelity up to hundreds of atoms.
We show that in waveguide QED, the problem of photon counting can be reduced to atom counting, by entangling the photonic state with an atomic array in the collective number basis.
This is possible as the incoming photons couple to collective atomic states
and can be achieved by engineering a second decay channel of an excited atom to a metastable state.
Our scheme is robust to disorder and finite Purcell factors, and its fidelity increases with atom number.
Analyzing the state of the re-emitted photons, we further show that if the initial atomic state is a symmetric Dicke state,
dissipation engineering can be used to implement a nondestructive photon-number measurement,
in which the incident state is scattered into the waveguide unchanged.
Our results generalize to related platforms, including superconducting qubits. \end{abstract} \maketitle
\section{Introduction} Single-photon detectors have a long history~\cite{Morton1949}, with a plethora of technologies available~\cite{Hadfield2009}. Applications in quantum optics, such as quantum state preparation, quantum metrology~\cite{Giovannetti2011}, entanglement distribution~\cite{Gisin2002}, and quantum computing~\cite{Knill2001,Kok2007} have placed a renewed focus on single-photon detectors capable of resolving the number of incoming photons. Perhaps most promising are superconducting transition-edge sensors, which have been demonstrated to achieve a (per-photon) detection efficiency (percentage of detected photons) of $\eta\approx95\%$~\cite{Lita2008} and to distinguish up to seven photons, with a negligible dark count rate (clicks in the absence of incoming photons). They are based on the principle that near the critical temperature, the resistance of a superconductor is very sensitive to temperature changes, down to the level of single-photon energies. While very impressive, the device is limited to optical photons, destroys the photonic state, and is difficult to scale.
One strategy that in principle also allows for nondestructive measurements is based on quantum memory. An itinerant photon may be caught by an atom in a cavity~\cite{Specht2011} or by a cavity with tunable coupling~\cite{Yin2013}, which, however, requires time-dependent tuning of the atom--cavity or cavity--waveguide coupling in accordance with the photon wavepacket shape. More generally, electromagnetically induced transparency (EIT) can be used to slow down a light pulse such that it fits within an atomic cloud~\cite{Fleischhauer2000,Fleischhauer2005}. This allows the storage of arbitrary photon pulses within a length given by system parameters. The number of polaritons can in principle be read out by employing a cycling transition~\cite{Imamoglu2002,James2002}. However, this scheme also requires knowledge about the arrival of the photon wavepacket. Furthermore, the combination of quantum memory and quantum nondemolition detection of one or a few excitations in a 3D cloud of atoms makes this approach very challenging. EIT in a waveguide QED setting is one strategy to alleviate the imaging problem (see, e.g., Ref.~\cite{Caneva2015}).
Here, we instead design a detector that does not require knowledge about the shape or arrival time of incoming wavepackets and therefore does not require implementation of a time-dependent Hamiltonian. This can be achieved either through continuous measurement, for example through dispersive coupling~\cite{Chen2011a,Govia2012,Poudel2012,Kyriienko2016,Oelsner2017,Schondorf2018}, or if the detector permanently changes its state and is read out later, as in impedance-matched $\Lambda$-systems~\cite{Pinotsi2008,Romero2009,Romero2009a,Peropadre2011,Koshino2013,Inomata2016,Bechler2018}. The former class of detectors suffers from measurement backaction on the photonic state~\cite{Helmer2009,Royer2018}, whereas the latter does not produce `clicks' to indicate a detection event and therefore does not localize photons in the waveguide, which avoids measurement backaction. The advantage of this strategy is that no knowledge about the photon wavepacket is needed, and that photons do not have to be destroyed to be detected, which opens the way toward nondestructive photon-number measurements.
In recent years, quantum emitters coupled to waveguides have emerged as a powerful experimental paradigm~\cite{Chang2018,Turschmann2019}. Examples of such systems include cold atoms levitated near optical fibres~\cite{Vetsch2010,Douglas2015} or photonic crystal waveguides~\cite{Goban2015}, but also solid-state realizations such as quantum dots~\cite{Akimov2007,Lodahl2015}, superconducting qubits~\cite{VanLoo2013,Sundaresan2019}, or nitrogen-vacancy centres~\cite{Huck2011,Sipahigil2016,Evans2018}. Due to the strong confinement of light, even a single emitter can have a profound effect on light propagation, and many emitters show remarkable collective effects~\cite{Solano2017}. At the same time, impressive experimental breakthroughs have enabled the control~\cite{Schlosser2001} and, in particular, readout of neutral atoms~\footnote{In an array of 160 atoms, a recent experiment achieved $99.94\%$ readout fidelity, albeit at the expense of a constant atom loss rate that dominates this theoretical fidelity~\cite{Wu2019}. Atoms trapped in tweezer arrays can circumvent this problem, offering similar fidelity without losing the atoms~\cite{Covey2019}.}, superconducting qubits~\cite{Jeffrey2014,Walter2017}, and trapped ions~\cite{Harty2014}. Thus, quantum emitters coupled to waveguides appear to be an ideal platform to produce~\cite{Goban2015,Paulisch2018,Paulisch2019,Perarnau-Llobet2019} and control~\cite{Zheng2013,Zhang2019} quantum light. Here we show that they also allow for detection of quantum light.
\begin{figure}\label{fig:sketch}
\end{figure}
The key idea here is to engineer atoms such that for each incident photon in the waveguide, exactly one atom changes its internal state, such that a subsequent measurement of the atomic state yields the number of photons in the scattered wavepacket. We do this by identifying conditions such that all photons are absorbed in one atomic transition ($g\to e$, blue in \cref{fig:sketch}) and dissipated in a different one ($e\to s$, green). This way, the atomic array keeps a memory of the number of scattered photons~\cite{Pinotsi2008,Romero2009,Romero2009a,Peropadre2011,Koshino2013,Inomata2016}.
In order to identify such conditions, we study a realistic model for photon absorption by atomic arrays in the presence of free-space decay (red) and disorder. In view of experimental realizations, we provide recipes for engineering an additional decay channel, and pay particular attention to spatial disorder, as this fundamentally modifies the eigenmodes of the system. Surprisingly, we find that it is possible to engineer dissipation to achieve full absorption independent of disorder. As a consequence of collective enhancement in the atom-waveguide coupling, detection efficiency and bandwidth grow with atom number, such that even with moderate Purcell factor (ratio of waveguide to free-space decay rate), detection efficiencies $\eta>99\%$ can be reached for intermediate numbers of atoms ($N>20$). We also consider emitters coupled to chiral waveguides, which have distinct advantages due to the natural suppression of backscattering. Our detection scheme is scalable, as the number of atoms required to detect a certain number of photons with a fixed error grows only polynomially.
In a second part, we study a natural modification, in which the dissipated photons are emitted back into the waveguide (\emph{cf.} \cref{fig:sketch}d). If the resulting output wavepacket coincides with the input wavepacket, this realizes a quantum nondemolition (QND) measurement~\footnote{We use the terminology adopted by most authors~\cite{Thorne1978,Unruh1978,Braginsky1980,Caves1980,Nogues1999,Raimond2001,Guerlin2007,Wiseman2009,Johnson2010}, but note that there is some controversy whether this is an appropriate name~\cite{Monroe2011}, and sometimes \emph{nondestructive} is used instead~\cite{Reiserer2013}, but this term lacks the precise definition given by, \emph{e.g.} Wiseman and Milburn~\cite{Wiseman2009}. }. In order to achieve high fidelities, this requires collective enhancement of both decay channels, which in turn requires the atomic array to be prepared in a symmetric Dicke state between ground states. We show that such states can be prepared through coherent interaction of two arrays and subsequent conditioning. Ultimately, this can reach a scaling where for a given error probability, the number of atoms that can be prepared in this way scales exponentially with the Purcell factor, $N\sim\exp(P)$, which we verify with numerical simulations.
The rest of this paper is structured as follows. In \cref{sec:model} we present the scattering theory for atomic arrays on waveguides. Building on the principle of coherent perfect absorption, we show explicitly that perfect absorption can be obtained by tuning only dissipation, even in the presence of arbitrary disorder. We apply this principle to establish the feasibility of number-resolving detection by absorption in a mirror geometry (\cref{sec:mirror}) and in an infinite waveguide (\cref{sec:infinite}). For completeness, we also discuss how dissipation can be used in a chiral waveguide to obtain a number-resolving detector (\cref{sec:chiral}). We extend these concepts to a QND measurement in \cref{sec:QND}. In \cref{sec:experiment}, we discuss the experimental implementations of our proposal, including engineered dissipation, atomic species, readout, detector reset, the effect of non-idealities, and bandwidth. We conclude in \cref{sec:conclusion}.
\section{Absorption in atomic arrays}\label{sec:model} Below, we first formulate a generic input-output theory for photons scattering off atomic arrays (\cref{sec:scattering}), following previous work~\cite{LeKien2008,LeKien2014,Caneva2015}. This allows us to generically identify the conditions under which all photons are absorbed in one transition and emitted in the other (\cref{sec:absorption}).
\subsection{Scattering theory}\label{sec:scattering} A generic Hamiltonian describing the system-waveguide interaction is given through~\cite{LeKien2008,LeKien2014,Caneva2015} \begin{equation}
\begin{aligned}
H=&\int_0^\infty\frac{dk}{2\pi}\sum_{\nu}\sum_{i=1}^2(\omega_{k,i}-\omega_0)a_{\nu,k,i}\dagg a_{\nu,k,i}\\
{}-{}&\sum_{n,\nu}\left(g_{\nu,k,1}^{(n)} \sigma_{eg}^{(n)}a_{\nu,k,1}+g_{\nu,k,2}^{(n)} \sigma_{es}^{(n)}a_{\nu,k,2} +\text{H.c.}\right)
\end{aligned}
\label{eq:generic_hamiltonian} \end{equation} In \cref{eq:generic_hamiltonian}, the label $\nu$ runs over different sets of waveguide modes, the index $i$ denotes whether the waveguide mode couples to the $g\leftrightarrow e$ transition ($i=1$) or the $s\leftrightarrow e$ transition ($i=2$), the index $n$ runs over the $N$ atoms, the individual waveguide modes are labelled by their wavevector $k$, and their coupling to the atoms is given by the rate $g_{\nu,k,i}^{(n)}$.
This Hamiltonian describes a situation where the two transitions are coupled to different fields, which can either be waveguide modes or free-space modes. In an infinite waveguide (\cref{fig:sketch}b), $g_{\nu,k,1}^{(n)}=\sqrt{c\Gamma_{g}/2}\,\exp(ik\nu x_n)$ and $\nu\in\pm$, corresponding to left- and right-moving modes, whereas for the semi-infinite waveguide (\cref{fig:sketch}a), there is only one set of waveguide modes with coupling $g_{k,1}^{(n)}=\sqrt{c\Gamma_{g}}\sin(kx_n)$. In these expressions, $\Gamma_{g}$ is the decay rate to $\ket g$ of an individual atom into the waveguide, and $\omega_k=ck$. Integrating out the bath modes yields quantum Langevin equations for the spin operators (see \cref{app:langevin_equations}), \begin{equation}
\begin{aligned}
\dot \sigma_{ge}^{(n)}=(\sigma_{gg}^{(n)}-\sigma_{ee}^{(n)})&[\matr L_{n\nu,1}\vec a_{\mathrm{in},\nu,1}-i\matr H_{\mathrm{eff},1,nm}\sigma_{ge}^{(m)}]\\
{}+\sigma_{gs}^{(n)}&[\matr L_{n\nu,2}\vec a_{\mathrm{in},\nu,2}-i\matr H_{\mathrm{eff},2,nm}\sigma_{se}^{(m)}],
\end{aligned}
\label{eq:langevin} \end{equation} where the sum over input fields $\nu$ and atoms $m$ is implied. The non-Hermitian Hamiltonians $\matr H_{\mathrm{eff},1}$ ($\matr H_{\mathrm{eff},2}$) describe both the coherent interaction of the quantum emitters and the decay induced by the coupling of the $g\leftrightarrow e$ ($s\leftrightarrow e$) transition to the waveguide, and $\vec a_{\mathrm{in},\nu,i}$ are waveguide fields coupling to these transitions. This coupling is given by the generically non-square matrix $\matr L_i$. If coupled via an infinite waveguide, the $i^{\mathrm{th}}$ transition couples to two input fields, $\vec a_{\mathrm{in},i}=(a_{\mathrm{in,}+,i},a_{\mathrm{in},-,i})$, corresponding to right- and left-moving photons, such that $\matr L_i$ is an $N\times2$ matrix. This is in contrast to a semi-infinite waveguide, which only has one input field, such that $\matr L_i$ is a vector. Independent free-space decay at rate $\Gamma_{\mathrm{free},i}$ can also be captured by adding a term $\matr H_{\mathrm{free},i}=-i\Gamma_{\mathrm{free},i}\mathbbm{1}/2$, a coupling matrix $\matr L_3=\sqrt{\Gamma_{\mathrm{free}}}\mathbbm{1}$, and introducing $N$ independent noise operators $\vec a_{\mathrm{in, free},i}$. Note that the assumption of independent decay into free-space does not necessarily hold and depends on how closely spaced the quantum emitters along the waveguide are. Such a situation may lead to a reduction in free-space decay~\cite{Asenjo-Garcia2017}, which would benefit the detector proposed here. Multiple baths coupling to the same transition appear as several terms taking either the form of the first or the second line on the right-hand side.
The Langevin equation for $\sigma_{se}^{(n)}$ can be obtained by exchanging both $1\leftrightarrow2$ and $g\leftrightarrow s$ everywhere in \cref{eq:langevin}. The explicit form of $\matr H_{\mathrm{eff}}$ for the mirror geometry and the infinite waveguide are given in \cref{eq:mirror_Hamiltonian,eq:infinite_Heff} below.
If there are only a few excitations compared to the number of atoms, one can linearize the Langevin equation using a Holstein-Primakoff transformation that sends $\sigma_{ge}\to b$~\cite{gardiner2004quantum,Chang2012,Caneva2015,Chang2018}. In this approximation, $\sigma_{gs}^{(n)}\sigma_{se}^{(m)}\to\delta_{mn}\sigma_{ge}^{(n)}$, such that only the diagonal terms in the second line of \cref{eq:langevin} survive, i.e.\ $\matr H_{\mathrm{eff},2}\to-i\Gamma_s\mathbbm{1}/2$ (green in \cref{fig:sketch}), independent of whether the decay $e\to s$ corresponds to guided or non-guided modes. Combining this with free-space decay (shown in red in \cref{fig:sketch}), we obtain uniform incoherent decay at rate $\Gamma'=\Gamma_{\mathrm{free},1}+\Gamma_s$. We thus arrive at \begin{equation}
\dot{\vec b} = (-i\matr H_{\mathrm{eff},1}-\Gamma'/2)\vec b+\matr L_{\nu,1}\vec a_{\mathrm{in},\nu,1}.
\label{eq:generic_langevin_equations} \end{equation} Here, we have neglected all input fields except the ones pertaining to the waveguide, which is valid if they are in vacuum. In order to describe destructive photon measurements, it is then sufficient to show that all incoming photons are absorbed and re-emitted into the bath modes coupling to the $s\leftrightarrow e$ transition. Strictly speaking, decay to $\ket s$ eliminates the atom from the dynamics, but this effect is neglected in the linearization.
\subsection{Complete absorption}\label{sec:absorption} We now examine in general how tuning $\Gamma'$ in \cref{eq:generic_langevin_equations} can lead to complete absorption of photons by an atomic array. For the rest of this section, we drop the indices from $\matr H_{\mathrm{eff}}$ and $\matr L$ for the sake of generality and simplicity. In order to find the linear scattering properties of the array, we use the input-output equations that relate the output field operators to the input fields $\vec a_{\mathrm{out}}(t) = \vec a_{\mathrm{in}}(t)-\matr L\dagg \vec b(t)$~\cite{gardiner2004quantum}. Solving \cref{eq:generic_langevin_equations}, we obtain the scattering matrix \begin{equation}
\begin{aligned}
\vec a_{\mathrm{out}}(\omega)
&=\left\{ \matr 1-\matr L\dagg\left[(\Gamma'/2-i\omega)\mathbbm1+i\matr H_{\mathrm{eff}}\right]^{-1}\matr L \right\}\vec a_{\mathrm{in}}(\omega)\\
&\equiv \matr S(\omega)\vec a_{\mathrm{in}}(\omega).
\end{aligned}
\label{eq:generic_scattering_matrix} \end{equation}
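As a concrete numerical illustration of \cref{eq:generic_scattering_matrix}, the sketch below evaluates $\matr S(\omega)$ for the mirror geometry defined later in the text (all parameter values are hypothetical, with rates in units of $\Gamma_g$). With $\Gamma'=0$ the single-port scattering amplitude is unimodular (no loss), while tuning $\Gamma'=N\Gamma_g$ in the atomic-mirror configuration yields $S(0)=0$:

```python
import numpy as np

def scattering_matrix(omega, H_eff, L, Gamma_p):
    """S(omega) = 1 - L^dag [(Gamma'/2 - i*omega) 1 + i H_eff]^(-1) L."""
    N = H_eff.shape[0]
    M = np.linalg.inv((Gamma_p / 2 - 1j * omega) * np.eye(N) + 1j * H_eff)
    Lm = np.atleast_2d(L).reshape(N, -1)        # allow a vector-valued L
    return np.eye(Lm.shape[1]) - Lm.conj().T @ M @ Lm

# Mirror geometry, atomic-mirror configuration x_n = (1/4 + n) lambda
# (hypothetical parameters):
Gg, lam, N = 1.0, 1.0, 10
k0 = 2 * np.pi / lam
x = (0.25 + np.arange(N)) * lam
H = -1j * Gg / 4 * (np.exp(1j * k0 * np.abs(x[:, None] - x[None, :]))
                    - np.exp(1j * k0 * (x[:, None] + x[None, :])))
L = np.sqrt(Gg) * np.sin(k0 * x)

S_lossless = scattering_matrix(0.3, H, L, Gamma_p=0.0)    # no internal loss
S_matched = scattering_matrix(0.0, H, L, Gamma_p=N * Gg)  # impedance matched
print(abs(S_lossless[0, 0]), abs(S_matched[0, 0]))
```

The same function applies to any $\matr H_{\mathrm{eff}}$ and $\matr L$, including the infinite-waveguide case where $\matr S$ is a $2\times2$ matrix.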
A detector that counts the number of photons in a specific input port (say, $\vec a_{\mathrm{in},+}$) needs to absorb all of them and dissipate them via the transition to $\ket s$. This is captured by our key figure of merit, the \emph{detection efficiency} $\eta$, which is the product of the probability that a photon is not reflected
$p_{\mathrm{abs}}=1-\sum_{\nu\neq1}|\matr S_{\nu1}(\omega)|^2$, and the probability that it is dissipated via the engineered channel (rate $\Gamma_s$) rather than into free space, $\eta=p_{\mathrm{abs}}\Gamma_{s}/(\Gamma_{s}+\Gamma_{\mathrm{free}})$. Thus, a high fidelity requires $\Gamma_{s}\gg\Gamma_{\mathrm{free}}$ and $p_{\mathrm{abs}}\approx1$. In waveguide QED this regime can be reached through the collective enhancement of the emitter--waveguide coupling as compared to the free-space decay.
Unity absorption is attained if one of the eigenvalues of the scattering matrix $\matr S(\omega)$ is zero. This corresponds to a pole of the inverse scattering matrix $\matr S^{-1}=[\matr 1+\matr L\dagg(-i\omega+i\matr H_{\mathrm{eff}}\dagg+\Gamma'/2)^{-1}\matr L]$. A pole of $\matr S^{-1}$ arises whenever $\omega$ coincides with an eigenvalue of $\matr H_{\mathrm{eff}}\dagg-i\Gamma'/2$, which implies that the scattering matrix $\matr S$ has a zero if $\omega-i\Gamma'/2$ coincides with an eigenvalue of $\matr H_{\mathrm{eff}}$. Thus, by tuning $\omega$ and $\Gamma'$, this can always be achieved for any $\matr H_{\mathrm{eff}}$. Absorption based on this principle has been observed in a variety of systems~\cite{Yan1989,Kishino1991,Cai2000}, and been termed coherent perfect absorption~\cite{Chong2010,Baranov2017}. Note that to reach this conclusion we did not have to assume anything about the form of $\matr H_{\mathrm{eff}}$ (apart from linearity), which is the reason it works for arbitrary disordered systems. If these conditions are fulfilled, there exists an eigenvector $\vec e_0$, such that $\matr S(\omega_0)\vec e_0=0$. Generically, $\vec e_0$ describes a linear combination of various input fields of the system, which implies that coherent perfect absorption guarantees unity absorption only when there is a single input field, such as in the mirror geometry. Nevertheless, in the infinite waveguide efficient detection is still attained for large atom numbers.
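This tuning argument can be checked numerically. The sketch below (hypothetical parameters) draws a completely random mirror-geometry array, tunes $\omega$ and $\Gamma'$ to the most dissipative eigenvalue $\mu$ of $\matr H_{\mathrm{eff}}$, and confirms that the single-port scattering amplitude vanishes despite the disorder:

```python
import numpy as np

rng = np.random.default_rng(0)
Gg, lam, N = 1.0, 1.0, 8
k0 = 2 * np.pi / lam
x = np.sort(rng.uniform(0.0, 5.0, N)) * lam     # fully random positions
H = -1j * Gg / 4 * (np.exp(1j * k0 * np.abs(x[:, None] - x[None, :]))
                    - np.exp(1j * k0 * (x[:, None] + x[None, :])))
L = np.sqrt(Gg) * np.sin(k0 * x)

# Pick the most dissipative eigenmode mu and tune omega = Re(mu),
# Gamma' = -2 Im(mu), so that omega - i Gamma'/2 hits an eigenvalue of H_eff.
mu = min(np.linalg.eigvals(H), key=lambda m: m.imag)
omega, Gp = mu.real, -2 * mu.imag
M = np.linalg.inv((Gp / 2 - 1j * omega) * np.eye(N) + 1j * H)
S = 1 - L.conj() @ M @ L
print(abs(S))   # coherent perfect absorption despite disorder
```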
\begin{figure*}\label{fig:mirror}
\end{figure*}
One can verify explicitly that $\matr S^{-1}$ is the inverse scattering matrix (here, $\matr \Gamma'$ is a matrix, for generality)
\begin{equation}
\matr S^{-1}\matr S=
\matr1-\matr L\dagg\matr M(\omega)\left[ \matr L\matr L\dagg-i(\matr H_{\mathrm{eff}}-\matr H_{\mathrm{eff}}\dagg) \right]\matr M\dagg(-\omega)\matr L=\matr 1,
\label{eq:FDT} \end{equation} where \begin{equation}
\matr M(\omega)\equiv(-i\omega+i\matr H_{\mathrm{eff}}+\matr\Gamma'/2)^{-1}.
\label{eq:M_matrix} \end{equation} The square brackets in \cref{eq:FDT} vanish, which can be checked explicitly for the two examples below. It can also be shown to hold generically: since the term in square brackets does not depend on $\matr\Gamma'$, it suffices to consider $\matr\Gamma'=0$, for which the scattering matrix is unitary, $\matr S^{-1}=\matr S\dagg$, and unitarity holds only if the term in square brackets vanishes.
Physically, this can be interpreted as the fluctuation-dissipation theorem, as the anti-Hermitian part of $\matr H_{\mathrm{eff}}$ specifies the damping, whereas $\matr L$ captures how strongly the modes are coupled to the input noise operator.
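For the mirror geometry introduced below, the vanishing of the square bracket can be verified directly: the identity $\cos k_0(x_m-x_n)-\cos k_0(x_m+x_n)=2\sin(k_0x_m)\sin(k_0x_n)$ gives $i(\matr H_{\mathrm{eff}}-\matr H_{\mathrm{eff}}\dagg)_{mn}=\Gamma_g\sin(k_0x_m)\sin(k_0x_n)=L_mL_n$ for arbitrary atomic positions. A short numerical check (hypothetical parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
Gg, lam, N = 1.0, 1.0, 7
k0 = 2 * np.pi / lam
x = np.sort(rng.uniform(0.0, 4.0, N)) * lam     # arbitrary positions
H = -1j * Gg / 4 * (np.exp(1j * k0 * np.abs(x[:, None] - x[None, :]))
                    - np.exp(1j * k0 * (x[:, None] + x[None, :])))
L = np.sqrt(Gg) * np.sin(k0 * x)

# The fluctuation-dissipation bracket L L^dag - i (H - H^dag):
bracket = np.outer(L, L.conj()) - 1j * (H - H.conj().T)
print(np.max(np.abs(bracket)))  # vanishes identically
```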
\section{Destructive photon counting}\label{sec:destructive} \subsection{Mirror geometry}\label{sec:mirror} We now turn to the specific model that best illustrates these ideas, namely an array of atoms coupled to a waveguide terminated by a mirror as sketched in \cref{fig:sketch}a. In this geometry, the non-Hermitian Hamiltonian induced by integrating out the waveguide photons reads \begin{equation}
\matr H_{\mathrm{eff},1,mn}=-i\frac{\Gamma_{g}}{4}
\left[e^{ik_0|x_m-x_n|}-e^{ik_0(x_m+x_n)}\right],
\label{eq:mirror_Hamiltonian} \end{equation} where $\Gamma_{g}$ is the single-atom decay rate for the transition $e\to g$ into the wave\-guide, $x_n$ the position of the $n$th atom,
and $k_0$ is the wavevector of the emitted light (wavelength $\lambda=2\pi/k_0$). The coupling of the atoms via the waveguide contains both a term due to photons travelling directly in between them, accumulating a phase $k_0|x_m-x_n|$, and one mediated by photons being reflected from the mirror, which incurs a minus sign and a phase $k_0(x_m+x_n)$. Since there is only one input and output field (\emph{cf.} \cref{fig:sketch}a), the matrix $\matr L$ is now a vector $L_n=\sqrt{\Gamma_{g}}\sin(k_0x_n)$.
It is instructive to see an example of how perfect absorption manifests in this setup. Placing the atoms in the atomic mirror configuration at positions $x_n=(1/4+n)\lambda$ (due to the infinite-range interactions, the lattice need not have unity filling), the photonic field only couples to the symmetric collective atomic excitation $B=\sum_n b_n/\sqrt{N}$, which also is an eigenmode of the atomic array. All other modes are dark and do not participate in the dynamics. In terms of this collective mode, the governing equations reduce to the input-output equations for a one-sided cavity~\cite{Collett1984} with internal dissipation \begin{subequations}
\begin{align}
&\dot B(t)=
-\frac{\Gamma_{\mathrm{tot}}}{2}B(t)+\sqrt{N\Gamma_{g}}a_{\mathrm{in}}(t),
\label{eq:AMC_langevin}\\
&a_{\mathrm{out}}(t)=a_{\mathrm{in}}(t)-\sqrt{N\Gamma_{g}}B(t),
\label{eq:AMC_io_equation}
\end{align}
\label{eq:AMC} \end{subequations} where we have introduced the total decay rate $\Gamma_{\mathrm{tot}}=N\Gamma_{g}+\Gamma'$. As in our discussion above, $\Gamma'=\Gamma_{\mathrm{free}}+\Gamma_{s}$ comprises both free-space decay and the decay from $e\to s$, which is assumed to be tunable. Solving \cref{eq:AMC_langevin,eq:AMC_io_equation} in frequency space, we calculate the number of photons in the output field \begin{equation}
\langle a_{\mathrm{out}}\dagg(\omega)a_{\mathrm{out}}(\omega)\rangle
=\left|1-\frac{N\Gamma_{g}}{\Gamma_{\mathrm{tot}}/2-i\omega}\right|^2\langle a_{\mathrm{in}}\dagg(\omega)a_{\mathrm{in}}(\omega)\rangle .
\label{eq:output_photons} \end{equation} If decay is tuned such that $\Gamma_{\mathrm{tot}}=N\Gamma_g+\Gamma'=2N\Gamma_{g}$, there is perfect absorption on resonance ($p_{\mathrm{abs}}=1$), with a bandwidth of $2N\Gamma_{g}$, which in this single-mode picture is equivalent to impedance matching or critical coupling. In order to match the collective decay, the engineered decay must scale as $\Gamma_s\propto N$, such that as the atom number is increased, the detection efficiency $\eta=p_{\mathrm{abs}}\Gamma_{s}/(\Gamma_{s}+\Gamma_{\mathrm{free}})$ can become arbitrarily close to 1.
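The single-mode expressions above are easy to evaluate explicitly. The sketch below (hypothetical parameters) checks that the reflection from \cref{eq:output_photons} vanishes on resonance at the impedance-matching point and reaches $1/2$ at detuning $\omega=N\Gamma_g$, consistent with the stated bandwidth of $2N\Gamma_g$:

```python
import numpy as np

def reflection(omega, N, Gg, Gp):
    """|1 - N*Gg/(Gtot/2 - i*omega)|^2 with Gtot = N*Gg + Gamma'."""
    Gtot = N * Gg + Gp
    return abs(1 - N * Gg / (Gtot / 2 - 1j * omega)) ** 2

N, Gg = 20, 1.0
Gp = N * Gg                 # impedance matching: Gtot = 2 N Gg
r0 = reflection(0.0, N, Gg, Gp)       # perfect absorption on resonance
r_half = reflection(N * Gg, N, Gg, Gp)  # half reflection at omega = N Gg
print(r0, r_half)
```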
In \cref{fig:mirror}a we include spatial disorder and show how the photon loss on resonance $p_{\mathrm{loss}}\equiv 1-\eta$ scales with atom number. Clearly, while the setup works very well for low spatial disorder ($\sigma/\lambda<1\%$), it suffers significantly from disorder. In the following we show how this is mitigated. Note that in \cref{fig:mirror} and indeed all plots in this paper we choose the Purcell factor $P=10$, which is close to the current state of the art~\cite{Nayak2019}. However, the collective enhancement of the atom-waveguide coupling means that the primary effect of having a lower Purcell factor is that more atoms have to be employed and therefore read out.
In the presence of disorder, the energies and decay rates of the eigenmodes of the atomic array are shifted, and many collective atomic modes couple to the input field. Thus, the picture presented above breaks down. Yet, as we have shown following \cref{eq:generic_scattering_matrix}, full absorption can be attained generically, independent of disorder, by tuning the engineered dissipation $\Gamma_{s}$ to one of the eigenmodes. However, the eigenmodes in the presence of disorder are not known \emph{a priori}, so this approach appears infeasible. Surprisingly, one can still vastly improve over naively setting $\Gamma_{s}=N\Gamma_{g}$ as we did in \cref{fig:mirror}a, if the standard deviation $\sigma\equiv\sqrt{\langle \delta x^2\rangle}$ of atomic positions is known. For a given $N,\sigma$, one can then calculate the average largest eigenvalue $\langle \mu\rangle_{N,\sigma}$ and tune the engineered dissipation to its imaginary part $\Gamma_{s}=\Im[\langle \mu\rangle_{\sigma,N}]$. Note that $\Re\langle \mu\rangle_{\sigma,N}=0$ if every configuration is as likely as its reflection. This restores the favourable scaling of detection efficiency with $N$, as illustrated by the blue ($\sigma=0.1\lambda$) and green ($\sigma=\lambda$) curves in \cref{fig:mirror}b. Most strikingly, this works even in the presence of disorder equal to the lattice spacing (green), which is essentially equivalent to a fully random configuration. The reason it works lies in the fact that the largest eigenvalue and thus the absorption bandwidth grows faster than the fluctuations of the largest eigenvalue around its mean.
If the disorder is fixed as a result of fabrication, such as in solid-state implementations, one can further improve the scaling by measuring the largest decay rate. In this case, $\Gamma_{s}$ can be tuned exactly to the largest eigenvalue. This situation corresponds to the red curve in \cref{fig:mirror}b, calculated for completely random configurations, where in each case $\Gamma_{s}$ was set to coincide with the imaginary part of the largest eigenvalue. This clearly leads to a better result than without exact tuning (green). The scaling is still worse than with low disorder (blue), because the largest eigenvalue grows slower in completely disordered configurations compared with mostly ordered ones.
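A minimal simulation of this exact-tuning strategy (hypothetical parameters, one fixed random configuration): the engineered decay is set so that $\Gamma'=\Gamma_s+\Gamma_{\mathrm{free}}$ matches the measured most dissipative eigenvalue, giving $p_{\mathrm{abs}}\approx1$ and an efficiency limited only by the ratio $\Gamma_s/(\Gamma_s+\Gamma_{\mathrm{free}})$:

```python
import numpy as np

rng = np.random.default_rng(2)
Gg, lam, N, P = 1.0, 1.0, 30, 10      # Purcell factor P = Gg/Gfree
Gfree = Gg / P
k0 = 2 * np.pi / lam
x = np.sort(rng.uniform(0.0, N / 4, N)) * lam   # completely random array
H = -1j * Gg / 4 * (np.exp(1j * k0 * np.abs(x[:, None] - x[None, :]))
                    - np.exp(1j * k0 * (x[:, None] + x[None, :])))
L = np.sqrt(Gg) * np.sin(k0 * x)

# "Measure" the most dissipative eigenvalue of this fixed configuration and
# tune the engineered decay so that Gamma' = Gs + Gfree matches it exactly.
mu = min(np.linalg.eigvals(H), key=lambda m: m.imag)
omega, Gp = mu.real, -2 * mu.imag
Gs = Gp - Gfree                                  # engineered part of Gamma'
M = np.linalg.inv((Gp / 2 - 1j * omega) * np.eye(N) + 1j * H)
p_abs = 1 - abs(1 - L.conj() @ M @ L) ** 2
eta = p_abs * Gs / (Gs + Gfree)
print(p_abs, eta)
```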
\begin{figure*}
\caption{
\textbf{Photon detection with an infinite waveguide.}
\textbf{a} Detection probability for a disorder-free array of $N$ atoms with different spacings.
This graph illustrates the fact that an array in the atomic-mirror configuration ($a=\lambda$) cannot serve as a photon counter,
whereas all other generic spacings work similarly well, a fact that can be understood analytically (\emph{cf.} \cref{sec:infinite}).
\textbf{b} Probability for an undetected photon on resonance for an atomic array coupled to an infinite waveguide as a function of atom number.
The blue line denotes the limit of a perfectly ordered array with spacing $a=\lambda/4$ (or $5\lambda/4$ etc).
The Purcell factor is $P=10$; the average for the other cases was performed over 2500 disorder realizations (standard deviation shown as lightly coloured area).
\textbf{c} Scaling of the largest eigenvalue in ordered arrays with varying spacing compared to a fully random one (red).
While the atomic-mirror configuration ($a=\lambda$) is clearly different, it is a fine-tuned exception, with all other generic arrays (ordered or disordered) behaving remarkably similarly.
The robustness of our scheme relies to a large degree on this universal eigenvalue scaling. }
\label{fig:infinite}
\end{figure*}
So far, we have just discussed absorption and detection on resonance. Equally important is the detection bandwidth, given by the engineered decay rate. Since the detection efficiency $\eta$ also depends on the ratio between engineered dissipation and total decay rate, the best detection is achieved by tuning to the most dissipative eigenmode (\emph{cf.} \cref{fig:mirror}c). This is our choice for all plots.
\subsection{Infinite waveguide}\label{sec:infinite} Let us now turn to an atomic array coupled to an infinite waveguide, which has the simpler effective Hamiltonian \begin{equation}
\matr H_{\mathrm{eff},1,mn}=-i\frac{\Gamma_{g}}{2}\exp(ik_0|x_m-x_n|),
\label{eq:infinite_Heff} \end{equation} since there is only one path for a photon to travel from one atom to the next. As illustrated in \cref{fig:sketch}b, there are now two input and two output modes, a right-moving one ($+$) and a left-moving one ($-$) [\emph{cf.} \cref{eq:generic_langevin_equations}]. The atomic lowering operators couple to the input operators via the $N\times2$ matrix $\matr L_{n,\nu}=\sqrt{\Gamma_{g}/2}\exp(ik_0\nu x_n)$, where $\nu\in\{\pm1\}$ labels right- and left-propagating modes. The scattering matrix reads \begin{equation}
\matr S(\omega) = \mat{1&0\\0&1} - \mat{\vec L_-\matr M(\omega)\vec L_+ &\vec L_- \matr M(\omega)\vec L_-\\
\vec L_+\matr M(\omega)\vec L_+ & \vec L_+\matr M(\omega)\vec L_-},
\label{eq:scattering_matrix} \end{equation} where $\matr M$ is defined as above [\cref{eq:M_matrix}]. Since $\matr M$ is symmetric, transmission of right- and left-moving waves (the diagonal elements of $\matr S$) are equal.
One can show that for atoms arranged in a periodic array, parity symmetry implies that the reflection amplitudes of right- and left-moving photons differ only by a phase. This is because $\matr J\matr M\matr J=\matr M$ under the action of the exchange matrix $\matr J$, which consists of ones on the anti-diagonal and otherwise zeros, and $\matr J\vec L_+=\exp(i\phi)\vec L_-$ for some $\phi$. In this case, we can parameterize the scattering matrix as \begin{equation}
\matr S(\omega) = \mat{A(\omega) & B(\omega)e^{2i\phi} \\ B(\omega) & A(\omega)}, \end{equation} with eigenvalues $\mu=A\pm B\exp(i\phi)$. In this system, coherent perfect absorption (one eigenvalue is zero) is equivalent to $B=\exp(i\theta)A$ with $\exp(i\phi+i\theta)=\pm1$. Full absorption of a uni-directional wavepacket, by contrast, may only be attained if $\matr S=0$. Parity symmetry thus implies that perfect absorption of such a wavepacket requires the scattering matrix to vanish entirely, corresponding to an exceptional point. Interestingly, as the atom number $N\to\infty$, the scattering matrix $\matr S\to0$ for all arrays \emph{except} the atomic-mirror configuration. In the latter, the scattering matrix can be shown to reduce to (for full absorption $\Gamma_{s}=N\Gamma_{g}$) \begin{equation}
\matr S_{\mathrm{AMC}}(\omega=0) = \frac12\mat{1 & -1 \\ -1 & 1},
\label{eq:infinite_S_AMC} \end{equation}
which gives perfect absorption for wavepackets that are symmetric superpositions of left- and right-propagating modes, but not for wavepackets incident from one direction, an effect seen before~\cite{Romero2009,Fan2013}. This can be overcome through atomic lenses~\cite{Li2015a}, a mirror as above~\cite{Peropadre2011}, or by using any other lattice spacing~\cite{Romero2009}. The latter can be deduced by estimating the magnitude of the elements of the scattering matrix through $|t|^N\sim\exp(-N^{1-\alpha})$, having assumed that the largest eigenvalue scales as $N^\alpha\Gamma_{g}$. Only in the atomic mirror configuration is $\alpha=1$, otherwise $\alpha<1$.
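The reduction of the scattering matrix in the atomic-mirror configuration can be reproduced numerically. The sketch below assumes the normalization in which $\Gamma_g$ is the total single-atom decay rate into the infinite waveguide, i.e.\ $\matr H_{\mathrm{eff},1,mn}=-i(\Gamma_g/2)\exp(ik_0|x_m-x_n|)$ and $\matr L_{n,\nu}=\sqrt{\Gamma_g/2}\exp(ik_0\nu x_n)$; parameters are hypothetical:

```python
import numpy as np

Gg, lam, N = 1.0, 1.0, 10
k0 = 2 * np.pi / lam

def S_matrix(x, Gp, omega=0.0):
    """2x2 scattering matrix of N atoms on an infinite waveguide."""
    H = -1j * Gg / 2 * np.exp(1j * k0 * np.abs(x[:, None] - x[None, :]))
    L = np.sqrt(Gg / 2) * np.exp(1j * k0 * np.outer(x, [1, -1]))  # N x 2
    M = np.linalg.inv((Gp / 2 - 1j * omega) * np.eye(N) + 1j * H)
    return np.eye(2) - L.conj().T @ M @ L

# Atomic-mirror configuration with Gamma_s = N Gamma_g:
S_amc = S_matrix(np.arange(N) * lam, Gp=N * Gg)
print(np.round(S_amc.real, 6))
```

The result reproduces \cref{eq:infinite_S_AMC}: only the symmetric combination of left- and right-moving inputs is fully absorbed.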
We numerically check a range of other spacings as well as fully disordered arrays and find similar behaviour as long as the atomic positions differ sufficiently from the atomic-mirror configuration. In \cref{fig:infinite}b,c we demonstrate this for a selection of array spacings (without including disorder). We choose $k_0=\pi/(2\lambda)$; this value is not special, but simply appears to be a good generic choice. Including disorder, and employing the same technique of tuning to the average largest eigenvalue, we find similar scaling behaviour as in the mirror geometry, illustrated in \cref{fig:infinite}a. The upshot is that arbitrary detection efficiencies can again be attained by increasing atom number, independent of disorder.
\subsection{Chiral atom--waveguide coupling}\label{sec:chiral} Interestingly, the recently demonstrated platforms for chiral atom-waveguide coupling~\cite{Lodahl2017} are another architecture in which robust photon detection may be achieved. Such a coupling is realized in a range of situations, for example when the light field is strongly confined \cite{Luxmoore2013,Junge2013,Shomroni2014,Sollner2015}, when giant atoms are tuned to give a chiral coupling~\cite{Kockum2018}, or in topological systems~\cite{Barik2018,Barik2019}.
By design, (almost) no backscattering occurs in these systems, there are no collective effects, and the spacing of the atoms is immaterial, making the analysis straightforward. If an atom couples to right- and left-moving modes at different rates, transmission and reflection on resonance are captured by~\cite{Lodahl2017} \begin{equation}
\beta_\pm=\frac{\gamma_\pm}{\gamma_++\gamma_-+\Gamma'},\quad t_\pm=1-2\beta_\pm,\quad r_\pm=-2\sqrt{\beta_+\beta_-}.
\label{eq:beta_factor} \end{equation} As before, $\Gamma'=\Gamma_{s}+\Gamma_{\mathrm{free}}$. This allows us to calculate the per-atom absorption probability (for the $+$-mode) \begin{equation}
A_+\equiv 1-|t_+|^2-|r_+|^2 = 4\beta_+\left( 1-\beta_+-\beta_- \right).
\label{eq:absorption} \end{equation} The probability that the photon is dissipated via the engineered channel is $\Gamma_s/\Gamma'$, as before. In the limit of many atoms, the transmission probability vanishes, as all photons are either reflected or absorbed. The detection efficiency is therefore the ratio of absorbed photons times the probability they are dissipated to $\ket s$, which to first order in $\gamma_-/\gamma_+$ is given by \begin{equation}
\eta_{\mathrm{chiral}}
=\left( 1-\frac{\gamma_-}{\gamma_++\Gamma'} \right)\frac{\Gamma_{s}}{\Gamma'}.
\label{eq:chiral_p} \end{equation}
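As a quick consistency check of the chiral scattering relations (with hypothetical rate values), the absorbed fraction computed directly from $t_+$ and $r_+$ can be compared with the algebraic simplification $4\beta_+(1-\beta_+-\beta_-)$:

```python
import math

gp, gm, Gprime = 1.0, 0.1, 2.0      # hypothetical gamma_+, gamma_-, Gamma'
beta_p = gp / (gp + gm + Gprime)
beta_m = gm / (gp + gm + Gprime)
t_p = 1 - 2 * beta_p                # on-resonance transmission amplitude
r = -2 * math.sqrt(beta_p * beta_m) # on-resonance reflection amplitude
A_direct = 1 - t_p**2 - r**2        # absorbed fraction from t and r
A_closed = 4 * beta_p * (1 - beta_p - beta_m)
print(A_direct, A_closed)
```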
Note that even for moderate $\gamma_-/\gamma_+$, the correction $\gamma_-/(\gamma_++\Gamma')$ can be made arbitrarily small by increasing $\Gamma_{s}$, with the caveat that a larger number of emitters is needed before complete extinction is attained. In the absence of backscattering, this scheme is intrinsically robust against disorder. On top of that, the detection bandwidth depends only on the bandwidth of chirality and thus is---at least in principle---independent of $\Gamma_{s}$. This comes again with the caveat that photons far detuned from resonance on a scale of $\Gamma_{s}$ can only be absorbed with a large number of emitters.
\section{Nondestructive photon counting}\label{sec:QND} \subsection{Outline} In what we have discussed so far, we have disregarded the photons emitted via the engineered decay. After the state of the atomic array is measured, the only information obtained concerns the number of photons in the pulse. Since for each photon absorbed, one is emitted via the engineered channel, it is pertinent to ask whether a situation can arise in which the emitted photonic state coincides with the input state. This requires the outgoing photons to be disentangled from the atoms, save for in the collective number basis, and that the photonic state is not distorted in any other way. If these conditions are fulfilled, the setup realizes a quantum nondemolition (QND) photon-number measurement.
It is immediately obvious that if the photons are emitted into free space and thus scattered in all directions, the outgoing photonic state is a) useless, b) still entangled with the atoms, and c) distributed over many different modes. In principle, this is mitigated to some degree if the photons are emitted back into the waveguide. However, if each atom were to emit independently, the probability of any one photon getting lost is $1/(1+P)$, where $P$ is the Purcell factor, which severely limits the fidelity of the scattered wavepacket for realistic Purcell factors $P$. Furthermore, a photon has a different phase depending on the atom it is emitted from, causing residual entanglement of the photons with the atomic state, except in the atomic-mirror configuration. Thus, QND detection requires the mirror geometry (\emph{cf.} \cref{fig:sketch}d), which we study in detail below~\footnote{We note that even in the mirror geometry the phase of the emitted photon depends on the atom it is emitted from if its frequency differs from the frequency of absorbed photons. This effect is negligible if $\delta \lambda/\lambda\ll \lambda/L$.}.
It turns out that the same trick that can make absorption robust against free-space decay and disorder---collective decay---can also be used to protect the re-emission into the waveguide. This is achieved if the atoms are in a superposition of $\ket g$ and $\ket s$ instead of all in $\ket g$. As we demonstrate mathematically below, this yields collective enhancement of the atom-waveguide coupling for both decay channels. This mode of operation therefore holds one further big advantage over the non-QND operation, namely that the bandwidth of the overall detector is not limited to the single-atom bandwidth. If the final readout of the atomic state is to give information about the number of photons, the ground-state superposition must initially possess a known number of atoms in $\ket g$. These requirements are fulfilled by \emph{Dicke states}, fully symmetric states with a definite number of excitations \begin{equation}
\ket{N-m,m,0} \equiv \sqrt{\frac{(N-m)!}{N!\, m!}}[S_{sg}]^m\ket G,
\label{eq:dicke_states} \end{equation} where $S_{sg}\equiv \sum_{i=1}^N\sigma_{sg}^{(i)}$ is a collective spin operator, and $\ket G$ is the state in which all atoms are in $\ket g$.
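The normalization in \cref{eq:dicke_states} can be checked directly for small systems: acting with $[S_{sg}]^m$ on $\ket G$ produces each $m$-atom configuration with multiplicity $m!$, and the prefactor restores unit norm. A brute-force sketch (our own illustration, not part of the original analysis):

```python
from math import factorial, sqrt

def dicke_norm(N, m):
    """Norm of sqrt((N-m)!/(N! m!)) [S_sg]^m |G>, computed by brute force.
    Configurations are bitstrings (bit i set = atom i in |s>)."""
    state = {0: 1.0}                        # all atoms in |g>
    for _ in range(m):                      # apply S_sg m times
        new = {}
        for cfg, amp in state.items():
            for i in range(N):
                if not (cfg >> i) & 1:      # sigma_sg^{(i)} flips g -> s
                    tgt = cfg | (1 << i)
                    new[tgt] = new.get(tgt, 0.0) + amp
        state = new
    norm2 = sum(a * a for a in state.values())
    return sqrt(factorial(N - m) / (factorial(N) * factorial(m))) * sqrt(norm2)
```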
\begin{figure*}
\caption{\textbf{QND number-resolving detection.}
\textbf{a} Overlap $\mathcal F_1$ of the reflected wavepacket with the incoming wavepacket when a single photon in a Gaussian mode of width $\Gamma_{g}$ is scattered from a perfect array initially in state $\ket{\psi_0}=\ket{N/2,N/2,0}$,
obtained from a numerical simulation (dots) for Purcell factors of $P=10$ (blue) and $P=\infty$ (green, no free-space decay).
Free-space decay clearly reduces the fidelity, as is expected, but does not affect the scaling with the number of atoms, as the coherently enhanced coupling grows faster than the incoherent decay.
The fidelity is captured very well by the transmission fidelity of an impedance-matched two-port cavity in which each port has decay rate $\kappa=N\Gamma_{g}/2$ [black line, see \cref{eq:F1}].
In this effective description, free-space decay corresponds to internal decay of the cavity as outlined following \cref{eq:F1}.
\textbf{b}
The fidelity as a function of the initial imbalance $m_0=m-N/2$, again comparing numerics with the analytical calculation for a two-sided cavity [\cref{eq:F1}].
Here and in the following we always take $P=10$.
\textbf{c}
Numerical calculation of the fidelity $\mathcal F_m$ of the QND measurement like in \textbf{a}, but now for several photons.
The dots are numerically calculated; the solid lines show a simple estimate, given in \cref{eq:fidelity} below.
\textbf{d} A time trace of a simulation of 5 photons in the same Gaussian mode of width $\Gamma_{g}$ scattering off 40 atoms. This illustrates that transmission is good even with modest atom numbers,
and that the probability for an atom to be in the excited state remains small throughout, implying that free-space decay is suppressed.
The numerical method we use is detailed in \cref{app:numerics}. }
\label{fig:QND}
\end{figure*}
\subsection{QND measurement with Dicke states} We now aim to describe how a number-resolving QND measurement works in practice. We first focus on analytical results and approximations to motivate and illustrate the idea, and then corroborate the results with numerics. In the atomic-mirror configuration ($a=\lambda$) the coupling via a semi-infinite waveguide \eqref{eq:mirror_Hamiltonian} simplifies to $\matr H_{\mathrm{eff},1,mn}=-i\Gamma_g/2$. We also assume that the $e\to s$ transition couples to either the same waveguide at a slightly different frequency or to another waveguide field, such that $\matr H_{\mathrm{eff},2,mn}=-i\Gamma_s/2$. In this case, the Langevin equations \cref{eq:langevin} reduce to a single equation of motion for the collective spin operator \begin{equation}
\begin{aligned}
&\dot S_{ge}= (S_{gg}-S_{ee})\left(\sqrt{\Gamma_g}a_{\mathrm{in},1}-\frac{\Gamma_g}{2}S_{ge}\right)\\
&+S_{gs}\left(\sqrt{\Gamma_s}a_{\mathrm{in},2}-\frac{\Gamma_s}{2}S_{se}\right)
-\frac{\Gamma_{\mathrm{free}}}{2}S_{ge},
\end{aligned}
\label{eq:QND_eom} \end{equation} and another one with $g\leftrightarrow s$ and $1\leftrightarrow2$, where $S_{\alpha\beta}=\sum_i^N\sigma^{(i)}_{\alpha\beta}$ are collective spin operators. The input-output equations in this case read \begin{equation}
a_{\mathrm{out},1} = a_{\mathrm{in},1} - \sqrt{\Gamma_g}S_{ge},\quad a_{\mathrm{out},2} = a_{\mathrm{in},2}-\sqrt{\Gamma_s}S_{se}.
\label{eq:qnd_io} \end{equation}
Note that neglecting the input fields corresponding to free-space decay in \cref{eq:QND_eom} is an approximation, as they take the system out of the subspace of Dicke states. The reason is that when a photon decays into free space, it destroys the coherence, which leaves the system in a mixed state. Therefore, \cref{eq:QND_eom} fails to account properly for the dynamics after the first photon has been dissipated to free space. The reason we can still use it to calculate the fidelity of QND measurements of Fock states is that once a photon is lost, the fidelity immediately drops to zero and remains zero, such that the subsequent time evolution of the system is immaterial.
To understand how impedance matching can be attained here, consider the system being in the symmetric state with $m_e$ excitations $\ket\psi=\ket{N/2-m_0-m_{\mathrm{e}},N/2+m_0,m_{\mathrm{e}}}$, where we have defined the imbalance $m_0=m-N/2$. Acting on this generic symmetric state, \begin{equation}
S_{gs}S_{se}\ket\psi = (S_{ss}+1)S_{ge}\ket\psi.
\label{eq:symmetric_simplification} \end{equation} Since this is true independent of $m_0$ and $m_{\mathrm{e}}$, it is an operator identity, but only in the subspace of symmetric states. Another identity can be obtained by exchanging $s\leftrightarrow g$. This allows us to rewrite the equation of motion in the simplified form \begin{equation}
\begin{aligned}
\dot S_{ge}= -&\left[(S_{gg}-S_{ee})\Gamma_g + (S_{ss}+1)\Gamma_s+\Gamma_{\mathrm{free}}\right]\frac{S_{ge}}{2}
+\xi_{\mathrm{in}},
\end{aligned}
\label{eq:QND_eom2} \end{equation} where $\xi_{\mathrm{in}}$ is the same input noise as in \cref{eq:QND_eom}. In symmetric states with sufficiently large population in both ground states, we can replace the operators approximately by their expectation value in the initial state, $S_{gg}-S_{ee}\approx N/2-m_0$ and $S_{ss}\approx N/2+m_0$. Thus we conclude that \begin{equation}
\Gamma_g(N/2-m_0) = \Gamma_s(N/2+m_0+1)
\label{eq:impedance_matching_condition} \end{equation} ensures impedance matching, such that the decay rates of an excitation via the first and via the second channel are equal up to $\mathcal O(1/N)$. Under this condition, the equation of motion~\eqref{eq:QND_eom2} on resonance can be solved approximately, yielding $S_{ge}\approx a_{\mathrm{in},1}/\sqrt{\Gamma_g}$. On the one hand, this implies that $a_{\mathrm{out},1}$ is independent of $a_{\mathrm{in},1}$, since the whole signal is absorbed. On the other hand, $a_{\mathrm{out},2}\approx-S_{sg}a_{\mathrm{in},1}\sqrt{2\Gamma_s/\Gamma_gN^2}$. Given this, $\langle a_{\mathrm{out},2}\dagg a_{\mathrm{out},2}\rangle\approx \langle a_{\mathrm{in},1}\dagg a_{\mathrm{in},1}\rangle $, \emph{i.e.}, every photon incident on port 1 is transmitted to port 2. The presence of $S_{sg}$ in this expression indicates that for each incoming photon, an atom is transferred from $g$ to $s$. This still holds for the second, third, \ldots, $n^{\mathrm{th}}$ photon, but with an error of order $\mathcal O(n/N)$, which is the same as for the non-QND detector, up to constants of $\mathcal O(1)$.
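A minimal sketch of the matching condition \cref{eq:impedance_matching_condition}, solving for the engineered rate (function name ours):

```python
def matched_gamma_s(gamma_g, N, m0):
    """Engineered decay rate Gamma_s satisfying the impedance-matching
    condition Gamma_g (N/2 - m0) = Gamma_s (N/2 + m0 + 1)."""
    return gamma_g * (N / 2 - m0) / (N / 2 + m0 + 1)
```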
To make this intuition quantitative, we follow a simple argument. First of all, let us denote the overlap of the output with the input wavepacket in case of the scattering of a single photon in a given state as $\mathcal F_1=|\langle\psi_{\mathrm{out}}|\psi_{\mathrm{in}}\rangle|^2$. For any finite-bandwidth wavepacket this differs from unity. For example, a single photon with a Gaussian wavefunction of width $\tau$ is transmitted by an impedance-matched cavity with outcoupling rate $\kappa$ and internal decay rate $\kappa_{\mathrm{int}}$ with fidelity \begin{equation}
\mathcal F_1 = \frac{\kappa}{\kappa+\kappa_{\mathrm{int}}}
\left\{\sqrt{2\pi}e^{2\kappa^2\tau^2}\kappa\tau[1-\erf(\sqrt{2}\kappa\tau)]\right\}^2,
\label{eq:F1} \end{equation} where $\erf(x)=(2/\sqrt{\pi})\int_{0}^x\exp(-y^2)dy$ is the error function. To make contact with our simulations, we thus set $\kappa=\Gamma_g(N/2-m_0)$ and the internal cavity decay $\kappa_{\mathrm{int}}=\Gamma_{\mathrm{free}}$. In this limit of few excitations, it is therefore valid to think of free-space decay as limiting the fidelity by introducing a branching ratio between decay back into the waveguide and decay into free space, which is captured explicitly by the pre\-factor in \cref{eq:F1}.
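As an independent check of the effective-cavity description, the sketch below computes the overlap of a Gaussian wavepacket with its transmission through an impedance-matched two-sided cavity, assuming the standard Lorentzian transmission amplitude $t(\delta)=\kappa/(\kappa-i\delta)$ for a cavity with rate $\kappa$ per port (our simplified model, evaluated by frequency-domain quadrature rather than the closed form):

```python
from math import exp, pi, sqrt

def cavity_overlap(kappa, tau, n_grid=20001, cutoff=30.0):
    """Overlap |<psi_in|psi_out>|^2 of a Gaussian wavepacket of duration tau
    with its transmission through an impedance-matched two-sided cavity,
    modelled by the Lorentzian amplitude t(d) = kappa/(kappa - i d).
    Simple Riemann sum in the frequency domain (a numerical sketch)."""
    sigma = 1.0 / (2.0 * tau)              # spectral width of the wavepacket
    dmax = cutoff * sigma
    dd = 2.0 * dmax / (n_grid - 1)
    re = im = 0.0
    for k in range(n_grid):
        d = -dmax + k * dd                 # detuning from cavity resonance
        w = exp(-d * d / (2.0 * sigma * sigma))   # spectral density |psi|^2
        denom = kappa * kappa + d * d
        re += w * kappa * kappa / denom * dd      # Re t(d), weighted
        im += w * kappa * d / denom * dd          # Im t(d), weighted
    norm = sqrt(2.0 * pi) * sigma          # normalization of |psi|^2
    return (re * re + im * im) / (norm * norm)
```

The overlap approaches unity for $\kappa\tau\gg1$ (narrowband pulses) and drops for broadband ones, in agreement with the trend in \cref{fig:QND}a.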
Given some single-photon fidelity $\mathcal F_1$, a linear device would transmit $m_{\mathrm{p}}$ photons in the same mode with a fidelity of $\mathcal F_1^{m_{\mathrm{p}}}$. However, for each atom that transitions from $\ket g$ to $\ket s$, the single-photon collective decay rate from $e\to g$ is reduced by $\Gamma_g$, whereas the collective decay rate from $e\to s$ is increased by $\Gamma_s$, which leads to imperfect impedance matching. Using \cref{eq:output_photons} to estimate the resulting additional reflection, we find that the probability of absorbing the $k^{\mathrm{th}}$ photon is reduced by a factor of $1-2k^2/3N^2$. Thus, the probability that $m_{\mathrm{p}}$ photons are absorbed is reduced by a factor of $1-2(2m_{\mathrm{p}}^3-3m_{\mathrm{p}}^2+m_{\mathrm{p}})/3N^2$ to third order in $1/N$. Combining this with the single-photon fidelity $\mathcal F_1$, we can estimate the $m_{\mathrm{p}}$-photon QND fidelity as \begin{equation}
\mathcal F_{m_{\mathrm{p}}} = \left(1-\frac{4m_{\mathrm{p}}^3-6m_{\mathrm{p}}^2+2m_{\mathrm{p}}}{3N^2}\right)\mathcal F_1^{m_{\mathrm{p}}}.
\label{eq:fidelity} \end{equation} In the following, we compare these predictions with numerical simulation and find they agree well.
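The estimate can be evaluated directly; the suppression polynomial is taken from the absorption argument above, and the function name is ours:

```python
def multiphoton_fidelity(f1, m, N):
    """Estimate of the m-photon QND fidelity: the single-photon fidelity f1
    per photon, times the absorption-probability factor
    1 - 2(2m^3 - 3m^2 + m)/(3 N^2) derived in the text."""
    suppression = 1.0 - 2.0 * (2 * m**3 - 3 * m**2 + m) / (3.0 * N**2)
    return suppression * f1**m
```

For $m=1$ the suppression factor reduces to unity, so the estimate correctly returns $\mathcal F_1$.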
\subsection{Numerical simulation} In order to verify these conclusions numerically, we study the scattering of multi-photon Fock states in a Gaussian wavepacket of width $\Gamma_{g}$ from an array of atoms in the atomic-mirror configuration, using a recently proposed technique~\cite{Kiilerich2019}. We present details of the simulation in \cref{app:numerics}.
In \cref{fig:QND}a we show the fidelity for a single photon scattering off an array starting from the initial state $\ket{N/2,N/2,0}$. Clearly, \cref{eq:F1} is a good approximation to the transmission fidelity of the atomic array. This is still true for any other symmetric starting state, as we illustrate through \cref{fig:QND}b, which shows the fidelity of single-photon scattering when starting with an initial state $\ket{N/2-m_0,N/2+m_0,0}$, with the imbalance $m_0$ ranging from $-N/2$ to $N/2$. We assume there is a maximally achievable decay rate $\Gamma_{\mathrm{max}}\geq\Gamma_g,\Gamma_s$ and consequently lower either $\Gamma_g$ or $\Gamma_s$ to fulfil the above condition~\eqref{eq:impedance_matching_condition}. Together, these results show that single-photon transmission is captured very well by the above equations.
Turning to multi-photon scattering, we simulate several photons in the same Gaussian wavepacket scattering off the atomic array and again calculate the overlap of the output wavepacket with the input wavepacket. The results are shown in \cref{fig:QND}c. We find that our simple argument captures the fidelity well, and that our proposal can in principle reach very high fidelities for modest atom numbers. Finally, in \cref{fig:QND}d we show an example of the time evolution of the system.
\subsection{Dicke state preparation}\label{sec:preparation} Following similar arguments as in the other sections, the QND detector is robust against spatial disorder. However, the suppression of free-space decay crucially relies on the preparation of a Dicke state between the two ground states. One way to obtain such a state is to start with all atoms in the ground state $\ket g$, apply a $\pi/2$-pulse on the ground states $\left\{ g,s \right\}$, and finally perform a projective measurement of $S_{gg}$ (or $S_{ss}$ or $S_{gg}-S_{ss}$). This heralds a fully symmetric state with a binomial distribution of imbalances around zero and standard deviation $\sqrt{N/4}$. However, such measurements are difficult. In principle, one can apply an off-resonant probe in the waveguide and record the phase shift of the reflected light, which is proportional to the number of atoms~\cite{Beguin2014}. In \cref{app:prep} we briefly analyze this kind of measurement and find it is fundamentally limited to atom numbers of the order of the Purcell factor $N\lesssim P$. It has been proposed to produce atomic states by manipulating the dark-state manifold~\cite{Gonzalez-Tudela2015a}, which, however, is limited in fidelity by $1-\mathcal F\propto N/(2\sqrt{P})$, where $N$ is the number of atoms and $P$ the Purcell factor. Neither method scales well to many atoms. Thus, in the following we present a fast preparation method that imposes much less stringent requirements on the Purcell factor.
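As a check of the quoted statistics (our own illustration): after the $\pi/2$-pulse the number of atoms in $\ket g$ is binomially distributed with $p=1/2$, so the imbalance $m_0$ indeed has standard deviation $\sqrt{N/4}$.

```python
from math import comb, sqrt

def imbalance_std(N):
    """Standard deviation of the imbalance m0 = n_g - N/2 after a pi/2 pulse,
    where n_g is binomial with p = 1/2; expected value: sqrt(N/4)."""
    probs = [comb(N, n) / 2**N for n in range(N + 1)]
    mean = sum(p * (n - N / 2) for n, p in enumerate(probs))
    var = sum(p * (n - N / 2) ** 2 for n, p in enumerate(probs))
    return sqrt(var - mean**2)
```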
\begin{figure}\label{fig:prep}
\end{figure}
We propose to produce Dicke states in a way that is in keeping with the core idea of this article: measuring atoms, not photons. This requires two arrays (halves of one array) that are individually addressable with external driving. We further require that the atomic transition frequency be in the bandgap of the waveguide, which allows the atoms to be coupled coherently without dissipation. As shown elsewhere~\cite{Douglas2015}, such a setup readily gives rise to a Hamiltonian that couples all spins \begin{equation}
H = \frac{2\pi g_k^2}{\Delta L}\sum_{ij}\sigma_{sg}^{(i)}\sigma_{gs}^{(j)}e^{-|x_i-x_j|/L}\simeq
g_{\mathrm{eff}}S_{sg}S_{gs},
\label{eq:H_dicke} \end{equation} where the combination of coupling to individual waveguide modes $g_k\approx\text{const}$, bound state decay length $L=\sqrt{\alpha/\Delta}$, detuning of the impurity from the band edge $\Delta=\omega_0-\omega_b$, and band curvature $\omega_k=\omega_b+\alpha k^2$ yield the effective coupling strength $g_{\mathrm{eff}}=2\pi g_k^2/(\Delta L)$.
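For orientation, the scaling of the effective coupling can be evaluated directly: combining the definitions above, $g_{\mathrm{eff}}=2\pi g_k^2/(\Delta L)=2\pi g_k^2/\sqrt{\alpha\Delta}$, which grows as the detuning from the band edge shrinks (function name ours):

```python
from math import pi, sqrt

def effective_coupling(g_k, delta, alpha):
    """Effective spin-spin coupling g_eff = 2 pi g_k^2 / (Delta * L) of
    Eq. (H_dicke), with bound-state decay length L = sqrt(alpha / Delta)."""
    L = sqrt(alpha / delta)
    return 2.0 * pi * g_k**2 / (delta * L)
```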
Importantly, when using a Raman transition ($s\to e\to g$) to couple the atoms to the waveguide modes, both $\Delta$ and $g_k$ in \cref{eq:H_dicke} are tunable. Careful analysis (\emph{cf.}~\cref{app:band_gap_interaction}) reveals that the effective Purcell factor $P_{\mathrm{eff}}=g_{\mathrm{eff}}/\Gamma_{\mathrm{eff,free}}$ scales as the ratio of the effective waveguide density of states ($1/\sqrt{\Delta\alpha}$) and the constant free-space density of states ($\rho_0$), \emph{viz.}, $P_{\mathrm{eff}}\propto1/(\rho_0\sqrt{\alpha\Delta})$. The detuning from the band edge $\Delta$ can in principle be made arbitrarily small without violating the adiabatic condition or the Markov approximation (\emph{cf.}~\cref{app:band_gap_interaction}). Note also that disorder in the positions of the atoms is not an issue here, as there is no position-dependent phase~\footnote{
As discussed in Ref.~\cite{Douglas2015}, a dependence on position may arise, if the modes at the band edge, through which the atoms are coupled, have spatial structure, for example a finite wavevector.
If the atoms are placed below the band edge of a waveguide, this does not happen, as the lowest-energy modes are extended modes at zero wavevector.}. Instead, this scheme is likely ultimately limited by disorder in the coupling strengths or energies, which destroys the symmetry of the effective Hamiltonian.
\begin{figure}\label{fig:prep2}
\end{figure}
The protocol to prepare an approximately half-excited state of one of the arrays is as follows. One array is fully excited (by applying a $\pi$-pulse) and the other is left in the ground state. The time evolution under the Hamiltonian \eqref{eq:H_dicke}, shown in \cref{fig:prep}a, transfers excitations from one chain to the other, while leaving them in their individual symmetric subspaces. Notably, after a short time $g_{\mathrm{eff}}t_0\sim1/\sqrt{N}$, corresponding to the first zero crossing in \cref{fig:prep}a, the average number of excitations in each array is equal. However, this comes with the caveat that while on average the two arrays hold $N/2$ excitations each, in fact their imbalance has a very wide probability distribution, as we illustrate in \cref{fig:prep}c for $N=100$ emitters. As we have shown above, a large imbalance does not invalidate our scheme, but it does mean that the usable atom number on average is halved. Since this is a constant penalty, it does not change the overall scaling with atom number $N$. Ultimately, this scheme requires $P_{\mathrm{eff}}\gg\sqrt{N}$, where $P_{\mathrm{eff}}$ is the Purcell factor enhanced through the proximity to the band edge.
There is another, intrinsically faster, way to prepare Dicke states, if the system is governed by the Hamiltonian \begin{equation}
H = g_{\mathrm{eff}}[S_{sg}^{(1)}S_{gs}^{(2)}+S_{sg}^{(2)}S_{gs}^{(1)}].
\label{eq:H_dicke2} \end{equation} As we detail in \cref{app:engineering_H2}, this Hamiltonian may be engineered through the interference of interactions via two different waveguides (or bands in the same waveguide),
and constitutes a waveguide QED version of spin flip-flops recently realized in cavity QED~\cite{Davis2019}. While certainly more challenging to implement experimentally, this Hamiltonian has the advantage of equilibrating the number of excitations in each array on an asymptotic time scale of $g_{\mathrm{eff}}t_0\sim \log(N)/(2\sqrt{2}N)$ (\emph{cf.}~\cref{app:time_scales}). This is the fastest time that can be achieved for a given $g_{\mathrm{eff}}$, essentially saturating the time scale obtained by adding all average transition times $\sum_n^N \|S_+^{(2)}S_-^{(1)}\ket{N,n}\otimes\ket{N,N-n}\|^{-1}\sim \log(N)/2N$. As a result, using the Hamiltonian \cref{eq:H_dicke2} reduces the requirement on the effective Purcell factor to $P_{\mathrm{eff}}\gg\log(N)$. Another advantage of this Hamiltonian is illustrated by \cref{fig:prep2}c, which shows the probability distribution of Dicke states after a time $t_0$. The overall probability for the state to be close to $\ket{N/2,N/2}$ is larger than with the other Hamiltonian (\emph{cf.}~\cref{fig:prep}c).
\begin{figure*}\label{fig:level_scheme}
\end{figure*}
\section{Experimental considerations}\label{sec:experiment} A tacit assumption in the preceding sections has been that the decay rates from $\ket e$ to $\ket g$ and/or $\ket s$ are tunable. In circuit quantum electrodynamics, decay rates can be tuned by changing the detuning of an intermediate resonator~\cite{Pechal2014}. In atomic systems, this can be done with Raman transitions, which we analyze in the following for the D$_2$ line of $^{87}$Rb. For concreteness, we assume here that the waveguide efficiently couples to $\pi$ transitions.
\subsection{Engineered decay: Destructive photon measurement}
The level scheme we consider specifically is drawn in \cref{fig:level_scheme}a. In it, we make the choice $\ket g\equiv\ket{F=2,m_F=2}$, $\ket e\equiv\ket{1,1}$, and $\{\ket s_i\}=\{\ket{2,1},\ket{2,-1},\ket{1,0},\ket{1,-1}\}$ (green area), which are coupled via excited states (in $5^2$P$_{3/2}$) $\ket{f_1}\equiv\ket{2,2}$, $\ket{f_2}\equiv\ket{2,0}$. While this is just one choice among many, it has the advantage that the applied lasers do not couple to any transition of the large number of atoms in the ground state $\ket g$. Since in the destructive scheme we only need to count how many atoms have scattered photons, the final state is irrelevant, provided it is not $\ket g$. The Raman scheme allows for large tuneability of the relative decay rate, which is required to obtain $\Gamma_{e\to s}(\Omega_2)\approx N\Gamma_{e\to g}(\Omega_1)$.
To show that the additional decays drawn in red do not spoil the scheme, we derive the effective quantum master equation governing the double-$\Lambda$ model in their presence. Neglecting the energy shifts due to the pumps, the dynamics are purely dissipative, given by the jump operators \begin{subequations}
\begin{align}
\hat L_{g,\mathrm{eff}}&=\frac{\sqrt{\Gamma_{1,g}}\Omega_1}{2\Delta_1-i(\Gamma_{1,g}+\Gamma_{1,e})}\ket g\bra e,\\
\hat L_{s_i,\mathrm{eff}}&=\frac{\sqrt{\Gamma_{2,s_i}}\Omega_2}{2\Delta_2-i(\Gamma_{2,s_i}+\Gamma_{2,e})}\ket{s_i}\bra e,\\
\hat L_{ee,\mathrm{eff}}&=\sum_{i=1}^2\frac{\sqrt{\Gamma_{i,e}}\Omega_i}{2\Delta_i-i(\Gamma_{i,g_i}+\Gamma_{i,e})}
\ket e\bra e,
\end{align} \end{subequations} where in the last expression $\Gamma_{1,g_1}=\Gamma_{1,g}$ and $\Gamma_{2,g_2}=\Gamma_{2,s}$. We denote the (sum of the) rates corresponding to these jump operators $\Gamma_{e\to g}(\Omega_1), \Gamma_{e\to s}(\Omega_2)$, and $\Gamma_{ee,\mathrm{eff}}$, respectively.
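The rates associated with these jump operators follow from the squared moduli of their prefactors; a sketch (our notation), which also illustrates the tunability via the Rabi frequencies:

```python
def raman_rate(gamma_target, omega, delta, gamma_tot):
    """Rate of an effective jump operator of the form
    sqrt(Gamma_target) * Omega / (2*Delta - i*Gamma_tot):
    |prefactor|^2 = Gamma_target * Omega^2 / (4*Delta^2 + Gamma_tot^2)."""
    return gamma_target * omega**2 / (4.0 * delta**2 + gamma_tot**2)
```

Since the rate scales as $\Omega^2$, the required imbalance $\Gamma_{e\to s}(\Omega_2)\approx N\Gamma_{e\to g}(\Omega_1)$ can be met, e.g., by choosing $\Omega_2/\Omega_1\approx\sqrt{N}$ for equal detunings and linewidths.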
We note that the deleterious decays $\Gamma_{i,e}$ induce dephasing described by $\hat L_{ee,\mathrm{eff}}$ that is not negligible, primarily due to the $i=2$ term, as $\Omega_2\gg\Omega_1$. However, the dephasing removes the coherence and thus also the superradiant decay to $\ket g$. Since $\Gamma_{e\to g}/\Gamma_{e\to s}=\mathcal O(1/N)$, the corresponding excitation decays to $\ket s$, with an error $\mathcal O(1/N)$. We conclude that all potential errors analyzed here are suppressed with increasing $N$.
\subsection{Engineered decay: Nondestructive photon measurement}
Similar considerations apply to the operation of the QND detector. A possible choice, shown in \cref{fig:level_scheme}b, contains ground states
$\ket s =\ket{2,-1}$, $\ket g =\ket{2,1}$, and excited states $\ket{f_1}=\ket{2,1}$ and $\ket{f_2}=\ket{2,-1}$. Assuming we are able to prepare a Dicke state $|N_g=m,N_s=N-m,N_e=0\rangle$, we require the two collective decay rates to be similar [\emph{cf.}~\cref{eq:impedance_matching_condition}].
Unlike the destructive scheme, now all other decays are deleterious. However, both decays $e\to s$ and $e\to g$ are enhanced by a factor of $N/2$. This implies that photon loss, either into free space or through decay into another hyperfine state, scales as $1/N$. All deleterious decay rates can be taken together to form $\Gamma_{\mathrm{free}}$ in the calculations of \cref{sec:QND}.
\subsection{Dicke state preparation} In order to prepare Dicke states, we propose to start with one array of the atoms in one hyperfine ground state $\ket s$ and the other array in another hyperfine ground state $\ket g$. In this protocol, the requirements on the Purcell factor (ratio of coupling strength to free-space decay) are most stringent. The most favourable level scheme is therefore a closed $\Lambda$-system, where $\ket g=\ket{F=2,m_F=2}$ and $\ket s=\ket{2,1}$ are hyperfine ground states, and $\ket f=\ket{3,3}$ is an excited state in $5^2$P$_{3/2}$. In this case, $f\to g$ is a cycling transition and decays outside this subspace are suppressed. Coherent driving between $\ket s$ and $\ket f$ can be implemented using a two-photon transition~\cite{Porras2008}. This leaves free-space decay as the error source, which has been discussed in \cref{sec:preparation}.
\subsection{Other sources of disorder} We have neglected inhomogeneous broadening and atomic motion, which could be taken into account in the same way as positional disorder and do not modify our conclusions. Furthermore, the relative effect of disorder decreases as the number of atoms increases~\cite{Romero2009a}. On the other hand, fast atomic motion, analyzed in \cref{app:motion}, essentially only renormalizes the coupling strength of the atoms to the waveguide (thereby decreasing the Purcell factor), but is not a fundamental obstruction.
In typical experiments today, the atom number might fluctuate in unknown ways from one experiment to the next. Allowing for some error in $N$ acts like an increased detuning error in $\Gamma_{\mathrm{eng}}$. As long as the relative error decreases with $N$, as it should, this does not affect the scaling with atom number.
\subsection{Photon loss from coupling into the waveguide} It is challenging to couple photonic wavepackets travelling in free space into a fibre. If this is done with $80\%$ efficiency per photon, say, then the detector already becomes unreliable for three photons, as there is a nearly $50\%$ chance of losing at least one of them in the first stage.
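The quoted number follows from each photon coupling into the fibre independently; a one-line sketch:

```python
def loss_probability(efficiency, n_photons):
    """Probability that at least one of n photons is lost when each couples
    into the fibre independently with the given efficiency."""
    return 1.0 - efficiency**n_photons
```

With $80\%$ efficiency and three photons, the loss probability is $1-0.8^3\approx 49\%$.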
However, the detectors described here are also useful for entirely waveguide-based setups, which obviate the need for coupling free-space wavepackets into a fibre. Recent proposals show that atomic arrays make excellent sources of quantum light~\cite{Paulisch2018,Paulisch2019,Perarnau-Llobet2019} especially in view towards quantum metrology. There also exist proposals for two-photon gates in this platform~\cite{Zheng2013}, and it has been found that such arrays coupled to waveguides support long-lived subradiant states that can be used for storage~\cite{Zhang2019,Albrecht2019} and also manipulation~\cite{Paulisch2016} of quantum states of light.
\subsection{Atom readout}\label{sec:readout} After the photons have scattered, one needs to measure the number of atoms in one of the states, or ideally both. Readout of superconducting qubits is well studied due to the advent of quantum computers. Fidelities now reach $>99\%$ in realistic devices~\cite{Jeffrey2014,Walter2017}.
For neutral atoms, a range of techniques have been developed. Among the earliest is to employ a cycling transition between a ground state and some excited state, which in recent experiments has yielded fidelities of 98--99\%~\cite{Kwon2017,Martinez-Dorantes2017}. A similar idea is to push one type of atom out of the trap with a resonant drive and use a quantum gas microscope to detect occupied sites~\cite{Nelson2007}. Instead, one could ``boil'' one type of atom away through inelastic light scattering. This can in principle achieve extremely high fidelities (99.97\%), but is limited by atom loss and has the drawback that the lattice has to be refilled. We note that detection of atom numbers has already been used in experiment~\cite{Goban2015,Prasad2019}. Likewise, a strong magnetic field~\cite{Boll2016} or a state-dependent trap~\cite{Wu2019} can be used to separate the states and image them afterwards. In these setups, imaging has been performed with 99.94\% fidelity, but background gas collisions still cause atom loss~\cite{Wu2019}. One option to prevent this would be tweezer arrays~\cite{Covey2019}. While such capabilities have not yet been demonstrated for atomic arrays coupled to waveguides, and scattering from the fibre might make photon collection harder, a solution could be to move the optical lattice slowly away from the fibre before performing the imaging.
After successful detection, the detector can be reset by pumping the $s\to e$ transition, such that eventually all atoms decay to $\ket g$.
\section{Conclusion}\label{sec:conclusion} We have explored the use of arrays of quantum emitters coupled to \mbox{waveguides} for number-resolving photon detection. Paying particular heed to experimental limitations such as disorder and free-space decay, we have found that both can be overcome, leaving no fundamental limitation to the achievable detection efficiency. Moreover, we have shown that the same platform can also be used to perform QND number-resolving photon detection. To this end, we also propose a novel way to prepare Dicke states based on the interaction of two arrays and subsequent heralding by measuring the number of excitations in one of them.
In a nutshell, our proposal builds on four facts that together enable highly efficient detectors: (1), few-level systems allow for strong, projective measurements of their state due to their intrinsic nonlinearity, (2), nevertheless, sufficiently large ensembles of atoms are linear, (3), collective decay mitigates errors due to non-idealities, and (4), in linear systems one can always engineer dissipation to obtain complete absorption. We hope that the ideas outlined here will mark a step towards high-fidelity number-resolving photon detectors, both destructive and nondestructive.
\begin{acknowledgments}
D.M.\ would like to thank Adam Smith, Clara Wanjura, and Petr Zapletal for enlightening discussions.
D.M.\ and J.I.C.\ acknowledge funding from ERC Advanced Grant QENOCOBA under the EU Horizon 2020 program (Grant Agreement No. 742102). \end{acknowledgments}
\appendix \begin{widetext} \section{Derivation of Langevin equations}\label{app:langevin_equations} \subsection{Two-level systems coupled to one semi-infinite waveguide field} We study quantum emitters coupled to a semi-infinite waveguide terminated at $x=0$ by a mirror. First assuming that the waveguide is also terminated on the other side after a length $L$, the bath eigenmodes have the wavefunction $\phi_n(x)=\sin(k_nx)\sqrt{2/L}$, where $k_n=\pi n / L$ for all natural $n$. Taking the length of the waveguide to infinity, we recover the Hamiltonian given in the main text [\cref{eq:generic_hamiltonian}] \begin{equation}
H=\int_0^\infty\frac{dk}{2\pi}\left\{(\omega_k-\omega_0)a_k\dagg a_k-g\sum_{n}\sin(kx_n)\left(a_k\dagg \sigma^{(n)}_{ge} +\text{H.c.}\right)\right\}
\label{eq:generic_interaction} \end{equation}
Defining the sine transform and its inverse through \begin{equation}
\tilde f(\nu)=2\int_0^\infty f(t)\sin(\nu t)\,dt,\qquad
f(t) = \frac{1}{\pi}\int_0^\infty \tilde f(\nu)\sin(\nu t)\, d\nu,
\label{eq:sine_transform} \end{equation} we can write the field in the waveguide as \begin{equation}
\phi(x,t) = \int_0^\infty \frac{dk}{\pi}\sin(k x)\left( a_k(t) + a_k\dagg(t) \right). \end{equation} Defined this way, the commutation relation $[a(x,t),a\dagg(x',t)]=\delta(x-x')$ (for positive $x$ only) is equivalent to the canonical commutation relations $[a_k(t),a_q\dagg(t)]=2\pi\delta(k-q)$, where the complex amplitudes are \begin{equation}
a(x,t) = \int_0^\infty\frac{dk}{\pi}\sin(kx)a_k(t),\qquad
a_k(t) =2\int_0^\infty dx \sin(kx)a(x,t). \end{equation}
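The transform pair \eqref{eq:sine_transform} can be verified numerically on the known pair $f(t)=e^{-t}$, $\tilde f(\nu)=2\nu/(1+\nu^2)$ (a quadrature sketch, our own illustration):

```python
from math import exp, sin

def sine_transform(f, nu, t_max=40.0, n=200001):
    """Forward sine transform of Eq. (sine_transform),
    tilde f(nu) = 2 * integral_0^inf f(t) sin(nu t) dt,
    by trapezoidal quadrature on a truncated interval."""
    dt = t_max / (n - 1)
    total = 0.0
    for k in range(n):
        t = k * dt
        w = 0.5 if k in (0, n - 1) else 1.0   # trapezoid end weights
        total += w * f(t) * sin(nu * t) * dt
    return 2.0 * total

# Known pair: f(t) = e^{-t}  ->  tilde f(nu) = 2 nu / (1 + nu^2)
```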
Solving the operator equations of motion \begin{equation}
\dot a_k = -i(\omega_k-\omega_0)a_k + ig\sum_n\sin(kx_n)\sigma^{(n)}_{ge},\qquad
\dot \sigma^{(n)}_{ge} = ig\int\frac{dk}{2\pi}\sin(kx_n)\left( \sigma_{gg}^{(n)}-\sigma_{ee}^{(n)} \right)a_k, \end{equation} yields \begin{equation}
a_k(t) = e^{-i(\omega_k-\omega_0)t}a_k(0)+ig\int_0^t d\tau\,e^{-i(\omega_k-\omega_0)(t-\tau)}\sum_m\sin(kx_m)\sigma^{(m)}_{ge}(\tau).
\label{eq:integrated_qle} \end{equation} This solution can be plugged into the equation of motion for $\sigma_{ge}^{(n)}$. We need the following integral \begin{equation}
\begin{aligned}
&\int_0^\infty\frac{d\omega}{\pi}\sin(\omega x_n/c)\sin(\omega x/c)e^{-i(\omega-\omega_0)t}\\
&= \frac{e^{i\omega_0t}}{-4}\left[ \delta\left( \frac{x_n+x}{c}-t \right)+\delta\left( \frac{x_n+x}{c}+t \right)-\delta\left( \frac{x_n-x}{c}-t \right)-\delta\left( \frac{x_n-x}{c}+t \right) \right]\\
&= \frac{e^{i\omega_0t}}{4}\left[ \delta\left( \frac{|x_n-x|}{c}-t \right) - \delta\left( \frac{x_n+x}{c}-t \right)\right],\qquad\text{if }x,x_n,t>0.
\end{aligned}
\label{eq:aux_integral} \end{equation} The equation of motion for $\sigma_{ge}^{(n)}$ becomes \begin{equation}
\begin{aligned}
\dot \sigma_{ge}^{(n)}(t)&=\left( \sigma_{gg}^{(n)}-\sigma_{ee}^{(n)} \right)\left\{\frac{ige^{i\omega_0t}}{4}\left[ a(x_n+ct,0)-a(ct-x_n,0) \right]\right.\\
&+\left. \frac{g^2}{8c}\sum_m\left[ e^{ik_0(x_m+x_n)}\sigma^{(m)}_{ge}\left( t-\frac{x_m+x_n}{c} \right)-e^{ik_0|x_n-x_m|}\sigma^{(m)}_{ge}\left( t-\frac{|x_m-x_n|}{c} \right) \right]\right\}.
\end{aligned} \end{equation} We define the input field $a_{\mathrm{in}}(t)$ as the portion of the waveguide field that was at a position $x=ct$ at time $t=0$ and has since travelled all the way to the atoms. Thus, $a_{\mathrm{in}}(t) =-\sqrt{c/2}e^{i\omega_0t}a(ct,0)$, where the pre-factor is fixed by the commutation relations of $a_{\mathrm{in}}$, up to an arbitrary phase. This yields the Langevin equation \begin{equation}
\begin{aligned}
\dot \sigma_{ge}^{(n)}(t)&=\left( \sigma_{gg}^{(n)}-\sigma_{ee}^{(n)} \right)\left\{\frac{g}{4i}\sqrt{\frac{2}{c}}\left[ e^{ik_0x_n}a_{\mathrm{in}}(t-x_n/c)+e^{-ik_0x_n}a_{\mathrm{in}}(t+x_n/c) \right]\right.\\
&+\left. \frac{g^2}{8c}\sum_m\left[ e^{ik_0(x_m+x_n)}\sigma^{(m)}_{ge}\left( t-\frac{x_m+x_n}{c} \right)-e^{ik_0|x_n-x_m|}\sigma^{(m)}_{ge}\left( t-\frac{|x_m-x_n|}{c} \right) \right]\right\}.
\end{aligned} \end{equation} As defined, $a_{\mathrm{in}}(t)$ is a slow variable, so if the dynamics of the system and the bandwidth of the input state around $\omega_0$ are slow on the timescale $2x_n/c$ that it takes light to propagate between the mirror and the atoms and back, we can neglect the retardation, rendering our description Markovian. The same applies to the atomic lowering operators. Finally, we arrive at a time-local equation \begin{equation}
\dot \sigma^{(n)}_{ge}(t)=\left( \sigma_{gg}^{(n)}-\sigma_{ee}^{(n)} \right)\left\{\frac{g}{\sqrt{2c}}\sin(k_0x_n)a_{\mathrm{in}}(t)
+ \frac{g^2}{8c}\sum_m\left[ e^{ik_0(x_m+x_n)}-e^{ik_0|x_n-x_m|}\right]\sigma^{(m)}_{ge}(t)\right\}. \end{equation}
To calculate the output field, we take the integrated equation of motion for the light field and apply a sine transform. This is essentially the same as the right-hand side of the equation of motion for $\sigma_{ge}^{(n)}$, except evaluated at a different point in space. Choosing this point to be $x_R+\eps$, i.e., a small distance to the right of the rightmost atom, and again neglecting retardation, we find \begin{equation}
\int_0^\infty\frac{dk}{\pi}\sin[k(x_R+\eps)]a_k(t)=-i\sqrt{\frac{2}{c}}\sin[k_0(x_R+\eps)]a_{\mathrm{in}}(t)-\frac{ig}{4c}\sum_me^{ik_0(x_R+\eps)}2i\sin(k_0x_m)\sigma^{(m)}_{ge}(t). \end{equation} Further choosing $\eps$ such that $\sin[k_0(x_R+\eps)]=1$, and defining $a_{\mathrm{out}}(t)=-\sqrt{c/2}e^{i\omega_0t}a(x_R+\eps,t)$, we have \begin{equation}
a_{\mathrm{out}}(t) = a_{\mathrm{in}}(t)-\frac{g}{\sqrt{2c}}\sum_m\sin(k_0x_m)\sigma^{(m)}_{ge}(t).
\label{eq:io} \end{equation} Finally, defining the decay rate $\Gamma_{g}=g^2/(2c)$, the Langevin and input-output equations read \begin{subequations}
\begin{align}
\dot \sigma^{(n)}_{ge}(t) &= \left( \sigma_{gg}^{(n)}-\sigma_{ee}^{(n)} \right)\left\{\sqrt{\Gamma_{g}}\sin(k_0x_n)a_{\mathrm{in}}(t)-\frac{\Gamma_{g}}{4}\sum_m\left[ e^{ik_0|x_m-x_n|}-e^{ik_0(x_n+x_m)} \right]\sigma^{(m)}_{ge}(t)\right\},\\
a_{\mathrm{out}}(t) &= a_{\mathrm{in}}(t)-\sqrt{\Gamma_{g}}\sum_m\sin(k_0x_m)\sigma^{(m)}_{ge}(t).
\end{align} \end{subequations}
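As a quick numerical sanity check on these equations (not part of the derivation), the steady state for a single atom can be evaluated under the linearization $\sigma_{gg}-\sigma_{ee}\to1$, $\sigma_{ge}\to b$ introduced at the end of this Appendix: on resonance, atom plus mirror reflect all light with a position-dependent phase. The values of $\Gamma_g$ and $k_0x_1$ below are illustrative choices, not taken from the text.

```python
import numpy as np

# Sketch: steady state of the linearized single-atom Langevin equation for a
# semi-infinite waveguide (mirror at x = 0). With sigma_gg - sigma_ee -> 1 and
# sigma_ge -> b, a resonant monochromatic drive a_in gives
#   0 = sqrt(Gamma) sin(k0 x) a_in - (Gamma/4) (1 - e^{2 i k0 x}) b,
# together with the input-output relation a_out = a_in - sqrt(Gamma) sin(k0 x) b.
Gamma = 1.0          # decay rate into the waveguide (illustrative)
theta = 0.7          # k0 * x_1, position phase of the atom (illustrative)
a_in = 1.0           # resonant drive amplitude

s = np.sin(theta)
b = np.sqrt(Gamma) * s * a_in / ((Gamma / 4) * (1 - np.exp(2j * theta)))
a_out = a_in - np.sqrt(Gamma) * s * b

# Lossless mirror plus atom: everything is reflected, with phase -2 k0 x_1.
print(abs(a_out))                      # -> 1.0
print(np.angle(a_out) + 2 * theta)     # -> 0.0
```

The output amplitude $e^{-2ik_0x_1}a_{\mathrm{in}}$ confirms that, on resonance and without free-space loss, the combined system acts as a perfect mirror.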
\subsection{Three-level systems coupled to two semi-infinite waveguide fields} In the main text we consider (effective) three-level systems with state $\ket g, \ket e, \ket s$ coupled to two waveguide fields, which respectively couple to the transition $g\leftrightarrow e$ and $s\leftrightarrow e$. Starting from the Hamiltonian \begin{equation}
H=\int_0^\infty\frac{dk}{2\pi}\left[(\omega_{1,k}-\omega_0)a_{1,k}\dagg a_{1,k}+(\omega_{2,k}-\omega_0)a_{2,k}\dagg a_{2,k}
-\sum_{n}\sin(kx_n)\left(g_1a_{1,k}\dagg \sigma^{(n)}_{ge} +g_2a_{2,k}\dagg \sigma^{(n)}_{se}+\text{H.c.}\right)\right],
\label{eq:three_level_interaction} \end{equation} the derivation follows through as above, and yields \begin{subequations}
\begin{align}
\dot \sigma^{(n)}_{ge}(t) &= \left( \sigma_{gg}^{(n)}-\sigma_{ee}^{(n)} \right)
\left\{\sqrt{\Gamma_{g}}\sin(k_0x_n)a_{\mathrm{in},1}(t)-\frac{\Gamma_{g}}{4}\sum_m\left[ e^{ik_0|x_m-x_n|}-e^{ik_0(x_n+x_m)} \right]\sigma^{(m)}_{ge}(t)\right\}\nonumber\\
&\qquad\qquad+ \sigma_{gs}^{(n)}\left\{\sqrt{\Gamma_{s}}\sin(k_0x_n)a_{\mathrm{in},2}(t)-\frac{\Gamma_{s}}{4}\sum_m\left[ e^{ik_0|x_m-x_n|}-e^{ik_0(x_n+x_m)} \right]\sigma^{(m)}_{se}(t)\right\},\\
a_{\mathrm{out},1}(t) &= a_{\mathrm{in},1}(t)-\sqrt{\Gamma_{g}}\sum_m\sin(k_0x_m)\sigma^{(m)}_{ge}(t).
\end{align} \end{subequations} together with two more equations obtained by simultaneously exchanging $1\leftrightarrow2$ and $g\leftrightarrow s$. These equations reduce to \cref{eq:QND_eom} for atoms in the atomic-mirror configuration $k_0x_n=2\pi(n+1/4)$.
\subsection{Two-level and three-level systems coupled to one and two infinite waveguide fields} The derivation for the infinite waveguide proceeds in much the same way and can be found elsewhere~\cite{Caneva2015}. A Hamiltonian that combines both bath couplings reads \begin{equation}
\begin{aligned}
H&=\sum_{\nu=\pm}\int\frac{dk}{2\pi}\sum_\alpha(\omega_k-\omega_0)a_{k,\nu,\alpha}\dagg a_{k,\nu,\alpha}
-\sum_n \left[ \sqrt{\Gamma_g}e^{i(\nu k-k_{L,1})x_n}\sigma_{eg}^n a_{k,\nu,1}+\sqrt{\Gamma_s}e^{i(\nu k-k_{L,2})x_n}\sigma_{es}^n a_{k,\nu,2}+\text{H.c.} \right].
\end{aligned}
\label{eq:infinite_dissipation_hamiltonian} \end{equation} Here, $k_{L,i}$ are phases imparted by the laser; they have little effect on the detection of incoming light. There are two waveguide fields, $\alpha=1,2$, distinguished by frequency, by polarization, or by propagating in different waveguides. As before, $\nu\in\{\pm1\}$ labels right- and left-moving modes.
From \cref{eq:infinite_dissipation_hamiltonian}, we can derive the bath equations of motion, integrate them, and Fourier transform them~\cite{Caneva2015} \begin{equation}
a_{\nu,1}(x,t) = e^{i\omega_0t}a_{\nu,1}(x-ct,0)+i\sqrt{\Gamma_g}\Theta[ (x-\nu x_n)/c ]e^{ik_1(x-\nu x_n)+ik_{L,1}x_n}\sigma_{ge}^n[t-(x-\nu x_n)/c].
\label{eq:collective_photon_field} \end{equation} For the other field, we have to exchange $1\leftrightarrow2$ and $g\leftrightarrow s$. As above, we next derive the atomic equations of motion \begin{subequations}
\begin{align}
\dot \sigma^n_{ge} &= \sum_{\nu=\pm}\int\frac{dk}{2\pi}(-i\sqrt{\Gamma_g})e^{i(\nu k-k_{L,1})x_n}a_{k,\nu,1}(\sigma_{ee}^n-\sigma_{gg}^n)
+ (i\sqrt{\Gamma_s})e^{i(\nu k-k_{L,2})x_n}a_{k,\nu,2}\sigma_{gs}^n,\\
\dot \sigma^n_{se} &= \sum_{\nu=\pm}\int\frac{dk}{2\pi}(-i\sqrt{\Gamma_s})e^{i(\nu k-k_{L,2})x_n}a_{k,\nu,2}(\sigma_{ee}^n-\sigma_{ss}^n)
+ (i\sqrt{\Gamma_g})e^{i(\nu k-k_{L,1})x_n}a_{k,\nu,1}\sigma_{sg}^n,\\
\dot \sigma^n_{gs} &= \sum_{\nu=\pm}\int\frac{dk}{2\pi}(-i\sqrt{\Gamma_g})e^{i(\nu k-k_{L,1})x_n}a_{k,\nu,1}\sigma_{es}^n
+ (i\sqrt{\Gamma_s})e^{-i(\nu k-k_{L,2})x_n}a_{k,\nu,2}\dagg\sigma_{ge}^n,
\end{align} \end{subequations} and replace the photon field [\cref{eq:collective_photon_field}] \begin{subequations}
\begin{align}
\dot \sigma_{ge}^n & = \sqrt{\Gamma_g}\sum_{\nu=\pm}e^{i(k_1\nu-k_{L,1}) x_n}(\sigma_{gg}^n-\sigma_{ee}^n)a_{\mathrm{in},\nu,1}
- \Gamma_g\sum_me^{ik_1|x_m-x_n|-ik_{L,1}(x_m-x_n)}(\sigma_{gg}^n-\sigma_{ee}^n)\sigma_{ge}^m\nonumber\\
&+\sqrt{\Gamma_s}\sum_{\nu=\pm}e^{i(k_2\nu-k_{L,2}) x_n}\sigma_{gs}^na_{\mathrm{in},\nu,2}
-\Gamma_s\sum_me^{ik_2|x_m-x_n|-ik_{L,2}(x_m-x_n)}\sigma_{gs}^n\sigma_{se}^m,\\
\dot \sigma_{se}^n & = \sqrt{\Gamma_s}\sum_{\nu=\pm}e^{i(k_2\nu-k_{L,2})x_n}(\sigma_{ss}^n-\sigma_{ee}^n)a_{\mathrm{in},\nu,2}
- \Gamma_s\sum_me^{ik_2|x_m-x_n|-ik_{L,2}(x_m-x_n)}(\sigma_{ss}^n-\sigma_{ee}^n)\sigma_{se}^m\nonumber\\
&+\sqrt{\Gamma_g}\sum_{\nu=\pm}e^{i(k_1\nu-k_{L,1}) x_n}\sigma_{sg}^na_{\mathrm{in},\nu,1}
-\Gamma_g\sum_me^{ik_1|x_m-x_n|-ik_{L,1}(x_m-x_n)}\sigma_{sg}^n\sigma_{ge}^m,\\
\dot \sigma_{gs}^n & =
\sqrt{\Gamma_g}\sum_{\nu=\pm}e^{i(k_1\nu-k_{L,1}) x_n}(-\sigma_{es}^n)a_{\mathrm{in},\nu,1}
- \Gamma_g\sum_me^{ik_1|x_m-x_n|-ik_{L,1}(x_m-x_n)}(-\sigma_{es}^n)\sigma_{ge}^m\nonumber\\
&+\sqrt{\Gamma_s}\sum_{\nu=\pm}e^{i(k_{L,2}-k_2\nu)x_n}\sigma_{ge}^na_{\mathrm{in},\nu,2}\dagg
-\Gamma_s\sum_me^{-ik_2|x_m-x_n|+ik_{L,2}(x_m-x_n)}\sigma_{ge}^n\sigma_{es}^m,\\
a_{\mathrm{out},\nu,1} & = a_{\mathrm{in},\nu,1}-\sqrt{\Gamma_g}\sum_ne^{i(k_{L,1}-\nu k_1)x_n}\sigma_{ge}^n,\\
a_{\mathrm{out},\nu,2} & = a_{\mathrm{in},\nu,2}-\sqrt{\Gamma_s}\sum_ne^{i(k_{L,2}-\nu k_2)x_n}\sigma_{se}^n.
\end{align} \end{subequations} Using the description above, the Langevin equations for two-level systems on an infinite waveguide read \begin{subequations}
\begin{align}
\dot \sigma^{(n)}_{ge}(t) &= \left( \sigma_{gg}^{(n)}-\sigma_{ee}^{(n)} \right)
\left\{\sum_{\nu\in\{\pm1\}}\sqrt{\Gamma_{g}}e^{i\nu k_0x_n}a_{\mathrm{in},\nu}(t)-\Gamma_{g}\sum_m e^{ik_0|x_m-x_n|}\sigma^{(m)}_{ge}(t)\right\},\\
a_{\mathrm{out},\nu}(t) &= a_{\mathrm{in},\nu}(t)-\sqrt{\Gamma_{g}}\sum_me^{-i\nu k_0x_m}\sigma^{(m)}_{ge}(t),
\end{align} \end{subequations} whereas the governing equations for three-level systems are \begin{subequations}
\begin{align}
\dot \sigma^{(n)}_{ge}(t) &= \left( \sigma_{gg}^{(n)}-\sigma_{ee}^{(n)} \right)
\left\{\sum_{\nu\in\{\pm1\}}\sqrt{\Gamma_{g}}e^{i\nu k_0x_n}a_{\mathrm{in},\nu,1}(t)-\Gamma_{g}\sum_m e^{ik_0|x_m-x_n|}\sigma^{(m)}_{ge}(t)\right\}\nonumber\\
& \qquad\qquad + \sigma_{gs}^{(n)}\left\{\sum_{\nu\in\{\pm1\}}\sqrt{\Gamma_{s}}e^{i\nu k_0x_n}a_{\mathrm{in},\nu,2}(t)-
\Gamma_{s}\sum_m e^{ik_0|x_m-x_n|}\sigma^{(m)}_{se}(t)\right\},\\
a_{\mathrm{out},\nu,1}(t) &= a_{\mathrm{in},\nu,1}(t)-\sqrt{\Gamma_{g}}\sum_me^{-i\nu k_0x_m}\sigma^{(m)}_{ge}(t),
\end{align} \end{subequations} together with two more equations obtained by simultaneously exchanging $1\leftrightarrow2$ and $g\leftrightarrow s$.
To linearize these equations, we substitute $\sigma_{gg}\to1$, $\sigma_{ee}\to0$ and $\sigma_{ge}\to b$, where $[b,b\dagg]=1$.
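With this linearization, the textbook result that a single resonantly driven two-level system in an infinite waveguide acts as a perfect mirror follows directly from the steady state of the equations above. The following sketch verifies this numerically; parameter values are illustrative.

```python
import numpy as np

# Sketch: resonant scattering off a single linearized two-level system in an
# infinite waveguide. Steady state of the single-atom equation with
# sigma_gg -> 1, sigma_ee -> 0, sigma_ge -> b, driven from the right-movers only:
#   0 = sqrt(Gamma) sum_nu e^{i nu k0 x} a_in,nu - Gamma b.
Gamma = 1.0
k0x = 1.3                    # k0 * x_n for the single atom (illustrative)
a_in_p, a_in_m = 1.0, 0.0    # right- and left-moving input amplitudes

b = (np.exp(1j * k0x) * a_in_p + np.exp(-1j * k0x) * a_in_m) / np.sqrt(Gamma)
a_out_p = a_in_p - np.sqrt(Gamma) * np.exp(-1j * k0x) * b
a_out_m = a_in_m - np.sqrt(Gamma) * np.exp(+1j * k0x) * b

print(abs(a_out_p))   # transmission -> 0 (full reflection on resonance)
print(abs(a_out_m))   # reflection -> 1
```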
\end{widetext}
\section{Preparing a Dicke state with collective measurements}\label{app:prep} There is another way to prepare a Dicke state in our setup: first apply a $\pi/2$-pulse to all atoms, and then measure $S_{gg}$, \emph{i.e.,}
the number of atoms in $\ket g$, which projects the state onto a symmetric state with a definite number of excitations. For large $N$, the measured $m$ has a standard deviation of order $\sqrt{N}$ around $N/2$, such that $|N/2-m|\ll N$.
A $\pi/2$ pulse can, for example, be produced by driving the two transitions $g\leftrightarrow e$ and $s\leftrightarrow e$ with lasers, each detuned by a large amount $\Delta$, or by applying a microwave tone. After the pulse, the atomic state becomes $\ket X = \prod_i(1/\sqrt{2})(1+\sigma_{sg}^{(i)})\ket G$, which can equivalently be written as $\ket X=2^{-N/2}\sum_m \sqrt{\mat{N\\ m}}\ket{N-m,m,0}$. Measuring $S_{gg}$ thus probabilistically returns the state $\ket{N-m,m,0}$, where $m$ follows a binomial distribution.
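The measurement statistics can be sketched numerically; the atom number and sample size below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: after the pi/2 pulse, measuring S_gg returns m excitations with
# m ~ Binomial(N, 1/2), so the outcome concentrates around N/2 with a spread
# of order sqrt(N).
N = 10_000
samples = rng.binomial(N, 0.5, size=100_000)

print(samples.mean())    # close to N/2 = 5000
print(samples.std())     # close to sqrt(N)/2 = 50
# Fraction of outcomes within three standard deviations of N/2:
print(np.mean(np.abs(samples - N / 2) <= 3 * np.sqrt(N) / 2))
```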
A known method to measure the number of atoms is to measure the phase shift of an off-resonant probe tone~\cite{Beguin2014}. In order to prevent excited atoms from decaying via emission of a free-space photon (essentially removing the photon from the symmetric state) or via emission of a photon in the $e\to s$ channel (thereby inducing a transition to another Dicke state), the probe has to be far detuned. However, the further detuned the probe is, the smaller the induced phase shift, which requires longer averaging times and thus leads to a trade-off. The analysis below shows that this sort of measurement limits the number of atoms to the Purcell factor.
If it is possible to turn off $\Gamma_s$, we only have to consider free-space decay. Using the input-output equations, we find the output field for a weak coherent probe with amplitude $\alpha_{\mathrm{in}}$ detuned by $\Delta\gg N_g\Gamma_{g}$ \begin{equation}
\alpha_{\mathrm{out}} = \left( 1-\frac{N_g\Gamma_g/2}{N_g\Gamma_g/2-i\Delta} \right)\alpha_{\mathrm{in}}
\simeq\left( 1+\frac{N_g\Gamma_g}{2i\Delta} \right)\alpha_{\mathrm{in}}, \end{equation} where $N_g$ is the number of atoms in state $\ket g$.
Thus, the per-photon phase shift is $\varphi \simeq -\Gamma_g/(2\Delta)$. The number of photons required to resolve single excitations therefore is $N_{\mathrm{p}}\propto4\Delta^2/\Gamma_g^2$. On the other hand, for a given probe amplitude $\alpha_{\mathrm{in}}$, the probability of an atom to be excited is $\langle S_{ee}\rangle \simeq \Gamma_g N_g|\alpha_{\mathrm{in}}/2\Delta|^2$. Thus, during the time it takes to scatter $N_{\mathrm{p}}$ photons, $\Gamma_{\mathrm{free}}N_g/\Gamma_g=N_g/P$ photons are lost. This way of preparing Dicke states is thus limited to atom numbers lower than the Purcell factor. If $\Gamma_s$ cannot be set to zero, but instead is comparable to $\Gamma_g$, then this constitutes the dominant decay channel and the requirement on the Purcell factor becomes even more stringent, $P>N_gN_s$.
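The dispersive estimate above can be checked numerically by comparing the exact expression for $\alpha_{\mathrm{out}}$ with its large-detuning expansion; the parameter values are illustrative, chosen such that $\Delta\gg N_g\Gamma_g$.

```python
import numpy as np

# Sketch: phase shift of an off-resonant probe, exact vs. expanded expression.
Gamma_g = 1.0
N_g = 100
Delta = 1e4 * N_g * Gamma_g      # Delta >> N_g Gamma_g (illustrative)

exact = 1 - (N_g * Gamma_g / 2) / (N_g * Gamma_g / 2 - 1j * Delta)
approx = 1 + N_g * Gamma_g / (2j * Delta)

phase = np.angle(exact)
print(phase)                      # ~ -N_g Gamma_g / (2 Delta)
print(abs(exact - approx))        # small compared to |approx - 1|
```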
\section{All-to-all interaction in the bandgap}\label{app:band_gap_interaction} \subsection{Strong coupling} Here we give details on how using a Raman transition allows strong coupling of the atoms when brought close to the bandgap of a waveguide and discuss briefly some limitations.
Following \cite{Douglas2015}, two-level systems interacting via the modes in the bandgap couple at a rate
$g_{\mathrm{eff}}=2\pi g_k^2/\sqrt{\Delta \alpha}$, where $g_k$ is the coupling to the individual waveguide modes (assumed constant), $\Delta$ is the effective detuning of the renormalized transition frequency from the band edge and $\alpha$ is a parameter to characterize the band curvature at the band edge, via $\omega_k=\omega_b+\alpha k^2$. If $g_k$ is the effective coupling rate obtained from adiabatically eliminating an excited state $\ket e$ in a Raman transition, it takes the form $g_k=g_{k,0}(\Omega/\delta)$, where $\Omega$ is the pump Rabi frequency and $\delta$ is the detuning of the pump from the transition. Thus, \begin{equation}
g_{\mathrm{eff}} = \frac{1}{\sqrt{\Delta}} \frac{2\pi g_{k,0}^2}{\sqrt{\alpha}}\left(\frac{\Omega}{\delta}\right)^2.
\label{eq:full_geff} \end{equation} In order for the Markov approximation and the adiabatic elimination to be valid, we require $\delta\gg\Omega$ and $\sqrt{\Delta\alpha}\gg g_k$, so $g_{\mathrm{eff}}$ will be small. Fortunately, reducing $\Omega/\delta$ also reduces free-space decay in the same way, $\Gamma_{\mathrm{free,eff}}=\Gamma_{\mathrm{free}}(\Omega/\delta)^2$.
Since the atom-atom coupling additionally scales with the effective waveguide density of states, $g_{\mathrm{eff}}\propto(\alpha\Delta)^{-1/2}$, the Purcell factor $P_{\mathrm{eff}}=g_{\mathrm{eff}}/\Gamma_{\mathrm{free,eff}}\propto(\alpha\Delta)^{-1/2}$ can be made very large, independent of the original Purcell factor, while preserving the Markov condition $\Delta\gg g$. As this analysis shows, physically this relies on increasing the effective waveguide density of states. At the same time, this has the effect of making the extent of the bound state, and therefore the decay length $L$ of the induced interaction, much larger than the extent of the atomic array, such that the all-to-all interaction on the right-hand side of \cref{eq:H_dicke} becomes a very good approximation.
\subsection{Engineering $S_+^{(1)}S_-^{(2)}+\text{H.c.}$}\label{app:engineering_H2} In order to instead realize a Hamiltonian with the interaction $S_+^{(1)}S_-^{(2)}+\text{H.c.}$ between two atomic arrays, one needs to combine two bands of the same waveguide (or two waveguides), and additionally make use of the spatial profile of the induced atom-atom interaction. The full expression for the coupling induced between the atoms through a waveguide band is given by~\cite{Douglas2015} \begin{equation}
H=\frac{2\pi g_k^2}{\Delta L}\sum_{ij}\sigma_{sg}^{(i)}\sigma_{gs}^{(j)}E_{k_0}(x_i)E_{k_0}^*(x_j)e^{-|x_i-x_j|/L},
\label{eq:full_coupling} \end{equation} which differs from \cref{eq:H_dicke} through the addition of the spatial profile $E_{k_0}(x)$ of the bound state induced by coupling an atom at position $x$, where $k_0$ is the wavevector at the bandgap. A simple model of a waveguide exhibits bands with a dispersion $\omega_k\sim\cos(ka)$. As such, there are two band edges at wavevectors $k_0=0$ and $k_0=\pi/a$. In more complex models the bandgaps might occur at different values of $k_0$, but these are generically not expected to all coincide, and certainly not between two different waveguides. With an otherwise constant mode profile, we can approximate $E_{k_0}(x_i)\approx e^{ik_0 x_i}$. For simplicity we will set the lattice constant $a=1$ and take $k_0=\pi$, but this is by no means required.
Combining two waveguides (waveguide bands), one with positive detuning $\Delta$, the other with the opposite detuning $-\Delta$, and placing both arrays of atoms in atomic mirror configurations, but spaced by an odd half-integer-multiple of wavelengths from each other, the two waveguides induce the interaction \begin{equation}
\begin{aligned}
H_2 &= \frac{2\pi g_k^2}{\Delta L}\left[ S_{sg}S_{gs} - (S^{(1)}_{sg}-S^{(2)}_{sg})(S^{(1)}_{gs}-S^{(2)}_{gs}) \right]\\
&=2g_{\mathrm{eff}}\left( S^{(1)}_{sg}S^{(2)}_{gs}+\mathrm{H.c.} \right),
\end{aligned}
\label{eq:H2} \end{equation} as required. Note that we have taken $L$ to be larger than the whole configuration, and chosen $g_{\mathrm{eff}}$ to be the same for both waveguides, which is not unreasonable as they can be tuned with the Raman transition. Furthermore, we note that the requirement of placing the atoms in two atomic mirror configurations spaced by an odd multiple of half a wavelength is not any more stringent than producing a single atomic mirror configuration, although it certainly is experimentally more challenging than the disordered configurations we have considered in other parts of the main text.
\subsection{Interaction time scales}\label{app:time_scales} While the full Hilbert space of two arrays of $N$ spins each is $2^{2N}$ dimensional, the Hamiltonian \cref{eq:H2} connects the fully symmetric state $\ket{N,0}=\ket{N}_1\otimes\ket{0}_2$ to only $N$ other states, $\{\ket{N-m,m}\}$. As in the main text, our convention here is that $\ket{m}$ denotes a fully symmetric spin state in which $m$ out of $N$ atoms are excited, whereas $\ket{m_1,m_2}$ denotes two arrays of $N$ atoms each, with $m_1$ and $m_2$ excitations, respectively.
In this space of $N+1$ states, the Hamiltonian \cref{eq:H2} is a matrix of the form \begin{equation}
\mathcal H_2=\mat{0 & a_1 & 0 &\cdots \\ a_1 & 0 & a_2 & \\ 0 & a_2 & 0 &\ddots \\
\vdots&& \ddots&\ddots},
\label{eq:H2_matrix} \end{equation} where \begin{equation}
a_m=2g_{\mathrm{eff}}m(N-m+1).
\label{eq:am} \end{equation} As an aside, we note that this is similar to the model studied in Ref.~\cite{Christandl2004}, except with the elements of the matrix squared. The connection arises since the model with off-diagonal elements $\sqrt{a_m}$ maps to the $N$-excitation subspace of two coupled harmonic oscillators $H=a\dagg b+\mathrm{H.c.}$ (leading to perfect state transfer), while here we instead study two coupled spins $H_2=2g_{\mathrm{eff}}S_+^{(2)}S_-^{(1)}+\mathrm{H.c.}$
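The dynamics generated by this tridiagonal matrix can be explored directly by exact diagonalization. The sketch below works in units $2g_{\mathrm{eff}}=1$ with an illustrative $N$, evolves the initial state $\ket{N,0}$ (index $m=0$), and tracks the mean excitation number of the second array.

```python
import numpy as np

# Build the (N+1) x (N+1) tridiagonal matrix with off-diagonal elements
# a_m = m (N - m + 1), in units where 2 g_eff = 1 (N is illustrative).
N = 30
m = np.arange(1, N + 1)
a = m * (N - m + 1)
H = np.diag(a, 1) + np.diag(a, -1)      # real symmetric tridiagonal matrix

# Evolve |N, 0> (index m = 0) by exact diagonalization.
vals, vecs = np.linalg.eigh(H)
psi0 = np.zeros(N + 1)
psi0[0] = 1.0

def mean_m(t):
    """Mean excitation number of the second array at time t."""
    psi_t = vecs @ (np.exp(-1j * vals * t) * (vecs.T @ psi0))
    return float(np.sum(np.arange(N + 1) * np.abs(psi_t) ** 2))

ts = np.linspace(0, 0.5, 200)
traj = [mean_m(t) for t in ts]
print(traj[0])      # 0: all excitations start in the first array
print(max(traj))    # a substantial fraction of the excitations transfers
```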
If the index $m=x$ is understood as a spatial coordinate, the Hamiltonian $H_2$ can be rewritten as \begin{equation}
\begin{aligned}
H_2&= e^{i\hat p}a(\hat x)+\mathrm{H.c.}\\
&= 2a(\hat x)+i[\hat p,a(\hat x)]-\frac{1}{2}\{ \hat p^2,a(\hat x) \}+\mathcal O(p^3),
\end{aligned}
\label{eq:H2_spatial} \end{equation} where $[\hat x,\hat p]=i$, such that $e^{i\hat p}$ generates a translation by $-1$. In order to get some insight into the dynamics of $H_2$, we consider the classical long-wavelength limit neglecting terms $\hat p^{n}$ with $n\geq3$ and replacing $\hat x\to x,\hat p\to p$, which yields \begin{equation}
H_2(x,p) = 2a(x) + a'(x) - a(x)p^2=V(x)-\frac{p^2}{2m(x)},
\label{eq:H_classical} \end{equation} where \begin{equation}
a(x) = 2g_{\mathrm{eff}}(x+1)(N-x).
\label{eq:a(x)} \end{equation} This Hamiltonian gives rise to periodic trajectories. A more physical Hamiltonian would be obtained by sending $p^2\to-p^2$ and $H\to-H$, but the dynamics are the same.
Starting at $x(t=0)=p(t=0)=0$, the conserved energy of the particle is $H(0,0)=E_0=2g_{\mathrm{eff}}(3N-1)$. This allows us to solve for the momentum as a function of position \begin{equation}
p^2(x) = 2m(x)\left[ -E_0+V(x) \right],
\label{eq:p(x)} \end{equation} which can be used to calculate the period of the orbit \begin{equation}
\begin{aligned}
T(E_0)&=2\int_0^{N-2}dx\left( \frac{1}{\dot x} \right)\\
&=\int_0^{N-2}\!\!\frac{dx}{\sqrt{(x+1)(N-x)2x(N-2-x)}}.
\end{aligned}
\label{eq:TE0} \end{equation} The upper limit of the integral is the point at which the particle turns around, which can be found from $p(x)=0$. For large $N$, the integral may be approximated through \begin{equation}
\begin{aligned}
T(E_0)&\approx2\int_0^{N/2}dx\left[ (x+1)2xN^2 \right]^{-1/2}\\
&=\frac{\sqrt{2}\log(1+N+\sqrt{N(2+N)})}{N}\\
&\approx\sqrt2\log(N)/N.
\end{aligned}
\label{eq:T_approx} \end{equation}
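This approximation can be checked against direct numerical quadrature of the period integral. The sketch below uses the substitution $x=(N-2)(1-\cos\theta)/2$, which removes the inverse-square-root endpoint singularities; units are again $2g_{\mathrm{eff}}=1$, and $N$ is an illustrative choice.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: compare the large-N closed form for T(E_0) with direct quadrature.
# Substituting x = (N-2)(1 - cos(theta))/2 into the period integral turns
# dx / sqrt(2 x (N-2-x)) into d(theta)/sqrt(2), leaving a smooth integrand.
N = 200

def integrand(theta):
    x = (N - 2) * (1 - np.cos(theta)) / 2
    return 1.0 / np.sqrt(2 * (x + 1) * (N - x))

T_num, _ = quad(integrand, 0, np.pi)
T_approx = np.sqrt(2) * np.log(1 + N + np.sqrt(N * (2 + N))) / N

print(T_num, T_approx)   # agree at the ten-percent level for N = 200
```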
For completeness, we mention another way to arrive at this result. Instead of \cref{eq:H2}, consider the Hamiltonian \begin{equation}
H_3=g_{\mathrm{eff}}\left[(a\dagg)^2b^2+\mathrm{H.c.}\right],
\label{eq:H_squeeze} \end{equation} where $a,b$ are the annihilation operators for two harmonic oscillators. In the subspace spanned by the states $\ket{2N-2m}_1\otimes\ket{2m}_2$, where $\{\ket{m}\}$ are now Fock states of an oscillator, the Hamiltonian is again given by a matrix of the same form as \cref{eq:H2_matrix}, except that now \begin{equation}
a_m=2g_{\mathrm{eff}}\sqrt{\left(N-m+\frac12\right)(N-m+1)m\left(m-\frac12\right)}.
\label{eq:am_new} \end{equation} For large $N,m$ these are essentially the same matrix elements as before, and one can check numerically that the dynamics is very similar for large enough $N$. In the classical limit, we replace the annihilation operators by the amplitudes of the corresponding coherent states. The classical mean-field equations of motion read \begin{equation}
\frac{d}{dt}\mat{\alpha\\ \beta}=-2ig_{\mathrm{eff}}\mat{\alpha^*\beta^2\\ \beta^*\alpha^2},
\label{eq:squeezing_eom} \end{equation} with initial conditions $\beta(0)=1$, $\alpha(0)=\sqrt{2N}$.
Amplitude and phase degrees of freedom can be separated by change of variables to $\alpha=a\exp(ix)$, $\beta=b\exp(iy)$, which follow equations of motion \begin{subequations}
\begin{align}
\dot a &= -ab^2\sin\phi,\qquad \dot x=-b^2\cos\phi,\\
\dot b &= +a^2b\sin\phi,\qquad \dot y=-a^2\cos\phi,
\end{align} \end{subequations} where $\phi=2x-2y$ and we have redefined time to include the factor $2g_{\mathrm{eff}}$. Further defining the constant $K=a^2+b^2$, and the variable $r=a^2-b^2$, we obtain the equations of motion \begin{subequations}
\begin{align}
\dot\phi &= 2r\cos\phi ,\\
\dot r &= (r^2-K^2)\sin\phi.
\end{align} \end{subequations} The shape of the resulting periodic orbits can be found by integrating $dr/d\phi$.
Since the phase $\phi$ is ill-defined in the initial state, we are free to choose $\phi=\pi/2$. In this case, the system is governed only by \begin{equation}
\dot r = r^2-K^2,
\label{eq:switch_eom} \end{equation} which is readily integrated to yield \begin{equation}
\left[\tanh^{-1}\left(\frac{r}{K}\right)\right]_{r(t_i)}^{r(t_f)}=-2K(t_f-t_i).
\label{eq:swich_soln} \end{equation} Since initially, $r/K\simeq 1-1/N$ and $K\simeq N$, we conclude that the natural timescale for the switch is \begin{equation}
T\simeq \frac{1}{N}\tanh^{-1}\left( 1-\frac{1}{N} \right)\sim\frac{\log(N/2)}{N},
\label{eq:switch_timescale} \end{equation} the same result as before. We note that if another phase $\phi$ is chosen initially, $\phi$ first rapidly evolves to a value close to $\pi/2$, after which the system evolution is very similar.
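The switch equation can also be integrated numerically and compared with its closed-form solution. In the conventions used here, $u=r/K$ obeys $d\,\mathrm{artanh}(u)/dt=-K$ (the overall numerical prefactor depends on the time units chosen above), so $r(t)=K\tanh[\mathrm{artanh}(r_0/K)-Kt]$ and the zero crossing occurs at $t_0=\mathrm{artanh}(r_0/K)/K\sim\log(N)/N$. The value of $N$ is illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: integrate dr/dt = r^2 - K^2 and compare with the tanh solution.
N = 50
K = 2 * N + 1            # K = a^2 + b^2 for alpha(0)^2 = 2N, beta(0)^2 = 1
r0 = 2 * N - 1           # r = a^2 - b^2 at t = 0

sol = solve_ivp(lambda t, r: r**2 - K**2, (0, 0.1), [r0],
                dense_output=True, rtol=1e-10, atol=1e-8)

ts = np.linspace(0, 0.1, 50)
r_num = sol.sol(ts)[0]
r_exact = K * np.tanh(np.arctanh(r0 / K) - K * ts)

t0 = np.arctanh(r0 / K) / K   # zero crossing, ~ log(N)/N up to a prefactor
print(np.max(np.abs(r_num - r_exact)))   # small
print(t0)
```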
\section{Numerical simulation of multi-photon scattering}\label{app:numerics} In this Appendix we briefly outline the theory behind our simulation of multi-photon scattering, following Ref.~\cite{Kiilerich2019}. The key idea is to compute the time-evolution of the system and bath for a specific input field using the Langevin equation~\eqref{eq:QND_eom} in combination with the input-output equations~\eqref{eq:qnd_io}.
In general, an input wavepacket can be described through one or more modes of the waveguide in which photons are created on top of the vacuum $\ket0$, \begin{equation}
\ket{\psi_{\mathrm{in}}}=\prod_j \int dx_j \psi_j(x_j)a\dagg(x_j)\ket{0}.
\label{eq:input_field} \end{equation} As a function of time, the field travels through the waveguide, and eventually the photons interact with the atoms locally. In this spatiotemporal description, the quantum system couples to the waveguide at a constant rate while the wavefunction of the photons changes in space. It has been shown that the dynamics can be equivalently captured by instead modulating the coupling between the system and the input mode according to the mode shape~\cite{Kiilerich2019}.
Specifically, here we assume that all $m_p$ photons of the input field are in one mode, such that the waveguide field can be written (at $t=0$, before the photons interact with the system) \begin{equation}
\ket{\psi_{\mathrm{in}}}=\left[\int_{-\infty}^0 dx\, \psi_{\mathrm{in}}(x)a\dagg(x)\right]^{m_p}\ket0,
\label{eq:waveguide_field} \end{equation} with \begin{equation}
\psi_{\mathrm{in}}(x)=(2\pi\Gamma_g^2)^{-1/2}\exp\left(-\frac{(x/c+t_0)^2}{2\Gamma_g^2}\right).
\label{eq:wavefunction} \end{equation} Since this is assumed to be the wavepacket before the interaction, it has support only for negative $x$ (we assume the wavepacket is incident from the right in \cref{fig:sketch}). After a time $t_0$, the centre of the wavepacket (travelling at speed $c$) has arrived at the system.
Replacing the spatiotemporal evolution of wavepackets in the waveguide with a time-dependent coupling constant, the input-output equations can be replaced by a quantum master equation~\cite{Kiilerich2019} \begin{equation}
\dot\rho=-i[H(t),\rho]+\mathcal D[L(t)]\rho,
\label{eq:QME} \end{equation} where the Hamiltonian \begin{equation}
\begin{aligned}
H(t)= \frac{i}{2}&\left[ \sqrt{\Gamma_g}g_{\mathrm{in}}^*(t)a_{i}\dagg(S_{gg}-S_{ee}) S_{ge}\right.\\
-&\left.\sqrt{\Gamma_s}g_{\mathrm{out}}^*(t)a_o\dagg S_{gs}S_{se}
-\mathrm{H.c.}\right]
\end{aligned}
\label{eq:SLH_H} \end{equation} describes coupling to the input mode, and the jump operator \begin{equation}
L(t) = \sqrt{\Gamma_g}S_{ge}+\sqrt{\Gamma_s}S_{se}
+g_{\mathrm{in}}(t)a_i + g_{\mathrm{out}}(t)a_o
\label{eq:jump} \end{equation} describes the dissipation induced by the coupled waveguides. We determine the time-evolution given by \cref{eq:QME} using QuTiP~\cite{Qutip}.
Note a subtlety when comparing with the equation corresponding to \cref{eq:SLH_H} derived by Kiilerich and M{\o}lmer~\cite{Kiilerich2019}, which features an additional term $g_{\mathrm{out}}^*(t)g_{\mathrm{in}}(t)a_i\dagg a_o$. This term arises when calculating the scattering from one input mode to the corresponding output mode, which in our case might be $a_{\mathrm{in},1}\to a_{\mathrm{out},1}$, and can be thought of as arising from the first term in the input-output equation \cref{eq:qnd_io}. Here, we are interested in the scattering $a_{\mathrm{in},1}\to a_{\mathrm{out},2}$, where this term is absent.
In the above expressions, the time-dependent coupling strengths are given by \begin{equation}
g_{\mathrm{in}}(t) = \frac{\tilde\psi_{\mathrm{in}}(t)}{\sqrt{1-\int_0^tdt'|\psi_{\mathrm{in}}(t')|^2}}
\label{eq:gin} \end{equation} and \begin{equation}
g_{\mathrm{out}}(t) =-\frac{\tilde\psi_{\mathrm{out}}(t)}{\sqrt{\int_0^tdt'|\psi_{\mathrm{out}}(t')|^2}}.
\label{eq:gout} \end{equation}
Here, the initial time of the simulation has been chosen to be $t=0$, and since photons are assumed to travel at speed $c$, the temporal wavefunction is related to the spatial wavefunction through $\tilde\psi_{\mathrm{in}}(t)=\psi_{\mathrm{in}}(-ct)$. For consistency, both $g_{\mathrm{in}}(t)$ and $g_{\mathrm{out}}(t)$ are taken to be zero before $t=0$, and $\tilde\psi_{\mathrm{in}}$ should be normalized for positive times, $\int_0^\infty dt\,|\tilde\psi_{\mathrm{in}}(t)|^2=1$. This is true for our choice \cref{eq:wavefunction} as long as $\Gamma_{g}^{-1}\ll t_0$. Since $t_0$ is an arbitrary offset, it can always be chosen to fulfil this condition.
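The behaviour of these couplings can be illustrated for a normalized Gaussian input mode: at the pulse centre, half the mode has been absorbed, so the coupling is enhanced by a factor $\sqrt2$ relative to the mode amplitude there. The temporal width $\tau$ and offset $t_0$ below are illustrative stand-ins for the specific choices in the text.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: the time-dependent coupling g_in(t) for a normalized Gaussian mode.
tau = 1.0      # temporal width (illustrative)
t0 = 8.0       # pulse centre, t0 >> tau so the mode fits in t > 0

def psi(t):
    """Normalized Gaussian temporal mode function."""
    return (np.pi * tau**2) ** -0.25 * np.exp(-((t - t0) ** 2) / (2 * tau**2))

# The upper limit 40 is well past the pulse, standing in for infinity.
norm, _ = quad(lambda t: psi(t) ** 2, 0, 40)
print(norm)                    # ~ 1: the mode is normalized for t > 0

def g_in(t):
    """Coupling to the input mode: remaining amplitude over unabsorbed norm."""
    absorbed, _ = quad(lambda s: psi(s) ** 2, 0, t)
    return psi(t) / np.sqrt(1 - absorbed)

print(g_in(t0) / psi(t0))      # ~ sqrt(2): half the mode is already absorbed
```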
If the system and the input field are initially in states of known excitation number, which we assume to be the case throughout, then it is clear that for $m_p$ photons in the input wavepacket, we need to consider at most $m_p$ excitations in the bosonic modes $a_o$ and $a_i$ as well as the system. Recall that we assume a symmetric Dicke state as starting point, $\ket{\psi_{\mathrm{sys}}(0)}=\ket{N/2-m_0,N/2,0}$. As a result, the required Hilbert space has a dimension of $(m_p+1)^3$.
We note that this formalism does not specify what shape the output wavepacket $\psi_{\mathrm{out}}$ has. Indeed, it is generically not even true that it can be described through a single mode, nor is it true in general that all photons are emitted into the second channel rather than the first. The advantage of this formalism in our case is that the system dynamics are independent of the choice of $\psi_{\mathrm{out}}(t)$. Instead, if the set of output modes considered does not comprise all modes of the physical output field, the corresponding photons are lost. In the extreme case, when $g_{\mathrm{out}}(t)=0$, the QME \cref{eq:QME} describes the system dynamics for a given input, but does not yield any information about the output photonic state.
Here, we use this property by setting the output mode equal to the input mode, $\psi_{\mathrm{out}}(t)=-\psi_{\mathrm{in}}(t)$. This is equivalent to asking how many photons are transmitted without changing the shape of the wavepacket -- \emph{i.e.,} performing a QND measurement. If the final number of excitations in mode $a_o$ is equal to the number of input photons $m_p$, all photons have been transmitted faithfully via the second channel. This defines the fidelity $\mathcal F$, which we plot in \cref{fig:QND}. Mathematically, it is defined as \begin{equation}
\mathcal F_{m_p}=
\bra{m_p}\tr_{\mathrm{input,sys}}[\rho]\ket{m_p},
\label{eq:fidelity_definition} \end{equation} where $\ket{m_p}$ is the $m_p^{\mathrm{th}}$ Fock state of the output mode $a_o$. This is the probability that all photons have been transmitted into the specified output mode.
\begin{figure}
\caption{\textbf{Comparison between presence (blue) and absence (green) of free-space decay.}
Overall, the effect of free-space decay on the fidelity is clearly captured by introducing the branching ratio as we have done in \cref{eq:F1}.
This brings down the single-photon fidelity considerably.
Interestingly, though, as we consider multi-photon scattering, the effect of free-space decay diminishes relative to the effect of the non-linearities introduced.
Naturally, it never aids the fidelity, but in the end, the fidelity retains its favourable scaling with atom number, most clearly illustrated in \cref{fig:QND}a.
}
\label{fig:QND_purcell}
\end{figure}
\section{Effect of finite Purcell factor on performance of QND detector} In this Appendix, we offer a side-by-side comparison of the fidelity with a finite Purcell factor, as shown in \cref{fig:QND} in the main text, and with an infinite Purcell factor. This allows one to discern which features come from free-space decay, and which from nonlinearities and finite bandwidth. It also provides further verification of the predicted approximate analytical fidelity \cref{eq:F1,eq:fidelity} through simulations. The results are shown in \cref{fig:QND_purcell} and are commented on in the caption.
\section{Fast thermal motion of atoms}\label{app:motion} In the main text we have only considered static disorder, which is valid for slowly moving atoms. If the thermal motion of atoms is fast, some of the effect of disorder will be averaged out. This can be modelled by instead averaging the atomic coupling over a distribution of atomic positions~\cite{Porras2008}. Assuming a Gaussian distribution of positions around the atomic mirror configuration, captured by the random variable $y_m$, \begin{equation}
x_m = \frac{\pi}{k_0}\left(\frac{1}{2}+2m\right)+y_m,
\label{eq:position_fluctuations} \end{equation} we can calculate the off-diagonal coupling \begin{equation}
\begin{aligned}
\bar g_{mn}&=-\frac{\Gamma_{g}}{8\pi\sigma^2}\iint dy_mdy_ne^{-(y_m^2+y_n^2)/2\sigma^2}\\
&\qquad\times\left[ e^{ik_0|y_m-y_n|}+e^{ik_0(y_m+y_n)} \right].
\end{aligned}
\label{eq:coupling_average} \end{equation} If $m=n$, there should only be one integral, giving $\bar g_{nn}=-[1+\exp(-k_0^2\sigma^2)]\Gamma_{g}/4$. In the case $m\neq n$, we can straightforwardly evaluate the second term, which yields $-\Gamma_{g}e^{-k_0^2\sigma^2}/4$ overall. For the first term, we first shift $y_m\to y_m+y_n$, in which case the $y_n$ integral becomes a straightforward Gaussian giving a factor of $\sqrt{\pi\sigma^2}e^{-y_m^2/4\sigma^2}$. The leftover integral reads \begin{equation}
\begin{aligned}
&-\frac{\Gamma_{g}}{8\sqrt{\pi\sigma^2}}\int dy_me^{-y_m^2/4\sigma^2}e^{ik_0|y_m|}\\
&=-\frac{\Gamma_{g}}{4}e^{-k_0^2\sigma^2}\left[ 1+i\mathrm{erfi}(k_0\sigma) \right].
\end{aligned}
\label{eq:leftover_integral} \end{equation} Taken together, we get \begin{equation}
\bar g_{mn}=-\frac{\Gamma_{g}}{2}\left[ e^{-k_0^2\sigma^2}+\frac{i}{\sqrt{\pi}}F(k_0\sigma) \right],
\label{eq:averaged_coupling} \end{equation}
where $F(x)$ is the purely real Dawson integral. It is odd, peaked near $x=1$, obeys $|F(x)|<0.6$, and satisfies $F(x)\to0^\pm$ as $x\to\pm\infty$. \Cref{eq:averaged_coupling} predicts that the coupling decreases exponentially in $(k_0\sigma)^2$. For low to moderate $k_0\sigma$, the effect of fast thermal motion can simply be accounted for by re-scaling the couplings, without affecting the conclusions in the main text.
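As a sanity check, the closed form \cref{eq:averaged_coupling} can be compared against direct numerical integration of \cref{eq:coupling_average}. The following sketch (not part of the original analysis; it assumes SciPy's `dawsn` for the Dawson integral and arbitrarily sets $\Gamma_g=\sigma=1$) confirms the agreement:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import dawsn

GAMMA = 1.0   # decay rate Gamma_g (arbitrary units, assumption)
SIGMA = 1.0   # position spread sigma (arbitrary units, assumption)

def g_closed(k0_sigma):
    # Closed form: -Gamma/2 * [exp(-(k0 sigma)^2) + i F(k0 sigma)/sqrt(pi)]
    x = k0_sigma
    return -0.5 * GAMMA * (np.exp(-x**2) + 1j * dawsn(x) / np.sqrt(np.pi))

def g_numeric(k0_sigma):
    # Direct double integral of the averaged off-diagonal coupling (m != n)
    k0 = k0_sigma / SIGMA
    def f(ym, yn):
        return (np.exp(-(ym**2 + yn**2) / (2 * SIGMA**2))
                * (np.exp(1j * k0 * abs(ym - yn)) + np.exp(1j * k0 * (ym + yn))))
    L = 8 * SIGMA  # integration cutoff; the Gaussian weight is negligible beyond
    re, _ = dblquad(lambda ym, yn: f(ym, yn).real, -L, L, lambda _: -L, lambda _: L)
    im, _ = dblquad(lambda ym, yn: f(ym, yn).imag, -L, L, lambda _: -L, lambda _: L)
    return -GAMMA / (8 * np.pi * SIGMA**2) * (re + 1j * im)

for x in (0.3, 1.0):
    assert abs(g_numeric(x) - g_closed(x)) < 1e-5
```

The quadrature reproduces both the exponential suppression of the real part and the Dawson-integral imaginary part.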
\end{document}
\begin{document}
\title{On slant helices in Minkowski space $\hbox{\bf E}_1^3$} \author{ Ahmad T. Ali\\Mathematics Department\\
Faculty of Science, Al-Azhar University\\
Nasr City, 11448, Cairo, Egypt\\ email: atali71@yahoo.com\\ \vspace*{1cm}\\
Rafael L\'opez\footnote{Partially supported by MEC-FEDER
grant no. MTM2007-61775.}\\ Departamento de Geometr\'{\i}a y Topolog\'{\i}a\\ Universidad de Granada\\ 18071 Granada, Spain\\ email: rcamino@ugr.es} \date{}
\maketitle \begin{abstract} We consider a curve $\alpha=\alpha(s)$ in Minkowski 3-space $\hbox{\bf E}_1^3$ and denote by $\{\hbox{\bf T},\hbox{\bf N},\hbox{\bf B}\}$ the Frenet frame of $\alpha$. We say that $\alpha$ is a slant helix if there exists a fixed direction $U$ of $\hbox{\bf E}_1^3$ such that the function $\langle \hbox{\bf N}(s),U\rangle$ is constant. In this work we give characterizations of slant helices in terms of the curvature and torsion of $\alpha$. \end{abstract}
\emph{MSC:} 53C40, 53C50
\emph{Keywords}: Minkowski 3-space; Frenet equations; Slant helix.
\section{Introduction and statement of results}
Let $\hbox{\bf E}_1^3$ be the Minkowski 3-space, that is, $\hbox{\bf E}_1^3$ is the real vector space $\hbox{\bb R}^3$ endowed with the standard flat metric $$\langle,\rangle=dx_1^2+dx_2^2-dx_3^2,$$
where $(x_1,x_2,x_3)$ is a rectangular coordinate system of $\hbox{\bf E}_1^3$. An arbitrary vector $v\in\hbox{\bf E}_1^3$ is said to be spacelike if $\langle v,v\rangle>0$ or $v=0$, timelike if $\langle v,v\rangle<0$, and lightlike (or null) if $\langle v,v\rangle =0$ and $v\neq0$. The norm (length) of a vector $v$ is given by $\parallel v\parallel=\sqrt{|\langle v,v\rangle|}$.
Given a regular (smooth) curve $\alpha:I\subset\hbox{\bb R}\rightarrow\hbox{\bf E}_1^3$, we say that $\alpha$ is spacelike (resp. timelike, lightlike) if all of its velocity vectors $\alpha'(t)$ are spacelike (resp. timelike, lightlike). If $\alpha$ is spacelike or timelike, we say that $\alpha$ is a non-null curve. In such a case, there exists a change of the parameter $t$, namely, $s=s(t)$, such that $\parallel\alpha'(s)\parallel=1$. We say then that $\alpha$ is parametrized by the arc-length parameter. If the curve $\alpha$ is lightlike, the acceleration vector $\alpha''(t)$ must be spacelike for all $t$. Then we change the parameter $t$ by $s=s(t)$ in such a way that $\parallel \alpha''(s)\parallel=1$ and we say that $\alpha$ is parametrized by the pseudo arc-length parameter. In any of the above cases, we say that $\alpha$ is a unit speed curve.
Given a unit speed curve $\alpha$ in Minkowski space $\hbox{\bf E}_1^3$, it is possible to define a Frenet frame $\{\hbox{\bf T}(s),\hbox{\bf N}(s),\hbox{\bf B}(s)\}$ associated to each point $s$ \cite{ku,wa}. Here $\hbox{\bf T}$, $\hbox{\bf N}$ and $\hbox{\bf B}$ are the tangent, normal and binormal vector fields, respectively. The geometry of the curve $\alpha$ can be described by the differentiation of the Frenet frame, which leads to the corresponding Frenet equations. Although different expressions of the Frenet equations appear depending on the causal character of the Frenet trihedron (see the next sections below), we have the concepts of curvature $\kappa$ and torsion $\tau$ of the curve. With this preparatory introduction, we give the following
\begin{definition} A unit speed curve $\alpha$ is called a slant helix if there exists a constant vector field $U$ in $\hbox{\bf E}_1^3$ such that the function $\langle \hbox{\bf N}(s),U\rangle$ is constant. \end{definition}
This definition is motivated by what happens in Euclidean ambient space $\hbox{\bf E}^3$. In this setting, we recall that a helix is a curve where the tangent lines make a constant angle with a fixed direction. Helices are characterized by the fact that the ratio $\tau/\kappa$ is constant along the curve \cite{dc}. Helices in Minkowski space have been studied depending on the causal character of the curve $\alpha$: see for example \cite{fgl,ko,ps}.
Recently, Izumiya and Takeuchi have introduced the concept of slant helix in Euclidean space by saying that the normal lines make a constant angle with a fixed direction \cite{it}. They characterized slant helices by the property that the function \begin{equation}\label{slant} \dfrac{\kappa^2}{(\kappa^2+\tau^2)^{3/2}}\Big(\dfrac{\tau}{\kappa}\Big)' \end{equation} is constant. See also \cite{ky, okkk}. Thus, our definition of slant helix is the Lorentzian version of the Euclidean one. It is important to point out, however, that in contrast to what happens in Euclidean space, in the Minkowski ambient space one cannot define the angle between two vectors (unless both vectors are timelike). For this reason, we avoid speaking of the angle between the vector fields $\hbox{\bf N}(s)$ and $U$.
Our main result in this work is the following characterization of slant helices in the spirit of the one given in equation (\ref{slant}). We will assume throughout this work that the curvature and torsion functions do not vanish. Precisely, we prove
\begin{theorem}\label{t1} Let $\alpha$ be a unit speed timelike curve in $\hbox{\bf E}_1^3$. Then $\alpha$ is a slant helix if and only if either one of the next two functions \begin{equation}\label{slant2} \frac{\kappa^2}{(\tau^2-\kappa^2)^{3/2}}\Big(\dfrac{\tau}{\kappa}\Big)'\hspace*{1cm}\mbox{or}\hspace*{1cm} \frac{\kappa^2}{(\kappa^2-\tau^2)^{3/2}}\Big(\dfrac{\tau}{\kappa}\Big)' \end{equation} is constant wherever $\tau^2-\kappa^2$ does not vanish. \end{theorem}
\begin{theorem}\label{t2} Let $\alpha$ be a unit speed spacelike curve in $\hbox{\bf E}_1^3$. \begin{enumerate} \item If the normal vector of $\alpha$ is spacelike, then $\alpha$ is a slant helix if and only if either one of the next two functions \begin{equation}\label{slant3} \frac{\kappa^2}{(\tau^2-\kappa^2)^{3/2}}\Big(\dfrac{\tau}{\kappa}\Big)'\hspace*{1cm}\mbox{or}\hspace*{1cm} \frac{\kappa^2}{(\kappa^2-\tau^2)^{3/2}}\Big(\dfrac{\tau}{\kappa}\Big)' \end{equation} is constant wherever $\tau^2-\kappa^2$ does not vanish. \item If the normal vector of $\alpha$ is timelike, then $\alpha$ is a slant helix if and only if the function \begin{equation}\label{slant4} \frac{\kappa^2}{(\tau^2+\kappa^2)^{3/2}}\Big(\dfrac{\tau}{\kappa}\Big)' \end{equation} is constant. \item Any spacelike curve with lightlike normal vector is a slant helix. \end{enumerate} \end{theorem}
In the case that $\alpha$ is a lightlike curve, we have
\begin{theorem}\label{t3} Let $\alpha$ be a unit speed lightlike curve in $\hbox{\bf E}_1^3$. Then $\alpha$ is a slant helix if and only if the torsion is \begin{equation}\label{slant5} \tau(s)=\frac{a}{(bs+c)^2}, \end{equation} where $a$, $b$ and $c$ are constants. \end{theorem}
The proofs of Theorems \ref{t1}, \ref{t2} and \ref{t3} are carried out in the following sections.
\section{Timelike slant helices}
Let $\alpha$ be a unit speed timelike curve in $\hbox{\bf E}_1^3$. The Frenet frame $\{\hbox{\bf T},\hbox{\bf N},\hbox{\bf B}\}$ of $\alpha$ is given by $$\hbox{\bf T}(s)=\alpha'(s),\ \ \hbox{\bf N}(s)=\dfrac{\alpha''(s)}{\parallel\alpha''(s)\parallel},\ \ \hbox{\bf B}(s)=\hbox{\bf T}(s)\times\hbox{\bf N}(s).$$ The Frenet equations are \begin{equation}\label{equi1}
\left[
\begin{array}{c}
\hbox{\bf T}'(s) \\
\hbox{\bf N}'(s) \\
\hbox{\bf B}'(s)
\end{array}
\right]=\left[
\begin{array}{ccc}
0 & \kappa(s) & 0 \\
\kappa(s) & 0 &\tau(s)\\
0 &-\tau(s) & 0\\
\end{array}
\right]\left[
\begin{array}{c}
\hbox{\bf T}(s) \\
\hbox{\bf N}(s) \\
\hbox{\bf B}(s) \\
\end{array}
\right].
\end{equation}
In order to prove Theorem \ref{t1}, we first assume that $\alpha$ is a slant helix. Let $U$ be the vector field such that the function $\langle \hbox{\bf N}(s),U\rangle:=c$ is constant. There exist smooth functions $a_1$ and $a_3$ such that \begin{equation}\label{u1} U=a_1(s)\hbox{\bf T}(s)+c \hbox{\bf N}(s)+a_3(s) \hbox{\bf B}(s),\ \ s\in I. \end{equation} As $U$ is constant, a differentiation of (\ref{u1}) together with (\ref{equi1}) gives \begin{equation}\label{u2} \left.\begin{array}{ll} a_1'+c\kappa&=0\\ \kappa a_1-\tau a_3&=0\\ a_3'+c \tau &=0 \end{array}\right\} \end{equation} From the second equation in (\ref{u2}) we have \begin{equation}\label{u5} a_1=a_3\big(\dfrac{\tau}{\kappa}\big). \end{equation} Moreover \begin{equation}\label{u3} \langle U,U\rangle=-a_1^2+c^2+a_3^2=\mbox{constant}. \end{equation}
We point out that this constraint, together with the second and third equations of (\ref{u2}), is equivalent to the full system (\ref{u2}). From (\ref{u5}) and (\ref{u3}), set $$a_3^2\Big(\big(\frac{\tau}{\kappa}\big)^2-1\Big)=\epsilon m^2,\ \ m>0,\epsilon\in\{-1,0,1\}.$$ If $\epsilon=0$, then $a_3=0$ and from (\ref{u2}) we have $a_1=c=0$. This means that $U=0$: a contradiction. Thus $\epsilon=1$ or $\epsilon=-1$, which gives $$a_3=\pm\dfrac{m}{\sqrt{\big(\dfrac{\tau}{\kappa}\big)^2-1}}\hspace*{1cm}\mbox{or}\hspace*{1cm}a_3=\pm\dfrac{m}{\sqrt{1-\big(\dfrac{\tau}{\kappa}\big)^2}}$$ on $I$. The third equation in (\ref{u2}) yields $$\dfrac{d}{ds}\Big[\pm\dfrac{m}{\sqrt{\big(\dfrac{\tau}{\kappa}\big)^2-1}}\Big]=-c \tau \hspace*{1cm} \mbox{or}\hspace*{1cm}\dfrac{d}{ds}\Big[\pm\dfrac{m}{\sqrt{1-\big(\dfrac{\tau}{\kappa}\big)^2}}\Big]=c \tau$$ on $I$. This can be written as $$\frac{\kappa^2}{(\tau^2-\kappa^2)^{3/2}}\Big(\dfrac{\tau}{\kappa}\Big)'=\mp\dfrac{c}{m}\hspace*{1cm}\mbox{or}\hspace*{1cm} \frac{\kappa^2}{(\kappa^2-\tau^2)^{3/2}}\Big(\dfrac{\tau}{\kappa}\Big)'=\pm\dfrac{c}{m}.$$ This proves one direction of Theorem \ref{t1}. Conversely, assume that the condition (\ref{slant2}) is satisfied. In order to simplify the computations, we assume that the first function in (\ref{slant2}) is a constant, namely, $c$ (the other case is analogous). We define \begin{equation}\label{u9} U=\dfrac{\tau}{\sqrt{\tau^2-\kappa^2}}\hbox{\bf T}+ c\hbox{\bf N}+\dfrac{\kappa}{\sqrt{\tau^2-\kappa^2}}\hbox{\bf B}. \end{equation} A differentiation of (\ref{u9}) together with the Frenet equations gives $\dfrac{dU}{ds}=0$, that is, $U$ is a constant vector. On the other hand, $\langle\hbox{\bf N}(s),U\rangle=c$ is constant and this means that $\alpha$ is a slant helix.
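For the reader's convenience, the differentiation of (\ref{u9}), which the text leaves implicit, can be written out. Set $w:=\sqrt{\tau^2-\kappa^2}$, $a_1=\tau/w$, $a_3=\kappa/w$, and note that the constant value of the first function in (\ref{slant2}) is $c=(\tau'\kappa-\tau\kappa')/w^3$. Then

```latex
\begin{aligned}
a_1' &= \frac{\tau' w-\tau w'}{w^2}
      = \frac{\tau'(\tau^2-\kappa^2)-\tau(\tau\tau'-\kappa\kappa')}{w^3}
      = -\kappa\,\frac{\tau'\kappa-\tau\kappa'}{w^3}=-c\,\kappa,\\
a_3' &= \frac{\kappa' w-\kappa w'}{w^2}
      = \frac{\kappa'(\tau^2-\kappa^2)-\kappa(\tau\tau'-\kappa\kappa')}{w^3}
      = -\tau\,\frac{\tau'\kappa-\tau\kappa'}{w^3}=-c\,\tau,\\
\frac{dU}{ds} &= (a_1'+c\kappa)\,\hbox{\bf T}
      +(\kappa a_1-\tau a_3)\,\hbox{\bf N}
      +(a_3'+c\tau)\,\hbox{\bf B}=0,
\end{aligned}
```

where the middle coefficient vanishes because $\kappa a_1-\tau a_3=\kappa\tau/w-\tau\kappa/w=0$.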
\begin{remark} In Theorem \ref{t1} we need to assume that the function $\tau^2-\kappa^2$ does not vanish at any point. We do not know what happens if it vanishes at some points. On the other hand, any timelike curve that satisfies $\tau(s)^2-\kappa(s)^2=0$ identically is a slant helix. The reasoning is the following. For simplicity, we only consider the case $\tau=\kappa$. We define $U=\hbox{\bf T}(s)+\hbox{\bf B}(s)$, which is constant by the Frenet equations (\ref{equi1}). Moreover,
$\langle \hbox{\bf N},U\rangle=0$, that is, $\alpha$ is a slant helix. Finally, we point out that there exist curves in $\hbox{\bf E}_1^3$ that satisfy the relation
$\tau=\kappa$: it suffices to put $\tau=\kappa:=c=\mbox{constant}$ and the fundamental theorem of the theory of curves assures the existence of a timelike curve $\alpha$ with curvature and torsion $c$. \end{remark}
\section{Spacelike slant helices}
Let $\alpha$ be a unit speed spacelike curve in $\hbox{\bf E}_1^3$. In the case that the normal vector $\hbox{\bf N}(s)$ of $\alpha$ is spacelike or timelike, the proof of Theorem \ref{t2} is similar to the one given for Theorem \ref{t1}. We omit the details.
The case that remains to study is that the normal vector $\hbox{\bf N}(s)$ of the curve is a lightlike vector for any $s\in I$. Now the Frenet trihedron is $\hbox{\bf T}(s)=\alpha'(s)$, $\hbox{\bf N}(s)=\hbox{\bf T}'(s)$ and $\hbox{\bf B}(s)$ is the unique lightlike vector orthogonal to $\hbox{\bf T}(s)$ such that $\langle\hbox{\bf N}(s),\hbox{\bf B}(s)\rangle=1$. Then the Frenet equations are \begin{equation}\label{u11}
\left[
\begin{array}{c}
\hbox{\bf T}' \\
\hbox{\bf N}' \\
\hbox{\bf B}'
\end{array}
\right]=\left[
\begin{array}{ccc}
0 & 1 & 0 \\
0 & \tau & 0 \\
-1 & 0 & -\tau\\
\end{array}
\right]\left[
\begin{array}{c}
\hbox{\bf T} \\
\hbox{\bf N} \\
\hbox{\bf B} \\
\end{array}
\right].
\end{equation} Here $\tau$ is the torsion of the curve (recall that $\tau(s)\not=0$ for any $s\in I$). We show that \emph{any} such curve is a slant helix. Let $a_2(s)$ any non-trivial solution of the O.D.E. $y'(s)+\tau(s)y(s)=0$ and define $U=a_2(s)\hbox{\bf N}(s)$. By using (\ref{u11}), $dU(s)/ds=0$, that is, $U$ is a (non-zero) constant vector field of $\hbox{\bf E}_1^3$ and, obviously, the function $\langle\hbox{\bf N}(s),U\rangle$ in constant (and equal to $0$).
\section{Lightlike slant helices }
In this section we show Theorem \ref{t3}. Let $\alpha$ be a unit speed lightlike curve in $\hbox{\bf E}_1^3$. The Frenet frame of $\alpha$ is $\hbox{\bf T}(s)=\alpha'(s)$, $\hbox{\bf N}(s)=\hbox{\bf T}'(s)$ and $\hbox{\bf B}(s)$ the unique lightlike vector orthogonal to $\hbox{\bf N}(s)$ such that $\langle\hbox{\bf T}(s),\hbox{\bf B}(s)\rangle=1$. The Frenet equations are \begin{equation}\label{u21}
\left[
\begin{array}{c}
\hbox{\bf T}' \\
\hbox{\bf N}' \\
\hbox{\bf B}'
\end{array}
\right]=\left[
\begin{array}{ccc}
0 & 1 & 0 \\
\tau & 0 & -1 \\
0 & -\tau & 0\\
\end{array}
\right]\left[
\begin{array}{c}
\hbox{\bf T} \\
\hbox{\bf N} \\
\hbox{\bf B} \\
\end{array}
\right].
\end{equation} Here $\tau(s)$ is the torsion of $\alpha$, which is assumed with the property $\tau(s)\not=0$, for any $s\in I$.
Assume that $\alpha$ is a slant helix. Let $U$ be the constant vector field such that the function $\langle \hbox{\bf N}(s),U\rangle$ is constant. As in the above cases $$U=a_1(s)\hbox{\bf T}(s)+c \hbox{\bf N}(s)+a_3(s) \hbox{\bf B}(s),\ \ s\in I,$$ where $c$ is a constant and \begin{equation}\label{u23} \left.\begin{array}{ll} a_1'+c \tau&=0\\ a_1-\tau a_3&=0\\ a_3'-c &=0 \end{array}\right\} \end{equation} Then $a_3(s)=cs+m$, $m\in \hbox{\bb R}$, and $a_1=(cs+m)\tau$. Using the first equation of (\ref{u23}), we have $(cs+m)\tau'+2c\tau=0$. The solution of this equation is $$\tau(s)=\frac{n}{(cs+m)^2},$$ where $n$ is a constant. Up to renaming the constants, this is expression (\ref{slant5}) in Theorem \ref{t3}. Conversely, if the condition (\ref{slant5}) is satisfied,
we define
$$U=\frac{a}{bs+c}\hbox{\bf T}(s)+b\hbox{\bf N}(s)+(bs+c)\hbox{\bf B}(s).$$ Using the Frenet equations (\ref{u21}) we obtain that $dU(s)/ds=0$, that is, $U$ is a constant vector field of $\hbox{\bf E}_1^3$. Finally, $\langle \hbox{\bf N}(s),U\rangle=b$ and this proves that $\alpha$ is a slant helix.
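The claim $dU(s)/ds=0$ can be checked directly from (\ref{u21}) with $\tau=a/(bs+c)^2$:

```latex
\frac{dU}{ds}
= \Bigl[-\frac{ab}{(bs+c)^2} + b\,\tau\Bigr]\hbox{\bf T}
+ \Bigl[\frac{a}{bs+c} - (bs+c)\,\tau\Bigr]\hbox{\bf N}
+ \bigl[-b + b\bigr]\hbox{\bf B} = 0,
```

since $b\,\tau=ab/(bs+c)^2$ and $(bs+c)\,\tau=a/(bs+c)$.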
\end{document}
\begin{document}
\title{The triangle scheduling problem}
\begin{abstract}
This paper introduces a novel scheduling problem, where jobs occupy a triangular shape on the time line. This problem is motivated by scheduling jobs with different criticality levels. A measure is introduced, namely the \emph{binary tree ratio}. It is shown that the greedy algorithm solves the problem to optimality when the binary tree ratio of the input instance is at most $2$. We also show that the problem is unary NP-hard for instances with binary tree ratio strictly larger than $2$, and provide a quasi polynomial time approximation scheme (QPTAS). The approximation ratio of Greedy on general instances is shown to be between $1.05$ and $1.5$. \end{abstract}
\section{Introduction}
\mypara{Mixed-criticality Scheduling} In a mixed-criticality system, tasks with different criticality levels coexist and need to share common resources, such as bandwidth in a communication channel \cite{hts16,hp98} or execution time on a machine \cite{ves07,bbdlmms10,bls10,cnqw13,pk11,spbb13}. Contrary to single-criticality systems, the estimated worst-case execution time (WCET) of a task depends on its criticality level (the higher the criticality level, the more time is estimated). Often, however, the actual execution times of tasks are not known beforehand and the estimated WCETs deviate hugely from the actual execution times. The goal is therefore to design {\em robust} schedules that are able to tolerate runtime variations to a reasonable extent. More conservative WCET estimates are usually used for highly critical tasks (e.g. braking in a car), while less conservative estimates suffice for low-criticality tasks (e.g. displaying the temperature in a car). In case the allocated time for a task is insufficient at runtime, i.e., the actual runtime of a task exceeds its estimated WCET, the execution of the task may nevertheless continue and suppress subsequent tasks of lower criticality.
For a recent thorough survey on mixed-criticality systems and arising scheduling problems we refer the reader to \cite{bur14}.
\mypara{The Triangle Scheduling Problem} In this paper, we consider the problem of non-preemptively scheduling $n$ unit length jobs/tasks with different criticality levels on a single machine. Let $p_i \in \mathbb{N}$ denote the criticality level of job $i$. The expected execution time of every job is $1$. If, however, at runtime a job $i$ requires more time, then we continue its execution for at most $p_i$ time units, and other jobs with lower criticality levels that were scheduled in these slots are canceled. The computed schedule has to fulfill the property that, when prolonging the execution of a job, no other job with larger criticality needs to be canceled. The objective is to minimize the {\em makespan}, i.e., the completion time of the entire schedule.
The previous problem can be seen as a one-dimensional triangle alignment (or scheduling) problem, defined as follows:
\begin{definition}[Triangle scheduling, Gap] \label{def:ts} Given integers $p_1, \ldots, p_n$ with $p_1\geq \ldots \geq p_n>0$, find starting times $s_1, \ldots, s_n \geqslant 0$ minimizing the so-called \emph{makespan} $\max_j\,(s_j + p_j)$ such that for all $i \neq j$ we have
$| s_i - s_j | \geqslant \min \{ p_i, p_j \}$. We call \emph{gap} any interval spanned by successive starting times, including the interval between the maximum starting time and the makespan of the schedule. \end{definition} We abbreviate the triangle scheduling problem by \textsc{TS}.
\begin{figure}
\caption{Example of an optimal schedule. \textbf{Left:} discrete version. \textbf{Below:} different possible executions. In the first one each job requires a single time slot and all jobs are executed. In the second and third executions, jobs require longer processing times preventing the execution of some jobs with lower criticality.
\textbf{Right:} continuous version. Each triangle
corresponds to a job $j$ labeled with its criticality level $p_j$. The left border of
a triangle is the starting time of the job, while the right end is its worst-case completion time.}
\label{fig:ExampleSolution}
\end{figure}
\mypara{Our Results} In this paper, we initiate the study of \textsc{TS}. We show that the problem is strongly NP-hard, which implies that \textsc{TS} does not admit a fully polynomial time approximation scheme (FPTAS), unless $P=NP$. We provide a quasi polynomial time approximation scheme (QPTAS), which implies that \textsc{TS} is not APX-hard. In addition we present a Greedy algorithm, which processes the triangles from largest to smallest, placing each into the largest gap and potentially shifting subsequent triangles to the right if the gap is not large enough to contain it. We show that this algorithm has an approximation factor between $1.05$ and $1.5$.
\mypara{Binary Tree Ratio} Furthermore, we establish a measure, denoted the {\em binary tree ratio}, that allows us to distinguish hard from easy instances: \begin{definition}[Binary tree ratio] Given an instance of \textsc{TS} $p=p_1,\ldots,p_n$, we define its \emph{binary tree ratio} $R(p)$ as \[
R(p) := \max_{i=2, \hdots,n} p_{\lceil i/2 \rceil} / p_i.
\] \end{definition} Schedules computed by our Greedy algorithm can be represented by binary trees on the jobs of the problem instances (see Section~\ref{sec:greedy-tree} for details). The binary tree ratio is the maximum ratio between a job and its immediate descendants in the tree. We will show that our Greedy algorithm solves an instance to optimality if its binary tree ratio is at most $2$. On the other hand, we prove that there are instances with binary tree ratio strictly larger than but arbitrarily close to $2$ that render the problem NP-hard. A binary tree ratio of $2$ is hence the cut-off point that separates hard from easy instances.
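The definition of $R(p)$ translates directly into code. The following sketch (illustrative, not from the paper; it uses exact rational arithmetic via `Fraction`, and the ratio-$2$ instance is a hypothetical example of ours) computes the binary tree ratio:

```python
from fractions import Fraction

def binary_tree_ratio(p):
    """R(p) = max_{i=2..n} p_{ceil(i/2)} / p_i, with p sorted non-increasingly
    (indices are 1-based as in the definition)."""
    p = sorted(p, reverse=True)
    return max(Fraction(p[(i + 1) // 2 - 1], p[i - 1])
               for i in range(2, len(p) + 1))

# A hypothetical instance with binary tree ratio exactly 2:
assert binary_tree_ratio([8, 4, 4, 2, 2, 2, 2]) == 2
# The instance used later to lower-bound Greedy's ratio has R(p) = 4:
assert binary_tree_ratio([20, 20, 10, 5, 5, 4, 4, 4, 4]) == 4
```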
\mypara{Other Related Works} For a few decades the scheduling community has been very interested in producing robust schedules that can react to changes in job characteristics. For example, in \cite{HonkompMockus:97:Robust-scheduling} a model is studied where the processing times can vary, and a schedule has to be produced with good objective value even under these variations. For more information see the survey \cite{BriandLa:07:A-robust-approach} as well as a recent PhD thesis and the references therein \cite{Wilson:16:Robust-scheduling}.
Vestal \cite{ves07} introduced the \emph{mixed-criticality framework}, where the execution of lower criticality jobs can be canceled in order to grant high criticality jobs the necessary amount of resources. Applications are mostly embedded systems. In a communication system, jobs represent messages, and safety-critical messages have to co-exist with less critical ones that are not subject to hard constraints. For instance, the IEC 61508 standard defines four Safety Integrity Levels (SIL) (e.g. the importance of a safety-related job performing braking in a car is much higher than the importance of a job displaying the engine's temperature). Similarly, the \emph{CANaerospace} protocol specifies several criticality levels for messages, and in order to guarantee delivery times of highly critical messages, the transmission of less critical messages can be canceled \cite{Stock:14:CANaerospace-specification}. This feature has been studied also for the \emph{FlexRay} protocol used in modern cars \cite{dvo14}.
Our model can be seen as a special case of the message transmission model described by Hanz\'alek et al. in \cite{hts16}. They consider a single machine scheduling problem with release times, deadlines, different criticality levels, and WCETs that depend on the task and the criticality level. They propose a linear programming formulation of the problem and prove NP-completeness of this more general problem. Note that our model does not consider release times and deadlines, and the WCET of a task equals its criticality level.
Finally, we would like to mention a connection with the computational problem of packing triangles in a given rectangle, which has applications in industrial cutting and storage. The latter has been shown to be NP-hard~\cite{Chou:16:NP-Hard-Triangle}, while our paper shows that the problem is already hard in the particular case of right triangles that can only be translated along a single axis and not rotated.
\mypara{Outline} In Section~\ref{sec:greedy} we consider our Greedy algorithms. Then, in Section~\ref{sec:NPhardness}, we show that solving the problem for instances with binary tree ratio strictly larger than $2$ is strongly NP-hard. Last, we provide a quasi polynomial time approximation scheme (QPTAS) in Section~\ref{sec:QPTAS}.
\section{The Greedy algorithm} \label{sec:greedy}
We propose a polynomial time approximation algorithm denoted \emph{Greedy}. Recall that jobs are sorted with respect to their criticality levels, i.e., $p_1 \ge p_2 \ge \dots \ge p_n$.
\begin{definition}[Greedy] Job 1 starts at time $s_1=0$. Then, every job $j=2,\ldots,n$, is placed in a largest gap (the first one in case of tie).
If the chosen gap has length $x$ and starts at time $s_i$, then the current job $j$ is placed at $s_j=s_i+p_j$. If $2p_j>x$, then all jobs $k$ with $s_k>s_j$ are delayed by $2p_j-x$ in order to maintain feasibility, see Figure~\ref{fig:insert}. \end{definition}
Note that in case $2p_j>x$ the makespan increases by $2p_j-x$. Hence by choosing the largest gap, Greedy minimizes the increase of the makespan at every step.
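As an illustration, the procedure above can be sketched in a few lines of Python. This is our reading of the definition, not a reference implementation: in particular, when the chosen gap has length exactly $p_j$, a job may start exactly at $s_j$, and we delay it together with the later jobs.

```python
def greedy_makespan(p):
    """Makespan of the schedule built by Greedy (largest gap, leftmost on ties)."""
    p = sorted(p, reverse=True)
    starts, sizes = [0], [p[0]]          # job 1 starts at time 0
    for pj in p[1:]:
        makespan = max(s + q for s, q in zip(starts, sizes))
        order = sorted(starts)
        # gaps between successive starting times, plus the final gap
        # between the maximum starting time and the makespan
        gaps = [(b - a, a) for a, b in zip(order, order[1:])]
        gaps.append((makespan - order[-1], order[-1]))
        x, gs = max(gaps, key=lambda g: (g[0], -g[1]))   # largest, then leftmost
        sj = gs + pj
        if 2 * pj > x:
            # gap too small: delay every job starting at or after sj
            delta = 2 * pj - x
            starts = [s + delta if s >= sj else s for s in starts]
        starts.append(sj)
        sizes.append(pj)
    return max(s + q for s, q in zip(starts, sizes))

# On the instance 20,20,10,5,5,4,4,4,4 discussed later in this section,
# Greedy produces a schedule of makespan 42 (the optimum is 40).
assert greedy_makespan([20, 20, 10, 5, 5, 4, 4, 4, 4]) == 42
```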
\begin{figure}
\caption{When Greedy inserts a job $j$ in a gap of size $x$ (left figure) it creates two gaps. One of size $p_j$ and another either of size $x-p_j$ if $x\geq 2p_j$ (center figure) or of size $p_j$ if $x<2p_j$ (right figure).}
\label{fig:insert}
\end{figure}
\subsection{Lower bound on the optimum}
A simple lower bound can be obtained by relating gaps to jobs in the schedule and using the fact that a gap between jobs $i$ and $j$ has size at least $\min\{p_i,p_j\}$.
\begin{lemma} \label{lem:inner-product-lower-bound}
Let $S=p_{\lceil n/2\rceil+1}+\ldots+p_n$ be the total processing time of the smaller half of
the jobs, and $m=p_{(n+1)/2}$ if $n$ is odd and $m=0$ if $n$ is even. The optimal makespan OPT is at least
\[
m + 2S.
\] \end{lemma} \begin{proof} Consider an arbitrary feasible schedule, and let $T$ be its makespan. To obtain a bound on $T$, we charge every gap to a job as follows:
Map every gap between jobs $i,j$ to the smaller job among them, breaking ties arbitrarily. Map the last gap between job $j$ and the makespan to job $j$. Now every job $j$ is the image of $0$, $1$ or $2$ gaps in this mapping. Let $a_j \in \{0, 1, 2\}$ be this number. There are exactly $n$ gaps, hence we have $\sum_{j=1}^n a_j=n$. Moreover, since every gap is mapped to a job of no larger size and the total gap size is $T$, we have \[
\sum_{j=1}^n a_j p_j \leq T.
\] The proof follows from the fact that the left hand side is minimized when $a_j=0$ for the larger half of the jobs, $a_j=2$ for the smaller half of the jobs and possibly $a_{(n+1)/2} = 1$ if $n$ is odd. \end{proof} Note that this lower bound can be very weak. Consider a $2$-job instance with $p_1=M$ and $p_2=1$, for a large $M\geq 2$. Then the lower bound states that the makespan is at least 2, while the optimal makespan is $M$.
\subsection{Approximation ratio of Greedy}
\begin{lemma}
The approximation ratio of Greedy is at most $1.5$. \end{lemma} \begin{proof}
Consider the final schedule produced by Greedy on instance $p$. We perform the following transformations:
First, we delay each job by as much as possible, while maintaining feasibility, the makespan and the job order. This transformation might change the order of the gap sizes, but does not modify the actual gap sizes when viewed as a multi-set.
Second, we define a \emph{truncated} instance $p'$ as follows. For every job $j$ which starts at some time $t$ and is followed by a gap of size $x$ with $x<p_j$, we set $p'_j=x$. For all other jobs $j$, we set $p'_j=p_j$, see Figure~\ref{fig:truncated}. The makespan as well as the feasibility of the schedule are preserved by the truncation.
\begin{figure}
\caption{Transformations on a schedule produced by Greedy: Jobs are delayed (right-shifted) and their sizes truncated (dotted lines).}
\label{fig:truncated}
\end{figure}
We assume that Greedy increased the makespan when placing the last job $n$. This assumption is without loss of generality, since removing jobs that followed the last makespan increase only decreases the makespan of the optimal schedule, while the makespan produced by the algorithm is preserved.
We claim that for all $i$ we have $p_n \leq p'_i < 2p_n$. Indeed when job $n$ was placed, all gaps were of size strictly less than $2p_n$ since the insertion of job $n$ increased the makespan. Furthermore, by induction, it can be shown that after placing job $j$, all gaps are of size at least $p_j$: By the induction hypothesis, after placing job $j-1$, all gaps were of size at least $p_{j-1}$ which is at least $p_j$, by the assumed ordering of the jobs. Then, no matter how job $j$ is placed, it is impossible that a gap of size smaller than $p_j$ is created (see also Figure~\ref{fig:insert}).
From now on, assume that $n$ is even; the proof for the odd case is similar. Let $A$ be the sum of the larger half of the sizes among $p'_1,\ldots,p'_n$ and $B$ the sum of the smaller half. The truncation process reduces a job to the size of the gap that follows it. Therefore, the makespan of the schedule produced by Greedy on $p'$ (or on $p$) is $A+B$.
The previous claim implies that all sizes among $p'_1,\ldots,p'_n$ are within a ratio of two, which implies $A\leq 2B$.
From Lemma~\ref{lem:inner-product-lower-bound} we have $\textrm{OPT}(p')\geq 2B$. Furthermore, we clearly have $\textrm{OPT}(p')\leq \textrm{OPT}(p)$. We can hence upper bound the makespan of the schedule produced by Greedy on $p$ as \[
A + B \leq 3B \leq \frac32 \textrm{OPT}(p') \leq \frac32 \textrm{OPT}(p).
\]
\end{proof}
Note that this analysis did not use the fact that Greedy places jobs in the largest gap. The crucial property required in the analysis is the fact that when the placement of a job $j$ increases the makespan, then all gaps are of size strictly less than $2p_j$.
We were not able to determine the exact approximation factor of Greedy. In Figure~\ref{fig:lowerboundgreedy}, an instance is illustrated that shows that the approximation factor of Greedy is at least $1.05$: On the instance with jobs of processing times $20,20,10,5,5,4,4,4,4$, Greedy produces a schedule of makespan $42$, by placing them in the order $20,4,10,4,20,5,5,4,4$, while the optimal schedule places the jobs in order $20,5,10,5,20,4,4,4,4$ and has makespan $40$. This example gives the following lower bound.
\begin{lemma}
The approximation ratio of Greedy is at least $1.05$. \end{lemma}
We conducted a systematic search for stronger lower bound constructions, but could only obtain tiny improvements. For example, we found an instance consisting of $52$ jobs showing a lower bound of $101/96 > 1.052$.
\begin{figure}
\caption{The optimal schedule (left) and the schedule produced by Greedy (right) for the lower bound instance.}
\label{fig:lowerboundgreedy}
\end{figure}
\subsection{A case where Greedy is optimal} \label{sec:greedy-tree}
\begin{theorem}
Greedy is optimal for instances with binary tree ratio at most $2$. \end{theorem} \begin{proof} We show by induction on $j$ that after placing job $j$, there are $2$ gaps of size $p_i$, for every $\lceil j/2\rceil + 1 \leq i < j$, and either a single gap (if $j$ is odd) or $2$ gaps (if $j$ is even) of size $p_{(j+1)/2}$. This invariant is true after placing job $1$, where there is a single gap of size $p_1$. When $j$ is even, the job is placed in the single gap of size $p_{j/2}$. By the assumption on the binary tree ratio we have $2p_j \geq p_{j/2}$, implying that this gap is replaced by 2 gaps of size $p_j$. When $j$ is odd, the job is placed in one of the 2 gaps of size $p_{(j-1)/2 + 1}$, and for the same reasons as in the even case the gap is replaced by 2 gaps of size $p_j$. In both cases the invariant is preserved.
This invariant, combined with the lower bound of Lemma~\ref{lem:inner-product-lower-bound}, implies that the schedule produced by Greedy is optimal. \end{proof}
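The induction above can be checked mechanically. The following Python sketch (helper names ours) simulates only the gap-replacement rule used in the proof, namely that a job of size $p_j$ placed in the current largest gap replaces it by two gaps of size $p_j$ whenever $2p_j$ is at least the gap size, and compares the resulting gap multiset with the claimed invariant; sizes are assumed pairwise distinct so that gaps can be keyed by size.

```python
from collections import Counter
from math import ceil

def greedy_gaps(p):
    """Simulate the gap multiset of Greedy on sizes p[1..n] (1-indexed,
    non-increasing, pairwise distinct), assuming binary tree ratio <= 2:
    each job lands in the largest gap, which is then replaced by two
    gaps of the job's own size."""
    gaps = Counter({p[1]: 1})          # after job 1: one gap of size p_1
    states = {1: Counter(gaps)}
    for j in range(2, len(p)):
        g = max(gaps)                  # Greedy places in the largest gap
        assert 2 * p[j] >= g           # binary tree ratio at most 2
        gaps[g] -= 1
        if gaps[g] == 0:
            del gaps[g]
        gaps[p[j]] += 2                # gap replaced by two gaps of size p_j
        states[j] = Counter(gaps)
    return states

def invariant(p, j):
    """Claimed invariant after job j: two gaps of size p_i for
    ceil(j/2)+1 <= i <= j, plus one gap of size p_{(j+1)/2} if j is odd."""
    c = Counter()
    for i in range(ceil(j / 2) + 1, j + 1):
        c[p[i]] += 2
    if j % 2 == 1:
        c[p[(j + 1) // 2]] += 1
    return c
```

On the size sequence $16,10,8,6,5,4,3$, which satisfies the ratio condition, the simulated gap multisets match the invariant after every step.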
We can relate the jobs in a tree structure as illustrated in Figure~\ref{fig:tree}. Job $1$ is the root of the tree. Then for every job $j=2,\ldots,n$ inserted into a gap assigned to job $i$, job $j$ is a descendant of job $i$. The result is a single root, connected to a binary tree, which is complete except possibly for the last level, which is left padded. The job labels on this tree are ordered by levels. The binary tree ratio of the instance is the maximum, over all non-root jobs $j$, of the ratio between the size of the immediate ancestor of $j$ and the size of $j$, which was the motivation for the name of this ratio.
\begin{figure}
\caption{Tree structure of the schedule produced by Greedy on any instance of 13 jobs with binary tree ratio at most $2$.}
\label{fig:tree}
\end{figure}
\section{NP-hardness} \label{sec:NPhardness}
\begin{theorem}
\textsc{TS} is strongly NP-hard for instances with binary tree ratio strictly larger than $2$. \end{theorem} \begin{proof} We reduce from the strongly NP-hard numerical $3$-dimensional matching problem, see \cite[problem SP16]{GareyJohnson:79:Computers-and-intractability}. An instance of this problem consists of integers \[
a_1,\ldots,a_n, b_1,\ldots,b_n, c_1,\ldots,c_n, D,
\] with $D\geq 4$, and for all $i$: \begin{equation}
D/4 < a_i,b_i,c_i < D/2.
\label{eq:NPC-promise-indiv} \end{equation} Furthermore, we are guaranteed that \begin{equation}
\sum_{i=1}^n a_i + b_i + c_i = nD.
\label{eq:NPC-promise-sum} \end{equation} The goal is to form $n$ disjoint triplets of the form $(i,j,k)$ with $a_i+b_j+c_k=D$.
Fix some constant $M > \frac{5D}{4}$. The instance consists of $5n$ jobs. \begin{itemize} \item There are $n$ jobs $E$ of size $8M+5D$. \item There are $n$ jobs $F$ of size $4M$. \item For every $i\in\{1,\ldots,n\}$ there is a job $A_i$ of size $2M+2a_i+D$, \item as well as a job $B_i$ of size $2M+b_i$, \item and a job $C_i$ of size $M+c_i+D$. \end{itemize}
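The reduction itself is a direct computation. The sketch below (names ours) produces the $5n$ job sizes from an instance $a,b,c,D$ together with a constant $M$; for $M$ large enough relative to $D$, the sizes are ordered $E>F>A>B>C$, a fact used at the end of the proof.

```python
def build_instance(a, b, c, D, M):
    """Job sizes of the reduction: n jobs E, n jobs F, and A_i, B_i, C_i."""
    n = len(a)
    E = [8 * M + 5 * D] * n
    F = [4 * M] * n
    A = [2 * M + 2 * a[i] + D for i in range(n)]
    B = [2 * M + b[i] for i in range(n)]
    C = [M + c[i] + D for i in range(n)]
    return E, F, A, B, C
```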
We claim that the instance has a solution of makespan $n(8M+5D)$ if and only if the initial numerical 3-dimensional matching instance has a solution.
For the easy direction, given a solution to the numerical 3-dimensional matching instance, we construct a schedule consisting, for every triplet $(i,j,k)$, of the concatenation of the jobs $E,A_i,F,B_j,C_k$. Straightforward verification shows that the resulting schedule has the required makespan.
For the hard direction, consider a solution to \textsc{TS} of makespan $n(8M+5D)$. Its makespan cannot be smaller, due to the presence of the $E$ jobs, which need to be scheduled every $8M+5D$ time units. They structure the time line into $n$ blocks, each of size $8M+5D$.
Suppose that a block contains $k$ jobs of size $x_1\geq \ldots \geq x_k$ plus a single $E$ job. These jobs create $k+1$ gaps in this block. Each gap can be assigned to the smaller of its two neighboring jobs, and for such an assignment we denote by $a_i\in\{0,1,2\}$ the number of gaps assigned to the job of size $x_i$.
Hence a lower bound on the block size is given by the expression $a_1x_1+\ldots+a_kx_k$ for some $\{0,1,2\}$-weights with $a_1+\ldots+a_k=k+1$. A valid lower bound is given by $a_1 x_1 + \ldots +a_k x_k$ after setting $a_i$ to $2$ for the last $\lceil k/2 \rceil$ indices (corresponding to the smallest jobs), setting $a_{k/2}=1$ if $k$ is even, and setting $a_i=0$ for the remaining indices. We will use this lower bound to determine the number of jobs of each type in a block.
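This lower bound is easy to compute. The helper below (ours) takes the non-$E$ jobs of a block in non-increasing order and returns the minimum of $\sum_i a_ix_i$ over $\{0,1,2\}$-weights summing to $k+1$; the minimum puts weight $2$ on the $\lceil k/2\rceil$ smallest jobs and, for even $k$, weight $1$ on the next smallest.

```python
from math import ceil

def block_lower_bound(x):
    """Minimum of sum(a_i * x_i) over weights a_i in {0, 1, 2} with
    sum(a_i) == k + 1, for sizes x[0] >= x[1] >= ... >= x[k-1]."""
    k = len(x)
    bound = 2 * sum(x[k - ceil(k / 2):])  # weight 2 on the smallest ceil(k/2) jobs
    if k % 2 == 0:
        bound += x[k // 2 - 1]            # weight 1 on the next smallest job
    return bound
```

For instance, a block hosting two $F$ jobs of size $4M$ gets the bound $2\cdot 4M + 4M = 12M$, matching the argument below.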
No block can host two $F$ jobs or more, since $3(4M) > 8M+5D$. Hence every block contains exactly one $F$ job.
No block can host an $F$ and two $A$ jobs, since (placing weights $a_1=a_2=2$ for the $A$ jobs and $a_3=0$ for the $F$ job) \[
4 \left(2M+2\frac D 4 + D\right) = 8M + 6D > 8M + 5D.
\] Hence every block contains exactly one $F$ and one $A$ job.
Similarly, a block cannot host an $A$ and an $F$ job together with two $B$ jobs, as setting total $a$-weight $4$ for the $B$ jobs and $1$ for the $A$ job results in the lower bound \[
4 \left( 2M+ \frac D 4\right) + 2M + 2\frac D 4 + D = 10M + \frac52 D > 8M + 5D.
\] Hence every block contains exactly one $F$, one $A$, and one $B$ job.
Finally a block cannot host an $A$, an $F$ and a $B$ job, together with two $C$ jobs, as setting total $a$-weight $4$ for the $C$ jobs and $2$ for the $B$ job results in the lower bound \[
4 \left(M + \frac D 4 + D\right) + 2 \left( 2 M + \frac D 4 \right) = 8M + \frac{11}{2} D > 8M + 5D.
\]
In conclusion every block contains an $F$ job, an $A_i$ job, a $B_j$ job and a $C_k$ job, with the following lower bound for the space occupied by these jobs, using $a$-weight $2$ for the jobs $B_j,C_k$ and $a$-weight $1$ for the job $A_i$ \[
2M + 2a_i + D + 2(2M+b_j) + 2(M+c_k + D) = 8M + 3D + 2(a_i+b_j +c_k).
\] Since this value cannot exceed the size of a block, namely $8M+5D$, every block corresponds to a triplet $(i,j,k)$ with $a_i+b_j+c_k\leq D$. By assumption (\ref{eq:NPC-promise-sum}), we have equality for every triplet, and this shows that there is a solution to the numerical 3-dimensional matching instance; see Figure~\ref{fig:block} for an illustration.
By construction, when the jobs are ordered in decreasing size, they are grouped by types, in the order $E,F,A,B,C$. Hence the binary tree ratio is determined by the ratio between an $E$ and an $F$ job, between an $F$ and a $B$ job, and between an $A$ and a $C$ job. All these ratios can be made arbitrarily close to $2$ by choosing a large enough value for $M$. \end{proof}
\begin{figure}
\caption{A schematic view of a block in the optimal schedule obtained from the reduction.}
\label{fig:block}
\end{figure}
\section{A QPTAS} \label{sec:QPTAS}
\begin{theorem}
\textsc{TS} admits a quasi polynomial time approximation scheme. \end{theorem} \begin{proof} The proof starts with a sequence of claims.
\begin{claim}
Rounding all sizes up to the nearest power of $1 + \epsilon$ changes the optimal makespan by at most a factor $1 + \epsilon$. \end{claim} \begin{proof}
Take an optimal solution and multiply all processing times and start times by a factor $1 + \epsilon$. The solution is still feasible, only the unit has changed. Now round all triangle sizes down to the nearest power of $1 + \epsilon$ while keeping the start times fixed; every rounded size is at least the original one, so the net effect is to round each original size up by at most a factor $1+\epsilon$. \end{proof}
\begin{claim} We may assume that the ratio $p_1/p_n$ is at most $n/\epsilon$. \end{claim} \begin{proof}
The optimal makespan $\textrm{OPT}$ is at least $p_1$. All triangles with size less than $\epsilon p_1/n$ can be put at the end. We do not need to optimize over these, as they increase the makespan by at most an $\epsilon$ factor. \end{proof}
From now on, let the smallest triangle have size $p_n=1$ and the largest have size $p_1 \leq n/\epsilon$. Then $p_1 \leq \textrm{OPT} \leq n p_1 \leq n^2/\epsilon$.
\begin{claim} We may assume there are only $\lceil \log_{1+\epsilon} (n/\epsilon) \rceil +1$ different sizes. \end{claim} The proof follows from the previous two claims.
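The first two claims combine into a short preprocessing step. The sketch below (ours) drops the tiny triangles and rounds the remaining sizes up to powers of $1+\epsilon$ relative to the smallest kept size; the number of distinct values is then at most $\lceil\log_{1+\epsilon}(n/\epsilon)\rceil+1$.

```python
from math import ceil, log

def round_sizes(p, eps):
    """Drop sizes below eps * max(p) / n, then round each remaining size
    up to the nearest power of (1 + eps) times the smallest kept size."""
    n = len(p)
    p1 = max(p)
    kept = [x for x in p if x >= eps * p1 / n]
    pmin = min(kept)
    return [pmin * (1 + eps) ** ceil(log(x / pmin, 1 + eps)) for x in kept]
```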
\begin{claim} Restricting start times to values from the set $P=\{0, K, 2K, ..., \lceil n^2/\epsilon \rceil K\}$ for $K = \epsilon p_1/n$ increases the optimal makespan by at most a factor $1+\epsilon$. \end{claim} \begin{proof} We modify the start times of the jobs, processing them from left to right. For every job, we move it to the next time in $P$, and simultaneously move all subsequent jobs by the same amount to preserve feasibility. The value of the solution increases by at most $nK \leq \epsilon p_1 \leq \epsilon \textrm{OPT}$.
\end{proof} Let $Z$ be the set of all different job sizes after the above rounding. We have \begin{align*}
|Z| & = \lceil \log_{1+\epsilon} (n/\epsilon) \rceil +1
\\
|P| &\leq \lceil n^2 / \epsilon \rceil \end{align*} We design a dynamic programming scheme as follows. Partition the set of jobs into a set $S$ and its complement $\overline S$. We want to compute the optimal schedule placing first the jobs from $\overline S$ and then the jobs from $S$. Only a few parameters of the first schedule influence the possibilities of the second schedule, namely the positions of the rightmost triangle of each size. Hence we describe every possible configuration by a vector from $P^Z$, which leads to a quasi-polynomial number of configurations. The number of possible subsets $S$ is (in terms of size multiplicities) $O(n^{|Z|})$, which is also quasi-polynomial.
Define $F (C, S)$ to be the optimal makespan of a schedule, placing first $\overline S$ with a configuration $C$ and placing then jobs from $S$. The goal is to compute $F(e, \{1,\ldots,n\})$, where $e$ is the empty configuration, which for technical reasons assigns to each size $x\in Z$ the starting time $-x$.
The base cases consist of $F(C,\emptyset)$, where $C$ ranges over all configurations from $P^Z$; some of them might be infeasible. Then $F (C, S)$
can be computed from $F (C+j, S-j)$ where $S-j$ is $S$ minus one triangle $j\in S$ and $C+j$ is obtained from $C$ by adding the triangle $j$ to the right of $C$ and placing it as early as possible, i.e.\ at time $\max_{x\in Z} (C_x + x)$.
The number of choices for $j$ (in terms of size) is at most $|Z|$. Hence, we need at most $|Z|$ look-ups to compute one value. \end{proof}
\section{Final remarks}
We introduced a new scheduling problem, motivated by mixed criticality. The novelty lies in its combinatorial structure, which defines the contribution of each job to the makespan in a non-local manner. We showed that the problem is strongly NP-hard, thus ruling out the existence of a fully polynomial time approximation scheme. In addition we provided a quasi polynomial time approximation scheme, ruling out APX-hardness. Furthermore, we introduced a greedy algorithm for this problem, whose approximation ratio we do not yet fully understand. Closing the gap between the lower bound of $1.05$ and the upper bound of $1.5$ is the main question left open by this paper.
We would like to thank Marek Chrobak and Neil Olver for helpful discussions.
This work is partially supported by PHC VAN GOGH 2015 PROJET 33669TC, the grants FONDECYT 11140566 as well as ANR-15-CE40-0015.
\end{document}
\begin{document}
\title{On principal curves with a length constraint}
\begin{abstract}
In this paper, we are interested in the problem of finding a parametric curve $f$ minimizing the quantity $\mathbb{E}\left[\min_{t\in[0,1]}\|X-f(t)\|^2\right]$, where $X$ is a random variable, under a length constraint. This question is known in the probability and statistical learning context as length-constrained principal curves optimization, as introduced by \cite{KKLZ}, and it also corresponds to a version of the ``average-distance problem'' studied in the calculus of variation and shape optimization community (\cite{BOS, BS03}).
We investigate the theoretical properties satisfied by a principal curve $f:[0,1]\to\mathbb{R}^d$ with length at most $L$, associated with a probability distribution admitting a second-order moment. We suppose that the probability distribution is not supported on the image of a curve with length $L$. Studying open as well as closed optimal curves, we show that they have finite curvature. We also derive a first order Euler-Lagrange equation. This equation is then used to show that a length-constrained principal curve in two dimensions has no multiple point. Finally, some examples of optimal curves are presented.
\end{abstract}
\textit{Keywords} -- Principal curves, average-distance problem, quantization of probability measures, length constraint, finite curvature.
\textit{2000 Mathematics Subject Classification}: \textit{Primary} 60E99; \textit{Secondary} 35B38, 49Q10, 49Q20.
\section{Introduction}
\subsection{Context of the problem and motivation}
We focus on the problem:
\begin{equation} \begin{minipage}{0.9\textwidth} find a curve $f:[0,1]\to \mathbb{R}^d$ minimizing the quantity $$ \mathbb{E}\left[d(X,\mbox{Im} f)^2\right]=\int d(x,\mbox{Im} f)^2d\mu(x), $$ over all curves with length $\mathscr{L}(f)$, such that $\mathscr{L}(f)\leq L$. \end{minipage}\label{eq:crit1} \end{equation}
Here, $d(\cdot,\cdot)$ is the Euclidean distance from a point
to a set, $\mbox{Im} f$ is the image of $f$, and $X$ is some random vector with distribution $\mu$, taking its values in $\mathbb{R}^d$. As an illustration, two examples of length-constrained principal curves, fitted via a stochastic gradient descent algorithm, are presented in Figure \ref{fig:exPC}.
\begin{figure}
\caption{Two examples of principal curves with length constraint: (a) Uniform distribution over the square $[0,1]^2$. (b) Standard Gaussian distribution.}
\label{fig:exPC}
\end{figure} This corresponds to principal curves with length constraint, as described by \cite{KKLZ}. These authors show that a minimizer indeed exists whenever $X$ is square integrable. Observe that such a length constraint makes perfect sense in the empirical case, that is, in the statistical framework, where the random vector is replaced by a data cloud. Indeed, from a practical point of view, it is essential to appropriately tune some parameter reflecting the complexity of the curve, in order to achieve a trade-off between a curve passing through all data points and an overly rough one. The parameter selection issue was addressed in this statistical context for instance in \cite{BF}, \cite{AF} and \cite{GW}.
Originally, principal curves were introduced by \cite{HasStuet}, with a different definition, based on the so-called self-consistency property. From this point of view, a curve $f$ is said to be self-consistent for a random vector $X$ with finite second moment if it satisfies: $$f(t_f(X))=\mathbb{E}[X|t_f(X)]\quad \mbox{a.s.} ,$$ where the projection index $t_f$ is given by $$t_f(x)=\max\argmin_{t} \|x-f(t)\| .$$ The self-consistency property may be interpreted as follows: each point on the curve is the average of the mass of the probability distribution projecting there (for more details about the notion of self-consistency, see \cite{TarFl}). In addition, some regularity assumptions are made: the principal curve is required to be smooth ($C^\infty$), not to intersect itself, and to have finite length inside any ball in $\mathbb{R}^d$.
The existence of principal curves designed according to this definition cannot be proved in general (see \cite{DSextr96}, \cite{DSgeo96} for results obtained in the case of some particular distributions in two dimensions), which is the main motivation for the least-square minimization definition proposed by \cite{KKLZ}.
Note that several other principal curve definitions, as well as algorithms, were proposed in the literature (\cite{Tib}, \cite{VVK}, \cite{Del}, \cite{Sankulk}, \cite{ETE}, \cite{OE}, \cite{GW}). Note also that principal curves, in their empirical version, have many applications in various areas (see for example \cite{HasStuet}, \cite{FO} for applications in physics, \cite{KK}, \cite{RN} in character and speech recognition, \cite{Brun}, \cite{SR}, \cite{BanRaf}, \cite{ETE, ETE1} in mapping and geology, \cite{De}, \cite{CAM}, \cite{ETE} in natural sciences, \cite{CCDH} in pharmacology, and \cite{WC}, \cite{DSD} in medicine, for the study of cardiovascular disease or cancer).
\subsection{Description of our results}
In this paper, we consider general distributions, assuming only that $X$ has a second order moment, and search for a curve which is optimal for problem \eqref{eq:crit1}. We deal with open curves (with endpoints), as well as closed curves ($f(0)=f(1)$). Throughout, we will assume that the length constraint is effective, that is, the support of $X$ is not the image of a curve with length less than or equal to $L$. In this context, we prove that a minimizing curve cannot be self-consistent. We also show that, for an optimal curve, the set of points with several different projections onto the curve, called ridge set in studies about the ``average-distance problem'' (see Section \ref{section:CompAna}), or ambiguity points in the principal curves literature, is negligible for the distribution of $X$.
Then, we establish that an optimal curve is right- and left-differentiable everywhere and has bounded curvature. Moreover, we obtain a first order Euler-Lagrange equation: we show that there exist $\lambda>0$ and a random variable $\hat{t}$ taking its values in $[0,1]$ such that $\|X-f(\hat{t})\|=d(X,\mbox{Im} f)$ a.s. and \begin{equation}\label{eq:form1}
\mathbb{E}\left[ X-f(\hat{t})|\hat{t}=t\right]m_{\hat{t}}(dt)=-\lambda f ''(dt), \end{equation} where $m_{\hat{t}}$ stands for the distribution of $\hat{t}$. To obtain that $\lambda \neq 0$, we use the fact that an optimal curve is not self-consistent. Formula \eqref{eq:form1} allows us to propose in dimension $d=2$ a proof of the injectivity of an open principal curve as well as of a closed principal curve restricted to $[0,1)$.
\subsection{Comparison with previous results}\label{section:CompAna}
Our framework is related to the constrained problem:
\begin{equation}
\begin{minipage}{0.9\textwidth}
minimize
$ \displaystyle \int_{\mathbb{R}^d}d(x,\Sigma)^pd\mu(x)$ over compact connected sets $\Sigma$ such that $\mathcal H^1(\Sigma)\leq L$.
\end{minipage}\label{eq:irrigcons}
\end{equation}
Here, $\mathcal H^\ell$ denotes $\ell$-dimensional Hausdorff measure. A related question is the minimization of the penalized version of the criterion:
\begin{equation}
\int_{\mathbb{R}^d}d(x,\Sigma)^pd\mu(x)+\lambda\mathcal H^1(\Sigma).\label{eq:irrigpen}
\end{equation}
This issue, called in the calculus of variations and shape optimization community ``average-distance problem'' or, for $p=1$, ``irrigation problem'', has been introduced by \cite{BOS,BS03} (see also the survey \cite{survAL}, and the references therein). Considering a compactly supported distribution, the penalized form is studied for connected sets, with $p=1$, in \cite{LuSle13}, and for curves, with $p\geq 1$, in \cite{LuSle}. In the first article, the authors prove that a minimizer is a tree made of a finite union of curves with finite length, and they provide a bound on the total curvature of these curves. In the second one, they show existence of a curve minimizing the penalized criterion \begin{equation}\label{eq:lengthpen} \int_{\mathbb{R}^d}d(x,\mbox{Im} f)^pd\mu(x)+\lambda\mathscr{L}(f). \end{equation} They give a bound on the curvature of the minimizer, and prove that, in two dimensions, if $p\geq 2$ or the distribution $\mu$ has a bounded density with respect to Lebesgue measure, a minimizing curve is injective.
For the penalized irrigation problem \eqref{eq:irrigpen}, under the assumption that the distribution $\mu$, with compact support, does not charge the sets that have finite $\mathcal H^{d-1}$ measure, which is true for instance if it has a density with respect to Lebesgue measure, an Euler-Lagrange equation is obtained for $p=1$ in \cite{BMS}, whereas \cite{LemenantR2} uses arguments involving endpoints to derive one in the case of the constrained version \eqref{eq:irrigcons}, in $\mathbb{R}^2$, under the same assumption on $\mu$. This assumption implies that $X$ is almost surely different from its projection on the curve, which is required for differentiability when $p= 1$, and, moreover, it is used to ensure negligibility of the ridge set.
For the constrained problem \eqref{eq:irrigcons}, if $\Sigma^*$ denotes a minimizer and $\int_{\mathbb{R}^d}d(x,\Sigma)^pd\mu(x)>0$, it is shown in \cite{PaoSte} that $\mathcal H^1(\Sigma^*)=L$. A similar result in our context is stated in Corollary \ref{cor:L(f)} below.
Another related setting is the ``lazy travelling salesman problem'' of \cite{PoWo}: in $\mathbb{R}^2$, taking for $\mu$ an empirical distribution and considering closed curves, the authors study the penalized problem \eqref{eq:lengthpen} for $p=2$ (with $\lambda \mathscr{L}(f)$ replaced by $\lambda \mathscr{L}^2(f)$). They show that, for $\lambda$ large enough, the problem reduces to a convex optimization problem.
Recall that we study in this manuscript the constrained problem \eqref{eq:crit1}, for open or closed curves. In our context, the distribution of $X$ is not required to be compactly supported, and we do not need to assume that $\mu$ does not charge the sets with finite $\mathcal H^{d-1}$ measure to derive an Euler-Lagrange equation. Indeed, our proof does not rely on the fact that the ridge set is negligible. Besides, we prove that ambiguity points are actually negligible, which implies in particular that, for a given optimal curve, the Lagrange multiplier $\lambda$ in equation \eqref{eq:form1} only depends on the curve $f$. We decided to focus on the case $p=2$, for which we can state the most complete results. In particular, we are only able to show the failure of self-consistency of an optimal curve when $p=2$. As already mentioned, this is a key point to get the main result. Observe that it would be interesting to define a counterpart of this failure of self-consistency for other values of $p$.
\subsection{Organization of the paper}
Our document is organized as follows. Section \ref{section:not} introduces relevant notation and recalls some basic facts about length-constrained principal curves. In Section \ref{section:res}, negligibility of ambiguity points is given in Proposition \ref{prop:hatXneg}, and the main result is stated in its complete form in Theorem \ref{theo:main}.
Injectivity results are presented in Section \ref{section:inj}. Finally, we give in Section \ref{section:exe} explicit examples of optimal curves.
\section{Definitions and notation}\label{section:not}
For $d\geq 1$, the space $\mathbb{R}^d$ is equipped with the standard Euclidean norm, denoted by $\|\cdot\|$. The associated inner product between two elements $u$ and $v$ is denoted by $\langle u,v\rangle$. Let $\mathcal H^1$ denote the 1-dimensional Hausdorff measure in $\mathbb{R}^d$.
For $x\in\mathbb{R}^d$, $A\subset \mathbb{R}^d$, let $ d(x, A)=\inf_{y\in A}\|x-y\|$ denote the distance from point $x$ to set $A$. For $r>0$, let $B(x,r)$ and $\bar B(x,r)$ denote, respectively, the open and the closed balls with center $x$ and radius $r$. Also, let $\partial A$ stand for the boundary of $A$, $\mbox{Card}(A)$ for its cardinality, and $\mbox{diam}(A)=\sup_{x,y\in A}\|x-y\|$ for its diameter.
For every $x\in \mathbb{R}^d$, let $x^j$ denote its $j$-th component, for $j=1,\dots,d$, that is, $x=(x^1,\dots,x^d)$, and set $\|x\|_\infty=\max_{1\leq j\leq d}|x^j|$.
Let $(\Omega,\mathcal F, \mathbb{P})$ be a probability space and $X$ a random vector on $(\Omega,\mathcal F, \mathbb{P})$ with values in $\mathbb{R}^d$, such that $\mathbb{E}[\|X\|^2]<\infty$. We will consider curves, that are continuous functions \begin{align*} f:[0,1]&\to\mathbb{R}^d\\ t &\mapsto (f^1(t),\dots,f^d(t)). \end{align*} For such a curve $f:[0,1]\to\mathbb{R}^d$, let $\mathscr{L}(f)\in[0,\infty]$ denote its length, defined by \begin{equation}
\mathscr{L}(f)=\sup\sum_{i=1}^n\|f(t_i)-f(t_{i-1})\|,\label{eq:length} \end{equation} where the supremum is taken over all possible subdivisions $0=t_0\leq \dots\leq t_n=1$, $ n\geq 1$ (see, e.g., \cite{AlexRes}). Let $\mbox{Im} f$ denote the image of $f$.
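The supremum in \eqref{eq:length} can be explored numerically: inscribed polygon lengths increase under refinement of the subdivision and, for a rectifiable curve, converge to $\mathscr{L}(f)$. A small Python sketch (ours), using uniform subdivisions:

```python
from math import cos, sin, pi, dist

def polygonal_length(f, n):
    """Length of the inscribed polygon of f on the uniform subdivision
    t_i = i / n; a lower bound on the length, increasing with refinement."""
    pts = [f(i / n) for i in range(n + 1)]
    return sum(dist(pts[i - 1], pts[i]) for i in range(1, n + 1))

def quarter_circle(t):
    """Quarter unit circle, a curve of length pi / 2."""
    return (cos(pi * t / 2), sin(pi * t / 2))
```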
Let $$\Delta(f)=\mathbb{E}\left[d(X,\mbox{Im}f)^2\right],$$ and, for $L\geq 0$, $$G(L)=\min\{\Delta(f),f\in\mathcal C_L\},$$ where, in the sequel, $\mathcal C_L$ will denote either one of the following sets of curves: \begin{align*} &\{f:[0,1]\to\mathbb{R}^d,\mathscr{L}(f)\leq L\},\\ &\{f:[0,1]\to\mathbb{R}^d,\mathscr{L}(f)\leq L,f(0)=f(1)\}. \end{align*} Curves belonging to the latter set are closed curves. Note that $G$ is well-defined. Indeed, \cite{KKLZ} have shown the existence of an open curve $f$ with $\mathscr{L}(f)\leq L$ achieving the infimum of the criterion $\Delta(f)$, and the same proof applies for closed curves.
It will be useful to rewrite $ G(L)$, for every $L\geq 0$,
as the minimum of the quantity $$\mathbb{E}[\|X-\hat X\|^2]$$ over all possible random vectors $\hat X$ taking their values in the image $\mbox{Im} f$ of a curve $f\in\mathcal C_L$.
\begin{rem}If $f:[0,1]\to\mathbb{R}^d$ is Lipschitz with constant $L$, its length is at most $L$. This follows directly from the definition of the length \eqref{eq:length}.
Conversely, if the curve $f:[0,1]\to\mathbb{R}^d$ has length $\mathscr{L}(f)\leq L$, then there exists a curve with the same image which is Lipschitz with constant $L$. Indeed, a curve with finite length may be parameterized by arc-length (1-Lipschitz) (see, e.g., \citet[Theorem 2.1.4]{AlexRes}). \end{rem}
\begin{rem}
Let $L\geq 0$. Suppose that $\hat{X}$ satisfies $G(L)=\mathbb{E}[\|X-\hat{X}\|^2]$. Writing $$
\mathbb{E}[\|X-\hat X\|^2]=\mathbb{E}[\|X-\hat X-\mathbb{E}[X-\hat{X}]\|^2]+\|\mathbb{E}[X]-\mathbb{E}[\hat{X}]\|^2, $$we see that, necessarily, \begin{equation}\label{eq:XhatX} \mathbb{E}[X]=\mathbb{E}[\hat{X}], \end{equation} since, otherwise, the criterion could be made strictly smaller by replacing $\hat{X}$ by the translated variable $\hat{X}+\mathbb{E}[X]-\mathbb{E}[\hat{X}]$, which contradicts the optimality of $\hat{X}$.
Observe that \eqref{eq:XhatX} remains true in a more general setting, as soon as the constraint corresponds to a quantity invariant by translation.
\end{rem}
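The identity used in the remark holds exactly for empirical distributions, so it can be checked on arbitrary paired samples. The sketch below (ours) evaluates both sides of $\mathbb{E}[\|X-\hat X\|^2]=\mathbb{E}[\|X-\hat X-\mathbb{E}[X-\hat X]\|^2]+\|\mathbb{E}[X]-\mathbb{E}[\hat X]\|^2$.

```python
def mean(vectors):
    """Componentwise mean of a list of equal-length vectors."""
    d = len(vectors[0])
    return [sum(v[j] for v in vectors) / len(vectors) for j in range(d)]

def sq_norm(v):
    """Squared Euclidean norm."""
    return sum(x * x for x in v)

def decomposition_sides(xs, hxs):
    """Both sides of the bias-variance identity for paired samples."""
    diffs = [[x[j] - h[j] for j in range(len(x))] for x, h in zip(xs, hxs)]
    m = mean(diffs)
    lhs = sum(sq_norm(d) for d in diffs) / len(diffs)
    centered = sum(sq_norm([d[j] - m[j] for j in range(len(m))])
                   for d in diffs) / len(diffs)
    return lhs, centered + sq_norm(m)
```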
\section{Main results and proofs}\label{section:res}
\subsection{Negligibility of the ridge set}
Given a curve $f :[0,1]\to \mathbb{R}^d$, consider the set $$\mathcal P_f(x)=\{y\in \mbox{Im} f,\|x-y\|=d(x, \mbox{Im} f)\}=\bar B(x,d(x,\mbox{Im} f))\cap \mbox{Im} f.$$ If $\mathcal P_f(x)$ has cardinality at least 2, $x$ is called an ambiguity point in the principal curves literature (see \cite{HasStuet}). Properties of the set of such points, named ridge set in the shape optimization community, have been studied for instance in \cite{ManMen}. In particular, the ridge set is measurable. Using property \eqref{eq:XhatX}, it may be shown that the ridge set of an optimal curve for $X$ is negligible for the distribution of $X$. Section \ref{section:proof:hatXneg} below presents the proof of this result, as well as the proof of measurability, provided for the sake of completeness.
\begin{pro}\label{prop:hatXneg}
Let $f\in\mathcal C_L $ be an optimal curve for $X$ ($\Delta(f)=G(L)$).
\begin{enumerate}
\item The set $ \mathcal A_f=\{x\in\mathbb{R}^d,\mbox{Card}(\mathcal P_f(x))\geq 2\}$ of ambiguity points is measurable.
\item The set $ \mathcal A_f$ is negligible for the distribution of $X$.
\end{enumerate}
\end{pro}
\begin{rem} The fact that the ridge set is negligible for the distribution of $X$ may be extended to the context of computing optimal trees under $\mathcal H^1$ constraint. Indeed, the result relies on property \eqref{eq:XhatX}, and $\mathcal H^1$ measure is translation invariant. \end{rem}
\subsection{Main theorem and comments} Recall that a signed measure on $(\Omega,\mathcal F)$ is a function $m:\mathcal F\to \mathbb{R}$ such that $m(\emptyset)=0$ and $m$ is $\sigma$-additive, that is $m\left(\bigcup_{k\geq 1}A_k\right)=\sum_{k\geq 1}m(A_k)$ for any sequence $(A_k)_{k\geq1}$ of pairwise disjoint sets. For an $\mathbb{R}^d$-valued signed measure $m$ on $[0,1]$, that is $m=(m^1, \dots,m^d)$, where each $m^j$ is a signed measure, and for $g:[0,1]\to\mathbb{R}^d$ a measurable function, we will use the following notation: $\int \langle g(t),m(dt)\rangle=\sum_{j=1}^d\int g^j(t)m^j(dt). $
A probability space $(\tilde{\Omega},\tilde{\mathcal F}, \tilde{\mathbb{P}})$ will be called an extension of $(\Omega,\mathcal F,\mathbb{P})$ if there exists a random vector $\tilde{X}$ defined on $(\tilde{\Omega},\tilde{\mathcal F}, \tilde{\mathbb{P}})$, with the same distribution $\mu$ as $X$. For simplicity, we still denote this random vector by $X$ throughout the paper.
\begin{theo}\label{theo:main}
Let $L>0$ be such that $G(L)>0$ and let $f\in\mathcal C_L$ be such that $\Delta(f)=G(L)$. Then, $\mathscr{L}(f)= L$. Assuming that $f$ is $L$-Lipschitz, we obtain that
\begin{itemize}
\item $f$ is right-differentiable on $[0,1)$, $\|f '_r(t)\|=L$ for all $t\in[0,1)$,
\item $f$ is left-differentiable on $(0,1]$, $\|f '_\ell(t)\|=L$ for all $t\in (0,1]$,
\end{itemize} and there exists a unique signed measure $f ''$ on $[0,1]$ (with values in $\mathbb{R}^d$) such that \begin{itemize}
\item $f ''((s,t])=f'_r(t)-f'_r(s)$ for all $0\leq s\leq t<1$,
\item $f ''([0,1])= 0$. \end{itemize} In the case $\mathcal C_L=\{f:[0,1]\to \mathbb{R}^d, \mathscr{L}(f)\leq L\}$, we also have \begin{itemize}
\item $f''(\{0\})=f'_r(0)$,
\item $f''(\{1\})=-f'_\ell(1)$. \end{itemize}
Moreover, there exist a unique $\lambda>0$ and a random variable $\hat{t}$ with values in $[0,1]$, defined on an extension $(\tilde{\Omega},\tilde{\mathcal F}, \tilde{\mathbb{P}})$ of the probability space $(\Omega,\mathcal F,\mathbb{P})$, such that \begin{itemize}
\item $\|X-f(\hat{t})\|=d(X,\mbox{Im}f)$ a.s.,
\item for every bounded Borel function $g:[0,1]\to \mathbb{R}^d$,
\begin{equation}\label{eq:formuleth}
\mathbb{E}\left[\langle X-f(\hat{t}), g(\hat{t})\rangle\right]=-\lambda\int_{[0,1]}\langle g(t), f ''(dt)\rangle.
\end{equation} \end{itemize} \end{theo}
\begin{rem}
Let $m_{\hat{t}|X}$ denote the conditional distribution of $\hat{t}$ given $X$. Then, equation \eqref{eq:formuleth} can be written in the following form: $$\int_{\mathbb{R}^d}\int_{[0,1]}\langle x-f(t),g(t)\rangle m_{\hat{t}|X}(x,dt) d\mu(x)=-\lambda\int_{[0,1]}\langle g(t), f ''(dt)\rangle.$$ \end{rem}
\begin{rem}\label{rem:formuleIP} Whenever the function $g$ is absolutely continuous, an integration by parts (see for instance \citet[Theorem 21.67 $\&$ Remarks 21.68]{HewStr}) shows that equation \eqref{eq:formuleth} may also be written
\begin{equation}\label{eq:formuleIP}
\mathbb{E}\left[\langle X-f(\hat{t}), g(\hat{t})\rangle\right]=\lambda\int_{0}^1\langle g'(t), f_r '(t)\rangle dt.
\end{equation}
To see this, let us write
$$ \langle g(1), f ''([0,1])\rangle=\langle g(0), f''(\{0\})\rangle+\int_{(0,1]}\langle g(t), f ''(dt)\rangle+\int_{(0,1]}\langle g'(s),f''([0,s])\rangle ds.$$
Since $f''([0,1])=0$, we have $$0=\int_{[0,1]}\langle g(t), f ''(dt)\rangle+\int_{(0,1]}\langle g'(s), f_r'(s)\rangle ds,$$ which, combined with \eqref{eq:formuleth}, implies the announced formula \eqref{eq:formuleIP}. \end{rem}
\begin{rem}If the curve $f$ has an angle at $t$, which means that $f'_r(t)\neq f'_\ell(t)$, we see that $$\mathbb{E}[(X-f(\hat{t}))\mathbf{1}_{\{\hat{t}=t\}}]=-\lambda f''(\{t\})=\lambda(f_\ell'(t)-f_r'(t))\neq 0.$$ So, at an angle, $\mathbb{P}(\hat{t}=t)>0$.
Besides, when $\mathcal C_L=\{f:[0,1]\to \mathbb{R}^d, \mathscr{L}(f)\leq L\}$, we have
$$\mathbb{E}[(X-f(\hat{t}))\mathbf{1}_{\{\hat{t}=0\}}]=-\lambda f''(\{0\})=-\lambda f'_r(0),$$
which cannot be zero, since $f'_r(0)$ has norm $L>0$. This implies that $\mathbb{P}(\hat{t}=0)>0.$
\end{rem}
\begin{rem}
Regarding the random variable $\hat t$,
let us mention that $\hat{t}$ is almost surely unique whenever the curve is injective, since $f(\hat t)$ is almost surely unique (this is the case in dimension $d\leq 2$; see Section \ref{section:inj}).
In general, it is worth pointing out that Theorem \ref{theo:main} does not ensure that $\hat{t}$ is a function of $X$, as $(X,\hat{t})$ is, in fact, obtained as a limit in distribution of $(X,\hat{t}_n)$ for some sequence $(\hat{t}_n)_{n\geq 1}$.
Besides, note that we do not know whether $\lambda$ is the same for every optimal curve $f$.
\end{rem}
\begin{rem}[Principal curves in dimension 1]
Let $\mathcal C_L=\{f:[0,1]\to \mathbb{R}^d, \mathscr{L}(f)\leq L\}.$
It may be of interest to consider the simplest case of dimension 1, where the problem may be solved entirely and explicitly.
Assume that $X$ is a real-valued random variable, and that, for some length $L>0$, $G(L)>0$. Consider an optimal curve $f$ with length $\mathscr{L}(f)\leq L$. Using Corollary \ref{cor:L(f)} below, we have that, in fact, $\mathscr{L}(f)=L$, so that the image of $f$ is given by an interval $[a,a+L]$.
In this context,
solving directly the length-constrained principal curve problem in dimension 1 leads to minimizing in $a$ the quantity $$\Delta(a):=\mathbb{E}\left[d(X,\mbox{Im}f)^2\right]=\mathbb{E}[(X-a)^2\mathbf{1}_{\{X<a\}}]+\mathbb{E}[(X-a-L)^2\mathbf{1}_{\{X>a+L\}}].$$
The function $\Delta$ is differentiable in $a$, with derivative given by $$\Delta'(a)=2\mathbb{E}[(a-X)\mathbf{1}_{\{X<a\}}]+2\mathbb{E}[(a+L-X)\mathbf{1}_{\{X>a+L\}}].$$ Moreover, $\Delta'$ admits a right-derivative $\Delta''_r(a)=2(\mathbb{P}(X<a)+\mathbb{P}(X>a+L))$, which is positive since $G(L)>0$ implies that we do not have $X\in [a,a+L]$ almost surely.
Hence, $\Delta$ is strictly convex, which shows that the minimizing $a$ is unique, so that the image of the principal curve $f$ is also uniquely defined.
Besides, observe that equation \eqref{eq:formuleth} from Theorem \ref{theo:main} takes the following form in dimension 1: for every bounded Borel function $g:[0,1]\to \mathbb{R}$, $$\mathbb{E}[(X-a)\mathbf{1}_{\{X<a\}}g(0)] +\mathbb{E}[(X-a-L)\mathbf{1}_{\{X>a+L\}}g(1)]=\lambda L (g(1)-g(0)).$$ In particular, we get \begin{align*} &\mathbb{E}[(X-a)\mathbf{1}_{\{X<a\}}] =-\lambda L ,\\ &\mathbb{E}[(X-a-L)\mathbf{1}_{\{X>a+L\}}]=\lambda L, \end{align*}which characterizes $\lambda$. Let us stress that we directly see in this case that $\lambda>0$, since otherwise we would have $X\in [a,a+L]$ almost surely, which contradicts the fact that $G(L)>0.$
\end{rem}
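The explicit dimension-1 solution can be checked numerically. The following sketch is illustrative only, with a made-up distribution: $X$ uniform on $[0,2]$ and $L=1$. For this distribution and $a\in[0,1]$, the criterion has the closed form $\Delta(a)=\frac{a^3+(1-a)^3}{6}$ and $\mathbb{E}[(X-a)\mathbf{1}_{\{X<a\}}]=-a^2/4$; the code recovers the minimizer $a=1/2$ and computes $\lambda$ from $\mathbb{E}[(X-a)\mathbf{1}_{\{X<a\}}]=-\lambda L$.

```python
import numpy as np

# Hypothetical toy setting: X ~ Uniform([0, 2]), length constraint L = 1.
L = 1.0

def delta(a):
    # Closed form of Delta(a) = E[(X-a)^2 1_{X<a}] + E[(X-a-L)^2 1_{X>a+L}]
    # for X ~ Uniform([0, 2]) and a in [0, 1].
    return (a**3 + (1 - a)**3) / 6

# Minimize Delta over a grid of candidate left endpoints a.
grid = np.linspace(0.0, 1.0, 100001)
a_star = grid[np.argmin(delta(grid))]

# Lagrange multiplier from E[(X-a)1_{X<a}] = -lambda * L;
# here E[(X-a)1_{X<a}] = -a^2/4, so lambda = a^2 / (4 L).
lam = a_star**2 / (4 * L)
```

By symmetry of the uniform distribution, the minimizer is the midpoint choice $a=1/2$, and $\lambda=1/16$.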
\subsection{Proof of Proposition \ref{prop:hatXneg}}\label{section:proof:hatXneg} \begin{enumerate}
\item Note that \begin{align*}
\mathcal A&=\{x\in\mathbb{R}^d,\mbox{Card}(\bar B(x,d(x,\mbox{Im} f))\cap \mbox{Im} f)\geq 2\}\\
&=\{x\in\mathbb{R}^d,\mbox{diam}(\bar B(x,d(x,\mbox{Im} f))\cap \mbox{Im} f)> 0\}\\
&=\mathbb{R}^d\setminus\{x\in\mathbb{R}^d,\mbox{diam}(\bar B(x,d(x,\mbox{Im} f))\cap \mbox{Im} f)= 0\}.
\end{align*}
For every $x\in \mathbb{R}^d$, we may write $$\mbox{diam}(\bar B(x,d(x,\mbox{Im} f))\cap \mbox{Im} f)=\lim_{n\to\infty} \mbox{diam}(B(x,d(x,\mbox{Im} f)+1/n)\cap \mbox{Im} f).$$
Since $f$ is continuous, $f([0,1]\cap \mathbb Q)$ is
dense in $\mbox{Im} f$. For every $n\geq 1$, the countable set
$B(x,d(x,\mbox{Im} f)+1/n)\cap f([0,1]\cap \mathbb Q)$ is dense in $B(x,d(x,\mbox{Im} f)+1/n)\cap \mbox{Im} f$, so that both sets have the same diameter.
Moreover, it is easily checked that the diameter of a countable set is a measurable function of $x$, and we finally obtain that the set $\mathcal A$ of ambiguity points is measurable.
\item To begin with, we prove that, for every $j=1,\dots,d$, it is possible to construct a random vector $\hat X$ with values in $\mbox{Im} f$ such that $\|X-\hat X\|=d(X,\mbox{Im}f)$ a.s., and $$\hat X^j=\max \pi_j(\bar B(X,d(X,\mbox{Im} f))\cap \mbox{Im} f).$$
Here, $\pi_j$ stands for the projection onto direction $j$, that is, for $x=(x^1,\dots,x^{d})\in \mathbb{R}^d$, $\pi_j(x)=x^j$.
Let $\{t_1,t_2,\dots\}$ be an enumeration of the countable set $[0,1]\cap \mathbb Q$.
Let $\varepsilon>0$, $x\in \mathbb{R}^d$.
First, note that the set $\{t\in[0,1], \|f(t)-x\|< d(x, \mbox{Im} f)+\varepsilon\}$ is open. It is nonempty since the distance from $x$ to the closed set $\mbox{Im} f$ is attained. We deduce from this that $\mbox{Card}(\{t\in[0,1]\cap \mathbb Q, \|f(t)-x\|\leq d(x, \mbox{Im} f)+\varepsilon\})=\infty.$
Let us define the sequence $(k_\varepsilon^m(x))_{m\in\mathbb{N}}$ by\begin{align*}
&k_\varepsilon^1(x)=\min\{k: \|f(t_k)-x\|\leq d(x, \mbox{Im} f)+\varepsilon\}\\
&k_\varepsilon^{m+1}(x)=\min\{k>k_\varepsilon^m(x): \|f(t_k)-x\|\leq d(x, \mbox{Im} f)+\varepsilon\},\quad m\in\mathbb{N}.
\end{align*}
Let $j\in\{1,\dots,d\}$. We set $$p^*(x)=\min\{p\geq 1, f^j(t_{k_\varepsilon^p(x)})\geq \sup_{m\in\mathbb{N}}f^j(t_{k_\varepsilon^m(x)})-\varepsilon\}.$$ We define $\hat X_\varepsilon(x)=f(t_{k_\varepsilon^{p^*(x)}(x)})$, which is a measurable choice.
Notice that, since $\{f^j(t_{k_\varepsilon^m(x)}),m\in\mathbb{N}\}=\pi_j(\bar B(x,d(x,\mbox{Im} f)+\varepsilon)\cap f([0,1]\cap\mathbb Q))$ is dense in $\pi_j(\bar B(x,d(x,\mbox{Im} f)+\varepsilon)\cap \mbox{Im} f)$, both sets have the same supremum.
Let
$$\Pi_\varepsilon(x)=\pi_j(\bar B(x,d(x,\mbox{Im} f)+\varepsilon)\cap \mbox{Im} f),\quad \Pi(x)=\pi_j(\bar B(x,d(x,\mbox{Im} f))\cap \mbox{Im} f).$$
The limit of $\hat X^j_\varepsilon(x)$ as $\varepsilon\to 0$ is given by $\lim_{\varepsilon\to 0}\max \Pi_\varepsilon(x)$. Now, note that, for every $\varepsilon$, $\Pi(x)\subset \Pi_\varepsilon(x)$, so that \begin{equation}\label{eq:pi1}
\max\Pi(x)\leq\max\Pi_\varepsilon(x).
\end{equation} Moreover, if $\varepsilon$ is small enough, then for all $y\in\Pi_\varepsilon(x)$, $d(y, \Pi(x))\leq \eta(\varepsilon)$, where $\eta$ tends to 0 with $\varepsilon$, and, thus, \begin{equation}\label{eq:pi2}\max\Pi_\varepsilon(x)\leq \max\Pi(x) + \eta(\varepsilon).\end{equation} Combining inequalities \eqref{eq:pi1} and \eqref{eq:pi2}, we obtain that $\lim_{\varepsilon\to 0}\max \Pi_\varepsilon(x)=\max \Pi(x)$.
Set $\varepsilon_n=1/n$. Up to an extraction, we may assume that $(\hat X_{\varepsilon_n}(X),X)$ converges in distribution to $(\hat X,X)$ as $n\to \infty$. The random vector $\hat X$ satisfies $\|X-\hat X\|=d(X,\mbox{Im} f)$ and $\hat X^j=\max \Pi(X)$.
Similarly, applying the same construction to $-X$ and the curve $-f$, there exists
a random vector $\hat Y$ with values in $\mbox{Im} f$ such that $\|X-\hat Y\|=d(X,\mbox{Im}f)$ a.s., and $$\hat Y^j=\min \pi_j(\bar B(X,d(X,\mbox{Im} f))\cap \mbox{Im} f).$$
Now, we use this result to show that $\mathcal A$ is negligible for the distribution of $X$. Assume that $\mathbb{P}(\mbox{Card}(\mathcal P_f(X))\geq 2)>0.$ Then, there exists a coordinate $j$ such that $\mathbb{P}(\mbox{Card}(\pi_j(\mathcal P_f(X)))\geq 2)>0.$
Then, it is possible to construct $\hat X^j$ and $\hat Y^j$ such that $\mathbb{P}(\hat X^j\geq\hat Y^j)=1$ and $\mathbb{P}(\hat X^j>\hat Y^j)>0$. Now, by property \eqref{eq:XhatX}, $\mathbb{E}[\hat X]=\mathbb{E}[X]=\mathbb{E}[\hat Y]$, and, in particular, $\mathbb{E}[\hat X^j]=\mathbb{E}[\hat Y^j]$, which leads to a contradiction. Thus, $\mathbb{P}(\mbox{Card}(\mathcal P_f(X))=1)=1.$
\end{enumerate}
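The measurable selection used in this proof can be illustrated on a discretized example. All data below are made up, and `select_xhat` is a hypothetical helper mimicking the choice $p^*(x)$, with the discretized curve playing the role of the countable dense set $f([0,1]\cap\mathbb Q)$.

```python
import numpy as np

def select_xhat(x, pts, j, eps):
    """Among curve points within d(x, curve) + eps of x, return the first one
    whose j-th coordinate is within eps of the maximum -- this mimics the
    measurable choice p*(x) of the proof."""
    d = np.linalg.norm(pts - x, axis=1)
    cand = pts[d <= d.min() + eps]
    sup_j = cand[:, j].max()
    for p in cand:  # first index achieving the near-maximum, as for p*(x)
        if p[j] >= sup_j - eps:
            return p

# Unit circle, discretized; x is the center, so every curve point is a nearest
# point, and the selection with j = 0 returns the point of maximal first
# coordinate, namely (1, 0).
ts = np.linspace(0.0, 2 * np.pi, 10001)
pts = np.c_[np.cos(ts), np.sin(ts)]
xhat = select_xhat(np.array([0.0, 0.0]), pts, j=0, eps=1e-3)
```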
In the next sections, we present two lemmas, which are of independent interest and are key steps in the proof of the main result, Theorem \ref{theo:main}.
\subsection{Properties of the function $G$}
The first lemma is about the monotonicity and continuity properties of the function $G$. Observe that $G$ is nonincreasing, since $\{f :[0,1]\to \mathbb{R}^d,\mathscr{L}(f)\leq L_1\}\subset \{f:[0,1]\to \mathbb{R}^d,\mathscr{L}(f)\leq L_2\}$ when $L_1<L_2$, so that $G(L_2)\leq G(L_1)$.
\begin{lem}\label{lem:propG}\begin{enumerate}
\item The function $G$ is continuous.
\item The function $G$ is strictly decreasing over $[0,L_0)$, where $L_0=\inf\{L\geq 0,G(L)=0\}\in\mathbb{R}_+\cup\{\infty\}$.
\end{enumerate}
\end{lem}
In particular, Lemma \ref{lem:propG} admits the next useful corollary. \begin{cor}\label{cor:L(f)}
For $L>0$, if $G(L)>0$ and $f\in\mathcal C_L$ is such that $\Delta(f)=G(L), $ then $\mathscr{L}(f)=L$.
\end{cor} \begin{proof}
If we had $\mathscr{L}(f)<L$, then Lemma \ref{lem:propG} would imply $G(\mathscr{L}(f))>G(L)=\Delta(f)$, which is impossible since, by definition of $G$, $\Delta(f)\geq G(\mathscr{L}(f))$. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:propG}]
1. Fix $L\geq 0$. Let us show that $G$ is continuous at the point $L$. Let $(L_k)_{k\in \mathbb{N}}$ be a sequence in $\mathbb{R}_+$ converging to $L$, with $L_k\neq L$ for all $k\in\mathbb{N}$. Let $f\in\mathcal C_L$ be such that $\Delta(f)=G(L)$, and let $\hat X$ stand for a random vector taking its values in $\mbox{Im} f$ such that $\|X-\hat X\|=d(X,\mbox{Im}f)$ a.s. For every $k\in\mathbb{N}$, let $f_k:[0,1]\to\mathbb{R}^d$ be a curve such that $\mathscr{L}(f_k)\leq L_k$, $\Delta(f_k)=G(L_k)$ and $\|f_k(t)-f_k(t')\|\leq L_k |t-t'|$ for $t,t'\in[0,1]$.
Observe that the sequence $(G(L_k))_{k\in\mathbb{N}}$ is bounded since $\mathbb{E}[\|X\|^2]<\infty$. Let us show that $G(L)$ is the unique limit point of this sequence. Let $\gamma:\mathbb{N}\to\mathbb{N} $ be any increasing function. Our purpose is to show that the sequence $(G(L_{\gamma(k)}))_{k\in\mathbb{N}}$ converges to $G(L)$.
Let us check that the $f_k$ are equi-uniformly continuous and that the sequence $(f_k(0))$ is bounded. Since the sequence $(L_k)_{k\in\mathbb{N}}$ is bounded, say by $L'$, the $f_k$ are Lipschitz with common Lipschitz constant $L'$, and, thus, they are equi-uniformly continuous. For every $k\in\mathbb{N}$, $t\in[0,1]$, we have $\|f_k(t)\| \ge \|f_k(0)\| - L' t\ge \|f_k(0)\| - L'$. Thus, if there exists an increasing function $\kappa:\mathbb{N}\to \mathbb{N}$ such that $\|f_{\kappa(k)}(0)\| \to\infty$, one has $G(L_{\kappa(k)})\to\infty$, which is impossible since $G(L_k)\le \mathbb{E}[\|X\|^2]<\infty$. So, the sequence $(f_k(0))_{k\in\mathbb{N}}$ is bounded.
Consequently, there exists an increasing function $\sigma:\mathbb{N}\to\mathbb{N} $ such that the subsequence $(f_{\sigma\circ\gamma(k)})_{k\in\mathbb{N}}$ converges uniformly to some function $\phi:[0,1]\to\mathbb{R}^d$.
Note that the curve $\phi$ is $L$-Lipschitz, since for all $t,t'$,
\begin{align*}\|\phi(t)-\phi(t')\|&\leq
\|\phi(t)-f_{\sigma\circ\gamma(k)}(t)\|+\|f_{\sigma\circ\gamma(k)}
(t)-f_{\sigma\circ\gamma(k)}(t')\|+\|f_{\sigma\circ\gamma(k)}(t')-\phi(t')\|\\
&\leq
\|\phi(t)-f_{\sigma\circ\gamma(k)}(t)\|+L_{\sigma\circ\gamma(k)}|t-t'|+\|f_{\sigma\circ\gamma(k)}(t')-\phi(t')\|,
\end{align*}which implies, taking the limit as $k\to \infty$,
$\|\phi(t)-\phi(t')\|\leq L|t-t'|$.
We have $\mathscr{L}(\phi)\leq \lim_{k\to\infty}L_k=L.$
Now, observe that
\begin{align*}
&\min_t\|X-f_{\sigma\circ\gamma(k)}(t)\|^2
-\min_t\|X-\phi(t)\|^2\\&=
\left(\min_t\|X-f_{\sigma\circ\gamma(k)}(t)\|
-\min_t\|X-\phi(t)\|\right)\left(\min_t\|X-f_{\sigma\circ\gamma(k)}(t)\|
+\min_t\|X-\phi(t)\|\right)
\\&\leq
\|\phi(t^*)-f_{\sigma\circ\gamma(k)}(t^*)\|(
\|X-f_{\sigma\circ\gamma(k)}(t^*)\|+\|X-\phi(t^*)\|),
\end{align*}
where $\|X-\phi(t^*)\|=\min_t\|X-\phi(t)\|$.
Since $\mathbb{E}[\|X\|^2]<\infty$ and $f_{\sigma\circ\gamma(k)}$ converges uniformly to $\phi$,
this shows that $\Delta(f_{\sigma\circ\gamma(k)})$ converges to $\Delta(\phi)$.
Finally, let us check that $\Delta(\phi)=G(L)$. If $L=0$, then for every $k$, $L_k\geq L$, thus $\Delta(f_{\sigma\circ\gamma(k)})=G(L_{\sigma\circ\gamma(k)})\leq G(0)$ for every $k$. Consequently, $\Delta(\phi)\leq G(0)$, which implies $\Delta(\phi)= G(0)$ since $\phi$ has length 0. If $L>0$, note that, for every $k$, $\frac {L_k}{L}\hat X$ is a random vector with values in $\frac {L_k}{L}\mbox{Im} f$ since $\hat X$ takes its values in $\mbox{Im} f$. Moreover, $\frac {L_k}{L}f$ has length at most $L_k$ since $f$ has length at most $L$. Thus, for every $k$, $$\mathbb{E}\left[\left\|X-\frac{L_{\sigma\circ\gamma(k)}}L\hat X\right\|^2\right]\geq G(L_{\sigma\circ\gamma(k)})=\Delta(f_{\sigma\circ\gamma(k)}).$$ Taking the limit as $k\to \infty$, we obtain $$\mathbb{E}\left[\|X-\hat X\|^2\right]\geq \Delta(\phi),$$ which means that $\Delta(\phi)=G(L)$ since $\mathscr{L}(\phi)\leq L$.
2. We have to show that $G$ is strictly decreasing as long as the length constraint is effective (that is, as long as $G(L)>0$).
Let us prove that, for $0\leq L_1<L_2$, we have $G(L_2)<G(L_1)$ whenever $G(L_1)>0$.
Let $f:[0,1]\to \mathbb{R}^d$ be such that $\mathscr{L}(f)\leq L_1$ and $\Delta(f)=G(L_1)$.
For $t_0\in[0,1]$ and $r>0$, we define $\hat Z_{t_0,r}$ by
$$\begin{cases}
\hat Z^J_{t_0,r}=f^J(t_0)+r\land (X^J-f^J(t_0))\mathbf{1}_{\{X^J\geq f^J(t_0)\}}+(-r)\lor (X^J-f^J(t_0))\mathbf{1}_{\{X^J< f^J(t_0)\}},\\\mbox{where }J=\min\{i:|X^i-f^i(t_0)|=\|X-f(t_0)\|_{\infty}\}\\
\hat Z^i_{t_0,r}=f^i(t_0) \mbox{ if }i\neq J, i=1,\dots,d.
\end{cases}$$
Observe that $\hat Z_{t_0,r}$ takes its values in $$\mathcal C (t_0,r)=\bigcup_{j=1}^d\{x\in\mathbb{R}^d: x^i=f^i(t_0) \mbox{ for } i\neq j, |x^j-f^j(t_0)|\leq r\}.$$
Indeed, all coordinates of $\hat Z_{t_0,r}$ are equal to the corresponding coordinates of $f(t_0)$, apart from the $J$-th one, where $J$ is the smallest index $i$ such that $|X^i-f^i(t_0)|=\|X-f(t_0)\|_{\infty}$. Let us check that $|\hat Z_{t_0,r}^J-f^J(t_0)|\leq r$.
If $X^J\geq f^J(t_0)$, either $\hat Z_{t_0,r}^J-f^J(t_0)=r $, or $\hat Z_{t_0,r}^J-f^J(t_0)=X^J-f^J(t_0)\leq r$.
If $X^J< f^J(t_0)$, either $f^J(t_0)-\hat Z_{t_0,r}^J=r $, or $f^J(t_0)-\hat Z_{t_0,r}^J=f^J(t_0)-X^J\leq r$.
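The definition of $\hat Z_{t_0,r}$ is a coordinatewise clamping, which the following sketch translates into code (illustrative only; `z_hat` and the numerical inputs are made up). It also lets one check numerically the identity $\|X-\hat Z_{t_0,r}\|^2=\|X-f(t_0)\|^2-\|X-f(t_0)\|_\infty^2+(\|X-f(t_0)\|_\infty-r)^2_{+}$ used below.

```python
import numpy as np

def z_hat(x, f_t0, r):
    """Clamp x toward f(t_0) along the coordinate J of largest deviation,
    following the display defining \\hat Z_{t_0,r}."""
    diff = x - f_t0
    J = int(np.argmax(np.abs(diff)))  # argmax returns the smallest such index
    z = f_t0.astype(float).copy()
    if diff[J] >= 0:
        z[J] += min(r, diff[J])      # r ∧ (X^J - f^J(t_0))
    else:
        z[J] += max(-r, diff[J])     # (-r) ∨ (X^J - f^J(t_0))
    return z

# Made-up example: J = 0, and the clamp moves z one unit toward x.
x, f_t0, r = np.array([3.0, 1.0]), np.zeros(2), 1.0
z = z_hat(x, f_t0, r)  # differs from f(t_0) only in coordinate J = 0
```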
\begin{figure}
\caption{Example in $\mathbb{R}^2$, illustrating the support of $\hat Z_{t_0,r}$.}
\end{figure}
Then, letting again $\hat X$ be a random vector with values in $\mbox{Im} f$ such that $\|X-\hat X\|=d(X,\mbox{Im}f)$ a.s., we set
$$\hat X_{t_0,r}=\hat X\mathbf{1}_{\{\|X-\hat X\|\leq \|X-\hat Z_{t_0,r}\|\}}+\hat Z_{t_0,r}\mathbf{1}_{\{\|X-\hat X\|>\|X-\hat Z_{t_0,r}\|\}}.$$
Since $\|X-\hat Z_{t_0,r}\|^2=\|X-f(t_0)\|^2-\|X-f(t_0)\|_\infty^2+(\|X-f(t_0)\|_\infty-r)^2_{+}, $
\begin{align*}
&\|X-\hat X\|^2-\|X-\hat{X}_{t_0,r}\|^2\\&=\left[\|X-\hat X\|^2-\|X-\hat{Z}_{t_0,r}\|^2\right]_+\\
&=\left[\|X-\hat X\|^2-\|X-f(t_0)\|^2+\|X-f(t_0)\|_\infty^2-(\|X-f(t_0)\|_\infty-r)^2_{+}\right]_+\\
&\geq \left[\|X-\hat X\|^2-\|X-f(t_0)\|^2+\|X-f(t_0)\|_\infty^2-(\|X-f(t_0)\|_\infty-r)^2\right]_+\\
&= \left[\|X-\hat X\|^2-\|X-f(t_0)\|^2+2r\|X-f(t_0)\|_\infty-r^2\right]_+\\
&= \left[\|f(t_0)-\hat X\|^2+2\langle X-f(t_0),f(t_0)-\hat X\rangle+2r\|X-f(t_0)\|_\infty-r^2\right]_+\\
&= \left[-\|f(t_0)-\hat X\|^2+2\langle X-\hat X,f(t_0)-\hat X\rangle+2r\|X-f(t_0)\|_\infty-r^2\right]_+\\
&\geq \left[-\|f(t_0)-\hat X\|^2+2\langle X-\hat X,f(t_0)-\hat X\rangle+\frac{2r}{\sqrt{d}}\|X-f(t_0)\|-r^2\right]_+\\&\quad\mbox{ since for every }x\in\mathbb{R}^d, \|x\|\leq\sqrt{d}\|x\|_{\infty}\\
&\geq \left[-\|f(t_0)-\hat X\|^2+2\langle X-\hat X,f(t_0)-\hat X\rangle+\frac{2r}{\sqrt{d}}\|X-\hat{X}\|-r^2\right]_+\\&\quad\mbox{ since }\|X-\hat{X}\|\leq\|X-f(t_0)\|.
\end{align*}
Besides, $\hat X_{t_0,r}$ takes its values in $\mbox{Im} f\cup \mathcal{C}(t_0,r)$, which is the image of a curve with length at most $L_1+4dr$, so that $ \mathbb{E}[\|X-\hat X_{t_0,r}\|^2]\geq G(L_1+4dr)$.
Thus, \begin{multline}\label{eq:minorGL1}G(L_1)\geq G(L_1+4dr)\\+\mathbb{E}\left[\left[-\|f(t_0)-\hat X\|^2+2\langle X-\hat X,f(t_0)-\hat X\rangle+\frac{2r}{\sqrt{d}}\|X-\hat{X}\|-r^2\right]_+\right].
\end{multline}
Since $G(L_1)>0$, $\mathbb{P}(\|X-\hat X\|>0)>0$, thus there exist $\delta>0$ and $K<\infty$ such that $\eta:=\mathbb{P}(K\geq\|X-\hat X\|\geq\delta)>0$.
Recall that, for all $(t,t')$, we have $\|f(t)-f(t')\|\leq L_1|t-t'|$. Then, for every $p\geq 1,$ there exists $k$, $1\leq k\leq p$, such that $\|\hat X-f(\frac{k}{p})\|\leq\frac{L_1}{p}$ and so, we have $$\sum_{k=1}^{p}\mathbf{1}_{\left\{\|\hat X-f(\frac{k}{p})\|\leq\frac{L_1}{p}\right\}} \geq 1.$$ Thus,
$$\sum_{k=1}^{p}\mathbb{P}\left(K\geq\|X-\hat X\| \geq \delta,\left\|\hat X-f\left(\frac{k}{p}\right)\right\|\leq\frac{L_1}{p}\right) \geq \eta.$$ Consequently, for every $p\geq 1$, there exists $t_p\in[0,1]$ such that $$\mathbb{P}\left(K\geq\|X-\hat X\| \geq \delta,\|\hat X-f(t_p)\|\leq\frac{L_1}{p}\right) \geq \frac{\eta}{p}>0.$$
According to \eqref{eq:minorGL1}, we obtain
\begin{align*}
G(L_1)&\geq G(L_1+4dr)+\mathbb{E}\left[\left[-\|f(t_p)-\hat X\|^2+2\langle X-\hat X,f(t_p)-\hat X\rangle+\frac{2r}{\sqrt{d}}\|X-\hat{X}\|-r^2\right]_+\right]\\
&\geq G(L_1+4dr)+\mathbb{E}\left[\mathbf{1}_{\left\{K\geq\|X-\hat X\| \geq \delta,\left\|\hat X-f\left(t_p\right)\right\|\leq\frac{L_1}{p}\right\}}\left(-\frac{L_1^2}{p^2}-\frac{2KL_1}{p}+\frac{2r\delta}{\sqrt{d}}-r^2\right)\right]\\
&\geq G(L_1+4dr)+\frac{\eta}{p}\left(-\frac{L_1^2}{p^2}-\frac{2KL_1}{p}+\frac{2r\delta}{\sqrt{d}}-r^2\right).
\end{align*}
Now, choosing $r>0$ such that $\frac{2r\delta}{\sqrt{d}}-r^2>0$ and $L_1+4dr\leq L_2$, we finally obtain, taking $p$ large enough, $$G(L_1)>G(L_1+4dr)\geq G(L_2).$$
\end{proof}
\subsection{Lack of self-consistency}
The next lemma
states that a principal curve with length at most $L$ does not satisfy the so-called self-consistency property, provided that the constraint is effective, that is, $G(L)>0$.
\begin{lem}\label{lem:noneq}
Let $L>0$ such that $G(L)>0$, and let $f\in\mathcal C_L$ be such that $\Delta(f)=G(L).$ If $\hat{X}$ is a random vector with values in $\mbox{Im} f$ such that $\|X-\hat{X}\|=d(X,\mbox{Im}f)$ a.s., then $\mathbb{P}(\mathbb{E}[X|\hat{X}]\neq \hat{X})>0.$ \end{lem} \begin{proof}
First of all, observe that $\mathscr{L}(f)=L$ since $G(L)>0$, according to Corollary \ref{cor:L(f)}.
Assume that $\mathbb{E}[X|\hat{X}]=\hat{X}$ a.s.
For $\varepsilon\in[0,1], $ we set $\hat{X}_\varepsilon=(1-\varepsilon)\hat{X}$. Then,
$$\|X-\hat{X}_\varepsilon\|^2=\|X-\hat{X}+
\varepsilon\hat{X}\|^2=\|X-\hat{X}\|^2+\varepsilon^2\|\hat{X}\|^2+2\varepsilon\langle X-\hat{X},\hat{X}\rangle.$$ Since $\mathbb{E}[X|\hat{X}]=\hat{X}$ a.s.,
$\mathbb{E}[X-\hat{X}|\hat{X}]=0$ a.s., and thus, $\mathbb{E}[\langle X-\hat{X},\hat{X}\rangle]=\mathbb{E}[\langle \mathbb{E}[X-\hat{X}|\hat{X}],\hat{X}\rangle]=0,$ so that \begin{equation}\label{eq:decriteps}
\mathbb{E}[\|X-\hat{X}_\varepsilon\|^2]=\mathbb{E}[\|X-\hat{X}\|^2]+\varepsilon^2\mathbb{E}[\|\hat{X}\|^2].
\end{equation}
The random vector $\hat{X}_\varepsilon$ takes its values in the image of $(1-\varepsilon)f$, which has length $(1-\varepsilon)L$.
Observe that \begin{equation}\label{eq:hatXfini}
\mathbb{E}[\|\hat{X}\|^2]<\infty,
\end{equation}since $\mathbb{E}[\|X\|^2]<\infty$ and \begin{align*}
\mathbb{E}[\|\hat{X}\|^2]&\leq 2\mathbb{E}[\|X-\hat{X}\|^2]+2\mathbb{E}[\|X\|^2]\\&\leq 2\mathbb{E}[\|X-f(0)\|^2]+2\mathbb{E}[\|X\|^2]\\&\leq 6\mathbb{E}[\|X\|^2]+4\|f(0)\|^2.
\end{align*}
We will show that, adding to $(1-\varepsilon)f$ a curve with length $\varepsilon L$, it is possible to build $\hat{Y}_\varepsilon$ with $\mathbb{E}[\|X-\hat{Y}_\varepsilon\|^2]<\mathbb{E}[\|X-\hat{X}\|^2]$,
which contradicts the optimality of $f$.
For $\varepsilon\in[0,1]$, let $f_\varepsilon=(1-\varepsilon)f.$ We then define $\hat{X}_{\varepsilon,t_0,r}$ as the variable $\hat{X}_{t_0,r}$ corresponding to $f_\varepsilon$. More precisely, similarly to the proof of Lemma \ref{lem:propG}, we define, for $t_0\in[0,1]$ and $r>0$, the random vector $\hat{Z}_{\varepsilon,t_0,r}$, with values in
$$\mathcal C (\varepsilon,t_0,r)=\bigcup_{j=1}^d\{x\in\mathbb{R}^d: x^i=f_\varepsilon^i(t_0) \mbox{ for } i\neq j, |x^j-f^j_\varepsilon(t_0)|\leq r\},$$ by
$$\begin{cases}
\hat Z^J_{\varepsilon,t_0,r}=f_\varepsilon^J(t_0)+r\land (X^J-f^J_\varepsilon(t_0))\mathbf{1}_{\{X^J\geq f^J_\varepsilon(t_0)\}}+(-r)\lor (X^J-f^J_\varepsilon(t_0))\mathbf{1}_{\{X^J< f^J_\varepsilon(t_0)\}},\\\mbox{where }J=\min\{i:|X^i-f^i_\varepsilon(t_0)|=\|X-f_\varepsilon(t_0)\|_{\infty}\}\\
\hat Z^i_{\varepsilon,t_0,r}=f^i_\varepsilon(t_0) \mbox{ if }i\neq J, i=1,\dots,d.
\end{cases}$$
We set $$\hat X_{\varepsilon,t_0,r}=\hat X_\varepsilon\mathbf{1}_{\{\|X-\hat X_\varepsilon\|\leq \|X-\hat Z_{\varepsilon,t_0,r}\|\}}+\hat Z_{\varepsilon,t_0,r}\mathbf{1}_{\{\|X-\hat X_\varepsilon\|>\|X-\hat Z_{\varepsilon,t_0,r}\|\}}.$$
By the same calculation as in the proof of Lemma \ref{lem:propG}, we obtain
$$
\|X-\hat X_\varepsilon\|^2-\|X-\hat{X}_{\varepsilon,t_0,r}\|^2\geq\left[-\|f_\varepsilon(t_0)-\hat X_\varepsilon\|^2+2\langle X-\hat X_\varepsilon,f_\varepsilon(t_0)-\hat X_\varepsilon\rangle+\frac{2r}{\sqrt{d}}\|X-f_\varepsilon(t_0)\|-r^2\right]_+. $$
Since $\|X-f_{\varepsilon}(t_0)\|\geq \|X-f(t_0)\|-\varepsilon\|f(t_0)\|\geq \|X-\hat{X}\|-\varepsilon\|f(t_0)\|,$ we get
\begin{multline*}
\|X-\hat X_\varepsilon\|^2-\|X-\hat{X}_{\varepsilon,t_0,r}\|^2\geq \left[-(1-\varepsilon)^2\|f(t_0)-\hat X\|^2+2(1-\varepsilon)\langle X-\hat X_\varepsilon,f(t_0)-\hat X\rangle\right.\\\left.+\frac{2r}{\sqrt{d}}\|X-\hat{X}\|-\frac{2r}{\sqrt{d}}\varepsilon\|f(t_0)\|-r^2\right]_+.
\end{multline*}
Thus, \begin{align}\label{eq:minorlem2}
&\mathbb{E}\left[\|X-\hat X_\varepsilon\|^2-\|X-\hat{X}_{\varepsilon,t_0,r}\|^2\Big| \hat{X}\right]\nonumber\\&\geq
\left[-\|f(t_0)-\hat X\|^2+2(1-\varepsilon)\langle \mathbb{E}[X| \hat{X}]-\hat X_\varepsilon,f(t_0)-\hat X\rangle+\frac{2r}{\sqrt{d}}\mathbb{E}\left[\|X-\hat{X}\|\Big| \hat{X}\right]-\frac{2r}{\sqrt{d}}\varepsilon\|f(t_0)\|-r^2\right]_+\nonumber\\&=
\left[-\|f(t_0)-\hat X\|^2+2(1-\varepsilon)\langle \varepsilon\hat{X},f(t_0)-\hat X\rangle+\frac{2r}{\sqrt{d}}\mathbb{E}\left[\|X-\hat{X}\|\Big| \hat{X}\right]-\frac{2r}{\sqrt{d}}\varepsilon\|f(t_0)\|-r^2\right]_+\nonumber\\&\geq
\left[-\|f(t_0)-\hat X\|^2-2\varepsilon\|\hat{X}\|\|f(t_0)-\hat X\|+\frac{2r}{\sqrt{d}}\mathbb{E}\left[\|X-\hat{X}\|\Big| \hat{X}\right]-\frac{2r}{\sqrt{d}}\varepsilon\|f(t_0)\|-r^2\right]_+.
\end{align}
Besides, since $G(L)>0$, there exist $\delta>0$, $K<\infty,$ such that
$$\eta=\mathbb{P}\left(\|\hat{X}\|\leq K, \mathbb{E}\left[\|X-\hat{X}\|\Big| \hat{X}\right]\geq \delta\right)>0.$$
Moreover, for every $p\geq 1$, $\sum_{k=1}^p \mathbf{1}_{\{\|\hat{X}-f\left(\frac{k}{p}\right)\|\leq \frac{L}{p}\}}\geq 1$ since $f$ is $L$-Lipschitz. Consequently, $$ \sum_{k=1}^{p}\mathbb{P}\left(\|\hat{X}\|\leq K, \mathbb{E}\left[\|X-\hat{X}\|\Big | \hat{X}\right]\geq \delta,\left\|\hat{X}-f\left(\frac{k}{p}\right)\right\|\leq \frac{L}{p}\right)\geq \eta.$$ Hence, there exists $t_p\in[0,1]$ such that, setting $$A_p=\left\{\|\hat{X}\|\leq K, \mathbb{E}\left[\|X-\hat{X}\|\Big| \hat{X}\right]\geq \delta,\left\|\hat{X}-f\left(t_p\right)\right\|\leq \frac{L}{p}\right\},$$ we have $\mathbb{P}(A_p)\geq\frac{\eta}{p}$. From \eqref{eq:minorlem2}, we get
\begin{align*}
\mathbb{E}&\left[\|X-\hat X_\varepsilon\|^2-\|X-\hat{X}_{\varepsilon,t_p,r}\|^2\right]\\&\geq
\mathbb{E}\left[\mathbf{1}_{A_p} \left[-\|f(t_p)-\hat X\|^2-2\varepsilon\|\hat{X}\|\|f(t_p)-\hat X\|+\frac{2r}{\sqrt{d}}\mathbb{E}\left[\|X-\hat{X}\|\Big| \hat{X}\right]-\frac{2r}{\sqrt{d}}\varepsilon\|f(t_p)\|-r^2\right]_+\right]\\&\geq
\mathbb{P}(A_p)\left[-\frac{L^2}{p^2}-\frac{2\varepsilon K L}{p}+\frac{2r\delta}{\sqrt{d}}-\frac{2r\varepsilon M}{\sqrt{d}}-r^2\right],
\end{align*}where $M=\sup_{t\in[0,1]}\|f(t)\|$.
Since $\hat{X}_{\varepsilon,t_p,r}$ takes its values in $f_\varepsilon([0,1])\cup \mathcal C(\varepsilon,t_p,r)$, which is the image of a curve with length at most $(1-\varepsilon)L+4dr,$ choosing $r$ such that $4dr=\varepsilon L$, we have
\begin{align*}
\mathbb{E}\left[\left\|X-\hat{X}_{\varepsilon,t_p,\frac{\varepsilon L}{4d}}\right\|^2\right]&\leq
\mathbb{E}\left[\|X-\hat X_\varepsilon\|^2\right]-\frac{\eta}{p}\left(-\frac{L^2}{p^2}-\frac{2 K L\varepsilon}{p}+\frac{ L\delta\varepsilon}{2d^{3/2}}-\frac{ML\varepsilon^2}{2d^{3/2}}-\frac{L^2\varepsilon^2}{16d^2}\right)\\&=
\mathbb{E}[\|X-\hat{X}
\|^2]+\varepsilon^2\mathbb{E}[\|\hat{X}\|^2]+\frac{\eta L^2}{p^3}+\frac{2\eta K L\varepsilon}{p^2}-\frac{ \eta L\delta\varepsilon}{2d^{3/2}p}+\frac{\eta ML\varepsilon^2}{2d^{3/2}p}-\frac{\eta L^2\varepsilon^2}{16d^2p},
\end{align*}using \eqref{eq:decriteps}. Then, taking $\varepsilon=\frac{\rho}{p}$, we get
$$\mathbb{E}\left[\left\|X-\hat{X}_{\frac{\rho}{p},t_p,\frac{\rho L}{4dp}}\right\|^2\right]\leq
\mathbb{E}[\|X-\hat{X}
\|^2]+\frac{\rho^2}{p^2}\mathbb{E}[\|\hat{X}\|^2]+\frac{\eta L^2}{p^3}+\frac{2\eta K L\rho}{p^3}-\frac{ \eta L\delta\rho}{2d^{3/2}p^2}+\frac{\eta ML\rho^2}{2d^{3/2}p^3}-\frac{\eta L^2\rho^2}{16d^2p^3}.$$
If $\rho$ is small enough, then ${\rho^2}\mathbb{E}[\|\hat{X}\|^2]-\frac{ \eta L\delta\rho}{2d^{3/2}}<0$. Then, taking $p$ large enough, this leads to a random vector $\hat{Y}$, with values in the image of a curve with length at most $L$, such that $\mathbb{E}[\|X-\hat{Y}\|^2]<\mathbb{E}[\|X-\hat{X}\|^2]$.
\end{proof}
Equipped with Lemmas \ref{lem:propG} and \ref{lem:noneq}, we can now present the proof of the main result.
\subsection{Proof of Theorem \ref{theo:main}}
To obtain a length-constrained principal curve, we have to minimize a function which may not be differentiable. We propose to build a discrete approximation of the principal curve $f$, using a chain of points $v^n_1,\dots,v^n_n$, $n\geq 1$, in $\mathbb{R}^d$. For every $n\geq 1$, linking the points yields a polygonal curve $f_n$. The properties of the principal curve $f$ will be shown by passing to the limit. The chain of points is obtained by minimizing a $k$-means-like criterion, which is differentiable, under a length constraint. This criterion is based on the distances from the random vector $X$ to the $n$ points, and not to the corresponding segments of the polygonal line $f_n$, which simplifies the computation of the gradients.
We have chosen to present the proof for open curves, that is, in the case $\mathcal C_L=\{\phi:[0,1]\to \mathbb{R}^d, \mathscr{L}(\phi)\leq L\}$. It adapts straightforwardly to the case of closed curves, which turns out to be even simpler since there are no endpoints, so that all points of the curve play the same role. Note that the normalization factor ``$n-1$'' below becomes ``$n$'' in the closed curve context.
\subsubsection*{First insight into the proof}To facilitate understanding, we sketch the proof in a simpler case. Assume that $X$ has a density with respect to Lebesgue measure, and consider a polygonal line $f_n$ with vertices $v_1^n,\dots,v_n^n$ obtained by minimizing under length constraint the criterion \begin{equation}
F_n^0(x_1,\dots,x_n)= \mathbb{E}\left[\min_{1\leq i\leq n}\|X-x_i\|^2\right].\label{eq:crit} \end{equation}
For $h=(h_1,\dots,h_n)\in(\mathbb{R}^d)^n$, $\nabla F_n^0\cdot h=\sum_{i=1}^{n}\mathbb{E}\left[-2\langle X-\hat{X},h_i\rangle\mathbf{1}_{\{\hat{X}=v_i^n\}}\right]$, where $\hat X$ is such that $\|X-\hat{X}\|=\min_{1\leq j\leq n}\|X-v_j^n\|$. For differentiability, it is convenient to write the length constraint as follows: $$(n-1)\sum_{i=2}^{n}\|x_i-x_{i-1}\|^2\leq L^2.$$
Let $\hat{t}_n$ be defined by $\hat{t}_n=\frac{i-1}{n-1}$ on the event $\{\hat X=v_i^n\}$. For a test function $g$, set $h_i=g\big(\frac{i-1}{n-1}\big)$ for $i=1,\dots,n$. Then, we obtain the Euler-Lagrange equation \begin{equation}\label{eq:euldis} \mathbb{E}\left[\langle X-f_n(\hat{t}_n),g(\hat{t}_n)\rangle\right]=-\lambda_n\int_{[0,1]}\langle g(t),f_n''(dt)\rangle. \end{equation}
Up to an extraction, $f_n$ converges uniformly to an optimal curve and $\hat{t}_n$ converges in distribution. Using the lack of self-consistency (Lemma \ref{lem:noneq}), it may be shown that every limit point of the sequence $(\lambda_n)_{n\geq 1}$ is positive. Together with the discrete Euler-Lagrange equation \eqref{eq:euldis}, this allows us to prove that $f_n''$ converges weakly to a signed measure $f''$. Finally, the desired Euler-Lagrange equation is obtained as the limit of \eqref{eq:euldis}.
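As an informal illustration of this strategy (not the paper's actual procedure: the data, chain size, step size and plain projected-gradient scheme below are all made up, and the smoothing and penalty terms of \eqref{eq:crit-tot} are omitted), one can run a few projected gradient steps on an empirical version of the criterion under the quadratic constraint \eqref{eq:constLvi}:

```python
import numpy as np

rng = np.random.default_rng(0)
# made-up synthetic data scattered around a quarter circle in R^2
t = rng.uniform(0.0, np.pi / 2, 500)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(500, 2))

n, L = 10, 2.0

def project(v):
    # Rescale the increments so that (n-1) * sum ||v_i - v_{i-1}||^2 <= L^2.
    inc = np.diff(v, axis=0)
    s = (n - 1) * np.sum(inc ** 2)
    if s > L ** 2:
        inc *= L / np.sqrt(s)
        v = np.vstack([v[:1], v[0] + np.cumsum(inc, axis=0)])
    return v

def criterion(v):
    # Empirical version of E[min_i ||X - x_i||^2].
    d2 = ((X[:, None, :] - v[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

# initialize the chain on a diagonal segment through the data
v = project(np.linspace([0.2, 0.2], [0.9, 0.9], n))
F0 = criterion(v)
for _ in range(200):
    d2 = ((X[:, None, :] - v[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    grad = np.zeros_like(v)
    for i in range(n):
        mask = nearest == i
        if mask.any():
            grad[i] = -2.0 * (X[mask] - v[i]).sum(axis=0) / len(X)
    v = project(v - 0.1 * grad)
F1 = criterion(v)
```

The final chain satisfies the length constraint and achieves a smaller empirical distortion than the initial one; the paper's construction additionally smooths $X$, perturbs the points, and penalizes the distance to the optimal curve so as to control the limit.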
\subsubsection*{Complete proof} Let us now start the complete proof of the theorem.
First, some notation is in order.
Let $Z$ be a standard $d$-dimensional Gaussian vector, independent of $X$. Let $(\zeta_n)$, $(\eta_n)$ and $(\varepsilon_n)$ be sequences of positive real numbers such that $$ \zeta_n= \mathcal O(1/n), \quad \eta_n=\mathcal O(1/n),\quad n\varepsilon_n\to\infty, \quad \varepsilon_n\to 0.$$
We also introduce i.i.d. random vectors $\xi_1^n,\dots,\xi_n^n$, independent of $X$ and $Z$, with the same distribution as a centered random vector $\xi$ with compactly supported density, such that $\|\xi\|\leq \eta_n$.
We will construct a sequence of polygonal lines converging to the optimal curve $f$ by linking points $v^n_1,\dots,v^n_n$ obtained by minimizing a criterion generalizing \eqref{eq:crit}. Proving differentiability is a little more involved in this setting.
To begin with, since the random vector $X$ is not assumed to have a density with respect to Lebesgue measure, we convolve it with a Gaussian random vector: we define, for $n\geq 1$, $X_n=X+\zeta_n Z$. So, $X$ is approximated by a sequence $(X_n)_{n\geq 1}$ of continuous random variables.
For $1\leq i\leq n,$ let $$t_i^n:=\dfrac{i-1}{n-1}.$$
In order to be able to prove results which are true for any optimal curve $f$, we have to ensure that the points $v^n_1,\dots,v^n_n$, $n\geq 1$, are located on this curve $f$. To this aim, we add to the criterion \eqref{eq:crit} a penalty proportional to $$\sum_{i=1}^n\|x_i-f(t_i^n)\|^2.$$ With this penalty, we can no longer guarantee that the $x_i$'s are pairwise distinct. To overcome this difficulty, a random vector $\xi^n_i$ is added to each $x_i$: the points $x_i+\xi^n_i$, approximating the $x_i$'s, are almost surely pairwise distinct.
The desired chain of points is then defined, for $n\geq 1$, by minimizing in $x=(x_1,\dots,x_n)\in(\mathbb{R}^d)^n$ the criterion \begin{equation}
F_n(x_1,\dots,x_n)= \mathbb{E}\left[\min_{1\leq i\leq n}\|X_n-x_i-\xi_i^n\|^2\right]+\varepsilon_n\sum_{i=1}^n\|x_i-f(t_i^n)\|^2,\label{eq:crit-tot} \end{equation} under the constraint \begin{equation}\label{eq:constLvi}
(n-1)\sum_{i=2}^n\|x_i-x_{i-1}\|^2\leq L^2. \end{equation}
\begin{lem}\label{lem:existvi}
There exists $(v^n_1,\dots,v^n_n)\in(\mathbb{R}^d)^ n$, satisfying
$$(n-1)\sum_{i=2}^n\|v^n_i-v^n_{i-1}\|^2\leq L^2,$$
such that $$F_n(v^n_1,\dots,v^n_n)=\min\bigg\{F_n(x_1,\dots,x_n); (n-1)\sum_{i=2}^n\|x_i-x_{i-1}\|^2\leq L^2\bigg\}.$$ \end{lem} Let $\hat{X}_n^x$ be such that $\hat{X}_n^x\in\{x_1+\xi_1^n,\dots,x_n+\xi_n^n\}$ and \begin{equation}\label{eq:hatXnx}
\|X_n-\hat{X}_n^x\|=\min_{1\leq i\leq n}\|X_n-x_i-\xi_i^n\| \end{equation} almost surely. In the sequel, $\hat{X}_n$ will stand for $\hat{X}_n^{(v^n_1,\dots,v^n_n)}$.
\begin{lem}\label{lem:F-fini} \begin{equation*}\sup_{n\geq 1}F_n(v^n_1,\dots,v^n_n)<\infty.\end{equation*} \end{lem}
We define the sequence $(f_n)_{n\geq 1}$ of polygonal lines approximating $f$, where each $f_n:[0,1]\to \mathbb{R}^d$, $n\geq 1$, is given by $$f_n(t)=v^n_i+(n-1)\left(t-t_i^n\right)(v^n_{i+1}-v^n_i), \quad t^n_i\leq t\leq t^n_{i+1},\quad 1\leq i \leq n-1. $$
This function $f_n$ is absolutely continuous and we have $f '_n(t)=(n-1)(v^n_{i+1}-v^n_i)$ for $t\in\big(t_i^n,t_{i+1}^n\big)$.
Using the definition of $f '_n$, we obtain the following regularity properties of $f_n$.
\begin{lem}\label{lem:fn'} For $n\geq 1$, the curve $f_n$ satisfies:\begin{enumerate}
\item $\mathscr{L}(f_n)\le L.$
\item For all $t,t'\in[0,1]$, $\|f_n(t)-f_n(t')\|\leq L\sqrt{|t-t'|}.$
\end{enumerate}
\end{lem}
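Both items of Lemma \ref{lem:fn'} follow from the constraint \eqref{eq:constLvi} by the Cauchy--Schwarz inequality; the display below is only a sketch of a possible argument, and need not coincide with the proofs given in the paper.

```latex
% Item 1: Cauchy--Schwarz and the constraint \eqref{eq:constLvi} give
\mathscr{L}(f_n)=\sum_{i=2}^{n}\|v^n_i-v^n_{i-1}\|
\leq \sqrt{n-1}\left(\sum_{i=2}^{n}\|v^n_i-v^n_{i-1}\|^2\right)^{1/2}
\leq \sqrt{n-1}\cdot\frac{L}{\sqrt{n-1}}=L.
% Item 2: since \int_0^1\|f_n'(s)\|^2\,ds
%         =(n-1)\sum_{i=2}^{n}\|v^n_i-v^n_{i-1}\|^2\leq L^2,
\|f_n(t)-f_n(t')\|\leq\int_{t\wedge t'}^{t\vee t'}\|f_n'(s)\|\,ds
\leq\sqrt{|t-t'|}\left(\int_0^1\|f_n'(s)\|^2\,ds\right)^{1/2}
\leq L\sqrt{|t-t'|}.
```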
Asymptotically, the penalty term ensuring that the points $v^n_1,\dots,v^n_n$, $n\geq 1$ belong to the curve $f$ can be neglected. \begin{lem}\label{lem:pen}
There exists $c\geq 0$ such that, for all $n\geq 1,$ \begin{equation*}
\varepsilon_n\sum_{i=1}^{n}\|v^n_i-f(t_i^n)\|^2\leq\frac{c}{n}.\end{equation*} \end{lem}
\begin{lem}\label{lem:conv-fn}
The sequence $(f_n)_{n\geq 1}$ converges uniformly to the curve $f$.
\end{lem}
Let $\hat{t}_n = t_i^n$ on the event $\{ \hat{X}_n = v^n_i+\xi_i^n\}$, $1 \leq i \leq n$. Note that the sequence $(\hat{t}_{n})_{n\geq 1}$ is bounded. Thus, up to extending the probability space $(\Omega, \mathcal F, \mathbb{P})$ and extracting a subsequence, we may assume that $(X_n,\hat t_n)$ converges in distribution to a pair $(X,\hat t)$. This implies the next result.
\begin{lem}\label{lem:hat-t}There exists a random variable $\hat t$ with values in $[0,1]$, defined on an extension of the probability space $(\Omega, \mathcal F, \mathbb{P})$, such that
$$\|X-f(\hat{t})\|=d(X,\mbox{Im}f)\quad a.s.$$ \end{lem}
In order to be able to state a first order Euler-Lagrange equation for the criterion \eqref{eq:crit-tot}, we show that the quantity $\mathbb{E}\left[\min_{1\leq i\leq n}\|X_n-x_i-\xi_i^n\|^2\right]$ is differentiable at $x_i$. Recall the definition \eqref{eq:hatXnx} of $\hat{X}_n^x$.
\begin{lem}\label{lem:diff}The function $(x_1,\dots,x_n)\mapsto \mathbb{E}\left[\min_{1\leq i\leq n}\|X_n-x_i-\xi_i^n\|^2\right]$ is differentiable, and, for $1\leq i\leq n$, the gradient with respect to $x_i$ is given by
$$\frac{\partial}{\partial x_i}\mathbb{E}\left[\min_{1\leq j\leq n}\|X_n-x_j-\xi_j^n\|^2\right]=-2\mathbb{E}\left[(X_n-\hat{X}_n^x)\mathbf{1}_{\{\hat{X}_n^x=x_i+\xi_i^n\}}\right].$$ \end{lem}
The Lagrange multiplier method then leads to the next system of equations satisfied by $v^n_1,\dots,v^n_n$.
\begin{lem}\label{lem:eqn}For $n\geq 1$, there exists a Lagrange multiplier $\lambda_n\geq 0$ such that
$$\begin{cases}
&-\mathbb{E}\left[(X_n-\hat{X}_n)\mathbf{1}_{\{\hat{X}_n=v^n_i+\xi_i^n\}}\right]+\varepsilon_n\big(v^n_i-f(t_i^n)\big)+\lambda_n(n-1)(v^n_i-v^n_{i-1}-(v^n_{i+1}-v^n_i))=0,\\&
2\leq i\leq n-1,\\
&-\mathbb{E}\left[(X_n-\hat{X}_n)\mathbf{1}_{\{\hat{X}_n=v^n_1+\xi_1^n\}}\right]+\varepsilon_n(v^n_1-f(0))-\lambda_n(n-1)(v^n_2-v^n_1)=0,\\
&-\mathbb{E}\left[(X_n-\hat{X}_n)\mathbf{1}_{\{\hat{X}_n=v^n_n+\xi_n^n\}}\right]+\varepsilon_n(v^n_n-f(1))+\lambda_n(n-1)(v^n_n-v^n_{n-1})=0.
\end{cases}$$ \end{lem}
\begin{lem}\label{lem:lambda0} If $\lambda$ is a limit point of the sequence $(\lambda_n)_{n\geq 1}$, then $\lambda\in (0,\infty]$.
\end{lem}
Hence, up to an extraction, we may assume that the sequence $(\lambda_n)_{n\geq 1}$ converges to a limit $\lambda \in (0,\infty]$.
Let $\delta_\ell$ denote the Dirac mass at $\ell$. For every $n\geq 2$, we define $f''_n$ on $[0,1]$ by \begin{equation} \label{eq:fn''} f''_n=(n-1)\left[\sum_{i=2}^{n-1}(v^n_{i+1}-v^n_i-(v^n_i-v^n_{i-1}))\delta_{t_i^n}+(v^n_2-v^n_1)\delta_0-(v^n_n-v^n_{n-1})\delta_1\right], \end{equation} which is a vector-valued signed measure.
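As a quick sanity check (an illustrative sketch with arbitrary stand-in vertices, not part of the proofs), the atoms of $f''_n$ telescope, so that $f''_n([0,1])=0$; this discrete identity underlies the property $f''([0,1])=0$ stated in Lemma \ref{lem:reg}.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 2
v = rng.normal(size=(n, d))   # stand-ins for the vertices v_1^n, ..., v_n^n

# atoms of f_n'' as in the displayed definition:
#   at t_i^n (2 <= i <= n-1): (n-1) * (v_{i+1} - 2 v_i + v_{i-1})
#   at 0:                      (n-1) * (v_2 - v_1)
#   at 1:                     -(n-1) * (v_n - v_{n-1})
atoms = [(n - 1) * (v[1] - v[0])]
atoms += [(n - 1) * (v[i + 1] - 2 * v[i] + v[i - 1]) for i in range(1, n - 1)]
atoms += [-(n - 1) * (v[n - 1] - v[n - 2])]

total = np.sum(atoms, axis=0)   # f_n''([0,1]); the sum telescopes
print(total)
```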
\begin{lem}
\label{lem:reg}
The sequence $(f''_n)_{n\geq 1}$ converges weakly to a signed measure $f''$ on $[0,1]$, with values in $\mathbb{R}^d$, which is the second derivative of $f$. The following regularity properties hold:
\begin{itemize}
\item $f$ is right-differentiable on $[0,1)$, $\|f '_r(t)\|=L$ for all $t\in[0,1)$,
\item $f$ is left-differentiable on $(0,1]$, $\|f '_\ell(t)\|=L$ for all $t\in (0,1]$,
\item $f ''((s,t])=f'_r(t)-f'_r(s)$ for all $0\leq s\leq t<1$,
\item $f ''([0,1])= 0$,
\item $f''(\{0\})=f'_r(0)$,
\item $f''(\{1\})=-f'_\ell(1)$.
\end{itemize}
\end{lem} These properties imply in particular that $\lambda$ is finite. \begin{lem}\label{lem:lafini} We have $\lambda<\infty$. \end{lem} Finally, collecting all the results allows us to derive the Euler-Lagrange equation. \begin{lem}\label{lem:equation}
For every bounded Borel function $g:[0,1]\to \mathbb{R}^d$,
$$
\mathbb{E}\left[\langle X-f(\hat{t}), g(\hat{t})\rangle\right]=-\lambda\int_{[0,1]}\langle g(t), f ''(dt)\rangle.
$$ Moreover, $\lambda $ depends only on the curve $f$. \end{lem}
\begin{proof}[Proof of Lemma \ref{lem:existvi}]
Since $\mathbb{E}[\|X_n\|^2]\leq 2 \mathbb{E}[\|X\|^2]+2 d\zeta_n^2<\infty$ and $\mathbb{E}[\|\xi_i^n\|^2]\leq\eta_n^2<\infty$ for $1\leq i\leq n$, $F_n$ takes its values in $[0,\infty)$ and is continuous.
The constraint \eqref{eq:constLvi} defines a nonempty closed set $D_n$.
Since
$$ \lim_{ \substack{\|x_1\|+\cdots+\|x_n\| \to\infty\\
(x_1,\dots,x_n)\in D_n} } F_n(x_1,\dots,x_n) =\infty, $$
the optimization problem thus reduces to minimizing a continuous function on a compact set.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:F-fini}]
Recall that, for all $t,t'\in[0,1]$, $\|f(t)-f(t')\|\leq L|t-t'|$. Hence, we have \begin{equation*}\label{eq:Lfi}
(n-1)\sum_{i=2}^{n}\|f(t_i^n)-f(t_{i-1}^n)\|^2\leq L^2,
\end{equation*} so that $(x_1,\dots,x_n)=(f(t_1^n),\dots, f(t_n^n))$ satisfies the constraint \eqref{eq:constLvi}. By optimality of $(v^n_1,\dots,v^n_n)$, bounding the minimum by its first term and recalling that $t_1^n=0$, we see that \begin{align*}
F_n(v^n_1,\dots,v^n_n)&\leq \mathbb{E}\left[\|X_n-f(0)-\xi_1^n\|^2\right]\\
&\leq 2\mathbb{E}\left[\|X_n-\xi_1^n\|^2\right]+2\|f(0)\|^2\\
&\leq 2\mathbb{E}\left[\|X\|^2\right]+{ 2d\zeta_n^2}+2\eta_n^2+2\|f(0)\|^2.
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:fn'}]
By definition of $f_n'$, and using that $v^n_1,\dots,v^n_n$ satisfy constraint \eqref{eq:constLvi}, we have $$\int_0^1\|f '_n(t)\|^2 dt=\sum_{i=1}^{n-1}(n-1)^2\|v^n_{i+1}-v^n_i\|^2\times\frac{1}{n-1}=(n-1)\sum_{i=1}^{n-1}\|v^n_{i+1}-v^n_i\|^2\leq L^2.$$
Hence, \begin{equation*}\label{eq:Lfn}
\mathscr{L}(f_n)\le \Bigl(\int_0^1\|f '_n(t)\|^2 dt\Bigr)^{1/2} \le L,
\end{equation*} and for all $t,t'\in[0,1]$,
\begin{equation*} \label{equiuc}
\|f_n(t)-f_n(t')\|=\Big\|\int_{0}^{1}\mathbf{1}_{[t\land t',t\lor t']}(u)f_n '(u)du\Big\|\leq L\sqrt{|t-t'|}.
\end{equation*}
\end{proof}
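The identity used above, $\int_0^1\|f'_n(t)\|^2 dt=(n-1)\sum_{i=1}^{n-1}\|v^n_{i+1}-v^n_i\|^2$, can also be verified numerically for a polygonal line. The sketch below (with arbitrary stand-in vertices, not the paper's data) compares a fine Riemann sum of $\|f_n'\|^2$ with the closed form.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5, 2
v = rng.normal(size=(n, d))           # stand-ins for the vertices v_1^n, ..., v_n^n

# f_n' is constant, equal to (n-1)(v_{i+1} - v_i), on each of the
# n-1 intervals of length 1/(n-1)
slopes = (n - 1) * (v[1:] - v[:-1])   # shape (n-1, d)

# Riemann sum of ||f_n'||^2 on a fine midpoint grid aligned with the intervals
K = (n - 1) * 1000
t = (np.arange(K) + 0.5) / K                        # midpoints in [0, 1]
idx = np.minimum((t * (n - 1)).astype(int), n - 2)  # interval containing t
riemann = np.mean(np.sum(slopes[idx] ** 2, axis=1))

closed_form = (n - 1) * np.sum((v[1:] - v[:-1]) ** 2)
print(riemann, closed_form)
```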
\begin{proof}[Proof of Lemma \ref{lem:pen}] The aim is to show that there exists $c\geq 0$ such that, for all $n\geq 1,$ \begin{equation*}
\varepsilon_n\sum_{i=1}^{n}\|v^n_i-f(t_i^n)\|^2\leq\frac{c}{n}.
\end{equation*}
The following upper bound will be useful:
\begin{equation*}
\left|\min_{1\leq i\leq n}\left\|X_n-f(t_i^n)-\xi_i^n\right\|-\min_{1\leq i\leq n}\left\|X-f(t_i^n)\right\|\right|\leq \zeta_n\|Z\|+\eta_n.
\end{equation*}
By definition of $(v^n_1,\dots,v^n_n)$, since $(f(t_1^n),\dots, f(t_n^n))$ satisfies the constraint \eqref{eq:constLvi}, as already mentioned in the proof of Lemma \ref{lem:F-fini}, we may write
$$F_n(v^n_1,\dots,v^n_n)\leq \mathbb{E}\left[\min_{1\leq i\leq n}\left\|X_n-f(t_i^n)-\xi_i^n\right\|^2\right].$$
Observe that \begin{align*}
&\left|\min_{1\leq i\leq n}\|X_n-f(t_i^n)-\xi_i^n\|-\min_{t\in[0,1]}\|X-f(t)\|\right|\\&\leq
\left|\min_{1\leq i\leq n}\|X_n-f(t_i^n)-\xi_i^n\|-\min_{1\leq i\leq n}\|X-f(t_i^n)\|\right|+
\left|\min_{1\leq i\leq n}\left\|X-f(t_i^n)\right\|-\min_{t\in[0,1]}\|X-f(t)\|\right|
\\&\leq \zeta_n\|Z\|+\eta_n+ \frac{L}{n-1},
\end{align*}
so that \begin{multline*}
\min_{1\leq i\leq n}\left\|X_n-f(t_i^n)-\xi_i^n\right\|^2\leq \min_{t\in[0,1]}\|X-f(t)\|^2+\left(\eta_n+\zeta_n\|Z\|+\frac{L}{n-1}\right)^2\\+2\left(\eta_n+\zeta_n\|Z\|+\frac{L}{n-1}\right)\min_{t\in[0,1]}\|X-f(t)\|.
\end{multline*}
Consequently, there exists $c_1\geq 0$, such that \begin{equation*}F_n(v^n_1,\dots,v^n_n)\leq G(L)+\frac{c_1}{n}.\end{equation*}
Besides, $$F_n(v^n_1,\dots,v^n_n)=\mathbb{E}\left[\min_{1\leq i\leq n}\left\|X_n-f_n(t_i^n)-\xi_i^n\right\|^2\right]+\varepsilon_n\sum_{i=1}^n\|f_n(t_i^n)-f(t_i^n)\|^2,$$
and, writing \begin{align*}
&\left|\min_{1\leq i\leq n}\left\|X_n-f_n(t_i^n)-\xi_i^n\right\|^2- \min_{1\leq i\leq n}\left\|X-f_n(t_i^n)\right\|^2\right|\\&\leq
\left|\min_{1\leq i\leq n}\left\|X_n-f_n(t_i^n)-\xi_i^n\right\|-\min_{1\leq i\leq n}\left\|X-f_n(t_i^n)\right\|\right|\times\left(\min_{1\leq i\leq n}\left\|X_n-f_n(t_i^n)-\xi_i^n\right\|+\min_{1\leq i\leq n}\left\|X-f_n(t_i^n)\right\|\right)\\&\leq
\bigg( \zeta_n\|Z\|+\eta_n\bigg)\bigg(\zeta_n\|Z\|+\eta_n+2\min_{1\leq i\leq n}\left\|X-f_n(t_i^n)\right\|\bigg)\\&=
\bigg( \zeta_n\|Z\|+\eta_n\bigg)^2+2\bigg( \zeta_n\|Z\|+\eta_n\bigg)\min_{1\leq i\leq n}\left\|X-f_n(t_i^n)\right\|,
\end{align*}
we obtain
\begin{align*}
F_n(v^n_1,\dots,v^n_n)&\geq
\mathbb{E}\left[\min_{1\leq i\leq n}\left\|X-f_n(t_i^n)\right\|^2\right]-\mathbb{E}\left[( \zeta_n\|Z\|+\eta_n)^2\right]-2( \zeta_n\mathbb{E}[\|Z\|]+\eta_n)\mathbb{E}\left[\min_{1\leq i\leq n}\left\|X-f_n(t_i^n)\right\|\right]\\&\quad+\varepsilon_n\sum_{i=1}^n\|f_n(t_i^n)-f(t_i^n)\|^2
\\&\geq
\mathbb{E}\left[\min_{t\in[0,1]}\left\|X-f_n(t)\right\|^2\right]- \zeta_n^2\mathbb{E}[\|Z\|^2]-\eta_n^2-2\eta_n\zeta_n\mathbb{E}[\|Z\|]\\&\quad-2( \zeta_n\mathbb{E}[\|Z\|]+\eta_n)\mathbb{E}\left[\min_{1\leq i\leq n}\left\|X-f_n(t_i^n)\right\|\right]+\varepsilon_n\sum_{i=1}^n\|f_n(t_i^n)-f(t_i^n)\|^2
\\&\geq G(L)-\frac{c_2}{n}+\varepsilon_n\sum_{i=1}^n\|f_n(t_i^n)-f(t_i^n)\|^2,
\end{align*} for some constant $c_2\geq 0$.
Indeed, $\mathscr{L}(f_n)\leq L$ according to point 1 in Lemma \ref{lem:fn'}, which allows us to bound $\mathbb{E}\left[\min_{t\in[0,1]}\left\|X-f_n(t)\right\|^2\right]$ from below by $G(L)$, and moreover,
$\mathbb{E}\left[\min_{1\leq i\leq n}\left\|X-f_n(t_i^n)\right\|\right]$ is bounded since $(f_n)_{n\geq 1}$ is uniformly bounded and $\mathbb{E}[\|X\|^2]<\infty$.
Thus, there exists a constant $c_3$ such that $G(L)-\frac{c_3}{n}+\varepsilon_n\sum_{i=1}^n\big\|f_n\big(t_i^n\big)-f\big(t_i^n\big)\big\|^2\leq G(L)+\frac{c_3}{n}$, which shows that $\varepsilon_n\sum_{i=1}^n\big\|f_n\big(t_i^n\big)-f(t_i^n)\big\|^2\leq \frac{2c_3}{n}.$
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:conv-fn}]
Point 2 in Lemma \ref{lem:fn'} and Lemma \ref{lem:pen}, together with the assumption $n\varepsilon_n\to \infty$, imply that the sequence $(f_n)_{n\geq 1}$ converges uniformly to the curve $f$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:hat-t}]
For every $n\geq 1$, \begin{align*}
&\Big|\|X_{n}-f_{n}(\hat{t}_{n})\|-\min_{1\leq i\leq n}\big\|X_{n}-f_{n}(t_i^ {n})\big\|\Big|\\&\quad\leq\Big|\|X_{n}-f_{n}(\hat{t}_{n})\|-\min_{1\leq i\leq n} \big\|X_{n}-f_{n}\big(t_i^{n}\big)-\xi_i^{n}\big\|\Big|+\left|\min_{1\leq i\leq n} \big\|X_{n}-f_{n}\big(t_i^{n}\big)-\xi_i^{n}\big\|-\min_{1\leq i\leq n} \big\|X_{n}-f_{n}\big(t_i^{n}\big)\big\|\right|\quad
\\&\quad\leq\Big|\|X_{n}-f_{n}(\hat{t}_{n})\|-\sum_{i=1}^n\|X_{n}-f_{n}(\hat{t}_{n})-\xi_i^{n}\|\mathbf{1}_{\{\hat{X}_n=f_n(t_i^n)+\xi_i^n\}}\Big|+\eta_n\quad
\\&\quad\leq\sum_{i=1}^n\Big|\|X_{n}-f_{n}(\hat{t}_{n})\|-\|X_{n}-f_{n}(\hat{t}_{n})-\xi_i^{n}\|\Big|\mathbf{1}_{\{\hat{X}_n=f_n(t_i^n)+\xi_i^n\}}+\eta_n\quad
\\&\quad\leq 2\eta_n.
\end{align*}
Hence, letting $n\to\infty$ and using the convergence in distribution of $(X_n,\hat{t}_n)$ to $(X,\hat{t})$, together with the uniform convergence of $(f_n)_{n\geq 1}$ to $f$, we obtain $$\|X-f(\hat{t})\|=\min_{t\in[0,1]}\|X-f(t)\|\quad a.s.$$
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:diff}]
For $x=(x_1,\dots,x_n)\in(\mathbb{R}^d)^n$ and $\omega\in \Omega$, we set
$$G_n(x,\omega):=\min_{1\leq i\leq n}\|X_n(\omega)-x_i-\xi_i^n(\omega)\|^2.$$
For every $x$, since the distribution of $X_n$ gives zero measure to affine hyperplanes of $\mathbb{R}^d$ and
the vectors $x_i+\xi_i^n$, $1\leq i\leq n$, are mutually distinct almost surely, we have, $\mathbb{P}(d\omega)$ almost surely, $$G_n(x,\omega)
=\sum_{i=1}^{n}\|X_n(\omega)-x_i-\xi_i^n(\omega)\|^2\mathbf{1}_{\{
\| {X}_n(\omega)-x_i-\xi_i^n(\omega)\| < \min_{j\not=i} \|{X}_n(\omega)-x_j-\xi_j^n(\omega)\|
\}}.$$
For every $x\in(\mathbb{R}^ d)^ n$, $\mathbb{P}(d\omega)$ almost surely, $y\mapsto G_n(y,\omega)$ is differentiable at $x$ and for $1\leq i\leq n$,
\begin{align*}\frac{\partial }{\partial x_i}G_n(x,\omega)&=- 2(X_n(\omega)-x_i-\xi_i^n(\omega))\mathbf{1}_{\{
\| {X}_n(\omega)-x_i-\xi_i^n(\omega)\| < \min_{j\not=i} \|{X}_n(\omega)-x_j-\xi_j^n(\omega)\|\}}
\\&=
-2(X_n(\omega)-\hat{X}_n^x(\omega))\mathbf{1}_{\{\hat{X}_n^x(\omega)=x_i+\xi_i^n(\omega)\}}.
\end{align*}
For every $u=(u_1,\dots,u_n)\in(\mathbb{R}^d)^n$, we set $\|u\|=(\sum_{i=1}^{n}\|u_i\|^2)^{1/2}.$
Let $x^{(k)}=(x_1^{(k)},\dots,x_n^{(k)})$ be a sequence tending to $x=(x_1,\dots,x_n)\in(\mathbb{R}^d)^n$ as $k$ tends to infinity. Then,
$$\left[G_n(x^{(k)},\omega)-G_n(x,\omega)-\sum_{i=1}^{n}\left\langle \frac{\partial }{\partial x_i}G_n(x,\omega),x_i^{(k)}-x_i\right\rangle\right]\times \frac 1{\|x^{(k)}-x\|}$$ converges $\mathbb{P}(d\omega)$ almost surely to 0 as $k$ tends to infinity.
Moreover,
\begin{align*}
&\left| G_n(x,\cdot)-G_n(x^{(k)},\cdot) \right|
\\&\quad= \Big(\min_{1\leq i\leq n}\|X_n-x_i-\xi_i^n\|+\min_{1\leq i\leq n}\|X_n-x^{(k)}_i-\xi_i^n\|\Big)\Big|\min_{1\leq i\leq n}\|X_n-x_i-\xi_i^n\|-\min_{1\leq i\leq n}\|X_n-x^{(k)}_i-\xi_i^n\|\Big|\\&\quad\leq 2\left(\|X_n\|+\eta_n+\|x_1\|+\|x_1^{(k)}\|\right)\max_{1\leq i\leq n}\|x_i-x^{(k)}_i\|,
\end{align*}
so that $$ \frac{\left| G_n(x,\cdot)-G_n(x^{(k)},\cdot) \right|}{\|x-x^{(k)}\|}\leq C(\|X_n\|+1),$$ where $C$ is a constant which does not depend on $k$. Similarly, we have, for $1\leq i\leq n$, $$\left\|\frac{\partial}{\partial x_i}G_n(x,\cdot)\right\|\leq C'(\|X_n\|+1),$$
where $C'$ does not depend on $k$, and, thus, \begin{align*}
\frac{1}{\|x-x^{(k)}\|}\left|\sum_{i=1}^{n}\left\langle \frac{\partial }{\partial x_i}G_n(x,\cdot),x_i^{(k)}-x_i\right\rangle\right|&\leq C'(\|X_n\|+1)\frac{\sum_{i=1}^{n}\|x_i^{(k)}-x_i\|}{\|x-x^{(k)}\|}\\&\leq C'\sqrt{n}(\|X_n\|+1).
\end{align*}
Since $\mathbb{E}[\|X_n\|]<\infty$, the result follows from Lebesgue's dominated convergence theorem.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:eqn}]
By Lemma \ref{lem:diff}, we obtain that $F_n$ is differentiable, and for $1\leq i\leq n$, the gradient with respect to $x_i$ is given by
$$\frac{\partial}{\partial x_i}F_n(x_1,\dots,x_n)=-2\mathbb{E}\left[(X_n-\hat{X}_n^x)\mathbf{1}_{\{\hat{X}_n^x=x_i+\xi_i^n\}}\right]+2\varepsilon_n\big(x_i-f(t_i^n)\big),\quad 1\leq i\leq n.$$
Consequently, considering
the minimization of $F_n$ under the length constraint \eqref{eq:constLvi}, there exists a Lagrange multiplier $\lambda_n\geq0$ such that
$$\begin{cases}
&-2\mathbb{E}\left[(X_n-\hat{X}_n)\mathbf{1}_{\{\hat{X}_n=v^n_i+\xi_i^n\}}\right]+2\varepsilon_n\big(v^n_i-f(t_i^n)\big)+2\lambda_n(n-1)(v^n_i-v^n_{i-1}-(v^n_{i+1}-v^n_i))=0,\\&
2\leq i\leq n-1,\\
&-2\mathbb{E}\left[(X_n-\hat{X}_n)\mathbf{1}_{\{\hat{X}_n=v^n_1+\xi_1^n\}}\right]+2\varepsilon_n(v^n_1-f(0))-2\lambda_n(n-1)(v^n_2-v^n_1)=0,\\
&-2\mathbb{E}\left[(X_n-\hat{X}_n)\mathbf{1}_{\{\hat{X}_n=v^n_n+\xi_n^n\}}\right]+2\varepsilon_n(v^n_n-f(1))+2\lambda_n(n-1)(v^n_n-v^n_{n-1})=0,
\end{cases}$$
that is,
$$\begin{cases}
&-\mathbb{E}\left[(X_n-\hat{X}_n)\mathbf{1}_{\{\hat{X}_n=v^n_i+\xi_i^n\}}\right]+\varepsilon_n\big(v^n_i-f(t_i^n)\big)+\lambda_n(n-1)(v^n_i-v^n_{i-1}-(v^n_{i+1}-v^n_i))=0,\\&
2\leq i\leq n-1,\\
&-\mathbb{E}\left[(X_n-\hat{X}_n)\mathbf{1}_{\{\hat{X}_n=v^n_1+\xi_1^n\}}\right]+\varepsilon_n(v^n_1-f(0))-\lambda_n(n-1)(v^n_2-v^n_1)=0,\\
&-\mathbb{E}\left[(X_n-\hat{X}_n)\mathbf{1}_{\{\hat{X}_n=v^n_n+\xi_n^n\}}\right]+\varepsilon_n(v^n_n-f(1))+\lambda_n(n-1)(v^n_n-v^n_{n-1})=0.
\end{cases}$$
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:lambda0}]
Let $g:[0,1]\to\mathbb{R}^d$ be an absolutely continuous function such that $\int_{0}^{1}\|g'(t)\|^2 dt<\infty$. For $n\geq 1$, we may write
\begin{align}
\mathbb{E}&[\langle X_n-f_n(\hat{t}_n), g(\hat{t}_n)\rangle]\nonumber\\&=\sum_{i=1}^{n}\left\langle\mathbb{E}\left[(X_n-\hat{X}_n+
\xi_i^n)\mathbf{1}_{\{\hat{X}_n=v^n_i+
\xi_i^n\}}\right], g(t_i^n)\right\rangle\nonumber \\&=
\sum_{i=1}^{n}\left\langle \mathbb{E}\left[(X_n-\hat{X}_n
)\mathbf{1}_{\{\hat{X}_n=v^n_i+
\xi_i^n\}}\right], g(t_i^n)\right\rangle+\sum_{i=1}^{n}\left\langle\mathbb{E}\left[
\xi_i^n\mathbf{1}_{\{\hat{X}_n=v^n_i+
\xi_i^n\}}\right], g(t_i^n)\right\rangle\nonumber\\
&=\sum_{i=1}^{n}\left\langle \mathbb{E}\left[
\xi_i^n\mathbf{1}_{\{\hat{X}_n=v^n_i+
\xi_i^n\}}\right], g(t_i^n)\right\rangle+\varepsilon_n\sum_{i=1}^{n}\langle v^n_i-f(t_i^n), g(t_i^n)\rangle\nonumber\\&\quad+\lambda_n(n-1)\left[-\langle v^n_2-v^n_1, g(0)\rangle+\sum_{i=2}^{n-1}\langle v^n_{i}-v^n_{i-1}-(v^n_{i+1}-v^n_i), g(t_i^n)\rangle+\langle v^n_n-v^n_{n-1}, g(1)\rangle\right]\nonumber\\&=\sum_{i=1}^{n}\left\langle \mathbb{E}\left[
\xi_i^n\mathbf{1}_{\{\hat{X}_n=v^n_i+
\xi_i^n\}}\right], g(t_i^n)\right\rangle+\varepsilon_n\sum_{i=1}^{n}\langle v^n_i-f(t_i^n), g(t_i^n)\rangle\nonumber\\&\quad+\lambda_n(n-1)\Bigg[\sum_{i=1}^{n-2}\langle v^n_{i+1}-v^n_{i}, g(t_{i+1}^n)\rangle-\sum_{i=2}^{n-1}\langle v^n_{i+1}-v^n_i, g(t_i^n)\rangle -\langle v^n_2-v^n_1, g(0)\rangle\nonumber\\&\quad+\langle v^n_n-v^n_{n-1}, g(1)\rangle\Bigg]\nonumber\\&=\sum_{i=1}^{n}\left\langle \mathbb{E}\left[
\xi_i^n\mathbf{1}_{\{\hat{X}_n=v^n_i+
\xi_i^n\}}\right], g(t_i^n)\right\rangle\nonumber\\&\quad+\varepsilon_n\sum_{i=1}^{n}\langle v^n_i-f(t_i^n), g(t_i^n)\rangle+\lambda_n(n-1)\sum_{i=1}^{n-1}\langle v^n_{i+1}-v^n_i,g(t_{i+1}^n)-g(t_i^n)\rangle.\label{eq:suml}
\end{align}
Note first that
\begin{align}
\left|\sum_{i=1}^{n}\left\langle \mathbb{E}\left[
\xi_i^n\mathbf{1}_{\{\hat{X}_n=v^n_i+
\xi_i^n\}}\right], g(t_i^n)\right\rangle\right|&\leq\eta_n\|g\|_\infty
\sum_{i=1}^{n}\mathbb{E}\left[
\mathbf{1}_{\{\hat{X}_n=v^n_i+
\xi_i^n\}}\right]\nonumber\\&=\eta_n\|g\|_\infty.\label{eq:Oetan}
\end{align}
Then,
\begin{align}
\left| \varepsilon_n\sum_{i=1}^{n}\langle v^n_i-f(t_i^n), g(t_i^n)\rangle\right|&\leq \varepsilon_n \sum_{i=1}^{n} \|v^n_i-f(t_i^n)\|\|g\|_\infty\nonumber\\
&\leq \varepsilon_n \Bigl(\sum_{i=1}^{n}\|v^n_i-f(t_i^n)\|^2 \Bigr)^{1/2}\sqrt{n}\|g\|_\infty\nonumber\\&\leq \sqrt{c\varepsilon_n}\|g\|_\infty,\label{eq:Oepsn}
\end{align}according to Lemma \ref{lem:pen}. Regarding the last term, we may write
\begin{align*}
\left|(n-1)\sum_{i=1}^{n-1}\langle v^n_{i+1}-v^n_i,g(t_{i+1}^n)-g(t_i^n)\rangle\right|&\leq (n-1) \left[\sum_{i=1}^{n-1}\left\|v^n_{i+1}-v^n_i\right\|^2\sum_{i=1}^{n-1}\left\|g(t_{i+1}^n)-g(t_i^n)\right\|^2\right]^{1/2}\\&\leq
L\sqrt{n-1}\left[\sum_{i=1}^{n-1}\Big\|\int_{t_i^n}^{t_{i+1}^n}g'(t)dt\Big\|^2\right]^{1/2}\\&\leq L \left[\int_{0}^{1}\|g'(t)\|^2dt\right]^{1/2}.\end{align*}
Thus, if $h:\mathbb{R}^d\to\mathbb{R}^d$ is continuously differentiable, applying the above bounds with $g=h\circ f_n$, we have \begin{align*}
|\mathbb{E}[\langle X_n-f_n(\hat{t}_n), h(f_n(\hat t_n))\rangle]|&\leq (\eta_n+\sqrt{c\varepsilon_n})\|h\|_\infty+\lambda_n L \left[\int_{0}^{1}\|\nabla h(f_n(t))f_n '(t)\|^2dt\right]^{1/2}\\&\leq(\eta_n+\sqrt{c\varepsilon_n})\|h\|_\infty+\lambda_nL
\sup_{t\in[0,1]}\|\nabla h(f_n(t))\|
\left[\int_{0}^{1}\|f_n'(t)\|^2dt\right]^{1/2}\\&\leq
(\eta_n+\sqrt{c\varepsilon_n})\|h\|_\infty+\lambda_nL^2
\sup_{t\in[0,1]}\|\nabla h(f_n(t))\|.
\end{align*}Recall that $(X_n,\hat{t}_n)$ is assumed to converge in distribution to $(X,\hat{t})$. Since $\varepsilon_n\to 0$, $\eta_n\to 0$ and $(f_n)_{n\geq 1}$ is uniformly bounded, we see that $\lambda=0$ would imply that $$\mathbb{E}[\langle X-f(\hat{t}), h(f(\hat{t}))\rangle]
=0,$$ so that $\mathbb{E}[X-f(\hat{t})|f(\hat{t})]=0$ a.s., since $h$ is an arbitrary continuously differentiable function and such functions are dense. This contradicts Lemma \ref{lem:noneq}.\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:reg}]
For an $\mathbb{R}^d$-valued signed measure $m=(m^1, \dots,m^d)$ on $[0,1]$, we set \begin{equation} \label{normnu}
\| m \| = \Bigl( \sum_{j=1}^d \| m^j \|_{TV}^2 \Bigr)^{1/2} \end{equation}
where $\| m^j \|_{TV}$ denotes the total variation norm of $m^j$. Recall that $$ f''_n=(n-1)\left[\sum_{i=2}^{n-1}(v^n_{i+1}-v^n_i-(v^n_i-v^n_{i-1}))\delta_{t_i^n}+(v^n_2-v^n_1)\delta_0-(v^n_n-v^n_{n-1})\delta_1\right]. $$ Thanks to the Euler-Lagrange system of equations obtained in Lemma \ref{lem:eqn}, we may write
\begin{align*}
\lambda_n\times\|f''_n\|&\leq\lambda_n\sum_{i=1}^n\|f''_n(\{t^n_i\})\|\\&\leq
\sum_{i=1}^{n}\left\|\mathbb{E}\left[(X_n-\hat{X}_n)\mathbf{1}_{\{\hat{X}_n=v^n_i+\xi_i^n\}}\right]\right\|+\varepsilon_n \sum_{i=1}^{n}\|v_i ^n -f(t_i^n)\|\\&\leq \mathbb{E}[\|X_n-\hat{X}_n\|]+\varepsilon_n\sqrt{n}\left(\sum_{i=1}^{n}\|v^n_i-f(t_i^n)\|^2\right)^{1/2}\\
&\le F_n(v^n_1,\dots,v^n_n)^{1/2} +\varepsilon_n\sqrt{n}\left(\sum_{i=1}^{n}\|v^n_i-f(t_i^n)\|^2\right)^{1/2}.
\end{align*}
Consequently, using Lemma \ref{lem:F-fini} and Lemma \ref{lem:pen}, $\varepsilon_n\to 0$ and $\lim_{n\to\infty} \lambda_{n}=\lambda\in(0,\infty]$, we obtain that $\sup_{n\geq 1}\|f''_n\|<\infty$, that is, the sequence of signed measures $(f''_n)_{n\geq 1}$ is uniformly bounded in total variation norm.
Hence, it is relatively compact for the topology induced by the bounded Lipschitz norm defined for every signed measure $m$ by $$\|m\|_{\mathrm{BL}}=\sup\left\{\Big\|\int g(x)m(dx)\Big\|, \|g\|_\infty\leq 1, \sup_{t\neq x}\frac{|g(x)-g(t)|}{|x-t|}\leq 1\right\} .$$
Let us show that the sequence $(f''_{n})_{n\geq 1}$ converges weakly to some signed measure. Let $\nu$ be a limit point of $(f''_{n})_{n\geq 1}$. There exists an increasing function $\sigma:\mathbb{N}\to\mathbb{N}$, such that, for every $(s,t)$ such that $\nu(\{s\})=\nu(\{t\})=0$,
\begin{align}
&f''_{\sigma(n)}((s,t])\to \nu((s,t]),\label{eq:cvnu1}\\
&f''_{\sigma(n)}([0,t])\to \nu([0,t]),\quad f''_{\sigma(n)}([0,t))\to \nu([0,t)).\label{eq:cvnu2}
\end{align}
Since, for $0\leq s \leq t<1$, $f''_n((s,t])=f '_{n,r}(t)-f '_{n,r}(s)$, we have, for $0\leq t<1$, $$f_n(t)=f_n(0)+tf '_{n,r}(0)+ \int_{0}^{t}f ''_n((0,u])du.$$
Note that $f'_{n,r}(0)=f''_n(\{0\})$, so that the fact that $\sup_{n\geq 1}\|f''_n\|<\infty$ implies in particular that $(f'_{n,r}(0))_{n\geq 1}$ is bounded.
Thus, up to an extraction, by \eqref{eq:cvnu1}, all terms converge: there exists a vector $v\in\mathbb{R}^d$, such that, for $0 \leq t<1$, $$f(t)=f(0)+tv+ \int_{0}^{t}\nu((0,u])du.$$
Consequently, $v=f'_{r}(0)$, and, for $0\leq s \leq t<1$, $$\nu((s,t])=f '_{r}(t)-f '_{r}(s).$$ In other words, the signed measure $\nu$ is the second derivative of $f$ in the distribution sense, called hereafter $f''$, and $(f''_{n})_{n\geq 1}$ converges weakly to $f''$.
Observe, from the definition \eqref{eq:fn''}, that $f''_n([0,1])=0$, so that $f''([0,1])=0$.
We have, for $t$ such that $f''(\{t\})=0$, $$f''_n([0,t])\to f''([0,t]),\quad f''_n([0,t))\to f''([0,t)).$$ Hence, since, for $t\in[0,1)$, $f''_n([0,t])=f '_{n,r}(t)$, and $t\mapsto f ''([0,t])$ is right-continuous, $f_r '(t)=f ''([0,t])$ for $t\in [0,1)$. Similarly, for $t\in (0,1]$, $f''_n([0,t))=f '_{n,\ell}(t)$, and $t\mapsto f ''([0,t))$ is left-continuous, so that $f_\ell '(t)=f ''([0,t))$ for $t\in (0,1]$.
Recall that $f$ is $L$-Lipschitz.
Moreover, according to Corollary \ref{cor:L(f)}, $\mathscr{L}(f)= L$ since $G(L)>0$. Thus,
we have $\|f '_r(t)\|=L$ $dt$-a.e., and, since $f '_r$ is right-continuous, this implies that $\|f '_r(t)\|=L$ for all $t\in [0,1)$. Similarly, we obtain that $\|f '_\ell(t)\|=L$ for all $t\in (0,1]$. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:lafini}]
Observe that $f''\neq 0$. Indeed, we have, for example, $f''(\{0\})=f'_r(0)$, with $\|f'_r(0)\|=L>0 $. Yet, $\lambda=\infty$ would imply $f''=0$ since $\sup_{n\geq 1}\left(\lambda_n\times \|f''_n\|\right)<\infty$. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:equation}]
Clearly, it suffices to consider the case where the test function $g$ is continuous.
Using equation \eqref{eq:suml} and the upper bounds \eqref{eq:Oetan} and \eqref{eq:Oepsn} in the proof of Lemma \ref{lem:lambda0}, we obtain, for $n\geq 2$,
$$\left|\mathbb{E}[\langle X_n-f_n(\hat{t}_n), g(\hat{t}_n)\rangle]-\lambda_n(n-1)\sum_{i=1}^{n-1}\langle v^n_{i+1}-v^n_i,g(t_{i+1}^n)-g(t_i^n)\rangle \right|\leq (\eta_n+\sqrt{c\varepsilon_n})\|g\|_\infty,$$
and besides
$$ \lambda_n(n-1)\sum_{i=1}^{n-1}\langle v^n_{i+1}-v^n_i,g(t_{i+1}^n)-g(t_i^n)\rangle=-\lambda_n\int_{[0,1]}\langle g(t), f ''_n(dt) \rangle.$$
Thus, passing to the limit, we see that $f$ satisfies equation \eqref{eq:formuleth}.
Finally, the uniqueness of $\lambda$ follows from the uniqueness of $\hat X$ (Proposition \ref{prop:hatXneg}), and the fact that
\begin{equation*}
\mathbb{E}[\langle X-\hat{X}, \hat{X}\rangle]=\lambda\int_{0}^{1}\|f_r'(s)\|^2ds= \lambda L^2
\end{equation*}
obtained thanks to equation \eqref{eq:formuleIP} in Remark \ref{rem:formuleIP}.
\end{proof}
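The summation-by-parts step used just above, namely $(n-1)\sum_{i=1}^{n-1}\langle v^n_{i+1}-v^n_i, g(t_{i+1}^n)-g(t_i^n)\rangle=-\int_{[0,1]}\langle g(t), f''_n(dt)\rangle$, is a purely algebraic (Abel summation) identity. The following sketch checks it with arbitrary stand-in data (not the paper's), representing $f''_n$ by its atoms as in \eqref{eq:fn''}.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 7, 2
v = rng.normal(size=(n, d))   # stand-ins for v_1^n, ..., v_n^n
g = rng.normal(size=(n, d))   # values g(t_i^n) of a test function on the grid

# left-hand side: (n-1) * sum_i <v_{i+1}-v_i, g(t_{i+1})-g(t_i)>
lhs = (n - 1) * np.sum((v[1:] - v[:-1]) * (g[1:] - g[:-1]))

# right-hand side: -int <g, f_n''(dt)>, with the atoms of f_n''
atoms = np.zeros((n, d))
atoms[0] = (n - 1) * (v[1] - v[0])                      # atom at 0
atoms[1:-1] = (n - 1) * (v[2:] - 2 * v[1:-1] + v[:-2])  # interior atoms
atoms[-1] = -(n - 1) * (v[-1] - v[-2])                  # atom at 1
rhs = -np.sum(g * atoms)

print(lhs, rhs)  # equal: discrete integration by parts
```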
\section{An application: injectivity of $f$}\label{section:inj}
In this section, we present an application of the formula \eqref{eq:formuleth} of Theorem \ref{theo:main}. We will use this first order condition to show, in dimension $d=2$, that an open optimal curve is injective, and that a closed optimal curve restricted to $[0,1)$ is injective, except in the case where its image is a segment. To obtain the result, we follow arguments exposed in \cite{LuSle} in the framework of the penalized problem, for open curves. The main difference is the fact that we have at hand the Euler-Lagrange equation, which allows us to simplify the proof.
Again, we consider $L>0$ such that $G(L)>0$ and a curve
$f\in\mathcal C_L$ such that $\Delta(f)=G(L)$; recall that such a curve is $L$-Lipschitz. We let $\hat{t}$ be defined as in Theorem \ref{theo:main}. The random vector $f(\hat{t})$ will sometimes be denoted by $\hat{X}$. Recall that $\|X-\hat{X}\|=d(X, \mbox{Im}f)$ a.s. by Theorem \ref{theo:main}.
To prove the injectivity of $f$, we will need several preliminary lemmas. Let us point out that Lemma \ref{lem:turn} to Lemma \ref{lem:tangent} below are valid for every $d\geq 1$.
First of all, we state the next lemma, which will be useful in the sequel; it provides a lower bound on the curvature of any closed arc of $f$. Recall that, for an $\mathbb{R}^d$-valued signed measure $\nu=(\nu^1,\dots,\nu^d)$ on $[0,1]$, \begin{equation*}
\| \nu \| = \Bigl( \sum_{j=1}^d \| \nu^j \|_{TV}^2 \Bigr)^{1/2}, \end{equation*}
where $\| \nu^j \|_{TV}$ denotes the total variation norm of $\nu^j$. For a Borel set $A\subset [0,1]$, $f''_A$ denotes the vector-valued signed measure defined by $f''_A(B)=f''(A\cap B)$ for every Borel set $B\subset[0,1]$.
\begin{lem}\label{lem:turn}
If $0\leq a<b\leq 1$ and $f(a)=f(b)$, then $\|f''_{(a,b]}\| \geq L.$ \end{lem}
\begin{proof}[Proof of Lemma \ref{lem:turn}]
Let us write \begin{multline*}
0=f(b)-f(a)=\int_{a}^{b}f'_r(t)dt\\=\int_{a}^{b}\left[f_r'(0)+\int_{(0,t]} f''(ds)\right]dt=(b-a)f_r'(0)+\int_{(0,b]}(b-(s\lor a)) f''(ds)\\=(b-a)f_r'(0)+(b-a)f''((0,a])+\int_{(a,b]}(b-s)f''(ds)
\\=(b-a)f'_r(a)+\int_{(a,b]}(b-s)f''(ds).
\end{multline*}
Thus, $\int_{(a,b]}\frac{b-s}{b-a}f''(ds)=-f'_r(a),$ which implies $\|f''_{(a,b]}\| \geq \|f'_r(a)\|=L.$ \end{proof}
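The computation above rests on the identity $f(b)-f(a)=(b-a)f'_r(a)+\int_{(a,b]}(b-s)f''(ds)$, obtained by Fubini. A quick numerical sketch confirms it on a one-dimensional toy $f''$ made of a few atoms (all values below are illustrative, not from the paper).

```python
import numpy as np

# toy one-dimensional f'': a few atoms on (0, 1] (illustrative values)
locs = np.array([0.15, 0.4, 0.55, 0.8])   # atom locations
mass = np.array([0.7, -1.2, 0.5, 0.3])    # atom masses
c0 = 2.0                                  # stands in for f'_r(0)

def fr(t):
    # right derivative: f'_r(t) = f'_r(0) + f''((0, t])
    return c0 + mass[locs <= t].sum()

a, b = 0.2, 0.9

# f(b) - f(a) via a fine midpoint Riemann sum of f'_r over [a, b]
K = 200_000
u = a + (b - a) * (np.arange(K) + 0.5) / K
fr_vals = c0 + (u[:, None] >= locs[None, :]) @ mass
lhs = (b - a) * np.mean(fr_vals)

# (b - a) f'_r(a) + integral over (a, b] of (b - s) f''(ds)
inside = (locs > a) & (locs <= b)
rhs = (b - a) * fr(a) + np.sum((b - locs[inside]) * mass[inside])
print(lhs, rhs)  # agree up to the quadrature error
```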
As a first step toward injectivity, we now show that, if a point is multiple, it is only visited finitely many times.
\begin{lem}\label{lem:doublefini}For every $t\in [0,1],$ the set $f^{-1}(\{f(t)\})$ is finite.
\end{lem}
\begin{proof}
Let $t\in[0,1]$. Suppose that $f^{-1}(\{f(t)\})$ is infinite. Then, for all $k\geq 1$, there exist $t_0,t_1,\dots, t_k\in f^{-1}(\{f(t)\})$ such that $0\leq t_0<t_1<\dots <t_k\leq 1$. So, by Lemma \ref{lem:turn}, $\|f''\|\geq \sum_{i=1}^{k}\|f''_{(t_{i-1},t_i]}\|\geq kL$, which contradicts the fact that $f$ has finite curvature.
\end{proof}
In the case $\mathcal C_L=\{\phi:[0,1]\to \mathbb{R}^d,\mathscr{L}(\phi)\leq L\}$, the endpoints of the curve $f$ cannot be multiple points.
\begin{lem}\label{lem:extpasdb}Let $\mathcal C_L=\{\phi:[0,1]\to \mathbb{R}^d,\mathscr{L}(\phi)\leq L\}$. We have $f^{-1}(\{f(0)\})= \{0\}$ and $f^{-1}(\{f(1)\})= \{1\}$.
\end{lem}
\begin{proof}Observe that, by symmetry, we only need to prove the first statement since the second one follows then by considering the curve $t\mapsto f(1-t)$.
Assume that the set $f^{-1}(\{f(0)\})$ has cardinality at least 2. Thanks to Lemma \ref{lem:doublefini}, we may consider $t_0=\min\{t>0: f(t)=f(0)\}$. For $x\in \mbox{Im} f,$ we set $ \hat{t}(x)=\inf\{t\in[0,1], f(t)=x\}$. For every $\varepsilon\in(0,t_0)$, we let $$\hat{X}_\varepsilon=f\big(\hat{t}\lor\varepsilon\big)\mathbf{1}_{\{\hat{t}>0\}}+f(0)\mathbf{1}_{\{\hat{t}=0\}} .$$
With this definition, the random vector $\hat{X}_\varepsilon$ takes its values in $f([\varepsilon,1])\cup \{f(0)\}$, that is, in $f([\varepsilon,1])$, since $f(t_0)=f(0)$ and $\varepsilon<t_0$. Thus, $\frac{\hat{X}_\varepsilon}{1-\varepsilon}$ takes its values in $\frac{f([\varepsilon,1])}{1-\varepsilon}$, which is the image of a curve with length at most $L$. Consequently, by optimality of $f$, we have $$\mathbb{E}\left[\bigg\|X-\frac{\hat{X}_\varepsilon}{1-\varepsilon}\bigg\|^2\right]\geq \mathbb{E}[\|X-\hat{X}\|^2].$$
Besides, we may write
\begin{align*}
\bigg\|X-\frac{\hat{X}_\varepsilon}{1-\varepsilon}\bigg\|^2&=\bigg\|X-\hat{X}+\hat{X}-\frac{\hat{X}_\varepsilon}{1-\varepsilon}\bigg\|^2\\&=\|X-\hat{X}\|^2+\bigg\|\hat{X}-\frac{\hat{X}_\varepsilon}{1-\varepsilon}\bigg\|^2+2\bigg\langle X-\hat{X}, \hat{X}-\frac{\hat{X}_\varepsilon}{1-\varepsilon}\bigg\rangle
\\&=\|X-\hat{X}\|^2+\frac{1}{(1-\varepsilon)^2}\|\hat{X}-\hat{X}_\varepsilon-\varepsilon\hat{X}\|^2+\frac{2}{1-\varepsilon}\left(\langle X-\hat{X}, \hat{X}-\hat{X}_\varepsilon\rangle-\varepsilon\langle X-\hat{X},\hat{X}\rangle\right).
\end{align*}
As $\|\hat{X}-\hat{X}_\varepsilon\|\leq L\varepsilon$ since $f$ is $L$-Lipschitz, we get
\begin{align*}
\mathbb{E}[\|\hat{X}-\hat{X}_\varepsilon-\varepsilon\hat{X}\|^2]&\leq 2L^2\varepsilon^2+2\varepsilon^2\mathbb{E}[\|\hat{X}\|^2]=2(L^2+ \mathbb{E}[\|\hat{X}\|^2])\varepsilon^2.
\end{align*}
Note that $\mathbb{E}[\|\hat{X}\|^2]<\infty$ by the same argument as in \eqref{eq:hatXfini}.
Moreover, thanks to equation \eqref{eq:formuleIP} in Remark \ref{rem:formuleIP}, we have
\begin{equation}\label{eq:formulehatX}
\mathbb{E}[\langle X-\hat{X}, \hat{X}\rangle]=\lambda\int_{0}^{1}\|f_r'(s)\|^2ds= \lambda L^2.
\end{equation}
Furthermore, $\hat{X}-\hat{X}_\varepsilon=(f(\hat{t})-f(\varepsilon))\mathbf{1}_{\{0<\hat{t}\leq \varepsilon\}}$, so that equation \eqref{eq:formuleth} implies
$$\mathbb{E}[\langle X-\hat{X},\hat{X}-\hat{X}_\varepsilon\rangle]=-\lambda \int_{[0,1]} \langle (f(t)-f(\varepsilon))\mathbf{1}_{\{0<t\leq \varepsilon\}} ,f''(dt)\rangle.$$
Hence, \begin{align*}
|\mathbb{E}[\langle X-\hat{X},\hat{X}-\hat{X}_\varepsilon\rangle]|&
\le \lambda\sum_{j=1}^d \int_{(0,\varepsilon]} | f^j(t)-f^j(\varepsilon)|\, |(f'')^j|(dt)\\
&\le \lambda L\varepsilon \sum_{j=1}^d |(f'')^j| ((0,\varepsilon]),
\end{align*}
where $| (f'')^j|$ stands for the total variation of the signed measure $(f'')^j$.
Finally, we obtain
$$ \mathbb{E}\left[\bigg\|X-\frac{\hat{X}_\varepsilon}{1-\varepsilon}\bigg\|^2\right]\leq \mathbb{E}\left[\|X-\hat{X}\|^2\right]+2(L^2+\mathbb{E}[\|\hat{X}\|^2])\varepsilon^2+\lambda L\varepsilon\rho(\varepsilon) -\frac{2\varepsilon}{1-\varepsilon}\lambda L^2,$$ where $\rho(\varepsilon)$ tends to 0 as $\varepsilon\to 0$. This inequality shows that, for $\varepsilon$ small enough, $\mathbb{E}\left[\big\|X-\frac{\hat{X}_\varepsilon}{1-\varepsilon}\big\|^2\right]<\mathbb{E}[\|X-\hat{X}\|^2]$, which contradicts the optimality of $f$.
\end{proof}
For an open curve, there are no multiple points in a neighborhood of the endpoint; in particular, there is a last multiple point.
\begin{lem}\label{lem:lastdouble}
Let $\mathcal C_L=\{\phi:[0,1]\to \mathbb{R}^d,\mathscr{L}(\phi)\leq L\}$. There exists $\delta>0$ such that for every $t\in[1-\delta,1],$ $f^{-1}(\{f(t)\})=\{t\}.$
\end{lem}
\begin{proof} Suppose the conclusion fails. Then we can build sequences $(t_k)_{k\geq 1}$ and $(s_k)_{k\geq 1}$ such that $t_k\to 1$ and $f(t_k)=f(s_k)$, with $s_k\neq t_k$ for all $k\geq 1.$ Up to extraction of a subsequence, we may assume that $(s_k)$ converges to a limit $s\in[0,1]$. Hence, we have $f(s)=f(1)$, which implies $s=1$ by Lemma \ref{lem:extpasdb}. Up to another extraction, we may assume that the intervals $[s_k\land t_k,s_k\lor t_k]$, $k\geq1$, are mutually disjoint.
Finally, using Lemma \ref{lem:turn}, we obtain $$\|f''\|
\geq\sum_{k\geq 1}\|f''_{(s_k\land t_k,s_k\lor t_k]}\|=\infty,$$ which yields a contradiction since we have shown that an optimal curve has finite curvature.
\end{proof}
Now, we show that the two branches of the curve are necessarily tangent at a multiple point. \begin{lem}\label{lem:tangent} \begin{enumerate}
\item [$(i)$] If there exist $0<t_0<t_1<1$ such that $f(t_0)=f(t_1)$, then $f'_\ell(t_0)=f'_r(t_0)=- f'_r(t_1)=-f'_\ell(t_1)$. \item [$(ii)$] In the case $\mathcal C_L=\{\phi:[0,1]\to \mathbb{R}^d,\mathscr{L}(\phi)\leq L, \phi(0)=\phi(1)\}$, if there exists $0<t<1$ such that $f(t)=f(0)$, then $f'_\ell(t)=f'_r(t)=- f'_r(0)=-f'_\ell(1)$. \end{enumerate} \end{lem}
\begin{proof} First, we show that point $(ii)$ follows from point $(i)$. Let $t\in(0,1)$ such that $f(t)=f(0)$. Define the curve $g$ by $g(s)=f(s+t/2)$ for $s\in[0,1-t/2]$ and $g(s)=f(s+t/2-1)$ for $s\in[1-t/2,1]$. Clearly, $g$ is a closed curve, $\Delta(g)=\Delta(f)$ and $g$ is $L$-Lipschitz. Moreover, one has: $g(t/2)=g(1-t/2)$, $g'_r(t/2)=f'_r(t)$, $g'_\ell(t/2)=f'_\ell(t)$, $g'_r(1-t/2)=f'_r(0)$ and $g'_\ell(1-t/2)=f'_\ell(1)$. Consequently, if $(i)$ holds true for $g$, one deduces $(ii)$.
It remains to show point $(i)$. Suppose that $f'_\ell(t_0)\neq f'_r(t_0)$. Let $\gamma\in(0,1]$ and $\varepsilon>0$. We introduce the random vectors $\hat{X}_{0,\gamma}=(1+\gamma)\hat{X}$ and $$\hat{X}_{\varepsilon,\gamma}=(1+\gamma)\left[\hat{X}\mathbf{1}_{\hat{t}\in[0,t_0-\varepsilon)\cup (t_0+\varepsilon, 1]\cup\{t_0\}}+h_\varepsilon(\hat{t})\mathbf{1}_{\hat{t}\in[t_0-\varepsilon,t_0+\varepsilon]\setminus \{t_0\}}\right],$$ where $h_\varepsilon(t)=\left(\frac{f(t_0+\varepsilon)-f(t_0-\varepsilon)}{2\varepsilon}(t-(t_0-\varepsilon))+f(t_0-\varepsilon)\right)$.
Let us write \begin{align}
\mathbb{E}[\|X-\hat{X}_{0,\gamma}\|^2]&=\mathbb{E}[\|X-\hat{X}\|^2]+\mathbb{E}[\|\hat{X}-\hat{X}_{0,\gamma}\|^2]+2\mathbb{E}[\langle X-\hat{X}, \hat{X}-\hat{X}_{0,\gamma}\rangle]\nonumber\\&=
\mathbb{E}[\|X-\hat{X}\|^2]+\gamma^2\mathbb{E}[\|\hat{X}\|^2]-2\mathbb{E}[\langle X-\hat{X}, \gamma\hat{X}\rangle] \nonumber\\&=
\mathbb{E}[\|X-\hat{X}\|^2]+\gamma^2\mathbb{E}[\|\hat{X}\|^2]-2\gamma\lambda L^2.\label{eq:XhatXg0} \end{align}For the last equality, we used equation \eqref{eq:formulehatX}. \\
Note that $\hat{X}_{\varepsilon,\gamma}=\hat{X}_{0,\gamma}+(1+\gamma)(h_\varepsilon(\hat{t})-f(\hat{t}))\mathbf{1}_{\hat{t}\in[t_0-\varepsilon,t_0+\varepsilon]\setminus \{t_0\}}$ and that $\|h_\varepsilon(\hat{t})-f(\hat{t})\|\leq 4\varepsilon L$. So, we have \begin{multline}
\mathbb{E}[\|X-\hat{X}_{\varepsilon,\gamma}\|^2]=\mathbb{E}[\|X-\hat{X}_{0,\gamma}\|^2]+(1+\gamma)^2\mathbb{E}[\|h_\varepsilon(\hat{t})-f(\hat{t})\|^2\mathbf{1}_{\hat{t}\in[t_0-\varepsilon,t_0+\varepsilon]\setminus \{t_0\}}]\\+2(1+\gamma) \mathbb{E} [\langle X-\hat{X}_{0,\gamma},(h_\varepsilon(\hat{t})-f(\hat{t}))\mathbf{1}_{\hat{t}\in[t_0-\varepsilon,t_0+\varepsilon]\setminus \{t_0\}}\rangle]\\ =\mathbb{E}[\|X-\hat{X}_{0,\gamma}\|^2]+\mathcal O(\varepsilon^2)+ o(\varepsilon).\label{eq:XhatXge} \end{multline}
Indeed, $\mathbb{P}(\hat{t}\in[t_0-\varepsilon,t_0+\varepsilon]\setminus \{t_0\})$ tends to 0 as $\varepsilon$ tends to 0. Besides, the random vector $\hat{X}_{\varepsilon,\gamma}$ takes its values in the image of a curve of length $$L_{\varepsilon,\gamma}:=(1+\gamma)(L(1-2\varepsilon)+\|f(t_0+\varepsilon)-f(t_0-\varepsilon)\|).$$ Moreover, since $f'_\ell(t_0)\neq f'_r(t_0)$, if $\varepsilon$ is small enough, there exists $\alpha\in[0,1)$ such that \begin{align*}
\|f(t_0+\varepsilon)-f(t_0-\varepsilon)\|^2&=\|f(t_0+\varepsilon)-f(t_0)+f(t_0)-f(t_0-\varepsilon)\|^2\\
&=\varepsilon^2\bigg[\Big\|\frac{f(t_0+\varepsilon)-f(t_0)}{\varepsilon}\Big\|^2+\Big\|\frac{f(t_0)-f(t_0-\varepsilon)}{\varepsilon}\Big\|^2\bigg.\\&\bigg.\quad+2\Big\langle \frac{f(t_0+\varepsilon)-f(t_0)}{\varepsilon}, \frac{f(t_0)-f(t_0-\varepsilon)}{\varepsilon}\Big\rangle\bigg]\\ &\leq \varepsilon^2(2L^2+2L^2\alpha). \end{align*}
Hence, $\|f(t_0+\varepsilon)-f(t_0-\varepsilon)\|<\varepsilon L\sqrt{2(1+\alpha)}$, and, thus,
$$L_{\varepsilon,\gamma}\leq (1+\gamma)(L-2\varepsilon L+\varepsilon L\sqrt{2(1+\alpha)})=(1+\gamma)(L-\eta\varepsilon),$$where $\eta=L\bigl(2-\sqrt{2(1+\alpha)}\bigr)>0$ since $\alpha<1$.
Let $\gamma=\frac{\eta\varepsilon}{L}$. Then, for $\varepsilon$ small enough, we get $L_{\varepsilon,\gamma}\leq L-\frac{(\eta\varepsilon)^2}{L}<L$ and, using equations \eqref{eq:XhatXg0} and \eqref{eq:XhatXge}, we have $\mathbb{E}[\|X-\hat{X}_{\varepsilon,\gamma}\|^2]<\mathbb{E}[\|X-\hat{X}\|^2].$ This contradicts the optimality of $f$. So, $f'_\ell(t_0)= f'_r(t_0)$. Similarly, we obtain that $f'_\ell(t_1)= f'_r(t_1)$. Finally, consider the curve $g$, defined by $$g(t)=\begin{cases}f(t)&\mbox{if }t\in[0,t_0]\cup [t_1,1]\\f(t_0+t_1-t)&\mbox{if }t\in(t_0,t_1).\end{cases}$$ In other words, $g$ has the same image as $f$, but the arc between $t_0$ and $t_1$ is traversed in the reverse direction. Since $g$ has the same image and length as $f$, it is also an optimal curve, and it satisfies $g(t_0)=g(t_1)$; hence, $g'_\ell(t_0)= g'_r(t_0)$ and $g'_\ell(t_1)= g'_r(t_1)$. On the other hand, by the definition of $g$, we know that $f'(t_0)=g'_\ell(t_0)=-g'_\ell(t_1)$ and $f'(t_1)=g'_r(t_1)=-g'_r(t_0)$. Hence, $f'(t_0)=-f'(t_1).$ \end{proof}
We introduce the set \begin{equation*} D= \Bigl\{ t\in[0,1) \mid \mbox{Card}\bigl( f^{-1}(\{f(t)\} )\cap [0,1) \bigr) \geq 2 \Bigr\}. \end{equation*}
\begin{lem}\label{lem:Gronwall} If $f(t)$, $t\in(0,1)$, is a multiple point of $f:[0,1]\to \mathbb{R}^2$, then $t$ cannot be right- or left-isolated:\\ for all $t\in D\cap(0,1)$, for all $\varepsilon>0$, $(t,t+\varepsilon)\cap D \not=\emptyset$ and $(t-\varepsilon,t)\cap D\not=\emptyset$. \end{lem}
\begin{proof} Let $t_0\in D\cap(0,1)$. Assume that there exists $\varepsilon>0$ such that $(t_0,t_0+\varepsilon)\cap D =\emptyset$ or $(t_0-\varepsilon,t_0)\cap D=\emptyset$. We will show that this leads to a contradiction. Without loss of generality, up to considering $t\mapsto f(1-t)$, we assume that $(t_0-\varepsilon,t_0)\cap D =\emptyset$. Let $t_1\in[0,1)$ be such that $t_0\not=t_1$ and $f(t_0)=f(t_1)$. By Lemma \ref{lem:tangent}, one has $f'_\ell(t_0)=- f'_r(t_1)$.
Let $$y=\frac{f'_r(t_1)}{L}$$ and define the functions $\alpha$ and $\beta$ by \begin{align*} \alpha(t)&=\langle f(t)-f(t_1),y \rangle \mbox{ for } t\in [t_1,t_1+\varepsilon) \\ \beta(t)&=\langle f(t)-f(t_0), y\rangle \mbox{ for } t\in (t_0-\varepsilon,t_0]. \end{align*} Notice, since $f(t_0)=f(t_1)$, that $\alpha$ and $\beta$ are restrictions, to $[t_1,t_1+\varepsilon)$ and $(t_0-\varepsilon,t_0]$ respectively, of the same function. Nevertheless, the two notations $\alpha$ and $\beta$ were chosen for readability.
The functions $\alpha$ and $\beta$ satisfy the following properties: \begin{itemize}
\item $\alpha$ is right-differentiable and $\alpha'_r(t)=\langle f'_r(t),y\rangle$ for every $t\in[t_1,t_1+\varepsilon)$. Since $\alpha '_r(t_1)=L>0$ and $\alpha'_r$ is right-continuous, there exists $\delta\in(0,\varepsilon)$, such that $\alpha '_r(t)\geq \delta L$ for every $t\in[t_1,t_1+\delta]$.
\item $\beta$ is left-differentiable and $\beta '_\ell(t)=\langle f_\ell'(t), y\rangle$ for every $t\in(t_0-\varepsilon,t_0].$ Since $\beta '_\ell(t_0)=-L<0 $ and $\beta_\ell '$ is left-continuous, there exists $\delta'\in(0,\varepsilon)$ such that $\beta_\ell '(t)\leq -\delta'L$ for every $t\in [t_0 - \delta', t_0].$ \end{itemize}
Without loss of generality, we may assume that $\delta'=\delta$, since it suffices to pick the smaller of the two values to retain the properties of $\alpha'_r$ and $\beta'_\ell$. In particular, we see that \begin{itemize}
\item $\alpha$ is a bijection from $[t_1,t_1+\delta]$ onto its image $\alpha([t_1,t_1+\delta])=[0,a],$ where $a:=\alpha(t_1+\delta)>0,$
\item $\beta$ is a bijection from $[t_0-\delta,t_0]$ onto its image $\beta([t_0-\delta,t_0])=[0,b]$, where $b:=\beta(t_0-\delta)>0$. \end{itemize} We denote by $\alpha^{-1}$ and $\beta^{-1}$ their inverse functions.
Let $z\in\mathbb{R}^2$ be such that $\|z\|=1$ and $\langle z, y\rangle =0.$
For every $t\in(t_1,\alpha^{-1}(b)],$ we have $\langle f(t)-f(\beta^{-1}(\alpha(t))),y\rangle=0$. Then, we may write $ f(t)-f(\beta^{-1}(\alpha(t)))=\langle f(t)-f(\beta^{-1}(\alpha(t))),z\rangle z$. Moreover, for $t\in(t_1,\alpha^{-1}(b)]$, since there are no further multiple points before $t_0$, $f(t)-f(\beta^{-1}(\alpha(t)))\neq 0$. Thus, there exists $\sigma\in\{-1,1\}$ such that $$\frac{f(t)-f(\beta^{-1}(\alpha(t)))}{\|f(t)-f(\beta^{-1}(\alpha(t)))\|}=\sigma z .$$ We suppose, without loss of generality, that the vector $z$ was chosen such that $\sigma=1.$
Now, let us show that, for $t\in(t_1,\alpha^{-1}(b)],$ $$\langle z,f'_r(t)\rangle\leq \frac 1{2\lambda}\sup_{t_1\leq s\leq t}\|f(s)-f(\beta^{-1}(\alpha(s)))\|.$$ Since $\langle z,f'_r(t_1)\rangle=0$, we have, according to Theorem \ref{theo:main}, \begin{align*} \langle z,f'_r(t)\rangle&=\langle z,f'_r(t)-f'_r(t_1)\rangle\\&=\int_{(t_1,t]} \langle z,f''(ds)\rangle\\ &=-\frac{1}{\lambda} \mathbb{E}\left[\langle X-f(\hat{t}),z \rangle \mathbf{1}_{\{t_1<\hat{t}\leq t\}}\right]\\
&=-\frac{1}{\lambda} \mathbb{E}\left[\left\langle X-f(\hat{t}),\frac{f(\hat{t})-f(\beta^{-1}(\alpha(\hat{t})))}{\|f(\hat{t})-f(\beta^{-1}(\alpha(\hat{t})))\|} \right\rangle \mathbf{1}_{\{t_1<\hat{t}\leq t\}}\right]. \end{align*}
Besides, for $t\in[0,1]$, starting from $$\|X-f(t)\|^2=
\|X-f(\hat{t})\|^2+\|f(\hat{t})-f(t)\|^2+2\langle X-f(\hat{t}),f(\hat{t})-f(t)\rangle,$$ we deduce, by optimality of $\hat{t}$, the inequality $$-\langle X- f(\hat{t}),f(\hat{t})-f(t) \rangle\leq \frac{1}{2}\|f(\hat{t})- f(t)\|^2\quad a.s.$$ Hence, we obtain \begin{align}\label{eq:majr}
\langle z,f'_r(t)\rangle&\leq \frac{1}{2\lambda}\mathbb{E}\left[\|f(\hat{t})-f(\beta^{-1}(\alpha(\hat{t})))\|\mathbf{1}_{\{t_1<\hat{t}\leq t\}}\right]\nonumber\\
&\leq \frac{1}{2\lambda} \sup_{t_1<s\leq t}\|f(s)-f(\beta^{-1}(\alpha(s)))\|. \end{align} Similarly, we get, for every $t\in[\beta^{-1}(a),t_0)$, \begin{equation}\label{eq:majl} \langle z,f'_\ell(t)\rangle
\leq \frac{1}{2\lambda} \sup_{t\leq s< t_0}\|f(s)-f(\alpha^{-1}(\beta(s)))\|. \end{equation} This may be seen for instance by considering the optimal curve parameterized in the reverse direction $t\mapsto f(1-t)$. For $x\in[0,a\land b)$, let $D(x)=f(\alpha^{-1}(x))-f(\beta^{-1}(x))$. This function $D$ is right-differentiable and $$ D'_r(x)=\frac{f'_r(\alpha^{-1}(x))}{\alpha'_r(\alpha^{-1}(x))}-\frac{f'_\ell(\beta^{-1}(x))}{\beta'_\ell(\beta^{-1}(x))}.$$ Moreover, $\alpha_r '(\alpha^{-1}(x))\geq \delta L$ and $-\beta '_\ell(\beta^{-1}(x))\geq \delta L$, so that \begin{align*}
\langle D'_r(x),z\rangle&\leq \frac{1}{\delta L}(\langle z, f'_r(\alpha^{-1}(x))\rangle+\langle z, f'_\ell(\beta^{-1}(x))\rangle)\\&\leq \frac{1}{\delta L\lambda}\sup_{u\leq x}\|D(u)\|.
\end{align*}For the last inequality, we used the upper bounds \eqref{eq:majr} and \eqref{eq:majl} together with the monotonicity of $\alpha$ and $\beta$. Observe, since $z=\frac{D(x)}{\|D(x)\|}$, that
$\langle D'_r(x),z\rangle$ is the right-derivative of $\|D(x)\|$. As $D(0)=0$, Gronwall's lemma implies that $D(x)=0$ for all $x\in[0,a\land b)$, which yields a contradiction, since the considered multiple point is supposed to be left-isolated. \end{proof}
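For the reader's convenience, the Gronwall-type step used at the end of the proof can be spelled out as follows (a minimal sketch, with the constant $C=\frac{1}{\delta L\lambda}$). Setting $u(x)=\|D(x)\|$, the function $u$ is Lipschitz, $u(0)=0$, and its right-derivative satisfies $u'_r(x)\leq C\sup_{v\leq x}u(v)$. Integrating,
$$u(x)\leq C\int_0^x \sup_{v\leq s}u(v)\,ds\leq Cx\sup_{v\leq x}u(v),\qquad x\in[0,a\land b).$$
Hence, for $x_0<\min(1/C,a\land b)$, taking the supremum over $x\leq x_0$ gives $\sup_{v\leq x_0}u(v)\leq Cx_0\sup_{v\leq x_0}u(v)$, so that $u\equiv 0$ on $[0,x_0]$; repeating the argument from $x_0$ yields $D\equiv 0$ on all of $[0,a\land b)$.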
We may now state the injectivity result in dimension 2, for open and closed curves.
\begin{pro}
\begin{enumerate}
\item[$(i)$] If $\mathcal C_L=\{\phi:[0,1]\to\mathbb{R}^2,\ \mathscr{L}(\phi)\leq L\}$, then $f$ is injective.
\item[$(ii)$] If $\mathcal C_L=\{\phi:[0,1]\to\mathbb{R}^2,\ \mathscr{L}(\phi)\leq L,\ \phi(0)=\phi(1)\}$, then either $f$ restricted to $[0,1)$ is injective or $\mbox{Im} f$ is a segment.
\end{enumerate} \end{pro}
\begin{proof} $(i)$ $\mathcal C_L=\{\phi:[0,1]\to\mathbb{R}^2,\ \mathscr{L}(\phi)\leq L\}$.\\ Thanks to Lemma \ref{lem:lastdouble}, if $f$ has multiple points, there exists a last multiple point. As such, this multiple point is right-isolated. However, by Lemma \ref{lem:Gronwall}, this cannot happen. So, $f$ is injective.
$(ii)$ $\mathcal C_L=\{\phi:[0,1]\to\mathbb{R}^2,\ \mathscr{L}(\phi)\leq L,\ \phi(0)=\phi(1)\}$.\\ We assume that $f$ restricted to $[0,1)$ is not injective. So, our aim is to prove that $\mbox{Im} f$ is a segment. As $f$ is supposed not to be injective, the set $D=\{ t\in[0,1) \mid \mbox{Card} ([0,1) \cap f^{-1}(\{f(t)\})) \ge 2 \}$ is non-empty. Without loss of generality, we can assume that $D\cap(0,1)\not=\emptyset$. Indeed, if $D=\{0\}$, we can replace $f$ by the curve $t\mapsto f((t+1/2) \text{ mod } 1)$ for which $D=\{1/2\}$.
Let us show that $D$ is dense in $(0,1)$. Proceeding by contradiction, we assume that there exists a non-empty open interval $(a,b)\subset(0,1)$ such that $D\cap (a,b)=\emptyset$. Since $D\cap (0,1)\not=\emptyset$, one has $D\cap (0,a] \not=\emptyset$ or $D\cap [b,1) \not=\emptyset$. Consider the case where $D\cap [b,1) \not=\emptyset$. Define $\beta=\inf( D\cap [b,1) )$. There exist two sequences $(t_k)_{k\ge 1}\subset D$ and $(s_k)_{k\ge 1}\subset D$ such that $t_k\downarrow \beta$, $f(t_k)=f(s_k)$ and $s_k\not= t_k$ for all $k\geq 1$. Up to an extraction, $s_k$ converges to a limit $s\in[0,1]$. If $\beta\not= s$ then $\beta\in D$ is left-isolated which is impossible by Lemma \ref{lem:Gronwall}. Thus $s=\beta$ and consequently $s_k \ge \beta$ for $k$ large enough. This yields $f'_r(s_k) \to f'_r(\beta)$. Besides, $f'_r(t_k) \to f'_r(\beta)$ and, by Lemma \ref{lem:tangent}, $f'_r(t_k)=-f'_r(s_k)$, which contradicts the fact that $f$ has speed $L$. The case where $D\cap (0,a] \not=\emptyset$ is similar.
The next step is to prove that the set $[0,1)\setminus D$ is finite. Let $t\in(0,1)\setminus D$. Since $D$ is dense, there exists a sequence $(t_k)_{k\geq 1}\subset D$ such that $t_k \downarrow t$. For every $k\geq 1$, there exists $s_k\neq t_k$ such that $f(t_k)=f(s_k)$. If $s\in[0,1]$ is a limit point of $(s_k)$, then $f(t)=f(s)$ which implies $t=s$ since $t\notin D$ and $t\not=0$. Therefore $\lim_{k\to\infty} s_k=t$. Up to an extraction, we may assume that $(s_k)$ converges increasingly or decreasingly to $t$.
By Lemma \ref{lem:tangent}, one has $f'(t_k)=-f'(s_k)$ for $k$ large enough. If $s_k\downarrow t$, one obtains a contradiction: $f'_r(t)=\lim_k f'_r(t_k)=-\lim_k f'_r(s_k)=-f'_r(t)$. Thus $s_k\uparrow t$ and one gets $f'_r(t)=-f'_\ell(t)$. This means that $f(t)$ is a cusp. Since $\|f''\|([0,1])<\infty$, there are only a finite number of such points.
Observe that, as a consequence of Lemma \ref{lem:tangent}, for every $t\in[0,1)$, $\mbox{Card} ([0,1) \cap f^{-1}(\{f(t)\}))<3$. Indeed, if a point has multiplicity at least 3, that is there exist $0\le t_1<t_2<t_3<1$ such that $f(t_1)=f(t_2)=f(t_3),$ then, on the one hand, $f'_r(t_1)=-f'(t_2)=-f'(t_3)$, and on the other hand, $f'(t_2)=-f'(t_3)$. Thus, one obtains again a contradiction: $f'_r(t_1)=f'(t_2)=f'(t_3)=0$. In other words, $D=\{ t\in[0,1) \mid \mbox{Card} ([0,1) \cap f^{-1}(\{f(t)\})) = 2 \}.$
We introduce the function $\phi:[0,1)\to[0,1)$, defined as follows: for $t\in[0,1)\setminus D$, set $ \phi(t)=t$ and for $t\in D$, set $\phi(t)=t'$ where $t'\in [0,1)\cap f^{-1}(\{f(t)\})$ and $t'\neq t$. Note that $\phi$ is an involution.
Let us show that the function $\phi$ is continuous on $(0,1)\setminus\{\phi(0)\}$. First, observe that $f$ is differentiable on $D\cap (0,1)$ by Lemma \ref{lem:tangent}, and that $f'$ is continuous on $D\cap (0,1)$ since $f'_r$ is right-continuous and $f'_\ell$ is left-continuous. Let $t\in(0,1)$ be such that $t\not=\phi(0)$ and let $(t_k)_{k\geq 1}$ be a sequence converging to $t$. Let $s\in[0,1]$ be a limit point of $(\phi(t_k))$. Since $f(t_k)=f(\phi(t_k))$, for all $k\geq 1$, one has $f(s)=f(t)$. Necessarily, $s\in(0,1)$ since $t\not=\phi(0)$. If $t\notin D$, one has $s=t=\phi(t)$. If $t\in D$, then $s\in\{t,\phi(t)\}$. Since $D\cap(0,1)$ is open, $t_k\in D$ for $k$ large enough, hence $f'(\phi(t_k))=-f'(t_k)$ for $k$ large enough. Thus $f'(s)=-f'(t)$ and consequently $s=\phi(t)$.
Let us show that $\phi$ is differentiable on $D\cap (0,1) \setminus\{\phi(0)\}$ and $\phi'(t)=-1$ for all $t\in D\cap (0,1) \setminus\{\phi(0)\}$. Let $t\in D\cap (0,1)$, $t\not=\phi(0)$. For all $h\in\mathbb{R}$ such that $|h|< t\land(1-t)$, we have \begin{align*} f(t+h)-f(t) &= f(\phi(t+h))-f(\phi(t))\\ &= \int_{\phi(t)}^{\phi(t+h)} f'(s) ds\\ &= \bigl(\phi(t+h)-\phi(t)\bigr) \int_0^1 f'\bigl(\phi(t)+u(\phi(t+h)-\phi(t))\bigr) du. \end{align*} Besides, since $f'$ is continuous at the point $\phi(t)\in D\cap(0,1)$ and $\phi$ is continuous at the point $t$, one has $\lim_{h\to 0} \int_0^1 f'\bigl(\phi(t)+u(\phi(t+h)-\phi(t))\bigr) du= f'(\phi(t))=-f'(t)$. One deduces that $\lim_{h\to 0} \bigl(\phi(t+h)-\phi(t)\bigr)/h=-1$.
Let us prove that $\phi(\phi(0)/2+t)=\phi(0)/2 + 1-t \mod 1$ for all $t\in[-\phi(0)/2, 1-\phi(0)/2)$. From the two previous steps, one deduces that if $\phi(0)=0$, $\phi(t)=1-t$ for all $t\in(0,1)$, as desired, while, if $\phi(0)\in(0,1)$, there exist two constants $c_1$ and $c_2$ such that $$ \phi(t)=c_1-t \quad \forall t\in(0,\phi(0)), \quad \phi(t)=c_2-t \quad \forall t\in(\phi(0),1).$$ It remains to prove that $c_1=\phi(0)$ and $c_2=1+\phi(0)$. As $\phi$ takes its values in $[0,1)$, one has $\phi(0)\le c_1\le 1$ and $1\le c_2\le 1+\phi(0)$. Moreover, since $\phi$ is a bijection, $c_2-t \ge c_1$ for $t\ge \phi(0)$ or $c_2-t\le c_1-\phi(0)$ for $t\ge \phi(0)$, that is $c_2-1\ge c_1$ or $c_2\le c_1$. In the first case, one gets $c_1=\phi(0)$ and $c_2=1+\phi(0)$. In the second case, one gets $c_1=c_2=1$, which is not possible: necessarily, $\phi(0)=1/2$, since otherwise $\phi(1-\phi(0))=\phi(0)$ which yields $1-\phi(0)=0$, and we see that the restriction of $f$ to $[0,1/2]$ is a closed curve with the same image as $f$, hence $f$ is not optimal.
Finally, define the curve $\tilde f$ by
$$\tilde f(t)= f\bigl((\phi(0)/2 +t) \mod 1\bigr).$$
This curve $\tilde f$ has the same image as $f$ and, from the last step,
$\tilde f(t)=\tilde f(1-t)$ for all $t\in[0,1]$.
Let us show that $\mbox{Im} f$ is a segment. Otherwise, the curve $g$ defined by
$$ g(t)=\tilde f(t) \quad\text{if $t\in[0,1/2]$}, \quad
g(t)=\tilde f(1/2)+ 2(t-1/2)\bigl(\tilde f(1)-\tilde f(1/2)\bigr) \quad\text{if $t\in[1/2,1]$}$$
satisfies $\mathscr{L}(g)<\mathscr{L}(f)$ and $\Delta(g)\le \Delta(f)$, since $\mbox{Im} f=\tilde f([0,1/2])$, thus $f$ cannot be optimal.
\end{proof}
\section{Examples of principal curves}
\subsection{Uniform distribution on an enlargement of a curve}
The purpose of this section is to study the principal curve problem for the uniform distribution on an enlargement of some generative curve. For $A\subset\mathbb{R}^d$ and $r\ge 0$, we denote by $$A\oplus r=\left\{x\in\mathbb{R}^d \mid d(x,A) \le r\right\}$$ the $r$-enlargement of $A$.
Under some conditions on the generative curve $f:[0,1]\to\mathbb{R}^d$, for $r$ small enough, it turns out that the image of an optimal curve with length $\mathscr{L}(f)$ for the uniform distribution on an $r$-enlargement of $\mbox{Im} f$ is necessarily $\mbox{Im} f$. More specifically, the radius $r$ must not exceed the reach of $\mbox{Im} f$.
The reach of a set $A\subset\mathbb{R}^d$ is the supremum of the radii $\rho$ such that every point at distance at most $\rho$ of $A$ has a unique projection on $A$. More formally, following \cite{F59}, we define for $A\subset\mathbb{R}^d$ $$ \mbox{reach}(A)=\sup\left\{\rho\ge 0 \mid \forall x\in\mathbb{R}^d \quad d(x,A)\le \rho \Rightarrow \exists! a\in A\quad d(x,a)=d(x,A)\right\} \in[0,+\infty].$$
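For instance (a quick sanity check of the definition, not needed in what follows), for the unit circle $S^1=\{x\in\mathbb{R}^2 \mid \|x\|=1\}$, every point $x\neq(0,0)$ with $d(x,S^1)<1$ has the unique projection $x/\|x\|$, whereas the center is at distance $1$ from $S^1$ and every point of $S^1$ realizes this distance; hence
$$\mbox{reach}(S^1)=1,$$
and, more generally, a circle of radius $R$ has reach $R$.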
The question of the optimality of the generative curve when considering the uniform distribution on an enlargement was first addressed in dimension $d=2$ in \cite{MT05}. Observe that related ideas can be found in \cite{GPVW}. Our proof in arbitrary dimension $d\geq 1$ relies on arguments in \cite{F59}, which moreover allow us to show uniqueness.
\begin{theo} \label{th}
Let $f:[0,1]\to\mathbb{R}^d$ be a curve. Suppose that $f$ is injective, differentiable, $f'$ is Lipschitz, and there exists $c>0$ such that $\|f'(t)\|\ge c$ for all $t\in[0,1]$. Then, the reach of $\mbox{Im} f$ is positive. Let $r\in (0, \mbox{reach}(\mbox{Im} f]$ and let $X$ be a random vector uniformly distributed on $ \mbox{Im} f\oplus r.$
Consider a continuous, increasing function $V:[0,\infty)\to[0,\infty)$ such that $V(0)=0$. Then, for every curve $g:[0,1]\to \mathbb{R}^d$ such that $\mathscr{L}(g)\le \mathscr{L}(f)$ one has
$$ \mathbb{E}\big[V\left(d(X, \mbox{Im} f )\right)\big]\le \mathbb{E}\big[V\left(d(X, \mbox{Im} g )\right)\big],$$
with equality if and only if $\mbox{Im} g=\mbox{Im} f$. \end{theo}
The proof of the theorem is based on two lemmas. For $k\ge 1$, $\lambda_k$ denotes the Lebesgue measure on $\mathbb{R}^k$ and $\alpha_k$ the volume of the unit ball in $\mathbb{R}^k$. From \cite[Lemma 42]{MT05}, we have the next result. \begin{lem} \label{maj}
Let $A$ be a compact connected subset of $\mathbb{R}^d$ with $\mathcal H^1(A)<\infty$. Then for all $r\ge 0$ one has
$$ \lambda_d(A \oplus r) \le \mathcal H^1(A) \alpha_{d-1} r^{d-1} + \alpha_d r^d.$$ \end{lem}
\begin{lem}
Let $f:[0,1]\to\mathbb{R}^d$ be a curve. Suppose that $f$ is injective, $f$ is differentiable, $f'$ is Lipschitz, and there exists $c>0$ such that $\|f'(t)\|\ge c$ for all $t\in[0,1]$.
Then, the reach of $A=\mbox{Im} f$ is positive and for all $r\le \mbox{reach}(A)$ one has
\begin{equation} \label{vol}
\lambda_d\left(A\oplus r\right)= \mathscr{L}(f)\alpha_{d-1} r^{d-1} + \alpha_d r^d
\end{equation}
Moreover,
one has
\begin{equation} \label{fron}
\left\{x\in A\oplus r \mid d\left(x,\partial (A\oplus r)\right) \ge r\right\} \subset A.
\end{equation} \end{lem}
\begin{proof} The assumptions on $f$ imply that there exists $\varepsilon>0$, a set $B\subset\mathbb{R}^d$ and a function $\phi: (-\varepsilon,1+\varepsilon)\to B$ such that $\phi$ is bijective, $\phi=f$ on $[0,1]$, $\phi$ is differentiable, $\phi'$ is Lipschitz and $\phi^{-1}$ is Lipschitz. From \cite[Theorem 4.19]{F59}, we deduce that $\mbox{reach}(A)>0$. For $r\in(0,\mbox{reach}(A))$, equality \eqref{vol} follows from \cite[Theorem 5.6 \& Remark 6.14]{F59}. For $r=\mbox{reach}(A)$, we can write $$\mathscr{L}(f)\alpha_{d-1} r^{d-1} + \alpha_d r^d = \lim_{n\to\infty} \lambda_d\bigpar{A\oplus(r-1/n)}=\lambda_d\bigpar{\{x\in\mathbb{R}^d \mid d(x,A)<r\}}$$ and $\lambda_d\bigpar{A\oplus r} \le \mathscr{L}(f)\alpha_{d-1} r^{d-1} + \alpha_d r^d$ by Lemma \ref{maj}. Thus, equality \eqref{vol} holds.
Now, we prove \eqref{fron}. Let $x\in A\oplus r$ such that $d\bigpar{x,\partial (A\oplus r)}\ge r$. According to \cite[Corollary 4.9]{F59}, if $0<s<\mbox{reach}(A)$ and $A'_s=\{y\in\mathbb{R}^d\mid d(y,A)\ge s\}$ then $$ d(y,A'_s)=s-d(y,A) \quad \text{whenever $0<d(y,A)\le s$}.$$ Suppose that $d(x,A)>0$, then for all $s\in [d(x,A), r)$ one has $d(x,A)=s-d(x,A'_s)$. Since $\lim_{s\to r} d(x,A'_s)=d(x,A'_r)=d\bigpar{x,\partial (A\oplus r)}$, one gets $d(x,A)\le 0$. This proves that $x\in A$.
\end{proof}
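As a sanity check of formula \eqref{vol} in the plane ($d=2$, $\alpha_1=2$, $\alpha_2=\pi$): for a segment $A$ of length $\ell$, which satisfies the assumptions of the lemma and has infinite reach, the $r$-enlargement $A\oplus r$ is a stadium-shaped set, namely a rectangle with sides $\ell$ and $2r$ capped by two half-disks of radius $r$, so that
$$\lambda_2(A\oplus r)=2\ell r+\pi r^2=\mathscr{L}(f)\,\alpha_1 r+\alpha_2 r^2.$$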
\begin{proof}[Proof of Theorem \ref{th}] We set $A=\mbox{Im} f$ and $B=\mbox{Im} g$. On the one hand, denoting by $V^{-1}$ the inverse of $V:[0,\infty)\to[0,V(\infty))$, \begin{align*} \mathbb{E}\bigcro{V(d(X,B))} &= \int_0^\infty \mathbb{P}\bigcro{ V(d(X,B)) >t} dt\\ &= \int_{0}^{V(\infty)} \Bigpar{1-\mathbb{P}\bigpar{ d(X,B) \le V^{-1}(t)}} dt\\ &=\int_0^{V(\infty)} \Bigpar{1-\frac{ \lambda_d\bigpar{(B\oplus V^{-1}(t))\cap (A\oplus r)} }{ \lambda_d(A\oplus r)}} dt. \end{align*} On the other hand, for $t\le V(r)$, by Lemma \ref{maj} and equation \eqref{vol}, one gets \begin{multline} \label{ine} \lambda_d\bigpar{(B\oplus V^{-1}(t))\cap (A\oplus r)} \le \lambda_d\bigpar{B\oplus V^{-1}(t)} \le \mathcal H^1(\mbox{Im} g) \alpha_{d-1} V^{-1}(t)^{d-1} + \alpha_d V^{-1}(t)^d\\
\le \mathscr{L}(f) \alpha_{d-1} V^{-1}(t)^{d-1} + \alpha_d V^{-1}(t)^d=\lambda_d\bigpar{A\oplus V^{-1}(t)}.\end{multline} Therefore, for all $t\in[0,V(\infty))$, $$1- \frac{ \lambda_d\bigpar{(B\oplus V^{-1}(t))\cap (A\oplus r)} }{ \lambda_d(A\oplus r)} \ge \biggcro{1- \frac{\lambda_d\bigpar{A\oplus V^{-1}(t)}}{ \lambda_d\bigpar{A\oplus r}}}_+ .$$ Consequently, \begin{align*} \mathbb{E}\bigcro{V(d(X,B))} &\ge \int_0^{V(r)} \biggpar{1- \frac{\lambda_d\bigpar{A\oplus V^{-1}(t)}}{ \lambda_d\bigpar{A\oplus r}}} dt =\mathbb{E}\bigcro{V(d(X,A))}. \end{align*} Suppose that $\mathbb{E}\bigcro{V(d(X,B))}=\mathbb{E}\bigcro{V(d(X,A))}$. Then, we have $$ \lambda_d\bigpar{A\oplus r} =\lambda_d\bigpar{(B\oplus V^{-1}(t))\cap (A\oplus r)}\; \mbox{$dt-$a.e. on } [V(r),+\infty).$$
By right continuity with respect to $t$, we obtain that $\lambda_d\bigpar{A\oplus r} =\lambda_d\bigpar{(B\oplus r)\cap (A\oplus r)}.$ From the chain of inequalities \eqref{ine} with $t=V(r)$, we deduce that $\mathscr{L}(f)=\mathcal H^1(\mbox{Im} g)$ and $$ \lambda_d\bigpar{(A\oplus r)\cap (B\oplus r)^c } =\lambda_d\bigpar{(A\oplus r)^c\cap (B\oplus r) }=0.$$ Let us show that $A\oplus r=B\oplus r$. Suppose that $(A\oplus r)\cap (B\oplus r)^c\not=\emptyset$. Then one can find $x\in\mathbb{R}^d$ and $a\in A$ such that $d(x,a)\le r$ and $d(x,B)>r$. Set $y=x-\varepsilon(x-a)$ where $0<\varepsilon\le 1$ and $\varepsilon < d(x,B)/r-1$. One has $d(y,a)\le r-\varepsilon r<r$ and $d(y,B)>d(x,B)-\varepsilon r>r$. Thus $y$ belongs to the interior of $(A\oplus r)\cap (B\oplus r)^c$ which implies $\lambda_d\bigpar{(A\oplus r)\cap (B\oplus r)^c }>0$. Therefore $(A\oplus r)\cap (B\oplus r)^c=\emptyset$. Similarly one can prove that $(A\oplus r)^c\cap (B\oplus r)=\emptyset$.
Finally, from \eqref{fron}, we deduce that $$B\subset \{x\in B\oplus r \mid d\bigpar{x,\partial (B\oplus r)} \ge r\} =\{x\in A\oplus r \mid d\bigpar{x,\partial (A\oplus r)} \ge r\}\subset A.$$ Since $\mathscr{L}(f)=\mathcal H^1(\mbox{Im} g)$, this implies that $A=B$. \end{proof}
\subsection{Uniform distribution on a circle}\label{section:exe}
In this section, we investigate the principal curve problem for a particular distribution, the uniform distribution on a circle.
\begin{pro}Consider the unit circle centered at the origin with parameterization given by $$g(t)=(\cos(2\pi t), \sin(2\pi t))$$ for $t\in[0,1]$. Let $U$ be a uniform random variable on $[0,1]$ and let $X=g(U)$. Then, for every $L<2\pi$, the circle centered at the origin with radius $\dfrac{L}{2\pi}$ is the unique closed principal curve with length $L$ for $X$. \end{pro}
\begin{proof}
Let $f:[0,1]\to\mathbb{R}^2$ be an optimal closed curve with length $L$.
We denote by $K$ the convex hull of $\mbox{Im} f$. Since $\mbox{Im} f$ is compact, $K$ is a compact convex set (consequence of Carath\'eodory's theorem; see, e.g., \cite{hiriart2012}).
Notice that $\mbox{Im} f$ is included in the unit disk: indeed, if not, since $f$ is a closed curve with $\mathscr{L}(f)<2\pi$, there would exist $u_1$ and $u_2$ such that $f(u_1)$ and $f(u_2)$ belong to the unit circle and the arc $t\in(u_1,u_2)\mapsto f(t)$ lies outside the disk; replacing this arc by the corresponding arc of the unit circle would yield a better and shorter curve, contradicting optimality. In turn, the convex hull $K$ is also included in the unit disk, by convexity of the latter.
Let $\pi_K: \mathbb{R}^2 \to K$ denote the projection onto $K$ and define the curve $h$ by $h(t)=\pi_K(g(t))$ for $t\in[0,1]$.
By this definition of $h$ as projection of the unit circle on a set included in the unit disk containing $\mbox{Im} f$, we have \begin{equation*}
\Delta(h)\leq\Delta(f).
\end{equation*}
\begin{itemize}
\item Let us prove that $h$ has length at most $L$.
First, note that $h$ has finite length, since $\pi_K$ is Lipschitz. By properties of the projection on a closed convex set, we know that the set of points of $\mathbb{R}^2$ projecting onto a given element of the boundary $\partial K$ of $K$ is a cone. This ensures that $h:[0,1]\to\partial K$ is onto, because a cone with vertex in the unit disk intersects the unit circle $\mbox{Im} g$ at least once. More specifically, if the cone reduces to a half-line (degenerate case), then it intersects $\mbox{Im} g$ exactly once. Otherwise, the cone is the region delimited by two distinct half-lines with common origin in the disk, and, thus, contains infinitely many such distinct half-lines, each of them intersecting $\mbox{Im} g$ once. Hence, for every $v\in \mbox{Im} h$, there is either exactly one $t$ such that $v=h(t)$, or infinitely many.
We will use Cauchy-Crofton's formula on the length of a curve (for a proof, see, e.g., \cite{ayari}). Let $d_{r,\theta}$ denote the line with equation $x\cos\theta+y\sin\theta=r$. For every curve $\phi=(\phi^1,\phi^2)$, if $$N_\phi(r,\theta)=\mbox{Card}(\{t\in[0,1],\phi(t)\in d_{r,\theta}\})=\mbox{Card}(\{t\in[0,1],\phi^1(t)\cos\theta+ \phi^2(t)\sin\theta=r\}),$$ then the length of $\phi$ is given by $$\frac 1 4 \int_0^{2\pi}\int_{-\infty}^{\infty}N_\phi(r,\theta)drd\theta .$$
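As a sanity check, we may apply the formula to the unit circle $g$ itself: the line $d_{r,\theta}$ meets $\mbox{Im} g$ at exactly two points when $|r|<1$ and misses it when $|r|>1$, so $N_g(r,\theta)=2$ for almost every $(r,\theta)$ with $|r|<1$, and
$$\frac 1 4 \int_0^{2\pi}\int_{-\infty}^{\infty}N_g(r,\theta)\,dr\,d\theta=\frac 1 4 \int_0^{2\pi}\int_{-1}^{1}2\,dr\,d\theta=2\pi,$$
which is indeed the length of the unit circle.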
Let us compare $N_h(r,\theta)$ and $N_f(r,\theta)$ for $(r,\theta)\in \mathbb{R}\times [0,2\pi]$. To begin with, note that $N_h(r,\theta)$ is finite almost everywhere since $h$ has finite length. So, we need only consider the cases where $N_h(r,\theta)$ is finite. This allows us to exclude the points $v\in \mbox{Im} h$ such that $h^{-1}(\{v\})$ is infinite, as well as the cases where a line $d_{r,\theta}$ and $\mbox{Im} h$ have a whole segment in common.
Observing that, if the line $d_{r,\theta}$ does not intersect $\mbox{Im} h$, then it does not intersect $\mbox{Im} f$ either, since $\mbox{Im} h$ is the boundary of the convex hull of $\mbox{Im} f$, it remains to look at the two following cases for comparing $N_h(r,\theta)$ and $N_f(r,\theta)$. \begin{itemize}
\item If the line $d_{r,\theta}$ intersects $\mbox{Im} h$ at a single point, then this point belongs to $\mbox{Im} f$.
\item If the line $d_{r,\theta}$ intersects $\mbox{Im} h$ at exactly two points, then $\mbox{Im} f$ crosses the line: if $\mbox{Im} f$ were located on one side of the line, $K$ would not be the convex hull. Since $f$ is a closed curve, $\mbox{Im} f$ crosses the line at least twice. \end{itemize} So, $N_h(r,\theta)\leq N_f(r,\theta)$ almost everywhere, that is $\mathscr{L}(h)\leq \mathscr{L}(f)=L$.
\item Now, observe that $ \mbox{Im} h\subset \mbox{Im} f$. Indeed, otherwise, there exists $t\in[0,1]$ such that $h(t)\notin \mbox{Im} f$, which means that $d(g(t),\mbox{Im} f)>d(g(t),K)$. By continuity this implies that $d(g(s),\mbox{Im} f)>d(g(s),K)$ for all $s$ in a non-empty open set and one obtains that $\Delta(h)<\Delta(f)$.
By optimality of $f$, this is not possible since $\mathscr{L}(h)\le L$.
\item Since $ \mbox{Im} h\subset \mbox{Im} f$ and $\mathscr{L}(f)=L$, to obtain that $\mbox{Im} f$ is the circle with center $(0,0)$ and radius $L/2\pi$, it remains to show that $\mbox{Im} h$ is the circle with center $(0,0)$ and radius $L/2\pi$. Let $\theta\in[0,1]$ and let $A_\theta:\mathbb{R}^2\to\mathbb{R}^2$ denote the rotation with center $(0,0)$ and angle $2\pi\theta$. We set $ h_\theta(t)=\pi_{A_\theta(K)}(g(t))$, for every $t\in[0,1]$. Since $ h_\theta(t)=A_\theta\circ \pi_K
( A_\theta^{-1}(g(t)))= A_\theta\circ\pi_K( g(t-\theta))$, $ h_\theta$ is a curve with same length as $h$. Moreover, $A_\theta(X)$ has the same distribution as $X$, so that
\begin{align*}
\mathbb{E}\left[\|X-\pi_{A_\theta(K)}(X)\|^2\right]&=\mathbb{E}\left[\|A_\theta(X)-\pi_{A_\theta(K)}(A_\theta(X))\|^2\right]\\
&=\mathbb{E}\left[\|A_\theta(X)-A_\theta(\pi_{K}(X))\|^2\right]\\
&=\mathbb{E}\left[\|X-\pi_{K}(X)\|^2\right].
\end{align*}
By strict convexity, we deduce from this equality that,
if $\mathbb{P}\left(\pi_{A_\theta(K)}(X)\not=\pi_K(X)\right)>0$, then
$$ \mathbb{E}\left[\|X-(\pi_K(X)+\pi_{A_\theta(K)}(X))/2\|^2\right] < \Delta(h).$$
Since the random variable $(\pi_K(X)+\pi_{A_\theta(K)}(X))/2$ takes its values in the image of the curve $(h+ h_\theta)/2$ with length smaller than $\mathscr{L}(h)\le L$, that is not possible. Consequently, $\pi_{A_\theta(K)}(X)=\pi_K(X)$ almost surely. In other words,
$\pi_{A_\theta(K)}(g(t))=h(t)$ for almost every $t\in[0,1]$, and, thus, by continuity, $h_\theta(t)=h(t)$ for every $t\in[0,1]$.
For $t\in[0,1]$, let $\theta=t$. We have $h(t)=h_t(t)=A_t\circ \pi_K(g(0))=A_t( h(0) )$.
Since $\Delta(f)=\Delta(h)$ and $\mathscr{L}(h)\le L$, $\mathscr{L}(h)=L$.
Hence, $\mbox{Im} h$ is the circle with center $(0,0)$ and radius $L/2\pi$.
\end{itemize} \end{proof}
\begin{rem}
Observe that radial symmetry of a distribution is not sufficient to guarantee that a given circle will be a constrained principal curve for this distribution. Let us exhibit two counterexamples.
\begin{itemize}
\item Let $p>0$ and let $\mathcal U$ denote the uniform distribution on the unit circle. Consider a random variable $X$ taking its values in $\mathbb{R}^2$, distributed according to the mixture distribution $$p\delta_{(0,0)}+(1-p)\mathcal U,$$ where $\delta_{(0,0)}$ stands for the Dirac mass at the origin $(0,0)$. Then, for every circle with center $(0,0)$ and radius $r\in(0,1]$, because of the atom at the origin, the projection of $X$ on the circle fails to be unique with positive probability, which implies, thanks to Proposition \ref{prop:hatXneg}, that none of these circles may be a constrained principal curve for $X$.
\item We consider the case where $X$ is a standard Gaussian random vector in $\mathbb{R}^2$. Lemma \ref{lem:noneq} ensures that the circle with center $(0,0)$ and radius $\mathbb{E}[\|X\|]=\sqrt{\pi/2}$ cannot be a constrained principal curve for $X$ because it is self-consistent.
\end{itemize}
\end{rem}
\Addresses
\end{document} |
\begin{document}
\begin{center} {\Large {\bf Dynamical maps and measurements}} \end{center}
\hfil\break
\begin{center} {\bf Aik-meng Kuah\footnote{kuah@physics.utexas.edu}, E.C.G. Sudarshan \\} {\it Department of Physics \\ University of Texas at Austin \\ Austin, Texas 78712-1081}\\ \end{center}
\begin{center} September 12, 2002 \end{center}
\begin{abstract}
We show how a set of POVMs, expressed as a set of $\mu$ linear maps, can be performed with a unitary transformation followed by a von Neumann measurement with an ancillary system of no more than $\mu N^2$ dimensions. This result shows that all generalized linear transformations and measurements on density matrices can be performed by unitary transformations and von Neumann measurements by coupling to a suitably large ancillary system.
\end{abstract}
\hfil\break \hfil\break
\pagebreak
\section{Introduction}
Dynamical maps (also known as superoperators) describe generalized linear transformations that can be performed on density matrices. It has been shown that dynamical maps that preserve the trace of the density matrix can be performed by a unitary transformation after adding an ancillary system of a given size ~\cite{Sudarshan85}.
In the related problem of measurement, generalized measurements are described by a set of linear maps, where each individual map acting on the density matrix does not preserve its trace, much like a projection acting on a Hilbert space ray does not preserve its norm. It has been shown that if the maps can be described by one-dimensional operations, then they can always be performed as a projection in a larger system (i.e., with a suitable ancillary system).
In this paper, we show that any set of non-trace preserving maps can be performed by making unitary transformations and von-Neumann (projective) measurements in a larger system. This result shows that all linear transformations and measurements on density matrices can always be performed by unitary transformations and von-Neumann measurements, by coupling an ancillary system of appropriate size to the original system.
We will begin by re-deriving the result that a dynamical map can be performed by a unitary transformation in a larger system, and we will show that this proof can be easily extended to a set of maps.
\section{A dynamical map}
A dynamical map is linear and maps valid density matrices to valid density matrices:
\begin{eqnarray} \Lambda: \rho \rightarrow \sum_{r',s'}\Lambda_{rr',ss'} \rho_{r',s'} = \rho'_{r,s} \\ \text{Hermiticity:}\quad \rho'_{r,s} = {\rho'_{s,r}}^* \Rightarrow \Lambda_{rr',ss'} = {\Lambda_{ss',rr'}}^* \\ \text{Trace preservation:}\quad Tr[\rho'] = Tr[\rho] \Rightarrow \sum_r \Lambda_{rr',rs'} = \delta_{r',s'} \end{eqnarray}
The map should also be completely positive to preserve the positivity of the density matrix.
A map acting on an $N \times N$ density matrix can be written as an $N^2 \times N^2$ Hermitian matrix, and therefore it has a canonical decomposition with $\nu \leq N^2$ eigen-operators and real eigenvalues (see~\cite{Sudarshan61}, \cite{Choi72}, \cite{Kraus83}):
\begin{equation} \Lambda: \rho \rightarrow \sum_\alpha \lambda_\alpha L_\alpha \rho L_\alpha^\dagger \end{equation}
The condition that the map preserves the trace requires:
\begin{equation} \sum_\alpha \lambda_\alpha L_\alpha^\dagger L_\alpha = I \label{norm} \end{equation}
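The trace-preservation condition $\sum_\alpha \lambda_\alpha L_\alpha^\dagger L_\alpha = I$ is easy to check numerically. The sketch below is our illustration, not part of the paper's derivation: the qubit depolarizing channel and the strength $p=0.3$ are hypothetical choices used only to exhibit a Kraus set satisfying the condition.

```python
import numpy as np

# Pauli matrices for a single qubit (N = 2).
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

p = 0.3  # depolarizing strength (hypothetical choice)
# Kraus (eigen-)operators sqrt(lambda_alpha) * L_alpha of the depolarizing channel.
kraus = [np.sqrt(1 - 3*p/4) * I2,
         np.sqrt(p/4) * X,
         np.sqrt(p/4) * Y,
         np.sqrt(p/4) * Z]

# Completeness: sum_alpha K_alpha^dagger K_alpha = I.
completeness = sum(K.conj().T @ K for K in kraus)
assert np.allclose(completeness, I2)

# Consequently the channel preserves the trace of any density matrix.
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
rho_out = sum(K @ rho @ K.conj().T for K in kraus)
assert np.isclose(np.trace(rho_out).real, 1.0)
```

Any other Kraus set obeying the completeness relation would pass the same check.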
Let us consider a transformation $U$ acting on a larger system, which consists of the original system coupled with an ancillary system of dimension $\nu$. This larger Hilbert space is spanned by the basis $\{ |r^A\rangle|\alpha^B\rangle; 0 \leq r \leq N-1, 0 \leq \alpha \leq \nu-1 \}$. Let $U$ be given by:
\begin{equation}
U: |r'^A\rangle|0^B\rangle \rightarrow \sum_{r \alpha} \sqrt{\lambda_\alpha}
\left[L_\alpha\right]_{rr'} |r^A\rangle|\alpha^B\rangle \end{equation}
The superscript $A$ labels the original system, and the superscript $B$ labels the ancillary system. At this point, we have not yet specified how $U$ transforms the states $|r'^A\rangle|\alpha^B\rangle$ for $\alpha \neq 0$; however, we note that $U$ preserves the orthonormality between the states
$|r'^A\rangle|0^B\rangle$ and $|s'^A\rangle|0^B\rangle$ if:
\begin{equation} \sum_{r\alpha} \lambda_\alpha \left[L_\alpha^\dagger\right]_{s'r} \left[L_\alpha\right]_{rr'} = \delta_{r',s'} \end{equation}
This is nothing more than the condition that the map preserves the trace, equation~\ref{norm}. So $U$ can easily be ``completed'' into a unitary transformation by making sure that the states $\{|r'^A\rangle|\alpha^B\rangle; \alpha \neq 0\}$ are unitarily transformed into the space orthogonal to the space spanned by $\{ U |r'^A\rangle|0^B\rangle \}$. Exactly how $U$ transforms those states is unimportant, so many choices of $U$ exist.
Now if we take our original density matrix $\rho$ and couple it to an ancillary system of dimension $\nu$, initially in the state $|0^B\rangle\langle0^B|$, the unitary transformation $U$ acting on this overall state gives:
\begin{equation} \sum_{r s \alpha \beta} \sqrt{\lambda_\alpha} \sqrt{\lambda_\beta} \left[L_\alpha\right]_{rr'} \rho_{r's'} \left[L_\beta^\dagger\right]_{s's}
| r^A \rangle\langle s^A | \otimes | \alpha^B \rangle\langle \beta^B | \end{equation}
Tracing over the ancillary system $B$ will give us the desired transformation $\Lambda \rho$. Therefore, we see that any trace-preserving map can be thought of as the contraction of a unitary transformation acting on a larger system, by suitably coupling an ancillary system of dimension $\nu \leq N^2$ to the original system.
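The whole construction can be exercised numerically. The following sketch is our illustration under hypothetical choices (a random trace-preserving Kraus set obtained by slicing a random unitary). It builds only the isometric part of $U$, i.e. its action on $|r'^A\rangle|0^B\rangle$, which is all that matters when the ancilla starts in $|0^B\rangle\langle0^B|$, and checks that tracing out the ancilla reproduces $\Lambda\rho$.

```python
import numpy as np

N, nu = 2, 4                      # system dimension N, ancilla dimension nu
rng = np.random.default_rng(0)

# A random trace-preserving Kraus set: slice a random unitary on C^(N*nu),
# K_alpha[r, r'] = U[(r, alpha), (r', 0)], basis index (r, alpha) -> r*nu + alpha.
A = rng.standard_normal((N*nu, N*nu)) + 1j*rng.standard_normal((N*nu, N*nu))
U, _ = np.linalg.qr(A)
kraus = [U.reshape(N, nu, N, nu)[:, a, :, 0] for a in range(nu)]
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(N))

# Isometry V = U restricted to the initial ancilla state |0^B>.
V = np.zeros((N*nu, N), dtype=complex)
for a, K in enumerate(kraus):
    V[a::nu, :] = K               # rows r*nu + a, for all r

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
big = V @ rho @ V.conj().T        # U (rho ⊗ |0><0|) U^dagger on the relevant block

# Partial trace over the ancilla (sum the diagonal ancilla blocks).
rho_out = np.trace(big.reshape(N, nu, N, nu), axis1=1, axis2=3)
expected = sum(K @ rho @ K.conj().T for K in kraus)
assert np.allclose(rho_out, expected)
```

Completing $V$ to a full unitary, as in the text, does not change the output because the extra columns act only on ancilla states orthogonal to $|0^B\rangle$.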
\section{Sets of maps}
A generalized measurement can be described by a set of maps $\{ \Lambda^{(i)} \}$, where the probability of the $i$th outcome is $Tr [ \Lambda^{(i)} \rho ]$ and the state of the system if the $i$th outcome is obtained (post-selection) is $\Lambda^{(i)} \rho / Tr [ \Lambda^{(i)} \rho ]$. Individually, the maps $\Lambda^{(i)}$ do not preserve the trace of the density matrix, but if the measurement is complete (i.e., the total probability is $1$) then the overall map $\sum_i \Lambda^{(i)}$ will preserve the trace of the density matrix.
Let us decompose the maps into their eigen-operators and eigenvalues:
\begin{equation} \Lambda^{(i)} : \rho \rightarrow \sum_\alpha {\lambda^{(i)}}_\alpha {L^{(i)}}_\alpha \rho {L^{(i)}}_\alpha^\dagger \end{equation}
The condition that the trace is preserved gives:
\begin{equation} \sum_{i \alpha} {\lambda^{(i)}}_\alpha {L^{(i)}}_\alpha^\dagger {L^{(i)}}_\alpha = I \label{sum} \end{equation}
Let us define a transformation $V$ in a larger space:
\begin{equation}
V: |r'^A\rangle|(0,0)^B\rangle \rightarrow \sum_{r (i,\alpha)} \sqrt{\lambda^{(i)}_\alpha} \left[L^{(i)}_\alpha\right]_{rr'}
|r^A\rangle|(i,\alpha)^B\rangle \end{equation}
The ancillary system here is now labelled with two indices $(i,\alpha)$, and its size is given by the number of operators $L^{(i)}_\alpha$. This size is bounded by $\mu N^2$, where $\mu$ is the number of maps $\Lambda^{(i)}$, since each map $\Lambda^{(i)}$ has at most $N^2$ eigen-operators.
As in the previous section, we have not defined the transformation on a complete set of states, but it preserves the orthonormality between the states $|r'^A\rangle|(0,0)^B\rangle$ and $|s'^A\rangle|(0,0)^B\rangle$:
\begin{equation}
\left(V|r'^A\rangle|(0,0)^B\rangle \right)^\dagger
\left(V|s'^A\rangle|(0,0)^B\rangle \right) = \sum_{r i \alpha} \lambda^{(i)}_\alpha \left[L^{(i)}_\alpha\right]^*_{rr'} \left[L^{(i)}_\alpha\right]_{rs'} = \delta_{r' s'} \end{equation}
The last equality is obtained by applying equation~\ref{sum}. We can therefore complete $V$ into a unitary transformation as in the previous section. Now $V$ acting on the initial state $\rho \otimes |(0,0)^B\rangle\langle(0,0)^B|$ will give:
\begin{equation} \sum_{r s i j \alpha \beta} \sqrt{\lambda^{(i)}_\alpha} \sqrt{\lambda^{(j)}_\beta} \left[L^{(i)}_\alpha\right]_{rr'} \rho_{r' s'} \left[L^{(j)}_\beta\right]^*_{ss'}
| r^A\rangle\langle s^A| \otimes | (i,\alpha)^B\rangle\langle (j,\beta)^B | \end{equation}
If we make a von-Neumann measurement on the ancillary system, given by the projections $\{ \left(\sum_\alpha |(i,\alpha)^B\rangle\langle (i,\alpha)^B|\right) \}$, the $i$th outcome would give $\Lambda^{(i)} \rho$.
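A minimal numerical illustration (ours, not from the paper): take the two-outcome measurement of a qubit in the computational basis, $\Lambda^{(i)}\rho = P_i \rho P_i$, build the isometric part of $V$, and check that projecting the ancilla onto the label $i$ reproduces the outcome probabilities $Tr[\Lambda^{(i)}\rho]$.

```python
import numpy as np

N = 2
P0 = np.diag([1.0, 0.0]).astype(complex)   # single Kraus operator of Lambda^(0)
P1 = np.diag([0.0, 1.0]).astype(complex)   # single Kraus operator of Lambda^(1)
kraus_sets = [[P0], [P1]]

# Flatten the double ancilla label (i, alpha) into a single index b.
ops = [(i, K) for i, kset in enumerate(kraus_sets) for K in kset]
mu = len(ops)
V = np.zeros((N*mu, N), dtype=complex)     # action of V on |r'^A>|(0,0)^B>
for b, (i, K) in enumerate(ops):
    V[b::mu, :] = K                        # rows r*mu + b, for all r
assert np.allclose(V.conj().T @ V, np.eye(N))   # the measurement is complete

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
big = V @ rho @ V.conj().T                 # state after the unitary

# Projective measurement on the ancilla: outcome i collects all labels (i, alpha).
probs = []
for i in range(len(kraus_sets)):
    sel = [b for b, (j, _) in enumerate(ops) if j == i]
    probs.append(sum(big[r*mu + b, r*mu + b].real for b in sel for r in range(N)))

expected = [np.trace(K @ rho @ K.conj().T).real
            for kset in kraus_sets for K in kset]
assert np.allclose(probs, expected)        # here probs = [0.7, 0.3]
```

The same code accepts several Kraus operators per map; only the bookkeeping of the label $(i,\alpha)$ changes.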
\section{Non trace preserving maps and sets of maps}
We required in the last section that the combination of all the maps, $\sum_i \Lambda^{(i)}$, preserves the trace of the quantum state, so that it represents a complete measurement. If $\sum_i \Lambda^{(i)}$ does not preserve the trace, we need only add a map $I - \sum_i \Lambda^{(i)}$ to the set of maps to be performed. Any outcome due to this extra map is discarded; this has the effect of reducing the trace of the quantum state.
In the case where we want to perform a dynamical map $\Lambda$ that does not preserve the trace, we can perform the measurement procedure as outlined in the last section on the set of maps $\{ \Lambda, I-\Lambda\}$, and discard the outcome if $I-\Lambda$ is obtained.
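One concrete way to realize the discarded branch (our sketch; the paper only asserts that the complementary map exists): if the given Kraus operators satisfy $\sum K^\dagger K \leq I$, then $R = (I - \sum K^\dagger K)^{1/2}$ is a valid Kraus operator whose branch carries exactly the missing probability, so the enlarged set is complete.

```python
import numpy as np

# A hypothetical non-trace-preserving map with a single Kraus operator K.
K = np.array([[0.6, 0.0], [0.1, 0.5]], dtype=complex)
M = np.eye(2) - K.conj().T @ K            # positive semidefinite deficit
w, Q = np.linalg.eigh(M)
R = Q @ np.diag(np.sqrt(w.clip(min=0.0))) @ Q.conj().T   # R = M^{1/2}

# The enlarged set {K, R} is complete, so it defines a full measurement.
assert np.allclose(K.conj().T @ K + R.conj().T @ R, np.eye(2))

rho = np.array([[0.5, 0.1], [0.1, 0.5]], dtype=complex)
p_keep = np.trace(K @ rho @ K.conj().T).real
p_discard = np.trace(R @ rho @ R.conj().T).real
assert np.isclose(p_keep + p_discard, 1.0)   # probabilities sum to one
```

Discarding the $R$-outcome then reproduces the non-trace-preserving map on the kept branch.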
\section{Conclusions}
We have shown that all linear measurements and transformations on density matrices can be accomplished by ``traditional'' von-Neumann measurements and unitary transformations, when performed in a suitably large system.
\section{Acknowledgments}
We would like to thank Anil Shaji and Todd Tilma for their help and insightful discussions.
\end{document} |
\begin{document}
\selectlanguage{english}
\begin{abstract} Dependencies of the optimal constants in strong and weak type bounds will be studied between maximal functions corresponding to the Hardy--Littlewood averaging operators over convex symmetric bodies acting on $\mathbb R^d$ and $\mathbb Z^d$. Firstly, we show, in the full range of $p\in[1,\infty]$, that these optimal constants in $L^p(\mathbb R^d)$ are never larger than their discrete analogues in $\ell^p(\mathbb Z^d)$; we also show that equality holds for the cubes in the case $p=1$. This in particular implies that the best constant in the weak type $(1,1)$ inequality for the discrete Hardy--Littlewood maximal function associated with centered cubes in $\mathbb Z^d$ grows to infinity as $d\to\infty$, and for $d=1$ it is equal to the larger root of the quadratic equation $12C^2-22C+5=0$. Secondly, we prove dimension-free estimates for the $\ell^p(\mathbb Z^d)$ norms, $p\in(1,\infty]$, of the discrete Hardy--Littlewood maximal operators with the restricted range of scales $t\geq C_q d$ corresponding to $q$-balls, $q\in[2,\infty)$. Finally, we extend the latter result on $\ell^2(\mathbb Z^d)$ to the maximal operators restricted to the dyadic scales $2^n\ge C_q d^{1/q}$. \end{abstract}
\maketitle
\section{Introduction} \subsection{A brief overview of the paper} Throughout this paper $d\in\mathbb{N}$ always denotes the dimension of the Euclidean space $\mathbb{R}^d$, and $G$ denotes a convex symmetric body in $\mathbb{R}^d$, which is a bounded closed and symmetric convex subset of $\mathbb{R}^d$ with nonempty interior. We shall consider convex bodies in two contexts, continuous and discrete. Therefore, in order to avoid unnecessary technicalities, we always assume that $G\subset \mathbb{R}^d$ is closed (whereas in the literature it is usually assumed to be open). One of the most classical examples of convex symmetric bodies are the $q$-balls $B^q\subset \mathbb{R}^d$, $q\in[1,\infty]$, defined for $q \in [1,\infty)$ by \begin{equation*}
B^q:=B^q(d):=\Big\{x\in\mathbb{R}^d: \vert x\vert_q:=\Big( \sum_{i=1}^d |x_i|^q\Big)^{1/q}\le 1 \Big\}, \end{equation*} and for $q=\infty$ by \begin{equation*}
B^\infty:=B^\infty(d):=\Big\{x\in\mathbb{R}^d: \vert x\vert_\infty:=\max_{1\leq i\leq d}|x_i|\le 1 \Big\}. \end{equation*} If $q=2$ then $B^2$ is the closed unit Euclidean ball in $\mathbb{R}^d$ centered at the origin, and if $q=\infty$ then $B^{\infty}$ is the cube in $\mathbb{R}^d$ centered at the origin and of side length $2$.
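For intuition, membership in the dilate $t\,B^q(d)$ is a one-line check; the sketch below (illustrative only, in small dimension) also counts the lattice points $|G_t\cap\mathbb{Z}^d|$ by brute force over the bounding cube.

```python
import itertools
import numpy as np

def in_q_ball(x, q, t=1.0):
    """Is x in the dilate t * B^q(d)?"""
    x = np.asarray(x, dtype=float)
    if q == np.inf:
        return np.max(np.abs(x)) <= t
    return np.sum(np.abs(x)**q) <= t**q

# B^2 is the Euclidean unit ball, B^infinity the cube of side length 2.
assert in_q_ball([1, 0, 0], 2) and not in_q_ball([1, 1, 0], 2)
assert in_q_ball([1, 1, 1], np.inf)

def lattice_points(q, t, d):
    """|G_t ∩ Z^d| for G = B^q(d), by brute force over the bounding cube."""
    m = int(np.floor(t))
    return sum(in_q_ball(x, q, t)
               for x in itertools.product(range(-m, m + 1), repeat=d))

assert lattice_points(np.inf, 1.0, 2) == 9   # the cube {-1, 0, 1}^2
assert lattice_points(2, 1.0, 2) == 5        # origin and the four unit vectors
```

Brute force is only feasible for tiny $d$; the point is the definition, not the algorithm.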
We associate with a convex symmetric body $G\subset \mathbb{R}^d$ the families of continuous $(M_t^G)_{t>0}$ and discrete $(\mathcal{M}_t^G)_{t>0}$ averaging operators given respectively by \begin{equation} \label{eq:18}
M^G_t F(x):=\frac{1}{|G_t|} \int_{G_t} F(x-y)\, {\rm d} y, \qquad F\in L^1_{\rm loc}(\mathbb{R}^d), \end{equation} and \begin{equation} \label{eq:19}
\mathcal{M}^G_t f(x):=\frac{1}{|G_t\cap\mathbb{Z}^d|} \sum_{y\in G_t\cap\mathbb{Z}^d } f(x-y), \qquad f\in \ell^\infty(\mathbb{Z}^d), \end{equation} where $G_t=\{y\in\mathbb{R}^d: t^{-1}y\in G\}$ is the dilate of $G\subset \mathbb{R}^d$. Moreover, we define the corresponding maximal functions by \begin{equation*}
M_\ast^G F(x) :=\sup_{t>0} \big|M_t^G F(x)\big|, \qquad \text{ and } \qquad
\mathcal{M}_\ast^G f(x) :=\sup_{t>0} \big|\mathcal{M}_t^G f(x)\big|. \end{equation*}
It is well known that both maximal functions are of weak type $(1,1)$ and of strong type $(p,p)$ for any $p\in(1,\infty]$. Moreover, neither of these maximal functions is of strong type $(1,1)$. Our primary interest is focused on determining whether the constants arising in the weak and strong type inequalities can be chosen independently of the dimension $d$. Moreover, we shall compare the best constants in such inequalities for $M_\ast^G$ and $\mathcal{M}_\ast^G$, respectively.
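To fix ideas, here is a brute-force evaluation of the centered discrete maximal function in the simplest case $d=1$, $G=[-1,1]$ (our illustration; $f$ is extended by zero and the supremum over $t>0$ reduces to integer radii).

```python
import numpy as np

def discrete_hl_max(f):
    """Centered discrete Hardy-Littlewood maximal function on Z for d = 1,
    G = [-1, 1]: sup over t of |G_t ∩ Z|^{-1} * sum_{|y| <= t} |f(x - y)|,
    with f supported on {0, ..., n-1} and extended by zero."""
    n = len(f)
    out = np.zeros(n)
    for x in range(n):
        for t in range(n):            # larger radii only shrink the average
            lo, hi = max(0, x - t), min(n, x + t + 1)
            out[x] = max(out[x], np.sum(np.abs(f[lo:hi])) / (2*t + 1))
    return out

f = np.zeros(11)
f[5] = 1.0                            # a point mass at the center
Mf = discrete_hl_max(f)
assert Mf[5] == 1.0                   # radius t = 0 at the peak
assert np.isclose(Mf[7], 1/5)         # best radius at x = 7 is t = 2
```

The quadratic loop is fine for a demonstration; fast algorithms are beside the point here.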
For $p\in (1, \infty]$ we denote by $C(G,p)$ the smallest constant $0< C< \infty$ for which the following strong type inequality holds \begin{equation*}
\| M_\ast^G F \|_{L^p(\mathbb{R}^d)} \leq C \| F \|_{L^p(\mathbb{R}^d)}, \qquad F \in L^p(\mathbb{R}^d). \end{equation*} Similarly, $C(G,1)$ will stand for the smallest constant $0< C< \infty$ satisfying \begin{equation*}
\sup_{\lambda>0}\lambda \, |\{ x \in \mathbb{R}^d : M_\ast^G F(x) > \lambda \}| \leq C \| F \|_{L^1(\mathbb{R}^d)}, \qquad F \in L^1(\mathbb{R}^d). \end{equation*} Analogously to $C(G,p)$, we define $\mathcal{C}(G,p)$ for any $p \in [1, \infty]$, referring to $\mathcal{M}_\ast^G$ in place of $M_\ast^G$.
Our main result of this paper can be formulated as follows.
\begin{theorem}\label{thm:T} Fix $d \in \mathbb{N}$ and let $G \subset \mathbb{R}^d$ be a convex symmetric body. Then for each $p \in [1, \infty]$ we have \begin{align} \label{eq:16} C(G,p) \leq \mathcal{C}(G,p). \end{align}
Moreover, for the $d$-dimensional cube
$B^\infty(d)\subset \mathbb{R}^d$ one has
\begin{align}
\label{eq:52}
C(B^\infty(d),1) = \mathcal{C}(B^\infty(d),1).
\end{align} \end{theorem}
Some comments are in order: \begin{itemize} \item[(i)] Clearly, $C(G,\infty)=\mathcal{C}(G,\infty)=1$, since we are working with averaging operators. \item[(ii)] Theorem \ref{thm:T} gives us a quantitative dependence between $C(G,p)$ and $\mathcal{C}(G,p)$. Inequality \eqref{eq:16} reflects a well-known phenomenon in harmonic analysis: it is harder to establish bounds for discrete operators than for their continuous counterparts. \item[(iii)] Formula \eqref{eq:52} was observed by the first author in his master's thesis. However, it has not been published before. In particular, it yields that $\mathcal{C}(B^\infty(d),1)\xrightarrow{d\to\infty} \infty$ in view of the result of Aldaz \cite{Ald1} asserting that the optimal constant $C(B^\infty(d),1)$ in the weak type $(1,1)$ inequality grows to infinity as $d\to\infty$. Quantitative bounds for the constant $C(B^\infty(d),1)$ were given by Aubrun \cite{Aub1}, who proved $C(B^\infty(d),1)\gtrsim_{\varepsilon}(\log d)^{1-\varepsilon}$ for every $\varepsilon>0$, and soon after that by Iakovlev and Str\"omberg \cite{IakStr1}, who considerably improved Aubrun's lower bound by showing that $C(B^\infty(d),1)\gtrsim d^{1/4}$. The latter result also ensures in the discrete setup that $\mathcal{C}(B^\infty(d),1)\gtrsim d^{1/4}$. \item[(iv)] If $d=1$, then \eqref{eq:52} combined with the result of Melas \cite{Mel} implies that $\mathcal{C}(B^\infty(1),1)$ is equal to the larger root of the quadratic equation $12C^2-22C+5=0$. \item[(v)] The product structure of the cubes $B^\infty(d)$ and the fact that one works with continuous/discrete norms for $p=1$ are essential to prove $C(B^\infty(d),1) = \mathcal{C}(B^\infty(d),1)$. At this moment it does not seem that our method can be used to attain equality in \eqref{eq:16} for $G=B^\infty(d)$ with $p\in(1, \infty)$. In the general case, as we shall see later in this paper, inequality \eqref{eq:16} cannot be reversed.
\item[(vi)] The proof of inequality \eqref{eq:16} relies on a suitable generalization of the ideas described in the master's thesis of the first author. The details are presented in Section \ref{sec:Tr}.
\end{itemize}
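The constant in comment (iv) is explicit; a quick numerical check (ours) of the larger root of $12C^2-22C+5=0$:

```python
import math

# Larger root of 12 C^2 - 22 C + 5 = 0: the sharp weak type (1,1) constant
# of Melas for the one-dimensional centered maximal function.
C = (22 + math.sqrt(22**2 - 4*12*5)) / (2*12)
assert abs(C - (11 + math.sqrt(61)) / 12) < 1e-12
assert abs(12*C**2 - 22*C + 5) < 1e-9
print(round(C, 5))   # 1.56752
```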
Systematic studies of dimension-free estimates in the continuous case were initiated by Stein \cite{SteinMax}, who showed that $C(B^2,p)$ is bounded independently of the dimension for all $p\in(1,\infty]$. Shortly afterwards Bourgain \cite{B1} proved that $C(G,2)$ can be estimated by an absolute constant independent of the dimension and the convex symmetric body $G\subset \mathbb{R}^d$. This result was extended to the range $p\in(3/2,\infty]$ in \cite{B2} and independently by Carbery in \cite{Car1}. It is conjectured that one can estimate $C(G,p)$ by a dimension-free constant for all $p\in(1,\infty]$. This was verified for the $q$-balls $B^q$, $q\in[1,\infty)$, by M\"uller \cite{Mul1} and for the cubes $B^\infty$ by Bourgain \cite{B3}. The latter result exhibits an interesting phenomenon, which shows that the dimension-free estimates on $L^p(\mathbb{R}^d)$ for $p\in(1, \infty]$ cannot be extended to the weak type $(1, 1)$ endpoint. Namely, the optimal constant $C(B^\infty(d),1)$ in the weak type $(1,1)$ inequality, as Aldaz \cite{Ald1} proved, grows to infinity with the dimension. Additionally, if the range of scales $t$ in the definition of the maximal operator $M_\ast^G$ is restricted to the dyadic values ($t\in \mathbb D := \{2^n : n \in \mathbb{Z} \}$), then the constants in the strong type $(p,p)$ inequalities with $p\in(1,\infty]$ are bounded uniformly in $d$ for any $G$, see \cite{Car1}. For a more detailed account of the subject in the continuous case, its history and extensive literature we refer to \cite{DGM1} or \cite{BMSW4}.
Surprisingly, in the discrete setting there is no hope for estimating $\mathcal{C}(G,p)$ independently of the dimension and the convex symmetric body. Fixing $1\leq \lambda_1<\cdots<\lambda_d<\ldots<\sqrt{2}$ and examining the ellipsoids \begin{equation} \label{eq:elip} E(d):=\Big\{x\in \mathbb{R}^d : \sum_{j=1}^d \lambda_j^2\, x_j^2\,\le 1 \Big\}, \end{equation} it was proved in \cite[Theorem~2]{BMSW3} that for every $p\in(1, \infty)$ there is a constant $C_p>0$ such that for all $d\in \mathbb{N}$ we have \begin{align} \label{eq:17}
\mathcal C(E(d), p)\ge \sup_{\|f\|_{\ell^p(\mathbb{Z}^d)}\le 1}\big\|\sup_{0<t\le d }|\mathcal{M}_{t}^{E(d)} f|\big\|_{\ell^p(\mathbb{Z}^d)} \ge C_p(\log d)^{1/p}. \end{align} This inequality shows that if $p\in(3/2, \infty)$, then for sufficiently large $d$ the inequality in \eqref{eq:16} with $G=E(d)$ is strict, since from \cite{B2} and \cite{Car1} we know that there exists a finite constant $C_p>0$ independent of the dimension such that \begin{align} \label{eq:32} C(E(d), p)\le C_p. \end{align}
On the other hand, for cubes $G=B^\infty(d)$, it was also proved \cite[Theorem~3]{BMSW3} that for every $p\in(3/2, \infty]$ there is a finite constant $C_p>0$ such that for every $d\in\mathbb{N}$ one has \begin{align} \label{eq:31} \mathcal{C}(B^\infty(d), p)\le C_p. \end{align} For $p\in(1, 3/2]$ it still remains open whether $\mathcal{C}(B^\infty(d), p)$ can be estimated independently of the dimension. In view of the second part of Theorem \ref{thm:T} interpolation does not help, since $\mathcal{C}(B^\infty(d),1)\xrightarrow{d\to\infty} \infty$.
Inequalities \eqref{eq:17} and \eqref{eq:31} illustrate that the dimension-free phenomenon in the discrete setting contrasts sharply with the situation that we know from the continuous setup. However, as it was shown in \cite[Theorem~2]{BMSW3}, if the dimension-free estimates fail in the discrete setting it may only happen for small scales, see also \eqref{eq:17}. To be more precise, define the discrete restricted maximal function \begin{align} \label{eq:24}
\mathcal{M}_{\ast, > D}^G f(x) :=\sup_{t>D} \big|\mathcal{M}_t^G f(x)\big|; \qquad x\in\mathbb{Z}^d, \quad D\ge 0, \end{align} corresponding to the averages from \eqref{eq:19}. For $D=0$ we see that $\mathcal{M}_{\ast, > D}^G=\mathcal{M}_{\ast}^G$. Then by \cite[Theorem~1]{BMSW3} one has for arbitrary convex symmetric bodies $G\subset\mathbb{R}^d$ that there exists $c(G)>0$ such that \begin{align} \label{eq:26}
\big\|\mathcal{M}_{\ast, > c(G)d}^G f\big\|_{\ell^p(\mathbb{Z}^d)}
\leq e^6 C(G, p)\|f\|_{\ell^p(\mathbb{Z}^d)}, \qquad f\in \ell^p(\mathbb{Z}^d). \end{align} Specifically, $\frac{1}{2}d^{1/2}\le c(E(d))\le d^{1/2}$ for the ellipsoids \eqref{eq:elip}, and consequently by \eqref{eq:26}, we also have \begin{equation*} \big\Vert\mathcal{M}_{\ast, > d^{3/2}}^{E(d)} f\big\Vert_{\ell^p(\mathbb{Z}^d)}\leq e^6C(E(d), p)\Vert f\Vert_{\ell^p(\mathbb{Z}^d)}, \qquad f\in \ell^p(\mathbb{Z}^d), \end{equation*} which ensures dimension-free estimates for any $p\in (3/2, \infty]$ thanks to \eqref{eq:32}.
In the case of $q$-balls $G=B^q(d)$, $q\in[1,\infty]$, in view of \cite{B3} and \cite{Mul1}, inequality \eqref{eq:26} comes down to \begin{equation} \label{eq:28} \big\Vert\mathcal{M}_{\ast, > d^{1+1/q}}^G f\big\Vert_{\ell^p(\mathbb{Z}^d)}\leq C_{p,q}\Vert f\Vert_{\ell^p(\mathbb{Z}^d)}, \qquad f\in \ell^p(\mathbb{Z}^d), \end{equation} for all $p\in(1,\infty]$, where $C_{p,q}$ denotes a constant that depends on $p$ and $q$ but not on the dimension $d$. We close this discussion by gathering a few conjectures that arose upon completing \cite{BMSW3}, \cite{BMSW4} and \cite{BMSW2}.
\begin{conjecture} \label{con:1} Let $d\in\mathbb{N}$ and let $B^q(d)\subset\mathbb{R}^d$ be a $q$-ball. \begin{enumerate}[label*={\arabic*}.] \item (Weak form) Is it true that for every $p\in(1, \infty)$ and $q\in [1, \infty]$ there exist constants $C_{p, q}>0$ and $t_q>0$ such that for every $d\in\mathbb{N}$ we have \begin{align} \label{eq:25}
\sup_{\|f\|_{\ell^p(\mathbb{Z}^d)}\le 1}\big\|\mathcal{M}_{\ast, > t_q d}^{B^q(d)} f\big\|_{\ell^p(\mathbb{Z}^d)}\le C_{p, q} \, ? \end{align} \item (Strong form) Is it true that for every $p\in(1, \infty)$ and $q\in[1, \infty]$ there exists a constant $C_{p, q}>0$ such that for every $d\in\mathbb{N}$ we have \begin{align} \label{eq:33} \mathcal{C}(B^q(d), p)\le C_{p, q} \, ? \end{align} \end{enumerate} \end{conjecture} A few comments about these conjectures are in order. \begin{itemize} \item[(i)] The first conjecture arose on the one hand in view of the inequalities \eqref{eq:17} and \eqref{eq:28}, and on the other hand in view of the result from \cite{BMSW4}, where it had been verified for the Euclidean balls. Indeed, if $q=2$ then following \cite[Section 5]{BMSW4} one can see that \eqref{eq:28} holds with $ad$ in place of $d^{1+1/q}$, where $a>0$ is a large absolute constant independent of $d$. Since the dimension-free phenomenon may only break down for small scales, the conjectured threshold from which we can expect dimension-free estimates in the discrete setup is at the level of a constant multiple of $d$.
\item[(ii)] The second conjecture says that one should expect dimension-free estimates for $\mathcal{C}(B^q(d), p)$ corresponding to $q$-balls as we have for their continuous counterparts $C(B^q(d), p)$. It was verified in \cite{BMSW3} for the cubes, i.e. for $\mathcal{C}(B^{\infty}(d), p)$ with $p\in(3/2, \infty]$, as we have seen in \eqref{eq:31}. Moreover, in \cite{BMSW2} the second and fourth authors in collaboration with Bourgain and Stein proved that the discrete dyadic Hardy--Littlewood maximal function
$\sup_{n\in\mathbb{N}}|\mathcal{M}^{B^2}_{2^n} f|$ over the Euclidean balls $B^2(d)$ has dimension-free estimates on $\ell^2(\mathbb{Z}^d)$. Although this can be thought of as the first step towards establishing \eqref{eq:33}, the general case seems to be very difficult even for the Euclidean balls or cubes for $p\in(1, 3/2]$. This will surely require new methods. \end{itemize}
In our second main result of this paper we verify the first conjecture for the balls $B^q$ for all $q\in(2,\infty)$.
\begin{theorem}\label{thm:1} For every $q\in(2,\infty)$ and each $a>0$ and $p\in(1,\infty)$ there exists $C(p,q,a)>0$ independent of the dimension $d\in\mathbb{N}$ such that for all $f\in\ell^p(\mathbb{Z}^d)$ we have \begin{equation*}
\big\|\sup_{N \geq a d}|\mathcal{M}_N^{B^q}f|\big\|_{\ell^p(\mathbb{Z}^d)}\leq C(p,q,a) \|f\|_{\ell^p(\mathbb{Z}^d)}. \end{equation*} \end{theorem}
The proof of Theorem \ref{thm:1} is presented in Section \ref{sec:Bq}; it relies on the methods developed in \cite[Section 5]{BMSW4}, Hanner's inequality \eqref{eq:Han_ineq} and Newton's generalized binomial theorem. It follows from \cite[Theorem~2]{BMSW4} that Theorem \ref{thm:1} remains true for $q=2$, but only for sufficiently large $a>0$. Our proof of Theorem \ref{thm:1} can be easily adapted to yield the same result; we point out the necessary changes in the proof. Since we rely on Hanner's inequality, our proof of Theorem \ref{thm:1} does not carry over to $q \in [1,2)$, because for such values of $q$ the inequality \eqref{eq:Han_ineq} is reversed.
Our final result concerns the dyadic maximal operator associated with $q$-balls. \begin{theorem} \label{thm:10} Fix $q \in [2, \infty)$. Let $C_1, C_2>0$ and define $\mathbb D_{C_1, C_2}:=\{N\in\mathbb D:C_1d^{1/q}\le N\le C_2d\}$. Then there exists a constant $C_q>0$ independent of the dimension such that for every $f\in \ell^2(\mathbb{Z}^d)$ we have \begin{align} \label{eq:20}
\big\|\sup_{N\in \mathbb D_{C_1, C_2}}|\mathcal M_N^{B^q}f|\big\|_{\ell^2(\mathbb{Z}^d)}\le C_q\|f\|_{\ell^2(\mathbb{Z}^d)}. \end{align} \end{theorem} Theorem \ref{thm:10} is an incremental step towards establishing the second conjecture. By adapting the ideas developed in \cite{BMSW2} we are able to obtain dimension-free estimates for the discrete restricted dyadic Hardy--Littlewood maximal functions over $q$-balls for all $q\in[2, \infty)$. Theorem \ref{thm:10} generalizes \cite[Theorem 2.2]{BMSW3}, which was stated for $q = 2$. The proof of inequality \eqref{eq:20} as in \cite{BMSW2} exploits the invariance of $B_N^q\cap\mathbb{Z}^d$ under the permutation group of $\mathbb{N}_d$. Then we can
efficiently use probabilistic arguments on the permutation group corresponding to $\mathbb{N}_d$, which reduce the matter to the dimension-decreasing trick as in \cite{BMSW2}. The proof of Theorem \ref{thm:10} is a technical elaboration of the methods from \cite{BMSW2}. However, for the convenience of the reader, mainly due to the intricate technicalities, we decided to provide the necessary details in Section \ref{sec:4}. We remark that the condition $q \in [2, \infty)$ cannot be dropped in our proof of Theorem \ref{thm:10}, as it is required in the estimate at the origin from Proposition \ref{prop:1}.
\subsection{Notation} The following basic notation will be used throughout the paper.
\begin{enumerate}[label*={\arabic*}.] \item We will write $A \lesssim_{\delta} B$ ($A \gtrsim_{\delta} B$) to say that there is an absolute constant $C_{\delta}>0$ (which depends on a parameter $\delta>0$) such that $A\le C_{\delta}B$ ($A\ge C_{\delta}B$). We will write $A \simeq_{\delta} B$ when $A \lesssim_{\delta} B$ and $A\gtrsim_{\delta} B$ hold simultaneously. We shall abbreviate subscript $\delta$ if irrelevant. \item Let $\mathbb{N}:=\{1,2,\ldots\}$ be the set of positive integers and $\mathbb{N}_0 := \mathbb{N}\cup\{0\}$, and $\mathbb D:=\{2^n: n\in\mathbb{Z}\}$ will denote the set of dyadic numbers. We set $\mathbb{N}_N := \{1, 2, \ldots, N\}$ for any $N \in \mathbb{N}$. \item For a measurable set
$A\subseteq \mathbb{R}^d$ we denote by $|A|$ the Lebesgue measure of $A$ and by
$|A\cap \mathbb{Z}^d|$ the number of lattice points in $A.$
\item The Euclidean space $\mathbb{R}^d$ is endowed with the standard inner product \[ x\cdot\xi:=\langle x, \xi\rangle:=\sum_{k=1}^dx_k\xi_k \] for every two vectors $x=(x_1,\ldots, x_d)$ and
$\xi=(\xi_1, \ldots, \xi_d)$ from $\mathbb{R}^d$. Let $|x|_2:=\sqrt{\langle x, x\rangle}$ denote the Euclidean norm of a vector
$x\in\mathbb{R}^d$. The closed Euclidean ball centered at the origin with radius one will be denoted by $B^2$. We shall abbreviate $B^2$ to $B$ and $|\cdot|_2$ to $|\cdot|$.
\item Let $(X, \mu)$ be a measure space $X$ with a $\sigma$-finite measure $\mu$. The space of all measurable functions whose modulus is integrable with $p$-th power is denoted by $L^p(X)$ for $p\in(0, \infty)$, whereas $L^{\infty}(X)$ denotes the space of all measurable essentially bounded functions. The space of all measurable functions that are weak type $(1, 1)$ will be denoted by $L^{1, \infty}(X)$. In our case we will usually have $X=\mathbb{R}^d$ or
$X=\mathbb{T}^d$ equipped with Lebesgue measure, and $X=\mathbb{Z}^d$ endowed with counting measure. If $X$ is endowed with counting measure we will abbreviate $L^p(X)$ to $\ell^p(X)$ and $L^{1, \infty}(X)$ to $\ell^{1, \infty}(X)$. If the context causes no confusion we will also abbreviate $\|\cdot\|_{L^p(\mathbb{R}^d)}$ to
$\|\cdot\|_{L^p}$ and $\|\cdot\|_{\ell^p(\mathbb{Z}^d)}$ to
$\|\cdot\|_{\ell^p}$.
\item Let $\mathcal{F}$ denote the Fourier transform on $\mathbb{R}^d$ defined for any function $f \in L^1\big(\mathbb{R}^d\big)$ as \begin{align*} \mathcal{F} f(\xi) := \int_{\mathbb{R}^d} f(x) e^{2\pi i \sprod{\xi}{x}} {\: \rm d}x \quad \text{for any}\quad \xi\in\mathbb{R}^d. \end{align*} If $f \in \ell^1\big(\mathbb{Z}^d\big)$ we define the discrete Fourier transform by setting \begin{align*} \hat{f}(\xi) := \sum_{x \in \mathbb{Z}^d} f(x) e^{2\pi i \sprod{\xi}{x}} \quad \text{for any}\quad \xi\in\mathbb{T}^d, \end{align*} where $\mathbb{T}^d$ denotes the $d$-dimensional torus, which will be identified with $Q:=[-1/2, 1/2]^d$. To simplify notation we will denote by $\mathcal F^{-1}$ the inverse Fourier transform on $\mathbb{R}^d$ or the inverse Fourier transform (Fourier coefficient) on the torus $\mathbb{T}^d$. This will cause no confusion, since the meaning will always be clear from the context.
\end{enumerate}
\section{Transference of strong and weak type inequalities: Proof of Theorem \ref{thm:T}} \label{sec:Tr}
Here we elaborate on the arguments from the master's thesis of the first author to prove Theorem \ref{thm:T}. The general idea behind the proof of \eqref{eq:16} is as follows. We fix a non-negative bump function $F \colon \mathbb{R}^d \to \mathbb{R}$ for which the constant in the corresponding maximal inequality is almost $C(G,p)$. Since dilations are available in the continuous setting, $F$ can be taken to be very slowly varying. Then we sample the values of $F$ at lattice points to produce $f \colon \mathbb{Z}^d \to \mathbb{R}$. Because $F$ is regular, the norms of $F$ and $f$ are almost the same. Moreover, we deduce that $M_*^G F$ cannot be essentially larger than $\mathcal{M}_*^G f$. Indeed, for $F$ slowly varying its maximal function is slowly varying as well. Also, given $n \in \mathbb{Z}^d$ we see that $M_t^G F(n)$ is certainly not much greater than $f(n)$, unless $t$ is very large. For large values of $t$, in turn, the sets $G_{t} \cap \mathbb{Z}^d$ are regular, making the quantities $M_t^G F(n)$ and $\mathcal{M}^G_{t} f(n)$ comparable to each other. The constant in the maximal inequality associated with $f$ is then at least not much smaller than $C(G,p)$. Thus, \eqref{eq:16} holds.
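The sampling step in this outline is a Riemann-sum effect: for a slowly varying dilate $F_K$, the normalized $\ell^p$ norm of its lattice samples approaches $\|F\|_{L^p}$. A small numerical illustration (our choice of a Gaussian $F$ in $d=1$ and $p=2$, for which $\|F\|_{L^2}=(\pi/2)^{1/4}$):

```python
import numpy as np

F = lambda x: np.exp(-x**2)           # a smooth, rapidly decaying bump (d = 1)
p = 2
exact = (np.pi / 2) ** 0.25           # ||F||_{L^2}, since ∫ e^{-2x^2} dx = sqrt(pi/2)

def sampled_norm(K, cutoff=50):
    """(K^{-1} * sum_n F_K(n)^p)^{1/p} with F_K(x) = F(x/K), n in Z."""
    n = np.arange(-cutoff * K, cutoff * K + 1)
    return (np.sum(F(n / K) ** p) / K) ** (1 / p)

errs = [abs(sampled_norm(K) - exact) for K in (1, 10, 100)]
assert errs[2] < errs[0]              # the error decreases as K grows
assert errs[2] < 1e-10
```

The cutoff merely truncates the negligible tails of the Gaussian; for a compactly supported bump no cutoff is needed.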
\begin{proof}[Proof of Theorem \ref{thm:T}] Let $G \subset \mathbb{R}^d$ be a convex symmetric body. Let $r\in(0, 1)$ and $R>1$ be real numbers such that $B_r\subset G\subset B_R$, where $B_t$ is the Euclidean ball centered at the origin with radius $t>0$. We may assume that $p\in[1, \infty)$, otherwise there is nothing to do. We now distinguish three cases. In the first two cases we prove \eqref{eq:16} for arbitrary $G$ and all $p\in[1, \infty)$. In the third case we show that the equality is attained in \eqref{eq:16} if $G=B^{\infty}$ and $p=1$. \paragraph{\bf Case 1, when $p \in (1, \infty)$} Fix $\eta \in (0,1)$ and take $F \in C_{\rm c}^\infty(\mathbb{R}^d)$ such that $F \geq 0$ and \begin{equation}\label{T1}
\| M^G_* F \|_{L^p(\mathbb{R}^d)} \geq (1-\eta) C(G,p) \|F\|_{L^p(\mathbb{R}^d)}. \end{equation} For each $K \in \mathbb{N}$ let us define $F_K$ by setting $F_K(x) := F(\frac{x}{K})$. Note that
$\|F_K\|_{L^p(\mathbb{R}^d)} = K^{d/p} \, \|F\|_{L^p(\mathbb{R}^d)}$. Moreover, since $M^G_* F_K (x) = M^G_* F (\frac{x}{K})$, we have
$\| M^G_* F_K \|_{L^p(\mathbb{R}^d)} = K^{d/p} \, \| M^G_* F \|_{L^p(\mathbb{R}^d)}$. Next, we define $f_K \colon \mathbb{Z}^d \rightarrow [0, \infty)$ by setting $f_K(n) := F_K(n)$ for $n \in \mathbb{Z}^d$. Then we immediately have \begin{align} \label{eq:34}
\|F\|_{L^p(\mathbb{R}^d)} = \lim_{K \rightarrow \infty} \Big( \frac{1}{K^d}\sum_{n \in \mathbb{Z}^d } f_K(n)^p \Big)^{1/p}. \end{align} Thus for all sufficiently large $K\in \mathbb{N}$ (say $K \geq K_1$) we see \begin{equation}\label{T2}
\| f_K \|_{\ell^p(\mathbb{Z}^d)} \leq (1+\eta) K^{d/p} \|F\|_{L^p(\mathbb{R}^d)}. \end{equation}
Choose $N \in \mathbb{N}$ such that \begin{equation}\label{T3}
\| M^G_* F \cdot \ind{[-N,N]^d} \|_{L^p(\mathbb{R}^d)} \geq (1 - \eta) \, \|M^G_* F\|_{L^p(\mathbb{R}^d)}. \end{equation} In a similar way as in \eqref{eq:34}, we conclude that there exists $K_2$ such that for all $K \geq K_2$ we have \begin{equation}\label{T4} \Big( \frac{1}{K^d}\sum_{n \in \mathbb{Z}^d \cap [-NK, NK]^d} M^G_* F_K(n)^p \Big)^{1/p}
\geq (1 - \eta) \, \| M^G_* F \cdot \ind{[-N,N]^d} \|_{L^p(\mathbb{R}^d)}. \end{equation}
Let $\kappa > 0$ be such that $M^G_*F_K(n) \geq \kappa$ for each $K \in \mathbb{N}$ and $n \in \mathbb{Z}^d \cap [-NK,NK]^d$. We fix $\varepsilon \in (0, \eta \kappa/2)$ and take $\delta > 0$ for which
$|x - y| < \delta$ implies $|F(x) - F(y)| < \varepsilon$. Since $G_t\subset B_{tR}$, we obtain \begin{displaymath} M^G_tF(x) \leq F(x)+\varepsilon, \qquad t \in (0, \delta R^{-1}), \end{displaymath} or, equivalently, \begin{displaymath} M^G_tF_K(x) \leq F_K(x)+\varepsilon, \qquad t \in (0, K \delta R^{-1}). \end{displaymath} Our goal is to prove that \begin{equation}\label{T5} \mathcal{M}_*^G f_K(n) \geq (1 - \eta) \, M^G_*F_K(n), \qquad n \in \mathbb{Z}^d \cap [-NK,NK]^d, \end{equation} if $K$ is large enough. To this end we shall show separately that \begin{displaymath} \mathcal{M}_*^G f_K(n) \geq M^G_t F_K(n) - \eta \kappa, \qquad t \in (0, K \delta R^{-1}), \end{displaymath} and \begin{displaymath} \quad \mathcal{M}_*^G f_K(n) \geq (1 - \eta/2) \, M^G_t F_K(n) - \eta \kappa / 2, \qquad t \geq K \delta R^{-1}. \end{displaymath}
Fix $n \in \mathbb{Z}^d \cap [-NK,NK]^d$. Obviously, if $t \in (0, K \delta R^{-1})$, then the first inequality holds, since \begin{displaymath} M^G_tF_K(n) \leq F_K(n) + \varepsilon \leq \mathcal{M}_*^G f_K(n) + \varepsilon \leq \mathcal{M}_*^G f_K(n) + \eta \kappa. \end{displaymath} Hence, it remains to prove the second estimate for
$M^G_t F_K(n)$ in the case $t \geq K \delta R^{-1}$. Let $\rho \in (0,1)$ be such that $|G_{1+2\rho} | \leq (1-\eta/2)^{-1} |G|$ and assume that $K \geq K_3 := \sqrt{d}R / (r \delta \rho)$. Therefore, for each $t \geq K \delta R^{-1}$ we have $t + \sqrt{d}/r \leq (1+ \rho)t$. Let $Q_m:=m+B^\infty_{1/2}$ be the cube centered at $m \in \mathbb{Z}^d$ and of side length $1$. If $Q_m \cap G_t \neq \emptyset$ for some $m \in \mathbb{Z}^d$, then one can easily see that $Q_m \subseteq G_{t+\sqrt{d}/r} \subseteq G_{(1+\rho)t}$, provided $t \geq K_3 \delta R^{-1}$. Consequently, we conclude \begin{align*}
\mathcal{M}_{t + \sqrt{d}/r}^G f_K(n) & = \frac{1}{|G_{t + \sqrt{d}/r} \cap \mathbb{Z}^d|} \, \sum_{m \in G_{t + \sqrt{d}/r} \cap \mathbb{Z}^d} f_K(n-m) \\
& \geq \frac{1}{|G_{t + \sqrt{d}/r} \cap \mathbb{Z}^d|} \, \sum_{m \in G_{t + \sqrt{d}/r} \cap \mathbb{Z}^d} \Big( \int_{Q_m} F_K(n-x) \, {\rm d}x - \varepsilon \Big) \\
& \geq \Big( \frac{1}{|G_{t + \sqrt{d}/r} \cap \mathbb{Z}^d|} \, \int_{G_t} F_K(n-x) \, {\rm d} x \Big) - \varepsilon \\
& \geq \frac{|G_t|}{|G_{(1+\rho)t} \cap \mathbb{Z}^d|} M^G_t F_K(n) - \eta \kappa / 2, \end{align*} where in the first inequality we have used that $K \geq K_3 \geq \sqrt{d} / \delta.$ Hence, it remains to show \begin{displaymath}
|G_t| \geq (1-\eta/2) \ |G_{(1+\rho)t} \cap \mathbb{Z}^d|. \end{displaymath} We notice that if $m \in G_{(1+\rho)t} \cap \mathbb{Z}^d$, then $Q_m \subseteq G_{(1+\rho)t + \sqrt{d}/r} \subseteq G_{(1+2\rho)t}$. Thus we obtain \begin{displaymath}
|G_{(1+\rho)t} \cap \mathbb{Z}^d| \leq |G_{(1+2\rho)t}| \leq (1-\eta/2)^{-1} \, |G_t|. \end{displaymath} Finally, combining \eqref{T1}, \eqref{T2}, \eqref{T3}, \eqref{T4}, and \eqref{T5}, for any $K \geq \max\{K_1, K_2, K_3\}$ we have \begin{displaymath}
\| \mathcal{M}^G_* f_K \|_{\ell^p(\mathbb{Z}^d)} \geq \frac{(1- \eta)^4}{1 + \eta} \, C(G,p) \, \|f_K \|_{\ell^p(\mathbb{Z}^d)}. \end{displaymath} Hence, since $\eta \in (0,1)$ was arbitrary, we conclude that $\mathcal{C}(G,p) \geq C(G,p)$.
\paragraph{\bf Case 2, when $p =1$} The inequality $\mathcal{C}(G,1) \geq C(G,1)$ can be deduced in much the same way as for $p \in (1, \infty)$. We only describe the necessary changes. Namely, as in \eqref{T1} we fix $\eta \in (0,1)$ and take $F \in C_{\rm c}^\infty(\mathbb{R}^d)$ such that $F \geq 0$ and \begin{equation}\label{T1'}
\| M^G_* F \|_{L^{1, \infty}(\mathbb{R}^d)} \geq (1-\eta) C(G,1) \|F\|_{L^1(\mathbb{R}^d)}. \end{equation} Then we choose $N \in \mathbb{N}$ such that \begin{equation}\label{T3'}
\| M^G_* F \cdot \ind{[-N,N]^d} \|_{L^{1, \infty}(\mathbb{R}^d)} \geq (1 - \eta) \, \|M^G_* F\|_{L^{1, \infty}(\mathbb{R}^d)}. \end{equation} It is easy to see that for each $x_0, y_0 \in \mathbb{R}^d$ one has \begin{displaymath}
| M^G_* F (x_0) - M^G_* F(y_0) | \leq \sup_{|x-y|=|x_0-y_0|} \ |F(x) - F(y)|. \end{displaymath} This allows us to deduce for sufficiently large $K\in\mathbb{N}$ that \begin{align} \label{eq:44}
\| M^G_* F_K \cdot \ind{\mathbb{Z}^d\cap[-NK,NK]^d} \|_{\ell^{1, \infty}(\mathbb{Z}^d)}
\geq (1 - \eta) K^d \, \| M^G_* F \cdot \ind{[-N,N]^d} \|_{L^{1, \infty}(\mathbb{R}^d)} \end{align} with the function $F_K$ as in the previous case. From now on we may proceed in much the same way as in the previous case to establish \eqref{T5}. Once \eqref{T5} is proved we combine \eqref{T1'}, \eqref{T2} (with $p=1$), \eqref{T3'}, \eqref{eq:44} and \eqref{T5} to obtain \begin{displaymath}
\| \mathcal{M}^G_* f_K \|_{\ell^{1, \infty}(\mathbb{Z}^d)} \geq \frac{(1- \eta)^4}{1 + \eta} \, C(G,1) \, \|f_K \|_{\ell^1(\mathbb{Z}^d)} \end{displaymath} for any $K \geq \max\{K_1, K_2, K_3\}$. Since $\eta \in (0,1)$ was arbitrary, we conclude that $\mathcal{C}(G,1) \geq C(G,1)$ as desired. This completes the proof of the first part of Theorem \ref{thm:T}. We now turn our attention to the case $G = B^\infty$ and show that the last inequality can be reversed.
\paragraph{\bf Case 3, when $p =1$ and $G = B^\infty=[-1, 1]^d$} Given $\eta \in (0,1)$ consider $f \in \ell^1(\mathbb{Z}^d)$ and $\lambda > 0$ such that $f \geq 0$ and \begin{displaymath}
\lambda \, |\{ n \in \mathbb{Z}^d : \mathcal{M}^{B^\infty}_* f(n) > \lambda \}| \geq (1- \eta) \, \mathcal{C}(B^\infty, 1) \, \| f \|_{\ell^1(\mathbb{Z}^d)}. \end{displaymath} Let $Q_\delta(x):=x+B^\infty_{\delta/2}$ denote the cube centered at $x\in\mathbb{R}^d$ and of side length $\delta>0$. For $\delta\in(0, 1)$, we set \begin{displaymath}
F_\delta(x) := \sum_{n \in \mathbb{Z}^d} f(n) \, |Q_\delta(n)|^{-1}\ind{Q_\delta(n)}(x). \end{displaymath}
Clearly $\|F_\delta\|_{L^1(\mathbb{R}^d)} = \| f \|_{\ell^1(\mathbb{Z}^d)}$; this is the place where it is essential that we are working with $p=1$.
We show that for each $n \in \mathbb{Z}^d$ one has \begin{equation} \label{eq:Ta} M^{B^\infty}_* F_\delta(x) \geq \mathcal{M}^{B^\infty}_* f(n), \qquad x \in Q_{1-\delta}(n). \end{equation} To prove \eqref{eq:Ta} we note that \[ \mathcal{M}^{B^\infty}_* f(n)=\sup_{t>0}\mathcal{M}^{B^\infty}_t f(n)=\sup_{N\in\mathbb{N}_0}\mathcal{M}^{B^\infty}_N f(n) \] since
$|B_t^{\infty}\cap\mathbb{Z}^d|=|B_{\lfloor t\rfloor}^{\infty}\cap\mathbb{Z}^d|= (2\lfloor t\rfloor+1)^d$, where $\mathcal{M}^{B^\infty}_0 f:=f$.
If $N=0$, then for each $n \in \mathbb{Z}^d$ and $x \in Q_{1-\delta}(n)$ we obtain \begin{align*} \mathcal{M}^{B^\infty}_0 f(n)=f(n)= \int_{Q_{1}(x)}F_{\delta}(y)\mathrm{d} y \le M^{B^\infty}_* F_\delta(x). \end{align*}
If $N \in \mathbb{N}$, then $n+ B_N^{\infty}\subseteq x+B_{N+\frac{1-\delta}{2}}^{\infty}$ for each $n \in \mathbb{Z}^d$ and $x \in Q_{1-\delta}(n)$. Therefore, \begin{align*} \mathcal{M}^{B^\infty}_N f(n)&=
\frac{1}{|n+ B_N^{\infty}\cap\mathbb{Z}^d|} \sum_{k\in n+ B_N^{\infty}\cap\mathbb{Z}^d}f(k)\\ &\le
\frac{1}{|n+ B_N^{\infty}\cap\mathbb{Z}^d|} \sum_{k\in\mathbb{Z}^d }\int_{Q_{\delta}(k)}\ind{x+B_{N+\frac{1-\delta}{2}}^{\infty}}(k)f(k) |Q_{\delta}(k)|^{-1}\ind{Q_{\delta}(k)}(y)\mathrm{d} y\\
&\le \frac{1}{|x+B_{N+1/2}^{\infty}|}\int_{x+B_{N+1/2}^{\infty}} F_{\delta}(y)\,\mathrm{d} y \le M^{B^\infty}_* F_\delta(x), \end{align*}
since $|n+ B_N^{\infty}\cap\mathbb{Z}^d|=(2N+1)^d=|x+B_{N+1/2}^{\infty}|$. Hence \eqref{eq:Ta} follows, and consequently we obtain \begin{align*}
\lambda \, |\{ x \in \mathbb{R}^d : M^{B^\infty}_* F_\delta(x) > \lambda \}|
& \geq \lambda \, (1-\delta)^d \, |\{ n \in \mathbb{Z}^d : \mathcal{M}^{B^\infty}_* f(n) > \lambda \}| \\
& \geq (1-\eta) \, (1-\delta)^d \, \mathcal{C}(B^\infty, 1) \, \|F_\delta \|_{L^1(\mathbb{R}^d)}. \end{align*} Since $\eta$ and $\delta$ were arbitrary, we conclude that $C(B^\infty, 1) \geq \mathcal{C}(B^\infty, 1)$. Finally, combining this inequality with the previous result we obtain $C(B^\infty, 1) = \mathcal{C}(B^\infty, 1)$ and the proof of Theorem \ref{thm:T} is completed. \end{proof}
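As a purely illustrative aside (not part of the argument), the elementary counting identity $|B_t^{\infty}\cap\mathbb{Z}^d|=(2\lfloor t\rfloor+1)^d$ used in Case 3 can be confirmed by brute force in low dimensions; the following Python sketch does exactly that.

```python
import itertools
import math

def cube_lattice_count(t, d):
    # Brute-force count of integer points in B^infty_t = [-t, t]^d.
    rng = range(-math.floor(t) - 1, math.floor(t) + 2)
    return sum(1 for x in itertools.product(rng, repeat=d)
               if max(abs(c) for c in x) <= t)

# Verify |B^infty_t cap Z^d| = (2*floor(t) + 1)^d for a few t and d.
for d in (1, 2, 3):
    for t in (0.4, 1.0, 2.7):
        assert cube_lattice_count(t, d) == (2 * math.floor(t) + 1) ** d
```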
\section{Discrete maximal operator for $B^q$ and large scales: Proof of Theorem \ref{thm:1}} \label{sec:Bq}
The purpose of this section is to prove Theorem \ref{thm:1}. We will follow the ideas from \cite[Section 5]{BMSW4}. From now on we assume that $q\in(2,\infty)$ is fixed. By $Q := B^\infty_{1/2}$ we mean the unit cube centered at the origin.
\begin{lemma}\label{lem:1} Let
$\tilde{C}_q := \max \big\{ \sum_{k=1}^\infty \big| {q \choose 2k} \big| \, 2^{-2k}, \big(\frac{3}{2}\big)^q \big\}$. If $N\geq d^{\frac{1}{2}+\frac{1}{q}}$, then \begin{equation*}
|B_N^q\cap\mathbb{Z}^d|\leq 2 |B_{N_1}^q|\leq 2 e^{\frac{\tilde{C}_q}{q}}|B_N^q|, \end{equation*} where $N_1:=N\big(1+ d^{-1} \tilde{C}_q \big)^{\frac{1}{q}}$. \end{lemma}
\begin{proof} Since $q > 2$, by Hanner's inequality for $x\in B_N^q\cap\mathbb{Z}^d$ and $y\in Q$ we obtain \begin{equation}\label{eq:Han_ineq} \norm{x+y}^q_q +\norm{x-y}^q_q\leq (\norm{x}_q+\norm{y}_q)^q
+|\norm{x}_q-\norm{y}_q|^q. \end{equation} Moreover, for every $x\in B_N^q$ we have
$|\{y\in Q : \norm{x+y}_q> \norm{x-y}_q\}| = |\{y\in Q : \norm{x+y}_q< \norm{x-y}_q\}|$, which implies
$|\{y\in Q : \norm{x+y}_q\leq \norm{x-y}_q\}|\geq 1/2$. Hence, \begin{align} \label{eq:3} \begin{split}
|B_N^q\cap \mathbb{Z}^d|=\sum_{x\in B_N^q\cap\mathbb{Z}^d} 1&\leq 2\sum_{x\in B_N^q\cap\mathbb{Z}^d}\int_Q \ind{\{y\in\mathbb{R}^d : \norm{x+y}_q\leq \norm{x-y}_q\}}(z)\,{\rm d}z\\ &\leq 2\sum_{x\in B_N^q\cap\mathbb{Z}^d}\int_Q \ind{\{y\in\mathbb{R}^d : 2\norm{x+y}_q^q\leq (\norm{x}_q+\norm{y}_q)^q+\abs{\norm{x}_q-\norm{y}_q}^q\}}(z)\,{\rm d}z \end{split} \end{align}
We shall estimate
$(\norm{x}_q+\norm{y}_q)^q+|\norm{x}_q-\norm{y}_q|^q$ for $x\in B_N^q\cap\mathbb{Z}^d$ and $y\in Q$. Let us first assume that $\norm{x}_q\geq 2 \norm{y}_q$. By Newton's generalized binomial theorem we have \begin{align*}
(\norm{x}_q+\norm{y}_q)^q+|\norm{x}_q-\norm{y}_q|^q &=\sum_{k=0}^\infty {q \choose k} \norm{y}_q^k \norm{x}_q^{q-k}+\sum_{k=0}^\infty {q \choose k} (-1)^k\norm{y}_q^k \norm{x}_q^{q-k}\\ &=2\sum_{k=0}^\infty {q \choose 2k} \norm{y}_q^{2k} \norm{x}_q^{q-2k}\\
& \leq 2 \bigg( \norm{x}_q^q + \norm{x}_q^{q-2} \norm{y}_q^2 \sum_{k=1}^\infty \bigg| {q \choose 2k} \bigg| \, 2^{-2k+2} \bigg) \\
& \leq 2 \bigg( N^q + N^{q-2} d^{\frac{2}{q}} \sum_{k=1}^\infty \bigg| {q \choose 2k} \bigg| \, 2^{-2k} \bigg) \\
& \leq 2N^q \bigg( 1 + d^{-1} \sum_{k=1}^\infty \bigg| {q \choose 2k} \bigg| \, 2^{-2k} \bigg). \end{align*} On the other hand, if $\norm{x}_q\leq 2\norm{y}_q$, then \begin{equation*}
(\norm{x}_q+\norm{y}_q)^q+|\norm{x}_q-\norm{y}_q|^q \leq 2\big(3\norm{y}_q \big)^q\leq 2 \bigg(\frac{3}{2}\bigg)^q d \leq 2 N^q d^{-\frac{q}{2}} \bigg(\frac{3}{2}\bigg)^q \leq 2 N^q d^{-1} \bigg(\frac{3}{2}\bigg)^q. \end{equation*} Combining the above gives \begin{equation}\label{eq:4}
(\norm{x}_q+\norm{y}_q)^q+|\norm{x}_q-\norm{y}_q|^q\leq 2N^q\big(1+ d^{-1} \tilde{C}_q\big)=2N_1^q. \end{equation} Finally, by \eqref{eq:3} and \eqref{eq:4} we obtain \begin{align*} \abs{B_N^q\cap \mathbb{Z}^d}&\leq 2\sum_{x\in B_N^q\cap\mathbb{Z}^d}\int_Q \ind{\{y\in\mathbb{R}^d : \norm{x+y}_q\leq N(1+d^{-1} \tilde{C}_q )^{1/q}\}}(z)\,{\rm d}z\\ &=2\sum_{x\in B_N^q\cap\mathbb{Z}^d}\int_Q \ind{B^q_{N(1+d^{-1}\tilde{C}_q)^{1/q}}}(x+z)\,{\rm d}z\\
&\leq 2|B_{N_1}^q|\\
&=2 \big(1+d^{-1} \tilde{C}_q \big)^{\frac{d}{q}} |B_N^q|\\
&\leq 2 e^{\frac{\tilde{C}_q}{q}}|B_N^q|, \end{align*} which finishes the proof. \end{proof}
We remark that Lemma \ref{lem:1} also holds for $q = 2$ without any changes, with $\tilde{C}_2=9/4$, so that $e^{\tilde{C}_2/2}=e^{9/8}$.
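As a sanity check (illustrative only), the value $\tilde{C}_2=9/4$ can be recomputed numerically: for $q=2$ the series collapses to the single term $\big|\binom{2}{2}\big|2^{-2}=1/4$, so the maximum is attained by $(3/2)^2$. In the Python sketch below, `gen_binom` implements the generalized binomial coefficient and the geometrically convergent series is truncated.

```python
import math

def gen_binom(q, k):
    # Generalized binomial coefficient C(q, k) for real q.
    num = 1.0
    for i in range(k):
        num *= (q - i)
    return num / math.factorial(k)

def C_tilde(q, terms=60):
    # max{ sum_{k>=1} |C(q, 2k)| 4^{-k}, (3/2)^q }, with the series
    # truncated after `terms` summands (it converges geometrically).
    s = sum(abs(gen_binom(q, 2 * k)) * 4.0 ** (-k) for k in range(1, terms))
    return max(s, 1.5 ** q)

# For q = 2 the series equals 1/4, so C_tilde(2) = max(1/4, 9/4) = 9/4.
assert abs(C_tilde(2) - 9 / 4) < 1e-12
```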
\begin{lemma}\label{lem:2} Let $ a >0$. Take $N\geq a d$, $0\leq j\leq N-1$, and $x\in\mathbb{R}^d$ such that \begin{equation}\label{eq:1} N\bigg(1+\frac{j}{N}\bigg)^{\frac{1}{q}}\leq \norm{x}_q\leq N\bigg(1+\frac{j+1}{N}\bigg)^{\frac{1}{q}}. \end{equation} If $d\in\mathbb{N}$ is sufficiently large (depending only on $ a $ and $q$), then \begin{equation*} \abs{Q\cap(B_N^q-x)}=\abs{\{y\in Q : x+y\in B_N^q\}}\leq e^{-\frac{7}{128q^2} j^2}. \end{equation*} \end{lemma}
\begin{proof} Note that the claim is trivial for $j=0$. Therefore, let $1\leq j\leq N-1$ and take $x\in\mathbb{R}^d$ such that \eqref{eq:1} holds. Assume that $y\in Q$ and $x+y\in B_N^q$. Then \begin{equation} \label{eq:45} \norm{x}_q^q -\norm{x+y}_q^q \geq N^q\bigg(1+\frac{j}{N}\bigg) -N^q=jN^{q-1}. \end{equation} We shall also estimate the expression on the left hand side of the above inequality from above. Let $I_i:=\abs{x_i}^q-\abs{x_i+y_i}^q$, then \begin{equation*} \norm{x}_q^q-\norm{x+y}_q^q=\sum_{i=1}^d I_i. \end{equation*} For $\abs{x_i}>2 \abs{y_i}$ by Newton's generalized binomial theorem we have \begin{align*} I_i=\abs{x_i}^q-\abs{\abs{x_i}+\mathrm{sgn}(x_i)y_i}^q&=\abs{x_i}^q -\abs{\sum_{k=0}^\infty {q \choose k} \abs{x_i}^{q-k} \mathrm{sgn}(x_i)^k y_i^k}\\ &\leq \abs{x_i}^q- \abs{\abs{x_i}^q+q\abs{x_i}^{q-1}\mathrm{sgn}(x_i)y_i}+\sum_{k=2}^\infty \abs{{q \choose k}} \abs{x_i}^{q-k} \abs{y_i}^k\\ &\leq -q\abs{x_i}^{q-1}\mathrm{sgn}(x_i)y_i+\sum_{k=2}^\infty \abs{{q \choose k}} \abs{x_i}^{q-k} \abs{y_i}^k\\ &\leq -q\abs{x_i}^{q-1}\mathrm{sgn}(x_i)y_i+ \abs{x_i}^{q-2} \abs{y_i}^2 \sum_{k=2}^\infty \abs{{q \choose k}} 2^{2-k}. \end{align*} On the other hand, if $\abs{x_i}\leq 2\abs{y_i}\le 1$, and $A_q:=1 + \big(\frac{3}{2}\big)^q+q$, then \begin{align*} I_i\leq (2 \abs{y_i})^q +(3\abs{y_i})^q\leq 1 + \bigg(\frac{3}{2}\bigg)^q \leq -q\abs{x_i}^{q-1}\mathrm{sgn}(x_i)y_i+A_q. \end{align*}
Let $\tilde{x}=\big(\abs{x_1}^{q-1}\mathrm{sgn}(x_1),\ldots,\abs{x_d}^{q-1}\mathrm{sgn}(x_d)\big).$ Combining the above we obtain by H\"{o}lder's inequality \begin{align*} \norm{x}_q^q -\norm{x+y}_q^q&\leq -q\langle \tilde{x},y\rangle +A_qd +\sum_{k=2}^\infty \abs{{q \choose k}} 2^{2-k} \sum_{i=1}^d \abs{x_i}^{q-2} \abs{y_i}^2\\ &\leq -q\langle \tilde{x},y\rangle +A_qd +\sum_{k=2}^\infty \abs{{q \choose k}} 2^{2-k} \norm{x}_q^{q-2} \norm{y}_q^{2}\\ &\leq -q\langle \tilde{x},y\rangle +A_qd + N^{q-2}d^{\frac{2}{q}}\sum_{k=2}^\infty \abs{{q \choose k}} 2^{-k+1}\\ &\leq -q\langle \tilde{x},y\rangle +\bigg(A_q + \sum_{k=2}^\infty \abs{{q \choose k}} 2^{-k+1} \bigg) N^{q-2}d^{\frac{2}{q}}; \end{align*} to ensure the validity of the last inequality above we take $d$ so large that $ad\ge d^{1/q}.$
By \eqref{eq:45} and the previous display, since $q>2$, we obtain \begin{equation}\label{eq:7} -q\langle \tilde{x},y\rangle\geq jN^{q-1}-\bigg(A_q + \sum_{k=2}^\infty \abs{{q \choose k}} 2^{-k+1} \bigg) N^{q-2}d^{\frac{2}{q}} \geq \frac{1}{2} jN^{q-1}, \end{equation} provided that $d$ is so large that \begin{equation} \label{eq:lem:2:1} \bigg(A_q + \sum_{k=2}^\infty \abs{{q \choose k}} 2^{-k+1} \bigg) a^{-1}d^{-1+\frac{2}{q}}\le \frac12. \end{equation} Note that \begin{equation*} \norm{\tilde{x}}_2=\norm{x}_{2q-2}^{q-1}\leq \norm{x}_q^{q-1}\leq 2 N^{q-1}. \end{equation*} Hence, for $y\in Q$ and $x+y\in B_N^q$ we obtain \begin{equation*} \left\langle -\frac{\tilde{x}}{\norm{\tilde{x}}_2},y \right\rangle\geq \frac{j}{4q}. \end{equation*} We know from \cite[Inequality (5.6)]{BMSW4} that for every unit vector $z\in\mathbb{R}^d$ and for every $s>0$ we have \begin{align*}
|\{y\in Q: \langle z, y\rangle\ge s\}|\le e^{-\frac78s^2}. \end{align*} Applying this inequality for $z=-\frac{\tilde{x}}{\norm{\tilde{x}}_2}$ and $s=j/(4q)$ we arrive at \begin{align*} \abs{\{y\in Q : x+y\in B_N^q\}}\leq
\Big|\Big\{y\in Q : \bigg\langle -\frac{\tilde{x}}{\norm{\tilde{x}}_2},y \bigg\rangle\geq \frac{j}{4q}\Big\}\Big| \leq e^{-\frac{7j^2}{128q^2} }. \end{align*} This concludes the proof of the lemma. \end{proof}
A comment is in order here. For $q=2$, a version of Lemma \ref{lem:2} holds for all $d\in \mathbb{N}$ under the restriction \begin{equation}\label{eq:8}
a \geq 2 \, \bigg( 1 + \bigg(\frac{3}{2}\bigg)^2+2 + \sum_{k=2}^\infty \abs{{2 \choose k}} 2^{-k+1} \bigg) =2(1+(3/2)^2+2+1/2)=\frac{23}{2}. \end{equation} The above ensures that \eqref{eq:lem:2:1} (and hence also \eqref{eq:7}) is satisfied for all $d \in \mathbb{N}$.
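The arithmetic behind \eqref{eq:8} can be rechecked exactly (an illustrative aside): for $q=2$ one has $A_2=1+(3/2)^2+2=21/4$, and the tail $\sum_{k\ge2}\big|\binom{2}{k}\big|2^{-k+1}$ reduces to the single term $k=2$, namely $1/2$, since $\binom{2}{k}=0$ for $k\ge3$.

```python
from fractions import Fraction

# For q = 2 the only nonzero tail term is k = 2: |C(2,2)| * 2^{-1} = 1/2.
A2 = 1 + Fraction(3, 2) ** 2 + 2           # A_2 = 21/4
tail = Fraction(1, 2)                      # the binomial tail sum
assert 2 * (A2 + tail) == Fraction(23, 2)  # the threshold in (eq:8)
```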
\begin{lemma}\label{lem:3} Let $a>0$. If $d\in\mathbb{N}$ is sufficiently large (depending on $a$ and $q$) then there exists a constant $C_q'>0$ depending only on $q$ such that for all $N\ge ad$ one has \begin{equation*}
|B_N^q|\leq C_q' |B_N^q\cap \mathbb{Z}^d|. \end{equation*} \end{lemma}
\begin{proof} For each $a>0$ we take $J := J_{q,a} \in \mathbb{N}$ satisfying \begin{align}\label{eq:def_J} \sum_{j\ge J} e^{-\frac{7 j^2}{128q^2}}e^{2(j+1)/(aq)}\le \frac{1}{4 }e^{-\frac{\tilde{C}_q}{q}}, \end{align} where $\tilde{C}_q>0$ is the constant specified in Lemma \ref{lem:1}. It suffices to show that for every $M \geq \frac{a d}{2} $ we have \begin{equation}\label{eq:2}
|B^q_M|\le 2|B^q_{M(1+J/M)^{1/q}}\cap\mathbb{Z}^d|. \end{equation} Indeed, take $d$ so large that $ad\ge 2 J.$ Then for $N \geq a d$ we can find $M\in[\frac{ad}{2},N],$ such that $N=M(1+J/M)^{1/q}$ and thus \eqref{eq:2} gives \begin{align*}
|B^q_N|= (1+J/M)^{d/q}|B^q_M|\leq 2e^{2J/(aq)}|B^q_{N}\cap\mathbb{Z}^d|. \end{align*}
Our aim now is to prove \eqref{eq:2}. Define $U_j=\big\{x\in\mathbb{R}^d: M\big(1+\frac{j}{M}\big)^{1/q}< \norm{x}_q\leq M\big(1+\frac{(j+1)}{M}\big)^{1/q}\big\}$ for $j \geq 0$ and observe that \begin{align*}
|B^q_M| &=\sum_{x\in\mathbb{Z}^d}\int_{Q}\ind{B^q_M}(x+y){\rm d}y\\ &=\sum_{x\in B^q_M\cap\mathbb{Z}^d}\int_{Q}\ind{B^q_M}(x+y){\rm d}y+\sum_{0\le j<J}\sum_{x\in U_j\cap\mathbb{Z}^d}\int_{Q}\ind{B^q_M}(x+y){\rm d}y\\ &\hspace{4.5cm} + \sum_{j\ge J}\sum_{x\in U_j\cap\mathbb{Z}^d}\int_{Q}\ind{B^q_M}(x+y){\rm d}y\\
&\le |B^q_{M(1+J/M)^{1/q}}\cap\mathbb{Z}^d|+\sum_{j=J}^{M-1}\sum_{x\in U_j\cap\mathbb{Z}^d}|Q\cap (B^q_M-x)|. \end{align*}
Indeed, if $j\ge M$, then $|Q\cap (B^q_M-x)|=0$ holds for each $x\in U_j$; here we take $d$ so large that $ad(2^{1/q}-1)\ge d^{1/q}$. Clearly $M \geq \frac{a d}{2} \geq d^{\frac{1}{2} + \frac{1}{q}},$ if $d$ is large enough. Now applying Lemma \ref{lem:2} (with $M$ in place of $N$ and $a/2$ in place of $a$) together with Lemma \ref{lem:1} we see that for sufficiently large $d$ one has \begin{align*}
\sum_{j=J}^{M-1}\sum_{x\in U_j\cap\mathbb{Z}^d}|Q\cap (B^q_M-x)|&\leq \sum_{j=J}^{M-1}\sum_{x\in U_j\cap\mathbb{Z}^d} e^{-\frac{7 j^2}{128q^2}}\\
&\leq \sum_{j=J}^{M-1}\big|B^q_{M(1+\frac{j+1}{M})^{1/q}}\cap\mathbb{Z}^d\big| e^{-\frac{7 j^2}{128q^2}}\\ &\leq 2e^{\frac{\tilde{C}_q}{q}} \sum_{j=J}^{\infty}\abs{B^q_{M}}\bigg(1+\frac{j+1}{M}\bigg)^{d/q} e^{-\frac{7 j^2}{128q^2}}\\ &\leq 2e^{\frac{\tilde{C}_q}{q}} \abs{B^q_{M}} \sum_{j=J}^{\infty} e^{2(j+1)/(aq)} e^{-\frac{7 j^2}{128q^2}}. \end{align*} Hence, by \eqref{eq:def_J} we have \begin{equation*}
|B^q_M|\leq |B^q_{M(1+J/M)^{1/q}}\cap \mathbb{Z}^d|+\frac{1}{2}|B^q_M|, \end{equation*} which finishes the proof. \end{proof}
We make a similar remark as below Lemma \ref{lem:2}. For $q=2$, the claim of Lemma \ref{lem:3} holds for all $d\in\mathbb{N}$ provided that $a>0$ is large enough. In this case it suffices to take \[ a\geq \max(23,2J_{2,23}),\] where $J:=J_{2,23}$ is a non-negative integer satisfying \[ \sum_{j\ge J} e^{-\frac{7 j^2}{512}}e^{2(j+1)/23}\le \frac{1}{4 }e^{-\frac{9}{8}}. \] Then the implied constant $C'_2$ is equal to $2 e^{J/23}.$ Finally, in the proof of Theorem \ref{thm:1} below it suffices to take $N \geq C'_2 d$. We leave the details to the interested reader.
\begin{proof}[Proof of Theorem \ref{thm:1}] Fix $p\in(1,\infty)$. It is well known that for any $q\in(2,\infty)$ and any $d\in\mathbb{N}$ one has \begin{equation*}
\big\|\sup_{N >0}|\mathcal{M}_N^{B^q}f|\big\|_{\ell^p(\mathbb{Z}^d)}\leq C_d(p,q) \|f\|_{\ell^p(\mathbb{Z}^d)}, \end{equation*} with a constant $C_d(p,q)$ which depends on the dimension $d.$ Thus we may assume that the dimension $d$ is large enough, in fact larger than any fixed number $d_0$ (which may depend on $q\in (2,\infty)$ and $a>0$).
Let $f \colon \mathbb{Z}^d\to \mathbb{C}$ be a non-negative function. Define $F \colon \mathbb{R}^d\to \mathbb{C}$ by setting \[ F(x):=\sum_{y\in\mathbb{Z}^d}f(y)\ind{y+Q}(x). \]
Clearly $\|F\|_{L^p(\mathbb{R}^d)}=\|f\|_{\ell^p(\mathbb{Z}^d)}$.
Fix $a > 0$ and let $\tilde{C}_q>0$ be the constant specified in Lemma \ref{lem:1}. Take $N\ge a d$ and define $N_1:=N\big(1+d^{-1} \tilde{C}_q \big)^{1/q}$. Observe that $ad \geq d^{\frac{1}{2}+\frac{1}{q}}$ when $d$ is large enough. Hence, by \eqref{eq:4} and \eqref{eq:Han_ineq}, for $z\in Q$ and $y\in B^q_{N}$ we have the estimate \[ \norm{y+z}_q\le N_1 \] on the set $\{z\in Q : \norm{y+z}_q\le \norm{y-z}_q\}$, and the Lebesgue measure of this set is at least $1/2$. Then by Lemma \ref{lem:3} for sufficiently large dimension $d$ and all $x\in\mathbb{Z}^d$ we obtain \begin{align} \label{eq:5} \begin{split} \mathcal M_N^{B^q}f(x)&
=\frac1{|B^q_N\cap\mathbb{Z}^d|}\sum_{y\in B^q_N\cap\mathbb{Z}^d}f(x+y)\ind{B^q_N}(y)\\
&=\frac1{|B^q_N\cap\mathbb{Z}^d|}\sum_{y\in B^q_N\cap\mathbb{Z}^d}f(x+y)\int_Q \ind{B^q_N}(y)\,{\rm d}z\\
&\lesssim_q \frac1{|B^q_N|}\sum_{y\in\mathbb{Z}^d}f(x+y)\int_{Q}\ind{B^q_{N_1}}(y+z) \, {\rm d}z\\
&=\frac1{|B^q_N|}\sum_{y\in\mathbb{Z}^d}f(y)\int_{x+B^q_{N_1}}\ind{y+Q}(z) \, {\rm d}z\\
&=\frac1{|B^q_N|}\int_{x+B^q_{N_1}}F(z) \, {\rm d}z\\
&=\bigg(\frac{N_1}{N}\bigg)^d\frac{1}{|B^q_{N_1}|}\int_{B^q_{N_1}}F(x+z) \, {\rm d}z\\
&\lesssim_q\frac{1}{|B^q_{N_1}|}\int_{B^q_{N_1}}F(x+z) \, {\rm d}z\\ &=M_{N_1}^{B^q}F(x). \end{split} \end{align}
Let us now take $N_2:=N_1\big(1+d^{-1} \tilde{C}_q \big)^{1/q}.$ Similarly as above, for $y\in Q$ and $z\in B^q_{N_1}$ we have \begin{equation*} \norm{y+z}_q\leq N_2 \end{equation*} on the set $\{y\in Q : \norm{y+z}_q\le \norm{y-z}_q\}$, and the Lebesgue measure of this set is at least $1/2$. Therefore, Fubini's theorem leads to \begin{align} \label{eq:6} \begin{split}
M_{N_1}^{B^q}F(x)&=\frac{1}{|B^q_{N_1}|}\int_{B^q_{N_1}}F(x+z) \, {\rm d}z\\
&\leq \frac2{|B^q_{N_1}|} \int_{\mathbb{R}^d}F(x+z)\ind{B^q_{N_1}}(z)\int_{Q}\ind{B^q_{N_2}}(z+y) \, {\rm d}y{\rm d}z\\
&=2\Big(\frac{N_2}{N_1}\Big)^d \int_{Q} \frac{1}{|B^q_{N_2}|} \int_{\mathbb{R}^d}F(x+z-y)\ind{B^q_{N_1}}(z-y)\ind{B^q_{N_2}}(z) \, {\rm d}z{\rm d}y\\
&\lesssim \int_{x+Q}\frac1{|B^q_{N_2}|}\int_{\mathbb{R}^d}F(z-y)\ind{B^q_{N_2}}(z) \, {\rm d}z{\rm d}y\\ &= \int_{x+Q}M_{N_2}^{B^q}F(y) \, {\rm d}y. \end{split} \end{align} Denote $C_{d,q} := a \, \big(1+d^{-1}\tilde{C}_q\big)^{2/q} d.$ Combining \eqref{eq:5} with \eqref{eq:6} and applying H\"{o}lder's inequality, we obtain \begin{align*}
\big\|\sup_{N\ge a d}|\mathcal M_N^{B^q}f|\big\|_{\ell^p(\mathbb{Z}^d)}^p
&\lesssim_q\sum_{x\in\mathbb{Z}^d} \Big(\int_{x+Q} \sup_{N\ge C_{d,q}}|M_N^{B^q}F(y)| \, {\rm d}y\Big)^p\\
&\leq \sum_{x\in\mathbb{Z}^d} \int_{x+Q}\sup_{N\ge C_{d,q}}\big| M_N^{B^q}F(y)\big|^p {\rm d}y \\
&=\big\|\sup_{N\ge C_{d,q}}\big|M_N^{B^q}F\big|\big\|_{L^p(\mathbb{R}^d)}^p. \end{align*} By the dimension-free $L^p(\mathbb{R}^d)$ boundedness of the maximal operator $M_*^{B^q}$ (proved in \cite{Mul1}), we obtain \begin{equation*}
\big\|\sup_{N\ge C_{d,q}}\big|M_N^{B^q}F\big|\big\|_{L^p(\mathbb{R}^d)}^p\lesssim_q\|F\|_{L^p(\mathbb{R}^d)}^p=\|f\|_{\ell^p(\mathbb{Z}^d)}^p. \end{equation*} This proves Theorem \ref{thm:1}. \end{proof}
\section{Decrease dimension trick: Proof of Theorem \ref{thm:10}} \label{sec:4} We now prove Theorem \ref{thm:10} by adapting the methods introduced in \cite[Section 2]{BMSW2} to the case of $q$-balls. In fact, this section is a technical elaboration of \cite[Section 2]{BMSW2}; however, we have decided to provide the necessary details because of the intricate technicalities involved. Throughout this section we will abbreviate
$\|\cdot\|_{\ell^p(\mathbb{Z}^d)}$ to $\|\cdot\|_{\ell^p}$ and
$\|\cdot\|_{L^p(\mathbb{R}^d)}$ to $\|\cdot\|_{L^p}$. We fix $q \in [2, \infty)$ and recall that $\mathcal M_N^{B^q}$ is the operator whose multiplier is given by $$ \mathfrak m^{B^q}_N(\xi):
=\frac{1}{|B^q_N\cap\mathbb{Z}^d|} \sum_{x\in B^q_N\cap\mathbb{Z}^d}e^{2\pi i \xi\cdot x}, \qquad \xi\in\mathbb{T}^d\equiv[-1/2, 1/2)^d. $$ For each $\xi\in\mathbb{T}^d$ we will write
$\|\xi\|^2:=\|\xi_1\|^2+\ldots+\|\xi_d\|^2$, where
$\|\xi_j\|=\operatorname{dist}(\xi_j, \mathbb{Z})$ for any $j\in\mathbb{N}_d$. Since we identify $\mathbb{T}^d$ with $[-1/2, 1/2)^d$, it is easy to see that the norm
$\|\cdot\|$ coincides with the Euclidean norm $|\cdot|_2$ restricted to $[-1/2, 1/2)^d$. It is also very well known that $\|\eta\|\simeq|\sin(\pi\eta)|$ for every $\eta\in\mathbb{T}$, since
$|\sin(\pi\eta)|=\sin(\pi\|\eta\|)$ and for $0\le|\eta|\le 1/2$ we have \begin{align} \label{eq:103}
2|\eta|\le|\sin(\pi\eta)|\le \pi|\eta|. \end{align}
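The two-sided bound \eqref{eq:103} is standard: concavity of $\sin$ on $[0,\pi/2]$ gives the chord bound $2|\eta|\le|\sin(\pi\eta)|$, while $|\sin(\pi\eta)|\le\pi|\eta|$ is the tangent bound. As an illustrative numerical check (not part of the argument):

```python
import math

# Check 2|eta| <= |sin(pi*eta)| <= pi*|eta| on a grid of eta in [-1/2, 1/2].
for i in range(-500, 501):
    eta = i / 1000
    s = abs(math.sin(math.pi * eta))
    assert 2 * abs(eta) <= s + 1e-12
    assert s <= math.pi * abs(eta) + 1e-12
```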
The proof of Theorem \ref{thm:10} is based on Proposition \ref{prop:1} and Proposition \ref{prop:2}, which give respectively estimates of the multiplier $\mathfrak m^{B^q}_N(\xi)$ at the origin and at infinity. These estimates will be described in terms of the proportionality constant \begin{align*} \kappa(d, N):= \kappa_q(d, N):=Nd^{-1/q}. \end{align*}
\begin{proposition}
\label{prop:1}
For every $d, N \in \mathbb{N}$ and for every $\xi \in \mathbb{T}^d$ we have \begin{align} \label{eq:22}
|\mathfrak{m}^{B^q}_N(\xi) - 1| \leq 2 \pi^2 \kappa(d,N)^2 \| \xi \|^2. \end{align} \end{proposition}
\begin{proposition}
\label{prop:2}
There is a constant $C_q>0$ such that for any $d, N\in\mathbb{N}$ if $10\le \kappa(d, N) \leq 50qd^{1-1/q}$, then for all $\xi\in\mathbb{T}^d$ we have
\begin{align}
\label{eq:23}
|\mathfrak{m}^{B^q}_N(\xi)|\le C_q \big((\kappa(d, N)\|\xi\|)^{-1}+\kappa(d, N)^{-\frac{1}{7}}\big).
\end{align} \end{proposition}
Before we prove Proposition \ref{prop:1} and Proposition \ref{prop:2} we show how \eqref{eq:20} follows from \eqref{eq:22} and \eqref{eq:23}.
\begin{proof}[Proof of Theorem \ref{thm:10}] Since $\mathbb D_{C_1, C_2}$ is a subset of the dyadic set $\mathbb D$ we can assume, without loss of generality, that $C_1=C_2=10$. For every $t>0$ let $P_t$ be the semigroup with the multiplier \begin{align*} \mathfrak p_t(\xi):=e^{-t\sum_{i=1}^d\sin^2(\pi\xi_i)}, \qquad \xi\in\mathbb{T}^d. \end{align*} It follows from \cite{Ste1} (see also \cite{BMSW3} for more details) that for every $p\in(1, \infty)$ there is $C_p>0$ independent of $d\in\mathbb{N}$ such that for every $f\in\ell^p(\mathbb{Z}^d)$ we have \begin{align*}
\Big\|\sup_{t>0}|P_tf|\Big\|_{\ell^p}\le C_p \|f\|_{\ell^p}. \end{align*} It suffices to compare the averages $\mathcal M^{B^q}_N$ with $P_{N^2/d^{2/q}}$. Namely, the proof of \eqref{eq:20} will be completed if we obtain the dimension-free estimate on $\ell^2(\mathbb{Z}^d)$ for the following square function \[
Sf(x):=\Big(\sum_{N\in \mathbb D_{C_1, C_2}}|\mathcal M_Nf(x)-P_{N^2/d^{2/q}}f(x)|^2\Big)^{1/2}, \qquad x\in\mathbb{Z}^d. \]
By Plancherel's formula, \eqref{eq:22}, and \eqref{eq:23}, we can estimate $\|Sf\|_{\ell^2}^2$ by \begin{align*} C_q' \int_{\mathbb{T}^d}\, \bigg(\sum_{\substack{m\in\mathbb{Z}:\\10d^{1/q}\le 2^m\le 10d}}
\min\bigg\{\frac{2^{2m}}{d^{2/q}}\|\xi\|^2,\bigg(\frac{2^{2m}}{d^{2/q}}\|\xi\|^2\bigg)^{-1}\bigg\}+d^{2/7q}\sum_{\substack{m\in\mathbb{Z}:\\10d^{1/q}\le 2^m\le 10d}}
2^{-2m/7}\bigg) |\hat{f}(\xi)|^2{\rm d}\xi, \end{align*} where $ C_q'$ is a constant that depends only on $q.$ To complete the proof we note that the integral above is clearly bounded by
$C \|f\|_{\ell^2}^2$ for a suitable constant $C>0$ independent of $d$. \end{proof}
The rest of this section is devoted to proving Proposition \ref{prop:1} and Proposition \ref{prop:2}. We emphasize that in the proof of Proposition \ref{prop:1} the assumption $q\geq 2$ is crucial.
\begin{proof}[Proof of Proposition \ref{prop:1}] Since the balls $B_N^q$ are symmetric under permutations and sign changes we may repeat the proof of \cite[Proposition 2.1]{BMSW2} reaching
$$|\mathfrak{m}^{B^q}_N(\xi) - 1| \le \frac{2}{|B^q_N \cap \mathbb{Z}^d |} \sum_{j=1}^d \sin^2(\pi \xi_j) \sum_{x \in B^q_N \cap \mathbb{Z}^d} \frac{|x|_2^2}{d}.$$
Observe that $|x|_2^2\le |x|_q^2\cdot d^{1-2/q}$. Indeed, for $q=2$ this is simply an equality, and for $q>2$ it suffices to apply H\"older's inequality for the pair $(\frac{q}{2},\frac{q}{q-2})$. Consequently, \begin{displaymath}
|\mathfrak{m}^{B^q}_N(\xi) - 1| \leq \frac{2}{|B^q_N \cap \mathbb{Z}^d |} \sum_{j=1}^d \sin^2(\pi \xi_j)
\sum_{x \in B^q_N \cap \mathbb{Z}^d} \frac{|x|_q^2}{d^{2/q}} \leq 2 \pi^2 \kappa(d,N)^2 \|\xi\|^2, \end{displaymath} which gives the claim. \end{proof}
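The comparison $|x|_2^2\le d^{1-2/q}|x|_q^2$ used above (valid for $q\ge2$) can also be tested numerically on random vectors; this is illustrative only, since the H\"older argument in the proof is what matters.

```python
import random

def lp_norm(x, p):
    # The l^p norm of a finite sequence.
    return sum(abs(c) ** p for c in x) ** (1 / p)

random.seed(0)
for q in (2.0, 3.0, 7.5):           # the inequality requires q >= 2
    for d in (1, 5, 50):
        for _ in range(100):
            x = [random.uniform(-10, 10) for _ in range(d)]
            lhs = lp_norm(x, 2) ** 2
            rhs = d ** (1 - 2 / q) * lp_norm(x, q) ** 2
            assert lhs <= rhs * (1 + 1e-9)
```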
The proof of Proposition \ref{prop:2} can be deduced from a series of auxiliary lemmas, which we formulate and prove below. In what follows for an integer $1\le r\le d$ and a radius $R>0$ we let
$$B_{R}^{q,(r)}=\{x\in\mathbb{R}^r: \vert x\vert_q\le R \}\qquad\textrm{and}\qquad S_{R}^{q,(r)}=\{x\in\mathbb{R}^r:|x|_q=R \}$$ be the $r$-dimensional ball and sphere of radius $R>0,$ respectively.
\begin{lemma} \label{lem:4} For all $d, N \in \mathbb{N}$ we have \begin{displaymath}
(2 \lfloor \kappa(d,N) \rfloor + 1)^d \leq |B^q_N \cap \mathbb{Z}^d|
\leq |B^q_{N + d^{1/q}}| = \frac{2^d \Gamma(1+\frac{1}{q})^d}{\Gamma(1 + \frac{d}{q})} \Big(N + d^{1/q} \Big)^d. \end{displaymath} \end{lemma}
\begin{proof} The lower bound follows from the inclusion $[-\kappa(d, N), \kappa(d, N)]^d\cap\mathbb{Z}^d\subseteq B^q_N\cap\mathbb{Z}^d$, while the upper bound is a simple consequence of the triangle inequality. \end{proof}
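Both bounds of Lemma \ref{lem:4} can be confirmed by brute force in small dimensions (an illustrative check, using the volume formula $|B^q_R|=\frac{2^d\Gamma(1+\frac1q)^d}{\Gamma(1+\frac{d}{q})}R^d$):

```python
import itertools
import math

def ball_lattice_count(N, q, d):
    # Brute-force count of integer points with |x|_q <= N.
    M = math.floor(N)
    return sum(1 for x in itertools.product(range(-M, M + 1), repeat=d)
               if sum(abs(c) ** q for c in x) <= N ** q)

def ball_volume(R, q, d):
    # |B^q_R| = (2 Gamma(1 + 1/q))^d / Gamma(1 + d/q) * R^d.
    return (2 * math.gamma(1 + 1 / q)) ** d / math.gamma(1 + d / q) * R ** d

for q in (2.5, 4.0):
    for d in (2, 3):
        for N in (3.0, 6.0):
            cnt = ball_lattice_count(N, q, d)
            kappa = N * d ** (-1 / q)
            assert (2 * math.floor(kappa) + 1) ** d <= cnt        # lower bound
            assert cnt <= ball_volume(N + d ** (1 / q), q, d)     # upper bound
```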
\begin{lemma} \label{lem:5} Given $\varepsilon_1, \varepsilon_2\in(0, 1]$ we define for every $d, N\in\mathbb{N}$ the set \[
E=\big\{x\in B^q_N\cap\mathbb{Z}^d : |\{i\in\mathbb{N}_d : |x_i|\ge \varepsilon_2\kappa(d, N)\}|\le\varepsilon_1 d\big\}. \] If $\varepsilon_1, \varepsilon_2\in(0, 1/(10q)]$ and $\kappa(d,N)\ge10$, then we have \begin{align*}
|E| \le 2e^{-\frac{d}{10}}|B^q_N\cap\mathbb{Z}^d|. \end{align*} \end{lemma}
\begin{proof} As in \cite[Lemma 2.4]{BMSW2} we can estimate \begin{equation} \label{eq:9}
|E| \le (2\varepsilon_2\kappa(d, N)+1)^{d}+ \sum_{1\le m\le \varepsilon_1d}{{d}\choose{m}}(2\varepsilon_2\kappa(d,N)+1)^{d-m}\big|B_N^{q,(m)}\cap\mathbb{Z}^{m}\big|. \end{equation} Since $\kappa(d, N) \geq 10$ and $\varepsilon_2 \leq 1/10$, using the lower bound from Lemma \ref{lem:4} gives \begin{equation} \label{eq:10}
(2\varepsilon_2\kappa(d, N)+1)^{d} \leq e^{-\frac{16d}{19}} |B^q_N\cap\mathbb{Z}^{d}| \end{equation} as in the case $q=2$. On the other hand, the upper bound from Lemma \ref{lem:4} can be applied to estimate the sum appearing in \eqref{eq:9} by \begin{align} \label{eq:11} \sum_{1\le m\le \varepsilon_1d} \frac{d^m}{m!}(2\varepsilon_2\kappa(d,N)+1)^{d-m} \frac{2^m}{\Gamma(1+\frac{m}{q})} d^{m/q} \big( \kappa(d,N) + 1 \big)^m. \end{align}
Now observe that \begin{equation} \label{eq:12} \frac{1}{\Gamma(1+\frac{m}{q})} \leq \frac{4q e^{m/q}}{(m/(2q))^{-1+m/q}}. \end{equation} Indeed, if $m/q \geq 2$, then
\begin{displaymath} \frac{1}{\Gamma(1+\frac{m}{q})} \leq \frac{1}{\lfloor \frac{m}{q} \rfloor !} \leq \frac{e^{\lfloor \frac{m}{q} \rfloor}}{\lfloor \frac{m}{q} \rfloor^{\lfloor \frac{m}{q} \rfloor}} \leq \frac{e^{\frac{m}{q}}}{(m/(2q))^{-1+m/q}}, \end{displaymath} where in the second inequality we have used that $\frac{1}{n!} \leq \frac{e^n}{n^n}$ holds for any $n \in \mathbb{N}$. If $m/q \leq 2$, then \begin{displaymath} \frac{1}{\Gamma(1+\frac{m}{q})} \leq 2 \leq \frac{4qm}{2q} (2e)^{m/q} (m/q)^{-m/q} = \frac{4q e^{\frac{m}{q}}}{(m/(2q))^{-1+m/q}}. \end{displaymath} In the first inequality we used the fact that the gamma function is estimated from below by $1/2$ on the interval $[1,3]$.
Applying \eqref{eq:12} we get \begin{align*} \frac{d^{m+m/q} 2^m}{m! \, \Gamma(1 + \frac{m}{q})} \leq \frac{d^{m+m/q} 2^m e^m 4q e^{m/q}}{m^m (m/(2q))^{-1+m/q}} = \Big( \frac{2de}{m}\Big)^{m(1+1/q)} 2m q^{m/q} \leq \Big( \frac{2deq}{m}\Big)^{m(1+1/q)}. \end{align*} Combining this with \eqref{eq:9}, \eqref{eq:10}, and \eqref{eq:11}, and repeating the argument used in \cite[(2.16)]{BMSW2}, we arrive at \begin{align} \label{eq:14} \begin{split}
|E| & \leq e^{-\frac{16d}{19}} |B^q_N\cap\mathbb{Z}^{d}| + \sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor} \Big( \frac{2deq}{m}\Big)^{m(1+1/q)} (2\varepsilon_2\kappa(d,N)+1)^{d-m} \big( \kappa(d,N) + 1 \big)^m \\
&\leq \Big(e^{-\frac{16d}{19}}+\sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor} \Big( \frac{2deq}{m}\Big)^{m(1+1/q)}\Big(\frac{2\varepsilon_2\kappa(d,N)+1}{2 \lfloor \kappa(d,N) \rfloor + 1}\Big)^{d-m}\Big)|B^q_N\cap\mathbb{Z}^{d}| \\
& \leq \Big( e^{-\frac{16d}{19}} + e^{- \frac{72d}{95}} \sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor} e^{ \varphi(m) } \Big) |B^q_N\cap\mathbb{Z}^{d}|, \end{split} \end{align} where $\varphi(x):= (1+1/q) x \log(\frac{2eqd}{x})$, $x\ge0.$
For $x\in [0, d/(10q)]$ we have $$\varphi'(x)=(1+1/q)\log\bigg(\frac{2eqd}{x}\bigg)-(1+1/q)\ge \log\bigg(\frac{2qd}{x}\bigg)\ge \log 3.$$ Hence, $\varphi$ is increasing on $[0,\lfloor \varepsilon_1 d \rfloor],$ and arguing as in the proof of \cite[Lemma 2.4]{BMSW2} we get \begin{equation*} \sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor} e^{ \varphi(m) } \leq e^{\varphi(\frac{d}{10q})} \sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor}e^{-(\lfloor \varepsilon_1 d \rfloor-m)\log 3}\le \frac{3}{2} e^{\varphi(\frac{d}{10q})} \leq \frac{3}{2} (20eq^2)^{d/(5q)} \le \frac{3}{2} e^{\frac{2d}{5e}} e^{\frac{4d}{5q}} < \frac{3}{2} e^{\frac{3}{5}d}, \end{equation*} since $q^{1/q} \leq e^{1/e}$ and $20 < e^3$. Then \eqref{eq:14} gives \begin{displaymath}
\frac{|E|}{|B^q_N\cap\mathbb{Z}^{d}|} \leq e^{-\frac{16d}{19}} +e^{-\frac{72d}{95}}\sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor} e^{\varphi(m)} \le e^{-\frac{16d}{19}}+ \frac{3}{2}e^{-\frac{3d}{19}} \le e^{-\frac{d}{10}}\Big(e^{-\frac{141}{190}}+\frac{3}{2}\Big) \le2e^{-\frac{d}{10}}, \end{displaymath} which finishes the proof. \end{proof}
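The elementary numerical facts invoked above, namely $q^{1/q}\le e^{1/e}$, $20<e^3$, and $e^{-141/190}+3/2\le 2$, can be verified directly; a minimal check:

```python
import math

# q^{1/q} is maximized over real q > 0 at q = e, so q^{1/q} <= e^{1/e}.
assert all(q ** (1 / q) <= math.e ** (1 / math.e) for q in range(1, 101))

# 20 < e^3, used to bound (20 e q^2)^{d/(5q)} above.
assert 20 < math.e ** 3

# e^{-141/190} + 3/2 <= 2, the constant absorbed into the final estimate.
assert math.exp(-141 / 190) + 1.5 <= 2
```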
Recall that ${\rm Sym}(d)$ denotes the permutation group on $\mathbb{N}_d$. We will also write $\sigma\cdot x=(x_{\sigma(1)}, \ldots, x_{\sigma(d)})$ for every
$x\in\mathbb{R}^d$ and $\sigma\in{\rm Sym}(d)$. Let $\mathbb P$ be the uniform distribution on ${\rm Sym}(d)$, i.e. $\mathbb P(A)={|A|}/{d!}$ for every $A\subseteq{\rm Sym}(d)$ (recall that $|{\rm Sym}(d)|=d!$). The expectation $\mathbb E$ will always be taken with respect to this distribution. We will need two lemmas from \cite{BMSW2}.
\begin{lemma} \label{lem:6}
Assume that $I, J\subseteq\mathbb{N}_d$ and $|J|=r$ for some $0\le r\le d$. Then \begin{align*}
\mathbb P[\{\sigma\in{\rm Sym}(d) : |\sigma(I)\cap J|\le {r|I|}/{(5d)}\}]\le e^{-\frac{r|I|}{10d}}. \end{align*} In particular, if $\delta_1, \delta_2\in(0, 1]$ satisfy
$5\delta_2\le\delta_1$ and $\delta_1d\le |I|\le d$, then we have \begin{align*}
\mathbb P[\{\sigma\in{\rm Sym}(d) : |\sigma(I)\cap J|\le \delta_2 r\}]\le e^{-\frac{\delta_1r}{10}}. \end{align*} \end{lemma}
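Since $\sigma(I)$ is a uniformly random $|I|$-element subset of $\mathbb{N}_d$, the quantity $|\sigma(I)\cap J|$ is hypergeometric, and the first bound of Lemma \ref{lem:6} can be checked exactly for small $d$ with binomial counts. The sketch below is our own illustration, not part of \cite{BMSW2}.

```python
import math

def tail_prob(d: int, i: int, r: int) -> float:
    # |sigma(I) cap J| for uniform sigma is hypergeometric with parameters
    # (d, |I|, |J|): P[|sigma(I) cap J| = k] = C(r, k) C(d-r, i-k) / C(d, i).
    thresh = r * i / (5 * d)
    favorable = sum(
        math.comb(r, k) * math.comb(d - r, i - k)
        for k in range(min(i, r) + 1)
        if k <= thresh
    )
    return favorable / math.comb(d, i)

# Exact check of the first bound of the lemma for all subset sizes, d <= 12.
for d in range(1, 13):
    for i in range(1, d + 1):          # i = |I|
        for r in range(0, d + 1):      # r = |J|
            assert tail_prob(d, i, r) <= math.exp(-r * i / (10 * d))
```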
\begin{lemma} \label{lem:7} Assume that we have a finite decreasing sequence $0\le u_d\le\ldots\le u_2\le u_1\le(1-\delta_0)/2$ for some $\delta_0\in(0, 1)$. Suppose that $I\subseteq\mathbb{N}_d$ satisfies
$\delta_1d\le |I|\le d$ for some $\delta_1\in(0, 1]$. Then for every $J=(d_0, d]\cap\mathbb{Z}$ with $0\le d_0\le d$ we have \begin{align*} \mathbb E\Big[\exp\Big({-\sum_{j\in\sigma(I)\cap J}u_j}\Big)\Big] \le 3\exp\Big({-\frac{\delta_0\delta_1}{20}\sum_{j\in J}u_j}\Big). \end{align*} \end{lemma}
\begin{lemma} \label{lem:8} For $d, N\in\mathbb{N}$, $\varepsilon\in (0, 1/(50q)]$ and an integer $1\le r\le d$ we define \[ E=\{x\in B^q_N\cap\mathbb{Z}^d : \sum_{i=1}^rx_i^q<\varepsilon^{q+1}\kappa(d, N)^q r\}. \] If $\kappa(d, N)\ge10$, then we have \begin{align} \label{eq:37}
|E|\le 4e^{-\frac{\varepsilon r}{10}}|B^q_N\cap\mathbb{Z}^d|. \end{align} As a consequence, $B^q_N\cap\mathbb{Z}^d$ can be written as a disjoint union \begin{align} \label{eq:38} \begin{split} B^q_N\cap\mathbb{Z}^d &= \Big( \bigcup_{\varepsilon^{q+1}\kappa(d, N)^q r\leq l\le N^q}\big(B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r\big) \times\big(S^{q,d-r}_{(N^q-l)^{1/q}}\cap\mathbb{Z}^{d-r}\big) \Big) \cup E', \end{split} \end{align}
where $E' \subset \mathbb{Z}^d$ satisfies $|E'|\le 4e^{-\frac{\varepsilon r}{10}}|B^q_N\cap\mathbb{Z}^d|$. \end{lemma} \begin{proof}
Let $\delta_1\in(0, 1/(10q)]$ be such that $\delta_1\ge5\varepsilon$, and define $I_x=\{i\in\mathbb{N}_d: |x_i|\ge\varepsilon\kappa(d, N)\}$. We have $E\subseteq E_1\cup E_2$, where \begin{align*}
E_1&=\{x\in B^q_N\cap\mathbb{Z}^d : \sum_{i\in I_x\cap\mathbb{N}_r}|x_i|^q<\varepsilon^{q+1}\kappa(d, N)^q r\ \text{ and }\ |I_x|\ge\delta_1d\},\\
E_2&=\{x\in B^q_N\cap\mathbb{Z}^d : |I_x|<\delta_1d\}. \end{align*} By Lemma \ref{lem:5} (with $\varepsilon_1=\delta_1$ and $\varepsilon_2=\varepsilon$) we have
$|E_2|\le2e^{-\frac{d}{10}}|B^q_N\cap\mathbb{Z}^d|$, provided that $\kappa(d, N)\ge10$. Observe that \begin{align*}
|E_1|&=\sum_{x\in B^q_N\cap\mathbb{Z}^d}\frac{1}{d!}\sum_{\sigma\in{\rm Sym}(d)}\ind{E_1}(\sigma^{-1}\cdot x)\\ &=\sum_{x\in B^q_N\cap\mathbb{Z}^d}\mathbb P[\{\sigma\in{\rm Sym}(d) :
\sum_{i\in \sigma(I_x)\cap\mathbb{N}_r}|x_{\sigma^{-1}(i)}|^q<\varepsilon^{q+1}\kappa(d, N)^q r\ \text{ and }\ |\sigma(I_x)|\ge\delta_1d\}], \end{align*} since $I_{\sigma^{-1}\cdot x}=\sigma(I_x)$. Now by Lemma \ref{lem:6} (with $J=\mathbb{N}_r$, $\delta_2=\frac{\delta_1}{5}$ and $\delta_1$ as above) we obtain, for every $x\in B^q_N\cap\mathbb{Z}^d$ such that
$|I_x|\ge \delta_1 d$, the estimate \begin{align*} \mathbb P[\{\sigma\in&{\rm Sym}(d) :
\sum_{i\in \sigma(I_x)\cap\mathbb{N}_r}|x_{\sigma^{-1}(i)}|^q<\varepsilon^{q+1}\kappa(d, N)^q r\ \text{ and }\ |\sigma(I_x)|\ge\delta_1d\}]\\ &\le \mathbb P[\{\sigma\in{\rm Sym}(d)
: |\sigma(I_x)\cap\mathbb{N}_r|\le\delta_2r\}]\le 2e^{-\frac{\delta_1 r}{10}}, \end{align*} since \[
\{\sigma\in{\rm Sym}(d) : \sum_{i\in \sigma(I_x)\cap\mathbb{N}_r}|x_{\sigma^{-1}(i)}|^q<\varepsilon^{q+1}\kappa(d, N)^q r\ \text{
and }\ |\sigma(I_x)|\ge\delta_1d \ \text{ and
}\ |\sigma(I_x)\cap\mathbb{N}_r|>\delta_2r\}=\emptyset. \]
Thus $|E_1|\le 2e^{-\frac{\varepsilon r}{2}}|B^q_N\cap \mathbb{Z}^d|$, which proves \eqref{eq:37}. To prove \eqref{eq:38} we write \[ B^q_N\cap\mathbb{Z}^d=\bigcup_{l=0}^{N^q}\big(B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r\big) \times \big(S^{q,d-r}_{(N^q-l)^{1/q}}\cap\mathbb{Z}^{d-r}\big). \] Then we see that \[ \Big(\bigcup_{l=0}^{N^q}\big(B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r\big) \times\big(S^{q,d-r}_{(N^q-l)^{1/q}}\cap\mathbb{Z}^{d-r}\big)\Big)\cap E^{\bf c} =\Big(\bigcup_{\varepsilon^{q+1}\kappa(d, N)^q r\le l\le N^q}\big(B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r\big) \times\big(S^{q,d-r}_{(N^q-l)^{1/q}}\cap\mathbb{Z}^{d-r}\big)\Big)\cap E^{\bf c}, \] and consequently we obtain \eqref{eq:38} with some $E'\subseteq E$. This completes the proof. \end{proof}
\begin{lemma} \label{lem:9} Let $d, N\in\mathbb{N}$ and $\varepsilon\in(0, 1/(50q)]$. If $\kappa(d, N)\ge10$, then for every integer $1\le r\le d$ and every $\xi\in\mathbb{T}^d$ we have \begin{align*}
|\mathfrak m^{B^q}_N(\xi)|\le\sup_{l\ge\varepsilon^{q+1}\kappa(d, N)^q r}|\mathfrak m_{l^{1/q}}^{B^q,(r)}(\xi_1,\ldots, \xi_r)|+4e^{-\frac{\varepsilon r}{10}}, \end{align*} where \begin{align*}
\mathfrak m_{R}^{B^q,(r)}(\eta):=\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\sum_{x\in B_{R}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i \eta\cdot x},\qquad \eta\in \mathbb{T}^r, \end{align*} is the lower-dimensional multiplier with $r\in \mathbb{N}$ and $R>0$.
\end{lemma} \begin{proof}
We identify $\mathbb{R}^d\equiv\mathbb{R}^r\times\mathbb{R}^{d-r}$ and $\mathbb{T}^d\equiv\mathbb{T}^r\times\mathbb{T}^{d-r}$ and we will write $\mathbb{R}^d\ni x=(x^1, x^2)\in \mathbb{R}^r\times\mathbb{R}^{d-r}$ and $\mathbb{T}^d\ni \xi=(\xi^1, \xi^2)\in \mathbb{T}^r\times\mathbb{T}^{d-r}$ respectively.
Invoking \eqref{eq:38} one obtains
\begin{align*}
&|\mathfrak m^{B^q}_N(\xi)|\\
&\le\frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{l\ge\varepsilon^{q+1}\kappa(d, N)^q r}\;\sum_{x^2\in
S_{(N^q-l)^{1/q}}^{q,d-r}\cap\mathbb{Z}^{d-r}}|B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r|\frac{1}{|B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r|}\Big|\sum_{x^1\in
B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i \xi^1\cdot
x^1}\Big|+4e^{-\frac{\varepsilon r}{10}}\\
& \le\sup_{l\ge\varepsilon^{q+1}\kappa(d, N)^q r}|\mathfrak
m_{l^{1/q}}^{B^q,(r)}(\xi_1,\ldots, \xi_r)|+4e^{-\frac{\varepsilon r}{10}}.
\end{align*}
In the last inequality the disjointness in the decomposition from \eqref{eq:38} has been used. \end{proof}
The next two lemmas give information on the size difference between the balls $B_R^{q,(r)}$ and their shifts $z+B_R^{q,(r)}$ for $z\in \mathbb{R}^r$. \begin{lemma} \label{lem:10} Let $R\ge1$ and let $r\in\mathbb{N}$ be such that $r\le R^{\delta}$ for some $\delta\in(0, q/(q+1))$. Then for every $z\in\mathbb{R}^r$ we have \begin{align} \label{eq:40}
\big||(z+B_R^{q,(r)})\cap\mathbb{Z}^r|-|B_R^{q,(r)}|\big|\le |B_R^{q,(r)}|r^{(q+1)/q}R^{-1}e^{r^{(q+1)/q}/R}\le e|B_R^{q,(r)}|R^{-1+(q+1)\delta/q}. \end{align} \end{lemma} \begin{proof} For the proof we refer to \cite[Lemma 2.9]{BMSW2}. \end{proof}
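To illustrate \eqref{eq:40} in the simplest nontrivial case $q=2$, $r=2$ (where $|B_R^{q,(r)}|=\pi R^2$), one can count lattice points in shifted discs directly. This is a numerical illustration under our own naming, not part of the quoted proof.

```python
import math

def count_lattice_disc(R: float, z1: float, z2: float) -> int:
    # |(z + B_R) cap Z^2|: lattice points x with |x - z| <= R.
    total = 0
    for x in range(math.floor(z1 - R), math.ceil(z1 + R) + 1):
        for y in range(math.floor(z2 - R), math.ceil(z2 + R) + 1):
            if (x - z1) ** 2 + (y - z2) ** 2 <= R ** 2:
                total += 1
    return total

R, r = 50.0, 2
vol = math.pi * R ** 2                                # |B_R^{2,(2)}|
bound = vol * r ** 1.5 * math.exp(r ** 1.5 / R) / R   # first bound in (40)
for shift in [(0.0, 0.0), (0.3, -0.7), (12.5, 3.25)]:
    assert abs(count_lattice_disc(R, *shift) - vol) <= bound
```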
\begin{lemma} \label{lem:11} Let $R\ge1$ and let $r\in\mathbb{N}$ be such that $r\le R^{\delta}$ for some $\delta\in(0, q/(q+1))$. Then for every $z\in\mathbb{R}^r$ we have \begin{align*}
\big|\big(B_{R}^{q,(r)}\cap\mathbb{Z}^r\big)\triangle\big((z+B_{R}^{q,(r)})\cap\mathbb{Z}^r\big)\big|&\le 4e\big(r|z|R^{-1}e^{r|z|R^{-1}}+e^{r|z|R^{-1}}R^{-1+(q+1)\delta/q}\big)|B_R^{q,(r)}|\\
&\le 4e\big(|z|R^{-1+\delta}e^{|z|R^{-1+\delta}}+
e^{|z|R^{-1+\delta}}R^{-1+(q+1)\delta/q}\big)|B_R^{q,(r)}|. \end{align*} \end{lemma}
\begin{proof} For the proof we refer to \cite[Lemma 2.10]{BMSW2}. \end{proof}
We now recall the dimension-free estimates for the multipliers $m^{B^q_R}(\xi):=|B^q_R|^{-1}\mathcal F(\ind{B^q_R})(\xi)$ for $\xi\in\mathbb{R}^d$.
\begin{lemma}[{\cite[Lemma 2.11]{BMSW2}}] \label{lem:19} There exist constants $c_q,C>0$ independent of $d$ such that for every $R>0$ and $\xi\in\mathbb{R}^d$ we have \begin{equation*}
|m^{B^q_R}(\xi)|\leq C (c_q R d^{-1/q} |\xi|)^{-1}, \quad \text{ and }\quad
|m^{B^q_R}(\xi)-1|\leq C (c_q Rd^{-1/q} |\xi|). \end{equation*} \end{lemma}
Lemma \ref{lem:11} and Lemma \ref{lem:19} are essential in proving the following estimate.
\begin{lemma}
\label{lem:20}
There exists a constant $C_q>0$ such that for every $\delta\in(0, 1/2)$ and for all
$r\in\mathbb{N}$ and $R>0$ satisfying $1\le r\le R^{\delta}$ we have
\begin{align*}
|\mathfrak{m}^{B^q, (r)}_R(\eta)|\le C_q \big(\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}
+r\kappa(r, R)^{-\frac{1+\delta}{3}}+\big(\kappa(r, R)\|\eta\|\big)^{-1}\big)
\end{align*}
for every $\eta\in\mathbb{T}^r$. \end{lemma} \begin{proof} The inequality is obvious when $R\le 16,$ so it suffices to consider $R> 16.$
Firstly, we assume that
$\max\{\|\eta_1\|,\ldots,\|\eta_r\|\}>\kappa(r, R)^{-\frac{1+\delta}{3}}$. Let $M=\big\lfloor
\kappa(r, R)^{\frac{2-\delta}{3}}\big\rfloor$ and assume without loss of generality that $\|\eta_1\|>\kappa(r, R)^{-\frac{1+\delta}{3}}$. Then \begin{align} \label{eq:48} \begin{split}
|\mathfrak{m}^{B^q, (r)}_R(\eta)|&\le
\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\sum_{x\in
B_{R}^{q,(r)}\cap\mathbb{Z}^r}\frac{1}{M}\Big|\sum_{s=1}^Me^{2\pi i
(x+se_1)\cdot\eta}\Big|\\
&\quad +\frac{1}{M}\sum_{s=1}^M\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\Big|\sum_{x\in
B_{R}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i
x\cdot\eta}-e^{2\pi i
(x+se_1)\cdot\eta}\Big|. \end{split} \end{align} Since $\kappa(r,R)\ge 1$ we now see that \begin{align} \label{eq:49}
\frac{1}{M}\Big|\sum_{s=1}^Me^{2\pi i
(x+se_1)\cdot\eta}\Big|\le M^{-1}\|\eta_1\|^{-1}\le 2\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}. \end{align} We have assumed that $r\le R^{\delta}$, thus by Lemma \ref{lem:11}, with $z=se_1$ and $s\le M\le \kappa(r, R)^{\frac{2-\delta}{3}}$, we obtain \begin{align} \label{eq:50} \begin{split}
\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\Big| & \sum_{x\in B_{R}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i x\cdot\eta} - e^{2\pi i
(x+se_1)\cdot\eta}\Big|\\
&\le\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\big|\big(B_R^{q,(r)}\cap\mathbb{Z}^r\big)\triangle\big((se_1+B_R^{q,(r)})\cap\mathbb{Z}^r\big)\big|\\ &\le 8e\big(srR^{-1}e^{srR^{-1}}+e^{srR^{-1}}R^{-1+(q+1)\delta/q}\big)\\ &\le 16e^2\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}, \end{split} \end{align} since $srR^{-1}\le \kappa(r, R)^{\frac{2-\delta}{3}}R^{-1+\delta}\le\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}\le1$ and $R^{-1+(q+1)\delta/q} \leq R^{-1+3\delta/2}\le R^{-\frac{1}{3}+\frac{2\delta}{3}}$, and for $R>16$ we also have $$
|B_{R}^{q,(r)}\cap\mathbb{Z}^r| \geq |B_{R-r^{1/q}}^{q,(r)}| \geq |B_{R-r^{1/2}}^{q,(r)}| = |B_R^{q,(r)}|\bigg(1-\frac{r^{1/2}}{R}\bigg)^r \geq |B_R^{q,(r)}|\big(1-r^{3/2}R^{-1}\big)\ge|B_R^{q,(r)}|/2. $$ Combining \eqref{eq:48} with \eqref{eq:49} and \eqref{eq:50} we obtain \[
|\mathfrak{m}^{B^q, (r)}_R(\eta)|\le (16e^2+2)\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}. \]
Secondly, we assume that
$\max\{\|\eta_1\|,\ldots,\|\eta_r\|\}\le\kappa(r, R)^{-\frac{1+\delta}{3}}$. Observe that by \eqref{eq:40} we have \begin{align*}
\bigg| \frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}-
\frac{1}{|B_{R}^{q,(r)}|}\bigg|\le\frac{eR^{-1+(q+1)\delta/q}}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}
\le\frac{2e\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}}{|B_{R}^{q,(r)}|}. \end{align*}
Then $|\mathfrak{m}^{B^q, (r)}_R(\eta)|$ is bounded by \begin{align} \label{eq:42} \begin{split}
\Big|\mathfrak{m}^{B^q, (r)}_R(\eta)&-\frac{1}{|B_{R}^{q,(r)}|}\mathcal F(\ind{B_R^{q,(r)}})(\eta)\Big|+\frac{1}{|B_{R}^{q,(r)}|}\big|\mathcal F(\ind{B_R^{q,(r)}})(\eta)\big|\\
&\le 2e\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}+\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\Big|\sum_{x\in
B_{R}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i
x\cdot\eta}-\int_{B_R^{q,(r)}}e^{2\pi i
y\cdot\eta}\mathrm{d} y\Big| \\ & \quad + \frac{1}{|B_{R}^{q,(r)}|}\big|\mathcal F(\ind{B_R^{q,(r)}})(\eta)\big|. \end{split} \end{align} Let $Q^{(r)}=[-1/2, 1/2]^r$ and note that by Lemma \ref{lem:11} with $z=t\in Q^{(r)}$ we obtain \begin{align} \label{eq:51} \begin{split}
\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\Big| & \sum_{x\in
B_{R}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i
x\cdot\eta}-\int_{B_R^{q,(r)}}e^{2\pi i
y\cdot\eta}\mathrm{d} y\Big|\\
&=\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\Big|\sum_{x\in
\mathbb{Z}^r}\int_{Q^{(r)}}e^{2\pi i
x\cdot\eta}\ind{B_R^{q,(r)}}(x)-e^{2\pi i
(x+t)\cdot\eta}\ind{B_R^{q,(r)}}(x+t)\mathrm{d} t\Big|\\
&\le\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\int_{Q^{(r)}}\big|(B_R^{q,(r)}\cap\mathbb{Z}^r)\triangle\big((t+B_R^{q,(r)})\cap\mathbb{Z}^r\big)\big|\mathrm{d} t\\ &\quad +
\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\sum_{x\in
\mathbb{Z}^r}\ind{B_R^{q,(r)}}(x)\int_{Q^{(r)}}|e^{2\pi i
x\cdot\eta}-e^{2\pi i
(x+t)\cdot\eta}|\mathrm{d} t\\
& \le16e^2\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}+2\pi\big(\|\eta_1\|+\ldots+\|\eta_r\|\big)\\ &\le16e^2\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}+2\pi r\kappa(r, R)^{-\frac{1+\delta}{3}}. \end{split} \end{align} Finally, by Lemma \ref{lem:19} we obtain \begin{align*}
\frac{1}{|B_R^{q,(r)}|}|\mathcal F(\ind{B_R^{q,(r)}})(\eta)|\le C\big(c_q\kappa(r, R)\|\eta\|\big)^{-1}. \end{align*} Combining this with \eqref{eq:42} and \eqref{eq:51} we conclude \[
|\mathfrak{m}^{B^q, (r)}_R(\eta)|\le(16e^2+2e)\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}+2\pi r\kappa(r, R)^{-\frac{1+\delta}{3}}
+C c_q^{-1}\big(\kappa(r, R)\|\eta\|\big)^{-1}, \] which completes the proof. \end{proof}
\begin{lemma}
\label{lem:13}
For every $\delta\in(0, 1/2)$ and $\varepsilon\in(0, 1/(50q)]$ there is a
constant $C_{q,\delta, \varepsilon}>0$ such
that for every $d, N\in\mathbb{N}$, if $r$ is an integer such that $1\le r\le d$ and
$\max\{1, \varepsilon^{\frac{(q+1)\delta}{q}}\kappa(d, N)^{\delta}/2\}\le r\le \max\{1,\varepsilon^{\frac{(q+1)\delta}{q}}\kappa(d, N)^{\delta}\}$,
then for every $\xi=(\xi_1,\ldots,\xi_d)\in\mathbb{T}^d$ we have
\begin{align*}
|\mathfrak{m}^{B^q}_N(\xi)|\le C_{q,\delta, \varepsilon}\big(\kappa(d, N)^{-\frac{1}{3}+\frac{2\delta}{3}}+(\kappa(d, N)\|\eta\|)^{-1}\big),
\end{align*}
where $\eta=(\xi_1,\ldots, \xi_r)$. \end{lemma} \begin{proof}
If $\kappa(d, N)\le \varepsilon^{-\frac{q+1}{q}}$, then there is nothing to do, since the implied constant in question is allowed to depend on $q$, $\delta$, and $\varepsilon$. We will assume that $\kappa(d, N)\ge \varepsilon^{-\frac{q+1}{q}}$, which ensures that $\kappa(d, N)\ge10$.
In view of Lemma \ref{lem:9} we have
\begin{align*}
|\mathfrak{m}^{B^q}_N(\xi)|\le\sup_{R\ge \varepsilon^{(q+1)/q}\kappa(d, N) r^{1/q}}|\mathfrak{m}_{R}^{B^q,(r)}(\eta)|+4e^{-\frac{\varepsilon r}{10}},
\end{align*}
where $\eta=(\xi_1,\ldots,\xi_r)$. By Lemma \ref{lem:20}, since $r\le \varepsilon^{\frac{(q+1)\delta}{q}}\kappa(d, N)^{\delta}\le \kappa(r, R)^{\delta}\le R^{\delta}$, we obtain
\begin{align*}
|\mathfrak
m_{R}^{B^q,(r)}(\eta)|\lesssim_q \kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}
+r\kappa(r, R)^{-\frac{1+\delta}{3}}+\big(\kappa(r, R)\|\eta\|\big)^{-1}.
\end{align*}
Combining the two estimates above with our assumptions we obtain the desired claim. \end{proof}
We have prepared all necessary tools to prove inequality \eqref{eq:23}. We shall be working under the assumptions of Lemma \ref{lem:13} with $\delta=2/7$.
\begin{proof}[Proof of Proposition \ref{prop:2}] Assume that $\varepsilon=1/(50q)$. If $\kappa(d, N)\le2^{\frac{7}{2}}\cdot (50q)^{\frac{q+1}{q}}$, then \eqref{eq:23} clearly holds. Therefore, we can assume that $d, N\in\mathbb{N}$ satisfy $2^{\frac{7}{2}}\cdot(50q)^{\frac{q+1}{q}} \le \kappa(d, N)\le 50qd^{1-1/q}$. We choose an integer $1\le r\le d$ satisfying $(50q)^{-\frac{2(q+1)}{7q}}\kappa(d, N)^{\frac{2}{7}}/2\le r\le (50q)^{-\frac{2(q+1)}{7q}}\kappa(d, N)^{\frac{2}{7}}$; this is possible since $(50q)^{-\frac{2(q+1)}{7q}}\kappa(d, N)^{\frac{2}{7}} \geq 2$ and
$(50q)^{-\frac{2(q+1)}{7q}}\kappa(d, N)^{\frac{2}{7}} \le d^{\frac{2(1-1/q)}{7}}\le d$. By symmetry we may also assume that $\|\xi_1\|\ge\ldots\ge\|\xi_d\|$ and we shall distinguish two cases. Suppose first that \begin{align*}
\|\xi_1\|^2+\ldots+\|\xi_r\|^2\ge\frac{1}{4}\|\xi\|^2. \end{align*} Then in view of Lemma \ref{lem:13} (with $\delta=2/7$ and $r\simeq_q \kappa(d, N)^{\frac{2}{7}}$) we obtain \begin{align*}
|\mathfrak m^{B^q}_N(\xi)|\le C_q\big(\kappa(d, N)^{-\frac{1}{7}}+(\kappa(d, N)\|\xi\|)^{-1}\big), \end{align*} and we are done. So we can assume that \begin{align} \label{eq:54}
\|\xi_1\|^2+\ldots+\|\xi_r\|^2\le\frac{1}{4}\|\xi\|^2. \end{align} Let $\varepsilon_1=1/10$ and assume first that \begin{align} \label{eq:55}
\|\xi_j\|\le\frac{\varepsilon_1^{1/q}}{10\kappa(d,N)}\quad\text{ for all } \quad r\le j\le d. \end{align} We use the symmetries of $B^q_N\cap\mathbb{Z}^d$ to write \begin{align*}
\mathfrak{m}^{B^q}_N(\xi)=\frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in B^q_N\cap\mathbb{Z}^d}\prod_{j=1}^d \cos(2\pi x_j \xi_j). \end{align*} Applying the Cauchy--Schwarz inequality, $\cos^2(2\pi x_j \xi_j)=1-\sin^2(2\pi x_j \xi_j)$ and $1-x\le e^{-x}$, we obtain \begin{align} \label{eq:56} \begin{split}
|\mathfrak{m}^{B^q}_N(\xi)|^2\le \frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in B^q_N\cap\mathbb{Z}^d}\exp\Big(-\sum_{j=r+1}^d\sin^2(2\pi x_j \xi_j)\Big).
\end{split}
\end{align}
For $x\in B^q_N\cap\mathbb{Z}^d$ we define
\begin{align*}
I_x&=\{i\in\mathbb{N}_d : \varepsilon\kappa(d, N)\le |x_i|\le 2\varepsilon_1^{-1/q}\kappa(d,N) \},\\
I_x'&=\{i\in\mathbb{N}_d : 2\varepsilon_1^{-1/q}\kappa(d,N)< |x_i| \},\\
I_x''&=\{i\in\mathbb{N}_d : \varepsilon\kappa(d,N)\le |x_i|\}=I_x\cup I_x',
\end{align*}
and
\[
E=\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x|\ge\varepsilon_1 d/2\big\}.
\]
Observe that
\begin{align*}
E^{\bf c}=&\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x|<\varepsilon_1 d/2\big\}
=\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x''|<\varepsilon_1 d/2+|I_x'|\big\}\\
\subseteq&\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x''|<\varepsilon_1 d/2+|I_x'|\text{
and } |I_x'|\le \varepsilon_1 d/2\big\}
\cup
\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x'|> \varepsilon_1 d/2\big\}.
\end{align*}
Then it is not difficult to see that
\[
E^{\bf c}\subseteq \big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x''|<\varepsilon_1 d\big\},
\]
since $\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x'|> \varepsilon_1 d/2\big\}=\emptyset$; indeed, if $|I_x'|>\varepsilon_1 d/2$, then $\sum_{i=1}^d|x_i|^q> (\varepsilon_1 d/2)\, 2^q\varepsilon_1^{-1}\kappa(d,N)^q=2^{q-1}N^q> N^q$, contradicting $x\in B^q_N$.
Then by Lemma \ref{lem:5} with $\varepsilon_2=\varepsilon$,
we obtain
\begin{align*}
|E^{\bf c}|\le |\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x''|<\varepsilon_1 d\big\}|\le 2e^{-\frac{d}{10}}|B^q_N\cap\mathbb{Z}^d|.
\end{align*}
Therefore, by \eqref{eq:56} we have
\begin{align*}
|\mathfrak{m}^{B^q}_N(\xi)|^2
&\le \frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in
B^q_N\cap\mathbb{Z}^d}\exp\Big(-\sum_{j\in I_x \cap J_r}\sin^2(2\pi x_j
\xi_j)\Big)\ind{E}(x)+2e^{-\frac{d}{10}},
\end{align*}
where $J_r=\{r+1,\ldots, d\}$. Using \eqref{eq:103} and the definition of $I_x$ we have
\[
\sin^2(2\pi x_j \xi_j)\ge 16|x_j|^2\|\xi_j\|^2\ge 16\varepsilon^2\kappa(d,N)^2\|\xi_j\|^2,
\]
since $2|x_j|\|\xi_j\|\le 1/2$ by \eqref{eq:55}, and consequently we obtain
\begin{align}
\label{eq:58}
\begin{split}
\frac{1}{|B^q_N\cap\mathbb{Z}^d|}&\sum_{x\in
B^q_N\cap\mathbb{Z}^d}\exp\Big(-\sum_{j\in I_x\cap J_r}\sin^2(2\pi x_j
\xi_j)\Big)\ind{E}(x)\\
&\le
\frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in
B^q_N\cap\mathbb{Z}^d\cap E}\exp\Big(-16\varepsilon^2\kappa(d,N)^2\sum_{j\in I_x\cap
J_r}\|\xi_j\|^2\Big)\le Ce^{-c\kappa(d,N)^2\|\xi\|^2}
\end{split}
\end{align}
for some constants $C, c>0$. In order to get the last inequality in \eqref{eq:58} observe that
\begin{align*}
\frac{1}{|B^q_N\cap\mathbb{Z}^d|}&\sum_{x\in B^q_N\cap\mathbb{Z}^d\cap E}
\exp\Big(-16\varepsilon^2\kappa(d,N)^2\sum_{j\in I_x\cap J_r}\|\xi_j\|^2\Big)\\
&=\frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in
B^q_N\cap\mathbb{Z}^d\cap E}\frac{1}{d!}\sum_{\sigma\in{\rm Sym}(d)}\exp\Big(-16\varepsilon^2\kappa(d,N)^2\sum_{j\in \sigma(I_x)\cap
J_r}\|\xi_j\|^2\Big)\\
&=\frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in
B^q_N\cap\mathbb{Z}^d\cap E}\mathbb E\bigg[\exp\Big(-16\varepsilon^2\kappa(d,N)^2\sum_{j\in \sigma(I_x)\cap
J_r}\|\xi_j\|^2\Big)\bigg],
\end{align*}
since $\sigma\cdot (B^q_N\cap\mathbb{Z}^d\cap E)=B^q_N\cap\mathbb{Z}^d\cap E$ for every $\sigma\in{\rm Sym}(d)$.
We now apply Lemma \ref{lem:7} with $\delta_1=\varepsilon_1/2$, $d_0=r$, $I=I_x$, $\delta_0=3/5$, and
\begin{displaymath}
u_j = \left\{ \begin{array}{rl}
16\varepsilon^2\kappa(d,N)^2 \|\xi_r\|^2 & \textrm{for } 1 \leq j \leq r, \\
16\varepsilon^2\kappa(d,N)^2 \|\xi_j\|^2 & \textrm{for } r+1 \leq j \leq d, \end{array} \right.
\end{displaymath}
noting that $16\varepsilon^2\kappa(d,N)^2 \|\xi_r\|^2 \leq 1/5$ by \eqref{eq:55}. We conclude that
\begin{align*}
\mathbb E\bigg[\exp\Big(-16\varepsilon^2\kappa(d,N)^2\sum_{j\in \sigma(I_x)\cap
J_r}\|\xi_j\|^2\Big)\bigg]\le 3 \exp\Big(-c'\kappa(d,N)^2\sum_{j=r+1}^d\|\xi_j\|^2\Big)
\end{align*}
holds for some $c'>0$ and for all $x\in B^q_N\cap\mathbb{Z}^d\cap E$.
This proves \eqref{eq:58} since by
\eqref{eq:54} we obtain
\begin{align*}
\exp\Big(-c'\kappa(d,N)^2\sum_{j=r+1}^d\|\xi_j\|^2\Big)\le \exp\Big(-\frac{c'\kappa(d,N)^2}{4}\sum_{j=1}^d\|\xi_j\|^2\Big).
\end{align*}
Assume now that \eqref{eq:55} does not hold. Then
\begin{align*}
\|\xi_j\|\ge\frac{\varepsilon_1^{1/2}}{10\kappa(d,N)}\quad\text{ for all
} \quad 1\le j\le r.
\end{align*}
Hence
\begin{align*}
\|\xi_1\|^2+\ldots+\|\xi_r\|^2\ge\frac{\varepsilon_1r}{100\kappa(d,N)^2}.
\end{align*}
Therefore, we invoke Lemma \ref{lem:13} with $\eta=(\xi_1,\ldots, \xi_r)$ again and obtain
\begin{align*}
|\mathfrak{m}^{B^q}_N(\xi)|&\lesssim_q \kappa(d, N)^{-\frac{1}{7}}+(\kappa(d, N)\|\eta\|)^{-1}\\
&\lesssim_q \kappa(d, N)^{-\frac{1}{7}},
\end{align*}
since $r\simeq_q \kappa(d, N)^{\frac{2}{7}}$. This completes the proof of Proposition \ref{prop:2}. \end{proof}
\end{document}
\begin{document}
\title{Generation of maximum spin entanglement induced by a cavity field\\ in quantum-dot systems}
\author{Adam Miranowicz,$^{1,2,3}$ \c{S}ahin K. \"Ozdemir,$^{1,2}$ Yu-xi Liu$^{2}$, Masato Koashi,$^{1,2}$ Nobuyuki Imoto,$^{1,2,4,5}$\\ and Yoshiro Hirayama$^{1,5}$}
\address{ $^1$CREST Research Team for Interacting Carrier Electronics, Japan Science and Technology Corporation, Tokyo, Japan\\ $^2$School of Advanced Sciences, Graduate University for Advanced Studies (SOKENDAI), Hayama, Kanagawa 240-0193, Japan\\ $^3$Nonlinear Optics Division, Institute of Physics, Adam Mickiewicz University, 61-614 Pozna\'n, Poland\\ $^4$University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654, Japan\\ $^5$NTT Basic Research Laboratories, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa 243-0198, Japan}
\date{19 June 2002. Published in: {\em Physical Review A} {\bf 65} (2002) 062321} \pagestyle{plain} \pagenumbering{arabic}
\begin{abstract}
Equivalent-neighbor interactions of the conduction-band electron spins of quantum dots in the model of Imamo\={g}lu {\em et al.} [Phys. Rev. Lett. {\bf 83}, 4204 (1999)] are analyzed. An analytical solution and its Schmidt decomposition are found and applied to evaluate how much the initially excited dots can be entangled with the remaining dots if all of them are initially disentangled. It is demonstrated that perfect maximally entangled states (MESs) can only be generated in systems of up to six dots with a single dot initially excited. It is also shown that highly entangled states, approximating the MESs with good accuracy, can still be generated in systems of odd numbers of dots with almost half of them excited. A sudden decrease of entanglement is observed on increasing the total number of dots in a system with a fixed number of excitations. \end{abstract}
\pacs{03.65.Ud, 03.67.-a, 68.65.Hb, 75.10.Jm}
\maketitle
\pagenumbering{arabic}
\section{Introduction}
Since the seminal papers of Obermayer, Teich, and Mahler \cite{Obe88}, there has been growing interest in the quantum-information properties of quantum dots (QDs) in the quest to implement scalable quantum-dot quantum computers \cite{QD-QC,Loss98,QD-spin}. Those high expectations are justified to some extent by recent experimental advances in the coherent observation and manipulation of quantum dots \cite{Bon98,Oos98}, including spectacular demonstrations of the quantum entanglement of excitons in a single dot \cite{Chen00} or quantum-dot molecule \cite{Bay01}, and observations of Rabi oscillations of excitons in single dots \cite{Kam01}. Among various models of quantum computers based on localized electron spins of quantum dots as qubits \cite{Loss98,QD-spin}, the scheme of Imamo\={g}lu {\em et al.} \cite{Ima99} is the first in which the interactions between the qubits are mediated by a cavity field. This approach combines the advantages of long-distance optically controlled couplings with the long decoherence times of the spin degrees of freedom. Here, we analyze quantum entanglement in the Imamo\={g}lu {\em et al.} model.
During the last decade, it has been highlighted that quantum entanglement, being at the heart of quantum mechanics, is also a powerful resource for quantum communication and quantum-information processing. Quantum entanglement in interacting systems is a common phenomenon. It is obvious that any interacting many-body system with defined qubits, if set in a properly chosen state, will evolve through states with entangled qubits. Surprisingly, quantitative descriptions of the entanglement dynamics in multiparticle systems are by no means satisfactory yet \cite{multipart}. Nevertheless, in a special case of bipartite entanglement, a number of measures have been introduced and studied \cite{Ben96,Ved97,Woo98}. For example, entanglement of a bipartite system in a pure state, described by the density matrix $\hat{\rho}_{AB}=(|\psi \rangle \langle \psi
|)_{AB}$, can be measured by the von Neumann entropy \cite{Ben96,Ved97} \begin{equation} E[\hat{\rho}_{AB}]=-{\rm Tr}\{\hat{\rho}_{A}\log _{2}\hat{\rho}_{A}\}=-{\rm Tr}\{\hat{\rho}_{B}\log _{2}\hat{\rho}_{B}\} \label{N01} \end{equation} of the reduced density matrix $\hat{\rho}_{A}={\rm Tr}_{B}\{\hat{\rho}_{AB}\}$ or, equivalently, $\hat{\rho}_{B}={\rm Tr}_{A}\{\hat{\rho}_{AB}\}$. The entanglement of formation of a mixed state of a bipartite system is often measured by the so-called concurrence proposed by Hill and Wootters \cite{Woo98}. Concurrence has been applied to study entanglement in various models \cite{conc} including equivalent-neighbor systems \cite{Koa00,Wang01}. The following two aspects of entanglement are especially important: (i) coherent manipulation of entanglement and (ii) generation of maximum entanglement. The possibility of coherent and selective control of entanglement in a quantum-dot system was analyzed by Imamo\={g}lu {\em et al.} \cite{Ima99}. Here, we would like to focus on the latter topic, i.e., the generation of the maximally entangled states (MESs) of quantum dots in the model of Imamo\={g}lu {\em et al.} \cite{Ima99}. MESs are necessary for the majority of quantum information-processing applications. For example, direct application of partly entangled states to teleportation results in unfaithful transmission, while superdense coding with partly entangled states introduces noise into the resulting classical channel.
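For concreteness, the entropy \eqref{N01} is easy to evaluate numerically: the following sketch (with helper names of our own choosing) computes $E$ for a two-qubit Bell state, which attains the maximal value of one ebit.

```python
import numpy as np

def entanglement_entropy(psi: np.ndarray, dim_a: int, dim_b: int) -> float:
    # E in Eq. (1): von Neumann entropy (base 2) of rho_A = Tr_B |psi><psi|.
    m = psi.reshape(dim_a, dim_b)
    rho_a = m @ m.conj().T                  # reduced density matrix rho_A
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]            # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# Bell state (|00> + |11>)/sqrt(2): maximally entangled pair of qubits.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(entanglement_entropy(bell, 2, 2))     # close to 1 (one ebit)
```

A product state such as $|00\rangle$ gives $E=0$ under the same routine, as expected.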
The paper is organized as follows. In Sec. II, we describe an equivalent-neighbor quantum-dot model and give its analytical solution. In Sec. III, we analyze the possibilities of generation of the MESs or their good approximations for different initial conditions of the number of excitations and the total number of dots in the system.
\vspace*{0mm}
\section{Quantum-dot model and its solution}
We will apply the model of Imamo\={g}lu {\em et al.} \cite{Ima99} to describe strong equivalent-neighbor couplings of quantum-dot spins through a single-mode microcavity field. The dots are placed inside a microdisk, put into a microcavity tuned to frequency $\omega _{{\rm cav}}$, and illuminated selectively by laser fields of frequencies $\omega _{n}^{(L)}$. Each of $N$ dots with a single electron in the conduction band is modeled by a three-level atom as shown in Fig. 1. The total Hamiltonian for $N$ three-level quantum dots interacting with $N+1$ quantized fields reads \begin{eqnarray} \label{N02} \hat{H}&=&\hat{H}_{QD}+\hat{H}_{F}+\hat{H}_{\rm int}, \\ \hat{H}_{QD}&=&\sum_{n}({\cal E}_{n}^{(0)}\hat{\sigma} _{n}^{00}+{\cal E}_{n}^{(1)}\hat{\sigma} _{n}^{11}+{\cal E}_{n}^{(v)}\hat{\sigma} _{n}^{vv}),
\nonumber \\ \hat{H}_{F}&=&\hbar \omega _{{\rm cav}}\hat{a}_{{\rm cav}}^{\dag }\hat{a}_{{\rm cav} }+\sum_{n}\hbar \omega _{n}^{(L)}(\hat{a}_{n}^{(L)})^{\dag }\hat{a}_{n}^{(L)},
\nonumber \\ \hat{H}_{\rm int} &=&\sum_{n}\hbar g_{n}^{v0}[\hat{a}_{n}^{(L)}\hat{\sigma} _{n}^{0v}+(\hat{a}_{n}^{(L)})^{\dag }\hat{\sigma} _{n}^{v0}]
\nonumber \\ &&+\sum_{n}\hbar g_{n}^{v1}(\hat{a}_{{\rm cav}}\hat{\sigma} _{n}^{1v}+\hat{a}_{{\rm cav} }^{\dag }\hat{\sigma} _{n}^{v1}),
\nonumber \end{eqnarray} where $\hat{H}_{QD}$ and $\hat{H}_{F}$ are the free Hamiltonians of the quantum dots and the fields, respectively; $\hat{H}_{\rm int}$ is the interaction Hamiltonian; $\hat{a}_{\rm cav}$ and $\hat{a}^{\dag}_{\rm cav}$ are the annihilation and creation operators of the cavity mode, respectively; $\hat{a}^{(L)}_{n}$
and $(\hat{a}^{(L)}_{n})^{\dag}$ are the corresponding operators for the laser modes; $\hat{\sigma} _{n}^{xy}$ is the $n$th dot operator given by $\hat{\sigma} _{n}^{xy}=|x\rangle _{nn}\langle y|$; ${\cal E}_{n}^{(x)}$ is the energy of level $|x\rangle_{n}$
($x=0,1,v$); the $n$th dot levels $|0\rangle _{n}$ and $|v\rangle _{n}$ are coupled by dipole interactions with a strength of
$g_{n}^{v0}$; analogously, $g_{n}^{v1}$ is the coupling strength between levels $|1\rangle _{n}$ and $ |v\rangle _{n}$. There is no direct coupling between levels $|0\rangle _{n} $ and $|1\rangle _{m}$ in either the same ($n=m$) or different dots ($n\neq m$). The Hamiltonian (\ref{N02}) simply generalizes, to $N$ dots and $N+1$ fields, models of a three-level atom (dot) interacting with two modes of radiation fields widely discussed in the literature (see, e.g., \cite{3plus2}). By applying an adiabatic elimination method, Imamo\={g}lu {\em et al.} derived the effective interaction Hamiltonian describing the evolution of the conduction-band spins of $N$ quantum dots coupled by a microcavity field in the form \cite{Ima99} \begin{eqnarray} \hat{H}_{{\rm eff}}&=&\!\frac{\hbar}{2}\sum_{n\neq m}\kappa _{nm}(t)[{\hat{\sigma}} _{n}^{+}{\hat{\sigma}}_{m}^{-}e^{i(\Delta _{n}-\Delta _{m})t} \nonumber \\ &&~~~~~~~~~~~~~~~~+{\hat{\sigma}} _{n}^{-}{\hat{\sigma}}_{m}^{+}e^{-i(\Delta _{n}-\Delta _{m})t}] \label{N02a} \end{eqnarray} in terms of the Pauli spin creation ${\hat{\sigma}}_{n}^{+}$ and annihilation ${\hat{\sigma}}_{n}^{-}$ operators acting on the conduction-band spin states of the $n$th dot. The effective two-dot coupling strength between the spins of the $n$th and $m$th dots is given by $\kappa _{nm}(t)= g_n(t) g_m(t)/\Delta _{n}$, where the effective single-dot coupling of the $n$th spin to the cavity field is $g_n(t)= g_{n}^{v0} g_{n}^{v1}
|E^{(L)}_n(t)|/\Delta \omega _{n}$ with $\Delta \omega _{n}$ being the harmonic mean of $\Delta \omega _{n}^{(1)}$ and $\Delta \omega _{n}^{(0)}$. For simplicity, the laser fields are assumed to be strong and treated classically as described by the complex amplitudes $E^{(L)}_n(t)$. The Hamiltonian (\ref{N02a}) was derived by applying adiabatic eliminations of the valence-band states $|v\rangle _{n}$ and cavity mode $\hat{a}_{{\rm cav}},$
which are valid under the assumptions of negligible coupling strength, cavity decay rate, and thermal fluctuations in comparison to $\hbar\Delta _{n}$ and $\hbar\Delta \omega _{n}^{(x)}$ ($x=0,1$) and the energy difference ${\cal E}_{n}^{(1)}-{\cal E} _{n}^{(0)}$ (see Fig. 1). Moreover, the valence-band levels $|v\rangle _{n} $ were assumed to be far off resonance. Although the Hamiltonian (\ref{N02a}) describes apparently direct spin-spin interactions, the real physical picture is different: Quantum-dot spins are coupled only indirectly via the cavity and laser fields.
Imamo\={g}lu {\em et al.} \cite{Ima99} applied their model for quantum computing purposes by implementing the conditional phase-flip and controlled-NOT (CNOT) operations between two arbitrary dots addressed selectively by laser fields to satisfy the condition $\Delta _{n}=\Delta _{m}$. Here, we are interested in a realization of an equivalent-neighbor model scalable for a large number of dots (even for more than 100 \cite{Ima99}). This goal can readily be achieved by assuming that all dots are identical and illuminated by a single-mode stationary laser field of frequency $\omega _{n}^{(L)}\equiv \omega ^{(L)}$, which implies $\kappa _{nm}(t)=\kappa=\,$const. In fact, the condition of equivalent-neighbor interactions can also be assured for nonidentical dots by adjusting the laser-field frequencies $\omega _{n}^{(L)}$ to get the same detuning $\Delta _{n}=\,$const, and by choosing the proper laser intensities $|E^{(L)}_n|^2$ to obtain the effective coupling constants of $g_{n}(t)=\,$const or, equivalently, $\kappa _{nm}(t)=\,$const for every pair of dots. Thus, Eq. (\ref{N02a}) can be reduced to the effective equivalent-neighbor $N$-dot Hamiltonian as
\vspace*{0mm} \begin{figure}
\caption{Evolution of the quantum entanglement $E^{N1}(t)$ (solid) and the Schmidt coefficients $P_{0}^{N1}(t)$ (dashed) and $P_{1}^{N1}(t)$ (dot-dashed curves) for systems of $N=2,\cdots,8$ quantum dots with only one ($M=1$) of them initially excited. The figure illustrates that exact maximally entangled states can be generated only in systems of up to $N=6$ dots.}
\end{figure}
\begin{eqnarray} \hat{H}_{\rm eff}=\frac{\hbar\kappa}{2} \sum_{n\neq m}\left( {\hat{\sigma}}_n^{+}{\hat{\sigma}}_m^{-}+ {\hat{\sigma}}_n^{-}{\hat{\sigma}}_m^{+}\right) \label{N03} \end{eqnarray} where $\kappa$ is the coupling constant. The system described by Eq. (\ref{N03}) is sometimes referred to as the spin-$1/2$ van der Waals model~\cite{Dek79}, the infinitely coordinated system~\cite{Bot82}, the Lipkin or Lipkin-Meshkov-Glick model~\cite{Lip65}, or just the equivalent-neighbor model \cite{Liu90}. Let us assume that the initial state describing a system of $M$ ($M=0,\cdots ,N$) dots initially excited (i.e., with conduction-band spins up) and $N-M$ dots in the ground state (conduction-band spins down) is given as \begin{eqnarray}
|\psi (0)\rangle &=&\{|1\rangle ^{\otimes M}\}_{A}\{|0\rangle
^{\otimes (N-M)}\}_{B} \nonumber \\ &\equiv& |\underbrace{11\cdots 1}_{M}\rangle _{A}|\underbrace{00\cdots 0}_{N-M}\rangle _{B}. \label{N04} \end{eqnarray} Then, we find the solution of the Schr\"odinger equation of motion for the model (\ref{N03}) in the form \begin{eqnarray}
|\psi (t)\rangle &=&\sum_{m=0}^{M^{\prime }}C_{m}^{NM}(t)
{\{|1\rangle ^{\otimes (M-m)}|0\rangle ^{\otimes m}\}}_{A} \nonumber \\
&&~~~~~~~~~~~~\otimes {\{|1\rangle ^{\otimes m}|0\rangle ^{\otimes (N-M-m)}\}}_{B}, \label{N05} \end{eqnarray} where $M^{\prime }=\min (M,N-M)$. The states in curly brackets
$\{|1\rangle ^{\otimes (n-m)}|0\rangle ^{\otimes m}\}$ denote the sum of all $n$-dot states with $(n-m)$ excitations. For example,
${ \{|1\rangle ^{\otimes 2}|0\rangle ^{\otimes 2}\}}$ stands for
$|0011\rangle +|0101\rangle +|0110\rangle +|1001\rangle
+|1010\rangle +|1100\rangle$. The number of states in the superposition $\{|1\rangle ^{\otimes (n-m)}|0\rangle ^{\otimes m}\}$ (or equivalently $\{|1\rangle ^{\otimes m}|0\rangle ^{\otimes (n-m)}\}$) is equal to the binomial coefficient $\textstyle{n \choose m}$. Thus, for given $N$ and $M$, the solution (\ref{N05}) contains $\textstyle{N \choose M}$ terms. The energy of the QD system described by Eq. (\ref{N03}) is conserved; thus all the superposition states in Eq. (\ref{N05}) have the same number $M$ of excitations. We find the time-dependent superposition coefficients in Eq. (\ref{N05}) as \begin{eqnarray} C_{m}^{NM}(t)&=&\sum_{n=0}^{M^{\prime }}b_{nm}^{NM}\exp \big\{i[n(N+1-n) \nonumber \\ &&~~~~~~~~~~~~~~~~-M(N-M)]\kappa t\big\} \label{N06} \end{eqnarray} in terms of \begin{eqnarray} b_{nm}^{NM}&=&\sum_{k=0}^{m} (-1)^{k} {m \choose k} {N-2k \choose M-k}^{-1} \nonumber \\ &&~~~\times \left[{N+1-2k \choose n-k}-2{N-2k \choose n-k-1} \right] , \label{N07} \end{eqnarray} where $\textstyle{x \choose y}$ are binomial coefficients. Our solution can be represented in a biorthogonal form via the Schmidt decomposition \begin{equation}
|\psi (t)\rangle =\sum_{m=0}^{M^{\prime
}}\sqrt{P_{m}^{NM}(t)}|\phi _{m}(t)\rangle _{A}\otimes |\varphi _{m}(t)\rangle _{B}, \label{N08} \end{equation}
where $|\phi _{m}(t)\rangle _{A}$ and $|\varphi _{m}(t)\rangle _{B}$ are the orthonormal basis states of subsystems $A$ and $B$, respectively. We find that the real and positive Schmidt coefficients can be related to the squared moduli of the superposition coefficients (\ref{N06}) as follows:
\vspace*{0mm} \begin{figure}
\caption{Evolution of the entanglement of $E^{N2}(t)$ (solid) and all Schmidt coefficients $P_{0}^{N2}(t)$ (dashed), $P_{1}^{N2}(t)$ (dot-dashed), and $P_{2}^{N2}(t)$ (dotted curves), in systems with two ($M=2$) dots initially excited.}
\end{figure}
\begin{equation}
P_{m}^{NM}(t)={M \choose m} {N-M \choose m} |C_{m}^{NM}(t)|^{2}, \label{N09} \end{equation}
while the phases of $C_{m}^{NM}(t)$ are absorbed into the definition of the basis states $|\phi _{m}(t)\rangle _{A}$ and
$|\varphi _{m}(t)\rangle _{B}$. The Schmidt coefficients are normalized to unity. The evolutions of all $P_{m}^{NM}$ for systems with one and two excitations are given in Figs. 2 and 3, respectively. We observe that the evolution of the Schmidt coefficients is periodic with period $\kappa T=2\pi/N$ for systems with a single ($M=1$ or, equivalently, $M=N-1$) excitation (Fig. 2), and $\pi$-periodic ($2\pi$-periodic) for systems with even (odd) numbers of dots with higher numbers of excitations (see Fig. 3). For brevity, only half of the period is depicted in the right-hand panels of Fig. 3.
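As an elementary consistency check of Eqs. (\ref{N06}), (\ref{N07}), and (\ref{N09}) (this evaluation is ours, not part of the original derivation), consider the simplest nontrivial case $N=2$, $M=1$, so that $M^{\prime}=1$:

```latex
% Eq. (N07) gives b_{00}^{21}=b_{10}^{21}=b_{01}^{21}=1/2 and b_{11}^{21}=-1/2,
% and the phases in Eq. (N06) reduce to exp{i[n(3-n)-1]\kappa t}, so that
\begin{eqnarray}
C_{0}^{21}(t)&=&\frac{1}{2}\left(e^{-i\kappa t}+e^{i\kappa t}\right)=\cos \kappa t,
\nonumber \\
C_{1}^{21}(t)&=&\frac{1}{2}\left(e^{-i\kappa t}-e^{i\kappa t}\right)=-i\sin \kappa t.
\nonumber
\end{eqnarray}
% Hence |psi(t)> = cos(kappa t)|10> - i sin(kappa t)|01>, and Eq. (N09)
% yields the Schmidt coefficients P_0^{21}(t)=cos^2(kappa t) and
% P_1^{21}(t)=sin^2(kappa t), in agreement with the period kappa T=2 pi/N=pi.
```

This reproduces the familiar two-qubit exchange dynamics generated by $\hat{H}_{\rm eff}=\hbar\kappa(\hat{\sigma}_1^{+}\hat{\sigma}_2^{-}+\hat{\sigma}_1^{-}\hat{\sigma}_2^{+})$.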
\section{Entanglement in quantum-dot systems}
We address the following questions: How much can the initially excited dots (say, subsystem $A$) be entangled with the remaining dots (subsystem $B$) in the equivalent-neighbor system of initially all disentangled dots if the evolution is governed by Hamiltonian (\ref{N03})? And can maximally entangled states be generated exactly or, at least, approximately in systems of an arbitrary number $N$ of dots with $M$ of them excited?
With the help of an explicit form of the Schmidt decomposition, it is convenient to calculate the entanglement (\ref{N01}) via the Shannon entropy
\begin{eqnarray}\label{N10}
E^{NM}(t)&\equiv& E[|\psi (t)\rangle \langle \psi (t)|] \nonumber \\ &=&-\sum_{m=0}^{M'}P_{m}^{NM}(t)\log _{2}P_{m}^{NM}(t) \end{eqnarray} of the Schmidt coefficients given for our system by (\ref{N09}). By applying Eq. (\ref{N10}), we can determine the maximum entanglement given by $E^{NM}_{\max}\equiv \max_t E^{NM}(t)$, which can periodically be generated during the evolution of the $N$-dot system with $M$ excitations. The coefficients (\ref{N09}), as well as (\ref{N06}), possess the symmetry $P_{m}^{NM}(t)=P_{m}^{N,N-M}(t)$, which implies equal evolutions of entanglement \begin{equation} E^{NM}(t)=E^{N,N-M}(t) \label{N11} \end{equation} in the $N$-dot systems with $M$ and $N-M$ excitations. Figure 4 shows this symmetry in a special case for maximum entanglement: $\max_t E^{NM}(t)=\max_t E^{N,N-M}(t)$.
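For illustration (our own elementary evaluation), Eq. (\ref{N10}) becomes fully explicit for $N=2$, $M=1$: with $P_{1}^{21}(t)=\sin^{2}\kappa t$ from Eq. (\ref{N14}) and $P_{0}^{21}(t)=1-P_{1}^{21}(t)$, the entanglement is the binary Shannon entropy

```latex
\begin{eqnarray}
E^{21}(t)=-\cos ^{2}\kappa t\,\log _{2}\!\left(\cos ^{2}\kappa t\right)
-\sin ^{2}\kappa t\,\log _{2}\!\left(\sin ^{2}\kappa t\right),
\nonumber
\end{eqnarray}
% which vanishes at kappa t = k pi/2 and reaches its maximum of 1 ebit
% exactly at kappa t = pi/4 + k pi/2, i.e., whenever the two Schmidt
% coefficients are equal: P_0^{21} = P_1^{21} = 1/2.
```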
\vspace*{0mm} \begin{figure}
\caption{Maximum entanglement $E_{\max }^{NM}=\max_{t}E^{NM}(t)$ (solid bars), measured in ebits, as a function of the excitation number $M$ generated in systems of $N=10,20,30$ and 31 dots. The empty staircase corresponds to entanglement of $E_{\rm MES}^{NM}$ for the MESs. The figure illustrates that the highest entanglement, closest to $E_{\rm MES}^{NM}$, can be generated in systems with $M=[N/2]$ excitations. On decreasing $M$ or ($N-M$), the entanglement decreases. The discrepancy between $E_{\max }^{NM}$ and $E_{\rm MES}^{NM}$ becomes more pronounced with increasing $N$ especially for $0<M\ll [N/2]$.}
\end{figure}
To solve the second problem proposed at the beginning of this section, we have to determine the quantum correlations of the maximally entangled state of two subsystems having $d$ equally weighted terms in its Schmidt decomposition. According to the theorem of Bennett {\em et al.} \cite{Ben96}, the MES has $\log _{2}d$ ebits of entanglement, where $d$ is the Hilbert space dimension of the smaller subsystem. Thus, in our case, the MES of the subsystem $A$ consisting of $M$ dots and the subsystem $B$ of $N-M$ dots has \begin{equation} E_{{\rm MES}}^{NM}=\log _{2}[\min (M,N-M)+1] \label{N12} \end{equation} ebits of entanglement. In particular, the MES in the $N$-dot system with a single initial excitation has only 1 ebit independent of $N$. The empty staircase in Fig. 4 and solid lines in Fig. 5 correspond to $E_{{\rm MES}}^{NM}$. To show a deviation of a given state from the MES, it is convenient to use the relative (or scaled) entanglement defined to be \begin{eqnarray} e^{NM}_{\max} \equiv \frac{E^{NM}_{\max}}{E^{NM}_{{\rm MES}}} = \max_t \frac{ E^{NM}(t)}{E^{NM}_{\rm MES}}. \label{N13} \end{eqnarray}
In the simplest nontrivial case, for $M=1$, the Schmidt coefficients reduce to \begin{equation} P_{1}^{N1}(t)=4{{\frac{(N-1)}{N^{2}}}}\sin ^{2}\left(\frac{N}{2}\kappa t\right) \label{N14} \end{equation} and $P_{0}^{N1}(t)=1-P_{1}^{N1}(t)$, which enable a direct calculation of the entanglement $E^{N1}(t)$ with the help of Eq. (\ref{N10}). The evolutions of entanglement and the Schmidt coefficients of $P_{m}^{N1}(t)$ for $m$=0,1, are depicted in Fig. 2. The quantum-dot systems evolve into the MESs at evolution times that are the roots of the equation
\vspace*{0mm} \begin{figure}
\caption{Maximum entanglement $E_{\max }^{NM}$ as a function of the total number $N$ of dots generated in systems with $M$=1,2,3, and $[N/2]$ excitations. The solid lines and empty staircase correspond to $E_{\rm MES}^{NM}$. On the scale of the figure, an apparent plateau occurs for $N$ smaller than some critical value $N_{M}$. For $N$ higher than $N_{M}$ and fixed $M$, a monotonic decrease of the maximum entanglement is clearly visible. One concludes that arbitrarily high entanglement can be achieved by increasing $N$ and keeping half, $M=[N/2]$, of the system excited.}
\end{figure}
\vspace*{0mm} \begin{figure}
\caption{The same as in Fig. 5 but for the relative maximum entanglement $e_{\max }^{NM}=E_{\max }^{NM}/E_{\rm MES}^{NM}$. The figure shows that the apparent plateau for finite $M$ actually occurs for $M=1$ only. The first and second highest maxima of entanglement correspond to $N$ equal to $2M+1$ and $2M+5$, respectively.}
\end{figure} \begin{eqnarray} 0=\dot{E}^{NM}(t)&=&2\kappa{{\frac{N-1}{N}}}\sin (N\kappa t) \nonumber \\ &\times&\!\! \log _{2}\left[ {{\frac{N^{2}}{4(N-1)}}}\csc ^{2}\left({{\frac{N}{2}}}\kappa t\right)-1 \right]. \label{N15} \end{eqnarray} Thus, we get \begin{equation} \kappa t^{\prime }={{\frac{2}{N}}}{\rm arccsc}\left({{\frac{2}{N}}}\sqrt{2(N-1)}\right) \label{N16} \end{equation} and $\kappa t^{\prime \prime }=\pi/N$. We find that the maximum entanglement, equal to ${E}^{N1}(t^{\prime })=1$ ebit, can be achieved at evolution times $t^{\prime}$ for $N\leq 6$ only. For $N>6$, a real solution for $t^{\prime}$ does not exist. Another explanation of this result, as illustrated in Fig. 2, can be given as follows: The maximum entanglement corresponds to the Schmidt coefficients being mutually equal or, in general, least different. But the MES corresponds solely to the former case. As seen in Fig. 2, the condition $P_{0}^{N1}(t')= P_{1}^{N1}(t')$ is strictly satisfied for $N\le 6$. The entanglement for $N>6$ reaches its maximum at evolution times $t^{\prime \prime }$. This maximum value is given by \begin{eqnarray} {E}^{N1}(t^{\prime \prime }) &=&{{\frac{2}{N^{2}}}}\big\{N^{2}\log _{2}N-(N-2)^{2}\log _{2}(N-2) \nonumber \\ &&~~~~~-2(N-1)\log _{2}[4(N-1)]\big\}, \label{N17} \end{eqnarray} which is less than unity and monotonically decreases with increasing $N$ as clearly illustrated in Figs. 5 and 6 for $M$=1. Thus, the perfect MESs cannot be generated in systems of $N>6$ dots. Nevertheless, a good approximation of the MESs can also be obtained for $N=7$. On the scale of Fig. 2, $\max_t E^{7,1}(t)=E^{7,1}(\pi/7)=0.9997$ is close to unity since $P_{0}^{N1}(\pi/7)$ and $P_{1}^{N1}(\pi/7)$ are almost the same. It is worth noting that a critical value of $N=6$ was also found, although in a different context of the pairwise entanglement measured by the concurrence \cite{Woo98}, for an equivalent-neighbor model of entangled webs in Ref. \cite{Koa00}.
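The restriction $N\leq 6$ can also be read off directly from Eq. (\ref{N16}); the following elementary estimate is ours. Since ${\rm arccsc}\,x$ is real only for $|x|\geq 1$, a real solution $t^{\prime}$ exists if and only if

```latex
\begin{eqnarray}
\frac{2}{N}\sqrt{2(N-1)}\geq 1
\quad\Longleftrightarrow\quad
N^{2}-8N+8\leq 0
\quad\Longleftrightarrow\quad
N\leq 4+2\sqrt{2}\approx 6.83,
\nonumber
\end{eqnarray}
% so, for integer N >= 2, the MES condition P_0^{N1}(t') = P_1^{N1}(t') = 1/2
% can be met only for N = 2,...,6. For example, for N = 2 one finds
% kappa t' = arccsc(sqrt(2)) = pi/4.
```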
In comparison, a critical value of $N=6$ for the concurrence in the equivalent-neighbor isotropic or anisotropic Heisenberg models was not observed (see, e.g., \cite{Wang01}). Similarly, generation of the MESs in an equivalent-neighbor quantum-dot model of Reina {\em et al.} was discussed only in two special cases of the Bell ($N$=2) and Greenberger-Horne-Zeilinger (GHZ) ($N$=3) entangled states \cite{Rei00}. Thus, no critical behavior of entanglement as a function of $N$ was reported there.
The case $M=1$ is the only one where the general formula (\ref{N09}) for the Schmidt coefficients simplifies to a compact form for arbitrary evolution times. Thus, for clarity, we present mainly numerical results for $M\ge 2$. For example, Fig. 3 illustrates that the exact MESs cannot be generated in systems with $M=2$ excitations at any evolution time. This conclusion can be drawn from the observation that $P_{m}^{N2}(t)$ for $m$=0,1,2 do not cross simultaneously at any time during the period. Nevertheless, the MESs can be approximated with good precision. The highest possible entanglement, corresponding to the least mutually different $P_{m}^{N2}$, is observed for $N=5$ and 9, where the relative entanglement deviates from unity by the order of $10^{-5}$ and $10^{-4}$, respectively (see Fig. 6 for $M$=2). The states generated in $N$-dot systems with three excitations can be entangled up to $e^{7,3}_{\max}=0.9996$ (first) and $e^{11,3}_{\max}=0.9990$ (second maximum) for the relative entanglement (see Fig. 6 for $M$=3). It is interesting to compare the relative entanglement $e^{NM}_{\max}$, depicted in Fig. 6, with the ``absolute'' entanglement $E^{NM}_{\max}$ presented in Fig. 5. By analyzing the numerical data given, in part, in Fig. 6, we find the following rule: The maximally or almost maximally entangled states can be generated in systems of $N=2M+1$ dots with $M$ excitations. Slightly worse entanglement can be achieved in systems of $N=2M+5$ dots with $M$ excitations. Thus, systems composed of odd rather than even numbers of dots enable generation of the entangled states better approximating the MESs for $M>1$. This is clearly illustrated in Fig. 6 for $M=[N/2]$, i.e., the integer part of $N/2$. We observe that systems with odd and large numbers ($N>2M+5$ for $M>1$) of dots are the most entangled at evolution times $\kappa t=(1+2k)\pi$ for $k= 0,1,...$ (see, e.g., Fig. 3 for $N$=11).
In this special case, the Schmidt coefficients can be written compactly via \begin{eqnarray}
\left|C_{m}^{NM}\left(\frac{\pi}{\kappa}\right)\right|=2^mm!(N-2M)\frac{(N-2m-2)!!}{N!!}. \label{N18} \end{eqnarray} For $\kappa t=k\pi$ and even $N$, in contrast to odd $N$, the entanglement vanishes. The maximum entanglement of $E^{NM}_{\max}$ for $N>N_{M}\equiv 2M+5-\delta_{1M}$ can be well fitted by the inverse of linear functions as shown in Fig. 7.
\vspace*{0mm} \begin{figure}
\caption{The inverse of the maximum entanglement, $(E_{\max}^{NM})^{-1}$ (dots) measured in ebits$^{-1}$, and its approximation (solid lines) as a function of $N>2M+5$ generated in systems with $M=1,2,3$ excitations.}
\end{figure}
\section{Conclusion}
We studied the evolution of the conduction-band spins of quantum dots in the model of Imamo\={g}lu {\em et al.} \cite{Ima99}. We found the analytical solution and its Schmidt decomposition for the equivalent-neighbor model and applied them in our study of bipartite entanglement in quantum-dot systems with arbitrary numbers of dots and their excitations. We have posed and solved the problem of how strongly the initially excited dots can be entangled with the remaining dots if all of them are initially disentangled in the equivalent-neighbor energy-conserving model. We have shown that the perfect maximally entangled states can only be generated in systems of $N=2,\cdots ,6$ dots with a single dot initially excited. Nevertheless, highly entangled states, being excellent approximations of the MESs, can periodically be generated in systems of odd numbers $N$ of dots with the number $M$ of excitations equal to $M=(N-1)/2$ (leading to the best approximation) and $M=(N-5)/2$ (giving a slightly worse approximation). If we increase $N$ beyond $N_{M}=2M+5-\delta_{1M}$, the entanglement decreases monotonically as described by inverses of linear functions.
\vspace*{10mm} \section*{ACKNOWLEDGMENTS} We thank J. Bajer, T. Cheon, T. Kobayashi, H. Matsueda, and I. Tsutsui for their stimulating discussions. Y.L. acknowledges support from the Japan Society for the Promotion of Science (JSPS). This work was supported by a Grant-in-Aid for Scientific Research (B) (Grant No.~12440111) and a Grant-in-Aid for Encouragement of Young Scientists (Grant No. 12740243) by the Japan Society for the Promotion of Science.
\vspace*{0mm}
\widetext
{\setlength{\fboxsep}{1pt} \begin{center} \framebox{\parbox{0.75\columnwidth}{ \begin{center} published in the {\em Physical Review A} {\bf 65} (2002) 062321\\ and selected to\\ {\em Virtual J. Nanoscale Sci. Tech.} ({\tt http://www.vjnano.org/nano/}) {\bf 6} (2002) Issue 1; \\ {\em Virtual J. Quantum Information} {\tt (http://www.vjquantuminfo.org)} {\bf 2} (2002) Issue 7. \end{center}}} \end{center}
\end{document}
\begin{document}
\title{On a surprising relation between rectangular and square free convolutions}
\symbolfootnote[0]{{\bf MSC 2000 subject classifications.} 46L54, 15A52}
\symbolfootnote[0]{{\bf Key words.} free probability, random matrices, free convolution}
\begin{abstract}Debbah and Ryan have recently proved \cite{dr07} a result about the limit empirical singular distribution of the sum of two rectangular random matrices whose dimensions tend to infinity. In this paper, we reformulate it in terms of the rectangular free convolution introduced in \cite{bg07} and then give a new, shorter proof of this result under weaker hypotheses: we no longer assume the probability measure appearing in this result to be compactly supported. Finally, we discuss how this result fits into the family of relations between rectangular and square random matrices.\end{abstract}
\section*{Introduction}Free convolutions are operations on probability measures on the real line which make it possible to compute the spectral or singular empirical measures of large random matrices that are expressed as sums or products of independent random matrices whose spectral measures are known. More specifically, the operations $\boxplus,\boxtimes$, called respectively {\em free additive and multiplicative convolutions}, are defined in the following way \cite{vdn91}. Let, for each $n$, $M_n$, $N_n$ be $n$ by $n$ independent random Hermitian matrices, one of them having a distribution which is invariant under the action of the unitary group by conjugation, whose empirical spectral measures\footnote{The {\em empirical spectral measure} of a matrix is the uniform law on its eigenvalues with multiplicity.} converge, as $n$ tends to infinity, to non-random probability measures denoted respectively by $\tau_1, \tau_2$. Then $\tau_1\boxplus\tau_2$ is the limit of the empirical spectral law of $M_n+N_n$ and, in the case where the matrices are positive, $\tau_1\boxtimes\tau_2$ is the limit of the empirical spectral law of $M_nN_n$. In the same way, for any $\lambda\in [0,1]$, the {\em rectangular free convolution} $\boxplus_{\la}$ is defined in \cite{bg07} in the following way. Let $M_{n,p}, N_{n,p}$ be $n$ by $p$ independent random matrices, one of them having a distribution which is invariant under multiplication by any unitary matrix on either side, whose symmetrized\footnote{The {\em symmetrization} of a probability measure $\mu$ on $[0,+\infty)$ is the law of $\varepsilon X$, for $\varepsilon, X$ independent random variables with respective laws $\frac{\delta_1+\delta_{-1}}{2}, \mu$. Dealing with laws on $[0,+\infty)$ or with their symmetrizations is equivalent, but for historical reasons, the rectangular free convolutions have been defined with symmetric laws.
Throughout this paper, we shall often pass from symmetric probability measures to measures on $[0,+\infty)$ and vice versa. Thus, in order to avoid confusion, we shall mainly use the letter $\mu$ for measures on $[0,+\infty)$ and $\nu$ for symmetric ones.} empirical singular measures\footnote{The {\em empirical singular measure} of a matrix $M$ of size $n$ by $p$ ($n\leq p$) is the
empirical spectral measure of $|M|:=\sqrt{MM^*}$.} tend, as $n,p$ tend to infinity in such a way that $n/p$ tends to $\lambda$, to non-random probability measures $\nu_1,\nu_2$. Then the symmetrized empirical singular law of $M_{n,p}+N_{n,p}$ tends to $\nu_1\boxplus_{\la} \nu_2$.
These operations can be explicitly computed using either a combinatorial or an analytic machinery (see \cite{vdn91} and \cite{ns06} for $\boxplus, \boxtimes$ and \cite{bg07} for $\boxplus_{\la}$). In the cases $\lambda=0$ or $\lambda=1$, i.e. where the rectangular random matrices we consider are either ``almost flat'' or ``almost square'', the rectangular free convolution with ratio $\lambda$ can be expressed with the additive free convolution: $\boxplus_1=\boxplus$ and, for all symmetric laws $\nu_1,\nu_2$, $\nu_1\boxplus_0 \nu_2$ is the symmetric law whose push-forward by the map $t\mapsto t^2$ is the free convolution of the push-forwards of $\nu_1$ and $\nu_2$ by the same map. However, though one can find many analogies between the definitions of $\boxplus$ and $\boxplus_{\la}$, and still more analogies have been proved \cite{fbg05.inf.div}, no general relation between $\boxplus_{\la}$ and $\boxplus$ had been proved until a paper of Debbah and Ryan \cite{dr07} (whose submitted version, more focused on applications than on this result, is \cite{dr08}). It is worth noting that this result is not due to researchers from the communities of Operator Algebras or Probability Theory, but to researchers from Information Theory, working on communication networks. In \cite{dr07}, Debbah and Ryan proved a result about random matrices which can be interpreted as an expression, for certain probability measures $\nu_1,\nu_2$, of their rectangular convolution $\nu_1\boxplus_{\la}\nu_2$ in terms of $\boxplus$ and of another convolution, called the {\em free multiplicative deconvolution} and denoted by $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}$. In this note, we present this result with a new approach and give a new, shorter proof under more general hypotheses. This generalization of the hypotheses answers a question asked by Debbah and Ryan in the last section of their paper \cite{dr07}.
The question of a more general relation between square and rectangular free convolutions is considered in a final ``perspectives'' section.
{\bf Acknowledgments:} The author would like to thank Raj Rao for bringing the paper \cite{dr07} to his attention and M\'erouane Debbah for his encouragement and many useful discussions.
\section{The result of Debbah and Ryan}Let us define the operation $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}$ on certain pairs of probability measures on $[0,+\infty)$ in the following way. For probability measures $\mu,\mu_2$ on $[0,+\infty)$, if there is a probability measure $\mu_1$ on $[0,+\infty)$ such that $\mu=\mu_1\boxtimes\mu_2$, then $\mu_1$ is called the {\em free multiplicative deconvolution} of $\mu$ by $\mu_2$ and is denoted by $\mu_1=\mu\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_2$. For $\lambda\in (0,1]$, let $\mu_\lambda$ be the law of $\lambda X$, where $X$ is a random variable distributed according to the Marchenko-Pastur law with parameter $1/\lambda$, i.e. the law with support $[(1-\sqrt{\lambda})^2, (1+\sqrt{\lambda})^2]$ and density $$x\mapsto \frac{\sqrt{4\lambda -(x-1-\lambda)^2}}{2\pi\lambda x}.$$
Theorem 1 of \cite{dr07} states the following result. $\lambda\in (0,1]$ is fixed and $(p_n)$ is a sequence of positive integers such that $n/p_n$ tends to $\lambda$ as $n$ tends to infinity. $\delta_1$ denotes the Dirac mass at $1$. \begin{Th}[Debbah and Ryan]\label{1.07.08.3}Let, for each $n$, $A_n$, $G_n$ be independent $n$ by $p_n$ random matrices such that the empirical spectral law of $A_nA_n^*$ converges almost surely weakly, as $n$ tends to infinity, to a compactly supported probability measure $\mu_A$ and such that the entries of $G_n$ are independent $N(0, \frac{1}{p_n})$ random variables. Then the empirical spectral law of $(A_n+G_n)(A_n+G_n)^*$ converges almost surely to a compactly supported probability measure $\rho$ which, in the case where $\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ exists, satisfies the relation \begin{equation}\label{1.07.08.2}\rho=[(\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda.\end{equation} \end{Th}
\begin{rmq}\label{1.7.8.16h}{\rm Note that in the case where $\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ does not exist, the relation \eqref{1.07.08.2} remains true in a formal sense. More specifically, for a probability measure $\mu_A$ such that $\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ exists, the moments of $\rho=[(\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda$ have a polynomial expression in the moments of $\mu_A$ (this can easily be seen by the theory of free cumulants \cite{ns06}). This relation between the moments of the limit spectral law $\rho$ of $(A_n+G_n)(A_n+G_n)^*$ and those of $\mu_A$ remains true even when $\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ does not exist. It follows from the original proof of Theorem \ref{1.07.08.3} and it will also follow from our proof (see Remark \ref{1.7.8.1}).}\end{rmq}
Note that by the very definition of the rectangular free convolution $\boxplus_{\la}$ with ratio $\lambda$ recalled in the introduction and since the limit empirical spectral law of $GG^*$ is $\mu_\lambda$ (this is a well-known fact; see, e.g., Theorem 4.1.9 of \cite{hiai}), this result can be stated as follows: for every compactly supported probability measure $\mu$ on $[0,+\infty)$ such that $\mu\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ exists, \begin{equation}\label{30.06.08.1}(\sqrt{\mu}\boxplus_{\la} \sqrt{\mu_\lambda})^2=[ (\mu\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda,\end{equation} where for any probability measure $\rho$ on $[0,+\infty)$, $\sqrt{\rho}$ denotes the symmetrization of the push-forward of $\rho$ by the square-root function and for any symmetric probability measure $\nu$ on the real line, $\nu^2$ denotes the push-forward of $\nu$ by the function $t\mapsto t^2$. This formula allows one to express the operator $\boxplus_{\la}\sqrt{\mu_\lambda}$ on the set of symmetric compactly supported probability measures on the real line in terms of $\boxplus$ and $\boxtimes$: for every symmetric probability measure $\nu$ on the real line, \begin{equation}\label{30.06.08.2}\nu\boxplus_{\la} \sqrt{\mu_\lambda}=\sqrt{ [(\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda}.\end{equation}
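As an elementary sanity check of \eqref{30.06.08.2} (ours, not taken from \cite{dr07}), one can take $\nu=\delta_0$:

```latex
% Left-hand side: \delta_0 is the neutral element for the rectangular
% convolution (it is the symmetrized singular law of the zero matrix), hence
$$\delta_0\boxplus_{\lambda}\sqrt{\mu_\lambda}=\sqrt{\mu_\lambda}.$$
% Right-hand side: \nu^2=\delta_0 and \delta_0=\delta_0\boxtimes\mu_\lambda,
% so \delta_0 is an admissible deconvolution of itself by \mu_\lambda, and
$$(\delta_0\boxplus\delta_1)\boxtimes\mu_\lambda
=\delta_1\boxtimes\mu_\lambda=\mu_\lambda,$$
% whose symmetrized square root is \sqrt{\mu_\lambda}, so both sides agree.
```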
\section{A proof of the generalized theorem of Debbah and Ryan} $\lambda\in (0,1]$ is still fixed. In this section, we shall give a new, shorter proof of the theorem of Debbah and Ryan, under weaker hypotheses: we shall prove \eqref{30.06.08.2} without supposing $\nu$ to be compactly supported. The proof is based on the machinery of the rectangular free convolution and of the rectangular $R$-transform.
\subsection{Some analytic transforms} Let us first recall a few facts about the analytic approach to $\boxtimes$ and $\boxplus_{\la}$. Let us define, for a probability measure $\rho$ on $[0,+\infty)$, $$M_\rho(z):=\int_{t\in \mathbb{R}}\frac{zt}{1-zt}\mathrm{d} \rho(t),\quad S_\rho(z)=\frac{1+z}{z}M_\rho^{\langle -1\rangle}(z),$$where, as in the rest of the text, the exponent $^{\langle -1\rangle}$ stands for the inversion of analytic functions on $\mathbb{C}\backslash [0,+\infty)$ with respect to the composition operation $\circ$, in a neighborhood of zero. By \cite{vdn91}, for every pair $\mu_1, \mu_2$ of probability measures on $[0,+\infty)$, $\mu_1\boxtimes\mu_2$ is characterized by the fact that $S_{\mu_1\boxtimes\mu_2}=S_{\mu_1}S_{\mu_2}$.
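To illustrate these transforms on the simplest possible example (the computation is ours), take $\rho=\delta_a$ with $a>0$:

```latex
$$M_{\delta_a}(z)=\frac{az}{1-az},\qquad
M_{\delta_a}^{\langle -1\rangle}(z)=\frac{z}{a(1+z)},\qquad
S_{\delta_a}(z)=\frac{1+z}{z}\cdot\frac{z}{a(1+z)}=\frac{1}{a}.$$
% In particular S_{\delta_1}=1, so \delta_1 is the neutral element for
% \boxtimes, and S_{\mu\boxtimes\delta_a}=S_\mu/a recovers the fact that
% \boxtimes\delta_a is the dilation t -> at.
```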
In the same way, the rectangular free convolution with ratio $\lambda$ can be computed with an analytic transform of probability measures. Let $\nu$ be a symmetric probability measure on the real line. Let us define $H_\nu(z)= z(\lambda M_{\nu^2}(z)+1)(M_{\nu^2}(z)+1)$. Then the {\em rectangular $R$-transform with ratio $\lambda$} of $\nu$ is defined to be $$C_\nu(z)=U\left( \frac{z}{H_\nu^{\langle -1\rangle}(z)}-1\right), $$where $U(z)= \frac{-\lambda-1+\left[(\lambda+1)^2+4\lambda z\right]^{1/2}}{2\lambda}$. By Theorem 3.12 of \cite{bg07}, for every pair $\nu_1, \nu_2$ of symmetric probability measures, $\nu_1\boxplus_{\la}\nu_2$ is characterized by the fact that $C_{\nu_1\boxplus_{\la}\nu_2}=C_{\nu_1}+C_{\nu_2}$.
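As a quick example (again ours), the rectangular $R$-transform of $\nu=\delta_0$ vanishes identically: $\nu^2=\delta_0$ gives $M_{\nu^2}=0$, hence $H_{\delta_0}(z)=z$ and $H_{\delta_0}^{\langle -1\rangle}(z)=z$, so

```latex
$$C_{\delta_0}(z)=U\left(\frac{z}{z}-1\right)=U(0)
=\frac{-\lambda-1+\left[(\lambda+1)^2\right]^{1/2}}{2\lambda}=0,$$
% consistent with \delta_0 being the neutral element for \boxplus_\lambda:
% C_{\nu \boxplus_\lambda \delta_0} = C_\nu + 0 = C_\nu.
```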
\subsection{Some preliminary computations} Note that by \cite{ns06}, the $S$- and $R$-transforms of a probability measure $\mu$ on $[0,+\infty)$ are linked by the relation $S_\mu(z)=\frac{1}{z}R_\mu^{\langle -1\rangle}(z)$. Since the free cumulants of the Marchenko-Pastur law with parameter $1/\lambda$ are all equal to $1/\lambda$ (see \cite{ns06}), the $n$th free cumulant of $\mu_\lambda$ (the law of $\lambda X$) is $\lambda^n\cdot\frac{1}{\lambda}=\lambda^{n-1}$, and we have $S_{\mu_\lambda}(z)=\frac{1}{1+\lambda z}$. Moreover, since by \cite{ns06} again, $S_\mu(z)= \frac{1+z}{z}M_\mu^{\langle -1\rangle}(z),$ for any law $\sigma$ on $[0,+\infty)$, \begin{equation}\label{1.7.8.3}M_{\sigma\boxtimes \mu_\lambda} ^{\langle -1\rangle}=\frac{z}{z+1}\frac{S_{\sigma}}{1+\lambda z}=\frac{M_\sigma^{\langle -1\rangle}}{1+\lambda z}\;\quad \textrm{ and }\;\quad M_{\sigma\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture} \mu_\lambda}^{\langle -1\rangle}=(1+\lambda z)M_\sigma^{\langle -1\rangle}.\end{equation} Finally, since $\boxplus\delta_1=*\delta_1$ (free convolution with a Dirac mass is a translation), which implies that $M_{\sigma\boxplus\delta_1}(z)=[(z+1)M_\sigma(z)+z]\circ \frac{z}{1-z}$, for any symmetric law $\nu$, we have \begin{equation}\label{1.07.08.1}M_{((\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1)\boxtimes\mu_\lambda}^{\langle -1\rangle}=\frac{1}{1+\lambda z}\times\frac{z}{1+z}\circ\left[(z+1)\left((1+\lambda z)M_{\nu^2}^{\langle -1\rangle}\right)^{\langle -1\rangle}+z\right]^{\langle -1\rangle} .\end{equation}
\subsection{Proof of the result}\label{2.7.8.3} So let us consider a symmetric probability measure $\nu$ such that $\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture} \mu_\lambda$ exists and let us prove \eqref{30.06.08.2}. As proved in the proof of Theorem 3.8 of \cite{bg07}, for any symmetric probability measure $\tau$, $H_\tau$ characterizes $\tau$, thus it suffices to prove that $H_{ \nu\boxplus_{\la} \sqrt{\mu_\lambda}}=H_{m}$ for $m=\sqrt{ [(\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda}.$ By Theorem 4.3 of \cite{fbg05.inf.div} and the paragraph preceding it,
$C_{\sqrt{\mu_\lambda}}(z)=z$. Thus Lemma 4.1 of \cite{bba07} applies here, and it states that in a neighborhood of zero in $\mathbb{C}\backslash [0,+\infty)$, $$H_{\nu\boxplus_{\la} \sqrt{\mu_\lambda}}=H_\nu\circ \left(\frac{H_\nu}{T(H_\nu+M_{\nu^2})}\right)^{\langle-1\rangle},$$where $T(z)=(\lambda z+1)(z+1)$. So it suffices to prove that in such a neighborhood of zero, $$H_m=H_\nu\circ \left(\frac{H_\nu}{T(H_\nu+M_{\nu^2})}\right)^{\langle -1\rangle},\quad\textrm{ i.e. }\quad H_m\circ \frac{H_\nu}{T(H_\nu+M_{\nu^2})}=H_\nu.$$Using the fact that for any symmetric law $\tau$, $H_\tau(z)=zT(M_{\tau^2}(z)),$ this amounts to proving that $$\frac{H_\nu}{T(H_\nu+M_{\nu^2})}\times T\circ M_{m^2}\circ \frac{H_\nu}{T(H_\nu+M_{\nu^2})} =H_{\nu},$$i.e.$$ T\circ M_{m^2}\circ \frac{H_\nu}{T(H_\nu+M_{\nu^2})}=T(H_\nu+M_{\nu^2}),$$which is implied, simplifying by $T$ and using again $H_\tau(z)=zT(M_{\tau^2}(z))$, by $$ M_{m^2}\circ \frac{zT(M_{\nu^2}(z))}{T[zT(M_{\nu^2}(z))+M_{\nu^2}(z)]}=zT[M_{\nu^2}(z)]+M_{\nu^2}(z).$$ This in turn is implied, composing with $M_{\nu^2}^{\langle -1\rangle}$ on the right and with $M_{m^2}^{\langle -1\rangle}$ on the left, by $$ M_{\nu^2}^{\langle -1\rangle}\times T=(T\times M_{m^2}^{\langle -1\rangle})\circ (M_{\nu^2}^{\langle -1\rangle}(z)T(z)+z).$$Using the expression of $M_{m^2}^{\langle -1\rangle}$ given by \eqref{1.07.08.1}, it amounts to proving that $$ M_{\nu^2}^{\langle -1\rangle}(z)T(z)=$$ $$ \left((z+1)\times\frac{z}{1+z}\circ\left[(z+1)\left((1+\lambda z)M_{\nu^2}^{\langle -1\rangle}\right)^{\langle -1\rangle}+z\right]^{\langle -1\rangle}\right)\circ (M_{\nu^2}^{\langle -1\rangle}(z)T(z)+z),$$i.e. 
that $$ \frac{M_{\nu^2}^{\langle -1\rangle}(z)T(z)}{M_{\nu^2}^{\langle -1\rangle}(z)T(z)+z+1}=\frac{z}{1+z}\circ\left[(z+1)\left((1+\lambda z)M_{\nu^2}^{\langle -1\rangle} \right)^{\langle -1\rangle}+z\right]^{\langle -1\rangle}\circ (M_{\nu^2}^{\langle -1\rangle}(z)T(z)+z) .$$Now, composing by $\left[(z+1)\left((1+\lambda z)M_{\nu^2}^{\langle -1\rangle} \right)^{\langle -1\rangle}+z\right]\circ\frac{z}{1-z}$ on the left, it gives $$ \left[(z+1)\left((1+\lambda z)M_{\nu^2}^{\langle -1\rangle} \right)^{\langle -1\rangle}+z\right]\circ[(1+\lambda z)M_{\nu^2}^{\langle -1\rangle}(z)]=M_{\nu^2}^{\langle -1\rangle}(z)T(z)+z, $$i.e. $$ [M_{\nu^2}^{\langle -1\rangle}(z)(\lambda z+1)+1]z+[M_{\nu^2}^{\langle -1\rangle}(z)(\lambda z+1)]=M_{\nu^2}^{\langle -1\rangle}(z)(\lambda z+1)(z+1)+z, $$which is easily verified.
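The final identity is a polynomial identity in $z$ and $w=M_{\nu^2}^{\langle -1\rangle}(z)$, so it can be checked mechanically. An illustrative sketch, assuming the \texttt{sympy} library:

```python
import sympy as sp

# w stands for M_{nu^2}^{<-1>}(z), treated as a free variable
z, lam, w = sp.symbols('z lambda w')

# [w(lambda z + 1) + 1]z + w(lambda z + 1) = w(lambda z + 1)(z + 1) + z
lhs = (w*(lam*z + 1) + 1)*z + w*(lam*z + 1)
rhs = w*(lam*z + 1)*(z + 1) + z
assert sp.expand(lhs - rhs) == 0
```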
\subsection{Remarks on this result} \begin{rmq}\label{1.7.8.1}{\rm Note that we did not use the fact that $\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture} \mu_\lambda$ exists to prove that $H_{ \nu\boxplus_{\la} \sqrt{\mu_\lambda}}=H_{m}$. This means that if $\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture} \mu_\lambda$ does not exist, there is no longer a probability measure $\mu$ on $[0,+\infty)$ such that $M_\mu^{\langle -1\rangle}=(1+\lambda z)M_{\nu^2}^{\langle -1\rangle}$ as in \eqref{1.7.8.3}, but the polynomial expression of the moments of $ \nu\boxplus_{\la} \sqrt{\mu_\lambda}$ (i.e. of the limit symmetrized singular law of the matrix $A_n+G_n$ of Theorem \ref{1.07.08.3}) in the moments of $\nu$ following from $H_{ \nu\boxplus_{\la} \sqrt{\mu_\lambda}}=H_{m}$ for $m=\sqrt{ [(\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda}$ remains true (see Remark \ref{1.7.8.16h}).}\end{rmq}
\begin{rmq}[Case $\lambda=0$]\label{1.7.8.2}{\rm A continuous way to define $\mu_\lambda$ for any $\lambda\in [0,1]$ is as the probability measure with free cumulants $k_n(\mu_\lambda)=\lambda^{1-n}$ for all $n\geq 1$ (see \cite{ns06}). This definition gives $\mu_0=\delta_1$. Note that by definition of the rectangular free convolution with null ratio $\boxplus_0$ (which is recalled in the introduction), the relation \eqref{30.06.08.2} remains true for $\lambda=0$.}\end{rmq}
\begin{rmq}\label{1.7.8.17h39}{\rm Note that the original proof of Debbah and Ryan in \cite{dr07} is based on the combinatorial approach to freeness, via the free cumulants of Nica and Speicher \cite{ns06}, whereas our proof is based on the analytic machinery for computing the rectangular free convolution, namely the rectangular $R$-transform. It sometimes happens that combinatorial proofs can be translated into analytic terms by considering the generating functions of the combinatorial objects in question. Notice however that this is not what we did here. Indeed, the rectangular $R$-transform machinery is actually related to cumulants other than those of Nica and Speicher: the so-called rectangular cumulants, defined in \cite{bg07}.}\end{rmq}
\subsection{Remarks about the free deconvolution by $\mu_\lambda$} The following corollary is part of the answer given in the present paper to the question asked in the last section of the paper of Debbah and Ryan \cite{dr07}. Let us endow the set of probability measures on the real line with the weak topology \cite{billingsley}.
\begin{cor}The functional $\nu\mapsto [(\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda$, defined on the set of probability measures $\nu$ on $[0,+\infty)$ such that $\nu\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ exists, extends continuously to the whole set of probability measures on $[0,+\infty)$.\end{cor}
\begin{pr} We just proved, in Section \ref{2.7.8.3}, that the formula $$\nu\boxplus_{\la} \sqrt{\mu_\lambda}=\sqrt{ [(\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda}$$is true for any probability measure $\nu$ on $[0,+\infty)$. Since the operation $\boxplus_{\la}$ is continuous on the set of symmetric probability measures on the real line (Theorem 3.12 of \cite{bg07}) and the bijective correspondence between symmetric laws on the real line and laws on $[0,+\infty)$, which maps any symmetric law to its push-forward by the map $t\mapsto t^2$, is continuous with continuous inverse, the corollary follows.\end{pr}
The functional $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$, whose domain is contained in the set of probability measures on $[0,+\infty)$, surprisingly plays a key role here. It seems natural to try to study its domain. The first step is to notice that this domain is the whole set of probability measures on $[0,+\infty)$ if and only if $\delta_1$ is in this domain, and that in this case, the functional $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ is simply equal to $\boxtimes(\delta_1\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)$. However, the following proposition states that despite the previous corollary, the domain of the functional $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ is not the whole set of probability measures on $[0,+\infty)$.
\begin{propo}The Dirac mass $\delta_1$ at $1$ is not in the domain of the functional $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$.\end{propo}
\begin{pr} Suppose that there is a probability measure $\tau$ on $[0,+\infty)$ such that $\delta_1=\tau\boxtimes\mu_\lambda$. Such a law $\tau$ has to satisfy $S_\tau(z)=1+\lambda z$. This implies that for $z$ small enough, $M_\tau(z)=\frac{z-1+[(1-z)^2+4\lambda z]^{1/2}}{2\lambda}$. Such a function does not admit an analytic continuation to $\mathbb{C}\backslash[0,+\infty)$, thus no such probability measure $\tau$ exists.
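To see where the expression of $M_\tau$ comes from: $S_\tau(z)=1+\lambda z$ together with $S_\tau(z)=\frac{1+z}{z}M_\tau^{\langle -1\rangle}(z)$ forces $M_\tau^{\langle -1\rangle}(z)=\frac{z(1+\lambda z)}{1+z}$, so $M_\tau$ satisfies the quadratic $\lambda M^2+(1-z)M-z=0$, whose branch vanishing at $0$ is the stated formula. A symbolic sketch of this check, assuming the \texttt{sympy} library:

```python
import sympy as sp

z, lam = sp.symbols('z lambda', positive=True)

# candidate M_tau from the proof
M = (z - 1 + sp.sqrt((1 - z)**2 + 4*lam*z)) / (2*lam)

# lam*M^2 + (1 - z)*M - z = 0 is equivalent to Minv(M(z)) = z
# with Minv(z) = z*(1 + lam*z)/(1 + z)
assert sp.expand(lam*M**2 + (1 - z)*M - z) == 0
```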
\end{pr}
\section{Relations between square and rectangular matrices/convolutions}
The theorem of Debbah and Ryan gives an expression of the empirical singular measure of the sum of two rectangular random matrices in terms of operations related to hermitian square random matrices. Two other results relate empirical singular measures of (non hermitian) square or rectangular random matrices to the operations designed for hermitian random matrices.
The first one can be summarized by $\boxplus_1=\boxplus$. Concretely, it states that denoting by $\operatorname{ESM}(X)$ the symmetrization of the empirical singular
measure of any rectangular matrix $X$, for any pair $M,N$ of large $n$ by $p$ random matrices, one of them being invariant in law by the left and right actions of the unitary groups, for $n/p\simeq 1$, \begin{equation}\label{2.7.8.16h008}\operatorname{ESM}(M+N)\simeq \operatorname{ESM}(M)\boxplus \operatorname{ESM}(N). \end{equation} Note that the matrices $M,N$ are not hermitian, which makes \eqref{2.7.8.16h008} pretty surprising (since $\boxplus$ was defined with hermitian random matrices). It means that for $\varepsilon, \varepsilon_1,\varepsilon_2$ independent random variables with law $\frac{\delta_{-1}+\delta_1}{2}$, independent of $M$ and $N$, we have \begin{equation}\label{1.7.8.20h14}\operatorname{Spectrum}(\varepsilon |M+N|){\simeq}\operatorname{Spectrum}(\varepsilon_1 |M|+\varepsilon_2|N|).\end{equation}
The second one can be summarized as follows: for any pair $\nu,\tau$ of symmetric probability measures on the real line, $(\nu\boxplus_0\tau)^2=\nu^2\boxplus\tau^2$.
Concretely, it states that for any pair $M,N$ of $n$ by $p$ unitarily invariant random matrices, for $1\ll n\ll p$, \begin{equation}\label{1.7.8.20h15}\operatorname{Spectrum}[(M+N)(M+N)^*] {\simeq}\operatorname{Spectrum}( MM^*+NN^*).\end{equation}
The advantage of the result of Debbah and Ryan over these is that it works for any value of the ratio $\lambda$, but its disadvantage is that it only works when one of the convolved laws is $\mu_\lambda$, i.e. one of the matrices considered is a Gaussian one. In fact this sharp restriction can be understood by the fact that among rectangular random matrices which are invariant in law under multiplication by unitary matrices, the Gaussian ones are the only ones which can be extended to square matrices which are also invariant in law under multiplication by unitary matrices.
It could be interesting to understand better how relations like \eqref{1.7.8.20h14}, \eqref{1.7.8.20h15} or like the one of Debbah and Ryan's theorem work and can be generalized. Unfortunately, until now, even though nice proofs (see \cite{bg07} for \eqref{1.7.8.20h14} and \eqref{1.7.8.20h15} or Theorem 4.3.11 of \cite{hiai} and Proposition 3.5 of \cite{haag2} for the $n=p$ case of \eqref{1.7.8.20h14}) relying on free probability have been given for these results relating rectangular convolutions and ``square non hermitian convolutions" with the ``square hermitian convolution" (i.e. $\boxplus$), no ``concrete" explanation has been given, nor any generalization (to any $\lambda$, to any pair of probability measures). Such a generalization could consist of a functional $f_\lambda$ on the set of symmetric probability measures such that for all $\nu,\tau$ symmetric probability measures, $\nu\boxplus_{\la}\tau$ is the only symmetric probability measure satisfying $$f_\lambda(\nu\boxplus_{\la}\tau)=f_\lambda(\nu)\boxplus f_\lambda(\tau).$$
Note that in the case $\lambda=1$, the functional $f_\lambda(\nu)=\nu$ works, and in the case $\lambda=0$, the functional which maps a measure to its push-forward by the square function works.
\begin{rmq}{\rm Let $(\mc{A},\varphi)$ be a $*$-non commutative probability space and $p_1, p_2$ be two self-adjoint projectors of $\mc{A}$ with $p_1+p_2=1$ and $\lambda=\varphi(p_1)/\varphi(p_2)$.
As explained in Proposition-Definition 2.1 of \cite{bg07}, $\boxplus_{\la}$ can be defined by the fact that for any pair $a,b\in p_1\mc{A} p_2$ free with amalgamation over $\operatorname{Vect}(p_1,p_2)$, the symmetrized distribution of $|a+b|$ in $(p_1\mc{A} p_1,\frac{1}{\varphi(p_1)}\varphi_{|p_1\mc{A} p_1})$ is the rectangular free convolution with ratio $\lambda$ of the symmetrized distributions of $|a|$ and $|b|$ in the same space.
Moreover, it is easy to see that for all $a\in p_1\mc{A} p_2$, the symmetrized distribution $\tau$ of $|a|$ in $(p_1\mc{A} p_1,\frac{1}{\varphi(p_1)}\varphi_{|p_1\mc{A} p_1})$ is linked to the distribution $\nu$ of $a+a^*$
in $(\mc{A},\varphi)$ by the relation $\nu= \frac{2\lambda}{1+\lambda}\tau+\frac{1-\lambda}{1+\lambda}\delta_0.$
When $\lambda=1$, the equation $\boxplus=\boxplus_{\la}$ can be summarized in the following way: for $a,b\in p_1\mc{A} p_2$ free with amalgamation over $\operatorname{Vect}(p_1,p_2)$, the distribution of $(a+b)+(a+b)^*$ in $(\mc{A}, \varphi)$ is the free convolution of the distributions of $a+a^*$ and $b+b^*$.
If this had stayed true for other values of $\lambda$, it would have meant that for all $\nu,\tau$ compactly supported symmetric probability measures on the real line, we have \begin{equation}\label{13.03.06.1}f_\lambda(\nu\boxplus_{\la}\tau)=f_\lambda(\nu)\boxplus f_\lambda(\tau) ,\end{equation} where $f_\lambda$ is the function which maps a probability measure $\tau$ on the real line to $\frac{2\lambda}{1+\lambda}\tau+\frac{1-\lambda}{1+\lambda}\delta_0$. But looking at the fourth moment, it appears that \eqref{13.03.06.1} is not true.}\end{rmq}
\end{document}
\begin{document}
\begin{abstract}
The Hadwiger number $h(G)$ is the order of the largest complete minor in $G$. Does sufficient Hadwiger number imply a minor with additional properties?
In \cite{GEELEN200920}, Geelen et al.\ showed that $h(G)\geqslant (1+o(1))ct\sqrt{\ln t}$ implies $G$ has a bipartite subgraph with Hadwiger number at least $t$, for some explicit $c\sim 1.276\dotsc$. We improve this to $h(G) \geqslant (1+o(1))t\sqrt{\log_2 t}$, and provide a construction showing this is tight. We also derive improved bounds for the topological minor variant of this problem. \end{abstract}
\title{Bipartite clique minors in graphs of large Hadwiger number}
\section{Introduction}
A well known result shows that every graph with average degree at least $2d$ contains a bipartite subgraph of average degree at least $d$, and further this result is essentially tight (for example, for complete graphs). Can we prove a similar result for other graph parameters? In a recent preprint, Hickingbotham and Wood \cite{HickWood} showed a number of such results. While some of their results were known, their paper serves as a helpful survey of the literature.
One particular choice of parameter is the Hadwiger number\footnote{A graph $H$ is a minor of $G$ if it can be obtained by a sequence of edge contractions, or vertex and edge deletions. The Hadwiger number $h(G)$ is the largest $t$ for which $K_t$ is a minor of $G$.}. Geelen, Gerards, Reed, Seymour and Vetta \cite{GEELEN200920} showed that $h(G)\geqslant c_1t\sqrt{\ln t}$ implies $G$ contains a bipartite subgraph with Hadwiger number at least $t$ by combining results of Thomason \cite{ThomasonComplete} and a slight modification of the classical average degree result. The constant $c_1$ can be taken to be $1.276\dotsc + o(1)$, corresponding to twice the extremal function for $K_t$ minors.
Hickingbotham and Wood rederived this theorem, and explicitly posed the question of whether this bound is asymptotically tight. In this note, we show that the asymptotic growth rate is correct, however the constant is different.
\begin{theorem} \label{T:lb}
For all $\epsilon > 0$, there is a $t_0$ such that for all $t>t_0$, if \\\mbox{$f(t) = \ceil{(1-\epsilon)t\sqrt{\log_2 t}}$}, there is a graph with a $K_{f(t)}$ minor, and in fact a $K_{f(t)}$ topological minor, which has no bipartite subgraph with Hadwiger number at least $t$. \end{theorem} We remark that our lower bound is asymptotically $(1.201\dotsc +o_t(1))t\sqrt{\ln t}$.
\begin{theorem} \label{T:ub}
For any $\epsilon > 0$ there is a $t_0(\epsilon)$ such that for any $t > t_0$, any graph with Hadwiger number at least $t\sqrt{\log_2 t}$ contains a bipartite subgraph with Hadwiger number at least $(1-2\epsilon)t$. \end{theorem}
Hickingbotham and Wood \cite{HickWood} also considered the natural analogue for topological minors\footnote{A graph $H$ is a topological minor of $G$ if $G$ contains as a subgraph a graph obtained from $H$ by replacing each edge with a path, all such paths being internally vertex disjoint}. If $tcl(G)$ is the order of the largest topological clique minor contained in $G$, combining the high average degree bipartite subgraph result and lower bounds on $tcl(G)$ in terms of average degree, they showed that there is a constant $c$ (in particular, one can take $c = 20/23 + o(1)$) such that for sufficiently large $t$, every graph with $tcl(G)\geqslant ct^2$ has a bipartite subgraph with $tcl(H)\geqslant t$. We were able to improve their upper bound, and also provide an example showing quadratic growth is necessary.
\begin{theorem}
\label{T:topub}
If $G$ is a graph with topological clique number at least $(\frac12 + o_t(1))t^2$, then $G$ has a bipartite subgraph $H$ with $tcl(H)\geqslant t$. \end{theorem}
\begin{theorem} \label{T:toplb}
There is a constant $C$ such that for all $t$, there is a graph $G$ with $tcl(G) \geqslant Ct^2$ but $G$ has no bipartite subgraph $H$ with $tcl(H)\geqslant t$. \end{theorem}
Our proof takes $C =\frac14$, and the example is a complete graph. It seems reasonable to conjecture that $C = \frac14$ is tight (and indeed that cliques are extremal); our proof of $\frac12$ for an upper bound only requires this many vertices in a very structured case, which seems unlikely to occur. An existing upper bound of the shape $ct^2$ is twice the extremal function for topological minors. With the best known upper bound on this function (see \cite{KO_bestUB}), this gives a value $c = 20/23 + o(1)$. If it were possible to improve this upper bound to the best known lower bound, we would obtain $c = 9/32 + o(1)$ (this lower bound is from bipartite random graphs, and due to \L{}uczak); there remains only a small gap between this and our lower bound. This suggests the lower bound is likely to be hard to improve, since a substantial improvement would also improve bounds on the extremal function.
\section{\mbox{RB-bipartite} graphs} In this section, we introduce \mbox{RB-bipartite} graphs. These will serve as a general framework for proving the bounds.
\begin{definition} Let $H$ be a graph equipped with a 2-edge-colouring (with colours Red and Blue). Then $H$ is \textit{\mbox{RB-bipartite}} if every cycle uses an even number of red edges (call such cycles R-even). Equivalently, $H$ has no R-odd cycle. \end{definition}
We note that this definition is not symmetric under interchanging colours: for instance, a $K_3$ coloured red is not \mbox{RB-bipartite}, but a blue coloured $K_3$ is. It is easily seen that being \mbox{RB-bipartite} is equivalent to being bipartite if every blue edge is subdivided exactly once. The following lemma gives some other equivalent definitions.
\begin{lemma} \label{T:equivdefsrb} The following are equivalent. \begin{enumerate}
\item $H$ is \mbox{RB-bipartite}
\item $H$ has no circuit using an odd number of red edges (no R-odd circuit)
\item There is a partition $(X,Y)$ of $V(H)$ such that all edges between $X$ and $Y$ are red, and all edges within $X$ or within $Y$ are blue \end{enumerate} \end{lemma}
\begin{proof} It is easily seen that $(3)\implies (2) \implies (1)$, and so it remains to prove that \mbox{RB-bipartite} graphs have a partition as in (3).
Let $H_R$ denote the graph on the red edges, and $H_B$ the graph on blue edges. It is easily seen that $H_R$ is a bipartite graph. We will work on each component of $H_R$ in turn. Let $C_1$ be such a component, with bipartition $X_1,Y_1$. If there is a blue edge between $X_1$ and $Y_1$, then also taking a path in $H_R$ between the endpoints we get a cycle using an odd number of red edges, a contradiction. So the induced subgraph of $H$ on $C_1$ has an RB-bipartition $(X_1,Y_1)$ as in (3).
Let $C_2$ be a different component of the red subgraph, with bipartition $X_2,Y_2$. Suppose there is a blue edge from $X_1$ to $X_2$, and also $X_1$ to $Y_2$. Then taking a path between the endpoints in $C_2$, as well as in $C_1$ (note this path may be empty), we get a cycle using an odd number of red edges. This contradicts that $H$ is \mbox{RB-bipartite}, and so we can (uniquely, if there is an edge between $C_1$ and $C_2$) extend the RB-bipartition to their union.
We consider each connected component of $H$ in turn, and note that such connected components are unions of red-connected components; suppose we have $C = \bigcup_{i=1}^r C_i$. Relabelling if necessary, we can assume that $H[\cup_{i\leqslant s}C_i]$ is connected for all $s$. Apply the above argument first to $C_1$ and $C_2$. Consider now $C_1\cup C_2$ (with the uniquely extended RB-bipartition) and $C_3$. The argument for components actually only used connectedness, and so we can apply it in this setting also to get a unique extension of the bipartitions. We now repeat with $\cup_{i\leqslant 3}C_i$ and $C_4$, and so on. This gives an RB-bipartition of $C$. We can therefore apply this in turn to each component, and the result follows. \end{proof}
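The proof above is effectively an algorithm: propagate sides along edges, with red edges forcing a change of side and blue edges forcing the same side. A minimal sketch of such a checker (the helper name is ours, not from the paper):

```python
from collections import deque

def rb_bipartition(n, edges):
    """edges: list of (u, v, colour) with colour 'R' or 'B'.
    Returns a side assignment realising condition (3) of the lemma
    (red edges cross the partition, blue edges stay inside a side),
    or None if the graph has an R-odd cycle."""
    adj = [[] for _ in range(n)]
    for u, v, c in edges:
        adj[u].append((v, c))
        adj[v].append((u, c))
    side = [None] * n
    for root in range(n):
        if side[root] is not None:
            continue
        side[root] = 0
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v, c in adj[u]:
                want = side[u] ^ (1 if c == 'R' else 0)  # red flips sides
                if side[v] is None:
                    side[v] = want
                    queue.append(v)
                elif side[v] != want:
                    return None  # an R-odd cycle exists
    return side
```

For instance, an all-red triangle has an R-odd cycle, while an all-blue triangle is \mbox{RB-bipartite} with all vertices on one side.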
\begin{theorem} \label{T:rbhalf}
Let $H$ be a 2-coloured graph. Then $H$ has an \mbox{RB-bipartite} subgraph with at least $\frac12e(H)$ edges. \end{theorem}
\begin{proof}
Let $X,Y$ be a partition of $V(H)$. Define $e_R(X)$ to be the number of red edges with both ends inside $X$, and $e_R(X,Y)$ the number of red edges with one endpoint in $X$ and one endpoint in~$Y$; define $e_B(X)$ and $e_B(X,Y)$ analogously for blue edges.
Suppose that we place vertices in $X$ or $Y$ independently at random with probability~$\frac12$. Let $d(X,Y) = e_R(X,Y) - e_B(X,Y)$. Then since each edge is between $X$ and $Y$ with probability~$\frac12$, we have $\mathbb{E}(d(X,Y)) = \frac12 (e_R(H) - e_B(H))$. Pick some choice of $X,Y$ for which $d(X,Y)$ attains at least this expectation. Construct the subgraph $H'$ consisting of all blue edges inside $X$ or $Y$, and all red edges from $X$ to $Y$. Then by the lemma, $H'$ is \mbox{RB-bipartite}, and further \begin{align*}e(H') = (e_B(H) - e_B(X,Y)) + e_R(X,Y) = e_B(H) + d(X,Y) \geqslant e_B(H) + \frac12 (e_R(H) - e_B(H)) = \frac12 e(H)\end{align*}
as desired. \end{proof}
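The argument can be made concrete on small instances. The following sketch (a helper of our own; it searches partitions exhaustively rather than randomly, so a partition attaining the expectation is certainly found) extracts an \mbox{RB-bipartite} subgraph with at least half the edges:

```python
def rb_half_subgraph(n, edges):
    """edges: dict {(u, v): 'R' or 'B'} on vertices 0..n-1.
    Exhaustively pick a bipartition (X, Y) maximising
    d(X, Y) = e_R(X, Y) - e_B(X, Y); keeping red cross edges and
    blue internal edges then yields an RB-bipartite subgraph with
    at least e(H)/2 edges, as in the proof above."""
    best_X, best_d = set(), None
    for mask in range(2 ** n):
        X = {v for v in range(n) if (mask >> v) & 1}
        d = sum((1 if c == 'R' else -1) * ((u in X) != (v in X))
                for (u, v), c in edges.items())
        if best_d is None or d > best_d:
            best_X, best_d = X, d
    # keep red edges crossing the partition, blue edges inside a side
    keep = {e: c for e, c in edges.items()
            if (c == 'R') == ((e[0] in best_X) != (e[1] in best_X))}
    return best_X, keep
```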
These graphs are introduced to simplify later proofs by relating minors to coloured graphs. For topological minors, if we take $G$ to be a subdivided $K_t$, and form a 2-coloured graph $H$ by replacing each path with a Red edge if it has odd length, and Blue if it has even length, a subgraph of $G$ consisting of a union of paths is bipartite if and only if the corresponding graph in $H$ is \mbox{RB-bipartite} (it is easily seen a cycle in $G$ has odd length if and only if the corresponding cycle of $H$ is R-odd). For general minors, things are more complicated, though in the following restricted setting we can obtain a result sufficient for our purposes.
\begin{definition} \label{D:auxgraph} Let $G$ be a graph, with a partition into $n$ parts $V_i$, such that each $G[V_i]$ is a tree, and there is at most one edge between any two parts $V_i$ and $V_j$. Let $v_i\in V_i$ be an arbitrary choice of roots. The \textit{canonical path} from $v_i$ to $v_j$, where there is an edge from $V_i$ to $V_j$, is the unique $v_i v_j$ path in $G[V_i\cup V_j]$.
The \textit{auxiliary graph} $H[G]$ has vertex set $[n]$, and an edge from $i$ to $j$ whenever there is an edge between $V_i$ and $V_j$. This edge is coloured Red if the canonical $v_iv_j$ path has odd length, and Blue if it has even length. \end{definition}
We remark that strictly this definition of $H[G]$ also depends on the choice of roots, however we suppress this notation as the choice does not matter.
\begin{lemma} \label{T:bipiffrb} Let $G, H[G]$ be as in Definition~\ref{D:auxgraph}, and let $H'$ be some (coloured) subgraph of $H[G]$. Let $G'$ be a subgraph of $G$ containing all edges within a part $V_i$, together with the edge from $V_i$ to $V_j$ if and only if $ij\in E(H')$. Then $G'$ is bipartite if and only if $H'$ is \mbox{RB-bipartite}. \end{lemma}
\begin{proof} \begin{figure}
\caption{This figure illustrates how an odd cycle (or circuit) in the auxiliary graph necessarily produces an odd cycle in the main graph. The red path is the unique odd edge, the blue paths are even. The length of the sub-cycle of this odd circuit is also odd.}
\label{fig:bipcycle}
\end{figure}
We start by proving the `only if' statement. By relabelling, we can assume we have a cycle $1,\dotsc,k$ in $H'$, where an odd number of edges $i,i+1$ are Red (and indices are taken modulo $k$). We aim to construct an odd cycle (or indeed circuit) in $G'$.
We build a circuit $C$ in $G$ as follows. Start by traversing the canonical path from $v_1$ to $v_2$ (and call this path $P_1$), then continue to $v_3$ along canonical path $P_2$, and so on. $C$ clearly has odd length, since it has an odd number of odd length segments, and so $G'$ cannot be bipartite (see Figure~\ref{fig:bipcycle} for an illustration).
The `if' direction is similar; suppose that $G'$ is not bipartite, and consider an odd cycle $C$ in $G'$. Fix some starting point and orientation of the cycle. Relabelling, suppose that we pass through vertex classes $V_1,\dotsc,V_k$ (where we may have repetition) in this order; since there is only one edge between each pair of $V_i$, and each class is a tree, we have $k\geqslant 3$ and $V_{i-1},V_i,V_{i+1}$ are all distinct. Further, each $i$ is adjacent to $i+1$ in $H'$. Let $Q_i$ be the segment of $C$ inside $V_i$ (between $V_{i-1}$ and $V_{i+1}$, and inclusive of the edge to $V_{i-1}$).
Since $V_i$ is connected, there is some path $P'_i$ inside $V_i$ from $v_i$ to this path segment which meets the path in exactly one vertex; call its endpoint $x_i$. Then following $P'_i$ from $v_i$ to $x_i$, then traversing $Q_i$ and $Q_{i+1}$ until we reach $x_{i+1}$, and finally traversing $P'_{i+1}$ to $v_{i+1}$ must be the canonical path from $v_i$ to $v_{i+1}$ (note that we never repeat a vertex, and hence have a path, which is necessarily unique).
This produces an odd circuit, since each new edge we add is traversed twice. But this circuit is a union of canonical paths, an odd number of which must have odd length. In particular, the corresponding circuit $1\dotsc k$ in $H'$ has an odd number of Red edges, and therefore $H'$ is not \mbox{RB-bipartite}. \end{proof}
\section{Proof of theorem \ref{T:lb}} \begin{proof}
Let $H$ be a graph on vertex set $[n]$. Consider the graph $G(H)$ consisting of $n$ special `branch vertices' $v_1,\dotsc,v_n$, and for each $i<j$ a path between $v_i$ and $v_j$ of length either 1 (i.e. an edge between them) if $ij$ is an edge of $H$, or 2 if not. Equivalently, start with a copy of $K_n$, and subdivide each edge $v_iv_j$ once when $ij$ is not an edge of $H$. We note the auxiliary graph $H[G]$ is a copy of $K_n$, with $H$ coloured Red and $H^c$ coloured Blue.
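A minimal sketch of this construction, using plain edge lists (the helper name is ours):

```python
import itertools

def G_of_H(n, H_edges):
    """Build G(H): branch vertices 0..n-1; for each pair i < j,
    a direct edge if ij is an edge of H, and otherwise a path of
    length 2 through a fresh subdivision vertex."""
    edges, nxt = [], n
    for i, j in itertools.combinations(range(n), 2):
        if (i, j) in H_edges:
            edges.append((i, j))
        else:
            edges.append((i, nxt))
            edges.append((nxt, j))
            nxt += 1
    return edges
```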
Clearly, this graph $G(H)$ has Hadwiger number exactly $n$ for any $H$ (since we must always contract or delete all the internal vertices of induced paths for $t\geqslant 3$). We will let $H\sim G(n,\frac12)$ be an Erd\H{o}s--R\'enyi random graph.
What can we say about bipartite subgraphs of $G$? The subgraph of $G$ consisting of all paths where $ij$ is not an edge consists of some branch vertices and some internal vertices, and is immediately seen to form a bipartite subgraph (or equivalently, this holds by Lemma~\ref{T:bipiffrb}).
Applying this lemma, the maximal bipartite subgraphs of $G$ are formed by taking some bipartition $(X,X^c)$ of $[n]$, and taking all length 1 paths from $X$ to $X^c$, together with all length 2 paths contained in either $X$ or $X^c$. We assume our graph is one of the at most $2^n$ graphs formed in this way.
Fixing $X$, and sampling $H$ from $G(n,\frac12)$, each edge within or disjoint from $X$ has length 2 with probability $\frac12$, and each edge from $X$ to $X^c$ has length 1 with probability $\frac12$. This means that all maximal bipartite subgraphs are distributed like (subdivisions of) $G(n,\frac12)$. The subdivision does not affect the Hadwiger number. In particular, if we can show the probability $G(n,\frac12)$ has a $K_t$ minor is below $2^{-n}$, there is some choice of $H$ for which there is no bipartite $K_t$ minor.
Bollob\'as, Catlin and Erd\H{o}s \cite{BollobasCatlinErdos} showed that the probability some (fixed) partition of $G = G(n,\frac12)$ into $s$ parts is `compatible', i.e. there is an edge of $G$ between each pair of parts, is at most $\exp(-\binom{s}{2}2^{-n^2/s^2})$. Note that the parts of a $K_s$ model\footnote{A collection $(V_h)_{h\in H}$ of subsets (called parts) of $V(G)$ is a model of $H$ in $G$ if each $V_h$ is connected, and when $h\sim h'$ in $H$, there is an edge of $G$ between $V_h$ and $V_{h'}$. $H$ is a minor of $G$ if and only if there is an $H$ model in $G$.} form a compatible partition. The number of choices of partition into $s<n$ parts is at most $n^n$. Let $s = n / \sqrt{\log_2 n - 3\log_2\log_2 n}$.
Then it is easily seen $2^n n^n \exp(-\binom{s}{2} 2^{-n^2/s^2}) = o(1)$, and in particular the probability $G(n,\frac12)$ has a $K_s$ minor is less than $2^{-n}$ for large $n$. This means there is some particular choice of $H$ for which no bipartite subgraph of $G(H)$ has Hadwiger number at least \mbox{$s = (1+o(1))n/\sqrt{\log_2n}\leqslant t$}. \end{proof}
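To get a feel for the quantities involved: with this choice of $s$ we have $2^{-n^2/s^2}=(\log_2 n)^3/n$ exactly, so the logarithm of $2^n n^n \exp(-\binom{s}{2}2^{-n^2/s^2})$ can be evaluated directly. A numeric sketch (our own illustration, using plain floating point and $\binom{s}{2}=s(s-1)/2$):

```python
import math

def log_failure_bound(n):
    """Natural log of 2^n * n^n * exp(-binom(s,2) * 2^(-n^2/s^2))
    for s = n / sqrt(log2 n - 3 log2 log2 n); a negative value
    means the union bound succeeds."""
    l2n = math.log2(n)
    s = n / math.sqrt(l2n - 3 * math.log2(l2n))
    # by the choice of s, 2^(-n^2/s^2) equals (log2 n)^3 / n exactly
    return n * math.log(2) + n * math.log(n) - (s * (s - 1) / 2) * (l2n ** 3 / n)

# the bound is already negative for moderate n and keeps decreasing
print(log_failure_bound(2**20) < 0, log_failure_bound(2**30) < log_failure_bound(2**20))
```

(Note that $\log_2 n - 3\log_2\log_2 n$ must be positive, so the formula only makes sense once $n$ is reasonably large.)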
\section{Proof of Theorem~\ref{T:ub}} \begin{proof} Let $n = \ceil{t\sqrt{\log_2 t}}$, and let $G$ be a graph with $h(G)\geqslant n$. For the rest of this proof we will neglect rounding, since this can be absorbed into the value of $\epsilon$.
We delete as many edges as possible from $G$ while retaining a $K_{n}$ minor, and so we can assume that $G$ consists of $n$ disjoint subsets $V_1,\dotsc,V_n$, with each $G[V_i]$ a tree, as well as exactly one edge between each pair $V_i$ and $V_j$ ($i\neq j$).
Recall from Definition~\ref{D:auxgraph} the canonical path and auxiliary graph $H[G]$; we will use these notions here. We start by reserving a set $S\subset V(H)$ of $\epsilon n$ vertices for later use, and apply our theorems with $G' = G[\cup_{i\notin S}V_i]$. Note that $H[G']$ is a 2-coloured $K_{(1-\epsilon)n}$.
By Theorem~\ref{T:rbhalf}, $H[G']$ has an \mbox{RB-bipartite} subgraph $H'$ (with \mbox{RB-bipartition} $(A,B)$) on at least $\frac12 \binom{(1-\epsilon)n}{2}$ edges. By Lemma~\ref{T:bipiffrb}, the corresponding subgraph $G''$ of $G'$ (consisting of all edges inside $G'[V_i]$, as well as the edge from $V_i$ to $V_j$ whenever $ij$ is an edge of $H'$) is bipartite, and this also holds for any proper subgraph of $H'$. Note also that a minor in $H'$ directly corresponds to a minor in $G''$ (by first contracting each $V_i$ to a vertex), and hence to a minor in $G'$ which is bipartite.
\begin{definition}
Given a graph $F$, a collection of subsets $(U_v)_{v\in V(K_m)}$ is called a \textit{$K_m$-compatible-partition} if it forms a $K_m$ minor without the connectedness condition, i.e. the subsets are pairwise disjoint, and for each pair $vw$, there is an edge in $F$ between $U_v$ and $U_w$. \end{definition}
The following result of Thomason \cite{ThomasonComplete} shows that $H'$ contains a $K_m$-compatible-partition for $m = (1-3\epsilon/2)n/\sqrt{\log_2 n} = (1-3\epsilon/2)t$, provided $t$ (and hence $n$) is large, by taking a subgraph of $H'$ of density between $\frac12$ and $\frac34$.
\begin{theorem}[\cite{ThomasonComplete}]
Let $\epsilon > 0$. Then there is an $n_0(\epsilon)$ such that for all \mbox{$(\log \log n)^{2+\epsilon} < p <1 - (\log n)^{-1/\epsilon}$}, and all graphs $F$ with $n>n_0$ vertices and density $p$, $F$ has a $K_m$-compatible-partition for some $m\geqslant (1-\epsilon)n/\sqrt{\log_{1/(1-p)}n}$. \end{theorem}
If in addition $H'$ had `reasonably high' connectivity, further results of that paper \cite{ThomasonComplete} would now imply that we can extend this to a minor, with the proof implicitly using (in the language of \cite{ThomasonWales}) two subsets: $P$ (the projector) to project subsets to logarithmically smaller sets, and $C$ (the connector) to connect small sets. However, we do not know that $H'$ has suitable connectivity, and so instead use the reserved set $S$ to serve these roles.
Suppose our $K_m$-compatible-partition in $H'$ consists of the pairwise adjacent disjoint subsets $U_1,\dotsc,U_m$ of $V(H')$.
We start by performing the role of the projector. Let us first restrict attention to $U_1$, and write $U_A = U_1 \cap A$, $U_B = U_1\cap B$ (recalling that $(A,B)$ is some fixed \mbox{RB-bipartition} of $H'$). Let $i$ be some element of $S\setminus V(H')$. We want to add $i$ to the subgraph $H'$ in such a way that we preserve \mbox{RB-bipartiteness}. Adding $i$ to $A$, we can add all red edges from $i$ to $B$, and all blue edges from $i$ to $A$; adding $i$ instead to $B$ reverses the colours.
In particular, one of these choices must allow us to add edges from $i$ to at least half of $U_1$. Make such a choice, and add $i$ and all such edges to $H'$. Replace $U_1$ with the set of its vertices not adjacent in $H'$ to $i$, and iterate this process. In this way, we can construct a set $P(U_1)$ of size at most $p(U_1) = 1 + \log_2 |U_1|$ (adding its vertices to $H'$), add some extra edges to $H'$ such that every element of $U_1$ has a neighbour in $P(U_1)$, and extend the \mbox{RB-bipartition} $(A,B)$ of $H'$ to these new vertices and edges.
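The halving behind the bound $p(U_1) = 1 + \log_2|U_1|$ can be sketched in a few lines of Python (our illustration, with an arbitrary colouring function standing in for the 2-coloured edges): each new vertex may absorb whichever colour class of the still-uncovered vertices is larger, so the remaining set at least halves in every round.

```python
import math
import random

def projector_rounds(U, colour_of):
    """Greedy projector step: reserved vertex i is placed on the side that lets
    it cover the larger colour class of the uncovered vertices (>= half)."""
    remaining, rounds, i = set(U), 0, 0
    while remaining:
        red = {u for u in remaining if colour_of(i, u) == 'R'}
        blue = remaining - red
        remaining -= red if len(red) >= len(blue) else blue
        rounds += 1
        i += 1
    return rounds

random.seed(0)
U = list(range(100))
rounds = projector_rounds(U, lambda i, u: random.choice('RB'))
assert rounds <= 1 + math.log2(len(U))  # the bound p(U_1) = 1 + log2|U_1|
```

Since at least half of the remaining vertices are removed per round regardless of the colouring, the assertion holds for every choice of `colour_of`, not just the random one used here.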
We can construct such sets in turn for each of $U_1,\dotsc,U_m$ using at most \mbox{$\sum p(U_i) \leqslant t + t\log(n/t)$} elements of $S$ overall, assuming $|S| > 2t + t\log(n/t)$. Since this holds for our value of $n$, we can obtain an \mbox{RB-bipartite} extension $H''$ of $H'$, which also contains disjoint new sets $P(U_i)$ of size at most $p(U_i)$, where each vertex of $U_i$ has a neighbour (in $H''$) inside $P(U_i)$. Let $S' = S \setminus \cup_i P(U_i)$, and note $|S'|\geqslant \epsilon n / 2$.
\begin{lemma} Let $H'$ be an \mbox{RB-bipartite} graph, $x,y\in V(H')$, and let $T\subset S'$ have size $t$, where $S'$ is disjoint from $H'$. Suppose that every vertex of $T$ is joined to every vertex of $T\cup V(H')$ by either a red or a blue edge. Then either $T$ forms an \mbox{RB-bipartite} $K_t$, or there is some set of at most 2 vertices from $T$ which can be added to $H'$ to form the internal vertices of an $xy$ path while preserving \mbox{RB-bipartiteness}. \end{lemma} \begin{proof} Let $H'$ have \mbox{RB-bipartition} $(A,B)$. Suppose first that $x,y\in A$ (this corresponds to joining $v_i$ and $v_j$ by a path using an even number of red edges). If some vertex is joined to both $x$ and $y$ by edges of the same colour, we can create a length 2 path from $x$ to $y$, adding the new vertex to $A$ or $B$ depending on the colour.
So we can assume this does not happen. Let $X$ be the set of vertices joined to $x$ by a red edge, and $Y$ those vertices joined to $y$ by a red edge; note that $(X,Y)$ forms a partition of $T$. If some pair of vertices $v,w$ in $X$ is joined by a red edge, then $xvwy$ is an R-even path from $x$ to $y$; we can extend our bipartition by adding $v$ to $A$ and $w$ to $B$. The same argument works if $v,w\in Y$.
If $v\in X, w\in Y$, and $vw$ is blue, then we likewise get a length three path. So we can assume that these cases never happen. But then $(X,Y)$ forms an \mbox{RB-bipartition} of $T$. But since the graph on $T$ is complete, we directly have an \mbox{RB-bipartite} $K_t$ subgraph.
If instead $x\in A, y\in B$ the argument is similar. If some vertex is joined to $x$ and $y$ by different colours, we obtain a path of length 2 from $x$ to $y$ which is R-odd. Therefore, we can assume that the set $R$ of vertices joined to $x$ and $y$ by red edges and the set $B'$ of those joined by blue edges form a partition of $T$. As above, and illustrated in Figure 2, if there is a red edge within either $R$ or $B'$, or a blue edge between $R$ and $B'$, we get an R-odd path from $x$ to $y$ of length at most 3. So we can assume this doesn't happen, and therefore the partition $(R,B')$ shows that $T$ is directly an \mbox{RB-bipartite} $K_t$.
\end{proof}
This shows in particular that if $|S'|\geqslant t + 2l$, we can sequentially connect any $l$ pairs in a disjoint fashion while preserving \mbox{RB-bipartiteness}.
We will now use Lemma 4.2 and the set $S'$ to sequentially connect pairs from $H''$ and hence obtain a minor; we can assume that we never obtain an \mbox{RB-bipartite} $K_t$ subgraph in the lemma as otherwise we are directly done. We will first label $P(U_1)$ as $x_1,\dotsc,x_{|P(U_1)|}$.
Since $\sum p(U_i) + t \leqslant |S'|/2$, we can use two new elements of $S'$ to join $x_1$ to $x_2$, then $x_2$ to $x_3$ and so on while remaining \mbox{RB-bipartite}. We can then repeat this for $U_2$ and so on. Suppose that the internal vertices of the paths used to connect $P(U_i)$ are $C(U_i)$. We now replace $U_i$ with $W_i = U_i \cup P(U_i) \cup C(U_i)$, and extend the subgraph $H''$ (in an \mbox{RB-bipartite} fashion) to include $\cup_i C(U_i)$, with the edges along the paths also added. Each $W_i$ forms a connected subset containing $U_i$, and so the $W_i$ form an \mbox{RB-bipartite} $K_m$ minor. This corresponds to a bipartite $K_m$ minor in $G$. \end{proof}
\begin{figure}
\caption{The green and yellow nodes represent $A$ and $B$ respectively. The thick lines show how to add the edge while remaining bipartite.}
\label{fig:bipaddedge}
\end{figure} \section{Results for topological minors}
\begin{proof}[Proof of Theorem~\ref{T:topub}]
We reduce the theorem to the following claim, which corresponds to topological minors in $G$ where the branch vertices are preserved.
\begin{clm}
Let $H$ be a 2-coloured copy of $K_{\binom{t+3}{2}} = K_{t^2/2 + O(t)}$. Then $H$ contains an \mbox{RB-bipartite} topological $K_t$ minor.
\end{clm}
We will build up an \mbox{RB-bipartite} $TK_s$ (topological $K_s$) one vertex at a time; the base case $s=1$ is trivial. We impose the additional constraint that our $TK_s$ uses at most $1 + \binom{s+1}{2}$ vertices to help with our inductive step.
Suppose that $V = \{v_1,\dotsc,v_s\}$ are the branch vertices of our $TK_s$, and that $S$ is the set of vertices not appearing in the $TK_s$. Pick an arbitrary vertex $w$ of $S$. Let $(A,B)$ be an \mbox{RB-bipartition} of the vertices of the $TK_s$. By a result used in the proof for minors, adding $w$ to either $A$ or $B$ allows us to retain at least half of the edges from $w$; pick some choice where we can do so. It remains to build subdivided edges from $w$ to the remaining at most $s/2$ vertices.
Pick some vertex $v_i$ we do not yet have a path to. Suppose that we are trying to build an R-odd path from $w$ to $v_i$. Let $S_1$ be those vertices in $S$ red-joined to $w$, and $S_2$ those vertices which are blue-joined to $w$.
If some vertex of $S_1$ is blue-joined to $v_i$, we can take this path of length 2 (which uses exactly one red edge) and extend our model. The same argument works if a vertex of $S_2$ is red-joined to $v_i$. So we can assume this never happens.
If $S_1$ has size at least $t$, but contains no red edges, we directly have a blue (and hence \mbox{RB-bipartite}) $TK_t$. So we can assume there is some red edge $xy$ in $S_1$ in this case. But then the path $wxyv_i$ has 3 red edges, and so is R-odd.
If instead $S_2$ has size at least $t$, we can also use two internal vertices to extend our minor, or otherwise directly obtain a bipartite $TK_t$. If we want to join by an R-even path, we swap the colours of all edges incident to $w$, apply the above argument, and then swap back. In particular, we can use at most $1+ 2(s/2) = s+1$ additional vertices to turn our bipartite $K_s$ topological minor into an \mbox{RB-bipartite} $TK_{s+1}$, provided at least $2t+s+1$ vertices remain. Since the $TK_s$ had at most $1+ \binom{s}{2}$ vertices, this new topological minor has at most $1+ \binom{s+1}{2}$ vertices. In particular, taking $n > 2t + 1 +\binom{t+1}{2}$, $H$ has an \mbox{RB-bipartite} $TK_t$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T:toplb}] Our lower bound relies on complete bipartite graphs being good examples of graphs with high average degree and no topological $K_t$ minor.
Let $G$ be a complete bipartite graph on parts $A,B$, and suppose that there is a topological $K_t$ minor, say with $s$ branch vertices in $A$ and $(t-s)$ branch vertices in $B$. Then for each pair of vertices (in $A$, say), we need a distinct vertex in $B$ to lie on the path between them. This means we need $|B|\geqslant (t-s) + \binom{s}{2}$, and so overall $|G| \geqslant t + \binom{s}{2} + \binom{t-s}{2} \geqslant t + 2\binom{t/2}{2} = t^2/4 + t/2$ by convexity.
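For concreteness, the convexity step can be expanded directly (a routine verification of ours): since $s\mapsto\binom{s}{2}+\binom{t-s}{2}$ is convex and minimised at $s=t/2$,

```latex
\[
|G| \;\geqslant\; t + \binom{s}{2} + \binom{t-s}{2}
    \;\geqslant\; t + 2\binom{t/2}{2}
    \;=\; t + \frac{t}{2}\Bigl(\frac{t}{2}-1\Bigr)
    \;=\; \frac{t^2}{4} + \frac{t}{2}.
\]
```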
If we take $G$ to be a complete graph $K_{\ceil{t^2/4}}$, no bipartite subgraph can have a topological $K_t$ minor (as there are too few vertices). The result follows as $\mathrm{tcl}(K_n) = n$.
\end{proof}
\end{document}
\begin{document}
\title{Critically loaded multi-server queues with abandonments, retrials, and time-varying parameters}
\author{Young Myoung Ko and Natarajan Gautam} \maketitle \begin{abstract} In this paper, we consider modeling time-dependent multi-server queues that include abandonments and retrials. For the performance analysis of such queues, fluid and diffusion models, known as ``strong approximations'', have been widely used in the literature. Although they are proven to be asymptotically exact, their effectiveness as approximations in \emph{critically loaded regimes} needs to be investigated. Indeed, we find that existing fluid and diffusion approximations might be either inaccurate under simplifying assumptions or computationally intractable. To address that concern, this paper focuses on developing a methodology by adjusting the fluid and diffusion models so that they significantly improve the estimation accuracy. We illustrate the accuracy of our adjusted models by performing a number of numerical experiments. \end{abstract}
\section{Introduction} \label{sec_introduction} In this paper, we are interested in the precise analysis of time-varying many-server queues with abandonments and retrials described in \citet{Mandelbaum:1998p1029} (see Figure \ref{fig_retrial}). \begin{figure}
\caption{Multi-server queue with abandonment and retrials, \citet{Mandelbaum:1998p1029}}
\label{fig_retrial}
\end{figure} Inspired by call centers, there have been extensive studies on multi-server queues, especially those with a large number of servers. Most of the recent studies utilize asymptotic analysis, as it makes the problem tractable and also provides good approximations under certain conditions. Asymptotic analysis typically utilizes weak convergence to fluid and diffusion limits, which is nicely summarized in \citet{bill99} and \citet{Whitt02}. Methodologies to obtain fluid and diffusion limits, as described in \citet{Halfin81}, have been developed in the literature in two different ways in terms of the traffic intensity.\\ The first approach is to consider the convergence of a sequence of traffic intensities to a certain value. Depending on the value to which the sequence converges, there are three different operational regimes: efficiency driven (ED), quality and efficiency driven (QED), and quality driven (QD). Roughly speaking, if the traffic intensity ($\rho$) of the limit process is strictly greater than 1, the regime is called ED; if $\rho = 1$, it is QED; otherwise it is QD. Many research studies have been done under the ED and QED regimes for multi-server queues like call centers (\citet{Halfin81}, \citet{Puhalskii:2000p1880}, \citet{Garnet:2002p1103}, \citet{Whitt04}, \citet{Whitt06b}, \citet{Pang:2009p1886}). Recently, the QED regime, also known as the ``Halfin-Whitt regime'', has received a lot of attention; this is because it actually achieves both high utilization of servers
and quality of service (\citet{Zeltyn:2005p1072}), and is a favorable operational regime for call centers with strict performance constraints (\citet{Mandelbaum:2009p1992}).\\ The second way to obtain limit processes is to accelerate parameters keeping the traffic intensity fixed. An effective methodology called ``uniform acceleration'' or ``strong approximations'' which enables the analysis of time-dependent queues (\citet{kurtz78}, \citet{mandelbaum95}, \citet{mandelbaum98}, \citet{massey98}, \citet{Whitt90}, \citet{Mandelbaum:1998p1029}, \citet{Hampshire:2009p1879}) is included in this scheme and in fact is the basis of this paper.\\ The advantage of the strong approximations as described in \citet{kurtz78} is that they can be applied to a wide class of stochastic processes and can be nicely extended to time-dependent systems by combining with the results in \citet{Mandelbaum:1998p1029}. However, they cannot be applied to multi-server queues directly, because one of the required assumptions is not satisfied: for the diffusion model, the differentiability of the rate functions (e.g.\ net arrival rates and service rates) is necessary. But some rate functions are not differentiable everywhere since they are of the form $\min(\cdot,\cdot)$ or $\max(\cdot,\cdot)$. To extend the theory to non-smooth rate functions, \citet{Mandelbaum:1998p1029} proves weak convergence by introducing a new derivative called the ``scalable Lipschitz derivative'' and provides models for several queueing systems such as Jackson networks, multi-server queues with abandonments and retrials, multi-class preemptive priority queues, etc. In addition, several sets of differential equations are also provided to obtain the mean value and covariance matrix of the limit processes. It, however, turns out that the resulting sets of differential equations are computationally intractable to solve in general and hence the theorems cannot be applied to obtain numerical values of performance measures. 
In a follow-on paper, \citet{Mandelbaum:2002p995} provides numerical results for queue lengths and waiting times in multi-server queues with abandonments and retrials by adding an assumption to deal with computational intractability. Specifically, the paper assumes that the set of time points where the fluid model hits non-differentiable points has measure zero, which eventually enables the application of Kurtz's diffusion models. However, as pointed out in \citet{Mandelbaum:2002p995}, if the system stays close to a critically loaded phase for a long time (i.e. \emph{lingering} around a non-differentiable point), their approach may cause significant inaccuracy.
To explain this inaccuracy in detail, consider a multi-server queue with abandonments and retrials as shown in Figure \ref{fig_retrial}. As an example we select numerical values $n_t=50, \mu_t^1=1,\mu_t^2=0.2$ for all $t$, whereas $\lambda_t$ alternates between $\lambda_t^1 = 45$ and $\lambda_t^2=55$ every two units of time (the parameters are defined in Section \ref{sec_problem} and illustrated in Figure \ref{fig_retrial}). Using the measure-zero assumption in \citet{Mandelbaum:2002p995}, we graph $E[x_1(t)]$ and $E[x_2(t)]$ in Figure \ref{fig_eg_inaccuracy} (a), and also $Var[x_1(t)]$, $Var[x_2(t)]$, and $Cov[x_1(t),x_2(t)]$ in Figure \ref{fig_eg_inaccuracy} (b). Notice that, although $E[x_1(t)]$ is reasonably accurate, the others ($E[x_2(t)]$, $Var[x_1(t)]$, $Var[x_2(t)]$, and $Cov[x_1(t),x_2(t)]$) are not accurate at all. The reason is that the system lingers around the non-differentiable points. In addition, if one were to solve the differential equations numerically via the computationally intractable Lipschitz-derivative formulation described in \citet{Mandelbaum:1998p1029}, a similar level of inaccuracy occurs. We explain this in detail in Section \ref{subsec_inaccuracy}. This underscores the need for a methodology to accurately predict the system performance, which is the focus of this study.\\ \begin{figure}
\caption{Simulation vs Fluid and diffusion model with measure-zero assumption}
\label{fig_eg_inaccuracy}
\end{figure} Having motivated the need to develop a methodology for the critically loaded phase, we now describe its importance. According to \citet{mandelbaum98} and \citet{Mandelbaum:2002p995}, time-dependent queues make transitions among three phases: underloaded, critically loaded, and overloaded. The phase of the system is determined by the fluid model. The limit process in the strong approximations does not require any regimes such as QD, QED, or ED. However, from Section 1.4 in \citet{Zeltyn:2005p1072}, one can find a rough correspondence between the operational regimes (QD, QED, and ED) and the phases in time-varying queues (underloaded, critically loaded, and overloaded). Recall that the QED regime is favorable to the operation of call centers. Therefore, capturing the dynamics of multi-server queues in the critically loaded phase is also of significant importance. Nonetheless, from Figure \ref{fig_eg_inaccuracy}, we observe two major issues in the existing approach: 1) the fluid model (where the non-differentiability issue is actually irrelevant) is itself inaccurate and 2) sharp spikes which cause massive estimation errors are observed at the non-differentiable points in the diffusion model, in contrast to the smooth curves in the simulation. In this paper, we approach the above two issues from a different point of view and provide an effective solution to them. With these in mind, the contributions of this paper can be summarized as follows: \begin{enumerate} \item To the best of our knowledge, inaccuracy in the fluid model has never been addressed in the literature. We explain why it happens and ameliorate the fluid model. \item Sharp spikes observed in the diffusion model cannot be resolved using the methodology in the literature. We provide a reasonable approximation methodology that smooths out the spikes and improves the estimation accuracy dramatically. \end{enumerate} We now describe the organization of this paper. 
In Section \ref{sec_problem}, we state the problem considered in this paper. In Section \ref{sec_strong}, we summarize the strong approximations in \citet{kurtz78} and \citet{Mandelbaum:1998p1029}, and describe the above issues in detail. In Section \ref{sec_adjustedfluid}, we construct an adjusted fluid model to estimate the exact mean value of the system state. However, this would not immediately result in a computationally feasible approach. To address this, in Section \ref{sec_adjusteddiffusion}, we explain our Gaussian-based approximations to achieve computational feasibility and smoothness in the diffusion model. Further investigation on the adjusted models is provided in Section \ref{sec_g} to show how our adjusted models actually contribute to the estimation accuracy. In Section \ref{sec_numerical}, we provide a number of numerical examples and compare against the existing approach as well as simulation. Finally, in Section \ref{sec_conclusion}, we make concluding remarks and explain directions for future work. \section{Problem description} \label{sec_problem} Consider Figure \ref{fig_retrial} that illustrates a multi-server queue with abandonments and retrials as described in \citet{Mandelbaum:1998p1029} and \citet{Mandelbaum:2002p995}. There are $n_t$ servers in the service node at time $t$. Customers arrive at the service node according to a non-homogeneous Poisson process at rate $\lambda_t$. The service time of each customer is exponentially distributed with rate $\mu_t^1$. Customers in the queue are served under the FCFS policy, and customers abandon at rate $\beta_t$, with exponentially distributed patience times. Abandoning customers leave the system with probability $p_t$ or go to a retrial queue with probability $1-p_t$. 
The retrial queue is equivalent to an infinite-server queue, and hence each customer in the retrial queue waits there for a random amount of time with mean $1/\mu_t^2$ and then returns to the service node.\\ Let $X(t) = \big(x_1(t),x_2(t)\big)$ be the system state where $x_1(t)$ is the number of customers in the service node and $x_2(t)$ is the number of customers in the retrial queue. Then, $X(t)$ is the unique solution to the following integral equations: \begin{eqnarray}
x_1(t) &=& x_1(0) +Y_1\Big(\int_{0}^{t}\lambda_s ds\Big) + Y_2\Big(\int_{0}^{t}x_2(s)\mu_s^2ds\Big) - Y_3\Big(\int_{0}^{t}\big(x_1(s)\wedge n_s\big)\mu_s^1ds\Big) \nonumber \\
&& - Y_4\Big(\int_{0}^{t}\big(x_1(s)-n_s\big)^+\beta_s(1-p_s)ds\Big) - Y_5\Big(\int_{0}^{t}\big(x_1(s)-n_s\big)^+\beta_sp_sds\Big), \label{eqn_rx1} \\
x_2(t) &=& x_2(0) + Y_4\Big(\int_{0}^{t}\big(x_1(s)-n_s\big)^+\beta_s(1-p_s)ds\Big) - Y_2\Big(\int_{0}^{t}x_2(s)\mu_s^2ds\Big), \label{eqn_rx2} \end{eqnarray} where $Y_i$'s are independent rate-$1$ Poisson processes.\\ The performance measures we are interested in are $E[X(t)]$ and $Cov[X(t),X(t)]$ (i.e. $Var[x_1(t)]$, $Var[x_2(t)]$, and $Cov[x_1(t),x_2(t)]$) for any given time $t \in [0,T]$, where $T < \infty$ is a constant. In particular, we are interested in systems that \emph{linger} near the critically loaded phase for a long time. As one may notice, the above two equations (\ref{eqn_rx1}) and (\ref{eqn_rx2}) cannot be solved directly. If all the parameters are constant, i.e. $\lambda_t = \lambda, \mu_t^1 = \mu^1, \mu_t^2=\mu^2, \beta_t =\beta,$ and $p_t = p$, one can consider a Continuous Time Markov Chain (CTMC) model to obtain the performance measures. However, even assuming constant parameters, calculating the performance measures at any given time $t$ is hard since $x_1(t)$ and $x_2(t)$ both are unbounded, and solving balance equations in two or more dimensions requires tremendous effort. Furthermore, when the number of servers is large, computational issues might arise. Accordingly, we take advantage of an asymptotic methodology that is adequate for the analysis of time-varying systems with a large number of servers. Nevertheless, as briefly mentioned in Section \ref{sec_introduction}, we found that the existing methodologies are either computationally intractable or significantly inaccurate in the critically loaded phase. The objective of this paper is to develop a new approach to enhance the accuracy in estimating the mean value and covariance matrix for the multi-server queues with abandonments and retrials.\\ To do so, we start by summarizing the strong approximations and addressing the potential limitations in the following section. 
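Since equations (\ref{eqn_rx1}) and (\ref{eqn_rx2}) define a CTMC, short sample paths can be generated directly with Gillespie's algorithm. The sketch below is our illustration, not the authors' code; $n_t$, $\mu_t^1$, $\mu_t^2$, and $\lambda_t$ follow the example in the introduction, while the values of $\beta$ and $p$ are assumptions of ours, and for simplicity $\lambda_t$ is evaluated at the current jump time rather than handled exactly (e.g.\ by thinning) across rate switches.

```python
import random

def simulate(T, lam, n, mu1, mu2, beta, p, x0=(0, 0), seed=None):
    """One sample path of (x1, x2) up to time T via Gillespie's algorithm."""
    rng = random.Random(seed)
    t, (x1, x2) = 0.0, x0
    while True:
        # rates f_1..f_5 with jump vectors l_1..l_5, as in (eqn_rx1)-(eqn_rx2)
        rates = [
            (lam(t), (1, 0)),                            # external arrival
            (mu2 * x2, (1, -1)),                         # retrial rejoins
            (mu1 * min(x1, n), (-1, 0)),                 # service completion
            (beta * (1 - p) * max(x1 - n, 0), (-1, 1)),  # abandon -> retrial
            (beta * p * max(x1 - n, 0), (-1, 0)),        # abandon -> leaves
        ]
        total = sum(r for r, _ in rates)
        t += rng.expovariate(total) if total > 0 else T
        if t >= T:
            return x1, x2
        u, acc = rng.random() * total, 0.0
        for r, (d1, d2) in rates:
            acc += r
            if u <= acc:
                x1, x2 = x1 + d1, x2 + d2
                break

# example of Section 1: n_t=50, mu1=1, mu2=0.2, lambda alternating 45/55;
# beta=0.5 and p=0.5 are illustrative assumptions of ours
lam = lambda t: 45.0 if int(t // 2) % 2 == 0 else 55.0
runs = [simulate(4.0, lam, 50, 1.0, 0.2, 0.5, 0.5, seed=i) for i in range(200)]
mean_x1 = sum(x1 for x1, _ in runs) / len(runs)
```

Averaging many such paths gives the simulation baselines against which the fluid and diffusion estimates of $E[x_1(t)]$ and $E[x_2(t)]$ can be compared.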
\section{Summary of the strong approximations} \label{sec_strong} In Section \ref{subsec_strong}, we recapitulate the strong approximations in \citet{kurtz78} and \citet{Mandelbaum:1998p1029}. In Section \ref{subsec_inaccuracy}, we explain what produces estimation errors and why existing methodologies do not fix them. \subsection{Strong approximations} \label{subsec_strong} In this section, we review the fluid and diffusion approximations developed by \citet{kurtz78} that our methodology builds upon. We also briefly mention the result in \citet{Mandelbaum:1998p1029} which extends Kurtz's result to models involving non-smooth rate functions. Moreover, it is worthwhile to note that for $n\in \mathbf{N}$, the state of the queueing system $X_n(t)$ includes jumps but the limit process is continuous. Therefore, the weak convergence result that is presented is with respect to the uniform topology on the space $D$ (\citet{bill99} and \citet{Whitt02}).\\ Let $X(t)$ be an arbitrary $d$-dimensional stochastic process which is the solution to the following integral equation: \begin{eqnarray}
X(t) = x_0 + \sum_{i=1}^{k} l_i Y_{i} \bigg(\int_{0}^{t} f_{i}\big(s,X(s)\big)ds \bigg), \label{eqn_001} \end{eqnarray}
where $x_0 = X(0)$ is a constant, $Y_{i}$'s are independent rate-$1$ Poisson processes, $l_i \in \mathbf{Z}^d$ for $i \in \{1,2,\ldots, k\}$ are constant, and $f_i$'s are continuous functions such that $|f_i(t,x)| \le C_i (1+|x|)$ for some $C_i < \infty$, $t\le T$ and $T<\infty$. Note that we just consider a finite number of $l_i$'s to simplify proofs, which is reasonable for real world applications. \\ Notice that a special case of $X(t)$ in equation (\ref{eqn_001}) is the $X(t)$ we described in equations (\ref{eqn_rx1}) and (\ref{eqn_rx2}) in our problem explained in Section \ref{sec_problem}. Following the notation in equation (\ref{eqn_001}), we have, for our problem in Section \ref{sec_problem}, $x=(x_1,x_2)$ and $t\le T$, \begin{align*}
f_1(t,x) &= \lambda_t, f_2(t,x) = \mu_t^2 x_2, f_3(t,x) = \mu_t^1(x_1 \wedge n_t),\\
f_4(t,x) &= \beta_t(1-p_t)(x_1-n_t)^+, f_5(t,x) = \beta_tp_t(x_1-n_t)^+, \\
l_1 &= \binom{1}{0}, l_2 = \binom{1}{-1}, l_3 = \binom{-1}{0}, l_4=\binom{-1}{1}, \textrm{ and } l_5=\binom{-1}{0}. \end{align*} Coming back to the generalized $X(t)$ process, we reiterate that it is usually not tractable to solve the integral equation (\ref{eqn_001}). Therefore, to approximate the $X(t)$ process, define a sequence of stochastic processes $\{X_n(t)\}$ which satisfy the following integral equation: \begin{eqnarray*}
X_n(t) = x_0 + \sum_{i=1}^{k} \frac{1}{n} l_i Y_{i} \bigg(\int_{0}^{t} n f_{i}\big(s, X_n(s)\big)ds \bigg). \label{eqn_002} \end{eqnarray*} Typically the process $X_n(t)$ (usually called a scaled process) is obtained by taking $n$ times faster rates of events and $1/n$ of the increment of the system state. This type of setting is used in the literature and is denoted as ``uniform acceleration'' in \citet{massey98}, \citet{Mandelbaum:1998p1029}, and \citet{Mandelbaum:2002p995}. Then, the following theorem provides the fluid model to which $\{X_n(t)\}$ converges almost surely as $n\rightarrow \infty$. Define \begin{eqnarray}
F(t,x) = \sum_{i=1}^{k} l_i f_{i}(t,x) \label{eqn_F}. \end{eqnarray} \begin{theorem}[Fluid model, \citet{kurtz78}] \label{theo_fluid}
Suppose there is a constant $M < \infty$ such that $|F(t,x)-F(t,y)| \le M|x-y|$ for all $t \le T$, where $T<\infty$. Then, $\lim_{n \rightarrow \infty} X_n(t) = \bar{X}(t)$ a.s., where $\bar{X}(t)$ is the solution to the following integral equation: \begin{eqnarray*}
\bar{X}(t) = x_0 + \sum_{i=1}^{k} l_i \int_{0}^{t} f_{i}\big(s, \bar{X}(s)\big)ds. \label{eqn_003} \end{eqnarray*} \end{theorem} Note that $\bar{X}(t)$ is a deterministic time-varying quantity. We will subsequently connect $\bar{X}(t)$ and $X(t)$ defined in equation (\ref{eqn_001}), but before that we provide the following result. Once we have the fluid model, we can obtain the diffusion model from the scaled centered process ($D_n(t)$). Define $D_n(t)$ to be $\sqrt{n}\big(X_n(t) - \bar{X}(t)\big)$. Then, the limit process of $D_n(t)$ is provided by the following theorem. \begin{theorem}[Diffusion model, \citet{kurtz78}] \label{theo_diffusion} If $f_i$'s and $F$, for some $M<\infty$, satisfy \begin{eqnarray*}
|f_i(t,x)-f_i(t,y)| \le M|x-y| \quad \textrm{and} \quad \bigg| \frac{\partial}{\partial x_i}F(t,x)\bigg| \le M, \qquad \textrm{for } i \in \{1,\ldots,k\} \textrm{ and } 0\le t\le T, \end{eqnarray*} then $\lim_{n \rightarrow \infty} D_n(t) = D(t)$ where $D(t)$ is the solution to \begin{eqnarray*}
D(t) = \sum_{i=1}^{k} l_i \int_{0}^{t} \sqrt{f_i\big(s,\bar{X}(s)\big)}dW_i(s) + \int_{0}^{t} \partial F\big(s,\bar{X}(s)\big)D(s) ds, \label{eqn_004} \end{eqnarray*} where $W_i(\cdot)$'s are independent standard Brownian motions, and $\partial F(t,x)$ is the gradient matrix of $F(t,x)$ with respect to $x$. \end{theorem} \begin{remark} \label{rem_nondiff} Theorem \ref{theo_diffusion} requires that $F(\cdot,\cdot)$ has a continuous gradient matrix. Therefore, if we don't have such an $F$, then we cannot apply Theorem \ref{theo_diffusion} directly to obtain the diffusion model. \end{remark} \begin{remark} \label{rem_gaussian} According to \citet{ethier86}, if $D(0)$ is a constant or a Gaussian random vector, then $D(t)$ is a Gaussian process. \end{remark} Now, we have the fluid and diffusion models for $X_n(t)$. Therefore, for a large $n$, $X_n(t)$ is approximated by \begin{eqnarray*}
X_n(t) \approx \bar{X}(t) + \frac{D(t)}{\sqrt{n}}. \label{eqn_005} \end{eqnarray*} If we follow this approximation, we can also approximate the mean and covariance matrix of $X_n(t)$ denoted by $E\big[X_n(t)\big]$ and $Cov \big[X_n(t),X_n(t)\big]$ respectively as \begin{eqnarray}
E\big[X_n(t)\big] &\approx& \bar{X}(t) + \frac{E\big[D(t)\big]}{\sqrt{n}}, \label{eqn_006} \\
Cov \big[X_n(t),X_n(t)\big] &\approx& \frac{Cov \big[D(t),D(t)\big]}{n}. \label{eqn_007} \end{eqnarray} In equations (\ref{eqn_006}) and (\ref{eqn_007}), only $\bar{X}(t)$ is known. Therefore, in order to get approximated values of $E\big[X_n(t)\big]$ and $Cov \big[X_n(t),X_n(t)\big]$, we need to obtain $E\big[D(t)\big]$ and $Cov\big[D(t),D(t)\big]$. The following theorem provides a methodology to obtain $E\big[D(t)\big]$ and $Cov \big[D(t),D(t)\big]$. \begin{theorem}[Mean and covariance matrix of linear stochastic systems, \citet{arnold92}] \label{theo_moment} Let $Y(t)$ be the solution to the following linear stochastic differential equation. \begin{eqnarray*}
dY(t) = A(t)Y(t)dt + B(t)dW(t), \quad Y(0)=0, \label{eqn_008} \end{eqnarray*} where $A(t)$ is a $d \times d$ matrix, $B(t)$ is a $d \times k$ matrix, and $W(t)$ is a $k$-dimensional standard Brownian motion. Let $M(t) = E\big[Y(t)\big]$ and $\Sigma(t) = Cov\big[Y(t), Y(t)\big]$. Then, $M(t)$ and $\Sigma(t)$ are the solution to the following ordinary differential equations: \begin{eqnarray}
\frac{d}{dt}M(t) &=& A(t) M(t) \label{eqn_009} \nonumber \\
\frac{d}{dt}\Sigma(t) &=& A(t) \Sigma(t) + \Sigma(t) A(t)' + B(t)B(t)'. \label{eqn_010} \end{eqnarray} \end{theorem} \begin{corollary} \label{cor_moment} If $M(0)=0$, then $M(t) = 0$ for $t \ge 0$. \end{corollary} By Corollary \ref{cor_moment}, if $D(0)=0$, then $E\big[D(t)\big] = 0$ for $t \ge 0$. Therefore, if $\bar{X}(0) = X(0) = x_0$, then we can rewrite (\ref{eqn_006}) to be \begin{eqnarray*}
E\big[X_n(t)\big] &\approx& \bar{X}(t). \label{eqn_011} \end{eqnarray*} Recalling Remark \ref{rem_nondiff}, the diffusion model in \citet{kurtz78} requires differentiability of rate functions. Otherwise, we cannot apply Theorem \ref{theo_diffusion}. To address this problem, \citet{Mandelbaum:1998p1029} introduces a new derivative called the ``scalable Lipschitz derivative'' and proves weak convergence using it. Unlike the result in \citet{kurtz78}, it turns out that the diffusion limit may not be a Gaussian process when rate functions are not differentiable everywhere. In \citet{Mandelbaum:1998p1029}, expected values of the diffusion model may not be zero (compare with Corollary \ref{cor_moment}) and can compensate for the inaccuracy in the fluid model (see \citet{Mandelbaum:2002p995}). The resulting differential equations for the diffusion model, however, are computationally intractable. For example, in \citet{Mandelbaum:1998p1029}, one of the differential equations has the following form: \begin{eqnarray}
\frac{d}{dt}E\big[Q_1^{(1)}(t)\big] &=& (\mu_t^1 \mathbf{1}_{\{Q_1^{(0)} \le n_t\}} + \beta_t \mathbf{1}_{\{Q_1^{(0)} > n_t\}})E\big[Q_1^{(1)}(t)^-\big] \nonumber \\
&& - (\mu_t^1 \mathbf{1}_{\{Q_1^{(0)} < n_t\}} + \beta_t \mathbf{1}_{\{Q_1^{(0)} \ge n_t\}})E\big[Q_1^{(1)}(t)^+\big] + \mu_t^2E\big[Q_2^{(1)}(t)\big], \label{eqn_actdiff} \end{eqnarray} rendering it intractable.\\ Therefore, \citet{Mandelbaum:2002p995}, as we understand it, resorts to the method in \citet{kurtz78}, assuming measure zero at non-smooth points to avoid the computational difficulty.\\ \subsection{Inaccuracy of strong approximations} \label{subsec_inaccuracy} Though not mentioned in any previous study, to the best of our knowledge, the fluid model can be inaccurate when approximating the mean value of the system state. The following theorem gives the exact integral equation satisfied by $E\big[X(t)\big]$. \begin{theorem}[Expected value of $X(t)$] \label{theo_exp} Consider $X(t)$ defined in equation (\ref{eqn_001}). Then, for $t\le T$, $E\big[X(t)\big]$ is the solution to the following integral equation. \begin{eqnarray}
E\big[X(t)\big] = x_0 + \sum_{i=1}^k l_i \int_{0}^{t}E\Big[f_i\big(s,X(s)\big)\Big] ds \label{eqn_013} \end{eqnarray} \begin{proof} Take expectation on both sides of equation (\ref{eqn_001}). Then, \begin{eqnarray}
E\big[X(t)\big] &=& x_0 + \sum_{i=1}^k l_i E\Bigg[Y_i\bigg(\int_{0}^{t}f_i\big(s,X(s)\big)ds \bigg)\Bigg] \nonumber \\
&=& x_0 + \sum_{i=1}^k l_i E\bigg[\int_{0}^{t}f_i\big(s, X(s)\big)ds \bigg] \textrm{ since the $Y_i(\cdot)$'s are unit-rate Poisson processes, so $E\big[Y_i(\tau)\big]=E[\tau]$} \nonumber \\
&=& x_0 + \sum_{i=1}^k l_i \int_{0}^{t}E\Big[f_i\big(s, X(s)\big)\Big] ds \textrm{ by the Fubini--Tonelli theorem (see \citet{folland99}).} \nonumber \end{eqnarray} This proves the theorem. \end{proof} \end{theorem} Comparing Theorems \ref{theo_fluid} and \ref{theo_exp}, notice that we cannot conclude that $\bar{X}(t)$ in Theorem \ref{theo_fluid} and $E\big[X(t)\big]$ in Theorem \ref{theo_exp} are close, since in general $E\big[f_i(t, X(t))\big] \neq f_i\big(t, E[X(t)]\big)$. In some applications, the $f_i$'s might be constants or linear combinations of the components of $X(t)$. In those cases, Theorem \ref{theo_exp} and the following corollary imply that the fluid model is exactly the mean value of the system state. \begin{corollary} \label{cor_exp} If the $f_i(t,x)$'s are constants or linear combinations of the components of $x$, then \begin{eqnarray}
E[X(t)] = \bar{X}(t), \nonumber \end{eqnarray} where $X(t)$ is the solution to (\ref{eqn_001}) and $\bar{X}(t)$ is the deterministic fluid model from Theorem \ref{theo_fluid}. \begin{proof} Using the linearity of expectation (see \citet{williams91}), we obtain the same integral equation for both $E\big[X(t)\big]$ and $\bar{X}(t)$. \end{proof} \end{corollary} However, if the $f_i$'s have other forms, for which $E\big[f_i(t, X(t))\big] \neq f_i\big(t, E[X(t)]\big)$, then the fluid model is inaccurate. Notice that the fluid model requires the differentiability of the rate functions in neither \citet{kurtz78} nor \citet{Mandelbaum:1998p1029}; for this problem, therefore, the differentiability of the rate functions is irrelevant.\\ \begin{figure}
\caption{Simulation vs Fluid and diffusion model with measure-zero assumption}
\label{fig_eg_annotated}
\end{figure} Now we turn our attention to the inaccuracy of the diffusion model. We use the annotated version of Figure \ref{fig_eg_inaccuracy} (Figure \ref{fig_eg_annotated}) for a clearer explanation. Figures \ref{fig_eg_annotated} (a) and (b) show the mean value and covariance matrix of the system against those of the simulation, respectively. Since the number of servers is $50$, as shown in Figure \ref{fig_eg_annotated} (a), the mean value of $x_1(t)$ fluctuates near the critically loaded point. From the figure, we also confirm that the fluid model is quite inaccurate for the mean value of $x_2(t)$. For the covariance matrix, as shown in Figure \ref{fig_eg_annotated} (b), the diffusion model produces immense estimation errors (sharp spikes) in the vicinity of the critically loaded time points. Notice from Figure \ref{fig_eg_annotated} (b) that \emph{even if differential equations such as equation (\ref{eqn_actdiff}) in \citet{Mandelbaum:1998p1029}, which are known to be correct, could be solved numerically, this would not improve the estimation accuracy}. In the figure, the time point $t_0$ is the time when the fluid model hits a critically loaded point for the first time. \emph{The differential equations in \citet{Mandelbaum:1998p1029} are virtually the same as those in \citet{Mandelbaum:2002p995}, which assume measure zero for computational tractability, until the fluid model reaches a critically loaded point for the first time}. Therefore, the graphs before time $t_0$ in Figure \ref{fig_eg_annotated} are exactly the same as those obtained from the methodology in \citet{Mandelbaum:1998p1029}, though we could not obtain the graphs after $t_0$. However, as seen in Figure \ref{fig_eg_annotated} (b), the estimation errors become apparent much earlier than the time point $t_0$. Therefore, we conclude that the methodology in \citet{Mandelbaum:1998p1029} does not remove the sharp spikes, at least until the time $t_0$.
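The mechanism behind these spikes is a drift coefficient that switches abruptly at the critically loaded point; anticipating the analysis in Section \ref{sec_g}, the following minimal numerical sketch (with hypothetical values $\mu=1$, $n=50$, and spread $\sigma=5$, none of which come from the experiments in this paper) contrasts the piecewise-constant drift of the standard diffusion model with the smooth drift $-\mu\,Pr[X \le n]$ induced by a Gaussian base distribution:

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical rates (not from the paper): mu = 1, n = 50 servers,
# sigma = 5 as the spread of the base distribution.
mu, n, sigma = 1.0, 50.0, 5.0

def drift_fluid(xbar):
    # drift coefficient of F(t,x) = lambda_t - mu*(x ^ n):
    # jumps between -mu and 0 at the critically loaded point x = n
    return -mu if xbar <= n else 0.0

def drift_smooth(xbar):
    # smooth drift under a Gaussian base distribution:
    # -mu * Pr[X <= n] with X ~ N(xbar, sigma^2)
    return -mu * norm_cdf((n - xbar) / sigma)

for x in (45.0, 49.9, 50.1, 55.0):
    print(x, drift_fluid(x), round(drift_smooth(x), 3))
```

Near $x = n$ the first coefficient flips between $-\mu$ and $0$, while the second varies gradually in $x$; this is the behavior that the adjusted diffusion model exploits to avoid the spikes.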
Moreover, from the shapes of the differential equations, we conjecture that the methodology in \citet{Mandelbaum:1998p1029} might not get rid of the sharp spikes even after the time $t_0$: the drift matrix of its diffusion model still changes suddenly at the critically loaded point, which is what actually causes the spikes. We revisit and explain this in Section \ref{sec_g}.\\ In the next two sections, we describe our approach to the above issues in both the fluid and diffusion models. In Section \ref{sec_adjustedfluid}, we address the inaccuracy in the fluid model by constructing a new process. In Section \ref{sec_adjusteddiffusion}, based on the adjusted fluid model, we explain how to remove the sharp spikes that cause vast estimation errors in the diffusion model. \section{Adjusted fluid model} \label{sec_adjustedfluid} The basic idea of our approach is to construct a new process, $Z(t)$, so that its fluid model is exactly the same as the mean value of the original process $X(t)$ as described in Theorem \ref{theo_exp} (this is explained schematically in Figure \ref{fig_newprocess}). \begin{figure}
\caption{Construction of a new process}
\label{fig_newprocess}
\end{figure} Although we concentrate on multi-server queues, this approach can be applied to more general types of stochastic systems. Therefore, we borrow the more general notation in Section \ref{subsec_strong} (as opposed to that in Section \ref{sec_problem}).\\
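Before constructing $Z(t)$, it is worth seeing numerically how large the gap between $E\big[f_i(t,X(t))\big]$ and $f_i\big(t,E[X(t)]\big)$ can be. A minimal sketch, with a capacity-type rate $f(x)=\mu(x\wedge n)$ and a hypothetical Poisson state distribution (the values are chosen for illustration only and are not from the paper's experiments):

```python
import math

# Hypothetical illustration: service rate f(x) = mu * min(x, n) and a
# Poisson(50) state distribution, i.e., a critically loaded snapshot.
mu, n, mean_x = 1.0, 50, 50.0

def poisson_pmf(lam, kmax):
    # pmf values p[0..kmax], computed iteratively to avoid overflow
    p = [math.exp(-lam)]
    for k in range(1, kmax + 1):
        p.append(p[-1] * lam / k)
    return p

pmf = poisson_pmf(mean_x, 200)  # tail beyond k = 200 is negligible
e_f_x = sum(mu * min(k, n) * p_k for k, p_k in enumerate(pmf))  # E[f(X)]
f_e_x = mu * min(mean_x, n)                                     # f(E[X])

# f is concave, so f(E[X]) >= E[f(X)]; near criticality the gap is sizable
print(round(f_e_x, 2), round(e_f_x, 2))
```

Away from the critically loaded point the two values nearly coincide, which is consistent with the observation that the fluid model is accurate in clearly underloaded or overloaded phases.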
To begin with, let $\mathbb{F}$ be the set of all distribution functions on $\mathbf{R}^d$ that have a finite mean and covariance matrix. This set is adequate for the fluid model since the conditions on the $f_i$'s guarantee that $E\big[|X(t)|\big] < \infty$ and $|Cov[X(t),X(t)]| < \infty$ for all $t \le T$. Define the subset $\mathbb{F}_0$ of $\mathbb{F}$ consisting of the distributions with zero mean. We call an element of $\mathbb{F}_0$ a ``base distribution'' for the remainder of this paper. \begin{proposition} \label{prop_exp} For $t \le T$ and $i \in \{1,2, \ldots, k\}$, let $\mu(t) = E[X(t)]$. Then, $E\big[f_i(t,X(t))\big]$ can be represented as a function of $\mu(t)$, i.e., there exists a function $g_i(t,\cdot)$ such that \[g_i(t,\mu(t)) = E\big[f_i(t,X(t))\big].\] \begin{proof} For fixed $t_0 \le T$, suppose the distribution of $X(t_0)$ is $F$. Then, $F \in \mathbb{F}$. For $F \in \mathbb{F}$, we can always find $F_0 \in \mathbb{F}_0$ such that $F(x) = F_0(x-\mu)$, where $\mu = E[X(t_0)] = \int_{\mathbf{R}^d} x \, dF$. Then, \begin{eqnarray*}
E\big[f_i(t_0,X(t_0))\big] &=& \int_{\mathbf{R}^d} f_i(t_0,x) dF \\
&=& \int_{\mathbf{R}^d} f_i(t_0,x+\mu) dF_0. \end{eqnarray*} Since the integration eliminates the integration variable, the last expression depends only on $t_0$ and $\mu$; letting them vary (i.e., substituting $t$ and $\mu(t)$ for $t_0$ and $\mu$, respectively), we have \begin{eqnarray*}
E\big[f_i(t,X(t))\big] = g_i(t,\mu(t)), \textrm{ for some function } g_i. \end{eqnarray*} \end{proof} \end{proposition} \begin{remark} Proposition \ref{prop_exp} does not mean that $\mu(\cdot)$ completely identifies the function $g_i(\cdot,\cdot)$. In fact, $g_i(\cdot,\cdot)$ may remain unknown unless the base distribution is identified, but we can say that such a function $g_i(\cdot,\cdot)$ exists. \end{remark} For $t \le T$, let $\mu(t) = E\big[X(t)\big]$, and let $g_i\big(t,\mu(t)\big) = E\big[f_i(t,X(t))\big]$ for $i \in \{1, \ldots, k\}$. Then, we can construct a new stochastic process $Z(t)$ as the solution to the following integral equation: \begin{eqnarray}
Z(t) = x_0 + \sum_{i=1}^{k} l_i Y_{i} \bigg(\int_{0}^{t} g_{i}\big(s,Z(s)\big)ds \bigg). \label{eqn_014} \end{eqnarray} Based on equation (\ref{eqn_014}), define a sequence of stochastic processes $\{Z_n(t)\}$ satisfying \begin{eqnarray}
Z_n(t) = x_0 + \sum_{i=1}^{k} \frac{1}{n} l_i Y_{i} \bigg(\int_{0}^{t} n g_{i}\big(s, Z_n(s)\big)ds \bigg). \label{eqn_adjseq} \end{eqnarray} Next, we would like to obtain the fluid model for $Z_n(t)$. Before doing so, we need to check whether the functions $g_i$ satisfy the conditions of Theorem \ref{theo_fluid}. The following lemmas show that the $g_i$'s indeed meet those conditions; their proofs are provided in Appendix \ref{app_lemmas}. \begin{lemma} \label{lem_condition1}
If $|f_i(t,x)| \le C_i (1+|x|)$ for $t\le T$, then $g_i(t,x)$'s satisfy \begin{eqnarray*}
|g_i(t,x)| &\le& D_i (1+|x|) \quad \textrm{for some } D_i < \infty. \label{eqn_015} \end{eqnarray*} \end{lemma} For the next lemma, we would like to define \begin{eqnarray}
G(t,x) = \sum_{i=1}^k l_i g_i(t,x). \label{eqn_G} \end{eqnarray} \begin{lemma} \label{lem_condition2}
For $t\le T$, if $|f_i(t,x)-f_i(t,y)| \le M |x-y|$, then $g_i(t,x)$'s satisfy \begin{eqnarray*}
|g_i(t,x) - g_i(t,y)| \le M|x-y|, \end{eqnarray*}
and if $|F(t,x)-F(t,y)| \le M |x-y|$, then $G(t,x)$ satisfies \begin{eqnarray*}
|G(t,x) - G(t,y)| \le M|x-y|. \label{eqn_019} \end{eqnarray*} \end{lemma} Lemmas \ref{lem_condition1} and \ref{lem_condition2} show that if the $f_i$'s satisfy the conditions for the fluid limit of $X_n(t)$, then the $g_i$'s satisfy the corresponding conditions for the fluid model of $Z_n(t)$. Therefore, we can now state the adjusted fluid model, based on Lemmas \ref{lem_condition1} and \ref{lem_condition2}. \begin{theorem}[Adjusted fluid model] \label{theo_modfluid} Assume \begin{eqnarray}
\big|f_i(t,x)\big| &\le& C_i\big(1+|x|\big) \quad \textrm{for } i\in \{1,\ldots, k\}, \label{eqn_024}\\
\big|F(t,x)-F(t,y)\big| &\le& M|x-y|. \label{eqn_025} \end{eqnarray} Then, $\lim_{n \rightarrow \infty} Z_n(t) = \bar{Z}(t)$ a.s., where $\bar{Z}(t)$ is the solution to the following integral equation: \begin{eqnarray}
\bar{Z}(t) = x_0 + \sum_{i=1}^{k} l_i \int_{0}^{t} g_{i}\big(s, \bar{Z}(s)\big)ds, \label{eqn_026} \end{eqnarray} and furthermore \begin{eqnarray}
\bar{Z}(t) = E\big[X(t)\big] = x_0 + \sum_{i=1}^k l_i \int_{0}^{t}E\Big[f_i\big(s,X(s)\big)\Big] ds. \label{eqn_027} \end{eqnarray} \begin{proof} From Lemmas \ref{lem_condition1} and \ref{lem_condition2}, (\ref{eqn_024}) and (\ref{eqn_025}) imply \begin{eqnarray*}
|g_i(t,x)|\le D_i (1+|x|) \quad \textrm{and} \quad |G(t,x) - G(t,y)| \le M|x-y|. \label{eqn_029} \end{eqnarray*} Therefore, by Theorem \ref{theo_fluid}, we have equation (\ref{eqn_026}), and by the definition of the $g_i(t,x)$'s, we have equation (\ref{eqn_027}). \end{proof} \end{theorem} Comparing equation (\ref{eqn_027}) with equation (\ref{eqn_013}) in Theorem \ref{theo_exp}, we see that Theorem \ref{theo_modfluid}, via equation (\ref{eqn_027}), provides the exact value of $E\big[X(t)\big]$. However, the functions $g_i$ cannot be identified unless the base distribution is known, which forces us to develop a method to find them. Nonetheless, when applying our adjusted fluid model to multi-server queues with abandonments and retrials, we in fact have a good candidate distribution for obtaining the $g_i$'s. The following section describes our methodology for obtaining the $g_i$'s and for adjusting the diffusion model as well. \section{Adjusted diffusion model with Gaussian density} \label{sec_adjusteddiffusion} In general, there is no clear way to find the exact base distribution of $X(t)$. However, we can characterize the asymptotic distribution for multi-server queues from the literature. Many studies of multi-server queues have shown that their limit processes are Gaussian processes and that their empirical density functions are close to the Gaussian density. To list a few, for time-homogeneous multi-server queues, \citet{Iglehart65} and \citet{Whitt:1982p1884} show weak convergence to the Ornstein-Uhlenbeck (OU) process, and \citet{Halfin81} proves weak convergence to Brownian motion or the OU process depending on the traffic intensity. Therefore, for a given $t$, \emph{weak convergence provides a Gaussian distribution which is asymptotically true}.
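For intuition on what a Gaussian base distribution buys us: the expectation of a capacity-type rate $\mu(x\wedge n)$ under $X\sim N(z,\sigma^2)$ has a closed form of the same shape as the functions $g_3$--$g_5$ given later for our model. A short sketch (hypothetical parameter values, chosen only for illustration) checking the closed form against Monte Carlo:

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def g_capacity(z, sigma, n, mu):
    # closed form for E[mu * min(X, n)] when X ~ N(z, sigma^2):
    # mu * (n + (z - n) * Phi(a) - sigma * phi(a)) with a = (n - z) / sigma
    a = (n - z) / sigma
    return mu * (n + (z - n) * norm_cdf(a) - sigma * norm_pdf(a))

# Monte Carlo cross-check with hypothetical parameters
random.seed(0)
z, sigma, n, mu = 48.0, 7.0, 50.0, 1.0
samples = 200_000
mc = sum(mu * min(random.gauss(z, sigma), n) for _ in range(samples)) / samples
closed = g_capacity(z, sigma, n, mu)
print(round(closed, 2), round(mc, 2))  # the two values agree closely
```

Note that the closed form lies strictly below $\mu\min(z,n)$, in line with the concavity argument of the previous section.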
For time-varying multi-server queues (with abandonments and retrials), as depicted in Figure \ref{fig_emp}, \citet{mandelbaum98} and \citet{Mandelbaum:2002p995} show that the empirical density is close to the Gaussian density. Furthermore, \emph{the result in \citet{Mandelbaum:2002p995} implies that the limit process is a Gaussian process if the fluid model hits the critically loaded points a countable number of times, which is true for our model}. Therefore, for our model, it is reasonable to use the Gaussian distribution as the base distribution to identify the $g_i$'s, since the Gaussian assumption is asymptotically true. \begin{figure}
\caption{Empirical density vs Gaussian density}
\label{fig_emp}
\end{figure} Once we decide to use the Gaussian density, it provides the following two additional benefits: \begin{enumerate}
\item The Gaussian distribution can be completely characterized by the mean and covariance matrix which can be obtained from the fluid and diffusion models.
\item By using the Gaussian density, the $g_i$'s become smooth even if the $f_i$'s are not, which enables us to apply Theorem \ref{theo_diffusion} without additional assumptions. \end{enumerate} The second benefit is not obvious, and hence we provide a proof. \begin{lemma} \label{lem_smooth} Let the $g_i$'s be the rate functions of $Z(t)$ obtained from the Gaussian density. Then, the $g_i$'s are differentiable everywhere. \begin{proof} Define \begin{eqnarray}
\phi(x,y) = \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\exp \bigg(-\frac{(y-x)'\Sigma^{-1}(y-x)}{2}\bigg).\nonumber \end{eqnarray} Using the Gaussian density, \begin{eqnarray*}
g_i(t,x) = \int_{\mathbf{R}^d} f_i(t,y) \phi(x,y) dy. \end{eqnarray*}
For $j \in \{1,\ldots, d\}$, since $\phi(x,y)$ is differentiable with respect to $x_j$ and $\big|f_i(t,y) \frac{\partial}{\partial x_j} \phi(x,y)\big|$ is integrable, \begin{eqnarray}
\frac{\partial}{\partial x_j}g_i(t,x) &=& \frac{\partial}{\partial x_j}\int_{\mathbf{R}^d} f_i(t,y) \phi(x,y) dy \nonumber\\
&=& \int_{\mathbf{R}^d} f_i(t,y) \frac{\partial}{\partial x_j} \phi(x,y) dy \quad \textrm{by Theorem 2.27 in \citet{folland99}}, \label{eqn_folland} \end{eqnarray} where $x_j$ is the $j^{\textrm{th}}$ component of $x$. Therefore, $g_i$ is differentiable with respect to each $x_j$. \end{proof} \end{lemma} Now that we have differentiable $g_i(\cdot,\cdot)$'s, we can apply Theorem \ref{theo_diffusion} to obtain the diffusion model for $Z_n(t)$. Note that, as for the adjusted fluid model, once we have the distribution of $X(t)$, the adjusted diffusion model is applicable to more general cases. Therefore, we first follow the notation in Section \ref{subsec_strong} and later come back to our multi-server queues with abandonments and retrials. \begin{proposition}[Adjusted diffusion model] \label{prop_moddiffusion} Let the $g_i(\cdot,\cdot)$'s be the rate functions of $Z(t)$ obtained from the Gaussian density. Define a sequence of scaled centered processes $\{V_n(t)\}$ for $t \le T$ by \begin{eqnarray*}
V_n(t) = \sqrt{n}\big(Z_n(t)-\bar{Z}(t)\big), \label{eqn_030} \end{eqnarray*} where $Z_n(t)$ and $\bar{Z}(t)$ are solutions to equations (\ref{eqn_adjseq}) and (\ref{eqn_026}) respectively. If $f_i(t,x)$'s and $F(t, x)$ satisfy equations (\ref{eqn_024}) and (\ref{eqn_025}) respectively, then $\lim_{n \rightarrow \infty} V_n(t) = V(t)$, where \begin{eqnarray*}
V(t) = \sum_{i=1}^{k} l_i \int_{0}^{t} \sqrt{g_i\big(s,\bar{Z}(s)\big)}dW_i(s) + \int_{0}^{t} \partial G\big(s,\bar{Z}(s)\big) V(s) \, ds, \label{eqn_031} \end{eqnarray*} $W_i(\cdot)$'s are independent standard Brownian motions, and $\partial G\big(t,\bar{Z}(t)\big)$ is the gradient matrix of $G\big(t,\bar{Z}(t)\big)$ with respect to $\bar{Z}(t)$. Furthermore, $V(t)$ is a Gaussian process. \begin{proof}
From the definition of $G(t,x)$ in (\ref{eqn_G}) and Lemma \ref{lem_smooth}, we can easily verify that $G(t,x)$ is differentiable, and hence $|G(t,x) - G(t,y)| \le M|x-y|$ implies \begin{eqnarray*}
\bigg|\frac{\partial}{\partial x_i} G(t,x) \bigg| \le M_i \quad \textrm{for some } M_i < \infty, t \le T, \textrm{ and } i\in \{1,\ldots, d\}. \end{eqnarray*} Therefore, by Theorem \ref{theo_diffusion}, the proposition follows. \end{proof} \end{proposition} \begin{corollary} \label{cor_samedistribution} If the $f_i$'s are constants or linear combinations of the components of $X(t)$, then \begin{eqnarray*}
X(t) = Z(t) \quad \textrm{in distribution}. \label{eqn_032} \end{eqnarray*} \begin{proof} Using the linearity of expectation, we can verify that $g_i(t,x)= f_i(t,x)$ for $i\in\{1,\ldots,k\}$. \end{proof} \end{corollary} Finally, we have the adjusted fluid and diffusion models obtained from the Gaussian density. Therefore, instead of assuming measure zero on the set of non-differentiable points (as done in \citet{Mandelbaum:2002p995}), we compare the adjusted models with the empirical mean and covariance matrix. Note that when we stated Theorem \ref{theo_modfluid}, we did not consider $\Sigma(t)$, the covariance matrix of $X(t)$. However, with the Gaussian density, $\Sigma(t)$ characterizes the base distribution, and it can be obtained from Proposition \ref{prop_moddiffusion}. Therefore, we rewrite the $g_i$'s as functions of $t$, $\bar{Z}(t)$, and $\Sigma(t)$; i.e. \begin{eqnarray}
g_{i}\big(t, \bar{Z}(t)\big) &\rightarrow& g_{i}\big(t, \bar{Z}(t), \Sigma(t) \big) \quad \textrm{for } i\in\{1,\ldots,k\} \textrm{ and} \label{eqn_036}\\
G\big(t, \bar{Z}(t)\big) &\rightarrow& G\big(t, \bar{Z}(t), \Sigma(t) \big). \label{eqn_037} \end{eqnarray} \begin{proposition}[Mean and covariance matrix] \label{prop_modmoment} Let $Y(t) = \bar{Z}(t) + V(t)$. Then, \begin{eqnarray}
E\big(Y(t)\big) &=& \bar{Z}(t) \quad \textrm{and} \label{eqn_038} \\
Cov\big(Y(t),Y(t)\big) &=& Cov\big(V(t),V(t)\big) = \Sigma(t). \label{eqn_039} \end{eqnarray} The quantities $\bar{Z}(t)$ and $\Sigma(t)$ are obtained by solving the following simultaneous ordinary differential equations with initial values given by $\bar{Z}(0) = x_0$ and $\Sigma(0)=0$: \begin{eqnarray}
\frac{d}{dt}\bar{Z}(t) &=& \sum_{i=1}^{k} l_i g_{i}\big(t, \bar{Z}(t), \Sigma(t) \big), \label{eqn_040} \\
\frac{d}{dt}\Sigma(t) &=& A(t) \Sigma(t) + \Sigma(t) A(t)' + B(t)B(t)', \label{eqn_041} \end{eqnarray} where $A(t)$ is the gradient matrix of $G\big(t,\bar{Z}(t), \Sigma(t)\big)$ with respect to $\bar{Z}(t)$, and $B(t)$ is the $d \times k$ matrix whose $i^\textrm{th}$ column is $l_i \sqrt{g_i\big(t,\bar{Z}(t), \Sigma(t)\big)}$. \begin{proof} Since $V(0) = 0$, from Corollary \ref{cor_moment}, we have (\ref{eqn_038}) and (\ref{eqn_039}). Rewriting (\ref{eqn_026}) in Theorem \ref{theo_modfluid} in differential form gives (\ref{eqn_040}), and Theorem \ref{theo_moment} gives (\ref{eqn_041}). Note that since both $\bar{Z}(t)$ and $\Sigma(t)$ are unknowns, we must solve (\ref{eqn_040}) and (\ref{eqn_041}) simultaneously. \end{proof} \end{proposition} We now have the adjusted fluid and diffusion models for the general case, and it is time to return to our system as given in Section \ref{sec_problem}. Using the Gaussian density, we can obtain the new rate functions $g_i$, corresponding to the $f_i$'s, as follows: \begin{eqnarray}
g_1(t,x) &=& \lambda_t, \nonumber \\
g_2(t,x) &=& \mu_t^2 x_2, \nonumber \\
g_3(t,x) &=& \mu_t^1\big(n_t + (x_1-n_t)\Phi(n_t,x_1,\sigma_{1_t}) - \sigma_{1_t}^2 \phi(n_t,x_1,\sigma_{1_t})\big), \nonumber \\
g_4(t,x) &=& \beta_t(1-p_t)\Big((x_1-n_t)\big(1-\Phi(n_t,x_1,\sigma_{1_t})\big)+\sigma_{1_t}^2\phi(n_t,x_1,\sigma_{1_t})\Big), \quad \textrm{and} \nonumber \\
g_5(t,x) &=& \beta_tp_t\Big((x_1-n_t)\big(1-\Phi(n_t,x_1,\sigma_{1_t})\big)+\sigma_{1_t}^2\phi(n_t,x_1,\sigma_{1_t})\Big), \nonumber \end{eqnarray} where $\Phi(a,b,c)$ and $\phi(a,b,c)$ are the values at the point $a$ of the Gaussian CDF and PDF, respectively, with mean $b$ and standard deviation $c$.\\ Since $f_1(t,x)$ and $f_2(t,x)$ are constant and linear in $x$, respectively, $g_1(t,x)=f_1(t,x)$ and $g_2(t,x)=f_2(t,x)$. The derivation of the other $g_i(\cdot,\cdot)$'s is straightforward but requires some computational effort, so we provide the details in Appendix \ref{app_gi}. Note that $g_3$, $g_4$, and $g_5$ include $\sigma_{1_t}$, which is treated here as a function of $t$ but is supplied by the adjusted diffusion model (see equations (\ref{eqn_036}) and (\ref{eqn_037})). With the $g_i$'s above, by Proposition \ref{prop_modmoment}, we finally obtain $E[Z(t)]$ and $Cov[Z(t), Z(t)]$ for $t \le T$ and use them to approximate the mean and covariance matrix of our original process $X(t)$ in equations (\ref{eqn_rx1}) and (\ref{eqn_rx2}).\\ Although we have obtained the functions $g_i$ for our adjusted models, we still need some intuition about how the $g_i$'s contribute to increasing accuracy, especially in the critically loaded phases. Thus, in the next section, we revisit the inaccuracy in the previous approaches and explain how our adjusted models treat it. \section{Discussion on the functions $g_i$} \label{sec_g} In this section, we investigate the functions $g_i$ more closely. To get a clearer intuition, we consider a simple $M_t/M_t/n_t$ queue, which is a special case of our original model ($\beta_t = 0$ and $\mu_t^1 = \mu_t$). Let $x(t)$ denote the number of customers in the system at time $t$. Then, $x(t)$ is the solution to the following integral equation: \begin{eqnarray*} x(t) = x(0) + Y_1\Big(\int_{0}^{t}\lambda_s ds\Big) - Y_2\Big(\int_{0}^{t}\big(x(s)\wedge n_s\big)\mu_sds\Big).
\label{eqn_simp_ori} \end{eqnarray*} Here, for convenience, define $f_1(t,x) = \lambda_t$, $f_2(t,x)= \big(x\wedge n_t\big)\mu_t$, and $F(t,x) = \lambda_t - \big(x\wedge n_t\big)\mu_t$. Applying the theorems in Section \ref{subsec_strong}, we obtain the fluid model $\bar{x}(t)$ and the diffusion model $u(t)$ from the following integral equations: \begin{eqnarray*} \bar{x}(t) &=& x(0) +\int_{0}^{t}\Big(\lambda_s - \big(\bar{x}(s)\wedge n_s\big)\mu_s\Big) ds, \textrm{ and} \\ u(t) &=& u(0) + \int_{0}^{t} \Big(\sqrt{\lambda_s}, \sqrt{\big(\bar{x}(s)\wedge n_s\big)\mu_s}\Big)\binom{dW_1(s)}{dW_2(s)} + \int_{0}^{t} \partial F\big(s,\bar{x}(s)\big) u(s) \, ds, \end{eqnarray*} where \begin{eqnarray*}
\partial F(t,\bar{x}(t)) = \left \{ \begin{array}{ll}
-\mu_t & \textrm{if } \bar{x}(t) \le n_t, \\
0 & \textrm{otherwise.}
\end{array} \right . \end{eqnarray*} Notice that the drift part $\partial F(t,\bar{x}(t))$ of the diffusion model is completely determined by the fluid model, and here we may encounter a serious problem. Suppose we observe several realizations of this multi-server queue. When $\bar{x}(t)$ is much smaller than the number of servers $n_t$ (underloaded phase), there is little chance that an observed process is overloaded or critically loaded, so the drift part $-\mu_t$ is valid in that sense. Now, assume that $\bar{x}(t)$ is smaller than, but fairly close to, $n_t$. Then, a significant fraction of the realizations could be overloaded or critically loaded. However, the drift part is still $-\mu_t$, since \emph{the possibility of being overloaded or critically loaded is completely ignored by the fluid model}. Furthermore, imagine that $\bar{x}(t)$ becomes slightly larger than $n_t$. Then, the drift part suddenly changes to zero. As a result, if $\bar{x}(t)$ fluctuates close to $n_t$, i.e. is \emph{lingering}, then the drift part of the diffusion model repeats sudden changes between the values $-\mu_t$ and $0$. Undoubtedly, this produces the sharp spikes in the diffusion model shown in Figure \ref{fig_eg_annotated} and makes the quality of the approximation worse, especially near the critically loaded phase.\\ Now, we turn our attention to the functions $g_i$. As shown in Section \ref{sec_adjustedfluid}, the $g_i$'s in the adjusted fluid model improve the accuracy in estimating the mean values of the system states. One may then ask how the $g_i$'s affect the estimation accuracy of the covariance matrix. To answer this question, let us follow the procedure to obtain $g_2(t,\cdot)$. Note $g_1(t,\cdot) = f_1(t,\cdot)$.\\ Define $G(t,x) = g_1(t,x) - g_2(t,x) = \lambda_t - g_2(t,x)$. For a fixed $t_0$, let $x = x(t_0)$, $\mu = \mu_{t_0}$, $n=n_{t_0}$, and $z = E[x(t_0)]$. Then, \begin{eqnarray}
g_2(t_0, z) = E\big[\mu(x \wedge n)\big] = \mu\Big\{E[x\mathbb{I}_{x \le n}]+nPr[x > n]\Big\}. \label{eqn_actual} \end{eqnarray} From equation (\ref{eqn_actual}), we note the following characteristics of the function $g_2(\cdot, \cdot)$. \begin{enumerate} \item If $Pr[x > n] \rightarrow 1$, then $g_2(t_0, z) \rightarrow \mu n$. \item If $Pr[x > n] \rightarrow 0$, then $g_2(t_0, z) \rightarrow \mu z$. \end{enumerate} Note that $\partial G(t,\bar{z}(t))$ changes smoothly over time between $-\mu$ and $0$ according to $Pr[x(t) > n_t]$, since $Pr[x(t) > n_t]$ changes smoothly under our Gaussian assumption (in fact, any distribution with a differentiable density works). Therefore, even if the adjusted fluid model $\bar{z}(t)$ is lingering in the vicinity of $n_t$, the drift part of the adjusted diffusion model changes smoothly over time. In the following section, we provide several experimental results and show the effectiveness of the adjusted models. \section{Numerical results} \label{sec_numerical} We compare our adjusted models against the fluid and diffusion models with the measure-zero assumption of \citet{Mandelbaum:2002p995} for multi-server queues with abandonments and retrials. Under settings similar to those in \citet{Mandelbaum:2002p995}, we use 5,000 independent simulation runs and compare the simulation results with both methodologies. We use constant rates for all parameters except the arrival rate, which alternates between $45$ and $55$ every two time units. Figures \ref{fig_retrial_mean} and \ref{fig_retrial_covariance} show the estimates from one experiment. The number of servers ($n_t$) is $50$ and the service rate of each server is $1$. \begin{figure}
\caption{Comparison of mean values, $E\big[X(t)\big]$}
\label{fig_retrial_mean}
\end{figure} As seen in Figure \ref{fig_retrial_mean}, the number of customers in the service node ($x_1(t)$) stays near the critically loaded point for a long time. As \citet{Mandelbaum:2002p995} points out, the fluid model with the measure-zero assumption shows significant estimation errors for $E\big[x_2(t)\big]$. Our adjusted fluid model, on the other hand, provides excellent approximations. In particular, one can recognize a remarkable improvement in the estimation of $E\big[x_2(t)\big]$; for the mean value of $x_1(t)$, our adjusted fluid model also provides a much better approximation than the method with the measure-zero assumption.\\ \begin{figure}
\caption{Comparison of covariance matrix entries, $Cov\big[X(t),X(t)\big]$}
\label{fig_retrial_covariance}
\end{figure} Turning to the covariance matrix, we also notice that our adjusted diffusion model shows a dramatic improvement over the diffusion model with the measure-zero assumption. As seen in Figure \ref{fig_retrial_covariance}, the diffusion model assuming measure zero produces ``spikes'', as pointed out in Section \ref{subsec_inaccuracy}. Our proposed model, however, provides excellent accuracy with no spikes at all.\\ Beyond this specific example, in order to verify the effectiveness of our methodology, we conduct several experiments with different parameter combinations. \begin{table}[htdp] \caption{Experiments setting} \begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline exp \# & svrs & $\lambda_1$ & $\lambda_2$ & $\mu_1$ & $\mu_2$ & $\beta$ & $p$ & alter & time \\ \hline \hline 1 & 50 & 40 & 80 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline 2 & 50 & 40 & 60 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline 3 & 100 & 80 & 120 & 1 & 0.2 & 2.0 & 0.7 & 2 & 20 \\ \hline 4 & 100 & 90 & 110 & 1 & 0.2 & 2.0 & 0.7 & 2 & 20 \\ \hline 5 & 50 & 40 & 80 & 1 & 0.2 & 1.5 & 0.7 & 2 & 20 \\ \hline 6 & 50 & 40 & 60 & 1 & 0.2 & 1.5 & 0.7 & 2 & 20 \\ \hline 7 & 50 & 45 & 55 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline 8 & 100 & 95 & 105 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline 9 & 150 & 140 & 160 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline 10 & 150 & 100 & 190 & 1 & 0.2 & 2.0 & 0.5 & 2 & 20 \\ \hline \end{tabular} \end{center} \label{tab_settings} \end{table} Table \ref{tab_settings} describes the setting of each experiment. In Table \ref{tab_settings}, ``svrs'' is the number of servers ($n_t$), ``alter'' is the length of time for which each arrival rate lasts, and ``time'' is the end time of our analysis. We have already seen that the method assuming measure zero works well when the system \emph{does not linger} too long near the non-differentiable points. For comparison, therefore, our experiments contain several cases where the system \emph{does linger} relatively long around those points as well as cases where it does not. Experiments 1-4 are intended to show the effects of \emph{lingering} around the critically loaded points. We change $\beta_t=\beta$ and $p_t=p$ as well as the arrival rates in experiments 5-8 to see the effects of the other parameters. In fact, other experiments not listed in Table \ref{tab_settings} showed that changing the other parameters does not affect estimation accuracy significantly.
Experiments 9 and 10 are set up to observe how larger arrival rates and numbers of servers affect the estimation accuracy, along with the lingering effect, by increasing both.\\ Here we explain the overall results; for the detailed numerical results, see Tables \ref{tab_mean_x1}-\ref{tab_var_x2} in Appendix \ref{app_table}. Similar to the results in Figures \ref{fig_retrial_mean} and \ref{fig_retrial_covariance}, we observe that lingering significantly degrades the quality of the approximations under the measure-zero assumption. On the other hand, our proposed models provide excellent accuracy for both the mean and the covariance matrix. Even if we increase both the arrival rates and the number of servers, lingering still affects the estimation accuracy significantly under the measure-zero assumption, but it does not in our models. \begin{figure}
\caption{Average difference against simulation}
\label{fig_avgresult}
\end{figure} Figure \ref{fig_avgresult} illustrates the average percentage difference of both methods against the simulation. Figure \ref{fig_avgresult} (a) is obtained by averaging all differences in the tables of Appendix \ref{app_table} across time. From Figure \ref{fig_avgresult} (a), we notice that our proposed method shows promise relative to the method assuming measure zero. However, in order to see the effectiveness of our proposed methodology more clearly, we select experiments 2, 4, 6, 7, 8, and 9, where lingering near the critically loaded phase occurs, and graph the differences at a critically loaded time point for those. Since the average differences are obtained from our limited experiments, they do not provide an absolute comparison between the two methods. Nonetheless, we can see that our method provides accurate estimates consistently, whereas the method with the measure-zero assumption results in vast inaccuracy. Note that, in Figure \ref{fig_avgresult} (b), a huge estimation difference of more than $300\%$ is observed when estimating $Cov[x_1(t),x_2(t)]$ using the method with measure zero; the graph is cropped at the $70\%$ level for illustration purposes. \section{Conclusion} \label{sec_conclusion} In this paper, we first explained the strong approximations used in the analysis of multi-server queues with abandonments and retrials and showed the problems one faces in obtaining accuracy and computational tractability, especially near the critically loaded phase. The first problem stems from the fact that the expectation of a function of a random vector $X$ is, in general, not equal to the function evaluated at the expectation of $X$. Therefore, unless these two quantities are equal or close, the fluid model may not provide an accurate estimate of the mean values of the system state.
The second problem is caused by the non-differentiability of the rate functions, which prevents applying the diffusion model in \citet{kurtz78} and causes significant estimation errors if it is ignored. Addressing these problems is therefore quite important both for developing accurate approximations and for achieving computational feasibility. To that end, we proposed a methodology for obtaining exact estimates of the mean values of the system states and an approach for achieving computational tractability.\\ The basic idea of our approach is to construct a new stochastic process whose fluid limit is exactly the same as the mean value of the system state. We proved that if the rate functions in the original model satisfy the conditions for applying the fluid model, then the rate functions in the constructed model also satisfy those conditions. Therefore, we can apply the adjusted fluid model whenever we can apply the existing fluid model. It turns out that there is, in general, no computational method for obtaining the adjusted fluid model exactly. Fortunately, several previous studies show that the distributions of limit processes and the empirical distributions are close to Gaussian in multi-server queueing systems, and hence we utilize a Gaussian density for the approximation. Using the Gaussian density, the rate functions in the constructed model are smooth, and we are able to apply the diffusion model in \citet{kurtz78} even when we could not apply it to the original process.\\ To validate our proposed method, we provide several numerical examples. In the examples, we observe that our proposed method achieves excellent accuracy compared with the fluid and diffusion approximations under the measure-zero assumption (which is, to the best of our knowledge, the only other approach in the literature that provides computational tractability). Due to space restrictions, we have not shown all the examples in which our method works well. 
We observe, however, that in some types of queues other than the multi-server queues considered here, e.g. peer-to-peer networks and multi-class queues, the empirical density is not close to the Gaussian density. For those types of queues, future work can investigate the properties of the specific rate functions that shape the empirical density and devise a new methodology for deriving the functions $g_i(\cdot,\cdot)$ from other density functions. \section*{Acknowledgments} The authors would like to thank Dr. William A. Massey and Dr. Martin I. Reiman for their inputs and valuable discussions. This research was partially supported by NSF grant CMMI-0946935. \appendix \appendixpage \addappheadtotoc \section{Proof of Lemmas \ref{lem_condition1} and \ref{lem_condition2}} \label{app_lemmas}
\begin{proof}[Proof of Lemma \ref{lem_condition1}]
To prove this lemma, we need to show that $E\big[|X(t)|\big] \le K\Big(1+\big|E\big[X(t)\big]\big|\Big)$ for some $K < \infty$ and all $t \le T$. We first show it in the one-dimensional case and then extend it to the $d$-dimensional case.\\ For fixed $t_0\le T$, let $X = X(t_0)$ have mean $\mu$ and variance $\sigma^2$, and let $f_i(X) = f_i\big(t_0,X(t_0)\big)$. Then, by the Cauchy-Schwarz inequality, \begin{eqnarray}
E\big[|X|\big] \le \sqrt{E[X^2]} = \sqrt{\mu^2 + \sigma^2} \le |\mu| + \sigma \le D (1 + |\mu|) \quad \textrm{for } D = \max(1,\sigma) \label{eqn_016}. \end{eqnarray} Having the one-dimensional case, we can move to the $d$-dimensional case. Suppose $X$ has mean vector $\mu$ and covariance matrix $\Sigma$, with $X=(x_1, \ldots, x_d)'$, $\mu = (\mu_1, \ldots, \mu_d)'$. Then, \begin{eqnarray}
E\big[|X|\big] &=& E\bigg[\sqrt{\sum_{i=1}^{d}x_i^2}\bigg] \le E\bigg[\sum_{i=1}^{d}|x_i|\bigg] = \sum_{i=1}^{d} E\big[|x_i|\big] \nonumber \\
&\le& D\bigg(d+ \sum_{i=1}^{d} |\mu_i|\bigg) \quad \textrm{by (\ref{eqn_016})} \qquad \textrm{for } D = \max(1,\sigma_1, \ldots, \sigma_d) \nonumber \\
&\le& D\bigg(d+ d\sqrt{\sum_{i=1}^{d} \mu_{i}^{2}}\bigg) \quad \textrm{by the Cauchy-Schwarz inequality} \nonumber \\
&=& Dd\big(1+|\mu|\big) \label{eqn_017}. \end{eqnarray}
Now we have $E\big[|X|\big] \le K\Big(1+\big|E[X]\big|\Big)$ for the $d$-dimensional random vector $X$ where $K=Dd$. Then, \begin{eqnarray}
\Big|E\big[f_i(X)\big]\Big| &\le& E\Big[\big|f_i(X)\big|\Big] \le C_i + C_i E\big[|X|\big] \quad \textrm{from assumption} \nonumber \\
&\le& C_i + C_i K \big(1 + |\mu|\big) \le D_i \big(1 + |\mu|\big) \quad \textrm{for } D_i = C_i + C_iK \quad \textrm{by equation (\ref{eqn_017})} \label{eqn_018}\nonumber \end{eqnarray}
Note $g_i(t_0,\mu) = E\big[f_i(X)\big]$. Since $|\Sigma|$ is bounded on $t\le T$ and $t_0 \le T$ was arbitrary, the lemma follows. \end{proof} \begin{proof}[Proof of Lemma \ref{lem_condition2}] For fixed $t_0 \le T$, let $X = X(t_0)$ and $Y = Y(t_0)$, and suppose $X$ and $Y$ have the same base distribution $H_0$ (we use $H$ instead of $F$ to avoid confusion with $F$ in (\ref{eqn_F})), where $E[X] = \mu_1$ and $E[Y]=\mu_2$. Then the distributions $H_1$ of $X$ and $H_2$ of $Y$ satisfy \begin{eqnarray*}
H_1(x) &=& H_0(x-\mu_1), \quad \textrm{and} \\
H_2(y) &=& H_0(y-\mu_2), \end{eqnarray*} respectively. Now, we have \begin{eqnarray*}
\Big|E\big[F(X)\big]-E\big[F(Y)\big]\Big| &=& \bigg|\int_{\mathbf{R}^d} F(x) dH_1 - \int_{\mathbf{R}^d} F(y) dH_2\bigg|. \label{eqn_021} \end{eqnarray*} By transforming variables, \begin{eqnarray}
\Big|E\big[F(X)\big]-E\big[F(Y)\big]\Big| &=& \bigg|\int_{\mathbf{R}^d} F(x+\mu_1) dH_0 - \int_{\mathbf{R}^d} F(y+\mu_2) dH_0\bigg| \nonumber \\
&=& \bigg|\int_{\mathbf{R}^d} \big(F(x+\mu_1) - F(x+\mu_2)\big) dH_0\bigg| \quad \textrm{by linearity}, \nonumber \\
&\le& \int_{\mathbf{R}^d} \bigg|\big(F(x+\mu_1) - F(x+\mu_2)\big)\bigg| dH_0 \nonumber \\
&\le& M \int_{\mathbf{R}^d} |\mu_1-\mu_2| dH_0 = M|\mu_1 - \mu_2| \quad \textrm{by assumption}. \label{eqn_022} \nonumber \end{eqnarray}
Note $G\big(t_0, \mu_1 \big) = E\big[F(X)\big]$ and $G\big(t_0, \mu_2 \big) = E\big[F(Y)\big]$. Since $t_0 \le T$ was arbitrary, this proves the second part, i.e. if $|F(t,x)-F(t,y)|\le M|x-y|$ then $|G(t,x)-G(t,y)|\le M|x-y|$. We can prove the first part, i.e. if $|f_i(t,x)-f_i(t,y)| \le M |x-y|$, then $|g_i(t,x) - g_i(t,y)| \le M|x-y|$, in a similar fashion, and hence we have the lemma. \end{proof} \section{Derivation of $g_i(t,x)$'s} \label{app_gi} For fixed $t_0>0$, let $n=n_{t_0}$, $\mu_1 = \mu_{t_0}^1$, $\beta = \beta_{t_0}$, $p=p_{t_0}$, $x_1 = x_1(t_0) \sim N(z_1, \sigma_1^2)$, and $x_2 = x_2(t_0) \sim N(z_2, \sigma_2^2)$. For $z=(z_1,z_2)'$, we have \begin{eqnarray}
g_3\big(t_0, z\big) &=& E\big[\mu_1(x_1 \wedge n)\big] = \mu_1\Big\{E[x_1\mathbb{I}_{x_1 \le n}]+nPr[x_1 > n]\Big\} \nonumber \\
&=& \mu_1\Bigg[\int_{-\infty}^{n} \frac{x}{\sqrt{2\pi}\sigma_1}\exp\bigg(-\frac{(x-z_1)^2}{2\sigma_1^2}\bigg) dx + nPr[x_1 > n]\Bigg] \nonumber \\
&=& \mu_1\Bigg[\frac{-\sigma_1}{\sqrt{2\pi}}\int_{-\infty}^{n} -\frac{x-z_1}{\sigma_1^2}\exp\bigg(-\frac{(x-z_1)^2}{2\sigma_1^2}\bigg) dx + z_1Pr[x_1 \le n] + nPr[x_1 > n]\Bigg] \nonumber \\
&=& \mu_1 \Bigg[-\sigma_1^2 \frac{1}{\sqrt{2\pi}\sigma_1} \exp\bigg(-\frac{(n-z_1)^2}{2\sigma_1^2}\bigg)+(z_1-n)Pr(x_1\le n) + n\Bigg]. \nonumber \end{eqnarray} Therefore, since $t_0>0$ was arbitrary, we obtain $g_3(t,x)$.\\ Note that $g_4(\cdot,\cdot)$ and $g_5(\cdot,\cdot)$ coincide up to a constant factor with respect to $x$. Therefore, it is enough to derive $g_5(\cdot,\cdot)$. We can show that \begin{eqnarray}
g_5\big(t_0, z\big) &=& E\big[\beta p (x_1 - n)^+\big] = \beta p \big\{E[x_1\vee n]-n\big\} \nonumber \\
&=& \beta p \Big\{E[x_1\mathbb{I}_{x_1 > n}]+nPr[x_1 \le n] -n\Big\} \nonumber \\
&=& \beta p \Bigg[\int_{n}^{\infty} \frac{x}{\sqrt{2\pi}\sigma_1}\exp\bigg(-\frac{(x-z_1)^2}{2\sigma_1^2}\bigg) dx + nPr[x_1 \le n] -n\Bigg] \nonumber \\
&=& \beta p \Bigg[\frac{-\sigma_1}{\sqrt{2\pi}}\int_{n}^{\infty} -\frac{x-z_1}{\sigma_1^2}\exp\bigg(-\frac{(x-z_1)^2}{2\sigma_1^2}\bigg) dx + z_1Pr[x_1 > n] + nPr[x_1 \le n] -n \Bigg] \nonumber \\
&=& \beta p \Bigg[\sigma_1^2 \frac{1}{\sqrt{2\pi}\sigma_1} \exp\bigg(-\frac{(n-z_1)^2}{2\sigma_1^2}\bigg)+(z_1-n)Pr(x_1 > n) \Bigg]. \nonumber \end{eqnarray} Therefore, since $t_0>0$ was arbitrary, we obtain $g_5(t,x)$. \section{Numerical results for Section \ref{sec_numerical}} \label{app_table} \begin{table}[htdp] \caption{Estimation of $E\big[x_1(t)\big]$ over time; difference from simulation} \begin{center} \footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{Experiments} & \multicolumn{10}{|c|}{Time ($t$)} \\ \hline \# & type & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline \multirow{2}{*}{1} & proposed & 6.52 & 0.98 & -3.39 & -1.07 & -3.05 & -0.40 & 0.91 & 0.25 & -0.69 & -0.01 \\ \cline{2-12} & meas. 0 &4.42 & 0.82 & -3.63 & -1.94 & -3.60 & -0.23 & 0.75 & -0.15 & -2.59 & 0.11 \\ \hline \multirow{2}{*}{2} & proposed & 2.69 & 0.44 & -3.13 & -0.82 & -1.08 & -0.32 & 0.48 & 0.15 & -0.46 & -0.05 \\ \cline{2-12} & meas. 0 &3.35 & -0.42 & -2.92 & -1.64 & -1.18 & -1.01 & 0.85 & -0.44 & -0.36 & -0.60 \\ \hline \multirow{2}{*}{3} & proposed & 2.33 & 0.28 & -3.11 & -1.01 & -1.36 & -0.39 & 0.10 & -0.02 & -1.68 & -0.15 \\ \cline{2-12} & meas. 0 &2.34 & -0.42 & -2.67 & -1.55 & -1.49 & -1.00 & 0.52 & -0.49 & -0.54 & -0.53 \\ \hline \multirow{2}{*}{4} & proposed & 1.18 & 0.14 & -1.54 & -0.30 & -0.01 & 0.12 & 0.22 & 0.22 & -0.10 & -0.02 \\ \cline{2-12} & meas. 0 &0.65 & -0.96 & -1.98 & -1.32 & -0.94 & -0.95 & 0.04 & -0.64 & -0.61 & -0.94 \\ \hline \multirow{2}{*}{5} & proposed & 7.04 & 1.36 & -3.67 & -0.69 & -1.38 & -0.57 & 0.80 & 0.23 & -2.82 & -0.63 \\ \cline{2-12} & meas. 0 &5.55 & 1.04 & -3.20 & -0.93 & -1.31 & -0.53 & 0.46 & 0.06 & -1.22 & -0.18 \\ \hline \multirow{2}{*}{6} & proposed & 3.61 & 0.76 & -3.05 & -1.13 & -0.67 & 0.18 & 1.12 & 0.20 & -0.95 & -0.25 \\ \cline{2-12} & meas. 0 &2.53 & -0.07 & -3.01 & -1.72 & -1.46 & -0.43 & 0.60 & -0.47 & -1.57 & -0.80 \\ \hline \multirow{2}{*}{7} & proposed & 1.93 & 0.65 & -1.06 & -0.25 & -0.63 & 0.17 & 0.12 & -0.21 & -0.65 & -0.20 \\ \cline{2-12} & meas. 0 &0.50 & -0.86 & -2.07 & -1.51 & -1.04 & -0.73 & -0.47 & -1.07 & -0.63 & -0.76 \\ \hline \multirow{2}{*}{8} & proposed & 0.72 & 0.07 & -0.46 & 0.04 & -0.04 & -0.14 & 0.42 & -0.07 & -0.48 & -0.01 \\ \cline{2-12} & meas. 
0 &0.04 & -0.98 & -1.40 & -0.91 & -0.57 & -0.85 & -0.13 & -0.69 & -0.73 & -0.46 \\ \hline \multirow{2}{*}{9} & proposed & 0.81 & 0.25 & -0.96 & -0.25 & -0.11 & -0.09 & 0.38 & -0.06 & -0.24 & -0.02 \\ \cline{2-12} & meas. 0 &0.53 & -0.50 & -1.31 & -0.88 & -0.34 & -0.61 & 0.17 & -0.51 & -0.06 & -0.32 \\ \hline \multirow{2}{*}{10} & proposed & 6.44 & 1.18 & -4.73 & -1.73 & -2.21 & -0.45 & 0.30 & -0.01 & -1.10 & -0.11 \\ \cline{2-12} & meas. 0 &6.46 & 0.77 & -3.83 & -1.62 & -2.84 & -0.83 & 0.84 & 0.00 & -2.77 & -0.60 \\ \hline \end{tabular} \end{center} \normalsize \label{tab_mean_x1} \end{table}
\begin{table}[htdp] \caption{Estimation of $E\big[x_2(t)\big]$ over time; difference from simulation} \begin{center} \footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{Experiments} & \multicolumn{10}{|c|}{Time ($t$)} \\ \hline \# & type & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline \multirow{2}{*}{1} & proposed & -2.00 & 3.50 & 2.36 & -0.53 & 0.57 & -1.00 & -0.99 & -0.30 & -0.44 & -0.76 \\ \cline{2-12} & meas. 0 &11.68 & 12.60 & 7.38 & 5.88 & 11.64 & 8.18 & 5.24 & 6.29 & 10.47 & 7.82 \\ \hline \multirow{2}{*}{2} & proposed & -2.22 & 2.71 & 1.90 & -2.44 & -0.94 & -1.82 & -0.91 & -0.10 & -0.38 & -0.76 \\ \cline{2-12} & meas. 0 &45.00 & 53.07 & 33.49 & 37.12 & 41.73 & 44.51 & 31.55 & 37.48 & 40.57 & 43.21 \\ \hline \multirow{2}{*}{3} & proposed & -2.49 & 1.88 & 1.00 & -3.58 & -2.09 & -3.32 & -3.08 & -3.01 & -3.15 & -4.02 \\ \cline{2-12} & meas. 0 &28.64 & 37.65 & 19.44 & 21.88 & 24.73 & 28.38 & 16.37 & 21.45 & 22.77 & 26.96 \\ \hline \multirow{2}{*}{4} & proposed & 0.24 & 2.66 & 1.35 & -1.68 & -0.91 & -0.19 & 0.25 & 0.45 & 0.02 & -0.53 \\ \cline{2-12} & meas. 0 &67.95 & 69.81 & 47.03 & 51.81 & 56.69 & 59.66 & 45.16 & 50.75 & 54.48 & 57.27 \\ \hline \multirow{2}{*}{5} & proposed & -1.01 & 4.41 & 3.16 & -0.05 & 1.55 & 0.57 & -0.58 & -0.07 & -0.09 & -1.63 \\ \cline{2-12} & meas. 0 &9.61 & 12.28 & 7.50 & 5.73 & 11.28 & 9.42 & 5.36 & 6.00 & 9.51 & 8.02 \\ \hline \multirow{2}{*}{6} & proposed & -2.63 & 2.48 & 2.23 & -2.39 & -1.32 & -0.84 & -0.12 & 1.04 & 0.89 & -0.04 \\ \cline{2-12} & meas. 0 &44.23 & 51.84 & 32.45 & 35.11 & 39.27 & 43.72 & 31.00 & 35.94 & 38.83 & 41.83 \\ \hline \multirow{2}{*}{7} & proposed & 0.33 & 3.42 & 3.00 & 1.01 & 0.70 & 0.25 & 0.41 & 0.71 & 0.27 & -0.17 \\ \cline{2-12} & meas. 0 &78.08 & 78.96 & 60.84 & 64.86 & 69.59 & 71.19 & 59.15 & 63.20 & 67.42 & 68.95 \\ \hline \multirow{2}{*}{8} & proposed & 2.81 & 3.03 & 2.40 & 1.45 & 1.29 & 0.58 & -0.08 & 0.41 & 0.12 & -1.11 \\ \cline{2-12} & meas. 
0 &92.68 & 90.60 & 73.97 & 77.06 & 80.98 & 81.48 & 70.82 & 74.24 & 77.96 & 78.55 \\ \hline \multirow{2}{*}{9} & proposed & -0.86 & 1.25 & 1.44 & -0.77 & -0.19 & -0.18 & 0.08 & 0.84 & 0.42 & 0.41 \\ \cline{2-12} & meas. 0 &80.15 & 79.90 & 57.59 & 62.03 & 67.09 & 68.91 & 55.09 & 59.98 & 64.19 & 66.14 \\ \hline \multirow{2}{*}{10} & proposed & -2.67 & 6.62 & 3.79 & -2.50 & -0.49 & -2.18 & -1.99 & -1.35 & -1.38 & -1.21 \\ \cline{2-12} & meas. 0 &8.53 & 23.91 & 10.73 & 8.77 & 10.78 & 13.05 & 5.67 & 8.96 & 9.13 & 11.69 \\ \hline \end{tabular} \end{center} \normalsize \label{tab_mean_x2} \end{table}
\begin{table}[htdp] \caption{Estimation of $Var\big[x_1(t)\big]$ over time; difference from simulation} \begin{center} \footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{Experiments} & \multicolumn{10}{|c|}{Time ($t$)} \\ \hline \# & method & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline \multirow{2}{*}{1} & proposed & 6.94 & 0.94 & -1.92 & -2.02 & -3.66 & -0.20 & 1.70 & -1.02 & 0.49 & 2.89 \\ \cline{2-12} & meas. 0 &-11.03 & 2.93 & -1.93 & 17.31 & -24.44 & 1.42 & 1.66 & 14.89 & -19.73 & 4.16 \\ \hline \multirow{2}{*}{2} & proposed & 2.84 & 3.83 & -6.05 & -0.10 & -0.50 & 4.24 & 1.62 & 2.67 & -1.02 & 1.62 \\ \cline{2-12} & meas. 0 &-6.28 & 16.69 & 6.76 & -14.45 & -12.97 & 17.62 & 12.90 & -11.61 & -14.51 & 15.29 \\ \hline \multirow{2}{*}{3} & proposed & 4.15 & 2.09 & -0.60 & 2.15 & -6.57 & 0.50 & -3.10 & 1.15 & 2.76 & 3.74 \\ \cline{2-12} & meas. 0 &-0.56 & 13.38 & 7.30 & -8.74 & -12.97 & 11.92 & 4.53 & -10.16 & -2.13 & 14.22 \\ \hline \multirow{2}{*}{4} & proposed & -0.52 & -4.36 & -2.81 & 3.07 & -0.03 & 2.96 & 0.79 & 1.27 & 3.30 & 0.35 \\ \cline{2-12} & meas. 0 &-16.38 & 11.18 & 14.13 & -12.86 & -17.81 & 17.93 & 16.94 & -15.33 & -14.05 & 15.32 \\ \hline \multirow{2}{*}{5} & proposed & 6.83 & -0.22 & -2.49 & 0.09 & -1.67 & -3.27 & 1.71 & -4.14 & -0.55 & 1.98 \\ \cline{2-12} & meas. 0 &-2.30 & 1.03 & -1.69 & 10.07 & -10.59 & -1.97 & 1.43 & 5.09 & -7.69 & 3.42 \\ \hline \multirow{2}{*}{6} & proposed & 5.22 & 0.62 & -6.25 & -0.81 & -4.32 & -1.95 & 4.41 & 1.97 & -0.61 & 4.93 \\ \cline{2-12} & meas. 0 &-1.19 & 7.72 & 1.39 & -7.42 & -11.70 & 5.61 & 10.28 & -4.33 & -7.73 & 12.15 \\ \hline \multirow{2}{*}{7} & proposed & 2.91 & -2.29 & -1.04 & 0.92 & 0.21 & 0.18 & 3.14 & -1.10 & 4.36 & 2.28 \\ \cline{2-12} & meas. 0 &-17.83 & 14.52 & 18.27 & -16.55 & -22.37 & 17.07 & 20.88 & -18.77 & -18.14 & 19.01 \\ \hline \multirow{2}{*}{8} & proposed & -1.79 & 0.65 & -0.43 & 0.83 & 3.35 & -0.71 & 3.63 & 2.10 & 1.85 & 0.72 \\ \cline{2-12} & meas. 
0 &-26.38 & 16.44 & 21.26 & -18.37 & -22.66 & 16.73 & 23.72 & -17.42 & -25.80 & 18.25 \\ \hline \multirow{2}{*}{9} & proposed & 0.62 & -0.86 & -0.83 & 3.53 & 3.36 & 5.09 & 1.52 & 1.71 & 2.73 & -1.37 \\ \cline{2-12} & meas. 0 &-17.84 & 13.72 & 17.09 & -14.12 & -17.40 & 19.78 & 18.07 & -16.57 & -19.03 & 14.36 \\ \hline \multirow{2}{*}{10} & proposed & 4.48 & -0.32 & -9.84 & 1.26 & -4.24 & 3.37 & 1.32 & 1.00 & 0.22 & 1.15 \\ \cline{2-12} & meas. 0 &4.12 & 7.27 & -6.22 & -2.69 & -5.55 & 10.87 & 3.68 & -3.27 & -2.12 & 8.98 \\ \hline \end{tabular} \end{center} \normalsize \label{tab_var_x1} \end{table}
\begin{table}[htdp] \caption{Estimation of $Cov\big[x_1(t),x_2(t)\big]$ over time; difference from simulation} \begin{center} \footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{Experiments} & \multicolumn{10}{|c|}{Time ($t$)} \\ \hline \# & type & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline \multirow{2}{*}{1} & proposed & -3.03 & -3.27 & 4.75 & 3.10 & -1.72 & -3.63 & 0.39 & -4.00 & -4.15 & 0.91 \\ \cline{2-12} & meas. 0 &25.05 & -3.88 & 4.43 & -6.75 & 15.78 & -4.69 & -0.30 & -11.23 & 4.26 & -2.03 \\ \hline \multirow{2}{*}{2} & proposed & -6.76 & 7.23 & 2.87 & -4.73 & 11.50 & -0.47 & 6.09 & 5.48 & 5.86 & 4.82 \\ \cline{2-12} & meas. 0 &29.60 & -6.12 & -6.56 & 3.81 & 36.90 & -13.05 & -1.41 & 12.63 & 31.69 & -5.53 \\ \hline \multirow{2}{*}{3} & proposed & -6.74 & -2.24 & 4.53 & -9.51 & -28.97 & -3.44 & -2.57 & -6.43 & 5.52 & -3.76 \\ \cline{2-12} & meas. 0 &25.67 & -15.55 & 0.47 & 6.26 & 6.46 & -15.26 & -6.35 & 8.68 & 31.03 & -13.44 \\ \hline \multirow{2}{*}{4} & proposed & -0.01 & -13.57 & -8.53 & -12.42 & -9.73 & -0.00 & -10.28 & -16.74 & -1.43 & -11.64 \\ \cline{2-12} & meas. 0 &58.61 & -29.39 & -21.29 & 10.14 & 44.87 & -14.34 & -22.28 & 5.51 & 46.98 & -26.05 \\ \hline \multirow{2}{*}{5} & proposed & -7.19 & -0.18 & 0.13 & 2.88 & 7.13 & -8.20 & -0.42 & 1.07 & 4.17 & -1.88 \\ \cline{2-12} & meas. 0 &19.73 & -4.03 & -1.03 & -12.97 & 26.24 & -11.11 & -1.32 & -12.31 & 20.40 & -4.25 \\ \hline \multirow{2}{*}{6} & proposed & 2.91 & 4.90 & 2.63 & -7.64 & -7.99 & 1.17 & 2.14 & 3.80 & 6.94 & 7.92 \\ \cline{2-12} & meas. 0 &37.93 & -15.54 & -15.96 & -3.32 & 25.88 & -18.86 & -15.00 & 6.43 & 34.79 & -10.85 \\ \hline \multirow{2}{*}{7} & proposed & -4.38 & -1.88 & 0.97 & -14.36 & 3.15 & -1.95 & -1.19 & -1.08 & -4.77 & -0.46 \\ \cline{2-12} & meas. 0 &52.91 & -16.88 & -16.53 & -3.19 & 43.26 & -15.54 & -16.02 & 6.73 & 34.99 & -12.29 \\ \hline \multirow{2}{*}{8} & proposed & -20.99 & -4.21 & -6.33 & -4.51 & 3.12 & -3.43 & -1.85 & -6.78 & -1.81 & -0.79 \\ \cline{2-12} & meas. 
0 &64.94 & -8.85 & -22.74 & 13.30 & 51.79 & -11.89 & -15.66 & 7.80 & 44.16 & -8.72 \\ \hline \multirow{2}{*}{9} & proposed & -15.01 & -6.15 & -6.27 & 3.33 & -0.25 & 2.45 & -6.00 & -7.57 & -6.93 & -6.74 \\ \cline{2-12} & meas. 0 &55.84 & -12.34 & -17.97 & 19.55 & 45.76 & -4.17 & -15.42 & 7.67 & 37.97 & -12.70 \\ \hline \multirow{2}{*}{10} & proposed & -18.70 & 7.57 & -3.70 & -4.76 & 8.09 & -6.43 & -2.86 & -0.03 & 2.95 & -1.11 \\ \cline{2-12} & meas. 0 &-21.43 & -2.63 & -5.67 & -0.67 & 8.99 & -15.71 & -4.40 & 3.66 & 4.18 & -9.87 \\ \hline \end{tabular} \end{center} \normalsize \label{tab_cov} \end{table}
\begin{table}[htdp] \caption{Estimation of $Var\big[x_2(t)\big]$ over time; difference from simulation} \begin{center} \footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{Experiments} & \multicolumn{10}{|c|}{Time ($t$)} \\ \hline \# & type & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline \multirow{2}{*}{1} & proposed & -2.15 & 3.52 & 1.48 & -0.72 & -0.34 & -0.45 & 0.78 & 1.59 & 1.31 & 0.83 \\ \cline{2-12} & meas. 0 &6.74 & 5.31 & 2.34 & -8.01 & 5.46 & 1.00 & 1.84 & -2.78 & 2.81 & -1.20 \\ \hline \multirow{2}{*}{2} & proposed & 1.29 & 9.81 & 8.50 & 3.31 & 7.72 & 6.70 & 6.05 & 5.84 & 5.42 & 5.91 \\ \cline{2-12} & meas. 0 &7.06 & 14.88 & -6.56 & -0.09 & 17.07 & 12.12 & -3.90 & 5.44 & 16.94 & 13.19 \\ \hline \multirow{2}{*}{3} & proposed & -5.60 & 2.13 & -0.71 & -5.14 & -1.93 & -3.21 & -2.71 & -1.18 & -0.61 & -0.79 \\ \cline{2-12} & meas. 0 &-2.22 & 4.53 & -10.37 & -5.14 & 4.01 & 1.00 & -8.89 & 0.66 & 6.45 & 5.96 \\ \hline \multirow{2}{*}{4} & proposed & 5.71 & 8.34 & 2.63 & -2.01 & -0.83 & 2.01 & -0.03 & -0.27 & 0.51 & 2.66 \\ \cline{2-12} & meas. 0 &28.49 & 26.24 & -13.87 & -3.06 & 15.93 & 15.77 & -12.78 & 0.50 & 17.18 & 17.62 \\ \hline \multirow{2}{*}{5} & proposed & -0.97 & 4.30 & 1.25 & -0.07 & 3.22 & 3.50 & -0.22 & -0.16 & 1.50 & 0.76 \\ \cline{2-12} & meas. 0 &3.67 & 5.28 & 1.60 & -4.35 & 7.92 & 6.18 & 1.65 & -2.36 & 5.70 & 4.03 \\ \hline \multirow{2}{*}{6} & proposed & 2.23 & 11.10 & 8.33 & 2.94 & 4.21 & 4.30 & 1.38 & 2.79 & 3.63 & 5.03 \\ \cline{2-12} & meas. 0 &11.13 & 21.61 & -3.19 & -0.25 & 12.83 & 14.35 & -6.15 & 1.31 & 12.95 & 14.75 \\ \hline \multirow{2}{*}{7} & proposed & 5.23 & 7.48 & 5.57 & 1.60 & 2.20 & 4.24 & 3.45 & 4.88 & 5.03 & 4.33 \\ \cline{2-12} & meas. 0 &33.03 & 27.56 & -16.08 & -4.39 & 21.67 & 19.11 & -11.03 & 2.36 & 24.85 & 19.64 \\ \hline \multirow{2}{*}{8} & proposed & 10.11 & 7.03 & 3.99 & 2.44 & 3.47 & 2.45 & 2.53 & 2.25 & 1.73 & 2.30 \\ \cline{2-12} & meas. 0 &62.03 & 46.52 & -13.63 & 0.58 & 30.10 & 23.25 & -13.94 & -0.09 & 26.38 & 20.60 \\ \hline \multirow{2}{*}{9} & proposed & 8.18 & 7.49 & 3.22 & 0.53 & 3.83 & 4.55 & 3.14 & 4.24 & 4.90 & 5.28 \\ \cline{2-12} & meas. 
0 &39.93 & 31.88 & -18.20 & -4.36 & 22.39 & 18.66 & -12.73 & 1.21 & 23.00 & 18.98 \\ \hline \multirow{2}{*}{10} & proposed & -0.34 & 12.31 & 5.05 & -2.01 & 1.38 & 1.13 & -3.01 & -3.64 & -4.35 & -1.66 \\ \cline{2-12} & meas. 0 &-5.73 & 7.15 & -1.91 & -3.68 & 0.93 & -2.52 & -8.00 & -4.34 & -3.94 & -5.72 \\ \hline \end{tabular} \end{center} \normalsize \label{tab_var_x2} \end{table}
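As a closing sanity check on the derivations in Appendix \ref{app_gi}, the following sketch (illustrative, not part of the paper) compares the closed form obtained for the Gaussian overflow term $E[(x_1-n)^+]$, i.e. $g_5$ up to the factor $\beta p$, against direct numerical integration of the Gaussian density; the parameter values are arbitrary.

```python
# Illustrative sanity check (not from the paper): closed form for the
# Gaussian overflow term E[(X - n)^+] versus direct numerical integration.
import math

def phi(u):
    """Standard normal density."""
    return math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(u):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def overflow_closed(z, sigma, n):
    """Closed form sigma*phi(a) + (z - n)*(1 - Phi(a)), a = (n - z)/sigma."""
    a = (n - z) / sigma
    return sigma * phi(a) + (z - n) * (1.0 - Phi(a))

def overflow_numeric(z, sigma, n, steps=100000):
    """Midpoint rule for the integral of (z + sigma*u - n)^+ phi(u) du."""
    lo, hi = -10.0, 10.0
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        u = lo + (k + 0.5) * h
        total += max(z + sigma * u - n, 0.0) * phi(u) * h
    return total

z, sigma, n = 98.0, 4.0, 100.0             # arbitrary illustrative values
print(overflow_closed(z, sigma, n), overflow_numeric(z, sigma, n))
```

The two values agree to several decimal places, confirming the algebra in the derivation of $g_5$.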
\end{document}
\begin{document}
\selectlanguage{english}
\title{Strongly anisotropic diffusion problems; asymptotic analysis}
\author{Mihai Bostan \thanks{Laboratoire d'Analyse, Topologie, Probabilit\'es LATP, Centre de Math\'ematiques et Informatique CMI, UMR CNRS 7353, 39 rue Fr\'ed\'eric Joliot Curie, 13453 Marseille Cedex 13 France. E-mail : {\tt bostan@cmi.univ-mrs.fr}} }
\date{ (\today)}
\maketitle
\begin{abstract} The subject matter of this paper concerns anisotropic diffusion equations: we consider heat equations whose diffusion matrices have disparate eigenvalues. We determine first and second order approximations, study their well-posedness, and establish convergence results. The analysis relies on averaging techniques, which have been used previously for studying transport equations whose advection fields have disparate components.
\end{abstract}
\paragraph{Keywords:} Anisotropic diffusion, Variational methods, Multiple scales, Average operator.
\paragraph{AMS classification:} 35Q75, 78A35.
\section{Introduction} \label{Intro} \indent Many real life applications lead to highly anisotropic diffusion equations: flows in porous media, quasi-neutral plasmas, microscopic transport in magnetized plasmas \cite{Bra65}, plasma thrusters, image processing \cite{PerMal90}, \cite{Wei98}, thermal properties of crystals \cite{DiaShaYun91}. In this paper we investigate the behavior of the solutions for heat equations whose diffusion becomes very high along some direction. We consider the problem \begin{equation}
\label{Equ1} \partial _t \ue - \divy ( D(y) \nabla _y \ue ) - \frac{1}{\eps} \divy ( b(y) \otimes b(y) \nabla _y \ue ) = 0, \;\;(t,y) \in \R_+ \times \R ^m \end{equation}
\begin{equation} \label{Equ2} \ue (0,y) = \uein (y), \;\;y \in \R^m
\end{equation}
where $D(y) \in {\cal M}_m (\R)$ and $b(y) \in \R^m$ are a given smooth matrix field and a given smooth vector field on $\R^m$, respectively. For any two vectors $\xi, \eta$, the notation $\xi \otimes \eta$ stands for the matrix whose entry $(i,j)$ is $\xi _i \eta _j$, and for any two matrices $A, B$ the notation $A:B$ stands for $\mathrm{trace}(^t AB) = A_{ij} B_{ij}$ (using the Einstein summation convention). We assume that at any $y \in \R^m$ the matrix $D(y)$ is symmetric and $D(y) + b (y) \otimes b(y)$ is positive definite \begin{equation} \label{Equ3}
^t D (y) = D(y),\;\;\exists \;d >0 \;\;\mbox{such that}\;\;D(y)\xi\cdot\xi + (b(y) \cdot \xi)^2 \geq d \;|\xi|^2,\;\;\xi \in \R ^m,\;\;y \in \R ^m. \end{equation} The vector field $b(y)$, along which the anisotropy is aligned, is supposed divergence free, {\it i.e.,} $\divy b = 0$. We intend to analyse the behavior of \eqref{Equ1}, \eqref{Equ2} for small $\eps$, say $0 < \eps \leq 1$. In that case $D(y) + \frac{1}{\eps} b(y) \otimes b(y)$ remains positive definite and if $(\uein)_\eps$ remains in a bounded set of $\lty$, then $(\ue)_\eps$ remains in a bounded set of $\litlty{}$ since, for any $t \in \R_+$, we have \begin{align*}
\frac{1}{2}\inty{(\ue (t,y))^2} & + d \intsy{|\nabla _y \ue (s,y)|^2} \leq \frac{1}{2}\inty{(\ue (t,y))^2} \\ & + \intsy{\left \{D(y) + \frac{1}{\eps} b(y) \otimes b(y) \right \} : \nabla _y \ue (s,y) \otimes \nabla _y \ue (s,y)} \\ & = \frac{1}{2}\inty{(\uein (y))^2}. \end{align*} In particular, when $\eps \searrow 0$, $(\ue)_\eps$ converges, at least weakly $\star$ in $\litlty{}$, towards some limit $u \in \litlty{}$. Notice that explicit methods are not well adapted to the numerical approximation of \eqref{Equ1}, \eqref{Equ2} when $\eps \searrow 0$, since the CFL condition leads to severe time step constraints like \[
\frac{d}{\eps} \frac{\Delta t}{|\Delta y |^2} \leq \frac{1}{2} \] where $\Delta t$ is the time step and $\Delta y $ is the grid spacing. In such cases implicit methods are desirable \cite{BalTilHow08}, \cite{ShaHam10}.
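The time step restriction above can be observed directly in a minimal numerical sketch (ours, not part of the paper): explicit Euler for the one-dimensional heat equation $\partial_t u = \frac{d}{\eps}\,\partial_y^2 u$ on a periodic grid stays bounded when $\frac{d}{\eps}\frac{\Delta t}{|\Delta y|^2}\le \frac{1}{2}$ and blows up when the bound is violated.

```python
# Illustrative sketch (not from the paper): stability of explicit Euler for
# u_t = nu * u_yy, nu = d/eps, on a periodic grid with ny points.
import math

def explicit_heat_max(nu, dt, ny=64, steps=200):
    """Run the explicit scheme and return the final maximum of |u|."""
    dy = 1.0 / ny
    # Smooth initial data plus a tiny alternating perturbation that seeds
    # the highest-frequency grid mode (the first one to go unstable).
    u = [math.sin(2.0 * math.pi * k * dy) + 1e-6 * (-1.0) ** k
         for k in range(ny)]
    r = nu * dt / dy ** 2                  # the CFL-type ratio
    for _ in range(steps):
        u = [u[k] + r * (u[(k + 1) % ny] - 2.0 * u[k] + u[(k - 1) % ny])
             for k in range(ny)]
    return max(abs(v) for v in u)

nu = 1.0 / 0.01                            # d = 1, eps = 0.01
dy = 1.0 / 64
dt_stable = 0.5 * dy ** 2 / nu             # exactly at the CFL bound
print(explicit_heat_max(nu, dt_stable))        # bounded: the scheme is stable
print(explicit_heat_max(nu, 4.0 * dt_stable))  # unstable: blows up
```

Note that $\Delta t$ must shrink proportionally to $\eps$, which is why implicit methods become attractive in the strongly anisotropic regime.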
Rather than solving \eqref{Equ1}, \eqref{Equ2} for small $\eps >0$, we concentrate on the limit model satisfied by the limit solution $u = \lime \ue$. We will see that the limit model is still a parabolic problem, decreasing the $\lty$ norm and satisfying the maximum principle. At least formally, the limit solution $u$ is the dominant term of the expansion \begin{equation} \label{Equ6} \ue = u + \eps u ^1 + \eps ^2 u ^2 + ... \end{equation} Plugging the Ansatz \eqref{Equ6} into \eqref{Equ1} leads to \begin{equation} \label{Equ7} \divy (b \otimes b \nabla _y u ) = 0,\;\;(t,y) \in \R_+ \times \R ^m \end{equation}
\begin{equation} \label{Equ8} \partial _t u - \divy (D \nabla _y u ) - \divy ( b \otimes b \nabla _y u^1) = 0,\;\;(t,y) \in \R_+ \times \R ^m \end{equation} \[ \vdots \] Clearly, the constraint \eqref{Equ7} says that at any time $t \in \R_+$, $b \cdot \nabla _y u = 0$, or equivalently $u(t,\cdot)$ remains constant along the flow of $b$, see \eqref{EquFlow} \[ u(t, Y(s;y)) = u(t,y),\;\;s \in \R,\;\;y \in \R^m. \] The closure for $u$ comes by eliminating $u^1$ in \eqref{Equ8}, combined with the fact that \eqref{Equ7} holds true at any time $t \in \R_+$. The symmetry of the operator $\divy (b \otimes b \nabla _y)$ implies that $\partial _t u - \divy (D \nabla _y u)$ belongs to $(\ker (b \cdot \nabla _y ))^\perp$ and therefore we obtain the weak formulation \begin{equation} \label{Equ9} \frac{\md}{\md t}\inty{u(t,y) \varphi (y)} + \inty{D \nabla _y u (t,y) \cdot \nabla _y \varphi (y) } = 0,\;\;\varphi \in \hoy \cap \kerbg{}. \end{equation} The above formulation is not satisfactory, since the choice of test functions is constrained by \eqref{Equ7}; \eqref{Equ9} is useless for numerical simulation. It is more convenient to reduce \eqref{Equ9} to another problem, by removing the constraint \eqref{Equ7}. The method we employ here is related to the averaging technique which has been used to handle transport equations with disparate advection fields \cite{BosAsyAna}, \cite{BosTraSin}, \cite{BosGuidCent3D}, \cite{Bos12} \begin{equation} \label{Equ10} \partial _t \ue + a(t,y) \cdot \nabla _y \ue + \frac{1}{\eps} b (y) \cdot \nabla _y \ue = 0,\;\;(t,y) \in \R_+ \times \R^m \end{equation}
\begin{equation} \label{Equ11} \ue (0,y) = \uein (y),\;\;y \in \R^m. \end{equation} Using the same Ansatz \eqref{Equ6} we obtain as before that $b \cdot \nabla _y u (t,\cdot) = 0, t \in \R_+$ and the closure for $u$ writes \begin{equation} \label{Equ12} \mathrm{Proj}_{\kerbg} \{ \partial _t u + a\cdot \nabla _y u \} = 0 \end{equation} or equivalently \begin{equation} \label{Equ14} \frac{\md }{\md t} \inty{u(t,y) \varphi (y) } - \inty{u(t,y) \;a \cdot \nabla _y \varphi } = 0 \end{equation} for any smooth function $\varphi$ satisfying the constraint $b \cdot \nabla _y \varphi = 0$. The method relies on averaging since the projection on $\kerbg$ coincides with the average along the flow of $b$, cf. Proposition \ref{AverageOperator}. As $u$ satisfies the constraint $b \cdot \nabla _y u = 0$, it is easily seen that $\mathrm{Proj}_{\kerbg} \partial _t u = \partial _t u$. A simple case to start with is when the transport operators $a \cdot \nabla _y$ and $b \cdot \nabla _y$ commute, {\it i.e.,} $[b \cdot \nabla _y, a \cdot \nabla _y ] = 0$. In this case $a \cdot \nabla _y$ leaves invariant the subspace of the constraints, implying that $\mathrm{Proj}_{\kerbg} \{a \cdot \nabla _y u \} = a \cdot \nabla _y u$. Therefore \eqref{Equ12} reduces to a transport equation, and it is easily seen that this equation propagates the constraint, which allows us to remove it. Things happen similarly when the transport operators $a \cdot \nabla _y, b \cdot \nabla _y$ do not commute, but the transport operator of the limit model may change. In \cite{BosTraSin} we prove that there is a transport operator $A \cdot \nabla _y$, commuting with $b \cdot \nabla _y$, such that for any $u \in \kerbg$ we have \[ \mathrm{Proj}_{\kerbg} \{a \cdot \nabla _y u \} = A \cdot \nabla _y u. \] Once we have determined the field $A$, \eqref{Equ12} can be replaced by $\partial _t u + A \cdot \nabla _y u = 0$, which propagates the constraint $b \cdot \nabla _y u (t) = 0$ as well.
Coming back to the formulation \eqref{Equ9}, we are looking for a matrix field $\tilde{D}(y)$ such that $\divy (\tilde{D} \nabla _y)$ commutes with $b \cdot \nabla _y$ and \[ \mathrm{Proj}_{\kerbg} \{\divy (D(y) \nabla _y u ) \}= \divy (\tilde{D}(y)\nabla _y u),\;\;u \in \kerbg{}. \] We will see that, under suitable hypotheses, it is possible to find such a matrix field $\tilde{D}$, and therefore \eqref{Equ9} reduces to the parabolic model \begin{equation} \label{Equ15} \partial _t u - \divy (\tilde{D}(y) \nabla _y u ) = 0,\;\;(t,y) \in \R_+ \times \R^m. \end{equation} The matrix field $\tilde{D}$ will appear as the orthogonal projection of the matrix field $D$ (with respect to some scalar product to be determined) on the subspace of matrix fields $A$ satisfying $[b\cdot \nabla _y, \divy(A \nabla _y)] = 0$. The field $\tilde{D}$ inherits the properties of $D$, like symmetry, positivity, etc.
Our paper is organized as follows. The main results are presented in Section \ref{ModMainRes}. Section \ref{AveOpe} is devoted to the interplay between the average operator and first and second order linear differential operators. In particular we justify the existence of the {\it averaged} matrix field $\tilde{D}$ associated to any field $D$ of symmetric, positive matrix. The first order approximation is justified in Section \ref{FirstOrdApp} and the second order approximation is discussed in Section \ref{SecOrdApp}. Several technical proofs are gathered in Appendix \ref{A}.
\section{Presentation of the models and main results} \label{ModMainRes} \noindent We assume that the vector field $b :\R^m \to \R^m$ is smooth and divergence free \begin{equation} \label{Equ21} b \in W^{1,\infty}_{\mathrm{loc}} (\R^m),\;\;\divy b = 0 \end{equation} with linear growth \begin{equation} \label{Equ22}
\exists \;C > 0\;\;\mbox{such that}\;\; |b(y)| \leq C (1 + |y|),\;\;y \in \R^m. \end{equation} We denote by $Y(s;y)$ the characteristic flow associated to $b$ \begin{equation} \label{EquFlow} \frac{\md Y}{\md s} = b(Y(s;y)),\;\;Y(0;y) = y,\;\;s \in \R,\;\;y \in \R^m. \end{equation} Under the above hypotheses, this flow has the regularity $Y \in W^{1,\infty} _{\mathrm{loc}} (\R \times \R^m)$ and is measure preserving.
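As a concrete illustration (ours, not part of the paper), take the divergence-free rotation field $b(y)=(-y_2,y_1)$: the flow $Y(s;y)$ is the rotation by angle $s$, and measure preservation, i.e. $\det \partial_y Y(s;y)=1$, can be checked numerically by integrating \eqref{EquFlow} and estimating the Jacobian by finite differences.

```python
# Illustration (not from the paper): for the divergence-free field
# b(y) = (-y2, y1) the flow Y(s; y) is a rotation, hence measure
# preserving: det d_y Y(s; y) = 1.
import math

def b(y):
    """The rotation field b(y) = (-y2, y1)."""
    return (-y[1], y[0])

def flow(s, y, steps=1000):
    """RK4 integration of dY/ds = b(Y), Y(0; y) = y."""
    h = s / steps
    y = list(y)
    for _ in range(steps):
        k1 = b(y)
        k2 = b([y[i] + 0.5 * h * k1[i] for i in range(2)])
        k3 = b([y[i] + 0.5 * h * k2[i] for i in range(2)])
        k4 = b([y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
             for i in range(2)]
    return y

def jac_det(s, y, eps=1e-6):
    """Finite-difference estimate of det d_y Y(s; y)."""
    cols = []
    for j in range(2):
        yp, ym = list(y), list(y)
        yp[j] += eps
        ym[j] -= eps
        fp, fm = flow(s, yp), flow(s, ym)
        cols.append([(fp[i] - fm[i]) / (2.0 * eps) for i in range(2)])
    return cols[0][0] * cols[1][1] - cols[0][1] * cols[1][0]

s, y0 = 1.0, (0.3, -0.7)
exact = [math.cos(s) * y0[0] - math.sin(s) * y0[1],
         math.sin(s) * y0[0] + math.cos(s) * y0[1]]
print(flow(s, y0), exact)
print(jac_det(s, y0))                      # close to 1
```

The computed flow matches the exact rotation and the Jacobian determinant is $1$ up to discretization error, as the divergence-free assumption predicts.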
We concentrate on matrix fields $A(y) \in \loloc{}$ such that $[b(y) \cdot \nabla _y, \divy ( A(y) \nabla _y)] = 0$, let us say in $\dpri$. We check that the commutator between $b \cdot \nabla _y $ and $\divy (A \nabla _y)$ writes, cf. Proposition \ref{ComSecOrd}, \[ [b(y) \cdot \nabla _y, \divy ( A(y) \nabla _y)] = \divy ( [b,A]\nabla _y)\;\;\mbox{in}\;\;\dpri \] where the bracket between $b$ and $A$ is given by \[ [b,A] := (b \cdot \nabla _y) A - \partial _y b A (y) - A(y) \;^t \partial _y b,\;\;y \in \R^m. \] Several characterizations for the solutions of $[b,A] = 0$ in $\dpri$ are indicated in Propositions \ref{MFI}, \ref{WMFI}, among which \begin{equation} \label{Equ16} A(Y(s;y)) = \partial _y Y (s;y) A(y) \;{^t \partial _y Y} (s;y),\;\;s\in \R,\;\;y \in \R ^m. \end{equation} We assume that there is a matrix field $P(y)$ such that \begin{equation} \label{Equ56} ^t P = P,\;\;P(y) \xi \cdot \xi >0,\;\;\xi \in \R^m,\;\; y \in \R^m,\;\;P^{-1}, P \in \ltloc{},\;\;[b,P]= 0 \;\mbox{in}\;\dpri. \end{equation} We introduce the set \[ H_Q = \{ A = A(y)\;:\; \inty{Q(y) A(y) : A(y) Q(y) } < +\infty\} \] where $Q = P ^{-1}$, and the scalar product \[ (A,B)_Q = \inty{QA:BQ},\;\;A, B \in H_Q. \] The equality \eqref{Equ16} suggests introducing the family of maps $G(s): H_Q \to H_Q$, $s \in \R$, $G(s)A = (\partial _y Y )^{-1}(s; \cdot) A(Y(s;\cdot)) \;^t (\partial _y Y )^{-1}(s;\cdot)$, which is a $C^0$-group of unitary operators on $H_Q$, cf. Proposition \ref{Groupe}. This allows us to introduce $L$, the infinitesimal generator of $(G(s))_{s\in \R}$. The operator $L$ is skew-adjoint on $H_Q$ and its kernel coincides with $\{A \in H_Q\subset \loloc{} : [b,A] = 0\;\mbox{in} \; \dpri\}$, cf. Proposition \ref{PropOpeL}. The averaged matrix field $\ave{D}_Q$ associated to any $D \in H_Q$ appears as the long time limit of the solution of \begin{equation} \label{Equ67} \partial _t A - L(L(A)) = 0,\;\;t \in \R_+ \end{equation}
\begin{equation} \label{Equ68} A(0) = D. \end{equation} The notation $\ave{\cdot}$ stands for the orthogonal projection (in $\lty{}$) on $\kerbg{}$. \begin{thm} \label{AveMatDif} Assume that \eqref{Equ21}, \eqref{Equ22}, \eqref{Equ56} hold true. Then for any $D \in H_Q \cap \liy{}$ the solution of \eqref{Equ67}, \eqref{Equ68} converges weakly in $H_Q$ as $t \to +\infty$ towards the orthogonal projection of $D$ on $\ker L$ \[ \lim _{t \to +\infty} A(t) = \ave{D}_Q\;\mbox{ weakly in }\;H_Q,\;\;\ave{D}_Q := \mathrm{Proj} _{\ker L } D. \] If $D$ is symmetric and positive, then so is the limit $\ave{D}_Q = \lim _{t \to +\infty} A(t)$, and it satisfies \begin{equation} \label{Equ72} L (\ave{D}_Q) = 0,\;\;\nabla _y u \cdot \ave{D}_Q \nabla _y v = \ave{\nabla _y u \cdot D\nabla _y v},\;\;u, v \in H^1(\R^m) \cap \kerbg{} \end{equation}
\begin{equation} \label{Equ72Bis}\ave{\nabla _y u \cdot \ave{D}_Q \nabla _y (b \cdot \nabla _y \psi )} = 0,\;\;u \in H^1(\R^m) \cap \kerbg{},\;\;\psi \in C^2_c (\R^m). \end{equation} \end{thm}
The first order approximation (for initial data not necessarily well prepared) is justified by
\begin{thm} \label{MainResult1} Assume that \eqref{Equ21}, \eqref{Equ22}, \eqref{Equ56}, \eqref{Equ26} hold true and that $D$ is a field of symmetric positive matrices, which belongs to $H_Q$. Consider a family of initial conditions $(\uein)_{\eps } \subset \lty$ such that $(\ave{\uein})_\eps$ converges weakly in $\lty{}$, as $\eps \searrow 0$, towards some function $\uin$. We denote by $\ue$ the solution of \eqref{Equ1}, \eqref{Equ2} and by $u$ the solution of \begin{equation} \label{Equ75} \partial _t u - \divy ( \ave{D}_Q \nabla _y u ) = 0,\;\;t \in \R_+,\;\;y \in \R^m \end{equation}
\begin{equation} \label{Equ76} u(0,y) = \uin (y),\;\;y \in \R^m \end{equation} where $\ave{D}_Q$ is associated to $D$, cf. Theorem \ref{AveMatDif}. Then we have the convergences \[ \lime \ue = u\;\;\mbox{weakly} \star \mbox{ in } \litlty{} \]
\[ \lime \nabla _y \ue = \nabla _y u\;\;\mbox{weakly} \mbox{ in } \lttlty{}. \] \end{thm}
The derivation of the second order approximation is more complicated and requires the computation of some other matrix fields. For simplicity, we content ourselves with formal results. The crucial point is to introduce the decomposition given by \begin{thm} \label{Decomposition} Assume that \eqref{Equ21}, \eqref{Equ22}, \eqref{Equ56}, \eqref{Equ26} hold true and that $L$ has closed range. Then, for any field of symmetric matrices $D \in H_Q$, there is a unique field of symmetric matrices $F \in \dom (L^2) \cap (\ker L )^\perp$ such that \[ - \divy ( D \nabla _y) = - \divy ( \ave{D}_Q \nabla _y ) + \divy (L^2 (F)\nabla _y ) \] that is \begin{align*} & \inty{D \nabla _y u \cdot \nabla _y v } - \inty{\ave{D}_Q\nabla _y u \cdot \nabla _y v } \\ & = \inty{L(F) \nabla _y u \cdot \nabla _y (b \cdot \nabla _y v)} + \inty{L(F) \nabla _y ( b \cdot \nabla _y u ) \cdot \nabla _y v} \\ & = - \inty{F \nabla _y ( b \cdot \nabla _y ( b \cdot \nabla _y u)) \cdot \nabla _y v} - 2 \inty{F \nabla _y ( b \cdot \nabla _y u) \cdot \nabla _y ( b \cdot \nabla _y v)}\\ & - \inty{F \nabla _y u \cdot \nabla _y ( b \cdot \nabla _y ( b \cdot \nabla _y v))} \end{align*} for any $u, v \in C^3_c(\R^m)$. \end{thm} After some computations we obtain, at least formally, the following model, replacing the hypothesis \eqref{Equ56} by the stronger one: there is a matrix field $R(y)$ such that \begin{equation} \label{Equ90} \det R(y)\neq 0,\;y \in \R^m,\;Q = {^t R} R \mbox{ and }P = Q^{-1} \in \ltloc{},\;b \cdot \nabla _y R + R \partial _y b = 0 \mbox{ in } \dpri. \end{equation}
\begin{thm} \label{MainResult2} Assume that \eqref{Equ21}, \eqref{Equ22}, \eqref{Equ23}, \eqref{Equ26}, \eqref{Equ90} hold true and that $D$ is a field of symmetric positive matrices which belongs to $H_Q \cap \liy{}$. Consider a family of initial conditions $(\uein)_\eps \subset \lty{}$ such that $(\frac{\ave{\uein} - \uin }{\eps} ) _{\eps >0}$ converges weakly in $\lty{}$, as $\eps \searrow 0$, towards a function $\vin{}$, for some function $\uin \in \kerbg{}$. Then, a second order approximation for \eqref{Equ1} is provided by \begin{equation} \label{IntroEqu87}\partial _t \tue - \divy ( \ave{D}_Q \nabla _y \tue) + \eps [ \divy ( \ave{D}_Q \nabla _y ), \divy (F \nabla _y ) ]\tue - \eps S(\tue) = 0,\;\;(t,y) \in \R_+ \times \R^m \end{equation}
\begin{equation} \label{NewIC} \tue (0,y) = \uin (y) + \eps ( \vin (y) + \win (y)),\;\;\win = \divy ( F \nabla _y \uin),\;\;y \in \R ^m \end{equation} for some fourth order linear differential operator $S$, see Proposition \ref{DifOpe}, and the matrix field $F$ given by Theorem \ref{Decomposition}. \end{thm}
\section{The average operator} \label{AveOpe} \noindent We assume that the vector field $b : \R^m \to \R^m$ satisfies \eqref{Equ21}, \eqref{Equ22}. We consider the linear operator $u \to b \cdot \nabla _y u = \divy(ub)$ in $\lty{}$, whose domain is defined by \[ \dom (b \cdot \nabla _y ) = \{ u \in \lty{} \;:\; \divy(ub) \in \lty\}. \] It is well known that \[ \kerbg = \{ u \in \lty{}\;:\; u (Y(s;\cdot)) = u (\cdot), \;s \in \R\}. \] The orthogonal projection on $\kerbg{}$ (with respect to the scalar product of $\lty{}$), denoted by $\ave{\cdot}$, reduces to averaging along the characteristic flow $Y$, cf. \cite{BosTraSin}, Propositions 2.2, 2.3. \begin{pro} \label{AverageOperator} For any function $u \in \lty{}$ the family $\ave{u}_T : = \frac{1}{T} \int _0 ^T u (Y(s;\cdot))\md s$, $T>0$, converges strongly in $\lty{}$, when $T \to + \infty$, towards the orthogonal projection of $u$ on $\kerbg{}$ \[ \lim _{T \to +\infty} \ave{u}_T = \ave{u},\;\;\ave{u} \in \kerbg{} \;\mbox{and} \; \inty{(u - \ave{u}) \varphi } = 0,\;\forall\; \varphi \in \kerbg{}. \] \end{pro} Since $b \cdot \nabla _y$ is antisymmetric, one easily gets \begin{equation} \label{Equ24} \overline{\ran (b \cdot \nabla _y ) } = (\kerbg{} ) ^\perp = \ker ( \mathrm{Proj}_{\kerbg{}} ) = \ker \ave{\cdot}. \end{equation}
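\begin{remark}
A simple illustration (a standard example, not needed in the sequel) is the rigid rotation field $b(y) = (-y_2, y_1)$ in $\R^2$, which is divergence free, Lipschitz continuous and has linear growth. Its flow is $Y(s;y) = R(s)y$, where $R(s)$ denotes the rotation of angle $s$, so that $\kerbg{}$ consists of the radial functions of $\lty{}$. Since the flow is $2\pi$-periodic, the limit in Proposition \ref{AverageOperator} reduces to
\[
\ave{u}(y) = \frac{1}{2\pi} \int _0 ^{2\pi} u(R(s)y)\;\md s,\;\;u \in \lty{}
\]
that is, the angular average of $u$ over the circle of radius $|y|$.
\end{remark}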
\begin{remark} \label{DetFun} If $u \in \lty{}$ satisfies $\inty{u(y) b \cdot \nabla _y \psi } = 0, \forall \;\psi \in C^1 _c (\R^m)$ and $\inty{u\varphi }= 0, \forall \; \varphi \in \kerbg{}$, then $u = 0$. Indeed, as $u \in \lty{} \subset \loloc{}$, the first condition says that $b \cdot \nabla _y u = 0$ in $\dpri{}$ and thus $u \in \kerbg{}$. Using now the second condition with $\varphi = u$ one gets $\inty{u^2} = 0$ and thus $u = 0$. \end{remark}
In the particular case when $\ran (b \cdot \nabla _y)$ is closed, which is equivalent to the Poincar\'e inequality (cf. \cite{Brezis} pp. 29) \begin{equation} \label{Equ23} \exists\;C_P >0\;:\; \left ( \inty{(u - \ave{u})^2}\right ) ^{1/2} \leq C_P \left ( \inty{(b \cdot \nabla _y u ) ^2} \right ) ^{1/2},\;\;u \in \dom (b \cdot \nabla _y) \end{equation} \eqref{Equ24} implies the solvability condition \[ \exists \; u \in \dom ( b \cdot \nabla _y ) \;\mbox{ such that }\; b \cdot \nabla _y u = v\;\mbox{ iff } \ave{v} = 0. \]
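\begin{remark}
For instance, for the rigid rotation field $b(y) = (-y_2, y_1)$ in $\R^2$ we have $b \cdot \nabla _y u = \partial _\theta u$ in polar coordinates, and the Poincar\'e inequality \eqref{Equ23} holds with $C_P = 1$: by the Poincar\'e-Wirtinger inequality on each circle $\{|y| = r\}$, applied to the function $u - \ave{u}$, whose angular average vanishes,
\[
\int _0 ^{2\pi} (u - \ave{u})^2 \;\md \theta \leq \int _0 ^{2\pi} (\partial _\theta u )^2 \;\md \theta
\]
and it remains to integrate with respect to $r \,\md r$ over $\R_+$.
\end{remark}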
If $\|\cdot \|$ stands for the $\lty{}$ norm, we have \begin{pro} \label{Inverse} Under the hypothesis \eqref{Equ23}, $b \cdot \nabla _y $ restricted to $\ker \ave{\cdot}$ is a one-to-one map onto $\ker \ave{\cdot}$. Its inverse, denoted $(b \cdot \nabla _y )^{-1}$, belongs to ${\cal L}(\ker \ave{\cdot}, \ker \ave{\cdot})$ and \[
\|(b \cdot \nabla _y ) ^{-1} \|_{{\cal L}(\ker \ave{\cdot}, \ker \ave{\cdot})} \leq C_P. \] \end{pro}
Another operator which will play a crucial role is ${\cal T } = - \divy (b \otimes b \nabla _y)$ whose domain is \[ \dom ({\cal T}) = \{ u \in \dom (b \cdot \nabla _y)\;:\; b \cdot \nabla _y u \in \dom ( b \cdot \nabla _y )\}. \] The operator ${\cal T}$ is self-adjoint and, under the previous hypotheses, has the same kernel and range as $b\cdot \nabla _y$.
\begin{pro} \label{KerRanTau} Under the hypotheses \eqref{Equ21}, \eqref{Equ22}, \eqref{Equ23} the operator ${\cal T}$ satisfies \[ \ker {\cal T} = \kerbg,\;\;\ran {\cal T} = \ran (b \cdot \nabla _y ) = \ker \ave{\cdot} \]
and $\| u - \ave{u}\| \leq C_P ^2 \|{\cal T} u \|,u \in \dom ({\cal T})$. \end{pro}
\begin{proof} Obviously $\kerbg{} \subset \ker {\cal T}$. Conversely, for any $u \in \ker {\cal T}$ we have $\inty{\;(\bg u )^2} = \inty{\;u {\cal T}u} = 0$ and therefore $ u \in \kerbg{}$.
Clearly $\ran {\cal T} \subset \ran ( \bg{}) = \ker \ave{\cdot}$. Consider now $w \in \ker \ave{\cdot} = \ran ( \bg{})$. By Proposition \ref{Inverse} there is $v \in \ker \ave{\cdot} \cap \dom (\bg)$ such that $\bg v = w$. Applying Proposition \ref{Inverse} once more, there is $ u \in \ker \ave{\cdot} \cap \dom (\bg)$ such that $\bg u = v$. We deduce that $u \in \dom ({\cal T})$ and $w = {\cal T}(-u)$. Finally, for any $u \in \dom ({\cal T})$ we apply twice the Poincar\'e inequality, taking into account that $\ave{\bg u } = 0$ \[
\| u - \ave{u}\| \leq C_P \|\bg u \| \leq C_P ^2 \|{\cal T} u \|. \] \end{proof}
\begin{remark} \label{AveLone} The average along the flow of $b$ can be defined in any Lebesgue space $L^q (\R^m)$, $q \in [1,+\infty]$. We refer to \cite{BosTraSin} for a complete presentation of these results. \end{remark}
\subsection{Average and first order differential operators} \label{FirstOrdDiffOpe} \noindent We are looking for first order derivations commuting with the average operator. Recall that the commutator $[\xi \cdot \nabla _y, \eta \cdot \nabla _y]$ between two first order differential operators is still a first order differential operator, whose vector field, denoted by $[\xi, \eta]$, is given by the Poisson bracket between $\xi$ and $\eta$ \[ [\xi \cdot \nabla _y, \eta \cdot \nabla _y]:= \xi \cdot \nabla _y ( \eta \cdot \nabla _y ) - \eta \cdot \nabla _y ( \xi \cdot \nabla _y ) = [\xi, \eta] \cdot \nabla _y \] where $[\xi, \eta] = (\xi \cdot \nabla _y ) \eta - ( \eta \cdot \nabla _y ) \xi$. The two vector fields $\xi$ and $\eta$ are said to be in involution iff their Poisson bracket vanishes.
Assume that $c(y)$ is a smooth vector field, satisfying $c(Y(s;y)) = \partial _y Y (s;y) c(y), s\in \R, y \in \R^m$, where $Y$ is the flow of $b$ (not necessarily divergence free here). Taking the derivative with respect to $s$ at $s = 0$ yields $(b \cdot \nabla _y ) c = \partial _y b \;c(y)$, saying that $[b,c] = 0$. Actually the converse implication holds true and we obtain the following characterization for vector fields in involution, which is valid in distributions as well (see Appendix \ref{A} for proof details).
\begin{pro} \label{VFI} Consider $b \in W^{1,\infty}_{\mathrm{loc}} (\R^m)$ (not necessarily divergence free), with linear growth and $c \in \loloc{}$. Then $(b \cdny) c - \partial _y b \;c = 0$ in $\dpri$ iff \begin{equation} \label{Equ34} c (Y(s;y)) = \partial _y Y(s;y) c(y),\;\;s\in \R,\;\;y \in \R^m. \end{equation} \end{pro}
We establish also weak formulations characterizing the involution between two fields, in distribution sense (see Appendix \ref{A} for the proof). The notation $w_s$ stands for $w \circ Y(s;\cdot)$. \begin{pro} \label{WVFI} Consider $b \in W^{1,\infty}_{\mathrm{loc}} (\R^m)$, with linear growth and zero divergence and $c \in \loloc{}$. Then the following statements are equivalent\\ 1. \[[b,c] = 0 \;\mbox{in}\; \dpri{} \] 2. \begin{equation} \label{Equ41} \inty{(c \cdny u )v_{-s} } = \inty{(c\cdny u_s) v },\;\;\forall \;u, v \in C^1_c(\R^m) \end{equation}
3. \begin{equation} \label{Equ42} \inty{c \cdny u \;b \cdny v } + \inty{c \cdny (b \cdny u ) v } = 0,\;\;\forall\; u \in C^2 _c (\R^m),\;\;v \in C^1 _c (\R^m). \end{equation} \end{pro}
\begin{remark} \label{VecDiv} If $[b,c]=0$ in $\dpri{}$, applying \eqref{Equ41} with $v = 1$ on the support of $u_s$ (and therefore $v_{-s} = 1$ on the support of $u$) yields \[ \inty{c \cdny u} = \inty{c\cdny u_s},\;\;u \in C^1_c(\R^m) \] saying that $\divy c$ is constant along the flow of $b$ (in $\dpri{}$). \end{remark}
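\begin{remark}
A simple example (given here only for illustration) is provided by the rigid rotation field $b(y) = (-y_2, y_1)$ in $\R^2$ and $c(y) = y$. Since $\partial _y c = I$ and $b$ is linear, we obtain $(b \cdny) c = b = \partial _y b \;c$, that is $[b,c] = 0$, and \eqref{Equ34} holds true: $c(Y(s;y)) = R(s) y = \partial _y Y (s;y) c(y)$, $R(s)$ standing for the rotation of angle $s$. Notice also that $\divy c = 2$ is constant along the flow of $b$, in agreement with Remark \ref{VecDiv}.
\end{remark}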
We claim that for vector fields $c$ in involution with $b$, the derivation $c \cdny $ commutes with the average operator. \begin{pro} \label{AveComFirstOrder} Consider a vector field $c \in \loloc{}$ with bounded divergence, in involution with $b$, that is $[b,c] = 0$ in $\dpri{}$. Then the operators $u \to c \cdny u$, $u \to \divy(uc)$ commute with the average operator {\it i.e.,} for any $u \in \dom ( c\cdny )= \dom (\divy (\cdot \;c))$ we have $\ave{u} \in \dom ( c\cdny )= \dom (\divy (\cdot \;c))$ and \[ \ave{c \cdny u} = c\cdny \ave{u},\;\;\ave{\divy(uc) } = \divy ( \ave{u}c). \] \end{pro}
\begin{proof} Consider $u \in \dom (c \cdny ), s \in \R$ and $\varphi \in C^1 _c (\R^m)$. We have \begin{align} \label{Equ43} \inty{u_s c \cdny \varphi } & = \inty{u (c \cdny \varphi)_{-s}} \\ & = \inty{u (c \cdny ) \varphi _{-s} } \nonumber \\ & = - \inty{\divy (uc) \varphi _{-s}} \nonumber \\ & = - \inty{(\divy (uc))_s \varphi (y)} \nonumber \end{align} saying that $u _s \in \dom ( c \cdny ) = \dom ( \divy ( \cdot \;c ))$ and $ \divy (u_s c ) = ( \divy (uc))_s$. We deduce $c \cdny u_s = (c \cdny u )_s$ cf. Remark \ref{VecDiv}. Integrating \eqref{Equ43} with respect to $s$ between $0$ and $T>0$ one gets \begin{align*} \inty{\frac{1}{T} \int _0 ^T u_s \md s \;c \cdny \varphi } & = \frac{1}{T} \int _0 ^T \inty{u_s c \cdny \varphi }\md s \\ & = - \frac{1}{T} \int _0 ^T \inty{(\divy (uc ))_s \varphi (y) }\md s \\ & = - \inty{\frac{1}{T}\int _0 ^T ( \divy (uc))_s \md s \;\varphi (y) }. \end{align*} By Proposition \ref{AverageOperator} we know that $\frac{1}{T} \int _0 ^T u_s \md s \to \ave{u}$ and $\frac{1}{T}\int _0 ^T (\divy (uc))_s \md s \to \ave{\divy (uc)}$ strongly in $\lty{}$, when $T \to +\infty$, and thus we obtain \[ \inty{\ave{u} c \cdny \varphi } = - \inty{\ave{\divy(uc)} \varphi (y) } \] saying that $\ave{u} \in \dom ( c \cdny )$ and $\divy (\ave{u}c) = \ave{\divy (uc)}$, $c \cdny \ave{u} = \ave{c \cdny u }$. \end{proof}
\subsection{Average and second order differential operators} \label{SecondOrdDiffOpe} \noindent We investigate the second order differential operators $- \divy (A(y) \nabla _y)$ commuting with the average operator along the flow of $b$, where $A(y)$ is a smooth field of symmetric matrices. Such second order operators leave invariant $\kerbg{}$. Indeed, for any $u \in \dom (- \divy (A(y) \nabla _y)) \cap \kerbg{}$ we have \[ - \divy (A(y) \nabla _y u ) = - \divy (A(y) \nabla _y \ave{u}) = \ave{ - \divy (A(y) \nabla _y u )} \in \kerbg{}. \] For this reason it is worth considering the operators $- \divy (A(y) \nabla _y )$ commuting with $b \cdny$. A straightforward computation shows that \begin{pro} \label{ComSecOrd} Consider a divergence free vector field $b \in W^{2,\infty} (\R^m)$ and a matrix field $A \in W^{2,\infty} (\R^m)$. The commutator between $b \cdny $ and $- \divy (A(y) \nabla _y)$ is still a second order differential operator \[ [b\cdny, - \divy(A\nabla _y )] = - \divy ([b,A] \nabla _y ) \] whose matrix field, denoted by $[b,A]$, is given by \[ [b,A] = (b \cdny )A - \dyb A(y) - A(y)\; {^t \dyb},\;\;y \in \R^m. \] \end{pro}
\begin{remark} We have the formula ${^t [b,A]} = [b, {^t A}]$. In particular if $A(y)$ is a field of symmetric (resp. anti-symmetric) matrix, the field $[b,A]$ has also symmetric (resp. anti-symmetric) matrix. \end{remark}
As for vector fields in involution, we have the following characterization (see Appendix \ref{A} for proof details).
\begin{pro} \label{MFI} Consider $b \in W^{1,\infty}_{\mathrm{loc}} (\R^m)$ (not necessarily divergence free) with linear growth and $A(y) \in \loloc{}$. Then $[b,A] = 0$ in $\dpri{}$ iff \begin{equation} \label{Equ35} A(\ysy) = \dyy A(y) \;{^t \dyy},\;\;s\in \R,\;\;y \in \R^m. \end{equation} \end{pro}
For fields of symmetric matrices we have the weak characterization (see Appendix \ref{A} for the proof). \begin{pro} \label{WMFI} Consider $b \in W^{1,\infty}_{\mathrm{loc}} (\R^m)$ with linear growth, zero divergence and $A \in \loloc{}$ a field of symmetric matrices. Then the following statements are equivalent\\ 1. \[ [b,A] = 0 \;\mbox{ in } \; \dpri{}. \] 2. \[ \inty{A(y) \nabla _y u_s \cdot \nabla _y v_s } = \inty{A(y) \nabla _y u \cdot \nabla _y v } \] for any $s \in \R$, $u, v \in C^1 _c ( \R^m)$.\\ 3. \[ \inty{A(y) \nabla _y ( b \cdny u ) \cdot \nabla _y v } + \inty{A(y) \nabla _y u \cdot \nabla _y ( b \cdny v ) } = 0 \] for any $u, v \in C^2 _c (\R^m)$. \end{pro}
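\begin{remark}
For the rigid rotation field $b(y) = (-y_2, y_1)$ in $\R^2$, whose flow is $Y(s;y) = R(s)y$ with $R(s)$ the rotation of angle $s$, matrix fields in involution with $b$ are easily exhibited through \eqref{Equ35}. For instance
\[
A(y) = a(|y|) I + c(|y|) \frac{y \otimes y}{|y|^2},\;\;y \neq 0
\]
satisfies $A(R(s)y) = R(s) A(y) \;{^t R}(s)$, since $R(s) (y \otimes y) \;{^t R}(s) = R(s)y \otimes R(s)y$, and therefore $[b,A] = 0$ in $\dpri{}$, provided that the profiles $a, c$ ensure $A \in \loloc{}$.
\end{remark}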
We consider the (formal) adjoint of the linear operator $A \to [b,A]$, with respect to the scalar product $(U,V) = \inty{U(y) : V(y)}$, given by \[ Q \to - (b \cdny ) Q - {^t \dyb} Q(y) - Q(y) \dyb \] when $\divy b = 0$. The following characterization comes easily and the proof is left to the reader. \begin{pro} \label{AdyMatFieInv} Consider $b \in W^{1,\infty}_{\mathrm{loc}} (\R^m)$, with linear growth and $Q \in L^1 _{\mathrm{loc}} (\R^m)$. Then $- (b \cdny ) Q - {^t \dyb} Q(y) - Q(y) \dyb = 0$ in $\dpri {}$ iff \begin{equation} \label{Equ36} Q(\ysy) = {^t \partial _y Y }^{-1}(s;y) Q(y) \partial _y Y ^{-1}(s;y),\;\;s\in \R,\;\;y \in \R^m. \end{equation} \end{pro}
\begin{remark} \label{InverseQ} If $Q(y)$ satisfies \eqref{Equ36} and is invertible for any $y \in \R^m$ with $Q^{-1} \in L^1 _{\mathrm{loc}}(\R^m)$, then $Q^{-1} (\ysy) = \dyy Q^{-1} (y) {^t \dyy}$, $s \in \R, y \in \R^m$ and therefore $[b,Q^{-1}] = 0$ in $\dpri{}$. If $P(y)$ satisfies \eqref{Equ35} and is invertible for any $y \in \R^m$, then \[ P^{-1} (\ysy) = {^t \partial _y Y} ^{-1} (s;y) P ^{-1} (y) \partial _y Y ^{-1} (s;y),\;\;s \in \R,\;\; y \in \R^m \] and therefore $- (b \cdny ) P - {^t \dyb} P(y) - P(y) \dyb = 0$ in $\dpri{}$. \end{remark}
As for vector fields in involution, the matrix fields in involution with $b$ generate second order differential operators commuting with the average operator. \begin{pro} \label{AveComSecondOrder} Consider a matrix field $A \in \loloc{}$ such that $\divy A \in \loloc{}$ and $[b,A] = 0$ in $\dpri{}$. Then the operator $u \to - \divy (A \nabla _y u )$ commutes with the average operator {\it i.e.,} for any $u \in \dom ( - \divy (A \nabla _y ))$ we have $\ave{u} \in \dom ( - \divy (A \nabla _y ))$ and \[ -\ave{\divy (A \nabla _y u )} = - \divy (A \nabla _y \ave{u}). \] \end{pro}
\begin{proof} Consider $u \in \dom( - \divy (A\nabla _y )) = \{w \in \lty{}: -\divy (A\nabla _y w ) \in \lty{}\}$. For any $s \in \R, \varphi \in C^2 _c (\R^m)$ we have \begin{equation} \label{Equ51} - \inty{u_s \;\divy ( \;{^t A} \nabla _y \varphi ) } = - \inty{u \;( \divy ( \;{^t A} \nabla _y \varphi ))_{-s}}. \end{equation} By the implication $1.\implies 2.$ of Proposition \ref{WMFI} (which does not require the symmetry of $A(y)$) we know that \[ \inty{{^t A } \nabla _y \varphi \cdot \nabla _y \psi _s } = \inty{{^t A} \nabla _y \varphi _{-s} \cdot \nabla _y \psi } \] for any $\psi \in C^2 _c (\R^m )$. We deduce that \[ - \inty{\divy ( {^t A} \nabla _y \varphi ) \psi _s } = - \inty{\divy ( {^t A } \nabla _y \varphi _{-s} ) \psi } \] and thus $(\divy ( {^t A} \nabla _y \varphi ))_{-s} = \divy ( {^t A} \nabla _y \varphi _{-s})$. Combining with \eqref{Equ51} yields \begin{eqnarray} \label{Equ52} - \inty{\;u_s \divy ( {^t A} \nabla _y \varphi )} & = - \inty{\;u\; \divy ( {^t A} \nabla _y \varphi _{-s})} \\ & = - \inty{\;\divy (A \nabla _y u ) \varphi _{-s}} \nonumber \\ & = - \inty{\;(\divy (A \nabla _y u ))_s \varphi (y)} \nonumber \end{eqnarray} saying that $u_s \in \dom ( - \divy (A \nabla _y ))$ and \[ - \divy ( A \nabla _y u_s) = ( - \divy (A \nabla _y u ))_s. \] Integrating \eqref{Equ52} with respect to $s$ between $0$ and $T$ we obtain \[ \inty{\frac{1}{T} \int _0 ^T u_s \;\md s\; \divy ( {^t A } \nabla _y \varphi )} = \inty{\frac{1}{T} \int _0 ^T (\divy (A \nabla _y u))_s \;\md s \;\varphi (y)}. \] Letting $T \to +\infty$ yields \[ \inty{\ave{u} \divy ( {^t A }\nabla _y \varphi ) } = \inty{\ave{\divy (A \nabla _y u )} \varphi (y)} \] and therefore $\ave{u} \in \dom ( \divy (A \nabla _y ))$, $\divy (A \nabla _y \ave{u}) = \ave{\divy (A \nabla _y u )}$. \end{proof}
\subsection{The averaged diffusion matrix field} \label{AveDifMatFie} \noindent We are looking for the limit, when $\eps \to 0$, of \eqref{Equ1}, \eqref{Equ2}. We expect that the limit $u = \lime \ue $ satisfies \eqref{Equ7}, \eqref{Equ8}. By \eqref{Equ7} we deduce that at any time $t \in \R_+$, $u(t,\cdot) \in \kerbg{}$. Observe also that $\divy(b \otimes b \nabla _y u^1) = b \cdny (b \cdny u^1) \in \ran ( b \cdny ) \subset \ker \ave{\cdot}$ and therefore the closed equation for $u$ is obtained by applying the average operator to \eqref{Equ8} and by noticing that $\ave{\partial _t u } = \partial _t \ave{u} = \partial _t u $ \begin{equation} \label{Equ54} \partial _t u - \ave{\divy ( D \nabla _y u )} = 0,\;\;t\in \R_+,\;\;y \in \R^m. \end{equation} At least when $[b,D] = 0$, we know by Proposition \ref{AveComSecondOrder} that \[ \ave{\divy (D \nabla _y u)} = \divy (D \nabla _y \ave{u}) = \divy (D \nabla _y u) \] and \eqref{Equ54} reduces to the diffusion equation associated with the matrix field $D(y)$. Nevertheless, even if $[b,D] \neq 0$, \eqref{Equ54} behaves like a diffusion equation. More exactly, the $\lty{}$ norm of the solution decreases with a rate proportional to the square of the $\lty{}$ norm of its gradient under the hypothesis \eqref{Equ3} \begin{align*} \frac{1}{2}\frac{\md }{\md t} \inty{(u(t,y))^2} & = \inty{\ave{\divy ( D \nabla _y u )} u (t,y) } \\ & = \inty{\divy ( D \nabla _y u ) u }\\ & = - \inty{D \nabla _y u \cdot \nabla _y u }\\ & = - \inty{(D + b\otimes b ) : \nabla _y u \otimes \nabla _y u } \\
& \leq - d \inty{|\nabla _y u (t,y) |^2 }. \end{align*} We expect that, under appropriate hypotheses, \eqref{Equ54} coincides with a diffusion equation, corresponding to some {\it averaged} matrix field ${\cal D}$, that is \begin{equation} \label{Equ55} \exists \; {\cal D} (y)\;:\; [b, {\cal D}] = 0\;\mbox{ and } \; \ave{- \divy ( D \nabla _y u )} = - \divy ( {\cal D} \nabla _y u ),\;\;\forall \;u \in \kerbg{}. \end{equation} It is easily seen that in this case the limit model \eqref{Equ54} reduces to \[ \partial _t u - \divy ( {\cal D}\nabla _y u ) = 0,\;\;t \in \R_+,\;\;y \in \R^m. \] In this section we identify sufficient conditions which guarantee the existence of the matrix field ${\cal D}$. We will see that it appears as the long time limit of the solution of another parabolic type problem, whose initial data is $D$, and thus as the orthogonal projection of the field $D(y)$ (with respect to some scalar product to be defined) on a subset of $\{A \in \loloc{}:[b,A] = 0\mbox{ in } \dpri{}\}$. We assume that \eqref{Equ56} holds true. We introduce the set \[ H_Q = \{A = A(y)\;:\; \inty{Q(y)A(y) : A(y) Q(y)} < +\infty\} \] where $Q = P^{-1}$ and the bilinear application \[ (\cdot, \cdot)_Q : H_Q \times H_Q \to \R,\;\;(A,B)_Q = \inty{Q(y)A(y):B(y)Q(y)} \] which is symmetric and positive definite. Indeed, for any $A \in H_Q$ we have \[ (A,A)_Q = \inty{Q^{1/2}AQ^{1/2} : Q^{1/2}AQ^{1/2}} \geq 0 \]
with equality iff $Q^{1/2}AQ^{1/2}= 0$ and thus iff $A = 0$. The set $H_Q$ endowed with the scalar product $(\cdot, \cdot)_Q$ becomes a Hilbert space, whose norm is denoted by $|A|_Q = (A, A)_Q ^{1/2}, A \in H_Q$. Observe that $H_Q \subset \loloc{}$. Indeed, if for any matrix $M$ the notation $|M|$ stands for the norm subordinate to the Euclidean norm of $\R^m$ \[
|M| = \sup _{\xi \in \R^m \setminus \{0\}} \frac{|M\xi|}{|\xi|} \leq ( M : M ) ^{1/2} \] we have for a.a. $y \in \R^m$ \begin{eqnarray}
\label{Equ57} |A(y)| & = & \sup _{\xi, \eta \neq 0} \displaystyle \frac{A(y) \xi \cdot \eta}{|\xi|\;|\eta|} \\
& = & \sup _{\xi, \eta \neq 0} \displaystyle \frac{Q^{1/2}AQ^{1/2} P ^{1/2}\xi \cdot P^{1/2} \eta}{|P^{1/2} \xi|\;|P^{1/2} \eta|}\;\frac{|P^{1/2}\xi|}{|\xi|}\;\frac{|P^{1/2}\eta|}{|\eta|} \nonumber \\
& \leq & |Q^{1/2}AQ^{1/2} |\;|P^{1/2}|^2 \nonumber \\
& \leq & ( Q^{1/2}AQ^{1/2}:Q^{1/2}AQ^{1/2}) ^{1/2} \;|P|.\nonumber \end{eqnarray} We deduce that for any $R>0$ \[
\int_{B_R} |A(y)|\;\md y \leq \int _{B_R} ( Q^{1/2}AQ^{1/2}:Q^{1/2}AQ^{1/2}) ^{1/2} \;|P|\;\md y \leq (A,A)_Q ^{1/2} \left ( \int _{B_R} |P(y)|^2 \;\md y \right ) ^{1/2}. \]
\begin{remark} \label{Ortho} We know by Remark \ref{InverseQ} that $Q_s = {^t \partial _y Y ^{-1} }(s;y) Q(y) \partial _y Y ^{-1}(s;y) $ which reads ${^t {\cal O}}(s;y) {\cal O}(s;y) = I$ where ${\cal O}(s;y) = Q_s ^{1/2} \dyy Q^{-1/2}$. Therefore the matrices ${\cal O}(s;y)$ are orthogonal and we have \begin{equation} \label{Equ58} Q_s ^{1/2} \dyy Q^{-1/2} = {\cal O}(s;y) = {^t {\cal O}}^{-1} (s;y) = Q_s ^{-1/2} \;{^t \partial _y Y }^{-1} Q^{1/2} \end{equation}
\begin{equation} \label{Equ59} Q ^{-1/2} \;{^t \dyy} Q_s ^{1/2} = {^t {\cal O}}(s;y) = { {\cal O}}^{-1} (s;y) = Q ^{1/2} { \partial _y Y }^{-1} Q_s^{-1/2}. \end{equation} \end{remark}
\begin{pro} \label{Groupe} The family of applications $A \to G(s)A : = \partial _y Y ^{-1} (s; \cdot) A_s \; {^t \partial _y Y } ^{-1} (s; \cdot)$ is a $C^0$-group of unitary operators on $H_Q$. \end{pro}
\begin{proof} For any $A\in H_Q$ observe, thanks to \eqref{Equ59}, that \begin{align*}
\left | \partial _y Y ^{-1}(s; \cdot) A_s {^t\partial _y Y ^{-1}(s; \cdot) }\right | ^2 _Q & = \!\!\inty{Q^{1/2}\partial _y Y ^{-1} A_s {^t \partial _y Y ^{-1}}Q^{1/2}:Q^{1/2}\partial _y Y ^{-1} A_s {^t \partial _y Y ^{-1}}Q^{1/2}}\\ & = \!\!\inty{\!\!\!\!{^t {\cal O}} (s;y) Q_s ^{1/2} A_s Q_s ^{1/2} {\cal O}(s;y) \!:\! {^t {\cal O}} (s;y) Q_s ^{1/2} A_s Q_s ^{1/2} {\cal O}(s;y)}\\ & = \inty{Q_s ^{1/2} A_s Q_s ^{1/2} : Q_s ^ {1/2} A_s Q_s ^{1/2}}\\ & = \inty{Q^{1/2}AQ^{1/2} : Q^{1/2}AQ^{1/2}} \\
& = |A|^2 _Q. \end{align*} Clearly $G(0)A = A, A\in H_Q$ and for any $s, t \in \R$ we have \begin{align*} G(s) G(t) A & = \partial _y Y ^{-1} (s;\cdot) (G(t)A)_s {^t \partial _y Y ^{-1} (s;\cdot)}\\ & = \partial _y Y ^{-1} (s;\cdot) (\partial _y Y )^{-1} (t; Y(s;\cdot))(A_t)_s {^t (\partial _y Y )^{-1} (t; Y(s;\cdot))}{^t \partial _y Y ^{-1} (s;\cdot)} \\ & = \partial _y Y ^{-1} (t + s;\cdot)A_{t+s} {^t \partial _y Y ^{-1} (t + s;\cdot)} = G(t+s) A,\;\;A \in H_Q. \end{align*} It remains to check the continuity of the group, {i.e.,} $\lim _{s \to 0 } G(s)A = A$ strongly in $H_Q$ for any $A \in H_Q$. For any $s \in \R$ we have \begin{align*}
|G(s) A - A|^2 _Q = |G(s)A|^2 _Q + |A|^2 _Q - 2 ( G(s)A, A)_Q = 2|A|^2 _Q - 2 (G(s)A, A)_Q \end{align*}
and thus it is enough to prove that $\lim _{s \to 0 } G(s)A = A$ weakly in $H_Q$. As $|G(s)| = 1$ for any $s \in \R$, we are done if we prove that $\lim _{s \to 0} (G(s)A, U)_Q = (A, U)_Q$ for any $U \in C^0 _c (\R^m) \subset H_Q$. But it is easily seen that $\lim _{s\to 0} G(-s)U = U$ strongly in $H_Q$, for $U \in C^0 _c (\R^m) $ and thus \[ \lim _{s \to 0} ( G(s)A, U)_Q = \lim _{s \to 0} (A, G(-s)U)_Q = (A,U)_Q,\;\;U \in C^0 _c (\R^m). \] \end{proof}
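\begin{remark}
As an illustration, when $b(y) = (-y_2, y_1)$ in $\R^2$, the choice $P = Q = I$ is admissible in \eqref{Equ56}: $\partial _y b$ is anti-symmetric and thus $[b,I] = - \partial _y b - {^t \partial _y b} = 0$ in $\dpri{}$. In this case $H_Q$ is the space of matrix fields with entries in $\lty{}$, and the group of Proposition \ref{Groupe} reduces to $G(s)A = R(-s) A(R(s)\cdot) R(s)$, $R(s)$ standing for the rotation of angle $s$. Its unitarity is immediate here, since conjugation by an orthogonal matrix preserves the Frobenius norm and the flow is measure preserving.
\end{remark}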
We denote by $L$ the infinitesimal generator of the group $G$ \[ L:\dom(L) \subset H_Q \to H_Q,\;\;\dom L = \{ A\in H_Q\;:\; \exists \;\lim _{s \to 0} \frac{G(s)A-A}{s}\;\mbox{ in } \;H_Q\} \] and $L(A) = \lim _{s \to 0} \frac{G(s)A-A}{s}$ for any $A \in \dom(L)$. Notice that $C^1 _c (\R^m) \subset \dom(L)$ and $L(A) = b \cdny A - \dyb A - A \;{^t \dyb}$, $A \in C^1 _c (\R^m)$ (use the hypothesis $Q \in \ltloc{}$ and the dominated convergence theorem). Observe also that the group $G$ commutes with transposition {\it i.e.} $G(s) \;{^t A} = {^t G(s)}A$, $s \in \R, A \in H_Q$ and for any $A \in \dom (L)$ we have $^t A \in \dom (L)$, $L({^t A}) = {^t L(A)}$. The main properties of the operator $L$ are summarized below (when $b$ is divergence free). \begin{pro} \label{PropOpeL} $\;$\\ 1. The domain of $L$ is dense in $H_Q$ and $L$ is closed.\\ 2. The matrix field $A \in H_Q$ belongs to $\dom (L)$ iff there is a constant $C >0$ such that \begin{equation}
\label{Equ61} |G(s)A - A |_Q \leq C |s|,\;\;s \in \R. \end{equation} 3. The operator $L$ is skew-adjoint.\\ 4. For any $A \in \dom (L)$ we have \[ - \divy (L(A) \nabla _y ) = b\cdny ( - \divy (A \nabla _y)) + \divy (A \nabla _y ( b \cdny ))\;\mbox{ in } \; \dpri{} \] that is \[ \inty{L(A) \nabla _y u \cdot \nabla _y v } = - \inty{A \nabla _y u \cdot \nabla _y ( b \cdny v)} - \inty{A\nabla _y ( b \cdny u ) \cdot \nabla _y v } \] for any $u, v \in C^2 _c (\R^m)$. \end{pro}
\begin{proof} 1. The operator $L$ is the infinitesimal generator of a $C^0$-group, and therefore $\dom(L)$ is dense and $L$ is closed. \\ 2. Assume that $A \in \dom(L)$. We know that $\frac{\md }{\md s} G(s)A = L(G(s)A) = G(s)L(A)$ and thus \[
|G(s)A - A|_Q = \left | \int _0 ^s G(\tau) L(A)\;\md \tau\right |_Q \leq \left | \int _0 ^s |G(\tau)L(A)|_Q \;\md \tau \right | = |s| \;|L(A)|_Q,\;\;s \in \R. \] Conversely, assume that \eqref{Equ61} holds true. Therefore we can extract a sequence $(s_k)_k$ converging to $0$ such that \[ \limk \frac{G(s_k) A - A}{s_k} = V \;\mbox{ weakly in } \;H_Q. \] For any $U \in \dom (L)$ we obtain \[ \left ( \frac{G(s_k) A - A}{s_k}, U \right ) _Q = \left ( A, \frac{G(-s_k)U - U}{s_k} \right ) _Q \] and thus, letting $k \to +\infty$ yields \begin{equation} \label{Equ62} (V, U)_Q = - (A, L(U))_Q. \end{equation} But since $U \in \dom (L)$, the whole trajectory $\{G(\tau)U:\tau \in \R\}$ is contained in $\dom(L)$ and $G(-s_k)U = U + \int _0 ^{-s_k}L(G(\tau)U)\md \tau$. We deduce \begin{align*} (G(s_k)A - A, U)_Q & = \left ( A, \int _0 ^{-s_k} L(G(\tau)U ) \;\md \tau \right ) _Q \\ & = \int _0 ^{-s_k}( A, L(G(\tau) U))_Q \;\md \tau \\ & = - \int _0 ^{-s_k}( V, G(\tau) U )_Q \;\md \tau \\ & = - \left ( V, \int _0 ^{-s_k} G(\tau)U\;\md \tau \right ) _Q. \end{align*}
Taking into account that $\left | \int _0 ^{-s_k} G(\tau ) U \md \tau \right |_Q \leq |s_k| \;|U|_Q$ we obtain \[
\left | \left ( \frac{G(s_k)A - A}{s_k}, U \right ) _Q \right | \leq |V|_Q|U|_Q,\;\;U \in \dom (L) \] and thus, by the density of $\dom (L)$ in $H_Q$ one gets \[
\left | \frac{G(s_k)A - A}{s_k} \right |_Q \leq |V|_Q,\;\;k \in \N. \] Since $V$ is the weak limit in $H_Q$ of $\left ( \frac{G(s_k)A - A}{s_k} \right )_k$, we deduce that $\limk \frac{G(s_k)A - A}{s_k} = V$ strongly in $H_Q$. As the limit $V$ is uniquely determined by \eqref{Equ62}, the whole family $\left ( \frac{G(s)A - A}{s} \right )_s$ converges strongly, when $s \to 0$, towards $V$ in $H_Q$ and thus $A \in \dom (L)$.\\ 3. For any $U, V \in \dom (L)$ we can write \[ (G(s)U - U, V)_Q + (U, V - G(-s)V)_Q = 0,\;\;s\in \R. \] Taking into account that \[ \lim _{s \to 0} \frac{G(s)U - U}{s} = L(U),\;\;\lim _{s \to 0} \frac{V - G(-s)V}{s} = L(V) \] we obtain $(L(U), V)_Q + (U, L(V))_Q = 0$ saying that $V\in \dom (L^\star)$ and $L^\star (V) = - L(V)$. Therefore $L \subset (-L^\star)$. It remains to establish the converse inclusion. Let $V \in \dom (L^\star)$, {\it i.e.,} $\exists C >0$ such that \[
|(L(U), V)_Q|\leq C|U|_Q,\;\;U \in \dom (L). \] For any $s \in \R$, $U \in \dom (L)$ we have \[ (G(s)V - V , U)_Q = (V, G(-s)U - U)_Q = (V, \int _0 ^{-s}LG(\tau)U \;\md \tau )_Q = \int _0 ^{-s} (V, LG(\tau)U)_Q \;\md \tau \] implying \[
|(G(s) V - V , U )_Q|\leq C |s| \;|U|_Q,\;\;s\in \R. \]
Therefore $|G(s)V - V|_Q \leq C |s|, s \in \R$ and by the previous statement $V \in \dom (L)$. Finally $\dom (L) = \dom (L^\star)$ and $L^\star (V) = - L(V), V \in \dom (L) = \dom (L^\star)$.\\ 4. As $L$ is skew-adjoint, we obtain \[ - \inty{L(A)\nabla _y u \cdot \nabla _y v } = - ( L(A), Q^{-1} \nabla _y v \otimes \nabla _y u Q^{-1}\;)_Q = ( A, L ( Q^{-1} \nabla _y v \otimes \nabla _y u Q^{-1})\;)_Q. \] Recall that $P = Q^{-1}$ satisfies $L(P) = 0$, that is, $G(s)P = P, s \in \R$ and thus \begin{align*} L(Q^{-1} \nabla _y v \otimes & \nabla _y u Q^{-1}) = \lim _{ s \to 0} \frac{G(s)P\nabla _y v \otimes \nabla _y u P - P \nabla _y v \otimes \nabla _y u P}{s} \\ & = \lim _{ s \to 0} \frac{\partial _y Y ^{-1} (s;\cdot) P_s (\nabla _y v )_s \otimes (\nabla _y u )_s P_s {^t \partial _y Y ^{-1}}(s;\cdot) - P \nabla _y v \otimes \nabla _y u P}{s}\\ & = \lim _{ s \to 0} \frac{P{^t \partial _y Y} (s;\cdot) (\nabla _y v )_s \otimes (\nabla _y u )_s \partial _y Y (s;\cdot)P - P \nabla _y v \otimes \nabla _y u P}{s} \\ & = \lim _{ s \to 0} \frac{P \nabla _y v_s \otimes \nabla _y u_s P -P \nabla _y v \otimes \nabla _y u P }{s} \\ & = P \nabla _y ( b \cdny v ) \otimes \nabla _y u P + P \nabla _y v \otimes \nabla _y ( b \cdny u ) P. \end{align*} Finally one gets \begin{align*} - \inty{L(A) \nabla _y u \cdot \nabla _y v } & = ( A, P \nabla _y ( b \cdny v ) \otimes \nabla _y u P + P \nabla _y v \otimes \nabla _y ( b \cdny u )P)_Q \\ & = \inty{A\nabla _y u \cdot \nabla _y ( b \cdny v)} + \inty{A\nabla _y ( b \cdny u ) \cdot \nabla _y v }. \end{align*} \end{proof} We claim that $\dom (L)$ is left invariant by some special (weighted with respect to the matrix field $Q$) positive/negative part functions. 
The notations $A^\pm$ stand for the usual positive/negative parts of a symmetric matrix $A$ \[ A^\pm = S \Lambda ^\pm \;{^t S},\;\;A = S\Lambda \;{^t S} \] where $\Lambda, \Lambda ^\pm $ are the diagonal matrices containing the eigenvalues of $A$ and the positive/negative parts of these eigenvalues respectively, and $S$ is the orthogonal matrix whose columns contain an orthonormal basis of eigenvectors for $A$. Notice that \[ A^+ : A^- = 0,\;\;A^+ - A^- = A,\;\;A^+ : A^+ + A^- : A^- = A: A. \] We also introduce the positive/negative part functions which associate to any field of symmetric matrices $A(y)$ the fields of symmetric matrices $A^{Q\pm}(y)$ given by \[ Q^{1/2} A^{Q\pm} \;Q^{1/2} = (Q^{1/2} AQ^{1/2})^\pm. \] Observe that $A^{Q+} - A^{Q-} = A$.
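As a concrete illustration, the parts $A^\pm$ and their weighted counterparts $A^{Q\pm}$ can be computed by eigendecomposition. The following Python sketch (the matrices are arbitrary test data, not objects from the text) verifies the identities $A^+ : A^- = 0$, $A^+ - A^- = A$ and $A^{Q+} - A^{Q-} = A$ numerically.

```python
import numpy as np

def pos_neg_parts(A):
    """Positive/negative parts of a symmetric matrix: A = S Lambda S^t,
    A^+ keeps the positive eigenvalues, A^- the moduli of the negative ones."""
    lam, S = np.linalg.eigh(A)
    Ap = (S * np.maximum(lam, 0.0)) @ S.T
    Am = (S * np.maximum(-lam, 0.0)) @ S.T
    return Ap, Am

def weighted_parts(A, Q):
    """Weighted parts defined by Q^{1/2} A^{Q+-} Q^{1/2} = (Q^{1/2} A Q^{1/2})^{+-}."""
    lam, S = np.linalg.eigh(Q)                 # Q symmetric positive definite
    Qh = (S * np.sqrt(lam)) @ S.T              # Q^{1/2}
    Qmh = (S / np.sqrt(lam)) @ S.T             # Q^{-1/2}
    Bp, Bm = pos_neg_parts(Qh @ A @ Qh)
    return Qmh @ Bp @ Qmh, Qmh @ Bm @ Qmh

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M + M.T                                    # arbitrary symmetric matrix
Q = np.eye(3) + 0.5 * (M @ M.T)                # arbitrary positive definite weight

Ap, Am = pos_neg_parts(A)
assert abs(np.tensordot(Ap, Am)) < 1e-10       # A^+ : A^- = 0
assert np.allclose(Ap - Am, A)                 # A^+ - A^- = A
AQp, AQm = weighted_parts(A, Q)
assert np.allclose(AQp - AQm, A)               # A^{Q+} - A^{Q-} = A
```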
\begin{pro} \label{InvPosNeg}$\;$\\ 1. The applications $A \to A^{Q\pm}$ leave invariant the subset $\{A\in \dom (L): {^t A} = A\}$.\\ 2. For any $A \in \dom (L), {^t A } = A$ we have \[ (A^{Q+}, A^{Q-})_Q = 0,\;\;( L(A^{Q+}), L(A^{Q-}))_Q \leq 0. \] \end{pro}
\begin{proof} 1. Consider $A \in \dom (L), {^t A} = A$. It is easily seen that ${^t A^{Q\pm}} = A^{Q\pm}$ and \begin{align*}
|A^{Q+}|^2 _Q + |A^{Q-}|^2 _Q & = \inty{(Q^{1/2}A Q^{1/2} ) ^ + : ( Q^{1/2} A Q^{1/2})^+} \\ & + \inty{(Q^{1/2}A Q^{1/2} ) ^ - : ( Q^{1/2} A Q^{1/2})^-}\\
& = \inty{Q^{1/2}A Q^{1/2} : Q^{1/2} A Q^{1/2}} = |A|^2 _Q < +\infty \end{align*} and therefore $A ^{Q\pm} \in H_Q$. The positive/negative parts $A^{Q\pm}$ are orthogonal in $H_Q$ \[ ( A^{Q+} , A^{Q-})_Q = \inty{(Q^{1/2}AQ^{1/2})^+ : (Q^{1/2}AQ^{1/2})^-} = 0. \] We claim that $A^{Q\pm}$ satisfies \eqref{Equ61}. Indeed, thanks to \eqref{Equ59} we can write, using the notation $X^{:2} = X : X$ \begin{align}
\label{Equ63} |G(s)A^{Q\pm}- A^{Q\pm}|^2 _Q & = \inty{\{ Q^{1/2} ( \partial _y Y ^{-1} (A ^{Q\pm})_s {^t \partial _y Y ^{-1}} - A^{Q\pm})Q^{1/2} \} ^{:2}}\\ & = \inty{\{{^t {\cal O}} (s;y) Q_s ^{1/2} (A ^{Q\pm})_sQ_s ^{1/2} {\cal O}(s;y) - Q^{1/2} A^{Q\pm}Q^{1/2} \}^{:2}} \nonumber \\ & = \inty{\{ {^t {\cal O}} (s;y) ( Q_s ^{1/2} A_s Q_s ^{1/2})^{\pm} {\cal O}(s;y) - (Q^{1/2}A Q^{1/2} ) ^{\pm} \} ^{:2}}. \nonumber \end{align} Similarly we obtain \begin{equation}
\label{Equ64} |G(s)A - A|^2 _Q = \inty{\{{^t {\cal O}}(s;y) Q^{1/2}_s A_s Q^{1/2}_s {\cal O}(s;y) - Q^{1/2}AQ^{1/2} \} ^{:2}}. \end{equation} We are done if we prove that for any symmetric matrix $U, V$ and any orthogonal matrix $R$ we have the inequality \begin{equation} \label{Equ65} ( \;{^t R } U ^{\pm} R - V ^\pm \;) : ( \;{^t R } U ^{\pm} R - V ^\pm \;)\leq ( \;{^t R } U R - V \;):( \;{^t R } U R - V \;). \end{equation} For the sake of the presentation, we consider the case of positive parts $U^+, V^+$. The other one comes in a similar way. The above inequality reduces to \[ 2 \;{^t R } U R : V - 2 \;{^t R } U ^+ R : V ^+ \leq {^t R } U ^- R : {^t R } U ^- R + V^- : V^- \] or equivalently, replacing $U$ by $ U^+ - U^-$ and $V$ by $V^+ - V^-$, to \[ - 2 \;{^t R } U ^+ R : V^- - 2 \;{^t R } U ^- R : V^+ + 2 \;{^t R } U ^- R : V ^- \leq {^t R } U ^- R : {^t R } U ^- R + V ^- : V^-. \] It is easily seen that the previous inequality holds true, since ${^t R } U ^+ R : V^- \geq 0$, ${^t R } U ^- R : V^+ \geq 0$ and \[ 2 \;{^t R } U ^- R : V^- \leq 2 ( {^t R} U ^- R : {^t R} U ^- R) ^{1/2} ( V^- : V^- ) ^{1/2} \leq {^t R} U ^- R : {^t R} U ^- R + V^- : V^-. \] Combining \eqref{Equ63}, \eqref{Equ64} and \eqref{Equ65} with \[ U = Q^{1/2} _s A_s Q^{1/2}_s, \;\;V = Q^{1/2}AQ^{1/2},\;\;R = {\cal O} \] yields \[
\sup _{s \neq 0} \frac{|G(s)A^{Q\pm} - A^{Q\pm} |_Q}{|s|} \leq \sup _{s \neq 0} \frac{|G(s)A - A|_Q}{|s|} \leq |L(A)|_Q \] saying that $A^{Q\pm} \in \dom (L)$. \\ 2. For any $A \in \dom (L)$, $^t A = A$ we can write \begin{align*} (A^{Q+}, A^{Q-})_Q & = \inty{Q^{1/2}A^{Q+}Q^{1/2}: Q^{1/2}A^{Q-}Q^{1/2}} \\ & = \inty{(Q^{1/2}AQ^{1/2})^+ : (Q^{1/2}AQ^{1/2})^-} = 0. \end{align*} Since $A^{Q\pm} \in \dom (L)$ we have \[ L(A^{Q\pm}) = \lim _{s \to 0} \frac{G(s/2) A^{Q\pm} - G(-s/2) A^{Q\pm}}{s} \] and therefore, thanks to \eqref{Equ59}, we obtain \begin{align*} & ( L(A^{Q+}), L(A^{Q-}))_Q = \lim _{s \to 0} \left (\frac{G(\frac{s}{2}) A^{Q+} - G(-\frac{s}{2}) A^{Q+}}{s}, \frac{G(\frac{s}{2}) A^{Q-} - G(-\frac{s}{2}) A^{Q-}}{s} \right ) _Q\\ & = \lim _{s \to 0} \inty{\frac{Q^{1/2} (\;G(\frac{s}{2}) A^{Q+} - G(-\frac{s}{2}) A^{Q+} \;) Q^{1/2} }{s} : \frac{Q^{1/2} (\;G(\frac{s}{2}) A^{Q-} - G(-\frac{s}{2}) A^{Q-} \;) Q^{1/2}}{s} }\\ & = \lim _{s \to 0} \inty{ \frac{{^t {\cal O}(\frac{s}{2};y)}( Q^{1/2}_{\frac{s}{2}} A_{\frac{s}{2}} Q^{1/2}_{\frac{s}{2}})^+ {\cal O}(\frac{s}{2};y) - {^t {\cal O}(-\frac{s}{2};y)}( Q^{1/2}_{-\frac{s}{2}} A_{-\frac{s}{2}} Q^{1/2}_{-\frac{s}{2}})^+ {\cal O}(-\frac{s}{2};y)}{s} \\ & : \frac{{^t {\cal O}(\frac{s}{2};y)}( Q^{1/2}_{\frac{s}{2}} A_{\frac{s}{2}} Q^{1/2}_{\frac{s}{2}})^- {\cal O}(\frac{s}{2};y) - {^t {\cal O}(-\frac{s}{2};y)}( Q^{1/2}_{-\frac{s}{2}} A_{-\frac{s}{2}} Q^{1/2}_{-\frac{s}{2}})^- {\cal O}(-\frac{s}{2};y)}{s}}\\ & = - \lim _{s \to 0} \inty{\frac{{^t {\cal O}(\frac{s}{2};y)}( Q^{1/2}_{\frac{s}{2}} A_{\frac{s}{2}} Q^{1/2}_{\frac{s}{2}})^+ {\cal O}(\frac{s}{2};y) : {^t {\cal O}(-\frac{s}{2};y)}( Q^{1/2}_{-\frac{s}{2}} A_{-\frac{s}{2}} Q^{1/2}_{-\frac{s}{2}})^- {\cal O}(-\frac{s}{2};y)}{s^2}} \\ & - \lim _{ s \to 0}\inty{\frac{{^t {\cal O}(-\frac{s}{2};y)}( Q^{1/2}_{-\frac{s}{2}} A_{-\frac{s}{2}} Q^{1/2}_{-\frac{s}{2}})^+ {\cal O}(-\frac{s}{2};y) : {^t {\cal O}(\frac{s}{2};y)}( Q^{1/2}_{\frac{s}{2}} A_{\frac{s}{2}} 
Q^{1/2}_{\frac{s}{2}})^- {\cal O}(\frac{s}{2};y)}{s^2}} \\ & \leq 0 \end{align*} since \[ {^t {\cal O}}(\pm s/2;\cdot) ( Q^{1/2} A Q^{1/2}) _{\pm s/2} ^\pm {\cal O}(\pm s/2;\cdot) \geq 0,\;\;{^t {\cal O}}(\mp s/2;\cdot) ( Q^{1/2} A Q^{1/2}) _{\mp s/2} ^\pm {\cal O}(\mp s/2;\cdot) \geq 0. \] \end{proof}
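The contraction inequality \eqref{Equ65} used in the proof above says that taking positive parts is $1$-Lipschitz for the Frobenius norm, even after an orthogonal conjugation. A minimal Python check on randomly generated symmetric matrices and orthogonal conjugations (illustrative only):

```python
import numpy as np

def pos_part(A):
    # positive part of a symmetric matrix via eigendecomposition
    lam, S = np.linalg.eigh(A)
    return (S * np.maximum(lam, 0.0)) @ S.T

rng = np.random.default_rng(1)
for _ in range(100):
    X, Y = rng.standard_normal((2, 4, 4))
    U, V = X + X.T, Y + Y.T                            # symmetric matrices
    R, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # orthogonal matrix
    lhs = np.linalg.norm(R.T @ pos_part(U) @ R - pos_part(V))
    rhs = np.linalg.norm(R.T @ U @ R - V)
    assert lhs <= rhs + 1e-12                          # inequality (Equ65), '+' case
```

Note that $R^T U^+ R = (R^T U R)^+$ for orthogonal $R$, so the check amounts to the $1$-Lipschitz property of the projection onto the positive semidefinite cone.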
We intend to solve the problem \eqref{Equ67}, \eqref{Equ68} by using variational methods. We introduce the space $V_Q = \dom (L) \subset H_Q$ endowed with the scalar product \[ ((A, B))_Q = (A, B)_Q + (L(A), L(B))_Q,\;\;A, B \in V_Q. \]
Clearly $(V_Q, ((\cdot, \cdot))_Q)$ is a Hilbert space (use the fact that $L$ is closed) and the inclusion $V_Q \subset H_Q$ is continuous, with dense image. The notation $\|\cdot \|_Q$ stands for the norm associated to the scalar product $((\cdot, \cdot))_Q$ \[
\|A\|^2 _Q = ((A, A))_Q = (A, A)_Q + (L(A), L(A))_Q = |A|^2 _Q + |L(A)|^2_Q,\;\;A\in V_Q. \] We introduce the bilinear form $\sigma : V_Q \times V_Q \to \R$ \[ \sigma (A, B) = (L(A), L(B))_Q,\;\;A, B \in V_Q. \] Notice that $\sigma$ is coercive on $V_Q$ with respect to $H_Q$ \[
\sigma (A, A) + |A|^2_Q = \|A\|^2_Q,\;\;A \in V_Q. \] By Theorems 1,2 pp. 620 \cite{DauLions88} we deduce that for any $D \in H_Q$ there is a unique variational solution for \eqref{Equ67}, \eqref{Equ68} that is $A \in C_b (\R_+; H_Q) \cap L^2 (\R_+; V_Q)$, $\partial _t A \in L^2 (\R_+; V_Q ^\prime)$ \[ A(0) = D,\;\;\frac{\md }{\md t } (A(t), U)_Q + \sigma (A(t), U) = 0,\;\;\mbox{in}\;\;\dpri{},\;\;\forall \;U \in V_Q. \] The long time limit of the solution of \eqref{Equ67}, \eqref{Equ68} provides the averaged matrix field in \eqref{Equ55}. \begin{proof} (of Theorem \ref{AveMatDif}) The identity \[
\frac{1}{2}\frac{\md }{\md t} |A(t) |^2 _Q + |L(A(t))|^2 _Q = 0,\;\;t \in \R_+ \] gives the estimates \[
|A(t)|_Q \leq |D|_Q,\;\;t \in \R_+,\;\;\int _0 ^{+\infty} |L(A(t))|^2 _Q \;\md t\leq \frac{1}{2}|D|^2 _Q. \] Consider $(t_k)_k$ such that $t_k \to +\infty$ as $k \to +\infty$ and $(A(t_k))_k$ converges weakly towards some matrix field $X$ in $H_Q$. For any $U \in \ker L$ we have \[ \frac{\md }{\md t} (A(t), U)_Q = 0,\;\;t \in \R_+ \] and therefore \begin{equation} \label{Equ70} (\mathrm{Proj}_{\ker L} D, U)_Q = (D,U)_Q = (A(0), U)_Q = (A(t_k), U)_Q = ( X, U)_Q,\;\;U \in \ker L. \end{equation} Since $L(A) \in L^2 (R_+;H_Q)$ we deduce that $\limk L(A(t_k)) = 0$ strongly in $H_Q$. For any $V \in V_Q$ we have \[ (X, L(V))_Q = \limk (A(t_k), L(V))_Q = - \limk (L(A(t_k)), V)_Q = 0. \] We deduce that $X \in \dom (L^\star) = \dom (L)$ and $L(X) = 0$, which combined with \eqref{Equ70} says that $X = \mathrm{Proj}_{\ker L} D$, or $X = \ave{D}_Q$. By the uniqueness of the limit we obtain $\lim _{t \to +\infty} A(t) = \mathrm{Proj}_{\ker L} D$ weakly in $H_Q$. Assume now that ${^t D } = D$. As $L$ commutes with transposition, we have $\partial _t {^t A} - L (L({^t A})) = 0$, ${^t A }(0) = D$. By the uniqueness we obtain ${^t A } = A$ and thus \[ ^t \ave{D}_Q = \;^t ( \mbox{w}-\lim _{t \to +\infty} A(t) ) = \mbox{w}-\lim _{t \to +\infty} {^t A(t)} = \mbox{w}-\lim _{t \to +\infty} A(t)= \ave{D}_Q. \] Suppose that $D\geq 0$ and let us check that $\ave{D}_Q \geq 0$. By Proposition \ref{InvPosNeg} we know that $A^{Q\pm}(t) \in V_Q$, $t \in \R_+$ and \[ (A^{Q+}(t), A^{Q-}(t))_Q = 0,\;\;(L(A^{Q+}(t)), L(A^{Q-}(t)))_Q \leq 0,\;\;t \in \R_+. \] It is sufficient to consider the case of smooth solutions. Multiplying \eqref{Equ67} by $-A^{Q-}(t)$ one gets \begin{align}
\label{Equ71} \frac{1}{2}\frac{\md }{\md t} |A^{Q-}(t) |^2 _Q + |L(A^{Q-}(t))|^2 _Q & = ( \partial _t A^{Q+}, A^{Q-}(t))_Q + (L(A^{Q+}(t)),L(A^{Q-}(t)) )_Q \\ & \leq ( \partial _t A^{Q+}, A^{Q-}(t))_Q. \nonumber \end{align} But for any $0 < h < t $ we have \begin{align*} (A^{Q+}(t) - A^{Q+}(t-h), A^{Q-}(t))_Q = - (A^{Q+}(t-h), A^{Q-}(t))_Q \leq 0 \end{align*} and therefore $(\partial _t A^{Q+}(t), A^{Q-}(t))_Q \leq 0 $. Observe that $Q^{1/2}A^{Q-}(0)Q^{1/2} = (Q^{1/2} D Q^{1/2})^- = 0$, since $Q^{1/2} D Q^{1/2}$ is symmetric and positive. Thus $A^{Q-}(0) = 0$, and from \eqref{Equ71} we obtain \[
\frac{1}{2} |A^{Q-}(t) |^2 _Q \leq \frac{1}{2}|A^{Q-}(0)|^2 _Q = 0 \] implying that $A^{Q-}(t) = 0$, hence $Q^{1/2} A(t) Q^{1/2} \geq 0$ and $A(t) \geq 0$, $t \in \R_+$. Take now any $U \in H_Q$, ${^t U } = U$, $U \geq 0$. By weak convergence we have \[ ( \ave{D}_Q, U)_Q = \lim _{t \to +\infty} (A(t), U)_Q = \lim _{t \to +\infty} \inty{Q^{1/2} A(t)Q^{1/2} :Q^{1/2} UQ^{1/2} }\geq 0 \] and thus $\ave{D}_Q \geq 0$. By construction $\ave{D}_Q = \mathrm{Proj}_{\ker L} D \in \ker L$. It remains to justify the second statement in \eqref{Equ72}, and \eqref{Equ72Bis}. Take a bounded function $\varphi \in \liy{}$ which remains constant along the flow of $b$, that is $\varphi _s = \varphi, s \in \R$, and a smooth function $u \in C^1 (\R^m)$ such that $u_s = u, s \in \R$ and \[ \inty{(\nabla _y u \cdot Q^{-1} \nabla _y u )^2 } < +\infty. \] We introduce the matrix field $U$ given by \[ U(y) = \varphi (y) Q^{-1} (y) \;\nabla _y u \otimes \nabla _y u \; Q^{-1}(y),\;\;y \in \R^m. \] On the one hand, notice that $U \in H_Q$ \begin{align*}
|U|^2_Q & = \inty{Q^{1/2}UQ^{1/2}:Q^{1/2}UQ^{1/2}} = \inty{\varphi ^2|Q^{-1/2} \nabla _y u |^4}\\
& \leq \|\varphi \|_{L^\infty} ^2\inty{(\nabla _y u \cdot Q^{-1} \nabla _y u )^2 }. \end{align*} On the other hand, we claim that $U \in \ker L$. Indeed, for any $s \in \R$ we have \[ \nabla _y u = \nabla _y u_s = {^t \dyy}(\nabla _y u )_s \] and thus \begin{align*} Q_s U_s Q_s & = \varphi _s (\nabla _y u )_s \otimes ( \nabla _y u )_s \\ & = \varphi \;( {^t \partial _y Y ^{-1}}\nabla _y u ) \otimes ( {^t \partial _y Y ^{-1}}\nabla _y u ) \\ & = \varphi \;{^t \partial _y Y ^{-1}}\;\nabla _y u \otimes \nabla _y u \; \partial _y Y ^{-1}\\ & = {^t \partial _y Y ^{-1}} QUQ \partial _y Y ^{-1}. \end{align*} Taking into account that $Q_s = {^t \partial _y Y ^{-1}}Q { \partial _y Y ^{-1}}$ we obtain \[ {^t \partial _y Y ^{-1}} Q { \partial _y Y ^{-1}}U_s {^t \partial _y Y ^{-1}} Q { \partial _y Y ^{-1}} = {^t \partial _y Y ^{-1}} Q U Q { \partial _y Y ^{-1}} \] saying that $U_s (y)= \dyy U(y) {^t \dyy}$. As $\ave{D}_Q = \mathrm{Proj}_{\ker L }D$ one gets \begin{align*} 0 = (D - \ave{D}_Q, U ) _Q & = \inty{(D - \ave{D}_Q) : QUQ} \\ & = \inty{\varphi (y) (D - \ave{D}_Q) :\nabla _y u \otimes \nabla _y u }\\ & = \inty{\varphi (y) \{ \nabla _y u \cdot D \nabla _y u - \nabla _y u \cdot \ave{D}_Q \nabla _y u \}}. \end{align*} In particular, taking $\varphi = 1$ we deduce that $\nabla _y u \cdot \ave{D}_Q \nabla _y u \in \loy{}$ and \[ \inty{\nabla _y u \cdot \ave{D}_Q \nabla _y u } = \inty{\nabla _y u \cdot D \nabla _y u } = (D, Q^{-1}\;\nabla _y u \otimes \nabla _y u \;Q^{-1} )_Q < +\infty \] since $D \in H_Q$, $Q^{-1}\nabla _y u \otimes \nabla _y u Q^{-1} \in H_Q$. Since $\ave{D}_Q \in \ker L$, the function $\nabla _y u \cdot \ave{D}_Q \nabla _y u $ remains constant along the flow of $b$ \[ (\nabla _y u )_s \cdot (\ave{D}_Q)_s (\nabla _y u )_s = (\nabla _y u )_s \cdot \dyy \ave{D}_Q \;{^t \dyy} (\nabla _y u )_s = \nabla _y u \cdot \ave{D}_Q\nabla _y u. 
\] Therefore the function $\nabla _y u \cdot \ave{D}_Q \nabla _y u $ verifies the variational formulation \begin{equation} \label{Equ73} \nabla _y u \cdot \ave{D}_Q \nabla _y u \in \loy{},\;\;(\nabla _y u \cdot \ave{D}_Q \nabla _y u)_s = \nabla _y u \cdot \ave{D}_Q \nabla _y u,\;\;s \in \R \end{equation} and \begin{equation} \label{Equ74} \inty{\nabla _y u \cdot D \nabla _y u \;\varphi } = \inty{\nabla _y u \cdot \ave{D}_Q \nabla _y u\;\varphi },\;\;\forall \;\varphi \in \liy{},\;\varphi _s = \varphi,\; s \in \R. \end{equation} It is easily seen, thanks to the hypothesis $D \in \liy{}$, that \eqref{Equ73}, \eqref{Equ74} also make sense for functions $u \in \hoy{}$ such that $u _s = u$, $s \in \R$. We obtain \[ \nabla _y u \cdot \ave{D}_Q \nabla _y u = \ave{\nabla _y u \cdot D \nabla _y u },\;\;u \in \hoy{},\;\;u_s = u,\;\;s\in \R \] where the average operator in the right hand side should be understood in the $\loy{}$ setting cf. Remark \ref{AveLone}. Moreover, if $u, v \in \hoy{} \cap \kerbg{}$ then $\ave{D}_Q ^{1/2} \nabla _y u, \ave{D}_Q ^{1/2} \nabla _y v$ belong to $\lty{}$ implying that $\nabla _y u \cdot \ave{D}_Q \nabla _y v \in \loy{}$. As before we check that $\nabla _y u \cdot \ave{D}_Q \nabla _y v$ remains constant along the flow of $b$ and for any $\varphi \in \liy{}$, $\varphi _s = \varphi, s \in \R$ we can write \begin{align*} 2 \inty{\nabla _y u \cdot D \nabla _y v \;\varphi } & = \inty{\nabla _y (u + v) \cdot D \nabla _y (u + v) \;\varphi}\\ & - \inty{\nabla _y u \cdot D \nabla _y u \;\varphi} - \inty{\nabla _y v \cdot D \nabla _y v \;\varphi}\\ & = \inty{\nabla _y (u + v) \cdot \ave{D}_Q \nabla _y (u + v) \;\varphi}\\ & - \inty{\nabla _y u \cdot \ave{D}_Q \nabla _y u \;\varphi} - \inty{\nabla _y v \cdot \ave{D}_Q \nabla _y v \;\varphi}\\ & = 2 \inty{\nabla _y u \cdot \ave{D}_Q \nabla _y v \;\varphi }. \end{align*} Finally one gets \[ \nabla _y u \cdot \ave{D}_Q \nabla _y v = \ave{\nabla _y u \cdot D \nabla _y v},\;\;u, v \in \hoy{} \cap \kerbg{}. 
\] Consider now $u \in \hoy{} \cap \kerbg{}$ and $\psi \in C^2 _c (\R^m)$. In order to prove that $\ave{\nabla _y u \cdot \ave{D}_Q \nabla _y ( b \cdny \psi )} = 0$, where the average is understood in the $\loy{}$ setting, we need to check that \[ \inty{\varphi (y) \;\nabla _y u \cdot \ave{D}_Q \nabla _y ( b \cdny \psi ) } = 0 \] for any $\varphi \in \liy{}$, $\varphi _s = \varphi, s \in \R$. Clearly $B(y) := \varphi (y) \ave{D}_Q (y) \in \ker L$ and therefore it is enough to prove that \[ \inty{\nabla _y u \cdot B \nabla _y ( b \cdny \psi ) }= 0 \] for any $B \in \ker L$, which comes by the third statement of Proposition \ref{WMFI}. \end{proof}
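The mechanism driving the proof above, the relaxation of $\partial_t A + L^\star L(A) = 0$ toward $\mathrm{Proj}_{\ker L} D$, already appears in finite dimension. A minimal Python sketch, where the skew-symmetric matrix below is an arbitrary stand-in for the skew-adjoint operator $L$:

```python
import numpy as np

# skew-symmetric L acting on R^3, with ker L = span(e3)
L = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

D = np.array([1.0, 2.0, 3.0])          # initial datum

# explicit Euler for A' = -L^t L A (here L^t L = diag(1, 1, 0))
A, dt = D.copy(), 0.01
for _ in range(5000):
    A = A - dt * (L.T @ L @ A)

# long-time limit = orthogonal projection of D onto ker L
assert np.allclose(A, [0.0, 0.0, 3.0], atol=1e-6)
```

The components orthogonal to $\ker L$ decay exponentially while the kernel component is conserved, exactly as in the estimates \eqref{Equ70} and the weak limit argument.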
\begin{remark} \label{Parametrization} Assume that there is $u_0$ satisfying $u_0 (\ysy) = u_0 (y) + s$, $s \in \R, y \in \R^m$. Notice that $u_0$ could be a multi-valued function (think of angular coordinates) but its gradient satisfies, for a.a. $y \in \R^m$ and $ s \in \R$, \[ \nabla _y u_0 = {^t \dyy } (\nabla _y u_0 )_s \] exactly as for any function $u$ which remains constant along the flow of $b$. For this reason, the last equality in \eqref{Equ72} holds true for any $u, v \in \hoy{} \cap \kerbg{} \cup \{u_0\}$. In the case when $m-1$ independent prime integrals of $b$ are known {\it i.e.,} $\exists u_1, ..., u_{m-1} \in \hoy{}\cap \kerbg{}$, the average of the matrix field $D$ comes by imposing \[ \nabla _y u_i \cdot \ave{D}_Q \nabla _y u_j = \ave{\nabla _y u_i \cdot D \nabla _y u_j},\;\;i, j \in \{0,...,m-1\}. \] \end{remark}
\section{First order approximation} \label{FirstOrdApp} \noindent We assume that the fields $D(y), b(y)$ are bounded on $\R^m$ \begin{equation} \label{Equ26} D \in \liy{},\;\;b \in \liy{}. \end{equation} We solve \eqref{Equ1}, \eqref{Equ2} by using variational methods. We consider the Hilbert spaces $V:= \hoy{} \subset H := \lty{}$ (the injection $V \subset H$ being continuous, with dense image) and the bilinear forms $\aeps : V \times V \to \R$ given by \[ \aeps (u,v) = \inty{D(y) \nabla _y u \cdot \nabla _y v } + \frac{1}{\eps} \inty{(b \cdny u ) \;(b \cdny v)},\;\;u, v \in V. \] Notice that for any $0 < \eps \leq 1$ and $v \in V$ we have \begin{align*}
\aeps (v, v) + d |v|_H ^2 & \geq \inty{D(y) \nabla _y v \cdot \nabla _y v + (b \cdny v ) \;(b \cdny v) } + d \inty{(v(y))^2} \\
& \geq d \inty{|\nabla _y v |^2} + d \inty{(v(y))^2} \\
& = d |v|_V ^2 \end{align*} saying that $\aeps$ is coercive on $V$ with respect to $H$. By Theorems 1,2 pp. 620 \cite{DauLions88} we deduce that for any $\uein \in H$, there is a unique variational solution for \eqref{Equ1}, \eqref{Equ2}, that is $\ue \in C_b (\R_+; H) \cap L^2(\R_+;V)$ and \[ \ue (0) = \uein,\;\;\frac{\md}{\md t } \inty{\ue (t,y) v(y) } + \aeps (\ue (t), v) = 0,\;\;\mbox{in}\;\dpri{},\;\;\forall \; v \in V. \] By standard arguments one gets \begin{pro} \label{UnifEstim} The solutions $(\ue)_\eps$ satisfy the estimates \[
\|\ue \|_{C_b (\R_+; H)} \leq |\uein|_H,\;\;\int _0 ^{+\infty} \!\!\!\!\inty{|\nabla _y \ue |^2}\md t \leq \frac{|\uein |^2 _H}{2d} \] and \[
\|b \cdny \ue \|_{L^2(\R_+; H)} \leq \left ( \frac{\eps}{2(1 - \eps)}\right ) ^{1/2} |\uein |_H,\;\;\eps \in (0,1). \] \end{pro}
We are ready to prove the convergence of the family $(\ue )_\eps$, when $\eps \searrow 0$, towards the solution of the heat equation associated to the averaged diffusion matrix field $\ave{D}_Q$.
\begin{proof} (of Theorem \ref{MainResult1}) Based on the uniform estimates in Proposition \ref{UnifEstim}, there is a sequence $(\eps _k)_k$, converging to $0$, such that \[ \uek \rightharpoonup u \;\mbox{ weakly } \star \mbox{ in } L^\infty(\R_+; H),\;\;\nabla _y \uek \rightharpoonup \nabla _y u \;\mbox{ weakly in }\;L^2(\R_+;H). \] Using the weak formulation of \eqref{Equ1} with test functions $\eta (t) \varphi (y)$, $\eta \in C^1 _c (\R_+), \varphi \in C^1 _c (\R^m)$ yields \begin{align} \label{Equ77} - \intty{\eta ^\prime (t) \varphi (y) \uek (t,y) } & - \eta (0) \inty{\varphi \uekin } + \intty{\eta \nabla _y \uek \cdot D \nabla _y \varphi } \nonumber \\ & = - \frac{1}{\eps _k} \intty{\eta (t) ( b \cdny \uek) ( b \cdny \varphi )}. \end{align} Multiplying by $\eps _k$ and letting $k \to +\infty$, it is easily seen that \[ \intty{\eta (b \cdny u ) \;(b \cdny \varphi ) } = 0. \] Therefore $u(t,\cdot) \in \ker {\cal T} = \kerbg$, $t \in \R_+$, cf. Proposition \ref{KerRanTau}. Clearly \eqref{Equ77} holds true for any $\varphi \in V$. In particular, for any $\varphi \in V \cap \kerbg{}$ one gets \begin{align} \label{Equ78} - \intty{\eta ^\prime \uek \varphi } - \eta (0) \inty{\uekin \varphi } + \intty{\eta \nabla _y \uek \cdot D \nabla _y \varphi } = 0. \end{align} Thanks to the average properties we have \[ \inty{\uekin \varphi } = \inty{\ave{\uekin} \varphi } \to \inty{\uin \varphi} \] and thus, letting $k \to +\infty$ in \eqref{Equ78}, leads to \begin{align} \label{Equ79} - \intty{\eta ^\prime u \varphi } - \eta (0) \inty{\uin \varphi } + \intty{\eta \nabla _y u \cdot D \nabla _y \varphi } = 0. \end{align} Since $u(t, \cdot), \varphi \in V \cap \kerbg{}$ we have cf. 
Theorem \ref{AveMatDif} \[ \inty{\nabla _y u \cdot D \nabla _y \varphi } = \inty{\nabla _y u \cdot \ave{D}_Q \nabla _y \varphi} \] and \eqref{Equ79} becomes \begin{align} \label{Equ80} - \intty{\eta ^\prime u \varphi } - \eta (0) \inty{\uin \varphi } + \intty{\eta \nabla _y u \cdot \ave{D}_Q \nabla _y \varphi } = 0. \end{align} But \eqref{Equ80} is still valid for test functions $\varphi = b \cdny \psi$, $\psi \in C^2 _c (\R^m)$ since $u(t,\cdot) \in \kerbg$, $\uin = \mbox{w}-\lime \ave{\uein} \in \kerbg$ and $\ave{D}_Q \in \ker L$ \[ \inty{u(t,y) b \cdny \psi } = 0,\;\;\inty{\uin b \cdny \psi } = 0,\;\;\inty{\nabla _y u \cdot \ave{D}_Q \nabla _y ( b \cdny \psi ) }= 0 \] cf. Theorem \ref{AveMatDif}. Therefore, for any $v \in V$ one gets \[ \frac{\md}{\md t} \inty{u (t,y) v(y) } + \inty{\nabla _y u \cdot \ave{D}_Q \nabla _y v } = 0\;\mbox{ in } \dpri{} \] with $u(0) = \uin$. By the uniqueness of the solution of \eqref{Equ75}, \eqref{Equ76} we deduce that all the family $(\ue)_\eps$ converges weakly to $u$. \end{proof}
\begin{remark} \label{Propagation} Notice that \eqref{Equ75} propagates the constraint $b \cdny u = 0$, if satisfied initially. Indeed, for any $v \in C^1 _c (\R^m)$ we have \begin{equation} \label{Equ81}\frac{\md }{\md t } \inty{u (t,y) v (y) } + \inty{ \nabla _y u \cdot \ave{D}_Q \nabla _y v } = 0\;\mbox{ in } \dpri{}. \end{equation} Since $\ave{D}_Q \in \ker L$, we know by the second statement of Proposition \ref{WMFI} that \begin{equation*}
\inty{\nabla _y u_s \cdot \ave{D}_Q \nabla _y v } = \inty{\nabla _y u \cdot \ave{D}_Q \nabla _y v_{-s}}. \end{equation*} Replacing $v$ by $v_{-s}$ in \eqref{Equ81} we obtain \[ \frac{\md }{\md t } \inty{u_s v } + \inty{\nabla _y u_s \cdot \ave{D}_Q \nabla _y v } = 0\;\mbox{ in } \dpri{} \] and therefore $u_s$ solves \[ \partial _t u_s - \divy ( \ave{D}_Q \nabla _y u_s) = 0,\;\;(t, y) \in \R_+ \times \R^m \] and $u_s (0,y) = \uin (\ysy) = \uin (y), y \in \R^m$. By the uniqueness of the solution of \eqref{Equ75}, \eqref{Equ76} one gets $u_s = u$ and thus, at any time $t \in \R_+$, $b \cdny u (t,\cdot) = 0$. \end{remark}
\section{Second order approximation} \label{SecOrdApp} \noindent For the moment we have determined the model satisfied by the dominant term in the expansion \eqref{Equ6}. We focus now on second order approximation, that is, a model which takes into account the first order correction term $\eps u ^1$. Up to now we have used the equations \eqref{Equ7}, \eqref{Equ8}. Finding a closure for $u + \eps u ^1$ will require one more equation \begin{equation} \label{Equ83} \partial _t u^1 - \divy ( D \nabla _y u^1 ) - \divy ( b \otimes b \nabla _y u^2) = 0,\;\;(t, y) \in \R_+ \times \R ^m. \end{equation} Let us see, at least formally, how to get a second order approximation for $(\ue )_\eps$, when $\eps $ becomes small. The first order approximation {\it i.e.}, the closure for $u$, has been obtained by averaging \eqref{Equ8} and by taking into account that $u \in \kerbg{}$ \[ \partial _t u = \ave{\divy( D \nabla _y u ) } = \divy ( \ave{D}_Q \nabla _y u ). \] Thus $u^1$ satisfies \begin{equation} \label{Equ84} \divy ( \ave{D}_Q \nabla _y u ) - \divy ( D \nabla _y u ) - \divy ( b \otimes b \nabla _y u^1) = 0 \end{equation} from which we expect to express $u^1$, up to a function in $\kerbg{}$, in terms of $u$.
\begin{proof} (of Theorem \ref{Decomposition}) We claim that $\ran L^2 = \ran L $ and thus $\ran L^2 $ is closed as well. Clearly $\ran L^2 \subset \ran L$. Consider now $Z = L(Y)$ for some $Y \in \dom (L)$. But $Y - \mathrm{Proj}_{\ker L} Y \in \ker L ^\perp = (\ker L^\star ) ^\perp = \overline{\ran L} = \ran L$ and there is $X \in \dom (L)$ such that $Y - \mathrm{Proj}_{\ker L} Y = L(X)$. Finally $X \in \dom (L^2)$ and \[ Z = L(Y) = L(Y - \mathrm{Proj} _{\ker L} Y ) = L(L(X)). \] By construction we have $D - \ave{D}_Q \in ( \ker L)^\perp = ( \ker L^\star ) ^\perp = \overline{\ran L} = \ran L = \ran L^2$ and thus there is a unique $F \in \dom (L^2) \cap ( \ker L )^\perp $ such that $D = \ave{D}_Q - L(L(F))$. As $F \in ( \ker L )^\perp$, there is $C \in \dom (L)$ such that $F = L(C)$ implying that ${^t F} = {^t L(C)} = L ({^t C})$. Therefore ${^t F } \in \dom (L^2) \cap ( \ker L )^\perp$ and satisfies the same equation as $F$ \[ L(L({^t F})) = {^t L}(L(F)) = \ave{D}_Q - D. \] By the uniqueness we deduce that $F$ is a field of symmetric matrix. By Proposition \ref{PropOpeL} we know that \[ - \divy(L(F) \nabla _y ) = [b \cdot \nabla _y, - \divy ( F \nabla _y )]\;\mbox{ in }\; \dpri{} \] {\it i.e.,} \[ \inty{L(F) \nabla _y u \cdot \nabla _y v } = - \inty{F \nabla _y u \cdot \nabla _y ( b \cdny v ) } - \inty{F \nabla _y ( b \cdny u ) \cdny v } \] for any $u, v \in C^2 _c (\R^m)$. 
Similarly, $E := L(F)$ satisfies \[ - \divy ( L^2 (F) \nabla _y ) = - \divy (L(E) \nabla _y ) = [b\cdny, - \divy ( E \nabla _y )]\;\mbox{ in }\;\dpri{} \] and thus, for any $u, v \in C^3_c (\R^m)$ one gets \begin{align*} & \inty{(\ave{D}_Q - D) \nabla _y u \cdny v } = \inty{L^2(F)\nabla _y u \cdny v } \\ & = - \inty{L(F) \nabla _y u \cdny ( b \cdny v ) }- \inty{L(F) \nabla _y ( b \cdny u ) \cdny v } \\ & = \inty{F \nabla _y u \cdny ( b \cdny ( b \cdny v ))} + \inty{F \nabla _y ( b \cdny u ) \cdny ( b \cdny v)} \\ & + \inty{F \nabla _y ( b \cdny u ) \cdny ( b \cdny v)} + \inty{F \nabla _y ( b \cdny ( b \cdny u )) \cdny v}. \end{align*} \end{proof} \noindent The matrix fields $F \in \dom (L^2)$ and $E = L(F) \in \dom (L)$ have the following properties. \begin{pro} \label{PropOpeF} For any $u, v \in C^1 (\R^m)$ which are constant along the flow of $b$ we have in $\dpri{}$ \[ D \nabla _y u \cdny v - \ave{D}_Q \nabla _y u \cdny v = - b \cdny ( E \nabla _y u \cdny v ) = - \divy ( b \otimes b \nabla _y ( F \nabla _y u \cdny v )) \] and \[ \ave{E \nabla _y u \cdny v } = \ave{ F \nabla _y u \cdny v } = 0. \] In particular \[ \inty{E \nabla _y u \cdny v } = \inty{\ave{E \nabla _y u \cdny v}}= 0 \]
\[ \inty{F \nabla _y u \cdny v } = \inty{\ave{F \nabla _y u \cdny v}}= 0 \] saying that $\ave{\divy ( E \nabla _y u ) } = \ave{\divy ( F \nabla _y u )} = 0$ in $\dpri{}$. \end{pro}
\begin{proof} Consider $\varphi \in C^1 _c (\R^m)$, $u, v \in C^1 (\R^m)$ such that $u_s = u, v_s = v$, $s \in \R$ and the matrix field $U = \varphi Q^{-1} \nabla _y v \otimes \nabla _y u Q^{-1} \in H_Q$. Actually $U \in \dom (L)$ and, as in the proof of the last statement in Proposition \ref{PropOpeL}, one gets \begin{align*} L(U) & = (b \cdny \varphi ) Q^{-1} \nabla _y v \otimes \nabla _y u Q^{-1} + \varphi \;L ( Q^{-1} \nabla _y v \otimes \nabla _y u Q^{-1}) \\ & = (b \cdny \varphi) Q^{-1} \nabla _y v \otimes \nabla _y u Q^{-1} \end{align*} since $ Q^{-1} \nabla _y v \otimes \nabla _y u Q^{-1} \in \ker (L)$. Multiplying by $U$ the equality $D - \ave{D}_Q = - L(E)$, $E = L(F)$, one gets \[ \inty{\varphi ( D - \ave{D}_Q)\nabla _y u \cdny v } = - (L(E), U)_Q = (E, L(U))_Q = \inty{(b \cdny \varphi) ( E \nabla _y u \cdny v )} \] implying that $D \nabla _y u \cdny v = \ave{D}_Q \nabla _y u \cdny v - b \cdny ( E \nabla _y u \cdny v)$ in $\dpri{}$. Multiplying by $U$ the equality $E = L(F)$ yields \[ \inty{\varphi E \nabla _y u \cdny v } = (E, U)_Q = (L(F), U)_Q = - (F, L(U))_Q = - \inty{(b \cdny \varphi) F \nabla _y u \cdny v}. \] We obtain \[ E \nabla _y u \cdny v = b \cdny ( F \nabla _y u \cdny v) \;\mbox{ in }\; \dpri{} \] and thus \[ D \nabla _y u \cdny v - \ave{D}_Q \nabla _y u \cdny v = - b \cdny (E \nabla _y u \cdny v ) = - b \cdny ( b \cdny ( F \nabla _y u \cdny v )) \] in $\dpri{}$. Consider now $U = \varphi Q^{-1} \nabla _y v \otimes \nabla _y u Q^{-1}$ with $\varphi \in \kerbg{}$. We know that $L(U) = 0$ and since, by construction $F \in (\ker L )^\perp$, we deduce \[ \inty{\varphi F \nabla _y u \cdny v } = (F, U)_Q = 0 \] saying that $\ave{F \nabla _y u \cdny v} = 0$. Similarly $E = L(F) \in (\ker L )^\perp$ and $\ave{E \nabla _y u \cdny v } = 0$. \end{proof}
\begin{remark} \label{ParametrizationBis} Assume that there is $u_0$ (possibly multi-valued) satisfying $u_0 (\ysy{}) = u_0 (y) + s$, $s \in \R, y \in \R^m$. Its gradient changes along the flow of $b$ exactly as the gradient of any function which is constant along this flow, cf. Remark \ref{Parametrization}. We deduce that $Q^{-1} \nabla _y v \otimes \nabla _y u Q^{-1} \in \ker L$ for any $u, v \in \kerbg \cup \{u_0\}$ and therefore the arguments in the proof of Proposition \ref{PropOpeF} still apply when $u, v \in \kerbg{} \cup \{u_0\}$. In the case when $m-1$ independent prime integrals $\{u_1, ..., u_{m-1}\}$ of $b$ are known, the matrix fields $E, F$ come by imposing, for any $i, j \in \{0,1,...,m-1\}$, \[ - b \cdny (E \nabla _y u_i \cdny u _j) = D \nabla _y u_i \cdny u_j - \ave{D \nabla _y u_i \cdny u_j},\;\;\ave{E \nabla _y u_i \cdny u_j} = 0 \] and \[ b \cdny (F \nabla _y u_i \cdny u _j) = E \nabla _y u_i \cdny u _j,\;\;\ave{F \nabla _y u_i \cdny u _j } = 0. \] \end{remark} We now indicate sufficient conditions which guarantee that the range of $L$ is closed. \begin{pro} \label{CompleteIntegr} Assume that \eqref{Equ21}, \eqref{Equ22}, \eqref{Equ23} hold true and that there is a matrix field $R(y)$ such that \eqref{Equ90} holds true. Then the range of $L$ is closed. \end{pro}
\begin{proof} Observe that \eqref{Equ90} implies \eqref{Equ56}. Indeed, it is easily seen that $b \cdny R + R \partial _y b = 0$ in $\dpri{}$ is equivalent to $R = R_s \partial _y Y ( s; \cdot)$, $s \in \R$. We deduce that $P = R ^{-1} \;{^t R} ^{-1}$ satisfies \[ G(s)P = \partial _y Y ^{-1} (s; \cdot) P_s {^t \partial _y Y ^{-1} (s; \cdot)} = \partial _y Y ^{-1} (s; \cdot)R_s ^{-1} \;{^t R_s} ^{-1} \;{^t \partial _y Y ^{-1} (s; \cdot)} = R^{-1} \;{^t R}^{-1} = P \] saying that $[b,P] = 0$ in $\dpri{}$. Therefore we can define $L$ as before, on $H_Q$, which coincides in this case with $\{A:RA\;{^t R} \in \lty{}\}$. We claim that $i \circ L = ( b \cdny ) \circ i$ where $i : H_Q \to \lty{}$, $i(A) = R A\; {^t R}$, $A \in H_Q$, which comes immediately from the equalities \[ (i\circ G(s))A = RG(s)A {^t R} = R \partial _y Y ^{-1}( s; \cdot ) A_s {^t \partial _y Y }^{-1} {^t R} = R_s A_s {^t R_s} = (i(A))_s,\;s\in \R, A\in H_Q. \] In particular we have \[ \ker L = \{A \in H_Q\;:\; i(A) \in \kerbg\} \] and \begin{align*} (\ker L )^\perp & = \{A \in H_Q\;:\; \inty{i(A) : U } = 0\;\forall\;U \in \kerbg{}\} \\ & = \{A \in H_Q\;:\; i(A) \in ( \kerbg)^\perp \}. \end{align*} For any $A \in (\ker L)^\perp$ we can apply the Poincar\'e inequality \eqref{Equ23} to $i(A) \in (\kerbg)^\perp$ and we obtain \[
|A|_Q = |i(A)|_{L^2} \leq C_P |b \cdny (i(A))|_{L^2} = C_P |i (L(A))|_{L^2} = C_P |L(A)|_Q. \] Therefore $L$ satisfies a Poincar\'e inequality as well, and thus the range of $L$ is closed. \end{proof}
\begin{remark} \label{ClosedRanL} The hypothesis $b \cdny R + R \dyb = 0$ in $\dpri{}$ says that the columns of $R^{-1}$ form a family of $m$ independent vector fields in involution with respect to $b$, cf. Proposition \ref{VFI} \[ R_s ^{-1} (y) = \dyy R ^{-1} (y),\;\;s\in \R,\;\;y \in \R^m. \] \end{remark}
\begin{remark} \label{ExplicitAve} For any $U \in \ker L$, that is $i(U) \in \kerbg{}$, we have \[ \inty{R ( D - \ave{D}_Q) {^t R } : i(U)} = 0. \] As $\ave{D}_Q \in \ker L $, we know that $i(\ave{D}_Q) = R \ave{D}_Q {^t R } \in \kerbg{}$ and thus the matrix field $R \ave{D}_Q {^t R }$ is the average (along the flow of $b$) of the matrix field $RD\;{^t R}$, which allows us to express $\ave{D}_Q$ in terms of $R$ and $D$ \[ R \ave{D}_Q {^t R } = \ave{R D \;{^t R }}. \] \end{remark}
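As an illustrative special case (not treated in the text, and stated here only under these simplifying assumptions): if $b$ generates a rigid rotation in $\R^2$, then $\partial_y Y(s;\cdot)$ is the rotation matrix $R(s)$, one may take $Q = I$, and for a constant symmetric $D$ the averaged field reduces to the angular mean, $\frac{1}{2\pi}\int_0^{2\pi} {}^tR(\theta)\, D\, R(\theta)\,\md\theta = \frac{\mathrm{tr}\,D}{2}\, I$. A quick numerical check in Python:

```python
import numpy as np

def Rot(t):
    # rotation matrix R(t) in R^2
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

D = np.array([[3.0, 1.0],
              [1.0, 1.0]])            # constant symmetric matrix field

# angular mean of R(t)^t D R(t) over one period (uniform grid, endpoint dropped)
ts = np.linspace(0.0, 2.0 * np.pi, 2001)
avg = np.mean([Rot(t).T @ D @ Rot(t) for t in ts[:-1]], axis=0)

# for constant D in 2D the average is (tr D / 2) times the identity
assert np.allclose(avg, 0.5 * np.trace(D) * np.eye(2), atol=1e-8)
```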
From now on we assume that \eqref{Equ90} holds true. Applying the decomposition of Theorem \ref{Decomposition} with the dominant term $u \in \kerbg$ in the expansion \eqref{Equ6} and any $v \in C^3 _c (\R^m)$ yields \[ \inty{(D - \ave{D}_Q) \nabla _y u \cdny v } = - \inty{F \nabla _y u \cdny ( b \cdny ( b \cdny v ))}. \] From \eqref{Equ84} one gets \[ \inty{(D - \ave{D}_Q ) \nabla _y u \cdny v } - \inty{u^1 b \cdny ( b \cdny v )} = 0 \] and thus \begin{equation} \label{CorrSplit} u^1 = \divy ( F \nabla _y u ) + v^1,\;\;v^1 \in \ker ( b \cdny ( b \cdny )) = \kerbg. \end{equation} Notice that $\ave{u^1} = v^1$, since $\ave{\divy ( F \nabla _y u )} = 0$, cf. Proposition \ref{PropOpeF}. The time evolution for $v^1 = \ave{u^1}$ comes by averaging \eqref{Equ83} \[ \partial _t v ^1 - \ave{\divy ( D \nabla _y v^1)} - \ave{\divy ( D \nabla _y ( \divy ( F \nabla _y u )))} = 0. \] As $v^1 \in \kerbg$ we have \[ - \ave{\divy ( D \nabla _y v^1)} = - \divy ( \ave{D}_Q \nabla _y v^1) \] and we can write, with the notation $w^1 = \divy (F \nabla _y u)$ \begin{align} \label{Equ86} \partial _t \{u + \eps u^1\} - \divy ( \ave{D}_Q \nabla _y \{u + \eps u^1\}) = \eps \partial _t w^1 - \eps \divy ( \ave{D}_Q \nabla _y w^1 ) + \eps \ave{\divy ( D \nabla _y w^1)}. \end{align} But the time derivative of $w^1$ is given by \[ \partial _t w^1 = \divy ( F \nabla _y \partial _t u ) = \divy ( F \nabla _y ( \divy ( \ave{D}_Q \nabla _y u ))) \] which implies \begin{align*} \partial _t w^1 - \divy ( \ave{D}_Q\nabla _y w^1) & = \divy ( F \nabla _y ( \divy ( \ave{D}_Q \nabla _y u )))- \divy ( \ave{D}_Q \nabla _y ( \divy ( F \nabla _y u ))) \\ & = - [\divy(\ave{D}_Q \nabla _y ), \divy ( F \nabla _y )]u. 
\end{align*} Up to a second order term, the equation \eqref{Equ86} writes \begin{align} \label{Equ102} \partial _t \{u + \eps u^1\} - \divy ( \ave{D}_Q \nabla _y \{u + \eps u ^1\}) & + \eps [\divy(\ave{D}_Q \nabla _y ), \divy ( F \nabla _y )]\{u + \eps u^1\} \nonumber \\ & - \eps \ave{\divy ( D \nabla _y ( \divy ( F \nabla _y u )))} = {\cal O}(\eps ^2). \end{align} We claim that for any $u \in \kerbg$ we have \begin{equation} \label{Equ87} \ave{\divy ( D \nabla _y ( \divy ( F \nabla _y u)))} = \ave{\divy ( E \nabla _y ( \divy ( E \nabla _y u )))}. \end{equation} By Proposition \ref{PropOpeF} we know that $\ave{\divy ( F \nabla _y u )} = 0$. As $L(\ave{D}_Q) = 0$ we have \[ [b \cdny, - \divy ( \ave{D}_Q \nabla _y )] = - \divy ( L ( \ave{D}_Q) \nabla _y ) = 0 \] and thus $\divy ( \ave{D}_Q\nabla _y)$ leaves invariant the subspace of functions which are constant along the flow of $b$. By the symmetry of the operator $\divy ( \ave{D}_Q \nabla _y )$, we deduce that the subspace of zero average functions is also left invariant by $\divy ( \ave{D}_Q \nabla _y )$. Therefore $\ave{\divy ( \ave{D}_Q \nabla _y ( \divy ( F \nabla _y u )))} = 0$ and \[ \ave{\divy ( D \nabla _y ( \divy ( F \nabla _y u )))} = \ave{\divy ((D - \ave{D}_Q) \nabla _y ( \divy ( F \nabla _y u )))}. \] Thanks to Theorem \ref{Decomposition} we have \begin{align*} \divy((D - \ave{D}_Q)\nabla _y ) & = [b \cdny, [b \cdny, - \divy ( F \nabla _y )]\;] \\ & = [b \cdny, - \divy (L(F)\nabla _y )]\\ & = [b \cdny, - \divy (E\nabla _y )] \end{align*} which implies that \begin{align*} & \ave{\divy ( D \nabla _y ( \divy ( F \nabla _y u )))} = \ave{\divy ( (D - \ave{D}_Q) \nabla _y ( \divy ( F \nabla _y u )))} \\ & = \ave{\divy ( E \nabla _y ( b \cdny ( \divy ( F \nabla _y u )))) - b \cdny ( \divy ( E \nabla _y ( \divy ( F \nabla _y u ))))}\\ & = \ave{\divy ( E \nabla _y ( b \cdny ( \divy ( F \nabla _y u ))))}. 
\end{align*} Finally notice that \[ - \divy ( E \nabla _y u ) = - \divy ( L(F)\nabla _y u ) = [b \cdny, - \divy ( F \nabla _y u )] = - b \cdny ( \divy ( F \nabla _y u )) \] and \eqref{Equ87} follows.
We need to average the differential operator $\divy ( E \nabla _y ( \divy ( E \nabla _y )))$ on functions $u \in \kerbg$. For simplicity we perform these computations at a formal level, assuming that all fields are smooth enough. The idea is to express the above differential operator in terms of the derivations ${^t R }^{-1} \nabla _y $ which commute with the average operator (see Proposition \ref{AveComFirstOrder}), since the columns of $R^{-1}$ contain vector fields in involution with $b(y)$.
\begin{lemma} \label{ChangeOfCoord} Under the hypothesis \eqref{Equ90}, for any smooth function $u(y)$ and matrix field $E(y)$ we have \begin{equation} \label{Equ100} \divy ( E \nabla _y u) = \divy ( R \;{^t E}) \cdot ( {^t R}^{-1} \nabla _y u ) + R E \;{^t R} : ( {^t R } ^{-1} \nabla _y \otimes {^t R }^{-1} \nabla _y ) u. \end{equation} \end{lemma}
\begin{proof} Applying the formula $\divy (A\xi) = \divy {^t A} \cdot \xi + {^t A } : \partial _y \xi$, where $A(y)$ is a matrix field and $\xi (y)$ is a vector field, one gets \[ \divy ( E \nabla _y u ) = \divy ( E \;{^t R } \;{^t R ^{-1}} \nabla _y u ) = \divy ( R \;{^t E}) \cdot ( {^t R }^{-1} \nabla _y u ) + R \;{^t E} : \partial _y ( {^t R }^{-1} \nabla _y u ). \] The last term in the above formula writes \begin{align*} R \;{^t E } : \partial _y ( {^t R }^{-1} \nabla _y u ) & = R \;{^t E} \;{^t R}\; {^t R } ^{-1} : \partial _y ( {^t R } ^{-1} \nabla _y u ) \\ & = R \;{^t E } \;{^t R } : \partial _y ( {^t R} ^{-1} \nabla _y u ) R ^{-1} \\ & = R E \;{^t R} : {^t R }^{-1} \;{^t \partial _y } ( {^t R } ^{-1} \nabla _y u ) \\ & = R E \;{^t R} : ( {^t R} ^{-1} \nabla _y \otimes {^t R} ^{-1} \nabla _y ) u \end{align*} and \eqref{Equ100} follows. \end{proof}
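The matrix-calculus formula used in this proof can be checked numerically. The sketch below is an illustration only (the test fields, evaluation point, and step size are our own choices, not from the text), and it assumes the row-wise divergence convention $(\divy M)_i = \sum_j \partial_j M_{ij}$ together with the Jacobian convention $(\partial_y \xi)_{ij} = \partial \xi_i / \partial y_j$.

```python
# Numerical sanity check of the matrix-calculus identity used above,
#   div_y(A xi) = div_y(tA) . xi + tA : d_y xi,
# under the conventions (div_y M)_i = sum_j d_j M_ij (row-wise divergence)
# and (d_y xi)_ij = d xi_i / d y_j, with central finite differences in R^2.
import math

def A(y):   # an arbitrary smooth matrix field (test data, not from the paper)
    return [[y[0]**2, y[0]*y[1]],
            [math.sin(y[1]), y[0] + y[1]**3]]

def xi(y):  # an arbitrary smooth vector field
    return [y[0]*y[1]**2, math.cos(y[0])]

STEP = 1e-5

def partial(f, y, j):
    """Central difference approximation of d f / d y_j for scalar-valued f."""
    yp, ym = list(y), list(y)
    yp[j] += STEP
    ym[j] -= STEP
    return (f(yp) - f(ym)) / (2 * STEP)

def lhs(y):
    # div_y(A xi) = sum_i d_i ( sum_j A_ij xi_j )
    return sum(partial(lambda z, i=i: sum(A(z)[i][j] * xi(z)[j] for j in range(2)),
                       y, i)
               for i in range(2))

def rhs(y):
    # (div_y tA)_k = sum_i d_i A_ik
    div_tA = [sum(partial(lambda z, i=i, k=k: A(z)[i][k], y, i) for i in range(2))
              for k in range(2)]
    # Jacobian (d_y xi)_ij = d xi_i / d y_j
    jac = [[partial(lambda z, i=i: xi(z)[i], y, j) for j in range(2)]
           for i in range(2)]
    return (sum(div_tA[k] * xi(y)[k] for k in range(2))                         # div_y(tA) . xi
            + sum(A(y)[j][i] * jac[i][j] for i in range(2) for j in range(2)))  # tA : d_y xi

err = abs(lhs([0.7, -0.4]) - rhs([0.7, -0.4]))
```

The two sides agree up to finite-difference error, consistent with the term-by-term expansion $\sum_{i,j}(\partial_i A_{ij})\xi_j + A_{ij}\partial_i\xi_j$.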
Next we claim that the term $\ave{\divy ( E \nabla _y ( \divy ( E \nabla _y u )))}$ reduces to a differential operator, if $u \in \kerbg{}$.
\begin{pro} \label{DifOpe} Under the hypothesis \eqref{Equ90}, for any smooth matrix field $E$ there is a linear differential operator $S(u)$ of order four, such that, for any smooth $u \in \kerbg{}$ \begin{equation} \label{Equ101} \ave{\divy ( E \nabla _y ( \divy ( E \nabla _y u )))} = S(u). \end{equation} \end{pro}
\begin{proof} For any smooth functions $u, \varphi \in \kerbg{}$ we have, cf. Lemma \ref{ChangeOfCoord} \begin{align*} & \inty{\ave{\divy(E \nabla _y ( \divy ( E \nabla _y u )))}\varphi } = \inty{\divy(E \nabla _y ( \divy ( E \nabla _y u ))) \varphi }\\ & = \inty{\divy ( E \nabla _y u ) \;\divy ( E \nabla _y \varphi )} \\ & = \inty{\{\divy ( R \; ^t E) \cdot ( ^t R ^{-1} \nabla _y u ) + R E \; ^t R : ( ^t R ^{-1} \nabla _y \otimes {^t R } ^{-1} \nabla _y )u \}\\ & \times \{\divy ( R \; ^t E) \cdot ( ^t R ^{-1} \nabla _y \varphi ) + R E \; ^t R : ( ^t R ^{-1} \nabla _y \otimes {^t R } ^{-1} \nabla _y )\varphi \}}\\ & = \inty{[\divy (R \;\;^t E) \otimes \divy ( R \;\;^t E)] : [^t R ^{-1} \nabla _y u \otimes {^t R} ^{-1} \nabla _y \varphi] }\\ & + \inty{[R E \;\;^t R \otimes \divy ( R \;\;^t E)] : [( ^t R ^{-1} \nabla _y \otimes {^t R }^{-1}\nabla _y )u \otimes {^t R } ^{-1} \nabla _y \varphi] }\\ & + \inty{[\divy( R \;\;^t E) \otimes R E \;\;^t R] : [(^t R ^{-1} \nabla _y u ) \otimes ( ^t R ^{-1} \nabla _y \otimes {^t R}^{-1} \nabla _y ) \varphi]}\\ & + \inty{[R E \;\;^t R \otimes R E \;\;^t R]:[ ( ^t R ^{-1} \nabla _y \otimes {^t R }^{-1}\nabla _y )u \otimes ( ^t R ^{-1} \nabla _y \otimes {^t R }^{-1}\nabla _y )\varphi ]} \end{align*}
Recall that $^tR ^{-1} \nabla _y $ leaves invariant $\kerbg$ and therefore \[ {^t R }^{-1} \nabla _y u \otimes {^t R } ^{-1} \nabla _y \varphi \in \kerbg{} \] implying that \begin{align*} & \inty{[\divy (R \;\;^t E) \otimes \divy ( R \;\;^t E)] : [^t R ^{-1} \nabla _y u \otimes {^t R} ^{-1} \nabla _y \varphi] }\\
= & \inty{\ave{\divy (R \;\;^t E) \otimes \divy ( R \;\;^t E)} : [^t R ^{-1} \nabla _y u \otimes {^t R} ^{-1} \nabla _y \varphi] }. \end{align*}
Similar transformations apply to the other three integrals above, and finally one gets
\begin{align*} \inty{\ave{\divy(E \nabla _y ( \divy ( E \nabla _y u )))}\varphi } & = \inty{X : [\nablar u \otimes \nablar \varphi ]} \\ & + \inty{Y : [( \nablar \otimes \nablar )u \otimes \nablar \varphi ]}\\ & + \inty{Z : [\nablar u \otimes ( \nablar \otimes \nablar ) \varphi] } \\ & + \inty{T : [( \nablar \otimes \nablar )u \otimes ( \nablar \otimes \nablar ) \varphi]}\\ & = I_1 (u, \varphi) + I_2 (u, \varphi) + I_3 (u, \varphi) + I_4 (u, \varphi) \end{align*} where $\nablar := {^t R} ^{-1} \nabla _y $ and $X, Y, Z, T$ are tensors of order two, three, three and four respectively \[ X_{ij} = \ave{\divy ( R \;\;^t E) _i \;\divy(R \;\;^t E)_j},\;\;i,j\in \{1,...,m\} \] \[ Y_{ijk} = \ave{(R E \;\;^t R) _{ij} \;\divy (R \;\;^t E)_k},\;\;Z_{ijk} = \ave{\divy ( R \;\;^t E)_i \;\;(RE \;\;^t R)_{jk}} ,\;\;i,j, k\in \{1,...,m\} \] \[ T_{ijkl} = \ave{(RE \;\;^t R)_{ij} \;\;(RE \;\;^tR)_{kl}},\;\;i,j, k, l\in \{1,...,m\}. \] Integrating by parts one gets \[ I_1 (u, \varphi) = \inty{X \nablar u \cdot \nablar \varphi } = \inty{R^{-1} X \nablar u \cdot \nabla _y \varphi } = \inty{S_1 (u) \varphi} \] where $S_1 (u) = - \divy ( R^{-1} X \nablar u)$. Notice that the differential operator \[ \xi \to \divy ( R^{-1} \xi) = \divy (\;^t R ^{-1}) \cdot \xi + {^t R}^{-1} : \partial _y \xi \] maps $(\kerbg{})^m$ to $\kerbg{}$, since the columns of $R^{-1}$ contain fields in involution with $b$, and therefore $S_1$ leaves invariant $\kerbg{}$, that is, for any $u \in \kerbg{}$, $\xi = X \nablar u \in (\kerbg{})^m$ and $S_1 (u) = - \divy ( R^{-1} X \nablar u ) = - \divy ( R^{-1} \xi ) \in \kerbg{}$. Similarly we obtain \[ I_2 (u, \varphi)
= \inty{S_2 (u) \varphi },\;\;
I_3 (u, \varphi) = \inty{S_3 (u) \varphi },\;\;I_4 (u, \varphi) = \inty{S_4 (u) \varphi }
\] where $S_2, S_3, S_4$ are differential operators of order three, three and four respectively, which leave invariant $\kerbg{}$. We deduce that \[ \inty{\ave{\divy (E \nabla _y ( \divy ( E \nabla_y u )))} \varphi } = \inty{S(u) \varphi} \] for any $u, \varphi \in \kerbg{}$, with $S = S_1 + S_2 + S_3 + S_4$, saying that \[ \ave{\divy (E \nabla _y ( \divy ( E \nabla_y u )))} - S(u) \perp \kerbg{}. \] But we also know that \[ \ave{\divy (E \nabla _y ( \divy ( E \nabla_y u )))} - S(u) \in \kerbg{} \] and thus \eqref{Equ101} holds true. \end{proof}
Combining \eqref{Equ102}, \eqref{Equ87}, \eqref{Equ101} we obtain \begin{align*} \partial _t \{u + \eps u^1\} - \divy ( \ave{D}_Q \nabla _y \{u + \eps u ^1\}) & + \eps [\divy(\ave{D}_Q \nabla _y ), \divy ( F \nabla _y )]\{u + \eps u^1\} \\ & - \eps S(u + \eps u^1) = {\cal O}(\eps ^2) \end{align*} which justifies the equation introduced in \eqref{IntroEqu87}. The initial condition comes formally by averaging the Ansatz \eqref{Equ6} \[ \ave{\ue} = u + \eps v^1 + {\cal O}(\eps ^2). \] One gets \[ v ^1 (0, \cdot) = \mbox{w-} \lime \frac{\ave{\uein} - \uin }{\eps} = \vin \] implying that $u ^1 (0, \cdot) = \vin + \divy(F \nabla _y \uin )$, cf. \eqref{CorrSplit}, which justifies \eqref{NewIC}.
\section{An example}
Let us consider the vector field $b(y) = {^\perp y} := (y_2, - y_1)$, for any $y = (y_1, y_2) \in \R^2$ and the matrix field \[ D (y) = \left( \begin{array}{cc} \lambda _ 1 (y) & 0 \\ 0 & \lambda _2 (y) \end{array} \right),\;\;y \in \R^2 \] where $\lambda _1, \lambda _2 $ are given functions, satisfying $\min _{y\in \R^2} \{\lambda _1 (y), \lambda _2 (y)\} \geq d>0$. We intend to determine the first order approximation, when $\eps \searrow 0$, for the heat equation \begin{equation} \label{Equ91} \partial _t \ue - \divy ( D(y) \nabla _y \ue ) - \frac{1}{\eps} \divy ( b(y) \otimes b(y) \nabla _y \ue ) = 0,\;\;(t, y ) \in \R_+ \times \R ^2 \end{equation} with the initial condition \[ \ue (0, y) = \uin (y),\;\;y \in \R^2. \]
The flow of $b$ is given by $Y(s;y) = {\cal R}(-s)y$, $s \in \R, y \in \R^2$ where ${\cal R}(\alpha)$ stands for the rotation of angle $\alpha \in \R$. The functions in $\kerbg{}$ are those depending only on $|y|$. Notice that the matrix field \[
R(y) = \frac{1}{|y|}\left( \begin{array}{rr} y_2 & -y_1 \\ y_1 & y_2 \end{array} \right) \] satisfies $b \cdot \nabla _y R + R \partial _y b = 0$ and $Q = {^t R } R = I_2$. The averaged matrix field $\ave{D}_Q$ comes, thanks to Remark \ref{ExplicitAve}, by the formula $R \ave{D}_Q {^t R} = \ave{R D \;{^t R}}$ and thus \[ \ave{D}_Q = {^t R} \ave{RD\; {^t R}} R,\;\;\ave{RD\; {^t R}} = \left( \begin{array}{rr}
\ave{\frac{\lambda _1 y _2 ^2 + \lambda _2 y_1 ^2 }{|y|^2}} & \ave{\frac{(\lambda _1 - \lambda _2)y_1 y _2 }{|y|^2}} \\
\ave{\frac{(\lambda _1 - \lambda _2)y_1 y _2 }{|y|^2}} & \ave{\frac{\lambda _1 y _1 ^2 + \lambda _2 y_2 ^2 }{|y|^2}} \end{array} \right). \]
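The two stated properties of $R$, namely $Q = {^t R} R = I_2$ and $b \cdot \nabla_y R + R\, \partial_y b = 0$, can be confirmed numerically. The following sketch is an illustration only; the evaluation point $y_0$ and the finite-difference step are arbitrary choices of ours.

```python
# Numerical check (illustration only) that the matrix field R(y) of this example
# is orthogonal, Q = tR R = I_2, and in involution with b(y) = (y_2, -y_1):
#   b . grad_y R + R d_y b = 0.
import math

def R(y):
    r = math.hypot(y[0], y[1])
    return [[ y[1] / r, -y[0] / r],
            [ y[0] / r,  y[1] / r]]

def b(y):
    return [y[1], -y[0]]

y0 = [0.6, 1.3]
STEP = 1e-6
Rm = R(y0)

# Q = tR R should be the 2x2 identity
Q = [[sum(Rm[k][i] * Rm[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dR(i, j, k):
    """Central difference of the entry R_ij with respect to y_k."""
    yp, ym = list(y0), list(y0)
    yp[k] += STEP
    ym[k] -= STEP
    return (R(yp)[i][j] - R(ym)[i][j]) / (2 * STEP)

db = [[0.0, 1.0], [-1.0, 0.0]]   # Jacobian of b: (d_y b)_ij = d b_i / d y_j
C = [[sum(b(y0)[k] * dR(i, j, k) for k in range(2))
      + sum(Rm[i][k] * db[k][j] for k in range(2))
      for j in range(2)] for i in range(2)]

q_err = max(abs(Q[i][j] - (1.0 if i == j else 0.0)) for i in range(2) for j in range(2))
c_err = max(abs(C[i][j]) for i in range(2) for j in range(2))
```

Both residuals vanish up to rounding and discretization error, in agreement with the computation $b\cdot\nabla_y R_{11} = -y_1/|y|$ being cancelled by $(R\,\partial_y b)_{11} = y_1/|y|$, and similarly for the other entries.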
In the case when $\lambda _1, \lambda _2$ are left invariant by the flow of $b$, that is $\lambda _1, \lambda _2$ depend only on $|y|$, it is easily seen that \[
\ave{\frac{y_1 ^2}{|y|^2}} = \ave{\frac{y_2 ^2}{|y|^2}} = \frac{1}{2},\;\;\ave{\frac{y_1 y_2}{|y|^2}} = 0 \] and thus \[ \ave{D}_Q = {^t R } \frac{\lambda _1 + \lambda _2}{2} I_2 R = \frac{\lambda _1 + \lambda _2}{2} I_2. \] The first order approximation of \eqref{Equ91} is given by \[ \left\{ \begin{array}{ll}
\partial _t u - \divy \left ( \frac{\lambda _1 (y) + \lambda _2 (y)}{2} \nabla _y u \right ) = 0,& \;\;(t, y ) \in \R_+ \times \R ^2 \\
u(0,y) = \uin (y),& \;\;y \in \R^2. \end{array} \right. \]
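Since the orbits of $b(y) = {^\perp y}$ are circles, the average along the flow reduces to the angular mean. The quadrature below (our own discretization, for illustration) checks the stated averages and, for radial $\lambda_1, \lambda_2$ frozen on a circle, recovers $\ave{R D\,{^t R}} = \frac{\lambda_1 + \lambda_2}{2} I_2$.

```python
# Quadrature illustration: along the flow of b(y) = (y_2, -y_1) the average
# <.> is the angular mean over the circle through y.  We check
# <y_1^2/|y|^2> = <y_2^2/|y|^2> = 1/2 and <y_1 y_2/|y|^2> = 0, then assemble
# the entries of <R D tR> for radial coefficients (constant on each circle).
import math

N = 10000
thetas = [2 * math.pi * k / N for k in range(N)]

avg_c2 = sum(math.cos(t) ** 2 for t in thetas) / N            # <y_1^2/|y|^2>
avg_s2 = sum(math.sin(t) ** 2 for t in thetas) / N            # <y_2^2/|y|^2>
avg_cs = sum(math.cos(t) * math.sin(t) for t in thetas) / N   # <y_1 y_2/|y|^2>

lam1, lam2 = 2.0, 5.0   # illustrative values of lambda_1, lambda_2 on a fixed circle
M11 = lam1 * avg_s2 + lam2 * avg_c2   # <(lam1 y_2^2 + lam2 y_1^2)/|y|^2>
M12 = (lam1 - lam2) * avg_cs          # <(lam1 - lam2) y_1 y_2 / |y|^2>
M22 = lam1 * avg_c2 + lam2 * avg_s2   # <(lam1 y_1^2 + lam2 y_2^2)/|y|^2>
```

With these values, $M_{11} = M_{22} = (\lambda_1 + \lambda_2)/2 = 3.5$ and $M_{12} = 0$, so $\ave{D}_Q$ is the scalar matrix $\frac{\lambda_1+\lambda_2}{2} I_2$, as claimed.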
We consider the multi-valued function $u_0 (y) = - \theta (y)$, where $y = |y| ( \cos \theta (y), \sin \theta (y))$, which satisfies $b \cdot \nabla _y u_0 = 1$, or $u_0 (Y(s;y)) = u_0 (y) + s$. Notice that the averaged matrix field $\ave{D}_Q$ satisfies (with $u_1 (y) = |y|^2 /2 \in \kerbg{}$\;) \[ \nabla _y u_i \cdot \ave{D}_Q \nabla _y u _j = \ave{\nabla _y u _i \cdot D \nabla _y u_j },\;\;i, j \in \{0,1\} \] as predicted by Remark \ref{Parametrization}.
\appendix \section{Proofs of Propositions \ref{VFI}, \ref{WVFI}, \ref{MFI}, \ref{WMFI}} \label{A} \begin{proof} (of Proposition \ref{VFI}) For simplicity we assume that $b$ is divergence free. The general case follows similarly. Let $c(y)$ be a vector field satisfying \eqref{Equ34}. For any vector field $\phi \in C^1 _c (\R^m)$ we have, with the notation $u _\tau = u (Y(\tau;\cdot))$ \begin{align*} \inty{c \cdot ( \phi _{-h} - \phi )} = \inty{(c_h - c) \cdot \phi } = \inty{(\partial _y Y (h;y) - I) c \cdot \phi }. \end{align*} Multiplying by $h^{-1}$ and passing to the limit when $h \to 0$ imply \[ - \inty{c ( b \cdny \phi ) } = \inty{\partial _y b c\cdot \phi } \] and therefore $(b \cdny ) c - \partial _y b c = 0$ in $\dpri{}$.
Conversely, assume that $[b,c] = 0$ in $\dpri{}$. We introduce $e(s,y) = c(Y(s;y)) - \partial _y Y(s;y) c(y)$. Notice that $e(s,\cdot) \in \loloc{}, s \in \R$ and $e(0, \cdot) = 0$. For any vector field $\phi \in C^1 _c (\R^m)$ we have \[ E_\phi (s) : = \inty{e(s,y) \cdot \phi (y)} = \inty{c(y) \cdot \phi _{-s}} - \inty{\partial _y Y(s;y) c(y) \cdot \phi (y) } \] and thus \begin{align*} \frac{\md }{\md s} E_\phi (s) & = - \inty{c(y) \cdot ((b \cdny )\phi )_{-s}} - \inty{\partial _y (b(\ysy)) \;c(y) \cdot \phi (y) } \\ & = - \inty{c \cdot (b \cdny ) \phi _{-s}} - \inty{\partial _y b (Y(s;y)) \dyy c(y) \cdot \phi (y) }\\ & = \inty{\dyb \;c(y) \cdot \phi _{-s}} - \inty{\dyb (\ysy) \dyy c(y) \cdot \phi (y)}\\ & = \inty{\dyb (\ysy) ( c(\ysy) - \dyy c(y)) \cdot \phi (y)} \\ & = \inty{e(s,y)\cdot {^t \dyb} (\ysy) \phi (y)}. \end{align*} In the previous computation we have used the fact that the derivation and translation along $b$ commute \[ ((b \cdny ) \phi )_{-s} = (b\cdny )\phi _{-s}. \] After integration with respect to $s$ one gets \[ E_\phi (s) = \int _0 ^s \inty{e(\tau,y) \cdot {^t \dyb} (Y(\tau;y)) \phi (y) } \;\md \tau. \]
Clearly, the above equality still holds true for any $\phi \in C_c (\R^m)$. Consider $R>0, T>0$ and let $K = \|{^t \dyb} \circ Y\|_{L^\infty([-T,T] \times B_R)}$. Therefore, for any $s \in [-T, T]$ we obtain \begin{align*}
\|e(s,\cdot)\|_{L^\infty(B_R)} & = \sup \{ |E_\phi (s)|\;:\;\phi \in C_c (B_R),\;\;\|\phi \|_{\loy{}} \leq 1\}\\
& \leq K \left |\int _0 ^s \|e (\tau, \cdot) \|_{L^\infty (B_R)}\md \tau \right |. \end{align*}
By Gronwall's lemma we deduce that $\|e(s,\cdot)\|_{L^\infty(B_R)} = 0$ for $-T \leq s \leq T$, saying that $c(\ysy) - \dyy c(y) = 0$, $s \in \R$, $y \in \R^m$. \end{proof}
\begin{proof} (of Proposition \ref{WVFI})\\ 1.$\implies$ 2. By Proposition \ref{VFI} we deduce that $c(\ysy) = \dyy c(y)$ and therefore \begin{align*} \inty{(c\cdny u) v_{-s}} & = \inty{c(\ysy) \cdot (\nabla _y u ) (\ysy) v(y)}\\ & = \inty{c(y) \cdot {^t \dyy} (\nabla _y u )(\ysy) v(y) } = \inty{(c(y) \cdot \nabla _y u_s) v(y)}. \end{align*}
2.$\implies$ 3. Taking the derivative with respect to $s$ of \eqref{Equ41} at $s = 0$, we obtain \eqref{Equ42}.
3.$\implies$ 1. Applying \eqref{Equ42} with $v \in C^1 _c (\R^m)$ and $u _i = y_i \varphi (y)$, $\varphi \in C^2 _c (\R^m)$, $\varphi = 1$ on the support of $v$, yields \[ \inty{c_i \;b \cdny v } + \inty{c \cdny b_i \;v (y)}= 0 \] saying that $b \cdny c_i = (\dyb \;c) _i$ in $\dpri{}$, $i \in \{1,...,m\}$ and thus $[b,c] = b \cdny c - \dyb c = 0$ in $\dpri{}$. \end{proof}
\begin{proof} (of Proposition \ref{MFI}) The arguments are very similar to those in the proof of Proposition \ref{VFI}. Let us give the main lines. We assume that $b$ is divergence free, for simplicity. Let $A(y)$ be a matrix field satisfying \eqref{Equ35}. For any matrix field $U \in C^1 _c (\R^m)$ we have \begin{align*} \inty{A(y) & : ( U(Y(-h;y)) - U(y) )} = \inty{(A(Y(h;y)) - A(y)) : U(y) } \\ & = \inty{( \partial _y Y (h;y) A(y) {^t \partial _y Y (h;y)} - A(y)) : U(y)} \\ & = \inty{\{( \partial _y Y (h;y) - I ) A(y) {^t \partial _y Y (h;y)} : U(y) + A(y) {^t (\partial _y Y (h;y) - I)} : U(y)\}}. \end{align*} Multiplying by $\frac{1}{h}$ and passing $h \to 0$ we obtain \[ - \inty{A(y) : ( b \cdny U ) } = \inty{(\dyb A(y) + A(y) {^t \dyb }):U(y) } \] saying that $[b,A] = 0$ in $\dpri{}$.
For the converse implication define, as before \[ f(s,y) = A(\ysy ) - \dyy A(y) {^t \dyy},\;\;s \in \R,\;\;y \in \R^m. \] For any $U \in C^1 _c (\R^m)$ we have \begin{align*} F_U (s)& := \inty{f(s,y) : U(y)} \\ &= \inty{A(y) : U(Y(-s;y))} - \inty{\dyy A(y) {^t \dyy } : U (y)} \end{align*} and thus \begin{align*} \frac{\md }{\md s} F_U (s) & = - \inty{A(y) : (\;(b \cdny )U\;)_{-s}} - \inty{\partial _y ( b (\ysy)) A(y) {^t \dyy } : U(y)} \\ & - \inty{\dyy A(y) {^t \partial _y ( b (\ysy ))} : U(y)} \\ & = - \inty{A(y) : (b \cdny ) U_{-s} } - \inty{\dyb (\ysy) \dyy A(y) {^t \dyy} : U}\\ & - \inty{\dyy A(y) {^t \dyy} {^t \dyb (\ysy) } : U(y)}\\ & = \inty{ \{ \dyb (\ysy) f(s,y) + f(s,y) {^t \dyb (\ysy)}\} : U(y)} \\ & = \inty{f(s,y) : \{ {^t \dyb (\ysy)} U(y) + U(y) \dyb (\ysy) \}}. \end{align*} The previous equality still holds true for $U \in C_c (\R^m)$, and our conclusion follows as in the proof of Proposition \ref{VFI}, by Gronwall lemma. \end{proof}
\begin{proof} (of Proposition \ref{WMFI})\\ $1.\implies 2.$ By Proposition \ref{MFI} we deduce that $A(\ysy) = \dyy A(y) {^t \dyy}$. Using the change of variable $y \to \ysy$ one gets \begin{align*} \inty{A(y) \nabla _y u \cdot \nabla _y v } & = \inty{A(\ysy) (\nabla _y u )(\ysy) \cdot (\nabla _y v ) (\ysy)} \\ & = \inty{A(y) {^t \dyy }(\nabla _y u ) (\ysy) \cdot {^t \dyy } (\nabla _y v ) (\ysy) } \\ & = \inty{A(y) \nabla _y u_s \cdot \nabla _y v_s }. \end{align*} $2.\implies 3.$ Taking the derivative with respect to $s$ at $s = 0$ of the constant function $s \to \inty{A(y) \nabla _y u_s \cdot \nabla _y v_s}$ yields \[ \inty{A(y) \nabla _y ( b \cdny u ) \cdot \nabla _y v } + \inty{A(y) \nabla _y u \cdot \nabla _y ( b \cdny v) } = 0. \] $3.\implies 2.$ For any $u, v \in C^2 _c (\R^m)$ we can write, thanks to 3. applied with the functions $u_s, v_s$ \begin{align*} \frac{\md }{\md s} \inty{A(y) \nabla _y u_s \cdot \nabla _y v_s} & = \inty{A(y) \nabla _y ( \;( b \cdny u)_s )\cdot \nabla _y v_s} \\ & + \inty{A(y) \nabla _y u_s \cdot \nabla _y ( \; (b \cdny v )_s)} \\ & = \inty{A(y) \nabla _y ( b \cdny u_s) \cdot \nabla _y v_s } \\ & + \inty{A(y) \nabla _y u_s \cdot \nabla _y ( b \cdny v_s ) } = 0. \end{align*} Therefore the function $s \to \inty{A(y) \nabla _y u_s \cdot \nabla _y v_s}$ is constant on $\R$ and thus \[ \inty{A(y) \nabla _y u_s \cdot \nabla _y v_s} = \inty{A(y) \nabla _y u \cdot \nabla _y v},\;\;s\in \R. \] Up to now, the symmetry of the matrix $A(y)$ did not play any role. We only need it for the implication $2.\implies 1.$\\
$2.\implies 1.$ We have \begin{align*} \inty{A(y) \nabla _y u \cdot \nabla _y v } & = \inty{A(y) \nabla _y u_s \cdot \nabla _y v_s } \\ & = \inty{A(y) {^t \dyy} ( \nabla _y u)_s \cdot {^t \dyy } (\nabla _y v )_s } \\ & = \inty{\dyy A(y) {^t \dyy } (\nabla _y u )_s \cdot ( \nabla _y v )_s } \\ & = \inty{(\partial _y Y A \;{^t \partial _y Y})_{-s} \nabla _y u \cdot \nabla _y v } \end{align*} where $(\partial _y Y A {^t \partial _y Y})_{-s} = \partial _y Y (s; Y(-s;y)) A(Y(-s;y)) {^t \partial _y Y (s; Y(-s;y))}$. We deduce that \[ \inty{(A(y) - (\partial _y Y A \;{^t \partial _y Y })_{-s}) \nabla _y u \cdot \nabla _y v} = 0,\;\;u, v \in C^1 _c (\R^m). \] Since $A(y) - (\partial _y Y A \;{^t \partial _y Y })_{-s}$ is symmetric, it is easily seen, cf. Lemma \ref{Divergence} below, that $A(y) - (\partial _y Y A \;{^t \partial _y Y })_{-s}= 0$. Therefore we have $A(\ysy) = \dyy A(y) {^t \dyy}$, $s \in \R,y \in \R^m$ and by Proposition \ref{MFI} we deduce that $[b,A] = 0$ in $\dpri{}$. \end{proof}
\begin{lemma} \label{Divergence} Consider a symmetric matrix field $A(y) \in \loloc{}$ satisfying \begin{equation} \label{Equ38} \inty{A(y) \nabla _y u \cdot \nabla _y v} = 0,\;\;u, v \in C^1 _c (\R^m). \end{equation} Then $A(y) = 0$ for a.a. $y \in \R^m$. \end{lemma}
\begin{proof} Applying \eqref{Equ38} with $v_j = y_j v$, $v \in C^1 _c (\R^m)$, $u_i = y_i \varphi (y)$ where $\varphi \in C^1 _c (\R^m)$ and $\varphi = 1$ on the support of $v$, yields \begin{equation} \label{Equ39} \inty{A(y) e_i \cdot ( y_j \nabla _y v + v e_j )} = 0. \end{equation} Applying \eqref{Equ38} with $v$ and $u_{ij} = y_i y_j \varphi (y)$ one gets \begin{equation} \label{Equ40} \inty{A(y) ( y_j e_i + y_i e _j ) \cdot \nabla _y v } = 0. \end{equation} Combining \eqref{Equ39}, \eqref{Equ40} we obtain for any $i, j \in \{1,...,m\}$ \[ 2 \inty{(A(y)e_i \cdot e_j) \;v(y)} = \inty{( A(y) e_i \cdot e_j + A(y) e_j \cdot e_i)v(y)} = 0 \] saying that $A(y) = 0$ for a.a. $y \in \R^m$. \end{proof}
\end{document}
\begin{document}
\title{$MSS_{18}$ is Digitally 18-contractible} \author{Laurence Boxer \thanks{
Department of Computer and Information Sciences,
Niagara University,
Niagara University, NY 14109, USA;
and Department of Computer Science and Engineering,
State University of New York at Buffalo.
email: boxer@niagara.edu } }
\date{ } \maketitle{}
\begin{abstract} The paper~\cite{Han06} incorrectly asserts that the digital image $MSS_{18}$, a digital model of the Euclidean 2-sphere $S^2$, is not 18-contractible. We show this assertion is false.
Key words and phrases: digital topology, digital image, contractible, fundamental group \end{abstract}
\section{Introduction} In digital topology, we often find that properties of a digital image are analogous to topological properties of an object in Euclidean space modeled by the digital image. For example, a digital image that models a contractible object may have the property of digital contractibility, and a digital image that models a non-contractible object may have the property of digital non-contractibility.
$MSS_{18}$ is the name often used for a certain digital image that models the Euclidean 2-sphere $S^2$. S.E. Han has claimed (Theorem~4.3 of~\cite{Han06}) that $MSS_{18}$ is not 18-contractible. We show this assertion is false.
\section{Preliminaries} Much of this section is quoted or paraphrased from~\cite{BxSt16}.
We use ${\mathbb Z}$ to indicate the set of integers.
\subsection{Adjacencies} A digital image is a graph $(X,\kappa)$, where $X$ is a subset of ${\mathbb Z}^n$ for some positive integer~$n$, and $\kappa$ is an adjacency relation for the points of~$X$. The $c_u$-adjacencies are commonly used. Let $x,y \in {\mathbb Z}^n$, $x \neq y$, where we consider these points as $n$-tuples of integers: \[ x=(x_1,\ldots, x_n),~~~y=(y_1,\ldots,y_n). \] Let $u \in {\mathbb Z}$, $1 \leq u \leq n$. We say $x$ and $y$ are {\em $c_u$-adjacent} if \begin{itemize} \item There are at most $u$ indices $i$ for which
$|x_i - y_i| = 1$.
\item For all indices $j$ such that $|x_j - y_j| \neq 1$ we
have $x_j=y_j$. \end{itemize} Often, a $c_u$-adjacency is denoted by the number of points adjacent to a given point in ${\mathbb Z}^n$ using this adjacency. E.g., \begin{itemize} \item In ${\mathbb Z}^1$, $c_1$-adjacency is 2-adjacency. \item In ${\mathbb Z}^2$, $c_1$-adjacency is 4-adjacency and
$c_2$-adjacency is 8-adjacency. \item In ${\mathbb Z}^3$, $c_1$-adjacency is 6-adjacency,
$c_2$-adjacency is 18-adjacency, and $c_3$-adjacency
is 26-adjacency.
\end{itemize}
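The definition above translates directly into code. The following sketch (our own illustration, not from the cited literature) implements $c_u$-adjacency and confirms the neighbor counts that give these adjacencies their usual names.

```python
# A direct implementation of c_u-adjacency, confirming the neighbor counts
# 2, 4, 8, 6, 18, 26 that name the adjacencies in Z^1, Z^2, Z^3.
from itertools import product

def cu_adjacent(x, y, u):
    """True iff x != y, at most u coordinates differ by 1, and the rest are equal."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if x == y or any(d > 1 for d in diffs):
        return False
    return sum(d == 1 for d in diffs) <= u

def neighbor_count(n, u):
    """Number of points of Z^n that are c_u-adjacent to the origin."""
    origin = (0,) * n
    return sum(cu_adjacent(origin, q, u)
               for q in product((-1, 0, 1), repeat=n) if q != origin)
```

For instance, `neighbor_count(3, 2)` counts the points of $\{-1,0,1\}^3$ other than the origin with at most two nonzero coordinates, which excludes the $8$ corner points and leaves $18$.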
We write $x \leftrightarrow_{\kappa} x'$, or $x \leftrightarrow x'$ when $\kappa$ is understood, to indicate that $x$ and $x'$ are $\kappa$-adjacent. Similarly, we write $x \leftrightarroweq_{\kappa} x'$, or $x \leftrightarroweq x'$ when $\kappa$ is understood, to indicate that $x$ and $x'$ are $\kappa$-adjacent or equal.
A subset $Y$ of a digital image $(X,\kappa)$ is {\em $\kappa$-connected}~\cite{Rosenfeld}, or {\em connected} when $\kappa$ is understood, if for every pair of points $a,b \in Y$ there exists a sequence $\{y_i\}_{i=0}^m \subset Y$ such that $a=y_0$, $b=y_m$, and $y_i \leftrightarrow_{\kappa} y_{i+1}$ for $0 \leq i < m$.
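$\kappa$-connectedness can be tested mechanically by breadth-first search. The sketch below is our own illustration; the ten-point set is the digital 2-sphere $MSS_{18}$ studied later in this paper, which turns out to be 18- and 26-connected but not 6-connected.

```python
# Testing kappa-connectedness of a finite digital image by breadth-first search,
# for the c_u-adjacencies on subsets of Z^3 (u = 1, 2, 3 give 6-, 18-, 26-adjacency).
from collections import deque

def cu_adjacent(x, y, u):
    diffs = [abs(a - b) for a, b in zip(x, y)]
    return x != y and all(d <= 1 for d in diffs) and sum(d == 1 for d in diffs) <= u

def is_connected(X, u):
    X = list(X)
    seen, queue = {X[0]}, deque([X[0]])
    while queue:
        x = queue.popleft()
        for y in X:
            if y not in seen and cu_adjacent(x, y, u):
                seen.add(y)
                queue.append(y)
    return len(seen) == len(X)

MSS18 = [(0, 0, 0), (1, 1, 0), (1, 2, 0), (0, 3, 0), (-1, 2, 0),
         (-1, 1, 0), (0, 1, -1), (0, 2, -1), (0, 2, 1), (0, 1, 1)]
```

Under 6-adjacency the point $(0,0,0)$ has no neighbor in the set, so the image fails to be 6-connected.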
\subsection{Digitally continuous functions} The following generalizes a definition of~\cite{Rosenfeld}.
\begin{definition} \label{continuous} {\rm ~\cite{Boxer99}} Let $(X,\kappa)$ and $(Y,\lambda)$ be digital images. A single-valued function $f: X \rightarrow Y$ is $(\kappa,\lambda)$-continuous if for every $\kappa$-connected $A \subset X$ we have that $f(A)$ is a $\lambda$-connected subset of $Y$. $\Box$ \end{definition}
When the adjacency relations are understood, we will simply say that $f$ is \emph{continuous}. Continuity can be expressed in terms of adjacency of points: \begin{thm} {\rm ~\cite{Rosenfeld,Boxer99}} A function $f:X\to Y$ is continuous if and only if $x \leftrightarrow x'$ in $X$ implies $f(x) \leftrightarroweq f(x')$. \qed \end{thm}
See also~\cite{Chen94,Chen04}, where similar notions are referred to as {\em immersions}, {\em gradually varied operators}, and {\em gradually varied mappings}.
A homotopy between continuous functions may be thought of as a continuous deformation of one of the functions into the other over a finite time period.
\begin{definition}{\rm (\cite{Boxer99}; see also \cite{Khalimsky})} \label{htpy-2nd-def} Let $X$ and $Y$ be digital images. Let $f,g: X \rightarrow Y$ be $(\kappa,\kappa')$-continuous functions. Suppose there is a positive integer $m$ and a function $F: X \times [0,m]_{{{\mathbb Z}}} \rightarrow Y$ such that
\begin{itemize} \item for all $x \in X$, $F(x,0) = f(x)$ and $F(x,m) = g(x)$; \item for all $x \in X$, the induced function
$F_x: [0,m]_{{{\mathbb Z}}} \rightarrow Y$ defined by
\[ F_x(t) ~=~ F(x,t) \mbox{ for all } t \in [0,m]_{{{\mathbb Z}}} \]
is $(2,\kappa')-$continuous. That is, $F_x(t)$ is a path in $Y$. \item for all $t \in [0,m]_{{{\mathbb Z}}}$, the induced function
$F_t: X \rightarrow Y$ defined by
\[ F_t(x) ~=~ F(x,t) \mbox{ for all } x \in X \]
is $(\kappa,\kappa')-$continuous. \end{itemize} Then $F$ is a {\rm digital $(\kappa,\kappa')-$homotopy between} $f$ and $g$, and $f$ and $g$ are {\rm digitally $(\kappa,\kappa')-$homotopic in} $Y$.
$\Box$ \end{definition}
If there is a $(\kappa,\kappa)$-homotopy $F: X \times [0,m]_{{\mathbb Z}} \to X$ between the identity function $1_X$ and a constant function, we say $F$ is a (digital) {\em $\kappa$-contraction} and $X$ is {\em $\kappa$-contractible}.
\section{Contractibility of $MSS_{18}$} $MSS_{18}$~\cite{Han06} is a ``small" digital model of the Euclidean 2-sphere~$S^2$, appearing rather like an American football. As shown in Figure~\ref{MSS18fig}, we can take $MSS_{18} = \{p_i\}_{i=0}^9$, where \[ p_0= (0,0,0),~p_1=(1,1,0),~p_2=(1,2,0),~p_3=(0,3,0),~p_4=(-1,2,0), \] \[ ~p_5=(-1,1,0),~p_6=(0,1,-1),~p_7=(0,2,-1),~p_8=(0,2,1),~p_9=(0,1,1). \]
\begin{figure}
\caption{$MSS_{18}$, a digital model of a 2-sphere (from Figure~2 of~\cite{BxSt16})}
\label{MSS18fig}
\end{figure}
Contrary to the claim of Theorem~4.3 of~\cite{Han06}, we have the following Theorem~\ref{MSS18contracts}. Its proof makes use of the contractibility of a 4-point digital simple closed curve~\cite{Boxer99}. Notice that $MSS_{18}$ contains the 4-point 18- and 26-simple closed curves \[ S=\{(x,1,z) \in MSS_{18}\}=\{p_1,p_6,p_5,p_9\} \mbox{ and }\] \[ S'=\{(x,2,z) \in MSS_{18}\} = \{p_2,p_7,p_4,p_8\}. \] Roughly, our contraction of $MSS_{18}$ begins by continuously deforming $MSS_{18}$ into a connected subset of $S \cup S'$, after which a contraction is completed.
\begin{thm} \label{MSS18contracts} $MSS_{18}$ is 18-contractible and 26-contractible. \end{thm}
\begin{proof} We define a contraction $H: MSS_{18} \times [0,3]_{{\mathbb Z}} \to MSS_{18}$ as follows. \begin{itemize}
\item For the step at time $t=0$, we let $H(p_i,0) = p_i$ for all
$i \in \{0,1,\ldots,9\}$.
\item For the step $t=1$, we let
\[ H(p_i,1) = \left \{ \begin{array}{ll}
p_1 & \mbox{if } i \in \{0,1,9\}; \\
p_6 & \mbox{if } i \in \{5,6\}; \\
p_2 & \mbox{if } i \in \{2,3,8\}; \\
p_7 & \mbox{if } i \in \{4,7\}. \\
\end{array} \right .
\]
Thus, during this step, $H$ begins contracting $S$, deforming $S$ to
$\{p_1,p_6\}$; and also begins
contracting $S'$, deforming $S'$ to $\{p_2,p_7\}$; as well as bringing $p_0$ to $p_1$ and $p_3$ to $p_2$.
\item For the step $t=2$, let
\[ H(p_i,2) = \left \{ \begin{array}{ll}
p_6 & \mbox{if } H(p_i,1) \in \{p_1,p_6\}; \\
p_7 & \mbox{if } H(p_i,1) \in \{p_2,p_7\}.
\end{array} \right .
\]
This step completes the contraction of $S$ to the point $p_6$; it also
completes the contraction of $S'$ to the point $p_7$.
\item For the step $t=3$, let $H(p_i,3)=p_6$ for all indices~$i$. \end{itemize} It is elementary to verify that $H$ is an 18-homotopy and a 26-homotopy between the identity on $MSS_{18}$ and a constant map. \end{proof}
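The elementary verification at the end of the proof can also be carried out by machine. The sketch below (our own check, not part of the original argument) encodes $H$ and confirms that each stage $H(\cdot,t)$ is continuous and each track $t \mapsto H(p_i,t)$ moves through adjacent-or-equal points, for both 18- and 26-adjacency.

```python
# Machine verification that H is both an 18- and a 26-homotopy between the
# identity on MSS_18 and a constant map.
p = [(0, 0, 0), (1, 1, 0), (1, 2, 0), (0, 3, 0), (-1, 2, 0),
     (-1, 1, 0), (0, 1, -1), (0, 2, -1), (0, 2, 1), (0, 1, 1)]

def adj_or_eq(x, y, u):
    """c_u-adjacent or equal; u = 2 gives 18-adjacency, u = 3 gives 26-adjacency."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    return all(d <= 1 for d in diffs) and sum(d == 1 for d in diffs) <= u

# image index at t = 1, per the case distinction in the proof
stage1 = {0: 1, 1: 1, 9: 1, 5: 6, 6: 6, 2: 2, 3: 2, 8: 2, 4: 7, 7: 7}

def H(i, t):
    if t == 0:
        return p[i]
    if t == 1:
        return p[stage1[i]]
    if t == 2:
        return p[6] if stage1[i] in (1, 6) else p[7]
    return p[6]                                   # t == 3

def is_homotopy(u):
    # every stage maps u-adjacent points to u-adjacent-or-equal points ...
    stages_ok = all(adj_or_eq(H(i, t), H(j, t), u)
                    for t in range(4)
                    for i in range(10) for j in range(10)
                    if adj_or_eq(p[i], p[j], u))
    # ... and every track is a path: consecutive images adjacent or equal
    tracks_ok = all(adj_or_eq(H(i, t), H(i, t + 1), u)
                    for i in range(10) for t in range(3))
    return stages_ok and tracks_ok
```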
Theorem~\ref{MSS18contracts} adds to our knowledge~\cite{Boxer99,Boxer06} of ``small" digital spheres that are digitally contractible. It seems likely that ``large" digital spheres are not digitally contractible although, other than for digital 1-spheres, i.e., simple closed curves~\cite{Boxer10}, the literature at the time of this writing lacks results to support this conjecture.
Note also that since a contractible digital image has trivial fundamental group (\cite{Boxer05} - proof corrected in \cite{BxSt18}), the following assertion, originally appearing as Propositions~3.3 and~3.5 of~\cite{BxSt16}, is an immediate consequence of Theorem~\ref{MSS18contracts}.
\begin{cor} Let $x \in MSS_{18}$. Then the fundamental groups $\Pi_1^{18}(MSS_{18},x)$ and $\Pi_1^{26}(MSS_{18},x)$ of $(MSS_{18},x)$ with respect to 18- and 26-adjacency, respectively, are trivial. $\qed$ \end{cor}
\section{Further remarks} We have corrected an error of~\cite{Han06} by showing that $MSS_{18}$ is contractible with respect to 18-adjacency.
\end{document}
\begin{document}
\title{Semiclassical bounds for spectra of biharmonic operators} \author{Davide Buoso} \author{Luigi Provenzano} \author{Joachim Stubbe}
\address{Davide Buoso, Dipartimento di Scienze e Innovazione Tecnologica (DiSIT), Università degli Studi del Piemonte Orientale ``A. Avogadro'', Viale Teresa Michel 11, 15121 Alessandria (ITALY). E-mail: {\tt davide.buoso@uniupo.it}} \address{Luigi Provenzano, Dipartimento di Scienze di Base e Applicate per l'Ingegneria, Sapienza Universit\`a di Roma, Via Antonio Scarpa 16, 00161 Roma, Italy. E-mail: {\tt luigi.provenzano@uniroma1.it} } \address{Joachim Stubbe, EPFL, SB MATH SCI-SB-JS, Station 8, CH-1015 Lausanne, Switzerland. E-mail: {\tt joachim.stubbe@epfl.ch}}
\begin{abstract} We provide complementary semiclassical bounds for the Riesz means $R_1(z)$ of the eigenvalues of various biharmonic operators, with a second term in the expected power of $z$. The method we discuss makes use of the averaged variational principle (AVP), and yields two-sided bounds for individual eigenvalues, which are semiclassically sharp. The AVP also yields comparisons with Riesz means of different operators, in particular Laplacians.
\noindent {\it Key words:} Biharmonic operator, Riesz means, eigenvalue asymptotics, semiclassical bounds for eigenvalues, averaged variational principle.
\noindent {\bf 2020 Mathematics Subject Classification:} 35P15, 35P20, 47A75, 35J30, 34L15.
\end{abstract} \maketitle \setcounter{page}{1} \tableofcontents
\section{Introduction}
Let $\Omega\subset\mathbb{R}^d$ be a bounded domain with boundary $\partial\Omega$. We consider the eigenvalue problem for the biharmonic operator with various boundary conditions: \begin{equation}\label{Biharmonic-ev-problem} \left\{\begin{array}{ll} \Delta^2 u= \omega u, & \text{on }\Omega,\\ A_1(u)=A_2(u)= 0, & \text{ on }\partial\Omega. \end{array}\right. \end{equation} The biharmonic operator $\Delta^2=\Delta\Delta$ is the first iteration of the Laplace operator $-\Delta$, and $A_1(u),A_2(u)$ represent two linear operators which we shall specify for each problem. These operators are generated from self-adjoint representations of various quadratic forms defined on a suitable dense closed subspace of the Sobolev space $H^2(\Omega)$, see Section 2.
The interest in studying problem \eqref{Biharmonic-ev-problem} is motivated by several applications, such as the modelling of vibrations of a thin elastic plate subject to different constraints, the static loading of a slender beam, and models for suspension bridges. We refer the reader to \cite{ciardest, ciarlet, gander, gazponti, landau, love, sweers2009} for more details on the applications related to problem \eqref{Biharmonic-ev-problem}.
We always suppose that the spectrum of \eqref{Biharmonic-ev-problem} consists of an ordered sequence of eigenvalues $\omega_j$ tending to infinity, \begin{equation*}
0\leq\omega_1\leq\omega_2\leq\omega_3\leq \cdots. \end{equation*} This assumption holds, for example, when $\Omega$ has finite Lebesgue measure and the boundary conditions in \eqref{Biharmonic-ev-problem} are given by the so-called Dirichlet boundary conditions $$
A_1(u)=u,\ \ A_2(u)=|\nabla u|, $$ (where $\nabla$ denotes the gradient operator) emerging from the study of the oscillations of a clamped plate. For other boundary conditions and precise definitions we refer to Section 2.
An important issue in the spectral theory of partial differential operators is the asymptotic expansion of the eigenvalues $\omega_j$ as $j\to\infty$, together with eigenvalue bounds matching this expansion, called semiclassical estimates. These are the main subject of the present paper for the eigenvalue problem \eqref{Biharmonic-ev-problem}. To this end, it is convenient to consider the counting function \begin{equation*}
N(z)={\rm Card}\{\omega_j: \omega_j<z,\ \omega_j {\rm \ is\ an\ eigenvalue}\}, \end{equation*} and, in a tradition due to Berezin \cite{Ber}, the Riesz means \begin{equation*} R_\sigma(z) =\sum_{j}(z-\omega_{j})_{+}^\sigma, \end{equation*} with $\sigma>0$ (here $x_{+}$ denotes the positive part of $x$); $N(z)$ can be interpreted as the limit of $R_\sigma(z)$ as $\sigma \to 0$. The Riesz means $R_\sigma(z)$ are related to $N(z)$ via the integral transform \begin{equation} \label{inttransf} R_\sigma(z)=\sigma \int_0^\infty(z-t)_+^{\sigma-1}N(t)dt, \end{equation} and in particular the behavior of $\omega_j$ as $j\to\infty$ is given by the asymptotic expansion of the counting function $N(z)$ as $z\to\infty$. There is a large literature dealing with the asymptotic expansion of the counting function and of other spectral quantities; we refer to the books by Ivrii \cite{monsterbook} and by Safarov and Vassiliev \cite{safvas}, which present the state of the art as well as the key references.
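As a quick illustration (not part of the original argument), the $\sigma=1$ case of \eqref{inttransf}, namely $R_1(z)=\int_0^z N(t)\,dt$, can be checked numerically on a model spectrum. The sketch below, assuming NumPy and the hypothetical eigenvalues $\omega_j=j^2$ chosen only for illustration, compares $\sum_j(z-\omega_j)_+$ with a quadrature of the counting function.

```python
import numpy as np

# Model spectrum, for illustration only: omega_j = j^2, j = 1, ..., 50.
omega = np.arange(1, 51, dtype=float) ** 2

def counting(t):
    """N(t) = Card{ omega_j : omega_j < t }; omega is sorted, so bisection works."""
    return np.searchsorted(omega, t, side='left')

def riesz1(z):
    """R_1(z) = sum_j (z - omega_j)_+."""
    return float(np.sum(np.maximum(z - omega, 0.0)))

# Midpoint quadrature of int_0^z N(t) dt should reproduce R_1(z).
z = 400.0
n_grid = 200000
dt = z / n_grid
mid = (np.arange(n_grid) + 0.5) * dt
integral = float(np.sum(counting(mid)) * dt)
assert abs(riesz1(z) - integral) < 0.1
```

The agreement is exact up to the quadrature error, since each eigenvalue below $z$ contributes $(z-\omega_j)_+$ to the integral of the step function $N$.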
The leading term in the asymptotic expansion is known as the Weyl limit, going back to the fundamental work of H.\ Weyl \cite{weyl} on the asymptotic behavior of Dirichlet Laplacian eigenvalues $$ \left\{\begin{array}{ll} -\Delta u= \omega u, & \text{on }\Omega,\\ u=0, & \text{ on }\partial\Omega. \end{array}\right. $$
It is now known that the Weyl limit depends on the principal symbol of the partial differential operator, which is obtained via the Fourier transform and equals $|p|^2$ for the Laplace operator and $|p|^4$ for the biharmonic operator.
We may summarize the Weyl law for an operator with principal symbol $|p|^{2m}$ as \begin{equation} \label{weyllaw}
\lim_{z\to\infty}\frac{N(z)}{z^{\frac d {2m}}}=(2\pi)^{-d}\int_\Omega\int_{\mathbb R^d}(1-|p|^{2m})_+^0dpdx=(2\pi)^{-d}B_d|\Omega|, \end{equation} where the right hand side corresponds to the normalized phase space volume of the operator. Here $m=1,2$, but \eqref{weyllaw} remains true for higher iterations of the Laplacian on a bounded domain $\Omega$ under suitable boundary conditions (see e.g., \cite{safvas}). Here $B_d=\frac{\pi^{d/2}}{\Gamma(1+d/2)}$ is the volume of the $d$-dimensional unit ball. The equivalent statement for the eigenvalues $\omega_j$ is \begin{equation} \label{weyllaweig}
\lim_{j\to\infty}\frac{\omega_j}{j^{\frac{2m}{d}}}=C_d^m|\Omega|^{-\frac{2m}{d}}, \end{equation} where $C_d=(2\pi)^2B_d^{-\frac 2 d}$ is the so-called classical constant.
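The Weyl law \eqref{weyllaw} can be tested numerically in a case where the spectrum is explicit. The following sketch (illustrative only, assuming NumPy) takes the Dirichlet Laplacian on the square $(0,\pi)^2$, whose eigenvalues are exactly $j^2+k^2$ with $j,k\ge1$, and compares $N(z)/z^{d/(2m)}$ with $(2\pi)^{-d}B_d|\Omega|$ for $d=2$, $m=1$.

```python
import numpy as np
from math import pi, gamma

def weyl_coefficient(d, m, vol):
    """(2*pi)^{-d} * B_d * |Omega|: the leading coefficient in the Weyl law."""
    B_d = pi ** (d / 2) / gamma(1 + d / 2)
    return (2 * pi) ** (-d) * B_d * vol

# Dirichlet Laplacian on the square (0, pi)^2: eigenvalues are j^2 + k^2.
z = 2.0e4
jk = np.arange(1, int(z ** 0.5) + 2)
N = int(np.sum(jk[:, None] ** 2 + jk[None, :] ** 2 < z))
ratio = N / z ** (2 / (2 * 1))               # N(z) / z^{d/(2m)}, d = 2, m = 1
target = weyl_coefficient(2, 1, pi ** 2)     # equals pi/4 for this square
assert abs(ratio - target) / target < 0.05
```

The residual discrepancy of order $z^{-1/2}$ is exactly the boundary contribution captured by the second term of the expansion discussed below.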
The Weyl law \eqref{weyllaw} or \eqref{weyllaweig} is of striking simplicity. The limit depends only on the volume of the domain and on a universal dimensional constant, and it is independent of the boundary conditions. In particular, we infer from the Weyl law that, at least asymptotically, the eigenvalues of the biharmonic problem \eqref{Biharmonic-ev-problem} behave like the squares of Laplacian eigenvalues.
One may then ask whether the counting function is bounded by its Weyl law, that is whether it is possible to establish sharp semiclassical bounds of the type $$
N(z)z^{-\frac d {2m}}\le (2\pi)^{-d}B_d|\Omega|,\quad{\rm or}\quad N(z)z^{-\frac d {2m}}\ge (2\pi)^{-d}B_d|\Omega|, $$ for all $z\ge 0$. Even in the simpler case of Laplacian eigenvalues ($m=1$) this is, apart from special domains, still an open problem, known as P\'olya's conjecture: the first inequality is conjectured to hold for Dirichlet boundary conditions. However, for Riesz means $R_1(z)=\int_0^zN(t)dt$, sharp bounds have been obtained both for Laplace and biharmonic operators, since there are plenty of variational techniques which can be applied, see e.g., Berezin \cite{Ber}, Li-Yau \cite{LiYau}, Kr\"oger \cite{Kro} and Laptev \cite{Lap1997, Lap2012}.
Combining the Weyl law \eqref{weyllaw} for $N(z)$ and the integral relation \eqref{inttransf}, one obtains \begin{equation*}
\lim_{z\to\infty}\frac{R_1(z)}{z^{1+\frac d {2m}}}=(2\pi)^{-d}\int_\Omega\int_{\mathbb R^d}(1-|p|^{2m})_+dpdx=\frac{2m}{2m+d}(2\pi)^{-d}B_d|\Omega|, \end{equation*} and the corresponding sharp semiclassical bounds are of the form $$
R_1(z)\le \frac{2m}{2m+d}(2\pi)^{-d}B_d|\Omega|z^{1+\frac d {2m}},\quad{\rm or} \quad R_1(z)\ge \frac{2m}{2m+d}(2\pi)^{-d}B_d|\Omega|z^{1+\frac d {2m}}. $$ When $m=1$ (Dirichlet Laplacian eigenvalues) the first inequality is the celebrated Berezin-Li-Yau bound and when $m=2$ it has been shown for biharmonic Dirichlet eigenvalues by Levine and Protter \cite{LePr}.
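On the interval $\Omega=(0,\pi)$ with $m=1$, where $\lambda_j=j^2$, the first of these bounds can be verified directly; the sketch below (illustrative only, assuming NumPy) checks the Berezin--Li--Yau inequality at several values of $z$.

```python
import numpy as np
from math import pi, gamma

def semiclassical_bound(z, d, m, vol):
    """(2m/(2m+d)) * (2*pi)^{-d} * B_d * |Omega| * z^{1 + d/(2m)}."""
    B_d = pi ** (d / 2) / gamma(1 + d / 2)
    return (2 * m / (2 * m + d)) * (2 * pi) ** (-d) * B_d * vol * z ** (1 + d / (2 * m))

# Dirichlet Laplacian on (0, pi): lambda_j = j^2, so R_1(z) = sum_j (z - j^2)_+.
lam = np.arange(1, 1001, dtype=float) ** 2
for z in (10.0, 100.0, 1000.0, 10000.0):
    R1 = float(np.sum(np.maximum(z - lam, 0.0)))
    assert R1 <= semiclassical_bound(z, 1, 1, pi)   # Berezin-Li-Yau upper bound
```

For $d=1$, $m=1$, and $|\Omega|=\pi$ the bound reduces to $\frac{2}{3}z^{3/2}$, so at $z=100$ one compares $R_1(100)=615$ with $2000/3\approx666.7$.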
The effect of boundary conditions on the spectrum is already seen in the second term of the asymptotic expansion, i.e., the one following the Weyl term. As shown in \cite{monsterbook,safvas}, at least for smooth domains $\Omega\subset\mathbb R^d$ there is a two-term asymptotic expansion of the form \begin{equation} \label{twoterms}
N(z)=(2\pi)^{-d}B_d|\Omega|z^{\frac d {2m}}+a_{d,m}|\partial\Omega|z^{\frac{d-1}{2m}}+o\left(z^{\frac{d-1}{2m}}\right), \end{equation} where $a_{d,m}$ is a real constant depending on the dimension $d$, the order $m$ of the differential operator (where as before $m=1$ corresponds to Laplacian eigenvalues and $m=2$ to biharmonic eigenvalues), and on the boundary conditions. Applying these techniques we compute \eqref{twoterms} for various boundary conditions of the biharmonic eigenvalue problem \eqref{Biharmonic-ev-problem}.
We remark that all the classical strategies to get two-term expansions for eigenvalues of elliptic operators, as shown in \cite{monsterbook, safvas}, involve the extensive use of microlocal analysis, which requires a number of regularity conditions on the domain $\Omega$ that are not yet well understood in simple geometrical terms. However, recently Frank and Larson \cite{franklarson} (see also \cite{frankgei1, frankgei2}) have proved a two-term expansion for Riesz means of Laplacian eigenvalues without using microlocal analysis and under low regularity assumptions on $\Omega$.
The asymptotic expansion \eqref{twoterms} suggests looking for bounds of $N(z)$ (or $R_1(z)$) in terms of \eqref{twoterms}. In this work we show that in some circumstances the Levine-Protter bound can be reversed, so that there is a kind of ``Kr\"oger'' lower bound for Dirichlet Bilaplacian Riesz means (i.e., an upper bound for eigenvalue averages). Reversing the inequalities requires lower-order correction terms, which, as will be seen, include information about the boundary of the domain. As in \cite{HaSt16}, an essential tool will be the averaged variational principle first introduced in \cite{HaSt14} (see also \cite{EHIS}), which gives an efficient derivation of Kr\"oger's inequality and has been used to derive various other upper bounds for averages of eigenvalues.
Moreover, since the techniques for the two-term asymptotic expansion do not apply to the eigenvalue problem \eqref{Biharmonic-ev-problem} on an interval (that is, $d=1$), we study the one-dimensional problems separately and exhibit a remarkable common similarity of the different spectra, see Section \ref{biharmonic1d}. In particular, two-term asymptotics for the one-dimensional problems are a consequence of the asymptotically sharp upper and lower bounds of the corresponding Riesz means which we provide in Section \ref{biharmonic1d}.
The paper is organized as follows. In Section 2 we introduce the biharmonic eigenvalue problems we study and we present the main results of the paper. In Section 3 we prove some inequalities between biharmonic eigenvalues and then compute the respective semiclassical asymptotic expansions. Section 4 is dedicated to the semiclassical estimates for Dirichlet Bilaplacian eigenvalues, while Navier and Kuttler-Sigillito eigenvalues are treated in Section 5. Section 6 contains a few remarks on Neumann Bilaplacian eigenvalues. Finally, Section 7 is devoted to the one dimensional biharmonic eigenvalue problems.
\section{Biharmonic eigenvalue problems and main results}\label{sec:2}
In this section we introduce the eigenvalue problems of the form \eqref{Biharmonic-ev-problem} that we will study in the sequel and present the main results of the paper. Unless differently specified, we assume $\Omega\subset\mathbb R^d$ to be a bounded domain with Lipschitz boundary $\partial\Omega$.
In the following we will denote by $1_A$ the characteristic function of $A\subseteq\mathbb R^d$. For a function $f\in L^1(\mathbb R^d)$ we will denote by $\hat f(\xi)$ its Fourier transform defined by $\hat f(\xi)=(2\pi)^{-d/2}\int_{\mathbb R^d}f(x)e^{i\xi\cdot x}dx$, and with abuse of notation, for a function $f\in H^2_0(\Omega)$ we will still denote by $\hat f(\xi)$ the Fourier transform of its extension by zero to $\mathbb R^d$. We will also denote by $B(x,R)$ the $d$-dimensional ball (in $\mathbb R^d$) of radius $R$ centered at the point $x$.
The Dirichlet Laplacian eigenvalue problem is $$ \left\{\begin{array}{ll} -\Delta u_j=\lambda_j u_j, & \text{in\ }\Omega,\\ u_j=0, & \text{on\ }\partial\Omega, \end{array}\right. $$ and the eigenvalues are variationally characterized by $$
\lambda_j=\min_{\substack{V\subset H^1_0(\Omega) \\ {\rm dim\ }V=j}}\max_{u\in V\setminus \{0\}}\frac{\int_\Omega |\nabla u|^2}{\int_\Omega u^2}. $$ The Neumann Laplacian eigenvalue problem is $$ \left\{\begin{array}{ll} -\Delta v_j=\mu_j v_j, & \text{in\ }\Omega,\\ \frac{\partial v_j}{\partial\nu}=0, & \text{on\ }\partial\Omega, \end{array}\right. $$ and the eigenvalues are variationally characterized by $$
\mu_j=\min_{\substack{V\subset H^1(\Omega) \\ {\rm dim\ }V=j}}\max_{u\in V\setminus \{0\}}\frac{\int_\Omega |\nabla u|^2}{\int_\Omega u^2}. $$
The biharmonic eigenvalue equation we will consider is \begin{equation} \label{equazione} \Delta^2u=\omega u,\ \ \text{in\ }\Omega, \end{equation} complemented with four different sets of boundary conditions: \begin{itemize} \item Dirichlet boundary conditions: \begin{equation} \label{DBC} u=\frac{\partial u}{\partial \nu}=0; \end{equation} \item Navier boundary conditions: \begin{equation} \label{IBC} u=(1- a )\frac{\partial^2 u}{\partial \nu^2}+ a \Delta u=0; \end{equation} \item Kuttler-Sigillito boundary conditions: \begin{equation} \label{KBC} \frac{\partial u}{\partial \nu}=\frac{\partial\Delta u}{\partial \nu}+(1- a ){\rm div}_{\partial\Omega}\left(\frac{\partial\ }{\partial \nu}\nabla_{\partial\Omega}u\right)=0; \end{equation} \item Neumann boundary conditions: \begin{equation} \label{NBC} (1- a )\frac{\partial^2 u}{\partial \nu^2}+ a \Delta u=\frac{\partial\Delta u}{\partial \nu}+(1- a ){\rm div}_{\partial\Omega}\left(\frac{\partial\ }{\partial \nu}\nabla_{\partial\Omega}u\right)=0. \end{equation} \end{itemize}
Here $\nu$ is the outer unit normal vector defined on $\partial \Omega$, ${\rm div}_{\partial\Omega}$ and $\nabla_{\partial\Omega}$ are the tangential divergence and the tangential gradient on $\partial \Omega$, respectively, and $ a $ is the Poisson ratio, $ a \in(-(d-1)^{-1},1)$. Note that the quadratic form associated with all these problems is \begin{equation} \label{qf} Q(u,v)=\int_\Omega(1- a )D^2u:D^2v+ a \Delta u\Delta v, \end{equation} but this form is set in $H^2_0(\Omega)$ for Dirichlet boundary conditions, in $H^2(\Omega)\cap H^1_0(\Omega)$ for Navier boundary conditions, in $H^2_\nu(\Omega)=\{u\in H^2(\Omega): \frac{\partial u}{\partial\nu}=0 {\rm \ on\ }\partial\Omega\}$ for Kuttler-Sigillito boundary conditions, and in $H^2(\Omega)$ for Neumann boundary conditions. In particular, the Dirichlet problem does not see the Poisson ratio, as $$ \int_\Omega D^2u:D^2v=\int_\Omega\Delta u\Delta v $$ for any $u,v\in H^2_0(\Omega)$. Here and in the sequel, the Frobenius product is defined as $$ D^2u:D^2v=\sum_{\alpha,\beta=1}^d\frac{\partial^2u}{\partial x_\alpha\partial x_\beta}\frac{\partial^2v}{\partial x_\alpha\partial x_\beta}. $$ Furthermore, we will denote by $U_j,\Lambda_j$ the eigenfunctions and the eigenvalues of the Dirichlet problem \eqref{equazione}, \eqref{DBC}. Similarly, we will use $\tilde U_j,\tilde \Lambda_j(a)$ for the Navier problem \eqref{equazione}, \eqref{IBC}, $\tilde V_j,\tilde M_j(a)$ for the Kuttler-Sigillito problem \eqref{equazione}, \eqref{KBC}, and $V_j,M_j(a)$ for the Neumann problem \eqref{equazione}, \eqref{NBC}. We will not write explicitly the dependence on the Poisson ratio $a$ for eigenfunctions, but we will for eigenvalues (with the exception of Dirichlet eigenvalues, which do not depend on $a$). When we consider these problems in general, without specifying the boundary conditions, we will write $u,\omega$ for a generic eigenfunction with its associated eigenvalue.
Note that the eigenvalues can be characterized via the minimax formulation as $$ \Lambda_j=\min_{\substack{V\subset H^2_0(\Omega) \\ {\rm dim\ }V=j}}\max_{u\in V\setminus \{0\}}\frac{\int_\Omega (\Delta u)^2}{\int_\Omega u^2}, $$
$$
\tilde \Lambda_j(a)=\min_{\substack{V\subset H^2(\Omega)\cap H^1_0(\Omega) \\ {\rm dim\ }V=j}}\max_{u\in V\setminus \{0\}}\frac{\int_\Omega(1- a )|D^2u|^2+ a (\Delta u)^2}{\int_\Omega u^2}, $$
$$
\tilde M_j(a)=\min_{\substack{V\subset H^2_\nu(\Omega) \\ {\rm dim\ }V=j}}\max_{u\in V\setminus \{0\}}\frac{\int_\Omega(1- a )|D^2u|^2+ a (\Delta u)^2}{\int_\Omega u^2}, $$
and $$
M_j(a)=\min_{\substack{V\subset H^2(\Omega) \\ {\rm dim\ }V=j}}\max_{u\in V\setminus \{0\}}\frac{\int_\Omega(1- a )|D^2u|^2+ a (\Delta u)^2}{\int_\Omega u^2}. $$
It is worth observing that, when $a=1$, the Navier problem \eqref{equazione}, \eqref{IBC} becomes \begin{equation} \label{pure_navier} \left\{\begin{array}{ll} \Delta^2 \tilde U=\tilde\Lambda(1) \tilde U , & \text{in\ }\Omega,\\ \tilde U=\Delta \tilde U=0, & \text{on\ }\partial\Omega. \end{array}\right. \end{equation} If the domain $\Omega$ is only Lipschitz, in principle the quadratic form \eqref{qf} is not coercive in $H^2(\Omega)\cap H^1_0(\Omega)$ and the spectrum of problem \eqref{pure_navier} may not be variationally characterizable. However, the form is coercive as soon as $\Omega$ also satisfies the so-called uniform outer ball condition (see \cite{adolfsson}; see also \cite[Section 2.7]{ggs}). In particular, in this case the domain of the Dirichlet Laplacian is precisely $H^2(\Omega)\cap H^1_0(\Omega)$ and the following identification becomes immediate \begin{equation*} \tilde U_j =u_j,\ \ \tilde\Lambda_j(1)=\lambda_j^2, \end{equation*} for all $j\in\mathbb N$. Note that, in the literature, problem \eqref{pure_navier} is known as the classical Navier problem, whereas problem \eqref{equazione}, \eqref{IBC} is a more recent generalization (see also \cite{buoso16,ggs} and the references therein for a discussion on the physical meaning of the problem). Analogously, when $a=1$, the Kuttler-Sigillito problem \eqref{equazione}, \eqref{KBC} becomes \begin{equation} \label{pure_ks} \left\{\begin{array}{ll} \Delta^2 \tilde V=\tilde M(1) \tilde V , & \text{in\ }\Omega,\\ \frac{\partial \tilde V}{\partial\nu}=\frac{\partial \Delta\tilde V}{\partial\nu}=0, & \text{on\ }\partial\Omega. \end{array}\right. \end{equation} Again, if $\Omega$ is only Lipschitz, problem \eqref{pure_ks} may not be variationally characterizable, but for $\Omega$ smooth enough we recover that the domain of the Neumann Laplacian is precisely $H^2_\nu(\Omega)$ and then \begin{equation*} \tilde V_j =v_j,\ \ \tilde M_j(1)=\mu_j^2, \end{equation*} for all $j\in\mathbb N$. 
We point out that problem \eqref{equazione}, \eqref{KBC} has not been widely studied yet, although it appears as an important problem in a number of different situations (see e.g., \cite{buosokennedy, lamproz1, lamproz2}). We remark, though, that it was first stated as a Steklov-type problem by Kuttler and Sigillito in \cite{kutsig}.
On the other hand, the Neumann problem \eqref{equazione}, \eqref{NBC} with $a=1$ becomes instead \begin{equation*}
\left\{\begin{array}{ll} \Delta^2 V=M(1) V , & \text{in\ }\Omega,\\ \Delta V=\frac{\partial\Delta V}{\partial\nu}=0, & \text{on\ }\partial\Omega, \end{array}\right. \end{equation*} so that the boundary conditions do not satisfy the complementing conditions (see e.g., \cite{ggs}), and in particular it has a kernel consisting of the harmonic functions in $H^2(\Omega)$, which is infinite dimensional when $d\ge2$. It was shown in \cite{provneu} that the remaining part of the spectrum consists of the eigenvalues of the biharmonic Dirichlet problem \eqref{equazione}, \eqref{DBC}.
The main results of the present paper are inequalities related to the eigenvalues of problem \eqref{Biharmonic-ev-problem}, both for the eigenvalues and for Riesz means $R_1(z)$. To this end, we first provide inequalities between the eigenvalues of the different problems (see Theorem \ref{3.1}).
\begin{thmx} \label{thma} The following inequalities hold. \begin{itemize} \item For any $j\in\mathbb N$, and for any $ a \in(-(d-1)^{-1},1)$, \begin{equation*}
M_j(a)\leq\tilde M_j(a),\qquad \tilde\Lambda_j(a)\leq\Lambda_j. \end{equation*} \item For any $j\in\mathbb N$, \begin{equation*}
\lambda_j^2\le\Lambda_j. \end{equation*} \item If in addition $\Omega$ is convex, then for any $j\in\mathbb N$, and for any $ a \in(-(d-1)^{-1},1)$, \begin{equation*}
M_j(a)\le\mu_j^2. \end{equation*} \end{itemize} \end{thmx}
In order to understand when our bounds are sharp with respect to the semiclassical asymptotic expansion, we first compute it for all the boundary conditions (see Theorem \ref{asymptotal}). We remark that, while our assumptions are enough to ensure the validity of the first term in expansions \eqref{semiclassicalcounting} and \eqref{semiclassicalriesz} (see e.g., \cite{lapidus} and the references therein), the derivation of the second term requires additional regularity on the domain $\Omega$. In particular, the domain has to be at least piecewise $C^\infty$ and the so-called {\it nonperiodicity} and {\it nonblocking} conditions have to be satisfied. We refer to \cite[Chapter 1]{safvas} for the description of all the necessary smoothness conditions, in particular to \cite[Definition 1.3.7]{safvas} for the definition of nonperiodicity condition and to \cite[Definition 1.3.22]{safvas} for the definition of nonblocking condition.
\begin{thmx}\label{intro_exp} \label{thmb} Let $\Omega$ be piecewise $C^{\infty}$ satisfying the nonperiodicity and nonblocking conditions, and let $d\ge 2$. We have \begin{equation} \label{semiclassicalcounting}
N(z)=(2\pi)^{-d}B_d|\Omega|z^{\frac d 4}+c_1 z^{\frac{d-1}4}+o(z^{\frac{d-1}4}), \end{equation} where the geometrical constant $c_1$ involving the measure of the boundary is given by \eqref{c1dir}--\eqref{c1neu}. In particular, \begin{equation} \label{semiclassicalriesz}
R_1(z)=\frac{4}{d+4}(2\pi)^{-d}B_d|\Omega|z^{\frac{d+4}{4}}+\frac{4 c_1}{d+3}z^{\frac{d+3}{4}}+o(z^{\frac{d+3}{4}}). \end{equation} \end{thmx}
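The passage from \eqref{semiclassicalcounting} to \eqref{semiclassicalriesz} is a termwise application of $R_1(z)=\int_0^z N(t)\,dt$. The following SymPy sketch (illustrative only, for the case $d=2$, with the symbol $W$ standing for $(2\pi)^{-d}B_d|\Omega|$) confirms the resulting coefficients $\frac{4}{d+4}=\frac{2}{3}$ and $\frac{4}{d+3}=\frac{4}{5}$.

```python
import sympy as sp

z, t, c1, W = sp.symbols('z t c_1 W', positive=True)

# Two-term model of the counting function for d = 2.
N = W * t ** sp.Rational(1, 2) + c1 * t ** sp.Rational(1, 4)
R1 = sp.integrate(N, (t, 0, z))       # R_1(z) = int_0^z N(t) dt

# Expected coefficients: 4/(d+4) = 2/3 and 4/(d+3) = 4/5 when d = 2.
expected = sp.Rational(2, 3) * W * z ** sp.Rational(3, 2) \
         + sp.Rational(4, 5) * c1 * z ** sp.Rational(5, 4)
assert sp.simplify(R1 - expected) == 0
```

The same computation with symbolic $d$ gives the exponents $(d+4)/4$ and $(d+3)/4$ appearing in \eqref{semiclassicalriesz}.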
Our third main result concerns lower bounds for Riesz means of Dirichlet eigenvalues (see Theorem \ref{dirichlet_bilaplacian_thm_general}).
\begin{thmx}\label{dirichlet_bilaplacian_thm_general0} Let $\Omega$ be a domain in $\mathbb R^d$ of finite measure. Then for any $\phi\in H^2_0(\Omega)\cap L^{\infty}(\Omega)$ and $z>0$ the following inequality holds \begin{multline*}
\sum_j\left(z-\Lambda_j\right)_+\|\phi U_j\|_2^2\geq\frac{4}{d+4}(2\pi)^{-d} B_d\|\phi\|_2^2\left(z-\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\right)^{\frac{d}{4}+1}_+\\
-2(2\pi)^{-d} B_d\|\nabla\phi\|_2^2\left(z-\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\right)^{\frac{d}{4}+\frac{1}{2}}_+. \end{multline*} Moreover, for all positive integers $k$ \begin{equation*}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\rho(\phi)^{-\frac{4}{d}}
+2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\rho(\phi)^{-\frac{2}{d}}
+\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}, \end{equation*} for $\rho(\phi)<1$, where $\rho(\phi)$ is defined in \eqref{rhophi}. \end{thmx}
Theorem \ref{dirichlet_bilaplacian_thm_general0} implies two-term, asymptotically sharp lower bounds for Riesz means, compatible with the two-term asymptotics given by Theorem \ref{intro_exp} (see Theorem \ref{main_upper_dirichlet}). Analogous results hold for the Navier \eqref{equazione}, \eqref{IBC} and the Kuttler-Sigillito \eqref{equazione}, \eqref{KBC} problems.
These results are obtained by an extensive application of the averaged variational principle (AVP), that we recall here in the formulation available in \cite{EHIS}. \begin{lem} Consider a self-adjoint operator $H$ on a Hilbert space $\mathcal{H}$, the spectrum of which is discrete at least in its lower portion, so that $- \infty < \omega_0 \le \omega_1 \le \dots$. The corresponding orthonormalized eigenvectors are denoted $\{\mathbf{\psi}^{(j)}\}$. The closed quadratic form corresponding to $H$ is denoted $Q(\varphi, \varphi)$ for vectors $\varphi$ in the quadratic-form domain $\mathcal{Q}(H) \subset \mathcal{H}$. Let $f_p \in \mathcal{Q}(H)$ be a family of vectors indexed by a variable $p$ ranging over a measure space $(\mathfrak{M},\Sigma,\sigma)$. Suppose that $\mathfrak{M}_0$ is a subset of $\mathfrak{M}$. Then for any $z \in \mathbb{R}$, \begin{equation}\label{RieszVersion}
\sum_{j}{\left(z - \omega_j\right)_{+} \int_{\mathfrak{M}}\left|\langle\mathbf{\psi}^{(j)}, f_p\rangle\right|^2\,d \sigma}
\geq
\int_{\mathfrak{M}_0}{\left(z\| f_p\|^2 - Q(f_p,f_p) \right) d \sigma}, \end{equation} provided that the integrals converge. \end{lem}
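A finite-dimensional toy version of the AVP (not in the original text) may clarify the mechanism: for a symmetric matrix $H$ with $Q(f,f)=f\cdot Hf$, a finite trial family $f_p$ equipped with the counting measure, and an arbitrary subset $\mathfrak{M}_0$ of the index set, inequality \eqref{RieszVersion} can be checked directly, as in the NumPy sketch below.

```python
import numpy as np

rng = np.random.default_rng(0)

n, P = 12, 30
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                        # self-adjoint operator on R^n
omega, psi = np.linalg.eigh(H)           # omega_j and orthonormal columns psi[:, j]

F = rng.standard_normal((P, n))          # trial family f_p, p = 0, ..., P - 1
M0 = np.arange(0, P, 3)                  # an arbitrary subset of the index set

for z in np.linspace(omega.min() - 1.0, omega.max() + 1.0, 7):
    coeff = np.maximum(z - omega, 0.0)                   # (z - omega_j)_+
    lhs = float(np.sum(coeff * np.sum((F @ psi) ** 2, axis=0)))
    rhs = float(np.sum(z * np.sum(F[M0] ** 2, axis=1)
                       - np.einsum('pi,ij,pj->p', F[M0], H, F[M0])))
    assert lhs >= rhs - 1e-9             # the averaged variational principle
```

The inequality holds because, for each fixed $p$, replacing $(z-\omega_j)_+$ by $z-\omega_j$ and using completeness of the eigenbasis gives exactly $z\|f_p\|^2-Q(f_p,f_p)$, while the left-hand integrand is nonnegative, so enlarging $\mathfrak{M}_0$ to $\mathfrak{M}$ only helps.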
\section{Comparison of eigenvalues and eigenvalue asymptotics}
In this section we provide some new results concerning the eigenvalues of problems \eqref{equazione}--\eqref{NBC}. First, we provide inequalities between eigenvalues of the problems we introduced in the previous section. Then, we complete the section by computing their asymptotics up to the second term.
\subsection{Comparison of eigenvalues}
We start with the following
\begin{thm} \label{3.1} Let $\Omega$ be a bounded domain in $\mathbb R^d$ with Lipschitz boundary. Then the following inequalities hold. \begin{itemize} \item For any $j\in\mathbb N$, and for any $ a \in(-(d-1)^{-1},1)$, \begin{equation} \label{dirnav} M_j(a)\leq\tilde M_j(a),\qquad \tilde\Lambda_j(a)\leq\Lambda_j. \end{equation} \item For any $j\in\mathbb N$, \begin{equation} \label{fullchain} \lambda_j^2\le\Lambda_j. \end{equation} \item If in addition $\Omega$ is convex, then for any $j\in\mathbb N$, and for any $ a \in(-(d-1)^{-1},1)$, \begin{equation} \label{fullchain2} M_j(a)\le\mu_j^2. \end{equation} \end{itemize} \end{thm}
We observe that all the quantities in \eqref{dirnav}--\eqref{fullchain2} have the same Weyl limit, while the respective second terms in the asymptotic expansions are already consistent with these inequalities, see Theorem \ref{asymptotal} below.
We also remark that inequality \eqref{fullchain} holds under the milder assumption that $\Omega$ is an open set of finite measure. On the other hand, if the boundary $\partial\Omega$ is assumed to be at least $C^2$, then it becomes a strict inequality. For a proof of this fact we refer to \cite[Theorem 1.1]{liu}, where the author also provides a good survey on this type of inequalities.
\begin{proof} Inequality \eqref{dirnav} follows directly from the respective minimax characterizations. As for inequality \eqref{fullchain}, we start with the Cauchy-Schwarz inequality $$
\left(\int_{\Omega}|\nabla u|^2\right)^2\le\left(\int_{\Omega} u^2\right)\left(\int_{\Omega} (\Delta u)^2\right), $$ which is valid for all $u\in H^2(\Omega)\cap H^1_0(\Omega)$. From this, we get $$
\left(\frac{\int_{\Omega} |\nabla u|^2}{\int_{\Omega} u^2}\right)^2\le\frac{\int_{\Omega} (\Delta u)^2}{\int_{\Omega} u^2} $$ for all $u\in H^2(\Omega)\cap H^1_0(\Omega)$, in particular for $u\in H^2_0(\Omega)$. From this inequality, if we choose a linear, finite dimensional subspace $V\subset H^2_0(\Omega)$, we get $$
\max_{u\in V\setminus \{0\}}\left(\frac{\int_{\Omega} |\nabla u|^2}{\int_{\Omega} u^2}\right)^2\le\max_{u\in V\setminus \{0\}}\frac{\int_{\Omega} (\Delta u)^2}{\int_{\Omega} u^2}, $$ irrespective of the choice of $V$. At this point, we may think of this as an inequality between two functions of $V$: $$
F(V)=\max_{u\in V\setminus \{0\}}\left(\frac{\int_{\Omega} |\nabla u|^2}{\int_{\Omega} u^2}\right)^2,\ \ G(V)=\max_{u\in V\setminus \{0\}}\frac{\int_{\Omega} (\Delta u)^2}{\int_{\Omega} u^2}, $$ and $$ F(V)\le G(V), $$ where $V$ varies among all the finite dimensional subspaces of $H^2_0(\Omega)$. We may as well fix a natural $j$ and restrict our attention to subspaces of dimension $j$, which is a subset of all the finite dimensional subspaces. So, it makes sense to consider the infimum, namely $$ \inf_{\substack{V\subset H^2_0(\Omega) \\ {\rm dim\ }V=j}} F(V)\le\inf_{\substack{V\subset H^2_0(\Omega) \\ {\rm dim\ }V=j}} G(V), $$ the inequality holding since it holds pointwise. If we now analyze both sides of the inequality, we recover that $$ \inf_{\substack{V\subset H^2_0(\Omega) \\ {\rm dim\ }V=j}} G(V) =\min_{\substack{V\subset H^2_0(\Omega) \\ {\rm dim\ }V=j}}\max_{u\in V\setminus \{0\}}\frac{\int_{\Omega} (\Delta u)^2}{\int_{\Omega} u^2}=\Lambda_j, $$ since the min-max is always achieved by the corresponding eigenfunctions (i.e., the infimum is achieved choosing $V$ as the space generated by the first $j$ eigenfunctions), while $$
\inf_{\substack{V\subset H^2_0(\Omega) \\ {\rm dim\ }V=j}} F(V)=\inf_{\substack{V\subset H^2_0(\Omega) \\ {\rm dim\ }V=j}} \max_{u\in V\setminus \{0\}}\left(\frac{\int_{\Omega} |\nabla u|^2}{\int_{\Omega} u^2}\right)^2. $$
Now note that, if we consider sets $A\subset\mathbb R_+$ (meaning that if $\alpha\in A$ then $\alpha\ge 0$), then $$ \inf_A \alpha^2=(\inf_A \alpha)^2,\ \min_A \alpha^2=(\min_A \alpha)^2,\ \sup_A \alpha^2=(\sup_A \alpha)^2,\ \max_A \alpha^2=(\max_A \alpha)^2, $$ since $f(x)=x^2$ is an increasing continuous function of the positive real numbers onto themselves. Hence $$
\inf_{\substack{V\subset H^2_0(\Omega) \\ {\rm dim\ }V=j}} \max_{u\in V\setminus \{0\}}\left(\frac{\int_{\Omega} |\nabla u|^2}{\int_{\Omega} u^2}\right)^2=\left(\inf_{\substack{V\subset H^2_0(\Omega) \\ {\rm dim\ }V=j}} \max_{u\in V\setminus \{0\}}\frac{\int_{\Omega} |\nabla u|^2}{\int_{\Omega} u^2}\right)^2. $$ The final step is increasing the space on which the infimum is taken: $$
\inf_{\substack{V\subset H^2_0(\Omega) \\ {\rm dim\ }V=j}} \max_{u\in V\setminus \{0\}}\frac{\int_{\Omega} |\nabla u|^2}{\int_{\Omega} u^2}\ge \inf_{\substack{V\subset H^1_0(\Omega) \\ {\rm dim\ }V=j}} \max_{u\in V\setminus \{0\}}\frac{\int_{\Omega} |\nabla u|^2}{\int_{\Omega} u^2}=\lambda_j. $$ This proves \eqref{fullchain}.
Regarding \eqref{fullchain2}, we assume first that $\Omega$ is smooth ($C^{\infty}$). We note that for any smooth function $u$ on $\Omega$ we have \begin{equation}\label{part}
\int_{\Omega}|D^2u|^2dx=\int_{\Omega}(\Delta u)^2dx+\frac{1}{2}\int_{\partial\Omega}\frac{\partial\ }{\partial\nu}(|\nabla u|^2)d\sigma-\int_{\partial\Omega}\Delta u\frac{\partial u}{\partial\nu}d\sigma, \end{equation}
where $d\sigma$ is the surface measure element of $\partial\Omega$. Equality \eqref{part} follows from the pointwise identity $|D^2u|^2=\frac{1}{2}\Delta(|\nabla u|^2)-\nabla\Delta u\cdot\nabla u$. Now we note that, on $\partial\Omega$, \begin{equation}\label{part2}
\frac{1}{2}\frac{\partial\ }{\partial\nu}|\nabla u|^2=\nabla \frac{\partial u}{\partial\nu}\cdot\nabla u-\nabla u^T\cdot D\nu\cdot\nabla u =\nabla_{\partial\Omega}\frac{\partial u }{\partial\nu}\cdot\nabla_{\partial\Omega} u+\frac{\partial^2 u }{\partial\nu^2}\frac{\partial u }{\partial\nu}-II(\nabla_{\partial\Omega} u,\nabla_{\partial\Omega} u). \end{equation} Here $II(\cdot,\cdot)$ denotes the second fundamental form on $\partial\Omega$ (in fact $II=D\nu$). The quadratic form $II(\cdot,\cdot)$ defined on the tangent space to $\partial\Omega$ is symmetric and its eigenvalues are the principal curvatures of $\partial\Omega$.
Assume now that $u$ is such that $\frac{\partial u }{\partial\nu}=0$ on $\partial\Omega$ and that $II\geq 0$ in the sense of quadratic forms (this holds e.g., for smooth convex domains). Then $\nabla u=\nabla_{\partial\Omega}u$ on $\partial\Omega$ (the gradient of $u$ restricted on the boundary belongs to the tangent space to the boundary). This fact combined with \eqref{part} and \eqref{part2} implies that for such $u$ and $\Omega$ $$
\int_{\Omega}|D^2u|^2dx\leq\int_{\Omega}(\Delta u)^2dx. $$ Since we assumed $\Omega$ smooth, all eigenfunctions of the Neumann Laplacian belong to $C^{\infty}(\Omega)\cap H^2(\Omega)$ and satisfy $\frac{\partial u }{\partial\nu}=0$ on $\partial\Omega$. Hence, taking the space generated by the first $j$ eigenfunctions of the Neumann Laplacian as the $j$-dimensional subspace of $H^2(\Omega)$ of test functions in the min-max formula for $M_j(a)$, we obtain \eqref{fullchain2}, which is then proved in the case of a smooth convex domain. Since any convex domain can be approximated uniformly by smooth convex domains, we have pointwise convergence of the eigenvalues (see e.g., \cite{arrieta_lamberti_CR}). Therefore, we deduce the validity of \eqref{fullchain2} for any convex set. \end{proof}
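The Cauchy--Schwarz step used above for \eqref{fullchain} has an elementary discrete counterpart: with $L$ the finite-difference Dirichlet Laplacian, $(u\cdot Lu)^2\le (u\cdot u)(Lu\cdot Lu)$, which mirrors $(\int_\Omega|\nabla u|^2)^2\le\int_\Omega u^2\int_\Omega(\Delta u)^2$. A quick illustrative check (assuming NumPy; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200
# Finite-difference -Delta with Dirichlet conditions (u = 0 outside the grid).
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

for _ in range(100):
    u = rng.standard_normal(n)
    Lu = L @ u
    grad_sq = float(u @ Lu)      # discrete int |grad u|^2, by summation by parts
    assert grad_sq >= 0.0        # L is positive definite
    # Discrete analogue of (int |grad u|^2)^2 <= int u^2 * int (Delta u)^2:
    assert grad_sq ** 2 <= float(u @ u) * float(Lu @ Lu) * (1 + 1e-12)
```

Equality would require $Lu$ proportional to $u$, i.e., $u$ an eigenvector, which is the discrete analogue of the rigidity behind the strict inequality mentioned above for $C^2$ boundaries.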
\subsection{Semiclassical asymptotics}
In this section, the domain $\Omega\subset\mathbb R^d$ will always be a bounded domain, smooth enough for the arguments in \cite{safvas, vas} to apply (see Theorem \ref{thmb}). In particular, smooth convex sets and piecewise smooth domains with nonpositive conormal curvature (such as polyhedra) are admissible. Moreover, the dimension $d$ will always satisfy $d\ge2$.
We parametrize $\Omega$ locally in such a way that $\Omega=\{(x_1,\dots,x_d):x_d>0\}$ and $\partial\Omega=\{(x_1,\dots,x_{d-1},0)\}$. We also denote by $(x,\xi)$ the elements of the cotangent bundle $T^*\Omega$, $\xi=(\xi_1,\dots,\xi_d)$ being the coordinates on the fiber $T^*_x\Omega$. Setting $x'=(x_1,\dots,x_{d-1})$ and $\xi'=(\xi_1,\dots,\xi_{d-1})$, we have that $(x',\xi')$ are coordinates for the cotangent bundle $T^*\partial\Omega$.
The operator $\Delta^2$ is represented by the symbol $$
A(\xi)=|\xi|^4=\left(\sum_{k=1}^d\xi_k^2\right)^2, $$ and the operator can be recovered from the symbol by substituting $\xi_k$ with $D_k=-i\frac{\partial\ }{\partial x_k}$. Note that this operator coincides with its principal part, i.e., the symbol only contains monomials of the same degree.
Regarding the boundary operators, we first recall that, because of the parametrization we have chosen, the normal derivative is $$ \frac{\partial\ }{\partial \nu}=-\frac{\partial\ }{\partial x_d}\ \text{(on $\partial\Omega$)}. $$ Let us now discuss the various boundary conditions one by one. \begin{itemize} \item $B_0(D)u=u$. Its symbol is $B_0(\xi)=1$. \item $B_1(D)u=\frac{\partial u}{\partial\nu}$. Its symbol is $B_1(\xi)=-i\xi_d$.
\item $B_2(D)u=(1- a )\frac{\partial^2 u}{\partial\nu^2}+ a \Delta u$. Its symbol is $B_2(\xi)=-\xi_d^2-i a K \xi_d- a |\xi'|^2$ (where $K$ is the sum of the principal curvatures). Note that its principal part is $\tilde B_2(\xi)=-\xi_d^2- a |\xi'|^2$. \item $B_3(D)u=\frac{\partial\Delta u}{\partial \nu}+(1- a ){\rm div}_{\partial\Omega}\left(\frac{\partial\ }{\partial \nu}\nabla_{\partial\Omega}u\right)$. Writing the symbol for this operator is quite complicated, but using the equality $$ {\rm div}_{\partial\Omega}\left(\frac{\partial\ }{\partial \nu}\nabla_{\partial\Omega}u\right)=\Delta_{\partial\Omega} \frac{\partial u}{\partial \nu}-{\rm div}_{\partial\Omega}(\nabla_{\partial\Omega}u\cdot D \nu) $$
we can easily write the principal part $\tilde B_3(\xi)=i \xi_d^3+i(2- a )\xi_d|\xi'|^2$. \end{itemize}
Now we introduce an auxiliary problem related with problems \eqref{equazione}--\eqref{NBC}: \begin{equation} \label{auxiliary} \left\{ \begin{array}{ll} A(\xi',D_d)v(x_d)=\eta v(x_d),& x_d\in(0,+\infty),\\
\tilde B_j(\xi',D_d)v|_{x_d=0}=0, &\ \ \end{array} \right. \end{equation} where the boundary conditions will be: $j=0,1$ for the Dirichlet case, $$ v(0)=v'(0)=0, $$ $j=0,2$ for the Navier case, $$ v(0)=v''(0)=0, $$ $j=1,3$ for the Kuttler-Sigillito case, $$ v'(0)=v'''(0)=0, $$ or $j=2,3$ for the Neumann case, $$
v''(0)- a |\xi'|^2v(0)=v'''(0)-(2- a )|\xi'|^2v'(0)=0. $$ Note that problem \eqref{auxiliary} depends on $\xi'\in\mathbb R^{d-1}$.
We are interested in the spectrum of problem \eqref{auxiliary}. We start by observing that there are no eigenvalues, with the sole exception of the Neumann case with $a\neq 0$, where there is a simple eigenvalue \begin{equation*}
\eta=\eta(\xi')=f( a )|\xi'|^4, \end{equation*} where \begin{equation} \label{fsigma} f( a )=4 a -1-3 a ^2+2(1- a )\sqrt{2 a ^2-2 a +1}. \end{equation}
Notice that $0<f( a )\le1$ for $ a \in(-(d-1)^{-1},1)$, with $f( a )=1$ only for $ a =0$. We remark that for $ a =0$ problem \eqref{auxiliary} has no eigenvalues; in particular, $|\xi'|^4$ is not an eigenvalue, differently from the case $a\neq 0$, where the eigenvalue $f(a)|\xi'|^4$ lies strictly below $|\xi'|^4$.
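Indeed, a direct evaluation of \eqref{fsigma} at $ a =0$ gives

```latex
\[
f(0)=4\cdot0-1-3\cdot0^2+2(1-0)\sqrt{2\cdot0^2-2\cdot0+1}=-1+2=1,
\]
% so at a=0 the expression f(a)|\xi'|^4 equals |\xi'|^4.
```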
In addition, problem \eqref{auxiliary} is known to have as essential spectrum the half-line $[|\xi'|^4,+\infty[$ (see e.g., \cite[Appendix A]{safvas}). Moreover, the essential spectrum has only one threshold, with one double root. A threshold $\eta^{st}$ is a point in the essential spectrum for which the equation $$ A(\xi', \zeta)=\eta^{st} $$
has a multiple real root. It is clear that, in our case, the only threshold is $\eta^{st}=|\xi'|^4$. At this point we search for generalized eigenfunctions in the half-line $]\eta^{st},+\infty[$. To do so, we first have to solve the equation \begin{equation} \label{genev} A(\xi',\zeta)=\eta, \end{equation} for any $\eta\in]\eta^{st},+\infty[$. Equation \eqref{genev} always has four roots: \begin{equation*}
\zeta_1^-=-\sqrt{\sqrt{\eta}-|\xi'|^2},\ \zeta_1^+=\sqrt{\sqrt{\eta}-|\xi'|^2},\ \zeta_2^-=-i\sqrt{\sqrt{\eta}+|\xi'|^2},\ \zeta_2^+=i\sqrt{\sqrt{\eta}+|\xi'|^2}. \end{equation*} We then search for generalized eigenfunctions (associated with $\eta$) of the form \begin{equation} \label{genef} v(x_d)=a_1^-e^{i\zeta_1^-x_d}+a_1^+e^{i\zeta_1^+x_d}+a_2^+e^{i\zeta_2^+x_d}. \end{equation} Note that these generalized eigenfunctions are not proper eigenfunctions (because they are not $L^2$-functions); nevertheless, they are bounded solutions. We search for generalized eigenfunctions because we need to compute the quantity ${\rm arg}\left(i\frac{a_1^+}{a_1^-}\right)$, where ${\rm arg}$ is the standard argument of a complex number.
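For the reader's convenience, the four roots can be obtained by rewriting \eqref{genev} as a biquadratic equation in $\zeta$:

```latex
% A(\xi',\zeta) = (|\xi'|^2+\zeta^2)^2 = \eta, hence \zeta^2 = \pm\sqrt{\eta}-|\xi'|^2:
\[
\zeta^2=\sqrt{\eta}-|\xi'|^2>0
\quad\text{or}\quad
\zeta^2=-\sqrt{\eta}-|\xi'|^2<0.
\]
% For \eta>\eta^{st}=|\xi'|^4 the first equation gives the two real roots
% \zeta_1^{\pm}, the second the two purely imaginary roots \zeta_2^{\pm}.
% The root \zeta_2^- is discarded in \eqref{genef}, since e^{i\zeta_2^- x_d}
% grows exponentially as x_d -> +\infty.
```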
\begin{itemize}
\item {\bf Dirichlet problem.} Through the boundary conditions we get $$ \left\{\begin{array}{l} a_1^-+a_1^++a_2^+=0,\\ \zeta_1^-a_1^-+\zeta_1^+a_1^++\zeta_2^+a_2^+=0, \end{array}\right. $$ hence $$
\frac{a_1^+}{a_1^-}=-\frac{\zeta_1^--\zeta_2^+}{\zeta_1^+-\zeta_2^+}=-\frac{|\xi'|^2}{\sqrt{\eta}}+i\frac{\sqrt{\eta-|\xi'|^4}}{\sqrt{\eta}}, $$ from which we obtain \begin{equation} \label{argdir}
{\rm arg}\left(i\frac{a_1^+}{a_1^-}\right)=\arctan{\frac{|\xi'|^2}{\sqrt{\eta-|\xi'|^4}}}-\pi+2k\pi=\arcsin{\frac{|\xi'|^2}{\sqrt{\eta}}}-\pi+2k\pi, \end{equation} for some $k\in\mathbb Z$.
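The second equality in \eqref{argdir} follows from an elementary identity: if $\theta=\arctan\frac{|\xi'|^2}{\sqrt{\eta-|\xi'|^4}}$ with $\theta\in(0,\pi/2)$, then, since $(|\xi'|^2)^2+\left(\sqrt{\eta-|\xi'|^4}\right)^2=\eta$,

```latex
\[
\sin\theta=\frac{|\xi'|^2}{\sqrt{(|\xi'|^2)^2+(\eta-|\xi'|^4)}}
=\frac{|\xi'|^2}{\sqrt{\eta}},
\]
% i.e., arctan(|\xi'|^2/\sqrt{\eta-|\xi'|^4}) = arcsin(|\xi'|^2/\sqrt{\eta}).
```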
\item {\bf Navier problem.} Through the boundary conditions we get $$ \left\{\begin{array}{l} a_1^-+a_1^++a_2^+=0,\\ (\zeta_1^-)^2a_1^-+(\zeta_1^+)^2a_1^++(\zeta_2^+)^2a_2^+=0, \end{array}\right. $$ that yields $a_1^+/a_1^-=-1$, hence \begin{equation} \label{argnav} {\rm arg\ }\left(i\frac{a_1^+}{a_1^-}\right)=-\frac \pi 2+2k\pi, \end{equation} for some $k\in\mathbb Z$.
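The value $a_1^+/a_1^-=-1$ can be seen directly: since $(\zeta_1^-)^2=(\zeta_1^+)^2$, subtracting $(\zeta_1^+)^2$ times the first equation from the second gives

```latex
\[
\left((\zeta_2^+)^2-(\zeta_1^+)^2\right)a_2^+=0,
\]
% and (\zeta_2^+)^2 = -\sqrt{\eta}-|\xi'|^2 differs from
% (\zeta_1^+)^2 = \sqrt{\eta}-|\xi'|^2 for \eta>0, hence a_2^+ = 0,
% and the first equation yields a_1^+ = -a_1^-.
```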
\item {\bf Kuttler-Sigillito problem.} Through the boundary conditions we get $$ \left\{\begin{array}{l} \zeta_1^-a_1^-+\zeta_1^+a_1^++\zeta_2^+a_2^+=0, \\ (\zeta_1^-)^3a_1^-+(\zeta_1^+)^3a_1^++(\zeta_2^+)^3a_2^+=0, \end{array}\right. $$ that yields $a_1^+/a_1^-=1$, hence \begin{equation} \label{argks} {\rm arg\ }\left(i\frac{a_1^+}{a_1^-}\right)=\frac \pi 2+2k\pi, \end{equation} for some $k\in\mathbb Z$.
\item {\bf Neumann problem.} Through the boundary conditions we get $$ \left\{\begin{array}{l}
-(\zeta_1^-)^2a_1^--(\zeta_1^+)^2a_1^+-(\zeta_2^+)^2a_2^+- a |\xi'|^2(a_1^-+a_1^++a_2^+)=0,\\
-i(\zeta_1^-)^3a_1^--i(\zeta_1^+)^3a_1^+-i(\zeta_2^+)^3a_2^+-i(2- a )|\xi'|^2(\zeta_1^-a_1^-+\zeta_1^+a_1^++\zeta_2^+a_2^+)=0, \end{array}\right. $$ that yields \begin{multline*}
\zeta_2^+\left((\zeta_1^+)^2+ a |\xi'|^2\right)\left((\zeta_2^+)^2+(2- a )|\xi'|^2\right)\left(\frac{a_1^+}{a_1^-}+1\right)\\
=\zeta_1^+\left((\zeta_2^+)^2+ a |\xi'|^2\right)\left((\zeta_1^+)^2+(2- a )|\xi'|^2\right)\left(\frac{a_1^+}{a_1^-}-1\right). \end{multline*} Therefore $$ \frac{a_1^+}{a_1^-}=\frac{A+iB}{A-iB}=\frac{(A+iB)^2}{A^2+B^2}, $$ where $$
A=\sqrt{\sqrt{\eta}-|\xi'|^2}\left(\sqrt{\eta}+(1- a )|\xi'|^2\right)^2,\qquad
B=\sqrt{\sqrt{\eta}+|\xi'|^2}\left(\sqrt{\eta}-(1- a )|\xi'|^2\right)^2. $$ In particular \begin{equation} \label{argneu} {\rm arg\ }\left(i\frac{a_1^+}{a_1^-}\right)={\rm arg\ }(i)+2{\rm arg\ }(A+iB)=-\frac\pi2-2\arctan{\frac A B}+2k\pi, \end{equation} for some $k\in\mathbb Z$. \end{itemize}
We now recall the following theorem (\cite[Theorem 1.6.1]{safvas}).
\begin{thm} Let $\Omega$ be piecewise $C^{\infty}$ and satisfy the nonperiodicity and nonblocking conditions, and let $d\geq 2$. Let $N(z)$ be the counting function associated with the biharmonic operator with either Dirichlet, Navier, or Neumann boundary conditions, i.e., the problem given by equation \eqref{equazione} coupled with \eqref{DBC}, \eqref{IBC}, or \eqref{NBC}, respectively. Then, for $z\to+\infty$ we have \begin{equation*}
N(z)=c_0 z^{\frac d 4}+c_1 z^{\frac {d-1} 4}+o\left(z^{\frac {d-1} 4}\right), \end{equation*} where $$
c_0=(2\pi)^{-d}\int_{|\xi|^4\le1}\int_{\Omega}dxd\xi,\ \ c_1=(2\pi)^{1-d}\int_{T^*\partial\Omega}{\rm shift}^+(1,\xi')dx'd\xi'. $$ Here ${\rm shift}^+$ is the shift function associated with problem \eqref{auxiliary}, and there exists an analytic branch ${\rm arg}_0$ of the argument ${\rm arg}$ such that we have $$ {\rm shift}^+(\eta,\xi')=N^+(\eta,\xi')+\frac{{\rm arg}_0{\rm det}(iR(\eta,\xi'))}{2\pi}, $$ where $N^+$ is the counting function of problem \eqref{auxiliary}, and $R$ is the reflection matrix associated with problem \eqref{auxiliary}, in particular $$ {\rm det}(iR(\eta,\xi'))= \left\{\begin{array}{ll} 0, & {\rm if\ }\eta\le\eta^{st},\\ i\frac{a_1^+}{a_1^-}, & {\rm otherwise}, \end{array}\right. $$ with $a_1^{\pm}$ defined in \eqref{genef}.
In addition, the function ${\rm arg}_0$ is a suitable branch of the complex argument satisfying the following condition $$
\lim_{\eta\to|\xi'|^4}\left|{\rm arg}_0\left(i\frac{a_1^+}{a_1^-}\right)\right|=\frac\pi 2. $$ \end{thm}
We stress the fact that the function ${\rm arg}_0$ depends on the particular problem under consideration, and is not a function chosen once and for all.
\begin{cor} Let $\Omega$ be piecewise $C^{\infty}$ and satisfy the nonperiodicity and nonblocking conditions, and let $d\geq 2$. If $\omega_j$ is the $j$-th eigenvalue of the biharmonic operator with either Dirichlet, Navier, or Neumann boundary conditions, then we have \begin{equation*}
\omega_j=\left(\frac{j}{c_0}\right)^{\frac 4 d}-\frac{4c_1}{dc_0^{\frac {d+3}d}}j^{\frac 3 d}+o\left(j^{\frac 3 d}\right), \end{equation*} or equivalently \begin{equation*}
\omega_j^{\frac14}=\left(\frac{j}{c_0}\right)^{\frac 1 d}-\frac{c_1}{dc_0}+o(1), \end{equation*} as $j\to+\infty$. \end{cor}
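The corollary follows from the theorem by inverting the two-term asymptotics. A sketch: setting $t=\omega_j^{1/4}$, the relation $N(\omega_j)=j$ reads $j=c_0t^d+c_1t^{d-1}+o(t^{d-1})$, whence

```latex
\[
\left(\frac{j}{c_0}\right)^{\frac1d}
=t\left(1+\frac{c_1}{c_0t}+o(t^{-1})\right)^{\frac1d}
=t+\frac{c_1}{dc_0}+o(1),
\]
% using (1+x)^{1/d} = 1 + x/d + O(x^2). This gives the second expansion;
% raising it to the fourth power gives the first one, since
% c_0^{1+3/d} = c_0^{(d+3)/d}.
```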
Now we compute the coefficients $c_0,c_1$. As for $c_0$, it depends only on the equation, and is therefore the same for all the boundary conditions considered; it is \begin{equation*}
c_0=(2\pi)^{-d}\int_{|\xi|^4\le1}\int_{\Omega}dxd\xi=(2\pi)^{-d}B_d|\Omega|. \end{equation*} As for $c_1$, its definition sensitively depends on the boundary conditions, so we split the discussion. \begin{itemize}
\item {\bf Dirichlet boundary conditions.} We have seen that problem \eqref{auxiliary} has no eigenvalue, and it is easy to check that the function ${\rm arg}_0$ is given by formula \eqref{argdir} with $k=0$. Hence \begin{equation} \label{c1dir} \begin{split}
c_1 & =(2\pi)^{-d}|\partial\Omega|\left(\int_{|\xi'|<1}\arcsin{|\xi'|^2}d\xi'-\pi B_{d-1}\right)\\
& =-\frac{ B_{d-1}|\partial\Omega|}{4(2\pi)^{d-1}}\left(1+\frac{\Gamma\left(\frac{d+1}{4}\right)}{\sqrt{\pi}\Gamma\left(\frac{d+3}{4}\right)}\right). \end{split} \end{equation}
\item {\bf Navier boundary conditions.} We have seen that problem \eqref{auxiliary} has no eigenvalue, and it is easy to check that the function ${\rm arg}_0$ is given by formula \eqref{argnav} with $k=0$. Hence \begin{equation}
c_1=(2\pi)^{1-d}|\partial\Omega|\int_{|\xi'|<1}\left(-\frac 1 4\right)d\xi'=-\frac{ B_{d-1}|\partial\Omega|}{4(2\pi)^{d-1}}. \end{equation}
\item {\bf Kuttler-Sigillito boundary conditions.} We have seen that problem \eqref{auxiliary} has no eigenvalue, and it is easy to check that the function ${\rm arg}_0$ is given by formula \eqref{argks} with $k=0$. Hence \begin{equation}
c_1=(2\pi)^{1-d}|\partial\Omega|\int_{|\xi'|<1}\frac 1 4 d\xi'=\frac{ B_{d-1}|\partial\Omega|}{4(2\pi)^{d-1}}. \end{equation}
\item {\bf Neumann boundary conditions.} Let us start with the case $ a \neq0$. Here we have seen that problem \eqref{auxiliary} has a simple eigenvalue $$
\eta=f( a )|\xi'|^4, $$ so that $$ N^+(1,\xi')= \left\{\begin{array}{ll}
1,& {\rm if\ }|\xi'|<f( a )^{-\frac 1 4},\\ 0, & {\rm otherwise}. \end{array}\right. $$
It is also easily checked that the function ${\rm arg}_0$ is given by formula \eqref{argneu} with $k=0$, therefore \begin{equation} \label{c1neu}
c_1=\frac{ B_{d-1}|\partial\Omega|}{4(2\pi)^{d-1}}\left(4f( a )^{\frac{1-d}{4}}-1-4\frac{d-1}{\pi}\int_0^1t^{d-2}\arctan g(t, a )dt\right), \end{equation} where \begin{equation} \label{gneu} g(t, a )=\frac{\sqrt{1-t^2}\left(1+(1- a )t^2\right)^2}{\sqrt{1+t^2}\left(1-(1- a )t^2\right)^2}. \end{equation}
If instead we consider the case $ a =0$, we recall that there are no eigenvalues, however now the function ${\rm arg}_0$ is given by formula \eqref{argneu} but with $k=1$, so that here $$
c_1=\frac{ B_{d-1}|\partial\Omega|}{4(2\pi)^{d-1}}\left(3-4\frac{d-1}{\pi}\int_0^1t^{d-2}\arctan g(t,0)dt\right), $$ and in particular, as $f(0)=1$, we have that formula \eqref{c1neu} still holds.
We observe that, by using the equality $$ \arctan x +\arctan \frac 1 x =\frac \pi 2,\ \ \forall x>0, $$ as $g(t, a )>0$ for all $t\in(0,1)$ and for all $ a $, we obtain the equivalent formula \begin{equation*}
c_1=\frac{ B_{d-1}|\partial\Omega|}{4(2\pi)^{d-1}}\left(4f( a )^{\frac{1-d}{4}}-3+4\frac{d-1}{\pi}\int_0^1t^{d-2}\arctan (g(t, a )^{-1})dt\right). \end{equation*} \end{itemize}
Summing up, we have the following
\begin{thm} \label{asymptotal} Let $\Omega$ be piecewise $C^{\infty}$ and satisfy the nonperiodicity and nonblocking conditions, and let $d\geq 2$. Let $C_d=(2\pi)^2B_d^{-\frac 2 d}$. For any $a\in(-(d-1)^{-1},1)$, the following expansions hold: \begin{equation}\label{weyl_dirichlet_biharmonic_single}
\Lambda_j=C_d^2\left(\frac{j}{|\Omega|}\right)^{\frac{4}{d}}+\frac{C_d^2 B_{d-1}}{d B_d^{1-\frac{1}{d}}}\left(1+\frac{\Gamma\left(\frac{d+1}{4}\right)}{\sqrt{\pi}\Gamma\left(\frac{d+3}{4}\right)}\right)\frac{|\partial\Omega|}{|\Omega|}\left(\frac{j}{|\Omega|}\right)^{\frac{3}{d}}+o\left(j^{\frac{3}{d}}\right), \end{equation}
\begin{equation}\label{weyl_intermediate_biharmonic_single}
\tilde\Lambda_j(a)=C_d^2\left(\frac{j}{|\Omega|}\right)^{\frac{4}{d}}+\frac{C_d^2 B_{d-1}}{d B_d^{1-\frac{1}{d}}}\frac{|\partial\Omega|}{|\Omega|}\left(\frac{j}{|\Omega|}\right)^{\frac{3}{d}}+o\left(j^{\frac{3}{d}}\right), \end{equation}
\begin{equation}\label{weyl_KS_biharmonic_single}
\tilde M_j(a)=C_d^2\left(\frac{j}{|\Omega|}\right)^{\frac{4}{d}}-\frac{C_d^2 B_{d-1}}{d B_d^{1-\frac{1}{d}}}\frac{|\partial\Omega|}{|\Omega|}\left(\frac{j}{|\Omega|}\right)^{\frac{3}{d}}+o\left(j^{\frac{3}{d}}\right), \end{equation} and
\begin{multline*}
M_j(a)=C_d^2\left(\frac{j}{|\Omega|}\right)^{\frac{4}{d}}\\
-\frac{C_d^2 B_{d-1}}{d B_d^{1-\frac{1}{d}}}\left(4f(a)^{\frac{1-d}{4}}-1-4\frac{d-1}{\pi}\int_0^1t^{d-2}\arctan{g(t,a)}dt\right)\frac{|\partial\Omega|}{|\Omega|}\left(\frac{j}{|\Omega|}\right)^{\frac{3}{d}}+o\left(j^{\frac{3}{d}}\right), \end{multline*} as $j\to\infty$, for any $a\in(-(d-1)^{-1},1)$, where $f$ is defined in \eqref{fsigma} and $g$ is defined in \eqref{gneu}. \end{thm}
We conclude this discussion with a few remarks.
\begin{rem}
It is interesting to see that, contrary to what happens with the Laplacian, in the case of the biharmonic operator the quantity $|c_1|$ is not the same for Dirichlet and Neumann eigenvalues. In fact, this is the case even for $ a =0$. In addition, the dependence on the dimension is even stronger, and it is actually worth noticing that, as the dimension grows, the asymptotics of (the square root of) the eigenvalues of the Dirichlet Bilaplacian converge to those of the eigenvalues of the Dirichlet Laplacian, because $$ \lim_{d\to\infty}\left(1+\frac{\Gamma\left(\frac{d+1}{4}\right)}{\sqrt{\pi}\Gamma\left(\frac{d+3}{4}\right)}\right)=1, $$ and hence the inequality $$ \lambda_j\le\sqrt{\Lambda_j} $$ is, in a sense, ``squeezing'' towards an equality, asymptotically in $j$ and in $d$. On the other hand, since the probability measures $(d-1)t^{d-2}dt$ concentrate at $t=1$ as $d\to\infty$, we have $$ \lim_{d\to\infty}4\frac{d-1}{\pi}\int_0^1t^{d-2}\arctan (g(t, a )^{-1})dt=\frac{4}{\pi}\arctan\left(\lim_{t\to1^-}g(t, a )^{-1}\right), $$ which equals $0$ for $ a =0$ and $2$ for $ a \neq0$; in any case, this term remains bounded. However, $$ \lim_{d\to\infty}f( a )^{\frac{1-d}{4}}=\left\{\begin{array}{ll}1,& a =0,\\+\infty,&\text{otherwise,}\end{array}\right. $$ telling us that the asymptotics of (the square root of) the Neumann Bilaplacian eigenvalues converge to those of the Neumann Laplacian eigenvalues only for $ a =0$, while in the other cases the asymptotic expansions blow up. This can be interpreted as the fact that, when the dimension increases, the control of the Hessian matrix on the Laplacian (expressed by the Poisson ratio in the quadratic form \eqref{qf}) weakens significantly, making the asymptotics blow up. \end{rem}
\begin{rem} We observe that, if $\Omega$ satisfies the uniform outer ball condition (see \cite{adolfsson, ggs}), then the expansion \eqref{weyl_intermediate_biharmonic_single} holds also for $a=1$. The same remark applies to \eqref{weyl_KS_biharmonic_single}. On the other hand, when $ a =1$ the Neumann problem \eqref{equazione}, \eqref{NBC} does not satisfy the complementing condition (see \cite{ggs}) and the operator does not have compact resolvent, so the discussion in this section does not apply; it is nevertheless interesting to see what happens to $c_1$ as $ a \to1-$. We observe that $$
\left.-1-4\frac{d-1}{\pi}\int_0^1t^{d-2}\arctan g(t, a )dt\right|_{ a =1}=-1-\frac{\Gamma\left(\frac{d+1}{4}\right)}{\sqrt{\pi}\Gamma\left(\frac{d+3}{4}\right)}, $$ while $$ \lim_{ a \to1-}f( a )^{\frac{1-d}4}=+\infty. $$ This is coherent with what we know about the spectrum of this operator: apart from an infinite dimensional kernel, the remaining part of the spectrum consists of the eigenvalues of the Dirichlet Bilaplacian, see \cite{provneu}. \end{rem}
\begin{rem} It is striking that the asymptotics for the Navier and the Kuttler-Sigillito problems are the same as those of the Dirichlet Laplacian and the Neumann Laplacian, respectively. In particular, the dependence on the Poisson ratio is not visible in those expansions, and the link with the respective Laplacian counterpart becomes evident. However, apart from the case $a=1$, where the identification becomes immediate, it is not at all clear what the relations between the respective Laplacian and Bilaplacian eigenvalues are. \end{rem}
\section{The biharmonic Dirichlet operator}
In this section we focus our attention on the biharmonic Dirichlet problem \eqref{equazione}, \eqref{DBC}. In particular, the quadratic form \eqref{qf} will be set in $H^2_0(\Omega)$. Throughout this section, $\Omega\subset \mathbb R^d$ will be a domain with finite Lebesgue measure, unless otherwise specified. In fact, since the embedding $H^1_0(\Omega)\subset L^2(\Omega)$ is compact under the sole assumption that the measure of $\Omega$ is finite, it is standard to see that the spectrum is discrete and consists of an ordered sequence of positive eigenvalues tending to infinity.
Note that the quadratic form \eqref{qf} is now equal to \begin{equation*}
Q(u,v)=\int_{\Omega}D^2u:D^2v=\int_{\Omega}\Delta u \Delta v, \end{equation*} so the dependence upon the Poisson ratio disappears.
We also observe here that, directly from \eqref{weyl_dirichlet_biharmonic_single}, we have the following asymptotic law for averages of eigenvalues, which holds under suitable assumptions on $\Omega$ (see Theorem \ref{asymptotal}): \begin{equation}\label{weyl_dirichlet_biharmonic} \frac{1}{k}\sum_{j=1}^k\Lambda_j=
\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
+\frac{d}{d+3}\frac{C_d^2 B_{d-1}}{d B_d^{1-\frac{1}{d}}}\left(1+\frac{\Gamma\left(\frac{d+1}{4}\right)}{\sqrt{\pi}\Gamma\left(\frac{d+3}{4}\right)}\right)\frac{|\partial\Omega|}{|\Omega|}\left(\frac{k}{|\Omega|}\right)^{\frac{3}{d}} +o\left(k^{\frac{3}{d}}\right) \end{equation} as $k\rightarrow+\infty$, where $C_d=(2\pi)^2B_d^{-\frac 2 d}$.
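The coefficients $\frac{d}{d+4}$ and $\frac{d}{d+3}$ in \eqref{weyl_dirichlet_biharmonic} come from averaging the powers of $j$ appearing in \eqref{weyl_dirichlet_biharmonic_single}: since $\sum_{j=1}^k j^{\alpha}=\frac{k^{\alpha+1}}{\alpha+1}+O(k^{\alpha})$ for $\alpha>0$, we have

```latex
\[
\frac1k\sum_{j=1}^k j^{\frac4d}=\frac{d}{d+4}\,k^{\frac4d}+O\left(k^{\frac4d-1}\right),
\qquad
\frac1k\sum_{j=1}^k j^{\frac3d}=\frac{d}{d+3}\,k^{\frac3d}+O\left(k^{\frac3d-1}\right).
\]
% For d >= 2 the error terms are o(k^{3/d}), so they are absorbed
% in the remainder of \eqref{weyl_dirichlet_biharmonic}.
```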
\subsection{Lower bounds for Riesz means}
In this section we will apply the averaged variational principle to obtain lower bounds for Riesz means (respectively, upper bounds for averages) of eigenvalues of $\Delta^2_D$ on $\Omega$.
Applying the AVP \eqref{RieszVersion} with test functions of the form $f_{ p }(x)=(2\pi)^{-d/2}e^{i p \cdot x}\phi(x)$, with $\phi(x)\in H^2_0(\Omega)\cap L^{\infty}(\Omega)$, we obtain the following
\begin{thm}\label{dirichlet_bilaplacian_thm_general} Let $\Omega$ be a domain in $\mathbb R^d$ of finite measure. For any $\phi\in H^2_0(\Omega)\cap L^{\infty}(\Omega)$ and $z>0$ the following inequality holds \begin{multline}\label{Riesz-mean-ineq-DirichletbiLaplacian}
\sum_{j\ge 1}\left(z-\Lambda_j\right)_+\|\phi U_j\|_2^2\geq \frac{4}{d+4}(2\pi)^{-d} B_d\|\phi\|_2^2\left(z-\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\right)^{\frac{d}{4}+1}_+\\
-2(2\pi)^{-d} B_d\|\nabla\phi\|_2^2\left(z-\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\right)^{\frac{d}{4}+\frac{1}{2}}_+. \end{multline} Moreover, for all positive integers $k$ \begin{equation}\label{evsums-DirichletbiLaplacian1}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\rho(\phi)^{-\frac{4}{d}}
+2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\rho(\phi)^{-\frac{2}{d}}
+\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}, \end{equation} for $\rho(\phi)<1$, where \begin{equation} \label{rhophi}
\rho(\phi)=\frac{||\phi||_2^2}{|\Omega|\cdot||\phi||_{\infty}^2}. \end{equation} \end{thm}
\begin{proof} We take in \eqref{RieszVersion} trial functions of the form $f_{ p }=(2\pi)^{-d/2}e^{i p \cdot x}\phi(x)$ with $\phi\in H^2_0(\Omega)\cap L^{\infty}(\Omega)$ real valued. After averaging over $ p \in\mathbb R^d$ and using the unitarity of the Fourier transform we get, for any $R>0$, \begin{equation}\label{first_step} \sum_{j\ge 1}\left(z-\Lambda_j\right)_+\int_{\Omega}\phi^2(x)U_j^2(x)dx
\geq(2\pi)^{-d}\int_{| p |\leq R}\left(z\|\phi\|_2^2-\int_{\Omega}\left|\Delta(\phi e^{i p \cdot x})\right|^2dx\right)d p. \end{equation}
Now we note that \begin{equation*} \begin{split}
\int_{\Omega}\left|\Delta(\phi e^{i p \cdot x})\right|^2 & =\int_{\Omega}\left|\Delta\phi-| p |^2\phi+2i p\cdot\nabla\phi\right|^2dx\\
& =\int_{\Omega}(\Delta\phi-| p |^2\phi)^2+4\left| p \cdot\nabla\phi\right|^2dx\\
& =\int_{\Omega}(\Delta\phi)^2+| p |^4\phi^2-2| p |^2\phi\Delta\phi+4\left| p \cdot\nabla\phi\right|^2dx\\
& =\int_{\Omega}(\Delta\phi)^2+| p |^4\phi^2+2| p |^2|\nabla\phi|^2+4\left| p \cdot\nabla\phi\right|^2dx, \end{split} \end{equation*} which implies \begin{multline}\label{step_2} \sum_{j\ge 1}\left(z-\Lambda_j\right)_+\int_{\Omega}\phi^2(x)U_j^2(x)dx\\
\geq(2\pi)^{-d}\int_{| p |\leq R}\left((z-| p |^4)\|\phi\|_2^2-2| p |^2\|\nabla\phi\|_2^2-\|\Delta\phi\|_2^2-4\int_\Omega|p\cdot\nabla\phi|^2dx\right)d p \\
=(2\pi)^{-d} B_d\|\phi\|_2^2\left(\left(z-\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\right)R^d-\frac{d}{d+4}R^{d+4}-2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}R^{d+2}\right). \end{multline} Choosing $$
R^4=\left(z-\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\right)_+, $$ we obtain \eqref{Riesz-mean-ineq-DirichletbiLaplacian}.
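Indeed, writing $w=\left(z-\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\right)_+$, so that $R^4=w$, the right-hand side of \eqref{step_2} becomes

```latex
\[
(2\pi)^{-d}B_d\|\phi\|_2^2\left(w^{\frac d4+1}-\frac{d}{d+4}\,w^{\frac d4+1}
-2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}\,w^{\frac d4+\frac12}\right)
=\frac{4}{d+4}(2\pi)^{-d}B_d\|\phi\|_2^2\,w^{\frac d4+1}
-2(2\pi)^{-d}B_d\|\nabla\phi\|_2^2\,w^{\frac d4+\frac12},
\]
% since R^d = w^{d/4}, R^{d+4} = w^{d/4+1}, R^{d+2} = w^{d/4+1/2}; this is
% exactly the right-hand side of \eqref{Riesz-mean-ineq-DirichletbiLaplacian}.
```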
Now we can consider \eqref{first_step} with the evaluation $z=\Lambda_{k+1}$, so that the sum at the left-hand side is taken over the first $k$ positive integers. Hence, as in \eqref{step_2}, we get \begin{multline}\label{step_3}
\|\phi\|_{\infty}^2\sum_{j=1}^k\left(\Lambda_{k+1}-\Lambda_j\right)\geq \sum_{j=1}^k\left(\Lambda_{k+1}-\Lambda_j\right)\int_{\Omega}\phi^2(x)U_j^2(x)dx\\
\geq(2\pi)^{-d}\int_{| p |\leq R}\left((\Lambda_{k+1}-| p |^4)\|\phi\|_2^2-2| p |^2\|\nabla\phi\|_2^2-\|\Delta\phi\|_2^2-4\int_\Omega|p\cdot\nabla\phi|^2dx\right)d p \\
=(2\pi)^{-d} B_d\|\phi\|_2^2\left(\left(\Lambda_{k+1}-\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\right)R^d-\frac{d}{d+4}R^{d+4}-2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}R^{d+2}\right), \end{multline}
for any $R>0$, where the first inequality follows from $\int_{\Omega}\phi^2(x)U_j^2(x)dx\leq\|\phi\|_{\infty}^2$. We choose now $$
R^4=C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\rho(\phi)^{-\frac{4}{d}}. $$ Standard computations show that with this choice inequality \eqref{step_3} implies \eqref{evsums-DirichletbiLaplacian1}. \end{proof}
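For completeness, we sketch these standard computations. Since $C_d^{d/2}=(2\pi)^dB_d^{-1}$, the above choice of $R$ gives $(2\pi)^{-d} B_d\|\phi\|_2^2R^d=k\|\phi\|_{\infty}^2$, so the terms containing $\Lambda_{k+1}$ cancel in \eqref{step_3}, leaving

```latex
\[
\frac{1}{k}\sum_{j=1}^k\Lambda_j
\le\frac{d}{d+4}\,R^4
+2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}\,R^2
+\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}.
\]
% Substituting R^4 = C_d^2 (k/|\Omega|)^{4/d} \rho(\phi)^{-4/d} (and the
% corresponding expression for R^2) yields \eqref{evsums-DirichletbiLaplacian1}.
```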
\begin{rem} The right-hand side of inequality \eqref{evsums-DirichletbiLaplacian1} compares well with the semiclassical behaviour of the average of the first $k$ eigenvalues, which is known to be a lower bound for the average, see \cite{Lap1997} (see also \cite{Ber,LiYau}). \end{rem}
As a corollary, we have a lower bound for the partition function (the trace of the heat kernel).
\begin{cor}\label{dirichlet_bounds_spec_fcn_bi} For any $\phi\in H^2_0(\Omega)\cap L^{\infty}(\Omega)$ and $t>0$, \begin{multline}\label{Partition-function-inequality_bi}
\sum_{j=1}^{\infty}e^{-\Lambda_{j}t}\|\phi U_j\|_2^2\geq \frac{4}{d+4}(2\pi)^{-d} B_d\Gamma\left(2+\frac{d}{4}\right)\|\phi\|_2^2e^{-\frac{||\Delta\phi||_2^2}{||\phi||_2^2}\,t}t^{-\frac{d}{4}}\\-2(2\pi)^{-d} B_d\Gamma\left(\frac{3}{2}+\frac{d}{4}\right)\|\nabla\phi\|_2^2e^{-\frac{||\Delta\phi||_2^2}{||\phi||_2^2}\,t}t^{\frac{1}{2}-\frac{d}{4}}. \end{multline} Moreover, \begin{equation}\label{part-fct-estimate-small-times_bi} \begin{split}
\sum_{j=1}^{\infty}e^{-\Lambda_{j}t}\geq & \frac{4}{d+4}(2\pi)^{-d} B_d\Gamma\left(2+\frac{d}{4}\right)t^{-\frac{d}{4}}|\Omega|\\
& \quad -\frac{4}{d+4}(2\pi)^{-d} B_d\Gamma\left(2+\frac{d}{4}\right)t^{-\frac{d}{4}}\left(\frac{t\|\Delta\phi\|_2^2+|\Omega|\|\phi\|_{\infty}^2-\|\phi\|_2^2}{\|\phi\|_{\infty}^2}\right)\\
& \quad -2(2\pi)^{-d} B_d\Gamma\left(\frac{3}{2}+\frac{d}{4}\right)\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}t^{\frac{1}{2}-\frac{d}{4}}. \end{split}\end{equation} \end{cor}
\begin{proof}
Laplace transforming \eqref{Riesz-mean-ineq-DirichletbiLaplacian} yields inequality \eqref{Partition-function-inequality_bi} for all $t>0$. In view of the semiclassical expansion, we are interested in bounds for small $t$ and therefore we apply the inequality $1-x\leq e^{-x}\leq 1$ for all $x\geq 0$ to \eqref{Partition-function-inequality_bi} and the inequality $\|\phi U_j\|_2^2\leq\|\phi\|_{\infty}^2$ and we get \eqref{part-fct-estimate-small-times_bi}. \end{proof}
We now prove more explicit bounds, presenting a first term which is sharp and a second term of the correct order in $k$ with respect to \eqref{weyl_dirichlet_biharmonic}. As we shall see, the more regular the domain $\Omega$ is, the more information is contained in the bounds. We note that formula \eqref{evsums-DirichletbiLaplacian1} with $\phi=1_{\Omega}$ would be a ``reverse Berezin-Li-Yau inequality'' for the biharmonic operator. Clearly, such an inequality does not hold, and in fact we cannot use $\phi\equiv 1$ in \eqref{evsums-DirichletbiLaplacian1}, since $1_{\Omega}\notin H^2_0(\Omega)$. However, the form of inequality \eqref{evsums-DirichletbiLaplacian1} suggests that a suitable choice of $\phi$ is a function in $H^2_0(\Omega)\cap L^{\infty}(\Omega)$ which approximates the constant function $1$.
We now construct functions $\phi_h\in H^2_0(\Omega)\cap L^{\infty}(\Omega)$ depending on $h>0$ which approximate $1_{\Omega}$ as $h\rightarrow 0^+$ and whose gradients and Laplacians have controlled $L^2(\Omega)$-norms. For $x\in\mathbb R^d$ we denote by $\delta(x)$ the function $\delta(x):=\text{dist}(x,\partial\Omega)$. Let $h>0$ and let $\omega_h\subset\Omega$ be defined by \begin{equation*} \omega_h:=\left\{x\in\Omega:\delta(x)\leq h\right\}. \end{equation*} We note that $\omega_h=\Omega$ whenever $h\geq r_{\Omega}$, where $r_{\Omega}$ denotes the inradius of $\Omega$ \begin{equation*}
r_{\Omega}:=\underset{x\in\Omega}{\max}\underset{y\in\partial\Omega}{\min}|x-y|. \end{equation*}
We define a function $\phi_h\in H^2_0(\Omega)$ such that $\phi_h\equiv 1$ in $\Omega\setminus\overline\omega_h$ and $0\le\phi_h\le1$ on $\Omega$ as follows. Let $f:[0,+\infty[\rightarrow\mathbb R$ be defined by $$ f(r)= \begin{cases} \frac{d^2+6d+8}{8 B_d}(r^2-1)^2,& {\rm if\ }r\in[0,1[,\\ 0,& {\rm if\ }r\in[1,+\infty[. \end{cases} $$ By construction $f\in C^{1,1}([0,+\infty))$, $f'(0)=0$, and $f(r)>0$ on $]0,1[$.
Let now $\eta_h:\mathbb R^d\to[0,+\infty[$ be defined by $$
\eta_h(x):=\frac{1}{h^d}f\left(\frac{|x|}{h}\right). $$ By construction $$ \int_{\mathbb R^d}\eta_h(x)dx=1, $$ for all $h>0$ and $\eta_h$ is supported on $B(0,h)$.
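The normalization is a consequence of the choice of the constant in $f$: passing to polar coordinates and noting that $d^2+6d+8=(d+2)(d+4)$, one computes

```latex
\[
\int_{\mathbb R^d}\eta_h(x)\,dx=\int_{\mathbb R^d}f(|y|)\,dy
=dB_d\,\frac{(d+2)(d+4)}{8B_d}\int_0^1r^{d-1}(r^2-1)^2\,dr=1,
\]
% since \int_0^1 r^{d-1}(r^2-1)^2 dr = 1/(d+4) - 2/(d+2) + 1/d
%       = 8/(d(d+2)(d+4)),
% and the surface measure of the unit sphere is d B_d.
```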
Let $h<r_{\Omega}$ and consider $1_{h}:=1_{\Omega\setminus\overline{\omega_{h}}}$. We set \begin{equation}\label{phi_h}
\phi_h:=\left(1_{\frac{h}{2}}\ast{\eta_{\frac{h}{2}}}\right){|_{\Omega}}. \end{equation} By construction, $\phi_h\in C^{1,1}(\mathbb R^d)$. Moreover, for any $x\in\mathbb R^d\setminus\Omega$, $$ \phi_h(x)=\int_{\mathbb R^d}1_{\frac{h}{2}}(y)\eta_{\frac{h}{2}}(x-y)dy=\int_{B\left(x,\frac{h}{2}\right)}1_{\frac{h}{2}}(y)\eta_{\frac{h}{2}}(x-y)dy=0. $$ In the same way one has that, for any $x\in\mathbb R^d\setminus\Omega$, $$ \nabla\phi_h(x)=0. $$
This means that $\phi_h$ is a continuously differentiable function on $\overline{\Omega}$ with ${\phi_h}{|_{\partial\Omega}}=|{\nabla\phi_h}{|_{\partial\Omega}}|=0$ and with Lipschitz continuous first partial derivatives; in other words, $\phi_h\in H^2_0(\Omega)$.
Moreover, for any $x\in\Omega\setminus\overline{\omega_h}$, we have $$ \phi_h(x)=\int_{B\left(x,\frac{h}{2}\right)}1_{\frac{h}{2}}(y)\eta_{\frac{h}{2}}(x-y)dy=\int_{B\left(x,\frac{h}{2}\right)}\eta_{\frac{h}{2}}(x-y)dy=1, $$ and, for any $x\in\omega_h$, $$ 0\leq\phi_h(x)=\int_{B\left(x,\frac{h}{2}\right)}1_{\frac{h}{2}}(y)\eta_{\frac{h}{2}}(x-y)dy\leq\int_{B\left(x,\frac{h}{2}\right)}\eta_{\frac{h}{2}}(x-y)dy=1. $$ We estimate now the $L^{\infty}(\Omega)$-norm of $\nabla\phi_h$ and $\Delta\phi_h$ (note that, since $\phi_h\in C^{1,1}(\Omega)$, then $\Delta\phi_h\in L^{\infty}(\Omega)$). We have \begin{multline*}
\left|\nabla\phi_h\right|^2=\sum_{i=1}^d\left|1_{\frac{h}{2}}\ast\partial_{x_i}\eta_{\frac{h}{2}}\right|^2\leq\|1_{\frac{h}{2}}\|_{\infty}^2\sum_{i=1}^d\left(\int_{B\left(0,\frac{h}{2}\right)}|\partial_{x_i}\eta_{\frac{h}{2}}|dx\right)^2\\
\leq\|\nabla\eta_{\frac{h}{2}}\|_2^2\left|B\left(0,\frac{h}{2}\right)\right|=\frac{8d(d+2)(d+4)}{d+6}\cdot\frac{1}{h^2}, \end{multline*} hence $$
\|\nabla\phi_h\|_{\infty}^2\leq \frac{8d(d+2)(d+4)}{d+6}\cdot\frac{1}{h^2}. $$ Moreover $$
\left|\Delta\phi_h\right|^2=\left|1_{\frac{h}{2}}\ast\Delta\eta_{\frac{h}{2}}\right|^2\leq\|1_{\frac{h}{2}}\|_{\infty}^2\left(\int_{B\left(0,\frac{h}{2}\right)}|\Delta\eta_{\frac{h}{2}}|dx\right)^2=64d^2(d+4)^2\left(\frac{d}{d+2}\right)^d\cdot\frac{1}{h^4}, $$ hence $$
\|\Delta\phi_h\|_{\infty}^2\leq 64d^2(d+4)^2\left(\frac{d}{d+2}\right)^d\cdot\frac{1}{h^4}. $$ We set \begin{equation} \label{adconst} A_d^2:=\frac{8d(d+2)(d+4)}{d+6}\,,\ \ \ \tilde A_d^2:= 64d^2(d+4)^2\left(\frac{d}{d+2}\right)^d. \end{equation} We have proved the following \begin{lem}\label{lemma_test} Let $\Omega$ be a domain in $\mathbb R^d$ of finite measure. Let $r_{\Omega}>0$ denote the inradius of $\Omega$. Then, for all $h\in]0,r_{\Omega}]$ there exists a function $\phi_h\in H^2_0(\Omega)\cap L^{\infty}(\Omega)$ such that \begin{enumerate}[i)] \item $0\leq\phi_h(x)\leq 1$ for all $x\in\overline{\Omega}$; \item $\phi_h\equiv 1$ on $\Omega\setminus\overline{\omega_h}$;
\item $\|\nabla\phi_h\|_{\infty}\leq A_d h^{-1}$, with $A_d$ depending only on $d$;
\item $\|\Delta\phi_h\|_{\infty}\leq \tilde A_d h^{-2}$, with $\tilde A_d$ depending only on $d$. \end{enumerate} \end{lem}
We note that, once we are able to estimate the size of $\omega_h$, by choosing a suitable $h$ in \eqref{Riesz-mean-ineq-DirichletbiLaplacian} the inequalities \eqref{evsums-DirichletbiLaplacian1} and \eqref{Partition-function-inequality_bi} become asymptotically sharp. A suitable choice will be $h\sim k^{-1/d}$ in the case of sufficiently smooth domains. This is made clear in the next theorem, which is stated for averages of eigenvalues only. We remark though that analogous computations allow us to prove related estimates for Riesz means and the partition function as well.
\begin{thm}\label{main_upper_dirichlet} Let $\Omega$ be a domain in $\mathbb R^d$ of finite measure. \begin{enumerate}[i)] \item For all positive integers $k$ \begin{equation}\label{rough_estimate_bilaplacian}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq\frac{1}{r_{\Omega}^4}\left(\frac{d}{d+4}C_d^2\left(a_d|\Omega|\right)^{\frac{4}{d}}\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}+2C_d\left(b_d|\Omega|\right)^{\frac{2}{d}}\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}+c_d\right), \end{equation} where $a_d,b_d,c_d$ are constants which depend only on the dimension and are given by \eqref{cd}.
\item Let $\Omega$ be such that $\lim_{h\rightarrow 0^+}\frac{|\omega_h|}{h}=|\partial\Omega|<\infty$. Then, for any $k\geq |\Omega|\left(\frac{\sqrt{d}A_d}{2C_d^{1/2}r_{\Omega}}\right)^{d}$, \begin{equation}\label{explicit_sum}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
+M_d\frac{|\partial\Omega|}{|\Omega|}C_d^{\frac{3}{2}}\left(\frac{k}{|\Omega|}\right)^{\frac{3}{d}}+R(k), \end{equation}
where $M_d$ depends only on $d$ and is given by \eqref{Md}. Here $R(k)=o(k^{3/d})$ as $k\rightarrow +\infty$, and it depends explicitly on $k,d,|\Omega|,|\partial\Omega|$ and $|\omega_{h(k)}|$ with $h(k)$ given by \eqref{hk}. \item If $\Omega$ is convex or if $\Omega$ is of class $C^2$ and bounded, then $ii)$ holds. Moreover there exists $C=C(d,\Omega)>0$ such that \eqref{explicit_sum} holds with \begin{equation}\label{rem_est}
R(k)\leq C \left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}. \end{equation}
Finally, there exists $k_0$ which depends only on $d$ and $\Omega$ such that, for all $k\geq k_0$, $R(k)$ depends explicitly on $k,d,|\Omega|,|\partial\Omega|$ if $\Omega$ is convex and on $k,d,|\Omega|,|\partial\Omega|$ and integrals of the mean curvature $\mathcal H$ of $\partial\Omega$ if $\Omega$ is of class $C^2$. \end{enumerate} \end{thm}
\begin{proof} We start by proving $i)$. We construct a test function in $H^2_0(\Omega)$ supported in a ball $B_{r_{\Omega}}$ of radius $r_{\Omega}$ contained in $\overline\Omega$ (by definition of $r_{\Omega}$ such a ball exists). We then set $$
\psi_{r_{\Omega}}(x):=\left(\frac{|x|^2}{r_{\Omega}^2}-1\right)^2. $$ Explicit computations show that \begin{equation*}\begin{split}
\|\psi_{r_{\Omega}}\|_2^2= &\frac{384r_{\Omega}^d B_d}{(d+2)(d+4)(d+6)(d+8)},\\
\|\psi_{r_{\Omega}}\|_{\infty}^2=&1,\\
\frac{\|\nabla\psi_{r_{\Omega}}\|_2^2}{\|\psi_{r_{\Omega}}\|_2^2}=&\frac{d(d+8)}{3r_{\Omega}^2},\\
\frac{\|\Delta\psi_{r_{\Omega}}\|_2^2}{\|\psi_{r_{\Omega}}\|_2^2}=&\frac{d(d+2)(d+6)(d+8)}{6r_{\Omega}^4}. \end{split}\end{equation*} We set \begin{equation}\label{cd} \begin{split} a_d =&\frac{(d+2)(d+4)(d+6)(d+8)}{384 B_d},\\ b_d =& a_d\left(\frac{d(d+8)}{3}\right)^{\frac{d}{2}},\\ c_d =&\frac{d(d+2)(d+6)(d+8)}{6}. \end{split} \end{equation} Formula \eqref{rough_estimate_bilaplacian} now follows from \eqref{evsums-DirichletbiLaplacian1} with $\phi=\psi_{r_{\Omega}}$ and standard computations. This proves point $i)$.
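The explicit norms above can be cross-checked by one-dimensional radial quadrature. The following sketch (plain Python, with $r_{\Omega}=1$; an illustration with helper names of our own, not part of the proof) verifies $\|\psi_{r_{\Omega}}\|_2^2$ and the gradient ratio for $d=2,3$, and the Laplacian ratio $\|\Delta\psi_{r_{\Omega}}\|_2^2/\|\psi_{r_{\Omega}}\|_2^2=320/3$ for $d=2$:

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def ball_volume(d):
    # volume B_d of the unit ball in R^d
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def norms(d):
    # psi(x) = (|x|^2 - 1)^2 on the unit ball, integrated radially with the
    # surface factor d * B_d * t^(d-1); by direct differentiation,
    # |grad psi|^2 = 16 |x|^2 (|x|^2 - 1)^2 and Delta psi = 4((d+2)|x|^2 - d).
    w = lambda t: d * ball_volume(d) * t ** (d - 1)
    psi2 = simpson(lambda t: w(t) * (t ** 2 - 1) ** 4, 0.0, 1.0)
    grad2 = simpson(lambda t: w(t) * 16 * t ** 2 * (t ** 2 - 1) ** 2, 0.0, 1.0)
    lap2 = simpson(lambda t: w(t) * 16 * ((d + 2) * t ** 2 - d) ** 2, 0.0, 1.0)
    return psi2, grad2, lap2
```
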
We now prove $ii)$. From \eqref{evsums-DirichletbiLaplacian1} it follows that, for $d\geq 4$, \begin{equation}\label{evsums-DirichletbiLaplacian1_step1} \begin{split}
\frac{1}{k}\sum_{j=1}^k\Lambda_j & \leq \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\rho(\phi)^{-\frac{4}{d}}\\
& \quad +2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\rho(\phi)^{-\frac{2}{d}}
+\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\\
& = \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}+\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\left(\rho(\phi)^{-4/d}-1\right)\\
& \quad +2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\rho(\phi)^{-\frac{2}{d}}
+\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\\
& \leq \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}+\frac{4}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\left(\rho(\phi)^{-1}-1\right)\\
& \quad +2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\rho(\phi)^{-\frac{2}{d}}
+\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}\\
& = \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}+\frac{4}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\left(\frac{|\Omega|\|\phi\|_{\infty}^2-\|\phi\|_2^2}{\|\phi\|_2^2}\right)\\
& \quad +2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\left(\frac{|\Omega|\|\phi\|_{\infty}^2}{\|\phi\|_2^2}\right)^{\frac{2}{d}}
+\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}, \end{split} \end{equation} where $\rho(\phi)$ is defined in \eqref{rhophi}. We have used Bernoulli's inequality in the fifth line of \eqref{evsums-DirichletbiLaplacian1_step1}. If $d=2,3$, we use the fact that $$ \rho(\phi)^{-4/d}-1=\left(\rho(\phi)^{-2}\right)^{2/d}-1\leq\frac{2}{d}(\rho(\phi)^{-2}-1)=\frac{2}{d}(\rho(\phi)^{-1}+1)(\rho(\phi)^{-1}-1), $$ which again follows from Bernoulli's inequality, since $2/d\leq 1$ and $\rho(\phi)^{-2}\geq 1$; for $d=2,3$ we thus have \begin{multline}\label{evsums-DirichletbiLaplacian1_step11}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
+\frac{2}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\left(\frac{|\Omega|\|\phi\|_{\infty}^2+\|\phi\|_2^2}{\|\phi\|_2^2}\right)\left(\frac{|\Omega|\|\phi\|_{\infty}^2-\|\phi\|_2^2}{\|\phi\|_2^2}\right)\\
+2\frac{\|\nabla\phi\|_2^2}{\|\phi\|_2^2}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\left(\frac{|\Omega|\|\phi\|_{\infty}^2}{\|\phi\|_2^2}\right)^{\frac{2}{d}}
+\frac{\|\Delta\phi\|_2^2}{\|\phi\|_2^2}. \end{multline}
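The elementary inequality used for $d=2,3$ can be sampled numerically; an illustrative sketch (not part of the proof), with $\rho\in(0,1]$ playing the role of $\rho(\phi)$:

```python
# Check rho^(-4/d) - 1 <= (2/d)(rho^(-1) + 1)(rho^(-1) - 1) for rho in (0,1], d = 2,3.
violations = []
for d in (2, 3):
    for i in range(1, 1001):
        rho = i / 1000.0
        lhs = rho ** (-4.0 / d) - 1.0
        rhs = (2.0 / d) * (1.0 / rho + 1.0) * (1.0 / rho - 1.0)
        if lhs > rhs + 1e-9:
            violations.append((d, rho))
```

For $d=2$ the two sides coincide identically, while for $d=3$ the inequality is strict away from $\rho=1$.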
For each positive integer $k$, we choose $\phi=\phi_h$ defined by \eqref{phi_h} in \eqref{evsums-DirichletbiLaplacian1_step1} and \eqref{evsums-DirichletbiLaplacian1_step11}. Thanks to Lemma \ref{lemma_test} and to the fact that $|\Omega|-|\omega_h|\leq\|\phi_h\|_2^2\leq |\Omega|$, we have \begin{multline}\label{evsums-DirichletbiLaplacian1_step21}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
+\frac{4}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\left(\frac{|\omega_h|}{|\Omega|-|\omega_h|}\right)\\
+2\frac{A_d^2|\omega_h|}{h^2(|\Omega|-|\omega_h|)}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\left(\frac{|\Omega|}{|\Omega|-|\omega_h|}\right)^{\frac{2}{d}}
+\frac{\tilde A_d^2|\omega_h|}{h^4(|\Omega|-|\omega_h|)}, \end{multline} if $d\geq 4$, and \begin{multline}\label{evsums-DirichletbiLaplacian1_step22}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
+\frac{2}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\left(\frac{2|\Omega|}{|\Omega|-|\omega_h|}\right)\left(\frac{|\omega_h|}{|\Omega|-|\omega_h|}\right)\\
+2\frac{A_d^2|\omega_h|}{h^2(|\Omega|-|\omega_h|)}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\left(\frac{|\Omega|}{|\Omega|-|\omega_h|}\right)^{\frac{2}{d}}
+\frac{\tilde A_d^2|\omega_h|}{h^4(|\Omega|-|\omega_h|)}, \end{multline}
if $d=2,3$. In both cases, since $\lim_{h\rightarrow 0^+}\frac{|\omega_h|}{h}=|\partial\Omega|$, we can write \begin{multline}\label{evsums-DirichletbiLaplacian1_step3}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
+\frac{4}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\left(\frac{h|\partial\Omega|}{|\Omega|}\right)\\
+2\frac{A_d^2|\partial\Omega|}{h|\Omega|}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}
+\frac{\tilde A_d^2|\partial\Omega|}{h^3|\Omega|}+R(k,h), \end{multline}
where $R(k,h)$ is defined in \eqref{remainder} and the constants $A_d, \tilde A_d$ are as in \eqref{adconst}. Here we could optimize with respect to $h$; the optimal $h$ is given by an explicit dimensional constant times $C_d^{-\frac{1}{2}}\left(\frac{k}{|\Omega|}\right)^{-\frac{1}{d}}$. We set \begin{equation}\label{hk}
h=h(k)=\sqrt{\frac{d+4}4}A_d C_d^{-\frac{1}{2}}\left(\frac{k}{|\Omega|}\right)^{-\frac{1}{d}}\varepsilon, \end{equation} so that inequality \eqref{evsums-DirichletbiLaplacian1_step3} becomes \begin{multline*}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\\
+\sqrt{\frac{4}{d+4}}A_dC_d^{\frac 3 2}\left(\frac{k}{|\Omega|}\right)^{\frac{3}{d}}\frac{|\partial\Omega|}{|\Omega|}\left(\varepsilon+\frac 2 \varepsilon +\frac 4 {d+4}\frac {\tilde A_d^2}{A_d^4}\varepsilon^{-3}\right) +R(k,h(k)). \end{multline*}
For simplicity we choose $\varepsilon=\sqrt{2}$, which optimizes the first two terms depending on $\varepsilon$; the goal here is not to obtain the best constants (the constants $A_d, \tilde A_d$ are already not optimal). It follows that, for any $k\geq |\Omega|\frac{A_d^d}{r_{\Omega}^d}\left(\frac{d+4}{2C_d}\right)^{d /2}$ (we need $h\leq r_{\Omega}$), we obtain \begin{equation*}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
+M_d\frac{|\partial\Omega|}{|\Omega|}C_d^{\frac{3}{2}}\left(\frac{k}{|\Omega|}\right)^{\frac{3}{d}}+R(k,h(k)), \end{equation*} where \begin{equation}\label{Md} M_d=8\left(\frac{d(d+2)}{d+6}\right)^{\frac 1 2}\left(2+\frac{(d+6)^2}{(d+2)^2(d+4)}\left(\frac d {d+2}\right)^d\right). \end{equation} We also note that the remainder function $R(k,h(k))$ in \eqref{evsums-DirichletbiLaplacian1_step3} with $h=h(k)$ given by \eqref{hk} is $o(k^{3/d})$ as $k\rightarrow +\infty$. This concludes the proof of $ii)$.
Now, let us pass to $iii)$. It is known that $\lim_{h\rightarrow 0^+}\frac{|\omega_h|}{h}=|\partial\Omega|$ if $\Omega$ is Lipschitz (see e.g., \cite{colesanti_ambrosio}). In particular this is true for convex sets (which are Lipschitz) and for sets with $C^2$ boundaries. Hence $ii)$ holds for these classes of domains. Let us now write explicitly the remainder $R(k,h)$ in \eqref{evsums-DirichletbiLaplacian1_step3}. For simplicity we consider the case $d\geq 4$, the case $d=2,3$ being similar. We have that \begin{multline}\label{remainder}
R(k,h)=\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^\frac{4}{d}\left(\frac{|\omega_h|}{|\Omega|-|\omega_h|}-\frac{h|\partial\Omega|}{|\Omega|}\right)\\
+2\frac{A_d^2}{h^2}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\left(\frac{|\omega_h|}{\left(1-\frac{|\omega_h|}{|\Omega|}\right)^{2/d}(|\Omega|-|\omega_h|)}-\frac{h|\partial\Omega|}{|\Omega|}\right)
+\frac{\tilde A_d^2}{h^4}\left(\frac{|\omega_h|}{|\Omega|-|\omega_h|}-\frac{h|\partial\Omega|}{|\Omega|}\right). \end{multline}
Consider $\Omega$ convex first. We note that for convex domains and for all $h\leq r_{\Omega}$ we have $|\omega_h|\leq h|\partial\Omega|$. This follows from the co-area formula and from the fact that the measure of the sets $\partial\Omega_h:=\left\{x\in\Omega:{\rm dist}(x,\partial\Omega)=h\right\}$ is a non-increasing function of $h$, for $h\in[0, r_{\Omega}]$. Hence, from \eqref{remainder} we deduce that \begin{equation*} \begin{split}
R(k,h) & \leq\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^\frac{4}{d}\frac{h^2|\partial\Omega|^2}{|\Omega|(|\Omega|-h|\partial\Omega|)}\\
& \quad +2\frac{A_d^2}{h^2}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}h|\partial\Omega|\left(\frac{1}{|\Omega|\left(1-\frac{h|\partial\Omega|}{|\Omega|}\right)^{1+2/d}}-\frac{1}{|\Omega|}\right)
+\frac{\tilde A_d^2}{h^4}\frac{h^2|\partial\Omega|^2}{|\Omega|(|\Omega|-h|\partial\Omega|)}\\
& \leq \frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^\frac{4}{d}\frac{h^2|\partial\Omega|^2}{|\Omega|(|\Omega|-h|\partial\Omega|)}
+2\frac{A_d^2}{h^2}C_d\left(\frac{k}{|\Omega|}\right)^{\frac{2}{d}}\frac{\left(1+\frac{2}{d}\right)h^2|\partial\Omega|^2}{|\Omega|\left(|\Omega|-h|\partial\Omega|\left(1+\frac{2}{d}\right)\right)}\\
& \quad +\frac{\tilde A_d^2}{h^4}\frac{h^2|\partial\Omega|^2}{|\Omega|(|\Omega|-h|\partial\Omega|)}, \end{split} \end{equation*} where the second inequality follows from Bernoulli's inequality. Choosing $h=h(k)$ as in point $ii)$ (see \eqref{hk}) we immediately deduce the validity of $iii)$ in the case that $\Omega$ is convex.
Let now $\Omega$ be of class $C^2$ and bounded. In this case, we note that there exists $\bar h\in]0, r_{\Omega}[$ such that any point in $\omega_h$ has a unique nearest point on $\partial\Omega$, for all $h\in]0,\bar h[$. Let us take the supremum of such $\bar h$ (still denoted by $\bar h$). It is standard to see that for all $h\in]0,\bar h[$ \begin{equation}\label{tubular_smooth}
|\omega_h|\leq h|\partial\Omega|+\frac{h^2}{d}\sum_{j=2}^d\binom{d}{j}(-1)^{j-1}h^{j-2}\int_{\partial\Omega}\mathcal H(s)^{j-1}d\sigma(s), \end{equation} where $\mathcal H(s)$ denotes the mean curvature of $\partial\Omega$ at $s\in\partial\Omega$. We refer to \cite[Theorem 2.19]{HaPrSt18} for a proof of \eqref{tubular_smooth}. We choose again $h=h(k)$ as in \eqref{hk} and insert it into \eqref{evsums-DirichletbiLaplacian1_step3}; we are then allowed to insert the upper bound \eqref{tubular_smooth} into \eqref{remainder}. This confirms the claim of $iii)$ for bounded domains of class $C^2$. \end{proof}
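For orientation, in the model case of a disk of radius $R$ in the plane one has $|\omega_h|=2\pi Rh-\pi h^2$ exactly and $\mathcal H\equiv 1/R$, so \eqref{tubular_smooth} holds with equality; a minimal numerical sketch (illustrative, with hypothetical helper names):

```python
import math

def inner_tube_area(R, h):
    # exact area of the inner tube of width h of a disk of radius R
    return math.pi * R ** 2 - math.pi * (R - h) ** 2

def tube_bound(R, h):
    # right-hand side of the tube estimate for d = 2:
    # h |dOmega| + (h^2/2) * C(2,2) * (-1)^1 * \int_{dOmega} H ds, with H = 1/R
    perimeter = 2 * math.pi * R
    curvature_integral = perimeter / R   # equals 2*pi for any R
    return h * perimeter - (h ** 2 / 2.0) * curvature_integral
```
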
We conclude this discussion with a few remarks.
\begin{rem} Point $i)$ of Theorem \ref{main_upper_dirichlet} provides a bound which is not asymptotically sharp in $k$ and which shows a dependence on $ r_{\Omega}$. The presence of the term $ r_{\Omega}^{-4}$ is somewhat natural for lower eigenvalues. For example, for $d=2$ it is known that $$ \lambda_1\geq\frac{1}{2\gamma r_{\Omega}^2}, $$ if $\gamma\ge 2$, where $\gamma$ denotes the number of connected components of $\partial\Omega$ (see \cite{croke}), while $$ \lambda_1\geq\frac{1}{4 r_{\Omega}^2}, $$ if $\gamma=1$. Since $\Lambda_1\geq\lambda_1^2$, the exponent $4$ on $ r_{\Omega}$ in \eqref{rough_estimate_bilaplacian} is sharp. However, for larger eigenvalues the bound \eqref{rough_estimate_bilaplacian} is not accurate; in fact, asymptotically sharp bounds hold starting from a given positive integer $k_0$ depending on $d$ and $\Omega$, as in \eqref{explicit_sum} (cf.\ \eqref{rough_o}). \end{rem}
\begin{rem}
Point $ii)$ of Theorem \ref{main_upper_dirichlet} holds if $\Omega$ is such that $\mathcal M_{\Omega}(\partial\Omega)=|\partial\Omega|$, where \begin{equation}\label{tube_vol}
\mathcal M_{\Omega}(\partial\Omega)=\lim_{h\rightarrow 0^+}\frac{|\omega_h|}{h}. \end{equation}
The limit \eqref{tube_vol} is usually called the Minkowski content of $\partial\Omega$ {relative to} $\Omega$ (see e.g., \cite{lapidus_fractal_1,lapidus_fractal_2}). There are some sufficient conditions which assure that $\mathcal M_{\Omega}(\partial\Omega)=|\partial\Omega|$, for example if $\Omega$ has a Lipschitz boundary (see \cite{colesanti_ambrosio} for the proof and for a more detailed discussion on Minkowski content and conditions ensuring $\mathcal M_{\Omega}(\partial\Omega)=|\partial\Omega|$). \end{rem}
\begin{rem} The estimate \eqref{rem_est} of point $iii)$ can also be proved for Lipschitz domains with piecewise $C^2$ boundaries. In addition, more refined estimates for the remainder in the case of smooth, mean convex or convex sets can be obtained by means of a deeper (though long and technical) analysis (see e.g., \cite{HaPrSt18}). In dimension $d=2$ we can find an explicit dependence of the remainder $R(k)$ on the number of connected components of the boundary (for $C^2$ domains) or on the angles (in the case of polygons), see \cite{HaPrSt18}. We do not enter into the details of more refined estimates here, as they require more careful but standard computations. However, we remark that Theorem \ref{dirichlet_bilaplacian_thm_general} gives a general recipe to obtain asymptotically sharp upper bounds for averages with explicit dependence on the geometry of $\Omega$ (via a suitable choice of test functions $\phi$).
We also remark that asymptotically sharp estimates with a well-behaved second term can be obtained for Riesz means and for the partition function by plugging into \eqref{Riesz-mean-ineq-DirichletbiLaplacian} and \eqref{evsums-DirichletbiLaplacian1} the same test functions $\phi_h$ used in the proof of Theorem \ref{main_upper_dirichlet}. \end{rem}
\begin{rem} We note that the second term in the upper bound \eqref{explicit_sum} coincides with the second term of the semiclassical asymptotic expansion of the average of biharmonic Dirichlet eigenvalues \eqref{weyl_dirichlet_biharmonic}, up to a multiplicative dimensional constant. \end{rem}
\begin{rem}
We observe that formula \eqref{evsums-DirichletbiLaplacian1_step22} holds for any $\Omega\subset\mathbb R^d$ of finite measure (it need not be bounded); hence upper bounds depend on the behavior of $|\omega_h|$. In a general situation we can only say that $|\omega_h|\rightarrow 0$ as $h\rightarrow 0^+$, which is a simple consequence of the Dominated Convergence Theorem. We deduce that $|\omega_h|=\omega(h)$, where $\omega:]0,+\infty[\rightarrow\mathbb R$ is such that $\lim_{h\rightarrow 0^+}\omega(h)=0$. As in the proof of point $ii)$ of Theorem \ref{main_upper_dirichlet}, we can prove that, for any $\Omega$ of finite measure (we take for simplicity $h=h(k)=C_d^{-1/2}\left(\frac{k}{|\Omega|}\right)^{-1/d}$ in \eqref{evsums-DirichletbiLaplacian1_step21}-\eqref{evsums-DirichletbiLaplacian1_step22}) \begin{multline}\label{rough_o}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}+\frac{M_d'}{|\Omega|}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
\omega\left(C_d^{-1/2}\left(\frac{k}{|\Omega|}\right)^{-1/d}\right)\\
+o\left(\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\omega\left(C_d^{-1/2}\left(\frac{k}{|\Omega|}\right)^{-1/d}\right)\right), \end{multline}
as $k\rightarrow+\infty$, for all $k\geq |\Omega|C_d^{-d/2} r_{\Omega}^{-d}$. Here $M_d'$ is a constant which depends only on the dimension and which can be computed explicitly as in the proof of point $ii)$ of Theorem \ref{main_upper_dirichlet}. Combining \eqref{rough_o} with the Berezin-Li-Yau inequality $$
\frac{1}{k}\sum_{j=1}^k\Lambda_j\geq\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}} $$ proved in \cite{Lap1997} for all domains of finite measure, we deduce the validity of \eqref{weyllaweig} for $\omega_j=\Lambda_j$ on domains of finite measure. \end{rem}
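As a one-dimensional illustration (our own, not from the text): on $\Omega=(0,1)$ the Dirichlet biharmonic eigenvalues are $\Lambda_j=\gamma_j^4$ with $\cos\gamma\cosh\gamma=1$ (see Section \ref{biharmonic1d}), and, assuming the Weyl constant satisfies $C_1^2=\pi^4$ (an assumption consistent with the Weyl expression in this case), the Berezin-Li-Yau lower bound can be checked directly:

```python
import math

def cc_root(n):
    # n-th positive root of cos(g)*cosh(g) = 1, bracketed around pi*(n + 1/2)
    a = math.pi * (n + 0.5)
    lo, hi = (a, a + 1.5) if n % 2 else (a - 1.5, a)
    flo = math.cos(lo) * math.cosh(lo) - 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        fm = math.cos(mid) * math.cosh(mid) - 1.0
        if (fm > 0.0) == (flo > 0.0):
            lo, flo = mid, fm
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Dirichlet biharmonic eigenvalues on (0,1): Lambda_j = gamma_j^4
eigs = [cc_root(n) ** 4 for n in range(1, 11)]
weyl = math.pi ** 4 / 5.0   # (d/(d+4)) C_d^2 for d = 1, assuming C_1 = pi^2
bly_ok = all(sum(eigs[:k]) / k >= weyl * k ** 4 for k in range(1, 11))
```
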
\begin{rem}
Now, let us denote by $D$ the Minkowski dimension of $\partial\Omega$ relative to $\Omega$, which is defined by $$
D:=\inf\left\{\beta\in[d-1,d]:\lim_{h\rightarrow 0^+}\frac{|\omega_h|}{h^{d-\beta}}<+\infty\right\}. $$ Let the $D$-dimensional Minkowski content of $\partial\Omega$ relative to $\Omega$ be defined by $$
\mathcal M_D(\partial\Omega):=\lim_{h\rightarrow 0^+}\frac{|\omega_h|}{h^{d-D}}. $$ Assume now that $\Omega$ is such that the Minkowski dimension of $\partial\Omega$ relative to $\Omega$ is $D\in]d-1,d[$ (for example, if $\Omega$ is a fractal set) and let $\mathcal M_D(\partial\Omega)$ be the Minkowski content of $\partial\Omega$ relative to $\Omega$. From \eqref{rough_o} we immediately see that \begin{equation*} \frac{1}{k}\sum_{j=1}^k\Lambda_j
\leq\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
+M_d'\frac{\mathcal M_D(\partial\Omega)}{|\Omega|}C_d^{\frac{D-d+4}{2}}\left(\frac{k}{|\Omega|}\right)^{\frac{D-d+4}{d}}
+o\left(\left(\frac{k}{|\Omega|}\right)^{\frac{D-d+4}{d}}\right), \end{equation*}
as $k\rightarrow+\infty$, for all $k\geq |\Omega|C_d^{-d/2} r_{\Omega}^{-d}$. Hence the second term of the upper bound for the average depends only on $k,d,D,|\Omega|$ and $\mathcal M_D(\partial\Omega)$. Analogous inequalities have been proved for the eigenvalues of the Dirichlet Laplacian (see \cite{HaPrSt18}), and are related to the so-called Weyl-Berry conjecture (see \cite{carmona_fractal,lapidus_fractal_2}). \end{rem}
\subsection{Asymptotically Weyl-sharp bounds on eigenvalues} Assume that $\Omega$ is such that \begin{equation}\label{dirichlet_ineq_1}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\geq\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}} \end{equation} and \begin{equation}\label{dirichlet_ineq_2}
\frac{1}{k}\sum_{j=1}^k\Lambda_j\leq\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}+A \left(\frac{k}{|\Omega|}\right)^{\frac{3}{d}} \end{equation} for some constant $A$ independent of $k$, for all $k\geq k_0$ (this is for example the case of point $ii)$ of Theorem \ref{main_upper_dirichlet}). Then \begin{multline}\label{dirichlet_ineq_1_2}
\Lambda_k\geq C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
-\left(\frac{6(d+1)}{d(d+4)}\frac{C_d^2}{|\Omega|^{\frac{4}{d}}}+2\frac{A}{|\Omega|^{\frac{3}{d}}}\right)k^{\frac{7}{2d}}\\
+\left(\frac{C_d^2}{d(d+4)|\Omega|^{\frac{4}{d}}}+\frac{d+3}{d}\frac{A}{|\Omega|^{\frac{3}{d}}}\right)k^{\frac{3}{d}}
-\frac{3A}{2}\left(\frac{9+12d}{4d^2}\right)\frac{k^{\frac{5}{2d}}}{|\Omega|^{\frac{3}{d}}}
+\frac{9A}{16d^2}\frac{k^{\frac{2}{d}}}{|\Omega|^{\frac{3}{d}}}, \end{multline} and \begin{multline}\label{dirichlet_ineq_2_2}
\Lambda_{k+1}\leq C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}
+\left(\frac{6(d+1)}{d(d+4)}\frac{C_d^2}{|\Omega|^{\frac{4}{d}}}+2\frac{A}{|\Omega|^{\frac{3}{d}}}\right)k^{\frac{7}{2d}}\\
+\left(\frac{9C_d^2}{d(d+4)|\Omega|^{\frac{4}{d}}}+\frac{d+3}{d}\frac{A}{|\Omega|^{\frac{3}{d}}}\right)k^{\frac{3}{d}}
+\frac{3A}{2}\left(\frac{9+12d}{4d^2}\right)\frac{k^{\frac{5}{2d}}}{|\Omega|^{\frac{3}{d}}}
+\frac{81A}{16d^2}\frac{k^{\frac{2}{d}}}{|\Omega|^{\frac{3}{d}}}. \end{multline}
In particular, for all $k\geq k_0$ there exists a constant $C(d,|\Omega|,A)$ such that \begin{equation}\label{modulus_dirichlet}
\left|\Lambda_k-C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\right|\leq C(d,|\Omega|,A)k^{\frac{7}{2d}}. \end{equation} Inequalities \eqref{dirichlet_ineq_1_2} and \eqref{dirichlet_ineq_2_2} follow from \eqref{dirichlet_ineq_1} and \eqref{dirichlet_ineq_2} by observing that $$ \Lambda_k\geq\frac{1}{l}\sum_{j=k-l+1}^k\Lambda_j $$ and $$ \Lambda_{k+1}\leq\frac{1}{l}\sum_{j=k+1}^{k+l}\Lambda_j, $$ and by choosing $l\in\mathbb N$ such that $$ l=k^{1-\frac{1}{2d}}+b $$ with $b\in\left[-\frac{1}{2},\frac{1}{2}\right]$. In particular, with this choice, \begin{equation}\label{singe_step_4} \frac{1}{2}k^{1-\frac{1}{2d}}\leq l\leq \frac{3}{2}k^{1-\frac{1}{2d}}, \end{equation} and $k^{1-\frac{1}{2d}}-\frac{1}{2}\leq l\leq k^{1-\frac{1}{2d}}+\frac{1}{2}$. For example, we see that \begin{equation}\label{singe_step_1} \begin{split} \Lambda_k & \geq\frac{1}{l}\sum_{j=k-l+1}^k\Lambda_j=\frac{1}{l}\left(\sum_{j=1}^k\Lambda_j-\sum_{j=1}^{k-l}\Lambda_j\right)\\
& \geq\left(\frac{d}{d+4}\frac{C_d^2}{|\Omega|^{\frac{4}{d}}}\frac{1}{l}\left(k^{1+\frac{4}{d}}-(k-l)^{1+\frac{4}{d}}\right)-\frac{A}{|\Omega|^{\frac{3}{d}}}\frac{(k-l)^{1+\frac{3}{d}}}{l}\right)\\
& =\frac{d}{d+4}C_d^2\left(\frac{k}{|\Omega|}\right)^{\frac{4}{d}}\frac{k}{l}\left(1-\left(1-\frac{l}{k}\right)^{1+\frac{4}{d}}\right)
-A\left(\frac{k}{|\Omega|}\right)^{\frac{3}{d}}\frac{k}{l}\left(1-\frac{l}{k}\right)^{1+\frac{3}{d}}. \end{split} \end{equation} We also have \begin{multline}\label{singe_step_2} \frac{k}{l}\left(1-\left(1-\frac{l}{k}\right)^{1+\frac{4}{d}}\right)=\frac{k}{l}\left(1-\left(1-\frac{l}{k}\right)\left(\left(1-\frac{l}{k}\right)^{\frac{2}{d}}\right)^2\right)\\ \geq \frac{k}{l}\left(1-\left(1-\frac{l}{k}\right)\left(1-\frac{2l}{dk}\right)^2\right)=\frac{d+4}{d}-\frac{4(d+1)l}{d^2k}+\frac{4l^2}{d^2k^2}, \end{multline} and similarly \begin{equation}\label{singe_step_3} \frac{k}{l}\left(1-\frac{l}{k}\right)^{1+\frac{3}{d}}\leq\frac{k}{l}-\frac{3+d}{d}+\frac{(9+12d)l}{4d^2k}-\frac{9l^2}{4d^2k^2}. \end{equation} Bound \eqref{dirichlet_ineq_1_2} follows by plugging \eqref{singe_step_2} and \eqref{singe_step_3} into \eqref{singe_step_1} and by \eqref{singe_step_4}. The upper bound \eqref{dirichlet_ineq_2_2} is proved similarly.
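The two elementary Bernoulli-type inequalities behind this step can be sampled over $t=l/k\in(0,1]$; an illustrative sketch (not part of the argument):

```python
# Sample check, with t = l/k, of
#   (1/t)(1 - (1-t)^(1+4/d)) >= (d+4)/d - 4(d+1)t/d^2 + 4t^2/d^2
#   (1/t)(1-t)^(1+3/d)       <= 1/t - (3+d)/d + (9+12d)t/(4d^2) - 9t^2/(4d^2)
violations = []
for d in (2, 3, 4, 5):
    for i in range(1, 1001):
        t = i / 1000.0
        lower = (d + 4) / d - 4 * (d + 1) * t / d ** 2 + 4 * t ** 2 / d ** 2
        upper = 1 / t - (3 + d) / d + (9 + 12 * d) * t / (4 * d ** 2) - 9 * t ** 2 / (4 * d ** 2)
        if (1 - (1 - t) ** (1 + 4 / d)) / t < lower - 1e-9:
            violations.append(("lower", d, t))
        if (1 - t) ** (1 + 3 / d) / t > upper + 1e-9:
            violations.append(("upper", d, t))
```

Both estimates become equalities at $t=1$, and for $d=2$ the first one is an identity.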
\section{The biharmonic Navier and Kuttler-Sigillito operators}
In this section we focus our attention on the Navier \eqref{equazione},\eqref{IBC} and Kuttler-Sigillito \eqref{equazione},\eqref{KBC} problems. In particular, the quadratic form \eqref{qf} will be set in $H^2(\Omega)\cap H^1_0(\Omega)$ for the Navier problem, and in $H^2_{\nu}(\Omega)$ for the Kuttler-Sigillito problem, for $\Omega\subset \mathbb R^d$ a bounded open set. We observe that for the eigenvalues of the biharmonic operator with Navier and Kuttler-Sigillito boundary conditions, the very same lower bounds on Riesz means (upper bounds on averages) of the Dirichlet case hold. In particular we have the following result, which is valid for any domain $\Omega$ with finite Lebesgue measure (for the Navier problem) and with Lipschitz boundary (for the Kuttler-Sigillito problem).
\begin{thm} Let $a\in(-(d-1)^{-1},1)$. \begin{enumerate}[i)] \item Theorem \ref{dirichlet_bilaplacian_thm_general}, Corollary \ref{dirichlet_bounds_spec_fcn_bi} and Theorem \ref{main_upper_dirichlet} hold with $\Lambda_j,U_j$ replaced by $\tilde\Lambda_j(a),\tilde U_j$ or $\tilde M_j(a),\tilde V_j$. \item Formulas \eqref{dirichlet_ineq_1_2}, \eqref{dirichlet_ineq_2_2} and \eqref{modulus_dirichlet} hold with $\Lambda_j$ replaced by $\tilde\Lambda_j(a)$ or $\tilde M_j(a)$. \end{enumerate} \end{thm} \begin{proof} As for point $i)$, the proofs of Theorem \ref{dirichlet_bilaplacian_thm_general}, Corollary \ref{dirichlet_bounds_spec_fcn_bi} and Theorem \ref{main_upper_dirichlet} in the case of Navier or Kuttler-Sigillito conditions can be carried out exactly in the same way as in the Dirichlet case by using test functions $\phi\in H^2_0(\Omega)$. Also, those arguments yield the same results for the Navier case when using $\phi\in H^2(\Omega)\cap H^1_0(\Omega)$. Alternatively, the results immediately follow by pointwise comparison of eigenvalues: $$ \tilde M_j(a),\tilde\Lambda_j(a)\leq\Lambda_j, $$ for all positive integers $j$, see \eqref{fullchain}. Point $ii)$ follows from point $i)$ as in the Dirichlet case. \end{proof}
We also have the following inequalities relating Navier eigenvalues to Laplacian eigenvalues. Note that, if $\Omega$ satisfies the uniform outer ball condition, the inequalities are valid also for $a=1$.
\begin{thm} Let $a\in(-(d-1)^{-1},1)$. For all positive integers $m,n,N$ \begin{equation}\label{prima}
\sum_{j=1}^{n}(\lambda_{n+1}-\lambda_j)\geq \sum_{k=1}^{N}(\lambda_{n+1}-\int_{\Omega}|\nabla \tilde{U}_k|^2\,dx), \end{equation} \begin{equation}\label{seconda}
\sum_{j=2}^{m}(\mu_{m+1}-\mu_j)\mu_j\geq \sum_{k=1}^{N}(\mu_{m+1}\int_{\Omega}|\nabla \tilde{U}_k|^2\,dx-\int_{\Omega}|D^2 \tilde{U}_k|^2). \end{equation} Consequently, \begin{multline*}
\left(\mu_{m+1}\sum_{j=1}^{n}(\lambda_{n+1}-\lambda_j)+\sum_{j=2}^{m}(\mu_{m+1}-\mu_j)\mu_j\right)(1-a)\\
\geq \sum_{k=1}^{N}\left((1-a)\mu_{m+1}\lambda_{n+1}+a\left(\lambda_{n+1}-\frac 1 N \sum_{j=1}^{n}(\lambda_{n+1}-\lambda_j)\right)^2-\tilde{\Lambda}_k(a)\right). \end{multline*} Moreover, \begin{equation*}
(1-a)\sum_j\left(z(z-\lambda_j)_{+}+(z-\mu_j)_{+}\mu_j\right)
\geq \sum_{k=1}^N\left((1-a)z^2+a\left(z-\frac 1 N \sum_j(z-\lambda_j)_{+}\right)^2-\tilde{\Lambda}_k\right), \end{equation*} and in particular for $a=0$ \begin{equation*}
z\sum_j\left((z-\lambda_j)_{+}-(z-\mu_j)_{+}\right)+\sum_{j=2}^{m}(z^2-\mu_j^2)\geq \sum_j(z^2-\tilde{\Lambda}_j)_{+}. \end{equation*} \end{thm}
\begin{proof}
Inequality \eqref{prima} is obtained from \eqref{RieszVersion} with $\omega_j=\lambda_j$, $\psi_j=u_j$, $f_p=\tilde U_k$, and $Q(f,f)=\int_\Omega|\nabla f|^2$, while for \eqref{seconda} we used $\omega_j=\mu_j$, $\psi_j=v_j$, $f_p=\partial_\alpha\tilde U_k$, and then summed over $\alpha=1,\dots,d$. Moreover, inequality \eqref{prima} also yields $$ \sum_{j=1}^{n}\left(\lambda_{n+1}-\lambda_j\right)\geq \sum_{k=1}^{N}\left(\lambda_{n+1}-\frac t 2 -\frac 1 {2t}\int_{\Omega}(\Delta \tilde{U}_k)^2\,dx\right), $$ which, coupled with \eqref{seconda}, yields the appearance of $\tilde\Lambda_k(a)$. \end{proof}
\section{The biharmonic Neumann operator}
In this section we focus our attention on the biharmonic Neumann problem \eqref{equazione}, \eqref{NBC}. In particular, the quadratic form \eqref{qf} will be set in $H^2(\Omega)$, for $\Omega\subset \mathbb R^d$ a bounded set with continuous boundary.
Our result sharpens the Kr\"{o}ger-Laptev bound by means of a refinement of Young's inequality for real numbers; this not only improves the estimates for Riesz means and sums, but also provides a bound on individual eigenvalues. It will be useful to introduce the following notation: \begin{equation*}
m_k:= C_d^2\left(\frac{k}{|\Omega|}\right)^{4/d}, \quad S_k(a):=\frac{\frac{d+4}{d}\frac{1}{k}\sum_{j=1}^k M_j(a)}{m_k}. \end{equation*} Note that $m_k$ is the Weyl expression, and the Kr\"{o}ger-Laptev inequality reads $S_k(a)\leq 1$. We prove the following refinement of this inequality.
\begin{thm} For all $k\geq 1$, and for all $a\in(-(d-1)^{-1},1)$, the Neumann eigenvalue $M_{k+1}(a)$ satisfies \begin{equation*}
m_k(1-S_k(a)) \geq (\sqrt{M_{k+1}(a)}-\sqrt{m_k})^2, \end{equation*} or equivalently \begin{equation*}
m_k\left(1-\sqrt{1-S_k(a)}\right)^2\leq M_{k+1}(a) \leq m_k\left(1+\sqrt{1-S_k(a)}\right)^2. \end{equation*} \end{thm} \begin{proof} The trial functions $f(x)=e^{i p\cdot x}$ are admissible, so choosing them in \eqref{RieszVersion} (see also \cite{Kro,Lap1997}) leads after a calculation to the following bound for the eigenvalues of the Neumann biharmonic operator, where the set $\mathfrak{M}$ is chosen as $\mathbb{R}^d$ endowed with the Lebesgue measure, and $\mathfrak{M}_0$ is the ball of radius $R$ in $\mathbb R^d$ (see \cite{EHIS,Kro} for details of the calculation):
\begin{equation*}
M_{k+1}(a)R^d-\frac{d}{d+4}R^{d+4}\leq m_k^{d/4}\left(M_{k+1}(a)-\frac{1}{k}\sum_{i=1}^{k}M_i(a)\right), \end{equation*} for all $R>0$. Putting $R^d=m_k^{d/4}x_k^{d/4}$ with $x_k=\frac{M_{k+1}(a)}{m_k}$ we get the bound \begin{equation*}
\frac{d+4}{d}\frac{1}{k}\sum_{i=1}^{k}M_i(a)-m_k\leq m_k\frac{4}{d}\,\left(\frac{d+4}{4}\,x_k-\frac{d}{4}-x_k^{\frac{d+4}{4}}\right). \end{equation*} Applying the refinement of Young's inequality given by Lemma \ref{technical_lemma} with $p=d/4$, we obtain \begin{equation*}
\frac{d+4}{d}\frac{1}{k}\sum_{i=1}^{k}M_i(a)-m_k\leq -m_k\,(\sqrt{x_k}-1)^2, \end{equation*} which strengthens the Kr\"oger-Laptev estimate \begin{equation*}
\frac{d+4}{d}\frac{1}{k}\sum_{i=1}^{k}M_i(a)\leq m_k=C_d^2\frac{k^{4/d}}{|\Omega|^{4/d}} \end{equation*} and yields the desired bound on $M_{k+1}(a)$. \end{proof}
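The substitution and the subsequent rearrangement in the proof are purely algebraic; the following sketch (illustrative; $m$, $x$, $S$ are arbitrary positive sample values standing for $m_k$, $x_k$ and $\frac{1}{k}\sum_i M_i(a)$) checks both steps numerically:

```python
def collapse_gap(d, m, x, S):
    # With M = m*x and R^d = m^(d/4) x^(d/4):
    #  (1) M R^d - d/(d+4) R^(d+4) collapses to 4/(d+4) (m x)^((d+4)/4);
    #  (2) the gaps of the original and of the rearranged inequality agree
    #      up to the positive factor (d+4)/(d m^(d/4)).
    M = m * x
    Rd = m ** (d / 4.0) * x ** (d / 4.0)                 # R^d
    lhs1 = M * Rd - d / (d + 4.0) * Rd ** ((d + 4.0) / d)
    err_collapse = lhs1 - 4.0 / (d + 4.0) * (m * x) ** ((d + 4.0) / 4.0)
    gap1 = m ** (d / 4.0) * (M - S) - lhs1               # original inequality gap
    lhs2 = (d + 4.0) / d * S - m
    rhs2 = m * (4.0 / d) * ((d + 4.0) / 4.0 * x - d / 4.0 - x ** ((d + 4.0) / 4.0))
    err_equiv = gap1 * (d + 4.0) / (d * m ** (d / 4.0)) - (rhs2 - lhs2)
    return err_collapse, err_equiv
```
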
\begin{lem} \label{technical_lemma} For any $p,x\ge 0$, let $y_p(x)=(p+1)x-p-x^{p+1}$. Then $$ y_p(x)\le -p(1-\sqrt{x})^2. $$ \end{lem}
\begin{proof} From Young's inequality we know that $y_p(x)\le 0$ (see \cite{HaSt16}). The assertion follows from the identity $$ y_p(x)=-p(1-\sqrt{x})^2+\sqrt{x}y_{2p}(\sqrt{x}). $$ \end{proof}
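Both the inequality of the lemma and the identity used in its proof can be sampled numerically; an illustrative sketch:

```python
import math

def y(p, x):
    # y_p(x) = (p+1)x - p - x^(p+1) from the technical lemma
    return (p + 1) * x - p - x ** (p + 1)

max_slack = 0.0   # violation of y_p(x) <= -p(1 - sqrt(x))^2
max_id_err = 0.0  # error in y_p(x) = -p(1 - sqrt(x))^2 + sqrt(x) * y_{2p}(sqrt(x))
for pnum in range(1, 13):
    p = pnum / 4.0
    for xnum in range(0, 81):
        x = xnum / 20.0
        s = math.sqrt(x)
        bound = -p * (1.0 - s) ** 2
        max_slack = max(max_slack, y(p, x) - bound)
        max_id_err = max(max_id_err, abs(y(p, x) - (bound + s * y(2.0 * p, s))))
```

Equality in the lemma occurs at $x=0$ and $x=1$.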
\section{One dimensional biharmonic eigenvalue problems} \label{biharmonic1d} On the interval $[0,1]$ we consider the fourth-order eigenvalue problems: \begin{equation}\label{1d-biharmonic-ev-problems} \left\{\begin{array}{l} u^{(4)}_n(x)=\Lambda^{(i,j)}_nu_n(x),\quad x\in(0,1), \\ u^{(i)}(0)=u^{(j)}(0)=u^{(i)}(1)=u^{(j)}(1)=0, \end{array}\right. \end{equation} where $i,j\in\{0,1,2,3\}$, $i\neq j$ and $u_n^{(j)}$ denotes the $j$-th derivative of the function $u_n$. There are six eigenvalue problems, ranging from the Dirichlet problem, corresponding to $(0,1)$, to the Neumann problem, corresponding to $(2,3)$. It is easy to see that problem \eqref{1d-biharmonic-ev-problems} is a Sturm-Liouville problem for any choice of $i,j$, and in particular the spectrum consists of an increasing sequence of simple eigenvalues (with the possible exception of the kernel) diverging to infinity. In order to further analyze the eigenvalues, we need to study the equation \begin{equation}\label{1-d-ev-equation}
\cos\gamma \cosh\gamma=1. \end{equation} Let $\gamma_0=0$ and $\gamma_n$ be the positive roots (in increasing order) of \eqref{1-d-ev-equation}. Then \begin{equation}\label{1st-1d-ev-expansion}
\gamma_n=\pi\left(n+\frac{1}{2}\right)+(-1)^{n+1}r_n,\quad 0<r_n < \frac{\pi}{2} \end{equation} where $r_n$ is strictly decreasing in $n$ and satisfies the following bounds. \begin{prop}
For all odd positive integers $n$ \begin{equation}\label{sigma-bound-n-odd}
\frac{1}{\cosh\pi\left(n+\frac{1}{2}\right)+1} \leq r_n\leq \arcsin \left(\frac{1}{\cosh\pi\left(n+\frac{1}{2}\right)}\right), \end{equation} and for all even positive integers $n$ \begin{equation}\label{sigma-bound-n-even}
\arcsin \left(\frac{1}{\cosh\pi\big(n+\frac{1}{2}\big)}\right) \leq r_n\leq \arcsin \left(\frac{2}{\cosh\pi\big(n+\frac{1}{2}\big)}\cdot\frac{1}{1+\sqrt{1-\frac{4}{\cosh\pi(n+\frac{1}{2})}}}\right). \end{equation} Therefore, as $n\to\infty$, \begin{equation}\label{gamma-expansion} r_n=\frac{1}{\cosh\pi(n+\frac{1}{2})}+O\left(\frac{1}{\cosh^2\pi(n+\frac{1}{2})}\right). \end{equation} \end{prop}
\begin{proof}
The cosine function is positive between the zeros $\left(2m-\frac{1}{2}\right)\pi$ and $\left(2m+\frac{1}{2}\right)\pi$, and on each such interval equation \eqref{1-d-ev-equation} always has two roots by the intermediate value theorem applied to the continuous function $\gamma \mapsto\cos\gamma \cosh\gamma$, since the product vanishes at the endpoints while $\cos 2m\pi \cosh 2m\pi= \cosh 2m\pi >1$ at the midpoint.
Therefore, we may label the positive roots $\gamma$ as in \eqref{1st-1d-ev-expansion}, where $r_n$ verifies the condition \begin{equation}\label{condition_rn}
1=\sin r_n \cosh \left(\pi\left(n+\frac{1}{2}\right)+(-1)^{n+1} r_n\right), \end{equation} from which we easily derive the inequalities \eqref{sigma-bound-n-odd} and \eqref{sigma-bound-n-even}, which in turn yield the asymptotic expansion \eqref{gamma-expansion}. From the identities $\cosh \gamma_{n+1}\sin r_{n+1} = \cosh \gamma_{n}\sin r_{n}=1$ and the inequality $\gamma_{n+1}> \gamma_{n}$ we see that $r_{n}$ is strictly decreasing. \end{proof}
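The roots $\gamma_n$ and the behaviour of $r_n$ can be checked numerically. The following sketch (illustrative; the lower bound $r_n\geq\big(\cosh\pi(n+\frac{1}{2})+1\big)^{-1}$ used in the assertions is an elementary consequence of \eqref{condition_rn}) locates the first roots by bisection:

```python
import math

def f(g):
    return math.cos(g) * math.cosh(g) - 1.0

def root(n):
    # gamma_n lies in (a, a+1.5) for odd n and in (a-1.5, a) for even n, a = pi(n+1/2)
    a = math.pi * (n + 0.5)
    lo, hi = (a, a + 1.5) if n % 2 else (a - 1.5, a)
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0.0) == (flo > 0.0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

gam = {n: root(n) for n in range(1, 6)}
r = {n: abs(gam[n] - math.pi * (n + 0.5)) for n in gam}
```
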
We now present the spectra and the associated (non-normalized) eigenfunctions of the different eigenvalue problems.
\begin{itemize} \item {\it Biharmonic Dirichlet eigenvalue problem.} The eigenfunctions are of the form \begin{equation*}
u_n(x)=A\big(\cosh(\gamma_nx)-\cos(\gamma_nx)\big)- \sinh(\gamma_nx)+\sin(\gamma_nx), \end{equation*} with $ A=\frac{\sinh(\gamma_n)-\sin(\gamma_n)}{\cosh(\gamma_n)-\cos(\gamma_n)}$ and \begin{equation*}
\Lambda^{(0,1)}_n= \gamma_n^4. \end{equation*}
\item {\it Navier eigenvalue problem.} The operator is the square of the Dirichlet Laplacian and therefore \begin{equation*}
\Lambda^{(0,2)}_n= \pi^4n^4. \end{equation*}
\item {\it Dirichlet-Neumann mixed eigenvalue problem.} There is one zero eigenvalue with eigenfunction $u_1(x)=x(1-x)$. For the positive eigenvalues the eigenfunctions are of the form \begin{equation*}
u_n(x)=A\big(\cosh(\gamma_nx)-\cos(\gamma_nx)\big)- \sinh(\gamma_nx)-\sin(\gamma_nx), \end{equation*} with $A=\frac{\sinh(\gamma_n)+\sin(\gamma_n)}{\cosh(\gamma_n)-\cos(\gamma_n)}$ and \begin{equation*}
\Lambda^{(0,3)}_n= \gamma_{n-1}^4. \end{equation*}
\item {\it Neumann-Dirichlet mixed eigenvalue problem.} There is one zero eigenvalue with eigenfunction $u_1(x)=1$. For the positive eigenvalues the eigenfunctions are of the form \begin{equation*}
u_n(x)=\big(\cosh(\gamma_nx)+\cos(\gamma_nx)\big)- A\big(\sinh(\gamma_nx)-\sin(\gamma_nx)\big), \end{equation*} with $A=\frac{\sinh(\gamma_n)-\sin(\gamma_n)}{\cosh(\gamma_n)-\cos(\gamma_n)}$ and \begin{equation*}
\Lambda^{(1,2)}_n= \gamma_{n-1}^4. \end{equation*}
\item {\it Kuttler-Sigillito eigenvalue problem.} The operator is the square of the Neumann Laplacian and therefore \begin{equation*}
\Lambda^{(1,3)}_n= \pi^4(n-1)^4. \end{equation*}
\item {\it Biharmonic Neumann eigenvalue problem.} The eigenvalue $0$ has multiplicity $2$ with corresponding eigenfunctions $1,x$. For the positive eigenvalues the eigenfunctions are of the form \begin{equation*}
u_n(x)=A\big(\cosh(\gamma_nx)+\cos(\gamma_nx)\big)- \big(\sinh(\gamma_nx)-\sin(\gamma_nx)\big), \end{equation*} with $A=\frac{\sinh(\gamma_n)-\sin(\gamma_n)}{\cosh(\gamma_n)-\cos(\gamma_n)}$ and \begin{equation*}
\Lambda^{(2,3)}_1,\Lambda^{(2,3)}_2=0,\quad \Lambda^{(2,3)}_n= \gamma_{n-2}^4, \quad n\geq 3. \end{equation*} \end{itemize}
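As a sanity check (ours, not part of the text), one can verify numerically that the biharmonic Dirichlet eigenfunction above, built from the first root of $\cos\gamma\cosh\gamma=1$, satisfies all four boundary conditions $u(0)=u'(0)=u(1)=u'(1)=0$.

```python
import math

# Check (ours): the clamped eigenfunction u built from the first root of
# cos(g)cosh(g) = 1 vanishes together with u' at both endpoints of (0,1).

def first_root():
    f = lambda g: math.cos(g) * math.cosh(g) - 1.0
    lo, hi = math.pi * 1.5, math.pi * 1.5 + 0.2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (f(lo) > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

g = first_root()
A = (math.sinh(g) - math.sin(g)) / (math.cosh(g) - math.cos(g))

def u(x):
    return A * (math.cosh(g * x) - math.cos(g * x)) - math.sinh(g * x) + math.sin(g * x)

def du(x):
    # derivative of u
    return g * (A * (math.sinh(g * x) + math.sin(g * x)) - math.cosh(g * x) + math.cos(g * x))
```

The condition $u'(1)=0$ reduces to $\sinh^2\gamma-\sin^2\gamma=(\cosh\gamma-\cos\gamma)^2$, which holds precisely because $\cos\gamma\cosh\gamma=1$ at the eigenvalues.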
\begin{rem} We note that the Dirichlet-Neumann and the Neumann-Dirichlet eigenvalue problems have not been described in Section \ref{sec:2} and in fact they have a completely different nature from the Dirichlet, Navier, Kuttler-Sigillito and Neumann problems. They cannot be associated with the quadratic form \eqref{qf}; however, they can be understood as a ``reduction'' of a buckling-type eigenvalue problem of sixth order which has a standard variational formulation, namely the problem \begin{equation} \begin{cases}\label{buck6} u^{(6)}(x)=\omega u''(x)\,,& x\in(0,1),\\ u(0)=u'''(0)=u^{(4)}(0)=u(1)=u'''(1)=u^{(4)}(1)=0. \end{cases} \end{equation} The weak formulation of this problem reads $$ \int_0^1 u'''(x)\phi'''(x)dx=\omega\int_0^1 u'(x)\phi'(x)dx\,,\ \ \ \forall\phi\in H^3((0,1))\cap H^1_0((0,1)), $$ in the unknowns $\omega\in\mathbb R$, $u\in H^3((0,1))\cap H^1_0((0,1))$. It is standard to recast this problem as an eigenvalue problem for a compact self-adjoint operator on a Hilbert space.
It is easy to prove that any solution of $u^{(6)}(x)=\omega u''(x)$ is of the form $$ u(x)=a_0+a_1x+a_2\sin(\omega^{1/4}x)+a_3\cos(\omega^{1/4}x)+a_4 e^{\omega^{1/4}x}+a_5 e^{-\omega^{1/4}x}, $$ for some $a_0,...,a_5\in\mathbb R$. Imposing boundary conditions it is possible to prove that $\omega_1=0$ is an eigenvalue of multiplicity one with corresponding (non-normalized) eigenfunction $x(1-x)$, while all the other eigenvalues $\omega_n$, $n\geq 2$, are simple and positive, and are given implicitly by the equation $\cos(\omega^{1/4})\cosh(\omega^{1/4})=1$. This means that $\omega_n=\gamma_{n-1}^4$, i.e., the eigenvalues of \eqref{buck6} coincide with $\Lambda_n^{(0,3)}$ and $\Lambda_n^{(1,2)}$.
Also the eigenfunctions coincide. We claim that an eigenpair $(u,\omega)$ of \eqref{buck6} is also an eigenpair of the Dirichlet-Neumann problem, and vice-versa. In fact, let $(u,\omega)$ be an eigenpair of \eqref{buck6}. Then we have that $(u^{(4)}(x)-\omega u(x))''=0$ for $x\in(0,1)$ and from the boundary conditions we also have that $u^{(4)}(0)-\omega u(0)=u^{(4)}(1)-\omega u(1)=0$, thus $u^{(4)}(x)-\omega u(x)=0$ for $x\in(0,1)$. Moreover, $u'''(0)=u'''(1)=0$. This implies that $u$ is a solution to the Dirichlet-Neumann problem. Vice-versa, let $(u,\omega)$ be an eigenpair of the Dirichlet-Neumann problem. Thus $u^{(6)}(x)=\omega u''(x)$ for $x\in(0,1)$, and clearly $u(0)=u'''(0)=u(1)=u'''(1)=0$. Moreover $u^{(4)}(0)=\omega u(0)=0$ and $u^{(4)}(1)=\omega u(1)=0$.
Analogous arguments allow us to deduce that, given an eigenpair $(u,\omega)$ of problem \eqref{buck6}, then $(u'',\omega)$ is an eigenpair of the Neumann-Dirichlet problem. Vice-versa, given an eigenpair $(v,\omega)$ of the Neumann-Dirichlet problem, then $(u,\omega)$ is an eigenpair of problem \eqref{buck6}, where $u$ is the solution to the boundary value problem $u''(x)=v(x)$ for $x\in (0,1)$, $u(0)=u(1)=0$.
\end{rem}
Summarizing, for $n\geq 1$ we have \begin{eqnarray*}
\Lambda^{(0,1)}_n&=& \gamma_n^4\\
\Lambda^{(0,2)}_n &=& \pi^4n^4 \\
\Lambda^{(0,3)}_n &=& \gamma_{n-1}^4\\ \Lambda^{(1,2)}_n&=& \gamma_{n-1}^4 \\
\Lambda^{(1,3)}_n &=& \pi^4(n-1)^4 \\
\Lambda^{(2,3)}_n &=& \gamma_{n-2}^4 \end{eqnarray*} with the convention $\gamma_{-1}=\gamma_0=0$. Since $\pi n<\gamma_n<\pi(n+1)$, the spectra are listed in decreasing order and the eigenvalues of two ``neighboring'' operators in the table interlace with strict inequalities for all positive eigenvalues (with the only exception $\Lambda^{(0,3)}_n=\Lambda^{(1,2)}_n$). In particular, for all positive integers $n$ we have the following identities \begin{equation*}
\Lambda^{(0,1)}_n=\Lambda^{(0,3)}_{n+1}=\Lambda^{(1,2)}_{n+1}=\Lambda^{(2,3)}_{n+2}. \end{equation*} We note that the Neumann eigenvalues satisfy the sharp Weyl-type bound of the form $\Lambda^{(2,3)}_n\leq \pi^4(n-1)^4$, and not $\Lambda^{(2,3)}_n\leq \pi^4(n-2)_{+}^4$, in which the shift would equal the dimension of the kernel.
With respect to semiclassical limits, while there is no two-term asymptotic expansion of the form \eqref{semiclassicalcounting} for the counting function $N(z)$, the expansion \eqref{semiclassicalriesz} for the Riesz means $R_1(z)$ is still valid. This is a corollary of the following asymptotically sharp two-term upper and lower bounds. We shall denote by $R_1^{(i,j)}(z)$ the Riesz means corresponding to the eigenvalues $\Lambda_n^{(i,j)}$, $i,j\in\{0,1,2,3\}$, $i\ne j$. A crucial ingredient in the proof of the following theorem is Lemma \ref{lemmaonedim} on polynomial bounds for one-dimensional Riesz means, which is proved at the end of this section.
\begin{thm}\label{riesz-1-d} For any $z>0$ we have the following inequalities \begin{itemize}
\item {\it Biharmonic Dirichlet eigenvalue problem.} \begin{multline*} \frac{4}{5\pi}z^{\frac{5}{4}}-z-\frac{11\pi}{6}z^{\frac{3}{4}}-\frac{3\pi^2}{2}z^{\frac{1}{2}}-\frac{127\pi^3}{240}z^{\frac{1}{4}}-c\\ \leq\sum_{n\geq 1}(z-\Lambda_n^{(0,1)})_+\\ \leq\frac{4}{5\pi}z^{\frac{5}{4}}-z+\frac{\pi}{6}z^{\frac{3}{4}}+\frac{3\pi^2}{2}z^{\frac{1}{2}}+\frac{\pi^3}{30}z^{\frac{1}{4}}+\frac{\pi^4}{8}+c. \end{multline*} \item {\it Navier eigenvalue problem.} \begin{multline*} \frac{4}{5\pi}z^{\frac{5}{4}}-\frac{1}{2}z-\frac{\pi}{3}z^{\frac{3}{4}}\\ \leq\sum_{n\geq 1}(z-\Lambda_n^{(0,2)})_+\\ \leq\frac{4}{5\pi}z^{\frac{5}{4}}-\frac{1}{2}z+\frac{\pi}{6}z^{\frac{3}{4}}+\frac{\pi^2}{12}z^{\frac{1}{2}}. \end{multline*} \item {\it Dirichlet-Neumann and Neumann-Dirichlet mixed eigenvalue problem.} \begin{multline*} \frac{4}{5\pi}z^{\frac{5}{4}}-\frac{11\pi}{6}z^{\frac{3}{4}}-\frac{3\pi^2}{2}z^{\frac{1}{2}}-\frac{127\pi^3}{240}z^{\frac{1}{4}}-c\\ \leq\sum_{n\geq 1}(z-\Lambda_n^{(0,3)})_+=\sum_{n\geq 1}(z-\Lambda_n^{(1,2)})_+\\ \leq\frac{4}{5\pi}z^{\frac{5}{4}}+\frac{\pi}{6}z^{\frac{3}{4}}+\frac{3\pi^2}{2}z^{\frac{1}{2}}+\frac{\pi^3}{30}z^{\frac{1}{4}}+\frac{\pi^4}{8}+c. \end{multline*}
\item {\it Kuttler-Sigillito eigenvalue problem.} \begin{multline*} \frac{4}{5\pi}z^{\frac{5}{4}}+\frac{1}{2}z-\frac{\pi}{3}z^{\frac{3}{4}}\\ \leq\sum_{n\geq 1}(z-\Lambda_n^{(1,3)})_+\\ \leq\frac{4}{5\pi}z^{\frac{5}{4}}+\frac{1}{2}z+\frac{\pi}{6}z^{\frac{3}{4}}+\frac{\pi^2}{12}z^{\frac{1}{2}}. \end{multline*} \item {\it Biharmonic Neumann eigenvalue problem.} \begin{multline*} \frac{4}{5\pi}z^{\frac{5}{4}}+z-\frac{11\pi}{6}z^{\frac{3}{4}}-\frac{3\pi^2}{2}z^{\frac{1}{2}}-\frac{127\pi^3}{240}z^{\frac{1}{4}}-c\\ \leq\sum_{n\geq 1}(z-\Lambda_n^{(2,3)})_+\\ \leq\frac{4}{5\pi}z^{\frac{5}{4}}+z+\frac{\pi}{6}z^{\frac{3}{4}}+\frac{3\pi^2}{2}z^{\frac{1}{2}}+\frac{\pi^3}{30}z^{\frac{1}{4}}+\frac{\pi^4}{8}+c. \end{multline*} \end{itemize} Here $c\in]2,3[$ is defined by \eqref{c}. \end{thm}
\begin{proof} Let us start with the Dirichlet eigenvalues $\Lambda^{(0,1)}_n=\gamma_n^4$. We have $$ R_1^{(0,1)}(z)=\sum_{n\geq 1}\left(z-\left(\pi\left(n+\frac 1 2\right)+(-1)^{n+1}r_n\right)^4\right)_+. $$ We set $R:=z^{\frac{1}{4}}\pi^{-1}$ and $\rho_n=r_n\pi^{-1}$. With this notation we have \begin{equation*}
\sum_{n\geq 1}(z-\gamma_n^4)_{+}=\pi^{4}\sum_{n\geq 1}(R^4-\gamma_n^4\pi^{-4})_{+}=\pi^{4}\sum_{n=1}^N\left(R^4-\left(n+\frac{1}{2}+(-1)^{n+1}\rho_n\right)^4\right) \end{equation*} for some $N$ satisfying $R-2< N\leq R$. Expanding the sum we get \begin{equation*} \begin{split}
\sum_{n=1}^{N}\left(R^4-\left(n+\frac{1}{2}+(-1)^{n+1}\rho_n\right)^4\right) & =\sum_{n=1}^{N}\left(R^4-\left(n+\frac{1}{2}\right)^4\right) \\
& -4\sum_{n=1}^{N}(-1)^{n+1}\left(\rho_n\left(n+\frac{1}{2}\right)^3+\rho_n^3\left(n+\frac{1}{2}\right)\right)\\
&-\sum_{n=1}^{N}\left(6\rho_n^2\left(n+\frac{1}{2}\right)^2+\rho_n^4\right).\\ \end{split} \end{equation*} The sums containing $\rho_n$ lead to absolutely converging series as $N$ tends to infinity. In order to have explicit bounds on these sums we need simpler bounds on $r_n$. From condition \eqref{condition_rn} we deduce that, for odd $n$ $$ \sin r_n=\frac{1}{\cosh \left(\pi\left(n+\frac{1}{2}\right)+r_n\right)}\leq \frac{1}{\cosh \pi\left(n+\frac{1}{2}\right)}\leq 2 e^{-\pi n}. $$ Using the fact that $\sin x\geq \frac{2}{\pi}x$ for all $x\in[0,\frac{\pi}{2}]$ we deduce that $r_n\leq \pi e^{-\pi n}$. For even $n$ we use the fact that $0<r_n<\frac{\pi}{2}$ and deduce that $$ \sin r_n=\frac{1}{\cosh \left(\pi\left(n+\frac{1}{2}\right)-r_n\right)}\leq \frac{1}{\cosh \left(\pi\left(n+\frac{1}{2}\right)-\frac{\pi}{2}\right)}=\frac{1}{\cosh \left(\pi n\right)}\leq 2 e^{-\pi n}. $$ As in the odd case, we deduce that $r_n\leq \pi e^{-\pi n}$. Then \begin{multline}\label{c}
\left|\sum_{n=1}^{N}4(-1)^{n+1}\left(\rho_n\left(n+\frac{1}{2}\right)^3+\rho_n^3\left(n+\frac{1}{2}\right)\right)+\left(6\rho_n^2\left(n+\frac{1}{2}\right)^2+\rho_n^4\right)\right|\\ \leq \sum_{n=1}^{\infty}4\left(\pi e^{-\pi n}\left(n+\frac{1}{2}\right)^3+\pi^3 e^{-3\pi n}\left(n+\frac{1}{2}\right)\right)+6\pi^2e^{-2\pi n}\left(n+\frac{1}{2}\right)^2+\pi^4e^{-4\pi n}=:c, \end{multline} where $c$ can be explicitly computed ($c\approx 2.51272...$). In particular the previous estimates imply that $2<c<3$. Therefore \begin{equation*}
-c\leq \sum_{n\geq 1}\left(z-\gamma_n^4\right)_{+}-\pi^{4}\sum_{n=1}^{N}\left(R^4-\left(n+\frac{1}{2}\right)^4\right)\leq c. \end{equation*} We shall prove in Lemma \ref{lemmaonedim} asymptotically sharp upper and lower bounds on $\sum_{n=1}^{N}\left(R^4-n^4\right)$, see \eqref{onedim1}, and on $\sum_{n=1}^{N}\left(R^4-\left(n+\frac{1}{2}\right)^4\right)$, see \eqref{onedim2}. In particular, upper and lower bounds on $R_1^{(0,1)}(z)$ follow from \eqref{onedim2}.
The bounds on $R_1^{(2,3)}(z)$ follow from those on $R_1^{(0,1)}(z)$ by observing that $R_1^{(2,3)}(z)=R_1^{(0,1)}(z)+2z$, where the additional term is due to the kernel. In the same way, the bounds on $R_1^{(0,3)}(z)$ and $R_1^{(1,2)}(z)$ follow from those on $R_1^{(0,1)}(z)$ by observing that $R_1^{(0,3)}(z)=R_1^{(1,2)}(z)=R_1^{(0,1)}(z)+z$.
The Riesz mean $R_1^{(0,2)}(z)$ for the Navier problem is instead explicitly computable and $$ R_1^{(0,2)}(z)=\sum_n\left(z-\pi^4 n^4\right)_+=\pi^4\sum_n\left(R^4-n^4\right)_+. $$ Upper and lower bounds then follow from \eqref{onedim1}. Finally, upper and lower bounds for $R_1^{(1,3)}(z)$ follow from those on $R_1^{(0,2)}(z)$ by noting that $R_1^{(1,3)}(z)=R_1^{(0,2)}(z)+z$. \end{proof}
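The numerical value of the constant $c$ defined in \eqref{c} can be confirmed by summing the series directly; the terms decay like $e^{-\pi n}$, so a short truncation already gives full precision. The sketch below is ours.

```python
import math

# Partial sum of the series defining the constant c in the proof above:
# sum over n >= 1 of 4(pi e^{-pi n}(n+1/2)^3 + pi^3 e^{-3 pi n}(n+1/2))
#   + 6 pi^2 e^{-2 pi n}(n+1/2)^2 + pi^4 e^{-4 pi n}.
def c_constant(terms=60):
    total = 0.0
    for n in range(1, terms + 1):
        m = n + 0.5
        total += 4 * (math.pi * math.exp(-math.pi * n) * m ** 3
                      + math.pi ** 3 * math.exp(-3 * math.pi * n) * m)
        total += 6 * math.pi ** 2 * math.exp(-2 * math.pi * n) * m ** 2
        total += math.pi ** 4 * math.exp(-4 * math.pi * n)
    return total
```

The result is $c\approx 2.51272$, in particular $2<c<3$ as claimed.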
From Theorem \ref{riesz-1-d} we deduce the following \begin{cor} The following expansion holds $$ R_1^{(i,j)}(z)=\frac 4 {5\pi}z^{\frac 5 4}+c_1^{(i,j)} z+O(z^{\frac 3 4}), $$ where $c_1^{(i,j)}=\frac{i+j-3}{2}$. \end{cor}
We conclude by proving the following polynomial upper and lower bounds on one dimensional Riesz means.
\begin{lem}\label{lemmaonedim} For all $R\geq 0$ the following inequalities hold: \begin{equation}\label{onedim1} -\frac{1}{3}R^3 \leq\sum_{n\geq 1}\left(R^4-n^4\right)_+-\frac{4}{5}R^5+\frac{1}{2}R^4 \leq \frac{1}{6}\,R^3+\frac{1}{12}\,R^2. \end{equation} and \begin{equation}\label{onedim2} -\frac{11}{6}R^3-\frac{3}{2}R^2-\frac{127}{240}R \leq\sum_{n\geq 1}\left(R^4-\left(n+\frac{1}{2}\right)^4\right)_+-\frac{4}{5}R^5+R^4 \leq \frac{1}{6}R^3+\frac{3}{2}R^2+\frac{1}{30}R+\frac{1}{8}. \end{equation} In particular \eqref{onedim2} holds if we replace $\sum_{n\geq 1}\left(R^4-\left(n+\frac{1}{2}\right)^4\right)_+$ by $\sum_{n=1}^N\left(R^4-\left(n+\frac{1}{2}\right)^4\right)$ with $N\in\mathbb N$, $R-2\leq N\leq R$. \end{lem} \begin{proof} We start by proving \eqref{onedim1} for $R\geq 1$. We have that $$ \sum_{n\geq 1}\left(R^4-n^4\right)_+=\sum_{n=1}^N \left(R^4-n^4\right) $$ where $N=[R]$ and therefore $R-1<N \leq R$. Expanding and re-arranging the sum in a suitable way we get $$ \sum_{n=1}^N \left(R^4-n^4\right)-\frac{4}{5}R^5+\frac{1}{2}R^4=-\frac{1}{5}N^5-\frac{1}{2}N^4-\frac{1}{3}N^3+\left(\frac{1}{30}+R^4\right)N+\frac{1}{2}R^4-\frac{4}{5}R^5. $$ We set $$ f_R(x):=-\frac{1}{5}x^5-\frac{1}{2}x^4-\frac{1}{3}x^3+\left(\frac{1}{30}+R^4\right)x+\frac{1}{2}R^4-\frac{4}{5}R^5 $$ and we estimate its maximum and minimum for $x\in[R-1,R]$, $R\geq 1$. We have that $f_R''(x)=-2x(x+1)(2x+1)<0$ for all $x\geq 1$. We also note that $f_R'(R-\frac{1}{2})=\frac{R^2}{2}-\frac{7}{240}>0$ and $f_R'(R-\frac{1}{3})=-\frac{2}{3}R^3+\frac{1}{3}R^2+\frac{4}{27}R-\frac{13}{810}<0$, for all $R\geq 1$. Hence the maximum is attained in between these two points at some $N_0$. By concavity \begin{multline*}
f_R(N_0)=f_R\left(R-\frac{1}{2}\right)+\int_{R-1/2}^{N_0}f_R'(y)\,dy\leq f_R\left(R-\frac{1}{2}\right)+\left(N_0-R+\frac{1}{2}\right)f_R'\left(R-\frac{1}{2}\right)\\ \leq f_R\left(R-\frac{1}{2}\right)+\frac{1}{6} f_R'\left(R-\frac{1}{2}\right) \end{multline*} which implies \begin{equation*}
f_R(N_0)\leq \frac{1}{6}R^3+\frac{1}{12}R^2-\frac{7}{240}R-\frac{7}{1440}\leq \frac{1}{6}R^3+\frac{1}{12}R^2. \end{equation*} We deduce the bound $$
\sum(R^4-n^4)_{+}\leq \frac{4}{5}\,R^5 -\frac{1}{2}\,R^4+\frac{1}{6}\,R^3+\frac{1}{12}\,R^2 $$ which holds for all $R\geq 0$.
By concavity we also deduce that $$ f_R(x)\geq \min\{f_R(R-1),f_R(R)\}, $$ for all $R\geq 1$, and a direct computation shows that in fact $f_R(R-1)=f_R(R)=-\frac{1}{3}R^3+\frac{1}{30}R$. This implies the bound \begin{equation}\label{Riesz-mean-n-to-4-upper-bound}
\sum(R^4-n^4)_{+}\geq \frac{4}{5}\,R^5 -\frac{1}{2}\,R^4-\frac{1}{3}\,R^3 \end{equation} which holds for all $R\geq 0$. This concludes the proof of \eqref{onedim1}.
It is possible to obtain upper and lower bounds for $\sum_{n\geq 1}\left(R^4-\left(n+\frac{1}{2}\right)^4\right)_+$ as in the proof of \eqref{onedim1}, however computations become involved. Instead, we proceed as follows. We have $$ \sum_{n\geq 1}\left(R^4-\left(n+\frac{1}{2}\right)^4\right)_+=\sum_{n=1}^N \left(R^4-\left(n+\frac{1}{2}\right)^4\right) $$ where $N\in\mathbb N$ satisfies $R-2\leq N\leq R$. Expanding and re-arranging the sum in a suitable way we get \begin{multline}\label{Riesz-mean-n-plus-one-half-to-4} \sum_{n=1}^{N}(R^4-(n+\frac{1}{2})^4)\\ =\frac{4}{5}\,R^5 -R^4+(\frac{1}{6}-2t^2)R^3+(2t^3-\frac{t}{2})R^2+(\frac{t^2}{2}-t^4-\frac{7}{240})R+\frac{t^5}{5}-\frac{t^3}{6}+\frac{7t}{240}+\frac{1}{16}, \end{multline} where $t:=R-1-N$ so that $t\in[-1,1]$. Minimizing and maximizing each coefficient we find \begin{multline*}
-\frac{11}{6} \leq \frac{1}{6}-2t^2\leq \frac{1}{6}, \ \
-\frac{3}{2} \leq 2t^3-\frac{t}{2}\leq \frac{3}{2},\\
-\frac{127}{240} \leq \frac{t^2}{2}-t^4-\frac{7}{240}\leq \frac{1}{30}, \ \
0 \leq \frac{t^5}{5}-\frac{t^3}{6}+\frac{7t}{240}+\frac{1}{16}\leq \frac{1}{8} . \end{multline*} This implies that $$ -\frac{11}{6}R^3-\frac{3}{2}R^2-\frac{127}{240}R \leq\sum_{n=1}^N\left(R^4-\left(n+\frac{1}{2}\right)^4\right)-\frac{4}{5}R^5+R^4 \leq \frac{1}{6}R^3+\frac{3}{2}R^2+\frac{1}{30}R+\frac{1}{8}. $$
This concludes the proof of the lemma. \end{proof}
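The two polynomial bounds \eqref{onedim1} and \eqref{onedim2} are easy to spot-check numerically by evaluating the truncated sums directly; the following sketch (ours) does so for a few sample values of $R$.

```python
# Numerical spot-check (ours) of the one-dimensional Riesz mean bounds.

def riesz_sum(R, shift=0.0):
    # sum over n >= 1 of (R^4 - (n + shift)^4)_+
    total, n = 0.0, 1
    while n + shift < R:
        total += R ** 4 - (n + shift) ** 4
        n += 1
    return total

def onedim1_holds(R):
    s = riesz_sum(R) - 0.8 * R ** 5 + 0.5 * R ** 4
    return -R ** 3 / 3 <= s <= R ** 3 / 6 + R ** 2 / 12

def onedim2_holds(R):
    s = riesz_sum(R, 0.5) - 0.8 * R ** 5 + R ** 4
    return (-11 * R ** 3 / 6 - 1.5 * R ** 2 - 127 * R / 240
            <= s <= R ** 3 / 6 + 1.5 * R ** 2 + R / 30 + 0.125)
```

For instance, at $R=1$ the sum $\sum_{n\geq 1}(R^4-n^4)_+$ vanishes and the centered quantity equals $-3/10$, which lies between $-1/3$ and $1/6+1/12$.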
\end{document} |
\begin{document}
\title{Intermediate models of Magidor-Radin forcing-Part II} \begin{abstract}
We continue the work done in \cite{PrikryCaseGitikKanKoe},\cite{TomMoti},\cite{partOne}. We prove that for every set of ordinals $A$ in a Magidor-Radin generic extension using a coherent sequence such that $o^{\vec{U}}(\kappa)<\kappa^+$, there is $C'\subseteq C_G$, such that $V[A]=V[C']$.
We also prove that the supremum of a fresh set added by the Prikry, tree Prikry, Magidor, Magidor-Radin or Radin forcing has cofinality $\omega$ in the generic extension. \end{abstract} \section{Introduction}
A basic fact about the Cohen and Random forcings is that every subforcing of the Cohen (Random) forcing is equivalent to it. Kanovei, Koepke and the second author showed in \cite{PrikryCaseGitikKanKoe} that the same is true for the standard Prikry forcing. The result was generalized to the Magidor forcing in \cite{TomMoti}. This was pushed further to versions of the Magidor-Radin forcing with $o^{\vec{U}}(\kappa)< \kappa$ in \cite{partOne}. The result for $o^{\vec{U}}(\kappa)<\kappa$ splits into two parts. The first is to prove that for every $V$-generic filter $G$ for the Magidor-Radin forcing and any set of ordinals $A\in V[G]$, there is a subsequence of the generic club $C\subseteq C_G$ such that $V[A]=V[C]$. Thus, in order to analyse the intermediate models of $V[G]$, it suffices to study models of the form $V[C]$, where $C\subseteq C_G$. The second part is to show that each model of the form $V[C]$ is a $V$-generic extension by a Magidor-Radin-like forcing.
The main purpose of the present paper is to study sets in generic extensions of the version of Magidor-Radin forcing for $o^{\vec{U}}(\kappa)<\kappa^+$. It turns out that the first statement still holds: every set in the extension is equivalent to a subsequence of the generic Magidor-Radin sequence. There are considerable additional difficulties here, and new ideas are used to overcome them. However, we do not give here a classification of the models of the form $V[C]$.
The major difference between the cases $o^{\vec{U}}(\kappa)<\kappa$ and $o^{\vec{U}}(\kappa)\geq\kappa$ is that we cannot split $\mathbb{M}[\vec{U}]$ into the part below $o^{\vec{U}}(\kappa)$ and the part above it. As proven in \cite{partOne}, this decomposition provided the ability to run over all possible extension types. In terms of $C_G$ this means that we cannot split $C_G$ below $\kappa$ in a way that determines which measures are used in the construction of $C_G$. The classical example of such a sequence is
$$C_G(0),C_G(C_G(0)),C_G(C_G(C_G(0))),...$$
in which every element in the sequence is taken from a measure which depends on the previous element in the sequence. This example suggests that some sort of tree construction is needed in order to refer to such sequences in the ground model.
As in \cite{partOne} and \cite{TomMoti}, we work by induction on $\kappa$. Formally, we prove the following inductive step:
\begin{theorem}\label{MainResaultParttwo}
Let $\vec{U}$ be a coherent sequence with maximal measurable $\kappa$, such that $o^{\vec{U}}(\kappa)<\kappa^+$. Assume the inductive hypothesis:
\begin{center}
$(IH)$ \ \ \ For every $\delta<\kappa$, any coherent sequence $\vec{W}$ with maximal measurable $\delta$ and any set of ordinals
\ \ \ \ \ $A\in V[H]$ for $H\subseteq\mathbb{M}[\vec{W}]$, there is $C\subseteq C_H$, such that $V[A]=V[C]$.
\end{center}
Then for every $V$-generic filter $G\subseteq\mathbb{M}[\vec{U}]$ and any set of ordinals $A\in V[G]$, there is $C\subseteq C_G$ such that $V[A]=V[C]$.
\end{theorem}
As a corollary of this, we obtain the main result of this paper:
\begin{theorem}\label{MainResaultPartwo}
Let $\vec{U}$ be a coherent sequence such that $o^{\vec{U}}(\kappa)<\kappa^+$. Then for every $V$-generic filter $G\subseteq\mathbb{M}[\vec{U}]$, such that $\forall\alpha\in C_G. o^{\vec{U}}(\alpha)<\alpha^+$ and every set of ordinals $A\in V[G]$, there is $C\subseteq C_G$ such that $V[A]=V[C]$.
\end{theorem} Since every intermediate $ZFC$ model $V\subseteq M\subseteq V[G]$ is of the form $M=V[A]$ for some set of ordinals $A$ (see for example \cite[Corollary 15.42, Lemma 15.43]{Jech2003}), we conclude that every such $M$ is of the form $M=V[C]$ for some $C\subseteq C_G$. In this paper, the models $V[A]$ considered are always $ZFC$ models, as $A$ is either a set of ordinals or can be coded as one using a function in $V$.\footnote{ For example if $A\subseteq V$ or if $A$ is a \textit{sequence} of sets of ordinals.} In any case, $V[A]$ is the minimal $ZFC$ model which contains $V$ and has $A$ as an element \cite[Lemma 15.43]{Jech2003}.
In contrast to the case where $o^{\vec{U}}(\kappa)<\kappa$, we do not have a classification of exactly which subforcings generate the models $V[C']$. Let us give some examples of subforcings of $\mathbb{M}[\vec{U}]$ in the case $o^{\vec{U}}(\kappa)=\kappa$.
\begin{example}\label{canonicalsequence}
Let $G$ be a generic filter, let $C_G$ be the generic club added by $\mathbb{M}[\vec{U}]$, and consider the increasing continuous enumeration $\langle C_G(i)\mid i<\kappa\rangle$ of $C_G$. Assume that $C_G(0)>0$, and consider again the sequence $\langle \kappa_n\mid n<\omega\rangle$ which is defined as follows:
$$\kappa_0=C_G(0), \ \kappa_{n+1}=C_G(\kappa_n).$$
Consider the following tree of measures:
$$\vec{W}=\langle W_{\vec{\alpha}}\mid \vec{\alpha}\in [\kappa]^{<\omega}\rangle$$
where $W_{\vec{\alpha}}=U(\kappa,{\rm max}(\vec{\alpha}))$. Note here that since $o^{\vec{U}}(\kappa)=\kappa$, this is well defined. It is not hard to check the Mathias criterion for the tree-Prikry forcing with $\vec{W}$, given in \cite{TomTreePrikry}, to conclude that $\langle \kappa_n\mid n<\omega\rangle$ is a tree-Prikry generic sequence with respect to $\vec{W}$. Note that, since the sequence of measures $\langle U(\kappa,i)\mid i<\kappa\rangle$ is a discrete family of normal measures, this tree-Prikry forcing falls under the framework of \cite{MinimalPrikry} and therefore the model $V[\langle\kappa_n\mid n<\omega\rangle]$ is minimal above $V$. This phenomenon does not occur in generic extensions of $\mathbb{M}[\vec{U}]$ with $o^{\vec{U}}(\kappa)<\kappa$.
\end{example} \begin{example}
The previous example can be made more complex. Let $f:[\kappa]^{<\omega}\rightarrow\kappa$ be any function. Then $\langle \alpha_n\mid n<\omega\rangle$ is defined as follows:
$\alpha_0=C_G({\langle}{\rangle})$ and $\alpha_{n+1}$ is obtained by applying $f$ to some finite $\vec{C}_n\in[C_G]^{<\omega}$ i.e. $\alpha_{n+1}=C_G(f(\vec{C}_n))$. \end{example}
Another theorem proven in section $6$ determines the cofinality of the supremum of a fresh set in
Prikry, Magidor, Magidor-Radin and Radin extensions. \begin{theorem}\label{freshfresh} Assume that $\mathbb{P}$ is either Prikry, tree Prikry, Magidor, Magidor-Radin or Radin forcing. Let $G\subseteq\mathbb{P}$ be $V$-generic. If $A\in V[G]$ is a fresh set of ordinals with respect to $V$, then $cf^{V[G]}({\rm sup}(A))=\omega$. \end{theorem}
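Let us illustrate the statement with the simplest instance (this standard example is ours, not taken from the text). If $G$ is generic for the Prikry forcing at a measurable cardinal $\kappa$ and $A=C_G$ is the corresponding Prikry $\omega$-sequence, then $A$ is fresh with respect to $V$: $A\notin V$, while for every $\beta<{\rm sup}(A)=\kappa$ the initial segment $A\cap\beta$ is finite and hence belongs to $V$. Accordingly, $cf^{V[G]}({\rm sup}(A))=cf^{V[G]}(\kappa)=\omega$, in agreement with Theorem \ref{freshfresh}.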
The paper is organized as follows: \begin{itemize}
\item Section $2$: Subsections $2.1,2.2$ consist of basic definitions and properties of the forcing. Then $2.3$ provides several general definitions and previous results. In subsection $2.4$ we develop the theory of fat trees.
\item Section $3$: We deal with the case of sets with cardinality less than $\kappa$.
\item Section $4$: The proof for subsets of $\kappa$ is presented.
\item Section $5$: In $5.1$ an argument for general sets is given. In $5.2$, we prove some general results about the quotient forcings of several Prikry-type forcings.
\item Section $6$: Devoted to the proof of Theorem \ref{freshfresh}.
\item Section $7$: Presents further research directions and open questions related to this paper. \end{itemize}
\section{Preliminaries}
Most of the basic definitions are identical to \cite{partOne} and \cite{Gitik2010}.
\subsection{Magidor forcing}
Let $\vec{U}=\langle U(\alpha,\beta)\mid \alpha\leq \kappa \ ,\beta<o^{\vec{U}}(\alpha)\rangle$ be a coherent sequence. For every $\alpha\leq\kappa$, denote $$\cap\vec{U}(\alpha)=\underset{i<o^{\vec{U}}(\alpha)}{\bigcap}U(\alpha,i).$$ \begin{definition}\label{Magidor-conditions} $\mathbb{M}[\vec{U}]$ consists of elements $p$ of the form $p=\langle t_1,...,t_n,\langle\kappa,B\rangle\rangle$.
For every $1\leq i\leq n $, $t_i$ is either an ordinal
$\kappa_i$ if $ o^{\vec{U}}(\kappa_i)=0$
or a pair $\langle\kappa_i,B_i\rangle$ if \ $o^{\vec{U}}(\kappa_i)>0$. \begin{enumerate} \item $B\in\cap\vec{U}(\kappa)$, \ ${\rm min}(B)>\kappa_n$.
\item For every $1\leq i\leq n$.
\begin{enumerate}
\item $\langle\kappa_1,...,\kappa_n\rangle\in [\kappa]^{<\omega}$ (increasing finite sequence below $\kappa$).
\item $B_i\in \cap\vec{U}(\kappa_i)$.
\item ${\rm min}(B_i)>\kappa_{i-1}$ \ $(i>1)$.
\end{enumerate} \end{enumerate}
Moreover, denote $t_{n+1}={\langle}\kappa,B{\rangle}$. \end{definition} \begin{definition}\label{Magidor-order}
For $p=\langle t_1,t_2,...,t_n,\langle\kappa,B\rangle\rangle,q=\langle s_1,...,s_m,\langle\kappa,C\rangle\rangle\in \mathbb{M}[\vec{U}]$ , define $p \leq q$ ($q$ extends $p$) iff: \begin{enumerate}
\item $n \leq m$.
\item $B \supseteq C$.
\item $\exists 1 \leq i_1 <...<i_n \leq m$ such that for every $1 \leq j \leq m$:
\begin{enumerate}
\item If $\exists 1\leq r\leq n$ such that $i_r=j$ then $\kappa(t_r)=\kappa( s_{i_r})$ and $C(s_{i_r})\subseteq B(t_r)$.
\item Otherwise, let $1 \leq r \leq n+1$ be such that $i_{r-1}<j<i_{r}$ (with the convention $i_0=0$ and $i_{n+1}=m+1$); then
\begin{enumerate}
\item $\kappa(s_j) \in B(t_r)$.
\item $B(s_j)\subseteq B(t_r)\cap \kappa(s_j)$.
\end{enumerate}
\end{enumerate} \end{enumerate} We also use ``$p$ directly extends $q$'', $q \leq^{*} p$, if: \begin{enumerate}
\item $q \leq p$.
\item $n=m$. \end{enumerate} \end{definition} Let us add some notation, for a pair $t=\langle \alpha, X\rangle$ we denote $\kappa(t)=\alpha,\ B(t)=X$. If $t=\alpha$ is an ordinal then $\kappa(t)=\alpha$, $B(t)=\emptyset$, and $\cap \vec{U}(\alpha)=P(\alpha)$ (the power set of $\alpha$).
For a condition $p=\langle t_1,...,t_n,\langle \kappa,B\rangle\rangle\in\mathbb{M}[\vec{U}]$ we denote $n=l(p)$, $p_i=t_i$, $B_i(p)=B(t_i)$ and $\kappa_i(p)=\kappa(t_i)$ for any $1\leq i\leq l(p)$, $t_{l(p)+1}=\langle\kappa,B\rangle$, $t_0=0$. Also denote $$\kappa(p)=\{\kappa_i(p)\mid i\leq l(p)\}\text{ and }B(p)=\bigcup_{i\leq l(p)+1}B_i(p).$$ \begin{remark}\label{changes with respect to part one} In \cite{TomMoti},\cite{partOne} we had another requirement in Definition \ref{Magidor-order}, that given a condition $p$, if we would like to add an ordinal $\alpha$ to the sequence in the interval $(\kappa_{i-1}(p),\kappa_i(p))$ then we needed to make sure that $o^{\vec{U}}(\alpha)<o^{\vec{U}}(\kappa_i(p))$. This condition is not essential as any condition $p$ can be directly extended to a condition in the set $$D=\{q\in \mathbb{M}[\vec{U}]\mid\forall i\leq l(q)+1. \forall\alpha\in B_i(q). o^{\vec{U}}(\alpha)<o^{\vec{U}}(\kappa_i(q))\}.$$ The order defined in \ref{Magidor-order} on elements of $D$ automatically satisfies the extra requirement.
For this reason we will point out along this section some points where this assumption changes properties of $\mathbb{M}[\vec{U}]$. The major one, is in Propositions \ref{indc},\ref{IndCG}. \end{remark} \begin{definition}\label{end extension}
Let $p\in\mathbb{M}[\vec{U}]$. For every $ i\leq l(p)+1$, $\alpha\in B_{i}(p)$ with $o^{\vec{U}}(\alpha)>0$, and $B\in \cap\vec{U}(\alpha)$, define $$p^{\frown}{\langle}\alpha,B{\rangle}=\langle p_1,...,p_{i-1},\langle\alpha,B_{i}(p)\cap B\rangle,\langle\kappa_{i}(p),B_{i}(p)\setminus(\alpha+1)\rangle,p_{i+1},...,p_{l(p)+1}\rangle.$$ Also $p^{\smallfrown}{\langle}\alpha{\rangle}=p^{\smallfrown}{\langle}\alpha,\alpha{\rangle}$. If $o^{\vec{U}}(\alpha)=0$, define $$p^{\frown}\langle\alpha\rangle=\langle p_1,...,p_{i-1},\alpha,\langle\kappa_{i}(p),B_{i}(p)\setminus(\alpha+1)\rangle,...,p_{l(p)+1}\rangle.$$ For $\langle\alpha_1,...,\alpha_n\rangle\in[\kappa]^{<\omega}$ and ${\langle} B_1,...,B_n{\rangle}$, where $B_i\in\cap\vec{U}(\alpha_i)$, define recursively, $$p^{\smallfrown}{\langle}{\langle}\alpha_1,...,\alpha_n{\rangle},{\langle} B_1,...,B_n{\rangle}{\rangle}=(p^{\smallfrown}{\langle}{\langle}\alpha_1,...,\alpha_{n-1}{\rangle},{\langle} B_1,...,B_{n-1}{\rangle}{\rangle})^{\smallfrown}{\langle} \alpha_n,B_n{\rangle}$$ and $$p^{\frown}\langle\alpha_1,...,\alpha_n\rangle=(p^{\frown}\langle\alpha_1,...,\alpha_{n-1}\rangle)^{\frown}\langle\alpha_n\rangle.$$ \end{definition}
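To fix ideas, here is the simplest instance of the definition above (the example is ours). If $p={\langle}{\langle}\kappa,B{\rangle}{\rangle}$ is the trivial condition, so that $l(p)=0$, and $\alpha\in B$ with $o^{\vec{U}}(\alpha)>0$, $C\in\cap\vec{U}(\alpha)$, then $$p^{\frown}{\langle}\alpha,C{\rangle}={\langle}{\langle}\alpha,B\cap C{\rangle},{\langle}\kappa,B\setminus(\alpha+1){\rangle}{\rangle},$$ that is, $\alpha$ enters the finite part with measure one set $B\cap C$, while the top part loses all ordinals $\leq\alpha$.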
For $\vec{\alpha}={\langle} \alpha_1,...,\alpha_n{\rangle}$, denote $|\vec{\alpha}|=n$ and $\vec{\alpha}(i)=\alpha_i$. If $I\subseteq\{1,...,n\}$ then $\vec{\alpha}\restriction I={\langle} \vec{\alpha}(i_1),...,\vec{\alpha}(i_k){\rangle}$ where $\{i_1,i_2,...,i_k\}$ is the increasing enumeration of $I$. For $Y\subseteq\omega$, $\vec{\alpha}\restriction Y=\vec{\alpha}\restriction (Y\cap\{1,...,n\})$. We will usually identify $\vec{\alpha}$ with the set $\{\alpha_1,...,\alpha_n\}$. Also for two sequences $\vec{\alpha},\vec{\beta}$, we denote their concatenation by $\vec{\alpha}{}^{\smallfrown}\vec{\beta}$.
Note that if we add a pair of the form $\langle \alpha , B\cap\alpha\rangle$ then in $B\cap\alpha$ there might be many ordinals which are irrelevant to the forcing and cannot be added. Namely, ordinals $\beta$ such that $B\cap\beta\notin\cap\vec{U}(\beta)$. Note that we no longer have to require $o^{\vec{U}}(\beta)\geq o^{\vec{U}}(\alpha)$. We can avoid such ordinals by shrinking the large sets. \begin{proposition}\label{BetterSet} Let $\alpha\leq\kappa$, and $A\in\cap\vec{U}(\alpha)$. Then there exists $A^*\subseteq A$ such that:
\begin{enumerate}
\item $A^*\in\cap\vec{U}(\alpha)$.
\item For every $x\in A^*$, $A^*\cap x\in\cap\vec{U}(x)$. \end{enumerate} \end{proposition}
\noindent\textit{Proof}. For any $j<o^{\vec{U}}(\alpha)$, $$Ult(V,U(\alpha,j))\models A=j_{U(\alpha,j)}(A)\cap\alpha\in \underset{i<j}{\bigcap}U(\alpha,i).$$ Coherency of the sequence implies that $A':=\{\beta<\alpha\mid A\cap\beta\in\cap\vec{U}(\beta)\}\in U(\alpha,j)$, and this holds for every $j<o^{\vec{U}}(\alpha)$, so $A'\in\cap\vec{U}(\alpha)$.
Define inductively $A^{(0)}=A$, $A^{(n+1)}=(A^{(n)})'$. By definition, $\forall\beta\in A^{(n+1)}$, $A^{(n)}\cap\beta\in\cap\vec{U}(\beta)$. Define $A^*=\underset{n<\omega}{\bigcap}A^{(n)}\in\cap\vec{U}(\alpha)$; this set has the required property. $\blacksquare$
The conditions $p^{\smallfrown}\vec{\alpha}$ and $p^{\smallfrown}{\langle}\vec{\alpha},\vec{B}{\rangle}$ are minimal extensions of $p$ in a sense given in the following proposition. The proof of the proposition is a direct verification of \ref{Magidor-conditions},\ref{Magidor-order}. \begin{proposition}\label{frown extension}
Let $p\in\mathbb{M}[\vec{U}]$ and $\vec{\alpha}\in [\kappa]^n$. Suppose that $\vec{\alpha}$ decomposes according to the ordinals of $p$ as $\vec{\alpha}= \vec{\alpha}_1{}^{\smallfrown}...^{\smallfrown}\vec{\alpha}_{l(p)+1}\in\prod_{i=1}^{l(p)+1}[B_i(p)]^{l_i}$. Let $\vec{C}= \vec{C}_1{}^{\smallfrown}...^{\smallfrown}\vec{C}_{l(p)+1}$ be a sequence of sets such that $|\vec{C}_i|=l_i$ (in particular $|\vec{C}|=n$) and for each $i\leq n$, $\vec{C}(i)\subseteq \vec{\alpha}(i)$. \begin{enumerate}
\item $p^{\smallfrown}{\langle} \vec{\alpha},\vec{C}{\rangle}\in \mathbb{M}[\vec{U}]$ if and only if $\forall i\leq n. \ \exists j\leq l(p)$ such that $\vec{\alpha}(i)\in B_j(p)$ and $ B_{j}(p)\cap \vec{C}(i)\in\cap\vec{U}(\vec{\alpha}(i))$.
\item Suppose that $p^{\frown}{\langle}\vec{\alpha},\vec{C}{\rangle}\in\mathbb{M}[\vec{U}]$, then for any extension $q$ of $p$, if:
\begin{enumerate}
\item $\kappa(p)\cup\vec{\alpha}\subseteq \kappa(q)$.
\item For all $j\leq l(q)$, if $\kappa_{i-1}(p)<\kappa_j(q)\leq \kappa_i(p)$, for some $i\leq l(p)+1$, then:
\begin{enumerate}
\item $B_j(q)\subseteq \bigcup_{1\leq j\leq l_i}\vec{C}_i(j)\cup \big(B_i(p)\setminus {\rm max}(\vec{\alpha}_i+1)\big)$\footnote{Note that by Definition \ref{end extension}, the set $\bigcup_{1\leq i\leq l(p)+1}\Big(\bigcup_{1\leq j\leq l_i}\vec{C}_i(j)\cup \big(B_i(p)\setminus {\rm max}(\vec{\alpha}_i+1)\big)\Big)$ is exactly $B(p^{\smallfrown}{\langle}\vec{\alpha},\vec{C}{\rangle})$.}.
\item If $\kappa_j(q)<\kappa_i(p)$, then $$\kappa_j(q)\in \bigcup_{1\leq j\leq l_i}\vec{C}_i(j)\cup \big(B_i(p)\setminus {\rm max}(\vec{\alpha}_i+1)\big).$$
\end{enumerate}
\end{enumerate}
Then $p^{\smallfrown}{\langle}\vec{\alpha},\vec{C}{\rangle}\leq q$. \end{enumerate} \end{proposition} The previous proposition also provides a criterion for $p^{\smallfrown}\vec{\alpha}\in \mathbb{M}[\vec{U}]$, and establishes the minimality of the extension $p^{\smallfrown}\vec{\alpha}$: for every $p\leq q$, if $\kappa(p)\cup\vec{\alpha}\subseteq \kappa(q)$ then $p^{\smallfrown}\vec{\alpha}\leq q$. \begin{definition} Let $p\in \mathbb{M}[\vec{U}]$, $\alpha<\kappa$, and let $i\leq l(p)$ be such that $\alpha\in[\kappa_{i}(p),\kappa_{i+1}(p))$. Define
$$p\restriction\alpha= \langle p_1,...,p_i\rangle \ \text{ and } \ p\restriction(\alpha,\kappa]=\langle p_{i+1},...,p_{l(p)+1}\rangle.$$
Also, for $\lambda$ with $o^{\vec{U}}(\lambda)>0$ define
$$\mathbb{M}[\vec{U}]\restriction\lambda=\Big\{p\restriction\lambda\mid p\in\mathbb{M}[\vec{U}], \lambda \text{ appears in } p\Big\}, \ \ \
\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]=\{p\restriction (\lambda,\kappa]\mid p\in\mathbb{M}[\vec{U}], \lambda \text{ appears in } p\}.$$ \end{definition} Note that $\mathbb{M}[\vec{U}]\restriction\lambda$ is just Magidor forcing on $\lambda$, and $\mathbb{M}[\vec{U}]\restriction (\lambda,\kappa]$ is a subset of $\mathbb{M}[\vec{U}]$ which generates a Magidor club in the interval $(\lambda,\kappa]$. \begin{remark}\label{Magidor forcing in extension} Let $\lambda<\kappa$ be such that $o^{\vec{U}}(\lambda)>0$ and let $p\in\mathbb{M}[\vec{U}]$ be such that $\lambda$ appears in $p$. Let $H\subseteq \mathbb{M}[\vec{U}]\restriction\lambda$ be generic with $p\restriction\lambda\in H$. In $V[H]$ we have added new (bounded) subsets of $\kappa$, hence $\vec{U}$ is no longer a sequence of ultrafilters. However, on the relevant interval $(\lambda,\kappa]$, $\vec{U}\restriction(\lambda,\kappa]$ generates a coherent sequence of ultrafilters $\vec{W}$, and formally we force with $\mathbb{M}[\vec{W}]$. Note that the ground model forcing $\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ is dense in $\mathbb{M}[\vec{W}]$, hence we can simply force with $\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ over $V[H]$ to obtain a generic extension for $\mathbb{M}[\vec{U}]$. \end{remark} The following propositions can be found in \cite{partOne}: \begin{proposition}\label{dec} Let $p\in\mathbb{M}[\vec{U}]$ and $\langle\lambda,B\rangle$ a pair in $p$. Then $$\mathbb{M}[\vec{U}]/p\simeq \Big(\mathbb{M}[\vec{U}]\restriction \lambda\Big)/\Big(p\restriction\lambda\Big)\times\Big(\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]\Big)/\Big(p\restriction(\lambda,\kappa]\Big).$$ \end{proposition} \begin{proposition}
Let $p\in\mathbb{M}[\vec{U}]$ and $\langle\lambda,B\rangle$ be a pair in $p$. Then the order $\leq^*$ in the forcing $\Big(\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]\Big)/\Big(p\restriction(\lambda,\kappa]\Big)$ is $\delta$-directed, where $\delta={\rm min}\{\nu>\lambda\mid o^{\vec{U}}(\nu)>0\}$. That is, for every $X\subseteq \mathbb{M}[\vec{U}]\restriction (\lambda,\kappa]$ such that $|X|<\delta$ and $p\leq^* q$ for every $q\in X$, there is a $\leq^*$-upper bound for $X$. \end{proposition}
\begin{lemma} $\mathbb{M}[\vec{U}]$ satisfies the $\kappa^+$-c.c. \end{lemma} The following lemma is the well-known Prikry condition: \begin{lemma}\label{prikrycondition}
$\mathbb{M}[\vec{U}]$ satisfies the Prikry condition, i.e. for any statement $\sigma$ in the forcing language and any $p\in\mathbb{M}[\vec{U}]$ there is $p\leq^*p^*$ such that $p^*||\sigma$, i.e. either $p^*\Vdash\sigma$ or $p^*\Vdash\neg\sigma$. \end{lemma}
The next lemma can be found in \cite{ChangeCofinality} and the proof in \cite{partOne}: \begin{lemma}\label{MagLemma}
Let $G\subseteq \mathbb{M}[\vec{U}]$ be generic and suppose that $A\in V[G]$ is such that $A\subseteq V_\alpha$. Let $p\in G$ and let ${\langle}\lambda,B{\rangle}$ be a pair in $p$ such that $\alpha<\lambda$. Then $A\in V[G\restriction\lambda]$. \end{lemma}
\begin{corollary} $\mathbb{M}[\vec{U}]$ preserves all cardinals. \end{corollary}
\begin{definition}
Let $G\subseteq \mathbb{M}[\vec{U}]$ be generic, define the \textit{Magidor club}
$$C_{G}=\{ \nu \mid \exists \ A\exists p\in G \ s.t. \ \langle \nu,A\rangle\in p\}.$$
\end{definition}
We will abuse notation by sometimes treating $C_G$ as the canonical enumeration of the set $C_G$. The set $C_{G}$ is closed and unbounded in $\kappa$; therefore, the order type of $C_{G}$ determines the cofinality of $\kappa$ in $V[G]$. The next propositions can be found in \cite{Gitik2010}.
\begin{proposition}\label{decprop}
Let $G\subseteq\mathbb{M}[\vec{U}]$ be generic. Then $G$ can be reconstructed from $C_{G}$ as follows
$$ G=\{p\in\mathbb{M}[\vec{U}]\mid (\kappa(p)\subseteq C_{G}) \wedge (C_{G}\setminus\kappa(p)\subseteq B(p))\}.$$
In particular $V[G]=V[C_{G}]$. \end{proposition} \begin{proposition}\label{genericproperties} Let $G\subseteq\mathbb{M}[\vec{U}]$ be generic. \begin{enumerate}
\item $C_G$ is a club in $\kappa$.
\item For every $\delta\in C_G$, $o^{\vec{U}}(\delta)>0$ iff $\delta\in Lim(C_G)$\footnote{The set of limit points of $X\subseteq\kappa$ is $Lim(X):=\{\alpha\mid {\rm sup}(\alpha\cap X)=\alpha\}\subseteq\kappa+1$.}.
\item For every $\delta\in Lim(C_G)$, and every $A\in \cap\vec{U}(\delta)$, there is $\xi<\delta$ such that $C_G\cap(\xi,\delta)\subseteq A$.
\item If ${\langle} \delta_i\mid i<\theta{\rangle}$ is an increasing sequence of elements of $C_G$ and $\delta^*={\rm sup}_{i<\theta}\delta_i$, then $o^{\vec{U}}(\delta^*)\geq\limsup_{i<\theta}o^{\vec{U}}(\delta_i)+1$.\footnote{For a sequence of ordinals ${\langle} \rho_j\mid j<\gamma{\rangle}$, $\limsup_{j<\gamma}\rho_j={\rm min}\{{\rm sup}_{i<j<\gamma}\rho_j\mid i<\gamma\}$.}
\item Let $\delta\in Lim(C_G)$ and let $A$ be a positive set, $A\in (\cap\vec{U}(\delta))^+$, i.e. $\delta\setminus A\notin \cap\vec{U}(\delta)$.\footnote{Equivalently, there is some $i<o^{\vec{U}}(\delta)$ such that $A\in U(\delta,i)$.} Then ${\rm sup}(A\cap C_G)=\delta$. \item If $A\subseteq V_\alpha$, then $A\in V[C_G\cap\lambda]$, where $\lambda={\rm max}(Lim(C_G)\cap(\alpha+1))$. \item For every $V$-regular cardinal $\alpha$, if $cf^{V[G]}(\alpha)<\alpha$ then $\alpha\in Lim(C_G)$. \end{enumerate} \end{proposition}
\noindent\textit{Proof}. The proof of $(1),(2),(3),(5),(6),(7)$ can be found in \cite{Gitik2010} and does not use the extra property of \ref{Magidor-order} (see Remark \ref{changes with respect to part one}).
To see $(4)$, use the closure of $C_G$ to find $q\in G$ such that $\delta^*$ appears in $q$. Clearly, $A:=\{\alpha<\delta^*\mid o^{\vec{U}}(\alpha)<o^{\vec{U}}(\delta^*)\}\in\cap\vec{U}(\delta^*)$, thus by $(3)$ there is $\xi<\delta^*$ such that $C_G\cap(\xi,\delta^*)\subseteq A$. Let $i<\theta$ be such that $\delta_j>\xi$ for every $j>i$; then $\delta_j\in C_G\cap(\xi,\delta^*)\subseteq A$, so $o^{\vec{U}}(\delta_j)<o^{\vec{U}}(\delta^*)$. By the definition of $\limsup$, $$\limsup_{j<\theta}o^{\vec{U}}(\delta_j)+1\leq {\rm sup}_{i<j<\theta}o^{\vec{U}}(\delta_j)+1\leq o^{\vec{U}}(\delta^*).$$ $\blacksquare$
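As a sketch of how property $(4)$ interacts with property $(2)$, consider the following configuration (the sequence below is a hypothetical illustration, not taken from the text).

```latex
% Hypothetical: an increasing omega-sequence of Magidor points of order 0.
% If o(delta_i)=0 for every i<omega and delta* = sup_i delta_i, then
\limsup_{i<\omega} o^{\vec{U}}(\delta_i)+1 \;=\; 0+1 \;=\; 1 \;\leq\; o^{\vec{U}}(\delta^*),
% so delta* carries at least one measure of the coherent sequence,
% which is consistent with (2), since delta* is a limit point of C_G.
```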
\begin{proposition}\label{indc}
Let $G\subseteq \mathbb{M}[\vec{U}]$ be a $V$-generic filter and $C_{G}$ the corresponding Magidor sequence. Let $p\in G$. Then for every $i\leq l(p)+1$:
\begin{enumerate}
\item If $o^{\vec{U}}(\kappa_i(p))\leq \kappa_i(p)$, and $\forall\alpha\in B_i(p)$, $o^{\vec{U}}(\alpha)<o^{\vec{U}}(\kappa_i(p))$, then
$${\rm otp}( [\kappa_{i-1}(p),\kappa_i(p))\cap C_{G} )=\omega^{o^{\vec{U}}(\kappa_i(p))}.$$
\item If $o^{\vec{U}}(\kappa_i(p))\geq \kappa_i(p)$, then
$${\rm otp}( [\kappa_{i-1}(p),\kappa_i(p))\cap C_{G} )=\kappa_i(p).$$
\end{enumerate}
\end{proposition}
\noindent\textit{Proof}. The same as in \cite{partOne}, replacing the usage of Definition \ref{Magidor-order} with the assumption that $\forall\alpha\in B_i(p)$, $o^{\vec{U}}(\alpha)<o^{\vec{U}}(\kappa_i(p))$.$\blacksquare$
Proposition \ref{indc} suggests a connection between the index in $C_G$ of ordinals appearing in $p$
and Cantor normal form.
\begin{definition}
Let $p\in G$. For each $i\leq l(p)$ define
$$\gamma_i(p)=\sum_{j=1}^{i}\omega^{o^{\vec{U}}(\kappa_j(p))}.$$
\end{definition}
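As a hypothetical illustration of the indices $\gamma_i(p)$ (the condition and its orders below are assumptions for the sake of the example, not taken from the text): suppose $p$ has three ordinals with $o^{\vec{U}}(\kappa_1(p))=2$, $o^{\vec{U}}(\kappa_2(p))=1$ and $o^{\vec{U}}(\kappa_3(p))=0$. Then

```latex
% Ordinal sums of the omega-powers of the orders, computed left to right:
\gamma_1(p)=\omega^{2},\qquad
\gamma_2(p)=\omega^{2}+\omega^{1}=\omega^{2}+\omega,\qquad
\gamma_3(p)=\omega^{2}+\omega+\omega^{0}=\omega^{2}+\omega+1.
```

Since ordinal addition is not commutative, orders listed in decreasing order keep each $\gamma_i(p)$ in Cantor normal form; with the orders reversed one would instead get $\omega^{0}+\omega^{1}=\omega$, the first summand being absorbed.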
\begin{corollary}\label{IndCG}
Let $G$ be $\mathbb{M}[\vec{U}]$-generic and $C_{G}$ the corresponding Magidor sequence. Let $p\in G$ be such that for every $1\leq i\leq l(p)$ and every $\alpha\in B_i(p)$, $o^{\vec{U}}(\alpha)<o^{\vec{U}}(\kappa_i(p))$. Then for every $1\leq i\leq l(p)$,
$$p\Vdash \lusim{C}_G(\gamma_i(p))=\kappa_i(p).$$
\end{corollary}
For more details and basic properties of Magidor forcing see \cite{ChangeCofinality},\cite{Gitik2010}, \cite{TomMoti} or \cite{partOne}. \subsection{Magidor forcing with $o(\kappa)<\kappa^+$}
When we assume $o^{\vec{U}}(\kappa)<\kappa$, the measure $U(\kappa,\xi)$ concentrates on measurables $\alpha$ with $o^{\vec{U}}(\alpha)=\xi$, which gives a canonical discrete family for those measures. In our more general situation, $o^{\vec{U}}(\kappa)<\kappa^+$, we can still separate the measures, but the decomposition is no longer canonical. More precisely, for every $\alpha\leq\kappa$, we would like to have sets which witness the fact that the sequence of ultrafilters ${\langle} U(\alpha,\beta)\mid \beta<o^{\vec{U}}(\alpha){\rangle}$ is discrete. \begin{proposition}\label{decompositionHighOrder} Assume $o^{\vec{U}}(\alpha)<\alpha^+$, then there are pairwise disjoint sets $\langle X^{(\alpha)}_i\mid i<o^{\vec{U}}(\alpha)\rangle$ such that $X^{(\alpha)}_i\in U(\alpha,i)$. \end{proposition}
\noindent\textit{Proof}. By assumption, $|o^{\vec{U}}(\alpha)|\leq\alpha$. Enumerate the measures $$\{U(\alpha,i)\mid i<o^{\vec{U}}(\alpha)\}=\{W_j\mid j<\rho\}$$ where $\rho\leq\alpha$. For every $i\neq j$ below $\rho$, find $Y_{i,j}\in W_i\setminus W_j$. By normality, $Y_i=\Delta_{j<\rho} Y_{i,j}\in W_i$. Also, for $j\neq i$, $Y_i\notin W_j$, since $Y_i\subseteq Y_{i,j}\cup(j+1)\notin W_j$. Set $Z_i=Y_i\setminus(\cup_{j<i}Y_j)$; then $Z_i\in W_i$ and ${\langle} Z_i\mid i<\rho{\rangle}$ are pairwise disjoint. Finally, for every $\xi<o^{\vec{U}}(\alpha)$ define $X^{(\alpha)}_\xi=Z_i$, where $i<\rho$ is such that $W_i=U(\alpha,\xi)$.$\blacksquare$ \begin{definition} Let $\alpha\leq\kappa$. \begin{enumerate}
\item For $o^{\vec{U}}(\alpha)\leq\alpha$ define for every $i<o^{\vec{U}}(\alpha)$ $$X^{(\alpha)}_i=\{x<\alpha\mid i=o^{\vec{U}}(x)\}\in U(\alpha,i).$$
\item For $\alpha< o^{\vec{U}}(\alpha)<\alpha^+$ fix a decomposition of $\alpha$, ${\langle} X^{(\alpha)}_i\mid i<o^{\vec{U}}(\alpha){\rangle}$ guaranteed by the previous proposition such that $X^{(\alpha)}_i\in U(\alpha,i)$.
\item
For $\beta<\alpha$ denote by $o^{(\alpha)}(\beta)=\xi$ the unique $\xi<o^{\vec{U}}(\alpha)$ such that $\beta\in X^{(\alpha)}_\xi$. Also let $o^{(\alpha)}(\alpha)=o^{\vec{U}}(\alpha)$.
\end{enumerate} \end{definition} Note that if $o^{\vec{U}}(\alpha)\leq\alpha$ then $o^{(\alpha)}(\beta)=o^{\vec{U}}(\beta)$. \begin{proposition}\label{increasing order at limits} For every $V$-generic $G\subseteq\mathbb{M}[\vec{U}]$ and for every $\kappa_0\in Lim(C_G)$ (recall that $\kappa\in Lim(C_G)$) such that $o^{\vec{U}}(\kappa_0)<\kappa_0^+$, there is $\xi<\kappa_0$ such that for every $\alpha\in Lim(C_G)\cap(\xi,\kappa_0]$ $$o^{(\kappa_0)}(\alpha)\geq \limsup_{\beta\in C_G\cap\alpha}\big(o^{(\kappa_0)}(\beta)+1\big).$$ In other words, there is $\xi_\alpha<\alpha$ such that for every $\beta\in C_G\cap(\xi_\alpha,\alpha)$, $o^{(\kappa_0)}(\beta)<o^{(\kappa_0)}(\alpha)$. \end{proposition}
\noindent\textit{Proof}. If $o^{\vec{U}}(\kappa_0)<\kappa_0$, then $o^{(\kappa_0)}(\alpha)=o^{\vec{U}}(\alpha)$ and the proposition follows from \ref{genericproperties}(4). Also if $\alpha=\kappa_0$, then clearly for every $\beta<\kappa_0$, $o^{(\kappa_0)}(\beta)<o^{\vec{U}}(\kappa_0)$ by definition. Assume that $\kappa_0\leq o^{\vec{U}}(\kappa_0)<\kappa_0^+$ and let $\pi:\kappa_0\longleftrightarrow o^{\vec{U}}(\kappa_0)$ be a bijection. For every $\rho<o^{\vec{U}}(\kappa_0)$ denote by $$E_\rho=\pi^{-1''}\rho\subseteq \kappa_0$$ and for every $\alpha<\kappa_0$ define $Y_\alpha=X^{(\kappa_0)}_{\pi(\alpha)}$. In $M_{U(\kappa_0,\rho)}$, define $$j_{U(\kappa_0,\rho)}({\langle} Y_\alpha\mid \alpha<\kappa_0{\rangle})={\langle} Y'_\alpha\mid \alpha<j_{U(\kappa_0,\rho)}(\kappa_0){\rangle}.$$ Since $crit(j_{U(\kappa_0,\rho)})=\kappa_0$, for $\alpha<\kappa_0$, $Y'_\alpha=j_{U(\kappa_0,\rho)}(Y_\alpha)$. Moreover, $j_{U(\kappa_0,\rho)}(E_{\rho})\cap\kappa_0=E_{\rho}$ and $j_{U(\kappa_0,\rho)}(Y_\alpha)\cap\kappa_0= Y_\alpha$. Hence $$\cup_{\alpha\in j_{U(\kappa_0,\rho)}(E_{\rho})\cap\kappa_0} Y'_\alpha\cap\kappa_0=\cup_{\alpha\in E_\rho}j_{U(\kappa_0,\rho)}(Y_\alpha)\cap\kappa_0=\cup_{\alpha\in E_\rho}Y_\alpha=\cup_{\xi<\rho}X^{(\kappa_0)}_\xi.$$ By coherency, $\cap_{\xi<\rho}U(\kappa_0,\xi)=\cap j_{U(\kappa_0,\rho)}(\vec{U})(\kappa_0)$, thus $$(*) \ \ \ \ \ M_{U(\kappa_0,\rho)}\models\cup_{\alpha\in j_{U(\kappa_0,\rho)}(E_{\rho})\cap\kappa_0} Y'_\alpha\cap\kappa_0\in \cap j_{U(\kappa_0,\rho)}(\vec{U})(\kappa_0).$$
Reflecting $(*)$ we get $$X'_\rho=\Big\{\beta\in X^{(\kappa_0)}_\rho \mid \cup_{\alpha\in E_{\rho}\cap\beta} Y_\alpha\cap\beta\in \cap \vec{U}(\beta) \Big\}\in U(\kappa_0,\rho).$$ Now let $\xi<\kappa_0$ be such that $C_G\cap (\xi,\kappa_0)\subseteq \cup_{\rho<o^{\vec{U}}(\kappa_0)}X'_{\rho}$, and let $\alpha\in Lim(C_G)\cap(\xi,\kappa_0)$. Denote $o^{(\kappa_0)}(\alpha)=\rho$; since the $X^{(\kappa_0)}_i$ are pairwise disjoint, $\alpha\in X'_{\rho}$. By definition of $X'_{\rho}$, $$\cup_{i\in E_{\rho}\cap\alpha} Y_i\cap\alpha\in \cap \vec{U}(\alpha)\text{ and } \forall i\in E_\rho\cap\alpha.\ Y_i\cap \alpha\in(\cap \vec{U}(\alpha))^+.$$ By \ref{genericproperties}(3) there is $\xi_\alpha<\alpha$ such that $C_G\cap (\xi_\alpha,\alpha)\subseteq \cup_{i\in E_{\rho}\cap\alpha} Y_i\cap\alpha$. In particular, for every $\beta\in C_G\cap(\xi_\alpha,\alpha)$, there is $i\in E_{\rho}\cap\alpha$ such that $\beta\in Y_i=X^{(\kappa_0)}_{\pi(i)}$. Since $i\in E_\rho$, $\pi(i)<\rho$, so $o^{(\kappa_0)}(\beta)<\rho=o^{(\kappa_0)}(\alpha)$, hence $\limsup_{\beta\in C_G\cap\alpha}(o^{(\kappa_0)}(\beta)+1)\leq o^{(\kappa_0)}(\alpha)$. $\blacksquare$
\begin{corollary}\label{The Wintessing sequence} For every $V$-generic $G\subseteq\mathbb{M}[\vec{U}]$ and for every $\kappa_0\in Lim(C_G)$ with $o^{\vec{U}}(\kappa_0)<\kappa_0^+$ there is $\eta<\kappa_0$ such that for every $\alpha\in Lim(C_G)\cap(\eta,\kappa_0]$ the following hold: \begin{enumerate}
\item If $o^{(\kappa_0)}(\alpha)=\beta+1$ is a successor ordinal, then there is $\xi<\alpha$ such that ${\rm otp}(C_G\cap X^{(\kappa_0)}_{\beta}\cap(\xi,\alpha))=\omega$, hence $cf^{V[G]}(\alpha)=\omega$.
\item If $cf^V(o^{(\kappa_0)}(\alpha))=\lambda<\kappa_0$, then $\lambda<\alpha$; let ${\langle} \rho_i\mid i<\lambda{\rangle}$ be cofinal in $o^{(\kappa_0)}(\alpha)$. Then there is $\xi<\alpha$ such that the sequence $x_i={\rm min}(C_G\cap X^{(\kappa_0)}_{\rho_i}\setminus\xi)$ is increasing and unbounded in $\alpha$, hence $cf^{V[G]}(\alpha)=cf^{V[G]}(\lambda)$.
\item Assume that $cf^V(o^{(\kappa_0)}(\alpha))=\kappa_0$, and let ${\langle} \rho_i\mid i<\kappa_0{\rangle}$ be cofinal in $o^{(\kappa_0)}(\alpha)$. Then there is $\xi<\alpha$ such that the sequence given by $x_0={\rm min}(C_G\cap(\xi,\kappa_0))$ and $x_{n+1}={\rm min}( C_G\cap X^{(\kappa_0)}_{\rho_{x_n}})$ is increasing and unbounded in $\alpha$, hence $cf^{V[G]}(\alpha)=\omega$. \end{enumerate} \end{corollary}
\noindent\textit{Proof}. For each successor $\rho=\beta+1$
consider the set
$$S_\rho=\{\alpha\in X^{(\kappa_0)}_\rho\mid X^{(\kappa_0)}_\beta\cap\alpha\in(\cap\vec{U}(\alpha))^+\}.$$
Since $j_{U(\kappa_0,\rho)}(X^{(\kappa_0)}_\beta)\cap \kappa_0=X^{(\kappa_0)}_\beta\in U(\kappa_0,\beta)$, the coherency implies that $j_{U(\kappa_0,\rho)}(X^{(\kappa_0)}_\beta)\cap \kappa_0\in (\cap j_{U(\kappa_0,\rho)}(\vec{U})(\kappa_0))^+$. By elementarity, $\kappa_0\in j_{U(\kappa_0,\rho)}(S_\rho)$, hence $S_{\rho}\in U(\kappa_0,\rho)$.
For $\rho$ such that $cf^{V}(\rho)=:\lambda<\kappa_0$, fix a cofinal sequence ${\langle} \rho_i\mid i<\lambda{\rangle}\in V$.
Consider the set $$S'_{\rho}=\{\alpha\in X^{(\kappa_0)}_\rho\mid\forall i<\lambda. X^{(\kappa_0)}_{\rho_i}\cap\alpha\in(\cap\vec{U}(\alpha))^+\}.$$
Also $S'_\rho\in U(\kappa_0,\rho)$.
Indeed, since $\lambda<\kappa_0$, $$j_{U(\kappa_0,\rho)}({\langle} X^{(\kappa_0)}_{\rho_i}\mid i<\lambda{\rangle})={\langle} j_{U(\kappa_0,\rho)}(X^{(\kappa_0)}_{\rho_i})\mid i<\lambda{\rangle}.$$ As before, for every $i<\lambda$ it follows that $j_{U(\kappa_0,\rho)}(X^{(\kappa_0)}_{\rho_i})\cap \kappa_0\in (\cap j_{U(\kappa_0,\rho)}(\vec{U})(\kappa_0))^+$, thus $S'_\rho\in U(\kappa_0,\rho)$. We shrink $S'_{\rho}$ a bit more: consider
$$S_{\rho}=\Big\{\alpha\in S'_{\rho}\mid \{\beta<\alpha\mid \forall i<\lambda. \beta\in X^{(\kappa_0)}_{\rho_i}\rightarrow\forall j<i.X^{(\kappa_0)}_{\rho_j}\cap\beta\in (\cap\vec{U}(\beta))^+\}\in\cap\vec{U}(\alpha)\Big\}.$$
To see that $S_\rho\in U(\kappa_0,\rho)$,
for every $i<\lambda$ consider the set $$E_{\rho_i}=\{\beta\in X^{(\kappa_0)}_{\rho_i}\mid\forall j<i.X^{(\kappa_0)}_{\rho_j}\cap\beta\in(\cap\vec{U}(\beta))^+\}.$$
In $M_{U(\kappa_0,\rho_i)}$, for every $j<i$, $j_{U(\kappa_0,\rho_i)}(X^{(\kappa_0)}_{\rho_j})\cap\kappa_0\in(\cap j_{U(\kappa_0,\rho_i)}(\vec{U})(\kappa_0))^+$; it follows that $E_{\rho_i}\in U(\kappa_0,\rho_i)$. For $y\in \rho\setminus\{\rho_i\mid i<\lambda\}$, set $E_{y}=X^{(\kappa_0)}_y$. Then $E:=\cup_{y<\rho}E_y\in\cap_{y<\rho}U(\kappa_0,y)$. The set $E$ has the property that for every $\beta\in E$, if $\beta\in X^{(\kappa_0)}_{\rho_i}$ for some $i<\lambda$, then $\beta\in E_{\rho_i}$ and therefore $\forall j<i.\ X^{(\kappa_0)}_{\rho_j}\cap\beta\in(\cap\vec{U}(\beta))^+$.
In $M_{U(\kappa_0,\rho)}$, by coherency, $o^{j_{U(\kappa_0,\rho)}(\vec{U})}(\kappa_0)=\rho$ and for every $\beta<\kappa_0$, $\cap j_{U(\kappa_0,\rho)}(\vec{U})(\beta)=\cap\vec{U}(\beta)$. Also $E\in M_{U(\kappa_0,\rho)}$ (by $\kappa_0$-closure) and $E\in \cap j_{U(\kappa_0,\rho)}(\vec{U})(\kappa_0)$. Denote $X'_i=j_{U(\kappa_0,\rho)}(X^{(\kappa_0)}_{\rho_i})$; then for every $\beta\leq\kappa_0$, $X'_i\cap\beta= X^{(\kappa_0)}_{\rho_i}\cap\beta$. It follows that $$M_{U(\kappa_0,\rho)}\models \{\beta<\kappa_0\mid \forall i<\lambda. \beta\in X'_{i}\rightarrow\forall j<i.X'_j\cap\beta\in (\cap j_{U(\kappa_0,\rho)}(\vec{U})(\beta))^+\}\in\cap j_{U(\kappa_0,\rho)}(\vec{U})(\kappa_0) .$$
Reflecting this, we get that $S_\rho\in U(\kappa_0,\rho)$.
If $cf^V(\rho)=\kappa_0$, fix a continuous cofinal sequence ${\langle} \rho_i\mid i<\kappa_0{\rangle}\in V$, consider
$$S'_\rho=\{\alpha\in X^{(\kappa_0)}_\rho\mid \forall i<\alpha. X^{(\kappa_0)}_{\rho_i}\cap\alpha\in(\cap\vec{U}(\alpha))^+\}.$$
Then as before $S'_\rho\in U(\kappa_0,\rho)$. Next, consider
$$S_\rho=\Big\{\alpha\in S'_{\rho}\mid \{\beta<\alpha\mid \exists\zeta<\beta.\cup_{i<\rho_{\zeta}}X^{(\kappa_0)}_i\cap\beta\in\cap\vec{U}(\beta)\}\in \cap\vec{U}(\alpha)\Big\}.$$
To see that $S_{\rho}\in U(\kappa_0,\rho)$, let $\xi<\rho$, find $\zeta<\kappa_0$ such that $\rho_\zeta>\xi$. Denote
$$j_{U(\kappa_0,\xi)}({\langle} X^{(\kappa_0)}_i\mid i<o^{\vec{U}}(\kappa_0){\rangle})=\langle X'_i\mid i<o^{j(\vec{U})}(j_{U(\kappa_0,\xi)}(\kappa_0)){\rangle}, \ j_{U(\kappa_0,\xi)}({\langle} \rho_i\mid i<\kappa_0{\rangle})={\langle} \rho'_i\mid i<j_{U(\kappa_0,\xi)}(\kappa_0){\rangle}.$$
then $\rho_\zeta\leq j_{U(\kappa_0,\xi)}(\rho_\zeta)=\rho'_{\zeta}$. It follows that $\cup_{i<\rho'_{\zeta}}X'_i\cap\kappa_0\in\cap_{i<\xi}U(\kappa_0,i)=\cap j_{U(\kappa_0,\xi)}(\vec{U})(\kappa_0)$. To see this, note that for every $y<\xi$, $j_{U(\kappa_0,\xi)}(y)<j_{U(\kappa_0,\xi)}(\rho_{\zeta})=\rho'_{\zeta}$, hence $$X^{(\kappa_0)}_y=j_{U(\kappa_0,\xi)}(X^{(\kappa_0)}_y)\cap\kappa_0=X'_{j_{U(\kappa_0,\xi)}(y)}\cap\kappa_0\subseteq \cup_{i<\rho'_{\zeta}}X'_i\cap\kappa_0.$$
This means that in $M_{U(\kappa_0,\xi)}$,
$$\exists\zeta<\kappa_0. \ \cup_{i<\rho'_\zeta}X'_i\cap\kappa_0\in\cap j_{U(\kappa_0,\xi)}(\vec{U})(\kappa_0).$$
Reflecting this, we get that for every $\xi<\rho$,
$$\{\beta<\kappa_0\mid \exists\zeta<\beta.\cup_{i<\rho_\zeta}X^{(\kappa_0)}_i\cap\beta\in\cap\vec{U}(\beta)\}\in U(\kappa_0,\xi).$$
Now in $M_{U(\kappa_0,\rho)}$ using coherency it follows that
$$\{\beta<\kappa_0\mid \exists\zeta<\beta.\cup_{i<\rho_\zeta}X^{(\kappa_0)}_i\cap\beta\in\cap\vec{U}(\beta)\}\in\cap_{\xi<\rho}U(\kappa_0,\xi)=\cap j_{U(\kappa_0,\rho)}(\vec{U})(\kappa_0).$$
Finally, reflect this to conclude that $S_\rho\in U(\kappa_0,\rho)$.
By \ref{genericproperties}(3) there is $\eta'$ such that $C_G\cap(\eta',\kappa_0)\subseteq\cup_{\rho<o^{\vec{U}}(\kappa_0)}S_\rho$, define $\eta={\rm max}\{\eta',\xi\}<\kappa_0$ where $\xi$ is from Proposition \ref{increasing order at limits}. Let $\alpha\in Lim(C_G)\cap(\eta,\kappa_0)$, then $\alpha\in S_{o^{(\kappa_0)}(\alpha)}$. Since $\alpha>\xi$, there is $\xi\leq \xi_\alpha<\alpha$ such that for every $\nu\in C_G\cap(\xi_\alpha,\alpha)$, $o^{(\kappa_0)}(\nu)<o^{(\kappa_0)}(\alpha)$.
If $o^{(\kappa_0)}(\alpha)=\beta+1$, then $X^{(\kappa_0)}_\beta\cap\alpha\in(\cap\vec{U}(\alpha))^+$, hence by \ref{genericproperties}(5), ${\rm sup}(X^{(\kappa_0)}_{\beta}\cap\alpha\cap C_G)=\alpha$. Let us argue that ${\rm otp}(X^{(\kappa_0)}_\beta\cap C_G\cap(\xi_\alpha,\alpha))=\omega$. Otherwise, denote by $\mu$ the $\omega$-th element of $X^{(\kappa_0)}_\beta\cap C_G\cap(\xi_\alpha,\alpha)$; then $\mu<\alpha$. Since $\mu>\xi$, Proposition \ref{increasing order at limits} implies that $o^{(\kappa_0)}(\mu)\geq \beta+1$. On the other hand, $\mu>\xi_\alpha$, thus $o^{(\kappa_0)}(\mu)<o^{(\kappa_0)}(\alpha)$, a contradiction.
If $cf^V(o^{(\kappa_0)}(\alpha))=:\lambda<\kappa_0$, then by the definition of $S_{o^{(\kappa_0)}(\alpha)}$, $\forall i<\lambda.\ X^{(\kappa_0)}_{\rho_i}\cap\alpha\in(\cap\vec{U}(\alpha))^+$, hence by \ref{genericproperties}(5), for every $i<\lambda$, ${\rm sup}(X^{(\kappa_0)}_{\rho_i}\cap\alpha\cap C_G)=\alpha$; thus the sequence of $x_i$'s defined in the statement, starting above any $\xi<\alpha$, is well defined. The second property of $S_{o^{(\kappa_0)}(\alpha)}$ is that $$Y:=\{\beta<\alpha\mid \forall i<\lambda.\ \beta\in X^{(\kappa_0)}_{\rho_i}\rightarrow\forall j<i.\ X^{(\kappa_0)}_{\rho_j}\cap\beta\in (\cap\vec{U}(\beta))^+\}\in\cap\vec{U}(\alpha).$$ By \ref{genericproperties}(3) there is $\xi\leq \zeta_\alpha<\alpha$ such that $C_G\cap(\zeta_\alpha,\alpha)\subseteq Y$. Start the definition of the $x_i$'s above $\zeta_\alpha$. To see that the sequence is increasing, note that $x_i\in C_G\cap (\zeta_\alpha,\alpha)\cap X^{(\kappa_0)}_{\rho_i}$, so by the definition of $Y$, for every $j<i$, $X^{(\kappa_0)}_{\rho_j}\cap x_i\in (\cap\vec{U}(x_i))^+$; again by \ref{genericproperties}(5), for every $j<i$, ${\rm sup}(X^{(\kappa_0)}_{\rho_j}\cap x_i\cap C_G)=x_i$, and therefore by minimality of $x_j$ it follows that $x_j<x_i$ for $j<i$. To see that the sequence of $x_i$'s is unbounded, notice that otherwise its limit would be some $\zeta\in (\zeta_\alpha,\alpha)$. Since the $x_i$'s are increasing, Proposition \ref{increasing order at limits} implies $$o^{(\kappa_0)}(\zeta)\geq \limsup_{i<\lambda}\,(o^{(\kappa_0)}(x_i)+1) =\limsup_{i<\lambda}\,(\rho_i+1)= o^{(\kappa_0)}(\alpha),$$ contradicting the choice of $\xi_\alpha$.
Finally, if $cf^V(o^{(\kappa_0)}(\alpha))=\kappa_0$, then $\forall i<\alpha.\ X^{(\kappa_0)}_{\rho_i}\cap\alpha\in(\cap\vec{U}(\alpha))^+$, hence by \ref{genericproperties}(5), $\forall i<\alpha.\ {\rm sup}(X^{(\kappa_0)}_{\rho_i}\cap\alpha\cap C_G)=\alpha$. If the limit $x^*$ of the $x_n$'s defined in the statement were less than $\alpha$, then by the definition of $S_{o^{(\kappa_0)}(\alpha)}$ there would be $\zeta<x^*$ such that $\cup_{i<\rho_{\zeta}}X^{(\kappa_0)}_i\cap x^*\in\cap \vec{U}(x^*)$.
To derive a contradiction, on the one hand there is $\sigma< x^*$ such that $C_G\cap (\sigma,x^*)\subseteq \cup_{i<\rho_{\zeta}}X^{(\kappa_0)}_i\cap x^*$, so there is $N<\omega$ such that for all $n\geq N$, $x_n\in \cup_{i<\rho_{\zeta}}X^{(\kappa_0)}_i$.
On the other hand, find $N\leq n<\omega$ such that $x_n>\zeta$; then $o^{(\kappa_0)}(x_{n+1})=\rho_{x_n}>\rho_{\zeta}$, which implies $x_{n+1}\notin \cup_{i<\rho_{\zeta}}X^{(\kappa_0)}_i$.$\blacksquare$ \begin{corollary}\label{VGenericCardinals} Let $G\subseteq \mathbb{M}[\vec{U}]$ be $V$-generic and assume that $o^{\vec{U}}(\kappa)<\kappa^+$. Then for every $V$-regular cardinal $\alpha$, $cf^{V[G]}(\alpha)<\alpha$ iff $\alpha\in C_G\cup \{\kappa\}$ and $0<o^{\vec{U}}(\alpha)<\alpha^+$. \end{corollary} \subsection{Other preliminaries} In the last part of the proof we will need to analyze the quotient forcing. Let us recall some basic facts about it: \begin{definition} Let $\mathbb{P},\mathbb{Q}$ be forcing notions. A function $\tau:\mathbb{P}\rightarrow\mathbb{Q}$ is a projection iff $\tau$ is order preserving, $Im(\tau)$ is dense, and $$\forall p\in \mathbb{P}.\forall q\geq\tau(p).\exists p'\geq p.\ \tau(p')\geq q.$$ \end{definition} \begin{definition}\label{definition of quotient} Let $\mathbb{P},\mathbb{Q}\in V$ be forcing notions, let $\tau:\mathbb{P}\rightarrow\mathbb{Q}$ be any projection and let $H\subseteq\mathbb{Q}$ be $V$-generic. Define \textit{the quotient forcing} $\mathbb{P}/H=\tau^{-1''}H$. Also, if $G\subseteq \mathbb{P}$ is a $V$-generic filter, \textit{the projection of $G$} is the filter $$\tau_*(G):=\{q\in\mathbb{Q}\mid\exists p\in G. q\leq_{\mathbb{Q}}\tau(p)\}.$$ \end{definition} \begin{proposition}\label{properties of quotient} Let $\tau:\mathbb{P}\rightarrow\mathbb{Q}$ be a projection, then: \begin{enumerate} \item If $G\subseteq\mathbb{P}$ is $V$-generic then $\tau_*(G)$ is a $V$-generic filter for $\mathbb{Q}$.
\item If $G\subseteq\mathbb{P}$ is $V$-generic then $G\subseteq\mathbb{P}/\tau_*(G)$ is a $V[\tau_*(G)]$-generic filter.
\item If $H\subseteq\mathbb{Q}$ is $V$-generic and $G\subseteq\mathbb{P}/H$ is $V[H]$-generic, then $\tau_*(G)=H$ and $G\subseteq\mathbb{P}$ is $V$-generic. \end{enumerate} \end{proposition} \begin{definition}\label{definition of equivalent subalgebra}
Let $\mathbb{P}$ be a forcing notion and $\lusim{D}$ be a $\mathbb{P}$-name for a subset of $\kappa$. Define $\mathbb{P}_{\lusim{D}}$, the complete subalgebra of the algebra of regular open cuts ${\langle} RO(\mathbb{P}),\leq_B{\rangle}$\footnote{$RO(\mathbb{M}[\vec{U}])$ is the set of all regular open cuts of $\mathbb{M}[\vec{U}]$ (see for example \cite[Thm. 14.10]{Jech2003}); as usual, we identify $\mathbb{M}[\vec{U}]$ with a dense subset of $RO(\mathbb{M}[\vec{U}])$. The order $\leq_B$ is the standard Boolean algebra order, i.e. $p\leq_B q$ means $p\Vdash q\in \hat{G}$.} generated by the set $X=\{||\alpha\in \lusim{D}||\mid \alpha<\kappa\}$.\end{definition}
\begin{definition}\label{definition of projection} Define the function $\pi:\mathbb{P}\rightarrow \mathbb{P}_{\lusim{D}}$ by $\pi(p)=\inf\{b\in \mathbb{P}_{\lusim{D}}\mid p\leq_B b\}$. \end{definition}
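A standard illustration of a projection in the sense defined above, not taken from the text, is the coordinate map on a product forcing (here $\mathbb{Q}_0,\mathbb{Q}_1$ are arbitrary forcing notions, and the order convention is as in the definition of projection).

```latex
% Projection from a product forcing to its first coordinate:
\tau:\mathbb{Q}_0\times\mathbb{Q}_1\rightarrow\mathbb{Q}_0,\qquad \tau(q_0,q_1)=q_0.
% \tau is order preserving and onto (so Im(\tau) is dense), and given
% p=(q_0,q_1) and q\geq\tau(p)=q_0, the condition p'=(q,q_1)\geq p
% satisfies \tau(p')=q\geq q, as required.
```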
It is not hard to check that $\pi$ is a projection. Let $G$ be $V$-generic for $\mathbb{P}$ and let $D\subseteq \kappa$ be the interpretation of $\lusim{D}$ under $G$, i.e. $\lusim{D}_G=D$. Denote by $H=\pi_*(G)$ the $V$-generic filter for $\mathbb{P}_{\lusim{D}}$ induced by $G$; then $V[D]=V[H]$ (see for example \cite[Lemma 15.42]{Jech2003}). In fact, $$D=\{\alpha<\kappa\mid ||\alpha\in \lusim{D}||\in X\cap H\}.$$ As for the other direction, any generic filter $H$ is definable and uniquely determined (see \cite[Lemma 15.40]{Jech2003}) by the set
$$X\cap H=\{||\alpha\in\lusim{D}||\mid \alpha\in D\}.$$ We sometimes abuse notation by writing $\mathbb{P}/D$ for $\mathbb{P}/\pi_*(G)$. It is important to note that $\mathbb{P}/D$ depends on the choice of the name $\lusim{D}$.
\begin{definition}\label{Index}
Let $X,X'$ be sets of ordinals such that $X'\subseteq X\subseteq On$. Let $\alpha=otp(X,\in)$ be the order type of $X$ and $\phi:\alpha\rightarrow X$ be the order isomorphism witnessing it. The indices of $X'$ in $X$ are $$Ind(X',X)=\phi^{-1''}X'=\{\beta<\alpha\mid \phi(\beta)\in X'\}.$$ \end{definition} \begin{definition}
We denote $X\subseteq^* Y$ if $X\setminus Y$ is finite. Also define $X=^*Y$ if $X\subseteq^* Y\wedge Y\subseteq^* X$, equivalently, if $X\bigtriangleup Y$ is finite. \end{definition} Notice that $X\subseteq^*Y$ sometimes denotes inclusion modulo \textit{bounded} sets; however, in this paper $X\subseteq^* Y$ means inclusion modulo \textit{finite} sets. In the next theorem we will need the Erd\H{o}s--Rado theorem \cite{ErdesRado}, which is stated here for the convenience of the reader (for the proof see \cite[Theorem 7.3]{kanamori1994} or \cite{YairEskew}). \begin{theorem}\label{ER} If $\theta$ is a regular cardinal, then for every $\rho<\theta$, $$(2^{<\theta})^+\rightarrow(\theta+1)^2_{\rho},$$ i.e. for every $f:[(2^{<\theta})^+]^2\rightarrow \rho$ there is $H\subseteq (2^{<\theta})^+$ such that ${\rm otp}(H)=\theta+1$ and $f\restriction [H]^2$ is constant. \end{theorem}
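For concreteness, taking $\theta=\aleph_1$ and $\rho=\aleph_0$ (so that $2^{<\theta}=2^{\aleph_0}$) yields the classical instance of the Erd\H{o}s--Rado theorem:

```latex
% theta = aleph_1, rho = aleph_0, and 2^{<aleph_1} = 2^{aleph_0}:
(2^{\aleph_0})^{+}\rightarrow(\omega_1+1)^{2}_{\aleph_0},
% i.e. every colouring f:[(2^{aleph_0})^+]^2 -> omega admits a
% homogeneous set of order type omega_1 + 1.
```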
\begin{theorem}\label{Modfinitestab} Let $\aleph_0<\lambda$ be a strong limit cardinal and $\mu>\lambda$ be regular. Let $\langle D_\alpha\mid \alpha<\mu\rangle$ be any $\subseteq^*$-increasing sequence of subsets of $\lambda$. Then the sequence $=^*$-stabilizes, i.e. there is $\alpha^*<\mu$ such that for every $\alpha^*\leq \alpha<\mu$, $D_\alpha=^*D_{\alpha^*}$. \end{theorem} \begin{remark} The theorem fails for $\lambda=\aleph_0$. Let us construct a counterexample:
Define ${\langle} D_i\mid i<\omega_1{\rangle}$ a sequence of subsets of $\omega$ by induction, such that: \begin{enumerate}
\item ${\langle} D_i\mid i<\omega_1{\rangle}$ is $\subseteq^*$-increasing.
\item For all $i<j<\omega_1$, $|D_j\setminus D_i|=\aleph_0$.
\item For every $i<\omega_1$, $|\omega\setminus D_i|=\aleph_0$.
\end{enumerate} Let $D_0=\emptyset$. Assume that for $\alpha<\omega_1$, ${\langle} D_i\mid i<\alpha{\rangle}$ is $\subseteq^*$-increasing, and let us define $D_\alpha$. If $\alpha=\beta+1$, then by $(3)$, $|\omega\setminus D_\beta|=\aleph_0$; let $\omega\setminus D_{\beta}=X\uplus Y$ where $|X|=|Y|=\aleph_0$. Define $D_\alpha=D_{\beta}\cup X$. If $\alpha$ is a limit, then $cf(\alpha)=\omega$; let ${\langle} \alpha_n\mid n<\omega{\rangle}$ be increasing and cofinal in $\alpha$ and denote $E_n=D_{\alpha_n}$.
We construct natural numbers $x_n,y_n$. By $(3)$, $|\omega\setminus E_0|=\aleph_0$; let $x_0,y_0\in \omega\setminus E_0$ be distinct. Assume that $x_k, y_k$ are defined for every $k\leq n$; we claim that $Z=\omega\setminus((\cup_{m\leq n+1}E_{m})\cup\{x_k,y_k\mid k\leq n\})$ is infinite. Indeed, for each $m\leq n$, $E_m\subseteq^* E_{n+1}$, hence $R_m:=E_m\setminus E_{n+1}$ is finite. It follows that $R=\cup_{m\leq n}R_m$ is finite and that $\cup_{m\leq n+1}E_{m}=E_{n+1}\cup R$. Now apply $(3)$ to $E_{n+1}$ to see that $Z$ is infinite, and pick $x_{n+1},y_{n+1}\in Z$ distinct. Clearly $$|\{x_n\mid n<\omega\}|=|\{y_n\mid n<\omega\}|=\aleph_0\text{ and }\{x_n\mid n<\omega\}\cap\{y_n\mid n<\omega\}=\emptyset.$$ Let $D_{x,\alpha}=\omega\setminus\{x_n\mid n<\omega\}$ and $D_{y,\alpha}=\omega\setminus\{y_n\mid n<\omega\}$. We claim that for every $n<\omega$, $E_n\subseteq^* D_{x,\alpha},D_{y,\alpha}$. By symmetry it suffices to show it for $D_{x,\alpha}$. If $r\in E_n\setminus D_{x,\alpha}$, then there is $m$ such that $r=x_m$; since for every $m\geq n$, $x_m\notin E_n$, it follows that $m<n$. Thus $E_n\setminus D_{x,\alpha}\subseteq \{x_m\mid m<n\}$, implying $E_n\subseteq^* D_{x,\alpha}$. Let us argue that either for every $n<\omega$, $|D_{x,\alpha}\setminus E_n|=\aleph_0$, or for every $n<\omega$, $|D_{y,\alpha}\setminus E_n|=\aleph_0$. Assume otherwise, so there is $n<\omega$ such that $D_{x,\alpha}=^*E_n$ and there is $k<\omega$ such that $D_{y,\alpha}=^*E_k$. For every $n\leq m<\omega$, $$D_{x,\alpha}=^*E_n\subseteq^* E_m\subseteq^*D_{x,\alpha},$$ hence $E_m=^*D_{x,\alpha}$. In the same way we see that for every $k\leq m<\omega$, $E_m=^*D_{y,\alpha}$. Let $m>{\rm max}\{n,k\}$. Then $D_{y,\alpha}=^*E_m=^*D_{x,\alpha}$, a contradiction.
Without loss of generality, assume that for every $n<\omega$, $|D_{x,\alpha}\setminus E_n|=\aleph_0$, and define $D_\alpha=D_{x,\alpha}$. Let us verify $(1),(2),(3)$. For $(1)$: given $\beta<\alpha$, find $n<\omega$ such that $\beta<\alpha_n$; then $D_\beta\subseteq^* D_{\alpha_n}=E_n\subseteq^* D_\alpha$. For $(2)$: note that $D_{\alpha}\setminus D_{\alpha_n}\subseteq (D_{\alpha}\setminus D_{\beta})\cup( D_{\beta}\setminus D_{\alpha_n})$. Since $|D_{\alpha}\setminus D_{\alpha_n}|=\aleph_0$ and $|D_{\beta}\setminus D_{\alpha_n}|<\aleph_0$, it follows that $|D_{\alpha}\setminus D_{\beta}|=\aleph_0$. Finally, $(3)$ follows since $\{x_n\mid n<\omega\}\subseteq\omega\setminus D_\alpha$.
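The only computational content of the successor step is splitting the infinite complement of $D_\beta$ into two infinite halves and adding one of them. A minimal sketch of this step (our own illustration; the names are ours, and the limit step, which requires the choices above, is not reproduced):

```python
# Toy sketch (our own illustration) of the successor step D_{beta+1} = D_beta + X,
# where the complement of D_beta splits as X and Y, both infinite.
# A co-infinite subset of omega is represented by a membership predicate.

def complement_rank(D, n):
    """Position of n in the increasing enumeration of the complement of D
    (assumes n is not in D)."""
    return sum(1 for m in range(n) if not D(m))

def successor(D):
    """D together with X, where X consists of the even-rank elements of the
    complement of D."""
    return lambda n: D(n) or complement_rank(D, n) % 2 == 0

D0 = lambda n: False   # D_0 = empty set
D1 = successor(D0)     # complement of D_0 is omega, so X = the even numbers
D2 = successor(D1)     # complement of D_1 is the odds; every other odd is added
```

By construction each successor set still has an infinite complement, which is exactly what clause $(3)$ demands.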
\end{remark} \textit{Proof of \ref{Modfinitestab}.}
Toward a contradiction, assume that the theorem fails. Then by regularity of $\mu$, there is $Y\subseteq \mu$ such that $|Y|=\mu$ and for every $\alpha,\beta\in Y$, if $\alpha<\beta$ then $D_\alpha\subseteq^* D_\beta$ and $|D_\beta\setminus D_\alpha|\geq\aleph_0$. For every $\xi<\lambda$, find $E_{\xi}\subseteq \xi$ such that the set $$X_\xi:=\{\nu<\mu\mid D_\nu\cap \xi=E_\xi\}$$ is unbounded in $\mu$, and set $\alpha_\xi:={\rm min}(X_\xi)$. Since the sequence of $D_\alpha$'s is $\subseteq^*$-increasing, for every $\alpha_\xi\leq\alpha<\mu$, $D_\alpha\cap \xi=^* E_\xi$. To see this, find $\beta\in X_\xi$ such that $\alpha_\xi\leq\alpha\leq\beta$; then $D_{\alpha_\xi}\subseteq^* D_{\alpha}\subseteq^* D_{\beta}$, hence $$E_\xi=D_{\alpha_\xi}\cap \xi\subseteq^* D_{\alpha}\cap \xi\subseteq^* D_{\beta}\cap \xi=E_\xi.$$ Set $\alpha^*={\rm sup}\{\alpha_\xi\mid \xi<\lambda\}$; by regularity of $\mu$, $\alpha^*<\mu$. It follows that $$(*) \ \ \text{For every }\delta<\lambda\text{ and every }\alpha^*\leq \beta_1<\beta_2<\mu, \ D_{\beta_1}\cap\delta=^*E_{\delta}=^* D_{\beta_2}\cap\delta,$$ and that
$$(**) \ \ \text{For every }\alpha^*\leq \beta_1<\beta_2<\mu, \ \ |D_{\beta_1}\Delta D_{\beta_2}|\leq\aleph_0.$$
To see $(**)$, assume otherwise; then there are $\beta_1,\beta_2$ such that $|D_{\beta_1}\Delta D_{\beta_2}|\geq\aleph_1$. Thus there is $\delta<\lambda$ such that $|(D_{\beta_1}\cap\delta)\Delta (D_{\beta_2}\cap\delta)|\geq\aleph_0$, contradicting $(*)$.
Also $cf(\lambda)=\aleph_0$: for any distinct $\beta_1,\beta_2\in Y\setminus\alpha^*$, $|D_{\beta_1}\Delta D_{\beta_2}|\geq\aleph_0$ by the choice of $Y$, and $|D_{\beta_1}\Delta D_{\beta_2}|\leq\aleph_0$ by $(**)$, so $|D_{\beta_1}\Delta D_{\beta_2}|=\aleph_0$. Moreover, $D_{\beta_1}\Delta D_{\beta_2}$ cannot be bounded in $\lambda$: if it were bounded by some $\delta<\lambda$, then by $(*)$ it would be finite. Hence $\lambda$ has a countable unbounded subset, i.e.\ $cf(\lambda)=\aleph_0$.
Let $\chi:=(2^{<\aleph_1})^+=(2^{\aleph_0})^+$. Since $\lambda>\aleph_0$ is a strong limit cardinal, $\chi<\lambda<\mu$. Fix any $X\subseteq Y\setminus\alpha^*$ such that $|X|=\chi$. Define a partition $f:[X]^2\rightarrow \omega$:
Let ${\langle} \eta_n\mid n<\omega{\rangle}$ be cofinal in $\lambda$. For any $i<j$ in $X$, $D_i\subseteq^* D_j$, hence there is $n_{i,j}<\omega$ such that $(D_{i}\setminus \eta_{n_{i,j}})\subseteq (D_{j}\setminus \eta_{n_{i,j}})$: simply pick $\eta_{n_{i,j}}$ above all of the finitely many elements of $D_{i}\setminus D_{j}$. Then set $$f(i,j)=n_{i,j}.$$ Apply the Erd\H{o}s--Rado theorem to find $I\subseteq X$ with ${\rm otp}(I)=\omega_1+1$ which is homogeneous with color $n^*<\omega$. This means that for any $i<j$ in $I$, $D_{i}\setminus\eta_{n^*}\subseteq D_{j}\setminus \eta_{n^*}$. Recall that $i,j\in Y\setminus\alpha^*$, so $|D_j\setminus D_i|\geq\aleph_0$ by the choice of $Y$, while by $(*)$ only finitely many elements of $D_j\setminus D_i$ lie below $\eta_{n^*}$; it follows that $(D_j\setminus \eta_{n^*})\setminus (D_i\setminus \eta_{n^*})$ is infinite.
Let $\langle i_\rho\mid \rho<\omega_1+1\rangle$ be the increasing enumeration of $I$. We will prove that $|D_{i_{\omega_1}}\setminus D_{i_0}|\geq\omega_1$, and since $i_0,i_{\omega_1}\geq\alpha^*$, this is a contradiction to $(**)$.
Indeed, for every $r<\omega_1$, pick any $\delta_r$ from the infinite set $(D_{i_{r+1}}\setminus\eta_{n^*})\setminus (D_{i_r}\setminus \eta_{n^*})$. Since the sequence ${\langle} D_{i_r}\setminus \eta_{n^*}\mid r\leq\omega_1{\rangle}$ is $\subseteq$-increasing, for every $\beta\leq r<\alpha\leq\omega_1$, $\delta_r\in D_{i_{\alpha}}\setminus D_{i_\beta}$.
In particular, for every $r<\omega_1$, $\delta_r\in D_{i_{\omega_1}}\setminus D_{i_0}$, so the map $r\mapsto \delta_r$ is a well-defined map from $\omega_1$ to $D_{i_{\omega_1}}\setminus D_{i_0}$. Also, if $r_1<r_2<\omega_1$, then $\delta_{r_2}\notin D_{i_{r_1+1}}$ while $\delta_{r_1}\in D_{i_{r_1+1}}$, so $\delta_{r_1}\neq \delta_{r_2}$. Thus we have found an injection of $\omega_1$ into $D_{i_{\omega_1}}\setminus D_{i_0}$, contradicting $(**)$.$\blacksquare$
\subsection{Fat Trees} When $o^{\vec{U}}(\kappa)$ is, for example, $\omega_1$, the strong Prikry property for $\mathbb{M}[\vec{U}]$ ensures that given $p\in\mathbb{M}[\vec{U}]$ and a dense open set $D\subseteq\mathbb{M}[\vec{U}]$, there is a choice of measures $U(\kappa_1,i_1),...,U(\kappa_n,i_n)$, where $\kappa_1\leq...\leq\kappa_n\leq\kappa$, and a direct extension $p\leq^*p^*$ such that for every choice $\vec{\alpha}\in A_1\times...\times A_n $ from the sets $A_1,...,A_n$ associated to $U(\kappa_1,i_1),...,U(\kappa_n,i_n)$, $p^{*\smallfrown}{\langle}\alpha_1,...,\alpha_n{\rangle}\in D$. This means that already in the ground model we can determine the measures which are necessary to enter $D$.
For higher orders of $\kappa$ this is no longer the case. For example, assume that $o^{\vec{U}}(\kappa)=\kappa$ and consider the first element of $C_G$, i.e.\ $C_G(0)$. Since ${\rm otp}(C_G)=\kappa$, we may consider $C_G(C_G(0))$. Let $\lusim{x}$ be such that $\Vdash_{\mathbb{M}[\vec{U}]} \lusim{x}=C_{\lusim{G}}(C_{\lusim{G}}(0))$. Consider any condition of the form $p={\langle} {\langle}\kappa,A{\rangle}{\rangle}$. There is no choice of measures in the ground model and no direct extension of $p$ which determines $\lusim{x}$. Instead, we can construct a tree $T$ with two levels. The first level is simply all the ordinals which can be $C_G(0)$, namely ${\rm Lev}_1(T)=\{\alpha\in A\mid o^{\vec{U}}(\alpha)=0\}\in U(\kappa,0)$. Now any extension of the form $p^{\smallfrown}\alpha$ for $\alpha\in {\rm Lev}_1(T)$ forces that $C_G(0)=\alpha$, so to determine $\lusim{x}$ we only need to pick some ordinal in the set $\{\beta\in A\setminus(\alpha+1)\mid o^{\vec{U}}(\beta)=\alpha\}\in U(\kappa,\alpha)$. Hence we define ${\rm Succ}_{T}({\langle}\alpha{\rangle})=\{\beta\in A\setminus(\alpha+1)\mid o^{\vec{U}}(\beta)=\alpha\}$. Since the measure used in the second level is different for every choice of $\alpha$, we cannot find a single measure that will turn this tree into a product.
This section is devoted to the study of some combinatorial aspects of such trees. \begin{definition}\label{fat-tree} Let $\vec{U}$ be a coherent sequence of normal measures and $\theta_1\leq...\leq\theta_n$ be measurables with $o^{\vec{U}}(\theta_i)>0$. A \textit{$\vec{U}$-fat tree} on $\theta_1\leq...\leq\theta_n$ is a tree $\langle T, \leq_{T}\rangle$ such that \begin{enumerate} \item $T\subseteq\prod_{i=1}^n\theta_i$ and $\langle \ \rangle\in T$. \item $\leq_{T}$ is the end-extension order, i.e.\ $t\leq_{T}s \Leftrightarrow t=s\cap({\rm max}(t)+1)$. \item $T$ is downward closed with respect to end-extension. \item For any $t\in T$ one of the following holds: \begin{enumerate}
\item $|t|=n$.
\item $|t|<n$ and there is $\beta< o^{\vec{U}}(\theta_{|t|+1})$ such that $\{\alpha\mid t^{\frown}\langle\alpha\rangle\in T\}\in U(\theta_{|t|+1},\beta)$.
\end{enumerate} \end{enumerate} \end{definition} Some usual notations of trees: \begin{enumerate} \item ${\rm Succ}_T(t)=\{\alpha\mid t^{\smallfrown}\langle\alpha\rangle\in T\}$.
\item For each $t\in T$ with $|t|<n$, choose $\xi(t)$ such that ${\rm Succ}_T(t)\in U(\theta_{|t|+1},\xi(t))$, and define $U^{(T)}_t=U(\theta_{|t|+1},\xi(t))$ (we drop the superscript $(T)$ when there is no risk of confusion). \item Note that if the measures in $\vec{U}$ can be separated, i.e.\ there are sets $\langle X(\alpha,\beta)\mid \langle\alpha,\beta\rangle\in Dom(\vec{U})\rangle$ such that $X(\alpha,\beta)\in U(\alpha,\beta)$ and $X(\alpha,\beta)\notin U(\alpha',\beta')$ whenever $\langle\alpha',\beta'\rangle\neq\langle\alpha,\beta\rangle$, then we can intersect each set of the form ${\rm Succ}_T(t)$ with the appropriate $X(\alpha,\beta)$, and then $\xi(t)$ has a unique choice. \item $ht(t)={\rm otp}(\{s\in T\mid s<_T t\})$. \item ${\rm Lev}_i(T)=\{t\in T\mid ht(t)=i\}$. \item The height of the tree is $ht(T)={\rm max}(\{m<\omega\mid {\rm Lev}_m(T)\neq\emptyset\})$. \item We will assume that if $\theta_i<\theta_{i+1}$ then for every $t\in {\rm Lev}_i(T)$, ${\rm min}({\rm Succ}_T(t))>\theta_i$. \item For $t\in T$, the tree above $t$ is $T/t=\{s\in T\mid t\leq_Ts\}$. We identify $T/t$ with the $\vec{U}$-fat tree $\{s\setminus t\mid s\in T/t\}$. \item The set of all maximal branches of $T$ is denoted by $mb(T)={\rm Lev}_{ht(T)}(T)$. In general, we identify maximal branches of the tree with points at the top level. Note that $mb(T)$ completely determines $T$. \item For $J\subseteq \{0,1,...,ht(T)\}$, $T\restriction J=\{t\restriction J\mid t\in T\}$. \end{enumerate}
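Since $mb(T)$ completely determines $T$, a finite toy analogue of these notations can be represented by a set of maximal branches alone; the sketch below (our own illustration, with integers in place of ordinals and no measures) recovers levels, successors, and restrictions from $mb(T)$.

```python
# Toy finite analogue (our own illustration): a tree of height n given by its
# set mb(T) of maximal branches, each branch an increasing tuple.

def level(mb, i):
    """Lev_i(T): the length-i prefixes of maximal branches."""
    return {t[:i] for t in mb}

def succ(mb, t):
    """Succ_T(t): the possible next coordinates above the node t."""
    return {s[len(t)] for s in mb if s[:len(t)] == t and len(s) > len(t)}

def restrict(mb, J):
    """T restricted to J: keep only the coordinates whose (1-based) index lies in J."""
    return {tuple(t[j - 1] for j in sorted(J)) for t in mb}

mb = {(1, 2, 5), (1, 2, 6), (1, 3, 7)}
```

The fat-tree condition itself (each successor set belonging to some $U(\theta_{|t|+1},\beta)$) has no finite analogue, so it is deliberately omitted here.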
For every $\vec{U}$-fat tree $T$ on $\theta_1\leq...\leq\theta_n$ of height $n$, define \textit{the iteration associated to $T$}, ${\langle} j^{(T)}_{m,k},M_k\mid 0\leq m\leq k\leq n{\rangle}$; we usually drop the superscript $T$. Let $V=M_0$ and $$j_1=j_{0,1}:=j_{U^{(T)}_{{\langle}{\rangle}}}:V\rightarrow Ult(V,U^{(T)}_{{\langle}{\rangle}})\simeq M_{U^{(T)}_{{\langle}{\rangle}}}:=M_1.$$ Then $crit(j_{1})=\theta_1\in j_{1}({\rm Succ}_T({\langle}{\rangle}))={\rm Succ}_{j_{1}(T)}({\langle}{\rangle})$. Thus ${\langle}\theta_1{\rangle}\in {\rm Lev}_1(j_1(T))$.
Assume that ${\langle} j_{m,m'},M_{m'}\mid 0\leq m\leq m'\leq k{\rangle}$ is defined for some $k<n$. For every $1\leq i\leq k$, denote $\kappa_i:=crit(j_{i-1,i})=j_{i-1}(\theta_i)$ and assume ${\langle} \kappa_1,...,\kappa_k{\rangle}\in {\rm Lev}_k(j_k(T))$. Let $$j_{k,k+1}:= j_{U^{(j_k(T))}_{{\langle}\kappa_1,...,\kappa_k{\rangle}}}:M_k\rightarrow Ult(M_k,U^{(j_k(T))}_{{\langle}\kappa_1,...,\kappa_k{\rangle}})\simeq M_{k+1},$$ $j_{i,k+1}=j_{k,k+1}\circ j_{i,k}$ and $j_{k+1}=j_{0,k+1}$. Note that ${\rm Succ}_{j_k(T)}({\langle}\kappa_1,...,\kappa_k{\rangle})\in U^{(j_k(T))}_{{\langle}\kappa_1,...,\kappa_k{\rangle}}$, which is a normal measure on $j_{k}(\theta_{k+1})$. Thus $$\kappa_{k+1}:=j_{k}(\theta_{k+1})={\rm crit}(j_{k,k+1})\in j_{k,k+1}({\rm Succ}_{j_k(T)}({\langle}\kappa_1,...,\kappa_k{\rangle}))={\rm Succ}_{j_{k+1}(T)}({\langle}\kappa_1,...,\kappa_k{\rangle}).$$ Therefore, ${\langle}\kappa_1,...,\kappa_k,\kappa_{k+1}{\rangle}\in {\rm Lev}_{k+1}(j_{k+1}(T))$. We denote $j_T=j_n$ and $M_T=M_n$.
More generally, a \textit{tree iteration} of $\vec{U}$-measures is a finite iteration ${\langle} j_{m,k},M_k\mid 0\leq m\leq k\leq n{\rangle}$ of $V$ such that for some measurable cardinals $\theta_1\leq...\leq \theta_n$ and every $0\leq m<n$, there is a normal measure $W_{m+1}\in j_m(\vec{U})$ on $j_m(\theta_{m+1})$ such that $$j_{m,m+1}=j_{W_{m+1}}: M_m\rightarrow Ult(M_m,W_{m+1})\simeq M_{m+1}.$$ Denote $\kappa_m=j_{m-1}(\theta_m)$ and derive an ultrafilter $U$ on $\prod_{i=1}^n\theta_i$ by the formula: $$X\in U\longleftrightarrow {\langle} \kappa_1, \kappa_2,...,\kappa_n{\rangle}\in j_n(X).$$ Let us verify some standard properties of such an iteration: \begin{proposition}\label{Los for trees} Let ${\langle} j_{m,k},M_k\mid 0\leq m\leq k\leq n{\rangle}$ be a tree iteration of $\vec{U}$-measures. Then: \begin{enumerate} \item $U$ is a $\theta_1$-complete ultrafilter on $\prod_{i=1}^n\theta_i$.
\item For any formula $\Phi(y_1,...,y_{m})$ and any $f_1,...,f_m:\prod_{i=1}^n\theta_i\rightarrow V$, $$M_n\models \Phi(j_n(f_1)(\kappa_1,...,\kappa_n),...,j_n(f_m)(\kappa_1,...,\kappa_n))\Leftrightarrow \{\vec{\alpha}\in \prod_{i=1}^n\theta_i\mid \Phi(f_1(\vec{\alpha}),...,f_m(\vec{\alpha}))\}\in U.$$
\item Let $j_U:V\rightarrow Ult(V,U)\simeq M_U$ be the elementary embedding associated to $U$, then $M_U=M_n$ and $j_U=j_n$. \item For every $R\in U$ there is a $\vec{U}$-fat tree $S$ such that $mb(S)\subseteq R$ and $mb(S)\in U$. Moreover, if $j_{i-1}(f_i)(\kappa_1,...,\kappa_{i-1})=W_i$ (the ultrafilter used in $j_{i-1,i}$), then for every $s\in {\rm Lev}_{i-1}(S)$, ${\rm Succ}_{S}(s)\in f_i(s)$. \end{enumerate} \end{proposition}
\noindent\textit{Proof}. $(1)$ is a standard consequence of the critical point of the iteration being $\theta_1$.
For $(2)$, by elementarity of $j_n$, $$j_n(\{\vec{\alpha}\in \prod_{i=1}^n\theta_i\mid \Phi(f_1(\vec{\alpha}),...,f_m(\vec{\alpha}))\})=\{\vec{\alpha}\in \prod_{i=1}^nj_n(\theta_i)\mid M_n\models\Phi(j_n(f_1)(\vec{\alpha}),...,j_n(f_m)(\vec{\alpha}))\}.$$ Note that $\kappa_i=j_{i-1}(\theta_i)=crit(j_{i-1,i})$, thus $\kappa_i<j_{i-1,i}(j_{i-1}(\theta_i))=j_i(\theta_i)\leq j_n(\theta_i)$. By definition of $U$, $$M_n\models \Phi(j_n(f_1)(\kappa_1,...,\kappa_n),...,j_n(f_m)(\kappa_1,...,\kappa_n))\leftrightarrow$$ $$\leftrightarrow {\langle}\kappa_1,...,\kappa_n{\rangle}\in j_n(\{\vec{\alpha}\in \prod_{i=1}^n\theta_i\mid \Phi(f_1(\vec{\alpha}),...,f_m(\vec{\alpha}))\})\leftrightarrow$$ $$\leftrightarrow\{\vec{\alpha}\in \prod_{i=1}^n\theta_i\mid \Phi(f_1(\vec{\alpha}),...,f_m(\vec{\alpha}))\}\in U.$$
For $(3)$, it suffices to prove that $M_U\simeq M_n$ via an isomorphism $k:M_U\rightarrow M_n$ such that $k\circ j_U=j_n$. Define $k([f]_U)=j_n(f)(\kappa_1,...,\kappa_n)$. By $(2)$, $k$ is a well-defined elementary embedding. Moreover, by elementarity of $j_n$, if $c_x$ is the constant function with value $x$ then $j_n(c_x)$ is constant with value $j_n(x)$. Thus, $$k(j_U(x))=k([c_x]_U)=j_n(c_x)(\kappa_1,...,\kappa_n)=j_n(x).$$ To see that $k$ is onto, let $x\in M_n$. Since $M_n$ is the ultrapower of $M_{n-1}$ by $W_n$, there is $f_{n-1}\in M_{n-1}$, $f_{n-1}:j_{n-1}(\theta_n)\rightarrow M_{n-1}$, such that $j_{n-1,n}(f_{n-1})(\kappa_n)=x$. Inductively, assume that $x=j_{i,n}(f_i)(\kappa_{i+1},...,\kappa_n)$, where $f_i:\prod_{k=i+1}^nj_i(\theta_k)\rightarrow M_i$. Since $M_i$ is the ultrapower of $M_{i-1}$ by $W_i$, there is $g_{i-1}: j_{i-1}(\theta_i)\rightarrow M_{i-1}$ such that $j_{i-1,i}(g_{i-1})(\kappa_i)=f_i$. By elementarity, for every $\alpha<j_{i-1}(\theta_i)$, $g_{i-1}(\alpha):\prod_{k=i+1}^nj_{i-1}(\theta_k)\rightarrow M_{i-1}$. Define $$f_{i-1}:\prod_{k=i}^nj_{i-1}(\theta_k)\rightarrow M_{i-1}\text{ by } f_{i-1}(\alpha_i,...,\alpha_n)=g_{i-1}(\alpha_i)(\alpha_{i+1},...,\alpha_n).$$ Since $\kappa_i=crit(j_{i-1,i})<crit(j_{i,n})=\kappa_{i+1}$, $j_{i,n}(\kappa_i)=\kappa_i$ and $$j_{i-1,n}(f_{i-1})(\kappa_i,...,\kappa_n)=j_{i,n}(j_{i-1,i}(g_{i-1})(\kappa_i))(\kappa_{i+1},...,\kappa_n)=j_{i,n}(f_{i})(\kappa_{i+1},...,\kappa_n)=x.$$ We conclude that there is $f_0:\prod_{i=1}^n\theta_i\rightarrow V$ such that $k([f_0]_U)=j_n(f_0)(\kappa_1,...,\kappa_n)=x$.
To see $(4)$, let $W_{i}\in M_{i-1}$ be the ultrafilter used in $j_{i-1,i}$. Apply $(3)$, and fix for every $1\leq i\leq n$ a function $f_i:\prod_{k=1}^{i-1}\theta_k\rightarrow V$ such that $j_{i-1}(f_i)(\kappa_1,...,\kappa_{i-1})=W_i$. We prove $(4)$ by induction on the length $n$ of the iteration. For $n=1$ we can take $S$ such that ${\rm Lev}_1(S)=R$; also ${\rm Succ}_{S}({\langle}{\rangle})\in W_1=j_0(f_1)({\langle}{\rangle})=f_1({\langle}{\rangle})$. Assume this holds for iterations of length $i-1$. Let $R\in U$, where $U$ is derived from an iteration of length $i$. Since $R\in U$, by definition ${\langle}\kappa_1,...,\kappa_i{\rangle}\in j_i(R)$. It follows that $\kappa_i\in j_{i-1,i}(Z)$, where $Z:=\{\alpha<\kappa_i\mid {\langle}\kappa_1,...,\kappa_{i-1}{\rangle}^{\smallfrown}\alpha\in j_{i-1}(R)\}$. Since $j_{i-1,i}$ is the ultrapower by $W_i$, $$(\star) \ \ Z\in W_i=j_{i-1}(f_i)({\langle}\kappa_1,...,\kappa_{i-1}{\rangle}).$$ Let $R'=\{\vec{\alpha}\mid \{\alpha<\theta_i\mid \vec{\alpha}^{\smallfrown}\alpha\in R\}\in f_i(\vec{\alpha})\}$; then by $(\star)$, ${\langle}\kappa_1,...,\kappa_{i-1}{\rangle}\in j_{i-1}(R')$. Apply the induction hypothesis to $R'$ and $j_{i-1}$ to find a $\vec{U}$-fat tree $S'$ such that $mb(S')\subseteq R'$, ${\langle}\kappa_1,...,\kappa_{i-1}{\rangle}\in j_{i-1}(mb(S'))$, and for every $k\leq i-1$ and $s\in {\rm Lev}_{k-1}(S')$, ${\rm Succ}_{S'}(s)\in f_k(s)$. Define $$S\restriction\{1,...,i-1\}=S'\text{ and for every }s\in mb(S'), \ {\rm Succ}_{S}(s)=\{\alpha<\theta_i\mid s^{\smallfrown}\alpha\in R\}.$$ Clearly $mb(S)\subseteq R$ and by definition of $R'$, for every $ s\in {\rm Lev}_{i-1}(S)$, ${\rm Succ}_{S}(s)\in f_i(s)$, which is a $\vec{U}$-measure over $\theta_i$. Together with the induction hypothesis, we conclude that $S$ is a $\vec{U}$-fat tree on $\theta_1\leq...\leq\theta_i$. 
Finally, ${\langle}\kappa_1,...,\kappa_{i-1}{\rangle}\in j_{i-1}(mb(S'))=j_{i-1}({\rm Lev}_{i-1}(S))$, and by elementarity, $${\rm Succ}_{j_{i-1}(S)}({\langle}\kappa_1,...,\kappa_{i-1}{\rangle})=\{\alpha<\kappa_i\mid {\langle}\kappa_1,...,\kappa_{i-1}{\rangle}^{\smallfrown}\alpha\in j_{i-1}(R)\}\in W_i.$$ Hence $\kappa_i\in j_{i-1,i}({\rm Succ}_{j_{i-1}(S)}({\langle}\kappa_1,...,\kappa_{i-1}{\rangle}))={\rm Succ}_{j_{i}(S)}({\langle}\kappa_1,...,\kappa_{i-1}{\rangle})$. It follows that ${\langle}\kappa_1,...,\kappa_i{\rangle}\in j_i(mb(S))$, as wanted.$\blacksquare$
If $T$ is a $\vec{U}$-fat tree then, by definition, the iteration associated to $T$ is a tree iteration of $\vec{U}$-measures. We denote by $U_T$ the ultrafilter derived from $j_T:V\rightarrow M_n$. \begin{proposition}\label{the drived ultrafilter of a tree} Let $T$ be a $\vec{U}$-fat tree on $\theta_1\leq...\leq\theta_n$. Then: \begin{enumerate}
\item $mb(T)\in U_T$.
\item If $S\subseteq T$ is such that
\begin{enumerate}
\item $ht(S)=ht(T)=n$.
\item ${\rm Succ}_S({\langle}{\rangle})\in U^{(T)}_{{\langle}{\rangle}}$.
\item For every $\alpha\in {\rm Succ}_S({\langle}{\rangle})$, $mb(S/{{\langle}\alpha{\rangle}})\in U_{T/{{\langle}\alpha{\rangle}}}$.
\end{enumerate}
Then $mb(S)\in U_T$.
\item If $S\subseteq T$ is such that
\begin{enumerate}
\item $ht(S)=ht(T)=n$.
\item $mb(S\restriction\{1,...,n-1\})\in U_{T\restriction\{1,...,n-1\}}$.
\item For every $s\in {\rm Lev}_{n-1}(S)$, ${\rm Succ}_{S}(s)\in U^{(T)}_{s}$.\footnote{If ${\langle} U_i\mid i<\lambda{\rangle}$ is a sequence of $\lambda$-complete ultrafilters over a set $B$ and $U$ is a $\lambda$-complete ultrafilter over $\lambda$, then $U\text{-}\lim_{i<\lambda}U_i$ is a $\lambda$-complete ultrafilter over $\lambda\times B$, defined by: $$ U\text{-}\lim_{i<\lambda}U_i:=\Big\{ X\subseteq \lambda\times B\mid \{i<\lambda\mid \{b\in B\mid (i,b)\in X\}\in U_i\}\in U\Big\}.$$ We can inductively conclude that $$U_T=U^{(T)}_{{\langle}{\rangle}}\text{-}\lim_{\alpha_1<\theta_1} U^{(T)}_{{\langle}\alpha_1{\rangle}}\text{-}\lim_{\alpha_2<\theta_2}U^{(T)}_{{\langle}\alpha_1,\alpha_2{\rangle}}\text{-}\ldots\text{-}\lim_{\alpha_{n-1}<\theta_{n-1}}U^{(T)}_{{\langle}\alpha_1,...,\alpha_{n-1}{\rangle}}.$$}
\end{enumerate}
Then $mb(S)\in U_T$.
\item If $S\subseteq T$ is such that
\begin{enumerate}
\item $ht(S)=ht(T)=n$.
\item For every $s\in S\setminus mb(S)$, ${\rm Succ}_{S}(s)\in U^{(T)}_{s}$.
\end{enumerate}
Then $mb(S)\in U_T$.
\item If $S$ is a $\vec{U}$-fat tree, and $mb(S)\in U_T$, then there is a choice of measures $U^{(S)}_s$ such that $j^{(S)}_n=j^{(T)}_n$ and in particular, $U_S=U_T$. \end{enumerate} \end{proposition}
\noindent\textit{Proof}. For $(1)$, by definition of $j_T$, we have that ${\langle}\kappa_1,...,\kappa_n{\rangle}\in mb(j_n(T))=j_n(mb(T))$, hence by definition of $U_T$, $mb(T)\in U_T$. For $(2)$, note that in $M_1$ we have the tree $j_1(T)/{{\langle}\kappa_1{\rangle}}$. By $(b)$ and $(c)$ it follows that in $M_1$, $ mb(j_1(S)/{{\langle}\kappa_1{\rangle}})\in U_{j_1(T)/{{\langle}\kappa_1{\rangle}}}$. By definition, the iteration defined inside $M_1$ for $j_1(T)/{{\langle}\kappa_1{\rangle}}$ is simply the iteration $j_T$ starting from the second step, namely ${\langle} j_{m,k}\mid 1\leq m\leq k\leq n{\rangle}$. Hence $${\langle}\kappa_2,...,\kappa_n{\rangle}\in j_{1,n}(mb(j_1(S)/{{\langle}\kappa_1{\rangle}}))=mb(j_n(S)/{{\langle}\kappa_1{\rangle}}).$$ It follows that ${\langle}\kappa_1,...,\kappa_n{\rangle}\in mb(j_n(S))$ and by definition $mb(S)\in U_T$.
As for $(3)$, note that $j_{T\restriction\{1,...,n-1\}}$ is by definition the iteration consisting of the first $n-1$ steps of $j_T$. By $(b)$, $mb(S\restriction\{1,...,n-1\})\in U_{T\restriction\{1,...,n-1\}}$, thus ${\langle}\kappa_1,...,\kappa_{n-1}{\rangle}\in {\rm Lev}_{n-1}(j_{n-1}(S))$. By $(c)$ and elementarity of $j_{n-1}$, it follows that ${\rm Succ}_{j_{n-1}(S)}({\langle}\kappa_1,...,\kappa_{n-1}{\rangle})\in U^{(j_{n-1}(T))}_{{\langle}\kappa_1,...,\kappa_{n-1}{\rangle}}$, hence $\kappa_n\in j_{n-1,n}({\rm Succ}_{j_{n-1}(S)}({\langle}\kappa_1,...,\kappa_{n-1}{\rangle}))={\rm Succ}_{j_{n}(S)}({\langle}\kappa_1,...,\kappa_{n-1}{\rangle})$. In other words, ${\langle}\kappa_1,...,\kappa_n{\rangle}\in j_n(mb(S))$ and by definition $mb(S)\in U_T$.
For $(4)$, by induction on $i\leq n$ we argue that ${\rm Lev}_i(S)=mb(S\restriction \{1,...,i\})\in U_{T\restriction\{1,...,i\}}$. If $i=1$ then ${\rm Lev}_1(S)={\rm Succ}_{S}({\langle}{\rangle})\in U^{(T)}_{{\langle}{\rangle}}$. Assume that $mb(S\restriction \{1,...,i-1\})\in U_{T\restriction\{1,...,i-1\}}$. By $(b)$, for every $s\in {\rm Lev}_{i-1}(S)$, ${\rm Succ}_{S}(s)\in U^{(T)}_{s}$; now apply $(3)$ to $S\restriction\{1,...,i\}$ and $T\restriction\{1,...,i\}$ to conclude that $mb(S\restriction \{1,...,i\})\in U_{T\restriction\{1,...,i\}}$.
To see $(5)$, we argue by induction on $i$ that $j^{(T)}_{i}=j^{(S)}_{i}$. Since $mb(S)\in U_T$, ${\langle}\kappa_1,...,\kappa_n{\rangle}\in mb(j_n(S))$, hence $\kappa_1\in {\rm Lev}_1(j_n(S))$. Since $crit(j_{1,n})=\kappa_2$, $\kappa_1\in {\rm Lev}_1(j_1(S))$, and therefore ${\rm Lev}_1(S)\in U^{(T)}_{{\langle}{\rangle}}$; choose $U^{(S)}_{{\langle}{\rangle}}=U^{(T)}_{{\langle}{\rangle}}$, which implies that $j^{(T)}_{0,1}=j^{(S)}_{0,1}$. Assume that $j^{(T)}_i=j^{(S)}_i=j_i$. Since $\kappa_{i+1}\in {\rm Succ}_{j^{(T)}_n(S)}({\langle}\kappa_1,...,\kappa_i{\rangle})$, we have $\kappa_{i+1}\in j^{(T)}_{i,i+1}({\rm Succ}_{j_i(S)}({\langle}\kappa_1,...,\kappa_i{\rangle}))$, thus $$(*)\ \ \ \ {\rm Succ}_{j_i(S)}({\langle}\kappa_1,...,\kappa_i{\rangle})\in U^{(j_i(T))}_{{\langle}\kappa_1,...,\kappa_i{\rangle}}.$$ Back in $V$, for every $s\in {\rm Lev}_i(S)$, if ${\rm Succ}_S(s)\in U^{(T)}_s$, let $U^{(S)}_s=U^{(T)}_s$; otherwise, pick an arbitrary ultrafilter. Then by $(*)$ and elementarity, $U^{(j_i(S))}_{{\langle}\kappa_1,...,\kappa_i{\rangle}}=U^{(j_i(T))}_{{\langle}\kappa_1,...,\kappa_i{\rangle}}$, hence $j^{(T)}_{i+1}=j^{(S)}_{i+1}$. $\blacksquare$
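The $U\text{-}\lim$ operation recalled in the footnote to Proposition \ref{the drived ultrafilter of a tree} is easy to test concretely when all the ultrafilters involved are principal; the sketch below (our own illustration, on finite sets, with names of our choosing) implements the defining equivalence directly.

```python
# Toy sketch (our own illustration): U-lim_{i} U_i for PRINCIPAL ultrafilters
# on finite sets.  A principal ultrafilter is represented by its generating
# point: a set belongs to it iff it contains that point.

def principal(point):
    return lambda X: point in X

def u_lim(U, U_of, I, B):
    """Membership test for U-lim: X belongs to the limit iff
    {i : {b : (i,b) in X} in U_i} belongs to U."""
    def member(X):
        good = {i for i in I if U_of(i)({b for b in B if (i, b) in X})}
        return U(good)
    return member

I, B = range(3), range(3)
lim = u_lim(principal(1), lambda i: principal(i), I, B)
```

As expected, the limit is the principal ultrafilter concentrating on the pair $(1,1)$: the two layers of the definition collapse to a single membership test.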
The following lemma is a generalization of a combinatorial property that was proven in \cite{TomMoti} for products of measures. It can be stated for more general trees; however, we restrict our attention to what is needed here.
\begin{lemma}\label{StabTree} Let $\vec{U}$ be a sequence of normal measures and let $T$ be a $\vec{U}$-fat tree on $\theta_1\leq\theta_2\leq...\leq\theta_n$. For every regressive $f:mb(T)\rightarrow \theta_1$, i.e.\ such that $f(t)<{\rm min}(t)$ for all $t\in mb(T)$, there is a $\vec{U}$-fat tree $T'\subseteq T$ such that $mb(T')\in U_T$ and $f\restriction mb(T')$ is constant. \end{lemma}
\noindent\textit{Proof}. By induction on the height of the tree. If $ht(T)=1$, this is the case of a single normal measure, namely $U_{\langle\rangle}$, which is well known. Assume the lemma holds for $n$ and fix $T,f$ with $ht(T)=n+1$. For $\vec{\alpha}\in {\rm Lev}_{n}(T)$ consider ${\rm Succ}_T(\vec{\alpha})\in U^{(T)}_{\vec{\alpha}}$. Define $f_{\vec{\alpha}}:{\rm Succ}_T(\vec{\alpha})\rightarrow\theta_1$ by $f_{\vec{\alpha}}(\beta)=f(\vec{\alpha}^{\frown}\beta)$. Then there exists $H_{\vec{\alpha}}\in U_{\vec{\alpha}}$ homogeneous for $f_{\vec{\alpha}}$ with color $c_{\vec{\alpha}}<{\rm min}(\vec{\alpha})$. Consider the regressive function $$g:mb(T\restriction \{1,...,n\})\rightarrow\theta_1, \ \ \ g(\vec{\alpha})= c_{\vec{\alpha}}.$$ Since $ht(T\restriction \{1,...,n\})=n$, we can apply the induction hypothesis to $g$ and obtain a $\vec{U}$-fat tree $T'\subseteq T\restriction \{1,...,n\}$ with $mb(T')\in U_{T\restriction\{1,...,n\}}$ which is homogeneous with color $c^*$. Extend $T'$ by adjoining $H_{\vec{\alpha}}$ as the set of successors of each $\vec{\alpha}\in mb(T')$, and denote the resulting tree by $T^*$. Note that by the induction, $T^*\subseteq T$ is a $\vec{U}$-fat tree with $ht(T^*)=n+1$; by \ref{the drived ultrafilter of a tree}(3), $mb(T^*)\in U_T$ and $f\restriction mb(T^*)$ is constantly $c^*$.
$\blacksquare$
In what comes next we will generalize (Corollary \ref{corollary for important coordinates}) a well-known combinatorial property of normal measures (Corollary \ref{important coordinates for single measure}), which is a consequence of the \textit{weak compactness} property of normal measures: \begin{proposition}[folklore] Let $U$ be a normal ultrafilter over $\kappa$, and $f:[A]^2\rightarrow \{0,1\}$ such that $A\in U$. Then there is $A'\subseteq A$ such that $A'\in U$ and $f\restriction [A']^2$ is constant.$\blacksquare$ \end{proposition} \begin{corollary}\label{important coordinates for single measure}
Let $U$ be a normal ultrafilter over $\kappa$, let $X$ be an arbitrary set, and $f:A\rightarrow X$ any function such that $A\in U$. Then there is $A'\subseteq A$ such that $A'\in U$ and $f\restriction A'$ is either constant or $1-1$. \end{corollary}
\noindent\textit{Proof}. Define $g:[A]^2\rightarrow\{0,1\}$ by $$g(\alpha,\beta)=1\leftrightarrow f(\alpha)=f(\beta).$$ By weak compactness, there is $A'\subseteq A$, $A'\in U$, and $c\in\{0,1\}$ such that for every $\alpha,\beta\in A'$, $\alpha<\beta$, $g(\alpha,\beta)=c$. If $c=1$, then $f\restriction A'$ is constant and if $c=0$ then $f\restriction A'$ is $1-1$.$\blacksquare$
In this argument we compare $f(\alpha)$ and $f(\beta)$ for distinct $\alpha,\beta$. It is always the case that $\alpha<\beta\vee \beta<\alpha$, hence we can think of this comparison as a function defined on $[A]^2$, which is a set in $U\times U$. One obstacle in generalizing this argument to $\vec{U}$-fat trees is the following: although for a given function $f:mb(T)\rightarrow X$, a $\vec{U}$-fat tree $T$, and a distinct pair $t,t'\in mb(T)$, we can identify this pair with a branch of some $\vec{U}$-fat tree $S$, the tree $S$ might vary for different $t,t'$.
For example, if $t=\langle \alpha_1,\alpha_2,\alpha_3\rangle$ and $t'=\langle\alpha_1',\alpha_2',\alpha_3'\rangle$, the following is a possible such interweaving: $$\alpha_1<\alpha_1'=\alpha_2<\alpha_2'<\alpha_3'<\alpha_3.$$ Then we can think of $t,t'$ as a single branch of a tree $S$ of height $5$, such that any branch $s={\langle} s_1,s_2,s_3,s_4,s_5{\rangle}\in mb(S)$ decomposes back to $t={\langle} s_1,s_2,s_5{\rangle}$ and $t'={\langle} s_2,s_3,s_4{\rangle}$. However, there can be different interweavings of $t,t'$, for which we need different trees.
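Concretely, the interweaving induced by a pair of increasing tuples is obtained by ranking each coordinate inside the sorted union of their entries; a small sketch (our own illustration, with integers in place of ordinals):

```python
# Toy sketch (our own illustration): the pair of order embeddings <g, g'>
# induced by two increasing tuples t, t'.  Each coordinate is sent to its
# 1-based rank inside the sorted union of all entries.

def interweaving(t, t_prime):
    rank = {v: i + 1 for i, v in enumerate(sorted(set(t) | set(t_prime)))}
    return tuple(rank[v] for v in t), tuple(rank[v] for v in t_prime)

# The example above: alpha_1 < alpha_1' = alpha_2 < alpha_2' < alpha_3' < alpha_3
g, g_prime = interweaving((10, 20, 50), (20, 30, 40))
```

Here $g=(1,2,5)$ and $g'=(2,3,4)$, matching the decomposition $t={\langle} s_1,s_2,s_5{\rangle}$, $t'={\langle} s_2,s_3,s_4{\rangle}$ of the example.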
Generally, if $t:={\langle}\alpha_1,...,\alpha_n{\rangle}, \ t':={\langle}\alpha'_1,...,\alpha'_n{\rangle}\in mb(T)$, the set $\{\alpha_1,...,\alpha_n\}\cup \{\alpha'_1,...,\alpha'_n\}$ is naturally ordered in one of finitely many ways, and this ordering induces an interweaving of $t,t'$: \begin{definition}\label{interweaving}
$p$ is an \textit{interweaving of $T$} if it is a pair $\langle g,g'\rangle$ of order embeddings $g,g':\{1,...,ht(T)\}\rightarrow \{1,...,k\}$ such that $Im(g)\cup Im(g')=\{1,...,k\}$. Denote $A_p=Im(g)$, $A_p'=Im(g')$ and $k=|p|$. \end{definition} Let $T$ be a $\vec{U}$-fat tree on $\theta_1\leq...\leq \theta_n$. For every interweaving $p={\langle} g,g'{\rangle}$, we define the iteration associated with $p$, $j_p=j^{(T)}_p$:
The length of the iteration is $|p|$. Let $M_0=V$ and $j_0=Id$. Assume that we are at the $m$th step of the iteration and denote the critical points by $\kappa_1,...,\kappa_m$. Also assume inductively that $$\langle \kappa_i\mid i\in A_p\cap \{1,...,m\}\rangle\in j_m(T)\text{ and }\langle \kappa_i\mid i\in A'_p\cap \{1,...,m\}\rangle\in j_m(T).$$ If $m+1\in A_p\setminus A_p'$, let $r\leq ht(T)$ be such that $g(r)=m+1$. Then $\langle \kappa_i\mid i\in A_p\cap \{1,...,m\}\rangle\in {\rm Lev}_{r-1}(j_m(T))$, the ultrafilter $U^{(j_m(T))}_{\langle \kappa_i\mid i\in A_p\cap \{1,...,m\}\rangle}$, which is an ultrafilter over $j_m(\theta_{r})$, is defined in $M_m$, and for every $i\in A_p\cap\{1,...,m\}$, $\kappa_i<j_m(\theta_r)$.
If there is $i\in A'_p\cap\{1,...,m\}$ such that $\kappa_i\geq j_m(\theta_r)$, then declare the iteration undefined. Otherwise, take the ultrapower of $M_m$ by $U^{(j_m(T))}_{\langle \kappa_i\mid i\in A_p\cap \{1,...,m\}\rangle}$. It follows that $\kappa_{m+1}:=crit(j_{m,m+1})=j_m(\theta_r)$ and $$\langle \kappa_i\mid i\in A_p\cap \{1,...,m+1\}\rangle=\langle \kappa_i\mid i\in A_p\cap \{1,...m\}\rangle^{\smallfrown}\kappa_{m+1}\in j_{m+1}(T).$$ If $m+1\in A'_p\setminus A_p$, we perform the symmetric procedure. If $m+1\in A_p\cap A'_p$, let $r,r'\leq ht(T)$ be such that $m+1=g(r)=g'(r')$. There are two possibilities: either $$U^{(j_m(T))}_{\langle\kappa_i\mid i\in A_p\cap \{1,...,m\}\rangle}\neq U^{(j_m(T))}_{\langle\kappa_j\mid j\in A'_p\cap \{1,...,m\}\rangle},$$
in which case we declare the iteration undefined; or $$U^{(j_m(T))}_{\langle\kappa_i\mid i\in A_p\cap \{1,...,m\}\rangle}= U^{(j_m(T))}_{\langle\kappa_j\mid j\in A'_p\cap \{1,...,m\}\rangle},$$
in which case $j_m(\theta_r)=j_m(\theta_{r'})$ and we take the ultrapower by this common measure. Thus for every $i\leq m$, $$\kappa_{m+1}:=crit(j_{m,m+1})=j_m(\theta_r)=j_m(\theta_{r'})>\kappa_i$$ and $$\langle \kappa_i\mid i\in A_p\cap \{1,...,m+1\}\rangle\in j_{m+1}(T), \ \langle \kappa_i\mid i\in A'_p\cap \{1,...,m+1\}\rangle\in j_{m+1}(T).$$
In any case, we denote $\theta(m)=\theta_r$, so that $\kappa_m=j_{m-1}(\theta(m))$. By construction, if $m=g(r)$ then $\theta(m)=\theta_r$, and if $m=g'(r')$ then $\theta(m)=\theta_{r'}$. If $j_p$ is defined, then
$$\theta(1)<j_1(\theta(2))<...<j_{|p|-1}(\theta(|p|)),$$
and since $j_{m-1}(\theta(m))=crit(j_{m-1,m})$, $\theta(1)\leq\theta(2)\leq...\leq\theta(|p|)$. It follows that $j_p$ is a tree iteration of $\vec{U}$-measures.
\begin{proposition}\label{TreeUlt} Let $T$ be a $\vec{U}$-fat tree and fix an interweaving $p={\langle} g,g'{\rangle}$ such that $j_p$ is defined. Then \begin{enumerate}
\item There is a $\vec{U}$-fat tree $S_p$ with $ht(S_p)=|p|$ such that for every $s\in mb(S_p)$, $s\restriction A_p,s\restriction A'_p\in mb(T)$ interweave as $p$. Moreover, for every $r\in {\rm Lev}_m(S_p)$, if $m\in A_p$ then $U^{(S_p)}_r=U^{(T)}_{r\restriction A_p\cap \{1,...,m\}}$, and if $m\in A_p'$ then $U^{(S_p)}_r=U^{(T)}_{r\restriction A'_p\cap \{1,...,m\}}$. \item We can shrink $T$ to $R$ such that $mb(R)\in U_T$ and whenever $t,t'\in mb(R)$ interweave as $p$, $t\cup t'\in S_{p}$. \item If $g'(1)<g(1)$, then we can shrink $T$ to $R$ such that $mb(R)\in U_T$ and for every $t\in mb(R)$ and $\alpha\in {\rm Succ}_R(\langle\rangle)\cap{\rm min}(t)$ there is $t'\in mb(T)$ such that $t,t'$ interweave as $p$ and ${\rm min}(t')=\alpha$.
\end{enumerate} \end{proposition}
\noindent\textit{Proof}. For $(1)$, if the iteration $j_p$ is defined, then in particular for every $m$, $j_{m,m+1}$ is the ultrapower by $U^{(j_{m}(T))}_{{\langle} \kappa_i\mid i\in A_p\cap\{1,...,m\}{\rangle}}$ or by $U^{(j_{m}(T))}_{{\langle} \kappa_i\mid i\in A'_p\cap\{1,...,m\}{\rangle}}$ which is a measure over $j_m(\theta_{r_{m+1}})$ for some $r_{m+1}\leq ht(T)$. Since $j_p$ is defined, we can derive the ultrafilter $U_p$ from $j_p$ over $\prod_{i=1}^{|p|}\theta(i)$. In $M_{|p|}$ we have that
$${\langle} \kappa_1,...,\kappa_{|p|}{\rangle}\restriction A_p, {\langle} \kappa_1,...,\kappa_{|p|}{\rangle}\restriction A'_p\in mb(j_p(T))\text{ interweave as }p.$$
Then by \ref{Los for trees}(2), $R=\{\vec{\alpha}\in\prod_{i=1}^{|p|}\theta(i)\mid \vec{\alpha}\restriction A_p,\vec{\alpha}\restriction A'_p\in mb(T)\text{ interweave as } p\}\in U_p$. By construction of $j_p$, if $m\in A_p$ then the function $f_m(t)=U^{(T)}_{t\restriction A_p\cap\{1,...,m-1\}}$ satisfies that the measure $j_{m-1}(f_m)({\langle}\kappa_1,...,\kappa_{m-1}{\rangle})$ is the one applied at the $m$-th step of the iteration. If $m\in A_p'$, define a similar function $f'_m$ depending on $t\restriction A_p'\cap\{1,...,m-1\}$. By \ref{Los for trees}(4), there is a $\vec{U}$-fat tree $S_p$ such that $mb(S_p)\subseteq R$ and $mb(S_p)\in U_p$. Then any $s\in mb(S_p)$ is in $R$, and therefore $s\restriction A_p,\ s\restriction A_p'$ interweave as $p$. Moreover, for every $r\in {\rm Lev}_{m-1}(S_p)$, $U^{(S_p)}_r=f_m(r)=U^{(T)}_{r\restriction A_p\cap\{1,...,m-1\}}$ or $U^{(S_p)}_r=f'_m(r)=U^{(T)}_{r\restriction A'_p\cap\{1,...,m-1\}}$.
To see $(2)$, for every $\vec{\alpha}\in {\rm Lev}_{m}(S_p)$ define $t(\vec{\alpha})\in T$ to be $\vec{\alpha}\restriction A_p\cap \{1,...,m\}$ and $t'(\vec{\alpha})=\vec{\alpha}\restriction A_p'\cap \{1,...,m\}$. From $(1)$ it follows that if $m+1\in A_p$ then ${\rm Succ}_{S_p}(\vec{\alpha})\in U^{(T)}_{t(\vec{\alpha})}$ and similarly for $m+1\in A_p'$. Define $R$ inductively. The levels of $S_p$ which correspond to the first level of $T$ are $g(1)$ and $g'(1)$, and the nodes there are the successors of nodes at levels $g(1)-1$ and $g'(1)-1$. Note that at least one of $g(1),g'(1)$ must be $1$. Also note that for every $\vec{\alpha}\in {\rm Lev}_{g(1)-1}(S_p)$, $t(\vec{\alpha})={\langle}{\rangle}$ and that for every $\vec{\beta}\in {\rm Lev}_{g'(1)-1}(S_p)$, $t'(\vec{\beta})={\langle}{\rangle}$. Define $$B_{{\langle}{\rangle}}=\Delta_{\vec{\alpha}\in {\rm Lev}_{g(1)-1}(S_p)}{\rm Succ}_{S_p}(\vec{\alpha}), C_{{\langle}{\rangle}}=\Delta_{\vec{\alpha}\in {\rm Lev}_{g'(1)-1}(S_p)}{\rm Succ}_{S_p}(\vec{\alpha})\in U^{(T)}_{\langle\rangle}.$$ Let ${\rm Succ}_{R}(\langle\rangle)=B_{{\langle}{\rangle}}\cap C_{{\langle}{\rangle}}\in U^{(T)}_{{\langle}{\rangle}}$. Moreover, at least one of $B_{{\langle}{\rangle}},C_{{\langle}{\rangle}}$ is simply ${\rm Succ}_{S_p}({\langle}{\rangle})$.
Assume that ${\rm Lev}_{m}(R)$ is defined and let $r\in {\rm Lev}_{m}(R)$. The levels of $S_p$ which correspond to the $m$-th level are $g(m),g'(m)$ (which might be the same level); thus for every $\vec{\alpha}\in {\rm Lev}_{g(m)}(S_p)$, $t(\vec{\alpha})\in {\rm Lev}_m(T)$ and for every $\vec{\beta}\in {\rm Lev}_{g'(m)}(S_p)$, $t'(\vec{\beta})\in {\rm Lev}_m(T)$. Define $${\rm Succ}_R(r)=\underset{\vec{\alpha}\in {\rm Lev}_{g(m)}(S_p), t(\vec{\alpha})=r}{\Delta}{\rm Succ}_{S_p}(\vec{\alpha})\cap \underset{\vec{\alpha}\in {\rm Lev}_{g'(m)}(S_p), t'(\vec{\alpha})=r}{\Delta}{\rm Succ}_{S_p}(\vec{\alpha})\in U^{(T)}_r.$$ By \ref{the drived ultrafilter of a tree}(4), $mb(R)\in U_T$. If $t,t'\in mb(R)$ interweave as $p$, we prove inductively that $(t\cup t')\restriction\{1,...,k\}\in {\rm Lev}_k(S_p)$. Clearly $(t\cup t')\restriction \{1\}={\langle}\alpha{\rangle}\in {\rm Lev}_1(S_p)$, as $\alpha\in B_{{\langle}{\rangle}}\cap C_{{\langle}{\rangle}}\subseteq{\rm Succ}_{S_p}({\langle}{\rangle})$. Assume that $(t\cup t')\restriction\{1,...,k\}\in {\rm Lev}_k(S_p)$. If $k+1\in A_p$, let $r$ be such that $g(r)=k+1$; then $(t\cup t')(k+1)=t(r)>(t\cup t')(k)$. Also $t((t\cup t')\restriction\{1,...,k\})=t\restriction \{1,...,r-1\}$. By definition of diagonal intersection and $R$ it follows that $$t(r)\in {\rm Succ}_{R}(t\restriction \{1,...,r-1\})\subseteq{\rm Succ}_{S_p}((t\cup t')\restriction\{1,...,k\})$$ hence $(t\cup t')\restriction\{1,...,k+1\}\in {\rm Lev}_{k+1}(S_p)$. The case where $k+1\in A_p'$ is similar.
To see $(3)$, suppose that $g'(1)<g(1)$. Define a sequence inductively: let $\vec{\eta}_1=\langle\beta_1,...,\beta_{g(1)-1}\rangle\in S_p$. Then by $(1)$, ${\rm Succ}_{S_p}(\vec{\eta}_1)\in U^{(T)}_{\langle\rangle}$, thus by definition of $j_T$, $$\kappa_1\in j_1({\rm Succ}_{S_p}(\vec{\eta}_1))={\rm Succ}_{j_1(S_p)}(\vec{\eta}_1).$$ Consider $\vec{\eta}_1^{\frown}\langle \kappa_1\rangle\in {\rm Lev}_{g(1)}(j_1(S_p))$ and pick any $\vec{\eta}_2$ such that $\vec{\eta}_1^{\frown}\langle \kappa_1\rangle^{\frown}\vec{\eta}_2\in {\rm Lev}_{g(2)-1}(j_1(S_p))$; then $${\rm Succ}_{j_1(S_p)}(\vec{\eta}_1^{\frown}\langle \kappa_1\rangle^{\frown}\vec{\eta}_2)\in j_1(\vec{U})^{(j_1(T))}_{{\langle}\kappa_1{\rangle}}\text{ thus }\kappa_2\in {\rm Succ}_{j_2(S_p)}(\vec{\eta}_1^{\frown}\langle \kappa_1\rangle^{\frown}\vec{\eta}_2).$$ Continuing in this fashion, we end up with a witness for the statement $$M_n\models \exists t\in mb(j_n(T))\ s.t. \ \langle\kappa_1,...,\kappa_n\rangle,t\text{ interweave as } p.$$ Since $\beta_1\in {\rm Succ}_{S_p}(\langle\rangle)={\rm Succ}_T(\langle\rangle)={\rm Succ}_{j_n(T)}({\langle}{\rangle})\cap\kappa_1$ was arbitrary, it follows that $$M_n\models \forall\beta\in {\rm Succ}_{j_n(T)}(\langle\rangle)\cap\kappa_1\exists t\in mb(j_n(T)) \ s.t. \ {\rm min}(t)=\beta\wedge \langle\kappa_1,...,\kappa_n\rangle,t \text{ interweave as }p.$$ By \ref{Los for trees}(2), $$\{s\in mb(T)\mid \forall\beta\in{\rm Succ}_{T}({\langle}{\rangle})\cap s_1\exists t\in mb(T).{\rm min}(t)=\beta\wedge s,t\text{ interweave as }p\}\in U_T.$$ By \ref{Los for trees}(4) we can find $R$ as wanted. $\blacksquare$
\begin{proposition}\label{undefined interweaving}
Let $T$ be a $\vec{U}$-fat tree, and let $p={\langle} g,g'{\rangle}$ be an interweaving. If $j_p$ is undefined then there is $T'\subseteq T$ such that $mb(T')\in U_T$ and no $t,t'\in mb(T')$ interweave as $p$.
\end{proposition}
\noindent\textit{Proof}. Let $m$ be the step of the iteration where we declared that $j_p$ is undefined. By definition, there are two cases to consider:
\textbf{Case 1: Assume that $m+1\in A_p\setminus A'_p$ and there is $i\in A'_p\cap \{1,...,m\}$ such that $j_{i-1}(\theta(i))\geq j_m(\theta(m+1))$.} Then $\theta(i)>\theta(m+1)$. Otherwise, $\theta(m+1)\geq \theta(i)$, hence $$j_{i-1}(\theta(m+1))\geq j_{i-1}(\theta(i))\geq j_m(\theta(m+1))\geq j_{i-1}(\theta(m+1)),$$ so $j_{i-1}(\theta(m+1))=j_{i-1}(\theta(i))$ and $\theta(m+1)=\theta(i)$. But $j_{i-1}(\theta(i))={\rm crit}(j_{i-1,i})$, so $j_{i-1,i}(j_{i-1}(\theta(m+1)))>j_{i-1}(\theta(m+1))$, hence $j_m(\theta(m+1))\geq j_i(\theta(m+1))>j_{i-1}(\theta(m+1))=j_{i-1}(\theta(i))$, a contradiction. Thus $\theta(i)>\theta(m+1)$. Let $r_1,r_2\leq ht(T)$ be such that $g(r_1)=m+1$ and $g'(r_2)=i$. Then $\theta_{r_1}=\theta(m+1)$ and $\theta_{r_2}=\theta(i)$. The tree $T'$ is obtained from $T$ by shrinking ${\rm Succ}_T(t)$ for each $t\in {\rm Lev}_{r_2-1}(T)$ so that ${\rm min}({\rm Succ}_{T'}(t))>\theta(m+1)$. To see that $T'$ is as wanted, assume that $s,s'\in mb(T')$ interweave as $p$. Then $$s'(r_2)=(s\cup s')(i)<(s\cup s')(m+1)=s(r_1).$$
On the other hand, $s'(r_2)\in {\rm Succ}_{T'}(s'\restriction\{1,...,r_2-1\})$ and $s(r_1)<\theta_{r_1}=\theta(m+1)$, hence $s(r_1)<\theta(m+1)<s'(r_2)$, contradicting the above inequality.
\textbf{Case 2: Assume that} $m+1\in A_p\cap A'_p$ \textbf{and} $U^{(j_m(T))}_{{\langle}\kappa_i\mid i\in A_p\cap\{1,...,m\}{\rangle}}\neq U^{(j_m(T))}_{{\langle}\kappa_i\mid i\in A'_p\cap\{1,...,m\}{\rangle}}$.
These are measures over $j_m(\theta(m+1)),j_m(\theta'(m+1))$ respectively. If $\theta(m+1)\neq \theta'(m+1)$, say $\theta(m+1)<\theta'(m+1)$, then we can shrink $T$ as in Case $1$ to eliminate such an interweaving, hence assume $\theta(m+1)=\theta'(m+1)$. Consider the first $m$ steps of the iteration $j_p$, and let $A_p\cap\{1,...,m\}=\{g(1),...,g(k)\}, \ A_p'\cap\{1,...,m\}=\{g'(1),...,g'(k')\}$; then $\theta_{k+1}=\theta(m+1)=\theta'(m+1)=\theta_{k'+1}$. Similarly to \ref{TreeUlt}(1), $$M_m\models {\langle}\kappa_1,...,\kappa_m{\rangle}\restriction \{g(1),...,g(k)\}\in {\rm Lev}_k(j_m(T)),\ {\langle}\kappa_1,...,\kappa_m{\rangle}\restriction \{g'(1),...,g'(k')\}\in {\rm Lev}_{k'}(j_m(T)).$$ Moreover, $$M_m\models U^{(j_m(T))}_{{\langle}\kappa_1,...,\kappa_m{\rangle}\restriction \{g(1),...,g(k)\}}\neq U^{(j_m(T))}_{{\langle}\kappa_1,...,\kappa_m{\rangle}\restriction \{g'(1),...,g'(k')\}}.$$
Since the iteration up to $m$ is defined, we can find a $\vec{U}$-fat tree $S$ such that:
\begin{enumerate}
\item ${\langle}\kappa_1,...,\kappa_m{\rangle}\in j_m(mb(S))$.
\item For every $s\in mb(S)$, $s\restriction \{g(1),...,g(k)\}\in {\rm Lev}_k(T),\ s\restriction \{g'(1),...,g'(k')\}\in {\rm Lev}_{k'}(T)$.
\item $U^{(T)}_{s\restriction \{g(1),...,g(k)\}}\neq U^{(T)}_{s\restriction \{g'(1),...,g'(k')\}}$.
\item $s\restriction \{g(1),...,g(k)\},\ s\restriction \{g'(1),...,g'(k')\}$ interweave as in ${\langle} g\restriction\{1,...,k\},g'\restriction\{1,...,k'\}{\rangle}$.
\item For every $s\in {\rm Lev}_r(S)$, let $t(s):=s\restriction A_p\cap\{1,...,r\}$ and $t'(s):=s\restriction A'_p\cap\{1,...,r\}$. Then $U^{(S)}_s$ is $U^{(T)}_{t(s)}$ if $r+1\in A_p$, and $U^{(T)}_{t'(s)}$ if $r+1\in A'_p$.
\end{enumerate}
Since $T$ mentions at most $|T\cap [\theta_{k+1}]^{<\omega}|\leq \theta_{k+1}$ measures on $\theta_{k+1}$, we can use the normality of the measures to separate them. Namely, for every $r\in T$ such that $U^{(T)}_r$ is a measure on $\theta_{k+1}$, find $X_r\in U^{(T)}_r$ such that if $U^{(T)}_r\neq U^{(T)}_{r'}$ then $X_r\cap X_{r'}=\emptyset$. Now we shrink the tree $T$ similarly to \ref{TreeUlt}(2). From $(5)$ it follows that if $j\in A_p$ then for every $\vec{\alpha}\in {\rm Lev}_{j-1}(S)$, ${\rm Succ}_{S}(\vec{\alpha})\in U^{(T)}_{t(\vec{\alpha})}$ and similarly for $j\in A_p'$. Define $R\subseteq T$ inductively: $$B_{{\langle}{\rangle}}=\Delta_{\vec{\alpha}\in {\rm Lev}_{g(1)-1}(S)}{\rm Succ}_{S}(\vec{\alpha}), C_{{\langle}{\rangle}}=\Delta_{\vec{\alpha}\in {\rm Lev}_{g'(1)-1}(S)}{\rm Succ}_{S}(\vec{\alpha})\in U^{(T)}_{\langle\rangle}.$$ Let ${\rm Succ}_{R}(\langle\rangle)=B_{{\langle}{\rangle}}\cap C_{{\langle}{\rangle}}\in U^{(T)}_{{\langle}{\rangle}}$. As before, at least one of $B_{{\langle}{\rangle}},C_{{\langle}{\rangle}}$ is simply ${\rm Succ}_{S}({\langle}{\rangle})\subseteq{\rm Succ}_T({\langle}{\rangle})$. Given $r\in {\rm Lev}_{j}(R)$, define $$B_{r}=\begin{cases}\underset{\vec{\alpha}\in {\rm Lev}_{g(j)}(S),\ t(\vec{\alpha})=r}{\Delta}{\rm Succ}_{S}(\vec{\alpha}) & j<k\\ {\rm Succ}_T(r)\cap X_r & j=k\\ {\rm Succ}_T(r) & j>k\end{cases}$$ $$C_{r}=\begin{cases}\underset{\vec{\alpha}\in {\rm Lev}_{g'(j)}(S),\ t'(\vec{\alpha})=r}{\Delta}{\rm Succ}_{S}(\vec{\alpha}) & j<k'\\ {\rm Succ}_T(r)\cap X_r & j=k'\\ {\rm Succ}_T(r) & j>k'\end{cases}.$$ Then $B_r,C_r\in U^{(T)}_r$; let ${\rm Succ}_{R}(r)=B_r\cap C_r$. So by \ref{the drived ultrafilter of a tree}(4), $mb(R)\in U_T$. Let us argue that $R$ is as wanted.
Toward a contradiction, assume that $t,t'\in mb(R)$ interweave as $p$; in particular $t\restriction\{1,...,k\},t'\restriction\{1,...,k'\}$ interweave as ${\langle} g\restriction\{1,...,k\},g'\restriction\{1,...,k'\}{\rangle}$. As in the proof of \ref{TreeUlt}(2), we conclude that $s=(t\restriction\{1,...,k\})\cup(t'\restriction\{1,...,k'\})\in mb(S)$ and by $(3)$, $U^{(T)}_{s\restriction \{g(1),...,g(k)\}}\neq U^{(T)}_{s\restriction \{g'(1),...,g'(k')\}}$. However, $s\restriction \{g(1),...,g(k)\}=t\restriction\{1,...,k\}$ and $s\restriction \{g'(1),...,g'(k')\}=t'\restriction\{1,...,k'\}$, hence ${\rm Succ}_{R}(t\restriction\{1,...,k\})\subseteq X_{t\restriction\{1,...,k\}}$ is disjoint from ${\rm Succ}_{R}(t'\restriction\{1,...,k'\})\subseteq X_{t'\restriction\{1,...,k'\}}$. On the other hand, $p$ imposes that $t'(k'+1)=t(k+1)$ is a member of both sets, a contradiction.$\blacksquare$
To illustrate the second problem of generalizing weak compactness, consider for example the function $f:mb(T)\rightarrow \kappa$, $f(\alpha,\beta)=\alpha$. No matter how we shrink $T$ to $S$, $f\restriction mb(S)$ will be neither constant nor $1-1$. However, we can ignore the coordinate $\beta$ and obtain a $1-1$ function. In general, we will argue that $f$ may depend only on some of the levels of the tree, while the other levels can be ignored. Let us formulate this precisely: \begin{definition}\label{the induced function} Let $T$ be a tree of height $n$. For every $I\subseteq\{1,...,n\}$ define an equivalence relation $\sim_I$ on $mb(T)$ by $t\sim_I t'\leftrightarrow t\restriction I=t'\restriction I$. For $f:mb(T)\rightarrow X$, the \textit{induced function} $f_I:mb(T\restriction I)\rightarrow X$ is the relation $\{{\langle} t\restriction I,f(t){\rangle}\mid t\in mb(T)\}$. \end{definition} Clearly $f_I$ is a well defined function if and only if $f$ is constant on equivalence classes of $\sim_I$. For example, if $I=\emptyset$ and $f_\emptyset$ is well defined then $f$ is constant. \begin{definition}\label{ Definition of important coordinates}
Let $T$ be a $\vec{U}$-fat tree of height $n$, and let $f:mb(T)\rightarrow B$ be any function.
\begin{enumerate}
\item A coordinate $i\in\{1,...,n\}$ is called an \textit{important coordinate} for $f$ if $\forall t_1,t_2\in mb(T)$, $t_1(i)\neq t_2(i)$ implies $f(t_1)\neq f(t_2)$.
\item The \textit{set of important coordinates for $f$} is the set $$I(T,f)=\{i\in\{1,...,n\}\mid i\text{ is an important coordinate}\}.$$ We say that $I(T,f)$ is \textit{complete} if $f_{I(T,f)}$ is well defined, i.e., $\forall t,t'\in mb(T)$, $t\sim_{I(T,f)} t'$ implies $f(t)=f(t')$. We also say that $I(T,f)$ is \textit{consistent} if for every $\vec{U}$-fat tree $S\subseteq T$ such that $mb(S)\in U_T$, $I(S,f\restriction mb(S))\subseteq I(T,f)$.
\end{enumerate} \end{definition} \begin{remark}\label{Remark important coordinates} \begin{enumerate}
\item The structure of the tree $T$ imposes some dependencies between the levels of the tree which are not related to the function. For example, assume that $o^{\vec{U}}(\kappa)=\kappa$ and that ${\langle} X^{(\kappa)}_i\mid i<\kappa{\rangle}$ is a discrete family for ${\langle} U(\kappa,i)\mid i<\kappa{\rangle}$. Let $T$ be the tree of height $2$ such that:
${\rm Succ}_T({\langle}{\rangle})=X^{(\kappa)}_0$ and for every $\alpha\in{\rm Succ}_T({\langle}{\rangle})$, ${\rm Succ}_T({\langle}\alpha{\rangle})=X^{(\kappa)}_\alpha$.
Define the function $f:mb(T)\rightarrow \kappa$ by $f({\langle}\alpha,\beta{\rangle})=\beta$. Clearly, the function $f$ depends only on the second coordinate, i.e., for every ${\langle}\alpha,\beta{\rangle},{\langle}\gamma,\delta{\rangle}\in mb(T)$, $f({\langle}\alpha,\beta{\rangle})=f({\langle}\gamma,\delta{\rangle})\leftrightarrow \beta=\delta$ and $f_{\{2\}}$ is well defined. However, the structure of the tree is such that if $\alpha\neq \gamma$ then $X^{(\kappa)}_\alpha\cap X^{(\kappa)}_\gamma=\emptyset$ and $\beta\neq \delta$, which imposes that $1$ is important. Note that in this case, by definition, $I(T,f)=\{1,2\}$.
\item If $S\subseteq T$ then $I(T,f)\subseteq I(S,f\restriction mb(S))$. Hence if $I(T,f)$ is complete then also $I(S,f\restriction mb(S))$ is complete, and if $I(T,f)$ is consistent, then $I(T,f)=I(S,f\restriction mb(S))$ and also $I(S,f\restriction mb(S))$ is consistent. \end{enumerate} \end{remark} \begin{lemma}\label{important coordinates} Let $T$ be a $\vec{U}$-fat tree on $\theta_1\leq...\leq\theta_n$ and $f:mb(T)\rightarrow B$ where $B$ is any set. Then there is a $\vec{U}$-fat tree $T'\subseteq T$, with $mb(T')\in U_T$ and $I\subseteq\{1,...,ht(T)\}$ such that for any $t,t'\in mb(T')$ $$t\restriction I=t'\restriction I \Leftrightarrow f(t)=f(t').$$ \end{lemma} Before proving the lemma, let us state as a corollary the generalization we desired: \begin{corollary}\label{corollary for important coordinates}
Let $T$ be a $\vec{U}$-fat tree on $\theta_1\leq...\leq\theta_n$ and $f:mb(T)\rightarrow B$ where $B$ is any set. Then there is a $\vec{U}$-fat tree $T'\subseteq T$, with $mb(T')\in U_T$, such that the set of important coordinates $I^*:=I(T',f\restriction mb(T'))$ is complete and consistent. In particular $(f\restriction mb(T'))_{I^*}$ is well defined and $1-1$. \end{corollary} \textit{Proof of corollary \ref{corollary for important coordinates}}. Let $I\subseteq\{1,...,n\}$ be as guaranteed by \ref{important coordinates}; then $I\subseteq I(T',f\restriction mb(T'))$. Indeed, every $i\in I$ is important, since if $t_1,t_2\in mb(T')$ and $t_1(i)\neq t_2(i)$, then $t_1\restriction I\neq t_2\restriction I$, and thus $f(t_1)\neq f(t_2)$.
Therefore, $f_{I(T',f\restriction mb(T'))}$ is well defined, since for every $t_1,t_2\in mb(T')$, $t_1\restriction I(T',f\restriction mb(T'))=t_2\restriction I(T',f\restriction mb(T'))$ implies that $t_1\restriction I=t_2\restriction I$, hence $f(t_1)=f(t_2)$. We conclude that $I(T',f\restriction mb(T'))$ is complete.
To ensure consistency, we shrink $T'$ even more. For every $i\notin I(T',f\restriction mb(T'))$, if there is $R\subseteq T'$ with $mb(R)\in U_T$ such that $i\in I(R,f\restriction mb(R))$, pick any such $R$ and denote it by $R_i$; otherwise let $R_i=T'$. Define $X^*=\cap_{i\notin I(T',f\restriction mb(T'))}mb(R_i)$. Clearly $X^*\in U_T$. By \ref{Los for trees}(4) there is a $\vec{U}$-fat tree $T^*$ such that $mb(T^*)\subseteq X^*$ and $mb(T^*)\in U_T$. It follows that $T^*\subseteq R_i\subseteq T'$ for every $i$. By \ref{Remark important coordinates}, $ I(T',f\restriction mb(T'))\subseteq I(T^*,f\restriction mb(T^*))$ and therefore $I(T^*,f\restriction mb(T^*))$ is also complete. To see that it is consistent, let $S\subseteq T^*$, $mb(S)\in U_T$, and let $i\in I(S,f\restriction mb(S))$; then $S\subseteq T'$, so by definition of $R_i$, $i\in I(R_i,f\restriction mb(R_i))$. Since $T^*\subseteq R_i$, $I(R_i,f\restriction mb(R_i))\subseteq I(T^*,f\restriction mb(T^*))$, hence $i\in I(T^*,f\restriction mb(T^*))$. $\blacksquare$
\textit{Proof of Lemma \ref{important coordinates}.} Again we go by induction on $ht(T)$. For $ht(T)=1$ this is well known. Assume $ht(T)=n+1$, fix $\alpha\in {\rm Lev}_1(T)$ and consider the function $$f_{\alpha}:mb(T/\langle\alpha\rangle)\rightarrow B, \ \ f_\alpha(\vec{\beta})=f(\alpha^{\frown}\vec{\beta}).$$ By the induction hypothesis there is $T'_\alpha\subseteq T/\langle\alpha\rangle$ such that $mb(T'_\alpha)\in U_{T/{{\langle}\alpha{\rangle}}}$ and $I_\alpha\subseteq\{2,...,n+1\}$ such that $$(\star) \ \ \forall t_1,t_2\in mb(T'_\alpha). t_1\restriction I_\alpha=t_2\restriction I_\alpha\leftrightarrow f_\alpha(t_1)=f_\alpha(t_2).$$ Find $H\in U^{(T)}_{\langle\rangle}$ and $I'\subseteq\{2,...,n+1\}$ such that $I_\alpha=I'$ for every $\alpha\in H$. Let $S$ be the tree with ${\rm Lev}_1(S)=H$ and for every $\alpha\in H$, $S/{{\langle}\alpha{\rangle}}=T'_\alpha$; then by \ref{the drived ultrafilter of a tree}(2), $mb(S)\in U_T$. It follows that for every $t,s\in mb(S)$, if $t\restriction \{1\}\cup I'=s\restriction \{1\}\cup I'$ then $$f(t)=f_{t(1)}(t\restriction\{2,...,n+1\})=f_{s(1)}(s\restriction \{2,...,n+1\})=f(s).$$ If the implication $f(t)=f(t')\rightarrow t \restriction\{1\}\cup I'=t'\restriction \{1\}\cup I'$ holds for every $t,t'\in mb(S)$, then we can take $I=I'\cup\{1\}$ and we are done. However, there can still be a counterexample, i.e., $t,t'\in mb(S)$ such that $$t\restriction I'\cup\{1\}\neq t'\restriction I'\cup\{1\}\wedge f(t)=f(t').$$ Our strategy will be to go over all possible interweavings of counterexamples and shrink the tree $S$ to eliminate them. We will see that if we fail to do so, then we can take $I=I'$. Note that if $t(1)=t'(1)$ then by the construction of $S$, $t,t'$ cannot be a counterexample, hence a counterexample is one with $t(1)\neq t'(1)$.
Fix any interweaving $p={\langle} g,g'{\rangle}$ with $g(1)\neq g'(1)$, and consider the iteration $j_p$.
If this iteration is undefined then by \ref{undefined interweaving} we can shrink $S$ so as to eliminate this kind of interweaving. If the iteration is defined, compare $j_p(f)(\langle \kappa_i\mid i\in A_p\rangle),j_p(f)(\langle\kappa_{j}\mid j\in A'_p\rangle)$. Suppose first that the interweaving is such that $g(i)\neq g'(i)$ for some $i\in I'$. We claim that $$(\star\star) \ \ j_p(f)(\langle \kappa_i\mid i\in A_p\rangle)\neq j_p(f)(\langle\kappa_{j}\mid j\in A'_p\rangle).$$
Otherwise, by \ref{TreeUlt}(1) find a $\vec{U}$-fat tree $S_p$ such that the failure of $(\star\star)$ reflects, namely $f(s\restriction A_p)=f(s\restriction A'_p)$ for every $s\in mb(S_p)$. Let $i$ be maximal such that $g(i)\neq g'(i)$; without loss of generality, suppose that $g'(i)<g(i)$. Note that $q=g(i)\in A_p\setminus A_p'$: otherwise, if $q\in A'_p$, then for some $j>i$, $g'(j)=g(i)$ and therefore $g(j)>g(i)=g'(j)$, hence $g(j)\neq g'(j)$, contradicting the maximality of $i$.
We construct recursively $t,r\in mb(S_p)$: pick any element $s\in {\rm Lev}_{q-1}(S_p)$ and set $$t\restriction \{1,...,q-1\}=s=r\restriction \{1,...,q-1\}.$$ Pick $t(q)<r(q)\in {\rm Succ}_{S_p}(s)$; since $q\notin A'_p$, $t\restriction A'_p\cap\{1,...,q\}= r\restriction A'_p\cap\{1,...,q\}$. Assume that $t\restriction\{1,...,k\},r\restriction\{1,...,k\}\in {\rm Lev}_k(S_p)$ are defined such that $$t\restriction A'_p\cap \{1,...,k\}=r\restriction A'_p\cap \{1,...,k\}.$$ If $k+1\in A'_p$ then $U^{(S_p)}_{t\restriction\{1,...,k\}}=U^{(S_p)}_{r\restriction\{1,...,k\}}$, as it depends only on $t\restriction A'_p\cap \{1,...,k\}$. Thus we can choose $$t(k+1)=r(k+1)\in {\rm Succ}_{S_p}(t\restriction\{1,...,k\})\cap {\rm Succ}_{S_p}(r\restriction\{1,...,k\}).$$
If $k+1\in A_p\setminus A'_p$, pick $t(k+1)\in {\rm Succ}_{S_p}(t\restriction\{1,...,k\})$ and $r(k+1)\in {\rm Succ}_{S_p}(r\restriction\{1,...,k\})$ arbitrarily. Note that in any case $t\restriction A_p'\cap\{1,...,k+1\}=r\restriction A_p'\cap\{1,...,k+1\}$. Eventually we obtain $t,r\in mb(S_p)$ with $t\restriction A'_p=r\restriction A'_p=\vec{\alpha}'$ and ${\rm min}(t)={\rm min}(r)={\rm min}(s)$.
Hence $t\restriction A_p,r\restriction A_p,\vec{\alpha}'\in mb(S)$; note that both $t\restriction A_p,\vec{\alpha}'$ and $r\restriction A_p,\vec{\alpha}'$ interweave as $p$. Consequently,
$$f(t\restriction A_p)=f(\vec{\alpha}')=f(r\restriction A_p).$$ This means we have found a counterexample with the same first coordinate, which is a contradiction; we conclude that $j_p(f)(\langle \kappa_i\mid i\in A_p\rangle)\neq j_p(f)(\langle\kappa_{j}\mid j\in A'_p\rangle)$. By \ref{TreeUlt}(1) and \ref{TreeUlt}(2) we can shrink $S$ so that for every $t,t'$ which interweave as $p$, $f(t)\neq f(t')$; in other words, we have eliminated all counterexamples which interweave as $p$. Next, consider $p$ for which $g(i)=g'(i)$ for every $i\in I'$. If $$j_p(f)(\langle \kappa_i\mid i\in A_p\rangle)= j_p(f)(\langle\kappa_{j}\mid j\in A'_p\rangle)$$ then we can shrink $S$ so that whenever $t,t'\in mb(S)$ interweave as $p$, $f(t)=f(t')$. By \ref{TreeUlt}(3) we can shrink $S$ further to $S^*$ so that for every $t\in mb(S^*)$ and $\alpha<{\rm min}(t)$ there is $s\in mb(S)$ so that ${\rm min}(s)=\alpha\wedge t,s$ interweave as $p$. We claim that we can drop $1$, i.e., $I=I'$ is the desired set. To see this, assume that $t,t'\in mb(S^*)$. Without loss of generality, assume that ${\rm min}(t')=\alpha<{\rm min}(t)$. By the construction of $S^*$, there is $t''\in mb(S)$ such that
\begin{enumerate}
\item $t,t''$ interweave as $p$.
\item $t\restriction I= t''\restriction I$.
\item ${\rm min}(t')=\alpha={\rm min}(t'')$.
\end{enumerate} Hence $$f(t)=f(t')\Leftrightarrow^{(1)} f(t'')=f(t')\Leftrightarrow^{(3)} t''\restriction I=t'\restriction I\Leftrightarrow^{(2)} t\restriction I=t'\restriction I.$$
Finally if $j_p(f)(\langle \kappa_i\mid i\in A_p\rangle)\neq j_p(f)(\langle\kappa_{j}\mid j\in A'_p\rangle)$ then we shrink $S$ and eliminate counterexamples which interweave as $p$. Obviously, if we went through all possible interweavings of all counterexamples and eliminated them, then $I=I'\cup\{1\}$ will be as desired.
$\blacksquare$
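To illustrate Lemma \ref{important coordinates} in the simplest non-trivial case, consider again the tree $T$ of height $2$ and the projection $f:mb(T)\rightarrow \kappa$, $f({\langle}\alpha,\beta{\rangle})=\alpha$, discussed before Definition \ref{the induced function}; the following computation is only a sanity check and is not used later. Here we may take $T'=T$ and $I=\{1\}$: for every $t,t'\in mb(T')$, $$t\restriction I=t'\restriction I\Leftrightarrow t(1)=t'(1)\Leftrightarrow f(t)=f(t'),$$ so $(f\restriction mb(T'))_I$ is well defined and $1-1$, as predicted by Corollary \ref{corollary for important coordinates}.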
\begin{lemma}\label{function separation} Let $T$ and $S$ be $\vec{U}$-fat trees on $\kappa_1\leq...\leq\kappa_n$, $\theta_1\leq...\leq\theta_m$ respectively. Suppose $F:mb(T)\rightarrow \kappa$ and $G:mb(S)\rightarrow \kappa$ are any functions such that $I:=I(T,F), \ J:=I(S,G)$ are complete and consistent. Then there exist $\vec{U}$-fat subtrees $T^*,S^*$ with $mb(T^*)\in U_T$ and $ mb(S^*)\in U_S$ such that one of the following holds: \begin{enumerate}
\item $mb(T^*)\restriction I=mb(S^*)\restriction J$\footnote{ Denote $mb(T)\restriction I=\{t\restriction I\mid t\in mb(T)\}$.} and $(F\restriction mb(T^*))_{I}=(G\restriction mb(S^*))_{J}$.
\item $Im(F\restriction mb(T^*))\cap Im(G\restriction mb(S^*))=\emptyset$. \end{enumerate} \end{lemma}
\noindent\textit{Proof}. The argument is similar to the product-of-measures version in \cite{partOne}. Fix $F,G$; we proceed by induction on $\langle ht(T),ht(S)\rangle=:{\langle} n,m{\rangle}$. Let us first deal with some trivial cases:
If $I=J=\emptyset$, i.e., $F,G$ are constantly $d_F,d_G$ respectively, then either $d_F\neq d_G$ and $(2)$ holds, or $d_F=d_G$ and $(1)$ holds. If $I=\emptyset$ and $J\neq\emptyset$, pick $j_0\in J$; then $F$ is constantly $d_F$. If $d_F\notin Im(G)$ then $(2)$ holds; otherwise, there is $\vec{\beta}\in mb(S)$ such that $G(\vec{\beta})=d_F$. Remove $\vec{\beta}(j_0)$ from ${\rm Lev}_{j_0}(S)$, i.e., define:\begin{enumerate}
\item $S^*\restriction \{1,...,j_0-1\}:=S\restriction \{1,...,j_0-1\}$.
\item For every $t\in {\rm Lev}_{j_0-1}(S)$, define ${\rm Succ}_{S^*}(t):={\rm Succ}_{S}(t)\setminus\{\vec{\beta}(j_0)\}$.
\item For every $t\in {\rm Lev}_{j_0}(S^*)$, $S^*/t:=S/t$. \end{enumerate} By \ref{the drived ultrafilter of a tree}, $mb(S^*)\in U_S$. If $\vec{\beta}'\in mb(S^*)$, then $G(\vec{\beta}')\neq d_F$: otherwise $G(\vec{\beta}')=d_F=G(\vec{\beta})$, and since $j_0\in J$ is an important coordinate, $\vec{\beta}'(j_0)=\vec{\beta}(j_0)$, a contradiction. So again $(2)$ holds. Similarly, if $J=\emptyset$ and $I\neq \emptyset$ then we can prove $(2)$. This argument includes the case that one of the trees is $\{{\langle}{\rangle}\}$, in which case the functions are constantly $F({\langle}{\rangle})$ or $G({\langle}{\rangle})$. Thus we can assume that $n,m\geq 1$. Without loss of generality, assume that $\theta_1\leq\kappa_1$.
For every $\beta\in {\rm Succ}_S({\langle}{\rangle})$, consider the function\footnote{Note that if $m=1$ then $S/{\langle}\beta{\rangle}=\{{\langle}{\rangle}\}$ and $G_{\beta}$ is constant.} $$G_{\beta}: mb(S/{\langle}\beta{\rangle})\rightarrow \kappa, \ G_\beta(\vec{\beta})=G(\beta^{\smallfrown}\vec{\beta}).$$ Then for every $\beta\in {\rm Succ}_S({\langle}{\rangle})$, $I(S/{\langle}\beta{\rangle},G_\beta)\supseteq J\setminus \{1\}$. Shrink ${\rm Succ}_S({\langle}{\rangle})$ to stabilize $I(S/{\langle}\beta{\rangle},G_\beta)=J^*$. Then $J^*=J\setminus \{1\}$: if we let $S^*$ be the tree obtained from $S$ by shrinking ${\rm Succ}_S({\langle}{\rangle})$, with $S^*/{\langle}\beta{\rangle}=S/{\langle}\beta{\rangle}$, then by \ref{the drived ultrafilter of a tree}(4), $mb(S^*)\in U_S$. By consistency, $I(S^*,G\restriction mb(S^*))\subseteq J$. So if $j\in J^*$ then it follows by definition of important coordinate that $j\in I(S^*,G\restriction mb(S^*))$, hence $j\in J$. It follows now that for every $\beta$, $I(S/{\langle}\beta{\rangle},G_\beta)$ is complete. For consistency, the argument given in Corollary \ref{corollary for important coordinates} applies by shrinking $S/{\langle}\beta{\rangle}$ if necessary. To ease notation, we keep denoting the shrunken tree by $S$. Apply induction to $F$ and $G_\beta$, $I,J^*$, to find $T^\beta\subseteq T,\ S^{\beta}\subseteq S/{\langle}\beta{\rangle}$ for which $mb(T^\beta)\in U_T,\ mb(S^{\beta})\in U_{S/{\langle}\beta{\rangle}}$ such that one of the following holds: \begin{enumerate}
\item $mb(T^\beta)\restriction I=mb(S^\beta)\restriction J^*$ and $(F\restriction mb(T^\beta))_I=(G_{\beta}\restriction mb(S^\beta))_{J^*}$.
\item $Im(F\restriction mb(T^\beta))\cap Im(G_{\beta}\restriction mb(S^\beta))=\emptyset$. \end{enumerate}
Denote by $i_\beta\in\{1,2\}$ the relevant case. There is $H\subseteq {\rm Succ}_S(\langle\rangle)$, $H\in U^{(S)}_{\langle\rangle}$ and $i^*\in\{1,2\}$ such that for every $\beta\in H$, $i_\beta=i^*$. Let $S^*$ be the tree such that ${\rm Succ}_{S^*}(\langle\rangle)=H$ and for every $\beta\in H$, $S^*/{\langle}\beta{\rangle}=S^\beta$, where $mb(S^\beta)\in U_{S/{\langle}\beta{\rangle}}$. By \ref{the drived ultrafilter of a tree}(2), $S^*\subseteq S$ and $mb(S^*)\in U_S$.
If $i^*=1$, let $T^*=\cup_{\beta\in H}T^\beta\subseteq T$; then $mb(T^*)\in U_T$. We argue that $1\notin J$ and therefore $J^*=J$. Indeed, fix some $\beta_1<\beta_2\in H$ and pick some $t\in mb(T^{\beta_1})\cap mb(T^{\beta_2})$ (this is possible since both sets are in $U_T$); then $$t\restriction I\in ( mb(T^{\beta_1})\restriction I)\cap( mb(T^{\beta_2})\restriction I).$$ Since for every $\beta\in H$, $mb(T^\beta)\restriction I=mb(S^\beta)\restriction J^*$, there are $s_1\in mb(S^{\beta_1})$ and $s_2\in mb(S^{\beta_2})$ such that $s_1\restriction J^*=t\restriction I=s_2\restriction J^*$. Hence $\beta_1^{\smallfrown}s_1,\beta_2^{\smallfrown}s_2\in mb(S)$ and $$G(\beta_1^{\smallfrown}s_1)=G_{\beta_1}(s_1)=(G_{\beta_1})_{J^*}(s_1\restriction J^*)=F_I(t\restriction I)=(G_{\beta_2})_{J^*}(s_2\restriction J^*)=G_{\beta_2}(s_2)=G(\beta_2^{\smallfrown}s_2).$$ Thus we have found two maximal branches $x,y\in mb(S)$ which differ on $\{1\}$ such that $G(x)=G(y)$; by the definition of important coordinates it follows that $1\notin J$. Moreover, $mb(T^*)\restriction I=mb(S^*)\restriction J$ and $(F\restriction mb(T^*))_I=(G\restriction mb(S^*))_J$, namely $(1)$ holds. To see this, $$mb(T^*)\restriction I=\cup_{\beta\in H} mb(T^\beta)\restriction I=\cup_{\beta\in H} mb(S^\beta)\restriction J^*=\cup_{\beta\in {\rm Succ}_{S^*}({\langle}{\rangle})}mb(S^*/{\langle}\beta{\rangle})\restriction J=mb(S^*)\restriction J.$$ Also if $\rho\in mb(T^*)\restriction I=mb(S^*)\restriction J$, there is $\beta\in H$ such that $\rho\in mb(T^{\beta})\restriction I=mb(S^\beta)\restriction J$, hence $$(G\restriction mb(S^*))_J(\rho)=(G_{\beta}\restriction mb(S^{\beta}))_J(\rho)=(F\restriction mb(T^{\beta}))_I(\rho)=(F\restriction mb(T^*))_I(\rho).$$ Assume $i^*=2$.
We repeat the same process: consider now $F_\alpha$ for every $\alpha\in {\rm Succ}_{T}({\langle}{\rangle})$; we can shrink $T$ so that $I\setminus\{1\}=I(T/{{\langle}\alpha{\rangle}},F_\alpha)$ is complete and consistent. Apply the induction hypothesis to $F_\alpha,G$; for every $\alpha$ we obtain $j_\alpha\in\{1,2\}$ corresponding to $i_\beta$. We shrink ${\rm Succ}_{T}({\langle}{\rangle})$ to some $W$ and stabilize $j_\alpha=j^*$. If $j^*=1$ then $1\notin I$, and we can find $S^*\subseteq S$, $T^*\subseteq T$ such that $mb(S^*)\in U_S$ and $mb(T^*)\in U_T$ such that $$mb(S^*)\restriction J= mb(T^*)\restriction I\text{ and }(F\restriction mb(T^*))_I=(G\restriction mb(S^*))_J$$ so $(1)$ holds. Assume that $j^*=2$.
\textbf{Case 1: Assume $\theta_1<\kappa_1$.} Shrink ${\rm Succ}_{T}({\langle}{\rangle})$ so that ${\rm min}({\rm Succ}_{T}({\langle}{\rangle}))>\theta_1$. Since $U_T$ is $\kappa_1$-complete and $|H|=\theta_1$, $\cap_{\beta\in H}mb(T^{\beta})\in U_T$. By \ref{Los for trees}(4) there is a $\vec{U}$-fat tree $T^*$ such that $mb(T^*)\in U_T$ and $mb(T^*)\subseteq \cap_{\beta\in H}mb(T^{\beta})$; in particular $T^*\subseteq T$. It follows that $$(\star) \ \ \ \forall t\in mb(T^*)\forall s\in mb(S^*). F(t)\neq G(s).$$ To see this, note that $s(1)\in {\rm Succ}_{S^*}({\langle}{\rangle})=H$, $t\in mb(T^{s(1)})$ and $s\restriction\{2,...,m\}\in mb(S^{s(1)})$. Since $i^*=2$, $Im(F\restriction mb(T^{s(1)}))\cap Im(G_{s(1)}\restriction mb(S^{s(1)}))=\emptyset$, hence $F(t)\neq G_{s(1)}(s\restriction\{2,...,m\})=G(s)$.
\textbf{Case 2: Assume that $\theta_1=\kappa_1$}. Shrink the trees $T$ and $S$ in the following way: ${\rm Succ}_{T'}({\langle}{\rangle})=\Delta_{\beta\in H} {\rm Succ}_{T^{\beta}}({\langle}{\rangle})\in U^{(T)}_{{\langle}{\rangle}}, \ {\rm Succ}_{S'}({\langle}{\rangle})=\Delta_{\alpha\in W}{\rm Succ}_{S^{\alpha}}({\langle}{\rangle})\in U^{(S)}_{{\langle}{\rangle}}$. Also for every $\alpha\in {\rm Succ}_{T'}({\langle}{\rangle})$, find a $\vec{U}$-fat tree $T'/{\langle}\alpha{\rangle}$ such that $mb(T'/{\langle}\alpha{\rangle})\subseteq \cap_{\beta\in H\cap\alpha} mb(T^{\beta}/{\langle}\alpha{\rangle})$. In the same fashion for every $\beta\in {\rm Succ}_{S'}({\langle}{\rangle})$, find $S'/{\langle}\beta{\rangle}$ such that $mb(S'/{\langle}\beta{\rangle})\subseteq \cap_{\alpha\in W\cap\beta} mb(S^{\alpha}/{\langle}\beta{\rangle})$. Then we claim the following:
$$(\star\star) \ \ \ \forall t\in mb(T')\forall s\in mb(S'). t(1)\neq s(1)\rightarrow F(t)\neq G(s).$$ To see this, assume for example that $s(1)<t(1)$ (the case $t(1)<s(1)$ is symmetric). Note that $s(1)\in {\rm Succ}_{S'}({\langle}{\rangle})\subseteq H$, and by the definition of diagonal intersection, $t(1)\in {\rm Succ}_{T^{s(1)}}({\langle}{\rangle})$. Also, $t\restriction\{2,...,n\}\in mb(T^{s(1)}/{\langle} t(1){\rangle})$ and therefore $t\in mb(T^{s(1)})$. Clearly, $s\restriction\{2,...,m\}\in mb(S'/{\langle} s(1){\rangle})=mb(S^{s(1)})$. Since $i^*=2$, $Im(F\restriction mb(T^{s(1)}))\cap Im(G_{s(1)}\restriction mb(S^{s(1)}))=\emptyset$, hence $F(t)\neq G_{s(1)}(s\restriction\{2,...,m\})=G(s)$.
So we are left with the situation that ${\rm min}(s)={\rm min}(t)$. If $U^{(S)}_{{\langle}{\rangle}}\neq U^{(T)}_{{\langle}{\rangle}}$ we can shrink ${\rm Succ}_{T'}({\langle}{\rangle}),{\rm Succ}_{S'}({\langle}{\rangle})$ so that they are disjoint, avoiding this situation, and conclude $(2)$. If $U^{(T)}_{\langle\rangle}=U^{(S)}_{\langle\rangle}$, let $A={\rm Succ}_{T'}(\langle\rangle)\cap {\rm Succ}_{S'}(\langle\rangle)$. For every $\alpha\in A$, apply the induction hypothesis to the functions $F_\alpha,G_\alpha$ and the sets $I\setminus\{1\},J\setminus\{1\}$ to obtain $T^\alpha\subseteq T/{\langle}\alpha{\rangle}$ and $S^\alpha\subseteq S/{\langle}\alpha{\rangle}$ such that $(1)$ or $(2)$ holds. We denote the relevant case by $r_\alpha$. Again, shrink $A$ to $A^*$ and find $r^*\in\{1,2\}$ so that for every $\alpha\in A^*$, $r_\alpha=r^*$. Define ${\rm Succ}_{T^*}({\langle}{\rangle})={\rm Succ}_{S^*}({\langle}{\rangle})=A^*$ and for every $\alpha\in A^*$, $T^*/{\langle}\alpha{\rangle}=T^{\alpha}$ and $S^*/{\langle}\alpha{\rangle}=S^{\alpha}$. Clearly $T^*\subseteq T$, $S^*\subseteq S$ and $mb(T^*)\in U_T,\ mb(S^*)\in U_S$.
If $r^*=2$, then for every $\alpha^{\smallfrown}t\in mb(T^*)$ and $\alpha^{\smallfrown}s\in mb(S^*)$ we have $r_{\alpha}=2$, so $F(\alpha^{\smallfrown}t)=F_\alpha(t)\in Im(F_\alpha\restriction mb(T^{\alpha}))$ and $G(\alpha^{\smallfrown}s)=G_\alpha(s)\in Im(G_\alpha\restriction mb(S^{\alpha}))$. Since $r_{\alpha}=2$, $G(\alpha^{\smallfrown}s)\neq F(\alpha^{\smallfrown}t)$. Having eliminated the possibility that $F(t)=G(s)$ with ${\rm min}(s)={\rm min}(t)$, we conclude that $(2)$ holds.
Finally, assume $r^*=1$, namely that for $I\setminus\{1\}= I^*\subseteq\{2,...,ht(T)\}$, $J\setminus\{1\}= J^*\subseteq\{2,...,ht(S)\}$, and every $\alpha\in A^*$, $$mb(T^\alpha)\restriction I^*=mb(S^\alpha)\restriction J^*\ \wedge \ (F_\alpha\restriction mb(T^\alpha))_{I^*}=(G_{\alpha}\restriction mb(S^\alpha))_{J^*}.$$ It follows that $$(\triangle) \ \ \ mb(T^*)\restriction I^*\cup\{1\}=\cup_{\alpha\in A^*}\{\alpha\}\times mb(T^\alpha)\restriction I^*=\cup_{\alpha\in A^*}\{\alpha\}\times mb(S^\alpha)\restriction J^*=mb(S^*)\restriction J^*\cup\{1\}.$$ Moreover, for every ${\langle}\alpha{\rangle}^{\smallfrown}\rho\in mb(T^*)\restriction I^*\cup\{1\}$, $$(\triangle\triangle) \ \ \ (F\restriction_{ mb(T^*)})_{I^*\cup\{1\}}(\alpha,\rho)=(F_\alpha\restriction mb(T^{\alpha}))_{I^*}(\rho)=(G_\alpha\restriction mb(S^{\alpha}))_{J^*}(\rho)=(G\restriction mb(S^*))_{J^*\cup\{1\}}(\alpha,\rho).$$ If $1\notin I$, then $1$ is not an important coordinate for $F\restriction mb(T^*)$, and by definition this means that there are $t_1,t_2\in mb(T^*)$ such that $t_1(1)\neq t_2(1)$ and $F(t_1)=F(t_2)$. Note that $I=I^*$, as $1\notin I$. Then $$t_1\restriction I\in mb(T^{t_1(1)})\restriction I=mb(S^{t_1(1)})\restriction J^*$$ $$t_2\restriction I\in mb(T^{t_2(1)})\restriction I=mb(S^{t_2(1)})\restriction J^*.$$ So there are $s_1,s_2\in mb(S^*)$ such that $s_1(1)=t_1(1), s_2(1)=t_2(1)$ and $s_1\restriction J^*=t_1\restriction I,s_2\restriction J^*=t_2\restriction I$. It follows that $$G(s_1)=G_{s_1(1)}(s_1\restriction J^*)=F_{t_1(1)}(t_1\restriction I)=F(t_1)\neq F(t_2)=F_{t_2(1)}(t_2\restriction I)=G_{s_2(1)}(s_2\restriction J^*)=G(s_2).$$ So $1$ is not important for $G\restriction mb(S^*)$, hence $1\notin J$. In a similar way, we conclude that if $1\notin J$ then $1\notin I$. In either case, from $(\triangle),(\triangle\triangle)$ we conclude that $(1)$ holds. $\blacksquare$
\section{The proof for short sequences} Let us return to $\mathbb{M}[\vec{U}]$ and use the combinatorial tools developed in the last section. \begin{definition}
Let $p\in\mathbb{M}[\vec{U}]$ be a condition.
A tree of extensions of $p$ is a $\vec{U}$-fat tree $T$ on $\theta_1\leq...\leq\theta_n$, such that for every $1\leq i\leq n$, $\theta_i\in \kappa(p)$ and each $t\in T$ is a legal extension of $p$ i.e. $p^{\frown}t\in\mathbb{M}[\vec{U}]$. Denote by $\xi(t),\kappa(t)$ the ordinals such that ${\rm Succ}_{T}(t)\in U(\kappa(t),\xi(t))$.
\end{definition}
If $T$ is a tree of extensions of $p$ and $T'\subseteq T$ is a $\vec{U}$-fat tree such that $mb(T')\in U_T$ then $T'$ is also a tree of extensions of $p$.
Let $p^{\smallfrown}\vec{\alpha}\in\mathbb{M}[\vec{U}] $, and for every $r\leq |\vec{\alpha}|=:n$ let $B_r\in \cap\vec{U}(\vec{\alpha}(r))$. Define $$p^{\smallfrown}{\langle} \vec{\alpha},\vec{B}^{\vec{\alpha}}{\rangle}:=p^{\smallfrown}{\langle} \vec{\alpha}(1),B_1\cap \vec{\alpha}(1){\rangle}^{\smallfrown}...^{\smallfrown}{\langle} \vec{\alpha}(n),B_{n}\cap( \vec{\alpha}(n-1),\vec{\alpha}(n)){\rangle}.$$ \begin{proposition}\label{amalgamate} Let $T$ be a $\vec{U}$-fat tree of extensions of $p$, and for every $t\in mb(T)$ let $p_t\geq^* p^{\frown}t$ be a condition. Then there are $p^*,T^*$ and $B^s$ for $s\in T^*\setminus mb(T^*)$ such that: \begin{enumerate}
\item $p\leq^* p^*$.
\item $T^*\subseteq T$ is a $\vec{U}$-fat tree of extensions for $p^*$ with $mb(T^*)\in U_T$.
\item $B^{s}\in \cap_{\xi<\xi(s)}U(\kappa(s),\xi)$.
\item For every $t\in mb(T^*)$ $$p_t\leq^*p^{*\smallfrown}{\langle} t,\vec{B}^{t}{\rangle}:=p^{*\smallfrown}{\langle} t(1), B^{{\langle}{\rangle}}\cap t(1){\rangle}^\smallfrown...^{\smallfrown}{\langle} t(n),B^{t\restriction\{1,...,n-1\}}\cap t(n){\rangle}.$$ \end{enumerate} \end{proposition}
\noindent\textit{Proof}. Assume that $T$ is on $\kappa_{j_1}(p)\leq...\leq \kappa_{j_n}(p)$, and let us proceed by induction on $ht(T)$. If $ht(T)=1$, then for every $\alpha\in {\rm Succ}_{T}({\langle}{\rangle})\in U(\kappa_{j_1}(p),\xi({\langle}{\rangle}))$ write $$p^{\smallfrown}\alpha\leq^* p_\alpha={\langle} p_\alpha\restriction \kappa_{j_1-1}(p),{\langle}\alpha, B_\alpha{\rangle}, {\langle}\kappa_{j_1}(p), C_\alpha{\rangle}, p_\alpha\restriction(\kappa_{j_1}(p),\kappa]{\rangle}.$$ The order $\leq^*$ is more than $\kappa_{j_1}(p)$-closed in $\mathbb{M}[\vec{U}]\restriction(\kappa_{j_1}(p),\kappa]$, so we can find $p^*_>\in \mathbb{M}[\vec{U}]\restriction(\kappa_{j_1}(p),\kappa]$ such that $p_\alpha\restriction(\kappa_{j_1}(p),\kappa]\leq p^*_>$ for every $\alpha\in {\rm Succ}_T({\langle}{\rangle})$. For the lower part, shrink ${\rm Succ}_T({\langle}{\rangle})$ to $H\in U(\kappa_{j_1}(p),\xi({\langle}{\rangle}))$ and find $p^*_<\in \mathbb{M}[\vec{U}]\restriction \kappa_{j_1-1}(p)$ such that for every $\alpha\in H$, $p^*_<=p_\alpha\restriction\kappa_{j_1-1}(p)$. Next, by normality, $$C:=\Delta_{\alpha<\kappa_{j_1}(p)}C_\alpha\in \cap \vec{U}(\kappa_{j_1}(p)).$$ Use \ref{BetterSet} to find $C^*\subseteq C$ such that for every $\alpha\in C^*$, $C^*\cap\alpha\in\cap\vec{U}(\alpha)$. As for the $B_\alpha$'s, for every $\alpha\in H$, $B_\alpha\in \cap \vec{U}(\alpha)$. Use ineffability to shrink $H$ to $H'\in U(\kappa_{j_1}(p),\xi({\langle}{\rangle}))$ and to find a single set $X$ such that for every $\alpha\in H'$, $X\cap\alpha=B_\alpha$. It follows that $B^{{\langle}{\rangle}}:=C^*\cap X\in\cap_{j<\xi({\langle}{\rangle})}U(\kappa_{j_1}(p),j)$. Set ${\rm Succ}_{T^*}({\langle}{\rangle})=H'\cap C^*$ and let $$p\leq^*{\langle} p^*_{<},{\langle}\kappa_{j_1}(p),C^*{\rangle},p^*_>{\rangle}=:p^*.$$ To see that $p^*,B^{{\langle}{\rangle}},T^*$ are as wanted, let $\alpha\in {\rm Succ}_{T^*}({\langle}{\rangle})$. 
Since $\alpha\in H'$, $B^{{\langle}{\rangle}}\cap \alpha=B_\alpha\cap C^*\subseteq B_\alpha$. Since $\alpha\in H$, $p_\alpha\restriction\kappa_{j_1-1}(p)=p^*_<$, and since $\alpha\in {\rm Succ}_{T}({\langle}{\rangle})$, $p_\alpha\restriction(\kappa_{j_1}(p),\kappa]\leq^*p^*_>$. Finally, note that $$B_{j_1}(p^*)\setminus(\alpha+1)=C^*\setminus(\alpha+1)\subseteq C_\alpha.$$ Thus $p_\alpha\leq^*p^{*\smallfrown}{\langle} \alpha, B^{{\langle}{\rangle}}\cap\alpha{\rangle}$. Assume that $n=ht(T)>1$. Then for every $t\in T\setminus mb(T)$ and every $\alpha\in {\rm Succ}_{T}(t)$, we are given some condition $p^{\smallfrown}t^{\smallfrown}\alpha\leq^*p_{t^{\smallfrown}\alpha}$. Apply the case $ht(T)=1$ to $p^{\smallfrown}t$ and ${\rm Succ}_{T}(t)$ to find $p^{\smallfrown}t\leq^* p^*_t$, ${\rm Succ}_{T^*}(t)$ and a set $B^{t}\in\cap_{\xi<\xi(t)}U(\kappa(t),\xi)$ such that for every $\alpha\in {\rm Succ}_{T^*}(t)$, $p_{t^{\smallfrown}\alpha}\leq^* p_t^{*\smallfrown}{\langle}\alpha, B^t\cap\alpha{\rangle}$. Apply the induction hypothesis to $p, T\setminus mb(T)$ to find $p\leq^*p^*$, $T^*\subseteq T\setminus mb(T)$ and sets $B^s$ such that for every $t\in mb(T^*)$, $p^*_t\leq^* p^{*\smallfrown}{\langle} t,\vec{B}^t{\rangle}$. Hence for every $\alpha\in {\rm Succ}_{T^*}(t)$, $$p_{t^{\smallfrown}\alpha}\leq^*p_t^{*\smallfrown}{\langle} \alpha,B^t\cap\alpha{\rangle}\leq^*p^{*\smallfrown}{\langle} t,\vec{B}^t{\rangle}^{\smallfrown}{\langle}\alpha,B^t\cap\alpha{\rangle}=p^{*\smallfrown}{\langle} t^{\smallfrown}\alpha,\vec{B}^{t^{\smallfrown}\alpha}{\rangle}.$$ It follows that $p^*$, $T^*$ and $B^{t}$ are as wanted. $\blacksquare$
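The ineffability invoked in the base case is the standard stabilization property of normal measures, which we recall in the form used above (a well-known fact): if $U$ is a normal ultrafilter over $\kappa$ and ${\langle} B_\alpha\mid\alpha<\kappa{\rangle}$ is any sequence with $B_\alpha\subseteq\alpha$, then setting $$X=\{\gamma<\kappa\mid \{\alpha<\kappa\mid \gamma\in B_\alpha\}\in U\},$$ the set $\{\alpha<\kappa\mid X\cap\alpha=B_\alpha\}$ contains the diagonal intersection of the sets $A_\gamma=\{\alpha>\gamma\mid \gamma\in B_\alpha\leftrightarrow\gamma\in X\}\in U$, and hence belongs to $U$.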
The following lemma is the strong Prikry property for $\mathbb{M}[\vec{U}]$. \begin{lemma}\label{strongPrkryProperty} Let $D\subseteq \mathbb{M}[\vec{U}]$ be dense open and let $p\in\mathbb{M}[\vec{U}]$ be any condition. Then there are $p\leq^* p^*$, a tree $T$ of extensions of $p^*$, and sets $B^s\in \cap_{\xi<\xi(s)}U(\kappa(s),\xi)$ for every $s\in T\setminus mb(T)$, such that for every $t\in mb(T)$, $p^{*\smallfrown}{\langle} t,\vec{B}^t{\rangle}\in D$. \end{lemma}
\noindent\textit{Proof}.
Let $r\leq l(p)+1$, $\vec{\alpha}\in[\kappa_r(p)]^{<\omega}$, such that $p^{\frown}\vec{\alpha}\in \mathbb{M}[\vec{U}]$ is a condition. Set
$$A^0_r(\vec{\alpha})=\{\alpha\in B_r(p)\setminus({\rm max}(\vec{\alpha})+1)\mid \exists q\geq^* p^{\frown}\vec{\alpha}^{\smallfrown}{\langle}\alpha{\rangle}. \ q\in D\}, \ \ A^1_r(\vec{\alpha})=B_r(p)\setminus A^0_r(\vec{\alpha}).$$ For every $i< o^{\vec{U}}(\kappa_r(p))$, only one of $A^0_r(\vec{\alpha}),A^1_r(\vec{\alpha})$ is in $U(\kappa_r(p),i)$. Denote it by $A_{r,i}(\vec{\alpha})$ and let $C_{r,i}(\vec{\alpha})\in\{0,1\}$ be such that $A_{r,i}(\vec{\alpha})=A_r^{C_{r,i}(\vec{\alpha})}(\vec{\alpha})$. Define
$$A_{r,i}=\underset{\vec{\alpha}\in[\kappa_r(p)]^{<\omega}}{\Delta}A_{r,i}(\vec{\alpha})\cap B_{r}(p)\in U(\kappa_r(p),i).$$ So far $A_{r,i}$ has the property that for every $\vec{\alpha}\in[\kappa_r(p)]^{<\omega}$, if there is some $\alpha\in A_{r,i}$ with $p^{\frown}\vec{\alpha}^{\smallfrown}{\langle}\alpha{\rangle}\leq^* q\in D$ for some $q$, then for every $\alpha \in A_{r,i}$ there is $q\in D$ with $p^{\smallfrown}\vec{\alpha}^{\smallfrown}{\langle} \alpha{\rangle}\leq^*q$.
For every $\langle \alpha_1,...,\alpha_{n-1}{\rangle}\in[\kappa_r(p)]^{n-1}$, define $D_{r,i}^{(1)}(\alpha_1,...,\alpha_{n-1},*):A_{r,i}\rightarrow\{0,1\}$ by
$$D_{r,i}^{(1)}(\alpha_1,...,\alpha_{n-1},\alpha)=0 \Leftrightarrow \exists r\leq s\leq l(p)+1\exists j<o^{\vec{U}}(\kappa_s(p)) \ C_{s,j}(\alpha_1,...,\alpha_{n-1},\alpha)=0.$$
Find a homogeneous set for $D^{(1)}_{r,i}$, $A^{(1)}_{r,i}(\alpha_1,...,\alpha_{n-1})\in U(\kappa_r(p),i)$ with color $C^{(1)}_{r,i}(\alpha_1,...,\alpha_{n-1})$. Define
$$A^{(1)}_{r,i}=\underset{\vec{\alpha}\in[\kappa_r(p)]^{n-1}}{\Delta} A^{(1)}_{r,i}(\vec{\alpha})\cap B_r(p)\in U(\kappa_r(p),i).$$ In a similar fashion, define recursively for $k\leq n$ $$D_{r,i}^{(k)}(\alpha_1,...,\alpha_{n-k},\alpha)=0\Leftrightarrow \exists r\leq s\leq l(p)+1\exists j<o^{\vec{U}}(\kappa_s(p)) \ C_{s,j}^{(k-1)}(\alpha_1,...,\alpha_{n-k},\alpha)=0.$$ Find a homogeneous set $A_{r,i}^{(k)}(\alpha_1,...,\alpha_{n-k})\in U(\kappa_r(p),i)$ with color $C_{r,i}^{(k)}(\alpha_1,...,\alpha_{n-k})$ and let $$A^{(k)}_{r,i}=\underset{\vec{\alpha}\in[\kappa_r(p)]^{n-k}}{\Delta} A^{(k)}_{r,i}(\vec{\alpha})\cap B_r(p)\in U(\kappa_r(p),i).$$
Eventually, set $$A_{r,i,n}=\underset{k\leq n}{\bigcap}A^{(k)}_{r,i}, \ A_{r,i}=\underset{n<\omega}{\bigcap}A_{r,i,n}\in U(\kappa_r(p),i)\text{ and } A_r=\underset{i<o^{\vec{U}}(\kappa_r(p))}{\bigcup}A_{r,i}.$$
Let $p\leq^*p_1$, where $p_1$ is obtained from $p$ by shrinking $B_r(p)$ to the set obtained by applying \ref{BetterSet} to $A_r$, so that for every $\alpha\in B_r(p_1)$, $\alpha\cap B_r(p_1)\in\cap\vec{U}(\alpha)$. By density, there exists $p'\geq p_1$ such that $p'\in D$.
There is $\langle\vec{\alpha},\alpha\rangle\in [B(p_1)]^{<\omega}$ such that $p_1^{\frown}\langle\vec{\alpha},\alpha\rangle\leq^*p'$. Find $s_1\leq...\leq s_n\leq r$, $i_j\leq o^{\vec{U}}(\kappa_{s_j}(p))$ and $k<o^{\vec{U}}(\kappa_{r}(p))$ such that $\alpha\in A_{r,k}$ and $\vec{\alpha}=\langle\alpha_1,...,\alpha_{n-1}\rangle\in \prod^{n-1}_{j=1}A_{s_j,i_j}$.
It follows that $A_{r,k}(\vec{\alpha})=A^0_{r,k}(\vec{\alpha})$. Hence, $$C_{r,k}(\vec{\alpha})=0\Rightarrow D_{s_n,i_n}^{(1)}(\alpha_1,...,\alpha_{n})=0\Rightarrow C^{(1)}_{s_n,i_n}(\alpha_1,...,\alpha_{n-1})=0\Rightarrow D^{(2)}_{s_{n-1},i_{n-1}}(\alpha_1,...,\alpha_{n-1})=0\Rightarrow$$ $$ C^{(2)}_{s_{n-1},i_{n-1}}(\alpha_1,...,\alpha_{n-2})=0\Rightarrow...\Rightarrow D^{(n)}_{s_1,i_1}(\alpha_1)=0\Rightarrow C^{(n)}_{s_1,i_1}(\langle\rangle)=0.$$ Define the tree $T'$: let $s({\langle}{\rangle})=s_1$, $\xi({\langle}{\rangle})=i_1$ and define $${\rm Succ}_{T'}(\langle\rangle)=A_{s({\langle}{\rangle}),\xi({\langle}{\rangle})}\cap B_{s({\langle}{\rangle})}(p_1)\in U(\kappa_{s({\langle}{\rangle})}(p),\xi({\langle}{\rangle})).$$ Since $A_{s_1,i_1}\subseteq A^{(n)}_{s_1,i_1}({\langle}{\rangle})$ is homogeneous, $D^{(n)}_{s_1,i_1}(x)=0$ for every $x\in A_{s_1,i_1}$. Hence, there are $s(x)$ and $\xi(x)$ such that $D^{(n-1)}_{s(x),\xi(x)}(x,*)$ takes the color $0$ on $A_{s(x),\xi(x)}$. Let $${\rm Succ}_{T'}({\langle}\alpha{\rangle})=A_{s(\alpha),\xi(\alpha)}\cap B_{s(\alpha)}(p_1).$$ Recursively, define the other levels in a similar fashion. By \ref{frown extension}, for every $t\in mb(T')$, $p_1\leq p_1^{\smallfrown}t\in \mathbb{M}[\vec{U}]$. Consider the function $t\in mb(T')\mapsto {\langle} s(t\restriction 0),s(t\restriction 1),...,s(t\restriction n){\rangle}$; by \ref{StabTree}, we can find a $\vec{U}$-fat tree $T''\subseteq T'$, $mb(T'')\in U_{T'}$, such that ${\langle} s(t\restriction 0),s(t\restriction 1),...,s(t\restriction n){\rangle}$ is stabilized for $t\in mb(T'')$.
By the construction of the tree $T''$, for every $t\in mb(T'')$ there is $p_1^{\frown}t\leq^* p_t$ such that $p_t\in D$. By Proposition \ref{amalgamate} we can amalgamate all those $p_t$'s and find a single $p\leq^* p^*$, shrink $T''$ to $T^*$ and find $B^s$ for $s\in T^*\setminus mb(T^*)$ such that for every $t\in mb(T^*)$, $p_t\leq^*p^{*\frown}{\langle} t, \vec{B}^t{\rangle}$. Since $D$ is open then $p^{*\frown}{\langle} t, \vec{B}^t{\rangle}\in D$. $\blacksquare$
\begin{proposition}\label{densetree}
Let $p\in\mathbb{M}[\vec{U}]$ be a condition, $T$ a $\vec{U}$-fat tree of extensions of $p$, and sets $B^{s}\in\cap_{\xi<\xi(s)}U(\kappa(s),\xi)$ for every $s\in T\setminus mb(T)$ such that for every $t\in mb(T)$, $p\leq p^{\smallfrown}{\langle} t,\vec{B}^t{\rangle}\in \mathbb{M}[\vec{U}]$. Then there are $p\leq^*p^*$, a tree $T^*\subseteq T$ of extensions of $p^*$, $mb(T^*)\in U_T$ and sets $A^s\subseteq B^s$, $A^s\in \cap_{\xi<\xi(s)}U(\kappa(s),\xi)$ such that
$$D_{T^*,\vec{A}}:=\{p^{* \frown}{\langle} t,\vec{A}^t{\rangle} \mid t\in mb(T^*)\}$$
is pre-dense above $p^*$. In particular, for any generic $G$ with $p^*\in G$, $G\cap D_{T^*,\vec{A}}\neq\emptyset$. \end{proposition}
\noindent\textit{Proof}. Assume that $T$ is on $\kappa_{j_1}(p)\leq...\leq\kappa_{j_n}(p)$, and again we argue by induction on $ht(T)$. Assume that $ht(T)=1$. Use \ref{BetterSet} to find $A_<\subseteq B^{{\langle}{\rangle}}\cap B_{j_1}(p)$ such that $A_<\in\cap_{\xi<\xi({\langle}{\rangle})} U(\kappa_{j_1}(p),\xi)$ and for every $\alpha\in A_<$, $\alpha\cap A_<\in \cap\vec{U}(\alpha)$. Consider the sets $$A_{\xi({\langle}{\rangle})}={\rm Succ}_T({\langle}{\rangle})\cap B_{j_1}(p)\cap\{\alpha<\kappa_{j_1}(p)\mid A_{<}\cap\alpha\in\cap\vec{U}(\alpha)\}\in U(\kappa_{j_1}(p),\xi({\langle}{\rangle}))$$ $$A_>=B_{j_1}(p)\cap\{\alpha<\kappa_{j_1}(p)\mid A_{\xi({\langle}{\rangle})}\cap\alpha\in (\cap\vec{U}(\alpha))^+\}\in \bigcap_{\xi({\langle}{\rangle})<\xi<o^{\vec{U}}(\kappa_{j_1}(p))}U(\kappa_{j_1}(p),\xi).$$ Let $p\leq^*p^*$ be the condition obtained from $p$ by shrinking $B_{j_1}(p)$ to $$B_{j_1}(p^*):=A_<\cup A_{\xi({\langle}{\rangle})}\cup A_>,$$ let $A^{{\langle}{\rangle}}:=A_<$, and shrink ${\rm Succ}_{T}({\langle}{\rangle})$ to ${\rm Succ}_{T^*}({\langle}{\rangle}):=A_{\xi({\langle}{\rangle})}$. Clearly, $T^*$ is a tree of extensions for $p^*$, as for every $\alpha\in {\rm Succ}_{T^*}({\langle}{\rangle})$, $A_<\cap \alpha\in \cap\vec{U}(\alpha)$ and $A_<\cap\alpha\subseteq B_{j_1}(p^*)\cap\alpha$. To see that $p^*,T^*,A^{{\langle}{\rangle}}$ are as wanted, let $p^*\leq q$. Let $\vec{\alpha}$ be such that $p^{*\smallfrown}\vec{\alpha}\leq^* q$. Without loss of generality, assume that $\vec{\alpha}\in[(\kappa_{j_1-1}(p),\kappa_{j_1}(p))]^n$, and let $X_i$ denote the sets in the pairs ${\langle}\vec{\alpha}(i),X_i{\rangle}$, and $X$ the set in the pair ${\langle}\kappa_{j_1}(p),X{\rangle}$, appearing in $q$.
If $\vec{\alpha}\in[ A_<]^n$, since $X\in \cap\vec{U}(\kappa_{j_1}(p))$, then $$X^*:=X\cap {\rm Succ}_{T^*}({\langle}{\rangle})\cap\{\alpha \mid \alpha\cap X\in \cap\vec{U}(\alpha)\}\in U(\kappa_{j_1}(p),\xi({\langle}{\rangle})).$$ In particular, $X^*$ is unbounded and we can find $\alpha\in X^*\setminus({\rm max}(\vec{\alpha})+1)$. It follows that $p^{*\smallfrown}{\langle}\alpha, A^{{\langle}{\rangle}}\cap\alpha{\rangle}\in D_{T^*,\vec{A}}$. We claim that $q,p^{*\smallfrown}{\langle}\alpha, A^{{\langle}{\rangle}}\cap\alpha{\rangle}\leq q'$, where $$q'=p^{*\smallfrown}{\langle}\vec{\alpha}(1),X_1\cap A_<{\rangle}^{\smallfrown}...^{\smallfrown}{\langle}\vec{\alpha}(n),X_{n}\cap A_<{\rangle}^{\smallfrown}{\langle}\alpha, X\cap A_{<}\cap\alpha{\rangle}.$$
Indeed, for every $\beta\in A_<$, $\beta\cap A_<\in\cap\vec{U}(\beta)$. In particular for every $i$, $\vec{\alpha}(i)\cap A_<\in \cap\vec{U}(\vec{\alpha}(i))$, thus $X_i\cap A_<\in\cap\vec{U}(\vec{\alpha}(i))$. Also by definition of $X^*$, $\alpha\cap X\in \cap\vec{U}(\alpha)$ and by definition of ${\rm Succ}_{T^*}({\langle}{\rangle})$, $A_<\cap\alpha\in\cap \vec{U}(\alpha)$. By \ref{frown extension}, $q\leq q'$ and $p^{*\smallfrown}{\langle}\alpha,A^{{\langle}{\rangle}}\cap\alpha{\rangle}\leq q'$.
If there is $j\leq n$ such that $\vec{\alpha}(j)\notin A_<$, let $r$ be the minimal such $j$. Since $\vec{\alpha}(r)\in B_{j_1}(p)$, there are two cases here, either $\vec{\alpha}(r)\in A_{\xi({\langle}{\rangle})}$ or $\vec{\alpha}(r)\in A_>$. If $\vec{\alpha}(r)\in A_{\xi({\langle}{\rangle})}={\rm Succ}_{T^*}({\langle}{\rangle})$, then $p^{*\smallfrown}{\langle}\vec{\alpha}(r),A^{{\langle}{\rangle}}\cap\alpha{\rangle}\in D_{T^*,\vec{A}}$ and we claim that $p^{*\smallfrown}{\langle}\vec{\alpha}(r),A^{{\langle}{\rangle}}\cap\vec{\alpha}(r){\rangle},q\leq q'$ where $$q'=p^{*\smallfrown}{\langle}\vec{\alpha}(1),X_1\cap A_<{\rangle}^{\smallfrown}...^{\smallfrown}{\langle}\vec{\alpha}(r),A_<\cap X_r{\rangle}^{\smallfrown}{\langle} \vec{\alpha}(r+1),X_{r+1}{\rangle}^{\smallfrown}...^{\smallfrown}{\langle}\vec{\alpha}(n),X_{n}{\rangle}.$$
By minimality of $r$, $\vec{\alpha}(i)\in A_<$ for every $i<r$, and the same argument as before justifies that $X_i\cap A_<\in \cap\vec{U}(\vec{\alpha}(i))$. Since $\vec{\alpha}(r)\in A_{\xi({\langle}{\rangle})}$, by definition we have that $A_<\cap\vec{\alpha}(r)\in\cap\vec{U}(\vec{\alpha}(r))$, hence $X_r\cap A_<\in \cap\vec{U}(\vec{\alpha}(r))$; then again we use \ref{frown extension}. Finally, if $\vec{\alpha}(r)\in A_>$, then $A_{\xi({\langle}{\rangle})}\cap\vec{\alpha}(r)\in (\cap\vec{U}(\vec{\alpha}(r)))^+$. In particular $$X^*:=A_{\xi({\langle}{\rangle})}\cap X_r\cap\{\alpha\mid \alpha\cap X_r\in\cap\vec{U}(\alpha)\}\in (\cap\vec{U}(\vec{\alpha}(r)))^+,$$ hence there is $\alpha\in X_r\cap A_{\xi({\langle}{\rangle})}\setminus (\vec{\alpha}(r-1)+1)$. This time, the witness for the compatibility of $p^{*\smallfrown}{\langle}\alpha, A^{{\langle}{\rangle}}\cap\alpha{\rangle},q$ will be $$q'=p^{*\smallfrown}{\langle}\vec{\alpha}(1),X_1\cap A_<{\rangle}^{\smallfrown}...^{\smallfrown}{\langle}\vec{\alpha}(r-1),A_<\cap X_{r-1}{\rangle}^{\smallfrown}{\langle}\alpha , X_r\cap A_<\cap\alpha{\rangle}^{\smallfrown}{\langle} \vec{\alpha}(r),X_{r}\setminus\alpha{\rangle}^{\smallfrown}...^{\smallfrown}{\langle}\vec{\alpha}(n),X_{n}{\rangle}.$$ This concludes the case $ht(T)=1$. Let $T$ be such that $n=ht(T)>1$. For every $s\in T\setminus mb(T)$, apply the case $ht(T)=1$ to ${\rm Succ}_{T}(s)$ and the condition $p^{\smallfrown}s$ to find $$p^{\smallfrown}s\leq p^*_s,\text{ a set } A^{s}\subseteq B^{s},\text{ and } {\rm Succ}_{T^*}(s)\subseteq {\rm Succ}_T(s), \ {\rm Succ}_{T^*}(s)\in U(\kappa_{j_n}(p),\xi(s))$$ such that $\{p_s^{*\smallfrown}{\langle}\alpha, A^s\cap\alpha{\rangle}\mid \alpha\in{\rm Succ}_{T^*}(s)\}$ is pre-dense above $p^*_s$. 
Apply \ref{amalgamate} to find a condition $p\leq^* p_1$, $T_1\subseteq T\setminus mb(T)$, $mb(T_1)\in U_{T\setminus mb(T)}$ and sets $C^s\in\cap_{\xi<\xi(s)}U(\kappa(s),\xi)$ such that for every $t\in mb(T_1)$, $p^*_t\leq^*p_1^{\smallfrown}{\langle} t,\vec{C}^t{\rangle}$. Now apply the induction hypothesis to $p_1$, $T_1$ and the sets $B^{s}\cap C^{s}$ to find $p_1\leq^* p^*$, $T^*\restriction\{1,...,n-1\}\subseteq T_1$ and sets $A^s$ such that $\{p^{*\smallfrown}{\langle} s,\vec{A}^s{\rangle}\mid s\in mb(T^*\restriction\{1,...,n-1\})\}$ is pre-dense above $p^*$. Let us prove that $\{p^{*\smallfrown} {\langle} t,\vec{A}^t{\rangle}\mid t\in mb(T^*)\}$ is pre-dense above $p^*$. Let $p^*\leq q$; then there is $s\in mb(T^*\restriction\{1,...,n-1\})$ such that $p^{*\smallfrown}{\langle} s,\vec{A}^s{\rangle}$ and $q$ are compatible via some $q'$. Since $A^s\subseteq C^s$, it follows that $$p^{*}_s\leq^* p_1^{\smallfrown}{\langle} s,\vec{C}^s{\rangle}\leq^* p^{*\smallfrown}{\langle} s,\vec{A}^s{\rangle}\leq q'.$$ Therefore, there is $\alpha\in {\rm Succ}_{T^*}(s)$ such that $p_s^{*\smallfrown}{\langle} \alpha,A^s\cap\alpha{\rangle},q'$ are compatible via $q''$. It follows that $p_s^{*\smallfrown}{\langle} \alpha,A^s\cap\alpha{\rangle}\leq q''$ and also $p^{*\smallfrown} {\langle} s,\vec{A}^s{\rangle}\leq q'\leq q''$. So ${\langle} \alpha,A^s\cap\alpha{\rangle}$ can be added to $p^{*\smallfrown}{\langle} s,\vec{A}^s{\rangle}$, and $p^{*\smallfrown}{\langle} s^{\smallfrown}\alpha,\vec{A}^{s^{\smallfrown}\alpha}{\rangle}=p^{*\smallfrown}{\langle} s,\vec{A}^s{\rangle}^{\smallfrown}{\langle}\alpha, A^s\cap\alpha{\rangle}\leq q''$. We conclude that $q''$ is a witness for the compatibility of $q$ and $p^{*\smallfrown}{\langle} s^{\smallfrown}\alpha,\vec{A}^{s^{\smallfrown}\alpha}{\rangle}$.$\blacksquare$
We will often have two conditions $p\leq^* p^*$ and a tree of extensions $T$ of $p$ as in \ref{densetree}, so there are sets $B^t$ such that $D_{T,\vec{B}}$ is pre-dense above $p$. We would like to remove some of the branches of $T$ to get a tree of extensions of $p^*$, $T^*\subseteq T$, such that $D_{T^*,\vec{B}}$ is pre-dense above $p^*$. $T^*$ can simply be defined by: $$mb(T^*)=\{t\in mb(T)\mid p^{*\smallfrown}{\langle} t,\vec{B}^t{\rangle}\in \mathbb{M}[\vec{U}]\}.$$ It is not hard to check that $T^*$ is a $\vec{U}$-fat tree and $mb(T^*)\in U_T$. To see that $D_{T^*,\vec{B}}$ is pre-dense above $p^*$, let $p^*\leq q$; then there is $t\in mb(T)$ such that $p^{\smallfrown}{\langle} t,\vec{B}^t{\rangle},q$ are compatible via a condition $q''$. Since $t$ appears in $q''$ and $p^*\leq q\leq q''$, it follows by \ref{frown extension} that $t\in mb(T^*)$ and $p^{*\smallfrown}{\langle} t,\vec{B}^t{\rangle},q\leq q''$. \begin{corollary}\label{Rad-prop2}
Let $p\in\mathbb{M}[\vec{U}]$ and let $\langle\lambda,B\rangle$ be in the stem of $p$. Consider the decomposition $p=\langle q,r\rangle$, where $q\in\mathbb{M}[\vec{U}]\restriction\lambda$, $r\in \mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$, and $\kappa$ is the maximal measurable in $\vec{U}$. Let $\lusim{x}$ be an $\mathbb{M}[\vec{U}]$-name for an ordinal. Then there is $r\leq^*r^*\in \mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ such that for any $q\leq q'\in\mathbb{M}[\vec{U}]\restriction \lambda$, if there exists $r^*\leq r'\in \mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ such that
$$\langle q',r'\rangle \ || \lusim{x}$$ then there is a tree of extensions of $r^*$, $T_{q'}$, and sets $B^{t,q'}$ such that $D_{T_{q'},\vec{B}^{q'}}$ is pre-dense above $r^*$ and
$$\forall t\in mb(T_{q'}). \ {\langle} q',r^{*\frown}{\langle} t,\vec{B}^{t,q'}{\rangle}{\rangle} \ || \lusim{x}.$$ \end{corollary}
\noindent\textit{Proof}. For every $q\in \mathbb{M}[\vec{U}]\restriction\lambda$, let
$$D_q=\Big\{p'\in\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]\ \big| \ (\langle q,p'\rangle ||\lusim{x})\vee (\forall p''\geq p'. {\langle} q,p''{\rangle} \text{ does not decide }\lusim{x})\Big\}.$$ Clearly, $D_q\subseteq \mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ is dense open, hence by the strong Prikry property, there is $r\leq^* r_q$, a tree of extensions $T'_q$ and sets $A^{s,q}$ for $s\in T'_q\setminus mb(T'_q)$ such that for every $t\in mb(T'_q)$, $r_q^{\smallfrown}{\langle} t,\vec{A}^{t,q}{\rangle}\in D_q$. For each $t\in mb(T'_q)$ one of the following holds: \begin{enumerate}
\item $ {\langle} q,r_q^{\smallfrown}{\langle} t,\vec{A}^{t,q}{\rangle}{\rangle} ||\lusim{x}$.
\item $\forall p''\geq r_q^{\smallfrown}{\langle} t,\vec{A}^{t,q}{\rangle}. {\langle} q,p''{\rangle} \text{ does not decide }\lusim{x}$. \end{enumerate}
Denote by $i_{t}\in\{1,2\}$ the case which holds. This defines a function $g:mb(T'_q)\rightarrow \{1,2\}$. Apply \ref{StabTree} to shrink $T'_q$ to $T''_q$ and find $i^*\in\{1,2\}$ such that for every $t\in mb(T''_q)$, $i_{t}=i^*$. Finally, apply \ref{densetree}: extend $r_q$ to $r^{*}_q$, shrink $T''_q$ to $T^*_q$, and find sets $B^{s,q}\subseteq A^{s,q}$ so that $$D_{T^*_q,\vec{B}^q}=\{r^{*\smallfrown}_q{\langle} s,\vec{B}^{s,q}{\rangle}\mid s\in mb(T^*_q)\}$$ is pre-dense above $r^*_q$. There is sufficient $\leq^*$-closure in $\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ to find a single $r^*$ such that $r_q\leq^* r^*$ for every $q\in\mathbb{M}[\vec{U}]\restriction\lambda$. Let us prove that $r^*$ is as wanted. We can shrink the trees $T^*_q$ to $T_q$ as in the discussion before \ref{Rad-prop2}, so that they are extension trees of $r^*$ and $D_{T_q,\vec{B}^q}$ is pre-dense above $r^*$. To see that $r^*,T_q,B^{s,q}$ are as wanted, let $q'\geq q$ and assume that there is $r'\geq r^*$ such that ${\langle} q',r'{\rangle} ||\lusim{x}$. Since the set $\{r^{*\smallfrown}{\langle} t,\vec{B}^{t,q'}{\rangle}\mid t\in mb(T_{q'})\}$ is pre-dense above $r^*$, there is $t\in mb(T_{q'})$ such that $r^{*\smallfrown}{\langle} t,\vec{B}^{t,q'}{\rangle},r'$ are compatible. In particular, there is $r''\geq r^{*\smallfrown}{\langle} t,\vec{B}^{t,q'}{\rangle}$ such that ${\langle} q',r''{\rangle} ||\lusim{x}$, indicating that $i^*=i_{t}=1$. Hence for every $s\in mb(T_{q'})$, $i_{s}=1$, thus
$${\langle} q',r^{*\smallfrown}{\langle} s,\vec{B}^{s,q'}{\rangle}{\rangle} ||\lusim{x}.$$ $\blacksquare$
The next lemma is the first step toward Theorem \ref{MainResaultParttwo}. Recall the inductive hypothesis $(IH)$: for every $\mu<\kappa$, every coherent sequence $\vec{W}$ with maximal element $\mu$, every $V$-generic $G_\mu\subseteq \mathbb{M}[\vec{W}]$ and every set of ordinals $X\in V[G_\mu]$, there is $C'\subseteq C_{G_\mu}$ such that $V[X]=V[C']$. \begin{lemma}\label{very short}
Let $G\subseteq\mathbb{M}[\vec{U}]$ be a $V$-generic filter and assume $(IH)$. Let $A\in V[G]$ be a set of ordinals such that $|A|<\kappa$, where $\kappa$ is the maximal measurable in $\vec{U}$. Then there exists $C'\subseteq C_G$, $|C'|\leq |A|$, such that $V[A]=V[C']$. \end{lemma}
\noindent\textit{Proof}. Let $\langle a_i \mid i<\lambda\rangle$, where $\lambda=|A|<\kappa$, be an enumeration of $A$. In $V$, pick a sequence of $\mathbb{M}[\vec{U}]$-names for $A$, $\langle \lusim{a}_i\mid i<\lambda\rangle$. We proceed by a density argument: let $p\in\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ be any condition. Using Lemma \ref{Rad-prop2}, find a $\leq^*$-increasing sequence $\langle p_i \mid i<\lambda\rangle$ above $p$ and maximal antichains $Z_i\subseteq\mathbb{M}[\vec{U}]\restriction \lambda$ such that for every $q\in Z_i$ there are a $\vec{U}$-fat tree $T_{q,i}$ and sets $B^{s,q}_i$ such that any extension of $p_i$ from $mb(T_{q,i})$ together with $q$ and the sets $B^{s,q}_i$ decides $\lusim{a}_i$, and the set $$D_{T_{q,i},\vec{B}^q_i}:=\{p_i^{\smallfrown}{\langle} t,\vec{B}^{t,q}_i{\rangle}\mid t\in mb(T_{q,i})\}$$ is pre-dense above $p_i$. The forcing $\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ has sufficient $\leq^*$-closure to find $p'$ such that for every $i<\lambda, \ p_i\leq^* p'$. Define the function $F_{q,i}:mb(T_{q,i})\rightarrow On$ by: $$F_{q,i}(t)=\gamma \ \ \ \Leftrightarrow \ \ \ {\langle} q,p^{'\smallfrown}{\langle} t, \vec{B}^{t,q}_i{\rangle}{\rangle}\Vdash \lusim{a}_i=\check{\gamma}.$$ By Lemma \ref{important coordinates}, we can find $T'_{q,i}\subseteq T_{q,i}$, $mb(T'_{q,i})\in U_{T_{q,i}}$, such that $I_{q,i}:=I(T'_{q,i},F_{q,i}\restriction mb(T'_{q,i}))$ is complete and consistent. For any $q,q'\in Z_i$, apply Lemma \ref{function separation} to the functions $F_{q,i},F_{q',i}$ and shrink $T'_{q,i},T'_{q',i}$ to $T^{q,q'}_{q,i},T^{q,q'}_{q',i}$, $mb(T^{q,q'}_{q,i})\in U_{T_{q,i}},mb(T^{q,q'}_{q',i})\in U_{T_{q',i}}$, so that either \begin{enumerate}
\item $mb(T^{q,q'}_{q,i})\restriction I_{q,i}=mb(T^{q,q'}_{q',i})\restriction I_{q',i}$ and $(F_{q,i}\restriction mb(T^{q,q'}_{q,i}))_{I_{q,i}}=(F_{q',i}\restriction mb(T^{q,q'}_{q',i}))_{I_{q',i}}$.
\item $Im(F_{q,i}\restriction mb(T^{q,q'}_{q,i}))\cap Im(F_{q',i}\restriction mb(T^{q,q'}_{q',i}))=\emptyset$.
\end{enumerate} The ultrafilter $U_{T_{q,i}}$ is sufficiently closed to ensure that $X^*_q=\cap_{q'\in Z_i}mb(T^{q,q'}_{q,i})\in U_{T_{q,i}}$, and by \ref{Los for trees} there is a $\vec{U}$-fat tree $T'_{q,i}\subseteq T_{q,i}$ such that $mb(T'_{q,i})\subseteq X^*_q$ and $mb(T'_{q,i})\in U_{T_{q,i}}$. By \ref{densetree}, there are $p'\leq^*p^*_q$, $T^*_{q,i}$ and $A^{s,q}_i\subseteq B^{s,q}_i$ such that $D_{T^*_{q,i},\vec{A}^{q}_i}$ is pre-dense above $p^*_q$. Since $|\mathbb{M}[\vec{U}]\restriction\lambda|$ is small enough, there is a single $p^*\in \mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ such that $p^*_{q}\leq^* p^*$ for every $q\in \mathbb{M}[\vec{U}]\restriction\lambda$. Restrict the trees to this condition $p^*$ as in the discussion before \ref{Rad-prop2}, so that the sets $D_{T^*_{q,i},\vec{A}^q_i}$ are pre-dense above $p^*$. We abuse notation here by keeping the same notation after the restriction.
Denote $G=G_<\times G_>$
so that $G_<\subseteq \mathbb{M}[\vec{U}]\restriction\lambda$ is $V$-generic and $G_>\subseteq\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ is $V[G_<]$-generic. By density, find $p^*\in G_>$ as above. For every $i<\lambda$, since $Z_i$ is a maximal antichain, there is $q_i$ such that $G_<\cap Z_i=\{q_i\}$. Since $D_{T^*_{q_i,i},\vec{A}^{q_i}_i}$ is pre-dense above $p^*$, find $t_i\in mb(T^*_{q_i,i})$ such that $p^{*\frown}{\langle} t_i,\vec{A}^{t_i,q_i}_i{\rangle}\in G_>$, define $C_i=t_i\restriction I_{q_i,i}$, and let $C'=\underset{i<\lambda}{\bigcup}C_i\subseteq C_{G_>}$. Clearly $|C'|\leq \lambda=|A|$. Let us prove that $\langle C_i\mid i<\lambda\rangle\in V[A]$. Indeed, define in $V[A]$ the sets $$M_i=\{q\in Z_i\mid a_i\in Im(F_{q,i})\}.$$ Then, for any $q,q'\in M_i$, $a_i\in Im(F_{q,i})\cap Im(F_{q',i})\neq\emptyset$. Hence $(1)$ must hold for $F_{q,i},F_{q',i}$, i.e.
$$mb(T^*_{q,i})\restriction I_{q,i}=mb(T^*_{q',i})\restriction I_{q',i}\wedge(F_{q,i}\restriction mb(T^*_{q,i}))_{I_{q,i}}=(F_{q',i}\restriction mb(T^*_{q',i}))_{I_{q',i}}.$$ This means that no matter how we pick $q_i'\in M_i$, we will end up with the same function $(F_{q'_i,i}\restriction mb(T^*_{q'_i,i}))_{I_{q_i',i}}$ and the same important values $mb(T^*_{q'_i,i})\restriction I_{q'_i,i}$. In $V[A]$, choose any $q'_i\in M_i$, let $D_i'\in F_{q_i',i}^{-1''}\{a_i\}\cap mb(T^*_{q_i',i})$ and $C_i'=D'_i\restriction I_{q_i',i}$. Since $q_i,q'_i\in M_i$, we have $ C_i=C_i'$, hence $\langle C_i\mid i<\lambda\rangle\in V[A]$. In order to reconstruct $A$ from the union $C'$, we still have to code some information from $G_<$, namely, $\{q'_i\mid i<\lambda\},\langle Ind(C_i,C')\mid i<\lambda\rangle\in V[A]$. These sets can be coded as a subset of ordinals below $(2^{\lambda})^+$; by \ref{genericproperties}(6), $$\{q'_i\mid i<\lambda\},\langle Ind(C_i,C')\mid i<\lambda\rangle\in V[G_<].$$ By the induction hypothesis applied to $G_<$, we can find $C''\subseteq C_{G_<}$ such that $$V[{\langle} q'_i\mid i<\lambda{\rangle},\langle Ind(C_i,C')\mid i<\lambda\rangle]=V[C''].$$
Also $|C''|\leq |C_{G_<}|\leq\lambda$, hence $C:=C'\uplus C''$ is of cardinality at most $\lambda$. Note that $C',C''\in V[C]$, as $C''=C\cap\lambda, \ C'=C\setminus\lambda$. Finally, all the information about the functions $F_{q,i}$ needed to restore $A$ is coded in $C',C''$. Namely, $A=\{(F_{q'_i,i})_{I_{q' _i,i}}(C'\restriction Ind(C_i,C'))\mid i<\lambda\}$. Hence $V[A]=V[C]$. $\blacksquare$ \begin{corollary}\label{specialTree}
Suppose that $p\in\mathbb{M}[\vec{U}]$ and $\lusim{x}$ is a name such that $p\Vdash \lusim{x}\in \lusim{C}_G$. Then there is $p^*\geq^* p$ such that either $p^* || \lusim{x}$, or there is a $\vec{U}$-fat tree $T$ and sets $A^{s}$ such that for every $t\in mb(T)$, $p^{*\frown}{\langle} t,\vec{A}^{t}{\rangle}\Vdash \lusim{x}={\rm max}(t)$. Moreover, in the latter case, let $i\leq l(p)+1$ be such that $mb(T)$ splits on $\kappa_i(p)$ and assume that $o^{\vec{U}}(\kappa_i(p))<\kappa_i(p)^+$; then for every $t\in {\rm Lev}_{ht(T)-1}(T)$, $$p^{*\frown}\langle t,\vec{A}^{t}\rangle || o^{(\kappa_i(p))}(\lusim{x}).$$ In other words, there is $\gamma<o^{\vec{U}}(\kappa_i(p))$ such that $$p^{*\frown}\langle t,\vec{A}^{t}\rangle\Vdash\lusim{x}\in X^{(\kappa_i(p))}_\gamma.$$ \end{corollary}
\noindent\textit{Proof}. Assume that there is no $p^*\geq^* p$ which decides $\lusim{x}$. By \ref{Rad-prop2}
find $T$ with minimal $ht(T)$ such that there are $p^*\geq p$ and sets $B^{s}$ such that for every $t\in mb(T)$, $p^{*\frown}{\langle} t,\vec{B}^{t}{\rangle} || \lusim{x}$. Let $\kappa(p)=\{\nu_1,...,\nu_n\}$ be the ordinals appearing in $p$, denote by $x_t$ the forced value, and shrink $T$ so that the function $$f(t)=\begin{cases} i & x_t=\nu_i\\ n+1 & x_t\notin\{\nu_1,...,\nu_n\} \end{cases}$$ is constant. If $f$ were constantly some $i\leq n$, then by Proposition \ref{densetree} there would be $p\leq^* p'$, $T'\subseteq T$ and sets $A^s\subseteq B^s$ such that $\{p^{'\smallfrown}{\langle} t,\vec{A}^t{\rangle}\mid t\in mb(T')\}$ is pre-dense above $p'$; it would follow that $p'\Vdash \lusim{x}=\nu_i$, a contradiction. So we may assume that $x_t\notin \{\nu_1,...,\nu_n\}$. Keep shrinking $T$ so that there is a unique $i\leq ht(T)$ such that $x_t\in [t(i),t(i+1))$ (where $t(ht(T)+1)=\kappa$). If $i<ht(T)$ then for every $t\in {\rm Lev}_i(T)$, the function $g_t:mb(T/t)\rightarrow \kappa$, defined by $g_t(s)=x_{t^{\smallfrown}s}$, is regressive and therefore by \ref{StabTree} can be stabilized on some $S_t\subseteq T/t$, $mb(S_t)\in U_{T/t}$, so that for every $s\in mb(S_t)$, $x_{t^{\smallfrown}s}=y_t$, depending only on $t$. As in the case where $f$ was constant, for every $t\in {\rm Lev}_i(T)$ we can find $p^{*\smallfrown}t\leq^*p_t$ such that $p_t\Vdash \lusim{x}=y_t$. By \ref{amalgamate}, there are $T^*\subseteq T\restriction\{1,...,i\}$, $p^{*}\leq^*p^{**}$ and sets $Z^s\subseteq A^s$ such that for every $t\in {\rm Lev}_i(T^*)$, $p _t\leq^*p^{**\smallfrown}{\langle} t,\vec{Z}^t{\rangle}$; this contradicts the minimality of $ht(T)$. Hence it must be that for every $t\in mb(T)$, $x_t\geq t(ht(T))={\rm max}(t)$. 
It is impossible that $x_t>{\rm max}(t)$: otherwise, $$x_t\notin \{\nu_1,...,\nu_n\}\cup t$$ and we can remove from the large sets of the condition $p^{*\smallfrown}{\langle} t,\vec{A}^t{\rangle}$ the single ordinal $x_t$ and obtain a condition $q$ such that $q\Vdash \lusim{x}=x_t\notin C_{\lusim{G}}$; but $p\leq q$, so $q\Vdash \lusim{x}\in C_{\lusim{G}}$, a contradiction. We conclude that $\forall t\in mb(T).x_t={\rm max}(t)$, as desired.
For the second part, assume that for $i\leq l(p)+1$, $mb(T)$ splits on $\kappa_i(p)$ and that $o^{\vec{U}}(\kappa_i(p))<\kappa_i(p)^+$. It follows that the measures in $\vec{U}(\kappa_i(p))$ are separated by the sets $X^{(\kappa_i(p))}_{\gamma}$. For every $t\in {\rm Lev}_{ht(T)-1}(T)$, shrink ${\rm Succ}_T(t)\in U(\kappa_i(p),\xi(t))$ to ${\rm Succ}_{T^*}(t)={\rm Succ}_{T}(t)\cap X^{(\kappa_i(p))}_{\xi(t)}$. It follows that for every $t\in {\rm Lev}_{ht(T)-1}(T)$ and every $\beta\in {\rm Succ}_{T^*}(t)$, $\beta\in X^{(\kappa_i(p))}_{\xi(t)}$. Since $p^{*\frown}{\langle} t,\vec{A}^t{\rangle} \Vdash \lusim{x}\in {\rm Succ}_{T^*}(t)$, we conclude that $p^{*\frown}{\langle} t,\vec{A}^t{\rangle} \Vdash o^{(\kappa_i(p))}(\lusim{x})=\xi(t)$. $\blacksquare$
The following lemma is analogous to a lemma proven in \cite{TomTreePrikry} for Prikry forcing. \begin{lemma}\label{MagidorHousdorf}
Let $G\subseteq\mathbb{M}[\vec{U}]$ be $V$-generic and let $\delta\leq \kappa$ be a limit point of $C_G$. Then for every set of ordinals $D\in V[C_G]$ such that $$|D|<\delta\wedge C_G\cap D=\emptyset$$ there is $X\in\bigcap\vec{U}(\delta)$ such that $X\cap D=\emptyset$. \end{lemma}
\noindent\textit{Proof}. Let $\lambda:=|D|$, note that $D\in V[C_G\cap\delta]$, and since $C_G\cap\delta$ is $V$-generic for $\mathbb{M}[\vec{U}]\restriction\delta$, we can assume without loss of generality that $\delta=\kappa$. We start with a single $\mathbb{M}[\vec{U}]$-name of an ordinal $\lusim{x}$ and $p\in G$ such that $p\Vdash \lusim{x}\notin \lusim{C}_G$. Assume that $p=\langle q_0,r{\rangle}$ is a decomposition of $p$ such that ${\rm max}(\kappa(q_0))\geq\lambda$. Then by \ref{Rad-prop2} there is $r\leq^*r^*$ and a maximal antichain $Z\subseteq\mathbb{M}[\vec{U}]\restriction{\rm max}(q_0)$ above $q_0$, such that for every $q\in Z$ there is a tree $T_q$ and sets $A^{s,q}$ for which the set $\{r^{*\smallfrown}{\langle} t,\vec{A}^{t,q}{\rangle}\mid t\in mb(T_{q})\}$ is pre-dense above $r^*$ and for every $t\in mb(T_q)$, $${\langle} q,r^{*\smallfrown}{\langle} t,\vec{A}^{t,q}{\rangle}{\rangle} \Vdash \lusim{x}=f_q(t).$$ Since $p\Vdash \lusim{x}\notin C_{\lusim{G}}$, for every $\vec{b}\in mb(T_q)$, $f_q(\vec{b})\notin\vec{b}$, hence it falls in one of the intervals $$(0,\vec{b}(1)),(\vec{b}(1),\vec{b}(2)),...,(\vec{b}(ht(T_q)),\kappa);$$ let $n_{\vec{b}}$ be the index of this interval. Apply \ref{StabTree} to find a tree $T'_q\subseteq T_q$, $mb(T'_q)\in U_{T_q}$, on which the value $n_{\vec{b}}$ is constantly $n^*_q$. Since for every $t\in {\rm Lev}_{n^*_q}(T_q)$, the function $s\mapsto f_q(t^{\smallfrown} s)$, defined on $mb((T'_q)/t)$, is regressive, apply \ref{StabTree} to obtain a tree $(T^*_q)_t\subseteq (T'_q)/t$ on which the value is constant. Let $T^*_q\restriction \{1,...,n^*_q\}=T'_q\restriction \{1,...,n^*_q\}$ and for every $t\in {\rm Lev}_{n^*_q}(T^*_q)$, let $(T^*_q)/t=(T^*_q)_t$ be as above. Then on $T^*_q$, $f_q(t)$ depends only on $t\restriction\{1,...,n^*_q\}$ and $f_q(t\restriction\{1,...,n^*_q\})>t(n^*_q)$. 
Rename $r^*$ as $r^*_0$; for each $q\in Z$, extend $r^*_0\leq^*r^*_q$, shrink $S_q\subseteq T^*_q$ to a tree of extensions of $r^*_q$ and find $B^{q,s}\subseteq A^{q,s}$ such that for every $s\in {\rm Lev}_{n^*_q}(S_q)$, $D_{S_q,\vec{B}^{q}}$ is pre-dense above $r^*_q$ and ${\langle} q,r_q^{*\smallfrown}{\langle} s,\vec{B}^{q,s}{\rangle}{\rangle}\Vdash {\rm max}(s)<\lusim{x}=f_q(s)$. Finally, find a single $r^*$ such that $r^*_q\leq^*r^*$ for every $q\in Z$, shrink the trees and sets to this condition, and denote $S_q\restriction \{1,...,n^*_q\}=S^*_q$. Apply \ref{BetterSet} and let $A_{\lusim{x}}=\{\alpha\in B_{l(r)+1}(r^*)\mid \alpha\cap B_{l(r)+1}(r^*)\in\cap\vec{U}(\alpha)\}\in \cap\vec{U}(\kappa)$. It must be that for every $q\in Z$ and every $s\in mb(S^*_q)$, $f_q(s)\notin A_{\lusim{x}}\setminus {\rm max}(s)$; otherwise, add the ordinal $f_q(s)$ and obtain the condition $$\langle q, r^{*\smallfrown}\langle s,\vec{B}^{q,s}_<{\rangle} ^{\smallfrown}{\langle} f_q(s){\rangle}{\rangle}\Vdash\lusim{x}=f_q(s)\in C_{\lusim{G}},$$ a contradiction. Since $f_q(s)>{\rm max}(s)$, we conclude that $f_q(s)\notin A_{\lusim{x}}$. We claim that $$p\leq^*\langle q_0,r^*\rangle\Vdash \lusim{x}\notin A_{\lusim{x}}.$$ Otherwise, there are $q\in Z$, $s\in mb(S^*_q)$ and $p'$ such that $$\langle q,r^{*\frown}{\langle} s, \vec{B}^{q,s}{\rangle}{\rangle}\leq p'\Vdash \lusim{x}\in A_{\lusim{x}},$$ but also $p'\Vdash\lusim{x}=f_q(s)$, so $f_q(s)\in A_{\lusim{x}}$, which is a contradiction. Now the lemma follows easily: let $\{d_i\mid i<\lambda< \kappa\}\in V[C_G]$ be some set of ordinals such that $$C_G\cap\{d_i\mid i<\lambda\}=\emptyset$$ then we can take names $\{\lusim{d}_i\mid i<\lambda\}$ and some $p={\langle} q_0,r_0{\rangle}$ forcing $\forall i<\lambda. 
\lusim{d}_i\notin \lusim{C}_G$; as before, we can define the sets $A_{\lusim{d}_i}\in\cap\vec{U}(\kappa)$ and for $i<\lambda$ find a $\leq^*$-increasing sequence $\langle q_0, r_i\rangle$, find $p^*$ which bounds all of them and $A^*=\underset{i<\lambda}{\bigcap}A_{\lusim{d}_i}\in \cap\vec{U}(\kappa)$; then $p^*$ forces that $\forall i<\lambda$, $\lusim{d}_i\notin A^*$. By a density argument we can find such a $p^*$ in $G$.$\blacksquare$ \section{The proof for subsets of $\kappa$}
Let $A\in V[G]$; we do not assume that $A\subseteq\kappa$, since some of the results will be applied to other types of sets. Define $$\kappa^*:={\rm max}\{\alpha\in Lim(C_G)\mid o^{\vec{U}}(\alpha)\geq\alpha^+\}.$$ If $o^{\vec{U}}(\kappa)<\kappa^+$, then $\{\beta<\kappa\mid o^{\vec{U}}(\beta)<\beta^+\}\in\cap\vec{U}(\kappa)$; it follows that $\kappa^*<\kappa$ is well defined. Moreover, for every $\alpha\in C_G\setminus(\kappa^*+1)$, $o^{\vec{U}}(\alpha)<\alpha^+$ and thus $o^{(\alpha)}$ is defined.
\begin{definition}
Let $A\in V[G]$ be any set of ordinals. In $V[A]$, consider the crucial set $$X_A=\{\nu\mid \nu \text{ is }V-\text{regular and } \nu> cf^{V[A]}(\nu)\}$$ Denote $\overline{X}_A=X_A\cup Lim(X_A)\subseteq \kappa\cup\{\kappa\}$. \end{definition} \begin{proposition}\label{propertiesofXA} \begin{enumerate}
\item $\overline{X}_A\subseteq Lim(C_G)$.
\item $X_A\in V[A]$.
\item If $o^{\vec{U}}(\kappa)<\kappa^+$, then $X_A\setminus(\kappa^*+1)$ is closed, i.e.\ for every $\kappa^*<\alpha\leq\kappa$, if ${\rm sup}(X_A\cap\alpha)=\alpha$ then $\alpha\in X_A$.
\item If $C\subseteq^* C_G$ and $C\in V[A]$, then $Lim(C)\subseteq \overline{X}_A$. \end{enumerate} \end{proposition}
\noindent\textit{Proof}. For $(1)$: for every $\alpha\in X_A$, $cf^{V[G]}(\alpha)\leq cf^{V[A]}(\alpha)<\alpha$ and $\alpha$ is $V$-regular; it follows by \ref{genericproperties}(7) that $X_A\subseteq Lim(C_G)$, and since $Lim(C_G)$ is closed, $\overline{X}_A\subseteq Lim(C_G)$.
$(2)$ is trivial as the definition of $X_A$ occurs in $V[A]$. As for $(3)$, we argue by induction on $\alpha\in{\rm Lim}(X_A\setminus\kappa^*)$. Suppose $\alpha={\rm sup}(X_A\cap\alpha)$; then by induction, $X_A\cap(\kappa^*,\alpha)$ is a club at $\alpha$ and by $(1)$, $\alpha\in Lim(C_G)\setminus\kappa^*$. Define in $V[A]$, $$o_A(\alpha)=\limsup_{\gamma\in X_A\cap \alpha}o^{(\alpha)}(\gamma)+1.$$ By definition of $o^{(\alpha)}$, $o_A(\alpha)\leq o^{\vec{U}}(\alpha)<\alpha^+$, hence $cf^{V}(o_A(\alpha))\leq\alpha$. By the definition of $\limsup$, $o_A(\alpha)$ satisfies two properties: \begin{enumerate}
\item For every $\nu<\alpha$ and every $j<o_A(\alpha)$ there is $j\leq j'<o_A(\alpha)$ such that $X_A\cap X^{(\alpha)}_{j'}\cap(\nu,\alpha)\neq\emptyset$.
\item There is some $\xi_\alpha<\alpha$ such that for every $\nu\in X_A\cap(\xi_\alpha,\alpha)$, $o^{(\alpha)}(\nu)<o_A(\alpha)$. \end{enumerate} We split into cases:
If $o_A(\alpha)=\beta+1$, then by property $(1)$, ${\rm sup} (X_A\cap X^{(\alpha)}_\beta\cap(\xi_\alpha,\alpha))=\alpha$. Let us argue that ${\rm otp}( X_A\cap X^{(\alpha)}_\beta\cap(\xi_\alpha,\alpha))=\omega$; this is enough to conclude that $ cf^{V[A]}(\alpha)=\omega$, hence $\alpha\in X_A$. In the interval $(\xi_\alpha,\alpha)$ it is impossible to have a limit point $\zeta$ of $X^{(\alpha)}_\beta\cap X_A$: otherwise, by induction $\zeta\in X_A$ and, by \ref{increasing order at limits}, $o^{(\alpha)}(\zeta)\geq\beta+1$, contradicting property $(2)$.
If $\lambda:=cf^V(o_A(\alpha))<\alpha$, let ${\langle} \lambda_i\mid i<\lambda{\rangle}\in V$ be increasing and cofinal in $o_A(\alpha)$. Define inductively ${\langle} x_i\mid i<\lambda{\rangle}$; first, $x_0={\rm min} (X_A\cap(\xi_\alpha,\alpha))<\alpha$. At a successor step $i+1$, $x_i\in X_A\cap(\xi_\alpha,\alpha)$ is defined, and by property $(1)$ there are $$\lambda_{i+1}\leq j'<o_A(\alpha)\text{ and }x_{i+1}\in X_A\cap(x_i,\alpha)\cap X^{(\alpha)}_{j'}.$$ At a limit step $\delta<\lambda$, if ${\rm sup}_{i<\delta}x_i$ is unbounded in $\alpha$, then clearly $\alpha$ changes cofinality in $V[A]$. Otherwise, let $y_\delta={\rm sup}_{i<\delta}x_i<\alpha$; there is some $x_{\delta}\in X_A\cap X^{(\alpha)}_{\lambda_\delta}\cap(y_{\delta},\alpha)$. Assume that ${\langle} x_i\mid i<\lambda{\rangle}$ is defined. If $x^*={\rm sup}_{i<\lambda}x_i<\alpha$, then by the induction hypothesis $x^*\in X_A\cap(\xi_\alpha,\alpha)$ and $$o^{(\alpha)}(x^*)\geq \limsup_{i<\lambda} o^{(\alpha)}(x_i)+1=\limsup_{i<\lambda}\lambda_i+1=o_A(\alpha),$$ contradicting property $(2)$.
Finally, if $cf^{V}(o_A(\alpha))=\alpha$, we take ${\langle} \alpha_i\mid i<\alpha{\rangle}\in V$, a continuous cofinal sequence in $o_A(\alpha)$ which witnesses this. Let $Z:=\{\beta<\alpha\mid o^{(\alpha)}(\beta)<\alpha_{\beta}\}$; let us argue that $Z\in\cap_{i<o_A(\alpha)}U(\alpha,i)$. Let $i<o_A(\alpha)$ and denote $$j_{U(\alpha,i)}({\langle} \alpha_\xi\mid \xi<\alpha{\rangle})={\langle} \alpha'_\xi\mid \xi<j_{U(\alpha,i)}(\alpha){\rangle}, \ \ j_{U(\alpha,i)}({\langle} X^{(\alpha)}_\xi\mid \xi<o^{\vec{U}}(\alpha){\rangle})={\langle} X'_\xi\mid \xi< j_{U(\alpha,i)}(o^{\vec{U}}(\alpha)){\rangle}.$$ Since $X^{(\alpha)}_i\in U(\alpha,i)$ it follows that $\alpha\in j_{U(\alpha,i)}(X^{(\alpha)}_i)=X'_{j_{U(\alpha,i)}(i)}$, which by definition implies that $$(\star) \ \ \ o^{(j_{U(\alpha,i)}(\alpha))}(\alpha)=j_{U(\alpha,i)}(i).$$ Also, since $i< o_A(\alpha)$, we have $$j_{U(\alpha,i)}(i)<\cup j_{U(\alpha,i)}''[o_A(\alpha)]=\cup_{\xi<\alpha}j_{U(\alpha,i)}(\alpha_\xi)=\cup_{\xi<\alpha}\alpha'_\xi.$$ By elementarity, the sequence ${\langle}\alpha'_\xi\mid \xi<j_{U(\alpha,i)}(\alpha){\rangle}$ is also continuous, hence $$(\star\star) \ \ j_{U(\alpha,i)}(i)<\cup_{z<\alpha}\alpha'_z=\alpha'_\alpha.$$ We conclude from $(\star),(\star\star)$ that $$o^{(j_{U(\alpha,i)}(\alpha))}(\alpha)=j_{U(\alpha,i)}(i)<\alpha'_\alpha.$$ Hence $\alpha\in j_{U(\alpha,i)}(Z)$, so $Z\in U(\alpha,i)$, as wanted.
Consider the set $Z_*:=Z\uplus(\cup_{o_A(\alpha)\leq j<o^{\vec{U}}(\alpha)}X^{(\alpha)}_j)$. Then $Z_*\in\cap\vec{U}(\alpha)$ and by \ref{genericproperties}(3), there is $\eta<\alpha$ such that $C_G\cap (\eta,\alpha)\subseteq Z_*$. In particular $X_A\cap(\eta,\alpha)\subseteq Z_*$. By property $(2)$, if $\rho\in X_A\cap({\rm max}\{\eta,\xi_\alpha\},\alpha)$, then $o^{(\alpha)}(\rho)<o_A(\alpha)$, hence $\rho\in Z$, and so $X_A\cap ({\rm max}\{\eta,\xi_\alpha\},\alpha)\subseteq Z$. By definition of $Z$, for every $\rho\in ({\rm max}\{\eta,\xi_\alpha\},\alpha)\cap X_A$, $o^{(\alpha)}(\rho)<\alpha_\rho$. Now, to see that $cf^{V[A]}(\alpha)=\omega$, define $x_0={\rm min}(X_A\cap({\rm max}\{\eta,\xi_\alpha\},\alpha))$ and recursively assume that $x_n<\alpha$ is defined. Then by property $(1)$, there are $x_{n}'\geq x_n$ and some $x_{n+1}\in X_A\cap X^{(\alpha)}_{\alpha_{x_n'}}\cap(x_n,\alpha)$. To see that ${\langle} x_n\mid n<\omega{\rangle}$ is unbounded in $\alpha$, assume otherwise; then $x^*={\rm sup}_{n<\omega}x_n<\alpha$ and by induction $x^*\in X_A\cap({\rm max}\{\eta,\xi_\alpha\},\alpha)$, hence $x^*\in Z$. By Proposition \ref{increasing order at limits}, $$o^{(\alpha)}(x^*)\geq \limsup_{n<\omega} o^{(\alpha)}(x_n)=\limsup_{n<\omega} \alpha_{x'_n}\geq \alpha_{x^*},$$ contradicting the definition of $Z$.
To see $(4)$: if $C\setminus C_G$ is finite then clearly $Lim(C)\subseteq Lim(C_G)$, and every $\delta\in Lim(C_G)$ is $V$-regular. Let $\delta\in Lim(C)$; it suffices to prove that $X_A$ is unbounded in $\delta$. Fix any $\rho<\delta$, and let $\rho'={\rm min}(Lim(C\setminus(\rho+1)))$; then $\rho<\rho'\leq\delta$, and also by minimality ${\rm otp}(C\cap(\rho,\rho'))=\omega$. Since $C\in V[A]$, it follows that $cf^{V[A]}(\rho')=\omega$, and since $\rho'\in Lim(C)$, it is $V$-regular. By definition it follows that $\rho'\in X_A\cap(\rho,\delta]$. $\blacksquare$
It is possible that $X_A$ below $\kappa^*$ is not closed: \begin{example} If there is $\alpha\in C_G$ such that $o^{\vec{U}}(\alpha)=\alpha^+$, then $\alpha$ stays regular in $V[G]$. Set $A=C_G$; then $X_A\cap\alpha$ will be unbounded in $\alpha$, but $\alpha\notin X_A$. \end{example}
There are trivial examples of $A$ for which the set $X_A$ is bounded; however, the following definition filters out this situation. \begin{definition}
Let $A\subseteq On$; we say that $A$ \textit{stabilizes} if there is $\beta<\kappa$ such that $\forall\alpha<{\rm sup}(A), \ A\cap\alpha\in V[G\restriction\beta]$. \end{definition} This definition is more general than the notion of a fresh set: \begin{definition}\label{freshSetDef} Let $M\subseteq M'$ be two models of $ZFC$. A set of ordinals $X\in M'\setminus M$ is \textit{fresh with respect to $M$} if $\forall\alpha<{\rm sup}(X). X\cap\alpha\in M$. \end{definition} \begin{proposition}\label{nonstabcofinality} Suppose that $A\in V[G]$ does not stabilize. Assume that for every $\beta<{\rm sup}(A)$ there is $C_\beta\subseteq C_G$ such that $V[C_\beta]=V[A\cap\beta]$. Then:
\begin{enumerate}
\item If $A\subseteq\kappa$, then $X_A\cap\kappa$ is unbounded in $\kappa$.
\item If $o^{\vec{U}}(\kappa)<\kappa^+$, then $cf^{V[A]}(\kappa)<\kappa$.
\end{enumerate} \end{proposition}
\noindent\textit{Proof}. The following argument works for both $(1)$ and $(2)$: we prove that $X_A$ is unbounded. Let $\kappa^*\leq\delta<\kappa$ and take some $\beta<{\rm sup}(A)$ such that $A\cap\beta\notin V[G\restriction\delta]$, which exists by our assumption that $A$ does not stabilize. By assumption, there exists $C_{\beta}\subseteq C_G$ such that $$V[C_\beta]=V[A\cap\beta]\subseteq V[A].$$ It is impossible that $C_\beta\setminus (C_G\cap\delta)$ is finite, otherwise $$A\cap\beta\in V[C_\beta]\subseteq V[G\restriction\delta],$$ which contradicts the choice of $\beta$. Let $\gamma_\delta$ be the first limit point of $C_\beta$ above $\delta$. By minimality, ${\rm otp}(C_\beta\cap(\delta,\gamma_\delta))=\omega$, hence $cf^{V[A]}(\gamma_\delta)=\omega$ and $\gamma_\delta\in X_A\setminus\delta$.
To see $(1)$: if $A\subseteq\kappa$, then necessarily $\gamma_\delta<\kappa$ for every $\delta$, since $\gamma_\delta\in Lim(C_\beta)$ and $\beta<{\rm sup}(A)\leq\kappa$, so that $V[C_\beta]=V[A\cap\beta]\subseteq V[C_G\cap\beta]$. This implies that $\gamma_\delta\leq\beta$; otherwise, in $V[C_G\cap\beta]$ the cofinality of some measurable above $\beta$ changes, which contradicts the $\beta^+$-c.c.\ of $\mathbb{M}[\vec{U}]\restriction\beta$. To see $(2)$: if some $\gamma_\delta=\kappa$, then $\kappa\in X_A$ and $cf^{V[A]}(\kappa)<\kappa$. Otherwise, $\gamma_\delta<\kappa$ for every $\delta$, and we conclude that $X_A$ is unbounded in $\kappa$. Since $o^{\vec{U}}(\kappa)<\kappa^+$, by \ref{propertiesofXA}(3) the set $X_A\setminus\kappa^*$ is closed, and $\kappa$ is a limit point of this set, so $\kappa\in X_A$. $\blacksquare$ \begin{corollary}\label{nonstabcofinalitysubsetofkappa} Assume $(IH)$ and suppose that $A\subseteq\kappa$, $A\in V[G]$, does not stabilize and $o^{\vec{U}}(\kappa)<\kappa^+$. Then $X_A\cap(\kappa^*,\kappa)$ is a club at $\kappa$ and $cf^{V[A]}(\kappa)<\kappa$. \end{corollary}
\noindent\textit{Proof}. Since $A\subseteq\kappa$, by \ref{very short}, for every $\beta<\kappa$ there is $C_\beta$ such that $V[A\cap\beta]=V[C_\beta]$, so we can apply \ref{nonstabcofinality}(1), \ref{nonstabcofinality}(2) and \ref{propertiesofXA}(3) to conclude that $X_A\cap(\kappa^*,\kappa)$ is a club and $cf^{V[A]}(\kappa)<\kappa$.$\blacksquare$
Note that it is possible that $cf^{V[G]}(\kappa)< cf^{V[A]}(\kappa)<\kappa$; however, $cf^{V[A]}(\kappa)$ must be a member of the generic club whose cofinality is eventually changed to $cf^{V[G]}(\kappa)$.
\begin{example} Assume that $o^{\vec{U}}(\kappa)=\kappa$; then $cf^{V[G]}(\kappa)=\omega$. Using the enumeration $C_G=\langle C_G(i)\mid i<\kappa\rangle$ and the canonical sequence $\alpha_n$ that was defined in Example \ref{canonicalsequence}, we can define in $V[G]$ the set $$A=\bigcup_{n<\omega} \{C_G(\alpha_n)+\alpha\mid \alpha<C_G(n)\};$$ then $A$ does not stabilize. Moreover, we cannot construct the sequence $\langle \alpha_n\mid n<\omega\rangle$, or any other $\omega$-sequence unbounded in $\kappa$, inside $V[A]$, since $A$ is generic for the forcing $\mathbb{M}[\vec{U}\restriction(C_G(\omega),\kappa]]$, which does not change the cofinality of $\kappa$ to $\omega$. For examples of this kind, the case $o^{\vec{U}}(\kappa)<\kappa$ suffices.
\end{example}
The following definition will allow us to refer to subsets of $C_G$ in $V[A]$. \begin{definition}
Let $A\in V[G]$ be any set. A set $D\in V[A]$ is a \textit{Mathias set} if
\begin{enumerate}
\item $Lim(D)\subseteq \overline{X}_A$.
\item For every $\delta\in Lim(D)$ and every $Y\in \bigcap \vec{U}(\delta)$, there is $\xi<\delta$ such that $D\cap(\xi,\delta)\subseteq Y$.
\end{enumerate} \end{definition}
\begin{lemma}\label{FiniteNoise} For every $D\in V[A]$, $D$ is a Mathias set if and only if $D\subseteq^* C_G$, i.e.\ $D\setminus C_G$ is finite. \end{lemma}
\noindent\textit{Proof}. If $D\setminus C_G$ is finite then by \ref{propertiesofXA}(4), $Lim(D)\subseteq \overline{X}_A$. For the second condition of a Mathias set, simply use \ref{genericproperties}(3).
In the other direction, assume that $D$ is a Mathias set. Toward a contradiction, assume $|D\setminus C_G|\geq\omega$, and let $\delta\leq {\rm sup}(D)$ be minimal such that $|D\cap\delta\setminus C_G|\geq\omega$; then $\delta\in Lim(D)\subseteq \overline{X}_A\subseteq Lim(C_G)$. By minimality, $\{d_n\mid n<\omega\}=D\cap\delta\setminus C_G$ is unbounded in $\delta$. By \ref{MagidorHousdorf} there is $Y\in \bigcap \vec{U}(\delta)$ such that $Y\cap\{d_n\mid n<\omega\}=\emptyset$, contradicting condition $(2)$ in the definition of a Mathias set.$\blacksquare$
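\begin{remark} A simple consequence of \ref{FiniteNoise}: if $D\in V[A]$ is a Mathias set and $D'\subseteq D$ is any subset with $D'\in V[A]$, then $D'$ is a Mathias set as well, since $$D'\setminus C_G\subseteq D\setminus C_G$$ is finite, and therefore $D'\subseteq^* C_G$. \end{remark}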
\begin{proposition}\label{CodingBoundedInformation} Let $A\in V[G]$ and $\lambda<\kappa$, let $\lambda_0:={\rm max}(Lim(C_G)\cap(\lambda+1))$ and assume $(IH)$. Then there is a Mathias set $F_\lambda\subseteq \lambda_0$ such that $V[F_\lambda]=V[A]\cap V[C_G\cap\lambda]$. \end{proposition}
\noindent\textit{Proof}. Consider in $V[A]$ the set $$B:=\{D\subseteq\lambda\mid D\text{ is a Mathias set}\}.$$
Then $|B|\leq 2^\lambda$; enumerate $B={\langle} D_i\mid i<2^\lambda{\rangle}$ and let $E=\{{\langle} i, d{\rangle}\mid i<2^{\lambda}, d\in D_i\}\subseteq 2^{\lambda}\times\lambda$; clearly $V[B]=V[E]$ and $E\subseteq V_{2^{\lambda}}$. Also, since elements of $Lim(C_G)$ are strong limits in $V[C_G]$, $${\rm max} (Lim(C_G)\cap(2^\lambda+1))={\rm max}(Lim(C_G)\cap(\lambda+1))=\lambda_0.$$ By Proposition \ref{genericproperties}(6), $E\in V[C_G\cap\lambda_0]$ and by the induction hypothesis there is $F_\lambda\subseteq C_G\cap\lambda_0$ such that $V[F_\lambda]=V[E]$. Since $E\in V[A]$, also $F_\lambda\in V[A]$, and since $F_\lambda\subseteq C_G\cap\lambda_0$, $F_\lambda\in V[C_G\cap\lambda_0]$, so $V[F_\lambda]\subseteq V[A]\cap V[C_G\cap\lambda_0]$. For the other direction, if $X\in V[A]\cap V[C_G\cap\lambda_0]$, then by induction there is $C\subseteq C_G\cap\lambda_0$ such that $V[X]=V[C]$, and also $C\in V[A]$. Then $C\subseteq\lambda$ is a Mathias set, hence $C\in B$, and therefore $C\in V[B]=V[F_\lambda]$. $\blacksquare$
The following proposition will be crucial for packing the information given by two sets $D,E\subseteq^* C_G$ into a single set $F\subseteq^* C_G$. \begin{proposition}\label{countableinformation}
Assume that $o^{\vec{U}}(\kappa)<\kappa^+$ and $(IH)$. Let $D,E\in V[A]$ be Mathias sets such that $\lambda:=|D|<\kappa$. Denote $\theta={\rm max}\{\lambda,\kappa^*\}$. Then there is $F\in V[A]$ such that: \begin{enumerate}
\item $F$ is a Mathias set and $F\cap\theta= F_\theta$.
\item $(D\cup E)\setminus\theta\subseteq F\subseteq {\rm sup}(D\cup E)$.
\item $D,E\in V[F]$. \end{enumerate} \end{proposition} \begin{remark}\label{problemunion} Note that simply taking the union $D\cup E$ will not suffice for the proposition:
For example, assume that $o^{\vec{U}}(\kappa)=\delta$ and $o^{\vec{U}}(\delta)=1$, and pick any generic $G$ with the condition ${\langle} {\langle} \delta, \{\alpha<\delta\mid o^{\vec{U}}(\alpha)=0\}{\rangle},{\langle}\kappa,\{\delta<\alpha<\kappa\mid o^{\vec{U}}(\alpha)<\delta\}{\rangle}{\rangle}\in G$. Then $G$ is generic such that ${\rm otp}(C_G)=C_G(\omega)=\delta$. Let $$D=\{C_G(C_G(n))\mid n<\omega\}\text{ and }E=\{C_G(\alpha)\mid \omega\leq\alpha<C_G(\omega)\}\setminus D$$ Then $D\cup E=\{C_G(\alpha)\mid \omega\leq \alpha< C_G(\omega)\}$, hence in $V[D\cup E]$, $C_G(\omega)$ is still measurable. On the other hand, from $D$, we can reconstruct ${\langle} C_G(n)\mid n<\omega{\rangle}$ as $o^{\vec{U}}(C_G(C_G(n)))=C_G(n)$. So it is impossible that $D\in V[D\cup E]$. \end{remark} \textit{Proof of \ref{countableinformation}.}
Fix $\mathbb{M}[\vec{U}]$-names $\lusim{E},\langle \lusim{d}_i\mid i<\lambda\rangle$ for the elements of $E\setminus\theta$ and $D\setminus\theta$ respectively. Split the forcing at $\theta$ and find ${\langle} q',r'{\rangle}\in G$ such that
$$(1)\ \ \ {\langle} q',r'{\rangle}\Vdash \lusim{E},\{ \lusim{d}_i\mid i<\lambda\}\subseteq\lusim{C}_G\setminus\theta\text{ and } \forall\alpha\in \lusim{C}_G\setminus\kappa^*.o^{\vec{U}}(\alpha)<\alpha^+.$$ The idea is that for every $\delta\in D\setminus \kappa(r')$, there is $i\leq l(r')+1$ such that $\delta\in (\kappa_{i-1}(r'),\kappa_i(r'))$. Then $\delta$ is definable from $D\cup E$ and two other parameters: $$\gamma(\delta):=o^{(\kappa_i(r'))}(\delta)\text{ and }\beta(\delta):={\rm sup}\{x\in (D\cup E)\cap \delta\mid \gamma(x)\geq\gamma(\delta)\}.$$
Indeed,
$$\delta={\rm min}\{y\in (D\cup E)\setminus\beta(\delta)\mid \gamma(y)=\gamma(\delta)\}.$$
Then $\beta(\delta)$ is a member of $E\cup D$ below $\delta$\footnote{Actually, since we can always shrink the large set of $\delta$ so as to filter out of a final segment of $E\cup D$ the ordinals $\rho$ with $\gamma(\rho)\geq\gamma(\delta)$, it will follow that $\beta(\delta)$ is strictly below $\delta$.}. As for $\gamma(\delta)$, we use \ref{specialTree}: there is a $\vec{U}$-fat tree $T$ deciding $\delta$ to be the topmost ordinal in a maximal branch of $T$, and $\gamma(\delta)$ will be decided by the lower part of the branch, hence below $\delta$, and therefore by finitely many elements of $C_G$ below $\delta$. After adding these finitely many elements to $E$, we repeat this process on the added points. This process should stabilize after $\omega$ many steps, since we are creating a decreasing sequence of ordinals.
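Schematically, the recursion we are about to perform produces, for each $i<\lambda$, names organized by levels as $$\lusim{\delta}^{(0)}_{i,j}=\lusim{d}_i,\qquad \{\lusim{\delta}^{(k+1)}_{i,s}\mid s<\omega\}=\{\lusim{\vec{\alpha}}^{(k)}_{i,j}(w),\lusim{\beta}^{(k)}_{i,j}\mid j,w<\omega\},$$ where every name of level $k+1$ is forced to be a member of $C_{\lusim{G}}$ below the level-$k$ name from which it was extracted; this is the decreasing sequence of ordinals mentioned above.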
Formally, we proceed by a density argument: let $r'\leq r\in \mathbb{M}[\vec{U}]\restriction(\theta,\kappa]$.
Define recursively for every $k<\omega$:
$r\leq^* r^*_k$, maximal antichains ${\langle} Z^{(k)}_{i,j}\mid i<\lambda,j<\omega{\rangle}$, $\mathbb{M}[\vec{U}]$-names $${\langle} \lusim{\delta}^{(k)}_{i,j}\mid i<\lambda,j<\omega{\rangle}\text{ and }{\langle} T^{(k)}_{q,i,j},I^{(k)}_{q,i,j},F^{(k)}_{q,i,j},\vec{A}^{(k)}_{q,i,j}\mid i<\lambda,j<\omega,q\in Z^{(k)}_{i,j}{\rangle}.$$ First, for every $j<\omega$ and $i<\lambda$, let $\lusim{\delta}^{(0)}_{i,j}=\lusim{d}_i$. Assume $r\leq^*r^*_k$ and $\lusim{\delta}^{(k)}_{i,j}$ are defined such that for all $i<\lambda,j<\omega$,
${\langle} q',r^*_k{\rangle}\Vdash \lusim{\delta}^{(k)}_{i,j}\in \lusim{C}_G\setminus\theta$.
Fix $i<\lambda,j<\omega$ and use \ref{specialTree} to find $r^*_k\leq^*r_{i,j}$ and a maximal antichain $Z^{(k)}_{i,j}\subseteq \mathbb{M}[\vec{U}]\restriction\theta$ above $q'$, such that for every $q\in Z^{(k)}_{i,j}$, either ${\langle} q,r_{i,j}{\rangle} ||\lusim{\delta}^{(k)}_{i,j}$, or there is a $\vec{U}$-fat tree $T^{(k)}_{q,i,j}$ of extensions of $r_{i,j}$ and sets $A^t_{q,i,j}$ such that $D_{T^{(k)}_{q,i,j},\vec{A}^{(k)}_{q,i,j}}$ is pre-dense above $r_{i,j}$, and for every $t^{\smallfrown}\alpha\in mb(T^{(k)}_{q,i,j})$, $$(2) \ \ \ {\langle} q,r_{i,j}^{\smallfrown}{\langle} t^{\smallfrown}\alpha,\vec{A}^{t^{\smallfrown}\alpha}_{q,i,j}{\rangle}{\rangle}\Vdash\lusim{\delta}^{(k)}_{i,j}=\alpha, \ \ \ {\langle} q,r_{i,j}^{\smallfrown}{\langle} t,\vec{A}^t_{q,i,j}{\rangle}{\rangle}||\ o^{(\lusim{\kappa}^{(k)}_{i,j})}(\lusim{\delta}^{(k)}_{i,j})$$ where $\lusim{\kappa}^{(k)}_{i,j}$ is an $\mathbb{M}[\vec{U}]$-name for the unique $\kappa_y(r)$, $y\leq l(r)+1$, such that $\lusim{\delta}^{(k)}_{i,j}\in (\kappa_{y-1}(r),\kappa_{y}(r))$. Note that $\lusim{\kappa}^{(k)}_{i,j}$ is also an $\mathbb{M}[\vec{U}]$-name for the measurable on which $mb(T^{(k)}_{q^*,i,j})$ splits, for the unique $q^*$ in $Z^{(k)}_{i,j}\cap G\restriction \theta$.
Let $F^{(k)}_{q,i,j}:{\rm Lev}_{ht(T^{(k)}_{q,i,j})-1}(T^{(k)}_{q,i,j})\rightarrow \kappa$ be the function defined by $$(3) \ \ \ F^{(k)}_{q,i,j}(s)=\gamma\leftrightarrow {\langle} q,r_{i,j}^{\smallfrown}{\langle} s,\vec{A}^{s}_{q,i,j}{\rangle}{\rangle}\Vdash o^{(\lusim{\kappa}^{(k)}_{i,j})}(\lusim{\delta}^{(k)}_{i,j})=\gamma.$$
This notation also covers the case where ${\langle} q,r_{i,j}{\rangle}||\lusim{\delta}^{(k)}_{i,j}$, by taking the tree of height $0$ and letting $F^{(k)}_{q,i,j}({\langle}{\rangle})$ be the decided value of $o^{\vec{U}}(\lusim{\delta}^{(k)}_{i,j})$. Shrink $T^{(k)}_{q,i,j}$, and find a complete and consistent set of important coordinates $I^{(k)}_{q,i,j}$. Also, as in \ref{very short}, we shrink the trees even more so that for every $q_1,q_2\in Z^{(k)}_{i,j}$ one of the following holds: \begin{enumerate}
\item[(4.1)] $Im(F^{(k)}_{q_1,i,j})\cap Im(F^{(k)}_{q_2,i,j})=\emptyset$.
\item[(4.2)] $T^{(k)}_{q_1,i,j}\restriction I^{(k)}_{q_1,i,j}=T^{(k)}_{q_2,i,j}\restriction I^{(k)}_{q_2,i,j}$ and $(F^{(k)}_{q_1,i,j})_{I^{(k)}_{q_1,i,j}}=(F^{(k)}_{q_2,i,j})_{I^{(k)}_{q_2,i,j}}$. \end{enumerate} Note that for every $V$-generic filter $H\subseteq \mathbb{M}[\vec{U}]$ such that ${\langle} q,r_{i,j}{\rangle}\in H$, there is $t\in mb(T^{(k)}_{q,i,j})$ such that ${\langle} q,r_{i,j}^{\smallfrown}{\langle} t,\vec{A}^t_{q,i,j}{\rangle}{\rangle}\in H$, and if $t_1^{\smallfrown}\alpha_1,t_2^{\smallfrown}\alpha_2\in mb(T^{(k)}_{q,i,j})$ are two such branches, then by $(2)$, $\alpha_1=(\lusim{\delta}^{(k)}_{i,j})_H=\alpha_2$ and in particular $F^{(k)}_{q,i,j}(t_1)=F^{(k)}_{q,i,j}(t_2)$, which implies that $t_1\restriction I^{(k)}_{q,i,j}=t_2\restriction I^{(k)}_{q,i,j}$; thus $t_1\restriction I^{(k)}_{q,i,j}$ is unique. Let $\lusim{\vec{\alpha}}^{(k)}_{q,i,j}$ be an $\mathbb{M}[\vec{U}]$-name such that
$$(5) \ \ \ {\langle} q,r_{i,j}{\rangle}\Vdash \forall t\in mb(T^{(k)}_{q,i,j}). \ {\langle} q,r_{i,j}^{\smallfrown}{\langle} t,\vec{A}^{t}_{q,i,j}{\rangle}{\rangle}\in \lusim{G}\rightarrow \lusim{\vec{\alpha}}^{(k)}_{q,i,j}=t\restriction I^{(k)}_{q,i,j}.$$ Note that if $q_1,q_2\in Z^{(k)}_{i,j}$ are such that $(4.2)$ holds, then both ${\langle} q_1, r_{i,j}{\rangle},{\langle} q_2,r_{i,j}{\rangle}$ force that $\lusim{\vec{\alpha}}^{(k)}_{q_1,i,j}=\lusim{\vec{\alpha}}^{(k)}_{q_2,i,j}$. Moreover, it is forced by ${\langle} q,r_{i,j}{\rangle}$ that $|\lusim{\vec{\alpha}}^{(k)}_{q,i,j}|=|I^{(k)}_{q,i,j}|$ so we assume that $\lusim{\vec{\alpha}}^{(k)}_{q,i,j}={\langle} \lusim{\vec{\alpha}}^{(k)}_{q,i,j}(w)\mid w\leq |I^{(k)}_{q,i,j}|{\rangle}$.
Next, let $\lusim{\beta}^{(k)}_{q,i,j}$ be a $\mathbb{M}[\vec{U}]$-name such that $$(6) \ \ {\langle} q,r_{i,j}{\rangle}\Vdash\lusim{\beta}_{q,i,j}^{(k)}={\rm sup}(\{x\in(\lusim{D}\cup\lusim{E})\cap\lusim{\delta}^{(k)}_{i,j}\mid o^{(\lusim{\kappa}^{(k)}_{i,j})}(\lusim{\delta}^{(k)}_{i,j})\leq o^{(\lusim{\kappa}^{(k)}_{i,j})}(x)\}\cup\{\theta\}).$$
By definition of $\lusim{\beta}^{(k)}_{q,i,j}$ and since we split the forcing at $\theta$, the trees $T^{(k)}_{q,i,j}$ are extension trees of $r_{i,j}$ and for every $w$, $$(7)\ \ \ {\langle} q,r_{i,j}{\rangle} \Vdash \lusim{\vec{\alpha}}^{(k)}_{q,i,j}(w),\lusim{\beta}^{(k)}_{q,i,j}\in C_{\lusim{G}}\cap[\theta,\lusim{\delta}^{(k)}_{i,j})$$
Indeed, otherwise there is a generic $H$ with ${\langle} q,r_{i,j}{\rangle} \in H$ and $(\lusim{\beta}^{(k)}_{i,j})_H=(\lusim{\delta}^{(k)}_{i,j})_H$. However, by \ref{increasing order at limits}, $$o^{((\lusim{\kappa}^{(k)}_{i,j})_H)}((\lusim{\beta}^{(k)}_{i,j})_H)>o^{((\lusim{\kappa}^{(k)}_{i,j})_H)}((\lusim{\delta}^{(k)}_{i,j})_H),$$ a contradiction.
By $\leq^*$-closure of $\mathbb{M}[\vec{U}]\restriction(\theta,\kappa]$, find a single $r^*_{k+1}$ such that $r_{i,j}\leq^* r^*_{k+1}$ for every $i,j$. We conclude that for every $q\in Z^{(k)}_{i,j} $, we have defined $T^{(k)}_{q,i,j},F^{(k)}_{q,i,j},I^{(k)}_{q,i,j}$ and names ${\langle} \lusim{\vec{\alpha}}^{(k)}_{q,i,j}(w)\mid w\leq |I^{(k)}_{q,i,j}|{\rangle},\lusim{\beta}^{(k)}_{q,i,j}$. We would like to make these names independent of $q\in Z^{(k)}_{i,j}$. For $\lusim{\beta}^{(k)}_{q,i,j}$ it is easy to find $\mathbb{M}[\vec{U}]$-names $\lusim{\beta}^{(k)}_{i,j}$ such that for every $q\in Z^{(k)}_{i,j}$, ${\langle} q,r^*_{k+1}{\rangle}\Vdash \lusim{\beta}^{(k)}_{q,i,j}=\lusim{\beta}^{(k)}_{i,j}$. As for ${\langle} \lusim{\vec{\alpha}}^{(k)}_{q,i,j}(w)\mid w\leq |I^{(k)}_{q,i,j}|{\rangle}$, the length $|I^{(k)}_{q,i,j}|$ might depend on $q$, so we define $\lusim{\vec{\alpha}}^{(k)}_{q,i,j}(w)=\theta$ if $|I^{(k)}_{q,i,j}|<w<\omega$, and we can find names $\lusim{\vec{\alpha}}^{(k)}_{i,j}(w)$ independent of $q$. With these new names, in $(6),(7)$ we can replace ${\langle} q,r_{i,j}{\rangle}$ by ${\langle} q',r^*_{k+1}{\rangle}$. Enumerate the names $$\{\lusim{\vec{\alpha}}^{(k)}_{i,j}(w),\lusim{\beta}^{(k)}_{i,j}\mid j,w<\omega\}=\{\lusim{\delta}^{(k+1)}_{i,s}\mid s<\omega\}.$$
This concludes the inductive definition. Use $\sigma$-closure to find $r_\omega$ such that $r^*_n\leq^* r_\omega$ for every $n<\omega$, and shrink all the trees to be extension trees of $r_{\omega}$ such that for every $i<\lambda,\ k,j<\omega$ and $q\in Z^{(k)}_{i,j}$, $D_{T^{(k)}_{q,i,j},\vec{A}^{(k)}_{q,i,j}}$ is pre-dense above $r_\omega$. By density we may assume that $r_\omega\in G$. Consider the sequence $$\langle (\lusim{\delta}^{(k)}_{i,j})_G\mid k,j<\omega, i<\lambda\rangle.$$ By $(7)$, ${\langle} q',r_\omega{\rangle}\Vdash \lusim{\delta}^{(k)}_{i,j}\in\lusim{C}_G\setminus\theta$, thus $(\lusim{\delta}^{(k)}_{i,j})_G\in C_G\setminus\theta$.
\begin{claim*} $\langle (\lusim{\delta}^{(k)}_{i,j})_G\mid k,j<\omega, i<\lambda\rangle\in V[A]$. \end{claim*} \textit{Proof of claim:} Work inside $V[A]$ and recall that $D,E\in V[A]$; therefore $\langle (\lusim{\delta}^{(0)}_{i,j})_G\mid i<\lambda,j<\omega\rangle$ is in $V[A]$. Assume we have successfully defined $\langle (\lusim{\delta}^{(k)}_{i,j})_G\mid i<\lambda,j<\omega\rangle$; let us define from this sequence, inside $V[A]$, the sequence $\langle (\lusim{\delta}^{(k+1)}_{i,j})_G\mid i<\lambda,j<\omega\rangle$. First, in $V[G]$, for each $i<\lambda,j<\omega$, let $Z^{(k)}_{i,j}\cap G\restriction\theta=\{q^G_{i,j}\}$ and let $t_{i,j}\in mb(T^{(k)}_{q^G_{i,j},i,j})$ be such that ${\langle} q^G_{i,j},r_\omega^{\smallfrown}{\langle} t_{i,j},\vec{A}^{t_{i,j}}_{q^G_{i,j},i,j}{\rangle}{\rangle}\in G$. Let
$y\leq l(r_\omega)+1$ be such that $(\lusim{\kappa}^{(k)}_{i,j})_G=\kappa_y(r_\omega)$, which is definable in $V[A]$ using $r_\omega, (\lusim{\delta}^{(k)}_{i,j})_G$, as the unique $y\leq l(r_\omega)+1$ such that $(\lusim{\delta}^{(k)}_{i,j})_G\in(\kappa_{y-1}(r_\omega),\kappa_y(r_\omega))$. By $(3)$,
$${\langle} q^G_{i,j},r_\omega^{\smallfrown}{\langle} t_{i,j}\setminus\{{\rm max}(t_{i,j})\},\vec{A}^{t_{i,j}\setminus\{{\rm max}(t_{i,j})\}}_{q^G_{i,j},i,j}{\rangle}\Vdash o^{(\kappa_y(r_\omega))}(\lusim{\delta}^{(k)}_{i,j})=F^{(k)}_{q^G_{i,j},i,j}(t_{i,j}\setminus\{{\rm max}(t_{i,j})\})$$
hence it must be that $F^{(k)}_{q^G_{i,j},i,j}(t_{i,j}\setminus\{{\rm max}(t_{i,j})\})=o^{(\kappa_y(r_\omega))}((\lusim{\delta}^{(k)}_{i,j})_G)$. Although the sequence ${\langle} q^{G}_{i,j}\mid i<\lambda,j<\omega{\rangle}$ might not be in $V[A]$, we can argue similarly to \ref{very short}. Back in $V[A]$, $o^{(\kappa_y(r_\omega))}((\lusim{\delta}^{(k)}_{i,j})_G)$ is definable, since in $V$ we have the decomposition $${\langle} X^{(\kappa_y(r_\omega))}_\gamma\mid \gamma<o^{\vec{U}}(\kappa_y(r_\omega)){\rangle}$$ and $o^{(\kappa_y(r_\omega))}((\lusim{\delta}^{(k)}_{i,j})_G)$ is the unique $\gamma_{i,j}<o^{\vec{U}}(\kappa_y(r_\omega))$ such that $(\lusim{\delta}^{(k)}_{i,j})_{G}\in X^{(\kappa_y(r_\omega))}_{\gamma_{i,j}}$. Let $$M^{(k)}_{i,j}=\{q\in Z^{(k)}_{i,j}\mid o^{(\kappa_y(r_\omega))}((\lusim{\delta}^{(k)}_{i,j})_G)\in Im(F^{(k)}_{q,i,j})\}.$$ Notice that $q^G_{i,j}\in M^{(k)}_{i,j}$, as witnessed by $t_{i,j}\setminus\{{\rm max}(t_{i,j})\}$, hence $Im(F^{(k)}_{q,i,j})\cap Im( F^{(k)}_{q^G_{i,j},i,j})\neq\emptyset$ for any $q\in M^{(k)}_{i,j}$, and we conclude that $(4.2)$ must hold. Choose in $V[A]$ any $q^{(k)}_{i,j}\in M^{(k)}_{i,j}$ and any $s^{(k)}_{i,j}\in mb(T^{(k)}_{q^{(k)}_{i,j},i,j})$ such that $F_{q^{(k)}_{i,j},i,j}^{(k)}(s^{(k)}_{i,j})=o^{(\kappa_y(r_\omega))}((\lusim{\delta}^{(k)}_{i,j})_G)$. By $(5)$, $(\lusim{\vec{\alpha}}^{(k)}_{i,j})_G=t_{i,j}\restriction I^{(k)}_{q^G_{i,j},i,j}$ and since $$(F^{(k)}_{q^{(k)}_{i,j},i,j})_{I^{(k)}_{q^{(k)}_{i,j},i,j}}(s^{(k)}_{i,j}\restriction I^{(k)}_{q^{(k)}_{i,j},i,j})=o^{(\kappa_y(r_\omega))}((\lusim{\delta}^{(k)}_{i,j})_G)=(F^{(k)}_{q^G_{i,j},i,j})_{I^{(k)}_{q^G_{i,j},i,j}}(t_{i,j}\restriction I^{(k)}_{q^{G}_{i,j},i,j})$$ it follows that $t_{i,j}\restriction I^{(k)}_{q^{G}_{i,j},i,j}=(\lusim{\vec{\alpha}}^{(k)}_{i,j})_G=s^{(k)}_{i,j}\restriction I^{(k)}_{q^{(k)}_{i,j},i,j}$. Hence ${\langle}(\lusim{\vec{\alpha}}^{(k)}_{i,j}(w))_G\mid w<\omega{\rangle}$ is definable in $V[A]$. 
Also, by $(6)$, $(\lusim{\beta}^{(k)}_{i,j})_G$ is definable from $(\lusim{\delta}^{(k)}_{i,j})_G$, $\kappa_{y}(r_{\omega})$ and $D\cup E$, which are all available in $V[A]$. By its definition, the sequence $\langle (\lusim{\delta}^{(k+1)}_{i,j})_G\mid i<\lambda,j<\omega\rangle$ is then definable in $V[A]$. We conclude that $\langle (\lusim{\delta}^{(k)}_{i,j})_G\mid k,j<\omega, i<\lambda\rangle\in V[A]$. $\blacksquare_{\text{Claim}}$
We keep the notation $q^{(k)}_{i,j}$ from the proof of the claim, and use Proposition \ref{CodingBoundedInformation} to find $F_{\theta}$ such that $$V[F_{\theta}]=V[A]\cap V[C_G\cap\theta].$$ Define $$F_*=\{(\lusim{\delta}^{(k)}_{i,j})_G\mid k,j<\omega, i<\lambda\}, \ \ F^*=\big((E\cup F_*)\setminus \theta\big) \uplus F_\theta\in V[A].$$ Clearly, $F^*$ is a Mathias set and $F^*\cap\theta=F_\theta$. To see $2$ of the proposition, note that $D\setminus \theta=\{(\lusim{\delta}^{(0)}_i)_G\mid i<\lambda\}\subseteq F^*$, so $(D\cup E)\setminus\theta\subseteq F^*$. Moreover, from $(6)$ it follows that for every $k,i,j$, $\lusim{\delta}^{(k)}_{i,j}$ is forced by ${\langle} q', r_\omega{\rangle}\in G$ to be below some $\lusim{\delta}^{(0)}_{s,t}$, so ${\rm sup}(F_*)={\rm sup}(D)$ and hence ${\rm sup}(F^*)={\rm sup}(D\cup E)$.
To see $3$, let $\langle\lambda_\xi\mid \xi<otp(F_*)=:\rho\rangle$ be the increasing enumeration of $F_*$, clearly $|\rho|\leq \lambda$.
Consider the function $R:\rho \rightarrow[\rho]^{<\omega}\times\rho$ defined by $R(\xi)={\langle} {\langle} i_1,...,i_n{\rangle},s{\rangle}$, where for some $i,j,k$, $$\lambda_\xi=(\lusim{\delta}^{(k)}_{i,j})_G, \ {\langle}(\vec{\lusim{\alpha}}^{(k)}_{i,j}(w))_G\mid w\leq |I^{(k)}_{q^{(k)}_{i,j},i,j}|{\rangle}={\langle}\lambda_{i_1},...,\lambda_{i_n}{\rangle}\text{ and } (\lusim{\beta}^{(k)}_{i,j})_G=\lambda_s.$$ By the claim, ${\langle}(\lusim{\delta}^{(k)}_{i,j})_G\mid k,j<\omega, i<\lambda{\rangle}\in V[A]$, hence $R\in V[A]$; since $|\rho|\leq\lambda$, $R\in V[A]\cap V[C_G\cap\theta]=V[F_\theta]\subseteq V[F^*]$. Notice that by $(7)$, $i_1,...,i_n,s<\xi$.
Let us first argue that $F_*\in V[F^*]$. In $V[F^*]$, we inductively define a sequence $\langle\beta_i\mid i<\rho\rangle$.
Clearly $$\{\lambda_i\mid i<\rho\}\cap (\theta+1)=\{\lambda_i\mid i<\epsilon\}\in V[F_\theta],$$ so we let $\beta_i=\lambda_i$ for $i<\epsilon$. Assume that $\langle \beta_j\mid j<i\rangle$ is defined, where $i\geq\epsilon$ and $R(i)={\langle}{\langle} i_1,...,i_n{\rangle},s{\rangle}$; in particular $\beta_{i_1},...,\beta_{i_n}$ and $\beta_s$ are defined. Let $I=Ind(F_*\setminus D,F_*)\subseteq \rho$; by the claim, $I\in V[A]\cap V[C_G\cap \theta]= V[F_\theta]\subseteq V[F^*]$. Finally, note that $\{q^{(k)}_{i,j}\mid i<\lambda,j<\omega\}\in V[A]\cap V[C_G\cap\theta]=V[F_\theta]\subseteq V[F^*]$ and let $\kappa_{i,j}$ be the measurable on which $mb(T^{(k)}_{q^{(k)}_{i,j},i,j})$ splits. Define $$\beta_i={\rm min}(\{x\in (F^*\setminus \{\beta_j\mid j\in I\cap i\})\setminus(\beta_{s}+1)\mid \ o^{(\kappa_{i,j})}(x)\geq (F^{(k)}_{q^{(k)}_{i,j},i,j})_{I^{(k)}_{q^{(k)}_{i,j},i,j}}(\beta_{i_1},...,\beta_{i_n})\}).$$ This is a legitimate definition in $V[F^*]$, since all the parameters used are available there. Let us prove that $\beta_\xi=\lambda_\xi$. Inductively assume that $\langle\beta_j\mid j<\xi\rangle=\langle\lambda_j\mid j<\xi\rangle$; we may assume that $\xi\geq\epsilon$, and then $$\{\beta_j\mid j\in I\cap \xi\}=\{\lambda_j\mid j\in I\cap \xi\}=(F_*\setminus D)\cap \lambda_\xi $$
and therefore
$$(F^*\setminus\{\beta_j\mid j\in I\cap \xi\})\cap(\beta_s,\lambda_\xi)=[(E\cup F_*)\setminus(F_*\setminus D)]\cap(\beta_s,\lambda_\xi)=(E\cup D)\cap(\beta_s,\lambda_\xi).$$
Assume that $i,k,j$ are such that $\lambda_\xi=(\lusim{\delta}^{(k)}_{i,j})_G$; then by the induction hypothesis, $\beta_s=\lambda_s=(\lusim{\beta}^{(k)}_{i,j})_G$ and
$${\langle}(\vec{\lusim{\alpha}}^{(k)}_{i,j}(w))_G\mid w\leq |I^{(k)}_{q^{(k)}_{i,j},i,j}|{\rangle}={\langle}\lambda_{i_1},...,\lambda_{i_n}{\rangle}={\langle}\beta_{i_1},...,\beta_{i_n}{\rangle}.$$ By $(3)$ it follows that $$(F^{(k)}_{q^{(k)}_{i,j},i,j})_{I^{(k)}_{q^{(k)}_{i,j},i,j}}({\langle}\beta_{i_1},...,\beta_{i_n}{\rangle})=o^{(\kappa_{i,j})}((\lusim{\delta}^{(k)}_{i,j})_G)=o^{(\kappa_{i,j})}(\lambda_\xi).$$
By $(6)$, it follows that in the interval $(\beta_s,\lambda_\xi)$ there are no ordinals $x\in F^*\setminus\{\beta_j\mid j\in I\cap \xi\}$ such that $(F^{(k)}_{q^{(k)}_{i,j},i,j})_{I^{(k)}_{q^{(k)}_{i,j},i,j}}({\langle}\beta_{i_1},...,\beta_{i_n}{\rangle})\leq o^{(\kappa_{i,j})}(x)$, so $\beta_\xi\geq \lambda_\xi$. Also $\lambda_\xi\in F^*\setminus\{\beta_j\mid j\in I\cap \xi\}$ and $(F^{(k)}_{q^{(k)}_{i,j},i,j})_{I^{(k)}_{q^{(k)}_{i,j},i,j}}(\beta_{i_1},...,\beta_{i_n})=o^{(\kappa_{i,j})}(\lambda_\xi)$, hence $\lambda_\xi= \beta_\xi$. Thus $F_*\in V[F^*]$.
From this, $(3)$ easily follows: indeed, $D\setminus\theta,F_*\setminus E\in V[F^*]$, since their sets of indices inside $F_*$ are subsets of $\theta$, hence $$E\setminus \theta=[(E\cup F_*)\setminus (F_*\setminus E)]\setminus\theta=F^*\setminus[\theta\cup(F_*\setminus E)]\in V[F^*].$$ Also $D\cap\theta,E\cap\theta\in V[F_\theta]\subseteq V[F^*]$, and therefore $D,E\in V[F^*]$, which is what we needed.$\blacksquare$
The following lemma provides a sufficient condition for the main result. Roughly, it says that if $\kappa$ changes cofinality in $V[A]$, and there is a single $C^*\subseteq C_G$ which captures all the initial segments of $A$, then we can glue the information needed to capture $A$. \begin{lemma}\label{lemmaforsubsetkappa} Assume $o^{\vec{U}}(\kappa)<\kappa^+$ and $(IH)$. Let $A \in V[G] ,\ A\subseteq\kappa $ and assume that there is $C^*\subseteq C_G$ such that \begin{enumerate}
\item$C^*\in V[A]$ and
$\forall\alpha<\kappa \ A\cap\alpha\in V[C^*]$.
\item $cf^{V[A]}(\kappa)<\kappa$.
\end{enumerate} Then $ \exists C' \subseteq C_G $ such that $ V[A]=V[C']$. \end{lemma}
\noindent\textit{Proof}. Let $\lambda:=cf^{V[A]}(\kappa)<\kappa$ and $\langle \alpha_i\mid i<\lambda\rangle\in V[A]$ unbounded and cofinal in $\kappa$ witnessing this. By \ref{very short}, there is $C_*\subseteq C_G$ such that $|C_*|\leq\lambda$ and $V[C_*]=V[\langle \alpha_i\mid i<\lambda\rangle]$. Use \ref{countableinformation} to find $C_0\subseteq C_G$ such that $C_0\in V[A]$ and $C_*,C^*\in V[C_0]$. In $V[C_0]$, let $\pi_i:2^{\alpha_i}\leftrightarrow P(\alpha_i)$ be any bijection. Since $A\cap\alpha_i\in V[C_0]$, there is $\delta_i$ such that $$\pi_i(\delta_i)=A\cap\alpha_i.$$ Note that the sequence $\langle \delta_i\mid i<\lambda\rangle$ might not be inside $V[C_0]$, but it is in $V[A]$. Again by \ref{very short} we can find $C''\subseteq C_G$ with $|C''|\leq\lambda$ such that $$V[\langle\delta_i\mid i<\lambda\rangle]=V[C''].$$ By Proposition \ref{countableinformation}, we can find some $C'\subseteq C_G$, $C'\in V[A]$, such that $C_0, C''\in V[C']$. Now in $V[C']$ we can compute $A$ as follows: since $C_0\in V[C']$, also ${\langle}\pi_i\mid i<\lambda{\rangle}\in V[C']$, and since $C''\in V[C']$, also ${\langle}\delta_i\mid i<\lambda{\rangle}\in V[C']$. It follows that $A=\cup_{i<\lambda}(A\cap\alpha_i)=\cup_{i<\lambda}\pi_i(\delta_i)\in V[C']$. $\blacksquare$
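The chain of models built in the proof can be summarized as follows (a schematic recap of the argument above, not an additional claim): $$C_*,C^*\in V[C_0],\qquad C_0,C''\in V[C'],\qquad V[C_*]=V[{\langle}\alpha_i\mid i<\lambda{\rangle}],\qquad V[C'']=V[{\langle}\delta_i\mid i<\lambda{\rangle}],$$ so that in $V[C']$ both the bijections ${\langle}\pi_i\mid i<\lambda{\rangle}$ and the codes ${\langle}\delta_i\mid i<\lambda{\rangle}$ are available, and $A=\bigcup_{i<\lambda}\pi_i(\delta_i)$ is computed there.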
\subsection{Subsets of $\kappa$ which do not stabilize}
In this section we assume that $o^{\vec{U}}(\kappa)<\kappa^+$, that $A$ does not stabilize, and $(IH)$. We do not assume in general that $A\subseteq\kappa$. However, if $A\in V[G]$ is such that $A\subseteq\kappa$ and does not stabilize, then by \ref{nonstabcofinalitysubsetofkappa}, $cf^{V[A]}(\kappa)<\kappa$. By Lemma \ref{lemmaforsubsetkappa}, to conclude the main result for $A$, it remains to find $C^*\in V[A]$ such that for every $\alpha<\kappa$, $A\cap\alpha\in V[C^*]$. In this section we construct such a $C^*$. The naive approach is the following: fix a cofinal sequence ${\langle}\alpha_i\mid i<cf^{V[A]}(\kappa){\rangle}\in V[A]$; since for every $i$, $A\cap\alpha_i$ is bounded, apply \ref{very short} to find $C_i\subseteq C_G$ such that $V[A\cap\alpha_i]=V[C_i]$, and let $C^*=\cup_{i<cf^{V[A]}(\kappa)}C_i$. There are several reasons why this $C^*$ is not the desired set: \begin{enumerate}
\item[(I)] The sequence ${\langle} C_i\mid i<cf^{V[A]}(\kappa){\rangle}$ is defined in $V[G]$, and by adding finitely many elements to each $C_i$ we might accumulate an infinite amount of information which is not in $V[A]$.
\item[(II)] As we have seen in \ref{problemunion}, a union of two sets might lose information, so it is possible that for some $j$, $C_j\notin V[\cup_{i<cf^{V[A]}(\kappa)}C_i]$. \end{enumerate}
For problem (I), we need to ensure that the choices we make are inside $V[A]$. For this we use the definition of a Mathias set: in $V[A]$ we can choose a sequence ${\langle} D_i\mid i<cf^{V[A]}(\kappa){\rangle}$ such that $V[A\cap\alpha_i]=V[D_i]$ and each $D_i$ is a Mathias set. By Proposition \ref{FiniteNoise}, $D_i\subseteq^* C_G$, so it might be that $D_i\setminus C_G\neq\emptyset$. By fixing problem (I), we have created a new problem: the set $D:=\cup_{i<cf^{V[A]}(\kappa)}D_i$ might accumulate infinite noise, i.e.\ $|D\setminus C_G|\geq \omega$. Lemma \ref{Noise} and Corollaries \ref{stabilization of star increasing sequences}, \ref{subsets star bound} show that we can remove this noise and stay inside $V[A]$. \begin{lemma}\label{Noise}
Let $\langle D_i\mid i<\lambda\rangle\in V[A]$ such that $\lambda<\kappa$ and: \begin{enumerate} \item $D_i$ is a Mathias set.
\item $min(D_i)\geq\lambda$. \end{enumerate} Then there is $\langle D^*_i\mid i<\lambda\rangle\in V[A]$ such that: \begin{enumerate}
\item $\underset{i<\lambda}{\bigcup}D^*_i$ is Mathias.
\item $\forall i<\lambda, D_i=^*D^*_i\subseteq D_i$. \end{enumerate} \end{lemma}
\noindent\textit{Proof}. By removing finitely many elements from every $D_i$, we can assume that $otp(D_i)$ is a limit ordinal. If $D_i=\emptyset$ for every $i$, then the claim is trivial. Otherwise, since $D_i$ is a Mathias set, ${\rm sup}(D_i)\in \overline{X}_A$. Denote $D=\underset{i<\lambda}{\bigcup}D_i$ and $\nu^*={\rm sup}(D)>\lambda$. Note that $\nu^*\in \overline{X}_A$, since $\nu^*={\rm sup}(\{{\rm sup}(D_i)\mid i<\lambda\})$ and $\overline{X}_A$ is closed.
Proceed by induction on $\nu^*$. By Lemma \ref{FiniteNoise}, each $D_i\setminus C_G$ is finite, so $|D\setminus C_G|\leq\lambda<\nu^*$. We would like to remove the noise accumulated in $D$ by intersecting it with sets in $\cap\vec{U}(\nu^*)$. Since $\nu^*\in Lim(C_G)$, we can apply \ref{MagidorHousdorf} to $D\setminus C_G$ and find a set $Y^*\in\cap\vec{U}(\nu^*)$ such that $Y^*\cap(D\setminus C_G)=\emptyset$. Denote $D^*=D\cap Y^*\subseteq C_G$. Note that $D^*\in V[A]$, since $D\in V[A]$ and $Y^*\in V$.
Consider the set $$ Z^{(0)}=\{\nu<\nu^*\mid Y^*\cap\nu\in\cap\vec{U}(\nu)\}.$$ To see that $Z^{(0)}\in \cap\vec{U}(\nu^*)$, let $i<o^{\vec{U}}(\nu^*)$; then $j_{U(\nu^*,i)}(Y^*)\cap\nu^*=Y^{*}\in\underset{\xi<i}{\bigcap}U(\nu^*,\xi)$. By coherency, the order of $\nu^*$ in $j_{U(\nu^*,i)}(\vec{U})$ is $i$, which implies that $$\underset{\xi<i}{\cap}U(\nu^*,\xi)=\cap j_{U(\nu^*,i)}(\vec{U})(\nu^*).$$ By definition $\nu^*\in j_{U(\nu^*,i)}(Z^{(0)})$, thus $Z^{(0)}\in U(\nu^*,i)$ for every $i<o^{\vec{U}}(\nu^*)$, i.e.\ $Z^{(0)}\in\bigcap\vec{U}(\nu^*)$. By Proposition \ref{genericproperties}(3), there is $\eta_0<\nu^*$ such that $C_G\cap (\eta_0,\nu^*)\subseteq Z^{(0)}$.
Consider the sequence of Mathias sets ${\langle} D_i\cap\eta_0\mid i<\lambda{\rangle}$ and apply the induction hypothesis to it to find $\langle D'_i\mid i<\lambda\rangle$ such that \begin{enumerate}
\item $\underset{i<\lambda}{\bigcup}D'_i$ is Mathias.
\item $D_i\cap\eta_0=^*D'_i\subseteq \eta_0$. \end{enumerate} Define $$D^*_i=D'_i\uplus (D_i\cap Y^{*}\setminus \eta_0).$$
Let us argue that $\langle D^*_i\mid i<\lambda{\rangle}$ is as wanted: to see condition $(1)$, note that the set $$\underset{i<\lambda}{\cup}D_i^*=(D^*\setminus\eta_0)\cup (\underset{i<\lambda}{\cup}D_i')$$ is a Mathias set, being the union of two Mathias sets.
For condition $(2)$, it is clear that $D_i^*\subseteq^* D_i$. Toward a contradiction, assume that there are $i<\lambda$ and a minimal $\delta\leq{\rm sup}(D_i)$ such that
$$|(D_i\cap\delta)\setminus (D^*_i\cap\delta)|\geq\omega.$$ By the definition of $D^*_i$, $\delta>\eta_0$ and $\delta\in Lim(D_i)$. By the definition of $\eta_0$, $\delta\in (C_G\cap(\eta_0,\nu^*))\cup\{\nu^*\}\subseteq Z^{(0)}\cup\{\nu^*\}$, which means that $Y^{*}\cap\delta\in \bigcap\vec{U}(\delta)$. Since $D_i$ is Mathias, there is $\xi<\delta$ such that $D_i\cap(\xi,\delta)\subseteq Y^{*}$; in particular $$D_i\cap(\xi,\delta)=D_i\cap Y^{*}\cap(\xi,\delta)=D^*_i\cap(\xi,\delta).$$ So $(D_i\cap\delta)\setminus(D^*_i\cap\delta)=(D_i\cap\xi)\setminus(D^*_i\cap\xi)$, contradicting the minimality of $\delta$. $\blacksquare$
\begin{corollary} \label{stabilization of star increasing sequences}
Let $\langle D_i\mid i<\theta\rangle\in V[A]$ such that $\theta<\kappa^+$ and: \begin{enumerate} \item $D_i$ is a Mathias set.
\item $D_i\cap\kappa^*=F_{\kappa^*}$, where $V[F_{\kappa^*}]=V[A]\cap V[C_G\cap\kappa^*]$.
\item$ {\langle} D_i\mid i<\theta{\rangle}$ is $\subseteq^*$-increasing. \end{enumerate} Then there is $\langle D^*_i\mid i<\theta\rangle\in V[A]$ such that: \begin{enumerate}
\item $\underset{i<\theta}{\bigcup}D^*_i$ is a Mathias set.
\item $\forall i<\theta, D_i=^*D^*_i\subseteq D_i$.
\item $D_i^*\cap \kappa^*=F_{\kappa^*}$ \end{enumerate}
\end{corollary}
\noindent\textit{Proof}. Let $\lambda=cf^{V[A]}(\theta)\leq\kappa$. Since $\kappa$ is singular in $V[A]$, $\lambda<\kappa$; let ${\langle} \theta_i\mid i<\lambda{\rangle}\in V[A]$ be cofinal in $\theta$. We split each $D_{\theta_i}$ into three intervals: $$D_{\theta_i}=(D_{\theta_i}\cap\kappa^*)\uplus (D_{\theta_i}\cap(\kappa^*,\lambda))\uplus (D_{\theta_i}\setminus\lambda).$$ Denote these sets by $A_i,B_i,C_i$ respectively. By assumption, $A_i$ is constantly $F_{\kappa^*}$. Apply \ref{Noise} to the sequence ${\langle} C_i\mid i<\lambda{\rangle}$ to obtain ${\langle} C^*_i\mid i<\lambda{\rangle}\in V[A]$ such that $C^*_i=^*C_i$ and $C^*:=\cup_{i<\lambda}C^*_i$ is Mathias. As for the sequence ${\langle} B_i\mid i<\lambda{\rangle}$, either $\lambda\leq\kappa^*$, in which case $B_i=\emptyset$, or $\lambda>\kappa^*$, and by removing finitely many points from $B_i$ we can assume that ${\rm sup}(B_i)\in Lim(B_i)\subseteq \overline{X}_A\setminus\kappa^*\subseteq X_A$, i.e.\ ${\rm sup}(B_i)$ is singular in $V[A]$. Since $\lambda>\kappa^*$ is regular in $V[A]$, it follows that $\lambda\notin X_A$, hence ${\rm sup}(B_i)\leq{\rm max}(X_A\cap\lambda)=:\mu<\lambda$. Since $\mu>\aleph_0$ is a strong limit cardinal and $\lambda$ is regular, the sequence ${\langle} B_i\mid i<\lambda{\rangle}$ satisfies the assumption of
Theorem \ref{Modfinitestab}, hence there are $\lambda'<\lambda$ and a set $B^*$ such that for every $\lambda'\leq\delta<\lambda$, $B_\delta=^*B^*$.
Note that $F_{\kappa^*}\cup B^*\cup C^*$ is a Mathias set as the union of finitely many of them. Let $D_i^*:=D_i\cap (F_{\kappa^*}\cup B^*\cup C^*)$.
First, since $\cup_{i<\theta}D^*_i\subseteq F_{\kappa^*}\cup B^*\cup C^*$, and $F_{\kappa^*}\cup B^*\cup C^*$ is Mathias, $\cup_{i<\theta}D^*_i$ is also Mathias by the criterion of \ref{FiniteNoise}. Also $(3)$ follows trivially. To see $(2)$, it suffices to see that on each interval $$D_i^*\cap\kappa^*=D_i\cap\kappa^*, \ D_i^*\cap (\kappa^*,\lambda)=^*D_i\cap (\kappa^*,\lambda), \ D^*_i\setminus\lambda=^*D_i\setminus\lambda.$$ Indeed, $D_i^*\cap\kappa^*=D_i\cap\kappa^*=F_{\kappa^*}$. Find $\lambda'\leq\delta<\lambda$ such that $i<\theta_{\delta}$; then $D_i\subseteq^* D_{\theta_\delta}$. In particular, $$D_i\setminus\lambda\subseteq^* D_{\theta_\delta}\setminus\lambda= C_\delta=^*C^*_\delta\subseteq C^*\text{ and }D_i\cap(\kappa^*,\lambda)\subseteq^* D_{\theta_\delta}\cap(\kappa^*,\lambda)= B_\delta=^*B^*.$$ So $$D_i\setminus\lambda=^*D_i\cap C^*\setminus\lambda=D^*_i\setminus\lambda\text{ and }D_i\cap(\kappa^*,\lambda)=^*D_i\cap B^*\cap(\kappa^*,\lambda)=D^*_i\cap(\kappa^*,\lambda).$$ Therefore $D_i=^*D^*_i$.$\blacksquare$ \begin{corollary}\label{subsets star bound} Let $\langle D_i\mid i<\theta\rangle\in V[A]$ such that $\theta<\kappa^+$ and: \begin{enumerate} \item $D_i$ is a Mathias set.
\item $D_i\cap\kappa^*=F_{\kappa^*}$.
\item$ {\langle} D_i\mid i<\theta{\rangle}$ is $\subseteq^*$-increasing. \end{enumerate} Then in $V[A]$ there is a Mathias set $E\subseteq{\rm sup}(\cup_{i<\theta}D_i)$ which is a $\subseteq^*$-bound for the sequence $\langle D_i\mid i<\theta\rangle$ and such that $E\cap\kappa^*=F_{\kappa^*}$. \end{corollary}
\noindent\textit{Proof}. Simply apply \ref{stabilization of star increasing sequences} to find ${\langle} D^*_i\mid i<\theta{\rangle}$; then $E=\cup_{i<\theta}D^*_i$ is as wanted.$\blacksquare$
As for problem (II) mentioned at the beginning of this section, the first step is to take $C^*$, the union of the $C_i$'s. Then we $\subseteq^*$-increase each $C^*\cap\alpha_i$ to a set $C^{(1)}_i\supseteq^* C^*\cap\alpha_i$ such that $C_i\in V[C^{(1)}_i]$. Repeating this process transfinitely, it will eventually stabilize, yielding the desired set. Note also that this construction must take place inside $V[A]$.
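Schematically, and suppressing the noise-removal bookkeeping of the previous lemmas, the intended iteration is the following (this is only a sketch of the construction carried out formally below): $$C^{(0)}_i:=C_i,\qquad C^{(\xi)}:=\bigcup_{i}C^{(\xi)}_i,\qquad C^{(\xi)}\cap\alpha_i\subseteq^* C^{(\xi+1)}_i\ \text{ with }\ C_i\in V[C^{(\xi+1)}_i],$$ taking $\subseteq^*$-bounds at limit stages. Each column ${\langle} C^{(\xi)}_i\mid \xi{\rangle}$ is a $\subseteq^*$-increasing sequence of subsets of $\alpha_i$, so it eventually stabilizes modulo finite, and any row past the stabilization point of all columns yields the desired set.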
The following three propositions formally describe this process; we prove them by induction on $\nu\in X_A$. Recall that under the assumptions of this section, $\kappa\in X_A$. \begin{theorem}\label{Making a sequence stae increasing} Assume that $\nu\in X_A$, $\theta<\nu^+$ and let $\langle D_i\mid i<\theta\rangle\in V[A]$ be such that: \begin{enumerate}
\item $D_i\subseteq\theta_i<\nu$ is a Mathias set and ${\langle} \theta_i\mid i<\theta{\rangle}$ is non-decreasing.
\item $D_i\cap\kappa^*=F_{\kappa^*}$.
\end{enumerate}
Then there is $\langle D^*_i\mid i<\theta\rangle\in V[A]$ such that \begin{enumerate}
\item $D^*:=\underset{i<\theta}{\bigcup}D^*_i$ is Mathias.
\item $D_i\subseteq^*D^*_i\subseteq\theta_i$ and $D_i\in V[D^*_i]$.
\item $D^*_i\cap \kappa^*=F_{\kappa^*}$.
\item $\langle D^*_i\mid i<\theta\rangle$ is $\subseteq^*$-increasing. \end{enumerate} \end{theorem}
\begin{theorem}\label{maintheoremnonstab} Assume that $\nu\in X_A$, $\theta<\nu^+$ and let $\langle D_i\mid i<\theta\rangle\in V[A]$ such that:
\begin{enumerate}
\item $D_i\subseteq\theta_i<\nu$ is a Mathias set bounded in $\nu$, and ${\langle} \theta_i\mid i<\theta{\rangle}$ is non-decreasing.
\item $D_i\cap\kappa^*=F_{\kappa^*}$.
\end{enumerate}
Then there is $\langle D^*_i\mid i<\theta\rangle\in V[A]$ such that \begin{enumerate}
\item $D^*:=\underset{i<\theta}{\bigcup}D^*_i$ is Mathias.
\item $\forall i<\theta. D^*_i\in V[D^*]$.
\item $D_i\subseteq^*D^*_i\subseteq\theta_i$ and $D_i\in V[D^*_i]$.
\item $D^*_i\cap \kappa^*=F_{\kappa^*}$.
\item $\langle D^*_i\mid i<\theta\rangle$ is $\subseteq^*$-increasing. \end{enumerate} \end{theorem}
\begin{proposition}\label{boundedUnionOfgenerics} Assume that $\nu\in X_A$ and $D,D'\in V[A]$ are such that: \begin{enumerate}
\item $D,D'\subseteq\nu$ are Mathias sets.
\item $D\cap\kappa^*=D'\cap\kappa^*=F_{\kappa^*}$. \end{enumerate} Then there is $D^*\in V[A]$ such that \begin{enumerate} \item $D^*$ is a Mathias set.
\item $D\cup D'\subseteq D^*\subseteq {\rm sup}(D\cup D')$.
\item $D,D'\in V[D^*]$.
\item $D^*\cap\kappa^*=F_{\kappa^*}$
\end{enumerate} \end{proposition}
As mentioned before, the proofs of \ref{Making a sequence stae increasing}, \ref{maintheoremnonstab} and \ref{boundedUnionOfgenerics} are by induction on $\nu$. For $\nu\in X_A$ denote: \begin{enumerate}
\item $(18)_\nu$ is Theorem \ref{Making a sequence stae increasing} for $\nu$.
\item $(19)_{\nu}$ is Theorem \ref{maintheoremnonstab} for $\nu$.
\item $(20)_{\nu}$ is Proposition \ref{boundedUnionOfgenerics} for $\nu$.
\end{enumerate}
Clearly, for every $\nu\leq\kappa^*$, $(18)_{\nu}+(19)_{\nu}+(20)_{\nu}$ holds. Assume that $\nu>\kappa^*$; in particular, $cf^{V[A]}(\nu)<\nu$. Inductively assume $(18)_{<\nu}+(19)_{<\nu}+(20)_{<\nu}$. The plan is to derive the induction step gradually from the following implications:
\begin{enumerate}
\item $(18)_{<\nu}+(19)_{<\nu}+(20)_{<\nu}\Longrightarrow (18)_{\nu}$.
\item $(18)_{\nu}+(19)_{<\nu}+(20)_{<\nu}\Longrightarrow (19)_{\nu}$.
\item $(18)_{\nu}+(19)_{\nu}+(20)_{<\nu}\Longrightarrow (20)_{\nu}$.
\end{enumerate}
\textit{Proof of implication 1 (Theorem \ref{Making a sequence stae increasing}).} Let us define inductively in $V[A]$ the sequence ${\langle} D^*_i\mid i<\theta{\rangle}$; define $D^*_0=D_0$. At a successor stage, the sets $D^*_\alpha,D_{\alpha+1}$ are bounded in $\nu$; apply the induction hypothesis $(20)_{\theta_{\alpha+1}}$ to these sets to find a Mathias set $D^*_{\alpha+1}$ such that $D^*_\alpha\cup D_{\alpha+1}\subseteq D^*_{\alpha+1}\subseteq\theta_{\alpha+1}$, $D^*_{\alpha+1}\cap\kappa^*=F_{\kappa^*}$ and $D_{\alpha+1}\in V[D^*_{\alpha+1}]$.
At a limit stage $\delta<\theta$, the sequence ${\langle} D^*_i\mid i<\delta{\rangle}$ is defined and $\subseteq^*$-increasing. By \ref{subsets star bound}, there is a Mathias set $E^*\subseteq{\rm sup}\{\theta_i\mid i<\delta\}\leq \theta_\delta<\nu$ with $E^*\cap\kappa^*=F_{\kappa^*}$ which is a $\subseteq^*$-bound. Again apply $(20)_{\theta_{\delta}}$ to $E^*,D_{\delta}$ to obtain $D^*_{\delta}\subseteq \theta_{\delta}$. Then $(2),(3),(4)$ are clear. At stage $\theta$, we also need to ensure $(1)$: by \ref{stabilization of star increasing sequences}, we can change the constructed ${\langle} D_i^*\mid i<\theta{\rangle}$ to ${\langle} D^{**}_i\mid i<\theta{\rangle}$ such that $D^*_i=^*D^{**}_i$, $D^{**}_i\cap\kappa^*=F_{\kappa^*}$ and $(1)$ holds. It suffices to note that $(2),(3),(4)$ still hold if we only change finitely many elements of each $D^*_i$.$\blacksquare_{\text{Implication 1}}$
\textit{Proof of implication 2 (Theorem \ref{maintheoremnonstab}).} In the second implication we assume the induction hypothesis and also $(18)_\nu$ which was derived by the first implication. The crucial difference between \ref{maintheoremnonstab} and \ref{Making a sequence stae increasing} is requirement $(2)$ that $D^*_i\in V[\cup_{j<\theta}D^*_j]$.
Apply $(18)_\nu$ to the sequence ${\langle} D_i\mid i<\theta{\rangle}$ to get $\langle D^0_i\mid i<\theta\rangle$ such that: \begin{enumerate} \item $\underset{i<\theta}{\bigcup}D^0_i$ is Mathias.
\item $D_i\subseteq^*D^0_i\subseteq\theta_i$ and $D_i\in V[D^0_i]$.
\item $D^0_i\cap\kappa^*=F_{\kappa^*}$.
\item $\langle D^0_i\mid i<\theta\rangle$ is $\subseteq^*$-increasing.
\end{enumerate} Define a matrix of sets $\langle D^\xi_i\mid i<\theta,\xi<\nu^+\rangle$ recursively on the rows $\xi<\nu^+$ such that: \begin{enumerate}
\item For each $\xi<\nu^+$, $\langle D^\xi_i\mid i<\theta{\rangle}$ is $\subseteq^*$- increasing. (Each row is $\subseteq^*$ increasing)
\item For each $i<\theta$, $\langle D^\xi_i\mid \xi<\nu^+\rangle$ is $\subseteq^*$-increasing. (Each column is $\subseteq^*$ increasing)
\item $D^\xi_i\subseteq\theta_i$ and $D_i\in V[D^\xi_i]$. (sets in column $i$ are subsets of $\theta_i$)
\item $D^{(\xi)}:=\underset{j<\theta}{\bigcup}D^\xi_j$ is Mathias. (The union of each row is a Mathias set)
\item $D^\xi_i\cap\kappa^*=F_{\kappa^*}$. (All the sets are the same up to $\kappa^*$)
\item For every $i<\theta$ and every $\xi<\nu^+$, $D^{(\xi)}\cap\theta_i\subseteq^* D^{\xi+1}_i$. (The $i$-th set in a successor row $\subseteq^*$-includes the union of the previous row up to $\theta_i$.) \end{enumerate} At a successor row, assume $\langle D^{\alpha}_i\mid i<\theta\rangle$ is defined. For each $i<\theta$, apply $(20)_{\theta_i}$ to $D_i$ and $D^{(\alpha)}\cap\theta_i$ to obtain sets $E^{(\alpha+1)}_i$ which satisfy $(2),(3),(5),(6)$. Apply $(18)_\nu$ to the sequence ${\langle} E^{(\alpha+1)}_i\mid i<\theta{\rangle}$ to obtain $E^{(\alpha+1)}_i\subseteq^*D^{\alpha+1}_i\subseteq\theta_i$; then $(1),(4)$ also hold without ruining $(2),(3),(5),(6)$.
For a limit $\delta<\nu^+$, the sequences $\langle D^{\rho}_i\mid i<\theta\rangle$ are defined for every $\rho<\delta$. For each $i<\theta$, the sequence ${\langle} D^{\rho}_i\mid \rho<\delta{\rangle}$ is $\subseteq^*$-increasing, hence by Corollary \ref{subsets star bound} there is a Mathias $E^{(\delta)}_i\subseteq\theta_i$ which is a $\subseteq^*$-bound; this ensures $(2),(5)$. Apply $(20)_{\theta_i}$ to $E^{(\delta)}_i$ and $D_i$ to obtain $F^{(\delta)}_i$, ensuring $(3)$, and finally apply $(18)_{\nu}$ to the sequence ${\langle} F^{(\delta)}_i\mid i<\theta{\rangle}$ to obtain the row ${\langle} D^{\delta}_i\mid i<\theta{\rangle}$, which satisfies $(1)-(5)$.
Hence the sequence $\langle D^{\xi}_j\mid j<\theta\rangle$ is defined for every $\xi<\nu^+$. For every column $j<\theta$, $\langle D^{\xi}_j\mid \xi<\nu^+\rangle$ is a $\subseteq^*$-increasing sequence of subsets of $\theta_j$, thus there is $\xi_j<\nu^+$ from which this sequence stabilizes modulo finite. Let $\xi^*={\rm sup}(\{\xi_j\mid j<\theta\})<\nu^+$.
Let us prove that ${\langle} D^{\xi^*}_i\mid i<\theta{\rangle}$ is as wanted. By the construction of the sequence, $(1),(3),(4),(5)$ of the theorem follow directly. To see $(2)$: for every $\xi^*\leq\xi'<\nu^+$ and every $i<\theta$, $D^{\xi^*}_i=^*D^{\xi'}_i$, in particular $D^{\xi^*+1}_i=^*D^{\xi^*}_i$. Hence $$D^{\xi^*}_i\subseteq D^{(\xi^*)}\cap\theta_i\subseteq^* D^{\xi^*+1}_i=^* D^{\xi^*}_i.$$ Hence $D^{\xi^*}_i=^* D^{(\xi^*)}\cap\theta_i\in V[D^{(\xi^*)}]$.$\blacksquare_{\text{Implication 2}}$
\textit{Proof of Implication 3 (Proposition \ref{boundedUnionOfgenerics}).} Assume the induction hypothesis, $(18)_\nu$ and $(19)_\nu$. Let us derive $(20)_\nu$. Let $cf^{V[A]}(\nu)=\lambda<\nu$ and fix a cofinal sequence ${\langle} \nu_i\mid i<\lambda{\rangle}\in V[A]$. For each $i<\lambda$, apply $(20)_{\nu_i}$ to find $$D\cap \nu_i,D'\cap\nu_i\subseteq E_i\subseteq \nu_i$$ such that $D\cap\nu_i,D'\cap\nu_i\in V[E_i]$ and $E_i\cap\kappa^*=F_{\kappa^*}$. Apply $(19)_\nu$ to the sequence ${\langle} E_i\mid i<\lambda{\rangle}$ to find a sequence ${\langle} E^*_i\mid i<\lambda{\rangle}$ such that $E_i\subseteq^* E_i^*$ and $E_i\in V[E^*]$, where $E^*:=\cup_{i<\lambda}E^*_i$ is a Mathias set. Then
$|(D\cup D')\setminus E^*|\leq\lambda$. As in the proof of \ref{lemmaforsubsetkappa}, in the model $V[E^*]$ we have $$\forall i<\lambda.\ D\cap\nu_i,D'\cap\nu_i\in V[E^*],$$ so the sequences ${\langle} D\cap \nu_i\mid i<\lambda{\rangle},{\langle} D'\cap\nu_i\mid i<\lambda{\rangle}$ can be coded as a single sequence of ordinals ${\langle} \delta_i\mid i<\lambda{\rangle}$ (fixing enumerations of $P^{V[E^*]}(\nu_i)$). By \ref{very short}, there is a Mathias set $R\in V[A]$ with $|R|\leq\lambda$ such that $V[R]=V[{\langle} \delta_i\mid i<\lambda{\rangle}]$. Apply \ref{countableinformation} to $(D\cup D')\setminus E^*, R$ and $E^*$ to find a Mathias $G\in V[A]$ such that $(D\cup D')\setminus \lambda, E^*\setminus\lambda\subseteq G $, $G\cap\lambda=F_\lambda$ and $E^*, R\in V[G]$. Let $G_0=F_{\kappa^*}\cup (G\cap(\kappa^*,\lambda))$; recall that $\lambda<\nu$, hence we can apply $(20)_\lambda$ to $G_0,(D\cup D')\cap \lambda$ and find $G_1\subseteq \lambda$ such that $(D\cup D')\cap\lambda,G_0\subseteq G_1$, $G_1\cap\kappa^*=F_{\kappa^*}$ and $G_0\in V[G_1]$. Finally let $$D^*=F_{\kappa^*}\cup(G_1\cap(\kappa^*,\lambda))\cup(G\setminus\lambda).$$
Clearly, $D^*$ is a Mathias set and $D^*\cap\kappa^*=F_{\kappa^*}$, thus $(1),(4)$ of Proposition \ref{boundedUnionOfgenerics} hold. For $(2)$, ${\rm sup}(D^*)={\rm sup}(G)={\rm sup}(D\cup D')$,
$$(D\cup D')\cap\kappa^*=F_{\kappa^*}\subseteq D^*, \ (D\cup D')\cap(\kappa^*,\lambda)\subseteq G_1\cap(\kappa^*,\lambda)\subseteq D^*$$ and $$(D\cup D')\setminus \lambda\subseteq G\setminus\lambda\subseteq D^*.$$ Hence $D\cup D'\subseteq D^*$.
Finally, to see $(3)$, $$D^*\cap \kappa^*=F_{\kappa^*},\ D^*\cap(\kappa^*,\lambda)=G_1\cap(\kappa^*,\lambda), \ D^*\setminus\lambda=G\setminus\lambda.$$ Hence $F_{\kappa^*},G_1\cap (\kappa^*,\lambda),G\setminus\lambda\in V[D^*]$, and $G_1\cap \kappa^*\in V[A]\cap V[C_G\cap\kappa^*]=V[F_{\kappa^*}]\subseteq V[D^*]$, so $G_1\in V[D^*]$. It follows that $G_0\in V[G_1]\subseteq V[D^*]$. By the definition of $G_0$, $G\cap(\kappa^*,\lambda)=G_0\setminus\kappa^*\in V[D^*]$, and clearly $G\cap\kappa^*\in V[F_{\kappa^*}]\subseteq V[D^*]$. Therefore $$G\cap\kappa^*,G\cap(\kappa^*,\lambda),G\setminus\lambda\in V[D^*],\text{ and so }G\in V[D^*].$$ By the definition of $G$, $E^*,R\in V[G]\subseteq V[D^*]$, hence ${\langle}\delta_i\mid i<\lambda{\rangle}$ and the enumerations of $P^{V[E^*]}(\nu_i)$ are in $V[D^*]$, so the sequences ${\langle} D\cap\nu_i\mid i<\lambda{\rangle},{\langle} D'\cap\nu_i\mid i<\lambda{\rangle}\in V[D^*]$. Therefore $D,D'\in V[D^*]$, as wanted.$\blacksquare_{\text{ Implication 3}}$
This concludes the induction for \ref{Making a sequence stae increasing}-\ref{boundedUnionOfgenerics} for every $\nu\in X_A$, and in particular for $\kappa$. Let us conclude the main result for subsets of $\kappa$ which do not stabilize: \begin{corollary}\label{resaultsubsetkappa} Assume that $o^{\vec{U}}(\kappa)<\kappa^+$, $(IH)$, $A\subseteq\kappa$, $A\in V[G]$ and $A$ does not stabilize. Then there is $C'\subseteq C_G$ such that $V[A]=V[C']$. \end{corollary}
\noindent\textit{Proof}. By \ref{nonstabcofinalitysubsetofkappa}, $\lambda:=cf^{V[A]}(\kappa)<\kappa$; let ${\langle}\beta_i\mid i<\lambda{\rangle}\in V[A]$ be cofinal in $\kappa$. By \ref{very short} there is a sequence of Mathias sets $\langle D'_i\mid i<\lambda\rangle\in V[A]$ such that $V[D'_i]=V[A\cap\beta_i]$ and $D'_i\subseteq\beta_i$; denote $D_i=(D'_i\setminus\kappa^*)\cup F_{\kappa^*}$. Then the sequence $\langle D_i\mid i<\lambda{\rangle}\in V[A]$ and $A\cap\beta_i\in V[D_i]$. Use \ref{maintheoremnonstab} to find $\langle D^*_i\mid i<\lambda\rangle$ and set $D^*=\underset{i<\lambda}{\cup}D^*_i$. Then $D^*$ is Mathias and therefore $D^*\subseteq^* C_G$. Let $C^*=C_G\cap D^*$; hence $C^*=^*D^*$ and $V[C^*]=V[D^*]$. Finally, for every $\alpha<\kappa$, find $i<\lambda$ such that $\alpha<\beta_i$. By the properties of $D^*$, $D_i\in V[D^*]$, hence $A\cap\beta_i\in V[D^*]$. Note that $A\cap\alpha=(A\cap\beta_i)\cap\alpha$ and therefore $A\cap\alpha\in V[D^*]=V[C^*]$. Finally, apply \ref{lemmaforsubsetkappa}.$\blacksquare$ \subsection{Subsets of $\kappa$ which stabilize} In this section assume that $o^{\vec{U}}(\kappa)<\kappa^+$ and $(IH)$; hence by \ref{VGenericCardinals}, $\zeta_0:=cf^{V[G]}(\kappa)<\kappa$. Let $A\in V[G]$ be a subset of $\kappa$ such that $A$ stabilizes, i.e., there is $\lambda<\kappa$ such that $$\forall\alpha<\kappa \ A\cap\alpha\in V[C_G\cap\lambda].$$ Note that if $A\in V[C_G\cap\beta]$ for some $\beta<\kappa$ then we can use $(IH)$, so we also assume that $A$ is fresh with respect to the model $V[C_G\cap\lambda]$. Again we would like to apply Lemma \ref{lemmaforsubsetkappa}; we will use freshness and work a little to prove $cf^{V[A]}(\kappa)<\kappa$, while finding $C^*$ is easy:
Increase $\lambda$ if necessary so that ${\rm max}\{\kappa^*,\zeta_0\}\leq\lambda<\kappa$. By Proposition \ref{CodingBoundedInformation}, find a Mathias set $F_\lambda\subseteq \lambda$ such that $V[F_\lambda]=V[A]\cap V[C_G\cap\lambda]$. Define $C^*=F_\lambda\cap C_G=^* F_\lambda$; then $C^*\in V[A]$ and $$\forall\alpha<\kappa. A\cap\alpha\in V[A]\cap V[C_G\cap \lambda]=V[F_\lambda]=V[C^*].$$ It remains to see that: \begin{proposition}
$cf^{V[A]}(\kappa)<\kappa$. \end{proposition}
\noindent\textit{Proof}. By \ref{definition of equivalent subalgebra}, let $\mathbb{R}\subseteq RO(\mathbb{M}[\vec{U}]\restriction\lambda)$ be such that $V[C^*]=V[H_{C^*}]$ for some $V$-generic filter $H_{C^*}\subseteq \mathbb{R}$, and denote the quotient forcing (Definition \ref{definition of quotient}) by $\mathbb{Q}:=(\mathbb{M}[\vec{U}]\restriction\lambda)/H_{C^*}$. To complete $V[C^*]$ to $V[G]$, it remains to force above $V[C^*]$ with $\mathbb{P}:=\mathbb{Q}\times\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$; let $H_{\mathbb{Q}}\times G\restriction(\lambda,\kappa]\subseteq \mathbb{P}$ be $V[C^*]$-generic such that $V[C^*][H_{\mathbb{Q}}\times G\restriction(\lambda,\kappa]]=V[G]$. Notice that for every $\lambda\leq\alpha<\kappa$ with $o^{\vec{U}}(\alpha)>0$ we have $$|\mathbb{Q}\times\mathbb{M}[\vec{U}]\restriction(\lambda,\alpha]|<{\rm min}\{\nu>\alpha\mid o^{\vec{U}}(\nu)=1\}.$$ Let $\lusim{A}$ be a $\mathbb{P}$-name for $A$ and assume that $$\Vdash_{\mathbb{P}} \lusim{A}\text{ is fresh}.$$ Let ${\langle} c_i\mid i<\zeta_0\rangle\in V[G]$ be a cofinal continuous subsequence of $C_G$ such that $c_0>\lambda$. Fix $\langle\lusim{c}_i\mid i<\zeta_0{\rangle}\in V[C^*]$, a sequence of $\mathbb{P}$-names for ${\langle} c_i\mid i<\zeta_0\rangle$. Find $p={\langle} p_0,p_1{\rangle}\in H_{\mathbb{Q}}\times G\restriction (\lambda,\kappa]$ such that $$p\Vdash_{\mathbb{P}} \langle \lusim{c}_i\mid i<\zeta_0{\rangle}\text{ is a cofinal continuous subsequence of }\lusim{C_G}.$$ For every $i<\zeta_0$ and $q\in\mathbb{Q}/p_0$, consider the set $D_{i,q}$ of all conditions $p_1\leq r\in \mathbb{M}[\vec{U}]\restriction (\lambda,\kappa]$ such that one of the following holds: \begin{enumerate}
\item $ \exists\alpha.\ {\langle} q,r{\rangle} \Vdash_{\mathbb{P}}\lusim{c}_i=\alpha\ \wedge\ \exists B. \ {\langle} q,r{\rangle}\Vdash_{\mathbb{P}} \lusim{A}\cap\alpha=B$. Denote this statement by $\phi_i(q,r)$.
\item For every $r'\geq r$, $\neg \phi_i(q,r')$. \end{enumerate} Then $D_{i,q}$ is clearly dense open. By the strong Prikry property there are $p_1\leq^*p_{i,q}$, $S_{i,q}$ and sets $A^s_{i,q}$ such that for every $t\in mb(S_{i,q})$, $p_{i,q}^{\smallfrown}{\langle} t,\vec{A}^t_{i,q}{\rangle}\in D_{i,q}$. Define $$g_{i,q}:mb(S_{i,q})\rightarrow\{0,1\}\text{ by }g_{i,q}(t)=1\leftrightarrow \phi_i(q,p_{i,q}^{\smallfrown}{\langle} t,\vec{A}^t_{i,q}{\rangle})\text{ holds}.$$ Then we can shrink $S_{i,q}$ to $T_{i,q}$ such that $g_{i,q}$ is constant on $mb(T_{i,q})$. Now for every $q\in\mathbb{Q}$ such that $g_{i,q}=1$, and every $s\in mb(T_{i,q})$, let $\alpha_i(q,s),A_i(q,s)$ be the values decided by ${\langle} q, p_{i,q}^{\smallfrown}{\langle} s,\vec{A}^s_{i,q}{\rangle}{\rangle}$ for $\lusim{c}_i,\lusim{A}\cap\lusim{c}_i$ respectively. Let $N_{i,q}=ht(T_{i,q})$; then $$\alpha_i(q,s)\in \{\kappa_1(p_{i,q}),...,\kappa_{l(p_{i,q})}(p_{i,q}),s(1),...,s(N_{i,q})\}.$$ We can extend $T_{i,q}$ if necessary so that ${\rm max}(s)\geq\alpha_i(q,s)$. In particular, $A_i(q,s)\subseteq{\rm max}(s)$.
Define by recursion $A_i(q,s)$ for $s\in T_{i,q}\setminus mb(T_{i,q})$. Let $s\in {\rm Lev}_{N_{i,q}-1}(T_{i,q})$; by ineffability, we can shrink ${\rm Succ}_{T_{i,q}}(s)$ and find $A_i(q,s)$ such that for every $\alpha\in{\rm Succ}_{T_{i,q}}(s)$, $A_i(q,s^{\smallfrown}\alpha)= A_i(q,s)\cap\alpha$. Generally, take $s\in T_{i,q}$ and assume that for every $\alpha$ in ${\rm Succ}_{T_{i,q}}(s)$, $A_i(q,s^{\smallfrown}\alpha)$ is defined. We can find a single $A_i(q,s)$ and shrink ${\rm Succ}_{T_{i,q}}(s)$ such that $$\forall\alpha\in{\rm Succ}_{T_{i,q}}(s). \ A_i(q,s^{\smallfrown}\alpha)\cap\alpha= A_i(q,s)\cap\alpha.$$ We abuse notation by denoting the shrunken trees by $T_{i,q}$. Extend $p_{i,q}\leq^* p^*_{i,q}$ and find $B^t_{i,q}\subseteq A^t_{i,q}$ such that extensions from $D_{T_{i,q},\vec{B}_{i,q}}$ are pre-dense above $p^*_{i,q}$, and use the $\leq^*$-closure of $\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$ to find a single $p^*$ such that for every $q\in\mathbb{Q}/p_0$ and $i<\zeta_0$, $p^*_{i,q}\leq^* p^*$; in particular $p_1\leq p^*$. As usual, shrink all the trees to $p^*$ and let $T_{i,q}$ be the resulting tree. \begin{claim*} For every $i<\zeta_0$ and $q\in\mathbb{Q}/p_0$ there is $q'\geq q$ such that $g_{i,q'}\restriction mb(T_{i,q})\equiv 1$, i.e., $$\forall t\in mb(T_{i,q}). \ \exists \alpha,B. \ {\langle} q',p^{*\smallfrown}{\langle} t,\vec{B}^t_{i,q'}{\rangle}{\rangle}\Vdash_{\mathbb{P}} \lusim{c}_i=\alpha \wedge \lusim{A}\cap\alpha=B.$$ \end{claim*}
\noindent\textit{Proof}. Let $p_0\leq q_0$. Find some ${\langle} q_0,p^*{\rangle}\leq {\langle} q,r{\rangle}$ and $\alpha$ such that $${\langle} q,r{\rangle}\Vdash_{\mathbb{P}} \lusim{c}_i=\alpha.$$ By the assumption on $\lusim{A}$, ${\langle} q,r{\rangle}\Vdash_{\mathbb{P}} \lusim{A}\text{ is fresh}$, which implies that there are some $B\in V[C^*]$ and some ${\langle} q',r'{\rangle}$ such that ${\langle} q,r{\rangle}\leq{\langle} q',r'{\rangle}\Vdash B=\lusim{A}\cap\alpha$. Find some $t\in T_{i,q'}$ such that $p^{*\smallfrown}{\langle} t,\vec{B}^t_{i,q'}{\rangle}$ and $r'$ are compatible; then a common extension witnesses that $g_{i,q'}(t)\neq 0$, hence $g_{i,q'}(t)=1$, as wanted.$\blacksquare_{\text{Claim}}$
Move to $V[A]$ and let us compare the sets $A_i(q,s)$ with $A$.
For every $i$ and $q$ such that $g_{i,q}=1$, define $\rho_q^i(k)$ for $k\leq N_{i,q}$. Let $$\rho^i_q(0)={\rm min}(A\Delta A_i(q,{\langle}{\rangle}))+1.$$ Recursively define $$\rho^i_q(k+1)={\rm sup}\{{\rm min}(A\Delta A_i(q,{\langle}\delta_1,...,\delta_k{\rangle}))+1\mid {\langle}\delta_1,...,\delta_k{\rangle}\in {\rm Lev}_k(T_{i,q})\cap\prod_{j=1}^k\rho^i_q(j)\}.$$ Finally we let $$\rho^i(k)={\rm sup}\{\rho^i_q(k)\mid q\in\mathbb{Q}\wedge g_{i,q}=1\}.$$ By the claim, for each $i<\zeta_0$ there is $q_i\in H_{\mathbb{Q}}$ such that $g_{i,q_i}=1$, and since $D_{T_{i,q_i},\vec{B}_{q_i,i}}$ is pre-dense, there is some $\vec{c}_i\in mb(T_{i,q_i})$ such that ${\langle} q_i,p^{*\smallfrown}{\langle}\vec{c}_i,\vec{B}^{\vec{c}_i}_{q_i,i}{\rangle}{\rangle}\in H_{\mathbb{Q}}\times G\restriction(\lambda,\kappa]$. By the assumption on $T_{i,q_i}$, ${\rm max}(\vec{c}_i)\geq (\lusim{c}_i)_{H_{\mathbb{Q}}\times G\restriction(\lambda,\kappa]}$. Let us argue that for every $k\leq N_i$ (where $N_i:=N_{i,q_i}$), $\rho^i(k)> {\rm min}\{c_i,\vec{c}_i(k)\}$.
By the construction of the tree $T_{i,q_i}$, $A\cap c_i=A_i(q_i,\vec{c}_i)\cap c_i$. For every $j\leq N_i$, by definition, $$A_i(q_i,\vec{c}_i\restriction j)\cap \vec{c}_i(j)=A_i(q_i,\vec{c}_i\restriction j+1)\cap\vec{c}_i(j).$$ It follows that for every $j\leq N_i$, $$A_i(q_i,\vec{c}_i\restriction j)\cap {\rm min}\{c_i,\vec{c}_i(j)\}=A\cap{\rm min}\{c_i,\vec{c}_i(j)\}.$$
In particular, $A\cap{\rm min}\{c_i,\vec{c}_i(0)\}=A_i(q_i,{\langle}{\rangle})\cap {\rm min}\{c_i,\vec{c}_i(0)\}$.
Since $A\cap\rho^i(0)\neq A_i(q_i,{\langle}{\rangle})\cap\rho^i(0)$, it follows that ${\rm min}\{c_i,\vec{c}_i(0)\}<\rho^i(0)$.
Inductively assume that ${\rm min}\{c_i,\vec{c}_i(j)\}<\rho^i(j)$ for every $j\leq k$. If $c_i\leq \vec{c}_i(k)$ then clearly we are done. Otherwise, $\rho^i(j)>\vec{c}_i(j)$ for every $j\leq k$, which implies that $$\vec{c}_i\restriction\{1,...,k\}\in {\rm Lev}_k(T_{i,q_i})\cap\prod_{j=1}^k\rho^i(j)$$ and since $A_i(q_i,\vec{c}_i\restriction \{1,...,k\})\cap{\rm min}\{c_i,\vec{c}_i(k+1)\}=A\cap{\rm min}\{c_i, \vec{c}_i(k+1)\}$, then $${\rm min}\{c_i,\vec{c}_i(k+1)\}<{\rm min}(A_i(q_i,\vec{c}_i\restriction \{1,...,k\})\Delta A)\leq \rho^i(k+1).$$ Since $\vec{c}_i(N_i)\geq c_i$, it follows that $\rho^i(N_i)>c_i$.
Next we argue that $\rho^i(k)<\kappa$. Again by induction on $k$,
$\rho^i_q(0)<\kappa$ for every $q\in\mathbb{Q}$ with $g_{i,q}=1$, since $A\neq A_i(q,{\langle}{\rangle})$, as $A_i(q,{\langle}{\rangle})\in V[C^*]$ but $A\notin V[C^*]$. Since $|\mathbb{Q}|<\kappa$ and $\kappa$ is regular in $V[C^*]$, it follows that $\rho^i(0)<\kappa$.
Assume that it holds for every $j\leq k$. Toward a contradiction, assume that $\rho^i(k+1)=\kappa$. Again, since $|\mathbb{Q}|<\kappa$ and $\kappa$ is regular in $V[C^*]$, there must be $q\in\mathbb{Q}$ such that $g_{i,q}=1$ and $\rho^i_q(k+1)=\kappa$. Consider the collection $$\{A_i(q,{\langle}\alpha_1,...,\alpha_k{\rangle})\mid{\langle}\alpha_1,...,\alpha_k{\rangle}\in {\rm Lev}_k(T_{i,q})\cap\prod_{j=1}^k \rho^i(j)\}\in V[C^*].$$ Then for every $\gamma<\kappa$ we can pick distinct $\vec{\alpha}_1,\vec{\alpha}_2\in {\rm Lev}_k(T_{i,q})\cap\prod_{j=1}^k \rho^i(j)$ such that $A_i(q,\vec{\alpha}_1)\neq A_i(q,\vec{\alpha}_2)$, but $A_i(q,\vec{\alpha}_1)\cap\gamma=A_i(q,\vec{\alpha}_2)\cap\gamma$.
To see that there are such $\vec{\alpha}_1,\vec{\alpha}_2$: by the assumption that $\rho^i_q(k+1)=\kappa$, there is $\vec{\alpha}_1$ such that $\eta_1:={\rm min}(A\Delta A_i(q,\vec{\alpha}_1))>\gamma$,
hence $A_i(q,\vec{\alpha}_1)\cap\gamma=A\cap\gamma$. Let $\vec{\alpha}_2$ be such that ${\rm min}(A\Delta A_i(q,\vec{\alpha}_2))>\eta_1$.
In particular, $A_i(q,\vec{\alpha}_1)\neq A_i(q,\vec{\alpha}_2)$, but $A_i(q,\vec{\alpha}_1)\cap\gamma=A\cap\gamma=A_i(q,\vec{\alpha}_2)\cap\gamma$. Since this is defined in $V[C^*]$, where $\kappa$ is still measurable, and the number of pairs ${\langle}\vec{\alpha}_1,\vec{\alpha}_2{\rangle}$ is bounded by the induction hypothesis, we can find unboundedly many $\gamma$'s with the same $\vec{\alpha}_1,\vec{\alpha}_2$, which is clearly a contradiction.
So we found a sequence $\langle\rho^i(N_i)\mid i<\zeta_0\rangle\in V[A]$ such that $\rho^i(N_i)>c_i$. Hence $cf^{V[A]}(\kappa)\leq\zeta_0<\kappa$.$\blacksquare$
As a result of this section we obtain the following: \begin{corollary}\label{resultforstabsubset}
Assume $o^{\vec{U}}(\kappa)<\kappa^+$ and $(IH)$. Let $A\in V[G]$, $A\subseteq\kappa$, be such that $A$ stabilizes. Then there is $C'\subseteq C_G$ such that $V[A]=V[C']$. \end{corollary}
\section{The argument for a general set}
Recall that the main theorem of this paper is: \\ \\
\textbf{Theorem 1.1} \textit{Let $\vec{U}$ be a coherent sequence with maximal measurable $\kappa$, such that $o^{\vec{U}}(\kappa)<\kappa^+$. Assume the inductive hypothesis:}
$$(IH)\quad\text{For every }\delta<\kappa,\text{ any coherent sequence }\vec{W}\text{ with maximal measurable }\delta,\text{ and any set of ordinals}$$
$$A\in V[H]\text{ for }H\subseteq\mathbb{M}[\vec{W}],\text{ there is }C\subseteq C_H\text{ such that }V[A]=V[C].$$ \textit{Then for every $V$-generic filter $G\subseteq\mathbb{M}[\vec{U}]$ and any set of ordinals $A\in V[G]$, there is $C\subseteq C_G$ such that $V[A]=V[C]$.}
\vskip 0.2 cm
\begin{remark} The authors would like to thank Gunther Fuchs for pointing out that \ref{MainResaultPartwo} does not automatically generalize to every set in $V[G]$. For example, if $V=L$ and ${\langle} c_n\mid n<\omega{\rangle}$ are $\omega$ Cohen reals over $L$ (which is equivalent to adding a single real) then certainly $A:=\{c_n\mid n<\omega\}\in V[{\langle} c_n \mid n<\omega{\rangle}]$, but the minimal model containing both $L$ and $A$ is $L(A)$, which is a model of $ZF$ rather than $ZFC$. This situation can also occur in Prikry-type forcings; namely, there are models of $ZF+\neg AC$ which are intermediate to a Prikry forcing extension. Suppose that ${\langle} c_n\mid n<\omega{\rangle}$ is a Prikry sequence over $V$, split $\omega$ into $\omega$-many infinite disjoint sets ${\langle} T_n\mid n<\omega{\rangle}$ and let $D_n=\{c_m\mid m\in T_n\}$. Now consider $R_n=\{t\mid t\text{ is a finite change of }D_n\}$; then clearly $\{ R_n\mid n<\omega\}\in V[G]$. Let $\mathcal{G}$ be the group of all permutations of the Prikry forcing permuting the $R_n$'s, generated by permutations of $\omega$. Let $\mathcal{F}$ be the filter generated by the sets $\text{fix}(E):=\{\pi\in \mathcal{G}\mid \forall x\in E.\ \pi(x)=x\}$ for finite sets $E\subseteq \omega$. Consider the symmetric submodel $N$ of $V[G]$ (see \cite[Chapter 15]{Jech2003}); then $\{R_n\mid n<\omega\}\in N$ and $V\subseteq N$, but $N$ is a model of $ZF$ which fails to satisfy the axiom of choice. \end{remark} Let $A$ be a set of ordinals. We prove Theorem \ref{MainResaultParttwo} by induction on $\lambda:={\rm sup}(A)$. If $\lambda\leq\kappa$ then apply \ref{very short}, \ref{resaultsubsetkappa}, \ref{resultforstabsubset}. Assume that $\lambda>\kappa$, and let us first resolve the induction step for $cf^{V[G]}(\lambda)\leq\kappa$: \begin{proposition} Assume $o^{\vec{U}}(\kappa)<\kappa^+$, $(IH)$, and $cf^{V[G]}(\lambda)\leq\kappa$. Then there is $C\subseteq C_G$ such that $V[A]=V[C]$. \end{proposition}
\noindent\textit{Proof}. Since $\kappa$ is singular in $V[G]$, $cf^{V[G]}(\lambda)<\kappa$. Since $\mathbb{M}[\vec{U}]$ satisfies $\kappa^+$-cc, we must have that $\nu:=cf^V(\lambda)\leq\kappa$. Fix $\langle\gamma_i \mid i<\nu\rangle\in V$ cofinal in $\lambda$. Work in $V[A]$; for every $i<\nu$ find $d_i\subseteq \kappa$ such that $V[d_i]=V[A\cap\gamma_i]$. By induction, there exists $C^*\subseteq C_G$ such that $V[\langle d_i\mid i<\nu\rangle]=V[C^*]$ (note that the $d_i$'s are subsets of $\kappa$, so we can code ${\langle} d_i\mid i<\nu{\rangle}$ as a sequence of pairs in $V$). Therefore: \begin{enumerate} \item $\forall i<\nu \ A\cap\gamma_i\in V[C^*]$, \item $C^*\in V[A]$. \end{enumerate} Work in $V[C^*]$; for $i<\nu$ fix a bijection $\pi_i:2^{\gamma_i}\leftrightarrow P^{V[C^*]}(\gamma_i)$. Find $\delta_i$ such that $\pi_i(\delta_i)=A\cap\gamma_i$.
By the $\kappa^+$-cc of $\mathbb{M}[\vec{U}]$, there is a function $F:\nu\rightarrow P(\lambda)$ in $V$ such that for every $i<\nu$, $\delta_i\in F(i)$ and $|F(i)|\leq\kappa$. Let $\epsilon_i<\kappa$ be the index of $\delta_i$ inside $F(i)$. Find $C''\subseteq C_G$ such that $V[C'']=V[\langle \epsilon_i\mid i<\nu\rangle]$. Finally, we can find $C'\subseteq C_G$ such that $V[C']=V[C^*,C'']$. To see that $V[A]=V[C']$: clearly $C^*\in V[A]$ and therefore ${\langle} \pi_i,\delta_i\mid i<\nu{\rangle}\in V[A]$. Since $F\in V$, ${\langle}\epsilon_i\mid i<\nu{\rangle}\in V[A]$, hence $C''\in V[A]$. It follows that $C'\in V[A]$. For the other direction, $C^*,C''\in V[C']$, so ${\langle}\epsilon_i\mid i<\nu{\rangle}\in V[C']$, and since $F\in V$, ${\langle}\delta_i\mid i<\nu{\rangle}\in V[C']$. Since $C^*\in V[C']$, also ${\langle}\pi_i\mid i<\nu{\rangle}\in V[C']$, so ${\langle}\pi_i(\delta_i)\mid i<\nu{\rangle}\in V[C']$. It follows that $A=\cup_{i<\nu}\pi_i(\delta_i)\in V[C']$. $\blacksquare$
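The direction $A\in V[C']$ of the proof above can be displayed schematically; the following is merely a recap of the chain of computations, in the notation of the proof (each step is computable in $V[C']$ because $F\in V$ and $C^*\in V[C']$):

```latex
% Recovering A inside V[C']; the converse direction reverses the
% first two arrows, using C^*,C'' \in V[A].
$$C'\ \leadsto\ C^*,C''\ \leadsto\ {\langle}\epsilon_i\mid i<\nu{\rangle}
  \ \leadsto\ {\langle}\delta_i\mid i<\nu{\rangle}
  \ \leadsto\ {\langle}\pi_i(\delta_i)\mid i<\nu{\rangle}
  \ \leadsto\ A=\bigcup_{i<\nu}\pi_i(\delta_i).$$
```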
\subsection{The Induction step for $cf(\lambda)>\kappa$}
Assume that $o^{\vec{U}}(\kappa)<\kappa^+$ and $(IH)$. The idea for the induction step where ${\rm sup}(A)=\lambda$ with $cf(\lambda)>\kappa$ (typical example is $\lambda=\kappa^+$) is the following: \begin{enumerate}
\item There is $C^*\subseteq C_G$ such that $C^*\in V[A]$ and for every $\alpha<\lambda. A\cap\alpha\in V[C^*]$.
\item The quotient forcing $\mathbb{M}[\vec{U}]/C^*$ (which completes $V[C^*]$ to $V[G]$) is $\kappa^+$-cc (and therefore $cf(\lambda)$-cc) in $V[G]$. \end{enumerate} Then we will apply the following theorem: \begin{theorem}\label{no fresh subsets} Let $W\models ZFC$ and let $\mathbb{P}\in W$ be a forcing notion. Let $T\subseteq\mathbb{P}$ be any $W$-generic filter and $\theta$ a regular cardinal in $W[T]$. Assume $\mathbb{P}$ is $\theta$-cc in $W[T]$. Then in $W[T]$ there are no fresh subsets with respect to $W$ of cardinals $\lambda$ such that $cf(\lambda)=\theta$. \end{theorem} \begin{remark} Note that it is crucial that $\mathbb{P}$ is $\theta$-cc in the generic extension; otherwise there are trivial counterexamples. Namely, the forcing which adds a branch through a Suslin tree is $ccc$, but the branch added is a fresh subset of $\omega_1$. \end{remark}
\noindent\textit{Proof}.
Toward a contradiction, assume that $A\in W[T]\setminus W$ is a fresh subset of $\lambda$ and $cf(\lambda)=\theta$. Pick a name $\lusim{A}$ for $A$ and work within $W[T]$. We define recursively a sequence of conditions ${\langle} r_i,s_i\mid i< \theta{\rangle}$ and a sequence of ordinals ${\langle} \beta_i\mid i<\theta{\rangle}$. Let $r_0\in T$ be such that $r_0\Vdash \lusim{A}\text{ is fresh}$. There must be $\beta_0<\lambda$ such that $r_0$ does not force $\lusim{A}\cap\beta_0=A\cap\beta_0$. Otherwise, $A=\cup\{B\mid \exists\beta<\lambda. r_0\Vdash \lusim{A}\cap\beta=B\}\in W$, contradicting the fact that $A\notin W$. Hence one can find $B_0\neq A\cap\beta_0$ and $r_0\leq s_0$ such that $s_0\Vdash \lusim{A}\cap\beta_0=B_0$.
Assume $r_i,s_i,\beta_i$ are defined for every $i<j<\theta$. Let $\beta'_j:={\rm sup}\{\beta_i\mid i<j\}<\lambda$ and find $r_j\in T$ such that $r_0\leq r_j\Vdash \lusim{A}\cap\beta'_j=A\cap\beta'_j$. Such $r_j$ exists since $A$ is fresh. Argue as before to find $\beta_j<\lambda$, $B_j\neq A\cap\beta_j$ and $s_j\geq r_j$ such that $s_j\Vdash \lusim{A}\cap\beta_j=B_j$. The contradiction is obtained by noticing that ${\langle} s_j\mid j<\theta{\rangle}$ is an antichain of size $\theta$. Indeed, if $i<j$ and $s_i,s_j\leq s$, then $s_i\leq s$ implies that $s\Vdash\lusim{A}\cap \beta_i=B_i$; also, since $r_j\leq s_j\leq s$ and $\beta_i\leq\beta'_j$, $s\Vdash \lusim{A}\cap \beta_i=A\cap \beta_i\neq B_i$. Hence $s$ forces contradictory information.$\blacksquare$
\begin{corollary} Assume $(IH)$ and that $A\in V[G]$ stabilizes. Then there is $C\subseteq C_G$ such that $V[A]=V[C]$. \end{corollary}
\noindent\textit{Proof}. Let $\beta<\kappa$ be such that $\forall\alpha<{\rm sup}(A).\ A\cap\alpha\in V[G\restriction\beta]$. If $A\in V[G\restriction\beta]$, then we can apply $(IH)$ and we are done. Otherwise, $A\in V[G]$ is fresh with respect to the model $V[G\restriction\beta]$. The forcing completing $V[G\restriction\beta]$ to $V[G]$ is simply $\mathbb{M}[\vec{U}]\restriction(\beta,\kappa]$, which clearly is $\kappa^+$-cc in $V[G]$ (since $\kappa^+$ is regular in $V[G]$). This contradicts Theorem \ref{no fresh subsets}.$\blacksquare$
Assume that $A$ does not stabilize. Since we assumed that $o^{\vec{U}}(\kappa)<\kappa^+$, by the induction hypothesis on ${\rm sup}(A)=\lambda$ we can apply \ref{nonstabcofinality}(2) to conclude that $cf^{V[A]}(\kappa)<\kappa$.
Let ${\langle} \lambda_i\mid i<cf(\lambda){\rangle}$ be cofinal in $\lambda$; then for each $\alpha<cf(\lambda)$ we choose some $D_\alpha\subseteq C_G$ such that $V[A\cap\lambda_\alpha]=V[D_\alpha]$. In previous results (\cite{partOne},\cite{TomMoti}), $o^{\vec{U}}(\kappa)<\kappa$ and $|C_G|<\kappa$, therefore $2^{|C_G|}<\kappa<cf(\lambda)$, and it followed that there is some $D_\alpha$ that repeats cofinally many times. Here, since $2^{|C_G|}\geq\kappa^+$, we will need, as before, to somehow accumulate all the information in a $\subseteq^*$-increasing way. \begin{proposition}\label{SequenceIncreasing} Assume $o^{\vec{U}}(\kappa)<\kappa^+$, $(IH)$ and that $A\in V[G]$ does not stabilize. Let ${\langle} \lambda_i\mid i<cf(\lambda){\rangle}$ be cofinal in $\lambda$ and let $\kappa^*<\kappa$ be such that for every $\alpha\in C_G\setminus \kappa^*$, $o^{\vec{U}}(\alpha)<\alpha^+$. Then there is a sequence $\langle D_\alpha\mid \alpha<cf(\lambda)\rangle\in V[A]$ such that: \begin{enumerate}
\item $D_\alpha$ is a Mathias set, $D_\alpha\cap\kappa^*=F_{\kappa^*}$, where $V[F_{\kappa^*}]=V[A]\cap V[C_G\cap\kappa^*]$.
\item $\langle D_\alpha\mid \alpha<cf(\lambda)\rangle$ is $\subseteq^*$-increasing.
\item $A\cap \lambda_\alpha\in V[D_\alpha]$ \end{enumerate} \end{proposition}
\noindent\textit{Proof}. Work in $V[A]$. For every $\alpha<cf(\lambda)$, by the induction hypothesis, there is a Mathias set $D'_\alpha\subseteq^*C_G$ such that $A\cap\lambda_\alpha\in V[D'_\alpha]$ and $D'_\alpha\cap\kappa^*=F_{\kappa^*}$. Then $(1),(3)$ hold but $(2)$ might fail. Let us construct the sequence $\langle D_\alpha\mid \alpha<cf(\lambda)\rangle$ more carefully to ensure condition $(2)$: We go by induction on $\beta<cf(\lambda)$. Assume the sequence $\langle D_\alpha\mid \alpha<\beta\rangle$ is defined. If $\beta=\alpha+1$, then use Proposition \ref{boundedUnionOfgenerics} with $D_\alpha$ and $D'_\beta$ to find $D_\beta$ such that $D_\alpha\subseteq D_\beta$, $D'_\beta\in V[D_\beta]$ and $D_\beta\cap\kappa^*=F_{\kappa^*}$. If $\beta$ is limit, let $\delta=cf^{V[A]}(\beta)$ and let ${\langle} \beta_i\mid i<\delta{\rangle}\in V[A]$ be cofinal. If $\delta>\kappa$, then by \ref{Modfinitestab} the sequence ${\langle} D_{\beta_\alpha}\mid \alpha<\delta{\rangle}$ $=^*$-stabilizes on some Mathias set $E^*_\beta$; in particular, $E^*_\beta\cap\kappa^*=F_{\kappa^*}$, and since the sequence ${\langle} D_\alpha\mid \alpha<\beta{\rangle}$ is $\subseteq^*$-increasing, it also stabilizes on $E^*_\beta$. Then $E^*_{\beta}$ is a $\subseteq^*$-bound.
If $\delta\leq\kappa$, since $\kappa$ is singular in $V[A]$, then $\delta<\kappa$. Apply Lemma \ref{subsets star bound} to the sequence $\langle D_{\beta_\alpha}\mid \alpha<\delta\rangle$, to find a single $E^*_{\beta}\in V[A]$ Mathias which is a $\subseteq^*$-bound and $E^*_\beta\cap\kappa^*=F_{\kappa^*}$.
In any case, apply Lemma \ref{boundedUnionOfgenerics} to $E^*_\beta,D'_\beta$ and find a Mathias set $D_\beta$ such that $E^*_{\beta}\subseteq D_\beta$, $D'_\beta\in V[D_\beta]$ and $D_\beta\cap\kappa^*=F_{\kappa^*}$. Clearly the sequence $\langle D_\alpha\mid \alpha<cf(\lambda)\rangle$ is as wanted.$\blacksquare$
\begin{corollary}\label{Candiadate} There is $C^*\subseteq C_G$, such that $C^*\in V[A]$ and for every $\alpha<\lambda$, $A\cap\alpha\in V[C^*]$. \end{corollary}
\noindent\textit{Proof}. Consider the sequence $\langle D_\alpha\mid \alpha<cf(\lambda)\rangle\in V[A]$ from Proposition \ref{SequenceIncreasing}, then use Theorem \ref{Modfinitestab} to find $\alpha^*<cf(\lambda)$ such that for every $\alpha^*\leq \beta<cf(\lambda)$, $D_\beta=^*D_{\alpha^*}$. In particular, $V[D_{\beta}]=V[D_{\alpha^*}]$. Then $C^*=D_{\alpha^*}\cap C_G$ is as wanted. $\blacksquare$
Let us turn to the proof that the quotient forcing is $\kappa^+$-cc in $V[G]$ (and therefore $cf(\lambda)$-cc). In \cite{partOne} and \cite{TomMoti}, in order to prove $\kappa^+$-cc of the quotient forcing, a concrete description of the quotient was given. Here we will give an abstract argument to avoid this description. \begin{example}\label{Example1}
It is tempting to try and discard the name $\lusim{C}^*$ and define $\mathbb{M}[\vec{U}]/C^*$ to consist of all $p$ such that there is a $V$-generic $H\subseteq\mathbb{M}[\vec{U}]$, with $p\in H$ and $C^*\subseteq C_H$. Formally, we suggest that $\mathbb{M}[\vec{U}]/C^*$ is
$$\mathbb{M}[\vec{U}]'=\{p\in\mathbb{M}[\vec{U}]\mid C^*\subseteq \kappa(p)\cup B(p)\}.$$
Such a forcing is not $\kappa^+$-cc even above $V[C^*]$. Assume that $o^{\vec{U}}(\kappa)=\kappa$; then $cf^{V[G]}(\kappa)=\omega$. Take for example any $C^*=\{c_n\mid n<\omega\}\subseteq C_G$ unbounded in $\kappa$ such that for every $n$, $o^{\vec{U}}(c_n)=0$; basically, it is a Prikry sequence for the measure $U(\kappa,0)$. Now $V[C^*]\models \kappa^\omega=\kappa^+$, so let $\langle f_i\mid i<\kappa^+\rangle\in V[C^*]$ be an enumeration of all functions from $\omega$ to $\kappa$. We can factor the forcing to first pick $i<\kappa^+$; then the rest of the forcing ensures that $C_G(f_i(n)+1)=c_n$, which means that $f_i$ determines the places of the $c_n$'s in the sequence $C_G$. Since no choices of $i\neq j$ can be compatible, the first part is not $\kappa^+$-cc, and therefore neither is the product. \end{example} \begin{example}\label{Example2} Let us consider another possible simplification of $\mathbb{M}[\vec{U}]/C^*$. First we enumerate $C^*=\{c^*_\alpha \mid \alpha<\kappa\}$ and find $\mathbb{M}[\vec{U}]$-names $\{\lusim{c}' _\alpha\mid \alpha<\kappa\}$ for it. $$\mathbb{M}[\vec{U}]^*=\{q\in\mathbb{M}[\vec{U}]\mid \text{ for every finite } a\subseteq \kappa \text{ there is } q_a\geq q, q_a\Vdash \lusim{c}'_\alpha = \check{c}^*_\alpha, \text{ for every } \alpha\in a \}.$$
Let us prove that for a suitable choice of names, $\mathbb{M}[\vec{U}]^*$ is not $\kappa^+$-cc. For every $\alpha<\kappa$, let $$X_\alpha=\{\nu<\kappa \mid o^{\vec{U}}(\nu)=\alpha\}.$$ Pick some distinct $\rho^0,\rho^1 \in X_0$. The interplay will be between the two conditions $$p^0={\langle} \rho^0, {\langle} \kappa, \kappa\setminus (\rho^0+1){\rangle}{\rangle}\text{ and }p^1={\langle} \rho^1, {\langle} \kappa, \kappa\setminus (\rho^1+1){\rangle}{\rangle}.$$ Above $p^0$ we do something simple: for example, let $\lusim{c'}_\alpha$ be a name for the first element of $X_\alpha$ in the generic sequence $C_G$.
Now above $p^1$, let us do something more sophisticated. We will build a $\kappa$-tree with each of its branches corresponding to a direct extension of $p^1$ in $\mathbb{M}[\vec{U}]/C'$, where $C':=\lusim{C'}_H$ and $H\subseteq\mathbb{M}[\vec{U}]$ is a $V$-generic filter with $p^0\in H$. These extensions will be incompatible in $\mathbb{M}[\vec{U}]/C'$. Start with a description of the first level: \\ Fix $Y_1 \in U(\kappa,1)$ such that $Y_1 \subseteq X_1$ and $Z_1 =X_1 \setminus Y_1$ has cardinality $\kappa$. Split $Z_1$ into two disjoint non-empty sets $Z_{1,0},Z_{1,1}$.
Now let $\lusim{c}_1'$ be a name such that $p^1$ extended by an element of $Y_1$ forces values different from those which $p^0$ forces; for example, let it be the first element of $X_2$ in $C_G$.
For $i=0,1$, define the name $\lusim{c}_1'$ so that $p^1$ extended by an element of $Z_{1,i}$ forces the same values as $p^0$ forces for $\lusim{c}_1'$.
The idea behind this is to ensure that $p^1{}^\frown (Z_{1,0} \cup Y_1)$ and $p^1{}^\frown (Z_{1,1} \cup Y_1)$ will be in $\mathbb{M}[\vec{U}]/C'$, but only because of the $Z_{1,i}$'s. Note that $p^1{}^\frown (Z_{1,0} \cup Y_1)$ and $p^1{}^\frown (Z_{1,1} \cup Y_1)$ are incompatible in $\mathbb{M}[\vec{U}]/C'$ since $Z_{1,0}$ and $Z_{1,1}$ are disjoint. Continue in a similar fashion to define the rest of the levels: at the $\alpha$-th level we take $Y_\alpha\subseteq X_\alpha$ such that $Z_\alpha:=X_\alpha\setminus Y_\alpha$ has size $\kappa$, and we split $Z_\alpha$ into two disjoint non-empty sets $Z_{\alpha,0},Z_{\alpha,1}$. The definition of $\lusim{c'}_\alpha$ is such that $p^1$ extended by elements of $Y_\alpha$ forces $\lusim{c'}_\alpha$ to be the first member of $X_{\alpha+1}$ in $C_G$, while $p^1$ extended by elements of $Z_\alpha$ forces the same value as $p^0$ did.
Note that the construction is completely inside $V$.
Finally, there are $\kappa^+$-many branches of length $\kappa$ in $T$. Let $p^h$ denote the extension of $p^1$ which corresponds to a $\kappa$-branch $h$, i.e., $p^h={\langle} \rho^1,{\langle} \kappa, \underset{\alpha<\kappa}{\bigcup}Y_\alpha\uplus Z_{\alpha,h(\alpha)}\rangle\rangle$.
Let $h_1, h_2$ be two different branches. Let $\alpha<\kappa$ be the least such that $h_1(\alpha)\neq h_2(\alpha)$. Then $p^{h_1}$ and $p^{h_2}$ are incompatible in $\mathbb{M}[\vec{U}]/C'$.
This follows from the choice of $\lusim{c}'_\alpha$ and the definitions of conditions at the level $\alpha$.
Note that every $p^h$ is in $\mathbb{M}[\vec{U}]^*$, since for every finite $a\subseteq\kappa$ we can extend $p^h$ to some $q_a$ using elements from the $Z_{\alpha,h(\alpha)}$'s.
\end{example}
\begin{proposition}\label{chracterization of quotient} For every $q\in\mathbb{M}[\vec{U}]$, $$q\in \mathbb{M}[\vec{U}]/C^*\text{ iff there is a }V\text{-generic }G'\subseteq\mathbb{M}[\vec{U}]\text{ with }q\in G'\text{ such that }\lusim{C}_{G'}=C^*.$$ \end{proposition}
\noindent\textit{Proof}. Let $q\in \mathbb{M}[\vec{U}]/C^*=\mathbb{M}[\vec{U}]/H_{C^*}$ and let $G'\subseteq\mathbb{M}[\vec{U}]/C^*$ be any $V[C^*]$-generic filter with $q\in G'$; then $G'\subseteq\mathbb{M}[\vec{U}]$ is a $V$-generic filter and $\pi_*(G')=\pi_*(G)=H_{C^*}$ (for the definition of $\pi_*$ see Definition \ref{definition of quotient}). To see that $\lusim{C}_{G'}=C^*$, denote $C':=\lusim{C}_{G'}$ and, toward a contradiction, assume that $s\in C^*\setminus C'$. Then there is $$q\leq q'\in G'\text{ such that }q'\Vdash s\notin \lusim{C},$$ hence $\pi(q')\leq ||s\notin\lusim{C}||$. It follows that $\pi(q')\bot ||s\in\lusim{C}||\in H_{C^*}$, therefore $\pi(q')\in \pi_*(G')\setminus H_{C^*}$, a contradiction. Also, if $s\in C'\setminus C^*$, then there is $q\leq q'\in G'$ such that $q'\Vdash s\in \lusim{C}$, then $\pi(q')\leq ||s\in\lusim{C}||$, so $\pi(q')\bot ||s\notin\lusim{C}||\in H_{C^*}$. In any case $\pi(q')\in \pi_*(G')\setminus H_{C^*}$, which is again a contradiction.
For the other direction, if $q\in G'$ for some $V$-generic $G'\subseteq\mathbb{M}[\vec{U}]$ such that $\lusim{C}_{G'}=C^*$, then $X\cap \pi_*(G')=X\cap \pi_*(G)$, where $X=\{||\alpha\in\lusim{C}||\mid \alpha<\kappa\}$ is the generating set of $\mathbb{P}_{\lusim{C}}$. Since $\pi$ is a projection, $\pi_*(G')$ is a $V$-generic filter for $\mathbb{P}_{\lusim{C}}$ and therefore it is uniquely determined by its intersection with the set of generators $X$. It follows that $\pi_*(G')=\pi_*(G)=H_{C^*}$. Finally, for every $a\in G'$, $\pi(a)\in \pi_*(G)$, thus $a\in\pi^{-1''}H_{C^*}:=\mathbb{M}[\vec{U}]/H_{C^*}$.$\blacksquare$
\begin{remark} \begin{enumerate} \item Example \ref{Example1} produces a much larger forcing than $\mathbb{M}[\vec{U}]/C^*$ so we can obviously find $q\in \mathbb{M}[\vec{U}]'$ such that $q\Vdash c^*_\alpha\neq\lusim{c}'_\alpha$ for some $\alpha$.
\item In Example \ref{Example2}, the conditions $p^{h}$ constructed are not in $\mathbb{M}[\vec{U}]/C^*$. Otherwise, by the proposition, there is a generic $H$ with $p^{h}\in H$ such that $\{(\lusim{c'}_\alpha)_H\mid \alpha<\kappa\}=C^*$. Since $Y^*:=\underset{\alpha<\kappa}{\bigcup}Y_\alpha\in\cap\vec{U}(\kappa)$, by Proposition \ref{genericproperties}(3) there is $\xi<\kappa$ such that $C_H\setminus\xi\subseteq Y^*$. It follows that the interpretation $(\lusim{c'}_\alpha)_H$ must be different from the one $p^h$ forced, a contradiction. \end{enumerate} \end{remark} We will prove that the quotient forcing is $\kappa^+$-cc for more general Prikry-type forcings which use $P$-point ultrafilters. \begin{definition}\label{ppoint} Let $F$ be a uniform $\kappa$-complete filter over a regular uncountable cardinal $\kappa$. $F$ is called a \textit{$P$-point filter} iff there is $\pi:\kappa\to \kappa$ such that
\begin{enumerate}
\item $\pi$ is almost one to one, i.e. there is $X\in F$ such that for every $\alpha<\kappa$, $|\pi^{-1}{}''\alpha\cap X|<\kappa$,
\item For every $\{A_i \mid i<\kappa\}\subseteq F$,
$\Delta^*_{i<\kappa}A_i=\{\nu<\kappa \mid \forall i<\pi(\nu) (\nu\in A_i)\} \in F$. \end{enumerate} \end{definition} Clearly, every normal filter $F$ is a $P$-point (witnessed by $\pi=id$), but there are many non-normal $P$-points as well: for example, take a normal filter $U$ and transfer it to a non-normal filter via a permutation of $\kappa$. Also, if $F$ is an ultrafilter, then $\pi$ is just a function representing $\kappa$ in the ultrapower by $F$.
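To illustrate the definition, here is a routine verification (not spelled out above) that a normal filter $F$ is a $P$-point with witness $\pi=id$: condition (1) holds with $X=\kappa$, since $\pi^{-1}{}''\alpha\cap\kappa=\alpha$ has cardinality $|\alpha|<\kappa$, while for condition (2) the generalized diagonal intersection reduces to the usual one,
$$\Delta^*_{i<\kappa}A_i=\{\nu<\kappa\mid \forall i<\nu\ (\nu\in A_i)\}=\Delta_{i<\kappa}A_i,$$
which belongs to $F$ by normality.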
Before proving the main result, we need a generalization of Galvin's theorem (see \cite{Glavin}, or \cite[Proposition 1.4]{GitDensity}): \begin{proposition}\label{galvinGen}
Suppose that $2^{<\kappa}=\kappa$ and let $F$ be a $P$-point filter over $\kappa$. Let $\langle X_i\mid i<\kappa^+\rangle$ be a sequence of sets such that for every $i<\kappa^+$, $X_i\in F$, and let $\langle Z_i\mid i<\kappa^+\rangle$ be any sequence of subsets of $\kappa$. Then there is $Y\subseteq \kappa^+$ of cardinality $\kappa$, such that
\begin{enumerate}
\item $\bigcap_{i\in Y}X_i\in F$.
\item there is $\alpha\notin Y$ such that $[Z_{\alpha}]^{<\omega}\subseteq \bigcup_{i\in Y}[Z_i]^{<\omega}$
\end{enumerate} \end{proposition}
\noindent\textit{Proof}. For every $\vec{\nu}\in[\kappa]^{<\omega}$, $\alpha<\kappa^+$ and $\xi<\kappa$, let $$H_{\alpha,\xi,\vec{\nu}}=\{i<\kappa^+\mid X_i\cap\xi=X_\alpha\cap\xi \wedge \vec{\nu}\in [Z_i]^{<\omega}\}.$$ \begin{claim*}
There is $\alpha^*<\kappa^+$ such that for every $\xi<\kappa$ and $\vec{\nu}\in[Z_{\alpha^*}]^{<\omega}$, $|H_{\alpha^*,\xi,\vec{\nu}}|=\kappa^+$ \end{claim*}
\textit{Proof of claim.} Otherwise, for every $\alpha<\kappa^+$ there are $\xi_\alpha<\kappa$ and $\vec{\nu}_\alpha\in [Z_\alpha]^{<\omega}$ such that $|H_{\alpha,\xi_\alpha,\vec{\nu}_\alpha}|\leq\kappa$. Since $\kappa^+$ is regular and there are only $\kappa$ possible values for the pair $(\xi_\alpha,\vec{\nu}_\alpha)$, there are $X\subseteq \kappa^+$, $\vec{\nu}^*\in[\kappa]^{<\omega}$
and $\xi^*<\kappa$, such that $|X|=\kappa^+$ and $$\forall\alpha\in X, \ \vec{\nu}_\alpha=\vec{\nu}^*\wedge \xi_\alpha=\xi^*.$$
Since $2^{<\kappa}=\kappa$ and $\xi^*<\kappa$, there are at most $\kappa$ many possibilities for $X_\alpha\cap \xi^*$. Hence we can shrink $X$ to $X'\subseteq X$ such that $|X'|=\kappa^+$ and find a single set $E^*\subseteq \xi^*$ such that for every $\alpha\in X'$, $X_\alpha\cap\xi^*=E^*$. It follows that for every $\alpha\in X'$: $$H_{\alpha,\xi_\alpha,\vec{\nu}_\alpha}=H_{\alpha,\xi^*,\vec{\nu}^*}=\{i<\kappa^+\mid X_i\cap \xi^*=E^*\wedge \vec{\nu}^*\in [Z_i]^{<\omega}\}.$$
Hence the set $H_{\alpha,\xi_\alpha,\vec{\nu}_\alpha}$ does not depend on $\alpha$, i.e. it is the same for every $\alpha\in X'$; denote this set by $H^*$. To see the contradiction, note that for every $\alpha\in X'$, $\alpha\in H_{\alpha,\xi_\alpha,\vec{\nu}_\alpha}=H^*$, thus $X'\subseteq H^*$, hence $$\kappa^+=|X'|\leq|H^*|\leq \kappa,$$ a contradiction.$\blacksquare_{\text{Claim}}$
\underline{End of proof of Proposition \ref{galvinGen}:} Let $\alpha^*$ be as in the claim. Let us choose $Y\subseteq \kappa^+$ witnessing the proposition. First, enumerate $[Z_{\alpha^*}]^{<\omega}$ as $\langle \vec{\nu}_i\mid i<\kappa\rangle$. Let $\pi:\kappa\rightarrow\kappa$ be the function of Definition \ref{ppoint} guaranteed by
$F$ being $P$-point. There is a set $X\in F$ such that for every $\alpha<\kappa$, $X\cap\pi^{-1}{}''\alpha$ is bounded in $\kappa$. So for every $\alpha<\kappa$, we find $\rho_\alpha>{\rm sup}(\pi^{-1}{}''[\alpha+1]\cap X)$.
By recursion, define $\beta_i$ for $i<\kappa$: at each step pick $\alpha^*\neq\beta_i\in H_{\alpha^*,\rho_i+1,\vec{\nu}_i}\setminus\{\beta_j\mid j<i\}$. It is possible to find such $\beta_i$, since the cardinality of $H_{\alpha^*,\rho_i+1,\vec{\nu}_i}$ is $\kappa^+$, while $\{\beta_j\mid j<i\}$ has size less than $\kappa$. Let us prove that $Y=\{\beta_i\mid i<\kappa\}$ is as wanted. Indeed, by definition, it is clear that $|Y|=\kappa$. Also, if $\vec{\nu}\in [Z_{\alpha^*}]^{<\omega}$, then $\vec{\nu}=\vec{\nu}_i$ for some $i<\kappa$. By definition, $\beta_i\in H_{\alpha^*,\rho_i+1,\vec{\nu}_i}$, hence $\vec{\nu}\in [Z_{\beta_i}]^{<\omega}$, so $$[Z_{\alpha^*}]^{<\omega}\subseteq\bigcup_{x\in Y}[Z_x]^{<\omega}.$$
Finally, we need to prove that $\bigcap_{\gamma\in Y}X_\gamma=\bigcap_{i<\kappa}X_{\beta_i}\in F$. By the $P$-point assumption about $F$, $$X^*:=X\cap X_{\alpha^*}\cap\Delta^*_{i<\kappa}X_{\beta_i}\in F.$$ Thus it suffices to prove that $X^*\subseteq \bigcap_{i<\kappa}X_{\beta_i}$. Let $\zeta\in X^*$; then for every $i<\pi(\zeta)$, $\zeta\in X_{\beta_i}$. For $i\geq\pi(\zeta)$, $\zeta\in \pi^{-1}{}''(i+1)\cap X$, and by definition of $\rho_i$, $\zeta<\rho_i$. Recall that $\beta_i\in H_{\alpha^*,\rho_i+1,\vec{\nu}_i}$, hence $$X_{\alpha^*}\cap(\rho_i+1)=X_{\beta_i}\cap(\rho_i+1),$$ and since $\zeta\in X_{\alpha^*}\cap(\rho_i+1)$, $\zeta\in X_{\beta_i}$. We conclude that $\zeta\in\bigcap_{i<\kappa}X_{\beta_i}$, thus $X^*\subseteq\bigcap_{i<\kappa}X_{\beta_i}$. $\blacksquare$
\begin{theorem}\label{kappapluscc} Let $\pi:\mathbb{M}[\vec{U}]\rightarrow \mathbb{P}$ be a projection, let $G\subseteq\mathbb{M}[\vec{U}]$ be $V$-generic and let $H=\pi_*(G)$ be the induced generic for $\mathbb{P}$. Then $V[G]\models \mathbb{M}[\vec{U}]/H$ is $\kappa^+$-cc. \end{theorem}
\noindent\textit{Proof}. Assume otherwise, and let $\langle p_i\mid i<\kappa^+\rangle\in V[G]$ be an antichain in $\mathbb{M}[\vec{U}]/H$. Let $\langle\lusim{p}_i\mid i<\kappa^+\rangle$ be a sequence of $\mathbb{M}[\vec{U}]$-names for them and $r\in G$ such that $$r\Vdash \langle\lusim{p}_i\mid i<\kappa^+\rangle \text{ is an antichain in } \mathbb{M}[\vec{U}]/\lusim{H}.$$ Work in $V$: for every $i<\kappa^+$, let $r\leq r_i\in\mathbb{M}[\vec{U}]$ and $\xi_i\in\mathbb{M}[\vec{U}]$ be such that $r_i\Vdash \lusim{p}_i=\xi_i$. \begin{claim*} $\forall i<\kappa^+\exists q\geq \xi_i\forall q'\geq q\exists r''\geq r_i \ r''\Vdash q'\in\mathbb{M}[\vec{U}]/\lusim{H}$ \end{claim*} \textit{Proof of claim.} Otherwise, there is $i$ such that for every $q\geq \xi_i$ there is $q'\geq q$ such that for every $r''\geq r_i$, $r''\not\Vdash q'\in\mathbb{M}[\vec{U}]/\lusim{H}$. In particular, the set $$E=\{q\geq \xi_i \mid \forall r''\geq r_i. r''\not\Vdash q\in\mathbb{M}[\vec{U}]/\lusim{H}\}$$ is dense above $\xi_i$. To obtain a contradiction, let $G'$ be any generic for $\mathbb{M}[\vec{U}]$ such that $r_i\in G'$. Since $ r_i\geq r$, $r\in G'$ and therefore $\xi_i=(\lusim{p}_i)_{G'}\in\mathbb{M}[\vec{U}]/\lusim{H}_{G'}$. Denote $H'=\lusim{H}_{G'}$. Then by Proposition \ref{chracterization of quotient}, there is a $V$-generic filter $G''$ for $\mathbb{M}[\vec{U}]$ such that $\xi_i\in G''$ and $\lusim{H}_{G''}=H'$. By density of $E$, there is $\xi_i\leq q\in E\cap G''$ and in particular, $q\in \mathbb{M}[\vec{U}]/H'$. Thus, there is $r_i\leq r''\in G'$ such that $r''\Vdash q\in\mathbb{M}[\vec{U}]/\lusim{H}$, contradicting $q\in E$.$\blacksquare_{\text{Claim}}$
For every $i<\kappa^+$ pick $q_i\geq \xi_i$ such that $$(*)_i \ \ \ \ \ \forall q'\geq q_i.\exists r''\geq r_i.r''\Vdash q'\in\mathbb{M}[\vec{U}]/\lusim{H}.$$
Denote $q_i=\langle t_{i,1},...,t_{i,n_i},\langle \kappa,A(q_i)\rangle\rangle$ and $r_i=\langle s_{i,1},...,s_{i,m_i},\langle\kappa,A(r_i)\rangle\rangle$. Stabilize the sequences $\langle t_{i,1},...,t_{i,n_i}\rangle$ and $\langle s_{i,1},...,s_{i,m_i}\rangle$, i.e., find $X\subseteq \kappa^+$ with $|X|=\kappa^+$ and $\vec{t}=\langle t_1,...,t_n\rangle,\vec{s}=\langle s_1,...,s_m\rangle$ such that for every $i\in X$ $$\langle t_{i,1},...,t_{i,n_i}\rangle=\langle t_1,...,t_n\rangle,\text{ and } \langle s_{i,1},...,s_{i,m_i}\rangle=\langle s_1,...,s_m\rangle.$$ This means that for every $i\in X$, $q_i=\vec{t}^{\smallfrown}\langle\kappa,A(q_i)\rangle$ and $r_i=\vec{s}^{\smallfrown}\langle\kappa ,A(r_i)\rangle$. Let $$A^*(r_i)=\{\nu\in A(r_i)\mid \nu\cap A(r_i)\in\cap\vec{U}(\nu)\}.$$ By \ref{BetterSet}, $A^*(r_i)\in\cap\vec{U}(\kappa)$; it follows that for every $\vec{\nu}\in[A^*(r_i)]^{<\omega}$, $r_i^{\smallfrown}\vec{\nu}\in \mathbb{M}[\vec{U}]$. By Proposition \ref{galvinGen}, there is $Y\subseteq X$ of cardinality $\kappa$, such that \begin{enumerate}
\item $\bigcap_{i\in Y}A(q_i)\in \bigcap_{i<\kappa} U(\kappa,i)$.
\item There is $\alpha^*\in Y$ such that $[A^*(r_{\alpha^*})]^{<\omega}\subseteq \bigcup_{i\in Y\setminus\{\alpha^*\}}[A^*(r_i)]^{<\omega}$. \end{enumerate}
Consider the set $A=\bigcap_{i\in Y}A(q_i)$. For every $i\in Y$, $q_i\leq \vec{t}^{\smallfrown}\langle \kappa, A\rangle=:q^*$. Then by $(*)_{\alpha^*}$, there is $r''\geq r_{\alpha^*}$ such that $r''\Vdash q^*\in\mathbb{M}[\vec{U}]/\lusim{H}$. Hence there is $\vec{s}\leq s''\in\mathbb{M}[\vec{U}]\restriction {\rm max}(\kappa(\vec{s}))$, $k<\omega$, $\vec{\nu}\in[A(r_{\alpha^*})]^{k}$ and $B_1,...,B_k$ such that $$r''=\langle s'',\langle\nu_1,B_1\rangle,...,\langle\nu_k,B_k\rangle,\langle \kappa,A(r'')\rangle\rangle.$$
Since $r''\in\mathbb{M}[\vec{U}]$, then $\vec{\nu}\in [A^*(r_{\alpha^*})]^{k}$ and by the property of $\alpha^*$, $\vec{\nu}\in\bigcup_{j\in Y\setminus\{\alpha^*\}}[A^*(r_j)]^{<\omega}$, so there is $j\in Y$ such that $\vec{\nu}\in [A^*(r_j)]^{k}$. Since $r_{\alpha^*}$ and $r_j$ have the same lower part, and $\vec{\nu}\in [A^*(r_j)]^{<\omega}$, it follows that $r''$ and $r_j$ are compatible, witnessed by the condition: $$r^*=\langle s'', \langle\nu_1, B_1\cap A(r_j)\rangle,...\langle \nu_k, B_k\cap A(r_j)\rangle,\langle\kappa, A(r_j)\cap A(r'')\rangle\rangle.$$ To see the contradiction, note that $r^*\geq r_{\alpha^*},r_j$ and $r$, thus $$r^*\Vdash \lusim{p}_{\alpha^*}=\xi_{\alpha^*},\lusim{p}_j=\xi_j\text{ are incompatible in }\mathbb{M}[\vec{U}]/\lusim{H}$$ but also $r^*\geq r''$, therefore $$r^*\Vdash q^*\in\mathbb{M}[\vec{U}]/\lusim{H}.$$ Since $q^*\geq q_{\alpha^*}\geq\xi_{\alpha^*}$ and $q^*\geq q_j\geq\xi_j$, then $r^*\Vdash \lusim{p}_{\alpha^*},\lusim{p}_j$ are compatible in $\mathbb{M}[\vec{U}]/\lusim{H}$, a contradiction.$\blacksquare$
This suffices to finish the induction step for $cf(\lambda)>\kappa$, and in turn the proof of \ref{MainResaultParttwo}. \begin{corollary}\label{Inductionstephigh}
Assume that $o^{\vec{U}}(\kappa)<\kappa^+$ and $(IH)$, and let $A\in V[G]$ be a set of ordinals such that $cf({\rm sup}(A))>\kappa$. Let $C^*$ be as in \ref{Candiadate}. Then $A\in V[C^*]$ and $V[A]=V[C^*]$. \end{corollary}
\noindent\textit{Proof}. By \ref{Candiadate}, $C^*\subseteq C_G$ is such that $C^*\in V[A]$ and $\forall\alpha<\lambda. A\cap\alpha\in V[C^*]$. Toward a contradiction assume that $A\notin V[C^*]$, and let $W:=V[C^*]$.
The quotient forcing $\mathbb{M}[\vec{U}]/C^*\in W$ is $\kappa^+$-cc and therefore $cf(\lambda)$-cc in $V[G]=W[G]$, and $A$ is a fresh subset of $\lambda$, contradicting Theorem \ref{no fresh subsets}.$\blacksquare_{\text{\ref{Inductionstephigh}}}$ $\blacksquare_{\text{\ref{MainResaultParttwo}}} $ \subsection{The Quotient Forcing} For $\mathbb{M}[\vec{U}]/C^*$, which is $\kappa^+$-cc in $V[C^*]$, we can use a more abstract and direct argument:
Suppose we have an iteration $P*\lusim{Q}$ of forcing notions. It is a classical result about iterations that if for a regular cardinal $\lambda$ we have
\begin{enumerate}
\item $P$ has $\lambda-$cc,
\item $\Vdash_P \lusim{Q} \text{ has } \lambda$-cc,
\end{enumerate} then $P*\lusim{Q}$ satisfies $\lambda-$cc.
Also, if $P$ has $\lambda-$cc and $P*\lusim{Q}$ has $\lambda-$cc, then $\Vdash_P \lusim{Q} \text{ has } \lambda\text{-cc}$. \\Suppose otherwise. Then there are $p\in P$ and a sequence of $P-$names ${\langle} \lusim{q}_\alpha \mid \alpha<\lambda{\rangle}$ such that $$p\Vdash_P {\langle} \lusim{q}_\alpha \mid \alpha<\lambda{\rangle} \text{ is an antichain in } \lusim{Q}.$$ Consider now $\{{\langle} p,\lusim{q}_\alpha{\rangle} \mid \alpha<\lambda \}\subseteq P*\lusim{Q}$. By $\lambda-$cc, there are $\alpha, \beta<\lambda, \alpha\not =\beta$ such that ${\langle} p,\lusim{q}_\alpha{\rangle}$ and ${\langle} p,\lusim{q}_\beta{\rangle}$ are compatible. Hence, there is ${\langle} p', \lusim{q}'{\rangle}\geq {\langle} p,\lusim{q}_\alpha{\rangle},{\langle} p,\lusim{q}_\beta{\rangle}$. But then $$p' \Vdash_P \lusim{q}' \text{ is stronger than both } \lusim{q}_\alpha,\lusim{q}_\beta,$$ which is impossible, since $p'$ forces that they are members of an antichain.
However, in \ref{kappapluscc}, we address a different question:
\emph{Suppose that $P*\lusim{Q}$ satisfies $\lambda-$cc. Let $G*H$ be a generic subset of $P*\lusim{Q}$. Consider the interpretation $Q$ of $\lusim{Q}$ in $V[G*H]$. Does it satisfy $\lambda-$cc? }
Clearly, this is not true in general. For a simple example, let $P$ be trivial and $Q$ be the forcing for adding a branch to a Suslin tree. Then, in $V^Q$, $Q$ will not be ccc anymore.
Our attention in Theorem \ref{kappapluscc} was restricted to subforcings and projections of $\mathbb{M}[\vec{U}]$; however, the argument given is more general:
\begin{theorem}\label{kappaplusccgeneral} Suppose that $\mathcal P$ is either the Prikry, Magidor, Magidor-Radin, or Radin forcing, or the Prikry forcing with a product of $P$-point ultrafilters, and $\lusim{Q}$ is a projection of $\mathcal P$. Let $G(\mathcal P)$ be a generic subset of $\mathcal P$. \\Then the interpretation of $\lusim{Q}$ in $V[G(\mathcal P)]$ satisfies $\kappa^+-$cc there. \end{theorem}
We do not know how to generalize this theorem to wider classes of Prikry type forcing notions.
For example the following may be the first step:
\begin{question} Is the result valid for a long enough Magidor iteration of Prikry forcings? \end{question} The problem is that there is no single complete enough filter here, and so the Galvin theorem (or its generalization) does not seem to apply. \begin{definition}
Let $F$ be a $\kappa-$complete uniform filter over a set $X$, for a regular uncountable cardinal $\kappa$. We say that $F$ has: \begin{enumerate} \item The \emph{Galvin property} iff every family of $\kappa^+$ members of $F$ has a subfamily of cardinality $\kappa$ with intersection in $F$. \item The \emph{generalized Galvin property} iff it satisfies the conclusion of Proposition \ref{galvinGen}. \end{enumerate}
\end{definition}
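For example (a direct specialization of Proposition \ref{galvinGen}, using only clause (1) and taking the sets $Z_i$ arbitrary): if $2^{<\kappa}=\kappa$, then every $P$-point filter $F$ over $\kappa$ has the Galvin property,
$$\forall \{X_i\mid i<\kappa^+\}\subseteq F\ \exists Y\in[\kappa^+]^{\kappa}\ \bigcap_{i\in Y}X_i\in F,$$
and in particular so does every normal filter, recovering Galvin's theorem for $Cub_\kappa$.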
The following question looks natural in this context: \begin{question} Characterize filters (or ultrafilters) which satisfy the Galvin property (or the generalized Galvin property). \end{question} A construction by U. Abraham and S. Shelah \cite{AbrahamShelah1986} may be relevant here. They constructed a model in which there is a sequence ${\langle} C_i\mid i<2^{\mu^+}{\rangle}$ of clubs in $Cub_{\mu^+}$ such that the intersection of any $\mu^+$ many clubs from the sequence has cardinality less than $\mu$. So the filter $Cub_{\mu^+}$ does not have the Galvin property in that model. However, $GCH$ fails there. Lately, other results related to the Galvin property have been published \cite{MR3787522}, \cite{MR3604115}, \cite{ghhm}. The following questions seem to be open:
\begin{question} Assume $GCH$. Let $\kappa$ be a regular uncountable cardinal. Is there a $\kappa$-complete filter over $\kappa$ which fails to satisfy the Galvin property? \end{question} Let us note that if the filter is not required to be over $\kappa$, then there is such an example: any fine $\kappa$-complete filter $U$ over $P_\kappa(\kappa^+)$ fails to satisfy the Galvin property:
For every $\alpha<\kappa^+$, let $X_\alpha=\{Z\in P_\kappa(\kappa^+)\mid \alpha\in Z\}$; then $X_\alpha\in U$ since $U$ is fine, but the intersection of any $\kappa$ many sets from this sequence is empty.
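Spelling out the computation: if $I\subseteq\kappa^+$ with $|I|=\kappa$ and $Z\in\bigcap_{\alpha\in I}X_\alpha$, then $\alpha\in Z$ for every $\alpha\in I$, hence
$$\kappa=|I|\leq |Z|<\kappa,$$
which is impossible; so indeed $\bigcap_{\alpha\in I}X_\alpha=\emptyset$.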
A fine normal ultrafilter on $P_\kappa(\lambda)$ is used for the supercompact Prikry forcing (see \cite{Gitik2010} for the definition). Hence, the following question is natural:
\begin{question} Assume $GCH$ and let $\lambda>\kappa$ be a regular cardinal. Is every quotient forcing of the supercompact Prikry forcing also $\lambda^+$-cc in the generic extension? \end{question}
One particular interesting case is of filters which extend the closed unbounded filter. \begin{question} Assume $GCH$. Let $\kappa$ be a regular uncountable cardinal. Is there a $\kappa-$complete filter which extends the closed unbounded filter $Cub_\kappa$ which fails to satisfy the Galvin property? \end{question}
Our prime interest is in $\kappa-$complete ultrafilters over a measurable cardinal $\kappa$. \\Note the following:
\begin{proposition}
It is consistent that every $\kappa$-complete (or even $\sigma$-complete) ultrafilter over a measurable cardinal $\kappa$ has the generalized Galvin property.
\end{proposition}
\noindent\textit{Proof}. This holds in the model $L[U]$, where $U$ is the unique normal measure on $\kappa$. In this model every $\kappa$-complete ultrafilter is Rudin-Keisler equivalent to a finite power of $U$ (see for example \cite[Lemma 19.21]{Jech2003}). By Corollary \ref{product galvin}, it is easy to see that all such ultrafilters satisfy the generalized Galvin property. Note that since in $L[U]$ there is a unique measurable cardinal, every $\sigma$-complete ultrafilter $W$ is actually $\kappa$-complete. Indeed, let $\lambda$ be the completeness degree of $W$\footnote{The degree of completeness of an ultrafilter $\mathcal{V}$ is the minimal cardinal $\theta$ such that $\mathcal{V}$ is $\theta$-complete.}; it is the critical point of the embedding $$j_W:L[U]\rightarrow Ult(L[U],W).$$ Since $W$ is $\sigma$-complete, $Ult(L[U],W)$ is well-founded, hence $crit(j_W)$ is measurable in $L[U]$ and $\lambda=\kappa$. $\blacksquare$
In the context of ultrafilters over a measurable cardinal, the following is unclear:
\begin{question} Is it consistent to have a $\kappa$-complete ultrafilter over $\kappa$ which does not have the Galvin property? \end{question}
\begin{question} Is it consistent to have a measurable cardinal $\kappa$ carrying a $\kappa-$complete ultrafilter which extends the closed unbounded filter $Cub_\kappa$ (i.e., $Q-$point) which fails to satisfy the Galvin property? \end{question}
It is possible to produce more examples of ultrafilters (and filters) with the generalized Galvin property. The simplest example of this kind is $U\times W$, where $U, W$ are normal ultrafilters over $\kappa$. We will work in a slightly more general setting.
\begin{definition}\label{incresing P-points} Let $F_1,...,F_n$ be $P$-point filters over $\kappa$, and let $\pi_1,...,\pi_n$ be the witnessing functions for them. Denote by $[\kappa]^{n*}$ the set of all $n$-tuples ${\langle} \alpha_1,...,\alpha_n{\rangle}$ such that for every $2\leq i\leq n$, $\alpha_{i-1}<\pi_i(\alpha_i)$. \end{definition} Note that if the $F_i$'s are normal, then $\pi_i=id$ and $[\kappa]^{n*}=[\kappa]^n$. \begin{definition}\label{product of p-points} Let $F_1,...,F_n$ be $P$-point filters over $\kappa$, and let $\pi_1,...,\pi_n$ be the witnessing functions for them. Define a filter $\prod_{i=1}^{n*} F_i$ over $[\kappa]^{n*}$ recursively. For $X\subseteq[\kappa]^{n*}$: \begin{center}
$ X \in\prod_{i=1}^{n*} F_i\Leftrightarrow \Big\{\alpha<\kappa\mid X_{\alpha}\in \prod_{i=2}^{n*} F_i\Big\}\in F_1.$
\end{center} where $X_{\alpha}=\{{\langle}\alpha_2,...,\alpha_n\rangle\in[\kappa]^{(n-1)*} \mid {\langle}\alpha,\alpha_2,...,\alpha_n{\rangle}\in X\}$. \end{definition} Again, if the filters are normal, this is simply the product.
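For instance, unfolding the recursion in the case $n=2$: for $X\subseteq[\kappa]^{2*}$,
$$X\in\prod_{i=1}^{2*}F_i\ \Longleftrightarrow\ \Big\{\alpha<\kappa\ \Big|\ \{\beta\mid {\langle}\alpha,\beta{\rangle}\in X\}\in F_2\Big\}\in F_1,$$
and when $F_1,F_2$ are normal (so $\pi_i=id$ and $[\kappa]^{2*}=[\kappa]^2$) this is the usual Fubini product $F_1\times F_2$ restricted to increasing pairs.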
\begin{proposition}\label{generate P-point} Let $F_1,...,F_n$ be $P$-point filters over $\kappa$, and let $\pi_1,...,\pi_n$ be the witnessing functions for it. Then for every $X\in\prod_{i=1}^{n*} F_i$, there are $X_i\in F_i$ such that $\prod_{i=1}^{n*}X_i\subseteq X$. \end{proposition}
\noindent\textit{Proof}. By induction on $n$; for $n=1$ it is clear. Let $X\in\prod_{i=1}^{n*} F_i$. Let \begin{center} $X_1=\Big\{\alpha<\kappa\mid X_{\alpha}\in \prod_{i=2}^{n*} F_i\Big\}\in F_1.$ \end{center} For every $\alpha\in X_1$, find by the induction hypothesis $X_{\alpha,i}\in F_i$ for $2\leq i\leq n$ such that $\prod_{i=2}^{n*}X_{\alpha,i}\subseteq X_{\alpha}$. Define $$X_i=\Delta^*_{\alpha<\kappa}X_{\alpha,i};$$ since $F_i$ is a $P$-point, $X_i\in F_i$. Let us argue that $\prod_{i=1}^{n*} X_i\subseteq X$. Let ${\langle}\alpha_1,...,\alpha_n{\rangle}\in \prod_{i=1}^{n*} X_i$; then for every $2\leq i\leq n$, $\alpha_1<\pi_i(\alpha_i)$, hence $\alpha_i\in X_{\alpha_1,i}$. It follows that ${\langle} \alpha_2,...,\alpha_n{\rangle}\in \prod_{i=2}^{n*} X_{\alpha_1,i}\subseteq X_{\alpha_1}$. By definition of $X_{\alpha_1}$, $\langle \alpha_1,\alpha_2,...,\alpha_n{\rangle}\in X$.$\blacksquare$ \begin{corollary}\label{product galvin} Assume that $2^{<\kappa}=\kappa$. Let $F_1,...,F_n$ be $P$-point filters over $\kappa$, and let $\pi_1,...,\pi_n$ be the witnessing functions for them. Then $\prod_{i=1}^{n*} F_i$
also satisfies the generalized Galvin property of \ref{galvinGen}. \end{corollary}
\noindent\textit{Proof}. Let ${\langle} Y_\alpha\mid \alpha<\kappa^+{\rangle}$ and ${\langle} Z_\alpha\mid \alpha<\kappa^+{\rangle}$ be as in \ref{galvinGen}. By Proposition \ref{generate P-point}, for every $1\leq i\leq n$ and $\alpha<\kappa^+$, find $X^{(\alpha)}_i\in F_i$ such that $\prod_{i=1}^{n*} X^{(\alpha)}_i\subseteq Y_\alpha$.
For every $\vec{\alpha}={\langle} \alpha_1,...,\alpha_n{\rangle}\in [\kappa]^{n*}$ every $\vec{\nu}\in [\kappa]^{<\omega}$ and every $\xi<\kappa^+$, define $$H_{\xi,\vec{\alpha},\vec{\nu}}=\Big\{\gamma<\kappa^+\mid \forall 1\leq i\leq n. X^{(\gamma)}_i\cap \alpha_i= X^{(\xi)}_i\cap \alpha_i\text{ and } \vec{\nu}\in [Z_\gamma]^{<\omega}\Big\}.$$
As in \ref{galvinGen}, since there are less than $\kappa^+$ many possibilities for $\langle X^{(\gamma)}_1\cap\alpha_1,X^{(\gamma)}_2\cap\alpha_2,...,X^{(\gamma)}_n\cap\alpha_n{\rangle}$, we can find $\alpha^*<\kappa^+$, such that for every $\vec{\alpha}$ and $\vec{\nu}$, $|H_{\alpha^*,\vec{\alpha},\vec{\nu}}|=\kappa^+$.
Enumerate $[Z_{\alpha^*}]^{<\omega}$ as $\langle \vec{\nu}_i\mid i<\kappa\rangle$. Since each $F_i$ is a $P$-point, there is $B_i\in F_i$ such that for every $j<\kappa$ we can pick $\rho^{(j)}_i>{\rm sup}(\pi_i^{-1}{}''[j]\cap B_i)$. Define the sequence $\beta_j$ by induction, $$\beta_j\in H_{\alpha^*,{\langle} \rho^{(j)}_1,...,\rho^{(j)}_n{\rangle},\vec{\nu}_j}\setminus\{\beta_k\mid k<j\}.$$ We claim once again that \begin{center}$Y_{\alpha^*}\cap\bigcap_{j<\kappa} Y_{\beta_j}\in \prod_{i=1}^{n*} F_i.$\end{center} To see this, define for every $1\leq i\leq n $ $$C_i:=X^{(\alpha^*)}_i\cap \Delta^*_{j<\kappa} X^{(\beta_j)}_i\in F_i.$$ Let $\vec{\alpha}\in\prod_{i=1}^{n*} C_i$, and let $j<\kappa$. For every $1\leq i\leq n$, if $j<\pi_i(\alpha_i)$ then $\alpha_i\in X^{(\beta_j)}_i$. If $\pi_i(\alpha_i)\leq j$, then $\alpha_i<\rho^{(j)}_i$, so $\alpha_i\in X^{(\alpha^*)}_i\cap\rho^{(j)}_i$. Since $\beta_j\in H_{\alpha^*,{\langle}\rho^{(j)}_1,...,\rho^{(j)}_n{\rangle},\vec{\nu}_j}$, $$\alpha_i\in X^{(\alpha^*)}_i\cap\rho^{(j)}_i=X^{(\beta_j)}_i\cap\rho^{(j)}_i.$$ Therefore, $\vec{\alpha}\in \prod_{i=1}^{n*}X^{(\beta_j)}_i\subseteq Y_{\beta_j}$. The continuation is as in Proposition \ref{galvinGen}.$\blacksquare$
\section{Fresh sets}
Let us conclude this paper with the following result about fresh sets in Magidor generic extensions. A very close variation of this can be found in \cite{Omer1}. \begin{theorem}\label{Fresh} Let $\vec{U}$ be a coherent sequence on $\kappa$. Let $G\subseteq\mathbb{M}[\vec{U}]$ be $V$-generic. If $A\in V[G]$ is a fresh set of ordinals with respect to $V$, then $cf^{V[G]}({\rm sup}(A))=\omega$.\footnote{ Clearly, if $\kappa$ changes its cofinality to $\omega$, then any sequence of order type $\omega$ cofinal in $\kappa$ will be fresh.} \end{theorem} Note that we do not restrict the order $o^{\vec{U}}(\kappa)$, and by taking $o^{\vec{U}}(\kappa)=1$ we obtain the Prikry forcing.
\noindent\textit{Proof}. By induction on $\kappa$, the supremum of $C_G$. Let $A$ be a fresh subset. If $A\in V[C_G\cap\alpha]$ for some $\alpha<\kappa$, then by the induction hypothesis we are done. So assume that $\forall\alpha<\kappa. A\notin V[C_G\cap\alpha]$; in particular ${\rm sup}(A)\geq \kappa$. Let us start with the difficult part, where ${\rm sup}(A)=\kappa$. \begin{lemma} If $A\in V[G]$ is a fresh subset of $\kappa$ with respect to $V$ such that ${\rm sup}(A)=\kappa$, then $cf^{V[G]}(\kappa)=\omega$. \end{lemma}
\noindent\textit{Proof}. Toward a contradiction assume that $\lambda:=cf^{V[G]}(\kappa)>\omega$ and let $\lusim{A}$ be a name for $A$.
First we deal with the case that $\kappa$ is singular in $V[G]$, hence $\omega<\lambda<\kappa$.
Since $\mathbb{M}[\vec{U}]$ decomposes to the part below $\lambda$ and the part above $\lambda$, we can ensure sufficient closure, by working in $V[C_G\cap\lambda]$, and force with the part of the forcing above $\lambda$. Note that $A$ is fresh also with respect to $V[C_G\cap\lambda]$.
Let ${\langle} c_\alpha\mid \alpha<\lambda\rangle$ be a cofinal continuous subsequence of $C_G$ such that $c_0>\lambda$. Let $\langle\lusim{c'}_\alpha\mid \alpha<\lambda{\rangle}$ be a sequence of $\mathbb{M}[\vec{U}]\restriction(\lambda,\kappa]$-names for it.
Find $p\in G\restriction (\lambda,\kappa)$ such that $$p\Vdash \lusim{A}\text{ is fresh}\wedge \langle \lusim{c'}_\alpha\mid \alpha<\lambda{\rangle}\text{ is a cofinal continuous subsequence of }C_G.$$
For every $i<\lambda$, the set $$D_i=\Big\{q\mid \exists\vec{\alpha} \ \exists B. \ p^{\smallfrown}\vec{\alpha}\leq^* q\wedge \ q\Vdash \lusim{c}_i={\rm max}(\vec{\alpha})\wedge \lusim{A}\cap{\rm max}(\vec{\alpha})=B\Big\}$$
is dense. To see this, let $q_0\geq p$, and find $q_0\leq q$ and $\vec{\beta}$ such that $$p^{\smallfrown}\vec{\beta}\leq^* q\text{ and }q\Vdash {\rm max}(\vec{\beta})=\lusim{c}_i.$$ Above ${\rm max}(\vec{\beta})$ there is enough closure to decide $\lusim{A}\cap{\rm max}(\vec{\beta})$. Find $q\restriction({\rm max}(\vec{\beta}),\kappa]\leq^*q_{>{\rm max}(\vec{\beta})}$ in $\mathbb{M}[\vec{U}]\restriction({\rm max}(\vec{\beta}),\kappa]$ which decides $\lusim{A}\cap{\rm max}(\vec{\beta})$ and $q\restriction{\rm max}(\vec{\beta})\leq q_{\leq{\rm max}(\vec{\beta})}$ in $\mathbb{M}[\vec{U}]\restriction(\lambda,{\rm max}(\vec{\beta})]$ (not necessarily a direct extension) such that for some $B\subseteq {\rm max}(\vec{\beta})$, $$q^*:=\langle q_{\leq{\rm max}(\vec{\beta})},q_{>{\rm max}(\vec{\beta})}{\rangle}\Vdash\lusim{A}\cap{\rm max}(\vec{\beta})=B\wedge \lusim{c}_i={\rm max}(\vec{\beta}).$$ Let $\vec{\alpha}$ be such that $p^{\smallfrown}\vec{\alpha}\leq^* q^*$; then by construction ${\rm max}(\vec{\alpha})={\rm max}(\vec{\beta})$ and $q^*$ is as wanted.
By \ref{strongPrkryProperty}, find a condition $p\leq^*p_i$, a $\vec{U}$-fat tree of extensions of $p_i$, $T_i$, and sets $B_i^t$ such that for every $ t\in mb(T_i)$ there is $A_i(t)\subseteq {\rm max}(t)$ such that $$p_i{}^{\smallfrown} {\langle} t, \vec{B}^t_i{\rangle} \Vdash\lusim{A}\cap{\rm max}(t)=A_i(t)\wedge\lusim{c}_i={\rm max}(t).$$ Since we have sufficient closure in the forcing above $\lambda$, we can find a single $p\leq^* p^*\in G\restriction(\lambda,\kappa]$ such that for every $i<\lambda$, $p_i\leq^* p^*$.
Keep defining by recursion sets $A_i(s)$ for $s\in T_i\setminus mb(T_i)$. Let $s\in {\rm Lev}_{ht(T_i)-1}(T_i)$, then we can shrink ${\rm Succ}_{T_i}(s)$ and find $A_i(s)$ such that for every $\alpha\in{\rm Succ}_{T_i}(s)$, $A_i(s^{\smallfrown}\alpha)= A_i(s)\cap\alpha$.
Generally, take $s\in T_i$ and assume that for every $\alpha$ in ${\rm Succ}_{T_i}(s)$, $A_i(s^{\smallfrown}\alpha)$ is defined. We can find a single $A_i(s)$ and shrink ${\rm Succ}_{T_i}(s)$ such that
$$(\star) \ \ \ \forall\alpha\in{\rm Succ}_{T_i}(s).A_i(s^{\smallfrown}\alpha)\cap\alpha= A_i(s)\cap\alpha.$$
Move to $V[A]$ and let us compare the sets $A_i(s)$ with $A$. For every $i$, define recursively $\rho^i(k)$ for $k\leq N_i:=ht(T_i)$. Let $\rho^i(0)={\rm min}(A\Delta A_i({\langle}{\rangle}))+1$. Recursively define $$\rho^i(k+1)={\rm sup}\big({\rm min}(A\Delta A_i({\langle}\delta_1,...,\delta_k{\rangle}))+1\mid \delta_1<\rho^i(0),...,\delta_k<\rho^i(k)\big).$$ Let $\vec{c}_i\in mb(T_i)$ be such that $p^{*}{}^{\smallfrown}{\langle}\vec{c}_i,\vec{B}^{\vec{c}_i}_i{\rangle}\in G$; let us argue that for every $k\leq N_i$, $\rho^i(k)> \vec{c}_i(k)$. By construction of the tree $T_i$, $c_i=(\lusim{c}_i)_G={\rm max}(\vec{c}_i)$ and $A\cap c_i=A_i(\vec{c}_i)\cap c_i$.
By $(\star)$, for every $j\leq N_i$, $$A_i(\vec{c}_i\restriction j)\cap \vec{c}_i(j)=A_i(\vec{c}_i\restriction j+1)\cap\vec{c}_i(j).$$ It follows that for every $j\leq N_i$, $$A_i(\vec{c}_i\restriction j)\cap \vec{c}_i(j)=A\cap\vec{c}_i(j)$$
In particular, $A\cap\vec{c}_i(0)=A_i({\langle}{\rangle})\cap \vec{c}_i(0)$.
Since $A\cap\rho^i(0)\neq A_i({\langle}{\rangle})\cap\rho^i(0)$, it follows that $\vec{c}_i(0)<\rho^i(0)$.
Inductively assume that $\vec{c}_i(j)<\rho^i(j)$ for every $j\leq k$. Since $A_i(\vec{c}_i\restriction k+1)\cap\vec{c}_i(k+1)=A\cap \vec{c}_i(k+1)$, it follows that $$\vec{c}_i(k+1)\leq{\rm min}(A_i(\vec{c}_i\restriction k+1)\Delta A)< \rho^i(k+1).$$
Before proving that $cf^{V[G]}(\kappa)=\omega$, let us argue that $\rho^i(k)<\kappa$. Again by induction on $k$, $\rho^i(0)<\kappa$ since $A\neq A_i({\langle}{\rangle})$, as $A_i({\langle}{\rangle})\in V[C_G\cap\lambda]$ and $A\notin V[C_G\cap\lambda]$.
Toward a contradiction assume that $\rho^i(k+1)=\kappa$. Back in $V[C_G\cap\lambda]$, consider the collection $$\{A_i({\langle}\alpha_0,...,\alpha_k{\rangle})\mid\alpha_0<\rho^i(0),...,\alpha_k<\rho^i(k)\}.$$ Then for every $\gamma<\kappa$ we can pick distinct $\vec{\alpha}_1,\vec{\alpha}_2$ such that $A_i(\vec{\alpha}_1)\neq A_i(\vec{\alpha}_2)$, but $A_i(\vec{\alpha}_1)\cap\gamma=A_i(\vec{\alpha}_2)\cap\gamma$.
To see that there are such $\vec{\alpha}_1,\vec{\alpha}_2$: if $\rho^i(k+1)=\kappa$, there is $\vec{\alpha}_1$ such that $\eta_1:={\rm min}(A\Delta A_i(\vec{\alpha}_1))>\gamma$,
hence $A_i(\vec{\alpha}_1)\cap\gamma=A\cap\gamma$. Let $\vec{\alpha}_2$ be such that ${\rm min}(A\Delta A_i(\vec{\alpha}_2))>\eta_1$.
In particular, $A_i(\vec{\alpha}_1)\neq A_i(\vec{\alpha}_2)$, but $A_i(\vec{\alpha}_1)\cap\gamma=A\cap\gamma=A_i(\vec{\alpha}_2)\cap\gamma$. Since this is all in $V[C_G\cap\lambda]$, where $\kappa$ is still measurable, we can find unboundedly many $\gamma$'s with the same $\vec{\alpha}_1,\vec{\alpha}_2$, which is clearly a contradiction.
So we found a sequence $\langle\rho^i(N_i)\mid i<\lambda\rangle\in V[A]$ such that $\rho^i(N_i)>c_i$. Let $Z$ be the closure of $\{\rho^i(N_i)\mid i<\lambda\}$. Since $\lambda>\omega$, there is some limit $\alpha<\lambda$ such that $c_\alpha<\kappa$ is a limit point of $Z$.
To see the contradiction, note that on one hand $A\cap c_\alpha\in V[C_G\cap\lambda]$, and therefore the set $Z\cap c_\alpha$, which has cardinality $\lambda$, is definable in $V[C_G\cap\lambda]$ from $A\cap c_\alpha$; on the other hand, $c_\alpha>\lambda$, thus $c_\alpha$ should stay measurable in $V[C_G\cap\lambda]$, a contradiction.
Next we eliminate the case that $\kappa$ is regular in $V[G]$, i.e. $\lambda=\kappa$. Many of the ideas from the case $\lambda<\kappa$ will also work here.
We no longer work over the model $V[C_G\cap\lambda]$; instead, we simply force over $V$. Let $p\in G$ be such that $$p\Vdash \lusim{A}\text{ is fresh}.$$
By induction we construct a $\leq^*$-increasing sequence of conditions $p_n$ and a tree of trees, i.e. a tree $T_0$, trees $ T_{1,t_0}$ for $t_0\in mb(T_0)$, and generally trees $ T_{n+1,t_0,...,t_n}$ where $$t_0\in mb(T_0), t_1\in mb(T_{1,t_0})... t_n\in mb(T_{n, t_0,...,t_{n-1}})$$
First find a condition $p\leq^*p_0$ and take the tree $T_0$ to be simply the tree with one level which decides $C_{\lusim{G}}(0)$ if it is not already decided, or $T_0=\{{\langle}{\rangle}\}$ otherwise. Necessarily, for each $\alpha\in T_0$, ${\rm min}(\kappa(p^{\smallfrown}\alpha))=\alpha$, hence there is enough $\leq^*$-closure to decide $\lusim{A}\cap\alpha$, so we find $p^{\smallfrown}\alpha\leq^* p_\alpha$ and a set $A_0(\alpha)$ such that $p_\alpha\Vdash \lusim{A}\cap\alpha= A_0(\alpha)$. Then $p_0$ is obtained by diagonally intersecting all the sets in $p_\alpha$, and $p_0$ has the following property $$\forall \alpha\in T_0.\ p_0{}^{\smallfrown}\alpha \Vdash\lusim{A}\cap\alpha=A_0(\alpha)\wedge C_{\lusim{G}}(0)=\alpha$$
For clarity, let us also present the construction of $p_1$ and $T_{1,t_0}$ for every $t_0\in mb(T_0)$. The proofs regarding the construction will be addressed later, in the general definition.
If necessary, find a direct extension of $p_0$ and use ineffability to find a set $A_0({\langle}{\rangle})\subseteq\kappa$, such that for every $\alpha\in T_0$, $A_0({\langle}{\rangle})\cap\alpha=A_0(\alpha)\cap\alpha$.
In $V[A]$, define $\eta_0={\rm min}(A\Delta A_0({\langle}{\rangle}))$; since $A\notin V$ and $A_0({\langle}{\rangle})\in V$, $\eta_0<\kappa$ is well defined. Clearly, for every $V$-generic filter $H$ with $p_0\in H$, $\eta_0> C_H(0)$, since then $p_0^{\smallfrown} C_H(0)\in H$ forces the correct value of $A$. Let $\lusim{\eta}_0$ be a name such that $p_0\Vdash \lusim{\eta}_0={\rm min}(\lusim{A}\Delta A_0({\langle}{\rangle}))$.
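Purely as a finite illustration (no part of the proof), the ordinal $\eta_0={\rm min}(A\Delta A_0(\langle\rangle))$ is the first point at which two sets disagree; the following hypothetical Python helper computes this for finite sets of naturals.

```python
def least_disagreement(a, b, bound):
    """Finite analogue of min(A \u0394 B): the least n < bound lying in
    exactly one of the sets a, b; None if they agree below bound."""
    for n in range(bound):
        if (n in a) != (n in b):
            return n
    return None
```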
Fix $t_0\in mb(T_0)$, consider $p_0^{\smallfrown}t_0$. In the general case we will prove that we can find $p_0^{\smallfrown}t_0\leq^* p_{t_0}$, $T_{1,t_0}$ and sets $Y_1^t$ for $t\in mb(T_{1,t_0})$ such that for every $t_1\in mb(T_{1,t_0})$ there is $A_1(t_0,t_1)\subseteq{\rm max}(t_1)$, such that $$p_{t_0}^{\smallfrown}{\langle} t_1, \vec{Y}^{t_1}_1{\rangle}\Vdash A_1(t_0,t_1)\cap{\rm max}(t_1)= \lusim{A}\cap{\rm max}(t_1)\wedge {\rm max}(t_1)=C_{\lusim{G}}(\lusim{\eta}_0)$$ Note that $$p_0^{\smallfrown}t_0\Vdash{\rm max}(t_0)<\lusim{\eta}_0\leq C_{\lusim{G}}(\lusim{\eta}_0)$$ Hence ${\rm max}(t_1)>{\rm max}(t_0)$.
Find a single $p_0\leq^*p_1$ such that for every $t_0\in mb(T_0)$, $p_{t_0}\leq ^*p_1^{\smallfrown}t_0$.
If necessary, directly extend $p_1$ to get $N_1<\omega$, such that for every $t_0\in mb(T_0)$, $ht(T_{1,t_0})=N_1$.
Define the sets $A_1(t_0,s)$ for every $s\in T_{1,t_0}\setminus mb(T_{1,t_0})$. Let $s\in {\rm Lev}_{N_1-1}(T_{1,t_0})$; we can shrink ${\rm Succ}_{T_{1,t_0}}(s)$ and find $A_1(t_0,s)\subseteq\kappa$ such that for every $\alpha\in{\rm Succ}_{T_{1,t_0}}(s)$, $A_1(t_0,s^{\smallfrown}\alpha)\cap\alpha= A_1(t_0,s)\cap\alpha$.
Recursively, let $s\in T_{1,t_0}\setminus mb(T_{1,t_0})$ and assume that for every $\alpha$ in ${\rm Succ}_{T_{1,t_0}}(s)$, $A_1(t_0,s^{\smallfrown}\alpha)$ is defined. Find a single $A_1(t_0,s)$ and shrink ${\rm Succ}_{T_{1,t_0}}(s)$ such that $$\forall\alpha\in{\rm Succ}_{T_{1,t_0}}(s). \ A_1(t_0,s^{\smallfrown}\alpha)\cap\alpha= A_1(t_0,s)\cap\alpha$$
In $V[A]$, define $\rho^1(k)$ for every $k\leq N_1$. For $k=0$, $$\rho^1(0)={\rm sup}({\rm min}(A\Delta A_1(t_0,{\langle}{\rangle}))\mid t_0\in mb(T_0)\cap[\eta_0]^{<\omega})$$ Recursively, $$\rho^1(k+1)={\rm sup}({\rm min}(A\Delta A_1(t_0,{\langle} \alpha_0,...,\alpha_k{\rangle}))\mid t_0\in mb(T_0)\cap[\eta_0]^{<\omega}\wedge \alpha_i<\rho^1(i))$$ Note that for every $t_0\in mb(T_0)$ and $s\in T_{1,t_0}$, $A\neq A_1(t_0,s)$, as $A_1(t_0,s)\in V$ and $A\notin V$. Therefore $\rho^1(k)\leq\kappa$ is well defined for every $k\leq N_1$. In the general case we will also prove that $\rho^1(k)<\kappa$. Finally, define $\eta_1=\rho^1(N_1)$ and let $\lusim{\eta}_1$ be a name such that $p_1$ forces $\lusim{\eta}_1$ is computed by comparing the sets $A_1(t_0,s)$ and $\lusim{A}$, the way we defined it.
Now for the general definition, assume we have defined $p\leq^* p_1\leq^* p_2...\leq^*p_{n}$, trees $T_{n,t_0,...,t_{n-1}}$ for $t_0\in mb(T_0),t_1\in mb( T_{1,t_0}),...,t_{n-1}\in mb(T_{n-1,t_0,...,t_{n-2}})$, sets $A_n(t_0,...,t_{n-1},t_n)$ for every $t_n\in mb(T_{n,t_0,...,t_{n-1}})$ and $Y^{t_0}_0,...,Y^{t_{n}}_{n}$, also a name $\lusim{\eta}_{n-1}$ such that $$p_{n}^{\smallfrown}{\langle} t_0,\vec{Y}^{t_0}_0{\rangle}^{\smallfrown}{\langle} t_1,\vec{Y}^{t_1}_1{\rangle}...^{\smallfrown}{\langle} t_n,\vec{Y}^{t_n}_n{\rangle}\Vdash \lusim{A}\cap {\rm max}(t_n)=A_n(t_0,...,t_{n-1},t_n)\cap{\rm max}(t_n)\wedge{\rm max}(t_n)= C_{\lusim{G}}(\lusim{\eta}_{n-1})$$
Define recursively the sets $A_n(t_0,...,t_{n-1},s)$ for $s\in T_{n,t_0,...,t_{n-1}}\setminus mb(T_{n,t_0,...,t_{n-1}})$. Assume that $$A_n(t_0,...,t_{n-1},s^{\smallfrown}\alpha)$$ is defined, for every $\alpha\in {\rm Succ}_{T_{n,t_0,...,t_{n-1}}}(s)$. Directly extend $p_n$ if necessary, shrink $ {\rm Succ}_{T_{n,t_0,...,t_{n-1}}}(s)$ and find by ineffability $A_n(t_0,...,t_{n-1},s)$ so that for every $\alpha\in{\rm Succ}_{T_{n,t_0,...,t_{n-1}}}(s)$, $$A_n(t_0,...,t_{n-1},s)\cap\alpha=A_n(t_0,...,t_{n-1},s^{\smallfrown}\alpha)\cap\alpha.$$
In $V[A]$, we have defined $\eta_0,...,\eta_{n-1}$, and so we can define $$\rho^n(0)={\rm sup}\Big[{\rm min}\Big(A_n(t_0,...,t_{n-1},{\langle}{\rangle})\Delta A\Big)\mid t_0\in mb(T_0)\cap[\eta_0]^{<\omega},...,t_{n-1}\in mb(T_{n-1})\cap[\eta_{n-1}]^{<\omega}\Big]$$ and keep defining $\rho^n(k+1)$ recursively as $${\rm sup}\Big[{\rm min}\Big(A_n(t_0,...,t_{n-1},\vec{\alpha})\Delta A\Big)\mid t_0\in mb(T_0)\cap[\eta_0]^{<\omega},...,t_{n-1}\in mb(T_{n-1})\cap[\eta_{n-1}]^{<\omega}, \ \vec{\alpha}\in\prod_{j=1}^k\rho^n(j)\Big]$$ Finally, define $\eta_n=\rho^n(N_n)$.
Again note that $\rho^n(k)\leq\kappa$ is a well-defined ordinal. Let us prove that $\rho^n(k)<\kappa$. \begin{claim*} For every $k\leq N_n$, $\rho^n(k)<\kappa$. \end{claim*} \textit{Proof of claim.} The proof is similar to the case that $\kappa$ is singular in $V[G]$. Toward a contradiction, assume that $\rho^n(k)=\kappa$. Back in $V$, consider the collection $$\Big\{A_n(t_0,...,t_{n-1},\vec{\alpha}) \mid t_0\in mb(T_0)\cap[\eta_0]^{<\omega},...,t_{n-1}\in mb(T_{n-1})\cap[\eta_{n-1}]^{<\omega}, \ \vec{\alpha}\in\prod_{j=1}^{k-1}\rho^n(j)\Big\}$$
Then for every $\gamma<\kappa$ pick distinct $t_0,...,t_{n-1},\vec{\alpha}$ and $s_0,...,s_{n-1},\vec{\beta}$ such that $$A_n(t_0,...,t_{n-1},\vec{\alpha})\neq A_n(s_0,...,s_{n-1},\vec{\beta}),\text{ but } A_n(t_0,...,t_{n-1},\vec{\alpha})\cap\gamma=A_n(s_0,...,s_{n-1},\vec{\beta})\cap\gamma$$
To see that there are such $t_0,...,t_{n-1},\vec{\alpha}$ and $s_0,...,s_{n-1},\vec{\beta}$, note that by the assumption that $\rho^n(k)=\kappa$ there are $t_0,...,t_{n-1},\vec{\alpha}$ such that $\xi_1:={\rm min}(A\Delta A_n(t_0,...,t_{n-1},\vec{\alpha}))>\gamma$,
hence $$A_n(t_0,...,t_{n-1},\vec{\alpha})\cap\gamma=A\cap\gamma$$ Then find $s_0,...,s_{n-1},\vec{\beta}$ such that ${\rm min}(A\Delta A_n(s_0,...,s_{n-1},\vec{\beta}))>\xi_1$.
In particular, $$A_n(t_0,...,t_{n-1},\vec{\alpha})\neq A_n(s_0,...,s_{n-1},\vec{\beta})\text{ but }A_n(t_0,...,t_{n-1},\vec{\alpha})\cap\gamma=A\cap\gamma=A_n(s_0,...,s_{n-1},\vec{\beta})\cap\gamma$$ Since this is all in $V$, where $\kappa$ is measurable, we can find unboundedly many $\gamma$'s with the same $t_0,...,t_{n-1},\vec{\alpha},s_0,...,s_{n-1},\vec{\beta}$, which is clearly a contradiction.$\blacksquare_{\text{Claim}}$
Find a name $\lusim{\eta}_n$ such that $p_n$ forces that $\lusim{\eta}_n$ is obtained by comparing $\lusim{A}$ with the sets $A_n(t_0,...,t_{n-1},\vec{\alpha})$ as above, using $\lusim{\eta}_0,...,\lusim{\eta}_{n-1}$.
Now for the definition of the trees, fix $t_0,...,t_n$ such that $$t_0\in mb(T_0),t_1\in mb( T_{1,t_0}),...,t_{n}\in mb(T_{n,t_0,...,t_{n-1}})$$ The set $D$ of all conditions $q$ such that for some $\vec{\alpha}\in[\kappa]^{<\omega}$: \begin{enumerate}
\item $p_{n}^{\smallfrown}{\langle} t_0,\vec{Y}^{t_0}_0{\rangle}^{\smallfrown}...{}^{\smallfrown}{\langle} t_n,\vec{Y}^{t_n}_n{\rangle}^{\smallfrown}\vec{\alpha}\leq^*q$.
\item $q\Vdash \lusim{A}\cap{\rm max}(\vec{\alpha})=A(\vec{\alpha})\wedge {\rm max}(\vec{\alpha})=C_{\lusim{G}}(\lusim{\eta}_{n})$. \end{enumerate}
is dense above $p_{n}^{\smallfrown}{\langle} t_0,\vec{Y}^{t_0}_0{\rangle}^{\smallfrown}...{}^{\smallfrown}{\langle} t_n,\vec{Y}^{t_n}_n{\rangle}$. The proof is as for the case $\kappa$ is singular.
By \ref{strongPrkryProperty} and \ref{densetree}, find a condition $p_{n}^{\smallfrown}t_0^{\smallfrown}...^{\smallfrown}t_{n-1}{}^{\smallfrown}t_n\leq^*p_{t_0,...,t_n}$, a $\vec{U}$-fat tree of extensions of $p_{t_0,...,t_n}$, $T_{n+1,t_0,...,t_n}$, and sets $Y^s_{n+1}$, such that for every $ t\in mb(T_{n+1,t_0,...,t_n})$ there is $A_{n+1}(t_0,...,t_n,t)\subseteq {\rm max}(t)$ for which $$p_{t_0,...,t_n}{}^{\smallfrown}{\langle} t,\vec{Y}^t_{n+1}{\rangle} \Vdash\lusim{A}\cap{\rm max}(t)=A_{n+1}(t_0,...,t_n,t)\wedge C_{\lusim{G}}(\lusim{\eta}_n)={\rm max}(t)$$ and the set $D_{T_{n+1,t_0,...,t_n},\vec{Y}_{n+1}}$ is dense above $p_{t_0,...,t_n}$.
By \ref{amalgamate}, find a single $p_n\leq^* p_{n+1}$ and shrink the trees $T_{i,t_0,...,t_i}$ such that for every $t_0,...,t_n$, $p_{t_0,...,t_n}\leq^* p_{n+1}{}^{\smallfrown}t_0{}^{\smallfrown}...{}^{\smallfrown}t_n$.
By shrinking even more if necessary, we can assume that there is $N_{n+1}$ such that for every $t_0,...,t_n$, $ht(T_{n+1,t_0,...,t_n})=N_{n+1}$. This concludes the recursive definition.
By $\sigma$-completeness, there is $p_{\omega}$ such that $p_n\leq^* p_\omega$. By density, there is such $p_{\omega}\in G$.
In $V[A]$ we have the sequence ${\langle} \eta_n\mid n<\omega{\rangle}$. Clearly, $C_G(\eta_n)\geq \eta_n$ and as we have seen, $C_G(0)<\eta_0$. Let us prove that $\eta_{n+1}>C_G(\eta_n)$.
\begin{claim*} For every $0<n<\omega$, $\eta_{n}>C_G(\eta_{n-1})$. \end{claim*} \textit{Proof of claim.} Find $\vec{c}_0\in mb(T_0)$, $\vec{c}_1\in mb(T_{1,\vec{c}_0})$, ..., $\vec{c}_n\in mb(T_{n,\vec{c}_0,...,\vec{c}_{n-1}})$ such that $$p_\omega^{\smallfrown} {\langle} \vec{c}_0,\vec{Y}^{\vec{c}_0}_0{\rangle}^{\smallfrown}...^{\smallfrown}{\langle}\vec{c}_n,\vec{Y}^{\vec{c}_n}_n{\rangle}\in G$$ It follows that $\vec{c}_n(N_n)=C_G(\eta_{n-1})$ and that $A\cap C_G(\eta_{n-1})=A_{n}(\vec{c}_0,...,\vec{c}_n)\cap C_G(\eta_{n-1})$. Since, by definition, for every $j\leq N_n$, $$A_{n}(\vec{c}_0,...,\vec{c}_{n-1},\vec{c}_n\restriction j)\cap \vec{c}_n(j)=A_{n}(\vec{c}_0,...,\vec{c}_{n-1},\vec{c}_n\restriction j+1)\cap\vec{c}_n(j)$$ it follows that for every $j\leq N_n$, $$A_{n}(\vec{c}_0,...,\vec{c}_{n-1},\vec{c}_n\restriction j)\cap \vec{c}_n(j)=A\cap\vec{c}_n(j)$$
In particular, $A\cap\vec{c}_n(0)=A_{n}(\vec{c}_0,...,\vec{c}_{n-1},{\langle}{\rangle})\cap \vec{c}_n(0)$.
Let us argue that for every $k\leq N_n$, $\rho^n(k)> \vec{c}_n(k)$.
Since by definition $A\cap\rho^n(0)\neq A_{n}(\vec{c}_0,...,\vec{c}_{n-1},{\langle}{\rangle})\cap\rho^n(0)$, it follows that $\vec{c}_n(0)<\rho^n(0)$. Inductively, assume that $\vec{c}_n(j)<\rho^n(j)$ for every $j\leq k$. Since $A_{n}(\vec{c}_0,...,\vec{c}_{n-1},\vec{c}_n\restriction k+1)\cap\vec{c}_n(k+1)=A\cap \vec{c}_n(k+1)$, then $$\vec{c}_n(k+1)<{\rm min}(A_n(\vec{c}_0,...,\vec{c}_{n-1},\vec{c}_n\restriction k+1)\Delta A)\leq \rho^n(k+1)$$ Hence $C_G(\eta_{n-1})=\vec{c}_n(N_n)<\rho^n(N_n)=\eta_n$.$\blacksquare_{\text{Claim}}$
We conclude that $$C_G(0)<\eta_0\leq C_G(\eta_0)<\eta_1\leq C_G(\eta_1)...$$ Let $\kappa^*={\rm sup}_{n<\omega}\eta_n$; then $\kappa^*\in Lim(C_G)$ and is therefore regular in $V$. Also, by assumption, $cf^{V[G]}(\kappa)>\omega$, hence $\kappa^*<\kappa$. By freshness, $A\cap\kappa^*\in V$.
This means that in $V$ we can construct the sequence ${\langle} \eta_n\mid n<\omega{\rangle}$ which is a contradiction. This concludes the proof for sets with supremum $\kappa$.$\blacksquare_{\text{Lemma}}$
Now for the remaining cases of Theorem \ref{Fresh}:
\begin{lemma}\label{freshabovek} If $A\in V[G]$ is a fresh set of ordinals with respect to $V$, such that ${\rm sup}(A)>\kappa$, then $cf^{V[G]}({\rm sup}(A))=\omega$. \end{lemma}
\noindent\textit{Proof}. Let $\mu:=cf^{V}({\rm sup}(A))$; by Theorem \ref{no fresh subsets}, $\mu\leq\kappa$. There is a fresh set $X$ of order type $\mu$ such that $V[A]=V[X]$. To see this, pick in $V$ a cofinal sequence $\langle \eta_i\mid i<\mu{\rangle}$ in ${\rm sup}(A)$. Then by the $\kappa^+$-c.c., there is $F\in V$ such that \begin{enumerate}
\item $Dom(F)=\mu$.
\item For every $i<\mu$, $|F(i)|=\kappa$.
\item $A\cap\eta_i\in F(i)$. \end{enumerate} For each $i<\mu$, find in $V$ an enumeration $\langle x^i_j\mid j<\kappa\rangle$ of $F(i)$ such that for every $W\in F(i)$, $\{j<\kappa\mid x^i_j=W\}$ is unbounded in $\kappa$.
Move to $V[A]$ and inductively define an increasing sequence ${\langle}\gamma_i\mid i<\mu{\rangle}$ such that $x^i_{\gamma_i}=A\cap\eta_i$.
Set $\gamma_0={\rm min}(j\mid x^0_j=A\cap\eta_0)$. Assume that $\gamma_i$ has been defined for every $i\leq k<\mu$, and define $\gamma_{k+1}={\rm min}(j>\gamma_k\mid x^{k+1}_j=A\cap\eta_{k+1})$. Note that at a limit stage $\delta$, the sequence $\langle\gamma_i\mid i<\delta\rangle$ is definable using only the enumerations and $A\cap\eta_\delta$, which are all available in $V$; hence $\gamma_{\delta}'={\rm sup}(\gamma_i\mid i<\delta)<\kappa$, and we define $\gamma_\delta={\rm min}(j>\gamma_\delta'\mid x^{\delta}_j=A\cap\eta_\delta)$.
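As a finite illustration only (the actual construction is transfinite), the recursion above picks, at each stage, the least index beyond all previous ones at which the $i$-th enumeration hits the target set; a hypothetical Python analogue for finite lists:

```python
def code_by_indices(enumerations, targets):
    # Finite analogue of the gamma_i recursion: gammas[i] is the least index
    # above all previous gammas at which enumerations[i] equals targets[i].
    # Assumes each target value recurs often enough in its enumeration.
    gammas, prev = [], -1
    for enum, target in zip(enumerations, targets):
        j = next(j for j in range(prev + 1, len(enum)) if enum[j] == target)
        gammas.append(j)
        prev = j
    return gammas
```

The returned indices form an increasing sequence coding the targets, mirroring how $X=\{\gamma_i\mid i<\mu\}$ codes $A$.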
Let $X=\{\gamma_i\mid i<\mu\}\subseteq \kappa$. Since $\langle \gamma_i\mid i<\mu\rangle$ is increasing, $cf^{V[G]}({\rm sup}(X))=cf^{V[G]}(\mu)$, $V[A]=V[X]$, and $X$ is fresh. It follows by the proof for subsets of $\kappa$ that $cf^{V[G]}(\mu)=\omega$, hence $cf^{V[G]}({\rm sup}(A))=\omega$. $\blacksquare_{\text{Lemma }\ref{freshabovek}}$ $\blacksquare_{\text{Theorem }\ref{Fresh}}$ \section{Open problems}
Here are some related open problems:
In contrast to the case where $o^{\vec{U}}(\kappa)<\kappa$, we do not yet have a classification of the subforcings of $\mathbb{M}[\vec{U}]$.
\begin{question} Classify subforcings of $\mathbb{M}[\vec{U}]$. \end{question}
For $o^{\vec{U}}(\kappa)<\kappa^+$, using Theorem \ref{MainResaultParttwo}, it suffices to consider models of the form $V[C']$ for some $C'\subseteq C_G$, and try to classify the forcings which generate these models.
Our conjecture, at least for $o^{\vec{U}}(\kappa)=\kappa$, is the following: \begin{conjecture} Let $G\subseteq \mathbb{M}[\vec{U}]$ be a $V$-generic filter, where $\forall\alpha\leq \kappa.o^{\vec{U}}(\alpha)\leq\alpha$. If $V\subseteq M\subseteq V[G]$ is a transitive $ZFC$ model, then either $M$ is a finite iteration of Magidor-like forcings as in \cite{partOne}, or there is a tree $T\subseteq[\kappa]^{<\omega}$ in $V$ such that $ht(T)=\omega$ and for every $t\in T$ and every $\alpha\in {\rm Succ}_T(t)$, there is a name $\lusim{{\mathbb{M}[\vec{U}]}^*}_{t^{\smallfrown}\alpha}$ for a Magidor-like forcing, such that if $H$ is a $V$-generic filter for the forcing adding a branch through the tree $T$ along with the forcings $\lusim{{\mathbb{M}[\vec{U}]}^*}_{t^{\smallfrown}\alpha}$ corresponding to the branch, then $M=V[H]$. \end{conjecture}
\begin{question}\label{question3} Suppose that $o^{\vec{U}}(\kappa)=\kappa^+$. Is every set of ordinals in the extension still equivalent to a subsequence of a generic sequence? \end{question}
Note that the situation here is more involved since $\kappa$ stays regular in $V[G]$ and it is no longer possible to separate the measures.
\begin{question} The same as \ref{question3}, but with $o^{\vec{U}}(\kappa)>\kappa^+$.\end{question}
\begin{question} What can we say about other Prikry type forcing notions? \end{question}
In \cite{TomMoti}, an example of a non-normal ultrafilter is given which adds a Cohen function to $\kappa$. So in general, not every intermediate model of Prikry type extensions is a Prikry type extension.
The following questions were stated in Section $5$:
In an attempt to generalize \ref{kappaplusccgeneral} to a wider class of forcings, the simplest case would probably be to deal with a long enough Magidor iteration of Prikry forcings and to analyze its subforcings.
\begin{question} Is the result of Theorem \ref{kappaplusccgeneral} valid for a long enough Magidor iteration of the Prikry forcings? \end{question}
\begin{question} Characterize filters (or ultrafilters) which satisfy the Galvin property (or the generalized Galvin property). \end{question}
\begin{question} Assume $GCH$. Let $\kappa$ be a regular uncountable cardinal. Is there a $\kappa$-complete filter on $\kappa$ which fails to satisfy the Galvin property? \end{question}
\begin{question} Assume $GCH$. Let $\kappa$ be a regular uncountable cardinal. Is there a $\kappa$-complete filter which extends the closed unbounded filter $Cub_\kappa$ and fails to satisfy the Galvin property? \end{question}
\begin{question} Is it consistent to have a $\kappa$-complete ultrafilter over $\kappa$ which does not have the Galvin property? \end{question}
\begin{question} Is it consistent to have a measurable cardinal $\kappa$ carrying a $\kappa$-complete ultrafilter which extends the closed unbounded filter $Cub_\kappa$ (i.e., a $Q$-point) and fails to satisfy the Galvin property? \end{question}
In Section $5$ we have seen that a fine $\kappa$-complete ultrafilter over $P_\kappa(\lambda)$ does not satisfy the Galvin property. Indeed, if $U$ is a fine normal measure on $P_\kappa(\lambda)$, then the supercompact Prikry forcing is not $\kappa^+$-cc; however, under $GCH$ this forcing is $\lambda^+$-cc.
\begin{question} Assume $GCH$ and let $\lambda>\kappa$ be a regular cardinal. Is every quotient forcing of the supercompact Prikry forcing also $\lambda^+$-cc in the generic extension? \end{question}
\end{document}
\begin{document}
\newtheorem{theo}{Theorem} \newtheorem{lemma}{Lemma}
\title{Conditions for the Quantum to Classical Transition: Trajectories vs. Phase Space Distributions}
\author{Benjamin D. Greenbaum}
\affiliation{Department of Physics, University of Massachusetts at Boston, 100 Morrissey Blvd, Boston, MA 02125, USA}
\author{Kurt Jacobs}
\affiliation{Department of Physics, University of Massachusetts at Boston, 100 Morrissey Blvd, Boston, MA 02125, USA}
\author{Bala Sundaram}
\affiliation{Department of Physics, University of Massachusetts at Boston, 100 Morrissey Blvd, Boston, MA 02125, USA}
\begin{abstract} We contrast two sets of conditions that govern the transition in which classical dynamics emerges from the evolution of a quantum system. The first was derived by considering the trajectories seen by an observer (dubbed the ``strong'' transition) [Bhattacharya {\em et al.}, Phys. Rev. Lett. {\bf 85} 4852 (2000)], and the second by considering phase-space densities (the ``weak'' transition) [Greenbaum {\em et al.}, Chaos {\bf 15}, 033302 (2005)]. On the face of it, these conditions appear rather different. We show, however, that in the semiclassical regime, in which the action of the system is large compared to $\hbar$ and the measurement noise is small, they both offer an essentially equivalent local picture. Within this regime, the weak conditions dominate, while in the opposite regime, where the action is not much larger than $\hbar$, the strong conditions dominate. \end{abstract}
\pacs{03.65.Yz, 03.65.Sq, 05.45.Mt} \maketitle
\section{Introduction}
It has been established by a number of recent works that the act of continuously observing a quantum system is sufficient to induce a transition from quantum to classical dynamics, so long as the action of the system is sufficiently large and the measurement sufficiently strong~\cite{Spiller94,Schack95,Brun96,Percival98,Percival98b,Bhattacharya00,Habib02,Bhattacharya03,Ghose03,Ghose04,Everitt05,Ghose05}. Under these conditions the quantum system remains well-localized in phase space, any noise introduced by the measurement is negligible, and the mean position and momentum of the quantum particle follow the smooth trajectories of classical mechanics. In particular, this approach provides a detailed understanding of how classical chaos emerges from quantum dynamics in the classical limit. Measurement (or equivalently the extraction of information by an environment, whether explicitly observed or not) is essential for this process: closed quantum systems cannot exhibit chaos, as demonstrated by results such as the Kosloff-Rice theorem~\cite{Kosloff81,Manz89} (reviews of this topic are given in~\cite{Khanna06,GreenbaumPhD}).
Prior to this type of analysis, research on the quantum-to-classical transition focused on phase-space distribution functions, rather than observed trajectories. If the initial conditions for a classical system are not known precisely, and it is not measured during its evolution, then the state of the system is described by an ever broadening probability density in phase space. The dynamics of this density are given by the classical Liouville equation~\cite{Gold}. A quantum analog of this phase-space distribution is the Wigner function~\cite{Wigner32}. For a classically chaotic, one-dimensional, time-dependent Hamiltonian, it was found that the interaction with a large (Markovian) environment would transform the dynamics of the Wigner function into that of the classical phase-space density, at least under some circumstances~\cite{Habib98}. The study of the quantum-to-classical transition for phase-space densities under generic environmental interactions is often referred to as ``decoherence''~\cite{Zurek02,Pattanayak03}. Heuristic arguments were devised to explain this phenomenon for classically chaotic systems~\cite{Zurek02} although, due to the complexity of the quantum and classical evolution equations for these systems, such arguments are not easy to make precise. Nevertheless, the mechanisms, valid in the semiclassical limit, by which the Wigner function closely approximates the classical density for one-dimensional, time-dependent, classically chaotic systems have recently been reported~\cite{GreenbaumNew,Greenbaum05} which provides one focus for the present work.
The two approaches to the quantum-to-classical transition for open systems, the trajectory-level method employing continuous measurement theory and the distribution-level approach involving the Wigner function, may in fact treat precisely the same physical situation. When a quantum system interacts with a Markovian environment, this environment continually carries away information about the system. If an observer chooses to measure this information, the resulting dynamics is described by the stochastic master equation of continuous quantum measurement theory~\cite{JacobsSteck06,Brun02}. If the observer does not make use of this information, then the equation reduces to an evolution equation for the Wigner function under a Markovian environment, as employed in the studies of decoherence. Note that the act of observing the environment has no additional effect on the {\it system} beyond that already imposed by the environment. This is why the standard distribution-level description of an environmental interaction is given by averaging over all possible realizations of the underlying trajectories~\cite{foot1}.
As a result, continuous measurement theory can be used mathematically as a way to analyze the behavior of the Wigner function in the presence of an environment. This is because the measurement equations correctly describe the Wigner function dynamics regardless of whether an observer happens to be ``actually'' monitoring the system or not. Thus a continuously measured system behaving classically at the trajectory level should exhibit a corresponding Wigner function which reproduces the classical phase-space density, though the converse need not be true: a density undergoing a noise-induced transition may not have a smooth classical trajectory picture.
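The averaging statement above can be illustrated with a toy classical simulation (this is our own sketch, not the full stochastic master equation, and all parameter values are hypothetical): individual noisy trajectories fluctuate, but their ensemble average follows the deterministic distribution-level mean.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, D, dt, steps, ntraj = 1.0, 0.5, 1e-3, 1000, 2000  # hypothetical values

# Each noisy "trajectory": deterministic drift plus diffusive noise,
# standing in for a single measurement record.
x = np.ones(ntraj)
for _ in range(steps):
    x += -gamma * x * dt + np.sqrt(D * dt) * rng.standard_normal(ntraj)

# Distribution-level prediction: the mean obeys d<x>/dt = -gamma <x>,
# so averaging over records recovers the smooth, noise-free evolution.
predicted_mean = np.exp(-gamma * steps * dt)
```

The ensemble average of the noisy trajectories agrees with `predicted_mean` to within statistical error, even though no single trajectory does.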
While continuous measurement will explain the emergence of classical motion at the level of phase-space densities, there are other relevant questions regarding the relationship between the emergence of classicality at the two levels, densities and trajectories. In this paper, we address two of these. The first is to define more precisely the circumstances under which the emergence of classicality at one level effects emergence at the other. Specifically, since phase-space densities can converge {\em without} the underlying, observed trajectories having become classical, we ask under what conditions the emergence of a classical phase-space density {\em does imply} that an observer would see the classical trajectories. The second, related, question regards two sets of conditions that govern the emergence of classicality. The first, derived by Bhattacharya {\em et al.}~\cite{Bhattacharya00,Bhattacharya03}, provides conditions under which the observed trajectories of a quantum system will obey classical dynamics. The second, derived by Greenbaum {\em et al.}~\cite{Greenbaum05,GreenbaumNew}, shows how the Wigner function matches its classical counterpart. These two sets of conditions were derived in quite different ways, involving different concepts, and we wish to understand the relationship between them.
In what follows we will refer to the emergence of classicality at the level of the phase-space densities as the {\em weak} quantum-to-classical transition (weak QCT), and the emergence at the level of observed trajectories as the {\em strong} QCT~\cite{foot2}. In the next section we summarize the arguments used to derive the conditions for the emergence of classicality in both the strong~\cite{Bhattacharya03} and weak~\cite{Greenbaum05,GreenbaumNew} cases and present a useful reformulation of the latter. In Section~\ref{sec::III} we analyze the relationship between the weak and strong transitions. In particular we explore the nature of the regime where the weak transition implies the strong as opposed to the one in which the weak QCT is satisfied but the strong is not. In Section~\ref{sec::IV} we present an alternative approach to deriving the conditions in which the weak transition implies the strong transition. This is subsumed by the condition derived in Section~\ref{sec::III}. In Section~\ref{sec::V} we conclude with a brief summary of the main results.
\section{Inequalities Governing the Quantum-to-Classical Transition}
\subsection{The Strong QCT}
In references~\cite{Bhattacharya00,Bhattacharya03} Bhattacharya {\em
et al.} derived a set of approximate inequalities governing the emergence of classical motion in an observed quantum system consisting of a single particle. These inequalities define the strong QCT, as they delineate the conditions under which an observed single particle will follow a localized classical trajectory. For succinctness, we will therefore refer to the inequalities derived by Bhattacharya {\em et al.} as the {\em strong inequalities}, since they relate to the QCT in the strong sense. Throughout the paper, we will denote the expectation values of the position and momentum of the single-particle system by $x$ and $p$.
The classical Hamiltonian for the system at $(x,p)$ is generally time-dependent and of the form \begin{equation}
H(x,p,t) = \frac{p^2}{2m} + V(x,t) \end{equation} where $F(x,t) = -\partial V(x,t)/ \partial x$ is the classical force and, as usual, $m$ is the particle mass. For the remainder of the work, we will not explicitly denote the time-dependence of functions of phase-space variables. The first of the strong inequalities determines when the centroid of the wave-function will remain sufficiently localized to obey classical mechanics, and is divided into two regimes. When the strength of the non-linearity, as measured by the magnitude of $\partial^2_x F(x)$, is small enough to satisfy \begin{equation}
|\partial_x^2 F | \ll \frac{4 |F| \sqrt{ m |\partial_x F|}}{\hbar}
\, , \label{locineq1} \end{equation} then the condition is \begin{equation}
k \gg \left| \frac{\partial_x^2 F }{8F} \right| \sqrt{ \frac{|\partial_x F|}{2m}} \,. \label{locineq2} \end{equation} When the strength of the non-linearity violates Eq.(\ref{locineq1}), the condition becomes \begin{equation}
k \gg \left( \frac{\partial_x^2 F}{ 8 F} \right)^2 \frac{2\hbar}{m} \, . \label{locineq3} \end{equation} Here $k$ is the ``measurement strength'', which is the parameter that determines the rate at which the environment extracts information about the system~\cite{DJJ}. An example is given by the weak-coupling, high-temperature limit of the Caldeira-Leggett master equation describing a single particle interacting with a thermal environment~\cite{GZbook}. In this case $k = D/\hbar^2$, where $D$ is the rate of momentum diffusion due to the environment. Note that while $k$ is a constant, $F$ depends on $x$, and thus varies over the phase space of the system. Thus, the right-hand sides of the inequalities above are understood as being averaged over the phase space, weighted by the relative time the particle spends at each point.
The inequalities as given in \cite{Bhattacharya03} also include a dimensionless quantity $\eta$, referred to as the {\em measurement
efficiency}, which is the fraction of the extracted information that is actually obtained by the observer. When considering the measurement analysis merely as a tool to derive results regarding the transition in terms of the Wigner function, $\eta$ is irrelevant. Thus, in comparing the strong inequalities with the weak transition derived by Greenbaum {\em et al.}, we will always set $\eta=1$, corresponding to the assumption that any observer has all the available information. Choosing a smaller value of $\eta$ is useful only when considering the behavior of observed trajectories in particular physical situations where the information available to observers is limited by practical considerations.
The second part of the strong inequalities gives the condition under which the noise in the observed trajectories is negligible, so that they follow the smooth classical evolution given by the Hamiltonian. This consists of two inequalities that must both be satisfied: \begin{equation}
\frac{ 2|\partial_x F| }{\bar s } \ll \hbar k \ll
\frac{ |\partial_x F|\bar{s} } {4}
\,.
\label{lnineq} \end{equation} Here $\bar s$ is a measure of the action of the system in units of $\hbar$. Specifically, $\bar{s}\equiv\mbox{min}(S/\hbar,S'/\hbar)$, where \begin{eqnarray}
S & = & \frac{|p|^3}{8 m |F|} \label{eq:s1} \\
S' & = & \frac{m |F|^3}{|p| (\partial_x F)^2} \end{eqnarray} Both $S$ and $S'$ are expressions involving the system parameters that have units of action.
The strong inequalities are thus given by Eq.(\ref{locineq2}) or (\ref{locineq3}), and Eq.(\ref{lnineq}). The first two state that the measurement must be strong enough to successfully limit the spreading of the wave-packet induced by the non-linearity. The second set, given in Eq.(\ref{lnineq}), state, essentially, that the action of the system in units of $\hbar$ should be sufficiently large so that there is a value of $k$ that satisfies both inequalities. As the action of the system becomes very large compared to $\hbar$, then effectively {\em any} measurement strength will satisfy these inequalities, and this defines the classical limit.
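As a rough numerical aid (our own sketch, not from the original papers; the parameter values below are hypothetical, and reading ``$\gg$'' as a plain inequality is an assumption), one can evaluate the strong inequalities at a phase-space point from the local force and its derivatives:

```python
import numpy as np

def strong_inequalities(p, k, F, dF, d2F, m=1.0, hbar=1.0):
    """Check the strong-QCT inequalities at a phase-space point.

    F, dF, d2F: the force and its first and second x-derivatives there.
    '<<' and '>>' are read as plain inequalities, which is an assumption.
    """
    # Localization: Eq. (locineq1) selects which regime applies.
    if abs(d2F) < 4 * abs(F) * np.sqrt(m * abs(dF)) / hbar:
        loc_ok = k > abs(d2F / (8 * F)) * np.sqrt(abs(dF) / (2 * m))  # (locineq2)
    else:
        loc_ok = k > (d2F / (8 * F)) ** 2 * 2 * hbar / m              # (locineq3)
    # Low-noise condition, Eq. (lnineq), with s_bar = min(S, S') / hbar.
    S = abs(p) ** 3 / (8 * m * abs(F))
    Sp = m * abs(F) ** 3 / (abs(p) * dF ** 2)
    s_bar = min(S, Sp) / hbar
    noise_ok = 2 * abs(dF) / s_bar < hbar * k < abs(dF) * s_bar / 4
    return bool(loc_ok and noise_ok)
```

With $m=\hbar=1$ and hypothetical values $F=10$, $\partial_x F=0.1$, $\partial_x^2 F=10^{-3}$, a moderately large momentum admits measurement strengths $k$ satisfying all conditions, while a small momentum (small action) does not, illustrating the classical limit described above.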
\subsection{The Weak QCT}
The conditions derived by Greenbaum {\em et al.}~\cite{GreenbaumNew, Greenbaum05} give a time-scale for when a Wigner function for a quantum system driven by environmental noise will agree with a noise-driven classical phase-space density. Moreover, the weak QCT has two distinct regimes depending on the noise level: a small noise regime in which the transition occurs after the classical structure evolution is in a global steady state and another in which the transition occurs locally while large structures are still forming. We now reformulate the conditions in~\cite{GreenbaumNew} to obtain an expression for the measurement strength which separates these regimes, allowing comparison with the strong inequalities. We also extend the results by providing a weak inequality relevant to the strong QCT low-noise condition.
The arguments devised in~\cite{GreenbaumNew} proceed in two parts. First, a purely classical relation is derived which gives the phase-space length scale, $l(t^*)$, below which noise will prevent the creation of fine structure in the classical phase-space density beyond a time $t^*$. This is derived by calculating two phase-space lengths, both of which are functions of time, and equating them. These lengths are scaled so as to have units of the square root of phase-space area. The first is the length over which the noise destroys fine structure as a function of time, which is given by $l_{cl}(t) = \sqrt{D t/(m\bar{\lambda})}$ where $\bar{\lambda}$ is the usual classical Lyapunov exponent defined over the bounded phase space region. $l_{cl}$ clearly increases with time. The second length is the scale of the phase-space structures developed by the dynamics, $\delta = \sqrt{\xi A} e^{-\bar{\lambda} t}$, which decreases with time. The steady-state length scale $l$ is the point at which these two match. Equating $l_{cl}$ and $\delta$, we obtain an expression for the diffusion constant in terms of the length scale $l$. This is \begin{equation}
D(l) \approx \frac{2 m \bar{\lambda}^2 l^2}{\ln(\xi A/l^2)} \label{weak1} \end{equation} where $A$ is the phase-space area accessible to the system, and $l$ is a length with units of $\sqrt{A}$. There is however, an ambiguity in the value of $\xi$. This comes from the expression for the length scale of the fine structure in the classical density. Its role is to set the scale of the structure in the density of the initial state. As a result, $\xi$ can be anywhere in the range $[1,A/\hbar]$: the lower bound corresponds to an initial state that is uniform over essentially all phase space, and the upper bound to an initial state that is confined to a single cell of area $\hbar$. This upper bound comes from the fact that any quantum phase-space density is limited to fine structure on the order of $\hbar$, and there is therefore no point in considering initial classical densities with finer structure. In fact, due to the logarithm in the expression for $D(l)$, the ambiguity in $\xi$ can be dealt with quite easily. To do so we merely choose the upper or lower bound, whichever provides the most stringent condition. That is, we choose the value of $\xi$ so as to err on the safe side.
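Because $\xi$ enters only through the logarithm, the resulting ambiguity in $D(l)$ is modest, and this is easy to check numerically. The following sketch (with purely illustrative parameter values, not tied to any particular system) evaluates Eq.(\ref{weak1}) at the two extreme choices $\xi=1$ and $\xi=A/\hbar$; at the finest physically relevant scale $l^2=\hbar$ the two thresholds differ by exactly a factor of two, since $\ln(\tilde{s}^2)=2\ln\tilde{s}$.

```python
import math

def diffusion_threshold(l, m, lam, A, xi):
    """Noise strength D(l) of Eq. (weak1): the diffusion constant at which
    classical structure growth halts at phase-space length scale l."""
    return 2.0 * m * lam**2 * l**2 / math.log(xi * A / l**2)

# Hypothetical parameters, for illustration only.
m, lam, A, hbar = 1.0, 1.0, 100.0, 0.1
l = math.sqrt(hbar)          # finest physically relevant length scale

# xi ranges over [1, A/hbar]; the logarithm compresses this ambiguity.
d_lo = diffusion_threshold(l, m, lam, A, xi=1.0)
d_hi = diffusion_threshold(l, m, lam, A, xi=A / hbar)
print(d_lo, d_hi, d_lo / d_hi)   # the ratio is 2: the two choices of xi
                                 # differ only by a factor of two in D
```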
The second step is deriving a condition under which noise is sufficient to wash out interference fringes on length scales below $l_{cl}$. This condition defines the weak QCT. In~\cite{Greenbaum05,GreenbaumNew} semiclassical arguments are used to show that interference fringes are washed out on the length scale of $l_{qu}(t) = \hbar \sqrt{\bar{\lambda} m/(Dt)} = \hbar/l_{cl}(t)$. If we set $l_{qu}=l_{cl}$, then we obtain a simple condition purely in terms of $l_{cl}$: \begin{equation}
l_{qu}^2 \approx l_{cl}^2\gtrsim \hbar \label{weak2} \end{equation} The weak QCT will therefore occur at a time $t_{qc}\approx m\hbar\bar{\lambda}/D$. By equating $l$ and $l_{qu}$ or, equivalently, $t^*$ and $t_{qc}$, we find the threshold between the two distinct weak QCT regimes, which determines whether the weak QCT occurs before classical structure growth terminates. Interpreting this noise as coming from measurement, we set $D=\hbar^2 k$, yielding \begin{equation}
k_{crit} \approx \frac{2 m \bar{\lambda}^2 }{\hbar \ln(\xi A/\hbar)} \label{weak3}. \end{equation} Further, we can identify $A$ with the action of the system, so that $\tilde{s} \equiv A/\hbar$ is an action for the system in units of $\hbar$. This gives \begin{equation}
k_{crit} \approx \frac{2 m \bar{\lambda}^2}{\hbar \ln(\xi \tilde{s})} \label{weak4} \end{equation} Now, since we know that $\xi \in [1,\tilde{s}]$, we see that the difference between taking the maximum and minimum values of $\xi$ only results in a factor of two difference in the right hand side. To obtain our final expression, we take $\xi$ to have its minimum value as this results in the most stringent condition. The result is \begin{equation}
k_{crit} \approx \frac{2 m \bar{\lambda}^2}{\hbar \ln(\tilde{s})} \label{weak5a} \end{equation} When $k$ is greater than this value the weak QCT will occur while classical structures continue to evolve, while for smaller values classical structures will stop forming before the weak QCT. We also want a condition under which noise is negligible so as to obtain the classical limit in the narrow sense. This will be true if the ``smearing area'' $l^2$ is small compared to the accessible phase space $A$. Imposing this condition on the relation in Eq.(\ref{weak1}), we have \begin{equation}
k \ll \left( \frac{m \bar{\lambda}^2 }{\hbar} \right) \frac{2\tilde{s}}{\ln(\tilde{s})} \label{weak5b} \end{equation} where this time we have set $\xi$ at its maximum value to obtain the most stringent condition. Putting the two inequalities together, we define the regime in which the weak QCT occurs while large classical structures continue to form \begin{equation}
\left( \frac{2 m \bar{\lambda}^2}{\hbar}\right) \left[ \frac{1}{\ln(\tilde{s})}\right] \; \lesssim \; k \; \ll \; \left( \frac{2 m \bar{\lambda}^2 }{\hbar}\right) \left[ \frac{\tilde{s}}{\ln(\tilde{s})} \right] \label{weak6} \end{equation} It is important to note that since the weak QCT has been understood using semiclassical arguments, we can only expect these arguments to be strictly valid in the semiclassical regime --- that is, when the dimensionless action of the system $s \gg 1$, and when the noise is relatively small in comparison to the classical dynamics (that is, when $l^2$ is small compared to the accessible phase-space area $A$).
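The width of the window defined by Eq.(\ref{weak6}) can be made concrete numerically. The sketch below (again with illustrative parameter values only) evaluates the lower and upper bounds; their ratio is exactly $\tilde{s}$, so the window opens rapidly as the system becomes semiclassical.

```python
import math

def weak_regime_bounds(m, lam, hbar, s):
    """Lower and upper measurement strengths bracketing the weak-QCT
    regime of Eq. (weak6), with s = A/hbar the dimensionless action."""
    prefac = 2.0 * m * lam**2 / hbar
    return prefac / math.log(s), prefac * s / math.log(s)

m, lam, hbar = 1.0, 1.0, 0.1        # hypothetical values
for s in (10.0, 1e3, 1e5):
    k_lo, k_hi = weak_regime_bounds(m, lam, hbar, s)
    print(s, k_lo, k_hi, k_hi / k_lo)   # k_hi/k_lo = s: wide window for s >> 1
```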
\section{The emergence of classicality: Weak vs. Strong} \label{sec::III}
We wish to examine the relationship between the weak and strong quantum-to-classical transitions. Unlike the weak QCT, the strong QCT only occurs {\it after} a minimum noise threshold is met. The observed wave-function is highly localized in phase space and the noise on observed trajectories is negligible. In this case the weak QCT should also have taken place. That is, the quantum Wigner function will agree with the classical density, and this density will exhibit fine structure down to a length scale much smaller than the available phase space $A$. This result should follow immediately from the fact that 1) the Wigner function is merely the sum of the Wigner functions for all the possible localized observed wave-packets, 2) the centroid of each wave-packet obeys the classical equations of motion, and 3) each wave-packet has area $l^2$ and therefore has a width of order $l$ in each (dimensionless) phase-space direction.
Secondly, if we are in the above highly localized regime, the weak QCT should imply the strong QCT. That is because the Wigner function would not exhibit the same fine structure (that is, the same structure of foliating unstable manifolds) as the classical density if the equivalent observed trajectories were not following the classical dynamics. (In fact, by considering the constraints on the trajectory Wigner functions implied by the scale of the fine structure, one can derive a quantitative condition for when the weak transition implies the strong, and we will do this in Section~\ref{sec::IV}.)
With the above discussion in mind, we now compare directly the strong and weak QCT. This is easy to do if we approximate the local Lyapunov exponent by its global value. This approach is consistent with the inequalities of Bhattacharya {\it et al.} in which one equates local forces with their phase-space averages. The local Lyapunov exponent measures the local stretching rate of a point in phase space, $(x_0, p_0)$. The linearized Newton's equation for the perturbation, $\delta x$, then yields \begin{equation}
m\frac{d^2 \delta x}{dt^2}\approx \left. \partial_x F \right|_{x_0}\delta x. \end{equation} The local Lyapunov exponent is defined by the solution to this equation: \begin{equation} \delta x(t)\approx \delta x_0 e^{\lambda t}, \end{equation} where \begin{equation}
\lambda^2=\frac{|\partial_x F|}{m}. \end{equation} We now simply replace $\lambda$ with its average value over phase space, $\bar{\lambda}$ to complete the approximation.
Using this relationship, the strong inequalities that give the conditions for low noise (Eq.(\ref{lnineq})) become \begin{equation}
\left( \frac{ 2 m \bar\lambda^2 }{\hbar} \right) \left[ \frac{1}{\bar{s}} \right] \; \ll \; k \; \ll \; \left( \frac{ 2 m \bar\lambda^2 } { \hbar} \right) \left[ \frac{\bar{s}}{8} \right].
\label{eq::strongln2} \end{equation} We see that these are very similar to the weak QCT regime defined by Eq.(\ref{weak6}).
We assume that the ``actions'' $\bar s$ and $\tilde s$ that we associate with the system are both approximately equal to the system action, and may therefore be equated. In the semiclassical regime, in which $\tilde{s} \gg 1$, we have $\tilde{s} \gg \ln(\tilde s)$, while $\ln\tilde s$ is of the same order as the factor of $8$ in Eq.(\ref{eq::strongln2}), so that the weak regime above $k_{crit}$ and the strong low-noise criteria are essentially equivalent. This is logical, as the strong QCT assumes that the trajectories explore classical structures. The caveat is that when $s$ is extremely large, being in this weak regime implies both the left-hand and the right-hand strong low-noise inequalities; that is, the conditions for this regime are {\em stronger} than the strong low-noise inequality. By comparing Eqs. (\ref{eq::strongln2}) and (\ref{weak6}) we can write down a specific condition under which a system being in the $k>k_{crit}$ weak regime implies the strong low-noise inequality. This is \begin{equation}
s \gg e^8 \approx 3\times 10^{3} . \label{eq::wtos0} \end{equation} In the opposite case, when $s$ is not much larger than unity, the strong low-noise inequality is satisfied over a range of $k$ values before $k_{crit}$ is reached, signaling the start of the weak regime, though this requires relaxing the semiclassical condition, which may affect the validity of the weak approximation.
The above result raises a curious question. The derivation of the weak QCT above $k_{crit}$ would lead us to believe that this is a sufficient condition for the emergence of classical motion in the semiclassical regime defined by Eq.(\ref{eq::wtos0}), both at the trajectory and density levels. However, the weak QCT is most easily compared to the strong inequalities that guarantee low noise (Eq.(\ref{lnineq})). The derivation of the strong inequalities implies that a second condition is required to guarantee classical behavior, this being the bound relating the noise to the size of the nonlinearity given either by Eq.(\ref{locineq2}) or Eq.(\ref{locineq3}). Either the weak QCT regime as derived is not as complete as previously assumed, or the part of the strong inequalities that bound the non-linearity is redundant in this semiclassical regime.
It turns out that the answer is the latter. That is, in the semiclassical regime defined by Eq.(\ref{eq::wtos0}), the weak QCT regime defined by Eq.(\ref{weak6}) also implies that both localization conditions (Eq.(\ref{locineq2}) and Eq.(\ref{locineq3})) are satisfied. To see this we note that it will be true if \begin{eqnarray}
\frac{2m\bar{\lambda}^2}{\hbar \ln(s)} & \gg & \left| \frac{\partial_x^2 F }{8F} \right| \frac{\bar{\lambda}}{\sqrt{2}}, \label{raw1} \end{eqnarray} and \begin{eqnarray}
\frac{2m\bar{\lambda}^2}{\hbar \ln(s)} & \gg & \left( \frac{\partial_x^2 F}{ 8 F} \right)^2 \frac{2\hbar}{m} . \label{raw2} \end{eqnarray} We now note that the quantity $\bar{\bar{s}}$, defined as \begin{equation}
\bar{\bar{s}} \equiv \frac{m\bar{\lambda} |F|}{\hbar |\partial_x^2 F|} \end{equation} is also a dimensionless action for the system in units of $\hbar$. As with Eq.(\ref{lnineq}), we have substituted the average Lyapunov exponent, $\bar{\lambda}$ into the strong inequalities. Assuming that $\bar{\bar{s}}$ is of the same order as the dimensionless action of the system, $s$, Eqs. (\ref{raw1}) and (\ref{raw2}) become \begin{eqnarray}
s & \gg & \frac{\ln(s)}{16\sqrt{2}}, \label{paw1} \end{eqnarray} and \begin{eqnarray}
s & \gg & \frac{\sqrt{\ln(s)}}{8} . \label{paw2} \end{eqnarray} These inequalities are automatically satisfied in the semiclassical regime, where $s \gg 1$. Significantly, they will be satisfied whenever the semiclassical criterion given by Eq.(\ref{eq::wtos0}) is met. The conclusion is that Eq.(\ref{eq::wtos0}) defines a semiclassical regime in which the strong QCT will be satisfied whenever the weak QCT occurs in the Eq.(\ref{weak6}) regime.
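A quick numerical check confirms that the localization conditions are comfortably satisfied at the threshold of Eq.(\ref{eq::wtos0}). The sketch below evaluates the ratios of the left and right hand sides of Eqs.(\ref{paw1}) and (\ref{paw2}) at $s=e^8$:

```python
import math

def localization_margins(s):
    """Ratios s / RHS for Eqs. (paw1) and (paw2); both should be >> 1
    in the semiclassical regime s >> 1."""
    r1 = s / (math.log(s) / (16.0 * math.sqrt(2.0)))   # Eq. (paw1)
    r2 = s / (math.sqrt(math.log(s)) / 8.0)            # Eq. (paw2)
    return r1, r2

s = math.exp(8)                      # threshold of Eq. (wtos0)
r1, r2 = localization_margins(s)
print(r1, r2)                        # both margins exceed 10^3
```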
This also constrains the time, $t_{qc}$, at which the weak QCT occurs. Since $D=\hbar^2 k$, we can write $t_{qc}\approx m\bar{\lambda}/(\hbar k)$. The strong QCT occurs in the large-$k$ region, and since $k$ is large there, the weak QCT will also occur quickly. This is not surprising, as localization at a level which allows a trajectory picture should imply that interference is rapidly eliminated and a local classical picture should emerge regardless of whether the system achieves a global steady-state.
We now turn to the question of what it means for the quantum-to-classical transition to occur in the weak sense without having occurred in the strong sense, particularly in the $k$ range we have been discussing. It is clear that this should not happen in the low noise regime. In this regime the wave-function of an observed trajectory is small compared to the available phase space, and thus fine details in the structure of the phase-space densities are visible. The trajectories are smooth, since the noise is small in comparison to the deterministic classical dynamics. It is also clear, as mentioned above, that the trajectories must obey the classical equations of motion. If this were false, they would not give the same fine structure as the classical density when their (well-localized) Wigner functions are averaged over all noisy realizations. It is similarly clear that when the low-noise inequalities are violated the weak transition should be able to occur without the strong transition, as the lack of a weak noise threshold implies. This is because the dynamics due to noise alone is the same in both quantum and classical systems. Thus if noise dominates the dynamics, then the quantum and classical densities will agree closely, even though the observed trajectories will also be noise dominated and will therefore not follow smooth motion of the classical Hamiltonian.
What is not so clear is how the weak transition occurs when noise does not swamp the deterministic dynamics, but the wave-function of an observed trajectory is sufficiently delocalized that the dynamics of its centroid remain noisy. Note that in this case the noise on the centroid is not purely a result of the noise introduced by the measurement/environment, but is due in large part to the fact that the wave-function is broad. The implication is that the observer does not know well the location of the system in phase space, and thus the centroid of the wave-function changes significantly as the observer obtains the random stream of measurement results. This is what one would expect in a weaker noise domain.
The question of how the weak QCT is satisfied while violating the strong was discussed briefly in~\cite{Greenbaum05}. We now provide more detailed results on this question, by simulating the Duffing oscillator with the same parameters as considered in~\cite{Greenbaum05}. The Hamiltonian of the Duffing oscillator is~\cite{Lin90} \begin{equation}
H = p^2/(2m) - \alpha x^2 + \beta x^4 + \Lambda x \cos(\omega t) \end{equation} where the parameters are chosen to be $(m,\alpha,\beta,\Lambda,\omega) = (1,10,0.5,10,6.07)$, and for the quantum simulation we choose $\hbar = 0.1$. Choosing the value of $\hbar$ is merely a convenient means of setting the action of the system relative to $\hbar$. Here we fix the action (equivalently the available phase-space area $A$), and choose the area that a minimum uncertainty wave-packet occupies by setting the value of $\hbar$.
In~\cite{Greenbaum05} the weak QCT is demonstrated for the momentum diffusion rate $D=0.01$. We now examine the behavior of the observed trajectories in this regime. The environment considered in~\cite{GreenbaumNew,Greenbaum05} is equivalent to a continuous measurement of the oscillator position, $x$, and the measurement strength $k = D/\hbar^2 = 1$. The equation of motion for the system density matrix under this continuous measurement is given by the stochastic master equation~\cite{JacobsSteck06} \begin{eqnarray}
d \rho & = & -(i/\hbar) [H ,\rho] dt - k [x,[x,\rho]] dt \nonumber \\
& & + \sqrt{2k}(x\rho + \rho x - 2 \langle x \rangle \rho) dW \end{eqnarray} where $dW$ is the increment of Wiener noise satisfying $(dW)^2 = dt$. We choose the initial state to be a minimum uncertainty (coherent) state with centroid $(\langle x \rangle,\langle p \rangle) = (-3,8)$, and position and momentum variances equal to $\hbar/2 = 0.05$. The accessible phase space for the classical system has position boundaries at approximately $\pm 5$, and momentum boundaries at $\pm 20$.
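The stochastic master equation above can be integrated numerically. The following is a minimal Euler--Maruyama sketch in a truncated harmonic-oscillator basis; the truncation size $N$, the basis length scale $x_0$, and the choice of coupling the drive through $x$ are assumptions made here for illustration, not prescriptions from the text. Both the Hamiltonian and measurement terms are traceless and Hermiticity-preserving, so each step conserves $\operatorname{Tr}\rho$ exactly.

```python
import numpy as np

hbar, m = 0.1, 1.0
alpha, beta, Lam, omega = 10.0, 0.5, 10.0, 6.07   # parameters from the text
k = 1.0                                            # measurement strength
N = 40                                             # basis truncation (sketch)

# Ladder operators in a harmonic basis with an arbitrary length scale x0.
x0 = 0.3                                           # hypothetical scale choice
a = np.diag(np.sqrt(np.arange(1, N)), 1)
x = x0 * (a + a.T.conj()) / np.sqrt(2)
p = (hbar / x0) * (a - a.T.conj()) / (1j * np.sqrt(2))

def hamiltonian(t):
    # Duffing Hamiltonian; the drive is taken to couple through x.
    return p @ p / (2 * m) - alpha * x @ x + beta * x @ x @ x @ x \
           + Lam * np.cos(omega * t) * x

def sme_step(rho, t, dt, rng):
    """One Euler-Maruyama step of the position-measurement SME."""
    H = hamiltonian(t)
    dW = rng.normal(0.0, np.sqrt(dt))              # Wiener increment
    xbar = np.real(np.trace(x @ rho))
    comm = x @ rho - rho @ x
    drho = (-1j / hbar) * (H @ rho - rho @ H) * dt \
           - k * (x @ comm - comm @ x) * dt \
           + np.sqrt(2 * k) * (x @ rho + rho @ x - 2 * xbar * rho) * dW
    return rho + drho

rng = np.random.default_rng(1)
rho = np.zeros((N, N), dtype=complex)
rho[0, 0] = 1.0                                    # start in the ground state
t, dt = 0.0, 1e-5
for _ in range(100):
    rho = sme_step(rho, t, dt, rng)
    t += dt
print(np.trace(rho).real)                          # ≈ 1.0: each term is traceless
```

Euler--Maruyama does not preserve positivity of $\rho$, so in practice a much smaller step or a higher-order scheme is needed for long simulations; the sketch is only meant to make the structure of the update rule explicit.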
\begin{figure}
\caption{(Color online) (a) A typical Wigner function for the Duffing oscillator when $\hbar=0.1$ and the measurement strength $k=1$. In this plot luminosity denotes the absolute value of the real part of the Wigner function (thus black corresponds to zero). (b) The associated probability density for the position of the oscillator. The wave-function is spread over a significant region of the phase-space.}
\label{fig1}
\end{figure} In Fig.~\ref{fig1} we show the Wigner function for the oscillator after a time of $t=12$ (approximately $12$ periods of the drive), along with the corresponding probability density for the position of the oscillator. The position wave-function is spread over a significant region of the available phase space, and one therefore expects the trajectory for the mean position to experience significant noise. In Fig.~\ref{fig2} we plot the mean position up to $t=12$, and indeed the effect of the noise is clearly visible. The quantum and classical phase-space densities can thus agree on intermediate scales and achieve a weak QCT, even if the observed trajectories do not follow the smooth classical dynamics.
\begin{figure}
\caption{(Color online) The mean position of the observed Duffing oscillator when $\hbar=0.1$ and $k=1$. The position uncertainty of the quantum state is manifest in the noise that is visible on this trajectory.}
\label{fig2}
\end{figure}
\section{Deriving the Strong Transition from the Weak} \label{sec::IV}
In the previous section we derived the regime of the weak QCT where the strong QCT is also satisfied (Eq.(\ref{eq::wtos0})). Here we use an alternative approach to derive a set of conditions under which the weak transition will imply the strong. To begin we note that the existence of fine structure at the scale of the phase-space area $l^2$ bounds the width of the wave-functions of the trajectories. This is because the phase space density is an average over the wave-functions of all trajectories, and this automatically precludes the phase-space density from having oscillations smaller than the width of the wave-function. Using $ (m |\partial_x F|)^{1/4} = \sqrt{m \lambda}$ as the scaling factor between position and the phase-space length $l$, this bound is \begin{equation}
V_x \leq m \lambda l^2 .
\label{Vxl} \end{equation} Thus if $l$ is small enough, then it will force the wave-function for the corresponding trajectory to be localized. This, in turn, will force it to satisfy the conditions of the strong QCT. In this case the weak transition will imply the strong. This is because all three strong inequalities, Eq.(\ref{locineq2}) (or Eq.(\ref{locineq3})), and Eq.(\ref{lnineq}) are in fact a result of conditions limiting the position variance, as shown in Ref.~\cite{Bhattacharya03}. We can therefore derive quantitative inequalities determining when the weak QCT will imply the strong, by using the strong bounds on $V_x$, then Eq.(\ref{Vxl}) to bound $l$, and finally Eq.(\ref{weak1}) to derive bounds on $k$.
There are three strong bounds on $V_x$. The bound that leads to the localization condition (Eq.(\ref{locineq2}) or (\ref{locineq3})) and the two bounds that lead respectively to the two low noise inequalities given in Eq.(\ref{lnineq}). The bound that leads to the localization inequality is~\cite{Bhattacharya03} \begin{equation}
V_x \ll \frac{2F}{\partial_x^2 F} \end{equation} Using the procedure just described, this gives the following condition on $k$: \begin{eqnarray}
k & \ll & \left( \frac{4 m\lambda^2}{\hbar} \right) \left[ \frac{\bar{\bar{s}}}{\ln[\bar{s}\tilde{s} / (2 \bar{\bar{s}})]} \right] \nonumber \\
& \approx & \left( \frac{ 4m\lambda^2}{\hbar} \right) \left[ \frac{s}{\ln(s/2)} \right] \label{eq::wtos1} . \end{eqnarray} The bound on $V_x$ that leads to the left hand side of Eq.(\ref{lnineq}) is $V_x \ll \sqrt{S/(k m)}$~\cite{Bhattacharya03} where $S$ has units of action and is given by Eq.(\ref{eq:s1}). This leads initially to the inequality \begin{equation}
k \ll \left( \frac{2 m \bar{\lambda}}{\hbar} \right) \left( \frac{S}{2 \hbar} \right)^{1/3} \left[\ln\left( \xi A \sqrt{\frac{k}{S m \bar{\lambda}^2}}\right)\right]^{-1} \end{equation} To complete the derivation we need to eliminate $k$ from the right hand side. Since we are deriving a condition for when the weak transition implies the strong, we can assume that the weak QCT takes place in the regime given by Eq.(\ref{weak6}). So as to be conservative (that is, to derive the weakest condition) we should choose the value of $k$ on the right hand side to be as large as possible. A very conservative value for $k$ is to saturate the upper bound in Eq.(\ref{weak6}), and this gives \begin{eqnarray}
k & \ll & \left( \frac{2 m \bar{\lambda}}{\hbar} \right) \left( \frac{S}{2 \hbar} \right)^{1/3} \left[\ln\left( \bar{s} \sqrt{\frac{ 2\bar{s} \tilde{s}}{\ln\tilde{s}}}\right)\right]^{-1} \nonumber \\
& \approx & \left( \frac{2 m \bar{\lambda}}{\hbar} \right) 2^{2/3} \left[ \frac{s^{1/3}}{ \ln( 2 s^4/ \ln s ) } \right] \label{eq::wtos2} \end{eqnarray} The third and final bound on $V_x$ is~\cite{Bhattacharya03} \begin{equation}
V_x^2 \ll \frac{1}{4 m k} \sqrt{\frac{m |F|^3}{8k |p \partial_x F|}} \end{equation} This results in the condition \begin{equation}
k \ll \left( \frac{2 m \bar{\lambda}}{\hbar} \right) 8^{-1/7} \left[ \frac{s^{1/7}}{ \ln( 32 s^5 [\ln s]^{-3/2} ) } \right] , \label{eq::wtos3} \end{equation} where we have assumed that $S'/\hbar \approx s$.
If we satisfy each of the three inequalities given by Eqs.(\ref{eq::wtos1}), (\ref{eq::wtos2}) and (\ref{eq::wtos3}), then the weak transition will imply the strong transition. The second and third conditions, Eqs.(\ref{eq::wtos2}) and (\ref{eq::wtos3}), are, however, more stringent than the weak regime we have invoked. In order to be within the localized regime and satisfy these conditions, we must at least have \begin{equation}
\frac{1}{\ln s} \ll \left[ \frac{s^{1/7}}{ \ln( 32 s^5 [\ln s]^{-3/2} ) } \right] . \label{eq::wtos4} \end{equation} When $s\geq 10$, the denominator on the right hand side is well approximated by $5\ln s$, so the condition becomes $s^{1/7}\gg 5$, and we have \begin{equation}
s \gg 5^7 \approx 10^5 \end{equation} Comparing this with the equivalent condition in Section~\ref{sec::III}, Eq.(\ref{eq::wtos0}), we see that while the two results are similar, the new threshold is well above that of Section~\ref{sec::III}. Thus the above analysis, while providing an alternative approach, reinforces the interpretation of that section. We may therefore conclude that whenever the more intuitive criterion derived in this section is satisfied, the criterion given by Eq.(\ref{eq::wtos0}), derived indirectly by comparing the weak and strong QCT, is satisfied as well.
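The approximation used in the last step is easy to verify numerically. The sketch below compares the exact denominator $\ln(32 s^5 [\ln s]^{-3/2})$ of Eq.(\ref{eq::wtos4}) with $5\ln s$; the ratio is already close to unity at $s=10$ and approaches it as $s$ grows.

```python
import math

def exact_denominator(s):
    """Denominator appearing on the RHS of Eq. (wtos4)."""
    return math.log(32.0 * s**5 * math.log(s)**-1.5)

for s in (10.0, 1e3, 1e5):
    print(s, exact_denominator(s) / (5.0 * math.log(s)))   # ratio -> 1

# With this approximation, 1/ln(s) << s**(1/7) / (5*ln(s)) reduces to
# s**(1/7) >> 5, i.e. s >> 5**7.
print(5**7)   # 78125, of order 10^5
```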
\section{Conclusion} \label{sec::V}
There are two ways to ask if (nonlinear) classical dynamics has emerged from the evolution of a quantum system. One is to observe the system and to ask when the motion of a localized centroid is indistinguishable from the classical trajectories. When this is true we refer to the system as having made the transition in the strong sense. The other method is to obtain only the phase-space probability densities for the classical and quantum motion, and to ask when these densities become indistinguishable. When this is true we say that the system has made the transition in the weak sense. Two distinct methods have been used to determine how an open quantum system will make the transition to classical dynamics.
Here we have shown that in the semiclassical regime (the regime in which the weak inequalities are valid), these two levels of description may be compared. Specifically, when the action of the system is much larger than $\hbar$, the inequalities implying a rapid weak QCT, which takes place before a classical steady-state, are stronger than those implying the strong QCT. We have also pointed out that when the action is much larger than $\hbar$, and the environmental noise is very small, both this weak regime and the strong transition are essentially equivalent, regardless of the exact behavior of the respective inequalities.
From the above analysis we have also shown that in the semiclassical regime the strong inequalities may be simplified, so that in both the weak and strong cases, the conditions for the emergence of classical motion involve simple inequalities. The inequalities accompanying the strong QCT are \begin{equation} \frac{1}{s} \; \ll \; \frac{D}{2 \hbar m \bar\lambda^2} \; \ll \; \frac{s}{8} \end{equation} while, for the weak regime in which the QCT precedes the termination of classical structure growth, we get \begin{equation}
\frac{1}{\ln s} \; \lesssim \; \frac{D}{2 \hbar m \bar{\lambda}^2 } \; \ll \; \frac{s}{\ln s}. \label{weak6new} \end{equation} Here $D$ is the momentum diffusion coefficient due to the measurement or environment, $\bar{\lambda}$ is the phase-space-averaged Lyapunov exponent of the system, $s$ is the action of the system in units of $\hbar$, and $m$ is the mass.
We have also derived a very simple sufficient condition for when this weak regime implies the strong transition, and this is $S/\hbar \gg 10^3$, where $S$ is the action of the system. When this condition is not met, the weak transition occurs without the smooth trajectories of classical mechanics. However, in the semiclassical limit this weak regime is entirely sufficient to determine the emergence of classical dynamics in a quantum system.
\end{document}
\begin{document}
\title{Greenberg's Conjecture and Cyclotomic Towers} \author{David C. Marshall} \address{Department of Mathematics, University of Texas at Austin, Austin, Texas 78712, USA} \email{marshall@math.utexas.edu} \subjclass[2000]{Primary 11R23; Secondary 11R18, 11R32} \date{June 12, 2003}
\begin{abstract} We describe Greenberg's pseudo-null conjecture, and prove a result describing conditions under which the pseudo-null conjecture for a number field $K$ implies the conjecture for finite extensions of $K$. We then apply the result to the cyclotomic $\mathbb{Z}_p$-tower above a cyclotomic field of prime roots of unity, verifying the conjecture for a large class of cyclotomic fields. \end{abstract}
\maketitle
\section{Greenberg's conjecture} In the late 1950's Iwasawa introduced a powerful technique for studying class groups and unit groups of number fields. Motivated by the theory of curves over finite fields, Iwasawa's theory of $\mathbb{Z}_p$-extensions has since become a widely used tool in algebraic number theory, Galois theory, and arithmetic geometry. We describe in this section a conjecture of Greenberg concerning the structure of a classical Iwasawa module, and we mention a Galois theoretic consequence concerning free pro-$p$-extensions of number fields.
Let $K$ be an algebraic number field and $p$ an odd prime. By a \textit{multiple $\mathbb{Z}_p$-extension} $K_\infty/K$ we mean a Galois extension with Galois group $\Gamma\simeq \mathbb{Z}_p^d$ for some positive integer $d$. In what follows we will be particularly interested in two such extensions of $K$ for which we reserve the following notation: \begin{itemize} \item $K^{cyc}/K$ denotes the \textit{cyclotomic} $\mathbb{Z}_p$-extension of $K$. \item $\widetilde{K}/K$ denotes the compositum of all $\mathbb{Z}_p$-extensions of $K$. \end{itemize} Let $F$ be a finite extension of $K$ contained in $K_\infty$, and denote by $A(F)$ the Sylow $p$-subgroup of the ideal class group of $F$. The Galois group of $F/K$ acts on $A(F)$ in the natural way, making $A(F)$ into a $\mathbb{Z}_p[\text{Gal}(F/K)]$-module. As $F$ varies over all finite subextensions the $A(F)$ form an inverse system (under norm maps) and we denote by $A$ the inverse limit. The group $A$ then carries a natural structure as a module over the Iwasawa algebra \[\mathbb{Z}_p[[\Gamma]]:=\varprojlim_F\mathbb{Z}_p[\text{Gal}(F/K)].\]
It is common to study $A$ by identifying the $A(F)$ with Galois groups as follows. By class field theory, the group $A(F)$ is isomorphic to the Galois group, $X_F$, of the maximal abelian unramified $p$-extension of $F$ (the \textit{$p$-Hilbert class field of $F$}). The isomorphism respects the Galois module structure, the action of $\text{Gal}(F/K)$ on $X_F$ being inner automorphism. The $X_F$ form an inverse system (the maps being given by restriction of automorphisms) and the limit $X$ is the Galois group of the maximal abelian unramified pro-$p$-extension of $K_\infty$. So $X\simeq A$.
The Iwasawa algebra $\mathbb{Z}_p[[\Gamma]]$ is non-canonically isomorphic to the power series ring \[\Lambda:=\mathbb{Z}_p[[T_1, T_2, \dots, T_d]],\] where topological generators $\gamma_i$ of $\Gamma$ are sent to $1+T_i$. So the $\mathbb{Z}_p[[\Gamma]]$-module structure of $A$ is studied via the $\Lambda$-module structure of $X$ (noting that $T_ix=x^{\gamma_i-1}$).
For $K_\infty/K$ any multiple $\mathbb{Z}_p$-extension Greenberg (\cite{Green:73}, Theorem~1) has shown $X$ to be a finitely generated torsion $\Lambda$-module. In particular, the annihilator of $X$, $\operatorname{Ann}_{\Lambda}(X)$, is non-trivial. Traditionally, annihilators of classical Iwasawa modules have been of much interest. The Main conjecture of Iwasawa theory gives the factors of the annihilator of $X$ for the cyclotomic $\mathbb{Z}_p$-extension of a number field $K$ as essentially the $p$-adic $L$-functions attached to $K$. There is also a two variable Main conjecture for certain $\mathbb{Z}_p^2$-extensions arising from the theory of elliptic curves.
Greenberg (\cite{Green:99}, Conjecture~3.4) has conjectured that for the cyclotomic $\mathbb{Z}_p$-extension $K^{cyc}/K$ of a totally real field $K$, the module $X$ is finite. If a totally real field $K$ satisfies Leopoldt's conjecture the extensions $K^{cyc}$ and $\widetilde{K}$ coincide (i.e. $K$ has only one $\mathbb{Z}_p$-extension). Furthermore, when $\Lambda =\mathbb{Z}_p[[T]]$ it can be shown that a module being finite is equivalent to having an annihilator of height at least 2. With this in mind the above conjecture is a special case of the more general conjecture (\cite{Green:99}, Conjecture~3.5):
\begin{con} Let $K$ be any number field and $\widetilde{K}$ the compositum of all $\mathbb{Z}_p$-extensions of $K$. Then $\operatorname{Ann}_{\Lambda}(X)$ has height at least 2. \end{con}
A $\Lambda$-module whose annihilator has height at least 2 is said to be \textit{pseudo-null}, and we will refer to Conjecture~1 above as \textit{Greenberg's conjecture}, or just the \textit{pseudo-null conjecture}.
The point of this note is two-fold. First, we prove a ``going-up'' theorem for the pseudo-null conjecture. Namely, if $K$ is a number field, and $F$ is a finite extension of $K$ in $\widetilde{K}$, we give conditions under which Greenberg's conjecture for $K$ implies Greenberg's conjecture for $F$ (Theorem~6). The result is an exercise in utilizing several equivalent formulations of the conjecture. Versions of these formulations have appeared in Lannuzel and Nguyen-Quang-Do (\cite{Lan:00}, Theorem~4.4) as well as work of McCallum~\cite{McCal:00} and this author~\cite{Ma:00}. Secondly, as an application of the result, we consider the example $K=\mathbb{Q}(\zeta_p)$ and $F=\mathbb{Q}(\zeta_{p^n})$. We verify the conjecture for a certain class of such $K$'s, implying the conjecture for each field in the corresponding $\mathbb{Z}_p$-tower.
The key argument in both results is reduced to a capitulation problem, namely the need for a set of ideals, or ideal classes, to become principal when extended to an appropriate field. For the ``going-up'' result, the resolution of this problem is provided by an equivalent form of the conjecture, stating that all ideal classes capitulate in $\widetilde{K}$. In verifying the conjecture for $\mathbb{Q}(\zeta_p)$ capitulation is obtained by more direct means. We state our second result here.
Let $K=\mathbb{Q}(\zeta_p)$, $E=\mathcal{O}_K^{\times}$ and $U=\mathcal{O}_{K_\pi}^{\times}$, where $\pi$ is the unique prime of $K$ above $p$. Denote by $\overline{E}$ the closure of $E$ in $U$. We denote by $\lambda_p$ the Iwasawa lambda invariant of the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}(\zeta_p)$. Let $v_p$ denote the $p$-adic valuation. In Section~4 we prove \begin{thm} Suppose $K=\mathbb{Q}(\zeta_p)$ satisfies the following conditions: \begin{enumerate} \item Vandiver's conjecture
\item $\lambda_p=1$.
\item $v_p(|(U/\overline{E})[p^\infty]|)\leq v_p(|A(K)|)$. \end{enumerate} Then for all $n\geq 1$ the pseudo-null conjecture holds for $\mathbb{Q}(\zeta_{p^n})$. \end{thm}
We mention here one Galois theoretic consequence of the pseudo-null conjecture for cyclotomic fields. The existence of free pro-$p$-extensions (Galois extensions with Galois group a free pro-$p$-group) has been the subject of much study. See for example the list of known results in~\cite{Yama:94}. Let $K=\mathbb{Q}(\zeta_{p^n})$ for some $n>0$, and let $\Omega_K$ denote the maximal pro-$p$ extension of $K$ which is unramified at all primes not dividing $p$. Let $\mathcal{G}_K$ denote the Galois group.
Since free pro-$p$-extensions are unramified outside $p$, such extensions of $K$ are contained in $\Omega_K$. We will see that $\mathcal{G}_K$ is a free pro-$p$ group exactly when $p$ is a regular prime (since the number of relations defining $\mathcal{G}_K$ is equal to the $p$-rank of the class group of $K$). When $p$ is an irregular prime the group $\mathcal{G}_K$ is not free, but we may look for free pro-$p$ quotients. Let $r_2$ denote the number of complex places of $K$. Then Leopoldt's conjecture predicts $r_2+1$ independent $\mathbb{Z}_p$-extensions of $K$, and so the maximal rank of a free pro-$p$-extension of $K$ is bounded above by $r_2+1$. The following is proved in~\cite{Lan:00}, as well as~\cite{McCal:00}: \begin{thm} Suppose that $K=\mathbb{Q}(\zeta_{p^n})$ satisfies Greenberg's conjecture. Then $\mathcal{G}_K$ has a free pro-$p$-quotient of rank $r_2+1$ if and only if $p$ is regular. \end{thm}
We give here a brief outline of the paper. In Section 2, we introduce several auxiliary $\Lambda$-modules and Galois groups needed for the later study. Theorem~3 and Lemma~1 are the key results of this section, implying a sufficient condition for a standard Iwasawa module to be torsion free (Corollary~1). In Section 3 we recall and provide several equivalent formulations of Greenberg's pseudo-null conjecture, and we state and prove one of our main results (the ``going-up'' theorem). Finally, in Section 4 we turn to the example furnished by cyclotomic fields, proving Theorem~1 above.
\textit{Acknowledgements} This work is an outgrowth of the author's Ph.D. thesis, and he would like to thank his advisor Bill McCallum, as well as Ralph Greenberg and Manfred Kolster, for useful conversations and comments. This work was partially supported by NSF VIGRE grant 9977116.
\section{Auxiliary modules} For a number field $K$ and a prime number $p$, we call a field extension of $K$ \textit{$p$-ramified} if it is unramified at all primes of $K$ not dividing $p$. We fix the following notation:
\noindent The fields:
\begin{tabular}{lll} $\Omega_K$ & & the maximal pro-$p$, $p$-ramified extension of $K$ \\ $\widetilde{K}$ & & the compositum of all $\mathbb{Z}_p$-extensions of $K$ \\ $L_\infty$ & & the maximal abelian unramified pro-$p$-extension of $\widetilde{K}$ \\ $M_\infty$ & & the maximal abelian $p$-ramified pro-$p$-extension of $\widetilde{K}$ \\ $N_\infty$ & & the extension of $\widetilde{K}$ generated by $p$-power roots of $p$-units of $\widetilde{K}$ \end{tabular}
\noindent The Galois groups:
\begin{tabular}{lll} $\mathcal{G}_K$ & & the Galois group of $\Omega_K/K$ \\ $\Gamma$ & & the Galois group of $\widetilde{K}/K$ \\ $X$ & & the Galois group of $L_\infty/\widetilde{K}$ \\ $Y$ & & the Galois group of $M_\infty/\widetilde{K}$ \\ $Y'$ & & the Galois group of $N_\infty/\widetilde{K}$ \end{tabular}
The Galois groups $Y$ and $Y'$ carry an action of $\Gamma$ via conjugation, just as $X$ does, making them into $\Lambda$-modules. We shall see that for certain base fields $K$, the pseudo-null conjecture may be formulated in terms of the $\Lambda$-module structure of $Y$ (in particular, that $Y$ is $\Lambda$-torsion free). The module $Y$ is known to be finitely generated and, for $K/\mathbb{Q}$ abelian, to have $\Lambda$-rank equal to $r_2$, where $r_2$ denotes the number of complex places of $K$ (\cite{Green:78}). For a $\Lambda$-module $M$
we write $\operatorname{Tor}_\Lambda(M)$ for the $\Lambda$-torsion submodule. The following result is due to McCallum. \begin{thm}[\cite{McCal:00}, Theorem 3] Suppose there is only one prime of $K$ above $p$, and $\widetilde{K}$ contains all $p$-power roots of unity. Then $\operatorname{Tor}_\Lambda(Y')=0$. \end{thm}
\textit{Remark 1}: The proof of this result involves a detailed analysis of the filtration \[E_F^u \subset E_F^n \subset E_F^{\text{loc}} \subset E_F,\] where $E_F$ denotes the units $\mathcal{O}_F[1/p]^\times$ of a finite extension $F$ of $K$ in $\widetilde{K}$, and the superscripts denote certain classes of universal norms (see Section~4 of~\cite{McCal:00} for the precise definitions). The torsion submodule of $Y'$ is contained in the kernel of a surjective map of Galois groups. The Pontryagin dual of this kernel is $\varinjlim_F(E_F/E_F^u) \otimes \mathbb{Q}_p/\mathbb{Z}_p$, and is shown to be zero by considering each graded factor from the filtration.
\textit{Remark 2}: In particular, the result tells us $\operatorname{Tor}_{\Lambda}(Y)$ fixes the field $N_\infty$. This observation, combined with Lemma~1 below, gives our approach to verifying the pseudo-null conjecture.
The group $\mathcal{G}_K$ has a minimal free presentation \[1\longrightarrow R\longrightarrow F_g\longrightarrow \mathcal{G}_K \longrightarrow 1,\] where $F_g$ is the free pro-$p$-group on $g$ generators and $R$ is the normal closure of a finitely generated subgroup (the group of relations for $\mathcal{G}_K$). Denote by $s$ the minimal number of (topological) generators of $R$. The numbers $g$ and $s$ are equal to the $\mathbb{F}_p$-dimensions of $H^i(\mathcal{G}_K, \mathbb{Z}/p\mathbb{Z})$, $i=1, 2$ respectively (see Chapter~4 of~\cite{Serre:97}).
Let $\mathcal{G}_K^{ab}$ denote the maximal abelian quotient of $\mathcal{G}_K$, and $M_K$ the maximal abelian $p$-ramified pro-$p$-extension of $K$ (so $\mathcal{G}_K^{ab}=\text{Gal}(M_K/K)$). The field $M_K$ is an abelian, $p$-ramified extension of $\widetilde{K}$ (the Galois group of $M_K/\widetilde{K}$ is just the torsion subgroup of $\mathcal{G}_K^{ab}$), and so is contained in the field $M_\infty$. Hence we have a natural map from $Y$ to $\mathcal{G}_K^{ab}$ given by restriction of automorphisms. We refer the reader to \cite{McCal:00} for a proof of the following.
\begin{lem}[\cite{McCal:00}, Lemma 24] Suppose $K$ satisfies Leopoldt's conjecture. If $\mathcal{G}_K$ is a one-relator group (i.e. $s=1$), then the map \[\operatorname{Tor}_\Lambda(Y)\longrightarrow \mathcal{G}_K^{ab}\] is the zero map if and only if $\operatorname{Tor}_\Lambda(Y)=0$. \end{lem}
The following is an immediate consequence of Theorem~3 and Lemma~1: \begin{cor} If $K$ is a number field satisfying the hypotheses of Theorem~3 and Lemma~1, then \begin{equation} M_K\subset N_\infty \,\, \text{implies}\,\, \operatorname{Tor}_\Lambda(Y)=0. \end{equation} \end{cor}
\section{Equivalent formulations} We have introduced the natural Iwasawa modules $X$ and $Y$ in the last section. For a finite extension $F$ of $K$ in $\widetilde{K}$, let $X_F$ denote the Galois group of the $p$-Hilbert class field of $F$. The Galois action on each of the $X_F$ is compatible with extension of ideal classes, so we may form the $\Lambda$-module $\varinjlim_FX_F$ as well. Recall the groups $\operatorname{Ext}^i_\Lambda(\cdot, \Lambda)$ are the right derived functors of $\operatorname{Hom}_\Lambda(\cdot, \Lambda)$.
\begin{thm} Let $p$ be an odd prime and let $K$ be a number field with a unique prime above $p$. Then $\operatorname{Ext}^1_\Lambda(X, \Lambda)$ is the Pontryagin dual of $\varinjlim_FX_F$, where the $F$ vary over the finite extensions of $K$ in $\widetilde{K}$. \end{thm}
\textbf{Proof}: Let $\mathfrak{m}$ denote the unique maximal ideal of $\Lambda=\mathbb{Z}_p[[T_1, \dots, T_r]]$, and define \[\omega_n(T_i)=(1+T_i)^{p^n}-1.\] The result is obtained by establishing the isomorphism \begin{equation} H_{\mathfrak{m}}^r(X)\simeq \varinjlim_FX_F, \end{equation} where $H_{\mathfrak{m}}^i(X)$ denotes Grothendieck's local cohomology relative to the $\mathfrak{m}$-primary sequences \[\textbf{x}_n=(p^n, \omega_n(T_1), \dots, \omega_n(T_r)).\] The desired result is then a consequence of (a version of) Grothendieck's local duality; namely \[\operatorname{Ext}_\Lambda^{N-i}(X, \Lambda)\simeq \operatorname{Hom}_{\mathbb{Z}_p}(H_{\mathfrak{m}}^i(X), \mathbb{Q}/\mathbb{Z}),\] where $N$ denotes the length of the $\mathfrak{m}$-primary sequence. A good reference for this material is Chapter~3 of~\cite{Bruns:93}.
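Explicitly, the sequence $\textbf{x}_n$ has length $N=r+1$, where $r$ is the number of variables of $\Lambda$, so taking $i=r$ in local duality and combining with (2) gives \[\operatorname{Ext}^1_\Lambda(X, \Lambda)=\operatorname{Ext}^{(r+1)-r}_\Lambda(X, \Lambda)\simeq \operatorname{Hom}_{\mathbb{Z}_p}(H_{\mathfrak{m}}^r(X), \mathbb{Q}/\mathbb{Z})\simeq \operatorname{Hom}_{\mathbb{Z}_p}(\varinjlim_FX_F, \mathbb{Q}/\mathbb{Z}),\] which is the assertion of the theorem.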
The details establishing (2) can be found in Theorem~8 of~\cite{McCal:00}, where McCallum proves a similar result for the Galois group $X'$ of the maximal abelian unramified pro-$p$-extension of $\widetilde{K}$ in which all primes dividing $p$ are completely decomposed. The proof translates easily to this case, simply replacing the decomposition group with inertia. $\Box$
Let $\mu_n$ denote the group of $n$-th roots of unity. As above, we let $X_F'$ denote the Galois group of the maximal abelian unramified extension of $F$ in which all primes dividing $p$ are completely decomposed. We write $X'$ for $X_{\widetilde{K}}'$.
\begin{thm} Let $p>5$ be a prime and suppose $\mu_p$ is in $K$. If $K$ has a unique prime ideal $\wp$ dividing $p$, then the following are equivalent:
(a) $X$ is pseudo-null
(b) $X'$ is pseudo-null
(c) $\operatorname{Tor}_\Lambda(Y)=0$
(d) $\varinjlim_FX_F'=0$
(e) $\varinjlim_FX_F=0$,
\noindent where the fields $F$ vary over all finite extensions of $K$ in $\widetilde{K}$. \end{thm}
\textbf{Proof}: $(a)\Leftrightarrow (b)$. Recall $\Gamma =\text{Gal}(\widetilde{K}/K)$. We let $\Gamma_\wp$ denote the decomposition group of $\wp$ in $\Gamma$, and let $\Lambda_\wp=\mathbb{Z}_p[[\Gamma/\Gamma_\wp]]$. There is a natural surjection $X\rightarrow X'$ whose kernel is generated as a $\mathbb{Z}_p$-module by the Frobenius automorphisms corresponding to the primes above $p$, and therefore is finitely generated as a module over $\Lambda_\wp$. As a $\Lambda$-module, the annihilator of $\Lambda_\wp$ has height equal to the $\mathbb{Z}_p$-rank of $\Gamma_\wp$ (this is just the augmentation ideal in $\mathbb{Z}_p[[\Gamma_\wp]]$). Since there is only one prime of $K$ above $p$, its decomposition group has finite index in $\Gamma$, and therefore our assumptions make $\Lambda_\wp$ pseudo-null. Hence the kernel of the surjection $X\rightarrow X'$ is pseudo-null, and $X$ and $X'$ are pseudo-isomorphic.
$(a)\Leftrightarrow (c)$. This follows from a duality due to Jannsen (\cite{Jan:89}, Theorem~5.4) relating the $\Lambda$-modules $X'$ and $Y$, together with a structure theorem for $Y$ due to Nguyen-Quang-Do (Corollary~14 of~\cite{McCal:00} or Theorem~4.4 of~\cite{Lan:00}).
$(c)\Leftrightarrow (d)$. In proving the results cited in the previous case, one shows, in particular, that \[\operatorname{Tor}_\Lambda(Y)\simeq \operatorname{Ext}^1_\Lambda(X', \Lambda)\] (\cite{McCal:00}, Theorem~9). But $\operatorname{Ext}^1_\Lambda(X', \Lambda)$ is known to be the Pontryagin dual of $\varinjlim_FX_F'$ (\cite{McCal:00}, Theorem~8). The result then follows.
$(c)\Leftrightarrow (e)$. Grothendieck's local duality can be used to show that a torsion $\Lambda$-module is pseudo-null if and only if $\operatorname{Ext}_\Lambda^1$ vanishes (\cite{McCal:00}, Lemma~6). This implies, in particular, that $\operatorname{Ext}^1_\Lambda(X, \Lambda)$ and $\operatorname{Ext}^1_\Lambda(X', \Lambda)$ are isomorphic, yielding \[\operatorname{Tor}_\Lambda(Y)\simeq \operatorname{Ext}^1_\Lambda(X, \Lambda)\] as well. Theorem~4 then finishes the proof. $\Box$
\textit{Remark}: Various forms of these equivalences have certainly appeared elsewhere. In~\cite{Lan:00}, Lannuzel and Nguyen-Quang-Do prove the equivalence of (a), (c), and (e) under slightly different hypotheses. Namely, no restriction is made on the number of primes of $K$ dividing $p$, but rather it is assumed that all finite extensions of $K$ in $\widetilde{K}$ satisfy Leopoldt's conjecture. Formulation (c) has been used by McCallum~\cite{McCal:00} and this author~\cite{Ma:00} to verify Greenberg's conjecture for certain classes of cyclotomic fields.
The following theorem provides sufficient conditions for when the pseudo-null conjecture for a number field $K$ implies the conjecture for a finite extension of $K$ in $\widetilde{K}$. We apply this to the cyclotomic tower in Section~4.
\begin{thm} Let $p\geq 5$ be a prime and suppose $\mu_p$ is contained in $K$. Suppose $K$ has a unique prime $\wp$ dividing $p$. Then, if $F\subset \widetilde{K}$ is a finite extension of $K$ satisfying \begin{enumerate} \item $\wp$ is non-split in $F/K$
\item $\dim_{\mathbb{F}_p}H^2(\mathcal{G}_F, \mathbb{Z}/p\mathbb{Z})\leq 1$
\item Leopoldt's conjecture, \end{enumerate} then Greenberg's conjecture for $K$ implies Greenberg's conjecture for $F$. \end{thm}
\textbf{Proof}: Let $K$ and $F$ be number fields satisfying the above hypotheses, and assume the pseudo-null conjecture holds for $K$. We apply the notation introduced in Section 2 to the field $F$ (so we have $\Omega_F$, $\mathcal{G}_F$, $M_F$, etc.) If the $\mathbb{F}_p$-dimension of $H^2(\mathcal{G}_F, \mathbb{Z}/p\mathbb{Z})$ is 0, then $\mathcal{G}_F$ is a free pro-$p$-group. A structure theorem for $Y$ due to Nguyen Quang Do (\cite{NQD:84}, Proposition 1.7) then implies $\operatorname{Tor}_\Lambda(Y)=0$. Hence by formulation (c) of Theorem~5 Greenberg's conjecture holds for $F$.
If the $\mathbb{F}_p$-dimension of $H^2(\mathcal{G}_F, \mathbb{Z}/p\mathbb{Z})$ is 1, then such an $F$ satisfies the hypotheses of Theorem~3 and Lemma~1, and so Corollary~1 applies. Namely, Greenberg's pseudo-null conjecture will hold for $F$ provided $M_F\subset N_\infty$, and hence it will suffice to show the extension $M_F/\widetilde{F}$ is generated by $p$-power roots of $p$-units of $\widetilde{F}$.
We consider the field $F^{cyc} = FK^{cyc}$, the cyclotomic $\mathbb{Z}_p$-extension of $F$. By assumption, this field contains all $p$-power roots of unity. Recall the group $\mathcal{G}_F^{ab}=\text{Gal}(M_F/F)$. The subgroup $\text{Gal}(M_F/F^{cyc})$ has the same torsion subgroup as $\mathcal{G}_F^{ab}$ (namely $\text{Gal}(M_F/\widetilde{F})$) and $\mathbb{Z}_p$-rank one less. In particular, we have a non-canonical isomorphism \[\text{Gal}(M_F/F^{cyc})\simeq \text{Gal}(\widetilde{F}/F^{cyc}) \times \text{Gal}(M_F/\widetilde{F}).\] We let $L$ denote the fixed field of the first factor (so $M_F=\widetilde{F} L$.)
The Galois group $\text{Gal}(L/F^{cyc})$ is isomorphic to the torsion subgroup of $\mathcal{G}_F^{ab}$, and hence is a finite $p$-group. Since $F^{cyc}$ contains all $p$-power roots of unity, the extension $L/F^{cyc}$ is just a Kummer extension, generated by $p$-power roots of elements of $F^{cyc}$, \[L=F^{cyc}(x_1^{1/p^{m_1}}, x_2^{1/p^{m_2}}, \dots, x_n^{1/p^{m_n}}).\] Further, the ideals $(x_i)$ are $p^{m_i}$-th powers of ideals of $F^{cyc}$, say $(x_i)=\mathfrak{J}_i^{p^{m_i}}$.
The extension $M_F/\widetilde{F}$ is also generated by the $x_i^{1/p^{m_i}}$, and the ideals $(x_i)$ are the $p^{m_i}$-th powers of the ideals $\mathfrak{J}_i$ extended to $\widetilde{F}$. But here is the key: the ideal classes $[\mathfrak{J}_i]$ become \textit{principal classes} when extended to $\widetilde{F}$. This follows from the fact that $F^{cyc} \subset \widetilde{K}$ and, having assumed the pseudo-null conjecture holds for $K$ (using formulation (e) of Theorem~5), the fact that all ideal classes become principal in $\widetilde{K}$.
For a generator $x_i^{1/p^{m_i}}$ of $M_F/\widetilde{F}$ we now know the ideal $(x_i)$ is the $p^{m_i}$-th power of a principal ideal, say \[(x_i)=(y_i)^{p^{m_i}}.\] The elements $x_i$ and $y_i^{p^{m_i}}$ must differ by a unit, say $x_i=uy_i^{p^{m_i}}$. But clearly, an extension generated by a $p^{m_i}$-th root of $x_i$ is also generated by a $p^{m_i}$-th root of $x_i/(y_i^{p^{m_i}})=u$, and so the extension $M_F/\widetilde{F}$ is generated by $p$-power roots of units of $\widetilde{F}$. This implies $M_F\subset N_\infty$ which, by Corollary~1 and Theorem~5, implies Greenberg's conjecture for $F$. $\Box$
\section{Cyclotomic Fields} We fix $p$ a prime number and consider more closely the case of the cyclotomic fields $K=\mathbb{Q}(\zeta_{p^n})$. Recall the group $\mathcal{G}_K$ has a minimal presentation as a pro-$p$-group with $g$ generators and $s$ relations, where $g$ and $s$ are equal to the $\mathbb{F}_p$-dimensions of $H^1(\mathcal{G}_K, \mathbb{Z}/p\mathbb{Z})$ and $H^2(\mathcal{G}_K, \mathbb{Z}/p\mathbb{Z})$ respectively. \begin{lem} Let $p$ be a prime and let $K=\mathbb{Q}(\zeta_{p^n})$ for some natural number $n$. Let $\alpha$ denote the $\mathbb{Z}/p\mathbb{Z}$-rank of the $p$-class group of $K$. Then \begin{align*} g & = \frac{p^n - p^{n-1} +2}{2} +\alpha \\ s & = \alpha . \end{align*} \end{lem}
\textbf{Proof}: These computations are not new, and we give here just a sketch. Let $\Omega_K'$ be the maximal $p$-ramified extension of $K$ with Galois group $\mathcal{G}_K'$. Since $K$ contains the group $\mu_p$, and $\mathcal{G}_K$ is the maximal pro-$p$ quotient of $\mathcal{G}_K'$, we have \[H^i(\mathcal{G}_K, \mathbb{Z}/p\mathbb{Z})\simeq H^i(\mathcal{G}_K', \mu_p).\] The $\mathbb{Z}/p\mathbb{Z}$-dimensions of the latter groups can be obtained by considering the sequence \[1\longrightarrow \mu_p \longrightarrow \mathcal{O}_{\Omega_K'}[1/p]^\times \stackrel{p}{\longrightarrow} \mathcal{O}_{\Omega_K'}[1/p]^\times \longrightarrow 1.\] The $p$-power map on $\mathcal{O}_{\Omega_K'}[1/p]^\times$ is surjective by the maximality of $\Omega_K'$ over $K$ (since $p$-th roots of $p$-units generate $p$-ramified extensions). Taking cohomology of the sequence with respect to the Galois group $\mathcal{G}_K'$ yields a long exact sequence which may be broken into the following pair of short exact sequences. \begin{equation*} 0\rightarrow \frac{\mathcal{O}_K[1/p]^\times}{(\mathcal{O}_K[1/p]^\times)^p} \rightarrow H^1(\mathcal{G}_K', \mu_p) \rightarrow C(K)[p]\rightarrow 0 \end{equation*}
\begin{equation*} 0\rightarrow \frac{C(K)}{pC(K)} \rightarrow H^2(\mathcal{G}_K', \mu_p)\rightarrow H^2(\mathcal{G}_K', \mathcal{O}_{\Omega_K'}[1/p]^\times)[p]\rightarrow 0, \end{equation*} where $C(K)$ denotes the ideal class group of $K$. The group $H^2(\mathcal{G}_K', \mathcal{O}_{\Omega_K'}[1/p]^\times)$ injects into the Brauer group $B(K)$, and can be shown to be 0 by considering its behavior in the exact sequence \begin{equation*} 0\rightarrow B(K)\rightarrow \oplus_vB(K_v)\stackrel{\sum \mathrm{inv}}{\longrightarrow} \mathbb{Q}/\mathbb{Z} \rightarrow 0. \end{equation*} A simple dimension count then gives \begin{align*} g & = r_2+1+\alpha \\ s & = \alpha \end{align*} where $r_2=(p^n -p^{n-1})/2$, as desired. $\Box$
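The dimension count in Lemma~2 is easy to check numerically. The following sketch uses $r_2=\varphi(p^n)/2=(p^n-p^{n-1})/2$ for $\mathbb{Q}(\zeta_{p^n})$; the sample inputs ($p=37$, $\alpha=1$) are illustrative.

```python
# Generator/relation count for G_K, K = Q(zeta_{p^n}), from Lemma 2:
#   g = r_2 + 1 + alpha,  s = alpha,
# where r_2 = phi(p^n)/2 is the number of complex places of Q(zeta_{p^n})
# and alpha is the p-rank of the p-class group of K.

def generator_relation_count(p, n, alpha):
    r2 = (p**n - p**(n - 1)) // 2   # phi(p^n)/2, complex places
    g = r2 + 1 + alpha              # generators of G_K
    s = alpha                       # relations of G_K
    return g, s

# p = 37 is irregular with cyclic p-class group of p-rank 1
print(generator_relation_count(37, 1, 1))   # (20, 1)
```

For a regular prime the count returns $s=0$, matching the remark that $\mathcal{G}_K$ is then free.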
If $p$ is a regular prime, $\alpha=0$ for $\mathbb{Q}(\zeta_{p^n})$, $n\geq 0$. Hence $s=0$, implying $\operatorname{Tor}_\Lambda(Y)=0$, establishing Greenberg's conjecture for each field in the cyclotomic tower.
The following corollary is an immediate consequence of Theorem~6 and Lemma~2. \begin{cor} Let $p$ be an irregular prime. Let $n>0$ be such that $\mathbb{Q}(\zeta_{p^n})$ has a cyclic $p$-class group. Then Greenberg's conjecture for $\mathbb{Q}(\zeta_p)$ implies Greenberg's conjecture for $\mathbb{Q}(\zeta_{p^n})$. \end{cor}
\textbf{Proof}: In the notation of Theorem~6, with $K$ as above, let $F=\mathbb{Q}(\zeta_{p^n})$ for some positive integer $n$ satisfying the hypothesis. The field $K$ has a unique prime $\pi$ above $p$, and $\pi$ is totally ramified in $F/K$, and hence non-split. The dimension of $H^2(\mathcal{G}_F, \mathbb{Z}/p\mathbb{Z})$ is less than or equal to 1 by our assumption of cyclic $p$-class groups. Since $F/\mathbb{Q}$ is abelian, implying Leopoldt's conjecture for $F$, the hypotheses of Theorem~6 are satisfied, as desired. $\Box$
Finally, we prove Theorem~1 by providing a class of cyclotomic fields $\mathbb{Q}(\zeta_p)$, satisfying the hypotheses of Corollary~2, for which the pseudo-null conjecture is true. A similar class was first given by McCallum (\cite{McCal:00}, Theorem~1). He considered such fields with $p$-class group isomorphic to $\mathbb{Z}/p\mathbb{Z}$. We provide here a slight generalization of that class, allowing for cyclic $p$-class groups of arbitrary $p$-power order, as well as apply Corollary~2 to extend the conjecture to all fields in the cyclotomic $\mathbb{Z}_p$-tower. We restate Theorem~1 here.
\begin{thm} Suppose $K=\mathbb{Q}(\zeta_p)$ satisfies the following conditions: \begin{enumerate} \item Vandiver's conjecture
\item $\lambda_p=1$.
\item $v_p(|(U/\overline{E})[p^\infty]|)\leq v_p(|A(K)|)$. \end{enumerate} Then for all $n\geq 1$ the pseudo-null conjecture holds for $\mathbb{Q}(\zeta_{p^n})$. \end{thm}
\textit{Remark 1}: Condition (2) is heuristically true for approximately 75\% of all irregular primes and experimentally true for 75\% of the irregular primes up to 12 million, according to~\cite{Buhler:93} (for these primes, $\lambda_p$ is just the index of irregularity of $p$).
\textit{Remark 2}: Letting $K_n=\mathbb{Q}(\zeta_{p^{n+1}})$ and $A_n=A(K_n)$, the hypotheses of Vandiver's conjecture and $\lambda_p=1$ imply \[A_n\simeq X/((1+T)^{p^n}-1)X,\] where $X=\mathbb{Z}_p[[T]]/(T+p^a)$ (see Theorem 10.16 and Proposition 13.22 of~\cite{Wash:97}). In particular this yields isomorphisms \[A_n\simeq \mathbb{Z}/p^{a+n}\mathbb{Z}\] for all $n\geq 0$, and so (3) is a condition on cyclic groups of $p$-power order.
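To make the last isomorphism explicit: in $X=\mathbb{Z}_p[[T]]/(T+p^a)\simeq \mathbb{Z}_p$ the variable $T$ acts as $-p^a$, so for odd $p$ the standard lifting-the-exponent computation gives \[v_p\bigl((1+T)^{p^n}-1\bigr)=v_p\bigl((1-p^a)^{p^n}-1\bigr)=v_p(-p^a)+n=a+n,\] whence $A_n\simeq \mathbb{Z}_p/p^{a+n}\mathbb{Z}_p\simeq \mathbb{Z}/p^{a+n}\mathbb{Z}$.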
\textit{Remark 3}: Since $A(K)$ is cyclic, there is only one Bernoulli number $B_i$, $2\leq i\leq p-3$, divisible by $p$. If $B_{p-j}$ denotes this term (so $\varepsilon_j A(K)$ is the non-trivial term of the idempotent decomposition of $A(K)$), then $L_p(s, \omega^{1-j})$ is the only non-trivial $p$-adic $L$-function attached to $K$. It follows from Theorem 8.25 of~\cite{Wash:97} that \[(U/\overline{E})[p^{\infty}]\simeq \mathbb{Z}/p^m\mathbb{Z} ,\] where $m=v_p(L_p(1, \omega^{1-j}))$. This valuation may be computed in terms of the characteristic power series $f(T)$ of $\varprojlim_nA(K_n)$. Under the assumption $\lambda_p=1$ this power series has the form $f(T)=(T+cp^a)u$, where $u$ is a unit, $p^a$ is the order of the cyclic group $A(K)$, and \[f((1+p)^s-1)=L_p(s, \omega^{1-j}).\] So the valuation of $L_p$ at $s=1$ equals the valuation of $f(p)=(p+cp^a)u$.
If $a>1$, $v_p(f(p))=1$, and condition (3) is satisfied. If, on the other hand, $a=1$, $v_p(f(p))$ depends on the value of $c\pmod{p}$. The valuation will again be 1 provided $c \not\equiv -1 \pmod{p}$. This congruence has been checked for $p<4000$ in~\cite{Iwa:65}, although tables are only given for $p<400$ and $3600<p<4000$. For these values the congruence condition is satisfied.
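The case analysis of $v_p(f(p))=v_p\bigl((p+cp^a)u\bigr)$ can be checked mechanically; the following sketch (sample values of $a$ and $c$ are illustrative) simply evaluates the $p$-adic valuation of $p+cp^a$.

```python
def vp(x, p):
    # p-adic valuation of a nonzero integer x
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def valuation_at_p(p, c, a):
    # v_p(f(p)) for f(T) = (T + c*p^a) * unit, evaluated at T = p
    return vp(p + c * p**a, p)

p = 37
print(valuation_at_p(p, 5, 2))      # a > 1: valuation is 1
print(valuation_at_p(p, 2, 1))      # a = 1, c != -1 (mod p): valuation is 1
print(valuation_at_p(p, p - 1, 1))  # a = 1, c = -1 (mod p): valuation jumps
```

The last case is precisely the congruence $c\equiv -1\pmod{p}$ that must be excluded for condition (3) to follow.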
Suppose $K=\mathbb{Q}(\zeta_p)$ satisfies (1)-(3) above. Since $A(K)$ is cyclic, say of order $p^a$, the group $\mathcal{G}_K$ is a one-relator group and Lemma~1 applies. We will utilize this lemma to show $\operatorname{Tor}_\Lambda(Y)=0$. In light of Corollary~1, it suffices to show $M_K\subset N_\infty$, and so we consider the structure of $\mathcal{G}_K^{ab}$ in more detail.
\begin{lem} Suppose $K$ satisfies hypotheses (1) and (2) of Theorem 7. Then the torsion subgroup of $\mathcal{G}_K^{ab}$ is cyclic. \end{lem}
\textbf{Proof}: Let $J_K$ denote the idele group of $K$, with $K^\times$ embedded diagonally. Let $U$ be the subgroup of ideles which are units at $\pi$ (the prime of $K$ above $p$) and 1 elsewhere, and let $U'$ be the subgroup of ideles which are 1 at $\pi$ and units elsewhere. Class field theory gives an isomorphism \[\mathcal{G}_K^{ab}\simeq \text{pro-$p$-completion of}\,\, J_K/(\overline{K^\times U'}),\] where the overline denotes the closure.
If we let $\overline{E}$ denote the closure of the embedding of the units of $K$ in $U$, then in fact we have an exact sequence \[0\longrightarrow U_1/\overline{E}_1 \longrightarrow \mathcal{G}_K^{ab} \longrightarrow A(K) \longrightarrow 0,\] where the subscript 1 indicates we are taking units congruent to 1 modulo $\pi$. Since $U_1$ has $\mathbb{Z}_p$-rank $[K:\mathbb{Q}]=p-1$ and $\overline{E_1}$ has $\mathbb{Z}_p$-rank $(p-3)/2$ (by Leopoldt's conjecture, which holds for $K$), the $\mathbb{Z}_p$-rank of $\mathcal{G}_K^{ab}$ is $(p+1)/2$ ($p\neq 2$ by the assumption $\lambda_p =1$).
We claim the torsion in $\mathcal{G}_K^{ab}$ comes from $U_1/\overline{E}_1$, and show this by considering an idele $(a_v)$ whose image in $\mathcal{G}_K^{ab}$ is a torsion element, say of order $p^m$. So \[(a_v)^{p^m} \in \overline{K^\times U'},\] say $(a_v)^{p^m}=\alpha (u_v)$ (where we abuse notation writing $\alpha$ for both the element of $K^\times$ as well as its diagonal image in $J_K$). This implies $\alpha$ is a $p^m$-th power in $K_\pi$, the $\pi$-adic completion of $K$. Let $\mathfrak{a}$ then be the ideal of $K$ such that $\mathfrak{a}^{p^m}=(\alpha)$. We want to show the class of
$\mathfrak{a}$ is principal.
Let $K_{m-1}=\mathbb{Q}(\zeta_{p^m})$, so $K_{m-1}(\alpha^{1/p^m})$ is an unramified extension. Since the class of $\mathfrak{a}$ lies in $A(K)^-$ (by Vandiver's conjecture), the Kummer pairing implies the Galois group of $K_{m-1}(\alpha^{1/p^m})/K_{m-1}$ is trivial. Hence $\alpha$ must be a $p^m$-th power in $K_{m-1}$ as well, which means the ideal class of $\mathfrak{a}$ is principal when extended to $K_{m-1}$ (represented by a principal ideal generated by a $p^m$-th root of $\alpha$). But the map from $A(K)$ to $A(K_{m-1})$ is injective (\cite{Wash:97}, Proposition~13.26), and so $\mathfrak{a}$ must have represented a principal class in $A(K)$ as well. Hence the torsion in $\mathcal{G}_K^{ab}$ maps to 0 in $A(K)$.
We now just need to determine the torsion subgroup of $U_1/\overline{E}_1$. We may consider each factor of the idempotent decomposition separately. Since $\varepsilon_iE_1=0$ for $i=0$ and for $i$ odd, and each $\varepsilon_iU_1\simeq \mathbb{Z}_p$, we obtain \[U_1/\overline{E}_1\simeq (\mathbb{Z}_p)^{(p+1)/2}\oplus \bigoplus_{\text{$i$ even}} \varepsilon_iU_1/\varepsilon_i\overline{E_1}.\] For even $i$ the terms $\varepsilon_iU_1/\varepsilon_i\overline{E_1}$ are equal to $\varepsilon_iU_1^+/\varepsilon_i\overline{E_1}^+$, where the superscript $+$ indicates we are looking at units in the local subfield fixed by the automorphism of order 2. Vandiver's conjecture implies the cyclotomic units $C_1^+$ have index prime to $p$ in $E_1^+$ (\cite{Wash:97}, Theorem~8.2), and so it suffices to consider the quotients $\varepsilon_iU_1^+/\varepsilon_i\overline{C_1}^+$. But Theorem~8.25 of~\cite{Wash:97} states \[[\varepsilon_iU_1^+:\varepsilon_i\overline{C_1}^+]= p^{v_p(L_p(1, \omega^i))}.\] Since $A(K)$ is cyclic there is only one non-trivial $L_p(s, \omega^i)$, and hence only one cyclic factor, say of order $p^m$, in the torsion subgroup of $U_1/\overline{E}_1$. $\Box$
\textbf{Proof of Theorem 7}: The field $\widetilde{K}$ is in fact the fixed field of the torsion subgroup of $\mathcal{G}_K^{ab}$, and so the extension $M_K/\widetilde{K}$ is a Kummer extension with $\text{Gal}(M_K/\widetilde{K})\simeq \mathbb{Z}/p^m\mathbb{Z}$. With $A(K)\simeq \mathbb{Z}/p^a\mathbb{Z}$, condition (3) of the Theorem just states $m\leq a$.
To show that $M_K$ is contained in $N_\infty$, we need to show that $M_K/\widetilde{K}$ is generated by a $p$-power root of a unit of $\widetilde{K}$. The argument, as in the proof of Theorem~6, is reduced to a capitulation problem.
Consider the extension $M_K/K_{m-1}$. There is a non-canonical isomorphism \[\text{Gal}(M_K/K_{m-1})\simeq \text{Gal}(\widetilde{K}/K_{m-1})\times \text{Gal}(M_K/\widetilde{K}).\] We let $L$ denote the fixed field of the first factor. The extension $L/K_{m-1}$ is a Kummer extension, and we may write \[L=K_{m-1}(x^{1/p^m})\] for some $x$ in $K_{m-1}$ where the ideal $(x)$ is of the form $(x)=\mathfrak{J}^{p^m}P$, where $P$ is the principal ideal of $K_{m-1}$ lying above $p$.
Since, in particular, $\mathfrak{J}$ represents a class of order dividing $p^m$ in $A(K_{m-1})$, condition (3) implies the class of $\mathfrak{J}$ is an extension of a class from $A(K)$ (recall the map $A(K)\rightarrow A(K_{m-1})$ is just an injection $\mathbb{Z}/p^a\mathbb{Z} \hookrightarrow \mathbb{Z}/p^{a+m-1}\mathbb{Z}$). We let $\mathfrak{A}$ be a representative ideal of the class that extends to the class of $\mathfrak{J}$.
Since the $p$-Hilbert class field of $K$ is contained in $\widetilde{K}$, the class of $\mathfrak{A}$, and therefore of $\mathfrak{J}$, becomes principal in $\widetilde{K}$. The extension $M_K/\widetilde{K}$ is also generated by a $p^m$-th root of $x$, and the ideal $(x)$ in $\widetilde{K}$ is now the $p^m$-th power of a \textit{principal} ideal, \[(x)=(y)^{p^m}.\] The elements $x$ and $y^{p^m}$ then differ by a unit, i.e. $x=uy^{p^m}$. But clearly the extension $M_K$ is also generated by the $p^m$-th root of $x/y^{p^m}=u$, and so the field $M_K$ is contained in $N_\infty$. $\Box$
\end{document} |
\begin{document}
\date{} \title{On curvature flow with driving force under Neumann boundary condition in the plane}
\author{Longjie ZHANG}
\date{December, 2015\\Graduate School of Mathematical Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan. Email: zhanglj@ms.u-tokyo.ac.jp, zhanglj919@gmail.com}
\maketitle
\begin{minipage}{140mm}
{{\bf Abstract:} We consider a family of axisymmetric curves evolving by mean curvature with driving force in the half plane. We impose the boundary condition that the curves are perpendicular to the boundary for $t>0$; however, the initial curve intersects the boundary tangentially, that is, the initial orientation is singular. We investigate this problem by the level set method and give some criteria to determine whether the interface evolution is fattening or not. In the end, we classify the solutions into three categories and provide the asymptotic behavior in each category. Our main tools in this paper are the level set method and the intersection number principle.
{\bf Keywords and phrases:} mean curvature flow, driving force, Neumann boundary, level set method, singularity, fattening. }
{\bf 2010MSC:} 35A01, 35A02, 35K55, 53C44.
\end{minipage}
\section{Introduction}\large
This paper studies the planar curvature flow with driving force and Neumann boundary condition of the form \begin{equation}\label{eq:cur} V=-\kappa+A\, \ \textrm{on}\ \Gamma(t)\subset \Omega , \end{equation} \begin{equation}\label{eq:Neum1} \Gamma(t)\perp\partial \Omega, \end{equation} \begin{equation}\label{eq:initial1} \Gamma(0)=\Lambda_0, \end{equation} where $\Omega=\{(x,y)\in \mathbb{R}^2\mid x\geq 0\}$, $V$ is the outer normal velocity of $\Gamma(t)$, $\kappa$ is the curvature of $\Gamma(t)$ and the sign is chosen such that the problem is parabolic. The constant $A>0$ is called the driving force.
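For orientation, we recall the standard behavior of the law $V=-\kappa+A$ without boundary: a circle of radius $R(t)$ evolves by \[\dot{R}(t)=-\frac{1}{R(t)}+A,\] so the circle of radius $1/A$ is stationary, circles with $R(0)>1/A$ expand for all time, and circles with $R(0)<1/A$ shrink to a point in finite time. This dichotomy already foreshadows the classification of solutions given below.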
In this paper, we assume that the initial curve $\Lambda_0$ is closed, smooth and given by $$
\Lambda_0=\{(x,y)\in \mathbb{R}^2\mid |y|=u_0(x), 0\leq x\leq b_0\}, $$ where $u_0\in C[0,b_0]\cap C^{\infty}(0,b_0)$. By the assumption on $\Lambda_0$, we have $$ u_0(x)>0,\ 0<x<b_0 $$ and $$ u_0(0)=u_0(b_0)=0,\ u_{0}^{\prime}(0)=-u_{0}^{\prime}(b_0)=\infty. $$ \begin{figure}
\caption{Initial curve}
\label{fig:u0}
\end{figure} Before giving our main results, we first consider another problem. \begin{equation} V=-\kappa+A\, \ \textrm{on}\ \Lambda(t)\subset \mathbb{R}^2,\tag{\ref{eq:cur}*} \end{equation} \begin{equation} \Lambda(0)=\Lambda_0.\tag{\ref{eq:initial1}*} \end{equation} We study this problem by the level set method. By the theory in \cite{G}, there exists a unique viscosity solution $\phi$ of the following level set equation $$ \left\{ \begin{array}{lcl}
\displaystyle{\phi_t=|\nabla \phi|\textmd{div}(\frac{\nabla \phi}{|\nabla \phi|})+A|\nabla \phi|}\ \textrm{in}\ \mathbb{R}^2\times(0,T),\\ \phi(x,y,0)=a_1(x,y), \end{array} \right. $$ where $a_1(x,y)$ satisfies $\Lambda_0=\{(x,y)\mid a_1(x,y)=0\}$. The results in the appendix show that the zero set of $\phi$ does not fatten. Indeed, thanks to Theorem \ref{thm:gu}, the zero set of $\phi$ can be written as $$
\Lambda(t)=\{(x,y)\mid \phi(x,y,t)=0\}=\{(x,y)\in\mathbb{R}^2\mid |y|=v(x,t), a_*(t)\leq x\leq b_*(t)\},\ 0<t<T. $$ Moreover, $(v,a_*,b_*)$ is the solution of the following free boundary problem \begin{equation} \left\{ \begin{array}{lcl} \displaystyle{u_t=\frac{u_{xx}}{1+u_x^2}+A\sqrt{1+u_x^2}},\ x\in(a_*(t),b_*(t)),\ 0<t< \delta,\\ u(a_*(t),t)=0,\ u(b_*(t),t)=0,\ 0\leq t< \delta,\\ u_x(a_*(t),t)=\infty,\ u_x(b_*(t),t)=-\infty,\ 0\leq t<\delta,\\ u(x,0)=u_0(x),\ 0\leq x\leq b_0. \end{array} \right.\tag{*} \end{equation} The points $a_*$ and $b_*$ are called the end points of $\Lambda(t)$.
Here we give our main results.
\begin{thm}\label{thm:exist}
If there exists $\delta$ such that $a_*(t)<0$ for $0<t<\delta$, then there exist $T_1>0$ and a unique smooth family of smooth curves $\Gamma(t)$ satisfying (\ref{eq:cur}), (\ref{eq:Neum1}), $0\leq t<T_1$, and satisfying (\ref{eq:initial1}) in the sense that $\lim\limits_{t\rightarrow0^+}d_H(\Gamma(t),\Lambda_0)=0$. Moreover, $\Gamma(t)$ can be written as $\Gamma(t)=\{(x,y)\in\Omega\mid |y|=u(x,t),\ 0\leq x\leq b(t)\}$, and $(u,b)$ is the unique solution of the following free boundary problem \begin{equation}\label{eq:1graph} u_t=\frac{u_{xx}}{1+u_x^2}+A\sqrt{1+u_x^2},\ 0<t<T_1,\ 0<x<b(t), \end{equation} \begin{equation}\label{eq:1bounday} u(b(t),t)=0,\ u_x(b(t),t)=-\infty,\ u_x(0,t)=0,\ 0<t<T_1, \end{equation} \begin{equation}\label{eq:1initial} u(x,0)=u_0(x),\ 0\leq x\leq b_0. \end{equation} \end{thm} Here $d_H(A,B)$ denotes the Hausdorff distance defined by $$ d_H(A,B)=\max\{\sup\limits_{x\in A}\inf\limits_{y\in B}d(x,y),\sup\limits_{y\in B}\inf\limits_{x\in A}d(x,y)\}. $$
Let $T$ be the maximal smooth time given by $$ T=\sup\{t\mid \Gamma(s)\ \text{is}\ \text{smooth}, 0<s<t\}. $$
\begin{thm}\label{thm:threecondition}
(Classification) Under the same assumptions as in Theorem \ref{thm:exist}, denote
$h(t)=\max\limits_{0\leq x\leq b(t)}u(x,t)$. Then $\Gamma(t)$ must be in one of the following situations.
(1). (Expanding) The existence time $T=\infty$ and both $h(t)$ and $b(t)$ tend to $\infty$, as $t\rightarrow\infty$.
(2). (Bounded) The existence time $T=\infty$ and both $h(t)$ and $b(t)$ are bounded from above and below by two positive constants, as $t\rightarrow\infty$.
(3). (Shrinking) The existence time $T<\infty$ and both $h(t)$ and $b(t)$ tend to 0, as $t\rightarrow T$.
\end{thm}
\begin{thm}\label{thm:asym}
(Asymptotic behavior) Under the same assumptions as in Theorem \ref{thm:exist}, $\Gamma(t)$ must satisfy one of the following three conditions.
(1). (Expanding) Assume that $T=\infty$ and that both $h(t)$ and $b(t)$ tend to $\infty$ as $t\rightarrow\infty$. Then there exist $t_0>0$, $R_1(t)$, $R_2(t)$ such that $$B_{R_1(t)}((0,0))\cap\{x>0\}\subset U(t) \subset B_{R_2(t)}((0,0))\cap\{x>0\},\ t>t_0$$
where $U(t)=\{(x,y)\in\mathbb{R}^2\mid |y|<u(x,t),\ x>0\}$. Moreover $\lim\limits_{t\rightarrow\infty}R_1(t)/t=\lim\limits_{t\rightarrow\infty}R_2(t)/t=A$.
(2). (Bounded) Assume that $T=\infty$ and that both $h(t)$ and $b(t)$ are bounded from above and below by two positive constants for $t>0$. Then $\lim\limits_{t\rightarrow\infty}d_H(\Gamma(t),\partial B_{1/A}((0,0))\cap\{x\geq0\})=0$.
(3). (Shrinking) Assume that $T<\infty$ and that both $h(t)$ and $b(t)$ tend to $0$ as $t\rightarrow T$. Then the flow $\Gamma(t)$ shrinks to a point at $t=T$. \end{thm} We extend $\Gamma(t)$ evenly and still denote the extended curve by $\Gamma(t)$. Then problem (\ref{eq:cur}), (\ref{eq:Neum1}), (\ref{eq:initial1}) is equivalent to the following problem in the whole space \begin{equation}\label{eq:cureven} V=-\kappa+A,\ \text{on}\ \Gamma(t)\subset \mathbb{R}^2, \end{equation} \begin{equation}\label{eq:initialeven} \Gamma(0)=\Gamma_0=\Lambda_0\cup\{(-x,y)\in\mathbb{R}^2\mid (x,y)\in\Lambda_0\}. \end{equation} If we extend $u_0$ evenly (still denoted by $u_0$), then obviously \begin{equation}\label{eq:ineven}
\Gamma_0=\{(x,y)\in\mathbb{R}^2\mid |y|=u_0(x)\}. \end{equation} In this paper we consider the problem (\ref{eq:cureven}), (\ref{eq:initialeven}) instead of the problem (\ref{eq:cur}), (\ref{eq:Neum1}), (\ref{eq:initial1}).
We next give a sufficient condition for the fattening phenomenon. The definitions of interface evolution and fattening are given in Section 2.
\begin{thm}\label{thm:fattening1} (Fattening)
If there exists $\delta$ such that $a_*(t)\geq0$ for $0<t<\delta$, then the interface evolution $\Gamma(t)$ for (\ref{eq:cureven}) with initial data $\Gamma_0$ is fattening. \end{thm}
Theorems \ref{thm:exist} and \ref{thm:fattening1} are illustrated by Figures \ref{fig:exist} and \ref{fig:fattening1}. The function $\varphi$ in Figures \ref{fig:exist} and \ref{fig:fattening1} is the unique viscosity solution of $$ \left\{ \begin{array}{lcl}
\displaystyle{\varphi_t=|\nabla \varphi|\textmd{div}(\frac{\nabla \varphi}{|\nabla \varphi|})+A|\nabla \varphi|}\ \textrm{in}\ \mathbb{R}^2\times(0,T),\\ \varphi(x,y,0)=a_2(x,y), \end{array} \right. $$ where $a_2(x,y)$ satisfies $\Gamma_0=\{(x,y)\mid a_2(x,y)=0\}$. Let $\Gamma(t)=\{(x,y)\mid \varphi(x,y,t)=0\}$.
\begin{figure}
\caption{$a_*(t)<0$ in Theorem \ref{thm:exist}}
\label{fig:exist}
\end{figure}
\begin{figure}
\caption{$a_*(t)\geq0$ in Theorem \ref{thm:fattening1}}
\label{fig:fattening1}
\end{figure}
\begin{rem} The assumptions on $a_*(t)$ in Theorems \ref{thm:exist} and \ref{thm:fattening1} may not be easy to check directly. Here we explain them by giving some sufficient conditions.
Denote by $\kappa(O)$ the curvature of $\Lambda_0$ at the origin; it is easy to see that $$ \kappa(O)=-\lim\limits_{x\rightarrow0^{+}}u_0^{\prime\prime}/(1+(u_0^{\prime})^2)^{3/2}. $$ Since $$ a_*^{\prime}(0)=\kappa(O)-A, $$ we can prove that:
(a) if $\kappa(O)<A$, then $a_*(t)<0$ for $t$ small;
(b) if $\kappa(O)>A$, then $a_*(t)>0$ for $t$ small. \end{rem}
{\bf The role of $a_*(t)$ in the main theorems.} Let $(u,b)$ be the unique solution of the free boundary problem (\ref{eq:1graph}), (\ref{eq:1bounday}), (\ref{eq:1initial}). Obviously, the flow $\Gamma^*(t)=\{(x,y)\mid |y|=u(x,t),\ 0\leq x\leq b(t)\}$ satisfies (\ref{eq:cur}), (\ref{eq:Neum1}), (\ref{eq:initial1}).
Let $(v,a_*,b_*)$ be the solution of the problem (*). If $a_*(t)\geq0$, $0<t<\delta$, the curves $$
\Lambda(t)=\{(x,y)\mid|y|=v(x,t),\ a_*(t)\leq x\leq b_*(t)\} $$ are located in $\{x\geq0\}$.
Note that $\Gamma^*(0)=\Lambda(0)=\Lambda_0$. However, $\Gamma^*(t)$ is perpendicular to the $y$-axis, while the family $\{\Lambda(t)\}$ evolves freely. This means that there exist two flows $\Gamma^*(t)$ and $\Lambda(t)$ evolving by $V=-\kappa+A$ with the same initial curve $\Lambda_0$, which can be regarded as non-uniqueness. Indeed, as seen in the proof of Theorem \ref{thm:fattening1}, the flow given by extending $\Gamma^*(t)$ evenly is the boundary of the closed evolution, and the flow given by extending $\Lambda(t)$ evenly is the boundary of the open evolution.
If $a_*(t)<0$, $0<t<\delta$, problem (*) no longer makes sense in the half space $\{x\ge0\}$, but the solution given by (*) plays the role of a sub-solution (in the proof of Lemma \ref{lem:closebou}). Using this sub-solution, the boundaries of the open evolution and the closed evolution stay away from the $x$-axis. By the uniqueness result (Proposition 5.4), we can prove that they coincide.
For the curve shortening flow ($A=0$), since $a_*(t)\geq0$ always holds, the interface evolution is fattening.
{\bf Motivation.} This research is motivated by \cite{MNL}, which studies the mean curvature flow with driving force under the Neumann boundary condition in a two-dimensional cylinder with periodically undulating boundary. \begin{figure}
\caption{Curve touching }
\label{fig:intro1}
\caption{After touching}
\label{fig:intro2}
\end{figure}
In \cite{MNL}, only initial curves $\{(x,y)\in\mathbb{R}^2\mid y=u_0(x)\}$ with $|u_0^{\prime}(x)|<M$ for some $M$ are considered. The authors show that the interior points of $\Gamma(t)=\{(x,y)\in\mathbb{R}^2\mid y=u(x,t)\}$ never touch the boundary and that $\Gamma(t)$ remains a graph; therefore, the problem can be studied by classical parabolic theory. If the assumption $|u_0^{\prime}(x)|<M$ is removed, a singularity develops when $u(x,t)$ touches the boundary (Figure \ref{fig:intro1}). As Figure \ref{fig:intro2} suggests, after touching, $\Gamma(t)$ may separate into two parts and become a non-graph ($\Gamma(t)$ cannot be represented by $y=u(x,t)$). This leads us to analyze what happens after the curve touches the boundary. Since $\Gamma(t)$ may become a non-graph, we use the level set method established by \cite{CGG}; see also Evans and Spruck \cite{ES2} for the mean curvature flow, where the fattening phenomenon was first observed. Therefore, the main task in this paper is to study whether the interface evolution fattens or not. The notion of the level set solution is introduced in Section 2.
{\bf A short review of the mean curvature flow.} For the classical mean curvature flow, i.e. $A=0$ in (\ref{eq:cur}), there are many results. Huisken \cite{H} shows that any solution that starts out as a convex, smooth, compact surface remains so until it shrinks to a ``round point'', and its asymptotic shape is a sphere just before it disappears. He proves this result for hypersurfaces of $\mathbb{R}^{n+1}$ with $n\geq2$, and Gage and Hamilton \cite{GH} show that it still holds when $n=1$, i.e. for curves in the plane. Gage and Hamilton also show that an embedded curve remains embedded, i.e. the curve does not intersect itself. Grayson \cite{Gr} proves the remarkable fact that such a family must eventually become convex. Thus, any embedded curve in the plane shrinks to a ``round point'' under the curve shortening flow. In higher dimensions this is no longer true: Grayson \cite{Gr2} shows that there exists a smooth flow that becomes singular before shrinking to a point. His example consists of a barbell, two spherical surfaces connected by a sufficiently thin ``neck''. In this example, the inward curvature of the neck is so large that it forces the neck to pinch before shrinking. In \cite{AAG}, S. Altschuler, S. B. Angenent and Y. Giga study, by the level set method, the flow whose initial hypersurface is a compact, rotationally symmetric hypersurface which pinches on the $x$-axis. They prove that the hypersurface separates into two smooth hypersurfaces after pinching.
{\bf Main method.} In this paper, one of the most important tools is the intersection number principle. It was shown in \cite{AAG} that the intersection number between two families evolving by the mean curvature flow is non-increasing; for the problem with driving force, however, the intersection number may increase. In \cite{GMSW}, the authors give an extended intersection number principle for the following free boundary problem, called (Q): \begin{equation} \left\{ \begin{array}{lcl} \displaystyle{u_t=\frac{u_{xx}}{1+u_x^2}+A\sqrt{1+u_x^2}},\ x\in(a(t),b(t)),\ 0<t< T,\\ u(a(t),t)=0,\ u(b(t),t)=0,\ 0\leq t< T,\\ u_x(a(t),t)=\tan\theta_-(t),\ u_x(b(t),t)=-\tan\theta_+(t),\ 0\leq t< T,\\ u(x,0)=u_0(x),\ a(0)\leq x\leq b(0), \end{array} \right.\tag{Q} \end{equation} where $0<\theta_{\pm}<\pi/2$.
If $(u_1,a_1,b_1)$, $(u_2,a_2,b_2)$ are the solutions of (Q) with $\theta_{\pm}^1$, $\theta_{\pm}^2$, $u_0^1$ and $u_0^2$, then the intersection number between $$ \{(x,y)\mid y=u_1(x,t),a_1(t)\leq x\leq b_1(t)\} $$ and $$ \{(x,y)\mid y=u_2(x,t),a_2(t)\leq x\leq b_2(t)\} $$ may increase, as simple examples show (we omit them here). In \cite{GMSW}, the authors find a non-increasing quantity: if $u_1$ and $u_2$ are extended by straight lines so that the extended functions $u_1^{*}$, $u_2^{*}$ are in $C^1(\mathbb{R})$, then the intersection number between $u_1^{*}$ and $u_2^{*}$ is non-increasing provided that $\theta_{\pm}^1 \neq \theta_{\pm}^2$. If $\theta_{+}^1= \theta_{+}^2$, the intersection number does not increase provided that $b_1(t)\neq b_2(t)$, and it decreases at times $t_0$ satisfying $b_1(t_0)=b_2(t_0)$. Similarly for $a(t)$. These results are called the ``extended intersection number principle''.
As observed above, in this paper we consider curves symmetric about the $x$-axis. Under this condition, $\theta_{\pm}$ in problem (Q) satisfy $\theta_{\pm}=\pi/2$. If we extend $u_1$ and $u_2$ by vertical straight lines (since $\theta_{\pm}=\pi/2$, the extended curves are no longer graphs), the extended $C^1$ curves $\gamma_1(t)$ and $\gamma_2(t)$ may intersect each other even if the intersection number between $\gamma_1(0)$ and $\gamma_2(0)$ is zero. We investigate the intersection number in Section 4.
The rest of this paper is organized as follows. In Section 2, we introduce the level set method established by \cite{CGG}; the definitions of open evolution, closed evolution and fattening, together with basic facts including the comparison principle, the monotone convergence theorem and so on, are given in this section. In Section 3, we prove the Evans-Spruck estimate (also called the interior gradient estimate). In Section 4, results on the intersection number are established and applied. In Section 5, we give the proofs of Theorem \ref{thm:exist} and Theorem \ref{thm:fattening1}. In Section 6, we study the possible formations of singularities. In Section 7, we classify the solutions given by Theorem \ref{thm:exist} and prove the asymptotic behavior in each category (Theorems \ref{thm:threecondition} and \ref{thm:asym}). In Section 8, we give another non-fattening result in $(n+1)$ dimensions for a class of initial hypersurfaces.
\section{Level set method}
Since the initial curve $\Gamma_0$ given in (\ref{eq:initialeven}) has a singularity at $(0,0)$, the equation $V=-\kappa+A$ does not make sense at $t=0$. Therefore, we apply the level set method to our problem. In this section, we introduce the level set method in $\mathbb{R}^N$. Let $\Gamma(t)$ be a smooth family of smooth, closed, compact hypersurfaces in $\mathbb{R}^{N}$, and assume there exists $\psi(x,t)$ such that $\Gamma(t)=\{x\in\mathbb{R}^N\mid\psi(x,t)=0\}$. If $\Gamma(t)$ evolves by (\ref{eq:cureven}), we can derive that $\psi(x,t)$ satisfies \begin{equation}\label{eq:level}
\displaystyle{\psi_t=|\nabla \psi |\textmd{div}(\frac{\nabla \psi}{|\nabla \psi|})+A|\nabla \psi|\ \textrm{in}\ \mathbb{R}^N\times(0,T)}. \end{equation} Equation (\ref{eq:level}) is called the level set equation of (\ref{eq:cureven}). Theorem 4.3.1 in \cite{G} gives the existence and uniqueness of the viscosity solution of (\ref{eq:level}) with $\psi(x,0)=\psi_0(x)$, where $\psi_0(x)$ is a bounded, uniformly continuous function. \begin{figure}
\caption{Level set method in $\mathbb{R}^2$}
\label{fig:Levelsetmethod}
\end{figure}
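For the reader's convenience, here is the standard formal derivation of (\ref{eq:level}) (a sketch, under the convention that $\psi>0$ inside $\Gamma(t)$ and the normal points outward):

```latex
% Take \psi > 0 inside \Gamma(t), so the outward unit normal is
%   n = -\nabla\psi/|\nabla\psi|
% and the curvature with respect to n is
%   \kappa = \operatorname{div} n = -\operatorname{div}(\nabla\psi/|\nabla\psi|).
% Differentiating \psi(x(t),t) = 0 along a point x(t) moving with
% velocity \dot x = V n gives \psi_t - V|\nabla\psi| = 0, so the law
% V = -\kappa + A becomes
\psi_t = |\nabla \psi|\,\operatorname{div}\Big(\frac{\nabla \psi}{|\nabla \psi|}\Big)
       + A\,|\nabla \psi| .
```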
{\bf Level set method.} Using the solution of the level set equation, we introduce the level set method. \begin{defn}\label{def:evo} (1) Let $D_0$ be a bounded open set in $\mathbb{R}^N$. A family of open sets $\{D(t)\mid D(t)\subset \mathbb{R}^N\}_{0<t<T}$ is called a \textit{(generalized) open evolution} of (\ref{eq:cureven}) with initial data $D_0$ if there exists a viscosity solution $\psi$ of (\ref{eq:level}) that satisfies $$ D(t)=\{x\in\mathbb{R}^N \mid \psi(x,t)>0\},\ D_0=\{x\in\mathbb{R}^N\mid \psi(x,0)>0\}. $$
(2) Let $E_0$ be a bounded closed set in $\mathbb{R}^N$. A family of closed sets $\{E(t)\mid E(t)\subset \mathbb{R}^N\}_{0<t<T}$ is called a \textit{(generalized) closed evolution} of (\ref{eq:cureven}) with initial data $E_0$ if there exists a viscosity solution $\psi$ of (\ref{eq:level}) that satisfies $$ E(t)=\{x\in\mathbb{R}^N \mid \psi(x,t)\geq0\},\ E_0=\{x\in\mathbb{R}^N\mid \psi(x,0)\geq0\}. $$
The set $\Gamma(t)=E(t)\setminus D(t)$ is called an \textit{(generalized) interface evolution} of (\ref{eq:cureven}) with initial data $\Gamma_0=E_0\setminus D_0$. \end{defn}
\begin{rem}\label{rem:lev} (1) For open set $D_0$, we often choose $$ \psi(x,0)=\max\{\textrm{sd}(x,\partial D_0),-1\} $$ where $$ \textrm{sd}(x,\partial D_0)=\left\{ \begin{array}{lcl} \textrm{dist}(x,\partial D_0), \ x\in D_0,\\ -\textrm{dist}(x,\partial D_0), \ x\notin D_0. \end{array}\right. $$
(2) The choice of $\psi(x,0)$ is not unique, but by Theorem 4.2.8 in \cite{G}, the open evolution $D(t)$ and the closed evolution $E(t)$ are both independent of the choice of $\psi(x,0)$.
(3) In general, even if $E_0=\overline{D_0}$, we cannot guarantee $E(t)=\overline{D(t)}$. If $E(t)\setminus D(t)$ has interior points for some $t$, we say the interface evolution is fattening; if $E(t)=\overline{D(t)}$ for all $0<t<T$, we say the interface evolution is regular. Therefore, in the proof of Theorem \ref{thm:exist}, it is sufficient to prove $\partial D(t)=\partial E(t)$; in the proof of Theorem \ref{thm:fattening1}, it is sufficient to prove that there exists a ball $B$ such that $B\subset E(t)\setminus D(t)$. \end{rem}
We now list some fundamental properties of the open evolution and the closed evolution of (\ref{eq:cureven}) (Chapter 4 in \cite{G}).
\begin{thm}\label{thm:semi} (Semigroups)\cite{G}. Let $U(t)$ and $M(t)$ be the operators such that $U(t)D_0=D(t)$ and $M(t)E_0=E(t)$, for $t>0$. Then we have $U(t)D(s)=D(t+s)$ and $M(t)E(s)=E(t+s)$, for any $t>0$, $s>0$. \end{thm}
\begin{thm}\label{thm:order}(Order preserving property)\cite{G}. Let $D_0$, $D_0^{\prime}$ be two open sets in $\mathbb{R}^N$ and let $E_0$, $E_0^{\prime}$ be two closed sets in $\mathbb{R}^N$. Then
(1) $U(t)D_0\subset U(t)D_0^{\prime}$, if $D_0\subset D_0^{\prime}$;
(2) $M(t)E_0\subset M(t)E_0^{\prime}$, if $E_0\subset E_0^{\prime}$;
(3) $U(t)D_0\subset M(t)E_0^{\prime}$, if $D_0\subset E_0^{\prime}$;
(4) If $E_0\subset D_0$ and $\textrm{dist}(E_0,\partial D_0)>0$, then $M(t)E_0\subset U(t)D_0$. \end{thm}
\begin{thm}\label{thm:mon}(Monotone convergence)\cite{G}.
(1) Let $D(t)$ and $\{D_j(t)\}$ be open evolutions with initial data $D_0$ and $D_{j0}$ respectively. If $D_{j0}\uparrow D_0$, then $D_j(t)\uparrow D(t)$, $t>0$, i.e., $\bigcup\limits_{j\geq1}D_j(t)=D(t)$;
(2) Let $E(t)$ and $\{E_j(t)\}$ be closed evolutions with initial data $E_0$ and $E_{j0}$ respectively. If $E_{j0}\downarrow E_0$, then $E_j(t)\downarrow E(t)$, $t>0$, i.e., $\bigcap\limits_{j\geq1}E_j(t)=E(t)$. \end{thm} \begin{thm}\label{thm:conti}(Continuity in time)\cite{G}. Let $D(t)$ and $E(t)$ be open and closed evolutions, respectively.
(1a) $D(t)$ is a lower semicontinuous function of $t\in[0,T)$, in the sense that for any $t_0\geq 0$, and sequence $x_n\in (D(t_n))^c$ with $x_n\rightarrow x_0$, $t_n\rightarrow t_0$, the limit $x_0\in (D(t_0))^c$. If $D(0)$ is bounded so that $\mathcal{C}_{\epsilon}(D(t_0))$ is compact, this implies that for any $t_0\geq0$, $\epsilon>0$ there is a $\delta>0$ such that $|t-t_0|<\delta$ implies $D(t)\supset\mathcal{C}_{\epsilon}(D(t_0))$.
(1b) $E(t)$ is an upper semicontinuous function of $t\in[0,T)$, in the sense that for any $t_0\geq 0$, and sequence $x_n\in E(t_n)$ with $x_n\rightarrow x_0$, $t_n\rightarrow t_0$, the limit $x_0\in E(t_0)$. If $E(0)$ is bounded so that $\mathcal{N}_{\epsilon}(E(t_0))$ is compact, this implies that for any $t_0\geq0$, $\epsilon>0$ there is a $\delta>0$ such that $|t-t_0|<\delta$ implies $E(t)\subset\mathcal{N}_{\epsilon}(E(t_0))$.
(2a) $D(t)$ is left upper semicontinuous in $t$ in the sense that for any $t_0\in(0,T)$, $x_0\in (D(t_0))^c$ there is a sequence $x_n\rightarrow x_0$ and $t_n\uparrow t_0$ with $x_n\in (D(t_n))^c$. Moreover, for any $t_0\in(0,T)$, $\epsilon>0$ there exists a $\delta>0$ such that $t_0-\delta<t<t_0$ implies $\mathcal{C}_{\epsilon}(D(t))\subset D(t_0)$.
(2b) $E(t)$ is left lower semicontinuous in $t$ in the sense that for any $t_0\in(0,T)$, $x_0\in E(t_0)$ there is a sequence $x_n\rightarrow x_0$ and $t_n\uparrow t_0$ with $x_n\in E(t_n)$. Moreover, for any $t_0\in(0,T)$, $\epsilon>0$ there exists a $\delta>0$ such that $t_0-\delta<t<t_0$ implies $\mathcal{N}_{\epsilon}(E(t))\supset E(t_0)$. \end{thm}
Here $\mathcal{N}_{\epsilon}(A)=\{x\in \mathbb{R}^N\mid d(x,A)<\epsilon\}$ for a closed subset $A$ of $\mathbb{R}^N$, and $\mathcal{C}_{\epsilon}(A)=\mathcal{N}_{\epsilon}(A^c)^c$ for an open subset $A$ of $\mathbb{R}^N$.
\begin{rem}\label{rem:ori} For $A>0$, even if $D_1(0)$ and $D_2(0)$ are disjoint, $D_1(t)$ and $D_2(t)$ may intersect. The basic reason is that the level set equation (\ref{eq:level}) is not orientation free (if $u$ is a solution, $-u$ need not be a solution of (\ref{eq:level})). \end{rem}
In order to prove Theorem \ref{thm:fattening1}, we need the following lemma. This lemma gives the construction of an open evolution containing two disjoint components.
\begin{lem}\label{lem:sep} Let $D_1(t)$ and $D_2(t)$ be the open evolutions of (\ref{eq:level}) with $D_1(0)=U_1$ and $D_2(0)=U_2$, and let $D(t)$ be the open evolution of (\ref{eq:level}) with $D(0)=U_1\cup U_2$. If $D_1(t)\cap D_2(t)=\emptyset$ for $0\leq t\leq T$, then $D(t)=D_1(t)\cup D_2(t)$, $0\leq t\leq T$. \end{lem}
Under the condition $A=0$, $D_1(t)\cap D_2(t)=\emptyset$ holds automatically provided that $D_1(0)\cap D_2(0)=\emptyset$, but for $A>0$ this is no longer true. Therefore, we assume $D_1(t)\cap D_2(t)=\emptyset$ for $0\leq t\leq T$.
\begin{proof} First, we assume $D_1(t)\cap D_2(t)=\emptyset$ for $0\leq t\leq T$ and $\delta:=\min\limits_{0\leq t\leq T}\textrm{dist}(D_1(t),D_2(t))>0$. We define $$ a_i(x)=\max\{\textrm{sd}(x,\partial D_i(0)),0\}, \ x\in\mathbb{R}^N,\ i=1,2. $$
\begin{figure}
\caption{Proof of Lemma \ref{lem:sep}}
\label{fig:lemsep}
\end{figure}
By the theory in \cite{G}, there exist non-negative functions $\varphi_i(x,t)\in C_{c}(\mathbb{R}^N\times[0,T])$ solving (\ref{eq:level}) with $\varphi_i(x,0)=a_i(x)$. Then $D_i(t)=\{x\in\mathbb{R}^N\mid \varphi_i(x,t)>0\}$ and $\varphi_i=0$ outside of $D_i(t)$, $i=1,2$. Since $\textrm{supp}\,\varphi_1$ and $\textrm{supp}\,\varphi_2$ are separated by $\delta$, it is easy to show that $\varphi(x,t):=\max\{\varphi_1,\varphi_2\}(x,t)$ is also a viscosity solution of (\ref{eq:level}). Then $D(t)=\{x\mid\varphi(x,t)>0\}=D_1(t)\cup D_2(t)$ is the open evolution with initial data $D(0)=U_1\cup U_2$, for $0\leq t\leq T$.
Next we prove the result only under the assumption $D_1(t)\cap D_2(t)=\emptyset$, $0\leq t\leq T$. Consider $\displaystyle{D_i^j(t)=\{x\mid \varphi_i(x,t)>\frac{1}{j}\}}$.
We claim that $\min\limits_{0\leq t\leq T}\textrm{dist}(D_1^j(t),D_2^j(t))>0$, for all $j$. If $\min\limits_{0\leq t\leq T}\textrm{dist}(D_1^j(t),D_2^j(t))=0$, for some $j$, then there exist $t_0\in[0,T]$ and sequences $\{x_m\}\subset D_1^j(t_0)$, $\{y_m\}\subset D_2^j(t_0)$ such that $$
|x_m-y_m|\rightarrow0,\ \ \varphi_1(x_m,t_0)>\frac{1}{j},\ \varphi_2(y_m,t_0)>\frac{1}{j}. $$ Then, up to a subsequence, there exists $x$ such that $\lim\limits_{m\rightarrow\infty}x_m=\lim\limits_{m\rightarrow\infty}y_m=x$, and hence $$ \varphi_1(x,t_0)\geq\frac{1}{j}>0,\ \varphi_2(x,t_0)\geq\frac{1}{j}>0. $$ Consequently, $x\in D_1(t_0)\cap D_2(t_0)\neq\emptyset$, a contradiction. Thus $\min\limits_{0\leq t\leq T}\textrm{dist}(D_1^j(t),D_2^j(t))>0$, for all $j$. By the argument in the first step, $D^j(t)=D_1^j(t)\cup D_2^j(t)$ is the open evolution with initial open set $\displaystyle{\{x\mid \varphi_1(x,0)>\frac{1}{j}\}\cup\{x\mid \varphi_2(x,0)>\frac{1}{j}\}}$, for $0\leq t\leq T$.
Noting that $\bigcup\limits_{j=1}^{\infty}D_1^j(0)\cup D_2^j(0)=U_1\cup U_2$ and using Theorem \ref{thm:mon}, we obtain $D(t)=\bigcup\limits_{j=1}^{\infty}D^j(t)=\bigcup\limits_{j=1}^{\infty}D_1^j(t)\cup D_2^j(t)=D_1(t)\cup D_2(t)$, for $0\leq t\leq T$. \end{proof}
\begin{thm}\label{thm:openevolutionmeancurvature}(Relation between evolution and mean curvature flow) Let $D(t)$, $E(t)$ be the open evolution and closed evolution, respectively. Assume that in an open region $U\times(t_1,t_2)\subset\mathbb{R}^N\times(0,T)$, $\partial D(t)$ and $\partial E(t)$ are the graphs of continuous functions $v_1$, $v_2$. Precisely, $$ \partial D(t)\cap U=\{x\in \mathbb{R}^N\mid x_N=v_1(x^{\prime},t), x^{\prime}\in U^{\prime}\} $$ and $$ \partial E(t)\cap U=\{x\in \mathbb{R}^N\mid x_N=v_2(x^{\prime},t), x^{\prime}\in U^{\prime}\}, $$ where $x^{\prime}=(x_1,\cdots,x_{N-1})$, $U^{\prime}=U\cap\{x_N=0\}$ and $v_1$, $v_2$ are continuous in $U^{\prime}\times(t_1,t_2)$. Then the function $v_1$ (resp.\ $v_2$) is a viscosity supersolution (resp.\ subsolution) of $$
v_t=\left(\delta_{ij}-\frac{v_{x_i}v_{x_j}}{1+|\nabla v|^2}\right)v_{x_ix_j}+ A\sqrt{1+|\nabla v|^2} $$ or is a viscosity subsolution(supersolution) of $$
v_t=\left(\delta_{ij}-\frac{v_{x_i}v_{x_j}}{1+|\nabla v|^2}\right)v_{x_ix_j}-A\sqrt{1+|\nabla v|^2} $$ where the signs of the last terms are determined by the direction of the normal velocity of $\partial D(t)\cap U$ and $\partial E(t)\cap U$. \end{thm} This theorem can be proved by a method similar to that in \cite{ES}; we omit the proof here.
\section{A priori estimates}
In this section, we give an interior gradient estimate.
{\bf Graph equation.} Let $u(x,t)$ be a function on an open subset of $\mathbb{R}^n\times \mathbb{R}$; then the graph of $u(\cdot,t)$ is a family of hypersurfaces in $\mathbb{R}^{n+1}$. The family of hypersurfaces moves by $V=-\kappa+A$ if and only if $$
u_t=\left(\delta_{ij}-\frac{u_{x_i}u_{x_j}}{1+|\nabla u|^2}\right)u_{x_ix_j}\pm A\sqrt{1+|\nabla u|^2}, $$ where the signs of the last terms are determined by the direction of the normal velocity $V$.
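For instance, in the planar case $n=1$, writing the upper branch of the curve as $y=u(x,t)$ with outer normal pointing upward, the ``$+$'' sign arises as follows (a formal check, consistent with the free boundary problem (*)):

```latex
% Outer unit normal of the graph y = u(x,t):
%   n = (-u_x, 1)/\sqrt{1+u_x^2}.
% Normal velocity:      V = u_t/\sqrt{1+u_x^2}.
% Curvature w.r.t. n:   \kappa = -u_{xx}/(1+u_x^2)^{3/2}.
% Hence V = -\kappa + A reads
\frac{u_t}{\sqrt{1+u_x^2}} = \frac{u_{xx}}{(1+u_x^2)^{3/2}} + A,
\qquad\text{i.e.}\qquad
u_t = \frac{u_{xx}}{1+u_x^2} + A\sqrt{1+u_x^2} .
```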
In the case $A=0$, the equation reduces to $$
u_t=\displaystyle{\left(\delta_{ij}-\frac{u_{x_i}u_{x_j}}{1+|\nabla u|^2}\right)u_{x_ix_j}}. $$
The estimate for $|\nabla u|$ in the entire space $\mathbb{R}^n$ is given in \cite{EH}, and the interior gradient estimate was first given in \cite{ES}. Here we give the estimate under the condition $A>0$. Since the proof for $n=1$ is no easier than for $n\geq 2$, we prove it in the $n$-dimensional setting.
\begin{thm}\label{thm:es} Let $u\in C^3(\Omega_{T})\cap C^0(\overline{\Omega}_T)$ satisfy \begin{equation}\label{eq:graph}
u_t=\displaystyle{\left(\delta_{ij}-\frac{u_{x_i}u_{x_j}}{1+|\nabla u|^2}\right)u_{x_ix_j}\pm A\sqrt{1+|\nabla u|^2}}. \end{equation} In the case ``$+$'' (resp.\ ``$-$''), we assume $u<0$ (resp.\ $u>0$) in $\Omega_T$ and $u(0,T)=-v_0$ (resp.\ $u(0,T)=v_0$). Then $$
|\nabla u(0,T)|\leq (3+16v_0)e^{2K}, $$ where $\displaystyle{K=20v_0^2(4n+\frac{1}{T}+4A+\frac{A}{2v_0})}+2$, $\Omega_T=B_1(0)\times (0, 2T)$ and $$ \delta_{ij}=\left\{ \begin{array}{lcl} 1,\ i=j\\ 0,\ i\neq j \end{array} \right.. $$ \end{thm}
\begin{proof} We only prove the case ``$+$''; for the case ``$-$'', we can consider $-u$ to get the result. Denote $w=\sqrt{1+|\nabla u|^2}$, $\nu^i=u_{x_i}/\sqrt{1+|\nabla u|^2}$, $g^{ij}=\delta_{ij}-\nu^i\nu^j$. We define the operator $L$ by $$ Lh=g^{ij}h_{x_ix_j}-h_t+A\nu^kh_{x_k}. $$
We let $h=\eta(x,t,u(x,t))w$, where $\eta$ is a non-negative function to be specified later. By calculation, \begin{eqnarray*} Lh&=&g^{ij}(w_{x_ix_j}\eta+w_{x_i}(\eta)_{x_j}+(\eta)_{x_i}w_{x_j}+w(\eta)_{x_ix_j})\\ &-&(\eta)_tw-\eta w_t+A\nu^k(w_{x_k}\eta+(\eta)_{x_k}w)\\ &=&\eta Lw+wL\eta+2g^{ij}w_{x_i}(\eta)_{x_j}\\ &=&\eta Lw+wL\eta+2g^{ij}w_{x_i}\left(\frac{h_{x_j}-w_{x_j}\eta}{w}\right). \end{eqnarray*} Then $$ Lh-2g^{ij}\frac{w_{x_i}}{w}h_{x_j}=\eta\left(Lw-2g^{ij}\frac{w_{x_i}w_{x_j}}{w}\right)+wL\eta. $$
We claim that $$ Lw-2g^{ij}\frac{w_{x_i}w_{x_j}}{w}\geq 0. $$ Therefore, there holds \begin{equation}\label{eq:grainter} Lh-2g^{ij}\frac{w_{x_i}}{w}h_{x_j}\geq wL\eta. \end{equation}
We begin to prove the claim. Seeing $$ w_{x_ix_j}=\nu^ku_{x_kx_ix_j}+\frac{1}{w}(u_{x_kx_i}u_{x_kx_j}-\nu^k\nu^lu_{x_kx_i}u_{x_lx_j}), $$ we have $$ g^{ij}w_{x_ix_j}\geq\nu^kg^{ij}u_{x_kx_ix_j}=\nu^k((g^{ij}u_{x_ix_j})_{x_k}-g^{ij}_{x_k}u_{x_ix_j}) $$ $$
=\nu^k\left(u_{tx_k}-A\frac{u_{x_l}u_{x_lx_k}}{\sqrt{1+|\nabla u|^2}}\right)-\nu^kg_{x_k}^{ij}u_{x_ix_j}. $$ Combining $$ g_{x_k}^{ij}=-\frac{1}{w}(\nu^ju_{x_ix_k}+\nu^iu_{x_jx_k})+\frac{2u_{x_i}u_{x_j}}{w^3}w_{x_k}, $$ \begin{eqnarray*} \nu^kg^{ij}u_{x_kx_ix_j} &=&w_t-A\nu^kw_{x_k}+\frac{\nu^ku_{x_ix_j}}{w}\left(\nu^ju_{x_ix_k}+\nu^iu_{x_jx_k}-\frac{2u_{x_i}u_{x_j}}{w^2}w_{x_k}\right)\\ &=&w_t-A\nu^kw_{x_k}+\frac{2}{w}g^{ij}w_{x_i}w_{x_j}. \end{eqnarray*} Therefore $$ g^{ij}w_{x_ix_j}\geq w_t-A\nu^kw_{x_k}+\frac{2}{w}g^{ij}w_{x_i}w_{x_j}. $$ Then $$ Lw\geq\frac{2}{w}g^{ij}w_{x_i}w_{x_j}. $$ We complete the proof of the claim.
Next we choose $\eta=f\circ\phi(x,t,u(x,t))$, where
$$\displaystyle{\phi(x,t,z)=\left(\frac{z}{2v_0}+\frac{t}{T}(1-|x|^2)\right)^+}$$ and $$f(\phi)=e^{K\phi}-1.$$
When $\phi>0$, there holds $$
\phi_z=\frac{1}{2v_0},\ \phi_t=\frac{1-|x|^2}{T},\ \phi_{x_i}=-\frac{2t}{T}x_i,\ \phi_{x_ix_j}=-\frac{2t}{T}\delta_{ij}. $$ Consequently, when $\phi>0$, $z<0$, $0<t<2T$, $$ 0\leq\phi\leq2,\ \sum\phi_{x_i}^2\leq\frac{4t^2}{T^2}\leq16. $$
By calculation, \begin{eqnarray*} L\eta&=&g^{ij}f^{\prime\prime}(\phi_{x_i}+\phi_{z}u_{x_i})(\phi_{x_j}+ \phi_{z}u_{x_j})+g^{ij}f^{\prime}(\phi_{x_ix_j}+\phi_zu_{x_ix_j})\\ &-&f^{\prime}(\phi_t+\phi_zu_t)+A\nu^kf^{\prime}(\phi_{x_k}+\phi_zu_{x_k})\\ &\geq&\frac{f^{\prime\prime}}{w^2}(\phi_{x_i}+\phi_zu_{x_i})^2+f^{\prime}(g^{ij} \phi_{x_ix_j}-\phi_t+A\nu^k\phi_{x_k})+f^{\prime}\phi_zLu \end{eqnarray*} \begin{eqnarray*}
&=&\frac{f^{\prime\prime}}{w^2}\left(-\frac{2t}{T}x_i+\frac{1}{2v_0}u_{x_i}\right)^2+f^{\prime}\left(-\frac{2t}{T}(n-\frac{|\nabla u|^2}{1+|\nabla u|^2})-\frac{1-|x|^2}{T}\right.\\
&-&\left. Ax_k\frac{2t}{T}\frac{u_{x_k}}{\sqrt{1+|\nabla u|^2}}\right)+f^{\prime}\phi_zLu. \end{eqnarray*} Combining $$
Lu=g^{ij}u_{x_ix_j}-u_t+A\nu^ku_{x_k}=-\frac{A}{\sqrt{1+|\nabla u|^2}}, $$ there holds \begin{eqnarray*}
L\eta&\geq& \frac{f^{\prime\prime}}{w^2}\left(\frac{|\nabla u|^2}{8v_0^2}-8\right)+f^{\prime}\left(-4n-\frac{1}{T}-4A\right)-f^{\prime}\phi_z\frac{A}{\sqrt{1+|\nabla u|^2}}\\
&\geq& \frac{f^{\prime\prime}}{w^2}\left(\frac{|\nabla u|^2}{8v_0^2}-8\right)+ f^{\prime}\left(-4n-\frac{1}{T}-4A-\frac{A}{2v_0}\right)\\
&=&\frac{K^2e^{K\phi}}{w^2}\left(\frac{|\nabla u|^2}{8v_0^2}-8\right)+ Ke^{K\phi}\left(-4n-\frac{1}{T}-4A-\frac{A}{2v_0}\right). \end{eqnarray*}
When $|\nabla u|\geq \max\{16v_0,2\}$, we have $$
\frac{|\nabla u|^2}{16v^2_0}\geq8,\ \frac{|\nabla u|^2}{16}\geq\frac{1+|\nabla u|^2}{20}. $$
Then \begin{eqnarray*}
L\eta&\geq&\frac{K^2e^{K\phi}}{w^2}\frac{|\nabla u|^2}{16v_0^2}+ Ke^{K\phi}\left(-4n-\frac{1}{T}-4A-\frac{A}{2v_0}\right)\\ &\geq&\frac{K^2e^{K\phi}}{20v_0^2}+Ke^{K\phi}(-4n-\frac{1}{T}-4A-\frac{A}{2v_0})\\ &=&Ke^{K\phi}\left(\frac{K}{20v_0^2}-4n-\frac{1}{T}-4A-\frac{A}{2v_0}\right)>0, \end{eqnarray*} when we choose $\displaystyle{K=20v_0^2(4n+\frac{1}{T}+4A+\frac{A}{2v_0})+2}$, $\Omega_T=B_1(0)\times (0, 2T)$.
Therefore by (\ref{eq:grainter}), there holds $$
Lh-2g^{ij}\frac{w_{x_i}}{w}h_{x_j}\geq0\ \text{on}\ \{h>0\ \textrm{and}\ |\nabla u|>\max\{16v_0,2\}\}. $$
By maximum principle, \begin{eqnarray*}
(e^{\frac{K}{2}}-1)w(0,T)&=&h(0,T)\leq\max\limits_{h=0\ \textrm{or} \ |\nabla u|=\max\{16v_0,2\}}h\\ &\leq&(e^{2K}-1)\max\{\sqrt{1+(16v_0)^2},\sqrt{5}\}. \end{eqnarray*} Consequently, $w(0,T)\leq e^{2K}(3+16v_0)$. \end{proof}
\begin{rem}\label{rem:es} (1) In Theorem \ref{thm:es}, $\Omega_T$ can be replaced by $\Omega_T=B_R(x_0)\times(0,2T)$ and $v_0=u(x_0,T)$. Then the conclusion becomes $$
|\nabla u(x_0,T)|\leq e^{2K}(3+16\frac{v_0}{R}), $$ where $\displaystyle{K=20\frac{v_0^2}{R^2}\left(4n+\frac{R^2}{T}+\frac{4A}{R}+\frac{A}{2v_0}\right)}+2$. We can set $v(x,t)=\displaystyle{\frac{u(Rx+x_0,R^2t)}{R}}$ and then apply Theorem \ref{thm:es} to $v(x,t)$.
(2) When $u$ is a solution of (\ref{eq:graph}) for ``$+$'' without the assumption $u<0$, we can set $$v=u-M-\epsilon$$
where $M=\sup\limits_{\overline{\Omega}_T} |u|$ and $\epsilon>0$. Applying (1) in Remark \ref{rem:es} to $v$, we can deduce
$$|\nabla u(0,T)|\leq\left(3+16\frac{M-u(0,T)+\epsilon}{R}\right)e^{2\widetilde{K}_{\epsilon}},$$ where $\displaystyle{\widetilde{K}_{\epsilon}=\frac{20(M-u(0,T)+\epsilon)^2}{R^2}\left(4n+\frac{R^2}{T}+\frac{4A}{R} +\frac{A}{2(M+\epsilon-u(0,T))}\right)+2}$.
Letting $\epsilon\rightarrow0$, we obtain $$
|\nabla u(0,T)|\leq \left(3+32\frac{M}{R}\right)e^{2\widetilde{K}}, $$ where $\displaystyle{\widetilde{K}=\frac{80M^2}{R^2}\left(4n+\frac{R^2}{T}+\frac{4A}{R}\right) +\frac{20AM}{R^2}}+2$. \end{rem}
Using (2) of Remark \ref{rem:es} and the same method as in \cite{AAG}, we now prove the next corollary.
\begin{cor}\label{cor:es} For $s_1<s_2$, $\rho>0$ and $x_0\in\mathbb{R}^n$ we set $$ \Omega=B_{\rho}(x_0)\times(s_1,s_2). $$
Suppose that $u\in C^3(\Omega)$ solves the equation (\ref{eq:graph}) in $\Omega$ with $M=\sup\limits_{\overline{\Omega}}|u|<\infty$. For any $\epsilon>0$ there is a constant $C=C(M,\epsilon,n)$ such that
$$|\nabla u|\leq C \ \textrm{on}\ \Omega_{\epsilon}=B_{\rho-\epsilon}(x_0)\times(s_1+\epsilon^2,s_2).$$ \end{cor} \begin{rem}\label{rem:hes} (1) From Corollary \ref{cor:es} and \cite{LSU}, there exist constants $C_k(M,\epsilon,n)$ such that $$
|\nabla^k u|\leq C_k,\ (x,t)\in B_{\rho-2\epsilon}(x_0)\times(s_1+2\epsilon^2,s_2). $$ (2) Since $C$ and $C_k$ are all independent of $s_2$, if the solution $u$ exists for all $t>s_1$, then $s_2$ can be chosen as $\infty$. \end{rem}
\section{Intersections and the Sturmian theorem}
In this section we introduce the intersection number argument. The intersection number principle has many applications; in this paper, for example:
(1). A type of derivative estimate in Theorem \ref{thm:grad}.
(2). The flow $\Gamma(t)$ evolving by $V=-\kappa+A$ with some initial curve does not intersect itself (Lemma \ref{lem:alphad2}).
(3). The asymptotic behavior in Section 7.
Since the proofs in higher dimensions are no more difficult than in $\mathbb{R}^2$, some theorems and lemmas are proved in $\mathbb{R}^{n+1}$, $n\geq1$.
\textbf{Sturm's classical result} The Sturmian theorem states that the number of zeros (counted with multiplicity) of a solution of a linear parabolic equation of the type $$ u_t=a(x,t)u_{xx}+b(x,t)u_x+c(x,t)u $$ does not increase with time, provided that $u$ is defined on a rectangle $x_0\leq x\leq x_1$, $0<t<T$ and $u(x_j,t)\neq0$ for $j=0,1$ and all $t\in(0,T)$. This result also holds for the number of sign changes rather than the number of zeros of $u(\cdot,t).$
It is well known that the intersection number between two families of rotationally symmetric hypersurfaces $\Gamma_1(t)$ and $\Gamma_2(t)$ evolving by $V=-\kappa$ is non-increasing (\cite{A2}). The definition of the intersection number between two families of rotationally symmetric hypersurfaces is given below. However, this result is not true under the condition $V=-\kappa+A$. Indeed, as we will see later, the intersection number between two families of rotationally symmetric hypersurfaces evolving by $V=-\kappa+A$ may increase. In this section we give some results about this.
{\bf Horizontal and vertical graph equation } If $\Gamma(t)$ is a family of rotationally symmetric hypersurfaces in $\mathbb{R}^{n+1}$, then parts of $\Gamma(t)$ may be represented either as a horizontal graph, $r=u(x,t)$, or as a vertical graph, $x=v(r,t)$, where $(x,y_1,\cdots,y_n)\in\mathbb{R}^{n+1}$ and $r=\sqrt{y_1^2+y_2^2+\cdots+y_n^2}$.
If $\Gamma(t)$ is given as a horizontal graph, then $\Gamma(t)$ evolves by $V=-\kappa+A$ in $\mathbb{R}^{n+1}$ with the direction of the normal velocity $V$ chosen outward iff $u$ satisfies the horizontal graph equation \begin{equation}\label{eq:1horizontal} \frac{\partial u}{\partial t}=\frac{u_{xx}}{1+u_x^2}-\frac{n-1}{u}+A\sqrt{1+u_x^2}. \end{equation} If $\Gamma(t)$ is given as a vertical graph, then $\Gamma(t)$ evolves by $V=-\kappa+A$ in $\mathbb{R}^{n+1}$ iff $v$ satisfies the vertical graph equation \begin{equation}\label{eq:1vertical+} \frac{\partial v}{\partial t}=\frac{v_{rr}}{1+v_r^2}+\frac{n-1}{r}v_r+ A\sqrt{1+v_r^2}, \end{equation} or \begin{equation}\label{eq:1vertical-} \frac{\partial v}{\partial t}=\frac{v_{rr}}{1+v_r^2}+\frac{n-1}{r}v_r-A\sqrt{1+v_r^2}, \end{equation} where the sign of the last term is determined by the direction of the normal velocity $V$ (we choose ``$+$'' when $V$ points rightward and ``$-$'' when it points leftward).
{\bf Intersection number for rotationally symmetric hypersurfaces} Let two rotationally symmetric hypersurfaces $\Gamma_1(t)$ and $\Gamma_2(t)$ be given by $\Gamma_1(t)=\{(x,y)\in \mathbb{R}\times\mathbb{R}^{n}\mid r=u_1(x,t)\}$ and $\Gamma_2(t)=\{(x,y)\in \mathbb{R}\times\mathbb{R}^{n}\mid r=u_2(x,t)\}$. The intersection number between $\Gamma_1(t)$ and $\Gamma_2(t)$, denoted by $\mathcal{Z}[\Gamma_1(t),\Gamma_2(t)]$, is defined as the number of intersections between $u_1(\cdot,t)$ and $u_2(\cdot,t)$.
\begin{thm}\label{thm:sl} Let two smooth families of smooth, closed hypersurfaces given by $\Gamma_1(t)=\{(x,y)\in \mathbb{R}\times\mathbb{R}^n\mid r=u_1(x,t),a_1(t)\leq x\leq b_1(t)\}$, $\Gamma_2(t)=\{(x,y)\in \mathbb{R}\times\mathbb{R}^n\mid r=u_2(x,t),a_2(t)\leq x\leq b_2(t)\}$ evolve by $V=-\kappa+A$ in $\mathbb{R}^{n+1}$, $0<t<T$. Then either $\Gamma_1\equiv\Gamma_2$ for all $t\in(0,T)$, or the number of intersections of $\Gamma_1(t)$ and $\Gamma_2(t)$ is finite for all $t\in(0,T)$. In the second case, if $a_1(t)$, $b_1(t)$, $a_2(t)$ and $b_2(t)$ are all different and their order remains unchanged for all $t\in(0,T)$, this number is nonincreasing in time, and decreases whenever $\Gamma_1(t)$ and $\Gamma_2(t)$ have a tangential intersection. \end{thm} We only sketch the proof. For example, if the order of $a_1$, $b_1$, $a_2$, $b_2$ is given by $a_1(t)<a_2(t)<b_1(t)<b_2(t)$, $0<t<T$, the intersections lie only in the interval $[a_2(t),b_1(t)]$. Since $u_1(a_2(t),t)-u_2(a_2(t),t)\neq0$ and $u_1(b_1(t),t)-u_2(b_1(t),t)\neq0$, $0<t<T$, Theorem D in \cite{A1} shows that the intersection number between $u_1$ and $u_2$ is non-increasing and decreases whenever they intersect tangentially in $[a_2(t),b_1(t)]$. Consequently, the intersection number between $\Gamma_1(t)$ and $\Gamma_2(t)$ is non-increasing. The other orderings can be treated by the same method.
\begin{thm}\label{thm:grad} $\Gamma(t)=\{(x,y)\in\mathbb{R}^{n+1}\mid r=u(x,t),a_2(t)\leq x\leq b_2(t)\}$ is a smooth family of closed, smooth hypersurfaces in $\mathbb{R}^{n+1}$, $0<t<T$. If $\Gamma(t)$ evolves by $V=-\kappa+A$ in $\mathbb{R}^{n+1}$, there is a function $\sigma$: $\mathbb{R}_+\times\mathbb{R}_+\rightarrow\mathbb{R}$ such that $$
|u_x(x,t)|\leq \sigma(t,u(x,t)) $$ holds for $0<t<T$, $a_2(t)<x<b_2(t)$. The function $\sigma$ only depends on $M=\max\limits_{a_2(0)<x<b_2(0)} u(x,0)$ and $T$. \end{thm} \begin{proof} Let $w_{0}(r)\in C^{\infty}((0,+\infty))$, $w^{\prime}_0(r)\geq0$ and $$ x=w_0(r)=\left\{ \begin{array}{lcl} 0, \ 0\leq r<M+1\\ 1, \ r>M+2 \end{array}\right.. $$
\begin{figure}
\caption{Proof of Theorem \ref{thm:grad}}
\label{fig:grad}
\end{figure}
We let $w$ be the unique solution of the vertical equation (\ref{eq:1vertical-}) with the boundary condition $$ w_r(0,t)=0,\ t\geq0 $$ and initial condition $$ w(r,0)=w_0(r),\ r\geq0. $$ Differentiating (\ref{eq:1vertical-}) in $r$, \begin{equation}\label{eq:deriveeq} p_t=a(r,t)p_{rr}+b(r,t)p_r+c(r,t)p, \end{equation} where $p=w_r$, $a(r,t)=1/(1+w_r^2)$, $b(r,t)=-2w_rw_{rr}/(1+w_r^2)^2+(n-1)/r-Aw_r/\sqrt{1+w_r^2}$, $c(r,t)=-(n-1)/r^2$.
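For the reader's convenience we record the differentiation behind (\ref{eq:deriveeq}) term by term: writing $p=w_r$, $$ \partial_r\left(\frac{w_{rr}}{1+w_r^2}\right)=\frac{p_{rr}}{1+w_r^2}-\frac{2w_rw_{rr}}{(1+w_r^2)^2}\,p_r,\qquad \partial_r\left(\frac{n-1}{r}w_r\right)=\frac{n-1}{r}p_r-\frac{n-1}{r^2}p, $$ $$ \partial_r\left(-A\sqrt{1+w_r^2}\right)=-\frac{Aw_r}{\sqrt{1+w_r^2}}\,p_r, $$ which gives exactly the coefficients $a(r,t)$, $b(r,t)$ and $c(r,t)$ stated above.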
By the maximum principle, we have $w_r\geq 0$ for all $r,t>0$, and $\sup\limits_{r\geq0}w(r,t)$ is nonincreasing in time. It follows from classical estimates for parabolic equations that all derivatives of $w$ are uniformly bounded for $r,t\geq0$.
We note that $w_r(r,0)>0$ for $M+1<r<M+2$. Using properties of the Green's function, for any $\delta$ satisfying $0<\delta<M+AT$, there exists $A_{\delta,T}>0$, decreasing with respect to $\delta$, such that \begin{equation}\label{eq:derivativeblew} p(r,t)\geq e^{-\frac{A_{\delta,T}}{t}}, \end{equation} for $\delta\leq r\leq M+AT$, $0<t<T$.
Since the strong maximum principle implies that $p(r,t)>0$ for $r>0$, the inverse of $x=w(r,t)$ exists, denoted by $r=v(x,t)$. Since the normal velocity of $x=w(r,t)$ is leftward, the normal velocity of $r=v(x,t)$ is upward. Then $v(x,t)$ satisfies the horizontal graph equation (\ref{eq:1horizontal}) with the free boundary condition $$ v(a(t),t)=0,\ v_x(a(t),t)=\infty,\ \lim\limits_{x\rightarrow b(t)}v(x,t)=\infty,\ \lim\limits_{x\rightarrow b(t)}v_x(x,t)=\infty,\ t>0. $$ Let $\Sigma(t)=\{(x,y)\in \mathbb{R}^{n+1}\mid r=v(x,t),a(t)\leq x< b(t)\}$ and let $\Sigma_{\xi}(t)$ denote the translation of $\Sigma(t)$ given by $$ x=w(r,t)+\xi. $$ $\Sigma_{\xi}(t)$ can also be represented by $r=v(x-\xi,t)$, $a(t)+\xi\leq x<b(t)+\xi$. Let $a_1(t)$ and $b_1(t)$ be the end points of $\Sigma_{\xi}(t)$; then $a_1(t)=\xi+a(t)$, $b_1(t)=\xi+b(t)$. Obviously, for $(x_0,t_0)\in (a_2(t_0),b_2(t_0))\times(0,T)$, there exists $\xi\in \mathbb{R}$ such that $$ v(x_0-\xi,t_0)=u(x_0,t_0). $$ By the following Lemma \ref{lem:inters}, we can deduce that the graph of $u(x,t_0)$ intersects $v(x-\xi,t_0)$ only once. \begin{figure}
\caption{Proof of Theorem \ref{thm:grad}}
\label{fig:grad2}
\end{figure} Next we claim $$v_x(x_0-\xi,t_0)\geq u_x(x_0,t_0).$$ If not, i.e., if $v_x(x_0-\xi,t_0)< u_x(x_0,t_0)$, then there exists $\delta>0$ such that $$ u(x,t_0)>v(x-\xi,t_0), $$ for all $x\in(x_0,x_0+\delta)$. Since $\lim\limits_{x\rightarrow b_1(t_0)}v(x-\xi,t_0)=+\infty$, $\Sigma_{\xi}(t_0)$ intersects $\Gamma(t_0)$ at least twice. This yields a contradiction.
By the maximum principle, it is easy to see that $r=u(x,t)<M+At<M+AT$ for $a_2(t)\leq x\leq b_2(t)$, $0<t<T$. Combining this with (\ref{eq:derivativeblew}), there holds $$u_x(x_0,t_0)\leq\frac{1}{w_r(v(x_0-\xi,t_0),t_0)}\leq e^{\frac{A_{v(x_0-\xi,t_0),T}}{t_0}}=e^{\frac{A_{u(x_0,t_0),T}}{t_0}}:=\sigma(t_0,u(x_0,t_0)).$$
By considering the reflection $\widetilde{\Sigma}(0)=\{(x,y)\mid x=-w_0(r)\}$ and the equation (\ref{eq:1vertical+}) with $w_r(0,t)=0,\ t\geq0$ and $w(r,0)=w_0(r),\ r\geq0$, the bound for $-u_x(x_0,t_0)$ can be obtained similarly. \end{proof}
\begin{lem}\label{lem:inters} Let $\Sigma_{\xi}(t)$ and $\Gamma(t)$ be given as in Theorem \ref{thm:grad}. Then $\Sigma_{\xi}(t)$ intersects $\Gamma(t)$ at most once. \end{lem} \begin{proof} By the same argument as in Theorem \ref{thm:sl}, the intersection number between $\Sigma_{\xi}(t)$ and $\Gamma(t)$ is non-increasing provided that $a_1(t)$, $b_1(t)$, $a_2(t)$ and $b_2(t)$ are all different and that their order remains unchanged. So we only prove this result when the order of $a_1(t)$, $b_1(t)$, $a_2(t)$ and $b_2(t)$ changes.
{\bf Case 1.} Assume $a_1(t)<a_2(t)<b_1(t)< b_2(t)$ for $t<t_2$, $a_1(t)<a_2(t)<b_2(t)< b_1(t)$ for $t>t_2$, and that $\Sigma_{\xi}(t)$ does not intersect $\Gamma(t)$ for $t<t_2$. Then $\Sigma_{\xi}(t)$ does not intersect $\Gamma(t)$ for $t>t_2$. \begin{figure}
\caption{Case 1}
\label{fig:21}
\caption{Case 1}
\label{fig:22}
\end{figure}
Since $\lim\limits_{x\rightarrow b_1(t_2)}v(x-\xi,t_2)=+\infty$ and $u(b_2(t_2),t_2)=0$, there exists a positive $\delta$ independent of $t$ such that $v(x-\xi,t_2)>u(x,t_2)$, $b_1(t_2)-\delta<x<b_1(t_2)$. By continuity, there exists $\epsilon$ such that \begin{equation}\label{eq:1interlemma} v(b_1(t_2)-\delta-\xi,t)>u(b_1(t_2)-\delta,t),\ t_2-\epsilon\leq t<t_2+\epsilon \end{equation} and \begin{equation}\label{eq:2interlemma} v(x-\xi,t)>u(x,t),\ b_1(t_2)-\delta<x\leq b_2(t),\ t_2\leq t<t_2+\epsilon. \end{equation} The assumptions in this case imply the boundary condition $$ u(a_2(t),t)-v(a_2(t)-\xi,t)< 0,\ t_2-\epsilon\leq t<t_2+\epsilon $$ and the initial condition $$ u(x,t_2-\epsilon)<v(x-\xi,t_2-\epsilon),\ a_2(t_2-\epsilon)\leq x\leq b_1(t_2)-\delta. $$ Combining these with the other boundary condition (\ref{eq:1interlemma}) and using the maximum principle in the domain $$ \cup_{t_2-\epsilon\leq t<t_2+\epsilon}\left(\left[a_2(t),b_1(t_2)-\delta\right]\times\{t\}\right), $$ we obtain $$ u(x,t)<v(x-\xi,t),\ a_2(t)\leq x\leq b_1(t_2)-\delta,\ t_2-\epsilon\leq t<t_2+\epsilon. $$ By (\ref{eq:2interlemma}), $u(x,t)<v(x-\xi,t)$, $a_2(t)\leq x\leq b_2(t)$, $t_2\leq t<t_2+\epsilon$. It means that $\Sigma_{\xi}(t)$ does not intersect $\Gamma(t)$ for $t_2\leq t<t_2+\epsilon$. So by Theorem \ref{thm:sl}, $\Sigma_{\xi}(t)$ does not intersect $\Gamma(t)$ for $t>t_2$.
{\bf Case 2.} Assume $a_1(t)<a_2(t)$, $\Sigma_{\xi}(t)$ does not intersect $\Gamma(t)$, $t<t_3$ and $a_1(t_3)=a_2(t_3)$.
\begin{figure}
\caption{Case 2}
\label{fig:9}
\end{figure}
Since $\lim\limits_{x\rightarrow a_2(t)}u_x(x,t)=\infty$, there exist $\delta_1$ and $\epsilon$ such that $r=u(x,t)$ can be expressed as $x=h(r,t)$, $0\leq r\leq \delta_1$, $t_3-\epsilon<t<t_3+\epsilon$. The assumptions in this case imply that
$$ w(\delta_1,t)+\xi<h(\delta_1,t),\ t_3-\epsilon<t<t_3+\epsilon. $$
It is easy to see that $w(r,t)+\xi$ and $h(r,t)$ satisfy the vertical graph equation $$ \left\{ \begin{array}{lcl} \displaystyle{w_t=\frac{w_{rr}}{1+w_r^2}+\frac{n-1}{r}w_r-A\sqrt{1+w_r^2}, \ 0\leq r\leq\delta_1,\ t_3-\epsilon<t<t_3+\epsilon},\\ w_r(0,t)=0,\ t\geq0,\\ \end{array} \right. $$ and $w(r,t_3-\epsilon)+\xi< h(r,t_3-\epsilon)$. By the strong maximum principle, $w(r,t)+\xi<h(r,t)$, for $0\leq r<\delta_1,\ t_3-\epsilon<t<t_3+\epsilon$. This contradicts $a_1(t_3)=a_2(t_3)$, so this case does not happen.
{\bf Case 3.} Assume $a_2(t)<b_2(t)<a_1(t)<b_1(t)$, $t<t_6$ and $a_2(t)<a_1(t)<b_2(t)<b_1(t)$, $t>t_6$.
\begin{figure}
\caption{Case 3}
\label{fig:61}
\caption{Case 3}
\label{fig:62}
\end{figure}
Obviously, $\Sigma_{\xi}(t)$ does not intersect $\Gamma(t)$ for $t<t_6$. Since $\lim\limits_{x\rightarrow b_2(t)}u_x(x,t)=-\infty$ and $\lim\limits_{x\rightarrow a_1(t)}v_x(x-\xi,t)=\infty$, there exists $\epsilon$ such that $$ u_x(x,t)-v_x(x-\xi,t)<0,\ a_1(t)\leq x\leq b_2(t),\ t_6<t<t_6+\epsilon. $$ Since $u(a_1(t),t)-v(a_1(t)-\xi,t)>0$ and $u(b_2(t),t)-v(b_2(t)-\xi,t)<0$, $u(x,t)$ intersects $v(x-\xi,t)$ only once in $[a_1(t),b_2(t)]$, $t_6<t<t_6+\epsilon$. Consequently, $\Sigma_{\xi}(t)$ intersects $\Gamma(t)$ only once, $t_6<t<t_6+\epsilon$. So by Theorem \ref{thm:sl}, $\Sigma_{\xi}(t)$ intersects $\Gamma(t)$ at most once for $t>t_6$.
The other orderings can be treated similarly to the three cases above. We see that the intersection number increases only in Case 3.
Then we can conclude that
1. If $a_1(0)<a_2(0)$, then $\Sigma_{\xi}(t)$ does not intersect $\Gamma(t)$.
2. If $a_2(0)<a_1(0)<b_2(0)$, then $\Sigma_{\xi}(t)$ intersects $\Gamma(t)$ at most once.
3. If $b_2(0)<a_1(0)$, then $\Sigma_{\xi}(t)$ intersects $\Gamma(t)$ at most once. (Only in this case may the intersection number increase.)
We complete the proof. \end{proof}
\begin{rem}\label{rem:intersection1} The intersection number between two closed, compact, rotationally symmetric hypersurfaces $\Gamma_1(t)=\{(x,y)\in \mathbb{R}\times\mathbb{R}^n\mid r=u_1(x,t),a_1(t)\leq x\leq b_1(t)\}$, $\Gamma_2(t)=\{(x,y)\in \mathbb{R}\times\mathbb{R}^n\mid r=u_2(x,t),a_2(t)\leq x\leq b_2(t)\}$ is denoted by $\mathcal{Z}(t):=\mathcal{Z}[\Gamma_1(t),\Gamma_2(t)]$. If $\Gamma_i(t)$ evolve by $V=-\kappa+A$ in $\mathbb{R}^{n+1}$, then in view of Theorem \ref{thm:sl} and the proof of Lemma \ref{lem:inters}, we can similarly prove: \\ (a) $\mathcal{Z}(t)$ does not increase when $t$ satisfies $\mathcal{Z}[\Gamma_1(t),\Gamma_2(t)]>0$. \\ (b) If $\mathcal{Z}(t_0)=0$, then $\mathcal{Z}(t)\leq 1$, $t_0<t<T$.
Observing the proof of Case 3 in Lemma \ref{lem:inters}, the intersection number can increase, and then only once, only when $a_1(0)>b_2(0)$ or $a_2(0)>b_1(0)$ in this remark. The results in this remark can be proved similarly to Lemma \ref{lem:inters}.
By the same observation, since $\mathcal{Z}(0)\leq1$ in Lemma \ref{lem:inters}, there holds $\mathcal{Z}(t)\leq 1$ for $0<t<T$. \end{rem}
Using the intersection argument, we can prove the following theorem.
\begin{thm}\label{thm:gu} Let $\Gamma(t)$, $t\in [0,T)$, be a family of smooth hypersurfaces evolving by $V=-\kappa+A$ in $\mathbb{R}^{n+1}$. If $\Gamma(0)$ is obtained by rotating the graph of a function around the $x$-axis, then so are the $\Gamma(t)$ for $t\in[0,T).$ \end{thm}
For the proof of Theorem \ref{thm:gu}, we note that $\Gamma(t)$ is also rotationally symmetric because the equation is rotationally invariant. Since $\Gamma(0)$ is obtained by rotating the graph of a function around the $x$-axis, $\Gamma(0)$ can be written as $\Gamma(0)=\{(x,y)\in \mathbb{R}\times\mathbb{R}^n\mid r=v_0(x)\}$ for some function $v_0(x)$. It means that every vertical line $x=c$ intersects $\Gamma(0)$ at most once. Using the same argument as in Lemma \ref{lem:inters}, every line $x=c$ intersects $\Gamma(t)$ at most once. Then $\Gamma(t)$ can be written as $\{(x,y)\in \mathbb{R}\times\mathbb{R}^n\mid r=u(x,t)\}$. We omit the details.
For the following argument, we only consider the results in $\mathbb{R}^2$. In our problem, the curve evolving by $V=-\kappa+A$ may intersect itself at $r=0$. To overcome this difficulty, we give the definition of the $\alpha$-domain, first used in \cite{AAG}.
\begin{defn}\label{def:alphad} We say an open set $U\subset \mathbb{R}^{2}$ is an $\alpha$-domain if
(1). $U$ has the form $$ U=\{(x,y)\in\mathbb{R}^2\mid r<u(x)\}; $$
(2). $I=\{x\in\mathbb{R}\mid u(x)>0 \}$ is a bounded, connected interval, so that there exist $a_1<a_2$ with $\partial I=\{a_1,a_2\}$;
(3). $u$ is smooth on $I$;
(4). $\partial U$ intersects each cylinder $\partial C_{\rho}$ with $0<\rho\leq\alpha$ twice and these intersections are transverse, where $C_{\rho}=\{(x,y)\in\mathbb{R}^2\mid r<\rho\}$. \end{defn}
\begin{figure}
\caption{$\alpha$-domain}
\label{fig:alphadom}
\end{figure} From Figure \ref{fig:alphadom}, we observe that the boundary $\partial U$ of an $\alpha$-domain $U$ does not intersect itself at $y=0$. The condition (3) implies $\partial U$ is a smooth curve, except possibly at its endpoints $(a_1,0),(a_2,0)$. The condition (4) implies that there exist $\delta_1,\delta_2>0$ such that $$ u(a_1+\delta_1)=u(a_2-\delta_2)=\alpha, $$ and $$ u^{\prime}(x)=\left\{ \begin{array}{lcl} >0,\ x\in(a_1,a_1+\delta_1],\\ <0,\ x\in[a_2-\delta_2,a_2). \end{array} \right. $$
Therefore, the inverses of $u|_{[a_1,a_1+\delta_1]}$ and $u|_{[a_2-\delta_2,a_2]}$ exist; denote them by $v_1$, $v_2:[0,\alpha]\rightarrow\mathbb{R}$. By the implicit function theorem, they are smooth in $(0,\alpha]$. Moreover, $v_1^{\prime}(r)>0$, $v_2^{\prime}(r)<0$, $(0<r\leq\alpha)$ and $$ \partial U\cap C_{\alpha}=\{(x,y)\in\mathbb{R}^2\mid0\leq r\leq\alpha,\ x=v_i(r),\ i=1,2\}. $$ The two components of $\partial U\cap C_{\alpha}$ are called the left and right caps of $\partial U$.
\begin{lem}\label{lem:alphad2} Let $U$ be an $\alpha$-domain. Then there exists $t_U>0$ such that the open evolution $D(t)$ with $D(0)=U$ is an $(\alpha+At)$-domain for $0<t<t_U$. \end{lem}
To prove Lemma \ref{lem:alphad2}, we need the following lemma. \begin{lem}\label{lem:in} Let $(u,a,b)$ be the solution of \begin{equation}\label{eq:eq1} u_t=\frac{u_{xx}}{1+u_x^2}+A\sqrt{1+u_x^2}, \ x\in(a(t),b(t)), \ 0<t< T, \end{equation} \begin{equation}\label{eq:eq2} u(a(t),t)=0,\ u(b(t),t)=0,\ 0\leq t< T, \end{equation} \begin{equation}\label{eq:eq3} u_x(a(t),t)=+\infty,\ u_x(b(t),t)=-\infty,\ 0\leq t< T, \end{equation} \begin{equation}\label{eq:eq4} u(x,0)=u_0(x),\ a(0)\leq x\leq b(0), \end{equation} where $u_0\in C[a(0),b(0)]\cap C^1(a(0),b(0))$.
We let $\gamma_1(t)$ consist of the following three parts: $\gamma_{11}(t)=\{(x,y)\in \mathbb{R}^2\mid x=a(t), y<0\}$, $\gamma_{12}(t)=\{(x,y)\in \mathbb{R}^2\mid x=b(t), y<0\}$ and $\gamma_{13}(t)=\{(x,y)\in \mathbb{R}^2\mid y=u(x,t), a(t)\leq x\leq b(t)\}$. For $C\in \mathbb{R}$, denote $\gamma_2(t)=\{(x,y)\in \mathbb{R}^2\mid y=C+At\}$.
Then the intersection number $\mathcal{Z}[\gamma_1(t),\gamma_2(t)]$ is non-increasing in $t\in [0,T)$. \end{lem} \begin{proof} It is sufficient to show that for each $t_1\in(0,T)$ there exists $\epsilon>0$ such that $\mathcal{Z}[\gamma_1(t),\gamma_2(t)]$ is non-increasing on $(t_1-\epsilon,t_1+\epsilon)$. For convenience, we denote $a(t_1)=x_1$ and $b(t_1)=x_2$.
We prove this result in three cases.
First, suppose $C+At_1\leq0$. Since $u_x(x_1,t_1)=+\infty$, $u_x(x_2,t_1)=-\infty$ and $u(x_1,t_1)=u(x_2,t_1)=0$, there exist $\epsilon$, $\delta>0$ such that \begin{equation}\label{eq:481interlemma} u(x_1+\delta,t)>C+At,\ u(x_2-\delta,t)>C+At,\ t_1-\epsilon\leq t< t_1+\epsilon, \end{equation} \begin{equation}\label{eq:482interlemma} u_x(x,t)>0,\ x\in(a(t),x_1+\delta),\ u_x(x,t)<0,\ x\in(x_2-\delta,b(t)),\ t_1-\epsilon\leq t< t_1+\epsilon, \end{equation} and $$ (x_1+\delta,x_2-\delta)\subset(a(t),b(t)),\ t_1-\epsilon\leq t< t_1+\epsilon. $$
\textbf{Case 1.} $C+At_1<0$.
Let $\mathcal{Z}[\gamma_1(t),\gamma_2(t)]=h(t)+\mathcal{Z}[\gamma_{13}(t),\gamma_2(t)]$, where $h(t)$ denotes the intersection number between $\gamma_{2}(t)$ and the half lines $\gamma_{11}(t)$, $\gamma_{12}(t)$.
In this case, there exists $\epsilon$ such that $C+A(t_1+\epsilon)<0$. Then $h(t)\equiv2$, $t_1-\epsilon\leq t< t_1+\epsilon$. (\ref{eq:481interlemma}) and (\ref{eq:482interlemma}) imply $\mathcal{Z}[\gamma_{13}(t),\gamma_2(t)]=\mathcal{Z}_{[x_1+\delta,x_2-\delta]}[\gamma_{13}(t),\gamma_2(t)]$ for $t\in(t_1-\epsilon,t_1+\epsilon)$, where $\mathcal{Z}_{I}[\Gamma_{1}(t),\Gamma_2(t)]$ denotes the intersection number between $\Gamma_1(t)$ and $\Gamma_2(t)$ in the set $I$. By (\ref{eq:481interlemma}) again and Theorem D in \cite{A1}, $\mathcal{Z}_{[x_1+\delta,x_2-\delta]}[\gamma_{13}(t),\gamma_2(t)]$ does not increase for $t_1-\epsilon\leq t< t_1+\epsilon$.
Therefore $\mathcal{Z}[\gamma_1(t),\gamma_2(t)]$ is non-increasing for $t\in(t_1-\epsilon,t_1+\epsilon)$.
\textbf{Case 2.} $C+At_1=0$.
For $t_1\leq t< t_1+\epsilon$, obviously $h(t)=0$. Since $u(a(t),t)=u(b(t),t)=0$, by (\ref{eq:481interlemma}) and (\ref{eq:482interlemma}), the line $y=C+At$ intersects $u(\cdot,t)$ exactly twice in $[a(t),x_1+\delta)\cup(x_2-\delta,b(t)]$. Then $\mathcal{Z}_{[a(t),x_1+\delta)\cup(x_2-\delta,b(t)]}[\gamma_{13}(t),\gamma_2(t)]=2$, $t_1\leq t< t_1+\epsilon$. Therefore, $$ h(t)+\mathcal{Z}_{[a(t),x_1+\delta)\cup(x_2-\delta,b(t)]}[\gamma_{13}(t),\gamma_2(t)]=2,\ t_1\leq t< t_1+\epsilon. $$
For $t_1-\epsilon< t\leq t_1$, obviously $C+At<0$, so $h(t)\equiv2$. (\ref{eq:481interlemma}) and (\ref{eq:482interlemma}) imply that $\mathcal{Z}_{[a(t),x_1+\delta)\cup(x_2-\delta,b(t)]}[\gamma_{13}(t),\gamma_2(t)]=0$, $t_1-\epsilon< t\leq t_1$. Then we have $$ h(t)+\mathcal{Z}_{[a(t),x_1+\delta)\cup(x_2-\delta,b(t)]}[\gamma_{13}(t),\gamma_2(t)]=2,\ t_1-\epsilon< t< t_1+\epsilon. $$
On the other hand, by (\ref{eq:481interlemma}) and Theorem D in \cite{A1}, $\mathcal{Z}_{[x_1+\delta, x_2-\delta]}[\gamma_{13}(t),\gamma_2(t)]$ is non-increasing for $t_1-\epsilon<t< t_1+\epsilon$.
Therefore $\mathcal{Z}[\gamma_1(t),\gamma_2(t)]$ is non-increasing for $t\in(t_1-\epsilon,t_1+\epsilon)$.
\begin{figure}
\caption{Proof of the case 3 in Lemma \ref{lem:in}}
\label{fig:lemalphadom2}
\end{figure}
\textbf{Case 3.} $C+At_1>0$.
In this case, there exists $\epsilon$ such that $C+A(t_1-\epsilon)>0$. Then there hold
$$h(t)\equiv0$$
and $$C+At>u(a(t),t)=u(b(t),t)=0,$$ $t\in(t_1-\epsilon,t_1+\epsilon)$. So by Theorem D in \cite{A1}, $\mathcal{Z}[\gamma_{13}(t),\gamma_2(t)]$ is non-increasing and finite for $t\in(t_1-\epsilon,t_1+\epsilon)$. Consequently, $\mathcal{Z}[\gamma_1(t),\gamma_2(t)]$ is finite and non-increasing in $t\in(t_1-\epsilon,t_1+\epsilon)$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:alphad2}] Since $ U$ is an $\alpha$-domain, using Theorem \ref{thm:partialUmeancurvature}, $\partial D(t)=\{(x,y)\in\mathbb{R}^2\mid |y|=u(x,t), a(t)\leq x\leq b(t)\}$, where $(u,a,b)$ satisfies (\ref{eq:eq1}), (\ref{eq:eq2}), (\ref{eq:eq3}). Moreover, there exists a maximal time $T_U>0$ such that $\partial D(t)$ is smooth, $0<t<T_U$.
Since $U$ is not contained in the cylinder $\overline{C_{\alpha}}$, there exists a small ball $B_{\epsilon}(P)\subset U\setminus\overline{C_{\alpha}}$. By (1) in Theorem \ref{thm:order}, $D(t)$ contains the ball $B_{\epsilon(t)}(P)$ for $0<t<\delta_1$, where $\epsilon(t)$ satisfies \begin{equation}\label{eq:ball2} \epsilon^{\prime}(t)=A-\frac{1}{\epsilon(t)},\ 0<t<\delta_1, \end{equation} with $\epsilon(0)=\epsilon$. Since $B_{\epsilon}(P)\cap\overline{C_{\alpha}}=\emptyset$, by (1b) in Theorem \ref{thm:conti}, there exists $t_1>0$ such that $B_{\epsilon(t)}(P)\cap \overline{C_{\alpha+At}}=\emptyset$, $0<t<t_1$.
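We remark that (\ref{eq:ball2}) can be integrated explicitly. Separating variables (on any interval where $A\epsilon(t)\neq1$), $$ \frac{\epsilon\,d\epsilon}{A\epsilon-1}=dt \quad\Longrightarrow\quad \frac{\epsilon(t)}{A}+\frac{1}{A^2}\ln|A\epsilon(t)-1|=t+\frac{\epsilon}{A}+\frac{1}{A^2}\ln|A\epsilon-1|, $$ since $\frac{\epsilon}{A\epsilon-1}=\frac{1}{A}+\frac{1}{A(A\epsilon-1)}$. In particular, $\epsilon(t)$ is decreasing when $\epsilon(t)<1/A$ and increasing when $\epsilon(t)>1/A$, so the ball $B_{\epsilon(t)}(P)$ persists for a short time in either case.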
For $0<\rho<\alpha+At_0$, $0<t_0\leq t_{U}^{\alpha}$, where $t_{U}^{\alpha}=\min\{\delta_1,t_1\}$, the line $y=\rho-At_0$ intersects $\gamma_1(0)$ exactly twice ($\gamma_1(t)$ is constructed in Lemma \ref{lem:in}). Lemma \ref{lem:in} implies that $y=\rho$ intersects $\gamma_1(t_0)$ at most twice. Consequently, $y=\rho$ intersects $y=u(x,t_0)$ at most twice, for $0<t_0< \min\{t_{U}^{\alpha},T_U\}$. On the other hand, there holds $B_{\epsilon(t_0)}(P)\subset D(t_0)\setminus\overline{C_{\alpha+At_0}}$, $0<t_0< \min\{t_{U}^{\alpha},T_U\}$, so $y=\rho$ intersects $y=u(x,t_0)$ at least twice, $0<t_0< \min\{t_{U}^{\alpha},T_U\}$. Therefore $\partial C_{\rho}$ intersects $\partial D(t_0)$ exactly twice, $0<t_0<\min\{t_{U}^{\alpha},T_U\}$.
Choosing $t_U=\min\{t_U^{\alpha},T_U\}$, $D(t)$ is an $(\alpha+At)$-domain, $0<t<t_U$. The proof is completed. \begin{figure}
\caption{Proof of Lemma \ref{lem:alphad2}}
\label{fig:4711}
\caption{Proof of Lemma \ref{lem:alphad2}}
\label{fig:4712}
\end{figure} \end{proof}
\begin{prop}\label{pro:sin} For $t_U^{\alpha}$ and $T_U$ given in the proof of Lemma \ref{lem:alphad2}, $t_U^{\alpha}\leq T_U$. \end{prop}
To prove the previous proposition we need the following lemma.
\begin{lem}\label{lem:sing1} Assume that $D(t)=\{(x,y)\mid |y|<u(x,t), a(t)\leq x\leq b(t)\}$ is a $\rho$-domain, $0<t<T$. Let $w_1<w_2$ be such that $$ C_\rho\cap\partial D(t)=\{(x,y)\mid x=w_1(y,t),\ x=w_2(y,t)\}. $$
Then $$\lim\limits_{t\rightarrow T}w_1(y,t)=w_1(y,T)\ \ \ \ \ \textrm{and} \ \ \ \ \lim\limits_{t\rightarrow T}w_2(y,t)=w_2(y,T)$$
exist and these convergences are uniform for $|y|\leq\frac{\rho}{2}$. Moreover, $a(T)=:v_1(0,T)<v_1(r,T)$ and $b(T)=:v_2(0,T)>v_2(r,T)$, $0<r<\frac{\rho}{2}$, where $v_1(r,t)=w_1(y,t)$ and $v_2(r,t)=w_2(y,t)$. \end{lem}
\begin{proof} $w_1(y,t)$ and $w_2(y,t)$ satisfy the equation (\ref{eq:graph}) with ``$\mp$'' respectively. We only give the proof for $w_1(y,t)$. Since $w_1$ is uniformly bounded, Corollary \ref{cor:es} and Remark \ref{rem:hes} imply that the derivatives $\frac{\partial ^j}{\partial y^j} w_1$, $j=1,2$, are uniformly bounded for $0\leq|y|\leq\frac{\rho}{2}$, $\frac{T}{2}\leq t<T$. Consequently, $\frac{\partial w_{1}}{\partial t}$ is bounded for $0\leq|y|\leq\frac{\rho}{2}$, $\frac{T}{2}\leq t<T$. So there exists $w_1(y,T)$ such that $w_1(y,t)$ converges to $w_1(y,T)$ uniformly for $0\leq|y|\leq\frac{\rho}{2}$ as $t\rightarrow T$.
Note that there hold $$ \frac{\partial v_1}{\partial r}(\frac{\rho}{2},t)>0,\ \frac{\partial v_1}{\partial r}(0,t)=0,\ 0<t<T $$ and $$ \frac{\partial v_1}{\partial r}(r,0)>0,\ 0< r<\frac{\rho}{2}. $$
Since $p=\frac{\partial v_1}{\partial r}$ satisfies (\ref{eq:deriveeq}), maximum principle implies
$$ \frac{\partial v_1}{\partial r}>0,\ 0<r<\frac{\rho}{2},\ 0<t\leq T. $$ Therefore $v_1(0,T)<v_1(r,T)$, for $0<r<\frac{\rho}{2}$. \end{proof}
\begin{proof}[Proof of Proposition \ref{pro:sin}.] Suppose, for contradiction, that $T_U<t_U^{\alpha}$. By Lemma \ref{lem:alphad2}, there exists $\rho>0$ such that $D(t)$ is a $\rho$-domain for $0<t<T_U$.
We divide $\partial D(t)$ into two parts: $\partial D(t)=(\partial D(t)\cap\{r< \rho/2\})\cup(\partial D(t)\cap\{r\geq \rho/2\})$. \\ {\bf Step 1.} $\partial D(t)\cap\{r< \rho/2\}$
Since $D(t)$ is a $\rho$-domain, there exist $w_1<w_2$ such that $\partial D(t)\cap\{r<\rho\}=\{(x,y)\mid x=w_1(y,t), |y|<\rho\}\cup\{(x,y)\mid x=w_2(y,t), |y|<\rho\}$. By the same argument as in Lemma \ref{lem:sing1}, $\frac{\partial ^j}{\partial y^j} w_i$, $j=1,2$, $i=1,2$, are uniformly bounded for $0\leq|y|\leq\frac{\rho}{2}$, $\frac{T_U}{2}\leq t<T_U$. Therefore, the mean curvature of $\partial D(t)\cap\{r< \rho/2\}$ is bounded for $\frac{T_U}{2}\leq t<T_U$. \\ {\bf Step 2.} $\partial D(t)\cap\{r\geq\rho/2\}$
Recalling $\partial D(t)=\{(x,y)\mid |y|=u(x,t),a(t)\leq x\leq b(t)\}$, by Lemma \ref{lem:sing1}, there hold $a(T_U)<v_1(\rho/2,T_U)$ and $b(T_U)>v_2(\rho/2,T_U)$. Then for any $\epsilon$ small enough and $t$ close enough to $T_U$, \begin{equation}\label{eq:subset1} (v_1(\rho/2,t),v_2(\rho/2,t))\subset (a(T_U)+\epsilon,b(T_U)-\epsilon). \end{equation}
Corollary \ref{cor:es} and Remark \ref{rem:hes} imply that $u_x$ and $u_{xx}$ are uniformly bounded for $x\in(a(T_U)+\epsilon,b(T_U)-\epsilon)$, $t$ close to $T_U$. Then (\ref{eq:subset1}) implies that $u_x$ and $u_{xx}$ are uniformly bounded for $x\in(v_1(\rho/2,t),v_2(\rho/2,t))$, $t$ close to $T_U$. Hence the curvature of $\partial D(t)\cap\{r\geq\rho/2\}$ is bounded for $t$ close to $T_U$.
Consequently, the curvature of $\partial D(t)$ is uniformly bounded as $t\uparrow T_U$. This contradicts the fact that $\partial D(t)$ becomes singular at $T_U$. \end{proof}
\begin{rem}\label{rem:time} In Lemma \ref{lem:alphad2}, $0<t<\min\{t_{U}^{\alpha},T_U\}$ can be replaced by $0<t<t_{U}^{\alpha}$. Seeing the choice of $t_{U}^{\alpha}$, if $U\subset W$, $t_{U}^{\alpha}\leq t_{W}^{\alpha}$. \end{rem}
{\bf Intersection number principle} Lemma \ref{lem:inters} and Lemma \ref{lem:alphad2} show the possible intersection number between two curves evolving by $V=-\kappa+A$. Here we want to introduce a more general result about the intersection number. Consider the following problem, which we call (Q): \begin{equation} \left\{ \begin{array}{lcl} \displaystyle{u_t=\frac{u_{xx}}{1+u_x^2}+A\sqrt{1+u_x^2}},\ x\in(a(t),b(t)),\ 0<t< T,\\ u(a(t),t)=0,\ u(b(t),t)=0,\ 0\leq t< T,\\ u_x(a(t),t)=\tan\theta_-(t),\ u_x(b(t),t)=-\tan\theta_+(t),\ 0\leq t< T,\\ u(x,0)=u_0(x),\ a(0)\leq x\leq b(0), \end{array} \right.\tag{Q} \end{equation} where $u_0\in C[a(0),b(0)]\cap C^1(a(0),b(0))$ and $\theta_{\pm}(t)$ are smooth functions with values in $[0,\pi/2]$. Let $$ \gamma_1(t):=\left\{ \begin{array}{lcl} \{(x,y)\mid y=\tan\theta_-(t)(x-a(t)),y<0\},\ \theta_-(t)<\pi/2\\ \{(x,y)\mid x=a(t),y<0\},\ \theta_-(t)=\pi/2, \end{array} \right. $$ $$ \gamma_2(t):=\left\{ \begin{array}{lcl} \{(x,y)\mid y=-\tan\theta_+(t)(x-b(t)),y<0\},\ \theta_+(t)<\pi/2\\ \{(x,y)\mid x=b(t),y<0\},\ \theta_+(t)=\pi/2, \end{array} \right. $$ and $$ \gamma_3(t):=\{(x,y)\mid y=u(x,t),a(t)\leq x\leq b(t)\}. $$ The extension curve of $u(\cdot,t)$ is given by $$ \gamma(t):=\gamma_1(t)\cup\gamma_2(t)\cup\gamma_3(t). $$ \begin{prop}\label{pro:intersection} Let $u^{1}(x,t)$, $a^{1}(t)<x<b^{1}(t)$, be a solution of (Q) for $\theta_{\pm}^1(t)\in[0,\pi/2)$, and let $u^2(x,t)$, $a^{2}(t)<x<b^{2}(t)$, be a solution of (Q) for $\theta_{\pm}^2(t)=\pi/2$, for $0\leq t<T$. Let $\gamma^{i}(t)$ be the extension curve of $u^i(x,t)$, $i=1,2$. Then $\mathcal{Z}[\gamma^{1}(t),\gamma^{2}(t)]$ is non-increasing in $t\in[0,T)$ and is finite for each $t\in[0,T)$. Moreover, $\mathcal{Z}[\gamma^{1}(t),\gamma^{2}(t)]$ drops whenever $\gamma^{1}(t)$ intersects $\gamma^{2}(t)$ tangentially. \end{prop} The proof of this proposition is similar to that of Lemma \ref{lem:in} above or of Proposition 2.4 in \cite{GMSW}; we omit it.
\begin{rem}\label{rem:1matanointersection} (1). Proposition 2.4 in \cite{GMSW} only gives the result for $\theta_{\pm}^i\in(0,\pi/2)$, $i=1,2$.
(2). For $\theta_{\pm}^i=\pi/2$, $i=1,2$, the results in Proposition \ref{pro:intersection} are not true. Indeed, this condition is the same as in Remark \ref{rem:intersection1}: \\ (a). If $\mathcal{Z}[u^1(\cdot,0),u^2(\cdot,0)]>0$, then $\mathcal{Z}[u^1(\cdot,t),u^2(\cdot,t)]$ will not increase for $0<t<T$ provided that $\mathcal{Z}[u^1(\cdot,t),u^2(\cdot,t)]>0$, $0<t<T$. \\ (b). If $\mathcal{Z}[u^1(\cdot,0),u^2(\cdot,0)]=0$, then $\mathcal{Z}[u^1(\cdot,t),u^2(\cdot,t)]\leq 1$, $0<t<T$. \end{rem}
\section{Proof of Theorem \ref{thm:exist} and Theorem \ref{thm:fattening1}}
Denote $U=\{(x,y)\in \mathbb{R}^2\mid |y|<u_0(x),-b_0\leq x\leq b_0\}$, where $u_0$ is given by (\ref{eq:ineven}). By the assumptions on $u_0$ in Section 1, we know that $U\cap\{x\geq0\}$ is an $\alpha$-domain with smooth boundary, for some $\alpha>0$.
We choose a vector field $X\in C^1(\mathbb{R}^{2}\setminus\{(0,0)\};\mathbb{R}^{2})$ such that
(i) At any $P\in \partial U$ not on the $x$-axis, $\langle X,\textbf{n}(P)\rangle<0$, where $\textbf{n}(P)$ is the inward unit normal vector at $P$.
(ii) near $(0,0)$ we set $X((x,y))=(0,-y/|y|)$, and we set $X=(-1,0)$ near $(b_0,0)$ and $X=(1,0)$ near $(-b_0,0)$. \\ We note that $X$ is not defined at $(0,0)$.
Since $X\neq0$ on $\partial U\setminus \{(0,0)\}$, there exists a neighbourhood $V\supset\partial U$ such that $|X|\geq \delta$ in $V\setminus \{(0,0)\}$ for some $\delta>0$.
\begin{prop}\label{pro:sigma2} For $\rho$ small enough, there exists a smooth curve $\Sigma\subset V\setminus\{(0,0)\}$ with
(i) $X(P)\notin T_P\Sigma$ at all $P\in\Sigma$, i.e., $\Sigma$ is transverse to the vector field $X$;
(ii) $\Sigma=\partial U$ in $\{(x,y)\mid|y|\geq2\rho\}$;
(iii) $\Sigma\cap\{(x,y)\mid|y|\leq\rho\}$ consists of the discs $\Delta_{\pm c}=\{(\pm c,y)\mid|y|\leq\rho\}$ and the pipe $B_d=\{(x,y)\mid-d \leq x\leq d,|y|=\rho\}$. \end{prop}
\begin{figure}
\caption{Proof of Proposition \ref{pro:sigma2}}
\label{fig:sigma2}
\end{figure} \begin{proof} Because $U\cap\{x\geq0\}$ is an $\alpha$-domain, there exist $\delta_j$, $\gamma_j$ with $0<\delta_j<\gamma_j$ such that $$ u_0(\delta_j)=u_0(\gamma_j)=u_0(-\delta_j)=u_0(-\gamma_j)=\frac{\alpha}{2^j} $$ and
$$\partial U\cap C_{\alpha}=\{(x,y)\mid x=\pm v(y), |y|<\alpha\}\cup\{(x,y)\mid x=\pm w(y),|y|<\alpha\},$$
where $v,w\in C^{\infty}((-\alpha,\alpha))$ and $0<v(y)<w(y)$ for $|y|<\alpha$.
We let $w_j\in C^{\infty}((-\alpha/2^{j-1},\alpha/2^{j-1}))$ be defined as follows: $$ w_j(y)=\left\{ \begin{array}{lcl}
\gamma_{j+2},\ 0\leq\displaystyle{|y|<\frac{\alpha}{2^{j+1}}}\\
w(y),\ \displaystyle{\frac{\alpha}{2^{j}}<|y|<\frac{\alpha}{2^{j-1}}}, \end{array} \right.. $$
and $u_j\in C^{\infty}((-\delta_{j-1},\delta_{j-1}))$ is defined as follows: $$ u_j(x)=\left\{ \begin{array}{lcl} \displaystyle{\frac{\alpha}{2^{j+1}}},\ x\in[0,\delta_{j+2}]\\ u_0(x),\ x\in[\delta_j,\delta_{j-1}) \end{array} \right.. $$
Let $\Sigma_j$ consist of three parts: $\{(x,y)\mid |y|=u_j(x),\ x\in(-\delta_j,\delta_j)\}$, $\{(x,y)\mid x=\pm w_j(y), |y|<\alpha/2^j\}$ and $\partial U\cap \{|y|\geq \alpha/2^{j}\}$. It is easy to see that, for $j$ sufficiently large, $\Sigma_j\subset V\setminus \{(0,0)\}$ satisfies (i), (ii), (iii) with $c=\gamma_{j+2}$, $\rho=\alpha/2^{j+1}$ and $d=\delta_{j+2}$. \end{proof}
Let $\sigma(P,t):\Sigma\times(-\delta,\delta)\rightarrow V$ ($V$ is given at the beginning of this section and $\Sigma$ is given by Proposition \ref{pro:sigma2}) denote the flow generated by the vector field $X$ in $\mathbb{R}^{2}$. Precisely, $\sigma(P,t)$ is defined as follows: $$ \left\{ \begin{array}{lcl} \displaystyle{\frac{d\sigma(P,t)}{dt}=X(\sigma(P,t))},\ P\in \Sigma,\\ \sigma(P,0)=P,\ \ \ \ \ P\in \Sigma. \end{array} \right. $$
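The flow $\sigma$ can be approximated by integrating this ODE numerically. The following sketch (forward Euler; the step count and names are our own illustrative choices) reproduces, for the piece of the field $X=(0,-y/|y|)$ from the construction above, the straight-line motion of a point starting on the upper part of the pipe:

```python
def flow(P, t, X, steps=1000):
    """Approximate sigma(P, t) solving d(sigma)/dt = X(sigma),
    sigma(P, 0) = P, by forward Euler (illustrative only)."""
    x, y = P
    h = t / steps
    for _ in range(steps):
        vx, vy = X((x, y))
        x, y = x + h * vx, y + h * vy
    return (x, y)

# The field near the origin from the construction of X: X = (0, -y/|y|).
X = lambda p: (0.0, -p[1] / abs(p[1]))
```

For a point $(x,\rho)$ with $y$ staying positive, the field is constant $(0,-1)$ along the trajectory, so the Euler iteration is exact up to rounding and returns $(x,\rho-t)$.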
By (i) in Proposition \ref{pro:sigma2}, for any $C^{1}$ function $u:\Sigma\rightarrow\mathbb{R}$, the image of $u$ under $\sigma$, namely $\{\sigma(P,u(P))\mid P\in \Sigma\}$, is a $C^1$ curve. Conversely, for any curve $\Gamma\subset V$ which is $C^1$ close to $\Sigma$, there exists a unique $C^1$ function $u:\Sigma\rightarrow\mathbb{R}$ such that $\Gamma=\{\sigma(P,u(P))\mid P\in \Sigma\}$. In other words, the map $\sigma(\cdot,t)$ defines a new coordinate from $\Sigma$ to $V$. Therefore, if $\Gamma(t)\subset V$ $(0<t<T)$ is a smooth family of smooth curves $C^1$ close to $\Sigma$, there exists a unique function $u \in C^\infty(\Sigma\times(0,T))$ such that $\Gamma(t)=\{\sigma(P,u(P,t))\mid P\in\Sigma\}$. Let $z$ be the local coordinate on an open subset of $\Sigma$. If $\Gamma(t)$ evolves by $V=-\kappa+A$, then in this coordinate $u$ satisfies the equation \begin{equation}\label{eq:para1} \frac{\partial u}{\partial t}=a(z,u,u_z)\frac{\partial^2u}{\partial z^2}+b(z,u,u_z). \end{equation}
Here $a$, $b$ are smooth functions of their arguments (see \cite{A2}, Section 3). Since $a$ is always positive, (\ref{eq:para1}) is a parabolic equation.
For example, consider the flow $\sigma(\cdot,t)$ defined above. One easily deduces that $$ \sigma(P,t)=\left\{ \begin{array}{lcl} (x,\rho-t),\ P\in B_d,\\ (-c+t,y),\ P\in \Delta_{-c},\\ (c-t,y),\ P\in \Delta_{c}, \end{array} \right. $$ where we choose the local coordinates: \\
(1). on $B_d$, $(x,\rho y)$ for $|y|=1$; \\ (2). on $\Delta_{\pm c}$, $(\pm c,y)$.
Since $y=\pm1$ on $B_d$, $u$ depends only on $x$. Therefore, on $B_d$, $a(x,u,u_x)=1/(1+u_x^2)$ and $b(x,u,u_x)=-A\sqrt{1+u_x^2}$, so $u$ satisfies \begin{equation}\label{eq:2dimhorieq} u_t=\frac{u_{xx}}{1+u_x^2}-A\sqrt{1+u_x^2}. \end{equation} On $\Delta_{\pm c}$, $u$ depends only on $y$, and $a(y,u,u_y)=1/(1+u_y^2)$, $b(y,u,u_y)=-A\sqrt{1+u_y^2}$. Therefore $u$ satisfies (\ref{eq:graph}) with sign ``$-$'' and $n=1$.
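Equation (\ref{eq:2dimhorieq}) can be discretized in the obvious way. The following explicit Euler sketch (grid, step sizes and boundary treatment are our own choices, not taken from the construction above) illustrates that data constant in $x$ simply translate downward with speed $A$, as the equation predicts:

```python
import math

def step(u, dx, dt, A):
    """One explicit Euler step of u_t = u_xx/(1+u_x^2) - A*sqrt(1+u_x^2)
    on a uniform grid, with homogeneous Neumann ends (illustrative)."""
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        ux = (u[i + 1] - u[i - 1]) / (2 * dx)              # central u_x
        uxx = (u[i + 1] - 2 * u[i] + u[i - 1]) / dx ** 2   # central u_xx
        new[i] = u[i] + dt * (uxx / (1 + ux ** 2) - A * math.sqrt(1 + ux ** 2))
    new[0], new[-1] = new[1], new[-2]                      # Neumann ends
    return new
```

Starting from $u\equiv1$ with $A=1$ and time step $10^{-3}$, ten steps lower the graph uniformly to $u\equiv0.99$.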
\begin{rem} In $\mathbb{R}^{n+1}$, the coefficient $b$ obtained above may not be smooth. For example, $$ u_t=\frac{u_{xx}}{1+u_x^2}+\frac{n-1}{\rho-u}-A\sqrt{1+u_x^2}, $$
on $\{(x,y)\in\mathbb{R}\times\mathbb{R}^n\mid |y|=\rho, -d<x<d\}$. In this case, $b=\frac{n-1}{\rho-u}-A\sqrt{1+u_x^2}$, which is not smooth at $u=\rho$. This is the main difference between dimension two and higher dimensions. \end{rem}
\begin{lem}\label{lem:max} Let $v(x,t)$ be a smooth function on $V\times(0,T)$, where $V$ is a compact set, and set $$ m(t)=\max\{v(x,t)\mid x\in V\}. $$ Then there exists $P_t\in V$ such that $v(P_t,t)=m(t)$ and $m^{\prime}(t)=v_t(P_t,t)$ for $t>0$. \end{lem} This is a well-known result; see, for example, \cite{M}.
\begin{prop}\label{pro:uniq2} Let $\Gamma_1$, $\Gamma_2$ be two families of curves with $\sigma^{-1}(\Gamma_j)$ the graph of $u_j(\cdot,t)$ for certain $u_j\in C(\Sigma\times[0,T))$. Assume the $u_j$ are smooth on $\Sigma\times(0,T)$ and on $(\Sigma\setminus(\Delta_{\pm c}\cup B_d))\times[0,T)$. If $\Gamma_1(0)=\Gamma_2(0)$, then $\Gamma_1(t)=\Gamma_2(t)$ for $0\leq t<T$.
\end{prop} \begin{proof} Consider $v(P,t)=u_1(P,t)-u_2(P,t)$. From our assumptions, $v\in C(\Sigma\times[0,T))$ and $v$ is smooth on $(\Sigma\setminus(\Delta_{\pm c}\cup B_d))\times[0,T)$, as well as on $\Sigma\times(0,T)$. Moreover $v(P,0)\equiv0$. Define $m(t)=\max\{v(P,t)\mid P\in \Sigma\}$. For each $0\leq t<T$ with $m(t)>0$, we want to show that $m^{\prime}(t)\leq Cm(t)$ for some constant $C$. Choose $P_t$ as in Lemma \ref{lem:max} such that $m(t)=v(P_t,t)$ and $m^{\prime}(t)=v_t(P_t,t)$.
\textbf{Case 1.} $P_t\in B_d$. Since the $u_j$ satisfy equation (\ref{eq:2dimhorieq}), $v$ satisfies a parabolic equation $$ v_t=a_1(x,t)v_{xx}+b_1(x,t)v_x, $$ where $a_1(x,t)$ and $b_1(x,t)$ are smooth and $a_1(x,t)>0$. Since $v$ attains its maximum at $P_t$, $v_x(P_t,t)=0$ and $v_{xx}(P_t,t)\leq0$. Then $v_t(P_t,t)\leq0$. By Lemma \ref{lem:max}, $m^{\prime}(t)\leq0$.
\textbf{Case 2.} $P_t\in \Delta_{\pm c}$. We only consider $P_t\in \Delta_{-c}$. Then in the $y$-coordinate of $\Delta_{-c}$, the $u_j$ satisfy the full graph equation, which is (\ref{eq:graph}) with sign ``$-$'' and $n=1$. Therefore $v=u_1-u_2$ satisfies a parabolic equation $$ v_t=a_2(y,t)v_{yy}+b_2(y,t)v_{y}. $$ Since $v_y(P_t,t)=0$ and $v_{yy}(P_t,t)\leq 0$, $m^{\prime}(t)\leq0$.
\textbf{Case 3.} $P_t\in \Sigma\setminus(\Delta_{\pm c}\cup B_d)$. Then we can choose a coordinate $z$ on some neighbourhood of $P_t$ in $\Sigma$, and the $u_j$ satisfy (\ref{eq:para1}). We may write this equation as $u_t=F(z,t,u, u_z, u_{zz})$. Then $v=u_1-u_2$ satisfies $$ v_t=a_3(z,t)v_{zz}+b_3(z,t)v_{z}+c_3(z,t)v, $$ where $$ c_3(z,t)=\int_{0}^1F_u(z,t,u^{\theta}, u_z^{\theta},u_{zz}^{\theta})d\theta $$ and $u^{\theta}=(1-\theta)u_2+\theta u_1$.
By assumption, outside of the discs $\Delta_{\pm c}$ and the pipe $B_d$ the $u_j$ are smooth up to $t=0$, so the coefficient $c_3(z,t)$ is bounded for $0<t<T$, say $|c_3(z,t)|\leq M<\infty$. The constant $M$ may depend on the choice of the local coordinate $z$; since $\Sigma$ is compact, by an easy covering argument we can choose $M$ independent of this choice. Since $v_z(P_t,t)=0$ and $v_{zz}(P_t,t)\leq0$, $$ v_t(P_t,t)\leq c_3(P_t,t)v(P_t,t)\leq Mv(P_t,t). $$ Consequently, $m^{\prime}(t)\leq Mm(t)$.
Combining these three cases, we have $m^{\prime}(t)\leq C m(t)$ for some constant $C>0$ whenever $m(t)>0$. Since $m(0)=0$, it follows that $m(t)\leq 0$. Similarly, we can prove that $\min\{v(P,t)\mid P\in \Sigma\}\geq0$. Therefore $u_1\equiv u_2$, which completes the proof. \end{proof}
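The last step of the proof is a standard Gronwall-type argument; we spell it out, since the differential inequality is only known where $m$ is positive. Whenever $m(t)>0$ we have
$$
\frac{d}{dt}\left(e^{-Ct}m(t)\right)=e^{-Ct}\left(m^{\prime}(t)-Cm(t)\right)\leq0,
$$
so $e^{-Ct}m(t)$ cannot increase on any interval where $m>0$. Since $m(0)=0$ and $m$ is continuous, $m$ can never become positive, i.e. $m(t)\leq0$ for $0\leq t<T$.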
\begin{lem}\label{lem:closeas} There exist closed sets $E_j$ such that $E_j^{\circ}$ are $\alpha/2^j$-domains and $E_j\downarrow \overline{U}$, where $U$ is given at the beginning of this section and $E^{\circ}$ denotes the interior of the set $E$. \end{lem} \begin{proof} Since $U\cap \{x\geq0\}$ is an $\alpha$-domain, there exists a unique $\delta_0>0$ such that $$ u_0(\delta_0)=\alpha $$ and
$$ u_0^{\prime}(x)>0,\ 0<x<\delta_0. $$ For all $j\geq1$, there exists a unique $\delta_j$ with $0<\delta_j<\delta_0$ such that $$ u_0(\delta_j)=\alpha/2^j. $$
We can construct even functions $v_j\in C^{\infty}((-b_0,b_0))$ such that $$ v_j(x)=\left\{ \begin{array}{lcl} \alpha/2^j,\ x\in (-\delta_j/2,\delta_j/2),\\ u_0(x),\ x\in [-b_0,-\delta_j]\cup[\delta_j,b_0], \end{array} \right. $$ $v_j(x)\geq u_0(x)$ for $x\in[-b_0,b_0]$, and $v^{\prime}_j(x)>0$ for $x\in(\delta_j/2,\delta_j)$. It is easy to see that $v_j\downarrow u_0$ uniformly in $[-b_0,b_0]$.
Let $E_j=\{(x,y)\mid|y|\leq v_j(x),\ -b_0\leq x\leq b_0\}$. Since $v_j\downarrow u_0$ uniformly in $[-b_0,b_0]$, $E_j\downarrow \overline{U}$. It is easy to check that $E_j^{\circ}$ are $\alpha/2^j$-domains. \end{proof}
\begin{lem}\label{lem:closebou} Assume the hypotheses of Theorem \ref{thm:exist}. Then there exists $t_1>0$ such that, for all $t_2$ satisfying $0<t_2<t_1$, the second fundamental forms and derivatives of $\partial E_j(t)$ are uniformly bounded for $t_2\leq t\leq t_1$, where $E_j(t)$ denotes the closed evolution by $V=-\kappa+A$ with $E_j(0)=E_j$. \end{lem} \begin{proof}
Let $E_j(t)=\{(x,y)\mid|y|\leq v_j(x,t)\}$.
{\bf Step 1.} For all $t_2$ satisfying $0<t_2<\delta$ ($\delta$ given by Theorem \ref{thm:exist}), there exists a constant $c>0$ such that $$ v_j(0,t)>c,\ t_2/2<t<\delta. $$
Let $U^{+}(t)$ denote the open evolution with $U^+(0)=U\cap\{x\geq0\}$. Using Theorem \ref{thm:partialUmeancurvature}, $U^{+}(t)$ is the domain surrounded by $\Lambda(t)$ ($\Lambda(t)$ is defined in Section 1). By (3) in Theorem \ref{thm:order} and $U\cap\{x\geq0\}\subset E_j$, there holds $U^{+}(t)\subset E_j(t)$. Since, by assumption, $a_*(t)<0$ for $0<t\leq\delta$, there holds $(0,0)\in U^{+}(t)\subset E_j(t)$ for $0<t<\delta$. Hence, for all $t_2$ satisfying $0<t_2<\delta$, there exists $c>0$ such that $v_j(0,t)> c$, $t_2/2\leq t\leq \delta$.
{\bf Step 2.} Construction of four auxiliary balls.
Since $U\cap\{x\geq0\}$ is an $\alpha$-domain, there exist $\beta_2>\beta_1>0$ such that $u_0(\pm\beta_1)=u_0(\pm\beta_2)=\alpha$, $u_0^{\prime}(x)<0$ for $x>\beta_2$ and $u_0^{\prime}(x)>0$ for $0<x<\beta_1$. There exist $p>\beta_1$ and $0<q<\beta_2$ such that $\displaystyle{u_0(\pm q)=u_0(\pm p)=\frac{\alpha}{2}}$. We consider the points $$Q=(-p,0),\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ P=(p,0),$$ $$Q^{\prime}=(-p,\alpha),\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ P^{\prime}=(p,\alpha).$$
\begin{figure}
\caption{Proof of Lemma \ref{lem:closebou}}
\label{fig:uniformb}
\end{figure}
Since $P\in U$ and $P^{\prime}\in \overline{U}^{c}$, there exists $\epsilon$ such that $\overline{B_{\epsilon}(P)}\subset U$ and $\overline{B_{\epsilon}(P^{\prime})}\subset\overline{U}^c$. Consequently, $\overline{B_{\epsilon}(P)}\cup \overline{B_{\epsilon}(Q)}\subset E^{\circ}$ and $\overline{B_{\epsilon}(P^{\prime})}\cup \overline{B_{\epsilon}(Q^{\prime})}\subset E^{c}$. Then for $j$ large enough, $\overline{B_{\epsilon}(P)}\cup \overline{B_{\epsilon}(Q)}\subset E_j^{\circ}$ and $\overline{B_{\epsilon}(P^{\prime})}\cup \overline{B_{\epsilon}(Q^{\prime})}\subset E_j^{c}$. By (4) in Theorem \ref{thm:order}, \begin{equation}\label{eq:q1} \overline{B_{\epsilon(t)}(P)}\cup \overline{B_{\epsilon(t)}(Q)}\subset E_j(t)^{\circ}, \end{equation} for $0<t<\delta_2$. By (1b) in Theorem \ref{thm:conti}, there exists $\delta_3>0$ such that \begin{equation}\label{eq:q2} \overline{B_{\epsilon(t)}(P^{\prime})}\cup \overline{B_{\epsilon(t)}(Q^{\prime})}\subset E_j(t)^c, \end{equation} for $0< t<\delta_3$. Here $\epsilon(t)$ is the solution of (\ref{eq:ball2}) with $\epsilon(0)=\epsilon$, $0<t<\delta_1$. We choose $\delta_2$ independent of $j$ such that $\epsilon(t)>\epsilon/2$, $0<t<\delta_2$. \begin{figure}
\caption{Proof of Lemma \ref{lem:closebou}}
\label{fig:uniformb2}
\end{figure}
{\bf Step 3.} Divide $\partial E_j(t)$ into two parts by auxiliary balls.
Since for all $\rho<\alpha$, $C_{\rho}$ intersects $\partial E_j$ at most four times, by Proposition \ref{pro:intersection} there exists $t_0>0$ such that $C_{\rho}$ intersects $\partial E_j(t)$ at most four times, $0<t<t_0$. By continuity, we can deduce that there exists $\delta_4$ such that for all $\rho<\alpha$, the equation $v_j(x,t)=\rho$ has at most one root for $x>p$, for all $t<\delta_4$. By symmetry, the same holds for $x<-p$.
Choosing $t_1=\min\{t_0,\delta_2,\delta_3,\delta_4\}$, Step 1 implies that the $E_j(t)^{\circ}$ are all $c$-domains, $t_2/2<t<t_1$. Let $d<\min\{c,\epsilon/4\}$. By (\ref{eq:q1}) in Step 2, we have $v_j(x,t)>d$ for $t_2/2<t<t_1$ and $|x-p|<\sqrt{\epsilon^2(t)-d^2}$ or $|x+p|<\sqrt{\epsilon^2(t)-d^2}$. Since $\epsilon(t)>\epsilon/2$, there holds $$ v_j(x,t)\geq d,\ \text{in}\ \Omega=(-p-\frac{\sqrt{3}}{4}\epsilon,p+\frac{\sqrt{3}}{4}\epsilon)\times(t_2/2,t_1). $$
For $x\leq-p$, by (\ref{eq:q2}) in Step 2, $$ v_j(x,t)<\alpha/2-\epsilon(t)<\alpha/2-\epsilon/2,\ x\leq-p,\ 0\leq t<t_1. $$ The same is true for $x\geq p$.
{\bf Step 4.} The derivatives and second fundamental forms of $\partial E_j(t)$ are bounded in $\Omega^{\prime}=[-p,p]\times(t_2,t_1)$.
Since $v_j(x,t)\geq d$ in $\Omega=(-p-\frac{\sqrt{3}}{4}\epsilon,p+\frac{\sqrt{3}}{4}\epsilon)\times(t_2/2,t_1)$, Theorem \ref{thm:grad} implies that $v_{jx}$ are uniformly bounded in $\Omega$. By Remark \ref{rem:hes}, $v_{jxx}$ are uniformly bounded in $\Omega^{\prime}$.
{\bf Step 5.} The derivatives and second fundamental forms of $\partial E_j(t)$ are bounded for $x\leq -p$ and $x\geq p$, $t_2<t<t_1$.
We only consider $x\leq-p$. For $0<t<t_1$, the part of $\partial E_j(t)$ in $x\leq-p$ can be represented by $x=w_j(y,t)$, for $|y|<\alpha/2$, $t\in (0,t_1)$, and $w_j$ satisfies equation (\ref{eq:graph}) with sign ``$-$'' and $n=1$. Then Corollary \ref{cor:es} and Remark \ref{rem:hes} imply that $\frac{\partial ^k}{\partial y^k}w_j(y,t)$, $k=1,2$, are uniformly bounded for $|y|\leq\alpha/2-\epsilon/2$, $t_2<t<t_1$, for any $t_2>0$. Then the derivatives and second fundamental forms of $\partial E_j(t)$ are uniformly bounded for $x\leq-p$, $t_2<t<t_1$.
The proof of this lemma is completed. \end{proof}
\begin{lem}\label{lem:opensy} There exist open sets $U_j$ with $U_j\cap\{x\geq0\}$ an $\alpha$-domain such that $U_j\uparrow U$. \end{lem}
\begin{proof} Since $U\cap\{x>0\}$ is an $\alpha$-domain, for $j\geq1$ there exist $\delta_j$ satisfying $0<\delta_j<\delta_0$ such that $u_0(\delta_j)=\alpha/2^j$, where $\delta_0$ satisfies $u_0(\pm \delta_0)=\alpha$ and $u_0^{\prime}(x)>0$ for $0<x<\delta_0$. We choose even functions $u_j\in C^{\infty}((-b_0,b_0))$ satisfying $$ u_j(x)=\left\{ \begin{array}{lcl} 0,\ x=0,\\ u_0(x),\ x\in[-b_0,-\delta_j]\cup[\delta_j,b_0], \end{array} \right. $$ and $u_j(x)\leq u_0(x)$ for $x\in[-b_0,b_0]$, $u_j^{\prime}(x)>0$ for $x\in(0,\delta_j)$.
Let $U_j=\{(x,y)\mid |y|<u_j(x)\}$. Obviously $u_j\uparrow u_0$, hence $U_j\uparrow U$. It is easy to check that $U_j\cap\{x>0\}$ are $\alpha$-domains. \end{proof} \begin{lem}\label{lem:openbou} Assume the hypotheses of Theorem \ref{thm:exist}. Then there exists $t_1>0$ such that, for all $t_2$ satisfying $0<t_2<t_1$, the second fundamental forms and derivatives of $\partial U_j(t)$ are uniformly bounded, $t_2<t<t_1$, where $U_j(t)$ is the open evolution by $V=-\kappa+A$ with $U_j(0)=U_j$. \end{lem} \begin{proof} Let $U(t)$ and $U^{+}(t)$ be the open evolutions with $U(0)=U$ and $U^{+}(0)=U\cap\{x>0\}$. By the appendix, $\Lambda(t)=\partial U^{+}(t)$ ($\Lambda(t)$ is given in Section 1). Since $a_*(t)<0$ for $0<t<\delta$, $(0,0)\in\partial U^{+}(t)$.
By (1) in Theorem \ref{thm:order} and $U\cap\{x>0\}\subset U$, we have $U^{+}(t)\subset U(t)$. Consequently, $(0,0)\in U(t)$, $0<t<\delta$. Then for $j$ large enough, $(0,0)\in U_j(t)$. The remaining parts can be proved similarly to Lemma \ref{lem:closebou}. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:exist}] By Lemmas \ref{lem:closebou} and \ref{lem:openbou}, $\partial U(t)$ and $\partial E(t)$ are smooth curves homeomorphic to the curve $\Sigma$ given by Proposition \ref{pro:sigma2}. Consequently, $\partial U(t)$, $\partial E(t)$ satisfy the assumptions of Proposition \ref{pro:uniq2} for $0\leq t<T_1$, for some $T_1$ with $0<T_1<t_1$, where $t_1$ is given by Lemmas \ref{lem:closebou} and \ref{lem:openbou}. Then there holds $\partial U(t)=\partial E(t)$, $0<t<T_1$. If we let $\Gamma(t)=\partial E(t)\cap\{x\geq0\}$, then $\Gamma(t)$ is the unique solution of (\ref{eq:cur}), (\ref{eq:Neum1}) and (\ref{eq:initial1}). The proof of Theorem \ref{thm:exist} is completed. \end{proof} \begin{rem} Indeed, $\partial E(t)$ and $\partial U(t)$ are smooth on $(\Sigma\setminus B_d)\times[0,T_1)$ and on $\Sigma\times(0,T_1)$. If we remove the assumption that $\Gamma_0$ is smooth at the end points $(-b_0,0)$ and $(b_0,0)$, then $\partial E(t)$ and $\partial U(t)$ are smooth on $(\Sigma\setminus( \Delta_{\pm c}\cup B_d))\times[0,T_1)$ and on $\Sigma\times(0,T_1)$. Therefore the result remains true even without smoothness at the end points. \end{rem} \begin{proof}[Proof of Theorem \ref{thm:fattening1}] It is sufficient to show that there is a ball $B$ such that $B\subset E(t)\setminus U(t)$, for some $t$.
{\bf Closed evolution $E(t)$.} Since the $E_j^{\circ}$ (given by Lemma \ref{lem:closeas}) are $\alpha/2^j$-domains with smooth boundary, by Lemma \ref{lem:alphad2} there exists a positive time $t_1$, $t_1<\delta$ ($\delta$ is given in Theorem \ref{thm:fattening1}), such that $E_j(t)^{\circ}$ are $(At+\alpha/2^j)$-domains for $0<t<t_1$. Combining this with $E_j(t)\downarrow E(t)$, we see that $E(t)^{\circ}$ is an $At$-domain, $0<t<t_1$. Therefore $E(t)^{\circ}$ is an $At_1/2$-domain, $t_1/2<t<t_1$.
{\bf Open evolution $U(t)$.} Let $U^{\pm}(t)$ be the open evolutions with $U^{\pm}(0)=U\cap\{\pm x\geq 0\}$. By the appendix, $\partial U^{+}(t)=\Lambda(t)$ and $\partial U^{-}(t)=\{(-x,y)\mid(x,y)\in \Lambda(t)\}$, where $\Lambda(t)$ is given in Section 1. Thus the left end point of $U^+(t)$ and the right end point of $U^-(t)$ are $(a_*(t),0)$ and $(-a_*(t),0)$, respectively. By the assumption of this theorem, $a_*(t)\geq 0$ for $0\leq t<\delta$, which means that $-a_*(t)\leq a_*(t)$, $0\leq t<\delta$. Therefore, $U^+(t)\cap U^-(t)=\emptyset$, $0\leq t<\delta$. From Lemma \ref{lem:sep}, the inner evolution $U(t)$ satisfies $U(t)=U^{+}(t)\cup U^{-}(t)$, for $0\leq t<\delta$.
By (2a) in Theorem \ref{thm:conti} (the boundary of the open evolution evolves continuously) and $a_*(t)\geq0$, there exists $\displaystyle{\delta_1<\frac{At_1}{4}}$ such that $$ \displaystyle{B_{\delta_1}((0,\frac{At_1}{4}))}\cap U(t)=\emptyset,\ \displaystyle{\frac{t_1}{2}<t<t_1} $$ and $$ B_{\delta_1}((0,\frac{At_1}{4}))\subset E(t),\ \displaystyle{\frac{t_1}{2}<t<t_1}. $$
Here $\displaystyle{B_{\delta_1}((0,\frac{At_1}{4}))}$ is the ball centered at $(0,At_1/4)$ with radius $\delta_1$. Then $\displaystyle{B_{\delta_1}((0,\frac{At_1}{4}))}\subset \Gamma(t)=E(t)\setminus U(t)$, for $\displaystyle{\frac{t_1}{2}<t<t_1}$. \end{proof}
\section{Formation of singularity} In this section we want to identify how $\Gamma(t)$ forms a singularity when it becomes singular at $t=T<\infty$, where \begin{equation}\label{eq:singularT} T=\sup\{t>0\mid \Gamma(s)\ \text{is smooth},\ 0<s<t\} \end{equation}
and $\Gamma(t)$ is given by Theorem \ref{thm:exist}. For convenience, we still consider $\Gamma(t)$ extended evenly. By Theorem \ref{thm:openevolutionmeancurvature} and Theorem \ref{thm:gu}, $\Gamma(t)=\{(x,y)\in\mathbb{R}^2\mid|y|=u(x,t),-b(t)\leq x\leq b(t)\}$ and $(u,b)$ is the solution of the following free boundary problem $$ \left\{ \begin{array}{lcl} \displaystyle{u_t=\frac{u_{xx}}{1+u_x^2}+A\sqrt{1+u_x^2}}, \ x\in(-b(t),b(t)), \ 0<t< T,\\ u(-b(t),t)=0,\ u(b(t),t)=0,\ 0\leq t< T,\\ u_x(-b(t),t)=\infty,\ u_x(b(t),t)=-\infty,\ 0\leq t< T,\\ u(x,0)=u_0(x),\ -b_0\leq x\leq b_0. \end{array} \right. $$
By the choice of the initial curve, $\Gamma(t)$ is not convex for $t$ near $0$. Therefore, $\Gamma(t)$ may intersect itself on the $y$-axis, and it is necessary to study the local minima of $u(\cdot,t)$.
As shown in Section 4, the numbers of local maxima and local minima are finite nonincreasing functions of time. It follows that, after a while, the numbers of local maxima and local minima are constant. After discarding an initial section of the solution, we may even assume that $x\mapsto u(x,t)$ has $m$ local minima and $m+1$ local maxima. Let these minima and maxima be located at $\{\xi_j(t)\}_{1\leq j\leq m}$ and $\{\eta_j(t)\}_{0\leq j\leq m}$, respectively, ordered so that \begin{equation}\label{eq:ordermami} -b(t)<\eta_0(t)<\xi_1(t)<\eta_1(t)<\cdots<\xi_m(t)<\eta_m(t)<b(t). \end{equation} Since the number of critical points of $u(\cdot,t)$ drops whenever $u(\cdot,t)$ has a degenerate critical point, the minima and maxima of $u(\cdot,t)$ are all nondegenerate. By the implicit function theorem, the $\xi_j(t)$ and $\eta_j(t)$ are therefore smooth functions of time.
\begin{lem}\label{lem:convergeendmimax} The limits $$ \lim\limits_{t\rightarrow T}b(t)=b(T) $$ and $$ \lim\limits_{t\rightarrow T}\xi_j(t)=\xi_j(T),\ \lim\limits_{t\rightarrow T}\eta_j(t)=\eta_j(T) $$ exist. \end{lem}
\begin{proof} We prove this lemma by the method of \cite{AAG}, first developed in \cite{CM}. In our proof there is a small difference, since the intersection number between two flows evolving by $V=-\kappa+A$ may increase; therefore the method of \cite{AAG} has to be modified.
First, we prove that $\lim\limits_{t\rightarrow T}b(t)$ exists. By the vertical equation $$ w_t=\frac{w_{rr}}{1+w_r^2}+A\sqrt{1+w_r^2}, $$ we can derive $b^{\prime}(t)=w_{rr}+A\leq A$, since $w_{rr}(0)\leq0$. Then $b(t)-At$ is non-increasing. It is easy to see that $b(t)-At$ is bounded for $t<T$. Therefore $\lim\limits_{t\rightarrow T}(b(t)-At)$ exists, and consequently $\lim\limits_{t\rightarrow T}b(t)$ exists.
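In more detail, writing the right part of the curve near its tip as a graph $x=w(r,t)$ with $r=|y|$, we have $w(0,t)=b(t)$ and, by symmetry, $w_r(0,t)=0$. Evaluating the vertical equation at $r=0$ gives
$$
b^{\prime}(t)=w_t(0,t)=\frac{w_{rr}(0,t)}{1+w_r(0,t)^2}+A\sqrt{1+w_r(0,t)^2}=w_{rr}(0,t)+A\leq A,
$$
where the last inequality uses $w_{rr}(0,t)\leq0$, since $r=0$ is a maximum point of $w(\cdot,t)$.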
Next, we prove that $\lim\limits_{t\rightarrow T}\xi_j(t)$ exists. Assume that $$\limsup\limits_{t\rightarrow T}\xi_j(t)>\liminf\limits_{t\rightarrow T}\xi_j(t).$$ We can choose $x_0\in(\liminf\limits_{t\rightarrow T}\xi_j(t),\limsup\limits_{t\rightarrow T}\xi_j(t))$ with $x_0\neq 0$. Without loss of generality, we assume $-b(T)<x_0<0<b(T)$. Since $\xi_j(t)$ is continuous in $t$, there exists a sequence $t_m\rightarrow T$ such that $$ \xi_j(t_m)=x_0 \ \textrm{and}\ u_x(x_0,t_m)=0. $$ We let $\widetilde{\Gamma}(t)$ be the reflection of $\Gamma(t)$ about $x=x_0$. Consequently, $\widetilde{a}(t):=2x_0-b(t)$ and $\widetilde{b}(t):=2x_0-a(t)$ are the end points of $\widetilde{\Gamma}(t)$. Obviously, $\widetilde{\Gamma}(t)$ evolves by $V=-\kappa+A$ and $\widetilde{a}(T)<-b(T)<x_0<\widetilde{b}(T)<b(T)$. For $t$ sufficiently close to $T$, $\widetilde{a}(t)<a(t)<x_0<\widetilde{b}(t)<b(t)$, i.e., the order of $\widetilde{a}(t)$, $\widetilde{b}(t)$, $a(t)$, $b(t)$ does not change. Using Theorem \ref{thm:sl}, since $\widetilde{\Gamma}(t_m)$ intersects $\Gamma(t_m)$ at $x_0$ tangentially, the intersection number between $\widetilde{\Gamma}(t)$ and $\Gamma(t)$ drops infinitely many times as $t$ approaches $T$. But Theorem \ref{thm:sl} shows that the intersection number between $\Gamma(t)$ and $\widetilde{\Gamma}(t)$ is finite (the choice of $x_0$ implies that $\Gamma(t)$ is not identical to $\widetilde{\Gamma}(t)$). This yields a contradiction. \end{proof}
\begin{lem}\label{lem:singlepoint pinch} If $\xi_{j}(T)<\eta_j(T)$, then for any compact interval $[c,d]\subset(\xi_j(T),\eta_j(T))$, there exist $t_1$ and $\delta>0$ such that $u(x,t)\geq\delta$ for $x\in[c,d]$, $t\in[t_1,T)$. (Similar statements hold for $\eta_{j-1}(T)<\xi_j(T)$, $-b(T)<\eta_0(T)$ and $\eta_m(T)<b(T)$.) \end{lem}
\begin{proof} Let $[a,b]\subset(\xi_j(T),\eta_j(T))$ be any compact interval. Then there exists $t_1<T$ such that $[a,b]\subset(\xi_{j}(t),\eta_j(t))$ and $u_x(x,t)>0$ for $x\in[a,b]$, $t\in(t_1,T)$. Letting $\theta=\arctan u_x$, $\theta$ satisfies $$ \theta_t=\cos^2\theta\,\theta_{xx}+A\sin\theta\,\theta_x, $$ and since $u_x>0$ for $x\in[a,b]$, $t\in(t_1,T)$, we have $\theta>0$ there.
On the other hand, we let $\varphi(x,t)=\epsilon e^{-ct}\sin(\lambda(x-a))$, where $\lambda=\pi/(b-a)$, $c>A\lambda\pi+\lambda^2$ and $0<\epsilon<\pi$. Since $\varphi_{xx}\leq0$ for $x\in[a,b]$, and seeing $$
\left|-A\lambda\frac{\sin(\epsilon e^{-ct}\sin(\lambda(x-a)))}{\sin(\lambda(x-a))}\cos(\lambda(x-a))\right|\leq A\lambda\pi, $$ there holds \begin{eqnarray*} \varphi_t&-&\cos^2\varphi\varphi_{xx}-A\sin\varphi\varphi_x\leq\varphi_t-\varphi_{xx}-A\sin\varphi\varphi_x\\ &=& \epsilon e^{-ct}\sin(\lambda(x-a))\left(-c+\lambda^2-A\lambda\frac{\sin(\epsilon e^{-ct}\sin(\lambda(x-a)))}{\sin(\lambda(x-a))}\cos(\lambda(x-a))\right)\\&\leq&\epsilon e^{-ct}\sin(\lambda(x-a))(-A\lambda\pi-\lambda^2+\lambda^2+A\lambda\pi)=0, \end{eqnarray*} for $x\in[a,b],\ t\in(t_1,T)$. Since $u_x(x,t_1)$ is bounded from below by a positive constant on $[a,b]$, we can choose $\epsilon>0$ small enough such that $\varphi(x,t_1)\leq\theta(x,t_1)$. Moreover, $$ \varphi(a,t)=0<\theta(a,t),\ \varphi(b,t)=0<\theta(b,t),\ t\in(t_1,T). $$ By the maximum principle, $$ \theta(x,t)\geq\varphi(x,t),\ a<x<b,\ t_1<t<T. $$ Consequently, $$ u_x\geq\arctan u_x=\theta\geq\epsilon e^{-ct}\sin(\lambda(x-a)),\ x\in[a,b],\ t\in(t_1,T), $$ and hence $$ u\geq\epsilon\frac{e^{-ct}}{\lambda}(1-\cos(\lambda(x-a))),\ x\in[a,b],\ t\in(t_1,T). $$ Then for all $[c,d]\subset(a,b)$, $u$ is uniformly bounded from below for $x\in[c,d],\ t\in[t_1,T)$. \end{proof}
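The pointwise inequality for the barrier $\varphi$ can also be checked numerically from the analytic derivatives $\varphi_t=-c\varphi$, $\varphi_x=\epsilon\lambda e^{-ct}\cos(\lambda(x-a))$ and $\varphi_{xx}=-\lambda^2\varphi$. The following sketch (the concrete parameter values are our own arbitrary choices satisfying $c>A\lambda\pi+\lambda^2$ and $0<\epsilon<\pi$) evaluates the left-hand side on a grid:

```python
import math

a, b, A, eps = 0.0, 1.0, 1.0, 0.5
lam = math.pi / (b - a)
c = A * lam * math.pi + lam ** 2 + 1.0   # ensures c > A*lam*pi + lam^2

def residual(x, t):
    """phi_t - cos^2(phi)*phi_xx - A*sin(phi)*phi_x for the barrier phi,
    using the exact derivatives of phi = eps*exp(-c*t)*sin(lam*(x-a))."""
    phi = eps * math.exp(-c * t) * math.sin(lam * (x - a))
    phi_t = -c * phi
    phi_x = eps * lam * math.exp(-c * t) * math.cos(lam * (x - a))
    phi_xx = -lam ** 2 * phi
    return phi_t - math.cos(phi) ** 2 * phi_xx - A * math.sin(phi) * phi_x

# The residual should be nonpositive on [a, b] for all t >= 0.
worst = max(residual(a + (b - a) * i / 200, t)
            for i in range(201) for t in [0.0, 0.1, 0.5, 1.0])
```

The computed `worst` is nonpositive (up to rounding), consistent with $\varphi$ being a subsolution.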
\begin{lem}\label{lem:limitsurface} $\lim\limits_{t\rightarrow T}u(x,t)=u(x,T)$ exists, and $u(x,t)$ converges uniformly to $u(x,T)$ for $x\in \mathbb{R}$, as $t\rightarrow T$. The function $u$ is smooth at $(x,t)\in \mathbb{R}\times(0,T]$ provided that $u(x,t)>0$. Here $u(x,t)$ is taken to be $0$ outside $(a(t),b(t))$. \end{lem} \begin{proof} By Lemma \ref{lem:singlepoint pinch}, for all $[c,d]\subset(\xi_{j-1}(T),\xi_{j}(T))$, $u(x,t)\geq \delta$ for $x\in[c,d]$, $t\in[t_1,T)$. By Theorem \ref{thm:grad}, $u_x$ is uniformly bounded on $[c,d]\times[t_1,T)$, which implies that $\frac{\partial^i}{\partial x^i}u(x,t)$, $i=1,2$, are bounded on any compact subinterval of $(c,d)$. On the other hand, from the equation, $u_t(x,t)$ is uniformly bounded on such an interval, so that $u(\cdot,t)$ converges uniformly on any such interval.
The same idea applies to the intervals $(-b(T),\xi_1(T))$ and $(\xi_m(T),b(T))$. Since $u(x,T)$ is taken to be $0$ outside of $[-b(T),b(T)]$, the result holds there as well.
Thus $u(x,t)$ converges pointwise as $t\rightarrow T$ for every $x$ other than $-b(T)$, $b(T)$ and the $\xi_j(T)$, and the convergence is uniform on any interval that does not contain any of these points.
Next we prove that the functions $u(\cdot,t)$, $T/2<t<T$, are equicontinuous.
Assume $x_1<x_2$. If neither $x_1$ nor $x_2$ lies in the interval $(-b(T),b(T))$, the conclusion is obvious, so assume $x_1\in(-b(T),b(T))$.
Suppose that $|u(x_1,t)-u(x_2,t)|\geq\epsilon$. Then $u(x_1,t)\geq\epsilon/2$ or $u(x_2,t)\geq\epsilon/2$ (or both); we assume the former. From Theorem \ref{thm:grad}, $|u_x|<\sigma(\epsilon/2,T/2)$ whenever $u(x,t)\geq\epsilon/2$, $T/2<t<T$. Thus, if $u(x,t)\geq\epsilon/2$ on $(x_1,x_2)$, $$
x_2-x_1\geq\frac{|u(x_1,t)-u(x_2,t)|}{\sigma(\epsilon/2,T/2)}\geq\frac{\epsilon}{\sigma(\epsilon/2,T/2)}. $$ If $u(x,t)<\epsilon/2$ somewhere in the interval $(x_1,x_2)$, then there is a smallest $x_3$ with $x_1<x_3$ at which $u(x_3,t)=\epsilon/2$. On the interval $(x_1,x_3)$, $u(x,t)\geq\epsilon/2$. Then $$ x_2-x_1\geq x_3-x_1\geq\frac{u(x_1,t)-u(x_3,t)}{\sigma(\epsilon/2,T/2)}\geq\frac{\epsilon}{2\sigma(\epsilon/2,T/2)}. $$ So for every $\epsilon>0$, we can choose $\delta=\epsilon/(2\sigma(\epsilon/2,T/2))$ so that $$
|u(x_1,t)-u(x_2,t)|<\epsilon, $$
whenever $|x_1-x_2|<\delta$, for $T/2<t<T$.
Thus $u(x,t)$ is equicontinuous. Noting that $u(x,t)$ converges to $u(x,T)$ on $\mathbb{R}\setminus\{\xi_i(T),-b(T),b(T)\}$, which is dense in $\mathbb{R}$, the proof is completed. \end{proof}
\begin{lem}\label{lem:seprate} Suppose that $u(\eta_0(T),T)>0$, then $-b(T)<\eta_0(T)$. \end{lem}
\begin{proof} Since $u(\eta_0(T),T)>0$, the quantity $\delta:=\inf\limits_{0\leq t\leq T} u(\eta_0(t),t)$ is positive. We consider $$
x=v(|y|,t), $$
the inverse function of $|y|=u(x,t)$ for $x\in(a(t),\eta_0(t))$, and let $w(y,t)=v(|y|,t)$. Then $w(y,t)$ satisfies equation (\ref{eq:graph}) with sign ``$-$'' and $n=1$, for $|y|<\delta$, $0<t<T$. Clearly $w$ is uniformly bounded, so Corollary \ref{cor:es} and Remark \ref{rem:hes} imply that $\frac{\partial^k w}{\partial y^k}(y,t)$, $k=1,2$, are bounded for $|y|\leq\delta/2$, $T/2\leq t<T$. So the limit function $w(y,T)$ obtained by Lemma \ref{lem:limitsurface} is smooth for $|y|\leq\delta/2$.
As in the proof of Lemma \ref{lem:sing1}, using the maximum principle, $v_r(r,T)>0$ for $0<r<\delta/2$. Then $-b(T)=v(0,T)<v(\delta/2,T)<\eta_0(T)$. \end{proof}
Lemma \ref{lem:singlepoint pinch} and Lemma \ref{lem:seprate} imply that the ``width'' and ``height'' become zero at the same time. Therefore, $\Gamma(t)$ cannot pinch on the $y$-axis before shrinking. We prove this in detail in the following theorem and corollary.
\begin{thm}\label{thm:formationofsingular}(Formation of singularity)
1. If $m=0$, then $u(\eta_0(T),T)=0$ and $b(T)=0$. This implies that $\Gamma(t)$ shrinks to the origin $O$ as $t\rightarrow T$.
2. If $m\geq1$, then there is $j$ with $1\leq j\leq m$ such that $u(\xi_j(T),T)=0$. \end{thm} \begin{proof} 1. First we treat the case $m=0$, i.e., $u(\cdot,t)$ has only one maximum and no local minimum. We argue by contradiction.
{\bf Case 1.} If $u(\eta_0(T),T)>0$, then from Lemma \ref{lem:seprate}, $-b(T)<\eta_0(T)<b(T)$. $\Gamma(t)$ can be divided into three parts $\Delta_1(t)$, $\Delta_2(t)$ and $\Delta_3(t)$, for $t$ close to $T$, where $\Delta_1(t)$ and $\Delta_2(t)$ are the left and right caps of $\Gamma(t)$ and $\Delta_3(t)$ is the middle part of $\Gamma(t)$ away from the $x$-axis. It is easy to show that the derivatives and second fundamental forms of $\Delta_1$, $\Delta_2$ and $\Delta_3$ are uniformly bounded as $t\rightarrow T$ (this can be proved similarly to Lemma \ref{lem:sing1}), which contradicts $\Gamma(t)$ becoming singular at $T$.
{\bf Case 2.} If $b(T)> 0$, then $-b(T)<\eta_0(T)$ or $\eta_0(T)<b(T)$; we assume $-b(T)<\eta_0(T)$. By Lemma \ref{lem:singlepoint pinch}, for every $[c,d]\subset(-b(T),\eta_0(T))$, $u(x,t)\geq \delta>0$ in $[c,d]\times[t_1,T)$. Then $u(\eta_0(t),t)\geq\delta$, $t_1\leq t<T$, and consequently $u(\eta_0(T),T)\geq\delta$. By the same argument as in Case 1, we get a contradiction. This completes the proof in the case $m=0$.
2. For $m\geq1$, if $u(\xi_j(T),T)>0$ for all $1\leq j\leq m$, we can divide $\Gamma(t)$ into three parts as above for $t$ close to $T$ and obtain a contradiction as in the case $m=0$. So there is $j$ such that $u(\xi_j(T),T)=0$. \end{proof}
\begin{cor}\label{cor:2dsingular} There is $t_1$ with $0<t_1<T$ such that $u(\cdot,t)$ loses all its local minima for $t\in[t_1,T)$. Moreover, $\Gamma(t)$ shrinks to a point as $t\rightarrow T$. \end{cor} \begin{proof} Denote $h(t)=\max\limits_{a(t)<x<b(t)}u(x,t)$. By Lemma \ref{lem:in}, we can deduce that, for given $t$ with $t_2<t<T$ and $\rho<\min\{At_2,h(t)\}$, the line $y=\rho$ intersects $y=u(x,t)$ only twice.
If $u(x,t)$ does not lose all its local minima, the number of minima stays constant, say $m\geq 1$. From Theorem \ref{thm:formationofsingular}, there exists $j$, $1\leq j\leq m$, such that $u(\xi_j(T),T)=0$. So we can choose $t_0$ with $t_2<t_0<T$ such that $u(\xi_j(t_0),t_0)<At_2$. Obviously, $u(\xi_j(t_0),t_0)< h(t_0)$, hence $u(\xi_j(t_0),t_0)<\min\{At_2,h(t_0)\}$. Consequently, $y=\rho=u(\xi_j(t_0),t_0)$ intersects $y=u(x,t_0)$ at least three times, a contradiction.
Therefore, there is $t_1$ such that $u(x,t)$ loses all its local minima for $t\in[t_1,T)$. By the proof of Theorem \ref{thm:formationofsingular} for $m=0$, $u(\eta_0(T),T)=0$ and $b(T)=0$. This means that $\Gamma(t)$ shrinks to a point as $t\rightarrow T$. \end{proof}
\begin{rem} We note that none of the proofs in this section uses the condition that $u(\cdot,t)$ is even. Therefore, the arguments in this section apply to any $x$-axisymmetric curve. \end{rem}
\section{Asymptotic behaviors} In this section, we will prove Theorem \ref{thm:threecondition} and Theorem \ref{thm:asym}. For convenience, we extend $\Gamma(t)$ by even reflection, still denoted by $\Gamma(t)$, and let $$ h(t)=\max\limits_{-b(t)\leq x\leq b(t)}u(x,t),\ \ l(t)=2b(t). $$ Denote by $U(t)$ the open set enclosed by $\Gamma(t)$.
All the proofs in this section rely on the intersection number principle introduced in Section 4. Many methods have been developed for proving asymptotic behavior. The intersection argument for asymptotic behavior was developed by Professor Hiroshi Matano. Roughly speaking, if a function $u(x,t)$ and a stationary function $v(x)$ satisfy the same parabolic equation, and $u(x,t)$ intersects $v(x)$ tangentially at some fixed point for all large $t$, then $u(x,t)\equiv v(x)$. Another important method for studying asymptotic behavior uses a Lyapunov function to prove that $u(x,t)$ is independent of $t$. The intersection argument allows us to prove that $u(x,t)$ is independent of $t$ without a Lyapunov function.
The following lemma says that if $l(t_0)$ is large enough, then $h(t_0)$ is large. We prove it by using Proposition \ref{pro:intersection}. Although the proof of Lemma \ref{lem:expanding} is similar to that in \cite{GMSW}, for the reader's convenience we give the proof in detail. \begin{lem}\label{lem:expanding} For any $\tau\in(0,T)$ and $M\in(0,A\tau/2)$, there exists $l_{M,\tau}>0$ such that, when $l(t_0)>l_{M,\tau}$ for some $t_0\in[\tau,T)$, it holds that $h(t_0)>M$. \end{lem}
\begin{proof} For given $\tau\in(0,T)$ and $M\in(0,A\tau/2)$, we choose $R_0$ such that $$ R_0\geq\frac{2}{A}. $$ Let $R(t)$ be the solution of (\ref{eq:ball2}) with $R(0)=R_0$. Since $R_0>1/A$, $R(t)$ is increasing in $t$. Therefore $R^{\prime}(t)\geq A-1/R_0\geq A/2$. Integrating this inequality, we obtain $$ R(\tau)\geq R_0+A\tau/2\geq R_0+M. $$ So there exists $\tau_1\in(0,\tau]$ such that $$ R(\tau_1)=R_0+M. $$
Now we let $$ W(x,t):=\sqrt{R(t)^2-x^2}-R_0,\ \ x\in[\sigma_-(t),\sigma_+(t)],\ t\in(0,\tau_1], $$ where $\sigma_{-}(t)=-\sqrt{R(t)^2-R^2_0}$ and $\sigma_+(t)=\sqrt{R(t)^2-R^2_0}$. We denote $$ \theta_{\pm}(t)= \arctan\frac{\sqrt{R(t)^2-R^2_0}}{R_0}. $$ Obviously, $0<\theta_{\pm}(t)<\pi/2$.
We choose $l_{M,\tau}:=\sigma_+(\tau_1)-\sigma_-(\tau_1)=2\sqrt{R(\tau_1)^2-R^2_0}=2\sqrt{M^2+2R_0M}.$ We let $\gamma_1(t)$ and $\gamma_2(t)$ be the extensions of $u(x,t)$ and $W(x,t)$ as in Proposition \ref{pro:intersection}. Obviously, $(W(x,t),\sigma_{\pm}(t))$ is the solution of (Q) with $\theta_{\pm}(t)$ (Proposition \ref{pro:intersection}), so by Proposition \ref{pro:intersection} we can deduce $$ \mathcal{Z}(\gamma_1(t_0),\gamma_2(\tau_1))\leq \mathcal{Z}(\gamma_1(t_0-s),\gamma_2(\tau_1-s)),\ \textrm{for}\ s\in[0,\tau_1). $$ Since the extended curve $\gamma_2(\tau_1-s)$ converges to the $x$-axis as $s\rightarrow\tau_1$, the right-hand side of the above inequality equals 2 for $s$ sufficiently close to $\tau_1$. Consequently, $$ \mathcal{Z}(\gamma_1(t_0),\gamma_2(\tau_1))\leq2. $$
Assuming $l(t_0)>l_{M,\tau}$ for some $t_0\in[\tau,T)$, then $\sigma_{\pm}(\tau_1)$ satisfy $$ -b(t_0)<\sigma_-(\tau_1)<\sigma_+(\tau_1)<b(t_0). $$ Hence $\gamma_1(t_0)$ intersects $\gamma_2(\tau_1)$ twice below the $x$-axis. So $u(x,t_0)>W(x,\tau_1)$ on the interval $[\sigma_-(\tau_1),\sigma_+(\tau_1)]$. Consequently, $h(t_0)>M$. \end{proof}
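The quantitative heart of the proof is the growth of the comparison radius. As a numerical sketch, one can integrate the radius equation directly; here we assume, as the estimates above indicate, that (\ref{eq:ball2}) is the ODE $R'(t)=A-1/R(t)$, and the values of $A$ and $\tau$ are illustrative:

```python
# Numerical sketch of the comparison ball radius.
# Hypothetical reconstruction: eq. (ball2) is taken to be R'(t) = A - 1/R(t),
# consistent with the estimate R'(t) >= A - 1/R_0 >= A/2 used in the proof.

def evolve_radius(R0, A, tau, steps=100_000):
    """Integrate R' = A - 1/R on [0, tau] by the explicit Euler method."""
    dt = tau / steps
    R = R0
    for _ in range(steps):
        R += dt * (A - 1.0 / R)
    return R

A, tau = 1.0, 3.0
R0 = 2.0 / A                      # the choice R_0 >= 2/A made in the proof
R_tau = evolve_radius(R0, A, tau)
# Since R' >= A/2 while R >= R_0, the lower bound R(tau) >= R_0 + A*tau/2 holds:
print(R_tau >= R0 + A * tau / 2)  # True
```

Since $R>1/A$ is preserved, the radius grows monotonically, which is exactly what makes the sub-solution $W$ usable in the intersection argument.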
The following corollary shows that if $l(t)$ is unbounded, then $\Gamma(t)$ is expanding.
\begin{cor}\label{cor:expanding2} Assume $T=\infty$ and there exists a sequence $s_m\rightarrow\infty$ such that $l(s_m)\rightarrow\infty$, as $m\rightarrow\infty$. Then $l(t)\rightarrow\infty$ and $h(t)\rightarrow\infty$, as $t\rightarrow\infty$. \end{cor} \begin{proof} By the same argument as in Lemma \ref{lem:expanding}, there exist $C>1/A$ and $m_0$ such that $u(x,s_{m_0})>\sqrt{(C+R_0)^2-x^2}-R_0$. Obviously, $\sqrt{(C+R_0)^2-x^2}-R_0\geq\sqrt{C^2-x^2}$ for $-C\leq x\leq C$. Therefore $u(x,s_{m_0})\geq \sqrt{C^2-x^2}$, $-C\leq x\leq C$. By (b) in Remark \ref{rem:intersection1}, $u(x,s_{m_0}+t)\geq\sqrt{C(t)^2-x^2}$, $-C(t)\leq x\leq C(t)$, where $C(t)$ is the solution of (\ref{eq:ball2}) with $C(0)=C$. By the choice of $C$, we can deduce $C(t)\rightarrow\infty$, as $t\rightarrow\infty$. Then $h(t+s_{m_0})>C(t)\rightarrow\infty$ and $l(t+s_{m_0})>2C(t)\rightarrow\infty$, as $t\rightarrow\infty$. \end{proof}
The following lemma shows that if $h(t)$ is unbounded, then $\Gamma(t)$ is expanding.
\begin{lem}\label{lem:1expanding} Assume $T=\infty$ and there exists a sequence $s_m\rightarrow\infty$ such that $h(s_m)\rightarrow\infty$, as $m\rightarrow\infty$. Then $l(t)\rightarrow\infty$ and $h(t)\rightarrow\infty$, as $t\rightarrow\infty$. \end{lem} \begin{proof} If $l(t)$ is unbounded, then by Corollary \ref{cor:expanding2}, $h(t)\rightarrow\infty$ and $l(t)\rightarrow\infty$ as $t\rightarrow\infty$, and the result holds. It remains to prove that $l(t)$ is unbounded; we argue by contradiction.
Assume $l(t)$ is bounded.
{\bf Step 1.} We prove that $\lim\limits_{t\rightarrow\infty}b(t)$ exists. If $\liminf\limits_{t\rightarrow\infty}b(t)<\limsup\limits_{t\rightarrow\infty}b(t)$, we can choose $x_0$ such that $$ \liminf\limits_{t\rightarrow\infty}b(t)<x_0<\limsup\limits_{t\rightarrow\infty}b(t). $$ We consider the function $u_1(x)=\sqrt{1/A^2-(x+1/A-x_0)^2}$. Obviously, $x_0$ is the right endpoint of $u_1(x)$ and $(u_1(x),x_0-2/A,x_0)$ is a solution of the problem (Q) with $\theta_{\pm}=\pi/2$. So $b(t)-x_0$ changes sign infinitely many times as $t$ varies over $[0,\infty)$. There exists a sequence $p_m\rightarrow\infty$ such that $u(x,p_m)$ intersects $u_1(x)$ tangentially at $x_0$. Arguing as in Lemma \ref{lem:inters}, the intersection number between $u(x,t)$ and $u_1(x)$ drops at $b(p_m)=x_0$. Therefore, the intersection number between $u(x,t)$ and $u_1(x)$ drops infinitely many times. This yields a contradiction. We let $\nu:=\lim\limits_{t\rightarrow\infty}b(t)$.
{\bf Step 2.} We deduce the contradiction.
Since $h(s_m)\rightarrow\infty$, Lemma \ref{lem:alphad2} implies that, for $t_1=4/A^2$, the line $y=\rho$ intersects $y=u(x,t)$ only twice for $\rho<At_1$ and $t>t_1$. Here we choose $\rho_0=2/A$. Then there exists $w(y,t)>0$ such that $$ C_{\rho_0}\cap\Gamma(t)=\{(x,y)\mid x=w(y,t)\ \text{or}\ x=-w(y,t)\}. $$
$w(y,t)$ satisfies (\ref{eq:graph}) with the condition ``$+$'' and $n=1$ on $\{y\mid |y|<\rho_0\}\times(t_1,\infty)$. Since $w(0,t)=b(t)$ is bounded for $t>0$, by Corollary \ref{cor:es} and Remark \ref{rem:hes}, $\frac{\partial^kw}{\partial y^k}(y,t)$, $k=1,2,3$, are uniformly bounded for $|y|\leq\rho_0/2$, $t>t_1+\epsilon^2$. From the equation, $\frac{\partial^kw}{\partial t^k}(y,t)$, $k=1,2$, are also bounded for $|y|<\rho_0/2$, $t>t_1+\epsilon^2$. So for any sequence $t_m\rightarrow \infty$ there exists $w_1(y,t)$ such that $w(\cdot,\cdot+t_m)$ converges to $w_1$ in $C^{2,1}([-\rho_0/2,\rho_0/2]\times[t_1+\epsilon^2,\infty))$ locally in time, as $m\rightarrow\infty$. Hence $w_1(y,t)$ also satisfies (\ref{eq:graph}) with the condition ``$+$'' and $n=1$. Moreover, $w_1(0,t)=\nu$ and $\frac{\partial}{\partial y}w_1(0,t)=0$, $t>t_1+\epsilon^2$.
Next, we consider the function $w_2(y)=\nu-1/A+\sqrt{1/A^2-y^2}$. $w_2(y)$ satisfies (\ref{eq:graph}) with the condition ``$+$'' and $n=1$. Moreover, $w_2(0)=\nu$ and $\frac{\partial}{\partial y}w_2(0)=0$. So $w_1(y,t)$ intersects $w_2(y)$ at $y=0$ tangentially for all $t>t_1+\epsilon^2$. By the same argument as in Lemma \ref{lem:inters}, there holds $w_1(y,t)\equiv w_2(y)$, $|y|\leq \rho_0/2$. Noting that $\frac{\partial w_2}{\partial y}(1/A)=\infty$, we have $\frac{\partial w_1}{\partial y}(1/A,t)=\infty$. But $w_y(1/A,t)$ is bounded as $t\rightarrow\infty$, by the interior gradient estimate. This is a contradiction. (Indeed, $w(y,t)$ is defined for $y\in(-2/A,2/A)$, but the limit function $w_2(y)$ is defined only on $[-1/A,1/A]$.)
We complete the proof. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:threecondition}] For $T<\infty$, the conclusion follows from Corollary \ref{cor:2dsingular}.
For $T=\infty$, $h(t)$ is bounded or unbounded, as $t\rightarrow\infty$.
(1). $h(t)$ is unbounded. Lemma \ref{lem:1expanding} yields that $l(t)\rightarrow\infty$, $h(t)\rightarrow\infty$, as $t\rightarrow\infty$.
(2). $h(t)$ is bounded.
By Corollary \ref{cor:expanding2}, $l(t)$ is also bounded. Next we want to prove $h(t)$ and $l(t)$ are bounded from below.
{\bf Step 1.} We prove that if there exists a sequence $s_m\rightarrow\infty$ with $h(s_m)\rightarrow0$ as $m\rightarrow\infty$, then $l(s_m)\rightarrow0$ as $m\rightarrow\infty$.
By the contrapositive of Lemma \ref{lem:expanding} with $M=h(s_m)$, $$ l(s_m)\leq l_{M,\tau}=2\sqrt{M^2+2R_0M}=2\sqrt{h(s_m)^2+2R_0h(s_m)}. $$ Then we have $l(s_m)\rightarrow0$.
{\bf Step 2.} $h(t)$ is bounded from below.
If there exists a sequence $t_m\rightarrow\infty$ such that $h(t_m)\rightarrow 0$, then by Step 1, $l(t_m)\rightarrow 0$ as $t_m\rightarrow\infty$. Then there exist $t_{m_0}$ and $r<1/A$ such that $U(t_{m_0})\subset B_r((0,0))$, recalling that $U(t)$ is the domain enclosed by $\Gamma(t)$. By the comparison principle, we have $U(t+t_{m_0})\subset B_{r(t)}((0,0))$, where $r(t)$ is the solution of (\ref{eq:ball2}) with $r(0)=r$. Obviously, $B_{r(t)}((0,0))$ shrinks to the origin in finite time, hence so does $U(t)$. This contradicts $T=\infty$.
Hence $h(t)$ is bounded from below.
{\bf Step 3.} Prove the result by contradiction. Assume there exists a sequence $s_m\rightarrow\infty$ such that $l(s_m)\rightarrow0$.
Since $h(t)$ is bounded from below, by Lemma \ref{lem:alphad2} there exist $\rho_0$ and $t_1>0$ such that for all $\rho<\rho_0$, the line $y=\rho$ intersects $y=u(x,t)$ only twice for $t>t_1$. Then there exists $w(y,t)>0$ such that $$ C_{\rho_0}\cap\Gamma(t)=\{(x,y)\mid x=w(y,t)\ \text{or}\ x=-w(y,t)\}. $$
Arguing as in the proof of Lemma \ref{lem:1expanding}, since $l(s_m)\rightarrow0$ we have $\nu=\lim\limits_{t\rightarrow\infty}b(t)=0$, and $w(\cdot,t)\rightarrow w_1$ in $C^{2,1}([0,\rho_0/2]\times[t_1+\epsilon^2,\infty))$ locally in time, as $m\rightarrow\infty$, where $w_1(y)=-1/A+\sqrt{1/A^2-y^2}\leq0$, with equality only at $y=0$. But since $w(y,t)>0$, there holds $w_1(y)\geq0$ for $|y|<\rho_0/2$. Consequently, $w_1(y)\equiv0$ for $|y|<\rho_0/2$, which contradicts the formula for $w_1$.
Therefore $h(t)$ and $l(t)$ are bounded from below. \end{proof}
The conclusion of the Shrinking case in Theorem \ref{thm:asym} is obvious. We only need to prove the Expanding and Bounded cases.
\begin{proof}[Proof of the Expanding case in Theorem \ref{thm:asym}] In this case, since $h(t)$ and $l(t)$ tend to infinity, using the same argument as in the proof of Corollary \ref{cor:expanding2}, there exist $t_0$ and $C>1/A$ such that $B_C((0,0))\subset U(t_0)$. By comparison principle, $B_{C(t)}((0,0))\subset U(t_0+t)$. Therefore, $B_{C(t-t_0)}((0,0))\subset U(t)$, $t\geq t_0$, where $C(t)$ satisfies (\ref{eq:ball2}) with $C(0)=C$.
On the other hand, since $U(0)$ is bounded, there exists $R>1/A$ such that $U(0)\subset B_R((0,0))$. Then $U(t)\subset B_{R(t)}((0,0))$, where $R(t)$ also satisfies (\ref{eq:ball2}) with $R(0)=R$.
Denoting $R_1(t)=C(t-t_0)$ and $R_2(t)=R(t)$, we have $B_{R_1(t)}((0,0))\subset U(t)\subset B_{R_2(t)}((0,0))$, $t>t_0$. By the theory of ordinary differential equations, we can easily deduce that $\lim\limits_{t\rightarrow\infty}R_1(t)/t=\lim\limits_{t\rightarrow\infty}R_2(t)/t=A$. We complete the proof. \end{proof}
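The asserted limits follow from a short ODE computation; the sketch below assumes, consistently with the estimates in the proof of Lemma \ref{lem:expanding}, that (\ref{eq:ball2}) reads $R'(t)=A-1/R(t)$:

```latex
% Why R(t)/t -> A for a solution of R' = A - 1/R with R(0) > 1/A.
Since $R'(t)=A-1/R(t)<A$, integration gives $R(t)\leq R(0)+At$, so
$\limsup_{t\to\infty}R(t)/t\leq A$. Conversely, $R(0)>1/A$ implies
$R'(t)\geq A-1/R(0)>0$, hence $R$ is increasing and $R(t)\to\infty$.
Thus for any $\varepsilon\in(0,A)$ there is $t_\varepsilon$ with
$1/R(t)<\varepsilon$ for $t>t_\varepsilon$; integrating
$R'(t)\geq A-\varepsilon$ over $(t_\varepsilon,t)$ yields
$\liminf_{t\to\infty}R(t)/t\geq A-\varepsilon$. Letting
$\varepsilon\to0$ gives $\lim_{t\to\infty}R(t)/t=A$.
```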
\begin{proof}[Proof of the Bounded case in Theorem \ref{thm:asym}]
Since $h(t)$ is bounded from below, by Lemma \ref{lem:alphad2}, as before, there exist $t_1$, $\rho_0$ such that $$ C_{\rho_0}\cap\Gamma(t)=\{(x,y)\mid x=w(y,t)\ \text{or}\ x=-w(y,t)\},\ t>t_1. $$
{\bf Step 1.} Asymptotic behavior of $C_{\rho_0/2}\cap\Gamma(t)$.
Arguing as in the proof of Lemma \ref{lem:1expanding}, there exist $\nu$ and $w_2(y)$ such that $\nu=\lim\limits_{t\rightarrow\infty}b(t)$ and $w(\cdot,t)\rightarrow w_2$ in $C^{2,1}([-\rho_0/2,\rho_0/2])$, as $t\rightarrow\infty$, where $w_2(y)=\nu-1/A+\sqrt{1/A^2-y^2}$.
{\bf Step 2.} Asymptotic behavior of $\{(x,y)\mid|y|\geq\rho_0/2\}\cap\Gamma(t)$.
Noting that $w_2(\rho_0/4)<\nu=\lim\limits_{t\rightarrow\infty}b(t)$, for sufficiently small $\epsilon>0$ there is $t_2$ such that $(-w_2(\rho_0/4)-\epsilon,w_2(\rho_0/4)+\epsilon)\subset(-b(t),b(t))$, $t>t_2$. We consider $u(x,t)$ on the set $[-w_2(\rho_0/4),w_2(\rho_0/4)]\times(t_2,\infty)$. Because $u(x,t)$ satisfies (\ref{eq:graph}) with the condition $n=1$ and ``$+$'', $\frac{\partial ^k }{\partial x^k}u$, $k=1,2,3$, are uniformly bounded in $(-w_2(\rho_0/4)-\epsilon/2,w_2(\rho_0/4)+\epsilon/2)\times(t_2+\epsilon^2,\infty)$. Therefore $\frac{\partial^i}{\partial t^i}u$, $i=1,2$, and $\frac{\partial ^k }{\partial x^k}u$, $k=1,2,3$, are uniformly bounded in $[-w_2(\rho_0/4),w_2(\rho_0/4)]\times(t_2+\epsilon^2,\infty)$.
Next we want to show that $\lim\limits_{t\rightarrow\infty}u(0,t)$ exists. If $\limsup\limits_{t\rightarrow\infty}u(0,t)>\liminf\limits_{t\rightarrow\infty}u(0,t)$, we can choose $y_0$ such that $\limsup\limits_{t\rightarrow\infty}u(0,t)>y_0>\liminf\limits_{t\rightarrow\infty}u(0,t)$. We consider the function $u_2(x)=y_0-1/A+\sqrt{1/A^2-x^2}$. By the same argument as in the proof of Lemma \ref{lem:1expanding}, we get a contradiction. Denote $\mu:=\lim\limits_{t\rightarrow\infty}u(0,t)$. As in the proof of Lemma \ref{lem:1expanding}, we can show that $u(\cdot,t)\rightarrow u_3$ in $C^{2,1}([-w_2(\rho_0/4),w_2(\rho_0/4)])$, as $t\rightarrow\infty$, where $u_3(x)=\mu-1/A+\sqrt{1/A^2-x^2}$.
{\bf Step 3.} Identify $\nu$ and $\mu$.
Since the graphs of $y=u_3(x)$ and $x=w_2(y)$ coincide for $\rho_0/4<y<\rho_0/2$, we obtain $\nu=\mu=1/A$. Consequently, $u(x,t)$ converges to $\varphi(x)=\sqrt{1/A^2-x^2}$, $x\in \mathbb{R}$, as $t\rightarrow\infty$, where we regard $u(x,t)$ and $\varphi(x)$ as 0 outside their domains of definition. (Indeed, from the proof, $\lim\limits_{t\rightarrow \infty}d_H(\Gamma(t),\partial B_{1/A}((0,0)))=0$.) We complete the proof. \end{proof}
\section{Appendix} In this section, we prove that there exists a unique smooth family of smooth hypersurfaces $\Gamma(t)$ satisfying \begin{equation}\label{eq:hcur} V=-\kappa+A,\ \text{on}\ \Gamma(t)\subset \mathbb{R}^{n+1}, \end{equation} where $\Gamma(0)=\partial U$ and $U$ is an $\alpha$-domain.
Since $\partial U$ is not necessarily smooth, we also use the level set method and prove that the interface evolution does not fatten.
\begin{defn}\label{def:alphad} We say that a domain $U$ is an $\alpha$-domain in $\mathbb{R}^{n+1}$ if
(1) $U\subset \mathbb{R}^{n+1}$ is an open set of the form $$ U=\{(x,y)\in\mathbb{R}\times\mathbb{R}^n\mid r<u(x)\}, $$ where $r=|y|$;
(2) $I=\{x\in\mathbb{R}\mid u(x)>0 \}$ is a bounded, connected interval.
(3) $u$ is smooth on $I$;
(4) $\partial U$ intersects each cylinder $\partial C_{\rho}$ with $0<\rho\leq\alpha$ twice and these intersections are transverse, where $C_{\rho}=\{(x,y)\in\mathbb{R}\times\mathbb{R}^n\mid r<\rho\}$. \end{defn}
For $U$ an $\alpha$-domain, we choose a smooth vector field $X:\mathbb{R}^{n+1}\rightarrow\mathbb{R}^{n+1}$ such that \\ (i) at any point $P\in\partial U$ not on the $x$-axis, $\langle X(P), \textbf{n}(P)\rangle<0$, where $\textbf{n}(P)$ is the inward unit normal vector at $P$; \\ (ii) near the two endpoints of $\partial U$, $X$ is a constant vector with $X\equiv\pm e_0=(\pm1,0,\cdots,0).$
Since $X\neq0$ on the compact set $\partial U$, there is an open neighbourhood $V\supset\partial U$ on which $|X|\geq\delta$ for some $\delta>0$.
\begin{figure}
\caption{Vector field $X$}
\label{fig:vectorX}
\end{figure}
\begin{prop}\label{pro:sigma1}For small enough $\rho>0$ there exists a smooth hypersurface $\Sigma\subset V$ with\\ (i) $X(P)\notin T_P\Sigma$ at all $P\in\Sigma$, i.e., $\Sigma$ is transverse to the vector field $X$.\\
(ii) $\Sigma=\partial U$ in $\{(x,y)\in\mathbb{R}\times\mathbb{R}^{n}\mid |y|\geq2\rho\}$.\\
(iii) $\Sigma\cap\{(x,y)\in\mathbb{R}\times\mathbb{R}^{n}\mid|y|\leq\rho\}$ consists of two flat disks $\Delta_a=\{(a,y)\in\mathbb{R}\times\mathbb{R}^{n}\mid|y|\leq\rho\}$ and $\Delta_b=\{(b,y)\in\mathbb{R}\times\mathbb{R}^{n}\mid|y|\leq\rho\}$ for some $a<b$. \end{prop}
See Figure \ref{fig:sigma1}; this proposition can be proved in the same way as Proposition \ref{pro:sigma2}.
\begin{figure}
\caption{Proof of Proposition \ref{pro:sigma1}}
\label{fig:sigma1}
\end{figure}
Let $\phi^{t}:\mathbb{R}^{n+1}\rightarrow\mathbb{R}^{n+1}$, $t\in(-\delta,\delta)$, be the flow generated by the vector field $X$ on $\mathbb{R}^{n+1}$, determined by $$ \left\{ \begin{array}{lcl} \displaystyle{\frac{d\phi^t(P)}{dt}=X(\phi^t(P))},\ P\in \Sigma,\\ \phi^0(P)=P,\ \ \ \ \ P\in \Sigma. \end{array} \right. $$
We denote $\sigma(P,s):=\phi^{s}(P)$. As in Section 5, suppose $\Gamma(t)\subset V$ $(0<t<T)$ are smooth hypersurfaces with $\sigma^{-1}(\Gamma(t))$ being the graph of $u(\cdot,t)$ for $u:\Sigma\times[0,T)\rightarrow\mathbb{R}$. Let $z_1,z_2,\cdots,z_n$ be local coordinates on an open subset of $\Sigma$. If $\Gamma(t)$ evolves by $V=-\kappa+A$, then in these coordinates $u$ satisfies the following parabolic equation \begin{equation}\label{eq:para} \frac{\partial u}{\partial t}=a_{ij}(z,u,\nabla u)\frac{\partial^2u}{\partial z_i\partial z_j}+b(z,u,\nabla u). \end{equation}
\begin{figure}
\caption{The transportation from $\Sigma$ to $\Gamma$}
\label{fig:charact}
\end{figure} For example, on $\Delta_a$, by calculation, $\sigma(y_1,y_2,\cdots,y_n,s)=(a-s,y_1,y_2,\cdots,y_n)$. Then $u$ satisfies the ``$-$'' condition of (\ref{eq:graph}). \begin{prop}\label{pro:uniq} For $n\geq1$, let $\Gamma_1(t)$, $\Gamma_2(t)$ $(0\leq t<T)$ be two families of smooth hypersurfaces and let $\sigma^{-1}(\Gamma_j(t))$ be the graph of $u_j(\cdot,t)$ for certain $u_j\in C(\Sigma\times[0,T))$. Assume that the $u_j$ are smooth on $\Sigma\times(0,T)$ as well as on $\Sigma\setminus(\Delta_a\cup\Delta_b)\times[0,T)$. If the $\Gamma_j(t)$ evolve by $V=-\kappa+A$ and $\Gamma_1(0)=\Gamma_2(0)$, then $\Gamma_1(t)=\Gamma_2(t)$ for $0<t<T$. \end{prop} We use the same method as in \cite{AAG}. The proof is similar to that of Proposition \ref{pro:uniq2} and is omitted.
\begin{thm}\label{thm:partialUmeancurvature}
If $U$ is an $\alpha$-domain with smooth boundary, let $D(t)$ and $E(t)$ be the open and closed evolutions of $V=-\kappa+A$ with $D(0)=U$ and $E(0)=\overline{U}$. Then there exists $T>0$ such that $\partial D(t)$ and $\partial E(t)$ are smooth hypersurfaces for $0<t\leq T$ and $\partial D(t)=\partial E(t)$. Moreover, denoting $\Sigma(t)=\partial D(t)=\partial E(t)$, $\Sigma(t)$ can be written as $\Sigma(t)=\{(x,y)\in\mathbb{R}\times\mathbb{R}^n\mid |y|=u(x,t), a(t)\leq x\leq b(t)\}$ and $(u,a,b)$ is the solution of (Q) with $\theta_{\pm}=\pi/2$. \end{thm} \begin{proof}
We only give a sketch of the proof. By an approximation argument similar to Lemma 6.2 and Lemma 6.4, $\partial D(t)$ and $\partial E(t)$ are smooth hypersurfaces and can be represented by $\sigma(P,u_j(P))$ for some $u_j$, $j=1,2$. Then we can use Proposition \ref{pro:uniq} to prove $\partial D(t)=\partial E(t)$. Therefore $\Gamma(t)=\partial E(t)$ can be represented by $\Gamma(t)=\{(x,y)\in\mathbb{R}\times\mathbb{R}^n\mid |y|=u(x,t), a(t)\leq x\leq b(t)\}$. Using Theorem \ref{thm:openevolutionmeancurvature}, $(u,a,b)$ is the solution of (Q) with $\theta_{\pm}=\pi/2$. \end{proof}
{\bf Acknowledgment}
The author expresses his hearty thanks to Professor Hiroshi Matano and Professor Yoshikazu Giga for their stimulating suggestions. The author learned the extended intersection number principle from Professor Hiroshi Matano, and learned the techniques concerning viscosity solutions and the formation of singularities in Section 6, contained in \cite{A2}, from Professor Yoshikazu Giga. The author is grateful to the anonymous referee for valuable suggestions to improve the presentation of this paper.
\end{document} |
\begin{document}
\title{A Stable and Adaptive Polygenic Signal Detection Method Based on Repeated Sample Splitting}
\section*{Abstract}
Focusing on polygenic signal detection in high dimensional genetic association studies of complex traits, we develop an adaptive test for generalized linear models to accommodate different alternatives. To facilitate valid post-selection inference for high dimensional data, our study here adheres to the original sample-splitting principle but does so, repeatedly, to increase the stability of the inference. We show the asymptotic null distributions of the proposed test for both fixed and diverging numbers of variants. We also show the asymptotic properties of the proposed test under local alternatives, providing insights on why power gain attributed to variable selection and weighting can compensate for efficiency loss due to sample splitting. We support our analytical findings through extensive simulation studies and two applications. The proposed procedure is computationally efficient and has been implemented as the R package DoubleCauchy.
\\[0.5cm] {\it Keywords}: Adaptive; High dimensional data; Polygenic risk score; Robustness; Sample splitting.
\section{Introduction}\label{intro}
Polygenic signal detection can improve the power of genetic studies of complex traits by aggregating weak signals across a large number of genetic variants that do not, individually, achieve statistical significance. The general concept of set-based testing has been well examined in settings such as gene-based association studies \citep{Derkach2014, Zhao2020} or multiple-phenotype analyses \citep{Liu2018}, but applications of existing methods to high dimensional genetic data require additional considerations.
For simultaneously testing regression coefficients in high dimensional generalized linear models (GLMs), \citet{Goeman2011} proposed a feasible test statistic for the scenario when the number of variants is fixed but can be larger than the sample size. \citet{Guo2016} first investigated the asymptotic properties of the test statistic of \citet{Goeman2011} for a diverging number of variants, and then proposed a U-statistic for GLMs with unbounded link functions. The p-value calculation based on the asymptotic normal approximation, however, is not accurate for stringent significance levels, and the test is not adaptive to different alternatives.
Recently, \citet{Wu2019} proposed an adaptive method, where the test statistic is based on different functions of variant-specific score statistics, with different functions targeting different alternatives. However, accurate inference requires a parametric bootstrap, which is computationally expensive for large-scale studies or stringent significance levels. Further, the method of \citet{Wu2019} aggregates information across {\it all} variants, and the lack of variable selection can adversely affect power despite its using the whole sample to derive the variant-specific score statistics.
In this paper, we focus on polygenic signal detection in high dimensional generalized linear models, and we propose to use sample splitting, repeatedly, for valid and stable post-variable-selection inference. One sample splitting produces two independent sub-samples, of which one is used for variable selection, and the other for valid association testing without the need for correcting for variable-selection bias. This general principle has been used in many study settings, but the inherent instability has been noted, including in variable selection \citep{Larry2009, Meinshausen2009, Meinshausen2010}, change-point detection \citep{Zou2020}, and more recently selective inference \citep{Rinaldo2019, Barber2019, Dai2020}. Repeated sample splitting is a natural remedy, but it is not obvious how to aggregate information across multiple, correlated sample splits to derive a valid and efficient test.
In the context of polygenic signal detection, the first polygenic risk score (PRS) method \citep{ISC2009} used the one-time-only sample splitting strategy. Specifically, the method divides the data into a training sample and a testing sample, and then performs a two-stage analysis. Stage one applies a variable selection procedure to the training sample to obtain a set of potentially associated variants and their corresponding weights. Using the independent testing sample, stage two first constructs a polygenic risk score for each individual by calculating a weighted sum of the numbers of the risk allele across the selected variants, and it then evaluates the aggregated score for association with the trait of interest. The original polygenic method has since been extended \citep{Vilhj2015,Shi2016,Lloyd2019,Li2020}, but the strategy of repeated sample splitting has not been examined.
In this work, we combine the concepts of ``repeated sample splitting'' and ``adaptive testing'' to develop a robust polygenic association test for testing high dimensional regression coefficients in generalized linear models. In Section \ref{method}, we first review the classical polygenic association test, based on a weighted sum of the numbers of the risk allele across the selected variants, and we note its equivalence to a weighted sum of the variant-specific score statistics. We then consider different weighting factors for the score vector, where the different weights are tailored to different alternatives. To aggregate information across the different weighting factors, we use the recent Cauchy method of \citet{Liu2019}, and we discuss the connection of our adaptive method with that of \citet{Wu2019}. To improve the stability of our inference, we then introduce the combination procedure that aggregates information across multiple, correlated sample splits. Finally, we derive the asymptotic null distributions of the proposed test for both fixed and diverging numbers of variants, and we study its asymptotic properties under local alternatives. In Section \ref{simulation} we present extensive simulation results for method evaluation and comparison. In Section \ref{application} we provide results from two applications, including additional simulation studies using the real genetic data from the two applications combined with simulated outcome data. We conclude with discussion in Section \ref{discussion}, which includes information for DoubleCauchy, an R package that implements the proposed test.
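For reference, the Cauchy combination rule of \citet{Liu2019} maps each p-value to a standard Cauchy variate and takes a weighted average, whose tail remains approximately standard Cauchy under arbitrary dependence. A minimal sketch (the p-values and equal weights below are illustrative, not taken from the paper):

```python
import math

def cauchy_combine(pvals, weights=None):
    """Cauchy combination of p-values (Liu et al., 2019):
    T = sum_j w_j * tan((0.5 - p_j) * pi) with weights summing to one;
    the combined p-value is P(Cauchy(0,1) > T) = 0.5 - arctan(T)/pi."""
    if weights is None:
        weights = [1.0 / len(pvals)] * len(pvals)
    T = sum(w * math.tan((0.5 - p) * math.pi) for w, p in zip(weights, pvals))
    return 0.5 - math.atan(T) / math.pi

# Combining three illustrative p-values with equal weights:
print(cauchy_combine([0.01, 0.20, 0.35]))  # approximately 0.028
```

Because the transformed values are heavy-tailed, the combined p-value is driven by the smallest inputs, which is what makes this rule suitable for aggregating correlated sample splits.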
\section{Methods}\label{method}
\subsection{Notations}\label{method_notations}
Let $Y \in \mathbb{R}^{n \times 1}$ be the outcome variable of interest, $G \in \mathbb{R}^{n \times J}$ the genotype matrix, and $ X \in \mathbb{R}^{n \times q} $ the covariate matrix for a sample of size $n$ with $J$ genetic variants and $q$ covariates. For clarity, let $y_i$ be the response for individual $i$, $g_{ij}$ the genotype for individual $i$ and variant $j$, and $x_{ij'}$ the covariate value for individual $i$ and covariate $j'$, $i=1,\ldots, n$, $j=1,\ldots, J$, and $j'=1,\ldots, q$. Further, let $G_{i} \in \mathbb{R}^{J \times 1}$ be the genotype vector for individual $i$, $G_{j} \in \mathbb{R}^{n \times 1} $ the genotype vector for variant $j$, $ X_{i} \in \mathbb{R}^{q \times 1}$ the covariate vector for individual $i$, and $ X_{j'} \in \mathbb{R}^{n \times 1}$ the vector for covariate $j'$.
We assume that, conditional on $(G_i, X_i)$, $y_i$ follows a distribution with density function $f(y_i)=\exp\{(y_i\theta_i-b(\theta_i))/a_i(\phi)+c(y_i,\phi)\}$ for some specific functions $a(\cdot)$, $b(\cdot)$ and $c(\cdot)$, where $\theta_i$ is the canonical parameter, $\phi$ the dispersion parameter, $\text{var}(y_i| G_i, X_i) = a_i(\phi)\nu(\mu_i)$, and $\nu(\mu_i)$ the variance function. We consider the generalized linear model that models $\mu_i=b'(\theta_i)=E(y_i| G_i, X_i)$ for different types of response variables in the exponential family through a monotone and differentiable link function $\mathcal{G}(\cdot)$, $\mathcal{G}(\mu_i) = G_i^T \beta + X_i^T\beta_x$, where $ \beta$ and $ \beta_x$ are, respectively, $J$- and $q$-dimensional vectors of regression coefficients; $q$ is fixed but $J$ may vary depending on the study setting. Among the $J$ genetic variables, we use $\mathcal{M}^*$ and $|\mathcal{M}^*|$ to denote, respectively, the set and the number of truly associated ones. For simplicity but without loss of generality, we also assume that $G_i$ and $X_i$ have been mean centred at zero and standardized to have variance one.
\subsection{The classical polygenic risk score for high dimensional association test}\label{method_prs}
Suppose we have $2n$ independent observations. The classical polygenic risk score-based high dimensional association testing method \citep{ISC2009} first randomly splits the sample into two equal subsets, $D_{n,1}$ and $D_{n,2}$; the corresponding data and parameter estimates such as $y$, $X$, $G$, and $\hat \beta$ will carry superscripts $^{(1)}$ and $^{(2)}$, respectively, for the two subsets, unless specified otherwise. A variable selection procedure is then applied to the training sample $D_{n,1}$ to select a subset of candidate genetic variables, $\mathcal{M}$, where we define $J_2 = |\mathcal{M}|$.
To test the $J_2$ variants simultaneously in the testing sample $D_{n,2}$, a single polygenic risk score $G_i^*$ is constructed by aggregating the $J_2$ selected variables using $G^{(2)}_{i}$, weighted by the effect estimates $\hat{\beta}^{(1)}_j$, $j \in \mathcal{M}$, obtained from $D_{n,1}$. That is, $G_i^* = \sum_{j=1}^{J_2} \hat{\beta}^{(1)}_jg^{(2)}_{ij}, i=1, \ldots, n$. The inference is then based on the generalized linear regression model applied to $D_{n,2}$, \begin{equation}\label{eq:1}
\mathcal{G}\{E(y^{(2)}_i|G^*_i, X_i^{(2)})\} = G_i^*\beta^* + {X_i^{(2)T}}\beta_x, \end{equation} and testing \begin{equation}\label{h0s}
H_0: \beta^*=0 \:\: \text{versus} \:\: H_1: \beta^* \neq 0. \end{equation} The corresponding score statistic is, $T_1 = \sum_{i=1}^n (y^{(2)}_i-\hat{\mu}_{i}^{(2)})G_i^*$, where $\hat{\mu}_{i}^{(2)} = \mathcal{G}^{-1}({X_i^{(2)T}}\hat \beta_{x})$ and $\hat{\beta}_{x}$ is the maximum likelihood estimate of $\beta_x$ under $H_0$. The distribution of standardized $T_1$ can be approximated by $\chi_1^2$, and the p-value of a test based on $T_1$ will be denoted as $p_1$.
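To fix ideas, the two-stage procedure can be sketched as follows for a Gaussian outcome with an intercept-only null model; the marginal-effect selection rule, its threshold, and all data below are hypothetical simplifications, not the actual selection procedure of \citet{ISC2009}:

```python
import math, random

# Illustrative one-split version of the classical two-stage PRS test, for a
# Gaussian outcome with an intercept-only null model. The marginal-regression
# selection rule and its threshold are hypothetical simplifications.
random.seed(7)
n2, J = 100, 30                     # n2 observations per half, J variants
y = [random.gauss(0, 1) for _ in range(2 * n2)]
g = [[random.choice([0, 1, 2]) for _ in range(J)] for _ in range(2 * n2)]
train, test = range(0, n2), range(n2, 2 * n2)

# Stage 1: on the training half, estimate marginal effects and select variants.
beta_hat = []
for j in range(J):
    xj = [g[i][j] for i in train]
    yj = [y[i] for i in train]
    xbar, ybar = sum(xj) / n2, sum(yj) / n2
    sxx = sum((x - xbar) ** 2 for x in xj) or 1.0
    beta_hat.append(sum((x - xbar) * (v - ybar) for x, v in zip(xj, yj)) / sxx)
selected = [j for j in range(J) if abs(beta_hat[j]) > 0.1]  # hypothetical cut-off

# Stage 2: on the testing half, score test of the aggregated risk score G*.
mu = sum(y[i] for i in test) / n2                  # fitted null mean
G_star = [sum(beta_hat[j] * g[i][j] for j in selected) for i in test]
Gbar = sum(G_star) / n2
T1 = sum((y[i] - mu) * (G_star[k] - Gbar) for k, i in enumerate(test))
sigma2 = sum((y[i] - mu) ** 2 for i in test) / n2
var_T1 = sigma2 * sum((gs - Gbar) ** 2 for gs in G_star) or 1.0
z = T1 / math.sqrt(var_T1)
p1 = math.erfc(abs(z) / math.sqrt(2.0))            # chi_1^2 tail prob. of z^2
print(0.0 < p1 <= 1.0)  # True
```

Note that, as in the text, the test uses the training-sample estimates only as weights; the score test itself is computed entirely on the independent testing half.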
This classical polygenic association test has since been improved on several fronts, including modelling the dependency structure (i.e.\ linkage disequilibrium) between genetic variables \citep{Vilhj2015} and better estimation of $\beta_j^{(1)}$ \citep{Shi2016}, among others \citep{Lloyd2019}. However, additional work is needed. To facilitate our discussion, it is instructive to first re-formulate $T_1$ as follows,
\begin{align*}
T_1 & = \sum_{i=1}^n (y^{(2)}_i-\hat{\mu}_{i}^{(2)})G_i^* = \sum_{i=1}^n (y^{(2)}_i-\hat{\mu}_{i}^{(2)})\sum_{j=1}^{J_2} \hat{\beta}^{(1)}_j g^{(2)}_{ij} \\
&= \sum_{j=1}^{J_2} \hat{\beta}^{(1)}_j \sum_{i=1}^n (y^{(2)}_i-\hat{\mu}_{i}^{(2)}) g^{(2)}_{ij} = n\sum_{j=1}^{J_2} \hat{\beta}^{(1)}_j S_j, \end{align*} where $S_j = n^{-1}\sum_{i=1}^n (y^{(2)}_i-\hat{\mu}_{i}^{(2)}) g^{(2)}_{ij}$. Thus, $T_1$ constructed based on the aggregated risk score $G_i^*$ is analytically equivalent to a {\it linearly} weighted average of the score statistics, $S_j$'s, across the $J_2$ genetic variants.
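The algebraic equivalence above is easy to confirm numerically; the sketch below uses arbitrary placeholder values for the responses, fitted null means $\hat{\mu}_i^{(2)}$, genotypes and training-sample weights (none taken from real data):

```python
import random

# Numerical check of the identity T1 = sum_i (y_i - mu_i) G_i^* = n * sum_j beta_j * S_j.
random.seed(1)
n, J2 = 50, 5
y    = [random.gauss(0, 1) for _ in range(n)]
mu   = [random.gauss(0, 1) for _ in range(n)]          # placeholder fitted null means
g    = [[random.choice([0, 1, 2]) for _ in range(J2)] for _ in range(n)]
beta = [random.gauss(0, 1) for _ in range(J2)]          # placeholder training-sample weights

# Left-hand side: score statistic based on the aggregated risk score G_i^*.
G_star = [sum(beta[j] * g[i][j] for j in range(J2)) for i in range(n)]
T1_prs = sum((y[i] - mu[i]) * G_star[i] for i in range(n))

# Right-hand side: weighted sum of variant-specific score statistics S_j.
S = [sum((y[i] - mu[i]) * g[i][j] for i in range(n)) / n for j in range(J2)]
T1_sum = n * sum(beta[j] * S[j] for j in range(J2))

print(abs(T1_prs - T1_sum) < 1e-9)  # True
```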
Tests based on $T_1$ are sub-optimal when the signs of $\hat{\beta}^{(1)}_j$ and $S_j$ differ. When the effect size $\beta_j$ is large, sign-consistent results between $\hat{\beta}^{(1)}_j$ from the training sample and $S_j$ from the testing sample are likely, which prevents the $S_j$'s of variants with opposite directions of effect from cancelling each other out. However, for weak signals there is no theoretical guarantee of obtaining sign-consistent $\hat{\beta}^{(1)}_j$ and $S_j$ \citep{Jin2014}, so it is better to develop a test that does not rely on this assumption. Recent work on association tests for rare variants has also shown that $T_1$-type tests are powerful only when a large proportion of the variants being tested are causal and, in addition, their genetic effects are in the same direction \citep{Derkach2014}. Further, the direct use of the $\hat{\beta}^{(1)}_j$'s as weights may not be robust to different alternatives. Finally, when the signal-to-noise ratio is low, as is often the case in practice, the one-time-only sample splitting approach may not be reliable \citep{Meinshausen2009}. Figure \ref{fig:cf} illustrates the p-value lottery phenomenon associated with $T_1$ when it is applied to a real dataset with $2n=1409$ and $J=3754$; see Section \ref{application} for details of the application data.
\begin{figure}
\caption{ Histograms of p-values of $T_1$ (white) and the proposed $T_{dc}$ (blue) based on randomly splitting a real dataset into training and testing samples, independently 100 times. The right panel is a zoomed-in plot for the proposed $T_{dc}$. Details of the application data are given in Section \ref{application}.}
\label{fig:cf}
\end{figure}
\subsection{An adaptive procedure for polygenic signal detection}\label{method_adaptive}
Here we develop a robust method that is adaptive to different alternatives. We first propose new tests by considering different weighting schemes, given a particular sample split. We then improve the stability of our inference through repeated sample splitting.
Recall that testing (\ref{h0s}) in (\ref{eq:1}) can be reformulated as testing \begin{equation}\label{h0}
H_0: \beta=0 \:\: \text{versus} \:\: H_1: \beta \neq 0, \end{equation} in \begin{equation}\label{eq:2}
\mathcal{G}\{E(y^{(2)}_i|G_i^{(2)}, X_i^{(2)})\} = G_i^{(2)T}\beta + {X_i^{(2)T}}\beta_x, \end{equation} where $\beta=(\beta_1,\ldots, \beta_{J_2})^T$. The proposed new test statistics have the following form, \begin{equation}\label{eq:Tgamma}
T_{\gamma} = n \sum_{j=1}^{J_2} {w}_j^{\gamma-2} S_j^2,\:\: \gamma \in \Gamma = \{2, 4, 6, \ldots\}, \end{equation} where $w_j$ depends on $\hat{\beta}^{(1)}_j$ obtained from $D_{n,1}$, and $\gamma$ is an even integer to avoid signal cancellation between variants.
Let $S=(S_1, \ldots, S_{J_2})^T$ and $R=\text{diag}\{r_j\}=\text{diag}\{{w}_j^{\gamma-2}\}$, $j=1,\ldots,J_2$. Then \begin{equation}\label{eq:3}
T_{\gamma} = n{S^{T}RS}. \end{equation} We can easily modify $R$ to include off-diagonal elements to reflect potential linkage disequilibrium between genetic variables, and we will study the asymptotic null distributions of $T_{\gamma}$ in Theorems 1--3 for both fixed and diverging $J_2$.
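As a concrete illustration of (\ref{eq:Tgamma}) and (\ref{eq:3}), the statistic can be computed in a few lines. This is a hypothetical sketch with $w_j = \hat{\beta}^{(1)}_j$ and simulated scores, not the full procedure:

```python
import numpy as np

def T_gamma(S, beta_hat, gamma, n):
    """T_gamma = n * S^T R S with r_j = w_j^(gamma - 2) and w_j = beta_hat_j."""
    r = beta_hat ** (gamma - 2)   # gamma even, so r_j >= 0 for any sign of beta_hat_j
    return n * np.sum(r * S ** 2)

rng = np.random.default_rng(1)
n, J2 = 100, 20
S = rng.normal(scale=n ** -0.5, size=J2)   # per-variant scores S_j
beta_hat = rng.normal(size=J2)             # training-sample weights
stats = {gam: T_gamma(S, beta_hat, gam, n) for gam in (2, 4, 6)}
# gamma = 2 gives equal weights r_j = 1, i.e., an unweighted sum-of-squares test
assert np.isclose(stats[2], n * np.sum(S ** 2))
```

Because $\gamma$ is even, each weight $r_j$ is non-negative and no signal cancellation across variants can occur.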
The different $\gamma$ values in (\ref{eq:Tgamma}) adapt to different signal sparsities. To obtain an accurate yet computationally efficient adaptive test, we propose to aggregate $p_{\gamma}$'s, the p-values of $T_{\gamma}, \gamma \in \Gamma= \{2, 4, 6, \ldots\}$, and $p_1$, the p-value of $T_1$, using the Cauchy combination method recently proposed by \citet{Liu2019}. The Cauchy method can accommodate complex dependency structure among p-values without explicitly modelling it. In our setting, the proposed test statistic is \begin{equation}\label{eq:Tc}
T_{c}={(|\Gamma|+1)}^{-1} \sum_{\gamma \in \Gamma \cup \{1\}} \tan\{(0.5-p_{\gamma})\pi\}. \end{equation} The tail of the null distribution of $T_{c}$ can be well approximated by the standard Cauchy distribution, as long as the individual $p_{\gamma}$'s are accurate, which we study in Sections \ref{method_asymptotic} and \ref{simulation}. The final p-value of $T_{c}$ is $p_c=1/2-(\text{arctan}\ t_{c})/\pi$, where $t_{c}$ is the observed value of $T_c$.
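The Cauchy combination step of (\ref{eq:Tc}) is straightforward to implement. The sketch below, with made-up p-values, shows its key property: a single very small p-value dominates the combined statistic.

```python
import numpy as np

def cauchy_combine(pvals):
    """Average the Cauchy-transformed p-values and invert the standard Cauchy tail."""
    t_c = np.mean(np.tan((0.5 - np.asarray(pvals)) * np.pi))
    return 0.5 - np.arctan(t_c) / np.pi   # p_c = 1/2 - arctan(t_c)/pi

# One tiny p-value dominates the average, regardless of the other components:
assert cauchy_combine([1e-6, 0.4, 0.6, 0.5]) < 1e-4
# Unremarkable p-values combine to an unremarkable p_c:
assert 0.05 < cauchy_combine([0.3, 0.5, 0.7]) < 0.95
```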
Here we acknowledge that $T_{\gamma}$ is related to SPU type of test statistics proposed by \citet{Wu2019}. For an integer $\gamma \geq 1$, $\text{SPU}(\gamma) = \sum_{j=1}^J S_j^{\gamma},$ where $S_j$ is obtained from the {\it whole} sample. If we omit the sample splitting step in our approach, $J_2=J$, and let $w_j = S_j$, we have $T_{\gamma} \propto \text{SPU}(\gamma)$ for all $\gamma>1$.
The authors of $\text{SPU}(\gamma)$ have noted that for an {\it even} integer $\gamma \rightarrow \infty$, $\text{SPU}(\gamma) \propto (\sum_j |S_j|^{\gamma})^{1/\gamma} \rightarrow \max_j|S_j|$, defined as $\text{SPU}(\infty)$; this suggests that a larger $\gamma$ is more powerful for sparse alternatives. To make SPU robust to different alternatives, the authors then proposed an adaptive SPU, $\text{aSPU} = \text{min}_{\gamma \in \Gamma_{\text{aSPU}}} \{p_{\text{spu}(\gamma)}\},$ where the recommended $\Gamma_{\text{aSPU}} = \{1,2,3,4,5,6,\infty\}$, and $p_{\text{spu}(\gamma)}$ is the p-value of $\text{SPU}(\gamma)$. The asymptotic $p_{\text{spu}(\gamma)}$ for $\gamma=1$ and $2$ can be obtained with mild conditions imposed on moments of the $S_j$'s and their correlation structure \citep{Wu2019}, but the asymptotic approximation is not accurate for $\gamma > 2$. The authors then proposed to calculate $p_{\text{spu}(\gamma)}$, and subsequently $p_{\text{aSPU}}$, based on parametric bootstrap, which is computationally expensive.
The distinction between $\text{aSPU}$ and the proposed $T_c$ is four-fold. Firstly, although $T_c$ includes evidence from $T_1$, the building block of $T_c$ is $T_{\gamma}$, where $\gamma$ is an even integer, which facilitates studies of asymptotic properties of the proposed tests; see Theorems 1--4 for details. Secondly, tests using different $\gamma$ values are correlated with each other. Thus, even if the individual p-value estimation is accurate, the minimum-p approach of $\text{aSPU}$ makes the inference more difficult than that of $T_c$, which is based on the easy-to-implement Cauchy method. Thirdly, although $\text{aSPU}$ uses the whole sample for association testing, it aggregates information across all $J$ genetic variants, many of which may be from the null, leading to reduced power compared to $T_c$, which benefits from variable selection. Lastly, the flexible structure of $w_j$ in $T_{\gamma}$ can incorporate other information available for each variant $j$, such as the functional importance measure of a genetic variant \citep{Iuliana2016}.
To further robustify $T_{c}$ against the sampling variation inherent in the one-time-only sample splitting approach, we then consider repeating the sample splitting $m$ times. For the $s$th sample split, $s =1, \ldots, m$, we obtain $T_{c,s}$ and its corresponding p-value, $p_{c,s}$. To combine the $p_{c,s}$'s while not explicitly modelling their correlation, we again utilize the Cauchy method of \citet{Liu2019}. The proposed double Cauchy combination test statistic is \begin{equation}\label{eq:Tdc}
T_{dc}=m^{-1} \sum_{s=1}^m \tan\{(0.5-p_{c,s})\pi\}. \end{equation} Similar to inference based on $T_c$, the tail of the null distribution of $T_{dc}$ can be well approximated by the standard Cauchy distribution, as long as the individual p-values to be combined are accurate, which we study next.
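The overall repeated-splitting procedure can be sketched as below. The functions `fit_weights` and `tc_pvalue` are hypothetical stand-ins for the training-stage selection/estimation and the testing-stage $T_c$ p-value computation; only the splitting loop and the double Cauchy combination of (\ref{eq:Tdc}) are spelled out.

```python
import numpy as np

def repeated_split_pvalue(y, g, m, rng, fit_weights, tc_pvalue):
    """p-value of T_dc from m random 50%-50% splits of (y, g)."""
    n2 = len(y) // 2
    p_cs = []
    for _ in range(m):
        idx = rng.permutation(len(y))
        train, test = idx[:n2], idx[n2:]
        w = fit_weights(y[train], g[train])           # D_{n,1}: select and estimate
        p_cs.append(tc_pvalue(y[test], g[test], w))   # D_{n,2}: p-value of T_c
    t_dc = np.mean(np.tan((0.5 - np.asarray(p_cs)) * np.pi))
    return 0.5 - np.arctan(t_dc) / np.pi              # final p-value of T_dc

# Sanity check with trivial stand-ins: constant p_{c,s} = 0.5 yields p_dc = 0.5
rng = np.random.default_rng(2)
y, g = rng.normal(size=40), rng.normal(size=(40, 5))
p_dc = repeated_split_pvalue(y, g, 5, rng,
                             lambda y1, g1: np.ones(g1.shape[1]),
                             lambda y2, g2, w: 0.5)
assert np.isclose(p_dc, 0.5)
```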
\subsection{Asymptotic properties of $T_{\gamma}$} \label{method_asymptotic}
To make the dependency of $T_{\gamma}$ on $n$ and $J_2$ explicit, we use $T_{n,J_2,\gamma}$ to denote $T_{\gamma}$ in this section. We study the asymptotic properties of $T_{n,J_2,\gamma}$ for both fixed and diverging $J_2$, under the null or local alternatives. For notational simplicity, we now omit the superscript $^{(2)}$ from $Y \in \mathbb{R}^{n \times 1}$, $G \in \mathbb{R}^{n \times J_2}$ and $X \in \mathbb{R}^{n \times q}$, representing, respectively, the outcome, genotype and covariate data in the testing sample $D_{n,2}$, where $J_2$ is the number of variants to be tested. Recall that $T_{n,J_2,\gamma} = n{S^{T}RS}$, where $S=(S_1, \ldots, S_{J_2})^T$ is the score vector, $R=\text{diag}\{r_j\}$, and $\gamma$ is an even integer. The covariance matrix of $n^{1/2}S$ is $\Sigma_s=E\{a_i(\phi)\nu(\mu_i)G_iG_i^T\}$, where $G_i \in \mathbb{R}^{ J_2 \times 1}$ is the genotype vector for individual $i$, $\epsilon=(\epsilon_1, \ldots, \epsilon_n)^T=Y-\mathcal{G}^{-1}(G\beta+X\beta_x)$, and $\epsilon_0 = (\epsilon_{01}, \ldots, \epsilon_{0n})^T = Y-\mathcal{G}^{-1}(X\beta_x)$.
The following theorem gives the asymptotic null distribution of $T_{n,J_2,\gamma}$, provided that the same regularity conditions, required for the convergence of $S$ to a multivariate normal random variable, hold \citep{Goeman2011}. In addition, we ignore the nuisance parameters $a_i(\phi)$ and $\beta_x$ for now and discuss how to include them in Section \ref{discussion}. We provide all proofs in the Supplementary Material.
\begin{theorem}
\label{thm1}
Under the null hypothesis $H_0$ in (\ref{h0}), for any fixed finite $J_2$ and $\gamma$, $T_{n,J_2,\gamma} \rightarrow T_{J_2, \gamma}$
in distribution as $n \rightarrow \infty$, where $T_{J_2,\gamma}$ and $\sum_{j=1}^{J_2} \lambda_{J_2,j}\chi_{1j}^2$ are equivalent in distribution, the $\chi_{1j}^2$'s are independent variables following the central chi-square distribution with 1 degree of freedom, $\chi_1^2$, $\lambda_{J_2,1} \geq \ldots \geq \lambda_{J_2,J_2}$ are the eigenvalues of $C_s^TRC_s$, and $\Sigma_s = C_sC_s^T$. \end{theorem}
When $Y$ is normally distributed, the equivalence in distribution between $T_{n,J_2,\gamma}$ and $T_{J_2,\gamma}$ holds for any $n$ (and finite $J_2$); when both $n$ and $J_2$ are diverging, additional assumptions are required.
\begin{assumption}
\label{asm:1}
Assume $G_{i} = C_{g}Z_{i}, \forall i$, where $C_g$ is a $J_2 \times J_2$ matrix and $C_gC_g^T = \Sigma_g$, and $Z_i=(z_{i1}, \ldots, z_{iJ_2})^T$ with $E(Z_i) = 0$ and $\text{cov}(Z_i) = I_{J_2}$. Assume $z_{ij}$ has finite eighth moment and $E(z_{ij}^4) = 3 + \Delta \ < \infty$, $\forall j$, where $\Delta$ is a constant and $\Delta > -3$, and $E(\Pi_j z_{ij}^{\nu_j}) = \Pi_j E(z_{ij}^{\nu_j})$, where $ \sum_j \nu_j \leq 8$ and all $\nu_j$'s are non-negative integers. \end{assumption} \begin{assumption}
\label{asm2}
Let $f_g$ be the probability density of $G$ and $D(f_g)$ be its support. Assume $E(\epsilon \mid G) = 0$ and $E(\epsilon^3 \mid G) = 0$, and there are positive constants $K_1$ and $K_2$ such that $E(\epsilon^2 \mid G) > K_1$ and $E(\epsilon^4 \mid G) < K_2$ almost everywhere for $g \in D(f_g)$. \end{assumption} \begin{assumption}
\label{asm3}
There exist real numbers $\rho_{\infty,j}$'s such that $\lim_{J_2 \rightarrow \infty} \rho_{J_2,j} = \rho_{\infty, j}$ uniformly $\forall j$, and $\lim_{J_2 \rightarrow \infty} \sum_{j=1}^{J_2} \rho_{J_2,j} = \sum_{j=1}^{\infty} \rho_{\infty,j} < \infty$, where $\rho_{J_2,j} = \lambda_{J_2,j}/\sqrt{tr(R\Sigma_s)^2},\ j=1, \ldots, J_2$, which are the eigenvalues of $C_s^TRC_s/\sqrt{tr(R\Sigma_s)^2}$ in descending order. \end{assumption} \begin{assumption}
\label{asm4}
$n \{tr(R\Sigma_g)^2/tr^2(R\Sigma_g)\} \rightarrow \infty$ as $n$ and $J_2 \rightarrow \infty$. \end{assumption}
Assumptions 1--3 are standard in studying high dimensional testing \citep{Guo2016,Zhang2019}. Assumption 4 specifies a relationship between $n$ and $J_2$. Because $tr(R\Sigma_g)= \sum_{j=1}^{J_2} \lambda_{J_2,j}$ and $tr(R\Sigma_g)^2 = \sum_{j=1}^{J_2} \lambda_{J_2,j}^2$, we have $n \{tr(R\Sigma_g)^2/tr^2(R\Sigma_g)\} = n\{\sum_{j=1}^{J_2} \lambda_{J_2,j}^2/(\sum_{j=1}^{J_2} \lambda_{J_2,j})^2\}$. Thus, Assumption 4 holds for any diverging $n$ and $J_2$ if the $\lambda_{J_2,j}$'s are dominated by the first few largest ones. When all $\lambda_{J_2,j}$'s are similar in magnitude, Assumption 4 is equivalent to requiring that the sample size $n$ grow to infinity at a rate faster than $J_2$. The following theorem generalizes Theorem 1 from finite to infinite $J_2$.
\begin{theorem}
\label{thm2}
Under the null hypothesis $H_0$ in (\ref{h0}) and Assumptions 1--4,
\[
\sigma_{n,0}^{-1}\{ T_{n, J_2,\gamma} - tr(R \Sigma_s)\} \rightarrow \zeta\: \text{ and } \: \{2tr(R \Sigma_s)^2\}^{-1/2}\{ T_{J_2,\gamma} - tr(R \Sigma_s)\} \rightarrow \zeta
\]
in distribution as $n$ and $J_2 \rightarrow \infty$, where $\zeta$ and $\sum_{j=1}^{\infty} \rho_{\infty,j}(\chi_{1j}^2-1)/\sqrt{2}$ are equivalent in distribution, $\sigma_{n,0}^2= 2tr(R \Sigma_s)^2 + \delta$, and
$\delta = n^{-1}\left\{\sum_{j=1}^{J_2}\sum_{k=1}^{J_2} r_jr_k E(g_{ij}^2g_{ik}^2\epsilon_{0i}^4)-tr^2(R\Sigma_s) - 2tr(R\Sigma_s)^2\right\}=o\{tr(R\Sigma_s)^2\}$.
Therefore, as $n$ and $J_2 \rightarrow \infty$,
\[
\text{sup}_x|pr(T_{n,J_2,\gamma} \leq x) - pr(T_{J_2,\gamma} \leq x)| \rightarrow 0.
\] \end{theorem}
Theorems 1 and 2 show that we can use $\sum_{j=1}^{J_2} \lambda_{J_2,j}\chi_{1j}^2$ to approximate the asymptotic null distribution of $T_{n,J_2,\gamma}$ for both fixed and diverging $J_2$. The corresponding p-value can be calculated using the method of \citet{Davies1980}.
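For intuition, the tail probability of the mixture $\sum_{j=1}^{J_2} \lambda_{J_2,j}\chi_{1j}^2$ can also be approximated by plain Monte Carlo, as sketched below with illustrative eigenvalues; this is a simple stand-in, not the exact method of \citet{Davies1980} used in our implementation.

```python
import numpy as np

def mixture_pvalue(t_obs, lam, n_mc=200_000, seed=0):
    """Monte Carlo tail probability of sum_j lam_j * chi2_1 at t_obs."""
    rng = np.random.default_rng(seed)
    draws = rng.chisquare(1, size=(n_mc, len(lam))) @ np.asarray(lam)
    return np.mean(draws >= t_obs)

lam = np.array([2.0, 1.0, 0.5])   # illustrative eigenvalues lambda_j
p = mixture_pvalue(3.5, lam)      # t_obs placed at the mixture mean sum(lam)
assert 0.2 < p < 0.6              # right-skewed mixture, so p is moderate here
```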
To show the asymptotic normality of $T_{n,J_2,\gamma}$ under the null, we need to impose the following assumption, which substitutes for specifying an explicit relationship between $J_2$ and $n$.
\begin{assumption}
\label{asm5}
$tr^2(R\Sigma_g)^2/tr(R\Sigma_g)^4 \rightarrow \infty$ and $tr(R\Sigma_g)^2 \rightarrow \infty$ as $n$ and $J_2 \rightarrow \infty$. \end{assumption}
\begin{theorem}
\label{thm3}
Under the null hypothesis $H_0$ in (\ref{h0}) and Assumptions 1--5,
\[
\sigma_{n,0}^{-1}\{ T_{n,J_2,\gamma} - tr(R \Sigma_s)\} \rightarrow N(0,1),
\]
in distribution as $n$ and $J_2 \rightarrow \infty$. \end{theorem}
We now study the interplay between the adverse effect of reduced sample size on power and the beneficial effect of variable selection afforded by sample splitting, under the local alternative $\mathscr{L}_{\beta}$, $ \mathscr{L}_{\beta}=\left\{\Delta_{\beta}^TR\Sigma_gR\Delta_{\beta}=o\{n^{-1}tr(R\Sigma_g)^2\}\ \text{and} \ \{\mathcal{G}^{-1}(G_i^T\beta)\}^2 = O(1)\right\}, $ where $\Delta_{\beta} = E\{\mathcal{G}^{-1}(G_i^T\beta)G_i\}$.
\begin{theorem}
\label{thm4}
Under the local alternative $\mathscr{L}_{\beta}$ and Assumptions 1--5,
\[
\sigma_{n,1}^{-1}\{T_{n,J_2,\gamma} - tr(R \Sigma_s)- \mu_{n,\beta}\} \rightarrow N(0,1),
\]
in distribution as $n$ and $J_2 \rightarrow \infty$, where $\mu_{n,\beta} = tr(R\Xi_{\beta})+(n-1)\Delta_{\beta}^TR\Delta_{\beta}$, $\sigma_{n,1}^2 = \{2tr(R \Sigma_s + R\Xi_{\beta})^2\}\{1+o(1)\}$, and $\: \Xi_{\beta} = E\left[\{\mathcal{G}^{-1}(G_i^T\beta)\}^2G_iG_i^T\right]$.
\end{theorem}
Theorem \ref{thm4} reveals that power of $T_{n,J_2,\gamma}$ under $\mathscr{L}_{\beta}$ is determined by
$\text{SNR}_n(\beta) = \mu_{n,\beta}/\sigma_{n,1}$, where $\text{SNR}_n(\beta)$ can be interpreted as signal-to-noise ratio following \citet{Guo2016}.
As detailed in the Supplementary Material, \\ $\mu_{n,\beta}= \sum_{j=1}^{J_2}r_jE\{\mathcal{G}^{-1}(G_i^T\beta)g_{ij}\}^2 + (n-1)\sum_{j=1}^{J_2}r_jE^2\{\mathcal{G}^{-1}(G_i^T\beta)g_{ij}\}$ and \\ $\sigma_{n,1}^2 = \{\sigma_{n,0}^2 + 2tr(R\Xi_{\beta})^2 +4tr(R\Sigma_s R\Xi_{\beta})\}\left\{1+o(1)\right\}$,
where \\ $\sigma_{n,0}^2 = 2\sum_{j=1}^{J_2}\sum_{k=1}^{J_2} r_jr_k E^2(g_{ij}g_{ik}\epsilon_{i}^2)\{1+o(1)\}$, $tr(R\Xi_{\beta})^2=\sum_{j=1}^{J_2}\sum_{k=1}^{J_2} r_jr_kE^2\left[g_{ij}g_{ik}\{\mathcal{G}^{-1}(G_i^T\beta)\}^2\right]$, and $tr(R\Sigma_s R\Xi_{\beta}) = \sum_{j=1}^{J_2}\sum_{k=1}^{J_2} r_jr_kE(g_{ij}g_{ik}\epsilon_{i}^2)E\left[g_{ij}g_{ik}\{\mathcal{G}^{-1}(G_i^T\beta)\}^2\right]$.
Now define $T_{2n,J} = 2n\sum_{j=1}^JS_j^2$ as the test statistic calculated based on the {\it whole} sample of size $2n$ but {\it without} variable selection and assuming $R=I$. In this case, $G_j \in \mathbb{R}^{ 2n \times 1}$ and $\beta \in \mathbb{R}^{ J \times 1}$ for calculating $S_j$. The signal-to-noise ratio corresponding to $T_{2n,J}$ is $\text{SNR}_{2n}(\beta) = \mu_{2n,\beta}/\sigma_{2n,1}$. Here, \\ $\mu_{2n,\beta} = \sum_{j=1}^JE\{\mathcal{G}^{-1}(G_i^T\beta)g_{ij}\}^2 + (2n-1)\sum_{j=1}^JE^2\{\mathcal{G}^{-1}(G_i^T\beta)g_{ij}\}$, \\ $\sigma_{2n,1}^2 = \{\sigma_{2n,0}^2 + 2tr(\Xi_{\beta,J})^2 +4tr(\Sigma_{s,J} \Xi_{\beta,J})\}\left\{1+o(1)\right\}$,
where \\ $\sigma_{2n,0}^2 = 2\sum_{j=1}^J\sum_{k=1}^J E^2(g_{ij}g_{ik}\epsilon_{i}^2)\left\{1+o(1)\right\}$, $tr(\Xi_{\beta,J})^2=\sum_{j=1}^J\sum_{k=1}^J E^2\left[g_{ij}g_{ik}\{\mathcal{G}^{-1}(G_i^T\beta)\}^2\right]$, and $tr(\Sigma_{s,J} \Xi_{\beta,J}) = \sum_{j=1}^J\sum_{k=1}^J E(g_{ij}g_{ik}\epsilon_{i}^2)E\left[g_{ij}g_{ik}\{\mathcal{G}^{-1}(G_i^T\beta)\}^2\right]$.
To provide additional insights on power comparison, assume $pr(\mathcal{M} \supset \mathcal{M}^*) \rightarrow 1$ as $n \rightarrow \infty$; this assumption can be fulfilled by existing variable selection algorithms \citep{Fan2008,Li2012,Zhang2017}. Comparing $\mu_{n,\beta}$ with $\mu_{2n, \beta}$, it is not surprising that sample size reduction is the primary cause of power loss for a sample splitting-based method. However, the expressions for $\sigma_{n,1}^2$ and $\sigma_{2n,1}^2$ show that the first two terms are non-negative, and each term is a summation over $J_2$ and $J$ variants, respectively, for $\sigma_{n,1}^2$ and $\sigma_{2n,1}^2$. Because $J_2 \leq J$, noise-filtering in the training sample $D_{n,1}$ can thus reduce the variance of the test statistic calculated in the testing sample $D_{n,2}$. Because the SNR is the ratio of $\mu$ to $\sigma$, $\text{SNR}_n(\beta)$ can be larger than $\text{SNR}_{2n}(\beta)$, and tests based on $T_{n,J_2,\gamma}$ can be more powerful than $T_{2n,J}$. The use of weights derived from $D_{n,1}$ can further compensate for the efficiency loss due to the reduced sample size in $D_{n,2}$. Simulation studies in the next section show that, even if sure screening fails in $D_{n,1}$, the sample splitting approach can have power comparable to the methods of \citet{Wu2019,Guo2016} applied to the full sample without variable selection.
\section{Simulation studies} \label{simulation}
\subsection{Simulation designs}
To evaluate the performance of $T_{dc}$ and compare it to tests proposed by \citet{Guo2016} and \citet{Wu2019}, we consider two simulation designs. Design one simulates $G$, while design two builds upon real genetic data from the applications. Design one considers sample sizes of $2n=200$ or $1500$ and dimensions $J \in \{10,50,200,400,1000,4000\}$. It generates $G$ based on a multivariate normal distribution with mean vector $0$ and (autoregressive) correlation matrix $\Sigma_g=\{\rho^{|i-j|}\}_{J \times J}$, where $\rho=0.2,0.5,0.8$, and $i,j=1,\ldots,J$. For simulation design two, $G$ comes from two applications, with $2n=1409$ and $J=3754$ SNPs, and $2n=71$ and $J=4088$ gene-expression levels, respectively. For a more streamlined presentation, we present simulation results of design two in Section \ref{application}, along with application results.
To implement $T_{dc}$, we let $r_j=\hat{\beta}_j^{\gamma-2}$ ($j=1, \ldots, J_2$) and $\Gamma = \{2,4,6,42\}$ to first obtain $T_c$ of (\ref{eq:Tc}). We then use $m=10$, $50$ or $100$ to derive the more stable $T_{dc}$ of (\ref{eq:Tdc}), and also to study the effect of $m$ on the performance of $T_{dc}$. For fair method comparison, we choose $\Gamma = \{2,4,6,42\}$ to be aligned with $\Gamma_{\text{aSPU}} = \{1,2,3,4,5,6, \infty\}$, studied and recommended by the authors of the aSPU test \citep{Wu2019}; $42$ in $\Gamma$ is to mimic $\infty$ in $\Gamma_{\text{aSPU}}$. \citet{Wu2019} also noted that $\gamma=6$ ``often suffices and that the performance of the aSPU test is robust to such a choice'', which we observed for $T_{dc}$ in our studies (results not shown).
For completeness, we also study the performance of the individual $T_1$ and $T_{\gamma}$'s ($\gamma=\{2,4,6,42\}$), but present the corresponding results in the Supplementary Material. The numbers of simulation replicates are $10^6$ for evaluating type I error control and 500 for power, and additional simulation design details are provided below when appropriate. \subsection{Type I error} \label{type_I}
Methods applied to binary outcomes often perform worse than when applied to normally distributed traits. Thus, we generate $Y$ from a logistic regression with $\beta=0$ and, without loss of generality, an intercept equal to one and no other covariates. For type I error evaluation, the variable selection procedure and the value of $J$ are not critical.
Thus, we choose $J_2=J$, regress simulated $Y$ on each of the $J_2$ simulated variants in $D_{n,1}$, and obtain the corresponding $\hat \beta_j$. We then perform the high dimensional polygenic association testing in $D_{n,2}$.
Table \ref{table:1} shows the empirical test sizes of the $T_{\gamma}$'s, $T_c$, and $T_{dc}$ for $2n=200$, $m=10,50,100$, $\rho=0.5$, and nominal $\alpha$ values of 0.05, 0.01, $10^{-3}$, and $10^{-4}$; {Tables S1 and S2} in the Supplementary Material show results for $\rho=0.2$ and $0.8$, respectively. Results for $2n=1500$ are more accurate and thus not shown. Here, the distributions of the $T_{\gamma}$'s are approximated by the weighted linear combination of independent $\chi_1^2$ distributions specified in Theorem 2, and the distributions of $T_c$ and $T_{dc}$ by the standard Cauchy distribution.
\begin{table}
\centering
\def~{\hphantom{0}}
\caption{\label{table:1} Empirical test sizes for seven test statistics. Sample size $2n=200$, and autoregressive model $AR(1,\rho)$ with $\rho=0.5$ for correlation between the $J_2$ variants. One-time 50\%-50\% sample splitting for the first six methods, $m=10,50,100$ times sample splitting for the proposed $T_{dc}$.}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{cccc cccc ccc}
$J_2$ &$\alpha$ &$T_1$ & $T_2$ & $T_4$ & $T_6$ & $T_{42}$ & $T_c$& $T_{dc,10}$ &$T_{dc,50}$ & $T_{dc,100}$\\\\
10& 5\% & 4.9975 & 4.7951 & 4.8761 & 4.9097 & 4.9130 & 5.1774&5.2651&5.2281 &4.8201\\
& 1\% & 0.9640 & 0.8771 & 0.8927 & 0.8999 &0.9250 &0.9510 & 0.9614&1.0518 &1.0082\\
& 0.1\%& 0.0873 & 0.0768 & 0.0695 &0.0700 & 0.0746& 0.0789&0.0951&0.0854 &0.0730\\
& 0.01\% &0.0078 & 0.0063 & 0.0052 & 0.0051 & 0.0046 &0.0088&0.0081&0.0092 &0.0096\\\\
50& 5\% & 4.9503 & 4.7245 & 4.7715 &4.8447 & 4.9615 & 5.1745 &5.3395&5.3038 &5.2115\\
& 1\% & 0.9701 & 0.8750 &0.8912 &0.9110 & 0.9339 & 0.9449&0.9360&0.9280 &0.9033\\
& 0.1\% & 0.0961 & 0.0772 &0.0769 &0.0789 & 0.0767 & 0.0825&0.0698&0.0604 &0.0598\\
& 0.01\% & 0.0087 & 0.0064 &0.0059 &0.0049 & 0.0056 & 0.0062&0.0041&0.0067&0.0110\\\\
200& 5\% & 4.9820 & 4.6781 & 4.7274 & 4.8011 & 4.9341 & 5.1217 & 5.3721&5.4933&5.5147\\
& 1\% & 0.9921 & 0.8743 & 0.8951 & 0.8961 & 0.9172 & 0.9517&0.9408 &0.9416&0.9374\\
& 0.1\%& 0.0960 & 0.0814 & 0.0780 &0.0775 & 0.0775 & 0.0815& 0.0705&0.0656&0.0679\\
& 0.01\% & 0.0116 & 0.0070 & 0.0067 &0.0064 & 0.0060 & 0.0069&0.0042&0.0156&0.0263 \\\\
400& 5\% & 4.9763 & 4.6932 & 4.7571 & 4.8307 & 4.9550 & 5.1717 &5.3691&5.5539&5.6548\\
& 1\% & 0.9848 & 0.8886& 0.8991 & 0.9161 & 0.9230 & 0.9518&0.9375&0.9527&0.9679\\
& 0.1\%& 0.0972 & 0.0786& 0.0830& 0.0790 & 0.0767 & 0.0824 &0.0724&0.0624&0.0740\\
& 0.01\% & 0.0102 &0.0061& 0.0070&0.0053 &0.0049 &0.0067&0.0053&0.0150&0.0245\\\\
1000& 5\% &4.9887 & 4.6689 & 4.7323 & 4.7755 & 4.9269 & 5.1299 & 5.4092&5.7421& 5.1065\\
& 1\% &1.0117 &0.8923 & 0.8829 & 0.8865 & 0.9227 & 0.9312 & 0.9405&0.9734&0.8623\\
& 0.1\%&0.0967& 0.0754&0.0808 & 0.0755 &0.0798 &0.0821 &0.0763&0.0604&0.0564\\
& 0.01\% &0.0100&0.0073&0.0082 & 0.0060 &0.0059 & 0.0074 & 0.0047&0.0087&0.0173\\\\
\end{tabular}
}
\end{table}
Table \ref{table:1} shows that the empirical type I error rate of $T_{dc}$ is controlled at or below the nominal $\alpha$ level when $m=10$ and $50$, considering Monte Carlo error. However, the empirical type I error rate is slightly inflated for larger $J_2$ and stringent $\alpha$ level ($\alpha=10^{-4}$) when $m=100$. To better understand this inflation problem, we provide the summary statistics of the empirical $\alpha$ from the $m=100$ sample splits in Table S3 when $J_2=400$ and $1000$. Results show that the test size of $T_{c}$ is accurate and stable across the 100 sample splits. Thus, the inflation stems from the double Cauchy combination step. The accuracy of the Cauchy approximation for large $m$ has been studied in Theorem 2 of \citet{Liu2019}. \citet{Liu2019} showed that, to obtain accurate p-value approximation, $m$ should be bounded by $(t_{\alpha})^{c_0}$, where $t_{\alpha}$ is the upper $\alpha$-quantile of the standard Cauchy distribution and $0<c_0<1/2$. When $\alpha = 10^{-4}$, $t_{\alpha} = 3183$, so $t_{\alpha}^{1/2} = 56.4$ provides an upper bound on $m$. Although this theoretical result is derived under certain conditions on the correlation matrix of the p-values to be combined and is not accurate across all scenarios \citep{Liu2019}, it helps explain the approximation error observed when $m=100$.
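The arithmetic behind this bound is elementary and can be checked directly; $t_\alpha$ below is the upper $\alpha$-quantile of the standard Cauchy distribution:

```python
import numpy as np

alpha = 1e-4
t_alpha = np.tan((0.5 - alpha) * np.pi)   # upper alpha-quantile of standard Cauchy
bound = t_alpha ** 0.5                    # the c0 = 1/2 endpoint of t_alpha^{c0}

assert np.isclose(t_alpha, 3183, atol=1)
assert np.isclose(bound, 56.4, atol=0.1)
# m = 50 respects the bound while m = 100 exceeds it:
assert 50 < bound < 100
```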
In the above simulation studies and later in applications, the p-value approximation for the individual $T_{\gamma}$ is based on $\sum_{j=1}^{J_2} \lambda_{J_2,j}\chi_{1j}^2$ as shown in Theorem 2. Table S4 shows that the normal approximation given in Theorem 3, however, is not adequate for stringent $\alpha$ levels when $\gamma>2$ and using the simulation parameter values considered here. Thus, we recommend the use of the $\sum_{j=1}^{J_2} \lambda_{J_2,j}\chi_{1j}^2$ approximation in practice. Consistent with previous reports, tests of \citet{Wu2019} and \citet{Guo2016} based on asymptotic approximations are not accurate (Table S4). Thus, we use parametric bootstrap, as recommended by the authors, with $10^3$ replicates to evaluate power of these methods for fair comparison.
\subsection{Power}\label{power} Similar to the type I error evaluation above, here we also focus on the more challenging case of binary outcomes rather than normally distributed traits. We generate $Y$ based on logistic models with different proportions of nonzero regression coefficients, varying from $0.1\%$, $1\%$, $5\%$, to $10\%$ of the $J$ variants. We assume the indices of the nonzero $\beta_j$'s are uniformly distributed in $\{1,\ldots,J\}$. We consider three scenarios for the signs of the nonzero $\beta_j$'s: half positive and half negative (assigned at random), all positive, and all negative. Results below focus on power comparison between the proposed $T_{dc}$ test and the methods of \citet{Guo2016} and \citet{Wu2019}, which are applied to the whole sample without variable selection. Results of the original polygenic risk score test, $T_1$, are shown in Figure S1.
To better delineate the factors influencing power, we consider three study scenarios. In all three scenarios, the weights inferred from the training sample $D_{n,1}$ are leveraged to construct the $T_{dc}$ test statistic using the testing sample $D_{n,2}$.
(I), Oracle: $\mathcal{M} = \mathcal{M}^*$. This is the `best' case scenario for $T_{dc}$, where the selection step applied to $D_{n,1}$ identifies all and only truly associated variants; the estimated weights however may not be optimal. This study is to show power gain of $T_{dc}$, despite the reduction of sample size, as compared to the methods without variable selection.
(II), $J_2=J$: $\mathcal{M} = \{G_1,\ldots, G_J\}$. This is the `worst' case scenario for $T_{dc}$, where the selection step fails completely at filtering out non-signals; the estimated weights however may still be informative. This scenario is tailored for studying power loss of $T_{dc}$ due to sample size reduction as compared with methods without sample splitting, while also demonstrating the benefit of leveraging the weights inferred from $D_{n,1}$ for association testing using $D_{n,2}$.
(III), Variable Selection: $\mathcal{M}$ is estimated after variable screening. This study investigates the impact of accuracy of variable selection on power of $T_{dc}$ as compared to the methods without variable selection.
For variable selection, we considered the DCSIS method of \citet{Li2012} and the SIS method of \citet{Fan2008}, because these methods require fewer assumptions than, e.g., {ElasticNet \citep{Zou2005}} for the sure screening property to hold \citep{buhlmann2014high}. The implementation of DCSIS and SIS also requires the specification of $J_2$. In our power simulation study, $2n=1500$ and $J=4000$, and we choose $J_2 = 2n$, which is more conservative than the $2n/\log(2n)$ recommended by the authors.
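As an illustration of the screening stage, a bare-bones SIS-style screen keeps the $J_2$ variants with the largest absolute marginal correlation with the outcome (DCSIS replaces Pearson correlation with distance correlation). The sketch below uses simulated data and is not the authors' implementation:

```python
import numpy as np

def sis_screen(y, g, J2):
    """Return indices of the J2 variants most marginally correlated with y."""
    y_c = y - y.mean()
    g_c = g - g.mean(axis=0)
    corr = np.abs(y_c @ g_c) / (np.linalg.norm(y_c) * np.linalg.norm(g_c, axis=0))
    return np.argsort(corr)[::-1][:J2]

rng = np.random.default_rng(4)
n, J = 400, 50
g = rng.normal(size=(n, J))
y = 2.0 * g[:, 7] + rng.normal(size=n)   # variant 7 carries the only signal
keep = sis_screen(y, g, J2=5)
assert 7 in keep                         # the signal variant survives screening
```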
Because all three methods, the proposed $T_{dc}$ test and the methods of \citet{Guo2016} and \citet{Wu2019}, incorporate $S^2_j$'s across variants, we expect them to be robust to the direction/sign of $\beta_j$'s. This is confirmed by {Figure S2}, which also shows that results are qualitatively similar between the two variable selection methods, DCSIS of \citet{Li2012} and SIS of \citet{Fan2008}. Thus, below we only present the simulation results when the nonzero $\beta_j$'s are half positive and half negative, and DCSIS is the variable selection method. Figure \ref{fig:power} shows the results for $2n=1500$ and $J=4000$, reflecting the values observed in the real dataset studied in Section \ref{application}; $\rho=0.5$ for correlation between the $J$ variants, $J_2=1500$ for variable selection, and $m=10$ for repeated 50\%-50\% sample splitting to construct $T_{dc}$.
\begin{figure}
\caption{ Power comparison of the proposed test $T_{dc}$ (red triangle), the method of \citet{Guo2016} (blue square), and the method of \citet{Wu2019} (black circle), for the three study scenarios (I), (II) and (III). Sample size $2n=1500$, and the total variant $J=4000$ among which 0.1\% (row 1), 1\% (row 2), 5\% (row 3), and 10\% (last row) are truly associated.}
\label{fig:power}
\end{figure}
For scenario (I), the first column of Figure \ref{fig:power} shows that the proposed $T_{dc}$ test has a substantial power gain over the methods of \citet{Guo2016} and \citet{Wu2019}, attributed to noise filtering (and the use of the estimated weights) despite the reduction in sample size for association testing.
For scenario (II), the second column of Figure \ref{fig:power} shows that the anticipated power loss of $T_{dc}$ due to sample splitting can be compensated for by leveraging the weights inferred from $D_{n,1}$, as compared with the methods of \citet{Guo2016} and \citet{Wu2019}, which use the full sample; recall that $J_2=J$ for $T_{dc}$, meaning the variable selection step completely failed at selecting relevant variants. For the sparse alternative case in scenario (II) with 4 signals, $T_{dc}$ displays power comparable to the method of \citet{Wu2019}, while both are substantially more powerful than the method of \citet{Guo2016}. For the other alternatives considered in this scenario, all three methods have comparable power, with the method of \citet{Guo2016} having slightly higher power. Overall, the proposed $T_{dc}$ test is the most robust to the different alternatives considered here, and it is also computationally efficient, which we discuss in Section \ref{discussion}.
For the more realistic scenario (III), interestingly the results are similar to those of scenario (II). This suggests that while the variable selection step filters out noise, it also filters out some (weak) signals; the sure screening property requires that the nonzero regression coefficients be sufficiently large \citep{buhlmann2014high, Fan2008}. In addition to DCSIS and SIS, we also evaluated other selection methods such as ElasticNet, but the results are similar, especially for weak signals.
We emphasize that $J_2=2n=1500$ is fixed across all the alternatives considered here, and the power of $T_{dc}$ shown in Figure \ref{fig:power} can be improved by considering a smaller $J_2$ for sparse but relatively strong signals (e.g.\ 4 or 40 signals out of $J=4000$ variants). Indeed, applications and additional simulation studies in Section \ref{application} demonstrate the advantages of the proposed method for certain alternatives. Power can be further improved by using $m=50$ instead of $10$ (Figure S3). In general, a larger $m$ leads to a heavier penalty paid for multiple hypothesis testing; the effective number of tests, however, does not grow linearly with $m$ because of the inherent correlation between the different sample splits. On the other hand, a larger $m$ leads to better variable selection in stage 1, particularly when the signals are weak. The resulting improved efficiency thus compensates for the power loss due to multiple hypothesis testing. Overall, the performance of $T_{dc}$ is robust to the choice of $m$ (Figure S3).
Simulation results so far have focused on the 50\%-50\% sample splitting proportion. We have also investigated 33\%-67\% (Figure S4) and 67\%-33\% (Figure S5) sample splitting. Results in Figures S4 and S5 show that the overall power of $T_{dc}$ is not very sensitive to the proportion. However, the 33\%-67\% sample splitting has slightly increased power for the scenarios considered here. This is consistent with the literature \citep{Barber2019}, where it has been noted that uneven sample splitting, with more subjects assigned to the testing sample, can increase power as compared to even sample splitting.
\section{Application and additional simulation studies}\label{application}
\subsection{Cystic fibrosis data}\label{cfd} We apply the proposed $T_{dc}$ test and the methods of \citet{Guo2016} and \citet{Wu2019}, as well as $T_1$, the original polygenic risk score test, to the cystic fibrosis data introduced in \citet{David2015}. This dataset consists of $2n=1409$ independent individuals from Canada with cystic fibrosis, on whom lung function has been measured. Of interest is the association between lung function and a set of $J=3754$ genetic variants from genes coding for constituents of the apical plasma membrane. These variants are candidates for association with cystic fibrosis, but they were selected in an unsupervised fashion, based on biological hypotheses alone \citep{sun2012}.
To implement the proposed $T_{dc}$ test, we first randomly divide the $1409$ individuals into two subsets, $D_{n_1}$ and $D_{n_2}$, where $n_1:n_2=409:1000$, $n_1:n_2=705:704$, or $n_1:n_2=1000:409$. As in the simulation studies, we define $r_j=\hat{\beta}_j^{\gamma-2}$, $j=1, \ldots, J_2$, with $\Gamma = \{2,4,6,42\}$, and we apply the variable selection method DCSIS \citep{Li2012} and let $J_2=n_2$, the sample size of the testing sample. Because the approximation of the asymptotic distribution of $T_{dc}$ requires a positive definite estimate of $\Sigma_g$ in $D_{n_2}$, we use the algorithm proposed by \citet{Rothman2012}, with the tuning parameter selected by 5-fold cross-validation.
Using this application dataset, we first re-evaluate the accuracy of $T_{dc}$ by simulating $Y$ independently of the observed $G$ for the $n_2$ individuals in $D_{n_2}$, where $y_i=1+\varepsilon_i$, $i=1,\ldots,n_2$, and $\varepsilon_i$ follows the standard normal distribution. For variable selection and weight estimation, simulating $Y$ for the $n_1$ individuals in $D_{n_1}$ as well would be an obvious approach. However, to see how potentially non-random variable selection and weight estimation might adversely affect the type I error control of the proposed method, we used the real data, both $Y$ and $G$, of the $D_{n_1}$ {\it training} sample; note that $Y$ in the $D_{n_2}$ {\it testing} sample is simulated.
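The null simulation just described can be sketched in a few lines (a minimal illustration, not the paper's implementation; the size $n_2=1000$ matches the first split, and the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_null_outcome(n2):
    # y_i = 1 + eps_i with eps_i ~ N(0, 1), generated independently of G,
    # so any rejection in the testing sample is a type I error.
    return 1.0 + rng.standard_normal(n2)

y = simulate_null_outcome(1000)
```

Repeating this simulation and recording the rejection rate of $T_{dc}$ at each nominal level $\alpha$ yields the empirical test sizes reported in Table \ref{table:3}.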
Table \ref{table:3} shows that the empirical $\alpha$ level of $T_{dc}$ remains well controlled when $m=10$ or 50, but is slightly inflated when $m=100$. This result is similar to that based on simulated multivariate normal predictors in Section \ref{type_I}, where we showed that the upper bound for $m$ is 56. In general, a smaller $m$ leads to better type I error control, while a larger $m$ provides more inference stability. Thus, we will use $m=50$ in our real data analyses.
\begin{table}
\centering
\caption{\label{table:3} Empirical test sizes (empirical rejection rates, in \%) for $T_{dc}$ based on simulated outcome values but real genetic data from the two application datasets. For the cystic fibrosis application data, $2n = 1409$ and $J = 3754$; for the riboflavin application data, $2n = 71$ and $J = 4088$. $n_1$ is the sample size of the training sample and $J_2$ is the number of selected variants for association testing in the testing sample.}
\resizebox{0.4\textwidth}{!}{
\begin{tabular}{cccc ccc }
Cystic Fibrosis &$J_2$ & $n_1$ &\diagbox[width=2.5em]{$\alpha$}{$m$} & 10 & 50 & 100 \\\\
&409 & 1000 &5\% & 5.0021 & 4.8969 & 4.8873 \\
&&&1\% & 0.9981 & 1.1371 & 1.1389 \\
&&&0.1\% & 0.1107 & 0.1067 & 0.0869 \\
&&&0.01\% & 0.0084 & 0.0137 & 0.0207\\\\
&704 & 705 &5\% & 4.9311 & 4.6359 & 4.7072 \\
&&&1\% & 0.9847 & 0.9619 & 1.1109 \\
&&&0.1\% & 0.1034 & 0.0843 & 0.0897 \\
&&&0.01\% & 0.0077 & 0.0149 & 0.0274 \\\\
&1000 & 409 &5\% & 4.8180 & 4.7277 & 5.8078 \\
&&&1\% & 0.9572 & 1.0699 & 2.3768 \\
&&&0.1\% &0.1001 & 0.1017 & 0.1891 \\
&&&0.01\% &0.0061& 0.0170 & 0.0354 \\\\
Riboflavin &36 & 35 &5\% & 4.6538 & 3.5662 & 3.3164\\
&&&1\% & 0.7951 & 0.8661 &1.1555\\
&&&0.1\% & 0.1243 & 0.1120 &0.1117\\
&&&0.01\% & 0.0093 & 0.0096 &0.0116\\
\end{tabular}
}
\end{table}
In the absence of oracle knowledge of the true associations, the application results focus on the range of p-values across methods. The empirical p-values are 0.0985 and 0.0727, respectively, for the methods of \citet{Guo2016} and \citet{Wu2019}, based on $10^4$ bootstrap samples applied to the whole sample. For $T_1$, we randomly split the whole sample into the $D_{n_1}$ and $D_{n_2}$ subsets, independently 100 times, to obtain 100 different $p_{1}$'s, the $T_1$-based p-values. The histogram of the $p_{1}$'s for $n_1:n_2=409:1000$ is shown in Figure~\ref{fig:cf}.
For $T_{dc}$, we also randomly split the whole sample into two subsets, but independently $100\times 50$ times, and use each sequence of $m=50$ repeated sample splits to obtain one of 100 $p_{dc}$'s, the $T_{dc}$-based p-values. The histogram of the $p_{dc}$'s for $n_1:n_2=409:1000$ is shown in blue in Figure~\ref{fig:cf}. The results clearly show that the proposed repeated sample splitting strategy leads to much more stable inference than the one-time-only sample splitting approach: $p_1$ ranges from 0.0019 to 0.9446, while $p_{dc}$ ranges from 0.003 to 0.101 with a mode around 0.05. For completeness, we also provide the summary statistics of the 100 $p_{dc}$'s in Table \ref{table:2}.
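The aggregation of the p-values from the $m$ repeated splits relies on Cauchy combination. As a minimal sketch, we state the generic equal-weight Cauchy combination rule of Liu and Xie, which may differ in detail from equations (\ref{eq:Tc}) and (\ref{eq:Tdc}):

```python
import numpy as np

def cauchy_combine(pvals):
    """Equal-weight Cauchy combination: map each p-value to a standard
    Cauchy quantile, average, and transform the average back to a p-value.
    The combination remains valid under arbitrary dependence among the
    components, which is what makes repeated sample splitting tractable."""
    pvals = np.asarray(pvals, dtype=float)
    stat = np.mean(np.tan((0.5 - pvals) * np.pi))
    return float(0.5 - np.arctan(stat) / np.pi)

# Combining illustrative p-values from m = 5 sample splits:
p_combined = cauchy_combine([0.04, 0.08, 0.02, 0.10, 0.06])
```

Note that when all component p-values are equal, the rule returns that common value, so combining does not distort a perfectly stable sequence of splits.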
In an effort to study the behaviour of $T_{dc}$ in depth, we considered three different sample splitting proportions, 33\%-67\%, 50\%-50\% and 67\%-33\%, and obtained 100 p-values for each proportion to demonstrate the improved inference stability of $T_{dc}$ as compared with $T_1$. However, a practical question arises: what would be the reported p-value for this application? It could be drawn at random from the set of p-values, but we recommend the median of the 100 p-values from the 50\%-50\% sample split for a conservative estimate; a 33\%-67\% split can increase power as compared to 50\%-50\%, assuming a sufficient total sample size \citep{Barber2019}. Overall, considerable variation remains in this application, suggesting that the signals are too weak or that the sample size is not sufficient.
\begin{table}
\centering
\caption{\label{table:2} Summary of the p-values of the proposed $T_{dc}$ applied to the two real application datasets, based on different $n_1$-$n_2$ sample splits, with $m=50$ for constructing $T_{dc}$, repeated 100 times. The total sample size is $2n=1409$ for the cystic fibrosis data and $2n=71$ for the riboflavin data, with $J=3754$ and $4088$ genetic variants, respectively. We choose $J_2=n_2$, the sample size of the testing sample, for all scenarios.}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{ccc cccccc }
& $n_1$&$n_2$ & Minimum & 1st Quartile & Median & Mean & 3rd Quartile & Maximum \\\\
Cystic Fibrosis &409 & 1000& 0.003 & 0.030 & 0.045 & 0.046 & 0.061 & 0.101 \\
&705 &704 & 0.020 & 0.067 & 0.079 &0.086 &0.105 &0.195 \\
&1000&409 & 0.003 & 0.045 & 0.059 & 0.057 & 0.067 & 0.104 \\\\
Riboflavin & 35 & 36 & $5.551 \times 10^{-17}$ & $1.665 \times 10^{-16}$ & $1.665 \times 10^{-16}$ & $2.520 \times 10^{-16}$ & $2.359 \times 10^{-16}$ & $4.496 \times 10^{-15}$ \\
\end{tabular}
}
\end{table}
\subsection{Riboflavin data} In this application, the outcome of interest ($Y$) is the standardized riboflavin (B2) production rate, measured on $2n=71$ independent samples, and the predictors ($G$) are the standardized gene expression levels of $J=4088$ genes. The dataset is freely available in the R package hdi, and has been used for studying variable selection \citep{buhlmann2014high} and for constructing valid confidence intervals after model selection \citep{shi2020statistical}.
Similarly to the application in Section \ref{cfd}, we first use the real $G$ but a simulated $Y$ to re-evaluate the type I error control of $T_{dc}$. The results in Table \ref{table:3} show that $T_{dc}$ remains accurate in this setting. Given the total sample size of $2n=71$, the implementation of $T_{dc}$ in this study only used the 50\%-50\% sample splitting proportion, where $n_1:n_2=35:36$ and $J_2=36$ for variables selected using DCSIS \citep{Li2012}.
We then apply the three methods to the real data for method comparison. The empirical p-values are $0.028$ and $5.0 \times 10^{-5}$, respectively, for the methods of \citet{Guo2016} and \citet{Wu2019}, based on $10^5$ bootstrap samples. In contrast, the p-value of $T_{dc}$ is on the order of $10^{-16}$ based on $m=50$.
To show the stability of our inference, similarly to Section \ref{cfd}, we also perform sample splitting independently $100 \times 50$ times to obtain 100 $p_{dc}$'s. The summary statistics of the 100 $p_{dc}$'s are shown in Table \ref{table:2}. The maximum $p_{dc}$ is $4.496 \times 10^{-15}$, suggesting better performance of $T_{dc}$ as compared to the other two methods. For completeness, we also compare the performance of $T_{dc}$ and $T_1$. The results in Figure S6 show that the proposed repeated sample splitting not only provides robustness but also improves power: the range of $p_1$ is $[7.8 \times 10^{-8}, 0.027]$, as compared with $[5.551 \times 10^{-17}, 4.496 \times 10^{-15}]$ for $p_{dc}$.
To further demonstrate the reliability of the application result, we conduct an additional power simulation study based on the real gene expression levels ($G$) of the 4,088 genes. To simulate $Y$, we consider the number of nonzero regression coefficients to be 3, 40 or 200. First, to best mimic the presumed underlying signal structure \citep{shi2020statistical}, we assume $\beta_j$, $j=1588$, $3154$ and $4004$, to be nonzero; all these $\beta_j$'s are positive, with values ranging from 0.2 to 0.6 (Figure \ref{fig:riboflavin}). Next, for the scenarios with 40 and 200 signals in total, we additionally assume the first 37 and 197 $\beta_j$'s to be nonzero, respectively; these $\beta_j$'s are generated from a normal distribution with mean 0 and variance 0.05, corresponding to weak signals. To implement $T_{dc}$, we use $n_1:n_2=35:36$ and $J_2=36$ for variables selected using DCSIS, the same set-up as that for the type I error evaluation above.
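For concreteness, the outcome-generating scheme just described can be sketched as follows (our own illustrative code: the genotype matrix is a random stand-in for the real expression data, the strong effect size 0.4 is one value from the reported 0.2--0.6 range, and we read ``variance 0.05'' as the variance of the normal distribution):

```python
import numpy as np

rng = np.random.default_rng(2)
n, J = 71, 4088
G = rng.standard_normal((n, J))   # stand-in for the real expression matrix

beta = np.zeros(J)
beta[[1587, 3153, 4003]] = 0.4    # 3 strong signals (j = 1588, 3154, 4004; 0-based here)
beta[:37] = rng.normal(0.0, np.sqrt(0.05), size=37)  # 37 additional weak signals

y = G @ beta + rng.standard_normal(n)   # simulated outcome, 40 signals in total
```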
This simulation falls under scenario (III) considered in Section \ref{power}, where the power study was based on both simulated $G$ and $Y$, with the corresponding power shown in the right column of Figure \ref{fig:power}. Here, the results in Figure \ref{fig:riboflavin} show that the method of \citet{Guo2016} (blue square) performs poorly when there are sparse strong signals (left plot) or sparse strong signals combined with some weak signals (middle plot). In either case, the power of the proposed $T_{dc}$ test is appreciably higher than that of the method of \citet{Wu2019} (black circle). To be consistent with the power study in Section \ref{power}, power in Figure \ref{fig:riboflavin} is reported for $m=10$. Figure S7 shows that the power of $T_{dc}$ can be slightly improved by using $m=50$; this is consistent with the results in Figure S3.
\begin{figure}
\caption{ Power comparison of the proposed test $T_{dc}$ (red triangle), the method of \citet{Guo2016} (blue square), and the method of \citet{Wu2019} (black circle) based on the real gene expression data of the riboflavin dataset. In total, $2n=71$ and $J=4088$; $J_2=n_2=36$ for variables selected using DCSIS. Among the 4088 variants, 3 (left), 40 (middle), or 200 (right) are truly associated. In each case, 3 signals are relatively strong, with their $\beta_j$'s shown on the X-axis; the remaining signals are weak, with their $\beta_j$'s randomly drawn from $N(0, 0.05)$.}
\label{fig:riboflavin}
\end{figure}
\section{Discussion} \label{discussion}
In the theoretical study, we did not consider the impact of estimating the nuisance parameters $\beta_x$ and $\phi$, as we expect the results to remain similar when stringent conditions are imposed on the design matrix $X$ and on the relationship between $n$ and $q$ to ensure estimation accuracy of the nuisance parameters \citep{Guo2016}. In practice, we can estimate the nuisance parameters in the training sample and treat the estimates as known quantities when constructing $T_{dc}$ in the testing sample. This approach has been recommended by \citet{Cher2018} for another study setting in which the sample splitting strategy is used.
The proposed $T_{dc}$ is computationally efficient because (a) the p-value of each $T_{\gamma}$ is estimated using the null distribution derived in Theorem 2, $\sum_{j=1}^{J_2} \lambda_{J_2,j}\chi_{1j}^2$, (b) the p-value of $T_{c}$ across different $\gamma$ values is calculated using equation (\ref{eq:Tc}), and (c) the final p-value of $T_{dc}$ across the $m$ sample splits is obtained using equation (\ref{eq:Tdc}). For example, using a laptop with an Apple M1 chip and 8 GB of unified memory, the computation for the riboflavin data application took 6.3 seconds with $m=10$ and 31.7 seconds with $m=50$; the computation time scales linearly with respect to $m$, as expected. The computational cost comes mostly from variable selection, as the computation time is reduced to 2 and 10 seconds, respectively, for $m=10$ and 50 when using SIS instead of DCSIS for variable selection.
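As an illustration of step (a), the tail probability of the weighted chi-square null distribution can be approximated as follows (a Monte Carlo sketch with hypothetical weights; in practice an analytic approximation to the distribution of $\sum_j \lambda_{J_2,j}\chi_{1j}^2$ would be faster):

```python
import numpy as np

def mixture_chi2_pvalue(t_obs, lambdas, n_mc=200_000, seed=1):
    """Monte Carlo estimate of P(sum_j lambda_j * chi2_{1j} > t_obs),
    the tail probability of a weighted sum of independent 1-df chi-squares."""
    rng = np.random.default_rng(seed)
    lambdas = np.asarray(lambdas, dtype=float)
    draws = rng.chisquare(1, size=(n_mc, lambdas.size)) @ lambdas
    return float(np.mean(draws > t_obs))

p = mixture_chi2_pvalue(10.0, [2.0, 1.0, 0.5])  # hypothetical eigenvalue weights
```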
For ease of implementation, we have developed an R package, DoubleCauchy, and released it at https://github.com/yanyan-zhao/DoubleCauchy. The package contains three main functions: i) the DoubleCauchy function, which conducts the proposed $T_{dc}$ test, where the variable selection step can be based on DCSIS, SIS or ElasticNet; ii) the DoubleCauchyParallel function, which further reduces the computation time of the DoubleCauchy function if parallel computing resources are available; and iii) the AdapSide function, which leverages additional available information, such as a functional importance measure for each of the variants analyzed.
\end{document} |
\begin{document}
\title{\huge \bf Asymptotic behavior for a singular diffusion equation with gradient absorption}
\author{ \Large Razvan Gabriel Iagar\,\footnote{Departamento de Análisis Matemático, Univ. de Valencia, Dr. Moliner 50, 46100, Burjassot (Valencia), Spain, \textit{e-mail:} razvan.iagar@uv.es},\footnote{Institute of Mathematics of the Romanian Academy, P.O. Box 1-764, RO-014700, Bucharest, Romania.} \\[4pt] \Large Philippe Lauren\c cot\,\footnote{Institut de Math\'ematiques de Toulouse, CNRS UMR~5219, Universit\'e de Toulouse, F--31062 Toulouse Cedex 9, France. \textit{e-mail:} Philippe.Laurencot@math.univ-toulouse.fr}\\ [4pt] } \date{\today} \maketitle
\begin{abstract} We study the large time behavior of non-negative solutions to the singular diffusion equation with gradient absorption $$
\partial_t u-\Delta_{p}u+|\nabla u|^q=0 \quad \hbox{in} \ (0,\infty)\times\mathbb{R}^N, $$ for $p_c:=2N/(N+1)<p<2$ and $p/2<q<q_*:=p-N/(N+1)$. We prove that there exists a unique very singular solution of the equation, which has self-similar form and we show the convergence of general solutions with suitable initial data towards this unique very singular solution. \end{abstract}
\noindent {\bf AMS Subject Classification:} 35K67, 35K92, 35B40, 35D40.
\noindent {\bf Keywords:} large time behavior, singular diffusion, gradient absorption, very singular solutions, $p$-Laplacian, bounded measures.
\section{Introduction and results}\label{sec1}
The aim of the present paper is to study the large time behavior of non-negative solutions to the following equation with singular diffusion and gradient absorption: \begin{equation}\label{eq1}
\partial_{t}u-\Delta_{p}u+|\nabla u|^q=0, \quad (t,x)\in Q_\infty := (0,\infty)\times\mathbb{R}^N, \end{equation} for $p_c:=2N/(N+1)<p<2$ and $p/2<q<q_*:=p-N/(N+1)$. We consider only non-negative initial data \begin{equation} u(0,x)=u_0(x), \quad x\in\mathbb{R}^N, \label{inco} \end{equation}
under suitable decay and regularity assumptions that will be specified later. Equation \eqref{eq1} features a competition between two terms: a singular diffusion term $\Delta_p u := \text{div}\left( |\nabla u|^{p-2} \nabla u \right)$, which in our case is supercritical (that is, $p>p_c=2N/(N+1)$) in order to avoid extinction in finite time, and a nonlinear absorption term depending on the gradient, $|\nabla u|^q$. Due to this competition, interesting mathematical features appear in some ranges of the exponents $p$ and $q$.
The qualitative theory of \eqref{eq1} for general exponents $p$ and $q$ has developed only recently; indeed, while there are many papers (including classical ones) on nonlinear diffusion equations with zero-order absorption, covering almost all possible cases, the study of gradient absorption proved to be much more involved and revealed a number of very interesting mathematical phenomena, some of which have been the subject of intensive research in the last decade. As expected, the first results were obtained in the semilinear case $p=2$, where the asymptotic behavior for $q>1$ has been identified in a series of papers \cite{BKaL04, BKL04, BL01, BVDxx, BGK04, GL07, Gi05}. Finite time extinction was shown to take place for $q\in (0,1)$ \cite{BLS01, BLSS02, Gi05}, while the critical case $q=1$, in spite of its apparent simplicity, is still far from being fully understood: only some large-time estimates are available \cite{BRV97}, but no precise asymptotics. Passing to the $p$-Laplacian is a natural step and, for the slow-diffusion case $p>2$, the exponent $q=p-1$ proved to have a very interesting critical effect, serving as an interface between absorption-dominated behavior and diffusion-dominated behavior \cite{BtL08, LV07}, while itself giving rise to a critical regularized sandpile-type behavior, as shown recently in \cite{ILV}. A natural next step was then the study of the fast-diffusion case $1<p<2$, where the authors recently made important progress in understanding the decay rates and typical self-similar profiles \cite{IL1, IL2}. In particular, finite time extinction was shown to take place when $(p,q)$ ranges in $(p_c,2)\times (0,p/2)$ and in $(1,p_c)\times (0,\infty)$, while diffusion is likely to govern the large time dynamics when $(p,q)\in (p_c,2)\times (q_*,\infty)$. The intermediate range $(p,q)\in (p_c,2)\times (p/2,q_*)$ features a balance between the diffusion and absorption terms and is the focus of this paper.
From now on, we restrict ourselves to the following range of exponents: \begin{equation} p\in (p_c,2) \;\;\text{ and }\;\; q\in \left( \frac{p}{2} , q_* \right)\,, \label{rexp} \end{equation} and we set \begin{equation} \alpha := \frac{p-q}{2q-p}>0\,, \quad \beta := \frac{q-p+1}{2q-p}>0 \;\;\text{ and }\;\; \eta := \frac{1}{N(p-2)+p} > 0\,, \label{expas} \end{equation} the positivity of $\eta$ being a consequence of $p>p_c$. We also observe that, thanks to \eqref{rexp}, \begin{equation} \alpha - N \beta = \frac{(N+1)(q_*-q)}{2q-p}>0\,. \label{expas2} \end{equation}
In order to state the main result concerning the large-time behavior, we recall a special category of solutions to \eqref{eq1}, that are called \emph{very singular solutions}. These are solutions to \eqref{eq1} with an initial trace at $t=0$ more concentrated at the origin than a Dirac mass, thus justifying the name. The precise definition is given in Definition~\ref{def.VSS} at the beginning of Section~\ref{sec4}.
The name \emph{very singular solution} has been introduced in \cite{BPT86} for the heat equation with absorption of order zero. After this first paper, many other very singular solutions for diffusion equations with absorption terms were constructed, see \cite{CQW07, KV92, Le96, PW88, Sh04, Zh94} and the references therein. For \eqref{eq1}, we have established in \cite[Theorem 1.1]{IL2} the existence and uniqueness of such a very singular solution to \eqref{eq1}, under the more restrictive hypothesis of radial symmetry and self-similarity. We recall this result for the reader's convenience as Theorem~\ref{th.VSSunique} below. For the moment, let us denote this unique radially symmetric, self-similar very singular solution by $U$ with \begin{equation}\label{eq.selfVSS} U(t,x):=t^{-\alpha}f_U(xt^{-\beta}) , \qquad (t,x)\in Q_\infty . \end{equation} The main result about large time behavior is the following:
\begin{theorem}\label{th.asympt} Let $u_0$ be a function such that \begin{equation} u_0\in L^1(\mathbb{R}^N)\cap W^{1,\infty}(\mathbb{R}^N)\,, \quad u_0\ge 0\,, \quad u_0\not\equiv 0\,. \label{wp1} \end{equation} and \begin{equation}\label{wp111}
\lim\limits_{|x|\to\infty}|x|^{\alpha/\beta}u_0(x)=0. \end{equation} Then, the following large time behavior holds true: \begin{equation}\label{asympt}
\lim\limits_{t\to\infty}t^{\alpha} \|u(t)-U(t)\|_\infty=0, \end{equation} where $U$ is the unique radially symmetric self-similar very singular solution to \eqref{eq1} introduced in \eqref{eq.selfVSS}. \end{theorem}
In order to prove Theorem~\ref{th.asympt}, several steps are needed, some of which are also very interesting in their own right. A key element of the proof is the identification of the possible limits as $t\to\infty$, which we prove to be very singular solutions in the sense of Definition~\ref{def.VSS} by viscosity techniques. The circle is then closed by the following general uniqueness result.
\begin{theorem}\label{th.VSSgeneral} There exists a unique very singular solution to \eqref{eq1} in the sense of Definition~\ref{def.VSS}. In particular, this solution is radially symmetric and in self-similar form and it coincides with $U$. \end{theorem}
This theorem is an important extension of \cite[Theorem 1.1]{IL2}, where the uniqueness of a very singular solution is established under the extra conditions of radial symmetry and self-similar form. An interesting by-product of Theorem~\ref{th.VSSgeneral} is a comparison principle for the elliptic equation $$
-\Delta_{p}v+|\nabla v|^q-\alpha v-\beta x\cdot\nabla v=0, \quad x\in \mathbb{R}^N, $$
under suitable conditions as $|x|\to\infty$. For a precise form of the statement, we refer the reader to Theorem~\ref{th.comp} below.
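At a formal level, the elliptic equation above arises from the self-similar ansatz \eqref{eq.selfVSS}; the following sketch, which only uses the exponent identities in \eqref{expas}, records the computation. Writing $U(t,x)=t^{-\alpha}f_U(\xi)$ with $\xi:=xt^{-\beta}$, we get
$$
\partial_t U = -t^{-\alpha-1}\left( \alpha f_U + \beta\, \xi\cdot\nabla f_U \right)\,, \quad \Delta_p U = t^{-(\alpha+\beta)(p-1)-\beta}\, \Delta_p f_U\,, \quad |\nabla U|^q = t^{-(\alpha+\beta)q}\, |\nabla f_U|^q\,.
$$
Since $\alpha+\beta=1/(2q-p)$ by \eqref{expas}, both $(\alpha+\beta)(p-1)+\beta$ and $(\alpha+\beta)q$ are equal to $\alpha+1$, so that all the terms in \eqref{eq1} carry the common factor $t^{-\alpha-1}$ and the profile $f_U$ solves the above elliptic equation with $v=f_U$.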
On the way to proving Theorem~\ref{th.VSSgeneral}, we found that a theory of the Cauchy problem associated to \eqref{eq1} with non-negative and bounded measures as initial data had to be developed. We thus prove an interesting well-posedness result for \eqref{eq1} with such initial data, which extends to $p\in (p_c,2)$ the existing one for the semilinear case $p=2$ \cite{BL99, BVDxx}, but holds true only if the singular diffusion equation $\partial_t v - \Delta_p v = 0$ in $Q_\infty$ is well-posed in this setting. This issue, however, still seems to be an open question for general non-negative and bounded measures; the answer is nevertheless positive for Dirac masses, which is exactly what is needed for the proof of Theorem~\ref{th.VSSgeneral}.
\begin{theorem}\label{th.uniqfund} Consider a non-negative bounded Borel measure $u_0\in{\cal M}_{b}^{+}(\mathbb{R}^N)$. If the singular diffusion equation \begin{eqnarray*} \partial_t v - \Delta_p v & = & 0 \;\;\text{ in }\;\; Q_\infty\,, \\ v(0) & = & u_0 \;\;\text{ in }\;\; \mathbb{R}^N\,, \end{eqnarray*} has a unique solution $v\in C([0,\infty); {\cal M}_{b}^{+}(\mathbb{R}^N)) \cap C(Q_\infty)$, then there exists a unique non-negative function $u\in C(Q_\infty)$ which is a viscosity solution to \eqref{eq1} in $Q_\infty$ and satisfies \begin{equation}\label{cp.fund} \lim_{t\to 0} \int_{\mathbb{R}^N} \psi(x)\ u(t,x)\, dx = \int_{\mathbb{R}^N} \psi(x)\ du_0(x) \end{equation} for any bounded and continuous function $\psi\in BC(\mathbb{R}^N)$. Moreover, $u(t)$ belongs to $L^1(\mathbb{R}^N)\cap W^{1,\infty}(\mathbb{R}^N)$ for all $t>0$ and satisfies \begin{equation}
\|u(t)\|_1 \le M_0 := \int_{\mathbb{R}^N} du_0(x)\,, \label{evian} \end{equation} as well as the following estimates \begin{equation}\label{wp101}
\|u(t)\|_{1} + t^{N\eta}\|u(t)\|_{\infty} \leq C_s(M_0)\,, \end{equation} and \begin{equation}\label{wp102}
\|\nabla u(t)\|_{\infty}\leq C_s(M_0) \left( 1 + t^{(N+1)(q_*-q)\eta/(p-q)} \right) t^{-(N+1)\eta}\,, \end{equation} where $C_s\in C([0,\infty))$ is a positive function depending only on $N$, $p$, and $q$. \end{theorem}
The proof of this theorem is technical and quite involved, as is usual when dealing with measures, since the lack of regularity precludes the application of some of the standard techniques. In particular, Theorem~\ref{th.uniqfund} also implies the existence and uniqueness of a fundamental solution to \eqref{eq1} with any given mass $M>0$, as explained at the end of Section~\ref{sec3}.
\noindent \textbf{Organisation of the paper.} We collect in Section~\ref{sec2} many technical results and estimates needed in the sequel, in the form of separate lemmas. These include: a rigorous definition of viscosity solutions, decay estimates, estimates on the tail of the solution at sufficiently large times, and estimates on the solutions for small times, the latter being useful tools for identifying the initial trace. We acknowledge that this section is a bit technical, but it allows us to state more clearly the main ideas and steps in the proofs of our main results. A reader who is not interested in the technical details may skip this part, taking the technical lemmas for granted, and come back to it later.
Section~\ref{sec3} is devoted to the proof of Theorem~\ref{th.uniqfund}. The proof is divided into two steps: we first construct a solution to \eqref{eq1} by classical approximation arguments; we then show the uniqueness of the solution, which is actually the main contribution of this section. We next pass to the proof of Theorem~\ref{th.VSSgeneral}, which occupies almost all of Section~\ref{sec4} and is divided into several steps: we first construct a maximal and a minimal element in the class of very singular solutions to \eqref{eq1}; then we show that these two solutions are identical, by identifying both of them with the unique radially symmetric and self-similar very singular solution $U$; and we end with the proof of the comparison principle for the associated elliptic equation. We close the paper with the proof of Theorem~\ref{th.asympt}, to which Section~\ref{sec5} is devoted. It relies on the technique of half-relaxed limits and is rather short, since most of the needed technical facts have been established in the previous sections.
\section{Well-posedness and decay estimates}\label{sec2}
In this section, we collect previous results on the well-posedness of \eqref{eq1} as well as some qualitative properties of the solutions. Let us first recall the notion of solutions we use throughout the paper.
\subsection{Viscosity solution}\label{sec2vs}
As in our previous works \cite{IL1, IL2}, a suitable notion of solution for equation~\eqref{eq1} is that of \emph{viscosity solution}, which is well suited to handling the gradient term. Due to the singular character of \eqref{eq1} at points where $\nabla u$ vanishes, the standard definition of viscosity solution has to be adapted \cite{IS, JLM, OS}. In fact, it requires restricting the class of comparison functions \cite{IS, OS}. More precisely, let ${\cal F}$ be the set of functions $f\in C^{2}([0,\infty))$ satisfying $$ f(0)=f'(0)=f''(0)=0, \ f''(r)>0 \ \hbox{for all} \ r>0,
\quad \lim\limits_{r\to 0}|f'(r)|^{p-2}f''(r)=0. $$ For example, $f(r)=r^{\sigma}$ with $\sigma>p/(p-1)>2$ belongs to ${\cal F}$. We then introduce the class ${\cal A}$ of admissible comparison functions $\psi$ defined as follows: a function $\psi\in C^2(Q_{\infty})$ belongs to ${\cal A}$ if, for any $(t_0,x_0)\in Q_{\infty}$ where $\nabla\psi(t_0,x_0) =0$, there exist a constant $\delta>0$, a function $f\in{\cal F}$, and a modulus of continuity $\omega\in C([0,\infty))$ (that is, a non-negative function satisfying $\omega(r)/r\to 0$ as $r\to 0$) such that, for all
$(t,x)\in Q_{\infty}$ with $|x-x_0|+|t-t_0|<\delta$, we have $$
|\psi(t,x)-\psi(t_0,x_0)-\partial_t\psi(t_0,x_0)(t-t_0)|\le f(|x-x_0|)+\omega(|t-t_0|). $$ With these notations, viscosity solutions to \eqref{eq1} are defined as follows \cite{IS, JLM, OS}:
\begin{definition}\label{def.visc} An upper semicontinuous function $u:Q_{\infty}\to\mathbb{R}$ is a viscosity subsolution to \eqref{eq1} in $Q_{\infty}$ if, whenever $\psi\in{\cal A}$ and $(t_0,x_0)\in Q_{\infty}$ are such that \begin{equation*} u(t_0,x_0)=\psi(t_0,x_0), \quad u(t,x)<\psi(t,x) \ \mbox{for all}\ (t,x)\in Q_{\infty}\setminus\{(t_0,x_0)\}, \end{equation*} then \begin{equation*}
\left\{\begin{array}{ll}\partial_t\psi(t_0,x_0)\leq\Delta_{p}\psi(t_0,x_0)-|\nabla\psi(t_0,x_0)|^{q} & \ \hbox{if} \ \nabla\psi(t_0,x_0)\neq0,\\ \partial_t \psi(t_0,x_0)\leq 0 & \ \hbox{if} \ \nabla\psi(t_0,x_0)=0.\end{array}\right. \end{equation*} A lower semicontinuous function $u:Q_{\infty}\to\mathbb{R}$ is a viscosity supersolution to \eqref{eq1} in $Q_{\infty}$ if, whenever $\psi\in{\cal A}$ and $(t_0,x_0)\in Q_{\infty}$ are such that \begin{equation*} u(t_0,x_0)=\psi(t_0,x_0), \quad u(t,x)>\psi(t,x) \ \mbox{for all}\ (t,x)\in Q_{\infty}\setminus\{(t_0,x_0)\}, \end{equation*} then \begin{equation*}
\left\{\begin{array}{ll}\partial_t\psi(t_0,x_0)\geq\Delta_{p}\psi(t_0,x_0)-|\nabla\psi(t_0,x_0)|^{q} & \ \hbox{if} \ \nabla\psi(t_0,x_0)\neq0,\\ \partial_t \psi(t_0,x_0)\geq 0 & \ \hbox{if} \ \nabla\psi(t_0,x_0)=0.\end{array}\right. \end{equation*}
A continuous function $u:Q_{\infty}\to\mathbb{R}$ is a viscosity solution to \eqref{eq1} in $Q_{\infty}$ if it is a viscosity subsolution and supersolution. \end{definition}
A remarkable feature of this modified definition is that basic results about viscosity solutions, such as comparison principle and stability property, are still valid, see \cite[Theorem 3.9]{OS} (comparison principle) and \cite[Theorem 6.1]{OS} (stability). The relationship between viscosity solutions and other notions of solutions is investigated in \cite{JLM}. From now on, by a solution to \eqref{eq1} we mean a viscosity solution in the sense of Definition~\ref{def.visc} above.
With this notion of solution to \eqref{eq1}, we have the following well-posedness result \cite[Theorem~6.2]{IL1}.
\begin{proposition}\label{pr.wp1} Assume that $u_0$ is a function satisfying the conditions \eqref{wp1}. Then there exists a unique non-negative function $u\in C([0,\infty)\times\mathbb{R}^N)$ which is a viscosity solution to \eqref{eq1} in $Q_\infty$ and satisfies $u(0)=u_0$. In addition, $u(t)\in L^1(\mathbb{R}^N)\cap W^{1,\infty}(\mathbb{R}^N)$ for each $t>0$ and $u$ is also a weak solution to \eqref{eq1}-\eqref{inco} in the following sense: \begin{equation}\label{def.weak}
\int_{\mathbb{R}^N} (u(t,x) -u(s,x)) \psi(x) \,dx + \int_s^t\int_{\mathbb{R}^N} \left(|\nabla u|^{p-2} \nabla u \cdot \nabla\psi + |\nabla u|^q \psi \right)\,dx\,d\tau=0, \end{equation} for any $0\leq s<t<\infty$ and $\psi\in C_0^{\infty}(\mathbb{R}^N)$. \end{proposition}
As usual for homogeneous parabolic equations, the radial symmetry and monotonicity are preserved, as the following result states.
\begin{lemma}\label{lem.rs}
If $u_0$ satisfies \eqref{wp1} and is radially symmetric and non-increasing with respect to $|x|$, then the same properties hold true for $u(t)$, for any $t>0$. \end{lemma}
\begin{proof} The radial symmetry of $u(t)$ for positive times $t>0$ follows readily from the rotational invariance of \eqref{eq1} and the well-posedness of \eqref{eq1}. Next, we can write
$u(t,x)=u(t,|x|)=u(t,r)$, and it satisfies $$
\partial_{t}u-(p-1)|\partial_{r}u|^{p-2}\partial^2_{r}u-\frac{N-1}{r}|\partial_{r}u|^{p-2}\partial_{r}u+|\partial_{r}u|^q=0. $$ At a formal level, it is clear that the zero function is a solution to the equation satisfied by $\partial_{r}u$ (which can be derived by differentiating the above equation for $u$), and the claimed monotonicity follows from the comparison principle since $\partial_r u_0\leq0$. Thanks to the uniqueness of solutions to \eqref{eq1}, this argument can be made rigorous by standard approximations, as in \cite{IL1}. \end{proof}
A classical property of parabolic equations is that a modulus of continuity in space entails a modulus of continuity in time. In that direction, we have the following result, which can be proved as in \cite[Lemma~5]{GGK03}.
\begin{lemma}\label{le.wp5} Consider an initial condition $u_0$ satisfying \eqref{wp1} and let $u$ be the corresponding solution to \eqref{eq1}-\eqref{inco}. Assume further that there are $\tau\ge 0$ and $A>0$ such that
$\|\nabla u(t)\|_\infty \le A$ for all $t\in [\tau,\infty)$. Then there is $C_2>0$ depending only on $N$, $p$, and $q$ such that \begin{equation}
|u(t,x) - u(s,x)| \le C_2\ \left[ (1+A)\ |t-s|^{1/2} + A^q\ |t-s| \right]\,, \qquad t>s\ge \tau\,. \label{wp8} \end{equation} \end{lemma}
\subsection{Decay estimates}\label{sec2de}
We next recall temporal decay estimates in $L^1(\mathbb{R}^N)$ and $W^{1,\infty}(\mathbb{R}^N)$ which are consequences of the analysis performed in \cite{IL1} and depend on the behavior of the initial data as $|x|\to\infty$.
\begin{proposition}\label{pr.wp2a} Assume that $u_0$ satisfies \eqref{wp1} and denote the corresponding solution to \eqref{eq1}-\eqref{inco} by $u$. Then there is a constant $C>0$ depending only on $N$, $p$, and $q$ such that \begin{equation}
|\nabla u(t,x)| \le C \left( \|u(s)\|_\infty^{1/\alpha p} + (t-s)^{-1/p} \right) \left( u(t,x) \right)^{2/p}\,, \quad 0\le s<t\,, \;\; x\in\mathbb{R}^N\,. \label{wp100} \end{equation}
In addition, if $M$ is such that $M\ge \|u_0\|_1$, then the estimates \eqref{wp101} and \eqref{wp102} hold true with $C_s(M)$ instead of $C_s(M_0)$. \end{proposition}
\begin{proof} The estimate \eqref{wp100} is a straightforward consequence of \cite[Theorem~1.3~(i) \&~(ii)]{IL1}, while \eqref{wp101} follows by comparison with the solution $v$ to the diffusion equation \begin{eqnarray} \partial_t v - \Delta_p v & = & 0 \;\;\text{ in }\;\; Q_\infty\,, \label{wp103} \\ v(0) & = & u_0 \;\;\text{ in }\;\; \mathbb{R}^N\,, \label{wp104} \end{eqnarray} see \cite{DBH90} for instance. Indeed, we obviously have $u\le v$ in $Q_\infty$ by the comparison principle and, since $p>p_c$, we deduce from \cite[Lemma~III.6.1 \& Theorem~III.6.2]{DBH90} (with $r=1$ and $R=\infty$) that \begin{equation}
\|v(t)\|_1 \le C\ \|u_0\|_1 \;\;\text{ and }\;\; \|v(t)\|_\infty \le C\ \|u_0\|_1^{p\eta}\ t^{-N\eta} \label{wp300} \end{equation} for $t>0$. Finally, \eqref{wp102} readily follows from \eqref{wp100} (with $s=t/2$) and \eqref{wp101}. \end{proof}
For initial data decaying sufficiently rapidly as $|x|\to\infty$, improved temporal decay estimates were also supplied in \cite[Theorem~1.2]{IL1}; they are only valid when $p$ and $q$ satisfy \eqref{rexp}.
\begin{proposition}\label{pr.wp2} Assume that $u_0$ satisfies \eqref{wp1} as well as \begin{equation}
0 \le u_0(x) \le \kappa\ |x|^{-\alpha/\beta}\,, \quad x\in\mathbb{R}^N\,, \label{wp2} \end{equation} for some $\kappa>0$, and denote the corresponding solution to \eqref{eq1}-\eqref{inco} by $u$. Then there is a constant $K_\kappa>0$ depending only on $N$, $p$, $q$, and $\kappa$ such that \begin{equation}\label{wp3}
t^{\alpha-N \beta}\|u(t)\|_{1} + t^{\alpha}\|u(t)\|_{\infty} + t^{\alpha+\beta} \|\nabla u(t)\|_{\infty} \leq K_\kappa, \quad t>0. \end{equation} \end{proposition}
The precise dependence of $K_\kappa$ on the parameters is not stated in \cite[Theorem~1.2~(i)]{IL1} but can be recovered by inspecting the proofs of \cite[Theorem~1.2~(i) \& Lemma~5.1]{IL1}.
\subsection{Small time estimates}\label{sec2ste}
The previous decay estimates allow us to analyze precisely the behavior of solutions to \eqref{eq1} for small times, a fact which will be of utmost importance when considering non-smooth or even singular initial data.
\begin{proposition}\label{pr.wp2b} Assume that $u_0$ satisfies \eqref{wp1} and denote the corresponding solution to \eqref{eq1}-\eqref{inco} by $u$. \begin{itemize}
\item[(a)] Let $\psi\in C_0^\infty(\mathbb{R}^N)$ and $T>0$. If $M$ is such that $M\ge \|u_0\|_1$, there exists a constant $C(M,T)>0$ depending only on $N$, $p$, $q$, $M$, and $T$ such that, for $t\in (0,T)$, \begin{equation} \begin{split}
& \left| \int_{\mathbb{R}^N} (u(t,x) - u_0(x))\ \psi(x)\, dx \right| \\
\le & C(M,T) \left[ \|\psi\|_\infty\ t^{(N+1)(q_*-q)\eta} + \|\nabla\psi\|_{p/(2-p)}\ t^{1/p} \right]\,. \end{split} \label{wp200} \end{equation} \item[(b)] Let $\psi\in C_0^\infty(\mathbb{R}^N)$ be a non-negative function such that $\psi(x)=0$ for $x\in B_r(0)$ for some $r>0$. If $u_0$ satisfies \eqref{wp2} for some $\kappa>0$, there exists a constant $C(\kappa,r)>0$ depending only on $N$, $p$, $q$, $\kappa$, and $r$ such that, for $t>0$, \begin{equation}
\int_{\mathbb{R}^N} u(t,x)\ \psi(x)\, dx \le \int_{\mathbb{R}^N} u_0(x)\ \psi(x)\, dx + C(\kappa,r)\|\nabla\psi\|_{p/(2-p)}\ t^{1/p} \,. \label{wp201} \end{equation} \end{itemize} \end{proposition}
\begin{proof} \textbf{Case~(a).} Let $\psi\in C_0^\infty(\mathbb{R}^N)$, $T>0$, and $t\in (0,T)$. It follows from \eqref{def.weak} that \begin{equation}\label{volvic} \begin{split}
& \left| \int_{\mathbb{R}^N} (u(t,x) - u_0(x))\ \psi(x)\,dx \right| \\
\leq &\int_0^t \int_{\mathbb{R}^N} \left( |\nabla u(s,x)|^{p-1}\
|\nabla\psi(x)| + |\nabla u(s,x)|^{q}\ \psi(x)\right)\,dx\,ds. \end{split} \end{equation} To estimate the gradient terms in the right-hand side of \eqref{volvic}, we first notice that \eqref{wp100} and \eqref{wp101} give for $(s,x)\in Q_\infty$ \begin{eqnarray}
\left| \nabla u(s,x) \right| & \le & C \left[ \left\| u\left( \frac{s}{2} \right) \right\|_\infty^{1/\alpha p} + s^{-1/p} \right] \left( u(s,x) \right)^{2/p}\,, \nonumber\\ & \le & C(M) \left[ s^{-N\eta/\alpha p} + s^{-1/p} \right] \left( u(s,x) \right)^{2/p}\,. \label{grad.est.u2} \end{eqnarray} Now, we infer from \eqref{wp101} and \eqref{grad.est.u2} that \begin{align*}
\int_{\mathbb{R}^N} |\nabla u(s,x)|^q\ |\psi(x)| \,dx
\leq & C(M)\ \|\psi\|_\infty \left[ s^{-qN\eta/\alpha p} + s^{-q/p} \right] \|u(s)\|_\infty^{(2q-p)/p}\ \|u(s)\|_1 \\
\leq & C(M)\ \|\psi\|_\infty \left[ s^{-N\eta/\alpha} + s^{-((N+1)q-N)\eta} \right]. \end{align*} Observing that \begin{align*} & 1 - \frac{N\eta}{\alpha} = \frac{(N+1)(q_*-q)p\eta}{p-q} > 0,\\ & 1-((N+1)q-N)\eta = (N+1)(q_*-q)\eta > 0 \end{align*} by \eqref{rexp}, we integrate the above inequality over $(0,t)$ and obtain \begin{align}
& \int_0^t\int_{\mathbb{R}^N} |\nabla u(s,x)|^{q} |\psi(x)| \,dx\,ds \nonumber \\
\leq & C(M)\ \|\psi\|_{\infty} \left[ t^{(N+1)(q_*-q)p\eta/(p-q)} + t^{(N+1)(q_*-q)\eta} \right] \nonumber \\
\leq & C(M)\ \|\psi\|_{\infty} \left[ 1 + t^{(N+1)(q_*-q)q\eta/(p-q)} \right] t^{(N+1)(q_*-q)\eta}. \label{interm3bis} \end{align} Similarly, by \eqref{grad.est.u2} and H\"older's inequality, \begin{align*}
& \int_{\mathbb{R}^N} |\nabla u (s,x)|^{p-1}\ |\nabla\psi(x)|\,dx \\
&\leq C(M) \left[ s^{-(p-1)N\eta/\alpha p} + s^{-(p-1)/p} \right] \int_{\mathbb{R}^N} (u(s,x))^{2(p-1)/p}\ |\nabla\psi(x)|\,dx\\
&\leq C(M) \left[ s^{-(p-1)N\eta/\alpha p} + s^{-(p-1)/p} \right] \| u(s)\|_1^{2(p-1)/p}\ \|\nabla\psi\|_{p/(2-p)}\\
&\leq C(M) \left[ 1 + s^{(p-1)(N+1)(q_*-q)\eta/(p-q)} \right] \|\nabla\psi\|_{p/(2-p)}\ s^{-(p-1)/p} , \end{align*} hence, after integrating over $(0,t)$, \begin{equation}\label{interm2bis} \begin{split}
& \int_0^t\int_{\mathbb{R}^N} |\nabla u(s,x)|^{p-1}\ |\nabla\psi(x)| \,dx\,ds \\
\leq & C(M) \left[ 1 + t^{(p-1)(N+1)(q_*-q)\eta/(p-q)} \right] \|\nabla\psi\|_{p/(2-p)}\ t^{1/p}. \end{split} \end{equation} Combining \eqref{volvic}, \eqref{interm3bis}, and \eqref{interm2bis} gives \eqref{wp200}.
\noindent\textbf{Case~(b).} Consider $t>0$ and a non-negative function $\psi\in C_0^\infty(\mathbb{R}^N)$ vanishing in $B_r(0)$ for some $r>0$. Since $u_0$ satisfies \eqref{wp2}, it follows from \eqref{wp100} and \eqref{wp3} that, for $(s,x)\in Q_\infty$, \begin{eqnarray}
\left| \nabla u(s,x) \right| & \le & C \left[ \left\| u\left( \frac{s}{2} \right) \right\|_\infty^{1/\alpha p} + s^{-1/p} \right] \left( u(s,x) \right)^{2/p}\,, \nonumber\\ & \le & C(\kappa)\ s^{-1/p} \left( u(s,x) \right)^{2/p}\,. \label{grad.est.u2b} \end{eqnarray} Owing to the non-negativity of $\psi$, it follows from \eqref{def.weak} and \eqref{grad.est.u2b} that \begin{align*}
\int_{\mathbb{R}^N} (u(t,x)-u_0(x))\ \psi(x)\, dx \le & \int_0^t \int_{\mathbb{R}^N} |\nabla u(s,x)|^{p-1}\ |\nabla\psi(x)|\, dx\, ds \\
\le & C(\kappa)\ \int_0^t \int_{\mathbb{R}^N} \left( u(s,x) \right)^{2(p-1)/p}\ |\nabla\psi(x)|\ s^{-(p-1)/p} \, dx\, ds\,. \end{align*}
We now use again the decay property \eqref{wp2} of $u_0$ together with \cite[Equation~(5.5)]{IL1} to conclude that $u(s,x)\le C(\kappa)\ |x|^{-\alpha/\beta}$ for $(s,x)\in Q_\infty$. Since $\psi$ vanishes in $B_r(0)$, so does $\nabla\psi$, and, by H\"older's inequality, \begin{align*}
\int_{\mathbb{R}^N} (u(t,x)-u_0(x))\ \psi(x)\, dx \le & C(\kappa)\ \int_0^t \int_{\{|x|>r\}} |x|^{-2(p-1)\alpha/p\beta}\ |\nabla\psi(x)|\ s^{-(p-1)/p} \,dx\, ds \\
\le & C(\kappa)\ t^{1/p}\ \left( \int_{\{|x|>r\}} |x|^{-\alpha/\beta}\, dx \right)^{2(p-1)/p}\ \|\nabla\psi\|_{p/(2-p)}\,, \end{align*} from which \eqref{wp201} follows since $\alpha/\beta>N$ by \eqref{expas2}. \end{proof}
\subsection{Tail behavior}\label{sec2tb}
We end this section with a control on the tail of solutions to \eqref{eq1}-\eqref{inco}. We first establish a pointwise estimate by showing the existence of a universal upper bound (also referred to as a \emph{friendly giant} in the literature), an idea already used in previous works; see \cite{BKL04, BL01, KV92, VazquezPME} for instance. We define \begin{equation}\label{FG} \Gamma_{p,q}(r):=\gamma\ r^{-\alpha/\beta}, \quad r>0, \end{equation} where \begin{equation}\label{exp.FG} \gamma:=\frac{q-p+1}{p-q}\ \left( \frac{p-1}{q-p+1} \right)^{1/(q-p+1)}, \end{equation} and begin with some useful properties of $\Gamma_{p,q}$.
\begin{lemma}\label{le.wp6}
For all $r>0$, $\Gamma_{p,q}$ belongs to $L^1(\mathbb{R}^N\setminus B_r(0))$ and $(t,x)\longmapsto \Gamma_{p,q}(|x|-r)$ is a supersolution to \eqref{eq1} in $(0,\infty)\times (\mathbb{R}^N\setminus B_r(0))$. \end{lemma}
\begin{proof} The stated integrability of $\Gamma_{p,q}$ follows from the property $\alpha/\beta>N$, see \eqref{expas2}, while a direct computation and the monotonicity of $\Gamma_{p,q}$ give the second assertion. \end{proof}
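For the reader's convenience, we sketch the direct computation behind Lemma~\ref{le.wp6} at a formal level, taking $r=0$ for simplicity. Recalling that the self-similar exponents satisfy $\alpha/\beta=(p-q)/(q-p+1)$ and setting $a:=\alpha/\beta$, the choice \eqref{exp.FG} of $\gamma$ ensures that
$$ (a+1)(q-p+1)=1 \;\;\text{ and }\;\; (a\gamma)^{q-p+1}=\frac{p-1}{q-p+1}\,. $$
Since $\Gamma_{p,q}'(\rho)=-a\gamma\ \rho^{-a-1}<0$ for $\rho>0$, the first identity guarantees that $|\Gamma_{p,q}'|^q$ and $\left( |\Gamma_{p,q}'|^{p-2} \Gamma_{p,q}' \right)'$ are powers of $\rho$ with the same exponent $-(a+1)q$, while the second one makes their coefficients cancel, so that
$$ -\left( |\Gamma_{p,q}'|^{p-2} \Gamma_{p,q}' \right)'(\rho) - \frac{N-1}{\rho}\ \left( |\Gamma_{p,q}'|^{p-2} \Gamma_{p,q}' \right)(\rho) + |\Gamma_{p,q}'(\rho)|^q = \frac{N-1}{\rho}\ (a\gamma)^{p-1}\ \rho^{-(a+1)(p-1)} \geq 0\,. $$
The general case $r>0$ follows in the same way after the shift $\rho\mapsto\rho-r$.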
\begin{lemma}\label{le.wp3} Consider an initial condition $u_0$ satisfying \eqref{wp1} and let $u$ be the corresponding solution to \eqref{eq1}-\eqref{inco}. Define \begin{equation}\label{radius.initial}
R(u_0):=\inf\left\{ R>0: \ u_0(x)|x|^{\alpha/\beta}\leq\gamma \ \text{a.e. in} \ \{|x|\geq R\}\right\} \in [0,\infty]. \end{equation} If $R(u_0)<\infty$, then \begin{equation}
0 \leq u(t,x) \leq \Gamma_{p,q}(|x|-R(u_0)) \label{wp7} \end{equation}
for any $t>0$ and $x\in\mathbb{R}^N$ with $|x|>R(u_0)$. \end{lemma}
\begin{proof} Clearly, $$
u_0(x) \leq \gamma |x|^{-\alpha/\beta} = \Gamma_{p,q}(|x|-R(u_0))\,, \qquad x\in\mathbb{R}^N\setminus B_{R(u_0)}(0)\,. $$
In addition, for all $x\in\mathbb{R}^N$ such that $|x| = R(u_0)$ and
$t>0$, we have $\Gamma_{p,q}(|x|-R(u_0)) = \infty > u(t,x)$. Thus,
$u(t,x) \leq \Gamma_{p,q}(|x|-R(u_0))$ on the parabolic boundary of $(0,\infty)\times (\mathbb{R}^N\setminus B_{R(u_0)}(0))$, and the comparison principle guarantees that $u(t,x)\leq
\Gamma_{p,q}(|x|-R(u_0))$ in $(0,\infty)\times \left( \mathbb{R}^N\setminus B_{R(u_0)}(0) \right)$. \end{proof}
We next prove an integral estimate on the tail behavior of solutions to \eqref{eq1}-\eqref{inco}.
\begin{lemma}\label{le.wp4} Let $u_0$ be an initial condition satisfying \eqref{wp1} and denote the corresponding solution to \eqref{eq1}-\eqref{inco} by $u$. There is $C_0>0$ depending only on $N$, $p$, and $q$ such that, for $R>0$ and $t\ge 0$, there holds \begin{equation}
\int_{\{|x|\ge R\}} u(t,x)\, dx \le C_0\ R^{(\beta N-\alpha)/\beta}\ \left( \sup_{|x|\ge R/2}{\left\{ u_0(x)\ |x|^{\alpha/\beta} \right\}} + t \ R^{-1/\beta} \right)\,. \label{wp5} \end{equation} \end{lemma}
\begin{proof} We fix $\zeta\in C^\infty(\mathbb{R}^N)$ such that $0\le \zeta\le 1$ and \begin{equation}
\zeta(x) = 0 \;\;\text{ if }\;\; |x|\le \frac{1}{2} \;\;\;\text{ and }\;\;\; \zeta(x) = 1 \;\;\text{ if }\;\; |x|\ge 1\,. \label{trunc} \end{equation} For $R>0$ and $x\in\mathbb{R}^N$, we define $\zeta_R(x):=\zeta(x/R)$. It follows from the weak formulation of \eqref{eq1} and Young's inequality that \begin{align*}
& \frac{d}{dt} \int_{\mathbb{R}^N} \zeta_R(x)^{q/(q-p+1)}\ u(t,x)\, dx + \int_{\mathbb{R}^N} \zeta_R(x)^{q/(q-p+1)}\ |\nabla u(t,x)|^q\, dx \\
\le & \frac{q}{q-p+1}\ \int_{\mathbb{R}^N} \zeta_R(x)^{(p-1)/(q-p+1)}\ |\nabla u(t,x)|^{p-1}\ |\nabla\zeta_R(x)|\, dx \\
\le & \frac{p-1}{q-p+1}\ \int_{\mathbb{R}^N} \zeta_R(x)^{q/(q-p+1)}\ |\nabla u(t,x)|^q\, dx + \int_{\mathbb{R}^N} |\nabla\zeta_R(x)|^{q/(q-p+1)}\, dx \,, \end{align*} whence \begin{equation} \frac{d}{dt} \int_{\mathbb{R}^N} \zeta_R(x)^{q/(q-p+1)}\ u(t,x)\, dx \le C(\zeta)\ R^{(\beta N-\alpha-1)/\beta}\,. \label{fantasio} \end{equation} Owing to the properties \eqref{trunc} of $\zeta$, we find, after integrating with respect to time, \begin{align*}
\int_{\{|x|\ge R\}} u(t,x)\, dx \le & \int_{\mathbb{R}^N} \zeta_R(x)^{q/(q-p+1)}\ u(t,x)\, dx \\ \le & \int_{\mathbb{R}^N} \zeta_R(x)^{q/(q-p+1)}\ u_0(x)\, dx + C(\zeta)\ t\ R^{(\beta N-\alpha-1)/\beta}\\
\le & \sup_{|x|\ge R/2}{\left\{ u_0(x)\ |x|^{\alpha/\beta} \right\}} \ \int_{\{ |x|\ge R/2\}} |x|^{-\alpha/\beta}\, dx + C(\zeta)\ t\ R^{(\beta N-\alpha-1)/\beta}\,, \end{align*} from which \eqref{wp5} follows. \end{proof}
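For completeness, we note that the power of $R$ in \eqref{wp5} stems from the elementary computation
$$ \int_{\{|x|\ge R/2\}} |x|^{-\alpha/\beta}\, dx = \frac{N\omega_N\beta}{\alpha-N\beta}\ \left( \frac{R}{2} \right)^{(\beta N-\alpha)/\beta}\,, $$
where $\omega_N$ denotes the Lebesgue measure of the unit ball of $\mathbb{R}^N$, the integral being finite since $\alpha/\beta>N$ by \eqref{expas2}.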
As a consequence of this integral tail estimate, we obtain precise pointwise estimates for sufficiently rapidly decaying initial data.
\begin{lemma}\label{lem.point} If $u_0$ satisfies \eqref{wp1} and \eqref{wp2} for some $\kappa>0$ and $u$ denotes the corresponding solution to the Cauchy problem \eqref{eq1}-\eqref{inco}, then there exists $C>0$ depending on $N$, $p$, and $q$ such that \begin{equation}\label{est.point}
|x|^{\alpha/\beta}u(t,x)\leq C \left(\sup\limits_{|y|\geq|x|/4}\{u_0(y)|y|^{\alpha/\beta}\}+t|x|^{-1/\beta}\right) \end{equation} for any $x\in\mathbb{R}^N\setminus\{0\}$ and $t>0$. \end{lemma}
\begin{proof}
\noindent \textbf{Step~1.} Assume first that $u_0$ is radially symmetric and non-increasing with respect to $|x|$. Then, by Lemma~\ref{lem.rs}, $u(t)$ has the same properties for any $t>0$, and for $x\in\mathbb{R}^N\setminus\{0\}$ we deduce from Lemma~\ref{le.wp4} that \begin{equation*} \begin{split}
Cu(t,x)|x|^N&\leq\int_{\{|x|/2\leq|y|\leq|x|\}}u(t,y)\,dy\\&\leq C_0\left(\frac{|x|}{2}\right)^{(N\beta-\alpha)/\beta}\left(\sup\limits_{|y|\geq|x|/4}\{u_0(y)|y|^{\alpha/\beta}\}+t\left(\frac{2}{|x|}\right)^{1/\beta}\right)\\
&\leq2^{(1+\alpha)/\beta}C_0|x|^{(N\beta-\alpha)/\beta}\left(\sup\limits_{|y|\geq|x|/4}\{u_0(y)|y|^{\alpha/\beta}\}+t|x|^{-1/\beta}\right), \end{split} \end{equation*} which gives \eqref{est.point} for this specific class of initial data.
\noindent \textbf{Step~2.} Fix $x_0\in\mathbb{R}^N\setminus\{0\}$. We define $$
\kappa_0 := \sup\limits_{|y|\geq|x_0|/4}\{u_0(y)|y|^{\alpha/\beta}\}\le\kappa $$
and take $R_0\in (0,|x_0|/4)$ such that $\kappa_0 R_0^{-\alpha/\beta}\geq\|u_0\|_{\infty}$. We define \begin{equation}
\tilde{u}_0(x):=\left\{\begin{array}{ll}2\kappa_0|x|^{-\alpha/\beta}, & |x|\geq R_0,\\ 2\kappa_0 R_0^{-\alpha/\beta}, & |x|\leq R_0.\end{array}\right. \end{equation}
Then $\tilde{u}_0$ is a radially symmetric and non-increasing function of $|x|$, and it satisfies \eqref{wp1}, since $\alpha/\beta>N$, as well as \eqref{wp2} with $2\kappa_0$ instead of $\kappa$. Moreover, $u_0\leq\tilde{u}_0$ in $\mathbb{R}^N$, hence the comparison principle guarantees that $u\leq\tilde{u}$ in $Q_{\infty}$, where $\tilde{u}$ denotes the solution to \eqref{eq1} with initial condition $\tilde{u}_0$. Applying Step~1 above to $\tilde{u}$ gives \begin{align*}
|x_0|^{\alpha/\beta} u(t,x_0) \le & |x_0|^{\alpha/\beta} \tilde{u}(t,x_0) \le 2^{(1+\alpha)/\beta} C_0 \left(\sup\limits_{|y|\geq|x_0|/4}\{\tilde{u}_0(y)|y|^{\alpha/\beta}\}+t|x_0|^{-1/\beta}\right) \\
\le & 2^{(1+\alpha)/\beta} C_0 \left( 2\kappa_0 + t|x_0|^{-1/\beta}\right)\,, \end{align*} and thus \eqref{est.point}. \end{proof}
\section{Well-posedness with non-negative bounded measures as initial data}\label{sec3}
In this section, we prove Theorem~\ref{th.uniqfund}, together with some preparatory results. We begin with the proof of the existence statement, which proceeds, as usual, by an approximation argument.
\begin{proof}[Proof of Theorem~\ref{th.uniqfund}. Existence] Let $u_0\in{\cal M}_{b}^{+}(\mathbb{R}^N)$ and $(u_0^k)_{k\ge 1}$ be a sequence of functions in $C_0^\infty(\mathbb{R}^N)$ such that \begin{equation}\label{aprox.cond1}
\|u_0^k\|_{1} = M_0 := \int_{\mathbb{R}^N} du_0\,, \end{equation} and \begin{equation}\label{aprox.cond2} \lim\limits_{k\to \infty} \int_{\mathbb{R}^N} u_0^k(x) \psi(x) \,dx = \int_{\mathbb{R}^N} \psi(x)\,du_0(x)\ \ \hbox{for} \ \hbox{any} \ \psi\in BC(\mathbb{R}^N). \end{equation} Given $k\ge 1$, we denote the unique solution of \eqref{eq1} with initial condition $u_0^{k}$ by $u^{k}$. Owing to \eqref{aprox.cond1}, it follows from Proposition~\ref{pr.wp2a} that $(u^k)_k$ is bounded in $L^\infty(\tau,\infty;W^{1,\infty}(\mathbb{R}^N))$ for each $\tau>0$. Combining this property with Lemma~\ref{le.wp5} implies the time equicontinuity of the sequence $(u^k)_k$ in $(\tau,\infty)\times\mathbb{R}^N$ for all $\tau>0$. We then deduce from the Arzel\`a-Ascoli theorem that $(u^{k})_k$ is relatively compact in $C([\tau,T]\times K)$ for all compact subsets $K$ of $\mathbb{R}^N$ and $0<\tau< T$. There are thus a subsequence $(u^{k})$ (not relabeled) and a continuous function $u\in C(Q_\infty)$ such that \begin{equation} u^{k}\longrightarrow u \;\;\text {in }\;\; C([\tau,T]\times K) \;\;\text{ as }\;\; k\to \infty \label{gaston} \end{equation} for all compact subsets $K$ of $\mathbb{R}^N$ and $0<\tau<T$. Owing to the stability of viscosity solutions to \eqref{eq1} \cite[Theorem~6.1]{OS}, this convergence guarantees that $u$ is a viscosity solution to \eqref{eq1} in $Q_\infty$. In addition, since $u^k$ satisfies \eqref{wp101} and \eqref{wp102} with the constant $C_s(M_0)$, so does $u$. Consequently, $u(t)$ belongs to $L^1(\mathbb{R}^N)$ and $W^{1,\infty}(\mathbb{R}^N)$ for all $t>0$.
In order to complete the proof of the existence part, it remains to identify the initial condition taken by $u$. Consider $t\in (0,1)$, $\psi\in C_0^\infty(\mathbb{R}^N)$, and $k\ge 1$. Owing to \eqref{aprox.cond1}, we are in a position to apply Proposition~\ref{pr.wp2b}~(a) and conclude that \begin{equation}\label{interm2} \begin{split}
\left|\int_{\mathbb{R}^N} u^{k}(t,x) \psi(x) \,dx \right.&\left. - \int_{\mathbb{R}^N} u_0^{k}(x) \psi(x)\,dx \right|\\
&\leq C(M_0,1)\ \left( t^{1/p}\ \|\nabla\psi\|_{p/(2-p)} +
t^{(N+1)(q_*-q)\eta}\ \|\psi\|_{\infty} \right). \end{split} \end{equation} Owing to \eqref{aprox.cond2} and \eqref{gaston}, we may let $k\to\infty$ in \eqref{interm2} to get \begin{equation*} \begin{split}
\left|\int_{\mathbb{R}^N} u(t,x) \psi(x) \,dx \right.&\left. -\int_{\mathbb{R}^N} \psi(x) \,du_0(x)\right|\\
&\leq C\ \left( t^{1/p} \|\nabla\psi\|_{p/(2-p)} +
t^{(N+1)(q_*-q)\eta} \|\psi\|_{\infty} \right), \end{split} \end{equation*} from which we readily deduce that \begin{equation}\label{qqq} \lim\limits_{t\to 0} \int_{\mathbb{R}^N} u(t,x) \psi(x) \,dx = \int_{\mathbb{R}^N} \psi(x) \,du_0(x) \end{equation} for any $\psi\in C_0^\infty(\mathbb{R}^N)$. In fact, by a classical density argument, \eqref{qqq} is valid for any continuous function
$\psi\in C_0(\mathbb{R}^N)$ which vanishes as $|x|\to\infty$. Let us now show that \eqref{qqq} is satisfied for any function $\psi\in BC(\mathbb{R}^N)$. To this end, let $\zeta\in C^\infty(\mathbb{R}^N)$ be such that $0\le \zeta\le 1$ and \begin{equation*}
\zeta(x) = 0 \;\;\text{ if }\;\; |x|\le \frac{1}{2} \;\;\;\text{ and }\;\;\; \zeta(x) = 1 \;\;\text{ if }\;\; |x|\ge 1\,, \end{equation*} and $\psi\in BC(\mathbb{R}^N)$. Then, for $R>0$, $\left( 1-\zeta_R^{q/(q-p+1)} \right)\psi$ belongs to $C_0(\mathbb{R}^N)$ and \begin{align}
& \left| \int_{\mathbb{R}^N} u(t,x) \psi(x) \,dx-\int_{\mathbb{R}^N} \psi(x) \,du_0(x) \right| \nonumber \\
\le & \left| \int_{\mathbb{R}^N} u(t,x) \left( 1-\zeta_R(x)^{q/(q-p+1)} \right) \psi(x) \,dx - \int_{\mathbb{R}^N} \left( 1-\zeta_R(x)^{q/(q-p+1)} \right) \psi(x) \,du_0(x) \right| \nonumber \\ & + \int_{\mathbb{R}^N} u(t,x) \zeta_R(x)^{q/(q-p+1)} \psi(x)\, dx + \int_{\mathbb{R}^N} \zeta_R(x)^{q/(q-p+1)} \psi(x)\, du_0(x) \nonumber \\
\le & \left| \int_{\mathbb{R}^N} u(t,x) \left( 1-\zeta_R(x)^{q/(q-p+1)} \right) \psi(x) \,dx - \int_{\mathbb{R}^N} \left( 1-\zeta_R(x)^{q/(q-p+1)} \right) \psi(x) \,du_0(x) \right| \nonumber \\
& + \|\psi\|_\infty \left( \int_{\mathbb{R}^N} u(t,x) \zeta_R(x)^{q/(q-p+1)} \, dx + \int_{\mathbb{R}^N} \zeta_R(x)^{q/(q-p+1)} \, du_0(x) \right). \label{prunelle} \end{align} We now recall that it follows from \eqref{fantasio} that $$ \int_{\mathbb{R}^N} u^k(t,x) \zeta_R(x)^{q/(q-p+1)} \, dx \le \int_{\mathbb{R}^N} u_0^k(x) \zeta_R(x)^{q/(q-p+1)} \, dx + C(\zeta) t R^{(\beta N - \alpha -1)/\beta} $$ for $t\in (0,1)$ and $k\ge 1$. We then infer from \eqref{aprox.cond2}, \eqref{gaston}, and Fatou's lemma that \begin{equation} \int_{\mathbb{R}^N} u(t,x) \zeta_R(x)^{q/(q-p+1)} \, dx \le \int_{\mathbb{R}^N} \zeta_R(x)^{q/(q-p+1)} \, du_0(x) + C(\zeta) t R^{(\beta N - \alpha -1)/\beta} \label{lebrac} \end{equation} for $t\in (0,1)$. We then infer from \eqref{qqq}, \eqref{prunelle}, and \eqref{lebrac} that \begin{equation}
\limsup_{t\to 0} \left| \int_{\mathbb{R}^N} u(t,x) \psi(x) \,dx - \int_{\mathbb{R}^N} \psi(x) \,du_0(x) \right| \le 2\|\psi\|_\infty \int_{\mathbb{R}^N} \zeta_R(x)^{q/(q-p+1)} \, du_0(x). \label{contrex} \end{equation} Since $u_0$ is a bounded measure, we then let $R\to\infty$ in \eqref{contrex} and use the properties of $\zeta$ to conclude that the left-hand side of \eqref{contrex} vanishes. This ends the proof of the existence result. \end{proof}
We next turn to the proof of the uniqueness part of Theorem~\ref{th.uniqfund}, for which two preliminary results are needed. The first is an inequality for vectors in $\mathbb{R}^N$.
\begin{lemma}\label{lemma.vecineq} If $q\geq p/2$, then there exists $\vartheta=\vartheta(p,q)\in (0,1]$ such that \begin{equation}\label{vector.ineq}
(a-b)\cdot(|a|^{p-2}a-|b|^{p-2}b)\geq\vartheta\frac{\left||a|^{q-1}a-|b|^{q-1}b\right|^2}{|a|^{2q-p}+|b|^{2q-p}}\geq\vartheta\frac{\left(|a|^q-|b|^q\right)^2}{|a|^{2q-p}+|b|^{2q-p}}, \end{equation} for all $(a,b)\in\mathbb{R}^N\times\mathbb{R}^N$. \end{lemma}
When $q=1$ and $p\in(1,2]$, this lemma is proved in \cite[Lemma~A.2]{BIV10}.
\begin{proof} Consider $(a,b)\in\mathbb{R}^N\times \mathbb{R}^N$, $\vartheta\in (0,1]$, and define \begin{equation*} \begin{split}
\Lambda(a,b)&:=(a-b)\cdot(|a|^{p-2}a-|b|^{p-2}b)\left(|a|^{2q-p}+|b|^{2q-p}\right)-\vartheta\left||a|^{q-1}a-|b|^{q-1}b\right|^2\\
&=\left[|a|^p+|b|^p-(|a|^{p-2}+|b|^{p-2})(a\cdot b)\right]\left(|a|^{2q-p}+|b|^{2q-p}\right)\\&-\vartheta|a|^{2q}-\vartheta|b|^{2q}+2\vartheta|a|^{q-1}|b|^{q-1}(a\cdot b)\\
&=\left(|a|^p+|b|^p\right)\left(|a|^{2q-p}+|b|^{2q-p}\right)-\vartheta\left(|a|^{2q}+|b|^{2q}\right)\\
&-\left[|a|^{2q-2}+|b|^{2q-2}+|a|^{p-2}|b|^{2q-p}+|a|^{2q-p}|b|^{p-2}-2\vartheta|a|^{q-1}|b|^{q-1}\right](a\cdot b). \end{split} \end{equation*} Since $\vartheta\in(0,1]$, we have \begin{equation*} \begin{split}
|a|^{2q-2}&+|b|^{2q-2}+|a|^{p-2}|b|^{2q-p}+|a|^{2q-p}|b|^{p-2}-2\vartheta|a|^{q-1}|b|^{q-1}\\
&\geq|a|^{2q-2}+|b|^{2q-2}-2|a|^{q-1}|b|^{q-1}=\left(|a|^{q-1}-|b|^{q-1}\right)^2\geq 0. \end{split} \end{equation*}
As $a\cdot b\leq|a||b|$, it follows from the previous inequalities that \begin{equation*} \begin{split}
\Lambda(a,b)&\geq\left(|a|^p+|b|^p\right)\left(|a|^{2q-p}+|b|^{2q-p}\right)-\vartheta\left(|a|^{2q}+|b|^{2q}\right)\\
&-\left[|a|^{2q-2}+|b|^{2q-2}+|a|^{p-2}|b|^{2q-p}+|a|^{2q-p}|b|^{p-2}-2\vartheta|a|^{q-1}|b|^{q-1}\right]|a||b|\\
&\geq\left(|a|^p+|b|^p-|a|^{p-1}|b|-|a| |b|^{p-1}\right)\left(|a|^{2q-p}+|b|^{2q-p}\right)-\vartheta\left(|a|^q-|b|^q\right)^2\\
&\geq\left(|a|-|b|\right)\left(|a|^{p-1}-|b|^{p-1}\right)\left(|a|^{2q-p}+|b|^{2q-p}\right)-\vartheta\left(|a|^q-|b|^q\right)^2. \end{split} \end{equation*} Since $q\geq p/2$, it follows from \cite[Lemma~1]{GP76} that there is $C_1\geq 1$ depending only on $p$ and $q$ such that \begin{equation*}
\frac{\left(|a|^q-|b|^q\right)^2}{\left(|a|^{p-1}-|b|^{p-1}\right)
\left(|a|-|b|\right)} \leq C_1 \max\left\{|a|,|b|\right\}^{2q-p}
\leq C_1\left(|a|^{2q-p}+|b|^{2q-p}\right). \end{equation*} Consequently, choosing $\vartheta=1/C_1$, we end up with $\Lambda(a,b)\geq 0$, which implies the first inequality in \eqref{vector.ineq}. The second inequality then follows from the reverse triangle inequality, which gives $\left| |a|^{q-1}a-|b|^{q-1}b \right| \geq \left| |a|^q-|b|^q \right|$. \end{proof}
We next estimate the small time behavior of solutions to \eqref{eq1}.
\begin{lemma}\label{le.stb} Consider $u_0\in{\cal M}^{+}_{b}(\mathbb{R}^N)$ and let $u$ be a non-negative solution to \eqref{eq1} with initial condition $u_0$. If there exists a unique non-negative solution $v\in C([0,\infty);{\cal M}^{+}_{b}(\mathbb{R}^N)) \cap C(Q_\infty)$ to the diffusion equation \eqref{wp103}-\eqref{wp104} in $Q_\infty$ with initial condition $u_0$, then, for $t>0$ and $r\in [1,\infty]$, \begin{equation}
\|u(t)\|_1 \le M_0 := \int_{\mathbb{R}^N} du_0(x)\,, \label{stb0} \end{equation} and \begin{equation}
\| u(t) - v(t) \|_r \le C(M_0) \left( 1 + t^{(N+1)(q_*-q)q\eta/r(p-q)} \right)\ t^{[(N+1)(q_*-q)-N(r-1)]\eta/r}\,. \label{stb1} \end{equation} \end{lemma}
\begin{proof} For $\tau>0$, let $v^\tau$ be the solution to the diffusion equation \eqref{wp103} in $(\tau,\infty)\times\mathbb{R}^N$ with initial condition $v^\tau(\tau) =u(\tau)$.
We first prove \eqref{stb0}. By the comparison principle, $u\le v^\tau$ in $(\tau,\infty)\times\mathbb{R}^N$ while the $L^1$-accretivity of the $p$-Laplacian guarantees that $\|v^\tau(t)\|_1\le \|v^\tau(\tau)\|_1$ for $t>\tau$. Consequently, for $t>\tau$, $$
\|u(t)\|_1 \le \|v^\tau(t)\|_1 \le \|v^\tau(\tau)\|_1=\int_{\mathbb{R}^N} u(\tau,x)\, dx \mathop{\longrightarrow}_{\tau\to 0} M_0\,, $$ and thus \eqref{stb0}.
Next, since $u(\tau)\in L^1(\mathbb{R}^N)$ and $p>p_c$, it follows from the $L^1$-accretivity of the $p$-Laplacian that, for $t>\tau$, \begin{equation*}
\|u(t)-v^\tau(t)\|_1 \leq \int_{\tau}^{t} \int_{\mathbb{R}^N} |\nabla u(s,x)|^q \,dx\,ds. \end{equation*} Thanks to \eqref{stb0}, we may use \eqref{wp101} and \eqref{wp100} to obtain \begin{align*}
\|u(t)-v^\tau(t)\|_1 \leq & C\ \int_\tau^t \int_{\mathbb{R}^N} \left[ \left\| u\left( \frac{s+\tau}{2} \right) \right\|_\infty^{1/\alpha p} + (s-\tau)^{-1/p} \right]^q\ \left( u(s,x) \right)^{2q/p}\,dx\, ds \\
\le & C(M_0)\ \int_\tau^t \left[ (s-\tau)^{-qN\eta/\alpha p} + (s-\tau)^{-q/p} \right]\ \|u(s)\|_\infty^{(2q-p)/p}\ \|u(s)\|_1\, ds \\ \le & C(M_0)\ \int_\tau^t \left[ (s-\tau)^{-qN\eta/\alpha p} + (s-\tau)^{-q/p} \right]\ s^{-(2q-p)N\eta/p}\, ds \\ \le & C(M_0)\ \int_\tau^t \left[ (s-\tau)^{-N\eta/\alpha} + (s-\tau)^{-(q(N+1)-N)\eta} \right]\, ds \\ \le & C(M_0) \left[ t^{(N+1)(q_*-q)p\eta/(p-q)} + t^{(N+1)(q_*-q)\eta} \right]\,, \end{align*} hence \begin{equation*}
\|u(t)-v^\tau(t)\|_1 \leq C(M_0) \left[ 1 + t^{(N+1)(q_*-q)q\eta/(p-q)} \right] t^{(N+1)(q_*-q)\eta} \,, \qquad t>\tau\,. \end{equation*} Now, since $v^\tau(t)$ converges towards $v(t)$ in $L^1(\mathbb{R}^N)$ for all $t>0$, we conclude that \begin{equation}
\|u(t)-v(t)\|_1 \leq C(M_0) \left[ 1 + t^{(N+1)(q_*-q)q\eta/(p-q)} \right] t^{(N+1)(q_*-q)\eta}\,, \qquad t>0 \,. \label{intermL1} \end{equation} Also, by \eqref{wp101}, \eqref{wp300}, and \eqref{stb0}, \begin{equation}\label{intermLinf}
\|u(t) - v(t)\|_{\infty} \leq \|u(t)\|_{\infty} + \|v(t)\|_{\infty} \leq C(M_0)\ t^{-N\eta}\,, \quad t>0. \end{equation} We then infer from \eqref{intermL1}, \eqref{intermLinf}, and H\"older's inequality that \eqref{stb1} is true. \end{proof}
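We point out that the last step of the previous proof relies on the interpolation inequality
$$ \|u(t)-v(t)\|_r \le \|u(t)-v(t)\|_1^{1/r}\ \|u(t)-v(t)\|_\infty^{(r-1)/r}\,, \qquad r\in[1,\infty)\,, $$
which, combined with \eqref{intermL1} and \eqref{intermLinf}, produces exactly the time exponent $[(N+1)(q_*-q)-N(r-1)]\eta/r$ appearing in \eqref{stb1}.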
\begin{proof}[Proof of Theorem~\ref{th.uniqfund}. Uniqueness] Let $u_1$ and $u_2$ be two non-negative solutions to \eqref{eq1} with initial condition $u_0\in{\cal M}^{+}_{b}(\mathbb{R}^N)$ and define $$ M_0 := \int_{\mathbb{R}^N} du_0(x)\,, $$ and $w:=u_1-u_2$. Then, $w$ solves \begin{equation}
\partial_{t}w-(\Delta_{p}u_1-\Delta_{p}u_2)+|\nabla u_1|^q-|\nabla u_2|^q=0 \quad \hbox{in} \ Q_\infty. \end{equation} Consider $r>0$ to be specified later and $T>0$. For $t\in (0,T)$, we calculate \begin{equation*} \begin{split}
\frac{1}{r+1}\ \frac{d}{dt} \|w\|_{r+1}^{r+1}
&=-\int_{\mathbb{R}^N}r|w|^{r-1}\nabla w\cdot\left(|\nabla u_1|^{p-2}\nabla u_1-|\nabla u_2|^{p-2}\nabla u_2\right)\,dx\\
&-\int_{\mathbb{R}^N}|w|^{r-1}w\left(|\nabla u_1|^q-|\nabla u_2|^q\right)\,dx. \end{split} \end{equation*} Lemma~\ref{lemma.vecineq} then gives, with the help of Young's inequality, \begin{align*}
& \frac{1}{r+1}\ \frac{d}{dt} \|w\|_{r+1}^{r+1} \\
\leq & -r\vartheta \int_{\mathbb{R}^N}|w|^{r-1} \frac{(|\nabla u_1|^q-|\nabla u_2|^q)^2}{1+|\nabla u_1|^{2q-p}+|\nabla u_2|^{2q-p}}\,dx\\
& + \int_{\mathbb{R}^N}|w|^{(r+1)/2}\frac{|w|^{(r-1)/2}(|\nabla u_1|^q-|\nabla u_2|^q)}{\sqrt{1+|\nabla u_1|^{2q-p}+|\nabla u_2|^{2q-p}}} \sqrt{1+|\nabla u_1|^{2q-p}+|\nabla u_2|^{2q-p}}\,dx\\
\leq & C\ \int_{\mathbb{R}^N}|w|^{r+1} \left(1+ |\nabla u_1|^{2q-p}+|\nabla u_2|^{2q-p}\right)\,dx\\
\leq & C\left(1+\|\nabla u_1\|_{\infty}^{2q-p}+\|\nabla u_2\|_{\infty}^{2q-p}\right)\|w\|_{r+1}^{r+1}. \end{align*} Owing to \eqref{stb0}, we are in a position to use the gradient estimate \eqref{wp102} and we further obtain \begin{equation*}
\frac{1}{r+1}\ \frac{d}{dt} \|w(t)\|_{r+1}^{r+1}\leq C(M_0,T) \left( 1 + t^{-(N+1)\eta(2q-p)} \right) \|w(t)\|_{r+1}^{r+1}. \end{equation*} Observing that $$ 1-(N+1)\eta(2q-p)=2(N+1)(q_*-q)\eta>0, $$ we may integrate the above differential inequality over $(s,t)$, $0<s<t<T$, to obtain \begin{equation}\label{interm5}
\|w(t)\|_{r+1}^{r+1} \leq \|w(s)\|_{r+1}^{r+1}\ \exp\left\{(r+1) C(M_0,T) \left( t^{2(N+1)(q_*-q)\eta} + t \right)\right\}. \end{equation} We now choose $r\in(0,(N+1)(q_*-q)/N)$ and realize that \eqref{stb1} guarantees that (keeping the notation of Lemma~\ref{le.stb}) \begin{align*}
\|w(s)\|_{r+1}^{r+1} \le & \|u_1(s)-v(s)\|_{r+1}^{r+1} + \|v(s)-u_2(s)\|_{r+1}^{r+1} \\ \leq & C(M_0,T)\ s^{((N+1)(q_*-q)-Nr)\eta} \mathop{\longrightarrow}_{s\to 0} 0. \end{align*}
Consequently, letting $s\to 0$ in \eqref{interm5} leads us to $\|w(t)\|_{r+1}^{r+1} \leq 0$ for all $t\in (0,T)$, hence $u_1\equiv u_2$ in $(0,T)$. As $T$ was arbitrary, the proof is complete. \end{proof}
Since initial data of the form $M\delta_0$, where $\delta_0$ denotes the Dirac mass at $x=0$, play an essential role in the sequel, we rephrase Theorem~\ref{th.uniqfund} in this particular setting.
\begin{corollary}\label{cor.fund} For any $M>0$, there exists a unique solution $u_{M}$ to \eqref{eq1} with initial condition $M\delta_0$. In the sequel, $u_M$ will be referred to as \emph{the fundamental solution to \eqref{eq1} of mass $M$}. Moreover, it satisfies the estimates \eqref{wp101} and \eqref{wp102} with $C_s(M)$ instead of $C_s(M_0)$. \end{corollary}
\begin{proof} The existence and uniqueness of a solution for the $p$-Laplacian equation \eqref{wp103} with initial condition $u_0=M\delta_0$ are proved in \cite[Theorem 4.1]{CQW07}. Thus, applying Theorem~\ref{th.uniqfund} with $u_0=M\delta_0$, we get the claimed result. \end{proof}
\section{Very singular solutions}\label{sec4}
As announced in the Introduction, we study in this section the very singular solutions to \eqref{eq1} in detail. More precisely, we show that there is in fact a unique very singular solution to \eqref{eq1}. This is done by constructing a minimal and a maximal very singular solution and identifying them afterwards. We begin with the precise definition.
\begin{definition}\label{def.VSS} A very singular solution to \eqref{eq1} is a viscosity solution $u$ to \eqref{eq1} in $Q_\infty$ in the sense of Definition~\ref{def.visc} satisfying \begin{equation} u(t)\in L^1(\mathbb{R}^N)\cap W^{1,\infty}(\mathbb{R}^N) \label{regvss} \end{equation} for all $t>0$ as well as \begin{equation}\label{VSS1}
\lim\limits_{s\to 0}\int_{\{|x|\leq r\}}u(s,x)\,dx = \infty, \quad r\in(0,\infty), \end{equation} and \begin{equation}\label{VSS2}
\lim\limits_{s\to 0}\int_{\{|x|\geq r\}}u(s,x)\,dx=0, \quad r\in(0,\infty). \end{equation} A very singular subsolution (resp. supersolution) to \eqref{eq1} is a viscosity subsolution (resp. supersolution) to \eqref{eq1} in $Q_\infty$ in the sense of Definition~\ref{def.visc}, which satisfies \eqref{regvss}, \eqref{VSS1} and \eqref{VSS2}. \end{definition}
We already know that the class of very singular solutions to \eqref{eq1} for $p\in(p_c,2)$ and $q\in(p/2,q_*)$ is non-empty, as a consequence of the following result from \cite{IL2}.
\begin{theorem}\label{th.VSSunique} There exists a unique radially symmetric, self-similar very singular solution $U$ to \eqref{eq1}, having the form \begin{equation}\label{selfVSS}
U(t,x)=t^{-\alpha}f_U(|x|t^{-\beta}), \quad (t,x)\in Q_\infty. \end{equation} The profile $f_U$ is a solution to the differential equation \begin{equation}\label{ODE1}
(|f_U'|^{p-2}f_U')'(r)+\frac{N-1}{r}(|f_U'|^{p-2}f_U')(r)+\alpha f_U(r)+\beta rf_U'(r)-|f_U'(r)|^q=0\,, \quad r> 0, \end{equation} satisfying $f_U'(0)=0$ and there is an explicit positive constant $\omega^*$ such that $$ \lim\limits_{r\to\infty} r^{p/(2-p)}\ f_U(r) = \omega^*. $$ \end{theorem}
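We recall briefly how the exponents $\alpha$ and $\beta$ are dictated by the equation: inserting the ansatz \eqref{selfVSS} into \eqref{eq1} and matching the powers of $t$ generated by the three terms $\partial_t U$, $\Delta_p U$, and $|\nabla U|^q$ leads to
$$
\alpha+1 = \alpha(p-1)+\beta p = (\alpha+\beta)q\,,
$$
whence
$$
\alpha=\frac{p-q}{2q-p} \;\;\text{ and }\;\; \beta=\frac{q-p+1}{2q-p}\,.
$$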
This result is very useful in the sequel to identify very singular solutions, once we are able to show that they are radially symmetric and in self-similar form.
\subsection{Some properties of very singular subsolutions and solutions}
From Definition~\ref{def.VSS}, one expects the initial trace of a very singular solution to \eqref{eq1} to vanish outside the origin. This is made rigorous in the next result.
\begin{proposition}\label{prop.zero} Let $u$ be a very singular subsolution to \eqref{eq1} and $K$ be a compact subset of $\mathbb{R}^N\setminus\{0\}$. Then $$ \lim\limits_{t\to 0} \sup_{x\in K}\{u(t,x)\} = 0. $$ \end{proposition}
\begin{proof} Fix $\tau>0$ and let $v_{\tau}$ be the solution to the diffusion equation \eqref{wp103} in $(\tau,\infty)\times\mathbb{R}^N$ with initial condition $v_\tau(\tau)=u(\tau)$. According to \cite[Theorem III.6.2]{DBH90}, $v_\tau$ satisfies the following pointwise estimate: there exists a constant $C>0$ depending only on $N$ and $p$ such that, for any $x_0\in\mathbb{R}^N$, $R>0$, and $t>\tau$, \begin{equation}\label{DiBH} \sup_{x\in B_R(x_0)}\{v_{\tau}(t,x)\} \leq C\ (t-\tau)^{-N\eta} \left(\int_{B_{2R}(x_0)} v_{\tau}(\tau,x)\,dx\right)^{p\eta} + C \left(\frac{t-\tau}{R^p}\right)^{1/(2-p)}. \end{equation} Since $u$ is a subsolution to the diffusion equation \eqref{wp103} in $(\tau,\infty)\times\mathbb{R}^N$ with $u(\tau)=v_\tau(\tau)$, the comparison principle gives $u\le v_\tau$ in $(\tau,\infty)\times\mathbb{R}^N$. Plugging this information into \eqref{DiBH}, we are led to \begin{equation}\label{DiBH2} \sup_{x\in B_R(x_0)}\{u(t,x)\} \leq C \ (t-\tau)^{-N\eta} \left(\int_{B_{2R}(x_0)} u(\tau,x) \,dx \right)^{p\eta}+ C \left(\frac{t-\tau}{R^p}\right)^{1/(2-p)} \end{equation}
for any $t>\tau>0$. Now, assume further that $x_0\ne 0$ and $|x_0|>2R$. Then $0\not\in B_{2R}(x_0)$ and we may let $\tau\to 0$ in \eqref{DiBH2} and use \eqref{VSS2} to obtain \begin{equation}\label{DiBH3} \sup_{x\in B_R(x_0)}\{u(t,x)\}\leq C \left(\frac{t}{R^p}\right)^{1/(2-p)}\,, \qquad t>0\,. \end{equation}
Therefore, if $x_0\ne 0$ and $|x_0|>2R$, $$ \lim\limits_{t\to 0}\ \sup_{x\in B_R(x_0)}\{u(t,x)\} = 0, $$ and this property entails Proposition~\ref{prop.zero} by a covering argument. \end{proof}
In particular, Proposition~\ref{prop.zero} implies that $u(0,x)=0$ for any very singular subsolution $u$ and any $x\neq 0$. This is useful for proving comparison results.
\begin{proposition}\label{prop.compFG} Let $u$ be a very singular subsolution to \eqref{eq1}. Then \begin{equation}\label{comp.VSS}
0\leq u(t,x)\leq\Gamma_{p,q}(|x|), \quad (t,x)\in Q_\infty. \end{equation} \end{proposition}
\begin{proof}
We adapt the proof of \cite[Lemma~3.4]{BKL04}. At a formal level, the result follows from Lemma~\ref{le.wp3} since we can view a very singular solution as having an initial condition satisfying $R(u_0)=0$. More precisely, let $r>0$ and define $D_{r}:=\{x\in\mathbb{R}^N: \ |x|>r\}$. By Lemma~\ref{le.wp6}, $S:(t,x)\longmapsto \Gamma_{p,q}(|x|-r)$ is a supersolution to \eqref{eq1} in $(0,\infty)\times D_r$ with $u(t,x)<\infty=S(t,x)$ if $(t,x)\in (0,\infty)\times \partial D_r$ and $u(0,x)=0\le S(0,x)$ for $x\in D_r$ by Proposition~\ref{prop.zero}. Since $u$ is a subsolution to \eqref{eq1} in $Q_\infty$ and thus also in $(0,\infty)\times D_r$, the comparison principle gives
$u(t,x)\leq\Gamma_{p,q}(|x|-r)$ for any $(t,x)\in(0,\infty)\times D_r$. Fix now $x_0\in\mathbb{R}^N$, $x_0\ne 0$. Then $x_0\in D_r$ for any $r\in(0,|x_0|)$, hence $u(t,x_0)\leq\Gamma_{p,q}(|x_0|-r)$, for any $t>0$ and $r\in(0,|x_0|)$. The conclusion follows by letting $r\to 0$ in the previous inequality. \end{proof}
We next prove that very singular subsolutions also enjoy the temporal decay estimates \eqref{wp3}.
\begin{proposition}\label{prop.decayVSS} If $u$ is a very singular subsolution to \eqref{eq1} in $Q_\infty$, the following estimates hold: \begin{equation}\label{decayVSS}
t^{\alpha-N\beta}\|u(t)\|_{1} + t^{\alpha}\|u(t)\|_{\infty} \leq K_\gamma, \quad t>0, \end{equation} where $\gamma$ and $K_\gamma$ are defined in \eqref{exp.FG} and Proposition~\ref{pr.wp2}, respectively. In addition, if $u$ is a very singular solution to \eqref{eq1} in $Q_\infty$, \begin{equation}
t^{\alpha+\beta} \|\nabla u(t)\|_{\infty} \leq K_\gamma, \quad t>0. \label{gradestVSS} \end{equation} \end{proposition}
\begin{proof} At a formal level, since $u$ is a very singular subsolution, its initial condition is somehow concentrated at $x=0$. It thus ``vanishes'' outside the origin and the conditions on the initial data in Proposition~\ref{pr.wp2} are fulfilled. As more regularity on the initial condition is needed to apply this result, we provide a rigorous proof now. Consider $\tau>0$. According to \eqref{regvss} and \eqref{comp.VSS}, $u(\tau)$ satisfies \eqref{wp1} and \eqref{wp2} with $\kappa=\gamma$ and we infer from Proposition~\ref{pr.wp2} that the solution $u^\tau$ to \eqref{eq1} in $(\tau,\infty)\times\mathbb{R}^N$ with initial condition $u^\tau(\tau)=u(\tau)$ satisfies \begin{align*}
(t-\tau)^{\alpha-N\beta} \|u^\tau(t)\|_{1} + (t-\tau)^{\alpha} \|u^\tau(t)\|_{\infty} \leq & K_\gamma, \\
(t-\tau)^{\alpha+\beta} \|\nabla u^\tau(t)\|_{\infty} \leq & K_\gamma, \end{align*} for $t>\tau$. Now, if $u$ is a very singular subsolution to \eqref{eq1}, the comparison principle gives $u\le u^\tau$ in $(\tau,\infty)\times\mathbb{R}^N$ and \eqref{decayVSS} follows at once from the previous estimate after letting $\tau\to 0$. Next, if $u$ is a very singular solution to \eqref{eq1}, we obviously have $u^\tau=u$ and thus \eqref{gradestVSS}. \end{proof}
The last preliminary result provides local estimates on small balls for very singular subsolutions. It is similar to \cite[Lemma 3.6]{BKL04} for $p=2$, and its proof adapts an argument from \cite[pp.~186--187]{CLS}.
\begin{proposition}\label{prop.local} For $y\in\mathbb{R}^N$ and $\varrho>0$, let $\sigma_{y,\varrho}$ be the solution to \begin{equation} -\Delta_{p}\sigma_{y,\varrho}=1 \quad \hbox{in}\ \ B_\varrho(y),\qquad \sigma_{y,\varrho}=0 \quad \hbox{on} \ \ \partial B_\varrho(y).\label{sigma} \end{equation}
For every $\lambda\in (0,\infty)$, there exists $A_{\lambda,\varrho}>0$ depending only on $N$, $p$, $\varrho$, and $\lambda$ such that, if $u$ is a very singular subsolution to \eqref{eq1}, $y\in\mathbb{R}^N\setminus\{0\}$, and $0<\varrho<|y|$, we have \begin{equation}\label{locest.VSS} u(t,x) \leq \lambda\ e^{A_{\lambda,\varrho}t}\ \exp\left(\frac{1}{\sigma_{y,\varrho}(x)}\right), \quad (t,x)\in(0,\infty)\times B_\varrho(y). \end{equation} \end{proposition}
\begin{proof}
We fix $y\in\mathbb{R}^N$, $\varrho\in (0,|y|)$, $\lambda>0$, and define $$ w(t,x):=\lambda\ e^{At}\ \exp\left(\frac{1}{\sigma(x)}\right), \qquad (t,x)\in (0,\infty)\times B_\varrho(y), $$ where $\sigma=\sigma_{y,\varrho}$, the dependence on $y$ and $\varrho$ being omitted for simplicity. We wish to choose $A>0$ such that \begin{equation}\label{interm6}
\partial_t w - \Delta_{p}w + |\nabla w|^q\geq 0 \quad \hbox{in} \ (0,\infty)\times B_\varrho(y). \end{equation} To this end, we calculate: $$ \partial_t w(t,x)=A\ w(t,x), \quad \nabla w(t,x)=-\frac{w(t,x)}{\sigma(x)^2}\ \nabla\sigma(x), $$ hence $$
|\nabla w(t,x)|^q = \frac{|\nabla\sigma(x)|^q}{\sigma(x)^{2q}}\ w(t,x)^q $$ and $$
|\nabla w(t,x)|^{p-2}\nabla w(t,x) = - \frac{|\nabla\sigma(x)|^{p-2}}{\sigma(x)^{2(p-1)}}\ w(t,x)^{p-1}\ \nabla\sigma(x). $$ It follows from \eqref{sigma} that \begin{equation*} \begin{split}
\Delta_{p} w(t,x) = & 2(p-1)\ \frac{|\nabla\sigma(x)|^p}{\sigma(x)^{2p-1}}\ w(t,x)^{p-1} + (p-1)\ \frac{|\nabla\sigma(x)|^p}{\sigma(x)^{2p}}\ w(t,x)^{p-1} \\ & - \frac{w(t,x)^{p-1}}{\sigma(x)^{2(p-1)}}\ \Delta_p\sigma(x)\\
= & \frac{w(t,x)^{p-1}}{\sigma(x)^{2p}}\ \left[ \sigma(x)^{2} + (p-1)\ (1+2\sigma(x))\ |\nabla\sigma(x)|^{p} \right]. \end{split} \end{equation*} Gathering all the previous calculations, we obtain \begin{align*}
\partial_t w - \Delta_{p} w + |\nabla w|^q = & w^{p-1} \left\{ A\ w^{2-p} - \frac{\sigma^2 + (p-1)(1+2\sigma) |\nabla\sigma|^p}{\sigma^{2p}} + \frac{|\nabla\sigma|^{q}}{\sigma^{2q}}\ w^{q-p+1} \right\} \\ \ge & w^{p-1} \left\{ \lambda^{2-p} A\ \exp{\left\{ \frac{2-p}{\sigma}
\right\}} - \frac{\left\| \sigma^2 + (1+2\sigma) |\nabla\sigma|^p
\right\|_{L^{\infty}(B_{\varrho}(y))}}{\sigma^{2p}} \right\}. \end{align*} Setting $\mu_p := \inf_{r>0}{\left\{ e^r\ r^{-2p} \right\}}>0$, we end up with $$
\partial_t w - \Delta_{p} w + |\nabla w|^q \ge \frac{w^{p-1}}{\sigma^{2p}} \left\{ \lambda^{2-p} (2-p)^{2p}
\mu_p A - \left\| \sigma^2 + (1+2\sigma) |\nabla\sigma|^p
\right\|_{L^{\infty}(B_{\varrho}(y))} \right\}. $$ Since $\sigma(x) = \varrho^{p/(p-1)}\ \sigma_{0,1}((x-y)/\varrho)$ for $x\in B_\varrho(y)$, we conclude that \eqref{interm6} holds true for a sufficiently large constant $A_{\lambda,\varrho}>0$ which depends only on $N$, $p$, $\lambda$, and $\varrho$.
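Let us point out two elementary computations used in the previous step: the infimum defining $\mu_p$ is attained at $r=2p$, so that $\mu_p=e^{2p}\,(2p)^{-2p}$, while the $(p-1)$-homogeneity of the $p$-Laplacian gives, for $\lambda>0$ and $x\in B_\varrho(y)$,
$$
\Delta_p\left( \lambda\,\sigma_{0,1}\left(\frac{\cdot-y}{\varrho}\right) \right)(x) = \frac{\lambda^{p-1}}{\varrho^p}\ \left(\Delta_p\sigma_{0,1}\right)\left(\frac{x-y}{\varrho}\right)\,,
$$
from which the scaling relation between $\sigma$ and $\sigma_{0,1}$ follows from \eqref{sigma}.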
With this choice, $w$ is a supersolution to \eqref{eq1} in $(0,\infty)\times B_\varrho(y)$ which satisfies additionally $w(0,x)\ge 0 = u(0,x)$ for $x\in B_\varrho(y)$ by Proposition~\ref{prop.zero} and $w(t,x)=\infty>u(t,x)$ for $(t,x)\in (0,\infty)\times\partial B_\varrho(y)$ by \eqref{sigma}. The estimate \eqref{locest.VSS} then follows by the comparison principle. \end{proof}
\subsection{The minimal very singular solution}\label{sec4min}
In this section we will construct a special very singular solution and prove that it is minimal among all the very singular solutions and has a self-similar form. As a consequence, it will coincide with the unique radially symmetric self-similar very singular solution obtained in \cite{IL2}, see Theorem~\ref{th.VSSunique}. Recalling the notation $u_M$ for the fundamental solution to \eqref{eq1} with mass $M>0$, we begin with the following preliminary result.
\begin{lemma}\label{lemma.fs1} Let $u$ be a very singular supersolution to \eqref{eq1} and assume further that $$
u\in C(Q_{\infty}) \ {\rm and} \ u(t,x)\leq \Gamma_{p,q}(|x|), \qquad (t,x)\in Q_\infty\,. $$ Then, for any $M>0$, we have $u_M\leq u$ in $Q_\infty$. \end{lemma}
\begin{proof} Fix $M>0$. We borrow ideas from the proofs of \cite[Lemma 3.7]{BKL04} and Theorem~\ref{th.uniqfund} above. As $u$ is a very singular supersolution to \eqref{eq1}, we have
$\|u(t)\|_{1}\longrightarrow\infty$ as $t\to 0$ and, for each $k\ge 1$, there exists a non-negative function $u_{0,k}\in L^1(\mathbb{R}^N)\cap W^{1,\infty}(\mathbb{R}^N)$ such that \begin{equation}\label{interm7}
\|u_{0,k}\|_1=M, \quad 0\le u_{0,k}(x)\leq u(1/k,x) \le \Gamma_{p,q}(|x|), \ \hbox{for} \ \hbox{any} \ x\in\mathbb{R}^N. \end{equation} Denoting the solution to \eqref{eq1} with initial condition $u_{0,k}$ by $u_k$, we argue as in the proof of the existence part of Theorem~\ref{th.uniqfund} to find a non-negative function $\tilde{u}\in C(Q_\infty)$ and a subsequence of $(u_k)_k$ (not relabeled) with the following properties: \begin{equation} \begin{minipage}{12cm} $\tilde{u}$ is a solution to \eqref{eq1} in $Q_\infty$ and satisfies the estimates \eqref{wp101}-\eqref{wp102} with $C_s(M)$ and \eqref{wp3} with $\kappa=\gamma$. \end{minipage} \label{pim} \end{equation} and \begin{equation} u_k\longrightarrow \tilde{u} \;\;\text{ in }\;\; C([\tau,T]\times K) \label{pam} \end{equation} for all compact subsets $K$ of $\mathbb{R}^N$ and $0<\tau<T$.
It remains to identify the initial condition taken by $\tilde{u}$. On the one hand, since $u$ is a supersolution to \eqref{eq1}, it readily follows from \eqref{interm7} that $$
u_k(t,x) \le u\left( t + \frac{1}{k} , x \right) \le \Gamma_{p,q}(|x|)\,, \qquad (t,x)\in Q_\infty\,, $$ whence, owing to \eqref{pam} and the continuity of $u$ in $Q_\infty$, \begin{equation}
\tilde{u}(t,x) \le u(t,x) \le \Gamma_{p,q}(|x|)\,, \qquad (t,x)\in Q_\infty\,. \label{poum} \end{equation} On the other hand, consider $\psi\in C_0^{\infty}(\mathbb{R}^N)$ and $t\in(0,1)$. Owing to \eqref{interm7}, we may use Proposition~\ref{pr.wp2b}~(a) and deduce that, for all $k\ge 1$, \begin{equation} \begin{split}
\left| \int_{\mathbb{R}^N} \left( u_k(t,x)- u_{0,k}(x) \right) \psi(x)\,dx \right| \le C(M,1)\ & \left[ \|\psi\|_\infty\ t^{(N+1)(q_*-q)\eta} \right. \\ & + \left. \|\nabla\psi\|_{p/(2-p)}\ t^{1/p} \right]. \end{split} \label{b00} \end{equation} It also follows from \eqref{interm7} that, for $r>0$ and $k\ge 1$, \begin{align*}
&\left|\int_{\mathbb{R}^N} u_{0,k}(x) \psi(x) \,dx - M\ \psi(0)\right| = \left|\int_{\mathbb{R}^N} u_{0,k}(x) (\psi(x)-\psi(0)) \,dx \right|\\
&\leq 2\|\psi\|_{\infty}\ \int_{\{|x|\geq r\}} u(1/k,x) \,dx + \left( \int_{\{|x|\leq r\}} u_{0,k}(x) \,dx\right)\ \sup\limits_{|x|\leq r}{\left\{ |\psi(x)-\psi(0)| \right\}} \\
& \leq 2\|\psi\|_{\infty}\int_{\{|x|\geq r\}} u(1/k,x) \,dx + M \sup\limits_{|x|\leq r}{\left\{ |\psi(x)-\psi(0)| \right\}}. \end{align*} Combining \eqref{b00} and the above estimate, we obtain, for $k\ge 1$ and $r>0$, \begin{align*}
& \left| \int_{\mathbb{R}^N} \tilde{u}(t,x)\ \psi(x)\, dx - M\ \psi(0) \right| \\
\le & \left| \int_{\mathbb{R}^N} \left( \tilde{u}(t,x) - u_k(t,x) \right) \psi(x)\, dx \right| + \left|\int_{\mathbb{R}^N} \left( u_k(t,x) - u_{0,k}(x) \right) \psi(x)\,dx \right| \\
& + \left| \int_{\mathbb{R}^N} u_{0,k}(x)\ \psi(x) \,dx - M \psi(0) \right|\\
\le & \int_{\mathbb{R}^N} \left| \tilde{u}(t,x) - u_k(t,x) \right| |\psi(x)|\, dx \\
& + C(M,1)\ \left[ \|\psi\|_\infty\ t^{(N+1)(q_*-q)\eta} + \|\nabla\psi\|_{p/(2-p)}\ t^{1/p} \right] \\
& + 2\|\psi\|_{\infty}\int_{\{|x|\geq r\}} u(1/k,x) \,dx + M \sup\limits_{|x|\leq r}{\left\{ |\psi(x)-\psi(0)| \right\}}. \end{align*} Since $t>0$, $r>0$, and $\psi$ is compactly supported, we first let $k\to\infty$ in the above inequality and use \eqref{VSS2} and \eqref{pam} to conclude that \begin{align*}
\left| \int_{\mathbb{R}^N} \tilde{u}(t,x)\ \psi(x)\, dx - M\ \psi(0) \right| \le & C(M,1)\ \left[ \|\psi\|_\infty\ t^{(N+1)(q_*-q)\eta} + \|\nabla\psi\|_{p/(2-p)}\ t^{1/p} \right] \\
& + M \sup\limits_{|x|\leq r}{\left\{ |\psi(x)-\psi(0)| \right\}}. \end{align*} We then let $t\to 0$ and $r\to 0$ and end up with \begin{equation}\label{interm14} \lim\limits_{t\to 0} \int_{\mathbb{R}^N} \tilde{u}(t,x)\ \psi(x) \,dx = M\ \psi(0) \end{equation} for any $\psi\in C_0^{\infty}(\mathbb{R}^N)$. By a standard density argument, we extend \eqref{interm14} to test functions $\psi\in C_0(\mathbb{R}^N)$. In order to extend \eqref{interm14} to test functions in $BC(\mathbb{R}^N)$, we proceed as in the proof of the existence part of Theorem~\ref{th.uniqfund} with the difference that the control for large $x$ is here provided by $\Gamma_{p,q}$ thanks to the upper bound \eqref{poum} and Lemma~\ref{le.wp6}. The uniqueness statement of Theorem~\ref{th.uniqfund} then implies that $\tilde{u}=u_M$. Recalling \eqref{poum} completes the proof. \end{proof}
The next result shows more properties of the fundamental solutions $u_M$.
\begin{lemma}\label{lemma.fs2} \begin{itemize} \item[(a)] For each $M>0$ and $t>0$, $u_{M}(t)$ is a radially symmetric function, and $u_{M_1}(t)\leq u_{M_2}(t)$ if $0<M_1\leq M_2<\infty$.
\item[(b)] For each $M>0$, the function $u_M$ satisfies \begin{equation}\label{FS.bound1}
0\leq u_{M}(t,x)\leq \Gamma_{p,q}(|x|), \quad (t,x)\in Q_\infty \end{equation} as well as the estimates \eqref{wp101}-\eqref{wp102} with $C_s(M)$ and \eqref{wp3} with $\kappa=\gamma$.
\item[(c)] For each $M>0$ and any $r>0$, there exists a constant $C(r)$ depending only on $r$, $p$, $q$, and $N$ such that \begin{equation}\label{FS.bound2}
\int_{\{|x|\geq r\}} u_{M}(t,x)\,dx\leq C(r)\ t^{1/p}\,, \qquad t\in (0,1). \end{equation} \end{itemize} \end{lemma}
\begin{proof} The proof of part~(a) is identical to the proof of \cite[Lemma~3.3]{BL01} to which we refer. Next, it is easy to see that Proposition~\ref{prop.zero} is also valid for the fundamental solutions $u_M$ and the estimate~\eqref{FS.bound1} can be proved as Proposition~\ref{prop.compFG}. We then infer from \eqref{stb0} and \eqref{FS.bound1} that Propositions~\ref{pr.wp2a} and~\ref{pr.wp2} can be applied to $(t,x)\longmapsto u_M(t+\tau,x)$ for arbitrarily small $\tau$, from which the validity of \eqref{wp101}-\eqref{wp102} with $C_s(M)$ and \eqref{wp3} with $\kappa=\gamma$ follows after passing to the limit $\tau\to 0$. Finally, let $\tau>0$ and $r>0$, and consider two non-negative functions $\xi\in C_0^\infty(\mathbb{R}^N)$ and $\zeta\in C^\infty(\mathbb{R}^N)$ such that $$
0\le \xi\le 1\,, \qquad \xi(x)=1 \;\;\text{ if }\;\; |x|<1 \;\;\text{ and }\;\; \xi(x)=0 \;\;\text{ if }\;\; |x|>2 $$ and $$
0\le \zeta\le 1\,, \qquad \zeta(x)=0 \;\;\text{ if }\;\; |x|<r/2 \;\;\text{ and }\;\;\zeta(x)=1 \;\;\text{ if }\;\; |x|>r\,. $$ For $R>0$ and $x\in\mathbb{R}^N$, we set $\xi_R(x)=\xi(x/R)$. Since $u_M(\tau)$ satisfies \eqref{wp2} with $\kappa=\gamma$ by \eqref{FS.bound1} and $\xi_R \zeta\in C_0^\infty(\mathbb{R}^N)$ vanishes in $B_{r/2}(0)$, it follows from \eqref{wp201} that, for $t>\tau$ and $R>0$, \begin{align*}
\int_{\{r<|x|<R\}} u_M(t,x)\, dx \le & \int_{\mathbb{R}^N} u_M(t,x)\ (\xi_R \zeta)(x)\, dx \\
\le & \int_{\mathbb{R}^N} u_M(\tau,x)\ (\xi_R \zeta)(x)\, dx + C(\gamma,r/2)\ \|\nabla(\xi_R \zeta)\|_{p/(2-p)}\ (t-\tau)^{1/p}\,. \end{align*} Letting $\tau\to 0$, we find, since $\xi_R \zeta$ vanishes in a neighborhood of $x=0$, $$
\int_{\{r<|x|<R\}} u_M(t,x)\, dx \le C(\gamma,r/2)\ \|\nabla(\xi_R \zeta)\|_{p/(2-p)}\ t^{1/p}\,. $$ Combining \eqref{FS.bound1} with the previous inequality, we obtain \begin{align*}
\int_{\{|x|>r\}} u_M(t,x)\, dx \le & \int_{\{r<|x|<R\}} u_M(t,x)\, dx + \int_{\{|x|>R\}} \Gamma_{p,q}(|x|)\, dx \\
\le & C(\gamma,r/2)\ \|\nabla(\xi_R \zeta)\|_{p/(2-p)}\ t^{1/p} + \int_{\{|x|>R\}} \Gamma_{p,q}(|x|)\, dx \\
\le & C(r)\ \left( \|\nabla(\xi_R \zeta)\|_{p/(2-p)}\ t^{1/p} + R^{-(\alpha-N\beta)/\beta} \right)\,. \end{align*} Now, \begin{align*}
\|\nabla(\xi_R \zeta)\|_{p/(2-p)} \le & \| \zeta \nabla\xi_R\|_{p/(2-p)} + \|\xi_R \nabla \zeta\|_{p/(2-p)} \\
\le & R^{-(N+1)(p-p_c)/p} \|\nabla\xi\|_{p/(2-p)} +
\|\nabla\zeta\|_{p/(2-p)}\,, \end{align*} and thus $$
\int_{\{|x|>r\}} u_M(t,x)\, dx \le C(r)\ \left( t^{1/p} + R^{-(N+1)(p-p_c)/p}\ t^{1/p} + R^{-(\alpha-N\beta)/\beta} \right)\,. $$ Letting $R\to\infty$ gives \eqref{FS.bound2}. \end{proof}
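For completeness, we indicate how the exponent in the above bound for $\|\nabla\xi_R\|_{p/(2-p)}$ is computed: since $\nabla\xi_R(x)=R^{-1}\ (\nabla\xi)(x/R)$ and $p_c=2N/(N+1)$,
$$
\|\nabla\xi_R\|_{p/(2-p)} = R^{(N(2-p)-p)/p}\ \|\nabla\xi\|_{p/(2-p)} = R^{-(N+1)(p-p_c)/p}\ \|\nabla\xi\|_{p/(2-p)}\,.
$$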
We are now ready to construct the minimal very singular solution. By Lemma~\ref{lemma.fs2}, for any $t>0$, the family $(u_{M}(t))_{M>0}$ is non-decreasing with respect to $M$ and uniformly bounded by $\Gamma_{p,q}$. Thus, we can define \begin{equation}\label{minimalVSS} \overline{U}(t,x) := \sup_{M>0}\{ u_M(t,x)\} = \lim\limits_{M\to\infty}u_{M}(t,x)\,, \qquad (t,x)\in Q_\infty\,. \end{equation} Using once more Lemma~\ref{lemma.fs2}, we see that $\overline{U}(t)$ is radially symmetric for any $t>0$. Moreover, a first outcome of Proposition~\ref{prop.compFG} and Lemma~\ref{lemma.fs1} is that \begin{equation} \overline{U} \leq u \;\;\text{ in }\;\; Q_\infty \;\;\text{ for any very singular solution }\;\; u \;\;\text{ to \eqref{eq1}.} \end{equation} It remains to show that $\overline{U}$ is a very singular solution to \eqref{eq1}.
\begin{proposition}\label{prop.minimalVSS} The function $\overline{U}$ constructed in \eqref{minimalVSS} is a very singular solution to \eqref{eq1}. Moreover, $\overline{U}=U$, the latter being defined in Theorem~\ref{th.VSSunique}. \end{proposition}
\begin{proof} We first prove that $\overline{U}$ has the expected behavior as $t\to 0$. Let $r>0$. On the one hand, if $M>0$, we have $\overline{U}\ge u_M$ in $Q_\infty$ by \eqref{minimalVSS} and thus \begin{equation*}
\liminf_{t\to 0} \int_{\{|x|\leq r\}} \overline{U}(t,x) \,dx \geq \lim_{t\to 0} \int_{\{|x|\leq r\}} u_{M}(t,x) \,dx =M\,, \end{equation*} from which the expected concentration \eqref{VSS1} of $\overline{U}$ at the origin follows. On the other hand, we infer from the monotone convergence theorem and \eqref{FS.bound2} that \begin{equation*}
\int_{\{|x|\geq r\}} \overline{U}(t,x) \,dx =
\lim\limits_{M\to\infty} \int_{\{|x|\geq r\}} u_{M}(t,x) \,dx \leq C(r)\ t^{1/p}\,. \end{equation*} Letting $t\to 0$ gives the expected vanishing \eqref{VSS2} outside the origin.
Finally, it follows from Lemma~\ref{lemma.fs2}~(b) that $(u_M)_M$ is bounded in $L^\infty(\tau,\infty;W^{1,\infty}(\mathbb{R}^N))$ for any $\tau>0$. This property and Lemma~\ref{le.wp5} ensure the time equicontinuity of the family $(u_M)_M$ in $(\tau,\infty)\times\mathbb{R}^N$ for all $\tau>0$. We then deduce from the Arzel\`a-Ascoli theorem that $(u_M)_M$ is relatively compact in $C([\tau,T]\times K)$ for all compact subsets $K$ of $\mathbb{R}^N$ and $0<\tau< T$. Recalling \eqref{minimalVSS}, we conclude that $(u_{M})_M$ converges to $\overline{U}$ uniformly in compact subsets of $Q_\infty$. Consequently, thanks to the stability of viscosity solutions \cite[Theorem 6.1]{OS}, $\overline{U}$ is a viscosity solution to \eqref{eq1}, and thus a very singular solution in the sense of Definition~\ref{def.VSS}.
It remains to prove that $\overline{U}$ has a self-similar form which follows from the scale invariance of \eqref{eq1} and is now a standard step. Indeed, for $\lambda\in (0,\infty)$ and $M\in(0,\infty)$, define a rescaled version of $u_M$ by \begin{equation}\label{rescaling} u_{M}^{\lambda}(t,x):=\lambda^{(p-q)/(2q-p)}u_{M}(\lambda t,\lambda^{(q-p+1)/(2q-p)}x)\,, \qquad (t,x)\in Q_\infty. \end{equation} By straightforward calculations, we find that $u_{M}^{\lambda}$ is a solution to \eqref{eq1}. To identify its initial trace, we consider $\psi\in BC(\mathbb{R}^N)$ and write \begin{equation*} \begin{split} \int_{\mathbb{R}^N} u_{M}^{\lambda}(t,x)\ \psi(x)\,dx &= \lambda^{(p-q)/(2q-p)}\ \int_{\mathbb{R}^N} u_{M}(\lambda t,\lambda^{(q-p+1)/(2q-p)}x)\ \psi(x) \,dx\\ &= \lambda^{(N+1)(q_*-q)/(2q-p)}\ \int_{\mathbb{R}^N} u_{M}(\lambda t,y)\ \psi(\lambda^{-(q-p+1)/(2q-p)}y) \,dy. \end{split} \end{equation*} Letting $t\to 0$, we find that the initial condition of $u_{M}^{\lambda}$ is $\lambda^{(N+1)(q_*-q)/(2q-p)} M \delta_0$. By Theorem~\ref{th.uniqfund}, we obtain $u_{M}^{\lambda} = u_{\lambda^{(N+1)(q_*-q)/(2q-p)}M}$. We now pass to the limit as $M\to\infty$ and deduce from \eqref{minimalVSS} that \begin{equation*} \overline{U}(t,x) = \lambda^{(p-q)/(2q-p)}\ \overline{U}(\lambda t,\lambda^{(q-p+1)/(2q-p)}x), \quad (t,x)\in Q_\infty\,. \end{equation*} Therefore, $\overline{U}$ has a self-similar form and since it is obviously radially symmetric due to Lemma~\ref{lemma.fs2}~(a) and \eqref{minimalVSS}, we infer from Theorem~\ref{th.VSSunique} that $\overline{U}=U$. \end{proof}
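As a consistency check for the exponent of $\lambda$ in the mass of the initial condition of $u_M^\lambda$, recall that $q_*=p-N/(N+1)$: the change of variables $y=\lambda^{(q-p+1)/(2q-p)}x$ in the above integral produces the factor
$$
\lambda^{(p-q)/(2q-p)}\ \lambda^{-N(q-p+1)/(2q-p)} = \lambda^{((N+1)(p-q)-N)/(2q-p)} = \lambda^{(N+1)(q_*-q)/(2q-p)}\,,
$$
the last identity being due to $(N+1)(q_*-q)=(N+1)(p-q)-N$.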
A further outcome of the above analysis is the following result which is a straightforward consequence of Lemma~\ref{lemma.fs1}, \eqref{minimalVSS}, and Proposition~\ref{prop.minimalVSS}.
\begin{corollary}\label{cor.nonumber} If $u$ is a very singular supersolution to \eqref{eq1} in $Q_\infty$ such that $$
u\in C(Q_{\infty}) \ {\rm and} \ u(t,x)\leq\Gamma_{p,q}(|x|), \qquad (t,x)\in Q_\infty\,, $$ then $$ U(t,x)\leq u(t,x), \qquad (t,x)\in Q_\infty. $$ \end{corollary}
\subsection{The maximal very singular solution}\label{sec4max}
We begin with the following general result for very singular subsolutions to \eqref{eq1}.
\begin{proposition}\label{prop.maxVSS} Let $u$ be a very singular subsolution to \eqref{eq1}. Then there exists a very singular solution $\overline{u}$ such that $u\leq\overline{u}$ in $Q_\infty$. \end{proposition}
\begin{proof} Fix $\tau>0$ and let $u^\tau$ be the solution to \eqref{eq1} in $(\tau,\infty)\times\mathbb{R}^N$ with initial condition $u^{\tau}(\tau)=u(\tau)$. The comparison principle and Proposition~\ref{prop.compFG} then ensure that \begin{equation}
u(t,x) \le u^{\tau}(t,x) \le \Gamma_{p,q}(|x|)\,, \qquad (t,x)\in (\tau,\infty)\times\mathbb{R}^N\,. \label{d24a} \end{equation}
Moreover, the function $u(\tau)$ satisfies \eqref{wp1} and \eqref{wp2} (with $\kappa=\gamma$) by Proposition~\ref{prop.compFG} and it follows from Proposition~\ref{pr.wp2} that, for $t>\tau$, \begin{equation}\label{est1.tau}
(t-\tau)^{\alpha-N\beta} \|u^{\tau}(t)\|_{1} + (t-\tau)^{\alpha}\|u^{\tau}\|_{\infty}\leq K_\gamma, \end{equation} and \begin{equation}\label{est2.tau}
(t-\tau)^{\alpha+\beta} \|\nabla u^{\tau}(t)\|_{\infty}\leq K_\gamma. \end{equation} We also notice that, if $0<\tau_1<\tau_2$, the inequality \eqref{d24a} implies that $u^{\tau_2}(\tau_2)=u(\tau_2)\le u^{\tau_1}(\tau_2)$, whence \begin{equation} u^{\tau_1}(t,x)\geq u^{\tau_2}(t,x)\,, \qquad (t,x)\in(\tau_2,\infty)\times\mathbb{R}^N\,, \label{d25a} \end{equation} by the comparison principle. Owing to \eqref{d24a}, \eqref{est1.tau}, and \eqref{d25a}, we may define the pointwise limit \begin{equation}\label{est3.tau} W(t,x):=\sup\limits_{\tau\in (0,t/2)} \{u^{\tau}(t,x)\} = \lim\limits_{\tau\to 0} u^{\tau}(t,x), \quad (t,x)\in Q_\infty. \end{equation}
The remainder of the proof is devoted to proving that $W$ is a very singular solution to \eqref{eq1} in $Q_\infty$. Consider $n\ge 1$. By \eqref{est1.tau} and \eqref{est2.tau}, the family $\{u^\tau\ :\ \tau\in (0,1/2n) \}$ is bounded in $L^\infty(1/n,n;W^{1,\infty}(\mathbb{R}^N))$ which allows us to apply Lemma~\ref{le.wp5} and deduce from the Arzel\`a-Ascoli theorem that $\{u^\tau\ :\ \tau\in (0,1/2n) \}$ is relatively compact in $C((1/n,n)\times B_n(0))$. Consequently, the pointwise convergence \eqref{est3.tau} of $(u^\tau)_\tau$ to $W$ can be improved to convergence in $C((1/n,n)\times B_n(0))$ for all $n\ge 1$, from which we deduce that $W$ is a viscosity solution to \eqref{eq1} in $Q_\infty$ by the stability of viscosity solutions \cite[Theorem~6.1]{OS}. We may also use this convergence to pass to the limit as $\tau\to 0$ in \eqref{d24a} and obtain \begin{equation}
u(t,x) \le W(t,x) \le \Gamma_{p,q}(|x|)\,, \qquad (t,x)\in Q_\infty\,. \label{d27} \end{equation}
It remains to prove that the function $W$ has the expected behavior as $t\to 0$. Since $u$ is a very singular subsolution to \eqref{eq1}, it satisfies \eqref{VSS1} and so does $W$ by \eqref{d27}. The study of the behavior of $W$ outside the origin requires more work. Let $\zeta\in C^{\infty}(\mathbb{R}^N)$ be such that $0\le\zeta\le 1$, $$
\zeta(x)=1 \ \hbox{if} \ |x|\geq 1, \quad \zeta(x)=0 \
\hbox{if} \ |x|\leq\frac{1}{2}. $$ Fix $r>0$ and define $\zeta_r(x) = \zeta(x/r)$ for $x\in\mathbb{R}^N$. It follows from \eqref{def.weak} that, for $t>0$ and $\tau\in (0,t/2)$, \begin{equation}\label{interm8} \begin{split}
\int_{\mathbb{R}^N} u^{\tau}(t,x)\ \zeta_r(x) \,dx \le & \int_{\mathbb{R}^N} u^{\tau}(\tau,x)\ \zeta_r(x) \,dx + \int_{\tau}^{t} \int_{\mathbb{R}^N} |\nabla u^{\tau}(s,x)|^{p-1}\ |\nabla\zeta_r(x)| \,dx\,ds\\
\le & \int_{\{|x|\ge r/2\}} u(\tau,x) \,dx \\
& + \int_{\tau}^{t} \int_{\{r/2<|x|<r\}} |\nabla u^{\tau}(s,x)|^{p-1}\ |\nabla \zeta_r(x)| \,dx\,ds, \end{split} \end{equation} since $u^{\tau}(\tau)=u(\tau)$ by definition. On the one hand, Fatou's lemma and \eqref{est3.tau} give \begin{equation} \int_{\mathbb{R}^N} W(t,x)\ \zeta_r(x)\ dx \le \liminf_{\tau\to 0} \int_{\mathbb{R}^N} u^\tau(t,x)\ \zeta_r(x)\ dx\,. \label{d29} \end{equation} On the other hand, since $u$ is a very singular subsolution, we have \begin{equation}
\lim_{\tau\to 0} \int_{\{|x|\ge r/2\}} u(\tau,x)\, dx =0\,, \label{d30} \end{equation} while \eqref{wp100}, \eqref{d24a}, and \eqref{est1.tau} give, for $s>\tau$, \begin{align*}
|\nabla u^\tau(s,x)|^{p-1} \le & C \left( \left\| u^\tau\left( \frac{s+\tau}{2} \right) \right\|_\infty^{1/\alpha p} + (s-\tau)^{-1/p} \right)^{p-1}\ \left( u^\tau(s,x) \right)^{2(p-1)/p} \\
\le & C\ (s-\tau)^{-(p-1)/p}\ \Gamma_{p,q}(|x|)^{2(p-1)/p}\,. \end{align*} Thus \begin{equation}
\int_{\tau}^{t} \int_{\{r/2<|x|<r\}} |\nabla u^{\tau}(s,x)|^{p-1}\ |\nabla\zeta_r(x)| \,dx \,ds \leq C(r)\ t^{1/p} \|\nabla\zeta\|_{\infty}.\label{interm11} \end{equation} Combining \eqref{interm8}, \eqref{d29}, \eqref{d30}, and \eqref{interm11} leads us to \begin{equation*}
\int_{\mathbb{R}^N} W(t,x)\ \zeta_r(x) \,dx \leq C(r)\ t^{1/p}\ \|\nabla\zeta\|_{\infty}. \end{equation*} Using the properties of $\zeta$ and letting $t\to 0$ in the above inequality, we conclude that $W$ satisfies \eqref{VSS2}. Summarizing, we have established that $W$ is a very singular solution to \eqref{eq1} in $Q_\infty$ which lies above $u$ by \eqref{d27}. \end{proof}
We are now ready to construct the maximal very singular solution to \eqref{eq1}. We denote the set of very singular solutions to \eqref{eq1} in $Q_\infty$ by ${\cal S}$. Since $U\in{\cal S}$, ${\cal S}$ is non-empty and we may define \begin{equation}\label{maximalVSS} V(t,x) := \sup\limits_{u\in{\cal S}}\{u(t,x)\}, \quad (t,x)\in Q_\infty. \end{equation} We prove next that $V$ is itself a very singular solution to \eqref{eq1}. We begin with the following bounds.
\begin{lemma}\label{est.maxVSS} For $t>0$, we have \begin{equation}\label{estmax1}
t^{\alpha}\ \|V(t)\|_{\infty} + t^{\alpha+\beta}\|\nabla V(t)\|_{\infty} \leq K_\gamma, \end{equation} and \begin{equation}\label{estmax2}
U(t,x)\leq V(t,x)\leq\Gamma_{p,q}(|x|), \quad x\in\mathbb{R}^N. \end{equation} \end{lemma}
\begin{proof}
Since $U\in{\cal S}$, the inequality \eqref{estmax2} follows at once from \eqref{maximalVSS} and Proposition~\ref{prop.compFG}. We next deduce from \eqref{decayVSS} and \eqref{maximalVSS} that $\|V(t)\|_\infty \le K_\gamma\ t^{-\alpha}$ for $t>0$ while \eqref{gradestVSS} and \eqref{maximalVSS} entail that, for any $x\in\mathbb{R}^N$, $y\in\mathbb{R}^N$, $u\in{\cal S}$, and $t>0$, $$
u(t,x) \leq u(t,y) + K_\gamma\ t^{-(\alpha+\beta)}\ |x-y|\leq V(t,y) + K_\gamma\ t^{-(\alpha+\beta)}\ |x-y|. $$ Hence, passing to the supremum over $u\in{\cal S}$, $$
V(t,x)\leq V(t,y) + K_\gamma\ t^{-(\alpha+\beta)}\ |x-y|, $$ and $V(t)$ is Lipschitz continuous for all $t>0$ with Lipschitz constant $K_\gamma\ t^{-(\alpha+\beta)}$. Consequently, $V(t)\in W^{1,\infty}(\mathbb{R}^N)$ and satisfies \eqref{estmax1}. \end{proof}
We can now establish the main property of $V$.
\begin{lemma}\label{lem.Bardi} $V$ is a very singular subsolution to \eqref{eq1}. \end{lemma}
\begin{proof}
Since $V$ is the supremum of a family of viscosity solutions to \eqref{eq1} by \eqref{maximalVSS}, the fact that $V$ is a viscosity subsolution to \eqref{eq1} follows from \cite[Proposition V.2.11]{BCD}. The regularity $V(t)\in L^1(\mathbb{R}^N)\cap W^{1,\infty}(\mathbb{R}^N)$ for $t>0$ is a consequence of Lemma~\ref{est.maxVSS} and the integrability at infinity of $\Gamma_{p,q}$ (see Lemma~\ref{le.wp6}). Also, the concentrating property \eqref{VSS1} at the origin as $t\to 0$ follows at once from \eqref{maximalVSS} since $U\in{\cal S}$. It remains to check that $V(t)$ vanishes outside the origin as $t\to 0$. For that purpose, let $r>0$ and $R>r$. Since the annulus $K(r,R):= \{x\in\mathbb{R}^N\ :\ r/2\le |x|\le R\}$ is compact, there is a finite number $l$ of points $(y_i)_{1\le i \le l}$ in $\mathbb{R}^N$ such that \begin{equation} K(r,R)\subset\bigcup\limits_{i=1}^{l} B_{r/8}(y_i). \label{d37} \end{equation} We infer from \eqref{locest.VSS} that, for any $1\leq i\leq l$, $\lambda>0$, $t>0$, and $u\in{\cal S}$, we have $$ u(t,x)\leq \lambda e^{A_{\lambda,r/4}t}\ \exp\left(\frac{1}{\sigma_{y_i,r/4}(x)}\right), \quad x\in B_{r/8}(y_i). $$ The above estimate being valid for all $u\in{\cal S}$ we conclude that, for any $1\leq i\leq l$, $\lambda>0$, and $t>0$, \begin{equation} V(t,x)\leq \lambda e^{A_{\lambda,r/4}t}\ \exp\left(\frac{1}{\sigma_{y_i,r/4}(x)}\right), \quad x\in B_{r/8}(y_i). \label{d38} \end{equation} Recalling \eqref{d37}, we infer from \eqref{estmax2} and \eqref{d38} that, for $t>0$ and $\lambda>0$, \begin{eqnarray*}
\int_{\{|x|\geq r\}} V(t,x)\,dx & \leq &\int_{K(r,R)} V(t,x) \,dx + \int_{\{|x|>R\}} V(t,x)\,dx\\
& \leq & \lambda\ e^{A_{\lambda,r/4}t}\ \sum\limits_{i=1}^{l} \int_{B_{r/8}(y_i)}\ \exp\left(\frac{1}{\sigma_{y_i,r/4}(x)} \right) \,dx + \int_{\{|x|>R\}} \Gamma_{p,q}(|x|) \,dx. \end{eqnarray*} Passing first to the limit $t\to 0$ and then $\lambda\to 0$ gives $$
\limsup\limits_{t\to 0} \int_{\{|x|\geq r\}} V(t,x) \,dx\le \int_{\{|x|>R\}} \Gamma_{p,q}(|x|) \,dx $$ for all $R>r$. Thanks to Lemma~\ref{le.wp6}, the right-hand side of the above inequality converges to zero as $R\to\infty$, so that $V$ satisfies \eqref{VSS2} and the proof is complete. \end{proof}
We are now in a position to identify $V$.
\begin{proposition}\label{prop.maximalVSS} The function $V$ defined in \eqref{maximalVSS} is a very singular solution in the sense of Definition~\ref{def.VSS}. Moreover, it is radially symmetric and has self-similar form, thus coinciding with the unique self-similar very singular solution $U$ given by Theorem~\ref{th.VSSunique}. \end{proposition}
A straightforward consequence of Proposition~\ref{prop.maximalVSS} is that ${\cal S}=\{U\}$, which proves Theorem~\ref{th.VSSgeneral}.
\begin{proof} It follows from Proposition~\ref{prop.maxVSS} and Lemma~\ref{lem.Bardi} that there exists a very singular solution $\overline{u}$ to \eqref{eq1} such that $$ V(t,x)\leq\overline{u}(t,x) \quad \hbox{for} \ \hbox{all} \ (t,x)\in(0,\infty)\times\mathbb{R}^N. $$ The definition \eqref{maximalVSS} of $V$ implies that $V\equiv\overline{u}$ and thus $V$ is the maximal very singular solution. The radial symmetry and self-similarity of $V$ then follow from the scaling and rotational invariances of \eqref{eq1}. \end{proof}
\subsection{A comparison principle}\label{sec4cc}
An interesting consequence of the uniqueness of the very singular solutions to \eqref{eq1} is the following comparison principle for the related elliptic equation \begin{equation}\label{eq.ell}
-\Delta_{p}v+|\nabla v|^q-\alpha v-\beta y\cdot\nabla v=0 \quad\text{ in }\quad \mathbb{R}^N\,. \end{equation}
\begin{theorem}\label{th.comp} Let $v_1$ be a viscosity subsolution and $v_2$ be a viscosity supersolution to \eqref{eq.ell} in $\mathbb{R}^N$, such that \begin{equation} v_i\in L^1(\mathbb{R}^N)\cap W^{1,\infty}(\mathbb{R}^N)\,, \quad v_i\ge 0\,, \quad v_i\not\equiv 0\,, \quad i=1,2. \label{wp11} \end{equation} Assume that \begin{equation}\label{int.est}
\lim\limits_{R\to\infty}\int_{\{|y|\geq R\}} v_i(y)|y|^{\alpha/\beta-N}\,dy=0, \
i=1,2, \;\;\text{ and }\;\; v_2(y)\leq\Gamma_{p,q}(|y|), \quad y\in\mathbb{R}^N. \end{equation} Then $v_1(y)\leq f_U(y) \leq v_2(y)$ for all $y\in\mathbb{R}^N$, where $f_U$ is the profile of the very singular solution $U$ to \eqref{eq1}, see Theorem~\ref{th.VSSunique}. \end{theorem}
Besides being of interest in itself, this comparison principle will also be useful to settle the asymptotic behavior in Section~\ref{sec5}.
\begin{proof} For $i=1,2$, define $$ u_i(t,x):=t^{-\alpha}v_i(xt^{-\beta}), \quad (t,x)\in Q_\infty\,. $$ It is then straightforward to check that $u_1$ is a subsolution and $u_2$ is a supersolution to \eqref{eq1} in $Q_\infty$. Moreover, we have $$ u_i\in C(Q_{\infty})\;\;\text{ and }\;\; u_i(t)\in L^1(\mathbb{R}^N)\cap W^{1,\infty}(\mathbb{R}^N), \quad t>0\,, \quad i=1,2. $$ On the one hand, for $i=1,2$ and any $r>0$, we have \begin{equation*} \begin{split}
\int_{\{|x|\geq r\}}u_i(t,x)\,dx&=t^{N\beta-\alpha}\int_{\{|y|\geq rt^{-\beta}\}}v_i(y)|y|^{\alpha/\beta-N}|y|^{N-\alpha/\beta}\,dy\\&\leq r^{N-\alpha/\beta}\int_{\{|y|\geq rt^{-\beta}\}}v_i(y)|y|^{\alpha/\beta-N}\,dy, \end{split} \end{equation*} which tends to 0 as $t\to0$ by \eqref{int.est}. On the other hand, since $v_i\not\equiv 0$, there is $r_0>0$ sufficiently large such that $$ \int_{B_{r_0}(0)} v_i(y)\, dy >0\,, \quad i=1,2\,. $$ Consequently, for $t>0$ sufficiently small ($t\in \left( 0,(r/r_0)^{1/\beta} \right)$), we have $$
\int_{\{|x|\leq r\}}u_i(t,x)\,dx=t^{N\beta-\alpha}\int_{\{|y|\leq rt^{-\beta}\}}v_i(y)\,dy\geq t^{N\beta-\alpha}\int_{B_{r_0}(0)}v_i(y)\,dy, $$ which tends to $+\infty$ as $t\to 0$, since $N\beta-\alpha<0$. It follows that $u_1$ is a very singular subsolution to \eqref{eq1} and $u_2$ is a very singular supersolution to \eqref{eq1}. Furthermore $$
u_2(t,x)=t^{-\alpha}v_2(xt^{-\beta})\leq\gamma|x|^{-\alpha/\beta}=\Gamma_{p,q}(|x|), $$ for any $(t,x)\in Q_\infty$. By Theorem~\ref{th.VSSgeneral}, Corollary~\ref{cor.nonumber}, and Proposition~\ref{prop.maxVSS}, we obtain $$ u_1\leq U\leq u_2 \quad \hbox{in} \quad Q_{\infty}. $$ We reach the conclusion by going back to the original variables. \end{proof}
\section{Convergence to self-similarity}\label{sec5}
With all the preparations done in the previous sections, we are now ready to prove the main result about asymptotic convergence. The proof will be divided into several steps.
\begin{proof}[Proof of Theorem \ref{th.asympt}] Let us first notice that the condition \eqref{wp111} implies that $R(u_0)<\infty$, where $R(u_0)$ is defined in \eqref{radius.initial}. Moreover, there exists a sufficiently large constant $\kappa>0$ such that $$
u_0(x)\leq\kappa|x|^{-\alpha/\beta} \quad \hbox{for any} \quad x\in\mathbb{R}^N. $$
\noindent \textbf{Step~1. Self-similar variables.} We first pass to self-similar variables, defining the new variables $(s,y)$ and the function $v$ by \begin{equation}\label{self.fct} u(t,x)=:(1+t)^{-\alpha}v(s,y), \quad s:=\ln(1+t), \ y:=x(1+t)^{-\beta}. \end{equation} Then $v$ solves the equation \begin{equation}\label{eq.self}
\partial_{s}v-\Delta_{p}v+|\nabla v|^q-\alpha v-\beta y\cdot\nabla v=0, \quad (s,y)\in Q_{\infty}, \end{equation} with initial condition $v(0)=u_0$ in $\mathbb{R}^N$.
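For the reader's convenience, the chain rule behind the passage from \eqref{eq1} to \eqref{eq.self} can be sketched as follows; we assume here that the exponents $\alpha,\beta$ satisfy the self-similarity relations $(\alpha+\beta)(p-1)+\beta=(\alpha+\beta)q=\alpha+1$, which are the relations under which \eqref{eq1} is invariant under the rescaling encoded in \eqref{self.fct}.

```latex
% With u(t,x)=(1+t)^{-\alpha} v(s,y), s=\ln(1+t), y=x(1+t)^{-\beta}:
\begin{align*}
  \partial_t u &= (1+t)^{-\alpha-1}\bigl(\partial_s v-\alpha v-\beta\, y\cdot\nabla v\bigr),\\
  \nabla_x u &= (1+t)^{-(\alpha+\beta)}\,\nabla_y v,\qquad
  |\nabla_x u|^q = (1+t)^{-(\alpha+\beta)q}\,|\nabla_y v|^q,\\
  \Delta_{p,x} u &= (1+t)^{-(\alpha+\beta)(p-1)-\beta}\,\Delta_{p,y} v.
\end{align*}
% Since (\alpha+\beta)(p-1)+\beta = (\alpha+\beta)q = \alpha+1, all three
% prefactors coincide with (1+t)^{-\alpha-1}, which can be divided out,
% leaving \eqref{eq.self} for v.
```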
\noindent \textbf{Step~2. Estimates for $v$.} Starting from the estimates established for $u$, we can deduce estimates for $v$ in similar norms as follows. First, recalling the homogeneity of $\Gamma_{p,q}$, we deduce from \eqref{wp7} that \begin{equation}\label{FG.boundv} \begin{split}
v(s,y)&=e^{\alpha s}u(e^s-1,ye^{\beta s})\leq e^{\alpha s}\Gamma_{p,q}(|y|e^{\beta s}-R(u_0))\\&\leq\Gamma_{p,q}(|y|-R(u_0)e^{-\beta s}) \end{split} \end{equation} for any $(s,y)\in Q_{\infty}$. Then, the estimates \eqref{wp3} can be easily transformed into the following ones for $v$: \begin{eqnarray}
\|v(s)\|_{1} + \|v(s)\|_{\infty} + \|\nabla v(s)\|_{\infty} &\leq& \left[ \left(\frac{e^s}{e^s-1}\right)^{\alpha-N\beta} + \left(\frac{e^s}{e^s-1}\right)^{\alpha} + \left(\frac{e^s}{e^s-1}\right)^{\alpha+\beta} \right]K_{\kappa} \nonumber\\ &\leq& 6K_{\kappa},\label{wp3.v} \end{eqnarray} for any $s>\ln{2}>0$, where $K_{\kappa}$ is the constant in \eqref{wp3}. Finally, the pointwise upper bound \eqref{est.point} reads \begin{equation}
|y|^{\alpha/\beta} v(s,y) \le C\ \left[ \sup_{|z|\ge |y| e^{\beta s}/4}\left\{ u_0(z) |z|^{\alpha/\beta} \right\} + |y|^{-1/\beta} \right] \label{pompee} \end{equation} for $(s,y)\in (0,\infty)\times (\mathbb{R}^N\setminus\{0\})$.
\noindent \textbf{Step~3. Lower bound for $v$.} We infer from \cite[Proposition~1.8]{IL1} that $u(t,x)>0$ for $(t,x)\in Q_\infty$. In particular, $u(1,0)>0$ and, since $u(1,\cdot)\in C(\mathbb{R}^N)$, there is $m_0>0$ such that \begin{equation} u(1,x)\ge m_0\,, \qquad x\in B_1(0)\,. \label{pompon} \end{equation} Next, according to the analysis performed in \cite{IL2} (in particular, Lemma~2.1, Lemma~2.8, Lemma~2.10, Proposition~2.11, and Proposition~2.16 therein), there exists $a_*>0$ such that, for $a\in (0,a_*)$, the maximal solution $g_a$ defined on $[0,R_m(a))$ to the Cauchy problem \begin{equation}\label{IVPa} \left\{ \begin{array}{l}
(|g_a'|^{p-2}g_a')'(r)+\displaystyle{\frac{N-1}{r}} (|g_a'|^{p-2}g_a')(r)+\alpha g_a(r)+\beta rg_a'(r)-|g_a'(r)|^q=0 , \\
\\ g_a(0)=a, \ g_a'(0)=0, \end{array}\right. \end{equation} has the following properties: there is $R(a)\in (0,R_m(a))$ such that \begin{equation} 0 < g_a(r) \le a \;\;\text{ for }\;\; r\in [0,R(a))\,, \quad g_a(R(a))=0\,, \;\;\text{ and }\;\; g_a'(R(a))<0\,. \label{cesar} \end{equation} Introducing $$ G_{a,\lambda}(t,x) := \left\{ \begin{array}{lcl}
\lambda^{p/(2-p)}\ t^{-\alpha}\ g_a\left( \lambda |x| t^{-\beta} \right) & \text{ if } & |x|\in \left[ 0, R(a) t^\beta/\lambda \right]\,, \\
0 & \text{ if } & |x| \ge R(a) t^\beta/\lambda\,, \end{array} \right. $$
for $(a,\lambda) \in (0,a_*)\times (0,1)$, the properties of $g_a$ guarantee that $G_{a,\lambda}\in C(Q_\infty)$ and is a subsolution to \eqref{eq1} in $Q_\infty$ (it can be interpreted locally as the maximum of two subsolutions to \eqref{eq1} in $Q_\infty$, namely the zero function and $(t,x)\longmapsto \lambda^{p/(2-p)}\ t^{-\alpha}\ g_a\left( \lambda |x| t^{-\beta} \right)$). \\ Now, we set $$ a_0:= \frac{a_*}{2} \in (0,a_*)\,, \quad t_0 :=\frac{1}{R(a_0)^p} \left( \frac{m_0}{a_0} \right)^{2-p}\,, \quad \lambda_0 := R(a_0)\ t_0^\beta\,, $$ and observe that, if $x\in B_{R(a_0) t_0^\beta / \lambda_0}(0) = B_1(0)$, then \eqref{pompon} implies that $$ G_{a_0,\lambda_0}(t_0,x) \le a_0 \lambda_0^{p/(2-p)} t_0^{-\alpha} = a_0 \left( \lambda_0 t_0^{-\beta} \right)^{p/(2-p)} t_0^{1/(2-p)} = m_0 \le u(1,x)\,. $$ Since $u(1,x)> 0 = G_{a_0,\lambda_0}(t_0,x)$ if $x\not\in B_{R(a_0) t_0^\beta / \lambda_0}(0)$, we have $u(1,x) \ge G_{a_0,\lambda_0}(t_0,x)$ for all $x\in\mathbb{R}^N$ and the comparison principle entails that \begin{equation*} u(t+1,x) \ge G_{a_0,\lambda_0}(t+t_0,x)\,, \qquad (t,x)\in Q_\infty\,. \end{equation*} In particular, for $t>0$ and $x\in B_{R(a_0) (t+t_0)^\beta / \lambda_0}(0)$, $$
u(t+1,x) \ge \lambda_0^{p/(2-p)}\ (t+t_0)^{-\alpha}\ g_{a_0}\left( \lambda_0 |x| (t+t_0)^{-\beta} \right) \,. $$ In terms of $v$, the previous lower bound reads \begin{equation}
v(s,y) \ge \lambda_0^{p/(2-p)} \left( \frac{e^s}{e^s-2+t_0} \right)^\alpha g_{a_0}\left( \lambda_0 |y| \left( \frac{e^s}{e^s-2+t_0} \right)^\beta \right)\,, \label{modeste} \end{equation}
for $s>\ln{2}$ and $|y|\le (R(a_0)/\lambda_0) ((e^s-2+t_0) e^{-s})^\beta$.
\noindent \textbf{Step~4. Half-relaxed limits.} To complete the proof of the convergence, we introduce the half-relaxed limits \cite{Bl94}, as previously done in works on large-time behavior; see \cite{ILV, Ro} for instance. We thus define \begin{equation*} \tilde{w}_{*}(s,y):=\liminf\limits_{(\sigma,z,\varepsilon)\to(s,y,0)}v\left(\frac{\sigma}{\varepsilon},z\right), \quad \tilde{w}^{*}(s,y):=\limsup\limits_{(\sigma,z,\varepsilon)\to(s,y,0)}v\left(\frac{\sigma}{\varepsilon},z\right) \end{equation*} for $(s,y)\in Q_\infty$. It is a standard fact that $\tilde{w}_{*}$ and $\tilde{w}^{*}$ do not depend on $s>0$, so that we can define $$ w_{*}(y):=\tilde{w}_{*}(1,y)=\tilde{w}_{*}(s,y), \quad w^{*}(y):=\tilde{w}^{*}(1,y)=\tilde{w}^{*}(s,y), \quad s>0\,. $$ In addition, it follows from \cite[Th\'eor\`eme~4.1]{Bl94} that $w_{*}$ is a viscosity supersolution and $w^{*}$ is a viscosity subsolution to the stationary equation associated to \eqref{eq.self}, that is, the elliptic equation \eqref{eq.ell}. Moreover, the definition of $w_*$ and $w^*$ and \eqref{modeste} ensure that \begin{equation}
w_*\leq w^* \;\;\text{ in }\;\; \mathbb{R}^N \;\;\text{ and }\;\; \lambda_0^{p/(2-p)}\ g_{a_0}(\lambda_0 |y|) \le w_*(y) \;\;\text{ for }\;\; y\in B_{R(a_0)/\lambda_0}(0)\,. \label{cesar2} \end{equation} An obvious consequence of \eqref{cesar} and \eqref{cesar2} is that $w_*$ and $w^*$ are both not identically equal to zero.
Our aim now is to show that $w_*\equiv w^*$ with the help of Theorem~\ref{th.comp}. In order to apply it, we translate the estimates for $v$ in Step~2 above into estimates for $w_*$ and $w^*$. We readily notice that \eqref{FG.boundv} implies \begin{equation}\label{FG.boundw}
w_*(y)\leq w^*(y)\leq\Gamma_{p,q}(|y|), \quad \hbox{for} \ \hbox{any} \ y\in\mathbb{R}^N, \end{equation} and that \eqref{wp3.v} implies that \begin{equation}\label{wp3.w}
w_*(y)\leq w^*(y)\leq 6K_{\kappa}, \quad \|\nabla w_{*}\|_{\infty}\leq 6K_{\kappa}, \ \|\nabla w^{*}\|_{\infty}\leq 6K_{\kappa}, \quad y\in\mathbb{R}^N, \end{equation} whence $w_*$ and $w^*$ belong to the space $L^1(\mathbb{R}^N)\cap W^{1,\infty}(\mathbb{R}^N)$. In addition, taking into account the condition \eqref{wp111} on $u_0$, we deduce from \eqref{pompee} that $$
w_*(y)\leq w^*(y)\leq C|y|^{-(\alpha+1)/\beta}, \quad y\in\mathbb{R}^N\,. $$ Consequently, \begin{equation}\label{VSS1.w}
\int_{\{|y|\geq r\}} \left( w_{*}(y) + w^{*}(y) \right) |y|^{\alpha/\beta-N}\,dy\leq C\int_{r}^{\infty}s^{-1/\beta-1}\,ds=C r^{-1/\beta}, \end{equation} which converges to 0 as $r\to\infty$. Gathering \eqref{cesar2}, \eqref{FG.boundw}, \eqref{wp3.w}, and \eqref{VSS1.w}, we are in a position to apply Theorem~\ref{th.comp} and conclude that $w^*\leq f_U \le w_*$ in $\mathbb{R}^N$.
Recalling \eqref{cesar2}, we have established that $w_*\equiv w^*=f_U$, which in turn implies that $$
\lim_{\varepsilon\to 0} \sup_{y\in K}\left\{ \left| v\left( \frac{1}{\varepsilon},y \right) - f_U(y) \right| \right\} = 0 $$ for any compact subset $K$ of $\mathbb{R}^N$ by \cite[Lemma V.1.9]{BCD}
or \cite[Lemme~4.1]{Bl94}. Owing to \eqref{FG.boundv} and the decay of $f_U$ as $|x|\to\infty$ (see Theorem~\ref{th.VSSunique}), the above convergence can be improved to the convergence of $v(s)$ to $f_U$ in $L^\infty(\mathbb{R}^N)$ as $s\to\infty$. Going back to the original variables gives \eqref{asympt} and ends the proof. \end{proof}
\end{document}
\begin{document}
\title{Properties of sets of Subspaces with Constant Intersection Dimension}
\begin{abstract} A $(k,k-t)$-SCID (set of Subspaces with Constant Intersection Dimension) is a set of $k$-dimensional vector spaces that have pairwise intersections of dimension $k-t$. Let $\mathcal{C}=\{\pi_1,\ldots,\pi_n\}$ be a $(k,k-t)$-SCID. Define $S:=\langle \pi_1, \ldots, \pi_n \rangle$ and $I:=\langle \pi_i \cap \pi_j \mid 1 \leq i < j \leq n \rangle$. We establish several upper bounds for $\dim S + \dim I$ in different situations. We give a spectrum result for the case $(n-1)(k-t)\leq k$ and for the case $n\leq\frac{q^{t(n-\eta)}-1}{q^t-1}$, giving examples of $(k,k-t)$-SCIDs reaching a large interval of values for $\dim S + \dim I$. \end{abstract}
\section{Introduction}
Let $\mathcal{V}$ be a vector space over a finite field $\mathbb{F}_q$. Let $k$ and $t$ be integers such that $t \leq k$. A set $\mathcal{C}$ of $k$-dimensional subspaces of $\mathcal{V}$ that have pairwise intersections of dimension $k-t$ is called a $(k,k-t)$-\emph{SCID}. The acronym SCID stands for \emph{a set of Subspaces with Constant Intersection Dimension}.\par SCIDs were introduced in \cite{SCID}, where a similar definition is given in terms of projective spaces instead of vector spaces. For our purposes, however, the ambient space will always be a vector space $\mathcal{V}$ over a finite field $\mathbb{F}_q$ of order $q$. The dimension of this vector space $\mathcal{V}$ does not need to be predefined. \par In the domain of coding theory, SCIDs are better known as \emph{equidistant codes}. These codes are relevant in a \emph{random network coding} setting, where information is sent through a network with varying topology. The network is modeled as a directed multigraph in which the information has to be transmitted from the \emph{sources} to the \emph{sinks} through some intermediary nodes. Within network coding, these intermediary nodes apply coding to the received inputs, instead of simply routing them. It was shown in \cite{NIF} that the maximal information rate of a network with one source can be achieved by applying this technique. When the order of the ground field is large enough, it suffices to apply \emph{linear} network coding, where the nodes transmit linear combinations of the input they receive \cite{LNC}.
In \emph{random network coding}, the nodes output \emph{random} linear combinations of the input instead of using a predefined scheme. The concept and benefits of random network coding in a multi-source setting were explored in \cite{random}, and later approached mathematically through \emph{subspace codes} in \cite{RNC}. \par
A subspace code is a code that has vector subspaces as codewords. Note that this is different from the classical case, where the codewords are vectors. The distance between two codewords $U$ and $V$ from a subspace code is called the \emph{subspace distance} and is defined as $d (U,V)=\dim U+\dim V - 2\dim (U\cap V)$. When all codewords have the same dimension, the code is called a \emph{constant dimension code}. When in addition all pairwise intersections have the same dimension, then the distance between any two codewords is constant. In this case we say we have an \emph{equidistant code}, see e.g. \cite{equi}. It should now be clear why these codes correspond to SCIDs. \par
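For a $(k,k-t)$-SCID, the subspace distance between any two distinct elements follows directly from the definitions:

```latex
\[
  d(\pi_i,\pi_j)=\dim \pi_i+\dim \pi_j-2\dim(\pi_i\cap\pi_j)
               =k+k-2(k-t)=2t .
\]
```

Hence a $(k,k-t)$-SCID is precisely an equidistant constant dimension code with subspace distance $2t$.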
Note that in the definition of a SCID, it is required that all pairwise intersections have the \emph{same dimension}. It is not necessary that they all coincide. If they do all coincide, then the SCID is called a $(k,k-t)$-\emph{sunflower}. The $(k-t)$-space that all the elements of the sunflower have in common is often called the \emph{center} of the sunflower. Sunflowers have been investigated before, see for instance \cite{sunflower}. \par Another special case of SCIDs occurs when $k=t$, i.e. when every two distinct elements intersect trivially. A $(k,0)$-SCID is called a \emph{partial $k$-spread}. Note that a partial $k$-spread is also a $(k,0)$-sunflower with trivial center. Partial spreads are studied thoroughly within the domain of finite geometry, see for example \cite{ms} and \cite{seg}.\par
From now on, we will assume that a SCID contains at least two elements. The following lemma follows directly from the definitions: \begin{lemma}\label{lemma} A subset of a $(k,k-t)$-SCID (resp. $(k,k-t)$-sunflower), containing at least two elements, is again a $(k,k-t)$-SCID (resp. $(k,k-t)$-sunflower). \end{lemma}
Let $\mathcal{C}=\{\pi_1,\ldots,\pi_n\}$ be a $(k,k-t)$-SCID with $n$ elements. Define the following two spaces: \begin{align*} S&:=\langle \pi_1, \ldots, \pi_n \rangle ,\\ I&:=\langle \pi_i \cap \pi_j \mid 1 \leq i < j \leq n \rangle .\end{align*}
Intuitively, when the space $S$ has large dimension, the elements of $\mathcal{C}$ are further apart, causing the dimension of $I$ to be smaller and vice versa. This raises the question: \emph{is it possible to give an upper bound on $\dim{S}+\dim{I}$?} \par The article is structured as follows: In Section \ref{upp}, we establish several upper bounds for different situations. In Section \ref{spec}, we give a spectrum result for the case $(n-1)(k-t)\leq k$ and for the case $n\leq\frac{q^{t(n-\eta)}-1}{q^t-1}$, giving examples of $(k,k-t)$-SCIDs reaching a large interval of values for $\dim S + \dim I$.
\section{Upper bounds on $\dim S + \dim I$}\label{upp} In this section, we justify the intuition from the previous section by giving upper bounds on the sum $\dim S + \dim I$. When an upper bound is established on this sum, it is clear that for a large dimension of $S$, the dimension of $I$ must be small, and vice versa. We give several upper bounds and compare them for different values of $n \geq 2$, $k$ and $t$. At the end of this section, a summary of the best bounds is given in Table \ref{tabel}.\par Theorem \ref{main} gives a bound that is valid for all values of $n \geq 2$, $k$ and $t$. \par
\begin{theorem}\label{main} Let $\mathcal{C}=\{\pi_1,\ldots,\pi_n\}$ be a $(k,k-t)$-SCID, $n\geq 2$. Define $S:=\langle \pi_1, \ldots, \pi_n \rangle$ and $I:=\langle \pi_i \cap \pi_j \mid 1 \leq i < j \leq n \rangle$. Then: \[ \dim S + \dim I \leq nk .\] \end{theorem} \begin{proof} The proof is by induction on the number of spaces $n$. For the induction base, assume $n=2$. Then $\mathcal{C}=\{\pi_1,\pi_2\}$. Hence, $S=\langle \pi_1,\pi_2\rangle$ has dimension $k+t$ and $I=\pi_1\cap\pi_2$ has dimension $k-t$. In this case, $\dim S + \dim I=2k$, agreeing with the theorem. \par Now assume the theorem is true for $n-1$. Define $\mathcal{C'}:=\{\pi_1,\ldots,\pi_{n-1}\}$, then $\mathcal{C'}$ is a $(k,k-t)$-SCID by Lemma \ref{lemma}. Hence, if we define $S':=\langle \pi_1, \ldots, \pi_{n-1} \rangle$ and $I':=\langle \pi_i \cap \pi_j \mid 1 \leq i < j \leq n-1 \rangle$, then $\dim S' + \dim I' \leq (n-1)k$, by the induction hypothesis. \par Define $A:=\langle\pi_1 \cap \pi_n,\pi_2\cap\pi_n,\ldots,\pi_{n-1}\cap\pi_n\rangle$, so $A$ is the space spanned by all intersections of $\pi_1,\ldots,\pi_{n-1}$ with the space $\pi_n$. Then $k-t \leq \dim A \leq k$, since $\pi_1\cap\pi_n \subseteq A \subseteq \pi_n$. Note that $I = \langle I',A \rangle$, such that: \[ \dim I = \dim I' + \dim A - \dim (A \cap I') .\] Let $0 \leq \delta \leq t$ be such that $\dim A = k-t+\delta$, then: \begin{align}\label{I} \dim I \leq \dim I' + k-t+\delta .\end{align} On the other hand, from $S=\langle S', \pi_n \rangle$, it follows: \[ \dim S = \dim S' + \dim \pi_n - \dim (\pi_n \cap S') .\] But $A \subseteq \pi_n \cap S'$, hence $\dim (\pi_n \cap S') \geq \dim A = k-t+\delta$. 
Together with $\dim \pi_n=k$, this results in the following inequality: \begin{align}\label{S} \dim S \leq \dim S' + k - (k-t+\delta) = \dim S' + t -\delta .\end{align} Combining (\ref{I}) and (\ref{S}) with the induction hypothesis, we find: \begin{align*} \dim S + \dim I & \leq \dim S' + t -\delta + \dim I' + k-t+\delta \\ & \leq (n-1)k + k\\ & \leq nk ,\end{align*} which concludes the proof. \end{proof}
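To illustrate Theorem \ref{main}, consider a $(k,k-t)$-sunflower $\{\pi_1,\ldots,\pi_n\}$ with center $C$ whose petals are in general position: write $\pi_i=\langle C,W_i\rangle$ with $\dim W_i=t$, and assume that $\dim\langle C,W_1,\ldots,W_n\rangle=(k-t)+nt$ (which requires a large enough ambient space). A dimension count shows that every pairwise intersection equals $C$, so $I=C$ and $S=\langle C,W_1,\ldots,W_n\rangle$, giving

```latex
\begin{align*}
  \dim S + \dim I &= \bigl((k-t)+nt\bigr)+(k-t)\\
                  &= nk-(n-2)(k-t)\ \leq\ nk ,
\end{align*}
```

with equality if and only if $n=2$ or $k=t$ (the partial spread case).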
The natural question that arises now is whether this bound is sharp. In Theorem \ref{construction}, a construction of a SCID reaching this upper bound is given, under the assumption that $(n-1)(k-t)\leq k$. Hence, under this assumption, the bound given in Theorem \ref{main} is sharp. \par
\begin{theorem}\label{construction} Let $\mathcal{C}=\{\pi_1,\ldots,\pi_n\}$ be a $(k,k-t)$-SCID. Define $S:=\langle \pi_1, \ldots, \pi_n \rangle$ and $I:=\langle \pi_i \cap \pi_j \mid 1 \leq i < j \leq n \rangle$. Let $(n-1)(k-t)\leq k$. Then \[ \dim S + \dim I = nk \] if and only if there exist $(k-t)$-spaces $V_{ij}$, for each $1\leq i<j\leq n$, and $(k-(n-1)(k-t))$-spaces $U_i$, for each $1\leq i \leq n$, such that the following conditions hold: \begin{enumerate} \item For each $1\leq i < j \leq n$, $V_{ij}=\pi_i \cap \pi_j$. \item For each $1 \leq i \leq n$, $\pi_i = \langle U_i, \pi_i \cap \pi_j \mid i\neq j\rangle$. \item The dimension of the span of all the spaces above is maximal, i.e., \begin{align*} &\dim \langle U_i, V_{lj} \mid i \in \{1,\ldots, n\} \text{ and } 1\leq l < j \leq n \rangle \\ &=n(k-(n-1)(k-t))+\frac{n(n-1)}{2}(k-t). \end{align*} \end{enumerate} \end{theorem} \begin{proof} First note that to find the dimension of the space in the third condition, we can just sum up the dimensions of the spaces $U_i$ and $V_{lj}$. This gives us the formula in the third condition: \begin{align*} \sum_{i=1}^{n}{\dim U_i} + \sum_{1\leq i < j \leq n}{\dim V_{ij}} = n(k-(n-1)(k-t))+\frac{n(n-1)}{2}(k-t) .\end{align*} \par For the first part of the proof, assume that all the listed conditions hold for $\mathcal{C}$. Then we want to prove that $\dim S + \dim I=nk$. The third condition implies that the span of all spaces $V_{ij}$, with $1\leq i < j \leq n$, must be maximal. So to find the dimension of $I$, we just need to sum up the dimensions of the spaces $V_{ij}$: \begin{align}\label{I2} \dim I = \sum_{1\leq l < j \leq n}{\dim V_{lj}}=\frac{n(n-1)}{2}(k-t) .\end{align} On the other hand, the first two conditions imply that $S=\langle \pi_1, \ldots, \pi_n \rangle=\langle U_i, V_{lj} \mid i \in \{1,\ldots, n\} \text{ and } 1\leq l < j \leq n \rangle$. 
The third condition immediately implies: \begin{align}\label{S2} \dim S = n(k-(n-1)(k-t))+\frac{n(n-1)}{2}(k-t) .\end{align} Combining (\ref{I2}) and (\ref{S2}), we get that $\dim S+\dim I=nk$.
This completes the first part of the proof. \par The remainder of the proof is again by induction on the size $n$ of $\mathcal{C}$. For the induction base, assume $n=2$. Then it is not hard to see that the listed conditions must hold. \par Now assume that the theorem is true for any $(k,k-t)$-SCID with size $n-1$ and that we have a $(k,k-t)$-SCID $\mathcal{C}=\{\pi_1,\ldots,\pi_n\}$ of size $n$ such that $\dim S + \dim I=nk$. Now define $\mathcal{C'}:=\{\pi_1,\ldots,\pi_{n-1}\}$, $S':=\langle\pi_1,\ldots,\pi_{n-1}\rangle$ and $I':=\langle\pi_i\cap\pi_j \mid 1\leq i < j \leq n-1\rangle$. Then, by Lemma \ref{lemma}, $\mathcal{C'}$ is a $(k,k-t)$-SCID. \par Note that we can only have $\dim S + \dim I$ maximal if equality holds in (\ref{I}) and (\ref{S}) in every induction step of the proof of Theorem \ref{main}. Adding up these two equalities, we find that $\dim S + \dim I = \dim S' + \dim I' + k$, implying that also $\dim S'+\dim I'=(n-1)k$ must be maximal. Applying the induction hypothesis to $\mathcal{C'}$ now gives us the following: \begin{enumerate} \item There exist $(k-t)$-spaces $V_{ij}$, for each $1\leq i <j \leq n-1$, such that $V_{ij}=\pi_i\cap\pi_j$. \item There exist $(k-(n-2)(k-t))$-spaces $U'_i$, for each $1 \leq i \leq n-1$, such that $\pi_i = \langle U'_i, \pi_i \cap \pi_j \mid i\neq j\rangle$. \item The span of the spaces above has maximal dimension.
\end{enumerate} Define $A:=\langle\pi_1 \cap \pi_n,\pi_2\cap\pi_n,\ldots,\pi_{n-1}\cap\pi_n\rangle$, and $\delta \geq 0$ such that $\dim A = k-t+\delta$. As remarked before, equality must hold in (\ref{I}) in the proof of Theorem \ref{main}: \[ \dim I = \dim I'+k-t+\delta ,\] hence, $\dim (A \cap I')=0$. Now define $V_{in}:=\pi_i\cap\pi_n$, for all $1\leq i <n$. Then $\dim(A\cap I')=0$ implies that $V_{in}\subseteq U'_i$, so the third property implies that the span $\langle V_{ij} \mid 1\leq i < j \leq n \rangle$ has maximal dimension. Arguing as in the first part of the proof, we now find that $\dim I = \frac{n(n-1)}{2}(k-t)$.\par Now choose $(k-(n-1)(k-t))$-spaces $U_i \subseteq U'_i$, for $1\leq i < n$, such that $U'_i=\langle U_i, V_{in}\rangle$. Also choose a $(k-(n-1)(k-t))$-space $U_n$ such that $\pi_n=\langle V_{in}, U_n \mid 1\leq i < n \rangle$. For these choices of $V_{ij}$, with $1\leq i<j \leq n$, and $U_i$, with $1\leq i \leq n$, the first two conditions are fulfilled. \par Note that $S=\langle U_i, V_{lj} \mid i \in \{1,\ldots, n\} \text{ and } 1\leq l < j \leq n \rangle$, while for the dimension of $S$ we have: \[ \dim S = nk-\dim I= nk - \frac{n(n-1)}{2}(k-t) ,\] which is exactly the sum of the dimensions of the spaces $V_{ij}$, with $1\leq i<j \leq n$ and the spaces $U_i$, with $1\leq i \leq n$. This is precisely what the third condition states. \end{proof}
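To see Theorem \ref{construction} at work, consider the smallest case in which the spaces $U_i$ are trivial: $k=2$, $t=1$ and $n=3$, so that $(n-1)(k-t)=2\leq k$ and $k-(n-1)(k-t)=0$. Taking a basis $e_1,e_2,e_3$ of the ambient space $\mathbb{F}_q^3$, put

```latex
\[
  \pi_1=\langle e_1,e_2\rangle,\qquad
  \pi_2=\langle e_1,e_3\rangle,\qquad
  \pi_3=\langle e_2,e_3\rangle .
\]
% Pairwise intersections: V_{12}=<e_1>, V_{13}=<e_2>, V_{23}=<e_3>, so
% dim S = dim I = 3 and  dim S + dim I = 6 = nk.
```

The conditions of Theorem \ref{construction} are satisfied with $V_{12}=\langle e_1\rangle$, $V_{13}=\langle e_2\rangle$, $V_{23}=\langle e_3\rangle$ and trivial spaces $U_1=U_2=U_3$.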
Note that the condition $(n-1)(k-t)\leq k$ is necessary for the construction in Theorem \ref{construction} to work. Moreover, it follows from the proof that if this condition does not hold, then there exists no $(k,k-t)$-SCID with $\dim S + \dim I = nk$. This means that the bound given in Theorem \ref{main} is sharp if and only if the condition $(n-1)(k-t)\leq k$ holds.
The objective now is to gain more insight into what happens when the condition $(n-1)(k-t) \leq k$ does not hold. For this purpose, it is useful to consider the case where $n=3$. \par In the case $2(k-t)>k$, the bound from Theorem \ref{main}, although valid, cannot be sharp. Lemma \ref{three} provides an upper bound that is also valid for all values of $k$ and $t$, but which is in fact an improvement in the case $2(k-t)>k$. \par
\begin{lemma}\label{three} Let $\mathcal{C}=\{\pi_1,\pi_2,\pi_3\}$ be a $(k,k-t)$-SCID. Define $S:=\langle \pi_1,\pi_2,\pi_3 \rangle$ and $I:=\langle\pi_1\cap\pi_2,\pi_1\cap\pi_3,\pi_2\cap\pi_3\rangle$. Then $\dim S + \dim I \leq 2(k+t)$. \end{lemma} \begin{proof} Write $T:=\pi_1\cap\pi_2\cap\pi_3$ and define $\epsilon \geq 0$ such that $\dim T=k-t-\epsilon$. Since $(\pi_1\cap\pi_2)\cap(\pi_1\cap\pi_3)=T$, the space $\langle\pi_1\cap\pi_2,\pi_1\cap\pi_3\rangle$ has dimension $2(k-t)-(k-t-\epsilon)=k-t+\epsilon$; moreover, it is contained in $\pi_1$, so its intersection with $\pi_2\cap\pi_3$ is exactly $T$. Similarly, $\langle\pi_1\cap\pi_3,\pi_2\cap\pi_3\rangle$ is a subspace of $\pi_3\cap\langle\pi_1,\pi_2\rangle$ of dimension $k-t+\epsilon$, while $\dim\langle\pi_1,\pi_2\rangle=k+t$. \\ Then: \begin{align*} \dim I &= (k-t+\epsilon)+(k-t)-(k-t-\epsilon)=k-t+2\epsilon \\ \dim S & \leq (k+t)+k-(k-t+\epsilon)=k+t+t-\epsilon, \end{align*} such that $\dim S + \dim I \leq k+t+t-\epsilon +k-t+2\epsilon = 2k+t+\epsilon$. Note however that $\epsilon \leq k- (k-t) = t$, such that $\dim S + \dim I \leq 2k+2t$. \end{proof}
Note that the equality $\dim S+\dim I=2(k+t)$ only occurs when $\epsilon=t$. In that case, we have $\pi_1=\langle \pi_1\cap \pi_2,\pi_1\cap\pi_3\rangle$, $\pi_2=\langle \pi_1\cap \pi_2,\pi_2\cap\pi_3\rangle$ and $\pi_3=\langle \pi_2\cap \pi_3,\pi_1\cap\pi_3\rangle$. \par Inspired by this lemma, we prove a new bound on $\dim S + \dim I$ that is valid for all values of $n \geq 3$, $k$ and $t$. \par
\begin{theorem} Let $\mathcal{C}=\{\pi_1,\ldots,\pi_n\}$ be a $(k,k-t)$-SCID, with size $n\geq 3$. Define $S:=\langle\pi_1,\ldots,\pi_n\rangle$ and $I:=\langle\pi_i\cap\pi_j \mid 1\leq i<j\leq n\rangle$. Then: \[ \dim S + \dim I \leq (n-1)k +2t .\] \end{theorem} \begin{proof} Define $S':=\langle\pi_1,\ldots,\pi_{n-1}\rangle$ and $I':=\langle\pi_i\cap\pi_j \mid 1\leq i<j\leq n-1\rangle$. Then, by Theorem \ref{main}, we know that: \begin{align}\label{ind} \dim S' +\dim I' \leq (n-1)k .\end{align} Now define $A:=\langle\pi_1\cap\pi_n,\ldots,\pi_{n-1}\cap\pi_n\rangle$ and let $\delta\geq 0$ be such that $\dim A = k-t+\delta$. \par Note that $I=\langle I',A\rangle$, such that: \begin{align}\label{Idim}
\dim I = \dim I' + \dim A - \dim (A \cap I'). \end{align} Define $B:=\langle\pi_1\cap\pi_2,\pi_1\cap\pi_3,\ldots,\pi_1\cap\pi_{n-1}\rangle$, the space spanned by all the intersections with $\pi_1$, except for $\pi_1\cap\pi_n$. Then $\pi_1\cap\pi_2\subseteq B$, such that $\dim B \geq k-t$. Moreover we have $\langle \pi_1\cap\pi_n,B\rangle \subseteq \pi_1$, such that:
\begin{align*}
\dim \pi_1 &\geq \dim(\pi_1\cap\pi_n) + \dim B - \dim (\pi_1\cap\pi_n\cap B) \\
\Rightarrow \quad k &\geq k-t + k-t - \dim (\pi_1\cap\pi_n\cap B) \\
\Rightarrow \quad \dim (\pi_1\cap\pi_n\cap B) &\geq k-2t.
\end{align*}
Since $\pi_1\cap\pi_n\cap B\subseteq A\cap I'$, it follows that $\dim( A\cap I' )\geq k-2t$. Combining this with (\ref{Idim}) we get: \begin{align}\label{I3} \dim I \leq \dim I' + k-t+\delta-(k-2t)=\dim I'+t+\delta. \end{align} On the other hand, $S=\langle S',\pi_n\rangle$ and $A\subseteq S'\cap \pi_n$, such that: \begin{align}\label{S3} \dim S \leq \dim S' + \dim \pi_n -\dim A \leq \dim S' + t - \delta. \end{align} Combining (\ref{I3}) and (\ref{S3}) with (\ref{ind}), we find: \begin{align*} \dim S + \dim I &\leq \dim S' + t -\delta + \dim I' + t + \delta \\ & \leq (n-1)k + 2t. \end{align*} \end{proof}
Comparing this new bound to the bound given in Theorem \ref{main}, we can now distinguish three cases: \begin{itemize}
\item[$k<2t$] In this case, $nk < (n-1)k+2t$, so the bound from Theorem \ref{main} is the best bound we have. By Theorem \ref{construction}, this bound is sharp if and only if the inequality $(n-1)(k-t)\leq k$ holds.
\item[$k=2t$] Now we have $nk=(n-1)k+2t$, such that the new bound is the same as the bound given in Theorem \ref{main}. Note that in this case $(n-1)(k-t)\leq k \Leftrightarrow n \leq 3$, such that the bound is only sharp for $n \leq 3$. For $n>3$, we will show in Theorem \ref{ugly} that $\dim S + \dim I \leq nk-(n-3)$. We do not know whether this bound is sharp.
\item[$k>2t$] Then $(n-1)k+2t < nk$, such that the new bound is an improvement compared to Theorem \ref{main}. In Theorem \ref{ugly}, we will show that $\dim S + \dim I \leq 2k + 2(n-2)t-(n-3)$. Note that, given $k>2t$, we have for $n\geq 3$:
\begin{align*}
2k + 2(n-2)t-(n-3) &= 2k + (n-3)2t + 2t - (n-3)\\
& < 2k + (n-3)k + 2t \\
& = (n-1)k+2t,
\end{align*}
such that Theorem \ref{ugly} indeed gives us a better bound for $n\geq 3$. \end{itemize}
\begin{theorem}\label{ugly} For $n\geq 3$, let $\mathcal{C}=\{\pi_1,\ldots,\pi_n\}$ be a $(k,k-t)$-SCID, with $k\geq 2t$. Define $S:=\langle\pi_1,\ldots,\pi_n\rangle$ and $I:=\langle\pi_i\cap\pi_j \mid 1\leq i<j\leq n\rangle$. Then: \[ \dim S + \dim I \leq 2k+2(n-2)t-(n-3) .\] \end{theorem} \begin{proof} The proof is by induction on $n$. \par For the induction base, consider $n=3$. If $k>2t$, it follows from Lemma \ref{three} that $\dim I + \dim S \leq 2k+2t$, agreeing with the theorem. If $k=2t$, then we have by Theorem \ref{main} that $\dim S + \dim I \leq 3k=2k+2t$. \par Now assume that the theorem is true for $(k,k-t)$-SCIDs with $n-1$ elements. Then it is in particular true for $\mathcal{C'}:=\{\pi_1,\ldots,\pi_{n-1}\}$. Define $S':=\langle\pi_1,\ldots,\pi_{n-1}\rangle$ and $I':=\langle\pi_i\cap\pi_j \mid 1\leq i < j\leq n-1\rangle$. Then the induction hypothesis implies: \begin{align}\label{ind2} \dim S' + \dim I' \leq 2k+2(n-3)t-(n-4) .\end{align} Define $A:=\langle\pi_1\cap\pi_n,\ldots,\pi_{n-1}\cap\pi_n\rangle$ to be the space spanned by the intersections with the space $\pi_n$. Then $\dim A = k-t+\delta$, for a certain value $\delta \geq 0$. Note that $S=\langle S', \pi_n\rangle$ and that $A\subseteq \pi_n\cap S'$, thus we have: \begin{align}\label{S5} \dim S \leq \dim S' + \dim \pi_n - \dim A = \dim S' +t -\delta. \end{align} We can now repeat the same argument as in the previous proof, to find that $\dim (A \cap I') \geq k-2t$. We distinguish between two cases: \begin{itemize}
\item Case 1: $\dim (A \cap I') = k-2t$. \par
For any $i, j$, with $1\leq i < j \leq n-1$, we have $\langle\pi_i\cap\pi_n,\pi_j\cap\pi_n\rangle\subseteq\pi_n$, implying:
\begin{align*}
\dim \pi_n &\geq \dim (\pi_i\cap\pi_n) + \dim (\pi_j\cap\pi_n)-\dim (\pi_i\cap\pi_j\cap\pi_n) \\
\Rightarrow k &\geq k-t+k-t-\dim (\pi_i\cap\pi_j\cap\pi_n).
\end{align*}
We find $\dim (\pi_i\cap\pi_j\cap\pi_n)\geq k-2t$. Since $\dim(A\cap I')=k-2t$ and $\pi_i\cap\pi_j\cap\pi_n\subseteq A \cap I'$, we have that $\pi_i\cap\pi_j\cap\pi_n=A\cap I'$. Since this argument is independent of the choice of $i$ and $j$, it follows that all intersections $\pi_i\cap\pi_j\cap\pi_n$ must coincide, for $1\leq i < j \leq n-1$. Hence, the intersections $\pi_i\cap\pi_n$ form a $(k-t,k-2t)$-sunflower inside $\pi_n$.\par
Now let $X$ be a $t$-dimensional space skew to $\pi_n$, such that $\pi_1\cap\pi_2=\langle\pi_1\cap\pi_2\cap\pi_n,X\rangle$. Then, $\pi_1=\langle\pi_1\cap\pi_n,X\rangle\subseteq\langle\pi_n,X\rangle$ and $\pi_n\subseteq\langle\pi_n,X\rangle$. For any $i$, with $1<i<n$, we have $\pi_i\cap\pi_1\subseteq\pi_i$, $\pi_i\cap\pi_n\subseteq\pi_i$ and
\begin{align*}
\dim\langle\pi_i\cap\pi_1,\pi_i\cap\pi_n\rangle &= \dim(\pi_i\cap\pi_1)+\dim(\pi_i\cap\pi_n)-\dim(\pi_1\cap\pi_i\cap\pi_n) \\
&=k-t+k-t-(k-2t) \\
&=k.
\end{align*}
This implies that $\pi_i=\langle\pi_i\cap\pi_1,\pi_i\cap\pi_n\rangle\subseteq\langle\pi_1,\pi_n\rangle\subseteq\langle\pi_n,X\rangle$. Hence, $S \subseteq \langle\pi_n,X\rangle$ and $I \subseteq \langle\pi_n,X\rangle$. \par
We have that $\dim\langle\pi_n,X\rangle=k+t$, which implies that $\dim I \leq k+t$ and $\dim S \leq k+t$, so $\dim S + \dim I \leq 2(k+t)$. Since $2k+2(n-2)t-(n-3)-2(k+t)=(n-3)(2t-1)\geq 0$ for $n\geq 3$ and $t\geq 1$, this bound does not exceed the one stated in the theorem.
\item Case 2: $\dim (A \cap I') > k-2t$. \par
We have that $I=\langle I',A\rangle$, so that \begin{align}\label{I5} \dim I &= \dim I'+ \dim A - \dim (A \cap I') \nonumber \\ & < \dim I' + k-t+\delta-(k-2t) \nonumber \\ &=\dim I'+t+\delta. \end{align} Since dimensions are integers, (\ref{I5}) gives $\dim I \leq \dim I'+t+\delta-1$. Combining this with (\ref{S5}) and (\ref{ind2}) we get: \begin{align*} \dim S + \dim I &\leq \dim S' + t - \delta + \dim I'+t+\delta -1\\ & \leq 2k+2(n-3)t-(n-4) + 2t-1 \\ & = 2k+2(n-2)t-(n-3). \end{align*} \end{itemize} \end{proof}
We conclude this section with a summary of the bounds for different values of $n$, $k$ and $t$, given by Table \ref{tabel}. \par \begin{table} \centering
\begin{tabular}{|c|c|c|}
\hline
Condition&Upper bound for $\dim S + \dim I$&Sharpness?\\
\hline
\hline
$(k-t)(n-1) \leq k$ & $nk$ & yes \\
\hline
$k\geq 2t$, $n\geq 3$ & $2k+2(n-2)t-(n-3)$ & unknown \\
\hline
\begin{tabular}{@{}c@{}}$k<2t$ \\ $(k-t)(n-1) > k$\end{tabular} & $nk$ & no \\
\hline \end{tabular} \caption{Summary of the best bounds found for $\dim S +\dim I$, for different values of $n$, $k$ and $t$.} \label{tabel} \end{table}
\section{Spectrum results}\label{spec} In this section we construct examples of SCIDs for several values of $\dim S + \dim I$. In the second subsection, we assume that the condition $(n-1)(k-t)\leq k$ holds, and adapt the construction from Theorem \ref{construction} to obtain new SCIDs with $\dim S + \dim I = nk-\epsilon$. In the third subsection, we drop the assumption $(n-1)(k-t)\leq k$ and use field reduction to construct sunflowers, which are particular examples of SCIDs, for even smaller values of $\dim S + \dim I$. The concept of field reduction is explained in the first subsection. We finish this section with a summary of the spectrum results we obtained.\par \subsection{Field reduction} Field reduction is a powerful tool in finite geometry. The method is described for projective spaces in \cite{mbs} and for projective spaces and polar spaces in \cite{fieldred}. For our purposes, however, we will consider field reduction in vector spaces. \par The idea relies on the fact that $\mathbb{F}_{q^t}$ is a $t$-dimensional vector space over $\mathbb{F}_q$. Hence, all 1-dimensional subspaces of a vector space $V(n,q^t)$ correspond to $t$-dimensional subspaces of the vector space $V(nt,q)$. Moreover, the set of all 1-dimensional spaces of $V(n,q^t)$ corresponds to a $t$-spread in $V(nt,q)$. This spread is often called a \emph{Desarguesian spread}. \par Note that a $d$-dimensional subspace in $V(n,q^t)$ corresponds to a $dt$-dimensional subspace in $V(nt,q)$. Hence, we have the following lemma:
\begin{lemma}\label{fred} Using field reduction, a set of 1-dimensional subspaces in $V(n,q^t)$ spanning a $d$-dimensional space corresponds to a partial $t$-spread in $V(nt,q)$ spanning a $dt$-dimensional space. \end{lemma}
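As a small illustration of the lemma (our own example, for $n=3$ and $t=2$): for $v\neq 0$ and any $\lambda\in\mathbb{F}_{q^2}\setminus\mathbb{F}_q$, the point $\langle v\rangle_{\mathbb{F}_{q^2}}$ of $V(3,q^2)$ becomes the plane
\[
\langle v\rangle_{\mathbb{F}_{q^2}} \subseteq V(3,q^2)
\quad\longleftrightarrow\quad
\langle v,\lambda v\rangle_{\mathbb{F}_{q}} \subseteq V(6,q),
\]
and a set of such points spanning a $d$-dimensional space on the left corresponds to a partial $2$-spread spanning a $2d$-dimensional space on the right.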
\subsection{Spectrum result on SCIDs} In the proof of Theorem \ref{construction}, we constructed a SCID such that both $\dim I$ and $\dim S$ were maximal. This way the sum $\dim S + \dim I$ was maximal as well. In this section we want to construct SCIDs with smaller values for this sum. In order to do this, we adapt the construction from Theorem \ref{construction} in such a way that $\dim I$ decreases, while $\dim S$ stays as large as possible. \par The idea behind the proof of Theorem \ref{spec1} is that instead of having all pairwise intersections span a space of maximal dimension, we now demand that there is some overlap between three spaces of the SCID. This causes $\dim I$ to decrease. On the other hand, we want $\dim S$ to be as large as possible under this requirement. So apart from the overlap, we need all spaces of the SCID to span maximal dimension. \par \begin{theorem}\label{spec1} If $(n-1)(k-t)\leq k$ and $0\leq \epsilon \leq k-t$, then there exists a $(k,k-t)$-SCID $\{\pi_1,\ldots,\pi_n\}$, such that \[ \dim S + \dim I =nk - \epsilon, \] with $S:=\langle\pi_1,\ldots,\pi_n\rangle$ and $I:=\langle\pi_i\cap\pi_j \mid i\neq j\rangle$. \end{theorem} \begin{proof} If $\epsilon=0$, then a construction is given by Theorem \ref{construction}. \par For $\epsilon>0$, consider a $(k,k-t)$-SCID $\mathcal{C}=\{\pi_1,\ldots,\pi_n\}$ constructed as in the proof of Theorem \ref{construction}. We will slightly adapt this SCID in order to construct a new SCID $\mathcal{C'}=\{\pi'_1,\ldots,\pi'_n\}$. In this new SCID, there will be an $\epsilon$-dimensional overlap between the three spaces $\pi'_1$, $\pi'_2$ and $\pi'_n$. \par Consider an $\epsilon$-dimensional subspace $E \subseteq \pi_1\cap\pi_n$ and two $(k-t-\epsilon)$-dimensional subspaces $D_1 \subseteq \pi_1\cap\pi_2$ and $D_2 \subseteq \pi_2\cap\pi_n$. From the conditions in Theorem \ref{construction}, it follows that $\langle E,D_1,D_2\rangle$ has maximal dimension. 
We choose our spaces from $\mathcal{C'}$ such that the space $E$ is the overlap between $\pi'_1$, $\pi'_2$ and $\pi'_n$, and such that $\pi'_1\cap\pi'_2=\langle E, D_1\rangle$ and $\pi'_2\cap\pi'_n=\langle E, D_2\rangle$. \par By shifting the intersections like this, we lose $\epsilon$ dimensions from each of the three spaces $\pi_1$, $\pi_2$ and $\pi_n$. To compensate for this loss, choose $\epsilon$-dimensional spaces $P_1$, $P_2$ and $P_n$, such that these spaces span maximal dimension together with the elements of $\mathcal{C}$, i.e., $\dim \langle P_1,P_2,P_n, \pi_i \mid 1 \leq i \leq n \rangle$ is maximal. \par Remember that by Theorem \ref{construction}, there exist $(k-(n-1)(k-t))$-dimensional spaces $U_1,\ldots,U_n$, such that for each $1\leq i\leq n$, the space $U_i$ is skew to $\langle\pi_1,\ldots,\pi_{i-1},\pi_{i+1},\ldots,\pi_n\rangle$ and such that $\pi_i=\langle U_i, \pi_i\cap\pi_j \mid i\neq j \rangle $. Now we have all components to define the spaces of $\mathcal{C}'$: \begin{align*} \pi'_1 &:= \langle D_1,P_1,U_1,\pi_1\cap\pi_j \mid 3\leq j \leq n\rangle \\ \pi'_2 &:= \langle E,D_1,D_2,P_2,U_2,\pi_2\cap\pi_j \mid 3\leq j < n\rangle \\ \pi'_j&:=\pi_j \text{, for } 3\leq j < n \\ \pi'_n &:= \langle D_2,P_n,U_n,\pi_1\cap\pi_n,\pi_j\cap\pi_n \mid 3\leq j < n\rangle. \end{align*} We first want to show that this in fact defines a $(k,k-t)$-SCID. For the dimensions of the spaces $\pi'_1,\ldots,\pi'_n$, note that we can just sum up the dimensions of the spaces in their definitions above. This follows directly from the third condition in Theorem \ref{construction}. We now find: \begin{align*} \dim \pi'_1 &= (k-t-\epsilon) + \epsilon + (k-(n-1)(k-t)) + (n-2)(k-t)=k \\ \dim \pi'_2 &= \epsilon + 2(k-t-\epsilon) + \epsilon + (k-(n-1)(k-t)) + (n-3)(k-t) = k \\ \dim \pi'_j&=\dim \pi_j = k \text{, for } 3\leq j < n \\ \dim \pi'_n&=(k-t-\epsilon) + \epsilon + (k-(n-1)(k-t)) + (k-t)+(n-3)(k-t)=k. 
\end{align*} Moreover, for the intersections we have that $\pi'_i\cap\pi'_j=\pi_i\cap\pi_j$, for $1\leq i<j\leq n$, except for $\pi'_1\cap\pi'_2$ and $\pi'_2\cap\pi'_n$. For these intersections we have that $\pi'_1\cap\pi'_2=\langle D_1,E\rangle$ and $\pi'_2\cap\pi'_n=\langle D_2,E\rangle$. Note that $E\subseteq \pi_1\cap \pi_n$. Hence all pairwise intersections of the spaces $\pi'_1,\ldots,\pi'_n$ have dimension $k-t$. This implies that $\mathcal{C'}=\{\pi'_1,\ldots,\pi'_n\}$ is indeed a $(k,k-t)$-SCID. \par Now define $S:=\langle\pi'_1,\ldots,\pi'_n\rangle$ and $I:=\langle\pi'_i\cap\pi'_j \mid 1\leq i<j\leq n\rangle$. Note that, since $E\subseteq\pi'_1\cap\pi'_n$: \[ I=\langle D_1,D_2,\pi'_i\cap\pi'_j\rangle ,\] where $(i,j)\in\{1,\ldots,n\}^2\setminus \{(1,2),(2,n)\}$. By the definitions above and the third condition of Theorem \ref{construction}, we can sum up the dimensions of these spaces to find the dimension of $I$: \begin{align}\label{I6} \dim I &= (k-t-\epsilon) + (k-t-\epsilon) + \left(\frac{n(n-1)}{2}-2\right)(k-t) \nonumber \\ & = \frac{n(n-1)}{2}(k-t)-2\epsilon. \end{align} On the other hand, note that: \[ S = \langle D_1, D_2, P_1, P_2, P_n, U_l, \pi'_i\cap\pi'_j\rangle, \] where $1\leq l \leq n$ and $(i,j)\in\{1,\ldots,n\}^2\setminus \{(1,2),(2,n)\}$. Note again that $E\subseteq \pi'_1\cap\pi'_n$. Again by the third condition of Theorem \ref{construction} and by the way we defined these spaces, we can find the dimension of $S$ by summing the dimensions: \begin{align}\label{S6} \dim S &= 2(k-t-\epsilon) + 3\epsilon + n(k-(n-1)(k-t)) + \left(\frac{n(n-1)}{2}-2\right)(k-t) \nonumber\\ &=\frac{n(n-1)}{2}(k-t)+\epsilon+ n(k-(n-1)(k-t)) .\end{align} Combining (\ref{I6}) and (\ref{S6}), we get: \[\dim S + \dim I =nk-\epsilon.\]
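In detail, the combination of (\ref{I6}) and (\ref{S6}) unwinds as:
\begin{align*}
\dim S + \dim I
&= n(n-1)(k-t) + \epsilon - 2\epsilon + n\bigl(k-(n-1)(k-t)\bigr) \\
&= n(n-1)(k-t) - \epsilon + nk - n(n-1)(k-t) \\
&= nk - \epsilon.
\end{align*}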
We can conclude that $\mathcal{C'}$ is a $(k,k-t)$-SCID fulfilling the condition of the theorem. \end{proof}
The idea of this proof can be generalized to construct SCIDs with lower values of the sum $\dim S + \dim I$. In the first part of the proof of Theorem \ref{spec2}, we let intersections \emph{coincide} instead of just having some overlap. This again causes $\dim I$ to decrease. Meanwhile we keep $\dim S$ as large as possible by choosing spaces that span maximal dimension. \par In the second part of the proof, we adapt the construction from the first part by using a technique similar to that of Theorem \ref{spec1}. \par \begin{theorem}\label{spec2} If $(n-1)(k-t)\leq k$, $0\leq \epsilon \leq k-t$ and $2\leq\eta \leq n-1$, then there exists a $(k,k-t)$-SCID $\{\pi_1,\ldots,\pi_n\}$ such that: \[ \dim S + \dim I =nk -(\eta-2)(k-t)- \epsilon ,\] with $S:=\langle\pi_1,\ldots,\pi_n\rangle$ and $I:=\langle\pi_i\cap\pi_j \mid i\neq j\rangle$. \end{theorem} \begin{proof} For $\eta=2$, this is exactly Theorem \ref{spec1}. So assume $\eta>2$. We distinguish between two cases, based on the value of $\epsilon$. \par \textbf{Case 1: $\epsilon=0$} \newline Let $\mathcal{C}$ be a $(k,k-t)$-SCID as constructed in Theorem \ref{construction}. Just like in the previous proof, we will make adaptations to $\mathcal{C}$ in order to obtain a new $(k,k-t)$-SCID $\mathcal{C}'=\{\pi'_1,\ldots,\pi'_n\}$ fulfilling the conditions of the theorem. For this new SCID, there will be a $(k-t)$-dimensional subspace that all spaces $\pi'_1,\ldots,\pi'_{\eta-1}$ and $\pi'_n$ have in common.\par Now define $D:=\pi_1\cap\pi_n$; this will be the common subspace. But if we shift the elements of the SCID in such a way that all spaces $\pi'_1,\ldots,\pi'_{\eta-1}$ and $\pi'_n$ have $D$ in common, then we lose $(k-t)(\eta-2)$ dimensions in each of these spaces. 
To compensate for this, choose $(k-t)(\eta-2)$-dimensional spaces $P_1, \ldots, P_{\eta-1}$ and $P_n$, such that these spaces span maximal dimension with the elements of $\mathcal{C}$, i.e., $\dim\langle P_1,\ldots,P_{\eta-1},P_n,\pi_i\mid 1\leq i\leq n \rangle$ is maximal. \par Note that, by Theorem \ref{construction}, there exist $(k-(n-1)(k-t))$-dimensional spaces $U_1,\ldots,U_n$, such that for each $1\leq i\leq n$, the space $U_i$ is skew to $\langle\pi_1,\ldots,\pi_{i-1},\pi_{i+1},\ldots,\pi_n\rangle$ and such that $\pi_i=\langle U_i, \pi_i\cap\pi_j \mid i\neq j \rangle $. Now we have all components to define the spaces of $\mathcal{C'}$: \par \begin{align*} \pi'_i&=\langle D,U_i,P_i,\pi_i\cap\pi_j\mid \eta\leq j < n\rangle \text{, for } 1\leq i < \eta \\ \pi'_j&=\pi_j \text{, for } \eta \leq j < n \\ \pi'_n&=\langle D,U_n,P_n,\pi_n\cap\pi_j\mid \eta\leq j < n\rangle \end{align*} We now want to show that this indeed defines a $(k,k-t)$-SCID. To find the dimensions of the spaces $\pi'_1,\ldots,\pi'_n$, note that we can simply sum up the dimensions of the spaces in their definitions above. This follows from the third condition in Theorem \ref{construction}. We now find: \begin{align*} \dim\pi'_i&=(k-t)+(k-(n-1)(k-t))+(k-t)(\eta-2)+(n-\eta)(k-t)\\ &=k \text{, for } 1\leq i < \eta \\ \dim\pi'_j&=\dim\pi_j=k\text{, for } \eta\leq j < n \\ \dim\pi'_n&=(k-t)+(k-(n-1)(k-t))+(k-t)(\eta-2)+(n-\eta)(k-t)\\ &=k. \end{align*} For the pairwise intersections, we have $\pi'_i\cap\pi'_j=\pi_i\cap\pi_j$, if $1\leq i \leq n$ and $\eta\leq j < n$. All other pairwise intersections are equal to the space $D$. We can conclude that all the pairwise intersections of the spaces $\{\pi'_1,\ldots,\pi'_n\}$ have dimension $k-t$, implying $\mathcal{C'}:=\{\pi'_1,\ldots,\pi'_n\}$ is a $(k,k-t)$-SCID.\par Now define $S:=\langle\pi'_1,\ldots,\pi'_n\rangle$ and $I:=\langle\pi'_i\cap\pi'_j \mid 1\leq i < j\leq n\rangle$. 
Note that \[ I = \langle D, \pi'_i\cap\pi'_j\mid 1\leq i \leq n \text{, } \eta\leq j < n \text{ and } i\neq j\rangle, \] where we can add the dimensions of the intersections $\pi'_i\cap\pi'_j$ and $D$, to find $\dim I$. Note that the number of intersections $\pi'_i\cap\pi'_j$ occurring in this expression is \[ \eta(n-\eta)+\frac{(n-\eta)(n-\eta-1)}{2}. \] From this follows the dimension of $I$: \begin{align}\label{I7} \dim I &= (k-t)+\left(\eta(n-\eta)+\frac{(n-\eta)(n-\eta-1)}{2}\right)(k-t), \end{align} where the first value $k-t$ comes from $\dim D$. On the other hand, for $S$ we have: \begin{align*} S&=\langle D, U_1,\ldots,U_n,P_1,\ldots,P_{\eta-1},P_n,\pi'_i\cap\pi'_j\mid 1\leq i \leq n \text{, } \eta\leq j < n \text{ and } i\neq j\rangle .\end{align*} By construction, we can find the dimension of $S$ by summing up all dimensions of the spaces occurring in the expression above. Hence, we find: \begin{align}\label{S7} \dim S =& (k-t)+n(k-(n-1)(k-t))+\eta(k-t)(\eta-2) \nonumber \\ & +\left(\eta(n-\eta)+\frac{(n-\eta)(n-\eta-1)}{2}\right)(k-t) .\end{align} Note that $2\eta(n-\eta)+(n-\eta)(n-\eta-1)=n(n-1)-\eta(\eta-1)$.
Combining this with (\ref{I7}) and (\ref{S7}), we find: \[\dim S+\dim I = nk-(\eta-2)(k-t).\]
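Spelled out, after substituting $n(k-(n-1)(k-t))=nk-n(n-1)(k-t)$ and the displayed identity, the combination collapses as:
\begin{align*}
\dim S + \dim I
&= 2(k-t) + nk - n(n-1)(k-t) + \eta(\eta-2)(k-t) + \bigl(n(n-1)-\eta(\eta-1)\bigr)(k-t) \\
&= 2(k-t) + nk + \bigl(\eta(\eta-2)-\eta(\eta-1)\bigr)(k-t) \\
&= nk + (2-\eta)(k-t) = nk-(\eta-2)(k-t).
\end{align*}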
Hence, $\mathcal{C'}$ is a $(k,k-t)$-SCID fulfilling the condition of the theorem. \par \textbf{Case 2: $\epsilon>0$} \newline
Let $\mathcal{C'}=\{\pi'_1,\ldots,\pi'_n\}$ be a $(k,k-t)$-SCID as constructed in the previous case for $\epsilon=0$. Let the spaces $D$, $P_i$ and $U_j$, for $i \in \{1,\ldots,\eta-1,n\}$ and $j \in \{1,\ldots,n\}$, be as defined in the first case. We will again adapt $\mathcal{C'}$ to construct a $(k,k-t)$-SCID $\mathcal{C''}=\{\pi''_1,\ldots,\pi''_n\}$ meeting the desired conditions. For this, we generalize the technique used in the proof of Theorem \ref{spec1}. \par Let $E \subseteq D=\pi'_1\cap\pi'_n$ be an $\epsilon$-dimensional subspace; this will be the overlap between the spaces $\pi''_1,\ldots,\pi''_\eta$ and $\pi''_n$. Next, consider $(k-t-\epsilon)$-dimensional subspaces $D_1\subseteq \pi'_1\cap\pi'_\eta, \ldots, D_{\eta-1} \subseteq \pi'_{\eta-1}\cap\pi'_\eta$ and $D_n\subseteq\pi'_\eta\cap\pi'_n$. We will choose the elements of $\mathcal{C''}$ in such a way that $\pi''_i\cap\pi''_\eta=\langle D_i,E\rangle$, for $i\in\{1,\ldots,\eta-1,n\}$. \par Shifting the spaces like this again causes a loss of dimensions. Note that we lose $(\eta-1)\epsilon$ dimensions from $\pi'_\eta$. To compensate, choose an $(\eta-1)\epsilon$-dimensional space $Y$, such that $Y$ spans maximal dimension together with the spaces of $\mathcal{C}$, i.e., $\dim \langle Y, \pi_i \mid 1\leq i \leq n\rangle$ is maximal. \par From each of the first $\eta-1$ spaces and the $n$th space we lose $\epsilon$ dimensions. To compensate, choose $\epsilon$-dimensional spaces $X_1,\ldots,X_{\eta-1}$ and $X_n$, such that these spaces span maximal dimension together with $Y$ and the spaces of $\mathcal{C'}$, i.e., $\dim \langle X_1,\ldots, X_{\eta-1},X_n,Y,\pi'_i \mid 1\leq i\leq n\rangle$ is maximal. 
\par Now we have everything we need to define $\mathcal{C''}=\{\pi''_1,\ldots,\pi''_n\}$: \begin{align*} \pi''_i&:=\langle D, D_i,U_i, P_i, X_i, \pi'_i\cap\pi'_j\mid \eta<j<n \rangle\text{, for } 1\leq i < \eta \\ \pi''_\eta&:=\langle E,D_1,\ldots,D_{\eta-1},D_n,U_\eta,Y,\pi'_\eta\cap\pi'_j\mid \eta<j<n\rangle \\ \pi''_j&:=\pi'_j\text{, for } \eta<j<n \\ \pi''_n&:=\langle D,D_n,U_n,P_n,X_n,\pi'_j\cap\pi'_n\mid \eta<j<n\rangle. \end{align*} By the way we defined the spaces in the definitions above, we can just sum up the dimensions of the subspaces to find the dimensions of the spaces $\pi''_i$: \begin{align*} \dim\pi''_i=& (k-t)+(k-t-\epsilon)+(k-(n-1)(k-t))+(\eta-2)(k-t) \\ &+\epsilon+(n-\eta-1)(k-t)\\ =&k\text{, for } 1\leq i <\eta \\ \dim\pi''_\eta=&\epsilon+\eta(k-t-\epsilon)+(k-(n-1)(k-t))+(\eta-1)\epsilon+(n-\eta-1)(k-t) \\ =&k \\ \dim\pi''_j=&\dim\pi'_j=k\text{, for }\eta<j<n \\ \dim\pi''_n=&(k-t)+(k-t-\epsilon)+(k-(n-1)(k-t))+(\eta-2)(k-t) \\ &+\epsilon+(n-\eta-1)(k-t) \\ =&k \end{align*} For the pairwise intersections we have $\pi''_i\cap\pi''_j=\pi'_i\cap\pi'_j$, as long as $i$ and $j$ are different from $\eta$. For $\eta<i<n$, we have $\pi''_\eta\cap\pi''_i=\pi'_\eta\cap\pi'_i$. For $1\leq i <\eta$, $\pi''_i\cap\pi''_\eta=\langle E,D_i \rangle$ and similarly we have $\pi''_\eta\cap\pi''_n=\langle E,D_n\rangle$. We can conclude that all the pairwise intersections of the spaces $\{\pi''_1,\ldots,\pi''_n\}$ have dimension $k-t$, implying $\mathcal{C''}:=\{\pi''_1,\ldots,\pi''_n\}$ is indeed a $(k,k-t)$-SCID.\par Now define $S:=\langle\pi''_1,\ldots,\pi''_n\rangle$ and $I:=\langle\pi''_i\cap\pi''_j\mid 1\leq i < j\leq n\rangle$. Note that, since $E\subseteq D$: \[ I = \langle D,D_1,\ldots,D_{\eta-1},D_n,\pi''_i\cap\pi''_j\rangle ,\] where $1\leq i \leq n$, $\eta<j<n$ and $i\neq j$. 
Note that the number of different intersections $\pi''_i\cap\pi''_j$ occurring in the expression above is: \[ (\eta+1)(n-\eta-1)+\frac{(n-\eta-1)(n-\eta-2)}{2} .\] By construction, we can again sum the dimensions of the spaces occurring in the expression above to find the dimension of $I$: \begin{align}\label{I8} \dim I =& (k-t)+\eta(k-t-\epsilon) \nonumber \\ &+\left((\eta+1)(n-\eta-1)+\frac{(n-\eta-1)(n-\eta-2)}{2}\right)(k-t). \end{align} On the other hand, note for $S$ that: \begin{align*} S = \langle D,D_1,\ldots,D_{\eta-1},D_n,X_1,\ldots,X_{\eta-1},X_n,Y,P_1,\ldots,P_{\eta-1},P_n,U_1,\ldots,U_n,\pi''_i\cap\pi''_j\rangle ,\end{align*} where $1\leq i \leq n$, $\eta<j<n$ and $i\neq j$. Again, the construction allows us to calculate the sum of the dimensions of all the spaces in the expression above to find the dimension of $S$. We now have: \begin{align}\label{S8} \dim S =& (k-t) + \eta(k-t-\epsilon)+\eta\epsilon+(\eta-1)\epsilon \nonumber \\ &+\eta(\eta-2)(k-t)+n(k-(n-1)(k-t)) \nonumber \\ &+ \left((\eta+1)(n-\eta-1)+\frac{(n-\eta-1)(n-\eta-2)}{2}\right)(k-t). \end{align} Combining (\ref{I8}) and (\ref{S8}) and noting that \[2(\eta+1)(n-\eta-1)+(n-\eta-1)(n-\eta-2)=n(n-1)-\eta(\eta+1),\]
we find: \[\dim S + \dim I = nk-(\eta-2)(k-t)-\epsilon. \]
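In detail (our own bookkeeping): in the sum of (\ref{I8}) and (\ref{S8}), substitute $n(k-(n-1)(k-t))=nk-n(n-1)(k-t)$ and the displayed identity; then
\begin{align*}
\text{coefficient of } \epsilon:&\quad -\eta + \bigl(-\eta+\eta+(\eta-1)\bigr) = -1,\\
\text{coefficient of } (k-t):&\quad 2+2\eta+\eta(\eta-2)+\bigl(n(n-1)-\eta(\eta+1)\bigr)-n(n-1) = 2-\eta,
\end{align*}
so that $\dim S + \dim I = nk + (2-\eta)(k-t) - \epsilon = nk-(\eta-2)(k-t)-\epsilon$.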
This shows that $\mathcal{C''}$ meets the desired conditions, finishing the proof. \end{proof}
As long as the condition $(n-1)(k-t)\leq k$ holds, we have established examples of $(k,k-t)$-SCIDs with $\dim S + \dim I = N$, for any integer $N \in[nk-(n-2)(k-t),nk]$.\par
\subsection{Spectrum result on sunflowers}
In the constructions of the previous subsection, we made $\dim I$ smaller while keeping $\dim S$ as large as possible. This method eventually gives rise to a maximal $(k,k-t)$-sunflower, for the case $\dim S + \dim I = nk-(n-2)(k-t)=2k+(n-2)t$.\par Note that for any sunflower we have $\dim I = k-t$, which is the smallest possible dimension for $I$. To construct SCIDs with $\dim S + \dim I \leq 2k+(n-2)t$, it is not possible to further reduce $\dim I$. But since we are dealing with a \emph{maximal} sunflower, we can reduce $\dim S$. In that case we still have a sunflower, which is an example of a SCID.\par From now on, we drop the condition $(n-1)(k-t)\leq k$. The essence of this section lies in field reduction and the following lemma:
\begin{lemma}\label{quot} The existence of a $(k,k-t)$-sunflower spanning dimension $d$ is equivalent to the existence of a partial $t$-spread in a $(d-k+t)$-dimensional space, spanning that $(d-k+t)$-dimensional space. \end{lemma} \begin{proof} Let $S$ be the space spanned by the elements of the sunflower and let $C$ be its center. Then $\dim S=d$, $\dim C=k-t$, and all elements of the sunflower have dimension $k$. \par Now consider the quotient space $S/C$. The elements of the sunflower all contain the center $C$, so in the quotient space they have dimension $k-(k-t)=t$. Since all elements of the sunflower have precisely $C$ as their pairwise intersections, their quotient equivalents must intersect trivially. So they must form a partial $t$-spread in $S/C$. \par Moreover, since the elements of the sunflower span the space $S$, their quotient equivalents must span $S/C$, which has dimension $d-(k-t)=d-k+t$. \end{proof}
Remember that we are working in a vector space $\mathcal{V}$ over the field $\mathbb{F}_q$; otherwise we cannot apply field reduction.
\begin{theorem}\label{sun} If $1\leq\eta\leq n-2$ and $n\leq\frac{q^{t(n-\eta)}-1}{q^t-1}$, then there exists a $(k,k-t)$-SCID $\{\pi_1,\ldots,\pi_n\}$ such that: \[ \dim S + \dim I =2k +(n-2)t-\eta t ,\] with $S:=\langle\pi_1,\ldots,\pi_n\rangle$ and $I:=\langle\pi_i\cap\pi_j \mid i\neq j\rangle$. \end{theorem} \begin{proof} We will construct a sunflower $\mathcal{C}$ meeting the conditions. Note that for a sunflower, $\dim I=k-t$. To have $\mathcal{C}$ fulfill the equality in the theorem, we must have that $\dim S=k+(n-1)t-\eta t$. \par By Lemma \ref{quot}, the existence of such a sunflower is equivalent to the existence of a partial $t$-spread in an $(n-\eta)t$-dimensional vector space $V((n-\eta)t,q)$, spanning that space. We can now use field reduction to guarantee the existence of $\mathcal{C}$. \par Consider the vector space $V(n-\eta,q^t)$. Choose $n$ lines, such that the last $n-\eta$ lines span the complete space $V(n-\eta,q^t)$. Since $n-\eta<n\leq\frac{q^{t(n-\eta)}-1}{q^t-1}$, where $\frac{q^{t(n-\eta)}-1}{q^t-1}$ is the number of lines in $V(n-\eta,q^t)$, this is always possible. By Lemma \ref{fred}, this set of $n$ lines in $V(n-\eta,q^t)$ corresponds to a partial $t$-spread in $V((n-\eta)t,q)$, spanning the whole space. \end{proof}
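For a concrete instance of Theorem \ref{sun} (our own choice of parameters): take $q=2$, $t=1$, $n=5$ and $\eta=2$. The condition reads $5\leq\frac{2^{3}-1}{2-1}=7$, so over $\mathbb{F}_2$ there exists a $(k,k-1)$-sunflower with
\[
\dim S + \dim I = 2k+(5-2)\cdot 1 - 2\cdot 1 = 2k+1,
\]
that is, five $k$-spaces through a common $(k-1)$-dimensional center, together spanning only a $(k+2)$-dimensional space.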
Note that $\frac{q^{t(n-\eta)}-1}{q^t-1}$ is the cardinality of a $t$-spread in an $(n-\eta)t$-dimensional vector space over $\mathbb{F}_q$. By reversing the arguments used in the previous proof, it is clear that there cannot exist a \emph{$(k,k-t)$-sunflower} with $\dim S + \dim I =2k +(n-2)t-\eta t$ if $n>\frac{q^{t(n-\eta)}-1}{q^t-1}$. However, this does not exclude the existence of an example of a $(k,k-t)$-SCID with these parameters. \par
\begin{theorem} If $1\leq\eta\leq n-2$, $0\leq\epsilon<t$, and $n\leq\frac{q^{t(n-\eta)}-1}{q^t-1}$, then there exists a $(k,k-t)$-SCID $\{\pi_1,\ldots,\pi_n\}$ such that: \[ \dim S + \dim I =2k +(n-2)t-\eta t+\epsilon ,\] with $S:=\langle\pi_1,\ldots,\pi_n\rangle$ and $I:=\langle\pi_i\cap\pi_j \mid i\neq j\rangle$. \end{theorem} \begin{proof} For $\epsilon=0$, this is exactly the previous theorem. \par For $\epsilon>0$, we will prove the existence of a $(k,k-t)$-sunflower meeting the conditions, in a similar way as in Theorem \ref{sun}. Choose $n$ lines such that the last $n-\eta$ lines span the complete space $V(n-\eta,q^t)$. Similarly to the previous proof, we have by Lemma \ref{fred} that this set of $n$ lines in $V(n-\eta,q^t)$ corresponds to a partial $t$-spread $\{\pi_1,\ldots,\pi_n\}$ in $V((n-\eta)t,q)$, spanning this whole space. Now embed this space $V=V((n-\eta)t,q)$ in a vector space $V'=V((n-\eta)t+\epsilon,q)$. Choose an $\epsilon$-dimensional space $E$ in $V'$, intersecting trivially with $V$. Then $\langle V,E\rangle=V'$. Consider a $(t-\epsilon)$-dimensional subspace $U$ of $\pi_1$. Now replace $\pi_1$ by $\pi'_1=\langle U,E\rangle$. Then $\{\pi'_1,\pi_2,\ldots,\pi_n\}$ is a partial $t$-spread, spanning $V'$. By Lemma \ref{quot}, there exists a sunflower meeting the conditions. \end{proof}
We have now proved that there exists a $(k,k-t)$-SCID (more precisely, a $(k,k-t)$-sunflower) with $\dim S + \dim I =N$, for any integer $N \in [2k,2k+(n-2)t]$, as long as the condition $n\leq\frac{q^{t(n-\eta)}-1}{q^t-1}$ holds for the corresponding value of $\eta$. Note that a $(k,k-t)$-SCID with $\dim S +\dim I < 2k$ cannot exist.
\subsection{Summary} There exists a $(k,k-t)$-SCID with $n$ elements and with $\dim S + \dim I = N$, \begin{itemize}
\item for any integer $N \in[2k+(n-2)t,nk]$, if $(n-1)(k-t)\leq k$.
\item for any integer $N \in [2k,2k+(n-2)t]$, if $n\leq\frac{q^{t(n-\eta)}-1}{q^t-1}$. \end{itemize}
\end{document} |
\begin{document}
\title{Divisibility properties of the Fibonacci entry point} \author{Paul Cubre} \address{Department of Mathematics, Penn State University, State College, PA 16802} \email{pcubre@gmail.com} \author{Jeremy Rouse} \address{Department of Mathematics, Wake Forest University, Winston-Salem, NC 27109} \email{rouseja@wfu.edu} \subjclass[2010]{Primary 11B39; Secondary 11R32, 14G25} \thanks{The first author was partially supported by the Wake Forest University Graduate School. The second author was supported by NSF grant DMS-0901090} \begin{abstract} For a prime $p$, let $Z(p)$ be the smallest positive integer $n$ so that $p$ divides $F_{n}$, the $n$th term in the Fibonacci sequence. Paul Bruckman and Peter Anderson conjectured a formula for $\zeta(m)$, the density of primes
$p$ for which $m | Z(p)$, on the basis of numerical evidence. We prove Bruckman and Anderson's conjecture by studying the algebraic group $G : x^{2} - 5y^{2} = 1$ and relating $Z(p)$ to the order of $\alpha = (3/2,1/2) \in G(\mathbb{F}_{p})$. We are then able to use Galois theory and the Chebotarev density theorem to compute $\zeta(m)$. \end{abstract}
\maketitle
\section{Introduction and Statement of Results}
Let $F_{n}$ denote the Fibonacci sequence defined as usual by $F_{0} = 0$, $F_{1} = 1$ and $F_{n} = F_{n-1} + F_{n-2}$ for $n \geq 2$. If $p$
is a prime number, the smallest positive integer $m$ for which $p |
F_{m}$ is called the Fibonacci entry point of $p$, denoted $Z(p)$. For example, we have $Z(11) = 10$, since $F_{10} = 55$ is the smallest Fibonacci number that is a multiple of $11$.
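For readers who wish to experiment, $Z(p)$ can be computed by iterating the Fibonacci recurrence modulo $p$ until a zero appears; the short Python sketch below (ours, not part of the paper) does exactly that.

```python
def fibonacci_entry_point(p):
    """Smallest n >= 1 with p | F_n, found by iterating F mod p."""
    a, b = 0, 1  # F_0 and F_1 reduced modulo p
    n = 1
    while b % p != 0:
        a, b = b, (a + b) % p
        n += 1
    return n

print(fibonacci_entry_point(11))  # 10, since 11 | F_10 = 55
```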
It is well-known that for every prime $p$, $Z(p) \leq p+1$ (in fact, a proof of this follows from Lemma~\ref{grouporder} and Lemma~\ref{Zplem} in Section~\ref{modp}). In 1913, Carmichael (see \cite{Carmichael}, Theorem XXI) proved that if $m \ne 1, 2, 6, 12$, then there is a prime number $p$ so that $Z(p) = m$. It is not presently known if there are infinitely many primes $p$ for which $Z(p) = p+1$.
The main question we study is, given a positive integer $m$, how often does $m$ divide $Z(p)$? A natural conjecture would be that $Z(p)$ is ``random'' mod $m$ and so the answer should be $1/m$. However, Lagarias proved in 1985 (see \cite{Lagarias} and \cite{LagariasCor}) that the density of primes $p$ so that $Z(p)$ is even is $2/3$. More precisely, \[
\lim_{x \to \infty}
\frac{\# \{ p \leq x : p \text{ is prime and } Z(p) \text{ is even} \}}{\pi(x)} = \frac{2}{3}, \] where $\pi(x)$ is the number of primes $\leq x$. Motivated by this work, Bruckman and Anderson in \cite{BA} gathered numerical data and conjectured a formula for \[
\zeta(m) := \lim_{x \to \infty} M(m,x) / \pi(x), \]
where $M(m,x) = \# \{ p \leq x : p \text{ is prime and } m | Z(p) \}$. Their conjecture is the following.
\begin{conjec}[Conjecture 3.1 of \cite{BA}] \label{BAconj} If $m = q^{e}$ is a prime power (with $e \geq 1$), then \[
\zeta(q^{e}) = \frac{q^{2-e}}{q^{2} - 1}. \] For an arbitrary positive integer $m$, we have \[
\zeta(m) = \rho(m) \prod_{q^{j} \| m} \zeta(q^{j}), \] where the product is over all prime powers occurring in the prime factorization of $m$, and \[
\rho(m) = \begin{cases}
1 & \text{ if } 10 \nmid m\\
\frac{5}{4} & \text{ if } m \equiv 10 \pmod{20}\\
\frac{1}{2} & \text{ if } 20 | m. \end{cases} \] \end{conjec}
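The conjectured formula is easy to evaluate exactly. The Python sketch below (our own, using exact rational arithmetic) implements it verbatim and recovers Lagarias' density $\zeta(2)=2/3$ for the even entry points.

```python
from fractions import Fraction

def zeta(m):
    """Conjectured density of primes p with m | Z(p) (Bruckman-Anderson)."""
    result, rest, q = Fraction(1), m, 2
    # factor m into prime powers q^j by trial division
    while rest > 1:
        if rest % q == 0:
            j = 0
            while rest % q == 0:
                rest //= q
                j += 1
            # zeta(q^j) = q^(2-j) / (q^2 - 1)
            result *= Fraction(q**2, q**j * (q**2 - 1))
        q += 1
    # correction factor rho(m)
    if m % 20 == 0:
        result *= Fraction(1, 2)
    elif m % 10 == 0:  # m == 10 (mod 20)
        result *= Fraction(5, 4)
    return result

print(zeta(2))  # 2/3, matching Lagarias' theorem on Z(p) being even
```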
The main result of the current paper is a proof of this conjecture.
\begin{thm} \label{main} Conjecture~\ref{BAconj} is true for every positive integer $m$. \end{thm}
One consequence of this result is that the numbers $Z(p)$ show a bias toward being composite. For example, if $q$ is prime, then $\zeta(q) = \frac{q}{q^{2} - 1} \approx \frac{1}{q} + \frac{1}{q^{2}}$, and so the numbers $Z(p)$ are divisible by $q$ more often than entries in a random sequence.
To prove Theorem~\ref{main}, we study the algebraic group \[
G : x^{2} - 5y^{2} = 1. \] This is a twisted torus isomorphic to $\mathbb{G}_{m}$ over $\mathbb{Q}(\sqrt{5})$. The group law is given by $(x_{1},y_{1}) * (x_{2},y_{2}) = (x_{1} x_{2} + 5 y_{1} y_{2}, x_{1} y_{2} + x_{2} y_{1})$. We consider the point $\alpha = (3/2,1/2) \in G(\mathbb{Q})$, and we show (see Lemma~\ref{multiples}) that \[
n \alpha = \overbrace{\alpha + \alpha + \cdots + \alpha}^{n~\text{times}}
= (L_{2n}/2, F_{2n}/2). \] Here $L_{n}$ is the $n$th Lucas number. These are defined by $L_{0} = 2$, $L_{1} = 1$ and $L_{n} = L_{n-1} + L_{n-2}$ for $n \geq 2$. In Lemma~\ref{Zplem}, we use this to relate the Fibonacci entry point of $p$ to the order of $\alpha \in G(\mathbb{F}_{p})$.
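The identity $n\alpha = (L_{2n}/2, F_{2n}/2)$ is easy to check numerically over $\mathbb{Q}$; the sketch below (ours, not part of the paper) verifies it, together with the curve equation $x^{2}-5y^{2}=1$, for the first few multiples of $\alpha$.

```python
from fractions import Fraction

def g_add(P, Q):
    """Group law on G : x^2 - 5y^2 = 1."""
    (x1, y1), (x2, y2) = P, Q
    return (x1 * x2 + 5 * y1 * y2, x1 * y2 + x2 * y1)

# Fibonacci numbers F_0..F_20 and Lucas numbers L_0..L_20
F, L = [0, 1], [2, 1]
for _ in range(19):
    F.append(F[-1] + F[-2])
    L.append(L[-1] + L[-2])

alpha = (Fraction(3, 2), Fraction(1, 2))
P = alpha
for n in range(1, 10):
    # n*alpha should equal (L_{2n}/2, F_{2n}/2) and lie on G
    assert P == (Fraction(L[2 * n], 2), Fraction(F[2 * n], 2))
    assert P[0] ** 2 - 5 * P[1] ** 2 == 1
    P = g_add(P, alpha)
print("verified n*alpha = (L_{2n}/2, F_{2n}/2) for n = 1,...,9")
```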
Next, we study the density of primes $p$ for which $m$ divides the order of $\alpha \in G(\mathbb{F}_{p})$. In the context of untwisted tori, questions of this nature are quite familiar. They were first considered by Hasse in \cite{Hasse1} and \cite{Hasse2}, and very general results of this type are due to Ballot \cite{Ballot} and Moree \cite{Moree}. Twisted tori of the type mentioned above are considered by Jones and the second author in \cite{JonesRouse}, and in this paper criteria are given that will guarantee that if $\alpha \in G(\mathbb{Q})$ is fixed and $\ell$ is a prime number, then the density of primes $p$ for which $\ell$ divides the order of $\alpha \in G(\mathbb{F}_{p})$ exists and equals $\frac{\ell}{\ell^{2} - 1}$.
In the present paper, we extend these results to arbitrary positive integers $m$. To make the extension to prime powers, we consider preimages of $\alpha \in G(\mathbb{F}_{p})$ under multiplication by $\ell$. We say that $\alpha$ has an $\ell^{n}$th preimage if there is a $\beta \in G(\mathbb{F}_{p})$ so that $\ell^{n} \beta = \alpha$. In Lemma~\ref{orderpreimage}, we relate the order of $\alpha$ with the number of preimages of $\alpha$ under the multiplication by $\ell$. Questions of this type are common in arithmetic dynamics, and these have connections with the Diffie-Hellman key exchange protocol (see \cite{DiffieHellman}). In this context, a generator $\alpha \in \mathbb{F}_{q}^{\times}$ is chosen and an integer $x$ that encodes a message is also chosen. Computing $y = \alpha^{x}$ given $\alpha$ and $x$ is straightforward, but computing $x$ given $\alpha$ and $y$ is quite difficult (note that $\alpha$ is a preimage of $y$).
To study how often $\alpha$ has an $\ell^{n}$th preimage, we show that there are $\ell^{n}$ elements $P_{n,r}$ of $G(\mathbb{C})$ so that $\ell^{n} P_{n,r} = \alpha$. We let $K_{\ell^{k}}$ be the field obtained by adjoining all of the $x$ and $y$-coordinates of the $\ell^{k}$th preimages to $\mathbb{Q}$. Then, we essentially show that there is an $\ell^{n}$th preimage of $\alpha$ in $G(\mathbb{F}_{p})$ if and only if the Artin symbol $\artin{K_{\ell^{k}}/\mathbb{Q}}{\mathfrak{p}}$ fixes an $\ell^{n}$th preimage for all prime ideals $\mathfrak{p}$ above $p$. Finally, we compute ${\rm Gal}(K_{\ell^{k}}/\mathbb{Q})$ and the relevant density. As a consequence, we are able to prove another conjecture of Anderson and Bruckman.
\begin{thm}[Conjecture 2.1 of \cite{BA}] \label{BAconj2} Let $p$ be a prime and $\epsilon_{p} = \legen{p}{5}$. Given a prime $q$ and integers $x$, $i$ and $j$ with $i \geq j \geq 0$, let $M(q,x,i,j)$ denote the number of primes $p \leq x$ such that
$q^{i} \| (p - \epsilon_{p})$ and $q^{j} \| Z(p)$. Then \[
\zeta(q;i,j) := \lim_{x \to \infty}
\frac{M(q,x,i,j)}{\pi(x)} =
\begin{cases}
\frac{q-2}{q-1} & \text{ if } i = j = 0,\\
q^{-2i} & \text{ if } i \geq 1 \text{ and } j = 0,\\
\frac{q-1}{q^{2i-j+1}} & \text{ otherwise.}
\end{cases} \] \end{thm}
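As a quick numerical sanity check on this formula (not part of the original argument), the densities $\zeta(q;i,j)$ should sum to $1$ over all pairs $i \geq j \geq 0$, since every prime falls into exactly one class. A minimal Python sketch (the helper names are ours, for illustration only):

```python
def zeta(q, i, j):
    """Density zeta(q; i, j) from the theorem above."""
    if i == 0 and j == 0:
        return (q - 2) / (q - 1)
    if i >= 1 and j == 0:
        return q ** (-2 * i)
    return (q - 1) / q ** (2 * i - j + 1)

def total_density(q, bound=60):
    # Sum over 0 <= j <= i <= bound; the neglected tail is O(q**(-bound)).
    return sum(zeta(q, i, j) for i in range(bound + 1) for j in range(i + 1))
```

For $q = 3$, `total_density(3)` equals $1$ up to the truncation error, consistent with the three cases partitioning the primes.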
To handle the case that $m = \prod_{i=1}^{r} \ell_{i}^{s_{i}}$ is composite, we must show that the fields $K_{\ell_{i}^{s_{i}}}$, $1 \leq i \leq r$ are (almost) linearly disjoint. The complication in the formula for $\zeta(m)$ arises from the fact that $\mathbb{Q}(\sqrt{5}) \subseteq K_{2^{a}} \cap K_{5^{b}}$.
An outline of the paper is as follows. We review the relevant algebraic number theory in Section~\ref{back}. In Section~\ref{modp} we connect the Fibonacci sequence with the arithmetic of $G(\mathbb{F}_{p})$ and $Z(p)$ with the order of $\alpha \in G(\mathbb{F}_{p})$, and in turn we connect that with the number of preimages. In Section~\ref{galois} we define a Galois representation and prove that it is surjective (except in the case that $2$ and $5$ both divide $m$). Finally in Section~\ref{density} we compute the relevant densities and prove Theorem~\ref{BAconj2} and Theorem~\ref{main}.
\section{Background} \label{back}
We begin by reviewing some algebraic number theory. For an introduction to these ideas, see \cite{Mollin}. If $L/K$ is a Galois extension of number fields and $\alpha \in L$, define the norm of $\alpha$ to be \[
N_{L/K}(\alpha) = \prod_{\sigma \in {\rm Gal}(L/K)} \sigma(\alpha). \]
If $K/\mathbb{Q}$ is a field extension, let $\mathcal{O}_{K}$ be the ring of algebraic integers in $K$. We say that a prime number $p$ ramifies in $K$ if in the factorization \[
p \mathcal{O}_{K} = \prod_{i=1}^{n} \mathfrak{p}_{i}^{e_{i}} \] we have $e_{i} > 1$ for some $i$. For a fixed $K$, only finitely many primes $p$ ramify, and those that ramify are precisely those that divide the discriminant $\Delta_{K}$.
Suppose now that $K/\mathbb{Q}$ is a Galois extension and $\mathfrak{p}$ is a prime ideal in $\mathcal{O}_{K}$. We say that $\mathfrak{p}$ is a prime ideal above $p$ if $\mathfrak{p} \cap \mathbb{Z} = (p)$. This implies that $\mathcal{O}_{K}/\mathfrak{p}$ is a finite extension of $\mathbb{F}_{p}$. If $p$ is unramified in $K/\mathbb{Q}$, then there is a unique element $\sigma \in {\rm Gal}(K/\mathbb{Q})$ for which \[
\sigma(\alpha) \equiv \alpha^{p} \pmod{\mathfrak{p}} \] for all $\alpha \in \mathcal{O}_{K}$. This element is called the \emph{Artin symbol} of $\mathfrak{p}$ and is denoted $\artin{K/\mathbb{Q}}{\mathfrak{p}}$. Let \[
\artin{K/\mathbb{Q}}{p} = \left\{ \artin{K/\mathbb{Q}}{\mathfrak{p}} : \mathfrak{p} \text{ is a prime ideal of } \mathcal{O}_{K} \text{ above } p \right\}. \] This set is a conjugacy class in ${\rm Gal}(K/\mathbb{Q})$.
Let $\zeta_{n} = e^{2 \pi i / n}$. It follows from the definition of the Artin symbol that if $p$ is a prime and $\mathfrak{p}$ is a prime ideal above $p$ in $\mathcal{O}_{\mathbb{Q}(\zeta_{n})}$, then $\artin{K/\mathbb{Q}}{\mathfrak{p}}(\zeta_{n}) = \zeta_{n}^{p}$.
We are now ready to state the Chebotarev density theorem. This result is the key tool we will use to compute the densities mentioned in the introduction. To state it, let $\pi(x)$ be the number of primes $\leq x$. \begin{thm}[\cite{IK}, page 143]
Suppose that $K/\mathbb{Q}$ is a finite Galois extension. If $\mathcal{C}
\subset {\rm Gal}(K/\mathbb{Q})$ is a conjugacy class, then \[
\lim_{x \to \infty} \frac{\# \{ p \leq x : p \text{ is prime and } \artin{K/\mathbb{Q}}{p} = \mathcal{C} \}}{\pi(x)}
= \frac{|\mathcal{C}|}{|{\rm Gal}(K/\mathbb{Q})|}. \] \end{thm}
We will need a standard result in Galois theory. If $K_{1}$ and $K_{2}$ are two subfields of a field $E$, let $\langle K_{1}, K_{2} \rangle$ be the smallest subfield that contains both $K_{1}$ and $K_{2}$. The following result describes ${\rm Gal}(\langle K_{1}, K_{2} \rangle/F)$ in terms of ${\rm Gal}(K_{1}/F)$, ${\rm Gal}(K_{2}/F)$ and ${\rm Gal}((K_{1} \cap K_{2})/F)$. \begin{thm}[\cite{Milne}, Proposition 3.20] \label{linearlydisjoint}
Let $K_{1}$ and $K_{2}$ be Galois extensions of a field $F$. Then
$K_{1} \cap K_{2}$ and $\langle K_{1}, K_{2} \rangle$ are both
Galois over $F$ and \[
{\rm Gal}(\langle K_{1}, K_{2} \rangle/F) \cong \{ (\sigma,\tau) : \sigma|_{K_{1} \cap K_{2}} = \tau|_{K_{1} \cap K_{2}} \} \subseteq {\rm Gal}(K_{1}/F) \times
{\rm Gal}(K_{2}/F). \] In particular, \[
|\langle K_{1}, K_{2} \rangle : F| = \frac{|K_{1} : F| |K_{2} : F|}{|K_{1} \cap K_{2} : F|}. \] \end{thm}
Finally, given a group $G$, we will denote its order by $|G|$. If
$g \in G$ is an element, we will denote the order of $g$ by $|g|$. If $n$ is a non-zero integer and $\ell$ is a prime number, we will denote by ${\rm ord}_{\ell}(n)$ the exponent of the highest power of $\ell$ that divides $n$, so that $\ell^{{\rm ord}_{\ell}(n)} \| n$.
\section{Connection between the Fibonacci sequence and $G(\mathbb{F}_{p})$} \label{modp}
In this section we will connect the order of the point $\alpha = (3/2,1/2) \in G(\mathbb{F}_{p})$ with $Z(p)$, the smallest positive integer
$n$ so that $p | F_{n}$. Given a prime number $\ell$ we will also relate ${\rm ord}_{\ell}(|\alpha|)$, ${\rm ord}_{\ell}(|G(\mathbb{F}_{p})|)$ and the largest integer $m \geq 0$ so that there is a $\beta \in G(\mathbb{F}_{p})$ with $\ell^{m} \beta = \alpha$.
If $K$ is a field, let \[
G(K) = \{ (x,y) \in K^{2} : x^{2} - 5y^{2} = 1 \}. \] The set $G(K)$ becomes an abelian group with the group operation \[
(x_{1}, y_{1}) * (x_{2}, y_{2}) = (x_{1} x_{2} + 5 y_{1} y_{2},
x_{1} y_{2} + x_{2} y_{1}). \] The identity of $G(K)$ is $(1,0)$, and the inverse of $(x_{1},y_{1})$ is $(x_{1},-y_{1})$. The group $G$ is a twisted algebraic torus. It becomes isomorphic to $\mathbb{G}_{m}$ over $K(\sqrt{5})$.
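This group law is easy to experiment with over a small prime field; here is a minimal sketch (our code, with $p = 13$ an arbitrary illustrative choice):

```python
def g_mul(P, Q, p):
    """Group law on G(F_p): (x1,y1)*(x2,y2) = (x1*x2 + 5*y1*y2, x1*y2 + x2*y1)."""
    (x1, y1), (x2, y2) = P, Q
    return ((x1 * x2 + 5 * y1 * y2) % p, (x1 * y2 + x2 * y1) % p)

def on_curve(P, p):
    x, y = P
    return (x * x - 5 * y * y) % p == 1

p = 13
inv2 = pow(2, -1, p)              # 1/2 mod p
alpha = (3 * inv2 % p, inv2)      # the point (3/2, 1/2) in G(F_p)
```

One checks directly that `alpha` lies on the curve and that $(1,0)$ and $(x,-y)$ behave as the identity and inverse.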
\begin{lem} \label{twistiso} Let $H(K) = \{ (x,y) \in K^{2} : xy = 1 \}$. If $\sqrt{5} \in K$ and ${\rm char}(K) \ne 2$, then \[
H(K) \cong G(K). \] \end{lem} \begin{proof}
Let $\phi: G(K) \rightarrow H(K)$ be given by
$\phi(x,y)=(x+\sqrt{5}y,x-\sqrt{5}y)$. It is easily checked that $\phi$ is a homomorphism. If $(x,y) \in H(K)$, then $\phi\left(\frac{x+y}{2},\frac{x-y}{2\sqrt{5}}\right)=(x,y)$ and $\left(\frac{x+y}{2},\frac{x-y}{2\sqrt{5}}\right)\in G(K)$, so $\phi$ is surjective. The kernel is $\ker \phi= \{(x,y)\in G(K): x+\sqrt{5}y=1 \text{ and } x-\sqrt{5}y=1\}$. Since ${\rm char}(K) \ne 2$, this implies that $\ker \phi = \{ (1,0) \}$ and hence $\phi$ is injective. \end{proof}
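Lemma~\ref{twistiso} can be confirmed by brute force over a small field; a sketch (our code, with the illustrative choice $p = 11$, where $\sqrt{5} \equiv 4 \pmod{11}$):

```python
p, sqrt5 = 11, 4                  # 4^2 = 16 = 5 mod 11

def g_mul(P, Q):
    (x1, y1), (x2, y2) = P, Q
    return ((x1 * x2 + 5 * y1 * y2) % p, (x1 * y2 + x2 * y1) % p)

def phi(P):
    # phi(x, y) = (x + sqrt5*y, x - sqrt5*y) maps G(F_p) to H(F_p) = {(u,v): uv = 1}
    x, y = P
    return ((x + sqrt5 * y) % p, (x - sqrt5 * y) % p)

# Enumerate G(F_11) directly from the defining equation.
G = [(x, y) for x in range(p) for y in range(p) if (x * x - 5 * y * y) % p == 1]
```

The assertions below confirm that `phi` is a bijective homomorphism onto $H(\mathbb{F}_{11})$, where $H$ carries componentwise multiplication.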
A simple consequence of this is the following.
\begin{lem} For any prime $p \ne 2, 5$, $G(\mathbb{F}_{p})$ is cyclic. \end{lem} \begin{proof}
We must prove this for two cases: $p \equiv 1,4 \pmod{5}$ and $p
\equiv 2,3 \pmod{5}$. In the first case, $5$ is a square in $\mathbb{F}_{p}$ and Lemma~\ref{twistiso} shows that $G(\mathbb{F}_{p}) \cong \mathbb{F}_{p}^{\times}$ which is cyclic. In the second case, $5$ is a square in $\mathbb{F}_{p^{2}}$ and so $G(\mathbb{F}_{p}) \subseteq G(\mathbb{F}_{p^{2}}) \cong \mathbb{F}_{p^{2}}^{\times}$. This shows that $G(\mathbb{F}_{p})$ is a subgroup of a cyclic group and is hence cyclic. \end{proof}
The following result gives a formula for the order of $G(\mathbb{F}_{p})$.
\begin{lem} \label{grouporder} Suppose that $p \ne 2,5$ is prime. We have \[
|G(\mathbb{F}_{p})| = \begin{cases}
p-1 & \text{ if } p \equiv 1 \text{ or } 4 \pmod{5}\\
p+1 & \text{ if } p \equiv 2 \text{ or } 3 \pmod{5}. \end{cases} \] \end{lem} \begin{proof}
In the case that $p \equiv 1,4 \pmod{5}$, the previous lemma gives $|G(\mathbb{F}_p)|=|\mathbb{F}_p^\times|=p-1$. However, the following proof handles
both cases. A line through the point $(1,0)$ intersects the curve
$x^{2} - 5y^{2} = 1$ in another rational point if and only if the
slope is rational; the same argument applies in $\mathbb{F}_{p}$. Such a
line has the form $y \equiv m(x-1) \pmod{p}$. Every point in
$G(\mathbb{F}_{p})$ other than $(1,0)$ lies on one such a line. Computing
the intersection of this line with $x^{2} - 5y^{2} = 1$ shows that
$x = -\frac{1 + 5m^{2}}{1-5m^{2}}$ and $y = -\frac{2m}{1-5m^{2}}$.
This gives rise to a map $f : S \to G(\mathbb{F}_{p}) - \{ (1,0) \}$ where
$S = \{ m \in \mathbb{F}_{p} : 5m^{2} \ne 1 \}$. The argument above shows
that this map is surjective, and a straightforward calculation shows
that $f$ is injective. It follows that $|G(\mathbb{F}_{p})| = |S| + 1$. We have that $|S| = p-2$ when $p \equiv 1, 4 \pmod{5}$ (since $5m^{2} = 1$ then has two solutions in $\mathbb{F}_{p}$) and $|S| = p$ otherwise. This completes the proof. \end{proof}
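The count in Lemma~\ref{grouporder} is easy to confirm by enumeration for small primes; a sketch (our code, not part of the proof):

```python
def group_order(p):
    # Count the points of x^2 - 5y^2 = 1 over F_p by direct enumeration.
    return sum(1 for x in range(p) for y in range(p) if (x * x - 5 * y * y) % p == 1)

def predicted_order(p):
    # The formula of the lemma: p - 1 if p = 1, 4 mod 5, else p + 1.
    return p - 1 if p % 5 in (1, 4) else p + 1
```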
The next result shows that the Fibonacci and Lucas sequences occur as the coordinates of multiples of $\alpha = (3/2,1/2) \in G(\mathbb{Q})$.
\begin{lem} \label{multiples} We have $n \alpha = \left(\frac{L_{2n}}{2}, \frac{F_{2n}}{2}\right)$. \end{lem} \begin{proof}
We prove this by induction on $n$. The base case for $n=1$ is
$\alpha=(3/2,1/2)=(L_2/2,F_2/2)$. Our induction hypothesis for $n$
is $n\alpha=(L_{2n}/2,F_{2n}/2)$. Then
$n\alpha*\alpha=(L_{2n}/2,F_{2n}/2)*(3/2,1/2)=((3L_{2n}+5F_{2n})/4,(L_{2n}+3F_{2n})/4)$.
Using the identity $5F_n=L_{n+1}+L_{n-1}$ on the first coordinate we
get \begin{align*} 3L_{2n}+5F_{2n}&=3L_{2n}+L_{2n+1}+L_{2n-1}\\ &=3L_{2n}+2L_{2n+1}-L_{2n}\\ &=2(L_{2n}+L_{2n+1}) =2L_{2n+2}. \end{align*} Then using the identity $L_n=F_{n+1}+F_{n-1}$ on the second coordinate we get \begin{align*} L_{2n}+3F_{2n}&=F_{2n+1}+F_{2n-1}+3F_{2n}\\ &=F_{2n+1}+F_{2n+1}-F_{2n}+3F_{2n}\\ &=2(F_{2n+1}+F_{2n}) =2F_{2n+2}. \end{align*} Then we have shown $(n+1)(3/2,1/2)=(L_{2n+2}/2,F_{2n+2}/2)$, completing the induction. \end{proof}
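Lemma~\ref{multiples} can also be verified with exact rational arithmetic; a sketch (our code, using Python's `fractions` module):

```python
from fractions import Fraction

def g_mul(P, Q):
    # The group law on G(Q), with exact rational coordinates.
    (x1, y1), (x2, y2) = P, Q
    return (x1 * x2 + 5 * y1 * y2, x1 * y2 + x2 * y1)

def fib_lucas(n):
    # Return (F_n, L_n) with F_0 = 0, F_1 = 1, L_0 = 2, L_1 = 1.
    F, L = [0, 1], [2, 1]
    for _ in range(2, n + 1):
        F.append(F[-1] + F[-2])
        L.append(L[-1] + L[-2])
    return F[n], L[n]

alpha = (Fraction(3, 2), Fraction(1, 2))
P = alpha
for n in range(1, 11):
    F2n, L2n = fib_lucas(2 * n)
    assert P == (Fraction(L2n, 2), Fraction(F2n, 2))   # n*alpha = (L_2n/2, F_2n/2)
    P = g_mul(P, alpha)
```

After the loop, `P` equals $11\alpha = (L_{22}/2, F_{22}/2)$.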
The next result connects the order of $\alpha \in G(\mathbb{F}_{p})$ with $Z(p)$.
\begin{lem} \label{Zplem} Let $p \ne 2, 5$ be prime. Then \[
Z(p) = \begin{cases}
2 |\alpha| & \text{ if } |\alpha| \text{ is odd, }\\
\frac{1}{2} |\alpha| & \text{ if } |\alpha| \equiv 2 \pmod{4}, \text{ and}\\
|\alpha| & \text{ if } |\alpha| \equiv 0 \pmod{4}. \end{cases} \] \end{lem} \begin{proof} We need a few identities involving the Lucas and Fibonacci sequences: \begin{align*} L^2_n-5F^2_n&=4(-1)^n,\\ L^2_n&=L_{2n}+2(-1)^n,\\ F_{2n}&=F_nL_n. \end{align*} First we wish to show that $p \mid F_n$ if and only if $p \mid F_{2n}$ and $L_n^2 \equiv 4(-1)^n \pmod{p}$. Suppose $p \mid F_n$. Clearly $p \mid F_nL_n=F_{2n}$ and $p \mid 5F_n^2=L_n^2-4(-1)^n$. In the other direction, if $p \mid F_{2n}$, then either $p \mid F_n$ or $p \mid L_n$. From $L_n^2 \equiv 4(-1)^n \pmod{p}$, we can conclude that $p \nmid L_n$. Therefore $p \mid F_n$.
Since $L_n^2 \equiv 4(-1)^n \pmod{p}$ implies $L_{2n}+2(-1)^n \equiv 4(-1)^n \pmod{p}$, we have $L_{2n} \equiv 2(-1)^n \pmod{p}$. Additionally as $n\alpha = (L_{2n}/2,F_{2n}/2)$, it is clear that $p \mid F_{2n}$ and $L_{2n}\equiv 2(-1)^n \pmod{p}$ if and only if $n\alpha \equiv ((-1)^n,0)\pmod{p}$. Therefore we conclude $p \mid F_n$ if and only if $n\alpha\equiv ((-1)^n,0) \pmod{p}$.
It follows that $Z(p)$ is the smallest positive integer $n$ for which $n \alpha \equiv ((-1)^{n},0) \pmod{p}$. Since $G(\mathbb{F}_{p})$ is cyclic, $(-1,0)$ is the unique element in $G(\mathbb{F}_{p})$ of order $2$. If $\abs{\alpha}$ is odd, then $(-1,0) \not\in \langle \alpha \rangle$. Therefore $n$ is even, $\abs{\alpha}\mid n$, and $Z(p)=2\abs{\alpha}$. If $\abs{\alpha}$ is even, then $(-1,0) \in \langle \alpha \rangle$ and $(\abs{\alpha}/2)\alpha$ has order $2$. When $\abs{\alpha} \equiv 2 \pmod{4}$, $(\abs{\alpha}/2)\alpha \equiv (-1,0) \equiv ((-1)^{\abs{\alpha}/2},0) \pmod{p}$ and $Z(p)=\abs{\alpha}/2$. When $\abs{\alpha} \equiv 0 \pmod{4}$, then $(\abs{\alpha}/2)\alpha \equiv (-1,0) \not\equiv ((-1)^{\abs{\alpha}/2},0) \pmod{p}$. However, $\abs{\alpha}\,\alpha \equiv (1,0) \equiv ((-1)^{\abs{\alpha}},0) \pmod{p}$ and $Z(p)=\abs{\alpha}$. \end{proof}
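Lemma~\ref{Zplem} can be tested numerically by computing $Z(p)$ from the Fibonacci sequence mod $p$ and computing $|\alpha|$ by iterating the group law; a sketch (our code, with illustrative primes):

```python
def entry_point(p):
    # Z(p): the smallest n >= 1 with p | F_n.
    a, b, n = 1, 1, 1          # a = F_n, b = F_{n+1}
    while a % p != 0:
        a, b, n = b, (a + b) % p, n + 1
    return n

def order_alpha(p):
    # Order of alpha = (3/2, 1/2) in G(F_p), by repeated multiplication.
    inv2 = pow(2, -1, p)
    alpha = (3 * inv2 % p, inv2)
    P, n = alpha, 1
    while P != (1, 0):
        P = ((P[0] * alpha[0] + 5 * P[1] * alpha[1]) % p,
             (P[0] * alpha[1] + P[1] * alpha[0]) % p)
        n += 1
    return n

def predicted_Z(p):
    # The case split of the lemma, in terms of |alpha|.
    k = order_alpha(p)
    if k % 2 == 1:
        return 2 * k
    return k // 2 if k % 4 == 2 else k
```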
The next lemma relates the order of $\alpha$ with the largest $m$ for which $\alpha$ has an $\ell^{m}$-th preimage. We will be able to detect the number of preimages using Galois theoretic data, and hence determine the order.
\begin{lem} \label{orderpreimage}
Suppose that $\ell$ and $p$ are prime numbers and $\alpha \in G(\mathbb{F}_{p})$. If $\ell \nmid |\alpha|$, then $\alpha$ has an $\ell^{m}$th preimage in $G(\mathbb{F}_{p})$ for every $m \geq 1$. Suppose that
$\ell$ divides $|\alpha|$ and there is an $\ell^{m}$-th preimage of $\alpha$ in $G(\mathbb{F}_{p})$, but no $\ell^{m+1}$-th preimage. Then \[
{\rm ord}_{\ell}(|\alpha|) = {\rm ord}_{\ell}(|G(\mathbb{F}_{p})|) - m. \] \end{lem} \begin{proof} Let $\alpha \in G(\mathbb{F}_{p})$, $H=\langle \alpha \rangle$ and $\phi : H \to H$
be given by $\phi(x) = \ell x$. We have that $\phi$ is an automorphism of $H$ if and only if $\gcd(\ell,|H|) = 1$ and this implies that $\alpha$ has an $\ell^{m}$th preimage for all $m$ if and only
if ${\rm ord}_{\ell}(|\alpha|) = 0$.
On the other hand, suppose that $\ell$ divides $|\alpha|$ and write $\alpha = s \gamma$, where $\gamma$ is a generator of the cyclic group $G(\mathbb{F}_{p})$. Write $s = \gcd(s,|G(\mathbb{F}_{p})|) s'$ and note that $s' \gamma$ is also a generator of $G(\mathbb{F}_{p})$. By replacing
$\gamma$ with $s' \gamma$, we may assume that $s$ divides $|G(\mathbb{F}_{p})|$.
Let $m = {\rm ord}_{\ell}(s)$. Then $(s/\ell^{m}) \gamma$ is an $\ell^{m}$th preimage of $\alpha$. It is easy to see, however, that if an
$\ell^{m+1}$st preimage $\beta_{m+1}$ of $\alpha$ existed, then its order would be $\ell^{m+1} |\alpha|$, and this does not divide
$|G(\mathbb{F}_{p})|$. Finally, the order of $s \gamma$ is
$|\alpha| = \frac{|G(\mathbb{F}_{p})|}{s}$ and this gives \[
{\rm ord}_{\ell}(|\alpha|) = {\rm ord}_{\ell}(|G(\mathbb{F}_{p})|) - m, \] as desired. \end{proof}
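Lemma~\ref{orderpreimage} can be confirmed by exhaustive search in $G(\mathbb{F}_{p})$ for small $p$ and $\ell$; a sketch (our code; `max_level` returns the largest $m$ with an $\ell^{m}$th preimage of $\alpha$, or `None` in the case $\ell \nmid |\alpha|$, where preimages exist at every level):

```python
def g_mul(P, Q, p):
    (x1, y1), (x2, y2) = P, Q
    return ((x1 * x2 + 5 * y1 * y2) % p, (x1 * y2 + x2 * y1) % p)

def scalar(n, P, p):
    # n-fold sum of P under the group law.
    Q = (1, 0)
    for _ in range(n):
        Q = g_mul(Q, P, p)
    return Q

def order(P, p):
    Q, n = P, 1
    while Q != (1, 0):
        Q, n = g_mul(Q, P, p), n + 1
    return n

def ord_ell(n, ell):
    m = 0
    while n % ell == 0:
        n, m = n // ell, m + 1
    return m

def max_level(p, ell):
    G = [(x, y) for x in range(p) for y in range(p) if (x * x - 5 * y * y) % p == 1]
    inv2 = pow(2, -1, p)
    alpha = (3 * inv2 % p, inv2)
    if order(alpha, p) % ell != 0:
        return None                  # ell does not divide |alpha|
    current, m = {alpha}, 0
    while True:
        current = {b for b in G if scalar(ell, b, p) in current}
        if not current:
            return m
        m += 1
```

For instance, for $p = 7$ and $\ell = 2$ one finds $m = 0$ and ${\rm ord}_{2}(|\alpha|) = {\rm ord}_{2}(|G(\mathbb{F}_{7})|) = 3$, matching the lemma.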
\section{Galois theory} \label{galois}
For a positive integer $m$, let $\gamma_{m} = \sqrt[m]{\frac{3+\sqrt{5}}{2}}$. The isomorphism in Lemma~\ref{twistiso} shows that if $m$ is a positive integer, there are precisely $m$ elements $P_{m,r} \in G(\mathbb{C})$ with $m P_{m,r} = \alpha$. These are given by \[
P_{m,r} = \left( \frac{\zeta_{m}^{r} \gamma_{m} + \zeta_{m}^{-r} \gamma_{m}^{-1}}{2}, \frac{\zeta_{m}^{r} \gamma_{m} - \zeta_{m}^{-r} \gamma_{m}^{-1}}{2 \sqrt{5}} \right), \quad 0 \leq r \leq m-1. \] Note that $m (P_{m,r} - P_{m,s}) = \alpha - \alpha = 0$ and so $P_{m,r} - P_{m,s}$ is a point in $G(\mathbb{C})$ with order dividing $m$.
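One can check numerically, over $\mathbb{C}$ with floating-point arithmetic, that each $P_{m,r}$ is indeed an $m$th preimage of $\alpha$; a sketch (our code):

```python
import cmath
import math

def g_mul(P, Q):
    # Group law on G(C), with complex coordinates.
    (x1, y1), (x2, y2) = P, Q
    return (x1 * x2 + 5 * y1 * y2, x1 * y2 + x2 * y1)

def P_mr(m, r):
    # P_{m,r} as above: gamma_m = ((3 + sqrt 5)/2)^(1/m), zeta_m = e^(2 pi i/m).
    gamma = ((3 + math.sqrt(5)) / 2) ** (1.0 / m)
    u = cmath.exp(2j * math.pi * r / m) * gamma     # u = zeta_m^r * gamma_m
    return ((u + 1 / u) / 2, (u - 1 / u) / (2 * math.sqrt(5)))

def scalar(n, P):
    Q = (1 + 0j, 0j)
    for _ in range(n):
        Q = g_mul(Q, P)
    return Q
```

Up to floating-point error, `scalar(m, P_mr(m, r))` returns $\alpha = (3/2, 1/2)$ for every $r$.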
To study the frequency with which $m$ divides
$|\alpha|$ in $G(\mathbb{F}_{p})$, we need to know how often the points $P_{m,r}$ ``live in $G(\mathbb{F}_{p})$.'' More precisely, this means that if $K_{m}$ is the field obtained by adjoining the $x$ and $y$-coordinates of all the $P_{m,r}$ to $\mathbb{Q}$, we need to determine how often the rational prime $p$ is contained in a prime ideal $\mathfrak{p} \subset \mathcal{O}_{K_{m}}$ so that \[
\artin{K_{m}/\mathbb{Q}}{\mathfrak{p}}(P_{m,r}) = P_{m,r}. \] If the above equation is true, this implies that the point $P_{m,r}$ (which for all but finitely many primes $p$ can be thought of as an element of the finite field $\mathcal{O}_{K_{m}}/\mathfrak{p}$) is fixed by a generator of the Galois group of $\mathcal{O}_{K_{m}}/\mathfrak{p}$ over $\mathbb{F}_{p}$, i.e., there is an element $\beta \in G(\mathbb{F}_{p})$ so that $m \beta = \alpha$. We must first understand ${\rm Gal}(K_{m}/\mathbb{Q})$ and its action on the points $\{ P_{m,r} \}$. For simplicity, let \[
P_{m} := P_{m,0} = \left(\frac{\gamma_{m} + \gamma_{m}^{-1}}{2},
\frac{\gamma_{m} - \gamma_{m}^{-1}}{2 \sqrt{5}}\right),
Q_{m} := P_{m,1} - P_{m,0} = \left(\frac{\zeta_{m} + \zeta_{m}^{-1}}{2},
\frac{\zeta_{m} - \zeta_{m}^{-1}}{2 \sqrt{5}}\right). \]
Let $I(m) = \{ ax + b : a \in (\mathbb{Z}/m\mathbb{Z})^{\times}, b \in (\mathbb{Z}/m\mathbb{Z}) \}$ denote the affine group over $\mathbb{Z}/m\mathbb{Z}$. The group law on $I(m)$ is composition.
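The group $I(m)$ can be modeled concretely by pairs $(a,b)$ standing for $ax+b$, so that composition becomes $(a,b)\circ(c,d) = (ac, ad+b)$. A sketch (our code), which also checks the commutator identity $f_{1} \circ f_{2} \circ f_{1}^{-1} \circ f_{2}^{-1} = x + ad - bc + b - d$ used later:

```python
from math import gcd

def compose(f, g, m):
    # (a, b) represents a*x + b mod m; (f o g)(x) = a*(c*x + d) + b.
    (a, b), (c, d) = f, g
    return (a * c % m, (a * d + b) % m)

def inverse(f, m):
    a, b = f
    ainv = pow(a, -1, m)
    return (ainv, -ainv * b % m)

def I(m):
    # All elements a*x + b with a invertible mod m; |I(m)| = m * phi(m).
    return [(a, b) for a in range(m) if gcd(a, m) == 1 for b in range(m)]
```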
\begin{lem} \label{homdef} The extension $K_{m}/\mathbb{Q}$ is Galois. For each $\sigma \in {\rm Gal}(K_{m}/\mathbb{Q})$, there are elements $a_{\sigma} \in (\mathbb{Z}/m\mathbb{Z})^{\times}$ and $b_{\sigma} \in (\mathbb{Z}/m\mathbb{Z})$ so that $\sigma(P_{m} + r Q_{m}) = P_{m} + (a_{\sigma} r + b_{\sigma}) Q_{m}$ for $0 \leq r \leq m-1$. The map \[
\rho : {\rm Gal}(K_{m}/\mathbb{Q}) \to I(m) \] given by $\rho(\sigma) = a_{\sigma} x + b_{\sigma}$ is an injective homomorphism. \end{lem} \begin{proof}
The field $K_m$ lies in the Galois extension $\mathbb{Q}(\gamma_m,\zeta_m,\sqrt{5})$ of $\mathbb{Q}$. The extension $K_m/\mathbb{Q}$ is separable, since $K_m$ is an intermediate field of a separable extension, and it is normal, since $K_m$ is a splitting field: we adjoin all conjugates of the coordinates of $P_{m}$ and $Q_{m}$. We conclude that $K_m/\mathbb{Q}$ is Galois. For $\sigma \in {\rm Gal}(K_{m}/\mathbb{Q})$, $\sigma(mP_{m,r})=m\sigma(P_{m,r})=\alpha$. Therefore ${\rm Gal}(K_{m}/\mathbb{Q})$
must take a preimage of $\alpha$ to another preimage of $\alpha$.
Hence $\sigma(P_{m,0})=P_{m,b_{\sigma}}=P_{m,0}+b_{\sigma}Q_{m}$.
Additionally $\sigma(mQ_{m})=m\sigma(Q_{m})=0$. So ${\rm Gal}(K_{m}/\mathbb{Q})$
must take an element of order $m$ to another element of order $m$.
Hence $\sigma(Q_{m})=a_{\sigma}Q_{m}$. This yields
$\sigma(P_{m}+rQ_{m})=\sigma(P_{m})+r\sigma(Q_{m})=P_m+(a_{\sigma}
r+ b_{\sigma})Q_{m}$. From the definition of $\rho$, it is easy to see that $\rho$ is a homomorphism. The kernel of $\rho$ is the set $\{ \sigma
\in {\rm Gal}(K_{m}/\mathbb{Q}) : \rho(\sigma)=x\}$. If $\sigma \in \ker \rho$, then $\sigma(P_{m} + rQ_{m}) = P_{m} + rQ_{m}$ for all $r$. This implies that $P_{m}$ and $Q_{m}$ are both fixed by $\sigma$. Since the coordinates of $P_{m}$ and $Q_{m}$ generate $K_{m}$ over $\mathbb{Q}$, it follows that $\sigma$ fixes $K_{m}$ and so $\ker \rho = 1$. \end{proof}
The Chinese remainder theorem shows that if $m = \prod_{i=1}^{r} \ell_{i}^{s_{i}}$ is the prime factorization of $m$, then \[
I(m) \cong \prod_{i=1}^{r} I(\ell_{i}^{s_{i}}). \] We will therefore begin by studying the images of ${\rm Gal}(K_{m}/\mathbb{Q}) \to I(m)$ in the case that $m$ is a prime power. This case was studied by Jones and the second author in \cite{JonesRouse}. In particular, Proposition 4.1 and Theorem 4.2 (with the special case $d = 5$) handle the case that $m = \ell^{k}$ is a prime power. Specifically, if $F$ is a number field, the map \[
\rho : {\rm Gal}(\langle K_{m}, F \rangle/F) \to I(\ell^{k}) \] is surjective for all $k$ if and only if \begin{enumerate} \item there is no point $\beta \in G(F)$ with $\ell \beta = \alpha$, and
\item $|F(\zeta_{\ell^{3}} + \zeta_{\ell^{3}}^{-1}) : F| = \frac{\ell^{2} (\ell-1)}{2}$, and \item if $\ell = 2$, $-2$ and $-10$ are not squares in $F$ and if $L$ is the field obtained by adjoining to $F$ the coordinates of $Q_{8}$, then there is no point $\beta \in G(L)$ with $2 \beta = \alpha$. \end{enumerate}
Using this result, we deduce the following concerning the surjectivity of the map $\rho : {\rm Gal}(K_{m}/\mathbb{Q}) \to I(m)$.
\begin{lem} \label{surjlem} Let $\ell$ be a prime and $k \geq 1$. Then the map $\rho : {\rm Gal}(K_{\ell^{k}}/\mathbb{Q}) \to I(\ell^{k})$ is surjective. \end{lem} \begin{proof}
If there is a $\beta \in G(\mathbb{Q}(\sqrt{5}))$ with $\ell \beta = \alpha$, then the isomorphism $G(\mathbb{Q}(\sqrt{5})) \cong H(\mathbb{Q}(\sqrt{5}))$ from Lemma~\ref{twistiso} sends $\beta$ to $\sqrt[\ell]{(3 + \sqrt{5})/2}$. It is well-known that $\mathcal{O}_{\mathbb{Q}(\sqrt{5})} = \mathbb{Z}\left[\frac{1 + \sqrt{5}}{2}\right]$ and $\mathbb{Z}\left[\frac{1 + \sqrt{5}}{2}\right]^{\times}$ is generated by $-1$ and $\frac{1+\sqrt{5}}{2}$. Since $\frac{3 + \sqrt{5}}{2} = \left(\frac{1 + \sqrt{5}}{2}\right)^{2}$, it follows that $\ell = 2$. However, the $2$nd preimages of $\alpha$ are $\left( \frac{\pm \sqrt{5}}{2}, \frac{\pm \sqrt{5}}{10} \right)$, which are not in $G(\mathbb{Q})$. This establishes condition $(1)$. It is well-known that
$|\mathbb{Q}(\zeta_{\ell^{3}} + \zeta_{\ell^{3}}^{-1}) : \mathbb{Q}| =
\frac{\ell^{2} (\ell-1)}{2}$, and so condition $(2)$ holds. For condition (3), a simple check shows that $L = \mathbb{Q}(\sqrt{2}, \sqrt{-5})$ and so $\sqrt{5} \not\in L$ and thus there is no $\beta \in G(L)$ with $2 \beta = \alpha$. \end{proof}
The next step is extending the result of the above lemma to prove that the map ${\rm Gal}(K_{m}/\mathbb{Q}) \to I(m)$ is surjective (unless
$10 | m$).
\begin{lem} \label{minsubfields} If $\ell > 2$ is prime, then every minimal subfield of $K_{\ell^{k}}/\mathbb{Q}$ is either $\mathbb{Q}\left(P_{\ell}\right)$ (or some conjugate), or is contained in $\mathbb{Q}(\zeta_{\ell^{2}})$. If $\ell = 2$, then every minimal subfield of $K_{2^{k}}/\mathbb{Q}$ is contained in $\mathbb{Q}(\sqrt{5}, i, \sqrt{2})$. \end{lem} \begin{proof} Lemmas~\ref{homdef} and \ref{surjlem} translate this into a group theory problem. Set $G = I(\ell^{k})$ and let $N = \{ ax+b \in G : a \equiv 1 \pmod{\ell}\}$, a normal subgroup of $G$. If $H$ is a group, let $\Phi(H)$ denote the Frattini subgroup of $H$, the intersection of the maximal subgroups of $H$. Since $N \unlhd G$, $\Phi(N) \subseteq \Phi(G)$. Since the order of $N$ is a power of $\ell$, every maximal subgroup of $N$ has index $\ell$ and so if $n \in N$, then $n^{\ell} \in \Phi(N)$. This implies that \[
\{ ax+b \in I(\ell^{k}) : a \equiv 1 \pmod{\ell^{2}}, b \equiv 0 \pmod{\ell}\} \subseteq \Phi(N) \subseteq \Phi(G). \] In particular, the kernel of the map $I(\ell^{k}) \to I(\ell^{2})$ is contained in $\Phi(G)$. Since $I(\ell^{2})$ is solvable, every maximal subgroup has prime power index. Using the surjective homomorphism $\phi : I(\ell^{2}) \to (\mathbb{Z}/\ell^{2} \mathbb{Z})^{\times}$, we can identify the maximal subgroups of index less than $\ell$ and also show there are none of index $> \ell$. If $M \subseteq I(\ell^{2})$ is a maximal subgroup of index $\ell$, then either $x+1 \in M$, in which case $M = \{ ax+b : a^{\ell-1} \equiv 1 \pmod{\ell^{2}} \}$, or $x+1 \not\in M$. Note that if $f_{1} = ax + b$ and $f_{2} = cx+d$, then $f_{1} \circ f_{2} \circ f_{1}^{-1} \circ f_{2}^{-1} = x + ad - bc + b - d$. Hence, if $x+1 \not\in M$, then $M$ is abelian. This implies that if $ax+b$ and $cx+d$ are in $M$, then $(a-1)d = (c-1) b$. If $e$ is the common value of $\frac{b}{a-1}$ for $a \ne 1$, then we have \[
M = \{ a (x-e) + e : a \in (\mathbb{Z}/\ell^{2} \mathbb{Z})^{\times} \} \] for some $e$ with $0 \leq e \leq \ell^{2} - 1$. All of these subgroups are conjugates of $\{ ax : a \in (\mathbb{Z}/\ell^{2} \mathbb{Z})^{\times} \}$. Translating back to fields, we obtain the desired result when $\ell > 2$.
When $\ell = 2$ we have $\Phi(I(2^{k})) = \{ ax + b : a \equiv 1 \pmod{8}, b \equiv 0 \pmod{2} \}$ provided $k \geq 3$, and a straightforward calculation shows that the field corresponding to $\Phi(I(2^{k}))$ is $\mathbb{Q}(\sqrt{5}, i, \sqrt{2})$. \end{proof}
Suppose that $\ell_{1}, \ell_{2}, \ldots, \ell_{n}$ are the distinct prime divisors of $m$. To prove that ${\rm Gal}(K_{m}/\mathbb{Q}) \to I(m)$ is surjective, we will want to show that $K_{\ell_{n}^{s_{n}}} \cap \langle K_{\ell_{1}^{s_{1}}}, K_{\ell_{2}^{s_{2}}}, \ldots, K_{\ell_{n-1}^{s_{n-1}}} \rangle = \mathbb{Q}$. We will prove this using ramification properties of these fields.
\begin{lem} \label{ramify} Suppose that $\ell$ is prime. Then $K_{\ell^{k}}/\mathbb{Q}$ is ramified only at $5$ and $\ell$. If $\ell \ne 2$, then every minimal subfield of $K_{\ell^{k}}/\mathbb{Q}$ is ramified at $\ell$. \end{lem} \begin{proof}
From the formulas for the $P_{\ell^{k},r}$ it is clear that
$K_{\ell^{k}}$ is contained in the splitting field of $x^{\ell^{k}}
- \frac{3 + \sqrt{5}}{2}$ over $\mathbb{Q}(\sqrt{5})$. The discriminant
$\Delta_{L/\mathbb{Q}}$ of $L = \mathbb{Q}(\gamma_{\ell^{k}})$ is \[
\Delta_{L/\mathbb{Q}} = \Delta_{\mathbb{Q}(\sqrt{5})/\mathbb{Q}}^{[L : \mathbb{Q}(\sqrt{5})]}
\cdot N_{L/\mathbb{Q}(\sqrt{5})}(\Delta_{L/\mathbb{Q}(\sqrt{5})}). \] Since the discriminant of $x^{\ell^{k}} - \frac{3 + \sqrt{5}}{2}$ is a power of $\ell$ times a unit, it follows that $\Delta_{L/\mathbb{Q}}$ is a power of $5$ times a power of $\ell$. If $L_{1}$ and $L_{2}$ are two extensions of $\mathbb{Q}$ ramified only at primes in a set $S$, then $\langle L_{1}, L_{2} \rangle/\mathbb{Q}$ is ramified only at primes in $S$ (see Theorem 4.67 of \cite{Mollin}). It follows from this that the splitting field of $x^{\ell^{k}} - \frac{3 + \sqrt{5}}{2}$ is ramified only at $5$ and $\ell$ and hence $K_{\ell^{k}}/\mathbb{Q}$ is too.
To prove the second claim it is enough, by Lemma~\ref{minsubfields}, to prove that $\mathbb{Q}(P_{\ell})$ is ramified at $\ell$. From the surjectivity of $\rho$ proven in Lemma~\ref{surjlem}, it follows that ${\rm Gal}(K_{\ell}/\mathbb{Q})$ acts transitively on the $P_{\ell,r}$. Therefore, the Galois closure of $\mathbb{Q}(P_{\ell})$ over $\mathbb{Q}$ is $K_{\ell}$. Since $\mathbb{Q}(\zeta_{\ell} + \zeta_{\ell}^{-1})$ is ramified at $\ell$ and is contained in $K_{\ell}$, it follows that $K_{\ell}/\mathbb{Q}$ is ramified at $\ell$ and this implies that $\mathbb{Q}(P_{\ell})$ is ramified at $\ell$ as well. \end{proof}
\begin{lem} \label{surj} Let $m$ be an arbitrary positive integer. The map
$\rho : {\rm Gal}(K_{m}/\mathbb{Q}) \to I(m)$ is surjective if $10 \nmid m$. If $10 | m$, then the image of $\rho$ is \[
\{ ax + b : b \text{ is even } \iff a \equiv 1 \text{ or } 4 \pmod{5} \}. \] \end{lem} \begin{proof} First note that if $\gcd(m_{1},m_{2}) = 1$, then $\langle K_{m_{1}}, K_{m_{2}} \rangle = K_{m_{1} m_{2}}$. It is clear that $K_{m_{1}}, K_{m_{2}} \subseteq K_{m_{1} m_{2}}$. For the reverse direction note that if $m_{1} x + m_{2} y = 1$ then \[
m_{1} m_{2} (y P_{m_{1}} + x P_{m_{2}})
= m_{2} y \alpha + m_{1} x \alpha = \alpha. \] Thus $y P_{m_{1}} + x P_{m_{2}}$ is a preimage of $\alpha$ under multiplication by $m_{1} m_{2}$. Further, since $G(\langle K_{m_{1}}, K_{m_{2}} \rangle)$ contains elements of order $m_{1}$ and $m_{2}$, it must contain $Q_{m_{1} m_{2}}$ and so $K_{m_{1} m_{2}} = \langle K_{m_{1}}, K_{m_{2}} \rangle$.
Suppose first that $10 \nmid m$. To prove that $\rho$ is surjective, we will prove (by induction on the number of distinct prime factors of $m$) that $|{\rm Gal}(K_{m}/\mathbb{Q})| = |I(m)|$. The base case, in which $m$ is a prime power, is handled by Lemma~\ref{surjlem}.
Suppose that $m = \prod_{i=1}^{n} \ell_{i}^{s_{i}}$ is the prime factorization of $m$. Since $K_{m/\ell_{n}^{s_{n}}} = \langle K_{\ell_{1}^{s_{1}}}, K_{\ell_{2}^{s_{2}}}, \ldots, K_{\ell_{n-1}^{s_{n-1}}} \rangle$ we have that $K_{m/\ell_{n}^{s_{n}}}$ is ramified only at $5$ and $\ell_{1}, \ldots, \ell_{n-1}$, while $K_{\ell_{n}^{s_{n}}}$ is ramified only at $5$ and $\ell_{n}$. Moreover, by Lemma~\ref{ramify} every minimal subfield of $K_{\ell_{n}^{s_{n}}}$ is ramified at $\ell_{n}$. It follows therefore that $K_{\ell_{n}^{s_{n}}} \cap K_{m/\ell_{n}^{s_{n}}}$ is unramified everywhere, and since $\mathbb{Q}$ has no unramified extensions we have that $K_{\ell_{n}^{s_{n}}} \cap K_{m/\ell_{n}^{s_{n}}} = \mathbb{Q}$. From Theorem~\ref{linearlydisjoint} we obtain that \[
|\langle K_{\ell_{n}^{s_{n}}}, K_{m/\ell_{n}^{s_{n}}} \rangle : \mathbb{Q}|
= |K_{\ell_{n}^{s_{n}}} : \mathbb{Q}| \cdot |K_{m/\ell_{n}^{s_{n}}} : \mathbb{Q}|
= |I(\ell_{n}^{s_{n}})| \cdot |I(m/\ell_{n}^{s_{n}})| = |I(m)|, \] which proves the desired claim.
A similar argument shows that if $10 | m$ then $|K_{m} : \mathbb{Q}| =
|K_{2^{s_{1}} 5^{s_{2}}} : \mathbb{Q}| \cdot |K_{m/(2^{s_{1}} 5^{s_{2}})} : \mathbb{Q}|$. To determine $|K_{2^{s_{1}} 5^{s_{2}}} : \mathbb{Q}|$ we will show that $K_{2^{s_{1}}} \cap K_{5^{s_{2}}} = \mathbb{Q}(\sqrt{5})$ by determining the subfields of $K_{2^{s_{1}}}$ that are ramified only at $5$. We have that $\mathbb{Q}(\sqrt{5}) \subseteq K_{2^{s_{1}}}$ and the subgroup of $I(2^{s_{1}})$ corresponding to $\mathbb{Q}(\sqrt{5})$ is \[
H = \{ ax + b : b \equiv 0 \pmod{2} \}. \] Since $H$ is a $2$-group, $\Phi(H) = H' H^{2} = \{ ax + b
: a \equiv 1 \pmod{8}, b \equiv 0 \pmod{4} \}$. The field corresponding to this subgroup is $L = \mathbb{Q}\left(\sqrt{\frac{1+\sqrt{5}}{2}}, i, \sqrt{2}\right)$ and it is straightforward to see that the maximal subextension of $L$ ramified only at $5$ is $\mathbb{Q}(\sqrt{5})$. It follows that $K_{2^{s_{1}}} \cap K_{5^{s_{2}}} = \mathbb{Q}(\sqrt{5})$ and so when $10 | m$, $|K_{m} : \mathbb{Q}| =
\frac{1}{2} |I(m)|$. Finally, since $P_{2} = \left(\frac{\sqrt{5}}{2}, \frac{\sqrt{5}}{10} \right)$ we have that $\sigma \in {\rm Gal}(K_{m}/\mathbb{Q})$ fixes $\sqrt{5}$ if and only if it fixes $P_{2}$ and this occurs if and only if $\rho(\sigma) = ax + b$ where $b \equiv 0 \pmod{2}$ and $\legen{a}{5} = 1$. This yields the desired result. \end{proof}
\begin{rem} The method from the previous sections is very general and can be used to establish the surjectivity of Galois representations attached to arbitrary one-dimensional tori. \end{rem}
\section{Density computations} \label{density}
In this section, we will translate conditions on when preimages of $\alpha$ exist in $G(\mathbb{F}_{p})$ into statements about the Frobenius conjugacy class $\artin{K_{m}/\mathbb{Q}}{p}$. We will then count the sizes of these classes and use this to prove Theorem~\ref{main}. We will start by focusing on the prime power case.
As in the previous section, let $\ell$ be a prime number, $k \geq 1$ and $K_{\ell^{k}}$ be the field obtained by adjoining all the $x$ and $y$-coordinates of $P_{\ell^{k},r}$ to $\mathbb{Q}$. Let $\rho : {\rm Gal}(K_{\ell^{k}}/\mathbb{Q}) \to I(\ell^{k})$ be the homomorphism defined in Lemma~\ref{homdef}. We define \[
\mathcal{C}_{k,n,\ell} = \{ \sigma \in {\rm Gal}(K_{\ell^{k}}/\mathbb{Q}) : \sigma(\ell^{k-n} P_{\ell^{k},r}) = \ell^{k-n} P_{\ell^{k},r} \text{ for some
} r \text{ with } 0 \leq r \leq \ell^{k} - 1 \}. \] We will make the convention that if $n < 0$, then $\mathcal{C}_{k,n,\ell}$ is empty. Since $P_{\ell^{k},r}$ is an $\ell^{k}$th preimage of $\alpha$, $\ell^{k-n} P_{\ell^{k},r}$ is an $\ell^{n}$th preimage of $\alpha$.
\begin{lem} \label{artinpreimage} Let $p \ne 2, 5, \ell$ be a prime number. Then there is an $\ell^{n}$th preimage of $\alpha$ in $G(\mathbb{F}_{p})$ if and only if $\artin{K_{\ell^{k}}/\mathbb{Q}}{p} \subseteq \mathcal{C}_{k,n,\ell}$. \end{lem} \begin{proof}
First suppose that $\artin{K_{\ell^{k}}/\mathbb{Q}}{p} \subseteq
\mathcal{C}_{k,n,\ell}$ and let $\mathfrak{p}$ be a prime ideal
above $p$ in $\mathcal{O}_{K_{\ell^{k}}}$. Let $\sigma = \artin{K_{\ell^{k}}/\mathbb{Q}}{\mathfrak{p}}$; by assumption, $\sigma$ fixes $\ell^{k-n} P_{\ell^{k},r}$ for some $r$. We may consider $\ell^{k-n} P_{\ell^{k},r}$ as an
element of $G(\mathcal{O}_{K_{\ell^{k}}}/\mathfrak{p})$ (note that
$2$ and $5$ are the only primes that divide the denominators of
coordinates of preimages of $\alpha$). Since $\sigma$ fixes
$\ell^{k-n} P_{\ell^{k},r}$ and $\sigma$ acts as the Frobenius
automorphism on $\mathcal{O}_{K_{\ell^{k}}}/\mathfrak{p}$, it
follows that $\ell^{k-n} P_{\ell^{k},r} \in G(\mathbb{F}_{p})$, as desired.
To show the reverse implication we will first show that for any prime $\mathfrak{p}$ above $p$, the reduction mod $\mathfrak{p}$ map on $\ell^{n}$th preimages of $\alpha$ is injective. Any $\ell^{n}$th preimage of $\alpha$ has the form $P_{\ell^{n},r} = P_{\ell^{n}} + r Q_{\ell^{n}}$. If $P_{\ell^{n}} + r_{1} Q_{\ell^{n}} \equiv P_{\ell^{n}} + r_{2} Q_{\ell^{n}} \pmod{\mathfrak{p}}$ with $r_{1} \not\equiv r_{2} \pmod{\ell^{n}}$, then $(r_{1} - r_{2}) Q_{\ell^{n}}$ is congruent to the identity mod $\mathfrak{p}$. Every element of order $\ell$ is a multiple of $(r_{1} - r_{2}) Q_{\ell^{n}}$, so the $y$-coordinate of $Q_{\ell}$ is $\equiv 0 \pmod{\mathfrak{p}}$, which implies that \[
N_{\mathbb{Q}(\zeta_{\ell}, \sqrt{5})/\mathbb{Q}}\left(\frac{\zeta_{\ell} - \zeta_{\ell}^{-1}}{2 \sqrt{5}}\right) \equiv 0 \pmod{p}. \] This is a contradiction because $N_{\mathbb{Q}(\zeta_{\ell})/\mathbb{Q}}(\zeta_{\ell} - \zeta_{\ell}^{-1}) = \ell$ and $p \ne 2, 5, \ell$. Hence the reduction mod $\mathfrak{p}$ map is injective on $\ell^{n}$th preimages.
Finally, suppose there is an $\ell^{n}$th preimage of $\alpha$ in $G(\mathbb{F}_{p})$. For a prime ideal $\mathfrak{p}$ above $p$, we consider $\mathbb{F}_{p} \subseteq \mathcal{O}_{K_{\ell^{k}}}/\mathfrak{p}$. Since ${\rm Gal}(K_{\ell^{k}}/\mathbb{Q})$ acts on the $\ell^{n}$th preimages of $\alpha$ in $K_{\ell^{k}}$, and the reduction map is injective, this implies that $\artin{K_{\ell^{k}}/\mathbb{Q}}{\mathfrak{p}} \in \mathcal{C}_{k,n,\ell}$, as desired. \end{proof}
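The injectivity argument above rests on the fact that $N_{\mathbb{Q}(\zeta_{\ell})/\mathbb{Q}}(\zeta_{\ell} - \zeta_{\ell}^{-1}) = \ell$. As an illustrative numerical sanity check (not part of the paper; the function name is ours), one can evaluate the product of conjugates $\prod_{j=1}^{\ell-1}(\zeta^{j} - \zeta^{-j})$ in floating point:

```python
from cmath import exp, pi

# Numerical check: for an odd prime l, the norm of zeta_l - zeta_l^{-1}
# from Q(zeta_l) down to Q equals l, i.e.
#   prod_{j=1}^{l-1} (zeta^j - zeta^{-j}) = l,  where zeta = e^{2*pi*i/l}.
def norm_check(l):
    zeta = exp(2j * pi / l)
    prod = 1
    for j in range(1, l):
        prod *= zeta**j - zeta**(-j)
    return prod

for l in (3, 5, 7, 11, 13):
    n = norm_check(l)
    assert abs(n - l) < 1e-8, (l, n)
```

(For $\ell = 3$ the product is $(\zeta - \zeta^{2})(\zeta^{2} - \zeta) = -(i\sqrt{3})^{2} = 3$, matching the general identity.)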
The previous lemma allows us to determine, based on $\artin{K/\mathbb{Q}}{p}$, when preimages exist. The next lemma allows us to determine the group order mod $\ell^{k}$. \begin{lem} \label{artingrouporder} Let $p$ be a prime number with $p \ne 2, 5, \ell$. If \[
ax+b \in \rho\left(\artin{K_{\ell^{k}}/\mathbb{Q}}{p}\right) \subseteq I(\ell^{k}), \] and $a \not\equiv 1 \pmod{\ell^{k}}$, then
${\rm ord}_{\ell}(|G(\mathbb{F}_{p})|) = {\rm ord}_{\ell}(a-1)$. \end{lem} \begin{proof} Let $\mathfrak{p}$ be a prime above $p$ in $\mathcal{O}_{K_{\ell^{k}}}$ and $\sigma = \artin{K_{\ell^{k}}/\mathbb{Q}}{\mathfrak{p}}$. Note that $\sigma(\zeta_{\ell^{k}}) = \zeta_{\ell^{k}}^{p}$ and define $\epsilon \in \{ \pm 1 \}$ by $\sigma(\sqrt{5}) = \epsilon \sqrt{5}$. If $\rho(\sigma) = ax + b$, then \[
\left(\frac{\zeta_{\ell^{k}}^{p} + \zeta_{\ell^{k}}^{-p}}{2},
\frac{\zeta_{\ell^{k}}^{p} - \zeta_{\ell^{k}}^{-p}}{2 \epsilon \sqrt{5}}\right) =
\sigma(Q_{\ell^{k}}) = a Q_{\ell^{k}}
= \left(\frac{\zeta_{\ell^{k}}^{a} + \zeta_{\ell^{k}}^{-a}}{2}, \frac{\zeta_{\ell^{k}}^{a} - \zeta_{\ell^{k}}^{-a}}{2 \sqrt{5}}\right). \] Comparing the $x$-coordinates, we obtain that $a \equiv \pm p \pmod{\ell^{k}}$. Using this fact and comparing the $y$-coordinates gives that $a \equiv \epsilon p \pmod{\ell^{k}}$. Also, $\epsilon = 1$ if and only if $\sqrt{5} \in \mathcal{O}_{K_{\ell^{k}}}/\mathfrak{p}$ and so $\epsilon = \legen{p}{5}$. Finally, we use Lemma~\ref{grouporder} to conclude that \[
a - 1 \equiv \begin{cases}
p-1 \pmod{\ell^{k}} & \text{ if } p \equiv 1 \text{ or } 4 \pmod{5}\\
-p-1 \pmod{\ell^{k}} & \text{ if } p \equiv 2 \text{ or } 3 \pmod{5}. \end{cases} \] This yields the desired result. \end{proof}
Now, for $1 \leq t < k$, define \[
\mathcal{D}_{k,t,\ell}
= \{ \sigma \in {\rm Gal}(K_{\ell^{k}}/\mathbb{Q}) : a_{\sigma} \not\equiv
1 \pmod{\ell^{k}} \text{ and if }
n = {\rm ord}_{\ell}(a_{\sigma} - 1), \text{ then }
\sigma \not\in \mathcal{C}_{k,n-t+1,\ell} \}. \] Here $a_{\sigma}$ denotes the coefficient of $x$ in $\rho(\sigma) = a_{\sigma} x + b_{\sigma} \in I(\ell^{k})$. Combining Lemma~\ref{orderpreimage}, Lemma~\ref{artinpreimage}, and Lemma~\ref{artingrouporder}, we see that if $p$ is a prime with $p \not\equiv \pm 1 \pmod{\ell^{k}}$, then \[
\ell^{t} \text{ divides } |\alpha| \text{ if and only if }
\artin{K_{\ell^{k}}/\mathbb{Q}}{p} \subseteq \mathcal{D}_{k,t,\ell}. \]
\begin{lem} \label{countprime} Assume the notation above. For $1 \leq t < k$, we have \[
\frac{|\mathcal{D}_{k,t,\ell}|}{|{\rm Gal}(K_{\ell^{k}}/\mathbb{Q})|} =
\frac{\ell^{2-t} - \ell^{2-k} - \ell^{1-k} + \ell^{1-2k+t}}{\ell^{2} - 1}. \] \end{lem} \begin{proof}
Noting that $P_{m,r} = P_{m} + r Q_{m}$ we see that $\sigma$
fixes $\ell^{k-n+t-1} P_{\ell^{k},r}$ if and only if $(a_{\sigma}-1)
r + b_{\sigma} \equiv 0 \pmod{\ell^{n-t+1}}$. It is easy to see that
there are no solutions $r$ to this congruence if and only if
$\ell^{n-t+1} \nmid b$. Thus, $|\mathcal{D}_{k,t,\ell}| = |\{ (a,b)
\in I(\ell^{k}) : {\rm ord}_{\ell}(a - 1) = n \text{ and } \ell^{n-t+1}
\nmid b \}|$. We find that there are $\ell^{2k -2n + t - 2} (\ell -
1) (\ell^{n-t+1} - 1)$ elements of $I(\ell^{k})$ satisfying these
properties if $n \geq t$, and $0$ if $n < t$. Summing from $n = t$
to $k-1$ we get \[
|\mathcal{D}_{k,t,\ell}| = \frac{\ell^{2k-t+1} - \ell^{k+1} + \ell^{t} - \ell^{k}}{\ell + 1} \]
and dividing by $|I(\ell^{k})| = \ell^{2k-1} (\ell - 1)$ gives the desired result. \end{proof}
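The count in the proof of Lemma~\ref{countprime} can be double-checked by brute force for small parameters. The following sketch (ours, not part of the paper) enumerates the pairs $(a,b)$ with $a$ a unit mod $\ell^{k}$, ${\rm ord}_{\ell}(a-1) = n$ for some $t \leq n \leq k-1$, and $\ell^{n-t+1} \nmid b$, and compares the total with the closed form before dividing by $|I(\ell^{k})|$:

```python
from itertools import product

# Brute-force check of the count in the proof of Lemma "countprime":
# pairs (a, b) mod l^k with a a unit, ord_l(a-1) = n for some t <= n <= k-1,
# and l^(n-t+1) not dividing b, should total
#   (l^(2k-t+1) - l^(k+1) + l^t - l^k) / (l + 1).
def ord_l(x, l):
    if x == 0:
        return None  # "infinite" valuation (a = 1)
    n = 0
    while x % l == 0:
        x //= l
        n += 1
    return n

def brute_count(l, k, t):
    mod = l**k
    count = 0
    for a, b in product(range(mod), repeat=2):
        if a % l == 0:                      # a must be a unit mod l^k
            continue
        n = ord_l((a - 1) % mod, l)
        if n is None or n >= k or n < t:    # need t <= n <= k-1
            continue
        if b % l**(n - t + 1) != 0:         # l^(n-t+1) does not divide b
            count += 1
    return count

def closed_form(l, k, t):
    return (l**(2*k - t + 1) - l**(k + 1) + l**t - l**k) // (l + 1)

assert brute_count(3, 3, 1) == closed_form(3, 3, 1) == 156
assert brute_count(3, 3, 2) == closed_form(3, 3, 2) == 36
assert brute_count(5, 2, 1) == closed_form(5, 2, 1) == 80
```

For $(\ell,k,t)=(3,3,1)$, for instance, $n=1$ contributes $6 \cdot 18 = 108$ pairs and $n=2$ contributes $2 \cdot 24 = 48$, matching the closed form $624/4 = 156$.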
We are now able to prove Theorem~\ref{BAconj2} using similar reasoning.
\begin{proof}[Proof of Theorem~\ref{BAconj2}]
Fix a prime $q \ne 2$ and integers $i$ and $j$ with $i \geq j \geq
0$. By Lemma~\ref{Zplem}, we have ${\rm ord}_{q}(Z(p)) =
{\rm ord}_{q}(|\alpha|)$ for $\alpha = (3/2,1/2) \in G(\mathbb{F}_{p})$. Let $k =
i+1$ and $p \ne 2, 5, q$ be a prime. As in the proof of Lemma~\ref{countprime}, we have ${\rm ord}_{q}(p -
\epsilon_{p}) = i$ and ${\rm ord}_{q}(Z(p)) = j$ if and only if, whenever $\sigma \in \artin{K_{q^{k}}/\mathbb{Q}}{p}$, \[
{\rm ord}_{q}(a_{\sigma} - 1) = i \text{ and } {\rm ord}_{q}(b_{\sigma})
= i-j, \] (with the exception that when $j = 0$, there is a $q^{n}$th preimage of $\alpha$ for all $n$ and so ${\rm ord}_{q}(b_{\sigma}) \geq i$). When $i$ and $j$ are both positive, there are $(q-1)^{2} q^{2k-2i+j-2}$ pairs of $(a_{\sigma},b_{\sigma})$. When $i = 0$, there are $(q-2) q^{2k-2}$ pairs, and when $i \geq 1$ and $j = 0$, there are $(q-1) q^{2k-2i-1}$ pairs. Applying the Chebotarev density theorem proves the desired result. \end{proof}
Now, we turn to the composite case. Let $m = \prod_{i=1}^{r} \ell_{i}^{s_{i}}$. Take $M = \prod_{i=1}^{r} \ell_{i}^{S_{i}}$ to be a multiple of $m$. The discussion following the proof of Lemma~\ref{artingrouporder} now implies the following statement. If $p \not\equiv \pm 1 \pmod{\ell_{i}^{S_{i}}}$ for all $i$, $1 \leq i \leq r$, then $m$ divides the order of $\alpha$ in $G(\mathbb{F}_{p})$ if and only if for all $\sigma \in \artin{K_{M}/\mathbb{Q}}{p}$, \[
\sigma|_{K_{\ell_{i}^{S_{i}}}} \in \mathcal{D}_{S_{i},s_{i},\ell_{i}}
\text{ for all } i, 1 \leq i \leq r. \] In the case that $m$ is coprime to $10$, it follows from Lemma~\ref{surj} that ${\rm Gal}(K_{M}/\mathbb{Q}) \cong \prod_{i=1}^{r} {\rm Gal}(K_{\ell_{i}^{S_{i}}}/\mathbb{Q})$, and the fraction of elements $\sigma$ that are in $\mathcal{D}_{S_{i},s_{i},\ell_{i}}$ for all $i$ is \[
\prod_{i=1}^{r} \frac{|\mathcal{D}_{S_{i},s_{i},\ell_{i}}|}{|{\rm Gal}(K_{M}/\mathbb{Q})|}. \] In the case that $m$ is a multiple of $10$, ${\rm Gal}(K_{M}/\mathbb{Q})$ is an index 2 subgroup of the direct product and counting is more tricky. For this reason, we now define for $t_{1} \geq 1$ and $t_{2} \geq 1$ \begin{align*}
& \mathcal{D}_{k,t_{1},t_{2},10}
= \left\{ \sigma \in {\rm Gal}(K_{10^{k}}/\mathbb{Q}) :
a_{\sigma} \not\equiv 1 \pmod{2^{k}} \text{
and } a_{\sigma} \not\equiv 1 \pmod{5^{k}} \text{ and if }\right.\\
& \left. n_{1} = {\rm ord}_{2}(a_{\sigma} - 1), n_{2} = {\rm ord}_{5}(a_{\sigma} - 1),
\text{ then } \sigma \not\in \mathcal{C}_{k,n_{1}-t_{1}+1,2}
\text{ and } \sigma \not\in \mathcal{C}_{k,n_{2}-t_{2}+1,5} \right\}. \end{align*} For $t_{1} = 0$, we omit the condition $\sigma \not\in \mathcal{C}_{k,n_{1}-t_{1}+1,2}$.
\begin{lem} \label{count25} Assume the notation above. For $0 \leq t_{1} < k$ and $1 \leq t_{2} < k$, we have \begin{align*}
\frac{|\mathcal{D}_{k,t_{1},t_{2},10}|}{|{\rm Gal}(K_{10^{k}}/\mathbb{Q})|}
&= \frac{25}{36 \cdot 2^{t_{1}} 5^{t_{2}}}
- \frac{5}{6 \cdot 2^{t_{1}} 5^{k}} +
\frac{5}{36 \cdot 2^{t_{1}} 5^{2k-t_{2}}} + \frac{5}{2 \cdot 10^{k}}
- \frac{5}{12 \cdot 2^{k} 5^{2k-t_{2}}}\\
&- \frac{25}{12 \cdot 5^{t_{2}} 2^{k}}
+ \frac{5}{18 \cdot 2^{2k - t_{1}} 5^{2k-t_{2}}} +
\frac{25}{18 \cdot 2^{2k-t_{1}} 5^{t_{2}}} - \frac{5}{3 \cdot 2^{2k-t_{1}} 5^{k}}, \end{align*} when $t_{1} > 0$, and \begin{align*}
\frac{|\mathcal{D}_{k,0,t_{2},10}|}{|{\rm Gal}(K_{10^{k}}/\mathbb{Q})|}
&= \frac{25}{9 \cdot 5^{t_{2}}} -
\frac{25}{9 \cdot 5^{t_{2}} \cdot 4^{k}}
+ \frac{5}{9 \cdot 5^{2k-t_{2}}} - \frac{10}{3 \cdot 5^{k}}
- \frac{5}{9 \cdot 4^{k} \cdot 5^{2k-t_{2}}}
+ \frac{10}{3 \cdot 20^{k}}. \end{align*} \end{lem} \begin{proof}
Arguing as in Lemma~\ref{countprime}, $|\mathcal{D}_{k,t_{1},t_{2},10}|$ is the number of $ax+b \in I(10^{k})$ that satisfy \begin{align*}
& b \text{ even } \iff a \equiv 1, 4 \pmod{5},\\
& a \not\equiv 1 \pmod{2^{k}}, a \not\equiv 1 \pmod{5^{k}}\\
& {\rm ord}_{2}(a-1) = n_{1}, {\rm ord}_{5}(a - 1) = n_{2},\\
& 2^{n_{1} - t_{1} + 1} \nmid b, \text{ and } 5^{n_{2} - t_{2} + 1} \nmid b, \end{align*} provided $t_{1} > 0$. In the case that $b$ is odd, $a \equiv 2, 3 \pmod{5}$ and $t_{2} = 0$. In the case that $b$ is even and $a \equiv 1, 4 \pmod{5}$ there are $2^{k-n_{1} - 1} \cdot 4 \cdot 5^{k - n_{2} -
1}$ choices for $a$ and $(2^{k-1} - 2^{k - n_{1} + t_{1} - 1}) (5^{k} - 5^{k - n_{2} + t_{2} - 1})$ choices for $b$. Summing over $n_{1}$ and $n_{2}$ gives \[
\sum_{n_{1} = t_{1}}^{k-1} \sum_{n_{2} = t_{2}}^{k-1}
4 \cdot 2^{k - n_{1} - 1} \cdot 5^{k - n_{2} - 1}
\left(2^{k-1} - 2^{k - n_{1} + t_{1} - 1}\right)
\left(5^{k} - 5^{k - n_{2} + t_{2} - 1}\right). \] This sum of four geometric series is easily evaluated to give the stated answer. In the case that $t_{1} = 0$, there are again $2^{k-n_{1}-1} \cdot 4 \cdot 5^{k-n_{2}-1}$ choices for $a$, and $2^{k-n_{1}} \cdot (5^{k} - 5^{k-n_{2}+t_{2} - 1})$ choices for $b$. Summing yields the stated result. \end{proof}
We are now ready to prove the main result. \begin{proof}[Proof of Theorem~\ref{main}]
Let $m$ be a positive integer with $\gcd(m,10) = 1$ and fix an
$\epsilon > 0$. Let $m = \prod_{i=1}^{r} \ell_{i}^{s_{i}}$ be the
prime factorization of $m$ and note that $\lim_{k \to \infty}
\frac{|\mathcal{D}_{k,t,\ell}|}{|{\rm Gal}(K_{\ell^{k}}/\mathbb{Q})|} =
\zeta(\ell^{t})$ by Lemma~\ref{countprime}. Choose a positive real number $\eta$ small enough
so that $1 - \frac{\epsilon}{2} \leq (1 - \eta)^{r}$ and $(1 +
\eta)^{r} \leq 1 + \frac{\epsilon}{2}$. Now, let $M =
\prod_{i=1}^{r} \ell_{i}^{S_{i}}$ be chosen with each $S_{i}$
sufficiently large that \[
\zeta(\ell_{i}^{s_{i}}) - \eta \leq \frac{|\mathcal{D}_{S_{i},s_{i},\ell_{i}}|}{|{\rm Gal}(K_{\ell_{i}^{S_{i}}}/\mathbb{Q})|} \leq \zeta(\ell_{i}^{s_{i}}) - \frac{2}{(\ell_{i} - 1) \ell_{i}^{S_{i}}} + \eta \]
for $1 \leq i \leq r$. Combining the Chebotarev density theorem with the observation that $\ell_{i}^{s_{i}}$ divides the order of $\alpha \in G(\mathbb{F}_{p})$ if and only if $\artin{K_{\ell_{i}^{S_{i}}}/\mathbb{Q}}{p} \subseteq \mathcal{D}_{S_{i},s_{i},\ell_{i}}$ (provided $p \not\equiv \pm 1 \pmod{\ell_{i}^{S_{i}}}$) we obtain that the number of primes $p \leq x$ for which $m$ divides $|\alpha|$ satisfies \[
-\epsilon/2 + \prod_{i=1}^{r} \frac{|\mathcal{D}_{S_{i},s_{i},\ell_{i}}|}{|{\rm Gal}(K_{\ell_{i}^{S_{i}}}/\mathbb{Q})|} \leq \frac{\# \{ p \leq x : m | |\alpha| \}}{\pi(x)}
\leq \prod_{i=1}^{r} \frac{|\mathcal{D}_{S_{i},s_{i},\ell_{i}}|}{|{\rm Gal}(K_{\ell_{i}^{S_{i}}}/\mathbb{Q})|}
+ \frac{2}{(\ell_{i} - 1) \ell_{i}^{S_{i} - 1}} + \epsilon/2 \] provided $x$ is sufficiently large. We have that \[
\zeta(m) (1 - \epsilon/2) \leq \prod_{i=1}^{r} \left(\zeta(\ell_{i}^{s_{i}}) - \eta\right) \leq \prod_{i=1}^{r} \frac{|\mathcal{D}_{S_{i},s_{i},\ell_{i}}|}{|{\rm Gal}(K_{\ell_{i}^{S_{i}}}/\mathbb{Q})|} \leq \prod_{i=1}^{r} \left(\zeta(\ell_{i}^{s_{i}}) + \eta\right) \leq
\zeta(m) (1 + \epsilon/2), \] which implies that \[
(\zeta(m) - \epsilon) \pi(x) \leq \# \{ p \leq x : m | |\alpha| \} \leq (\zeta(m) + \epsilon) \pi(x), \] provided $x$ is large enough. Combining this with Lemma~\ref{Zplem} proves Theorem~\ref{main} in the case that $\gcd(m,10) = 1$. The case that $\gcd(m,10) > 1$ is similar with two notable differences: there is extra complexity in dealing with the primes $2$ and $5$ using Lemma~\ref{count25}, and Lemma~\ref{Zplem}
shows that $2 | Z(p)$ if and only if $|\alpha|$ is odd. \end{proof}
\end{document}
\begin{document}
\begin{center} \bf Geometric angle structures on triangulated surfaces \end{center}
\begin{center} Ren Guo \end{center}
\noindent {\bf Abstract} In this paper we characterize a function defined on the set of edges of a triangulated surface such that there is a spherical angle structure having the function as the edge invariant (or Delaunay invariant). We also characterize a function such that there is a hyperbolic angle structure having the function as the edge invariant.
\noindent \S 1. {\bf Introduction}
Suppose $S$ is a closed surface and $T$ is a triangulation of $S$. Here by a triangulation we mean the following: take a finite collection of triangles and identify their edges in pairs by homeomorphism. Let $V, E, F$ be the sets of all vertices, edges and triangles in $T$ respectively. If $a, b$ are two simplices in triangulation $T$, we use $a<b$ to denote that $a$ is a face of
$b$. Let $C(S,T)=\{ (e, f) | e \in E, f \in F,$ such that $e < f\}$ be the set of all \it corners \rm of the triangulation. An \it angle structure \rm on a triangulated surface $(S,T)$ assigns each corner of $(S,T)$ a number in $(0, \pi)$. A \it Euclidean (or hyperbolic, or spherical) angle structure \rm is an angle structure so that each triangle with the angle assignment is Euclidean (or hyperbolic, or spherical). More precisely, a Euclidean angle structure is a map $x: C(S,T)\to (0, \pi)$ assigning every corner $i$ (for simplicity of notation, we use one letter to denote a corner) a positive number $x_i$ such that $x_i+x_j+x_k=\pi$ whenever $i,j,k$ are three corners of a triangle. A hyperbolic angle structure is a map $x: C(S,T)\to (0, \pi)$ such that $x_i+x_j+x_k<\pi$ whenever $i,j,k$ are three corners of a triangle. A spherical angle structure is a map $x: C(S,T)\to (0, \pi)$ such that \begin{equation} \label{1} \left\{ \begin{array}{ccc} x_i+x_j+x_k>\pi\\ x_j+x_k-x_i<\pi. \end{array} \right. \end{equation}
Actually it is proved in {\bf [B]} that positive numbers $x_i,x_j,x_k$ are the three inner angles of a spherical triangle if and only if they satisfy condition (\ref{1}).
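As a small illustration of condition (1) (our own check, not part of the paper), the snippet below tests whether a triple of positive numbers can be the angles of a spherical triangle: the all-right-angle octant triangle passes, a Euclidean triple fails the first inequality, and a triple with two nearly straight angles fails the second.

```python
from math import pi

# Condition (1) characterizing spherical angle triples (after [B]):
# all angles in (0, pi), x_i + x_j + x_k > pi, and x_j + x_k - x_i < pi
# for each choice of the distinguished corner i.
def is_spherical_triple(x, y, z):
    angles = (x, y, z)
    if not all(0 < a < pi for a in angles):
        return False
    if x + y + z <= pi:
        return False
    # sum(angles) - 2*a is x_j + x_k - x_i when a plays the role of x_i
    return all(sum(angles) - 2*a < pi for a in angles)

assert is_spherical_triple(pi/2, pi/2, pi/2)      # octant triangle
assert not is_spherical_triple(pi/3, pi/3, pi/3)  # Euclidean: sum equals pi
assert not is_spherical_triple(3.0, 3.0, 0.2)     # violates x_j + x_k - x_i < pi
```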
Given an angle structure $x: C(S,T)\to (0, \pi)$, we define its \it edge invariant \rm which is a function $D_x: E \to (0,2\pi)$ such that $D_x(e)=x_i+x_{i'}$ where $i=(e,f),i'=(e,f')$ are two opposite corners facing the edge $e$. And we define its \it Delaunay invariant \rm which is a function $\mathcal{D}_x: E \to (-2\pi,2\pi)$ such that $\mathcal{D}_x(e)=x_j+x_k+x_{j'}+x_{k'}-x_i-x_{i'}$ where $i=(e,f),i'=(e,f')$ are two opposite corners facing the edge $e$ and $j,k$(or $j',k'$) are the other two corners of the triangle $f$ (or $f'$).
For the simplicity of notation, we use $G$ to denote a fixed geometry, where $G=E,H$ or $S$ means the Euclidean, hyperbolic or spherical geometry respectively. Now given a function $D: E \to (0,2\pi)$ (or $\mathcal{D}: E \to (-2\pi,2\pi)$), we use $AG(S,T;D)$ (or $AG(S,T;\mathcal{D})$) to denote the set of all $G$ angle structures having $D$ (or $\mathcal{D}$) as the edge (or Delaunay) invariant.
The motivation of considering these sets is the study of \it geometric cone metrics \rm with prescribed edge invariant or Delaunay invariant on triangulated surfaces from the variational point of view. A \it Euclidean (or hyperbolic, or spherical) cone metric \rm assigns each edge in $T$ a positive number such that the numbers on any three edges of a triangle in $T$ form the three edge lengths of a Euclidean (or hyperbolic, or spherical) triangle. The variational method contains a variational problem and a linear programming problem. The variational problem is to show that the unique maximal point of a convex ``capacity'' defined on the set $AG(S,T;D)$ (or $AG(S,T;\mathcal{D})$) gives the unique geometric cone metric. The linear programming problem is to characterize the functions $D$ (or $\mathcal{D}$) such that the set $AG(S,T;D)$ (or $AG(S,T;\mathcal{D})$) is nonempty.
For Euclidean angle structures, the Delaunay invariant and the edge invariant are related by $2D_x(e)+\mathcal{D}_x(e)=2\pi$ for any $e$. Thus given two functions $D$ and $\mathcal{D}$ satisfying $2D(e)+\mathcal{D}(e)=2\pi$ for any $e$, we have $AE(S,T;D)=AE(S,T;\mathcal{D}).$ Therefore the problem of Euclidean cone metrics with given edge invariant is equivalent to the problem of Euclidean cone metrics with given Delaunay invariant. Rivin {\bf[Ri1]} {\bf[Ri2]} worked out the variational problem and the linear programming problem about $AE(S,T;D).$ Leibon {\bf[Le]} worked out the variational problem and the linear programming problem about $AH(S,T;\mathcal{D}).$ Luo {\bf[Lu]} worked out the variational problem about $AS(S,T;D)$; the corresponding linear programming problem will be solved in this paper (theorem 1). Although the variational problems about $AH(S,T;D)$ and $AS(S,T;\mathcal{D})$ are still open, we will solve the linear programming problems about them in this paper (theorems 2 and 3).
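The relation $2D_x(e)+\mathcal{D}_x(e)=2\pi$ follows because the six corners of the two triangles glued along $e$ sum to $2\pi$ in the Euclidean case. A quick check (ours, in units of $\pi$, with arbitrarily chosen Euclidean angle triples):

```python
from fractions import Fraction

# Check (in units of pi) that for a Euclidean angle structure the two
# invariants at an interior edge satisfy 2*D_x(e) + Delaunay_x(e) = 2*pi.
# Corners of the triangles f, f' glued along e are (i, j, k) and (i', j', k'),
# with i, i' the corners facing e; each Euclidean triple sums to pi (= 1 here).
xi, xj, xk = Fraction(1, 4), Fraction(1, 3), Fraction(5, 12)    # sums to 1
xi2, xj2, xk2 = Fraction(1, 2), Fraction(1, 6), Fraction(1, 3)  # sums to 1

assert xi + xj + xk == 1 and xi2 + xj2 + xk2 == 1
D = xi + xi2                          # edge invariant D_x(e)
Del = xj + xk + xj2 + xk2 - xi - xi2  # Delaunay invariant
assert 2*D + Del == 2                 # i.e. 2*D_x(e) + Delaunay_x(e) = 2*pi
```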
The main results are the following. For a triangulated surface $(S,T)$ and a subset $X\subseteq F,$ we use $|X|$ to denote the number of triangles in $X$ and we use $E(X)$ to denote the set of all edges of triangles in $X.$
\noindent {\bf Theorem 1.} \it Given a triangulated surface $(S,T)$ and a function $D: E\to (0,\pi)$, the set $AS(S,T;D)$ is nonempty if and only if for any subset $X\subseteq F,$
$$\pi |X|< \sum _{e\in E(X)}D(e).$$ \rm
\noindent {\bf Theorem 2.} \it Given a triangulated surface $(S,T)$ and a function $D: E\to (0,2\pi)$, the set $AH(S,T;D)$ is nonempty if and only if for any subset $X\subset F,$
$$\pi(|F|-|X|)> \sum _{e\notin E(X)}D(e).$$ \rm
\noindent {\bf Theorem 3.} \it Given a triangulated surface $(S,T)$ and a function $\mathcal{D}: E\to (-2\pi,2\pi)$, the set $AS(S,T;\mathcal{D})$ is nonempty if and only if for any subset $X\subset F,$
$$\pi(|F|-|X|)> \sum _{e\notin E(X)}(\pi-\frac12\mathcal{D}(e)).$$ \rm
The paper is organized as follows. In section 2, we prove theorem 1 by using Leibon's result. In section 3, we recall the duality theorem in linear programming. In section 4, following Rivin's method, we prove theorem 2 and 3 by using the duality theorem.
\noindent{\bf Acknowledgement} I wish to thank my advisor, Professor Feng Luo, for suggesting this problem and for fruitful discussion.
\noindent \S 2. {\bf Proof of theorem 1}
First let us recall Leibon's characterization of the functions $\mathcal{D}$ such that the set $AH(S,T;\mathcal{D})$ is nonempty.
\noindent {\bf Theorem 4.}(Leibon){\bf[Le]} \it Given a triangulated surface $(S,T)$ and a function $\mathcal{D}: E\to (0,2\pi)$, the set $AH(S,T;\mathcal{D})$ is nonempty if and only if for any subset $X\subseteq F,$
$$\pi |X|< \sum _{e\in E(X)}(\pi-\frac12\mathcal{D}(e)).$$ \rm
\noindent {\bf Proof of theorem 1.} To show the conditions are necessary, for any $X\subseteq F$, we have $\sum _{e\in E(X)}D(e)=\sum_{e\in E(X)}(x_i+x_{i'}),$ where $i,i'$ are the two opposite corners facing the edge $e$. The right hand side of the equation is equal to $\sum_{f\in X}(x_i+x_j+x_k)+\sum x_h,$ where the last sum runs over the corners $h=(e,f^*)$ with $e\in E(X)$ and $f^*\notin X.$ Hence $\sum _{e\in E(X)}D(e)\geq
\sum_{f\in X}(x_i+x_j+x_k)> \sum_{f\in X}\pi= \pi|X|.$
To show the conditions are sufficient, let us define a function $\mathcal{D}: E \to (0,2\pi)$ by setting
$\mathcal{D}(e)=2\pi-2D(e).$ Thus the conditions $\pi |X|< \sum _{e\in E(X)}D(e)$ are equivalent to $\pi |X|< \sum _{e\in E(X)}(\pi-\frac12\mathcal{D}(e))$, which guarantees that $AH(S,T;\mathcal{D})$ is nonempty by theorem 4. It follows that there is a solution of the inequalities $$ \left\{ \begin{array}{ccc} x_i+x_j+x_k< \pi & \ i,j,k\ \mbox{are three corners of a triangle}\\ x_j+x_k+x_{j'}+x_{k'}-x_i-x_{i'}=\mathcal{D}(e) \\ x_i> 0 \end{array} \right. $$
Let us define new variables $y_i$ for all $i \in C(S,T)$ by setting $$y_i=\frac{\pi+x_i-x_j-x_k}{2}$$ provided $i,j,k$ are three corners of a triangle. And since $\mathcal{D}(e)=2\pi-2D(e),$ the inequalities above are equivalent to $$ \left\{ \begin{array}{ccc} y_i+y_j+y_k> \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\ y_i+y_{i'}=D(e)& \ i,i'\ \mbox{are two opposite corners facing an edge}\ e \\ y_j+y_k< \pi & \ j,k\ \mbox{are two corners of a triangle}\\ \end{array} \right. $$
This solution obviously satisfies $$\left\{ \begin{array}{ccc} y_i+y_j+y_k> \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\ y_i+y_{i'}=D(e)& \ i,i'\ \mbox{are two opposite corners facing an edge}\ e \\ y_j+y_k-y_i< \pi & \ i,j,k\ \mbox{are three corners of a triangle}\\ y_i> 0\\ \end{array} \right. $$ Thus we obtain an angle structure in $AS(S,T;D)$. QED
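The substitution $y_i=\frac{\pi+x_i-x_j-x_k}{2}$ used in this proof can be tested numerically on random hyperbolic angle triples (a check of ours, per triangle, ignoring the gluing constraints):

```python
import random
from math import pi

# Randomized per-triangle check of the substitution in the proof of theorem 1:
#   y_i = (pi + x_i - x_j - x_k) / 2.
# If (x_i, x_j, x_k) are hyperbolic angles (positive, sum < pi), the y's
# satisfy the spherical conditions y_i + y_j + y_k > pi and
# y_j + y_k - y_i < pi, with y_i > 0.
random.seed(0)
for _ in range(1000):
    x = [random.uniform(0.01, 1.0) for _ in range(3)]
    if sum(x) >= pi:        # keep only hyperbolic triples
        continue
    y = [(pi + x[i] - x[(i+1) % 3] - x[(i+2) % 3]) / 2 for i in range(3)]
    assert sum(y) > pi                  # y_i + y_j + y_k > pi
    for i in range(3):
        assert y[i] > 0
        assert sum(y) - 2*y[i] < pi     # y_j + y_k - y_i < pi
```

Indeed $y_i+y_j+y_k=\frac{3\pi-(x_i+x_j+x_k)}{2}>\pi$ exactly when $x_i+x_j+x_k<\pi$, so the check mirrors the computation in the proof.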
\noindent \S 3. {\bf Duality Theorem}
We fix the notations as follows: $x=(x_1, ..., x_n)^t$ is a column vector in $ \mathbf{R}^n$. The standard inner product in $ \mathbf{R}^n$ is denoted by $a^t x$. If $A: \mathbf{R}^n \to \mathbf{R}^m$ is a linear transformation, we denote its transpose by $A^t: \mathbf{R}^m \to \mathbf{R}^n.$ Given two vectors $x, a$ in $ \mathbf{R}^n$, we say $x \geq a$ if $x_i \geq a_i$ for all indices $i$. Also $x>a$ means $x_i > a_i$ for all indices $i$.
A linear programming problem $(P)$ is to minimize an \it objective function \rm $z = a^tx$ subject to the \it constraints \rm $$ \left\{ \begin{array}{ccc} Ax=b\\ x \geq 0 \end{array} \right. $$ where $x \in \mathbf{R}^n$, $b\in \mathbf{R}^m$ and $A: \mathbf{R}^n \to \mathbf{R}^m$ is a linear transformation. We call a point $x$ satisfying the constraints a \it feasible solution \rm and denote the set of all the feasible solutions by
$D(P)=\{x \in \mathbf{R}^n|Ax=b,x \geq 0\}.$ An \it optimal solution \rm $x$ for $(P)$ is a feasible solution so that the objective function $z$ realizes the minimal value. The \it dual problem \rm $(P^*)$ of $(P)$ is to maximize $z = b^t y$ subject to $ A^t y \leq a , y\in \mathbf{R}^m$. Let us recall the duality theorem in linear programming. The proof of the theorem can be found in the book {\bf[KB]}.
\noindent {\bf Theorem 5.} \it The following statements are equivalent.
\noindent (a) Problem (P) has an optimal solution.
\noindent (b) $D(P) \neq \emptyset$ and $D(P^*) \neq \emptyset$.
\noindent (c) Both problem $(P)$ and problem $(P^*)$ have optimal solutions so that the minimal value of $(P)$ is equal to the maximal value of $(P^*)$. \rm
In the applications we are interested in, there is a special case where the objective function of $(P)$ is $z =0$. Then an optimal solution exists if and only if $D(P) \neq \emptyset$. Thus we obtain the following corollary.
\noindent {\bf Corollary 6.} For $A: \mathbf{R}^n \to \mathbf{R}^m$ and $b\in \mathbf{R}^m,$ the set $\{ x \in
\mathbf{R}^n |Ax=b, x \geq 0\} \neq \emptyset$ if and only if the maximal value of $z = b^ty$ on $\{ y \in \mathbf{R}^m | A^t y \leq 0\}$ is non-positive.
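Corollary 6 can be illustrated on a toy instance (our own example, not from the paper): take $A=[1\ 1]:\mathbf{R}^2\to\mathbf{R}^1$ and $b=(\beta)$, so the primal set $\{x\geq 0: x_1+x_2=\beta\}$ is nonempty exactly when $\beta\geq 0$, while the dual objective $\beta y$ is maximized over $\{y\leq 0\}$:

```python
# Toy instance of Corollary 6: A = [1 1], b = (beta).
# Primal set {x >= 0 : x1 + x2 = beta} is nonempty iff beta >= 0
# (witness x = (beta, 0)).  Dual: maximize beta*y subject to
# A^t y <= 0, which here just says y <= 0.
def primal_nonempty(beta):
    return beta >= 0

def dual_max_over_grid(beta, lo=-50.0, steps=501):
    # crude grid search over y in [lo, 0]; the true sup is 0 when beta >= 0
    # (attained at y = 0) and grows without bound as y -> -inf when beta < 0
    best = float("-inf")
    for s in range(steps):
        y = lo + s * (0.0 - lo) / (steps - 1)
        best = max(best, beta * y)
    return best

for beta in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert primal_nonempty(beta) == (dual_max_over_grid(beta) <= 0)
```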
\noindent \S 4. {\bf Proof of theorems 2 and 3}
By following Rivin's method in {\bf [Ri2]}, we will prove a lemma about the closure of $AH(S,T;D)$ in $ \mathbf{R}^{3|F|}=\{(x_i)^t, i\in C(S,T)\}$. The closure of $AH(S,T;D)$ consists of all the points satisfying $$ \left\{ \begin{array}{ccc} x_i+x_j+x_k \leq \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\ x_i+x_{i'}=D(e) & \ i,i'\ \mbox{are two opposite corners facing an edge}\ e\\ x_i\geq 0 \\ \end{array} \right. $$
\noindent {\bf Lemma 7.} Given a triangulated surface $(S,T)$ and a function $D: E\to [0,2\pi]$, the closure of $AH(S,T;D)$ is nonempty if and only if for any subset $X\subset F,$
$$\pi(|F|-|X|)\geq \sum _{e\notin E(X)}D(e).$$ \rm
\noindent {\bf Proof.} The linear programming problem $(P)$ with variables $x=(...,x_i,...,t_f,...)$ indexed by $C(S,T)\cup F$ is to minimize the objective function $z = 0$ subject to the constraints $$ \left\{ \begin{array}{ccc} x_i+x_j+x_k+ t_f=\pi& \ i,j,k\ \mbox{are three corners of a triangle}\ f\\ x_i+x_{i'}=D(e)& \ i,i'\ \mbox{are two opposite corners facing an edge}\ e\\ x_i\geq 0 \\ t_f\geq 0 \end{array} \right. $$ The dual problem $(P^*)$ with variable $y =( ...,y_f, ..., y_e, ...)$ indexed by $E \cup F$ is to maximize the objective function $z = \sum_{f \in F} \pi y_f+ \sum_{e \in E} D(e) y_e$ subject to the constraints $$ \left \{ \begin{array}{ccc} y_f \leq 0&\\ y_f + y_e \leq 0&\ \mbox{whenever}\ e < f. \end{array} \right. $$ Since the closure of $AH(S,T;D)$ being nonempty is equivalent to the set $D(P)$ being nonempty, by corollary 6 the latter is equivalent to the maximal value of the objective function of $(P^*)$ being non-positive.
To show the conditions $\pi(|F|-|X|)\geq \sum _{e\notin E(X)}D(e)$ for any $X\subset F$ are necessary, for any $X\subset F,$ let $$y_f= \left\{ \begin{array}{ccc} 0&\mbox{if}\ f\in X \\ -1&\mbox{if}\ f\notin X \end{array} \right.\ \mbox{and}\ y_e= \left\{ \begin{array}{ccc} 0&\mbox{if}\ e\in E(X) \\ 1&\mbox{if}\ e\notin E(X) \end{array} \right. $$
We claim that $(y_f, y_e)$ is a feasible solution. In fact, given a pair $e<f,$ if $f\in X$, we must have $e\in E(X)$, then $y_f + y_e=0.$ If $f\notin X$, then $y_f + y_e=-1+y_e \leq 0$.
By the assumption that the maximal value of the objective function of $(P^*)$ is non-positive, since $(y_f, y_e)$ is feasible, we have $0\geq z(y_f, y_e) = \sum_{f \notin X} \pi y_f+ \sum_{e
\notin E(X)} D(e) y_e = \pi(|X|-|F|)+ \sum _{e\notin E(X)}D(e).$
To show the conditions are sufficient, take an arbitrary feasible solution $(y_f, y_e)$. If $y_f=0$ for all $f$, from $y_f+y_e\leq 0$, we know $y_e\leq 0$. Hence $z(y_f,y_e) = \sum _{e\in E}D(e)y_e \leq 0$, since $D(e)\in [0,2\pi].$ Otherwise, define
$X=\{f\in F | y_f = 0\}\subset F$, and let $a=\max\{y_f : f\notin X\}$. We have $a < 0$. Define $$y_f^{(1)}= \left\{ \begin{array}{ccc} y_f=0&\mbox{if}\ f\in X \\ y_f-a&\mbox{if}\ f\notin X \end{array} \right.\ \mbox{and}\ y_e^{(1)}= \left\{ \begin{array}{ccc} y_e&\mbox{if}\ e\in E(X) \\ y_e+a&\mbox{if}\ e\notin E(X) \end{array} \right. $$
We claim that $(y_f^{(1)},y_e^{(1)})$ is a feasible solution. In fact, $y_f^{(1)}\leq 0$. Given a pair $e<f,$ if $f\in X$, we must have $e\in E(X)$, then $y_f^{(1)}+y_e^{(1)}=y_f+y_e\leq 0$. If $f\notin X$ and $e\notin E(X)$, then $y_f^{(1)}+y_e^{(1)}=y_f-a+y_e+a\leq 0$. If $f\notin X$ but $e\in E(X)$, there exists another triangle $f'\in X$ so that $e<f'$; since $y_{f'}=0$, feasibility gives $y_e=y_e+y_{f'}\leq 0$. Therefore $y_f^{(1)}+y_e^{(1)} = y_f-a+y_e\leq y_f-a \leq 0$, since $a$ is the maximum.
Now the value of the objective function is
$z(y_f^{(1)},y_e^{(1)})= z(y_f,y_e)+ a(\pi (|X|-|F|)+\sum _{e\notin E(X)}D(e))\geq z(y_f,y_e)$, according to the conditions. Note the number of 0's in $\{y_f^{(1)}\}$ is more than that in $\{y_f\}$. By the same procedure, after finitely many steps, it ends at a feasible solution $(y_f^{(n)}=0,y_e^{(n)})$. We have $z(y_f^{(n)},y_e^{(n)})\leq 0$. Since the value of the objective function does not decrease at any step, we conclude $0\geq z(y_f^{(n)},y_e^{(n)})\geq \ldots \geq z(y_f^{(1)},y_e^{(1)}) \geq z(y_f, y_e)$. QED
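Lemma 7's condition can be checked concretely on the smallest closed example (our own illustration): the boundary of a tetrahedron, with the constant edge invariant $D(e)=\frac{2\pi}{3}$ realized by the closure point where every corner angle is $\frac{\pi}{3}$. Working in units of $\pi$ and enumerating all subsets $X$ of faces:

```python
from fractions import Fraction
from itertools import combinations

# Check of Lemma 7's condition (in units of pi) on the boundary of a
# tetrahedron: 4 triangles, 6 edges, constant D(e) = 2*pi/3, realized by
# the closure point with every corner angle pi/3.
V = range(4)
F = [frozenset(V) - {v} for v in V]              # four triangular faces
def edges(face):
    return {frozenset(p) for p in combinations(sorted(face), 2)}
E = set().union(*(edges(f) for f in F))          # six edges
assert len(F) == 4 and len(E) == 6

D = {e: Fraction(2, 3) for e in E}               # D(e) = 2/3 (times pi)
# Lemma 7 (divided by pi): for every subset X of F,
#   |F| - |X|  >=  sum over e not in E(X) of D(e).
for r in range(len(F) + 1):
    for X in combinations(range(len(F)), r):
        EX = set().union(*(edges(F[i]) for i in X)) if X else set()
        lhs = Fraction(len(F) - len(X))
        rhs = sum((D[e] for e in E - EX), Fraction(0))
        assert lhs >= rhs, (X, lhs, rhs)
```

For $X=\emptyset$ the condition holds with equality ($4 = 6\cdot\frac{2}{3}$), consistent with this $D$ lying only in the closure of $AH(S,T;D)$ rather than in the open set.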
\noindent {\bf Proof of theorem 2.} Let $x_i=a_i+\varepsilon$ for any $i \in C(S,T),$ where $a_i\geq 0$ and $\varepsilon\geq 0.$ The linear programming problem $(P)$ with variables $\{...,a_i,...,\varepsilon\}$ is to minimize the objective function $z = -\varepsilon$ subject to the constraints $$ \left\{ \begin{array}{ccc} a_i+a_j+a_k+3\varepsilon\leq\pi& \ i,j,k\ \mbox{are three corners of a triangle}\\ a_i+a_j+2\varepsilon=D(e)& \ i,j\ \mbox{are two opposite corners facing an edge}\ e \\ a_i\geq 0\\ \varepsilon \geq 0 \end{array} \right. $$ The dual problem $(P^*)$ with variable $y =( ...,y_f, ..., y_e, ...)$ indexed by $E \cup F$ is to maximize the objective function $z = \sum_{f \in F} \pi y_f+ \sum_{e \in E} D(e) y_e$ subject to the constraints $$ \left\{ \begin{array}{ccc} y_f \leq 0\\ y_f + y_e \leq 0& \mbox{whenever}\ e < f\\ 3\sum_{f \in F} y_f + 2 \sum_{e \in E} y_e \leq -1 \end{array} \right. $$ By theorem 5(c), the maximal value of the objective function of $(P^*)$ being negative is equivalent to the minimal value of the objective function of $(P)$ being negative. The latter is equivalent to the existence of a feasible solution with $a_i\geq 0, \varepsilon> 0$. Therefore the set $AH(S,T;D)$ is nonempty.
We only need to show that the maximal value of the objective function of $(P^*)$ being negative is equivalent to the conditions
$\pi(|F|-|X|)> \sum _{e\notin E(X)}D(e)$ for any $X\subset F.$
To show the conditions are necessary, for any $X\subset F,$ we have $2|E(X)|>3|X|$, i.e. $2|E(X)|\geq 3|X|+1$. Let $$y_f= \left\{ \begin{array}{ccc} 0&\mbox{if}\ f\in X \\ -1&\mbox{if}\ f\notin X \end{array} \right.\ \mbox{and}\ y_e= \left\{ \begin{array}{ccc} 0&\mbox{if}\ e\in E(X) \\ 1&\mbox{if}\ e\notin E(X) \end{array} \right. $$ We claim that $(y_f, y_e)$ is a feasible solution. In fact, as in lemma 7, we can check $y_f+ y_e\leq 0$ for any pair $e<f.$ Furthermore $$3\sum_{f \in F} y_f + 2 \sum_{e \in E} y_e
=3\sum_{f\notin X}(-1)+2\sum_{e\notin E(X)}1=3(|X|-|F|)+2(|E|-|E(X)|)$$
$$=3|X|-2|E(X)|+2|E|-3|F|=3|X|-2|E(X)|\leq-1$$
since $2|E|=3|F|$. Since $(y_f,y_e)$ is feasible and the maximal value of the objective function of $(P^*)$ is negative, we have $z(y_f,y_e)<0$, which is equivalent to $\pi(|F|-|X|)> \sum _{e\notin E(X)}D(e).$
To show the conditions are sufficient, by the proof of lemma 7 we know the maximal value of the objective function of $(P^*)$ is $\leq 0$ under the conditions. We show it cannot be 0. Assume that $(y_f,y_e)$ is a feasible solution satisfying $z(y_f,y_e)=0.$ We claim that $y_f=0$ for all $f$. Otherwise, as in the proof of lemma 7, we can find another feasible solution $(y_f^{(1)},y_e^{(1)})$ and we can check that
$z(y_f^{(1)},y_e^{(1)}) = z(y_f,y_e)+ a(\pi (|X|-|F|)+\sum _{e\notin E(X)}D(e)) > z(y_f,y_e)= 0$, according to the conditions. This is a contradiction since the maximal value of the objective function of $(P^*)$ is $\leq 0.$
Now from $y_f=0$ for all $f$ we see $y_e\leq 0$. Since $0=z(y_f,y_e)=\sum_{e \in E} D(e) y_e$ and $D(e)>0$, we get $y_e=0$ for all $e$ and therefore $(y_f,y_e)= (0,0).$ But $(y_f,y_e)= (0,0)$ does not satisfy $3\sum_{f \in F} y_f + 2 \sum_{e \in E} y_e \leq -1$, contradicting the assumption that $(y_f,y_e)$ is a feasible solution. This proves that the maximal value of the objective function of $(P^*)$ is negative. QED
\noindent {\bf Proof of theorem 3.} Given two functions $D:E\to (0, 2\pi)$ and $\mathcal{D}: E\to (-2\pi,2\pi)$ satisfying $2D(e)+\mathcal{D}(e)=2\pi$ for any $e$, we claim that $AH(S,T;D)\neq \emptyset$ is equivalent to $AS(S,T;\mathcal{D})\neq \emptyset.$ By this claim, theorem 3 is true as a corollary of theorem 2.
In fact, $AS(S,T;\mathcal{D})$ is the set of solutions for the inequalities $$ \left\{ \begin{array}{ccc} x_i+x_j+x_k > \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\ x_j+x_k-x_i < \pi & \ i,j,k\ \mbox{are three corners of a triangle}\\ x_j+x_k+x_{j'}+x_{k'}-x_i-x_{i'}=\mathcal{D}(e) \\ x_i > 0 \end{array} \right. $$
Let us define new variables $y_i$ for all $i\in C(S,T)$ by setting $$y_i=\frac{\pi+x_i-x_j-x_k}{2}$$ provided $i,j,k$ are three corners of a triangle. Since $2D(e)+\mathcal{D}(e)=2\pi,$ we see that the inequalities above are equivalent to $$ \left\{ \begin{array}{ccc} y_i+y_j+y_k < \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\ y_i> 0 \\ y_i+y_{i'}=D(e) & \ i,i'\ \mbox{are two opposite corners facing an edge}\ e\\ y_j+y_k<\pi & \ j,k\ \mbox{are two corners of a triangle}\\ \end{array} \right. $$
Since $y_i+y_j+y_k < \pi$ implies $y_j+y_k<\pi$, we can omit the latter one. Equivalently, we get $$ \left\{ \begin{array}{ccc} y_i+y_j+y_k < \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\ y_i> 0 \\ y_i+y_{i'}=D(e) & \ i,i'\ \mbox{are two opposite corners facing an edge}\ e\\ \end{array} \right. $$
Now the set of solutions of the inequalities above is exactly $AH(S,T;D).$ Thus we see $AH(S,T;D)\neq \emptyset$ is equivalent to $AS(S,T;\mathcal{D})\neq \emptyset.$ QED
\noindent {\bf References}
[B] Marcel Berger, Geometry II. Springer-Verlag, 1987
[KB] Bernard Kolman \& Robert Beck, Elementary Linear Programming with Applications. 2nd edition, Academic Press, 1995
[Le] Gregory Leibon, Characterizing the Delaunay decompositions of compact hyperbolic surfaces. Geom. Topol. 6 (2002), 361-391
[Lu] Feng Luo, A characterization of spherical polyhedron surfaces.\\ http://front.math.ucdavis.edu/math.GT/0408112
[Ri1] Igor Rivin, Euclidean structures on simplicial surfaces and hyperbolic volume. Ann. of Math. (2) 139 (1994), no. 3, 553-580
[Ri2] Igor Rivin, Combinatorial optimization in geometry. Adv. in Appl. Math. 31 (2003), no. 1, 242-271
\noindent Department of Mathematics\\ Rutgers University\\ Piscataway, NJ 08854, USA
\noindent Email: renguo@math.rutgers.edu
\end{document}
\begin{document}
\title{\uppercase{On the Clifford theorem for surfaces}
\footnote{ 2000 \textit{Mathematics Subject Classification}. Primary 14J10; Secondary 14J29. } \footnote{ \textit{Key words and phrases}. Clifford theorem, Clifford index, algebraic surface, moduli.}}
\maketitle
\begin{abstract} We give two generalizations of the Clifford theorem to algebraic surfaces. As an application, we obtain some bounds for the number of moduli of surfaces of general type. \end{abstract}
\section*{Introduction} Classical Brill-Noether theory studies special divisors, or special linear systems, on an algebraic curve, and the Clifford theorem is the first step of the theory (cf. \cite{A}). The main purpose of this paper is to generalize the Clifford theorem to algebraic surfaces.
Let $X$ be a smooth projective complex surface and $L$ a divisor on it. One of the fundamental problems in the surface case is to study the adjoint linear system $|K_X+L|$. Roughly speaking, the behavior of this linear system depends on the positivity of $L$. When $L$ is positive, we have the celebrated method of Reider \cite{Re} (see also \cite{Bom} and \cite{Tan}). When $L$ is zero, the canonical system has been studied systematically by Beauville \cite{Be}. When $L$ is negative, the linear system corresponds to the special divisors on a curve. Precisely, we call a divisor $D$ on $X$ special if it is effective and $h^0(K_X-D)>0$. However, for surfaces, we have no general method to study such special divisors. In order to find a powerful method to study special linear systems in the surface case, we first need to establish a Clifford-type theorem.
One easy generalization of the Clifford theorem is as follows. Let $L$ be a special divisor on $X$. From $h^0(L)+h^0(K_X-L)\leq h^0(K_X)+1$ and the Riemann-Roch theorem, we get \begin{equation}\label{I} h^1(L)\leq q+\frac{1}{2}L(K_X-L),
\end{equation} where $q$ is the irregularity of $X$. If $L=0$ or $L=K_X$, the equality holds. As in the curve case, the nontrivial problem is to characterize the case of equality. Our first result describes the corresponding conditions on the surface and on the divisor $L$. We may assume that $L\nsim0$ and $K_X-L\nsim 0$.
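Spelled out, the Riemann-Roch theorem gives $\chi(L)=\chi(\mathcal{O}_X)+\frac{1}{2}L(L-K_X)$, so by Serre duality $h^2(L)=h^0(K_X-L)$ and $\chi(\mathcal{O}_X)=1-q+p_g$, we obtain \begin{eqnarray*} h^1(L)&=&h^0(L)+h^0(K_X-L)-(1-q+p_g)+\frac{1}{2}L(K_X-L)\\ &\leq&(p_g+1)-(1-q+p_g)+\frac{1}{2}L(K_X-L)=q+\frac{1}{2}L(K_X-L). \end{eqnarray*}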
\begin{theorem}\label{theorem0.2}If the equality in \eqref{I} holds then either $L$ contains a divisor of the movable part of
$|K_X|$, or $L$ is contained in the fixed part of $|K_X|$, or one of the following cases occurs. \begin{enumerate}
\item $|K_X|$ is composed of a rational pencil, and the movable part of
$|L|$ is a sum of some fibers of the pencil.
\item $|K_X|$ is composed of an irrational pencil of elliptic curves. The corresponding elliptic fibration is $f:X\rightarrow C$ with $g(C)\geq2$. There are two line bundles
$A$ and $B$ on $C$ such that $f^*A$ and $f^*B$ are respectively the movable parts of $|L|$ and $|K_X-L|$. The Clifford index of $C$ is less than $2$. Precisely, we have the following possible cases$:$ \begin{enumerate} \item $0\leq\chi(\mathcal{O}_X)\leq2$, $C$ is hyperelliptic and one of $A$ and $B$ is a multiple of $g_2^1;$
\item $\chi(\mathcal{O}_X)=0$, $q=g(C)$, $C$ is a smooth plane quintic and both of $A$ and $B$ are hyperplane sections$;$
\item $\chi(\mathcal{O}_X)=0$, $q=g(C)$, $C$ is trigonal and one of $A$ and $B$ is $g_3^1$. \end{enumerate} \end{enumerate} \end{theorem}
This theorem can be considered as a generalization of the Clifford theorem. We have another type of generalization as follows.
\begin{theorem}\label{theorem0.1} Let $X$ be a smooth minimal complex projective surface of general type, and let $L$ be a special divisor on $X$ such that $L\nsim K_X$. Then $h^0(L)\leq K_XL/2+1$. If the equality holds, then one of the following cases occurs. \begin{enumerate} \item $h^0(L)=1$ and $L$ is a sum of $(-2)$-curves.
\item The movable part of $|L|$ has no base points and $\varphi_L:X\rightarrow \mathbf{P}^1$ is a projective surjective morphism, whose general fiber is an irreducible smooth curve of genus $2$.
\item The movable part of $|L|$ has no base points and $\varphi_L$ is generically $2$ to $1$ onto a surface of minimal degree in $\mathbf{P}^{h^0(L)-1}$. \end{enumerate} \end{theorem}
The organization of the paper is as follows. In Section \ref{sec:1}, we prove Theorem \ref{theorem0.2}. In Section \ref{sec:2}, we give some Clifford type inequalities on a surface (Propositions \ref{proposition1.2} and \ref{proposition1.5}) and prove Theorem \ref{theorem0.1}. In Section \ref{sec:3}, we use these two inequalities to define two indices $\alpha(X)$ and $\beta(X)$ on $X$, in analogy with the Clifford index of a curve. We study some basic properties of $\alpha(X)$ and $\beta(X)$ and give some bounds for them (Propositions \ref{proposition2.4} and \ref{proposition2.6}). In Section \ref{sec:4}, we give a detailed description of $X$ when $\alpha$ and $\beta$ are zero (Theorems \ref{theorem3.1} and \ref{theorem3.2}). In Section \ref{sec:5}, we use our inequalities to give some bounds for the number of moduli of surfaces (Theorem \ref{theorem4.2}).
Throughout the paper, we let $X$ be a smooth complex projective surface and $K_X$ be its canonical divisor. $p_g$ and $q$ denote, respectively, $h^0(K_X)$ and $h^1(\mathcal{O}_X)$. For a divisor $L$
on $X$, we let $\varphi_L$ be the rational map defined by the linear system $|L|$. $|L|$ is said to be composed of a pencil if $\dim\varphi_L(X)=1$. Numerical equivalence between divisors is denoted by $\equiv$ and linear equivalence by $\sim$. $g_d^r$ denotes a linear system of degree $d$ and dimension $r$ on a smooth projective curve. If $E$ is a vector space, we denote by $\mathbb{P}E$ the space of one-dimensional subspaces of $E$.
The author would like to express his appreciation to Professor Sheng-Li Tan for his advice, encouragement and helpful discussions. The author is also grateful to the referee for valuable suggestions and for pointing out grammatical mistakes.
\section{Proof of Theorem \ref{theorem0.2}}\label{sec:1} In this section, we prove Theorem \ref{theorem0.2}. First, we need the following key lemma.
\begin{lemma}\label{lemma2.1} Suppose $Z$ is a projective variety. Let $L$ and $D$ be Cartier divisors on $Z$. Assume $Y$ is an irreducible and reduced closed subscheme of $Z$ and denote by $\mathcal{I}_Y$ the ideal sheaf of $Y$ in $Z$. If $h^0(L)-h^0(\mathcal{I}_Y(L))>0$ and $h^0(D)-h^0(\mathcal{I}_Y(D))>0$, then we have $$h^0(L)-h^0(\mathcal{I}_Y(L))+h^0(D)-h^0(\mathcal{I}_Y(D))\leq h^0(L+D)-h^0(\mathcal{I}_Y(L+D))+1.$$ \end{lemma} \begin{proof} For any Cartier divisor $A$ on $Z$, we have the standard exact sequence $$0\rightarrow \mathcal{I}_Y(A)\rightarrow \mathcal{O}_Z(A)\xrightarrow{r_Y}\mathcal{O}_Y(A)\rightarrow 0,$$ where $r_Y$ is the restriction map. We consider the linear system
$r_Y|A|$ on $Y$:
$$r_Y|A|=\mathbb{P}r_Y(H^0(A))\subset \mathbb{P}H^0(\mathcal{O}_Y(A)).$$ We then define a map \begin{eqnarray*}
\mu:r_Y|L|\times r_Y|D|&\rightarrow& r_Y|L+D|,\\
(L_1,D_1) &\mapsto & L_1+D_1. \end{eqnarray*} It is easy to check that $\mu$ is well defined. Since every element of $r_Y|L+D|$ has only finitely many components, the map $\mu$ is finite. Hence
$$\dim(\text{Im}(\mu))=\dim(r_Y|L|\times r_Y|D|)=h^0(L)-h^0(\mathcal{I}_Y(L))-1+h^0(D)-h^0(\mathcal{I}_Y(D))-1.$$
We know that $$h^0(L+D)-h^0(\mathcal{I}_Y(L+D))-1=\dim r_Y|L+D|\geq\dim(\text{Im}(\mu)).$$ We get our desired inequality. \end{proof}
\begin{remark} If we take $Y$ to be an irreducible and reduced divisor, then the inequality reads $h^0(L)-h^0(L-Y)+h^0(D)-h^0(D-Y)\leq h^0(L+D)-h^0(L+D-Y)+1$. Furthermore, if $Y$ is sufficiently ample that $h^0(L-Y)=h^0(D-Y)=h^0(L+D-Y)=0$, then we recover the well-known inequality $h^0(L)+h^0(D)\leq h^0(L+D)+1$. \end{remark}
\begin{proof}[Proof of Theorem $\ref{theorem0.2}$] If $h^0(K_X-L)=1$ or $h^0(L)=1$, we have $h^0(L)=p_g$ or $h^0(K_X-L)=p_g$, respectively. Our conclusions are obvious. Hence we assume $h^0(L)\geq 2$ and $h^0(K_X-L)\geq 2$. In particular, $X$ is either an elliptic surface or a surface of general type.
Let $|L|=|M|+V$ be the decomposition into its movable and fixed parts. We claim that $h^0(K_X-L)>h^0(K_X-L-M)$. Indeed, if $h^0(K_X-L)=h^0(K_X-L-M)$, then $$h^0(K_X-L)+h^0(M)=h^0(K_X-L-M)+h^0(M)\leq h^0(K_X-L)+1.$$ This implies $h^0(M)\leq 1$, which is absurd. This proves the claim.
If $\dim\varphi_L(X)=2$, then $h^0(L)\geq 3$ and the general member of $|M|$ is reduced and irreducible. Since $h^0(K_X-L)>h^0(K_X-L-M)$ and $h^0(L)-h^0(L-M)=h^0(L)-1\geq2$, the conditions of Lemma \ref{lemma2.1} are satisfied. Hence by Lemma \ref{lemma2.1}, we have $$h^0(L)-h^0(L-M)+h^0(K_X-L)-h^0(K_X-L-M)\leq p_g+1-h^0(K_X-M).$$ Since $h^0(L)+h^0(K_X-L)=p_g+1$, we get $h^0(K_X-M)\leq h^0(K_X-L-M)+1$. This implies $h^0(K_X-L-M)\geq h^0(K_X-M)-1\geq1$. Thus we conclude that $$h^0(K_X-M)-1+h^0(M)\leq h^0(K_X-L-M)+h^0(M)\leq h^0(K_X-L)+1,$$ i.e., $h^0(K_X-M)+h^0(M)\leq h^0(K_X-L)+2$. Since $h^0(K_X-M)\geq h^0(K_X-L)$, we obtain $h^0(M)\leq2$. It contradicts that
$h^0(M)=h^0(L)\geq 3$. Therefore $|L|$ is composed of a pencil. Similarly, $|K_X-L|$ is also composed of a pencil.
Since $h^0(L)+h^0(K_X-L)=p_g+1$, i.e.,
$\dim|K_X|=\dim|L|+\dim|K_X-L|$, we can write every divisor in
$|K_X|$ as a divisor in $|L|$ plus a divisor in $|K_X-L|$. Hence
$|K_X|$ is composed of a pencil. Let $\pi:\widetilde{X}\rightarrow X$ be a composite of blowing-ups such that the movable part of
$|\pi^* K_X|$ is base point free. We can assume that $\pi$ is the shortest among those with such a property. Let $\widetilde{X}\xrightarrow{f} C\xrightarrow{\varepsilon}\mathbf{P}^{p_g-1}$ be the Stein factorization of $\varphi_{\pi^* K_X}$. Then there are two base point free divisors $A$ and $B$ on $C$ such that $f^*A$, $f^*B$ and $f^*(A+B)$ are respectively the movable part of $\pi^*L$, $\pi^*(K_X-L)$ and $\pi^*K_X$. Thus $h^0(L)=h^0(\pi^* L)=h^0(f^*A)=h^0(A)$, $h^0(K_X-L)=h^0(B)$ and $h^0(A+B)=p_g$. If $g(C)=1$, we have $h^0(A)+h^0(B)=\deg (A+B)=h^0(A+B)$. This implies $h^0(L)+h^0(K_X-L)=p_g$ which contradicts our assumptions. Hence $g(C)\neq1$. When $X$ is of general type, we know that $g(C)=0$,
$q\leq2$ by Xiao's estimate in \cite{X} and, therefore, $|K_X|$ is composed of a rational pencil.
When $X$ is not of general type, it must be an elliptic surface. It follows that the movable part of $|K_X|$ is base point free, $\widetilde{X}=X$ and the general fiber of $f$ is an elliptic curve. If $h^1(A)=0$, then $h^0(A)=\deg A-g(C)+1$. Thus we obtain $$h^0(B)=h^0(A+B)+1-h^0(A)=\deg B+1+h^1(A+B)\geq \deg B+1.$$ Hence $g(C)=0$. Similarly, if $h^1(B)=0$, we also have $g(C)=0$. Next we assume that both of $A$ and $B$ are special divisors and $g(C)\geq2$. By the Clifford theorem, we have $$p_g+1=h^0(A)+h^0(B)\leq\frac{\deg(A+B)}{2}+2\leq\frac{\deg f_*\omega_X}{2}+2 =\frac{p_g+g(C)-1}{2}+2.$$ Hence we obtain $p_g\leq g(C)+1\leq q+1$, i.e., $\chi(\mathcal{O}_X)\leq2$. If $h^0(A)=\deg A/2+1$ or $h^0(B)=\deg B/2+1$, we get the case $(a)$ immediately. If $h^0(A)\leq(\deg A+1)/2$ and $h^0(B)\leq(\deg B+1)/2$, then we have $$p_g+1=h^0(A)+h^0(B)\leq\frac{\deg(A+B)}{2}+1\leq\frac{p_g+g(C)-1}{2}+1.$$ This implies $p_g\leq g(C)-1\leq q-1$, i.e., $\chi(\mathcal{O}_X)\leq0$. Therefore we know that $\chi(\mathcal{O}_X)=0$, $g(C)=q=p_g+1$, $h^0(A)=(\deg A+1)/2$ and $h^0(B)=(\deg B+1)/2$. By the classical knowledge of algebraic curves, we get the cases $(b)$ and $(c)$. \end{proof}
\section{Proof of Theorem \ref{theorem0.1}}\label{sec:2}
In this section, firstly we will give some Clifford type inequalities. Let $L$ be a divisor on a smooth minimal complex projective surface $X$ of general type. Let $|L|=|M|+V$ be the decomposition into its movable and fixed parts, and $W$ the image of $\varphi_L$.
\begin{proposition}\label{proposition1.2} If $LK_X\geq 0$, we have $$h^0(L)\leq \max\Big\{\frac{K_XL}{2}+1, \frac{(K_XL)^2}{2K_X^2}+2\Big\}.$$ \end{proposition}
\begin{proof} Since the right-hand side is at least $2$, we may assume that $h^0(L)\geq 3$.
Case A. $\dim W=1$. We can write $L\thicksim\sum_{i=1}^{a}F_i+V\equiv aF+V$, where $a\geq h^0(L)-1$, the $F_i$'s are the fibers of $\varphi_L$ and $F^2\geq 0$. Because of the nefness of $K_X$, we see that $$LK_X=aFK_X+VK_X\geq(h^0(L)-1)FK_X.$$ This implies $LK_X\geq 2FK_X$. When $FK_X\geq 2$, we get $h^0(L)\leq LK_X/2+1$.
When $FK_X=1$, we have $F^2K_X^2\leq(FK_X)^2=1$ by the Hodge index theorem, and $LK_X\geq 2$. But since $FK_X\equiv F^2\pmod 2$ by the adjunction formula, this implies $F^2=1$ and $K_X^2=1$. Hence $$h^0(L)\leq LK_X+1\leq \frac{(LK_X)^2}{2}+1=\frac{(LK_X)^2}{2K_X^2}+1.$$
When $FK_X=0$, we get $F^2\leq 0$ by the Hodge index theorem. Thus we have $F^2=0$ and $F\equiv 0$, which forces $F=0$. This is absurd, since $F$ is a fiber of $\varphi_L$.
Case B. $\dim W=2$. In this case, we have $$M^2\geq (\deg\varphi_L)(\deg W)\geq (\deg\varphi_L)(h^0(L)-2).$$ When $\deg\varphi_L\geq 2$, we obtain $M^2\geq 2h^0(L)-4$. When $\deg\varphi_L=1$, because $X$ is a surface of general type, $W$ is not a ruled surface. Hence, as a nondegenerate nonruled surface in $\mathbf{P}^{h^0(L)-1}$, $W$ satisfies $\deg W\geq 2(h^0(L)-1)-2=2h^0(L)-4$. This implies $M^2\geq 2h^0(L)-4$. We obtain $$LK_X=MK_X+VK_X\geq MK_X\geq\sqrt{M^2K_X^2}\geq\sqrt{(2h^0(L)-4)K_X^2}.$$ Therefore we conclude that $h^0(L)\leq (K_XL)^2/2K_X^2+2$. \end{proof}
The following Castelnuovo type inequality is standard (cf. \cite[Lemma 2.1]{Ko1}). \begin{lemma}\label{lemma1.3}
Let $S$ be a smooth projective surface, $D$ a divisor on $S$ such that $|D|$ defines a birational map of $S$ onto the image. If $|D|$ has no fixed part and $(K_S-D)D\geq 0$, then $D^2\geq 3h^0(D)-7$. \end{lemma}
\begin{proposition}\label{proposition1.5} If $K_XL\geq K_X^2$, then $h^0(L)\leq (K_XL)^2/2K_X^2+2$. If $0\leq K_XL\leq K_X^2$, then $h^0(L)\leq K_XL/2+2$. If equality holds in either case, then $\varphi_L$ is generically $2$ to $1$ onto a surface of minimal degree in $\mathbf{P}^{h^0(L)-1}$. \end{proposition}
\begin{proof} Case 1. $K_XL\geq K_X^2$. This implies $(K_XL)^2/2K_X^2+2\geq K_XL/2+2$. By Proposition \ref{proposition1.2}, we have $h^0(L)\leq (K_XL)^2/2K_X^2+2$. When the equality holds, from the proof of Proposition \ref{proposition1.2}, we obtain $\dim W=2$,
$M^2=2h^0(L)-4$, $(MK_X)^2=M^2K_X^2$ and $VK_X=0$. Hence $|M|$ is base point free, $V$ is a sum of some $(-2)$-curves and $M\equiv rK_X$ for some rational number $r$.
Assume $\deg\varphi_L=1$ and $h^0(M)\geq 4$. Then by Lemma \ref{lemma1.3}, we have $2h^0(M)-4=M^2\geq 3h^0(M)-7$, i.e., $h^0(M)\leq 3$. This is a contradiction.
Assume $\deg\varphi_L=1$ and $h^0(M)\leq 3$. Then since $\dim W=2$, we have $h^0(M)=3$ and $W=\mathbf{P}^2$. Hence $X$ is a rational surface. It contradicts our assumption on $X$.
Therefore $\deg\varphi_L=2$ and $\deg W=h^0(L)-2$. Thus $W$ is a surface of minimal degree.
Case 2. $K_XL\leq K_X^2$. In this case we have $(K_XL)^2/2{K_X}^2+2\leq K_XL/2+2$. By Proposition \ref{proposition1.2}, we obtain $h^0(L)\leq K_XL/2+2$. When the equality holds, we also have $\dim W=2$. Therefore $(K_XL)^2/2{K_X}^2+2=K_XL/2+2$, i.e., $K_XL= K_X^2$. Thus we can finish our proof similarly as Case 1. \end{proof}
Now we will prove Theorem \ref{theorem0.1}. \begin{proof}[Proof of Theorem $\ref{theorem0.1}$] Since $h^0(K_X-L)=h^2(L)>0$, we have $(K_X-L)K_X\geq 0$. By Proposition \ref{proposition1.5}, we get $h^0(L)\leq K_XL/2+2$.
If $h^0(L)=K_XL/2+2$, we have $K_X^2=M^2=2h^0(L)-4$ and $(MK_X)^2=M^2K_X^2$. Therefore $M\equiv K_X$. But since $h^0(K_X-M)\geq h^0(K_X-L)>0$, we know that $M\sim K_X$. Hence $h^0(-V)=h^0(M-L)=h^0(K_X-L)>0$. This implies $V=0$ and $L=M\sim K_X$. It contradicts the assumption $L\nsim K_X$. Therefore we obtain $h^0(L)\leq (K_XL-1)/2+2$.
If $h^0(L)=(K_XL-1)/2+2$, we have $K_XL=2h^0(L)-3$. When $\dim W=1$, we obtain $$K_XL=2h^0(L)-3\geq (h^0(L)-1)FK_X.$$ This implies $FK_X=1$. Since $F^2{K_X}^2\leq(FK_X)^2=1$ and $FK_X\equiv F^2\pmod2$, we have $F^2=K_X^2=FK_X=1$. Thus $F^2K_X^2=(FK_X)^2=1$. This implies $F\equiv K_X$. Since $h^0(K_X-F)\geq h^0(K_X-L)>0$, we know that $F\sim K_X$. Hence $L\sim K_X$. It also contradicts the assumption $L\nsim K_X$. When $\dim W=2$, we have $M^2\geq 2h^0(L)-4=K_XL-1\geq K_XM-1$. Since $M^2\equiv MK_X\pmod2$, we get
$M^2\geq K_XM$. Because $\dim W=2$, we can find a reduced and irreducible curve in $|M|$. Hence $M$ is a nef divisor. Since $h^0(K_X-M)\geq h^0(K_X-L)>0$, we have $(K_X-M)M\geq 0$ and $(K_X-M)K_X\geq 0$. It follows that $M^2\leq K_XM\leq K_X^2$. Hence $M^2=K_XM\leq K_X^2$. By Hodge's index theorem, we get $M^2K_X^2\leq (K_XM)^2=(M^2)^2$, i.e., $K_X^2\leq M^2$. Therefore $K_X^2=M^2=K_XM$ and $M^2K_X^2=(K_XM)^2$. Thus $M\equiv K_X$. Since $h^0(K_X-M)>0$ and $M\equiv K_X$, we get $M\sim K_X$; arguing as before, this forces $V=0$ and $L\sim M\sim K_X$. It contradicts the assumption $L\nsim K_X$ again. Hence we conclude that $$h^0(L)\leq\frac{K_XL-2}{2}+2=\frac{K_XL}{2}+1.$$
Now we assume the equality holds, i.e., $K_XL=2h^0(L)-2$. If $h^0(L)=1$, then $K_XL=0$. Hence $L$ is a sum of $(-2)$-curves.
When $\dim W=1$, we have $$2h^0(L)-2=LK_X=aFK_X+VK_X\geq(h^0(L)-1)FK_X.$$ This implies $K_XF\leq 2$. If $K_XF=1$, by Hodge's index theorem, we have $F^2K_X^2\leq (FK_X)^2=1$. This implies $F^2=K_X^2=1$. But $K_X^2\geq K_XL=2h^0(L)-2\geq2$. It is impossible. Hence we have $K_XF=2$. It follows that $F^2K_X^2\leq (FK_X)^2=4$. Since $K_X^2\geq 2$ and $FK_X\equiv F^2\pmod2$, we obtain $F^2=0$ or $F^2=2$.
If $F^2=2$, then $K_X^2=K_XL=2$. Thus $F^2K_X^2=(K_XF)^2=4$. By Hodge's index theorem, we know that $F\equiv K_X$. This implies $V\sim0$ and $K_X\sim L\sim F$. It contradicts the assumption $L\nsim K_X$.
If $F^2=0$, then the movable part of $|L|$ is base point free. Since $K_XF=2$, we conclude that $a=h^0(L)-1$, $W\cong\mathbf{P}^1$ and $g(F)=(F^2+FK_X)/2+1=2$. Therefore, the general fiber of $\varphi_L: X\rightarrow W\cong\mathbf{P}^1$ is an irreducible smooth curve of genus $2$.
When $\dim W=2$, we have $h^0(L)\geq 3$ and $K_XL=2h^0(L)-2\geq4$. Since $M^2\geq 2h^0(L)-4=K_XL-2\geq K_XM-2$, $K_XM\geq M^2$ and $M^2-K_XM$ is even, we know that $M^2=K_XM$ or $M^2=K_XM-2$.
If $M^2=K_XM$, the inequality $(K_XM)^2\geq K_X^2M^2$ implies that $M^2\geq K_X^2$. Since $K_X^2\geq K_XM=M^2$, we have $K_X^2=M^2=K_XM$. By Hodge's index theorem, we obtain $M\equiv K_X$. Since $h^0(K_X-M)>0$, we obtain $L\sim M\sim K_X$. It contradicts the assumption $L\nsim K_X$.
If $M^2=K_XM-2=2h^0(M)-4$, we have that $|M|$ is base point free and $\varphi_L$ is generically $2$ to $1$ onto a surface of minimal degree in $\mathbf{P}^{h^0(L)-1}$. \end{proof}
\section{Clifford type indices on a surface}\label{sec:3} For a smooth connected projective curve, we have an invariant, the Clifford index, introduced by Martens \cite{Ma}. It plays an important role in the study of curves. Because of Theorems \ref{theorem0.1} and \ref{theorem0.2}, we can define two indices of Clifford type on a smooth minimal surface $X$ of general type. \begin{definition} For a divisor $L$ on $X$, we define two indices $\alpha(L)$ and $\beta(L)$ by \begin{eqnarray*} \alpha(L)&=&K_XL-2h^0(L)+2,\\ \beta(L) &=&q+\frac{1}{2}L(K_X-L)-h^1(L). \end{eqnarray*} \end{definition}
Note that by the Serre duality theorem, we have $\beta(L)=\beta(K_X-L)$ and by the Riemann-Roch theorem, we have $h^0(L)+h^0(K_X-L)=1+p_g-\beta(L)$ and \begin{eqnarray*} \alpha(L)+\alpha(K_X-L)&=&K_X^2-2(h^0(L)+h^0(K_X-L))+4 \\
&=&K_X^2-2(1+p_g-\beta(L))+4\\
&=&K_X^2-2p_g+2\beta(L)+2. \end{eqnarray*}
Next we define indices $\alpha(X)$ and $\beta(X)$ for the surface $X$. \begin{definition} Let $\mathcal{S}=\{L\in\Pic(X);h^0(L)\geq 2, h^0(K_X-L)\geq 2\}$. We define $\alpha(X)$ and $\beta(X)$ by \begin{eqnarray*} \alpha(X)= \begin{cases}\underset{L\in\mathcal{S}}{\min}~ \alpha(L) & \mathcal{S}\neq \emptyset \\ \infty & \mathcal{S}=\emptyset \end{cases}\\ \beta(X)= \begin{cases}\underset{L\in\mathcal{S}}{\min}~ \beta(L) & \mathcal{S}\neq \emptyset\\ \infty & \mathcal{S}=\emptyset. \end{cases} \end{eqnarray*}
\end{definition}
As in the curve case, we say that $L$ computes the index $\alpha(X)$ or $\beta(X)$ if $\alpha(X)=\alpha(L)$ or $\beta(X)=\beta(L)$, respectively.
\begin{remark}\label{remark2.3} When $L$ computes $\alpha(X)$ or $\beta(X)$, we can always assume that
$|L|$ has no fixed part, which is convenient for our work. The reason is as follows. Let $|L|=|M|+V$ be the decomposition into its movable and fixed parts. If $L$ computes $\alpha(X)$, we have $h^0(L)=h^0(M)$ and $VK_X\geq 0$. Therefore $K_XL-2h^0(L)+2\geq K_XM-2h^0(M)+2$, i.e., $\alpha(L)\geq\alpha(M)$. If $L$ computes $\beta(X)$, we have $h^0(L)+h^0(K_X-L)=1+p_g-\beta(X)$. But since $h^0(L)+h^0(K_X-L)\leq h^0(M)+h^0(K_X-M)\leq 1+p_g-\beta(X)$, we have $h^0(M)+h^0(K_X-M)=1+p_g-\beta(X)$. Hence $M$ computes $\beta(X)$ too. \end{remark}
\begin{example} Let $S_d$ be a generic hypersurface of degree $d$ in $\mathbf{P}^3$, and let $H$ denote the hyperplane section of $S_d$. When $d\geq5$, $S_d$ is a minimal surface of general type and $K_{S_d}=(d-4)H$. In this case, by the Noether-Lefschetz theorem, we have $\Pic(S_d)\cong \mathbb{Z}H$. For $d=5$ we have $K_{S_5}=H$, so no $L=nH$ can satisfy both $h^0(nH)\geq2$ and $h^0((1-n)H)\geq2$; thus $\mathcal{S}=\emptyset$ and $\alpha(S_5)=\beta(S_5)=\infty$.
Now we assume $d\geq6$. Let $n$ be an integer such that $1\leq n\leq d-5$. Then we have $$h^0(nH)=\frac{1}{6}(n+1)(n+2)(n+3).$$ Thus we obtain \begin{eqnarray*} \alpha(nH)&=&nHK_{S_d}-2h^0(nH)+2\\
&=&nd(d-4)-\frac{1}{3}(n+1)(n+2)(n+3)+2. \end{eqnarray*} Hence $\alpha(S_d)={\min}_{1\leq n\leq d-5}\alpha(nH)=\alpha(H)=d(d-4)-6$. We also have \begin{eqnarray*} \beta(nH)&=&p_g(S_d)+1-h^0(nH)-h^0((d-4-n)H)\\
&=&-\frac{1}{2}d(n^2-(d-4)n). \end{eqnarray*} Therefore $\beta(S_d)={\min}_{1\leq n\leq d-5}\beta(nH)=\beta(H)=d(d-5)/2$. \end{example}
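The closed forms in this example can be double-checked symbolically. The following sketch is an illustration on our part, not part of the original text; it assumes the third-party \texttt{sympy} library, and all variable names are ours. It verifies the values at $n=1$ and the displayed closed form for $\beta(nH)$.

```python
from sympy import Rational, expand, symbols

# n = multiple of the hyperplane class H, d = degree of S_d (d >= 6).
n, d = symbols('n d')

# h^0(nH) = (n+1)(n+2)(n+3)/6 for 1 <= n <= d-5, as in the example.
h0 = Rational(1, 6) * (n + 1) * (n + 2) * (n + 3)

# alpha(nH) = nH.K_{S_d} - 2 h^0(nH) + 2, with K_{S_d} = (d-4)H and H^2 = d.
alpha = n * d * (d - 4) - 2 * h0 + 2

# beta(nH) = p_g + 1 - h^0(nH) - h^0((d-4-n)H), with p_g = h^0((d-4)H).
pg = Rational(1, 6) * (d - 3) * (d - 2) * (d - 1)
beta = pg + 1 - h0 - h0.subs(n, d - 4 - n)

# At n = 1 (L = H) these reduce to the stated values:
assert expand(alpha.subs(n, 1) - (d * (d - 4) - 6)) == 0
assert expand(beta.subs(n, 1) - Rational(1, 2) * d * (d - 5)) == 0

# beta(nH) agrees with the displayed closed form -(1/2)d(n^2 - (d-4)n):
assert expand(beta + Rational(1, 2) * d * (n**2 - (d - 4) * n)) == 0
```

For small concrete degrees one can also confirm numerically that the minimum of $\alpha(nH)$ over $1\leq n\leq d-5$ is attained at $n=1$, in agreement with the example.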
For surfaces with $\alpha=\infty$, we have the following theorem. \begin{theorem} If $S$ is a surface with $\alpha(S)=\infty$, then $\alpha(S')=\infty$ for every small deformation $S'$ of $S$. \end{theorem} \begin{proof} Let $f:\mathcal{X}\rightarrow \Delta$ be a small deformation of $\mathcal{X}_0=S$, $0\in \Delta$, such that the Picard scheme $\Pic_{\mathcal{X}/\Delta}$ and the Poincar\'e line bundle $\mathcal{L}$ on $\mathcal{X}\times\Pic_{\mathcal{X}/\Delta}$ exist (cf. \cite{K}). Put
$$W_{m,n}=\{y\in\Pic_{\mathcal{X}/\Delta}~;h^0(\mathcal{L}_y)\geq m, h^2(\mathcal{L}_y)\geq n\}.$$ By the semicontinuity theorem \cite[Theorem 12.8]{Ha}, we know that $W_{m,n}$ is a closed subscheme of $\Pic_{\mathcal{X}/\Delta}$. Consider the natural morphism $\pi:\Pic_{\mathcal{X}/\Delta}\rightarrow \Delta$. Then $\{p\in \Delta~;W_{m,n}\cap\pi^{-1}(p)=\emptyset\}$ is an open subset of $\Delta$. Since $\alpha(S)=\infty$, we have $\{L\in \Pic(S)~;h^0(L)\geq 2, h^2(L)\geq 2\}=\emptyset$. Hence $W_{2,2}\cap\pi^{-1}(0)=\emptyset$ and $\{p\in \Delta~;W_{2,2}\cap\pi^{-1}(p)=\emptyset\}\neq\emptyset$. Thus for every $p\in\{p\in \Delta~;W_{2,2}\cap\pi^{-1}(p)=\emptyset\}$, we have $\alpha(f^{-1}(p))=\infty$. This completes the proof of the theorem. \end{proof}
The above theorem tells us that the surfaces with $\alpha=\infty$ form an open subset of the moduli space of surfaces. We now give some bounds for $\alpha(X)$ and $\beta(X)$ as follows:
\begin{proposition}\label{proposition2.4} If $\alpha(X)\neq\infty$, then $0\leq\alpha(X)\leq K_X^2-2\chi(\mathcal{O}_X)+6$. \end{proposition} \begin{proof} $\alpha(X)\geq 0$ is an easy consequence of Theorem
\ref{theorem0.1}. Suppose that $L$ computes $\alpha(X)$. Then we have $\alpha(X)= K_XL-2h^0(L)+2$. By Remark \ref{remark2.3}, we may assume that $|L|$ has no fixed part. Let $W$ be the image of $\varphi_L$.
Case A. $\dim W=1$. In this case, we have $L\thicksim\sum_{i=1}^{a}F_i\equiv aF$, where the $F_i$'s are the fibers of $\varphi_L$. Since $$a\geq h^0(L)-1=\frac{K_XL-\alpha(X)}{2}=\frac{aK_XF-\alpha(X)}{2},$$ we have \begin{equation}\label{1} 2a+\alpha(X)\geq aK_XF. \end{equation} By the Riemann-Roch theorem, we obtain \begin{eqnarray}\label{A1} h^0(L)+h^0(K_X-L)\geq\frac{a^2}{2}F^2-\frac{a}{2}K_XF+\chi(\mathcal{O}_X). \end{eqnarray} Since $h^0(L)=(K_XL-\alpha(X))/2+1$ and $h^0(K_X-L)\leq (K_X(K_X-L)-\alpha(X))/2+1$, we get from (\ref{A1}) the inequality \begin{eqnarray}\label{2} K_X^2-2\chi(\mathcal{O}_X)+4+aK_XF-a^2F^2\geq 2\alpha(X). \end{eqnarray}
If $F^2\geq 1$, we have $2a\leq a^2+1\leq a^2F^2+1$. This and (\ref{1}) imply $aK_XF-a^2F^2\leq\alpha(X)+1$. Hence by (\ref{2}), we get $\alpha(X)\leq K_X^2-2\chi(\mathcal{O}_X)+5$.
If $F^2=0$, then $|L|$ has no base points. We can assume that $F$ is a smooth and irreducible curve. When $h^0(L)=2$, we have $W=\mathbf{P}^1$, and $\alpha(X)+2=aK_XF$. By (\ref{2}), we get our conclusion immediately. When $h^0(L)\geq 3$, from the standard exact sequence $$0\rightarrow\mathcal{O}_X(L-F)\rightarrow\mathcal{O}_X(L) \rightarrow\mathcal{O}_{F}\rightarrow 0,$$ it follows that $h^0(L-F)\geq h^0(L)-1\geq 2$. Therefore $$\frac{1}{2}((a-1)K_XF-\alpha(X))+1\geq h^0(L-F)\geq h^0(L)-1=\frac{1}{2}(aK_XF-\alpha(X)).$$ This implies $K_XF\leq 2$. Since $F^2=0$, we have $K_XF=2$. Hence $h^0(L-F)= h^0(L)-1=a-\alpha(X)/2$, for a general fiber $F$. Inductively, we can get $h^0(L-iF)=h^0(L)-i$, for $1\leq i\leq h^0(L)-1$. Let $k=h^0(L)-2$, then $h^0(L-kF)=2$. On one hand, by the Riemann-Roch theorem, we obtain \begin{eqnarray}\label{A2} h^0(L-kF)+h^0(K_X-L+kF) & \geq & \frac{1}{2}(L-kF)(L-kF-K_X)+\chi(\mathcal{O}_X)\nonumber\\
& = & k-a+\chi(\mathcal{O}_X). \end{eqnarray} On the other hand, \begin{eqnarray*} h^0(L-kF)+h^0(K_X-L+kF) & \leq & 2+\frac{1}{2}K_X(K_X-L+kF)-\frac{1}{2}\alpha(X)+1\\
& = & \frac{1}{2}K_X^2+k-a-\frac{1}{2}\alpha(X)+3. \end{eqnarray*} Combining these two inequalities, we can get $\alpha(X)\leq K_X^2-2\chi(\mathcal{O}_X)+6$.
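Written out, the combination of (\ref{A2}) with the upper bound just displayed reads
$$k-a+\chi(\mathcal{O}_X)\leq \frac{1}{2}K_X^2+k-a-\frac{1}{2}\alpha(X)+3,$$
and cancelling $k-a$ gives $\alpha(X)\leq K_X^2-2\chi(\mathcal{O}_X)+6$.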
Case B. $\dim W=2$. This case implies that $L^2\geq 2h^0(L)-4=K_XL-\alpha(X)-2$. Hence by the Riemann-Roch theorem, we obtain \begin{eqnarray}\label{B} h^0(L)+h^0(K_X-L) & \geq & \frac{1}{2}L^2-\frac{1}{2}K_XL+\chi(\mathcal{O}_X)\nonumber\\
& \geq & -\frac{1}{2}\alpha(X)-1+\chi(\mathcal{O}_X). \end{eqnarray} Since \begin{eqnarray*} h^0(L)+h^0(K_X-L) & \leq & \frac{1}{2}(K_XL-\alpha(X))+1+\frac{1}{2}(K_X(K_X-L)-\alpha(X))+1\\
& = & \frac{1}{2}K_X^2-\alpha(X)+2, \end{eqnarray*} we obtain $K_X^2/2-\alpha(X)+2\geq -\alpha(X)/2-1+\chi(\mathcal{O}_X)$, i.e., $\alpha(X)\leq K_X^2-2\chi(\mathcal{O}_X)+6$. \end{proof}
We can easily deduce the following corollary.
\begin{corollary}\label{corollary2.5}If $\alpha(X)=K_X^2-2\chi(\mathcal{O}_X)+6$, $L$ computes $\alpha(X)$
and $|L|$ has no fixed part, then $|L|$ is base point free and one of the following cases occurs. \begin{enumerate} \item $h^0(L)=2$, $h^1(L)=0$ and $K_X-L$ also computes $\alpha(X)$.
\item $h^0(L)\geq3$ and $|L|$ is composed of a pencil of genus $2$. \item $h^1(L)=0$, $K_X-L$ also computes $\alpha(X)$ and $\varphi_L$ is generically $2$ to $1$ onto a surface of minimal degree in $\mathbf{P}^{h^0(L)-1}$. \end{enumerate} \end{corollary}
\begin{proposition}\label{proposition2.6} If $\alpha(X)\neq\infty$, then $0\leq\beta(X)\leq \alpha(X)/2+q+1$. \end{proposition} \begin{proof} $\beta(X)\geq 0$ is an easy consequence of Theorem \ref{theorem0.2}. The proof is similar to that of Proposition \ref{proposition2.4}, and we keep the notation used there. We assume $L$ computes $\alpha(X)$
and $|L|$ has no fixed part. Then we have $\alpha(X)= K_XL-2h^0(L)+2$ and \begin{equation}\label{beta} h^0(L)+h^0(K_X-L)=1+p_g-\beta(L)\leq 1+p_g-\beta(X). \end{equation}
Case A. $\dim W=1$. By (\ref{A1}) and (\ref{beta}), we obtain \begin{equation}\label{A3} \beta(X)\leq q+\frac{a}{2}K_XF-\frac{a^2}{2}F^2. \end{equation}
If $F^2\geq 1$, we have $2a\leq a^2+1\leq a^2F^2+1$. This and (\ref{1}) imply $aK_XF-a^2F^2\leq\alpha(X)+1$. From (\ref{A3}), it follows that $\beta(X)\leq \alpha(X)/2+q+1/2$.
If $F^2=0$, then $|L|$ has no base points. When $h^0(L)=2$, then $\alpha(X)+2=aK_XF$. It follows from (\ref{A3}) that $\beta(X)\leq \alpha(X)/2+q+1$. When $h^0(L)\geq 3$, similarly as in the proof of Proposition \ref{proposition2.4}, we have $K_XF=2$. Hence by (\ref{A2}), we get $$h^0(L-kF)+h^0(K_X-L+kF)\geq k-a+\chi(\mathcal{O}_X),$$ where $k=h^0(L)-2=(K_XL-\alpha(X))/2-1$. On the other hand, $h^0(L-kF)+h^0(K_X-L+kF)\leq 1+p_g-\beta(X)$. Combining them, we obtain \begin{eqnarray*} \beta(X)\leq q+a-k & = & q+a-\frac{1}{2}(K_XL-\alpha(X))+1 \\
& = & q+\frac{1}{2}\alpha(X)+1+a-\frac{aK_XF}{2}\\
& = & q+\frac{1}{2}\alpha(X)+1. \end{eqnarray*}
Case B. $\dim W=2$. By (\ref{B}) and (\ref{beta}), we get $\beta(X)\leq \alpha(X)/2+q+1$ immediately. \end{proof}
As in the case of Corollary \ref{corollary2.5}, we have the following corollary. \begin{corollary}
If $\beta(X)=\alpha(X)/2+q+1$, $L$ computes $\alpha(X)$ and $|L|$
has no fixed part, then $|L|$ is base point free and one of the following cases occurs. \begin{enumerate} \item $h^0(L)=2$, $h^1(L)=0$ and $L$ computes $\beta(X)$.
\item $h^0(L)\geq3$ and $|L|$ is composed of a pencil of genus $2$. \item $h^1(L)=0$, $L$ computes $\beta(X)$ and $\varphi_L$ is generically $2$ to $1$ onto a surface of minimal degree in $\mathbf{P}^{h^0(L)-1}$. \end{enumerate} \end{corollary}
\section{Surfaces with $\alpha=0$ or $\beta=0$}\label{sec:4}
It is natural to ask what happens when the indices $\alpha$ and $\beta$ are small. The answers for $\alpha=0$ and $\beta=0$, respectively, are given in the following theorems. We always assume
$L$ computes $\alpha(X)$ and $|L|$ has no fixed part. \begin{theorem}\label{theorem3.1}
If $\alpha(X)=0$, then $|L|$ has no base points and one of the following cases occurs.
\begin{enumerate}
\item There exists a projective surjective morphism $f:X\rightarrow\mathbf{P}^1$, whose general fiber is an irreducible smooth curve of genus $2$.
\item $X$ is the minimal resolution of a double covering of $\mathbf{P}^2$, whose branch locus is a reduced curve of degree $10$ with only one infinitely near triple point as its essential singularity. In this case, $K_X^2=7$, $p_g=5$, $q=0$ and $K_X\sim 2L-Z$, where $Z$ is an effective divisor with $LZ=0$ and $K_XL=2L^2=4$.
\item $X$ is the smooth minimal model of a double covering of $\Sigma_2$, whose branch locus is a reduced curve in $|8\Delta_0+14\Gamma|$ with at worst negligible singularities. In this case, $K_X^2=9$, $p_g=6$, $q=0$ and $K_X\sim 3D$, where $2D=L$.
\item $X$ is the minimal resolution of a double covering of $\mathbf{P}^2$, whose branch locus is a reduced curve of degree $10$ with at worst negligible singularities. In this case, $K_X^2=8$, $p_g=6$, $q=0$ and $K_X\sim 2L$.
\end{enumerate}
\end{theorem}
\begin{proof} Let $W$ be the image of $\varphi_L$. Since $L$ computes $\alpha(X)$, we get $K_XL-2h^0(L)+2=\alpha(X)=0$.
When $\dim W=1$, Theorem \ref{theorem0.1} shows that $|L|$ is base point free and that the general fiber of $\varphi_L: X\rightarrow W\cong\mathbf{P}^1$ is an irreducible smooth curve of genus $2$. Thus
$X$ is the surface of type 1 in the theorem. When $\dim W=2$, by Theorem \ref{theorem0.1}, we know that $|L|$ is base point free, $\varphi_L:X\rightarrow W$ is generically $2$ to $1$ and \begin{equation}\label{*} L^2=K_XL-2=2h^0(L)-4\geq 2. \end{equation} By Hodge's index theorem, we obtain $$K_X^2L^2\leq (K_XL)^2=(L^2+2)^2=(L^2)^2+4L^2+4.$$ This implies \begin{equation}\label{4} K_X^2\leq L^2+\frac{4}{L^2}+4. \end{equation} Since $2\leq h^0(K_X-L)\leq K_X(K_X-L)/2+1$, we have \begin{equation}\label{5} K_X^2\geq K_XL+2=L^2+4. \end{equation} Combining (\ref{4}) and (\ref{5}), we obtain $$L^2+4\leq K_X^2\leq L^2+\frac{4}{L^2}+4\leq L^2+6.$$ Thus we get three possible cases A: $K_X^2=L^2+4$, B: $K_X^2=L^2+5$ and C: $K_X^2=L^2+6$.
Case A. $K_X^2=L^2+4$. This implies that $K_X^2=K_XL+2$. Then $$2\leq h^0(K_X-L)\leq\frac{1}{2}K_X(K_X-L)+1=\frac{1}{2}({K_X}^2-K_XL)+1=2.$$
Thus $h^0(K_X-L)=2$. Let $|K_X-L|=|M'|+V'$ be the decomposition into its movable and fixed parts. Let $\phi:X\dashrightarrow
\mathbf{P}^1$ be the rational map defined by $|K_X-L|$. Then there exists an irreducible reduced curve $F'$, such that ${F'}^2\geq 0$, $M'\equiv bF'$ and $b\geq h^0(K_X-L)-1=1$. Since $$2=(K_X-L)K_X=M'K_X+V'K_X\geq bF'K_X\geq F'K_X,$$
we see that $F'K_X\leq 2$. If $F'K_X=0$, the Hodge index theorem gives ${F'}^2\leq 0$, hence ${F'}^2=0$ and $F'\equiv 0$, which is absurd. If $F'K_X=1$, the parity $F'K_X\equiv {F'}^2\pmod 2$ forces ${F'}^2$ to be odd, hence ${F'}^2\geq 1$, contradicting ${F'}^2K_X^2\leq (F'K_X)^2=1$ since $K_X^2=L^2+4\geq 6$. Hence $F'K_X=2$, $b=1$ and ${F'}^2=0$ or $2$. If ${F'}^2=2$, then $(K_X-L)F'=M'F'+V'F'\geq {F'}^2=2$. Thus $LF'\leq K_XF'-2=0$. By Hodge's index theorem, we get ${F'}^2\leq 0$. It is impossible. It follows that ${F'}^2=0$, and $|M'|$ is base point free. Hence $g(F')=({F'}^2+K_XF')/2+1=2$ and $M'\sim F'$. We know that the general fiber of $\phi:X\rightarrow\mathbf{P}^1$ is an irreducible smooth curve of genus $2$. Therefore $X$ is the surface of type 1 in the theorem.
Case B. $K_X^2=L^2+5$. Since $2\leq h^0(K_X-L)\leq K_X(K_X-L)/2+1=5/2$, we get $h^0(K_X-L)=2$. By (\ref{4}), we have $L^2+5\leq L^2+4/L^2+4$. This implies $2\leq L^2\leq 4$. Since $L^2=2h^0(L)-4$ is an even number, there are two cases B-I: $L^2=2$ and B-II: $L^2=4$.
Case B-I. We have $K_X^2=7$, $K_XL=4$ and $h^0(L)=3$. By Theorem \ref{theorem0.2}, we have $p_g(X)=h^0(K_X)\geq h^0(L)+h^0(K_X-L)=5$. Using Noether's inequality, we know that $7=K_X^2\geq 2p_g(X)-4$, i.e., $p_g(X)\leq 5$. Thus we get $p_g(X)=5$ and $K_X^2=7<10=2p_g(X)$. Since $K_X^2\geq 2p_g(X)$ when $X$ is irregular (see \cite{De}), we conclude that $q(X)=0$.
Since $h^0(L)=3$, we know that $\varphi_L:X\rightarrow W=\mathbf{P}^2$ is generically $2$ to $1$. Let $X\rightarrow X'\xrightarrow{f}\mathbf{P}^2$ be the Stein factorization of $\varphi_L$, $\widetilde{X}$ the canonical resolution of the double covering and $m_i$ the multiplicity of the corresponding singularity. $R$ and $B$ denote, respectively, the ramification divisor and the branch locus of $\varphi_L$. If $H$ denotes a line on $\mathbf{P}^2$, then we have $K_X=\varphi_L^*(-3H)+R=-3L+R$. By the theory of double covering (See \cite[\S2]{Hor}, \cite[III, \S2]{Ho} or \cite[\S1.3]{Xiao}), there exists an effective divisor $Z$ on $X$, such that $2R=\varphi_L^*B-2Z$ and $LZ=0$. Thus $$BH=\frac{1}{2}\varphi_L^*B\varphi_L^*H=(R+Z)L=RL=(K_X+3L)L=K_XL+3L^2=10.$$ Hence $B\sim 10H$ and $K_X\sim 2L-Z$. Now we can compute the invariants of $\widetilde{X}$. We have \begin{eqnarray*} \chi(\mathcal{O}_X)=\chi(\mathcal{O}_{\widetilde{X}}) & = & \frac{1}{4}B\left(K_{\mathbf{P}^2}+\frac{1}{2}B\right)+2\chi(\mathcal{O}_{\mathbf{P}^2})-\sum_i\frac{1}{2}\left[\frac{m_i}{2}\right]\left(\left[\frac{m_i}{2}\right]-1\right)\\
& = & 7-\frac{1}{2}\sum_i\left[\frac{m_i}{2}\right]\left(\left[\frac{m_i}{2}\right]-1\right), \end{eqnarray*} \begin{eqnarray*} K_{\widetilde{X}}^2=2\left(K_{\mathbf{P}^2}+\frac{1}{2}B\right)^2-\sum_i2\left(\left[\frac{m_i}{2}\right]-1\right)^2=8-2\sum_i\left(\left[\frac{m_i}{2}\right]-1\right)^2. \end{eqnarray*} From the equality $q(\widetilde{X})=q(X)=0$, it follows that $$p_g(X)=6-\frac{1}{2}\sum_i\left[\frac{m_i}{2}\right]\left(\left[\frac{m_i}{2}\right]-1\right).$$ Since $p_g(X)=5$, we have $[m_i/2]=2$ for only one index and $K_{\widetilde{X}}^2=6$. It follows that $\widetilde{X}$ has a $(-1)$-curve. Therefore $X$ is the surface of type 2 in the theorem.
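The last numerical step can be checked directly from the displayed double-cover formulas; a sketch, whose only inputs are $q(X)=0$ and the values $h_i=[m_i/2]$ over the singularities:

```python
# From the formulas above (with q = 0): p_g = 6 - (1/2) sum h_i(h_i - 1), and the
# canonical resolution has K^2 = 8 - 2 sum (h_i - 1)^2, where h_i = [m_i / 2].
def invariants(halves):
    pg = 6 - sum(h * (h - 1) for h in halves) // 2
    K2 = 8 - 2 * sum((h - 1) ** 2 for h in halves)
    return pg, K2

assert invariants([2]) == (5, 6)     # p_g = 5 forces one index with [m_i/2] = 2
assert invariants([2, 2])[0] == 4    # two such indices would give p_g = 4
assert invariants([3])[0] == 3       # a larger [m_i/2] drops p_g even further
```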
Case B-II. We have $K_X^2=L^2+5=9$ and $K_XL=L^2+2=6$ by (\ref{*}). Thus $L^2K_X^2=36=(K_XL)^2$. Using Hodge's index theorem, we have $L\equiv (2/3)K_X$. It follows from (\ref{*}) that $h^0(L)=4$. By Theorem \ref{theorem0.2}, we have $p_g(X)=h^0(K_X)\geq h^0(L)+h^0(K_X-L)=6$. By Noether's inequality, we obtain $9=K_X^2\geq 2p_g(X)-4$, i.e., $p_g(X)\leq 6$. Hence $p_g(X)=6$ and $K_X^2=9<12=2p_g(X)$. It follows that $q(X)=0$.
Since $\deg W=L^2/2=2$, either $W\cong\mathbf{P}^1\times\mathbf{P}^1$ or $W$ is a quadric cone.
Assume $W\cong\mathbf{P}^1\times\mathbf{P}^1$. The two rulings of $W$ allow us to write $L\sim D_1+D_2$ with divisors $D_i$ satisfying $D_i^2=0$ $(i=1,2)$. Since $6=K_XL=K_XD_1+K_XD_2$, we may assume that $K_XD_1$ is an even integer not greater than 3. Hence $K_XD_1=2$. But this is absurd, because $LD_1=(2/3)K_XD_1=4/3$.
Now we assume that $W$ is a quadric cone. In this case, by the same argument as in the proof of \cite[Lemma 2, Case II b]{Hor}, we have
$L\sim 2D+G$, where $|D|$ is a pencil and $G$ is an effective divisor with $LG=0$. From the equality $4=L^2=L(2D+G)$, it follows that $LD=2$. Since $L\equiv(2/3)K_X$, we have $K_XD=3$. From $LD=2$, we get $(2D+G)D=2D^2+DG=2$, hence $D^2=0$ or 1. But $3=K_XD\equiv D^2\pmod 2$, hence $D^2=1$ and $DG=0$. The equality $0=LG=2DG+G^2$ implies that $G^2=0$. Then by Hodge's index theorem, we have $G=0$. Thus $L\sim 2D$ and $K_X\equiv 3D$.
Since $h^0(D)\leq(1/2)K_XD+1=5/2$, we have $h^0(D)=2$. By $D^2=1$, we know $|D|$ has one base point $P$. Let $\sigma:\widehat{X}\rightarrow X$ be the blowing-up with center $P$ and put $E=\sigma^{-1}(P)$. Then the movable part $\widehat{D}$ of
$|\sigma^*D|$ defines a holomorphic map
$g:\widehat{X}\rightarrow\mathbf{P}^1$. Since $|L|$ has no base point, there exists $\eta\in H^0(\widehat{X},\mathcal{O}_{\widehat{X}}(\sigma^*L))$ which does not vanish on $E$. Take $\xi\in H^0(\widehat{X},\mathcal{O}_{\widehat{X}}(2E))$ such that $(\xi)=2E$. Then $\xi/\eta$ is a meromorphic section of $\mathcal{O}_{\widehat{X}}(-2\widehat{D})$. Then $g$ and $\xi/\eta$ define a rational map $h:\widehat{X}\rightarrow \Sigma_2$. Since $\eta$ does not vanish on $E$, $h$ is defined everywhere so that
$h^*\Delta_0=2E$. We consider the linear system $|\Delta_0+2\Gamma|$ on $\Sigma_2$. This gives rise to a morphism $q:\Sigma_2\rightarrow\mathbf{P}^3$ whose image coincides with $W$ up to an automorphism of $\mathbf{P}^3$. Then by the construction, we have the following commutative diagram. \[\begin{CD} \widehat{X} @>h>> \Sigma_2 \\ @V\sigma VV @VVqV \\ X @>\varphi_L>> W \end{CD}\] Let $H$ be a plane on $\mathbf{P}^3$. $R$ and $B$ denote, respectively, the ramification divisor and the branch locus of $h$. Then there exists an effective divisor $Z$ on $\widehat{X}$, such that $2R=h^*B-2Z$ and $Z$ is contracted by $h$. Since $q^*H=\Delta_0+2\Gamma=-(1/2)K_{\Sigma_2}$, we have $\sigma^*(2D)=\sigma^*L=\sigma^*\varphi_L^*H=h^*q^*H=h^*(\Delta_0+2\Gamma)$. The equality $h^*\Delta_0=2E$ implies that $h^*\Gamma=\sigma^*D-E$. Thus we obtain $$K_{\widehat{X}}=h^*(-2\Delta_0-4\Gamma)+R=-4E-4(\sigma^*D-E)+R=-2\sigma^*L+R.$$ As in Case B-I, we have $$B\Delta_0=Rh^*\Delta_0=(K_{\widehat{X}}+2\sigma^*L)(2E) =2(\sigma^*K_X+E+2\sigma^*L)E=-2,$$ $$B\Gamma=Rh^*\Gamma=(\sigma^*K_X+E+2\sigma^*L)(\sigma^*D-E)=K_XD+2LD-E^2=8.$$ Hence $B\sim 8\Delta_0+14\Gamma$. This equality implies $$K_{\widehat{X}}=h^*(4\Delta_0+7\Gamma)-Z-2\sigma^*L=3\sigma^*D+E-Z,$$ i.e., $K_X\sim 3D-\sigma_*Z$. Since $K_X\equiv 3D$, we get $K_X\sim 3D$ and $Z=0$. Let $\overline{X}$ be the canonical resolution of the double covering $h$ and $m_i$ the multiplicity of the corresponding singularity. By the standard theory of double covering, we obtain \begin{eqnarray*} \chi(\mathcal{O}_{\overline{X}})=\chi(\mathcal{O}_{\widehat{X}})& = & \frac{1}{4}B\left(-2\Delta_0-4\Gamma+\frac{1}{2}B\right)+2-\sum_i\frac{1}{2}\left[\frac{m_i}{2}\right]\left(\left[\frac{m_i}{2}\right]-1\right)\\
& = & 7-\frac{1}{2}\sum_i\left[\frac{m_i}{2}\right]\left(\left[\frac{m_i}{2}\right]-1\right), \end{eqnarray*} \begin{eqnarray*} K_{\overline{X}}^2=2\left(K_{\Sigma_2}+\frac{1}{2}B\right)^2-\sum_i2\left(\left[\frac{m_i}{2}\right]-1\right)^2=8-2\sum_i\left(\left[\frac{m_i}{2}\right]-1\right)^2. \end{eqnarray*} The equality $q(\widehat{X})=q(\overline{X})=q(X)=0$ implies that $$p_g(X)=6-\frac{1}{2}\sum_i\left[\frac{m_i}{2}\right]\left(\left[\frac{m_i}{2}\right]-1\right).$$
Since $p_g(X)=6$, we have $[m_i/2]([m_i/2]-1)=0$ for all indices. Thus $B\in|8\Delta_0+14\Gamma|$ is a reduced curve with at worst negligible singularities. Therefore, in this case, $X$ is the surface of type 3 in the theorem.
Case C. $K_X^2=L^2+6$. From (\ref{4}), it follows that $L^2+6=K_X^2\leq L^2+4/L^2+4$. This implies $L^2\leq 2$. Thus $L^2=2$ and $K_X^2=8$. By (\ref{*}), we get $h^0(L)=3$ and $K_XL=4$. Hence $K_X^2L^2=16=(K_XL)^2$. By Hodge's index theorem, we conclude that $K_X\equiv 2L$.
Since $h^0(L)=3$, we know that $\varphi_L:X\rightarrow W=\mathbf{P}^2$ is generically $2$ to $1$. Let $m_i$ be the multiplicity of the corresponding singularity. $R$ and $B$ denote, respectively, the ramification divisor and the branch locus of $\varphi_L$. If $H$ denotes a line on $\mathbf{P}^2$, then we have $K_X=\varphi_L^*(-3H)+R=-3L+R$. By the theory of double covering, there exists an effective divisor $Z$ on $X$ such that $2R=\varphi_L^*B-2Z$ and $LZ=0$. We get $BH=RL=(K_X+3L)L=10$, i.e.,
$B\sim 10H$. Thus $K_X\sim -3L+5L-Z\sim 2L-Z$. Since $K_X\equiv 2L$, we obtain $K_X\sim 2L$ and $Z=0$. By \cite[Lemma 5]{Hor}, we know that $B\in|10H|$ is a reduced curve with at worst negligible singularities. Therefore $X$ is the surface of type 4 in the theorem and $p_g(X)=6-(1/2)\sum[m_i/2]([m_i/2]-1)=6$. \end{proof}
Now, we assume $L$ computes $\beta(X)$ and $|L|$ has no fixed part.
\begin{theorem}\label{theorem3.2}
If $\beta(X)=0$, then $|K_X|$ is composed of a rational pencil and
$|L|$ is a sum of some fibers of the pencil. \end{theorem} \begin{proof} It is just a special case of Theorem \ref{theorem0.2}. \end{proof} When $\alpha$ or $\beta$ increases, the surfaces become more and more complicated, and we cannot hope to give a detailed description of them.
\section{The number of moduli of a surface}\label{sec:5} \begin{definition} For a surface of general type $S$, we define $M(S)$, which is the number of moduli of $S$, to be the dimension of its Kuranishi space $B$, i.e., the maximum of the dimensions of the irreducible components of $B$ (cf. \cite{Cat1}). \end{definition}
Hence we have $$10\chi(\mathcal{O}_S)-2K_S^2=h^1(T_S)-h^2(T_S)\leq M(S)=\dim B\leq h^1(T_S).$$ By \cite{Bo}, we have $h^0(T_S)=h^0(\Omega_S^1(-K_S))=0$. By Serre duality, $h^2(T_S)=h^0(\Omega_S^1(K_S))$, and we have \begin{equation}\label{8} M(S)\leq h^1(T_S)=10\chi(\mathcal{O}_S)-2K_S^2+h^0(\Omega_S^1(K_S)). \end{equation} Hence one can give an upper bound for $M(S)$ by giving an upper bound for $h^0(\Omega_S^1(K_S))$. The following theorem improves the inequality given in \cite[Theorem B]{Cat2}.
\begin{theorem}\label{theorem4.2}Let $X$ be a smooth minimal complex projective surface of general type. We have the inequality $M(X)\leq10\chi(\mathcal{O}_X)+(5/2)K_X^2+4$. Furthermore, if $q(X)>0$, then $M(X)\leq10\chi(\mathcal{O}_X)+(1/2)K_X^2+4$. \end{theorem} \begin{proof} We can assume $h^0(\Omega_X^1(K_X))>0$. We know that $\Omega_X^1(K_X)$ is $K_X$-semistable (cf. \cite[Corollary 1.2]{E}, \cite{Bo} or \cite{T}). Thus we can find an invertible sheaf $\mathcal{O}_X(L)$ which is an invertible subsheaf of $\Omega_X^1(K_X)$ of maximal slope. Then $\Omega_X^1(K_X)/\mathcal{O}_X(L)$ is torsion free and $(\Omega_X^1(K_X)/\mathcal{O}_X(L))^{\vee\vee}\cong\mathcal{O}_X(3K_X-L)$. Hence we obtain \begin{equation}\label{9} h^0(\Omega_X^1(K_X))\leq h^0(L)+h^0((\Omega_X^1(K_X)/\mathcal{O}_X(L)))\leq h^0(L)+h^0(3K_X-L). \end{equation}
Case 1. $K_XL<K_X^2$. The inequality implies $K_X(3K_X-L)>2K_X^2$. By the assumption $h^0(\Omega_X^1(K_X))>0$, we have $K_XL\geq 0$. Thus by Proposition \ref{proposition1.5}, we get $h^0(L)\leq K_XL/2+2$ and $h^0(3K_X-L)\leq (K_X(3K_X-L))^2/2K_X^2+2$. It follows from (\ref{9}) that \begin{eqnarray*} h^0(\Omega_X^1(K_X)) & \leq & \frac{K_XL}{2}+2+\frac{(K_X(3K_X-L))^2}{2K_X^2}+2\\
& = & \frac{(K_XL-(5/2)K_X^2)^2-(25/4)(K_X^2)^2}{2K_X^2}+\frac{9}{2}K_X^2+4\\
& \leq & \frac{(0-(5/2)K_X^2)^2-(25/4)(K_X^2)^2}{2K_X^2}+\frac{9}{2}K_X^2+4\\
& = & \frac{9}{2}K_X^2+4. \end{eqnarray*} Hence $M(X)\leq 10\chi(\mathcal{O}_X)-2K_X^2+h^0(\Omega_X^1(K_X))\leq 10\chi(\mathcal{O}_X)+(5/2)K_X^2+4$.
Case 2. $K_XL\geq K_X^2$. By Proposition \ref{proposition1.5}, we get $h^0(L)\leq(K_XL)^2/2K_X^2+2$. Since $\Omega_X^1(K_X)$ is $K_X$-semistable, we have $K_XL\leq(3/2)K_X^2$. Hence $K_X(3K_X-L)\geq(3/2)K_X^2$. By Proposition \ref{proposition1.5}, we obtain $h^0(3K_X-L)\leq(K_X(3K_X-L))^2/2K_X^2+2$. From (\ref{9}), it follows that \begin{eqnarray*} h^0(\Omega_X^1(K_X)) & \leq & \frac{(K_XL)^2}{2K_X^2}+2+\frac{(K_X(3K_X-L))^2}{2K_X^2}+2\\
& = & \frac{(K_XL-(3/2)K_X^2)^2-(9/4)(K_X^2)^2}{K_X^2}+\frac{9}{2}K_X^2+4\\
& \leq & \frac{(K_X^2-(3/2)K_X^2)^2-(9/4)(K_X^2)^2}{K_X^2}+\frac{9}{2}K_X^2+4\\
& = & \frac{5}{2}K_X^2+4. \end{eqnarray*} Hence $M(X)\leq 10\chi(\mathcal{O}_X)-2K_X^2+h^0(\Omega_X^1(K_X))\leq 10\chi(\mathcal{O}_X)+(1/2)K_X^2+4$.
When $q(X)=h^0(\Omega_X^1)>0$, we know that $\mathcal{O}_X(K_X)\subset\Omega_X^1(K_X)$. Thus $K_XL\geq K_X^2$. Therefore, by Case 2, we have $M(X)\leq 10\chi(\mathcal{O}_X)+(1/2)K_X^2+4$. \end{proof}
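The two completions of the square used in Cases 1 and 2 can be verified with exact rational arithmetic; the following sketch checks the identities and the maxima over the stated ranges, writing $a=K_XL$ and $c=K_X^2$:

```python
from fractions import Fraction

def bound_case1(a, c):  # a/2 + 2 + (3c - a)^2/(2c) + 2, the bound from (9) in Case 1
    return Fraction(a, 2) + Fraction((3 * c - a) ** 2, 2 * c) + 4

def bound_case2(a, c):  # a^2/(2c) + 2 + (3c - a)^2/(2c) + 2, the bound in Case 2
    return Fraction(a ** 2 + (3 * c - a) ** 2, 2 * c) + 4

for c in range(1, 15):
    for a in range(0, 3 * c + 1):
        # the completed-square forms appearing in the displayed computations:
        assert bound_case1(a, c) == Fraction((2*a - 5*c)**2 - 25*c**2, 8*c) + Fraction(9*c, 2) + 4
        assert bound_case2(a, c) == Fraction((2*a - 3*c)**2 - 9*c**2, 4*c) + Fraction(9*c, 2) + 4
    # maxima: a = 0 in Case 1 (where 0 <= a < c), a = c in Case 2 (where c <= a <= 3c/2):
    assert bound_case1(0, c) == Fraction(9 * c, 2) + 4
    assert bound_case2(c, c) == Fraction(5 * c, 2) + 4
```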
\address{ Department of Mathematics and Statistics\\ Huazhong Normal University \\ Wuhan 430079 \\ People's Republic of China}{hsun@mail.ccnu.edu.cn}
\address{ Department of Mathematics\\ East China Normal University \\ Shanghai 200241 \\ People's Republic of China}{suntju@sohu.com}
\end{document}
\begin{document}
\title{Virtual Specht stability for $FI$-modules in positive characteristic} \author{{Nate Harman} \\
\textit{\small Department of Mathematics} \\ \textit{\small Massachusetts Institute of Technology} \\ \textit{\small Cambridge, MA, 02139, USA}\\ \texttt{\small nharman@math.mit.edu}}
\maketitle
\begin{abstract} We define a notion of virtual Specht stability which is a relaxation of the Church-Farb notion of representation stability for sequences of symmetric group representations. Using a structural result of Nagpal, we show that $FI$-modules over fields of positive characteristic exhibit virtual Specht stability. \end{abstract}
2010 {\it Mathematics Subject Classification:} 20C30.
{\it Keywords:} representation stability, $FI$-modules
\begin{section}{Introduction}
Church and Farb defined the notion of representation stability for a sequence of symmetric group representations \cite{CF} over a field of characteristic zero. Most notably, it has been shown that the sequence of representations defined by a finitely generated $FI$-module over a field of characteristic zero exhibits representation stability \cite{CEF}.
$FI$-modules can be defined over fields of positive characteristic and have been studied in that setting fairly extensively in \cite{CEFN} and \cite{Nagpal}; however, so far there has been no replacement for the notion of representation stability in this context. The purpose of this note is to define a notion of virtual Specht stability, a relaxation of representation stability that holds for finitely generated $FI$-modules over fields of arbitrary characteristic.
We'll briefly note that the theory of finitely generated $FI$-modules has been applied in numerous situations in geometry, topology, and algebra. Theorem \ref{virtstab} will immediately imply that virtual Specht stability holds in many of those applications. In particular, virtual Specht stability will hold for the mod-$p$ cohomology of configuration spaces $\text{Conf}_n(M)$, as well as for the homology of congruence subgroups $\Gamma_n(p)$ with coefficients in $\mathbb{F}_p$. See \cite{CEF} and \cite{CEFN} for a discussion of these and other examples.
\begin{subsection}{$FI$-modules and representation stability}
Let $FI$ denote the category where the objects are finite sets, and morphisms are injections. An $FI$-module over a commutative ring $k$ is a covariant functor $V$ from $FI$ to the category of modules over $k$.
Since the set of $FI$-endomorphisms of the set $[n] = \{1,2,\dots,n\}$ is the symmetric group $S_n$, an $FI$-module $V$ can be thought of as a sequence of representations $V_n$ (the image of $[n]$ under the functor) of $S_n$ for each $n$, along with compatibility maps from $V_n$ to $V_m$ for every injection from $[n]$ to $[m]$. An $FI$-module is said to be finitely generated in degree at most $d$ if all the $V_n$ are finitely generated as $k$-modules and $V_n$ is spanned by the images of $V_d$ under all injections from $[d]$ to $[n]$ for all $n > d$.
$FI$-modules were introduced by Church, Ellenberg, and Farb in \cite{CEF}, where it was shown that over a field of characteristic zero the sequence of symmetric group representations defined by a finitely generated $FI$-module exhibits the phenomenon known as representation stability as defined by Church and Farb in \cite{CF}.
If $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_\ell)$ is a partition, then for $n \ge |\lambda|+\lambda_1$ let $\lambda[n] = (n - |\lambda|, \lambda_1, \lambda_2, \dots, \lambda_\ell)$. In other words, $\lambda[n]$ is the partition obtained by taking $\lambda$ and adding a large first part to make it have size $n$. Explicitly, the stability result of Church, Ellenberg, and Farb can be stated as follows:
\begin{theorem}{\textbf{(\cite{CEF} Proposition 3.3.3)}}\label{stab} Let $V$ be a finitely generated $FI$-module over a field of characteristic zero. There exist non-negative integer constants $c_\lambda$, independent of $n$ and nonzero for only finitely many partitions, such that for all $n \gg 0$
$$V_n = \bigoplus_\lambda c_\lambda S^{\lambda[n]}$$ as a representation of the symmetric group $S_n$, where $S^{\lambda[n]}$ denotes the irreducible Specht module associated to the partition $\lambda[n]$. \end{theorem}
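The padding operation $\lambda\mapsto\lambda[n]$ is simple enough to sketch directly (a hypothetical helper, with partitions represented as weakly decreasing tuples):

```python
def pad(lam, n):
    """lambda[n] = (n - |lambda|, lambda_1, ..., lambda_l); requires n >= |lambda| + lambda_1."""
    first = n - sum(lam)
    if lam and first < lam[0]:
        raise ValueError("n too small for lambda[n] to be a partition")
    return (first,) + tuple(lam)

assert pad((2, 1), 7) == (4, 2, 1)
assert pad((), 5) == (5,)       # lambda = emptyset gives the trivial partition (n)
assert pad((1,), 3) == (2, 1)   # here n = 3 >= |lambda| + lambda_1 = 2
```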
In positive characteristic, representations of symmetric groups do not in general decompose into a direct sum of irreducible representations. So one might expect that for $FI$-modules over a field $k$ of characteristic $p > 0$ things are more complicated, and indeed that is the case.
As an example, consider the natural $FI$-module $V$ sending a finite set $S$ to $k[S]$, the space of formal linear combinations of elements of $S$. In this example we have that $V_n \cong k^n$ with the action of $S_n$ permuting the coordinates in the usual way. This is a direct sum of two irreducible representations if $p \nmid n$, and is an indecomposable representation with three irreducible composition factors whenever $p\mid n$ (and $n >2$). So even in this basic example, we see that things are more complicated in positive characteristic.
\end{subsection}
\begin{subsection}{Specht modules and Specht stability}
Specht modules are a well behaved class of symmetric group representations defined over arbitrary commutative rings. While in general they are not irreducible, they play an important role in the representation theory of symmetric groups. We will briefly review some facts about Specht modules that will be important for our purposes. All of the following results can be found in James's book on the representation theory of symmetric groups \cite{James}.
\begin{itemize}
\item For a partition $\lambda$ of $n$ the integral Specht module $S^\lambda_\mathbb{Z}$ is a representation of $S_n$ over $\mathbb{Z}$ which is free of finite rank as a $\mathbb{Z}$-module. For an arbitrary commutative ring $k$ the Specht module $S^\lambda_k$ over $k$ is isomorphic to $S^\lambda_\mathbb{Z} \otimes_\mathbb{Z} k$. From now on we will drop the subscript when the base ring is clear.
\item $S^\lambda_\mathbb{Z}$ is naturally equipped with a nondegenerate $S_n$-equivariant symmetric bilinear form. This gives rise to a (possibly degenerate) $S_n$-equivariant symmetric bilinear form on $S^\lambda_k$ over any ring $k$. Let $S^{\lambda \perp}_k$ denote the radical of this form, which is naturally an $S_n$-invariant subspace of $S^\lambda_k$.
\item Over a field $k$ of characteristic $p$ the quotient $S^\lambda / S^{\lambda \perp}$ is non-zero if and only if $\lambda$ is $p$-regular (meaning $\lambda$ does not have $p$ parts of the same size), in which case it is irreducible. For $p$-regular partitions $\lambda$, let $D^\lambda$ denote this irreducible quotient.
\item Over a field of characteristic $p$ the $D^\lambda$ form a complete set of pairwise non-isomorphic irreducible representations of $S_n$ as $\lambda$ runs over the set of all $p$-regular partitions of $n$.
\item Over a field of characteristic $p$ the irreducible composition factors of $S^\lambda$ are all of the form $D^\mu$ with $\mu \ge \lambda$ in the dominance order. Moreover, for $p$-regular $\lambda$, $D^\lambda$ occurs in $S^\lambda$ with multiplicity one.
\end{itemize}
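The $p$-regularity condition in the third bullet can be sketched as follows (a small hypothetical helper):

```python
from collections import Counter

def is_p_regular(lam, p):
    """True if no part of lam is repeated p or more times."""
    return all(mult < p for mult in Counter(lam).values())

assert is_p_regular((3, 2, 1), 2)       # distinct parts: p-regular for every p >= 2
assert not is_p_regular((2, 1, 1), 2)   # the part 1 occurs twice
assert is_p_regular((2, 1, 1), 3)
```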
Specht modules are much better behaved and easier to work with than irreducible representations, so one could hope to generalize representation stability in terms of them. Putman defined a notion of \emph{Specht stability} \cite{Putman} for a sequence of symmetric group representations in which for all sufficiently large $n$ the $n$th term $V_n$ admits a filtration $$ 0 = V_n^{0} \subset V_n^1 \subset V_n^2 \subset \dots \subset V_n^d = V_n$$ where the graded pieces $V_n^i / V_n^{i-1}$ are isomorphic to Specht modules $S^{\lambda_i[n]}$ with $\lambda_i$ not depending on $n$.
The previous example sending a set $S$ to $k[S]$ can easily be seen to be Specht stable (of length $2$) by letting $V_n^1$ be the space of formal linear combinations of elements of $S$ where the coefficients sum to zero. Putman showed that any $FI$-module presented in sufficiently small degree relative to the characteristic of $k$ will exhibit Specht stability (\cite{Putman} Theorem E). However, without the restriction on the presentation degree, Putman's theorem fails, and there are $FI$-modules which do not exhibit Specht stability.
\end{subsection}
\begin{subsection}{Grothendieck groups}
Recall that the Grothendieck group $G_0(\mathcal{C})$ of an (essentially small) abelian category $\mathcal{C}$ is the abelian group generated by symbols $[X]$ for every object $X$ of $\mathcal{C}$, subject to the relation $[X] - [Y] + [Z] = 0$ for every short exact sequence $$0 \to X \to Y \to Z \to 0$$ of objects in $\mathcal{C}$. If every object of $\mathcal{C}$ has finite length (i.e. if $\mathcal{C}$ is a Krull-Schmidt category) then $G_0(\mathcal{C})$ is the free abelian group generated by irreducible classes $[X]$, where $X$ is a simple object in $\mathcal{C}$. In this case, given an arbitrary object $Y$, its class $[Y]$ in $G_0(\mathcal{C})$ is a formal linear combination of irreducible classes $[X]$ with coefficients corresponding to the multiplicity of $X$ in a Jordan-H\"older series for $Y$.
One could ask whether, for a finitely generated $FI$-module $V$, the expression for $[V_n]$ in terms of irreducible classes $[D^\lambda]$ in the Grothendieck group of the category of finite dimensional representations of $S_n$ over $k$ stabilizes if we identify $[D^{\lambda[n]}]$ for different values of $n$. However, in our example from before sending a finite set $S$ to $k[S]$, we have the following inside the Grothendieck group for all $n > 2$:
\[ [V_n]= \begin{cases} \hphantom{2}[D^{(n)}] + [D^{(n-1,1 )}] &\text{if $p \nmid n$},\\[2ex]
2[D^{(n)}] + [D^{(n-1,1 )}] &\text{if $p \mid n$}. \end{cases} \]
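As a consistency check on this computation, one can track dimensions: $\dim D^{(n)}=1$, and it is standard that $\dim D^{(n-1,1)}$ is $n-1$ when $p\nmid n$ and $n-2$ when $p\mid n$. A quick sketch:

```python
def dim_Vn_from_factors(n, p):
    """Sum of the dimensions of the composition factors listed above."""
    d_std = (n - 1) if n % p else (n - 2)   # dim D^{(n-1,1)}
    mult_triv = 2 if n % p == 0 else 1      # multiplicity of D^{(n)}
    return mult_triv * 1 + d_std

for p in (2, 3, 5, 7):
    for n in range(3, 50):
        assert dim_Vn_from_factors(n, p) == n   # must match dim k[S] = n
```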
This example illustrates a general fact about finitely generated $FI$-modules in characteristic $p$. Rather than stabilizing like in characteristic zero, the expression for $[V_n]$ in terms of irreducible classes becomes periodic with period a power of $p$ (see \cite{Harman} section 3.2). While this sort of periodicity is certainly interesting, one often still wants to think of the sequence of symmetric group representations coming from a finitely generated $FI$-module as being stable in some sense.
\end{subsection}
\begin{subsection}{Virtual Specht stability}
The main new idea in this paper is to combine the two ideas above and work in terms of Specht modules inside the Grothendieck groups to obtain a version of representation stability for finitely generated $FI$-modules over fields of arbitrary characteristic. Our main result is the following virtual relaxation of Theorem \ref{stab}, which says that we see stability when expressing $[V_n]$ in terms of Specht classes in the Grothendieck group:
\begin{theorem}\label{virtstab} Let $V$ be a finitely generated $FI$-module over a field $k$ of positive characteristic. There exist (possibly negative) integer constants $c_\lambda$ independent of $n$ and nonzero for only finitely many partitions such that for all $n \gg 0$
$$[V_n] = \sum_\lambda c_\lambda [S^{\lambda[n]}]$$ inside the Grothendieck group of the category of finite dimensional representations of $S_n$ over $k$, where $S^{\lambda[n]}$ denotes the Specht module associated to the partition $\lambda[n]$. \end{theorem}
We say that such a sequence of symmetric group representations $V_n$ (whether they come from an $FI$-module or not) satisfying the conclusion of Theorem \ref{virtstab} exhibits \emph{virtual Specht stability}. In particular, any sequence of representations exhibiting Putman's notion of Specht stability clearly also exhibits virtual Specht stability.
We'll note that in positive characteristic the Specht modules are in general not irreducible, and moreover their classes do not even form a basis for the Grothendieck group (they do span, but there are linear relations between them). So it is possible that each $[V_n]$ can be expressed in terms of Specht classes in multiple ways. Virtual Specht stability a priori just requires that among these ways of writing $[V_n]$ in terms of Specht classes there is a consistent choice for all sufficiently large $n$.
Moreover, we'll note that for a fixed $FI$-module the choice of the coefficients $c_\lambda$ in the theorem is not unique. So while this definition of virtual Specht stability will prove easy to work with for our purposes, we'd like to be able to make a statement that involves fewer choices and has a unique output.
Instead of using all Specht modules, we could consider just those corresponding to $p$-regular partitions $\lambda$, which do form a basis of the Grothendieck group (unitriangular relative to the basis of irreducibles). Instead of asking for the existence of some consistent way of expressing $[V_n]$ in terms of Specht classes, we could ask the stronger question of whether the unique expression for $[V_n]$ as a linear combination of $p$-regular Specht classes stabilizes. Our next theorem says that, in fact, this is equivalent to our (seemingly) weaker notion of virtual Specht stability. Note that $\lambda[n]$ is $p$-regular for $n > 2|\lambda|$ if and only if $\lambda$ is $p$-regular.
\begin{theorem} \label{unique}
A sequence of representations $V_n$ of $S_n$ over a field of characteristic $p$ exhibits virtual Specht stability if and only if there exist (possibly negative) integer constants $b_\lambda$ independent of $n$ and nonzero for only finitely many $p$-regular partitions $\lambda$ such that for all $n \gg 0$
$$[V_n] = \sum_\lambda b_\lambda [S^{\lambda[n]}]$$ inside the Grothendieck group of the category of finite dimensional representations of $S_n$ over $k$.
\end{theorem}
So in particular, finitely generated $FI$-modules over a field of characteristic $p$ also exhibit this ``stronger'' version of virtual Specht stability. In fact we will prove a slightly stronger version of Theorem \ref{unique} which also gives control over how two equivalent stable expressions can differ, but we will wait until section \ref{sectequiv} to state the stronger result.
\end{subsection}
\end{section}
\begin{section}*{Acknowledgments} This result was formulated, proved, and subsequently presented by the author while at the American Institute of Mathematics (AIM) workshop on representation stability in June 2016. Thanks to Andrew Putman, Steven Sam, Andrew Snowden, David Speyer, and the AIM staff for organizing the workshop and for inviting me to speak. Thanks as well to Pavel Etingof, Benson Farb, Andrew Putman, and Jordan Ellenberg for helpful comments and conversations. This work was partially supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
\end{section}
\begin{section}{Virtual Specht stability for $FI$-modules}
In this section we will prove Theorem \ref{virtstab}. The main tool we will need is a powerful structural result for $FI$-modules over arbitrary Noetherian rings due to Nagpal. Before stating the result, we will review a few definitions.
Let $W$ be a representation of $S_m$ over an arbitrary Noetherian ring $k$ (for our purposes, $k$ will be a field of positive characteristic). The $FI$-module $M(W)$ is the functor that takes a finite set $S$ to the vector space $k[\text{Hom}_{FI}([m],S)] \otimes_{k[S_m]} W$. In other words, $M(W)_n$ is the representation of $S_n$ obtained by first extending $W$ to an $S_m \times S_{n-m}$ representation by letting the second factor act trivially, and then inducing up to $S_n$.
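For bookkeeping purposes, note that induction from $S_m\times S_{n-m}$ to $S_n$ multiplies dimensions by the index of the subgroup, so $\dim M(W)_n=\binom{n}{m}\dim W$; a sketch:

```python
from math import comb

def dim_MW(n, m, dim_W):
    """dim M(W)_n = [S_n : S_m x S_{n-m}] * dim W = C(n, m) * dim W."""
    assert n >= m
    return comb(n, m) * dim_W

# M(trivial S_1-representation) is the FI-module S -> k[S] from the introduction:
assert all(dim_MW(n, 1, 1) == n for n in range(1, 10))
assert dim_MW(6, 2, 1) == 15
```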
Such modules, and direct sums thereof, are exactly those $FI$-modules which admit the structure of an $FI\sharp$-module (see \cite{CEF}, section 4). We say that an $FI$-module $V$ is $\sharp$-filtered if it admits a filtration $$0 = V^0 \subset V^1 \subset \ldots \subset V^d = V$$ of $FI$-submodules such that the graded pieces $V^i / V^{i-1}$ are of the form $M(W)$ for some $W$. Informally, the result of Nagpal we will need says that, up to torsion, every $FI$-module admits a resolution by $\sharp$-filtered modules. More precisely, it says:
\begin{theorem}{\textbf{(\cite{Nagpal} Theorem A)}}\label{rohit} For an arbitrary finitely generated $FI$-module $V$, there exist $\sharp$-filtered $FI$-modules $J^1, J^2, J^3, \dots, J^N$ and maps of $FI$-modules $\phi^0: V \to J^1, \ \phi^1: J^1 \to J^2, \ \dots, \ \phi^{N-1}: J^{N-1} \to J^N$ such that for all $n \gg 0$
$$0 \to V_n \to J^1_n \to J^2_n \to \dots \to J^N_n \to 0$$ is an exact sequence of $S_n$ representations.
\end{theorem}
In addition to Nagpal's result on $FI$-modules, we will need one standard fact from the modular representation theory of symmetric groups. Informally, the following theorem says that at the level of Grothendieck groups induction between Specht modules behaves the same as in characteristic zero.
\begin{theorem} \textbf{(\cite{James} Corollary 17.14)} \label{spechtfilt} For arbitrary partitions $\lambda \vdash n, \ \mu \vdash m$ and over an arbitrary field $k$, the induced representation $V = Ind_{S_n \times S_m}^{S_{n+m}}(S^\lambda \boxtimes S^\mu)$ has a filtration $$0 = V^0 \subset V^1 \subset \ldots \subset V^d = V$$ such that the graded pieces $V^i / V^{i-1}$ are isomorphic to Specht modules $S^\nu$, with $S^\nu$ occurring with multiplicity equal to the Littlewood-Richardson coefficient $c_{\lambda,\mu}^{\nu}$.
\end{theorem}
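In the proof below this theorem is only needed for $\mu=(n-m)$, a single row, where the Littlewood-Richardson rule reduces to the Pieri rule: $c_{\lambda,(k)}^{\nu}=1$ exactly when $\nu/\lambda$ is a horizontal strip of $k$ boxes. A sketch of that special case (a hypothetical helper, not tied to any library):

```python
def pieri(lam, k):
    """All nu obtained from lam by adding a horizontal strip of k boxes,
    i.e. nu_i >= lam_i and nu_{i+1} <= lam_i (at most one new box per column)."""
    lam = list(lam) + [0]          # a horizontal strip adds at most one new row
    out = []
    def go(i, prev_nu, prev_lam, rem, acc):
        if i == len(lam):
            if rem == 0:
                out.append(tuple(x for x in acc if x))
            return
        hi = min(prev_nu, prev_lam, lam[i] + rem)
        for nu_i in range(lam[i], hi + 1):
            go(i + 1, nu_i, lam[i], rem - (nu_i - lam[i]), acc + [nu_i])
    go(0, 10**9, 10**9, k, [])
    return sorted(out)

assert pieri((1,), 2) == [(2, 1), (3,)]
assert pieri((2, 1), 1) == [(2, 1, 1), (2, 2), (3, 1)]
```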
\noindent \textbf{Proof of Theorem \ref{virtstab}:} The proof will be by a series of reductions to a known case from characteristic zero. First, Nagpal's result allows us to replace $[V_n]$ by an alternating sum of the $[J^i_n]$'s inside the Grothendieck group for all sufficiently large $n$. Hence it is enough to prove the result for $\sharp$-filtered $FI$-modules.
Next, we know by definition that $\sharp$-filtered modules admit a filtration with quotients of the form $M(W)$, where $W$ is a representation of some $S_m$. At the level of Grothendieck groups such filtrations just become sums, so it is enough to prove the result for the modules of the form $M(W)$.
Now since the functor $W \to M(W)_n$ (which again is given by extending $W$ to a $S_m \times S_{n-m}$ representation by letting $S_{n-m}$ act trivially and then inducing up to $S_n$) is exact, it descends to a linear map on Grothendieck groups, so it is enough to prove the claim for a collection of representations $W$ that span the Grothendieck groups of symmetric groups.
We know that the Specht modules $S^\lambda$ are exactly such a class of representations. Finally, Theorem \ref{spechtfilt} tells us that $M(S^\lambda)_n = Ind_{S_m \times S_{n-m}}^{S_{n}}(S^\lambda \boxtimes S^{(n-m)})$ has a filtration by Specht modules with the same multiplicities as in characteristic zero, where we know stabilization occurs. $\square$
One consequence of representation stability in characteristic zero is that the characters $\chi_{V_n}$ of the $S_n$ representations $V_n$ are eventually polynomial functions in the numbers of cycles of given lengths (\cite{CEF} Theorem 1.5). Since Specht modules are defined over the integers, their Brauer characters agree with their ordinary characteristic zero characters. Together with Theorem \ref{virtstab}, this immediately implies the following positive characteristic analog of this fact:
\begin{corollary} If $V$ is a finitely generated $FI$-module over a field of positive characteristic then the sequence of Brauer characters $\hat{\chi}_{V_n}$ of the $S_n$ representations $V_n$ is eventually polynomial. \end{corollary}
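For the $FI$-module $S\mapsto k[S]$ from the introduction, this polynomiality is visible directly: the character value at $\sigma$ is the number of fixed points of $\sigma$, i.e. the polynomial $X_1$ in the cycle counts, uniformly in $n$. A brute-force sketch over ordinary characters (which agree with the Brauer characters here, since the module is defined over $\mathbb{Z}$):

```python
from itertools import permutations

def fixed_points(perm):            # trace of the permutation matrix acting on k[S]
    return sum(1 for i, j in enumerate(perm) if i == j)

def cycle_type(perm):
    seen, lengths = set(), []
    for i in range(len(perm)):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j); j = perm[j]; L += 1
            lengths.append(L)
    return lengths

for n in range(1, 6):
    for perm in permutations(range(n)):
        # the character value is X_1 = (number of 1-cycles), independent of n:
        assert fixed_points(perm) == cycle_type(perm).count(1)
```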
\end{section}
\begin{section}{Equivalent stable presentations}\label{sectequiv}
For this section all representations are over a fixed field $k$ of characteristic $p > 0$. We will refer to ``the Grothendieck group for $S_n$" which should be taken to mean ``the Grothendieck group of the category of finite dimensional representations of $S_n$ over $k$".
For a fixed value of $n$ we know that the Specht classes $[S^\lambda]$ span the Grothendieck group for $S_n$, but since there are more Specht modules than irreducible representations, there are linear relations among the Specht classes.
One might hope that these relations go away when we pass to the stable setting, since after all a stable relation is required to hold in infinitely many Grothendieck groups simultaneously. Unfortunately, there is an easy source of stable linear relations between Specht classes, which we outline now.
Let $\sum c_\lambda[S^\lambda] = 0$ be a linear relation between Specht classes inside the Grothendieck group for $S_m$. Then we know that $$\sum c_\lambda [M(S^\lambda)_n] = \sum c_\lambda [Ind_{S_m \times S_{n-m}}^{S_{n}}(S^\lambda \boxtimes S^{(n-m)})] =0$$
in the Grothendieck group for $S_n$ for all $n > m$. If we expand each term into Specht classes using Theorem \ref{spechtfilt}, we obtain a stable expression of the form $\sum d_\lambda[S^{\lambda[n]}] = 0$, with $d_\lambda$ not depending on $n$, $d_\lambda = c_\lambda$ if $|\lambda| = m$, and $d_\lambda =0$ if $|\lambda|>m$, which holds for all $n > 2m$. We will call such an expression an \emph{induced expression for zero}.
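A concrete instance (our own illustration): over a field of characteristic $2$ the sign and trivial representations of $S_2$ coincide, so $[S^{(1,1)}] - [S^{(2)}] = 0$ in the Grothendieck group for $S_2$. Inducing up and expanding by the Pieri rule gives an induced expression for zero:

```latex
% Pieri expansions of the two classes:
%   [S^{(1,1)}] \cdot h_{n-2} = [S^{(n-1,1)}] + [S^{(n-2,1,1)}]
%   [S^{(2)}]   \cdot h_{n-2} = [S^{(n)}] + [S^{(n-1,1)}] + [S^{(n-2,2)}]
% Subtracting yields, for all n \ge 4 in characteristic 2:
\[
[S^{(n-2,1,1)}] - [S^{(n-2,2)}] - [S^{(n)}] = 0,
\]
% consistent with dimensions at n = 4: 3 - 2 - 1 = 0.
```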
In particular this implies that the stable expression for a finitely generated $FI$-module in terms of Specht classes guaranteed by Theorem \ref{virtstab} is far from unique: we can add an induced expression for zero as constructed above to obtain another equivalent stable expression.
We would like a uniqueness statement about the expressions in terms of Specht classes, and there are (at least) two ways we could try to obtain one. First, we can restrict to those Specht classes corresponding to $p$-regular $\lambda$ as in the statement of Theorem \ref{unique}, and ask if the unique expression in terms of these stabilizes. Alternatively, we could ask if a stable expression for an $FI$-module (or other virtually Specht stable sequence) is unique up to a linear combination of induced expressions for zero.
The following proposition does both of these simultaneously, immediately implying both Theorem \ref{unique} and that any two stable expressions for the same $FI$-module differ by a linear combination of induced expressions for zero.
\begin{proposition}\label{unique2} For any expression of the form $\sum c_\lambda[S^{\lambda[n]}]$ there is an expression $\sum d_\lambda[S^{\lambda[n]}]$ equivalent to the first inside the Grothendieck group for $S_n$ for all sufficiently large $n$ such that $d_\lambda = 0$ for $p$-singular $\lambda$, and the difference $\sum c_\lambda[S^{\lambda[n]}] - \sum d_\lambda[S^{\lambda[n]}]$ is a linear combination of induced expressions for zero.
\end{proposition}
\noindent \textbf{Proof:} Let $\lambda_0$ be such that $\lambda_0[n]$ is minimal in the dominance order among those $p$-singular partitions $\lambda$ with $c_\lambda \ne 0$ and let $m = |\lambda_0|$. Inside the Grothendieck group for $S_{m}$ we know we can write: $$[S^{\lambda_0}] = \sum_{\lambda > \lambda_0 } b_\lambda [S^\lambda]$$ for some integer coefficients $b_\lambda$. Applying the functor $M(*)_n$ to both sides, expanding in terms of Specht classes using Theorem \ref{spechtfilt}, and rearranging gives an induced expression for zero of the form $$[S^{\lambda_0[n]}] - \sum_{\lambda[n] > \lambda_0[n] } b'_\lambda [S^{\lambda[n]}] = 0$$ for all sufficiently large $n$. If we subtract off $c_{\lambda_0}$ times this from our expression $\sum c_\lambda[S^{\lambda[n]}]$ we get an equivalent expression $\sum c'_\lambda[S^{\lambda[n]}]$ with $c'_{\lambda_0} = 0$, differing from the first by Specht classes corresponding to partitions $\lambda[n]$ larger than $\lambda_0[n]$ in the dominance order.
Since the (stable) dominance order has no infinite ascending chains, we can repeat this process until there are no non-zero coefficients for Specht classes corresponding to $p$-singular partitions. $\square$
\end{section}
\begin{section}{Examples} Finally we will finish by giving a couple of examples to illustrate our results.
\noindent \textbf{Example 1:} Let us return to our example from the first section: the $FI$-module $V$ that sends a finite set $S$ to $k[S]$. Inside $V_n$ there is the subspace of formal linear combinations of elements of $[n]$ where the sum of the coefficients is zero. This space is isomorphic to the Specht module $S^{(n-1,1)}$. The quotient of $V_n$ by this subspace is a copy of the trivial representation, which is itself the Specht module $S^{(n)}$. Hence we see that $[V_n] = [S^{(n-1,1)}]+[S^{(n)}]$ for all $n \ge 2$. This holds even when the characteristic of $k$ divides $n$, in which case the Specht module $S^{(n-1,1)}$ is not irreducible as it contains a $1$-dimensional invariant subspace. Of course this example also satisfies Putman's stronger notion of (non-virtual) Specht stability.
\noindent \textbf{Example 2:} Let $k$ be a field of characteristic $5$, and consider the standard representation of $S_5$ acting on $k^5$ by permuting the coordinates. This contains two proper $S_5$ invariant subspaces: the $4$-dimensional space of vectors with coordinate sum zero (i.e. the Specht module $S^{(4,1)}$), and the $1$-dimensional space of invariants spanned by the vector $(1,1,1,1,1)$. Since we are in characteristic $5$, this vector $(1,1,1,1,1)$ has coordinate sum zero and therefore the space of invariants is contained in the Specht module. Let $W = D^{(4,1)}$ be the $3$-dimensional quotient of $S^{(4,1)}$ by this invariant subspace.
Now consider the $FI$-module $M(W)$. Unlike in the first example, the representations $M(W)_n$ in general do not admit a filtration where the successive quotients are Specht modules. Nevertheless, by construction we see that in the Grothendieck group $[W] = [S^{(4,1)}] - [S^{(5)}]$. Applying the Pieri rule to this expression termwise and simplifying, we obtain an expression for $[M(W)_n]$ for all $n \ge 10$: $$[M(W)_n] = [S^{(n-5,4,1)}] + [S^{(n-4,3,1)}] + [S^{(n-3,2,1)}] + [S^{(n-2,1,1)}] - [S^{(n-5,5)}] - [S^{(n)}]$$
\noindent \textbf{Example 3:} Let $W = S^{(1,1,1)}$ be the sign representation of $S_3$ over a field of characteristic $3$. We know that $M(W)$ exhibits Specht stability by Theorem \ref{spechtfilt}, and at the level of Grothendieck groups we have that $$[M(W)_n]= [S^{(n-2,1,1)}]+[S^{(n-3,1,1,1)}]$$ for $n \ge 4$. This satisfies our definition of virtual Specht stability, however it involves a $3$-singular partition. Theorem \ref{unique} tells us that we should also see stability when expressing $[M(W)_n]$ in terms of Specht classes for $3$-regular partitions. In this case we have that $$[S^{(1,1,1)}] = [D^{(2,1)}] = [S^{(2,1)}]-[S^{(3)}]$$ which by the Pieri rule tells us that $$[M(W)_n] = [S^{(n-3,2,1)}]+[S^{(n-2,1,1)}] - [S^{(n-3,3)}] - [S^{(n)}]$$
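As a sanity check on the stable expressions in Examples 2 and 3, dimensions on both sides can be compared via the hook length formula. The helper `hook_dim` below is our own; note that the termwise Pieri expansions also produce a $-[S^{(n)}]$ summand (from the cancellation of the two-row terms), which the dimension counts below confirm. A minimal sketch:

```python
from math import comb, factorial

def hook_dim(mu):
    """Dimension of the Specht module S^mu via the hook length formula."""
    mu = list(mu)
    n = sum(mu)
    conj = [sum(1 for r in mu if r > j) for j in range(mu[0])]  # conjugate partition
    prod = 1
    for i, row in enumerate(mu):
        for j in range(row):
            prod *= (row - j - 1) + (conj[j] - i - 1) + 1  # arm + leg + 1
    return factorial(n) // prod

# Example 2, n = 10: dim M(W)_n = dim D^{(4,1)} * C(10,5) = 3 * 252 = 756
n = 10
lhs = 3 * comb(n, 5)
rhs = (hook_dim((n - 5, 4, 1)) + hook_dim((n - 4, 3, 1)) + hook_dim((n - 3, 2, 1))
       + hook_dim((n - 2, 1, 1)) - hook_dim((n - 5, 5)) - hook_dim((n,)))
assert lhs == rhs

# Example 3, n = 6: dim M(W)_n = dim S^{(1,1,1)} * C(6,3) = 20; both expansions agree
assert hook_dim((4, 1, 1)) + hook_dim((3, 1, 1, 1)) == comb(6, 3)
assert (hook_dim((3, 2, 1)) + hook_dim((4, 1, 1))
        - hook_dim((3, 3)) - hook_dim((6,))) == comb(6, 3)
```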
\end{section}
\end{document}
\begin{document}
\captionsetup{labelformat=default,labelsep=period} \title[]{A fast two-point gradient algorithm based on sequential subspace optimization method for nonlinear ill-posed problems}
\author{Guangyu Gao$^1$, Bo Han$^1$\footnote{E-mail: bohan@hit.edu.cn} and Shanshan Tong$^2$}
\address{$^{1}$ Department of Mathematics, Harbin Institute of Technology, Harbin, Heilongjiang Province, 150001, PR China} \address{$^{2}$ School of Mathematics and Information Science, Shaanxi Normal University, Xi'an, Shaanxi Province, 710119, PR China}
\ead{guangyugao60@163.com, bohan@hit.edu.cn, tongshanshan33@163.com}
\begin{abstract}
In this paper, we propose and analyze a fast two-point gradient algorithm for solving nonlinear ill-posed problems, which is based on the sequential subspace optimization method. A complete convergence analysis is provided under the classical assumptions for iterative regularization methods.
The design of the two-point gradient method involves the choice of the combination parameters, which is systematically discussed. Furthermore, detailed numerical simulations are presented for an inverse potential problem, which exhibit that the proposed method leads to a strong decrease in the number of iterations, and the overall computational time can be significantly reduced.
\noindent{Keywords}: nonlinear ill-posed problems, two-point gradient, sequential subspace optimization, inverse potential problem \end{abstract}
\section{Introduction}\label{section-1} \noindent In this paper, we mainly focus on an iterative solution of nonlinear inverse problems in Hilbert spaces which can be written as
\begin{equation}\label{equation 1-1}
F(x)=y.
\end{equation} Here $F:\mathcal{D}(F)\subset \mathcal{X}\rightarrow \mathcal{Y}$ is a possibly nonlinear operator between real Hilbert spaces $\mathcal{X}$ and $\mathcal{Y}$, with domain of definition $\mathcal{D}(F)$. The solution set of this problem is defined as \[ M_{F(x)=y}:=\left\{x\in\mathcal{X}:F(x)=y\right\}. \] In practice the exact data $y$ is typically not known; instead, only approximate measured data $y^\delta$ is available such that
\begin{equation}\label{equation 1-2}
\left\|y^\delta-y\right\|\leq\delta
\end{equation}
with $\delta>0$ (noise level).
Due to the ill-posed (or ill-conditioned) nature of problem (\ref{equation 1-1}), a regularization strategy is necessary to obtain stable and accurate numerical solutions, and many effective techniques have been proposed over the past few decades (see e.g. \cite{1,2,3,4,5}).
There are at least two common regularization methods for solving (\ref{equation 1-1}), i.e., (generalized) \emph{Tikhonov-type regularization} and \emph{Iterative-type regularization}.
In \emph{Tikhonov-type regularization}, one attempts to approximate an $x_0$-minimum-norm solution of (\ref{equation 1-1}). The corresponding study for linear and nonlinear ill-posed problems is discussed in \cite{6}. Furthermore, \emph{iterative-type regularization} represents a very powerful and popular class of numerical solvers. Many publications have been concerned with iterative regularization methods for nonlinear problems in Hilbert spaces.
The most basic iterative method for solving (\ref{equation 1-1}) is the Landweber iteration \cite{1,7}, which reads
\begin{equation}\label{equation 1-3}
x_{k+1}^\delta=x_{k}^\delta-\alpha_{k}^\delta F'(x_{k}^\delta)^*\left(F(x_{k}^\delta)-y^\delta\right),~~k=0,1,\ldots,
\end{equation}
there $F'(x)^*$ is the Hilbert space adjoint of the derivative of $F$, $x_{0}^\delta=x_{0}$ is a suitable guess and $\alpha_{k}^\delta$ is a scaling parameter. There are many definitions about $\alpha_{k}^\delta$ in the convergence analysis called the \emph{constant method}, \emph{steepest descent method} \cite{8} and the \emph{minimal error method} \cite{2}.
It is well-known that Landweber iteration, the steepest descent method as well as the minimal error method are quite slow. Hence, acceleration strategies have to be used in order to speed them up and make them more applicable in practice. The Nesterov acceleration scheme was first introduced for solving convex programming problems; it is easy to implement and greatly improves the convergence rate \cite{9,10,11}. Recently, the Nesterov acceleration scheme was used to accelerate Landweber iteration in \cite{12,19}, leading to the following method \begin{equation}\label{equation 1-4}
\begin{array}{ll}
z_{k}^\delta=x_{k}^\delta+\frac{k-1}{k+\alpha-1}(x_{k}^\delta-x_{k-1}^\delta), \\
x_{k+1}^\delta=z_{k}^\delta-\alpha_{k}^\delta s_{k}^\delta,~~ s_{k}^\delta=F'(z_{k}^\delta)^*\left(F(z_{k}^\delta)-y^\delta\right),\\
x_{0}^\delta=x_{-1}^\delta=x_{0},
\end{array} \end{equation} where the constant $\alpha\geq3$. Although a convergence analysis of (\ref{equation 1-4}) has not been given, the significant acceleration effect can be verified by numerical experiments \cite{12}. Furthermore, the two-point gradient (TPG) method was proposed in \cite{13}, and is given by \begin{equation}\label{equation 1-5}
\begin{array}{ll}
z_{k}^\delta=x_{k}^\delta+\lambda_{k}^\delta(x_{k}^\delta-x_{k-1}^\delta), \\
x_{k+1}^\delta=z_{k}^\delta-\alpha_{k}^\delta s_{k}^\delta,~~ s_{k}^\delta=F'(z_{k}^\delta)^*\left(F(z_{k}^\delta)-y^\delta\right),\\
x_{0}^\delta=x_{-1}^\delta=x_{0}.
\end{array} \end{equation} The convergence results are obtained under a suitable choice of the combination parameters $\{\lambda_{k}^\delta\}$. Recently, the TPG method for solving problem (\ref{equation 1-1}) in Banach spaces has been studied in \cite{14}.
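The TPG scheme (\ref{equation 1-5}) with the Nesterov choice $\lambda_{k}^\delta=\frac{k-1}{k+\alpha-1}$, $\alpha=3$, can be sketched as follows; the toy operator, stepsize and iteration count are again our own illustrative assumptions.

```python
import numpy as np

# Illustrative toy operator F(x) = A x + 0.1 x^3; all parameters are
# assumptions made for this sketch, not values from the paper.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
F = lambda x: A @ x + 0.1 * x**3
Fprime = lambda x: A + np.diag(0.3 * x**2)

x_true = np.array([1.0, -0.5])
y = F(x_true)

alpha = 0.05                      # stepsize alpha_k
x_prev = x = np.zeros(2)          # x_{-1} = x_0 = 0
for k in range(1000):
    lam = (k - 1) / (k + 2) if k >= 1 else 0.0  # Nesterov combination (alpha = 3)
    z = x + lam * (x - x_prev)                  # extrapolation step
    x_prev, x = x, z - alpha * Fprime(z).T @ (F(z) - y)  # gradient step at z
# x approximates x_true
```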
The sequential subspace optimization (SESOP) method was first introduced to solve linear systems of equations in finite dimensional vector spaces, and was later extended to nonlinear inverse problems in Hilbert spaces and Banach spaces \cite{15,16,17,18}. The significant improvement of the SESOP method is to use multiple search directions per iteration instead of the single one used, for instance, by Landweber iteration or the conjugate gradient method. The general iteration format of SESOP is
\begin{equation}\label{equation 1-6}
\begin{array}{ll}
x_{k+1}^\delta=x_{k}^\delta-\sum\limits_{i\in I_{k}^\delta}t_{k,i}^\delta F'(x_{i}^\delta)^*w_{k,i}^\delta, \;\;w_{k,i}^\delta=F(x_{i}^\delta)-y^\delta,\\
x_{0}^\delta=x_{-1}^\delta=x_{0},
\end{array} \end{equation} where $t_{k,i}^\delta$ is chosen such that $x_{k+1}^\delta$ lies in the intersection of the stripes or hyperplanes associated with the iterates $x_{i}^\delta$, which depend on the search directions, the forward operator and the noise level, and $I_{k}^\delta$ is the index set of the selected subspace.
Motivated by the above considerations, an accelerated TPG method based on SESOP method (TGSS) is proposed for solving (\ref{equation 1-1}) in this paper, which leads to the following scheme
\begin{equation}\label{equation 1-7}
\begin{array}{ll}
z_{k}^\delta=x_{k}^\delta+\lambda_{k}^\delta(x_{k}^\delta-x_{k-1}^\delta), \\
x_{k+1}^\delta=z_{k}^\delta-\sum\limits_{i\in I_{k}^\delta}t_{k,i}^\delta F'(z_{i}^\delta)^*w_{k,i}^\delta, \quad w_{k,i}^\delta=F(z_{i}^\delta)-y^\delta,\\
x_{0}^\delta=x_{-1}^\delta=z_{0}^\delta=x_{0},
\end{array} \end{equation} where $t_{k,i}^\delta$ is chosen so that $x_{k+1}^\delta$ lies in the intersection of the stripes or hyperplanes associated with the points $z_{i}^\delta$. Here $I_{k}^\delta$ is the index set of the selected subspace, which needs to contain $k$, i.e., the current search direction $F'(z_{k}^\delta)^*w_{k,k}^\delta$ needs to be chosen as one of the search directions, which guarantees the descent property of the iteration error. In order to find nontrivial $\lambda_{k}^\delta$, we apply the Nesterov acceleration scheme in \cite{9} and the discrete backtracking search (DBTS) algorithm in \cite{13} to our situation. The convergence analysis of the proposed method is provided under the general assumptions for iterative regularization methods. Furthermore, we provide a complete convergence analysis by showing a uniform convergence result for the noise-free case with the combination parameters chosen in a certain range \cite{14}.
However, the TGSS method for solving (\ref{equation 1-1}) has, to the best of the authors' knowledge, not been investigated before. On the basis of the iteration format of the TGSS method, we summarize the following improvements. On the one hand, the method we present uses finitely many search directions in each step of the iteration instead of just one, as is the case for the TPG method. This leads to more computational time per iteration step, but it may reduce the total number of steps needed to obtain a satisfying approximation. On the other hand, compared to the SESOP method, replacing $x_{i}^\delta$ with $z_{i}^\delta$ in each iteration not only optimizes the search directions for each iteration but also improves the initial projection point.
An outline of the remaining paper is as follows. In section \ref{section-2}, we present the assumptions and required notations, and derive some preliminary results. In section \ref{section-3}, we introduce the TGSS method for nonlinear ill-posed problems with exact data and noisy data respectively, and give an algorithm with two search directions. The convergence and regularity analysis of TGSS are then given in section \ref{section-4}. In section \ref{section-5}, some numerical experiments are performed to present the efficiency of proposed methods. Finally, we give some conclusions in section \ref{section-6}.
\section{Assumptions and preliminary results}\label{section-2} \noindent In this section, we state some main assumptions which are standard in the analysis of regularization methods \cite{20,21}. Furthermore, we give some basic definitions as well as their preliminary results, which we refer to \cite{15,16} for details.
\begin{assumption} \label{assumption 2-1} Let $\rho$ be a positive number such that $B_{4\rho}(x_0)\subset\mathcal{D}(F)$, where $B_{4\rho}(x_0)$ denotes the closed ball around $x_0$ with radius $4\rho$. The mapping $x\mapsto F'(x)$, $F':\mathcal{D}(F)\rightarrow\mathcal{L}(\mathcal{X},\mathcal{Y})$, denotes the Fr\'{e}chet derivative of $F$. \begin{description}
\item[(1)] The equation $F(x)=y$ has a solution $x_*$ in $B_{\rho}(x_0)=B_{\rho}(x_{-1})$ which is not necessarily unique.
\item[(2)] The local tangential cone condition of $F$ holds, namely,
\begin{equation}\label{equation 2-1}
\left\|F(x)-F(\tilde{x})-F'(x)(x-\tilde{x})\right\|\leq
\eta\left\|F(x)-F(\tilde{x})\right\|,~ \forall x, \tilde{x}\in B_{4\rho}(x_0),
\end{equation}
where $0<\eta<1$.
\item[(3)]
Fr\'{e}chet derivative $F'(\cdot)$ is locally uniformly nonzero and bounded, i.e.,
\[0<\left\|F'(x)\right\|\leq c_F,~~~ \forall x\in B_{4\rho}(x_0),\]
where $c_F>0$. \end{description} \end{assumption}
\begin{remark}\label{remark 2-1} If $\eta=0$, then (\ref{equation 2-1}) implies that the operator $F$ is linear. Since we are concerned with the nonlinear case, we will occasionally ignore the case $\eta=0$ without any further remarks.
The validity of the local tangential cone condition indicates that the operator $F'$ fulfills $F'(x)=0$ for some $x\in B_{4\rho}(x_0)$ if and only if $F$ is constant in $B_{4\rho}(x_0)$ (see e.g. \cite[Proposition 1.12]{22}). The case where $F$ is constant in $B_{4\rho}(x_0)$ is of no interest, so we postulate $F'(x)\neq0$ for all $x\in B_{4\rho}(x_0)$. \end{remark}
\begin{lemma} \label{lemma 2-1}
(\cite[Proposition 2.1]{2}). Let $\rho, \varepsilon>0$ be such that \[\left\|F(x)-F(\tilde{x})-F'(x)(x-\tilde{x})\right\|\leq
c(x,\tilde{x})\left\|F(x)-F(\tilde{x})\right\|, \forall x, \tilde{x}\in B_{\rho}(x_0)\subset\mathcal{D}(F)\]
for some $c(x,\tilde{x})\geq0,$ and $c(x,\tilde{x})<1$ if $\|x-\tilde{x}\|\leq\varepsilon$. Assume that $F(x)=y$ is solvable in $B_{\rho}(x_0)$, then a unique $x_0$-minimum-norm solution exists which is characterized as the solution $x^\dagger$ of $F(x)=y$ in $B_{\rho}(x_0)$ satisfying \[x^\dagger-x_0\in\mathcal{N}\left(F'(x^\dagger)\right)^\perp.\] \end{lemma}
Now, we give some definitions and their preliminary results about metric projection, which are crucial for our further analysis.
\begin{definition} \label{definition 2-2}
(\cite[Lemma 2.1]{22}).
The metric projection $P_C(x)$ satisfies
\begin{equation}\label{equation 2-2}
\|x-P_C(x)\|^2=\mathop{\min}\limits_{z\in C}\|x-z\|^2,
\end{equation}
where $C\subset \mathcal{X}$ is a nonempty closed convex set; this guarantees that $P_C(x)$ exists and is unique for every $x\in \mathcal{X}$.
For later convenience, we use the square of the distance. The metric projection onto a convex set $C$ fulfills a descent property which reads
\begin{equation}\label{equation 2-3}
\|P_C(x)-z\|^2\leq\|x-z\|^2-\|P_C(x)-x\|^2
\end{equation}
for all $z\in C$.
\end{definition}
\begin{definition}\label{definition 2-3}
Define the $\emph{hyperplane}$
\[H(u,\alpha):=\{x\in\mathcal{X}:\langle u,x \rangle=\alpha\},\]
where $u\in\mathcal{X}\backslash\{0\}$ and $\alpha,\xi\in \mathbb{R}$ with $\xi\geq0$. The halfspace \[H_\leq(u,\alpha):=\{x\in\mathcal{X}:\langle u,x \rangle\leq\alpha\}.\]
We also can define $H_\geq(u,\alpha)$, $H_<(u,\alpha)$ and $H_>(u,\alpha)$ in the same way. Then, the stripe
\[H(u,\alpha,\xi):=\{x\in\mathcal{X}:|\langle u,x \rangle-\alpha|\leq\xi\}.\]
Note that $H(u,\alpha,\xi)=H_\leq(u,\alpha+\xi)\cap H_\geq(u,\alpha-\xi)$ and $H(u,\alpha,0)=H(u,\alpha)$. In addition, $H(u,\alpha)$, $H_\leq(u,\alpha)$, $H_\geq(u,\alpha)$ as well as $H(u,\alpha,\xi)$ are nonempty, closed, convex subsets of $\mathcal{X}$. Meanwhile, $H_<(u,\alpha)$ and $H_>(u,\alpha)$ are open subsets of $\mathcal{X}$.
\end{definition}
\begin{remark}\label{remark 2-2}
The metric projection $P_{H(u,\alpha)}(x)$ in the Hilbert space setting corresponds to an orthogonal projection, which reads
\begin{equation}\label{equation 2-4}
P_{H(u,\alpha)}(x)=x-\frac{\langle u,x \rangle-\alpha}{\|u\|^2}u.
\end{equation}
\end{remark}
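As a quick numerical illustration of the orthogonal projection formula (\ref{equation 2-4}) (our own snippet; the vectors are arbitrary test data):

```python
import numpy as np

def project_hyperplane(x, u, alpha):
    """Orthogonal projection of x onto H(u, alpha) = {z : <u, z> = alpha}, cf. (2.4)."""
    return x - (u @ x - alpha) / (u @ u) * u

u = np.array([1.0, 2.0, -1.0])
x = np.array([3.0, 0.0, 1.0])
p = project_hyperplane(x, u, 4.0)
assert abs(u @ p - 4.0) < 1e-12              # p lies on the hyperplane
assert np.allclose(np.cross(p - x, u), 0.0)  # the correction is parallel to u
```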
\noindent The following statements can be proved in \cite{15, 16, 23}, which are very helpful to our method.
\begin{prop}\label{proposition 2-4} The following statements hold. \begin{description}
\item[(1)] Let $H(u_i,\alpha_i), \;i=1,2,...,N,$ be hyperplanes with nonempty intersection $H:=\mathop{\bigcap}\limits_{i=1,2,...,N}H(u_i,\alpha_i)$. The projection of $x$ onto $H$ is calculated by
\[P_H(x)=x-\sum\limits_{i=1}^N \tilde{t}_i u_i,\]
where $\tilde{t}=(\tilde{t}_1,...,\tilde{t}_N)\in\mathbb{R}^N$ is the solution of the following convex optimization problem
\begin{equation}\label{equation 2-5}
\mathop{\min}\limits_{t\in \mathbb{R}^N}h(t):=\frac{1}{2}\left\|x-\sum\limits_{i=1}^N t_iu_i\right\|^2+\sum\limits_{i=1}^N t_i \alpha_i.
\end{equation}
The corresponding partial derivatives are
\begin{equation}\label{equation 2-6}
\frac{\partial}{\partial t_j}h(t)=-\left\langle u_j,x-\sum\limits_{i=1}^N t_iu_i \right\rangle+\alpha_j.
\end{equation}
If the vectors $u_i,\; i=1,2,...,N$, are linearly independent, then $h$ is strictly convex and $\tilde{t}$ is unique.
\item[(2)] If $x\in H_>(u,\alpha)$, the projection of $x$ onto $H_\leq(u,\alpha)$ is obtained as follows
\[P_{H_\leq(u,\alpha)}(x)=P_{H(u,\alpha)}(x)=x-t_+u,\]
where\[t_+=\frac{\langle u,x \rangle-\alpha}{\|u\|^2}>0.\]
\item[(3)] The projection of $x\in\mathcal{X}$ onto a stripe $H(u,\alpha,\xi)$ can be calculated as
\[P_{H(u,\alpha,\xi)}(x)=
\left\{
\begin{array}{ll}
P_{H_\leq(u,\alpha+\xi)}(x),~~\;\; x\in H_>(u,\alpha+\xi); \\
\quad \quad x,~~~~~~~~~~\;\quad x\in H(u,\alpha,\xi); \\
P_{H_\geq(u,\alpha-\xi)}(x),~~\;\; x\in H_<(u,\alpha-\xi).
\end{array}
\right.
\] \end{description} \end{prop}
\begin{definition}\label{definition 2-5}
Since the projection point in the proposed algorithm is not easy to compute directly, we specify the order of the projections. We sort the elements of the finite index set $I_k$ (which contains $k$) as $k=k_1>k_2>\cdots>k_s$, and set $H_{k,i}:=H(u_{k,i},\alpha_{k,i},\xi_{k,i})$ for $i\in I_k$. \begin{enumerate}
\item First, compute the metric projection $p_{k_1}:=P_{H_{k,k}}(z_k)$ of $z_k$ onto $H_{k,k}$.
\item Then compute $p_{k_2}:=P_{H_{k,k_1}\mathop{\bigcap}H_{k,k_2}}(p_{k_1})$ in the same way, and analogously $p_{k_3}:=P_{H_{k,k_1}\mathop{\bigcap}H_{k,k_2}\mathop{\bigcap}H_{k,k_3}}(p_{k_2})$, and so on.
\item We end up with $x_{k+1}:=P_{\mathop{\bigcap}\limits_{i\in I_{k}}H_{k,i}}(p_{k_{s-1}})$. \end{enumerate}
By construction, each $H_{k,i}$ and every finite intersection of them is a closed convex set. According to the best approximation theorem, the projection onto a closed convex subset of a Hilbert space exists and is unique, so all projections above are well defined. \end{definition}
\section{The TGSS method}\label{section-3}
\noindent In this section, we describe in detail the TGSS method with exact data case and noisy data case respectively. Then, we formulate the TGSS scheme with two search directions.
\subsection{TGSS with exact data}\label{subsection 3-1} \noindent Firstly, we put forward the TGSS method for nonlinear operators in the case of exact data. We give the following definition of the iteration. \begin{definition}\label{definition 3-1} At iteration $k\in \mathbb{N}$, choose a finite index set $I_k$ that contains $k$. Let \begin{equation}\label{equation 3-1} z_{i}=x_{i}+\lambda_{i}(x_{i}-x_{i-1}), \end{equation} where $\lambda_{0}=0,~ 0\leq \lambda_{i}\leq 1,~\forall i\in I_k\subset \mathbb{N},$ and define \[H_{k,i}:=H(u_{k,i},\alpha_{k,i},\xi_{k,i})\] with \begin{equation}\label{equation 3-2} \begin{array}{ll}
w_{k,i}:=F(z_{i})-y,\\
u_{k,i}:=F'(z_{i})^{*}w_{k,i},\\
\alpha_{k,i}:=\langle u_{k,i},z_{i}\rangle-\langle w_{k,i},r_{i}\rangle,\\
\xi_{k,i}:=\eta\| w_{k,i}\|\| r_{i}\|, \end{array} \end{equation} where $r_{i}=F(z_{i})-y$ is the residual term, which in this case is equal to $w_{k,i}$. \end{definition}
\begin{alg}\label{algorithm 3-1} \rm(\textbf{TGSS iteration for exact data with multiple search directions.})
\textbf{Given:}~Exact data $y$, initial choice $x_0=x_{-1}\in\mathcal{X}$ and the parameter $\eta$.
\textbf{For}~$k\in \mathbb{N}$.
~~~~~\textbf{If}~$F(x_{k})\neq y$.
~~~~~~~~~(1) Choose an index set $I_{k}\subset \{k-K+1, \ldots, k\}\cap \mathbb{N}$ with $K\geq1$.
~~~~~~~~~~(2) Compute $z_k$, the search directions $u_{k,i}$, and the parameters $\alpha_{k,i}$, $\xi_{k,i}$ defined in Definition \ref{definition 3-1}.
~~~~~~~~~(3) Update $x_{k+1}$ by \begin{equation}\label{equation 3-3}
x_{k+1}=z_{k}-\sum\limits_{i\in I_{k}}t_{k,i}u_{k,i}. \end{equation} ~~~~~~~~~~~~~~~ Find $t_k:=(t_{k,i})_{i\in I_k}$ by Definition \ref{definition 2-5} such that \begin{equation}\label{equation 3-4} x_{k+1}\in \bigcap_{i\in I_{k}}H_{k,i}. \end{equation} ~~~~~~~~~~~~~~~ Output~$x_{k+1}$.
~~~~~~~~~ Set $k=k+1$.
~~~~~\textbf{Else if}~$F(x_{k})=y$.
~~~~~~~~~ Output~$x_{k}$.
~~~~~~~~~ \textbf{break}
~~~~~\textbf{End If}
\textbf{End For} \end{alg}
\begin{prop}\label{proposition 3-2} For any $k\in \mathbb{N}, i\in I_k$, the solution set $M_{F(x)=y}$ fulfills \begin{equation}\label{equation 3-5}
M_{F(x)=y}\subset H_{k,i}, \end{equation} where $u_{k,i},\alpha_{k,i}$ and $\xi_{k,i}$ are chosen as in (\ref{equation 3-2}). \begin{proof} Let $z\in M_{F(x)=y}$. We have \[\langle u_{k,i},z\rangle-\alpha_{k,i}=\langle w_{k,i},F'(z_{i})(z-z_{i})+F(z_{i})-y\rangle.\] Since $F(z)=y$, we obtain \begin{equation*} \begin{array}{ll}
\left|\langle w_{k,i},F'(z_{i})(z-z_{i})+F(z_{i})-F(z)\rangle\right|\leq\|w_{k,i}\|\cdot\|F(z_{i})-F(z)-F'(z_{i})(z_{i}-z)\|\\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\leq\eta\|w_{k,i}\|\cdot\|F(z_{i})-y\|,
\end{array}
\end{equation*} and using $r_{i}=F(z_{i})-y$ we have $z\in H_{k,i}$. \end{proof} \end{prop}
\begin{prop}\label{proposition 3-3}
Let $\{x_k\}_{k\in \mathbb{N}},\{z_k\}_{k\in \mathbb{N}}$ be iterative sequences generated by Algorithm \ref{algorithm 3-1}. We have \begin{equation}\label{equation 3-6}
z_{k}\in H_>(u_{k,k},\alpha_{k,k}+\xi_{k,k}). \end{equation}
By projecting $z_k$ first onto $H(u_{k,k},\alpha_{k,k},\xi_{k,k})$ we obtain the descent property \begin{equation}\label{equation 3-7}
\left\|z-x_{k+1}\right\|^2\leq\left\|z-z_{k}\right\|^2-\frac{(1-\eta)^2\left\|r_k\right\|^4}{\left\|u_{k,k}\right\|^2},
\end{equation}
for $z\in M_{F(x)=y}.$
\begin{proof} The first estimate is due to an adequate choice of the stripes. According to our definition of $w_{k,k}=F(z_{k})-y=r_{k}$ we have \begin{equation*} \begin{array}{ll}
\alpha_{k,k}:=\langle u_{k,k},z_{k}\rangle-\|r_{k}\|^2,\\
\xi_{k,k}:=\eta\| r_{k}\|^2, \end{array} \end{equation*}
and thus $\langle u_{k,k},z_{k}\rangle-\alpha_{k,k}=\|r_{k}\|^2>\xi_{k,k}$ as $0<\eta<1$. According to Proposition \ref{proposition 2-4} and Definition \ref{definition 2-5}
we have \begin{equation*} \begin{array}{ll}
\left\|z-x_{k+1}\right\|^2\leq\left\|z-P_{H(u_{k,k},\alpha_{k,k},\xi_{k,k})}
(z_{k})\right\|^2\\
~~~~~~~~~~~~~~~~=\left\|z-z_k+\frac{\left\langle u_{k,k},z_k \right\rangle-(\alpha_{k,k}+\xi_{k,k})}{\left\|u_{k,k}\right\|^2}u_{k,k}\right\|^2
=\left\|z-z_k+\frac{(1-\eta)\left\|F(z_k)-y\right\|^2}{\left\|u_{k,k}\right\|^2}u_{k,k}\right\|^2\\
~~~~~~~~~~~~~~~~=\left\|z-z_k\right\|^2+2\frac{(1-\eta)\left\|F(z_k)-y\right\|^2}{\left\|u_{k,k}\right\|^2}
\left\langle u_{k,k},z-z_k \right\rangle+\frac{(1-\eta)^2\left\|F(z_k)-y\right\|^4}{\left\|u_{k,k}\right\|^2}\\
~~~~~~~~~~~~~~~~=\left\|z-z_k\right\|^2+2\frac{(1-\eta)\left\|r_k\right\|^2}{\left\|u_{k,k}\right\|^2}
\left\langle r_{k},F'(z_{k})(z-z_k)+r_k-r_k \right\rangle+\frac{(1-\eta)^2\left\|r_k\right\|^4}{\left\|u_{k,k}\right\|^2}\\
~~~~~~~~~~~~~~~~\leq\left\|z-z_k\right\|^2-\frac{(1-\eta)^2\left\|r_k\right\|^4}{\left\|u_{k,k}\right\|^2}.
\end{array} \end{equation*} The descent property (the second estimate) is obtained.
\end{proof} \end{prop}
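To see the descent step concretely: for a linear operator ($\eta=0$) with exact data and a single search direction, the stripe degenerates to a hyperplane and the projection in the proof reduces to $x_{k+1}=z_k-\frac{\|r_k\|^2}{\|u_{k,k}\|^2}u_{k,k}$, a gradient step with a minimal-error stepsize. A small numerical sketch (the matrix and data are our own illustrative choices):

```python
import numpy as np

# Linear toy problem A x = y (eta = 0, exact data, lambda_k = 0): each step
# projects the current iterate onto the hyperplane H(u_k, alpha_k).
A = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 3.0]])
x_true = np.array([1.0, 2.0])
y = A @ x_true

x = np.zeros(2)
for k in range(100):
    r = A @ x - y               # residual r_k
    u = A.T @ r                 # search direction u_{k,k} = A^* r_k
    if u @ u == 0:
        break
    x = x - (r @ r) / (u @ u) * u   # projection step of Proposition 3.3
# x recovers x_true up to machine precision
```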
\subsection{TGSS with noisy data}\label{subsection 3-2}
\noindent The exact data case discussed above is an idealization; in practice only noisy data is available. We therefore extend our discussion to the noisy data case in this subsection.
\begin{definition}\label{definition 3-4} At iteration $k\in \mathbb{N}$, choose a finite index set $I_k^\delta$ that contains $k$. Let
\begin{equation}\label{equation 3-8}
z_{i}^\delta=x_{i}^\delta+\lambda_{i}^\delta(x_{i}^\delta-x_{i-1}^\delta),
\end{equation} where $\lambda_{0}^\delta=0,\;0\leq \lambda_{i}^\delta\leq 1,\;\forall i\in I_k^\delta\subset \mathbb{N}$. Define the stripes \[H_{k,i}^\delta:=H(u_{k,i}^\delta,\alpha_{k,i}^\delta,\xi_{k,i}^\delta)\] with \begin{equation}\label{equation 3-9} \begin{array}{ll}
w_{k,i}^\delta:=F(z_{i}^\delta)-y^\delta,\\
u_{k,i}^\delta:=F'(z_{i}^\delta)^{*}w_{k,i}^\delta,\\
\alpha_{k,i}^\delta:=\langle u_{k,i}^\delta,z_{i}^\delta\rangle-\langle w_{k,i}^\delta,r_{i}^\delta\rangle,\\
\xi_{k,i}^\delta:=(\delta+\eta(\| r_{i}^\delta\|+\delta))\| w_{k,i}^\delta\|, \end{array} \end{equation} where $r_{i}^\delta=F(z_{i}^\delta)-y^\delta$ is the residual term, which in this case is equal to $w_{k,i}^\delta$. \end{definition}
\noindent In this paper, we will use the \emph{Morozov discrepancy principle} with respect to $z_{k}^\delta$, i.e., we stop the iteration after $k_*$ steps, where \begin{equation}\label{equation 3-10}
k_*:=k_*(\delta,y^\delta):=\min\{k\in \mathbb{N}:\|r_k^\delta\|\leq\tau\delta\}. \end{equation}
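The stopping rule (\ref{equation 3-10}) simply scans the residual norms for the first index satisfying the discrepancy bound; a sketch (the function name is ours):

```python
def stopping_index(residual_norms, tau, delta):
    """First index k with ||r_k|| <= tau * delta (Morozov discrepancy principle)."""
    for k, r in enumerate(residual_norms):
        if r <= tau * delta:
            return k
    return None  # criterion not yet reached

k_star = stopping_index([5.0, 3.0, 1.5, 0.5], tau=2.0, delta=1.0)
assert k_star == 2
```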
\begin{alg}\label{algorithm 3-2} \rm(\textbf{TGSS iteration for noisy data with multiple search directions.})
\textbf{Given:}~Noisy data $y^\delta$, parameter $\delta$ and initial choice $x_0^\delta=x_{-1}^\delta=x_0\in\mathcal{X}$ and the parameter $\eta$.
\textbf{For}~$k\in \mathbb{N}$.
~~~~~\textbf{If}~stopping criterion (\ref{equation 3-10}) is not satisfied.
~~~~~~~~~(1) Choose an index set $I_{k}^\delta\subset \{k-K+1, \ldots, k\}\cap \mathbb{N}$ with $K\geq1$.
~~~~~~~~~~(2) Compute $z_k^\delta$, the search directions $u_{k,i}^\delta$, and the parameters $\alpha_{k,i}^\delta$, $\xi_{k,i}^\delta$ defined in Definition \ref{definition 3-4}.
~~~~~~~~~(3) Update $x_{k+1}^\delta$ by \begin{equation}\label{equation 3-11}
x_{k+1}^\delta=z_{k}^\delta-\sum\limits_{i\in I_{k}^\delta}t_{k,i}^\delta u_{k,i}^\delta. \end{equation} ~~~~~~~~~~~~~~~ Find $t_k^\delta:=(t_{k,i}^\delta)_{i\in I_k^\delta}$ by Definition \ref{definition 2-5} such that \begin{equation}\label{equation 3-12} x_{k+1}^\delta\in \bigcap_{i\in I_{k}^\delta}H_{k,i}^\delta. \end{equation} ~~~~~~~~~~~~~~~ Output~$x_{k+1}^\delta$.
~~~~~~~~~ Set $k=k+1$.
~~~~~\textbf{Else if}~$k$ satisfies (\ref{equation 3-10}).
~~~~~~~~~ Output~$x_{k}^\delta$.
~~~~~~~~~ Set $k=k_*$.
~~~~~~~~~ \textbf{break}
~~~~~\textbf{End If}
\textbf{End For} \end{alg}
\noindent We obtain the analogous statement as in the noise-free case.
\begin{prop}\label{proposition 3-5} For any $k\in \mathbb{N}, i\in I_k^\delta$, the solution set $M_{F(x)=y}$ fulfills \[M_{F(x)=y}\subset H_{k,i}^\delta,\]
where $u_{k,i}^\delta,\alpha_{k,i}^\delta$ and $\xi_{k,i}^\delta$ are chosen as in (\ref{equation 3-9}).
\begin{proof} Let $z\in M_{F(x)=y}.$ We then have \begin{equation*} \begin{array}{ll}
|\langle u_{k,i}^\delta,z\rangle-\alpha_{k,i}^\delta|=|\langle w_{k,i}^\delta,F'(z_{i}^\delta)(z-z_{i}^\delta)+F(z_{i}^\delta)-F(z)+F(z)-y^\delta\rangle|\\
~~~~~~~~~~~~~~~~~~~~\leq\|w_{k,i}^\delta\|\cdot(\|F(z_{i}^\delta)-F(z)-F'(z_{i}^\delta)(z_{i}^\delta-z)\|+\|y^\delta-y\|)\\
~~~~~~~~~~~~~~~~~~~~\leq\|w_{k,i}^\delta\|\cdot(\eta\|F(z_{i}^\delta)-y^\delta+y^\delta-F(z)\|+\delta)\\
~~~~~~~~~~~~~~~~~~~~\leq\|w_{k,i}^\delta\|\cdot(\eta(\| r_{i}^\delta\|+\delta)+\delta), \end{array} \end{equation*} i.e., $z\in H_{k,i}^\delta$. \end{proof} \end{prop}
\begin{prop}\label{proposition 3-6}
Let $\{x_k^\delta\}_{k\in \mathbb{N}},\{z_k^\delta\}_{k\in \mathbb{N}}$ be the iterative sequences generated by Algorithm \ref{algorithm 3-2}. As long as $\|r_{k}^\delta\|>\frac{1+\eta}{1-\eta}\delta$, we have \begin{equation}\label{equation 3-13}
z_k^\delta\in H_>(u_{k,k}^\delta,\alpha_{k,k}^\delta+\xi_{k,k}^\delta), \end{equation} where $u_{k,k}^\delta, \alpha_{k,k}^\delta$ and $\xi_{k,k}^\delta$ are chosen as in (\ref{equation 3-9}). By first projecting $z_k^\delta$ onto $H(u_{k,k}^\delta,\alpha_{k,k}^\delta,\xi_{k,k}^\delta)$ we obtain a decreasing property similar to the second estimate of Proposition \ref{proposition 3-3}, which we discuss in detail later.
\begin{proof} For the proof we refer to Proposition \ref{proposition 3-3} and Definition \ref{definition 3-4}.
\end{proof}
\end{prop}
In the following subsection, we take a look at an important special case of Algorithm \ref{algorithm 3-2}, which provides a better understanding of the structure of the TGSS method.
\subsection{TGSS with two search directions}\label{subsection 3-3}
\noindent We want to summarize a fast way to compute $x_{k+1}^\delta$ according to Algorithm \ref{algorithm 3-2}, using only two search directions, i.e., $I_{k}^\delta$ has two elements ($N=2$). This method has been suggested and analyzed by Sch\"{o}pfer and Schuster in \cite{23} and has been successfully implemented for the numerical solution of an integral equation of the first kind.
The following algorithm is a special case of Algorithm \ref{algorithm 3-2}, where we have chosen $I_{0}^\delta=\{0\}$ and $I_{k}^\delta=\{k-1,k\}$ for all $k\geq1$. For convenience, we skip the first index $k$ in the subscript of the functions and parameters we are dealing with.
\begin{definition}\label{definition 3-7} In the first step ($k=0$), we choose $u_0^\delta$ as the search direction. At iteration $k\geq 1$, choose the index set $I_k^\delta:=\{k,k-1\}$, i.e., the two search directions $\{u_k^\delta,u_{k-1}^\delta\}$, where \begin{equation}\label{equation 3-14} \begin{array}{l} w_{k}^\delta:=F(z_{k}^\delta)-y^\delta,\\ u_k^\delta:=F'(z_{k}^\delta)^{*}w_{k}^\delta. \end{array} \end{equation} Let $H_{-1}^\delta:=\mathcal{X}$, and for $k\in \mathbb{N}$, define the stripes \[H_{k}^\delta:=H(u_{k}^\delta,\alpha_{k}^\delta,\xi_{k}^\delta)\] with \begin{equation}\label{equation 3-15} \begin{array}{l}
\alpha_{k}^\delta:=\langle u_{k}^\delta,z_{k}^\delta\rangle-\langle w_{k}^\delta,r_{k}^\delta\rangle,\\
\xi_{k}^\delta:=(\delta+\eta(\| r_{k}^\delta\|+\delta))\| w_{k}^\delta\|. \end{array} \end{equation}
\noindent Here $r_{k}^\delta=F(z_{k}^\delta)-y^\delta$ is the residual of the iteration, which in this case equals $w_{k}^\delta$. Choose the same stopping rule as in Algorithm \ref{algorithm 3-2}, where the constant satisfies \begin{equation}\label{equation 3-16} \tau>\frac{1+\eta}{1-\eta}>1. \end{equation} \end{definition}
\noindent If $\left\|r_{k}^\delta\right\|>\tau\delta$, it follows from Proposition \ref{proposition 3-6} that \begin{equation}\label{equation 3-17} z_k^\delta\in H_>(u_{k}^\delta,\alpha_{k}^\delta+\xi_{k}^\delta). \end{equation} Then calculate the iterate $x_{k+1}^\delta$ by the following steps. \begin{description}
\item[(i)] Compute
\[\tilde{x}_{k+1}^\delta:=P_{H(u_{k}^\delta,\alpha_{k}^\delta+\xi_{k}^\delta)}(z_k^\delta)=z_k^\delta-\frac{\langle u_{k}^\delta,z_k^\delta \rangle-(\alpha_{k}^\delta+\xi_{k}^\delta)}{\|u_{k}^\delta\|^2}u_{k}^\delta.\]
Thus, for all $z\in M_{F(x)=y}$, we have the descent property
\begin{equation}\label{equation 3-18}
\left\|z-\tilde{x}_{k+1}^\delta\right\|^2\leq\left\|z-z_k^\delta\right\|^2-
\left(\frac{\|r_{k}^\delta\|\left(\|r_{k}^\delta\|-\delta-\eta(\| r_{k}^\delta\|+\delta)\right)}{\|u_{k}^\delta\|}\right)^2.
\end{equation}
If $\tilde{x}_{k+1}^\delta\in H_{k-1}^\delta$, then $\tilde{x}_{k+1}^\delta=P_{H_{k}^\delta\cap H_{k-1}^\delta}(z_{k}^\delta)$ and the calculation is completed. Otherwise, turn to step (ii).
\item[(ii)] Decide whether $\tilde{x}_{k+1}^\delta\in H_>(u_{k-1}^\delta,\alpha_{k-1}^\delta+\xi_{k-1}^\delta)$ or $\tilde{x}_{k+1}^\delta\in H_<(u_{k-1}^\delta,\alpha_{k-1}^\delta-\xi_{k-1}^\delta)$.
Then calculate
\[x_{k+1}^\delta:=P_{H(u_{k}^\delta,\alpha_{k}^\delta+\xi_{k}^\delta)\cap H(u_{k-1}^\delta,\alpha_{k-1}^\delta\pm\xi_{k-1}^\delta)}(\tilde{x}_{k+1}^\delta),\]
i.e., determine $x_{k+1}^\delta=\tilde{x}_{k+1}^\delta-\tilde{t}_{k}^\delta u_{k}^\delta-\tilde{t}_{k-1}^\delta u_{k-1}^\delta$ such that $\tilde{t}=(\tilde{t}_{k}^\delta ,\tilde{t}_{k-1}^\delta )$ minimizes the optimization function
\begin{equation}\label{equation 3-19}
h_2(t_1,t_2):=\frac{1}{2}\left\|\tilde{x}_{k+1}^\delta-t_1 u_{k}^\delta-t_2 u_{k-1}^\delta \right\|^2+t_1(\alpha_{k}^\delta+\xi_{k}^\delta)+t_2(\alpha_{k-1}^\delta\pm\xi_{k-1}^\delta).
\end{equation}
Then $x_{k+1}^\delta\in H_{k}^\delta\cap H_{k-1}^\delta$ and for all $z\in M_{F(x)=y}$ we have
\begin{equation}\label{equation 3-20}
\left\|z-x_{k+1}^\delta\right\|^2\leq\left\|z-z_k^\delta\right\|^2-S_{k}^\delta,
\end{equation}
where
\[
S_{k}^\delta:=\left(\frac{\|r_{k}^\delta\|\left(\|r_{k}^\delta\|-\delta-\eta(\| r_{k}^\delta\|+\delta)\right)}{\|u_{k}^\delta\|}\right)^2+\left(\frac{\langle u_{k-1}^\delta,\tilde{x}_{k+1}^\delta \rangle-(\alpha_{k-1}^\delta\pm\xi_{k-1}^\delta)}{\gamma_k\|u_{k-1}^\delta\|}\right)^2
\]
and
\[
\gamma_k:=\left(1-\left(\frac{\left|\langle u_{k}^\delta,u_{k-1}^\delta \rangle\right|}{\|u_{k}^\delta\|\|u_{k-1}^\delta\|}\right)^2\right)^{\frac{1}{2}}\in \left(0,1\right].
\] \end{description}
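The projections in steps (i) and (ii) are elementary: step (i) is the metric projection onto a hyperplane, and step (ii) amounts to solving the $2\times2$ Gram system that characterizes the minimizer of $h_2$ in (\ref{equation 3-19}). A small self-contained Python sketch (the helper names are ours):

```python
# Sketch of the projections used in steps (i) and (ii); helper names are ours.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_hyperplane(z, u, alpha):
    """Metric projection of z onto H(u, alpha) = {x : <u,x> = alpha}, cf. step (i)."""
    t = (dot(u, z) - alpha) / dot(u, u)
    return [zi - t * ui for zi, ui in zip(z, u)]

def project_two_hyperplanes(x, u1, a1, u2, a2):
    """Projection onto {<u1,.> = a1} n {<u2,.> = a2} for independent u1, u2.

    Solves the 2x2 Gram system for (t1, t2), i.e. the minimizer of h_2 in (3.19).
    """
    g11, g12, g22 = dot(u1, u1), dot(u1, u2), dot(u2, u2)
    b1, b2 = dot(u1, x) - a1, dot(u2, x) - a2
    det = g11 * g22 - g12 * g12            # > 0 since u1, u2 are independent
    t1 = (g22 * b1 - g12 * b2) / det       # Cramer's rule for G t = b
    t2 = (g11 * b2 - g12 * b1) / det
    return [xi - t1 * u1i - t2 * u2i for xi, u1i, u2i in zip(x, u1, u2)]
```

The result of `project_two_hyperplanes` satisfies both linear constraints exactly, since the Gram system is precisely the first-order optimality condition of $h_2$.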
\begin{remark}\label{remark 3-1}
Discussing the TGSS method with two search directions allows a deeper understanding of the general TGSS method.
\begin{description}
\item[(a)] Although the method might not end up with the metric projection onto the intersection, it does guarantee that either $\tilde{x}_{k+1}^\delta=P_{H_{k}^\delta\cap H_{k-1}^\delta}(z_{k}^\delta)$ or $x_{k+1}^\delta\in H_{k}^\delta\cap H_{k-1}^\delta$. Moreover, by the uniqueness of the projection path, the uniqueness of the iteration sequence is guaranteed.
\item[(b)] To see that (\ref{equation 3-17}) is valid if $\left\|r_{k}^\delta\right\|>\tau\delta$, we note that (\ref{equation 3-16}) implies
\[
\left\|r_{k}^\delta\right\|>\tau\delta>\delta\frac{1+\eta}{1-\eta}.
\]
Because of $0\leq\eta<1$ we have
\[
\| r_{k}^\delta\|-\eta\| r_{k}^\delta\|-\delta\eta-\delta>0,
\]
yielding
\[
\alpha_{k}^\delta+\xi_{k}^\delta=\langle u_{k}^\delta,z_k^\delta \rangle-\| r_{k}^\delta\|\cdot(\| r_{k}^\delta\|-\eta\| r_{k}^\delta\|-\delta\eta-\delta)<\langle u_{k}^\delta,z_k^\delta \rangle.
\]
Thus $z_k^\delta\in H_>(u_{k}^\delta,\alpha_{k}^\delta+\xi_{k}^\delta)$, i.e., (\ref{equation 3-17}) holds.
\item[(c)] We try to make the width of each stripe as narrow as possible, which is related to the value of $\eta$ from the tangential cone condition. Similarly, the choice of (\ref{equation 3-16}) for $\tau$ also depends strongly on the constant $\eta$. The smaller $\eta$, the better the approximation of $F$ by its linearization; the larger $\eta$, the bigger the corresponding tolerance $\tau\delta$, that is, the residual (error) after the iteration is stopped becomes larger.
In addition, the stripes whose intersection determines $x_{k+1}^\delta$ become correspondingly wider, so that the difference between $x_{k+1}^\delta$ and the true solution may be larger.
\item[(d)] The discussion of the relationship between $u_{k}^\delta$ and $u_{k-1}^\delta$ has been stated in \cite{5,15}, and the modification due to step (ii) might be significant, if the search directions $u_{k}^\delta$ and $u_{k-1}^\delta$ fulfill
\[
\frac{\left|\langle u_{k}^\delta,u_{k-1}^\delta \rangle\right|}{\|u_{k}^\delta\|\|u_{k-1}^\delta\|}\approx 1,
\]
since in this case the coefficient $\gamma_k$ is quite small and therefore $S_{k}^\delta$ is large. This can be contrasted with the situation where $u_{k}^\delta\perp u_{k-1}^\delta$: the projection of $z_k^\delta$ onto $H_{k}^\delta$ is already contained in $H_{k-1}^\delta$, so that step (ii) has no effect. This inspires the method of Heber et al. \cite{24}.
\item[(e)] Algorithm \ref{algorithm 3-2} with two search directions is valuable for an implementation. The search direction $u_{k-1}^\delta$ has already been calculated in the previous iteration and can be reused. Moreover, in each iteration a better point $z_{k}^\delta$ is selected to replace $x_{k}^\delta$, and the two-point gradient method is used to optimize the search directions $F'(z_{j}^\delta)^{*}(F(z_{j}^\delta)-y^\delta)$, $j\in\{k,k-1\}$. Hence the costly computations are the choice of a suitable $\lambda_{k}^\delta$, the computation of $z_{k}^\delta$, and the determination of $F'(z_{k}^\delta)^{*}r_{k}^\delta$.
\item[(f)] The relevant conclusions for Algorithm \ref{algorithm 3-2} also apply to the noise-free case by setting $\delta=0$, except that the discrepancy principle has to be replaced by another stopping rule, such as a maximal number of iterations. The remainder of the proofs follows the same lines as in the treatment of the noisy case.
\end{description} \end{remark}
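Item (d) can be checked numerically. The following snippet (our own illustration) evaluates $\gamma_k$ for an orthogonal pair and for a nearly parallel pair of search directions:

```python
from math import sqrt

def gamma(u1, u2):
    """gamma_k = sqrt(1 - (|<u1,u2>| / (||u1|| ||u2||))^2) as defined above."""
    d = sum(a * b for a, b in zip(u1, u2))
    n1 = sqrt(sum(a * a for a in u1))
    n2 = sqrt(sum(b * b for b in u2))
    return sqrt(1.0 - (abs(d) / (n1 * n2)) ** 2)

# Orthogonal directions: gamma_k = 1, and step (ii) brings no extra decrease.
# Nearly parallel directions: gamma_k is small, so S_k^delta can become large.
```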
\begin{prop}\label{proposition 3-8} If $ u_{k}^\delta,u_{k-1}^\delta$ are linearly dependent, step (i) already yields the metric projection of $z_{k}^\delta$ onto $H_{k}^\delta\cap H_{k-1}^\delta$. Furthermore, we have $ H(u_{k}^\delta,\alpha_{k}^\delta+\xi_{k}^\delta)\subset H_{k-1}^\delta$. \begin{proof}
Let $u_{k-1}^\delta=lu_{k}^\delta$, $0\neq l\in \mathbb{R}$.
For $z\in H(u_{k}^\delta,\alpha_{k}^\delta+\xi_{k}^\delta)\cap H_{k-1}^\delta$ we get \begin{equation*} \begin{array}{l}
\alpha_{k-1}^\delta-\xi_{k-1}^\delta\leq\left\langle u_{k-1}^\delta,z \right\rangle\leq\alpha_{k-1}^\delta+\xi_{k-1}^\delta,\\
\left\langle u_{k-1}^\delta,z \right\rangle=l\left\langle u_{k}^\delta,z \right\rangle=l(\alpha_{k}^\delta+\xi_{k}^\delta). \end{array} \end{equation*}
Hence $l(\alpha_{k}^\delta+\xi_{k}^\delta)\in[\alpha_{k-1}^\delta-\xi_{k-1}^\delta,\alpha_{k-1}^\delta+\xi_{k-1}^\delta]$.
And, for any $x\in H(u_{k}^\delta,\alpha_{k}^\delta+\xi_{k}^\delta)$, we obtain
\[
\left\langle u_{k-1}^\delta,x \right\rangle=l\left\langle u_{k}^\delta,x \right\rangle=l(\alpha_{k}^\delta+\xi_{k}^\delta)\in[\alpha_{k-1}^\delta-\xi_{k-1}^\delta,\alpha_{k-1}^\delta+\xi_{k-1}^\delta],
\]
showing that $x\in H_{k-1}^\delta$. It follows that $H(u_{k}^\delta,\alpha_{k}^\delta+\xi_{k}^\delta)\subset H_{k-1}^\delta$, which yields the assertion.
From Definition \ref{definition 2-5} and Algorithm \ref{algorithm 3-2}, we consider the search direction $\sum\limits_{i\in I_{k}^\delta}t_{k,i}^\delta u_{k,i}^\delta$. Inspired by the above discussion and an inductive argument, one can show that $\{u_{k,i}^\delta\}_{i\in I_{k}^\delta\setminus T_k}$ are linearly independent, where $T_k$ is the index set of the coefficients with $t_{k,i}^\delta=0$. The details of this similar argument are omitted. The proposition also applies to Algorithm \ref{algorithm 3-1}.
\end{proof} \end{prop}
\section{Convergence and regularity analysis}\label{section-4} \noindent For the analysis of the TGSS methods presented in Section \ref{section-3}, using the assumptions postulated in Section \ref{section-2}, we first establish the convergence results of the TGSS method. We then give the regularity analysis of the TGSS method with noisy data. Finally, we discuss the DBTS algorithm for the choice of the combination parameter $\lambda_{k}^\delta$, which completes our regularity theory.
\subsection{Convergence results}\label{subsection 4-1} \noindent In Section \ref{section-3} we placed some restrictions on the combination parameters $\lambda_{k}^\delta$. The minimal requirements on their values are: \begin{equation}\label{equation 4-1} \lambda_{0}^\delta=0,~~0\leq\lambda_{k}^\delta\leq1,~~\forall k\in \mathbb{N}. \end{equation}
\begin{prop}\label{proposition 4-1} Let $x_{k}^\delta,x_{k-1}^\delta\in B_{\rho}(x_*)$ for $k\in \mathbb{N}$. Assume that \begin{equation}\label{equation 4-2}
\left\|F(z_{k}^\delta)-y^\delta\right\|>\tau\delta, \end{equation} with $\tau$ satisfying \begin{equation}\label{equation 4-3} \tau>\frac{1+\eta}{1-\eta}. \end{equation} Define \begin{equation}\label{equation 4-4}
\Delta_{k}^\delta:=\left\|x_{k}^\delta-x_*\right\|^2-\left\|x_{k-1}^\delta-x_*\right\|^2, \end{equation}
and
\begin{equation}\label{equation 4-5} \Psi:=(1-\eta)-\tau^{-1}(1+\eta)>0. \end{equation} Then there holds \begin{equation}\label{equation 4-6} \Delta_{k+1}^\delta\leq\lambda_{k}^\delta\Delta_{k}^\delta+
\lambda_{k}^\delta(\lambda_{k}^\delta+1)\|x_{k}^\delta-x_{k-1}^\delta\|^2- \frac{\Psi^2\|r_{k}^\delta\|^2}{ c_F^2}, \end{equation}
where $c_F$ is an upper bound of $\|F'(x)\|$. \begin{proof} Since $x_{k}^\delta,x_{k-1}^\delta\in B_{\rho}(x_*)$ and $x_{*}\in B_{\rho}(x_0)$, the triangle inequality yields $x_{k}^\delta,x_{k-1}^\delta\in B_{2\rho}(x_0)$. Together with $\lambda_{k}^\delta\leq1$, we deduce \begin{equation*} \begin{array}{l}
\|z_{k}^\delta-x_0\|\leq\|z_{k}^\delta-x_{k}^\delta\|+\|x_{k}^\delta-x_0\|=\lambda_{k}^\delta\|x_{k}^\delta-x_{k-1}^\delta\|+\|x_{k}^\delta-x_0\|\\
~~~~~~~~~~~~\;\leq\lambda_{k}^\delta\|x_{k}^\delta-x_{*}\|+\lambda_{k}^\delta\|x_{*}-x_{k-1}^\delta\|+\|x_{k}^\delta-x_0\|\\ ~~~~~~~~~~~~\;\leq2\lambda_{k}^\delta\rho+2\rho\\ ~~~~~~~~~~~~\;\leq4\rho, \end{array} \end{equation*} which indicates that $z_{k}^\delta\in B_{4\rho}(x_0)$. Hence, using Proposition \ref{proposition 3-6} and (\ref{equation 3-18}), we get \begin{equation*} \begin{array}{l}
\left\|x_*-x_{k+1}^\delta\right\|^2\leq\left\|x_*- z_{k}^\delta\right\|^2-\left(\frac{\|r_{k}^\delta\|\left(\|r_{k}^\delta\|-\delta-\eta(\| r_{k}^\delta\|+\delta)\right)}{\|u_{k}^\delta\|}\right)^2\\
~~~~~~~~~~~~~~~~~\;\leq\left\|x_*- z_{k}^\delta\right\|^2-\frac{\Psi^2\|r_{k}^\delta\|^2}{ c_F^2}. \end{array} \end{equation*} Now, using the above inequality, we get \begin{equation*} \begin{array}{l}
\Delta_{k+1}^\delta=\left\|x_{k+1}^\delta-x_*\right\|^2-\left\|x_{k}^\delta-x_*\right\|^2\\
~~~~~~~\leq\left\|x_*- z_{k}^\delta\right\|^2-\left\|x_{k}^\delta-x_*\right\|^2-\frac{\Psi^2\|r_{k}^\delta\|^2}{ c_F^2}\\
~~~~~~~=2\langle z_{k}^\delta-x_{k}^\delta,x_{k}^\delta-x_* \rangle+\left\|z_{k}^\delta-x_{k}^\delta\right\|^2-\frac{\Psi^2\|r_{k}^\delta\|^2}{ c_F^2}\\
~~~~~~~=-2\lambda_{k}^\delta\langle x_{k-1}^\delta-x_{k}^\delta,x_{k}^\delta-x_* \rangle+(\lambda_{k}^\delta)^2\left\|x_{k}^\delta-x_{k-1}^\delta\right\|^2 -\frac{\Psi^2\|r_{k}^\delta\|^2}{ c_F^2}\\
~~~~~~~=-\lambda_{k}^\delta\left(\left\|x_{k-1}^\delta-x_{k}^\delta+x_{k}^\delta-x_*\right\|^2-\left\|x_{k}^\delta-x_*\right\|^2-\left\|x_{k}^\delta-x_{k-1}^\delta\right\|^2\right)\\
~~~~~~~~~~+(\lambda_{k}^\delta)^2\left\|x_{k}^\delta-x_{k-1}^\delta\right\|^2
-\frac{\Psi^2\|r_{k}^\delta\|^2}{ c_F^2}\\
~~~~~~~=-\lambda_{k}^\delta\left(\left\|x_{k-1}^\delta-x_*\right\|^2-\left\|x_{k}^\delta-x_*\right\|^2\right)+\lambda_{k}^\delta(\lambda_{k}^\delta+1)\left\|x_{k}^\delta-x_{k-1}^\delta\right\|^2-\frac{\Psi^2\|r_{k}^\delta\|^2}{ c_F^2}\\
~~~~~~~=\lambda_{k}^\delta\Delta_{k}^\delta+\lambda_{k}^\delta(\lambda_{k}^\delta+1)\left\|x_{k}^\delta-x_{k-1}^\delta\right\|^2
-\frac{\Psi^2\|r_{k}^\delta\|^2}{ c_F^2}, \end{array} \end{equation*} which yields the assertion. \end{proof} \end{prop}
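The key algebraic step in the proof above is the polarization identity $\|a+b\|^2=\|a\|^2+2\langle a,b\rangle+\|b\|^2$, applied with $a=x_{k-1}^\delta-x_{k}^\delta$ and $b=x_{k}^\delta-x_*$. A quick numerical sanity check in Python (the vectors below are arbitrary illustrative data):

```python
# Numerical check of the identity used in the proof of Proposition 4.1:
#   -2*lam*<a, b> = -lam*(||a+b||^2 - ||b||^2 - ||a||^2)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def nrm2(a):
    """Squared Euclidean norm."""
    return dot(a, a)

a = [1.0, -2.0, 0.5]    # stands in for x_{k-1} - x_k
b = [0.25, 3.0, -1.0]   # stands in for x_k - x_*
lam = 0.7               # stands in for lambda_k

lhs = -2.0 * lam * dot(a, b)
s = [x + y for x, y in zip(a, b)]
rhs = -lam * (nrm2(s) - nrm2(b) - nrm2(a))
```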
\begin{remark}\label{remark 4-1}
In the convergence analysis of Landweber iteration one has $\Delta_{k+1}^\delta\leq0$ for all $k<k_*$, i.e., $x_{k+1}^\delta$ is a better approximation of $x_*$ than $x_{k}^\delta$ as long as the discrepancy principle (\ref{equation 3-10}) is not yet satisfied. We want our TGSS method to share this property. In view of inequality (\ref{equation 4-6}), we use the \emph{coupling condition}: \begin{equation}\label{equation 4-7}
\lambda_{k}^\delta(\lambda_{k}^\delta+1)\|x_{k}^\delta-x_{k-1}^\delta\|^2-\frac{\Psi^2\|r_{k}^\delta\|^2}{ \mu c_F^2}\leq0, \end{equation} which holds for all $0\leq k\leq k_*$, and $\mu$ is a constant satisfying $\mu>1$. Then, we can derive the sufficient condition \begin{equation}\label{equation 4-8}
\lambda_{k}^\delta(\lambda_{k}^\delta+1)\|x_{k}^\delta-x_{k-1}^\delta\|^2\leq\frac{\left(\Psi\tau\delta\right)^2}{\mu c_F^2}, \end{equation}
which leads to the choice \begin{equation}\label{equation 4-9}
\lambda_{k}^\delta=\min\left\{\sqrt{\frac{(\Psi\tau\delta)^2}{\mu c_F^2\|x_{k}^\delta-x_{k-1}^\delta\|^2}+\frac{1}{4}}-\frac{1}{2},\frac{k}{k+\alpha}\right\}, \end{equation} where $\alpha\geq3$ is a given number. Note that in the above formula for $\lambda_{k}^\delta$, the second argument of the minimum is $k/(k+\alpha)$, which is the combination parameter used in Nesterov's acceleration scheme. In case the first argument is large, this formula leads to $\lambda_{k}^\delta=k/(k+\alpha)$, and consequently the acceleration effect of Nesterov's scheme can be utilized. It also satisfies the requirement (\ref{equation 4-1}). However, when $\delta=0$, we get $\lambda_{k}^\delta=\lambda_{k}^0=0$ by the definition (\ref{equation 4-9}), which reduces to the sequential subspace optimization iteration.
Based on the above facts, we determine a sequence $\lambda_{k}^\delta$ by a discrete backtracking search (DBTS) procedure, which takes nonzero values for $\delta=0$, satisfies the coupling condition (\ref{equation 4-7}), and ensures the acceleration effect at the same time. \end{remark}
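The choice (\ref{equation 4-9}) is straightforward to implement. The following Python sketch (function and argument names are ours) also guards against $\|x_k^\delta-x_{k-1}^\delta\|=0$, in which case the first argument of the minimum is $+\infty$:

```python
from math import sqrt

def combination_parameter(k, delta, diff_norm, Psi, tau, mu, c_F, alpha=3.0):
    """Choice (4.9): lambda_k^delta = min{ sqrt((Psi*tau*delta)^2 /
    (mu*c_F^2*||x_k - x_{k-1}||^2) + 1/4) - 1/2, k/(k+alpha) }."""
    nesterov = k / (k + alpha)          # Nesterov-type combination parameter
    if diff_norm == 0.0:                # first argument of the min is +infinity
        return nesterov
    first = sqrt((Psi * tau * delta) ** 2
                 / (mu * c_F ** 2 * diff_norm ** 2) + 0.25) - 0.5
    return min(first, nesterov)
```

Note that the requirements (\ref{equation 4-1}) hold automatically: $k=0$ gives $\lambda_0^\delta=0$, and for $\delta=0$ the first argument vanishes, so the choice degenerates to $\lambda_k^0=0$.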
\begin{prop}\label{proposition 4-2}
Let $k_*=k_*(\delta,y^\delta)$ be chosen according to (\ref{equation 3-10}), and assume that (\ref{equation 4-7}) holds for all $0\leq k<k_*$. Then $x_{k}^\delta$ as in (\ref{equation 3-11}) is well-defined and lies in $B_\rho(x_*)$, and we get \begin{equation}\label{equation 4-10}
\left\|x_{k+1}^\delta-x_*\right\|\leq\left\|x_{k}^\delta-x_*\right\|,~~~~\forall\, -1\leq k<k_*. \end{equation} Moreover, $x_{k}^\delta\in B_\rho(x_*)\subset B_{2\rho}(x_0)$ for all $-1\leq k<k_*$ and \begin{equation}\label{equation 4-11}
\sum\limits_{k=0}^{k_*-1}\left\|F(z_{k}^\delta)-y^\delta\right\|^2\leq\frac{ c_F^2}{\overline\mu\Psi^2}\|x_0^\delta-x_*\|^2 \end{equation} with $\overline\mu:=(\mu-1)/\mu>0.$ \begin{proof} Taking $k=0$, it follows from (\ref{equation 4-6}) that \[
\Delta_{1}^\delta\leq\lambda_{0}^\delta\Delta_{0}^\delta+\lambda_{0}^\delta(\lambda_{0}^\delta+1)\left\|x_{0}^\delta-x_{-1}^\delta\right\|^2
-\frac{\Psi^2\|r_{0}^\delta\|^2}{ c_F^2}. \] Using (\ref{equation 4-7}) and $\lambda_{0}^\delta=0$, we can deduce \[ \Delta_{1}^\delta\leq\lambda_{0}^\delta\Delta_{0}^\delta=0, \] which implies that $x_{1}^\delta\in B_\rho(x_*)$. We proceed inductively to demonstrate that \[ \Delta_{k+1}^\delta\leq\lambda_{k}^\delta\Delta_{k}^\delta\leq\cdots\leq\lambda_{k}^\delta\lambda_{k-1}^\delta\cdots\lambda_{1}^\delta\Delta_{1}^\delta, \] where $0\leq\lambda_{k}^\delta\leq1$. As a result, we have \[\Delta_{k+1}^\delta\leq0~~~~\forall\, -1\leq k<k_*.\] Then, we obtain (\ref{equation 4-10}), and $x_{k+1}^\delta\in B_\rho(x_*)$, which completes the induction. From $x_*\in B_{\rho}(x_0)$, we derive $x_{k+1}^\delta\in B_{2\rho}(x_0)$ for all $-1\leq k<k_*$.
In addition, from (\ref{equation 4-6}), (\ref{equation 4-7}) and (\ref{equation 4-10}) we can deduce that \[
\Delta_{k+1}^\delta\leq\frac{\Psi^2\|r_{k}^\delta\|^2}{\mu c_F^2}-\frac{\Psi^2\|r_{k}^\delta\|^2}{ c_F^2}, \] i.e., \[
\overline\mu\frac{\Psi^2\|r_{k}^\delta\|^2}{ c_F^2}\leq\left\|x_{k}^\delta-x_*\right\|^2-\left\|x_{k+1}^\delta-x_*\right\|^2, \] and hence \[
\overline\mu\frac{\Psi^2}{ c_F^2}\sum\limits_{k=0}^{k_*-1}\left\|F(z_{k}^\delta)-y^\delta\right\|^2\leq\|x_0^\delta-x_*\|^2-\|x_{k_*}^\delta-x_*\|^2\leq\|x_0^\delta-x_*\|^2. \] We have the estimate \[
\sum\limits_{k=0}^{k_*-1}\left\|F(z_{k}^\delta)-y^\delta\right\|^2\leq\frac{ c_F^2}{\overline\mu\Psi^2}\|x_0^\delta-x_*\|^2, \] which yields the assertion. \end{proof} \end{prop}
\noindent Under the same assumptions as in Proposition \ref{proposition 4-2}, we have the following corollary.
\begin{corollary}\label{corollary 4-3}
The discrepancy principle (\ref{equation 3-10}) yields a finite stopping index $k_*=k_*(\delta,y^\delta)$ in Algorithm \ref{algorithm 3-2}. \begin{proof}
Assume that the discrepancy principle is not satisfied for any iteration index $k$, i.e., that $\|F(z_{k}^\delta)-y^\delta\|>\tau\delta$ holds for all $k\in \mathbb{N}$. The conclusion of Proposition \ref{proposition 4-2} then becomes \[
\sum\limits_{k=0}^{+\infty}\left\|F(z_{k}^\delta)-y^\delta\right\|^2\leq\frac{ c_F^2}{\overline\mu\Psi^2}\|x_0^\delta-x_*\|^2, \]
with $\|x_0^\delta-x_*\|\leq\rho$. We thus obtain \[
\sum\limits_{k=0}^{+\infty}\left\|F(z_{k}^\delta)-y^\delta\right\|^2\leq\frac{ c_F^2}{\overline\mu\Psi^2}\|x_0^\delta-x_*\|^2\leq\frac{ c_F^2}{\overline\mu\Psi^2}\rho^2<+\infty. \]
Consequently, the sequence $\left\{\|F(z_{k}^\delta)-y^\delta\|\right\}_{k\in \mathbb{N}}$ must be a null sequence, i.e., \begin{equation}\label{equation 4-12}
\lim\limits_{k\rightarrow+\infty}\|F(z_{k}^\delta)-y^\delta\|=0. \end{equation}
This contradicts our assumption that $\tau\delta<\|r_k^\delta\|$ for all $k\in \mathbb{N}$. Hence, there must be a finite stopping index $k_*$ fulfilling the discrepancy principle (\ref{equation 3-10}). \end{proof} \end{corollary}
\noindent If we are given exact data $y^\delta=y$, i.e., $\delta=0$, then (\ref{equation 4-11}) becomes \begin{equation}\label{equation 4-13}
\sum\limits_{k=0}^{+\infty}\left\|F(z_{k})-y\right\|^2<+\infty, \end{equation} in the case $k_*=+\infty$. Otherwise, if the sum terminates after a finite number of steps, i.e., if $F(z_{k})= y$ for some $k$, then the iteration is terminated and a solution is found. So this is no restriction.
\noindent Combining (\ref{equation 4-13}) and the condition (\ref{equation 4-7}), we can obtain that \begin{equation}\label{equation 4-14}
\sum\limits_{k=0}^{+\infty}\lambda_{k}(\lambda_{k}+1)\|x_{k}-x_{k-1}\|^2<+\infty, \end{equation} from which it follows that \begin{equation}\label{equation 4-15}
\lim\limits_{k\rightarrow+\infty}\|F(z_{k})-y\|^2=0, \end{equation} and \begin{equation}\label{equation 4-16}
\lim\limits_{k\rightarrow+\infty}\lambda_{k}(\lambda_{k}+1)\|x_{k}-x_{k-1}\|^2=0. \end{equation} If we can prove that $z_k$ converges, then we obtain a solution of $F(x)=y$ as the limit of the iteration. To this end, we first show some intermediate results.
\begin{prop}\label{proposition 4-4}
Let $\{x_k\}_{k\in\mathbb{N} }$ be the iterative sequence generated by (\ref{equation 3-3}). Then $\|x_{k}-x_{*}\|$ converges to a constant $\varepsilon\geq0$, and there holds \[
\lim\limits_{k\rightarrow+\infty}\|z_{k}-x_{*}\|=\varepsilon. \] \begin{proof}
From Proposition \ref{proposition 4-2} it follows that $\{\left\|x_{k}-x_*\right\|\}_{k\in\mathbb{N}}$ is a bounded, monotonically decreasing sequence; hence $\left\|x_{k}-x_*\right\|$ converges to a constant $\varepsilon\geq0$. According to the definition (\ref{equation 3-1}), we have the inequality \[
\|z_{k}-x_{*}\|=\|x_{k}-x_{*}+\lambda_k(x_{k}-x_{k-1})\|\leq\|x_{k}-x_{*}\|+\lambda_k\|x_{k}-x_{k-1}\|, \] and from the estimate of (\ref{equation 3-7}), we can obtain \begin{equation}\label{equation 4-17}
\sqrt{\left\|x_{k+1}-x_{*}\right\|^2+\frac{(1-\eta)^2}{c_F^2}\|r_{k}\|^2}\leq\|z_{k}-x_{*}\|\leq\|x_{k}-x_{*}\|+\lambda_k\|x_{k}-x_{k-1}\|. \end{equation} Using (\ref{equation 4-15}) and (\ref{equation 4-16}) we can deduce that \begin{equation}\label{equation 4-18} \begin{array}{l}
\lim\limits_{k\rightarrow+\infty}\|F(z_{k})-y\|=0,\\
\lim\limits_{k\rightarrow+\infty}\lambda_{k}\|x_{k}-x_{k-1}\|=0. \end{array} \end{equation} Taking $k\rightarrow+\infty$ in (\ref{equation 4-17}) yields the assertion. \end{proof} \end{prop}
\begin{lemma}\label{lemma 4-5} (\cite[Lemma 2.7]{13}). Let $x_*\in B_{4\rho}(x_0)$ be a solution of $F(x)=y$ and $x_1,x_2\in B_{4\rho}(x_0)$. Then we get \begin{equation}\label{equation 4-19}
\left\|F'(x_1)(x_*-x_2)\right\|\leq
2(1+\eta)\left\|F(x_1)-y\right\|+(1+\eta)\left\|F(x_2)-y\right\|. \end{equation}
\end{lemma}
\begin{lemma}\label{lemma 4-6}
For the iterative sequences generated by Algorithm \ref{algorithm 3-2}, there hold
\[
x_k^\delta=x_0+\sum\limits_{i=0}^{k-1}\lambda_{i}^\delta(x_i^\delta-x_{i-1}^\delta)-\sum\limits_{i=0}^{k-1}\sum\limits_{s\in I_i^\delta}t_{i,s}^\delta u_{i,s}^\delta,
\]
and
\[
x_l^\delta-x_k^\delta=\sum\limits_{i=k}^{l-1}\lambda_{i}^\delta(x_i^\delta-x_{i-1}^\delta)-\sum\limits_{i=k}^{l-1}\sum\limits_{s\in I_i^\delta}t_{i,s}^\delta u_{i,s}^\delta,
\]
as well as
\[
x_l^\delta-x_{l-1}^\delta=-\sum\limits_{m=0}^{l-2}\sum\limits_{s\in I_m^\delta}(\mathop\prod\limits_{n=m+1}^{l-1}\lambda_{n}^\delta)t_{m,s}^\delta u_{m,s}^\delta-\sum\limits_{s\in I_{l-1}^\delta}t_{l-1,s}^\delta u_{l-1,s}^\delta.
\]
\begin{proof}
Obviously, the first two statements can be obtained from (\ref{equation 1-7}). The third equality can be proved by induction, for which we refer to the proof of Lemma 2.6 in \cite{13}. \end{proof} \end{lemma}
\begin{lemma}\label{lemma 4-7} According to the definitions in Algorithm \ref{algorithm 3-1}, there holds \begin{equation}\label{equation 4-20}
\lim\limits_{k\rightarrow+\infty}|t_{k,i}|\|F(z_{i})-y\|^2=0. \end{equation} \begin{proof} From Algorithm \ref{algorithm 3-1} and Proposition \ref{proposition 3-8}, we arrive at \[
\left\|\sum\limits_{i\in I_{k}}t_{k,i}u_{k,i}\right\|=\left\|x_{k+1}-z_k\right\|\leq4\rho, \] and the $\{u_{k,i}\}_{i\in I_k\setminus T_k}$ are linearly independent. Hence, there exists a positive constant $\alpha$ such that \[
\sum\limits_{i\in I_{k}}|t_{k,i}|\left\|u_{k,i}\right\|\leq\alpha\left\|\sum\limits_{i\in I_{k}}t_{k,i}u_{k,i}\right\|. \]
Therefore, $|t_{k,i}|\left\|u_{k,i}\right\|$ is bounded by a constant $C$ for $i\in I_{k}$, i.e., $|t_{k,i}|\leq\frac{C}{\left\|u_{k,i}\right\|}$.
Next, we turn to the proof of the conclusion. It follows from (\ref{equation 3-7}), the coupling condition (\ref{equation 4-7}) and Proposition \ref{proposition 4-2} that \[
\sum\limits_{k=0}^{+\infty}\frac{\left\|F(z_{i})-y\right\|^4}{\|u_{k,i}\|^2}\leq\frac{ 1}{\overline\mu\Psi^2}\|x_0-x_*\|^2. \] Thus, \[
\lim\limits_{k\rightarrow+\infty}\frac{\|F(z_{i})-y\|^2}{\|u_{k,i}\|}=0, \] which implies (\ref{equation 4-20}) holds as well. \end{proof} \end{lemma}
Now we state and prove the convergence of the TGSS method in the case of exact data, where we impose the additional condition \begin{equation}\label{equation 4-21}
\sum\limits_{k=0}^{+\infty}\lambda_{k}\|x_{k}-x_{k-1}\|<+\infty. \end{equation}
Since, under Assumption \ref{assumption 2-1}, $\{\|x_k-x_{k-1}\|\}$ is bounded by $2\rho$, a sufficient condition for (\ref{equation 4-21}) is given by \begin{equation}\label{equation 4-22} \sum\limits_{k=0}^{+\infty}\lambda_{k}<+\infty. \end{equation}
\begin{thm}\label{theorem 4-8} Let $N\geq1$ be a fixed integer and $k\in I_k$ for each iteration $k\in\mathbb{N}$ in Algorithm \ref{algorithm 3-1}. Assume that $k_*=k_*(0,y)=+\infty$ and that the combination parameters $\lambda_k$ satisfy (\ref{equation 4-1}), (\ref{equation 4-7}) with $\delta=0$ and (\ref{equation 4-21}). Then the iterates $\{z_k\}_{k\in \mathbb{N}}$ generated by Algorithm \ref{algorithm 3-1} converge to a solution $x_*\in B_{\rho}(x_0)\cap\mathcal{D}(F)$ of $F(x)=y$. If in addition $\mathcal{N}\left(F'(x^\dagger)\right)\subset\mathcal{N}\left(F'(x)\right)$ for all $x\in B_{4\rho}(x^\dagger)$, then the sequence $\{z_k\}_{k\in \mathbb{N}}$ converges to $x^\dagger$ as $k\rightarrow+\infty$. \begin{proof} Motivated by the proof of Theorem 2.14 in \cite{25}, we will show that $\{z_k\}_{k\in\mathbb{N} }$ is a Cauchy sequence. Define \begin{equation}\label{equation 4-23} e_k:=z_k-x_*. \end{equation}
It is equivalent to show that the sequence $\{e_k\}_{k\in\mathbb{N}}$ is a Cauchy sequence. We have seen in the proof of Proposition \ref{proposition 4-4} that $\|e_k\|$ converges to $\varepsilon$. Given $j\geq k$, we choose some integer $l=l(k,j)\in \{k,k+1,..., j\}$ such that \begin{equation}\label{equation 4-24}
\|F(z_{l})-y\|\leq\|F(z_{i})-y\|,~~~~\forall k\leq i\leq j. \end{equation} There holds \[
\|e_j-e_k\|\leq\|e_j-e_l\|+\|e_l-e_k\| \] as well as \begin{equation*} \begin{array}{l}
\|e_j-e_l\|^2=2\langle e_l-e_j,e_l\rangle+\|e_j\|^2-\|e_l\|^2,\\
\|e_l-e_k\|^2=2\langle e_l-e_k,e_l\rangle+\|e_k\|^2-\|e_l\|^2. \end{array} \end{equation*}
Letting $k\rightarrow+\infty$, the last two terms on the right-hand sides converge to $\varepsilon^2-\varepsilon^2=0$. In order to prove $\|e_j-e_k\|\rightarrow0$, we need to show that $|\langle e_l-e_j,e_l\rangle|\rightarrow0$ and $|\langle e_l-e_k,e_l\rangle|\rightarrow0$ as $j,k$ tend to infinity.
\noindent We first consider the term \begin{equation*} \begin{array}{l}
|\langle e_l-e_k,e_l\rangle|=|\langle z_l-z_k,e_l\rangle|=|\langle x_l-x_k+\lambda_{l}(x_l-x_{l-1})-\lambda_{k}(x_k-x_{k-1}),e_l\rangle|\\
~~~~~~~~~~~~~~~~\;\leq|\langle x_l-x_k,e_l\rangle|+\lambda_{l}|\langle x_l-x_{l-1},e_l\rangle|+\lambda_{k}|\langle x_k-x_{k-1},e_l\rangle|\\
~~~~~~~~~~~~~~~~\;\leq|\langle x_l-x_k,e_l\rangle|+\lambda_{l}\|x_l-x_{l-1}\|\|e_l\|+\lambda_{k}\|x_k-x_{k-1}\|\|e_l\|. \end{array} \end{equation*} Combining with (\ref{equation 4-18}), we get that \[
\lim\limits_{k\rightarrow+\infty}\left(\lambda_{l}\|x_l-x_{l-1}\|\|e_l\|+\lambda_{k}\|x_k-x_{k-1}\|\|e_l\|\right)=0. \] Now consider \begin{equation*} \begin{array}{l}
|\langle x_l-x_k,e_l\rangle|=\left|\left\langle \sum\limits_{i=k}^{l-1}\lambda_{i}(x_i-x_{i-1})-\sum\limits_{i=k}^{l-1}\sum\limits_{s\in I_i}t_{i,s}u_{i,s},e_l\right\rangle\right|\\
~~~~~~~~~~~~~~~~~\leq\sum\limits_{i=k}^{l-1}\lambda_{i}\left|\left\langle x_i-x_{i-1},e_l\right\rangle\right|+\sum\limits_{i=k}^{l-1}\sum\limits_{s\in I_i}|t_{i,s}|\left|\left\langle u_{i,s},e_l\right\rangle\right|. \end{array} \end{equation*}
Using the boundedness of $\|e_l\|$ and (\ref{equation 4-21}), we get the estimate \[
\sum\limits_{i=k}^{l-1}\lambda_{i}\left|\left\langle x_i-x_{i-1},e_l\right\rangle\right|\leq\sum\limits_{i=k}^{l-1}\lambda_{i}\| x_i-x_{i-1}\|\|e_l\|\leq\sum\limits_{i=k}^{+\infty}\lambda_{i}\| x_i-x_{i-1}\|\|e_l\|<+\infty. \] Then it can be deduced that \begin{equation}\label{equation 4-25}
\lim\limits_{k\rightarrow+\infty}\left(\sum\limits_{i=k}^{l-1}\lambda_{i}\left|\left\langle x_i-x_{i-1},e_l\right\rangle\right|\right)=0. \end{equation} According to Lemma \ref{lemma 4-5}, Lemma \ref{lemma 4-6} and Lemma \ref{lemma 4-7}, we have \begin{equation*} \begin{array}{l}
\sum\limits_{i=k}^{l-1}\sum\limits_{s\in I_i}|t_{i,s}|\left|\left\langle u_{i,s},e_l\right\rangle\right|=\sum\limits_{i=k}^{l-1}\sum\limits_{s\in I_i}|t_{i,s}|\left|\left\langle F(z_{s})-y,F'(z_s)(z_l-x_*)\right\rangle\right|\\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~\leq\sum\limits_{i=k}^{l-1}\sum\limits_{s\in I_i}|t_{i,s}|\| F(z_{s})-y\|\|F'(z_s)(z_l-x_*)\|\\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~\leq2(1+\eta)\sum\limits_{i=k}^{l-1}\sum\limits_{s\in I_i}|t_{i,s}|\| F(z_{s})-y\|^2\\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+(1+\eta)\sum\limits_{i=k}^{l-1}\sum\limits_{s\in I_i}|t_{i,s}|\| F(z_{s})-y\|\| F(z_{l})-y\|\\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~\leq3(1+\eta)\sum\limits_{i=k}^{l-1}\sum\limits_{s\in I_i}|t_{i,s}|\| F(z_{s})-y\|^2. \end{array} \end{equation*} Note that $I_i$ is a finite set. Using (\ref{equation 4-20}) we get that \begin{equation}\label{equation 4-26}
\lim\limits_{k\rightarrow+\infty}\left(\sum\limits_{i=k}^{l-1}\sum\limits_{s\in I_i}|t_{i,s}|\left|\left\langle u_{i,s},e_l\right\rangle\right|\right)=0. \end{equation}
According to (\ref{equation 4-25}) and (\ref{equation 4-26}), we arrive at $|\langle x_l-x_k,e_l\rangle|\rightarrow0$, and it follows that $|\langle e_l-e_k,e_l\rangle|\rightarrow0$ as $k$ tends to infinity. Similarly, it can be shown that $|\langle e_l-e_j,e_l\rangle|\rightarrow0$. Thus, we obtain \[
\lim\limits_{k\rightarrow+\infty}\|e_j-e_k\|=0, \]
which shows that $\{e_k\}_{k\in\mathbb{N}}$ is a Cauchy sequence, and the same holds for $\{z_k\}_{k\in\mathbb{N}}$. Hence $\{z_k\}$ converges to a solution $x_*$ of $F(x)=y$, since $\|F(z_{k})-y\|\rightarrow0$ as $k\rightarrow+\infty$.
Next, we prove the second part of the theorem with the additional condition $\mathcal{N}\left(F'(x^\dagger)\right)\subset\mathcal{N}\left(F'(x)\right)$ for all $x\in B_{4\rho}(x^\dagger)$. According to the iterates in Algorithm \ref{algorithm 3-1}, we can get \begin{equation*} \begin{array}{l} z_{k+1}-z_k=x_{k+1}+\lambda_{k+1}(x_{k+1}-x_k)-z_k\\ ~~~~~~~~~~\;\;\;=-\sum\limits_{i\in I_k}t_{k,i}u_{k,i}+\lambda_{k+1}(x_{k+1}-x_k)\\ ~~~~~~~~~~\;\;\;=-(1+\lambda_{k+1})\sum\limits_{i\in I_k}t_{k,i}u_{k,i}+\lambda_{k+1}(z_{k}-x_k)\\ ~~~~~~~~~~\;\;\;=-(1+\lambda_{k+1})\sum\limits_{i\in I_k}t_{k,i}u_{k,i}+\lambda_{k+1}\lambda_{k}(x_{k}-x_{k-1}), \end{array} \end{equation*} thus \[ z_k-z_0=\sum\limits_{s=0}^{k-1}(z_{s+1}-z_{s})=\sum\limits_{s=0}^{k-1}\left(-(1+\lambda_{s+1})\sum\limits_{i\in I_s}t_{s,i}u_{s,i}+\lambda_{s+1}\lambda_{s}(x_{s}-x_{s-1})\right). \] Note that $(1+\lambda_{s+1})\sum\limits_{i\in I_s}t_{s,i}u_{s,i}\in \mathcal{R}\left(F'(z_i)^*\right)$ and since \[ \mathcal{R}\left(F'(z_i)^*\right)\subset\mathcal{N}\left(F'(z_i)\right)^\perp\subset \mathcal{N}\left(F'(x^\dagger)\right)^\perp~~for~all~ i\in \mathbb{N}, \] we arrive at \[ \sum\limits_{s=0}^{k-1}\left(-(1+\lambda_{s+1})\sum\limits_{i\in I_s}t_{s,i}u_{s,i}\right)\in \mathcal{N}\left(F'(x^\dagger)\right)^\perp. \] With the conclusion of Lemma \ref{lemma 4-6}, we have \[ \sum\limits_{s=0}^{k-1}\lambda_{s+1}\lambda_{s}(x_{s}-x_{s-1})\in \mathcal{N}\left(F'(x^\dagger)\right)^\perp, \] we conclude that \[ z_k-z_0\in \mathcal{N}\left(F'(x^\dagger)\right)^\perp~~for~all~ k\in \mathbb{N}, \] which is also valid for the limit of $z_k$, i.e., $x_*-x_0\in \mathcal{N}\left(F'(x^\dagger)\right)^\perp$. It follows from Lemma \ref{lemma 2-1} that $x^\dagger$ is the unique solution satisfying the above condition, then we obtain $z_k \rightarrow x^\dagger$ as $k\rightarrow+\infty$. \end{proof} \end{thm}
\begin{corollary}\label{corollary 4-9} Under the assumptions of Theorem \ref{theorem 4-8}, the sequence $x_k$ converges to $x_*$, where $x_*$ is the limit of $z_k$ as $k\rightarrow+\infty$. \begin{proof} According to the iterates in Algorithm \ref{algorithm 3-1} and (\ref{equation 3-7}), we get that \[
\left\|x_*-x_{k+1}\right\|^2\leq\left\|x_*-z_{k}\right\|^2, \] thus \[
\left\|x_*-x_{k+1}\right\|\rightarrow0~~as ~~k\rightarrow+\infty, \] which confirms the statement. \end{proof} \end{corollary}
\subsection{Regularity analysis}\label{subsection 4-2} \noindent We now show that Algorithm \ref{algorithm 3-2}, with the discrepancy principle (\ref{equation 3-10}) as the stopping rule, is a convergent regularization method. To this end, we first need to show that $x_k^\delta$ depends continuously on the data $y^\delta$.
\begin{lemma}\label{lemma 4-10}
Let $\{x_k\}$ and $\{z_k\}$ be the sequences generated by Algorithm \ref{algorithm 3-1} with the initial value $x_0^\delta=x_{-1}^\delta=x_0$. Accordingly, let $\{x_k^\delta\}$ and $\{z_k^\delta\}$
be generated by Algorithm \ref{algorithm 3-2} with the noisy data $y^\delta$, where $y^\delta$ satisfies $\|y^\delta-y\|\leq\delta$. Assume that \begin{equation}\label{equation 4-27} \lambda_{k}^\delta\rightarrow\lambda_{k}~~~~as~\delta\rightarrow0. \end{equation} Then \[ x_k^\delta\rightarrow x_k~~and~z_k^\delta\rightarrow z_k~~as~\delta\rightarrow0, \]
for every fixed integer $k\in\mathbb{N}$. \begin{proof} We prove the assertion by induction, closely following the corresponding proof of \cite[Lemma 4.10]{16}. From $\lambda_0^\delta=0$, we obtain $z_0^\delta=x_{0}^\delta=x_0$. By the definitions in (\ref{equation 3-2}) and (\ref{equation 3-9}) and the continuity of $F'(\cdot)$ on $\mathcal{D}(F)$, the quantities $z_0^\delta,w_{0,i}^\delta,u_{0,i}^\delta,\alpha_{0,i}^\delta$ and $\xi_{0,i}^\delta$ depend continuously on $y^\delta$, where $i\in I_0^\delta=\{0\}$. Since the norm in a Hilbert space is continuous, \[
x_1^\delta=\tilde{x}_1^\delta=z_0^\delta-\frac{\langle u_{0,i}^\delta,z_0^\delta \rangle-(\alpha_{0,i}^\delta+\xi_{0,i}^\delta)}{\|u_{0,i}^\delta\|^2}u_{0,i}^\delta \] depends continuously on $y^\delta$, i.e., \[x_1^\delta\rightarrow x_1,~~ z_1^\delta\rightarrow z_1,~~as~\delta\rightarrow0,\] with $\lambda_0^\delta=\lambda_0=0$.
Now, assume that $x_j^\delta$ and $z_j^\delta$ depend continuously on the data $y^\delta$ for all $j\leq k$. Since $F$ is continuously Fr\'{e}chet differentiable on $\mathcal{D}(F)$, it follows from Proposition \ref{proposition 3-8} that the search directions indexed by the finite set $I_j^\delta/T_k$ are linearly independent. Combining this with the strict convexity of (\ref{equation 2-5}), the coefficients $\{t_{j,i}^\delta\}$ are uniquely determined by (\ref{equation 2-6}) and depend continuously on $y^\delta$. Similar to the discussion for the case $k = 1$, we arrive at \[u_{k,i}^\delta\rightarrow u_{k,i},~~t_{k,i}^\delta\rightarrow t_{k,i},~~ as~\delta\rightarrow0,\] which yields that
\[
x_{k+1}^\delta=z_{k}^\delta-\sum\limits_{i\in I_{k}^\delta}t_{k,i}^\delta u_{k,i}^\delta,
\]
depends continuously on $y^\delta$, i.e.,
\[
x_{k+1}^\delta\rightarrow x_{k+1},~~ as~\delta\rightarrow0.
\]
According to the above conclusion and assumption (\ref{equation 4-27}), we have
\[
z_{k+1}^\delta\rightarrow z_{k+1},~~ as~\delta\rightarrow0,
\]
which yields the assertion. \end{proof} \end{lemma}
\begin{thm}\label{theorem 4-11} Let $k_*$ be chosen by the discrepancy principle (\ref{equation 3-10}). Assume that the coupling condition (\ref{equation 4-7}) holds for all $0\leq k\leq k_*$ and that the combination parameters $\lambda_k^\delta$ satisfy (\ref{equation 4-1}), (\ref{equation 4-21}) and (\ref{equation 4-27}). Then $z_{k_*}^\delta$ generated by Algorithm \ref{algorithm 3-2} converges to a solution $x_*\in B_\rho(x_0)\cap\mathcal{D}(F)$ of $F(x)=y$ as $\delta\rightarrow0$. In addition, if $\mathcal{N}\left(F'(x^\dagger)\right)\subset\mathcal{N}\left(F'(x)\right)$ for all $x\in B_{4\rho}(x^\dagger)$, then the sequence $z_{k_*}^\delta$ converges to $x^\dagger$ as $\delta\rightarrow0$. \begin{proof}
Inspired by the proof of \cite[Theorem 2.10]{13}, let $x_*$ be the limit point of the sequence $\{z_k\}$ given by Algorithm \ref{algorithm 3-1}. From Corollary \ref{corollary 4-9}, $x_k$ also converges to $x_*$. Let $\delta_n\rightarrow0$ as $n\rightarrow+\infty$, let $y_n:= y^{\delta_n}$ be a sequence of noisy data satisfying $\|y-y_n\|\leq \delta_n$, and let $k_n:=k_*(\delta_n,y_n)$ be the stopping index determined by the discrepancy principle. There are two cases. First, if $k$ is a finite accumulation point of $k_n$, we may assume that $k_n=k$ for all $n\in \mathbb{N}$. Then it follows from the discrepancy principle that \[
\|F(z_k^{\delta_n})-y_n\|\leq\tau\delta_n. \] From Lemma \ref{lemma 4-10}, $z_k^{\delta}$ depends continuously on $y^\delta$ for a fixed $k\in\mathbb{N}$. Taking the limit $n\rightarrow+\infty$ in the above inequality, we conclude that \[ z_k^{\delta_n}\rightarrow z_k,~~F(z_k^{\delta_n})\rightarrow F(z_k)=y,~~as~~n\rightarrow+\infty. \] This proves that the iteration terminates with $z_k=x_*$ and $z_k^{\delta_n}\rightarrow x_*$ as $\delta_n\rightarrow0$.\\ For the second case, assume $k_n\rightarrow+\infty$ as $n\rightarrow+\infty$. For any $k$ with $k_n>k+1$, Proposition \ref{proposition 4-2} and (\ref{equation 4-1}) yield that \begin{equation*} \begin{array}{l}
\|z_{k_n}^{\delta_n}-x_*\|=\|x_{k_n}^{\delta_n}+\lambda_{k_n}^{\delta_n}(x_{k_n}^{\delta_n}-x_{k_n-1}^{\delta_n})-x_*\|\\
~~~~~~~~~~~~~~\leq\|x_{k_n}^{\delta_n}-x_*\|+\lambda_{k_n}^{\delta_n}\|x_{k_n}^{\delta_n}-x_*\|+\lambda_{k_n}^{\delta_n}\|x_{k_n-1}^{\delta_n}-x_*\|\\
~~~~~~~~~~~~~~\leq3\|x_{k}^{\delta_n}-x_*\|\\
~~~~~~~~~~~~~~\leq3\|x_{k}^{\delta_n}-x_{k}\|+3\|x_{k}-x_*\|. \end{array} \end{equation*}
Fix some $\varepsilon>0$. According to Proposition \ref{proposition 4-2} and Proposition \ref{proposition 4-4}, there exists a fixed $k=k(\varepsilon)$ such that $\|x_{k}-x_*\|\leq\varepsilon/6$. Furthermore, since the iterations in Algorithm \ref{algorithm 3-2} depend continuously on the data for fixed $k$, we can find an $n(\varepsilon,k)$ such that $\|x_{k}^{\delta_n}-x_{k}\|\leq\varepsilon/6$ for all $n>n(\varepsilon,k)$. Hence, choosing $n$ sufficiently large so that $k_n > k + 1$, we conclude that \[
\|z_{k_n}^{\delta_n}-x_*\|\leq3\|x_{k}^{\delta_n}-x_{k}\|+3\|x_{k}-x_*\| \leq3\frac{\varepsilon}{6}+3\frac{\varepsilon}{6}\leq\varepsilon, \] and therefore $z_{k_n}^{\delta_n}\rightarrow x_*$ as $n\rightarrow+\infty$, which confirms the first part of the statement. In addition, if $\mathcal{N}\left(F'(x^\dagger)\right)\subset\mathcal{N}\left(F'(x)\right)$ for all $x\in B_{4\rho}(x^\dagger)$, then $x_*$ can be chosen as $x_*=x^\dagger$. It follows from Theorem \ref{theorem 4-8} that $z_k\rightarrow x^\dagger$ and also $x_k\rightarrow x^\dagger$. The claim then follows by analogy with the previous discussion, which yields the assertion. \end{proof} \end{thm}
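To illustrate how the discrepancy-principle stopping rule (\ref{equation 3-10}) terminates a two-point gradient iteration in practice, the following Python sketch applies it to a toy linear problem $F(x)=Ax$. This is a simplification under our own assumptions: the matrix $A$, the stepsize $0.2$ and the noise direction are hypothetical choices, a plain gradient step replaces the paper's subspace correction, and the combination parameter is the Nesterov-type $\lambda_k=k/(k+3)$.

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 0.5]])       # toy forward operator (assumption)
x_true = np.array([1.0, -1.0])
y = A @ x_true                               # exact data
delta, tau = 1e-2, 2.8                       # noise level; tau as in Section 5
y_delta = y + delta * np.array([0.6, -0.8])  # ||y_delta - y|| = delta

x_prev = x_curr = np.zeros(2)
x_dp = None
for k in range(5000):
    lam = k / (k + 3.0)                      # Nesterov-type combination parameter
    z = x_curr + lam * (x_curr - x_prev)     # two-point extrapolation step
    residual = A @ z - y_delta
    if np.linalg.norm(residual) <= tau * delta:   # discrepancy principle (3-10)
        x_dp = z                             # accepted iterate z_{k_*}^delta
        break
    x_prev, x_curr = x_curr, z - 0.2 * (A.T @ residual)  # gradient step at z
```

Since $A$ is invertible, the accepted iterate satisfies $\|x_{\mathrm{dp}}-x^\dagger\|\leq\|A^{-1}\|(\tau+1)\delta$, i.e., the reconstruction error is of the order of the noise level, in line with the regularization property proved above.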
\subsection{DBTS: the choice of $\lambda_k^\delta$}\label{subsection 4-3} \noindent In this subsection we discuss the choice of $\lambda_k^\delta$, which leads to a convergent regularization method and also promotes the acceleration effect.
We have already briefly discussed the choice of the combination parameter in Remark \ref{remark 4-1}. However, under those choices $\lambda_k^\delta$ decreases to $0$ as $\delta\rightarrow0$, and there is no corresponding two-point gradient acceleration effect. Therefore, in order to satisfy (\ref{equation 4-7}) and (\ref{equation 4-21}) as $\delta\rightarrow0$, we introduce the discrete backtracking search (DBTS) algorithm proposed in \cite{13}. Define a function $q:\mathbb{R}_0^+\rightarrow\mathbb{R}_0^+$ which satisfies \begin{equation}\label{equation 4-28}
q(m_1)\leq q(m_2),~~\forall m_1>m_2,~~\sum\limits_{k=0}^{+\infty}q(k)<+\infty. \end{equation} Now we introduce the DBTS algorithm in detail.
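For concreteness, the following Python sketch checks condition (\ref{equation 4-28}) numerically for the choice $q(i)=4/i^{1.1}$ used in the experiments of Section \ref{section-5}; the extension $q(0):=q(1)$ is our own assumption, made only so that $q$ is defined at $0$.

```python
def q(i):
    # q(i) = 4 / i^{1.1}, the choice used in Section 5;
    # q(0) := q(1) is our own extension (see lead-in)
    return 4.0 / max(i, 1) ** 1.1

# monotonicity: q(m1) <= q(m2) whenever m1 > m2
monotone = all(q(i + 1) <= q(i) for i in range(1, 2000))

# summability: by the integral test, sum_{k>=1} 4/k^{1.1} <= 4 + 40 = 44,
# so every partial sum must stay below that bound
partial_sum = sum(q(k) for k in range(1, 100000))
```

The $p$-series with $p=1.1>1$ converges, although slowly, which is exactly the kind of summable but slowly decaying weight that keeps the combination parameters from vanishing too quickly.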
\begin{alg}\label{algorithm 4-1} \rm(\textbf{DBTS algorithm for calculating the combination parameter $\lambda_k^\delta$, $k>1$.})
\textbf{Given:}~$x_k^\delta$,$x_{k-1}^\delta$,$\tau,\delta,\Psi,\mu,c_F,\alpha,y^\delta,F,q: \mathbb{R}_0^+\rightarrow\mathbb{R}_0^+,i_{k-1}\in\mathbb{N},j_{\max}\in\mathbb{N}.$
\textbf{Calculate}~$\|x_k^{\delta}-x_{k-1}^{\delta}\|$ and define
\[{\beta _k}\left( i \right) = \min \left\{ {\frac{{q\left( i \right)}}{{\left\| {x_k^\delta - x_{k - 1}^\delta } \right\|}},\frac{k}{k+\alpha}} \right\},~~\alpha\geq3.\]
\textbf{For}~{$ j=1,\cdots, j_{\max}$}.
~~~~~~Set $\lambda _k^\delta = {\beta_k}\left( {{i_{k - 1}} + j} \right)$.
~~~~~~Calculate $z_k^\delta = x_k^\delta+\lambda_k^\delta(x_k^{\delta}-x_{k-1}^{\delta})$.
~~~~~~{\bf If} {$\|y^{\delta}-F(z_k^\delta)\|\leq \tau \delta$}
~~~~~~~~~~${i_k} = {i_{k - 1}} + j,$
~~~~~~~~~~${\bf break}$.
~~~~~~{\bf Else if}~~~{$\lambda _k^\delta \left( {\lambda _k^\delta + 1} \right){\left\| {x_k^\delta - x_{k - 1}^\delta } \right\|^2} \le \frac{\Psi^2}{ \mu c_F^2} {\left\| { F\left( {z_k^\delta } \right)-y^\delta} \right\|^2}$}.
~~~~~~~~~~${i_k} = {i_{k - 1}} + j,$
~~~~~~~~~~${\bf break}$,
~~~~~~{\bf Else if}~~~${i_k} = {i_{k - 1}} + {j_{\max }}$.
~~~~~~~~~~Calculate $\lambda_k^\delta$ by (\ref{equation 4-9}).
~~~~~${\bf End~~If}$
${\bf End ~~For}$\\
{ \bf Output:} $\lambda_k^\delta$, $i_k$. \end{alg}
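The selection loop of Algorithm \ref{algorithm 4-1} can be sketched in Python as follows. This is only a schematic rendering under our own simplifications: $F$ is a generic callable, all norms are Euclidean, the function name \texttt{dbts\_lambda} is hypothetical, and the fallback formula labelled (\ref{equation 4-9}) is reproduced from the expression appearing in (\ref{equation 4-38}).

```python
import numpy as np

def dbts_lambda(F, y_delta, x_k, x_km1, k, i_prev, q,
                tau, delta, Psi, mu, c_F, alpha=3, j_max=1):
    """Schematic DBTS selection of the combination parameter lambda_k^delta."""
    d = np.linalg.norm(x_k - x_km1)
    if d == 0.0:
        return 0.0, i_prev + 1
    cap = k / (k + alpha)                      # the cap k/(k+alpha), alpha >= 3
    for j in range(1, j_max + 1):
        lam = min(q(i_prev + j) / d, cap)      # beta_k(i_{k-1} + j)
        z = x_k + lam * (x_k - x_km1)
        res = np.linalg.norm(F(z) - y_delta)
        if res <= tau * delta:                 # first "If" branch
            return lam, i_prev + j
        if lam * (lam + 1) * d**2 <= (Psi**2 / (mu * c_F**2)) * res**2:
            return lam, i_prev + j             # first "Else if" branch
    # fallback: formula (4-9) as it appears in (4-38)
    lam = min(np.sqrt((Psi * tau * delta)**2 / (mu * c_F**2 * d**2) + 0.25) - 0.5,
              cap)
    return lam, i_prev + j_max
```

Calling this with a toy operator such as the identity $F(x)=x$ and the parameter values of Section \ref{section-5} shows the typical behaviour: when the residual is still well above $\tau\delta$, the acceptance test of the first ``Else if'' branch holds and the cap $k/(k+\alpha)$ is returned.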
\begin{remark}\label{remark 4-3} Compared with the reference \cite{13}, we make three modifications. The first is the definition of $\beta _k$, in which we set $\beta _k(i)=k/(k+\alpha)$ instead of $\beta _k(i)=1$; this modification speeds up convergence by making use of the Nesterov acceleration scheme. The second is in the second ``\textbf{Else if}'' part, where we set $\lambda _k^\delta=0$ by (\ref{equation 4-9}); this modification can affect the convergence speed. The third is in the same position as above: we change the original assignment statement to a judgment statement for a better loop. \end{remark}
First, we need to verify that the combination parameter $\lambda _k^\delta$ selected by Algorithm \ref{algorithm 4-1} satisfies conditions (\ref{equation 4-7}) and (\ref{equation 4-21}). It follows from Algorithm \ref{algorithm 4-1} that when $\|y^{\delta}-F(z_k^\delta)\|> \tau \delta$, i.e., in the two ``\textbf{Else if}'' cases, $\lambda _k^\delta$ obviously satisfies condition (\ref{equation 4-7}).
In the exact data case, Algorithm \ref{algorithm 4-1} yields either $\lambda _k=\beta_k(i_k)$ or $\lambda _k=0$. It follows from the definition that $i_k\geq i_{k-1}+1$ and $i_k\geq k$. Thus we arrive at \[
\sum\limits_{k=0}^{+\infty}\lambda_{k}\|x_{k}-x_{k-1}\|\leq\sum\limits_{k=0}^{+\infty}\beta_k(i_k)\|x_{k}-x_{k-1}\| \leq\sum\limits_{k=0}^{+\infty}q(i_k)\leq\sum\limits_{k=0}^{+\infty}q(k)<+\infty, \] therefore condition (\ref{equation 4-21}) holds.
According to the definition of $\lambda_k^\delta$ in Algorithm \ref{algorithm 4-1}, we cannot use Theorem \ref{theorem 4-11} to verify the regularization property of the TGSS method, since the combination parameter does not depend continuously on $y^\delta$. In fact, $\lambda_k^\delta$ could have multiple cluster values as $\delta\rightarrow0$, which, applied to Algorithm \ref{algorithm 3-1}, may generate a variety of iterative sequences in the noise-free case. We use $\Gamma_{\eta, q}(x_0)$ to denote the set of all such iterative sequences $\{x_k\},\{z_k\}\subset\mathcal{X}$. Without loss of generality, the combination parameters $\lambda_k$ in the noise-free case are chosen to satisfy \begin{equation}\label{equation 4-29}
\lambda_{k}(\lambda_{k}+1)\left\|x_{k}-x_{k-1}\right\|^2
\leq\frac{\Psi^2}{ \mu c_F^2}\|F(z_{k})-y\|^2, \end{equation} as well as \begin{equation}\label{equation 4-30}
0\leq\lambda_{k}\leq\min \left\{ {\frac{{q\left( i_k \right)}}{{\left\| {x_k - x_{k - 1} } \right\|}},\frac{k}{k+\alpha}} \right\}, \end{equation} where the sequence $\{i_k\}$ of integers in the DBTS method satisfies $i_0=0$ and $1\leq i_k-i_{k-1}\leq j_{\max}$ for all $k$.\\ For a given pair of sequences $\{(x_k)\},\{(z_k)\}\in \Gamma_{\eta, q}(x_0)$, the corresponding combination parameters $\lambda_{k}$ satisfy conditions (\ref{equation 4-7}) and (\ref{equation 4-21}). Thus, the convergence of $\{x_k\},\{z_k\}$ follows from Theorem \ref{theorem 4-8}. The following discussion establishes the corresponding uniform convergence results.
\begin{prop}\label{proposition 4-12}
Assume that all the conditions in Theorem \ref{theorem 4-8} hold. If $\mathcal{N}\left(F'(x^\dagger)\right)\subset\mathcal{N}\left(F'(x)\right)$ for all $x\in B_{4\rho}(x^\dagger)$, then for every $\varepsilon>0$ there exists an integer $k(\varepsilon)$ such that, for any sequence $\{(x_k)\}\in \Gamma_{\eta, q}(x_0)$, we have $\|x^\dagger-x_k\|^2<\varepsilon$ for all $k\geq k(\varepsilon)$. \begin{proof} This proof closely follows the corresponding proof of \cite[Proposition 3.9]{14}. By contradiction, suppose that the opposite is true, i.e., there exists an $\varepsilon_0>0$ such that
\begin{equation}\label{equation 4-31}
\|x^\dagger-x_{k_l}^{(l)}\|^2\geq\varepsilon_0,
\end{equation}
for any $l\geq1$ with $\{(x_k^{(l)})\}\in \Gamma_{\eta, q}(x_0)$ and $k_l>l$.
We will establish the following system. For each $k=0,1,\cdots$, let $\{l_{k,n}\}$ be a strictly increasing subsequence of positive integers and let $\{(\hat {x}_k)\}\subset\mathcal{X}$, satisfying the following conditions:
\begin{description}
\item[(i)] $\{(\hat {x}_k)\}\in\Gamma_{\eta, q}(x_0)$.
\item[(ii)] For each fixed $k$, we have $x_k^{(l_{k,n})}\rightarrow\hat {x}_k$ and $F(x_k^{(l_{k,n})})\rightarrow F(\hat {x}_k)$ as $n\rightarrow+\infty$.
\end{description}
Assuming that the above system is available, we deduce a contradiction. According to (i), it follows from Theorem \ref{theorem 4-8} that $\|x^\dagger-\hat {x}_k\|^2\rightarrow0$ as $k\rightarrow+\infty$. Thus we can find a large integer $\hat {k}$ such that
\[
\|x^\dagger-\hat {x}_{\hat {k}}\|^2<\varepsilon_0/2.
\]
Then we have \begin{equation}\label{equation 4-32} \begin{array}{l}
\varepsilon_0/2>\left(\left\|x^\dagger-\hat {x}_{\hat {k}}\right\|^2-\left\|x^\dagger- x_{\hat {k}}^{(l_{\hat {k},n})}\right\|^2\right)+\left\|x^\dagger- x_{\hat {k}}^{(l_{\hat {k},n})}\right\|^2\\
~~~~~~=\left\langle x^\dagger-\hat {x}_{\hat {k}},x_{\hat {k}}^{(l_{\hat {k},n})}-\hat {x}_{\hat {k}}\right\rangle+\left\langle x^\dagger-x_{\hat {k}}^{(l_{\hat {k},n})},x_{\hat {k}}^{(l_{\hat {k},n})}-\hat {x}_{\hat {k}}\right\rangle+\left\|x^\dagger- x_{\hat {k}}^{(l_{\hat {k},n})}\right\|^2. \end{array} \end{equation} From property (ii), we arrive at \[ \left\langle x^\dagger-\hat {x}_{\hat {k}},x_{\hat {k}}^{(l_{\hat {k},n})}-\hat {x}_{\hat {k}}\right\rangle+\left\langle x^\dagger-x_{\hat {k}}^{(l_{\hat {k},n})},x_{\hat {k}}^{(l_{\hat {k},n})}-\hat {x}_{\hat {k}}\right\rangle\rightarrow0~~as~n\rightarrow+\infty. \] Then pick $\hat{n}$ with $\hat{l}:=l_{\hat{k},\hat{n}}$ satisfying \[ \left\langle x^\dagger-\hat {x}_{\hat {k}},x_{\hat {k}}^{(\hat{l})}-\hat {x}_{\hat {k}}\right\rangle+\left\langle x^\dagger-x_{\hat {k}}^{(\hat{l})},x_{\hat {k}}^{(\hat{l})}-\hat {x}_{\hat {k}}\right\rangle\geq-\varepsilon_0/2. \] Thus, by (\ref{equation 4-32}) we obtain \[
\left\|x^\dagger- x_{\hat {k}}^{(\hat{l})}\right\|^2<\varepsilon_0. \]
Note that $k_{\hat{l}}>\hat{l}=l_{\hat{k},\hat{n}}\geq\hat{k}$. Thus, it follows from the monotonicity of $\left\|x^\dagger- x_{k}^{(\hat{l})}\right\|^2$ with respect to $k$ that \[
\left\|x^\dagger- x_{k_{\hat{l}}}^{(\hat{l})}\right\|^2\leq\left\|x^\dagger- x_{\hat{k}}^{(\hat{l})}\right\|^2<\varepsilon_0. \]
This leads to a contradiction to (\ref{equation 4-31}) with $l=\hat{l}$.
Next, we turn to the construction of $\{l_{k,n}\}$ and $(\hat{x}_k)$ for each $k=0,1,\cdots,$ so that (i) and (ii) hold. We adopt a diagonal argument. For $k=0$, let $(\hat{x}_0)=(x_0)$ and $l_{0,n}=n$ for all $n$. Since $x_0^{(n)}=x_0$, condition (ii) holds automatically for $k=0$.
Assume that we have already constructed $\{l_{k,n}\}$ and $(\hat{x}_k)$ for all $0\leq k\leq m$. We now focus on the definition of $\{l_{m+1,n}\}$ and $(\hat{x}_{m+1})$. Consider the combination parameter $\lambda_m^{(l_{m,n})}$ and the integer $i_m^{(l_{m,n})}$ involved in the construction of $\left(x_{m+1}^{(l_{m,n})}\right)$. We have \[ 0\leq\lambda_m^{(l_{m,n})}\leq\frac{m}{m+\alpha}~~~~and~~~~0\leq i_m^{(l_{m,n})}\leq mj_{\max}~~\forall~n, \] where the constant $\alpha\geq3$. Choose a subsequence of $\{l_{m,n}\}$, denoted by $\{l_{m+1,n}\}$, which satisfies \begin{equation}\label{equation 4-33} \lim\limits_{n\rightarrow+\infty}\lambda_m^{(l_{m+1,n})}=\hat{\lambda}_m~~~and~~~i_m^{(l_{m+1,n})}=\hat{i}_m~~\forall~ n, \end{equation} for some number $0\leq\hat{\lambda}_m\leq m/(m+\alpha)$ and some integer $\hat{i}_m$. Define \[ \hat{z}_m=\hat{x}_m+\hat{\lambda}_m(\hat{x}_m-\hat{x}_{m-1}). \] It follows from the induction hypothesis and (\ref{equation 4-33}) that \begin{equation}\label{equation 4-34} z_m^{(l_{m+1,n})}\rightarrow\hat{z}_m~~as~~n\rightarrow+\infty. \end{equation} Using (\ref{equation 4-33}), the continuity of $F, F'$, and the discussion in the proof of Lemma \ref{lemma 4-10}, there holds \begin{equation}\label{equation 4-35} t_{m,i}^{(l_{m+1,n})}\rightarrow\hat{t}_{m,i}~~as~~n\rightarrow+\infty,~~~\forall i\in I_m. \end{equation} Define \[ \hat{x}_{m+1}=\hat{z}_{m}-\sum\limits_{i\in I_{m}}\hat{t}_{m,i} F'(\hat{z}_{m})^*(F(\hat{z}_{m})-y). \] According to (\ref{equation 4-34}) and (\ref{equation 4-35}), it follows that \begin{equation}\label{equation 4-36} \lim\limits_{n\rightarrow+\infty}x_{m+1}^{(l_{m+1,n})}=\hat{x}_{m+1}. \end{equation} Thus we complete the construction of $\{l_{m+1,n}\}$ as well as $(\hat{x}_{m+1})$.
We also need to prove that $\hat{\lambda}_m$ satisfies the requirements needed to guarantee that the generated sequence is indeed in $\Gamma_{\eta, q}(x_0)$. It follows from the definition of the sequences in $\Gamma_{\eta, q}(x_0)$ that \[
\lambda_{m}^{(l_{m+1,n})}(\lambda_{m}^{(l_{m+1,n})}+1)\left\|x_{m}^{(l_{m+1,n})}-x_{m-1}^{(l_{m+1,n})}\right\|^2
\leq\frac{\Psi^2}{\mu c_F^2}\|F(z_{m}^{(l_{m+1,n})})-y\|^2, \] as well as \[
0\leq\lambda_{m}^{(l_{m+1,n})}\leq\min \left\{ {\frac{{q\left( i_m^{(l_{m+1,n})} \right)}}{{\left\| {x_m^{(l_{m+1,n})} - x_{m -1}^{(l_{m+1,n})} } \right\|}},\frac{m}{m+\alpha}} \right\}, \] with $1\leq i_m^{(l_{m+1,n})}-i_{m-1}^{(l_{m+1,n})}\leq j_{\max}$ for all $n$. Combining with (\ref{equation 4-33}), (\ref{equation 4-34}) and (\ref{equation 4-35}), as $n\rightarrow+\infty$ in the above two inequalities we can conclude that \[
\hat{\lambda}_{m}(\hat{\lambda}_{m}+1)\left\|\hat{x}_{m}-\hat{x}_{m-1}\right\|^2
\leq\frac{\Psi^2}{\mu c_F^2}\|F(\hat{z}_{m})-y\|^2, \] and \[
0\leq\hat{\lambda}_{m}\leq\min \left\{ {\frac{{q\left( \hat{i}_m \right)}}{{\left\| {\hat{x}_m - \hat{x}_{m -1}} \right\|}},\frac{m}{m+\alpha}} \right\}. \] Assume that $\{l_{m,n}\}$ was chosen so that $i_{m-1}^{(l_{m,n})}=\hat{i}_{m-1}$ for all $n$ and some integer $\hat{i}_{m-1}$. Since $\{l_{m+1,n}\}$ is a subsequence of $\{l_{m,n}\}$, it follows that $1\leq\hat{i}_{m}-\hat{i}_{m-1}= i_m^{(l_{m+1,n})}-i_{m-1}^{(l_{m+1,n})}\leq j_{\max}$. Since $i_0^{(l)}=0$ for all $l$, we have $\hat{i}_0=0$. Therefore, the proof is complete. \end{proof} \end{prop}
\begin{lemma}\label{lemma 4-13}
Let $\{y^{\delta_n}\}$ be a sequence of noisy data satisfying $\|y^{\delta_n}-y\|\leq\delta_n$, where $\delta_n\rightarrow0$ as $n\rightarrow+\infty$, and let the combination parameters $\{\lambda_k^{\delta_n}\}$ be defined by the DBTS method with $i_0^{\delta_n}=0$. Then, for any fixed integer $k\geq0$, there exists a sequence $\{(x_i)\}\in \Gamma_{\eta, q}(x_0)$ such that \[ x_i^{\delta_n}\rightarrow x_i,~~z_i^{\delta_n}\rightarrow z_i~~and ~~F(x_i^{\delta_n})\rightarrow F(x_i)~~as~n\rightarrow+\infty \] for all $0\leq i\leq k$. \begin{proof}
We prove the assertion by induction on $k$. For $k=0$, we have $x_0^{\delta_n}=x_0$, $\lambda_0^{\delta_n}=0$, and hence $F(x_0^{\delta_n})\rightarrow F(x_0)$ and $z_0^{\delta_n}=z_0=x_0$. Now, assume that the assertion is true for $k=m$, i.e., for all $0\leq i\leq m$ there exists a sequence $\{(x_i)\}\in \Gamma_{\eta, q}(x_0)$ which satisfies \[ x_i^{\delta_n}\rightarrow x_i,~~z_i^{\delta_n}\rightarrow z_i~~and ~~F(x_i^{\delta_n})\rightarrow F(x_i)~~as~n\rightarrow+\infty. \]
Next, we show that the result also holds for $i=m+1$. Without loss of generality, we obtain a sequence from $\Gamma_{\eta, q}(x_0)$ by keeping the first $m+1$ terms of $\{(x_i)\}$ and modifying the remaining terms. Only $x_{m+1}$ needs to be redefined, since we can apply the DBTS method with $\lambda_i=0$ for $i\leq m+1$ to generate the remaining terms.
Furthermore, the combination parameter $\lambda_m^{\delta_n}$ generated by Algorithm \ref{algorithm 4-1} such that \begin{equation}\label{equation 4-37}
\lambda_m^{\delta_n}(\lambda_m^{\delta_n}+1)\left\|x_m^{\delta_n}-x_{m-1}^{\delta_n}\right\|^2
\leq\frac{\Psi^2}{\mu c_F^2}\|F(z_m^{\delta_n})-y^{\delta_n}\|^2, \end{equation} as well as \begin{equation}\label{equation 4-38} \begin{array}{l}
0\leq\lambda_m^{\delta_n}\leq\min \left\{ {\frac{{q\left( i_m^{\delta_n} \right)}}{{\left\| {x_m^{\delta_n} - x_{m-1}^{\delta_n}} \right\|}},\frac{m}{m+\alpha}} \right\}~or\\
\lambda_m^{\delta_n}=\min\left\{\sqrt{\frac{(\Psi\tau\delta_n)^2}{\mu c_F^2\|{x_m^{\delta_n}-x_{m-1}^{\delta_n}}\|^2}+\frac{1}{4}}-\frac{1}{2},\frac{m}{m+\alpha}\right\} \end{array} \end{equation} with $1\leq i_m^{\delta_{n}}-i_{m-1}^{\delta_{n}}\leq j_{\max}$. By passing to a subsequence of $\{y^{\delta_n}\}$ if necessary, we obtain \begin{equation}\label{equation 4-39} \lim\limits_{n\rightarrow+\infty}\lambda_m^{\delta_n}=\lambda_m~~and~~i_m^{\delta_n}=i_m~~for~ all~ n \end{equation} for some number $0\leq\lambda_m\leq m/(m+\alpha)$ and some integer $i_m$. Next, define $z_m,t_{m,i}$ with $i\in I_m$ as well as $x_{m+1}$ by (\ref{equation 3-3}) with $k=m$. It follows from (\ref{equation 4-39}) and the proof of Lemma \ref{lemma 4-10} that $$ z_m^{\delta_n}\rightarrow z_m~~and~~x_{m+1}^{\delta_n}\rightarrow x_{m+1}~~as~n\rightarrow+\infty, $$ and $$ F(x_{m+1}^{\delta_n})\rightarrow F(x_{m+1})~~as~n\rightarrow+\infty. $$ Using the induction hypothesis and taking $n\rightarrow+\infty$ in (\ref{equation 4-37}), we get that $\lambda_m$ and $i_m$ meet the requirements in the definition of $\Gamma_{\eta, q}(x_0)$. The proof is therefore complete. \end{proof} \end{lemma}
\begin{thm}\label{theorem 4-14} Let the combination parameters $\{\lambda_k^{\delta_n}\}$ be defined by the DBTS method with $i_0^{\delta_n}=0$, and let $k_*$ be the integer determined by the discrepancy principle (\ref{equation 3-10}). If $\mathcal{N}\left(F'(x^\dagger)\right)\subset\mathcal{N}\left(F'(x)\right)$ for all $x\in B_{4\rho}(x^\dagger)$, then there holds \begin{equation}\label{equation 4-40}
\lim\limits_{\delta\rightarrow0}\|x_{k_*}^\delta-x^\dagger\|=0. \end{equation} \begin{proof}
By contradiction, suppose that (\ref{equation 4-40}) is not true. Then there exist a number $\varepsilon>0$ and a subsequence $\{y^{\delta_n}\}$ of $\{y^{\delta}\}$ satisfying $\|y^{\delta_n}-y\|\leq\delta_n$ with $\delta_n\rightarrow0$ as $n\rightarrow+\infty$ such that \begin{equation}\label{equation 4-41}
\|x_{k_*}^{\delta_n}-x^\dagger\|\geq\varepsilon~~for~all~n. \end{equation} Hence, it remains to consider the following two cases.\\ \indent $Case~1.$ Assume that $\{k_{\delta_n}\}$ has a finite cluster point $\hat{k}$. By taking a subsequence if necessary and using Lemma \ref{lemma 4-13}, we get $k_{\delta_n}=\hat{k}$ for all $n$, and there exists a sequence $\{(x_k)\}\in \Gamma_{\eta, q}(x_0)$ satisfying \[ x_{\hat{k}}^{\delta_n}\rightarrow x_{\hat{k}},~~as~n\rightarrow+\infty. \] By Theorem \ref{theorem 4-11} we obtain $x_{\hat{k}}=x^\dagger$. There holds \[ x_{k_{*}}^{\delta_n}\rightarrow x^\dagger,~~as~n\rightarrow+\infty, \]
i.e., $\|x_{k_*}^{\delta_n}-x^\dagger\|\rightarrow0$ as $n\rightarrow+\infty$. This leads to a contradiction.
\indent $Case~2.$ Assume that $\lim\limits_{n\rightarrow+\infty}k_{\delta_n}=+\infty$. It follows from Proposition \ref{proposition 4-12} that there exists an integer $k(\varepsilon)$ which satisfies \begin{equation}\label{equation 4-42}
\|x_{k(\varepsilon)}-x^\dagger\|<\varepsilon~~for~any~\{(x_k)\}\in \Gamma_{\eta, q}(x_0). \end{equation} For this $k(\varepsilon)$, by taking a subsequence if necessary and using Lemma \ref{lemma 4-13}, we can pick $\{(x_k)\}\in \Gamma_{\eta, q}(x_0)$ such that \begin{equation}\label{equation 4-43} x_{k}^{\delta_n}\rightarrow x_k~~as~n\rightarrow+\infty,~~\forall~0\leq k\leq k(\varepsilon). \end{equation} Since $\lim\limits_{n\rightarrow+\infty}k_{*}=+\infty$, we have $k_{*}>k(\varepsilon)$ for sufficiently large $n$. Then, according to (\ref{equation 4-10}) in Proposition \ref{proposition 4-2}, we get \[
\left\|x_{k_*}^{\delta_n}-x^\dagger\right\|\leq\left\|x_{k(\varepsilon)}^{\delta_n}-x^\dagger\right\|. \] Combining (\ref{equation 4-42}) and (\ref{equation 4-43}) as $n\rightarrow+\infty$, we arrive at \[
\lim\limits_{n\rightarrow+\infty}\|x_{k(\varepsilon)}^{\delta_n}-x^\dagger\|=\|x_{k(\varepsilon)}-x^\dagger\|<\varepsilon. \] This is a contradiction to (\ref{equation 4-41}). From the above discussion we therefore get (\ref{equation 4-40}), which yields the assertion. \end{proof} \end{thm}
\begin{remark}\label{remark 4-4} In the proofs of Theorem \ref{theorem 4-8}, Lemma \ref{lemma 4-10} and Theorem \ref{theorem 4-11}, we exploited the hypothesis that the limit of $\lambda_k^\delta$ as $\delta\rightarrow0$ is unique. However, since this limit is not unique in the DBTS method, we discussed the different cluster points generated above. In Theorem \ref{theorem 4-14} we removed this hypothesis by using the uniform convergence result established in Proposition \ref{proposition 4-12}. \end{remark}
\section{Numerical simulations}\label{section-5}
\noindent In this section we carry out numerical experiments on one-dimensional and two-dimensional inverse potential problems to test the performance of the proposed TGSS method for solving the ill-posed system (\ref{equation 1-1}). Our simulations were performed using MATLAB R2013a on an LG computer with an Intel Core i5-6500 CPU at 3.20 GHz and 8.00 GB of memory.
To comprehensively demonstrate the acceleration performance of the TGSS method with two search directions, we compare the proposed TGSS method with the algorithms associated with it. First, we explain the following abbreviations.
\begin{description}
\item[1.] Land: The classical Landweber method (\ref{equation 1-3}), for which
we choose the constant stepsize $\alpha_k^\delta=1$.
\item[2.] TPG-DBTS: Two-point gradient method, whose iteration scheme is given by (\ref{equation 1-5}), with the parameters $\lambda_k^\delta$ selected by the DBTS method.
\item[3.] TPG-Nes: Two-point gradient method with classic Nesterov acceleration scheme, as equation (\ref{equation 1-4}).
\item[4.] SESOP: Sequential subspace optimization method for the iterative solution, i.e., equation (\ref{equation 1-6}).
\item[5.] TGSS-DBTS: Accelerated two-point gradient method based on sequential subspace optimization, with $\lambda_k^\delta$ chosen by the DBTS method.
\item[6.] TGSS-Nes: Accelerated two-point gradient method based on sequential subspace optimization with the classic Nesterov acceleration scheme. \end{description}
For better illustration, we define the following quantities to quantitatively analyze the reconstruction performance. \begin{enumerate}
\item Difference:
\[
x-x^\dagger,
\]
which is the difference between the exact and the reconstructed solutions.
\item Relative error (RE):
\[
RE=\frac{\|x-x^\dagger\|}{\|x^\dagger\|}.
\]
\item The rate of iterations:
\[
Rate(k_*)=\frac{k_*(\cdot)}{k_*(Land)},
\]
which illustrates the degree of acceleration in terms of iterations. Here, $k_*(\cdot)$ represents the number of iterations of the corresponding algorithm under the discrepancy principle, and
$k_*(Land)$ is the number of iterations of the Landweber iteration under the same conditions.
\item The rate of computation time:
\[
Rate(t)=\frac{t(\cdot)}{t(Land)},
\]
which illustrates the degree of acceleration in terms of computation time. Here, $t(\cdot)$ represents the computation time of the corresponding algorithm, and
$t(Land)$ is the computation time of the Landweber iteration. \end{enumerate}
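The quantities above reduce to a few lines of Python. The arrays in the usage example are illustrative only; the iteration counts 14 and 2169 are taken from Table 1 (TGSS-DBTS and Land at $\delta=0.1\%$):

```python
import numpy as np

def relative_error(x, x_dagger):
    # RE = ||x - x_dagger|| / ||x_dagger||
    return np.linalg.norm(x - x_dagger) / np.linalg.norm(x_dagger)

def rate(value, value_landweber):
    # used both for Rate(k_*) (iterations) and Rate(t) (computation time)
    return value / value_landweber

x_dagger = np.array([1.0, 2.0, 2.0])      # illustrative "exact" solution
x_rec = np.array([1.0, 2.0, 2.3])         # illustrative reconstruction
re = relative_error(x_rec, x_dagger)      # ||x - x_dagger|| / ||x_dagger|| = 0.1
rate_k = rate(14, 2169)                   # about 0.65%, cf. Table 1
```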
\noindent Next, we consider the following elliptic equation \begin{equation}\label{equation 5-1} \left\{
\begin{array}{ll}
-\Delta u+cu=f, & \hbox{in $\Omega$,} \\
\frac{\partial u}{\partial n}=0, & \hbox{on $\Gamma$,}
\end{array} \right. \end{equation} from the measurement of $u$, where $\Omega\subset\mathbb{R}^d(d=1,2)$ is an open bounded domain with a Lipschitz boundary $\Gamma$ and $f\in L^2(\Omega)$. We assume that the sought solution satisfies $c\in L^2(\Omega)$. Then the inverse problem can be equivalently described as solving the nonlinear operator equation \[ F(c)=u, \] where the nonlinear operator $F:\mathcal{D}(F)\mapsto L^2(\Omega)$ has the domain \[
\mathcal{D}(F):=\left\{c\in L^2(\Omega):\|c-\hat{c}\|_{L^2(\Omega)}\leq \zeta_0~for~ some~\hat{c}\geq0, a.e.\right\} \] as the admissible set of $F$. It is known that $F$ is well-defined for some positive constant $\zeta_0$ and Fr\'{e}chet differentiable \cite{26}. The Fr\'{e}chet derivative of $F$ and its adjoint can be calculated as follows: \[ F'(c)q=-A(c)^{-1}(qF(c))~~and~~F'(c)^*\omega=-u(c)A(c)^{-1}\omega, \] for $q,\omega\in L^2(\Omega)$. Here $A(c):H^2(\Omega)\cap H_0^1(\Omega)\rightarrow L^2(\Omega)$ is defined by $A(c)u=-\Delta u+cu$ \cite{27}.
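To make the forward map concrete, here is a small Python sketch of $c\mapsto u=A(c)^{-1}f$ in the 1-D case, using second-order finite differences with mirrored ghost nodes for the homogeneous Neumann condition. This is only an illustrative stand-in: the paper's implementation uses piecewise linear finite elements, and the function name \texttt{forward} is our own.

```python
import numpy as np

def forward(c, f, a=-1.0, b=1.0):
    """Solve -u'' + c u = f on [a, b] with u'(a) = u'(b) = 0 (sketch)."""
    n = len(c)                          # nodal values of c on a uniform grid
    h = (b - a) / (n - 1)
    A = np.zeros((n, n))
    for i in range(1, n - 1):           # standard 3-point stencil for -u''
        A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
        A[i, i] = 2.0 / h**2 + c[i]
    # homogeneous Neumann boundary via mirrored ghost nodes
    A[0, 0] = 2.0 / h**2 + c[0]
    A[0, 1] = -2.0 / h**2
    A[-1, -1] = 2.0 / h**2 + c[-1]
    A[-1, -2] = -2.0 / h**2
    return np.linalg.solve(A, f)
```

For $c\equiv1$ and $f\equiv1$ the exact solution is $u\equiv1$, which the scheme reproduces up to round-off, since the stencil applied to a constant leaves only the zero-order term.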
Using $u^\dagger=F(c^\dagger)$ with the true parameter $c^\dagger$, we obtain the exact data $u^\dagger$. We consider a Gaussian noise model, simulating noisy data $u^\delta$ as \[ u^\delta=u^\dagger+\delta\cdot n, \] where $\delta$ represents the noise level and $n$ is a random variable following the standard normal distribution. In this paper, the forward operator of the elliptic equation is discretized using finite elements on a uniform grid; we refer to \cite{28}. The implementation of the corresponding experiments refers to the MATLAB package shared by Bangti Jin (http://www.uni-graz.at/$\backsim$ clason/codes/l1fitting.zip).
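The noisy-data model above can be simulated in a couple of lines of Python; the sine curve standing in for $u^\dagger$ and the random seed are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)                   # seeded only for reproducibility
u_exact = np.sin(np.linspace(-1.0, 1.0, 256))    # illustrative stand-in for F(c^dagger)
delta = 1e-3                                     # noise level
# u^delta = u^dagger + delta * n, with n componentwise standard normal
u_noisy = u_exact + delta * rng.standard_normal(u_exact.shape)
```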
\subsection{1-D inverse potential problem}\label{subsection 5-1} \noindent We first consider the one-dimensional case of the above problem with the following settings.
\begin{itemize}
\item Let $\Omega=[-1,1]$, $f(x)\equiv1$ and the true parameter
\[
c^\dagger(x)=1-\cos(\pi x)
\]
for equation (\ref{equation 5-1}).
\item The exact data $u$ are obtained by solving the forward model using the standard piecewise linear finite element method with mesh $N=256$ and mesh size $h=1/N$.
\item For comparison purposes, we choose three noise levels, $\delta=0.1\%$, $\delta=0.01\%$ and $\delta=0.001\%$.
\item Assume that $\eta=0.1$. To guarantee the corresponding convergence, $\tau$ needs to satisfy $\tau>\frac{1+\eta}{1-\eta}$ in the SESOP and TGSS methods, as well as $\tau>2\frac{1+\eta}{1-2\eta}$ in the TPG method. Thus, we take $\tau=2.8$, $\mu=1.01$ and $c_F=0.1$.
\item For the selection of the combination parameters $\lambda_k^\delta$ in the DBTS method, we set $i_0=2$, $j_{\max}=1$ and $q(i)=4/i^{1.1}$. In the Nesterov acceleration scheme we take $\lambda_k^\delta=\frac{k-1}{k+\alpha-1}$ with $\alpha=3$. \end{itemize}
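As a quick sanity check on the Nesterov choice $\lambda_k^\delta=\frac{k-1}{k+\alpha-1}$ with $\alpha=3$, the following Python snippet confirms that the resulting sequence starts at $0$, increases strictly, and stays below $1$:

```python
alpha = 3
# lambda_k = (k - 1) / (k + alpha - 1) for k = 1, 2, ...
lam = [(k - 1) / (k + alpha - 1) for k in range(1, 200)]
```

In particular $\lambda_1=0$, so the first step is a plain gradient step, and the momentum weight grows toward $1$ as the iteration proceeds, which is what produces the acceleration effect.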
\begin{figure}
\caption{Reconstructions of each method for 1-D problem to show reconstructed solution and relative errors in the first 150 iterations. (a) Reconstructed solution. (b) Evolution of the relative error with exact data. (c) Evolution of the relative error with noise level $\delta=0.1\%$. }
\label{figure 1-1}
\label{figure 1-2}
\label{figure 1-3}
\label{1D}
\end{figure}
We first illustrate the results of the numerical experiment after 150 iterations, shown in Figure \ref{1D}. As Figure \ref{figure 1-1} shows, the reconstructed solutions generated by the TGSS algorithm are closer to the true solution than those of the other algorithms. The advantage of our algorithm in terms of convergence behavior is also visible in the relative error (RE) curves in Figure \ref{figure 1-2} and Figure \ref{figure 1-3}, which represent the case with exact data and the case with noise level $\delta=0.1\%$, respectively. They clearly show that the TGSS method makes the reconstruction error decrease dramatically.
\begin{table} \footnotesize
\centering \caption{ Comparisons between Land, TPG, SESOP and TGSS methods for 1-D inverse potential problem with noisy data. } \begin{threeparttable} \begin{tabular}{l l l l l l l l l l l l l} \hline $\delta$& Methods& $k_*$& Time (s)& RE & Rate($k_*$) & & Rate(t)&\\ \hline \multirow{1}{*}{0.1$\%$} & $\textrm{\textrm{Land}}$& 2169 & 2.003 & ${5.25 \times 10^{-3}}$ & 100$\%$ & & 100$\%$ & \\ &$\textrm{\textrm{TPG-DBTS}}$& 182 & 0.180 & ${2.33 \times 10^{-3}}$ & 8.39$\%$ & & 8.99$\%$ & \\ &$\textrm{\textrm{TPG-Nes}}$& 105 & 0.101 & ${2.46 \times 10^{-3}}$ & 4.84$\%$ & & 5.04$\%$ & \\ &$\textrm{\textrm{SESOP}}$& 37 & 0.143 & ${4.24 \times 10^{-3}}$ & 1.71$\%$ & & 7.14$\%$ & \\ &$\textrm{\textrm{TGSS-DBTS}}$& 14 & 0.039 & ${3.04 \times 10^{-3}}$ & 0.65$\%$ & & 1.95$\%$ & \\ &$\textrm{\textrm{TGSS-Nes}}$& 10 & 0.025 & ${5.38 \times 10^{-4}}$ & 0.46$\%$ & & 1.25$\%$ & \\ \hline \multirow{1}{*}{0.01$\%$} & $\textrm{\textrm{Land}}$& 5756 & 5.349 & ${6.97 \times 10^{-4}}$ & 100$\%$ & & 100$\%$ & \\ &$\textrm{\textrm{TPG-DBTS}}$& 233 & 0.231 & ${7.66 \times 10^{-4}}$ & 4.05$\%$ & & 4.32$\%$ & \\ &$\textrm{\textrm{TPG-Nes}}$& 336 & 0.316 & ${5.16 \times 10^{-4}}$ & 5.84$\%$ & & 5.91$\%$ & \\ &$\textrm{\textrm{SESOP}}$& 87 & 0.370 & ${5.33\times 10^{-4}}$ & 1.51$\%$ & & 6.92$\%$ & \\ &$\textrm{\textrm{TGSS-DBTS}}$& 24 & 0.068 & ${4.03 \times 10^{-4}}$ & 0.42$\%$ & & 1.27$\%$ & \\ &$\textrm{\textrm{TGSS-Nes}}$& 18 & 0.051 & ${1.34 \times 10^{-4}}$ & 0.31$\%$ & & 0.95$\%$ & \\ \hline \multirow{1}{*}{0.001$\%$} & $\textrm{\textrm{Land}}$& 16960 & 15.763& $ {1.17 \times 10^{-4}}$ & 100$\%$ & & 100$\%$ & \\ &$\textrm{\textrm{TPG-DBTS}}$& 624 & 0.611 & ${5.35\times 10^{-5}}$ & 3.68$\%$ & & 3.88$\%$ & \\ &$\textrm{\textrm{TPG-Nes}}$& 732 & 0.685 & ${5.44 \times 10^{-5}}$ & 4.32$\%$ & & 4.35$\%$ & \\ &$\textrm{\textrm{SESOP}}$& 342 & 1.134 & ${1.12 \times 10^{-4}}$ & 2.02$\%$ & & 7.19$\%$ & \\ &$\textrm{\textrm{TGSS-DBTS}}$& 122 & 0.276 & ${1.08 \times 10^{-4}}$ & 0.72$\%$ & & 
1.75$\%$ & \\ &$\textrm{\textrm{TGSS-Nes}}$& 103 & 0.177 & ${1.14 \times 10^{-5}}$ & 0.61$\%$ & & 1.12$\%$ & \\ \hline \end{tabular}
\end{threeparttable} \label{table 1-1} \end{table}
In addition, three different noise levels are added to the exact data in order to test the robustness of the proposed method (TGSS) to noise. Table \ref{table 1-1} summarizes the detailed simulation results for each noise level. Under the same discrepancy principle, we reach the following conclusions: \begin{itemize}
\item At all noise levels, compared with the Land, TPG and SESOP methods, our proposed TGSS method leads to a strong decrease in the number of iterations, and the overall computational time is significantly reduced.
\item The TPG method has a good acceleration effect relative to Land, but when it is combined with the SESOP method (TGSS), the acceleration becomes even more pronounced.
\item Comparing under the same conditions, i.e., with the parameters $\lambda_k^\delta$ chosen in the same way (by the DBTS method or the Nesterov acceleration scheme), the TGSS method achieves more satisfactory acceleration than the TPG method.
\item The TGSS-DBTS and TGSS-Nes methods show similar convergence behavior, which illustrates the validity of selecting $\lambda_k^\delta$ by the DBTS method. \end{itemize}
\subsection{2-D inverse potential problem}\label{subsection 5-2} \noindent We now consider a 2-D case to illustrate the performance of the proposed method on the inverse potential problem under different noise levels.
\begin{itemize}
\item Let $\Omega=[-1,1]^2,~f(x_1,x_2)\equiv1$ as well as true parameter
$$
c^\dagger(x_1,x_2)=1+\cos(\pi x_1)\cos(\pi x_2)\chi_{\{|(x_1,x_2)|_\infty<1/2\}}
$$
for equation (\ref{equation 5-1}).
\item The exact data are obtained with the two-dimensional standard piecewise linear finite element method with mesh $N=64\times64$ and mesh size $h=1/N^2$.
\item Choose three noise levels, which are $\delta=2\%$, $\delta=1\%$ and $\delta=0.5\%$. Moreover, take $\eta=0.1$, $\tau=2.8$, $\mu=1.01$ and $c_F=0.1$.
\item For the combination parameters $\lambda_k^\delta$ in the DBTS method we set $i_0=2$, $j_{\max}=1$ and $q(i)=9/i^{1.1}$. In the Nesterov acceleration scheme we take $\lambda_k^\delta=\frac{k-1}{k+\alpha-1}$ with $\alpha=9$. \end{itemize}
\begin{figure}
\caption{Experiment for the 2-D problem: true solution and the relative error evolution of each method in the first 100 iterations. (a) True solution. (b) Evolution of the relative error with exact data. (c) Evolution of the relative error with noise level $\delta=2\%$. }
\label{figure 2-1}
\label{figure 2-2}
\label{figure 2-3}
\label{2D-1}
\end{figure}
\begin{figure}
\caption{Experiment with exact data for the 2-D problem: reconstructed solution of each method at the 100th iteration. (a)-(f) Reconstructed solution of each method. }
\label{figure 3-1}
\label{figure 3-2}
\label{figure 3-3}
\label{figure 3-4}
\label{figure 3-5}
\label{figure 3-6}
\label{2D-2}
\end{figure}
The 2-D numerical experiments depicted in Figure \ref{2D-1} and Figure \ref{2D-2} indicate that, in the exact data case, the proposed method (TGSS) achieves better reconstructions than the other methods. Comparing Figure \ref{figure 2-1} (true solution) with Figure \ref{2D-2} (reconstructed solution of each method) shows that after 100 iterations our proposed method obtains more accurate reconstruction results. As can be seen from Figure \ref{figure 2-2}, the RE value of our proposed method decreases faster than that of the other algorithms, especially Land and TPG. Thus, we conclude that the TGSS method has satisfactory convergence behavior in the case of exact data.
\begin{figure}
\caption{Difference $c^\dag-c_k$ for each method with $2\%$ noise level at the 50th iteration. }
\label{figure 4-1}
\label{figure 4-2}
\label{figure 4-3}
\label{figure 4-4}
\label{figure 4-5}
\label{figure 4-6}
\label{Noise-2D difference}
\end{figure}
For the case of noisy data, we exhibit the RE curves over the first 100 iterations in Figure \ref{figure 2-3}, and Figure \ref{Noise-2D difference} shows the reconstruction differences of the various methods at the 50th iteration for the noisy data case ($\delta=2\%$). At $\delta=2\%$ the RE value of the TGSS method declines faster, which means that it converges to the true solution faster. The effect is even more apparent in Figure \ref{Noise-2D difference}: comparing the differences between the exact and reconstructed solutions, we observe that TGSS reconstructs the coefficient of (\ref{equation 5-1}) better than TPG. More detailed comparisons for noisy data are summarized in Table \ref{table 1-2}.
\begin{table} \footnotesize
\centering \caption{ Comparisons between Land, TPG, SESOP and TGSS methods for 2-D inverse potential problem with noisy data. } \begin{threeparttable} \begin{tabular}{l l l l l l l l l l l l l} \hline $\delta$& Methods& $k_*$& Time (s)& RE & Rate($k_*$) & & Rate(t)&\\ \hline \multirow{1}{*}{2$\%$} & $\textrm{\textrm{Land}}$& 2451 & 199.289& $ {4.46 \times 10^{-2}}$ & 100$\%$ & & 100$\%$ & \\ &$\textrm{\textrm{TPG-DBTS}}$& 257 & 21.003 & ${4.48\times 10^{-2}}$ & 10.49$\%$ & & 10.54$\%$ & \\ &$\textrm{\textrm{TPG-Nes}}$& 205 & 16.679 & ${4.47 \times 10^{-2}}$ & 8.36$\%$ & & 8.37$\%$ & \\ &$\textrm{\textrm{SESOP}}$& 64 & 15.091 & ${4.14 \times 10^{-2}}$ & 2.61$\%$ & & 7.57$\%$ & \\ &$\textrm{\textrm{TGSS-DBTS}}$& 40 & 7.397 & ${3.88 \times 10^{-2}}$ & 1.63$\%$ & & 3.71$\%$ & \\ &$\textrm{\textrm{TGSS-Nes}}$& 40 & 6.138 & ${3.82 \times 10^{-2}}$ & 1.63$\%$ & & 3.08$\%$ & \\ \hline \multirow{1}{*}{1$\%$} & $\textrm{\textrm{Land}}$& 4607 & 377.001 & ${2.82 \times 10^{-2}}$ & 100$\%$ & & 100$\%$ & \\ &$\textrm{\textrm{TPG-DBTS}}$& 376 & 30.845 & ${2.84 \times 10^{-2}}$ & 8.16$\%$ & & 8.18$\%$ & \\ &$\textrm{\textrm{TPG-Nes}}$& 274 & 22.400 & ${2.87 \times 10^{-2}}$ & 5.95$\%$ & & 7.42$\%$ & \\ &$\textrm{\textrm{SESOP}}$& 103 & 24.243 & ${2.61\times 10^{-2}}$ & 2.24$\%$ & & 6.43$\%$ & \\ &$\textrm{\textrm{TGSS-DBTS}}$& 68 & 12.106 & ${2.76 \times 10^{-2}}$ & 1.48$\%$ & & 3.21$\%$ & \\ &$\textrm{\textrm{TGSS-Nes}}$& 63 & 9.22 & ${2.86 \times 10^{-2}}$ & 1.37$\%$ & & 2.45$\%$ & \\ \hline \multirow{1}{*}{0.5$\%$} & $\textrm{\textrm{Land}}$& 7293 & 595.095 & ${2.08 \times 10^{-2}}$ & 100$\%$ & & 100$\%$ & \\ &$\textrm{\textrm{TPG-DBTS}}$& 469 & 38.010 & ${2.12 \times 10^{-2}}$ & 6.43$\%$ & & 6.39$\%$ & \\ &$\textrm{\textrm{TPG-Nes}}$& 337 & 27.969 & ${2.15 \times 10^{-2}}$ & 4.62$\%$ & & 4.70$\%$ & \\ &$\textrm{\textrm{SESOP}}$& 164 & 39.141 & ${2.01 \times 10^{-2}}$ & 2.25$\%$ & & 6.58$\%$ & \\ &$\textrm{\textrm{TGSS-DBTS}}$& 98 & 17.235 & ${2.05 \times 10^{-2}}$ & 1.34$\%$ & 
& 2.90$\%$ & \\ &$\textrm{\textrm{TGSS-Nes}}$& 97 & 14.534 & ${2.06\times 10^{-2}}$ & 1.33$\%$ & & 2.44$\%$ & \\ \hline \end{tabular}
\end{threeparttable} \label{table 1-2} \end{table}
Table \ref{table 1-2} presents detailed data on the convergence behavior of each method for the 2-D inverse potential problem. It shows that, at each noise level, TGSS requires roughly four to six times fewer iterations, and correspondingly less overall computational time, than TPG. That is, the convergence of the proposed TGSS is significantly accelerated compared to that of TPG. Under the same discrepancy principle, we obtain conclusions similar to the 1-D case. As expected, TGSS shows favorable robustness.
To sum up, the above analysis shows that the TGSS method achieves a more satisfactory acceleration effect than the TPG method.
\section{Conclusions}\label{section-6} In this paper we introduced an accelerated two-point gradient method, TGSS, as a new iterative regularization method. TGSS can be regarded as a hybrid between the two-point gradient and sequential subspace optimization methods for solving nonlinear ill-posed problems $F(x)=y$. As in the two-point gradient method, we first form intermediate variables $z_k^\delta$ using combination parameters $\lambda_k^\delta$ chosen according to the Nesterov acceleration scheme or the DBTS method, and then replace the original search direction by the one with respect to $z_k^\delta$. As in the sequential subspace optimization method, information from previous iterations is used, which allows a finite number of search directions to be employed. For this purpose, we defined stripes for the exact and the noisy data case, accounting for the nonlinearity of the operator $F$ by utilizing a tangential cone condition. The new iterate is obtained by projecting $z_k^\delta$ onto the intersection of the stripes associated with the search directions.
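The two-point gradient skeleton underlying this construction can be sketched on a toy linear problem $F(x)=Ax$. The following Python fragment is a schematic illustration only: it shows the intermediate variable $z_k^\delta$ and the discrepancy-principle stopping rule, but omits the sequential subspace projections of TGSS; all names and the toy data are ours.

```python
import numpy as np

def two_point_gradient(A, y_delta, delta, tau=2.8, mu=1.0, alpha=3, max_iter=3000):
    """Schematic two-point gradient iteration for the linear model F(x) = A x.

    z_k combines the two previous iterates with the Nesterov weight
    lambda_k = (k - 1) / (k + alpha - 1); the update is a gradient step
    taken at z_k, and the iteration stops once the discrepancy principle
    ||A x_k - y_delta|| <= tau * delta is satisfied.
    """
    x_prev = x = np.zeros(A.shape[1])
    k = 0
    for k in range(1, max_iter + 1):
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            break
        lam = (k - 1) / (k + alpha - 1)
        z = x + lam * (x - x_prev)          # intermediate variable z_k
        grad = A.T @ (A @ z - y_delta)      # F'(z)^* (F(z) - y_delta)
        x_prev, x = x, z - mu * grad        # gradient step at z_k
    return x, k

# Toy example: a mildly ill-conditioned diagonal system with 0.1% noise
rng = np.random.default_rng(0)
A = np.diag([1.0, 0.5, 0.1, 0.05])
x_true = np.array([1.0, -1.0, 0.5, 0.2])
delta = 1e-3
y_delta = A @ x_true + delta * rng.standard_normal(4)
x_rec, k_stop = two_point_gradient(A, y_delta, delta)
```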
In terms of the main theoretical results, we have established the convergence of the TGSS method and then obtained the corresponding regularization theory. We have also proved the corresponding consistency result for the case where $\lambda_k^\delta$ is selected by the DBTS method. In Section \ref{section-5} we presented numerical results for the TGSS method. The numerical experiments confirm that TGSS has excellent acceleration effects compared with the Land, TPG and SESOP methods, while producing satisfactory approximations.
\section*{Acknowledgments} This work was supported by the National Natural Science Foundation of China under Grant No.11871180.
\section*{References}
\end{document} |
\begin{document}
\author{Marc Kesseb\"ohmer and Bernd O. Stratmann} \address{Fachbereich 3 - Mathematik und Informatik, Universit\"at Bremen, D--28359 Bremen, Germany}
\email{mhk@math.uni-bremen.de} \address{Mathematical Institute, University of St Andrews, St Andrews KY16 9SS, Scotland}
\email{bos@maths.st-and.ac.uk}
\title[Refined measurable rigidity and flexibility]{ Refined measurable rigidity and flexibility for conformal iterated function systems}
\subjclass{37C15, 28A80, 37C45}
\date{\today}
\keywords{Rigidity, conformal iterated function systems, thermodynamical formalism, multifractal formalism, Lyapunov spectra}
\begin{abstract} In this paper we investigate aspects of rigidity and flexibility for conformal iterated function systems. For the case in which the systems are not essentially affine we show that two such systems are conformally equivalent if and only if in each of their Lyapunov spectra there exists at least one level set such that the corresponding Gibbs measures coincide. We then compare this result with the essentially affine situation. We show that essentially affine systems are far less rigid than non--essentially affine systems, and we subsequently investigate the extent of their flexibility. \end{abstract}
\maketitle
\section{Introduction}
In 1982 D. Sullivan published his influential purely measurable form of Mostow's rigidity theorem. It states that if two geometrically finite Kleinian groups are conjugate under a Borel map $F$ which is non-singular with respect to the Patterson measures associated with the two groups, then $F$ agrees almost everywhere with a conformal conjugacy (\cite{sul1}, see also \cite{sul2}, \cite{sul3} and \cite{B}). Since the appearance of this theorem, the concept of measurable rigidity has attracted a great deal of attention, and in the meantime numerous generalisations and variations have been obtained. One of these
was derived by Hanus and Urba\'nski (\cite{HU}),
who considered non-essentially affine, conformal iterated function systems (see Section \ref{section2} for the definitions), and showed that two such systems $\Phi$ and $\Psi$ are conformally equivalent if and only if their associated conformal measures $\mu_{\Phi}$ and $\mu_{\Psi}$ (each of maximal Hausdorff dimension) coincide up to permutation of the generators. This result can be seen as
the
starting point for this paper.
Our first goal is to give a multifractal refinement of the result in \cite{HU}, where for ease of exposition we restrict the discussion to the $1$--dimensional finite case. For this we recall that each system $\Phi$ gives rise to its Lyapunov spectrum $u\mapsto \ell_{\Phi}(u)$, which is given by the multifractal spectrum of the measure of maximal entropy associated with $\Phi$. Moreover, each level set in this spectrum supports a canonical shift--invariant Gibbs measure $\mu_{\Phi, u}$. In a nutshell, our main result for non-essentially affine, conformal iterated function systems is that two such systems $\Phi$ and $\Psi$ are conformally equivalent if and only if $\mu_{\Phi,u}$ is equal to $\mu_{\Psi,v}$ up to permutation of the generators, for some $u,v \in \mathbb{R}\setminus \{0\}$ (see Theorem \ref{NAS} for a more complete statement which also involves cohomological equivalence of the associated canonical geometric potential functions, equality of pressure functions as well as equality of Lyapunov spectra).
In the second part of the paper we consider essentially affine,
conformal iterated function systems.
Note that for non-essentially affine systems a conjugation
map
between two systems is conformal if and only if it is bi-Lipschitz
(see \cite{MU} Theorem 7.2.4). Hence, for essentially affine
systems bi-Lipschitz conjugation is the natural substitute for
conformal conjugation.
By investigating similar questions as before for the non-essentially affine case, we obtain that from the point of view of multifractal rigidity essentially affine systems behave rather differently from non-essentially affine systems. For instance, if for two essentially affine systems $\Phi$ and $\Psi$ we have that $\mu_{\Phi,u}$ is equal to $\mu_{\Psi,v}$ up to permutation of the generators, for some $u,v \in \mathbb{R}\setminus \{0\}$, then this does {\em not necessarily} imply that $\Phi$ and $\Psi$ are bi-Lipschitz equivalent. More precisely, we show that equality of $\mu_{\Phi,u}$ and $\mu_{\Psi,u}$ up to permutation of the generators {\em together} with the equality of the pressure functions $P_{\Phi}$ and $P_{\Psi}$ at $u$, for some $u \in \mathbb{R} \setminus \{0\}$, is equivalent to the fact that $\Phi$ is bi-Lipschitz equivalent to $\Psi$, as well as to the facts $P_{\Phi}= P_{\Psi}$, $\ell_{\Phi}=\ell_{\Psi}$ and cohomological equivalence of the two canonical geometric potential functions associated with the systems (see Theorem \ref{linearrigid}). These results clearly show that essentially affine systems are less rigid than non-essentially affine systems, and a further investigation of this phenomenon of flexibility is then given in Section \ref{Subsec:MultiFlex}. There, we derive necessary and sufficient conditions for equality of $\mu_{\Phi,u}$ and $\mu_{\Psi,v}$ in terms of the pressure functions and the canonical geometric potential functions (see Theorem \ref{Thm:Multiflex}). Also, we show that this situation does in fact occur. Namely, in Proposition \ref{flexsection} we obtain that if $\mu_{\Phi,u}$ is given and $v$ fulfils a certain admissibility condition (see Definition \ref{adm}), then there exists an essentially affine system $\Psi$ such that $\mu_{\Phi,u}$ is equal to $\mu_{\Psi,v}$ up to permutation of the generators. Finally, we
give a brief discussion of the extent of flexibility of an essentially affine system. The outcome here is
that for a non-degenerate $\Phi$ the set of systems $\Psi$ for which $\mu_{\Phi,u}$ is equal to $\mu_{\Psi,v}$ up to permutation of the generators, for some $u,v\in \mathbb{R} \setminus \{0\}$, forms a $2$--dimensional submanifold of the moduli space of $\Phi$, whereas if $\Phi$ is degenerate then this set is a $1$--dimensional submanifold (see Proposition \ref{ending}).
\section{Preliminaries}\label{section2} \subsection{Conformal iterated function systems} Throughout this paper we consider \emph{conformal iterated function systems} (CS) on some connected compact set $X \subset \mathbb{R}$. Recall from \cite{HU} (see also \cite{MU}) that these systems are generated by an ordered family
$\Phi$ of injective contractions $(\varphi_i : X \to \hbox{Int} X \, | \, i \in I)$, for some given finite index set $I:=\{1,\ldots,d\}$ with at least two elements. Furthermore, $\Phi$ satisfies the following conditions, where we use the notation $\varphi_\omega := \varphi_{x_1}\circ \varphi_{x_2}\circ \ldots \circ \varphi_{x_n} $ for $\omega = x_1 x_2 \ldots x_n \in I^n$. \begin{description} \item[{\it Open set condition}] $\varphi_{i} (\hbox{Int} (X) ) \subset \hbox{Int} (X)$ for all $i \in I$, \\ and $\varphi_i (\hbox{Int} (X)) \cap \varphi_j (\hbox{Int}(X))= \emptyset \,\hbox{ for each } \, i, j \in I, i\neq j$. \item[{\it Conformality--condition}] There exists an open connected set $U \subset \mathbb{C}$ containing $X$ such that $\varphi_{i}$ extends to a conformal map on $U$, for each $i \in I$. \item[{\it Bounded distortion property}] There exists $C \ge 1$ such that for all $n \in \mathbb{N},
\omega \in I^n$ and $x,y \in U$ we have $$
|\varphi_\omega'(y)| \le C \; |\varphi_\omega'(x)|. $$ \end{description} A central object associated with a CS $\Phi$ is its {\em limit set}
\[\Lambda(\Phi):=\bigcap_{n\in\mathbb{N}}\bigcup_{\omega\in I^n} \varphi_\omega (X).\] Clearly, $\Lambda (\Phi)$ is the unique non-empty compact subset of $\mathbb{R}$ for which $\Lambda(\Phi) = \bigcup_{i\in I} \varphi_{i}(\Lambda(\Phi))$. From a combinatorial point of view $\Phi$ is described by the full-shift $\Sigma_{d} := I^{{\mathbb{N}}}$. As usual, we assume $\Sigma_{d}$ to be equipped with the left-shift map $\sigma$. The link between $\Sigma_{d}$ and $\Phi $ is provided by the canonical bijection $\pi_{\Phi}: \Sigma_{d} \to \Lambda(\Phi)$ which is given by $\pi_{\Phi}(x_{1} x_{2} \ldots) := \lim_{n \to \infty} \varphi_{x_{1} x_{2} \ldots x_{n}} (X)$. Evidently, we can always think of $\Phi $ as being a conformal fractal representation
of $\Sigma_{d}$. \\
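As a standard illustration (ours, not taken from \cite{HU} or \cite{MU}), the middle-third Cantor set arises from the CS on $X=[0,1]$ with $I=\{1,2\}$ given by
\[ \varphi_{1}(x)=\frac{x}{3}, \qquad \varphi_{2}(x)=\frac{x}{3}+\frac{2}{3} \]
(up to the harmless fact that the fixed points $0$ and $1$ lie on the boundary of $X$). Here the coding map is $\pi_{\Phi}(x_{1}x_{2}\ldots)=\sum_{i\geq 1} 2(x_{i}-1)\,3^{-i}$, and $\Lambda(\Phi)$ is the classical Cantor set of Hausdorff dimension $\log 2/\log 3$. \\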
We also
consider the special situation in which all the $\varphi_{i}$
are in particular affine
transformations. In this case the system is called
an {\em affine iterated function system} (AS), and occasionally it will also be referred to as an {\em affine fractal representation} of
$\Sigma_{d}$. One of the major issues of this paper is to study
certain deformations of a given CS $\Phi= (\varphi_i : X \to \hbox{Int} X \, | \, i \in I)$.
More precisely,
let
$\Psi:=(\psi_i:Y\to \hbox{Int}Y \, | \, i\in I)$ be
some other CS defined on some connected compact set
$Y \subset \mathbb{R}$.
Then $\Psi$ is called a {\em deformation} of
$\Phi $ if there exists a bi-Lipschitz map $h:\Lambda(\Phi)\to \Lambda(\Psi)$ such that \[ \psi_i= h\circ \varphi_i \circ h^{-1},\hbox{ for each } i\in I.\] A map $h$ of this type will be called
a {\em fractal boundary correspondence}. In particular, if here $\Phi$ is an AS, that is if $\Psi$ is a deformation of an affine iterated function system, then $\Psi$ will be referred to as an {\em essentially affine iterated function system} (EAS). On the other hand, if $\Phi$ is a CS which is not an EAS, then $\Phi$ will be called a {\em non-essentially affine iterated function system} (NAS). \\ Let us also introduce the
{\em deformation space} ${\mathcal T}(\Sigma_{d})$ associated with $\Sigma_{d}$. This is given by
\[{\mathcal T}(\Sigma_{d}):= \{ \Psi: \Psi \hbox{ is a
CS on $\Sigma_{d}$} \} .\] Clearly, ${\mathcal T}(\Sigma_{d})$ relates to $\Sigma_{d}$ in a similar way as the Teichm\"uller space for a Riemann surface relates to the associated fundamental group. We then decompose the space ${\mathcal T}(\Sigma_{d}) $
into the two disjoint deformation spaces
\[{\mathcal T}_{E}(\Sigma_{d}):= \{ \Psi: \Psi \hbox{ is an
EAS on $\Sigma_{d}$} \} \hbox{ and } {\mathcal T}_{N}(\Sigma_{d}):= \{ \Psi: \Psi \hbox{ is an NAS on $\Sigma_{d}$} \}.\] Also, we introduce an equivalence relation
on ${\mathcal T} (\Sigma_{d})$ as follows. Two systems $\Phi, \Psi \in {\mathcal T}(\Sigma_{d})$ are said to be equivalent ($\Phi \sim \Psi$) if and only if there exists
a fractal boundary correspondence $h: \Lambda(\Phi) \to
\Lambda(\Psi)$ between them.
Finally, recall that a CS is called {\em degenerate} if it is
equivalent to an AS $\Psi= (\psi_i : X \to \hbox{Int} X \, | \, i \in I)$ for which $\psi_{i}' = \psi_{j}'$, for all $i,j \in I$.
It is easy to see
that for a degenerate EAS the multifractal analysis in this paper is trivial.
\subsection{Thermodynamic and multifractal formalism for CS}
Let
$\Phi =(\varphi_i : X \to \hbox{Int} X \, | \, i \in
I) \in {\mathcal T}(\Sigma_{d}) $ be given, and let $\delta_{\Phi}$ refer to the
Hausdorff dimension of $\Lambda(\Phi)$.
Throughout, we require the following standard concepts
from thermodynamic
formalism, and we assume that the reader is familiar with the basics of
this formalism (see e.g. \cite{Bowen}, \cite{Denker}, \cite{Pesin},
\cite{Ruelle}). Here we use the common notation $[x_{1}\ldots x_{n}]:=
\{y=(y_1 y_2 \ldots) \in \Sigma_{d}: y_{i}=x_{i} \hbox{ for }
i=1,\ldots,n\}$ and $S_{n} f:= \sum_{k=0}^{n-1} f \circ \sigma^{k}$.
\begin{itemize}
\item The {\em canonical geometric potential} $I_{\Phi}:\Sigma_{d} \to \mathbb{R}$
associated with $\Phi$ is given
by $I_{\Phi}(x):=\log
|\varphi_{x_1}'(\pi_{\Phi}(x))|$ for all $x=(x_1 x_2 \ldots)
\in\Sigma_d$.
\item $\mu_{\Phi}$ refers to a
Gibbs measure on $\Sigma_{d}$ for the potential
$\delta_{\Phi} I_{\Phi}$.
\item $\ell_{\Phi}$ refers to the {\em Lyapunov spectrum} of $\Phi$,
given for $\alpha
\in \mathbb{R}$ by
\[ \ell_{\Phi}(\alpha):=\dim_{H}\left(\pi_{\Phi} \left(\left\{x \in
\Sigma_{d}:
\lim_{n \to
\infty} \frac{S_n I_\Phi(x)}{-n} = \alpha
\right\}\right) \right) . \]
\item
$\mathcal{P}_{\Phi} : \mathbb{R} \to \mathbb{R}$ denotes the {\em pressure
function},
given
for a potential function $f:\Sigma_{d} \to \mathbb{R}$ by
\[\mathcal{P}_\Phi (f):=\lim _{n\to \infty}\frac{1}{n}\log\sum_{\omega\in
I^n}\exp( \sup_{x\in [\omega]} S_n f (x)) .
\]
Also, for $u
\in \mathbb{R}$ we define
\[ P_{\Phi} (u) : = \mathcal{P}_{\Phi} (u \, I_{\Phi}) \]
and \[
\alpha_{\Phi}(u) := - P_{\Phi}'(u) =- \int I_{\Phi} \, \mathrm d
\mu_{\Phi,u} .\]
Here, $\mu_{\Phi,u}$ refers to the {\em $\sigma$--invariant
Gibbs measure} on $\Sigma_{d}$ for the potential function
$u I_{\Phi} -P_{\Phi}(u)$, where `Gibbs' means as usual that
for all $n \in \mathbb{N}, (x_{1},\ldots,x_{n}) \in I^{n}$ and
$x \in [x_{1} \ldots x_{n}]$,
\[ \mu_{\Phi,u} ([x_{1},\ldots,x_{n}]) \asymp e^{u S_{n}
I_{\Phi}(x) - n P_{\Phi}(u)} .\]
Furthermore, we let $m_{\Phi,u}$ denote the {\em $(u I_{\Phi}
-P_{\Phi}(u))$--conformal measure} within the measure class of
$\mu_{\Phi,u}$, given by
\[ \frac{d m_{\Phi,u} \circ \varphi_{i}}{d
m_{\Phi,u}} = \left| \varphi'_{i}\right|^{u} \,
\mathrm e^{-P_{\Phi}(u)} , \hbox{ for all } i \in I.
\]
\end{itemize}
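For orientation we note (a standard computation, included here for the reader's convenience) that in the affine case these objects admit closed forms. If $|\varphi_{i}'|\equiv r_{i}\in(0,1)$ for each $i\in I$, then $I_{\Phi}$ is locally constant with $I_{\Phi}(x)=\log r_{x_{1}}$, and hence
\[ P_{\Phi}(u)=\lim_{n\to\infty}\frac{1}{n}\log\sum_{\omega\in I^{n}}\prod_{k=1}^{n} r_{\omega_{k}}^{\,u}=\log\sum_{i\in I} r_{i}^{\,u}. \]
In this case $\mu_{\Phi,u}$ is the Bernoulli measure with weights $p_{i}=r_{i}^{\,u}\,\mathrm e^{-P_{\Phi}(u)}=r_{i}^{\,u}/\sum_{j\in I} r_{j}^{\,u}$, and $\delta_{\Phi}$ is the unique zero of $P_{\Phi}$, that is, the solution of Moran's equation $\sum_{i\in I} r_{i}^{\,\delta_{\Phi}}=1$.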
Finally, throughout
we require the following notions of equivalence in
connection with
two given $\Phi, \Psi \in \mathcal{T}(\Sigma_{d})$.
\begin{itemize}
\item The Gibbs measures $\mu_{\Phi,u}$
and $\mu_{\Psi,v}$ are {\em equal up
to permutation} ($\mu_{\Phi,u} \cong \mu_{\Psi,v}$) if and only if
$\mu_{\Phi,u} = \mu_{\Psi_{0},v}$ for some system $\Psi_{0}$ obtained from $\Psi$
by a permutation of the generators of $\Psi$.
\item The potentials $I_{\Phi}$
and $I_{\Psi}$ are {\em cohomologically equivalent} ($I_{\Phi} \simeq I_{\Psi}$) if
$I_{\Phi}$ is cohomologous to $I_{\Psi_{0}}$, for some system $\Psi_{0}$
obtained from $\Psi$
by a permutation of the generators of $\Psi$. (Recall that two
functions
$f,g: \Sigma_{d} \to \mathbb{R}$ are {\em cohomologous} if there exists a continuous
function $e:\Sigma_{d} \to \mathbb{R}$ such that $f-g=e- e\circ \sigma$).
\end{itemize}
For the type of iterated function systems which we consider in this paper the calculation of the Lyapunov spectrum is basically an application of the multifractal analysis of the measure of maximal entropy for cookie-cutter Cantor sets. The following proposition summarises the outcome of this analysis. Subsequently, we will outline the proof employing the down-to-earth approach given in \cite{Falconer}. Note that the proposition also immediately follows from the multifractal formalism for growth rates developed in \cite{KS1}. Also, note that here the function $\beta_{\Phi}$ is precisely the inverse of the function $\alpha_{\Phi}$, that is $\beta_{\Phi} \circ \alpha_{\Phi}= \mathrm{id}$.
\begin{proposition}\label{MFF}
Let a non-degenerate $\Phi \in {\mathcal T}(\Sigma_{d})$
be given. Then there exists
a real-analytic function $\beta_{\Phi}: \mathbb{R} \to \mathbb{R}$ and
$\alpha_{-}, \alpha_{+} >0$ such that $\ell_{\Phi}(\alpha)
=0$ for all $ \alpha \notin
(\alpha_{-},\alpha_{+})$ and such that for all $ \alpha \in
[\alpha_{-},\alpha_{+}]$,
\[ \ell_{\Phi}(\alpha) =
\beta_{\Phi}(\alpha)
+\frac{P_{\Phi}\left(\beta_{\Phi}(\alpha)\right)}{\alpha}.\] \end{proposition}
\begin{proof} (Sketch) Let $\nu$ refer to the measure of maximal
entropy for the system $\Phi$ on $\Sigma_{d}$. Then $\nu$ is a Gibbs measure
for the potential function $\varphi$ constant equal to the negative
of the topological entropy
$h_{\mbox{\tiny \rm top}}:=\log d$.
Hence, we in particular have
$\nu([\omega]) \asymp \exp(S_{n} \varphi
(x)) = d^{-n}$ for all $n \in \mathbb{N}, \omega
\in I^{n}$ and $x \in [\omega]$. Trivially, we have $\varphi < 0$ and
$\mathcal{P}(\varphi)=0$, which shows that $\nu$ can be
analysed by standard multifractal analysis (see e.g. \cite{Falconer}). This
gives that there exists a well-defined, strictly decreasing,
real-analytic function $\gamma_{\Phi}:\mathbb{R} \to \mathbb{R}$ such that
$\mathcal{P}(\gamma_{\Phi}(t) \, I_{\Phi}+ t \varphi) =0$, for all
$t \in \mathbb{R}$. In order to determine the Hausdorff dimension spectrum
of \[ E_{\tau}:= \left\{x \in
\Lambda(\Phi) : \lim_{n\to \infty} \frac{\log
\nu\left(\pi_{\Phi}^{-1} (B(x,r))\right)}{\log r} = \tau
\right\} ,\]
one considers the Legendre transform of $\gamma_{\Phi}$, given by
$f(\tau) =\inf \{ \gamma_{\Phi} (t) + t \, \tau: t \in \mathbb{R}\}$, or what is
equivalent $f(\tau) = \gamma_{\Phi} (t_{\tau}) + t_{\tau} \, \tau$ where $t_{\tau}$
is determined by $\gamma_{\Phi}'(t_{\tau}) =- \tau$. In particular,
there exists a maximal interval $(\tau_{-}, \tau_{+})$ on which $f$
is continuous, concave and strictly positive; outside this
interval $f$ vanishes. Now, the key
observation is that there exists a Gibbs measure
$\nu_{\tau}$
for the potential function $\gamma_{\Phi}(t_{\tau})\, I_{\Phi}+
t_{\tau} \varphi$ which is concentrated on $\pi_{\Phi}^{-1}(E_{\tau})$.
(Note that the measure $\nu_{\tau}$ coincides with the measure
$\mu_{\Phi,\gamma_{\Phi}(t_{\tau})}$ which we already introduced
above).
Hence, we have for all $n \in \mathbb{N}$, $\omega \in I^{n}$ and $x
\in [\omega]$,
\[ \nu_{\tau} ([\omega]) \asymp \exp
\left(\gamma_{\Phi}(t_{\tau})\, S_{n}I_{\Phi}(x)+
t_{\tau} S_{n}\varphi (x)\right) .\]
Since by the bounded distortion property $\exp(S_{n}I_{\Phi}(x)) \asymp
|\pi_{\Phi}([\omega]) |$, the mass distribution
principle therefore immediately gives $\dim_{H}(E_{\tau}) = f(\tau)$.
To
finish the proof, note that
\begin{eqnarray*} \pi_{\Phi}^{-1}(E_{\tau}) &=& \left\{ x=(x_{1}x_{2} \ldots)\in
\Sigma_{d} : \lim_{n \to \infty} \frac{-n \, h_{{\mbox{\tiny \rm top}}}}{\log
|\pi_{\Phi}([x_{1}\ldots x_{n}])|}
= \tau\right\} \\ &=&
\left\{ x \in
\Sigma_{d} : \lim_{n \to \infty} \frac{S_{n} I_{\Phi}(x)}{-n} =
\frac{h_{\mbox{\tiny \rm top}}}{\tau} \right\} .\end{eqnarray*}
This shows that $ f(\tau) = \ell_{\Phi}(\alpha)$, for
$\alpha:= h_{\mbox{\tiny \rm top}}/ \tau$. Finally, define $\beta_{\Phi}(\alpha):=
\gamma_{\Phi}(t_{\tau})$ and note that
$\mathcal{P}(\gamma_{\Phi}(t_{\tau})\, I_{\Phi}+
t_{\tau} \varphi)=0$ immediately implies that
$P_{\Phi}(\beta_{\Phi}(\alpha))= t_{\tau} \, h_{\mbox{\tiny \rm top}}$.
Using this and rewriting the above in terms
of $\alpha$, the result follows.
\end{proof}
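To illustrate the quantities appearing in the proof (again a sketch of ours for the affine case $|\varphi_{i}'|\equiv r_{i}$), note that since $\varphi\equiv -\log d$, the function $\gamma_{\Phi}$ is determined by
\[ 0=\mathcal{P}(\gamma_{\Phi}(t)\,I_{\Phi}+t\varphi)=\log\sum_{i\in I} r_{i}^{\,\gamma_{\Phi}(t)}-t\log d, \quad\mbox{that is,}\quad \sum_{i\in I} r_{i}^{\,\gamma_{\Phi}(t)}=d^{\,t}. \]
For $t=1$ this gives $\gamma_{\Phi}(1)=0$, while for $t=0$ it recovers Moran's equation $\sum_{i\in I} r_{i}^{\,\gamma_{\Phi}(0)}=1$, so that $\gamma_{\Phi}(0)=\delta_{\Phi}$.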
\section{Multifractal rigidity for NAS}
For the proof of the main result of this section (Theorem \ref{NAS})
we require the following proposition. Note that for $u =
\delta_{\Phi}$ this result has been obtained by
Mauldin and Urba\'nski (\cite{MU} Theorem
6.1.3). Since it is straightforward to adapt the arguments in \cite{MU}
to our
multifractal situation here, we will only give an outline
of the proof emphasising the major changes which have to be made.
\begin{proposition}\label{propext}
Let $\Phi \in {\mathcal T}_{N}(\Sigma_{d})$ and $u
\in \mathbb{R} \setminus \{0\}$ be given, and let $m_{\Phi,u}$ refer
to the $(u I_{\Phi}-P_{\Phi}(u))$--conformal measure in the
measure class
of $\mu_{\Phi,u}$.
Then there exists an open connected set $W
\supset X$ such that
$d\mu_{\Phi,u} /d m_{\Phi,u}$ has a
positive real-analytic extension to $W$. \end{proposition} \begin{proof}(Sketch) The first step consists of applying Arzel\`a--Ascoli to obtain that
\[ F:C(X)\to C(X), F(g):=\mathrm e^{-P_\Phi(u)}\sum_{i\in I}|\varphi_i'|^u g\circ\varphi_i\] is an almost periodic operator, that is, $\{F^n(g):n\in \mathbb{N}\}$ is relatively compact with respect to the sup-norm for every $g\in C(X)$ (see \cite{MU} Lemma 6.1.1). Also, the Gibbs property of $\mu_{\Phi,u}$ immediately implies that $F^n(\mathbf 1)$ is uniformly bounded away from zero and infinity, for each $n\in \mathbb{N}$. \\ The second step is to use the above results to show that there exists a unique positive continuous function $\rho:X\to \mathbb{R}^{+}$ such that (see \cite{MU} Theorem 6.1.2) \[F(\rho)=\rho,\; \int \rho \;\mathrm d m_{\Phi,u}=1, \;\mbox{ and }
\rho|_{\Lambda(\Phi)}=\frac{d \mu_{\Phi,u}}{d m_{\Phi,u}}.\] The final step is to consider the sequence of functions $\left(b_{n}\right)_{n \in \mathbb{N} }$, given by
\[b_n(z):=\sum_{|\omega|=n}|\varphi_\omega '(z)|^u \mathrm e^{-nP_\Phi (u)}.\] One verifies that each $b_{n}$ is defined locally on a sufficiently large neighbourhood of each $w \in X$, where it is analytic, uniformly bounded and equicontinuous (see \cite{MU}, proof of Theorem 6.1.3). It then follows that $(b_{n})$ has a subsequence converging to an analytic function which locally extends $\rho$. Since $X$ is compact and simply connected, this provides us with a globally defined analytic extension of $\rho$, which is uniformly bounded from above and below. \end{proof} The following theorem gives the main results of this section. Here, the main outcome is that if two Gibbs measures associated with two points in the Lyapunov spectra of two NAS are equal up to permutation, then the two systems are already bi-Lipschitz equivalent. Therefore, the theorem represents a refinement of the Hanus--Urba{\'n}ski rigidity theorem mentioned in the introduction (see also Corollary \ref{CorNASrigid}).
\begin{theorem}[Multifractal rigidity for NAS]\label{NAS} $ \, $ \\
Let $\Phi, \Psi \in {\mathcal T}_{N}(\Sigma_{d})$ and
$u , v \in \mathbb{R} \setminus \{0\}$ be given. Then the following
three statements are
equivalent.
\begin{itemize}
\item[ (i)] $\mu_{\Phi,u} \cong \mu_{\Psi,v}$;
\item[ (ii)]
$\Phi \sim \Psi $ and $u=v$;
\item[(iii)] $I_\Phi\simeq I_\Psi$ and $u=v$.
\end{itemize}
Also, the following two
statements are
equivalent.
\begin{itemize}
\item [(iv)] $P_\Phi = P_\Psi$;
\item[ (v)] $\ell_{\Phi} = \ell_{\Psi}.$
\end{itemize}
Furthermore, each of the statements in {\rm (i) - (iii)} implies the
statements in {\rm (iv)} and {\rm (v)}.
\end{theorem}
\begin{proof}
The implications ``(ii)$\implies$(i)'', ``(iii)$\implies$(i)''
and ``(iii)$\implies$(iv)'', as well as the equivalence of
(iv) and (v) follow exactly as in the case $\Phi
\in {\mathcal T}_{E}(\Sigma_{d})$, and for this we refer to
Theorem \ref{linearrigid} in Section \ref{Sec:linearrigid}.
Assuming for the moment that ``(i)$\implies$(ii)'' holds, which will be
verified below, the implication ``(i)$\implies$(iii)'' can be obtained as
follows. Assume that $\mu_{\Phi,u} \cong \mu_{\Psi,v}$.
We then have
$u I_\Phi\simeq v I_\Psi + c$, for some constant $c$. Also, since
``(i)
$\implies$ (ii)'' holds, we have that $u=v$ and $\Phi \sim \Psi$.
It hence follows that
$I_\Phi - I_\Psi \simeq c$
and $\delta_{\Phi}=\delta_{\Psi}$. Consequently, $0=
P_{\Phi}(\delta_{\Phi}) - P_{\Psi}(\delta_{\Psi}) = c
\delta_{\Phi}$.
Since $\delta_{\Phi} \neq 0$, this implies that $c=0$, and hence the statement in
(iii) follows.
It remains to show that ``(i) $\implies$ (ii)''. For this note
that by applying a suitable permutation if necessary, we can
assume without loss of generality that $\mu_{\Phi,u} = \mu_{\Psi,v}$.
Let $h:\Lambda(\Phi) \to \Lambda(\Psi)$ refer to the associated measurable boundary
correspondence, that is $\psi_{i}\circ h = h \circ \varphi_{i}$ for all
$i \in I$. The aim is to show that there exists an open
neighbourhood of $\Lambda(\Phi)$ such that
$h$ extends to a real-analytic map on this neighbourhood. For ease of notation
we will not distinguish between measures on $\Sigma_{d}$ and
measures on the corresponding limit sets arising from representations
of $\Sigma_{d}$. For $\omega \in I^{n}$,
define
$J_{\varphi_\omega, u}:= d \mu_{\Phi,u} \circ \varphi_{\omega}
/d \mu_{\Phi,u}$,
and similarly $J_{\psi_\omega, v}$ for the system $\Psi$.
We then have for each $i \in I$,
\begin{eqnarray*}
J_{\varphi_{i}, u} & = & \frac{d \mu_{\Phi,u} \circ \varphi_{i}}{d \mu_{\Phi,u}}
=
\frac{d \mu_{\Psi,v} \circ h \circ \varphi_{i}}{d \mu_{\Phi,u}}
= \frac{d \mu_{\Psi,v} \circ \psi_{i} \circ h }{d
\mu_{\Psi,v} \circ h } =
\frac{d \mu_{\Psi,v} \circ \psi_{i} }{d
\mu_{\Psi,v} }\circ h \\
& = & J_{\psi_{i}, v} \circ h.
\end{eqnarray*}
On the other hand, we have \begin{eqnarray*} J_{\psi_{i},v} & = &
\frac{d \mu_{\Psi,v} \circ \psi_{i}}{d
m_{\Psi,v} \circ \psi_{i}} \, \, \frac{d m_{\Psi,v}
\circ \psi_{i}}{d
m_{\Psi,v}} \, \, \frac{d m_{\Psi,v}}{d
\mu_{\Psi,v}}\\
&=& \frac{d \mu_{\Psi,v} }{d
m_{\Psi,v} }\circ \psi_{i} \, \, \frac{d m_{\Psi,v}
\circ \psi_{i}}{d
m_{\Psi,v}} \, \, \left(\frac{d \mu_{\Psi,v}}{d
m_{\Psi,v}}\right)^{-1}. \end{eqnarray*} Also, since $m_{\Psi,v} $ is the $(v I_{\Psi}-P_{\Psi}(v))$--conformal measure in the measure class
of $\mu_{\Psi,v}$,
we have
\[ \frac{d m_{\Psi,v} \circ \psi_{i}}{d
m_{\Psi,v}} = \left| \psi'_{i}\right|^{v} \, \mathrm e^{-P_{\Psi}(v)}
, \hbox{ for all } i \in I. \]
Now, the conformality condition in the definition of a CS immediately
gives that $\left| \psi'_{i}\right|^{v}$
has a real-analytic extension to an open neighbourhood
of $X$. Hence, by combining these observations with Proposition \ref{propext},
it follows that there exists an open set $W \supset X $ such that
$J_{\psi_{i}, v}$ has a
real-analytic extension $\widetilde{J}_{\psi_{i}, v}$ to $W$. In the
same way we obtain a
real-analytic extension $\widetilde{J}_{\varphi_{i}, u}$ for the system
$\Phi$.
Next, note that since $\Psi \in {\mathcal T}_{N}(\Sigma_{d})$,
there exists $j \in I$ such that $\widetilde{J}_{\psi_{j}, v}$ is not
constant.
Since $\widetilde{J}_{\varphi_{j}, u} = \widetilde{J}_{\psi_{j}, v} \circ h$,
the same holds for
$\widetilde{J}_{\varphi_{j}, u}$ (note, $h$ is defined on the perfect set
$\Lambda(\Phi)$). In particular, the set of zeros of
$\widetilde{J}_{\varphi_{j}, u}'$, and
$\widetilde{J}_{\psi_{j},v}'$ respectively, cannot have accumulation points in
$X$, and $Y$ respectively. Therefore, there exists $x \in \Lambda(\Phi)$
such that $\widetilde{J}_{\varphi_{j}, u}'(x) \neq 0$ and
$\widetilde{J}_{\psi_{j}, v}'(h(x)) \neq 0$.
This implies that there exists an inverse branch
$\widetilde{J}_{\psi_{j}, v}^{-1}$
which is analytic in a neighbourhood of $\widetilde{J}_{\varphi_{j}, u}(x)$ such that
$\widetilde{J}_{\psi_{j}, v}^{-1}\left(\widetilde{J}_{\varphi_{j},u}(x)\right)=h(x)$. By choosing
a neighbourhood $W' \subset X$ of $x$ sufficiently small, we obtain
that
$\widetilde{J}_{\psi_{j}, v}^{-1}\circ \widetilde{J}_{\varphi_{j}, u}$ is well-defined
and bijective
on $W'$, and
\[ \widetilde{J}_{\psi_{j}, v}^{-1}\circ \widetilde{J}_{\varphi_{j}, u}
(y)=h(y), \hbox{ for
all } y \in W' \cap \Lambda(\Phi).\]
It now follows that there exists $\omega \in I^{n}$, for some $n \in
\mathbb{N}$, such that $\varphi_{\omega}(X) \subset W'$. Hence,
there exists $W''\supset X$ on which
$\psi_{\omega}^{-1} \circ
\widetilde{J}_{\psi_{j}, v}^{-1}\circ \widetilde{J}_{\varphi_{j},u} \circ
\varphi_{\omega}$ is real-analytic and
such that $\psi_{\omega}^{-1} \circ
\widetilde{J}_{\psi_{j}, v}^{-1}\circ \widetilde{J}_{\varphi_{j},u} \circ
\varphi_{\omega}$ coincides with $h$ on
$W'' \cap \Lambda(\Phi)$.
\end{proof}
The following corollary is an immediate consequence of the previous theorem. We remark that the fact that $\mu_{\Phi} \cong \mu_{\Psi}$ implies that the two Lyapunov spectra coincide is somewhat characteristic for non-essentially affine systems. Namely, as we will see in Section \ref{EAS}, in this respect essentially affine systems behave rather differently. Also, note that the equivalence of (i) and (ii) is precisely the content of the Hanus--Urba{\'n}ski rigidity theorem.
\begin{corollary}\label{CorNASrigid} For $\Phi, \Psi \in {\mathcal T}_{N}(\Sigma_{d})$, the following statements are equivalent. \begin{itemize}
\item[ (i)] $\mu_{\Phi} \cong \mu_{\Psi}$; \item[ (ii)] $\Phi \sim \Psi $. \end{itemize} In particular, we also have \[ \mu_{\Phi} \cong \mu_{\Psi} \implies \ell_{\Phi} = \ell_{\Psi}.\] \end{corollary}
{\bf Remark:} Recently, it has been shown in \cite{PW} that for cocompact Fuchsian groups the pressure function is {\em not} a complete invariant of isometry, that is, equality of the pressure functions of two isomorphic cocompact Fuchsian groups does not necessarily imply that the two associated Riemann surfaces are isometric. This result suggests that for two systems $\Phi, \Psi \in {\mathcal T}_{N}(\Sigma_{d})$, the equality $P_{\Phi} = P_{\Psi}$ might likewise not imply $\Phi \sim \Psi$. However, the argument in \cite{PW} relies on Buser's constructive example of isospectral but non-isometric compact Riemann surfaces (see \cite{Buser}), and it is currently not clear (at least to the authors) how to adapt this construction to the situation of a NAS.
\section{Multifractal rigidity and flexibility for EAS}
\subsection{Deformation spaces for EAS}\label{EAS} We require the following elementary facts about how to switch forward and backward between two given essentially affine iterated function systems. \begin{lemma}\label{hoelder}
Let $\Phi =(\varphi_i : X \to \hbox{Int} X \, | \, i \in
I), \Psi =(\psi_i : Y \to \hbox{Int} Y \, | \, i \in
I) \in {\mathcal T}_{E}(\Sigma_{d})$ be given. Then there exists a H{\"o}lder continuous homeomorphism $h:\Lambda(\Phi) \to \Lambda(\Psi)$ such that \[ \psi_{i}\circ h=h\circ \varphi_{i}, \hbox{ for all $i\in I$}.\] Moreover, if $\varphi_i '=\psi_{i} '$ for all $i\in I$, then $h$ is bi-Lipschitz. \end{lemma}
\begin{proof}
Let $\Phi, \Psi $ be given as stated in the lemma. Without loss of generality we can assume that $X=Y=[0,1]$ and
that both systems are affine. For each $n \in \mathbb{N}$, we define a piecewise linear map $h_n$ by induction as follows. For $i \in I$ let $I_i:=\varphi_i(\Lambda (\Phi))$ and $J_i:=\psi_i(\Lambda(\Psi))$, and define $h_{0,i}:\hbox{Conv}(I_i)\to \hbox{Conv}(J_i)$ to be the uniquely determined linear surjection from $\hbox{Conv}(I_{i})$ onto $\hbox{Conv}(J_{i})$, where $\hbox{Conv}$ refers to the convex hull. The map $h_0 := \sum_{i\in I} h_{0,i}$ is piecewise linear and maps $\bigcup_{i\in I} \hbox{Conv}(I_i)$ onto $\bigcup_{i\in I} \hbox{Conv} (J_i)$. Similarly, for each $\omega \in I^{n}, i \in I$ and $n \in \mathbb{N}$, let $h_{\omega,i}$ be the uniquely determined linear surjection which maps $\hbox{Conv}(\varphi_{\omega i}(\Lambda(\Phi)))$ onto $\hbox{Conv}(\psi_{\omega i} (\Lambda(\Psi)))$. Hence, $h_{n}:= \sum_{\omega \in I^{n} } \sum_{i\in I} h_{\omega,i}$ is a piecewise linear surjection mapping $\bigcup_{\omega \in I^{n}} \bigcup_{i\in I} \hbox{Conv}(\varphi_{\omega}(I_i))$ onto $\bigcup_{\omega \in I^{n}} \bigcup_{i\in I} \hbox{Conv} (\psi_{\omega} (J_i))$.
Also, one readily verifies that $h_n$ converges uniformly to a continuous function $h:=\lim_{n\to\infty} h_n$. The fact that $h$ is H{\"o}lder continuous with H{\"o}lder exponent $s:=\min \left\{\log(\psi_i') / \log(\varphi_i'):i\in I\right\}$ can be seen as follows. Let $x= \pi_{\Phi} (x_{1} x_{2} \ldots)$ and $ y =\pi_{\Phi} (y_{1} y_{2} \ldots)$ be two distinct elements of $ \Lambda(\Phi)$. If $x_{1}\neq y_{1}$ then the assertion follows immediately, and hence we can assume without loss of generality that $x_{1}=y_{1}$. Then there exists a smallest $n \in {\mathbb{N}}$ such that $x_{n+1}\neq y_{n+1}$ and $x_{i}=y_{i}$, for all $1\leq i \leq n$. The open set condition gives that there exists $c>0$
such that $|x-y| \geq c \prod_{i=1}^{n} \varphi_{x_{i}}' $. Using this, we obtain
\[|h(x)-h(y)| \leq \prod_{i=1}^{n} \psi_{x_{i}}'
\leq \prod_{i=1}^{n} \varphi_{x_{i}}'^{s}
= \frac{1}{c^{s}} \left( c \prod_{i=1}^{n} \varphi_{x_{i}}'
\right)^{s} \leq \frac{1}{c^{s}} |x-y|^{s}.
\]
The remainder of the lemma is now straightforward. \end{proof} Note that each equivalence class in ${\mathcal T}_{E}(\Sigma_{d})/ \sim$ necessarily contains an affine fractal representation. Also, note that each affine fractal representation $\Phi=
(\varphi_i : X \to \hbox{Int} X \, | \, i \in I)$ can be parameterised by its {\em contraction rate vector} $(\varphi_{1}',\ldots,\varphi_{d}')$, and the previous lemma shows that this vector is unique up to permutations of its entries. We therefore obtain the following immediate consequence of the previous lemma.
\begin{proposition}\label{corEAS} There exists a canonical bijection from ${\mathcal T}_{E}(\Sigma_{d})/ \sim$ onto \[ \{(\lambda_{1}, \ldots,\lambda_{d} ) \in ({\mathbb{R}}^{+})^{d}: \sum_{i=1}^{d}\lambda_{i} \leq 1\}/\Pi_d.\] Here, $\Pi_{d}$ refers to the group of permutations of the elements in $I$. \end{proposition}
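The parameterisation by contraction rate vectors, together with the fact (used in the proof of Theorem \ref{linearrigid} below) that for an affine system $P_{\Phi}(u)=\log\sum_{i=1}^{d}\lambda_{i}^{u}$, can be sketched numerically. The following is only an illustrative aside, not part of the formal development; the function names are ours.

```python
import math

def pressure(rates, u):
    # For an affine system with contraction rate vector `rates`,
    # the pressure function is P(u) = log(sum_i rates[i]**u).
    return math.log(sum(r ** u for r in rates))

def normal_form(rates):
    # A representative of the class modulo the permutation group Pi_d:
    # sort the contraction rate vector.
    return tuple(sorted(rates))

lam = (0.2, 0.3, 0.4)                                 # sum <= 1, as required
assert sum(lam) <= 1
print(abs(pressure(lam, 0) - math.log(3)) < 1e-12)    # P(0) = log d: True
print(abs(pressure(lam, 1) - math.log(0.9)) < 1e-12)  # P(1) = log(sum): True
# Two vectors differing by a permutation represent the same class:
print(normal_form((0.4, 0.2, 0.3)) == normal_form(lam))  # True
```

Sorting the vector is one convenient way of picking a representative of each orbit under $\Pi_{d}$, which is exactly the quotient appearing in the proposition.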
\subsection{Multifractal rigidity for EAS}\label{Sec:linearrigid}
The goal of this section is to study rigidity for essentially affine iterated function systems. We show that for these systems one can only obtain a multifractal version of Sullivan's purely measurable rigidity theorem which is significantly weaker than the one for the non-essentially affine situation which we obtained in the previous section.
The following theorem states the main result of this section. It shows that in the EAS setting there is a one-to-one correspondence between the space of pressure functions and the moduli space ${\mathcal T}_{E}(\Sigma_{d})/ \sim$. Also, the theorem in particular gives that for essentially affine systems equivalence of $\mu_{\Phi}$ and $\mu_{\Psi}$ alone does not, in general, imply that the pressure functions of the systems coincide. In fact, as we will see in Section \ref{Subsec:MultiFlex}, this will only be the case if the two systems are equivalent. Clearly, this can be seen as a first instance exhibiting the difference between the essentially affine and the non-essentially affine settings.
\begin{theorem}[Multifractal rigidity for EAS]\label{linearrigid} $ \, $ \\
For $\Phi, \Psi \in {\mathcal T}_{E}(\Sigma_{d})$ non-degenerate, the following
statements are equivalent. \begin{itemize}
\item[ (i)] $\mu_{\Phi,u} \cong \mu_{\Psi,u}$ and $ P_{\Phi}(u)=P_{\Psi} (u)$, for some $u \in \mathbb{R} \setminus \{0\}$;
\item[ (ii)] $\Phi \sim \Psi $;
\item[(iii)] $I_\Phi\simeq I_\Psi$;
\item [(iv)] $P_\Phi = P_\Psi$;
\item[ (v)] $\ell_{\Phi} = \ell_{\Psi}.$
\end{itemize} \end{theorem}
\begin{proof}
Let $\Phi =(\varphi_i : X \to \hbox{Int} X \, | \, i \in
I), \Psi =(\psi_i : Y \to \hbox{Int} Y \, | \, i \in
I) \in {\mathcal T}_{E}(\Sigma_{d})$ be two given non-degenerate systems.
``(i)$\implies$(ii)'':
Suppose that $\mu_{\Phi,u} \cong \mu_{\Psi,u}$ and $P_{\Phi}(u)=P_{\Psi}(u)$, for some $u \in
\mathbb{R} \setminus \{0\}$; by applying a suitable permutation if necessary, we may assume that $\mu_{\Phi,u}=\mu_{\Psi,u}$. We then have
for each $n \in \mathbb{N}$ and $\omega \in
I^{n}$,
\begin{eqnarray*}
|\varphi_\omega (\Lambda(\Phi))|&\asymp & (\mu_{\Phi,u}\circ
\pi_{\Phi}^{-1}(\varphi_\omega
(\Lambda(\Phi))))^{1/u} \mathrm e^{nP_{\Phi}(u)/u}\\
& = & (\mu_{\Psi,u}\circ \pi_{\Psi}^{-1}(\psi_\omega
(\Lambda(\Psi))))^{1/u} \mathrm e^{nP_{\Psi}(u)/u} \asymp |\psi_\omega (\Lambda(\Psi))|.
\end{eqnarray*}
We can now proceed as in Lemma \ref{hoelder} to
build a bi-Lipschitz map $h: \Lambda(\Phi) \to \Lambda(\Psi)$ as the limit of
piecewise linear surjections. (Note that the
existence of $h$ can alternatively be obtained by applying Theorem
2.2 in \cite{HU}.)
``(ii)$\implies$(i)'': Suppose that $\Phi \sim \Psi$, and note that a bi-Lipschitz conjugation does not alter the pressure function. Hence, as in the previous case,
we obtain for each $n \in \mathbb{N}$, $\omega \in I^{n}$ and $u \in
\mathbb{R}$, \begin{eqnarray*}
\mu_{\Phi,u}\circ \pi_\Phi^{-1}(\varphi_\omega (\Lambda(\Phi)))&\asymp& |\varphi_\omega
(\Lambda(\Phi))|^{u} \mathrm e^{-nP_{\Phi}(u)}\asymp |h(\varphi_\omega
(\Lambda(\Phi)))|^{u} \mathrm e^{-nP_{\Phi}(u)}\\
& \asymp & |\psi_\omega
(\Lambda(\Psi))|^{u} \mathrm e^{-nP_{\Psi}(u)} \asymp \mu_{\Psi,u} \circ \pi_{\Psi}^{-1} (\psi_\omega (\Lambda(\Psi))). \end{eqnarray*} Therefore, using the ergodicity of $\mu_{\Phi,u}$ and $ \mu_{\Psi,u}$, it follows that $\mu_{\Phi,u}=\mu_{\Psi,u}$.
``(i)$\iff$(iii)'': This is an immediate consequence of the fact that $\mu_{\Phi,u}$ and $ \mu_{\Psi,u}$ are Gibbs measures for the potential $u I_\Phi - P_{\Phi}(u)$, and $u I_\Psi - P_{\Psi}(u)$ respectively.
``(iii)$\implies$(iv)'': This follows from the definition of the pressure function.
``(iv)$\iff$(v)'': This follows since $P_\Phi$ and $\ell_\Phi$ are a Legendre transform pair.
``(iv)$\implies$(iii)'': Suppose that $P_\Phi = P_\Psi$, and let $\Phi_a$ and $\Psi_a$ be the affine fractal representations within the equivalence classes $[\Phi], [\Psi] \in {\mathcal T}_{E}(\Sigma_{d})/ \sim$. Also, let $(\lambda_1,\ldots ,\lambda_d)$ and $(\rho_1,\ldots ,\rho_d)$ refer to the contraction rate vectors associated with $\Phi_{a}$, and $\Psi_{a}$ respectively. Using the fact that (ii) implies (iv), we obtain \[P_{\Phi_a}=P_\Phi=P_\Psi=P_{\Psi_a}.\] Since for affine systems the pressure function at $u$ is equal to the logarithm of the sum of the contraction rates raised to the power $u$, it follows that \[\log \sum_{i=1}^{d}\lambda_i^u=P_{\Phi_a}(u)=P_{\Psi_a}(u)=\log \sum_{i=1}^{d} \rho_i^u, \hbox{ for all } u\in \mathbb{R}.\] We can now employ a finite inductive argument as follows. Let the $\lambda_{i}$ and $\rho_{i}$ be ordered by their sizes such that $\lambda_{i_{1}} \geq \lambda_{i_{2}} \geq \ldots \geq \lambda_{i_{d}}$ and $\rho_{j_{1}} \geq \rho_{j_{2}} \geq \ldots \geq \rho_{j_{d}}$. Since $\sum_{i=1}^{d}\lambda_i^u= \sum_{i=1}^{d} \rho_i^u $, it follows that
\[ \left(\frac{\lambda_{i_{1}}}{\rho_{j_{1}}} \right)^u=\frac{1+\sum_{m=2}^{d}(\rho_{j_{m}}/\rho_{j_{1}})^u}{1+ \sum_{m=2}^{d}(\lambda_{i_{m}}/\lambda_{i_{1}})^u}. \] Since for each $u \geq 0$ the right hand side of the latter equality lies between $1/d$ and $d$, we deduce, by letting $u$ tend to infinity, that the assumption $\lambda_{i_{1}}\neq\rho_{j_{1}}$ gives rise to an immediate contradiction. Hence, we have that $\lambda_{i_{1}} =\rho_{j_{1}}$. For the inductive step assume that for some $k \in \{1,\ldots,d-1\}$ we have $\lambda_{i_{n}}= \rho_{j_{n}}$, for all $n\in \{1,\ldots,k\}$. We then have $\sum_{m=k+1}^{d}\lambda_{i_{m}}^u= \sum_{m=k+1}^{d} \rho_{j_{m}}^u $, and hence \[ \left(\frac{\lambda_{i_{k+1}}}{\rho_{j_{k+1}}} \right)^u=\frac{1+\sum_{m=k+2}^{d}(\rho_{j_{m}}/\rho_{j_{k+1}})^u}{1+ \sum_{m=k+2}^{d}(\lambda_{i_{m}}/\lambda_{i_{k+1}})^u}. \] As above, the right hand side of the latter equality lies between $1/d$ and $d$, and hence, by letting $u$ tend to infinity, we get an immediate contradiction to the assumption $\lambda_{i_{k+1}}\neq\rho_{j_{k+1}}$. This shows that the contraction rate vectors $(\lambda_1,\ldots ,\lambda_d)$ and $(\rho_1,\ldots ,\rho_d)$ coincide up to a permutation. Combining this observation with the fact that (ii) implies (iii), it follows that \[I_\Phi \simeq I_{\Phi_a}=I_{\Psi_a}\simeq I_\Psi.\] This completes the proof of the theorem.
\end{proof}
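The inductive argument in the step ``(iv)$\implies$(iii)'' can be checked numerically for affine systems. The sketch below (our own illustration, with made-up rate vectors) confirms that permuted contraction rate vectors yield identical pressure functions, while for distinct sorted vectors the dominant rate separates the pressures for large $u$, exactly as in the limit argument of the proof.

```python
import math

def pressure(rates, u):
    # P(u) = log(sum_i rates[i]**u) for an affine system.
    return math.log(sum(r ** u for r in rates))

lam = [0.5, 0.25, 0.1]
perm = [0.25, 0.1, 0.5]          # same vector up to permutation
other = [0.45, 0.3, 0.1]         # a genuinely different vector

# Permuted vectors give identical pressure functions ...
assert all(abs(pressure(lam, u) - pressure(perm, u)) < 1e-12
           for u in [-2.0, -0.5, 0.0, 1.0, 3.0])

# ... while for distinct sorted vectors the ratio argument of the proof
# shows up numerically: for large u the largest rate dominates, so
# P(u)/u approaches log(max rate), separating the two systems.
u = 200.0
assert abs(pressure(lam, u) / u - math.log(0.5)) < 1e-2
assert abs(pressure(other, u) / u - math.log(0.45)) < 1e-2
print("pressure comparison checks passed")
```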
The following corollary is an immediate consequence of the previous theorem. Note that a comparison of the statement here with Corollary \ref{CorNASrigid} (see also Theorem \ref{NAS}) clearly shows in which respect essentially affine systems have to be considered less rigid than non-essentially affine systems. Also, we remark that it is straightforward to incorporate the degenerate cases. \begin{corollary}\label{CorEASrigid}
For $\Phi, \Psi \in {\mathcal T}_{E}(\Sigma_{d})$, the following
statements are
equivalent.
\begin{itemize}
\item[ (i)] $\mu_{\Phi} \cong \mu_{\Psi}$ and $
\delta_{\Phi}=\delta_{\Psi}$;
\item[ (ii)] $\Phi \sim \Psi $.
\end{itemize} Moreover, we have
\[ \mu_{\Phi} \cong \mu_{\Psi} \hbox{ and } \delta_{\Phi} =
\delta_{\Psi} \implies \ell_{\Phi} = \ell_{\Psi}.\]
\end{corollary} \subsection{Multifractal flexibility for EAS and applications to Lyapunov spectra}\label{Subsec:MultiFlex} As shown in Theorem \ref{linearrigid}, for two essentially affine systems $\Phi$ and $\Psi$ the relation $\mu_{\Phi} \cong \mu_{\Psi}$ alone does not necessarily imply that the two systems are equivalent, nor that their pressure functions coincide. This naturally raises the question of what can be said about the pressure functions in case $\mu_{\Phi} \cong \mu_{\Psi}$ and $\delta_{\Phi} \neq \delta_{\Psi}$. The following theorem gives a complete answer to this question. \begin{theorem}[Multifractal flexibility for EAS]
\label{Thm:Multiflex} $ \, $ \\
For $\Phi, \Psi \in {\mathcal T}_{E}(\Sigma_{d})$ and $u,v\in \mathbb{R}\setminus \{0\}$, the following
three statements are equivalent. \begin{itemize}
\item[ (i)] $\displaystyle \mu_{\Phi,u} \cong \mu_{\Psi,v}$;
\item[(ii)] $\displaystyle I_\Phi\simeq \frac{v}{u} I_\Psi + \frac{P_{\Phi}(u)-P_{\Psi}(v)}{u}$;
\item[(iii)] $\displaystyle P_{\Phi}(s) = P_{\Psi}\left(s\cdot \frac{v}{u}\right)+ s\cdot
\frac{P_{\Phi}(u)-P_{\Psi}(v)}{u}$, for all $s \in
\mathbb{R}$. \end{itemize} Furthermore, each of the statements in (i) - (iii) implies \begin{itemize}
\item[(iv)] $\alpha_{\Phi} (u) \, \ell_{\Phi}(\alpha_{\Phi} (u)) = \alpha_{\Psi} (v)\, \ell_{\Psi}(\alpha_{\Psi} (v)) .$
\end{itemize} \end{theorem}
\begin{proof} The equivalence ``(ii)$\iff$(iii)''
can be obtained by exactly the same means as the equivalence ``(iii)$\iff$(iv)''
in Theorem \ref{linearrigid}. Hence, it is sufficient to show that ``(i)$\iff$(ii)''.
``(i)$\iff$(ii)'': By using a permutation of the generators if necessary, we can
assume without loss of generality that $\mu_{\Phi,u} = \mu _{\Psi,v}$.
It is then a standard
result for Gibbs measures that this is equivalent to $u I_\Phi \simeq v
I_\Psi + P_{\Phi}(u)-P_{\Psi}(v)$, which, after dividing by $u$, is
precisely statement (ii).
To finish the proof, it remains to show that (i) implies (iv).
For this, we have by Proposition \ref{MFF},
\begin{eqnarray*}
\alpha_{\Phi} (u) \, \ell_{\Phi}(\alpha_{\Phi} (u)) & =& u \alpha_{\Phi}(u) + P_{\Phi}(u)
= -\int \left(u I_{\Phi}-P_{\Phi}(u)
\right) \mathrm d \mu_{\Phi,u} \\ & =& -\int \left(v I_{\Psi}-P_{\Psi}(v)\right)
\mathrm d \mu_{\Psi,v}
= v \alpha_{\Psi} (v) + P_{\Psi}(v) \\ & =& \alpha_{\Psi} (v)\, \ell_{\Psi}(\alpha_{\Psi} (v)) . \end{eqnarray*} \end{proof} For the special case in which $u=\delta_\Phi$ and $v=\delta_\Psi$, the previous theorem has the following immediate implication. \begin{corollary} $ \, $ \\
For $\Phi, \Psi \in {\mathcal T}_{E}(\Sigma_{d})$, the following
statements are equivalent. \begin{itemize}
\item[ (i)] $\mu_{\Phi} \cong \mu_{\Psi}$;
\item[(ii)] $I_\Phi\simeq \delta_{\Psi}/\delta_{\Phi} \cdot I_\Psi$;
\item[(iii)] $P_{\Phi}(s) = P_{\Psi}(\delta_{\Psi}/\delta_{\Phi} \cdot
s)$, for all $s \in \mathbb{R}$.
\end{itemize} \end{corollary} Our next aim is to show that there exist systems which are not bi-Lipschitz equivalent but which nevertheless admit multifractal measures that coincide up to permutation of the generators. For this, note that by Theorem \ref{Thm:Multiflex} (ii) we have \[\frac{u}{v}I_\Phi\simeq I_\Psi + \frac{P_\Phi(u)-P_\Psi(v)}{v}< \frac{P_\Phi(u)-P_\Psi(v)}{v},\] where the inequality uses $I_\Psi<0$. By monotonicity of the pressure function, it therefore follows that \begin{equation*}\label{admissible} P_\Phi\left(\frac{u}{v}\right)<\frac{P_\Phi(u)-P_\Psi(v)}{v}. \end{equation*} This observation motivates the following notion of admissibility. \begin{definition}\label{adm}
Let $\Phi \in \mathcal{T}_{E}(\Sigma_{d})$ and
$u,v,p \in
\mathbb{R}$ such that $v \neq 0$ be given.
The triple $(u,v,p)$ is called {\em $\Phi$-admissible} if and only if
\[ P_{\Phi}\left(\frac{u}{v}\right) < \frac{P_{\Phi}(u) -p}{v}
.\]
\end{definition}
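For affine systems, where $P_{\Phi}(u)=\log\sum_{n}\lambda_{n}^{u}$, admissibility can be tested directly. The sketch below is our own illustration; it assumes $v>0$ and uses the companion rates $\rho_{n}=(\mathrm e^{p-P_{\Phi}(u)}\lambda_{n}^{u})^{1/v}$ from the proof of Proposition \ref{flexsection} below, for which $\Phi$-admissibility is equivalent to $\sum_{n}\rho_{n}<1$.

```python
import math

def pressure(rates, u):
    # P(u) = log(sum_n rates[n]**u) for an affine system.
    return math.log(sum(r ** u for r in rates))

def is_admissible(rates, u, v, p):
    # Definition: (u, v, p) is Phi-admissible iff P(u/v) < (P(u) - p)/v.
    return pressure(rates, u / v) < (pressure(rates, u) - p) / v

def rho_vector(rates, u, v, p):
    # Companion contraction rates: rho_n = (e^{p - P(u)} * lam_n**u)**(1/v).
    c = math.exp(p - pressure(rates, u))
    return [(c * r ** u) ** (1.0 / v) for r in rates]

lam = [0.4, 0.2, 0.1]
u, v, p = 1.0, 2.0, -1.5         # a made-up admissible triple
rho = rho_vector(lam, u, v, p)
# For v > 0, admissibility is equivalent to sum(rho) < 1:
print(is_admissible(lam, u, v, p), sum(rho) < 1)   # True True
# The constructed system realises p as its pressure at v:
print(abs(pressure(rho, v) - p) < 1e-12)           # True
```

The equivalence is elementary: $\sum_{n}\rho_{n}=\mathrm e^{(p-P_{\Phi}(u))/v+P_{\Phi}(u/v)}$, which is less than $1$ precisely when the admissibility inequality holds.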
\begin{proposition}[Flexibility of Lyapunov spectra for
EAS (I)]\label{flexsection} $\,$ \\
Let a non-degenerate $\Phi \in {\mathcal T}_{E}(\Sigma_{d})$ be
given, and let $(u,v,p)$ be a $\Phi$-admissible triple. Then there exists $[\Psi] \in {\mathcal T}_{E}(\Sigma_{d})/ \sim$ (which is unique up to permutations of the generators of $\Psi$) such that \[ \mu_{\Phi,u} \cong \mu_{\Psi,v} \;\mbox{ and } \;p=P_{\Psi}(v).\]
\end{proposition}
\begin{proof}
Without loss of generality we can assume that $\Phi$ is an AS.
Let $(\lambda_{1},\ldots,\lambda_{d}) \in (\mathbb{R}^{+})^{d}$ be the
contraction rate vector associated with $\Phi$, and let
$(u,v,p)$ be a given $\Phi$--admissible triple.
Then define
\[ \rho_{n}:= \left(\mathrm e^{p-P_{\Phi}(u) } \, \lambda_{n}^{u}
\right)^{1/v} , \, \hbox{ for each } n \in \{1,\ldots,d\}.\]
An elementary calculation immediately shows that the
$\Phi$--admissibility of $(u,v,p)$
is equivalent to
$\sum_{n=1}^{d} \rho_{n} < 1$. Hence, by Corollary \ref{corEAS}
there exists an affine fractal representation $\Psi=(\psi_{i}:
[0,1] \to (0,1) \, | \, i \in I) \in {\mathcal T}_{E}(\Sigma_{d})$
whose contraction rate vector is $(\rho_{1},\ldots,\rho_{d})$.
Next, observe that
\begin{eqnarray*}
u I_{\Phi}-P_{\Phi}(u) \hspace{-2mm} &-& \hspace{-2mm}
\left( v I_{\Psi}-P_{\Psi}(v)\right)
= -p+ P_{\Psi}(v) = -p +\lim_{k \to \infty}
\frac{1}{k} \log \left(\sum_{n=1}^{d} \rho_{n}^{v}
\right)^{k}\\
&=& -p + \log \left( \mathrm e^{p-P_{\Phi}(u)} \sum_{n=1}^{d}
\lambda_{n}^{u}\right) = -P_{\Phi}(u) + \log \sum_{n=1}^{d}
\lambda_{n}^{u} =0.
\end{eqnarray*}
This shows that the potentials $u I_{\Phi}-P_{\Phi}(u) $ and
$v I_{\Psi}-P_{\Psi}(v)$ coincide, and also that
$p=P_{\Psi}(v)$.
It follows that the Gibbs measures corresponding to these
potentials have to be equal up to permutation, that is
$\mu_{\Phi, u} \cong \mu_{\Psi,v}$.
\end{proof}
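For affine systems the construction in the proof can be verified numerically: the potentials $u I_{\Phi}-P_{\Phi}(u)$ and $v I_{\Psi}-P_{\Psi}(v)$ coincide symbol by symbol, and statement (iv) of Theorem \ref{Thm:Multiflex} holds for the resulting pair. The following is an illustrative sketch only, with a made-up admissible triple; the helper names are ours.

```python
import math

def pressure(rates, u):
    # P(u) = log(sum_n rates[n]**u) for an affine system.
    return math.log(sum(r ** u for r in rates))

def lyapunov_alpha(rates, u):
    # alpha(u) = -int I dmu_u; for an affine system the Gibbs weights are
    # rates[i]**u / sum_j rates[j]**u and I = log(rate) on cylinders.
    z = sum(r ** u for r in rates)
    return -sum((r ** u / z) * math.log(r) for r in rates)

lam = [0.4, 0.2, 0.1]
u, v, p = 1.0, 2.0, -1.5                       # a Phi-admissible triple
c = math.exp(p - pressure(lam, u))
rho = [(c * r ** u) ** (1.0 / v) for r in lam]  # companion system Psi

# The potentials u*I_Phi - P_Phi(u) and v*I_Psi - P_Psi(v) coincide:
for l_n, r_n in zip(lam, rho):
    assert abs((u * math.log(l_n) - pressure(lam, u))
               - (v * math.log(r_n) - pressure(rho, v))) < 1e-9

# Statement (iv) of the multifractal flexibility theorem:
lhs = u * lyapunov_alpha(lam, u) + pressure(lam, u)
rhs = v * lyapunov_alpha(rho, v) + pressure(rho, v)
assert abs(lhs - rhs) < 1e-9
print("flexibility checks passed")
```

Since the two potentials agree, the associated Gibbs weights agree as well, which is why the integral identity from the proof of Theorem \ref{Thm:Multiflex} comes out exactly.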
We end this section by giving a brief discussion of the extent of
flexibility of an EAS. For this it is more convenient to work
with the {\em moduli space of $\Sigma_{d}$}
\[ {\mathcal M}_{E} (\Sigma_{d}) :=
{\mathcal T}_{E}(\Sigma_{d}) / \sim,\] where without loss of generality we always assume that an equivalence class in ${\mathcal M}_{E} (\Sigma_{d}) $ is represented by the unique affine system contained in it. Now, first note that there clearly always is a
trivial measure--wise overlap between the Lyapunov spectra of two
EAS, namely $\mu_{\Phi,0} \cong \mu_{\Psi,0}$ for all $\Phi, \Psi
\in {\mathcal T}_{E}(\Sigma_{d})$. As we have seen above, for EAS
there also is the possibility of non-trivial overlaps, and we
will now see that these are generically represented by
$2$--dimensional
submanifolds of ${\mathcal M}_{E} (\Sigma_{d})$.
\begin{definition} Two systems $\Phi,\Psi \in\mathcal{M}_{E}\left(\Sigma_{d}\right)$ are called \emph{Lyapunov--related} if and only if there exist $u,v\in \mathbb{R} \setminus \{0\}$ such that $\mu_{\Phi,u}\cong\mu_{\Psi,v}$. \end{definition}
Using Theorem \ref{Thm:Multiflex} (ii), we immediately see that
if $\Phi$ and $\Psi$ are Lyapunov--related, that is
$\mu_{\Phi,u}\cong\mu_{\Psi,v}$ for some $u,v\in \mathbb{R} \setminus \{0\}$, then for each $s \in \mathbb{R} \setminus \{0\}$ there exists $t \in \mathbb{R} \setminus \{0\}$ such that $\mu_{\Phi,s}\cong\mu_{\Psi,t}$ (simply choose $t = s \cdot v/u$). More precisely, we have the following proposition which shows that for a non-degenerate $\Phi$ the set of systems which are Lyapunov--related to $\Phi$ forms a $2$-dimensional submanifold of $\mathcal{M}_{E}\left(\Sigma_{d}\right)$, whereas if $\Phi$ is degenerate then this set is a $1$--dimensional submanifold. Note that here, the
case $d=2$ appears to be special, since it permits exactly
two equivalence classes modulo the Lyapunov--relation,
namely the diagonal in $\mathcal{M}_{E}\left(\Sigma_{2}\right)$
and its complement in $\mathcal{M}_{E}\left(\Sigma_{2}\right)$
(see Figure \ref{fig}).
In all other cases there is a continuum of such equivalence classes.
\begin{proposition}[Flexibility of Lyapunov spectra for
EAS (II)]\label{ending}
\begin{itemize}
\item[ (i)] The `Lyapunov--relation' is an equivalence relation on $\mathcal{M}_{E}\left(\Sigma_{d}\right)$. \item[(ii)] Let $\Phi \in \mathcal{M}_{E}\left(\Sigma_{d}\right)$ be given, and let $(\lambda_{1},\ldots,\lambda_{d})$ be the contraction rate vector of $\Phi$. Then the following holds for the equivalence class $[[\Phi]]$ of $\Phi$ modulo
the Lyapunov--relation. If $\Phi$ is degenerate, then $[[\Phi]] $ is equal to
\[ \left\{ \Psi \in \mathcal{M}_{E}\left(\Sigma_{d}\right):
\rho_{i}
= t , \hbox{ for all } i \in I, \hbox{
for some } t \in (0,1/d] \right\}.\] If $\Phi$ is non-degenerate, then $[[\Phi]] $ is equal to
\[ \left\{ \Psi \in \mathcal{M}_{E}\left(\Sigma_{d}\right): \rho_{i}
= t \cdot \lambda_{i}^{s}, \hbox{ for all } i \in I,
\hbox{
for some } s,t \in \mathbb{R} \setminus
\{0\} \right\}. \]
\end{itemize} Here, $(\rho_{1},\ldots,\rho_{d})$ refers to the contraction rate vector of the system $\Psi$.
\end{proposition}
\begin{figure}\label{fig}
\end{figure}
\begin{proof} The assertion in (i) is an immediate consequence of the definition of the relation $\cong$. Furthermore, the first part in (ii) follows since for degenerate systems the Lyapunov spectrum is trivial. For the second part of (ii) we proceed as follows. Let
$\Phi, \Psi
\in\mathcal{M}_{E}\left(\Sigma_{d}\right)$
be two non-degenerate systems with contraction rate vector $(\lambda_{1},\ldots,\lambda_{d})$, and $(\rho_{1},\ldots,\rho_{d})$ respectively. First, if $\Phi$ and $\Psi$ are Lyapunov--related, then Theorem \ref{Thm:Multiflex} implies that there exist $u,v\in\mathbb{R} \setminus \{0\}$ such that \[ \left(\begin{array}{cc} \log\lambda_{1} & 1\\ \vdots & \vdots\\ \log\lambda_{d} & 1\end{array}\right)\left(\begin{array}{c} u\\ -P_{\Phi}(u)\end{array}\right)=\left(\begin{array}{cc} \log\rho_{1} & 1\\ \vdots & \vdots\\ \log\rho_{d} & 1\end{array}\right)\left(\begin{array}{c} v\\ -P_{\Psi}\left(v\right)\end{array}\right).\] This implies that for some $a,b \in \mathbb{R}, a \neq 0$, we have\[ \left(\begin{array}{c} \log\lambda_{1}\\ \vdots\\ \log\lambda_{d}\end{array}\right)=\left(\begin{array}{cc} \log\rho_{1} & 1\\ \vdots & \vdots\\ \log\rho_{d} & 1\end{array}\right)\left(\begin{array}{c} a\\ b\end{array}\right) .\] This settles one direction of the equality. For the reverse direction, assume that \[ \left(\begin{array}{c} \log\lambda_{1}\\ \vdots\\ \log\lambda_{d}\end{array}\right)\in\textrm{span}\left(\left(\begin{array}{c} \log\rho_{1}\\ \vdots\\ \log\rho_{d}\end{array}\right),\left(\begin{array}{c} 1\\ \vdots\\ 1\end{array}\right)\right).\] We then have that $I_{\Phi}=vI_{\Psi}+u$, for uniquely determined $u,v\in\mathbb{R}$, $v \neq 0$, giving that $u= P_{\Phi}(1)-P_{\Psi}(v)$. Hence, it follows that $I_{\Phi} = vI_{\Psi}+P_{\Phi}(1)-P_{\Psi}(v)$, which gives $\mu_{\Phi,1}=\mu_{\Psi,v}$. This shows that $\Phi$ and $\Psi$ are Lyapunov--related. \end{proof}
\end{document}
\begin{document}
\theoremstyle{plain}\newtheorem{teo}{Theorem}[section] \theoremstyle{plain}\newtheorem{prop}[teo]{Proposition} \theoremstyle{plain}\newtheorem{lem}[teo]{Lemma} \theoremstyle{plain}\newtheorem{cor}[teo]{Corollary} \theoremstyle{definition}\newtheorem{defin}[teo]{Definition} \theoremstyle{remark}\newtheorem{rem}[teo]{Remark} \theoremstyle{plain} \newtheorem{assum}[teo]{Assumption} \theoremstyle{definition}\newtheorem{example}[teo]{Example}
\begin{abstract} Helfer in \cite{Hel} was the first to produce an example of a spacelike Lorentzian geodesic with a continuum of conjugate points. In this paper we show the following result: given an interval $[a,b]$ of $I\!\!R$ and any closed subset $F$ of $I\!\!R$ contained in $\left]a,b\right]$, there exists a Lorentzian manifold $(M,g)$ and a spacelike geodesic $\gamma:[a,b]\to M$ such that $\gamma(t)$ is conjugate to $\gamma(a)$ along $\gamma$ if and only if $t\in F$. \end{abstract}
\maketitle
\begin{section}{Introduction} \label{sec:intro} It is well known that, in Riemannian geometry, the set of conjugate (or, more generally, focal) points along a geodesic is discrete; Beem and Ehrlich (see \cite{BE, BEE}) have shown that the same holds for causal, i.e., timelike or lightlike, geodesics in a Lorentzian manifold. The issue of the lack of discreteness for the set of conjugate points along a geodesic in a semi-Riemannian manifold with metric of arbitrary index has been somewhat ignored or overlooked in the literature (see for instance \cite[Exercise~8, pag.\ 299]{ON}, or \cite[The Index Theorem]{MasielloBook}). However, without a suitable nondegeneracy assumption, the classical proof of discreteness for the Riemannian case does not work in the general case, and Helfer in \cite{Hel} gave the first counterexample to the discreteness of conjugate points along a spacelike Lorentzian geodesic. In \cite[Section~11]{Hel}, an example of a whole segment of conjugate points is produced.
The occurrence of an infinite number of conjugate points along a compact segment of a semi-Riemannian geodesic is a rather pathological phenomenon: for instance, it cannot happen if the metric is real-analytic; moreover, the nondegeneracy assumption mentioned above is {\em generic\/} (see for instance \cite{PT2}). Nevertheless, in order to fully understand the theory of conjugate points for non positive definite metrics, it is natural to ask what the possible ``shapes'' of the set of conjugate points along a geodesic are. In this paper we answer this question by reducing the problem to the study of intersection theory of curves in the Lagrangian Grassmannian of a symplectic space.
Given a geodesic $\gamma:[a,b]\to M$ in a semi-Riemannian manifold $(M,g)$, the set $l(t)$ of pairs $\big(J(t),gJ'(t)\big)$, where $J$ is a Jacobi field along $\gamma$ with $J(a)=0$, is a Lagrangian subspace of the symplectic space $T_{\gamma(t)}M\oplus T_{\gamma(t)}M^*$ endowed with its canonical symplectic form; the conjugate points along $\gamma$ correspond to instants $t\in\left]a,b\right]$ where $l(t)$ is {\em not\/} transversal to the Lagrangian subspace $\{0\}\oplus T_{\gamma(t)}M^*$. The use of a (parallel) trivialization of $TM$ along $\gamma$ allows one to associate to $l$ a curve in the Lagrangian Grassmannian $\Lambda$ of the fixed symplectic space $I\!\!R^n\oplus{I\!\!R^n}^*\cong I\!\!R^{2n}$. Conjugate points along $\gamma$ correspond therefore to intersections of this curve with the subvariety of $\Lambda$ consisting of Lagrangians that are not transverse to $\{0\}\oplus{I\!\!R^n}^*$. Details of this construction can be found in \cite{Hel, MPT, PT2, catania}. Determining precisely which curves of Lagrangians arise from a semi-Riemannian geodesic is a rather difficult task. A partial result in this direction can be found in the last section of \cite{MPT}, where it is proven that a necessary condition for a smooth curve in the Lagrangian Grassmannian $\Lambda$ to arise from a semi-Riemannian geodesic is that it be tangent to a singular distribution of affine planes in $\Lambda$. However, this condition alone is not sufficient, and attempts to produce interesting examples of conjugate points along geodesics using this characterization lead quickly to rather involved computations.
In this paper we introduce a new procedure for constructing a curve $\xi$ in the Lagrangian Grassmannian $\Lambda$ starting from a semi-Riemannian geodesic $\gamma$. This new construction is {\em canonical\/} (see Remark~\ref{thm:remcanonical}), i.e., it does not depend on the choice of a trivialization of $TM$ along $\gamma$, and, again, the curve $\xi$ contains the relevant information about the conjugate points along $\gamma$. The main feature of this new construction is that it is very easy to characterize which curves $\xi$ actually arise from semi-Riemannian geodesics; namely, such curves are precisely those for which $\xi'(t)$ (which is naturally identified with a symmetric bilinear form on $\xi(t)$) is {\em nondegenerate\/} for all $t$ (Theorem~\ref{thm:ABSTRACT}). Using this characterization, it is easy to produce examples and counterexamples concerning the occurrence of several {\em types\/} of conjugate points along a semi-Riemannian geodesic; we prove in particular that any compact subset of $I\!\!R$ appears as the set of conjugate instants along some spacelike Lorentzian geodesic (Theorem~\ref{thm:MAIN}).
\end{section}
\begin{section}{The abstract setup} \label{sec:setup} Given (finite dimensional) real vector spaces $V$, $W$ we denote by $\mathrm{Lin}(V,W)$ the space of linear maps from $V$ to $W$ and by $\mathrm{B}(V,W)$ the space of bilinear forms $B:V\times W\to I\!\!R$; by $\mathrm{B}_{\textrm{sym}}(V)$ we denote the subspace of $\mathrm{B}(V,V)$ consisting of {\em symmetric\/} bilinear forms. The {\em index\/} of a symmetric bilinear form $B\in\mathrm{B}_{\textrm{sym}}(V)$ is defined as the supremum of the dimensions of the subspaces of $V$ on which $B$ is negative definite. We always implicitly identify the spaces $\mathrm{B}(V,W)$ and $\mathrm{Lin}(V,W^*)$ by the isomorphism $B(v,w)=B(v)(w)$, where $W^*$ denotes the dual space of $W$.
Let $(M,\mathfrak g)$ be an $(n+1)$-dimensional semi-Riemannian manifold and let $\gamma:[a,b]\to M$ be a non lightlike geodesic, i.e., $\mathfrak g(\dot\gamma,\dot\gamma)$ is not zero. Using a parallel trivialization of the normal bundle of $\gamma$, the Jacobi equation along $\gamma$ can be seen as a second order linear system of differential equations in $I\!\!R^n$ of the form $v''=Rv$, where $t\mapsto R(t)$ is a smooth curve of $g$-symmetric linear endomorphisms of $I\!\!R^n$ representing a component of the curvature tensor and $g$ is a nondegenerate symmetric bilinear form in $I\!\!R^n$ representing the semi-Riemannian metric $\mathfrak g$ on the normal bundle of $\gamma$. An equation of the form $v''=Rv$ with a $g$-symmetric $R$ is called a {\em Morse--Sturm\/} system; the index of $g$ is called the {\em index of the Morse--Sturm system}.
We recall from \cite{Hel} the following: \begin{lem}\label{thm:lemhelfer} Every Morse--Sturm system in $I\!\!R^n$ can be obtained by a parallel trivialization of the normal bundle from the Jacobi equation along a non lightlike geodesic $\gamma:[a,b]\to M$, where $(M,\mathfrak g)$ is an $(n+1)$-dimensional (conformally flat) semi-Riemannian manifold. Moreover, the geodesic can be chosen to be either spacelike or timelike; in the first case the index of the metric $\mathfrak g$ equals the index of the Morse--Sturm system, and in the latter case the index of the metric $\mathfrak g$ equals the index of the Morse--Sturm system plus one. \end{lem} \begin{proof} Consider $M=I\!\!R^{n+1}$ with coordinates $(x_1,\ldots,x_{n+1})$ and let $\gamma:[a,b]\to M$ be given by $ \gamma(t)=t\frac\partial{\partial x_{n+1}}$; consider in $M$ the metric $\mathfrak g=e^\Omega\mathfrak g_0$, with $\mathfrak g_0=g\pm\mathrm dx_{n+1}^2$, and $\Omega$ given by: \[\Omega(x_1,\ldots,x_{n+1})=\pm\sum_{i,j=1}^n {g\Big(R(x_{n+1})\frac\partial{\partial x_i},\frac\partial{\partial x_j}\Big)}x_ix_j.\] The choice of the sign $\pm$ in the above expressions is made according to the desired causal character of $\gamma$. It is easily checked that the Christoffel symbols of the Levi--Civita connection of $\mathfrak g$ in the canonical basis vanish along $\gamma$; this implies that $\gamma$ is a geodesic and that $\big(\frac\partial{\partial x_i}\big)_{i=1}^n$ gives a parallel trivialization of the normal bundle $\dot\gamma^\perp$. \end{proof}
Setting $\alpha=gv'$, the Morse--Sturm equation $v''=Rv$ can be written as the following first order linear system of differential equations: \begin{equation}\label{eq:firstorder} \left\{ \begin{aligned} v'&=g^{-1}\alpha,\\ \alpha'&=gRv. \end{aligned}\right. \end{equation} The coefficient matrix $\begin{pmatrix}0&g^{-1}\\gR&0\end{pmatrix}$ of \eqref{eq:firstorder} is easily seen to be a curve in the Lie algebra $\mathrm{sp}(2n,I\!\!R)$ of the symplectic group $\mathrm{Sp}(2n,I\!\!R)$ of $I\!\!R^n\oplus{I\!\!R^n}^*$ endowed with the canonical symplectic form: \begin{equation}\label{eq:omegacan} \omega\big((v_1,\alpha_1),(v_2,\alpha_2)\big)=\alpha_2(v_1)-\alpha_1(v_2). \end{equation} Recall indeed that the Lie algebra $\mathrm{sp}(2n,I\!\!R)$ consists of all the matrices of the form: \begin{equation}\label{eq:XABC} X=\begin{pmatrix}A&B\\C&-A^*\end{pmatrix}, \end{equation} where $A\in\mathrm{Lin}(I\!\!R^n)$, $B\in\mathrm{B}_{\textrm{sym}}({I\!\!R^n}^*)$ and $C\in\mathrm{B}_{\textrm{sym}}(I\!\!R^n)$. The considerations above motivate the following: \begin{defin}\label{thm:defsimplsist} Let $X:[a,b]\to\mathrm{sp}(2n,I\!\!R)$ be a smooth curve in $\mathrm{sp}(2n,I\!\!R)$ and denote by $A,B,C$ the $n\times n$ blocks of $X$ as in \eqref{eq:XABC}. The system \begin{equation}\label{eq:sistdif} \left\{\begin{aligned} v'&=Av+B\alpha,\\ \alpha'&=Cv-A^*\alpha, \end{aligned}\right. \end{equation} is called a {\em symplectic differential system\/} in $I\!\!R^n$. With a slight abuse of terminology we identify the coefficient matrix $X$ with the system \eqref{eq:sistdif} and call $X$ a symplectic differential system in $I\!\!R^n$. We call the system $X$ {\em nondegenerate\/} if the matrix $B(t)$ is invertible for every $t\in[a,b]$; in this case, the {\em index\/} of $X$ is defined as the index of $B(t)$ (which does not depend on $t$).
An instant $t\in\left]a,b\right]$ is said to be {\em conjugate\/} for $X$ if there exists a non zero solution $(v,\alpha)$ of $X$ with $v(a)=v(t)=0$. \end{defin}
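As a quick sanity check on the block description \eqref{eq:XABC}, the membership of the Morse--Sturm coefficient matrix in $\mathrm{sp}(2n,I\!\!R)$ can be verified numerically. The following sketch (assuming NumPy; the particular $g$ of index one and the randomly generated $g$-symmetric $R$ are illustrative choices) checks the defining identity $X^{*}\Omega+\Omega X=0$, where $\Omega$ is the matrix of the canonical symplectic form \eqref{eq:omegacan}.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# A nondegenerate symmetric g (here of index one) and a g-symmetric R,
# i.e. g(Rv, w) = g(v, Rw), equivalently (gR)^* = gR.
g = np.diag([-1.0, 1.0, 1.0])
S = rng.standard_normal((n, n))
S = (S + S.T) / 2                 # arbitrary symmetric matrix
R = np.linalg.inv(g) @ S          # then gR = S is symmetric, so R is g-symmetric

# Coefficient matrix of the first-order Morse--Sturm system (A = 0, B = g^{-1}, C = gR)
X = np.block([[np.zeros((n, n)), np.linalg.inv(g)],
              [g @ R,            np.zeros((n, n))]])

# Matrix of omega((v1, a1), (v2, a2)) = a2(v1) - a1(v2)
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n),       np.zeros((n, n))]])

# Membership in sp(2n, R): X^T Omega + Omega X = 0
print(np.allclose(X.T @ Omega + Omega @ X, 0))  # True
```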
The {\em fundamental matrix\/} of $X$ is the curve $[a,b]\ni t\mapsto\Phi(t)$ in the general linear group of $I\!\!R^n\oplus{I\!\!R^n}^*$ characterized by the matrix differential equation \begin{equation}\label{eq:defPhi} \Phi'=X\Phi, \end{equation} with initial condition $\Phi(a)=\mathrm{Id}$; if $(v,\alpha)$ is a solution of $X$ we have $\Phi(t)\big(v(a),\alpha(a)\big)=\big(v(t),\alpha(t)\big)$ for all $t\in[a,b]$. The fact that $X$ takes values in $\mathrm{sp}(2n,I\!\!R)$ implies that $\Phi$ is actually a curve in the symplectic group $\mathrm{Sp}(2n,I\!\!R)$. We will denote by $L_0$ the subspace: \[L_0=\{0\}\oplus{I\!\!R^n}^*\subset I\!\!R^n\oplus{I\!\!R^n}^*;\] clearly, $t\in\left]a,b\right]$ is conjugate for $X$ iff $\ell(t)\cap L_0\ne\{0\}$ where $\ell(t)$ is the subspace: \begin{equation}\label{eq:ellt} \ell(t)=\Phi(t)(L_0)\subset I\!\!R^n\oplus{I\!\!R^n}^*. \end{equation}
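One can also check numerically that the fundamental matrix stays in $\mathrm{Sp}(2n,I\!\!R)$; a sketch assuming NumPy and SciPy, where the curve of symmetric matrices $S(t)=gR(t)$ below is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2
g = np.diag([-1.0, 1.0])
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n),       np.zeros((n, n))]])

def X(t):
    # bottom-left block is gR(t); choosing it symmetric makes R(t) g-symmetric
    S = np.array([[np.cos(t), t], [t, 1.0]])
    return np.block([[np.zeros((n, n)), np.linalg.inv(g)],
                     [S,                np.zeros((n, n))]])

def rhs(t, y):
    Phi = y.reshape(2 * n, 2 * n)
    return (X(t) @ Phi).ravel()      # Phi' = X Phi

sol = solve_ivp(rhs, (0.0, 1.0), np.eye(2 * n).ravel(), rtol=1e-10, atol=1e-12)
Phi = sol.y[:, -1].reshape(2 * n, 2 * n)

# Phi(t) preserves the canonical symplectic form: Phi^T Omega Phi = Omega
print(np.allclose(Phi.T @ Omega @ Phi, Omega, atol=1e-6))  # True
```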
We now define the following notion of isomorphism for symplectic differential systems. \begin{defin}\label{thm:defisoXtildeX} Let $X$ and $\tilde X$ be symplectic differential systems in $I\!\!R^n$. An {\em isomorphism\/} from $X$ to $\tilde X$ is a smooth curve $\phi:[a,b]\to\mathrm{Sp}(2n,I\!\!R)$ with $\phi(t)(L_0)=L_0$ for all $t\in[a,b]$ satisfying either one of the following equivalent conditions: \begin{enumerate} \item\label{itm:iso1} $\tilde\Phi(t)\phi(a)=\phi(t)\Phi(t)$ for all $t\in[a,b]$, where $\Phi$ and $\tilde\Phi$ denote respectively the fundamental matrices of $X$ and $\tilde X$;
\item\label{itm:iso2} $\tilde X(t)=\phi(t)X(t)\phi(t)^{-1}+\phi'(t)\phi(t)^{-1}$ for all $t\in[a,b]$.
\end{enumerate} If $\phi$ is an isomorphism from $X$ to $\tilde X$ we write $\phi:X\cong\tilde X$ and we say that $X$ and $\tilde X$ are {\em isomorphic\/}. \end{defin} It follows easily from condition \eqref{itm:iso1} above that isomorphic symplectic systems have the same conjugate instants. Observe that an isomorphism $\phi:X\cong\tilde X$ can be written in block matrix notation as: \[\phi=\begin{pmatrix}Z&0\\{Z^*}^{-1}W&{Z^*}^{-1}\end{pmatrix},\] with $Z(t)\in\mathrm{Lin}(I\!\!R^n)$ invertible and $W(t)\in\mathrm{B}_{\textrm{sym}}(I\!\!R^n)$ symmetric for all $t\in[a,b]$. A straightforward computation shows that condition \eqref{itm:iso2} above is equivalent to: \begin{align} \label{eq:tildeA}&\tilde A=ZAZ^{-1}-ZBWZ^{-1}+Z'Z^{-1},\\ \label{eq:tildeB}&\tilde B=ZBZ^*,\\ &\tilde C={Z^*}^{-1}(WA+C-WBW+A^*W+W')Z^{-1}, \end{align} where ${}^*$ denotes transposition. It follows immediately that, if $X$ is isomorphic to $\tilde X$ then $X$ is nondegenerate iff $\tilde X$ is nondegenerate and that the indexes of $X$ and $\tilde X$ coincide.
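The three block formulas above can be confirmed symbolically, at least in the scalar case $n=1$ (where $Z^*=Z$ and $W^*=W$); a sketch assuming SymPy:

```python
import sympy as sp

t = sp.symbols('t')
A, B, C, Z, W = [sp.Function(name)(t) for name in 'ABCZW']

# Scalar (n = 1) symplectic differential system and isomorphism phi
X = sp.Matrix([[A, B], [C, -A]])
phi = sp.Matrix([[Z, 0], [W / Z, 1 / Z]])   # block form with Z* = Z, W* = W

# Condition (2) of the definition of isomorphism
Xt = sp.simplify(phi * X * phi.inv() + phi.diff(t) * phi.inv())

# The predicted blocks of the transformed system
At = A - B * W + Z.diff(t) / Z                              # tilde A
Bt = B * Z**2                                               # tilde B = Z B Z*
Ct = (W * A + C - W * B * W + A * W + W.diff(t)) / Z**2     # tilde C

assert sp.simplify(Xt - sp.Matrix([[At, Bt], [Ct, -At]])) == sp.zeros(2, 2)
print("block formulas verified for n = 1")
```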
Observe that we have a {\em category\/} $\underline{\mathfrak C}$ whose objects are symplectic differential systems and whose set of morphisms from $X$ to $\tilde X$ are the isomorphisms $\phi:X\cong\tilde X$; composition of morphisms is defined in the obvious way. Observe also that in this category {\em every morphism is an isomorphism\/}.
The study of symplectic differential systems is of interest in its own right, since such systems are naturally associated with solutions of Hamiltonian systems in symplectic manifolds (see \cite{PT2}); the notion of symplectic differential system also appears in the theory of mechanical systems subject to non holonomic constraints and in sub-Riemannian geometry (see Section~\ref{sec:final}). In this article we are interested in the subcategory of $\underline{\mathfrak C}$ consisting of {\em Morse--Sturm systems}; we say that a nondegenerate symplectic differential system $X$ with $n\times n$ blocks $A,B,C$ is a Morse--Sturm system if $B$ is constant and $A=0$. As we have observed at the beginning of the section, such systems always arise from the Jacobi equation along a non lightlike semi-Riemannian geodesic by a parallel trivialization of the normal bundle. In the following lemma we show that the category of symplectic differential systems is not ``essentially larger'' than the subcategory of Morse--Sturm systems: \begin{lem}\label{thm:notlarger} Every nondegenerate symplectic differential system $X$ is isomorphic to a Morse--Sturm system. \end{lem} \begin{proof} It follows easily from \eqref{eq:tildeB} that every nondegenerate symplectic differential system is isomorphic to one whose component $B$ is constant. We may thus assume without loss of generality that $B$ is constant (and nondegenerate). To conclude the proof we must exhibit a smooth curve $Z$ in the Lie group \[G=\big\{Z\in\mathrm{GL}(n,I\!\!R):ZBZ^*=B\big\}\] and a smooth curve $W$ of symmetric $n\times n$ matrices such that the right-hand side of \eqref{eq:tildeA} vanishes. It suffices to take $W=\frac12\big(B^{-1}A+A^*B^{-1}\big)$ and $Z$ to be the solution of $Z'=Z(BW-A)$ with $Z(a)=\mathrm{Id}$. 
In order to see that $Z$ takes values in $G$ simply observe that $BW-A$ is in the Lie algebra $\mathfrak g$ of $G$ given by: \[\mathfrak g=\big\{Y\in\mathrm{gl}(n,I\!\!R):YB+BY^*=0\big\}.\qedhere\] \end{proof}
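The key identity in the proof, namely that the symmetric choice $W=\frac12\big(B^{-1}A+A^*B^{-1}\big)$ forces $BW-A$ into $\mathfrak g$, can be confirmed numerically; a sketch assuming NumPy, with an arbitrary $A$ and an indefinite diagonal $B$ (both illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))           # arbitrary A block
B = np.diag([1.0, 2.0, -1.0, 3.0])        # constant, symmetric, nondegenerate (indefinite)

Binv = np.linalg.inv(B)
W = 0.5 * (Binv @ A + A.T @ Binv)         # the W chosen in the proof
Y = B @ W - A                             # coefficient of the linear ODE Z' = Z(BW - A)

print(np.allclose(W, W.T))                # True: W is symmetric
print(np.allclose(Y @ B + B @ Y.T, 0))    # True: Y lies in g = {Y : YB + BY* = 0}
```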
Recall that a {\em symplectic space\/} is a real finite dimensional vector space $V$ endowed with a {\em symplectic form\/} $\omega$, i.e., $\omega$ is an antisymmetric nondegenerate bilinear form on $V$. A {\em Lagrangian subspace\/} of $V$ is an $n$-dimensional subspace $L\subset V$ with $\omega\vert_{L\times L}=0$, where $n=\frac12\mathrm{dim}(V)$. We denote by $\Lambda(V,\omega)$ the {\em Lagrangian Grassmannian\/} of $(V,\omega)$, i.e., the set of all Lagrangian subspaces of $V$. The Lagrangian Grassmannian is a real-analytic compact connected $\frac12n(n+1)$-dimensional embedded submanifold of the Grassmannian of all $n$-dimensional subspaces of $V$. We denote by $\Lambda(2n,I\!\!R)$ the Lagrangian Grassmannian of the symplectic space $I\!\!R^n\oplus{I\!\!R^n}^*$ endowed with its canonical symplectic form \eqref{eq:omegacan}.
Clearly, the subspace $L_0$ is Lagrangian in $I\!\!R^n\oplus{I\!\!R^n}^*$ and therefore \eqref{eq:ellt} defines a smooth curve $\ell$ in $\Lambda(2n,I\!\!R)$; such curve is used in \cite{MPT} to study the conjugate points along a semi-Riemannian geodesic. We now introduce the smooth curve $\xi:[a,b]\to\Lambda(2n,I\!\!R)$ given by: \begin{equation}\label{eq:defxi} \xi(t)=\Phi(t)^{-1}(L_0); \end{equation} obviously $t\in\left]a,b\right]$ is conjugate for $X$ iff $\xi(t)$ is not transversal to $\xi(a)=L_0$. This motivates the following: \begin{defin} An {\em abstract symplectic system\/} is a triple $(V,\omega,\xi)$ where $(V,\omega)$ is a symplectic space and $\xi:[a,b]\to\Lambda(V,\omega)$ is a smooth curve in the Lagrangian Grassmannian of $(V,\omega)$. An {\em isomorphism\/} from $(V,\omega,\xi)$ to $(\tilde V,\tilde\omega,\tilde\xi)$ is a symplectomorphism $\sigma:(V,\omega)\to(\tilde V,\tilde\omega)$ such that $\sigma(\xi(t))=\tilde\xi(t)$ for all $t\in[a,b]$; we write $\sigma:(V,\omega,\xi)\cong(\tilde V,\tilde\omega,\tilde\xi)$. An instant $t\in\left]a,b\right]$ is said to be {\em conjugate\/} for $(V,\omega,\xi)$ if $\xi(t)\cap\xi(a)\ne\{0\}$. \end{defin} It is clear that isomorphic abstract symplectic systems have the same conjugate instants. Observe that abstract symplectic systems and their isomorphisms form a category $\underline{\mathfrak D}$ with composition of morphisms defined in the obvious way; as in $\underline{\mathfrak C}$, all morphisms of $\underline{\mathfrak D}$ are isomorphisms. If $X$ is a symplectic differential system and if $\xi$ is defined in \eqref{eq:defxi} then $\underline{\mathcal F}(X)=(I\!\!R^n\oplus{I\!\!R^n}^*,\omega,\xi)$ is an abstract symplectic system; moreover, if $\phi:X\cong\tilde X$ is an isomorphism then $\sigma=\phi(a)$ is an isomorphism from $\underline{\mathcal F}(X)$ to $\underline{\mathcal F}(\tilde X)$. 
The rule $\underline{\mathcal F}$ is a {\em functor\/} from the category $\underline{\mathfrak C}$ to the category $\underline{\mathfrak D}$; in addition we have the following: \begin{lem}\label{thm:equivcat} The functor $\underline{\mathcal F}$ is an {\em equivalence\/} from $\underline{\mathfrak C}$ to $\underline{\mathfrak D}$, i.e.: \begin{enumerate} \item\label{itm:eqcat1} $\underline{\mathcal F}$ is {\em full\/} and {\em faithful\/}, i.e., given symplectic differential systems $X$ and $\tilde X$ then $\underline{\mathcal F}$ induces a bijection from the morphisms $\phi:X\cong\tilde X$ to the morphisms $\sigma:\underline{\mathcal F}(X)\cong\underline{\mathcal F}(\tilde X)$;
\item\label{itm:eqcat2} $\underline{\mathcal F}$ is {\em surjective on isomorphism classes\/}, i.e., given an abstract symplectic system $(V,\omega,\xi)$ there exists a symplectic differential system $X$ such that $\underline{\mathcal F}(X)$ is isomorphic to $(V,\omega,\xi)$. \end{enumerate} \end{lem} \begin{proof} Part \eqref{itm:eqcat1} is obtained by straightforward verification. For part \eqref{itm:eqcat2}, we describe how to construct the symplectic differential system $X$ from the abstract symplectic system $(V,\omega,\xi)$. Choose a smooth curve $[a,b]\ni t\mapsto\psi(t)$ where each $\psi(t)$ is a symplectomorphism from $(V,\omega)$ to $I\!\!R^n\oplus{I\!\!R^n}^*$ (endowed with the canonical symplectic form) such that $\psi(t)\big(\xi(t)\big)=L_0=\{0\}\oplus{I\!\!R^n}^*$ for all $t$. Define $X$ to be the unique symplectic differential system whose fundamental matrix $\Phi$ is given by $\Phi(t)=\psi(t)\psi(a)^{-1}$; more explicitly, take $X(t)=\Phi'(t)\Phi(t)^{-1}$. It is easy to check that $\sigma=\psi(a)^{-1}$ is an isomorphism from $\underline{\mathcal F}(X)$ to $(V,\omega,\xi)$. \end{proof}
We now want to characterize which abstract symplectic systems correspond to nondegenerate symplectic differential systems. To this aim, we recall a couple of simple facts about the geometry of the Lagrangian Grassmannian (see for instance \cite{Duis, MPT}). Let $(V,\omega)$ be a symplectic space. A {\em Lagrangian decomposition\/} of $V$ is a pair $(\xi_0,\xi_1)$ of Lagrangian subspaces of $V$ such that $V=\xi_0\oplus \xi_1$; to each Lagrangian decomposition $(\xi_0,\xi_1)$ there corresponds a chart $\varphi_{\xi_0,\xi_1}$ defined in the open subset $\Lambda^0(\xi_1)$ of $\Lambda(V,\omega)$ consisting of those Lagrangians that are transverse to $\xi_1$. The chart $\varphi_{\xi_0,\xi_1}$ takes values in the space $\mathrm{B}_{\textrm{sym}}(\xi_0)$ of symmetric bilinear forms in $\xi_0$ and is defined by: \[\varphi_{\xi_0,\xi_1}(L)=\omega(T\cdot,\cdot)\vert_{\xi_0\times \xi_0},\quad L\in\Lambda^0(\xi_1),\] where $T:\xi_0\to \xi_1$ is the unique linear map whose graph $\mathrm{Gr}(T)=\{v+Tv:v\in \xi_0\}$ equals $L$. The differential $\mathrm{d}\varphi_{\xi_0,\xi_1}(\xi_0)$ of the chart $\varphi_{\xi_0,\xi_1}$ at $\xi_0$ gives an isomorphism from the tangent space $T_{\xi_0}\Lambda(V,\omega)$ to the space $\mathrm{B}_{\textrm{sym}}(\xi_0)$; such isomorphism does not depend on the complementary Lagrangian $\xi_1$ to $\xi_0$ and therefore for every $L\in\Lambda(V,\omega)$ there is a {\em natural identification\/} of the tangent space $T_L\Lambda(V,\omega)$ with the space $\mathrm{B}_{\textrm{sym}}(L)$.
Let $L\in\Lambda(V,\omega)$ be given and consider the {\em evaluation map\/} $\beta_L:\mathrm{Sp}(V,\omega)\to\Lambda(V,\omega)$ given by $\beta_L(A)=A(L)$; using local coordinates the differential of $\beta_L$ is easily computed as: \begin{equation}\label{eq:difbetaL} \mathrm{d}\beta_L(A)\cdot Y=\omega(YA^{-1}\cdot,\cdot)\vert_{A(L)\times A(L)},\quad A\in\mathrm{Sp}(2n,I\!\!R),\ Y\in T_A\mathrm{Sp}(2n,I\!\!R). \end{equation}
Let now $X$ be a symplectic differential system and define $\xi$ as in \eqref{eq:defxi}; obviously $\xi=\beta_{L_0}\circ\Phi^{-1}$. By \eqref{eq:difbetaL} and \eqref{eq:defPhi} we have: \[\xi'(t)=-\omega(\Phi(t)^{-1}X(t)\Phi(t)\cdot,\cdot)\vert_{\xi(t)\times\xi(t)}= -\omega(X(t)\Phi(t)\cdot,\Phi(t)\cdot)\vert_{\xi(t)\times\xi(t)};\] since $\omega(X(t)\cdot,\cdot)\vert_{L_0\times L_0}=B(t)$, we see that $\xi'(t)$ is the push-forward of $-B(t)$ by the isomorphism $\Phi(t)^{-1}:L_0\to\xi(t)$. This motivates the following: \begin{defin}\label{thm:abstractnondeg} An abstract symplectic system $(V,\omega,\xi)$ is called {\em nondegenerate\/} when $\xi'(t)$ is a nondegenerate symmetric bilinear form on $\xi(t)$ for all $t$. In this case, the {\em index\/} of $(V,\omega,\xi)$ is defined as the index of $-\xi'(t)$ (which does not depend on $t$). \end{defin} Clearly, nondegeneracy and indexes of abstract symplectic systems are preserved by isomorphisms; moreover, a symplectic differential system $X$ is nondegenerate with index $k$ iff $\underline{\mathcal F}(X)$ is nondegenerate with index $k$ as an abstract symplectic system.
Summarizing the results of this section, we have proven the following theorem: \begin{teo}[abstract characterization of semi-Riemannian geodesics]\label{thm:ABSTRACT} Let $(V,\omega,\xi)$ be a nondegenerate abstract symplectic system of index $k$, with $\mathrm{dim}(V)=2n$. Then, there exists an $(n+1)$-dimensional semi-Riemannian manifold $(M,g)$ and a non lightlike geodesic $\gamma:[a,b]\to M$ such that $\underline{\mathcal F}(X)$ is isomorphic to $(V,\omega,\xi)$, where $X$ is the Morse--Sturm system obtained from the Jacobi equation along $\gamma$ by a parallel trivialization of the normal bundle of $\gamma$ (see \eqref{eq:firstorder}). A point $\gamma(t)$, $t\in\left]a,b\right]$, is conjugate to $\gamma(a)$ along $\gamma$ iff $t$ is a conjugate instant for $(V,\omega,\xi)$. Moreover, $\gamma$ can be chosen to be either timelike or spacelike; the index of $g$ is equal to $k+1$ in the former case and to $k$ in the latter.\qed \end{teo} Clearly, from a strictly technical point of view, the categorical terminology adopted in this section is unnecessary. Nevertheless, the authors believe that the employment of this language helps the reader in perceiving the analogies between this theory and other situations in Mathematics\footnote{ Here are some examples. The category of simply connected Lie groups is equivalent to the category of real, finite-dimensional Lie algebras. The same holds for the categories of geometric simplicial complexes and abstract simplicial complexes.} where categorical equivalences occur. \end{section}
\begin{section}{Distribution of conjugate points along a geodesic} \label{sec:distribution} In this section we want to construct examples of conjugate points using the characterization given in Theorem~\ref{thm:ABSTRACT}. The idea is to construct smooth curves $\xi$ of Lagrangians of a fixed symplectic space having everywhere nondegenerate derivative, and such that $\xi(t)$ is not transversal to a fixed Lagrangian $\xi_0$ at a prescribed set of values of the parameter $t$. This construction is performed using local charts $\varphi_{\xi_0,\xi_1}$ in the Lagrangian Grassmannian; in these coordinates curves of Lagrangians are identified with curves of symmetric bilinear forms. The main technical problem in completing the construction is to connect $\xi$ smoothly with $\xi_0$ without violating the nondegeneracy condition on the derivative and without creating new conjugate instants (see Proposition~\ref{thm:menoschato}). The proof of Proposition~\ref{thm:menoschato} takes inspiration from the proof of some elementary versions of the so-called {\em H-principle\/} of Gromov~\cite{Gromov} by the method of convex integration; roughly speaking, we construct a curve satisfying a certain open differential relation by first searching for its derivative.
We start with two technical results:
\begin{lem}\label{thm:chato1} Let $U\subset I\!\!R^k$ be a connected open set, $u\in U$ a fixed point, $\bar\tau:[c,b]\to U$ a smooth curve and $a\in I\!\!R$, $a<c$. Then there exists $M>0$ such that for all $\eta>0$ there exists a smooth extension $\tau:[a,b]\to U$ of $\bar\tau$ with the following properties: \begin{itemize} \item $\int_a^c\tau=u(c-a)$;
\item $\big\Vert\tau\vert_{[a,c]}\big\Vert_\infty=\sup_{t\in[a,c]}\Vert\tau(t)\Vert\le M$;
\item $\tau\vert_{[a,c-\eta]}$ is constant. \end{itemize} \end{lem} \begin{proof} Let $r>0$ be such that the open ball $B(u;r)$ of center $u$ and radius $r$ is contained in $U$ and choose a smooth curve $\tilde\gamma:[c-1,b]\to U$ such that $\tilde\gamma(c-1)=u$ and $\tilde\gamma\vert_{[c,b]}=\bar\tau$. Set $M=\Vert u\Vert+1+\Vert\tilde\gamma\Vert_\infty$ and choose $\varepsilon>0$ small enough such that $\varepsilon<\eta$ and \begin{equation}\label{eq:chata} \frac\varepsilon{c-a-\varepsilon}\Vert\tilde\gamma-u\Vert_\infty<\min\{r,1\}. \end{equation} Now, let $\gamma:[c-\varepsilon,b]\to U$ be a smooth non decreasing reparameterization of $\tilde\gamma$ such that $\gamma\vert_{[c,b]}=\bar\tau$ and $\gamma\vert_{[c-\varepsilon,c-\frac\varepsilon2]}\equiv u$. Choose smooth functions $\phi_1,\phi_2:[a,b]\to[0,1]$ with $\phi_1+\phi_2\equiv1$ and such that the support of $\phi_1$ is contained in $\left[a,c-\frac\varepsilon2\right[$ and the support of $\phi_2$ is contained in $\left]c-\varepsilon,b\right]$. Finally set: \[\delta=\frac{-\int_a^c\phi_2(\gamma-u)}{\int_a^c\phi_1},\] and define $\tau=\phi_1(u+\delta)+\phi_2\gamma$. To check that such $\tau$ works observe that $\Vert\delta\Vert$ is less than or equal to the left hand side of \eqref{eq:chata}. \end{proof}
\begin{cor}\label{thm:corchato} Let $\bar\sigma:[c,b]\to\mathrm{B}_{\textrm{sym}}(I\!\!R^n)$ be a smooth map such that $\bar\sigma(c)$ is nondegenerate, $\bar\sigma'(t)$ is nondegenerate for all $t\in[c,b]$ and such that $\bar\sigma(c)$ and $\bar\sigma'(c)$ have the same index. Then, given $a<c$ there exists a smooth extension $\sigma:[a,b]\to\mathrm{B}_{\textrm{sym}}(I\!\!R^n)$ of $\bar\sigma$ such that $\sigma(a)=0$, $\sigma(t)$ is nondegenerate for all $t\in\left]a,c\right]$ and $\sigma'(t)$ is nondegenerate for all $t\in[a,b]$. \end{cor} \begin{proof} Simply apply Lemma~\ref{thm:chato1} to the following objects: \begin{itemize} \item $U=\{B\in\mathrm{B}_{\textrm{sym}}(I\!\!R^n):\text{$B$ is nondegenerate and it has the same index as $\bar\sigma(c)$}\}$;
\item $\displaystyle u=\frac{\bar\sigma(c)}{c-a}$;
\item $\bar\tau=\bar\sigma'$;
\item $\eta>0$ is chosen small enough so that $\eta M<r$, where $r>0$ is such that the open ball $B(\bar\sigma(c);r)$ is contained in $U$. \end{itemize} Finally, define $\sigma(t)=\int_a^t\tau$ for $t\in[a,b]$. \end{proof}
\begin{prop}\label{thm:menoschato} Let $(V,\omega)$ be a symplectic space, $\xi_0\subset V$ be a Lagrangian subspace and $\bar\xi:[c,b]\to\Lambda(V,\omega)$ be a smooth curve such that $\bar\xi(c)\cap\xi_0=\{0\}$ and $\bar\xi'(t)\in\mathrm{B}_{\textrm{sym}}(\bar\xi(t))$ is nondegenerate for all $t\in[c,b]$. Then, given $a<c$ there exists a smooth extension $\xi:[a,b]\to\Lambda(V,\omega)$ of $\bar\xi$ such that $\xi(a)=\xi_0$, $\xi(t)\cap\xi_0=\{0\}$ for all $t\in\left]a,c\right]$ and $\xi'(t)\in\mathrm{B}_{\textrm{sym}}(\xi(t))$ is nondegenerate for all $t\in[a,b]$. \end{prop} \begin{proof} Let $\xi_1$ be a Lagrangian complementary to both $\xi_0$ and $\bar\xi(c)$; it is easy to see that $\xi_1$ can be chosen such that $\varphi_{\xi_0,\xi_1}(\bar\xi(c))$ equals any prescribed nondegenerate bilinear form on $\xi_0$. In particular, we may assume that $\varphi_{\xi_0,\xi_1}(\bar\xi(c))$ and $\bar\xi'(c)$ have the same index. Let $b'\in\left]c,b\right]$ be such that $\bar\xi([c,b'])$ is contained in the domain of the chart $\varphi_{\xi_0,\xi_1}$ and define $\bar\sigma:[c,b']\to\mathrm{B}_{\textrm{sym}}(\xi_0)\cong\mathrm{B}_{\textrm{sym}}(I\!\!R^n)$ by $\bar\sigma=\varphi_{\xi_0,\xi_1}\circ\bar\xi\vert_{[c,b']}$. The conclusion follows by an application of Corollary~\ref{thm:corchato} to $\bar\sigma$, keeping in mind that if $\sigma=\varphi_{\xi_0,\xi_1}\circ\xi$ then: \begin{itemize} \item[(a)]$\xi(a)=\xi_0\Leftrightarrow\sigma(a)=0$;
\item[(b)]$\xi(t)\cap\xi_0=\{0\}\Leftrightarrow\sigma(t)\ \text{nondegenerate}$;
\item[(c)]$\xi'(t)\in\mathrm{B}_{\textrm{sym}}(\xi(t))$ is just a {\em push-forward\/} of $\sigma'(t)\in\mathrm{B}_{\textrm{sym}}(\xi_0)$ by an isomorphism between $\xi_0$ and $\xi(t)$.
\end{itemize} \end{proof} We are now ready to prove the main result of the section: \begin{teo}\label{thm:MAIN} Let $F\subset\left]a,b\right]$ be {\em any\/} compact subset; then there exists a 3-dimensional Lorentzian manifold $(M,g)$ and a spacelike geodesic $\gamma:[a,b]\to M$ such that $\gamma(t)$ is conjugate to $\gamma(a)$ along $\gamma$ iff $t\in F$. \end{teo} \begin{proof} By Theorem~\ref{thm:ABSTRACT}, it suffices to find an abstract symplectic system $(V,\omega,\xi)$ of index $1$ with $\mathrm{dim}(V)=4$ whose set of conjugate instants is $F$. Consider the space $V=I\!\!R^2\oplus{I\!\!R^2}^*$ endowed with the canonical symplectic form and set $\xi_0=\{0\}\oplus{I\!\!R^2}^*$; given $c\in\left]a,\inf F\right[$, we shall construct a smooth curve $\bar\xi:[c,b]\to\Lambda(V,\omega)$ such that $\bar\xi'(t)$ is nondegenerate for all $t$ and $\bar\xi(t)\cap\xi_0\ne\{0\}$ iff $t\in F$. The desired curve $\xi:[a,b]\to\Lambda(V,\omega)$ will then be obtained by applying Proposition~\ref{thm:menoschato}. The curve $\bar\xi$ will take values in the domain of the chart $\varphi_{\xi_0,\xi_1}$ where $\xi_1=I\!\!R^2\oplus\{0\}$; we define $\bar\xi=\varphi_{\xi_0,\xi_1}^{-1}\circ\rho$, where $\rho:[c,b]\to\mathrm{B}_{\textrm{sym}}(\xi_0)\cong\mathrm{B}_{\textrm{sym}}(I\!\!R^2)$ is defined\footnote{ Identifying $\mathrm{B}_{\textrm{sym}}(I\!\!R^2)$ with $I\!\!R^3$, the set of degenerate bilinear forms corresponds to a double cone $\mathcal C$. The curve $\rho(t)$ defined above takes values in a plane $\pi$ orthogonal to the axis of the cone, and $|1-R(t)|$ is the distance between $\rho(t)$ and the circle $\mathcal C\cap\pi$.} by: \[\rho(t)=\begin{pmatrix}1+R(t)\cos(t)&R(t)\sin(t)\\ R(t)\sin(t)&1-R(t)\cos(t)\end{pmatrix},\quad t\in[c,b],\] and $R:[c,b]\to\left]0,+\infty\right[$ is a smooth map such that $R^{-1}(1)=F$. 
The condition $R(t)>0$ implies that $\rho'(t)$ is always nondegenerate and therefore $\bar\xi'(t)$ is nondegenerate as well; moreover, $\bar\xi(t)\cap\xi_0\ne\{0\}$ iff $R(t)=1$. The existence of the required function $R$ follows by taking $R=1-f$ in Lemma~\ref{thm:existeR} below.
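The two determinant computations used implicitly in the proof can be checked symbolically; a sketch assuming SymPy, verifying that $\det\rho(t)=1-R(t)^2$ (so $\rho(t)$ is degenerate exactly when $R(t)=1$) and that $\det\rho'(t)=-\big(R'(t)^2+R(t)^2\big)$, which is negative whenever $R(t)>0$:

```python
import sympy as sp

t = sp.symbols('t')
R = sp.Function('R')(t)

rho = sp.Matrix([[1 + R * sp.cos(t), R * sp.sin(t)],
                 [R * sp.sin(t),     1 - R * sp.cos(t)]])

# det rho = 1 - R^2: rho(t) is degenerate exactly when R(t) = 1
assert sp.simplify(rho.det() - (1 - R**2)) == 0

# det rho' = -(R'^2 + R^2) < 0 for R(t) > 0: rho'(t) is always nondegenerate
assert sp.simplify(rho.diff(t).det() + R.diff(t)**2 + R**2) == 0
print("determinant identities verified")
```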
\begin{lem}\label{thm:existeR} Given a closed subset $F\subset I\!\!R$, there exists a smooth map $f:I\!\!R\to\left[0,1\right[$ such that $f^{-1}(0)=F$. \end{lem} \begin{proof} Write $I\!\!R\setminus F=\bigcup_{r=1}^{+\infty}I_r$ as a disjoint union of open intervals $I_r$. For each $r\ge1$ let $f_r:I\!\!R\to I\!\!R$ be a smooth map such that: \begin{itemize} \item $f_r$ is zero outside $I_r$;
\item $f_r$ is positive on $I_r$;
\item $\big\Vert f^{(i)}_r\big\Vert_\infty<2^{-r}$ for $i=0,\ldots,r$, where $f_r^{(i)}$ denotes the $i$-th derivative of $f_r$.
\end{itemize} To conclude the proof set $f=\sum_{r=1}^{+\infty}f_r$. \end{proof}
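The construction in the proof is easy to carry out concretely. The sketch below (assuming NumPy; the closed set $F=[0,1]\cup\{2\}$, the finite window of complementary intervals and the scaling factors are illustrative choices) sums flat bump functions $e^{-1/((x-a)(b-x))}$ over the complementary intervals and checks that the resulting $f$ vanishes exactly on $F$:

```python
import numpy as np

def bump(x, a, b):
    """Smooth on R, positive exactly on (a, b), zero elsewhere."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = (x > a) & (x < b)
    u = x[inside]
    out[inside] = np.exp(-1.0 / ((u - a) * (b - u)))
    return out

# F = [0, 1] U {2}; on the window [-10, 10] its complement is three open intervals
intervals = [(-10.0, 0.0), (1.0, 2.0), (2.0, 10.0)]

def f(x):
    # sum of scaled bumps f_r, one per complementary interval, as in the lemma
    return sum(2.0 ** -(r + 1) * bump(x, a, b)
               for r, (a, b) in enumerate(intervals))

xs = np.array([-0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
print(f(xs) == 0)   # [False  True  True  True False  True False]: f vanishes exactly on F
```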
Examples of non lightlike geodesics with a prescribed set of conjugate points in higher dimensional semi-Riemannian manifolds with metric of arbitrary index can be trivially obtained from Theorem~\ref{thm:MAIN} by considering orthogonal products with a flat manifold. On the other hand, if $\gamma$ is a spacelike geodesic in a 2-dimensional Lorentzian manifold $(M,g)$, then $\gamma$ is a timelike geodesic in the Lorentzian manifold $(M,-g)$ with the same conjugate points. This implies that the conjugate points along a geodesic in a 2-dimensional semi-Riemannian manifold are always isolated.
\end{section}
\begin{section}{Final remarks} \label{sec:final} \begin{rem}\label{thm:remaparecem} If $\gamma:[a,b]\to M$ is any geodesic (of arbitrary causal character) in a semi-Riemannian manifold $(M,g)$, then a Morse--Sturm system can be obtained from the Jacobi equation along $\gamma$ by a parallel trivialization of the tangent bundle $TM$ along $\gamma$. At the beginning of Section~\ref{sec:setup} we have defined a Morse--Sturm system from the Jacobi equation by means of a parallel trivialization of the {\em normal bundle\/} $\dot\gamma^\perp$ of $\gamma$. The advantage of the latter construction is that one has a converse to the above construction, i.e., every Morse--Sturm system arises from the Jacobi equation along a non lightlike semi-Riemannian geodesic (Lemma~\ref{thm:lemhelfer}).
Symplectic differential systems are more generally associated to solutions of Hamiltonian systems in a symplectic manifold endowed with a Lagrangian distribution (details of this construction can be found in \cite{PT2, catania}). To each symplectic differential system is naturally associated the notion of {\em Maslov index}; this formalism is used in \cite{PT2} to prove a Morse index theorem for non convex Hamiltonian systems and for semi-Riemannian geometry (see also \cite{MPT, PT3}). In \cite{PT2} it is also defined the notions of {\em multiplicity\/} and of {\em signature\/} of a conjugate instant of a symplectic differential system; these notions, as well as that of Maslov index, can be defined directly in the context of abstract symplectic systems. In the proof of Theorem~\ref{thm:MAIN} we have constructed examples containing only conjugate instants of multiplicity one and signature zero. However, Theorem~\ref{thm:ABSTRACT} and Proposition~\ref{thm:menoschato} make it an easy task to produce more {\em exotic\/} examples of geodesics of arbitrary Maslov index and having a complicated distribution of conjugate points of several {\em types}. \end{rem}
\begin{rem}\label{thm:remcanonical} As mentioned in the Introduction, abstract symplectic systems are {\em canonically\/} associated to semi-Riemannian geodesics, or more generally, to solutions of Hamiltonian systems in a symplectic manifold endowed with a Lagrangian distribution. This is done as follows. Let $(\mathcal M,\omega)$ be a symplectic manifold (in the geodesic case, $\mathcal M=TM^*$ is the cotangent bundle of a semi-Riemannian manifold $(M,\mathfrak g)$), $H$ a possibly time dependent Hamiltonian function on $\mathcal M$ (in the geodesic case $H(p)=\frac12\mathfrak g^{-1}(p,p)$), $\mathfrak L\subset T\mathcal M$ a Lagrangian distribution on $\mathcal M$ (in the geodesic case, $\mathfrak L$ is the vertical subbundle of $TTM^*$) and $\Gamma:[a,b]\to\mathcal M$ a solution of the Hamilton equations of $H$. An abstract symplectic system is then obtained by considering $V=T_{\Gamma(a)}\mathcal M$ and $\xi(t)$ to be the inverse image of $\mathfrak L_{\Gamma(t)}$ in $T_{\Gamma(a)}\mathcal M$ by the Hamiltonian flow. \end{rem}
\begin{rem}\label{thm:remfocais} By minor modifications of the theory presented in this paper it is also possible to treat the case of focal points to submanifolds along an orthogonal geodesic. To this aim, one should introduce a category of pairs $(X,\ell_0)$ where $X$ is a symplectic differential system and $\ell_0$ is a Lagrangian subspace of $I\!\!R^n\oplus{I\!\!R^n}^*$. The Lagrangian subspace $\ell_0$ encodes the information about the tangent space and the second fundamental form of the initial submanifold: an instant $t\in\left]a,b\right]$ is {\em focal for $(X,\ell_0)$} if there exists a non zero solution $(v,\alpha)$ of $X$ with $(v(a),\alpha(a))\in\ell_0$ and $v(t)=0$. Accordingly, abstract symplectic systems should be replaced by quadruples $(V,\omega,\xi,\xi_0)$, where $\xi_0$ is a Lagrangian subspace of $(V,\omega)$. Details of this construction can be found in \cite{PT2}. \end{rem}
\begin{rem}\label{thm:remsub} {\em Degenerate\/} symplectic systems (systems \eqref{eq:sistdif} with coefficient $B$ degenerate) can be used to study stationary points of {\em constrained\/} Lagrangian problems (see \cite{Lisboa}). An important class of examples of these stationary points are the so-called {\em sub-Riemannian geodesics}, i.e., geodesics in manifolds endowed with a {\em partially defined\/} metric tensor. Also in this case, conjugate points may accumulate along a geodesic, however, we will show in a forthcoming paper that the set of conjugate points along a geodesic is always a finite union of isolated points and closed intervals. \end{rem} \end{section}
\end{document}
\begin{document}
\title{Stability through non-shadows} \begin{abstract}
We study families $\mathcal{F}\subseteq 2^{[n]}$ with restricted intersections and prove a conjecture of Snevily in a stronger form for large $n$. We also obtain stability results for Kleitman's isodiametric inequality and families with bounded set-wise differences. Our proofs introduce a new twist to the classical linear algebra method, harnessing the non-shadows of $\mathcal{F}$, which may be of independent interest. \end{abstract}
\section{Introduction}
\subsection{Restricted intersections} The celebrated theorem of Erd\H{o}s, Ko and Rado~\cite{1961EKR} states that when $n\geq 2k$, every $k$-uniform intersecting family $\mathcal{F}\subseteq 2^{[n]}$ has size at most $\binom{n-1}{k-1}$. We can view the Erd\H{o}s-Ko-Rado theorem as a result on families with restricted intersections where the empty intersection is forbidden.
A family $\mathcal{F}\subseteq 2^{[n]}$ is \emph{Sperner} if for any distinct $A,B\in \C F$, $A\not\subseteq B$. Another cornerstone result in extremal set theory is Sperner's theorem~\cite{1928Sperner} from 1928, stating that any Sperner family in $2^{[n]}$ has size at most $\binom{n}{\lfloor n/2\rfloor}$. In Sperner's theorem, $|A\setminus B|=0$ is forbidden.
What can we say if we instead prescribe all possible pairwise intersection sizes? Formally, for a subset $L$ of non-negative integers, a family $\mathcal{F}$ is $L$-intersecting if for any distinct $F,F'\in\mathcal{F}$, $|F\cap F'|\in L$. Results of this type date back to the work of Fisher~\cite{1940Fisher}, who considered the $L$-intersecting problem when $L$ consists of a single element. More precisely, Fisher~\cite{1940Fisher} proved that a uniform family with this property cannot contain more sets than the size of its underlying set. Later, this problem was extensively studied via linear algebra methods by e.g.,~Frankl-Wilson~\cite{1981FranklWilson}, Snevily~\cite{2003Snevily}, Ray-Chaudhuri-Wilson~\cite{1975Ray} and Alon-Babai-Suzuki~\cite{1991AlonBabaiSuzuki}. We also refer the interested readers to the recent survey~\cite{2019book}.
The following fundamental result was proved by Ray-Chaudhuri and Wilson~\cite{1975Ray} in 1975.
\begin{theorem}[Ray-Chaudhuri-Wilson~\cite{1975Ray}]\label{thm:RCW}
Let $L=\{\ell_{1},\ell_{2},\ldots,\ell_{s}\}$ be a set of $s$ non-negative integers and let $k$ be a positive integer. If $\mathcal{F}\subseteq 2^{[n]}$ is a $k$-uniform $L$-intersecting family, then $|\mathcal{F}|\le\binom{n}{s}$. \end{theorem}
Snevily~\cite{1995SnevilyJCD} proposed the following conjecture in 1995.
\begin{conj}[Snevily~\cite{1995SnevilyJCD}]\label{conj:main}
Let $L=\{\ell_{1},\ell_{2},\ldots,\ell_{s}\}$ be a set of $s$ non-negative integers and $K=\{k_{1},k_{2},\ldots,k_{r}\}$ be a set of $r$ positive integers with $\max{\ell_{i}}<\min{k_{j}}$. If $\mathcal{F}\subseteq \bigcup_{i=1}^{r}\binom{[n]}{k_{i}}$ is an $L$-intersecting family, then $|\mathcal{F}|\leq\binom{n}{s}$. \end{conj}
This conjecture, if true, would be a generalization of the Ray-Chaudhuri-Wilson theorem. Indeed, Theorem~\ref{thm:RCW} shows that Conjecture~\ref{conj:main} is true when $|K|=1$. Using multilinear polynomial methods, Alon, Babai and Suzuki~\cite{1991AlonBabaiSuzuki} proved that families as above satisfy $|\mathcal{F}|\leq\binom{n}{s}+\cdots+\binom{n}{s-r+1}$, and Snevily~\cite{1994JCTASnevily} showed that $|\mathcal{F}|\leq\binom{n-1}{s}+\cdots+\binom{n-1}{0}$. Both of these bounds verify Conjecture~\ref{conj:main} asymptotically, that is, $|\mathcal{F}|\le (1+o(1))\binom{n}{s}$ as $n\rightarrow\infty$.
In support of Conjecture~\ref{conj:main}, Snevily~\cite{1995SnevilyJCD} proved the special case $L=\{0,1,\ldots,s-1\}$, and showed in the general case that $|\mathcal{F}|\leq\sum_{i=s-2r+1}^{s}\binom{n-1}{i}$, which implies $|\mathcal{F}|\leq \binom{n}{s}+\binom{n}{s-1}$ for large $n$. For more results related to Conjecture~\ref{conj:main}, we refer the readers to~\cite{2009JCTAChen, 2019book, 2015EUJC, 2007EUJC, 2018DmWang} and the references therein.
Observe that if a set system $\mathcal{F}\subseteq 2^{[n]}$ satisfies the conditions in Conjecture~\ref{conj:main}, then $\mathcal{F}$ is Sperner. By the LYM inequality (Theorem~\ref{thm:LYM}), it is clear that Conjecture~\ref{conj:main} is true when $n<2s-1$. We propose a stronger conjecture for Sperner systems as follows.
\begin{conj}\label{conj:new}
Let $n,s$ be two integers with $n\ge 2s-1$ and let $L=\{\ell_{1},\ell_{2},\ldots,\ell_{s}\}$ be a set of $s$ non-negative integers. If $\mathcal{F}\subseteq 2^{[n]}$ is an $L$-intersecting Sperner system, then $|\mathcal{F}|\leq\binom{n}{s}$. \end{conj}
Clearly, Conjecture~\ref{conj:new} implies Conjecture~\ref{conj:main}. Our first result verifies Conjecture~\ref{conj:new} when $0\in L$ for reasonably large $n$.
\begin{theorem}\label{thm :0}
Let $L=\{\ell_{1},\ell_{2},\ldots,\ell_{s}\}$ be a set of $s$ non-negative integers and let $\mathcal{F}\subseteq 2^{[n]}$ be an $L$-intersecting Sperner family. Then $|\mathcal{F}| \le \sum_{i=0}^{s}\binom{n-1}{i}$. In particular, if $0\in L$ and $n\ge 3s^{2}$, then $|\mathcal{F}|\leq\binom{n}{s}$, with equality if and only if $L=\{0,1,2,\ldots,s-1\}$ and $\mathcal{F} = \binom{[n]}{s}$. \end{theorem}
Note that when $L=\{0\}$, $\mathcal{F}=\{\{1\},\{2\},\ldots,\{n\}\}$ is a $\{0\}$-intersecting Sperner family, showing that the above upper bound $|\mathcal{F}| \le \sum_{i=0}^{s}\binom{n-1}{i}$ cannot be improved in general.
What if $0\notin L$? We say a family $\mathcal{F}=\{F_{1},F_{2},\ldots,F_{m}\}\subseteq 2^{[n]}$ has \emph{rank} $t$ if $\max_{i\in [m]}{|F_{i}|}=t$, and we say $\mathcal{F}$ is \emph{trivially} $\ell$-intersecting if there is a set $A$ of size $\ell$ such that $A\subseteq\bigcap_{i=1}^{m}F_{i}$. We obtain better bounds when $0\notin L$ and $\mathcal{F}$ has relatively small rank. \begin{theorem}\label{thm:large}
Let $L=\{\ell_{1},\ell_{2},\ldots,\ell_{s}\}$ be a set of $s$ positive integers with $\ell_{1}<\ell_{2}<\cdots<\ell_{s}$ and $\mathcal{F}\subseteq 2^{[n]}$ be an $L$-intersecting Sperner family with rank $M$. Then either $\mathcal{F}$ is trivially $\ell_1$-intersecting and then $|\mathcal{F}|\le \sum_{i=0}^s{\binom{n-2}{i}}={\binom{n}{s}}-\Omega_{s}(n^{s-1})$, or $|\mathcal{F}|=O_{M,s}(n^{s-1})$. \end{theorem}
Combining Theorems~\ref{thm :0} and~\ref{thm:large}, we see that Conjecture~\ref{conj:main} is true when $n$ is sufficiently large.
\begin{cor}\label{cor:large}
Let $L=\{\ell_{1},\ell_{2},\ldots,\ell_{s}\}$ be a set of $s$ non-negative integers and let $K=\{k_{1},k_{2},\ldots,k_{r}\}$ be a set of $r$ positive integers with $\max{\ell_{i}}<\min{k_{j}}$. There exists $n_0$ such that for any $n\ge n_0$ and any $L$-intersecting family $\mathcal{F}\subseteq \bigcup_{i=1}^{r}\binom{[n]}{k_{i}}$, we have $|\mathcal{F}|\leq\binom{n}{s}$. \end{cor}
\subsection{Restricted symmetric/set-wise differences}
For set systems with bounded symmetric differences, Kleitman~\cite{1966Kleitman} proved the following classical result: for integers $n>d$ and a family $\mathcal{F}\subseteq 2^{[n]}$ with $|A\triangle B| \le d$ for any $A,B \in \mathcal{F}$, \begin{equation*}
|\mathcal{F}|\le
\begin{cases}
\sum\limits_{i=0}^{k}\binom{n}{i} & d=2k; \\
2\sum\limits_{i=0}^{k}\binom{n-1}{i} & d=2k+1.
\end{cases} \end{equation*}
The bounds in both cases above are optimal. For the even case, the upper bound can be attained by the radius-$k$ Hamming ball $\mathcal{K}(n,k) := \{ F:F\subseteq[n], |F|\le k \}$, and for the odd case, consider the family $\mathcal{K}_y(n,k) := \{ F:F\subseteq[n], |F\setminus\{y\}|\le k \}$. The original proof of Kleitman is combinatorial. Recently, Huang, Klurman and Pohoata~\cite{2020Huang} provided an algebraic proof via the Cvetkovi\'{c} bound~\cite{1972Cvetkovic} and extended the results when the allowed symmetric differences lie in a set of consecutive integers.
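As a sanity check (our own illustration, for the small parameters $n=7$, $k=2$), one can verify directly that both extremal families satisfy the symmetric-difference constraint and attain Kleitman's bounds:

```python
from itertools import combinations
from math import comb

n, k = 7, 2

def subsets_of_size_at_most(ground, t):
    return [frozenset(c) for r in range(t + 1) for c in combinations(ground, r)]

# Even case: the radius-k Hamming ball, i.e. all sets of size <= k.
K = subsets_of_size_at_most(range(n), k)
assert all(len(A ^ B) <= 2 * k for A in K for B in K)
assert len(K) == sum(comb(n, i) for i in range(k + 1))

# Odd case: sets F with |F \ {y}| <= k, taking y = 0.
Ky = [F for F in subsets_of_size_at_most(range(n), k + 1) if len(F - {0}) <= k]
assert all(len(A ^ B) <= 2 * k + 1 for A in Ky for B in Ky)
assert len(Ky) == 2 * sum(comb(n - 1, i) for i in range(k + 1))
```

Here `^` is the symmetric difference of Python sets, so the asserts check exactly the hypotheses and sizes stated above.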
For a family $\mathcal{F}\subseteq 2^{[n]}$ and a subset $S\subseteq [n]$, we define the \emph{translate} of $\mathcal{F}$ by $S$ as $\mathcal{F}\triangle S:=\{F\triangle S:F\in\mathcal{F}\}$. Recently, Frankl~\cite{2017CPCFrankl} proved the following stability result for Kleitman's theorem. \begin{theorem}[Frankl~\cite{2017CPCFrankl}]\label{thm:StabilityFrankl}
Let $n\ge d+2$, $d\ge 0$ and let $\mathcal{F}\subseteq 2^{[n]}$ with $|A\triangle B| \le d$ for any $A,B \in \mathcal{F}$.
\begin{enumerate}
\item If $d=2k$ and $\mathcal{F}$ is not contained in any translate of $\mathcal{K}(n,k)$, then $|\mathcal{F}|\le \sum_{i=0}^{k}\binom{n}{i}-\binom{n-k-1}{k}+1$.
\item If $d=2k+1$ and $\mathcal{F}$ is not contained in any translate of $\mathcal{K}_{y}(n,k)$, then $|\mathcal{F}| \le 2\sum_{i=0}^{k}\binom{n-1}{i} - \binom{n-k-2}{k}+1$.
\end{enumerate} \end{theorem}
Frankl's proof is combinatorial and relatively involved, making use of his earlier stability result for Katona's theorem~\cite{2017JCTBFrankl}. We prove a finer stability result for Kleitman's theorem in the odd case.
\begin{theorem}\label{thm:SymmetricStability}
Let $\mathcal{F} \subseteq 2^{[n]}$ be a family with $|A\triangle B| \le 2k+1$ for any $A,B \in \mathcal{F}$. Then either $|\mathcal{F}| \le 2\sum_{i=0}^{k}\binom{n}{i} - 2\binom{n-5k-1}{k}$, or $\mathcal{F}$ is contained in some translate of $\mathcal{K}(n,k+1)$. Furthermore, if $\mathcal{F}$ is contained in some translate of $\mathcal{K}(n,k+1)$, then either $\mathcal{F}$ is contained in some translate of $\mathcal{K}_{y}(n,k)$, or $|\mathcal{F}| \le 2\sum_{i=0}^{k}\binom{n-1}{i} - \binom{n-k-2}{k}+1$. \end{theorem}
Note that if $\mathcal{F}$ is not contained in any translate of $\mathcal{K}(n,k+1)$, our upper bound $|\mathcal{F}| \le 2\sum_{i=0}^{k}\binom{n}{i} - 2\binom{n-5k-1}{k}$ is better than the bound $|\mathcal{F}| \le 2\sum_{i=0}^{k}\binom{n-1}{i} - \binom{n-k-2}{k}+1$ in Theorem~\ref{thm:StabilityFrankl} when $n\ge 10k^{2}$. We remark that our method also applies to the even case; however, it then yields a slightly weaker bound $|\mathcal{F}| \le \sum_{i=0}^{k}\binom{n+1}{i} - \binom{n-5k}{k}$ when $\mathcal{F}$ is not contained in any translate of $\mathcal{K}(n,k)$.
Another type of well-studied restricted intersection problem is to bound the set-wise differences. In 1983, Katona~\cite{1983Katona} asked the following: if we require $|A\setminus B|\le k$ for any distinct $A,B\in\mathcal{F}$, how large can $\mathcal{F}$ be? Frankl~\cite{1985Frankl} proved that for a set $L$ of non-negative integers, if $\mathcal{F}\subseteq 2^{[n]}$ is a family such that $|A\setminus B|\in L$ for any distinct $A,B\in\mathcal{F}$, then $|\mathcal{F}|\le \sum_{i=0}^{|L|}\binom{n}{i}$.
Our method also yields a stability result for families with bounded set-wise differences as follows. \begin{theorem}\label{thm:DifferenceStability}
Let $n,t,k$ be positive integers with $k<t\le\frac{k+n}{2}$ and let $\mathcal{F}=\{F_1,F_2,\ldots,F_m\} \subseteq 2^{[n]}$ be a family with rank $t$ such that $|F_i\setminus F_j| \le k$ for any $F_i,F_j \in \mathcal{F}$. Then either $\mathcal{F}$ is trivially $(t-k)$-intersecting and $|\mathcal{F}|\le \sum_{i=0}^k\binom{n-(t-k)}{i}$, or $|\mathcal{F}| \le\sum_{i=0}^{k} \binom{n}{i} -\binom{n-t-2k}{k}$. In the latter case, if $n$ is sufficiently large in terms of $t$ and $k$, then $|\mathcal{F}|\le (1-c)\binom{n-(t-k)}{k}$ for some constant $c>0$. \end{theorem}
We remark that the upper bound on the rank, $t\le \frac{k+n}{2}$, is not restrictive. Indeed, for any sets $A,B$, $|A\setminus B|=|B^c\setminus A^c|$. Thus, when $t>\frac{k+n}{2}$, we can instead consider the family consisting of the complements $F^c$ of all sets $F\in \mathcal{F}$.
For families with set-wise differences lying in $L$, where $|L|=k$, Frankl~\cite{1985Frankl} conjectured that if the family is Sperner, then the upper bound $|\mathcal{F}|\le \sum_{i=0}^{k}\binom{n}{i}$ can be improved to $\binom{n}{k}$. As a corollary, we show that this conjecture holds for $L=\{1,2,\ldots,k\}$ and large $n$. The following corollary can also be deduced from the main results in~\cite{2017Frankl}. \begin{cor}\label{coro:main}
Given an integer $k$ and a Sperner family $\mathcal{F}\subseteq 2^{[n]}$ with $|A\setminus B| \le k$ for any $A,B \in \mathcal{F}$, if $n$ is sufficiently large, then $|\mathcal{F}| \le \binom{n}{k}$, with equality if and only if $\mathcal{F} = \binom{[n]}{k}$. \end{cor}
\subsection{Our method} The linear algebra method has proven to be a powerful tool in extremal set theory, see e.g.,~\cite{1991AlonBabaiSuzuki,1988Babai, 1983Bannai, 1984Blokhuis, 2009JCTAChen, 2007EUJC, 2007JACMubayi, 1975Ray,2003Snevily}. Given a family $\C F$, in this approach we associate the members of $\C F$ with multilinear polynomials that are linearly independent. We can then bound the cardinality of the family by the dimension of the underlying space. Improvements can often be made by cleverly adding an extra set of polynomials which, together with the original ones, are still linearly independent.
We introduce a new twist to this classical method, utilizing the non-shadows of the family to find a large collection of polynomials that can be added. There are several advantages of our variation. First of all, the linear independence can usually be verified easily. A more important feature is that, as the new polynomials are associated with non-shadows of $\C F$, we can gather additional structural information of $\C F$, which not only offers optimal bounds in many scenarios but also provides a stability result as shown in Theorems~\ref{thm:SymmetricStability} and~\ref{thm:DifferenceStability}.
\noindent\textbf{Organization.} The rest of this paper is organized as follows. In Section~\ref{sec:pre} we will collect some useful tools and give a short proof of a special case of the Katona intersection theorem (Theorem~\ref{thm:shadow}) to illustrate our method. We will prove the stability results, Theorems~\ref{thm:SymmetricStability} and~\ref{thm:DifferenceStability}, in Section~\ref{sec:Stability}. The proofs of Theorems~\ref{thm :0} and~\ref{thm:large} will be presented in Section~\ref{sec:Snevily}.\\
{\bf \noindent Notations.} In this paper, we usually regard $[n]=\{1,2,\ldots,n\}$ as the ground set. For a set $A\subseteq [n]$, we write $A^{c}$ for its complement, that is, $A^{c}=[n]\setminus A$. For a subset $A\subseteq [n]$, we will use $\binom{A}{k}$ to denote the family of all subsets of $A$ of size $k$ and $\binom{A}{\leq k}$ to denote the family of all subsets of $A$ of size at most $k$. For a pair of vectors $\boldsymbol{x}=(x_{1},x_{2},\ldots,x_{n})$ and $\boldsymbol{y}=(y_{1},y_{2},\ldots,y_{n})$, we define their inner product as $\boldsymbol{x}\cdot\boldsymbol{y}=\sum_{i=1}^{n}x_{i}y_{i}$. We use $\boldsymbol{1}_n$ to denote the length-$n$ all-ones vector $(1,1,\ldots,1)$ and omit the subscript when the dimension is clear. Given a polynomial $f$ in $n$ variables $x_{1},x_{2},\ldots,x_{n}$, define its \emph{multilinear reduction} $\tilde{f}$ to be the polynomial obtained from $f$ by replacing each $x_{i}^{t}$ term by $x_{i}$ for any positive integer $t$ and $1\le i\le n$. For a family $\mathcal{F}\subseteq 2^{[n]}$, we denote the \emph{$k$-shadow} of $\mathcal{F}$ by $\partial_{k}{\mathcal{F}}:=\{T\in\binom{[n]}{k}:T\subseteq F\ \text{for\ some\ }F\in\mathcal{F}\}$.
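For concreteness, the $k$-shadow can be computed directly from the definition; the toy family below is our own example. Note also that a polynomial and its multilinear reduction agree on all $0/1$ vectors, since $x^t=x$ for $x\in\{0,1\}$, which is why polynomials may be evaluated at characteristic vectors before or after reduction.

```python
from itertools import combinations

def k_shadow(family, k):
    # ∂_k F: all k-element sets contained in some member of the family.
    return {frozenset(T) for F in family for T in combinations(F, k)}

# Toy family of 3-sets (our own example).
family = [frozenset({1, 2, 3}), frozenset({2, 3, 4})]
assert k_shadow(family, 2) == {frozenset({1, 2}), frozenset({1, 3}),
                               frozenset({2, 3}), frozenset({2, 4}),
                               frozenset({3, 4})}
```

The two members share the $2$-set $\{2,3\}$, so the $2$-shadow has $5$ elements rather than $6$.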
\section{Preliminaries}\label{sec:pre} \subsection{Some useful tools} Hilton and Milner~\cite{1967Hilton} showed the following result, which states that the size of non-trivial intersecting families is noticeably smaller than that of the trivial ones.
\begin{theorem}[Hilton-Milner~\cite{1967Hilton}]\label{thm:HiltonMilner} Let $n,k$ be positive integers with $n>2k$. If $\mathcal{F}\subseteq \binom{[n]}{k}$ is an intersecting family and $\bigcap_{F\in\mathcal{F}}F=\emptyset$, then we have \begin{equation*}
|\mathcal{F}| \le \binom{n-1}{k-1} - \binom{n-k-1}{k-1}+1. \end{equation*} \end{theorem}
For Sperner families, the Lubell–Yamamoto–Meshalkin inequality, discovered by Bollob\'{a}s~\cite{1965Bollobas}, Lubell~\cite{1966Lubell}, Me\v{s}alkin~\cite{1963Me} and Yamamoto~\cite{1954Ya} independently, is useful. \begin{theorem}[LYM inequality]\label{thm:LYM}
Let $\mathcal{F}\subseteq 2^{[n]}$ be a Sperner family. Then
\begin{equation*}
\sum\limits_{A\in\mathcal{F}}\frac{1}{\binom{n}{|A|}}\le 1.
\end{equation*} \end{theorem}
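The LYM inequality is easy to check on a small example; the family below is our own illustration, verified exactly with rational arithmetic.

```python
from fractions import Fraction
from math import comb

n = 6
fam = [{1, 2}, {3, 4}, {1, 3, 5}, {2, 4, 5, 6}]  # a Sperner family in 2^[6]
# Sperner: no member properly contains another.
assert all(not A < B and not B < A for A in fam for B in fam)
lym = sum(Fraction(1, comb(n, len(A))) for A in fam)
assert lym <= 1
```

Here the sum is $\tfrac{1}{15}+\tfrac{1}{15}+\tfrac{1}{20}+\tfrac{1}{15}=\tfrac{1}{4}\le 1$, as the theorem guarantees.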
We also need the triangular criterion when we want to prove a sequence of polynomials are linearly independent.
\begin{prop}\label{prop:triangular}
Let $f_{1},f_{2},\ldots,f_{m}$ be functions in a linear space. If $v^{(1)},v^{(2)},\ldots,v^{(m)}$ are vectors such that $f_{i}(v^{(i)})\neq 0$ for $1\le i\le m$ and $f_{i}(v^{(j)})=0$ for $i>j$, then $f_{1},f_{2},\ldots,f_{m}$ are linearly independent. \end{prop}
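The triangular criterion amounts to saying that the evaluation matrix $M_{ij}=f_i(v^{(j)})$ is triangular with nonzero diagonal, hence has full rank. A minimal sketch (our own toy system $f_i(x)=x(x-1)\cdots(x-(i-2))$ evaluated at $v^{(j)}=j-1$), with the rank computed exactly over the rationals:

```python
from fractions import Fraction

def rank(M):
    # Exact row rank via Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Triangular system: f_1 = 1, f_2 = x, f_3 = x(x-1), points v_j = j-1.
fs = [lambda x: 1, lambda x: x, lambda x: x * (x - 1)]
vs = [0, 1, 2]
M = [[f(v) for v in vs] for f in fs]
assert all(M[i][j] == 0 for i in range(3) for j in range(3) if i > j)  # f_i(v_j)=0, i>j
assert all(M[i][i] != 0 for i in range(3))                             # f_i(v_i)≠0
assert rank(M) == 3  # hence f_1, f_2, f_3 are linearly independent on {v_j}
```

A nontrivial linear combination vanishing everywhere would in particular vanish on the $v^{(j)}$, contradicting full rank; this is exactly the argument behind Proposition~\ref{prop:triangular}.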
\subsection{Warm up: Katona intersection theorem} As a warm-up and illustration of our method, we will give a short proof of a special case of the Katona intersection theorem. \begin{theorem}[Katona~\cite{1964Katona}]\label{thm:shadow}
Suppose $\mathcal{F} \subseteq \binom{[n]}{k+1}$ is an intersecting family. Then $|\partial_{k}{\mathcal{F}}| \ge |\mathcal{F}|$. \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:shadow}]
Suppose $\mathcal{F}=\{F_{1},F_{2},\ldots,F_{m}\} \subseteq \binom{[n]}{k+1}$ is an intersecting family. Let $\boldsymbol a_{i},\boldsymbol b_{i}$ be the characteristic vectors of $F_i, F_i^c$, respectively. For $\boldsymbol x=(x_1,\ldots,x_n)$ and $i\in [m]$, we define $g_i(\boldsymbol x) =\prod_{j=1}^{k} (\boldsymbol x \cdot \boldsymbol b_i -j)$ and let $f_i=\tilde{g}_{i}$ be the multilinear reduction of $g_i$. Note that $\ker(f_{i})$ corresponds to characteristic vectors of subsets $F\subseteq [n]$ with $|F\setminus F_{i}|\in [k]$.
Let $\{S_1,\ldots,S_r\} = \binom{[n]}{\le k-1}$ with $|S_i|\le |S_j|$ for $i< j$, so $r = \sum_{i=0}^{k-1}\binom{n}{i}$. For each $S_{i}$, let $f_{S_i}$ be the multilinear reduction of $g_{S_i}(\boldsymbol x):=(\boldsymbol{x}\cdot \boldsymbol{1} -k-1)\cdot\prod_{\ell\in S_i }x_{\ell}$, and $\boldsymbol a_{S_i}$ be the characteristic vector of $S_i$. Let $\{T_1, T_2,\ldots,T_h\} =\binom{[n]}{k}\setminus \partial_{k}{\mathcal{F}}$ be the collection of all $k$-sets that are not in the $k$-shadow of $\mathcal{F}$. For each $T_{i}$, we define $f_{T_i}(\boldsymbol{x}):= \prod_{j\in T_i }x_j$, and $\boldsymbol a_{T_i}$ to be the characteristic vector of $T_i$. Here we can see that $\ker(f_{T_{i}})$ corresponds to characteristic vectors of subsets of $[n]$ which do not contain $T_{i}$, and $\ker(g_{S_{i}})$ corresponds to characteristic vectors of subsets of $[n]$ which have exactly $k+1$ elements or do not contain $S_{i}$.
\begin{claim}\label{claim:S7}
The polynomials $\{f_1,\ldots,f_m,f_{S_1},\ldots,f_{S_r},f_{T_1},\ldots,f_{T_h} \}$ are linearly independent. \end{claim} \begin{poc}
We shall show that these polynomials satisfy the triangular criterion in Proposition~\ref{prop:triangular}. First note that $f_i(\boldsymbol a_i) =(-1)^{k}k! \ne 0$ as $\boldsymbol a_{i}\cdot \boldsymbol b_{i}=0$ for each $i$. For $i\ne j$, we have $f_i(\boldsymbol a_j)=0$ as $\boldsymbol a_{j}\cdot \boldsymbol b_{i}=|F_{j}\cap F_{i}^{c}|=|F_j\setminus F_i| \in [k]$. Next, $f_{S_i}(\boldsymbol a_{S_i}) = (|S_i|-k-1) \ne 0$. Since $|F_j|=k+1$, we know that $f_{S_i}(\boldsymbol a_j) =0$ for every $j\in[m]$. By definition, if $i<j$, we have $|S_i| \le |S_j|$ and $S_{j}\setminus S_{i}\ne \emptyset$; thus for any $i<j$, we have $f_{S_j}(\boldsymbol a_{S_i}) = 0$. For the $f_{T_{i}}$'s, we have $f_{T_i}(\boldsymbol a_{T_i}) = 1 \ne 0$, and for any $T\in \{T_1,\ldots,T_{i-1},T_{i+1},\ldots,T_h,S_1,S_2,\ldots,S_{r}\}$, we have $|T| \le |T_i|$, $T\ne T_i$ and $T_{i}\setminus T\ne \emptyset$, so $f_{T_i}(\boldsymbol{a}_{T}) =0$. Lastly, since $T_i \not\subseteq F_j$ for any $i,j$, we have $f_{T_i}(\boldsymbol a_{j}) = 0$. Hence, Proposition~\ref{prop:triangular} implies that $\{f_1,\ldots,f_m,f_{S_1},\ldots,f_{S_r},f_{T_1},\ldots,f_{T_h} \}$ are linearly independent. \end{poc}
Since all of these polynomials have degree at most $k$, we have $|\mathcal{F}|+r+h\le \sum _{i=0}^{k}\binom{n}{i}$. As $r = \sum_{i=0}^{k-1}\binom{n}{i}$ and $h=\binom{n}{k} -|\partial_{k}{\mathcal{F}}|$, we have $|\mathcal{F}| \le |\partial_{k}{\mathcal{F}}|$ as desired.
\end{proof}
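For the small case $n=5$, $k=1$, the statement of Theorem~\ref{thm:shadow} can be verified exhaustively: there are only $2^{10}$ subfamilies of $\binom{[5]}{2}$, so we can check every intersecting one. This brute-force sketch is our own sanity check, not part of the proof.

```python
from itertools import combinations

n, k = 5, 1
pairs = [frozenset(p) for p in combinations(range(n), k + 1)]  # all (k+1)-sets

def shadow(fam, k):
    # ∂_k of a family: all k-sets contained in some member.
    return {frozenset(T) for F in fam for T in combinations(F, k)}

# Enumerate every intersecting subfamily of ([5] choose 2) and verify
# |∂_k F| >= |F|, as Katona's theorem asserts.
for mask in range(1 << len(pairs)):
    fam = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
    if all(A & B for A in fam for B in fam):
        assert len(shadow(fam, k)) >= len(fam)
```

For instance, a triangle $\{1,2\},\{1,3\},\{2,3\}$ has $1$-shadow $\{1\},\{2\},\{3\}$, attaining equality.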
\section{Stability}\label{sec:Stability} \subsection{Families with bounded symmetric differences}
In this section, we will prove Theorem~\ref{thm:SymmetricStability}. Let $\mathcal{F}_o$ consist of all sets of odd size in $\mathcal{F}$ and $\mathcal{F}_e$ consist of all sets of even size in $\mathcal{F}$. Observe that for any subsets $X,Y,Z \subseteq [n]$, $(X\triangle Y)\triangle(X\triangle Z)= Y\triangle Z$. Hence, by translating all sets in $\C F$ by some appropriate $U\in \C F$, we may assume $\emptyset\in\mathcal{F}$ and $|F|\le 2k+1$ for any $F\in \mathcal{F}$. Then without loss of generality, we can further assume that $|\mathcal{F}_o|\le|\mathcal{F}_e|$.
Let $\mathcal{F}_e =\{F_1,\ldots, F_m\}$ and $\boldsymbol a_{i},\boldsymbol b_{i}$ be the characteristic vectors of $F_i, F_i^c$, respectively. For $\boldsymbol x=(x_1,\ldots,x_n)$ and $i\in [m]$, we define $g_i(\boldsymbol x) =\prod_{\ell=1}^{k} (\boldsymbol x \cdot \boldsymbol b_i + (\boldsymbol{1}-\boldsymbol x)\cdot \boldsymbol a_i-2\ell)$ and $f_i=\tilde{g}_{i}$. It is easy to see that $\ker(g_{i})$ corresponds to characteristic vectors of subsets $F\subseteq [n]$ with $|F_{i}\triangle F|\in\{2,4,\ldots,2k\}$.
By definition, $f_i(\boldsymbol a_i) = (-2)^k k! \ne 0$ as $\boldsymbol a_{i}\cdot \boldsymbol b_{i}=0$ for each $i$. As each set in $\mathcal{F}_{e}$ has even size, $\boldsymbol a_{i}\cdot \boldsymbol b_{j} + \boldsymbol b_i \cdot \boldsymbol a_j=|F_i\triangle F_j| \in \{2,4,\ldots,2k\}$ for any $i\ne j$, so we have $f_i( \boldsymbol a_j)=0$. By Proposition~\ref{prop:triangular}, $\{f_i\}_{i=1}^{m}$ are linearly independent, which implies $m\le \sum_{i=0}^{k} \binom{n}{i}$ since the polynomials $\{f_i\}_{i=1}^{m}$ lie in the space of $n$-variate polynomials of degree at most $k$.
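These evaluations are easy to verify computationally. Since a polynomial and its multilinear reduction agree on $0/1$ vectors, we may evaluate $g_i$ directly at characteristic vectors; the toy family below (even-sized sets with pairwise symmetric difference $2$, our own choice) exhibits the diagonal/off-diagonal pattern used above.

```python
from math import factorial

n, k = 4, 1
# Toy family (our own choice): even-sized sets with pairwise |A △ B| = 2.
family = [{1, 2}, {1, 3}, {2, 3}]

def char(F):
    # Characteristic vector of F inside [n].
    return [1 if i in F else 0 for i in range(1, n + 1)]

def g(i, x):
    # g_i(x) = prod_{l=1}^{k} (x·b_i + (1-x)·a_i - 2l), where a_i, b_i are the
    # characteristic vectors of F_i and its complement.
    a = char(family[i])
    b = [1 - t for t in a]
    val = 1
    for ell in range(1, k + 1):
        val *= sum(x[j] * b[j] + (1 - x[j]) * a[j] for j in range(n)) - 2 * ell
    return val

# Diagonal values are (-2)^k k!, off-diagonal values vanish: exactly the
# triangular-criterion pattern.
for i in range(len(family)):
    for j in range(len(family)):
        expected = (-2) ** k * factorial(k) if i == j else 0
        assert g(i, char(family[j])) == expected
```

The inner sum evaluates to $|F\triangle F_i|$ when $x$ is the characteristic vector of $F$, which makes the vanishing pattern transparent.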
Let $\mathcal{S}= \binom{[n]}{k}\setminus\partial_{k}\mathcal{F}_{e}:= \{T\in \binom{[n]}{k} : T\not\subseteq F_i\text{ for all } i\in[m]\}$. For each subset $T\in \mathcal{S}$, we define a polynomial $h_T(\boldsymbol x) =\prod_{i\in T} x_i$ and let $\boldsymbol a_T$ be the characteristic vector of $T$. Similarly, $\ker(h_{T})$ corresponds to characteristic vectors of subsets of $[n]$ which do not contain $T$ as a subset. The following claim will establish the upper bound $|\mathcal{F}_{e}|\leq \sum_{i=0}^{k} \binom{n}{i}-|\mathcal{S}|$.
\begin{claim}\label{claim:S1}
The polynomials $\{h_T\}_{T\in \mathcal{S}}$ and $\{f_i\}_{i=1}^{m}$ are linearly independent. \end{claim}
\begin{poc} For any $T\in \mathcal{S}$ and $i\in [m]$, since $T\not\subseteq F_i$, we have $h_T(\boldsymbol a_i) = 0$. For any $T,S\in \mathcal{S}$, it is easy to check that $ h_T(a_S)\neq 0$ if and only if $T=S$. By Proposition~\ref{prop:triangular}, $\{h_T\}_{T\in \mathcal{S}}$ and $\{f_i\}_{i=1}^{m}$ are linearly independent. \end{poc}
If $|\mathcal{S}|\ge \binom{n-5k-1}{k}$, then we have \begin{equation*}
|\mathcal{F}|\le 2|\mathcal{F}_e|\le 2\big(\sum_{i=0}^{k}\binom{n}{i} -|\mathcal{S}|\big) \le 2\sum_{i=0}^{k}\binom{n}{i} - 2\binom{n-5k-1}{k}, \end{equation*} yielding the first alternative.
We may then assume $|\mathcal{S}|<\binom{n-5k-1}{k}$. Relabelling $\mathcal{F}_{e}$ if necessary, we may assume $2k+1\ge |F_1|\ge |F_2|\ge \cdots \ge |F_m|$. As $|F_1|$ is maximum, we have $|F_i\setminus F_1|\le |F_1\setminus F_i|$ for any $1\le i\le m$. Since $|F_1\triangle F_i| \le 2k+1$ and both $|F_1|$ and $|F_i|$ are even, we have $|F_i\setminus F_1|\le k$ and $|F_1|\le 2k$. We then have the following claim.
\begin{claim}\label{claim:S2}
If $|F_i\setminus F_1|= k$, then we have $|F_1\setminus F_i|=k$, and $|F_1|=|F_i|$. \end{claim}
Since $|\mathcal{S}|< \binom{n-5k-1}{k}$ and $n-|F_1|> n-5k-1$, there exists some $k$-set $R\subseteq F_{1}^{c}$ such that $R\in \partial_{k}{\mathcal{F}_{e}}$. Consequently, there exists some $E\in \mathcal{F}_e$ such that $R\subseteq E$ and $|E\setminus F_1| \ge |R|= k$. In fact, $|E\setminus F_{1}|=k$ as $|E\setminus F_{1}|\le k$. Claim~\ref{claim:S2} then implies that $|F_{1}|=|E|$. Fix such $E$ and let $A: = F_1\cap E$.
\begin{claim}\label{claim:S3}
For any $F\in \mathcal{F}$, we have $|F \triangle A|\le k+1$. \end{claim} \begin{poc}
Suppose that there exists $F\in \mathcal{F}$ such that $|F \triangle A|\ge k+2$. Since $|F_1|\le 2k$, $|E\setminus F_1|= k$ and $|F|\le 2k+1$, we have $|F_1\cup E \cup F|\le 5k+1$. Since $|\mathcal{S}|< \binom{n-5k-1}{k}$, there exists some $k$-set $Q\subseteq [n]\setminus (F_1\cup E\cup F)$ such that $Q\in\partial_{k}{\mathcal{F}}_{e}$. Thus there is some $T\in\mathcal{F}_{e}$ such that $Q\subseteq T$ and $|T\cap ([n]\setminus (F_1\cup E\cup F))|\ge k$. As $|T\setminus F_1| \le k$, we have $T\cap ([n]\setminus (F_1\cup E\cup F)) = T\setminus F_1$.
Therefore $T\setminus F_1$, $E\setminus A$ and $F_1\setminus A$ are pairwise disjoint sets of size $k$. By Claim~\ref{claim:S2}, $|T\triangle F_1|= 2k$, $|T\triangle E|= 2k$ and $T=A \cup (T\setminus F_1)=A\cup (T\cap([n]\setminus (F_{1}\cup E\cup F)))$. Then we have \begin{equation*}
|T\triangle F|= |T\cap ([n]\setminus (F_1\cup E\cup F))|+ |A\triangle F|\ge k+k+2=2k+2, \end{equation*}
a contradiction. \end{poc}
By Claim~\ref{claim:S3}, if $|\mathcal{S}|<\binom{n-5k-1}{k}$, then $\mathcal{F}$ is contained in a translate of $\mathcal{K}(n,k+1)$. Without loss of generality, assume $\mathcal{F}$ is contained in $\mathcal{K}(n,k+1)$. Let $\mathcal{H} = \{H\in \mathcal{F}: |H| = k+1\}$. Note that $\mathcal{H}$ is an intersecting family, because the symmetric difference of any pair of subsets in $\mathcal{F}$ is bounded by $2k+1$. If $\mathcal{H}$ is a non-trivial intersecting family, then by the Hilton-Milner theorem we have \begin{equation*}
|\mathcal{F}|\le \sum_{i=0}^{k}\binom{n}{i}+|\mathcal{H}|\le \sum_{i=0}^{k}\binom{n}{i}+ \binom{n-1}{k}- \binom{n-k-2}{k} +1=2\sum\limits_{i=0}^{k}\binom{n-1}{i} - \binom{n-k-2}{k}+1. \end{equation*} Otherwise, there exists some $y\in[n]$ such that $y\in H$ for any $H\in \mathcal{H}$, which implies $\mathcal{F}$ is contained in $\mathcal{K}_{y}(n,k)$. This completes the proof.
\subsection{Families with bounded set-wise differences} In this section, we prove Theorem~\ref{thm:DifferenceStability} and then apply it to show Corollary~\ref{coro:main}. \begin{proof}[Proof of Theorem~\ref{thm:DifferenceStability}]
Let $\mathcal{F}=\{F_1,F_2,\ldots,F_m\} \subseteq 2^{[n]}$ be a family with rank $t$ and $|F_i\setminus F_j| \le k$ for any $F_i,F_j \in \mathcal{F}$. Without loss of generality, we assume that $|F_j|\ge |F_i|$ for any $i>j$. Then $|F_1| = t$, and since the sets are distinct, $|F_j\setminus F_i|\ge 1$ for any $i>j$.
Let $\boldsymbol a_{i},\boldsymbol b_{i}$ be the characteristic vectors of $F_i, F_i^c$, respectively. For $\boldsymbol x=(x_1,\ldots,x_n)$ and $i\in [m]$, we define $g_i(\boldsymbol x) =\prod_{j=1}^{k} (\boldsymbol x \cdot \boldsymbol b_i -j)$ and $f_i=\tilde{g}_{i}$. Here $\ker(f_{i})$ corresponds to characteristic vectors of subsets $F\subseteq [n]$ with $|F\setminus F_{i}|\in [k]$.
By definition, $f_i(\boldsymbol a_i) =(-1)^{k} k! \ne 0$ as $\boldsymbol a_{i}\cdot \boldsymbol b_{i}=0$ for each $i$. For $i>j$, we have $f_i(
\boldsymbol a_j)=0$ as $\boldsymbol a_{j}\cdot \boldsymbol b_{i}=|F_j\setminus F_i| \in [k]$. By Proposition~\ref{prop:triangular}, $\{f_i\}_{i=1}^{m}$ are linearly independent, which implies $m\le \sum_{j=0}^{k} \binom{n}{j}$.
Let $\mathcal{S}=\binom{[n]}{k}\setminus\partial_{k}{\mathcal{F}}$. For each subset $T\in \mathcal{S}$, we define a polynomial $h_T(\boldsymbol x) =\prod_{i\in T} x_i$. The following claim can be derived in the same way as Claim~\ref{claim:S1}.
\begin{claim}\label{claim:S4}
The polynomials $\{h_T\}_{T\in \mathcal{S}}$ and $\{f_i\}_{i=1}^{m}$ are linearly independent. \end{claim}
Claim~\ref{claim:S4} implies that $m \le \sum_{i=0}^{k} \binom{n}{i} - |\mathcal{S}|$. Thus, we shall assume $|\mathcal{S}| < \binom{n-t-2k}{k}$, otherwise the second alternative holds.
\begin{claim}\label{claim:S5}
For any pair of distinct $E,F\in \mathcal{F}$ with $|F|=t,|E \setminus F| =k$, we must have $|E| =t$ and $|F\setminus E| =k$. \end{claim}
\begin{poc}
Note that $\max_{i\in[m]}\{|F_i|\}=t$, thus we have $k\ge |F\setminus E|\ge |E\setminus F| \ge k$ and so $|F\setminus E|=k$. Also, $|E|=|E\cup F|-|F\setminus E|=|E\cup F|-|E\setminus F|=|F|=t$. \end{poc}
By our assumption, we have $\binom{|F_1^c|}{k}= \binom{n-t}{k} >|\mathcal{S}|$, hence there exists some $F\in \mathcal{F}$ such that $|F\cap F_1^c| =| F\setminus F_1| = k$. By Claim~\ref{claim:S5}, we have $|F|=t$ and $|F\cap F_1| =t-|F\setminus F_{1}|= t-k$. Let $F\cap F_1 =A$. We can further extract more structural properties of $\mathcal{F}$ as follows.
\begin{claim}\label{claim:S6}
For any $F' \in \mathcal{F}$, if $|F'\cap F_1^c|=k$, then we have $F'\cap F_1 = A$. \end{claim} \begin{poc}
Since $|F_{1}^{c}\setminus (F'\cup F)|\ge n-t-2k$ and $|\mathcal{S}|<\binom{n-t-2k}{k}$, there exist $T\in \binom{F_1^c\setminus (F'\cup F)}{k}$ and $E \in \mathcal{F}$ such that $T\subseteq E$. Since $|T|=k$, we have $E\setminus F'=E\setminus F = T \subseteq F_1^c$; by Claim~\ref{claim:S5}, we have $|E| =t$. Also note that $F\cap E\subseteq F_{1}$, so $|F\cap E| =|F\cap E\cap F_1|=t-k$, which implies $E\cap F_1=F\cap F_1=A$. Applying the same analysis to the (ordered) triple $(F',E,F_1)$ instead of $(E,F,F_1)$, we have $F'\cap F_1=E\cap F_1=A$ as desired. \end{poc}
Next, we will show that for any $F' \in \mathcal{F}$, we have $A \subseteq F' \cap F_1$, and so $\mathcal{F}$ is trivially $(t-k)$-intersecting with $A\subseteq\bigcap_{F'\in\mathcal{F}}F'$. Suppose there exists some $F'$ such that $A \setminus (F' \cap F_1) \ne \emptyset$. Since $|F_1^c \setminus F'| \ge n-t-k$ and $|\mathcal{S}|<\binom{n-t-2k}{k}$, there exist subsets $T\in \binom{F_1^c\setminus F'}{k}$ and $E \in \mathcal{F}$ such that $T\subseteq E$. By Claim~\ref{claim:S6} we know that $E\cap F_1 = A$, so $|E\setminus F'| = |T \cup (A \setminus F')| \ge k+1$, which is a contradiction.
Hence $A\subseteq F_i$ for all $i\in [m]$. Since $|F_i|\le t$, we have $|F_i\setminus A| \le k$, which implies that $m \le \sum_{i=0}^{k}\binom{n-(t-k)}{i}$. This completes the proof of Theorem~\ref{thm:DifferenceStability}. \end{proof}
\begin{proof}[Proof of Corollary~\ref{coro:main}]
Suppose $\mathcal{F}=\{F_1,F_2,\ldots,F_m\} \subseteq 2^{[n]}$ is a rank $t$ Sperner family such that $|F_i\setminus F_j| \le k$ for any $F_i,F_j \in \mathcal{F}$.
Suppose first that $t>\frac{k+n}{2}$. Since $|F_i\setminus F_j| \le k$ for any $F_i,F_j \in \mathcal{F}$, we know $\min_{i\in[m]}\{|F_i|\} \ge t-k$. Define $\mathcal{F}':=\{F_1^c,F_2^c,\ldots,F_m^c\}$.
Then $\max_{i\in[m]}\{|F_i^c|\}=n-\min_{i\in[m]}\{|F_i|\} \le n- (t-k)\le \frac{k+n}{2} $ and $|F_i^c\setminus F_j^c|=|F_j\setminus F_i|\le k$ for any $F_i^c,F_j^c \in \mathcal{F}'$, so by passing to $\mathcal{F}'$ if necessary we may assume $t\le\frac{k+n}{2}$.
If $t>k$, by Theorem~\ref{thm:DifferenceStability}, we have either
\begin{equation*}
|\mathcal{F}| \le \sum_{i=0}^{k} \binom{n}{i} -\binom{\frac{n-5k}{2}}{k} < \binom{n}{k},
\end{equation*}
when $n\gg k$, or there exists $A$ with $|A|=t-k$ and $A\subseteq F_i$ for any $i\in[m]$, which implies that $\{F_i \setminus A\}_{i=1}^{m}$ is a Sperner family whose members have size at most $k$.
By Theorem~\ref{thm:LYM}, we have
\begin{equation*}
1\ge \sum_{F\in \mathcal{F}} \frac{1}{\binom{n-(t-k)}{|F\setminus A|}} \ge \sum_{F\in \mathcal{F}} \frac{1}{\binom{n-(t-k)}{k}},
\end{equation*}
which implies $|\mathcal{F}| \le \binom{n-(t-k)}{k} <\binom{n}{k}$.
If $t\le k$, then by Theorem~\ref{thm:LYM}, we have
$$1\ge \sum_{F\in \mathcal{F}} \frac{1}{\binom{n}{|F|}} \ge \sum_{F\in \mathcal{F}} \frac{1}{\binom{n}{k}},$$ which implies $|\mathcal{F}| \le \binom{n}{k}$, with equality if and only if $\mathcal{F} = \binom{[n]}{k}$. \end{proof}
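Theorem~\ref{thm:LYM} (the LYM inequality) is invoked twice in the proof above. As an illustrative sketch (not part of the argument), it can be verified exhaustively for every family of subsets of a 3-element ground set, encoding subsets as bitmasks:

```python
from math import comb

n = 3
N = 1 << n  # the 8 subsets of [3] as bitmasks

def is_antichain(fam):
    # no member is properly contained in another (a & b == a means a is a subset of b)
    return all(a == b or (a & b != a and a & b != b)
               for a in fam for b in fam)

def lym_sum(fam):
    return sum(1 / comb(n, bin(s).count("1")) for s in fam)

# exhaustive check of the LYM inequality over all 2^8 families of subsets of [3]
for mask in range(1 << N):
    fam = [s for s in range(N) if mask >> s & 1]
    if is_antichain(fam):
        assert lym_sum(fam) <= 1 + 1e-9
```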
\section{Sperner families with restricted intersections}\label{sec:Snevily} In this section, we will prove Theorem~\ref{thm :0} and Theorem~\ref{thm:large}, respectively. \subsection{Proof of Theorem~\ref{thm :0}} Let $L=\{\ell_{1},\ell_{2},\ldots,\ell_{s}\}$ be a set of $s$ non-negative integers with $\ell_{1}<\ell_{2}<\cdots<\ell_{s}$. Let $\mathcal{F} = \{F_1,\ldots,F_m\}$ be an $L$-intersecting Sperner family. By relabelling, we can assume there exists some integer $r$ such that $1\in F_i$ if and only if $1\le i\le r$. For each $1\leq i\leq m$, let $\boldsymbol a_{i}$ be the characteristic vector of $F_i$.
For $\boldsymbol x=(x_1,\ldots,x_n)$ and $i\in [m]$, we define $g_i(\boldsymbol x) =\prod_{\ell \in L, \ell \ne |F_i|} (\boldsymbol x \cdot \boldsymbol a_i -\ell)$ and let $f_i(\boldsymbol x)$ be the multilinear reduction of $g_i(\boldsymbol x)$ with $x_{1}=1$; that is, $f_i(\boldsymbol x)$ is obtained from $g_{i}(\boldsymbol{x})$ by replacing each $x_{j}^{t}$ by $x_{j}$ for $2\le j\le n$ and $t\ge 2$, and each $x_{1}^{t}$ by $1$ for $t\ge 1$.
Note that $\boldsymbol a_i \cdot \boldsymbol a_j = |F_i\cap F_j|$. Since we replace $x_{1}^{t}$ with $1$ in the multilinear reduction defining $f_{i}$, evaluating $f_{i}(\boldsymbol{a}_{j})$ acts as if $1\in F_{j}$, and hence
$$f_i(\boldsymbol a_j)= \prod_{\ell \in L, \ell \ne |F_i|} (|F_i\cap (F_j\cup\{1\})| -\ell).$$
Moreover, observe that if $i>j$ and $r\ge i$, then $|F_{i}\cap (F_{j}\cup\{1\})|=|F_{i}\cap F_{j}|$ as $1\in F_{j}$, and if $i>j$ and $r<i$, then $|F_{i}\cap (F_{j}\cup\{1\})|=|F_{i}\cap F_{j}|$ also holds as $1\notin F_{i}$. Therefore, if $i>j$, we have $f_i(\boldsymbol a_j)= \prod_{\ell \in L, \ell \ne |F_i|} (|F_i\cap F_j| -\ell)$. Since $\mathcal{F}$ is an $L$-intersecting Sperner family, we know $|F_i\cap F_j| \in L\setminus \{|F_i|\}$, which implies that $f_i(\boldsymbol a_j) =0$ for $i>j$. If $i=j$, then $f_i(\boldsymbol a_j)= \prod_{\ell \in L, \ell \ne |F_i|} (|F_i| -\ell) \ne 0$. By Proposition~\ref{prop:triangular}, the polynomials $\{f_i\}_{i=1}^{m}$ are linearly independent, which implies $m\le \sum_{j=0}^{s} \binom{n-1}{j}$ since for $i\in [m]$, $f_i(\boldsymbol x)$ has $n-1$ variables $x_2,x_3,\ldots,x_n$ and degree at most $s$. The first result then follows.
For the second part, let $\ell_{1}=0\in L$. We will prove $|\mathcal{F}| \le \binom{n}{s}$ by induction on $s$, where $s=|L|$ and $n\ge 3s^{2}$.
When $s=1$, by the first result, we have $|\mathcal{F}| \le \sum_{j=0}^{s} \binom{n-1}{j} = \binom{n}{1}$, and it is easy to see that equality holds if and only if $\mathcal{F}={[n]\choose 1}$.
Now, suppose $s\ge 2$ and the result holds for any positive integer less than $s$. If $L=\{0,1,2,\ldots,s-1\}$, then for any $S\in \binom{[n]}{s}$, there exists at most one set in $\mathcal{F}$ containing $S$. Construct a new family $\mathcal{F'}:=\{F'_{1},\ldots,F'_{m}\}$, where $F'_i$ is an arbitrary subset of $F_i$ with size $s$ if $|F_{i}|> s$, and $F'_i = F_i$ otherwise. Since $\mathcal{F}$ is a Sperner family, $\mathcal{F'}$ is also a Sperner family with $|F'_i|\le s$. Then by Theorem~\ref{thm:LYM}, we have $m\le \binom{n}{s}$. It is easy to see that the equality holds if and only if $\mathcal{F} = \mathcal{F}' = \binom{[n]}{s}$. Next, we assume that $L\ne \{0,1,2,\ldots,s-1\}$.
Let $\mathcal{H}= \{F : F \in \mathcal{F}, |F|\le s\}$ and $L' = L\cap \{0,1,2,\ldots, s-1\}$. Then $\mathcal{H}$ is an $L'$-intersecting Sperner family, with $0\in L'$ and $|L'|\le s-1$. By the induction hypothesis, we have $|\mathcal{H}|\le \binom{n}{s-1}$.
For each $x\in [n]$, let $\mathcal{F}_x=\{F\in \mathcal{F}: x\in F \}$, and $L\setminus\{0\} = \{\ell_2,\ell_3,\ldots,\ell_s\}$. Then $\mathcal{F}_x$ is an $L\setminus\{0\}$-intersecting Sperner family with $|L\setminus\{0\}|\le s-1$, and by the first part $|\mathcal{F}_x| \le \sum_{i=0}^{s-1}\binom{n-1}{i}$. Then via double counting, we have
\begin{equation*}
(m-|\mathcal{H}|)(s+1) \le \sum_{x\in[n]}|\mathcal{F}_x| \le n \sum_{i=0}^{s-1}\binom{n-1}{i}.
\end{equation*}
Since $n\ge 3s^2$ and $s\ge 2$, we have $\sum_{i=0}^{s-2} \binom{n-1}{i} \le \binom{n-1}{s-2}\cdot \sum_{i=0}^{s-2} 2^{-i} \le 2 \binom{n-1}{s-2} $. Therefore, we have
\begin{align*}
m&\le |\mathcal{H}|+\frac{n}{s+1}\sum_{i=0}^{s-1}\binom{n-1}{i}
\le \binom{n}{s-1} +\frac{n}{s+1} \binom{n-1}{s-1} + \frac{n}{s+1}\sum_{i=0}^{s-2} \binom{n-1}{i} \\
&\le \binom{n}{s-1} +\frac{s}{s+1} \binom{n}{s} + \frac{2n}{s+1}\binom{n-1}{s-2} = \binom{n}{s-1} +\frac{s}{s+1} \binom{n}{s} + \frac{2(s-1)}{s+1}\binom{n}{s-1}\\ &= \bigg(\frac{3s-1}{s+1}\cdot\frac{s}{n-s+1} + \frac{s}{s+1}\bigg)\binom{n}{s} \le \bigg(\frac{1}{s+1}\cdot\frac{3s^2-s}{3s^2-s+1} + \frac{s}{s+1}\bigg)\binom{n}{s} < \binom{n}{s}.
\end{align*}
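The estimate $\sum_{i=0}^{s-2}\binom{n-1}{i}\le 2\binom{n-1}{s-2}$ used above rests on the ratio $\binom{n-1}{i-1}/\binom{n-1}{i}=i/(n-i)\le 1/2$ for $i\le s-2$ when $n\ge 3s^{2}$. A numeric spot check (illustrative only):

```python
from math import comb

for s in range(2, 10):
    n = 3 * s * s  # the threshold assumed in the proof
    # successive ratios are at most 1/2, so the sum is dominated by twice its top term
    assert sum(comb(n - 1, i) for i in range(s - 1)) <= 2 * comb(n - 1, s - 2)
```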
\subsection{Proof of Theorem~\ref{thm:large}}
Let $T\in\binom{[n]}{\ell_{1}}$ be a set maximizing $|\{F\in\mathcal{F}: T\subseteq F\}|$ among all sets in $\binom{[n]}{\ell_{1}}$. Let $\mathcal{H} = \{ F\in \mathcal{F}: T\subseteq F\}$. If $\mathcal{H}=\mathcal{F}$, then $\C F$ is trivially $\ell_1$-intersecting; since $|T|=\ell_{1}\ge 1$, we have $\{F\setminus T :F\in\mathcal{F}\}\subseteq 2^{[n]\setminus T}$, and so by Theorem~\ref{thm :0},
\begin{equation*}
|\mathcal{F}|=\big|\{F\setminus T :F\in\mathcal{F}\}\big| \le \sum_{i=0}^{s} \binom{n-2}{i} = \binom{n}{s} -\binom{n-1}{s-1} +\sum_{i=0}^{s-2} \binom{n-2}{i} = \binom{n}{s}-\Omega_{s}(n^{s-1}).
\end{equation*}
We may then assume $\C H\neq \C F$ and so there exists some $E \in \mathcal{F}$ with $T \not\subseteq E$. For any $F\in \mathcal{H}$ we have $|F\cap E| \ge \ell_1$, so $(E\setminus T)\cap F\ne \emptyset$, for otherwise $T\subseteq E$. Consequently, for each $x\in E\setminus T$, letting $\mathcal{F}_x=\{ F: F\in\mathcal{H}, x\in F \}$, we have $\mathcal{H}\subseteq \bigcup_{x\in E\setminus T} \mathcal{F}_x$. Since for any sets $A,B \in \mathcal{F}_x$ we have $\{x\}\cup T \subseteq A\cap B$, $\mathcal{F}_x$ is an $\{\ell_2,\ell_3,\ldots,\ell_s\}$-intersecting Sperner family. Then, by Theorem~\ref{thm :0}, we have $|\mathcal{F}_x| \le \sum_{i=0}^{s-1}\binom{n-1}{i}$ and $|\mathcal{H}| \le M \sum_{i=0}^{s-1}\binom{n-1}{i}$.
Take an arbitrary set $X\in\mathcal{F}$. For any set $F \in \mathcal{F}$, we have $|X\cap F| \ge \ell_1$, thus we can cover the whole family $\mathcal{F}$ by subfamilies $\mathcal{H}_{Y}$, $Y\in \binom{X}{\ell_{1}}$, assigning each member of $\mathcal{F}$ to some $\mathcal{H}_{Y}$ with $Y$ an $\ell_{1}$-subset of $X$ that it contains. Note that for each such $\C H_Y$, $|\mathcal{H}_{Y}|\le |\mathcal{H}|$ by the maximality of $T$. As $|X|\le M$, we have $|\mathcal{F}| \le \binom{M}{\ell_1}|\mathcal{H}| =O_{M,s}(n^{s-1})$.
\end{document}
\begin{document}
\title{Graphs whose normalized Laplacian matrices are separable as density matrices in quantum mechanics} \author{Chai Wah Wu\footnote{e-mail: chaiwahwu@member.ams.org}\\IBM T. J. Watson Research Center\\P. O. Box 218, Yorktown Heights, NY 10598, USA.} \date{July 23, 2014} \maketitle
\begin{abstract} Recently, normalized Laplacian matrices of graphs have been studied as density matrices in quantum mechanics. Separability and entanglement of density matrices are important properties as they determine the nonclassical behavior in quantum systems. In this note we look at the graphs whose normalized Laplacian matrices are separable or entangled. In particular, we show that the number of such graphs is related to the number of $0$-$1$ matrices that are line sum symmetric and to the number of graphs with at least one vertex of degree $1$. \end{abstract}
\section{Introduction} Applications of quantum mechanics in information technology such as quantum teleportation, quantum cryptography and quantum computing \cite{nielsen-quantum-2002} have led to much recent interest in studying entanglement in quantum systems. One important problem is to determine whether a given state operator is entangled or not. This is especially difficult for mixed state operators. In Refs. \cite{braunstein:laplacian:2006,braunstein:laplacian_graph:2006,wu:separable:2006,wang:tripartite:2007,wu:multipartite:2009,wu:separable:2010}, normalized Laplacian matrices of graphs are considered as density matrices, and their entanglement properties are studied. The reason for studying this subclass of density matrices is that simpler and stronger conditions for entanglement and separability can be found and graph theory may shed light on the entanglement properties of state operators. In this note, we continue this study and determine the number of graphs that result in separable or entangled density matrices.
\section{Density matrices, separability, and partial transpose} A state of a finite dimensional quantum mechanical system is described by a state operator or a density matrix $\rho$ acting on ${\mathbb C}^n$ which is Hermitian and positive semidefinite with unit trace. A state operator is called a {\em pure} state if it has rank one. Otherwise the state operator is {\em mixed}. An $n$ by $n$ density matrix $\rho$ is separable in ${\mathbb C}^p\otimes {\mathbb C}^q$ with $n=pq$ if it can be written as $\sum_{i} c_i \rho_i \otimes \eta_i$ where $\rho_i$ are $p$ by $p$ density matrices and $\eta_i$ are $q$ by $q$ density matrices with $\sum_i c_i = 1$ and $c_i\geq 0$.\footnote{This definition can be extended to composite systems of multiple states, but here we only consider decomposition into the tensor product of two component states.} A density matrix that is not separable is called entangled. Entangled states are necessary to invoke behavior that cannot be explained using classical physics and enable novel applications.
We denote the $(i,j)$-th element of a matrix $A$ as $A_{ij}$. Let $f$ be the canonical bijection between $\{1,\dots, p\}\times \{1,\dots,q\}$ and $\{1,\dots, pq\}$: $f(i,j) = (i-1)q+j$. For a $pq$ by $pq$ matrix $A$, if $f(i,j) = k$ and $f(i_2,j_2) = l$, we can write $A_{kl}$ as $A_{(i,j)(i_2,j_2)}$. \begin{definition} The $(p,q)$-partial transpose $A^{PT}$ of an $n$ by $n$ matrix $A$, where $n=pq$, is given by: \[ A_{(i,j)(k,l)}^{PT} = A_{(i,l)(k,j)} \] \end{definition} We remove the prefix ``$(p,q)$'' if $p$ and $q$ are clear from context. In matrix form, the partial transpose is constructed by decomposing $A$ into $p^2$ blocks \begin{equation}\label{eqn:A}
A = \left(\begin{array}{cccc} A^{1,1} & A^{1,2} &\cdots & A^{1,p} \\ A^{2,1} & A^{2,2} & \cdots & A^{2,p} \\ \vdots & \vdots & & \vdots \\ A^{p,1} & A^{p,2} & \cdots & A^{p,p} \end{array}\right) \end{equation} where each $A^{i,j}$ is a $q$ by $q$ matrix, and $A^{PT}$ is given by: \begin{equation}\label{eqn:Apt}
A^{PT} = \left(\begin{array}{cccc} (A^{1,1})^T & (A^{1,2})^T &\cdots & (A^{1,p})^T \\ (A^{2,1})^T & (A^{2,2})^T & \cdots & (A^{2,p})^T \\ \vdots & \vdots & & \vdots \\ (A^{p,1})^T & (A^{p,2})^T & \cdots & (A^{p,p})^T \end{array}\right) \end{equation}
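The block form of the definition can be implemented directly. The following Python sketch (a hypothetical helper, not from the text) transposes each $q\times q$ block of a $pq\times pq$ matrix, and checks that the partial transpose is an involution:

```python
def partial_transpose(A, p, q):
    """(p,q)-partial transpose of a pq-by-pq matrix A (list of lists):
    each q-by-q block A^{i,j} is transposed in place."""
    n = p * q
    B = [[0] * n for _ in range(n)]
    for bi in range(p):
        for bj in range(p):
            for r in range(q):
                for c in range(q):
                    # entry (r, c) of block (bi, bj) of A^{PT} is entry (c, r) of A
                    B[bi * q + r][bj * q + c] = A[bi * q + c][bj * q + r]
    return B

A = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
APT = partial_transpose(A, 2, 2)
assert APT[0][3] == 7                      # block A^{1,2} = [[3,4],[7,8]] was transposed
assert partial_transpose(APT, 2, 2) == A   # the partial transpose is an involution
```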
\subsection{Necessary conditions for separability of density matrices} It is clear that if $A$ is Hermitian, then so is $A^{PT}$. Peres \cite{peres:separability:1996} introduced the following necessary condition for separability: \begin{theorem} \label{thm:peres} If a density matrix $\rho$ is separable, then $\rho^{PT}$ is positive semidefinite, i.e. $\rho^{PT}$ is a density matrix. \end{theorem} Horodecki et al. \cite{horodecki:separability:1996} showed that this condition is sufficient for separability in ${\mathbb C}^2\otimes {\mathbb C}^2$ and ${\mathbb C}^2\otimes {\mathbb C}^3$, but not for other tensor products. The requirement that a density matrix have a positive semidefinite partial transpose is often referred to as the Peres-Horodecki condition for separability.
In \cite{wu:separable:2006} it was shown that when restricted to zero row sum density matrices, we have a weaker form of the Peres-Horodecki condition that is easier to verify. \begin{theorem} \label{thm:zerosums} If a density matrix $A$ with zero row sums is separable, then $A^{PT}$ has zero row sums. \end{theorem}
\section{Normalized Laplacian matrices as density matrices} For a Laplacian matrix $A$ of a nonempty graph, $\frac{1}{\operatorname{Tr}(A)}A$ is symmetric positive semidefinite with trace $1$ and thus can be viewed as a density matrix of a quantum system. In \cite{braunstein:laplacian_graph:2006} it was shown that a necessary condition for separability of a normalized Laplacian matrix is that the vertex degrees of the graph and its partial transpose are the same for each vertex. This condition is equivalent to row sums of $A^{PT}$ being $0$. In \cite{wu:separable:2006} it was shown that this condition is also sufficient for separability in ${\mathbb C}^2\otimes {\mathbb C}^q$. Note that separability of the normalized Laplacian matrix is not invariant under graph isomorphism. Therefore the vertex numbering is important in determining separability; i.e. we consider labeled graphs. For labeled graphs of $n$ vertices, there are $2^{\frac{n(n-1)}{2}}$ different Laplacian matrices to consider. Since the Laplacian of the empty graph has trace $0$ and cannot be considered a density matrix, we only need to look at $L(n) = 2^{\frac{n(n-1)}{2}} - 1$ different matrices.
\section{A sufficient condition for separability of normalized Laplacian matrices}
\begin{definition} A square matrix is {\em line sum symmetric} if the $i$-th column sum is equal to the $i$-th row sum for each $i$. \end{definition}
\begin{theorem}[\cite{wu:separable:2006}]\label{thm:sufficient} A normalized Laplacian matrix $A$ is separable in ${\mathbb C}^p\otimes {\mathbb C}^q$ if $A^{i,j}$ in Eq. (\ref{eqn:A}) is line sum symmetric for all $i$,$j$. \end{theorem}
For $V_1$ and $V_2$ disjoint subsets of vertices of a graph, let $e(V_1,V_2)$ denote the number of edges between $V_1$ and $V_2$. A graphical interpretation of Theorem \ref{thm:sufficient} is that by splitting the $pq$ vertices into $p$ groups $V_i$ of $q$ vertices, where $V_i = \{(i-1)q+1, (i-1)q+2, ..., iq\}$, the normalized Laplacian matrix of a graph ${\cal G}$ is separable in ${\mathbb C}^p\otimes {\mathbb C}^q$ if for each $j\neq i$ and for each $1\leq m\leq q$, $e(v,V_j) = e(w,V_i)$ where $v$ is the $m$-th vertex in $V_i$ and $w$ is the $m$-th vertex in $V_j$. This is illustrated in Fig. \ref{fig:sufficient} for the case $p=2$.
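As an illustration (a minimal sketch with ad hoc helper names), the $4$-cycle across $V_1=\{1,2\}$ and $V_2=\{3,4\}$ satisfies this edge-count condition, so every block of its Laplacian is line sum symmetric and Theorem \ref{thm:sufficient} certifies separability in ${\mathbb C}^2\otimes {\mathbb C}^2$:

```python
def block(A, q, i, j):
    # the q-by-q block A^{i,j} of A
    return [row[j * q:(j + 1) * q] for row in A[i * q:(i + 1) * q]]

def line_sum_symmetric(M):
    return all(sum(M[i]) == sum(row[i] for row in M) for i in range(len(M)))

# 4-cycle with V1 = {0, 1}, V2 = {2, 3} (0-indexed): each vertex of V1 sends
# as many edges to V2 as its counterpart in V2 sends to V1
edges = [(0, 2), (0, 3), (1, 2), (1, 3)]
n, p, q = 4, 2, 2
adj = [[0] * n for _ in range(n)]
for u, v in edges:
    adj[u][v] = adj[v][u] = 1
deg = [sum(row) for row in adj]
L = [[(deg[i] if i == j else 0) - adj[i][j] for j in range(n)] for i in range(n)]

# hypothesis of the sufficiency theorem holds for every block
assert all(line_sum_symmetric(block(L, q, i, j)) for i in range(p) for j in range(p))
```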
\begin{figure}
\caption{A sufficient condition for separability for the case $p=2$ is that the number of edges from vertex $v_i$ to $V_2$ is the same as the edges from vertex $v_{q+i}$ to $V_1$ for each $i$.}
\label{fig:sufficient}
\end{figure}
\begin{definition} Let $L_s(p,q)$ be the number of normalized Laplacian matrices of graphs of $n$ vertices that are separable under ${\mathbb C}^p\otimes {\mathbb C}^q$ where $n = pq$. Let $L_e(p,q)$ be the number of normalized Laplacian matrices of graphs of $n$ vertices that are entangled under ${\mathbb C}^p\otimes {\mathbb C}^q$. \end{definition}
It is clear that $L_s(p,q) + L_e(p,q) = L(pq)$.
\subsection{Upper and lower bounds for $L_s$ and $L_e$}
\begin{definition} Let ${\cal N}_s(n)$ denote the set of $n$ by $n$ $0$-$1$ matrices that are line sum symmetric. Let $N_s(n)$ denote the cardinality of the set ${\cal N}_s(n)$. Let ${\cal N}_e(n)$ denote the set of $n$ by $n$ $0$-$1$ matrices that are not line sum symmetric. Let $N_e(n)$ denote the cardinality of the set ${\cal N}_e(n)$. \end{definition} Clearly $N_s(n)+N_e(n) = 2^{n^2}$. The first few values of $N_s(n)$ can be found in \url{https://oeis.org/A229865}.
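The first values of $N_s(n)$ are easy to reproduce by brute force; for $2\times 2$ matrices the line sum condition reduces to equal off-diagonal entries, so $N_s(2)=2^3=8$. An illustrative sketch:

```python
from itertools import product

def count_line_sum_symmetric(n):
    """N_s(n): number of n-by-n 0-1 matrices whose i-th row and column sums agree."""
    count = 0
    for entries in product((0, 1), repeat=n * n):
        M = [entries[i * n:(i + 1) * n] for i in range(n)]
        if all(sum(M[i]) == sum(M[r][i] for r in range(n)) for i in range(n)):
            count += 1
    return count

assert count_line_sum_symmetric(1) == 2   # every 1x1 matrix qualifies
assert count_line_sum_symmetric(2) == 8   # off-diagonal entries must be equal
# complement count: N_e(n) = 2^(n^2) - N_s(n)
assert 2 ** 4 - count_line_sum_symmetric(2) == 8
```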
We now show how bounds for $L_s$ (and $L_e$) can be derived from $N_s$. \begin{theorem} \[ L_s(p,q) \geq 2^\frac{pq(q-1)}{2}N_s(q)^{\frac{p(p-1)}{2}} -1 \] \end{theorem} {\em Proof: } Let $A$ be the normalized Laplacian matrix of a graph. Since $A$ is symmetric, in the decomposition in Eq. (\ref{eqn:A}), $A^{i,i}$ is symmetric and $A^{i,j} = (A^{j,i})^T$. Therefore to apply Theorem \ref{thm:sufficient} we only need to check that $A^{i,j}$ is line sum symmetric for $j> i$. There are ${\frac{p(p-1)}{2}}$ such submatrices and thus $N_s(q)^{\frac{p(p-1)}{2}}$ possible combinations. The remaining entries in the strictly upper triangular portion of $A$ correspond to $\frac{pq(pq-1)}{2} - \frac{p(p-1)}{2}q^2 = \frac{pq(q-1)}{2}$ elements, which give $2^\frac{pq(q-1)}{2}$ combinations. This gives a total of $2^\frac{pq(q-1)}{2}N_s(q)^{\frac{p(p-1)}{2}}$ combinations. Finally we need to subtract $1$ for the zero matrix corresponding to the empty graph.
$\Box$
\begin{definition} Let ${\cal M}_n(i)$ denote the set of symmetric $n$ by $n$ $0$-$1$ matrices such that \begin{itemize} \item There is at least one row with a single $1$. \item The diagonal entries are $0$. \item There are $2i$ nonzero elements in the matrix. \end{itemize} Let $M_n(i)$ denote the cardinality of the set ${\cal M}_n(i)$. \end{definition}
Some values of $M_n(i)$ are shown in Table \ref{tbl:mni}. It is clear that ${\cal M}_n(i)$ is the set of adjacency matrices of labeled graphs of $n$ vertices and $i$ edges with at least one vertex of degree $1$ and $M_n(i)$ is the number of such graphs.
\begin{theorem} $M_n(i) = 0$ if $i > \frac{(n-1)(n-2)}{2} + 1$. For $j\geq 0$, $n\geq 4+j$, \[M_n\left(\frac{(n-1)(n-2)}{2} - j + 1\right) = n(n-1)\left(\begin{array}{c}\frac{(n-1)(n-2)}{2}\\j\end{array}\right).\] For $i\leq 3$, $M_n(i)$ is equal to the number of labeled bipartite graphs with $n$ vertices and $i$ edges.\footnote{See \url{https://oeis.org/A000217},\url{https://oeis.org/A050534}, \url{https://oeis.org/A053526}.} In particular, $M_n(1) = \frac{n(n-1)}{2}$, $M_n(2) = \frac{(n+1)n(n-1)(n-2)}{8}$, $M_n(3) = \frac{((n+1)(n+2)+2)n(n-1)(n-2)(n-3)}{48}$. \end{theorem} {\em Proof: } Clearly the maximum number of edges in a graph with at least one vertex of degree $1$ is achieved with a single vertex $v$ connected to a vertex $w$ in a clique of $(n-1)$ vertices, and this graph has $\frac{(n-1)(n-2)}{2} + 1$ edges. If $n\geq 4+j$ and a graph $\cal G$ with $\frac{(n-1)(n-2)}{2} - j + 1$ edges has at least one vertex of degree $1$, then this graph will have a single vertex $v$ connected to a vertex $w$ in a graph ${\cal W}$ consisting of $n-1$ vertices and $\frac{(n-1)(n-2)}{2} - j$ edges, i.e. $\cal W$ is a clique of $n-1$ vertices minus $j$ edges. Each vertex of $\cal W$ has degree $\geq n-2-j \geq 2$, i.e. $\cal W$ does not include a vertex of degree $1$. In this case ${\cal G}$ is uniquely defined by the vertices $v$ and $w$ and the graph $\cal W$. There are $n(n-1)$ such pairs of vertices $(v,w)$ and there are $\left(\begin{array}{c}\frac{(n-1)(n-2)}{2}\\j\end{array}\right)$ labeled graphs with $n-1$ vertices and $\frac{(n-1)(n-2)}{2} - j$ edges.
Let ${\cal G}$ be a graph of $n$ vertices with at least one vertex of degree $1$ and at most $3$ edges. Since there are no cliques of $3$ vertices, it is a bipartite graph. Similarly if ${\cal G}$ is a bipartite graph of at most $3$ edges, the absence of cliques of $3$ vertices means that there is a vertex of degree $1$.
$\Box$
\begin{table} \begin{center} \small
\begin{tabular}{c|*{8}r} $n$ & $M_n(1)$ & $M_n(2)$ & $M_n(3)$ & $M_n(4)$ &$M_n(5)$ & $M_n(6)$ & $M_n(7)$ & $M_n(8)$ \\ \hline\hline $2$ & $1$ &&&&&&& \\ \hline $3$ &$3$ & $3$ &&&&&& \\\hline $4$ & $6$ & $15$ & $16$ & $12$ &&&&\\\hline $5$ & $10$& $45$& $110$& $195$ & $210$& $120$ &$20$ &\\\hline $6$&$15$&$105$&$435$&$1320$&$2841$&$4410$&$4845$&$3360$\\\hline $7$& $21$ & $210$ & $1295$& $5880$ & $19887$ & $51954$ & $106785$ &$171360$\\\hline $8$&$28$& $378$& $3220$ &$20265$& $97188$& $369950$ &$1147000$& $2931138$\\\hline \end{tabular} \end{center} \caption{Values of $M_n(i)$.} \label{tbl:mni} \end{table}
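The entries of Table \ref{tbl:mni} can be reproduced by exhaustive enumeration of labeled graphs. The sketch below (hypothetical helper name) checks the $n=4$ row and the formula $M_n(1)=\frac{n(n-1)}{2}$:

```python
from itertools import combinations

def count_deg1(n, i):
    """M_n(i): labeled n-vertex graphs with i edges and at least one degree-1 vertex."""
    pairs = list(combinations(range(n), 2))
    count = 0
    for es in combinations(pairs, i):
        deg = [0] * n
        for u, v in es:
            deg[u] += 1
            deg[v] += 1
        if 1 in deg:
            count += 1
    return count

assert [count_deg1(4, i) for i in range(1, 5)] == [6, 15, 16, 12]  # n = 4 row of the table
assert count_deg1(5, 1) == 10  # M_n(1) = n(n-1)/2
```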
\begin{theorem} \[L_e(p,q) \geq \sum_{i=1}^{\frac{(p-1)(p-2)}{2}+1}M_p(i)N_e(q)^iN_s(q)^{\frac{p(p-1)}{2}-i}2^{\frac{pq(q-1)}{2}}\] \end{theorem} {\em Proof: } Consider the mapping where we replace each submatrix $A^{i,j}$ with $1$ if it is an element of ${\cal N}_e(q)$ and $0$ otherwise. This results in a $p$ by $p$ $0$-$1$ matrix $B$. If $B$ is in ${\cal M}_p(k)$ for some $k$, then $A$ has a row of submatrices $A^{i,j}$ that are all line sum symmetric except for one, and this implies that $A^{PT}$ does not have zero row sums and thus $A$ is not separable by Theorem \ref{thm:zerosums}. There are $M_p(k)N_e(q)^kN_s(q)^{\frac{p(p-1)}{2}-k}$ such combinations. As before, there are $\frac{pq(q-1)}{2}$ remaining locations in the strictly upper triangular part of $A$ that are not occupied by $A^{i,j}$ for some $i\neq j$, giving the factor $2^{\frac{pq(q-1)}{2}}$.
$\Box$
\begin{corollary} \[ L_s(p,q) \leq L(pq) - \sum_{i=1}^{\frac{(p-1)(p-2)}{2}+1}M_p(i)N_e(q)^iN_s(q)^{\frac{p(p-1)}{2}-i}2^{\frac{pq(q-1)}{2}} \] \[ L_e(p,q) \leq L(pq) - 2^\frac{pq(q-1)}{2}N_s(q)^{\frac{p(p-1)}{2}} +1 \] \end{corollary}
It is easy to show that the upper and lower bounds for the case $p=2$ coincide; this corresponds to the fact that the separability (and entanglement) condition is both sufficient and necessary in ${\mathbb C}^2\otimes {\mathbb C}^q$ \cite{wu:separable:2006}. In particular,
\begin{theorem} \[ L_s(2,q) = 2^{q(q-1)}N_s(q) - 1\] \[ L_e(2,q) = 2^{q(q-1)}N_e(q) \] \end{theorem} {\em Proof: } The upper bound for $L_s(2,q)$ is equal to \[ L(2q) - N_e(q)2^{q(q-1)} = 2^{q(2q-1)}-1-(2^{q^2}-N_s(q))2^{q(q-1)} = N_s(q)2^{q(q-1)}-1 \] which is also the lower bound for $L_s(2,q)$.
$\Box$
\end{document}
\begin{document}
\title[Sum Rules and Spectral Measures] {Sum Rules and Spectral Measures of Schr\"odinger Operators with $L^2$ Potentials} \author[R. Killip and B. Simon]{Rowan Killip$^{1}$ and Barry Simon$^{2}$}
\thanks{$^1$ Department of Mathematics, UCLA, Los Angeles, CA 90095. E-mail: killip@math.ucla.edu. Supported in part by NSF grant DMS-0401277 and a Sloan Foundation Fellowship.} \thanks{$^2$ Mathematics 253-37, California Institute of Technology, Pasadena, CA 91125. E-mail: {\nobreak bsimon@caltech.edu}. Supported in part by NSF grant DMS-0140592 and in part by Grant No.\ 2002068 from the United States--Israel Science Foundation, Jerusalem, Israel}
\date{August 30, 2006}
\begin{abstract} Necessary and sufficient conditions are presented for a positive measure to be the spectral measure of a half-line Schr\"odinger operator with square integrable potential. \end{abstract}
\maketitle
\section{Introduction} \label{s1}
In this paper, we will discuss which measures occur as the spectral measures for half-line Schr\"odinger operators with certain decaying potentials. Let us begin with the appropriate definitions.
A potential $V\in L^2_\text{\rm{loc}}(\Reals^+)$ (where $\Reals^+=[0,\infty)$) is said to be limit point at infinity if \begin{equation} H=-\frac{d^2\ }{dx^2} + V(x) \end{equation} together with a Dirichlet boundary condition at the origin, $u(0)=0$, defines a selfadjoint operator on $L^2(\Reals^+)$, without the need for a boundary condition at infinity. This is what we will mean by a Schr\"odinger operator. (In some sections, we will also treat $V\in L^1_\text{\rm{loc}}$ where we feel that this generality may be of use to others.)
The spectral theory of such operators was first described by Weyl and subsequently refined by many others. We will now sketch the parts of this theory that are required to state our results; fuller treatments can be found elsewhere (e.g., \cite{CL,RS2,Titchmarsh}).
The name `limit point' was coined by Weyl for the following property, which is equivalent to that given above: for all $z\in{\mathbb{C}}\setminus\Reals$ there exists a unique function $\psi\in L^2(\Reals^+)$ so that $-\psi'' + V\psi=z\psi$ and $\psi(0)=1$. The value of $\psi'(0)$ is denoted $m(z)$ and is termed the (Weyl) $m$-function. It is an analytic function of $z$. Of course, by homogeneity, one has that \begin{equation}\label{mDefn} m(z) = \frac{\psi'(0)}{\psi(0)} \end{equation} where $\psi$ is \emph{any} non-zero $L^2$ solution of $-\psi'' + V\psi=z\psi$. This will prove the more convenient definition.
Simple Wronskian calculations show that $m(z)$ has a positive imaginary part whenever $\Im(z)>0$. Therefore, by the Herglotz Representation Theorem, there is a unique positive measure $d\rho$ so that \begin{equation}\label{HRTa}
\int \frac{d\rho(E)}{1+E^2} < \infty \end{equation} and \begin{equation}\label{HRT} m(z) = \int \ \biggl[ \frac{1}{E-z} - \frac{E}{1+E^2} \biggr] \, d\rho(E) + \Re m(i). \end{equation} Uniqueness follows from the fact that \begin{equation} \label{RBV} d\rho(E) =\wlim_{\varepsilon\downarrow 0}\, \tfrac{1}{\pi}\, \Im m(E+i\varepsilon)\, dE. \end{equation} Moreover, the boundary values $m(E+i0)$ exist almost everywhere, and $\tfrac{1}{\pi}\,\Im m(E+i0)$ is the Radon--Nikodym derivative $\frac{d\rho}{dE}$ of the absolutely continuous part of $\rho$.
At first sight, \eqref{HRT} does not permit us to recover $m$ from $d\rho$ without first knowing $\Re m(i)$. Actually, $\Re m(i)$ can be recovered from the asymptotic \cite{Atk,GSAnn} \begin{equation} \label{1.10} m(z) =-\sqrt{-z} + o(1) \end{equation}
that holds as $|z|\to\infty$ along rays at a small angle to the negative real axis. (If the support of $d\rho$ is bounded from below, it holds as $z\to-\infty$.)
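For orientation, the free case $V=0$ can be worked out in closed form (taking $\sqrt{-z}$ with positive real part for $z\notin[0,\infty)$); the decaying $L^2$ solution forces the minus sign, which also makes $m_0$ a Herglotz function:
\[
\psi_0(x;z) = e^{-\sqrt{-z}\,x} \in L^2(\Reals^+), \qquad
m_0(z) = \frac{\psi_0'(0)}{\psi_0(0)} = -\sqrt{-z},
\]
\[
\tfrac{1}{\pi}\,\Im m_0(E+i0) = \tfrac{1}{\pi}\sqrt{E}\,\chi_{[0,\infty)}(E),
\]
so \eqref{RBV} recovers the measure $\pi^{-1}\chi_{[0,\infty)}(E)\sqrt{E}\,dE$, and in this case the error term in the asymptotic vanishes identically.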
Just as $V$ determines $d\rho$, so $\rho$ (or $m(z)$) determines $V$. This is a famous result of Gel'fand--Levitan \cite{GL55,LevBk,LevSar}; see also Remling \cite{Rem02} and Simon \cite{Sim271}.
As we have described, each potential gives rise to a spectral measure, which also determines $V$. Our main goal in this paper is to give necessary and sufficient conditions in terms of $\rho$ for $V\in L^2(\Reals^+)$. Our model here is a result we proved recently \cite{KS} for Jacobi matrices, the discrete analog of Schr\"odinger operators. (Other precursors will be discussed later.) To properly frame our result, we recall briefly the Jacobi matrix result. A Jacobi matrix is a semi-infinite tridiagonal matrix \begin{equation} \label{1.11} J= \begin{pmatrix} b_1 & a_1 & 0 & \cdots \\ a_1 & b_2 & a_2 & \vphantom{\ddots} \\ 0 & a_2 & b_3 & \ddots \\ \vdots & & \ddots & \ddots \end{pmatrix} \end{equation} viewed as an operator on $\ell^2 ({\mathbb{Z}}_+)=\ell^2 (\{1,2,3,\dots\})$. The spectral measure here is defined by \begin{equation} \label{1.12} m(E) =\langle \delta_1, (J-E)^{-1}\delta_1\rangle =\int\frac{d\mu(x)}{x-E}. \end{equation} We write $J_0$ for the Jacobi matrix with $b_n\equiv 0$, $a_n\equiv 1$.
Our earlier result is:
\begin{theorem}[\cite{KS}]\label{T1.1} $J-J_0$ is Hilbert-Schmidt, that is, \begin{equation} \label{1.13} \sum_{n=1}^\infty (a_n -1)^2 + b_n^2 <\infty, \end{equation} if and only if the spectral measure $d\mu$ obeys \begin{SL} \item {\rm{(Blumenthal--Weyl)}} $\text{\rm{supp}} (d\mu) = [-2,2] \cup \{E_j^+\}_{j=1}^{N_+} \cup \{E_j^-\}_{j=1}^{N_-}$ with $E_1^+ > E_2^+ > \cdots > 2$ and $E_1^- < E_2^- < \cdots < -2$ with $\lim_{j\to\infty} E_j^\pm =\pm2$ if $N_\pm =\infty$.
\item {\rm{(Normalization)}} $\mu$ is a probability measure.
\item {\rm{(Lieb--Thirring Bound)}} \begin{equation}\label{1.12x} \sum_{\pm, j} (\abs{E_j^\pm} -2)^{3/2} <\infty \end{equation} \item {\rm{(Quasi-Szeg\H{o} Condition)}} Let $d\mu_{\text{\rm{ac}}}(E)= f(E)\, dE$. Then \begin{equation} \label{1.14} \int_{-2}^2 \log (f(E)) \sqrt{4-E^2}\, dE >-\infty \end{equation} \end{SL} \end{theorem}
We have changed the ordering of the conditions relative to \cite{KS} in order to facilitate comparison with Theorem~\ref{T1.2} below.
To state the result for Schr\"odinger operators, we need some further preliminaries. Let $d\rho_0$ be the free spectral measure (i.e., for $V=0$); it is \begin{equation} \label{1.19} d\rho_0 (E) =\pi^{-1} \chi_{[0,\infty)}(E) \sqrt{E}\, dE. \end{equation} For any positive measure $\rho$, we define a signed measure $d\nu$ on $(1,\infty)$ by \begin{equation} \label{1.17a} \frac{2}{\pi} \int f(k^2) k\,d\nu(k) = \int f(E) [d\rho(E)-d\rho_0 (E)]. \end{equation} Notice that $d\nu$ is parameterized by momentum, $k$, rather than energy, $E=k^2$. This is actually the natural independent variable for what follows (in \cite{KS} we used $z$ defined by $E=z+z^{-1}$). We will write $w$ for the $m$-function in terms of $k$: \begin{equation} w(k) = m(k^2). \end{equation} With this notation, \begin{equation} \label{1.17b} \frac{d\nu}{dk} = \Im[ w(k+i0) ] - k \end{equation} at a.e.\ point $k\in(1,\infty)$. Here $\frac{d\nu}{dk}$ is the Radon-Nikodym derivative of the a.c.\ part of $\nu$, which may have a singular part as well.
We will eventually prove (see Section~\ref{s9}) that if $V\in L^2$, then \begin{equation} \label{1.17c}
\bigl\| \nu \bigr\|_{\ell^2(M)}^2 := \sum \bigl[ \abs{\nu} ([n,n+1]) \bigr]^2 <\infty \end{equation} and as a partial converse, if $d\rho$ is supported in $[-a,\infty)$ and \eqref{1.17c} holds, then $d\rho$ is the spectral measure for a potential $V\in L_\text{\rm{loc}}^2$.
We also need to introduce the long- and short-range parts of the Hardy--Littlewood maximal function \cite{HL,Rudin}, \begin{align} (M\nu)(x) &= \sup_{L>0}\, \frac{1}{2L}\, \abs{\nu} ([x-L,x+L]) \label{1.17d} \\ (M_s\nu)(x) &= \sup_{0<L\leq 1} \, \frac{1}{2L}\, \abs{\nu} ([x-L,x+L]) \label{1.17e} \\ (M_l\nu)(x) &= \sup_{1\leq L}\, \frac{1}{2L}\, \abs{\nu} ([x-L,x+L]). \label{1.17f} \end{align}
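As a concrete illustration of \eqref{1.17d}--\eqref{1.17f} (a hedged sketch with discretized suprema, not from the text): taking $|\nu|$ to be Lebesgue measure on a large interval, every average equals $1$, so all three maximal functions evaluate to $1$; one also sees $M\nu = \max(M_s\nu, M_l\nu)$, since the two ranges of $L$ cover $(0,\infty)$.

```python
def maximal(mass, x, Ls):
    # discretized sup over scales L of |nu|([x-L, x+L]) / (2L)
    return max(mass(x - L, x + L) / (2.0 * L) for L in Ls)

# |nu| = Lebesgue measure restricted to [-100, 100] (density 1)
def mass(a, b):
    lo, hi = max(a, -100.0), min(b, 100.0)
    return max(hi - lo, 0.0)

short_L = [0.01 * j for j in range(1, 101)]      # scales in (0, 1], cf. (1.17e)
long_L = [1.0 + 0.1 * j for j in range(0, 91)]   # scales in [1, 10], cf. (1.17f)

x = 3.7
Ms = maximal(mass, x, short_L)           # short-range part
Ml = maximal(mass, x, long_L)            # long-range part
M = maximal(mass, x, short_L + long_L)   # (1.17d), truncated to L <= 10

print(Ms, Ml, M)  # all equal to 1 for Lebesgue measure
```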
The main theorem of this paper is
\begin{theorem}\label{T1.2} A positive measure $d\rho$ on $\Reals$ is the spectral measure associated to a $V\in L^2(\Reals^+)$ if and only if \begin{SL} \item {\rm{(Weyl)}} $\text{\rm{supp}} (d\rho) = [0,\infty) \cup \{E_j\}_{j=1}^N$ with $E_1 < E_2 < \cdots <0$ and $E_j \to 0$ if $N=\infty$.
\item {\rm{(Normalization)}} \begin{equation} \label{1.17g} \int \log \biggl[ 1+ \biggl( \frac{M_s\nu (k)}{k}\biggr)^2\biggr] k^2\, dk <\infty \end{equation}
\item {\rm{(Lieb--Thirring)}} \begin{equation}\label{1.20x} \sum_j \, \abs{E_j}^{3/2} <\infty \end{equation}
\item {\rm{(Quasi-Szeg\H{o})}} \begin{equation} \label{1.17h} \int_0^\infty \log \biggl[ \frac{1}{4}\, \frac{d\rho}{d\rho_0} + \f12 + \f14\, \frac{d\rho_0}{d\rho} \biggr]
\sqrt{E}\, dE <\infty \end{equation} \end{SL} \end{theorem}
\begin{remarks} 1. It may be surprising that we have replaced the innocuous normalization condition in the Jacobi case by \eqref{1.17g}. The reason is the following: in the Jacobi case, $\mu({\mathbb{R}})=1$ is the condition that $\mu$ is the spectral measure of some Jacobi matrix. In this theorem, we do not presume a priori that $d\rho$ is a spectral measure. We will eventually see that \eqref{1.17g} implies that $\rho$ is the spectral measure of an $L_\text{\rm{loc}}^2$ potential. Indeed, \eqref{1.17g} contains additional information that we will need to control high-energy pieces.
2. The name of condition (i) was chosen because the fact that it is implied by $V\in L^2$ is an immediate consequence of Weyl's Theorem on the invariance of the essential spectrum under (relatively) compact perturbations.
3. Bounds on sums of powers of eigenvalues in terms of the $L^p$ norm of the potential are usually referred to as Lieb--Thirring Inequalities in deference to their exhaustive work on this question, \cite{LT}. However, the particular case that appears in Theorem~\ref{T1.2} was first observed by Gardner, Greene, Kruskal, and Miura; see \cite[p.~115]{GGKM}.
4. The argument of $\log$ in \eqref{1.17h} has the form \begin{equation}\label{1.?} \tfrac14\, \lambda + \tfrac12 + \tfrac14\,\lambda^{-1} = [\tfrac12\, (\lambda^{1/2}+\lambda^{-1/2})]^2 \geq 1 \end{equation} so the integrand is nonnegative. This is significantly different from \eqref{1.14}, where the integrand can have both signs; there, the finiteness of the measure implies that the contribution of one sign is automatically finite, so we do not have to worry about oscillations. In our case, an oscillating integrand would present severe difficulties because spectral measures are not finite.
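A quick numerical check of this algebra, with $\lambda$ standing for the Radon-Nikodym derivative $d\rho/d\rho_0$ (illustrative Python, not part of the argument):

```python
import math

def qs_log_argument(lam):
    # argument of the log in the Quasi-Szego condition (1.17h),
    # with lam playing the role of d(rho)/d(rho_0)
    return 0.25 * lam + 0.5 + 0.25 / lam

checks = []
for lam in (0.01, 0.5, 1.0, 3.0, 100.0):
    # the perfect-square form [ (sqrt(lam) + 1/sqrt(lam)) / 2 ]^2
    square = (0.5 * (math.sqrt(lam) + 1.0 / math.sqrt(lam))) ** 2
    checks.append((lam, qs_log_argument(lam), square))
    print(lam, qs_log_argument(lam), square)
```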
5. Theorem~\ref{T1.2} implies that if $V\in L^2$, then $\sigma_\text{\rm{ac}} (H)=[0,\infty)$. This is a result of Deift--Killip \cite{DeiftK}. \end{remarks}
We will prove Theorem~\ref{T1.2} in two parts. First, we prove an equivalence of $V\in L^2$ and a set of conditions that has an unsatisfactory element. Then we will show that the conditions of Theorem~\ref{T1.2} are equivalent to those of this intermediate result, Theorem~\ref{T1.3} below. Both portions are lengthy.
The intermediate theorem requires one further object. Let \begin{equation} \label{1.18}
F(q) = \pi^{-1/2} \int_{p\geq 1} p^{-1} e^{-(q-p)^2} d\nu (p). \end{equation} By \eqref{HRTa}, this integral is absolutely convergent.
\begin{theorem}\label{T1.3} A positive measure $d\rho$ on $\Reals$ is the spectral measure associated to a $V\in L^2(\Reals^+)$ if and only if \begin{SL} \item {\rm{(Weyl)}} $\text{\rm{supp}} (d\rho) = [0,\infty) \cup \{E_j\}_{j=1}^N$ with $E_1 < E_2 < \cdots <0$ and $E_j\to0$ if $N=\infty$.
\item {\rm{(Local Solubility)}} \begin{equation} \label{1.19x} \int_0^\infty \, \abs{F(q)}^2 \, dq <\infty \end{equation}
\item {\rm{(Lieb--Thirring)}} \begin{equation}\label{1.20} \sum_j \, \abs{E_j}^{3/2} <\infty \end{equation}
\item {\rm{(Strong Quasi-Szeg\H{o})}} \begin{equation} \label{1.21} \int \log \biggl[ \frac{\abs{w(k+i0) + ik}^2}{4k\Im w(k+i0)}\biggr] k^2\, dk <\infty \end{equation} \end{SL} \end{theorem}
\begin{remarks} 1. Notice that \begin{align*} \abs{w(k+i0)+ik}^2 &\geq \abs{\Im w(k+i0) + k}^2 \\ &\geq \abs{\Im w(k+i0) + k}^2 - \abs{\Im w(k+i0) -k}^2 \\ &= 4k \Im w(k+i0) \end{align*} so the argument in the $\log$ in \eqref{1.21} is at least $1$ and the integrand is strictly positive, so the integral either converges or is $+\infty$.
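The chain of inequalities above can also be confirmed by brute force; the sketch below (illustrative only, with randomly sampled values) checks $\abs{w+ik}^2 = (\Real w)^2 + (\Ima w + k)^2 \geq 4k\Ima w$ for $k>0$ and $\Ima w \geq 0$.

```python
import random

random.seed(1)
samples = []
for _ in range(1000):
    k = random.uniform(0.01, 10.0)        # momentum k > 0
    re_w = random.uniform(-10.0, 10.0)    # Re w(k+i0), arbitrary sign
    im_w = random.uniform(0.0, 10.0)      # Im w(k+i0) >= 0 (Herglotz property)
    lhs = re_w ** 2 + (im_w + k) ** 2     # |w + ik|^2
    rhs = 4.0 * k * im_w
    samples.append((lhs, rhs))

print(min(l - r for l, r in samples))  # minimal gap, nonnegative
```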
2. There is a significant difference between \eqref{1.21} and \eqref{1.14}. Since \eqref{1.21} involves $M$ and not just $\Im w$, the singular part of $d\rho$ enters in both (ii) and (iv). Still, as we shall see, the restriction on $d\rho_\text{\rm{s}}$ is mild.
3. The occurrence of $w$ in \eqref{1.21} means that if one starts with $d\rho$, it is difficult to check this condition---one first has to calculate the Hilbert transform (conjugate function) of $d\rho$. Considering the example $$ d\rho = d\rho_0 + \sum_{j=1}^\infty c_j \delta(E-j^2) dE $$ shows that \eqref{1.19x} is not strong enough to allow the replacement of \eqref{1.21} by the weaker Quasi-Szeg\H{o} condition \eqref{1.17h}. Specifically, \eqref{1.19x} only requires $c_j\in\ell^2$ while \eqref{1.21} implies $c_j\in\ell^1$. The relation of the two Quasi-Szeg\H{o} conditions and their connection to $\Re w(k)$ is discussed in Section~\ref{s8}.
The advantage of the maximal function is that it involves no cancellation; we see plainly that \eqref{1.17g} is a statement about the size of $d\rho-d\rho_0$.
4. The name `Local Solubility' comes from the fact that this condition (plus the fact that support of $d\rho$ is bounded from below) guarantees that $d\rho$ is the spectral measure for some $L^2_\text{\rm{loc}}$ potential. See Section~\ref{s6}.
5. We will prove Theorem~\ref{T1.3} in Section~\ref{s7}, and then use it to prove Theorem~\ref{T1.2} in Sections~\ref{s8}--\ref{s11}. \end{remarks}
There are significant differences from Theorem~\ref{T1.1}, both in the form of Theorem~\ref{T1.3} and its proof. Understanding the difficulties that led to these differences is illuminating. To understand the issues, we recall that Theorem~\ref{T1.1} was proven by showing a general sum rule, dubbed the $P_2$ sum rule: \begin{equation} \label{1.22} Q(d\mu) + \sum_{\pm,j} F(E_j^\pm) =\tfrac14\, \sum_j b_j^2 + \tfrac12\, \sum_j G(a_j) \end{equation} where \begin{align} Q(d\mu) &=\frac{1}{4\pi} \int_{-2}^2 \log \biggl( \frac{\sqrt{4-E^2}}{2\Ima m(E+i0)}\biggr) \sqrt{4-E^2}\, dE \label{1.23} \\ G(a) &= a^2 -1 -\log \abs{a}^2 \label{1.24} \\
F(\beta + \beta^{-1}) &= \tfrac14\, [\beta^2 -\beta^{-2} -\log \beta^4 ], \quad |\beta|>1. \label{1.25} \end{align} To prove Theorem~\ref{T1.1}, one proves that \eqref{1.22} always holds (both sides may be infinite) and then notes that $Q(d\mu)<\infty$ if and only if \eqref{1.14} holds, that $F(E_j^\pm)=\tfrac23\, (\abs{E_j^\pm}-2)^{3/2} + O((\abs{E_j^\pm}-2)^2)$ and $G(a)=2(a-1)^2 + O((a-1)^3)$, so $\sum_{\pm,j} F(E_j^\pm)<\infty$ if and only if \eqref{1.12x} holds, and that the right side of \eqref{1.22} is finite if and only if \eqref{1.13} holds.
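These asymptotics follow from the closed forms \eqref{1.24} and \eqref{1.25} by Taylor expansion, and can be verified numerically; the Python sketch below (illustrative only) checks $G(a)=2(a-1)^2+O((a-1)^3)$ near $a=1$ and $F(E)=\tfrac23(\abs{E}-2)^{3/2}+O((\abs{E}-2)^2)$ near the spectral edge $E=2$, i.e., $\beta=1$.

```python
import math

def G(a):
    # G(a) = a^2 - 1 - log(a^2), from the right side of the P2 sum rule (1.22)
    return a * a - 1.0 - 2.0 * math.log(a)

def F_of_beta(beta):
    # F(beta + 1/beta) = (1/4)[beta^2 - beta^{-2} - log beta^4], |beta| > 1
    return 0.25 * (beta ** 2 - beta ** -2 - 4.0 * math.log(beta))

# G(a) = 2(a-1)^2 + O((a-1)^3): the ratio tends to 1 as a -> 1
g_ratio = G(1.001) / (2.0 * 0.001 ** 2)

# F(E) = (2/3)(E-2)^{3/2} + O((E-2)^2) with E = beta + 1/beta near beta = 1
beta = 1.01
E = beta + 1.0 / beta
f_ratio = F_of_beta(beta) / ((2.0 / 3.0) * (E - 2.0) ** 1.5)

print(g_ratio, f_ratio)  # both close to 1
```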
For nice $J$'s, (e.g., $b_n=0$ and $a_n=1$ for $n$ large), \eqref{1.22} is a combination of two sum rules of Case \cite{Case1,Case2}. For general $J$'s, it is proven in Killip--Simon \cite{KS} with later simplifications of parts of the proof in \cite{Sim288,SZ}.
The difficulties in extending this strategy in the continuous case were several: \begin{SL} \item The translation of the normalization condition $\mu({\mathbb{R}})=1$ is not clear. We needed a condition that guaranteed $d\rho$ is the spectral measure associated to a reasonable $V$\!, preferably belonging to $L^2_\text{\rm{loc}}$. We sought to express this in terms of the divergence of $\rho(-\infty,R)$ as $R\to\infty$. As it turned out, the $A$-function approach to the inverse spectral problem, \cite{GSAnn,Sim271}, leads quickly and conveniently to the condition \eqref{1.19x}, which is perfect for us.
\item The natural half-line sum rules in the Schr\"odinger case invariably lead to terms involving $V(0)$ or worse still, $V'(0)$. This is clearly unacceptable for one seeking $V\in L^2$ conditions.
\item The half-line sum rules also lead to terms that, like \eqref{1.14}, have an integrand that has a variable sign. In \eqref{1.14}, the fact that $\int f(E)\, dE \leq 1$ implies uniform control on $\int \log_+ (f(E)) \sqrt{4-E^2}\, dE$ and so the terms of the `wrong' sign (where $f(E)>1$) present no problem. But in the whole-line case where $\rho ({\mathbb{R}})=\infty$, terms of opposite signs could involve difficult to control cancellations. \end{SL}
The resolution of difficulties (ii) and (iii) was to fall back to the whole-line sum rule used in \cite{DeiftK}. The penalty is that the Strong Quasi-Szeg\H{o} condition, \eqref{1.21}, little resembles the Quasi-Szeg\H{o} condition of our earlier theorem, \eqref{1.14}. It is this disappointment that led us to push on and find Theorem~\ref{T1.2}.
Whole-line sum rules date to the original inverse-scattering solution of the KdV equation, \cite{GGKM}. Consider the operator \begin{equation} \label{1.26} L_0 =-\frac{d^2}{dx^2} +\chi_{(0,\infty)} (x) V(x) \end{equation} acting on $L^2(\Reals)$ with eigenvalues $E_j^{(0)}$. The well-known sum rule is, \cite{DeiftK,GGKM,ZF}, \begin{equation} \label{1.27} \tfrac18\, \int_0^\infty V(x)^2\, dx = \tfrac23\, \sum_j \abs{E_j^{(0)}}^{3/2} + Q \end{equation} where \begin{equation} \label{1.28} Q =\frac{1}{\pi} \int_0^\infty \log \biggl[ \frac{\abs{w(k+i0)+ik}^2}{4k\Im w(k+i0)}\biggr] k^2\, dk. \end{equation}
As in \cite{KS}, we will need to prove it in much greater generality than was known previously. Essentially, assuming that $w$ is the $m$-function of an $L_{\text{\rm{loc}}}^2$ potential $V$\!, we will prove \eqref{1.27} always holds although both sides may be infinite.
If one notes that the half-line and whole-line eigenvalues interlace, \begin{equation} \label{1.29} E_j^{(0)}\leq E_j \leq E_{j+1}^{(0)}, \end{equation} it is clear that \eqref{1.27} proves Theorem~\ref{T1.3}.
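The point of the interlacing \eqref{1.29} is that the half-line and whole-line Lieb--Thirring sums are finite or infinite together; in particular, $\abs{E_j}\leq\abs{E_j^{(0)}}$ for negative eigenvalues. A toy numerical illustration (Python, with made-up eigenvalue data, not from the text):

```python
# hypothetical negative eigenvalues, ordered E_1 < E_2 < ... < 0
E0 = [-4.0, -2.5, -1.2, -0.3]   # whole-line eigenvalues (made-up data)
E = [-3.1, -1.9, -0.7, -0.1]    # half-line eigenvalues, interlaced with E0

# check the interlacing E_j^(0) <= E_j <= E_{j+1}^(0) (0.0 as sentinel)
assert all(a <= b <= c for a, b, c in zip(E0, E, E0[1:] + [0.0]))

# since E_j >= E_j^(0), each |E_j| <= |E_j^(0)|, so the half-line
# Lieb--Thirring sum is dominated by the whole-line one
lt = sum(abs(e) ** 1.5 for e in E)
lt0 = sum(abs(e) ** 1.5 for e in E0)
print(lt, lt0)
```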
As was the case \cite{KS,Sim288,SZ}, the key to the proof of \eqref{1.27} is a `step-by-step' sum rule, that is, a result that, in essence, is the difference of \eqref{1.27} for $L_0$ and for \begin{equation} \label{1.30} L_t =-\frac{d^2}{dx^2} +\chi_{(t,\infty)}(x) V(x) \end{equation} and which always holds. A second important ingredient is the semicontinuity of $Q$.
In Section~\ref{s2}, we will discuss a relative Wronskian which is the analogue of the product of $m$-functions used implicitly in \cite{KS,SZ} and explicitly in \cite{Sim288} to prove a multi-step sum rule. In Section~\ref{s3}, as an aside, we will re-express this relative Wronskian as a perturbation determinant. In Section~\ref{s4}, we prove the step-by-step sum rule. In Section~\ref{s5}, we prove lower semicontinuity of the quasi-Szeg\H{o} term. In Section~\ref{s6}, we discuss \eqref{1.19x} and, in particular, show it implies $\mu$ is the spectral measure of a locally $L^2$ potential. Section~\ref{s7} completes the proof of \eqref{1.27} and of Theorem~\ref{T1.3}. Sections~\ref{s8}--\ref{s11} prove Theorem~\ref{T1.2} given Theorem~\ref{T1.3}.
The earliest theorem of the type presented here is Verblunsky's form \cite{V36} of Szeg\H{o}'s theorem \cite{OPUC1,Szb}. Let us elaborate. The orthogonal polynomials associated to a measure on the unit circle obey a recurrence and the coefficients that appear in this recurrence are known as the Verblunsky coefficients. The result just mentioned says that the Verblunsky coefficients are square summable if and only if the logarithm of the density of the a.c.\ part of the measure is integrable. In fact, there is a sum rule relating these quantities.
One of the more interesting spectral consequences of Szeg\H{o}'s theorem is the construction by Totik \cite{Totik} (see also Simon \cite{OPUC1}) that, given any measure supported on the circle, there is an equivalent measure whose recursion coefficients lie in all $\ell^p$ ($p>2$). We expect that the results and techniques of the current paper will provide tools allowing one to carry this result over to Schr\"odinger operators (although it seems likely that $\ell^p$ will be replaced by $\ell^p(L^2)$ rather than by $L^p$).
Kre{\u\i}n systems give a continuum analogue for orthogonal polynomials on the unit circle. The corresponding version of Szeg\H{o}'s Theorem can be found in \cite{Krein}; for proofs, see \cite{Stakh}. Using a continuum analogue of the Geronimus relations, Kre{\u\i}n's Theorem gives results for potentials of the form $V(x)=a(x)^2 \pm a'(x)$ with $a\in L^2$. Note that the operators associated to such potentials are automatically positive---there are no bound states. For a further discussion of the application of Kre{\u\i}n systems to Schr\"odinger operators, see \cite{DenJMAA,DenCMP,DenJDE}.
More recently, Sylvester and Winebrenner \cite{SW} studied the scattering for the Helmholtz equation on a half-line and obtained necessary and sufficient conditions (in terms of the reflection coefficient) for square integrability of the derivative of the wave speed. Applying appropriate Liouville transformations connects this work to the study of Schr\"odinger operators with potentials $V(x)=a(x)^2 \pm a'(x)$, just as for Kre{\u\i}n systems. Our methods parallel their work in places, particularly with regard to the semicontinuity properties of $Q$ discussed in Section~\ref{s5}. However, dealing with bound states adds to the complexity of our case.
As mentioned earlier, \cite{DeiftK} proved that $\sigma_{\text{\rm{ac}}}(H)=[0,\infty)$ for $H=-\frac{d^2}{dx^2}+V$ with $V\in L^2$. Earlier work by Christ, Kiselev, and Remling, \cite{Kis96,CK,Rem98}, settled the case $\abs{V(x)} \leq C (1+\abs{x})^{-\alpha}$ for $\alpha>\f12$ by entirely different means. The most recent development in this direction is the use of sum rules by Rybkin, \cite{Rybppt}, to prove $\sigma_{\text{\rm{ac}}}(H)=[0,\infty)$ for potentials of the form $V=f+g'$ with $f,g\in L^2$.
\noindent \textit{Acknowledgements:} We wish to thank Wilhelm Schlag, Terence Tao, and Christoph Thiele for various pointers on the harmonic analysis literature. We would also like to thank Christian Remling for some insightful comments.
\section{The Relative Wronskian} \label{s2}
In this section, we will consider $V\in L_\text{\rm{loc}}^1 (\Reals^+)$ for which the operator \begin{equation} \label{2.1} H=-\frac{d^2}{dx^2}+V \end{equation} with boundary condition $u(0)=0$ is essentially selfadjoint and has $\sigma_\text{\rm{ess}} (H)\subset [0,\infty)$. As noted in the Introduction, for any $k\in{\mathbb{C}}_+$ with $k^2\not\in\sigma(H)$, there is a unique solution $\psi_+ (x,k)$ of \begin{equation}\label{R2.1} -\psi'' + V\psi = k^2 \psi \end{equation} which is $L^2$ at $+\infty$ and $\psi_+(0)=1$. By the above assumption on $\sigma_\text{\rm{ess}}$, this extends to a meromorphic function of $k$ in ${\mathbb{C}}_+$ with poles exactly at the negative eigenvalues of $H$. Moreover, the poles are simple.
Let us define \begin{equation} \label{2.2} W(x,k) =e^{-ikx}\psi'_+ (x,k) + ike^{-ikx}\psi_+ (x,k). \end{equation} $W$ is the Wronskian of $\psi_+ (x,k)$ and $\psi_-^{(0)}(x,k) \equiv e^{-ikx}$, which is the solution of \begin{equation} \label{2.2a} -\psi'' =k^2 \psi \end{equation} that is $L^2$ at $-\infty$ (recall $\Im k>0$). Note that $W(x,k)$ is a meromorphic function of $k$, an absolutely continuous function of $x$, and is easily seen to obey \begin{equation} \label{2.3} \frac{\partial}{\partial x}\, W(x,k) =e^{-ikx} \psi_+ (x,k) V(x). \end{equation}
The zeros of $k\mapsto W(x_0,k)$ are precisely those points where one can find a $c\in{\mathbb{C}}$ for which \begin{equation} \label{2.4} u(x) = \begin{cases} \psi_+ (x,k), & x\geq x_0 \\ ce^{-ikx}, & x\leq x_0 \end{cases} \end{equation} is a $C^1$ function, that is, $W(x_0,k)=0$ if and only if $k^2$ is an eigenvalue of the operator $L_t$ of \eqref{1.30} with $t=x_0$. In particular, all zeros lie on the imaginary axis: $k=i\kappa$ with $\kappa >0$.
We will use $\kappa_1(x) >\kappa_2(x) >\cdots$ to indicate the zeros of $W(x,k)$ so that $-\kappa_j(x)^2$ are the negative eigenvalues of $L_x$.
We define the \textit{relative Wronskian} by \begin{equation} \label{2.5} a_x (k) =\frac{W(x,k)}{W(0,k)}. \end{equation} For each $x$, it is a meromorphic function of $k$. Like the $m$-function---and unlike $W(x,\cdot)$---it is independent of the normalization $\psi_+(0,k)=1$. By the above, we have
\begin{proposition} \label{P2.1} The poles of $a_x(k)$ are simple and lie at those points $k=i\kappa_j(0)$ for which $-\kappa_j(0)^2$ is an eigenvalue of $L_0$. The zeros are also simple and lie on the set $k=i\kappa_j(x)$ where $-\kappa_j(x)^2$ are the eigenvalues of $L_x$. {\rm{(}}In the event that a point lies in both sets, there is neither a pole nor a zero---they cancel one another.{\rm{)}} \end{proposition}
Next, we note that
\begin{proposition}\label{P2.2} For $\Re k\neq 0$, $a_x (k)$ is absolutely continuous in $x$. Moreover, $\log[a_x (k)]$, defined so that it is continuous in $x$ with $\log (a_{x=0}(k))=0$, obeys \begin{equation} \label{2.6} \frac{d}{dx}\, \log[a_x(k)] = V(x) (ik+w(k;x))^{-1} \end{equation} where \begin{equation}\label{E:w} w(k,x) = \frac{\psi_+'(x,k)}{\psi_+(x,k)} \end{equation} is the $m$-function associated to the operator $L_x$ restricted to $[x,\infty)$. \end{proposition}
\begin{proof} As a ratio of nonvanishing absolutely continuous functions, $a_x(k)$ is absolutely continuous, and then so is its log. By \eqref{2.3}, \[ \frac{d}{dx}\, \log[a_x(k)] = \frac{e^{-ikx} \psi_+ (x,k) V(x)}{W(x,k)}. \] As \[ \frac{W(x,k)}{e^{-ixk} \psi_+(x,k)} = \frac{\psi'_+ (x,k)}{\psi_+ (x,k)} + ik = w(k;x)+ik, \] \eqref{2.6} is immediate. \end{proof}
\begin{proposition} \label{P2.3} \begin{SL} \item[{\rm{(a)}}] For $k\in{\mathbb{C}}_+$ with $\Real k\neq 0$, \begin{equation} \label{2.6a}
\bigl| \log[a_x(k)] \bigr| \leq \abs{\Real k}^{-1} \int_0^x \abs{V(y)}\, dy. \end{equation} \item[{\rm{(b)}}] Fix $K>0$. Then there exist $R>0$ and $C$ so, for all $k$ in ${\mathbb{C}}_+$ with $\abs{k} >R$ and all $x$ in $[0,K]$, \begin{equation} \label{2.6b}
\bigl| \log[a_x(k)] \bigr| \leq C\abs{k}^{-1}. \end{equation} \end{SL} \end{proposition}
\begin{proof} (a) If $\Real k>0$ and $\Ima k>0$, then $\Ima w>0$ and so $\abs{ik+w(k;x)}^{-1} \leq \abs{\Real k}^{-1}$. Thus \eqref{2.6a} follows from \eqref{2.6}.
(b) By \cite{GSAnn}, uniformly for $x\in [0,K]$, $w(k;x)-ik\to 0$ as $\abs{k}\to\infty$ with $\arg (-ik) \leq \frac{\pi}{4}$. This plus \eqref{2.6} implies that \eqref{2.6b} holds uniformly in $x\in (0,K)$ and $\abs{\arg (-ik)}\leq \frac{\pi}{4}$. By \eqref{2.6a}, it holds for $\arg k\in (0, \frac{\pi}{4})\cup (\frac{3\pi}{4},\pi)$. \end{proof}
\begin{proposition} \label{P2.4} Suppose that for some $k_0\in (0,\infty)$, $\lim_{\varepsilon\downarrow 0} w(k_0 + i\varepsilon; 0)\equiv w(k_0 +i0; 0)$ exists and $\Ima w(k_0 +i0;0)\in (0,\infty)$. Then for all $x$, $\lim_{\varepsilon\downarrow 0} w(k_0 +i\varepsilon;x) \equiv w(k_0 +i0; x)$ exists and $\lim_{\varepsilon\downarrow 0} a_x (k_0 +i\varepsilon) \equiv a_x (k_0 +i0)$ exists. Moreover, \begin{equation} \label{2.7} \abs{a_x (k_0 + i0)}^2 = \frac{T(k_0,0)}{T(k_0,x)} \end{equation} where \begin{equation} \label{2.8} T(k_0,x) = \frac{4k_0 \Ima w(k_0 +i0;x)}{\abs{w(k_0 +i0;x)+ik_0}^2}. \end{equation} \end{proposition}
\begin{proof} As $\Ima w(k_0 +i0;0)\in (0,\infty)$, we may also take the vertical limit $\psi_+(x,k_0+i0)$; indeed, this is just the solution to \eqref{R2.1} with $\psi(0)=1$, $\psi'(0)=w(k_0 +i0;0)$, and $k=k_0$.
As $\Ima w(k_0 +i0;0)>0$, $\psi_+ (x,k_0 +i0)$ is not a complex multiple of a real-valued solution and so cannot have any zeros. Thus $\lim_{\varepsilon\downarrow 0} w(k_0 + i\varepsilon;x)$ exists. Similarly, by \eqref{2.2}, $W(x,k_0 +i\varepsilon)$ has a limit and \begin{equation} \label{2.9} \abs{W(x, k_0 +i0)}=\abs{\psi_+ (x, k_0 +i0)}\, \abs{w(k_0 +i0;x)+ik_0}. \end{equation}
As $\psi_+ (\,\cdot\,, k_0 +i0)$ and $\overline{\psi_+ (\,\cdot\,, k_0 +i0)}$ obey the same equation, their Wronskian is a constant (in $x$), that is, \begin{equation} \label{2.10} \abs{\psi_+ (x, k_0 +i0)}^2 \Ima w(k_0 +i0;x)=C_{k_0}. \end{equation} The definition \eqref{2.8} together with \eqref{2.9} and \eqref{2.10} implies \begin{equation} \label{2.11} \abs{W(x,k_0 +i0)}^2 = \frac{4C_{k_0} k_0}{T(k_0,x)}. \end{equation} Thus \eqref{2.5} implies \eqref{2.7}. \end{proof}
We write the letter $T$ in \eqref{2.8} because, as we will see, it represents the transmission probability of stationary scattering theory.
Our final result in this section is
\begin{proposition} \label{P2.5} Let $V\in L_\text{\rm{loc}}^2 ([0,\infty))$ and suppose $\sigma_\text{\rm{ess}} (H)\subset [0,\infty)$. Then as $\kappa\to\infty$ {\rm{(}}real $\kappa${\rm{)}}, \begin{equation} \label{2.12} \log [a_x (i\kappa)] = -\frac{1}{2\kappa} \int_0^x V(y)\, dy + \frac{1}{8\kappa^3} \int_0^x V(y)^2\, dy+ o(\kappa^{-3}) \end{equation} with an error uniform in $x$ for $x\in [0,K]$ for any $K$. \end{proposition}
\begin{proof} By \cite{GSAnn,Sim271}, \begin{equation}\label{2.13} w(i\kappa,x) = -\kappa -\int_0^1 V(x+y) e^{-2\kappa y}\, dy + o(\kappa^{-1}) \end{equation} with the $o(\kappa^{-1})$ uniform in $x$ for $x\in [0,K]$. Notice that the integral in \eqref{2.13} is $o(\kappa^{-1/2})$ since $V\in L_\text{\rm{loc}}^2$. Thus \begin{equation} \label{2.14} [w(i\kappa,x)-\kappa]^{-1} = (-2\kappa)^{-1} + (2\kappa)^{-2} \int_0^1 V(x+y) e^{-2\kappa y}\, dy + o(\kappa^{-3}). \end{equation} To get this, note that one error term is $O(\kappa^{-3}) o(\kappa^{-1})$ and by the fact that the integral in \eqref{2.13} is a priori $o(\kappa^{-1/2})$, the other is $O(\kappa^{-3}) O(\kappa^{-1/2})^2$.
Thus, by integrating \eqref{2.6}, the proposition will follow once we show \begin{equation} \label{2.15} \lim_{\kappa\to\infty} \, \kappa \int_0^x V(y) \int_0^1 V(y+s) e^{-2\kappa s} \, ds\, dy = \tfrac12 \int_0^x V(y)^2 \, dy \end{equation} for all $V\in L^2 (0, x+1)$. To prove this, note first that it is trivial if $V$ is continuous since then, $\kappa \int_0^1 V(y+s) e^{-2\kappa s}\, ds =\f12 [V(y) +o(1)]$ uniformly in $y\in [0,x]$. Moreover, \begin{equation} \label{2.16} \begin{split}
\biggl| \, \int_0^x f(y)\biggl( \int_0^1 & g(y+s) \kappa e^{-2\kappa s}\, ds\biggr) dy \biggr| \\ & \leq \tfrac12 \biggl( \int_0^x \abs{f(y)}^2 \, dy \biggr)^{1/2} \biggl( \int_0^{x+1} \abs{g(y)}^2 \, dy \biggr)^{1/2} \end{split} \end{equation} so an approximation theorem goes from $V$ continuous to general $V$ in $L^2 (0,x+1)$.
To prove \eqref{2.16}, use the Schwarz inequality and \[
\biggl\| \int_0^1 g(\,\cdot\, +s) \kappa e^{-2\kappa s}\, ds \biggr\|_2 \leq \int_0^1
\kappa e^{-2\kappa s} \|g(\,\cdot\, +s)\|_2 \, ds \leq \tfrac12\, \biggl( \int_0^{x+1} \abs{g(y)}^2\, dy \biggr)^{1/2} \]
where $\|\cdot\|_2$ is the $L^2 (0,x)$ norm. \end{proof}
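The averaging statement behind \eqref{2.15} is that the kernel $\kappa e^{-2\kappa s}$ on $[0,1]$ acts as $\tfrac12$ times an approximate identity as $\kappa\to\infty$. A small numerical sketch (illustrative only, with a made-up smooth $V$ and an ad hoc quadrature grid):

```python
import math

def smoothed(V, y, kappa, n=20000):
    # trapezoidal evaluation of the integral of V(y+s) kappa e^{-2 kappa s}
    # over s in [0, 1]
    h = 1.0 / n
    total = 0.5 * (V(y) * kappa + V(y + 1.0) * kappa * math.exp(-2.0 * kappa))
    for i in range(1, n):
        s = i * h
        total += V(y + s) * kappa * math.exp(-2.0 * kappa * s)
    return total * h

V = math.cos  # a smooth stand-in for the potential
y = 0.3
ratios = [smoothed(V, y, kap) / (0.5 * V(y)) for kap in (10.0, 100.0)]
print(ratios)  # each ratio tends to 1 as kappa grows
```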
\section[Perturbation Determinants]{Perturbation Determinants: An Aside} \label{s3}
In this section, we provide an alternate definition of $a_x(k)$ which we could have used (and, indeed, initially did use) to define this function and prove its basic properties. The definition as a perturbation determinant makes the similarity to the Jacobi matrix theory stronger. Expressions of suitable Wronskians as Fredholm determinants go back to Jost and Pais \cite{JP}. We will not use this alternate definition again in this paper, but we feel it is suggestive and should be useful for other purposes.
We will write $\mathfrak{I}_1$ for the space of trace-class operators with the usual norm $\|A\|_1=\text{\rm{Tr}}(|A|)$.
We need one preliminary:
\begin{proposition}\label{P3.1} Let $V$ be in $L_\text{\rm{loc}}^1 ((0,\infty))$ and consider $L = -\frac{d^2}{dx^2} +V$ on $L^2(\Reals)$ {\rm{(}}with a boundary condition at infinity if $V$ is limit circle there{\rm{)}}. Fix $0<K<\infty$ and view $L^2([0,K])$ as functions {\rm{(}}and multiplication operators{\rm{)}} on all of \/$\Reals$ that happen to vanish outside this interval.
Given $z\in{\mathbb{C}}_+$, the mapping $f\mapsto f(L-z)^{-1}$ is continuous and differentiable from $L^2(0,K)$ into the trace class operators. \end{proposition}
\begin{proof} Let $L_D$ be the operator with a Dirichlet boundary condition added at $x=K$, that is, $L_D =L_D^- \oplus L_D^+$ with $L_D^-$ on $L^2 (-\infty, K)$ with $u(K)=0$ boundary conditions and $L_D^+$ on $L^2 (K,\infty)$ with $u(K)=0$ boundary conditions and the same boundary condition at infinity as $L$.
Let $u_\pm$ solve $-u'' +Vu =zu$ with $u_-$ square-integrable at $-\infty$ and $u_+$, $L^2$ at $+\infty$ (or, obeying $H$'s boundary condition at infinity if $V$ is limit circle). Let $\varphi$ be given by \begin{equation} \label{3.1} \varphi(x) =\begin{cases} u_+ (K) u_-(x), & x\leq K \\ u_- (K) u_+(x), & x\geq K \end{cases} \end{equation} and normalize $u_-$ so that $W(u_+, u_-)=1$. Then, standard formulae for Green's functions \cite{CL} show that with $G(x,y)$, the integral kernel of $(L-z)^{-1}$ and $G_D(x,y)$ that of $(L_D-z)^{-1}$, \begin{equation} \label{3.2} G(x,y) -G_D(x,y) = (u_+(K) u_-(K))^{-1} \varphi (x) \varphi(y). \end{equation}
Since $\varphi$ is bounded on $[0,K]$, $f\varphi\in L^2$ and so \[ f[(L-z)^{-1} - (L_D-z)^{-1}] \] is a bounded rank one operator, and so trace class. Thus it suffices to prove $f(L_D-z)^{-1} = f(L_D^- -z)^{-1} \oplus 0$ is trace class, and so that $f(L_D^- -z)^{-1}$ is trace class on $L^2 (-\infty, K)$.
Similarly, adding a boundary condition at $x=0$ is rank one, so with $H_D$ the operator on $L^2 (0,K)$ with $u(0)=u(K)=0$ boundary conditions, it suffices to prove that $f(H_D -z)^{-1}$ is trace class.
As $V\restriction [0,K]$ is in $L^1$, $H_D$ is bounded from below, and so by adding a constant to $V$\!, we can suppose $H_D \geq 0$. Thus it suffices to show that $f(H_D+1)^{-1}$ is trace class. Write \begin{align*} f(H_D+1)^{-1} &= \int_0^\infty e^{-t} fe^{-tH_D}\, dt \\ &= \int_0^\infty e^{-t} (fe^{-tH_D/2}) (e^{-tH_D/2})\, dt. \end{align*}
By general principles (see \cite{Sxxi}), the integral kernel of $e^{-tH_D/2}$, call it $P_t (x,y)$, obeys \begin{alignat*}{2} P_t (x,y) &\leq Ct^{-1/2} \exp \biggl( \frac{-(x-y)^2}{Dt}\biggr), \qquad && t\leq 1 \\ &\leq C, \qquad && t\geq 1. \end{alignat*} From this it follows that for any $g\in L^2 (0,K)$, \[
\|ge^{-sH_D}\|_2 \leq C\|g\|_{L^2} (1+\abs{s}^{-1/4}) \]
where $\|\cdot\|_2$ is the Hilbert-Schmidt norm. Thus \begin{align*}
\|f(H_D+1)^{-1}\|_1 &\leq \int_0^\infty e^{-t}\, \|fe^{-tH_D/2}\|_2 \,
\|e^{-tH_D/2}\|_2 \, dt \\ &< \infty. \end{align*}
The proof of continuity and differentiability in $f$ follows from these estimates. \end{proof}
\begin{remark} The use of Dirichlet decoupling and semigroup estimates to get trace class results goes back to Deift--Simon \cite{DS}. \end{remark}
\begin{corollary}\label{C3.2} If $L_t$ is given by \eqref{1.30} and $z\in{\mathbb{C}}_+$, then \begin{equation} \label{3.3} X_t =(L_t-z) (L_0-z)^{-1} -1\in \mathfrak{I}_1. \end{equation} Moreover, if $V(x)$ is continuous in a one-sided neighborhood of $x=0$, $\text{\rm{Tr}} (X_t)$ is differentiable at $t=0$ and \begin{equation} \label{3.4}
\left. \frac{d}{dt}\, \text{\rm{Tr}} (X_t)\right|_{t=0} =V(0) G(0,0) \end{equation} where $G$ is the integral kernel of the operator $(L_0-z)^{-1}$. \end{corollary}
\begin{proof} We have that \begin{equation} \label{3.5} L_t -L_0 = V\chi_{[0,t]} \end{equation} so \eqref{3.3} follows from Proposition~\ref{P3.1}. By the continuity assumption, $X_t$ has a piecewise continuous integral kernel $X_t (x,y) =V(x) \chi_{[0,t]}(x) G(x,y)$, so (see, e.g., Theorem~3.9 in Simon \cite{STI}) \[ \text{\rm{Tr}} (X_t) =\int_0^t V(x) G(x,x)\, dx \] from which \eqref{3.4} follows. \end{proof}
The main result in this section is
\begin{theorem}\label{T3.3} Let $V\in L_\text{\rm{loc}}^2 (0,\infty)$ with $\sigma_\text{\rm{ess}} (H) =[0,\infty)$. Then for \[ k\in{\mathbb{C}}_+ \backslash\{k=-i\kappa\mid -\kappa^2 \in \sigma (L_0)\} \] we have \begin{equation} \label{3.6} a_x(k) =\det [(L_x -k^2)/(L_0 -k^2)]. \end{equation} \end{theorem}
\begin{proof} By continuity, we can suppose $\Real k\neq 0$. Similarly, by Proposition~\ref{P3.1}, we can suppose $V$ is continuous on $[0,x]$.
Let $\tilde a_s(k)$ be the right-hand side of \eqref{3.6}. If we prove that for $0<t<x$, \begin{equation} \label{3.7} \frac{d}{dt} \, \log [\tilde a_t(k)] = V(t) [ik + w(k;t)]^{-1} \end{equation} then, by \eqref{2.6} and $\tilde a_0 (k)\equiv 1$, we could conclude \eqref{3.6}.
As \[
\left. \frac{d}{dA}\, \log\bigl[\det (1+A)\bigr]\right|_{A=0} =\text{\rm{Tr}} (A) \] and for $t$ near $t_0$, \begin{equation} \label{3.8} \log \tilde a_t(k) =\log \tilde a_{t_0}(k) + \log \det [(L_t -k^2)/(L_{t_0}-k^2)], \end{equation} \eqref{3.4} and \eqref{3.8} imply that \[
\left. \frac{d}{dt}\, \log [\tilde a_t(k)]\right|_{t_0} = V(t_0) G(t_0, t_0) \] where $G$ is the integral kernel for $(L_{t_0} -k^2)^{-1}$. This leads to \eqref{3.7} after writing the Green's function in terms of $\psi_+$ and $\psi_-^{(0)}$. \end{proof}
\section{The Step-by-Step Sum Rule} \label{s4}
In this section, we will prove a general step-by-step sum rule for all $V\in L_\text{\rm{loc}}^2 ([0,\infty))$ that involves $\int_0^x V(y)^2 \, dy$. We begin with a preliminary: Recall (see Proposition~\ref{P2.1}) that $\kappa_j(t)\geq 0$ is defined so that $\kappa_1(t)\geq\kappa_2(t)\geq \cdots$ and $\{-\kappa_j(t)^2\}_{j=1}^{N(t)}$ are the negative eigenvalues of $L_t$ and $\kappa_j(t)=0$ if $j>N(t)$, which may be infinite.
\begin{proposition}\label{P4.1} For any $V\in L_\text{\rm{loc}}^2 ([0,\infty))$ and $t\in (0,\infty)$, \begin{equation} \label{4.1} \sum_j \, \abs{\kappa_j(t)^2 - \kappa_j(0)^2}<\infty . \end{equation} \end{proposition}
\begin{proof} Let $A(s) =-\frac{d^2}{dx^2} + \chi_{[t,\infty)}V + s\chi_{[0,t)} V$ so that $A(0)=L_t$ and $A(1) = L_0$. Let $E_j (s)$ denote the negative eigenvalues of $A(s)$ with $E_1\leq E_2 \leq\cdots$ and $E_j(s)=0$ if $j>N(s)$, the number of negative eigenvalues of $A(s)$. Let $\psi_j(s)$ be the corresponding normalized eigenvectors. Pick $a>0$ so that for all $s\in [0,1]$, $A(s) \geq 1-a$.
By first-order eigenvalue perturbation \cite{RS4,Kato} (a.k.a.\ the Feynman--Hellman theorem) if $j\leq N(s)$: \begin{align*} \frac{d}{ds}\, E_j(s) &= \langle \psi_j(s), \chi_{[0,t)} V\psi_j(s)\rangle \\ &= (E_j(s) + a) \langle (A(s)+a)^{-1/2} \psi_j(s), \chi_{[0,t]} V(A(s)+a)^{-1/2} \psi_j(s)\rangle \end{align*} so \begin{equation} \label{4.2}
\biggl| \frac{dE_j(s)}{ds}\biggr| \leq 2a \abs{\langle\psi_j(s), (A(s)+a)^{-1/2} \chi_{[0,t]} V(A(s)+a)^{-1/2} \psi_j(s)\rangle} \end{equation} and thus \begin{align}
\sum_{j=1}^{N(s)} \, \biggl| \frac{dE_j(s)}{ds}\biggr| &\leq 2a \|\chi_{[0,t]} \abs{V}^{1/2}
(A(s)+a)^{-1/2}\|_2^2 \notag \\ &\leq C \label{4.3} \end{align}
where $\|\cdot\|_2$ is the Hilbert-Schmidt norm. \eqref{4.3}, in which $C$ is independent of $s\in [0,1]$, follows from the estimate \begin{align*}
\|\chi_{[0,t]} \abs{V}^{1/2} (A(s)+a)^{-1/2}\|_2^2 &=\int_0^t \abs{V(x)} (A(s)+a)^{-1} (x,x)\, dx \\ &\leq C. \end{align*} \eqref{4.3} implies \[ \sum_{j=1}^\infty \, \abs{E_j(s) - E_j(u)} \leq C\abs{s-u} \] which for $s=1$, $u=0$ is \eqref{4.1}. \end{proof}
\begin{remark} One may also prove this proposition using the $\mathfrak{I}_1\to L^1$ bound for the Kre{\u\i}n spectral shift function. Indeed, the proof of this general result follows along the general lines given above. \end{remark}
We can use this to define the Blaschke product that we will need to deal with the zeros and poles of $a_t(k)$:
\begin{proposition} \label{P4.2} Let \begin{equation} \label{4.4} B_t(k) =\prod_j \biggl\{\biggl[ \frac{k+i\kappa_j(0)}{k-i\kappa_j(0)}\, \frac{k-i\kappa_j(t)} {k+i\kappa_j(t)}\biggr] \exp \biggl[ -\frac{2i}{k} \, \bigl(\kappa_j(0)-\kappa_j(t)\bigr)\biggr] \biggr\}. \end{equation} Then: \begin{SL} \item The infinite product converges on ${\mathbb{C}}_+\backslash \{i\kappa_j(0)\}_{j=1}^{N(0)}$.
\item $B_t(k)$ has a continuation to $\bar{\mathbb{C}}_+\backslash[\{i\kappa_j(0)\}_{j=1}^{N(0)} \cup \{0\}]$ and \begin{equation} \label{4.5} k\in{\mathbb{R}}\backslash \{0\} \Rightarrow \abs{B_t(k)} =1. \end{equation} \item For $k\notin i{\mathbb{R}}$, \begin{equation} \label{4.6} \abs{\log \abs{B_t(k)}} \leq C\abs{\Real k}^{-2}. \end{equation} \item Uniformly for $\abs{\arg(y)}\leq \frac{\pi}{4}$, \begin{equation} \label{4.6a} \log [B_t (iy)]=\frac{2}{3y^3}\, \sum_j \, [\kappa_j(0)^3 - \kappa_j(t)^3] + O(\abs{y}^{-5}) \end{equation} as $\abs{y}\to\infty$. \end{SL} \end{proposition}
\begin{proof} Let $\kappa,\lambda >0$. Define \begin{equation} \label{4.7} F(k;\kappa,\lambda) = \log \biggl[ \frac{k+i\kappa}{k-i\kappa}\, \frac{k-i\lambda}{k+i\lambda}\biggr] -\frac{2i}{k}\, (\kappa-\lambda). \end{equation} Then \[ F(k; \lambda,\lambda)=0 \] and, by a straightforward computation, \begin{equation} \label{4.8} \frac{\partial}{\partial\kappa}\, F(k; \kappa,\lambda) = -\frac{2i\kappa^2}{k(k^2 + \kappa^2)}. \end{equation} It follows for $k\in{\mathbb{C}}$ with $\pm ik\notin [\min(\kappa,\lambda),\max (\kappa,\lambda)]$, \begin{equation} \label{4.9} \abs{F(k;\kappa,\lambda)} \leq 2 \int_{\min (\kappa,\lambda)}^{\max(\kappa,\lambda)} \frac{\mu^2}{\abs{k}\, \abs{k^2 + \mu^2}}\, d\mu. \end{equation} The right side is invariant under $k\to\bar k$, so suppose $\Ima k\geq 0$. Then $\frac{\mu}{\abs{k+i\mu}} \leq 1$, so \begin{equation} \label{4.10} \abs{F(k;\kappa,\lambda)} \leq \frac{\max(\kappa,\lambda)^2 -\min (\kappa,\lambda)^2} {\abs{k}\, \inf \{\abs{k-i\mu}\mid \mu\in\pm (\min (\kappa,\lambda), \max(\kappa, \lambda))\}}. \end{equation} We can thus prove:
(i) By \eqref{4.10}, if $k\notin \{0\}\cup \{i\kappa_j(0)\}\cup \{-i\kappa_j(t)\} \equiv Q$, we have for all $n$ sufficiently large that \[ \abs{F(k; \kappa_n(0), \kappa_n(t))} \leq C_k \abs{ \kappa_n(0)^2 - \kappa_n(t)^2} \] so, by \eqref{4.1}, the product \eqref{4.4} converges absolutely and uniformly on compact subsets of ${\mathbb{C}}\backslash Q$.
(ii) The above argument shows $B$ has analytic continuation across ${\mathbb{R}}\backslash \{0\}$. Since the continuation is given by a convergent product, and the finite products have magnitude $1$ on ${\mathbb{R}}$, that is true of $B$ on ${\mathbb{R}}\backslash \{0\}$.
(iii) From \eqref{4.10} and $\inf \{\abs{k-i\mu}\mid \mu\in{\mathbb{R}} \}\geq \abs{\Real k}$, we have \[ \abs{F(k;\kappa,\lambda)} \leq \frac{\abs{\kappa^2 -\lambda^2}}{\abs{\Real k}^2} \] which, given \eqref{4.1}, implies \eqref{4.6}.
(iv) By \eqref{4.8} for $y$ real and large, \begin{align*} \frac{\partial}{\partial\kappa}\, F(iy, \kappa,\lambda) &= \frac{2\kappa^2}{y(y^2 -\kappa^2)} \\ &= \frac{2\kappa^2}{y^3} + O\biggl( \frac{\kappa^4}{y^5}\biggr) \end{align*} so \eqref{4.6a} holds by integrating and using \[ \int_{\kappa_j(t)}^{\kappa_j(0)} 2\mu^2 \, d\mu = \tfrac23\, [ \kappa_j(0)^3 - \kappa_j(t)^3 ]. \qedhere \] \end{proof}
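For the reader's convenience, here is the computation behind \eqref{4.8}:
\begin{align*}
\frac{\partial}{\partial\kappa}\, F(k; \kappa,\lambda) &= \frac{i}{k+i\kappa} + \frac{i}{k-i\kappa} - \frac{2i}{k} \\
&= \frac{2ik}{k^2+\kappa^2} - \frac{2i}{k} = -\frac{2i\kappa^2}{k(k^2 + \kappa^2)}.
\end{align*}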
Let $a_t (k)$ be given by \eqref{2.5} and $B_t (k)$ by \eqref{4.4}. The two functions are analytic in ${\mathbb{C}}_+$ and have the same zeros and poles, so \begin{equation} \label{4.11} g_t(k) = \log \biggl[ \frac{a_t(k)}{B_t(k)}\biggr] \end{equation} is analytic in ${\mathbb{C}}_+$. We define $g_t$ by taking the branch of $\log$ which is real for $k=i\kappa$ with $\kappa$ large.
\begin{proposition} \label{P4.3} \begin{SL} \item $g_t (k)$ is analytic in ${\mathbb{C}}_+$.
\item For a.e.~$k\in{\mathbb{R}}_+$, $\lim_{\varepsilon\downarrow 0} g_t (k+i\varepsilon) \equiv g_t (k)$ exists and if $\Im m(k^2+i0)>0$, then \begin{equation} \label{4.12} \Real g_t (k) =\tfrac12\, \log \biggl[ \frac{T(k,0)}{T(k,t)}\biggr] \end{equation} with $T$ given by \eqref{2.8}.
\item For each $\varepsilon >0$, \begin{equation} \label{4.13} \Ima k>\varepsilon \Rightarrow \abs{g_t (k)}\leq C_\varepsilon \abs{k}^{-1}. \end{equation}
\item For all $k\in{\mathbb{C}}_+$, $\Real k\neq 0$, \begin{equation} \label{4.14} \abs{g_t(k)}\leq C [\abs{\Real k}^{-1} + \abs{\Real k}^{-2}]. \end{equation}
\item As $y\to\infty$ along the real axis, \begin{equation} \label{4.15} g_t (iy) =ay^{-1} + by^{-3} + o(y^{-3}) \end{equation} with coefficients \begin{align} a &= -\tfrac12 \int_0^t V(x)\, dx \label{4.16} \\ b &= \tfrac18 \int_0^t V(x)^2 \, dx - \tfrac23\, \sum_j \, [\kappa_j(0)^3 - \kappa_j(t)^3]. \label{4.17} \end{align} \end{SL} \end{proposition}
\begin{proof} (i) is discussed in the definition.
(ii) This combines Proposition~\ref{P2.4} and \eqref{4.5}.
(iii) This follows from \eqref{2.6a}, \eqref{2.6b}, \eqref{4.6}, \eqref{4.6a}, and the continuity (and so, boundedness) of $g_t$ on compact subsets of ${\mathbb{C}}_+$.
(iv) This combines \eqref{2.6a} and \eqref{4.6}.
(v) This combines \eqref{2.12} and \eqref{4.6a}. \end{proof}
We are now ready for the nonlocal step-by-step sum rule.
\begin{theorem} \label{T4.4} Suppose $V\in L_\text{\rm{loc}}^2 (\Reals^+)$ and $\Im m(E+i0)>0$ for almost every $E>0$. Then for any $y_0, y_1 \in (0,\infty)$, \begin{equation} \label{4.18} \Real \biggl[ g_t(iy_0) - \frac{y_1 g_t(iy_1)}{y_0}\biggr] =
\int_0^\infty \frac{(y_0^2 - y_1^2)\xi^2}{y_0(\xi^2 + y_0^2) (\xi^2 + y_1^2)}\,
\log \biggl[ \frac{T(\xi,0)} {T(\xi,t)}\biggr]\, \frac{d\xi}{\pi} \end{equation} where $g_t$ is given by \eqref{4.11} and $T$, by \eqref{2.8}. \end{theorem}
\begin{proof} If $h$ is a bounded harmonic function on ${\mathbb{C}}_+$ with a continuous extension to $\bar{\mathbb{C}}_+$, then for $y>0$, \begin{equation} \label{4.19} h(x+iy) = \frac{y}{\pi} \int \frac{h(\xi)}{(\xi-x)^2 + y^2}\, d\xi . \end{equation} This Poisson representation is standard \cite{Rudin,Stein} and follows by noting that the difference of the two sides is a harmonic function on ${\mathbb{C}}_+$ vanishing on ${\mathbb{R}}$ so, by the reflection principle, a restriction of a bounded harmonic function on ${\mathbb{C}}$ vanishing on ${\mathbb{R}}$ and so $0$ by Liouville's Theorem.
As $\Real g_t(k)$ is a bounded harmonic function on $\{k\mid\Ima k\geq \varepsilon\}$, we have for all $y>0$ and $\varepsilon >0$, \begin{equation} \label{4.20} \Real g_t (x+iy+i\varepsilon) = \frac{y}{\pi} \int \frac{\Real g_t (\xi+i\varepsilon)}{(\xi-x)^2 + y^2}\, d\xi \end{equation} and therefore, \begin{equation} \label{4.27} \Real g_t(iy_0+i\varepsilon) - \frac{y_1}{y_0}\, \Real g_t (iy_1 + i\varepsilon) =
\int Q(\xi, y_0, y_1) \Real g_t (\xi+i\varepsilon) \,d\xi \end{equation} where \begin{align*} Q(\xi, y_0, y_1) &= \frac{1}{\pi} \biggl[ \frac{y_0}{\xi^2 + y_0^2} - \frac{y_1}{y_0}\, \frac{y_1}{\xi^2 + y_1^2}\biggr] \\ &= \frac{1}{\pi} \, \frac{y_0^2 - y_1^2}{y_0} \, \frac{\xi^2}{(\xi^2 + y_0^2)(\xi^2 + y_1^2)}. \end{align*}
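The second equality in the formula for $Q$ is elementary algebra:
\[
\frac{y_0}{\xi^2 + y_0^2} - \frac{y_1^2}{y_0(\xi^2 + y_1^2)} = \frac{y_0^2 (\xi^2 + y_1^2) - y_1^2 (\xi^2 + y_0^2)}{y_0 (\xi^2 + y_0^2)(\xi^2 + y_1^2)} = \frac{(y_0^2 - y_1^2)\, \xi^2}{y_0 (\xi^2 + y_0^2)(\xi^2 + y_1^2)},
\]
since the $y_0^2 y_1^2$ terms cancel in the numerator.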
By \eqref{4.14}, uniformly in $\varepsilon$, \[ \abs{\Real g_t (\xi+i\varepsilon)} \leq C [\abs{\xi}^{-2} + \abs{\xi}^{-1}] \] and clearly, \[ \abs{Q(\xi)} \leq C_{y_0, y_1}\, \frac{\xi^2}{1+\xi^4} \] so, by the dominated convergence theorem, we can take $\varepsilon\downarrow 0$ in \eqref{4.20}. The left side converges to the left side of \eqref{4.18} and, by \eqref{4.12} and $\Real g_t (-\bar k)=\Real g_t (k)$, the right side converges to the right side of \eqref{4.18}. \end{proof}
Here is the step-by-step version of the Zakharov--Faddeev sum rule \eqref{1.27}:
\begin{theorem}[Step-by-Step Zakharov--Faddeev Sum Rule]\label{T4.5} Suppose $V\in L_\text{\rm{loc}}^2(\Reals^+)$ and $\Im m(E+i0)>0$ for almost every $E>0$. For any $t>0$, \begin{equation} \label{4.28} \tfrac18 \int_0^t V(x)^2\, dx = \tfrac23 \sum_j\, [\kappa_j(0)^3 - \kappa_j(t)^3]
+ \lim_{y\to\infty} \int_0^\infty P(\xi,y) \log \biggl[ \frac{T(\xi,t)}{T(\xi,0)}\biggr] \, d\xi \end{equation} where \begin{equation} \label{4.29} P(\xi,y) = \frac{1}{\pi} \biggl[ \frac{4\xi^2 y^4}{(\xi^2 + 4y^2)(\xi^2 +y^2)}\biggr]. \end{equation} \end{theorem}
\begin{proof} By \eqref{4.15}, with $b$ given by \eqref{4.17}, \[ y^3 [g_t (2iy) -\tfrac12\, g_t(iy)] = b[\tfrac18 -\tfrac12] + o(1) \] so, by \eqref{4.18}, \begin{equation} \label{4.30} b=\lim_{y\to\infty} - \f83 \, \biggl[ \frac{y^3 [(2y)^2 -(y)^2]}{2\pi y}\biggr] \int_0^\infty \frac{\xi^2}{(\xi^2+4y^2)(\xi^2 + y^2)} \, \log\biggl[\frac{T(\xi,0)}{T(\xi,t)} \biggr]\, d\xi \end{equation} which is \eqref{4.28}. \end{proof}
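In more detail, the first step of the proof expands both terms via \eqref{4.15}; the $ay^{-1}$ contributions cancel:
\[
g_t (2iy) -\tfrac12\, g_t(iy) = \biggl[ \frac{a}{2y} + \frac{b}{8y^3}\biggr] - \biggl[ \frac{a}{2y} + \frac{b}{2y^3}\biggr] + o(y^{-3}) = b\bigl[\tfrac18 - \tfrac12\bigr] y^{-3} + o(y^{-3}).
\]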
\begin{remarks} 1. As $\lim_{y\to\infty} P(\xi,y) =\frac{1}{\pi}\xi^2$, formally, \eqref{4.28} is just a difference of \eqref{1.28} for $L_0$ and $L_t$.
2. In the preceding theorems, the assumption that $\Im m(E+i0)>0$ for almost every $E>0$ was only used to allow us to apply Proposition~\ref{P2.4} to obtain a simpler expression for the boundary values of $a_t(k)$. The assumption may be removed if one is willing to replace the ratio $T(\xi,t)/T(\xi,0)$ by the limiting value of the relative Wronskian. \end{remarks}
\section{Lower Semicontinuity of the Quasi-Szeg\H{o} Terms} \label{s5}
For any $V\in L_\text{\rm{loc}}^1 (0,\infty)$, we can define (in the limit circle case after picking a boundary condition at infinity) $T(k,0)$ by \eqref{2.8} for a.e.\ $k\in (0,\infty)$ and then \begin{equation} \label{5.1} Q(V) =-\frac{1}{\pi} \int_0^\infty \log [T(k,0)] k^2 \, dk. \end{equation} Since $T\leq 1$, $-\log [T]\geq 0$ and the integral can only diverge to $\infty$, so $Q(V)$ is always defined although it may be infinite. The main result in this section is:
\begin{theorem}\label{T5.1} Let $V_n$ be a sequence in $L_\text{\rm{loc}}^2 ((0,\infty))$ and let $V\in L_\text{\rm{loc}}^2 ((0,\infty))$ be limit point at infinity. Suppose \begin{equation} \label{5.2} \int_0^a \abs{V_n(x)-V(x)}^2\, dx\to 0 \end{equation} for each $a>0$. Then \begin{equation} \label{5.3} Q(V) \leq \liminf Q(V_n). \end{equation} \end{theorem}
\begin{remarks} 1. As noted in the Introduction, this is related to results in Sylvester--Winebrenner \cite{SW}. However, they have no bound states and $\abs{r(k)} \leq 1$ in the upper half-plane. This fails in our case and our argument will need to be more involved.
2. It is interesting that the analogue in the Jacobi case \cite{KS} used semicontinuity of the entropy and this result comes from weak semicontinuity of the $L^p$-norm.
3. It is not hard to see that this result holds if $L_\text{\rm{loc}}^2$ is replaced by $L_\text{\rm{loc}}^1$ and the $\abs{\dots}^2$ in \eqref{5.2} is replaced by $\abs{\dots}^1$. Basically, one still has strong resolvent convergence in that case. But the argument is simpler in the $L_\text{\rm{loc}}^2$ case we need, so that is what we state. \end{remarks}
We will prove this theorem in several steps. We will write $w_n(k)$ and $w(k)$ for the $m$-functions (parameterized by momentum) associated to $V_n$ and $V$ respectively.
\begin{proposition}\label{P5.2} Let $V_n,V$ obey the hypothesis of Theorem~\ref{T5.1}. Then for all $k$ with $\Real k>0$ and $\Ima k>0$, one has $w_n(k)\to w(k)$ as $n\to\infty$. \end{proposition}
\begin{proof} Let $H$ (resp., $H_n$) be the operator $u\mapsto -u'' +Vu$ on $L^2 (0,\infty)$ with boundary condition $u(0)=0$ at $x=0$ and, if need be, a boundary condition at $\infty$ for some $n$ if the corresponding $H_n$ is limit circle at $\infty$.
By the standard construction of these operators, $H$ being limit point at infinity has $D\equiv \{u\in C_0^\infty ([0,\infty))\mid u(0)=0\}$ as an operator core. (\cite[Theorem~X.7]{RS2} has the result essentially if $V$ is continuous, but the proof works if $V$ is $L_\text{\rm{loc}}^2$. Essentially, any $\varphi \in [(H+i) [D]]^\perp$ solves $-\varphi''+ V\varphi =i\varphi$ with $\varphi(0) =0$ and that cannot be $L^2$; it follows that $\overline{(H\pm i)[D]}=L^2$, which is essential selfadjointness.)
Let $f=(H-k^2)\varphi$ with $\varphi\in D$. Then \begin{align*}
\|[(H_n -k^2)^{-1} -(H-k^2)^{-1}] f\| &= \|(H_n-k^2)^{-1} (V_n-V)\varphi\| \\
&\leq \abs{\Ima k^2}^{-1} \|(V_n-V)\varphi\|\to 0 \end{align*} by \eqref{5.2}, so we have strong resolvent convergence.
If $\varphi\in L^2 (0,a)$ and $\psi_n =(H_n-k^2)^{-1}\varphi$, then for $x>a$, \[ w_n (k,x) = \frac{\psi'_n(x)}{\psi_n(x)} \] and so, since $a$ is arbitrary, we have $w_n(k,x)\to w(k,x)$ for each $x>0$.
Differentiating \eqref{E:w} with respect to $x$ and using \eqref{R2.1} leads to the Riccati equation \begin{equation}\label{E:Riccati} \frac{d w}{d x} = k^2 - V(x) - w^2. \end{equation} By combining this with \eqref{5.2}, one can deduce $w_n(k)\to w(k)$. \end{proof}
We now define the reflection coefficient (for now, a definition; we will discuss its connection with reflection at the end of the section) by \begin{equation} \label{5.4} r_n (k) = \frac{ik-w_n(k)}{ik+w_n(k)}. \end{equation} The following bound is clearly relevant.
\begin{proposition}\label{P5.3} Let $k=\abs{k}e^{i\eta}$ with $\eta\in [0,\frac{\pi}{2})$, $\abs{k}\neq 0$. Then \begin{equation} \label{5.5}
\sup_{z\in{\mathbb{C}}_+}\, \biggl| \frac{ik-z}{ik+z}\biggr| = \biggl( \frac{1+\sin (\eta)}{1-\sin(\eta)}\biggr)^{1/2}. \end{equation} \end{proposition}
\begin{proof} $z\mapsto\frac{ik-z}{ik+z}$ is a fractional linear transformation which takes $z = -ik\in {\mathbb{C}}_-$ to infinity since $\Real k>0$ if $\eta\in [0,\frac{\pi}{2})$. Thus ${\mathbb{C}}_+$ is mapped into the interior of the circle $\{\frac{ik-x}{ik+x}\mid x\in{\mathbb{R}}\}\cup\{-1\}$. By replacing $k$ by $k/\abs{k}$, we can suppose $\abs{k}=1$. Let \[
f(x) =\biggl| \frac{x-ie^{i\eta}}{x+ie^{i\eta}}\biggr|^2. \]
Straightforward calculus shows that $f'(x)=0$ exactly at $x=\pm 1$. Since $f(x)\to 1$ as $x\to \pm\infty$, we see the maximum of $f(x) =(1+x^2 + 2x\sin\eta)/(1+x^2-2x\sin\eta)$ occurs at $x=1$ and is $(1 + \sin(\eta))/(1-\sin(\eta))$. \end{proof}
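The calculus alluded to in the proof is short: writing $s=\sin\eta$, the numerator of $f'(x)$ (by the quotient rule) is
\[
(2x+2s)(1+x^2-2xs) - (1+x^2+2xs)(2x-2s) = 4s(1-x^2),
\]
which vanishes exactly at $x=\pm 1$.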
\begin{lemma} \label{L5.4} Let $f_n$ be a sequence of analytic functions on ${\mathbb{D}}$, the open unit disk, and let $f_\infty$ be analytic on ${\mathbb{D}}$, with \begin{equation} \label{5.6} \sup_{z\in{\mathbb{D}},n}\, \abs{f_n(z)}<\infty. \end{equation} Let $f_n(z)\to f_\infty(z)$ for all $z\in{\mathbb{D}}$. Let $f_n (e^{i\theta})$ be the a.e.\ radial limit of $f_n (re^{i\theta})$ and similarly for $f_\infty (e^{i\theta})$. Then $f_n (e^{i\theta})\to f_\infty (e^{i\theta})$ weak-$*$, that is, for all $g\in L^1 (\partial{\mathbb{D}})$, \begin{equation} \label{5.7} \int_0^{2\pi} g(e^{i\theta}) f_n (e^{i\theta}) \, \frac{d\theta}{2\pi} \to \int_0^{2\pi} g(e^{i\theta}) f_\infty (e^{i\theta}) \, \frac{d\theta}{2\pi}. \end{equation} \end{lemma}
\begin{proof} By \eqref{5.6}, it suffices to prove \eqref{5.7} for $g(e^{i\theta}) = e^{ik\theta}$ for all $k$. But for $H^\infty $ functions (see \cite{Rudin}), $\int e^{ik\theta} f(e^{i\theta}) \frac{d\theta}{2\pi} =0$ if $k>0$ and $\int e^{-ik\theta} f(e^{i\theta}) \frac{d\theta}{2\pi} = f^{(k)}(0)/k!$. Pointwise convergence in ${\mathbb{D}}$ and boundedness implies convergence of all derivatives inside ${\mathbb{D}}$. \end{proof}
\begin{theorem}\label{T5.5} Let $r_n(k)$ be given by \eqref{5.4} for $\Ima k>0$. Then for a.e.~$k\in (0,\infty)$, $r_n(k)=\lim_{\varepsilon\downarrow 0} r_n (k+i\varepsilon)$ exists and obeys \begin{equation} \label{5.8} \abs{r_n(k)} \leq 1, \qquad (k>0). \end{equation} Moreover, for any $g$ in $L^1 (a,b)$ with $0<a<b<\infty$, we have that \begin{equation} \label{5.9} \int_a^b g(k) r_n(k)\, dk \to \int_a^b g(k) r(k)\, dk \end{equation} and that for $1\leq p<\infty$, \begin{equation} \label{5.10} \liminf_{n\to\infty}\, \int_a^b \abs{r_n(k)}^p k^2 \, dk \geq \int_a^b \abs{r(k)}^p k^2 \, dk. \end{equation} \end{theorem}
\begin{proof} Pick $0<c<a<b<d<\infty$. Let $Q$ be the semidisk in ${\mathbb{C}}_+$ with flat edge $(c,d)$. Let $\varphi:{\mathbb{D}}\to Q$ be a conformal map. Since \[ \sup_{k\in Q}\, \arg(k) < \frac{\pi}{2}, \] we have \begin{equation} \label{5.11} \sup_{n,k\in Q}\, \abs{r_n(k)}<\infty \end{equation} by Proposition~\ref{P5.3}. We can thus apply Lemma~\ref{L5.4} to $r_n \circ \varphi$ and so conclude \eqref{5.9}. \eqref{5.8} follows from Proposition~\ref{P5.3} for $\eta=0$.
Note that \eqref{5.9} implies $r_n\to r$ in the weak topology on $L^p ((a,b),k^2\, dk)$. Thus \eqref{5.10} is just an expression of the fact that the norm on a Banach space is weakly lower semicontinuous. \end{proof}
\begin{proof}[Proof of Theorem~\ref{T5.1}] Notice that for a.e.\ $k>0$, \begin{equation} \label{5.12} T(k,0)+\abs{r(k)}^2 =1. \end{equation} Thus \begin{align} -\log [T] &=-\log (1-\abs{r}^2) \notag \\ &= \sum_{m=1}^\infty \, \frac{\abs{r}^{2m}}{m}. \label{5.13} \end{align} \eqref{5.10} implies that for each $m$ and $0<a<b<\infty$, \[ \int_a^b \biggl[\frac{\abs{r(k)}^{2m}}{m}\biggr] k^2 \, dk \leq \liminf \int_a^b \biggl[ \frac{\abs{r_n(k)}^{2m}}{m}\biggr] k^2\, dk, \] which becomes \[ \int_a^b \biggl[\,\sum_{m=1}^M \, \frac{\abs{r}^{2m}}{m}\biggr] k^2\, dk \leq \liminf \int_a^b \biggl[\, \sum_{m=1}^M \, \frac{\abs{r_n}^{2m}}{m}\biggr] k^2\, dk \] so, by \eqref{5.13}, \[ -\frac{1}{\pi} \int_a^b \log [T(k,0)]k^2\, dk \leq \liminf \biggl\{-\frac{1}{\pi} \int_0^\infty \log [T_n (k,0)] k^2\, dk\biggr\}. \] Now take $a\downarrow 0$ and $b\to\infty$. \end{proof}
We end this section with a sketch of an alternate approach to Theorem~\ref{T5.1}. We present this approach because it is rooted in the physics of scattering. Since we have a direct proof, we do not produce all the technical details---indeed, one is missing. The argument is in a sequence of steps:
\noindent{\bf Step 1.} \ Let $L$ be the whole-line problem obtained by setting $V=0$ on $(-\infty, 0)$. Let $j$ be a $C^\infty$ function with $0\leq j\leq 1$ and $j(x)=0$ if $x>0$ and $j(x)=1$ if $x<-1$. Let $J$ be multiplication by $j$. Then, by \cite{Sim99}, \begin{equation} \label{5.14} \slim_{t\to\pm\infty}\, e^{itL} Je^{-itL} P_\text{\rm{ac}} (L) = P_\ell^\pm (L) \end{equation} exist and are invariant projections for $L$. $L\restriction\text{\rm{ran}} (P_\ell^\pm)$ is absolutely continuous and has spectrum $[0,\infty)$ with multiplicity $1$.
\noindent{\bf Step 2.} \begin{equation} \label{5.15} P_\ell^- (L) P_\ell^+(L) P_\ell^- (L) \equiv R_\ell^- (L) \end{equation} is a positive operator on $\text{\rm{ran}} (P_\ell^-)$ which commutes with $L\restriction \text{\rm{ran}} (P_\ell^- (L))$ and so, by the simplicity of the spectrum of this operator, it is multiplication by a function $R_L(E)$. Since $0\leq R_\ell^-(L)\leq 1$, as a function, $0\leq R_L(E)\leq 1$. $R_L$ is discussed in \cite{Sim99}.
\noindent{\bf Step 3.} \ By computations related to those in \cite{SW}, \begin{equation} \label{5.16} R_L (k) = \abs{r(k)}^2 \end{equation} with $r$ given by \eqref{5.4}.
\noindent{\bf Step 4.} \ We believe that for $V_n\to V$ in the sense of Theorem~\ref{T5.1}, one has for a dense set of vectors uniformity in $n$ of the limit in \eqref{5.14}, but we have not nailed down the details. If true, one has \begin{equation} \label{5.17} \wlim_{n\to\infty} \, R_\ell^-(L_n) = R_\ell^- (L). \end{equation}
\noindent{\bf Step 5.} \ By \eqref{5.16}, $\abs{r_n(k)}^2 \to \abs{r(k)}^2$ weakly as $L^\infty$-functions (i.e., when smeared with $g\in L^1 (a,b)$) on $[a,b]$ for any $0<a<b<\infty$. By the weak semicontinuity of the norm, \eqref{5.10} holds for $p\geq 2$.
\noindent{\bf Step 6.} \ Get semicontinuity of $Q(V)$ from \eqref{5.10} for $p\geq 2$, as we do in the above proof.
\section{Local Solubility} \label{s6}
In this section, we will study \eqref{1.19x} and describe its relation to $d\rho$ being the spectral measure of some $V\in L_\text{\rm{loc}}^2$. We will prove:
\begin{theorem}\label{T6.1} Let $d\rho$ be a measure obeying condition {\rm{(i)}} of Theorem~\ref{T1.3}. Define $F$ by \eqref{1.18} and suppose \eqref{1.19x} holds. Then $d\rho$ is the spectral measure of some $V\in L_\text{\rm{loc}}^2$. \end{theorem}
\begin{theorem}\label{T6.2} Let $d\rho$ be the spectral measure of a potential in $L^2$. Then \eqref{1.19x} holds, that is, $F\in L^2(\Reals^+)$. \end{theorem}
Before discussing the main ideas used to prove these results, we wish to reassure the reader that the hypotheses of Theorem~\ref{T6.1} do bound the growth of $d\rho$ at infinity. Specifically, we know that \eqref{HRTa} must hold for any spectral measure. We do this first because such information is helpful in justifying some calculations that appear once the real work begins.
\begin{lemma} If $d\rho$ obeys condition {\rm{(i)}} of Theorem~\ref{T1.3} and \eqref{1.19x} holds, then \begin{equation}\label{HRTaAgain} \int \frac{d\rho(E)}{1+E^2} < \infty. \end{equation} \end{lemma}
\begin{proof} Unravelling the definitions of $F(q)$ and $d\nu$ given in \eqref{1.18} and \eqref{1.17a}, we find $$ F(q) = \pi^{-1/2} \int_1^\infty \exp\bigl\{ -\bigl(q-\sqrt{E}\bigr)^2 \bigr\} E^{-1}\,d[\rho-\rho_0](E). $$ The contribution of $\rho_0$ can be bounded using $$ \tfrac{1}{\pi} \int_0^\infty \exp\bigl\{ -\bigl(q-\sqrt{E}\bigr)^2 \bigr\} E^{-1/2} \,dE = \tfrac{2}{\pi} \int_0^\infty \exp\bigl\{ -(q-k)^2 \bigr\}\,dk \leq 2 \pi^{-1/2}, $$ which shows that $$
\int_1^\infty \exp\bigl\{ -\bigl(q-\sqrt{E}\bigr)^2 \bigr\} E^{-1} \,d\rho(E) \leq 2 + 2|F(q)|. $$ Integrating both sides against $\frac{dq}{1+q^2}$ leads to \eqref{HRTaAgain}, at least when the region of integration is restricted to $[1,\infty)$. The remaining portion of the integral is finite by condition {\rm{(i)}} of Theorem~\ref{T1.3}. \end{proof}
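The equality in the bound for the $\rho_0$ contribution is the substitution $E=k^2$ (so that $E^{-1/2}\,dE = 2\,dk$), and the final inequality is the Gaussian integral:
\[
\int_0^\infty \exp\bigl\{ -\bigl(q-\sqrt{E}\bigr)^2 \bigr\} E^{-1/2}\,dE = 2\int_0^\infty \exp\bigl\{-(q-k)^2\bigr\}\,dk \leq 2\int_{-\infty}^\infty e^{-u^2}\,du = 2\sqrt{\pi}.
\]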
The key to proving the two theorems of this section will be the fact that essentially, $\widehat F(\alpha)$, the Fourier transform of $F$, is $e^{-\f14 \alpha^2} A(\alpha)$, where $A(\alpha)$ is the $A$-function introduced by Simon \cite{Sim271} and studied further by Gesztesy--Simon \cite{GSAnn}.
We will, first and foremost, use formula (1.21) from \cite{GSAnn}: \begin{equation} \label{6.1} A(\alpha) =-2\int \lambda^{-1/2} \sin \bigl(2\alpha \sqrt{\lambda}\bigr) \, d(\rho-\rho_0) (\lambda) \end{equation} where $\lambda^{-1/2}\sin (2\alpha \sqrt{\lambda})$ is interpreted as $\abs{\lambda}^{-1/2} \sinh (2\alpha \sqrt{\abs{\lambda}})$ if $\lambda <0$ and \eqref{6.1} holds in distributional sense. We will also need the following (eqn.~(1.16) of \cite{GSAnn}): \begin{equation} \label{6.2}
\abs{A(\alpha)-V(\alpha)} \leq \biggl| \int_0^\alpha \abs{V(y)}\,dy\biggr|^2 \exp \biggl( \alpha \int_0^\alpha \abs{V(y)}\, dy \biggr) \end{equation} proven in \cite{Sim271} for regular $V$'s and in (1.16) of \cite{GSAnn} for $V\in L_\text{\rm{loc}}^1$. Finally, we need the following result, which follows readily from Remling's work \cite{Rem02,Rem03}. (It can also be proved using the Gel'fand--Levitan method.)
\begin{proposition}\label{P6.3} Let $d\rho$ be a measure obeying \eqref{HRTaAgain} and condition {\rm{(i)}} of Theorem~\ref{T1.3}. If the distribution \eqref{6.1} lies in $L_\text{\rm{loc}}^1 [0,\infty)$, then $d\rho$ is the spectral measure of a potential $V\in L_\text{\rm{loc}}^1 [0,\infty)$. \end{proposition}
\begin{proof} Consider the continuous function $$ K(x,t)=\tfrac12\phi(x-t)-\tfrac12\phi(x+t), \quad\text{where}\quad
\phi(x)=\int_0^{|x|/2} A(\alpha)\,d\alpha $$ and $A(\alpha)$ is given by \eqref{6.1}. As explained in Theorem~1.1 of \cite{Rem03}, $A(\alpha)$ is the $A$-function of a potential in $L^1_\text{\rm{loc}}$ provided \begin{equation} \label{PosDef} \int\int \bar\psi(x)\psi(t) \bigl[\delta(x-t) + K(x,t)\bigr]\,dx\,dt > 0 \end{equation} for all non-zero $\psi\in L^2([0,\infty))$ of compact support. We will now show that this holds.
For $\psi\in C_c^\infty$, elementary manipulations using \eqref{6.1} show \begin{equation*} \int\!\!\int \bar\psi(x)\psi(t) K(x,t) \,dx\,dt = \int\!\!\int\!\!\int \bar\psi(x)\psi(t) \frac{\sin(x\sqrt{\lambda})\sin(t\sqrt{\lambda})}{\lambda}
\,dx\,dt\,d[\rho-\rho_0](\lambda). \end{equation*} Thus by recognizing the spectral resolution of the free Schr\"odinger operator we have \begin{equation*}
\text{LHS\eqref{PosDef}} = \int \biggl| \int \psi(x) \frac{\sin(x\sqrt{\lambda})}{\sqrt{\lambda}}\,dx \biggr|^2
\,d\rho(\lambda) \end{equation*} for such test functions. It then extends easily to all $\psi\in L^2([0,\infty))$ of compact support, because $K$ is a bounded function.
This representation shows that LHS\eqref{PosDef} is non-negative. It cannot vanish for non-zero $\psi$ because the Fourier sine transform of $\psi$ is analytic and so has discrete zeros; however, the support of $d\rho$ is not discrete by hypothesis. Thus we have shown that $A(\alpha)$ defined by \eqref{6.1} is the $A$-function of some $V\in L^1_\text{\rm{loc}}$.
Unfortunately, we are only half-way through the proof; the $A$-function need not uniquely determine the spectral measure through \eqref{6.1}. This is the case, for example, when the potential is limit circle at infinity; different boundary conditions lead to different spectral measures, but all have the same $A$-function. Christian Remling has explained to us that using de Branges' work \cite{deB}, one can deduce that this is actually the only way non-uniqueness can occur. In our situation, however, we have some extra information which permits us to complete the proof of uniqueness without much technology, which is what we proceed to do now.
Let $d\rho_1$ denote the spectral measure for the potential $V$ just constructed (with a boundary condition at infinity if necessary). Classical results tell us that \eqref{HRTaAgain} holds for $d\rho_1$ and that $\int_{-\infty}^0 \exp\{c\sqrt{-\lambda}\}\,d\rho_1(\lambda)<\infty$ for any $c>0$. Lastly, by construction we have \begin{equation} \label{Aequal} \int \lambda^{-1/2} \sin \bigl(2\alpha \sqrt{\lambda}\bigr) \, d(\rho-\rho_0)(\lambda) =\int \lambda^{-1/2} \sin \bigl(2\alpha \sqrt{\lambda}\bigr) \, d(\rho_1-\rho_0)(\lambda) \end{equation} as weak integrals of distributions. We wish to conclude that $\rho_1=\rho$.
Our first step is to prove that the support of $d\rho_1$ is bounded from below. Let us fix a non-negative $\phi\in C^\infty_c(\Reals)$ with $\int \phi(x)\,dx=1$ and $\text{\rm{supp}}(\phi)\subset[1,2]$. Elementary considerations show that there is a constant $C$ so that $$
\biggl| \int k^{-1} \sin(2\alpha k) \phi(\alpha/N)\,d\alpha \biggr| \leq C N^2 (1+k)^{-100} $$ for all $N>1$ and all $k\geq 0$. More easily, we have $$ 4N^2 e^{4Nk} \geq \tfrac{N}{k} \sinh(4Nk) \geq \int k^{-1} \sinh(2\alpha k) \phi(\tfrac\alpha{N})\,d\alpha \geq \tfrac{N}{k} \sinh(2Nk) \geq N^2 e^{Nk} $$ for the same range of $N$ and $k$. Putting this together with \eqref{Aequal} we obtain \begin{equation}\label{rho1bb} \int_{-\infty}^0 e^{N\sqrt{-\lambda}} \, d\rho_1(\lambda) \leq C_1 + C_2 e^{4N\sqrt{-E_1}}, \end{equation} where $E_1$ denotes the infimum of the support of $d\rho$ just as in condition {\rm{(i)}} of Theorem~\ref{T1.3}. Taking $N\to\infty$ in \eqref{rho1bb} leads to the conclusion that the support of $\rho_1$ is bounded from below (by $16E_1$, which is easily improved).
Now that we know that the supports of both $\rho$ and $\rho_1$ are bounded from below, we may use \begin{equation} \label{FouInt} \tfrac{2}{\sqrt{\pi}}\int \alpha e^{-\alpha^2/s} \sin(2\alpha k)\,d\alpha = s^{3/2} k\, e^{-s k^2},
\quad\text{for $s>0$ and $k\in{\mathbb{C}}$}, \end{equation} on both sides of \eqref{Aequal} and so obtain \begin{equation} \label{LaplaceEqual} \int e^{-s\lambda} \, d\rho(\lambda) = \int e^{-s\lambda} \, d\rho_1(\lambda). \end{equation} That $\rho_1=\rho$ now follows from the invertibility of Laplace transforms. \end{proof}
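\eqref{FouInt} may be verified by differentiating the standard Gaussian integral $\int_{-\infty}^\infty e^{-\alpha^2/s} e^{2i\alpha k}\,d\alpha = \sqrt{\pi s}\, e^{-sk^2}$ with respect to $k$; the cosine part drops out by oddness, leaving
\[
\int_{-\infty}^\infty \alpha\, e^{-\alpha^2/s} \sin(2\alpha k)\,d\alpha = \sqrt{\pi}\, s^{3/2} k\, e^{-sk^2},
\]
which, since the integrand is even in $\alpha$, gives \eqref{FouInt} with the integral taken over $(0,\infty)$. Both sides are entire in $k$, so the identity extends from real $k$ to all of ${\mathbb{C}}$.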
As outlined above, our discussion of the local solubility condition revolves around a relation between the distributions $A$ and $F$. Let \begin{equation} \label{6.3} A(\alpha) = A_S(\alpha) + A_L (\alpha) \end{equation} where $A_S$ is the integral over $\lambda <1$ and $A_L$ over $\lambda\geq 1$. Since \begin{equation} \label{6.4} \int_{-\infty}^\infty \pi^{-1/2} e^{-q^2} e^{-iq\alpha}\, dq = e^{-\alpha^2/4}, \end{equation} \eqref{1.17a}, \eqref{1.18}, and \eqref{6.1} immediately imply \begin{equation} \label{6.5} e^{-\alpha^2/4} A_L(\alpha) = i [\widehat F(2\alpha) - \widehat F(-2\alpha)]. \end{equation}
For $p\geq 1$ and $q\leq 0$, we have $e^{-(p-q)^2} \leq e^{-p^2} e^{-q^2}$. Combining this with \[ \int_{p\geq 1} e^{-p^2} d\abs{\nu} (p) <\infty, \] which follows from \eqref{HRTaAgain}, we obtain that for $q\leq 0$, \begin{equation} \label{6.6}
\abs{F(q)} \leq Ce^{-q^2}. \end{equation}
\begin{proof}[Proof of Theorem~\ref{T6.1}] By \eqref{1.19x} and \eqref{6.6}, $F\in L^2({\mathbb{R}})$ and hence $\widehat{F}\in L^2({\mathbb{R}})$. By \eqref{6.5}, $A_L(\alpha)\in L_\text{\rm{loc}}^2$. By \eqref{6.1}, $A_S(\alpha)$ is bounded on bounded intervals, so $A(\alpha)\in L_\text{\rm{loc}}^2$. By Remling's Theorem (Proposition~\ref{P6.3}), $d\rho$ is the spectral measure of some $V\in L_\text{\rm{loc}}^1$. By \eqref{6.2}, $\abs{A(\alpha)-V(\alpha)}$ is bounded on bounded intervals, so $A\in L_\text{\rm{loc}}^2 \Rightarrow V\in L_\text{\rm{loc}}^2$. \end{proof}
To prove Theorem~\ref{T6.2}, we need the following elementary fact:
\begin{proposition}\label{P6.6} If $T$ is a tempered distribution supported in $(1,\infty)$ which is real and $\Ima \widehat T(\alpha)\in L^2$, then $T\in L^2$. \end{proposition}
\begin{proof} We begin by noting that if $h\in L^2 (0,\infty)$, then \begin{equation} \label{6.7} \int_{-\infty}^\infty \abs{\Real \widehat h(\alpha)}^2\,d\alpha = \int_{-\infty}^\infty \abs{\Ima \widehat h(\alpha)}^2\,d\alpha \end{equation} since $\overline{\widehat h(\alpha)} =\widehat{\tilde h}(\alpha)$, where $\tilde h(x)=h(-x)$, and thus, by the Plancherel theorem, \begin{equation} \label{6.8} \int_{-\infty}^\infty \widehat h(\alpha)^2\,d\alpha = \int_{-\infty}^\infty \tilde h(x)h(x)\,dx =0 \end{equation} implying \eqref{6.7}. In particular, \begin{equation} \label{6.9} \int_{-\infty}^\infty \abs{\widehat h(\alpha)}^2\, d\alpha = 2\int \abs{\Ima \widehat h(\alpha)}^2\, d\alpha. \end{equation}
Given $T$, pick $C^\infty$ $g$ with $g(x)=g(-x)$, $\abs{g(x)}\leq 1$, $g(x)=1$ for $\abs{x}$ small, $\text{\rm{supp}}(g)\subset [-1,1]$, and $\int g(x)\,dx =1$. Define \begin{equation} \label{6.10} g_L(x)=g \biggl( \frac{x}{L}\biggr) \qquad r_\delta(x)=\delta^{-1} g\biggl(\frac{x}{\delta}\biggr) \end{equation} and note that $r_\delta * (g_LT)\in L^2$, supported in $(0,\infty)$ for $\delta <1$, and since \begin{equation} \label{6.11} [r_\delta * (g_L T)]\widehat{\ } (\alpha) = \widehat g(\alpha\delta) (\widehat g_L *\widehat T)(\alpha) \end{equation} and $\widehat g, \widehat g_L$ are real, we have \[ \int \abs{\Ima [r_\delta * (g_L T)]\widehat{\ }(\alpha)}^2\, d\alpha \leq \int \abs{\Ima \widehat T}^2\, d\alpha. \] Thus, by \eqref{6.9} and the Plancherel theorem, \[ \int \abs{r_\delta * g_L T(x)}^2\, dx \leq 2\int \abs{\Ima \widehat T(\alpha)}^2 \, d\alpha \] so $T\in L^2$ by taking $\delta\downarrow 0$ and $L\to\infty$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{T6.2}] If $V\in L^2$, then \begin{equation} \label{6.12} \int_0^\alpha \abs{V(y)}\, dy \leq \biggl( \int_0^\infty \abs{V(y)}^2\, dy \biggr)^{1/2} \alpha^{1/2} \end{equation} so \eqref{6.2} says that \begin{equation} \label{6.13} \abs{A(\alpha) - V(\alpha)} \leq C \alpha^2 \exp (C\alpha^{3/2}) \end{equation} and thus, $e^{-\alpha^2/4} A(\alpha)\in L^2$.
{}From \eqref{6.1}, \begin{equation} \label{6.14} \abs{A_S(\alpha)}\leq e^{C\alpha} \end{equation} so $e^{-\alpha^2/4} A_S(\alpha)\in L^2$, and thus, $e^{-\alpha^2/4} A_L(\alpha)\in L^2$. By \eqref{6.5} and the fact that $F$ is real-valued, it follows that $\Ima \widehat{F}\in L^2$.
$F$ is not supported on $(1,\infty)$, but by \eqref{6.6} and boundedness on $(0,1)$, $F=F_1 + F_2$, where $F_2$ is supported on $(1,\infty)$ and $F_1\in L^2$. Thus, $\Ima \widehat{F}_1\in L^2$, so $\Ima\widehat{F}_2\in L^2$. By Proposition~\ref{P6.6}, $F_2\in L^2$, that is, \eqref{1.19x} holds. \end{proof}
\section{Proof of Theorem~\ref{T1.3}} \label{s7}
Here we will use the results of the last three sections to prove Theorem~\ref{T1.3}. We use the strategy of \cite{KS} as refined in \cite{SZ} and \cite{Sim288}. We treat each direction of the theorem in a separate subsection.
\subsection*{$\bf V\in L^2 \Rightarrow$ (i)--(iv)} As $V\in L^2$, $V(H_0+1)^{-1}$ is compact, and thus (i) holds by Weyl's Theorem. (ii) is just Theorem~\ref{T6.2}.
Fix $R<\infty$ and let \begin{equation} \label{7.1} V^{(R)}(x) = \begin{cases} V(x), & 0 \leq x \leq R \\ 0, & x>R \end{cases} \end{equation} so $L_t=H_0$ if $t>R$. Thus applying Theorem~\ref{T4.5} to $V^{(R)}$ with $t>R$ gives \begin{equation}\label{7.2} \begin{aligned} \tfrac18 \int_0^R V(x)^2\, dx &= \tfrac23 \sum_j [\kappa_j^{(R)}]^3 + \lim_{y\to\infty} \int_0^\infty P(\xi,y)\log \biggl( \frac{1}{T^{(R)}(\xi,0)}\biggr)\,d\xi. \end{aligned} \end{equation} By rewriting $P$ as \[ P(\xi,y) = \frac{\xi^2}{\pi} \biggl[\frac{1}{(1+\frac{\xi^2}{4y^2}) (1+\frac{\xi^2}{y^2})}\biggr], \] we see that it is monotone increasing in $y$. As the integrand $\log (\frac{1}{T}) \geq 0$, the monotone convergence theorem implies \begin{equation} \label{7.3} \tfrac18 \int_0^R V(x)^2\, dx = \tfrac23\, \sum_{j=1}^{N(R)} \bigl[\kappa_j^{(R)}\bigr]^3 + Q(V^{(R)}). \end{equation}
Now take $R\to\infty$. Theorem~\ref{T5.1} controls $Q(V^{(R)})$ and since $\kappa_j^{(R)}$ converge individually to the $\kappa_j$ associated to $V$, we have \begin{equation} \label{7.4} \sum_{j=1}^\infty \kappa_j^3 \leq \liminf_{R\to\infty}\, \sum_{j=1}^\infty \bigl[\kappa_j^{(R)}\bigr]^3 \end{equation} (a trivial instance of Fatou's Lemma). Thus \eqref{7.3} becomes \begin{equation} \label{7.5} \tfrac18 \int_0^\infty V(x)^2 \, dx \geq \tfrac23 \sum_{j=1}^\infty \kappa_j^3 + Q(V). \end{equation}
In particular, $V\in L^2$ implies $Q<\infty$, that is, \eqref{1.21} holds. As $\kappa_j^3 = [E_j^{(0)}]^{3/2}$, it follows that $\sum_j [E_j^{(0)}]^{3/2} <\infty$. By \eqref{1.29}, this implies \eqref{1.20}.
\qed
\subsection*{(i)--(iv) $\bf\Rightarrow V\in L^2$} By Theorem~\ref{T6.1}, $d\rho$ is the spectral measure of a $V\in L_\text{\rm{loc}}^2$ so, in particular, \eqref{4.28} holds. Since $-\kappa_j(t)^3 \leq 0$ and $\log[T(\xi,t)]\leq 0$, this implies that \begin{equation} \label{7.6} \tfrac18 \int_0^t V(x)^2\, dx \leq \tfrac23 \sum_j \kappa_j(0)^3 + \lim_{y\to\infty} \int_0^\infty P(\xi,y) \log \biggl( \frac{1}{T(\xi,0)}\biggr)\,d\xi. \end{equation} By the same monotone convergence argument used in the first part of the proof, \begin{equation} \label{7.7} \tfrac18 \int_0^t V(x)^2\, dx \leq \tfrac23 \sum_j [E_j^{(0)}]^{3/2} + Q. \end{equation} Taking $t\to\infty$, we see $V\in L^2$ and that \eqref{7.7} holds with $t=\infty$.
\qed
Our proof shows that the Faddeev--Zakharov sum rule, \eqref{1.27}, holds for any $V\in L^2 (0,\infty)$. Rewriting $Q$ in terms of the reflection coefficient (see \eqref{5.13}) and considering potentials supported on $(-R,\infty)$ with $R<\infty$, one can obtain \eqref{1.27} for $V\in L^2 (-\infty,\infty)$ by using the ideas in \cite{SZ}.
\section{Isolating $\Re(w)$} \label{s8}
The next four sections are devoted to deducing Theorem~\ref{T1.2} from Theorem~\ref{T1.3}. This amounts to showing that the two lists of conditions are equivalent. (They are not equivalent item by item, only collectively.)
The role of this section is to prove that \begin{equation} \label{8.2} \text{Strong Quasi-Szeg\H{o}} \Leftrightarrow \text{Quasi-Szeg\H{o}} + (R<\infty) \end{equation} where \begin{equation} \label{8.1} R=\int_0^\infty \log \biggl\{1+ \biggl(\frac{\Re w}{k}\biggr)^2\biggr\} k^2\, dk. \end{equation} Note that the strong quasi-Szeg\H{o} condition involves both the real and imaginary parts of $w$, whereas the quasi-Szeg\H{o} condition depends only on $\Im w$ and $R$ only on $\Re w$. Hence the title of this section.
We now present an outline of the proof of this equivalence: under the assumption of the Weyl and Lieb-Thirring conditions, we prove \begin{SL} \item Strong Quasi-Szeg\H{o} $\Rightarrow$ Quasi-Szeg\H{o} (this section). \item Strong Quasi-Szeg\H{o} $+$ Local Solubility $\Rightarrow$ Normalization (see Section~\ref{s11}). \item Quasi-Szeg\H{o} $+$ Normalization $\Rightarrow R<\infty$ (see Section~\ref{s11}), so, by \eqref{8.2}, Quasi-Szeg\H{o} $+$ Normalization $\Rightarrow$ Strong Quasi-Szeg\H{o}. \item Normalization $\Rightarrow$ Local Solubility (see Section~\ref{s9}). \end{SL} The first two statements show that the conditions in Theorem~\ref{T1.3} imply those in Theorem~\ref{T1.2}; the last two prove the converse.
\begin{lemma}\label{Llog} For any $f\in{\mathbb{C}}$ and any $0<\epsilon\leq 1$, $$
\log( 1 + |f|^2 ) \leq \begin{cases} \epsilon^{-1} \log( 1 + \epsilon |f|^2 ) & \\
\log( 1 + \epsilon |f|^2 ) + \log(1 + \epsilon^{-1} ). & \end{cases} $$ Moreover, if $\epsilon=(1+\delta)^{-2}$ and $\delta \geq 6$, then $$ \log(1 + \epsilon^{-1} ) \leq 6 \log( \tfrac14\delta + \tfrac12 + \tfrac14\delta^{-1} ). $$ \end{lemma}
\begin{proof}
The first inequality follows from the concavity of $F:x\mapsto\log(1+x|f|^2)$: $$
\epsilon \log( 1 + |f|^2 ) = (1-\epsilon) F(0) + \epsilon F(1)
\leq F(\epsilon) = \log( 1 + \epsilon |f|^2 ). $$ The second inequality follows from $$
1 + |f|^2 \leq 1 + \epsilon |f|^2 + \epsilon^{-1} + |f|^2
= ( 1 + \epsilon |f|^2 )(1 + \epsilon^{-1} ) $$ by taking logarithms. For the last inequality, notice that $$ 1+(1+\delta)^2 = \delta^2 + 2\delta + 2
\leq \delta^2 + 4\delta + 6 + 4 \delta^{-1} + \delta^{-2}
= 16 ( \tfrac14\delta + \tfrac12 + \tfrac14\delta^{-1} )^2 $$ and since $\delta\geq6$, we have $2 \leq \tfrac14\delta + \tfrac12 + \tfrac14\delta^{-1}$. Therefore, $$ 1+(1+\delta)^2 \leq ( \tfrac14\delta + \tfrac12 + \tfrac14\delta^{-1} )^6, $$ which gives the result. \end{proof}
\begin{theorem}\label{T8.2} Using the notations \begin{align} SQS &= \int \log \biggl[ \frac{\abs{w(k+i0) + ik}^2}{4k\Im w(k+i0)}\biggr] k^2\, dk, \\ QS &= \int_0^\infty \log \biggl[ \frac{1}{4}\, \frac{d\rho}{d\rho_0} + \f12 + \f14\, \frac{d\rho_0}{d\rho} \biggr]
\sqrt{E}\, dE, \end{align} and $R$ as in \eqref{8.1}, we have $QS \leq SQS \leq QS + R$ and $R\leq 55 \, SQS$. In particular, $QS + R <\infty \Leftrightarrow \text{SQS} <\infty$. \end{theorem}
\begin{proof} The bulk of the proof rests on the following calculation: \begin{align} \frac{\abs{w(k + i0) + ik}^2}{4k \Ima w(k+i0)} &= \frac{\Ima w}{4k} + \f12 + \frac{k}{4\Ima w} + \frac{k}{4\Ima w} \biggl( \frac{\Re w}{k}\biggr)^2 \label{8.3} \\ &= \biggl( \frac{\delta}{4} + \frac12 + \frac{1}{4\delta}\biggr)
\biggl[ 1 + \frac{1}{(1+\delta)^2}\biggl( \frac{\Re w}{k}\biggr)^2 \biggr] \label{8.4} \end{align} where $\delta=\frac{\Ima w}{k} = \frac{d\rho}{d\rho_0}$. Taking logarithms and integrating immediately shows that $QS \leq SQS \leq QS + R$.
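The passage from \eqref{8.3} to \eqref{8.4} rests on the elementary identity \begin{equation*} \frac{\delta}{4} + \frac12 + \frac{1}{4\delta} = \frac{(1+\delta)^2}{4\delta}, \end{equation*} which allows the last term of \eqref{8.3} to be rewritten as \begin{equation*} \frac{k}{4\Ima w}\biggl(\frac{\Re w}{k}\biggr)^2 = \frac{1}{4\delta}\biggl(\frac{\Re w}{k}\biggr)^2 = \biggl(\frac{\delta}{4}+\frac12+\frac{1}{4\delta}\biggr) \frac{1}{(1+\delta)^2} \biggl(\frac{\Re w}{k}\biggr)^2. \end{equation*}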
To prove $R\leq 55 \, SQS$, we make use of the following notation: $$
\delta=\frac{d\rho}{d\rho_0},\quad \epsilon=( 1 + \delta )^{-2}, \quad f(k)=\frac{\Re w}{k},\
\quad\text{and}\quad A=\{E: \delta > 6 \}. $$ Notice that from the calculation above, $$ SQS = \int \log( \tfrac14\delta + \tfrac12 + \tfrac14\delta^{-1}) k^2\,dk
+ \int \log( 1 + \epsilon |f|^2 ) k^2\,dk. $$ Combining this with Lemma~\ref{Llog} gives \begin{align*}
R &=\int_0^\infty \log( 1 + |f|^2 ) k^2\,dk \\
&\leq \int_A \log( 1 + \epsilon |f|^2 ) k^2\,dk + \int_A \log( 1 + \epsilon^{-1} ) k^2\,dk + {}\\
&\qquad + 49 \int_{A^c} \log( 1 + \epsilon |f|^2 ) k^2\,dk \\
&\leq 49 \, SQS + 6 \int_A \log( \tfrac14\delta + \tfrac12 + \tfrac14\delta^{-1}) k^2\,dk \\
&\leq 55 \, SQS. \end{align*} The number $49$ appears because on $A^c$, $\delta\leq6$ which implies $\epsilon^{-1} \leq 49$. \end{proof}
\section{The Normalization Conditions} \label{s9}
In this section, we will prove that $$
\text{Normalization}\ \Rightarrow \ \text{\eqref{1.17c}}\ \Rightarrow \ \text{Local Solubility} $$ (cf. step~(iv) in the strategy of Section~\ref{s8}). This then implies that $d\rho$ is the spectral measure of a potential $V\in L^2_\text{\rm{loc}}$ by Theorem~\ref{T6.1}.
\begin{proposition}\label{P9.1} Let $d\nu$ be any real signed measure on $[0,\infty)$ and define $M_l\nu$ by \eqref{1.17f}. Then the following are equivalent: \begin{gather} M_l\nu \in L^2 (dk) \label{9.1A} \\ \abs{\nu} ([n,n+1])\in \ell^2 \label{9.1B} \\ \int \log \biggl[ 1+\biggl( \frac{M_l\nu}{k}\biggr)^2\biggr] k^2\, dk <\infty. \label{9.1C} \end{gather} \end{proposition}
\begin{proof} It is not difficult to see that \eqref{9.1A} $\Rightarrow$ \eqref{9.1B}: \begin{align}\label{9.4} \sum \bigl[ \abs{\nu} ([n,n+1]) \bigr]^2 &\leq \int_0^\infty \bigl[ \abs{\nu} ([k-1,k+1]) \bigr]^2 \, dk \leq 4 \int_0^\infty \bigl[ M_l\nu(k) \bigr]^2 \, dk. \end{align} To prove the converse, let us write $\nu_n =\abs{\nu} ([n,n+1])$. Then, for any $k\in[n,n+1]$, \begin{equation}\label{E:DMO}
M_l\nu(k) = \sup_{L\geq 1} \frac{|\nu|([k-L,k+L])}{2L}
\leq \sup_{m\geq 0} \frac{3}{2m+1} \sum_{j=n-m}^{n+m} \nu_j. \end{equation} Indeed, one may take $m$ to be the integer in $[L,L+1)$. As the discrete maximal operator in \eqref{E:DMO} is $\ell^2$ bounded, we may deduce \begin{align} \int_0^\infty \bigl[ M_l\nu(k) \bigr]^2 \, dk \leq \sum_{n} \Bigl[ \sup_{k\in[n,n+1]} M_l\nu(k) \Bigr]^2 &\leq C \sum_{n} \nu_n^2. \end{align} This proves \eqref{9.1B} $\Rightarrow$ \eqref{9.1A}.
As $\log(1+x^2)\leq x^2$, $$ \int \log[ 1 + k^{-2} M_l\nu(k)^2 ] \,k^2\, dk \leq \int [ M_l\nu(k) ]^2 \, dk, $$ which proves \eqref{9.1A} $\Rightarrow$ \eqref{9.1C}.
We will finish the proof by showing that \eqref{9.1C} implies \eqref{9.1B}. For each $k\in[n,n+1]$, it follows directly from the definition that $\tfrac12 \nu_n \leq M_l\nu(k)$. Thus \begin{equation}\label{E:9.7} \sum_n n^2 \log\biggl[ 1 + \frac{\nu_n^2}{4(n+1)^2} \biggr] \leq \int \log \biggl[ 1+\biggl( \frac{M_l\nu}{k}\biggr)^2\biggr] k^2\, dk, \end{equation} which shows that $\nu_n \geq n+1$ for only finitely many $n$. For the remaining values of $n$, one need only apply the estimate $\log(1+x)\geq \tfrac12 x$ for $x\in[0,1]$, which follows by comparing derivatives, to see that $\nu_n\in\ell^2$. \end{proof}
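To expand the final step: for the remaining $n$, that is, those with $\nu_n \leq n+1$, the quantity $\nu_n^2/4(n+1)^2$ lies in $[0,1]$, so \begin{equation*} \frac{n^2\,\nu_n^2}{8(n+1)^2} \leq n^2 \log\biggl[ 1+\frac{\nu_n^2}{4(n+1)^2}\biggr]. \end{equation*} As $n^2/(n+1)^2\geq \tfrac14$ for $n\geq 1$, summing over these $n$ and invoking \eqref{E:9.7} bounds $\sum \nu_n^2$ by a constant multiple of the integral in \eqref{9.1C}, up to the finitely many exceptional terms.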
\begin{theorem}\label{T9.2} If \[ \int \log\biggl[ 1+\biggl(\frac{M_s\nu}{k}\biggr)^2\, \biggr] k^2\, dk <\infty, \] then the equivalent conditions of Proposition~\ref{P9.1} hold. \end{theorem}
\begin{proof} The result follows by the reasoning used to prove \eqref{9.1C} $\Rightarrow$ \eqref{9.1B}: For all $k\in[n,n+1]$, $$ \abs{\nu} ([n,n+1]) \leq \abs{\nu} ([k-1,k+1]) \leq 2 M_s\nu(k). $$ Thus \eqref{E:9.7} holds with $M_s\nu$ in place of $M_l\nu$ and the argument given above may be continued from there. \end{proof}
\begin{theorem}\label{T9.3} If \eqref{9.1B} holds, then so does Local Solubility, that is, \eqref{1.19x}. In particular, by Theorem~\ref{T9.2}, \[ \text{Normalization} \Rightarrow \text{Local Solubility}. \] \end{theorem}
\begin{remark} This is step (iv) of the strategy in Section~\ref{s8}. \end{remark}
\begin{proof} By the definition \eqref{1.18}, \begin{equation}\label{9.21x}
F(q) =2\pi^{-1/2} \int_{p\geq 1} e^{-(p-q)^2} \, d\nu(p). \end{equation}
As $(n+x-q)^2 \geq (n-q)^2 - 2\abs{x}\abs{n-q}$ for $|x|<1$, \begin{equation} \label{9.21} \abs{F(q)}\leq 2\pi^{-1/2} \sum_{n=1}^\infty e^{-(n-q)^2} e^{2\abs{n-q}} \abs{\nu} ([n,n+1]). \end{equation} Thus by Young's inequality for sums, \eqref{9.1B} $\Rightarrow F\in L^2$. \end{proof}
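The Young's inequality step can be spelled out via a Schur-type bound. Write $g(t)=e^{-t^2+2\abs{t}}$ and $\nu_n = \abs{\nu}([n,n+1])$, so that \eqref{9.21} reads $\abs{F(q)}\lesssim \sum_n g(n-q)\nu_n$. By the Cauchy--Schwarz inequality with weight $g$, \begin{equation*} \abs{F(q)}^2 \lesssim \biggl(\sum_n g(n-q)\biggr) \sum_n g(n-q)\nu_n^2 \lesssim \sum_n g(n-q)\nu_n^2, \end{equation*} since $\sup_q \sum_n g(n-q)<\infty$ by the Gaussian decay of $g$. Integrating in $q$ then gives $\|F\|_{L^2}^2 \lesssim \|g\|_{L^1} \sum_n \nu_n^2 < \infty$.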
We conclude this section with a result we will need in Section~\ref{s11}. In the proof, we will use the following simple inequality: for $\delta\in[0,1]$, \begin{equation}\label{E:SI} \log [\tfrac14\, \delta+\tfrac12 + \tfrac14\, \delta^{-1}] \geq \tfrac14(\delta-1)^2. \end{equation} As equality holds when $\delta=1$, the result follows by differentiating: $$ \frac{\delta-1}{\delta(\delta+1)} \leq \frac12(\delta-1). $$
\begin{theorem}\label{T9.4} If {\rm{(}}Strong{\rm{)}} Quasi-Szeg\H{o} and Local Solubility hold, then $\nu$ obeys \eqref{9.1B}. In particular, by Theorem~\ref{T1.3}, this follows for $V\in L^2$. \end{theorem}
\begin{proof} Let us recall that the Quasi-Szeg\H{o} condition says \begin{equation}\label{E:QS9} \int_0^\infty \log \biggl[ \frac{1}{4}\, \frac{d\rho}{d\rho_0} + \f12 + \f14\, \frac{d\rho_0}{d\rho} \biggr] k^2\,dk < \infty. \end{equation} (By Theorem~\ref{T8.2}, this is also implied by the strong quasi-Szeg\H{o} condition.)
Let us decompose $d\nu=d\nu_+ - d\nu_-$ where $d\nu_\pm$ are both positive measures. The definition of $d\nu$, \eqref{1.17a}, shows that for $k>1$, $$ \frac{d\rho}{d\rho_0} = 1 + \frac{1}{k}\frac{d\nu}{dk}. $$ Moreover, $d\nu_-$ is absolutely continuous; in fact, \eqref{1.17b} shows $\frac{d\nu_-}{dk}\leq k$.
Let us restrict the integral \eqref{E:QS9} to the essential support of $d\nu_-$, that is, where $\frac{d\rho}{d\rho_0}\leq 1$. Using \eqref{E:SI}, we deduce that \begin{equation}
\int_0^\infty \biggl| \frac{d\nu_-}{dk} \biggr|^2 \,dk < \infty \end{equation} and hence that $\abs{\nu_-} ([n,n+1]) \in \ell^2$. To complete the proof, we need to deduce the same result for $\nu_+$.
The local solubility condition says $F\in L^2$ where $F$ is defined as in \eqref{9.21x}. The first sentence of Theorem~\ref{T9.3} says \[ F_-(q) = \int_{p\geq 1} e^{-(p-q)^2}\, d\nu_- (p) \ \in L^2 (dq) \] and so $F\in L^2$ implies \[ F_+(q)=\int_{p\geq 1} e^{-(p-q)^2} \, d\nu_+ (p) \ \in L^2 (dq). \] For $q\in[n,n+1]$, we have $F_+(q) \geq e^{-1} \nu_+ ([n,n+1])$ and thus may conclude $\nu_+ ([n,n+1])\in\ell^2$. \end{proof}
\section{Harmonic Analysis Preliminaries} \label{s10}
For harmonic functions in the half-plane, it is well known that the conjugate function belongs to $L^p$ ($0<p<\infty$) if and only if the same is true for the nontangential maximal function. The first direction appears already in the paper of Hardy and Littlewood that introduced the maximal function \cite[Theorem 27]{HL}. The other direction, which is much harder, is due to Burkholder, Gundy, and Silverstein \cite{BGS}. The purpose of this section is to present an analogous theorem with a peculiar replacement for $L^p$. Theorem~2 of \cite{BGS} covers this situation perfectly if one is willing to consider the maximal Hilbert transform; we are not. However, this does resolve one direction; for the other, we will use subharmonic functions in the manner of \cite{HL}.
We will use the following notation: $f\lesssim g$ means $f\leq C g$ for some absolute constant $C$, whereas $f\approx g$ means that $f\lesssim g$ and $g \lesssim f$.
\begin{proposition} Let $d\sigma$ be a compactly supported positive measure on $\Reals$. Then \begin{equation}\label{BGS1}
\int \log \bigl[1 + |H\sigma|^2 \bigr] \,dx \lesssim
\int \log \bigl[1 + |M\sigma|^2 \bigr] \,dx. \end{equation} \end{proposition}
\begin{proof} This is a special case of \cite[Theorem 2]{BGS}. It is also amenable to the good-$\lambda$ approach discussed in textbooks: \cite[\S V.4]{Stein} or \cite[\S XIII]{Tor}. \end{proof}
As noted earlier, Burkholder, Gundy, and Silverstein do not provide the converse inequality; indeed, as they note, in the generality they treat, the result is false without switching to the maximal Hilbert transform. Nevertheless, the function $x\mapsto\log[1+x^2]$ grows sufficiently quickly that the result is true. We divide the proof into two propositions.
\begin{proposition}\label{P7.3} There is a $\lambda_0$ so that for any finite positive measure $d\sigma$ on $\Reals$, \begin{align}\label{Ineq2}
\int_{\{M \sigma > \lambda_0 \}} \log\bigl[ 1 + |M\sigma|^2 \bigr]\,dx \lesssim \int \log \bigl[1 + \left(H \sigma\right)^2 + \left(\tfrac{d \sigma}{dx}\right)^2 \bigr] \,dx. \end{align}
In particular, $|\{M \sigma > 2\lambda_0 \}|\lesssim \mbox{\rm RHS\eqref{Ineq2}}$. \end{proposition}
\begin{proof} Let $u(z)+iv(z)=\int d\sigma(x)/(x-z)$ denote the Cauchy integral of $d\sigma$. Then $$
F(z) = \log[1+u(z)+iv(z)] $$ is analytic---$u\geq 0$ because it is the Poisson integral of a positive measure. In particular,
$|F|^{1/2}$ is subharmonic. Now as $|F(z)| \geq \log|1+u(z)|$, \begin{align} \log\bigl[ 1+[M\sigma](x)\bigr] &\lesssim \sup_{y>0} \log\bigl[ 1+ u(x+iy) \bigr] \\
&\leq \sup_{y>0} |F(x+iy)| \\
&\lesssim \bigl\{ [ M |F|^{1/2} ](x) \bigr\}^2. \end{align}
Elementary calculations show $\abs{\Re F}\leq \log(1+u+|v|)$ and $\abs{\Im F}\leq\frac\pi2$; therefore, $$
\log\bigl[ 1+ M\sigma \bigr] \lesssim \left\{ 1 + M \sqrt{\log[1+u+|v|]} \right\}^2
\lesssim 1 + \left\{ M \sqrt{\log[1+u+|v|]} \right\}^2. $$ From this, one may deduce that for $\lambda_1$ sufficiently large, \begin{align}\label{E:star}
\sqrt{\log\bigl[ 1+ M\sigma \bigr]} \lesssim M \sqrt{\log[1+u+|v|]} \end{align} on the set where $\log[ 1+M\sigma]\geq \lambda_1$.
Interpolating between the $L^\infty$ and $L^2$ bounds on $M$ shows that $$
\int_{Mf>\lambda} |M f|^2 \,dx \lesssim \int_{|f|>\lambda/2} |f|^2 \, dx. $$ Combining this with \eqref{E:star}, we see that for $\lambda_0\geq e^{\lambda_1}$ and $\epsilon$ sufficiently small, $$
\int_{\{M \sigma > \lambda_0 \}} \log\bigl[ 1+ M\sigma \bigr] \,dx
\lesssim \int_{\{ |u+iv|>\epsilon \}} \log[1+u+|v|] \, dx. $$ To obtain \eqref{Ineq2}, we need merely note that $\log(1+x)\approx\log(1+x^2)$ on any interval $[a,\infty)$ with $a>0$. \end{proof}
\begin{proposition}\label{HA.4} For any finite positive measure $d\sigma$ on $\Reals$, \begin{equation} \int \log \bigl[1 + \left(M\sigma\right)^2 \bigr] \,dx \lesssim \int \log \bigl[1 + \left(H \sigma\right)^2 + \left(\tfrac{d \sigma}{dx}\right)^2 \bigr] \,dx. \end{equation} \end{proposition}
\begin{proof} By Proposition~\ref{P7.3}, it suffices to prove \begin{equation}\label{want} \int_{\{M \sigma \leq \lambda_0 \}} \abs{M\sigma}^2 \,dx \lesssim \int \log \bigl[1 + (H \sigma)^2 + (\tfrac{d \sigma}{dx})^2 \bigr] \,dx. \end{equation}
Let $\Omega=\{M \sigma > 4\lambda_0 \}$, $d\sigma_1=\chi_\Omega d\sigma$, and $d\sigma_2=\chi_{\Omega^c}d\sigma$. We will prove \eqref{want} by writing $M\sigma\leq M\sigma_1 + M\sigma_2$.
It is a well-known property of the maximal function that $$
\sigma(\{M \sigma > 4\lambda_0 \})\lesssim \lambda_0|\{M \sigma > 2\lambda_0 \}|. $$
Combining this with Proposition~\ref{P7.3} shows that $\|\sigma_1\|=\sigma(\Omega)\lesssim \text{RHS\eqref{want}}$. Consequently, by the weak-type $L^1$ bound on the maximal operator, $$ \int_{\{M \sigma \leq \lambda_0 \}} \abs{M\sigma_1}^2 \,dx \lesssim \int_0^{\lambda_0}
\frac{\|\sigma_1\|}{\lambda} 2\lambda\,d\lambda \lesssim \text{RHS\eqref{want}}. $$
Now we turn to bounding $M\sigma_2$. On $\Omega^c$, we know that $d\sigma$ must be absolutely continuous and its Radon-Nikodym derivative is bounded by $4\lambda_0$. Therefore, $L^2$ boundedness of the maximal operator implies $$
\int \abs{M\sigma_2}^2 \,dx
\lesssim \int_{\{M\sigma\leq 4\lambda_0\}} |\tfrac{d \sigma}{dx}|^2 \,dx
\lesssim \int \log \bigl[1 + \left(\tfrac{d \sigma}{dx}\right)^2 \bigr] \,dx, $$ which completes the proof. \end{proof}
Putting the previous propositions together, we obtain the following
\begin{theorem}\label{Tloc} If $\sigma$ is a positive measure of compact support, then \begin{equation}\label{BGS2}
\int \log \Bigl[1 + |H\sigma|^2 + (\tfrac{d \sigma}{dx})^2 \Bigr] \,dx \approx
\int \log \Bigl[1 + |M\sigma|^2 \Bigr] \,dx. \end{equation} \end{theorem}
\section{Taming $\Real m$} \label{s11}
The purpose of this section is to prove Corollary~\ref{C11.3} below and so complete the proof of Theorem~\ref{T1.2} as laid out in Section~\ref{s8}.
Let $H_s$ denote the short-range Hilbert transform: $H_s \sigma=K*\sigma$ where $$
K(x) = \begin{cases} 0, & |x|>1 \\ \tfrac1\pi[x^{-1} - x], & |x|<1 \end{cases} $$ and let $H_l=H-H_s$ denote the long-range Hilbert transform: $H_l\sigma=K*\sigma$ with $$
K(x) = \begin{cases} \tfrac1\pi x^{-1}, & |x|>1 \\ \tfrac1\pi x, & |x|<1. \end{cases} $$ Note that both $H_s$ and $H_l$ are Calder\'on--Zygmund operators and so bounded on $L^p(\Reals)$ for $1<p<\infty$. As in the Introduction, we define short- and long-range maximal operators: $$
[M_s \sigma](x) = \sup_{L\leq1} \frac{ |\sigma|\bigl([x-L,x+L]\bigr) }{2L}, $$ and for $M_l$, the supremum is taken over $L\geq 1$. Naturally, both truncated maximal operators are $L^p$-bounded for $1<p\leq\infty$.
We will use the notation $$
\|\mu\|_{\ell^2(M)}^2 = \sum_n \Bigl[ |\mu|\bigl( [n,n+1] \bigr) \Bigr]^2 $$
as introduced in \eqref{1.17c}. Obviously, $\|\mu\|_{\ell^2(M)}^2 \leq \|\mu\|^2$.
\begin{lemma}\label{dumb} Let $\Phi(k) = (1+k^2)^{-1}$. For each complex measure $\mu\in\ell^2(M)$, \begin{align*}
\int \bigl[\Phi*|d\mu|\bigr]^2 \,dk \lesssim \|\mu\|_{\ell^2(M)}^2, \
\int |M_l\mu|^2 \,dk \lesssim \|\mu\|_{\ell^2(M)}^2, \
\int |H_l\mu|^2 \,dk \lesssim \|\mu\|_{\ell^2(M)}^2. \end{align*} \end{lemma}
\begin{proof}
All three inequalities follow by replacing $|d\mu|$ by its average on each of the intervals $[n,n+1]$. This operation changes $\Phi*|d\mu|$
and $M_l\mu$ by no more than a factor of two. For $H_l$, it introduces an error which can be bounded by $\Phi*|d\mu|$. We then use the $L^2$ boundedness of the appropriate operator. \end{proof}
\begin{theorem}\label{T:IT}
If $\mu$ is a positive measure on $\Reals$ with $\|\mu\|_{\ell^2(M)}^2<\infty$, then \begin{equation}\label{BGSa}
\int \log \Bigl[1 + |H\mu|^2k^{-2} \Bigr] \,(1+k^2)\,dk < \infty \end{equation} if and only if \begin{equation}\label{BGSb}
\int \log \Bigl[1 + |M_s\mu|^2 k^{-2} \Bigr] \,(1+k^2)\,dk < \infty. \end{equation} \end{theorem}
\begin{proof} As neither integral can diverge on any compact set, we can restrict our attention to $k>1$.
We begin by proving that \eqref{BGSb} implies \eqref{BGSa}. Given a compactly supported positive measure $d\sigma$, Theorem~\ref{Tloc} and Lemma~\ref{dumb} show that \begin{align*}
\int_n^{n+1} \log \Bigl[1 + |H_s\sigma|^2 \Bigr] \,dk
&\lesssim \int_n^{n+1} \log \Bigl[1 + |H\sigma|^2 \Bigr] + |H_l\sigma|^2 \,dk \\
&\lesssim \|\sigma\|^2_{\ell^2(M)} + \int_n^{n+1} \log \Bigl[1 + |H\sigma|^2 \Bigr] \,dk \\
&\lesssim \|\sigma\|^2_{\ell^2(M)} + \int \log \Bigl[1 + |M\sigma|^2 \Bigr] \,dk \\
&\lesssim \|\sigma\|^2_{\ell^2(M)} + \int \log \Bigl[1 + |M_s\sigma|^2 \Bigr] \,dk + \int |M_l\sigma|^2 \,dk \\
&\lesssim 2\|\sigma\|^2_{\ell^2(M)} + \int \log \Bigl[1 + |M_s\sigma|^2 \Bigr] \,dk. \end{align*} Let us choose $\sigma=(1+n^2)^{-1/2}d\mu_n$ where $d\mu_n$ is the restriction of $d\mu$ to the interval $[n-1,n+2]$. Combining the above with Lemma~\ref{dumb} gives \begin{align*}
\sum (1+n^2) & \int_n^{n+1} \log \Bigl[1 + \tfrac1{n^2+1}|H\mu|^2 \Bigr] \,dk \\
&\lesssim \|\mu\|_{\ell^2(M)}^2 + \sum (1+n^2) \int_n^{n+1} \log \Bigl[1 + \tfrac1{n^2+1}|H_s\mu|^2 \Bigr] \,dk \\
&\lesssim \|\mu\|_{\ell^2(M)}^2 + \sum (1+n^2) \int_{n-3}^{n+4} \log \Bigl[1 + \tfrac1{n^2+1}|M_s\mu|^2 \Bigr] \,dk \\
&\lesssim \|\mu\|_{\ell^2(M)}^2 + \int \log \Bigl[1 + k^{-2} |M_s\mu|^2 \Bigr] (1+k^2)\,dk. \end{align*}
The proof that \eqref{BGSa} implies \eqref{BGSb} is a little more involved because the Hilbert transform is not positivity preserving.
Let $\phi$ be a smooth bump which is supported on $[-2,3]$ and is equal to $1$ on $[-1,2]$. We will write $\phi_n(x)$ for $\phi(x-n)$. Elementary calculations show that \begin{equation}\label{ssss}
\Bigl| [H_s(\phi d\sigma)](x) - \phi(x)[H_s \sigma](x) \Bigr| \lesssim \|\sigma \| \Phi(x) \end{equation} where $\Phi(x)=(1+x^2)^{-1}$. Using Theorem~\ref{Tloc}, Lemma~\ref{dumb}, and then \eqref{ssss}, \begin{align*}
\int_n^{n+1} \log \Bigl[1 + |M_s\sigma|^2 \Bigr] \,dk
&\leq \int_n^{n+1} \log \Bigl[1 + |M(\phi_nd\sigma)|^2 \Bigr] \,dk \\
&\lesssim \int \log \Bigl[1 + |H(\phi_nd\sigma)|^2 + |\phi_n\tfrac{d\sigma}{dk}|^2 \Bigr] \,dk \\
&\lesssim \int \log \Bigl[1 + |H_s(\phi_nd\sigma)|^2 \Bigr] \,dk + \|\sigma\|^2\\
&\lesssim \int \log \Bigl[1 + \phi_n^2 |H_s\sigma|^2 \Bigr] \,dk + \|\sigma\|^2. \end{align*} By choosing $\sigma=(1+n^2)^{-1/2}d\mu_n$ where $d\mu_n$ is the restriction of $d\mu$ to the interval $[n-4,n+5]$, the proof may be completed in much the same manner as was used to prove the opposite implication. \end{proof}
It is now easy to complete the outline from Section~\ref{s8}.
\begin{corollary}\label{C11.3} In the nomenclature of Theorems~\ref{T1.2}, \ref{T1.3}, and~\ref{T8.2}, \begin{align} \label{E:C1} \text{Normalization} &\Rightarrow R<\infty \\ \label{E:C2} \text{Strong Quasi-Szeg\H{o}} \ + \ \text{Local Solubility} &\Rightarrow \text{Normalization}. \end{align} \end{corollary}
\begin{proof} We begin with \eqref{E:C1}. As the $m$-function associated to the free operator is purely imaginary on the spectrum, we have that for all $k>0$, \begin{align} \label{E:R1} \Re w(k) &= \int_{(-\infty,1]} \frac{d\rho(E)-d\rho_0(E)}{E-k^2} + \frac2\pi \int \frac{\xi d\nu(\xi)}{\xi^2-k^2} \\ &= f(k) + \frac1\pi \int \frac{d\nu(\xi)}{\xi+k} + \frac1\pi \int \frac{d\nu(\xi)}{\xi-k} \\ &= f(k) + [H\mu](k) \end{align} where $f(k)$ is defined to be the first term on the RHS of \eqref{E:R1} and $d\mu$ is defined by $$ \int g(k) \,d\mu(k) = \int g(k) \,d\nu(k) + \int g(-k) \,d\nu(k). $$
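The middle equality above is the partial fraction expansion \begin{equation*} \frac{2\xi}{\xi^2-k^2} = \frac{1}{\xi+k} + \frac{1}{\xi-k}, \end{equation*} applied under the integral sign; the last equality then packages the two resulting integrals as the Hilbert transform of the symmetrized measure $d\mu$ defined by the displayed formula.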
By Theorem~\ref{T9.2}, Normalization implies $\nu\in\ell^2(M)$ and hence $\mu\in\ell^2(M)$; thus we may apply Theorem~\ref{T:IT} to see that \eqref{BGSa} holds. As $$ \log[1+(x+y)^2] \lesssim x^2 + \log[1+y^2] $$
and $|f(k)|\lesssim(k-1)^{-1}$ for $k>1$, we see that this is sufficient to deduce $R<\infty$.
We now turn to \eqref{E:C2}. By Theorem~\ref{T8.2}, we know that $R<\infty$ and so by the calculation above, \eqref{BGSa} holds. From the proof of Theorem~\ref{T9.4} we are guaranteed that $d\mu$, defined as above, belongs to $\ell^2(M)$. Thus we may apply Theorem~\ref{T:IT} to deduce that the normalization condition holds. \end{proof}
\end{document}
\begin{document}
\title[decay/surge]{On decay-surge population models} \author{Branda Goncalves, Thierry Huillet and Eva L\"ocherbach} \address{B. Goncalves and T. Huillet: Laboratoire de Physique Th\'{e}orique et Mod\'{e}lisation, CY Cergy Paris Universit\'{e}, CNRS UMR-8089, 2 avenue Adolphe-Chauvin, 95302 Cergy-Pontoise, France \\ E-mails: branda.goncalves@outlook.fr, Thierry.Huillet@cyu.fr \\ E. L\"ocherbach: SAMM, Statistique, Analyse et Mod\'elisation Multidisciplinaire, Universit\'e Paris 1 Panth\'eon-Sorbonne, EA 4543 et FR FP2M 2036 CNRS, France.\\ E-mail: eva.locherbach@univ-paris1.fr} \maketitle \begin{abstract} We consider continuous space-time decay-surge population models, which are semi-stochastic processes for which deterministically declining populations, bound to fade away, are reinvigorated at random times by bursts or surges of random sizes. In a particular separable framework (in a sense made precise below) we provide explicit formulae for the scale (or harmonic) function and the speed measure of the process. The behavior of the scale function at infinity allows us to formulate conditions under which such processes either explode, are transient at infinity, or are Harris recurrent. A description of the structures of both the discrete-time embedded chain and the extreme record chain of such continuous-time processes is supplied.
\textbf{Keywords}: declining population models, surge processes, PDMP, Hawkes processes, scale function, Harris recurrence.
\textbf{AMS Classification 2010:} 60J75, 92D25. \end{abstract}
\section{Introduction} This paper deals with decay-surge population models where a deterministically declining evolution following some nonlinear flow is interrupted by bursts of random sizes occurring at random times. Decay-surge models are natural models of many physical and biological phenomena, including the evolution of aging and declining populations which are reinvigorated by immigration, the height of the membrane potential of a neuron decreasing in between successive spikes of its presynaptic neurons due to leakage effects and jumping upwards after the action potential emission of its presynaptic partners, and work processes in single-server queueing systems, as in Cohen (1982). Our preferred physical image will be that of aging populations subject to immigration.
Decay-surge models have been extensively studied in the literature, see among others Eliazar and Klafter (2006), and Harrison and Resnick (1976) and (1978). Most studies concentrate however on non-Markovian models such as shot noise or Hawkes processes, where superpositions of overlapping decaying populations are considered, see Br\'emaud and Massouli\'e (1996), Eliazar and Klafter (2007) and (2009), Huillet (2021), Kella and Stadje (2001), Kella (2009) and Resnick and Tweedie (1982).
Inspired by storage processes for dams, the papers of Resnick and Tweedie (1982), Kella (2009), \c{C}inlar and Pinsky (1971), Asmussen and Kella (1996), Boxma et al (2011), Boxma et al (2006), Harrison and Resnick (1976), Brockwell et al (1982) are mostly concerned with growth-collapse models when growth is from \emph{stochastic} additive inputs such as compound Poisson or L\'{e}vy processes or renewal processes. Here, the water level of a dam decreasing deterministically according to some fixed water release program is subject to sudden uprises due to rain or flood. Growth-collapse models are also very relevant in the Burridge-Knopoff stress-release model of earthquakes and continental drift, as in Carlson et al (1994), and in stick-slip models of interfacial friction, as in Richetti et al (2001). As we shall see, growth-collapse models are in some sense `dual' to decay-surge models.
In contrast with these last papers, and as in the works of Eliazar and Klafter (2006), (2007) and (2009), we concentrate in the present work on a \emph{deterministic} and continuous decay motion in between successive surges, described by a nonlinear flow, determining the decay rate of the population and given by $$ x_t (x) = x - \int_0^t \alpha ( x_s (x)) ds ,\; t \geq 0, \; x_0 ( x) = x \geq 0.$$ In our process, upward jumps (surges) occur with state dependent rate $ \beta ( x), $ when the current state of the process is $x .$ When a jump occurs, the present size of the population $x$ is replaced by a new random value $ Y(x) > x, $ distributed according to some transition kernel $ K ( x, dy ) , y \geq x .$
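As a purely illustrative example (the specific rates below are our choice, not imposed by the model), the linear decay rate $\alpha(x)=ax$, $a>0$, yields the explicitly solvable flow \begin{equation*} x_t(x) = x e^{-at}, \qquad t\geq 0, \end{equation*} an exponentially declining population, while the nonlinear choice $\alpha(x)=ax^2$ gives the algebraically decaying flow $x_t(x)=x/(1+axt)$.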
This leads to the study of a quite general family of continuous-time piecewise deterministic Markov processes (PDMPs) $X_t(x) $ representing the size of the population at time $t$ when started from the initial value $x \geq 0 .$ See Davis (1984). The infinitesimal generator of this process is given for smooth test functions by $$ {\mathcal G} u (x) = - \alpha (x) u'(x) + \beta ( x) \int_{ ( x, \infty) } K ( x, dy ) [ u(y) - u(x) ] , \; x \geq 0, $$ under suitable conditions on the parameters $ \alpha, \beta $ and $ K(x, dy ) $ of the process. In the sequel we focus on the study of separable kernels $K (x, dy ) $ where for each $ 0 \le x \le y , $ \begin{equation}\label{eq:sep} \int_{(y, \infty )} K( x,dz ) =\frac{k\left( y\right) }{k\left( x\right) } \text{,} \end{equation} for some positive non-increasing function $k: [0, \infty ) \to [ 0, \infty ] $ which is continuous on $ (0, \infty).$
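For concreteness (this particular choice of $k$ is ours, used only as an illustration), taking $k(y)=e^{-y}$ in \eqref{eq:sep} gives \begin{equation*} \mathbf{P}(Y(x) > y) = \frac{k(y)}{k(x)} = e^{-(y-x)}, \qquad y \geq x, \end{equation*} so that the overshoot $Y(x)-x$ at a jump from state $x$ is a standard exponential random variable, whatever the value of $x$.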
The present paper proposes a precise characterization of the probabilistic properties of the above process in this separable frame. Supposing that $ \alpha (x) $ and $ \beta ( x) $ are continuous and positive on $ (0, \infty ) , $ the main ingredient of our study is the function \begin{equation}\label{eq:Gamma} \Gamma (x) = \int_1^x \gamma ( y) dy, \mbox{ where } \gamma (y) = \beta (y) / \alpha (y) ,\; y, x \geq 0 . \end{equation} Supposing that $\Gamma (\cdot ) $ is a {\bf space transform,} that is, $ \Gamma ( 0 ) = - \infty $ and $ \Gamma ( \infty ) = \infty, $ we show that \begin{enumerate} \item[1.] Starting from some strictly positive initial value $ x > 0, $ the process does not get extinct (does not hit $0$) in finite time almost surely (Proposition \ref{prop:extinction}). In particular, imposing additionally $ k (0 ) < \infty ,$ we can study the process in restriction to the state space $ (0, \infty). $ This is what we do in the sequel. \item[2.] The function \begin{equation}
s(x) = \int_{1}^{x}\gamma (y)e^{-\Gamma (y)}/k(y)dy, \; x \geq 0 , \end{equation} is a {\bf scale function} of the process, that is, solves $ {\mathcal G} s (x) = 0$ (Proposition \ref{prop:scale}). It is always strictly increasing and satisfies $ s (0) = -\infty $ under our assumptions. But it might not be a space transform, that is, $ s( \infty ) $ can take finite values. \item[3.] This scale function plays a key role in the understanding of the exit probabilities of the process and yields conditions under which the process either explodes in finite time or is transient at infinity. More precisely, if $ s ( \infty ) < \infty ,$ we have the following explicit formula for the exit probabilities. For any $ 0 < a < x < b , $ \begin{equation}\label{eq:exit1}
{\mathbf{P}} ( X_t \mbox{ enters $(0, a] $ before entering $ [b, \infty ) $ } |X_0 = x ) = \frac{s(b) - s(x) }{s(b) - s(a) }. \end{equation} (Proposition \ref{prop:exit}).
Taking $ b \to \infty $ in the above formula, we deduce that $ s( \infty ) < \infty $ implies either that the process explodes in finite time (possesses an infinite number of jumps within some finite time interval) or that it is transient at infinity.
Due to the asymmetric dynamics of the process (continuous motion downwards and jumps upwards, so that entering the interval $ [b, \infty ) $ starting from $x < b $ always happens through a jump), \eqref{eq:exit1} does not hold if $ s (\infty ) = \infty .$ \item[4.] Imposing additionally that $ \beta ( 0 ) > 0 ,$ {\bf Harris recurrence (positive or null)} of the process is equivalent to the fact that $s$ is a space transform, that is, $ s( \infty ) = \infty $ (Theorem \ref{theo:harris}).
In this case, up to constant multiples, the unique invariant measure of the process possesses a Lebesgue density (speed density) given by $$ \pi ( x) = \frac{ k (x) e^{ \Gamma ( x) } }{ \alpha ( x) }, x > 0 .$$ More precisely, we show how the scale function can be used to obtain Foster-Lyapunov criteria in the spirit of Meyn and Tweedie (1993), implying the non-explosion of the process together with its recurrence under additional irreducibility properties. We also supply conditions, making use of the speed measure, under which first moments of hitting times are finite. \end{enumerate}
{\bf Organization of the paper.} In Section 2, we introduce our model and state some first results. Most importantly, we establish a simple relationship between decay-surge models and growth-collapse models as studied in Goncalves, Huillet and L\"ocherbach (2021), which allows us to obtain explicit representations of the law of the first jump time and of the associated speed measure without any further study. Section 3 is devoted to the proof of the existence of the scale function (Proposition \ref{prop:scale}), together with the study of first moments of hitting times, which are shown to be finite if the speed density is integrable at $ + \infty $ (Proposition \ref{prop:hitting}). Section 4 then collects our main results. If the scale function is a space transform, it can be naturally transformed into a Lyapunov function in the sense of Meyn and Tweedie (1993), such that the process does not explode in finite time and comes back to certain compact sets infinitely often (Proposition \ref{prop:MT}). Using the regularity produced by the jump heights according to the absolutely continuous transition kernel $ K( x, dy ), $ Theorem \ref{theo:petite} then establishes a local Doeblin lower bound on the transition operator of the process, a key ingredient in the proof of Harris recurrence, which is our main result, Theorem \ref{theo:harris}. Several examples are supplied, including one related to linear Hawkes processes and to shot-noise processes. In the last part of the work, we focus on the chain of the process embedded at the jump times, which, in addition to its fundamental relevance, is easily amenable to simulations. Following Adke (1993), we also draw attention to the structure of the extreme record chain of $X_t(x) ,$ allowing us in particular to derive the distribution of the first upper record time and overshoot value, as a level crossing time and value.
This study is motivated by the understanding of the time of the first crossing of some high population level and the amount of the corresponding overshoot, as, besides extinction, populations can face overcrowding.
\section{The model, some first results and a useful duality property} We study population decay models with random surges described by a Piecewise Deterministic Markov Process (PDMP) $ X_t , t \geq 0,$ starting from some initial value $ x \geq 0$ at time $0$ and taking values in $ [0, \infty ) .$ The main ingredients of our model are \begin{enumerate} \item[1.] The drift function $\alpha (x) .$ We suppose that $ \alpha : [ 0,\infty ) \to [0, \infty ) $ is continuous, with $ \alpha ( x) > 0 $ for all $ x > 0. $ In between successive jumps, the process follows the decaying dynamics \begin{equation}\label{eq:detdyn} \overset{.}{x}_{t} (x) =-\alpha \left( x_{t} (x) \right) , x_{0} (x) =x \geq 0. \end{equation} \item[2.] The jump rate function $ \beta ( x) .$ We suppose that $ \beta : {( 0 , \infty)} \to [0, \infty ) $ is continuous and $ \beta ( x) > 0 $ for all $ x > 0.$ \item[3.] The jump kernel $K (x, dy ). $ This is a transition kernel from $ [0, \infty) $ to $[0, \infty ) $ such that for any $ x > 0, $ $ K ( x, [x, \infty) ) = 1 .$ Writing $ K(x, y ) = \int_{(y, \infty) } K (x, dz), $ we suppose that $K(x, y ) $ is jointly continuous in $x$ and $y.$
\end{enumerate}
In between successive jumps, the population size follows the deterministic flow $ x_t (x) $ given in \eqref{eq:detdyn}. For any $0\leq a<x$, the integral \begin{equation}\label{eq:timetoa} t_{a}\left( x\right) :=\int_{a}^{x}\frac{dy}{\alpha \left( y\right) } \end{equation} is the time for the flow to hit $a$ starting from $x.$ In particular, starting from $ x > 0, $ the flow reaches $0$ after some time $t_0\left( x\right) =\int_{0}^{x}\frac{dy}{\alpha \left( y\right) }\leq \infty $. We refer to \cite{DSGHL} for a variety of examples of such decaying flows that can hit zero in finite time or not.
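The hitting time \eqref{eq:timetoa} is easy to evaluate numerically. The following sketch approximates $t_a(x)$ by a midpoint rule; the drift $\alpha(y)=\sqrt{y}$ is an illustrative choice (not imposed by the model), for which the integral has the closed form $2(\sqrt{x}-\sqrt{a})$.

```python
import math

# Midpoint-rule approximation of t_a(x) = \int_a^x dy / alpha(y),
# the time the deterministic flow needs to decay from x to a.
# The drift alpha(y) = sqrt(y) is an illustrative assumption.

def t_a(a, x, alpha, n=100_000):
    h = (x - a) / n
    return h * sum(1.0 / alpha(a + (i + 0.5) * h) for i in range(n))

alpha = lambda y: math.sqrt(y)
a, x = 0.25, 4.0
approx = t_a(a, x, alpha)
exact = 2.0 * (math.sqrt(x) - math.sqrt(a))  # closed form for this drift
```

For this drift $t_0(x)=2\sqrt{x}<\infty$, so the flow reaches $0$ in finite time, matching the discussion above.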
Jumps occur at state-dependent rate $\beta \left( x\right) .$ At the jump times, the size of the population grows by a random amount $\Delta \left( X_{t-}\right) >0$ from its current size $X_{t-}.$ Writing $Y\left( X_{t-}\right) :=X_{t-}+\Delta \left( X_{t-}\right) $ for the position of the process right after its jump, $ Y ( X_{t-}) $ is distributed according to $ K ( X_{t- } , d y ).$
Up to the next jump time, $X_{t}$ then decays again, following the deterministic dynamics \eqref{eq:detdyn}, started at the new value $Y\left( X_{t-}\right) :=X_{t-}+\Delta \left( X_{t-}\right) $.
We are thus led to consider the PDMP $ X_t$
with state-space $\left[ 0,\infty \right) $ solving \begin{equation} dX_{t}=-\alpha \left( X_{t}\right) dt+\Delta \left( X_{t-}\right) \int_{0}^{\infty }\mathbf{1}_{\left\{ r\leq \beta \left( X_{t-}\left( x\right) \right) \right\} }M\left( dt,dr\right) ,\text{ }X_{0}=x, \label{C0} \end{equation} where $M\left( dt,dr\right) $ is a Poisson measure on $\left[ 0,\infty \right) \times \left[ 0,\infty \right) $. Taking $dt\ll 1$ as the system's time scale, this dynamics means that we have the transitions \begin{eqnarray*} X_{t-} &=&x\rightarrow x-\alpha \left( x\right) dt\text{ with probability } 1-\beta \left( x\right) dt \\ X_{t-} &=&x\rightarrow x+\Delta \left( x\right) \text{ with probability } \beta \left( x\right) dt. \end{eqnarray*}
It is a nonlinear version of the Langevin equation with jumps.
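The two displayed transitions translate directly into a small Euler-type simulation. The sketch below uses the illustrative choices $\alpha(x)=x$, $\beta(x)=1$ and jumps $Y(x)=x+\mathrm{Exp}(1)$ (corresponding to the separable kernel $k(y)=e^{-y}$ introduced below); none of these choices is imposed by the model.

```python
import random

# Euler-type scheme for the decay-surge PDMP: in each step of length dt,
# jump upwards with probability beta(x)*dt, otherwise decay by alpha(x)*dt.
# Illustrative assumptions: alpha(x) = x, beta(x) = 1, jump size Exp(1)
# (i.e. the separable kernel k(y) = exp(-y)).

random.seed(0)

def simulate(x, T=10.0, dt=1e-3):
    path = [x]
    for _ in range(int(T / dt)):
        if random.random() < 1.0 * dt:       # jump with probability beta(x)*dt
            x = x + random.expovariate(1.0)  # after-jump position Y(x) = x + Exp(1)
        else:
            x = x - x * dt                   # deterministic decay, alpha(x) = x
        path.append(x)
    return path

path = simulate(1.0)
```

For $k(y)=e^{-y}$ one indeed has $\mathbf{P}(Y(x)>y)=k(y)/k(x)=e^{-(y-x)}$ for $y>x$, so the jump increment is exactly $\mathrm{Exp}(1)$.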
\subsection{Discussion of the jump kernel} We have \begin{equation*} \mathbf{P}\left( Y\left( x\right) >y\mid X_{t-}=x\right) =K\left( x,y\right) = \int_{ ( y, \infty ) } K (x, dz). \end{equation*} Clearly $K\left( x,y\right) $ is a non-increasing function of $y$ for all $ y\geq x,$ satisfying $K\left( x,y\right) =1$ for all $y<x.$ By continuity, this implies that $ K\left( x,x\right) =1,$ such that the law of $\Delta \left( x\right) $ has no atom at $0.$
In the sequel we concentrate on the separable case \begin{equation}\label{eq:sep} K\left( x,y\right) {=}\frac{k\left( y\right) }{k\left( x\right) } \text{,} \end{equation} where $k: [0, \infty ) \to [ 0, \infty ] $ is any positive non-increasing function. In what follows we suppose that $k$ is continuous and finite on $ (0, \infty). $
Fix $z>0$ and assume $y=x+z.$ Then $$ {\bf P}\left( Y\left( x\right) >y\right) =\frac{k\left( x+z\right) }{k\left( x\right) } . $$ Depending on $k\left( x\right) , $ this probability can be a decreasing or an increasing function of $x$ for each $z.$
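Samples from the separable kernel can be drawn by inversion: with $U$ uniform on $(0,1)$, the solution $Y$ of $k(Y)=U\,k(x)$ has tail $\mathbf{P}(Y>y)=k(y)/k(x)$. The sketch below uses the Pareto-type choice $k(x)=(1+x)^{-c}$, for which the inversion is explicit; all parameter values are illustrative.

```python
import random

# Inversion sampling from the separable kernel K(x,y) = k(y)/k(x):
# draw U ~ Unif(0,1) and solve k(Y) = U * k(x) for Y >= x.
# Illustrative choice: k(z) = (1+z)**(-c), which inverts in closed form.

random.seed(1)
c, x = 2.0, 1.0
k = lambda z: (1.0 + z) ** (-c)

def sample_Y(x):
    u = random.random()
    return (1.0 + x) * u ** (-1.0 / c) - 1.0  # solves k(Y) = u * k(x)

samples = [sample_Y(x) for _ in range(200_000)]
y0 = 3.0
emp_tail = sum(s > y0 for s in samples) / len(samples)
theo_tail = k(y0) / k(x)  # = (1+y0)**(-c) / (1+x)**(-c) = 0.25 here
```

The empirical tail of the samples at any level $y_0>x$ should match $k(y_0)/k(x)$, which the test below confirms up to Monte Carlo error.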
\begin{example} Suppose $k\left( x\right) =e^{-x^{\alpha }}$, $\alpha >0$, $x\geq 0$ (the survival function of a Weibull distribution).
- If $\alpha <1,$ then $\partial _{x}K (x, x + z ) >0,$ so that the larger $ x,$ the larger ${\bf P}\left( Y\left( x\right) >x+z\right) .$ In other words, if the population stays high, the probability of a large number of immigrants will be enhanced. There is a positive feedback of $x$ on $\Delta \left( x\right) , $ reflecting a herd effect.
- If $\alpha =1,$ then $\partial _{x}K(x, x+z) =0$ and there is no feedback of $x$ on the number of immigrants, which is then exponentially distributed.
- If $\alpha >1,$ then $\partial _{x}K (x, x+z) <0$ and the larger $x,$ the smaller the probability ${\bf P}\left( Y\left( x\right) >x+z\right) .$ In other words, if the population stays high, the probability of a large number of immigrants will be reduced. There is a negative feedback of $x$ on $\Delta \left( x\right) .$ \end{example}
\begin{example} {\bf The case $k\left( 0\right) <\infty . $} Without loss of generality, we may take $k\left( 0\right) =1.$ Assume that $k\left( x\right) ={\bf P}\left( Z>x\right) $ for some proper random variable $Z>0$ and that \[ Y\left( x\right) \stackrel{d}{=}Z\mid Z>x \] so that $Y\left( x\right) $ is obtained by truncating $Z$ above $x.$ Thus \[ {\bf P}\left( Y\left( X_{t-}\right) >y\mid X_{t-}=x\right) =\frac{{\bf P} \left( Z>y,Z>x\right) }{{\bf P}\left( Z>x\right) }=\frac{k\left( y\right) }{ k\left( x\right) },\text{ for }y>x . \] A particular (exponential) choice is \[ k\left( x\right) =e^{-\theta x},\text{ }\theta >0, \] with ${\bf P}\left( Y\left( X_{t-}\right) >y\mid X_{t-}=x\right) =e^{-\theta \left( y-x\right) }$ depending only on $y-x$. Another possible one is (Pareto): $k\left( x\right) =\left( 1+x\right) ^{-c},$ $c>0$.
Note $K\left( 0,y\right) =k\left( y\right) >0$ for all $y>0,$ and $k\left( y\right) $ turns out to be the complementary probability distribution function (cpdf) of a jump above $y$, starting from $0$: state $0$ is {\bf reflecting}.
{\bf The case $ k (0 ) = \infty .$} Consider $k\left( x\right) =\int_{x}^{\infty }\mu \left( Z\in dy\right) $ for some positive Radon measure $\mu $ with infinite total mass. In this case, \[ {\bf P}\left( Y\left( X_{t-}\right) >y\mid X_{t-}=x\right) =\frac{\mu \left( Z>y,Z>x\right) }{\mu \left( Z>x\right) }=\frac{k\left( y\right) }{k\left( x\right) },\text{ for }y>x. \]
Now, $K\left( 0,y\right) =0$ for all $y>0$ and state $0$ becomes {\bf attracting}. An example is $k\left( x\right) =x^{-c},$ $c>0,$ which is not a cpdf.
The ratio $k\left( y\right) /k\left( x\right) $ is thus the conditional probability that a jump is greater than the level $y$ given that it did occur and that it is greater than the level $x$, see Eq. (1) of Eliazar and Klafter (2009) for a similar choice.
Our motivation for choosing the separable form $K\left( x,y\right) =k\left( y\right) /k\left( x\right) $ is twofold: it accounts for the possibility of having state $0$ either absorbing or reflecting for upward jumps launched from $0,$ and it can account for both a negative and a positive feedback of the current population size on the number of incoming immigrants. \end{example}
\begin{example} One can think of many other important and natural choices of $K\left( x,y\right) ,$ not in the separable class, among which those for which \[ K\left( x,dy\right) =\delta _{Vx}\left( dy\right) \] for some random variable $V>1.$ For this class of kernels, state $0$ is {\bf always} attracting. For example, choosing $V=1+E$ where:
1/ $E$ is exponentially distributed with tail ${\bf P}\left( E>x\right) =e^{-\theta x},$ $\theta >0;$ then $Y\left( x\right) =Vx$ yields \[ {\bf P}\left( Y\left( x\right) >y\mid X_{t-}=x\right) =K\left( x,y\right) = {\bf P}\left( \left( 1+E\right) x>y\right) =e^{-\theta \left( \frac{y}{x} -1\right) }, \]
2/ $E$ is Pareto distributed with tail ${\bf P}\left( E>x\right) =\left( 1+x\right) ^{-c},$ $c>0;$ then $ Y\left( x\right) =Vx$ yields \[ {\bf P}\left( Y\left( x\right) >y\mid X_{t-}=x\right) =K\left( x,y\right) = {\bf P}\left( \left( 1+E\right) x>y\right) =\left( y/x\right) ^{-c}, \] both with $K\left( 0,y\right) =0.$ 3/ When $V\sim \delta _{v},$ $v>1$, then $K\left( x,y\right) ={\bf 1}\left( y/x\leq v\right) .$ All three kernels depend only on $y/x.$
Note that in all three cases, $\partial _{x}K (x, x+z) >0, $ so that the larger $x,$ the larger ${\bf P}\left( Y\left( x\right) >x+z\right). $ If the population stays high, the probability of a large number of immigrants will be enhanced. There is a positive feedback of $x$ on $ \Delta \left( x\right) ,$ reflecting a herd effect. \end{example}
\begin{remark} A consequence of the separability condition of $K$ is the following. Consider a Markov sequence of after-jump positions defined recursively by $ Z_{n}=Y\left( Z_{n-1}\right) $, $Z_{0}=x_{0}$. With $x_{m}>x_{m-1}$, we have \[ {\bf P}\left( Z_{m}>x_{m}\mid Z_{m-1}=x_{m-1}\right) =K\left( x_{m-1},x_{m}\right) \text{, }m=1,...,n, \] so that, with $x_{0}<x_{1}<...<x_{n}$, and under the separability condition on $K$, the product \[ \prod_{m=1}^{n}{\bf P}\left( Z_{m}>x_{m}\mid Z_{m-1}=x_{m-1}\right) =\prod_{m=1}^{n}K\left( x_{m-1},x_{m}\right) =\prod_{m=1}^{n}\frac{k\left( x_{m}\right) }{k\left( x_{m-1}\right) }=\frac{k\left( x_{n}\right) }{k\left( x_{0}\right) } \] only depends on the initial and terminal states $\left( x_{0},x_{n}\right) $ and not on the full path $\left( x_{0},...,x_{n}\right) .$
\end{remark}
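The telescoping identity in the remark is easy to verify numerically; in the sketch below both $k$ and the increasing path are arbitrary illustrative choices.

```python
# Under the separable kernel, the product of one-step tail probabilities
# K(x_{m-1}, x_m) along any increasing path collapses to k(x_n)/k(x_0).
# The function k and the path below are illustrative choices.

k = lambda z: (1.0 + z) ** (-1.5)
path = [0.5, 1.0, 2.5, 4.0, 7.0]

prod = 1.0
for a, b in zip(path, path[1:]):
    prod *= k(b) / k(a)  # K(x_{m-1}, x_m) = k(x_m)/k(x_{m-1})
```

Only the endpoints $x_0$ and $x_n$ survive in the product, exactly as stated in the remark.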
\subsection{The infinitesimal generator} In what follows we always work with separable kernels. Moreover, we write $X_t ( x) $ for the process given in \eqref{C0} to emphasize the dependence on the starting point $ x , $ that is, $X_t (x) $ denotes the process with the above dynamics \eqref{C0} satisfying $ X_0 ( x) = x .$ If the value of the starting point $x$ is not important, we shall also write $ X_t $ instead of $ X_t ( x) .$
Under the separability condition, the infinitesimal generator of $X_{t}$ acting on bounded smooth test functions $u$ takes the following simple form \begin{equation}\label{eq:generator} \left( \mathcal{G}u\right) \left( x\right) =-\alpha \left( x\right) u^{\prime }\left( x\right) +\frac{\beta \left( x\right) }{k\left( x\right) } \int_{x}^{\infty }k\left( y\right) u^{\prime }\left( y\right) dy , x \geq 0. \end{equation}
\begin{remark} In Eliazar and Klafter (2006), a particular scale-free version of decay-surge models with $\alpha \left( x\right) \propto x^{a}$, $\beta \left( x\right) \propto x^{b}$ and $k\left( x\right) \propto x^{-c}$, $c>0$, has been investigated. \end{remark}
\begin{remark} If $x_{t}$ goes extinct in finite time $t_{0}(x) < \infty ,$ since $x_{t}$ is supposed to represent the size of some population, we need to impose $x_{t}=0 $ for $t\geq t_0(x),$ forcing state $0$ to be absorbing. From this time on, $X_{t}$ can re-enter the positive orthant if there is a positive probability to move from $0$ to a positive state, meaning that $ k (0 ) < \infty $ and $\beta \left( 0\right) >0.$ In such a case, the first time $X_{t}$ hits state $0$ is only a first local extinction time, the expected value of which needs to be estimated. The question of the time elapsed between consecutive local extinction times (excursions) also arises.
On the contrary, for situations for which $k(0)=\mathbf{\infty }$ or $\beta \left( 0\right) =0,$ the first time $X_{t}$ hits state $0$ will be a global extinction time. \end{remark}
\subsection{Relation between decay-surge and growth-collapse processes} In this subsection, we exhibit a natural relationship between decay-surge population models, as studied here, and growth-collapse models as developed in Boxma et al (2006), Goncalves et al (2021), Gripenberg (1983) and Hanson and Tuckwell (1978). Growth-collapse models describe deterministic population growth where at random jump times the size of the population undergoes a catastrophe reducing its current size to a random fraction of it. More precisely, the generator of a growth-collapse process, having parameters $ ( \tilde \alpha, \tilde \beta, \tilde h ) ,$ is given for all smooth test functions by \begin{equation}\label{eq:Gsep} (\tilde {\mathcal G} u) (x) = \tilde \alpha (x) u' ( x) - \tilde \beta ( x)/\tilde h(x) \int_0^x u'(y)\tilde h(y) dy , x \geq 0. \end{equation} In the above formula, $ \tilde \alpha, \tilde \beta $ are continuous and positive functions on $ (0, \infty), $ and $ \tilde h $ is positive and non-decreasing on $ (0, \infty).$
In what follows, consider a decay-surge process $X_{t}$ defined by the triple $\left( \alpha ,\beta ,k\right) $ and let $\widetilde{X}_{t}=1/X_{t}.$
\begin{proposition} The process $\widetilde{X}_{t}$ is a growth-collapse process as studied in \cite{GHL} with triple $\left( \widetilde{\alpha }, \widetilde{\beta },\widetilde{h}\right) $ given by $$ \widetilde{\alpha }\left( x\right) =x^{2}\alpha (1/x) , \widetilde{\beta }(x) =\beta (1/x) \mbox{ and } \widetilde{h}(x) =k(1/x), \; x > 0. $$ \end{proposition}
\begin{proof} Let $u$ be any smooth test function and study $u ( \tilde X_t) = u \circ g ( X_t) $ with $g ( x) = 1/x . $ By Ito's formula for processes with jumps, $$ u ( \tilde X_t ) = u \circ g (X_t ) = u ( \tilde X_0 ) + \int_0^t {\mathcal G} (u \circ g ) (X_{t'}) dt' + M_t, $$ where $M_t$ is a local martingale. We obtain \begin{eqnarray*} {\mathcal G} (u \circ g ) (x) &=& -\alpha(x)(u \circ g)'(x)+ \frac{\beta(x)}{k(x)}\int_x^{\infty} (u \circ g)'(y) k(y) dy, \\
&=& \frac{1}{x^2}\alpha(x)u'(\frac{1}{x}) + \frac{\beta(x)}{k(x)}\int_x^{\infty} u'(\frac{1}{y}) k(y) \frac{-dy}{y^2} . \end{eqnarray*} Using the change of variable $z=1/y ,$ this last expression can be rewritten as $$ \frac{1}{x^2}\alpha(x)u'(\frac{1}{x}) -\frac{\beta(x)}{k(x)}\int_0^{1/x} u'(z) k(\frac{1}{z}) dz = \widetilde{\alpha }(\tilde x)u^{\prime }(\tilde x)-\frac{\widetilde{\beta }(\tilde x)}{\widetilde{h}(\tilde x)}\int_{0}^{\tilde x}u^{\prime }(z)\widetilde{h}(z)dz, \qquad \tilde x = 1/x, $$ which is the generator of the process $\widetilde{X}_{t}$ applied to $u$ at the point $\tilde x .$ \end{proof} In what follows we refer to the above relation between the decay-surge (DS) process $ X$ and the growth-collapse (GC) process $ \tilde X$ as the {\it DS-GC-duality}. Some simple properties of the process $X$ follow directly from this duality, as we show next. Of course, the duality only holds up to the first time one of the two processes leaves the interior $ (0, \infty ) $ of its state space. Therefore particular attention has to be paid to state $0$ for $X_t ,$ or equivalently to state $ + \infty $ for $\tilde X_t.$ Most of our results will only hold true under conditions ensuring that, starting from $ x > 0, $ the process $X_t$ will not hit $0$ in finite time.
Another important difference between the two processes is that the simple transformation $ x \mapsto 1/x$ maps a priori unbounded sample paths $ X_t $ into bounded ones $ \tilde X_t $ (starting from $ \tilde X_0 = 1/x, $ almost surely $ \tilde X_t \le \tilde x_t ( 1/x) ,$ a relation which does not hold for $X$).
\subsection{First consequences of the DS-GC-duality} Given $X_{0}=x > 0$, the first jump times both of the DS-process $X_t$ starting from $x,$ and of the GC-process $ \tilde X_t, $ starting from $ 1/x,$ coincide and are given by \begin{equation*}
T_{x}=\inf \{ t > 0 : X_t \neq X_{t-} | X_0 = x \}= \tilde T_{\frac1x}= \inf \{ t>0:\tilde X_{t}\neq \tilde X_{t-}| \tilde X_{0}=\frac1x\}. \end{equation*} Introducing \begin{equation*} \Gamma \left( x\right) :=\int_1^{x}\gamma \left( y\right) dy, \mbox{ where } \gamma \left( x\right) :=\beta \left( x\right) /\alpha \left( x\right) , x > 0 , \end{equation*} and the corresponding quantity associated to the process $ \tilde X_t,$ $$ \widetilde{\Gamma } \left( x\right) = \int_1^x \tilde \gamma \left( y\right)dy , \; \tilde \gamma \left( x\right)= \tilde \beta \left( x\right)/ \tilde \alpha \left( x\right) , x > 0 ,$$ clearly, $\widetilde{\Gamma }\left( x\right) =-\Gamma \left( 1/x\right) $ for all $ x > 0.$
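The identity $\widetilde{\Gamma}(x)=-\Gamma(1/x)$ can be checked numerically. The sketch below uses the illustrative linear choice $\alpha(y)=y$, $\beta(y)=2$, for which $\Gamma(x)=2\log x$ in closed form.

```python
import math

# Numeric check of Gamma~(x) = -Gamma(1/x) for the GC-dual of a DS-process.
# Illustrative choice: alpha(y) = y, beta(y) = 2, so Gamma(x) = 2*log(x).

alpha = lambda y: y
beta = lambda y: 2.0
alpha_t = lambda y: y * y * alpha(1.0 / y)  # tilde alpha from the proposition
beta_t = lambda y: beta(1.0 / y)            # tilde beta

def integrate(f, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

x = 4.0
Gamma_tilde = integrate(lambda y: beta_t(y) / alpha_t(y), 1.0, x)
Gamma_at_inv = 2.0 * math.log(1.0 / x)  # Gamma(1/x) in closed form
```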
Arguing as in Sections 2.4 and 2.5 of \cite{GHL}, a direct consequence of the above duality is the fact that for all $t < t_0 ( x), $ \begin{equation} \label{eq:tx} \mathbf{P}\left( T_{x} >t\right) =e^{-\int_{0}^{t}\beta \left( x_{s}\left( x\right) \right) ds}=e^{-\left[ \Gamma \left( x\right) -\Gamma \left( x_{t}\left( x\right) \right) \right] }. \end{equation}
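For the linear model $\alpha(x)=\alpha_1 x$, $\beta(x)=\beta_1$ (an illustrative choice), the flow is $x_t(x)=xe^{-\alpha_1 t}$ and $\Gamma(x)=\gamma_1\log x$, so \eqref{eq:tx} reduces to $\mathbf{P}(T_x>t)=e^{-\beta_1 t}$: the first jump time is exponential with rate $\beta_1$, independently of $x$. The sketch below checks this reduction.

```python
import math

# Check that exp(-(Gamma(x) - Gamma(x_t(x)))) equals exp(-beta1*t)
# for the illustrative linear model alpha(x) = alpha1*x, beta = beta1,
# where x_t(x) = x*exp(-alpha1*t) and Gamma(x) = gamma1*log(x).

alpha1, beta1 = 2.0, 3.0
gamma1 = beta1 / alpha1
Gamma = lambda z: gamma1 * math.log(z)

x, t = 1.5, 0.7
x_t = x * math.exp(-alpha1 * t)                # deterministic flow at time t
survival = math.exp(-(Gamma(x) - Gamma(x_t)))  # right-hand side of (eq:tx)
```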
To ensure that $\mathbf{P}\left( T_{x} <\infty \right) =1,$ in accordance with Assumption 1 of \cite{GHL} we will impose the condition \begin{ass}\label{As1} $\Gamma \left(0 \right) =- \infty .$ \end{ass}
\begin{proposition}\label{prop:extinction} Under Assumption \ref{As1}, the stochastic process $X_{t}(x) , x > 0 , $ necessarily jumps before reaching $0.$ In particular, for any $ x > 0, $ $ X_t(x)$ almost surely never reaches $0$ in finite time. \end{proposition} \begin{proof} By duality, we have
$$ {\mathbf{P}} (X \mbox{ jumps before reaching $0$} | X_0 = x ) = {\mathbf{P}} ( \tilde X \mbox{ jumps before reaching $ + \infty $}| \tilde X_0 = \frac1x) = 1, $$ as has been shown in Section 2.5 of \cite{GHL}, and this implies the assertion. \end{proof}
In particular, the only situation where the question of the extinction of the process $X $ makes sense (either local or total) is when $t_0(x)<\infty $ and $\Gamma(0)>-\infty. $
\begin{example} We give an example where finite time extinction of the process is possible. Suppose $\alpha \left( x\right) =\alpha_1 x^{a}$ with $\alpha_1>0$ and $a<1$. Then $x_{t}\left( x\right) ,$ started at $x$, hits $0$ in finite time $t_{0}\left( x\right) =x^{1-a}/\left[ \alpha_1 \left( 1-a\right) \right] $, with \[ x_{t}\left( x\right) =\left( x^{1-a}+\alpha_1 \left( a-1\right) t\right) ^{1/\left( 1-a\right) }, \] see \cite{DSGHL}. Suppose $\beta \left( x\right) =\beta_1 >0$, constant. Then, with $\gamma_1=\beta_1/\alpha_1>0$, \[ \Gamma \left( x\right) =\int_{1}^{x}\gamma \left( y\right) dy=\frac{\gamma_1 }{ 1-a}\left( x^{1-a}-1\right) \] with $\Gamma \left( 0\right) =-\frac{\gamma_1 }{1-a}>-\infty .$ Assumption \ref{As1} is not fulfilled, so $X$ can hit $0$ in finite time and there is a positive probability that $T_x =\infty $. On this last event, the flow $x_{t}\left( x\right) $ has all the time necessary to first hit $0$ and, if in addition the kernel $k$ is chosen so that $k(0)=\infty ,$ to go extinct definitively. The time of extinction $\tau \left( x\right) $ of $X$ itself can be deduced from the renewal equation in distribution \[ \tau \left( x\right) \stackrel{d}{=}t_{0}\left( x\right) {\bf 1}_{\{T_x =\infty \}}+\tau ^{\prime }\left( Y\left( x_{T_x }\left( x\right) \right) \right) {\bf 1}_{\{T_x <\infty \}} \] where $\tau ^{\prime }$ is a copy of $\tau $.
We conclude that for this family of models, $X$ itself goes extinct in finite time. This is an interesting regime that we shall not investigate any further. \end{example}
Let us come back to the discussion of Assumption \ref{As1}. It follows immediately from Eq. (13) in \cite{GHL} that for $x>0,$ under Assumption \ref{As1} and supposing that $ t_0 ( x) = \infty $ for all $ x > 0 ,$ \begin{equation*} \mathbf{E}\left( T_{x} \right) =e^{-\Gamma \left( x\right) }\int_{0}^{x}\frac{dz}{\alpha \left( z\right) }e^{\Gamma \left( z\right) }. \end{equation*} Clearly, when $x\rightarrow 0$, $\mathbf{E}\left( T_{x} \right) \sim 1/\alpha \left( x\right) $.
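In the linear case the above formula can be evaluated in closed form: with $\alpha(z)=\alpha_1 z$ and $\beta=\beta_1$ (illustrative choices), $\Gamma(z)=\gamma_1\log z$ and the integral gives $\mathbf{E}(T_x)=1/\beta_1$, consistent with the exponential first jump time. A numeric sketch:

```python
import math

# Midpoint evaluation of E(T_x) = exp(-Gamma(x)) * int_0^x exp(Gamma(z))/alpha(z) dz
# for the illustrative linear model alpha(z) = alpha1*z, beta = beta1,
# where Gamma(z) = gamma1*log(z); the exact value is 1/beta1.

alpha1, beta1 = 2.0, 3.0
gamma1 = beta1 / alpha1
x = 2.0

def integrand(z):
    return math.exp(gamma1 * math.log(z)) / (alpha1 * z)  # = z**(gamma1-1)/alpha1

n = 100_000
h = x / n
integral = h * sum(integrand((i + 0.5) * h) for i in range(n))
mean_T = math.exp(-gamma1 * math.log(x)) * integral
```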
\begin{remark} $\left( i\right) $ If $\beta \left( 0\right) >0$ then Assumption \ref{As1} implies $t_{0}(x)=\infty ,$ so that $0$ is not accessible.
$\left( ii\right) $ Notice also that $t_0 (x) < \infty $ together with Assumption \ref{As1} implies that $ \beta ( 0 ) = \infty ,$ so that the process $ X_t( x) $ is prevented from hitting $0$ even though $x_t(x) $ reaches it in finite time, due to the fact that the jump rate $\beta (x) $ blows up as $x \to 0.$ \end{remark}
\subsection{Classification of state $0$}
Recall that for all $ x > 0, $ \begin{equation*} t_{0}(x{)}=\int_{0}^{x}\frac{dy}{\alpha \left( y\right) } \end{equation*} represents the time required for $x_{t}$ to move from $x>0$ to $0$. So: \begin{eqnarray*} &\text{ If } t_{0}(x{)}<\infty \mbox{ and } \Gamma ( 0 ) > - \infty , &\text{ state }0\text{ is accessible.} \\ &\text{ If } t_{0}(x{)}=\infty \mbox{ or } \Gamma ( 0 ) = - \infty , &\text{ state }0\text{ is inaccessible.} \end{eqnarray*} We therefore introduce the following conditions which apply in the separable case $K(x,y)=k(y)/k(x).$
\textbf{Condition (R):} $(i)$ $\beta \left( 0\right) >0$ and \newline $\left( ii\right) $ $K\left( 0,y\right) =k\left( y\right) /k\left( 0\right) >0 $ for some $y>0$ (and in particular $k\left( 0\right) <\infty) .$
\textbf{Condition (A):} $\frac{\beta(0)}{k(0)}k(y)=0 $ for all $y > 0 $.
State $0$ is reflecting if condition $\left( R\right) $ is satisfied and it is absorbing if condition (A) is satisfied.
This leads to four possible combinations for the boundary state $0$:
\textbf{Condition} $\left( R\right) $ and $t_{0}(x)<$\textbf{\ }$\infty $ and $ \Gamma (0 ) > - \infty :$ regular (reflecting and accessible).
\textbf{Condition }$\left( R\right) $\ and $t_{0}(x)=$\textbf{\ }$\infty $ or $ \Gamma ( 0 ) = - \infty :$ entrance (reflecting and inaccessible).
\textbf{Condition }$\left( A\right) $ and $t_{0}(x)<$\textbf{\ }$\infty $ and $ \Gamma (0 ) > - \infty :$ exit (absorbing and accessible).
\textbf{Condition }$\left( A\right) $ and $t_{0}(x)=$\textbf{\ }$\infty $ or $ \Gamma ( 0 ) = - \infty :$ natural (absorbing and inaccessible).
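The four combinations can be packaged into a small helper (hypothetical code, not part of the paper) that maps the reflecting/absorbing dichotomy and the two accessibility criteria to the boundary type of state $0$:

```python
# Hypothetical helper encoding the classification of state 0 given above:
# accessibility requires both t_0(x) < infinity and Gamma(0) > -infinity.

def classify_zero(reflecting, t0_finite, Gamma0_finite):
    accessible = t0_finite and Gamma0_finite
    if reflecting:                                 # condition (R)
        return "regular" if accessible else "entrance"
    return "exit" if accessible else "natural"     # condition (A)
```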
\subsection{Speed measure.} Suppose now an invariant measure (or speed measure) $\pi \left( dy\right) $ exists. Since we supposed $\alpha (x)>0$ for all $x>0,$ we necessarily have $ x_{\infty }(x)=0$ for all $x>0,$ and so the support of $\pi $ is $[0,\infty) .$ Thanks to our duality relation, by Eq. (19) of \cite{GHL} the explicit expression of the speed measure is given by $\pi ( dy) = \pi ( y) dy $ with \begin{equation} \pi \left( y\right) =C\frac{k\left( y\right) e^{\Gamma \left( y\right) }}{ \alpha \left( y\right) }, \label{C5} \end{equation} up to a multiplicative constant $C>0$. The function $\pi \left( y\right) $ can be normalized to a probability density function if and only if it is integrable both at $0$ and at $\infty .$
\begin{remark} $\left( i\right) $ When $k\left( x\right) =e^{-\kappa _{1}x}$, $\kappa _{1}>0$ , $\alpha \left( x\right) =\alpha _{1}x$ and $\beta \left( x\right) =\beta _{1}>0$ constant, $\Gamma \left( y\right) =\gamma _{1}\log y$, $\gamma _{1}=\beta _{1}/\alpha _{1}$ and \begin{equation*} \pi \left( y\right) =Cy^{\gamma _{1}-1}e^{-\kappa _{1}y}, \end{equation*} a Gamma$\left( \gamma _{1},\kappa _{1}\right) $ density. This result is well-known, corresponding to the linear decay-surge model (a jump version of the damped Langevin equation) having an invariant (integrable) probability density, see Malrieu (2015). We shall show later that the corresponding process $X $ is positive recurrent.
$\left( ii\right) $ A less obvious power-law example is as follows: Assume $ \alpha \left( x\right) =\alpha _{1}x^{a}$ ($a>1$) and $\beta \left( x\right) =\beta _{1}x^{b}$, $\alpha _{1},\beta _{1}>0$ so that $\Gamma \left( y\right) =\frac{\gamma _{1}}{b-a+1}y^{b-a+1}.$ We have $\Gamma \left( 0\right) =-\infty $ if we assume $b-a+1=-\theta $ with $\theta >0$, hence $\Gamma \left( y\right) =-\frac{\gamma _{1}}{\theta }y^{-\theta }.$ Taking $k\left( y\right) =e^{-\kappa _{1}y^{\eta }}$, $\kappa _{1},\eta >0$, we get \begin{equation*} \pi \left( y\right) =Cy^{-a}e^{-\left( \kappa _{1}y^{\eta }+\frac{\gamma _{1} }{\theta }y^{-\theta }\right) } \end{equation*} which is integrable both at $y=0$ and $y=\infty $.
As a special case, if $a=2$ and $b=0$ (constant jump rate $\beta \left( x\right) $), $\eta =1$, \begin{equation*} \pi \left( y\right) =Cy^{-2}e^{-\left( \kappa _{1}y+\gamma _{1}y^{-1}\right) }, \end{equation*} a density of generalized inverse Gaussian type.
$\left( iii\right) $ In Eliazar and Klafter (2007) and (2009), a special case of our model was introduced for which $k\left( y\right) =\beta \left( y\right) $. In such cases, \begin{equation*} \pi \left( y\right) =C\gamma \left( y\right) e^{\Gamma \left( y\right) } \end{equation*} so that \begin{equation*} \int_{0}^{x}\pi \left( y\right) dy=C\left( e^{\Gamma \left( x\right) }-e^{\Gamma \left( 0\right) }\right) =Ce^{\Gamma \left( x\right) }, \end{equation*} under the Assumption $\Gamma \left( 0\right) =-\infty $. If in addition $ \Gamma \left( \infty \right) <\infty $, $\pi \left( y\right) $ can be normalized to a probability density. \end{remark}
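Item $(i)$ of the remark can be checked numerically: with the illustrative parameters below, the speed density $\pi(y)\propto y^{\gamma_1-1}e^{-\kappa_1 y}$ integrates (over a large truncated range) to the Gamma normalization $\Gamma(\gamma_1)/\kappa_1^{\gamma_1}$.

```python
import math

# Numeric check that pi(y) = y**(gamma1-1)*exp(-kappa1*y) has total mass
# Gamma(gamma1)/kappa1**gamma1 (the Gamma-distribution normalization).
# Parameter values are illustrative.

kappa1, alpha1, beta1 = 1.0, 1.0, 2.5
gamma1 = beta1 / alpha1

def pi(y):
    return y ** (gamma1 - 1.0) * math.exp(-kappa1 * y)

n, b = 400_000, 50.0  # truncate the integral at b; the tail beyond is negligible
h = b / n
mass = h * sum(pi((i + 0.5) * h) for i in range(n))
norm = math.gamma(gamma1) / kappa1 ** gamma1
```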
\section{Scale function and hitting times}
In this section we start by studying the scale function of $X_t,$ before turning to hitting-time properties that make use of it. A scale function $s\left( x\right) $ of the process is any function solving $ \left( \mathcal{G}s\right) \left( x\right) =0 .$ In other words, a scale function transforms the process into a local martingale. Of course, any constant function is a solution. Notice that for the growth-collapse model considered in \cite{GHL}, no scale functions other than the constant ones exist.
In what follows we are interested in non-constant solutions and conditions ensuring the existence of those. To clarify ideas, we introduce the following condition \begin{ass}\label{ass:C} Let \begin{equation}\label{eq:s2} s (x)=\int_{1}^{x}\gamma (y)e^{-\Gamma (y)}/k(y)dy , \; x \geq 0, \end{equation} and suppose that $ s ( \infty ) = \infty .$ \end{ass} Notice that Assumption \ref{ass:C} implies that $ k( \infty ) = 0 ,$ which is reasonable since it prevents the process from jumping from a finite position $X_{t-} $ to the after-jump position $X_t = X_{t- } + \Delta ( X_{t-}) = + \infty .$
\begin{proposition}\label{prop:scale} (1) Suppose $\Gamma (\infty )=\infty .$ Then the function $s$ introduced in \eqref{eq:s2} above is a strictly increasing version of the scale function of the process obeying $s (1)=0.$
(1.1) If additionally Assumptions \ref{As1} and \ref{ass:C} hold and if $ k(0) < \infty, $ then $ s(0 ) = - \infty $ and $ s( \infty ) = \infty , $ such that $s $ is a space transform $ [0, \infty ) \to [ - \infty , \infty ) .$
(1.2) If Assumption \ref{ass:C} does not hold, then \begin{equation*} s_{1}(x)=\int_{x}^{\infty }\gamma (y)e^{-\Gamma (y)}/k(y)dy = s ( \infty ) - s(x) \end{equation*} is a version of the scale function which is strictly decreasing, positive, such that $s_{1}(\infty )=0.$
(2) Finally, suppose that $\Gamma (\infty )<\infty .$ Then the only scale functions belonging to $C^1 $ are the constant ones. \end{proposition}
\begin{remark} \begin{enumerate} \item[1.] We shall see later that -- as in the case of one-dimensional diffusions, see e.g. Example 2 in Section 3.8 of Has'minskii (1980) -- the fact that $s$ is a space transform as in item (1.1) above is related to the Harris recurrence of the process. \item[2.] The assumption $\Gamma ( \infty ) < \infty $ of item (2) above corresponds to Assumption 2 of \cite{GHL}, where this was the only case that we considered. As a consequence, for the GC-model considered there we did not have non-constant scale functions at our disposal. \end{enumerate} \end{remark}
\begin{proof} A $C^1$ scale function $s$ necessarily solves $$ \left( \mathcal{G}s\right) \left( x\right) = -\alpha \left( x\right) s^{\prime }\left( x\right) +\beta \left( x\right)/ k\left( x\right) \int_{x}^{\infty }k\left( y\right) s^{\prime }\left( y\right) dy=0 $$ such that for all $ x > 0, $ \begin{equation} k\left( x\right) s^{\prime }\left( x\right) -\gamma \left( x\right) \int_{x}^{\infty }k\left( y\right) s^{\prime }\left( y\right) dy=0. \end{equation}
Putting $u^{\prime }\left( x\right) =k\left( x\right) s^{\prime }\left( x\right) $, the above implies in particular that $u^{\prime}$ is integrable in a neighborhood of $+\infty ,$ so that $u ( \infty ) $ must be a finite number. We get $u^{\prime }\left( x\right) =\gamma \left( x\right) \left( u\left( \infty \right) -u\left( x\right) \right). $
\textbf{Case 1 :} $u(\infty )=0$ so that $u(x) = - c_1 e^{ - \Gamma ( x) } $ for some constant $c_1, $ whence $\Gamma \left( \infty \right) =\infty .$ We obtain \begin{equation} s^{\prime }\left( x\right) = c_{1}\frac{\gamma \left( x\right) }{k\left( x\right) }e^{-\Gamma \left( x\right) } \end{equation} and thus
\begin{equation} \label{eq:scale} s\left( x\right) =c_2 + c_{1}\int_{1}^{x }\frac{\gamma \left( y\right) }{ k\left( y\right) }e^{-\Gamma \left( y\right) }dy \end{equation} for some constants $c_{1}, c_2.$ Taking $ c_2 = 0 $ and $ c_1 = 1 $ gives the formula \eqref{eq:s2}, and both items (1.1) and (1.2) follow from this.
\textbf{Case 2 :} $u(\infty )\neq 0$ is a finite number. Putting $ v(x)=e^{\Gamma (x)}u(x),$ $v$ then solves \begin{equation*} v^{\prime }(x)=u(\infty )\gamma (x)e^{\Gamma (x)} \end{equation*} such that \begin{equation*} v(x)=d_{1}+u(\infty )e^{\Gamma (x)} \end{equation*} and thus \begin{equation*} u(x)=e^{-\Gamma (x)}d_{1}+u(\infty ). \end{equation*} Letting $x\to \infty ,$ we see that the above is perfectly well-defined for any value of the constant $d_{1},$ if we suppose $\Gamma (\infty )=\infty .$
As a consequence, $u^{\prime }(x)=-d_{1}\gamma (x)e^{- \Gamma (x)},$ leading us again to the explicit formula \begin{equation}\label{eq:scalebis} s(x)=c_{2} + c_{1}\int_{1}^x\frac{\gamma \left( y\right) }{k\left( y\right) } e^{-\Gamma \left( y\right) }dy, \end{equation} with $c_1 = -d_1,$ implying items (1.1) and (1.2).
Finally, if $\Gamma ( \infty ) < \infty, $ we must take $d_1 = 0 ,$ implying that the only scale functions in this case are the constant ones. \end{proof}
\begin{example} In the linear case with $\beta \left( x\right) =\beta _{1}>0$, $ \alpha \left( x\right) =\alpha _{1}x$, $\alpha _{1}>0$ and $k\left( y\right) =e^{-y},$ with $\gamma _{1}=\beta _{1}/\alpha _{1},$ Assumption \ref{ass:C} is satisfied, and \begin{equation*} s \left( x\right) =\gamma _{1}\int_{1}^{x}y^{-\left( \gamma _{1}+1\right) }e^{y}dy, \end{equation*} which diverges both as $x\rightarrow 0$ and as $x\to +\infty .$ Notice that $0$ is inaccessible for this process, i.e., starting from a strictly positive position $x>0,$ $X_{t}$ will never hit $0.$ The state space $(0,\infty )$ is invariant under the dynamics, and on it the process is recurrent, indeed positive recurrent, as we know from the Gamma shape of its invariant speed density. \end{example}
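The scale function of this example is explicit enough to evaluate numerically. The following Python sketch (the function names and the illustrative choice $\gamma_1=1/2$ are ours, not from the text) computes $s(x)=\gamma_1\int_1^x y^{-(\gamma_1+1)}e^y\,dy$ by a composite Simpson rule and illustrates the divergence at both ends of $(0,\infty)$.

```python
import math

def simpson(f, lo, hi, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += f(lo + i * h) * (4 if i % 2 else 2)
    return total * h / 3

def scale(x, gamma1=0.5):
    """s(x) = gamma1 * int_1^x y^(-(gamma1+1)) e^y dy; negative for x < 1."""
    f = lambda y: gamma1 * y ** (-(gamma1 + 1.0)) * math.exp(y)
    return simpson(f, 1.0, x) if x >= 1.0 else -simpson(f, x, 1.0)
```

Evaluating `scale` on a grid shows $s$ increasing, tending to $-\infty$ as $x\to 0$ (the integrand blows up like $y^{-(\gamma_1+1)}$) and to $+\infty$ as $x\to\infty$ (driven by the factor $e^y$).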
\subsection{Hitting times}
Fix $a < x < b.$ In what follows we shall be interested in hitting times of the positions $a$ and $b,$ starting from $x,$ under the condition $\Gamma ( \infty ) = \infty .$ Due to the asymmetric structure of the process (continuous motion downwards and up-moves by jumps only), these times are given by \begin{equation*} \tau_{x, b } = \inf \{ t > 0 : X_t = b \} = \inf \{ t > 0 : X_t \le b , X_{t- } > b \} \end{equation*} and \begin{equation*} \tau_{x, a } = \inf \{ t > 0 : X_t = a \} = \inf \{ t > 0 \ : \ X_t \le a \} . \end{equation*} Obviously, $\tau_{a,a} = \tau_{b, b } = 0 .$
Let $T=\tau _{x,a}\wedge \tau _{x,b}.$ Contrary to the case of processes with continuous trajectories, it is not clear that $T <\infty $ almost surely. Indeed, starting from $x,$ the process could jump across the barrier of height $b$ before hitting $a$ and then never enter the interval $[0,b]$ again. So we suppose in the sequel that $T<\infty $ almost surely. Then \begin{equation*} \mathbf{P}\left( \tau _{x,a}<\tau _{x,b}\right) +\mathbf{P}\left( \tau _{x,b}<\tau _{x,a}\right) =1. \end{equation*}
\begin{proposition}\label{prop:exit} Suppose $\Gamma (\infty )=\infty $ and that Assumption \ref{ass:C} does not hold. Let $0<a<x<b<\infty $ and suppose that $T= \tau _{x,a}\wedge \tau _{x,b}<\infty $ almost surely. Then \begin{equation} \mathbf{P}\left( \tau _{x,a}<\tau _{x,b}\right) = \frac{\int_{x}^{b}\frac{ \gamma \left( y\right) }{k\left( y\right) }e^{{-}\Gamma \left( y\right) }dy}{ \int_{a}^{b}\frac{\gamma \left( y\right) }{k\left( y\right) }e^{{-}\Gamma \left( y\right) }dy}. \label{scaleproba} \end{equation} \end{proposition}
\begin{proof} Under the above assumptions, $s_1 (X_{t})$ is a local martingale, and the stopped process $M_{t}=s_1(X_{T\wedge t})$ is a bounded martingale: indeed, $X_{T\wedge t}\geq a$ and $s_1$ is decreasing, so that $M_{t}\leq s_1(a).$ Therefore the stopping theorem gives \begin{equation*} s_1 (x)=\mathbb{E}(M_0)=\mathbb{E}(s_1(X_0))=\mathbb{E}(M_T). \end{equation*} Moreover, \begin{equation*} \mathbb{E}(M_T)=s_1(a)\mathbf{P}(T=\tau _{x,a})+s_1(b)\mathbf{P}(T=\tau _{x,b}). \end{equation*} But $\{T=\tau _{x,a}\}=\{\tau _{x,a}<\tau _{x,b}\}$ and $\{T=\tau _{x,b}\}=\{\tau _{x,b}<\tau _{x,a}\},$ so that \begin{equation*} s_1(x)=s_1(a)\mathbf{P}(\tau _{x,a}<\tau _{x,b})+s_1 (b)\mathbf{P}(\tau _{x,b}<\tau _{x,a}). \end{equation*} Solving this linear equation for $\mathbf{P}(\tau _{x,a}<\tau _{x,b}),$ using $\mathbf{P}(\tau _{x,a}<\tau _{x,b})+\mathbf{P}(\tau _{x,b}<\tau _{x,a})=1,$ yields \eqref{scaleproba}. \end{proof}
\begin{remark} - See \cite{KS} for similar arguments in a particular case of a constant flow and exponential jumps.
- We stress that it is not possible to deduce the above formula without imposing the existence of $ s_1$ (that is, if Assumption \ref{ass:C} holds, so that $s_1$ is not well-defined). Indeed, if we wanted to consider the local martingale $ s ( X_t) $ instead, the stopped process $ s ( X_{t \wedge T}) $ would not be bounded, since $ X_{t \wedge T} $ may take arbitrary values in $ ( a, \infty ) , $ so that the optional stopping theorem does not apply. \end{remark}
\begin{remark} Let $\tau _{x,[b,\infty )}=\inf \{t>0:X_{t}\geq b\}$ and $\tau _{x,[0,a]}=\inf \{t>0:X_{t}\le a\}$ be the entrance times to the intervals $ [b,\infty )$ and $[0,a].$ Observe that by the structure of the process, namely the continuity of the downward motion, \begin{equation*} \{\tau _{x,a}<\tau _{x,b}\}\subset \{\tau _{x,a}<\tau _{x,[b,\infty )}\}\subset \{\tau _{x,a}<\tau _{x,b}\}. \end{equation*} Indeed, the second inclusion is trivial since $\tau _{x,[b,\infty )}\le \tau _{x,b}.$ The first inclusion follows from the fact that it is not possible to jump across $b$ and then hit $a$ without touching $b.$ Therefore, \eqref{scaleproba} can be rewritten as \begin{equation} \mathbf{P}\left( \tau _{x,[0,a]}<\tau _{x,[b,\infty )}\right) =\frac{ s_{1}\left( x\right) -s_{1}\left( b\right) }{s_{1}\left( a\right) -s_{1}\left( b\right) } . \label{scalebis} \end{equation}
Now suppose that the process does not explode in finite time, that is, during each finite time interval, almost surely, only a finite number of jumps appear. In this case $\tau _{x,[b,\infty )}\to +\infty $ as $b\to \infty .$ Then, letting $ b \to \infty $ in \eqref{scalebis}, we obtain for any $ a>0,$ \begin{equation} \mathbf{P}\left( \tau _{x,a}<\infty \right) ={\frac{s_{1}(x)}{s_{1}(a)}}= \frac{\int_{x}^{\infty }\frac{\gamma \left( y\right) }{k\left( y\right) }e^{{ -}\Gamma \left( y\right) }dy}{\int_{a}^{\infty }\frac{\gamma \left( y\right) }{k\left( y\right) }e^{{-}\Gamma \left( y\right) }dy}<1, \label{eq:scale2} \end{equation} since $ \int_a^x \frac{\gamma \left( y\right) }{k\left( y\right) }e^{{-}\Gamma \left( y\right) }dy > 0 $ by the assumptions on $ k $ and $ \gamma .$ \end{remark}
As a consequence, we obtain the following
\begin{proposition} \label{cor:transience} Suppose that $\Gamma ( \infty ) = \infty $ and that Assumption \ref{ass:C} does not hold. Then either the process explodes in finite time with positive probability or it is transient at $+\infty , $ i.e., for all $a<x,$ $\tau _{x,a}=\infty $ with positive probability. \end{proposition}
\begin{proof} Let $ a < x < b ,$ set $T = \tau_{x, a } \wedge \tau_{x, b } ,$ and suppose that, almost surely, the process does not explode in finite time. We show that in this case $ \tau_{x, a } = \infty $ with positive probability.
Indeed, suppose that $ \tau_{x, a } < \infty $ almost surely. Then \eqref{scaleproba} holds, and letting $b \to \infty $ we obtain \eqref{eq:scale2}, implying that $ \tau_{x, a } = \infty $ with positive probability, which is a contradiction. \end{proof}
\begin{example}Choose $\alpha \left( x\right) =x^{2}, $ $\beta \left( x\right) =1+x^{2}$ so that $\gamma \left( x\right) =1+1/x^{2}$
and $\Gamma \left( x\right) =x-1/x $ with $\Gamma \left( 0\right) =-\infty $ and $\Gamma \left( \infty \right) =\infty .$
Choose also $k\left( x\right) =e^{-x/2}. $ Then Assumption \ref{ass:C} is violated: the survival function of the big upward jumps decays too slowly in comparison to the decay of $ e^{ - \Gamma}.$ The speed density is \begin{equation*} \pi \left( x\right) =C\frac{k\left( x\right) e^{\Gamma \left( x\right) }}{ \alpha \left( x\right) }=Cx^{-2}e^{x/2 -1/x} \end{equation*} which is integrable at $ 0 $ but not at $ \infty .$ The process explodes (has an infinite number of upward jumps in finite time) with positive probability. \end{example}
\subsection{First moments of hitting times}
Let $a > 0 .$ We seek positive solutions of \begin{equation*} \left( \mathcal{G}\phi _{a}\right) \left( x\right) =-1, \quad x \geq a, \end{equation*} with boundary condition $\phi _{a}\left( a\right) =0.$ The above is equivalent to \begin{equation*} \left( \mathcal{G}\phi _{a}\right) \left( x\right) =-\alpha \left( x\right) \phi _{a}^{\prime }\left( x\right) +\frac{\beta \left( x\right) }{k\left( x\right) }\int_{x}^{\infty }k\left( y\right) \phi _{a}^{\prime }\left( y\right) dy=-1. \end{equation*} This is also \begin{equation*} -k\left( x\right) \phi _{a}^{\prime }\left( x\right) +\gamma \left( x\right) \int_{x}^{\infty }k\left( y\right) \phi _{a}^{\prime }\left( y\right) dy=- \frac{k\left( x\right) }{\alpha \left( x\right) }. \end{equation*} Putting $U\left( x\right) :=\int_{x}^{\infty }k\left( y\right) \phi _{a}^{\prime }\left( y\right) dy$, the latter integro-differential equation reads $U^{\prime }\left( x\right) =-\gamma \left( x\right) U\left( x\right) - \frac{k\left( x\right) }{\alpha \left( x\right) }.$ Supposing that $\int^{+\infty} \pi (y ) dy < \infty $ (recall \eqref{C5}), this leads to \begin{eqnarray*} U\left( x\right) &=&e^{-\Gamma \left( x\right) }\int_{x}^{\infty }e^{\Gamma \left( y\right) }\frac{k\left( y\right) }{\alpha \left( y\right) }dy , \\ -U^{\prime }\left( x\right) &=&\gamma \left( x\right) e^{-\Gamma \left( x\right) }\int_{x}^{\infty }e^{\Gamma \left( y\right) }\frac{k\left( y\right) }{\alpha \left( y\right) }dy+\frac{k\left( x\right) }{\alpha \left( x\right) }=k\left( x\right) \phi _{a}^{\prime }\left( x\right) , \end{eqnarray*} so that \begin{eqnarray} \label{eq:phia} \phi _{a}\left( x\right) &=&\int_{a}^{x}dy\frac{\gamma \left( y\right) }{ k\left( y\right) }e^{-\Gamma \left( y\right) }\int_{y}^{\infty }e^{\Gamma \left( z\right) }\frac{k\left( z\right) }{\alpha \left( z\right) } dz+\int_{a}^{x}\frac{dy}{\alpha \left( y\right) } \nonumber \\ &=&\int_{a}^{\infty }dz\pi \left( z\right) \left[ s_1\left( a\right) -s_1 \left( x\wedge z\right) \right] +\int_{a}^{x}\frac{dy}{\alpha \left( y\right) }. \end{eqnarray}
Notice that $[ a, \infty ) \ni x \mapsto \phi_a ( x) $ is non-decreasing and that $\phi_a ( x) < \infty $ for all $x > a > 0$ under our assumptions. Dynkin's formula implies that for all $x > a $ and all $t \geq 0, $ \begin{equation} \label{eq:later} \mathbf{E}_x ( t \wedge \tau_{x, a } )= \phi_a ( x)- \mathbf{E}_x ( \phi_a ( X_{t \wedge \tau_{x, a } } ) ). \end{equation} In particular, since $\phi_a ( \cdot ) \geq 0, $ \begin{equation*} \mathbf{E}_x ( t \wedge \tau_{x, a } ) \le \phi_a ( x) < \infty , \end{equation*} such that we may let $t \to \infty $ in the above inequality to obtain by monotone convergence that \begin{equation*} \mathbf{E}_x ( \tau_{x, a } ) \le \phi_a ( x) < \infty . \end{equation*} In a second step, we obtain from \eqref{eq:later}, using Fatou's lemma, that \begin{multline*} \mathbf{E}_x ( \tau_{x, a } ) = \lim_{ t \to \infty } \mathbf{E}_x ( t \wedge \tau_{x, a } ) = \phi_a ( x) - \lim_{ t \to \infty } \mathbf{E}_x ( \phi_a ( X_{t \wedge \tau_{x, a } } ) ) \\ \geq \phi_a ( x) -\mathbf{E}_x ( \liminf_{ t \to \infty } \phi_a ( X_{t \wedge \tau_{x, a } } ) ) = \phi_ a (x) , \end{multline*} where we have used that $\liminf_{ t \to \infty } \phi_a ( X_{t \wedge \tau_{x, a } } ) = \phi_a ( a) = 0 .$
As a consequence we have just shown the following
\begin{proposition}\label{prop:hitting} Suppose that $\int^{+\infty }\pi (y)dy<\infty .$ Then $\mathbf{E}_{x}(\tau _{x,a})=\phi _{a}(x)<\infty $ for all $0<a<x,$ where $\phi _{a}$ is given as in \eqref{eq:phia}. \end{proposition}
\begin{remark} The last term $\int_{a}^{x}\frac{dy}{\alpha \left( y\right) }$ in the RHS of the expression of $\phi _{a}\left( x\right) $ in \eqref{eq:phia} is the time needed for the deterministic flow to first hit $a$ starting from $x>a,$ which is a lower bound of $\phi _{a}\left( x\right) $. Considering the tail function of the speed density $\pi \left( y\right) $, namely $\overline{\pi } \left( y\right) :=$ $\int_{y}^{\infty }e^{\Gamma \left( z\right) }\frac{ k\left( z\right) }{\alpha \left( z\right) }dz$, the first term in the RHS expression of $\phi _{a}\left( x\right) $ is \begin{equation*} \int_{a}^{x}-ds_{1}\left( y\right) \overline{\pi }\left( y\right) =-\left[ s_{1}\left( y\right) \overline{\pi }\left( y\right) \right] _{a}^{x}-\int_{a}^{x}s_{1}\left( y\right) \pi \left( y\right) dy, \end{equation*} emphasizing the importance of the couple $\left( s_{1}\left( \cdot \right) ,\pi \left( \cdot \right) \right) $ in the evaluation of $\phi _{a}\left( x\right) $. If $a$ is a small critical value below which the population can be considered in danger, $\phi _{a}\left( x\right) $ is the mean time to such a `quasi-extinction' event when the initial size of the population was $x$. \end{remark}
\begin{remark} Notice that the above discussion is only possible for couples $0<a<x, $ since starting from $x,$ $X_{t\wedge \tau _{x,a}}\geq a$ for all $t.$ A similar argument does not hold true for $x<b$ and the study of $\tau _{x,b}.$ \end{remark}
\subsection{Mean first hitting time of $0$} Suppose that $ \Gamma ( 0 ) > - \infty .$ Then for flows $x_{t}\left( x\right) $ that go extinct in finite time $ t_0\left( x\right) $, under the condition that $\int^{+\infty }\pi (y)dy<\infty ,$ one can let $a\rightarrow 0$ in the expression of $ \phi _{a}\left( x\right) $ to obtain \begin{equation*} \phi _{0}\left( x\right) =\int_{0}^{x}dy\frac{\gamma \left( y\right) }{ k\left( y\right) }e^{-\Gamma \left( y\right) }\int_{y}^{\infty }e^{\Gamma \left( z\right) }\frac{k\left( z\right) }{\alpha \left( z\right) } dz+\int_{0}^{x}\frac{dy}{\alpha \left( y\right) }, \end{equation*} which is the expected time to eventual extinction of $X$ starting from $x, $ that is, $\phi _{0}\left( x\right) =\mathbf{E}\tau _{x,0} .$
The last term $\int_{0}^{x}\frac{dy}{\alpha \left( y\right) }=t_0\left( x\right) <\infty $ in the RHS expression of $\phi _{0}\left( x\right) $ is the time needed for the deterministic flow to first hit $0$ starting from $ x>0,$ which is a lower bound of $\phi _{0}\left( x\right) $.
Notice that under the conditions $ t_0 ( x) < \infty $ and $ k(0 ) < \infty, $ we have $\int_{0 }\pi (y)dy<\infty ,$ so that $ \pi$ can be normalized to a probability density. It is easy to see that $ \Gamma ( 0 ) > - \infty $ then implies that $ \phi_0 ( x) < \infty .$
\begin{example}(Linear release at constant jump rate): Suppose $\alpha \left( x\right) =\alpha _{1}>0,$ $\beta \left( x\right) =\beta _{1}>0,$ $\gamma \left( x\right) =\gamma _{1}=\beta _{1}/\alpha _{1},$ $\Gamma \left( x\right) =\gamma _{1}x$ with $\Gamma \left( 0\right) =0>-\infty $. Choose $k\left( x\right) =e^{-x}.$
State $0$ is reached in finite time $t_{0}\left( x\right) =x/\alpha _{1},$ and it turns out to be reflecting. We have $\int_{0}\pi \left( x\right) dx<\infty $ and $\int^{\infty }\pi \left( x\right) dx<\infty $ if and only if $\gamma _{1}<1.$ In such a case, the first integral term in the above expression of $\phi _{0}\left( x\right) $ is $\gamma _{1}/\left[ \alpha _{1}\left( 1-\gamma _{1}\right) \right] x$, so that $\phi _{0}\left( x\right) =x/\left[ \alpha _{1}\left( 1-\gamma _{1}\right) \right] <\infty .$ \end{example}
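This closed form can be cross-checked by evaluating the double-integral expression of $\phi _{0}\left( x\right) $ numerically. Below is a minimal Python sketch (our function names; the parameter values $\alpha _1=1,$ $\gamma _1=1/2$ and the truncation of the inner integral at $y+60$ are illustrative choices); with these parameters the closed form gives $\phi _{0}\left( x\right) =2x.$

```python
import math

ALPHA1, GAMMA1 = 1.0, 0.5  # illustrative parameters with gamma1 < 1

def simpson(f, lo, hi, n):
    """Composite Simpson rule with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += f(lo + i * h) * (4 if i % 2 else 2)
    return total * h / 3

def phi0(x):
    """phi_0(x) for alpha=alpha1, beta=beta1, k(y)=e^{-y}, Gamma(y)=gamma1*y:
    int_0^x (gamma/k) e^{-Gamma} [int_y^inf e^{Gamma} k/alpha dz] dy + x/alpha1."""
    inner = lambda y: simpson(
        lambda z: math.exp((GAMMA1 - 1.0) * z) / ALPHA1, y, y + 60.0, 1000)
    outer = lambda y: GAMMA1 * math.exp((1.0 - GAMMA1) * y) * inner(y)
    return simpson(outer, 0.0, x, 200) + x / ALPHA1
```

The numerical values agree with $x/\left[ \alpha _{1}\left( 1-\gamma _{1}\right) \right]$ to quadrature accuracy.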
\section{Non-explosion and Recurrence} In this section we come back to the scale function $s $ introduced in \eqref{eq:s2} above. Despite the fact that we cannot use $s $ to obtain explicit expressions for exit probabilities, we show how we might use it to obtain Foster-Lyapunov criteria in the spirit of Meyn and Tweedie \cite{MT} that imply the non-explosion of the process together with its recurrence under additional irreducibility properties.
Let $ S_1 < S_2 < \ldots < S_n < \ldots $ be the successive jump times of the process and $ S_\infty = \lim_{n \to \infty } S_n.$ We start discussing how we can use the scale function $s$ to obtain a general criterion for non-explosion of the process, that is, $ S_\infty = + \infty $ almost surely.
\begin{proposition}\label{prop:MT} Suppose $ \Gamma ( \infty ) = \infty $ and suppose that Assumption \ref{ass:C} holds. Suppose also that $ \beta $ is continuous on $ [0, \infty ) .$ Let $ V $ be any $\mathcal{C}^1$ function defined on $ [0, \infty ), $ such that $ V(x) = 1+s(x) $ on $ [1, \infty ) $ and such that $V (x) \geq 1/2 $ for all $x. $ Then $V$ is a norm-like function in the sense of Meyn and Tweedie \cite{MT}, and we have \begin{enumerate} \item $\mathcal{G} V (x) =0, $ $\forall x\geq 1.$
\item $\sup_{x\in [0,1]} |\mathcal{G}V (x)| < \infty.$ \end{enumerate} As a consequence, $ S_{\infty} = \sup_n S_n = \infty $ almost surely, so that $ X $ is non-explosive. \end{proposition}
\begin{proof} We check that $ V$ satisfies the condition (CD0) of \cite{MT}. It is evident that $ V$ is norm-like: $ \lim_{x \to \infty }V( x) = 1 + \lim_{x \to \infty } s ( x) = 1 + s ( \infty ) = \infty ,$ because Assumption \ref{ass:C} holds. Moreover, since $K(x,y)= \frac{k(y)}{k(x)} $, $$ \mathcal{G} V (x) = -\alpha(x)V' (x)+\frac{\beta(x)}{k(x)}\int_x^{\infty}k(y) V' (y)dy. $$ Since $V' (x)= s'(x) $ and $ V' (y) = s ' (y) $ for all $ 1 \le x \le y , $ we have $ \mathcal{G} V =\mathcal{G}{s}=0 $ on $[1,\infty[.$
For the second point, for $ x \in ]0,1[, $ \begin{eqnarray*} \mathcal{G}V(x) &=& -\alpha(x)V'(x)+\frac{\beta(x)}{k(x)}\int_x^{\infty}k(y)V'(y)dy \\
&=& -\alpha(x)V'(x)+\frac{\beta(x)}{k(x)}\int_1^{\infty}k(y)V'(y)dy + \beta(x)\int_x^1 K(x,y)V'(y)dy. \end{eqnarray*} $\alpha(x)V'(x) $ is continuous and thus bounded on $[0,1]. $ Moreover, for all $y\geq x, $ $K(x,y) \leq 1 ,$ implying that $\beta(x)\int_x^1 K(x,y)V'(y)dy \leq \beta(x)\int_x^1 V'(y)dy < \infty.$ We also have $$\int_1^{\infty}k(y)V'(y)dy=\int_1^{\infty}k(y)s'(y)dy .$$ Thus, using the scale equation $k(1)s'(1)=\gamma(1)\int_1^{\infty}k(y)s'(y)dy $ at the point $x=1,$ we have $$\frac{\beta(x)}{k(x)} \int_1^{\infty}k(y)V'(y)dy= \frac{\beta(x)}{k(x)} k(1)V'(1)/\gamma(1) < \infty $$ because $\beta $ is continuous on $[0,1]$ and $k$ takes finite, strictly positive values on $ (0, \infty) .$ As a consequence,
$\sup_{x\in [0,1]} |\mathcal{G}V(x)| < \infty. $ \end{proof}
We close this subsection with a stronger Foster-Lyapunov criterion implying the existence of finite hitting time moments.
\begin{proposition} Suppose there exist $ x_* > 0, $ $ c > 0 $ and a positive function $V$ such that $\mathcal{G}V (x) \leq -c $ for all $ x \geq x_* .$ Then for all $ x \geq a \geq x_*,$ $$\mathbf{E}(\tau_{x,a}) \leq \frac{V(x)}{c}.$$ \end{proposition}
\begin{proof} Using Dynkin's formula, we have for $ x \geq a \geq x_*, $ $$ \mathbf{E}(V(X_{t\wedge \tau_{x,a}}))= V(x)+ \mathbf{E}(\int_0^{t\wedge \tau_{x,a}} \mathcal{G}V(X_s) ds )
\leq V(x)-c \mathbf{E}(t\wedge \tau_{x,a} ) , $$ such that $$ \mathbf{E}(t\wedge \tau_{x,a} )\leq \frac{V(x)}{c},$$ which implies the assertion, letting $t \to \infty .$ \end{proof}
\begin{example} Suppose $\alpha (x)=1+x, $ $\beta (x)=x$ and $ k(x)=e^{-2x}.$ We choose $V(x)=e^{x} .$ Then \begin{equation*} \left( \mathcal{G}V\right) \left( x\right) =-e^{x}\leq -1,\forall x\geq 0. \end{equation*} As a consequence $\mathbf{E}(\tau _{x,a})<\infty ,$ for all $a>0.$ \end{example}
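The identity $\left( \mathcal{G}V\right) \left( x\right) =-e^{x}$ used in this example is easy to verify numerically. A small Python sketch (our function name; the integral $\int_x^\infty k(y)V'(y)\,dy=\int_x^\infty e^{-y}dy$ is truncated at $x+50$, an illustrative choice):

```python
import math

def G_V(x, n=4000, cutoff=50.0):
    """(GV)(x) = -alpha(x)V'(x) + (beta(x)/k(x)) int_x^inf k(y)V'(y) dy
    for V(x)=e^x, alpha(x)=1+x, beta(x)=x, k(y)=e^{-2y}."""
    h = cutoff / n
    # trapezoid rule for int_x^{x+cutoff} e^{-2y} e^y dy = int e^{-y} dy
    vals = [math.exp(-(x + i * h)) for i in range(n + 1)]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return -(1.0 + x) * math.exp(x) + x * math.exp(2.0 * x) * integral
```

Analytically the integral equals $e^{-x}$, giving $\left( \mathcal{G}V\right) \left( x\right) =-(1+x)e^{x}+xe^{x}=-e^{x}$, in agreement with the numerical values.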
\subsection{Irreducibility and Harris recurrence}
In this section we impose that $ \Gamma ( \infty ) = \infty , $ so that non-trivial scale functions do exist. We also assume that Assumption \ref{ass:C} holds, since otherwise the process either is transient at $ \infty $ or explodes in finite time. Then the function $ V $ introduced in Proposition \ref{prop:MT} is a Lyapunov function. This is {\it almost} the Harris recurrence of the process; all that remains is to establish an irreducibility property, which we check now.
\begin{theorem}\label{theo:petite} Suppose we are in the separable case, that $k \in C^1 $ and that $0$ is inaccessible, that is, $t_0 ( x) = \infty $ for all $x.$ Then every compact set $ C \subset ] 0, \infty [ $ is `petite' in the sense of Meyn and Tweedie. More precisely, there exist $ t > 0, $ $ \alpha \in (0, 1 ) $ and a probability measure $ \nu $ on $ ( \R_+ , {\mathcal{B} }( \R_+) ) ,$ such that $$ P_t (x, dy ) \geq \alpha \mathbf{1}_C (x) \nu (dy ) .$$ \end{theorem}
\begin{proof} Suppose w.l.o.g. that $ C = [a, b ] $ with $ 0 < a < b .$ Fix any $ t > 0 .$ The idea of our construction is to impose that all processes $ X_s ( x), a \le x \le b , $ have one single common jump during $ [0, t ].$ Indeed, notice that for each $ x \in C, $ the jump rate of $ X_s ( x) $ is given by $ \beta ( x_s(x) ) $ taking values in a compact set $ [ \beta_* , \beta^* ] $ where $ \beta_* = \min \{ \beta ( x_s ( x ) ), 0 \le s \le t, x \in C \} $ and $ \beta^* = \max \{ \beta ( x_s ( x ) ), 0 \le s \le t, x \in C \} .$ Notice that $ 0 < \beta_* < \beta^* < \infty $ since $ \beta $ is supposed to be positive on $ (0, \infty ).$ We then construct all processes $ X_s (x) , s \le t, x \in C , $ using the same underlying Poisson random measure $ M .$ It thus suffices to impose that $E$ holds, where $$ E = \{ M ( [0, t - \varepsilon] \times [ 0 , \beta^* ] ) = 0 , M ( [t- \varepsilon, t] \times [0, \beta_* ] ) = 1, M ( [t- \varepsilon, t] \times ] \beta_*, \beta^* ] ) = 0\} .$$ Indeed, the above implies that up to time $ t - \varepsilon, $ none of the processes $ X_s ( x) , x \in C, $ jumps. The second and third assumption imply moreover that the unique jump time, call it $S,$ of $ M $ within $ [ t- \varepsilon, t ] \times [ 0 , \beta^* ] $ is a common jump of all processes. For each value of $x \in C,$ the associated process $ X (x) $ then chooses a new after-jump position $y$ according to \begin{equation}\label{eq:transition}
\frac{1}{k( x_S (x)) }| k'(y)| dy \mathbf{1}_{ \{ y \geq x_S ( x) \} }. \end{equation}
{\bf Case 1.} Suppose $k$ is strictly decreasing, that is, $|k'| (y ) > 0 $ for all $y.$ Fix then any open ball $ B \subset [ b, \infty [ $ and notice that $\mathbf{1}_{ \{ y \geq x_S ( x) \} } \geq \mathbf{1}_B ( y ) ,$ since $ b \geq x_S (x) .$ Moreover, since $ k$ is decreasing, $ 1/ k (x_S (x) ) \geq 1/ k ( x_t (a) ).$ Therefore, the transition density given in \eqref{eq:transition} can be lower-bounded, independently of $x, $ by
$$ \frac{1}{k( x_t (a)) } \mathbf{1}_B ( y ) | k'(y)| dy = p \tilde \nu ( dy ) , $$
where $\tilde \nu ( dy ) = c \mathbf{1}_B ( y ) | k'(y)| dy , $ with $c$ the constant normalizing $\tilde \nu$ to a probability measure, and where $ p = \frac{1}{c\, k( x_t (a))} .$ In other words, on the event $E , $ with probability $p, $ all particles choose a new and common position $ y \sim \tilde \nu ( dy ) $ and couple.
{\bf Case 2.} $| k'| $ is different from $0$ on a ball $ B $ (but not necessarily on the whole state space). We suppose w.l.o.g. that $B$ has compact closure. Then it suffices to take $t$ sufficiently large in the first step such that $ x_{t- \varepsilon } (b) < \inf B .$ Indeed, this implies once more that $ \mathbf{1}_B ( y ) \le \mathbf{1}_{ \{ y \geq x_s ( x) \}} $ for all $ x \in C $ and for $s$ the unique common jump time.
{\bf Conclusion.} In any of the above cases, let $ \bar b := \sup \{ x : x \in B \} < \infty $ and restrict the set $ E$ to $$ E' = E \cap \{ M ( [ t - \varepsilon, t ] \times ] \beta_* , \bar b ] ) = 0 \} .$$ Putting $ \alpha := p \, \mathbb{P} ( E' ) $ and
$$ \nu (dy ) = \int_{ t - \varepsilon }^t {\mathcal L} ( S \mid E' ) (ds) \int \tilde \nu (dz) \delta_{ x_{ t- s} (z) } (dy )$$ then allows us to conclude. \end{proof}
\begin{remark} If $\beta $ is continuous on $ [0, \infty ) $ with $ \beta (0 ) > 0 $ and if moreover $k(0) < \infty, $ the above construction can be extended to any compact set of the form $ [0, b ], b < \infty $ and to the case where $ t_0 ( x) < \infty .$ \end{remark}
As a consequence of the above considerations we obtain the following theorem. \begin{theorem}\label{theo:harris} Suppose that $ \beta ( 0 ) > 0, k ( 0) < \infty ,$ that $ \Gamma ( \infty ) = \infty $ and moreover that Assumption \ref{ass:C} holds. Then the process is recurrent in the sense of Harris and its unique invariant measure is given by $ \pi.$ \end{theorem}
\begin{proof} Condition (CD1) of \cite{MT} holds with $ V$ given as in Proposition \ref{prop:MT} and with compact set $ C = [0, 1 ].$ By Theorem \ref{theo:petite}, all compact sets are `petite'. Then Theorem 3.2 of \cite{MT} allows us to conclude. \end{proof}
\begin{example} A meaningful recurrent example consists of choosing $\beta \left(x\right) =\beta _{1}/x,$ $\beta _{1}>0$ (the surge rate decreases like $1/x $), $\alpha \left( x\right) =\alpha _{1}/x $ (finite time extinction of $x_{t}$), $\alpha _{1}>0,$ and $k\left( y\right) =e^{-y}.$ In this case, all compact sets are `petite'. Moreover, with $\gamma _{1}=\beta _{1}/\alpha _{1}$, we have $\Gamma \left( x\right) =\gamma_{1}x$ and \begin{equation*} \pi\left( y\right) =\frac{y}{\alpha _{1}}e^{\left(\gamma_{1}-1\right)y}, \end{equation*} which can be normalized to a probability density if $\gamma _{1} < 1 . $ This also implies that Assumption \ref{ass:C} is satisfied, so that $ s $ can be used to define a Lyapunov function. The associated process $X$ is positive recurrent if $ \gamma_1 < 1 ,$ null-recurrent if $ \gamma_1 = 1.$ \end{example}
\section{The embedded chain}
In this section, we illustrate some of the previously established theoretical results by simulations of the embedded chain that we are going to define now. Defining $ T_1 = S_1, T_n = S_n - S_{n-1} , n \geq 2, $ the successive inter-jump waiting times, we have \begin{eqnarray*} \mathbf{P}\left( T_{n}\in dt,X_{S_{n}}\in dy\mid X_{S_{n-1}}=x\right) &=&dt\beta \left( x_{t}\left( x\right) \right) e^{-\int_{0}^{t}\beta \left( x_{s}\left( x\right) \right) ds}K\left( x_{t}\left( x\right) ,dy\right) \\ &=&dt\beta \left( x_{t}\left( x\right) \right) e^{-\int_{x_{t}\left( x\right) }^{x}\gamma \left( z\right) dz}K\left( x_{t}\left( x\right) ,dy\right) . \end{eqnarray*} The embedded chain is then defined through $Z_{n}:=X_{S_{n}}, n \geq 0.$ If $0$ is not absorbing, for all $ x\geq 0$ \begin{eqnarray*} \mathbf{P}\left( Z_{n}\in dy\mid Z_{n-1}=x\right) &=&\int_{0}^{\infty }dt\beta \left( x_{t}\left( x\right) \right) e^{-\int_{x_{t}\left( x\right) }^{x}\gamma \left( z\right) dz}K\left( x_{t}\left( x\right) ,dy\right) \\ &=&e^{-\Gamma \left( x\right) }\int_{0}^{x}dz\gamma \left( z\right) e^{\Gamma \left( z\right) }K\left( z,dy\right) , \end{eqnarray*} where the last line is valid for $ x> 0 $ only, and only if $ t_0 ( x) = \infty .$ This implies that
$Z_{n}$ is a time-homogeneous discrete-time Markov chain on $\left[ 0,\infty \right) .$
\begin{remark} \label{rem7} $\left( S_{n},Z_{n}\right) _{n\geq 0}$ is also a discrete-time Markov chain on $\mathbb{R}_{+}^{2}$ with transition probabilities given by \begin{equation*} \mathbf{P}\left( S_{n}\in dt\mid Z_{n-1}=x,S_{n-1}=s\right) =dt\beta \left( x_{t-s}\left( x\right) \right) e^{-\int_{0}^{t-s}\beta \left( x_{s^{\prime }}\left( x\right) \right) ds^{\prime }}\text{, }t\geq s, \end{equation*} and \begin{equation*} \mathbf{P}\left( Z_{n}\in dy\mid Z_{n-1}=x,S_{n-1}=s\right) =\int_{0}^{\infty }dt\beta \left( x_{t}\left( x\right) \right) e^{-\int_{x_{t}\left( x\right) }^{x}\gamma \left( z\right) dz}K\left( x_{t}\left( x\right) ,dy\right) \end{equation*} independent of $s.$ Note \begin{equation*} \mathbf{P}\left( T_{n}\in d\tau \mid Z_{n-1}=x\right) =d\tau \beta \left( x_{\tau }\left( x\right) \right) e^{-\int_{0}^{\tau }\beta \left( x_{s}\left( x\right) \right) ds},\tau \geq 0 . \end{equation*} \end{remark} Coming back to the marginal $Z_{n}$ and assuming $\Gamma \left( 0\right) =-\infty ,$ the arguments of Sections 2.5 and 2.6 in \cite{GHL} imply that \begin{equation*} \mathbf{P}\left( Z_{n}>y\mid Z_{n-1}=x\right) =e^{-\Gamma \left( x\right) }\int_{0}^{x}dz\gamma \left( z\right) e^{\Gamma \left( z\right) }\int_{y}^{\infty }K\left( z,dy^{\prime }\right) \end{equation*} \begin{equation} =1-e^{\Gamma \left( x\wedge y\right) -\Gamma \left( x\right) }+e^{-\Gamma \left( x\right) }\int_{0}^{x\wedge y}dz\gamma \left( z\right) e^{\Gamma \left( z\right) }K\left( z,y\right) . \label{S1} \end{equation} To obtain the last line, we have used that $K\left( z,y\right) =1$ for all $y\leq z$ and, whenever $z<y$, we have split the second integral in the first line into the two parts corresponding to $z<y\leq x$ and to $z\leq x<y$.
To simulate the embedded chain, we have to decide first if, given $Z_{n-1}=x$ , the forthcoming move is down or up.
- A move up occurs with probability given by $ \mathbf{P}\left( Z_{n}> x\mid Z_{n-1}=x\right)=e^{-\Gamma \left( x\right) }\int_{0}^{x}dz\gamma \left( z\right) e^{\Gamma \left( z\right) } K\left( z,x\right) . $
- A move down occurs with complementary probability.
As soon as the type of move is fixed (down or up), to decide where the process goes precisely, we must use the inverse of the corresponding distribution function (\ref{S1}) (with $y\leq x$ or $y>x$), conditioned on the type of move.
\begin{remark}
If state $0$\ is absorbing, Eq. (\ref{S1}) is valid only when $x>0,$\ and the boundary condition $\mathbf{P}\left( Z_{n}=0\mid Z_{n-1}=0\right) =1$\ should be added.
\end{remark}
In the following simulations, as before, we work in the separable case $K(x,y)=\frac{k(y) }{k(x)} ,$ where we choose $k(x)=e^{-x} $ in the first simulation and $k(x)=1/(1+x^2) $ in the second. Moreover, we take $\alpha(x)=\alpha_1 x^a$ and $ \beta(x)=\beta_1 x^b $ with $\alpha_1=1, $ $a=2,$ $\beta_1=1$ and $b=1.$ In these cases, there is no finite time extinction of the process $x_t(x) ;$ that is, in both cases, state $0$ is not accessible.
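For the exponential kernel $k(x)=e^{-x}$ with these parameters ($\alpha(x)=x^{2},$ $\beta(x)=x$), the embedded chain can be simulated in closed form: the flow is $x_{t}(x)=x/(1+xt),$ the inter-jump time has distribution function $1-1/(1+xt)$ (inverted exactly below), and the after-jump position is the pre-jump position plus an $\mathrm{Exp}(1)$ overshoot, since $K(z,dy)=e^{-(y-z)}dy$ for $y\geq z.$ A Python sketch (our function names; the kernel $k(x)=1/(1+x^{2})$ would be handled analogously by inverting its jump distribution):

```python
import random

def embedded_chain(x0, n_steps, rng):
    """Simulate Z_0, ..., Z_{n_steps} for alpha(x)=x^2, beta(x)=x, k(y)=e^{-y}.
    If V is uniform on (0,1), the jump fires at position x_T(z) = z*(1-V);
    the new state is that position plus an Exp(1) upward jump."""
    z, path = x0, [x0]
    for _ in range(n_steps):
        v = rng.random()
        z = z * (1.0 - v)          # downward flow until the jump time T
        z += rng.expovariate(1.0)  # upward jump with Exp(1) overshoot
        path.append(z)
    return path
```

Plotting `embedded_chain(1.0, 100, random.Random(1))` gives trajectories qualitatively comparable to the first simulation below.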
\begin{center} \includegraphics[scale=0.75]{R1} \end{center}
Notice that, in accordance with the fact that $k(x)=1/(1+x^2)$ has more slowly decaying tails than $k(x)= e^{-x},$ the process with jump distribution $k(x)=1/(1+x^2)$ reaches higher maxima than the process with $k(x)= e^{-x}.$
The graphs above do not provide any information about the jump times. In what follows we take this additional information into account and simulate the values $Z_{n}$ of the embedded process as a function of the jump times $S_{n} .$ To do so we must calculate the distribution $\mathbf{P}\left( S_{n}\leq t\mid X_{S_{n-1}}=x,S_{n-1}=s\right) .$ Using Remark \ref{rem7} we have \begin{multline*} \mathbf{P}\left( S_n\leq t \mid X_{S_{n-1}}=x, S_{n-1}=s\right) = \int_{s}^{t}dt^{\prime}\beta \left( x_{t^{\prime}-s}\left( x\right) \right) e^{-\int_{0}^{t^{\prime}-s}\beta \left( x_{s^{\prime }}\left( x\right) \right) ds^{\prime }} \\ =\int_{0}^{t-s}du\beta \left( x_{u}\left( x\right) \right) e^{-\int_{0}^{u}\beta \left( x_{s'}\left( x\right) \right) ds'} = 1- e^{-\int_{0}^{t-s}\beta \left( x_{s'}\left( x\right) \right) ds'} = 1- e^{-[\Gamma(x)-\Gamma(x_{t-s}(x))]}. \end{multline*}
The simulation of the jump times $S_n $ then goes through a simple inversion of the conditional distribution function $\mathbf{P}\left( S_n\leq t \mid X_{S_{n-1}}=x, S_{n-1}=s\right) .$ In the following simulations we use the same parameters as in the previous simulations.
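For the parameters used in these simulations ($\alpha(x)=x^{2},$ $\beta(x)=x,$ so that $\gamma(x)=1/x,$ $\Gamma(x)=\log x$ and $x_{u}(x)=x/(1+xu)$), the conditional distribution function above reduces to $1-1/(1+x(t-s)),$ and the inversion is explicit: $t-s=V/\big(x(1-V)\big)$ with $V$ uniform on $(0,1).$ A Python sketch (our function name):

```python
import random

def next_jump_time(x, s, rng):
    """Sample S_n given Z_{n-1} = x and S_{n-1} = s by inverting
    P(S_n <= t | x, s) = 1 - 1/(1 + x*(t - s))."""
    v = rng.random()
    return s + v / (x * (1.0 - v))
```

Iterating this sampler together with the embedded-chain update yields trajectories $(S_n,Z_n)$ of the kind shown below.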
\begin{center} \includegraphics[scale=0.8]{R2} \end{center}
The above graphs give the positions $Z_n$ as a function of the jump times $S_n$. The waiting times between successive jumps are longer in the first process than in the second one. Since we use the same jump rate function in both processes, and since this rate is an increasing function of the position, this is due to the fact that jumps lead to higher values in the second process than in the first, so that jumps occur more frequently there.
\begin{center} \includegraphics[scale=0.7]{R3} \end{center}
These graphs represent the sequence $T_{n}=S_{n}-S_{n-1} $ of inter-jump waiting times for the two processes, showing once more that these waiting times are indeed longer in the first process than in the second.
\section{The extremal record chain}
Of interest are the sequences of upper record times and record values of $Z_{n}$, namely \begin{eqnarray*} R_{n} &=&\inf \left\{ r\geq 1:r>R_{n-1},Z_{r}>Z_{R_{n-1}}\right\} , \\ Z_{n}^{*} &=&Z_{R_{n}}. \end{eqnarray*} Unless $X$ (and so $Z_{n}$) goes extinct, $ Z_{n}^{*}$ is a strictly increasing sequence tending to $\infty .$
Following \cite{Adke}, with $\left( R_{0}=0,Z_{0}^{*}=x\right) $,\ $\left( R_{n},Z_{n}^{*}\right) _{n\geq 0}$\ clearly is a Markov chain with transition probabilities for $y>x$ \begin{eqnarray*} \overline{P}^{*}\left( k,x,y\right) &:=&\mathbf{P}\left( R_{n}=r+k,Z_{n}^{*}>y\mid R_{n-1}=r,Z_{n-1}^{*}=x\right) \\ &=&\overline{P}\left( x,y\right) \text{ if }k=1 \\ &=&\int_{0}^{x}...\int_{0}^{x}\prod_{l=0}^{k-2}P\left( x_{l},dx_{l+1}\right) \overline{P}\left( x_{k-1},y\right) \text{ if }k\geq 2, \end{eqnarray*} where $P\left( x,dy\right) =\mathbf{P}\left( Z_{n}\in dy\mid Z_{n-1}=x\right) ,$\ $\overline{P}\left( x,y\right) =\mathbf{P}\left( Z_{n}>y\mid Z_{n-1}=x\right) $ and $x_{0}=x.$
Clearly the marginal sequence $\left( Z_{n}^{*}\right) _{n\geq 0}$\ is Markov with transition matrix \begin{equation*} \overline{P}^{*}\left( x,y\right) :=\mathbf{P}\left( Z_{n}^{*}>y\mid Z_{n-1}^{*}=x\right) =\sum_{k\geq 1}\overline{P}^{*}\left( k,x,y\right), \end{equation*} but the record times marginal sequence $\left( R_{n}\right) _{n\geq 0}$\ is non-Markov. However \begin{equation*} \mathbf{P}\left( R_{n}=r+k\mid R_{n-1}=r,Z_{n-1}^{*}=x\right) =\overline{P} ^{*}\left( k,x,x\right) , \end{equation*} showing that the law of $A_{n}:=R_{n}-R_{n-1}$\ (the age of the $n$-th record) is independent of $R_{n-1}$ (although not of $Z_{n-1}^{*}$)$:$
\begin{equation*} \mathbf{P}\left( A_{n}=k\mid Z_{n-1}^{*}=x\right) =\overline{P}^{*}\left( k,x,x\right) ,\text{ }k\geq 1. \end{equation*}
Of particular interest is $\left( R_{1},Z_{1}^{*}=Z_{R_{1}}\right) $, the first upper record time and value, because $S_{R_{1}}$ is the first time $(X_t)_t $ exceeds the threshold $x$, and $Z_{R_{1}}$ is the corresponding overshoot. Its joint distribution is simply ($y>x$) \begin{equation*} P^{*}\left( k,x,dy\right) =\mathbf{P}\left( R_{1}=k,Z_{1}^{*}\in dy\mid R_{0}=0,Z_{0}^{*}=x\right) \end{equation*} \begin{eqnarray*} &=&P\left( x,dy\right) \text{ if }k=1 \\ &=&\int_{0}^{x}...\int_{0}^{x}\prod_{l=0}^{k-2}P\left( x_{l},dx_{l+1}\right) P\left( x_{k-1},dy\right) \text{ if }k\geq 2 . \end{eqnarray*} If $y_{c}>x$ is a critical threshold above which one wishes to evaluate the joint law of $\left( R_{1},Z_{1}^{*}=Z_{R_{1}}\right) $, then $ P^{*}\left( k,x,dy\right) /\overline{P}^{*}\left( k,x,y_{c}\right) $ for $y>y_{c}$ is the appropriate expression.
Note that $\mathbf{P}\left( R_{1}=k\mid R_{0}=0,Z_{0}^{*}=x\right) =\overline{P} ^{*}\left( k,x,x\right) $ and also that $$P^{*}\left( x,dy\right) :=\mathbf{P}\left( Z_{1}^{*}\in dy\mid Z_{0}^{*}=x\right) =\sum_{k\geq 1}P^{*}\left( k,x,dy\right) .$$
Of interest is also the number of records in the set $\left\{ 0,...,N\right\} :$ \begin{equation*} \mathcal{R}_{N}:=\#\left\{ n\geq 0:R_{n}\leq N\right\} =\sum_{n\geq 0} \mathbf{1}_{\left\{ R_{n}\leq N\right\} } . \end{equation*}
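The record quantities defined above are straightforward to extract from a simulated trajectory. The following minimal helpers (plain Python; the function names are ours, not from the paper) compute the record times $R_n$, the record values $Z_n^*$, and the count $\mathcal{R}_N$:

```python
def records(z):
    """Upper record times R_n and record values Z*_n = Z_{R_n} of a
    trajectory z = [Z_0, Z_1, ...], with the convention R_0 = 0 and
    Z*_0 = z[0]."""
    R, Zstar = [0], [z[0]]
    for r in range(1, len(z)):
        if z[r] > Zstar[-1]:    # strict upper record
            R.append(r)
            Zstar.append(z[r])
    return R, Zstar

def record_count(z, N):
    """Number of records among indices {0, ..., N}:
    R_N = #{n >= 0 : R_n <= N}."""
    R, _ = records(z[:N + 1])
    return len(R)
```

Applied to the simulated chains of the previous section, `records` returns exactly the $(R_n, Z_n^*)$ pairs plotted below.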
The following graphs represent the records of the two processes as a function of the ranks of the records, the one on the left with $k(x) = e^{-x}$ and the one on the right with $k(x) = 1/(1+x^2)$. Records occur more frequently in the first graph than in the second; on the other hand, the heights of the records are much lower in the first graph than in the second.
\begin{center} \includegraphics[scale=0.65]{R4} \end{center}
The following graphs give $Z_{R_n} $ as a function of $R_n$. We remark that the gap between the values of two consecutive records decreases over time, whereas the waiting time between two consecutive records becomes longer. In other words, the higher a record is, the longer it statistically takes to surpass it.
\begin{center} \includegraphics[scale=0.65]{R5} \end{center}
The following graphs give $A_n= R_n-R_{n-1} $ as a function of $n$. The gaps between two consecutive record times are much greater in the first graph than in the second. We also note that the maximum time gap is attained between the penultimate and the last record.
\begin{center} \includegraphics[scale=0.65]{R6} \end{center}
The following graphs give the records obtained from the simulation of the two processes, as a function of time. We remark that the curve increases slowly with time. Indeed, the first simulated process needed $5500$ units of time to reach its $12$th record, while the second simulated process needed $1500$ units of time to reach its $8$th record.
\begin{center} \includegraphics[scale=0.7]{R7} \end{center}
\section{Decay-Surge processes and the relation with Hawkes and Shot-noise processes} \subsection{Hawkes processes} In this section we study the particular case $\beta \left( x\right) =\beta _{1}x,$ $\beta _{1}>0$ (the surge rate increases linearly with $x$), $\alpha \left( x\right) =\alpha _{1}x$, $\alpha _{1}>0$ (exponentially declining population) and $k\left( y\right) =e^{-y}.$ In this case, with $\gamma _{1}=\beta _{1}/\alpha _{1}$, $\Gamma \left( x\right) =\gamma _{1}x.$
In this case, we note that $\Gamma (0)=0>-\infty .$ Therefore, there is a strictly positive probability that the process never jumps (in which case it is attracted to $0$). However, we have $t_{0}(x)=\infty $, so the process never hits $0$ in finite time. Finally, $\beta (0)=0$ implies that state $0$ is natural (absorbing and inaccessible).
Note that for this model, \begin{equation*} \pi \left( y\right) =\frac{1}{\alpha _{1}y}e^{\left( \gamma _{1}-1\right) y} \end{equation*} and we may take a version of the scale function given by \begin{equation*} s (x) =\frac{\gamma _{1}}{1-\gamma _{1}}[e^{-\left( \gamma _{1}-1\right) y} ]_{0}^{x } = \frac{\gamma _{1}}{1-\gamma _{1}} \left( e^{\left( 1 - \gamma _{1} \right) x}- 1 \right) . \end{equation*} Clearly, Assumption \ref{ass:C} is satisfied if and only if $ \gamma _{1}< 1. $ We call the case $ \gamma_1 < 1 $ {\it subcritical}, the case $ \gamma_1 > 1 $ {\it supercritical} and the case $ \gamma_1 = 1 $ {\it critical}.
{\bf Supercritical case.} It can be shown that the process does not explode almost surely, so that it is transient in this case (see Proposition \ref {cor:transience}). The speed density is integrable neither at $0$ nor at $ \infty $.
\textbf{Critical and subcritical case.} If $\gamma_1 < 1, $ then $\pi $ is integrable at $+\infty , $ and we find \begin{equation*} \phi _{a}\left( x\right) =-\left[ s\left( y\right) \overline{ \pi }\left( y\right) \right] _{a}^{x}-\int_{a}^{x}s \left( y\right) \pi \left( y\right) dy+\frac{1}{\alpha _{1}}\log \frac{x}{a}, \end{equation*} where \begin{equation*} \int_{a}^{x}s \left( y\right) \pi \left( y\right) dy=\frac{\gamma _{1}}{ \alpha _{1}\left( 1 - \gamma _{1} \right) }\left[ \ln \left( \frac{x}{a} \right) - \func{Ei}\left( \left( \gamma_{1}-1\right) x\right) + \func{Ei}\left( \left( \gamma _{1} -1\right) a\right)
\right] . \end{equation*} In the critical case $\gamma_1 = 1 $, the hitting time of $a$ is finite without having finite expectation.
In both the critical and the subcritical case, that is, when $\gamma _{1}\leq 1$, the process $X_t $ converges to $0$ as $t \to \infty $, as we shall now show.
Due to the additive structure of the underlying deterministic flow and the exponential jump kernel, we have the explicit representation \begin{equation} \label{eq:additive} X_t = e^{ - \alpha_1 t } x + \sum_{ n \geq 1 : S_n \le t } e^{ - \alpha_1 (t- S_n ) } Y_n, \end{equation} where the $(Y_n)_{n \geq 1 } $ are i.i.d. exponentially distributed random variables with mean $1$; for each $n$, $Y_n $ is independent of $ S_k, k \le n , $ and of $Y_k, k < n.$ Finally, in \eqref{eq:additive}, the process $X_t$ jumps at rate $\beta_1 X_{t- } .$
The above system is a \textit{linear Hawkes process} without immigration, with kernel function $h( t) = e^{ - \alpha_1 t } $ and with random jump heights $(Y_n)_{n \geq 1 } $ (see \cite{Hawkes}, see also \cite{bm}). Such a Hawkes process can be interpreted as an \textit{inhomogeneous Poisson process with branching}. Indeed, the additive structure in \eqref{eq:additive} suggests the following construction.
\begin{itemize} \item At time $0,$ we start with a Poisson process having time-dependent rate $\beta _{1}e^{-\alpha _{1}t}x.$
\item At each jump time $S$ of this process, a new (time inhomogeneous) Poisson process is born and added to the existing one. This new process has intensity $\beta _{1}e^{-\alpha _{1}(t-S)}Y,$ where $Y$ is exponentially distributed with parameter $1,$ independent of what has happened before. We call the jumps of this newborn Poisson process \textit{jumps of generation $1 $}.
\item At each jump time of generation $1,$ another time inhomogeneous Poisson process is born, of the same type, independently of anything else that has happened before. This gives rise to jumps of generation $2.$
\item The above procedure is iterated until it eventually stops since the remaining Poisson processes do not jump any more. \end{itemize}
The expected number of jumps of any of the offspring Poisson processes born at time $S$ is \begin{equation*} \beta_1 \mathbf{E } (Y) \int_{S}^\infty e^{ - \alpha_1 (t- S ) } dt = \gamma_1. \end{equation*} So we see that whenever $\gamma_1 \le 1 , $ we are considering a subcritical or critical Galton-Watson process, which goes extinct almost surely after a finite number of reproduction events. This extinction event is equivalent to the total number of jumps in the system being finite almost surely, so that after the last jump, $X_t$ just converges to $0 $ (without, however, ever reaching it). Notice that in the subcritical case $ \gamma_1 < 1 ,$ the speed density is integrable at $ \infty $, while it is not at $0$, corresponding to absorption in $0$.
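The branching construction above translates directly into a simulation scheme: an offspring process born at time $s$ with mark $y$ has total mass $\gamma_1 y$, hence a Poisson$(\gamma_1 y)$ number of jumps, placed at $s$ plus i.i.d. Exp$(\alpha_1)$ delays. A minimal sketch in Python (illustrative parameters; not the code used for the figures):

```python
import math, random

def _poisson(lam, rng):
    """Poisson sampler (Knuth's method; adequate for moderate lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def hawkes_cluster(x, alpha1=1.0, beta1=0.5, rng=random):
    """Branching (cluster) simulation of the jump times of the linear
    Hawkes process of \\eqref{eq:additive}: every jump at time s with
    mark y spawns Poisson(gamma1*y) children, displaced by independent
    Exp(alpha1) delays; the root at time 0 has deterministic mark x.
    The total number of jumps is a.s. finite when gamma1 <= 1."""
    gamma1 = beta1 / alpha1
    jumps = []
    stack = [(0.0, x)]                        # (birth time, mark)
    while stack:
        s, mark = stack.pop()
        for _ in range(_poisson(gamma1 * mark, rng)):
            t = s + rng.expovariate(alpha1)   # child jump time
            y = rng.expovariate(1.0)          # child's Exp(1) mark Y
            jumps.append(t)
            stack.append((t, y))
    return sorted(jumps)
```

In the subcritical case the expected total progeny is $\gamma_1 x/(1-\gamma_1)$, which can serve as a Monte Carlo sanity check.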
An interesting feature of this model is that it can exhibit a phase transition when $\gamma _{1}$ crosses the value $1$.
Finally, in the case of a linear Hawkes process with immigration we have $ \beta ( x) = \mu + \beta_1 x , $ with $\mu > 0.$ In this case, \begin{equation*} \pi( y) =\frac{1}{\alpha _{1}}e^{\left( \gamma _{1}-1\right) y } y^{ \mu/ \alpha_1 - 1 } \end{equation*} which is always integrable at $0$ and which can be normalized into a probability density in the subcritical case $\gamma_1 < 1$, corresponding to positive recurrence.
\begin{remark} An interpretation of the decay-surge process in terms of Hawkes processes is only possible in case of affine jump rate functions $ \beta, $ additive drift $ \alpha $ and exponential kernels $k$ as considered above. \end{remark}
\subsection{Shot-noise processes} Let $h\left( t\right) $, $t\geq 0,$ with $h\left( 0\right) =1$ be a causal non-negative non-increasing response function translating the way shocks will attenuate as time passes by in a shot-noise process. We assume $h\left( t\right) \rightarrow 0$ as $t\rightarrow \infty $ and \begin{equation} \int_{0}^{\infty }h\left( s\right) ds<\infty . \label{IF} \end{equation} With $X_{0}=x\geq 0$, consider then the linear shot-noise process \begin{equation} X_{t}=x+\int_{0}^{t}\int_{{\Bbb R}_{+}}yh\left( t-s\right) \mu \left( ds,dy\right) , \label{LSN} \end{equation} where, with $\left( S_{n};n\geq 1\right) $ the points of a homogeneous Poisson point process with intensity $\beta ,$ $\mu \left( ds,dy\right) =\sum_{n\geq 1}\delta _{S_{n}}\left( ds\right) \delta _{\Delta _{n}}\left( dy\right) $ (translating independence of the shots' heights $\Delta _{n}$ and occurrence times $S_{n}$)$.$ Note that, with $dN_{s}=\sum_{n\geq 1}\Delta _{n}\delta _{S_{n}}\left( ds\right) ,$ so with $N_{t}=\sum_{n\geq 1}\Delta _{n}{\bf 1}_{\{ S_{n}\leq t \}} $ representing a time-homogeneous compound Poisson process with jumps' amplitudes $\Delta ,$ \begin{equation} X_{t}=x+\int_{0}^{t}h\left( t-s\right) dN_{s} \label{LFSN} \end{equation} is a linearly filtered compound Poisson process. Under this form, it is clear that $X_{t}$ cannot be Markov unless $h\left( t\right) =e^{-\alpha t}$ , $\alpha >0$. We define \begin{eqnarray*} \nu \left( dt,dy\right) &=&{\bf P}\left( S_{n}\in dt,\Delta _{n}\in dy\text{ for some }n\geq 1\right) \\ &=&\beta dt\cdot {\bf P}\left( \Delta \in dy\right) . \end{eqnarray*} In the sequel, we shall assume without much loss of generality that $x=0$.
The linear shot-noise process $X_{t}$ has two alternative equivalent representations, emphasizing its {\bf superposition} characteristics:
\begin{eqnarray*} \left( 1\right) \text{ }X_{t} &=&\sum_{n\geq 1}\Delta _{n}h\left( t-S_{n}\right) {\bf 1}_{\{ S_{n}\leq t\}} \\ \left( 2\right) \text{ }X_{t} &=&\sum_{p=1}^{P_{t}}\Delta _{p}h\left( t- {\mathcal S}_{p}\left( t\right) \right) , \end{eqnarray*} where $P_t = \sum_{n\geq 1} {\bf 1}_{\{ S_{n}\leq t\}}.$
Both show that $X_{t}$ is the size at $t$ of the whole decay-surge population, summing up all the declining contributions of the sub-families which appeared in the past at jump times (a shot-noise or filtered Poisson process model appearing also in physics and queuing theory, \cite{Sny}, \cite {Parzen}). The contributions $\Delta _{p}h\left( t-{\mathcal S}_{p}\left( t\right) \right) $, $p=1,\ldots ,P_{t},$ of the $P_{t}$ families to $X_{t}$ are stochastically ordered in decreasing sizes.\newline
In the Markov case, $h\left( t\right) =e^{-\alpha t}$, $t\geq 0$, $\alpha >0, $ we have \begin{equation} X_{t}=e^{-\alpha t}\int_{0}^{t}e^{\alpha s}dN_{s}, \label{MSN} \end{equation} so that \[ dX_{t}=-\alpha X_{t}dt+dN_{t}, \] showing that $X_{t}$ is a time-homogeneous Markov process driven by $N_{t},$ known as the {\bf classical} linear shot-noise. This is clearly the only choice of the response function that makes $X_{t}$ Markov. In that case, by Campbell's formula (see \cite{Parzen}),
\begin{eqnarray*} \Phi _{t}^{X}\left( q\right) &:&={\bf E}e^{-qX_{t}}=e^{-\beta \int_{0}^{t}\left( 1-\phi _{\Delta }\left( qe^{-\alpha \left( t-s\right) }\right) \right) ds} \\ &=&e^{-\frac{\beta }{\alpha }\int_{e^{-\alpha t}}^{1}\frac{1-\phi _{\Delta }\left( qu\right) }{u}du}\text{ where }e^{-\alpha s}=u, \end{eqnarray*} with \[ \Phi _{t}^{X}\left( q\right) \rightarrow \Phi _{\infty }^{X}\left( q\right) =e^{-\frac{\beta }{\alpha }\int_{0}^{1}\frac{1-\phi _{\Delta }\left( qu\right) }{u}du}\text{ as }t\rightarrow \infty . \]
The simplest explicit case is when $\phi _{\Delta }\left( q\right) =1/\left( 1+q/\theta \right) $ (i.e., $\Delta \sim $Exp$\left( \theta \right) $) so that, with $\gamma =\beta /\alpha ,$ \[ \Phi _{\infty }^{X}\left( q\right) =\left( 1+q/\theta \right) ^{-\gamma } \] is the Laplace-Stieltjes transform of a Gamma$\left( \gamma ,\theta \right) $ distributed random variable $X_{\infty },$ with density \[ \frac{\theta ^{\gamma }}{\Gamma \left( \gamma \right) }x^{\gamma -1}e^{-\theta x}\text{, }x>0. \] This time-homogeneous linear shot-noise with exponential attenuation function and exponentially distributed jumps is a decay-surge Markov process with triple \[ \left( \alpha \left( x\right) =-\alpha x;\text{ }\beta \left( x\right) =\beta ;\text{ }k\left( x\right) =e^{-\theta x}\right) . \]
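As a sanity check on the Gamma$(\gamma,\theta)$ limit law, the classical linear shot-noise can be sampled directly from representation $(1)$; the sketch below (illustrative parameters, $x=0$) samples $X_T$ for large $T$, whose empirical mean should then be close to $\gamma/\theta$:

```python
import math, random

def shot_noise_at(T, beta=2.0, alpha=1.0, theta=1.0, rng=random):
    """Classical linear shot-noise sampled at time T:
        X_T = sum_{S_n <= T} Delta_n * exp(-alpha*(T - S_n)),
    with homogeneous Poisson(beta) shot times S_n and i.i.d.
    Exp(theta) amplitudes Delta_n (x = 0; illustrative parameters)."""
    x, s = 0.0, 0.0
    while True:
        s += rng.expovariate(beta)      # next shot time
        if s > T:
            return x
        x += rng.expovariate(theta) * math.exp(-alpha * (T - s))
```

With $\beta=2$, $\alpha=\theta=1$ the limit law is Gamma$(2,1)$, of mean $\gamma/\theta = 2$.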
Shot-noise processes being generically non-Markov, there is no systematic relationship between decay-surge Markov processes and shot-noise processes. In \cite{EK2009}, it is pointed out that decay-surge Markov processes could be related to the maximal process of nonlinear shot noise; see Eliazar and Klafter (2007 and 2009).
\textbf{Acknowledgments.}
T. Huillet acknowledges partial support from the ``Chaire \textit{Mod\'{e}lisation math\'{e}matique et biodiversit\'{e}}''. B. Goncalves and T. Huillet acknowledge support from the labex MME-DII Center of Excellence (\textit{Mod\`{e}les math\'{e}matiques et \'{e}conomiques de la dynamique, de l'incertitude et des interactions}, ANR-11-LABX-0023-01 project). This work was funded by the CY Initiative of Excellence (grant ``\textit{Investissements d'Avenir}'' ANR-16-IDEX-0008), Project ``EcoDep'' PSI-AAP2020-0000000013.
\end{document}
\begin{document}
\maketitle \begin{abstract}
This paper studies an optimal investment and risk control problem for an insurer with default contagion and regime-switching. The insurer in our model allocates his/her wealth across multi-name defaultable stocks and a riskless bond under regime-switching risk. Default events have an impact on the distress state of the surviving stocks in the portfolio. The aim of the insurer is to maximize the expected utility of the terminal wealth by selecting optimal investment and risk control strategies. We characterize the optimal trading strategy of defaultable stocks and risk control for the insurer. By developing a truncation technique, we analyze the existence and uniqueness of global (classical) solutions to the recursive HJB system. We prove the verification theorem based on the (classical) solutions of the recursive HJB system.
{\noindent{\textbf{AMS 2000 subject classifications}: 3E20, 60J20.}
\noindent{\textit{Keywords and phrases}:}\quad Optimal investment; default contagion; regime-switching; recursive dynamical system.} \end{abstract}
\section{Introduction}\label{sec:intro} Since the seminal works of Merton~\cite{Merton69, Merton71}, portfolio optimization problems have been the subject of considerable investigation. In recent years, hybrid diffusion models have received a considerable amount of attention from both researchers and practitioners. In particular, the regime-switching model (as a class of hybrid models) is usually proposed to capture the influence on the behavior of the market caused by transitions in the macroeconomic system or by macroscopic readjustment and regulation. Zhang and Zhou~\cite{zhangzhou09} study the valuation of stock loans in which the underlying stock price is modeled as a Markov modulated geometric Brownian motion using a two-state hidden Markov chain. Elliott, et al.~\cite{ElliSiu07} consider the pricing of options under a generalized Markov modulated jump diffusion model. Capponi, et al.~\cite{Ago14mf1} obtain a Poisson series representation for the arbitrage-free price process of vulnerable contingent claims in a market driven by an underlying continuous-time Markov chain. Apart from the classical Merton model of utility maximization of terminal wealth, there has been increasing consideration of different stochastic control criteria for portfolio management in recent years. Zhou and Yin~\cite{zhangyin03} study Markowitz's mean-variance portfolio selection with regime-switching in a continuous-time model. Elliott and Siu~\cite{ElliSiu} investigate an optimal portfolio selection problem in a Markov modulated Black-Scholes market when an economic agent faces model uncertainty. Shen~and Siu~\cite{ShenSiu} discuss a consumption-portfolio optimization problem in a hidden Markov modulated asset price model with multiple risky assets under the situation that an economic agent only has access to information about the price processes of risky shares. 
Andruszkiewicz, et al.~\cite{Davis16} consider a risk-sensitive investment problem under a jump diffusion regime-switching market model.
{The objective of this paper is to consider an analytical framework for the portfolio allocation and risk control of an insurer, which explicitly accounts for the interaction between regime-switching and credit risk.} These two sources of risk have been identified as tightly linked in empirical research, see, for example, Campbell and Taksler~\cite{Campbell}. For pricing, models accounting for the dependence of default intensities on asset volatilities have been proposed by Carr and Linetsky~\cite{CarrLinetsky}, Carr and Wu~\cite{CarrWu}, and extended to a multi-name context by Mendoza-Arriaga and Linetsky~\cite{MendozaMF}. We propose a model in which switching regimes, capturing the state or modes of the underlying credit market, drive both volatility and default risk of the risky asset price processes. Moreover, the total risk controlled by liabilities of the insurer is driven by the switching regimes and the credit states of the portfolio. Zou and Cadenillas~\cite{ZouCa14} consider an optimal investment and risk control problem with a single default-free asset. The case with multiple default-free assets and regime-switching is extended by Zou and Cadenillas~\cite{ZouCa17}. More recently, Peng and Wang \cite{Pengwang} study the optimal investment strategy and risk control for an insurer who has some inside information on the insurance business. Bo and Wang~\cite{Bowang} focus on an optimal investment and risk control problem for an insurer under stochastic diffusive factors.
We incorporate the interaction between regime-switching and default contagion risk into the risk control model. Differently from the default-free case, default events have an impact on the distress state of the surviving stocks in the portfolio. {Since defaults can occur sequentially, the default intensities of the surviving names are affected by the default events of other stocks in the portfolio. Hence, the HJB system associated with the stochastic control problem is recursive in terms of default states of the portfolio.} The depth of the recursion equals the number of stocks in the portfolio. We analyze the HJB equation and the constrained equation satisfied by the optimal strategy of stocks using a backward recursion. The recursive procedure starts from the state in which all stocks are defaulted and regresses toward the state in which all stocks are alive. Since the policy space of our control problem is not assumed to be compact, the main difficulty in the analysis of solutions to this coupled system lies in the general default state and the non-Lipschitz nonlinearities of the system. Andruszkiewicz, et al.~\cite{Davis16} deal with a risk-sensitive investment problem in a finite-factor model under a compact policy space. The existence and uniqueness of solutions to their HJB equation can be established by verifying the globally Lipschitz-continuous coefficients. We prove in this paper that the nonlinearities of the coupled system are Lipschitz-continuous only when the variable corresponding to the solution is not close to zero (see Lemma~\ref{lem:Gkesti}). This suggests developing a truncation technique such that the truncated nonlinearity in the system is globally Lipschitz-continuous and considering an approximation of the truncated recursive coupled systems. For this purpose, we establish a key comparison result (see Lemma~\ref{lem:comparison}) for two coupled monotone dynamical systems. 
We refer the reader to Smith~\cite{smith08} for the definition of monotone dynamical systems. In order to construct the limit of the approximating truncated systems, we prove that the approximating systems admit a uniform (strictly positive) lower bound, and then this limit can be verified to be the unique global solution of our recursive HJB system (see Theorem \ref{thm:solutionk}).
The rest of the paper is organized as follows. Section \ref{sec:model} introduces the market model with regime-switching and credit risk interaction. Section \ref{sec:framework} formulates the dynamic optimization problem for an insurer and derives the recursive HJB system. Section~\ref{sec:HJB-sys} analyzes the (classical) solutions of the recursive HJB system. The optimal investment and risk control strategies are characterized in the same section. A verification theorem is also proved in the same section. Section~\ref{sec:numerics} develops a numerical analysis. Additional technical proofs are provided in the Appendix.
\section{The Model}\label{sec:model}
We consider a financial market consisting of $n\geq1$ defaultable stocks and a risk-free money market account. Let $(\Omega,{\mathcal G},{\mathbb{G}}, \mathbb{P} )$ be a complete filtered probability space, where the global filtration $\mathbb{G}:=\mathbb{F} \vee {\mathbb{Z}}_1\vee{\mathbb{Z}}_2$ is augmented by all $ \mathbb{P} $-null sets so as to satisfy the usual conditions. Let $T>0$ be the finite target horizon. The filtration $\mathbb{F} :=({\mathcal{F}}_t)_{t\in[0,T]}$, where $\mathcal{F}_t$ is the sigma-algebra generated by independent multi-dimensional standard Brownian motions denoted by $W:=(W_j(t);\ j=1,\ldots,d)_{t\in[0,T]}^{\top}$, $\bar{W}:=(\bar{W}_j(t);\ j=1,\ldots,\bar{d})_{t\in[0,T]}^{\top}$ and a regime-switching process $Y:=(Y(t))_{t\in[0,T]}$ introduced below. Here $d,\bar{d}\geq1$ and we use $\top$ to denote the transpose operator. We next specify the filtrations $\mathbb{Z}_1$ and $\mathbb{Z}_2$. The default state is described by an $n$-dimensional default indicator process $Z:=(Z_j(t);\ j=1,\ldots,n)_{t\in[0,T]}$ which takes values on ${\cal S}:=\{0,1\}^n$. For $j=1,\ldots,n$, the default time of the $j$-th stock is given by \begin{eqnarray}\label{eq:default-time} \tau_j := \inf\{t\geq0;\ Z_j(t)=1\}. \end{eqnarray} The filtration $\mathbb{Z}_1:=({\mathcal{Z}}_{1t})_{t\in[0,T]}$, where the sigma-algebra ${\cal Z}_{1t}:=\bigvee_{j=1}^n{\sigma(Z_j(s);\ s\leq t)}$. {Hence $\mathbb{Z}_1$ contains all information about default events until the target horizon $T$. The filtration $\mathbb{Z}_2:=({\cal Z}_{2t})_{t\in[0,T]}$ where the sigma-algebra ${\cal Z}_{2t}:=\sigma((N_{i,z}(s),\ (i,z)\in\{1,\ldots,m\}\times{\cal S});\ s\leq t)$. 
Here $N_{i,z}:=(N_{i,z}(t))_{t\in[0,T]}$ for $(i,z)\in\{1,\ldots,m\}\times{\cal S}$ are independent Poisson processes with respective intensities $\nu(i,z)>0$, which will be used to model the risk control process of an insurer introduced in \eqref{eq:Ct} and \eqref{eq:N} below.} Our model consists of four blocks: the regime-switching process, the credit model, the price processes and the risk process for an insurer. Each of these blocks will be detailed in the sequel.
\noindent{\it Regime-switching process.}\quad The regime-switching process $Y$ here is described as a continuous-time (conservative) Markov chain with state space $\{1,\ldots,m\}$ where $m\geq1$, which is independent of the multi-dimensional Brownian motions $(W,\bar{W})$. {The generator of the Markov chain $Y$ is given by an $m\times m$-dimensional matrix $Q:=(q_{ij})_{m\times m}$. This yields that $q_{ii}\leq0$ for $i\in\{1,\ldots,m\}$, $q_{ij}\geq0$ for $i\neq j$, and $\sum_{j=1}^{m}q_{ij}=0$ for $i\in\{1,\ldots,m\}$ (i.e., $\sum_{j\neq i}q_{ij}=-q_{ii}$ for $i\in\{1,\ldots,m\}$).}
\noindent{\it Credit risk model.}\quad The joint process $(Y,Z)$ of the regime-switching process and the default indicator process is a joint Markov process with state space $\{1,\ldots,m\}\times\mathcal{S}$. Moreover, at any time $t\in[0,T]$, the default indicator process transits from a state $Z(t) :=(Z_1(t),\ldots,Z_{j-1}(t),Z_j(t),Z_{j+1}(t),\ldots,Z_n(t))$ in which the stock $j$ is alive ($Z_j(t)=0$) to the neighbour state ${Z}^j(t):=(Z_1(t),\ldots,Z_{j-1}(t),1-Z_j(t),Z_{j+1}(t),\ldots,Z_n(t))$ in which the stock $j$ has defaulted at a stochastic rate $\mathds{1}_{Z_j(t)=0}h_{j}(Y(t),Z(t))$. Here $h_j(i,z)>0$ for all $(i,z)\in\{1,\ldots,m\}\times{\cal S}$. We assume that $Y(t)$, $Z_1(t),\ldots,Z_n(t)$ will not jump simultaneously almost surely. Consequently, the default intensity of the $j$-th stock may change either if any other stock in the portfolio defaults (contagion effect), or if there are regime-switchings (market risk effect). Our default model thus belongs to the rich class of interacting intensity models, introduced by Frey and Backhaus \cite{FreyBackhaus04} (see also the interacting default intensity model with diffusive factors introduced in Birge, et al.~\cite{BirgeBoCapponi}). Hereafter, we set $h(i,z):=(h_j(i,z);\ j=1,\ldots,n)^{\top}$ for $(i,z)\in \{1,\ldots,m\}\times{\cal S}$.
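The joint Markov dynamics of $(Y,Z)$ can be simulated by competing exponential clocks: at each step, the next event is either a regime switch $i\to j$ at rate $q_{ij}$ or the default of an alive name $j$ at rate $h_j(Y,Z)$. The following Python sketch is purely illustrative (the generator $Q$ and intensity function $h$ are user-supplied inputs, not the paper's calibrated model):

```python
import random

def simulate_Y_Z(T, Q, h, y0=0, n=2, rng=random):
    """Competing-clocks simulation of the joint Markov process (Y, Z):
    the regime switches i -> j at rate Q[i][j], and an alive name j
    (z_j = 0) defaults at rate h(y, z)[j].  Q and h are user-supplied;
    this sketch is illustrative, not part of the paper's model.
    Returns the embedded path as a list of (time, y, z) tuples."""
    t, y, z = 0.0, y0, [0] * n
    path = [(t, y, tuple(z))]
    while True:
        rates = []                                    # (rate, event)
        for j in range(len(Q)):
            if j != y and Q[y][j] > 0:
                rates.append((Q[y][j], ("regime", j)))
        hz = h(y, tuple(z))
        for j in range(n):
            if z[j] == 0:
                rates.append((hz[j], ("default", j)))
        total = sum(r for r, _ in rates)
        if total == 0:                                # nothing can happen
            break
        t += rng.expovariate(total)                   # next event time
        if t >= T:
            break
        u, acc = rng.random() * total, 0.0            # select the event
        for r, ev in rates:
            acc += r
            if u <= acc:
                break
        if ev[0] == "regime":
            y = ev[1]
        else:
            z[ev[1]] = 1                              # name ev[1] defaults
        path.append((t, y, tuple(z)))
    return path
```

Note that after each default the intensities of the surviving names are re-evaluated through $h$, which is exactly the contagion mechanism described above.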
\noindent{\it Price processes.}\quad The vector of the price processes of the $n$ defaultable stocks is denoted by $\tilde{S}:=(\tilde{S}_j(t);\ j=1,\ldots,n)_{t\in[0,T]}^{\top}$. For $t\in[0,T]$, the price process of the $j$-th defaultable stock is given by \begin{equation} \tilde{S}_j(t)=(1-Z_j(t))S_j(t), \qquad \; j = 1,\ldots,n. \label{eq:pricedef} \end{equation} In other words, the price of the $j$-th stock is given by the predefault price $S_j(t)$ up to ${\tau_j}-$, and jumps to $0$ at time ${\tau_j}$, where it remains forever afterwards. The dynamics of the pre-default price process $S:=(S_j(t);\ j=1,\ldots,n)_{t\in[0,T]}^{\top}$ of the $n$ defaultable stocks is given by \begin{align}\label{eq:P} dS(t) = {diag}(S(t)) [(\mu(Y(t))+h(Y(t),Z(t))) dt + \sigma(Y(t))dW(t)]. \end{align} Above, ${diag}(S(t))$ is the diagonal $n\times n$-dimensional matrix with diagonal elements $S_j(t)$ for $j=1,\ldots,n$. For each $i\in\{1,\ldots,m\}$, the vector $\mu(i)$ is $\mathds{R}^n$-valued, and $\sigma(i)$ is an $\mathds{R}^{n\times d}$-valued matrix such that $\sigma(i)\sigma(i)^{\top}$ is positive definite. Eq.~\eqref{eq:P} indicates that the investor holding the credit sensitive security is compensated for the incurred default risk at the premium rate $h(Y(t),Z(t))$. Using equations~\eqref{eq:pricedef}, \eqref{eq:P} and integration by parts, the dynamics of the defaultable stock prices can be given by \begin{align}\label{eq:tildeP} d\tilde{S}(t) = {diag}(\tilde{S}(t)) [\mu(Y(t))dt + \sigma(Y(t))dW(t)-dM(t)], \end{align} where $M:=(M_j(t);\ j=1,\ldots,n)_{t\in[0,T]}^{\top}$ is a pure jump $ \mathbb{P} $-martingale given by \begin{align}\label{eq:taui} M_j(t)&:= Z_j(t) - \int_0^{t\wedge\tau_j}h_j(Y(s),Z(s))ds,\ \ \ \ \ \ t\in[0,T]. \end{align}
\noindent{\it Risk control process.}\quad For the risk control process, denote by $\eta(t)$ the $\mathbb{G}$-predictable total outstanding number of policies (liabilities) at time $t$. The risk model for claims is described as an extensive Cram\'er-Lundberg model, in which the claim (risk) per policy $C=(C(t))_{t\in[0,T]}$ is given by the following dynamics \begin{align}\label{eq:Ct} dC(t)=c(Y(t))dt+\phi(Y(t))dW(t)+\bar{\phi}(Y(t))d\bar{W}(t)+g(Y(t-))dN(t), \end{align} where, for each $i=1,\ldots,m$, the volatilities $\phi(i)$ and $\bar{\phi}(i)$ are respectively $d$-dimensional and $\bar{d}$-dimensional nonzero row vectors, the drift $c(i)\in\mathds{R}$, and the positive jump size (claim size) $g(i)\in\mathds{R}_+:=(0,\infty)$. Here, the jump process $N:=(N(t))_{t\in[0,T]}$ is a Markov modulated Poisson process with positive intensity process given by $(\nu(Y(t),Z(t)))_{t\in[0,T]}$. For $t\in[0,T]$, the process $N(t)$ represents the number of claims occurring in time interval $[0,t]$. More precisely, we can rewrite $N(t)$ as \begin{align}\label{eq:N} N(t)=\sum_{(i,z)\in\{1,\ldots,m\}\times{\cal S}}\int_0^t\mathds{1}_{Y(s-)=i,Z(s-)=z}dN_{i,z}(s). \end{align} We recall that for $(i,z)\in\{1,\ldots,m\}\times{\cal S}$, $N_{i,z}=(N_{i,z}(t))_{t\in[0,T]}$ are independent Poisson processes with respective intensities $\nu(i,z)$, and moreover they are also independent of the random processes $(W,\bar{W},Y)$. Then, we have that, for $t\in[0,T]$, \begin{align}\label{eq:tildeN} \tilde{N}(t) &:=N(t) - \sum_{(i,z)\in\{1,\ldots,m\}\times{\cal S}}\int_0^t\mathds{1}_{Y(s)=i,Z(s)=z}\nu(i,z)ds=N(t)-\int_0^t\nu(Y(s),Z(s))ds \end{align} is a $ \mathbb{P} $-martingale. An example of insurance product whose arrival intensity of claims depends on the default states of stocks and the regimes of the economy is so-called Trade Credit Insurance (see, e.g., Jones~\cite{PMJones}). Trade Credit Insurance protects a supplier from the risk of buyer's non-payment. 
The supplier delivers unpaid goods or services to the buyer and allows a deferred payment from the buyer. To ensure the payment, the supplier purchases trade credit insurance products. In exchange for the premia, the insurer covers the payment if the buyer defaults. This implies that claims arrive when the buyer fails to pay the suppliers due to credit risk such as protracted default, insolvency, and bankruptcy, etc. Consequently, the probability of buyer's default is correlated with the default states of stocks and the regimes of the economy.
The diffusive term $c(Y(t))dt+\phi(Y(t))dW(t)+\bar{\phi}(Y(t))d\bar{W}(t)$ in \eqref{eq:Ct} models the fluctuations in the value of the claim per policy. From equations~\eqref{eq:tildeP} and \eqref{eq:Ct}, it can be seen that apart from the risk (pure jump) model for the claims, the claim (risk) per policy $C(t)$ is also driven by an idiosyncratic source of risk $\bar{W}$ and has the common source of risk ${W}$ with the defaultable stock prices $\tilde{S}(t)$. Thus, by Zou and Cadenillas~\cite{ZouCa14}, the total risk of the insurer in our case can be described as \begin{align}\label{eq:risk-model} dR^\eta(t)=\eta(t)dC(t). \end{align} The forthcoming section will formulate the dynamic optimization problem for an insurer and formally derive the recursive HJB system using the dynamic programming principle.
\section{Dynamic Optimization for an Insurer}\label{sec:framework}
In this section, we formulate the optimal investment and risk control problem for an insurer and derive the recursive HJB system accordingly. To this end, for $j=1,\ldots,n$, let $\tilde{\pi}_j(t)$ be the $\mathbb{G}$-predictable fraction strategy for the $j$-th defaultable stock at time $t$. We assume that the insurer does not invest in a stock once it has defaulted. Then $1-\tilde{\pi}(t)^{\top}e_n^{\top}$ is the fraction strategy for the risk-free money market account at time $t$. The dynamics of the money market account $B(t)$ is given by $dB(t)= r(Y(t))B(t)dt$, where the regime-switching interest rate $r(i)>0$ for $i=1,\ldots,m$. Here $\tilde{\pi}(t):=(\tilde{\pi}_j(t);\ j=1,\ldots,n)^{\top}$ and $e_n$ denotes the $n$-dimensional row vector all of whose entries are one.
We assume that the average premium per liability for the insurer is $p(Y(t),Z(t))$ (i.e., it depends not only on the macro-economy but also on the default state of the portfolio); the price of the insurance risk then satisfies the dynamics $dP(t)=p(Y(t),Z(t))dt-dC(t)$. The insurer is in fact able to trade this risk process by selling insurance products and by ceding part or all of his/her business to reinsurers. Recall that $\eta(t)$ stands for the $\mathbb{G}$-predictable total outstanding number of policies (liabilities) at time $t$ introduced in Section~\ref{sec:model}. Let $X^{\tilde{\pi},\tilde{l}}(t)$ denote the time-$t$ wealth level corresponding to the strategy $(\tilde{\pi},\tilde{l})$; the self-financing condition then yields that \begin{align}\label{eq:SDEX} \frac{dX^{\tilde{\pi},\tilde{l}}(t)}{X^{\tilde{\pi},\tilde{l}}(t-)}&=\tilde{\pi}(t)^{\top}{ diag}(\tilde{S}(t-))^{-1}d\tilde{S}(t)+\big(1-\tilde{\pi}(t)^{\top}e_n^{\top}\big)\frac{dB(t)}{B(t)}+\tilde{l}(t)dP(t)\\ &=\tilde{\pi}(t)^{\top}{ diag}(\tilde{S}(t-))^{-1}d\tilde{S}(t)+\big(1-\tilde{\pi}(t)^{\top}e_n^{\top}\big)\frac{dB(t)}{B(t)}+ p(Y(t),Z(t))\tilde{l}(t)dt-d{R}^{\tilde{l}}(t),\nonumber \end{align} where $\tilde{l}(t)$ is the ratio of liabilities over wealth at time $t$. By virtue of the dynamics \eqref{eq:risk-model}, it holds that \begin{align}\label{eq:risk-model2} d{R}^{\tilde{l}}(t)=\tilde{l}(t)\left\{c(Y(t))dt+\phi(Y(t))dW(t)+\bar{\phi}(Y(t))d\bar{W}(t)+g(Y(t-))dN(t)\right\}. 
\end{align} Using equations \eqref{eq:SDEX} and \eqref{eq:risk-model2}, the wealth process of the insurer can be rewritten as \begin{align}\label{eq:sdeX2} \frac{dX^{\tilde{\pi},\tilde{l}}(t)}{X^{\tilde{\pi},\tilde{l}}(t-)}=&\big[r(Y(t))+\tilde{\pi}(t)^{\top}(\mu(Y(t))-r(Y(t))e_n^{\top}) +\tilde{l}(t)(p(Y(t),Z(t))-c(Y(t)))\big]dt\nonumber\\ &+\big[\tilde{\pi}(t)^{\top}\sigma(Y(t))-\tilde{l}(t)\phi(Y(t))\big]dW(t)-\tilde{l}(t)\bar{\phi}(Y(t))d\bar{W}(t)\\ &-\tilde{\pi}(t)^{\top}dM(t)-\tilde{l}(t)g(Y(t-))dN(t).\nonumber \end{align}
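To make \eqref{eq:sdeX2} concrete, the following minimal simulation sketch (an illustration only, not part of the model analysis) generates one path of the wealth process by an Euler--Maruyama scheme in a stripped-down setting: one stock ($n=1$), one frozen regime, constant controls $(\pi,l)$, and the default-indicator term $\tilde{\pi}^{\top}dM$ omitted. All numerical values are assumptions made for illustration.

```python
import numpy as np

# Euler-Maruyama sketch of the wealth dynamics (one stock, one regime,
# constant controls; default jumps of the stock omitted). All parameter
# values below are illustrative assumptions.
rng = np.random.default_rng(0)

r, mu, sigma = 0.05, 0.12, 0.25          # money market rate, stock drift/vol
p, c, phi, phibar = 0.8, 0.1, 0.4, 0.3   # premium, claim drift, claim vols
g, nu = 0.2, 2.0                         # claim jump factor and intensity
pi, l = 0.5, 0.3                         # constant controls; note l*g < 1

T, nsteps = 1.0, 10_000
dt = T / nsteps
X = 1.0                                  # initial wealth
for _ in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(dt))
    dWbar = rng.normal(0.0, np.sqrt(dt))
    dN = rng.poisson(nu * dt)            # claim arrivals in this step
    drift = r + pi * (mu - r) + l * (p - c)
    X *= (1.0 + drift * dt + (pi * sigma - l * phi) * dW
          - l * phibar * dWbar - l * g * dN)

print(X)  # one sample of X(T)
```

Each multiplicative step mirrors the drift, diffusion, and claim-jump brackets of \eqref{eq:sdeX2}; since $lg<1$, each Poisson claim removes the fraction $lg$ of wealth without driving it negative.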
We next give the definition of the admissible control set used throughout the paper. \begin{definition}\label{def:add-con} The admissible control set $\tilde{\cal U}$ is a class of $\mathbb{G}$-predictable feedback strategies $(\tilde{\pi}(t),\tilde{l}(t))_{t\in[0,T]}:=((\tilde{\pi}_j(t);\ j=1,\ldots,n)^{\top},\tilde{l}(t))_{t\in[0,T]}$, given by the Markov control $\tilde{\pi}_j(t):=\pi_j(t,X^{\tilde{\pi},\tilde{l}}(t-),Y(t-),Z(t-))$ for $j=1,\ldots,n$, and the nonnegative Markov control $\tilde{l}(t):=l(t,X^{\tilde{\pi},\tilde{l}}(t-),Y(t-),Z(t-))$, such that the wealth process $X^{\tilde{\pi},\tilde{l}}(t)$ of the insurer is nonnegative for all $t\in[0,T]$. Moreover, $\tilde{\pi}_j(t)=\tilde{\pi}_j(t)(1-Z_j(t-))$ for $j=1,\ldots,n$, and the feedback control functions $\pi_j$, $j=1,\ldots,n$, and $l$ are assumed to be locally bounded. We use ${\cal U}$ to denote the set of the above feedback functions $(\pi,l):=((\pi_j; j=1,\ldots,n)^{\top},l)$. \end{definition} For $x\in\mathds{R}_+$, let $U(x):=\frac{1}{\gamma}x^{\gamma}$ with $\gamma\in(0,1)$ be the power (CRRA) utility. We consider the following expected utility maximization problem from the insurer's terminal wealth: for $(t,x,i,z)\in[0,T]\times\mathds{R}_+\times\{1,\ldots,m\}\times{\cal S}$, \begin{align}\label{eq:value-fcn}
V(t,x,i,z) := \sup_{(\tilde{\pi},\tilde{l})\in\tilde{\cal U}} \mathbb{E} \left[U(X^{\tilde{\pi},\tilde{l}}(T))\big|X^{\tilde{\pi},\tilde{l}}(t)=x,Y(t)=i,Z(t)=z\right]. \end{align} Suppose that $V$ is $C^{1,2}$ in $(t,x)\in[0,T]\times\mathds{R}_+$ for each $(i,z)\in\{1,\ldots,m\}\times{\cal S}$. Then, It\^o's formula yields that \begin{align}\label{eq:hjbsystem} &dV(t,X^{\tilde{\pi},\tilde{l}}(t),Y(t),Z(t))\nonumber\\ &\ =\bigg\{\frac{\partial V}{\partial t}+X^{\tilde{\pi},\tilde{l}}(t)\frac{\partial V}{\partial x}\big[r(Y(t))+\tilde{\pi}(t)^{\top}\theta(Y(t),Z(t))+\tilde{l}(t)(p(Y(t),Z(t))-c(Y(t)))\big]\nonumber\\ &\quad+\frac{1}{2}(X^{\tilde{\pi},\tilde{l}}(t))^2\frac{\partial^2 V}{\partial x^2}\big[\tilde{\pi}(t)^{\top}\sigma(Y(t))\sigma(Y(t))^{\top}\tilde{\pi}(t) +(\tilde{l}(t))^2(\phi(Y(t))\phi(Y(t))^{\top}+\bar{\phi}(Y(t))\bar{\phi}(Y(t))^{\top})\nonumber\\ &\quad\qquad-2\tilde{l}(t)\tilde{\pi}(t)^{\top}\sigma(Y(t))\phi(Y(t))^{\top}\big]\bigg\}dt\nonumber\\ &\quad+X^{\tilde{\pi},\tilde{l}}(t)\frac{\partial V}{\partial x}\left\{\big[\tilde{\pi}(t)^{\top}\sigma(Y(t))-\tilde{l}(t)\phi(Y(t))\big]dW(t)-\tilde{l}(t)\bar{\phi}(Y(t))d\bar{W}(t)\right\}\nonumber\\ &\quad+\big[V(t,X^{\tilde{\pi},\tilde{l}}(t-)-\tilde{l}(t)X^{\tilde{\pi},\tilde{l}}(t-)g(Y(t-)),Y(t-),Z(t-))-V(t,X^{\tilde{\pi},\tilde{l}}(t-),Y(t-),Z(t-))\big]dN(t)\nonumber\\ &\quad+\sum_{j=1}^n\big[V(t,X^{\tilde{\pi},\tilde{l}}(t-)-\tilde{\pi}_j(t)X^{\tilde{\pi},\tilde{l}}(t-),Y(t-),Z^{j}(t-))-V(t,X^{\tilde{\pi},\tilde{l}}(t-),Y(t-),Z(t-))\big]dZ_j(t)\nonumber\\ &\quad+\sum_{j\neq Y(t-)}\big[V(t,X^{\tilde{\pi},\tilde{l}}(t-),j,Z(t-))-V(t,X^{\tilde{\pi},\tilde{l}}(t-),Y(t-),Z(t-))\big]dH_{Y(t-),j}(t). \end{align} Here, the coefficient $\theta(i,z):=\mu(i)-r(i)e_n^{\top}+h(i,z)$ for $(i,z)\in\{1,\ldots,m\}\times{\cal S}$, and the process, for $t\in[0,T]$, \begin{align}\label{eq:Hil} H_{ij}(t):=\sum_{0<s\leq t}\mathds{1}_{Y(s-)=i,Y(s)=j},\quad i,j\in\{1,\ldots,m\}\ {\rm and}\ i\neq j. 
\end{align} The dynamic programming principle yields that the value function $V$ satisfies the following HJB equation, i.e., for $(t,x,i,z)\in[0,T)\times\mathds{R}_+\times\{1,\ldots,m\}\times{\cal S}$, \begin{align}\label{eq:hjbsystem2} 0&=\frac{\partial V(t,x,i,z)}{\partial t}+r(i)x\frac{\partial V(t,x,i,z)}{\partial x}+\sum_{j\neq i}\big[V(t,x,j,z)-V(t,x,i,z)\big]q_{ij}\nonumber\\ &\quad+\sup_{(\pi,l)\in{\cal U}}\Bigg\{x\frac{\partial V(t,x,i,z)}{\partial x}\big[\pi^{\top}(I-diag(z))\theta(i,z)+l(p(i,z)-c(i))\big]\nonumber\\ &\quad+\frac{1}{2}x^2\frac{\partial^2 V(t,x,i,z)}{\partial x^2}\big[\pi^{\top}(I-diag(z))\sigma(i)\sigma(i)^{\top}(I-diag(z))\pi+l^2(\phi(i)\phi(i)^{\top}+\bar{\phi}(i)\bar{\phi}(i)^{\top})\nonumber\\ &\quad-2l\pi^{\top}(I-diag(z))\sigma(i)\phi(i)^{\top}\big]\nonumber\\ &\quad+\big[V(t,x-xlg(i),i,z)-V(t,x,i,z)\big]\nu(i,z)\nonumber\\ &\quad+\sum_{j=1}^n\big[V(t,x-\pi_jx,i,z^j)-V(t,x,i,z)\big](1-z_j)h_j(i,z)\Bigg\} \end{align} with terminal condition $V(T,x,i,z)=U(x)$ for all $(x,i,z)\in\mathds{R}_+\times\{1,\ldots,m\}\times{\cal S}$. Here, for $j=1,\ldots,n$, and $z\in{\cal S}$, the flipped state is defined as \begin{align}\label{eq:zj} z^j=(z_1,\ldots,z_{j-1},1-z_j,z_{j+1},\ldots,z_n). \end{align} In particular, we set $z^j=z$ if $j=0$.
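The reduction used below, which removes the wealth variable $x$ from \eqref{eq:hjbsystem2}, rests on the homogeneity of the power utility. Under the ansatz $V(t,x,i,z)=x^{\gamma}\varphi(t,i,z)$ one has
\begin{align*}
\frac{\partial V}{\partial t}=x^{\gamma}\frac{\partial \varphi}{\partial t},\qquad x\frac{\partial V}{\partial x}=\gamma x^{\gamma}\varphi,\qquad x^{2}\frac{\partial^{2} V}{\partial x^{2}}=\gamma(\gamma-1)x^{\gamma}\varphi,
\end{align*}
while the difference terms scale as
\begin{align*}
V(t,x-xlg(i),i,z)-V(t,x,i,z)&=x^{\gamma}\big[(1-lg(i))^{\gamma}-1\big]\varphi(t,i,z),\\
V(t,x-\pi_{j}x,i,z^{j})-V(t,x,i,z)&=x^{\gamma}\big[(1-\pi_{j})^{\gamma}\varphi(t,i,z^{j})-\varphi(t,i,z)\big],
\end{align*}
so every term of \eqref{eq:hjbsystem2} carries the common factor $x^{\gamma}$, which cancels.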
It can be observed that Eq.~\eqref{eq:hjbsystem2} is in fact a recursive dynamical system in terms of the default states $z\in{\cal S}$. Further, if we consider the value function in the form $V(t,x,i,z)=x^{\gamma}\varphi(t,i,z)$, then $\varphi(t,i,z)$ satisfies the recursive dynamical system given by, for $(i,z)\in\{1,\ldots,m\}\times{\cal S}$, on $t\in[0,T)$, \begin{align}\label{eq:hjbeqn} 0&=\frac{\partial \varphi(t,i,z)}{\partial t}+\gamma r(i)\varphi(t,i,z)+\sum_{j=1}^m\varphi(t,j,z)q_{ij}\nonumber\\ &\quad+\sup_{(\pi,l)\in{\cal U}}H\big((\pi,l);i,z,(\varphi(t,i,z^j);\ j=0,1,\ldots,n)\big) \end{align} with terminal condition $\varphi(T,i,z)=\frac{1}{\gamma}$ for all $(i,z)\in\{1,\ldots,m\}\times{\cal S}$. For $(\pi,l)\in(-\infty,1]^n\times[0,\infty)$ and $(i,z)\in\{1,\ldots,m\}\times{\cal S}$, the function \begin{align}\label{eq:H} &H\big((\pi,l);i,z,\bar{f}(z)\big)=\gamma\big\{\pi^{\top}(I-diag(z))\theta(i,z)+(p(i,z)-c(i))l\big\}f(z)\nonumber\\ &\qquad+\bigg\{\frac{\gamma(\gamma-1)}2\pi^{\top}(I-diag(z))\sigma(i)\sigma(i)^{\top}(I-diag(z))\pi+[(1-lg(i))^\gamma-1]\nu(i,z)\nonumber\\ &\qquad+\frac{\gamma(\gamma-1)}2l^2\big(\phi(i)\phi(i)^{\top}+\bar{\phi}(i)\bar{\phi}(i)^{\top}\big)-\gamma(\gamma-1)l\pi^{\top}(I-diag(z))\sigma(i)\phi(i)^{\top}\bigg\}f(z)\nonumber\\ &\qquad+\sum_{j=1}^n[(1-\pi_j)^\gamma f({z}^j)-f(z)](1-z_j)h_j(i,z), \end{align} where ${\bar f}(z)=(f(z^j);\ j=0,1,\ldots,n)$ is an arbitrary vector-valued function defined on $z\in{\cal S}$. In the forthcoming section, we will study the existence and uniqueness of (classical) solutions of the recursive HJB system \eqref{eq:hjbeqn}.
\section{Analysis of Iterated HJB Equations}\label{sec:HJB-sys}
This section analyzes the existence and uniqueness of global (classical) solutions to the recursive dynamical system \eqref{eq:hjbeqn} in terms of default states $z\in{\cal S}$.
We introduce notation that will be used frequently in this section. For $x\in\mathbb{R}^m$, we write $x=(x_1,\ldots,x_m)^{\top}$ as an $m$-dimensional column vector. For $x,y\in\mathds{R}^m$, we write $x\leq y$ if $x_i\leq y_i$ for all $i=1,\ldots,m$, and $x<y$ if $x\leq y$ and $x_i<y_i$ for some $i\in\{1,\ldots,m\}$. In particular, we write $x\ll y$ if $x_i<y_i$ for all $i=1,\ldots,m$. Recall that $e_{n}$ denotes the $n$-dimensional row vector whose entries are all ones. For a general default state $z\in{\cal S}$, we introduce the representation $z=0^{j_1,\ldots,j_k}$ for pairwise distinct indices $j_1,\ldots,j_k\in\{1,\ldots,n\}$ and $k\in\{0,1,\ldots,n\}$. Such a vector $z$ is obtained by flipping the entries $j_1,\ldots,j_k$ of the zero vector to one, i.e., $z_{j_1}=\cdots=z_{j_k}=1$ and $z_{j}=0$ for $j\notin\{j_1,\ldots,j_k\}$ (if $k=0$, we set $z=0^{j_1,\ldots,j_k}=0$). Clearly $0^{j_1,\ldots,j_{n}}=e_n$.
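As a purely illustrative sketch of this bookkeeping (the function and variable names below are our own), the state space ${\cal S}=\{0,1\}^n$ and the flip map $z\mapsto z^j$ of \eqref{eq:zj} can be coded directly:

```python
from itertools import product

# Default-state lattice S = {0,1}^n and the flip map z -> z^j of (eq:zj).
n = 3
S = [tuple(z) for z in product((0, 1), repeat=n)]

def flip(z, j):
    """Return z^j: the (1-based) entry j of z flipped; j = 0 returns z itself."""
    if j == 0:
        return z
    return z[:j - 1] + (1 - z[j - 1],) + z[j:]

z = (0, 1, 0)                  # the state 0^{2}: only stock 2 has defaulted
print(flip(z, 1), flip(z, 3))  # (1, 1, 0) and (0, 1, 1)
```

The backward recursion described next runs over this lattice from $e_n=(1,\ldots,1)$ down to $0=(0,\ldots,0)$.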
Recall the recursive dynamical system \eqref{eq:hjbeqn} in terms of the default states $z=0^{j_1,\ldots,j_k}$, where $k=0,1,\ldots,n$. Its solvability can be analyzed recursively in the default states. Hence, our proof strategy is a backward recursion, starting from the default state $z=e_n$ (i.e., all stocks have defaulted) and proceeding backward to the default state $z=0$ (i.e., all stocks are alive). \begin{itemize}
\item[(i)] $k=n$ (i.e., all stocks have defaulted). In this default state, the insurer will not invest in stocks because they have defaulted and hence the optimal fraction strategy for stocks is given by $\pi_1^*=\cdots=\pi_n^*=0$ by virtue of Definition~\ref{def:add-con}. Let $\varphi(t,e_n)=(\varphi(t,i,e_n);\ i=1,\ldots,m)^{\top}$. Then, the dynamical system \eqref{eq:hjbeqn} reduces to
\begin{align}\label{eq:hjben} \left\{ \begin{aligned} \frac d{dt}\varphi(t,e_n)=&-A^{(n)}\varphi(t,e_n),\quad\text{ in }[0,T);\\ \varphi(T,e_n)=&\frac{1}{\gamma}e_m^{\top}. \end{aligned} \right. \end{align} \end{itemize} Here the coefficient matrix is given by \begin{align}\label{eq:Aen} A^{(n)}=&diag\left[\left(\gamma r(i)+\sup_{l\in{\cal U}^{(n)}}H^{(n)}(l,i);\ i=1,\ldots,m\right)\right]+Q, \end{align} where the policy space appearing in \eqref{eq:Aen} reduces in this case to \begin{align}\label{eq:Unn} {\cal U}^{(n)}:=\{l=l(i)\in[0,+\infty);\ 1-lg(i)\geq0\}. \end{align} Moreover, the function $H^{(n)}(l,i)$ is given by, for $(l,i)\in[0,\infty)\times\{1,\ldots,m\}$, \begin{align*} H^{(n)}(l,i):=&\gamma(p(i,e_n)-c(i))l+\frac{\gamma(\gamma-1)}{2}l^2\big(\phi(i)\phi(i)^{\top}+\bar{\phi}(i)\bar{\phi}(i)^{\top}\big)\\ &+[(1-lg(i))^\gamma-1]\nu(i,e_n). \end{align*} Since $\gamma\in(0,1)$, it is not difficult to verify that, for each $i=1,\ldots,m$, $H^{(n)}(l,i)$ is continuous and strictly concave in $l$ on the compact set ${\cal U}^{(n)}$. Consequently, there exists a unique maximizer $l^{*}\in{\cal U}^{(n)}$, given by \begin{align}\label{eq:lnstar} l^{*} =l^{*}(i)= \argmax_{l\in{\cal U}^{(n)}}H^{(n)}(l,i),\qquad i=1,\ldots,m. \end{align} Further, we have that $\sup_{l\in{\cal U}^{(n)}}H^{(n)}(l,i)=H^{(n)}(l^{*},i)\in[0,\infty)$ for each $i=1,\ldots,m$, so the coefficient matrix $A^{(n)}$ given by \eqref{eq:Aen} has finite entries.
We next prove that the dynamical system \eqref{eq:hjben} has a unique strictly positive solution. To this end, we need the following auxiliary result, which will also be used in the proof for the general default case. The proof is provided in the Appendix. \begin{lemma}\label{lem:sol-hjben2} Let $g(t):=(g_i(t);\ i=1,\ldots,m)^{\top}$ satisfy the following dynamical system given by \begin{align*} \left\{ \begin{aligned} \frac d{dt}g(t)=&Bg(t)\quad\text{ in }(0,T];\\ g(0)=&\xi. \end{aligned} \right. \end{align*} If $B=(b_{ij})_{m\times m}$ satisfies $b_{ij}\geq 0$ for $i\neq j$ and $\xi\gg0$, then $g(t)\gg0$ for all $t\in[0,T]$. \end{lemma} We then have the following lemma, whose proof is given in the Appendix. \begin{lemma}\label{lem:sol-hjben} The dynamical system \eqref{eq:hjben} admits a unique solution, given by, for $t\in[0,T]$, \begin{align}\label{eq:varphien} \varphi(t,e_n)=\frac{1}{\gamma} e^{A^{(n)}(T-t)}e_m^{\top}=\frac{1}{\gamma}\sum_{i=0}^{\infty}\frac{(A^{(n)})^i(T-t)^i}{i!}e_m^{\top}, \end{align} where the $m\times m$-dimensional matrix $A^{(n)}$ is given by \eqref{eq:Aen}. Moreover, it holds that $\varphi(t,e_n)\gg 0$ for all $t\in[0,T]$. \end{lemma}
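Numerically, \eqref{eq:varphien} is just a matrix exponential. The following sketch evaluates it for $m=2$, with assumed illustrative values for the diagonal part $d_i=\gamma r(i)+\sup_{l}H^{(n)}(l,i)$ of $A^{(n)}$ and for $Q$; positivity is visible because the off-diagonal entries of $A^{(n)}$ are the nonnegative rates $q_{ij}$:

```python
import numpy as np
from scipy.linalg import expm

# phi(t, e_n) = (1/gamma) * expm(A^(n) (T - t)) @ 1, cf. (eq:varphien).
gamma, T = 0.5, 1.0
d = np.array([0.08, 0.05])                 # assumed diagonal part of A^(n)
Q = np.array([[-0.5, 0.5], [1.0, -1.0]])   # Markov-chain generator (Metzler)
A = np.diag(d) + Q

def phi(t):
    return (1.0 / gamma) * expm(A * (T - t)) @ np.ones(2)

print(phi(0.0), phi(T))  # phi(T) recovers the terminal condition (1/gamma) 1
```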
We next consider the general default state with the form $z=0^{j_1,\ldots,j_{k}}$ for $0\leq k\leq n-1$, i.e., the stocks $j_1,\ldots,j_{k}$ have defaulted and the stocks $\{j_{k+1},\ldots,j_n\}:=\{1,\ldots,n\}\setminus\{j_1,\ldots,j_k\}$ are alive. Then, we have \begin{itemize}
\item[(ii)] Since the stocks $j_1,\ldots,j_k$ have defaulted, the optimal fraction strategies for the stocks $j_1,\ldots,j_{k}$ are given by $\pi_j^{(k,*)}=0$ for $j\in\{j_1,\ldots,j_{k}\}$ by virtue of Definition~\ref{def:add-con}. Let $\varphi^{(k)}(t)=(\varphi(t,i,0^{j_1,\ldots,j_{k}});\ i=1,\ldots,m)^{\top}$, $p^{(k)}(i)=p(i,0^{j_1,\ldots,j_{k}})$, and $h^{(k)}_j(i)=h_j(i,0^{j_1,\ldots,j_{k}})$ for $j\notin\{j_1,\ldots,j_k\}$ and $i=1,\ldots,m$. Therefore, the corresponding HJB system \eqref{eq:hjbeqn} in this default state reduces to
\begin{align}\label{eq:hjbn-1} \left\{ \begin{aligned} \frac d{dt}\varphi^{(k)}(t)=&-A^{(k)}\varphi^{(k)}(t)-G^{(k)}(t,\varphi^{(k)}(t)),\quad\text{ in }[0,T);\\ \varphi^{(k)}(T)=&\frac{1}{\gamma}e_m^{\top}. \end{aligned} \right. \end{align} Here, the $m\times m$-dimensional matrix $A^{(k)}$ is given by \begin{align}\label{eq:An-1} A^{(k)}=diag\left[\left(\gamma r(i)-\sum_{j\notin\{j_1,\ldots,j_{k}\}}h_{j}^{(k)}(i);\ i=1,\ldots,m\right)\right]+Q. \end{align} The coefficient $G^{(k)}(t,x)=(G^{(k)}_i(t,x);\ i=1,\ldots,m)^{\top}$ for $(t,x)\in[0,T]\times\mathds{R}^{m}$ is given by, for $i=1,\ldots,m$, \begin{align}\label{eq:Gin-1} G_i^{(k)}(t,x)&=\sup_{(\pi^{(k)},l)\in{\cal U}^{(k)}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_{j}^{(k)})^\gamma h^{(k)}_{j}(i)\varphi^{(k+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)x_i\right\}, \end{align} where the policy space in this default case is given by \begin{align}\label{eq:Un} {\cal U}^{(k)}:=\left\{(\pi^{(k)},l)=(\pi^{(k)}(t,i),l(t,i))\in(-\infty,1]^{n-k}\times[0,\infty);\ 1-lg(i)\geq0\right\}. \end{align} The function $\varphi^{(k+1),j}(t,i):=\varphi(t,i,0^{j_1,\ldots,j_k,j})$ for $j\notin\{j_1,\ldots,j_k\}$ corresponds to the $i$-th element of the positive solution of the HJB system \eqref{eq:hjbeqn} at the default state $z=0^{j_1,\ldots,j_k,j}$. The function $H^{(k)}((\pi^{(k)},l),i)$ is given by, for $(\pi^{(k)},l)\in{\cal U}^{(k)}$, and $i=1,\ldots,m$, \begin{align}\label{eq:Hk} H^{(k)}((\pi^{(k)},l),i)=&\gamma\big\{(\pi^{(k)})^{\top}\theta^{(k)}(i)+(p^{(k)}(i)-c(i))l\big\}+\frac{\gamma(\gamma-1)}2\Big\{(\pi^{(k)})^{\top}\sigma^{(k)}(i)\sigma^{(k)}(i)^{\top}\pi^{(k)}\nonumber\\ &+l^2\big[\phi(i)\phi(i)^{\top}+\bar{\phi}(i)\bar{\phi}(i)^{\top}\big]-2l(\pi^{(k)})^{\top}\sigma^{(k)}(i)\phi(i)^{\top}\Big\}\nonumber\\ &+\big[(1-lg(i))^\gamma-1\big]\nu^{(k)}(i). 
\end{align} Here, for each $i=1,\ldots,m$, we use the notation $\pi^{(k)}=(\pi_j^{(k)};\ j\notin\{j_1,\ldots,j_k\})^{\top}$, $\theta^{(k)}(i)=(\theta_j(i);\ j\notin\{j_1,\ldots,j_k\})^{\top}$, $\sigma^{(k)}(i)=(\sigma_{j\kappa}(i);\ j\notin\{j_1,\ldots,j_k\},\kappa\in\{1,\ldots,d\})$, and $\nu^{(k)}(i)=\nu(i,0^{j_1,\ldots,j_k})$. \end{itemize} From the expression of $G_i^{(k)}(t,x)$ given by \eqref{eq:Gin-1}, it can be seen that the solution $\varphi^{(k)}(t)$ on $t\in[0,T]$ of Eq.~\eqref{eq:hjbeqn} at $z=0^{j_1,\ldots,j_k}$ depends on the solutions $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ of Eq.~\eqref{eq:hjbeqn} at $z=0^{j_1,\ldots,j_k,j}$ for $j\notin\{j_1,\ldots,j_k\}$. In particular, for $k=n-1$, the solution $\varphi^{(k+1),j}(t)=\varphi(t,e_n)\gg0$, corresponding to the solution of Eq.~\eqref{eq:hjbeqn} at $z=e_n$ (i.e., $k=n$), has been obtained in Lemma~\ref{lem:sol-hjben}. This suggests solving the HJB system \eqref{eq:hjbeqn} backward recursively in terms of the default states $z=0^{j_1,\ldots,j_k}$. Thus, in order to analyze the existence and uniqueness of a positive (classical) solution to the dynamical system \eqref{eq:hjbn-1}, we first assume that the HJB system \eqref{eq:hjbeqn} admits a unique positive (classical) solution $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ for $j\notin\{j_1,\ldots,j_k\}$.
The following lemma provides a Lipschitz-type estimate on $G^{(k)}(t,x)$ given by \eqref{eq:Gin-1}; the proof is reported in the Appendix. \begin{lemma}\label{lem:Gkesti} For each $k=0,1,\ldots,n-1$, assume that the HJB system \eqref{eq:hjbeqn} admits a unique positive (classical) solution $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ for $j\notin\{j_1,\ldots,j_k\}$. Then, for any $x,y\in\mathds{R}^m$ satisfying $x,y\geq\varepsilon e_m^{\top}$ with $\varepsilon>0$, there exists a positive constant $C=C(\varepsilon)$ depending only on $\varepsilon$ such that \begin{align}\label{eq:Gkesti}
\left\|G^{(k)}(t,x)-G^{(k)}(t,y)\right\|\leq C\left\|x-y\right\|. \end{align}
Here $\|\cdot\|$ denotes the Euclidean norm. \end{lemma}
In order to study the existence and uniqueness of solutions to the HJB system \eqref{eq:hjbn-1}, we also need the following comparison result. The proof is deferred to the Appendix. \begin{lemma}\label{lem:comparison} Let $g_{\kappa}(t):=(g_{\kappa i}(t);\ i=1,\ldots,m)^{\top}$ with $\kappa=1,2$ satisfy the following dynamical systems on $[0,T]$, respectively: \begin{align*} \left\{ \begin{aligned} \frac d{dt}g_1(t)=&f(t,g_1(t))+\tilde{f}(t,g_1(t)),\ \text{ in }(0,T];\\ g_1(0)=&\xi_1, \end{aligned} \right.\quad\quad \left\{ \begin{aligned} \frac d{dt}g_2(t)=&f(t,g_2(t)),\ \text{ in }(0,T];\\ g_2(0)=&\xi_2. \end{aligned} \right. \end{align*} Here the functions $f(t,x),\,\tilde{f}(t,x):[0,T]\times\mathds{R}^m\to\mathds{R}^m$ are Lipschitz continuous w.r.t. $x\in\mathds{R}^m$ uniformly in $t\in[0,T]$. The function $f(t,\cdot)$ satisfies the type $K$ condition for each $t\in[0,T]$ (i.e., for any $x,y\in\mathds{R}^m$ satisfying $x\leq y$ and $x_i=y_i$ for some $i=1,\ldots,m$, it holds that $f_i(t,x)\leq f_i(t,y)$ for each $t\in[0,T]$). If $\tilde{f}(t,x)\geq0$ for $(t,x)\in[0,T]\times\mathds{R}^m$ and $\xi_1\geq\xi_2$, then $g_1(t)\geq g_2(t)$ for all $t\in[0,T]$. \end{lemma}
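The lemma can be checked numerically in a toy case (all data below are assumptions): take $f(t,x)=Bx$ with $B$ Metzler, hence of type $K$, a nonnegative constant perturbation $\tilde f$, and ordered initial conditions; the predicted ordering $g_1\geq g_2$ then holds along the whole trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy instance of the comparison lemma: f(t,x) = B x with Metzler B
# (off-diagonal entries >= 0, hence type K), tilde_f >= 0, xi1 >= xi2.
B = np.array([[-1.0, 0.3], [0.4, -0.8]])
tilde_f = np.array([0.2, 0.1])
xi1 = np.array([1.0, 1.0])
xi2 = np.array([0.9, 1.0])
T = 1.0

sol1 = solve_ivp(lambda t, x: B @ x + tilde_f, (0.0, T), xi1, dense_output=True)
sol2 = solve_ivp(lambda t, x: B @ x, (0.0, T), xi2, dense_output=True)

ts = np.linspace(0.0, T, 101)
gap = sol1.sol(ts) - sol2.sol(ts)   # g1 - g2, componentwise
print(gap.min())                    # nonnegative up to solver tolerance
```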
We are now in a position to state the existence and uniqueness result for positive (classical) solutions to the HJB system \eqref{eq:hjbn-1}. \begin{theorem}\label{thm:solutionk} For each $k=0,1,\ldots,n-1$, assume that the HJB system \eqref{eq:hjbeqn} admits a unique positive (classical) solution $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ for $j\notin\{j_1,\ldots,j_k\}$. Then, there exists a unique positive (classical) solution $\varphi^{(k)}(t)$ on $t\in[0,T]$ of the HJB system \eqref{eq:hjbeqn} at the default state $z=0^{j_1,\ldots,j_k}$ (i.e., the HJB system \eqref{eq:hjbn-1} admits a unique positive (classical) solution). \end{theorem}
\noindent{\it Proof.}\quad For a constant $a>0$, consider the following truncated dynamical system given by \begin{align}\label{eq:truneq1} \left\{ \begin{aligned} \frac{d}{dt}\varphi_a^{(k)}(t)=&-A^{(k)}\varphi^{(k)}_a(t)-G^{(k)}_{a}(t,\varphi_a^{(k)}(t)),\ \text{ in }[0,T);\\ \varphi_a^{(k)}(T)=&\frac{1}{\gamma}e_m^{\top}, \end{aligned} \right. \end{align} where the truncated nonlinearity $G_a^{(k)}(t,x):=G^{(k)}(t,x\vee ae_m^{\top})$ for $(t,x)\in[0,T]\times\mathds{R}^m$. Thanks to Lemma~\ref{lem:Gkesti}, there exists a positive constant $C=C(a)$ which depends on $a>0$ only such that for all $t\in[0,T]$, \begin{align}\label{eq:Lip-Ga}
\big\|G_a^{(k)}(t,x)-G_a^{(k)}(t,y)\big\|\leq C\|x-y\|,\qquad x,y\in\mathds{R}^m, \end{align} i.e., $G^{(k)}_a(t,x)$ is globally Lipschitz continuous w.r.t. $x\in\mathds{R}^m$ uniformly in $t\in[0,T]$. By reversing the flow of time, consider $\tilde{\varphi}_a^{(k)}(t):=\varphi_a^{(k)}(T-t)$ for $t\in[0,T]$. Then $\tilde{\varphi}_a^{(k)}(t)$ satisfies the following dynamical system given by \begin{align}\label{eq:truneq2} \left\{ \begin{aligned} \frac{d}{dt}\tilde{\varphi}_a^{(k)}(t)=&A^{(k)}\tilde{\varphi}^{(k)}_a(t)+G^{(k)}_{a}(T-t,\tilde{\varphi}_a^{(k)}(t)),\ \text{ in }(0,T];\\ \tilde{\varphi}_a^{(k)}(0)=&\frac{1}{\gamma}e_m^{\top}. \end{aligned} \right. \end{align} Let $\psi^{(k)}(t)=(\psi_i^{(k)}(t);\ i=1,\ldots,m)^{\top}$ satisfy the following dynamical system: \begin{align}\label{eq:psieqn} \left\{ \begin{aligned} \frac d{dt}\psi^{(k)}(t)=&A^{(k)}\psi^{(k)}(t),\quad\text{ in }(0,T];\\ \psi^{(k)}(0)=&\frac{1}{\gamma}e_m^{\top}. \end{aligned} \right. \end{align} Recall the $m\times m$-dimensional matrix of coefficients $A^{(k)}$ given by \eqref{eq:An-1}. Then, by \eqref{eq:An-1}, we have that $[A^{(k)}]_{ij}=q_{ij}$ for all $i\neq j$. Since~$Q=(q_{ij})_{m\times m}$~is the generator of the Markov chain, it holds that $q_{ij}\geq0$ for all $i\neq j$. Hence $[A^{(k)}]_{ij}\geq0$ for all $i\neq j$, and thus the linear function $A^{(k)}x$ is of type $K$ in $x\in\mathds{R}^m$. Also, since $\psi^{(k)}(0)=\frac{1}{\gamma}e_m^{\top}\gg0$, it follows from Lemma~\ref{lem:sol-hjben2} that the dynamical system \eqref{eq:psieqn} admits a unique (classical) solution $\psi^{(k)}(t)$ on $[0,T]$ and moreover $\psi^{(k)}(t)\gg0$ for all $t\in[0,T]$. Set \begin{align}\label{eq:epsilonk} \varepsilon^{(k)}:=\min_{i=1,\ldots,m}\left\{\inf_{t\in[0,T]}\psi_i^{(k)}(t)\right\}. \end{align} Then, by the continuity of $\psi^{(k)}(t)$ in $t\in[0,T]$ and the fact that $\psi^{(k)}(t)\gg0$ for all $t\in[0,T]$, we have that $\varepsilon^{(k)}>0$. 
Further, by virtue of estimates \eqref{eq:Lip-Ga} and \eqref{eq:G>0} in the Appendix, together with the initial condition $\varphi_a^{(k)}(0)=\psi^{(k)}(0)=\frac{1}{\gamma}e_m^{\top}\gg0$, it follows from Lemma~\ref{lem:comparison} that \begin{align}\label{eq:tildesol} \tilde{\varphi}_a^{(k)}(t)\geq \psi^{(k)}(t)\geq \varepsilon^{(k)}e_m^{\top},\quad \text{for all}\ t\in[0,T]. \end{align} Notice that the positive constant $\varepsilon^{(k)}$ is independent of $a>0$. Then, for $a\in(0,\varepsilon^{(k)})$, it holds that \begin{align*} G_a^{(k)}(T-t,\tilde{\varphi}^{(k)}_a(t))=G^{(k)}\big(T-t,\tilde{\varphi}^{(k)}_a(t)\vee ae_m^{\top}\big)=G^{(k)}(T-t,\tilde{\varphi}^{(k)}_a(t)). \end{align*} This yields that for $a\in(0,\varepsilon^{(k)})$, the function $\tilde{\varphi}^{(k)}_a(t)$ solves the dynamical system given by \begin{align*} \left\{ \begin{aligned} \frac{d}{dt}\tilde{\varphi}_a^{(k)}(t)=&A^{(k)}\tilde{\varphi}^{(k)}_a(t)+G^{(k)}(T-t,\tilde{\varphi}_a^{(k)}(t)),\ \text{ in }(0,T];\\ \tilde{\varphi}_a^{(k)}(0)=&\frac{1}{\gamma}e_m^{\top}. \end{aligned} \right. \end{align*} By the uniqueness of the solution to the dynamical system \eqref{eq:truneq2} and using the estimate \eqref{eq:tildesol}, it follows that, for $a\in(0,\varepsilon^{(k)})$, $\varphi_a^{(k)}(t):=\tilde{\varphi}_a^{(k)}(T-t)$ on $[0,T]$ is the unique (classical) solution to the HJB system \eqref{eq:hjbn-1}. Thus, we complete the proof of the theorem.
$\Box$\\
We next turn to the characterization of the optimal strategy $(\pi^{(k)},l)\in{\cal U}^{(k)}$ at the default state $z=0^{j_1,\ldots,j_k}$ where $k=0,1,\ldots,n-1$. Let us recall the HJB system \eqref{eq:hjbn-1}, i.e., \begin{align*} \left\{ \begin{aligned} \frac d{dt}\varphi^{(k)}(t)=&-A^{(k)}\varphi^{(k)}(t)-G^{(k)}(t,\varphi^{(k)}(t)),\quad\text{ in }[0,T);\\ \varphi^{(k)}(T)=&\frac{1}{\gamma}e_m^{\top}. \end{aligned} \right. \end{align*} Theorem~\ref{thm:solutionk} shows that the above system admits a unique positive (classical) solution $\varphi^{(k)}(t)$ on $[0,T]$ and moreover $\varphi^{(k)}(t)\geq \varepsilon^{(k)}e_m^{\top}$ for all $t\in[0,T]$ using \eqref{eq:tildesol}. Here $\varepsilon^{(k)}>0$ is given by~\eqref{eq:epsilonk}. Then, by virtue of the equality \eqref{eq:Gk2} given in the Appendix, there exists a positive constant $C(\varepsilon^{(k)})$ depending on $\varepsilon^{(k)}>0$ such that for each $i=1,\ldots,m$, \begin{align}\label{eq:Gk3} &G^{(k)}_i(t,\varphi^{(k)}(t,i))\\
&\quad=\sup_{\substack{(\pi^{(k)},l)\in{\cal U}^{(k)}\\\|\pi^{(k)}\|^2+l^2\leq C(\varepsilon^{(k)})}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_{j}^{(k)})^\gamma h_{j}^{(k)}(i)\varphi^{(k+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)\varphi^{(k)}(t,i)\right\}.\nonumber \end{align} Here, for each $i=1,\ldots,m$, $\varphi^{(k+1),j}(t,i)$ on $t\in[0,T]$ is the $i$-th element of the positive (classical) solution $\varphi^{(k+1),j}(t)$ of the HJB system \eqref{eq:hjbeqn} at the default state $z=0^{j_1,\ldots,j_k,j}$ for $j\notin\{j_1,\ldots,j_k\}$. It is not difficult to verify that, for each $i=1,\ldots,m$ and fixed $t\in[0,T]$, \[ \sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_{j}^{(k)})^\gamma h_{j}^{(k)}(i)\varphi^{(k+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)\varphi^{(k)}(t,i) \]
is strictly concave in $(\pi^{(k)},l)\in{\cal U}^{(k)}$. Also notice that the set ${\cal U}^{(k)}\cap\{(\pi^{(k)},l);\ \|\pi^{(k)}\|^2+l^2\leq C(\varepsilon^{(k)})\}$ is compact. Hence, there exists a unique optimum $(\pi^{(k,*)},l^*)\in{\cal U}^{(k)}$ such that \begin{align}\label{eq:optimumk} &(\pi^{(k,*)},l^*)=(\pi^{(k,*)}(t,i),l^*(t,i))\\
&\quad=\argmax_{\substack{(\pi^{(k)},l)\in{\cal U}^{(k)}\\\|\pi^{(k)}\|^2+l^2\leq C(\varepsilon^{(k)})}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_{j}^{(k)})^\gamma h_{j}^{(k)}(i)\varphi^{(k+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)\varphi^{(k)}(t,i)\right\}\nonumber\\ &\quad=\argmax_{(\pi^{(k)},l)\in{\cal U}^{(k)}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_{j}^{(k)})^\gamma h_{j}^{(k)}(i)\varphi^{(k+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)\varphi^{(k)}(t,i)\right\}\nonumber \end{align} for all $i=1,\ldots,m$.
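Concretely, the maximizer \eqref{eq:optimumk} can be computed with any solver for smooth concave problems. The sketch below does this for a single alive stock ($n-k=1$) with assumed scalar coefficients and assumed values for $\varphi^{(k)}(t,i)$ and $\varphi^{(k+1),j}(t,i)$; the objective mirrors the expression inside the $\argmax$:

```python
import numpy as np
from scipy.optimize import minimize

# One-alive-stock (n - k = 1) instance of the argmax in (eq:optimumk).
# The objective mirrors (1-pi)^gamma * h * phi1 + H^(k)((pi, l), i) * phi0
# and is strictly concave; all coefficient values are assumptions.
gamma = 0.5
theta, sigma, phi_, phibar = 0.3, 0.6, 0.4, 0.3
p_, c_, g_, nu_, h_ = 0.8, 0.1, 0.2, 2.0, 0.5
phi0, phi1 = 1.8, 1.6   # assumed values of varphi^(k) and varphi^(k+1),j

def neg_obj(u):
    pi, l = u
    H = (gamma * (pi * theta + (p_ - c_) * l)
         + 0.5 * gamma * (gamma - 1) * (sigma**2 * pi**2
                                        + (phi_**2 + phibar**2) * l**2)
         - gamma * (gamma - 1) * l * pi * sigma * phi_
         + ((1 - l * g_)**gamma - 1) * nu_)
    return -((1 - pi)**gamma * h_ * phi1 + H * phi0)

res = minimize(neg_obj, x0=[0.2, 0.2],
               bounds=[(None, 1.0), (0.0, 1.0 / g_)])  # pi <= 1, 0 <= l <= 1/g
pi_star, l_star = res.x
print(pi_star, l_star)
```

Since $\gamma\in(0,1)$, the concave terms $(1-\pi)^{\gamma}$ and $(1-lg)^{\gamma}$ push the optimum away from the upper boundaries of the constraint set.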
We conclude this section with a verification theorem whose proof is reported in the Appendix.
\begin{theorem}\label{thm:vefi} At any default state $z=0^{j_1,\ldots,j_k}$ for $k=0,1,\ldots,n$, let $\varphi(t,z)$ be the unique positive (classical) solution to the dynamical system of HJB equations \eqref{eq:hjbeqn} (i.e., for $k=n$, $\varphi(t,z)=\varphi(t,e_n)$ is given in Lemma~\ref{lem:sol-hjben} and for $k=0,1,\ldots,n-1$, $\varphi(t,z)=\varphi^{(k)}(t)$ is given in Theorem~\ref{thm:solutionk}). Also let the optimal strategy $(\pi^{*},l^*)=(\pi^*(t,i,z),l^*(t,i,z))$ for $i=1,\ldots,m$ be given by \eqref{eq:lnstar} for $k=n$ and given by \eqref{eq:optimumk} for $k=0,1,\ldots,n-1$. Then, we have that \begin{itemize}
\item[(i)] For $(t,x,i,z)\in[0,T]\times\mathds{R}_+\times\{1,\ldots,m\}\times{\cal S}$, and any admissible feedback strategy $(\pi,l)\in{\cal U}$, it holds that \begin{align*} x^\gamma \varphi(t,i,z)\geq \mathbb{E} \big[U(X^{\pi,l}(T))\mid X^{{\pi},l}(t)=x,Y(t)=i,Z(t)=z\big]. \end{align*} \item[(ii)] The value function $V(t,x,i,z)$ for $(t,x,i,z)\in[0,T]\times\mathds{R}_+\times\{1,\ldots,m\}\times{\cal S}$ admits the following representation \begin{align*}
V(t,x,i,z)= \mathbb{E} \big[U(X^{\pi^*,l^*}(T))\big| X^{\pi^*,l^*}(t)=x,Y(t)=i,Z(t)=z\big]=x^{\gamma}\varphi(t,i,z). \end{align*} \end{itemize} \end{theorem}
\section{Numerical Analysis}\label{sec:numerics}
In this section, we investigate the sensitivity of the optimal stock-investment and risk-control strategies to changes in market parameters. The sensitivity analysis is performed on a simple market model consisting of two defaultable stocks and a riskless bond (i.e., $n=2$). In this market model, it follows from \eqref{eq:P} that the pre-default prices of the stocks are given by \begin{align*} \left\{
\begin{array}{ll}
\frac{dS_1(t)}{S_1(t)}=\{\mu_1(Y(t))+h_1(Y(t),Z(t))\}dt+\sum_{j=1}^2\sigma_{1j}(Y(t))dW_j(t);\\ \\
\frac{dS_2(t)}{S_2(t)}=\{\mu_2(Y(t))+h_2(Y(t),Z(t))\}dt+\sum_{j=1}^2\sigma_{2j}(Y(t))dW_j(t),
\end{array} \right. \end{align*} where $Z:=(Z_1,Z_2)\in{\cal S}=\{0,1\}^2$ is the two-dimensional default state process of stocks and $W$ is a two-dimensional Brownian motion (i.e., $d=2$). The regime-switching process $Y$ is a continuous-time (conservative) Markov chain with state space $\{1,2\}$ (i.e., $m=2$). The claim (risk) per policy in the risk control is then given by \begin{align*} dC(t) &= c(Y(t))dt+\sum_{j=1}^2\phi_j(Y(t))dW_j(t) + \bar{\phi}(Y(t))d\bar{W}(t)\nonumber\\ &\quad+\sum_{(i,z)\in\{1,2\}\times\{0,1\}^2}g(Y(t-))\mathds{1}_{Y(t-)=i,Z(t-)=z}dN_{i,z}(t). \end{align*} Here $\bar{W}$ is a scalar Brownian motion (i.e., $\bar{d}=1$) and $N_{i,z}$ for $(i,z)\in\{1,2\}\times\{0,1\}^2$ are independent Poisson processes with respective intensities $\nu^{z}(i):=\nu(i,z)$. Throughout the section, we use the following benchmark parameters given in Table~\ref{tab:parameters}. In particular, we use the notation $h_k^{z}:=(h_k(1,z),h_k(2,z))$ to represent the vector of default intensities of the $k$-th stock at the default state $z\in\{0,1\}^2$. \begin{table}[htbp] \centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\hline
$\mu(1)$&$\mu(2)$&$r(1)$&$r(2)$&$p(1)$&$p(2)$&$c(1)$&$c(2)$\\
\hline
$(1,0.55)$&$(1.4,0.8)$&0.1&0.06&$0.8$&$0.5$&$0.1$&$0.05$\\
\hline
\hline
$\bar{\phi}(1)$&$\bar{\phi}(2)$&$g(1)$&$g(2)$&$h^{(1,0)}_2$&$h^{(0,1)}_1$&$h^{(0,0)}_1$&$h^{(0,0)}_2$\\
\hline
0.3&0.6&0.2&0.1&$(0.9,1.3)$&$(0.7,1)$&$(0.5,0.75)$&$(0.75,1.1)$\\
\hline
\hline
$\phi(1)$&$\phi(2)$ &$\nu^{(0,0)}(1)$ & $\nu^{(0,0)}(2)$&$\nu^{(1,0)}(1)$&$\nu^{(1,0)}(2)$&$\nu^{(0,1)}(1)$&$\nu^{(0,1)}(2)$\\
\hline
$(0.4,0.8)$&$(0.7,1.2)$ &2 & 3&2.5 &4 &2.3 &3.7\\
\hline
\hline
$\nu^{(1,1)}(1)$&$\nu^{(1,1)}(2)$& & & & & & \\
\hline
2.6&5 & & & & & &\\
\hline
\end{tabular} \caption{Market parameter values}\label{tab:parameters} \end{table} Moreover, we set the risk aversion parameter to $\gamma=0.5$. The generator of the Markov chain $Y$ and the volatility matrices of the stocks are given respectively by \begin{align*} Q=Q_0=\left[\begin{matrix} -0.5&0.5\\ 1&-1\\ \end{matrix}\right],\quad \sigma(1)=\left[\begin{matrix} 0.7&0\\ 0&1\\ \end{matrix}\right],\quad \sigma(2)=\left[\begin{matrix} 1&0\\ 0&1.5\\ \end{matrix}\right]. \end{align*}
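As a small sanity check on the regime model (illustration only), the chain $Y$ with the generator $Q$ above can be simulated by exponential holding times; its stationary distribution is $(2/3,1/3)$, which the empirical occupation times recover:

```python
import numpy as np

# Simulate the two-state regime process Y with generator Q via
# exponential holding times (Gillespie scheme); states 0, 1 = regimes 1, 2.
rng = np.random.default_rng(1)
Q = np.array([[-0.5, 0.5], [1.0, -1.0]])
T = 10_000.0

t, y = 0.0, 0
occ = np.zeros(2)          # occupation times per regime
while t < T:
    rate = -Q[y, y]        # total exit rate of the current regime
    tau = rng.exponential(1.0 / rate)
    occ[y] += min(tau, T - t)
    t += tau
    y = 1 - y              # with two states, the only jump is to the other one

print(occ / T)  # empirical occupation, close to (2/3, 1/3)
```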
\begin{center} \begin{figure}
\caption{\small Dependence of the optimal strategies of stocks and risk control on default intensities at a given regime. Top panel: the dependence of the optimal strategies of stock 1 and risk control on the default intensity of stock 1 in regime 2. The default state $z=(0,1)$. Bottom panel: the dependence of the optimal strategies of stock 2 and risk control on the default intensity of stock 2 in regime 1. The default state $z=(1,0)$.}
\label{fig:l pi to h}
\end{figure} \end{center} We first perform a comparative statics analysis to examine how the default risk premia affect the insurer's optimal strategies of stocks and risk control. Figure \ref{fig:l pi to h} displays the optimal strategy of stocks and risk control in a given regime at different times when the default intensity of a stock varies. Consider first the situation in which stock 1 is alive and stock 2 has defaulted (i.e., the default state $z=(0,1)$). The top left graph of Figure~\ref{fig:l pi to h} indicates that, as stock 1's default intensity in regime 2 becomes higher, i.e., $h_1^{(0,1)}(2)$ increases, the insurer reduces his/her investment in the defaultable stock 1. Recall that, for a fixed regime $i\in\{1,2\}$, $\nu^{z}(i)$ represents the jump intensity of the claim (risk) per policy in the risk control at the default state $z\in\{0,1\}^2$. Under the benchmark parameter configuration, we have $\nu^{(1,1)}(2)>\nu^{(0,1)}(2)$ and $\nu^{(1,1)}(1)>\nu^{(1,0)}(1)$. This implies that a default event can increase the expected number of claims that occur during a fixed period of time. In other words, when the default intensity of a stock increases, not only the defaultable stocks but also the liabilities become riskier. The top right graph of Figure~\ref{fig:l pi to h} shows that, as the default intensity of stock 1 increases, the insurer reduces his/her investment in the stock and at the same time cedes more liabilities to reinsurers, in view of the higher risk in both stocks and liabilities. This line of reasoning is also confirmed by the bottom graphs of Figure~\ref{fig:l pi to h} in the case where stock 1 has defaulted and the default intensity of stock 2 increases (i.e., the default state $z=(1,0)$). \begin{center} \begin{figure}
\caption{\small Dependence of optimal strategies of stocks and risk control in regime 1 on volatility of stocks at different times.}
\label{optm_strtgy_to_sigma}
\end{figure} \end{center}
We next give an illustration of how market volatility impacts the optimal investment strategy of stocks and risk control. Figure \ref{optm_strtgy_to_sigma} plots the optimal strategy of stocks and risk control in regime 1 at different times when the volatility of stocks varies. The default states considered here are $z=(0,1)$ and $(1,0)$. A comparison between the left panel and the right panel of Figure \ref{optm_strtgy_to_sigma} shows that the insurer decreases his/her investment in stocks and allocates a larger proportion of wealth to the liability, when the volatility of stocks increases. This happens because a higher volatility induces the insurer to reduce his/her investment in the defaultable stocks and increase the proportion of wealth allocated to the liability. This can be also confirmed from the right panel of Figure \ref{optm_strtgy_to_sigma}. It also demonstrates that the optimal strategy for the liability is more sensitive to the changes of volatility of stocks than that to the changes in time. Consequently, the above comparison exploits that the optimal strategy of the liability is more sensitive to the changes in risk than that to the changes in time. \begin{center} \makeatletter \deffigure{figure} \makeatother \begin{figure}
\caption{\small Dependence of the optimal strategy of both stocks on the default intensity $h^{(0,0)}_2(1)$ of stock $2$ in regime 1. The current default state $z=(0,0)$, i.e. both stocks are alive. Left panel: dependence of the optimal strategy of stock $1$ on the default intensity $h^{(0,0)}_2(1)$ of stock $2$ in regime 1; Right panel: dependence of the optimal strategy of stock $2$ on the default intensity $h^{(0,0)}_2(1)$ of stock $2$ in regime 1.}
\label{pi_ratio_to_h2}
\end{figure} \end{center}
\begin{center} \makeatletter \deffigure{figure} \makeatother \begin{figure}
\caption{\small Dependence of the optimal strategy of both stocks on the default intensity $h^{(0,0)}_1(2)$ of stock $1$ in regime 2. The current default state $z=(0,0)$, i.e. both stocks are alive. Left panel: dependence of the optimal strategy of stock $2$ in regime $2$ on the default intensity $h^{(0,0)}_1(2)$ of stock $1$ in regime 2; Right panel: dependence of the optimal strategy of stock $1$ in regime $2$ on the default intensity $h^{(0,0)}_1(2)$ of stock $1$ in regime 2.}
\label{pi_ratio_to_h}
\end{figure} \end{center}
\begin{center} \makeatletter \deffigure{figure} \makeatother \begin{figure}
\caption{\small The difference of value functions between two regimes given different generators of Markov chain $Y$ at different default states $z=(0,0)$, $(1,0)$, $(0,1)$ and $(0,0)$.}
\label{varphi_differ}
\end{figure} \end{center}
We finally assess the impact of default contagion on the optimal investment strategy of stocks and the value function respectively. In particular, we explain how to disentangle the direct and indirect (contagion) effects of an increase in the default intensity. Figure \ref{pi_ratio_to_h2} and \ref{pi_ratio_to_h} illustrate how default contagion impacts the investment strategy of the stock. They suggest that when the default intensity of one stock increases, the insurer tends to reduce his/her investment in both stocks when both stocks are alive. This fact reflects the contagion property of default in this model: when one asset has higher default probability, the contagion property of default makes the investor reduce his/her investment in the other asset as well. As it appears from the left panel of Figure~\ref{pi_ratio_to_h2}, when the default contagion of stock 2 increases, the insurer decreases the proportion of wealth allocated to stock 1. This occurs because at the default of stock 2, the default intensity of stock 1 will instantaneously increase (an upward jump in the default intensity from $h_1^{(0,0)}=(0.5,0.75)$ to $h_1^{(0,1)}=(0.7,1)$), inducing a higher default risk of stock 1. Consequently, the risk averse insurer would allocate a smaller proportion of wealth to this stock. Notice that at the default of stock 1, the default intensity of stock 2 will instantaneously increase because there is an upward jump in the default intensity from $h_2^{(0,0)}=(0.75,1.1)$ to $h_2^{(1,0)}=(0.9,1.3)$. The right panel of Figure~\ref{pi_ratio_to_h} confirms a similar trend for stock 2, however, the indirect contagion effect becomes more pronounced for the case of stock 2.
The direct effect of the default intensity is shown in the left panel of Figure \ref{pi_ratio_to_h2} (resp. in the right panel of Figure \ref{pi_ratio_to_h}). For a fixed default intensity of stock 1 (resp. stock 2), the insurer will invest less wealth in stock 1 (resp. stock 2) when the time to maturity decreases. In this regard, it might be noted that the conditional survival probability of stocks $ \mathbb{P} (\tau_i>T|{\cal G}_t)$ with $t<T$ is given by ${\bf1}_{\tau_i>t} \mathbb{E} [e^{-\int_t^Th_i^{Z(s)}(Y(s))ds}|\mathcal{G}_t]$. As expected, this probability is decreasing with respect to the default intensity $h_i^z$ and for shorter time to maturity, all else being equal, the rate of change of this probability with respect to the default intensity becomes smaller (i.e., all else being equal, the conditional survival probability is not too sensitive to the default intensity when $t$ tends to $T$). Therefore, at an increase in the default intensity of stock 1 (resp. stock 2), the insurer tends to decrease investment in stock $1$ (resp. stock $2$) for shorter time to maturity. Moreover, the insurer will allocate less proportion of his/her wealth to stock 1 (resp. stock 2) as the default intensity of stock 1 (resp. stock 2) increases. Similar observation has also been made in Jiao, et al.~\cite{Jiao2013}.
Figure \ref{varphi_differ} depicts the difference of value functions between two regimes at four different default states $z=(0,0)$, $(1,0)$, $(0,1)$ and $(1,1)$ respectively. The graphs in Figure \ref{varphi_differ} confirm how the change of the absolute values of elements in the generator $Q$ affects the difference of value functions between two regimes. At each default state, the difference of value functions between two regimes becomes tinier for larger absolute values of elements in the generator $Q$. This happens because a larger absolute value of the elements in $Q$ will result in a more frequent regime switching of the Markov chain. Consequently, the insurer relies more on his/her investment strategy rather than the regime he/she is in when faced with a market with frequent regime switching.
\section*{Acknowledgments} The authors gratefully acknowledge the constructive and insightful comments provided by one anonymous reviewer and Editor-in-Chief, Prof. Ulrich Horst, which helped to greatly improve the quality of the manuscript. This research of L. Bo and H. Liao was supported in part by the NSF of China under Grant 11471254, The Key Research Program of Frontier Sciences, CAS under Grant QYZDB-SSW-SYS009, and Fundamental Research Funds for Central Universities under Grant WK3470000008.
\appendix \section{Technical Proofs}\label{app:proof1} \renewcommand\theequation{A.\arabic{equation}} \setcounter{equation}{0}
\noindent{\it Proof of Lemma~\ref{lem:sol-hjben2}.}\quad Define $f(x)=Bx$ for $x\in\mathds{R}^m$. By virtue of Proposition 1.1 of Charter 3 in Smith~\cite{smith08}, it suffices to verify that $f:\mathds{R}^m\to\mathds{R}^m$ is of type $K$, i.e., for any $x,y\in\mathds{R}^m$ satisfying $x\leq y$ and $x_i=y_i$ for some $i=1,\ldots,m$, then $f_i(x)\leq f_i(y)$. Notice that $b_{ij}\geq0$ for all $i\neq j$. Then, it holds that \begin{align}\label{eq:111} f_i(x)&=(Bx)_i=\sum_{j=1}^mb_{ij}x_j=b_{ii}x_i+\sum_{j=1,j\neq i}^mb_{ij}x_j\nonumber\\ &=b_{ii}y_i+\sum_{j=1,j\neq i}^mb_{ij}x_j \leq b_{ii}y_i+\sum_{j=1,j\neq i}^mb_{ij}y_j=f_i(y), \end{align} and hence $f$ is of type $K$. Thus, we complete the proof of the lemma.
$\Box$\\
\noindent{\it Proof of Lemma~\ref{lem:sol-hjben}.}\quad The expression of the solution $\varphi(t,e_n)$ given by \eqref{eq:varphien} is obvious. Notice that $e_m\gg0$ and $q_{ij}\geq0$ for all $i\neq j$ since $Q=(q_{ij})_{m\times m}$ is the generator of the Markov chain. Then, in order to prove $\varphi(t,e_n)\gg0$ for all $t\in[0,T]$, using Lemma~\ref{lem:sol-hjben2}, it suffices to verify $[A^{(n)}]_{ij}\geq0$ for all $i\neq j$, however, $[A^{(n)}]_{ij}=q_{ij}$ for all $i\neq j$ using \eqref{eq:Aen}. Thus, we have verified the condition given in Lemma~\ref{lem:sol-hjben2}, and hence $\varphi(t,e_n)\gg0$ for all $t\in[0,T]$.
$\Box$\\
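The order-preservation mechanism behind the lemma can be checked numerically. In this illustrative Python sketch (the matrix entries are hypothetical, not taken from the model), $B$ has nonnegative off-diagonal entries, so the explicit Euler update $x\mapsto(I+\Delta t\,B)x$ is entrywise nonnegative for small $\Delta t$, and positivity of the initial data propagates along the trajectory of $\dot x=Bx$:

```python
def euler_flow(B, x0, T=1.0, steps=10000):
    """Explicit Euler approximation of dx/dt = B x.

    If B has nonnegative off-diagonal entries (a type-K / Metzler
    matrix, as in the lemma) and the step is small enough, the update
    matrix I + dt*B is entrywise nonnegative, so positivity of x is
    preserved along the whole trajectory."""
    dt = T / steps
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        x = [x[i] + dt * sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

# A Metzler matrix: negative diagonal, nonnegative off-diagonal entries
# (hypothetical numbers, chosen only for illustration).
B = [[-2.0, 0.5, 0.3],
     [0.4, -1.5, 0.2],
     [0.1, 0.6, -3.0]]
x = euler_flow(B, [1.0, 0.5, 2.0])
print(x)  # all components remain strictly positive
```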
\noindent{\it Proof of Lemma~\ref{lem:Gkesti}.}\quad It suffices to prove that, for any $x,y\in\mathds{R}^m$ satisfying $x,y\geq\varepsilon e_m^{\top}$ with $\varepsilon>0$, there exists a constant $C=C(\varepsilon)>0$ depending on $\varepsilon>0$ only such that $|G_i^{(k)}(t,x)-G_i^{(k)}(t,y)|\leq C\| x-y\|$ for each $i=1,\ldots,m$. Since $\sigma(i)\sigma(i)^{\top}$ is also positive definite, $\sigma^{(k)}(i)\sigma^{(k)}(i)^{\top}$ is positive definite. Hence, there exists a constant $\delta>0$ such that $(\pi^{(k)})^{\top}\sigma^{(k)}(i)\sigma^{(k)}(i)^{\top}\pi^{(k)}\geq \delta\|\pi^{(k)}\|^2$. Then, for any $(\pi^{(k)},l)\in{\cal U}^{(k)}$, there exists a positive constant $C_1>0$ such that
\begin{align}\label{eq:est1} &\frac{\gamma(\gamma-1)}2\left\{(\pi^{(k)})^{\top}\sigma^{(k)}(i)\sigma^{(k)}(i)^{\top}\pi^{(k)}+l^2\big(\phi(i)\phi(i)^{\top}+\bar{\phi}(i)\bar{\phi}(i)^{\top}\big)-2l(\pi^{(k)})^{\top}\sigma^{(k)}(i)\phi(i)\right\}\nonumber\\
&\qquad=\frac{\gamma(\gamma-1)}2\left\{l^2\bar{\phi}(i)\bar{\phi}(i)^{\top}+\|\sigma^{(k)}(i)^\top\pi^{(k)}-l\phi(i)\|^2\right\}\nonumber\\
&\qquad\leq\frac{\gamma(\gamma-1)}2\left\{\alpha l^2\bar{\phi}(i)\bar{\phi}(i)^{\top}+\frac12(1-\alpha)\|\sigma^{(k)}(i)^\top\pi^{(k)}\|^2-(1-\alpha)\|l\phi(i)\|^2\right\}\nonumber\\
&\qquad=\frac{\gamma(\gamma-1)}2\left\{l^2(\alpha\bar{\phi}(i)\bar{\phi}(i)^{\top}-(1-\alpha)\|\phi(i)\|^2)+\frac12(1-\alpha)\|\sigma^{(k)}(i)^\top\pi^{(k)}\|^2\right\}\nonumber\\
&\qquad\leq-C_1(\|\pi^{(k)}\|^2+l^2), \end{align}
where the constant $\alpha\in(\max_{i=1,\ldots,m}\big\{\frac{\|\phi(i)\|^2}{\bar{\phi}(i)\bar{\phi}(i)^{\top}+\|\phi(i)\|^2}\big\},1)$.
On the other hand, for any $(\pi^{(k)},l)\in{\cal U}^{(k)}$, it holds that \begin{align}\label{eq:est2}
\gamma\big\{(\pi^{(k)})^{\top}\theta^{(k)}(i)+(p^{(k)}(i)-c(i))l\big\}&\leq\gamma\|\theta^{(k)}(i)\|\|\pi^{(k)}\|+\gamma|p^{(k)}(i)-c(i)|l\nonumber\\
&\leq C_2\sqrt{\|\pi^{(k)}\|^2+l^2}, \end{align}
where the constant $C_2:=\max_{i=1,\ldots,m}\big\{\gamma\sqrt{\|\theta^{(k)}(i)\|^2+|p^{(k)}(i)-c(i)|^2}\big\}>0$. Finally, for any $(\pi^{(k)},l)\in{\cal U}^{(k)}$, we have that $\{(1-lg(i))^\gamma-1\}\nu^{(k)}(i)\leq C_3l$, where the constant $C_3:=\max_{i=1,\ldots,m}\{\gamma g(i)\nu^{(k)}(i)\}>0$. Then, by virtue of \eqref{eq:Hk}, it follows that, for any $(\pi^{(k)},l)\in{\cal U}^{(k)}$ and $i=1,\ldots,m$, \begin{align}\label{eq:esti4}
H^{(k)}((\pi^{(k)},l),i)\leq -C_1\big(\|\pi^{(k)}\|^2+l^2\big)+C_4\sqrt{\|\pi^{(k)}\|^2+l^2}. \end{align}
Here $C_4=C_2+C_3$. This yields that there exists a constant $C_5>0$ such that when $(\pi^{(k)},l)\in{\cal U}^{(k)}$ and $\|\pi^{(k)}\|^2+l^2>C_5$, we have $H^{(k)}((\pi^{(k)},l),i)<0$ for all $i=1,\ldots,m$, and meanwhile, for $x\geq \varepsilon e_{m}^{\top}$ with $\varepsilon>0$, \begin{align}\label{eq:est5} &\sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_j^{(k)})^\gamma h_{j}^{(k)}(i)\varphi^{(l+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)x_i\nonumber\\
&\qquad\leq\big(1+\|\pi^{(k)}\|\big)^\gamma \sum_{j\notin\{j_1,\ldots,j_k\}}h_{j}^{(k)}(i)\varphi^{(l+1),j}(t,i)+\varepsilon H^{(k)}((\pi^{(k)},l),i)\nonumber\\
&\qquad\leq C_6\big(1+\|\pi^{(k)}\|^\gamma\big)\sum_{j\notin\{j_1,\ldots,j_k\}}h^{(k)}_{j}(i)+\varepsilon\left\{-C_1(\|\pi^{(k)}\|^2+l^2)+C_4\sqrt{\|\pi^{(k)}\|^2+l^2}\right\}\nonumber\\
&\qquad\leq-\varepsilon C_1\big(\|\pi^{(k)}\|^2+l^2\big)+\varepsilon C_4\sqrt{\|\pi^{(k)}\|^2+l^2}+C_7\big(\sqrt{\|\pi^{(k)}\|^2+l^2}\big)^{\gamma}+C_8, \end{align}
for some constants $C_6,C_7,C_8>0$. Notice that we used the recursive assumption that the HJB system \eqref{eq:hjbeqn} admits a positive unique (classical) solution $\varphi^{(k+1),j}(t)$ on $t\in[0,T]$ for $j\notin\{j_1,\ldots,j_k\}$. Then $\varphi^{(k+1),j}(t)$ is continuous on $[0,T]$, and hence $\varphi^{(k+1),j}(t)$ is bounded on $[0,T]$. From the estimate \eqref{eq:est5}, it follows that, for any $x\geq\varepsilon e_m^{\top}$, there exists a positive constant $C_9=C_9(\varepsilon)$ such that when $(\pi^{(k)},l)\in{\cal U}^{(k)}$, $\|\pi^{(k)}\|^2+l^2>C_9$ and $x\geq\varepsilon e_m^{\top}$, it holds that, for each $i=1,\ldots,m$, \begin{align}\label{eq:est6} &\sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_j^{(k)})^\gamma h^{(k)}_{j}(i)\varphi^{(l+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)x_i<0. \end{align} On the other hand, for $i=1,\ldots,m$, it holds that \begin{align}\label{eq:G>0} G^{(k)}_i(t,x)=&\sup_{(\pi^{(k)},l)\in{\cal U}^{(k)}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_{j}^{(k)})^\gamma h_{j}^{(k)}(i)\varphi^{(k+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)x_i\right\}\nonumber\\ \geq&\sum_{j\notin\{j_1,\ldots,j_k\}} h^{(k)}_{j}(i)\varphi^{(k+1),j}(t,i)+H^{(k)}((0e_{n-k}^{\top},0),i)x_i\nonumber\\ =&\sum_{j\notin\{j_1,\ldots,j_k\}} h_{j}^{(k)}(i)\varphi^{(k+1),j}(t,i)>0. \end{align} Thus, using the estimate \eqref{eq:est6}, we have that, for all $x\geq\varepsilon e_m^{\top}$, \begin{align}\label{eq:Gk2}
G^{(k)}_i(t,x)=&\sup_{\substack{(\pi^{(k)},l)\in{\cal U}^{(k)}\\\|\pi^{(k)}\|^2+l^2\leq C_9(\varepsilon)}}\left\{\sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_{j}^{(k)})^\gamma h_{j}^{(k)}(i)\varphi^{(k+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)x_i\right\}. \end{align} It follows from \eqref{eq:Gk2} and \eqref{eq:Hk} that, for all $x,y\geq\varepsilon e_m^{\top}$, \begin{align}
G^{(k)}_i(t,x)=&\sup_{\substack{(\pi^{(k)},l)\in{\cal U}^{(k)}\\\|\pi^{(k)}\|^2+l^2\leq C_9(\varepsilon)}}\Bigg\{\sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_{j}^{(k)})^\gamma h_{j}^{(k)}(i)\varphi^{(k+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)y_i\nonumber\\ &\qquad\qquad\qquad+H^{(k)}((\pi^{(k)},l),i)(x_i-y_i)\Bigg\}\nonumber\\
\leq&\sup_{\substack{(\pi^{(k)},l)\in{\cal U}^{(k)}\\\|\pi^{(k)}\|^2+l^2\leq C_9(\varepsilon)}}\Bigg\{\sum_{j\notin\{j_1,\ldots,j_k\}}(1-\pi_{j}^{(k)})^\gamma h_{j}^{(k)}(i)\varphi^{(k+1),j}(t,i)+H^{(k)}((\pi^{(k)},l),i)y_i\nonumber\\
&\qquad\qquad\qquad+\left|H^{(k)}((\pi^{(k)},l),i)\right|\left|x_i-y_i\right|\Bigg\}\nonumber\\
\leq&G^{(k)}_i(t,y)+\left|x_i-y_i\right|\sup_{\substack{(\pi^{(k)},l)\in{\cal U}^{(k)}\\\|\pi^{(k)}\|^2+l^2\leq C_9(\varepsilon)}}\left\{\left|H^{(k)}((\pi^{(k)},l),i)\right|\right\}\nonumber\\
\leq&G^{(k)}_i(t,y)+C(\varepsilon)\left|x_i-y_i\right|, \end{align} where $C(\varepsilon)>0$ is a constant which depends on $\varepsilon>0$ only. Then, the above estimate results in the validity of the estimate \eqref{eq:Gkesti} for all $x,y\in\mathds{R}^m$ satisfying $x,y\geq\varepsilon e_m^{\top}$. Thus, we complete the proof of the lemma.
$\Box$\\
\noindent{\it Proof of Lemma~\ref{lem:comparison}.}\quad For $p>0$, let $g_{1}^{(p)}(t)=(g_{1i}^{(p)}(t);\ i=1,\ldots,m)^{\top}$ be the solution to the following dynamical system given by \begin{equation} \left\{ \begin{aligned} \frac d{dt}g_{1}^{(p)}(t)=&f(t,g_{1}^{(p)}(t))+\tilde{f}(t,g^{(p)}_{1}(t))+\frac{1}{p}e_m^{\top},\ \text{ in }(0,T];\\ g_{1}^{(p)}(0)=&\xi_1+\frac{1}{p}e_m^{\top}. \end{aligned} \right. \end{equation} Then, for all $t\in(0,T]$, it holds that \begin{align*}
\|g_{1}^{(p)}(t)-g_1(t)\|\leq&\|g_{1}^{(p)}(0)-g_1(0)\|+\int_0^t\big\|f(s,g_{1}^{(p)}(s))-f(s,g_1(s))\big\|ds\nonumber\\
&+\int_0^t\big\|\tilde{f}(s,g_{1}^{(p)}(s))-\tilde{f}(s,g_1(s))\big\|ds+\frac1p\int_0^t\|e_m\|ds\nonumber\\
\leq&\frac2p\|e_m\|+(C+\tilde{C})\int_0^t\big\|g_{1}^{(p)}(s)-g_1(s)\big\|ds. \end{align*} Here $C>0$ (resp. $\tilde{C}>0$) is the Lipschitz constant of $f(t,x)$ (resp. $\tilde{f}(t,x)$) in $x$. Then, the Gronwall's lemma yields that $g_{1}^{(p)}(t)\to g_1(t)$ for all $t\in[0,T]$ as $p\to\infty$. We claim that $g_{1}^{(p)}(t)\gg g_2(t)$ for all $t\in[0,T]$. If the claim were false, notice that $g_{1}^{(p)}(0)\gg g_2(0)$, and $g_1^{(p)}(t),g_2(t)$ are continuous on $[0,T]$, then there exists a $t_0\in(0,T]$ such that $g_{1}^{(p)}(s)\geq g_2(s)$ on $s\in[0,t_0]$ and $g_{1i}^{(p)}(t_0)=g_{2i}(t_0)$ for some $i\in\{1,\ldots,m\}$. Since $t_0>0$, $g_1^{(p)}(t)$ and $g_2(t)$ are differentiable on $(0,T]$, we have that \begin{align*}
\frac d{dt}g_{1i}^{(p)}(t)\big|_{t=t_0}=\lim_{\epsilon\to0}\frac{g_{1i}^{(p)}(t_0)-g_{1i}^{(p)}(t_0-\epsilon)}{\epsilon}
\leq\lim_{\epsilon\to0}\frac{g_{2i}(t_0)-g_{2i}(t_0-\epsilon)}{\epsilon}= \frac d{dt}g_{2i}(t)\big|_{t=t_0}. \end{align*} On the other hand, since $f(t,\cdot)$ satisfies the type $K$ condition for each $t\in[0,T]$ and $\tilde{f}(t,x)\geq0$ for all $(t,x)\in[0,T]\times\mathds{R}^m$, for the above $i$, we also have that \begin{align}
\frac d{dt}g_{1i}^{(p)}(t)\big|_{t=t_0}=&f_i(t_0,g_{1i}^{(p)}(t_0))+\tilde{f}_i(t_0,g_{1}^{(p)}(t_0))+\frac1p\nonumber\\
>&f_i(t_0,g_{1i}^{(p)}(t_0))\geq f_i(t_0,g_2(t_0))=\frac d{dt}g_{2i}(t)\big|_{t=t_0}. \end{align} This results in a contradiction, and hence $g_{1}^{(p)}(t)\gg g_2(t)$ for all $t\in[0,T]$. Thus, it holds that $g_1(t)\geq g_2(t)$ for all $t\in[0,T]$ by letting $p$ tend to infinity.
$\Box$\\
\noindent{\it Proof of Theorem~\ref{thm:vefi}.}\quad For $(t,x,i,z)\in[0,T]\times\mathds{R}_+\times\{1,\ldots,m\}\times{\cal S}$, note that $\varphi(T,i,z)=\frac{1}{\gamma}$. Then, by virtue of It\^o's formula, for all $(\pi,l)\in\tilde{\cal U}$, it follows that \begin{align*} &\frac{1}{\gamma}(X^{\pi,l}(T))^\gamma=(X^{\pi,l}(t))^\gamma \varphi(t,Y(t),Z(t))+\int_t^T(X^{\pi,l}(s))^\gamma\frac{\partial \varphi(s,Y(s),Z(s))}{\partial s}ds\nonumber\\ &\quad+\int_t^T\gamma (X^{\pi,l}_s)^{\gamma-1}\varphi(s,Y(s),Z(s))dX^{\pi,l}(s)^c\\ &\quad+\frac{\gamma(\gamma-1)}{2}\int_t^T(X^{\pi,l}(s))^{\gamma-2}\varphi(s,Y(s),Z(s))d[X^{\pi,l},X^{\pi,l}]^c(s)\\ &\quad+\int_t^T\varphi(s,Y(s-),Z(s-))(X^{\pi,l}(s-))^\gamma[(1-l(s)g(Y(s-)))^\gamma-1]dN(s)\\ &\quad+\sum_{j=1}^n\int_t^T(X^{\pi,l}(s-))^\gamma\big[(1-\pi_j(s-))^\gamma \varphi(s,Y(s-),Z^j(s-))-\varphi(s,Y(s-),Z(s-))\big]dZ_j(s)\\ &\quad+\int_t^T\sum_{j\neq Y(s-)}(X^{\pi,l}(s-))^\gamma \big[\varphi(s,j,Z(s-))-\varphi(s,Y(s-),Z(s-))\big]dH_{Y(s-),j}(s)\\ &\quad=(X^{\pi,l}(t))^\gamma \varphi(t,Y(t),Z(t))+\int_t^T(X^{\pi,l}(s))^\gamma{\cal A}(\pi,l;s,Y(s),Z(s))ds+M^{\pi,l}(T)-M^{\pi,l}(t). 
\end{align*} Here for $(\pi,l)\in(-\infty,1]^n\times[0,\infty)$ and $(t,i,z)\in[0,T]\times\{1,\ldots,m\}\times{\cal S}$, the coefficient is given by \begin{align*} &{\cal A}(\pi,l;t,i,z)\nonumber\\ &\quad=\frac{\partial\varphi(t,i,z)}{\partial t}+\Bigg\{\gamma\Big[r(i)+\pi^{\top}(I-diag(z))\theta(i,z)+\pi^{\top}(I-diag(z))h(i,z)+(p(i,z)-c(i))l\Big]\\ &\qquad+\frac{\gamma(\gamma-1)}2\Big[\pi^{\top}(I-diag(z))\sigma(i)\sigma(i)^{\top}(I-diag(z))\pi+l^2\big(\phi(i)\phi(i)^{\top}+\bar{\phi(i)}\bar{\phi(i)}^{\top}\big)\\ &\qquad-2l\pi^{\top}(I-diag(z))\sigma(i)\phi(i)^{\top}\Big]+[(1-lg(i))^\gamma-1]\nu(i,z)\bigg\}\varphi(t,i,z)\\ &\qquad+\sum_{j=1}^n[(1-\pi_j)^\gamma \varphi(t,i,z^j)-\varphi(t,i,z)](1-z_j)h_j(i,z)+\sum_{j\neq i}[\varphi(t,j,z)-\varphi(t,i,z)]q_{ij}, \end{align*} and the $ \mathbb{P} $-(local) martingale is defined as \begin{align*} M^{\pi,l}(t)=&\int_0^t\gamma(X^{\pi,l}(s))^\gamma \varphi(s,Y(s),Z(s))\big[\pi(s)^{\top}(I-diag(Z(s)))\sigma(Y(s))-l(s)\phi(Y(s))\big]dW(s)\\ &+\int_0^t\gamma(X^{\pi,l}(s))^\gamma \varphi(s,Y(s),Z(s))l(s)\bar{\phi}(Y(s))d\bar{W}(s)\\ &+\int_0^t(X^{\pi,l}(s))^\gamma \varphi(s,Y(s-),Z(s-))[(1-l(s)g(Y(s-)))^\gamma-1]d\tilde{N}(s)\\ &+\sum_{j=1}^n\int_0^T(X^{\pi,l}(s-))^\gamma[(1-\pi_j(s))^\gamma \varphi(s,Y(s-),Z(s-)^j)-\varphi(s,Y(s-),Z(s-))]dM_j(s)\\ &+\int_0^t\sum_{j\neq Y(s-)}(X^{\pi,l}(s-))^\gamma \big[\varphi(s,j,Z(s-))-\varphi(s,Y(s-),Z(s-))]d\tilde{H}_{Y(s-),j}(s), \end{align*} where we used the following $ \mathbb{P} $-martingale processes given by, for $t\in[0,T]$, \begin{align*} \tilde{N}(t)&:=N(t)-\int_0^t\nu(Y(s),Z(s))ds,\quad \tilde{H}_{ij}(t):=H_{ij}(t)-\int_0^tq_{ij}\mathds{1}_{Y(s)=i}ds, \end{align*}
for all $i,j\in\{1,\ldots,m\}$ and $i\neq j$. Here, we recall that the process $H_{ij}(t)$ is defined by \eqref{eq:Hil}. Using \eqref{eq:hjbeqn}, \eqref{eq:lnstar} and \eqref{eq:optimumk}, for $t\in[0,T]\times\{1,\ldots,m\}\times{\cal S}$, we have that ${\cal A}(\pi,l;t,i,z)\leq {\cal A}(\pi^*,l^*;t,i,z)=0$ for all $(\pi,l)\in{\cal U}$. Moreover, define $\tau_a:=\inf\{s\geq t;\ |X^{\pi,l}(s)|>a\}$ for $a>0$. Eq.~\eqref{eq:sdeX2} gives that, for $s\in[t,T]$, \begin{align}\label{diff} X^{\pi,l}(s\wedge\tau_a)=&X^{\pi,l}(s\wedge\tau_a-)\\ &\times[1-\tilde{\pi}^\top(s\wedge\tau_a)\Delta M(s\wedge\tau_a)-\tilde{l}(s\wedge\tau_a)g(Y(s\wedge\tau_a-))\Delta N(s\wedge\tau_a)],\nonumber \end{align} where the feedback controls are given by \begin{align*} \tilde{\pi}(s\wedge\tau_a)=&\pi\big(s\wedge\tau_a,X^{\pi,l}(s\wedge\tau_a-),Y(s\wedge\tau_a-),Z(s\wedge\tau_a-)\big),\\ \tilde{l}(s\wedge\tau_a)=&l\big(s\wedge\tau_a,X^{\pi,l}(s\wedge\tau_a-),Y(s\wedge\tau_a-),Z(s\wedge\tau_a-)\big). \end{align*} Notice that $(\pi,l)\in{\cal U}$ is locally bounded, and hence \begin{align*}
|\tilde{\pi}(s\wedge\tau_a)|+|\tilde{l}(s\wedge\tau_a)|\leq C_1(\pi,l,a,T),\quad s\in[t,T]. \end{align*}
The positive constant $C_1$ depends on $(\pi,l)$, $a$ and $T$ only. Since $|\Delta M|\vee|\Delta N|\leq 1$, it follows that \begin{align*}
|X^{\pi,l}(s\wedge\tau_a)|\leq C_2(\pi,l,a,T),\quad s\in[t,T], \end{align*} where $C_2$ is a positive constant which depends on $(\pi,l)$, $a$ and $T$ only. This implies that $M^{\pi,l}(\cdot\wedge\tau_a)$ is a $ \mathbb{P} $-martingale. Hence, it holds that \begin{align} \mathbb{E} _{t,x,i,z}\left[U(X^{\pi,l}(T\wedge\tau_a)\big)\right]&\leq x^\gamma \varphi(t,i,z)+ \mathbb{E} _{t,x,i,z}\left[{M}^{\pi,l}(T\wedge\tau_a)-{M}^{\pi,l}(t)\right]\nonumber\\ &=x^\gamma \varphi(t,i,z), \end{align} where we set $ \mathbb{E} _{t,i,z}[\cdot]:= \mathbb{E} [\cdot\mid X^{\pi,l}(t)=x,Y(t)=i,Z(t)=z]$ for $(t,x,i,z)\in[0,T]\times\mathds{R}_+\times\{1,\ldots,m\}\times{\cal S}$. It follows from Fatou's lemma that \begin{align*} \mathbb{E} _{t,x,i,z}[U(X^{\pi,l}(T))]\leq \varliminf_{a\to\infty} \mathbb{E} _{t,x,i,z}[U(X^{\pi,l}(T\wedge\tau_a))]\leq x^\gamma \varphi(t,i,z). \end{align*} This verifies the validity of the conclusion (i).
We next prove the conclusion (ii). In fact, recall that the optimal feedback strategy $(\pi^{*},l^*)=(\pi^*(t,i,z),l^*(t,i,z))$ for $i=1,\ldots,m$ is given by \eqref{eq:lnstar} for $k=n$ and given by \eqref{eq:optimumk} for $k=0,1,\ldots,n-1$. Then, there exists a constant $C>0$ which is independent of $(t,i,z)$ such that
$\|\pi^*(t,i,z)\|^2+|l^*(t,i,z)|^{2}\leq C$ for all $(t,i,z)\in[0,T]\times\{1,\ldots,m\}\times{\cal S}$. We next estimate $ \mathbb{E} [(X^{\pi^*,l^*}(T\wedge\tau_a))^{2\gamma}]$. First of all, the dynamics of the wealth process can be rewritten as, for $s\in[t,T]$, \begin{align*} &dX^{\pi^*,l^*}(s)=X^{\pi^*,l^*}(s)[r(Y_{s})+\pi^{*}(s,Y(s),Z(s))^{\top}\theta(Y(s),Z(s))\nonumber\\ &\quad+l^*(s,Y(s),Z(s))(p(Y(s),Z(s))-c(Y(s)))]ds\nonumber\\ &\quad+X^{\pi^*,l^*}(s)[\pi^{*}(s,Y(s),Z(s))^{\top}\sigma(Y(s))-l^*(s,Y(s),Z(s))\phi(Y(s))]dW(s)\nonumber\\ &\quad-X^{\pi^*,l^*}(s)l^*(s,Y(s),Z(s))\bar{\phi}(Y(s))d\bar{W}(s)\nonumber\\ &\quad-X^{\pi^*,l^*}(s-)\pi^{*}(s,Y(s),Z(s))^{\top}dZ(s)-l^*(s-,Y(s-),Z(s-))X^{\pi^*,l^*}(s-)g(Y(s-))dN(s). \end{align*} Then, It\^o's formula yields that for $u\in[t,T]$, \begin{align*} (X^{\pi^*,l^*}(u))^{2\gamma}=&(X^{\pi^*,l^*}(t))^{2\gamma}+\tilde{M}^{\pi^*,l^*}(u)-\tilde{M}^{\pi^*,l^*}(t)\nonumber\\ &+\int_t^u(X^{\pi^*,l^*}(s))^{2\gamma}\tilde{\cal A}(\pi^*(s,Y(s),Z(s)),l^*(s,Y(s),Z(s));Y(s),Z(s))ds. \end{align*} Here, for $(\pi,l)\in(-\infty,1]^n\times[0,\infty)$ and $(i,z)\in\{1,\ldots,m\}\times{\cal S}$, \begin{align*} \tilde{\cal A}(\pi,l;i,z)=&{2\gamma}\big[r(i)+\pi^{\top}(I-diag(z))\theta(i,z)+(p(i,z)-c(i))l\big]\\ &+{\gamma}({2\gamma}-1)\big[\pi^{\top}(I-diag(z))\sigma(i)\sigma(i)^{\top}(I-diag(z))\pi+l^2\big(\phi(i)\phi(i)^{\top}+\bar{\phi}(i)\bar{\phi}(i)^{\top}\big)\\ &-2l\pi^{\top}(I-diag(z))\sigma(i)\phi(i)^{\top}\big]+[(1-lg(i))^{2\gamma}-1]\nu(i,z)\\ &+\sum_{j=1}^n[(1-\pi_j)^{2\gamma}-1](1-z_j)h_j(i,z). 
\end{align*} The $ \mathbb{P} $-(local) martingale is given by, for $t\in[0,T]$, \begin{align*} \tilde{M}^{\pi^*,l^*}(t):=&\int_0^t(X^{\pi^*,l^*}(s))^{2\gamma}[\pi^{*}(s,Y(s),Z(s))^{\top}\sigma(Y(s))-l^*(s,Y(s),Z(s))\phi(Y(s))]dW(s)\nonumber\\ &-\int_0^t(X^{\pi^*,l^*}(s))^{2\gamma}l^*(s,Y(s),Z(s))\bar{\phi}(Y(s))d\bar{W}(s)\nonumber\\ &+\sum_{j=1}^n\int_0^t(X^{\pi^*,l^*}(s-))^{2\gamma}[(1-\pi_j^*(s,Y(s-),Z(s-)))^{2\gamma}-1]dM_j(s)\nonumber\\ &+\int_0^t(X^{\pi^*,l^*}(s-))^{2\gamma}[(1-lg(Y(s-)))^{2\gamma}-1]d\tilde{N}(s). \end{align*}
As above, we have that $\|\pi^*(t,i,z)\|^2+|l^*(t,i,z)|^{2}\leq C$ for all $(t,i,z)\in[0,T]\times\{1,\ldots,m\}\times{\cal S}$, and hence \[
|(1-l^*(t,i,z)g(i))^{2\gamma}-1|\leq(1+\gamma)(|l^*(t,i,z)g(i)|^2+|l^*(t,i,z)g(i)|). \] Then, there exists a constant $C>0$ such that for all $(t,i,z)\in[0,T]\times\{1,\ldots,m\}\times{\cal S}$, \[
|\tilde{\cal A}(\pi^*(t,i,z),l^*(t,i,z);i,z)|\leq C. \] Thus, we have that for all $t\in[0,T]$, \begin{align*} & \mathbb{E} _{t,x,i,z}\big[(X^{\pi^*,l^*}(T\wedge\tau_a))^{2\gamma}\big] =x^{2\gamma}\nonumber\\ &\qquad+ \mathbb{E} _{t,x,i,z}\left[\int_t^{T\wedge\tau_a}(X^{\pi^*,l^*}(s))^{2\gamma}\tilde{\cal A}(\pi^*(s,Y(s),Z(s)),l^*(s,Y(s),Z(s));Y(s),Z(s))ds\right]\\
&\quad\leq x^{2\gamma}+ \mathbb{E} _{t,x,i,z}\left[\int_t^{T}(X^{\pi^*,l^*}(s\wedge\tau_a))^{2\gamma}\big|\tilde{\cal A}(\pi^*(s,Y(s),Z(s)),l^*(s,Y(s),Z(s));Y(s),Z(s))\big|ds\right]\\ &\quad\leq x^{2\gamma}+C\int_t^{T} \mathbb{E} _{t,x,i,z}[(X^{\pi^*,l^*}(s\wedge\tau_a))^{2\gamma}]ds. \end{align*} Gronwall's inequality yields that \[ \sup_{a\in\mathds{R}_+} \mathbb{E} _{t,x,i,z}\big[(X^{\pi^*,l^*}(T\wedge\tau_a))^{2\gamma}\big]\leq x^{2\gamma}e^{CT}, \] and hence $\{(X^{\pi^*,l^*}(T\wedge\tau_a))^{\gamma}\}_{a\in\mathds{R}_+}$ is uniformly integrable. This yields that \begin{align*} V(t,x,i,z)= \mathbb{E} _{t,x,i,z}[U(X^{\pi^*,l^*}(T))]=\lim_{a\to\infty} \mathbb{E} _{t,x,i,z}[U(X^{\pi^*,l^*}(T\wedge\tau_a))]= x^\gamma \varphi(t,i,z). \end{align*} This verifies the validity of the conclusion (ii).
$\Box$
\end{document}
\begin{document}
\title{Optimizing a Generalized Gini Index in Stable Marriage Problems: NP-Hardness, Approximation and a Polynomial Time Special Case}
\begin{abstract} This paper deals with fairness in stable marriage problems. The idea studied here is to achieve fairness thanks to a Generalized Gini Index (GGI), a well-known criterion in inequality measurement, that includes both the egalitarian and utilitarian criteria as special cases. We show that determining a stable marriage optimizing a GGI criterion of agents' disutilities is an NP-hard problem. We then provide a polynomial time 2-approximation algorithm in the general case, as well as an exact algorithm which is polynomial time in the case of a constant number of non-zero weights parametrizing the GGI criterion. \keywords{Stable marriage problem \and Fairness \and Generalized Gini index \and Complexity}
\end{abstract}
\section{Introduction}
Since the seminal work of Gale and Shapley~[\citeyear{gale1962college}] on stable marriages, matching problems under preferences have been extensively studied both by economists and computer scientists. These problems involve two sets of agents (also called individuals in the sequel) that should be matched with each other while taking the agents' preferences into account. The results obtained in the field have a tremendous number of applications, among which are the National Resident Matching Program in the US (for allocating junior doctors to hospitals), the teacher allocation in France (for allocating newly tenured teachers to schools) and the allocation of lawyers in Germany (for assigning graduating lawyers to legal internship positions). For an overview of the applications of matching models under preferences, the interested reader can refer to a recent book chapter on this topic \citep{biro2017applications}.
The \emph{stable marriage problem} involves $n$ men and $n$ women, each of whom ranks the members of the opposite sex in order of preference. The goal is to find a \emph{stable} matching, i.e., a matching between men and women such that there is no man and woman who prefer each other to their current partners. Gale and Shapley~[\citeyear{gale1962college}] provided an algorithm that computes a stable marriage. However, it is well known that this algorithm favours one group (men or women, according to the way the algorithm is applied) over the other.
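For concreteness, the man-proposing version of the Gale--Shapley deferred acceptance algorithm can be sketched as follows; this is a minimal Python illustration on a hypothetical $2\times 2$ instance (the names are ours), and it returns the man-optimal stable matching, which is precisely why the proposing side is favoured:

```python
def gale_shapley(men_prefs, women_prefs):
    """Man-proposing deferred acceptance (Gale-Shapley, 1962).

    men_prefs[m] / women_prefs[w] are complete preference lists, most
    preferred first.  Returns a stable matching as a dict mapping each
    man to a woman; the output is man-optimal."""
    # rank[w][m] = position of m in w's list (lower is better)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)              # men not yet engaged
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                        # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])     # w trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)              # w rejects m
    return {m: w for w, m in engaged.items()}

men = {'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']}
women = {'w1': ['m2', 'm1'], 'w2': ['m1', 'm2']}
print(gale_shapley(men, women))  # m2 obtains his top choice w1
```

Swapping the roles of men and women yields the woman-optimal stable matching, which in general differs from the man-optimal one.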
We are interested here in \emph{fair} stable marriage algorithms, i.e., in procedures favouring stable marriages that fairly share dissatisfactions --also called disutilities-- among individuals (irrespective of their sex), the dissatisfaction being defined for each woman (resp. man) as a function of the rank, in her (resp. his) order of preferences, of the man (resp. woman) with whom she (resp. he) is paired. Given the vector of individuals' dissatisfactions induced by a matching, there are several ways of formalizing the notion of ``fairness''. We mean here by fair stable marriage that the vector of individuals' dissatisfactions should be well balanced. For example, consider the following instance of the stable marriage problem.
\begin{ex} The instance consists of 10 men $\{m_1,\ldots,m_{10}\}$ and women $\{w_1,\ldots,w_{10}\}$ with the following preferences, where $i \succ^m_k j$ (resp. $i \succ^w_k j$) means that $m_k$ (resp. $w_k$) prefers $w_i$ to $w_j$ (resp. $m_i$ to $m_j$):
{\small \begin{center} \begin{align*} & m_1 : 1 \succ^m_1 2 \succ^m_1 3 \succ^m_1 4 \succ^m_1 5 \succ^m_1 6 \succ^m_1 7 \succ^m_1 8 \succ^m_1 9 \succ^m_1 10 \\ & m_2 : 2 \succ^m_2 1 \succ^m_2 3 \succ^m_2 4 \succ^m_2 5 \succ^m_2 6 \succ^m_2 7 \succ^m_2 8 \succ^m_2 9 \succ^m_2 10 \\ & m_3 : 3 \succ^m_3 1 \succ^m_3 2 \succ^m_3 4 \succ^m_3 5 \succ^m_3 6 \succ^m_3 7 \succ^m_3 8 \succ^m_3 9 \succ^m_3 10 \\ & m_4 : 7 \succ^m_4 1 \succ^m_4 2 \succ^m_4 3 \succ^m_4 6 \succ^m_4 4 \succ^m_4 5 \succ^m_4 8 \succ^m_4 9 \succ^m_4 10 \\ & m_5 : 6 \succ^m_5 1 \succ^m_5 2 \succ^m_5 3 \succ^m_5 7 \succ^m_5 4 \succ^m_5 5 \succ^m_5 8 \succ^m_5 9 \succ^m_5 10 \\ & m_6 : 4 \succ^m_6 1 \succ^m_6 2 \succ^m_6 3 \succ^m_6 5 \succ^m_6 7 \succ^m_6 6 \succ^m_6 8 \succ^m_6 9 \succ^m_6 10 \\ & m_7 : 5 \succ^m_7 1 \succ^m_7 2 \succ^m_7 3 \succ^m_7 4 \succ^m_7 7 \succ^m_7 6 \succ^m_7 8 \succ^m_7 9 \succ^m_7 10 \\ & m_8 : 8 \succ^m_8 4 \succ^m_8 5 \succ^m_8 6 \succ^m_8 10 \succ^m_8 7 \succ^m_8 1 \succ^m_8 2\succ^m_8 3 \succ^m_8 9 \\ & m_9 : 10 \succ^m_9 4 \succ^m_9 6 \succ^m_9 7 \succ^m_9 9 \succ^m_9 5 \succ^m_9 1 \succ^m_9 2\succ^m_9 3 \succ^m_9 8\\ & m_{10} : 9 \succ^m_{10} 4 \succ^m_{10} 5 \succ^m_{10} 7 \succ^m_{10} 8 \succ^m_{10} 6 \succ^m_{10} 1 \succ^m_{10} 2\succ^m_{10} 3 \succ^m_{10} 10 \end{align*} \end{center}
\begin{center} \begin{align*} &w_1:1 \succ^w_1 2 \succ^w_1 3 \succ^w_1 4 \succ^w_1 5 \succ^w_1 6 \succ^w_1 7 \succ^w_1 8 \succ^w_1 9 \succ^w_1 10 \\ &w_2:1 \succ^w_2 2 \succ^w_2 3 \succ^w_2 4 \succ^w_2 5 \succ^w_2 6 \succ^w_2 7 \succ^w_2 8 \succ^w_2 9 \succ^w_2 10 \\ &w_3:1 \succ^w_3 2 \succ^w_3 3 \succ^w_3 4 \succ^w_3 5 \succ^w_3 6 \succ^w_3 7 \succ^w_3 8 \succ^w_3 9 \succ^w_3 10 \\ &w_4:1 \succ^w_4 2 \succ^w_4 3 \succ^w_4 7 \succ^w_4 8 \succ^w_4 9 \succ^w_4 6 \succ^w_4 4 \succ^w_4 5 \succ^w_4 10 \\ &w_5:1 \succ^w_5 2 \succ^w_5 3 \succ^w_5 6 \succ^w_5 8 \succ^w_5 9 \succ^w_5 7 \succ^w_5 4 \succ^w_5 5 \succ^w_5 10 \\ &w_6:1 \succ^w_6 2 \succ^w_6 3 \succ^w_6 4 \succ^w_6 8 \succ^w_6 9 \succ^w_6 5 \succ^w_6 6 \succ^w_6 7 \succ^w_6 10 \\ &w_7:1\succ^w_7 2 \succ^w_7 3 \succ^w_7 5 \succ^w_7 8 \succ^w_7 9 \succ^w_7 4 \succ^w_7 6 \succ^w_7 7 \succ^w_7 10 \\ &w_8: 2 \succ^w_8 10 \succ^w_8 8 \succ^w_8 7 \succ^w_8 1 \succ^w_8 3 \succ^w_8 4 \succ^w_8 5 \succ^w_8 6 \succ^w_8 9 \\ &w_9: 1 \succ^w_9 2 \succ^w_9 9 \succ^w_9 10 \succ^w_9 3 \succ^w_9 4 \succ^w_9 5 \succ^w_9 6 \succ^w_9 7 \succ^w_9 8\\ &w_{10}:1 \succ^w_{10} 2 \succ^w_{10} 3 \succ^w_{10} 4 \succ^w_{10} 5 \succ^w_{10} 6 \succ^w_{10} 7 \succ^w_{10} 10 \succ^w_{10} 8 \succ^w_{10} 9 \end{align*} \end{center} }
The stable marriages in this instance are:
{\small \begin{align*} \mathbf{x}^1:~& \{(m_1,w_1),(m_2,w_2),(m_3,w_3),(m_4,w_7),(m_5,w_6),(m_6,w_4),(m_7,w_5),\\&(m_8,w_8),(m_9,w_{10}),(m_{10},w_9)\}\\ \mathbf{x}^2:~& \{(m_1,w_1),(m_2,w_2),(m_3,w_3),(m_4,w_7),(m_5,w_6),(m_6,w_5),(m_7,w_4),\\&(m_8,w_8),(m_9,w_{10}),(m_{10},w_9)\}\\ \mathbf{x}^3:~& \{(m_1,w_1),(m_2,w_2),(m_3,w_3),(m_4,w_6),(m_5,w_7),(m_6,w_4),(m_7,w_5),\\&(m_8,w_8),(m_9,w_{10}),(m_{10},w_9)\}\\ \mathbf{x}^4:~& \{(m_1,w_1),(m_2,w_2),(m_3,w_3),(m_4,w_6),(m_5,w_7),(m_6,w_5),(m_7,w_4),\\&(m_8,w_8),(m_9,w_{10}),(m_{10},w_9)\}\\ \mathbf{x}^5:~& \{(m_1,w_1),(m_2,w_2),(m_3,w_3),(m_4,w_6),(m_5,w_7),(m_6,w_5),(m_7,w_4),\\&(m_8,w_{10}),(m_9,w_9),(m_{10},w_8)\}\\ \end{align*} }
where a pair $(m_i,w_j)$ means that $m_i$ and $w_j$ are matched.
\noindent If one assumes that the dissatisfaction of an individual is equal to the rank of the partner in his/her preference list, then the dissatisfactions induced by the previous stable marriages are:
{\small \begin{center} \begin{tabular}{ccC{1.75cm}C{1.75cm}} \hline matching & vector of dissatisfactions & sum of$~~~~~~~~~~$ dissatisfactions & max of$~~~~~~~~~~$ dissatisfactions \\ \hline $\mathbf{x}^1$ & $(1,1,1,1,1,1,1,1,1,1,1,2,3,7,7,7,7,3,4,10)$ & 61 & 10\\ $\mathbf{x}^2$ & $(1,1,1,1,1,5,5,1,1,1,1,2,3,4,4,7,7,3,4,10)$ & 63 & 10\\ $\mathbf{x}^3$ & $(1,1,1,5,5,1,1,1,1,1,1,2,3,7,7,4,4,3,4,10)$ & 63 & 10\\ $\mathbf{x}^4$ & $(1,1,1,5,5,5,5,1,1,1,1,2,3,4,4,4,4,3,4,10)$ & 65 & 10\\ $\mathbf{x}^5$ & $(1,1,1,5,5,5,5,5,5,5,1,2,3,4,4,4,4,2,3,9)$ & 74 & 9\\ \hline \end{tabular} \end{center} }
where the $i^{th}$ component of the vector is the dissatisfaction of $m_i$ for $i \in \{1,\ldots,10\}$, and of $w_{i-10}$ for $i \in \{11,\ldots,20\}$.
\label{exGILBERT} \end{ex}
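As an illustration (not part of the formal development), the sum and max of dissatisfactions reported in the table above can be computed as follows. This is a minimal Python sketch; the dictionary encoding of preference lists and matchings is our assumption, not the paper's notation.

```python
# Illustrative sketch: the utilitarian (sum) and egalitarian (max) scores of a
# matching, where each agent's dissatisfaction is the rank of his/her partner
# in his/her preference list (1 = most preferred).

def dissatisfaction(pref_list, partner):
    """Rank of `partner` in `pref_list` (1-indexed)."""
    return pref_list.index(partner) + 1

def scores(men_prefs, women_prefs, matching):
    """`matching` maps each man to his partner; `men_prefs` / `women_prefs`
    map each agent to an ordered list of potential partners.  Returns the
    (sum, max) of all 2n dissatisfactions."""
    d = []
    for m, w in matching.items():
        d.append(dissatisfaction(men_prefs[m], w))
        d.append(dissatisfaction(women_prefs[w], m))
    return sum(d), max(d)
```

For instance, on a toy 2-agent instance where both men rank $w_1$ first and both women rank $m_1$ first, the matching $\{(m_1,w_1),(m_2,w_2)\}$ yields a sum of 6 and a max of 2.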
In this instance, the matching $\mathbf{x}^4$ can be considered as inducing a \emph{well-balanced} vector of dissatisfactions. The matchings $\mathbf{x}^1$, $\mathbf{x}^2$ and $\mathbf{x}^3$ indeed favour some individuals (the men in this case) more than others, while matching $\mathbf{x}^5$ yields quite high dissatisfactions for numerous agents. The matching $\mathbf{x}^4$ is therefore a good compromise between the \emph{utilitarian} and the \emph{egalitarian} viewpoints, where the utilitarian viewpoint aims at minimizing the sum of dissatisfactions while the egalitarian viewpoint aims at minimizing the dissatisfaction of the worst-off individual. Both the utilitarian and egalitarian approaches have been advocated for promoting fairness in the stable marriage problem \citep{gusfield1987three,gusfield1989stable}. Other approaches aim at treating men and women equally, by minimizing the absolute difference between the total dissatisfactions of the two groups (\emph{sex-equal} stable marriage problem \citep{kato1993complexity,mcdermid2014sex}) or by minimizing the maximum total dissatisfaction between the two groups (\emph{balanced} stable marriage problem \citep{manlove2013algorithmics}). However, note that, in the instance of Example~\ref{exGILBERT}, all these criteria favour either $\mathbf{x}^1$ (utilitarian) or $\mathbf{x}^5$ (egalitarian, sex-equal, balanced). Finally, there exists another type of approach, which is not based on assigning scores to marriages. In a first step, for each man, one lists all his possible matches in a stable marriage, in order of his preferences (this list includes as many elements as there are feasible stable marriages). In a second step, each man is matched with the median woman in the list. This procedure yields a stable marriage, which is called \emph{median} stable marriage \citep{teo1998geometry,cheng2010understanding}. In the instance of the example, the median stable marriage is $\mathbf{x}^4$. Nevertheless, in this article, we focus on determining a fair stable marriage by using a \emph{scoring rule}.
In social choice theory, a scoring rule assigns a score to each alternative by summing the scores given by every individual over the alternative. This summation principle ensures that all individuals contribute equally to the score of an alternative. An alternative is usually a candidate in an election, but it can also be an element of a combinatorial domain. For instance, in proportional representation problems \citep{procaccia2008complexity}, where one aims at electing a committee, every feasible committee is an alternative. In the setting of stable marriage problems, every stable marriage is an alternative and the utilitarian approach is clearly a scoring rule where each individual evaluates a stable marriage by the rank of his/her match. An interesting extension of the class of scoring rules is the class of \emph{rank dependent scoring rules} \citep{goldsmith2014voting}, where, instead of limiting the aggregation to a summation operation, the scores are aggregated by taking into account their ranks in the ordered list of scores. As emphasized by \cite{goldsmith2014voting}, rank dependent scoring rules can be used to favour fairness by imposing some conditions on their parameters. A well-known class of rank dependent scoring rules in inequality measurement is that of the \emph{Generalized Gini Indices} (GGI) \citep{weymark1981generalized}. Furthermore, this class of rank dependent scoring rules generalizes both the utilitarian and egalitarian criteria. Their optimization on combinatorial domains has been studied in several settings (often under the name of \emph{Ordered Weighted Averages}): assignment problems \citep{lesca2018fair}, proportional representation \citep{elkind2015owa}, resource allocation \citep{heinen2015fairness}. To the best of our knowledge, the problem of determining a GGI-optimal stable marriage has not yet been studied. This is precisely the purpose of the present work.
The paper is organized as follows. In Section~\ref{sec:notations}, we introduce notations and we formally define the GGI stable marriage problem studied here. Then, in Section~\ref{sec:complexity}, we prove that it is NP-hard to determine an optimal stable marriage according to a GGI criterion applied to agents' disutilities. In Section~\ref{sec:approximation}, we provide a polynomial time 2-approximation algorithm. Finally, in Section~\ref{sec:param}, we establish a parametrized complexity result with respect to a GGI-specific parameter.
\section{The GGI Stable Marriage Problem} \label{sec:notations}
Let $\mathcal{M}=\{m_1,\ldots,m_n\}$ denote the set of men, and $\mathcal{W}=\{w_1,\ldots,w_n\}$ the set of women. As in Example \ref{exGILBERT}, for each $m_k$ (resp. $w_k$), a preference relation $\succ^m_k$ (resp. $\succ^w_k$) is defined on $\mathcal{W}$ (resp. $\mathcal{M}$), where $i \succ^m_k j$ (resp. $i \succ^w_k j$) means that $m_k$ (resp. $w_k$) prefers $w_i$ to $w_j$ (resp. $m_i$ to $m_j$). We denote by $\mathtt{rk}(m_i,w_j)$ the rank of woman $w_j$ in the preference order of man $m_i$, and similarly for $\mathtt{rk}(w_j,m_i)$.
A solution of a stable marriage problem is a matching represented by a binary matrix $\mathbf{x}$, where $x_{ij}=1$ means that $m_i$ is matched with $w_j$. A matching $\mathbf{x}$ induces a matching function $\mu_\mathbf{x}$ defined by $w_j = \mu_\mathbf{x}(m_i)$ and $m_i = \mu_\mathbf{x}(w_j)$ if $x_{ij} = 1$. In a perfect matching (called indifferently matching or marriage from now on), every man (resp. woman) is matched with a different woman (resp. man). More formally, a matching is defined by: \begin{eqnarray} \textstyle\sum_{i=1}^n x_{ij} = 1 \quad & \forall j \in \{1,\ldots,n \} \label{c_matchingi}\\ \textstyle\sum_{j=1}^n x_{ij} = 1 \quad & \forall i \in \{1,\ldots,n \} \label{c_matchingj} \end{eqnarray}
A matching is said to be \emph{stable} if there exists no man and woman who prefer each other to their current partners. More formally, a perfect matching is stable if the following constraints hold \citep{vate1989linear}: \begin{equation} x_{ij} + \sum_{j' \succ^m_i j} x_{ij'} + \sum_{i' \succ^w_j i} x_{i'j} \ge 1 \quad \forall (i,j) \in \{1,\ldots,n\}^2 \label{c_stable} \end{equation} The set of stable marriages, i.e. binary matrices $\mathbf{x}$ such that constraints \ref{c_matchingi}, \ref{c_matchingj} and \ref{c_stable} hold, is denoted by $\mathcal{X}$. In their seminal paper, \cite{gale1962college} state that there always exists at least one stable marriage, which can be computed in $O(n^2)$. \\ The Gale-Shapley algorithm is based on a sequence of proposals from men to women. Each man proposes to the women following his preference order, pausing when a woman agrees to be matched with him but continuing if his proposal is rejected. When a woman receives a proposal, she rejects it if she already has a better proposal according to her preferences. Otherwise, she agrees to hold it for consideration and rejects any former proposal that she might have had. Such a sequence of proposals always leads to a stable marriage called the man-optimal stable marriage, denoted by $\mathbf{x}^m$ (if the roles of men and women are reversed, we obtain the woman-optimal stable marriage, denoted by $\mathbf{x}^w$). In the man-optimal stable marriage, each man has the best partner, and each woman the worst partner, that he or she can have in any stable marriage. Conversely, in the woman-optimal stable marriage, each woman has the best partner, and each man the worst partner, that he or she can have in any stable marriage.
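The proposal sequence just described can be sketched compactly as follows. This is an illustrative Python rendering under our own data encoding (dicts mapping agents to ordered preference lists), not the paper's notation.

```python
# A minimal sketch of the man-proposing Gale-Shapley algorithm.  Preferences
# map each agent to an ordered list of partners (most preferred first).

def gale_shapley(men_prefs, women_prefs):
    # rank[w][m] = position of m in w's list, for O(1) comparisons
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}    # index of next woman to try
    engaged_to = {}                              # woman -> man holding her proposal
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                    # w holds the proposal
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free.append(engaged_to[w])           # w trades up, rejects former man
            engaged_to[w] = m
        else:
            free.append(m)                       # w rejects m outright
    return {m: w for w, m in engaged_to.items()} # man-optimal stable marriage
```

The returned marriage is man-optimal and does not depend on the order in which free men are processed.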
Two important properties of the Gale-Shapley algorithm are that:\\[0.5ex] -- if $m$ proposes to $w$, then there is no stable marriage in which $m$ has a better match than $w$.\\ -- if $m$ proposes to $w$, then there is no stable marriage in which $w$ has a worse match than $m$.\\[0.5ex] These properties justify the notion of preference shortlists obtained through the Gale-Shapley algorithm by removing any man $m$ from a woman $w$'s preference list and vice-versa, when $w$ receives a proposal from a man she prefers to $m$. Note that the shortlists that are obtained at the end of the algorithm do not depend on the order in which the proposals are made. \begin{ex}
For instance, with the preferences of Example~\ref{exGILBERT}, the Gale-Shapley algorithm leads to the following shortlists: {\small \begin{align*} m_1 &: 1 \succ^m_1 2 \succ^m_1 3 \succ^m_1 4 \succ^m_1 5 \succ^m_1 6 \succ^m_1 7 \succ^m_1 9 \succ^m_1 10 \\ m_2 &: 2 \succ^m_2 3 \succ^m_2 4 \succ^m_2 5 \succ^m_2 6 \succ^m_2 7 \succ^m_2 8 \succ^m_2 9 \succ^m_2 10 \\ m_3 &: 3 \succ^m_3 4 \succ^m_3 5 \succ^m_3 6 \succ^m_3 7 \succ^m_3 10 \\ m_4 &: 7 \succ^m_4 6 \succ^m_4 10 \\ m_5 &: 6 \succ^m_5 7 \succ^m_5 10 \\ m_6 &: 4 \succ^m_6 5 \succ^m_6 10 \\ m_7 &: 5 \succ^m_7 4 \succ^m_7 10 \\ m_8 &: 8 \succ^m_8 4 \succ^m_8 5 \succ^m_8 6 \succ^m_8 10 \succ^m_8 7\\ m_9 &: 10 \succ^m_9 4 \succ^m_9 6 \succ^m_9 7 \succ^m_9 9 \succ^m_9 5\\ m_{10} &: 9 \succ^m_{10} 8 \succ^m_{10} 10\\ \\ w_1 &: 1 \\ w_2 &: 1 \succ^w_2 2 \\ w_3 &: 1 \succ^w_3 2 \succ^w_3 3 \\ w_4 &: 1 \succ^w_4 2 \succ^w_4 3 \succ^w_4 7 \succ^w_4 8 \succ^w_4 9 \succ^w_4 6 \\ w_5 &: 1 \succ^w_5 2 \succ^w_5 3 \succ^w_5 6 \succ^w_5 8 \succ^w_5 9 \succ^w_5 7 \\ w_6 &: 1 \succ^w_6 2 \succ^w_6 3 \succ^w_6 4 \succ^w_6 8 \succ^w_6 9 \succ^w_6 5 \\ w_7 &: 1 \succ^w_7 2 \succ^w_7 3 \succ^w_7 5 \succ^w_7 8 \succ^w_7 9 \succ^w_7 4 \\ w_8 &: 2 \succ^w_8 10 \succ^w_8 8 \\ w_9 &: 1 \succ^w_9 2 \succ^w_9 9 \succ^w_9 10\\ w_{10} &: 1 \succ^w_{10} 2 \succ^w_{10} 3 \succ^w_{10} 4 \succ^w_{10} 5 \succ^w_{10} 6 \succ^w_{10} 7 \succ^w_{10} 10 \succ^w_{10} 8 \succ^w_{10} 9 \end{align*} } \end{ex}
These shortlists make it possible to identify some transformations that can be applied from the man-optimal stable marriage to obtain other stable marriages (more favourable to women). These transformations are called rotations \citep{irving1986complexity}. A rotation is a sequence $\rho = (m_{i_0},w_{i_0}), \ldots,(m_{i_{r-1}},w_{i_{r-1}})$ of man-woman pairs such that, for each $i_k$ ($0\leq k \leq r-1$), (1) $w_{i_k}$ is first in $m_{i_k}$'s shortlist and (2) $w_{i_{k+1}}$ ($k+1$ taken modulo $r$) is second in $m_{i_k}$'s shortlist. Such a rotation is said to be exposed in the shortlists.
\begin{ex}
Continuing Example~\ref{exGILBERT}, there are two rotations exposed in the shortlists, $\rho_1 = (4,7),(5,6)$ and $\rho_2 = (6,4),(7,5)$. \end{ex}
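The exposed rotations can be found mechanically from the men's shortlists: starting from a man with at least two women left, one repeatedly jumps to the man whose first choice is the current man's second choice; any cycle closed this way is an exposed rotation. The sketch below (our illustration, not the paper's algorithm) relies on the fact that each woman is first in exactly one man's shortlist, since matching each man with the head of his shortlist gives the (perfect) man-optimal marriage.

```python
# Find the rotations exposed in the men's shortlists.
# `shortlists` maps each man to his current ordered shortlist of women.

def exposed_rotations(shortlists):
    first_of = {sl[0]: m for m, sl in shortlists.items()}  # woman -> man she heads
    rotations, seen = [], set()
    for start in shortlists:
        if start in seen or len(shortlists[start]) < 2:
            continue
        path, m = [], start
        while m not in seen and len(shortlists[m]) >= 2:
            seen.add(m)
            path.append(m)
            m = first_of[shortlists[m][1]]  # man whose 1st choice is m's 2nd choice
        if m in path:                        # a cycle was closed: exposed rotation
            cycle = path[path.index(m):]
            rotations.append([(x, shortlists[x][0]) for x in cycle])
    return rotations
```

On the shortlists of $m_4$ and $m_5$ above, this recovers the rotation $\rho_1 = (4,7),(5,6)$.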
Given a rotation, if each $m_{i_k}$ exchanges his current partner $w_{i_k}$ for $w_{i_{k+1}}$, then the matching remains stable. Eliminating a rotation $\rho = (m_{i_0},w_{i_0}), \ldots,(m_{i_{r-1}},w_{i_{r-1}})$ amounts to removing all successors $m$ of $m_{i_{k-1}}$ in $w_{i_k}$'s shortlist together with the corresponding appearances of $w_{i_k}$ in the shortlists of men $m$. The obtained stable marriage can then be read from the modified shortlists by matching each man with the first woman in his shortlist. In this new stable marriage, each woman (resp. man) is better off (resp. worse off) than before eliminating the rotation.
Once an exposed rotation has been identified and eliminated, one or more rotations may be exposed in the resulting (further reduced) shortlists. This process may be repeated, and once all rotations have been eliminated, we obtain the woman-optimal stable marriage. A rotation $\pi$ is said to be a predecessor of a rotation $\rho$, denoted by $\pi < \rho$, if $\rho$ cannot be exposed in the men's shortlists before $\pi$ is eliminated. This notion of predecessors makes it possible to define what is called the rotation poset $(P, \leq)$ where $P$ is the set of all rotations and $\leq$ is the precedence relation that we have just mentioned. A closed set in a poset $(P, \leq)$ is a subset $R$ of $P$ such that $\rho \in R, \pi < \rho \Rightarrow \pi \in R$.
The following theorem is crucial to understand the importance of the rotation poset.
\begin{thr} \citep{irving1986complexity} The stable marriages of a given stable marriage instance are in one-to-one correspondence with the closed subsets of the rotation poset. \end{thr}
In this correspondence, each closed subset $R$ represents the stable marriage obtained by eliminating the rotations in $R$ starting from $\mathbf{x}^m$.
The rotation poset can be represented as a directed acyclic graph, with the rotations as nodes and an arc from $\pi$ to $\rho$ iff $\pi$ is an immediate predecessor of $\rho$ (i.e., $\pi < \rho$ and there is no rotation $\sigma$ such that $\pi < \sigma < \rho$). Note that this graph has at most $n(n-1)/2$ nodes, i.e., there are at most $n(n-1)/2$ rotations \citep{irving1987efficient}. Indeed, there are at most $n^2 -n$ pairs that can be involved in rotations (the $n$ pairs of $\mathbf{x}^w$ cannot be involved in a rotation). Each pair belongs to at most one rotation and there are at least two pairs in each rotation. We will take advantage of the rotation poset in multiple places in the paper. Importantly, note that the rotation poset (actually a subgraph whose transitive closure is the rotation poset) can be generated in $O(n^2)$ \citep{gusfield1989stable}.
\begin{ex}
For instance, with the preferences of Example~\ref{exGILBERT}, the rotations and their immediate predecessors are given in the following table. \begin{table}[!h] \begin{center}
\begin{tabular}{|l|l|l|}
\hline
Rotation & New pairs & Immediate predecessors\\
\hline
$\rho_1 = (4,7),(5,6)$ & $(4,6), (5,7)$ & \\
$\rho_2 = (6,4),(7,5)$ & $(6,5), (7,4)$ & \\
$\rho_3 = (8,8),(9,10),(10,9)$ & $(8,10), (9,9), (10,8)$ & $\rho_1,\rho_2$\\
\hline \end{tabular} \end{center} \end{table}
\begin{figure}
\caption{ \small Rotation poset in Example \ref{exGILBERT}.}
\end{figure}
\label{exIRVING3} \end{ex}
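The one-to-one correspondence of Irving's theorem can be checked on small instances by brute force: enumerate all subsets of rotations and keep the closed ones. The following Python sketch (illustrative only, exponential in the number of rotations) encodes the poset as a map from each rotation to the set of its predecessors.

```python
from itertools import combinations

# Enumerate the closed subsets (down-sets) of a small rotation poset, which
# are in bijection with the stable marriages.  `preds` maps each rotation to
# the set of all its predecessors.

def closed_subsets(preds):
    rotations = list(preds)
    closed = []
    for r in range(len(rotations) + 1):
        for subset in combinations(rotations, r):
            s = set(subset)
            if all(preds[rho] <= s for rho in s):  # all predecessors included
                closed.append(s)
    return closed
```

On the poset of the example above ($\rho_3$ preceded by $\rho_1$ and $\rho_2$), this yields five closed subsets, matching the five stable marriages $\mathbf{x}^1,\ldots,\mathbf{x}^5$ of Example~\ref{exGILBERT}.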
This rotation poset shows that there are (potentially many) other stable marriages than the man-optimal or woman-optimal stable marriages. These other stable marriages are likely to be fairer than $\mathbf{x}^m$ and $\mathbf{x}^w$ as they are both extreme cases.
In order to compute a fair stable marriage, the optimization of several aggregation functions has been investigated.\\ [0.5ex] -- Utilitarian approach: $\sum_{i=1}^n \mathtt{rk}(m_i,\mu_\mathbf{x}(m_i))\!+\!\sum_{j=1}^n \mathtt{rk}(w_j,\mu_\mathbf{x}(w_j))$, which can be minimized in $O(n^3)$ \citep{Feder1994}.\\ -- Egalitarian approach: $\max \{\mathtt{rk}(p,\mu_\mathbf{x}(p)):p \in \mathcal{M}\cup \mathcal{W}\}$, which can be minimized in $O(n^2)$ \citep{gusfield1987three}.\\
-- Sex-equal stable marriage: $|\sum_{i=1}^n \mathtt{rk}(m_i,\mu_\mathbf{x}(m_i)) - \sum_{j=1}^n \mathtt{rk}(w_j,\mu_\mathbf{x}(w_j))|$, the minimization of which is NP-hard \citep{kato1993complexity}.\\ -- Balanced stable marriage: $\max \{\sum_{i=1}^n \mathtt{rk}(m_i,\mu_\mathbf{x}(m_i)) , \sum_{j=1}^n \mathtt{rk}(w_j,\mu_\mathbf{x}(w_j))\}$, the minimization of which is NP-hard \citep{manlove2013algorithmics}.\\[0.5ex]
Our contribution differs from previous work on the fair stable marriage problem. Indeed, we optimize a generalized Gini index on disutility values.\\
Given a matching $\mathbf{x}$, the \emph{disutility} $d(m_i,\mathbf{x})$ (also called \emph{dissatisfaction}) of a man $m_i$ is defined by $d(\mathtt{rk}(m_i,\mu_\mathbf{x}(m_i)))$, where $d:\mathbb{N}\rightarrow \mathbb{Q}^+$ is a strictly increasing function called the disutility function. The disutility values $d(w_j,\mathbf{x})$ are defined similarly for women. Every stable marriage therefore induces a disutility vector: \begin{equation*} \mathbf{d}(\mathbf{x}) = (d(m_1,\mathbf{x}),\ldots,d(m_n,\mathbf{x}),d(w_1,\mathbf{x}),\ldots,d(w_n,\mathbf{x})) \end{equation*} with $N = 2n$ components. Note that the use of disutility values (often called weights) is a common way to extend the traditional framework where the aggregation function is applied on rank values (see e.g., \cite{teo1998geometry,gusfield1989stable}). Using a unique disutility function for all agents guarantees that they all have the same importance in the aggregation operation. Indeed, the disutility values assigned to the ranks do not depend on the agent's identity. Note that both the egalitarian and the utilitarian variants of the stable marriage problem remain polynomially solvable if one uses disutility values.
\begin{ex}
We come back to Example~\ref{exGILBERT}. Let $d$ be the disutility function defined by $d(i) = (i-1)^2$, then the disutility values are given by the matrices $d_M$ and $d_W$ below where $d_M[i][j]$ (resp. $d_W[j][i]$) is the disutility of $m_i$ (resp. $w_j$) if he (resp. she) is matched with $w_j$ (resp. $m_i$).
\scalebox{0.9}{\parbox{\columnwidth}{ \begin{minipage}[c]{.46\linewidth} \[ d_M: \begin{pmatrix} 0 & 1 & 4 & 9 & 16 & 25 & 36 & 49 & 64 & 81\\ 1 & 0 & 4 & 9 & 16 & 25 & 36 & 49 & 64 & 81\\ 1 & 4 & 0 & 9 & 16 & 25 & 36 & 49 & 64 & 81\\ 1 & 4 & 9 & 25 & 36 & 16 & 0 & 49 & 64 & 81\\ 1 & 4 & 9 & 25 & 36 & 0 & 16 & 49 & 64 & 81\\ 1 & 4 & 9 & 0 & 16 & 36 & 25 & 49 & 64 & 81\\ 1 & 4 & 9 & 16 & 0 & 36 & 25 & 49 & 64 & 81\\ 36 & 49 & 64 & 1 & 4 & 9 & 25 & 0 & 81 & 16\\ 36 & 49 & 64 & 1 & 25 & 4 & 9 & 81 & 16 & 0\\ 36 & 49 & 64 & 1 & 4 & 25 & 9 & 16 & 0 & 81 \end{pmatrix} \] \end{minipage}
\begin{minipage}[c]{.46\linewidth} \[ d_W: \begin{pmatrix} 0 & 1 & 4 & 9 & 16 & 25 & 36 & 49 & 64 & 81\\ 0 & 1 & 4 & 9 & 16 & 25 & 36 & 49 & 64 & 81\\ 0 & 1 & 4 & 9 & 16 & 25 & 36 & 49 & 64 & 81\\ 0 & 1 & 4 & 49 & 64 & 36 & 9 & 16 & 25 & 81\\ 0 & 1 & 4 & 49 & 64 & 9 & 36 & 16 & 25 & 81\\ 0 & 1 & 4 & 9 & 36 & 49 & 64 & 16 & 25 & 81\\ 0 & 1 & 4 & 36 & 9 & 49 & 64 & 16 & 25 & 81\\ 16 & 0 & 25 & 36 & 49 & 64 & 9 & 4 & 81 & 1\\ 0 & 1 & 16 & 25 & 36 & 49 & 64 & 81 & 4 & 9\\ 0 & 1 & 4 & 9 & 16 & 25 & 36 & 64 & 81 & 49 \end{pmatrix} \] \end{minipage} }}
\end{ex}
Let $\mathbf{d} = (d_1,\ldots,d_N)$ denote a disutility vector. The generalized Gini index~\citep{weymark1981generalized} is defined as follows: \begin{defi} Let ${\bm \lambda} = (\lambda_1, \ldots, \lambda_N)$ be a vector of weights such that $\lambda_1 \ge \ldots \ge \lambda_N$. The $\mathtt{GGI}_{{\bm \lambda}}(\cdot)$ aggregation function induced by ${\bm \lambda}$ is defined by: \begin{equation*} \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}) = \sum_{i=1}^{N} \lambda_{i} d^\downarrow_i, \end{equation*} where $\mathbf{d^\downarrow}$ denotes the vector $\mathbf{d}$ ordered by nonincreasing values, i.e., $d^\downarrow_1 \geq d^\downarrow_2 \geq \ldots \geq d^\downarrow_N$. \end{defi}
The weights of the GGI aggregation function may be defined in a variety of ways. For instance, the weights initially proposed for the Gini social-evaluation function are: \begin{equation} \label{gCoeff} \lambda_i = (2(N-i)+1)/N^2 \quad \forall i \in \{1,\ldots,N\} \end{equation}
\begin{ex} Coming back to Example~\ref{exGILBERT}, if the weights ${\bm \lambda}$ are defined by Equation~\ref{gCoeff} and the disutility function is defined by $d(i) = i$, the GGI values of the different stable marriages are (the lower the better):
{\small \begin{center} \begin{tabular}{ccC{1.75cm}} \hline matching $\mathbf{x}$ & ordered vectors $\mathbf{d}^\downarrow(\mathbf{x})$ & $\mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}(\mathbf{x}))$\\ \hline $\mathbf{x}^1$ & $(10, 7, 7, 7, 7, 4, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)$ & 4.4525 \\ $\mathbf{x}^2$ & $(10, 7, 7, 5, 5, 4, 4, 4, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1)$ & 4.4725 \\ $\mathbf{x}^3$ & $(10, 7, 7, 5, 5, 4, 4, 4, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1)$ & 4.4725 \\ $\mathbf{x}^4$ & $(10, 5, 5, 5, 5, 4, 4, 4, 4, 4, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1)$ & 4.3925 \\ $\mathbf{x}^5$ & $(9 , 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 3, 3, 2, 2, 1, 1, 1, 1)$ & 4.74 \\ \hline \end{tabular} \end{center} }
\noindent We thus observe that using a GGI aggregation function makes it possible to obtain $\mathbf{x}^4$ as an optimal stable marriage. \end{ex}
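The GGI values in the table above can be reproduced directly from the definition: sort the disutilities in nonincreasing order and take the weighted sum with the weights of Equation~\ref{gCoeff}. A minimal Python sketch (function names are ours):

```python
# GGI aggregation: weighted sum of disutilities sorted in nonincreasing order.

def gini_weights(N):
    """Classical Gini weights lambda_i = (2(N-i)+1)/N^2, i = 1..N."""
    return [(2 * (N - i) + 1) / N**2 for i in range(1, N + 1)]

def ggi(d, weights=None):
    N = len(d)
    w = weights if weights is not None else gini_weights(N)
    return sum(wi * di for wi, di in zip(w, sorted(d, reverse=True)))
```

With ${\bm \lambda} = (1,\ldots,1)$ this reduces to the utilitarian sum, and with ${\bm \lambda} = (1,0,\ldots,0)$ to the egalitarian max; applied to the ordered disutility vector of $\mathbf{x}^4$, it returns the tabulated value $4.3925$.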
The GGI is also known in multicriteria decision making under the name of \emph{ordered weighted average}~\citep{yager1988ordered}. When minimized, this aggregation function is well known to satisfy the Pigou-Dalton transfer principle if $\lambda_1\!>\!\lambda_2\!>\!\ldots\!>\!\lambda_N$: \begin{defi} An aggregation function $F$ satisfies the \emph{transfer principle} if for any $\mathbf{d} \in (\mathbb{R}^+)^N$ and $\varepsilon \in (0,d_j-d_i)$ where $d_j>d_i$:\\ \begin{equation*} F(d_1, \ldots, d_i + \varepsilon, \ldots, d_j - \varepsilon, \ldots , d_N) < F(d_1, \ldots, d_N). \end{equation*} \end{defi} This condition states that the overall welfare should be improved by any transfer of disutility from a ``less happy'' agent $j$ to a happier agent $i$, given that this transfer reduces the gap between the disutilities of agents $i$ and $j$.
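The transfer principle can be checked numerically on a toy example (the weights and disutility values below are our own illustration): a transfer of $\varepsilon = 1$ from the worst-off agent to a better-off agent strictly lowers the GGI when the weights are strictly decreasing.

```python
# Numeric check of the Pigou-Dalton transfer principle for the GGI.

def ggi(d, weights):
    return sum(w * x for w, x in zip(weights, sorted(d, reverse=True)))

lam = [0.5, 0.3, 0.2]        # strictly decreasing weights
before = [1.0, 2.0, 7.0]     # agent disutilities
after = [1.0, 3.0, 6.0]      # transfer eps = 1 from the worst-off agent

assert ggi(after, lam) < ggi(before, lam)  # welfare improves (GGI decreases)
```

With uniform weights (the utilitarian case) the two values would be equal, which is why strict inequality of the weights is required.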
We can now define the GGI Stable Marriage problem.\\[2ex] \fbox{\parbox{12cm}{ \textbf{GGI Stable Marriage (GGISM)}\\ \emph{INSTANCE:} Two disjoint sets of size $n$, the men and the women; for each person, a preference list containing all the members of the opposite sex; a vector of weight parameters ${\bm \lambda}$ and a disutility function $d$.\\ \emph{SOLUTION:} A stable marriage $\mathbf{x}$.\\ \emph{MEASURE:} $\mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}(\mathbf{x}))$ (to minimize). }}
\section{Complexity of the GGISM Problem} \label{sec:complexity}
The GGISM problem extends both the egalitarian and the utilitarian approaches to the stable marriage problem. Indeed, if the weights of the GGI operator are ${\bm \lambda} = (1,\ldots,1)$, one obtains the sum operation. If the weights are ${\bm \lambda} = (1,0,\ldots,0)$, one obtains the max operation. While both variants are polynomially solvable problems, the following result states that the GGISM problem is NP-hard:
\begin{thr} The GGISM problem is NP-hard. \end{thr} \begin{proof} We give a reduction from Minimum 2-Satisfiability, which is strongly NP-hard \citep{kohli1994minimum}.\\
\noindent \fbox{\parbox{12cm}{ \textbf{Minimum 2-Satisfiability (Min 2-SAT)}:\\ \emph{INSTANCE}: A set $V$ of variables, a collection $C$ of disjunctive clauses of at most 2 literals, where a literal is a variable or a negated variable in $V$.\\ \emph{SOLUTION}: A truth assignment for $V$.\\ \emph{MEASURE}: Number of clauses satisfied by the truth assignment (to minimize). }}
To illustrate the reduction, we will use the following 2-SAT instance: \begin{align} V &= \{v_1,v_2,v_3,v_4,v_5,v_6\} \label{form1a}\\ C &= \{ (v_1 \lor v_2), (\lnot v_2 \lor \lnot v_4), (\lnot v_1 \lor v_3), ( v_3 \lor \lnot v_4), v_2 , (v_5\lor v_6)\} \label{form1b} \end{align} As a preliminary step, note that we can get rid of variables that are present in only one clause. Such a variable is set to true if it is present as a negative literal in the clause and to false otherwise. It can then be removed from the instance. Furthermore, we can make sure that there are exactly two literals in each clause (by duplicating literals). For example, the instance described by Equations \ref{form1a} and \ref{form1b} can be modified to: \begin{align} V &= \{v_1,v_2,v_3,v_4\} \label{form2a}\\ C &= \{ (v_1 \lor v_2), (\lnot v_2 \lor \lnot v_4), (\lnot v_1 \lor v_3), ( v_3 \lor \lnot v_4), (v_2 \lor v_2)\} \label{form2b} \end{align}
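This preprocessing step can be sketched as follows (an illustrative Python rendering; the signed-integer encoding of literals, e.g. $2$ for $v_2$ and $-2$ for $\lnot v_2$, is our assumption). Fixing a single-occurrence variable so as to falsify its literal amounts to dropping the literal; a clause that becomes empty is never satisfied and can be removed, and remaining unit clauses are padded by duplicating their literal.

```python
from collections import Counter

# Preprocessing for Min 2-SAT: remove variables occurring in a single clause
# (fixed so as to falsify that occurrence) and pad clauses to two literals.

def preprocess(clauses):
    while True:
        occ = Counter(v for c in clauses for v in {abs(l) for l in c})
        singles = {v for v, k in occ.items() if k == 1}
        if not singles:
            break
        # dropping a single-occurrence variable falsifies its literal;
        # a clause with no literal left is never satisfied, so remove it
        clauses = [[l for l in c if abs(l) not in singles] for c in clauses]
        clauses = [c for c in clauses if c]
    return [c if len(c) == 2 else c * 2 for c in clauses]
```

Applied to the instance of Equations \ref{form1a} and \ref{form1b}, this removes $v_5$ and $v_6$ (and their clause) and pads the unit clause $v_2$, recovering the instance of Equations \ref{form2a} and \ref{form2b}.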
In the following we will denote by $n_v = |V|$ the number of variables and by $n_c = |C|$ the number of clauses. In the previous example $n_v = 4$ and $n_c = 5$. Furthermore, we will denote by $c_i$ the $i^{th}$ clause in $C$.\\
We are now going to create an instance of the GGISM problem such that: \begin{itemize} \item There is a one-to-one correspondence between the stable marriages and the truth assignments for $V$. \item A stable marriage minimizing the GGI of the agent's disutilities corresponds to a truth assignment of $V$ minimizing the number of clauses that are satisfied. \end{itemize}
In order to create a one-to-one correspondence between the stable marriages and the truth assignments for $V$, we are going to create a rotation $\rho_i$ for each variable $v_i \in V$. Each of these rotations will be exposed in the shortlists from the man-optimal stable marriage for the instance under construction. Additionally, we will ensure that these rotations will be the only ones of the stable marriage instance. In other words, the rotation poset will have one vertex per variable and no edge, as illustrated in Figure~\ref{fig:posetProof}.
\begin{figure}
\caption{Rotation poset of the stable marriage instance generated by the reduction.}
\label{fig:posetProof}
\end{figure}
We now give the ``meaning'' of these rotations. Let us recall that, in a stable marriage instance, there is a one-to-one correspondence between the closed subsets of nodes of the rotation poset and the stable marriages. Now let $\mathbf{x}$ be a stable marriage corresponding to a closed subset $R$ of rotations; the corresponding truth assignment over $V$ consists in setting $v_i = 1$ if $\rho_i\in R$ and $v_i = 0$ otherwise. Thus, in the generated stable marriage instance, the man-optimal stable marriage (i.e., $R = \emptyset$) corresponds to the truth assignment where all variables in $V$ are set to 0, while the woman-optimal stable marriage (i.e., $R = \{\rho_i| v_i \in V\}$) corresponds to the truth assignment where all variables in $V$ are set to 1.
We now describe more precisely the fashion in which rotations $\rho_i$ are generated. For each variable $v_i$, we create a man-woman pair $(m_{ij}, w_{ij})$ for each clause $c_j$ that involves $v_i$ either as a positive or negative literal. If variable $v_i$ is present two times in a clause $c_j$, then two man-woman pairs $(m_{ij}, w_{ij})$ and $(m_{ij}', w_{ij}')$ are created. This induces the creation of $2n_c$ men and $2n_c$ women in the instance. The rotation $\rho_i$ then involves all the men and women induced by variable $v_i$.
For example, in the instance described by Equations \ref{form2a} and \ref{form2b}, $\rho_2$ involves men $m_{21},m_{22},m_{25},m_{25}'$ and women $w_{21},w_{22},w_{25},w_{25}'$ as variable $v_2$ is present in $c_1, c_2$ and $c_5$. Let $r$ denote the number of times variable $v_i$ appears in $C$. The rotation $\rho_i$ is then induced by the following patterns in the shortlists of men $\{m_{ij}|v_i\in c_j\}$ and women $\{w_{ij}|v_i\in c_j\}$:
\begin{minipage}[c]{.46\linewidth} \begin{align*} m_{ij_0} &: w_{ij_0} \succ^m_{ij_0} w_{ij_1} \\ m_{ij_1} &: w_{ij_1} \succ^m_{ij_1} w_{ij_2} \\ \vdots &\\ m_{ij_{r-2}} &: w_{ij_{r-2}} \succ^m_{ij_{r-2}} w_{ij_{r-1}} \\ m_{ij_{r-1}} &: w_{ij_{r-1}} \succ^m_{ij_{r-1}} w_{ij_0} \end{align*} \end{minipage} \begin{minipage}[c]{.46\linewidth} \begin{align*} w_{ij_0} &: m_{ij_{r-1}} \succ^w_{ij_0} m_{ij_0} \\ w_{ij_1} &: m_{ij_0} \succ^w_{ij_1} m_{ij_1} \\ \vdots &\\ w_{ij_{r-2}} &: m_{ij_{r-3}} \succ^w_{ij_{r-2}} m_{ij_{r-2}} \\ w_{ij_{r-1}} &: m_{ij_{r-2}} \succ^w_{ij_{r-1}} m_{ij_{r-1}} \end{align*} \end{minipage}
For instance, rotation $\rho_2$ is induced by the following pattern in the shortlists:
\begin{minipage}[c]{.46\linewidth} \begin{align*} m_{21} &: w_{21} \succ^m_{21} w_{22} \\ m_{22} &: w_{22} \succ^m_{22} w_{25} \\ m_{25} &: w_{25} \succ^m_{25} w_{25}' \\ m_{25}'&: w_{25}' \succ^{m'}_{25} w_{21} \end{align*} \end{minipage} \begin{minipage}[c]{.46\linewidth} \begin{align*} w_{21} &: m_{25}' \succ^w_{21} m_{21} \\ w_{22} &: m_{21} \succ^w_{22} m_{22} \\ w_{25} &: m_{22} \succ^w_{25} m_{25} \\ w_{25}' &: m_{25} \succ^{w'}_{25} m_{25}' \end{align*} \end{minipage}
\begin{table} \begin{center} \begin{tabular}{cccc} \hline clause $c_j$ & in & out & decisive agents \\ \hline $v_i \lor v_k$ & & $\rho_i, \rho_k$ & $m_{ij}$, $m_{kj}$ \\ $v_i \lor \neg v_k$ & $\rho_k$ & $\rho_i$ & $m_{ij}$, $w_{kj}$ \\ $\neg v_i \lor v_k$ & $\rho_i$ & $\rho_k$ & $w_{ij}$, $m_{kj}$ \\ $\neg v_i \lor \neg v_k$ & $\rho_i, \rho_k$ & & $w_{ij}$, $w_{kj}$ \\ $v_i \lor v_i$ & & $\rho_i$ & $m_{ij}$, $m_{ij}'$ \\ $\neg v_i \lor \neg v_i$ & $\rho_i$ & & $w_{ij}$, $w_{ij}'$ \\ \hline \end{tabular} \caption{\label{tab:Proof}Clause $c_j$ is not satisfied iff the rotations of the second (resp. third) column are included (resp. not included) in $R$. Consequently, clause $c_j$ is not satisfied iff the two agents in the last column are matched with their choices of rank $\mathtt{rk}^+(\cdot)$.} \end{center} \end{table}
Note that each man $m_{ij}$ or woman $w_{ij}$ is involved in one and only one rotation, which is $\rho_i$. As a consequence, each man or woman in the generated instance has only two possible matches in a stable marriage, namely $w_{ij_k}$ and $w_{ij_{k+1}}$ (modulo the size $r$ of rotation $\rho_i$) for $m_{ij_k}$, and $m_{ij_{k-1}}$ (modulo $r$) and $m_{ij_{k}}$ for $w_{ij_k}$. For simplicity, we will denote by $\mathtt{rk}^+(m_{ij})$ (resp. $\mathtt{rk}^-(m_{ij})$) the rank of the best (resp. worst) possible match for $m_{ij}$ in a stable marriage. Notations $\mathtt{rk}^+(w_{ij})$ and $\mathtt{rk}^-(w_{ij})$ are defined similarly for women.
Given a stable marriage characterized by a set $R$ of rotations, it is possible to determine if clause $c_j$ is satisfied by examining which rotations belong to $R$. According to the form of clause $c_j$, columns ``in'' and ``out'' of Table~\ref{tab:Proof} indicate which rotations should be included or not in $R$ so that $c_j$ is not satisfied. Assuming that $c_j$ involves variables $v_i$ and $v_k$ (or possibly their negations), it is sufficient to examine the matches of two specific agents among $m_{ij}, m_{kj}, w_{ij}, w_{kj}$ to determine if rotations $\rho_i$ and $\rho_k$ belong or not to $R$. These two specific agents are called \emph{decisive agents of $c_j$} in the following. We have indeed $\rho_i \not\in R$ iff the rank of the match of $m_{ij}$ is $\mathtt{rk}^+(m_{ij})$, since eliminating $\rho_i$ moves every man it involves to a worse partner. Similarly, we have $\rho_i \in R$ iff the rank of the match of $w_{ij}$ is $\mathtt{rk}^+(w_{ij})$. Put another way, $m_{ij}$ (resp. $w_{ij}$) is a decisive agent of $c_j$ if $v_i$ (resp. $\neg v_i$) belongs to $c_j$. The clause $c_j$ is not satisfied iff the two decisive agents are matched with their match of rank $\mathtt{rk}^+(\cdot)$. The decisive agents according to the form of clause $c_j$ are given in the last column of Table~\ref{tab:Proof}.
For illustration, let us return to the 2-SAT instance described by Equations~\ref{form2a} and \ref{form2b}. Given the stable marriage instance generated by the reduction, and a stable marriage $\mathbf{x}$, clause $v_1\lor v_2$ is \emph{not} satisfied iff $\mathtt{rk}(m_{11},\mu_\mathbf{x}(m_{11})) = \mathtt{rk}^+(m_{11})$ and $\mathtt{rk}(m_{21},\mu_\mathbf{x}(m_{21})) = \mathtt{rk}^+(m_{21})$. More generally, it is possible to count the number of clauses that are \emph{not} satisfied by examining the ranks of the matches of the decisive agents of each clause.
We will soon explain how to use a GGI operator to count the number of clauses that are not satisfied in the 2-SAT instance. Beforehand, we need to introduce fictitious agents in order to control the positions of the decisive agents in the ordered vector of disutilities for every stable marriage. More precisely, we introduce four fictitious agents $m_j$, $m_j'$, $w_j$, $w_j'$ per clause $c_j$ such that $m_j$ (resp. $m_j'$) is the first choice of $w_j$ (resp. $w_j'$) and vice-versa. Thus $m_j$ (resp. $m_j'$) can only be matched to $w_j$ (resp. $w_j'$) in a stable marriage, and therefore the fictitious agents will not interfere with the possible matches of the other agents.
The fictitious agents are placed in the preference lists of the other agents such that $\mathtt{rk}^+(\cdot) = 2j+1$ and $\mathtt{rk}^-(\cdot) = 2j+2$ for the two decisive agents of clause $c_j$. Furthermore, $\mathtt{rk}^+(\cdot) = 1$ and $\mathtt{rk}^-(\cdot) = 2$ for the remaining (non-decisive) agents. Note that $2j+1>2$ as $j\ge 1$ and therefore the two decisive agents of $c_j$ are at positions $2(n_c-j)+1$ and $2(n_c-j)+2$ in the permutation that ranks the agents by non-increasing disutilities.
To achieve these properties, we position $2j$ fictitious agents at the beginning of the preference list of the decisive agents of clause $c_j$ (e.g., $m_1 m_1' \ldots m_j m_j'$ for a decisive agent $w_{ij}$). These agents are positioned just before the two possible matches of the agent in a stable marriage. Regarding the non-decisive agents, their two possible matches in a stable marriage are simply placed at the beginning of their preference lists.
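The position claim above can be checked numerically. The following sketch is illustrative only (it is not from the paper): it takes $d(i)=i$ as an arbitrary increasing disutility function and an arbitrary split of the six non-decisive agents of each clause between their first and second choices, and verifies that the two decisive agents of $c_j$ occupy positions $2(n_c-j)+1$ and $2(n_c-j)+2$ in the nonincreasing ordering.

```python
# Illustrative check (not from the paper): the two decisive agents of clause
# c_j occupy positions 2(n_c - j) + 1 and 2(n_c - j) + 2 when all agents are
# sorted by nonincreasing disutility. We take d(i) = i as an arbitrary
# increasing disutility function and an arbitrary first/second-choice split
# for the six non-decisive agents of each clause.
def sorted_agents(n_c, d=lambda i: i):
    agents = []
    for j in range(1, n_c + 1):
        # the two decisive agents of c_j are matched at rank 2j+1 or 2j+2
        agents += [(d(2 * j + 1), ("dec", j)), (d(2 * j + 2), ("dec", j))]
        # the six non-decisive agents of c_j are matched at rank 1 or 2
        agents += [(d(1), ("non", j))] * 3 + [(d(2), ("non", j))] * 3
    agents.sort(key=lambda t: -t[0])
    return agents

for n_c in (2, 5):
    agents = sorted_agents(n_c)
    for j in range(1, n_c + 1):
        p = 2 * (n_c - j)  # 0-indexed position of the first decisive agent
        assert agents[p][1] == ("dec", j) and agents[p + 1][1] == ("dec", j)
```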
For illustration, in the 2-SAT instance described by Equations~\ref{form2a} and \ref{form2b}, the preference list of agent $w_{22}$ (who is a decisive agent of $c_2$) is: $$w_{22} : m_1 \succ_{22}^w m_1' \succ_{22}^w m_2 \succ_{22}^w m_2' \succ_{22}^w m_{21} \succ_{22}^w m_{22} \succ_{22}^w\ldots $$ and the preference list of agent $m_{22}$ (who is \emph{not} a decisive agent of $c_2$) is: $$m_{22} : w_{22} \succ_{22}^m w_{25}\succ_{22}^m \ldots $$
This construction is illustrated in Figure~\ref{prefexRed} (where the symbols $\succ$ are omitted for readability) for the Minimum 2-Satisfiability instance defined by Equations~\ref{form2a} and \ref{form2b}. The preference lists of the agents are only partially given, but note that they can be completed in any way that yields complete and transitive orders.
\begin{figure}
\caption{Preference lists obtained for the min 2-SAT instance of Equations~\ref{form2a} and \ref{form2b}.}
\label{prefexRed}
\end{figure}
We now explain how to define the disutility values attributed to each rank, as well as the weights of the GGI operator, so that the number of unsatisfied clauses can be inferred from the GGI value of the stable marriage.
\paragraph{Disutility values and weights of the GGI.} We first recall that each clause $c_j$ induces 6 agents that are matched either with their first or second choices and 2 agents (the decisive ones) that are matched with their choices of rank $2j+1$ or $2j+2$. By construction of the preference lists, note that no agent can be matched with a partner that is ranked strictly beyond $2n_c+2$ in his/her preference list. Therefore the values of $d(i)$ for $i > 2n_c+2$ play no role, and can be fixed arbitrarily as long as they are increasing with $i$ and strictly greater than $d(2n_c+2)$.\\ -- The increasing disutility values for ranks 1 to $2n_c+2$ are defined as follows (assuming that $n_c \ge 2$): \begin{align*} d(1) &= 0 \\ d(2) &= 1 \\ d(2j+1) &= j + 1,\quad \forall j\in\{1,\ldots, n_c\}\\ d(2j+2)&= j + 1 + n_c^{-j}, \quad \forall j\in\{1,\ldots, n_c\} \end{align*}
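As a quick sanity check, the disutility values above can be sketched as follows (a hypothetical helper, not from the paper's material); the property used later is that the gap between the two decisive ranks of clause $c_j$ is exactly $n_c^{-j}$.

```python
# Sketch (hypothetical helper) of the disutility values d(1), ..., d(2*n_c+2)
# defined above, for an instance with n_c >= 2 clauses.
def disutilities(n_c):
    d = {1: 0.0, 2: 1.0}
    for j in range(1, n_c + 1):
        d[2 * j + 1] = float(j + 1)
        d[2 * j + 2] = j + 1 + n_c ** (-j)
    return d

d = disutilities(4)
# values are increasing in the rank ...
assert all(d[i] < d[i + 1] for i in range(1, 2 * 4 + 2))
# ... and the gap d(2j+2) - d(2j+1) is exactly n_c^(-j)
assert all(abs(d[2 * j + 2] - d[2 * j + 1] - 4 ** (-j)) < 1e-12 for j in range(1, 5))
```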
-- The non-increasing weights of the GGI are defined as follows: \begin{equation*}
{\bm \lambda} = (\underbrace{n_c^{n_c+1},n_c^{n_c},n_c^{n_c},n_c^{n_c-1}, \ldots, n_c^{3},n_c^{2},n_c^{2},n_c^{1}}_{2n_c\text{ weights}}, \underbrace{0, \ldots, 0}_{6n_c\text{ weights}}). \end{equation*} We recall that the $2n_c$ agents with the highest disutility values are the decisive agents (the two decisive agents of clause $c_j$ are matched with an agent of rank $2j+1$ or $2j+2$) and the $6n_c$ agents with the lowest disutility values are the \emph{non}-decisive agents (who are matched to one of their two first choices). Consequently, the weight vector ${\bm \lambda}$ attributes a weight 0 in the GGI operator to the $6n_c$ non-decisive agents while, for each clause $c_j$, it attributes a weight $n_c^{j}$ (resp. $n_c^{j+1}$) to the most satisfied (resp. least satisfied) of the two decisive agents of $c_j$.
An upper bound on the GGI value is given by $\Delta_u = \sum_{j=1}^{n_c} (n_c^{j} + n_c^{j+1})d(2j+2)$. It corresponds to a stable marriage where, for each clause $c_{j}$, the two decisive agents of $c_j$ are both matched to their choice of rank $2j+2$. Similarly, a lower bound on the GGI value is given by $\Delta_l = \sum_{j=1}^{n_c} (n_c^{j} + n_c^{j+1})d(2j+1)$ (obtained if the two decisive agents of $c_j$ are both matched to their choice of rank $2j+1$). Simple calculations show that $\Delta_l=\Delta_u-n_c(1+n_c)$.
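The identity $\Delta_l=\Delta_u-n_c(1+n_c)$ follows from $d(2j+2)-d(2j+1)=n_c^{-j}$ and $(n_c^{j}+n_c^{j+1})n_c^{-j}=1+n_c$; it can be verified numerically with the following sketch (illustrative, not part of the construction).

```python
# Illustrative check of Delta_l = Delta_u - n_c(1 + n_c), using the
# disutility values defined above (reconstructed here for self-containment).
def bounds(n_c):
    d_hi = {j: j + 1 + n_c ** (-j) for j in range(1, n_c + 1)}  # d(2j+2)
    d_lo = {j: float(j + 1) for j in range(1, n_c + 1)}         # d(2j+1)
    delta_u = sum((n_c ** j + n_c ** (j + 1)) * d_hi[j] for j in range(1, n_c + 1))
    delta_l = sum((n_c ** j + n_c ** (j + 1)) * d_lo[j] for j in range(1, n_c + 1))
    return delta_u, delta_l

for n_c in (2, 3, 6):
    delta_u, delta_l = bounds(n_c)
    assert abs((delta_u - delta_l) - n_c * (1 + n_c)) < 1e-9
```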
These bounds are useful for establishing Lemma \ref{lem:Proof} below, which makes it possible to infer the number of unsatisfied clauses from the GGI value: the lower the GGI value, the higher the number of unsatisfied clauses. Hence, minimizing the GGI value amounts to maximizing the number of unsatisfied clauses, which concludes the proof.
\begin{lem} \label{lem:Proof} For the GGI stable marriage instance obtained by the method described above, a stable marriage $\mathbf{x}$ corresponds to a truth assignment on $V$ for which the number of unsatisfied clauses is: $$ \left\lfloor \frac{\Delta_u - \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}(\mathbf{x}))}{n_c+1}\right\rfloor $$
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem:Proof}] We wish to show that if a stable marriage $\mathbf{x}$ corresponds to a truth assignment on $V$ with exactly $k$ unsatisfied clauses then: $$ \Delta_u - (k+1)(n_c+1) < \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}(\mathbf{x})) \le \Delta_u - k(n_c+1), $$ from which the lemma straightforwardly follows.
Assume that $k$ clauses $\{c_{j_{1}},\ldots,c_{j_{k}}\}$ are unsatisfied for the truth assignment induced by $\mathbf{x}$. Then for each $c_{j_l}$, the two decisive agents of $c_{j_l}$ are both matched to their choice of rank $2 j_l+1$. Hence: \begin{align*} \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}(\mathbf{x})) &\leq \Delta_u - \sum_{l=1}^k (n_c^{j_l}+n_c^{j_l+1})(d(2j_l+2) - d(2j_l+1))\\ &= \Delta_u - \sum_{l=1}^k (n_c^{j_l}+n_c^{j_l+1})n_c^{-j_l} = \Delta_u - k(n_c+1) \end{align*} because each decisive agent of clause $c_{j_l}$, for $l\in \{1,\ldots,k\}$, has a disutility of $d(2j_l+1)$ and not $d(2j_l+2)$.
Now, let $\{c_{j_{1}},\ldots,c_{j_{n_c-k}}\}$ denote the satisfied clauses for the truth assignment induced by $\mathbf{x}$. Then for each $c_{j_l}$, at least one of the two decisive agents of $c_{j_l}$ is matched to his/her choice of rank $2 j_l+2$. In the best case (w.r.t. the GGI value), only one of the two is matched to his/her choice of rank $2 j_l+2$, and his/her weight in the GGI aggregation is $n_c^{j_l+1}$ because his/her disutility is the highest among the two agents. Hence: \begin{align*} \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}(\mathbf{x}))&\geq \Delta_l+\sum_{l=1}^{n_c-k} n_c^{j_l+1} (d(2j_l+2)-d(2j_l+1))\\ & = \Delta_u-n_c(1+n_c) +\sum_{l=1}^{n_c-k} n_c^{j_l+1} n_c^{-j_l}\\ & = \Delta_u-n_c(1+n_c) + (n_c-k)n_c\\ & = \Delta_u-(k+1)n_c\\ & > \Delta_u-(k+1)(n_c+1) \end{align*} This concludes the proof of the lemma.
\end{proof}
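The two inequalities of the proof show that any achievable GGI value $g$ lies in the half-open interval $(\Delta_u-(k+1)(n_c+1),\, \Delta_u-k(n_c+1)]$, so the floor expression of the lemma recovers $k$ exactly. A small numerical sketch (with $\Delta_u$ and $n_c$ chosen arbitrarily for illustration):

```python
# Sketch: any GGI value g in (Delta_u-(k+1)(n_c+1), Delta_u-k(n_c+1)]
# yields floor((Delta_u - g)/(n_c+1)) = k, as stated in the lemma.
import math

def unsatisfied_from_ggi(delta_u, g, n_c):
    return math.floor((delta_u - g) / (n_c + 1))

n_c, delta_u = 5, 100.0  # arbitrary illustrative values
for k in range(n_c + 1):
    g_hi = delta_u - k * (n_c + 1)                  # upper end (attained)
    g_lo = delta_u - (k + 1) * (n_c + 1) + 1e-9     # just above the lower end
    assert unsatisfied_from_ggi(delta_u, g_hi, n_c) == k
    assert unsatisfied_from_ggi(delta_u, g_lo, n_c) == k
```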
\section{A 2-approximation Algorithm} \label{sec:approximation}
We now present a polynomial time 2-approximation algorithm for the GGI stable marriage problem.
The 2-approximation algorithm uses a linear programming formulation of the stable marriage problem, based on the rotation poset \citep{gusfield1989stable}. It is indeed well-known that the set of stable marriages can be characterized by the following set of inequalities, with one binary variable $y(\rho)$ per rotation in the rotation poset: \begin{equation} y(\rho') - y(\rho) \leq 0 \label{eqRotPos} \end{equation} for each pair of rotations such that $\rho$ precedes $\rho'$. Variable $y(\rho)$ is equal to $1$ if rotation $\rho$ is included in the closed set of rotations associated with the stable marriage, and $0$ otherwise. Importantly, note that the extreme points of the polytope defined by constraints~\ref{eqRotPos} together with $0 \le y(\rho) \le 1,~\forall \rho$, are in one-to-one correspondence with the stable marriages of the instance \citep{gusfield1989stable}. Furthermore, the stable marriage $\mathbf{x}$ characterized by variables $y(\rho)$ can be inferred by using Equations~\ref{eq_xy1},~\ref{eq_xy2} and~\ref{eq_xy3} below.
To explain this point we introduce some notations. Let $\Gamma$ denote the set of man-woman pairs included in at least one stable matching. These pairs can be found by looking at the pairs that are created and broken by each rotation. Indeed, note that for each pair $(m,w) \in \Gamma$, there exists exactly one rotation, denoted by $\rho_{\mathtt{get}}(m,w)$, that creates this pair (unless this pair is in $\mathbf{x}^m$) and exactly one rotation, denoted by $\rho_{\mathtt{break}}(m,w)$, that breaks this pair (unless this pair is in $\mathbf{x}^w$). Then, one can compute variables $x_{ij}$ corresponding to a set of variables $y(\rho)$ by using the following equations: \begin{align} x_{ij} & = 1 - y(\rho_{\mathtt{break}}(i,j)) ,\quad \forall (i,j) \in \Gamma \mbox{ s.t. } x_{ij}^m =1 \label{eq_xy1} \\ x_{ij} & = y(\rho_{\mathtt{get}}(i,j)) ,\quad \forall (i,j) \in \Gamma \mbox{ s.t. } x_{ij}^w =1 \label{eq_xy2} \\ x_{ij} & = y(\rho_{\mathtt{get}}(i,j))-y(\rho_{\mathtt{break}}(i,j)) ,\quad \forall (i,j) \in \Gamma \mbox{ s.t. } x_{ij}^m = x_{ij}^w =0 \label{eq_xy3} \end{align}
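The three equations can be unified in one formula: $x_{ij}$ is the indicator that the pair has been created and not broken, with the conventions that pairs of $\mathbf{x}^m$ start created and pairs of $\mathbf{x}^w$ are never broken. A hypothetical sketch (names are illustrative, not from an actual implementation), where a pair with no creating (resp. breaking) rotation is mapped to `None`:

```python
# Sketch of Equations (eq_xy1)-(eq_xy3): recover the pairs of a stable
# marriage from 0/1 rotation variables y. rho_get/rho_break map each pair
# in Gamma to the rotation that creates/breaks it (None if the pair is in
# the man-optimal / woman-optimal marriage, respectively).
def marriage_from_rotations(gamma, rho_get, rho_break, y):
    pairs = []
    for (m, w) in gamma:
        created = 1 if rho_get[(m, w)] is None else y[rho_get[(m, w)]]
        broken = 0 if rho_break[(m, w)] is None else y[rho_break[(m, w)]]
        if created - broken == 1:   # x_ij = 1
            pairs.append((m, w))
    return pairs

# Data taken from four rows of Table (tab:getbreak): rotation r1 swaps the
# partners of m4 and m5.
gamma = [("m4", "w7"), ("m4", "w6"), ("m5", "w6"), ("m5", "w7")]
rho_get = {("m4", "w7"): None, ("m4", "w6"): "r1",
           ("m5", "w6"): None, ("m5", "w7"): "r1"}
rho_break = {("m4", "w7"): "r1", ("m4", "w6"): None,
             ("m5", "w6"): "r1", ("m5", "w7"): None}
assert marriage_from_rotations(gamma, rho_get, rho_break, {"r1": 1}) \
    == [("m4", "w6"), ("m5", "w7")]
assert marriage_from_rotations(gamma, rho_get, rho_break, {"r1": 0}) \
    == [("m4", "w7"), ("m5", "w6")]
```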
\begin{ex} Let us come back to Example~\ref{exGILBERT}. The pairs in $\Gamma$ are listed in the left column of Table~\ref{tab:getbreak}. The rotations $\rho_{\mathtt{get}}(m,w)$ and $\rho_{\mathtt{break}}(m,w)$ for each pair $(m,w)$ are given in the middle and right columns.
\begin{table} \begin{center}
\begin{tabular}{|l|l|l|}
\hline
$(m,w) \in \Gamma$ & $\rho_{\mathtt{get}}(m,w)$ & $\rho_{\mathtt{break}}(m,w)$\\
\hline
$(m_1,w_1)$ & & \\
$(m_2,w_2)$ & & \\
$(m_3,w_3)$ & & \\
$(m_4,w_7)$ & & $\rho_1$ \\
$(m_4,w_6)$ & $\rho_1$ & \\
$(m_5,w_6)$ & & $\rho_1$\\
$(m_5,w_7)$ & $\rho_1$ & \\
$(m_6,w_4)$ & & $\rho_2$\\
$(m_6,w_5)$ & $\rho_2$ & \\
$(m_7,w_5)$ & & $\rho_2$\\
$(m_7,w_4)$ & $\rho_2$ & \\
$(m_8,w_8)$ & & $\rho_3$\\
$(m_8,w_{10})$ & $\rho_3$ & \\
$(m_9,w_{10})$ & & $\rho_3$\\
$(m_9,w_9)$ & $\rho_3$ & \\
$(m_{10},w_9)$ & & $\rho_3$\\
$(m_{10},w_8)$ & $\rho_3$ & \\
\hline \end{tabular} \caption{\label{tab:getbreak}Rotations $\rho_{\mathtt{get}}(m,w)$ and $\rho_{\mathtt{break}}(m,w)$ in Example~\ref{exGILBERT}.} \end{center} \end{table} \end{ex}
A mathematical programming formulation of the GGISM problem reads as follows: \begin{empheq}[left=\mathcal{P} \empheqlbrace]{align} & \min_{\mathbf{d},\mathbf{x},\mathbf{y}} \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}) \notag \\ d^m_i & = \sum_{(i,j) \in \Gamma} x_{ij} d(\mathtt{rk}(m_i,w_j)) ,\quad \forall i \in \{1,\ldots,n\} \notag \\ d^w_j & = \sum_{(i,j) \in \Gamma} x_{ij} d(\mathtt{rk}(w_j,m_i)) ,\quad \forall j \in \{1,\ldots,n\} \notag \\ x_{ij} & = 1 - y(\rho_{\mathtt{break}}(i,j)) ,\quad \forall (i,j) \in \Gamma \mbox{ s.t. } x_{ij}^m =1 \label{const:x-y1} \\ x_{ij} & = y(\rho_{\mathtt{get}}(i,j)) ,\quad \forall (i,j) \in \Gamma \mbox{ s.t. } x_{ij}^w =1 \label{const:x-y2} \\ x_{ij} & = y(\rho_{\mathtt{get}}(i,j))-y(\rho_{\mathtt{break}}(i,j)) ,\quad \forall (i,j) \in \Gamma \mbox{ s.t. } x_{ij}^m = x_{ij}^w =0 \label{const:x-y3} \\ y(\rho') & - y(\rho) \le 0 ,\quad \forall (\rho,\rho') \mbox{ s.t. } \rho < \rho' \label{const:y} \\ d^m_i & \ge 0 ,\quad \forall i \in \{1,\ldots,n\} \notag\\ d^w_j & \ge 0 ,\quad \forall j \in \{1,\ldots,n\} \notag\\ x_{ij} & \ge 0 ,\quad \forall (i,j) \in \Gamma\notag\\ y(\rho) & \in \{0,1\},\quad \forall \rho \in P \notag \end{empheq}
\noindent where $d_i^m$ (resp. $d_j^w$) represents the disutility of $m_i$ (resp. $w_j$), $\mathbf{d} = (d_1^m,\ldots,d_n^m,d_1^w,\ldots,d_n^w)$, and as usual: \begin{itemize} \item $x_{ij}=1$ (resp. 0) if $(m_i,w_j)$ is (resp. is not) in the stable marriage $\mathbf{x}$, \item $y(\rho)=1$ (resp. 0) if $\rho$ belongs (resp. does not belong) to the set of rotations characterizing $\mathbf{x}$, \item $P$ is the set of all rotations. \end{itemize} Let us denote by $\widehat{\mathcal{P}}$ the linear programming relaxation of $\mathcal{P}$ where $y(\rho) \in \{0,1\}$ is replaced by $0 \le y(\rho) \le 1$. Importantly, note that variables $y(\rho)$ in an optimal solution to $\widehat{\mathcal{P}}$ are not necessarily integer because the objective function is non-linear (and therefore an optimal solution is not necessarily attained at a vertex of the polytope).
A polynomial time 2-approximation algorithm can be obtained by rounding an optimal solution of $\widehat{\mathcal{P}}$. The 2-approximation algorithm reads as follows:\\ [1ex] {\sc Rounding Algorithm} \begin{enumerate} \item Solve $\widehat{\mathcal{P}}$ and let $(\mathbf{\hat{d}},\mathbf{\hat{x}},\mathbf{\hat{y}})$ denote an optimal solution to $\widehat{\mathcal{P}}$; \item For each $\rho \in P$, set $y(\rho) = 1$ if $\hat{y}(\rho) \ge 0.5$, and $y(\rho) = 0$ otherwise; \item Return the stable marriage $\mathbf{x}$ obtained from $\mathbf{y}$ by using constraints~\ref{const:x-y1}--\ref{const:x-y3} in $\mathcal{P}$. \end{enumerate}
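Step 2 is a simple threshold rounding. A minimal sketch (hypothetical names) is given below; it also checks that the rounded set remains closed under precedence, which holds because $\hat{y}$ is nonincreasing along precedence chains by constraint~\ref{const:y}.

```python
# Sketch of Step 2 of Rounding Algorithm: threshold the fractional rotation
# variables at 0.5. precedences lists pairs (rho, rho2) with rho preceding
# rho2; thresholding preserves y(rho2) <= y(rho) since y_hat satisfies it.
def round_rotations(y_hat, precedences):
    y = {rho: 1 if v >= 0.5 else 0 for rho, v in y_hat.items()}
    # sanity check: the rounded set of rotations is still closed
    assert all(y[rho] >= y[rho2] for (rho, rho2) in precedences)
    return y

# e.g. fractional values 0.75, 0.75, 0.0 round to the rotation set {r1, r2}
assert round_rotations({"r1": 0.75, "r2": 0.75, "r3": 0.0}, []) \
    == {"r1": 1, "r2": 1, "r3": 0}
```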
\begin{ex} Coming back to Example \ref{exGILBERT}, assume that the weights of the GGI operator are defined by Equation~\ref{gCoeff} and that the disutility function is defined by $d(i) = i$. Then, an optimal solution $(\mathbf{\hat{d}},\mathbf{\hat{x}},\mathbf{\hat{y}})$ to $\widehat{\mathcal{P}}$ is characterized by $\hat{y}(\rho_1) = \hat{y}(\rho_2) = 0.75$ and $\hat{y}(\rho_3) = 0$ (for a GGI value of $4.3075$). For this instance, by {\sc Rounding Algorithm}, the obtained vector $\mathbf{y}$ is therefore $y(\rho_1) = y(\rho_2) = 1$ and $y(\rho_3) = 0$. This corresponds to stable marriage $\mathbf{x}^4$, which is in fact an optimal solution. \end{ex}
Steps 2 and 3 of the algorithm can obviously be performed in polynomial time. In step 1, solving $\widehat{\mathcal{P}}$ can also be performed in polynomial time by using one of the linearizations of the GGI operator proposed by \cite{ogryczak2003solving}.
The following lemma ensures that the returned solution is a 2-approximation of an optimal solution of $\mathcal{P}$: \begin{lem} For any feasible solution $(\mathbf{\hat{d}},\mathbf{\hat{x}},\mathbf{\hat{y}})$ of $\widehat{\mathcal{P}}$, the feasible solution $(\mathbf{d},\mathbf{x},\mathbf{y})$ of $\mathcal{P}$ obtained by setting $$ y(\rho) = \left \lbrace \begin{array}{cl}
1 & \mbox{ if } \hat{y}(\rho) \ge 0.5,\\
0 & \mbox{ otherwise}
\end{array} \right. $$
is such that $\mathbf{\hat{d}} \ge \frac{1}{2}\mathbf{d}$ where $\ge$ is taken componentwise. \label{lemhalf} \end{lem} \begin{proof} In order to establish the result stated in the lemma, we introduce the notion of man and woman \emph{weights} of a rotation. Given a rotation $\rho = (m_{i_0}, w_{i_0}),\ldots,(m_{i_{r-1}},w_{i_{r-1}})$ we define the $m_i$-weight of that rotation by: \begin{equation*} \omega_i^m(\rho) = \left\{\begin{array}{l} d(\mathtt{rk}(m_{i_k},w_{i_k})) - d(\mathtt{rk}(m_{i_k},w_{i_{(k+1)\mbox{\tiny ~mod } r}})) \mbox{ if } i \in \{i_0,\ldots,i_{r-1}\} \mbox{ and } i=i_k\\ 0 \mbox{ otherwise.} \end{array}\right. \end{equation*} Similarly, we define the $w_j$-weight of that rotation by: \begin{equation*} \omega_j^w(\rho) = \left\{\begin{array}{l} d(\mathtt{rk}(w_{i_k},m_{i_k})) - d(\mathtt{rk}(w_{i_k},m_{i_{(k-1)\mbox{\tiny ~mod } r}})) \mbox{ if } j \in \{i_0,\ldots,i_{r-1}\} \mbox{ and } j=i_k\\ 0 \mbox{ otherwise.} \end{array}\right. \end{equation*}
Note that a man weight of a rotation is always nonpositive, while a woman weight of a rotation is always nonnegative (both are zero for agents not involved in the rotation).
Assume that $\rho$ is a rotation that is exposed in a stable marriage $\mathbf{x}$, and let $\mathbf{x}'$ be the stable marriage obtained from $\mathbf{x}$ by eliminating $\rho$. Then: \begin{equation*} d(m_i,\mathbf{x}') = d(m_i,\mathbf{x}) - \omega_i^m(\rho), \hspace{0.5cm} d(w_j,\mathbf{x}') = d(w_j,\mathbf{x}) - \omega_j^w(\rho). \end{equation*} Consequently, if $\mathbf{x}$ is the stable marriage obtained from the man-optimal stable marriage $\mathbf{x}^m$ by eliminating rotations $\rho_1, \ldots, \rho_t$, then: \begin{equation*} d(m_i,\mathbf{x}) = d(m_i,\mathbf{x}^m) - \sum_{k=1}^t \omega_i^m(\rho_k), \hspace{0.5cm} d(w_j,\mathbf{x}) = d(w_j,\mathbf{x}^m) - \sum_{k=1}^t \omega_j^w(\rho_k). \end{equation*} We now establish the result stated in the lemma. Let $(\mathbf{\hat{d}},\mathbf{\hat{x}},\mathbf{\hat{y}})$ denote a feasible solution of $\widehat{\mathcal{P}}$. The previous equations extend as follows for solutions of $\widehat{\mathcal{P}}$: \begin{equation} \hat{d}^m_i = d(m_i,\mathbf{x}^m) - \sum_{\rho \in P} \hat{y}(\rho)\omega_i^m(\rho), \hspace{0.5cm} \hat{d}^w_j = d(w_j,\mathbf{x}^m) - \sum_{\rho \in P} \hat{y}(\rho) \omega_j^w(\rho). \label{eq:hatd^m_i} \end{equation}
Now consider the feasible solution $(\mathbf{d},\mathbf{x},\mathbf{y})$ of $\mathcal{P}$ defined by $y(\rho) = 1$ if $\hat{y}(\rho) \ge 0.5$, and $0$ otherwise. The feasibility of $(\mathbf{d},\mathbf{x},\mathbf{y})$ comes from the fact that $\{\rho : \hat{y}(\rho) \ge 0.5\}$ is a closed set of rotations. Indeed, note that constraints~\ref{const:y} ensure that $y(\rho') \le y(\rho)$ for all $\rho<\rho'$. We have: \begin{align*} \hat{d}^m_i - \frac{1}{2}d^m_i &= d(m_i,\mathbf{x}^m) - \sum_{\rho \in P}\hat{y}(\rho) \omega_i^m(\rho) - \frac{1}{2} (d(m_i,\mathbf{x}^m) - \sum_{\rho \in P} y(\rho) \omega_i^m(\rho))\\
&=\frac{1}{2} d(m_i,\mathbf{x}^m) - \sum_{\rho\in P} (\hat{y}(\rho) - \frac{1}{2}y(\rho)) \omega_i^m(\rho)
\\&\ge \frac{1}{2} d(m_i,\mathbf{x}^m) \ge 0
\end{align*} as $0 \le (\hat{y}(\rho) - \frac{1}{2}y(\rho))$ for all $\rho\in P$ and $\omega_i^m(\rho) \le 0$ for all $i\in\{1,\ldots,n\}$ and $\rho\in P$. Hence, $\hat{d}^m_i \ge \frac{1}{2}d^m_i$ for all $i\in\{1,\ldots,n\}$.
Similarly, for women we have: \begin{align*} \hat{d}^w_j-\frac{1}{2}d^w_j &=d(w_j,\mathbf{x}^m) - \sum_{\rho\in P} \hat{y}(\rho) \omega_j^w(\rho)-\frac{1}{2}(d(w_j,\mathbf{x} ^m) - \sum_{\rho \in P} y(\rho) \omega_j^w(\rho))\\
&=\frac{1}{2} d(w_j,\mathbf{x}^m) - \sum_{\rho\in P} (\hat{y}(\rho) - \frac{1}{2}y(\rho)) \omega_j^w(\rho)\\
&\ge \frac{1}{2}(d(w_j,\mathbf{x}^m) - \sum_{\rho \in P} \omega_j^w(\rho)) \end{align*} as $(\hat{y}(\rho) - \frac{1}{2}y(\rho)) \le 0.5$ for all $\rho\in P$ and $\omega_j^w(\rho) \ge 0$ for all $j\in\{1,\ldots,n\}$ and $\rho\in P$. Since eliminating all rotations from $\mathbf{x}^m$ leads to $\mathbf{x}^w$, we have that $\frac{1}{2}(d(w_j,\mathbf{x}^m) - \sum_{\rho \in P} \omega_j^w(\rho)) = \frac{1}{2} d(w_j,\mathbf{x}^w)$. Therefore, $\hat{d}^w_j-\frac{1}{2}d^w_j \geq 0$ and hence, $\hat{d}^w_j \ge \frac{1}{2}d^w_j$ for all $j\in\{1,\ldots,n\}$.
By combining the inequalities obtained for men and women, we obtain that $\mathbf{\hat{d}} \ge \frac{1}{2}\mathbf{d}$, which concludes the proof.
\end{proof}
We can now state the main result of this section:
\begin{thr} {\sc Rounding Algorithm} is a polynomial time 2-approximation algorithm for the GGI stable marriage problem, and the bound is tight. \end{thr}
\begin{proof} We first recall that all steps of {\sc Rounding Algorithm} can be performed in polynomial time. Furthermore, by Lemma~\ref{lemhalf}, the feasible solution $(\mathbf{{d}},\mathbf{{x}},\mathbf{{y}})$ generated by {\sc Rounding Algorithm} is such that $\mathbf{\hat{d}} \ge \frac{1}{2}\mathbf{d}$, where $(\mathbf{\hat{d}},\mathbf{\hat{x}},\mathbf{\hat{y}})$ is an optimal solution to $\widehat{\mathcal{P}}$. Consequently: \[ \mathtt{GGI}_{{\bm \lambda}}(\mathbf{\hat{d}}) \ge \frac{1}{2} \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}) \] because $\mathtt{GGI}_{{\bm \lambda}}(\mathbf{{d}}) \le \mathtt{GGI}_{{\bm \lambda}}(\mathbf{{d'}})$ for $\mathbf{{d}} \le \mathbf{{d'}}$ (see e.g. \cite{fodor1995characterization}) and $\mathtt{GGI}_{{\bm \lambda}}(\alpha \mathbf{d})=\alpha\mathtt{GGI}_{{\bm \lambda}}(\mathbf{d})$ for $\alpha > 0$. Since $\widehat{\mathcal{P}}$ is a relaxation of $\mathcal{P}$, $\mathtt{GGI}_{{\bm \lambda}}(\mathbf{\hat{d}})$ is a lower bound on the optimal value of $\mathcal{P}$, and therefore $\mathtt{GGI}_{{\bm \lambda}}(\mathbf{d})$ is at most twice the optimal value.
For the tightness of the bound, consider the following instance of the stable marriage problem: \begin{center} $m_1 : 1 \succ^m_1 2 \succ^m_1 3$ \hspace{0.5cm} $w_1 : 2 \succ^w_1 3 \succ^w_1 1$\\ $m_2 : 2 \succ^m_2 3 \succ^m_2 1$ \hspace{0.5cm} $w_2 : 3 \succ^w_2 1 \succ^w_2 2$\\ $m_3 : 3 \succ^m_3 1 \succ^m_3 2$ \hspace{0.5cm} $w_3 : 1 \succ^w_3 2 \succ^w_3 3$ \end{center} There are two rotations $\rho_1 =(1,1),(2,2),(3,3)$ and $\rho_2=(1,2),(2,3),(3,1)$, with $\rho_1 < \rho_2$, which yield three stable marriages: \begin{itemize} \item the man-optimal stable marriage $\mathbf{x}^m$ in which each man is matched with his first choice, and which corresponds to eliminating no rotation, \item the woman-optimal stable marriage $\mathbf{x}^w$ in which each woman is matched with her first choice, and which corresponds to eliminating both rotations, \item a ``compromise'' stable marriage $\mathbf{x}^c$ in which each agent is matched with his/her second choice, and which corresponds to eliminating only $\rho_1$. \end{itemize} We use the disutility function $d$ defined by $d(1)=0$, $d(2)=1+\epsilon$ and $d(3)=2$ (with $\epsilon>0$) and the following GGI weights: ${\bm \lambda} = (a,b,c,0,0,0)$ with $a \ge b \ge c > 0$. The disutility vectors of the three stable marriages are then $\mathbf{d}(\mathbf{x}^m) = (0,0,0,2,2,2)$, $\mathbf{d}(\mathbf{x}^w) = (2,2,2,0,0,0)$ and $\mathbf{d}(\mathbf{x}^c) = (1+\epsilon,1+\epsilon,1+\epsilon,1+\epsilon,1+\epsilon,1+\epsilon)$.
By using Equation~\ref{eq:hatd^m_i} from the proof of Lemma~\ref{lemhalf}, the value of $\hat{d}^m_i$ for each man $m_i$ and the value of $\hat{d}^w_j$ for each woman $w_j$ are written as follows in terms of $\hat{y}(\rho_1)$ and $\hat{y}(\rho_2)$: \begin{align*} \hat{d}^m_i &= 0 + \hat{y}(\rho_1) (1 + \epsilon) + \hat{y}(\rho_2) (1 - \epsilon) \quad \forall i\in\{1,2,3\}\\ \hat{d}^w_j &= 2 - \hat{y}(\rho_1) (1 - \epsilon) - \hat{y}(\rho_2) (1 + \epsilon) \quad \forall j\in\{1,2,3\} \end{align*} where $\hat{y}(\rho_1) \ge \hat{y}(\rho_2)$. We see that the three men share the same disutility value, as well as the three women. The GGI value of a feasible solution to $\widehat{\mathcal{P}}$ is thus completely determined by the common disutility of the men or the common disutility of the women because only the three least satisfied agents are taken into account in ${\bm \lambda}$. Consequently, an optimal solution to $\widehat{\mathcal{P}}$ minimizes $\max\{\hat{d}_1^m,\hat{d}_1^w\}$ for $\hat{y}(\rho_1) \ge \hat{y}(\rho_2)$. Simple calculations make it possible to conclude that the only optimal solution to $\widehat{\mathcal{P}}$ is characterized by $\hat{y}(\rho_1)=\hat{y}(\rho_2)=0.5$.
For this instance, {\sc Rounding Algorithm} returns therefore the woman-optimal stable marriage which has the ordered disutility vector $(2,2,2,0,0,0)$ and a GGI value of $2(a + b + c)$. However, an optimal stable marriage is the ``compromise'' stable marriage, which has the ordered disutility vector $(1+\epsilon,1+\epsilon,1+\epsilon,1+\epsilon,1+\epsilon,1+\epsilon)$ and a GGI value of $(1 + \epsilon)(a + b + c)$. By taking the limit for $\epsilon$ going to 0, we obtain the tightness of the bound. \end{proof}
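The two GGI values of the tightness argument can be checked with a minimal sketch, assuming the GGI is the weighted sum over the disutility vector sorted in nonincreasing order (the definition used throughout), and illustrative weight values $a=3$, $b=2$, $c=1$:

```python
# Minimal sketch: GGI as a weighted sum over the disutility vector sorted
# in nonincreasing order.
def ggi(weights, d):
    return sum(l * v for l, v in zip(weights, sorted(d, reverse=True)))

a, b, c, eps = 3.0, 2.0, 1.0, 0.01   # illustrative values with a >= b >= c > 0
lam = (a, b, c, 0, 0, 0)
woman_optimal = [2, 2, 2, 0, 0, 0]   # marriage returned by Rounding Algorithm
compromise = [1 + eps] * 6           # optimal stable marriage
assert ggi(lam, woman_optimal) == 2 * (a + b + c)
assert abs(ggi(lam, compromise) - (1 + eps) * (a + b + c)) < 1e-9
# the ratio 2(a+b+c) / ((1+eps)(a+b+c)) tends to 2 as eps goes to 0
```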
\begin{remark} Note that the approach taken in {\sc Rounding Algorithm} is valid for any aggregation criterion $F$ on the dissatisfactions of agents for which the following conditions hold: \begin{align*} F(\frac{1}{2}\mathbf{d}(\mathbf{x})) &= \frac{1}{2}F(\mathbf{d}(\mathbf{x}))\\ F(\mathbf{d}(\mathbf{x}) + \mathbf{r}) &\ge F(\mathbf{d}(\mathbf{x})) \end{align*} where $\mathbf{r}$ is any non-negative vector. \end{remark}
\begin{remark} A general approximation result for the optimization of a generalized Gini index in multiobjective optimization problems has been proposed by \cite{kasperski2015combinatorial}. For the GGISM problem, it amounts to computing an optimal stable marriage according to the sum of disutilities of pairs $(m_i,w_j)$, where the disutility of a pair $(m_i,w_j)$ is defined by $\lambda_1 \max\{d(\mathtt{rk}(m_i,w_j)),d(\mathtt{rk}(w_j,m_i))\} + \lambda_2 \min\{d(\mathtt{rk}(m_i,w_j)),d(\mathtt{rk}(w_j,m_i))\}$. This can be performed in polynomial time by linear programming. The returned solution is an $N\lambda_1$-approximation, provided $\sum_{i=1}^N \lambda_i = 1$. To obtain a better guarantee than 2, one should have $\lambda_1 < 2/N$. In contrast, by taking advantage of the specific structure of the stable marriage problem, our approach yields a 2-approximation \emph{regardless of the weights used}. \end{remark}
\section{The GGI Stable Marriage Problem with a Bounded Number of Non-zero Weights} \label{sec:param}
In this section, we provide an algorithm whose complexity is $O(2^Kn^{2K+4})$, where $K = \max\{i : \lambda_i > 0\}$ is the number of non-zero weights in the GGI operator (the two coincide because the weights are non-increasing). Hence, the algorithm runs in polynomial time if $K$ is assumed to be a constant. In the parametrized complexity terminology \citep{niedermeier2006invitation}, this means that the GGI stable marriage problem belongs to class XP for parameter $K$.
We adopt a brute-force approach to solve the problem in $O(2^Kn^{2K+4})$. Let \begin{multline*} \mathbf{t}(\mathbf{x})=((d(m_1,\mathbf{x}),m_1,\mu_{\mathbf{x}}(m_1)),\ldots,(d(m_n,\mathbf{x}), m_n,\mu_{\mathbf{x}}(m_n)),\\ (d(w_1,\mathbf{x}),w_1,\mu_{\mathbf{x}}(w_1)),\ldots,(d(w_n,\mathbf{x}),w_n,\mu_{\mathbf{x}}(w_n))) \end{multline*} denote the vector of triples $(d(a_i,\mathbf{x}),a_i,\mu_{\mathbf{x}}(a_i))$ induced by stable marriage $\mathbf{x}$, where $d(a_i,\mathbf{x})$ is the dissatisfaction of agent $a_i$ when matched with $\mu_{\mathbf{x}}(a_i)$. We denote by $T^\downarrow(\mathbf{x})$ the set of vectors $\mathbf{t}^\downarrow(\mathbf{x})$ that can be obtained from $\mathbf{t}(\mathbf{x})$ by sorting the triples by nonincreasing dissatisfaction (it is a set because ties can be broken in several ways). The projection of a vector $\mathbf{t}^\downarrow(\mathbf{x}) \in T^\downarrow(\mathbf{x})$ on the $K$ first components is denoted by $\mathbf{t}^\downarrow_K(\mathbf{x})$. We denote by $T^\downarrow_K(\mathbf{x})$ the set $\{\mathbf{t}^\downarrow_K(\mathbf{x}) : \mathbf{t}^\downarrow(\mathbf{x}) \in T^\downarrow(\mathbf{x})\}$.
For instance, assume that $\mathbf{t}(\mathbf{x})$ $=$ $((1,m_1,w_2),(2,m_2,w_1),(2,w_1,m_2),(1,w_2,m_1))$. Then the set $T^\downarrow(\mathbf{x})$ is
\begin{multline*}
\{((2,m_2,w_1),(2,w_1,m_2),(1,m_1,w_2),(1,w_2,m_1)),\\
((2,w_1,m_2),(2,m_2,w_1),(1,m_1,w_2),(1,w_2,m_1)),\\
((2,m_2,w_1),(2,w_1,m_2),(1,w_2,m_1),(1,m_1,w_2))\\
((2,w_1,m_2),(2,m_2,w_1),(1,w_2,m_1),(1,m_1,w_2))\}
\end{multline*} and the set $T^\downarrow_2(\mathbf{x})$ is $\{((2,m_2,w_1),(2,w_1,m_2)), ((2,w_1,m_2),(2,m_2,w_1))\}$.
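This small example can be reproduced mechanically. The sketch below is a brute-force illustration only (it enumerates all permutations, unlike the polynomial enumeration described below): it collects every nonincreasing ordering of the triples and truncates to the first $K$ components.

```python
# Brute-force sketch (illustrative only): compute T^down_K(x) from t(x) by
# keeping all orderings of the triples that are nonincreasing in the
# dissatisfaction (first component), then truncating to K components.
from itertools import permutations

def t_down_k(triples, k):
    result = set()
    for p in permutations(triples):
        if all(p[i][0] >= p[i + 1][0] for i in range(len(p) - 1)):
            result.add(p[:k])
    return result

t = [(1, "m1", "w2"), (2, "m2", "w1"), (2, "w1", "m2"), (1, "w2", "m1")]
# the two orderings of the example, due to the tie between the two triples
# with dissatisfaction 2
assert t_down_k(t, 2) == {((2, "m2", "w1"), (2, "w1", "m2")),
                          ((2, "w1", "m2"), (2, "m2", "w1"))}
assert len(t_down_k(t, 4)) == 4
```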
The idea is to enumerate all vectors in $T_K^\downarrow = \cup_{\mathbf{x}\in\mathcal{X}} T^\downarrow_K(\mathbf{x})$ without redundancy. The polynomiality of the approach follows from the fact that $|T_K^\downarrow| \leq (2n^2)^{K}$ because the number of distinct triples is upper bounded by $2n^2$. Note that we have: $$
\min_{\mathbf{x}\in\mathcal{X}} \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}(\mathbf{x})) =
\min_{\mathbf{t}\in T_K^\downarrow} \mathtt{GGI}_{{\bm \lambda}}(\mathbf{t}) $$ because $\lambda_i=0$ for all $i>K$, where, by abuse of notation, we denote by $\mathtt{GGI}_{{\bm \lambda}}(\mathbf{t})$ the value of the GGI operator applied to the vector of dissatisfactions obtained from $\mathbf{t}$\footnote{Note that vector $\mathbf{t}\in T_K^\downarrow$ is incomplete as it only has $K$ components, but this suffices to apply the GGI operator because $\lambda_i=0$ for all $i>K$.}. Hence, an optimal GGI stable marriage can be identified by finding a vector $\mathbf{t}\in T_K^\downarrow$ minimizing the GGI operator and computing a corresponding stable marriage.
\begin{ex} Coming back to the instance of Example \ref{exGILBERT}, assume that $K=2$ and that the disutility function is defined by $d(i)= i$. Then, our enumeration algorithm would produce the following set $T^\downarrow_2$:
\begin{multline*} \{((10,w_{10},m_9),(7,w_{4},m_6)),((10,w_{10},m_9),(7,w_{5},m_7)),\\ ((10,w_{10},m_9),(7,w_{6},m_5)),((10,w_{10},m_9),(7,w_{7},m_4)),\\ ((10,w_{10},m_9),(5,m_{4},w_6)),((10,w_{10},m_9),(5,m_{5},w_7)),\\ ((10,w_{10},m_9),(5,m_{6},w_5)),((10,w_{10},m_9),(5,m_{7},w_4)),\\ ((9,w_{10},m_8),(5,m_{4},w_6)),((9,w_{10},m_8),(5,m_{5},w_7)),\\ ((9,w_{10},m_8),(5,m_{6},w_5)),((9,w_{10},m_8),(5,m_{7},w_4)),\\ ((9,w_{10},m_8),(5,m_{8},w_{10})),((9,w_{10},m_8),(5,m_{9},w_9)),\\ ((9,w_{10},m_8),(5,m_{10},w_8))\} \end{multline*} For this instance, the optimal GGI value is therefore necessarily $9 \lambda_1 + 5 \lambda_2$. Note that, in most cases, the optimal GGI value depends on ${\bm \lambda}$ (this is not the case here because $(9,5)$ is componentwise smaller than or equal to all vectors of dissatisfactions obtained from $T^\downarrow_2$).
\end{ex}
We now describe our enumeration algorithm. Algorithm \ref{main} builds set $T_K^\downarrow$ by induction using the following formula: \begin{align*}
T_0^\downarrow &= \{()\}\\
T_k^\downarrow &= \{\mathbf{v} \circ t : \mathbf{v} \in T_{k-1}^\downarrow \text{ and } t \in T(\mathbf{v})\} \text{ where } T(\mathbf{v}) = \{ t_k : \mathbf{t} \in T_N^\downarrow \text{ s.t. } (t_1,\ldots, t_{k-1}) = \mathbf{v}\} \end{align*} The aim of Algorithm \ref{NextTriples} is to compute $T(\mathbf{v})$, i.e., the set of possible triples for the $k^{th}$ component of a vector in $T_k^\downarrow$ starting with the $(k-1)$-vector $\mathbf{v}$. The idea is to impose restrictions on the considered stable marriages so that the least satisfied agents, as well as their matches, correspond to the ones in $\mathbf{v}$. For this purpose, we impose mandatory rotations (set $\mathtt{IN}_\mathbf{v}$) and forbidden rotations (set $\mathtt{OUT}_\mathbf{v}$). Note that, each time a rotation is made mandatory (resp. forbidden), the set of its ancestors (resp. descendants), denoted by $\mathtt{Anc}(\rho)$ (resp. $\mathtt{Desc}(\rho)$), is also made mandatory (resp. forbidden), so that $\mathtt{IN}_{\mathbf{v}}$ (resp. $P\setminus \mathtt{OUT}_\mathbf{v}$) remains a closed set of rotations. For each triple $(d,a,a')$ belonging to $\mathbf{v}$, we ensure that agent $a$ is matched with agent $a'$ by making rotation $\rho_{\mathtt{get}}(a,a')$ mandatory and $\rho_{\mathtt{break}}(a,a')$ forbidden (Lines 3--5). Additionally, to ensure that the $k$ least satisfied agents are indeed those involved in $\mathbf{v}$, we put a threshold on the dissatisfactions of the agents in $\mathcal{A}_{\overline{\mathbf{v}}} = \mathcal{M}\cup\mathcal{W}\setminus \{a:(d,a,a') \in \mathbf{v}\}$. Note that the set $\mathcal{A}_{\overline{\mathbf{v}}}$ is updated in Line 3. Let $d_{\min}(\mathbf{v})$ denote the dissatisfaction of the last triple in $\mathbf{v}$ (i.e., the lowest level of dissatisfaction in $\mathbf{v}$). The dissatisfactions of the agents in $\mathcal{A}_{\overline{\mathbf{v}}}$ must not be strictly greater than $d_{\min}(\mathbf{v})$.
This condition is imposed by again using sets $\mathtt{IN}_\mathbf{v}$ and $\mathtt{OUT}_\mathbf{v}$. More precisely, given a rotation $\rho = (m_{i_0},w_{i_0}),\ldots,(m_{i_{r-1}},w_{i_{r-1}})$, we define $d_{\max}^w(\rho)=\max_{k=0,\ldots,r-1} d(w_{i_k},m_{i_k})$ as the highest dissatisfaction of a woman involved in $\rho$ before $\rho$ is eliminated, and $d_{\max}^m(\rho)=\max_{k=0,\ldots,r-1} d(m_{i_k},w_{i_{(k+1)\mbox{\tiny ~mod } r}})$ as the highest dissatisfaction of a man involved in $\rho$ after $\rho$ is eliminated. To make sure that the agents in $\mathcal{A}_{\overline{\mathbf{v}}}$ have a dissatisfaction lower than or equal to $d_{\min}(\mathbf{v})$, we make mandatory (resp. forbidden) any rotation $\rho\in P\backslash \mathtt{OUT}_\mathbf{v}$ (resp. $P \backslash \mathtt{IN}_{\mathbf{v}}$) such that $d_{\max}^w(\rho) > d_{\min}(\mathbf{v})$ (resp. $d_{\max}^m(\rho) > d_{\min}(\mathbf{v})$) (Lines 6--7, resp. Lines 8--9). The enumeration of the triples in $T(\mathbf{v})$ is performed by branching on the gender (man or woman) of the agent that will realize the $k^{th}$ highest dissatisfaction. We denote by $T_W(\mathbf{v})$ (resp. $T_M(\mathbf{v})$) the set of triples $(d,a,a') \in T(\mathbf{v})$ where $a \in \mathcal{W}$ (resp. $a \in \mathcal{M}$). Of course, $T_W(\mathbf{v}) \cup T_M(\mathbf{v}) = T(\mathbf{v})$. Algorithm \ref{NextWomen} enumerates the triples in $T_W(\mathbf{v})$ while Algorithm \ref{NextMen} enumerates the triples in $T_M(\mathbf{v})$ (Line 10 of Algorithm \ref{NextTriples}). The validity of the approach follows from the validity of Algorithms~\ref{NextWomen} and~\ref{NextMen}.
\paragraph{Validity of the approach.} The operations of Algorithms \ref{NextWomen} and \ref{NextMen} are similar. They proceed in the spirit of the algorithm proposed by \cite{gusfield1987three} for determining a minmax stable marriage. Let $\mathbf{x}_R$ denote the stable marriage corresponding to a set $R$ of rotations. Note that we have built sets $\mathtt{IN}_{\mathbf{v}}$ and $\mathtt{OUT}_\mathbf{v}$ such that if $R\cap\mathtt{IN}_{\mathbf{v}} = \mathtt{IN}_{\mathbf{v}}$ and $R\cap\mathtt{OUT}_\mathbf{v} = \emptyset$, then $\mathbf{v} \in T^\downarrow_{k-1}(\mathbf{x}_R)$. Furthermore, the special case $\mathbf{x}_{\mathtt{IN}_{\mathbf{v}}}$ (resp. $\mathbf{x}_{P\setminus \mathtt{OUT}_\mathbf{v}}$) is the stable marriage compatible with $\mathtt{IN}_{\mathbf{v}}$ and $\mathtt{OUT}_\mathbf{v}$ that most satisfies the men (resp. women), as it includes as few (resp. as many) rotations as allowed by sets $\mathtt{IN}_\mathbf{v}$ and $\mathtt{OUT}_\mathbf{v}$. We only explain the operation of Algorithm \ref{NextWomen}, because the operation of Algorithm \ref{NextMen} is symmetric.
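As an aside, the map $R \mapsto \mathbf{x}_R$ used above can be sketched concretely. The snippet below is illustrative only (not the paper's code): it assumes a rotation is represented as a list of (man, woman) pairs, that `man_optimal` maps each man to his partner in the man-optimal marriage, and that the closed set $R$ is given in an order compatible with the rotation poset.

```python
# Illustrative sketch: computing the stable marriage x_R from a closed set R
# of rotations, applied to the man-optimal marriage.

def apply_rotation(matching, rho):
    """Eliminate rotation rho = [(m_0, w_0), ..., (m_{r-1}, w_{r-1})]:
    each man m_k, currently matched to w_k, is rematched to w_{(k+1) mod r}."""
    r = len(rho)
    for k, (m, w) in enumerate(rho):
        assert matching[m] == w, "rho must be exposed in the current matching"
        matching[m] = rho[(k + 1) % r][1]
    return matching

def marriage_from_rotations(man_optimal, closed_set):
    """closed_set: rotations listed in an order compatible with the poset."""
    matching = dict(man_optimal)  # man -> woman; copy to keep the input intact
    for rho in closed_set:
        apply_rotation(matching, rho)
    return matching
```

Eliminating more rotations moves the matching toward the woman-optimal side, which is why $\mathbf{x}_{\mathtt{IN}_{\mathbf{v}}}$ favors the men and $\mathbf{x}_{P\setminus \mathtt{OUT}_\mathbf{v}}$ favors the women.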
The aim of Algorithm \ref{NextWomen} is to enumerate all triples $(d,a,a')$ in $T_W(\mathbf{v})$. Notably, we enumerate these triples by nonincreasing values of $d$ by carefully exploring the set of stable marriages compatible with sets $\mathtt{IN}_\mathbf{v}$ and $\mathtt{OUT}_\mathbf{v}$. More precisely, at each iteration $i$ of the algorithm (loop {\tt while} in Line 5) we consider a stable marriage $\mathbf{x}_i$ compatible with sets $\mathtt{IN}_\mathbf{v}$ and $\mathtt{OUT}_\mathbf{v}$ such that, for $i\neq 0$, every woman is at least as well off in $\mathbf{x}_{i}$ as in $\mathbf{x}_{i-1}$ (with at least one woman strictly better off). At each iteration, the new triples are found by looking at the set $W_i$ that includes all women in $\mathcal{A}_{\overline{\mathbf{v}}}$ whose dissatisfaction can be ranked in $k^{th}$ position in $\mathbf{x}_{i}$, i.e., whose dissatisfaction is equal to $d^\downarrow_k(\mathbf{x}_{i})$ (Lines 3 and 13)\footnote{We recall that $d^\downarrow_k(\mathbf{x})$ denotes the $k^{th}$ component of vector $\mathbf{d}(\mathbf{x})$ when sorted by nonincreasing values.}.
Obviously, for the women, the worst stable marriage compatible with $\mathtt{IN}_\mathbf{v}$ and $\mathtt{OUT}_\mathbf{v}$ is $\mathbf{x}_{\mathtt{IN}_\mathbf{v}}$. If no woman can be ranked in $k^{th}$ position w.r.t. stable marriage $\mathbf{x}_{\mathtt{IN}_\mathbf{v}}$, then no woman can be ranked in $k^{th}$ position for \emph{any} stable marriage compatible with $\mathtt{IN}_{\mathbf{v}}$. Indeed, eliminating additional rotations would only increase the dissatisfactions of men and decrease the dissatisfactions of women. Otherwise, the recurrence is initialized with $\mathbf{x}_0 = \mathbf{x}_{\mathtt{IN}_\mathbf{v}}$ and stable marriage $\mathbf{x}_{i+1}$ is obtained from $\mathbf{x}_{i}$ by eliminating rotation $\rho_{\mathtt{break}}(m,w)$ (and all required ancestors) for every woman $w$ in $W_i$ so that their dissatisfactions are strictly decreased (Line 10). The {\tt while} loop stops if one of the following conditions occurs: \begin{itemize} \item if $W_i = \emptyset$, it means that only men can be ranked in $k^{th}$ position in $\mathbf{x}_i$; as eliminating rotations will only improve the situation of women and deteriorate the situation of men, we can safely conclude that all triples in $T_W(\mathbf{v})$ have been enumerated; \item if at least one rotation $\rho_{\mathtt{break}}(m,w)$ does not exist or is forbidden (i.e., $(m,w) \in \mathbf{x}_{P\backslash \mathtt{OUT}_\mathbf{v}}$); indeed, in this case, we can conclude that it is not possible to find a triple in $T_W(\mathbf{v})$ with a dissatisfaction strictly less than the current value $d^\downarrow_k(\mathbf{x}_{i})$ (the boolean Flag is then set to True in Line 9). \end{itemize}
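The selection of $W_i$ in Lines 3 and 13 only needs the sorted dissatisfaction vector. A minimal helper sketch, assuming dissatisfactions are given as a dict from agent to the rank of its current partner (hypothetical representation, not the paper's API):

```python
# d_down_k: the k-th component of the dissatisfaction vector sorted by
# nonincreasing values; candidates: the agents of a given subset achieving it.

def d_down_k(dissat, k):
    """k-th largest dissatisfaction (1-indexed), i.e., d^down_k(x)."""
    return sorted(dissat.values(), reverse=True)[k - 1]

def candidates(dissat, agents, k):
    """Agents of the given subset whose dissatisfaction equals d_down_k."""
    target = d_down_k(dissat, k)
    return {a for a in agents if dissat[a] == target}
```

With `agents` restricted to the women of $\mathcal{A}_{\overline{\mathbf{v}}}$, `candidates` plays the role of $W_i$; the symmetric restriction to men yields $M_i$ in Algorithm NextMen.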
\paragraph{Complexity analysis and proof of termination.} In Algorithm~\ref{NextWomen}, at every step $i$ of the {\tt while} loop, all agents in $W_i$ share the same dissatisfaction level $d^\downarrow_k(\mathbf{x}_{i})$. Furthermore, for all $i \neq 0$, we have $d^\downarrow_k(\mathbf{x}_{i}) < d^\downarrow_k(\mathbf{x}_{i-1})$. As there are only $n$ dissatisfaction levels (corresponding to the $n$ possible ranks), the {\tt while} loop necessarily terminates in $O(n)$ iterations. The nested {\tt for} loop also terminates in $O(n)$ iterations because there can be at most $n$ women in $W_i$. All instructions inside the {\tt for} loop are in $O(1)$, except the instruction in Line 10 which is in $O(n^2)$ (the number of rotations is upper bounded by $n(n-1)/2$). Overall, Algorithm~\ref{NextWomen} is in $O(n^4)$. The analysis of Algorithm~\ref{NextMen} is similar. In Algorithm~\ref{NextTriples}, Lines 4 and 5 are in $O(n^2)$, hence the {\tt for} loop in Line 2 is in $O((k-1)n^2)$, therefore in $O(n^3)$ as $k \le 2n$. Lines 6--9 are in $O(n^4)$. Since we have shown that both calls in Line 10 are in $O(n^4)$, the overall complexity of Algorithm~\ref{NextTriples} is $O(n^4)$. Finally, the complexity of the three nested {\tt for} loops in Algorithm~\ref{main} is $O(\sum_{k=1}^K (2n^2)^{k-1} (n^4+2n^2))$ because:\\[1ex] -- the cardinality of set $T_{k-1}^\downarrow$ in Line 4 is upper bounded by $(2n^2)^{k-1}$ (there are at most $2n^2$ triples, and $k-1$ components per vector of triples in $T_{k-1}^\downarrow$);\\ -- Line 5 is in $O(n^4)$; \\ -- Lines 6--7 are in $O(2n^2)$.\\[1ex] Overall, the complexity of Algorithm~\ref{main} is thus $O(2^Kn^{2K+4})$.
\paragraph{Final remarks.} At the end of Algorithm \ref{main}, one obtains a set $T_K^\downarrow$ of vectors of triples. Within this set, one can choose a vector $\mathbf{v}^*$ which realizes: $$ \min_{\mathbf{v}\in T_K^\downarrow} \mathtt{GGI}_{{\bm \lambda}}(\mathbf{v}) = \min_{\mathbf{x}\in\mathcal{X}} \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}(\mathbf{x})) . $$ Given this vector $\mathbf{v}^*$, any stable marriage $\mathbf{x}^*$ such that $\mathbf{v}^* \in T_K^\downarrow(\mathbf{x^*})$ verifies $$ \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}(\mathbf{x}^*))= \min_{\mathbf{x}\in\mathcal{X}} \mathtt{GGI}_{{\bm \lambda}}(\mathbf{d}(\mathbf{x})) . $$
Given $\mathbf{v}^*$, it is easy to compute a stable marriage $\mathbf{x}^*$ such that $\mathbf{v}^* \in T_K^\downarrow(\mathbf{x}^*)$. In particular, $\mathbf{x}_{\mathtt{IN}_{\mathbf{v}^*}}$ (resp. $\mathbf{x}_{P \setminus \mathtt{OUT}_{\mathbf{v}^*}}$) is a best possible stable marriage for men (resp. women) where sets $\mathtt{IN}_{\mathbf{v}^*}$ and $\mathtt{OUT}_{\mathbf{v}^*}$ are generated in the same fashion as in Algorithm \ref{NextTriples} (Lines 1--9).
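The outer enumeration of Algorithm Enumerate can be sketched independently of the rotation machinery. The following is a structural sketch only, assuming a function `next_triples(v, k)` (hypothetical name) implementing Algorithm NextTriples is available:

```python
# Structural sketch of Algorithm Enumerate: build T_K by extending each
# (k-1)-vector v with every feasible next triple t, i.e., v o t.

def enumerate_vectors(K, next_triples):
    T = [()]                       # T_0 contains only the empty vector
    for k in range(1, K + 1):
        T = [v + (t,)              # v o t
             for v in T
             for t in next_triples(v, k)]
    return T                       # T_K: vectors of the K largest triples
```

The GGI minimization then reduces to scanning the returned set for the vector $\mathbf{v}^*$ minimizing $\mathtt{GGI}_{\bm\lambda}$, as described above.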
\begin{algorithm}[] \DontPrintSemicolon \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{the GGISM instance and the value of $K$} \Output{$T_K^\downarrow$}
$T_0^\downarrow \leftarrow \{()\}$\\
\For{$k = 1, \ldots, K$}{
$T_k^\downarrow\leftarrow \emptyset$\\
\For{$\mathbf{v} \in T_{k-1}^\downarrow$}{
$T\leftarrow \mathtt{NextTriples}(\mathbf{v},k)$\\
\For{$t \in T$}{
$T_k^\downarrow \leftarrow T_k^\downarrow \cup\{\mathbf{v}\circ t \}$
}
}
}
\Return $T_K^\downarrow$ \caption{$\mathtt{Enumerate}$} \label{main} \end{algorithm} \begin{algorithm}[] \DontPrintSemicolon \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{vector $\mathbf{v}$ of imposed triples, index $k$ of the next triple} \Output{set $T$ of possible next triples}
$\mathtt{IN}_{\mathbf{v}}\leftarrow \emptyset$; $\mathtt{OUT}_\mathbf{v}\leftarrow \emptyset$; $\mathcal{A}_{\overline{\mathbf{v}}}\leftarrow \mathcal{M} \cup \mathcal{W}$\\
\For{$i = 1,\ldots,k-1$}{
$(d,a,a') = v_i$, $\mathcal{A}_{\overline{\mathbf{v}}} \leftarrow \mathcal{A}_{\overline{\mathbf{v}}} \setminus \{ a \}$ \\
$\mathtt{IN}_{\mathbf{v}} \leftarrow \mathtt{IN}_{\mathbf{v}}\cup \{ \rho_{\mathtt{get}}(a,a')\}\cup \mathtt{Anc}(\rho_{\mathtt{get}}(a,a')) $\\
$\mathtt{OUT}_\mathbf{v} \leftarrow \mathtt{OUT}_\mathbf{v}\cup \{ \rho_{\mathtt{break}}(a,a')\}\cup \mathtt{Desc}(\rho_{\mathtt{break}}(a,a')) $}
\For{$\rho \in P\backslash \mathtt{OUT}_\mathbf{v} \text{ s.t. } d_{\max}^w(\rho) > d_{\min}(\mathbf{v})$}{
$\mathtt{IN}_{\mathbf{v}} \leftarrow \mathtt{IN}_{\mathbf{v}}\cup \{\rho\}\cup \mathtt{Anc}(\rho)$
}
\For{$\rho \in P \backslash \mathtt{IN}_{\mathbf{v}} \text{ s.t. } d_{\max}^m(\rho) > d_{\min}(\mathbf{v})$}{
$\mathtt{OUT}_\mathbf{v} \leftarrow \mathtt{OUT}_\mathbf{v}\cup \{\rho\}\cup \mathtt{Desc}(\rho)$
} \Return $\mathtt{NextWomen}(\mathtt{IN}_{\mathbf{v}},\mathtt{OUT}_\mathbf{v},k,\mathcal{A}_{\overline{\mathbf{v}}})$ $\cup$ $\mathtt{NextMen}(\mathtt{IN}_\mathbf{v},\mathtt{OUT}_\mathbf{v},k,\mathcal{A}_{\overline{\mathbf{v}}})$ \caption{$\mathtt{NextTriples}$} \label{NextTriples} \end{algorithm} \begin{algorithm}[] \DontPrintSemicolon \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{set $\mathtt{IN}_{\mathbf{v}}$ and $\mathtt{OUT}_\mathbf{v}$ of mandatory and forbidden rotations, index $k$ of the next triple, set $\mathcal{A}_{\overline{\mathbf{v}}}$} \Output{$T_W(\mathbf{v})$}
Compute $\mathbf{x}_{\mathtt{IN}_{\mathbf{v}}}$ and $\mathbf{x}_{P\backslash \mathtt{OUT}_\mathbf{v}}$\\ $ T \leftarrow \emptyset$; $R \leftarrow \mathtt{IN}_\mathbf{v}$; $i \leftarrow 0$; $\mathbf{x}_i \leftarrow \mathbf{x}_R$ \\
$W_i \leftarrow \{w\in\mathcal{A}_{\overline{\mathbf{v}}}\cap \mathcal{W}: d(w,\mathbf{x}_{i}) = d^\downarrow_k(\mathbf{x}_{i})\}$\\ Flag $\leftarrow$ False\\
\While{$W_i \neq \emptyset$}{
\For{$w\in W_i$}{
let $m$ be the match of $w$ in $\mathbf{x}_i$\\
$T\leftarrow T\cup \{(d(w,m),w,m)\}$\\
\lIf{$(m,w)\in\mathbf{x}_{P\backslash \mathtt{OUT}_\mathbf{v}}$}{Flag $\leftarrow$ True}
\lElse{$R\leftarrow R \cup \rho_{\mathtt{break}}(m,w)\cup \mathtt{Anc}(\rho_{\mathtt{break}}(m,w))$}
}
\lIf{Flag}{\Return $T$}
$i \leftarrow i+1$; $\mathbf{x}_i \leftarrow \mathbf{x}_R$\\
$W_i \leftarrow \{w\in\mathcal{A}_{\overline{\mathbf{v}}}\cap \mathcal{W}: d(w,\mathbf{x}_{i}) = d^\downarrow_k(\mathbf{x}_{i})\}$
} \Return $T$ \caption{NextWomen} \label{NextWomen} \end{algorithm} \begin{algorithm} \DontPrintSemicolon \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{set $\mathtt{IN}_{\mathbf{v}}$ and $\mathtt{OUT}_\mathbf{v}$ of mandatory and forbidden rotations, index $k$ of the next triple, set $\mathcal{A}_{\overline{\mathbf{v}}}$} \Output{$T_M(\mathbf{v})$}
Compute $\mathbf{x}_{\mathtt{IN}_{\mathbf{v}}}$ and $\mathbf{x}_{P\backslash \mathtt{OUT}_\mathbf{v}}$ \\
$ T \leftarrow \emptyset$; $R \leftarrow P \setminus \mathtt{OUT}_\mathbf{v}$; $i \leftarrow 0$; $\mathbf{x}_i \leftarrow \mathbf{x}_R$ \\
$M_i\leftarrow \{m\in\mathcal{A}_{\overline{\mathbf{v}}}\cap \mathcal{M}: d(m,\mathbf{x}_{i}) = d^\downarrow_k(\mathbf{x}_{i})\}$\\ Flag $\leftarrow$ False\\
\While{$M_i\neq \emptyset$}{
\For{$m\in M_i$}{
let $w$ be the match of $m$ in $\mathbf{x}_i$\\
$T \leftarrow T \cup \{(d(m,w),m,w)\}$\\
\lIf{$(m,w) \in \mathbf{x}_{\mathtt{IN}_{\mathbf{v}}}$}{Flag $\leftarrow$ True}
\lElse{$R\leftarrow R \setminus (\rho_{\mathtt{get}}(m,w)\cup \mathtt{Desc}(\rho_{\mathtt{get}}(m,w)))$}
}
\lIf{Flag}{\Return $T$}
$i \leftarrow i+1$; $\mathbf{x}_i \leftarrow \mathbf{x}_R$\\
$M_i\leftarrow \{m\in\mathcal{A}_{\overline{\mathbf{v}}}\cap \mathcal{M}: d(m,\mathbf{x}_{i}) = d^\downarrow_k(\mathbf{x}_{i})\}$
} \Return $T$ \caption{NextMen} \label{NextMen} \end{algorithm}
\section{Conclusion}
In this paper, we have shown that the minimization of a Generalized Gini Index (GGI) of the dissatisfactions of men and women in a stable marriage problem is an NP-hard problem. Then, we have proposed a polynomial time 2-approximation algorithm for the problem, based on a rounding of the optimal solution to the linear programming relaxation of the problem. Lastly, we have shown that minimizing a GGI of the dissatisfactions of men and women in a stable marriage is in the class XP with respect to the number of strictly positive weights in the GGI operator.
For future work, following \cite{aziz2017random}, it could be worth investigating the randomized version of the GGI stable marriage problem. By \emph{randomized}, we mean that we consider mixed stable marriages, and not only deterministic stable marriages. A mixed stable marriage is a probability distribution over stable marriages. This enlargement of the set of feasible solutions could make it possible to improve the optimal GGI value (where the GGI operator is applied to the vector of expected dissatisfactions of the agents). Note that the relaxed solution we compute in the first step of the 2-approximation algorithm proposed in Section~\ref{sec:approximation} can be converted into a mixed stable marriage by using a trick proposed by \cite{teo1998geometry}. It turns out that the obtained approach returns an optimal marriage for the randomized variant of the GGI stable marriage problem. A more thorough investigation of the randomized GGI stable marriage problem is underway.
\end{document}
\begin{document}
Supplementary material for \textbf{A Unified View of Multi-Label Performance Measures}.
\section{Appendix A: Proofs} In this section, we provide detailed proofs of the theoretical results presented in the manuscript. \subsection{Proof of Theorem 4} Because $F$ is label-wise effective, the order of its prediction values on any specific instance $x_i$ is correct. Therefore, the threshold error $\epsilon_i$ can occur in either of two ways: \begin{enumerate} \item $\epsilon_i$ positive labels are predicted as negative labels.\\
In this case, the true positive number $TP_i$ on this instance becomes $|Y_{i \cdot}^+|-\epsilon_i$, the false positive number $FP_i$ is zero, and the false negative number $FN_i$ becomes $\epsilon_i$.\\ The precision value and the recall value will be:
\begin{align*} Prec_i&=\frac{TP_i}{TP_i+FP_i}=1, \quad Rec_i=\frac{TP_i}{TP_i+FN_i}=\frac{|Y_{i \cdot}^+|-\epsilon_i}{|Y_{i \cdot}^+|} \end{align*} And the $F$-measure$_i$ is: \begin{equation*}
F\text{-measure}_i=\frac{2Prec_i \times Rec_i}{Prec_i+Rec_i}=\frac{2(|Y_{i \cdot}^+|-\epsilon_i)}{2|Y_{i \cdot}^+|-\epsilon_i} \end{equation*} \item $\epsilon_i$ negative labels are predicted as positive labels.\\
In this case, the true positive number $TP_i$ on this instance is still $|Y_{i \cdot}^+|$, the false positive number $FP_i=\epsilon_i$, and the false negative number $FN_i$ is zero.\\ The precision value and the recall value will be:
\begin{align*} Prec_i&=\frac{TP_i}{TP_i+FP_i}=\frac{|Y_{i \cdot}^+|}{|Y_{i \cdot}^+|+\epsilon_i},\quad Rec_i=\frac{TP_i}{TP_i+FN_i}=1 \end{align*} And the $F$-measure$_i$ is: \begin{equation*}
F\text{-measure}_i=\frac{2Prec_i \times Rec_i}{Prec_i+Rec_i}=\frac{2|Y_{i \cdot}^+|}{2|Y_{i \cdot}^+|+\epsilon_i} \end{equation*} \end{enumerate} The instance-F1 is lower bounded by the average of the minimum values of $F$-measure$_i$; thus: \begin{equation*}
\text{\textit{instance-F1}}(H) \geq \frac{1}{m} \sum_{i=1}^m \min \Big\{\frac{2(|Y_{i \cdot}^+|-\epsilon_i)}{2|Y_{i \cdot}^+|-\epsilon_i}, \frac{2|Y_{i \cdot}^+|}{2|Y_{i \cdot}^+|+\epsilon_i}\Big\} \end{equation*} Under the assumption that all the instances are drawn i.i.d., micro-F1 equals instance-F1. Theorem 4 is proved. \qedsymbol \subsection{Proof of Theorem 5} Because $F$ is instance-wise effective, the order of its prediction values on any specific label $\Y_{\cdot j}$ is correct. Therefore, the threshold error $\epsilon_j$ can occur in either of two ways: \begin{enumerate} \item $\epsilon_j$ positive instances are predicted as negative instances.\\
In this case, the true positive number $TP_j$ on this label becomes $|Y_{\cdot j}^+|-\epsilon_j$, the false positive number $FP_j$ is zero, and the false negative number $FN_j$ becomes $\epsilon_j$.\\ The precision value and the recall value will be:
\begin{align*} Prec_j&=\frac{TP_j}{TP_j+FP_j}=1, \quad Rec_j=\frac{TP_j}{TP_j+FN_j}=\frac{|Y_{\cdot j}^+|-\epsilon_j}{|Y_{\cdot j}^+|} \end{align*} And the $F$-measure$_j$ is: \begin{equation*}
F\text{-measure}_j=\frac{2Prec_j \times Rec_j}{Prec_j+Rec_j}=\frac{2(|Y_{\cdot j}^+|-\epsilon_j)}{2|Y_{\cdot j}^+|-\epsilon_j} \end{equation*} \item $\epsilon_j$ negative instances are predicted as positive instances.\\
In this case, the true positive number $TP_j$ on this label is still $|Y_{\cdot j}^+|$, the false positive number $FP_j=\epsilon_j$, and the false negative number $FN_j$ is zero.\\ The precision value and the recall value will be:
\begin{align*} Prec_j&=\frac{TP_j}{TP_j+FP_j}=\frac{|Y_{\cdot j}^+|}{|Y_{ \cdot j}^+|+\epsilon_j},\quad Rec_j=\frac{TP_j}{TP_j+FN_j}=1 \end{align*} And the $F$-measure$_j$ is: \begin{equation*}
F\text{-measure}_j=\frac{2Prec_j \times Rec_j}{Prec_j+Rec_j}=\frac{2|Y_{\cdot j}^+|}{2|Y_{\cdot j}^+|+\epsilon_j} \end{equation*} \end{enumerate} The macro-F1 is lower bounded by the average of the minimum values of $F$-measure$_j$; thus: \begin{equation*}
\text{\textit{macro-F1}}(H) \geq \frac{1}{l} \sum_{j=1}^l \min \Big\{\frac{2(|Y_{\cdot j}^+|-\epsilon_j)}{2|Y_{\cdot j}^+|-\epsilon_j}, \frac{2|Y_{\cdot j}^+|}{2|Y_{\cdot j}^+|+\epsilon_j}\Big\} \end{equation*} Theorem 5 is proved. \qedsymbol \subsection{Proof of LIMO Algorithm} \begin{mythm} In each iteration (step 5 to step 15) of Algorithm 1, the updated direction of the model is an unbiased estimation of the gradient of this objective function: \begin{equation} \label{obj}
\begin{split}
\argmin_{\W, \xi} &\ \sum_{i=1}^l ||\w_i||^2+\lambda_1 \sum_{i=1}^m \sum_{(u,v)}\xi_i^{uv}+\lambda_2 \sum_{j=1}^l \sum_{(a,b)} \xi_{ab}^j \\
\text{s.t.}\ & \w_u^\top \x_i -\w_v^\top \x_i >1-\xi_i^{uv}, \ \ \xi_i^{uv}\geq 0, \ \text{for }i=1,\cdots,m \text{ and } (u,v)\in Y_{i \cdot}^+ \times Y_{i \cdot}^- \ , \\
&\w_j^\top \x_a -\w_j^\top \x_b>1-\xi^j_{ab}, \ \ \xi^j_{ab}\geq 0, \ \text{for } j=1,\cdots,l \text{ and } (a,b)\in Y_{\cdot j}^+ \times Y_{\cdot j}^- \ . \end{split} \end{equation} \end{mythm} \begin{spacelessprf} Suppose that the function in Equation (\ref{obj}) is $f(\W)$, because $\W$ can be decomposed into $[\w_1,\w_2,\cdots,\w_l]$, we consider the partial gradient of a particular $\w_k$: \begin{equation} \begin{split}
\frac{\partial f(\W)}{\partial \w_k}=&2\w_k +\lambda_1 \phi_1 +\lambda_2 \phi_2 = 2\w_k\\
&+ \lambda_1 \sum_{i=1}^m \Big\{
\llbracket k\in Y_{i \cdot}^- \rrbracket \x_i \sum_{j\in Y_{i \cdot}^+ }\llbracket 1-(\w_j - \w_k)^\top \x_i >0 \rrbracket \\
&\qquad \quad \ \ -\llbracket k\in Y_{i \cdot}^+ \rrbracket \x_i
\sum_{j\in Y_{i \cdot}^- }\llbracket 1-(\w_k - \w_j)^\top \x_i >0 \rrbracket \Big\} \\
&+ \lambda_2 \sum_{a\in Y_{\cdot k}^+} \sum_{ b\in Y_{\cdot k}^-} (\x_b-\x_a) \llbracket 1-\w_k^\top(\x_a-\x_b) >0\rrbracket
\end{split} \end{equation}
The second term $\lambda_1 \phi_1$ is the gradient of label-wise margin on $\w_k$, and the third term $\lambda_2 \phi_2$ is the gradient of the instance-wise margin on $\w_k$.
Assume that $(\x_i, y_{ik}, y_{ij})$ is picked in steps 5 and 6; the direction is computed in step 8 or 9 according to: \begin{equation*} \begin{split}
g^{label}(\x_i,y_{ik}, y_{ij})
=& \llbracket k\in Y_{i \cdot}^- \rrbracket \lambda_1 \x_i \llbracket 1-(\w_k-\w_j)^\top \x_i >0\rrbracket \\
&-\llbracket k\in Y_{i \cdot}^+ \rrbracket \lambda_1 \x_i \llbracket 1-(\w_j-\w_k)^\top \x_i >0\rrbracket + \w_k \end{split} \end{equation*} Taking the expectation: \begin{equation*} \begin{split}
E_{\x_{i}}\big[ E_{y_{ij}}[g^{label}(\x_i,y_{ik}, y_{ij})]\big]
&=\frac{1}{C} E_{\x_{i}}\bigg[ {\lambda_1 \x_i}\sum_{j\in Y_{i \cdot}^+ } \llbracket 1-(\w_k-\w_j)^\top \x_i >0\rrbracket\\
&\qquad \quad - \lambda_1 \x_i \sum_{j\in Y_{i \cdot}^- }\llbracket 1-(\w_j-\w_k)^\top \x_i >0\rrbracket + \frac{1}{D} \w_k\bigg] \\
&=\frac{1}{C^\prime} \lambda_1 \phi_1 + \frac{1}{D^\prime} \w_k
\end{split} \end{equation*} where $C^\prime$ and $D^\prime$ are constants. Similarly, we can prove the expectation of the direction in steps 11 to 15: \begin{equation*}
E_{\x_a,\x_b}[g^{inst}(y_k,\x_a,\x_b)]=\frac{1}{C^{\prime \prime}}\lambda_2 \phi_2 + \frac{1}{D^{\prime\prime}} \w_k \end{equation*}
By linearity of expectation, after absorbing the constants into $\lambda_1$ and $\lambda_2$, the gradient $\frac{\partial f(\W)}{\partial \w_k}$ can be estimated without bias. Namely, the updated direction of the algorithm is an unbiased estimate of the gradient of Equation (\ref{obj}). \end{spacelessprf}
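The two closed-form F-measure expressions used in the proofs of Theorems 4 and 5 can be sanity-checked numerically by recomputing them from raw TP/FP/FN counts. This is an illustrative check only, with hypothetical integer counts:

```python
# Recompute the per-instance (or per-label) F-measure from TP/FP/FN, and
# compare with the closed forms derived in the two error cases above.

def f_measure(tp, fp, fn):
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def f_case1(pos, eps):
    """eps positives predicted negative: TP = pos - eps, FP = 0, FN = eps,
    giving 2(pos - eps) / (2 pos - eps)."""
    return f_measure(pos - eps, 0, eps)

def f_case2(pos, eps):
    """eps negatives predicted positive: TP = pos, FP = eps, FN = 0,
    giving 2 pos / (2 pos + eps)."""
    return f_measure(pos, eps, 0)
```

Both cases reduce to 1 when the threshold error is zero, consistent with the lower bounds stated in the theorems.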
\section{Appendix B: Detailed Experimental Results} In this section, detailed experimental results are included. The results of synthetic data are in Section \ref{ss:syn} and the results of benchmark data are in Section \ref{ss:ben}. \subsection{Detailed Experimental Results of Synthetic Data} \label{ss:syn} In this section, the detailed experimental results of synthetic data are given. \begin{table}[!htb] \centering \caption{Original absolute value and rescaled value of experiments on ranking measures. In the left columns are absolute values, and in the right columns are rescaled relative values.}
\begin{tabular}{c|c c| c c |c c}
\hline measure & \multicolumn{2}{c|}{LIMO-inst} & \multicolumn{2}{c|}{LIMO} & \multicolumn{2}{c}{LIMO}\\ \hline ranking loss & 0.027 & 0.00 & 0.015 & 0.99 & 0.015 & 1.00\\ avg. precision & 0.992 & 0.00 & 0.992 & 0.58 & 0.992 & 1.00\\ one-error & 0.000 & 1.00 & 0.001 & 0.28 & 0.001 & 0.00\\ coverage & 1.576 & 0.00 & 1.557 & 0.97 & 1.556 & 1.00\\ macro-AUC & 0.842 & 1.00 & 0.828 & 0.00 & 0.842 & 0.98\\ instance-AUC & 0.973 & 0.00 & 0.985 & 0.99 & 0.985 & 1.00\\ micro-AUC & 0.861 & 0.14 & 0.854 & 0.00 & 0.903 & 1.00\\ \hline \end{tabular} \end{table}
\begin{table}[!htb] \centering \setlength{\tabcolsep}{3pt} \caption{Original absolute value and rescaled value of experiments on classification measures. In the left columns are absolute values, and in the right columns are rescaled relative values.}
\begin{tabular}{c|c c| c c |c c |c c |c c |c c}
\hline measure & \multicolumn{2}{c|}{LIMO-inst-t} & \multicolumn{2}{c|}{LIMO-inst-t(x)} &
\multicolumn{2}{c|}{LIMO-label-t} & \multicolumn{2}{c|}{LIMO-label-t(x)} &
\multicolumn{2}{c|}{LIMO-t} & \multicolumn{2}{c}{LIMO-t(x)} \\ \hline Hamming loss & 0.172 & 0.28 & 0.160 & 0.48 & 0.188 & 0.00 & 0.131 & 1.00 & 0.163 & 0.43 & 0.134 & 0.94\\ micro-F1 & 0.837 & 0.00 & 0.860 & 0.43 & 0.840 & 0.06 & 0.890 & 1.00 & 0.858 & 0.40 & 0.885 & 0.92\\ macro-F1 & 0.869 & 0.87 & 0.857 & 0.25 & 0.861 & 0.46 & 0.859 & 0.35 & 0.872 & 1.00 & 0.852 & 0.00\\ instance-F1 & 0.804 & 0.00 & 0.883 & 0.79 & 0.835 & 0.32 & 0.904 & 1.00 & 0.858 & 0.54 & 0.900 & 0.96\\ \hline \end{tabular} \end{table}
\subsection{Detailed Experimental Results of Benchmark Data} \label{ss:ben} The ranking results in Figure 4 of the paper are computed from Table \ref{table:all-detail}. Because this table is large, it is rotated and shown on the next page.
\begin{sidewaystable}[!htb] \centering \setlength{\tabcolsep}{4pt} \scriptsize \caption{Experimental results on eleven multi-label performance measures. For each performance measure, ``$\downarrow$'' indicates ``the smaller the better'' and ``$\uparrow$'' indicates ``the larger the better''. The results are shown in mean$\pm$std(rank). The smaller the rank, the better the performance. }\label{table:all-detail}
\begin{tabular}{c|c|c c c c c c c c c c c} \hline Dataset & Algorithm & hamming loss$\downarrow$& ranking loss$\downarrow$& avg. precision$\uparrow$ & one-error$\downarrow$& coverage$\downarrow$& instance-F1$\uparrow$ & instance-AUC$\uparrow$ & macro-F1$\uparrow$ & macro-AUC$\uparrow$ & micro-F1$\uparrow$ & micro-AUC$\uparrow$\\ \hline \multirow{6}{*}{CAL500} &BR &.145$\pm$.003(5) &.216$\pm$.005(4) &.470$\pm$.008(4) &.212$\pm$.025(5) &143.025$\pm$2.319(4) &.354$\pm$.009(5) &.784$\pm$.005(4) &.097$\pm$.006(5) &.544$\pm$.012(2) &.357$\pm$.011(5) &.779$\pm$.006(4) \\ &ML-kNN &.139$\pm$.003(3) &.184$\pm$.005(3) &.491$\pm$.007(3) &.106$\pm$.023(3) &129.789$\pm$2.426(2) &.321$\pm$.010(6) &.816$\pm$.005(3) &.053$\pm$.002(6) &.523$\pm$.009(3) &.318$\pm$.010(6) &.813$\pm$.004(3) \\ &GFM &.200$\pm$.002(6) &.522$\pm$.007(5) &.337$\pm$.004(5) &.000$\pm$.000(1) &166.481$\pm$0.906(6) &.454$\pm$.006(3) &.662$\pm$.006(5) &.183$\pm$.005(3) &.518$\pm$.013(5) &.457$\pm$.006(3) &.661$\pm$.006(5) \\ &LIMO-inst &.143$\pm$.004(4) &.545$\pm$.015(6) &.147$\pm$.004(6) &.971$\pm$.022(6) &162.652$\pm$1.539(5) &.386$\pm$.010(4) &.455$\pm$.015(6) &.302$\pm$.008(1) &.566$\pm$.011(1) &.389$\pm$.010(4) &.458$\pm$.014(6) \\ &LIMO-label &.138$\pm$.002(2) &.180$\pm$.004(2) &.499$\pm$.008(2) &.105$\pm$.023(2) &129.993$\pm$2.491(3) &.473$\pm$.004(2) &.820$\pm$.004(2) &.126$\pm$.003(4) &.510$\pm$.011(6) &.477$\pm$.004(2) &.815$\pm$.004(2) \\ &LIMO &.137$\pm$.025(1) &.178$\pm$.004(1) &.501$\pm$.008(1) &.122$\pm$.035(4) &129.323$\pm$2.672(1) &.475$\pm$.006(1) &.822$\pm$.004(1) &.288$\pm$.006(2) &.523$\pm$.011(4) &.479$\pm$.006(1) &.816$\pm$.004(1) \\ \hline \multirow{6}{*}{medical} &BR &.011$\pm$.001(1) &.073$\pm$.041(5) &.416$\pm$.100(6) &.804$\pm$.029(6) &3.697$\pm$1.752(5) &.766$\pm$.022(1) &.927$\pm$.040(5) &.384$\pm$.040(3) &.877$\pm$.038(3) &.792$\pm$.020(1) &.910$\pm$.039(5) \\ &ML-kNN &.016$\pm$.001(5) &.048$\pm$.008(4) &.788$\pm$.017(4) &.266$\pm$.025(5) &3.034$\pm$0.411(4) &.564$\pm$.033(5) 
&.953$\pm$.007(4) &.190$\pm$.015(6) &.797$\pm$.029(5) &.654$\pm$.028(4) &.949$\pm$.008(4) \\ &GFM &.025$\pm$.002(6) &.287$\pm$.026(6) &.692$\pm$.025(5) &.217$\pm$.025(3) &6.581$\pm$0.714(6) &.636$\pm$.025(4) &.882$\pm$.013(6) &.216$\pm$.018(4) &.650$\pm$.027(6) &.605$\pm$.027(5) &.875$\pm$.014(6) \\ &LIMO-inst &.015$\pm$.001(4) &.017$\pm$.005(1) &.881$\pm$.018(2) &.170$\pm$.028(2) &1.248$\pm$0.279(1) &.444$\pm$.090(6) &.983$\pm$.005(1) &.448$\pm$.024(2) &.901$\pm$.032(1) &.439$\pm$.087(6) &.979$\pm$.005(1) \\ &LIMO-label &.014$\pm$.001(3) &.032$\pm$.006(3) &.829$\pm$.018(3) &.217$\pm$.026(4) &2.237$\pm$0.378(3) &.641$\pm$.030(3) &.968$\pm$.006(3) &.207$\pm$.012(5) &.859$\pm$.035(4) &.702$\pm$.020(3) &.960$\pm$.007(3) \\ &LIMO &.013$\pm$.001(2) &.019$\pm$.006(2) &.893$\pm$.017(1) &.147$\pm$.027(1) &1.423$\pm$0.350(2) &.706$\pm$.019(2) &.981$\pm$.006(2) &.464$\pm$.024(1) &.896$\pm$.029(2) &.757$\pm$.012(2) &.977$\pm$.006(2) \\ \hline \multirow{6}{*}{enron} &BR &.070$\pm$.003(6) &.136$\pm$.010(4) &.539$\pm$.086(4) &.533$\pm$.263(6) &16.834$\pm$0.669(4) &.482$\pm$.009(3) &.866$\pm$.010(4) &.187$\pm$.015(3) &.631$\pm$.025(5) &.473$\pm$.008(3) &.814$\pm$.008(4) \\ &ML-kNN &.053$\pm$.001(3) &.096$\pm$.004(3) &.624$\pm$.014(3) &.310$\pm$.022(4) &13.615$\pm$0.423(3) &.409$\pm$.022(5) &.904$\pm$.004(3) &.083$\pm$.008(6) &.633$\pm$.022(4) &.461$\pm$.017(4) &.898$\pm$.003(1) \\ &GFM &.069$\pm$.003(5) &.554$\pm$.027(6) &.399$\pm$.021(6) &.246$\pm$.041(2) &31.645$\pm$0.764(6) &.428$\pm$.022(4) &.669$\pm$.014(6) &.118$\pm$.012(5) &.553$\pm$.015(6) &.437$\pm$.016(5) &.654$\pm$.011(6) \\ &LIMO-inst &.054$\pm$.001(4) &.205$\pm$.008(5) &.520$\pm$.008(5) &.344$\pm$.013(5) &23.679$\pm$0.804(5) &.404$\pm$.053(6) &.796$\pm$.008(5) &.310$\pm$.018(1) &.717$\pm$.015(1) &.414$\pm$.056(6) &.810$\pm$.006(5) \\ &LIMO-label &.049$\pm$.001(2) &.085$\pm$.003(2) &.672$\pm$.010(2) &.233$\pm$.017(1) &12.324$\pm$0.444(2) &.565$\pm$.011(1) &.916$\pm$.003(2) &.137$\pm$.005(4) &.644$\pm$.019(3) 
&.591$\pm$.008(2) &.897$\pm$.003(2) \\ &LIMO &.049$\pm$.001(1) &.083$\pm$.003(1) &.672$\pm$.010(1) &.253$\pm$.022(3) &11.880$\pm$0.255(1) &.562$\pm$.010(2) &.918$\pm$.003(1) &.278$\pm$.017(2) &.663$\pm$.021(2) &.596$\pm$.006(1) &.896$\pm$.004(3) \\ \hline \multirow{6}{*}{corel5k} &BR &.014$\pm$.000(5) &.280$\pm$.010(5) &.077$\pm$.013(6) &.962$\pm$.004(6) &207.643$\pm$3.477(5) &.139$\pm$.005(4) &.720$\pm$.010(5) &.044$\pm$.003(4) &.605$\pm$.004(4) &.158$\pm$.006(3) &.706$\pm$.009(5) \\ &ML-kNN &.009$\pm$.000(1) &.135$\pm$.002(3) &.245$\pm$.004(2) &.736$\pm$.009(3) &114.727$\pm$1.658(3) &.017$\pm$.002(6) &.865$\pm$.002(3) &.009$\pm$.001(6) &.540$\pm$.007(5) &.027$\pm$.003(6) &.866$\pm$.002(3) \\ &GFM &.021$\pm$.001(6) &.803$\pm$.012(6) &.100$\pm$.005(5) &.516$\pm$.030(1) &320.449$\pm$2.173(6) &.150$\pm$.008(3) &.416$\pm$.012(6) &.029$\pm$.002(5) &.516$\pm$.006(6) &.146$\pm$.011(4) &.411$\pm$.013(6) \\ &LIMO-inst &.010$\pm$.000(2) &.275$\pm$.004(4) &.105$\pm$.004(4) &.897$\pm$.006(5) &172.120$\pm$2.183(4) &.058$\pm$.003(5) &.725$\pm$.004(4) &.118$\pm$.003(1) &.706$\pm$.006(1) &.057$\pm$.002(5) &.725$\pm$.004(4) \\ &LIMO-label &.011$\pm$.000(4) &.112$\pm$.003(1) &.289$\pm$.006(1) &.710$\pm$.010(2) &99.629$\pm$2.136(2) &.214$\pm$.004(1) &.888$\pm$.003(1) &.050$\pm$.002(3) &.658$\pm$.006(3) &.236$\pm$.004(1) &.882$\pm$.002(1) \\ &LIMO &.011$\pm$.000(3) &.116$\pm$.003(2) &.227$\pm$.005(3) &.791$\pm$.008(4) &94.253$\pm$2.109(1) &.152$\pm$.005(2) &.884$\pm$.003(2) &.117$\pm$.004(2) &.692$\pm$.007(2) &.177$\pm$.005(2) &.881$\pm$.003(2) \\ \hline \multirow{6}{*}{bibtex} &BR &.016$\pm$.001(5) &.114$\pm$.007(3) &.528$\pm$.012(2) &.428$\pm$.014(2) &32.758$\pm$1.929(4) &.399$\pm$.009(1) &.886$\pm$.007(3) &.318$\pm$.016(3) &.866$\pm$.007(4) &.419$\pm$.012(3) &.869$\pm$.007(4) \\ &ML-kNN &.014$\pm$.000(2) &.218$\pm$.004(5) &.339$\pm$.006(5) &.599$\pm$.007(6) &56.259$\pm$1.260(5) &.160$\pm$.007(6) &.782$\pm$.004(5) &.066$\pm$.006(6) &.661$\pm$.007(5) &.211$\pm$.007(5) 
&.776$\pm$.005(5) \\ &GFM &.037$\pm$.000(6) &.707$\pm$.003(6) &.210$\pm$.004(6) &.492$\pm$.010(5) &85.281$\pm$0.701(6) &.223$\pm$.008(5) &.618$\pm$.003(6) &.130$\pm$.007(5) &.575$\pm$.003(6) &.185$\pm$.007(6) &.626$\pm$.003(6) \\ &LIMO-inst &.014$\pm$.000(1) &.120$\pm$.003(4) &.494$\pm$.008(4) &.469$\pm$.016(4) &32.403$\pm$0.640(3) &.392$\pm$.005(2) &.880$\pm$.003(4) &.323$\pm$.005(2) &.921$\pm$.002(2) &.438$\pm$.006(1) &.877$\pm$.003(3) \\ &LIMO-label &.014$\pm$.000(4) &.071$\pm$.002(2) &.527$\pm$.008(3) &.433$\pm$.013(3) &20.425$\pm$0.422(2) &.386$\pm$.007(4) &.929$\pm$.002(2) &.232$\pm$.004(4) &.911$\pm$.002(3) &.405$\pm$.007(4) &.917$\pm$.002(2) \\ &LIMO &.014$\pm$.000(3) &.058$\pm$.001(1) &.570$\pm$.004(1) &.390$\pm$.008(1) &17.447$\pm$0.357(1) &.390$\pm$.007(3) &.942$\pm$.001(1) &.326$\pm$.006(1) &.924$\pm$.002(1) &.435$\pm$.004(2) &.938$\pm$.002(1) \\ \hline
\end{tabular} \end{sidewaystable}
\end{document}
\begin{document}
\title{\bf Tight running times for minimum $\ell_q$-norm load balancing: beyond exponential dependencies on $1/\epsilon$ }
\author{ Lin Chen$^1$\footnote{Research of Lin Chen was partly supported by NSF Grant 1756014.}\ \ \ Liangde Tao$^2$\ \ Jos\'e Verschae$^3$ \\ \\{\small $^1$Department of Computer Science, Texas Tech University, US} \\{\small chenlin198662@gmail.com} \\{\small $^2$Department of Computer Science, Zhejiang University, China} \\{\small vast.tld@gmail.com} \\{\small $^3$Institute for Mathematical and Computational Engineering},\\ {\small Faculty of Mathematics and School of Engineering, Pontificia Universidad Católica de Chile, Chile} \\{\small jverschae@uc.cl} }
\date{} \maketitle
\begin{abstract} We consider a classical scheduling problem on $m$ identical machines. For an arbitrary constant $q>1$, the aim is to assign jobs to machines such that $\sum_{i=1}^m C_i^q$ is minimized, where $C_i$ is the total processing time of jobs assigned to machine $i$. It is well known that this problem is strongly NP-hard.
Under mild assumptions, the running time of a $(1+\epsilon)$-approximation algorithm for a strongly NP-hard problem cannot be polynomial in $1/\epsilon$, unless $\text{P}=\text{NP}$. For most problems in the literature, this translates into algorithms with running time at least as large as $2^{\Omega(1/\epsilon)}+n^{O(1)}$. For the natural scheduling problem above, we establish the existence of an algorithm that violates this threshold. More precisely, we design a PTAS that runs in $2^{\tilde{O}(\sqrt{1/\epsilon})}+n^{O(1)}$ time. This result is in sharp contrast to the closely related minimum makespan variant, where an exponential lower bound is known under the exponential time hypothesis (ETH). We complement our result with an essentially matching lower bound on the running time, showing that our algorithm is best possible under ETH. The lower bound proof exploits new number-theoretical constructions for variants of progression-free sets, which might be of independent interest.
Furthermore, we provide a fine-grained characterization on the running time of a PTAS for this problem depending on the relation between $\epsilon$ and the number of machines $m$. More precisely, our lower bound only holds when $m=\Theta(\sqrt{1/\epsilon})$. Better algorithms, that go beyond the lower bound, exist for other values of $m$. In particular, there even exists an algorithm with running time polynomial in $1/\epsilon$ if we restrict ourselves to instances with $m=\Omega(1/\epsilon\log^21/\epsilon)$.
\noindent{\bf Keywords:} {Polynomial Time Approximation Scheme, Scheduling, Exponential Time Hypothesis.} \end{abstract} \thispagestyle{empty}
\setcounter{page}{1}
\section{Introduction}
We consider a classical scheduling problem on identical parallel machines. Suppose we are given $m$ identical machines and $n$ jobs, each having a processing time $p_j$. A feasible solution corresponds to an assignment of jobs to machines. For a given assignment, let $C_i$ be the total processing time of jobs assigned to machine $i$, that is, $C_i=\sum_{j\rightarrow i}p_j$. Our objective is to minimize $\sum_{i=1}^m C_i^q$, where $q>1$ is an arbitrary constant. For either exact algorithms or approximation schemes, minimizing $\sum_{i=1}^m C_i^q$ is equivalent to minimizing the $\ell_q$-norm of machine loads, i.e., $(\sum_{i=1}^m C_i^q)^{1/q}$. In the standard 3-field scheduling notation by Graham et al.~\cite{graham1979optimization}, this problem is denoted as $P||\sum_iC_i^q$.
Our problem is well-known to be strongly NP-hard by a simple reduction from 3-partition. On the other hand, a classic result by Alon et al.~\cite{alon1997approximation} shows that it admits a polynomial time approximation scheme (PTAS) with running time $f(1/\epsilon)+n^{O(1)}$, where $f(1/\epsilon)$ is doubly exponential in $1/\epsilon$.
Very recently, improved running times have been obtained for $P||\sum_iC_i^q$ and other closely related load balancing problems. In particular, for a variety of objective functions, including both $\sum_iC_i^q$ and the makespan objective $C_{\max}=\max_i C_i$, Jansen et al.~\cite{jansen2020closing} show that the problem admits a PTAS with a running time of $2^{\tilde{O}(1/\epsilon)}+\tilde{O}(n)$. On the negative side, for the makespan objective, Chen et al.~\cite{chen2014optimality} show that such a running time is essentially best possible under the exponential time hypothesis (ETH). However, the lower bound does not hold for other objectives, including $P||\sum_iC_i^q$, leaving open the possibility for improved running times. In this paper, we study this question and explore the surprisingly rich complexity landscape of $P||\sum_iC_i^q$ in the context of approximation schemes.
\noindent\textbf{Contribution Overview.}
We study the complexity landscape of approximation schemes for $P||\sum_iC_i^q$.
Consider some strongly NP-hard optimization problem whose optimal value $\text{OPT}(I)$ is integral and upper bounded by $\text{poly}(|I|_u)$ for any instance $I$, where $|I|_u$ is the input size written in unary. This implies that the problem does not admit a fully polynomial-time approximation scheme (FPTAS) unless P=NP~\cite{garey2002computers}. In the majority of cases, for such problems the literature presents PTASs with running time at least as large as $2^{\Omega(1/\epsilon)}+n^{O(1)}$, that is, the dependency on $1/\epsilon$ is exponential. We show that $P||\sum_{i}C_i^q$ does not fall into this case, and a running time subexponential in $1/\epsilon$ is achievable. More precisely, we give a PTAS with a running time of $2^{\tilde{O}(\sqrt{1/\epsilon})}+n^{O(1)}$. On the other hand, we show that this running time is essentially tight, by providing an almost matching lower bound under ETH. That is, we show that ETH rules out a PTAS of running time $2^{O(({1/\epsilon})^{1/2-\delta})}+n^{O(1)}$ for any $\delta>0$. We are not aware of any other PTAS for a strongly NP-hard problem with such a tight subexponential behavior on $1/\varepsilon$.
Besides the results above, we give a fine-grained study on the upper and lower bounds of the running time of a PTAS for $P||\sum_{i}C_i^q$. First of all, we notice that our lower bound only holds for a small range of values of $m$, depending on $\epsilon$. Moreover, for some other values, we can circumvent the lower bound and obtain improved running times. More precisely, the lower bound only holds when $m=\Theta(\sqrt{1/\epsilon})$.
Quite surprisingly, when $m$ is larger, namely $m=\Omega(1/\epsilon\log^{2}(1/\epsilon))$, an algorithm that runs polynomially in $1/\epsilon$ exists, despite the problem being strongly NP-hard in general and despite the lower bound above. If $m = O(\sqrt{1/\epsilon})$, we can use a PTAS with running time $(1/\epsilon)^{O(m)}$, which also breaks the lower bound for $m=o(\sqrt{1/\epsilon})$. See Figure~\ref{fig:alg_complex} for a depiction of our results. It remains an open problem to obtain tight running times when $m=\Theta({(1/\epsilon)^\theta})$ for $\theta\in (1/2,1]$.
\begin{figure}
\caption{Complexity landscape of $P||\sum_iC_i^q$. The time axis specifies the dependency of the running time with respect to $1/\epsilon$. A term of $n^{O(1)}$ needs to be added to the running time of each algorithm.}
\label{fig:alg_complex}
\end{figure}
\noindent \textbf{Technical Contribution.}
Our main technical contribution lies in the lower bound proof. For this, we give a fine-grained reduction from a variant of Max3SAT to $P||\sum_i C_i^q$. To do so, we convert a set of clauses to a set of jobs. We enforce that two jobs which represent variables in the same clause are scheduled together in some carefully constructed gap (i.e., slot) of a given size. For such a construction, it is imperative to use pairs of numbers with unique sums, to guarantee that only these two jobs fit this gap. Hence, our construction is tightly related to {\it Sidon sets} and {\it Salem-Spencer sets} (also called progression-free sets), both of which have been studied extensively in number theory (see, e.g., \cite{erdos1941problem,o2004complete,moser1953non,gasarch2008finding}). A Sidon set $S=\{s_1,s_2,\ldots,s_n\}$ is a subset of natural numbers where all pairwise sums $s_i+s_j$, for $i\le j$, are distinct. That is, $s_{i}+s_j=s_{i'}+s_{j'}$ implies $\{i,j\}=\{i',j'\}$. A weaker notion is that of a Salem-Spencer set, that is, a set $S=\{s_1,s_2,\cdots,s_n\}$ with no 3-term arithmetic progression, i.e., no triplet $(i,j,k)\in\ensuremath{\mathbb{Z}}_n^3$ (where $\ensuremath{\mathbb{Z}}_n:=\{1,2,\cdots,n\}$) of pairwise different indices satisfies $s_i-s_j = s_k-s_i$. In other words, if $s_j+s_k=2s_i$ then $i=j=k$. Our lower bound could be proved by adapting known techniques if a Sidon set $S\subseteq \ensuremath{\mathbb{Z}}_N$ (where $\ensuremath{\mathbb{Z}}_N:=\{1,2,\cdots,N\}$) with cardinality $n$ existed for $N=n^{1+o(1)}$. Unfortunately, this is impossible, as Erd\H{o}s and Tur\'an~\cite{erdos1941problem} showed that a Sidon set with $n$ elements requires $N=\Omega(n^2)$. We can circumvent this negative result by requiring only some pairs of numbers to have a unique sum, where these pairs correspond to the clauses in the given Max3SAT instance.
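These two notions can be made concrete with a small brute-force check. The following Python sketch is purely illustrative and plays no role in the actual reduction; it merely tests the Sidon and progression-free properties of a given set of distinct integers:

```python
def is_sidon(s):
    # All pairwise sums s_i + s_j with i <= j must be distinct.
    sums = [a + b for i, a in enumerate(s) for b in s[i:]]
    return len(sums) == len(set(sums))

def is_salem_spencer(s):
    # No 3-term arithmetic progression: s_j + s_k = 2*s_i forces i = j = k.
    # For distinct elements it suffices to check that no pair has its
    # midpoint inside the set.
    vals = set(s)
    for j, b in enumerate(s):
        for c in s[j + 1:]:
            if (b + c) % 2 == 0 and (b + c) // 2 in vals:
                return False
    return True

# Every Sidon set is also progression-free, but not conversely.
assert is_sidon([1, 2, 5, 11]) and is_salem_spencer([1, 2, 5, 11])
assert not is_sidon([1, 2, 3, 4])        # 1 + 4 = 2 + 3
assert not is_salem_spencer([1, 2, 3])   # 1, 2, 3 is a progression
```

The check also illustrates why Sidon is the stronger notion: a violation of the Salem-Spencer property, $s_j+s_k=2s_i$ with distinct indices, is in particular a collision of two pairwise sums.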
Towards this, we first transform the given Max3SAT instance, with variables $z_{j}$ for $j\in \mathbb{Z}_n$, into a special structure such that all clauses can be divided into two disjoint subsets $C_1$ and $C_2$: $C_1$ consists of clauses $cl_2,cl_5,\cdots,cl_{n-1}$ such that $cl_\ell=(w_{\ell-1}\vee w_{\ell}\vee w_{\ell+1})$, where $w_{j}\in\{z_j,\neg z_{j}\}$ for all $j$; and $C_2$ consists of clauses $cl_1',cl_2',\cdots,cl_n'$ such that $cl_\ell'=(z_\ell\oplus \neg z_{\tau(\ell)})$, where $\tau$ is a permutation of $\ensuremath{\mathbb{Z}}_n$ and $\oplus$ is the XOR operation (see Section~\ref{subsec:maxsat} for details). For $C_1$, we construct a set of numbers $\{\sigma(1),\sigma(2),\cdots,\sigma(n)\}$ such that every adjacent sum $\sigma(i)+\sigma(i+1)$ is unique, and this will be achieved through extending a known construction of Salem-Spencer sets (Lemma~\ref{lemma:uniquesum-2}). For $C_2$, we extend the construction to additionally require that the $\sigma(i)$'s we construct admit a \textit{linked unique sum}. That is, there exists a subset of numbers $E_i=\{e_{i,1},e_{i,2},\ldots,e_{i,\omega}\}\subseteq \ensuremath{\mathbb{Z}}_{n^{1+o(1)}}$ for every $i\in\ensuremath{\mathbb{Z}}_n$ such that $E_i\cap E_{i'}=\emptyset$ for any $i'\neq i$, and the sum of each pair $\sigma(i)+e_{i,1},e_{i,1}+e_{i,2},\cdots,e_{i,\omega-1}+e_{i,\omega},e_{i,\omega}+\sigma(\tau(i))$ is unique in the sense that no other pairs in $S\cup E$ sum up to the same value, where $E=\bigcup_{i=1}^n E_i$. Note that a linked unique sum is a weaker notion than Sidon or Salem-Spencer, as for these there is no auxiliary set $E$. Nevertheless, the property of linked unique sum is strong enough for our reduction. The construction of the auxiliary set $E$ relies on further extending our technique for constructing unique adjacent sums, together with a group theoretic lemma that allows an \lq\lq orthogonal\rq\rq\, decomposition of the permutation $\tau$ (Lemma~\ref{lemma:shuffle-1}). 
Our results may be of separate interest for constructing fine-grained lower bounds on approximation or parameterized algorithms for other problems.
Another crucial observation, which may also be of independent interest, is a structural result needed for our PTAS with running time $2^{\tilde{O}(\sqrt{1/\epsilon})}+n^{O(1)}$ (see Section~\ref{subsec:alg2}). For many objective functions (like $C_{\max}$) we can round the processing times to powers of $1+\epsilon$ in order to bound the overall loss by a factor of $1+O(\epsilon)$. We observe that for minimizing $\sum_i C_i^q$ it is possible to consider a coarser grouping of jobs into sizes within a $(1+\sqrt{\epsilon})$ factor. Broadly speaking, by imposing extra structure on a near-optimal solution, we can use a Taylor expansion to bound the error, and notice that the linear term of the polynomial expansion cancels out. This leaves us only with the quadratic (and lower order) terms. This observation might translate to other problems with $\ell_q$-norm objective, and even other min-sum cost functions.
\noindent\textbf{Related Work.}
Load balancing problems are fundamental in computer science and have been studied extensively in the literature. In particular, the first PTAS for $P||C_{\max}$ dates back to the 80's~\cite{hochbaum1987using} and there is a long history of improvements on the running time for various identical machine scheduling problems, including $P||C_{\max}$, $P||\sum_i C_i^q$, $P||\sum_j w_jC_j$, etc.; see, e.g., \cite{leung1989bin,alon1998approximation,hochbaum1997various, skutella1999ptas,jansen2010eptas,jansen2020closing}. Recently, more general objective functions based on arbitrary norms have been considered~\cite{ibrahimpur_minimum_norm_2021}. Parameterized algorithms for scheduling problems have also been studied extensively (see, e.g.~\cite{jansen2020structural,mnich2018parameterized,mnich2015scheduling,knop2018scheduling,chen2017parameterized}).
The exponential time hypothesis (ETH) is a widely accepted complexity assumption introduced by Impagliazzo et al.~\cite{impagliazzo2001problems,impagliazzo2001complexity}, which can be used to obtain lower bounds on the running time of algorithms for various problems (see, e.g., \cite{lokshtanov2013lower} for a survey). In 2014, Chen et al. \cite{chen2014optimality} provide a concrete lower bound on the running time of a PTAS for $P||C_{\max}$ under ETH. Later, Jansen et al.~\cite{jansen2020closing} give a PTAS with running time $2^{\tilde{O}(1/\epsilon)}+n^{O(1)}$ for $P||C_{\max}$, which almost matches the lower bound.
Despite PTASs having been established for a variety of optimization problems, much less is known regarding lower bounds on their running time. In addition to $P||C_{\max}$, mentioned above, other well-known examples include multiple knapsack \cite{jansen2016bounding}, planar vertex cover, planar dominating set, and planar traveling salesperson~\cite{marx2007optimality}. Interestingly, all of these lower bounds have an almost linear dependency on $1/\epsilon$ in the exponent, which essentially matches the best-known PTAS. More generally, Chen et al.~\cite{chen2004linear} proved that if a problem, parameterized by $1/\epsilon$, is W[1]-hard under a linear FPT reduction, then there is no PTAS with $f(1/\epsilon)|I|^{o(1/\epsilon)}$ running time for any computable function $f$, assuming all problems in SNP cannot be solved in sub-exponential time. We are not aware of a PTAS whose running time is subexponential in $1/\epsilon$, either for scheduling or other strongly NP-hard problems.
Unlike approximation algorithms, subexponential running times on a parameter have been observed in the field of parameterized algorithms and have received significant attention. In particular, a variety of optimization problems in planar graphs admit a fixed parameter tractable (FPT) algorithm that is subexponential in the parameter, including, e.g., independent set~\cite{demaine2005subexponential}, dominating set~\cite{demaine2005subexponential}, and multiway cut~\cite{klein2012solving,marx2012tight,pilipczuk2018network}. Note that, on the other hand, a subexponential PTAS was ruled out for the planar dominating set problem~\cite{marx2007optimality}.
\section{Approximation schemes}\label{sec:upper} The goal of this section is to prove the following theorem. \begin{theorem}\label{thm:algorithm}
For any sufficiently small $\epsilon>0$, there exists an algorithm that outputs a $(1+\epsilon)$-approximate solution for the scheduling problem $P||\sum_iC_i^q$ within $2^{\tilde{O}(\sqrt{1/\epsilon})}+n^{O(1)}$ time. More specifically, there exists a:
\begin{compactitem}
\item $(1+\epsilon)$-approximation algorithm AL$_1$ that runs in time $(1/\epsilon)^{O(m)}+n^{O(1)}$ for $m=O(\sqrt{1/\epsilon})$;
\item $(1+\epsilon)$-approximation algorithm AL$_2$ that runs in time $2^{\tilde{O}(\sqrt{1/\epsilon})}+n^{O(1)}$ for $m=(1/\epsilon)^{O(1)}$;
\item $(1+\epsilon)$-approximation algorithm AL$_3$ that runs in time $(1/\epsilon)^{O(1)}+n^{O(1)}$ for $m=\Omega(1/\epsilon\log^2(1/\epsilon))$.
\end{compactitem} \end{theorem}
In particular, for a sufficiently small $\epsilon$, we may run AL$_1$ for $m\le \sqrt{1/\epsilon}$, run AL$_2$ for $\sqrt{1/\epsilon}<m\le 1/\epsilon^2$, and run AL$_3$ for $m\ge 1/\epsilon^2$. This guarantees a $2^{\tilde{O}(\sqrt{1/\epsilon})}+n^{O(1)}$ time algorithm for all values of $m$ and $1/\epsilon$.
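The case distinction above can be sketched as a simple dispatcher. The names \texttt{AL1}, \texttt{AL2}, \texttt{AL3} below are placeholders for the three algorithms of Theorem~\ref{thm:algorithm}, and the thresholds are exactly the ones just described:

```python
import math

def choose_algorithm(m, eps):
    # Thresholds from the case analysis above; "AL1"/"AL2"/"AL3" are
    # placeholders for the three approximation schemes of the theorem.
    if m <= math.sqrt(1 / eps):
        return "AL1"   # (1/eps)^{O(m)} + n^{O(1)} time
    if m <= 1 / eps ** 2:
        return "AL2"   # 2^{~O(sqrt(1/eps))} + n^{O(1)} time
    return "AL3"       # (1/eps)^{O(1)} + n^{O(1)} time

assert choose_algorithm(5, 0.01) == "AL1"      # m <= sqrt(100) = 10
assert choose_algorithm(50, 0.01) == "AL2"     # 10 < m <= 10^4
assert choose_algorithm(20000, 0.01) == "AL3"  # m > 1/eps^2
```

In every branch the worst-case running time is dominated by $2^{\tilde{O}(\sqrt{1/\epsilon})}+n^{O(1)}$, which yields the claimed overall bound.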
We remark that standard techniques round the processing time of each job to a power of $1+\epsilon$, yielding an instance with $\tilde{O}(1/\epsilon)$ different types of jobs. However, such a rounded instance cannot be solved to optimality in time $2^{(1/\epsilon)^{1-\delta}}+n^{O(1)}$ for any constant $\delta>0$~\cite{chen2014optimality}. Hence, we need a new approach for Theorem~\ref{thm:algorithm}.
We now give a brief overview of the proof of Theorem~\ref{thm:algorithm}. Algorithm~AL$_1$ is based on standard dynamic programming, given in Appendix~\ref{appsubsec:al1}. Algorithm~AL$_2$ is based on a new observation (Lemma~\ref{thm:structure}) which shows that we can classify processing times into intervals of the form $[\epsilon(1+\sqrt{\epsilon})^{h-1},\epsilon(1+\sqrt{\epsilon})^{h})$ for an integer $h$. After preprocessing the instance (Lemma~\ref{lemma:prepocessing}), we can focus on only $\tilde{O}(1/\sqrt{\epsilon})$ such intervals. We show that there exists a near-optimal solution where jobs are scheduled in an ordered way following the mentioned classification. This algorithm is described in Section~\ref{subsec:alg2}. Algorithm~AL$_3$ (see Appendix~\ref{appsubsec:al3}) is based on modifying the famous algorithm for the bin packing problem by Karmarkar and Karp~\cite{karmarkar1982efficient}.
All three algorithms operate on a well-structured scheduling instance, as provided by the following lemma. The structure can be achieved through standard techniques, namely scaling and grouping of small jobs; see, e.g., \cite{alon1998approximation}. For an instance $I$, we denote by $\text{size}(I)$ the total processing time of jobs in $I$, and by $m(I)$ the number of machines.
\begin{lemma}[Alon et al.~\cite{alon1998approximation}] \label{lemma:prepocessing}
For any sufficiently small $\epsilon>0$, given an arbitrary instance $I_0$ of $P||\sum_i C_i^q$, we can transform $I_0$ in linear time into a well-structured rounded instance $I$, with at most as many jobs and at most as many machines, that satisfies: \begin{compactitem}
\item $\text{size}(I)=m(I)$;
\item the processing time of each job in $I$ belongs to $[\epsilon,1]$;
\item there exists an optimal solution for $I$ such that the load of each machine belongs to $[1/2,2]$. \end{compactitem} Furthermore, any $(1+\epsilon)$-approximate solution for $I$ can be transformed into a $(1+O(\epsilon))$-approximate solution for $I_0$ in linear time. \end{lemma}
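The first property of the lemma, $\text{size}(I)=m(I)$, is obtained by rescaling all processing times. The following sketch illustrates only this scaling step (the grouping of small jobs and the other properties are omitted):

```python
def normalize(p, m):
    # Scale processing times so that the total size equals the number of
    # machines, i.e., size(I) = m(I). This is only the scaling step of the
    # preprocessing; grouping of small jobs is omitted.
    scale = m / sum(p)
    return [x * scale for x in p]

p = normalize([3.0, 5.0, 2.0], 4)
assert abs(sum(p) - 4) < 1e-12
```

Since the objective $\sum_i C_i^q$ is homogeneous of degree $q$ in the processing times, this rescaling preserves approximation ratios.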
In the following, we focus exclusively on the instance after the preprocessing. It is worth mentioning that for non-integral values of $q$, the objective function can be irrational even for rational processing times. For obtaining a PTAS this is however not a problem, as computing the objective function up to an additive error of $\epsilon/\text{poly}(n)$ suffices for our results. In what follows we omit this technicality, and assume that we can compute the objective function without error.
\subsection{Algorithm 2}\label{subsec:alg2}
In this subsection, we describe and analyze algorithm AL$_2$.
\begin{lemma}\label{lemma:al2}
Consider an instance after the preprocessing of Lemma~\ref{lemma:prepocessing}. For any $\epsilon>0$, there exists an algorithm \textrm{AL}$_{2}$ that outputs a $(1+O(\epsilon))$-approximate solution for $P||\sum_i C_i^q$ in $m^{\tilde{O}(1/\sqrt{\epsilon})}$ time. \end{lemma}
We know there exists an optimal solution $x^*$ where the load of each machine belongs to $[1/2,2]$. Let $L_i^*$ be the load of machine $i$ in $x^*$, where $1/2\le L_i^*\le 2$. Without loss of generality we further assume that $L_1^*\le L_2^*\le \cdots\le L_m^*$. For an integer $h\ge1$, let $\mathcal{G}_h$ be the set of jobs whose processing time lies in $[\epsilon(1+\sqrt{\epsilon})^{h-1},\epsilon(1+\sqrt{\epsilon})^h)$. Given that $\epsilon \le p_j\le 1$, every job belongs to some set $\mathcal{G}_h$ for $h\in\{1,\ldots,\tau\}$, where $\tau = \tilde{O}(1/\sqrt{\epsilon})$. For simplicity, we call a job in $\mathcal{G}_h$ a $\mathcal{G}_h$-job. The following structural result contains the key observation for the existence of a PTAS with subexponential running time.
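For intuition, the index $h$ of the group containing a given processing time can be computed directly. The following sketch is illustrative only; boundary cases are subject to floating-point rounding:

```python
import math

def group_index(p, eps):
    # Return h >= 1 with eps*(1+sqrt(eps))**(h-1) <= p < eps*(1+sqrt(eps))**h.
    # Floating-point rounding may shift jobs sitting exactly on a boundary.
    return math.floor(math.log(p / eps, 1 + math.sqrt(eps))) + 1

eps = 0.04  # sqrt(eps) = 0.2, so group boundaries grow by a factor 1.2
for p in [0.04, 0.1, 0.5, 1.0]:
    h = group_index(p, eps)
    assert eps * (1 + math.sqrt(eps)) ** (h - 1) <= p < eps * (1 + math.sqrt(eps)) ** h
```

With $p_j\in[\epsilon,1]$, the largest index is $\lceil\log_{1+\sqrt{\epsilon}}(1/\epsilon)\rceil=\tilde{O}(1/\sqrt{\epsilon})$, matching the bound on $\tau$ above.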
\begin{lemma}\label{thm:structure}
There exists a feasible solution $\hat{x}$ satisfying: i) its objective value is at most $(1+O(\epsilon)) OPT$, and ii) the machines can be ordered from $1$ to $m$ such that for any $1\le i\le m-1$ and $h$, the processing time of every $\mathcal{G}_h$-job on machine $i+1$ is at most the processing time of any $\mathcal{G}_h$-job on machine $i$.
\end{lemma}
\begin{proof}
Given an optimal solution $x^*$, we construct $\hat{x}$ as follows. For machine $m$ and each $h$, we replace its $\mathcal{G}_h$-jobs with the same number of the smallest $\mathcal{G}_h$-jobs. For machine $m-1$, we replace its $\mathcal{G}_h$-jobs with the same number of the smallest remaining $\mathcal{G}_h$-jobs, and so on. Eventually, every $\mathcal{G}_h$-job on machine $i+1$ is no greater than any $\mathcal{G}_h$-job on machine $i$. Let $\hat{L}_i=L_i^*+\Delta_i$ be the new load of machine $i$.
By the definition of $\mathcal{G}_h$, the largest $\mathcal{G}_h$-job has a processing time at most $(1+\sqrt{\epsilon})$ times that of the smallest one. This implies that $ L_i^*/(1+\sqrt{\epsilon})\le \hat{L}_i\le (1+\sqrt{\epsilon})L_i^*$, and hence $|\Delta_i|\le \sqrt{\epsilon}L_i^* \le 2\sqrt{\epsilon}$. In order to bound the objective function, first write $\sum_{i=1}^m (L_i^*+\Delta_i)^q = \sum_{i=1}^m {L_i^*}^q(1+\Delta_i/L_i^*)^q$. Using a Taylor expansion of order 1 of the function $(1+x)^q$ around $x=0$, we obtain that for some $\xi_i$ between $0$ and $\Delta_i/L_i^*$ (in particular $|\xi_i|\le 1$), \begin{eqnarray*}
\sum_{i=1}^m (L_i^*+\Delta_i)^q &&= \sum_{i=1}^m {L_i^*}^q\left(1+q\frac{\Delta_i}{L_i^*}+\frac{q(q-1)}{2}(1+\xi_i)^{q-2}\left(\frac{\Delta_i}{L_i^*}\right)^2\right)\\
&&\le (1+O(\epsilon))\sum_{i=1}^m {L_i^*}^q+ q\sum_{i=1}^m \Delta_i {L_i^*}^{q-1}\\
&&= (1+O(\epsilon))OPT+ q{L_1^*}^{q-1}\sum_{k=1}^m \Delta_k+q\sum_{i=2}^m\left[({L_i^*}^{q-1}-{L_{i-1}^*}^{q-1})\sum_{k=i}^m \Delta_k\right]\\
&&\le (1+O(\epsilon))OPT. \end{eqnarray*}
The last equality uses Abel's transformation (summation by parts). The last inequality follows since ${L_i^*}^{q-1}\ge {L_{i-1}^*}^{q-1}$ by the ordering of the loads and, for each $i$, it holds that $\sum_{k=i}^m\Delta_{k}\le 0$, as machines $i,\ldots,m$ received the smallest $\mathcal{G}_h$-jobs for each $h$. \end{proof}
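The summation-by-parts step used above can be verified numerically. The following sketch checks the identity $\sum_i a_ib_i = b_1\sum_{k\ge 1} a_k + \sum_{i\ge 2}(b_i-b_{i-1})\sum_{k\ge i}a_k$, applied in the proof with $a_i=\Delta_i$ and $b_i={L_i^*}^{q-1}$, on random data (illustration only):

```python
import random

def abel(a, b):
    # Summation by parts: sum_i a_i b_i
    #   = b_1 * sum_{k>=1} a_k + sum_{i>=2} (b_i - b_{i-1}) * sum_{k>=i} a_k.
    lhs = sum(x * y for x, y in zip(a, b))
    rhs = b[0] * sum(a) + sum((b[i] - b[i - 1]) * sum(a[i:]) for i in range(1, len(a)))
    return lhs, rhs

random.seed(0)
q = 2.5
L = sorted(random.uniform(0.5, 2.0) for _ in range(8))  # nondecreasing machine loads
D = [random.uniform(-0.1, 0.1) for _ in range(8)]       # the deviations Delta_i
lhs, rhs = abel(D, [l ** (q - 1) for l in L])
assert abs(lhs - rhs) < 1e-9
```

Since the loads are sorted, each factor $b_i-b_{i-1}$ is nonnegative, which is exactly what makes the final inequality in the proof go through.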
Exploiting the ordering of the jobs and machines given by Lemma~\ref{thm:structure}, we are able to develop a dynamic programming based algorithm to prove Lemma~\ref{lemma:al2}, see Appendix~\ref{appsubsec:al2}.
\section{Lower Bound}
In this section, we will prove the following theorem. \begin{theorem}\label{thm:lower-bound}
Let $q>1$ be an arbitrary constant. Assuming ETH, there is no PTAS for $P||\sum_i C_i^q$ that runs in $2^{O((1/\varepsilon)^{1/2-\delta})}+n^{O(1)}$ time for any constant $\delta>0$. \end{theorem}
For the proof we give a fine-grained reduction from a variant of Max3SAT, called 3SAT$'$ (which we elaborate in the following subsection), to $P||\sum_i C_i^q$.
\subsection{3SAT\texorpdfstring{$'$}{Lg} - Max3SAT with a Special Structure}\label{subsec:maxsat}
We study a variant of 3SAT, which we call 3SAT$'$, whose instances have the following structure: There are $n$ variables $z_1,\ldots,z_n$, where $n$ is a multiple of $3$. There are $4n/3$ clauses, which can be divided into two disjoint sets $C_1$ and $C_2$ such that: \begin{compactitem} \item In $C_1$, every clause is a disjunction (OR operator) of three literals. For each variable $z_i$, exactly one literal in $C_1$ belongs to $\{z_i,\neg z_i\}$. \item In $C_2$, every clause is of the form $z_i\oplus \neg z_k$, where $\oplus$ denotes the XOR operator. Also, for every variable $z_i$, each of the literals $z_i$ and $\neg z_i$ appears exactly once within $C_2$. \end{compactitem}
For example, $C_1=\{(z_1\vee \neg z_2\vee z_3)\}$ and $C_2=\{(z_1\oplus \neg z_2),(z_2\oplus \neg z_3),(z_3\oplus \neg z_1)\}$ defines a 3SAT$'$ instance for $n=3$.
Let $cl_2,cl_5,\cdots,cl_{n-1}$ be the clauses in $C_1$. By re-indexing we can assume that $cl_\ell$ is of the form $(w_{\ell-1}\vee w_{\ell}\vee w_{\ell+1})$, where $w_{j}\in\{z_j,\neg z_{j}\}$ for all $j$. Also notice that $|C_1|=n/3$ and $|C_2|=n$. Since every literal appears exactly once in $C_2$, we define a permutation $\tau:\mathbb{Z}_n\rightarrow \mathbb{Z}_n$ (i.e., a bijection) such that $\tau(i)=k$ for each $(z_i\oplus\neg z_k)\in C_2$.
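Extracting the permutation $\tau$ from $C_2$ is mechanical. The following sketch does so for the $n=3$ example above, with clauses encoded as index pairs $(i,k)$ standing for $(z_i\oplus\neg z_k)$:

```python
def xor_permutation(c2, n):
    # Each clause (z_i XOR ~z_k) contributes tau(i) = k; the 3SAT' structure
    # guarantees that tau is a bijection on {1, ..., n}.
    tau = {i: k for (i, k) in c2}
    assert sorted(tau) == list(range(1, n + 1))           # each z_i once
    assert sorted(tau.values()) == list(range(1, n + 1))  # each ~z_k once
    return tau

# The n = 3 example: C2 = {(z1 xor ~z2), (z2 xor ~z3), (z3 xor ~z1)}.
tau = xor_permutation([(1, 2), (2, 3), (3, 1)], 3)
assert tau == {1: 2, 2: 3, 3: 1}
```

The two assertions encode exactly the 3SAT$'$ requirement that each positive and each negative literal appears exactly once in $C_2$.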
Similarly to 3SAT, it is also hard to distinguish instances of 3SAT$'$ where almost all clauses are satisfiable from instances where at most a certain fraction of the clauses can be satisfied, as implied by the following lemma. See Appendix~\ref{appsec:sat} for its proof.
\begin{lemma}\label{lemma:maxsat-eth2} Assuming ETH, there exists a constant $\beta\in(0,1)$ such that for any sufficiently small $\epsilon',\delta>0$ there is no algorithm with running time $2^{O(n^{1-\delta})}$ that distinguishes between instances of 3SAT$\,'$ with $4n/3 $ clauses where at least $(1-\epsilon')\cdot 4n/3$ clauses are satisfiable, from instances where at most $(\beta+\epsilon')\cdot 4n/3$ clauses are satisfiable. \end{lemma} \subsection{Overview of the reduction}\label{subsec:overview} We now briefly describe the structure of the constructed scheduling instance. The detailed reduction will be presented in Appendix~\ref{appsec:reduction-construction}. We remark that the high-level structure of the scheduling instance resembles the classical reduction and that of~\cite{chen2014optimality}. New technical ingredients are in job processing times, as we will elaborate in Section~\ref{subsec:technical}.
For an instance $I_{sat}$ of 3SAT$'$ with $n$ variables, we construct the following 6 kinds of jobs:
$\bullet$ Variable jobs: For each positive (or negative, resp.) literal, say, $z_i$ (or $\neg z_i$, resp.), two pairs of variable jobs $V_{i,+,1}^{\rho}$ and $V_{i,+,2}^{\rho}$ (or $V_{i,-,1}^{\rho}$ and $V_{i,-,2}^{\rho}$, resp.) are constructed where $\rho\in\{T,F\}$. In total, we construct 4 jobs for each (positive or negative) literal, i.e., 8 jobs for each variable.
$\bullet$ Clause jobs:
For each clause $cl_\ell$ of $C_1$, one clause job $\textrm{CL}^T_\ell$ and two copies of clause job $\textrm{CL}_\ell^F$ are constructed.
Since $|C_1|=n/3$, we construct $n$ clause jobs in total.
$\bullet$ Truth-assignment jobs, link jobs and dummy jobs: These three kinds of jobs will be created suitably so that the conditions below (CO1 to CO4) are satisfied.
$\bullet$ Gap jobs: Let $Q$ be a target makespan. We construct $\tilde{O}(n)$ gap jobs and the same number of machines to create gaps. Roughly speaking, every feasible schedule whose objective value is not too large will have one gap job on each machine, leaving a gap that must be filled up such that the load of the machine is exactly $Q$. We will create 4 kinds of gaps (incurred by gap jobs) satisfying the following conditions:
\begin{compactitem} \item[$\bullet$ CO1.] Variable-Truth gaps. To fill up these gaps, for any $i$ either $V_{i,+,1}^F$, $V_{i,+,2}^F$, $V_{i,-,1}^T$, $V_{i,-,2}^T$, or $V_{i,+,1}^T$, $V_{i,+,2}^T$, $V_{i,-,1}^F$, $V_{i,-,2}^F$ are used. Truth-assignment jobs are created for this purpose.
\item[$\bullet$ CO2.] Variable-Clause-Dummy gaps. For each clause $cl_\ell\in C_1$, there are three variable-clause-dummy gaps. If the positive (or negative, resp.) literal $z_i$ (or $\neg z_i$, resp.) is in $cl_\ell\in C_1$, then a variable-clause-dummy gap is created so that it could only be filled up by $\textrm{CL}_\ell^\rho$ and $V_{i,+,1}^{\rho'}$ (or $\textrm{CL}_\ell^\rho$ and $V_{i,-,1}^{\rho'}$, resp.), where $\rho,\rho'\in\{T,F\}$, together with a dummy job. Further, the gap ensures that $\textrm{CL}^T_\ell$ has to be scheduled with either $V_{i,+,1}^T$ or $V_{i,-,1}^T$.
\item[$\bullet$ CO3.] Variable-Link and Link-Link gaps. For each clause $(z_i\oplus\neg z_k)\in C_2$ we create a collection of Variable-Link and Link-Link gaps. To fill up these gaps, either $V_{i,+,2}^T$ and $V_{k,-,2}^F$, or $V_{i,+,2}^F$ and $V_{k,-,2}^T$ are used. Link jobs are created for this purpose (see Section~\ref{subsec:technical} for more details on this construction).
\item[$\bullet$ CO4.] Variable-Dummy gaps. Recall that 8 variable jobs are constructed for each variable, while only 7 of them are used to fill the previous gaps (either $V_{i,+,1}^\rho$ or $V_{i,-,1}^\rho$ is left over, where $\rho\in\{T,F\}$); the remaining variable job, together with a dummy job, will be used to fill these gaps. \end{compactitem}
With this construction, it is not difficult to verify that if every gap is filled exactly, $I_{sat}$ is satisfiable. To see why, if $V_{i,+,1}^F$, $V_{i,+,2}^F$, $V_{i,-,1}^T$, $V_{i,-,2}^T$ are used in the variable-truth gaps, then we let variable $z_i$ be true, otherwise we let it be false. For any clause of $C_1$, say, $cl_\ell$, there is one $\textrm{CL}_\ell^T$ and it must be scheduled with a true variable job, say, $V_{i,+,1}^T$ if $z_i$ is a literal in $cl_{\ell}$ (or $V_{i,-,1}^T$ if $\neg z_i$ is a literal in $cl_{\ell}$). If $V_{i,+,1}^T$ (or $V_{i,-,1}^T$, resp.) is scheduled with $\textrm{CL}_\ell^T$, then the positive (or negative, resp.) literal $z_i$ (or $\neg z_i$, resp.) is in $cl_\ell$. Meanwhile, the variable $z_i$ is true (or false, resp.), since otherwise $V_{i,+,1}^T$ (or $V_{i,-,1}^T$, resp.) would have been used to fill the variable-truth gaps. Thus clause $cl_\ell$ is satisfied. For any clause of $C_2$, say, $(z_i\oplus\neg z_k)$, if $V_{i,+,2}^T$ and $V_{k,-,2}^F$ (or $V_{i,+,2}^F$ and $V_{k,-,2}^T$, resp.) are used to fill up the corresponding variable-link and link-link gaps, then variables $z_i$ and $z_k$ are both true (false, resp.) since otherwise $V_{i,+,2}^T$ and $V_{k,-,2}^F$ ($V_{i,+,2}^F$ and $V_{k,-,2}^T$, resp.) would have been used to fill up the variable-truth gaps. Hence, $(z_i\oplus\neg z_k)$ is satisfied. Similarly, if $I_{sat}$ is satisfiable, then every gap can be filled up.
Chen et al.~\cite{chen2014optimality} provided a reduction that meets the above requirement with job processing times, and hence the target value $Q$, being $O(n^{1+\delta})$ for arbitrarily small constant $\delta>0$. Unfortunately, using this reduction we can only deduce a weaker lower bound of $2^{O((1/\epsilon)^{1/3-\delta})}$ (see Appendix~\ref{appsec:old} for a detailed discussion). For our purpose, we need to design job processing times to achieve a stronger {\em ratio-preserving} property, as we elaborate below.
Recall that we are given an instance of 3SAT$'$ with $n$ variables and $4n/3$ clauses. For a given solution, we say a machine is {\em good} if its load is exactly $Q$ (in which case there is exactly one gap job on it and the gap is filled up exactly), and is {\em bad} otherwise (in which case its load is at least $Q+1/2$ or at most $Q-1/2$). Our scheduling instance will additionally satisfy the following properties: \begin{compactitem} \item[i.] There are $m=\tilde{O}(n)$ machines and the target makespan is $Q=\tilde{O}(n)$. \item[ii.] Each processing time is a multiple of $1/2$ and the total job processing time equals $mQ$.
\item[iii.] Conditions CO1 to CO4 are satisfied. Additionally, the following ratio-preserving properties are satisfied. For any $\vartheta\in (0,1)$ it holds that: \begin{compactitem}
\item If the 3SAT$'$ instance admits a truth assignment where at most $\vartheta n$ clauses are not satisfied, then the constructed scheduling instance admits a feasible solution with at most $\vartheta_1n$ bad machines, for some $\vartheta_1=\Theta(\vartheta)$. In particular, the load of these bad machines is $Q+1$ or $Q-1$. \item If any truth assignment for the 3SAT$'$ instance has at least $\vartheta n$ clauses that are not satisfied, then in any feasible schedule of the constructed scheduling instance there are at least $\vartheta_2n$ bad machines, for some $\vartheta_2=\Theta(\vartheta)$.
\end{compactitem}
\end{compactitem}
Before giving more details of the construction, we briefly argue that an instance satisfying properties (i)-(iii) implies Theorem~\ref{thm:lower-bound}. \begin{proof}[Proof Idea (Theorem~\ref{thm:lower-bound})] Take $q=2$ for simplicity. We assume by contradiction that there exists some sufficiently small $\delta>0$ such that for any $\epsilon>0$ there is a $(1+\epsilon)$-approximation algorithm with running time $2^{O((1/\epsilon)^{1/2-\delta})}+n^{O(1)}$. Let $\beta\in (0,1)$ be a constant, and let $\epsilon',\delta>0$ be sufficiently small numbers, as in Lemma~\ref{lemma:maxsat-eth2}. We show that, for an appropriately chosen $\epsilon$, the PTAS can be used to distinguish, in time $2^{n^{1-\delta}}$, 3SAT$'$ instances where at least $(1-\epsilon')\cdot 4n/3$ clauses are satisfiable from 3SAT$'$ instances where at most $(\beta+\epsilon')\cdot 4n/3$ clauses are satisfiable, contradicting ETH by Lemma~\ref{lemma:maxsat-eth2}. Indeed, we first observe that every bad machine causes the objective value to increase by at least some fixed constant. A simple but crucial observation is that, for load balancing problems, the deviations from the average load sum to~0. That is, if $C_i=Q+\Delta_i$, then the cost is \begin{eqnarray} \sum_{i=1}^m C_i^2= \sum_{i=1}^m (Q+\Delta_i)^2 =\sum_{i=1}^m (Q^2+ 2Q\Delta_i + \Delta_i^2) = mQ^2 + \sum_{i=1}^m\Delta_i^2, \label{eq:load} \end{eqnarray} where the last equality follows as $\sum_{i}\Delta_i=0$ (for general $q>1$, a similar statement follows from a Taylor expansion, as in the proof of Lemma~\ref{thm:structure}). Consequently, if at least $(1-\epsilon')\cdot 4n/3$ clauses of $I_{sat}$ are satisfiable, then at most $\Theta(\epsilon'n)$ machines will have a load of either $Q+1$ or $Q-1$, and hence the optimal objective value of the constructed scheduling instance is at most $mQ^2+\Theta(\epsilon' n)$ by Eq.~\eqref{eq:load}.
On the other hand, if at most $(\beta+\epsilon')\cdot 4n/3$ clauses of $I_{sat}$ are satisfiable for some constant $\beta<1$, then $\Delta_i\ge 1/2$ for at least $\Theta((1-\beta-\epsilon')n)$ machines. By Eq~\eqref{eq:load}, the optimal objective value of the constructed scheduling instance is at least $mQ^2+\Theta((1-\beta-\epsilon')n)=mQ^2+\Theta(n)$ (see Lemma~\ref{lemma:calculate} for the detailed computation). Now we apply the efficient PTAS with $\epsilon=\Theta(\frac{n^{1-\delta}}{mQ^2})\approx \Theta(1/n^{2+\delta})$. Since $mQ^2\epsilon=\Theta(n^{1-\delta})$, if at least $(1-\epsilon')\cdot 4n/3$ clauses of $I_{sat}$ are satisfiable, then the PTAS returns a schedule with objective value at most $mQ^2+\Theta(\epsilon' n)+\Theta(n^{1-\delta})=mQ^2+\Theta(\epsilon' n)$. Otherwise, the PTAS returns a schedule with objective value at least $mQ^2+\Theta(n)$. Theorem~\ref{thm:lower-bound} follows as the assumed PTAS has a running time of $2^{O((1/\epsilon)^{1/2-\delta})}+n^{O(1)} \le 2^{O(n^{1-\delta})}$. \end{proof}
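The identity in Eq~\eqref{eq:load} is the driving force of the argument; a minimal numerical sketch (with arbitrary toy values for $m$, $Q$ and the deviations, not the values of the reduction) checks it directly:

```python
import random

# Numerical check of Eq. (load): if the deviations Delta_i sum to 0,
# then sum_i C_i^2 = m*Q^2 + sum_i Delta_i^2.
random.seed(0)
m, Q = 10, 50                                # toy values, not from the reduction
deltas = [random.randint(-3, 3) for _ in range(m - 1)]
deltas.append(-sum(deltas))                  # force the deviations to cancel
loads = [Q + d for d in deltas]
assert sum(c * c for c in loads) == m * Q * Q + sum(d * d for d in deltas)
```

The cross term $2Q\sum_i\Delta_i$ vanishes exactly because the loads average to $Q$, which is why every bad machine contributes its full $\Delta_i^2$ to the objective.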
\noindent\textbf{Remark.} One can verify that if $Q$ is larger, e.g., $Q=\Theta(n^2)$, then the above argument only rules out a PTAS of running time $2^{O((1/\epsilon)^{1/4-\delta})}$. Hence, simultaneously enforcing the ratio-preserving property while having $Q=\tilde{O}(n)$ is the main technical challenge, which we overcome with our new number-theoretic constructions, as we elaborate in the following.
The rest of the paper is organized as follows. In Section~\ref{subsec:technical} we give an overview of the main technical ingredients for the construction of the processing times in our reduction. We also motivate our number theoretical constructions, which are specified in Section~\ref{subsec:adjacent-sum}.
In Appendix~\ref{appsec:reduction-construction} we present the complete reduction. In Appendix~\ref{appsec:remaining-proofs} we show its correctness and conclude Theorem~\ref{thm:lower-bound}.
\subsection{Defining Processing Times: Main Techniques}\label{subsec:technical} To illustrate the main technical ingredient, in this subsection we focus on conditions CO2 and CO3 and ignore the other conditions (which can be handled using the techniques for CO2 and CO3). Recall that our goal is to create suitable gap jobs that can only be filled up by specific jobs.
We can view each job, say, $V_{i,+,1}^{T}$, as a combination of three components -- the type-component $V_{\cdot,+,1}$, the index-component $i$, and the $T/F$-component $T$. Ignoring dummy jobs for simplicity, conditions CO2 and CO3 involve 5 different type-components, namely $V_{\cdot,+,1}$, $V_{\cdot,+,2}$, $V_{\cdot,-,1}$, $V_{\cdot,-,2}$ and $\textrm{CL}_{\cdot}$. Denote by $s(\cdot)$ the processing time of a job. We define the processing time of a job as a sum of three terms corresponding to these components, e.g., $s(V_{i,+,1}^{T})=\mu(V_{\cdot,+,1})+\sigma(i)+\eta(T)$, where the functions $\mu,\sigma,\eta$ map the type-component, index-component and T/F-component of a job to positive integers. Now the question becomes: how can we define functions $\mu,\sigma,\eta$ such that from a sum, e.g., $\mu(V_{\cdot,+,1})+\mu(\textrm{CL}_{\cdot})+\sigma(i)+\sigma(\ell)+\eta(T)+\eta(F)$, we can conclude that it can only arise as $s(V_{i',+,1}^\rho)+s(\textrm{CL}_{\ell'}^{\rho'})$, where $\{i',\ell'\}=\{i,\ell\}$ and $\{\rho,\rho'\}=\{T,F\}$? Since there are only a constant number of different type-components and T/F-components, it is easy to define $\mu$ and $\eta$. For example, let $\sigma_{max}$ be a sufficiently large value that exceeds the maximal value of $\sigma$ and $\eta$, and define $\mu(V_{\cdot,+,1}),\mu(V_{\cdot,+,2}),\mu(V_{\cdot,-,1}),\mu(V_{\cdot,-,2}),\mu(\textrm{CL}_{\cdot})$ to be $10^5\sigma_{max}, 10^4\sigma_{max}, 10^3\sigma_{max}, 10^2\sigma_{max}, 10\sigma_{max}$, respectively; then from the sum $\mu(V_{\cdot,+,1})+\mu(\textrm{CL}_{\cdot})$ it is easy to identify the type-components of the two jobs.
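The scale-separation trick behind $\mu$ can be sketched as follows (a hedged toy example: the value of `sigma_max`, the concrete $\mu$-values, and the helper `type_pair` are all hypothetical, chosen only to mimic the decomposition above):

```python
# Sketch of the scale-separation idea: mu-values are huge, well-separated
# multiples of sigma_max, so the two type-components can be decoded from
# the total processing time alone.
sigma_max = 100                      # hypothetical bound on sigma- and eta-values
mu = {"V+1": 10**5 * sigma_max, "V+2": 10**4 * sigma_max,
      "V-1": 10**3 * sigma_max, "V-2": 10**2 * sigma_max,
      "CL":  10 * sigma_max}

def type_pair(total):
    # the residue sigma(i)+sigma(l)+eta+eta is below 4*sigma_max, which is
    # smaller than the gap between any two distinct mu-pair sums
    for t1 in mu:
        for t2 in mu:
            rest = total - mu[t1] - mu[t2]
            if 0 <= rest < 4 * sigma_max:
                return {t1, t2}

# two jobs: index parts 30 and 40, T/F parts 5 and 7 (all below sigma_max)
total = mu["V+1"] + mu["CL"] + 30 + 40 + 5 + 7
assert type_pair(total) == {"V+1", "CL"}
```

The hard part, as the text explains next, is the index function $\sigma$, where the same "unique decoding" must work over $n$ values while keeping all numbers $\tilde{O}(n)$.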
The main difficulty lies in the function $\sigma$ as we require job processing times to be $\tilde{O}(n)$, whereas $\sigma$ must map $[n]$ to $[\tilde{O}(n)]$ such that \begin{eqnarray} \sigma(i)+\sigma(\ell)=\sigma(i')+\sigma(\ell')\implies \{i,\ell\}=\{i',\ell'\}. \label{eq:tech-1} \end{eqnarray}
In other words, we require the sum $\sigma(i)+\sigma(\ell)$ to be unique among the sums of all possible pairs $(i,\ell)$. Recall the special structure of 3SAT$'$, where each clause $cl_\ell\in C_1$ is of the form $(w_{\ell-1}\vee w_{\ell}\vee w_{\ell+1})$ with $w_{j}\in\{z_j,\neg z_{j}\}$ for all $j$. Hence, for condition CO2, it suffices to guarantee Eq~\eqref{eq:tech-1} for $i\in\{\ell-1,\ell,\ell+1\}$. Recall that a Salem-Spencer set is a set of numbers no three of which form an arithmetic progression; hence, if we let $\sigma$ map $[n]$ to a Salem-Spencer set of size $n$, Eq~\eqref{eq:tech-1} always holds when $\ell=i$. For our purpose, we need to generalize the construction of a Salem-Spencer set such that, in addition to $2\sigma(\ell)$, the sum of any two adjacent numbers $\sigma(\ell)+\sigma(\ell+1)$ is also unique, as we show in Lemma~\ref{lemma:uniquesum-2}.
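For intuition on why progression-freeness gives the $\ell=i$ case of Eq~\eqref{eq:tech-1}, here is a small brute-force sketch (a naive greedy construction for illustration only; the actual proofs use a Behrend-style construction):

```python
# Greedy sketch of a 3-AP-free (Salem-Spencer-type) set: add y unless it
# completes a three-term arithmetic progression with two current members.
def greedy_no_3ap(limit):
    S = []
    for y in range(1, limit):
        endpoint = any(2 * b - a == y for a in S for b in S)  # a, b, y in AP
        middle = any(a + c == 2 * y for a in S for c in S)    # a, y, c in AP
        if not endpoint and not middle:
            S.append(y)
    return S

S = greedy_no_3ap(30)
# the l = i case of Eq. (tech-1): 2*s has no representation other than s + s
for s in S:
    reps = [(a, c) for a in S for c in S if a <= c and a + c == 2 * s]
    assert reps == [(s, s)]
```

Any nontrivial representation $2s=a+c$ with $a\neq c$ would make $(a,s,c)$ a three-term progression, which the construction forbids; the generalized requirement on adjacent sums $\sigma(\ell)+\sigma(\ell+1)$ is what Lemma~\ref{lemma:uniquesum-2} adds on top of this.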
Condition CO3 is more complicated, as each clause of $C_2$ is of the form $(z_i\oplus\neg z_k)$ with $k=\tau(i)$, where the permutation $\tau$ is arbitrary. If we consider the index-components of the two variable jobs $V_{i,+,2}^\rho$ and $V_{k,-,2}^{\rho'}$, we cannot guarantee that $\sigma(i)+\sigma(k)=\sigma(i')+\sigma(k')$ implies $\{i,k\}=\{i',k'\}$. We therefore consider the following indirect approach. Suppose that for each $(z_i\oplus\neg z_k)$ we can construct a pair of jobs LN$_{(i,k)}^{T}$ and LN$_{(i,k)}^{F}$ (called link jobs), and meanwhile create two gaps that must be filled up by $V_{i,+,2}^{\rho_1}$ together with LN$_{(i,k)}^{\rho_1'}$, and by $V_{k,-,2}^{\rho_2}$ together with LN$_{(i,k)}^{\rho_2'}$, respectively, where $\{\rho_1,\rho_1'\}=\{\rho_2,\rho_2'\}=\{T,F\}$. Then if both gaps are filled up, either $V_{i,+,2}^T$ and $V_{k,-,2}^F$, or $V_{i,+,2}^F$ and $V_{k,-,2}^T$ are used, which is sufficient for condition CO3. Using this idea, instead of designing $\sigma$ such that the sum $\sigma(i)+\sigma(k)$ is unique, we seek to design $\sigma$ such that the pair $(i,k)$ is ``uniquely linked'' in the sense that there exists some number $e_i=\tilde{O}(n)$ such that the sums $\sigma(i)+e_i$ and $\sigma(k)+e_i$ are both unique among the sums of all pairs. Unfortunately, requiring the uniqueness of $\sigma(i)+e_i$ and $\sigma(k)+e_i$ is still too strong. We will show in Lemma~\ref{lemma:linked-sum} that for every $k=\tau(i)$ there exists a sequence of $\omega=O(\frac{\log n}{\log\log n})$ numbers $e_{i,1}$, $e_{i,2}$, $\ldots$, $e_{i,\omega}$ such that the sums $\sigma(i)+e_{i,1}$, $e_{i,1}+e_{i,2}$, $\ldots$, $e_{i,\omega-1}+e_{i,\omega}$, $e_{i,\omega}+\sigma(k)$ are all unique. Consequently, instead of creating one pair of link jobs, we will create $\omega$ pairs of link jobs for each $(z_i\oplus\neg z_k)$, ensuring condition CO3.
\subsection{Set of Integers with Unique Adjacent Sum and Linked Sum}\label{subsec:adjacent-sum}
In this section, we present our main technical contribution regarding the number-theoretic constructions needed in our reduction.
\noindent\textbf{Some notation.} Recall that we let $\ensuremath{\mathbb{Z}}_n=\{1,2,\cdots,n\}$. All logarithms are taken with base $e$ unless stated otherwise. We use $\cdot$ in the subscript to denote an arbitrary index, e.g., $x_{\cdot}$ refers to $x_i$ for some $i$. We write vectors in boldface, e.g., $\vex, \vey$. Vector coordinates are indexed starting from $0$: for any $\upsilon$-dimensional vector $\vecc$, $\vecc[h]$ denotes its $h$-th coordinate for $0\le h\le \upsilon-1$, and $\vecc\cdot\vex=\sum_{h=0}^{\upsilon-1}\vecc[h]x^h$, where $\vex=(1,x,\cdots,x^{\upsilon-1})$.
\begin{lemma}\label{lemma:uniquesum-1}
Let $N\in\mathbb{Z}^+$. There exists a subset ${\cal{S}}\subseteq \ensuremath{\mathbb{Z}}_N$ such that $|{\cal{S}}|\ge N^{1-c_0\sqrt{\frac{1}{{\log N}}}}$ for some sufficiently large $c_0$ (in particular, $c_0\ge7$ suffices), and for any $y\in {\cal{S}}$ and $1\le h\le 5$, the linear equation $h\cdot y=y_1+y_2+\cdots+y_h$ with $y_i\in {\cal{S}}$ for all $i$ has a unique solution $y_1=y_2=\cdots=y_h=y$. \end{lemma}
The proof of Lemma~\ref{lemma:uniquesum-1} mainly utilizes the idea for constructing Salem–Spencer sets~\cite{behrend1946sets} and can be found in Appendix~\ref{appsec:ajacent-sum}. In particular, we can show that $|{\cal{S}}|\ge N^{1-7\sqrt{\frac{1}{{\log N}}}}$. For any integer $d\in\ensuremath{\mathbb{Z}}^+$, we denote by ${\cal{S}}_d$ the subset of $\ensuremath{\mathbb{Z}}_d$ that satisfies Lemma~\ref{lemma:uniquesum-1}. Now we are ready to prove Lemma~\ref{lemma:uniquesum-2}, which is one of our two main number-theoretical results.
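The digit-based idea behind such constructions can be illustrated with a small sketch (toy parameters only; the actual proof in Appendix~\ref{appsec:ajacent-sum} chooses parameters carefully): numbers whose base-$x$ digits stay below $x/5$ and share a fixed squared digit-norm admit no nontrivial solutions, since adding up to five of them never carries, and equality in the norm bound forces all summands to coincide.

```python
from itertools import product, combinations_with_replacement

# Behrend-style sketch: base-x numbers with digits below x//5 and a fixed
# squared digit-norm R. At most 5 such numbers add without carrying, and
# convexity of the norm makes h*y = y_1 + ... + y_h have only y_i = y.
def behrend_like(x, dims, R):
    S = []
    for digits in product(range(x // 5), repeat=dims):
        if sum(t * t for t in digits) == R:
            S.append(sum(t * x**j for j, t in enumerate(digits)))
    return S

S = behrend_like(x=26, dims=3, R=5)   # toy parameters
for h in (2, 3):                      # brute-force check for small h
    for y in S:
        sols = [c for c in combinations_with_replacement(S, h) if sum(c) == h * y]
        assert sols == [tuple([y] * h)]
```

The check is exhaustive only for these tiny parameters; the lemma's density bound $N^{1-7\sqrt{1/\log N}}$ comes from optimizing the base and dimension, which the sketch does not attempt.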
\begin{lemma}\label{lemma:uniquesum-2}
Let $N\in\mathbb{Z}^+$, $d=\lceil e^{(\sqrt{\log\log N}+c)^2} \rceil=e^{{\mathcal{O}}(\sqrt{\log\log N})}\cdot \log N$ for some sufficiently large $c$ ($c\ge 7$ suffices) and $x=5d+1$. There exists an injection $\sigma: \ensuremath{\mathbb{Z}}_N\rightarrow \ensuremath{\mathbb{Z}}_{N'}$ such that: \begin{compactitem} \item[1.] $N'= N^{1+O(\frac{1}{\sqrt{\log \log N}})}$; \item[2.] For any $i\in\ensuremath{\mathbb{Z}}_N$, $\sigma(i)=\sum_{j=0}^{\gamma}\vea_i[j]x^j$ for some $\vea_i[j]\in{\cal{S}}_{d}$, $0\le j\le \gamma$, where $\gamma=\lceil \frac{\log N}{\log\log N} \rceil+O(\frac{\log N}{(\log\log N)^{3/2}})$; \item[3.] For any $1\le h\le 5$ and $i\in\ensuremath{\mathbb{Z}}_N$, the equation $h\cdot\sigma(i)=\sigma(i_1)+\sigma(i_2)+\cdots+\sigma(i_h)$, $i_j\in\ensuremath{\mathbb{Z}}_N$ has a unique solution $i_1=i_2=\cdots=i_h=i$. Further, the equation $h\cdot \sigma(i)=\sigma(i_1)+\sigma(i_2)+\cdots+\sigma(i_k)$, $i_j\in\ensuremath{\mathbb{Z}}_N$ has no feasible solution when $1\le k< h$ or $h<k\le 5$; \item[4.] For any $i\le N-1$, the linear equation $\sigma(i)+\sigma(i+1)=\sigma(i_1)+\sigma(i_2)$, $i_1\le i_2$ has a unique solution $i_1=i$, $i_2=i+1$. Furthermore, the linear equation $\sigma(i)+\sigma(i+1)=\sigma(i_1)+\sigma(i_2)+\cdots+\sigma(i_k)$ has no feasible solution when $k=1$ or $2<k\le 5$. \end{compactitem} \end{lemma}
\begin{proof}
By Lemma~\ref{lemma:uniquesum-1} we know $|{\cal{S}}_{d}|\ge d^{1-c_0\sqrt{\frac{1}{{\log d}}}}$ for some constant $c_0$ (in particular, we can choose $c_0=7$). Let $\hat{\cal{S}}\subseteq {\cal{S}}_{d}$ be an arbitrary subset whose cardinality $|\hat{\cal{S}}|=2^\omega$ is the largest power of two not exceeding $|{\cal{S}}_{d}|$; then $|\hat{\cal{S}}|\ge 1/2\cdot |{\cal{S}}_{d}|$, and it is easy to see that $|\hat{\cal{S}}|=d^{1-\Theta(\sqrt{\frac{1}{{\log d}}})}$. Consider all the integers that can be written as $\sum_{i=0}^{\beta}\vea[i]x^i=\vea\cdot\vex$ for some integer $\beta$, $\vea=(\vea[0],\vea[1],\cdots,\vea[\beta])\in \hat{\cal{S}}^{\beta+1}$, and $\vex=(1,x,\cdots,x^\beta)$, where $x=5d+1$. Since all coefficients are smaller than $x$, we obtain $|\hat{\cal{S}}|^{\beta+1}$ different integers constructed this way.
Simple calculations show that $|\hat{\cal{S}}|^{\beta+1}\ge N$ if $$\beta\ge \frac{\log N}{\log d} \left(1+\Theta\left(\sqrt{\frac{1}{\log d}}\right)\right).$$
Hence, by picking $\beta=\lceil \frac{\log N}{\log\log N} \rceil+O(\frac{\log N}{(\log\log N)^{3/2}}),$ we can guarantee that $|\hat{\cal{S}}|^{\beta+1}\ge N$. For $d\ge e^{(\sqrt{\log\log N}+7)^2}$, we notice that $\sqrt{\log d}\ge \sqrt{\log\log N}+7$, hence
\[|{\cal{S}}_{d}|\ge d^{1-7\sqrt{\frac{1}{{\log d}}}}=e^{\log d-7\sqrt{\log d}}>e^{\log\log N}=\log N>\beta+1.\] Hence, we can define an arbitrary injection $g$ that maps $j\in \{0,1,\cdots,\beta\}$ to a distinct number in~${\cal{S}}_{d}$.
Consider all the vectors $\vea$. For any two vectors $\vea_i$ and $\vea_k$, we say they are {\em close} if they differ in exactly one coordinate, i.e., there exists some $0\le j^*\le \beta$ such that $\vea_i[j]=\vea_k[j]$ for all $j\neq j^*$ and $\vea_i[j^*]\neq\vea_k[j^*]$. We claim the following. \begin{claim}\label{claim:1} Vectors in $ \hat{\cal{S}}^{\beta+1}$ can be ordered such that any two consecutive vectors are close.
\end{claim} \begin{proof}
Recall that $|\hat{{\cal{S}}}|=2^\omega$, hence we can map each $a_j\in \hat{\cal{S}}$ to a distinct $\omega$-bit binary number (or, more precisely, a binary string) in $\{0,1\}^{\omega}$. Let $\xi:\hat{\cal{S}}\rightarrow \{0,1\}^{\omega}$ be an arbitrary one-to-one mapping; then we can define an extended mapping $\xi':{\hat{\cal{S}}}^{\beta+1}\rightarrow \{0,1\}^{(\beta+1)\omega}$ that maps $\vea$ to the $(\beta+1)\omega$-bit binary string $\overline{\xi(\vea[0])\xi(\vea[1])\cdots\xi(\vea[{\beta}])}_2$. If we can order all $(\beta+1)\omega$-bit binary strings such that any two adjacent strings differ by exactly one bit, then the preimages of these binary strings under $\xi'$ give a sequence of $\vea$'s such that adjacent vectors are close.
Now we prove the following statement: for any $n\in \mathbb{Z}$, $n\ge 2$, and an arbitrary string $b\in \{0,1\}^n$, all binary strings of $\{0,1\}^n$ can be ordered in a sequence starting with $b$ such that any two adjacent strings differ by exactly one bit. We show this by induction on $n$. The statement is clearly true for $n=2$. Suppose it is true for all $n\le n'$; we prove that it also holds for $n=n'+1$. Consider the first bit of $b$, which can be $0$ or $1$. Assume it is $1$ (the case of $0$ can be proved in a similar way); then $b=1b_1$ for some $b_1\in \{0,1\}^{n'}$. By the induction hypothesis, all binary strings of $\{0,1\}^{n'}$ can be ordered in a sequence starting with $b_1$ such that any two adjacent strings differ by exactly one bit. Let such a sequence be $b_1,b_2,\cdots,b_{2^{n'}}$; then all binary strings of $\{0,1\}^{n'+1}$ can be ordered as $1b_1,1b_2,\cdots,1b_{2^{n'}},0b_{2^{n'}}, 0b_{2^{n'}-1},\cdots,0b_1$. Hence, the statement is true, and Claim~\ref{claim:1} follows. \end{proof}
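The ordering constructed in this proof is the classic reflected Gray code; a minimal sketch (re-rooting the standard code at an arbitrary start string $b$ via XOR, which preserves adjacent Hamming distance 1):

```python
# Reflected Gray code g(i) = i ^ (i >> 1) visits all n-bit strings with
# adjacent Hamming distance 1; XOR-ing every element with `start`
# re-roots the sequence so that it begins at `start`.
def gray_sequence(n, start=0):
    return [(i ^ (i >> 1)) ^ start for i in range(2 ** n)]

seq = gray_sequence(4, start=0b1011)
assert seq[0] == 0b1011
assert sorted(seq) == list(range(16))     # every 4-bit string appears once
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(seq, seq[1:]))
```

XOR with a fixed mask is a bijection on $\{0,1\}^n$ and does not change which bit two adjacent strings differ in, which is exactly the freedom the induction above exploits to start at $b$.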
Now consider an arbitrary ordering of the vectors of $\hat{\cal{S}}^{\beta+1}$ that satisfies Claim~\ref{claim:1}. Let the sequence be $\vea_1,\vea_2,...$, where $\vea_i=(\vea_i[0],\vea_i[1],\cdots,\vea_i[\beta])$ denotes the $i$-th vector in the sequence. Recall that each $\vea_i$ is a $(\beta+1)$-dimensional vector. Let ${prec}(i)$ be the unique coordinate where $\vea_i$ and $\vea_{i-1}$ differ. Similarly, let ${succ}(i)$ be the coordinate where $\vea_i$ and $\vea_{i+1}$ differ. By definition it holds that ${prec}(i+1)={succ}(i)$. For the first and last vectors in the sequence, we additionally define $\vea_1[prec(1)]=\vea_{|\hat{\cal{S}}^{\beta+1}|}[succ(|\hat{\cal{S}}^{\beta+1}|)]=0$. Let $\gamma=\beta+8$ and recall the injection $g:\{0,1,\cdots,\beta\}\rightarrow {\cal{S}}_{d}$. We define $\sigma$ such that \begin{align*} &\sigma(i)=\vea_i\cdot\vex+\vea_i[{prec(i)}]x^{\beta+1}+\vea_i[{succ(i)}]x^{\beta+2}+g\left(prec(i)\right) x^{\beta+5}+g\left(succ(i)\right) x^{\beta+6}, \quad \text{if $i$ is odd}; \\ &\sigma(i)=\vea_i\cdot\vex+\vea_i[{prec(i)}]x^{\beta+3}+\vea_i[{succ(i)}]x^{\beta+4}+g\left(prec(i)\right) x^{\beta+7}+g\left(succ(i)\right) x^{\beta+8}, \quad \text{if $i$ is even}, \end{align*} where $\vex=(1,x,x^2,\cdots,x^\beta)$, noting that $prec(i),succ(i)\le \beta<x$. Also, remark that all coefficients in the polynomial expression belong to ${\cal{S}}_d$ (except for the boundary coefficients defined to be $0$). Given that $x=5d+1$ and $\beta=\frac{\log N}{\log d} \left(1+\Theta\left(\sqrt{\frac{1}{\log d}}\right)\right)$, one can verify that \[\sigma(i)\le (5d+1)^{\beta+9}=e^{(\beta+9)\log(5d+1)}\le e^{\left(\frac{\log N}{\log d}+{\mathcal{O}}\left(\frac{\log N}{(\log d)^{3/2}}\right)\right)(\log d+{\mathcal{O}}(1))}\le N'= N^{1+O(\frac{1}{\sqrt{\log \log N}})},\] for any $i\le N$, hence \textbf{Properties 1} and \textbf{2} of Lemma~\ref{lemma:uniquesum-2} hold.
Consider the equation $h\cdot \sigma(i)=\sigma(i_1)+\sigma(i_2)+\cdots+\sigma(i_k)$ for $k,h\le 5$. The right-hand side of this equation can be expressed as $\sigma(i_1)+\sigma(i_2)+\cdots+\sigma(i_k) = \sum_{j=0}^{\gamma}b_jx^j$ for some coefficients~$b_j$. Notice that each coefficient $\vea_i[j]$ belongs to $\hat{\cal{S}}\subseteq {\cal{S}}_d$ and is hence at most $d<x/5$; since each $b_j$ is a sum of at most $k\le 5$ such coefficients, we have $0\le b_j <x$, so no carries occur. An analogous statement holds for the left-hand side. Hence the coefficients of terms of the same degree must coincide, and we have
$h\cdot \vea_i[j]=\sum_{t=1}^k \vea_{i_t}[j]$ for all $0\le j\le \beta.$ According to Lemma~\ref{lemma:uniquesum-1}, the only solution to the above is $k=h$ and $i_1=i_2=\cdots=i_k=i$, hence \textbf{Property 3} is true.
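The coefficient-comparison step rests on base-$x$ positional uniqueness; a small sketch with hypothetical digit vectors:

```python
# When every coefficient is below x, a number determines its base-x digits
# uniquely, and adding a few such numbers adds digits without carrying.
def digits(value, x, length):
    out = []
    for _ in range(length):
        value, r = divmod(value, x)
        out.append(r)
    return out

x = 26
a, b = [3, 0, 5], [1, 4, 2]                     # digits below x // 5
na = sum(t * x**j for j, t in enumerate(a))
nb = sum(t * x**j for j, t in enumerate(b))
assert digits(na, x, 3) == a                    # digits are recovered exactly
assert digits(na + nb, x, 3) == [s + t for s, t in zip(a, b)]  # no carries
```

This is why an equation between sums of at most five $\sigma$-values splits into independent equations, one per degree.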
It remains to prove \textbf{Property 4}. We suppose $i$ is odd in the following; the case of $i$ being even can be proved analogously. Consider $\sigma(i)+\sigma(i+1)$ which is equal to
\begin{eqnarray*} &&\sum_{j=0}^{\beta}b_jx^j+\vea_i[{prec(i)}]x^{\beta+1}+\vea_i[{succ(i)}]x^{\beta+2}+\vea_{i+1}[{prec(i+1)}]x^{\beta+3}+\vea_{i+1}[{succ(i+1)}]x^{\beta+4}\\ &+&g\left({prec(i)}\right)x^{\beta+5}+g\left({succ(i)}\right)x^{\beta+6}+g\left({prec(i+1)}\right)x^{\beta+7}+g\left({succ(i+1)}\right)x^{\beta+8}. \end{eqnarray*}
As before, the coefficients of terms of the same degree must coincide. We know that $\vea_i$ and $\vea_{i+1}$ differ only at coordinate $succ(i)=prec(i+1)$, hence $b_j=2\vea_i[j]$ for $j\neq succ(i)$. If $\sigma(i_1)+\sigma(i_2)=\sigma(i)+\sigma(i+1)$, then $\vea_{i_1}[j]+\vea_{i_2}[j]=2\vea_i[j]$ for $j\neq succ(i)$, and by the fact that $\vea_k[j]\in{\cal{S}}_d$ it must hold that \begin{eqnarray} \vea_{i_1}[j]=\vea_{i_2}[j]=\vea_i[j] \quad \text{for all }j\neq succ(i). \label{eq:1} \end{eqnarray}
Now consider the $succ(i)$-th coordinate. We know that among $i_1$ and $i_2$ one is even and one is odd, for otherwise in $\sigma(i_1)+\sigma(i_2)$ either the coefficients of $x^{\beta+1}$ and $x^{\beta+2}$ are 0, or the coefficients of $x^{\beta+3}$ and $x^{\beta+4}$ are 0. In either case, this means that $\vea_i$ or $\vea_{i+1}$ has a 0 coefficient, which is a contradiction since $\mathcal{S}_d\subseteq \mathbb{Z}_d$ contains only positive integers. Let $\{i_o,i_e\}=\{i_1,i_2\}$, where $i_o$ is odd and $i_e$ is even.
Then from $\sigma(i_1)+\sigma(i_2)=\sigma(i)+\sigma(i+1)$, we have
\begin{eqnarray*} && \vea_{i_o}[{prec(i_o)}]x^{\beta+1}+\vea_{i_o}[{succ(i_o)}]x^{\beta+2}+\vea_{i_e}[{prec(i_e)}]x^{\beta+3}+\vea_{i_e}[{succ(i_e)}]x^{\beta+4}+\\ &+&g\left({prec(i_o)}\right)x^{\beta+5}+g\left({succ(i_o)}\right)x^{\beta+6}+g\left({prec(i_e)}\right)x^{\beta+7}+g\left({succ(i_e)}\right)x^{\beta+8}\\ &=&\vea_i[{prec(i)}]x^{\beta+1}+\vea_i[{succ(i)}]x^{\beta+2}+\vea_{i+1}[{prec(i+1)}]x^{\beta+3}+\vea_{i+1}[{succ(i+1)}]x^{\beta+4}+\\ &+&g\left({prec(i)}\right)x^{\beta+5}+g\left({succ(i)}\right)x^{\beta+6}+g\left({prec(i+1)}\right)x^{\beta+7}+g\left({succ(i+1)}\right)x^{\beta+8}.
\end{eqnarray*} Now we can deduce that $\vea_{i_o}[{succ(i_o)}]=\vea_i[{succ(i)}]$ and $succ(i_o)=succ(i)$ (since $g$ is an injection). Using Equation~\eqref{eq:1} we conclude that $\vea_{i_o}=\vea_i$. Similarly, $\vea_{i+1}[prec(i+1)]=\vea_{i_e}[prec(i_e)]$, $prec(i+1)=prec(i_e)$. As $succ(i)=prec(i+1)$, Equation~\eqref{eq:1} yields that $\vea_{i_e}=\vea_{i+1}$.
Each vector in the sequence is unique, so we know $i_o=i$ and $i_e=i+1$. Using that $i_1\le i_2$ we obtain that $i_1=i$ and $i_2=i+1$. Hence, \textbf{Property 4} is proved. Lemma~\ref{lemma:uniquesum-2} follows. \end{proof}
With Lemma~\ref{lemma:uniquesum-2}, we are ready for the main result of this section.
\begin{lemma}\label{lemma:linked-sum} Let $\tau$ be an arbitrary permutation of $\ensuremath{\mathbb{Z}}_n$. Then there exists a set ${\cal{S}}=\{s_1,s_2,\cdots,s_n\}$ of positive integers together with an auxiliary set $E=\bigcup_{i=1}^nE_i$ of positive integers such that: \begin{compactitem} \item All integers in ${\cal{S}}\cup E$ are bounded by $n^{1+O(\frac{1}{\sqrt{\log\log n}})}$; \item $E_i=\{e_{i,1},e_{i,2},\cdots,e_{i,\omega}\}$ for all $i$, where $\omega=O(\frac{\log n}{\log\log n})$; \item $E_i\cap E_{i'}=\emptyset$ for any $i'\neq i$; \item For every $i\in\ensuremath{\mathbb{Z}}_n$ and $k=\tau(i)$, each sum $s_i+e_{i,1}$, $e_{i,1}+e_{i,2}$, $\cdots$, $e_{i,\omega-1}+e_{i,\omega}$, $e_{i,\omega}+s_k$ is unique, that is, there is no other pair in ${\cal{S}}\cup E$ that adds up to the same value. \end{compactitem} In particular, all these properties are satisfied by setting $s_i=\sigma(i)$ where $\sigma:\ensuremath{\mathbb{Z}}_N\rightarrow \ensuremath{\mathbb{Z}}_{N'}$ is the function specified in Lemma~\ref{lemma:uniquesum-2} by taking $N=n$. \end{lemma}
We start with a natural proof idea. Recall Lemma~\ref{lemma:uniquesum-2}, where $\sigma(i)=(\vea_i[0],\vea_i[1],\cdots,\vea_i[{\gamma}])\cdot (1,x,\cdots,x^{\gamma})=\vea_i\cdot \vex$ with $\vea_i[j]<x/5$; hence, when we add two values, say $\sigma(i)+\sigma(k)$, we can directly add the vectors coordinate-wise, obtaining $\vea_i[j]+\vea_k[j]$ in each coordinate $j$. Given $i$ and $k$, how can we guarantee that the equation $\vea_i+\vea_k=(\vea_i[0]+\vea_k[0],\vea_i[1]+\vea_k[1],\cdots,\vea_i[{\gamma}]+\vea_k[{\gamma}])=\vea_{\ell}+\vea_r$ has the unique solution $\{\ell,r\}=\{i,k\}$? A simple observation is that, since $\vea_i[j]\in {\cal{S}}_d$ (by \textbf{Property 2} of Lemma~\ref{lemma:uniquesum-2}) and ${\cal{S}}_d$ satisfies Lemma~\ref{lemma:uniquesum-1}, $2\vea_i[j]$ can only be expressed as $\vea_i[j]+\vea_i[j]$, and $\vea_i[j]$ can only be expressed as $\vea_i[j]+0$. Consequently, we may lift the dimension by writing $\sigma(i)=(\vea_i[0],\vea_i[1],\cdots,\vea_i[{\gamma}],0)\cdot (1,x,\cdots,x^{\gamma},x^{\gamma+1})$, and consider the following sequence of numbers:
\small \begin{align*} & & (&\vea_i[0],&\vea_i[1],\quad\cdots,\quad&\vea_i[{\gamma-1}],&\vea_i[{\gamma}], \quad&0) \\ &\rightarrow& (&0,&\vea_i[1],\quad\cdots,\quad&\vea_i[{\gamma-1}],&\vea_i[{\gamma}], \quad&\vea_k[{0}])\\ &\rightarrow& (&\vea_k[0],&\vea_i[1],\quad\cdots,\quad&\vea_i[{\gamma-1}],&\vea_i[{\gamma}], \quad&0)\\ &\rightarrow& (&\vea_k[0],&0,\quad\cdots,\quad&\vea_i[{\gamma-1}],&\vea_i[{\gamma}], \quad&\vea_k[{1}])\\ &\rightarrow& (&\vea_k[0],&\vea_k[1],\quad\cdots,\quad&\vea_i[{\gamma-1}], &\vea_i[{\gamma}], \quad&0)\\ &\rightarrow& &\cdots\\ &\rightarrow& (&\vea_k[0],&\vea_k[1],\quad\cdots,\quad&\vea_k[{\gamma-1}], &0, \quad&\vea_k[{\gamma}])\\ &\rightarrow& (&\vea_k[0],&\vea_k[1],\quad\cdots,\quad&\vea_k[{\gamma-1}], &\vea_k[{\gamma}], \quad&0) \end{align*} \normalsize
It is easy to verify that the sum of any two adjacent vectors in the above sequence is unique, which gives possible values for $e_{i,j}$'s. Unfortunately, numbers constructed in this way do not necessarily satisfy that $E_i\cap E_{i'}=\emptyset$. In particular, there might exist some pair $i',k'$ with $k'=\tau(i')$ where $\vea_i[j]=\vea_{i'}[j]$ for $j\le \gamma-2$, and $\vea_k[j]=\vea_{k'}[j]$ for $j\ge \gamma -1$. In this case, we have $$(\vea_{i'}[0],\vea_{i'}[1],\cdots,\vea_{k'}[{\gamma-1}], \vea_{k'}[{\gamma}], 0)=(\vea_i[0],\vea_i[1],\cdots,\vea_k[{\gamma-1}], \vea_k[{\gamma}], 0),$$ violating $E_i\cap E_{i'}=\emptyset$.
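Setting aside that issue for a moment, the stepping pattern of the sequence above can be sketched as a walk that parks each new coordinate in a spare last slot (an illustrative sketch; `linking_walk` is hypothetical and ignores the $\hat{\sigma}'$-coordinate and shuffler machinery introduced below):

```python
# Walk from (a, 0) to (c, 0): each double-step parks c[j] in the spare last
# slot, then installs it at coordinate j, so adjacent vectors share all but
# two coordinates, one of which is 0 in exactly one of the two vectors.
def linking_walk(a, c):
    v = list(a) + [0]
    walk = [tuple(v)]
    for j in range(len(a)):
        v[-1], v[j] = c[j], 0        # park c[j] in the spare slot
        walk.append(tuple(v))
        v[j], v[-1] = c[j], 0        # install it and clear the spare slot
        walk.append(tuple(v))
    return walk

w = linking_walk((3, 5, 7), (4, 6, 9))
assert w[0] == (3, 5, 7, 0) and w[-1] == (4, 6, 9, 0)
assert len(w) == 2 * 3 + 1
```

The fix developed next replaces the raw target coordinates $\vea_k[j]$ by shuffler images and appends a step counter $\hat{\sigma}'(\cdot)$, precisely to make the intermediate vectors, and hence the $e_{i,j}$'s, distinct across different pairs.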
How can we construct unique $e_{i,j}$'s? Towards this, we consider all the one-to-one mappings from ${\cal{S}}^{\gamma+1}_d$ to itself. Under composition of functions, these mappings form a group $Aut({\cal{S}}^{\gamma+1}_d)$. We are interested in the special mapping in $Aut({\cal{S}}^{\gamma+1}_d)$ that maps each $\vea_i$ to $\vea_k$ (which corresponds to the permutation $\tau$). We show that any mapping in $Aut({\cal{S}}^{\gamma+1}_d)$, and hence this special mapping, can be decomposed into a sequence of simple mappings. More precisely, we consider any finite set ${\cal{M}}$ and the group $Aut({\cal{M}}^{\upsilon})$ of one-to-one mappings from ${\cal{M}}^{\upsilon}$ to itself. We call a mapping in $Aut({\cal{M}}^{\upsilon})$ an $h$-shuffler if it only changes the $h$-th coordinate of the input vector, i.e., an $h$-shuffler $f$ satisfies that for any $\vey\in {\cal{M}}^{\upsilon}$, $$f(\vey)=f(\vey[0],\vey[1],\cdots,\vey[\upsilon-1])=(\vey[0],\cdots,\vey[h-1],z,\vey[h+1],\cdots,\vey[\upsilon-1]),$$ for some $z\in {\cal{M}}$ (depending on $\vey$). We show the following group-theoretic lemma, which states that any mapping of $Aut({\cal{M}}^{\upsilon})$ can be decomposed into $2\upsilon$ $h$-shufflers; see Appendix~\ref{appsubsec:shuffle} for the proof. Our $e_{i,j}$'s can be obtained from these $h$-shufflers.
\begin{lemma}\label{lemma:shuffle-1}
Let ${\cal{M}}$ be a finite set, $\upsilon\in\ensuremath{\mathbb{Z}}^+$ and $Aut({\cal{M}}^{\upsilon})$ be the group of all one-to-one mappings from ${\cal{M}}^{\upsilon}$ to itself, where the group operation is function composition, denoted by $\circ$. For any $\pi\in Aut({\cal{M}}^{\upsilon})$, there exist $h$-shufflers $f_h,\hat{f}_h\in Aut({\cal{M}}^{\upsilon})$ for every $0\le h\le \upsilon-1$ such that $F(\vey)=\hat{F}(\pi(\vey))$ for any $\vey\in {\cal{M}}^{\upsilon}$, where $F=f_{\upsilon-1}\circ f_{\upsilon-2}\circ \cdots\circ f_{0}$ and $\hat{F}=\hat{f}_{\upsilon-1} \circ \hat{f}_{\upsilon-2}\circ \cdots\circ \hat{f}_{0}$. Furthermore, the $f_h$'s and $\hat{f}_h$'s can be constructed in time polynomial in $|{\cal{M}}^\upsilon|$. \end{lemma}
Lemma~\ref{lemma:shuffle-1} implies a decomposition of $\pi$ into $h$-shufflers, i.e., $\pi=\hat{f}_0^{-1}\circ \hat{f}_1^{-1}\circ\cdots\circ \hat{f}_{\upsilon-1}^{-1}\circ f_{\upsilon-1}\circ\cdots\circ f_0$.
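For intuition, the lemma can be verified exhaustively in the smallest non-trivial case (a brute-force sketch for $|{\cal{M}}|=2$, $\upsilon=2$; the polynomial-time construction in Appendix~\ref{appsubsec:shuffle} proceeds differently):

```python
from itertools import product, permutations

# Brute-force check of the shuffler lemma for M = {0,1}, v = 2: every
# permutation pi of M^2 satisfies f1(f0(y)) = g1(g0(pi(y))) for some
# h-shufflers f0, g0 (coordinate 0) and f1, g1 (coordinate 1).
M = (0, 1)
vecs = list(product(M, repeat=2))

def shufflers(h):
    # an h-shuffler applies, for each value of the other coordinate, an
    # independent permutation of M to coordinate h
    out = []
    for p0, p1 in product(permutations(M), repeat=2):
        perm = {0: p0, 1: p1}
        out.append({v: tuple(perm[v[1 - h]][t] if j == h else t
                             for j, t in enumerate(v)) for v in vecs})
    return out

S0, S1 = shufflers(0), shufflers(1)

def decomposable(pi):
    return any(all(f1[f0[v]] == g1[g0[pi[v]]] for v in vecs)
               for f0, f1, g0, g1 in product(S0, S1, S0, S1))

assert all(decomposable(dict(zip(vecs, p))) for p in permutations(vecs))
```

The search confirms every one of the $4!$ permutations decomposes; the general statement replaces this search by an explicit grid-routing argument.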
With Lemma~\ref{lemma:shuffle-1}, we are ready to prove Lemma~\ref{lemma:linked-sum}. We first set the value of all parameters. Towards this, we will apply Lemma~\ref{lemma:uniquesum-2} twice.
First, we apply Lemma~\ref{lemma:uniquesum-2} by taking $N=n$. We obtain $\sigma: \ensuremath{\mathbb{Z}}_n\rightarrow \ensuremath{\mathbb{Z}}_{n'}$ where $n'=n^{1+{\mathcal{O}}(1/\sqrt{\log\log n})}$, $\sigma(i)=\sum_{j=0}^\gamma \vea_i[j]x^j$ for $\vea_i[j]\in {\cal{S}}_d$, where $\gamma=\lceil\frac{\log n}{\log\log n}\rceil+{\mathcal{O}}(\frac{\log n}{(\log\log n)^{3/2}})$, $d=\lceil e^{(\sqrt{\log\log n}+7)^2}\rceil=e^{{\mathcal{O}}(\sqrt{\log\log n})}\cdot \log n $ and $x=5d+1$. Except for $N$, all the other parameters, including $n',\sigma,x,d,\gamma$ and the $\vea_i$'s, are fixed throughout the remainder of this section.
Next, we apply Lemma~\ref{lemma:uniquesum-2} again by setting $N=4\gamma+4$ where $\gamma$ takes the value we determined above. By doing so we obtain another injection $\sigma'$. We have the following simple observation.
\begin{observation} If $\ell\le 4\gamma +4$, then $\sigma'(\ell)=o(\log^2 n)<x^2$. \end{observation} \begin{proof} Note that $\gamma=O(\frac{\log n}{\log\log n})$. By Lemma~\ref{lemma:uniquesum-2}, $\sigma'(\ell)\le (4\gamma+4)^{1+O(\frac{1}{\sqrt{\log\log (4\gamma+4)}})}=o(\gamma^2)=o(\log^2n)$. \end{proof}
Next, we apply Lemma~\ref{lemma:shuffle-1}. In the remainder of this paper, any $\upsilon$-dimensional vector~$\vecc$ represents the number given by the polynomial expression $\sum_{i=0}^{\upsilon-1}\vecc[i]x^i$; vectors and polynomial expressions are used interchangeably. Given our permutation $\tau$, we define $\hat{\tau}$ as a one-to-one mapping that maps each vector $\vea_i$ to $\vea_k$, or equivalently, maps $\sigma(i)$ to $\sigma(k)$, where $k=\tau(i)$. Notice that the $\vea_i$'s form a subset of ${\cal{S}}_d^{\gamma+1}$, so currently $\hat{\tau}$ is only defined on this subset. We extend $\hat{\tau}$ to ${\cal{S}}_d^{\gamma+1}$ by setting $\hat{\tau}(\vecc)=\vecc$ for every $\vecc\in {\cal{S}}_d^{\gamma+1}$ that is not equal to any $\vea_i$. Hence, $\hat{\tau} \in Aut({\cal{S}}_d^{\gamma+1})$. According to Lemma~\ref{lemma:shuffle-1}, we can obtain $h$-shufflers $f_h$ and $\hat{f}_h$ for all $0\le h\le \gamma$ such that $f_{\gamma}\circ f_{\gamma-1}\circ\cdots\circ f_{0}=\hat{f}_{\gamma}\circ\hat{f}_{\gamma-1}\circ\cdots\circ \hat{f}_0\circ \hat{\tau}$.
For ease of notation, define $F_h=f_h\circ f_{h-1}\circ\cdots\circ f_0$ and $\hat{F}_h=\hat{f}_h\circ \hat{f}_{h-1}\circ\cdots\circ \hat{f}_0$. As $h$-shufflers only change the $h$-th coordinate, we have the following observation.
\begin{observation}\label{obs:F-hat} The following statements are true: \begin{itemize}
\item For any $0\le h\le \gamma$ $$(F_{\gamma}(\vea_i))[h]=(F_{\gamma-1}(\vea_i))[h]=\cdots=(F_h(\vea_i))[h],$$
$$(\hat{F}_{\gamma}(\vea_i))[h]=(\hat{F}_{\gamma-1}(\vea_i))[h]=\cdots=(\hat{F}_h(\vea_i))[h];$$
\item For any $0\le h\le \gamma$, $$(F_{h-1}(\vea_i))[h]=(F_{h-2}(\vea_i))[h]=\cdots=(F_{0}(\vea_i))[h]=\vea_i[h],$$ $$(\hat{F}_{h-1}(\vea_i))[h]=(\hat{F}_{h-2}(\vea_i))[h]=\cdots=(\hat{F}_{0}(\vea_i))[h]=\vea_i[h].$$ \end{itemize}
\end{observation}
Now we are ready to construct a unique linking sequence for every $i$. Intuitively, since $\sigma'(h)<x^2$, $\sigma'(h)$ occupies two \lq\lq bits\rq\rq{} in the polynomial (that is, two coordinates). With this in mind, we let $\hat{\sigma}'(i)=(0,\sigma'(i))$. Moreover, we define $\hat{\sigma}'(0)=(0,0)$. Now consider the following two sequences, starting from $\vea_i$ and $\vea_k$, respectively, and ending at the same vector: \small \begin{align*} &&(&\vea_i[0],&\vea_i[1],\quad\cdots,\quad&\vea_i[{\gamma-1}], &\vea_i[{\gamma}],\quad&0,&\hat{\sigma}'(0)) &:=&\veb^0_{i}\\&\rightarrow& (&0,&\vea_i[1],\quad\cdots,\quad &\vea_i[{\gamma-1}],&\vea_i[{\gamma}], \quad&(F_0(\vea_i))[0], &\hat{\sigma}'(1))&:=&\veb^1_{i}\\ &\rightarrow& (&(F_0(\vea_i))[0],&\vea_i[1],\quad\cdots,\quad&\vea_i[{\gamma-1}],&\vea_i[{\gamma}], \quad&0, &\hat{\sigma}'(2))&:=&\veb^2_{i}\\ &\rightarrow& (&(F_1(\vea_i))[0],&0,\quad\cdots,\quad&\vea_i[{\gamma-1}],&\vea_i[{\gamma}], \quad&(F_1(\vea_i))[1], &\hat{\sigma}'(3))&:=&\veb^3_i\\ &\rightarrow& (&(F_1(\vea_i))[0],&(F_1(\vea_i))[1],\quad\cdots,\quad&\vea_i[{\gamma-1}], &\vea_i[{\gamma}], \quad&0, &\hat{\sigma}'(4))&:=&\veb^4_i\\ &\rightarrow& &\cdots\\ &\rightarrow& (&(F_{\gamma}(\vea_i))[0],&(F_{\gamma}(\vea_i))[1],\quad\cdots,\quad&(F_{\gamma}(\vea_i))[\gamma-1], &0, \quad&(F_{\gamma}(\vea_i))[\gamma], &\hat{\sigma}'(2\gamma+1)) &:=&\veb^{2\gamma+1}_i\\ &\rightarrow& (&(F_{\gamma}(\vea_i))[0],&(F_{\gamma}(\vea_i))[1],\quad\cdots,\quad&(F_{\gamma}(\vea_i))[\gamma-1], &(F_{\gamma}(\vea_i))[\gamma], \quad&0, &\hat{\sigma}'(2\gamma+2))&:=&\veb^{2\gamma+2}_i \end{align*}\normalsize and \small \begin{align*} &&(&\vea_k[0],&\vea_k[1],\quad\cdots,\quad&\vea_k[{\gamma-1}], &\vea_k[{\gamma}],\quad&0,&\hat{\sigma}'(4\gamma+4)) &:=&\hat{\veb}^0_k\\&\rightarrow& (&0,&\vea_k[1],\quad\cdots,\quad &\vea_k[{\gamma-1}],&\vea_k[{\gamma}], \quad&(\hat{F}_0(\vea_k))[0], &\hat{\sigma}'(4\gamma+3)) &:=&\hat{\veb}^1_k\\ &\rightarrow& 
(&(\hat{F}_0(\vea_k))[0],&\vea_k[1],\quad\cdots,\quad&\vea_k[{\gamma-1}],&\vea_k[{\gamma}], \quad&0, &\hat{\sigma}'(4\gamma+2))&:=&\hat{\veb}^2_k\\ &\rightarrow& (&(\hat{F}_1(\vea_k))[0],&0,\quad\cdots,\quad&\vea_k[{\gamma-1}],&\vea_k[{\gamma}], \quad&(\hat{F}_1(\vea_k))[1], &\hat{\sigma}'(4\gamma+1))&:=&\hat{\veb}^3_k\\ &\rightarrow& (&(\hat{F}_1(\vea_k))[0],&(\hat{F}_1(\vea_k))[1],\quad\cdots,\quad&\vea_k[{\gamma-1}], &\vea_k[{\gamma}], \quad&0, &\hat{\sigma}'(4\gamma))&:=&\hat{\veb}^4_k\\ &\rightarrow& &\cdots\\ &\rightarrow& (&(\hat{F}_{\gamma}(\vea_k))[0],&(\hat{F}_{\gamma}(\vea_k))[1],\quad\cdots,\quad&(\hat{F}_{\gamma}(\vea_k))[\gamma-1], &0, \quad&(\hat{F}_{\gamma}(\vea_k))[\gamma], &\hat{\sigma}'(2\gamma+3))&:=&\hat{\veb}^{2\gamma+1}_k\\ &\rightarrow& (&(\hat{F}_{\gamma}(\vea_k))[0],&(\hat{F}_{\gamma}(\vea_k))[1],\quad\cdots,\quad&(\hat{F}_{\gamma}(\vea_k))[\gamma-1], &(\hat{F}_{\gamma}(\vea_k))[\gamma], \quad&0, &\hat{\sigma}'(2\gamma+2))&:=&\hat{\veb}^{2\gamma+2}_k \end{align*}\normalsize
Consider each vector in the above sequence, say, $\veb^2_{i}$. According to Observation~\ref{obs:F-hat}, we know \begin{align*} &&(&(F_0(\vea_i))[0],&\vea_i[1],\quad&\cdots,&\vea_i[{\gamma-1}],\quad&\vea_i[{\gamma}], &0, \quad&\hat{\sigma}'(2))\\ &=&(&(F_0(\vea_i))[0],&(F_0(\vea_i))[1],&\cdots,&(F_0(\vea_i))[\gamma-1],\quad&(F_0(\vea_i))[\gamma], &0, \quad&\hat{\sigma}'(2)) \end{align*} More generally, it is easy to verify that every $\veb^{2j}_i$ is the concatenation of the vector $\left(F_{j-1}(\vea_i),0\right)$ and $\hat{\sigma}'(2j)$, and each $\veb^{2j+1}_i$ is the concatenation of $SW_j\left(\left(F_{j}(\vea_i),0\right)\right)$ and $\hat{\sigma}'(2j+1)$, where $SW_j$ is the one-to-one mapping that swaps the $j$-th and the $(\gamma+1)$-th (spare) coordinates of a vector. A similar statement holds for the $\hat{\veb}^{2j}_k$'s and $\hat{\veb}^{2j+1}_k$'s. Since the $SW_j$'s, $F_j$'s and $\hat{F}_j$'s are all one-to-one mappings, and $\sigma'$ is an injection, each of the vectors in the sequence above is unique. More precisely, we have the following. \begin{lemma}\label{lemma:veb-uniquesum-1} For any $0\le h\le 2\gamma+1$ and $1\le i\le n$, \begin{itemize}
\item If $\veb^h_{i}=\veb^{h'}_{i'}$, then $h=h'$ and $i=i'$;
\item If $\hat{\veb}^h_i=\hat{\veb}^{h'}_{i'}$, then $h=h'$ and $i=i'$. \end{itemize} \end{lemma}
Furthermore, by the fact that $F_{\gamma}=\hat{F}_{\gamma}\circ \hat{\tau}$ and $\vea_k=\hat{\tau}(\vea_i)$, we have the following observation: \begin{observation} $$\veb^{2\gamma+2}_i=\hat{\veb}^{2\gamma+2}_k.$$ \end{observation}
Next, we consider any two adjacent vectors in the above sequence. They differ in exactly three positions: the last coordinate (i.e., the $\hat{\sigma}'(j)$ part) and two other coordinates, at each of which one of the two vectors is $0$. All remaining coordinates, e.g., $(F_0(\vea_i))[0]$ in $\veb^2_{i}$ and $(F_1(\vea_i))[0]$ in $\veb^3_i$, are identical according to Observation~\ref{obs:F-hat}. This leads to the following lemma.
\begin{lemma}\label{lemma:veb-uniquesum-2} For any $0\le h\le 2\gamma+1$ and $1\le i\le n$, \begin{itemize}
\item If $\veb^h_{i}+\veb^{h+1}_i=\veb^{h'}_{i'}+\veb^{h''}_{i''}$ where $h'\le h''$, then $h'=h$, $h''=h+1$, $i'=i''=i$;
\item If $\hat{\veb}^h_i+\hat{\veb}^{h+1}_i=\hat{\veb}^{h'}_{i'}+\hat{\veb}^{h''}_{i''}$ where $h'\le h''$, then $h'=h$, $h''=h+1$, $i'=i''=i$. \end{itemize} \end{lemma} \begin{proof}
We prove the first statement, that is, $\veb^h_{i}+\veb^{h+1}_i=\veb^{h'}_{i'}+\veb^{h''}_{i''}$ implies $h'=h$, $h''=h+1$, $i'=i''=i$. The second statement can be proved in the same way.
We first consider the last coordinate of the summation $\veb^h_{i}+\veb^{h+1}_i=\veb^{h'}_{i'}+\veb^{h''}_{i''}$, which is $\sigma'(h)+\sigma'(h+1)=\sigma'(h')+\sigma'(h'')$. If $h=0$, according to \textbf{Property 3} of Lemma~\ref{lemma:uniquesum-2}, $1\times \sigma'(1)$ can only be expressed as $0+\sigma'(1)$, hence we have $h'=0$ and $h''=1$. If $h\ge 1$, according to \textbf{Property 4} of Lemma~\ref{lemma:uniquesum-2}, we have $h'=h$ and $h''=h+1$.
Consider the other coordinates of the equation $\veb^h_{i}+\veb^{h+1}_i=\veb^{h}_{i'}+\veb^{h+1}_{i''}$. On the left-hand side, each such coordinate is either $a$ or $2a$ for some $a\in {\cal{S}}_d$. Consider first the equation $a=b+c$ with $b,c\in \{0\}\cup {\cal{S}}_d$. By Lemma~\ref{lemma:uniquesum-1}, no two numbers in ${\cal{S}}_d$ add up to $a$, hence $b,c\in\{0,a\}$. Similarly, if $2a=b+c$ for $a\in {\cal{S}}_d$ and $b,c\in\{0\}\cup {\cal{S}}_d$, then $b$ and $c$ are both nonzero, for otherwise the linear equation $2y=y_1$ would admit a solution, contradicting the fact that ${\cal{S}}_d$ satisfies Lemma~\ref{lemma:uniquesum-1}. Hence $2a=b+c$ with $a,b,c\in {\cal{S}}_d$, and again by Lemma~\ref{lemma:uniquesum-1} we have $b=c=a$. We conclude that each coordinate of $\veb^{h}_{i'}$ and $\veb^{h+1}_{i''}$ must agree with the corresponding coordinate of $\veb^h_{i}$ and $\veb^{h+1}_i$, respectively. By Lemma~\ref{lemma:veb-uniquesum-1}, it follows that $i'=i''=i$.
\end{proof}
Using the same argument, we know that if $2\veb_i^{2\gamma+2}=\veb_{i'}^{h'}+\hat{\veb}_{i''}^{h''}$, then $\veb_i^{2\gamma+2}=\veb_{i'}^{h'}=\hat{\veb}_{i''}^{h''}$. Thus the following is also true. \begin{lemma}\label{lemma:veb-uniquesum-3} If $2\veb_i^{2\gamma+2}=\veb_{i'}^{h'}+\hat{\veb}_{i''}^{h''}$, then $h'=h''=2\gamma+2$ and $(i',i'')=(i,\tau(i))$. \end{lemma}
We are now ready to prove Lemma~\ref{lemma:linked-sum}. \begin{proof}[Proof of Lemma~\ref{lemma:linked-sum}]The sequence of $E_i$ that links $\sigma(i)$ and $\sigma(k)$ for $k=\tau(i)$ is exactly $\veb^0_{i}=\vea_i,\veb^1_{i},\veb^2_{i},\cdots,\veb^{2\gamma+2}_i=\hat{\veb}^{2\gamma+2}_k$, $\hat{\veb}^{2\gamma+1}_k$, $\cdots$, $\hat{\veb}^1_k,\hat{\veb}^0_k=\vea_k$ (recall that by a vector $\veb$ we mean the integer $\veb\cdot\vex$ with $x=5d+1=O(\log n)$). The uniqueness of each linking sequence is ensured by Lemma~\ref{lemma:veb-uniquesum-1} and Lemma~\ref{lemma:veb-uniquesum-2}. The largest number is bounded by $x^{\gamma+5}=n^{1+O(\frac{1}{\sqrt{\log\log n}})}$. Furthermore, $\omega=O(\gamma)=O(\frac{\log n}{\log\log n})$. Thus, all properties of Lemma~\ref{lemma:linked-sum} are satisfied. \end{proof}
\appendix
\section{Omitted Proofs in Section~\ref{sec:upper} - Algorithms AL\texorpdfstring{$_1$}{Lg}, AL\texorpdfstring{$_2$}{Lg}, AL\texorpdfstring{$_3$}{Lg}}\label{appsec:alg}
\subsection{Algorithm 1}\label{appsubsec:al1}
\begin{lemma}
Consider an instance after the preprocessing of Lemma~\ref{lemma:prepocessing}. For any $\epsilon>0$, there exists an algorithm \textrm{AL}$_{\ref{alg:1}}$ that returns a $(1+O(\epsilon))$-approximate solution for $P||\sum_i C_i^q$ and runs in time $(m/\epsilon)^{O(m)}$. \end{lemma}
The algorithm can be formulated as a dynamic program. For each $h\in \ensuremath{\mathbb{Z}}_n$, we create a set of states $\mathcal{F}_h$. A state $(h,L_1,\dots,L_m)$ belongs to $\mathcal{F}_h$ if it is possible to assign jobs $1,\dots,h$ to machines such that the total load on machine $i$ equals $L_i$ for all $i\in \ensuremath{\mathbb{Z}}_m$. Starting from $(0,\dots,0)\in \mathcal{F}_0$, every state in $\mathcal{F}_{h-1}$ gives rise to states in $\mathcal{F}_h$ by trying all possible assignments of job $h$. The solution is given by the state in $\mathcal{F}_n$ with the minimum objective, i.e., $\sum_{i=1}^m L_i^q$. Due to Lemma \ref{lemma:prepocessing}, we know that one of the optimal solutions has a load vector $({L_1^*},\dots,{L_m^*})\in \mathcal{F}_n$ satisfying $L_i^*\leq 2$ for all $i$. Clearly, the running time of Algorithm \ref{alg:1} is $O(m\sum_i |\mathcal{F}_i|)$. However, the total number of states stored during the dynamic programming can be pseudo-polynomial. Fortunately, using the framework of Woeginger \cite{woeginger1999does}, we are able to trim the state space to make it polynomial. More precisely, for any $\delta>0$ we construct $\hat{\mathcal{F}}_1,\dots,\hat{\mathcal{F}}_n$ satisfying the following properties: \begin{itemize}
\item the size of each $\hat{\mathcal{F}}_i$ is bounded by $(1/\delta)^{O(m)}$;
\item for each $(i,L_1,\dots,L_m)\in \mathcal{F}_i$, there exists $(i,\hat{L}_1,\dots,\hat{L}_m)\in \hat{\mathcal{F}}_i$ such that $L_j\leq \hat{L}_j \leq L_j(1+\delta)^i$. \end{itemize}
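The trimming step above can be sketched as follows (a toy Python illustration under our own simplifications: the function name, the grid-box key and the choice of representative per box are ours, not the paper's exact construction):

```python
import math

def trimmed_dp(jobs, m, q=2, delta=0.01):
    """Load-vector DP for P||sum_i C_i^q with Woeginger-style trimming:
    after each job, snap every load to a box of the geometric grid
    {0} u {(1+delta)^k} and keep one representative state per box,
    so the number of stored states stays (1/delta)^O(m)."""
    def box(state):
        return tuple(None if L == 0 else math.floor(math.log(L, 1 + delta))
                     for L in state)

    frontier = {(0.0,) * m}
    for p in jobs:
        kept = {}
        for state in frontier:
            for i in range(m):  # try assigning the current job to machine i
                s = list(state)
                s[i] += p
                t = tuple(s)
                k = box(t)
                if k not in kept or t > kept[k]:  # one representative per box
                    kept[k] = t
        frontier = set(kept.values())
    return min(sum(L ** q for L in s) for s in frontier)
```

On tiny instances with well-separated loads no two states share a box, so the sketch returns the exact optimum.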
In Algorithm \ref{alg:1}, we set the parameter $\delta=\epsilon/n$. Since $n=O(m/\epsilon)$ by Lemma~\ref{lemma:prepocessing}, the total running time is bounded by $(m/\epsilon)^{O(m)}$. Let $(n,\hat{L}_1,\dots,\hat{L}_m)$ be the state with the minimum objective in $\hat{\mathcal{F}}_n$, and $(n,L_1^*,\dots,L_m^*)$ be the state with the minimum objective in $\mathcal{F}_n$. Then $(n,\hat{L}_1,\dots,\hat{L}_m)$ is a $(1+O(\epsilon))$-approximate solution, since
\[ \sum_i \hat{L}_i^q \leq (1+\delta)^{nq}
\sum_i {L_i^*}^q \leq (1+2q\epsilon) OPT,\]
where $q$ is a fixed constant.
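The inequality above can be checked numerically; the snippet below is our own sanity check (not part of the algorithm) that $(1+\epsilon/n)^{nq}\le 1+2q\epsilon$ holds for small $\epsilon$ but fails for large $\epsilon$:

```python
def bound_ok(eps, n, q):
    """Check (1 + delta)^(n*q) <= 1 + 2*q*eps with delta = eps/n, the step
    that turns the per-round trimming loss into a (1 + O(eps)) factor."""
    delta = eps / n
    return (1 + delta) ** (n * q) <= 1 + 2 * q * eps
```

This also illustrates why the lemma requires $\epsilon$ to be sufficiently small: for, e.g., $\epsilon=1$ the left side approaches $e^{q}$, which exceeds $1+2q$.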
\begin{algorithm}[tb]
\caption{Pseudo Code Description of AL$_1$}
\label{alg:1}
\textbf{Input}: $I,\epsilon$\\
\textbf{Output}: $\min \{\sum_{i=1}^m \hat{L}_i^q: (\hat{L}_1,\dots,\hat{L}_m) \in \hat{\mathcal{F}}_n \}$
\begin{algorithmic}[1]
\STATE{$\delta=\epsilon/n$}
\STATE{$\Gamma = \{[0], [\epsilon,\epsilon(1+\delta)),[\epsilon(1+\delta),\epsilon(1+\delta)^2),\dots \}$}
\STATE{$\hat{\mathcal{F}}_0=\{(0,\dots,0)\}$ }
\FOR{$j=1$ to $n$}
\STATE {$\mathcal{F}'_j=\emptyset$}
\FORALL {$(L_1,\dots,L_m) \in \hat{\mathcal{F}}_{j-1}$}
\FORALL{$i\in[1,m]$ and $L_i+p_j\leq 2$}
\STATE{$\mathcal{F}'_j= \mathcal{F}'_j \cup (L_1,\dots,L_{i-1},L_i+p_j,L_{i+1},\dots,L_m)$}
\ENDFOR
\ENDFOR
\STATE{$\hat{\mathcal{F}}_j=\emptyset$}
\FORALL{$S\in \Gamma^m$}
\FOR{$i=1$ to $m$}
\STATE{$\hat{L}_i = \max \{L_i: (\dots,L_i,\dots)\in \mathcal{F}'_j\cap S \}$}
\ENDFOR
\STATE{$\hat{\mathcal{F}}_j = \hat{\mathcal{F}}_j \cup (\hat{L}_1,\dots,\hat{L}_m)$}
\ENDFOR
\ENDFOR
\end{algorithmic} \end{algorithm}
\subsection{Algorithm 2}\label{appsubsec:al2} We first recall Lemma~\ref{lemma:al2} and then present its proof. \begin{T1}
Consider an instance after the preprocessing of Lemma~\ref{lemma:prepocessing}. For any $\epsilon>0$, there exists an algorithm \textrm{AL}$_{2}$ that outputs a $(1+O(\epsilon))$-approximate solution for $P||\sum_i C_i^q$ within $m^{\tilde{O}(1/\sqrt{\epsilon})}$ time. \end{T1} \begin{proof} Based on Lemma \ref{thm:structure}, we design \textrm{AL}$_{2}$ as a dynamic program as follows.
We say that a vector $(i,v,u_1,u_2,\ldots,u_\tau)$ is a valid state if it is possible to assign the $u_h$ largest jobs in $\mathcal{G}_h$, for each $h$, to the first $i$ machines with the objective equal to $v$. In our algorithm, for each $i\in\{0,\ldots,m\}$, we construct a set $\mathcal{F}_i$ of valid states, starting from $(0,\dots,0)\in \mathcal{F}_0$. To construct the valid states in $\mathcal{F}_{i}$, we consider a state in $\mathcal{F}_{i-1}$ and try all possible assignments of jobs to machine $i$ that respect the ordering of jobs given by Lemma~\ref{thm:structure} and the load bound for each machine implied by Lemma~\ref{lemma:prepocessing}. Given the sets $\mathcal{F}_{0},\ldots,\mathcal{F}_{m}$, the answer is found as the state with the minimum objective in $\mathcal{F}_m$. In order to limit the number of states, we eliminate \emph{dominated} states: if two states $(i,v,u_1,\dots,u_\tau)$ and $(i,v',u_1,\dots,u_\tau)$ in $\mathcal{F}_i$ satisfy $v'<v$, then $(i,v,u_1,\dots,u_\tau)$ is dominated and we delete it from $\mathcal{F}_i$.
The overall running time of \textrm{AL}$_{2}$ can be bounded as follows. Recall that by Lemma~\ref{lemma:prepocessing}, $p_j\ge \epsilon$ and the load of each machine is at most 2. Hence, each state in $\mathcal{F}_{i-1}$ can give rise to at most $(2/\epsilon+1)^{\tau}$ (dominated or undominated) states in $\mathcal{F}_i$. Since the number of jobs in each set $\mathcal{G}_h$ is bounded by $2m/\epsilon$, the number of undominated states in $\mathcal{F}_{i}$ is at most $(2m/\epsilon+1)^{\tau}=m^{\tilde{O}(\sqrt{1/\epsilon})}$. Hence, each set $\mathcal{F}_i$ can be constructed in time $(2/\epsilon+1)^{\tau}m^{\tilde{O}(\sqrt{1/\epsilon})}=m^{\tilde{O}(\sqrt{1/\epsilon})}$, which implies the same bound for the overall running time of our dynamic programming algorithm. The lemma follows. \end{proof}
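The dominated-state dynamic program can be rendered compactly as follows (our own toy sketch, not the paper's implementation: `groups` holds each $\mathcal{G}_h$ sorted in non-increasing order, and the load cap $2$ stands in for the bound of Lemma~\ref{lemma:prepocessing}):

```python
import itertools

def dp_with_domination(groups, m, q=2, load_cap=2.0):
    """A state records how many of the largest jobs of each group are already
    placed; among states with the same u-vector, only the one with minimum
    objective survives (the others are dominated and deleted)."""
    tau = len(groups)
    frontier = {(0,) * tau: 0.0}  # u-vector -> best objective so far
    for _ in range(m):  # machines are filled one by one
        nxt = {}
        for u, v in frontier.items():
            # a machine takes the next consecutive jobs of each group,
            # respecting the ordering of jobs inside every group
            ranges = [range(len(groups[h]) - u[h] + 1) for h in range(tau)]
            for take in itertools.product(*ranges):
                load = sum(sum(groups[h][u[h]:u[h] + take[h]])
                           for h in range(tau))
                if load > load_cap:
                    continue
                u2 = tuple(u[h] + take[h] for h in range(tau))
                v2 = v + load ** q
                if u2 not in nxt or v2 < nxt[u2]:  # domination rule
                    nxt[u2] = v2
        frontier = nxt
    return frontier.get(tuple(len(g) for g in groups))
```

For two groups $\{1,1\}$ and $\{0.5,0.5\}$ on two machines, the sketch recovers the balanced schedule of load $1.5$ per machine.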
\subsection{Algorithm 3}\label{appsubsec:al3}
The goal of this subsection is to prove the following lemma. \begin{lemma}\label{lemma:bin-packing-scheduling}
Consider an instance after the preprocessing of Lemma~\ref{lemma:prepocessing}. For any $\epsilon>0$, there exists an algorithm that outputs a feasible schedule for a well-structured instance of $P||\sum_i C_i^q$ whose objective value is at most $OPT+O(\log^2 m)$, within $(1/\epsilon)^{O(1)}+n^{O(1)}$ time.
\end{lemma}
Recall Lemma~\ref{lemma:prepocessing} and the fact that an approximation scheme for well-structured instances also implies an approximation scheme for general instances. Given Lemma~\ref{lemma:bin-packing-scheduling} and the fact that $OPT\ge m$ for well-structured instances, the additive error satisfies $O(\log^2 m) \le \epsilon OPT$ whenever $m=\Omega((1/\epsilon)\log^2 (1/\epsilon))$. Hence, Theorem~\ref{thm:algorithm} is proved.
In the following, we present Algorithm $\ref{alg:3}$, which is the algorithm claimed in Lemma~\ref{lemma:bin-packing-scheduling}. Algorithm $\ref{alg:3}$ builds upon the well-known bin packing algorithm of Karmarkar and Karp \cite{karmarkar1982efficient}. Given an instance $I$, let $m(I)$ denote the number of machines and $\text{size}(I)$ the total processing time of jobs. To exclude the trivial case, note that if $I$ consists of at most $m(I)$ jobs, then by convexity the optimal solution assigns each job to a separate machine. Hence, in the following we assume without loss of generality that there are more than $m(I)$ jobs.
We give a very high-level description. The scheduling problem can be interpreted as a bin packing problem where the bin number is a constraint, and the objective is to minimize the cost of bins instead of minimizing the number of bins (where the cost of a bin is its load to the power of $q$). Under such an interpretation, we are able to iteratively apply the {\it harmonic grouping scheme} to round job processing times and establish a {\it configuration LP} for the rounded instance. Based on the extreme point solution of the configuration LP, we assign jobs to roughly $m(I)/2$ machines and continue with the remaining jobs and machines.
As discussed in Lemma \ref{lemma:prepocessing}, we assume here that the processing times of all jobs in $I$ lie in $[\epsilon,1]$. Suppose there are $\chi$ distinct job processing times, with $b_1$ jobs of processing time $p_1$, $b_2$ jobs of processing time $p_2$, \dots, $b_\chi$ jobs of processing time $p_\chi$. Any subset of jobs that can be scheduled on a single machine can be characterized by a $\chi$-tuple $\vet_j=(t_{1j},t_{2j},\cdots,t_{\chi j})$, where $t_{ij}$ indicates the number of jobs of processing time $p_i$ on this machine. We call any such $\vet_j$ with $t_{ij}\le b_i$ for every $i$ a configuration. Let $N$ denote the number of configurations, and let $\vet_1,\vet_2,\cdots,\vet_N$ be a complete enumeration of them.
We establish a configuration integer program for instance $I$ as follows. We introduce a variable $x_j$ for each configuration $\vet_j$, indicating the number of machines scheduled according to $\vet_j$. Consequently, all machines of configuration $\vet_j$ together accommodate $t_{ij}x_j$ jobs of processing time $p_i$. We define the load, or total processing time, of configuration $\vet_j$ as $L(\vet_j)=\sum_{i=1}^\chi t_{ij}p_i$, and its cost as $v_j=(\sum_{i=1}^\chi t_{ij}p_i)^q$.
\begin{subequations} \begin{eqnarray} \text{Conf-IP($I$)}: &\min& \sum_{j=1}^N {v}_jx_j \nonumber\\ &s.t.& \sum_{j=1}^N x_j= m(I) \label{eq:11}\\ && \sum_{j=1}^N t_{ij}x_j\ge b_i, \quad i=1,2,\cdots, \chi \label{eq:2}\\ && x_j\in\mathbb{N}, \quad j=1,2,\cdots, N \nonumber \end{eqnarray} \end{subequations}
Let ${OPT}_{IP}(I)$ be the optimal objective value of Conf-IP($I$). Relaxing the integrality constraint $x_j\in\mathbb{N}$ to $x_j\ge 0$ in Conf-IP(${I}$), we obtain the configuration linear program Conf-LP(${I}$). Let $OPT_{LP}({I})$ be its optimal objective value; clearly $OPT_{LP}({I})\le OPT_{IP}({I})$. We have the following lemma.
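For intuition, Conf-IP($I$) can be brute-forced on a tiny instance (an exponential-time illustration of ours, not the algorithm used in the paper; the load cap $2$ stands in for the bound of Lemma~\ref{lemma:prepocessing}):

```python
import itertools

def conf_ip(p, b, m, q=2, cap=2.0):
    """Brute-force Conf-IP on a tiny instance: enumerate all configurations
    t (t_i <= b_i, load <= cap), then all multisets of m configurations
    covering the multiplicities b, and return the minimum total cost
    sum over chosen configurations of (load)^q."""
    confs = [t for t in itertools.product(*(range(bi + 1) for bi in b))
             if sum(ti * pi for ti, pi in zip(t, p)) <= cap]
    best = None
    for choice in itertools.combinations_with_replacement(confs, m):
        if all(sum(t[i] for t in choice) >= b[i] for i in range(len(b))):
            cost = sum(sum(ti * pi for ti, pi in zip(t, p)) ** q
                       for t in choice)
            if best is None or cost < best:
                best = cost
    return best
```

For two jobs of size $0.5$ and one of size $1$ on two machines, the optimum splits the load evenly ($1+1$), matching the convexity intuition.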
\begin{lemma}\label{lemma:conf}
There exists an algorithm that runs in time polynomial in $\chi$, $\log(m/\epsilon)$ and $1/\varsigma$ and returns a feasible extreme point solution of Conf-LP(${I}$) with objective value at most $OPT_{LP}({I})+\varsigma$. \end{lemma}
\begin{proof} We consider the dual of Conf-LP(${I}$): \begin{subequations} \begin{eqnarray} \text{Dual-LP(${I}$)}: &\max& \sum_{i=1}^\chi b_iy_i-m(I)z \nonumber\\ &s.t.& \sum_{i=1}^\chi t_{ij}y_i-z\le {v}_j, \quad j=1,2,\cdots, N \label{eq:dual1}\\ && y_i\ge 0, \quad i=1,2,\cdots, \chi \nonumber \end{eqnarray} \end{subequations}
We use an algorithm similar to that for the classical bin packing problem, employing the ellipsoid method to solve Dual-LP(${I}$). We give a brief description. The ellipsoid method iteratively computes a sequence of ellipsoids $E_0,E_1,\cdots$. In each iteration, it invokes a separation oracle to check whether the center of the current ellipsoid $E_k$, say, $(\vey^k,z^k)=(y_1^k,\cdots,y_\chi^k,z^k)$, is feasible. If it is, then it outputs a cut $\veb\cdot\vey-m(I)z\ge \veb\cdot\vey^k-m(I)z^k$ where $\veb=(b_1,\cdots,b_{\chi})$; otherwise, it finds a violated constraint, say, $\vet_j\cdot \vey-z\le {v}_j$, and outputs the cut $\vet_j\cdot\vey-z\le \vet_j\cdot \vey^k-z^k$. Incorporating the cut output by the separation oracle, the ellipsoid method computes a new ellipsoid $E_{k+1}$ whose volume is smaller than that of $E_k$ by a factor of $e^{-\frac{1}{5(\chi+1)}}$. After a polynomial number of iterations (specifically, polynomial in $\log (1/\varsigma)$), the ellipsoid method finds a near-optimal feasible solution with an additive error of $\varsigma$.
In their seminal work, Karmarkar and Karp \cite{karmarkar1982efficient} further prove that to compute an approximate solution to Conf-LP(${I}$) up to an additive precision of $\varsigma$, it suffices to construct an approximate separation oracle: in each iteration, instead of checking whether the center $(\vey^k,z^k)$ is feasible and returning a violated constraint if it is not, the approximate separation oracle checks a point $(\tilde{\vey}^k,\tilde{z}^k)$ and does the following: \begin{itemize} \item If $(\tilde{\vey}^k,\tilde{z}^k)$ violates a constraint, say, $\vet_j\cdot\tilde{\vey}^k-\tilde{z}^k> {v}_j$, then it outputs the cut $\vet_j\cdot\vey-{z}\le\vet_j\cdot\tilde{\vey}^k-\tilde{z}^k$; \item If $(\tilde{\vey}^k,\tilde{z}^k)$ does not violate any constraint, then it outputs $\veb\cdot{\vey}-m(I)z\ge \veb\cdot\tilde{\vey}^k-m(I)\tilde{z}^k$. \end{itemize}
Karmarkar and Karp showed that the ellipsoid method equipped with the approximate separation oracle returns a near-optimal solution within an additive error of $O(\varsigma)$ as long as the point $(\tilde{\vey}^k,\tilde{z}^k)$ in each iteration satisfies the following: \begin{itemize} \item For any constraint, if $\vet_j \cdot\tilde{\vey}^k-\tilde{z}^k>{v}_j$, then $\vet_j \cdot{\vey}^k-{z}^k>{v}_j$; \item For the objective function, $\veb\cdot\vey^k-m(I)z^k\le \veb\cdot\tilde{\vey}^k-m(I)\tilde{z}^k+\varsigma$. \end{itemize}
Here the first property ensures that if $(\tilde{\vey}^k,\tilde{z}^k)$ is infeasible, then $({\vey}^k,{z}^k)$ is also infeasible, violating the same constraint, and therefore the approximate separation oracle proceeds exactly as an exact separation oracle would. The second property ensures that the approximate separation oracle never cuts off a feasible point whose objective value is better than that of $(\tilde{\vey}^k,\tilde{z}^k)$ by more than $\varsigma$, and hence ensures near-optimality.
Now we describe our approximate separation oracle. We first round $v_j$ {\it up} to the nearest value of the form $(1+\epsilon)^k$, denoted $\bar{v}_j$. Notice that there are only polynomially many distinct rounded values. Given an arbitrary point $(\vey^k,z^k)$, we consider all inequalities of the form $$\vet_j\vey^k\le \bar{v}_j+z^k.$$
Our goal is to find a violated constraint or determine that there is none. Since there are only polynomially many different values of the $\bar{v}_j$'s, we can sequentially check, for every value $(1+\epsilon)^h$, whether $(\vey^k,z^k)$ violates the constraint $\vet_j\vey^k\le (1+\epsilon)^h+z^k$ for some configuration $\vet_j$ with $(1+\epsilon)^{h-1}<(\vet_j\vep)^q\le (1+\epsilon)^h$, where $\vep=(p_1,\cdots,p_\chi)$. We argue that we can drop the lower bound, i.e., sequentially check for every value $(1+\epsilon)^h$ whether $(\vey^k,z^k)$ violates the constraint $\vet_j\vey^k\le (1+\epsilon)^h+z^k$ for some configuration $\vet_j$ with $(\vet_j\vep)^q\le (1+\epsilon)^h$. This is because if $(\vey^k,z^k)$ violates $\vet_j\vey^k\le (1+\epsilon)^h+z^k$ but $(\vet_j\vep)^q\le (1+\epsilon)^{h-1}$, say, $(\vet_j\vep)^q$ rounds up to $(1+\epsilon)^{h'}$ for some $h'<h$, then $(\vey^k,z^k)$ also violates $\vet_j\vey^k\le (1+\epsilon)^{h'}+z^k$, which would already have been detected. Hence, finding a violated constraint is equivalent to finding a vector $\vet\le \veb$ such that $(\vet\cdot\vep)^q\le (1+\epsilon)^h$ and $\vet\cdot\vey^k$ is maximized, and comparing this maximal value with $z^k+(1+\epsilon)^h$. This is a knapsack problem, which admits a fully polynomial time approximation scheme (FPTAS). More precisely, using the same method as Karmarkar and Karp~\cite{karmarkar1982efficient}, we can round $\vey^k$ to some sufficiently close value $\tilde{\vey}^k$ such that \begin{itemize}
\item For any $\chi$-dimensional vector $\ved$ whose coordinates are non-negative and $\|\ved\|_{1}\le n$, $0\le \ved\cdot {\vey}^k-\ved\cdot \tilde{\vey}^k\le \varsigma$; \item In polynomial time (specifically, polynomial in $1/\varsigma$), we are able to find $\vet^*$ such that by taking $\vet=\vet^*$, $\vet\cdot\tilde{\vey}^k$ is maximized subject to $(\vet\cdot\vep)^q\le (1+\epsilon)^h$. \end{itemize}
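The resulting separation step can be sketched as follows (a brute-force stand-in of ours: the inner maximisation is done by enumeration where the paper uses a knapsack FPTAS, and all names below are our own):

```python
import itertools
import math

def separation_oracle(p, b, y, z, eps, q=2):
    """Approximate separation for the dual LP, sketched by brute force:
    for each rounded cost level (1+eps)^h, maximise t.y over configurations
    t <= b with (t.p)^q <= (1+eps)^h, and report t if the dual constraint
    t.y - z <= v_t is violated at that level."""
    max_load = sum(ti * pi for ti, pi in zip(b, p))
    H = max(0, math.ceil(q * math.log(max_load, 1 + eps))) if max_load > 1 else 0
    for h in range(H + 1):
        cap = (1 + eps) ** (h / q)   # load cap for cost level (1+eps)^h
        best, best_t = 0.0, None
        for t in itertools.product(*(range(bi + 1) for bi in b)):
            load = sum(ti * pi for ti, pi in zip(t, p))
            val = sum(ti * yi for ti, yi in zip(t, y))
            if load <= cap and val > best:
                best, best_t = val, t
        if best_t is not None and best > z + (1 + eps) ** h:
            return best_t            # a violated constraint was found
    return None                      # (y, z) appears feasible
```

Replacing the enumeration with a knapsack FPTAS over the rounded $\tilde{\vey}^k$ gives the polynomial-time oracle described above.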
Overall, the above argument ensures that in polynomial time we either find some configuration $\vet_j$ such that $\vet_j\tilde{\vey}^k> \bar{v}_j+z^k$, and hence $$\vet_j{\vey}^k\ge \vet_j\tilde{\vey}^k>\bar{v}_j+z^k\ge v_j+z^k;$$ or we conclude that there is no such configuration and guarantee that $\veb\cdot\vey^k-m(I)z^k\le \veb\cdot\tilde{\vey}^k-m(I)\tilde{z}^k+\varsigma$.
Hence, there exists an approximate separation oracle for Dual-LP($I$), indicating that Dual-LP($I$) can be solved by the ellipsoid method in polynomial time (up to arbitrary precision). The derived solution may not be an extreme point; however, using exactly the same argument as Karmarkar and Karp \cite{karmarkar1982efficient}, we can convert it into an extreme point solution. \end{proof}
Consider the near-optimal extreme point solution $\vex(I)$ given by Lemma~\ref{lemma:conf}, and denote by $\lfloor \vex(I)\rfloor=(\lfloor x_1(I)\rfloor,\cdots, \lfloor x_N(I)\rfloor)$ the rounded solution. As in the bin packing algorithm, we assign jobs to machines according to $\lfloor \vex(I)\rfloor$ and then proceed with the {\it residue instance} $I_{res}(\vex)$, obtained by removing from $I$ the jobs scheduled according to $\lfloor \vex(I)\rfloor$; i.e., $I_{res}(\vex)$ consists of the jobs in $\vex(I)-\lfloor \vex(I)\rfloor$. It is easy to see that $I_{res}(\vex)$ has $m(I_{res}(\vex))=m(I)- \sum_j \lfloor x_j(I) \rfloor$ machines.
\begin{lemma}
$OPT_{LP}(I_{res}(\vex)) + OPT_{LP}(I\setminus I_{res}(\vex)) \leq OPT_{LP}(I)$ \end{lemma}
\begin{proof}
Each configuration of instance $I$ is also a configuration of the instances $I_{res}(\vex)$ and $I\setminus I_{res}(\vex)$. Hence, for any solution $\vex(I)$ of instance $I$, $\lfloor \vex(I) \rfloor$ is a feasible solution of $I\setminus I_{res}(\vex)$ and $\vex(I)-\lfloor \vex(I) \rfloor$ is a feasible solution of $I_{res}(\vex)$. \end{proof}
Similar to the bin packing algorithm of Karmarkar and Karp \cite{karmarkar1982efficient}, we employ the {\it harmonic grouping scheme} to round the instance and apply Lemma~\ref{lemma:conf} to the rounded instance. Note that the processing time of every job in the instance is at most 1, as guaranteed by Lemma~\ref{lemma:prepocessing}. The harmonic grouping works as follows: we process the jobs one by one in non-increasing order of processing time and pack each job into the current group. At any time, only one group is open; when the total processing time of jobs in the current group reaches 2, we close it and open a new group. By doing this, all jobs in $I$ are packed into $r$ groups $G_1,\dots,G_r$. We discard all jobs in $G_1$ along with, for each $i\in[2,r]$, the $|G_{i}|-|G_{i-1}|$ jobs with smallest processing time in $G_i$; let $I_d$ be the set of discarded jobs. For the remaining jobs, we lift each processing time to the largest one in its group, obtaining the instance $I'$. The {\it harmonic grouping scheme} has the following properties \cite{williamson2011design}: \begin{itemize}
\item the number of distinct job processing times in $I'$ is at most $\text{size}(I)/2$;
\item $\text{size}(I_d) \in O(\log \text{size}(I))$. \end{itemize}
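A possible rendering of the grouping step (our own sketch, assuming the Karmarkar-Karp convention of scanning jobs from largest to smallest; the function name and return format are ours):

```python
def harmonic_grouping(sizes, threshold=2.0):
    """Scan jobs from largest to smallest, closing the current group once its
    total size reaches `threshold`.  Discard all of G_1; from each later group
    G_i discard the extra smallest jobs so that the kept part matches the size
    of G_{i-1}; lift every kept job to the largest size of its group."""
    sizes = sorted(sizes, reverse=True)
    groups, cur, tot = [], [], 0.0
    for s in sizes:
        cur.append(s)
        tot += s
        if tot >= threshold:
            groups.append(cur)
            cur, tot = [], 0.0
    if cur:
        groups.append(cur)
    discarded = list(groups[0])            # G_1 is discarded entirely
    rounded = []
    for prev, g in zip(groups, groups[1:]):
        k = max(0, len(g) - len(prev))     # extra smallest jobs of this group
        keep = g[:len(g) - k]
        discarded += g[len(g) - k:]
        rounded += [g[0]] * len(keep)      # lift to the group maximum
    return rounded, discarded
```

The rounded instance then has few distinct sizes, and the discarded jobs carry small total size, matching the two properties listed above.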
\begin{lemma}
$OPT_{LP}(I')\leq OPT_{LP}(I)$ \end{lemma}
\begin{proof}
Given any solution $\vex(I)$ of instance $I$, for each configuration we can replace each job $j$ in group $G_i$ with a job $j'$ in group $G_{i+1}$. In such a modified solution, all jobs of $I'$ are scheduled. Since $p_{j'}\leq p_j$ holds, the objective does not increase. \end{proof}
Now we formally present Algorithm $\ref{alg:3}$. Given an instance $I$ of the scheduling problem, we first apply the harmonic grouping scheme to obtain the rounded instance $I'$ along with the instance $I_d$ composed of the discarded jobs. Then we apply Lemma~\ref{lemma:conf} to derive a feasible solution $\vex(I')$ for $I'$, assign jobs to machines according to $\lfloor\vex(I')\rfloor$, and close these machines. The remaining jobs $\vex(I')-\lfloor\vex(I')\rfloor$ and the remaining $m(I')-\|\lfloor\vex(I')\rfloor\|_1$ empty machines (here $\|\cdot\|_1$ is the 1-norm, which counts the number of integrally scheduled configurations) form a new instance $I'_{res}$. In the next iteration, $I'_{res}$ serves as the input, and we repeat this process until only a constant number of machines remain. For the instance on these constantly many machines, we call Algorithm $\ref{alg:1}$. We then perform a balancing operation to ensure that the load difference between any two machines is at most 1. Finally, we group all the discarded jobs into $O(\log^2m)$ groups, each of total processing time at most 2; this is easily done since the processing time of each job is at most 1. We arbitrarily pick $O(\log^2m)$ machines and schedule each group of jobs on one of them.
\begin{algorithm}[tb]
\setcounter{algorithm}{2}
\caption{Pseudo Code Description of AL$_3$}
\label{alg:3}
\begin{algorithmic}[1]
\STATE{$I_D=\emptyset$}
\WHILE{$m(I)> 10$}
\STATE{apply harmonic grouping scheme to create instance $I'$ and $I_d$}
\STATE{$I_D=I_D\cup I_d$}
\STATE{let $\hat{\vex}(I')$ be the $(\epsilon/\log m)$-balanced solution for Conf-IP($I'$)}
\STATE{schedule $I'\setminus I'_{res}(\hat{\vex})$ according to $\lfloor \hat{\vex}(I') \rfloor$}
\STATE{$I=I'_{res}(\hat{\vex})$}
\ENDWHILE
\STATE{schedule $I$ using Algorithm \ref{alg:1}}
\WHILE{$\exists i,j$ s.t. $L_i-L_j>1$}
\STATE{move the job with the largest processing time in machine $i$ to machine $j$}
\ENDWHILE
\STATE{schedule $I_D$ on $O(\log^2m)$ arbitrary machines, increasing each load by at most 2}
\end{algorithmic} \end{algorithm}
Finally, we estimate the overall loss incurred. In each iteration, only $m(I)/2+1$ variables take non-zero values in the solution $\hat{\vex}(I')$. Hence, $m(I'_{res}(\hat{\vex}))\leq m(I)/2+1$, and Algorithm \ref{alg:3} terminates after at most $O(\log m)$ rounds. Each iteration introduces an additional cost of $\varsigma$, together with discarded jobs of total processing time $O(\log^2m)$, which are handled at the end. Setting $\varsigma=O(\log m)$ and observing that $OPT\ge OPT_{LP}(I)$ and $OPT\ge m$, the overall additional cost is bounded by $O(\log^2m)$. Now consider all the discarded jobs. By Lemma~\ref{lemma:prepocessing} and the balancing operation, the load of each machine in the solution is at most 3. Meanwhile, due to the convexity of the objective, the balancing operation does not increase the objective value. Since $q$ is a constant, the overall objective value increases by at most $O(\log^2 m)$, and Lemma~\ref{lemma:bin-packing-scheduling} is proved.
\section{Omitted Proofs in Section~\ref{subsec:maxsat} - Proof of Lemma~\ref{lemma:maxsat-eth2}}\label{appsec:sat} The goal of this section is to prove the following lemma. \begin{T3} Assuming ETH, there exists a constant $\beta\in(0,1)$ such that for any sufficiently small $\epsilon',\delta>0$, it is not possible to distinguish between instances of 3SAT$\,'$ with $4n/3$ clauses where at least $(1-\epsilon')\cdot 4n/3$ clauses are satisfiable, from instances where at most $(\beta+\epsilon')\cdot 4n/3$ clauses are satisfiable, in time $2^{O(n^{1-\delta})}$. \end{T3}
Towards the proof, we start with the following result (see, e.g., Corollary 1 of~\cite{bonnet2015subexponential}). \begin{lemma}\cite{moshkovitz2010two,bonnet2015subexponential}\label{lemma-cite:1} Under ETH, for sufficiently small $\epsilon'>0$, and $\delta>0$, it is impossible to distinguish between instances of 3SAT with $\Lambda$ clauses where at least $(1-\epsilon')\Lambda$ are satisfiable from instances where at most $(7/8+\epsilon')\Lambda$ are satisfiable, in time $O(2^{\Lambda^{1-\delta}})$. \end{lemma}
Applying the classical technique of constructing enforcers via expanders for 3SAT (see, e.g., Theorem 5 of \cite{trevisan2004inapproximability}), we have the following. \begin{lemma}\cite{trevisan2004inapproximability}\label{lemma-cite:2} There exists a constant $d_0$ such that given a 3SAT formula $\phi$ with $\Lambda$ clauses, another 3SAT formula $\phi'$ with $\Lambda'=\Lambda+3d_0\Lambda=O(\Lambda)$ clauses can be constructed in polynomial time such that: \begin{itemize} \item Every variable occurs in at most $2d_0+1$ clauses in $\phi'$; \item There is an assignment for $\phi$ where at most $k$ clauses are not satisfied if and only if there is an assignment for $\phi'$ such that at most $k$ clauses are not satisfied. \end{itemize} \end{lemma}
Denote by 3SAT-$d$ the 3SAT problem where every variable occurs at most $d$ times. Combining Lemma~\ref{lemma-cite:1} and Lemma~\ref{lemma-cite:2}, we have the following lemma. \begin{lemma}\label{lemma:maxsat-eth} Under ETH, there exist constants $d\in\mathbb{N}$ and $\alpha\in(0,1)$ such that for all sufficiently small $\epsilon'>0$ and $\delta>0$, it is impossible to distinguish between instances of 3SAT-$d$ with $\Lambda$ clauses where at least $(1-\epsilon')\Lambda$ are satisfiable from instances where at most $(\alpha+\epsilon')\Lambda$ are satisfiable, in time $O(2^{\Lambda^{1-\delta}})$. \end{lemma}
It is worth mentioning that the reduction in~\cite{trevisan2004inapproximability} involves constructing 2-clauses, that is, 3SAT-$d$ in Lemma~\ref{lemma:maxsat-eth} refers to a 3SAT instance where clauses may contain 2 or 3 variables. For ease of presentation, we want to enforce every clause to contain exactly 3 variables\footnote{We remark, however, that our reduction also works if $C_1$ contains 2-clauses and 3-clauses. It suffices to create two CL$_\ell$, one true copy and one false copy instead of three, and meanwhile adjust the number of dummy jobs.}. This can be done by introducing dummy variables together with 3-clauses that enforce a dummy variable to be true or false (called enforcers). In particular, Berman et al.~\cite{berman2003approximation} provide a general enforcer that allows them to deduce the APX-hardness of MAX3SAT (where every clause contains 3 variables and every variable appears 4 times) from the APX-hardness of MAX2SAT. We can apply their technique directly to get a strengthened version of Lemma~\ref{lemma:maxsat-eth} where in 3SAT-$d$ every clause contains exactly 3 variables.
The following proof is a slight variation of that from Tovey~\cite{tovey}.
\begin{lemma}\label{le:tovey}
Given a 3SAT-$d$ formula $\phi$ with $\Lambda$ clauses, a 3SAT$\,'$ formula $\phi'$ with $|C_1|=\Lambda$ and $|C_2|\le 3\Lambda$ clauses can be constructed in polynomial time such that: \begin{itemize} \item If there is an assignment for $\phi$ where at most $k$ clauses are not satisfied, then there is an assignment for $\phi'$ where at most $k$ clauses are not satisfied. \item If there is an assignment for $\phi'$ where there are at most $k$ clauses not satisfied, then there is an assignment for $\phi$ where at most $kd$ clauses are not satisfied. \end{itemize} \end{lemma} \begin{proof} Let $z$ be any variable in $\phi$ and suppose it appears $\ell\le d$ times in clauses. If $\ell=1$ then we add a dummy clause $(z\oplus \neg z)$. Otherwise $\ell\ge 2$ and we introduce $\ell$ new variables $z_1$, $z_2$, $\cdots$, $z_\ell$ and $\ell$ new clauses $(z_1\oplus \neg z_2)$, $(z_2\oplus \neg z_3)$, $\cdots$, $(z_{\ell}\oplus \neg z_{1})$ which enforce $z_1,z_2,\cdots,z_\ell$ to take the same truth value. Meanwhile we replace the $\ell$ occurrences of $z$ in the original clauses by $z_1$, $z_2$, $\cdots$, $z_\ell$ in turn and remove $z$. By doing so we transform $\phi$ into a new formula $\phi'$ by introducing at most $3\Lambda$ new variables and $3\Lambda$ new clauses.
Notice that each new clause we add in $\phi'$ is of the form $(z_i\oplus \neg z_{i'})$. We let $C_2$ be the set of them and let $C_1$ be the set of other clauses. It is easy to verify that $\phi'$ is an instance of 3SAT$'$. Notice that every clause in $C_1$ has a corresponding clause in $\phi$ by replacing $z_i$'s with $z$.
Suppose there is an assignment for $\phi$ where at most $k$ clauses are not satisfied. Then for any variable $z$ in $\phi$ that occurs $\ell$ times, we let $z_1$, $z_2$, $\cdots$, $z_\ell$ all take the same value as $z$. It is easy to see that at most $k$ clauses in $C_1$ of $\phi'$ are not satisfied.
Suppose there is an assignment for $\phi'$ where at most $k$ clauses are not satisfied. For any variable $z$ in $\phi$ that corresponds to $z_1$, $z_2$, $\cdots$, $z_\ell$ in $\phi'$, we let $z_i$ take the same value as $z_1$ for all $i$. Now we check the number of additional unsatisfied clauses in $C_1$ introduced by doing so. If all the $z_i$'s already take the same value, then no additional unsatisfied clauses are introduced. Otherwise, some of the clauses in $C_1$ that were satisfied by $z_2$, $z_3$, $\cdots$ or $z_\ell$ may become unsatisfied, but there are at most $d-1$ such clauses. Moreover, in this case at least one clause among $(z_1\oplus \neg z_2)$, $(z_2\oplus \neg z_3)$, $\cdots$, $(z_{\ell}\oplus \neg z_{1})$ is unsatisfied, so this can happen for at most $k$ variables. Hence we introduce at most $k(d-1)$ additional unsatisfied clauses by setting $z_i=z_1$ for all variables, i.e., there are at most $k(d-1)+k=kd$ unsatisfied clauses in $C_1$ now. Since every clause in $C_1$ corresponds to a clause of $\phi$, there is an assignment for $\phi$ where at most $kd$ clauses are not satisfied. \end{proof}
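The variable-splitting step of the proof can be sketched in Python as a sanity check. The encoding below (a clause as a list of signed integers, with a negative integer denoting a negated variable) is our own illustrative choice, not part of the reduction itself.

```python
from collections import defaultdict

def split_variables(clauses):
    """Tovey-style splitting: replace the ell occurrences of each variable z
    by fresh copies z_1..z_ell and add the cycle of 2-clauses
    (z_1 or not z_2), ..., (z_ell or not z_1) forcing all copies equal.
    A literal is a nonzero int v: variable |v|, negated iff v < 0.
    Returns (C1, C2)."""
    occurrences = defaultdict(list)          # variable -> [(clause index, position)]
    for ci, cl in enumerate(clauses):
        for pi, lit in enumerate(cl):
            occurrences[abs(lit)].append((ci, pi))
    c1 = [list(cl) for cl in clauses]
    c2 = []
    fresh = max(occurrences) + 1             # next unused variable name
    for z, occ in sorted(occurrences.items()):
        copies = list(range(fresh, fresh + len(occ)))
        fresh += len(occ)
        for (ci, pi), zi in zip(occ, copies):
            sign = 1 if c1[ci][pi] > 0 else -1
            c1[ci][pi] = sign * zi           # replace each occurrence by its copy
        for j, zi in enumerate(copies):      # the enforcing cycle goes into C2
            c2.append([zi, -copies[(j + 1) % len(copies)]])
    return c1, c2
```

For $\ell=1$ the cycle degenerates to the tautological dummy clause $(z_1\vee\neg z_1)$, matching the proof. Note $|C_2|\le 3\Lambda$ since each clause has 3 literals.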
Given Lemma~\ref{le:tovey}, we know that if there is an assignment for $\phi$ with at most $\alpha \Lambda$ unsatisfied clauses, then there is an assignment for $\phi'$ with every clause in $C_2$ satisfied and at most $\alpha \Lambda$ clauses in $C_1$ unsatisfied; if every assignment leaves at least $\beta \Lambda$ unsatisfied clauses in $\phi$, then every assignment leaves at least $\beta \Lambda/d$ unsatisfied clauses in $\phi'$. Hence, according to Lemma~\ref{lemma:maxsat-eth}, Lemma~\ref{lemma:maxsat-eth2} is proved.
\section{Omitted Contents in Section~\ref{subsec:overview} - Why the Old Reduction Does Not Work}\label{appsec:old}
Chen et al.~\cite{chen2014optimality} provided a reduction that meets the conditions CO1 to CO4 with job processing times, and hence the target value $T$, being $O(n^{1+\delta})$ for an arbitrarily small constant $\delta>0$. This reduction provides a strong lower bound for $P||C_{max}$, but does not work well for our problem $P||\sum_iC_i^q$. To see this, we take $q=2$ as an example and compare the two objective values for the constructed scheduling problem when $I_{sat}$ is satisfiable and when it is not. If $I_{sat}$ is satisfiable, then there exists a schedule in which every machine has a load of exactly~$T$, implying that the optimal objective value is $mT^2$. Otherwise, at least one machine has load $T+1$ or more and another machine has load $T-1$ or less, and then the optimal objective is at least $(m-2)T^2+(T+1)^2+(T-1)^2=mT^2+2$. For the sake of contradiction, let us assume that there exists a PTAS with running time $2^{O((1/\epsilon)^{\kappa})}$ for some $\kappa\in (0,1)$. If we take $\epsilon$ sufficiently small such that $mT^2\epsilon\le 1$, then the PTAS can be used to determine whether the constructed scheduling instance admits a feasible schedule of objective value at most $mT^2+1<mT^2+2$, and hence whether $I_{sat}$ is satisfiable. The running time of the PTAS becomes $2^{O((1/\epsilon)^{\kappa})}=2^{O((mT^2)^{\kappa})}$. Plugging $m=O(n)$ and $T=O(n^{1+\delta})$ into the reduction, we have $mT^2=O(n^{3+2\delta})$. Hence, if $\kappa=1/3-\delta$, we have $(mT^2)^{\kappa}\le O(n^{1-\delta})$, and an efficient PTAS of running time $2^{O((1/\epsilon)^{1/3-\delta})}$ can thus determine the satisfiability of $I_{sat}$ in $2^{O(n^{1-\delta})}$ time, contradicting ETH. To summarize, the above argument only implies a lower bound of $2^{O((1/\epsilon)^{1/3-\delta})}$ on the running time of a PTAS for an arbitrary constant $\delta>0$, which is not strong enough to match our algorithms in Theorem~\ref{thm:algorithm}.
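To make the gap concrete, here is a quick numeric illustration (ours, with arbitrary values of $m$ and $T$) of the two objective values for $q=2$:

```python
def sum_sq(loads):
    # Objective for q = 2: sum of squared machine loads.
    return sum(load * load for load in loads)

m, T = 10, 1000                               # illustrative values only
balanced = [T] * m                            # I_sat satisfiable: every load exactly T
perturbed = [T + 1, T - 1] + [T] * (m - 2)    # one machine over, one under
gap = sum_sq(perturbed) - sum_sq(balanced)    # (T+1)^2 + (T-1)^2 - 2T^2 = 2
```

The additive gap of $2$ is independent of $T$, which is exactly why a multiplicative $(1+\epsilon)$-approximation only separates the two cases when $mT^2\epsilon\le 1$.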
To overcome the obstacle, a natural idea is to decrease the value of $m$ or $T$ in the reduction. However, if, say, $m=n^{0.9}$ and $T=n^{O(1)}$, then we know the standard dynamic programming for scheduling returns the optimal solution in $T^{O(m)}=2^{O(n^{0.9})}$ time; similarly, if $T=n^{0.9}$ and $m=n^{O(1)}$, then we know there are at most $n^{0.9}$ different kinds of jobs, and the scheduling problem can also be solved in time $2^{O(n^{0.9})}$ through dynamic programming. Hence, we cannot expect to reduce $I_{sat}$ to such scheduling instances, assuming ETH.
As a consequence, in this paper, we will not try to decrease $m$ or $T$. Instead, we increase the gap between the two optimal objective values for the constructed scheduling problem when $I_{sat}$ is satisfiable and when $I_{sat}$ is not satisfiable by exploiting the hardness gap in Lemma~\ref{lemma:maxsat-eth2}.
\section{Omitted Proofs in Section~\ref{subsec:adjacent-sum} - Proof of Lemma~\ref{lemma:uniquesum-1}}\label{appsec:ajacent-sum} \begin{T2}
Let $N\in\mathbb{Z}^+$. There exists a subset ${\cal{S}}\subseteq \ensuremath{\mathbb{Z}}_N$ such that $|{\cal{S}}|\ge N^{1-c_0\sqrt{\frac{1}{{\log N}}}}$ for some sufficiently large $c_0$ (in particular, $c_0=7$ suffices), and for any $y\in {\cal{S}}$ and $1\le h\le 5$, the linear equation $h\cdot y=y_1+y_2+\cdots+y_h$ with $y_i\in {\cal{S}}$ for all $i$ has a unique solution $y_1=y_2=\cdots=y_h=y$. \end{T2} \begin{proof}
For any $d\ge 2$, $M\ge 2$ and $k\le (M+1)(d-1)^2$, we let $x=5d-1$ and $\vex=(1,x,x^2,\cdots,x^M)$, $\vecc=(\vecc[0],\vecc[1],\cdots,\vecc[M])$. We define the set ${\cal{S}}_k(M,d)$ as: \begin{eqnarray*} {\cal{S}}_k(M,d)&=&\{y: y=\vecc\vex+x^{M+1}=\vecc[0]+\vecc[1]x^1+\cdots+\vecc[M]x^M+x^{M+1},\\ &&\vecc[i]\in\mathbb{N}, 0\le \vecc[i]<d, \sum_{i=0}^M(\vecc[i])^2=k\}. \end{eqnarray*}
That is, ${\cal{S}}_k(M,d)$ is the set of all integers which can be expressed in the form of ${\vecc}\cdot{\vex}+x^{M+1}$ such that $\vecc[i]\in [0,d)$ and $\|\vecc\|_2^2=k$, where $\|\cdot\|_2$ is the $\ell_2$-norm of a vector.
We claim that for any $y\in {\cal{S}}_k(M,d)$ and $1\le h,h'\le 5$, if $hy=y_1+y_2+\cdots+y_{h'}$ with $y_i\in {\cal{S}}_k(M,d)$, then we have $h'=h$ and $y_1=y_2=\cdots=y_h=y$. Let $y=\vecc\vex+x^{M+1}$ and $y_j=\vecc_j\vex+x^{M+1}$, where $\vecc_j=(\vecc_j[0],\vecc_j[1],\cdots,\vecc_j[M])$. Using the fact that $x=5d-1$ and $\vecc_j[i]< d$, we know that for $h'\le 5$ we have $\sum_{j=1}^{h'}\vecc_j[i]\le 5(d-1)<x$, so no carries occur between powers of $x$. Hence, by checking the coefficient of $x^{M+1}$ on both sides of the equation $hy=\sum_{j=1}^{h'}y_j$, we have $h=h'$. Moreover, we can conclude that the coefficient of $x^i$ in the sum $y_1+y_2+\cdots+y_{h'}$ equals $\sum_{j=1}^{h'}\vecc_j[i]$, hence by comparing the coefficients of $x^i$ on both sides, we have
$$\sum_{j=1}^h\vecc_j=h\vecc.$$
By the definition of ${\cal{S}}_k(M,d)$, the following hold: $$\sum_{i=0}^{M}(\vecc_j[i])^2=k, \quad \forall j,$$ and $$\sum_{i=0}^{M}\left(\frac{\sum_{j=1}^h \vecc_j[i]}{h}\right)^2=\sum_{i=0}^{M}(\vecc[i])^2=k.$$ Hence, \begin{eqnarray}\label{eq:ari}
k=\sum_{i=0}^{M}(\frac{\sum_{j=1}^h \vecc_j[i]}{h})^2=\frac{\sum_{j=1}^h\sum_{i=0}^{M}(\vecc_j[i])^2}{h}=\sum_{i=0}^{M}\frac{\sum_{j=1}^h(\vecc_j[i])^2}{h}. \end{eqnarray}
According to the inequality between the quadratic mean and the arithmetic mean, we know $$ \left(\frac{\sum_{j=1}^h \vecc_j[i]}{h}\right)^2\le \frac{\sum_{j=1}^h (\vecc_j[i])^2}{h}, \quad \forall i,$$ and equality holds only when the $\vecc_j[i]$'s are identical for all $j$. Hence, by Eq.~\eqref{eq:ari} we know $\vecc=\vecc_j$, and consequently $y_j=y$ for $1\le j\le h$. Hence, the claim is true.
It remains to select an appropriate ${\cal{S}}_k(M,d)$ such that ${\cal{S}}_k(M,d)\subseteq \ensuremath{\mathbb{Z}}_N$ and has a large cardinality. Towards this, we first observe that the largest element of ${\cal{S}}_k(M,d)$ is bounded by $(5d-1)^{M+2}$. We shall select $d$ and $M$ such that $(5d-1)^{M+2}\le N$. Notice that there are $d^{M+1}-1$ distinct positive integers which can be expressed as $\vecc\cdot\vex+x^{M+1}$ where $\vecc[i]\in [0,d)$. Furthermore $k=\|\vecc\|_2^2\le (M+1)(d-1)^2$, hence by the pigeonhole principle there exists some $1\le k^*\le (M+1)(d-1)^2$ such that
$$|{\cal{S}}_{k^*}({M,d})|\ge \frac{d^{M+1}-1}{(M+1)(d-1)^2}>\frac{d^{M-1}}{M+1}.$$
It remains to select $d$ and $M$ subject to $(5d-1)^{M+2}\le N$ such that $\frac{d^{M-1}}{M+1}$ is large. Below all logarithms are taken with base $e$. We pick $M=\lfloor \sqrt{\frac{\log N}{\log 5}}\rfloor-2$ and $d=\lfloor\frac{e^{\sqrt{\log 5\cdot \log N}}}{5}\rfloor$. It is easy to see that $(M+2)\log (5d-1)\le \sqrt{\frac{\log N}{\log 5}}\cdot \log e^{\sqrt{\log 5\cdot \log N}}=\log N$, hence $(5d-1)^{M+2}\le N$. Furthermore, for sufficiently large $N$ (e.g., $N>e^{10}$), we know $d\ge \frac{e^{\sqrt{\log 5\cdot \log N}}}{10}$, hence \begin{eqnarray*}
\frac{d^{M-1}}{M+1}=e^{(M-1)\log d -\log (M+1)}&\ge& e^{(\sqrt{\log N\cdot\log 5}-\log 10)(\sqrt{\frac{\log N}{\log 5}}-4)-\frac{1}{2}(\log\log N-\log\log 5)} \\&\ge&e^{\log N\cdot (1+\frac{-4\sqrt{\log N\log 5}-\frac{\log 10}{\sqrt{\log 5}}\sqrt{\log N}-\Omega(\log\log N)}{\log N})}\\&=&N^{1-\Omega(\frac{1}{\sqrt{\log N}})} \end{eqnarray*}
Hence, Lemma~\ref{lemma:uniquesum-1} is proved. In particular, it is easy to verify that $4\sqrt{\log 5}+\frac{\log 10}{\sqrt{\log 5}}\le 7$, hence $\frac{d^{M-1}}{M+1}\ge N^{1-\frac{7}{\sqrt{\log N}}}$. \end{proof}
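The construction above can be checked by brute force for small parameters. The following Python sketch (function names and parameter choices are ours, for illustration only) builds ${\cal{S}}_k(M,d)$ and verifies the unique-sum property for $h\le 5$:

```python
from itertools import product, combinations_with_replacement

def build_S(M, d, k):
    # S_k(M,d): integers c[0] + c[1]*x + ... + c[M]*x^M + x^(M+1)
    # with digits 0 <= c[i] < d and squared l2-norm exactly k, where x = 5d-1.
    x = 5 * d - 1
    return [sum(ci * x**i for i, ci in enumerate(c)) + x**(M + 1)
            for c in product(range(d), repeat=M + 1)
            if sum(ci * ci for ci in c) == k]

def unique_sum_ok(S, hmax=5):
    # Check: h*y = y_1 + ... + y_h with all y_i in S forces y_1 = ... = y_h = y.
    for y in S:
        for h in range(1, hmax + 1):
            for combo in combinations_with_replacement(S, h):
                if sum(combo) == h * y and any(v != y for v in combo):
                    return False
    return True
```

For instance, with $M=2$ and $d=3$ (so $x=14$) every ${\cal{S}}_k(2,3)$ passes the check, in line with the carry-free digit comparison and the quadratic-mean argument above.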
\iffalse \section{Omitted proofs in Section~\ref{subsec:adjacent-sum} - Proof of Claim~\ref{claim:1}}\label{appsec:claim} \begin{T5} All vectors in $ \hat{\cal{S}}^{\beta+1}$ can be ordered such that any two adjacent vectors in the sequence are close. \end{T5} \begin{proof}
Recall that $|\hat{{\cal{S}}}|=2^\omega$, hence we can map each $a_j\in \hat{\cal{S}}$ to a distinct $\omega$-bit binary number (or more specifically, a binary string) within $\{0,1\}^{\omega}$. Let $\xi:\hat{\cal{S}}\rightarrow \{0,1\}^{\omega}$ be an arbitrary one-to-one mapping, then we can define an extended mapping $\xi':{\hat{\cal{S}}}^{\beta+1}\rightarrow \{0,1\}^{(\beta+1)\omega}$ such that $\vea$ is mapped to a $(\beta+1)\omega$-bit binary string $\overline{\xi(\vea[0])\xi(\vea[1])\cdots\xi(\vea[{\beta}])}_2$. If we can order all $(\beta+1)\omega$-bit binary string such that every adjacent numbers differ by exactly one bit, then the inverse of these binary strings gives a sequence of $\vea$'s such that adjacent vectors are close.
Now we prove the following statement: for any $n\in \mathbb{Z}$, $n\ge 2$ and an arbitrary string $b\in \{0,1\}^n$, all binary strings of $\{0,1\}^n$ can be ordered in a sequence starting with $b$ such that any two adjacent strings differ by exactly one bit. We show this by induction. The statement is clearly true for $n=2$. Suppose it is true for all $n\le n'$, we prove it also holds for $n=n'+1$. Consider the first bit of $b$, which can be $0$ or $1$. Assume it is $1$ (the case of $0$ can be proved in a similar way), then $b=1b_1$ for some $b_1\in \{0,1\}^{n'}$. According to the induction hypothesis, all binary strings of $\{0,1\}^{n'}$ can be ordered in a sequence starting with $b_1$ such that any two adjacent binary strings only differ by one bit. Let such a sequence be $b_1,b_2,\cdots,b_{2^{n'}}$, then all binary strings of $\{0,1\}^{n'+1}$ can be ordered as $1b_1,1b_2,\cdots,1b_{2^{n'}},0b_{2^{n'}}, 0b_{2^{n'}-1},\cdots,0b_1$. Hence, the statement is true, and Claim~\ref{claim:1} follows. \end{proof} \fi
\section{Omitted proofs in Section~\ref{subsec:adjacent-sum} - Proof of Lemma~\ref{lemma:linked-sum}} \iffalse \label{appsec:link-sum} The goal of this section is to prove the following lemma. \begin{T4} Let $\tau$ be an arbitrary permutation of $\ensuremath{\mathbb{Z}}_n$. Then there exists a set ${\cal{S}}=\{s_1,s_2,\cdots,s_n\}$ of positive integers together with an auxiliary set $E=\bigcup_{i=1}^nE_i$ of positive integers such that: \begin{compactitem} \item All integers in ${\cal{S}}\cup E$ are bounded by $n^{1+O(\frac{1}{\sqrt{\log\log n}})}$; \item $E_i=\{e_{i,1},e_{i,2},\cdots,e_{i,\omega}\}$ for all $i$, where $\omega=O(\frac{\log n}{\log\log n})$; \item $E_i\cap E_{i'}=\emptyset$ for any $i'\neq i$; \item For every $i\in\ensuremath{\mathbb{Z}}_n$ and $k=\tau(i)$, each sum $s_i+e_{i,1}$, $e_{i,1}+e_{i,2}$, $\cdots$, $e_{i,\omega-1}+e_{i,\omega}$, $e_{i,\omega}+s_k$ is unique in the sense that no other pairs in ${\cal{S}}\cup E$ that adds up to the same value. \end{compactitem} In particular, all these properties are satisfied by setting $s_i=\sigma(i)$ where $\sigma:\ensuremath{\mathbb{Z}}_N\rightarrow \ensuremath{\mathbb{Z}}_{N'}$ is the function specified in Lemma~\ref{lemma:uniquesum-2} by taking $N=n$. \end{T4}
Towards the proof, we need a shuffling lemma that can decompose the permutation $\tau$ into a sequence of ``simpler" permutations.
\subsection{A Shuffling Lemma} \fi \label{appsubsec:shuffle}
The goal of this subsection is to prove the following. \begin{T6}
For any $\pi\in Aut({\cal{M}}^{\upsilon})$, there exist $h$-shufflers $f_h,\hat{f}_h\in Aut({\cal{M}}^{\upsilon})$ for every $0\le h\le \upsilon-1$ such that $F(\vey)=\hat{F}(\pi(\vey))$ for any $\vey\in {\cal{M}}^{\upsilon}$, where $F=f_{\upsilon-1}\circ f_{\upsilon-2}\circ \cdots\circ f_{0}$ and $\hat{F}=\hat{f}_{\upsilon-1} \circ \hat{f}_{\upsilon-2}\circ \cdots\circ \hat{f}_{0}$. Furthermore, the $f_h$'s and $\hat{f}_h$'s can be constructed in time polynomial in $|{\cal{M}}^\upsilon|$. \end{T6}
For any finite set $X$, we denote by $Aut(X)$ the set of all one-to-one mappings from $X$ to itself. For any $f_1,f_2\in Aut(X)$, we denote by $f_1\circ f_2 \in Aut(X)$ the composition of $f_1$ and $f_2$, i.e., $f_1\circ f_2(x)=f_1(f_2(x))$ for any $x\in X$. Note that $Aut(X)$ forms the symmetric group on $X$ under composition. We denote by $f^{-1}\in Aut(X)$ the inverse of $f\in Aut(X)$.
Let ${\cal{M}}$ be an arbitrary finite set of cardinality $t$. Let $\upsilon\in \ensuremath{\mathbb{Z}}^+$. Denote by ${\cal{M}}^{\upsilon}$ the set of all $\upsilon$-dimensional vectors whose entries belong to ${\cal{M}}$.
For any vector $\vey\in {\cal{M}}^{\upsilon}$, we denote by $\vey[h]\in {\cal{M}}$ the $h$-th coordinate of $\vey$, and $\vey[-h]\in {\cal{M}}^{\upsilon-1}$ the vector obtained by removing the $h$-th coordinate from $\vey$.
For any $f\in Aut({\cal{M}}^{\upsilon})$ and $0\le h\le \upsilon-1$, we call $f$ an $h$-shuffler if $\left(f(\vey)\right)[-h]=\vey[-h]$ for all $\vey\in {\cal{M}}^\upsilon$, that is, $$f(\vey)=f(\vey[0],\vey[1],\cdots,\vey[\upsilon-1])=(\vey[0],\cdots,\vey[h-1],z,\vey[h+1],\cdots,\vey[\upsilon-1]),$$ for some $z\in {\cal{M}}$ (depending on $\vey$).
We prove the following lemma.
\begin{lemma}\label{lemma:shuffle}
For any $\pi\in Aut({\cal{M}}^{\upsilon})$ and $0\le h\le \upsilon-1$, there exist $h$-shufflers $f,\hat{f}\in Aut({\cal{M}}^{\upsilon})$ such that for any $\vey\in {\cal{M}}^{\upsilon}$, $\left(f(\vey)\right)[h]=\left(\hat{f}(\pi(\vey))\right)[h]$. Furthermore, $f$ and $\hat{f}$ can be constructed in time that is polynomial in $|{\cal{M}}^\upsilon|$. \end{lemma}
\begin{figure}
\caption{Illustration of Lemma \ref{lemma:shuffle} for ${\cal{M}}=\{1,2,3\}$, $\upsilon=4$, $h=1$ (recall that vectors start with the $0$-th coordinate). Each vertex in $U$ (i.e., a solid circle) is a 4-dimensional vector. Each mega-vertex in $\bar{U}$ (i.e., a dotted circle) contains exactly 3 vertices. Each solid line between vertices represents the mapping $\pi$ and is colored by one of 3 colors.}
\label{fig:shuffle_lemma}
\end{figure}
Briefly speaking, $f$ and $\hat{f}$ shuffle the $h$-th coordinate of $\vey$ and of its image under $\pi$ so that they become identical.
Now we are ready to prove Lemma~\ref{lemma:shuffle}.
\begin{proof}[Proof of Lemma~\ref{lemma:shuffle}] For ease of presentation, we let ${\cal{M}}=\{1,2,\cdots,t\}$.
Note that $|{\cal{M}}^{\upsilon}|=t^{\upsilon}$. Sort the elements (vectors) of ${\cal{M}}^{\upsilon}$ in an arbitrary order and denote them by $\{\vecc_1,\vecc_2,\cdots,\vecc_{t^{\upsilon}}\}$. We create a bipartite graph $G=(U\cup W, E)$ to represent $\pi$ as follows: Both $U$ and $W$ contain $t^{\upsilon}$ vertices. Let $U=\{u_1,u_2,\cdots,u_{t^{\upsilon}}\}$ and $W=\{w_1,w_2,\cdots,w_{t^{\upsilon}}\}$. There is an edge between $u_i$ and $w_j$ if and only if $\pi(\vecc_i)=\vecc_j$. Since $\pi$ is a one-to-one mapping, $G$ is 1-regular.
As each $u_i$ and $w_i$ correspond to $\vecc_i$, we will slightly abuse notation and write $u_i[h]$ or $u_i[-h]$ to refer to $\vecc_i[h]$ and $\vecc_i[-h]$.
\noindent\textbf{Contraction.} We contract the graph $G$ as follows. We partition $U$ (or $W$) into $t^{\upsilon-1}$ subsets such that $u_i$ and $u_j$ (or $w_i$ and $w_j$) are in the same subset if and only if $u_i[-h]=u_j[-h]$ (or $w_i[-h]=w_j[-h]$). Denote by $\bar{U}_i$ (or $\bar{W}_i$), $1\le i\le t^{\upsilon-1}$, all the subsets in the partition of $U$ (or $W$). It is clear that each $\bar{U}_i$ (or $\bar{W}_i$) contains exactly $t$ vertices from $U$ (or $W$). We now contract all the $t$ vertices in $\bar{U}_i$ (or $\bar{W}_i$) into one mega-vertex, and denote this mega-vertex as $\bar{u}_i$ (or $\bar{w}_i$). By doing so we generate parallel edges, that is, there are $\ell$ parallel edges between each pair of mega-vertices $\bar{u}_i$ and $\bar{w}_j$ if there are $\ell$ edges between vertices in $\bar{U}_i$ and $\bar{W}_j$ in the original graph $G$. We denote by $\psi$ an arbitrary one-to-one mapping between a parallel edge (between mega-vertices $\bar{u}_i$ and $\bar{w}_j$) and an edge in $G$ (between some vertex in subset $\bar{U}_i$ and some vertex in subset $\bar{W}_j$). Denote by $\bar{G}=(\bar{U}\cup \bar{W},\bar{E})$ the contracted graph. Given that $G$ is $1$-regular and every mega-vertex contains exactly $t$ vertices, we have the following observation: \begin{observation} The contracted graph $\bar{G}=(\bar{U}\cup \bar{W},\bar{E})$ is a $t$-regular bipartite graph. \end{observation}
\noindent\textbf{Coloring.} It is known that every bipartite regular graph admits a perfect matching (see, e.g.~\cite{konig1916graphen}). Consequently, every $t$-regular bipartite graph can be decomposed into $t$ perfect matchings. We decompose $\bar{G}$ into $t$ perfect matchings and color edges in each perfect matching with a distinct color.
Overall we have used $t$ colors. Since $|{\cal{M}}|=t$, we can map the $i$-th color to integer $i\in {\cal{M}}$.
Recall that $\psi$ is a one-to-one mapping between $\bar{E}$ and $E$, hence via $\psi$ we also obtain a coloring for $E$ (by coloring each edge in $E$ with the same color as its corresponding edge in $\bar{E}$). Recall that $G$ is 1-regular. Thus we can extend the edge coloring to a vertex coloring, such that each vertex in $G$ is colored with the same color as the unique edge incident to it.
\noindent\textbf{Define functions $f$ and $\hat{f}$.} Consider every vertex set $\bar{U}_k$. We know $\bar{U}_k$ contains $t$ vertices, and let $\bar{U}_k=\{u_{k_1},u_{k_2},\cdots,u_{k_t}\}$. By definition the $u_{k_i}[-h]$'s are identical and the $u_{k_i}[h]$'s are exactly the $t$ elements of ${\cal{M}}$. Since we have decomposed $\bar{G}$ into $t$ perfect matchings, each colored with a distinct color, the $t$ parallel edges incident to the mega-vertex $\bar{u}_k$ are colored with $t$ distinct colors. Consequently, each vertex $u_{k_i}$ is also colored with a distinct color. Recall the one-to-one correspondence between a vertex $u_j$ and $\vecc_j\in{\cal{M}}^{\upsilon}$. Now we define a function $f$ such that $f(\vecc_{k_i})[-h]=\vecc_{k_i}[-h]$, and $f(\vecc_{k_i})[h]$ equals the color of $u_{k_i}$, where we interpret each color as a number in $\{1,\cdots,t\}$. Consequently, $(f(\vecc_{k_1}),f(\vecc_{k_2}),\cdots,f(\vecc_{k_t}))$ is a permutation of $(\vecc_{k_1},\vecc_{k_2},\cdots,\vecc_{k_t})$. Hence, $f\in Aut({\cal{M}}^{\upsilon})$.
Similarly, we consider each $\bar{W}_k=\{w_{k_1},w_{k_2},\cdots,w_{k_t}\}$ and define a function $\hat{f}$ such that $\hat{f}(\vecc_{k_i})[-h]=\vecc_{k_i}[-h]$, and $\hat{f}(\vecc_{k_i})[h]$ equals the color of $w_{k_i}$. Consequently, $(\hat{f}(\vecc_{k_1}),\hat{f}(\vecc_{k_2}),\cdots,\hat{f}(\vecc_{k_t}))$ is also a permutation of $(\vecc_{k_1},\vecc_{k_2},\cdots,\vecc_{k_t})$, and $\hat{f}\in Aut({\cal{M}}^{\upsilon})$.
Furthermore, the color of each $u_i$ or $w_j$ is defined as the color of the edge incident to it, hence if there is an edge between $u_i$ and $w_j$ in $G$, then we know $(f(u_i))[h]=(\hat{f}(w_j))[h]$. Hence, Lemma~\ref{lemma:shuffle} is proved. \end{proof}
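The coloring step relies on decomposing a $t$-regular bipartite multigraph into $t$ perfect matchings. A minimal sketch of this decomposition (our own illustration, peeling off matchings with Kuhn's augmenting-path algorithm rather than any specific routine from the literature):

```python
def find_matching(adj, n_left, n_right):
    # Kuhn's augmenting-path algorithm; adj[u] lists right endpoints
    # (parallel edges allowed).  Returns match_w with match_w[w] = matched u.
    match_w = [-1] * n_right
    def try_augment(u, seen):
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                if match_w[w] == -1 or try_augment(match_w[w], seen):
                    match_w[w] = u
                    return True
        return False
    for u in range(n_left):
        try_augment(u, set())
    return match_w

def decompose_regular(adj, n, t):
    # Peel t perfect matchings off a t-regular bipartite multigraph on n+n vertices.
    adj = [list(a) for a in adj]
    matchings = []
    for _ in range(t):
        match_w = find_matching(adj, n, n)
        assert -1 not in match_w          # regularity guarantees a perfect matching
        matchings.append({match_w[w]: w for w in range(n)})
        for w, u in enumerate(match_w):
            adj[u].remove(w)              # delete one copy of each used edge
    return matchings
```

Removing a perfect matching leaves a $(t-1)$-regular bipartite multigraph, so by K\H{o}nig's theorem the peeling never gets stuck; this mirrors the decomposition of $\bar{G}$ in the proof.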
See Figure~\ref{fig:shuffle_lemma} for an illustration of the mappings $f$ and $\hat{f}$ constructed in Lemma~\ref{lemma:shuffle}. Iteratively applying Lemma~\ref{lemma:shuffle}, we can now prove Lemma~\ref{lemma:shuffle-1}.
\begin{proof} We prove the following statement by induction: For $0\le k\le \upsilon-1$, there exist $h$-shufflers $f_h,\hat{f}_h\in Aut({\cal{M}}^{\upsilon})$ for every $0\le h\le k$ such that $F_k(\vey)[h']=\hat{F}_k(\pi(\vey))[h']$ for any $\vey\in {\cal{M}}^{\upsilon}$ and $h'\le k$, where $F_k=f_{k}\circ f_{k-1}\circ \cdots\circ f_{0}$ and $\hat{F}_k=\hat{f}_{k} \circ \hat{f}_{k-1}\circ \cdots\circ \hat{f}_{0}$.
The statement is true for $k=0$ by Lemma~\ref{lemma:shuffle}. Suppose the statement is true for $k$; we prove it is true for $k+1$.
Consider $\hat{f}_{k}\circ \hat{f}_{k-1}\circ\cdots\circ \hat{f}_0\circ \pi\circ f_0^{-1}\circ f_1^{-1}\circ\cdots \circ f_k^{-1}$. According to Lemma~\ref{lemma:shuffle}, there exist $(k+1)$-shufflers $f_{k+1},\hat{f}_{k+1}$ such that \begin{eqnarray}\label{eq:induction} \left( f_{k+1}(\vey)\right)[k+1]=\left(\left(\hat{f}_{k+1}\circ\hat{f}_{k}\circ \cdots\circ \hat{f}_0\circ \pi\circ f_0^{-1}\circ f_1^{-1}\circ\cdots \circ f_k^{-1}\right)(\vey)\right)[k+1], \quad\forall \vey\in {\cal{M}}^{\upsilon}. \end{eqnarray}
Since $f_k\circ f_{k-1}\circ\cdots\circ f_0\in Aut({\cal{M}}^{\upsilon})$, for every $\vey\in {\cal{M}}^{\upsilon}$ there exists some $\vez\in {\cal{M}}^{\upsilon}$ such that $\vey=\left(f_k\circ f_{k-1}\circ\cdots\circ f_0\right)(\vez)$. Plugging this into Equation~\eqref{eq:induction}, for all $\vez\in {\cal{M}}^{\upsilon}$ we get \begin{eqnarray*} &&\left(f_{k+1}\circ f_k\circ\cdots\circ f_0\right)(\vez)[k+1]\\ &=&\left(\left(\hat{f}_{k+1}\circ \cdots\circ \hat{f}_0\circ \pi\circ f_0^{-1}\circ \cdots \circ f_k^{-1}\circ f_k\circ f_{k-1}\circ\cdots\circ f_0\right)(\vez)\right)[k+1]\\ &=&\left(\left(\hat{f}_{k+1}\circ \cdots\circ \hat{f}_0\circ \pi\right)(\vez)\right)[k+1]. \end{eqnarray*}
Moreover, for any $h\le k$, recall that $f_{k+1}$ and $\hat{f}_{k+1}$ do not change the $h$-th coordinate, hence \begin{eqnarray*} \left(f_{k+1}\circ f_k\circ\cdots\circ f_0\right)(\vez)[h] &=&\left( f_k\circ\cdots\circ f_0\right)(\vez)[h]\\ &=&\left(\left(\hat{f}_{k}\circ \cdots\circ \hat{f}_0\circ \pi\right)(\vez)\right)[h]\\ &=&\left(\left(\hat{f}_{k+1}\circ\hat{f}_{k}\circ \cdots\circ \hat{f}_0\circ \pi\right)(\vez)\right)[h]. \end{eqnarray*}
Hence, the statement holds for all $k\le \upsilon-1$. Taking $k=\upsilon-1$, the vectors $F(\vey)$ and $\hat{F}(\pi(\vey))$ agree in every coordinate, and Lemma~\ref{lemma:shuffle-1} is proved. \end{proof}
\section{Construction of the Scheduling Instance}\label{appsec:reduction-construction}
Now we provide the details of the reduction. We first recall all the functions and parameters we have set in proving Lemma~\ref{lemma:linked-sum}. \begin{itemize}
\item Recall that $\tau$ is the one-to-one mapping that maps $i$ to $k$ for every $(z_i\oplus \neg z_k)\in C_2$.
\item Applying Lemma~\ref{lemma:uniquesum-2} with $N=n$, we get $\sigma: \ensuremath{\mathbb{Z}}_n\rightarrow \ensuremath{\mathbb{Z}}_{n'}$ where $n'=n^{1+{\mathcal{O}}(\frac{1}{\sqrt{\log\log n}})}$, $\sigma(i)=\sum_{j=0}^\gamma \vea_i[j]x^j$ for $\vea_i[j]\in {\cal{S}}_d$, where $\gamma=\lceil\frac{\log n}{\log\log n}\rceil+O(\frac{\log n}{(\log\log n)^{3/2}})$, $d=e^{{\mathcal{O}}(\sqrt{\log\log n})}\log n$ and $x=5d+1$. We lift the dimension such that $\vea_i=(\vea_i[0],\cdots,\vea_i[\gamma+3])$ where $\vea_i[\gamma+1]=\vea_i[\gamma+2]=\vea_i[\gamma+3]=0$.
\item Applying Lemma~\ref{lemma:uniquesum-2} again with $N=4\gamma+4$, we get another injection $\sigma'$ such that $\sigma'(y)=o(\log^2 n)<x^2$ for $y\le 4\gamma+4$.
\item We have constructed in the proof of Lemma~\ref{lemma:linked-sum}:
$\veb^0_{i}=\vea_i,\veb^1_{i},\veb^2_{i},\cdots,\veb^{2\gamma+2}_i=\hat{\veb}^{2\gamma+2}_k$, $\hat{\veb}^{2\gamma+1}_k$, $\cdots$, $\hat{\veb}^1_k,\hat{\veb}^0_k=\vea_k$ where $k=\tau(i)$.
\item Again, each vector $\vecc$ represents the polynomial $\sum_i\vecc[i]x^i$. Polynomials and vectors are used interchangeably.
\item Let $\sigma_{max}=x^{\gamma+6}=n^{1+O(\frac{1}{\sqrt{\log\log n}})}$, and thus $\sigma_{max}>x\cdot \veb^h_{i}\vex$ and $\sigma_{max}> x\cdot \hat{\veb}^h_{i}\vex$ for any $i,h$, and also $\sigma_{max}>\sigma'(y)x^{\gamma+3}$ for any $y\le 4\gamma+4$. \end{itemize}
\noindent\textbf{Construction of the scheduling instance.} We shall construct two major classes of jobs, gap jobs and main jobs. Main jobs are divided into $5$ types: dummy jobs, clause jobs, truth-assignment jobs, link jobs and variable jobs. Three of these types -- truth-assignment, link and variable jobs -- are further divided into sub-types, e.g., variable jobs are divided into 4 sub-types (see Table~\ref{table:job-time}). The processing time of a gap job is a fixed huge value $10^{14}\sigma_{max}$ minus the processing times of several main jobs.
The processing time of each job can be expressed as a summation of three components: type, index, and true/false. The type component of a main job is always of the form $10^j\sigma_{max}$ where $2\le j\le 13$. Table~\ref{table:job-time} summarizes the value $j$ for each kind of main job, e.g., the type component of a variable job whose sub-type belongs to V$_{\cdot,+,1}$ is $10^5\sigma_{max}$. The index component of clause jobs, truth-assignment jobs and variable jobs is of the form $10\sigma(i)$ for some index $i$. Dummy jobs do not have an index component; link jobs have much more complicated index components, which will be specified later in this subsection. Each main job has a true version and a false version. A gap job does not have a true/false version but only one unified version.
\begin{table}[!ht] \renewcommand\arraystretch{1.3}
\setlength\tabcolsep{1.5pt} \centering{} \resizebox{14cm}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
$\backslash$&Dummy& Clause & \multicolumn{4}{c|}{Truth-assignment} & \multicolumn{2}{c|}{Link} & \multicolumn{4}{c|}{Variable} \\ \hline $\backslash$&DM& CL$_{\cdot}$ & TR$_{\cdot,a}$ & TR$_{\cdot,b}$ & TR$_{\cdot,c}$ & TR$_{\cdot,d}$ & LN$_{\cdot,+}$ & LN$_{\cdot,-}$ & V$_{\cdot,+,1}$ & V$_{\cdot,+,2}$ & V$_{\cdot,-,1}$ & V$_{\cdot,-,2}$ \\ \hline $\zeta(\cdot)$&13 & 12 & 11 & 10 & 9 & 8 & 7 & 6 & 5 & 4 & 3 & 2 \\ \hline \end{tabular} } \caption{Type-component of main jobs} \label{table:job-time} \end{table}
Define a function $\zeta$ that maps the (sub)-type of a main job to the exponent of $10$ as indicated by Table~\ref{table:job-time}, e.g., $\zeta(\textrm{TR}_{\cdot,a})=11$. Now we provide the exact processing time of every job. In the following $\rho\in\{T,F\}$, $\iota\in\{+,-\}$.
$\bullet$ Variable jobs: 4 jobs $V_{i,+,1}^{\rho}$ and $V_{i,+,2}^{\rho}$ are constructed for the positive literal $z_i$, and 4 jobs $V_{i,-,1}^{\rho}$ and $V_{i,-,2}^{\rho}$ for the negative literal $\neg z_i$. \begin{eqnarray*} &&s(V_{i,\iota,\kappa}^T)=10^{\zeta(\textrm{V}_{\cdot,\iota,\kappa})}\sigma_{max}+10\sigma(i)+1,\\ &&s(V_{i,\iota,\kappa}^F)=10^{\zeta(\textrm{V}_{\cdot,\iota,\kappa})}\sigma_{max}+10\sigma(i)+2,\quad \kappa=1,2, \iota=+,- \end{eqnarray*}
$\bullet$ Truth-assignment jobs: 8 jobs $\textrm{TR}_{i,a}^{\rho}$, $\textrm{TR}_{i,b}^{\rho}$, $\textrm{TR}_{i,c}^{\rho}$ and $\textrm{TR}_{i,d}^{\rho}$ are constructed for every $i$. \begin{eqnarray*} &&s(\textrm{TR}_{i,\kappa}^{T})=10^{\zeta(\textrm{TR}_{\cdot,\kappa})}\sigma_{max}+10\sigma(i)+1.5, \\ &&s(\textrm{TR}_{i,\kappa}^{F})=10^{\zeta(\textrm{TR}_{\cdot,\kappa})}\sigma_{max}+10\sigma(i)+1. \quad \kappa=a,b,c,d \end{eqnarray*}
$\bullet$ Clause jobs: there are 3 clause jobs for every clause $cl_{\ell}\in C_1$ where $\ell\in \{2,5,\cdots,n-1\}$, with one $\textrm{CL}_{\ell}^T$ and two copies of $\textrm{CL}_{\ell}^F$: $$s(\textrm{CL}_{\ell}^T)=10^{\zeta(\textrm{CL}_{\cdot})}\sigma_{max}+10\sigma(\ell)+2, \quad s(\textrm{CL}_{\ell}^F)=10^{\zeta(\textrm{CL}_{\cdot})}\sigma_{max}+10\sigma(\ell)+1.$$
$\bullet$ Dummy jobs: there are $n+n/3$ true dummy jobs $\textrm{DM}^T$ of processing time $10^{\zeta(\textrm{DM})}\sigma_{max}+1$, and $n-n/3$ false dummy jobs $\textrm{DM}^F$ of processing time $10^{\zeta(\textrm{DM})}\sigma_{max}+2$.
$\bullet$ Link jobs: We create $4\gamma+4$ link jobs for each clause in $C_2$. Recall the vectors $\veb^h_{i}$ and $\hat{\veb}^h_i$ for $1\le h\le 2\gamma+2$. For every clause $(z_i\oplus \neg z_k)\in C_2$ and every $1\le h\le 2\gamma+2$, we create two pairs of link jobs, LN$_{i,h,+}^T$ and LN$_{i,h,+}^F$, and LN$_{k,h,-}^T$ and LN$_{k,h,-}^F$, such that $$s(\textrm{LN}_{i,h,+}^T)=10^{\zeta(\textrm{LN}_{\cdot,+})}\sigma_{max}+10\veb^{h}_{i}\vex+1, \quad s(\textrm{LN}_{i,h,+}^F)=10^{\zeta(\textrm{LN}_{\cdot,+})}\sigma_{max}+10\veb^h_{i}\vex+2,$$ $$s(\textrm{LN}_{k,h,-}^T)=10^{\zeta(\textrm{LN}_{\cdot,-})}\sigma_{max}+10\hat{\veb}^{h}_{k}\vex+1, \quad s(\textrm{LN}_{k,h,-}^F)=10^{\zeta(\textrm{LN}_{\cdot,-})}\sigma_{max}+10\hat{\veb}^{h}_{k}\vex+2.$$
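The main-job processing times above all combine the type, index, and true/false components. As an illustration (the function and dictionary names below are ours, not part of the construction, and $\sigma$ is passed in as an abstract index map), the processing time of a variable job can be computed as:

```python
# Exponents zeta from Table "Type-component of main jobs".
ZETA = {"DM": 13, "CL": 12, "TR_a": 11, "TR_b": 10, "TR_c": 9, "TR_d": 8,
        "LN_+": 7, "LN_-": 6, "V_+_1": 5, "V_+_2": 4, "V_-_1": 3, "V_-_2": 2}

def variable_job_time(iota, kappa, i, sigma, sigma_max, true_version):
    # s(V_{i,iota,kappa}) = 10^zeta * sigma_max  (type component)
    #                     + 10 * sigma(i)        (index component)
    #                     + 1 or 2               (true/false component)
    type_comp = 10 ** ZETA[f"V_{iota}_{kappa}"] * sigma_max
    index_comp = 10 * sigma(i)
    return type_comp + index_comp + (1 if true_version else 2)
```

Because $\sigma_{max}$ dominates $10\sigma(i)$, the three components occupy disjoint ranges of digits, so the type and index of a job can be read off from any sum of processing times without interference.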
Let $\textrm{TR}_A$, $\textrm{TR}_B$, $\textrm{TR}_C$, $\textrm{TR}_D$ be the sets of jobs $\textrm{TR}_{i,a}^{\rho}$, $\textrm{TR}_{i,b}^{\rho}$, $\textrm{TR}_{i,c}^{\rho}$ and $\textrm{TR}_{i,d}^{\rho}$, respectively. Sometimes we may drop the superscript for simplicity, e.g., we use $\textrm{TR}_{i,a}$ to represent $\textrm{TR}_{i,a}^T$ or $\textrm{TR}_{i,a}^F$. Next we construct the gap jobs, of which there are 5 kinds.
$\bullet$ There are two gap jobs (variable-link jobs) $\theta_{\textrm{V-L},i,+}$ and $\theta_{\textrm{V-L},i,-}$ for each variable $z_i$: \begin{eqnarray*} s(\theta_{\textrm{V-L},i,+})&=&(10^{14}-10^{7}-10^4)\sigma_{max}-10\left(\vea_i[0]+2\sum_{j=1}^{\gamma}\vea_i[j]x^j+(F_0(\vea_i))[0]\cdot x^{\gamma+1}+{\sigma}'(1)x^{\gamma+2}\right)-3\\ &=& \left(10^{14}-10^{\zeta(\textrm{LN}_{\cdot,+})}-10^{\zeta(V_{\cdot,+,2})}\right)\sigma_{max}-10(\vea_i+\veb^1_{i})\vex-3\\ s(\theta_{\textrm{V-L},i,-})&=&(10^{14}-10^6-10^2)\sigma_{max}-10\left(\vea_i[0]+2\sum_{j=1}^{\gamma}\vea_i[j]x^j+(\hat{F}(\vea_i))[0]\cdot x^{\gamma+1}+{\sigma}'(1)x^{\gamma+2}\right)-3\\ &=& \left(10^{14}-10^{\zeta(\textrm{LN}_{\cdot,-})}-10^{\zeta(V_{\cdot,-,2})}\right)\sigma_{max}-10\left(\vea_i+\hat{\veb}^1_i\right)\vex-3 \end{eqnarray*}
$\bullet$ There are $4\gamma+3$ gap jobs (link-link jobs), $\theta_{\textrm{L-L},i,h,+}$ and $\theta_{\textrm{L-L},i,h,-}$ and $\theta_{\textrm{L-L},i,+,-}$ for every $1\le i\le n$. For $h=1,2,\cdots,2\gamma+1$, we define
\begin{eqnarray*} s(\theta_{\textrm{L-L},i,h,+}) &=& \left(10^{14}-2\times 10^{\zeta(\textrm{LN}_{\cdot,+})}\right)\sigma_{max}-10\left(\veb^h_{i}+\veb^{h+1}_i\right)\vex-3\\ s(\theta_{\textrm{L-L},i,h,-}) &=& \left(10^{14}-2\times 10^{\zeta(\textrm{LN}_{\cdot,-})}\right)\sigma_{max}-10\left(\hat{\veb}^h_i+\hat{\veb}^{h+1}_i\right)\vex-3 \end{eqnarray*} Additionally, we define \begin{eqnarray*} s(\theta_{\textrm{L-L},i,+,-}) = \left(10^{14}-10^{\zeta(\textrm{LN}_{\cdot,+})}-10^{\zeta(\textrm{LN}_{\cdot,-})}\right)\sigma_{max}-2\times 10\veb^{2\gamma+2}_i\vex-3 \end{eqnarray*} Here recall that $\veb^{2\gamma+2}_i=\hat{\veb}^{2\gamma+2}_{\tau(i)}$.
$\bullet$ There are three gap jobs (variable-clause-dummy jobs) for each $cl_{\ell}\in C_1$ ($\ell\in \{2,5,\cdots,n-1\}$): for $i=\ell-1,\ell,\ell+1$, if $z_i\in cl_\ell$, we construct $\theta_{\textrm{V-C-D},\ell,i,+}$, otherwise $\neg z_i\in cl_\ell$, and we construct $\theta_{\textrm{V-C-D},\ell,i,-}$: \begin{eqnarray*} &&s(\theta_{\textrm{V-C-D}, \ell,i,+})=\left(10^{14}-10^{\zeta(\textrm{DM})}-10^{\zeta(\textrm{CL}_{\cdot})}-10^{\zeta(V_{\cdot,+,1})}\right)\sigma_{max}-10(\sigma(\ell)+\sigma(i))-4,\\ &&s(\theta_{\textrm{V-C-D},\ell,i,-})=\left(10^{14}-10^{\zeta(\textrm{DM})}-10^{\zeta(\textrm{CL}_{\cdot})}-10^{\zeta(V_{\cdot,-,1})}\right)\sigma_{max}-10(\sigma(\ell)+\sigma(i))-4. \end{eqnarray*}
$\bullet$ There is one gap job (variable-dummy job) for each variable. Notice that each variable appears exactly once in the clauses of $C_1$. If $z_i$ appears positively in $C_1$, we construct $\theta_{\textrm{V-D},i,-}$; otherwise, we construct $\theta_{\textrm{V-D},i,+}$. \begin{eqnarray*} &&s(\theta_{\textrm{V-D},i,+})=\left(10^{14}-10^{\zeta(\textrm{DM})}-10^{\zeta(V_{\cdot,+,1})}\right)\sigma_{max}-10\sigma(i)-3,\\ &&s(\theta_{\textrm{V-D},i,-})=\left(10^{14}-10^{\zeta(\textrm{DM})}-10^{\zeta(V_{\cdot,-,1})}\right)\sigma_{max}-10\sigma(i)-3. \end{eqnarray*}
Thus, for each clause $cl_\ell$ and $i=\ell-1,\ell,\ell+1$, either $\theta_{\textrm{V-D},i,+}$ and $\theta_{\textrm{V-C-D},\ell,i,-}$ exist, or $\theta_{\textrm{V-D},i,-}$ and $\theta_{\textrm{V-C-D},\ell,i,+}$ exist.
$\bullet$ There are four gap jobs (variable-truth jobs) for each variable $z_i$, namely $\theta_{\textrm{V-T},i,a,c}$, $\theta_{\textrm{V-T},i,b,d}$, $\theta_{\textrm{V-T},i,a,d}$ and $\theta_{\textrm{V-T},i,b,c}$: \begin{eqnarray*} &&s(\theta_{\textrm{V-T},i,a,c})=\left(10^{14}-10^{\zeta(V_{\cdot,+,1})}-10^{\zeta(\textrm{TR}_{\cdot,a})}-10^{\zeta(\textrm{TR}_{\cdot,c})}\right)\sigma_{max}-30\sigma(i)-4,\\ &&s(\theta_{\textrm{V-T},i,b,d})=\left(10^{14}-10^{\zeta(V_{\cdot,+,2})}-10^{\zeta(\textrm{TR}_{\cdot,b})}-10^{\zeta(\textrm{TR}_{\cdot,d})}\right)\sigma_{max}-30\sigma(i)-4,\\ &&s(\theta_{\textrm{V-T},i,a,d})=\left(10^{14}-10^{\zeta(V_{\cdot,-,1})}-10^{\zeta(\textrm{TR}_{\cdot,a})}-10^{\zeta(\textrm{TR}_{\cdot,d})}\right)\sigma_{max}-30\sigma(i)-4,\\ &&s(\theta_{\textrm{V-T},i,b,c})=\left(10^{14}-10^{\zeta(V_{\cdot,-,2})}-10^{\zeta(\textrm{TR}_{\cdot,b})}-10^{\zeta(\textrm{TR}_{\cdot,c})}\right)\sigma_{max}-30\sigma(i)-4, \end{eqnarray*}
Overall, we have constructed $2\gamma n+8n$ gap jobs, and we construct $2\gamma n+8n$ machines. Table~\ref{table:job-time-whole} summarizes the processing times of all jobs.
\begin{table}[!ht] \renewcommand\arraystretch{1.3}
\setlength\tabcolsep{1.5pt} \centering \resizebox{14cm}{!}{
\begin{tabular}{|c|l|l|l|c|c|} \hline
Job-type & Sub-type & Type-component & Index-component & \makecell{T/F\\(T)} & \makecell{T/F\\(F)} \\ \hline \multirow{4}{*}{Variable} & $V_{i,+,1}$ & $10^{\zeta(\textrm{V}_{\cdot,+,1})}\sigma_{max}$ & $10\sigma(i)$ & 1 & 2 \\ \cline{2-6}
& $V_{i,+,2}$ & $10^{\zeta(\textrm{V}_{\cdot,+,2})}\sigma_{max}$ & $10\sigma(i)$ & 1 & 2 \\ \cline{2-6}
& $V_{i,-,1}$ & $10^{\zeta(\textrm{V}_{\cdot,-,1})}\sigma_{max}$ & $10\sigma(i)$ & 1 & 2 \\ \cline{2-6}
& $V_{i,-,2}$ & $10^{\zeta(\textrm{V}_{\cdot,-,2})}\sigma_{max}$ & $10\sigma(i)$ & 1 & 2 \\ \hline \multirow{4}{*}{Truth-assignment} & TR$_{i,a}$ & $10^{\zeta(\textrm{TR}_{\cdot,a})}\sigma_{max}$ & $10\sigma(i)$ & 1.5 & 1 \\ \cline{2-6}
& TR$_{i,b}$ & $10^{\zeta(\textrm{TR}_{\cdot,b})}\sigma_{max}$ & $10\sigma(i)$ & 1.5 & 1 \\ \cline{2-6}
& TR$_{i,c}$ & $10^{\zeta(\textrm{TR}_{\cdot,c})}\sigma_{max}$ & $10\sigma(i)$ & 1.5 & 1 \\ \cline{2-6}
& TR$_{i,d}$ & $10^{\zeta(\textrm{TR}_{\cdot,d})}\sigma_{max}$ & $10\sigma(i)$ & 1.5 & 1 \\ \hline
Clause & CL$_\ell$ & $10^{\zeta(\textrm{CL}_{\cdot})}\sigma_{max}$ & $10\sigma(\ell)$ & 2 & 1 \\ \hline
Dummy & DM & $10^{\zeta(\textrm{DM})}\sigma_{max}$ & 0 & 1 & 2 \\ \hline \multirow{2}{*}{\makecell{Link\\$h\in\{0,1,\cdots,2\gamma+2\}$}} & {LN$_{i,h,+}$} & $10^{\zeta(\textrm{LN}_{\cdot,+})}\sigma_{max}$ & $10\veb^h_{i}\vex$ & 1 & 2 \\ \cline{2-6}
& {LN$_{i,h,-}$} & $10^{\zeta(\textrm{LN}_{\cdot,-})}\sigma_{max}$ & $10\hat{\veb}^h_{i}\vex$ & 1 & 2 \\ \hline
\multirow{2}{*}{Variable-Link} & $\theta_{\textrm{V-L},i,+}$ & $(10^{14}-10^{\zeta(\textrm{LN}_{\cdot,+})}-10^{\zeta(V_{\cdot,+,2})})\sigma_{max}$ & $-10(\vea_i+\veb^1_{i})\vex$ & \multicolumn{2}{l|}{-3} \\ \cline{2-6}
& $\theta_{\textrm{V-L},i,-}$ & $(10^{14}-10^{\zeta(\textrm{LN}_{\cdot,-})}-10^{\zeta(V_{\cdot,-,2})})\sigma_{max}$ & $-10(\vea_i+\hat{\veb}^1_i)\vex$ & \multicolumn{2}{l|}{-3} \\ \hline
\multirow{3}{*}{\makecell{Link-Link\\$h\in\{1,\cdots,2\gamma+1\}$}} & {$\theta_{\textrm{L-L},i,h,+}$} & $(10^{14}-2\times 10^{\zeta(\textrm{LN}_{\cdot,+})})\sigma_{max}$ & $-10(\veb^h_{i}+\veb^{h+1}_i)\vex$ & \multicolumn{2}{l|}{-3} \\ \cline{2-6}
& {$\theta_{\textrm{L-L},i,h,-}$} & $(10^{14}-2\times 10^{\zeta(\textrm{LN}_{\cdot,-})})\sigma_{max}$ & $-10(\hat{\veb}^h_{i}+\hat{\veb}^{h+1}_i)\vex$ & \multicolumn{2}{l|}{-3} \\ \cline{2-6}
& $\theta_{\textrm{L-L},i,+,-}$ & $(10^{14}-10^{\zeta(\textrm{LN}_{\cdot,+})}-10^{\zeta(\textrm{LN}_{\cdot,-})})\sigma_{max}$ & $-20\veb^{2\gamma+2}_i\vex$ & \multicolumn{2}{l|}{-3} \\ \cline{2-6} \hline
\multirow{2}{*}{\makecell{Variable-Clause\\-Dummy, $|i-\ell|\le 1$}} & $\theta_{\textrm{V-C-D},\ell,i,+}$ & $(10^{14}-10^{\zeta(\textrm{DM})}-10^{\zeta(\textrm{CL}_{\cdot})}-10^{\zeta(V_{\cdot,+,1})})\sigma_{max}$ & $-10(\sigma(\ell)+\sigma(i))$ & \multicolumn{2}{l|}{-4} \\ \cline{2-6}
& $\theta_{\textrm{V-C-D},\ell,i,-}$ & $(10^{14}-10^{\zeta(\textrm{DM})}-10^{\zeta(\textrm{CL}_{\cdot})}-10^{\zeta(V_{\cdot,-,1})})\sigma_{max}$ & $-10(\sigma(\ell)+\sigma(i))$ & \multicolumn{2}{l|}{-4} \\ \hline
\multirow{2}{*}{Variable-Dummy} & $\theta_{\textrm{V-D},i,+}$ & $(10^{14}-10^{\zeta(\textrm{DM})}-10^{\zeta(V_{\cdot,+,1})})\sigma_{max}$ & $-10\sigma(i)$ & \multicolumn{2}{l|}{-3} \\ \cline{2-6}
& $\theta_{\textrm{V-D},i,-}$ & $(10^{14}-10^{\zeta(\textrm{DM})}-10^{\zeta(V_{\cdot,-,1})})\sigma_{max}$ & $-10\sigma(i)$ & \multicolumn{2}{l|}{-3} \\ \hline
\multirow{4}{*}{{Variable-Truth}} & $\theta_{\textrm{V-T},i,a,c}$ & $(10^{14}-10^{\zeta(V_{\cdot,+,1})}-10^{\zeta(\textrm{TR}_{\cdot,a})}-10^{\zeta(\textrm{TR}_{\cdot,c})})\sigma_{max}$ & $-30\sigma(i)$ & \multicolumn{2}{l|}{-4} \\ \cline{2-6}
& $\theta_{\textrm{V-T},i,b,d}$ & $(10^{14}-10^{\zeta(V_{\cdot,+,2})}-10^{\zeta(\textrm{TR}_{\cdot,b})}-10^{\zeta(\textrm{TR}_{\cdot,d})})\sigma_{max}$ & $-30\sigma(i)$ & \multicolumn{2}{l|}{-4} \\ \cline{2-6}
& $\theta_{\textrm{V-T},i,a,d}$ & $(10^{14}-10^{\zeta(V_{\cdot,-,1})}-10^{\zeta(\textrm{TR}_{\cdot,a})}-10^{\zeta(\textrm{TR}_{\cdot,d})})\sigma_{max}$ & $-30\sigma(i)$ & \multicolumn{2}{l|}{-4} \\ \cline{2-6}
& $\theta_{\textrm{V-T},i,b,c}$ & $(10^{14}-10^{\zeta(V_{\cdot,-,2})}-10^{\zeta(\textrm{TR}_{\cdot,b})}-10^{\zeta(\textrm{TR}_{\cdot,c})})\sigma_{max}$ & $-30\sigma(i)$ & \multicolumn{2}{l|}{-4} \\ \hline \end{tabular} } \caption{Job processing times} \label{table:job-time-whole} \end{table}
\section{Proof of Theorem~\ref{thm:lower-bound}}\label{appsec:remaining-proofs} The proof is carried out in four steps. We first show in Section~\ref{subsec:unique-time} that every job in the constructed instance has a unique processing time. This allows us to refer to a job by its symbol (e.g., $V_{i,+,1}^T$) as well as by its processing time. Next, we show in Section~\ref{subsec:sat-to-scheduling-simple} that if a significant fraction of the clauses in the 3SAT$'$ instance is satisfiable, then the constructed scheduling instance admits a solution with a small objective value. We then show in Section~\ref{subsec:sche-to-sat} that if every truth-assignment for the 3SAT$'$ instance leaves a significant fraction of the clauses unsatisfied, then the constructed scheduling instance admits no solution with a small objective value. Finally, we prove the correctness of the reduction in Section~\ref{subsec:formal-proof} by combining these two facts.
\subsection{Uniqueness of job processing times}\label{subsec:unique-time} We claim that the processing time of each job we create is unique, so that there is a one-to-one correspondence between the symbol of a job and its processing time. To see the claim, consider Table~\ref{table:job-time-whole}. It suffices to compare the processing times of jobs within each subtype. Since $\sigma$ is an injection, the processing time of each variable job, truth-assignment job, clause job, variable-dummy job and variable-truth job is clearly unique. For variable-clause-dummy jobs, property 4 of Lemma~\ref{lemma:uniquesum-2} shows that the sum $\sigma(\ell)+\sigma(i)$ is unique for $i=\ell-1,\ell,\ell+1$. The uniqueness of link jobs follows from the uniqueness of the $\veb^h_{i}$'s by Lemma~\ref{lemma:veb-uniquesum-1}, and the uniqueness of link-link jobs follows from the uniqueness of the sums $\veb^h_{i}+\veb^{h+1}_i$ by Lemma~\ref{lemma:veb-uniquesum-2}.
\subsection{3SAT\texorpdfstring{$'$}{Lg} to Scheduling} \label{subsec:sat-to-scheduling-simple} The goal of this subsection is to prove the following lemma. \begin{lemma}\label{lemma:sat-sche} If there is a truth-assignment under which at most $\vartheta n$ clauses are not satisfied, then the constructed scheduling instance admits a feasible schedule with an objective value of at most $(10^{14}\sigma_{max})^{q} (2\gamma n+ 8n) +\vartheta n\cdot \frac{q(q-1)}{2}(10^{14}\sigma_{max})^{q-2}+o(n\sigma_{max}^{q-2})$. \end{lemma}
Recall that every main job, except the clause job, admits a true copy and a false copy, while the clause job admits a true copy and two false copies. We first ignore the true/false versions of the jobs and schedule them according to Table~\ref{table:job-schedule}, where each row represents the jobs that are scheduled on one machine.
\begin{table}[!ht] \renewcommand\arraystretch{1.3}
\setlength\tabcolsep{1.5pt} \centering \resizebox{9cm}{!}{
\begin{tabular}{|c|l|l|l|l|ll} \hline \multirow{2}{*}{Variable-Link} & $\theta_{\textrm{V-L},i,+}$ & $V_{i,+,2}$ & LN$_{i,1,+}$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{V-L},i,-}$ & $V_{i,-,2}$ & LN$_{i,1,-}$ & $\backslash$ \\ \hline \multirow{3}{*}{\makecell{Link-Link\\$h\in\{1,2,\cdots,2\gamma+1\}$}} & {$\theta_{\textrm{L-L},i,h,+}$} & LN$_{i,h,+}$ & LN$_{i,h+1,+}$ & $\backslash$ \\ \cline{2-5}
& {$\theta_{\textrm{L-L},i,h,-}$} & LN$_{i,h,-}$ & LN$_{i,h+1,-}$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{L-L},i,+,-}$ & LN$_{i,2\gamma+2,+}$ & LN$_{\tau(i),2\gamma+2,-}$ & $\backslash$ \\ \cline{2-5} \hline
\multirow{2}{*}{\makecell{Variable-Clause-Dummy\\$|i-\ell|\le 1$}} & $\theta_{\textrm{V-C-D},\ell,i,+}$ & $V_{i,+,1}$ & CL$_{\ell}$ & DM \\ \cline{2-5}
& $\theta_{\textrm{V-C-D},\ell,i,-}$ & $V_{i,-,1}$ & CL$_{\ell}$ & DM \\ \hline \multirow{2}{*}{{Variable-Dummy}} & $\theta_{\textrm{V-D},i,+}$ & $V_{i,+,1}$ & DM & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{V-D},i,-}$ & $V_{i,-,1}$ & DM & $\backslash$ \\ \hline \multirow{4}{*}{{Variable-Truth}} & $\theta_{\textrm{V-T},i,a,c}$ & $V_{i,+,1}$ & TR$_{i,a}$ & TR$_{i,c}$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,b,d}$ & $V_{i,+,2}$ & TR$_{i,b}$ & TR$_{i,d}$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,a,d}$ & $V_{i,-,1}$ & TR$_{i,a}$ & TR$_{i,d}$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,b,c}$ & $V_{i,-,2}$ & TR$_{i,b}$ & TR$_{i,c}$ \\ \hline \end{tabular} }
\caption{SAT to Scheduling -- Jobs scheduled on each machine} \label{table:job-schedule} \end{table}
We show that if we schedule according to Table~\ref{table:job-schedule}, then every job is scheduled (ignoring the superscripts $T$ and $F$, which will be determined later). It is obvious that every gap job is scheduled. For simplicity, we slightly abuse notation by using the symbol of a gap job to denote the machine on which it is scheduled.
$\bullet$ Consider clause jobs. Recall that for each clause $cl_\ell$ and $i=\ell-1,\ell,\ell+1$, we either construct $\theta_{\textrm{V-D},i,-}$ and $\theta_{\textrm{V-C-D},\ell,i,+}$ if the positive literal $z_i$ occurs in $C_1$, or construct $\theta_{\textrm{V-D},i,+}$ and $\theta_{\textrm{V-C-D},\ell,i,-}$ if the negative literal $\neg z_i$ occurs in $C_1$. Hence the three copies of job $\textrm{CL}_\ell$ appear on machine $\theta_{\textrm{V-C-D},\ell,\ell-1,+}$ or $\theta_{\textrm{V-C-D},\ell,\ell-1,-}$, machine $\theta_{\textrm{V-C-D},\ell,\ell,+}$ or $\theta_{\textrm{V-C-D},\ell,\ell,-}$, and machine $\theta_{\textrm{V-C-D},\ell,\ell+1,+}$ or $\theta_{\textrm{V-C-D},\ell,\ell+1,-}$. Thus, all three copies of a clause job are scheduled.
$\bullet$ Consider truth-assignment jobs. There are two copies of TR$_{i,a}$, TR$_{i,b}$, TR$_{i,c}$ and TR$_{i,d}$. It is easy to see that all of them are scheduled on machines $\theta_{\textrm{V-T},i,a,c}$, $\theta_{\textrm{V-T},i,b,d}$, $\theta_{\textrm{V-T},i,a,d}$ and $\theta_{\textrm{V-T},i,b,c}$.
$\bullet$ Consider variable jobs. There are two copies of $V_{i,+,1}$, $V_{i,+,2}$, $V_{i,-,1}$ and $V_{i,-,2}$. It is easy to see that one copy of each is scheduled on machines $\theta_{\textrm{V-T},i,a,c}$, $\theta_{\textrm{V-T},i,b,d}$, $\theta_{\textrm{V-T},i,a,d}$ and $\theta_{\textrm{V-T},i,b,c}$, respectively. One copy of $V_{i,+,2}$ and one copy of $V_{i,-,2}$ are scheduled on machines $\theta_{\textrm{V-L},i,+}$ and $\theta_{\textrm{V-L},i,-}$, respectively. If machines $\theta_{\textrm{V-D},i,-}$ and $\theta_{\textrm{V-C-D},\ell,i,+}$ exist (when the positive literal $z_i$ occurs in $C_1$), then $V_{i,-,1}$ and $V_{i,+,1}$ are scheduled on them, respectively; otherwise, machines $\theta_{\textrm{V-D},i,+}$ and $\theta_{\textrm{V-C-D},\ell,i,-}$ exist (when the negative literal $\neg z_i$ occurs in $C_1$), and $V_{i,+,1}$ and $V_{i,-,1}$ are scheduled on them, respectively.
$\bullet$ Consider link jobs. There are two copies of LN$_{i,h,+}$ (or LN$_{i,h,-}$) for $1\le h\le 2\gamma+2$. Let $\iota\in\{+,-\}$. The two copies of LN$_{i,1,\iota}$ are scheduled on machines $\theta_{\textrm{V-L},i,\iota}$ and $\theta_{\textrm{L-L},i,1,\iota}$. For $2\le h\le 2\gamma+1$, the two copies of LN$_{i,h,\iota}$ are scheduled on machines $\theta_{\textrm{L-L},i,h-1,\iota}$ and $\theta_{\textrm{L-L},i,h,\iota}$. The two copies of LN$_{i,2\gamma+2,+}$ are scheduled on machines $\theta_{\textrm{L-L},i,2\gamma+1,+}$ and $\theta_{\textrm{L-L},i,+,-}$, and the two copies of LN$_{i,2\gamma+2,-}$ are scheduled on machines $\theta_{\textrm{L-L},i,2\gamma+1,-}$ and $\theta_{\textrm{L-L},\tau^{-1}(i),+,-}$, where $\tau^{-1}$ is the inverse of the mapping $\tau$ (note that $\tau^{-1}$ exists since $\tau$ is one-to-one).
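This double counting can be verified mechanically. The following minimal sketch (hypothetical Python, not part of the construction; the toy parameters and the cyclic bijection \texttt{tau} are our own) builds the link-job rows of Table~\ref{table:job-schedule} and checks that every link job LN$_{i,h,\iota}$ is scheduled exactly twice.

```python
from collections import Counter

n, gamma = 6, 2                         # toy parameters (assumptions, not from the paper)
H = 2 * gamma + 2                       # link indices run over h = 1, ..., 2*gamma + 2

def tau(i):                             # a toy one-to-one mapping tau on {1, ..., n}
    return i % n + 1

machines = {}
for i in range(1, n + 1):
    for s in ("+", "-"):
        # machine theta_{V-L,i,s} holds one copy of LN_{i,1,s}
        machines[("V-L", i, s)] = [("LN", i, 1, s)]
        # machine theta_{L-L,i,h,s} holds LN_{i,h,s} and LN_{i,h+1,s}
        for h in range(1, 2 * gamma + 2):
            machines[("L-L", i, h, s)] = [("LN", i, h, s), ("LN", i, h + 1, s)]
    # machine theta_{L-L,i,+,-} holds LN_{i,2gamma+2,+} and LN_{tau(i),2gamma+2,-}
    machines[("L-L", i, "+", "-")] = [("LN", i, H, "+"), ("LN", tau(i), H, "-")]

counts = Counter(job for jobs in machines.values() for job in jobs)
# every link job LN_{i,h,s} has exactly two copies, and both are scheduled
assert all(counts[("LN", i, h, s)] == 2
           for i in range(1, n + 1) for h in range(1, H + 1) for s in "+-")
```

Since $\tau$ is a bijection, each LN$_{i,2\gamma+2,-}$ is claimed by exactly one machine $\theta_{\textrm{L-L},\tau^{-1}(i),+,-}$, which is what the counter confirms.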
$\bullet$ Consider dummy jobs. There are in total $2n$ dummy jobs. It is obvious that for every $i$, two dummy jobs are scheduled either on machines $\theta_{\textrm{V-C-D},\ell,i,+}$ and $\theta_{\textrm{V-D},i,-}$, or on machines $\theta_{\textrm{V-C-D},\ell,i,-}$ and $\theta_{\textrm{V-D},i,+}$.
Next, we consider the load of every machine. According to Table~\ref{table:job-time-whole}, it is easy to verify that on each machine the type-components of the jobs sum up to $10^{14}\sigma_{max}$, and the index-components sum up to $0$. Now consider the T/F-components. It is easy to verify that the T/F-components of all jobs add up to $0$, hence we have the following direct observation.
\begin{observation}\label{obs:total-processing} The total processing time of all jobs adds up to $10^{14}\sigma_{max} \cdot (2\gamma n +8n)$. \end{observation}
\begin{table}[!ht] \renewcommand\arraystretch{1.3}
\setlength\tabcolsep{1.5pt} \centering \resizebox{12cm}{!}{
\begin{tabular}{|c|l|l|l|l|l|} \hline \multirow{2}{*}{Variable-Link} & $\theta_{\textrm{V-L},i,+}$ & $V_{i,+,2}^T$ & LN$_{i,1,+}^F$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{V-L},i,-}$ & $V_{i,-,2}^F$ & LN$_{i,1,-}^{T}$ & $\backslash$ \\ \hline \multirow{3}{*}{\makecell{Link-Link\\$h\in\{1,2,\cdots,2\gamma+1\}$}} & {$\theta_{\textrm{L-L},i,h,+}$} & LN$_{i,h,+}^F$ & LN$_{i,h+1,+}^T$ & $\backslash$ \\ \cline{2-5}
& {$\theta_{\textrm{L-L},i,h,-}$} & LN$_{i,h,-}^F$ & LN$_{i,h+1,-}^T$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{L-L},i,+,-}$ & LN$_{i,2\gamma+2,+}^T$ & LN$_{\tau(i),2\gamma+2,-}^*$ & $\backslash$ \\ \cline{2-5} \hline \multirow{2}{*}{\makecell{Variable-Clause-Dummy \& Variable-Dummy\\Case 1: positive literal $z_i\in C_1$}} & $\theta_{\textrm{V-C-D},\ell,i,+}$ & $V_{i,+,1}^T$ & CL$_{\ell}^{*}$ & DM$^{*}$ \\ \cline{2-5}
& $\theta_{\textrm{V-D},i,-}$ & $V_{i,-,1}^F$ & DM$^*$ & $\backslash$ \\ \hline \multirow{2}{*}{\makecell{Variable-Clause-Dummy \& Variable-Dummy\\Case 2: negative literal $\neg z_i\in C_1$}} & $\theta_{\textrm{V-C-D},\ell,i,-}$ & $V_{i,-,1}^F$ & CL$_{\ell}^{*}$ & DM$^*$ \\ \cline{2-5}
& $\theta_{\textrm{V-D},i,+}$ & $V_{i,+,1}^T$ & DM$^*$ & $\backslash$ \\ \hline \multirow{4}{*}{{Variable-Truth}} & $\theta_{\textrm{V-T},i,a,c}$ & $V_{i,+,1}^F$ & TR$_{i,a}^F$ & TR$_{i,c}^F$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,b,d}$ & $V_{i,+,2}^F$ & TR$_{i,b}^F$ & TR$_{i,d}^F$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,a,d}$ & $V_{i,-,1}^T$ & TR$_{i,a}^T$ & TR$_{i,d}^T$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,b,c}$ & $V_{i,-,2}^T$ & TR$_{i,b}^T$ & TR$_{i,c}^T$ \\ \hline \end{tabular} } \caption{Scheduling of Truth/False types of jobs if the variable $z_i$ is true} \label{table:job-variable-true} \end{table}
\begin{table}[!ht] \renewcommand\arraystretch{1.3}
\setlength\tabcolsep{1.5pt} \centering \resizebox{12cm}{!}{
\begin{tabular}{|c|l|l|l|l|l|} \hline \multirow{2}{*}{Variable-Link} & $\theta_{\textrm{V-L},i,+}$ & $V_{i,+,2}^F$ & LN$_{i,1,+}^T$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{V-L},i,-}$ & $V_{i,-,2}^T$ & LN$_{i,1,-}^{F}$ & $\backslash$ \\ \hline \multirow{3}{*}{\makecell{Link-Link\\$h\in\{1,2,\cdots,2\gamma+1\}$}} & {$\theta_{\textrm{L-L},i,h,+}$} & LN$_{i,h,+}^T$ & LN$_{i,h+1,+}^F$ & $\backslash$\\ \cline{2-5}
& {$\theta_{\textrm{L-L},i,h,-}$} & LN$_{i,h,-}^T$ & LN$_{i,h+1,-}^F$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{L-L},i,+,-}$ & LN$_{i,2\gamma+2,+}^F$ & LN$_{\tau(i),2\gamma+2,-}^*$ & $\backslash$ \\ \cline{2-5} \hline \multirow{2}{*}{\makecell{Variable-Clause-Dummy \& Variable-Dummy\\Case 1: positive literal $z_i\in C_1$}} & $\theta_{\textrm{V-C-D},\ell,i,+}$ & $V_{i,+,1}^F$ & CL$_{\ell}^{*}$ & DM$^{*}$ \\ \cline{2-5}
& $\theta_{\textrm{V-D},i,-}$ & $V_{i,-,1}^T$ & DM$^*$ & $\backslash$ \\ \hline \multirow{2}{*}{\makecell{Variable-Clause-Dummy \& Variable-Dummy\\Case 2: negative literal $\neg z_i\in C_1$}} & $\theta_{\textrm{V-C-D},\ell,i,-}$ & $V_{i,-,1}^T$ & CL$_{\ell}^{*}$ & DM$^*$ \\ \cline{2-5}
& $\theta_{\textrm{V-D},i,+}$ & $V_{i,+,1}^F$ & DM$^*$ & $\backslash$ \\ \hline \multirow{4}{*}{{Variable-Truth}} & $\theta_{\textrm{V-T},i,a,c}$ & $V_{i,+,1}^T$ & TR$_{i,a}^T$ & TR$_{i,c}^T$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,b,d}$ & $V_{i,+,2}^T$ & TR$_{i,b}^T$ & TR$_{i,d}^T$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,a,d}$ & $V_{i,-,1}^F$ & TR$_{i,a}^F$ & TR$_{i,d}^F$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,b,c}$ & $V_{i,-,2}^F$ & TR$_{i,b}^F$ & TR$_{i,c}^F$ \\ \hline \end{tabular} } \caption{Scheduling of Truth/False types of jobs if the variable $z_i$ is false} \label{table:job-variable-false} \end{table}
Consider the truth-assignment of $I_{sat}$. If the variable $z_i$ is true, we determine the true/false versions of the main jobs according to Table~\ref{table:job-variable-true}. Otherwise, $z_i$ is false in the assignment, and we flip the true/false versions of all jobs in Table~\ref{table:job-variable-true}, i.e., we schedule according to Table~\ref{table:job-variable-false}. It is easy to see that in each row of Table~\ref{table:job-variable-true}, if no job carries the superscript $*$, then the T/F-components sum up to $0$, i.e., the load of this machine is exactly $10^{14}\sigma_{max}$. We call the current schedule a semi-schedule. It remains to determine the true/false versions of the jobs with the superscript $*$.
$\bullet$ Consider link-link machines. We only need to consider machines $\theta_{\textrm{L-L},i,+,-}$. The T/F-type of the job LN$_{i,2\gamma+2,+}$ has already been decided based on the truth value of the variable $z_i$. Consider the other job LN$_{\tau(i),2\gamma+2,-}$ scheduled on this machine. Based on the truth value of the variable $z_{\tau(i)}$, one copy of LN$_{\tau(i),2\gamma+2,-}$ is scheduled on $\theta_{\textrm{L-L},\tau(i),2\gamma+1,-}$, and the remaining copy is scheduled on $\theta_{\textrm{L-L},i,+,-}$. If $z_{\tau(i)}$ is true, the remaining copy is LN$_{\tau(i),2\gamma+2,-}^F$; otherwise, the remaining copy is LN$_{\tau(i),2\gamma+2,-}^T$. Hence, we have the following observation: \begin{itemize}[--] \item if variable $z_i$ is true and $z_{\tau(i)}$ is false, then LN$_{i,2\gamma+2,+}^T$ and LN$_{\tau(i),2\gamma+2,-}^T$ are on this machine, so the load is $10^{14}\sigma_{max}-1$; \item if variable $z_i$ is false and $z_{\tau(i)}$ is true, then LN$_{i,2\gamma+2,+}^F$ and LN$_{\tau(i),2\gamma+2,-}^F$ are on this machine, so the load is $10^{14}\sigma_{max}+1$; \item if variables $z_i$ and $z_{\tau(i)}$ are both true or both false, then one of LN$_{i,2\gamma+2,+}$ and LN$_{\tau(i),2\gamma+2,-}$ is true and the other is false, so the load is $10^{14}\sigma_{max}$. \end{itemize}
The above observation leads to the following claim. \begin{claim}\label{claim:sat-to-schedule-1} The load of machine $\theta_{\textrm{L-L},i,+,-}$ is $10^{14}\sigma_{max}$ if the clause $(z_i\oplus\neg z_{\tau(i)})$ is satisfied, and is $10^{14}\sigma_{max}\pm 1$ otherwise. \end{claim}
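The case analysis behind Claim~\ref{claim:sat-to-schedule-1} is a four-row truth table and can be checked mechanically. Below is a minimal sketch (hypothetical Python, not part of the construction); the constants $1$, $2$ and $-3$ are the T/F-components of a true copy, a false copy and the gap job, respectively, taken from Table~\ref{table:job-time-whole}.

```python
# T/F-components: a true copy of a link job contributes 1, a false copy 2,
# and the gap job theta_{L-L,i,+,-} contributes -3.
def load_offset(z_i, z_tau_i):
    """Deviation of the load of theta_{L-L,i,+,-} from 10^14 * sigma_max."""
    ln_plus = 1 if z_i else 2        # LN_{i,2g+2,+} is the true copy iff z_i is true
    ln_minus = 2 if z_tau_i else 1   # leftover copy of LN_{tau(i),2g+2,-} is false iff z_tau(i) is true
    return ln_plus + ln_minus - 3

# the clause (z_i xor not z_tau(i)) is satisfied iff z_i == z_tau(i)
assert load_offset(True, True) == 0 and load_offset(False, False) == 0
assert load_offset(True, False) == -1    # two true copies: load is one less
assert load_offset(False, True) == 1     # two false copies: load is one more
```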
$\bullet$ Consider variable-clause-dummy and variable-dummy machines. Notice that there is one true copy and two false copies of CL$_{\ell}$, scheduled on machines $\theta_{\textrm{V-C-D},\ell,i,\kappa_{i-\ell}}$ where $i\in\{\ell-1,\ell,\ell+1\}$ and $\kappa_{i-\ell}\in\{+,-\}$. If there exists at least one $i=i^*$ such that $V_{i,\kappa_{i^*-\ell},1}^T$ is on machine $\theta_{\textrm{V-C-D},\ell,i^*,\kappa_{i^*-\ell}}$, then we schedule CL$_{\ell}^T$ on machine $\theta_{\textrm{V-C-D},\ell,i^*,\kappa_{i^*-\ell}}$, and schedule the two copies of CL$_{\ell}^F$ on the remaining two machines, respectively. Otherwise, we schedule CL$_{\ell}^T$ on machine $\theta_{\textrm{V-C-D},\ell,\ell-1,\kappa_{-1}}$ and the two false copies CL$_{\ell}^F$ on machines $\theta_{\textrm{V-C-D},\ell,i,\kappa_{i-\ell}}$ where $i=\ell,\ell+1$.
Finally, we determine the true/false version of dummy jobs on variable-clause-dummy and variable-dummy machines. Recall that there are $n+n/3$ true dummy and $n-n/3$ false dummy jobs.
On variable-clause-dummy machines, if the clause job is true, we schedule a true dummy job. Otherwise, the clause job is false; in this case, if the variable job is true (resp.\ false), we schedule a false (resp.\ true) dummy job.
On variable-dummy machines, we schedule dummy jobs in the following way. A false variable job is always scheduled with a true dummy job. For true variable jobs, we first partition the indices of variables, $\{1,2,\cdots,n\}$, into two subsets $S_1,S_2$ such that $$S_1=\{i: {\textrm{On machine $\theta_{\textrm{V-C-D},\ell,i,\kappa_{i-\ell}}$ there is a true clause job and a false variable job}}\},$$ and $S_2$ consists of the remaining indices. On machine $\theta_{\textrm{V-D},i,+}$ or $\theta_{\textrm{V-D},i,-}$ where $i\in S_1$ and the variable job is true, we schedule a true dummy job; on machine $\theta_{\textrm{V-D},i,+}$ or $\theta_{\textrm{V-D},i,-}$ where $i\not\in S_1$ and the variable job is true, we schedule a false dummy job.
Consider the true/false versions of the two jobs on a variable-dummy machine and use $(T/F,T/F)$ to denote them in the order: variable job, dummy job. Then the above scheduling can be restated as follows. A variable-dummy machine $\theta_{\textrm{V-D},i,+}$ or $\theta_{\textrm{V-D},i,-}$ is: \begin{itemize}
\item $(F,T)$, if a false variable job is on it;
\item $(T,F)$, if a true variable job is on it and $i\not\in S_1$;
\item $(T,T)$, if a true variable job is on it and $i\in S_1$. \end{itemize} Hence there are in total three kinds of variable-dummy machines $(F,T), (T,T), (T,F)$.
Now we check the total number of true and false dummy jobs scheduled in the above way. Similarly, we consider the true/false versions of the three jobs on a variable-clause-dummy machine and use $(T/F,T/F,T/F)$ to denote them in the order: variable job, clause job, dummy job. Then there are in total four kinds of variable-clause-dummy machines: $(F,T,T), (T,T,T), (T,F,F), (F,F,T)$. Let $\sharp(T/F,T/F,T/F)$ and $\sharp(T/F,T/F)$ be the number of machines of each kind. Then we have the following observations: \begin{subequations} \begin{eqnarray}
&&\sharp(F,T,T)=|S_1| \label{ILP:1}\\ &&\sharp(F,T,T)+\sharp(T,T,T)=n/3 \label{ILP:2}\\ &&\sharp(T,F,F)+\sharp(F,F,T)=2n/3 \label{ILP:3}\\ &&\sharp(F,T)+\sharp(T,T)+\sharp(T,F)=n \label{ILP:4}\\ &&\sharp(F,T,T)+\sharp(F,F,T)+\sharp(F,T)=n \label{ILP:5}\\
&&\sharp(T,T)=|S_1|\label{ILP:6} \end{eqnarray} \end{subequations}
Here Eq~\eqref{ILP:1} follows from the definition of $S_1$. Eq~\eqref{ILP:2} follows from the fact that there are in total $n/3$ true clause jobs. Eq~\eqref{ILP:3} follows from the fact that there are in total $n$ clause jobs, and hence $2n/3$ false clause jobs. Eq~\eqref{ILP:4} follows from the fact that there are in total $n$ variable-dummy machines. Eq~\eqref{ILP:5} follows from the fact that there are in total $n$ false variable jobs. We now explain Eq~\eqref{ILP:6}. Notice that for each $i$ there are in total $8$ variable jobs (i.e., $V_{i,\cdot,\cdot}$): 4 true copies and 4 false copies. Among them, 2 true and 2 false copies are scheduled on variable-truth machines, and 1 true and 1 false copy are scheduled on variable-link machines (see Table~\ref{table:job-variable-true}). Hence, 1 true and 1 false copy are scheduled on variable-clause-dummy and variable-dummy machines. For any $i$, if the false (or true) variable job $V_{i,\cdot,\cdot}$ is scheduled on a variable-clause-dummy machine, then the remaining true (or false) variable job is scheduled on a variable-dummy machine. Now let $S_3$ be the set of all $i$ such that the true variable job $V_{i,\cdot,\cdot}$ is scheduled with a true dummy job on a variable-dummy machine. According to the way we schedule, on machine $\theta_{\textrm{V-D},i,+}$ or $\theta_{\textrm{V-D},i,-}$ we schedule a true variable job together with a true dummy job only if $i\in S_1$ (otherwise, either the variable job or the dummy job is false), hence $S_3\subseteq S_1$. Conversely, for any $i\in S_1$ the false variable job $V_{i,\cdot,\cdot}$ is scheduled on a variable-clause-dummy machine, so the true variable job must be scheduled on a variable-dummy machine, and by our scheduling rule it is scheduled there with a true dummy job; thus $i\in S_3$. Hence $S_1=S_3$ and Eq~\eqref{ILP:6} holds.
The total number of true dummy jobs scheduled equals $\sharp(F,T,T)+\sharp(T,T,T)+\sharp(F,F,T)+\sharp(F,T)+\sharp(T,T)=n+\sharp(T,T,T)+\sharp(T,T)=n+n/3-|S_1|+|S_1|=4n/3$. Similarly, we can show that the total number of false dummy jobs scheduled equals $2n/3$. Hence, our way of scheduling dummy jobs is feasible.
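This counting can be double-checked mechanically: treating $s=|S_1|$ and $f_2=\sharp(F,F,T)$ as free parameters, Eqs.~\eqref{ILP:1}--\eqref{ILP:6} force exactly $4n/3$ true and $2n/3$ false dummy jobs, independently of $s$ and $f_2$. A minimal sketch (hypothetical Python, with a toy value of $n$):

```python
def dummy_counts(n, s, f2):
    """True/false dummy jobs implied by Eqs. (ILP:1)-(ILP:6).

    s  = |S_1|             with 0 <= s <= n/3,
    f2 = #(F,F,T) machines with 0 <= f2 <= 2n/3 (so #(T,F,F) = 2n/3 - f2).
    """
    ftt, ttt = s, n // 3 - s        # Eqs. (ILP:1) and (ILP:2)
    tff = 2 * n // 3 - f2           # Eq. (ILP:3)
    ft = n - ftt - f2               # Eq. (ILP:5): n false variable jobs in total
    tt = s                          # Eq. (ILP:6)
    tf = n - ft - tt                # Eq. (ILP:4): n variable-dummy machines
    true_dummies = ftt + ttt + f2 + ft + tt    # machines carrying a true dummy job
    false_dummies = tff + tf                   # machines carrying a false dummy job
    return true_dummies, false_dummies

n = 9
for s in range(n // 3 + 1):
    for f2 in range(2 * n // 3 + 1):
        assert dummy_counts(n, s, f2) == (4 * n // 3, 2 * n // 3)
```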
Now we check the load of every variable-clause-dummy machine and every variable-dummy machine. It is easy to verify that for a variable-clause-dummy machine, if its kind is $(T,T,T)$, $(T,F,F)$, or $(F,F,T)$, then its load is $10^{14}\sigma_{max}$; if its kind is $(F,T,T)$, then its load is $10^{14}\sigma_{max}+1$. For a variable-dummy machine, if its kind is $(F,T)$ or $(T,F)$, then its load is $10^{14}\sigma_{max}$; if its kind is $(T,T)$, then its load is $10^{14}\sigma_{max}-1$.
Notice that for every $1\le i\le n$, the variable-dummy machine $\theta_{\textrm{V-D},i,\cdot}$ is of kind $(T,T)$ if and only if the variable-clause-dummy machine $\theta_{\textrm{V-C-D},\ell,i,\cdot}$ is of kind $(F,T,T)$. Recall that we always try to schedule the true clause job CL$_{\ell}^T$ with a true variable job, if possible. Hence, CL$_{\ell}^T$ is scheduled with a false variable job if and only if the three variable jobs scheduled on the variable-clause-dummy machines, i.e., $V_{\ell-1,\kappa_{-1},1}$, $V_{\ell,\kappa_0,1}$ and $V_{\ell+1,\kappa_1,1}$ with $\kappa_{-1},\kappa_{0},\kappa_1\in\{+,-\}$, are all false. Consider $V_{\ell-1,\kappa_{-1},1}$. If $\kappa_{-1}=+$, then $\theta_{\textrm{V-C-D},\ell,\ell-1,+}$ exists, indicating that case 1 of Table~\ref{table:job-variable-true} or Table~\ref{table:job-variable-false} occurs, i.e., the positive literal $z_{\ell-1}$ is in the clause $cl_\ell\in C_1$. Furthermore, as $V_{\ell-1,+,1}^F$ is scheduled on the variable-clause-dummy machine, the scheduling follows Table~\ref{table:job-variable-false}, i.e., the variable $z_{\ell-1}$ is false in the assignment of $I_{sat}$. That is, $cl_\ell$ is not satisfied by $z_{\ell-1}$. Similarly, if $\kappa_{-1}=-$, then the negative literal $\neg z_{\ell-1}$ is in $cl_\ell$ and the variable $z_{\ell-1}$ is true, so $cl_\ell$ is again not satisfied by $z_{\ell-1}$. Using the same argument, we can show that if all three jobs $V_{\ell-1,\kappa_{-1},1}$, $V_{\ell,\kappa_0,1}$ and $V_{\ell+1,\kappa_1,1}$ scheduled together with CL$_\ell$ are false, then $cl_\ell$ is not satisfied by the assignment. Furthermore, according to our scheduling method, if we cannot schedule CL$_{\ell}^T$ with a true variable job, we schedule it with the false job $V_{\ell-1,\kappa_{-1},1}^F$. That means, among the three machines $\theta_{\textrm{V-C-D},\ell,i,\cdot}$, only $\theta_{\textrm{V-C-D},\ell,\ell-1,\cdot}$ is of kind $(F,T,T)$ and has a load of $10^{14}\sigma_{max}+1$.
The other two machines have a load of $10^{14}\sigma_{max}$. Similarly, we check variable-dummy machines and see that among the three machines $\theta_{\textrm{V-D},i,\cdot}$ where $i\in\{\ell-1,\ell,\ell+1\}$, only machine $\theta_{\textrm{V-D},\ell-1,\cdot}$ is of kind $(T,T)$ and has a load of $10^{14}\sigma_{max}-1$. The other two machines have a load of $10^{14}\sigma_{max}$.
According to the observation in the paragraph above, we have the following claim. \begin{claim}\label{claim:sat-to-schedule-2} If $cl_\ell\in C_1$ is satisfied, then the three variable-clause-dummy machines $\theta_{\textrm{V-C-D},\ell,i,\cdot}$ and the three variable-dummy machines $\theta_{\textrm{V-D},i,\cdot}$, $i\in\{\ell-1,\ell,\ell+1\}$, all have a load of $10^{14}\sigma_{max}$; otherwise, machine $\theta_{\textrm{V-C-D},\ell,\ell-1,\cdot}$ has a load of $10^{14}\sigma_{max}+1$, machine $\theta_{\textrm{V-D},\ell-1,\cdot}$ has a load of $10^{14}\sigma_{max}-1$, and the remaining four machines all have a load of $10^{14}\sigma_{max}$. \end{claim}
Combining Claim~\ref{claim:sat-to-schedule-1} and Claim~\ref{claim:sat-to-schedule-2}, we know that each unsatisfied clause leads to at most $2$ machines with load $10^{14}\sigma_{max}\pm 1$. Recall that the total processing time of all jobs is $10^{14}\sigma_{max}\cdot (2\gamma n+8n)$, hence the number of machines with load $10^{14}\sigma_{max}+ 1$ equals the number of machines with load $10^{14}\sigma_{max}-1$. Consequently, if there are $\vartheta n$ unsatisfied clauses, the resulting schedule contains at most $2\vartheta n$ machines with load $10^{14}\sigma_{max}\pm 1$. Using a Taylor expansion, we have that
\begin{align*}
(x+1)^q+(x-1)^q &= x^q[(1+\frac{1}{x})^q+(1-\frac{1}{x})^q] \\
&= x^q(2+\frac{q(q-1)}{x^2}+o(\frac{1}{x^2}))\\
&= 2x^q+q(q-1)\cdot x^{q-2}+o(x^{q-2}). \end{align*}
Hence, Lemma~\ref{lemma:sat-sche} follows by a direct calculation.
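As a numeric sanity check of the second-order term: for large $x$, the excess $(x+1)^q+(x-1)^q-2x^q$ of one $\pm 1$ pair of loads over two perfectly balanced machines is close to $q(q-1)x^{q-2}$. A minimal sketch (hypothetical Python):

```python
def pair_excess(x, q):
    # excess of one (+1, -1) pair of loads over two perfectly balanced machines
    return (x + 1) ** q + (x - 1) ** q - 2 * x ** q

for q in (1.5, 2.0, 2.5, 3.7):
    x = 1e4
    predicted = q * (q - 1) * x ** (q - 2)   # the second-order Taylor term
    assert abs(pair_excess(x, q) - predicted) / predicted < 1e-3
```

For $q=2$ the excess is exactly $2=q(q-1)$, independently of $x$.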
\subsection{Scheduling to 3SAT\texorpdfstring{$'$}{Lg}} \label{subsec:sche-to-sat} The goal of this subsection is to show that if the constructed scheduling instance admits a feasible schedule with a small objective value, then the given 3SAT$'$ instance admits a truth-assignment that satisfies most of the clauses. More precisely, we prove the following lemma. \begin{lemma}\label{lemma:sche-sat} If at least $\vartheta n$ clauses are not satisfied under every truth-assignment, then any feasible schedule has an objective value of at least $(2\gamma n+8n)(10^{14}\sigma_{max})^q+\frac{q(q-1)\vartheta n}{48}\cdot (10^{14}\sigma_{max})^{q-2}+o(n\sigma_{max}^{q-2})$. \end{lemma}
In the following, we consider a solution $Sol$ of the scheduling instance whose objective value is bounded by $(2\gamma n+8n)(10^{14}\sigma_{max})^q+(\frac{q(q-1)\vartheta n}{48}-\epsilon')\cdot (10^{14}\sigma_{max})^{q-2}$ for an arbitrarily small $\epsilon'>0$.
Recall that we have constructed in total $2\gamma n+8n$ machines. According to Subsection~\ref{subsec:sat-to-scheduling-simple}, the total processing time of all jobs is $(2\gamma n+8n)\cdot 10^{14}\sigma_{max}$. Consider an arbitrary schedule. We say a machine is good if its load is exactly $10^{14}\sigma_{max}$; otherwise, the machine is bad. Since the processing times are half-integral (multiples of $1/2$), the load of a bad machine is either no larger than $10^{14}\sigma_{max}-0.5$, or no less than $10^{14}\sigma_{max}+0.5$. Furthermore, we say a machine is very bad if its load deviates from $10^{14}\sigma_{max}$ by at least $\sigma_{max}$, i.e., the load of a very bad machine is either no larger than $(10^{14}-1)\sigma_{max}$, or no smaller than $(10^{14}+1)\sigma_{max}$.
\begin{lemma}\label{lemma:almost-good-schedule}
If there exists a very bad machine, then the objective value of the schedule is at least $m(10^{14}\sigma_{max})^q+c_1\sigma_{max}^{q}$ for some constant $c_1>0$. \end{lemma}
Towards the proof, we need the following lemma. \begin{lemma} For $x,q,m> 1$ and $k\ge 1$, it holds that $$(x-k)^q+(m-1)(x+\frac{k}{m-1})^q\ge mx^q+\frac{q(q-1)}{4}\min\{(x-1)^{q-2},(x+1)^{q-2}\}.$$ \end{lemma} \begin{proof} Taking the derivative of $(x-k)^q+(m-1)(x+\frac{k}{m-1})^q$ with respect to $k$, we get $-q(x-k)^{q-1}+q(x+\frac{k}{m-1})^{q-1}> 0$ when $x,q,m>1$ and $k\ge 1$; hence $(x-k)^q+(m-1)(x+\frac{k}{m-1})^q$ is an increasing function of $k$, and it suffices to prove the lemma for $k=1$. According to the mean value theorem, we have \begin{eqnarray*} \Gamma&:=&\frac{1}{m}(x-1)^q+\frac{m-1}{m}(x+\frac{1}{m-1})^q-x^q\\ &=&-\frac{1}{m}[x^q-(x-1)^q]+\frac{m-1}{m}[(x+\frac{1}{m-1})^q-x^q]\\ &=&-\frac{1}{{m}}[x^q-(x-\frac{1}{2})^q]-\frac{1}{{m}}[(x-\frac{1}{2})^q-(x-1)^q]+\frac{m-1}{m}[(x+\frac{1}{m-1})^q-x^q]\\ &=& -\frac{1}{2m} q(x-\theta_1)^{q-1}-\frac{1}{2m} q(x-\frac{1}{2}-\theta_2)^{q-1}+\frac{m-1}{m} \cdot \frac{1}{m-1}\cdot q(x+\theta_3)^{q-1} \end{eqnarray*} for some $\theta_1,\theta_2\in (0,1/2)$ and $\theta_3\in (0,\frac{1}{m-1})$. Since $(x-\theta_1)^{q-1}\le (x+\theta_3)^{q-1}$, applying the mean value theorem again yields \begin{eqnarray*} \Gamma&\ge&\frac{q}{2m}[(x+\theta_3)^{q-1}-(x-\frac{1}{2}-\theta_2)^{q-1}]\\ &=&\frac{q}{2m}(\theta_2+\theta_3+\frac{1}{2})(q-1)(x+\theta_4)^{q-2} \end{eqnarray*} for some $\theta_4\in (-1/2-\theta_2,\theta_3)$. If $q\ge 2$, then $(x+\theta_4)^{q-2}\ge (x-1)^{q-2}$. Otherwise $1<q<2$ and it holds that $(x+\theta_4)^{q-2}\ge (x+1)^{q-2}$. Since $\theta_2+\theta_3+\frac{1}{2}\ge \frac{1}{2}$, we obtain $$\Gamma\ge \frac{q(q-1)}{4m} \min\{ (x-1)^{q-2},(x+1)^{q-2}\}.$$
Multiplying by $m$, the lemma is proved. \end{proof}
Similarly, we can prove the following. \begin{lemma} For $x,q,m> 1$ and $k\ge 1$, it holds that $$(x+k)^q+(m-1)(x-\frac{k}{m-1})^q\ge mx^q+\frac{q(q-1)}{4}\min\{(x-1)^{q-2},(x+1)^{q-2}\}.$$ \end{lemma}
Now we are ready to prove Lemma~\ref{lemma:almost-good-schedule}. \begin{proof}[Proof of Lemma~\ref{lemma:almost-good-schedule}]
Suppose the load of some very bad machine is $(10^{14}-k)\sigma_{max}$ for some $|k|\ge 1$; then the total load of all the other machines is $10^{14}(m-1)\sigma_{max}+k\sigma_{max}$. By the convexity of the function $x^q$, the objective value of such a solution is at least: \begin{eqnarray*} &&(10^{14}-k)^q\sigma_{max}^q+(m-1)\cdot [\frac{10^{14}(m-1)\sigma_{max}+k\sigma_{max}}{m-1}]^q\\ &=&\sigma_{max}^q[(10^{14}-k)^q+(m-1)(10^{14}+\frac{k}{m-1})^q]\\ &\ge&m(10^{14}\sigma_{max})^q+\sigma_{max}^q\cdot \frac{q(q-1)}{4}\min\{(10^{14}-1)^{q-2},(10^{14}+1)^{q-2}\} \end{eqnarray*} where the last inequality follows from the two preceding lemmas (applied with $k\ge 1$ and $k\le -1$, respectively).
Hence, the lemma is proved. \end{proof}
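The convexity step used above is an instance of Jensen's inequality: for loads $x_1,\dots,x_{m-1}\ge 0$ with a fixed sum,
$$\sum_{j=1}^{m-1} x_j^{\,q} \;\ge\; (m-1)\left(\frac{1}{m-1}\sum_{j=1}^{m-1} x_j\right)^{\!q},$$
since $t\mapsto t^q$ is convex for $q>1$; here $\sum_{j} x_j = 10^{14}(m-1)\sigma_{max}+k\sigma_{max}$ is fixed by the total processing time.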
We have shown that if a schedule admits a very bad machine, then its objective value is too large for the schedule to be $Sol$. Hence, to prove Lemma~\ref{lemma:sche-sat}, it suffices to restrict our attention to schedules without any very bad machine.
Since the processing time of a gap job is at least $(10^{14}-2\times 10^{13})\sigma_{max}$, there can be at most one gap job on any machine that is not very bad. Given that the total number of gap jobs equals the number of machines, and that there is no very bad machine in $Sol$, we have the following observation. \begin{lemma}\label{lemma:structure1} There is exactly one gap job on each machine in $Sol$. \end{lemma}
Given Lemma~\ref{lemma:structure1}, we will use the symbol of a gap job, e.g., $\theta_{\textrm{V-L},i,+}$, to denote the machine on which this job is scheduled.
The following lemma is straightforward upon observing that $\sigma_{max}>x\cdot \sigma(i)$ for all $i$; hence the type coordinates (i.e., the terms $10^j\sigma_{max}$) of the jobs on a machine that is not very bad cannot add up to a value smaller than $(10^{14}-2)\sigma_{max}$ or larger than $(10^{14}+2)\sigma_{max}$.
\begin{lemma}\label{lemma:structure2} If in a solution there is no very bad machine, then \begin{itemize} \item On a variable-link machine {$\theta_{\textrm{V-L},i,\iota}$ where $\iota\in\{+,-\}$}, there are exactly three jobs -- a gap job, a variable job and a link job. \item On a link-link machine $\theta_{\textrm{L-L},i,h,\iota}$ where $\iota\in\{+,-\}$, there are exactly three jobs -- a gap job and two link jobs. \item On a variable-dummy machine $\theta_{\textrm{V-D},i,\iota}$ where $\iota\in\{+,-\}$, there are exactly three jobs -- a gap job, a variable job and a dummy job. \item On a variable-clause-dummy machine $\theta_{\textrm{V-C-D},\ell,i,\iota}$ where $\iota\in\{+,-\}$, there are exactly four jobs -- a gap job, a variable job, a clause job and a dummy job. \item On a variable-truth machine $\theta_{\textrm{V-T},i,\rho}$ where $\rho\in\{(a,c),(b,d),(a,d),(b,c)\}$, there are exactly four jobs -- a gap job, a variable job and two truth-assignment jobs. Furthermore, the two truth-assignment jobs are: \begin{itemize} \item $TR_{\cdot,a}$ and $TR_{\cdot,c}$ $\quad$ if $\rho=(a,c)$; \item $TR_{\cdot,b}$ and $TR_{\cdot,d}$ $\quad$ if $\rho=(b,d)$; \item $TR_{\cdot,a}$ and $TR_{\cdot,d}$ $\quad$ if $\rho=(a,d)$; \item $TR_{\cdot,b}$ and $TR_{\cdot,c}$ $\quad$ if $\rho=(b,c)$. \end{itemize} \end{itemize} \end{lemma} \begin{proof} The proof can be carried out through a counting argument in the order of dummy jobs, clause jobs, truth-assignment jobs, link jobs and variable jobs according to Table~\ref{table:job-time}. In the following, we give the argument for dummy jobs; the other types of jobs can be handled in a similar way. The reader may refer to Table~\ref{table:job-time-whole} for a quick overview of the job processing times. Note that a dummy job has a processing time of at least $(10^{13}-1/2)\sigma_{max}$. 
It is easy to see that if a variable-link machine, a link-link machine, or a variable-truth machine accepts one dummy job, then the load of this machine is larger than $(10^{14}+1)\sigma_{max}$, contradicting the fact that there is no very bad machine. Hence, dummy jobs can only be scheduled on a variable-clause-dummy machine or a variable-dummy machine. Similarly, if a variable-clause-dummy machine or a variable-dummy machine accepts two or more dummy jobs, its load becomes larger than $(10^{14}+1)\sigma_{max}$; hence each of these machines can accept at most one dummy job. On the other hand, there are $2n$ dummy jobs, which equals the sum of the number of variable-clause-dummy machines (which is $n$) and the number of variable-dummy machines (which is also $n$). Hence, each variable-clause-dummy machine or variable-dummy machine accepts exactly one dummy job. Removing the dummy job together with the gap job on each variable-clause-dummy machine or variable-dummy machine, we know that if the machine is not very bad, then the remaining jobs on a variable-clause-dummy machine should add up to some value within $[(10^{12}+10^5-2)\sigma_{max},(10^{12}+10^5+2)\sigma_{max}]$ (if this machine is $\theta_{\textrm{V-C-D},\ell,i,+}$) or $[(10^{12}+10^3-2)\sigma_{max},(10^{12}+10^3+2)\sigma_{max}]$ (if this machine is $\theta_{\textrm{V-C-D},\ell,i,-}$), and the remaining jobs on a variable-dummy machine should add up to some value within $[(10^5-2)\sigma_{max},(10^5+2)\sigma_{max}]$ (if this machine is $\theta_{\textrm{V-D},i,+}$) or $[(10^3-2)\sigma_{max},(10^3+2)\sigma_{max}]$ (if this machine is $\theta_{\textrm{V-D},i,-}$). Consequently, we can apply the same argument to clause jobs, then truth-assignment jobs, then link jobs and finally variable jobs. \end{proof}
Using Lemma~\ref{lemma:structure2}, we further have the following observation.
\begin{lemma}\label{lemma:term-cancel} On a good machine, the type-components of the jobs add up to $10^{14}\sigma_{max}$, while the index-components and the true/false-components of the jobs add up to $0$, respectively. \end{lemma}
Now we further identify the index-component of jobs on each machine.
\begin{lemma}\label{lemma:vd-index} Consider an arbitrary variable-dummy machine $\theta_{\textrm{V-D},i,\iota}$ where $\iota\in\{+,-\}$. If the machine is good, then the variable job on this machine is $V_{i,\iota,2}$. \end{lemma}
The proof is straightforward: apply Lemma~\ref{lemma:term-cancel} and check the sums of the type-components and the index-components of the jobs, respectively.
\begin{lemma}\label{lemma:vcd-index} Consider an arbitrary variable-clause-dummy machine $\theta_{\textrm{V-C-D},\ell,i,\iota}$ where $\iota\in\{+,-\}$. If the machine is good, then the clause job on this machine is $\textrm{CL}_{\ell}$, and the variable job on this machine is $V_{i,\iota,1}$. \end{lemma} \begin{proof} By Lemma~\ref{lemma:term-cancel}, the type-components of the three jobs add up to $10^{14}\sigma_{max}$, hence it is easy to see that the variable job should be $V_{i',\iota,1}$ for some $i'$. Let the clause job be $\textrm{CL}_{\ell'}$ for some $\ell'$. As the index-components of the three jobs add up to $0$, we have $$\sigma(\ell')+\sigma(i')=\sigma(\ell)+\sigma(i).$$
Notice that for any machine $\theta_{\textrm{V-C-D},\ell,i,\iota}$ it holds that $i\in\{\ell-1,\ell,\ell+1\}$ and $\ell\in\{2,5,\cdots,n-1\}$. We claim that $\ell'=\ell$ and $i'=i$. To see why, consider two cases. If $\ell=i$, then $\sigma(\ell')+\sigma(i')=2\sigma(\ell)$. According to Lemma~\ref{lemma:uniquesum-2}, we have $\ell'=i'=\ell=i$ and the claim follows. Otherwise, $i=\ell\pm 1$. According to Lemma~\ref{lemma:uniquesum-2}, the only solution for $\sigma(j)+\sigma(j+1)=\sum_{h=1}^k\sigma(j_h)$, $k\le 5$, is $k=2$ and $\{j_1,j_2\}=\{j,j+1\}$. Hence, we have $\{\ell',i'\}=\{\ell,i\}$. Note that $\ell',\ell\in\{2,5,\cdots,n-1\}$, hence $\ell'\equiv\ell\equiv 2 \pmod 3$. But $i\not\equiv 2 \pmod 3$. Thus, $\ell=\ell'$ and $i=i'$. In both cases, Lemma~\ref{lemma:vcd-index} holds. \end{proof}
\begin{lemma}\label{lemma:vtd-index} Consider an arbitrary variable-truth machine $\theta_{\textrm{V-T},i,\rho}$ where $\rho\in\{(a,c),(b,d),(a,d),\\(b,c)\}$. If the machine is good, then the variable and truth-assignment jobs are: \begin{itemize} \item $V_{i,+,1}$ and $TR_{i,a}$, $TR_{i,c}$ $\quad$ if $\rho=(a,c)$; \item $V_{i,+,2}$ and $TR_{i,b}$, $TR_{i,d}$ $\quad$ if $\rho=(b,d)$; \item $V_{i,-,1}$ and $TR_{i,a}$, $TR_{i,d}$ $\quad$ if $\rho=(a,d)$; \item $V_{i,-,2}$ and $TR_{i,b}$, $TR_{i,c}$ $\quad$ if $\rho=(b,c)$. \end{itemize} \end{lemma} \begin{proof} According to Lemma~\ref{lemma:term-cancel}, the type-components of jobs add up to $10^{14}\sigma_{max}$. Hence, it is easy to verify that for some $i_1,i_2,i_3$ the variable and truth-assignment jobs are $V_{i_1,+,1}$ and $TR_{i_2,a}$, $TR_{i_3,c}$, if $\rho=(a,c)$; $V_{i_1,+,2}$ and $TR_{i_2,b}$, $TR_{i_3,d}$, if $\rho=(b,d)$; $V_{i_1,-,1}$ and $TR_{i_2,a}$, $TR_{i_3,d}$, if $\rho=(a,d)$; $V_{i_1,-,2}$ and $TR_{i_2,b}$, $TR_{i_3,c}$, if $\rho=(b,c)$.
We prove $i_1=i_2=i_3=i$, and Lemma~\ref{lemma:vtd-index} follows. According to Lemma~\ref{lemma:term-cancel}, the index-components add up to $0$, hence $$10\sigma(i_1)+10\sigma(i_2)+10\sigma(i_3)=30\sigma(i),$$ i.e., $\sigma(i_1)+\sigma(i_2)+\sigma(i_3)=3\sigma(i)$. According to Lemma~\ref{lemma:uniquesum-2}, the above equation has a unique solution, which is $i_1=i_2=i_3=i$. \end{proof}
\begin{lemma}\label{lemma:vl-index-+} Consider an arbitrary variable-link machine $\theta_{\textrm{V-L},i,\iota}$ where $\iota\in\{+,-\}$. If the machine is good, then the variable job on this machine is $V_{i,\iota,2}$, and the link job on this machine is $\textrm{LN}_{i,1,\iota}$. \end{lemma}
\begin{proof} Using the fact that the type-components of all jobs add up to $10^{14}\sigma_{max}$, it is easy to see that the variable job should be $V_{i_1,\iota,2}$ and the link job should be $\textrm{LN}_{i_2,h,\iota}$ for some $1\le i_1,i_2\le n$ and $1\le h\le 2\gamma+2$. We prove the lemma for $\iota=+$. The case that $\iota=-$ can be proved in the same way.
Given that the index-components should add up to $0$, we have the following: \begin{eqnarray} \vea_i+\veb^1_{i}=\vea_{i_1}+\veb^h_{i_2} \end{eqnarray}
Recall that $\vea_i=\veb^0_{i}$. According to Lemma~\ref{lemma:veb-uniquesum-2}, we have $h=1$ and $i_1=i_2=i$. \end{proof}
\begin{lemma}\label{lemma:ll-index-+} Consider an arbitrary link-link machine $\theta_{\textrm{L-L},i,h,\iota}$ where $\iota\in\{+,-\}$, $1\le h\le 2\gamma+1$. If the machine is good, then the two link jobs on this machine are $\textrm{LN}_{i,h,\iota}$ and $\textrm{LN}_{i,h+1,\iota}$. \end{lemma} The proof is similar to that of Lemma~\ref{lemma:vl-index-+} by utilizing Lemma~\ref{lemma:veb-uniquesum-2}.
\begin{lemma}\label{lemma:ll-index--} Consider an arbitrary link-link machine $\theta_{\textrm{L-L},i,+,-}$. If the machine is good, then the two link jobs on this machine are $\textrm{LN}_{i,2\gamma+2,+}$ and $\textrm{LN}_{\tau(i),2\gamma+2,-}$. \end{lemma} \begin{proof} Using the fact that the type-components of all jobs add up to $10^{14}\sigma_{max}$, it is easy to see that the two link jobs should be LN$_{i_1,h_1,+}$ and LN$_{i_2,h_2,-}$ for some $i_1,i_2,h_1,h_2$. Given that the index-components should add up to $0$, we have the following: \begin{eqnarray} 2\veb^{2\gamma+2}_i=\veb^{h_1}_{i_1}+\hat{\veb}^{h_2}_{i_2} \end{eqnarray}
According to Lemma~\ref{lemma:veb-uniquesum-3}, we have $i_1=i$ and $i_2=\tau(i)$, and $h_1=h_2=2\gamma+2$.
\end{proof}
We have proved, so far, that if a machine is good, then the jobs scheduled on it must follow Table~\ref{table:job-schedule}. Finally, we consider the true/false-components of the jobs on good machines. Based on the T/F-components of the jobs, the following lemma is easy to verify. \begin{lemma}\label{lemma:true-type} The following statements hold: \begin{itemize} \item If a variable-link machine is good, then the T/F-type of the variable job and link job on this machine is $(T,F)$ or $(F,T)$; \item If a link-link machine is good, then the T/F-type of the two link jobs on this machine is $(T,F)$ or $(F,T)$; \item If a variable-clause-dummy machine is good, then the T/F-type of the variable job, clause job and dummy job on this machine is $(T,T,T)$ or $(T,F,F)$ or $(F,F,T)$; \item If a variable-dummy machine is good, then the T/F-type of the variable job and dummy job on this machine is $(T,F)$ or $(F,T)$; \item If a variable-truth machine is good, then the T/F-type of the variable job and two truth-assignment jobs on this machine is $(T,T,T)$ or $(F,F,F)$. \end{itemize} \end{lemma}
\subsubsection{Truth-assignment based on scheduling} Given a feasible schedule $Sol$, we give a truth-assignment of $I_{sat}$ as follows: if the job $V_{i,+,1}^T$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$, then we let variable $z_i$ be false; if the job $V_{i,+,1}^F$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$, then we let variable $z_i$ be true. If $V_{i,+,1}$ is not scheduled on machine $\theta_{\textrm{V-T},i,a,c}$, we let $z_i$ be true.
We refer to the machines in Table~\ref{table:variable-job-true} below as machines of group $i$. Notice that the groups are not disjoint; in particular, the machines $\theta_{\textrm{L-L},i,+,-}$ and $\theta_{\textrm{L-L},\tau^{-1}(i),+,-}$ each appear in two groups. Apart from these two machines, no machine of a group appears in any other group. We have the following lemma.
\begin{table}[!ht] \renewcommand\arraystretch{1.3}
\setlength\tabcolsep{1.5pt} \centering \resizebox{12cm}{!}{
\begin{tabular}{|c|l|l|l|l|} \hline \multirow{2}{*}{Variable-Link} & $\theta_{\textrm{V-L},i,+}$ & $V_{i,+,2}^T$ & LN$_{i,1,+}^F$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{V-L},i,-}$ & $V_{i,-,2}^F$ & LN$_{i,1,-}^{T}$ & $\backslash$ \\ \hline \multirow{4}{*}{\makecell{Link-Link\\$h\in\{1,2,\cdots,2\gamma+1\}$}} & {$\theta_{\textrm{L-L},i,h,+}$} & LN$_{i,h,+}^F$ & LN$_{i,h+1,+}^T$ & $\backslash$ \\ \cline{2-5}
& {$\theta_{\textrm{L-L},i,h,-}$} & LN$_{i,h,-}^F$ & LN$_{i,h+1,-}^T$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{L-L},i,+,-}$ & LN$_{i,2\gamma+2,+}^T$ & LN$_{\tau(i),2\gamma+2,-}^F$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{L-L},\tau^{-1}(i),+,-}$ & LN$_{\tau^{-1}(i),2\gamma+2,+}^T$ & LN$_{i,2\gamma+2,-}^F$ & $\backslash$ \\ \cline{2-5}\hline \multirow{2}{*}{\makecell{Variable-Clause-Dummy \& Variable-Dummy\\Case 1: positive literal $z_i\in C_1$}} & $\theta_{\textrm{V-C-D},\ell,i,+}$ & $V_{i,+,1}^T$ & CL$_{\ell}^{*}$ & DM$^{*}$ \\ \cline{2-5}
& $\theta_{\textrm{V-D},i,-}$ & $V_{i,-,1}^F$ & DM$^T$ & $\backslash$ \\ \hline \multirow{2}{*}{\makecell{Variable-Clause-Dummy \& Variable-Dummy\\Case 2: negative literal $\neg z_i\in C_1$}} & $\theta_{\textrm{V-C-D},\ell,i,-}$ & $V_{i,-,1}^F$ & CL$_{\ell}^{F}$ & DM$^T$ \\ \cline{2-5}
& $\theta_{\textrm{V-D},i,+}$ & $V_{i,+,1}^T$ & DM$^F$ & $\backslash$ \\ \hline \multirow{4}{*}{{Variable-Truth}} & $\theta_{\textrm{V-T},i,a,c}$ & $V_{i,+,1}^F$ & TR$_{i,a}^F$ & TR$_{i,c}^F$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,b,d}$ & $V_{i,+,2}^F$ & TR$_{i,b}^F$ & TR$_{i,d}^F$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,a,d}$ & $V_{i,-,1}^T$ & TR$_{i,a}^T$ & TR$_{i,d}^T$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,b,c}$ & $V_{i,-,2}^T$ & TR$_{i,b}^T$ & TR$_{i,c}^T$ \\ \hline \end{tabular} } \caption{Scheduling of group $i$ machines when $V_{i,+,1}^F$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$ (and we set variable $z_i$ to be true)} \label{table:variable-job-true} \end{table}
\begin{table}[!ht] \renewcommand\arraystretch{1.3}
\setlength\tabcolsep{1.5pt} \centering \resizebox{12cm}{!}{
\begin{tabular}{|c|l|l|l|l|} \hline \multirow{2}{*}{Variable-Link} & $\theta_{\textrm{V-L},i,+}$ & $V_{i,+,2}^F$ & LN$_{i,1,+}^T$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{V-L},i,-}$ & $V_{i,-,2}^T$ & LN$_{i,1,-}^{F}$ & $\backslash$ \\ \hline \multirow{4}{*}{\makecell{Link-Link\\$h\in\{1,2,\cdots,2\gamma+1\}$}} & {$\theta_{\textrm{L-L},i,h,+}$} & LN$_{i,h,+}^T$ & LN$_{i,h+1,+}^F$ & $\backslash$ \\ \cline{2-5}
& {$\theta_{\textrm{L-L},i,h,-}$} & LN$_{i,h,-}^T$ & LN$_{i,h+1,-}^F$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{L-L},i,+,-}$ & LN$_{i,2\gamma+2,+}^F$ & LN$_{\tau(i),2\gamma+2,-}^T$ & $\backslash$ \\ \cline{2-5}
& $\theta_{\textrm{L-L},\tau^{-1}(i),+,-}$ & LN$_{\tau^{-1}(i),2\gamma+2,+}^F$ & LN$_{i,2\gamma+2,-}^T$ & $\backslash$ \\ \cline{2-5}\hline \multirow{2}{*}{\makecell{Variable-Clause-Dummy \& Variable-Dummy\\Case 1: positive literal $z_i\in C_1$}} & $\theta_{\textrm{V-C-D},\ell,i,+}$ & $V_{i,+,1}^F$ & CL$_{\ell}^{F}$ & DM$^{T}$ \\ \cline{2-5}
& $\theta_{\textrm{V-D},i,-}$ & $V_{i,-,1}^T$ & DM$^F$ & $\backslash$ \\ \hline \multirow{2}{*}{\makecell{Variable-Clause-Dummy \& Variable-Dummy\\Case 2: negative literal $\neg z_i\in C_1$}} & $\theta_{\textrm{V-C-D},\ell,i,-}$ & $V_{i,-,1}^T$ & CL$_{\ell}^{*}$ & DM$^*$ \\ \cline{2-5}
& $\theta_{\textrm{V-D},i,+}$ & $V_{i,+,1}^F$ & DM$^T$ & $\backslash$ \\ \hline \multirow{4}{*}{{Variable-Truth}} & $\theta_{\textrm{V-T},i,a,c}$ & $V_{i,+,1}^T$ & TR$_{i,a}^T$ & TR$_{i,c}^T$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,b,d}$ & $V_{i,+,2}^T$ & TR$_{i,b}^T$ & TR$_{i,d}^T$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,a,d}$ & $V_{i,-,1}^F$ & TR$_{i,a}^F$ & TR$_{i,d}^F$ \\ \cline{2-5}
& $\theta_{\textrm{V-T},i,b,c}$ & $V_{i,-,2}^F$ & TR$_{i,b}^F$ & TR$_{i,c}^F$ \\ \hline \end{tabular} } \caption{Scheduling of group $i$ machines when $V_{i,+,1}^T$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$ (and we set variable $z_i$ to be false)} \label{table:variable-job-false} \end{table}
\begin{lemma}\label{lemma:groupi} Suppose all machines in group $i$ are good. If $V_{i,+,1}^F$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$, then the jobs scheduled on these machines are according to Table~\ref{table:variable-job-true}; if $V_{i,+,1}^T$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$, then the jobs scheduled on these machines are according to Table~\ref{table:variable-job-false}. \end{lemma} \begin{proof} We prove the first half of Lemma~\ref{lemma:groupi}; the second half can be proved in the same way. If $V_{i,+,1}^F$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$, then by Lemma~\ref{lemma:true-type} we know the other two jobs are TR$_{i,a}^F$ and TR$_{i,c}^F$; consequently, TR$_{i,a}^T$ is scheduled on machine $\theta_{\textrm{V-T},i,a,d}$. Using a similar argument it is easy to see that the jobs scheduled on the 4 variable-truth machines follow Table~\ref{table:variable-job-true}.
We then consider variable-clause-dummy and variable-dummy machines. By the scheduling on the variable-truth machines, the remaining $V_{i,+,1}^T$ and $V_{i,-,1}^F$ are scheduled on these machines. The T/F-types of the other jobs on these machines follow from Lemma~\ref{lemma:true-type}.
Next, we consider variable-link machines. Again by the scheduling on variable-truth machines, the remaining $V_{i,+,2}^T$ and $V_{i,-,2}^F$ are scheduled on these machines. The T/F-type of the link jobs are determined by Lemma~\ref{lemma:true-type}.
Finally we consider link-link machines. Based on the link jobs scheduled on variable-link machines and Lemma~\ref{lemma:true-type}, LN$_{i,1,+}^F$ and LN$_{i,2,+}^T$ must be scheduled on $\theta_{\textrm{L-L},i,1,+}$, consequently the remaining LN$_{i,2,+}^F$ must be scheduled on machine $\theta_{\textrm{L-L},i,2,+}$. Iteratively carrying on the above argument we can show that jobs scheduled on machines $\theta_{\textrm{L-L},i,h,+}$ must follow Table~\ref{table:variable-job-true}. Similar arguments can be applied to machines $\theta_{\textrm{L-L},i,h,-}$, $\theta_{\textrm{L-L},i,+,-}$ and $\theta_{\textrm{L-L},\tau^{-1}(i),+,-}$.
\end{proof}
The T/F-type of the clause job CL$_{\ell}$ is not determined in Table~\ref{table:variable-job-true}. Recall that among the three copies of CL$_{\ell}$ there is one true copy CL$_{\ell}^T$. Suppose CL$_{\ell}^T$ is scheduled on group $i$ machines. If $V_{i,+,1}^F$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$ and we set variable $z_i$ to be true, then from Table~\ref{table:variable-job-true} we know case 1 must happen, which implies that clause $cl_\ell$ is satisfied by $z_i$. If $V_{i,+,1}^T$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$ and we set variable $z_i$ to be false, then from Table~\ref{table:variable-job-false} we know case 2 must happen, which implies that clause $cl_\ell$ is satisfied by $\neg z_i$. Hence the following lemma is true.
\begin{lemma}\label{lemma:schedule-satisfy-1}
If all machines in group $i$ are good and CL$_{\ell}^T$ is scheduled on these machines, then the clause $cl_\ell \in C_1$ that contains variable $z_i$ is satisfied by this variable.
\end{lemma}
Now consider clauses in $C_2$ and we have the following lemma. \begin{lemma}\label{lemma:schedule-satisfy-2} If all machines in group $i$ are good, and all machines in group $\tau(i)$ are also good, then the clause $(z_i\oplus \neg z_{\tau(i)})$ is satisfied.
\end{lemma}
\begin{proof}
There are two possibilities. If $V_{i,+,1}^F$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$ and we set variable $z_i$ to be true, then LN$_{\tau(i),2\gamma+2,-}^F$ is scheduled on machine $\theta_{\textrm{L-L},i,+,-}$ in group $i$, implying that LN$_{\tau(i),2\gamma+2,-}^T$ is scheduled in group $\tau(i)$. By checking Table~\ref{table:variable-job-true} and Table~\ref{table:variable-job-false} for variable $z_{\tau(i)}$, it follows that Table~\ref{table:variable-job-true} is the case when LN$_{\tau(i),2\gamma+2,-}^T$ is scheduled, and consequently variable $z_{\tau(i)}$ is set to be true, whence $(z_i\oplus \neg z_{\tau(i)})$ is satisfied. The other case, when $V_{i,+,1}^T$ is scheduled on machine $\theta_{\textrm{V-T},i,a,c}$ and we set variable $z_i$ to be false, can be proved in a similar way.
\end{proof}
\begin{lemma}\label{lemma:bound-not-good-group} In a feasible schedule $Sol$, if there are at most $m'$ machines which are not good, then in the corresponding truth-assignment, there are at most $6m'$ clauses that are not satisfied. \end{lemma} \begin{proof} We say a group is good if all machines in this group are good. According to Lemma~\ref{lemma:schedule-satisfy-1}, if a clause $cl_\ell$ in $C_1$ is not satisfied, then the group that contains the job CL$_\ell^T$ is not good, that is, there is at least one machine that is not good in this group. Hence, if there are $m_1$ clauses in $C_1$ not satisfied, then there are at least $m_1$ groups that are not good. According to Lemma~\ref{lemma:schedule-satisfy-2}, if a clause $(z_i\oplus \neg z_{\tau(i)})$ in $C_2$ is not satisfied, then among group $i$ and group $\tau(i)$ there is at least one group which is not good. Given that each group is only involved in two clauses of $C_2$, if there are $m_2$ clauses in $C_2$ not satisfied, then there are at least $m_2/2$ groups which are not good. Hence, given $m_1+m_2$ clauses which are not satisfied, there are at least $\max\{m_1,m_2/2\}$ groups which are not good. Using the fact that $\frac{m_1+m_2}{\max\{m_1,m_2/2\}}\le 3$, and that each machine belongs to at most two groups, we conclude that if there are at most $m'$ machines which are not good, then there are at most $2m'$ groups which are not good, and hence at most $6m'$ clauses which are not satisfied. \end{proof}
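For completeness, the bound $\frac{m_1+m_2}{\max\{m_1,m_2/2\}}\le 3$ used above can be checked directly by a case distinction:
$$\frac{m_1+m_2}{\max\{m_1,\,m_2/2\}}\le
\begin{cases}
\dfrac{m_1+2m_1}{m_1}=3, & \text{if } m_1\ge m_2/2 \ (\text{so } m_2\le 2m_1),\\[2mm]
\dfrac{m_2/2+m_2}{m_2/2}=3, & \text{if } m_1< m_2/2.
\end{cases}$$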
\begin{lemma}\label{lemma:calculate} In a feasible schedule, if there are at least $m'$ machines which are not good, then its objective value is at least $(2\gamma n+8n) (10^{14}\sigma_{max})^q+\frac{q(q-1)m'}{8} (10^{14}\sigma_{max})^{q-2}+o(m'\sigma_{max}^{q-2})$. \end{lemma} \begin{proof} Recall that if a machine is not good, then its load is either $\ge 10^{14}\sigma_{max}+0.5$ or $\le 10^{14}\sigma_{max}-0.5$. Suppose there are $m_1'$ machines with load $10^{14}\sigma_{max}+\mu_1$, $10^{14}\sigma_{max}+\mu_2$, $\cdots$, $10^{14}\sigma_{max}+\mu_{m_1'}$ where $\mu_j\ge 1/2$, and $m_2'$ machines with load $10^{14}\sigma_{max}-\nu_1$, $10^{14}\sigma_{max}-\nu_2$, $\cdots$, $10^{14}\sigma_{max}-\nu_{m_2'}$ where $\nu_j\ge 1/2$. It follows that $\sum_j\mu_j=\sum_j\nu_j$ and $m_1'+m_2'=m'$. The objective value of the schedule is \begin{eqnarray*} && \!\!(2\gamma n+8n-m')(10^{14}\sigma_{max})^q+\sum_{j=1}^{m_1'}(10^{14}\sigma_{max}+\mu_j)^q+\sum_{j=1}^{m_2'}(10^{14}\sigma_{max}-\nu_j)^q\\ &=& \!\!(2\gamma n+8n) (10^{14}\sigma_{max})^q+\frac{q(q-1)}{2}[\sum_{j=1}^{m_1'}\mu_j^2\!+\!\sum_{j=1}^{m_2'}\nu_j^2] (10^{14}\sigma_{max})^{q-2}+o([\sum_{j=1}^{m_1'}\mu_j^2\!+\!\sum_{j=1}^{m_2'}\nu_j^2]\sigma^{q-2}_{max})\\ &\ge& \!\!(2\gamma n+8n)(10^{14}\sigma_{max})^q+\frac{q(q-1)}{2}[\frac{m_1'}{4}+\frac{m_2'}{4}] (10^{14}\sigma_{max})^{q-2}+o(m'\sigma_{max}^{q-2})\\ &=& \!\!(2\gamma n+8n)(10^{14}\sigma_{max})^q+\frac{q(q-1)m'}{8} (10^{14}\sigma_{max})^{q-2}+o(m'\sigma_{max}^{q-2}). \end{eqnarray*} \end{proof} Combining the above lemmas, Lemma~\ref{lemma:sche-sat} is proved.
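To spell out how the constant $48$ of Lemma~\ref{lemma:sche-sat} arises: by the contrapositive of Lemma~\ref{lemma:bound-not-good-group}, if at least $\vartheta n$ clauses are unsatisfied under every truth-assignment, then any feasible schedule has $m'\ge \vartheta n/6$ machines that are not good, so by Lemma~\ref{lemma:calculate} its objective value is at least
$$(2\gamma n+8n)(10^{14}\sigma_{max})^q
+\frac{q(q-1)}{8}\cdot\frac{\vartheta n}{6}\cdot(10^{14}\sigma_{max})^{q-2}
+o(n\sigma_{max}^{q-2}),$$
and $\frac{1}{8}\cdot\frac{1}{6}=\frac{1}{48}$ gives the claimed bound.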
\subsection{Finalizing the Proof of Theorem~\ref{thm:lower-bound}}\label{subsec:formal-proof}
Suppose on the contrary that there exists a PTAS for $P||\sum_i C_i^q$ that runs in time $2^{O((1/\varepsilon)^{1/2-\delta})}+n^{O(1)}$. We show that this algorithm can be used to distinguish instances of 3SAT$'$ with $4n/3$ clauses where at least $4(1-\epsilon')n/3$ clauses are satisfiable from instances where at most $4(\beta+\epsilon')n/3$ clauses are satisfiable in time $2^{O(n^{1-\delta})}$, contradicting Lemma~\ref{lemma:maxsat-eth2}.
Consider the constructed scheduling instance with $2\gamma n+8n=O(\frac{n\log n}{\log\log n})$ machines. Recall $\sigma_{max}=n^{1+O(\frac{1}{\log\log n})}$. If the 3SAT$'$ instance has at most $4\epsilon'n/3$ unsatisfied clauses, then by Lemma~\ref{lemma:sat-sche} (taking $\vartheta=4\epsilon'/3$) the objective value $Obj_1$ of the constructed scheduling instance is at most \begin{eqnarray*} Obj_1&\leq&(2\gamma n+8n)(10^{14}\sigma_{max})^q+4\epsilon'n/3 \cdot \frac{q(q-1)}{2}(10^{14}\sigma_{max})^{q-2}+o(n\sigma_{max}^{q-2})\\ &=& (2\gamma n+8n)(10^{14}\sigma_{max})^q+\frac{2\epsilon' q(q-1)n}{3}\cdot (10^{14}\sigma_{max})^{q-2}+o(n\sigma_{max}^{q-2})
\end{eqnarray*}
If the 3SAT$'$ instance has at least $4(1-\beta-\epsilon')n/3$ unsatisfied clauses, then by Lemma~\ref{lemma:sche-sat} (taking $\vartheta=4(1-\beta-\epsilon')/3$) the objective value $Obj_2$ of any feasible solution for the constructed scheduling instance is at least \begin{eqnarray*} Obj_2&\ge&(2\gamma n+8n) (10^{14}\sigma_{max})^q+\frac{q(q-1)\cdot 4(1-\beta-\epsilon')n/3}{48}\cdot (10^{14}\sigma_{max})^{q-2}+o(n\sigma_{max}^{q-2})\\ &=&(2\gamma n+8n) (10^{14}\sigma_{max})^q+\frac{q(q-1)(1-\beta-\epsilon')n}{36}\cdot (10^{14}\sigma_{max})^{q-2}+o(n\sigma_{max}^{q-2}) \end{eqnarray*}
for some constant $\beta<1$.
We apply the PTAS for $P||\sum_i C_i^q$ by setting $\varepsilon=\frac{1}{(2\gamma +8) \times (10^{14}\sigma_{max})^2}\cdot \frac{q(q-1)\epsilon'}{36}=\Theta(\gamma^{-1} \sigma_{max}^{-2})=n^{-2-O(\frac{\log\log n}{\log n})}$; it follows that the PTAS runs in time $2^{O(n^{1-\delta})}$. If there exists a feasible schedule with objective value at most $Obj_1$, then the PTAS returns a solution with objective value at most $$Obj_1\cdot(1+\varepsilon)\le (2\gamma n+8n) (10^{14}\sigma_{max})^q+\frac{25q(q-1)\epsilon'n}{36}\cdot (10^{14}\sigma_{max})^{q-2}+o(n\sigma_{max}^{q-2})<Obj_2,$$ where the last inequality holds for sufficiently small $\epsilon'$ (recall that $\beta<1$ is a constant).
Otherwise, any feasible solution has an objective value of at least $Obj_2$. That is, the PTAS can be used to distinguish scheduling instances that admit a feasible schedule of objective value at most $Obj_1$ from scheduling instances that do not admit any feasible schedule of objective value no more than $Obj_2$, and thus can also be used to distinguish 3SAT$'$ instances where at least $4(1-\epsilon')n/3$ clauses are satisfiable from instances where at most $4(\beta+\epsilon')n/3$ clauses are satisfiable, contradicting Lemma~\ref{lemma:maxsat-eth2}.
\noindent\textbf{Remark.} It is important to observe that our reduction is only valid when the number of machines $m=\tilde{O}({n})=\tilde{O}(\sqrt{1/\epsilon})$. If $m=O((1/\epsilon)^{\kappa})$ for $\kappa<1/2$, then applying the same reduction we have $\epsilon=\tilde{O}(n^{-1/\kappa})$ by using that $m=\tilde{O}(n)$, and we obtain Corollary~\ref{coro:lower-bound} below. On the other hand, if $m=\Omega((1/\epsilon)^{\kappa})$ for $\kappa>1/2$, then we also have $n=\Omega((1/\epsilon)^{\kappa})$. The objective value is $\Theta(mT^2)=\Omega((1/\epsilon)^{3\kappa})$. Therefore, a PTAS brings an error of $O(mT^2\epsilon)=\Omega(n\cdot(1/\epsilon)^{2\kappa-1})=\Omega(n)$, which is large enough to absorb the gap of $O(n)$ in the reduction, i.e., the reduction no longer works.
\begin{corollary}\label{coro:lower-bound}
Let $q>1$ be an arbitrary constant. Assuming ETH, for any $\epsilon$ such that $m=O((1/\epsilon)^{\kappa})$ for some $\kappa\le 1/2$, there is no $(1+\epsilon)$-approximation algorithm for $P||\sum_i C_i^q$ that runs in time $2^{O((1/\epsilon)^{\kappa-\delta})}+n^{O(1)}$ for any constant $\delta>0$. \end{corollary}
\section{Conclusion}
We consider $P||\sum_i C_i^q$, which is identical machine scheduling with the objective of minimizing the $\ell_q$-norm of machine loads for an arbitrary constant $q>1$. We establish a PTAS with running time $2^{\tilde{O}(\sqrt{1/\epsilon})}+n^{O(1)}$ and prove that this is essentially the best possible under the exponential time hypothesis. This is the first PTAS that runs in sub-exponential time in $1/\epsilon$ for strongly NP-hard scheduling problems and related problems. It is interesting and also important to explore this sub-exponential phenomenon in PTASs for other problems. In particular, it would be interesting to investigate the scheduling problem with the objective of minimizing the weighted sum of job completion times, i.e., $P||\sum_j w_jC_j$. Another interesting open problem is to show a sub-exponential lower bound or to develop an FPTAS for $P||\sum_i C_i^q$ when $m=\Theta((1/\epsilon)^\theta)$ for $\theta\in (1/2,1]$.
\end{document} |
\begin{document}
\title{Weak stabilization of a transmission Euler-Bernoulli plate equation with force and moment feedback} \begin{center} \abstract{In this paper we study the asymptotic behaviour of the energy of a transmission plate equation with force and moment feedback. Precisely, we prove that the energy decays at least logarithmically in time. The method consists in using a classical second-order Carleman estimate to establish a resolvent estimate, which, by the well-known result of Burq~\cite{Bur}, provides the kind of decay mentioned above.} \end{center}
\textbf{Key words and phrases: }Transmission problem, boundary stabilization, Euler-Bernoulli plate equation, energy decay, Carleman estimates. \\ \textbf{Mathematics Subject Classification:} \textit{35A01, 35A02, 35M33, 93D20}.
\tableofcontents
\section{Introduction and statement of results} Let $\Omega\subset\mathbb{R}^{n}$ be an open, bounded and connected domain with smooth boundary $\Gamma=\Gamma_{1}\cup\Gamma_{2}$, where $\Gamma_{1}$ and $\Gamma_{2}$ are two nonempty components of $\Gamma$ such that $\Gamma_{1}\cap\Gamma_{2}=\emptyset$. \\ Let $\Omega_{1}\subset\Omega$ be an open domain with smooth boundary $\partial\Omega_{1}=\Gamma_{1}\cup\Gamma_{0}$, where $\Gamma_{1}\cap\Gamma_{0}=\Gamma_{2}\cap\Gamma_{0}=\emptyset$. Then $\Omega_{2}=\Omega\backslash\overline{\Omega}_{1}$ is an open connected domain with boundary $\partial\Omega_{2}=\Gamma_{2}\cup\Gamma_{0}$ (see Figure~\ref{fig1}).
\begin{figure}
\caption{Geometrical situation of the transmission problem.}
\label{fig1}
\end{figure}
\\ We are going to study the following mixed boundary value problem \begin{equation}\label{1} \left\{ \begin{array}{lll} \partial_{t}^{2}u_{1}+c_{1}^{2}\Delta^{2}u_{1}=0&\textrm{in}&\Omega_{1}\times]0,+\infty[, \\ \partial_{t}^{2}u_{2}+c_{2}^{2}\Delta^{2}u_{2}=0&\textrm{in}&\Omega_{2}\times]0,+\infty[, \\ \textcolor{magenta}{u_{1}=u_{2}}&\textcolor{magenta}{\textrm{on}}&\textcolor{magenta}{\Gamma_{0}\times]0,+\infty[}, \\ \textcolor{magenta}{\partial_{\nu} u_{1}=\partial_{\nu} u_{2}}&\textcolor{magenta}{\textrm{on}}&\textcolor{magenta}{\Gamma_{0}\times]0,+\infty[}, \\ \textcolor{magenta}{c_{1}\Delta u_{1}=c_{2}\Delta u_{2}}&\textcolor{magenta}{\textrm{on}}&\textcolor{magenta}{\Gamma_{0}\times]0,+\infty[}, \\ \textcolor{magenta}{c_{1}\partial_{\nu} \Delta u_{1}=c_{2}\partial_{\nu} \Delta u_{2}}&\textcolor{magenta}{\textrm{on}}&\textcolor{magenta}{\Gamma_{0}\times]0,+\infty[}, \\ \textcolor{blue}{\Delta u_{1}=0}&\textcolor{blue}{\textrm{on}}&\textcolor{blue}{\Gamma_{1}\times]0,+\infty[}, \\ \textcolor{blue}{u_{1}=0}&\textcolor{blue}{\textrm{on}}&\textcolor{blue}{\Gamma_{1}\times]0,+\infty[}, \\ \textcolor{red}{\Delta u_{2}=-a\,\partial_{t}\partial_{\nu} u_{2}}&\textcolor{red}{\textrm{on}}&\textcolor{red}{\Gamma_{2}\times]0,+\infty[}, \\ \textcolor{red}{\partial_{\nu}\Delta u_{2}=b\,\partial_{t} u_{2}}&\textcolor{red}{\textrm{on}}&\textcolor{red}{\Gamma_{2}\times]0,+\infty[}, \\ u_{1}(x,0)=u_{1}^{0}(x),\;\partial_{t}u_{1}(x,0)=u_{1}^{1}(x)&\textrm{in}&\Omega_{1}, \\ u_{2}(x,0)=u_{2}^{0}(x),\;\partial_{t}u_{2}(x,0)=u_{2}^{1}(x)&\textrm{in}&\Omega_{2}. \end{array}\right. \end{equation} Here $\nu$ denotes the inner unit normal to the boundary, $c_{1}$ and $c_{2}$ are strictly positive constants, and $a$ and $b$ are nonnegative bounded functions on $\Gamma_{2}$. We suppose that there exists a strictly positive constant $c_{0}$ such that \begin{equation}\label{2} a\geq c_{0} \quad\textrm{and}\quad b\geq c_{0}\;\;\;\;\textrm{ on }\Gamma_{2}. 
\end{equation} Finally, the initial data $u_{1}^{0}$, $u_{1}^{1}$, $u_{2}^{0}$ and $u_{2}^{1}$ will be fixed later in the appropriate spaces.
The energy of a solution \begin{equation*} u=\left\{ \begin{array}{lcl} u_{1}&\textrm{in}&\Omega_{1} \\ u_{2}&\textrm{in}&\Omega_{2} \end{array}\right. \end{equation*} of~\eqref{1} at the time $t\geq 0$ is defined by \begin{equation*}
E(t,u)=\frac{1}{2}\int_{\Omega}\Big(|\partial_{t}u(x,t)|^{2}+\alpha^{2}(x)|\Delta u(x,t)|^{2}\Big)\alpha^{-1}(x)\,\mathrm{d} x, \end{equation*} where \begin{equation*} \alpha=\left\{ \begin{array}{lcl} c_{1}&\textrm{in}&\Omega_{1}, \\ c_{2}&\textrm{in}&\Omega_{2}. \end{array}\right. \end{equation*} By Green's formula we can prove that for all $\;t_{1},\,t_{2}>0$ we have \[
E(t_{2},u)-E(t_{1},u)=-c_{2}\int_{t_{1}}^{t_{2}}\!\!\!\int_{\Gamma_{2}}a|\partial_{t}\partial_{\nu} u_{2}(x,t)|^{2}\,\mathrm{d} x\,\mathrm{d} t-c_{2}\int_{t_{1}}^{t_{2}}\!\!\!\int_{\Gamma_{2}}b|\partial_{t}u_{2}(x,t)|^{2}\,\mathrm{d} x\,\mathrm{d} t, \] which means that the energy is nonincreasing in time.
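Let us sketch this computation (it is formal, and the boundary terms below are written up to the sign conventions attached to the inner normal $\nu$). Using the equations of~\eqref{1} and applying Green's formula in each $\Omega_{j}$, we obtain \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} t}E(t,u)=\sum_{j=1,2}\mathrm{Re}\int_{\Omega_{j}}c_{j}\big(\Delta u_{j}\,\Delta\partial_{t}\overline{u}_{j}-\Delta^{2}u_{j}\,\partial_{t}\overline{u}_{j}\big)\,\mathrm{d} x=\sum_{j=1,2}\mathrm{Re}\int_{\partial\Omega_{j}}c_{j}\big(\Delta u_{j}\,\partial_{\nu}\partial_{t}\overline{u}_{j}-\partial_{\nu}\Delta u_{j}\,\partial_{t}\overline{u}_{j}\big)\,\mathrm{d} x. \end{equation*} The transmission conditions on $\Gamma_{0}$ make the two interface contributions cancel, the conditions $u_{1}=\Delta u_{1}=0$ on $\Gamma_{1}$ annihilate the corresponding terms, and the feedback conditions on $\Gamma_{2}$ yield \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} t}E(t,u)=-c_{2}\int_{\Gamma_{2}}\big(a|\partial_{t}\partial_{\nu} u_{2}|^{2}+b|\partial_{t}u_{2}|^{2}\big)\,\mathrm{d} x, \end{equation*} which gives the identity above after integration in time.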
We define the operator $\mathcal{A}$ by $$\mathcal{A}=\left[\begin{array}{cc} 0& \mathrm{Id} \\ -\alpha^{2}\Delta^{2}& 0 \end{array}\right]$$ the Hilbert space $\mathcal{H}=X\times H$ where $H=L^{2}(\Omega,\alpha^{-1}(x)\,\mathrm{d} x)$ and \begin{equation}\label{8} \begin{split}
X=\big\{u\in H\,:\,u_{1}\in H^{2}(\Omega_{1}),\,u_{2}\in H^{2}(\Omega_{2}),\,u_{1\,|\Gamma_{1}}=0,\,u_{1\,|\Gamma_{0}}=u_{2\,|\Gamma_{0}}, \\
\partial_{\nu} u_{1\,|\Gamma_{0}}=\partial_{\nu} u_{2\,|\Gamma_{0}},\int_{\Gamma_{2}}u_{2}.\overline{\partial_{\nu} u}_{2}\mathrm{d} x=0\big\}, \end{split} \end{equation} and the domain of $\mathcal{A}$ by \begin{align*}
\mathcal{D}(\mathcal{A})=\big\{&(u,v)\in\mathcal{H}\,:\,(v,\Delta^{2}u)\in\mathcal{H},\,c_{1}\Delta u_{1\,|\Gamma_{0}}=c_{2}\Delta u_{2\,|\Gamma_{0}},\,\Delta u_{1\,|\Gamma_{1}}=0, \\
&c_{1}\partial_{\nu} \Delta u_{1\,|\Gamma_{0}}=c_{2}\partial_{\nu} \Delta u_{2\,|\Gamma_{0}},\,\Delta u_{2\,|\Gamma_{2}}=-a\,\partial_{\nu} v_{2\,|\Gamma_{2}},\,\partial_{\nu}\Delta u_{2\,|\Gamma_{2}}=b\,v_{2\,|\Gamma_{2}}\big\}. \end{align*} Now we are able to state our main results \begin{thm}\label{9}
There exist $C_{1},\,C_{2},\,C_{3}>0$ such that if $|\mathrm{Im}(\lambda)|\leq C_{1}\mathrm{e}^{-C_{2}|\mathrm{Re}(\lambda)|}$ and $|\lambda|>C_{3}$, then the resolvent $(\lambda\mathrm{Id}+i\mathcal{A})^{-1}$ is analytic and moreover we have
$$\|(\lambda\mathrm{Id}+i\mathcal{A})^{-1}\|_{\mathcal{L}(\mathcal{H},\mathcal{H})}\leq C\mathrm{e}^{C|\mathrm{Re}(\lambda)|}.$$ \end{thm} As an immediate consequence of the previous theorem (see~\cite[p.17]{Bur} and also, more recently,~\cite{BD}), we get the following energy decay rate \begin{thm}\label{10} For any $k>0$ there exists $C>0$ such that for any initial data $(u_{0},v_{0})=(u_{1}^{0},u_{2}^{0},u_{1}^{1},u_{2}^{1})\in\mathcal{D}(\mathcal{A}^{k})$ the solution $u(x,t)$ of~\eqref{1} starting from $(u_{0},v_{0})$ satisfies
$$E(t,u)\leq\frac{C}{(\ln(2+t))^{2k}}\|(u_{0},v_{0})\|_{\mathcal{D}(\mathcal{A}^{k})}^{2},\quad \forall\; t>0.$$ \end{thm} \begin{rem}\rm{ \* \begin{enumerate}
\item[1)]In the case where $\Gamma_{1}=\emptyset$ (i.e., $\Gamma=\Gamma_{2}$) and only one boundary feedback is active on all of $\Gamma$, given by $\Delta u_{2\,|\Gamma}=-a\,\partial_{t}\partial_{\nu} u_{2\,|\Gamma}$, Ammari and Vodev~\cite{AV} proved an exponential stabilization result for the system~\eqref{1}.
\item[2)]Theorem~\ref{10} remains valid if we suppose that $\Gamma_{1}=\emptyset$ and $\Gamma_{2}=\Gamma_{2}'\cup\Gamma_{2}''$ satisfying $\Gamma_{2}'\cap\Gamma_{2}''=\emptyset$ with the following transmission and boundary conditions:
\begin{equation*} \left\{ \begin{array}{lll} \textcolor{magenta}{u_{1}=u_{2}}&\textcolor{magenta}{\textrm{on}}&\textcolor{magenta}{\Gamma_{0}\times]0,+\infty[}, \\ \textcolor{magenta}{\partial_{\nu} u_{1}=\partial_{\nu} u_{2}}&\textcolor{magenta}{\textrm{on}}&\textcolor{magenta}{\Gamma_{0}\times]0,+\infty[}, \\ \textcolor{magenta}{c_{1}\Delta u_{1}=c_{2}\Delta u_{2}}&\textcolor{magenta}{\textrm{on}}&\textcolor{magenta}{\Gamma_{0}\times]0,+\infty[}, \\ \textcolor{magenta}{c_{1}\partial_{\nu} \Delta u_{1}=c_{2}\partial_{\nu} \Delta u_{2}}&\textcolor{magenta}{\textrm{on}}&\textcolor{magenta}{\Gamma_{0}\times]0,+\infty[}, \\ \textcolor{blue}{\Delta u_{1}=0}&\textcolor{blue}{\textrm{on}}&\textcolor{blue}{\Gamma_{2}'\times]0,+\infty[}, \\ \textcolor{blue}{u_{1}=0}&\textcolor{blue}{\textrm{on}}&\textcolor{blue}{\Gamma_{2}'\times]0,+\infty[}, \\ \textcolor{red}{\Delta u_{2}=-a\,\partial_{t}\partial_{\nu} u_{2}}&\textcolor{red}{\textrm{on}}&\textcolor{red}{\Gamma_{2}''\times]0,+\infty[}, \\ \textcolor{red}{\partial_{\nu}\Delta u_{2}=b\,\partial_{t} u_{2}}&\textcolor{red}{\textrm{on}}&\textcolor{red}{\Gamma_{2}''\times]0,+\infty[}. \end{array}\right. \end{equation*}
\item[3)]To prove Theorem~\ref{9} and Theorem~\ref{10}, we make use of Carleman estimates to obtain information about the resolvent in a bounded domain; the price to pay is the use of phase functions satisfying H\"ormander's assumption. Albano~\cite{A} proved a Carleman estimate for the plate operator by decomposing it as the product of two Schr{\"o}dinger operators, establishing for each of them the corresponding Carleman estimate and then combining the two estimates to obtain the result. Inspired by this method, we perform a similar decomposition of a fourth-order system into two new second-order systems, and then apply the classical Carleman estimate to one of these operators, from which the result of Theorem~\ref{9} follows easily.
\item[4)]Theorem~\ref{9} and Theorem~\ref{10} are analogous to the results of Fathallah~\cite{I}, in the case of a hyperbolic-parabolic coupled system, and of Lebeau and Robbiano~\cite{LR}, in the case of the scalar wave equation without transmission, but our method is different from theirs in that it uses the Carleman estimates directly for the stationary operator, without going through the interpolation inequality.
\item [5)]Several studies have focused on transmission problems, such as the works of Bellassoued~\cite{B} and Fathallah~\cite{I} on stabilization problems and that of Le Rousseau and Robbiano~\cite{RR} on a control problem. In each of these works one needs a Carleman estimate near the interface, but here, thanks to the transmission conditions, we will use only the classical Carleman estimates (see for instance Le Rousseau and Lebeau~\cite{LL} and Lebeau and Robbiano~\cite{LR} and~\cite{LR2}). \end{enumerate} } \end{rem}
In this paper $C$ will always be a generic positive constant whose value may be different from one line to another.
The outline of this paper is as follows. In Section~\ref{11} we prove the well-posedness of problem~\eqref{1}, and in Section~\ref{d1} we prove the logarithmic decay of the energy of system~\eqref{1}.
\section{Well-posedness of the problem}\label{11} To prove the well-posedness of problem~\eqref{1} we use semigroup theory. Our strategy consists in writing the equations as a Cauchy problem with an operator which generates a semigroup of contractions.
\subsection{The Cauchy problem} Throughout this paper, we denote $\mathcal{O}=\Omega_{1}\cup\Omega_{2}$ and $\langle\,.\,,\,.\,\rangle_{H}$ the inner product in $H=L^{2}(\mathcal{O},\alpha^{-1}(x)\,\mathrm{d} x)$ defined by \begin{equation*} \langle u,v\rangle_{H}=\int_{\mathcal{O}}u(x)\overline{v(x)}\alpha^{-1}(x)\,\mathrm{d} x=\int_{\Omega_{1}}u_{1}(x)\overline{v_{1}(x)}c_{1}^{-1}\,\mathrm{d} x+\int_{\Omega_{2}}u_{2}(x)\overline{v_{2}(x)}c_{2}^{-1}\,\mathrm{d} x, \end{equation*} where we recall that $$u(x,t)=\left\{\begin{array}{lcl} u_{1}(x,t)&\textrm{if}&x\in\Omega_{1}, \\ u_{2}(x,t)&\textrm{if}&x\in\Omega_{2}, \end{array}\right. \quad \text{and} \quad v(x,t)=\left\{\begin{array}{lcl} v_{1}(x,t)&\textrm{if}&x\in\Omega_{1}, \\ v_{2}(x,t)&\textrm{if}&x\in\Omega_{2}. \end{array}\right.$$ The first two equations of~\eqref{1} can be written as follows $$\partial_{t}\left(\begin{array}{c} u \\ v \end{array}\right)=\mathcal{A}\left(\begin{array}{c} u \\ v \end{array}\right)$$ and the Cauchy problem is written in the following form $$\left\{\begin{array}{lcl} \partial_{t}\left(\begin{array}{c} u \\ v \end{array}\right)(x,t)=\mathcal{A}\left(\begin{array}{c} u \\ v \end{array}\right)(x,t)&\text{if}&(x,t)\in\mathcal{O}\times]0,+\infty[, \\ \left(\begin{array}{l} u \\ v \end{array}\right)(x,0)=\left(\begin{array}{l} u_{0} \\ v_{0} \end{array}\right)(x)&\text{if}&x\in\mathcal{O}, \end{array}\right.$$ where $$u_{0}(x)=\left\{\begin{array}{lcl} u_{1}^{0}(x)&\textrm{if}&x\in\Omega_{1}, \\ u_{2}^{0}(x)&\textrm{if}&x\in\Omega_{2}, \end{array}\right. \quad \text{and} \qquad v_{0}(x)=\left\{\begin{array}{lcl} v_{1}^{0}(x)&\textrm{if}&x\in\Omega_{1}, \\ v_{2}^{0}(x)&\textrm{if}&x\in\Omega_{2}. \end{array}\right.$$
Now we have to specify the functional space and the domain of the operator $\mathcal{A}$; to this end we first define another operator $G$, which is the square root of the operator $\alpha^{2}\Delta^{2}$ (see~\cite[p.391]{TucWei} for the definition and more details).
\subsection{Properties of the square root of the operator $\alpha^{2}\Delta^{2}$} In the space $H=L^{2}(\mathcal{O},\alpha^{-1}(x)\,\mathrm{d} x)$ we define the operator $G$ by the following expression \[ Gu=-\alpha\Delta u\qquad \forall\;u\in\mathcal{D}(G) \]
with domain $\mathcal{D}(G)=X$ defined in~\eqref{8}. The space $X$ is equipped with the norm $\|u\|_{X}=\|Gu\|_{H}$ and we define the graph norm of $G$ by
$$\|u\|_{gr(G)}^{2}=\|u\|_{H}^{2}+\|Gu\|_{H}^{2}.$$ Then we have the following result \begin{pro}
$(X,\|\,.\,\|_{X})$ is a Hilbert space with a norm equivalent to the graph norm of $G$. \end{pro} \begin{pr}
It is easy to show that if $G$ is a closed operator then $(X,\|\,.\,\|_{gr(G)})$ is a Hilbert space. Thus to prove the proposition it suffices to show that $G$ is closed and that the two norms are equivalent. \\ By Green's formula and the Poincar\'e inequality it is easy to show that there exists $C>0$ such that
$$\langle Gu,u\rangle_{H}=\|\nabla u\|_{L^{2}(\mathcal{O})}^{2}\geq C\|u\|_{H}^{2}\quad\forall\,u\in X,$$ hence $G$ is a strictly positive operator and we have
$$\|Gu\|_{H}\|u\|_{H}\geq\langle Gu,u\rangle_{H}\geq C\|u\|_{H}^{2}\quad\forall\,u\in X,$$ which proves the equivalence of the two norms. \\ Since $G$ is positive, by Proposition 3.3.5 in~\cite[p.79]{TucWei} the operator $-G$ is m-dissipative and thus $G$ is closed. This completes the proof. \end{pr}
This last result allows us to properly define the functional space of the operator $\mathcal{A}$. \begin{pro}\label{7}
The two spaces $(X,\|\,.\,\|_{2})$ and $(X,\|\,.\,\|_{X})$ are algebraically and topologically the same, where $\|\,.\,\|_{2}$ denotes the classical Sobolev norm. \end{pro} \begin{pr} We have only to prove that the two norms are equivalent. \\
First, we note that $(X,\|\,.\,\|_{2})$ is a Hilbert space because $X$ is a closed subspace of $H^{2}(\mathcal{O})$, in addition we have
$$\|u\|_{X}=\|Gu\|_{L^{2}(\mathcal{O})}\leq C\|u\|_{2}\quad\forall\,u\in X,$$
and since $(X,\|\,.\,\|_{X})$ is also a Hilbert space, according to the Banach theorem (see Corollary 9.2.3 in~\cite[p.132]{YVA}) the two norms are equivalent. \end{pr}
An important consequence of this result is that the space $X$ is a Hilbert space with the norm $\|\,.\,\|_{gr(G)}+\|a^{\frac{1}{2}}\partial_{\nu}\,.\,\|_{L^{2}(\Gamma_{2})}+\|b^{\frac{1}{2}}\,.\,\|_{L^{2}(\Gamma_{2})}$.
\subsection{Existence and uniqueness of the solution} We set $\mathcal{H}=X\times H$, the Hilbert space with the norm \[
\|(u,v)\|=\|u\|_{X}+\|v\|_{H}\qquad\forall\, (u,v)\in\mathcal{H}, \] and we recall that the domain of the operator $\mathcal{A}$ is defined by \begin{align*}
\mathcal{D}(\mathcal{A})=\big\{&(u,v)\in\mathcal{H}\,:\,(v,\Delta^{2}u)\in\mathcal{H},\,c_{1}\Delta u_{1\,|\Gamma_{0}}=c_{2}\Delta u_{2\,|\Gamma_{0}},\,\Delta u_{1\,|\Gamma_{1}}=0, \\
&c_{1}\partial_{\nu} \Delta u_{1\,|\Gamma_{0}}=c_{2}\partial_{\nu} \Delta u_{2\,|\Gamma_{0}},\,\Delta u_{2\,|\Gamma_{2}}=-a\,\partial_{\nu} v_{2\,|\Gamma_{2}},\,\partial_{\nu}\Delta u_{2\,|\Gamma_{2}}=b\,v_{2\,|\Gamma_{2}}\big\}. \end{align*} \begin{thm} Under the above assumptions, $\mathcal{A}$ is m-dissipative and in particular it generates a strongly continuous semigroup of contractions in $\mathcal{H}$. \end{thm} \begin{pr} According to the Lumer-Phillips theorem (see for example~\cite[p.103]{TucWei}) we have only to prove that $\mathcal{A}$ is m-dissipative. \\ Let $(u,v)\in\mathcal{D}(\mathcal{A})$; then by Green's formula we have \begin{eqnarray*} \mathrm{Re}\left(\left\langle\mathcal{A}\left(\begin{array}{l} u \\ v \end{array}\right),\left(\begin{array}{l} u \\ v \end{array}\right)\right\rangle_{\mathcal{H}}\right)\!\!\!\!&=&\!\!\!\!\alpha\,\mathrm{Re}\left(\langle\Delta u,\Delta v\rangle_{L^{2}(\mathcal{O})}-\langle\Delta^{2}u,v\rangle_{L^{2}(\mathcal{O})}\right) \\ \!\!\!\!&=&\!\!\!\!\alpha\,\mathrm{Re}\left(\langle\Delta u,\partial_{\nu} v\rangle_{L^{2}(\partial\mathcal{O})}-\langle\partial_{\nu} \Delta u,v\rangle_{L^{2}(\partial\mathcal{O})}\right) \\
\!\!\!\!&=&\!\!\!\!-c_{2}\|a^{\frac{1}{2}}\partial_{\nu} v_{2}\|_{L^{2}(\Gamma_{2})}^{2}-c_{2}\|b^{\frac{1}{2}} v_{2}\|_{L^{2}(\Gamma_{2})}^{2}\leq 0. \end{eqnarray*} This shows that $\mathcal{A}$ is dissipative. \\ Let now $(f,g)\in\mathcal{H}$ and our purpose is to find a couple $(u,v)\in\mathcal{D}(\mathcal{A})$ such that $$\left(\mathrm{Id}-\mathcal{A}\right)\left(\begin{array}{l} u \\ v \end{array}\right)=\left(\begin{array}{l} u-v \\ v+\alpha^{2}\Delta^{2}u \end{array}\right)=\left(\begin{array}{l} f \\ g \end{array}\right)$$ and more explicitly we have to find $(u,v)\in\mathcal{D}(\mathcal{A})$ such that \begin{equation*} \left\{\begin{array}{l} v=u-f=\left\{\begin{array}{lll} v_{1}=u_{1}-f_{1}&\text{in}&\Omega_{1} \\ v_{2}=u_{2}-f_{2}&\text{in}&\Omega_{2} \end{array}\right. \\ u+\alpha^{2}\Delta^{2} u=f+g=\left\{\begin{array}{lll} u_{1}+c_{1}^{2}\Delta^{2} u_{1}=f_{1}+g_{1}&\text{in}&\Omega_{1} \\ u_{2}+c_{2}^{2}\Delta^{2} u_{2}=f_{2}+g_{2}&\text{in}&\Omega_{2}. \end{array}\right. \end{array}\right. \end{equation*} First note that, thanks to the remark after Proposition~\ref{7} and the Riesz representation theorem, there exists a unique $u\in X=\mathcal{D}(G)$ such that for all $\varphi\in X$ we have \begin{equation}\label{4} \begin{split} \langle f+g,\varphi\rangle_{H}+\langle c_{2}a\,\partial_{\nu} f_{2},\partial_{\nu}\varphi_{2}\rangle_{L^{2}(\Gamma_{2})}+\langle c_{2}b\,f_{2},\varphi_{2}\rangle_{L^{2}(\Gamma_{2})}=\langle\alpha\Delta u,\alpha\Delta\varphi\rangle_{H} \\ +\langle u,\varphi\rangle_{H}+\langle c_{2}a\,\partial_{\nu} u_{2},\partial_{\nu}\varphi_{2}\rangle_{L^{2}(\Gamma_{2})}+\langle c_{2}b\, u_{2},\varphi_{2}\rangle_{L^{2}(\Gamma_{2})}. 
\end{split} \end{equation} In particular, for all $\varphi\in\mathscr{C}_{c}^{\infty}(\mathcal{O})$ the expression~\eqref{4} reads $$\langle\alpha\Delta^{2}u+\alpha^{-1}(u-f-g),\varphi\rangle_{L^{2}(\mathcal{O})}=0,$$ hence we have \begin{equation}\label{5} u+\alpha^{2}\Delta^{2} u=f+g\quad\text{in}\;L^{2}(\mathcal{O}). \end{equation} Returning to the expression~\eqref{4}, Green's formula allows us to write it as follows \begin{align*} &\langle\alpha\Delta^{2}u+\alpha^{-1}(u-f-g),\varphi\rangle_{L^{2}(\mathcal{O})}=\langle c_{2}a\,\partial_{\nu} f_{2},\partial_{\nu}\varphi_{2}\rangle_{L^{2}(\Gamma_{2})}+\langle c_{2}b\,f_{2},\varphi_{2}\rangle_{L^{2}(\Gamma_{2})}+ \\ &\langle\alpha\partial_{\nu}\Delta u,\varphi\rangle_{L^{2}(\partial\mathcal{O})}-\langle c_{2}a\,\partial_{\nu} u_{2},\partial_{\nu}\varphi_{2}\rangle_{L^{2}(\Gamma_{2})}-\langle c_{2}b\,u_{2},\varphi_{2}\rangle_{L^{2}(\Gamma_{2})}-\langle\alpha\Delta u,\partial_{\nu}\varphi\rangle_{L^{2}(\partial\mathcal{O})}, \end{align*} then by~\eqref{5} and after a simple calculation we get, for all $\varphi\in X$, \begin{align*} &\langle c_{1}\partial_{\nu}\Delta u_{1}-c_{2}\partial_{\nu}\Delta u_{2},\varphi_{2}\rangle_{L^{2}(\Gamma_{0})}-\langle c_{1}\Delta u_{1}-c_{2}\Delta u_{2},\partial_{\nu}\varphi_{1}\rangle_{L^{2}(\Gamma_{0})}-\langle c_{1}\Delta u_{1},\partial_{\nu}\varphi_{1}\rangle_{L^{2}(\Gamma_{1})} \\ &+c_{2}\langle a(\partial_{\nu} f_{2}-\partial_{\nu} u_{2})-\Delta u_{2},\partial_{\nu}\varphi_{2}\rangle_{L^{2}(\Gamma_{2})}+c_{2}\langle b(f_{2}-u_{2})+\partial_{\nu}\Delta u_{2},\varphi_{2}\rangle_{L^{2}(\Gamma_{2})}=0, \end{align*} and this shows the following equalities
$$c_{1}\Delta u_{1\,|\Gamma_{0}}=c_{2}\Delta u_{2\,|\Gamma_{0}},\; c_{1}\partial_{\nu} \Delta u_{1\,|\Gamma_{0}}=c_{2}\partial_{\nu} \Delta u_{2\,|\Gamma_{0}},\; \Delta u_{1\,|\Gamma_{1}}=0$$ and also \begin{equation*} \begin{array}{l}
\Delta u_{2\,|\Gamma_ {2}}=-a(\partial_{\nu} u_{2\,|\Gamma_ {2}}-\partial_{\nu} f_{2\,|\Gamma_ {2}})=-a\,\partial_{\nu} v_{2\,|\Gamma_ {2}}\quad \\
\partial_{\nu}\Delta u_{2\,|\Gamma_ {2}}=b(u_{2\,|\Gamma_ {2}}- f_{2\,|\Gamma_ {2}})=b\,v_{2\,|\Gamma_ {2}}. \end{array} \end{equation*} This concludes the proof. \end{pr}
One consequence of this last result is that if we assume that $(u_{0},v_{0})\in\mathcal{D}(\mathcal{A})$, there exists a unique solution of~\eqref{1} which can be expressed by means of a semigroup on $\mathcal{H}$ as follows \begin{equation}\label{6} \left(\begin{array}{l} u \\ \partial_{t}u \end{array}\right)=e^{t\mathcal{A}}\left(\begin{array}{l} u_{0} \\ v_{0} \end{array}\right) \end{equation} where $e^{t\mathcal{A}}$ is the semigroup generated by the operator $\mathcal{A}$. Moreover, the solution has the following regularity $$ \left(\begin{array}{l} u \\ \partial_{t}u \end{array}\right) \in C([0,+\infty[,\mathcal{D}(\mathcal{A}))\cap C^{1}([0,+\infty[,\mathcal{H)}.$$ \\ If $(u_{0},v_{0})\in\mathcal{H}$, the function $u(t)$ given by~\eqref{6} is the mild solution of~\eqref{1} and it lives in $C([0,+\infty[,\mathcal{H})$.
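Note in passing that, with the Hilbertian norm $\|(u,v)\|_{\mathcal{H}}^{2}=\|u\|_{X}^{2}+\|v\|_{H}^{2}$ (which is equivalent to the norm $\|u\|_{X}+\|v\|_{H}$ introduced above), the energy of the solution reads \begin{equation*} E(t,u)=\frac{1}{2}\big(\|\alpha\Delta u(t)\|_{H}^{2}+\|\partial_{t}u(t)\|_{H}^{2}\big)=\frac{1}{2}\,\big\|e^{t\mathcal{A}}(u_{0},v_{0})\big\|_{\mathcal{H}}^{2}, \end{equation*} since $\|u\|_{X}=\|Gu\|_{H}=\|\alpha\Delta u\|_{H}$. Thus estimating the decay of the energy amounts to estimating the decay of the semigroup norm, and it is in this form that the resolvent estimate of Theorem~\ref{9} will be exploited through Burq's result~\cite{Bur}.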
\section{Proof of Theorem~\ref{9}}\label{d1}
The purpose of this section is to find an estimate of the resolvent $(\lambda\mathrm{Id}+i\mathcal{A})^{-1}$ for $\lambda$ in the region $\{z\in\mathbb{C};\;|\mathrm{Im}(z)|<C_{1}\mathrm{e}^{-C_{2}|\mathrm{Re}(z)|},\,|z|>C_{3}\}$ with some constants $C_{1},\,C_{2},\,C_{3}>0$. More precisely, we prove that $\|(\lambda\mathrm{Id}+i\mathcal{A})^{-1}\|_{\mathscr{L}(\mathcal{H},\mathcal{H})}\leq C\mathrm{e}^{C|\mathrm{Re}(\lambda)|}$, which implies the weak energy decay of the solution of equation~\eqref{1}.
The main idea consists in using Carleman estimates for a second-order elliptic operator derived from the original fourth-order one, and this is where the originality of our work lies: we prove the stability result for a fourth-order system by using only a second-order Carleman estimate.
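To make this precise, write $s=\mathrm{sig}\big(\mathrm{Re}(\lambda)\big)$. Since $\alpha$ is constant in each of $\Omega_{1}$ and $\Omega_{2}$, we have there the factorization \begin{equation*} -\lambda^{2}+\alpha^{2}\Delta^{2}=\alpha\Big(\Delta+s\frac{\lambda}{\alpha}\Big)\alpha\Big(\Delta-s\frac{\lambda}{\alpha}\Big), \end{equation*} so that, setting $w=\alpha\big(\Delta-s\frac{\lambda}{\alpha}\big)u$, the fourth-order equation $(-\lambda^{2}+\alpha^{2}\Delta^{2})u=\Phi$ splits into the two second-order problems~\eqref{d7} and~\eqref{d5} written below.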
As mentioned previously, to prove Theorem~\ref{9} we will need the Carleman estimates due to Lebeau and Robbiano~\cite{LR2}, as formulated by Burq~\cite{Bur}. We consider the second-order elliptic operator $P=-h^{2}\Delta$ acting on complex-valued functions defined in an open subset $U\subset\mathbb{R}^{n}$ with smooth boundary, whose principal symbol is denoted by $p(x,\xi)=|\xi|^{2}$, where $h$ is a small semiclassical parameter.
Let $\varphi\in\mathscr{C}^{\infty}(\overline{U})$ be a real-valued function and define the conjugated operator $P_{\varphi}=\mathrm{e}^{\varphi/h}P\mathrm{e}^{-\varphi/h}$ of principal symbol $p_{\varphi}(x,\xi)=p(x,\xi+i\nabla\varphi)$ for $0<h\leq h_{0}$. Then we have the following result \begin{pro}\cite[Proposition 2]{LR2}~\cite[Proposition 1]{LR}\label{d16} Let $\gamma$ be a nonempty union of connected components of $\partial U$. Assume the weight function $\varphi$ satisfies the following assumptions: \begin{enumerate}
\item $\nabla\varphi\neq 0$ for all $x\in\overline{U}$,
\item $\partial_{\nu}\varphi\neq 0$ for all $x\in\partial U$,
\item $\partial_{\nu}\varphi<0$ for all $x\in \gamma$,
\item H{\"o}rmander's sub-ellipticity condition:
\[
\forall\; (x,\xi)\in\overline{U}\times\mathbb{R}^{n};\; p_{\varphi}(x,\xi)=0\Longrightarrow\{\mathrm{Re}(p_{\varphi}),\mathrm{Im}(p_{\varphi})\}(x,\xi)>0.
\] \end{enumerate} Then there exists $C>0$ such that for all $u\in\mathscr{C}^{\infty}(\overline{U})$ satisfying \[ \left\{\begin{array}{ll} \displaystyle\Delta u=f &\text{in }U \\ u=0&\text{on }\gamma, \end{array} \right. \] and for all $h\in]0,h_{0}]$ small we have \begin{equation}\label{d6} \begin{split}
h\int_{U}\mathrm{e}^{2\varphi/h}|u|^{2}\,\mathrm{d} x&+h^{3}\int_{U}\mathrm{e}^{2\varphi/h}|\nabla u|^{2}\,\mathrm{d} x\leq C\Big(h^{4}\int_{U}\mathrm{e}^{2\varphi/h}|f|^{2}\,\mathrm{d} x \\
&+h\int_{\partial U\backslash\gamma}\mathrm{e}^{2\varphi/h}|u|^{2}\,\mathrm{d} x+h^{3}\int_{\partial U\backslash\gamma}\mathrm{e}^{2\varphi/h}|\partial_{\nu} u|^{2}\,\mathrm{d} x\Big). \end{split} \end{equation} \end{pro} \begin{rem} \rm{ \* \begin{enumerate}
\item [1)]If the function $u$ is supported away from a subset $\gamma_{0}\subset\partial U$, then the estimate~\eqref{d6} still holds even if we do not assume that $\partial_{\nu}\varphi\neq 0$ on $\gamma_{0}$, since the proof is local.
\item [2)]We cannot assume that $\partial_{\nu}\varphi<0$ on the whole of $\partial U$; otherwise the weight function would attain its global maximum in $U$, and thus our strategy for the construction of the phases would fail (see the next subsection). \end{enumerate} } \end{rem}
\subsection{Weight function's construction}\label{d15} In this section we construct two phases $\varphi_{1}$ and $\varphi_{2}$ which satisfy H{\"o}rmander's condition except in a finite number of balls, where whenever one of them does not satisfy this condition, the other one does and is strictly greater. The main ingredient of this section is the following result. Note that this result is similar to Burq's~\cite[Proposition 3.2]{Bur}, but here we give a new proof due to F.~Laudenbach. \begin{pro}\label{d8}
Keeping the same notations as in the first section, there exist two real functions $\psi_{1},\,\psi_{2}\in\mathscr{C}^{\infty}(\Omega)$ satisfying, for $k=1,2$, $\partial_{\nu}\psi_{k\,|\Gamma}\neq 0$ and $\partial_{\nu}\psi_{k\,|\Gamma_{1}}<0$, having only nondegenerate critical points (in finite number), and such that whenever $\nabla\psi_{k}=0$ we have $\nabla\psi_{\sigma(k)}\neq 0$ and $\psi_{\sigma(k)}>\psi_{k}$, where $\sigma$ denotes the permutation of the set $\{1,2\}$ different from the identity. \end{pro} \begin{rem} \rm{ \* \begin{enumerate}
\item[1)] One consequence of Proposition~\ref{d8} is that there exist finitely many points $x_{kj_{k}}$, for $k=1,2$ and $j_{k}=1,\ldots,N_{k}$, and $\epsilon>0$ such that $B(x_{kj_{k}},2\epsilon)\subset\overline{\Omega}$ and $B(x_{1j_{1}},2\epsilon)\cap B(x_{2j_{2}},2\epsilon)=\emptyset$ for all $k=1,2$ and $j_{k}=1,\ldots,N_{k}$, and in $B(x_{kj_{k}},2\epsilon)$ we have $\psi_{\sigma(k)}>\psi_{k}$ (see Figure~\ref{fig2}).
\item[2)] For $\lambda>0$ large enough the weight functions $\varphi_{k}=\mathrm{e}^{\lambda\psi_{k}}$ satisfy H{\"o}rmander's condition in $\displaystyle U_{k}=\Omega\bigcap\left(\bigcup_{j_{k}=1}^{N_{k}}B(x_{kj_{k}},\epsilon)\right)^{c}$. Indeed, it suffices to prove that for an open bounded subset $U\subset\mathbb{R}^{n}$, if $\psi\in\mathscr{C}^{\infty}(\overline{U})$ satisfies $|\nabla\psi|\geq C$ in $\overline{U}$ and $\varphi=\mathrm{e}^{\lambda\psi}$, then $\{\mathrm{Re}(p_{\varphi}),\mathrm{Im}(p_{\varphi})\}(x,\xi)\geq C'$ in $\overline{U}\times\mathbb{R}^{n}$ for $\lambda>0$ large enough. We have $$ \left\{\begin{array}{c} \nabla\varphi=\lambda\mathrm{e}^{\lambda\psi}\nabla\psi\;\text{ and }\;\varphi''=\mathrm{e}^{\lambda\psi}(\lambda^{2}\nabla\psi.{}^{t}\nabla\psi+\lambda\psi'') \\
p_{\varphi}(x,\xi)=0\Longrightarrow\langle\xi,\nabla\varphi\rangle=0\text{ and }|\xi|^{2}=|\nabla\varphi|^{2} \end{array}\right. $$ then we obtain \begin{eqnarray*}
\{\mathrm{Re}(p_{\varphi}),\mathrm{Im}(p_{\varphi})\}(x,\xi)&=&4\lambda\mathrm{e}^{\lambda\psi}\,{}^{t}\xi.\psi''.\xi+4\mathrm{e}^{3\lambda\psi}(\lambda^{4}|\nabla\psi|^{4}+\lambda^{3}\,{}^{t}\nabla\psi.\psi''.\nabla\psi) \\
&=&4\mathrm{e}^{3\lambda\psi}(\lambda^{4}|\nabla\psi|^{4}+O(\lambda^{3})), \end{eqnarray*} which proves the claim.
\item[3)] More generally, Proposition~\ref{d8} also holds for any smooth manifold whose boundary is the disjoint union of two open and closed submanifolds. \end{enumerate} } \end{rem} \begin{figure}
\caption{The domains of the weight functions $\varphi_{1}$ and $\psi_{1}$ (in yellow and orange) and $\varphi_{2}$ and $\psi_{2}$ (in red and orange), where they have no critical points.}
\label{fig2}
\end{figure} \begin{pr}
Since Morse functions are dense (for the $\mathscr{C}^{\infty}$ topology) in the set of $\mathscr{C}^{\infty}$ functions, we can find a Morse function $\psi_{1}$ such that $\partial_{\nu}\psi_{1\,|\Gamma_{1}}<0$ and $\partial_{\nu}\psi_{1\,|\Gamma_{2}}>0$. We can suppose that $\psi_{1}$ has no local maximum in $\Omega$ (the procedure for eliminating the maxima is described by Burq~\cite[Appendix A]{Bur}; see also~\cite[Theorem 8.1]{Mi} and~\cite[Lemma 2.6]{L}).
Let $c$ be a critical point of $\psi_{1}$; since its index is different from $n$ ($\psi_{1}$ has no local maximum), we can find a $\mathscr{C}^{\infty}$ arc $\gamma_{c}:[-1,1]\rightarrow\Omega$ such that $\gamma_{c}(0)=c$ and $\psi_{1}(\gamma_{c}(1))=\psi_{1}(\gamma_{c}(-1))>\psi_{1}(c)$. We carry out this construction for all the critical points of $\psi_{1}$ in such a way that all the arcs are mutually disjoint. This allows us to find a vector field $X$ in $\Omega$, vanishing near the boundary of $\Omega$, such that for all critical points $c$ of $\psi_{1}$ we have $$X(\gamma_{c}(t))=\stackrel{.}{\gamma}_{c}(t),$$ where $\stackrel{.}{\gamma}$ stands for the time derivative.
We denote $\phi_{t}$ its flow: $$\stackrel{.}{\phi_{t}}(x)=X(\phi_{t}(x)),$$
and we set $\psi_{2}=\psi_{1}\circ\phi_{1}$; then $\psi_{1}$ and $\psi_{2}$ satisfy the required properties. Indeed, since $X\equiv0$ near the boundary $\Gamma$, we have $\phi_{t}(x)=x$ near $\Gamma$, hence $\partial_{\nu}\psi_{1\,|\Gamma}=\partial_{\nu}\psi_{2\,|\Gamma}$. If $c$ is a critical point of $\psi_{1}$ then we have $\psi_{2}(c)=\psi_{1}(\gamma_{c}(1))>\psi_{1}(c)$, and if $c'$ is a critical point of $\psi_{2}$ then $c'=\phi_{-1}(c)$ where $c$ is a critical point of $\psi_{1}$, and we have $\psi_{2}(c')=\psi_{1}(\phi_{1}\circ\phi_{-1}(c))=\psi_{1}(c)<\psi_{1}(\phi_{-1}(c))=\psi_{1}(c')$ by the construction of $\gamma_{c}$. \end{pr}
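Let us make explicit the point used at the end of the proof: by the chain rule, $\nabla\psi_{2}(x)={}^{t}D\phi_{1}(x).\nabla\psi_{1}(\phi_{1}(x))$, and since $\phi_{1}$ is a diffeomorphism with invertible differential, $\nabla\psi_{2}(x)=0$ if and only if $\nabla\psi_{1}(\phi_{1}(x))=0$; in other words, the critical points of $\psi_{2}$ are exactly the points $\phi_{-1}(c)$ with $c$ a critical point of $\psi_{1}$.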
\subsection{Back to the proof of Theorem~\ref{9}} We return now to the main proof. Let $(F,G)\in\mathcal{H}$ with $F=(f_{1},f_{2})\in X$ and $G=(g_{1},g_{2})\in H$ and $(u,v)\in D(\mathcal{A})$ with $u=(u_{1},u_{2})$ and $v=(v_{1},v_{2})$ such that $$(\lambda\mathrm{Id}+i\mathcal{A})\left(\begin{array}{c} u \\ v \end{array}\right)=\left(\begin{array}{c} F \\ G \end{array}\right),$$ then we get the following boundary value problem \begin{equation}\label{d2} \left\{\begin{array}{ll} \lambda u+iv=F&\text{in }\mathcal{O} \\ -i\alpha^{2}\Delta^{2}u+\lambda v=G&\text{in }\mathcal{O} \\ u_{1}=u_{2},\quad\partial_{\nu} u_{1}=\partial_{\nu} u_{2}&\text{on }\Gamma_{0} \\ c_{1}\Delta u_{1}=c_{2}\Delta u_{2},\quad c_{1}\partial_{\nu} \Delta u_{1}=c_{2}\partial_{\nu}\Delta u_{2}&\text{on }\Gamma_{0} \\ u_{1}=0&\text{on }\Gamma_{1} \\ \Delta u_{1}=0&\text{on }\Gamma_{1} \\ \Delta u_{2}=-a\,\partial_{\nu} v_{2}&\text{on }\Gamma_{2} \\ \partial_{\nu}\Delta u_{2}=b\, v_{2}&\text{on }\Gamma_{2}. \end{array}\right. \end{equation} Then the solution $(u,v)$ of~\eqref{d2} satisfies \begin{equation}\label{d4} \left\{\begin{array}{ll} v=i\lambda u-iF&\text{in }\mathcal{O} \\ (-\lambda^{2}+\alpha^{2}\Delta^{2})u=iG-\lambda F=\Phi&\text{in }\mathcal{O} \\ u_{1}=u_{2},\quad\partial_{\nu} u_{1}=\partial_{\nu} u_{2}&\text{on }\Gamma_{0} \\ c_{1}\Delta u_{1}=c_{2}\Delta u_{2},\quad c_{1}\partial_{\nu} \Delta u_{1}=c_{2}\partial_{\nu}\Delta u_{2}&\text{on }\Gamma_{0} \\ u_{1}=0&\text{on }\Gamma_{1} \\ \Delta u_{1}=0&\text{on }\Gamma_{1} \\ \Delta u_{2}+i\lambda a\,\partial_{\nu} u_{2}=ia\,\partial_{\nu} f_{2}=\phi_{a}&\text{on }\Gamma_{2} \\ \partial_{\nu}\Delta u_{2}-i\lambda b\,u_{2}=-ib\,f_{2}=\phi_{b}&\text{on }\Gamma_{2}. \end{array}\right. \end{equation} Integrating by parts we obtain \begin{eqnarray}\label{d3}
\langle\Phi,u\rangle_{H}&=&\!\!\!\alpha^{-1}\int(-\lambda^{2}+\alpha^{2}\Delta^{2})u.\overline{u}\,\mathrm{d} x=-\lambda^{2}\|u\|_{H}^{2}+\alpha\int\Delta^{2}u.\overline{u}\,\mathrm{d} x\nonumber \\
&=&\!\!\!-\lambda^{2}\|u\|_{H}^{2}+\alpha^{2}\|\Delta u\|_{H}^{2}-c_{2}\langle \Delta u_{2},\partial_{\nu} u_{2}\rangle_{L^{2}(\Gamma_{2})}+c_{2}\langle \partial_{\nu}\Delta u_{2},u_{2}\rangle_{L^{2}(\Gamma_{2})}\nonumber \\
&=&\!\!\!-\lambda^{2}\|u\|_{H}^{2}+\alpha^{2}\|\Delta u\|_{H}^{2}-c_{2}\langle\phi_{a},\partial_{\nu} u_{2}\rangle_{L^{2}(\Gamma_{2})}+c_{2}\langle\phi_{b},u_{2}\rangle_{L^{2}(\Gamma_{2})} \\ &+&\!\!\!i\lambda c_{2}\langle a\,\partial_{\nu} u_{2},\partial_{\nu} u_{2}\rangle_{L^{2}(\Gamma_{2})}+i\lambda c_{2}\langle b\,u_{2},u_{2}\rangle_{L^{2}(\Gamma_{2})}.\nonumber \end{eqnarray} Keeping only the imaginary part of~\eqref{d3}, we get \begin{equation}\label{d9}
|\mathrm{Re}(\lambda)|\int_{\Gamma_{2}} a|\partial_{\nu} u_{2}|^{2}+ b|u_{2}|^{2}\,\mathrm{d} x\leq C(\|\Phi\|_{H}\|u\|_{H}+2|\mathrm{Re}(\lambda)\mathrm{Im}(\lambda)|^{2}\,\|u\|_{H}^{2}+\|f_{2}\|_{2}\|u_{2}\|_{2}). \end{equation} Now we return to the system~\eqref{d4}, which can be recast as follows \begin{equation}\label{d7} \left\{\begin{array}{ll} v=i\lambda u-iF&\text{in }\mathcal{O} \\ \displaystyle\Big(\Delta-\mathrm{sig}\big(\mathrm{Re}(\lambda)\big)\frac{\lambda}{\alpha}\Big)u=\alpha^{-1}w&\text{in }\mathcal{O} \\ u_{1}=u_{2},\quad\partial_{\nu} u_{1}=\partial_{\nu} u_{2}&\text{on }\Gamma_{0} \\ u_{1}=0&\text{on }\Gamma_{1} \end{array}\right. \end{equation} and \begin{equation}\label{d5} \left\{\begin{array}{ll} \displaystyle\Big(\Delta+\mathrm{sig}\big(\mathrm{Re}(\lambda)\big)\frac{\lambda}{\alpha}\Big)w=\alpha^{-1}\Phi&\text{in }\mathcal{O} \\ w_{1}=w_{2},\quad\partial_{\nu} w_{1}=\partial_{\nu} w_{2}&\text{on }\Gamma_{0} \\ w_{1}=0&\text{on }\Gamma_{1} \\ \displaystyle w_{2}=c_{2}\phi_{a}-ic_{2}\lambda a\,\partial_{\nu} u_{2}-\mathrm{sig}\big(\mathrm{Re}(\lambda)\big)\lambda u_{2}&\text{on }\Gamma_{2} \\ \displaystyle\partial_{\nu} w_{2}=c_{2}\phi_{b}+ic_{2}\lambda b\,u_{2}-\mathrm{sig}\big(\mathrm{Re}(\lambda)\big)\lambda\partial_{\nu} u_{2}&\text{on }\Gamma_{2}, \end{array}\right. \end{equation} where \begin{equation*} w=\left\{ \begin{array}{lcl} w_{1}&\textrm{in}&\Omega_{1} \\ w_{2}&\textrm{in}&\Omega_{2}. \end{array}\right. \end{equation*} To prove the resolvent estimate, we need the following result, which is a consequence of the Carleman estimates introduced at the beginning of this section. \begin{lem} There exists $C>0$ such that for any solutions $u$ and $w$ of~\eqref{d7} and~\eqref{d5} the following estimate holds: \begin{equation}\label{d10}
\|u\|_{X}^{2}\leq C\mathrm{e}^{C/h}\Big(\|\Phi\|_{L^{2}(\Omega)}^{2}+\|\phi_{a}\|_{L^{2}(\Gamma_{2})}^{2}+\|\phi_{b}\|_{L^{2}(\Gamma_{2})}^{2}+\int_{\Gamma_{2}} a|\partial_{\nu} u_{2}|^{2}\,\mathrm{d} x+\int_{\Gamma_{2}} b|u_{2}|^{2}\,\mathrm{d} x\Big) \end{equation}
for $h=|\mathrm{Re}(\lambda)|^{-1}$ small enough and $|\mathrm{Im}(\lambda)|$ bounded by a constant. \end{lem} \begin{pr} In this proof we keep the same notation as in Section~\ref{d15}. We extend $w$ to the whole of $\Omega$ by setting $\tilde{w}=\mathbb{1}_{\Omega_{1}}w_{1}+\mathbb{1}_{\Omega_{2}}w_{2}$, where $\mathbb{1}_{\Omega_{1}}w_{1}$ (resp. $\mathbb{1}_{\Omega_{2}}w_{2}$) is the extension of $w_{1}$ (resp. $w_{2}$) by zero on $\overline{\Omega}_{2}$ (resp. $\overline{\Omega}_{1}$). Note that such an extension is meaningful: $\tilde{w}$ can now be regarded as an $H^{2}$ function on the whole of $\Omega$ thanks to the transmission conditions in~\eqref{d7} (see~\cite{D}).
Let $\varphi_{1}$ and $\varphi_{2}$ be two weight functions satisfying the conclusion of Section~\ref{d15}. Let $\chi_{1}=(\chi_{11},\chi_{12})$ and $\chi_{2}=(\chi_{21},\chi_{22})$ be two cut-off functions equal to one in $\displaystyle\left(\bigcup_{j=1}^{N_{k}}B(x_{kj},2\epsilon)\right)^{c}$ and supported in $\displaystyle\left(\bigcup_{j=1}^{N_{k}}B(x_{kj},\epsilon)\right)^{c}$, in order to eliminate the critical points of the phase functions $\varphi_{1}$ and $\varphi_{2}$ (see Figure~\ref{fig2}). Then for $k=1,2$ we obtain from the system~\eqref{d5} the following equations \begin{equation*} \left\{ \begin{array}{ll} \displaystyle\Delta(\chi_{k1}w_{1})=\Psi_{k1}&\text{in }\Omega_{1} \\ \displaystyle\Delta(\chi_{k2}w_{2})=\Psi_{k2}&\text{in }\Omega_{2} \\ \chi_{k1}w_{1}=0&\text{on }\Gamma_{1} \\ \displaystyle \chi_{k2}w_{2}=c_{2}\phi_{a}-ic_{2}\lambda a\,\partial_{\nu} u_{2}-\mathrm{sig}\big(\mathrm{Re}(\lambda)\big)\lambda u_{2}&\text{on }\Gamma_{2} \\ \displaystyle\partial_{\nu}(\chi_{k2}w_{2})=c_{2}\phi_{b}+ic_{2}\lambda b\,u_{2}-\mathrm{sig}\big(\mathrm{Re}(\lambda)\big)\lambda\partial_{\nu} u_{2}&\text{on }\Gamma_{2}, \end{array} \right. \end{equation*} where we have set \begin{equation}\label{d17} \Psi_{k}=\left\{\begin{array}{ll} \displaystyle\Psi_{k1}=[\Delta,\chi_{k1}]w_{1}+\frac{1}{c_{1}}\chi_{k1}\Phi_{1}-\left(\frac{1}{c_{1}h}+\frac{i}{c_{1}}\mathrm{sig}(\mathrm{Re}(\lambda))\mathrm{Im}(\lambda)\right)(\chi_{k1}w_{1}) \\ \\ \displaystyle\Psi_{k2}=[\Delta,\chi_{k2}]w_{2}+\frac{1}{c_{2}}\chi_{k2}\Phi_{2}-\left(\frac{1}{c_{2}h}+\frac{i}{c_{2}}\mathrm{sig}(\mathrm{Re}(\lambda))\mathrm{Im}(\lambda)\right)(\chi_{k2}w_{2}), \end{array}\right. \end{equation} and \begin{equation*} \Phi=\left\{ \begin{array}{lcl} \Phi_{1}&\textrm{in}&\Omega_{1} \\ \Phi_{2}&\textrm{in}&\Omega_{2}. \end{array}\right. 
\end{equation*} Applying Proposition~\ref{d16} to the functions $\chi_{k}\tilde{w}$ and $\Psi_{k}$ with $U=\Omega$, we obtain for $k=1,2$ that \begin{equation*} \begin{split}
h\|\mathrm{e}^{\varphi_{k}/h}\chi_{k}\tilde{w}\|_{L^{2}(U_{k})}^{2}+h^{3}\|\mathrm{e}^{\varphi_{k}/h}\nabla(\chi_{k}\tilde{w})\|_{L^{2}(U_{k})}^{2}\leq C\big(h^{4}\|\mathrm{e}^{\varphi_{k}/h}\Psi_{k}\|_{L^{2}(U_{k})}^{2} \\
+h\|\mathrm{e}^{\varphi_{k}/h} w_{2}\|_{L^{2}(\Gamma_{2})}^{2}+h^{3}\|\mathrm{e}^{\varphi_{k}/h}\partial_{\nu} w_{2}\|_{L^{2}(\Gamma_{2})}^{2}\big). \end{split} \end{equation*} Then the expression of $\Psi_{k1}$ and $\Psi_{k2}$ in~\eqref{d17} yields \begin{equation}\label{d18} \begin{split}
h\|\mathrm{e}^{\varphi_{k}/h}\chi_{k}w\|_{L^{2}(U_{k})}^{2}+h^{3}\|\mathrm{e}^{\varphi_{k}/h}\nabla(\chi_{k}w)\|_{L^{2}(U_{k})}^{2}\leq C\big(h^{4}\|\mathrm{e}^{\varphi_{k}/h}\Phi\|_{L^{2}(U_{k})}^{2} \\
+h^{4}\|\mathrm{e}^{\varphi_{k}/h}[\Delta,\chi_{k}]w\|_{L^{2}(U_{k})}^{2}+h^{4}|\mathrm{Im}(\lambda)|^{2}\|\mathrm{e}^{\varphi_{k}/h}\chi_{k}w\|_{L^{2}(U_{k})}^{2} \\
+h^{3}\|\mathrm{e}^{\varphi_{k}/h}\chi_{k2}w_{2}\|_{L^{2}(U_{k})}^{2}+h\|\mathrm{e}^{\varphi_{k}/h} w_{2}\|_{L^{2}(\Gamma_{2})}^{2}+h^{3}\|\mathrm{e}^{\varphi_{k}/h}\partial_{\nu} w_{2}\|_{L^{2}(\Gamma_{2})}^{2}\big). \end{split} \end{equation} We add the last two estimates for $k=1,2$ and, using the property of the phases $\varphi_{k}<\varphi_{\sigma(k)}$ in $\displaystyle\left(\bigcup_{j=1}^{N_{k}}B(x_{kj},2\epsilon)\right)$, we can absorb the term $[\Delta,\chi_{k}]w$ on the right-hand side of~\eqref{d18} into the left-hand side for $h>0$ small. More precisely, we obtain \begin{equation*} \begin{split}
h\int_{\Omega}\left(\mathrm{e}^{2\varphi_{1}/h}+\mathrm{e}^{2\varphi_{2}/h}\right)|w|^{2}\,\mathrm{d} x+h^{3}\int_{\Omega}\left(\mathrm{e}^{2\varphi_{1}/h}+\mathrm{e}^{2\varphi_{2}/h}\right)|\nabla w|^{2}\,\mathrm{d} x \\
\leq C\Bigg(h^{4}\int_{\Omega}\left(\mathrm{e}^{2\varphi_{1}/h}+\mathrm{e}^{2\varphi_{2}/h}\right)|\Phi|^{2}\,\mathrm{d} x+h\int_{\Gamma_{2}}\left(\mathrm{e}^{2\varphi_{1}/h}+\mathrm{e}^{2\varphi_{2}/h}\right)|w_{2}|^{2}\,\mathrm{d} x \\
+h^{3}\int_{\Gamma_{2}}\left(\mathrm{e}^{2\varphi_{1}/h}+\mathrm{e}^{2\varphi_{2}/h}\right)|\partial_{\nu} w_{2}|^{2}\,\mathrm{d} x\Bigg). \end{split} \end{equation*} Then by the boundary conditions in~\eqref{d5} we get \begin{equation*} \begin{split}
\int_{\Omega}|w|^{2}\,\mathrm{d} x+\int_{\Omega}|\nabla w|^{2}\,\mathrm{d} x\leq C\mathrm{e}^{C/h}\Bigg(\int_{\Omega}|\Phi|^{2}\,\mathrm{d} x+\int_{\Gamma_{2}}|u_{2}|^{2}\,\mathrm{d} x+\int_{\Gamma_{2}}|b\,u_{2}|^{2}\,\mathrm{d} x \\
+\int_{\Gamma_{2}}|\partial_{\nu} u_{2}|^{2}\,\mathrm{d} x+\int_{\Gamma_{2}}|a\,\partial_{\nu} u_{2}|^{2}\,\mathrm{d} x\Bigg). \end{split} \end{equation*} Together with assumption~\eqref{2}, this yields \begin{equation}\label{d12}
\|w\|_{L^{2}(\Omega)}^{2}\leq C\mathrm{e}^{C/h}\Big(\|\Phi\|_{L^{2}(\Omega)}^{2}+\|\phi_{a}\|_{L^{2}(\Gamma_{2})}^{2}+\|\phi_{b}\|_{L^{2}(\Gamma_{2})}^{2}+\int_{\Gamma_{2}} a|\partial_{\nu} u_{2}|^{2}\,\mathrm{d} x+\int_{\Gamma_{2}} b|u_{2}|^{2}\,\mathrm{d} x\Big). \end{equation} By Green's formula and the expression of $w$ in~\eqref{d7}, we observe that \begin{equation}\label{d19}
\|w\|_{L^{2}(\Omega)}^{2}=\alpha\|\Delta u\|_{L^{2}(\Omega)}^{2}+|\lambda|^{2}\|u\|_{L^{2}(\Omega)}^{2}+2\alpha|\mathrm{Re}(\lambda)|.\|\nabla u\|_{L^{2}(\Omega)}^{2}\geq C\|u\|_{X}^{2}. \end{equation} Combining~\eqref{d12} and~\eqref{d19} completes the proof. \end{pr}
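The recasting of $(-\lambda^{2}+\alpha^{2}\Delta^{2})u=\Phi$ from~\eqref{d4} into the two second-order systems~\eqref{d7} and~\eqref{d5} rests on the scalar factorization $\big(\alpha X+s\lambda\big)\big(\alpha X-s\lambda\big)=\alpha^{2}X^{2}-\lambda^{2}$ with $s=\mathrm{sig}(\mathrm{Re}(\lambda))$, so $s^{2}=1$. A quick numerical sanity check of this identity (our addition, for illustration only; $X$ stands in for the scalar symbol of $\Delta$):

```python
# Check: (a*X + s*lam)(a*X - s*lam) = a^2 X^2 - lam^2, s = sign(Re(lam)), s^2 = 1.
import random

for _ in range(1000):
    a = random.uniform(0.1, 10.0)                       # plays the role of alpha
    lam = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    s = 1.0 if lam.real >= 0 else -1.0                  # sig(Re(lambda))
    X = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    lhs = (a * X + s * lam) * (a * X - s * lam)
    rhs = a * a * X * X - lam * lam
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

Composing the two operators of~\eqref{d7} and~\eqref{d5} therefore reproduces the fourth-order operator of~\eqref{d4}, for either sign of $\mathrm{Re}(\lambda)$.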
From~\eqref{d9} and~\eqref{d10} we obtain \begin{equation*} \begin{split}
\|u\|_{X}^{2}\leq C\mathrm{e}^{C/h}\Big(\|\Phi\|_{H}^{2}+\|\phi_{a}\|_{L^{2}(\Gamma_{2})}^{2}+\|\phi_{b}\|_{L^{2}(\Gamma_{2})}^{2}+\|\Phi\|_{H}\|u\|_{H} \\
+|\mathrm{Re}(\lambda)\mathrm{Im}(\lambda)|^{2}\|u\|_{H}^{2}+\|f\|_{2}\|u\|_{2}\Big), \end{split} \end{equation*} then by the expression of $\phi_{a}$ and $\phi_{b}$ in~\eqref{d4} we have \begin{equation}\label{d11}
\|u\|_{X}^{2}\leq C\mathrm{e}^{C/h}\left(\|\Phi\|_{H}^{2}+\|f_{2}\|_{2}^{2}+\|\Phi\|_{H}\|u\|_{H}+|\mathrm{Im}(\lambda)|^{2}\|u\|_{H}^{2}+\|f\|_{2}\|u\|_{2}\right). \end{equation}
Then estimate~\eqref{d11} and Proposition~\ref{7} for $|\mathrm{Im}(\lambda)|\leq \frac{1}{\sqrt{2C}}\mathrm{e}^{-C/h}$ give us
$$\|u\|_{X}^{2}\leq C\mathrm{e}^{C/h}\left(\|\Phi\|_{H}^{2}+\|f_{2}\|_{X}^{2}\right).$$ Using the expression of $\Phi$ in~\eqref{d4} we obtain \begin{equation}\label{d13}
\|u\|_{X}\leq C\mathrm{e}^{C/h}\left(\|F\|_{X}+\|G\|_{H}\right). \end{equation} We thus obtain from the first equation of~\eqref{d7} and~\eqref{d13} that \begin{equation}\label{d14}
\|v\|_{H}\leq |\lambda|\,\|u\|_{H}+\|F\|_{H}\leq C\mathrm{e}^{C/h}\left(\|F\|_{X}+\|G\|_{H}\right), \end{equation} and hence~\eqref{d13} and~\eqref{d14} give
$$\|(u,v)\|_{\mathcal{H}}\leq C\mathrm{e}^{C|\mathrm{Re}(\lambda)|}\|(i\mathcal{A}+\lambda\mathrm{Id})(u,v)\|_{\mathcal{H}}.$$ Hence $(i\mathcal{A}+\lambda\mathrm{Id})$ is injective, and therefore bijective on $\mathcal{D}(\mathcal{A})$, and we have
$$\|(i\mathcal{A}+\lambda\mathrm{Id})^{-1}\|_{\mathscr{L}(\mathcal{H},\mathcal{H})}\leq C\mathrm{e}^{C|\mathrm{Re}(\lambda)|}$$
for $\lambda\in\{z\in\mathbb{C};\;|\mathrm{Im}(z)|<C_{1}\mathrm{e}^{-C_{2}|\mathrm{Re}(z)|},\,|z|>C_{3}\}$, and this completes the proof of Theorem~\ref{9}.
\nocite{*}
\addcontentsline{toc}{section}{References}
\end{document}
\begin{document}
\title{A stability theorem on cube tessellations} \markright{Stability of cube tessellations} \author{Peter Frankl\thanks{R\'enyi Institute, H-1364 Budapest, POB 127, Hungary. Email: {\tt peter.frankl@gmail.com}.} \and J\'anos Pach\thanks{R\'enyi Institute, Budapest, Hungary and EPFL, Lausanne, Switzerland. Email: {\tt pach@cims.nyu.edu}. Research partially supported by Swiss National Science Foundation Grants 200020-162884 and 200021-165977.}}
\date{}
\maketitle
\begin{abstract} It is shown that if a $d$-dimensional cube is decomposed into $n$ cubes, the side lengths of which belong to the interval $(1-\frac{1}{n^{1/d}+1},1]$, then $n$ is a perfect $d$-th power and all cubes are of the same size. This result is essentially tight. \end{abstract}
\section{Introduction}
It was proved by Dehn~\cite{De03} that, for $d\ge 2$, in any decomposition (tessellation, tiling) of the $d$-dimensional unit cube into finitely many smaller cubes, the side length of every participating cube must be rational. Fine and Niven~\cite{FN46} and, independently, Hadwiger raised the problem of characterizing, for a fixed $d\ge 2$, the set $N_d$ of all integers $n$ such that the $d$-dimensional unit cube can be decomposed into $n$ smaller cubes. Obviously, $m^d\in N_d$ for every positive integer $m$. Hadwiger observed that the intervals $(1,2^d)$ and $(2^d,2^d+2^{d-1})$ do not belong to $N_d$. On the other hand, for any $d$ there is a threshold $n_0(d)$ such that every integer $n\ge n_0(d)$ belongs to $N_d$; see \cite{P72}, \cite{M74}, \cite{E74}, \cite{CFG91}. It is conjectured that $n_0(d)\le c^d$ for a suitable constant $c$.
Amram Meir asked many years ago whether for any $d\ge 2, {\varepsilon}>0$, and for every sufficiently large $n\ge n_0(d,{\varepsilon})$, there exists a decomposition of a $d$-dimensional cube into $n$ smaller cubes such that the ratio between the side lengths of any two cubes is at least $1-{\varepsilon}$. This question was answered in the affirmative in~\cite{FMP17}. In particular, it was shown in~\cite{FMP17} that, for large $n$, a square can be decomposed into precisely $n$ smaller squares such that the ratio of their side lengths is at least $1-O\left(\frac{1}{\sqrt{n}}\right)$.
The aim of this note is to show that the above bound is asymptotically tight. More precisely, we have the following stability result, which holds in every dimension $d\ge 2$.
\noindent{\bf Theorem 1.} {\em Let $d, n\ge 2$ be positive integers. Suppose that a $d$-dimensional cube can be decomposed into precisely $n$ smaller cubes whose side lengths belong to the interval $(1-\frac{1}{n^{1/d}+1},1]$.
Then $n$ is a perfect $d$-th power, that is, $n=m^d$ for a positive integer $m$. Moreover, in this case the small cubes must be congruent.}
\section{Proof of Theorem 1}
Consider a decomposition of the cube $[0,z]^d$ into $n$ smaller cubes of side lengths $s_i, 1\le i\le n$, where $$1=s_1\ge s_2\ge \ldots \ge s_n>1-\frac{1}{n^{1/d}+1}.$$
By Dehn's theorem mentioned in the Introduction, we can assume that all $s_i$ and, hence, also $z$ are rational numbers. The total volume of the small cubes is $z^d$, so that we have \begin{equation}\label{eq0} z^d=\sum_{1\le i\le n}s_i^d\le ns_1^d =n. \end{equation} If equality holds here, then $s_1=\ldots=s_n=1$ and $n$ is a perfect $d$-th power, so we are done. Therefore, we can assume
\noindent{\bf Claim 2.} {\em $z < n^{1/d}.$}
Fix a line $\ell$ parallel to the $x$-axis (say) that does not share a segment with the boundary of any small cube participating in the decomposition. (This holds, for example, if the other $d-1$ coordinates of the points of $\ell$ are all irrational.) Let $C_1, C_2, \ldots, C_m$ denote the small cubes crossed by $\ell$, listed from left to right, and let $$0=x_0 < x_1 < x_2 <\ldots < x_m=z$$ be the $x$-coordinates of the points at which $\ell$ stabs the facets of these cubes. Using the assumption on the side lengths of the cubes, we have \begin{equation}\label{formula1} j\left(1-\frac{1}{n^{1/d}+1}\right)<x_j\le j, \end{equation} for every $j\; (1\le j\le m).$
\noindent{\bf Claim 3.} {\em $m=\lceil z\rceil.$}
\noindent{\bf Proof.} Since the side length of each cube $C_j$ is at most $1$, we clearly have $m\ge z$. It remains to show that $m<z+1$.
Suppose for contradiction that $m\ge z+1$. Applying (\ref{formula1}) with $j=m$, we obtain $$z+\frac{n^{1/d}-z}{n^{1/d}+1}=(z+1)\left(1-\frac{1}{n^{1/d}+1}\right) \le m\left(1-\frac{1}{n^{1/d}+1}\right)<x_m=z.$$ Comparing the left-hand side and the right-hand side, we get $n^{1/d}-z<0$, which contradicts Claim 2. $\Box$
Claims 2 and 3 immediately imply that every line $\ell$ which is parallel to one of the coordinate axes and does not share a segment with the boundary of any small cube, intersects the same number, $m= \lceil z\rceil<n^{1/d}+1$, of small cubes. In particular, (\ref{formula1}) can be extended to $$j-1<j\left(1-\frac{1}{n^{1/d}+1}\right)<x_j\le j,$$ for $1\le j\le m$. Thus, we can pick a small ${\varepsilon}>0$ such that \begin{equation}\label{formula2} j-1+{\varepsilon} \in (x_{j-1},x_j) \end{equation} holds for every $j\; (1\le j\le m)$.
Given a small irrational number ${\varepsilon}>0$, define a gridlike set $P_{{\varepsilon}}$ of $m^d$ points in ${\mathbb R}^d$, as follows. Let $$P_{{\varepsilon}}=\{{\varepsilon}, 1+{\varepsilon}, 2+{\varepsilon},\ldots, m-1+{\varepsilon}\}^d.$$ If ${\varepsilon}$ is small enough, then all of these points lie in the interior of the cube $[0,z]^d$.
\noindent{\bf Claim 4.} {\em There exists ${\varepsilon}>0$ such that every cube participating in the decomposition contains precisely one point in $P_{{\varepsilon}}$.}
\noindent{\bf Proof.} If ${\varepsilon}$ is irrational, no element of $P_{{\varepsilon}}$ lies on the boundary of any small cube. (This follows from the theorem of Dehn cited at the beginning of the Introduction.) The sidelength of every small cube is at most $1$, the minimum distance between two points in $P_{{\varepsilon}}$, so that no cube can cover two elements of $P_{{\varepsilon}}$.
We now finalize the choice of ${\varepsilon}>0$. For every cube $C$ in the decomposition, pick a point $p=p(C)$ in the interior of $C$, all of whose coordinates are irrational. Let $\ell_1, \ell_2, \ldots, \ell_d$ denote the lines through $p$ parallel to the coordinate axes. None of them shares a segment with the boundary of any cube.
The line $\ell_1$ intersects precisely $m$ cubes. Suppose that $C$ is the $j$-th among them, and its projection to the first coordinate axis is the interval $[x_{j-1},x_j]$. If we choose ${\varepsilon}>0$ small enough, then (\ref{formula2}) is satisfied for $\ell_1$. The same is true for the lines $\ell_2,\ldots,\ell_d$. Repeating the argument for every cube $C$, we can find an irrational ${\varepsilon}>0$, which simultaneously satisfies all of the above conditions for all $C$. Then, for every $C$, there exist integers $j_k=j_k(C)$\; $(1\le j_k\le m,\, 1\le k\le d)$ such that the orthogonal projection of $C$ to the $k$-th coordinate axis contains $j_k-1+{\varepsilon}$. Hence, we have $$(j_1-1+{\varepsilon}, j_2-1+{\varepsilon},\ldots, j_d-1+{\varepsilon})\in C,$$ showing that $C$ contains a point of $P_{{\varepsilon}}$. $\Box$
It follows from Claim 4 that $n$, the number of cubes participating in the decomposition, is equal to $|P_{{\varepsilon}}|=m^d$. Thus, $n=m^d$ is a perfect $d$-th power.
Notice that the set $P_{{\varepsilon}}$ can be covered by $m^{d-1}$ lines parallel to the first coordinate axis, and every small cube is stabbed by precisely one of these lines. The total sidelength of the cubes stabbed by each of these lines is equal to $z$. Therefore, the sum of the sidelengths of all small cubes satisfies $\sum_{i=1}^ns_i=\sum_{i=1}^{m^d}s_i=m^{d-1}z$, or, equivalently, $$\frac{\sum_{i=1}^{m^d}s_i}{m^d}=\frac{z}{m}.$$
On the other hand, it follows from (\ref{eq0}) for $n=m^d$ that $$\frac{\sum_{i=1}^{m^d}s_i^d}{m^d}=\left(\frac{z}{m}\right)^d.$$ For any positive numbers $s_i$, we have $$\left(\frac{\sum_{i=1}^{m^d}s_i}{m^d}\right)^d\le \frac{\sum_{i=1}^{m^d}s_i^d}{m^d},$$ with equality if and only if all $s_i$ are equal. In our setting equality holds, hence all small cubes must be of the same size.
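The power-mean inequality invoked in the last step, together with its equality case, is easy to confirm numerically; the following small Python sketch (our addition, for illustration) does so for side lengths drawn from the relevant interval:

```python
# Power-mean inequality: ((s_1+...+s_M)/M)^d <= (s_1^d+...+s_M^d)/M,
# with equality exactly when all s_i coincide.
import random

d, M = 3, 125
s = [random.uniform(0.9, 1.0) for _ in range(M)]
assert (sum(s) / M) ** d <= sum(x ** d for x in s) / M + 1e-12

t = [0.95] * M  # the equality case: all side lengths equal
assert abs((sum(t) / M) ** d - sum(x ** d for x in t) / M) < 1e-10
```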
This completes the proof of Theorem~1. \;\;\;\;\;\;\;\;\; $\Box$
Finally, we show that Theorem 1 is not far from being best possible. Consider the subdivision of the cube $[0,m]^d$ into $m^d$ unit cubes. Discard all of them that are not tangent to any of the coordinate hyperplanes. Fill out the resulting hole, $[1,m]^d$, by $m^d$ cubes of sidelength $1-\frac{1}{m}$. Altogether we have $$n=m^d-(m-1)^d+m^d<(m+1)^d=m^d+O(dm^{d-1})$$ cubes, where the inequality follows from the fact that the function $x^d$ is strictly convex. The sidelengths of these cubes belong to the interval $$[1-\frac{1}{m},1]=[1-\frac{1}{n^{1/d}(1+o(1))},1],$$ as $m$ tends to infinity. This interval is only slightly larger than the interval of ``permissible'' sidelengths in Theorem 1, but the number of small cubes participating in the tessellation is not a perfect $d$-th power.
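The counting in this construction can be verified directly. The Python sketch below (our addition) checks that $n=2m^d-(m-1)^d$ lies strictly between $m^d$ and $(m+1)^d$, so it is never a perfect $d$-th power, and that the $m^d$ cubes of side $1-\frac{1}{m}$ exactly fill the hole $[1,m]^d$:

```python
# Verify the construction at the end of the note for a range of d and m.
for d in (2, 3, 4, 5):
    for m in range(2, 60):
        n = m ** d - (m - 1) ** d + m ** d
        assert m ** d < n < (m + 1) ** d      # strict convexity of x^d
        filled = m ** d * (1 - 1 / m) ** d    # total volume of the filling cubes
        assert abs(filled - (m - 1) ** d) < 1e-6 * (m - 1) ** d
```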
\end{document}
\begin{document}
\title{Spectral Dependence of Coherent Backscattering of Light in a Narrow-Resonance Atomic System} \author{D.V. Kupriyanov, I.M. Sokolov, and N.V. Larionov} \affiliation{Department of Theoretical Physics, State Technical University, 195251, St.-Petersburg, Russia} \author{P. Kulatunga, C.I. Sukenik, S. Balik, and M.D. Havey} \affiliation{Department of \ Physics, Old Dominion University, Norfolk, VA 23529} \date{\today }
\begin{abstract} We report a combined theoretical and experimental study of the spectral and polarization dependence of near resonant radiation coherently backscattered from an ultracold gas of $^{85}$Rb atoms. Measurements in a $\pm 6$ $MHz$ range about the $5s^{2}S_{1/2}\rightarrow 5p^{2}P_{3/2}$ $F=3\rightarrow F^{\prime }=4$ hyperfine transition are compared with simulations based on a realistic model of the experimental atomic density distribution. \ In the simulations, the influence of heating of the atoms in the vapor, magnetization of the vapor, finite spectral bandwidth, and other nonresonant hyperfine transitions are considered. \ Good agreement is found between the simulations and measurements. \ \end{abstract}
\pacs{32.80.-t, 32.80.Pj, 34.80.Qb, 42.50.-p, 42.50.Gy, 42.50.Nn} \maketitle
\section{Introduction}
Coherent wave scattering effects in disordered media display an extraordinary variety of phenomena which are of both fundamental and practical concern. \ Of particular interest is that coherent wave scattering shows a broad universality which makes possible a qualitatively similar description for different types of wave excitation in a variety of media. These range, as an illustration, from enhancement of light scattering off the lunar regolith and the rings of Saturn on the one hand \cite{Mish}, to explanation of peculiarities in propagation of waves in the solid earth on the other \cite{POAN}. In addition, coherent wave scattering is a useful technique for diagnosing the average properties of scatterers in turbid media, and for assessing relatively thin surface layers in biological and mechanical materials \cite{Sheng,LagTig,POAN}. The propagation of light waves in natural photonic materials such as opal gives this semiprecious gemstone its highly valued beauty. Of fundamental scientific importance, coherent wave scattering was first recognized by Anderson \cite {Anderson} in the context of interference of electron wave scattering in conductors. As the scattering mean free path decreases and becomes shorter than a characteristic length on the order of the wavelength, wave diffusion slows as a result of wave interference. The limiting case where diffusion ceases is called strong localization, where the propagating wave becomes spatially localized inside the medium. For electromagnetic radiation \cite {Sheng,LagTig}, two recent reports of strong localization have been made, one in the optical regime \cite{Wiersma1}, and the other for microwave radiation \cite{Chabanov1}. A major long-term and fundamental goal of the research presented here, and of other researchers in the field, is to attain strong localization of light, but in an ultracold atomic vapor.
Quite recently, coherent multiple light scattering has been observed in ultracold atomic gases, which form a unique and flexible medium for fundamental studies and practical applications \cite {Labeyrie1,Labeyrie2,Kulatunga1,Bidel}. In all cases, the essential physical mechanisms are due to interferences in multiple wave scattering from the components of the medium; under certain not very stringent conditions the interferences survive configuration averaging, thus generating macroscopic observables. First observations and initial explanations for electromagnetic radiation were of the so-called coherent backscattering (CBS) cone in disordered media \cite{Ishimaru,Wolf, Albeda}. For radiation incident on a diffusive medium, the effect manifests itself as a spatially narrow ($\sim $ 1 mrad) cusp-shaped intensity enhancement in the nearly backwards direction \cite{Sheng,LagTig}. As electromagnetic waves are not scalar, the detailed shape and size of the enhancement depends on the polarization of the incident and the detected light. Nevertheless, for classical radiation scattering from a $^{1}S_{0}\rightarrow $ $^{1}P_{1}$ atomic transition, the largest possible interferometric enhancement is to increase the intensity by a factor of two.
Atomic gases, because they have exceptionally high-Q resonances, and because the light scattering properties may be readily modified by light polarization or intensity, atomic density, and applied external fields, represent an interesting and flexible medium in which to study the role of multiple scattering. However, to achieve the full potential of atomic scatterers as a practical medium for such studies, it is necessary to significantly cool the atoms, in order to suppress the dephasing effects of atomic motion. Coherent backscattering interference has, in fact, been measured in $^{85}$Rb \cite{Labeyrie1,Kulatunga1} and Sr \cite{Bidel}, and quite successfully modeled for resonant and near-resonant scattering as well \cite{CBSth1,KCBSth1,KCBSth2,Jonckheere}. Measurements have also been made of the magnetic field dependence of the coherent backscattering line shape \cite{Labeyrie3}, and of the time-dependence, for a particular geometry, of light scattered in the coherent scattering regime \cite{Kaiser}. However, there remains a significant range of physical parameters associated with the various processes which have not yet been fully explored. \ Among these are the influence of light intensity, nonzero ground state multipoles such as alignment or orientation, cooperative multi-atom scattering associated with higher atomic density, and more general geometries for time-dependent studies. \ In the present report we concentrate attention on another variable, that being the dependence of the coherent backscattering enhancement on detuning of the probe beam from exact resonance. \ It is clear that non-resonant excitation of the atomic sample results in a smaller optical depth (and associated larger transport mean free path) of the medium \cite{CBSth1,KCBSth1}. 
\ However, theoretical and experimental results presented here reveal that other more subtle effects, including far-off-resonance optical transitions, heating of the vapor by multiple light scattering and self-magnetization of the vapor during the CBS phase, can significantly influence the spectral variation of the CBS enhancement. \
In the following sections we first present an overview of the physical system, including how atomic samples are prepared and characterized and a brief review of measurements of coherent backscattering from an atomic vapor. \ This is followed by a summary of the approach to simulate coherent multiple scattering in an ultracold atomic gas. \ We then present our experimental and theoretical results, with focus on various mechanisms that can influence the spectral variation of the coherent backscattering enhancement factor.
\section{Overview of Physical System}
\subsection{Preparation and description of ultracold atomic sample}
Preparation of the ultracold atomic $^{85}$Rb sample used in the measurements described in this paper has been described in detail elsewhere \cite{Kulatunga1}, but for completeness will be briefly reviewed here. \ The samples are formed in a vapor-loaded magneto-optical trap (MOT) which is operated in a standard six-beam configuration. The trapping laser is detuned by -2.7$\gamma $ from resonance, where $\gamma \sim 5.9$ $MHz$ is the natural linewidth of the $F=3\rightarrow F^{\prime }=4$ hyperfine transition in $^{85}$Rb. Laser light for the MOT is derived from an injection locked diode laser (Sanyo DL7140-201) which is slaved to a master laser (Hitachi HG7851G). The master laser is locked to a crossover peak produced in a Doppler-free saturated absorption spectrometer. Laser locking is achieved by dithering the master laser current and demodulating the saturation absorption spectrum with a lock-in amplifier. In order to produce the required light for hyperfine repumping, the slave laser is microwave modulated to produce a sideband at the wavelength corresponding to the $ F=2\rightarrow F^{\prime }=3$ hyperfine transition. Light exiting the slave passes through an acousto-optic modulator (AOM), which is used as an optical switch, and is subsequently coupled into a single mode fiber optic patchcord. The combination of the AOM switching and fiber coupling results in an $\sim $ 65 dB attenuation of the trapping laser light. After exiting the fiber, the trapping light is split into three beams and sent to the MOT. Each beam contains $\sim $3.3 mW of light and is retroreflected, generating an average $\sim $19 mW in the center of the chamber.
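A quick back-of-envelope check of the quoted beam numbers (the factor of two for counting each retroreflected beam twice is our reading of the geometry, not stated explicitly in the text):

```python
# Consistency of the quoted MOT beam parameters.
attenuation_db = 65.0
attenuation_factor = 10 ** (attenuation_db / 10)       # power attenuation
assert abs(attenuation_factor - 3.16e6) / 3.16e6 < 0.01  # ~3 x 10^6

beams, power_mw = 3, 3.3
total_mw = beams * 2 * power_mw        # each beam retroreflected -> counted twice
assert abs(total_mw - 19.8) < 1e-9     # consistent with the quoted ~19 mW
```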
In order to ascertain the number and density of confined atoms, we employ absorption and fluorescence imaging. We find that the MOT is not completely spherical \cite{Kulatunga1,CBSth1}, but rather is somewhat `cigar-shaped' having $1/e^{2}$ Gaussian radii of 1.1 mm and 1.38 mm, where the radius is defined according to the density distribution $n(r)=n_{0}\exp (-r^{2}/2r_{0}^{2})$, $n_{0}$ being the peak density. This distribution results in an optical depth through the center of the MOT of about 6, where the optical depth $b$ is defined as resulting in an attenuation of the incident intensity by a factor $e^{-b}$. We determine the peak optical depth by direct measurement of the transmitted CBS light intensity through the central region of the MOT. In these measurements, probing of the density takes place when the MOT lasers are off, for they result in a significant excited state fraction, decreasing the measured optical depth. For a Gaussian atom distribution in the MOT, the optical depth is given by $b=$ $ \sqrt{2\pi }\sigma _{0}n_{0}r_{0}$, where $\sigma _{0}$ is the cross-section for light scattering\ \cite{Metcalf}. With the values given above and an average Gaussian radius $r_{0}=1.2$ $mm$, we calculate that the MOT contains approximately $4.3\times 10^{8}$ atoms and has a peak density $n_{0}=$ $ 1.6\times 10^{10}$ atoms-cm$^{-3}$. \ Note that these parameters ensure an optical depth large enough for coherent multiple scattering, while the density is not so large as to necessitate consideration of cooperative pair scattering in the vapor. \
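The quoted trap parameters can be cross-checked numerically: for the Gaussian density above, the total atom number is $N=n_{0}(2\pi)^{3/2}r_{0}^{3}$, and inverting $b=\sqrt{2\pi}\,\sigma_{0}n_{0}r_{0}$ gives the implied cross section (the inversion step is our inference, not stated in the text):

```python
# Consistency check of the quoted trap numbers.
import math

n0 = 1.6e10          # peak density, atoms/cm^3 (from the text)
r0 = 0.12            # average Gaussian radius in cm (1.2 mm)
N = n0 * (2 * math.pi) ** 1.5 * r0 ** 3
assert abs(N - 4.3e8) / 4.3e8 < 0.02   # matches the quoted ~4.3 x 10^8 atoms

b = 6.0              # quoted peak optical depth
sigma0 = b / (math.sqrt(2 * math.pi) * n0 * r0)
assert 1e-9 < sigma0 < 2e-9            # implied cross section, of order 10^-9 cm^2
```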
\begin{figure}
\caption{Schematic diagram of the coherent backscattering apparatus. Shown in the figure is an acousto optic modulator (AOM), magneto optic trap (MOT), linear polarizer (LP), quarter wave plate (QWP), and a charge coupled device (CCD) camera.}
\label{Figure1}
\end{figure}
The vapor-loaded MOT is formed in a custom-made stainless steel ultrahigh vacuum (UHV) chamber that is pumped by both an ion and titanium sublimation pump. The UHV chamber is fitted with a stainless-steel sidearm containing a valvable and heated Rb reservoir. \ Because we are observing light which is backscattered from our sample, it is critical that all other backscattered reflections are suppressed. A major source of unwanted back-scattered light is from the vacuum viewports on the MOT chamber. In order to minimize this light, we installed wedged optical quality windows having a ``V''-type antireflection (AR)\ coating at 780 nm on the probe-laser (described in the following section) entrance and exit ports. The AR coating results in less than 0.25\% reflectivity at 780 nm. Further, the window through which the probe laser beam enters is mounted on a UHV bellows, allowing us to better direct unwanted reflections from entering the charge-coupled device (CCD) detector. We also found it necessary to replace the standard window on the CCD camera with a wedged and near-infrared AR coated window in order to suppress interference fringe formation in the CCD images. \
\subsection{Measurement of atomic coherent backscattering}
We present in this section a brief overview of the coherent backscattering apparatus used to obtain the experimental results reported here. Further details may be found in Kulatunga, \textit{et al. }\cite{Kulatunga1}, where the experimental apparatus used in experiments to study coherent radiative transfer in an ultracold gas of $^{85}$Rb \cite{Kulatunga1} is described. A schematic diagram of the arrangement is shown in Figure 1. \ There the external light source used in the experiment is provided by an external cavity diode laser that is stabilized by saturated absorption to a crossover resonance associated with hyperfine components of the 5s $^{2}$S$_{1/2}$ $ \rightarrow $ 5p $^{2}$P$_{3/2}$ transition. With reference to Figure 2, which shows relevant hyperfine transitions in $^{85}$Rb, the laser may be tuned several hundred MHz from nearly any hyperfine resonance in $^{85}$Rb by a standard offset locking technique using an acousto-optic modulator. Detuning from resonance is defined by $\Delta =\omega _{L}-\omega _{0}$, where $\omega _{L}$ is the CBS laser frequency and $\omega _{0}$ is the $ F=3\rightarrow F^{\prime }=4$ resonance frequency. The laser bandwidth is a few hundred kHz, and the typical output power is $\sim $5 mW. The laser output is launched into a single mode polarization preserving fiber and then beam expanded and collimated by a beam expander to a $1/e^{2}$ diameter of about 8 mm. The polarization of the resulting beam is selected and then the beam passed through a nonpolarizing and wedged 50-50 beam splitter that passes approximately half of the laser power to the atomic sample. The backscattered radiation is directed by the same beam splitter to a field lens of 45 cm focal length, which condenses the light on the focal plane of a liquid nitrogen cooled CCD camera. The diffraction limited spatial resolution is about 100 $\mu rad$, while the polarization analyzing power is greater than 2000 at 780 nm. 
There are four polarization channels that are customarily studied in coherent backscattering. For linearly polarized input radiation, two of these correspond to measuring the backscattered light in two mutually orthogonal output channels. This is readily achieved by removing the quarter-wave plate, as shown in Figure 1, and rotating the linear polarization analyzer located before the field lens. For input radiation of definite helicity, generated by the linearly polarized input and the quarter-wave plate, the other two channels correspond to the helicity of the backscattered radiation. \ This is similarly measured by rotation of the linear polarizer just before the field lens. The instrumentation as described, with some modifications to suppress the intense trapping beam fluorescence, has been previously used to study coherent backscattering in ultracold atomic gases as well as in solid and liquid samples \cite{Kulatunga1}.
\begin{figure}
\caption{Hyperfine energy levels of relevant transitions in atomic $^{85}$Rb.}
\label{Figure2}
\end{figure}
Measurements of the backscattered light are made by exposing the ultracold atoms to the CBS laser light for an interval of 0.25 ms temporally centered in a 5 ms dark interval during which the MOT lasers are turned off. \ The MOT lasers are then turned back on for 20 ms, which is sufficiently long that the cold atom sample is reconstituted. \ This procedure is repeated for 300 s, which constitutes a single experimental run. \ A run of 300 s with the MOT absent allows measurement of the background, which is principally due to hot atom fluorescence excited by the CBS laser. \ Attenuation of the CBS laser by the MOT during the data taking phase, which reduces the amount of background during the backscattering run in comparison with the background phase, is accounted for by auxiliary measurements of the MOT attenuation of the CBS laser intensity. \ Finally, the saturation parameter for the CBS laser is less than $s = 0.08$ on resonance, which with the 0.25 ms measurement interval is sufficient to minimize mechanical action of the CBS laser beam on the atomic sample.
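As a bookkeeping check on this timing sequence (a sketch we add here, not part of the original apparatus description), the stated 5 ms dark interval and 20 ms MOT recovery period imply the following duty cycle for a 300 s run; the assumption that the two intervals simply alternate back-to-back is ours:

```python
# Duty-cycle bookkeeping for one experimental run, using only the timing
# values quoted in the text. The back-to-back alternation of dark and
# recovery intervals is our assumption.

CBS_EXPOSURE = 0.25e-3   # s, CBS probe on, centered in the dark interval
DARK_INTERVAL = 5e-3     # s, MOT lasers off
MOT_RECOVERY = 20e-3     # s, MOT lasers back on
RUN_DURATION = 300.0     # s, one experimental run

cycle = DARK_INTERVAL + MOT_RECOVERY       # one full cycle: 25 ms
n_cycles = int(RUN_DURATION / cycle)       # cycles per 300 s run
total_exposure = n_cycles * CBS_EXPOSURE   # accumulated CBS signal time

print(f"cycles per run:      {n_cycles}")
print(f"total CBS exposure:  {total_exposure:.2f} s")
print(f"probe duty cycle:    {CBS_EXPOSURE / cycle:.1%}")
```

Under this reading, a single 300 s run comprises 12000 cycles and accumulates about 3 s of actual CBS exposure.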
\subsection{Brief overview of the theoretical treatment}
A general theory of the coherent backscattering process in an ultracold atomic gas has been developed recently by several groups \cite{CBSth1,KCBSth1,KCBSth2}. The theoretical development essentially maintains the earlier conceptions of weak localization in the atomic scattering problem \cite{Shlyap}, and takes into account the influence of the optical depth and sample size on the character of the coherent backscattering cone. Although the basic ladder and interference terms describing the process have a similar structure in all the theoretical approaches, certain accompanying physical phenomena can become more important as more detailed experimental or theoretical spectral analysis is considered.
In our earlier theoretical approach \cite{CBSth1}, the general analytical development was realized by a Monte-Carlo simulation of coherent multiple scattering in an ultracold ($T<50$ $\mu K$) gas of $^{85}$Rb atoms confined in a magneto-optical trap. \ The simulation was closely matched to the experimental density distribution and temperature conditions as described in the previous paragraphs. \ The radiation field frequency was selected to be in the vicinity of the $F=3\rightarrow F^{\prime }=4$ hyperfine transition, and to have polarization states and a weak-field intensity corresponding to the experimental realization. The effects of sample size, and the spatial and polarization dependence of the coherent backscattering cone, were considered in detail. \ Some aspects of the spectral variation of the coherent backscattering enhancement factor were also considered, including the surprisingly strong influence of the far-off-resonance $F=3\rightarrow F^{\prime }=3$ and $F=3\rightarrow F^{\prime }=2$ hyperfine transitions. However, other physical effects can have a profound influence on the spectral variations, and we consider some of those in the present report. \ Among these, for currently achievable laboratory conditions, are (i) heating of the atomic gas by multiple scattering of the probing light source, (ii) optical pumping effects initiated by the probe or MOT lasers, and (iii) the influence of the finite bandwidth of the probe laser. Each of these effects was ignored in earlier quantitative studies, since its role is not crucial to calculations of the basic characteristics of the CBS process. Of particular interest is the influence of atomic motion and internal polarization variables on the spectral variations of the CBS enhancement.
\ We point out that to properly account for these factors, it is necessary to consider the influence of the mean field on both attenuation and dispersion of the multiply scattered light, and to include also the anisotropic Green's function for light propagating along a chain of scatterers. \ As is seen in the following section, inclusion of some such effects, in isolation or combination, may well be essential to better agreement between experimental and theoretical results.
Finally we emphasize that the simulations are made for conditions quite close to those in the experiment. These conditions include sample size, temperature, shape and density, and the characteristic intensity of the CBS laser beam. \ The conditions are such that cooperative scattering may be neglected, and such that saturation of the atomic transition is also negligible. In simulations of thermal effects, and of the influence of atomic magnetization on the coherent backscattering enhancement, more severe conditions are used in order to illustrate the possible range of influence of these effects.
\section{Experimental and Theoretical Results}
In this section we present experimental and theoretical results associated with backscattering of near-resonance radiation from ultracold atomic $^{85}$Rb. \ First we present experimental measurements of the spectral variation of the coherent backscattering enhancement, in a range of approximately $\pm 6$ MHz, as a function of detuning from the $F=3\rightarrow F^{\prime }=4$ hyperfine transition. \ These results are directly compared to theoretical simulations, made with inclusion of the influence of off-resonant hyperfine transitions and considering an ultracold sample not at absolute zero. \ Second, we present simulations of several effects which should generally be considered when modelling coherent backscattering from ultracold atomic vapors.
\subsection{Spectral variation of the CBS enhancement: experimental results}
\begin{figure}
\caption{Comparison of experimental and theoretical enhancement spectra in the \emph{l} $\|$ \emph{l} polarization channel. \ Theoretical spectra show modification by Doppler broadening, which is varied from $kv_{0}=0$ to $kv_{0}=0.25\protect\gamma $, in an ensemble of ${}^{85}$Rb atoms having a peak density of $ n_{0}=1.6\times 10^{10}\;cm^{-3}$ and a Gaussian radius $r_{0}=1\;mm $.}
\label{Figure3}
\end{figure}
\begin{figure}
\caption{Comparison of experimental and theoretical enhancement spectra in the \emph{l} $\perp $ \emph{l} polarization channel. \ Theoretical spectra show modification by Doppler broadening, which is varied from $kv_{0}=0$ to $kv_{0}=0.25\protect\gamma $, in an ensemble of ${}^{85}$Rb atoms having a peak density of $ n_{0}=1.6\times 10^{10}\;cm^{-3}$ and a Gaussian radius $r_{0}=1\;mm $.}
\label{Figure4}
\end{figure}
\begin{figure}
\caption{Comparison of experimental and theoretical enhancement spectra in the helicity preserving
($h||h)$ polarization channel. \ Theoretical spectra show modification by Doppler broadening, which is varied from $kv_{0}=0$ to $kv_{0}=0.25\protect\gamma $, in an ensemble of ${}^{85}$Rb atoms having a peak density of $ n_{0}=1.6\times 10^{10}\;cm^{-3}$ and a Gaussian radius $r_{0}=1\;mm$.}
\label{Figure5}
\end{figure}
Measurements of the variation of the coherent backscattering enhancement with detuning of the CBS laser, in a $\pm 6$ MHz range around the $F=3\rightarrow F^{\prime }=4$ hyperfine transition, are shown in Figures 3-6. \ The measurements have a typical uncertainty on the order of 2\%, arising from a combination of counting statistics in the spatial intensity measurements and an estimated uncertainty in the cone fitting procedure, as described previously \cite{Kulatunga1}. \ In addition, there is residual noise in the spatial distribution of backscattered light due to speckle in the $l$ $||$ $l$ and $h\perp h$ channels; slight variations in the speckle appearing in the background scattered light from run to run do not completely average to zero. \ This effect is responsible for the somewhat larger fluctuations in the extracted enhancement factors for these two polarization channels. \ There is a small systematic reduction of the peak enhancement due to the finite spatial resolution of the backscattering polarimeter, which smooths the nearly cusp-shaped CBS cone near its peak. \ This is accounted for by using a Lorentzian model of both the spatial response and the CBS cone, which allows an estimate of the amount of reduction. \ This is justified by the fact that the spatial variation of the simulated cones is to a good approximation described by a Lorentzian, as is the measured spatial response of the experimental apparatus. \ In our case, accounting for this effect amounts to a maximum decrease of $\sim 0.01$ in the peak enhancement for the narrowest cones, which appear in the $h$ $||$ $h$ data. This estimated correction is not made to the data in Figs. 3-6. On the scale of the figures, there is negligible uncertainty in the detuning measurements.
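The Lorentzian resolution correction just described can be sketched as follows. Since the convolution of two Lorentzians is again a Lorentzian whose width is the sum of the widths, the interference peak (enhancement minus one) is scaled by the ratio of the cone width to the summed width. The numbers below are illustrative placeholders, not the measured cone parameters:

```python
# Sketch (our illustration) of the Lorentzian model for peak-enhancement
# reduction by finite angular resolution: convolving a normalized
# Lorentzian instrument response of HWHM b with a Lorentzian cone of
# HWHM a scales the interference peak (enhancement - 1) by a / (a + b).

def lorentzian_peak_reduction(cone_hwhm, instrument_hwhm, enhancement):
    """Return the decrease in the measured peak enhancement."""
    scale = cone_hwhm / (cone_hwhm + instrument_hwhm)
    smoothed = 1.0 + (enhancement - 1.0) * scale
    return enhancement - smoothed

# Illustrative values: a hypothetical 0.6 mrad cone, the ~0.1 mrad
# resolution quoted for the apparatus, and a peak enhancement of 1.2.
print(lorentzian_peak_reduction(0.6e-3, 0.1e-3, 1.2))
```

The narrower the cone relative to the instrument resolution, the larger the reduction, consistent with the correction being largest for the $h$ $||$ $h$ data.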
\begin{figure}
\caption{Comparison of experimental and theoretical enhancement spectra in the helicity non-preserving $(h\perp h)$ polarization channel. \ Theoretical spectra show modification by Doppler broadening, which is varied from $kv_{0}=0$ to $kv_{0}=0.25\protect\gamma $, in an ensemble of ${}^{85}$Rb atoms having a peak density of $ n_{0}=1.6\times 10^{10}\;cm^{-3}$ and a Gaussian radius $r_{0}=1\;mm$.}
\label{Figure6}
\end{figure}
Also shown in the figures are simulations of the enhancement for several different values of the average Doppler shift of the atoms, measured in units of the natural spectral width $\gamma $. \ It appears, from comparison of the experimental data and the simulations, that the data are better described by inclusion of some nonzero average heating of the vapor, on the order of a few hundred kHz. \ Note that in a following section we model the influence of the finite bandwidth of the CBS laser; it is seen there that the decrease in enhancement on resonance, as seen in Figures 3-6, cannot be explained by that mechanism.
\subsection{Influence of dynamical heating}
The initial temperature of the atomic ensemble is $\sim 50\;\mu K$, which makes negligible any possible spectral manifestations caused by atomic motion. However, during the interaction time the probe light produces a certain mechanical action on the atoms. The radiation force associated with the probe light can accelerate the atoms and heat them to temperatures where the Doppler broadening and shift become comparable with the natural linewidth. It is important to recognize that the initial scattering event transfers momentum from the CBS laser to the atomic ensemble, but that subsequent scattering of the light deep within the sample is more nearly isotropic, resulting in some effective heating of the atoms during the CBS data taking phase. \ Although the dynamical process is complex, and is currently under study, we present here a short discussion of this process by comparing several scanning spectra averaged over an equilibrium Maxwell distribution of atom velocities, that is, for different temperatures and hence different Doppler widths. The drift velocity of the atomic cloud, which also exists but was ignored in our calculations, leads (to a good approximation) to a Doppler shift of all the spectral dependences into the blue wing with respect to the laser frequency $\omega _{L}$.
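To make the scale of this heating concrete, one can evaluate the Doppler parameter $kv_{0}/\gamma$ directly from the most probable speed $v_{0}=\sqrt{2k_{B}T/m}$. The sketch below is ours; the 780 nm wavelength and the $\gamma \sim 5.9$ MHz natural width are standard $^{85}$Rb D2-line values quoted elsewhere in the paper, and the mass and physical constants are standard values:

```python
import math

# Doppler parameter k*v0 / gamma for 85Rb (our estimate). Constants are
# standard values, not taken from the paper's tables.
KB = 1.380649e-23              # J/K, Boltzmann constant
M_RB85 = 85 * 1.66053907e-27   # kg, approximate 85Rb mass
WAVELENGTH = 780e-9            # m, D2 line
GAMMA = 2 * math.pi * 5.9e6    # rad/s, natural width (5.9 MHz)

def doppler_parameter(temperature):
    """Return k*v0 / gamma, with v0 = sqrt(2*kB*T/m) the most probable
    speed at the given temperature (kelvin)."""
    v0 = math.sqrt(2 * KB * temperature / M_RB85)
    k = 2 * math.pi / WAVELENGTH
    return k * v0 / GAMMA

# At the initial ~50 uK the Doppler parameter is only ~0.02 gamma,
# i.e. spectroscopically negligible; reaching kv0 ~ 0.1 gamma requires
# heating by roughly a factor of twenty, to ~1 mK.
print(f"kv0/gamma at 50 uK: {doppler_parameter(50e-6):.3f}")
```

This confirms the statement that the initial temperature is negligible, while the simulated $kv_{0}=0.1\gamma$-$0.25\gamma$ curves correspond to substantially heated ensembles.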
In addition to the experimental data, Figures 3-6 also show calculated spectral variations of the enhancement factor for all four polarization channels. In the graphs, the velocity is indicated as a fraction of the natural width of the atomic transition, $kv_{0}$ ($v_{0}= \sqrt{2k_{B}T/m}$ is the most probable velocity in the atomic ensemble). \ It is seen from these graphs that the shape of the spectra becomes significantly modified, even for an average Doppler width of $0.1\gamma $, where $\gamma $ is the natural atomic width. \ Similar results are also obtained in the linear polarization channels. \ The overall trend suggests that dynamical heating, or some other mechanism which modifies the spectrum in a similar way, will be required to describe the experimental results. \ Of particular interest is the helicity preserving channel (Figure 5). \ Unique to this case is an increase in the enhancement, even for no atomic motion, for moderate detunings away from exact resonance. As described in a previous report \cite{CBSth1}, this is due to the suppressed role of Raman-type single scattering. The asymmetry is due to the nonnegligible influence of far off resonance hyperfine transitions on the coherent backscattering enhancement. \ This effect is suggested in the overall spectral trend of the data, although more precise measurements are clearly in order. \ In the following section, we see that a much larger enhancement increase at greater detunings can also arise from dynamical (or static) magnetization of the vapor along the direction of propagation of the CBS laser beam.
\begin{figure}
\caption{Scanning spectra of CBS enhancement for (a) \textit{h} $||$ \textit{h} and (b) \textit{h} $\perp $ \textit{h} polarization channels. Spectra are shown for an average Doppler broadening varying from $kv_{0}=0$ to $kv_{0}=\protect\gamma $, in the ensemble of $ {}^{85}$Rb atoms with $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and $r_{0}=1\;mm$.}
\label{Figure7}
\end{figure}
We finish this section by presenting for completeness theoretical results over a wider spectral range for the detuning dependence of the CBS enhancement. \ These are shown in Figure 7 (a) for the helicity preserving channel and in Figure 7 (b) for the helicity nonpreserving channel. \ The results show that there is some persistence of the CBS enhancement, even for an average Doppler broadening on the order of the natural line width of the atomic transition, and that this enhancement increases at larger detunings, before falling off at the largest offsets, when single scattering becomes dominant. \
\subsection{Optical pumping effects}
Effects on coherent radiative transport in an atomic vapor will generally depend on the polarization of the incident light and on the nonzero ground state multipoles in the atomic vapor. \ In our experimental arrangement, the atoms are confined to a magneto-optical trap, in which there generally exist spatially varying hyperfine multipoles. \ However, the MOT lasers are typically turned off for several ms before taking data in a coherent backscattering experiment, and residual macroscopic atomic polarization should be largely dissipated on that time scale. \ Nevertheless, hyperfine multipoles can be generated by the CBS laser itself, dynamically polarizing the vapor. The main argument for why optical pumping should not manifest itself in the CBS process rests on the reasonable assumption that, under typical experimental conditions, the probe radiation is weak and characterized by a small saturation parameter. However, in an ensemble of cold atoms the ground-state relaxation mechanisms, which are mainly collisional, play a reduced role and can even be negligible. Then, after each cycle of interaction with the polarized CBS light, the atomic ensemble can accumulate a certain degree of polarization, which may be either of an orientation or alignment type. Of particular interest to us here is the case when the incident radiation has definite helicity, which can magnetize the vapor along the CBS propagation direction. \ It is quite difficult to estimate precisely the actual dynamical spatial distribution, within the atomic cloud, of the polarization generated during the whole interaction cycle. Therefore in this section we only qualitatively illustrate how optical pumping effects can change a basic characteristic of the CBS process such as the enhancement factor.
\begin{figure}
\caption{Diagram explaining the CBS phenomenon for double scattering in an ensemble of oriented ${}^{85}$Rb atoms. There is only one transition amplitude in the helicity preserving channel, which leads to maximal enhancement of backward scattered light. The direct and reciprocal transitions and photon paths are shown by solid and dashed arrows respectively.}
\label{Figure8}
\end{figure}
Consider probing the atomic sample with positive-helicity circularly polarized radiation. Let us further assume that, due to optical pumping, the atoms become oriented only along the propagation direction of the light beam. In steady state, following a sufficiently long pumping time, if there is no relaxation in the ground state, the atoms should be concentrated in the Zeeman sublevel $F=3,M=3$; see Figures 2 and 8. Of course, this is an idealized case which can never be precisely attained in reality, but such a model situation is convenient for illustrative purposes. The spectral variations of the enhancement factor for such an oriented ensemble are shown in Figure 9 for the case of monochromatic probe radiation and for a Gaussian-type cloud with peak density $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and radius $ r_{0}=1\;mm$.
The spectral variation of the enhancement in the helicity preserving channel shows quite unusual behavior, in that there is no reduction of the CBS enhancement in the spectral wings. On the contrary, the enhancement factor approaches its maximal possible value of two. The limiting factor of two is normally associated with Rayleigh-type scattering from classical objects, but here we deal with Rayleigh-type scattering under approximately attainable, though not typical, quantum conditions. This result may be explained by the simple but fundamental property that in such a coherent atomic ensemble there is no single-atom Raman-type scattering in the backward direction, which could otherwise also be a source of backscattered light in the helicity-preserving channel. Moreover, the partial contribution of only double scattering on oriented atoms in an optically thin sample causes the enhancement factor to take the maximum possible numerical value of two. This can be understood by turning to Figure 8, where it is shown that there is only one channel, or one product of transition matrix elements, contributing to the scattering amplitude of the double Rayleigh scattering in the helicity preserving channel. These are ideal conditions for observing maximal enhancement in the CBS process. In higher orders there are several partial contributions, and not all of them can interfere. \ This, as usual, leads to a substantial reduction of the interference contribution to the total intensity of scattered light. Due to this reduction the enhancement factor decreases considerably in the spectral domain near resonance scattering, as shown in Figure 9. Thus in the wings of the helicity preserving curve in the graph of Figure 9 a unique situation is revealed, in which \textit{in an optically thin medium under special conditions the enhancement factor can increase to its maximal value}.
\begin{figure}
\caption{Scanning spectra of enhancement for circularly polarized probe light in an ensemble of ${}^{85}$Rb atoms with 100\% orientation in the direction of light propagation. The spectra were calculated for a Gaussian type atomic cloud with $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and $r_{0}=1\;mm$.}
\label{Figure9}
\end{figure}
\begin{figure}
\caption{The output spectral response of the CBS light when the input circularly polarized laser radiation, modeled by a Lorentzian spectrum (\ref{t1}) with $\protect\omega _{L}=\protect\omega _{0}+1.5\protect\gamma $ and $\protect\gamma _{L}=\protect\gamma /6$, is tuned in the blue wing of the $F=3\rightarrow F^{\prime }=4$ optical transition of $ {}^{85}$Rb. The first graph (a) shows the distortion of the input Lorentzian profile (dotted curve) for the total ladder and interference contribution; it is normalized to the total output intensity of the CBS light. The second graph (b) shows the distortion for the interference term only; it is normalized according to the corresponding enhancement factor. Both dependences relate to the same helicity preserving channel.}
\label{Figure10}
\end{figure}
The multiple scattering in the helicity non-preserving channel shows more ordinary behavior. The disappearance of CBS in the wings is caused by the dominating contribution of single scattering events as the sample becomes optically thin. We point out that for this polarization channel there is also a certain increase of the maximal value of the enhancement compared with a non-oriented atomic ensemble. Here, as in the linear polarization channels, we see that the optical pumping phenomenon leads to quantitative but not qualitative changes in observation of the CBS process. However, the combined results of the numerical simulations presented in this and the previous section suggest that the experimental results may be essentially modified by the combined influence of thermal and optical pumping effects.
\subsection{The finite bandwidth of the probe light spectrum}
In an experiment, the CBS probe laser ideally operates in a single-mode regime, with a spectral bandwidth much less than the natural relaxation rate of the atoms. In reality, however, the difference is not necessarily so great that the spectral distribution of the laser radiation can be completely ignored. In our experiments, which spectrally scan a sample of rubidium atoms with a resonance-line natural decay rate $\gamma \sim 5.9\;MHz$, the laser radiation normally has a bandwidth of less than $1\;MHz$. For higher orders of the multiple scattering process, the scanned spectral profile of the sample is formed as a successive overlap of individual profiles per scattering event, so the effective output shape has a much narrower spectral variance than $\gamma $. Thus the bandwidth of the laser mode can become comparable with the spectral inhomogeneity in the sample spectrum associated with partial contributions of the higher scattering orders.
This can be quantitatively discussed with the following model of quasi-monochromatic single-mode laser radiation. To define the basic parameters we approximate the assumed homogeneously broadened spectrum of the CBS laser by a Lorentzian profile \begin{equation} I(\omega )=I\,\frac{\gamma _{L}}{\left( \omega -\omega _{L}\right) ^{2}+\left( \gamma _{L}/2\right) ^{2}} \label{t1} \end{equation} where $\omega _{L}$ and $\gamma _{L}$ are the carrier frequency and the spectral bandwidth of the laser radiation respectively, and $I$ is the total intensity of the incident laser radiation. The spectrum obeys the normalization condition \begin{equation} I\;=\;\int_{-\infty }^{\infty }\frac{d\omega }{2\pi }\,I(\omega ) \label{t2} \end{equation} where in the quasi-monochromatic approximation there is no difference between $0$ and $-\infty $ in the lower limit of this integral. The basic idea here is that, in comparison with purely monochromatic radiation, the spectral response of the initially symmetric but broadened input profile (\ref{t1}) should be significantly distorted by the sample due to effects of multiple scattering.
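As a quick numerical sanity check of the normalization (\ref{t2}) (a sketch we add here, in units where $\gamma _{L}=1$), integrating the profile (\ref{t1}) over a wide frequency window recovers the total intensity $I$:

```python
import math

def laser_spectrum(omega, intensity, omega_L, gamma_L):
    # Lorentzian profile of Eq. (t1)
    return intensity * gamma_L / ((omega - omega_L) ** 2 + (gamma_L / 2) ** 2)

def integrated_intensity(intensity=1.0, omega_L=0.0, gamma_L=1.0,
                         half_window=5e3, n=400_000):
    # Midpoint rule for (1/2pi) * integral of I(omega) d omega over
    # [omega_L - W, omega_L + W]; tails beyond the window are ~1e-4.
    step = 2 * half_window / n
    total = 0.0
    for i in range(n):
        omega = omega_L - half_window + (i + 0.5) * step
        total += laser_spectrum(omega, intensity, omega_L, gamma_L)
    return total * step / (2 * math.pi)

print(integrated_intensity())   # close to 1.0
```

The analytic integral of $\gamma _{L}/[(\omega -\omega _{L})^{2}+(\gamma _{L}/2)^{2}]$ over all frequencies is $2\pi$, so the $1/2\pi$ prefactor in (\ref{t2}) returns exactly $I$.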
In Figure 10 we show, in the helicity preserving channel, the output spectral response in the backward direction when the input circularly polarized laser radiation, modeled by the spectrum of Eq. (\ref{t1}) with $ \omega _{L}=\omega _{0}+1.5\gamma $ and $\gamma _{L}=\gamma /6$, is tuned in the blue wing of the $F=3\rightarrow F^{\prime }=4$ optical transition of $ {}^{85}$Rb with resonant frequency $\omega _{0}$. The first graph, plotted in Figure 10 (a), shows the distortion of the input Lorentzian profile for the total ladder and interference contribution, the intensity being normalized to the total output intensity of the CBS light. In turn, the second graph, depicted in Figure 10 (b), shows the distortion of the interference term only. This term is normalized according to the corresponding enhancement factor $X_{EF}$ [to $(X_{EF}-1)/X_{EF}$]. Both dependences relate to the same helicity preserving channel. It is clearly seen that the output spectral profile becomes asymmetric because of the influence of resonance scattering near the atomic transition in higher orders of multiple scattering. It may be less obvious, but there is also a small but not negligible difference between the two spectral dependences, which explains why in our experiment the spectral probe of the sample with scanning carrier frequency $\omega _{L}$ near the resonance can be sensitive to the spectral bandwidth of the CBS laser.
\begin{figure}
\caption{Scanning spectra of the intensity (a) and of the enhancement factor (b) in the helicity non-preserving channel for the quasi-monochromatic laser radiation, with $\protect\gamma _{L}=\protect \gamma /4,\,\protect\gamma /6,\,0$. The spectra were calculated for a Gaussian type atomic cloud of ${}^{85}$Rb atoms with $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and $r_{0}=1\;mm$.}
\label{Figure11}
\end{figure}
\begin{figure}
\caption{Scanning spectra of the intensity (a) and of the enhancement factor (b) in the helicity preserving channel for quasi-monochromatic laser radiation with $\protect\gamma _{L}=\protect\gamma /4,\,\protect\gamma /6,\,0$. The spectra were calculated for a Gaussian type atomic cloud of ${}^{85}$Rb atoms with $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and $r_{0}=1\;mm$.}
\label{Figure12}
\end{figure}
The influence of this effect is illustrated, for the helicity preserving and non-preserving channels, in Figures 11 and 12. \ There it is shown that the spectra of the total intensity and of the enhancement factor, generated by scanning the frequency $\omega _{L}$, reveal different spectral behavior, particularly in the wings of the scanned profiles. These are calculated results for the helicity polarization channels, but similar behavior takes place for the linear polarization channels. At first sight this spectral divergence appears to be a rather weak effect, but we believe it should not be ignored in precise comparison of the experimental data with numerical simulations. In particular, it can be important in a realistic estimation of the background, since such spectral washing in the probe radiation response can be important in the interpolation procedure of the CBS cone to its wing. Indeed, the higher orders of multiple scattering contribute to the formation of the central portion of the CBS cone, but the role of second order scattering is more important in its wings. As we see, for large spectral detunings the correct estimation of the enhancement factor in higher orders of multiple scattering is rather sensitive to the spectral distribution of the probe radiation.
\section{Summary}
A combined theoretical and experimental study of spectral variations in the coherent backscattering enhancement factor, for a very narrow band resonance system, has been reported. \ Experimental data taken over a range of two atomic natural widths about direct atomic resonance suggest spectral variations in the peak value of the CBS enhancement. \ Simulations indicate that the combined influence of heating of the atomic ensemble, and optical pumping of the Zeeman sublevels in the $F=3$ ground level during the coherent backscattering data taking phase, can qualitatively account for the effects. \ The simulations of the CBS process examined the influence of atomic motion, in a thermal equilibrium model, on the spectral variation of the enhancement factor. \ A model case of magnetization of the vapor due to optical pumping was also considered. \ It was found that these two factors could explain variations in the CBS enhancement observed in the experiments. \ The simulations which considered the influence of atomic magnetization predicted a remarkable result: the classical CBS maximum enhancement of two can be closely approached for a strongly magnetized atomic sample. \ Finally, it was shown, by simulation of the influence of the spectral bandwidth of the CBS probe laser, that even a laser line width quite small in comparison with the natural width of the atomic transition can significantly influence the CBS enhancement in the wings of the atomic resonance line.
\begin{acknowledgments} We acknowledge informative discussions with Robin Kaiser.\ Financial support for this research was provided by the National Science Foundation (NSF-PHY-0099587, NSF-INT-0233292), by the North Atlantic Treaty Organization (PST-CLG-978468), by the Russian Foundation for Basic Research (01-02-17059), and by INTAS (INFO 00-479). \ D.V.K. would like to acknowledge financial support from the Delzell Foundation, Inc. \end{acknowledgments}
\end{document}
\begin{document}
\maketitle
\begin{abstract} In this paper we develop methods to extend the minimal hypersurface approach to positive scalar curvature problems to all dimensions. This includes a proof of the positive mass theorem in all dimensions without a spin assumption. It also includes statements about the structure of compact manifolds of positive scalar curvature extending the work of \cite{sy1} to all dimensions. The technical work in this paper is to construct minimal slicings and associated weight functions in the presence of small singular sets and to show that the singular sets do not become too large in the lower dimensional slices. It is shown that the singular set in any slice is a closed set with Hausdorff codimension at least three. In particular for arguments which involve slicing down to dimension $1$ or $2$ the method is successful. The arguments can be viewed as an extension of the minimal hypersurface regularity theory to this setting of minimal slicings. \end{abstract}
\setcounter{secnumdepth}{1}
\setcounter{section}{0}
\section{\bf Introduction}
The study of manifolds of positive scalar curvature has a long history in both differential geometry and general relativity. The theorems involved include the positive mass theorem, the topological classification of manifolds of positive scalar curvature, and the local geometric study of metrics of positive scalar curvature. There are two methods which have been successful in this study in general situations, the Dirac operator method and the minimal hypersurface method. Both of these methods have restrictions on their applicability: the Dirac operator method requires the topological assumption that the manifold be spin, and the minimal hypersurface method has been restricted to the case of manifolds with dimension at most $8$ because of the possibility of singularities which might occur in the hypersurfaces. The purpose of this paper is to extend the minimal hypersurface method to all dimensions.
The Dirac operator method was pioneered by A. Lichnerowicz \cite{lich} and M. Atiyah, I. Singer \cite{as} in the early 1960s. It was extended by N. Hitchin \cite{h} and then systematically developed by M. Gromov and H. B. Lawson in \cite{gl1}, \cite{gl2}, and \cite{gl3}. Surgery methods for manifolds of positive scalar curvature were developed in \cite{sy1} and \cite{gl2}. For simply connected manifolds $M^n$ with $n\geq 5$ Gromov and Lawson conjectured necessary and sufficient conditions for $M$ to have a metric of positive scalar curvature (related to the index of the Dirac operator in the spin case). The conjecture was solved in the affirmative by S. Stolz \cite{st}. The Dirac operator method was used by E. Witten \cite{w} to prove the positive mass theorem for spin manifolds (see also \cite{pt}).
The minimal hypersurface method originated in \cite{sy4} for the three dimensional case and was extended to higher dimensions in \cite{sy1}. The extension to the positive mass theorem was initiated in \cite{sy2} and in higher dimensions in \cite{sy5} and \cite{sc}. In this paper we extend the minimal hypersurface argument to all dimensions at least as regards the applications to the positive mass theorem and results which can be proven by slicing down to dimension two.
The basic objects of study in this paper are called {\it minimal $k$-slicings} and we now describe them. We start with a compact oriented Riemannian manifold $M$ which will be our top dimensional slice $\Sigma_n$. We choose an oriented volume minimizing hypersurface $\Sigma_{n-1}$. Since $\Sigma_{n-1}$ is stable, the second variation form $S_{n-1}(\varphi,\varphi)$ has first eigenvalue which is non-negative. We choose a positive first eigenfunction $u_{n-1}$ and we use it as a weight $\rho_{n-1}$ for the volume functional on $n-2$ cycles which are contained in $\Sigma_{n-1}$. We assume we have a $\Sigma_{n-2}\subset\Sigma_{n-1}$ which minimizes the weighted volume $V_{\rho_{n-1}}(\cdot)$. The second variation $S_{n-2}(\varphi,\varphi)$ for the weighted volume on $\Sigma_{n-2}$ then has non-negative first eigenvalue and we let $u_{n-2}$ be a positive first eigenfunction. We then define $\rho_{n-2}=u_{n-2}\rho_{n-1}$ and we continue this process. That is if we have $\Sigma_{j+1}\subset\Sigma_{j+2}\subset\ldots\subset \Sigma_n$ which have been constructed, we choose $\Sigma_j$ to be a minimizer of the weighted volume $V_{\rho_{j+1}}(\cdot)$. Such a nested family $\Sigma_k\subset\Sigma_{k+1}\subset\ldots\subset \Sigma_n$ is called a {\it minimal $k$-slicing}.
The basic geometric theorem about minimal $k$-slicings which is generalized in Section 2 is the statement that if $\Sigma_n$ has positive scalar curvature then for any minimal $k$-slicing we have that $\Sigma_k$ is Yamabe positive and so admits a metric of positive scalar curvature. In particular if $k=2$ then $\Sigma_2$ must be diffeomorphic to $S^2$ and there can be no minimal $1$-slicing.
If we start with $\Sigma_n$ with $n\geq 8$, there might be a closed singular set ${\cal S}_{n-1}$ of Hausdorff dimension at most $n-8$ in $\Sigma_{n-1}$. In this paper we develop methods to carry out the construction of minimal $k$-slicings allowing for the possibility that the $\Sigma_j$ may have nonempty singular sets ${\cal S}_j$. In order to do this it is necessary to extend the existence and regularity theory for minimal hypersurfaces to this setting. Doing this requires maintaining some integral control of the geometry of the $\Sigma_j$ in the ambient manifold $\Sigma_n$, and also constructing eigenfunctions $u_j$ which are bounded in appropriate weighted Sobolev spaces. This control is gotten by carefully exploiting the terms which are left over in the geometry of the second variation at each stage of the slicing. This is done by modifying the second variation form $S_j$ to a larger form $Q_j$. The form $Q_j$ is more coercive and can be diagonalized with respect to the weighted $L^2$ norm even in the presence of small singular sets. We can then construct the next slice using the first eigenfunction for the form $Q_j$ to modify the weight. This procedure only works if the singular sets ${\cal S}_j$ do not become too large. We prove that for a minimal $k$-slicing the Hausdorff dimension of the singular set ${\cal S}_k$ is at most $k-3$. The regularity theorem is proven by establishing appropriate compactness theorems for minimal $k$-slicings and showing that at a singular point there is a homogeneous minimal $k$-slicing gotten by rescaling and using appropriate monotonicity theorems (volume monotonicity and monotonicity of an appropriate frequency function). A homogeneous minimal $k$-slicing is one in ${\mathbb R}^n$ for which all of the $\Sigma_j$ are cones and all of the $u_j$ are homogeneous of some degree. 
It is then possible to show that if we had a $\Sigma_{k+1}$ with singular set of codimension at least $3$, but $\Sigma_k$ had a singular set of Hausdorff dimension larger than $k-3$, then there would exist a nontrivial homogeneous $2$-slicing with $\Sigma_2$ having an isolated singularity at the origin. We show that no such homogeneous slicings exist to conclude that if ${\cal S}_{k+1}$ has codimension at least $3$ in $\Sigma_{k+1}$, then ${\cal S}_k$ has codimension at least $3$ in $\Sigma_k$. In particular if $k=2$ then $\Sigma_2$ is regular.
We now state the main theorems of the paper beginning with the positive mass theorem. A manifold $M^n$ is called asymptotically flat if there is a compact set $K\subset M$ such that $M\setminus K$ is diffeomorphic to the exterior of a ball in ${\mathbb R}^n$ and there are coordinates near infinity $x^1,\ldots, x^n$ so that the metric components $g_{ij}$ satisfy
\[ g_{ij}=\delta_{ij}+O(|x|^{-p}),\ |x||\partial g_{ij}|+|x|^2|\partial^2g_{ij}|=O(|x|^{-p}) \] for some $p>\frac{n-2}{2}$. We also require the scalar curvature $R$ to satisfy
\[ |R|=O(|x|^{-q}) \] for some $q>n$. Under these assumptions the ADM mass is well defined by the formula (see \cite{sc} for the $n$ dimensional case) \[ m=\frac{1}{4(n-1)\omega_{n-1}}\lim_{\sigma\to\infty}\int_{S_\sigma}\sum_{i,j}(g_{ij,i}-g_{ii,j})\nu_j\ d\xi(\sigma) \] where $S_\sigma$ is the euclidean sphere in the $x$ coordinates, $\omega_{n-1}=Vol(S^{n-1}(1))$, and the unit normal $\nu$ and the area element $d\xi(\sigma)$ are taken with respect to the euclidean metric. The positive mass theorem is as follows. \begin{thm} Assume that $M$ is an asymptotically flat manifold with $R\geq 0$. We then have that the ADM mass is nonnegative. Furthermore, if the mass is zero, then $M$ is isometric to $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^n$. \end{thm}
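As an illustrative sanity check (not part of the paper's argument), one can verify symbolically that the spatial Schwarzschild metric $g_{ij}=(1+\frac{m}{2|x|})^4\delta_{ij}$ on ${\mathbb R}^3\setminus\{0\}$, the model for the equality case when $n=3$, is scalar flat and hence satisfies the hypothesis $R\geq 0$. The sympy sketch below computes the scalar curvature from the metric directly and evaluates the exact expression at a generic point; the sample point and the value of $m$ are arbitrary choices made for the check.

```python
import sympy as sp

# Spatial Schwarzschild metric on R^3 \ {0}: g_ij = (1 + m/(2r))^4 delta_ij.
# We verify that its scalar curvature vanishes identically by evaluating the
# exact symbolic curvature expression at a generic point.  This is an
# illustrative sanity check, not part of the paper's argument.
x, y, z, m = sp.symbols('x y z m', positive=True)
coords = [x, y, z]
r = sp.sqrt(x**2 + y**2 + z**2)
u = 1 + m / (2 * r)          # harmonic conformal factor
g = sp.diag(u**4, u**4, u**4)
ginv = g.inv()
n = 3

def Gamma(a, b, c):
    """Christoffel symbol Gamma^a_{bc} of the metric g."""
    return sum(ginv[a, d] * (sp.diff(g[d, b], coords[c])
                             + sp.diff(g[d, c], coords[b])
                             - sp.diff(g[b, c], coords[d]))
               for d in range(n)) / 2

def Ricci(b, c):
    """Ricci tensor component R_{bc}."""
    expr = 0
    for a in range(n):
        expr += sp.diff(Gamma(a, b, c), coords[a]) - sp.diff(Gamma(a, b, a), coords[c])
        for d in range(n):
            expr += Gamma(a, a, d) * Gamma(d, b, c) - Gamma(a, c, d) * Gamma(d, b, a)
    return expr

R = sum(ginv[b, c] * Ricci(b, c) for b in range(n) for c in range(n))
# R is identically zero; numerically it evaluates to (roundoff of) 0.
pt = {x: sp.Rational(3, 10), y: sp.Rational(1, 2), z: sp.Rational(7, 10), m: 2}
assert abs(float(sp.N(R.subs(pt)))) < 1e-8
```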
The positive mass theorem is proven in Section 5, using results of \cite{sy3} to simplify the asymptotic behavior together with an observation of J. Lohkamp which allows us to compactify the manifold keeping the scalar curvature positive. The result which is needed for compact manifolds follows. \begin{thm} If $M_1$ is any closed manifold of dimension $n$, then $M_1\#T^n$ does not have a metric of positive scalar curvature. \end{thm} Both of these theorems were previously known if either $n\leq 8$, or, for any $n$, under the assumption that the manifold is spin. Actually for $n=8$ there may be isolated singularities, but in this dimension a result of N. Smale \cite{sm} shows that there is a dense set of ambient metrics for which the singularities do not occur. Using this result the eight dimensional case can also be done without dealing with singularities. In this paper we remove the dimensional and spin assumptions.
Finally we prove the following more precise theorem about compact manifolds with positive scalar curvature. \begin{thm} Assume that $M$ is a compact oriented $n$-manifold with a metric of positive scalar curvature. If $\alpha_1,\ldots,\alpha_{n-2}$ are classes in $H^1(M,\ifmmode{\Bbb Z}\else{$\Bbb Z$}\fi)$ with the property that the class $\sigma_2$ given by $\sigma_2=\alpha_{n-2}\cap\alpha_{n-3}\cap\ldots\cap\alpha_1\cap[M]\in H_2(M,\ifmmode{\Bbb Z}\else{$\Bbb Z$}\fi)$ is nonzero, then the class $\sigma_2$ can be represented by a sum of smooth two spheres. If $\alpha_{n-1}$ is any class in $H^1(M,\ifmmode{\Bbb Z}\else{$\Bbb Z$}\fi)$, then we must have $\alpha_{n-1}\cap\sigma_2=0$. In particular, if $M$ has classes $\alpha_1,\ldots,\alpha_{n-1}$ with $\alpha_{n-1}\cap\ldots\cap\alpha_1\cap[M]\neq 0$, then $M$ cannot carry a metric of positive scalar curvature. \end{thm}
We also point out the recent series of papers by J. Lohkamp \cite{lo1}, \cite{lo2}, \cite{lo3}, and \cite{lo4}. These papers also present an approach to the high dimensional positive mass theorem by extending the minimal hypersurface approach to all dimensions. Our approach seems quite different both conceptually and technically, and is more in the classical spirit of the calculus of variations. In any case we feel that, for such a fundamental result, it is of value to have multiple approaches.
\section{\bf Terminology and statements of main theorems}
We begin by introducing the notation involved in the construction of a {\it minimal $k$-slicing}; that is, a nested family of hypersurfaces beginning with a smooth manifold $\Sigma_n$ of dimension $n$ and going down to $\Sigma_k$ of dimension $k\leq n-1$. This consists of $\Sigma_k\subset\Sigma_{k+1} \subset\ldots \subset \Sigma_n$ where each $\Sigma_j$ will be constructed as a volume minimizer of a certain weighted volume in $\Sigma_{j+1}$.
Let $\Sigma_n$ be a properly embedded $n$-dimensional submanifold in an open set $\Omega$ contained in $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^N$. We will consider a minimal slicing of $\Sigma_n$ defined in an inductive manner. First, let $u_n=1$, and let $\Sigma_{n-1}$ be a volume minimizing hypersurface in $\Sigma_n$. Of course, it may happen that $\Sigma_{n-1}$ has a singular set ${\cal S}_{n-1}$ which is a closed subset of Hausdorff dimension at most $n-8$. On $\Sigma_{n-1}$ we will construct a positive definite quadratic form $Q_{n-1}$ on functions by suitably modifying the index form associated to the second variation of volume. We will then construct a positive function $u_{n-1}$ on $\Sigma_{n-1}$ which is a least eigenfunction of $Q_{n-1}$. We then define $\rho_{n-1}=u_{n-1}u_n$, and we let $\Sigma_{n-2}$ be a hypersurface in $\Sigma_{n-1}$ which is a minimizer of the $\rho_{n-1}$-weighted volume
$V_{\rho_{n-1}}(\Sigma)=\int_\Sigma\rho_{n-1}d\mu_{n-2}$ for an $n-2$ dimensional submanifold $\Sigma$ of
$\Sigma_{n-1}$, where $\mu_j$ denotes the $j$-dimensional Hausdorff measure. Inductively,
assume that we have constructed a slicing down to dimension $k+1$; that is, we have a nested
family of hypersurfaces, quadratic forms, and positive functions $(\Sigma_j,Q_j,u_j)$ for
$j=k+1,\ldots,n$ such that $\Sigma_j$ minimizes the $\rho_{j+1}$-weighted volume where
$\rho_{j+1}=u_{j+1}u_{j+2}\ldots u_n$, $Q_j$ is a positive definite quadratic form related to
the second variation of the $\rho_{j+1}$-weighted volume (see (\ref{eqn:qform}) below), and $u_j$ is a lowest eigenfunction of $Q_j$ with eigenvalue $\lambda_j\geq 0$. We will always take $\lambda_j$ to be the lowest Dirichlet eigenvalue (if $\partial\Sigma_j\neq \emptyset$) of $Q_j$ with respect to the weighted $L^2$ norm and we take $u_j$ to be a corresponding eigenfunction. We will show in Section 3 that such $\lambda_j$ and $u_j$ exist. We then inductively
construct $(\Sigma_k,Q_k,u_k)$ by letting $\Sigma_k$ be a minimizer of the $\rho_{k+1}$ weighted volume
where $\rho_{k+1}=u_{k+1}u_{k+2}\ldots u_n$, $Q_k$ a positive definite quadratic form described
below, and $u_k$ a positive eigenfunction of $Q_k$.
Note that if $\Sigma_j$ is a leaf in a minimal $k$-slicing, then choosing a unit normal vector $\nu_j$ to $\Sigma_j$ in $\Sigma_{j+1}$ gives us an orthonormal basis $\nu_k,\nu_{k+1},\ldots,\nu_{n-1}$ for the normal bundle of $\Sigma_k$ defined on the regular set ${\cal R}_k$. Thus the second fundamental form of $\Sigma_k$ in $\Sigma_n$ consists of the scalar forms $A_k^{\nu_j}=\langle A_k,\nu_j\rangle$ for $j=k,\ldots,n-1$ and we have
$|A_k|^2=\sum_{j=k}^{n-1}|A_k^{\nu_j}|^2$.
Now if we have a minimal $k$-slicing, we let $g_k$ denote the metric induced on $\Sigma_k$ from $\Sigma_n$, and we let $\hat{g}_k$ denote the metric $\hat{g}_k=g_k+\sum_{p=k}^{n-1}u_p^2dt_p^2$ on $\Sigma_k\times(S^1)^{n-k}$ where we use $S^1$ to denote a circle of length $1$, and we denote by $t_p$ a coordinate on the $p$th factor of $S^1$. We then note that the volume measure of the metric $\hat{g}_k$ is given by $\rho_k d\mu_k$ where we have suppressed the $t_p$ variables since we will consider only objects which do not depend on them; for example, the $\rho_k$-weighted volume of $\Sigma_k$ is the volume of the $n$-dimensional manifold $\Sigma_k\times T^{n-k}$. We will need to introduce another metric $\tilde{g}_k$ on $\Sigma_k\times(S^1)^{n-k-1}$. This is defined by $\tilde{g}_k= g_k+\sum_{p=k+1}^{n-1}u_p^2\ dt_p^2$. Note that $\tilde{g}_k$ is the metric induced on $\Sigma_k\times(S^1)^{n-k-1}$ by $\hat{g}_{k+1}$. We also let $\tilde{A}_k$ denote the second fundamental form of $\Sigma_k\times(S^1)^{n-k-1}$ in $(\Sigma_{k+1}\times(S^1)^{n-k-1}, \hat{g}_{k+1})$. The following lemma computes this second fundamental form. \begin{lem} \label{lem:2ff} We have $\tilde{A}_k=A_k^{\nu_k}-\sum_{p=k+1}^{n-1}u_p\nu_k(u_p)dt_p^2$, and the square length with respect to $\tilde{g}_k$ is given by
$|\tilde{A}_k|^2=|A_k^{\nu_k}|^2+\sum_{p=k+1}^{n-1}(\nu_k(log\ u_p))^2$. \end{lem} \begin{pf} If we consider a hypersurface $\Sigma$ in a Riemannian manifold with unit normal $\nu$, then we can consider the parallel hypersurfaces parametrized on $\Sigma$ by $F_\varepsilon(x)=\exp(\varepsilon\nu(x))$ for small $\varepsilon$ and $x\in\Sigma$. We then have a family of induced metrics $g_\varepsilon$ from $F_\varepsilon$ on $\Sigma$, and the second fundamental form is given by $A=-\frac{1}{2}\dot{g}$ where $\dot{g}$ denotes the $\varepsilon$ derivative of $g_\varepsilon$ at $\varepsilon=0$.
If we let $\exp$ denote the exponential map of $\Sigma_k$ in $\Sigma_{k+1}$, then since $\Sigma_{k+1}$ is totally geodesic in $\Sigma_{k+1}\times T^{n-k-1}$, we have \[ F_\varepsilon(x,t)=(\exp(\varepsilon\nu_k(x)),t) \] for $(x,t)\in \Sigma_k\times T^{n-k-1}$, and the induced family of metrics is given by \[ \tilde{g}_\varepsilon=(g_k)_\varepsilon+\sum_{p=k+1}^{n-1}(u_p(\exp(\varepsilon\nu_k)))^2\ dt_p^2. \] Thus we have \[ \dot{\tilde{g}}=-2A_k^{\nu_k}+2\sum_{p=k+1}^{n-1}u_p\nu_k(u_p)\ dt_p^2 \] since $A_k^{\nu_k}$ is the second fundamental form of $\Sigma_k$ in $\Sigma_{k+1}$. It follows that
$\tilde{A}_k=A_k^{\nu_k}-\sum_{p=k+1}^{n-1}u_p\nu_k(u_p)dt_p^2$, and taking the square norm with respect to the metric $\tilde{g}_k$ then gives the desired formula for $|\tilde{A}_k|^2$. \end{pf}
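Both this lemma and Lemma \ref{lem:rcalc} below rest on elementary properties of the warped metrics $g+u^2\,dt^2$. As an illustrative sanity check (not part of the paper's argument), the scalar curvature identity $\tilde{R}=R-2u^{-1}\Delta u$, which is used in the proof of Lemma \ref{lem:rcalc}, can be verified symbolically over a flat two dimensional base with an arbitrary warping function:

```python
import sympy as sp

# Warped product g~ = g + u^2 dt^2 over a flat 2-d base (so R = 0):
# the identity used in Lemma rcalc asserts R~ = R - 2 u^{-1} Delta u.
# This is an illustrative sanity check, not part of the paper's argument.
x, y, t = sp.symbols('x y t')
coords = [x, y, t]
u = sp.Function('u')(x, y)   # arbitrary positive warping function
g = sp.diag(1, 1, u**2)
ginv = g.inv()
n = 3

def Gamma(a, b, c):
    """Christoffel symbol Gamma^a_{bc} of the metric g."""
    return sum(ginv[a, d] * (sp.diff(g[d, b], coords[c])
                             + sp.diff(g[d, c], coords[b])
                             - sp.diff(g[b, c], coords[d]))
               for d in range(n)) / 2

def Ricci(b, c):
    """Ricci tensor component R_{bc}."""
    expr = 0
    for a in range(n):
        expr += sp.diff(Gamma(a, b, c), coords[a]) - sp.diff(Gamma(a, b, a), coords[c])
        for d in range(n):
            expr += Gamma(a, a, d) * Gamma(d, b, c) - Gamma(a, c, d) * Gamma(d, b, a)
    return expr

R = sum(ginv[b, c] * Ricci(b, c) for b in range(n) for c in range(n))
lap_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)
# R~ + 2 u^{-1} Delta u should vanish identically (multiply through by u):
assert sp.simplify(R * u + 2 * lap_u) == 0
```

For instance, taking $u=e^x$ the warped metric is a hyperbolic plane times a flat factor, so the check returns $\tilde{R}=-2$, in agreement with $-2u^{-1}\Delta u$.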
We now describe the choice we will make for $Q_j$. Let $S_j$ be the second variation form for the weighted volume $V_{\rho_{j+1}}$ at $\Sigma_j$, and define \begin{eqnarray} \label{eqn:qform} Q_j(\varphi,\varphi)&=&S_j(\varphi,\varphi)+\frac{3}{8}\int_{\Sigma_j}
(|\tilde{A}_j|^2\nonumber\\ &+&\frac{1}{3n} \sum_{p=j+1}^n(|\nabla_jlog\ u_p|^2+|\tilde{A}_p|^2))\varphi^2 \rho_{j+1}\ d\mu_j \end{eqnarray} where, for now, $\varphi$ is a function supported in the regular set ${\cal R}_j$ and we define $\tilde{A}_n=0,\ u_n=1$. We will discuss an extended domain for $Q_j$ in Section 3.
Up to this point our discussion is formal because we have not discussed issues related to
the singularities of the $\Sigma_j$ in a minimal slicing. We first define the {\it regular set}, ${\cal R}_j$
of $\Sigma_j$ to be the set of points $x$ for which there is a neighborhood of $x$ in $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^N$ in which
all of $\Sigma_j,\Sigma_{j+1},\ldots \Sigma_n$ are smooth embedded submanifolds of $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^N$. The {\it singular
set}, ${\cal S}_j$ is then defined to be the complement of ${\cal R}_j$ in $\Sigma_j$. Thus ${\cal S}_j$ is
a closed set by definition. The following result follows from the standard minimizing hypersurface
regularity theory. In this paper $dim(A)$ always refers to the Hausdorff dimension of a subset
$A\subset \ifmmode{\Bbb R}\else{$\Bbb R$}\fi^N$.
\begin{prop} \label{prop:topreg} For $j\leq n-1$ we have $dim({\cal S}_j\sim {\cal S}_{j+1})\leq j-7$, and in particular we have $dim({\cal S}_{n-1})\leq n-8$. \end{prop} In light of this result, we see that our main task in controlling singularities is to control the size of the set ${\cal S}_j\cap {\cal S}_{j+1}$. We will do this by extending the minimal hypersurface regularity theory to this slicing setting. In order to do this we need to establish the relevant compactness and tangent cone properties and this requires establishing suitable bounds on the slicings. To begin this process we make the following definition. \begin{defn} For a constant $\Lambda>0$, a {\bf $\Lambda$-bounded minimal $k$-slicing} is a minimal $k$-slicing satisfying the following bounds
$$ \lambda_j\leq \Lambda,\ Vol_{\rho_{j+1}}(\Sigma_j)\leq \Lambda,\ \int_{\Sigma_j}(1+|A_j|^2+
\sum_{p=j+1}^n|\nabla_jlog\ u_p|^2)u_j^2\rho_{j+1}\ d\mu_j\leq \Lambda $$ for $j=k,k+1,\ldots n-1$, where $\mu_j$ is Hausdorff measure, $\nabla_j$ is taken on (the regular set of) $\Sigma_j$, and $A_j$ is the second fundamental form of $\Sigma_j$ in $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^N$. \end{defn} The minimal $k$-slicings we will consider in this paper will always be $\Lambda$-bounded for some $\Lambda$. We have the following regularity theorem.
\begin{thm} \label{thm:reg} Given any $\Lambda$-bounded minimal $k$-slicing, we have for each $j=k,k+1,\ldots, n-1$ the bound on the singular set $dim({\cal S}_j)\leq j-3$. \end{thm}
We now formulate an existence theorem for minimal $k$-slicings in $\Sigma_n$. We consider the case in which $\Sigma_n$ is a closed oriented manifold. We assume that there is a closed oriented $k$-dimensional manifold $X^k$ and a smooth map $F:\Sigma_n\to X\times T^{n-k}$ of non-zero degree $s$. We let $\Omega$ denote a $k$-form on $X$ with $\int_X\Omega=1$, and we denote by $dt^{k+1},\ldots, dt^n$ the basic one forms on $T^{n-k}$ where we assume the periods are equal to one. We introduce the notation $\Theta=F^*\Omega$ and $\omega^p=F^*(dt^p)$ for $p=k+1,\ldots, n$.
We can now state our first existence theorem. A more refined existence theorem is given by Theorem \ref{thm:exst2} which we will not state here. \begin{thm} \label{thm:exst} For a manifold $M=\Sigma_n$ as described above, there is a $\Lambda$-bounded, partially regular, minimal $k$-slicing. Moreover, if $k\leq j\leq n-1$ and $\Sigma_j$ is regular, then $\int_{\Sigma_j}\Theta\wedge\omega^{k+1}\wedge\ldots\wedge\omega^j=s$. \end{thm}
The proofs of Theorems \ref{thm:reg} and \ref{thm:exst} will be given in Sections 3 and 4. In the remainder of this section we discuss the quadratic forms $Q_j$ in more detail and derive important geometric consequences for minimal $1$-slicings and $2$-slicings under the assumption that $\Sigma_n$ has positive scalar curvature. Consequences of these results, which are the main geometric theorems of the paper, will be given in Section 5.
Recall that in general if $\Sigma$ is a stable two-sided (trivial normal bundle) minimal hypersurface in a Riemannian manifold $M$, then we may choose a globally defined unit normal vector $\nu$, and we may parametrize normal deformations by functions $\varphi\cdot\nu$. The second variation of volume then becomes the quadratic form \begin{equation} \label{eqn:secvar}
S(\varphi,\varphi)=\int_\Sigma[|\nabla\varphi|^2-\frac{1}{2}(R_M-R_\Sigma+|A|^2)\varphi^2]\ d\mu \end{equation} where $R_M$ and $R_\Sigma$ are the scalar curvature functions of $M$ and $\Sigma$ and $A$ denotes the second fundamental form of $\Sigma$ in $M$.
We have the following result which computes the scalar curvature $\tilde{R}_k$ of $\tilde{g}_k$. \begin{lem} \label{lem:rcalc} The scalar curvature of the metric $\tilde{g}_k$ is given by \[ \tilde{R}_k=R_k-2\sum_{p=k+1}^{n-1}u_p^{-1}\Delta_ku_p-2\sum_{k+1\leq p<q\leq n-1} \langle\nabla_k log\ u_p,\nabla_k log\ u_q\rangle \] where $\Delta_k$ and $\nabla_k$ denote the Laplace and gradient operators with respect to $g_k$. \end{lem} \begin{pf} The calculation is a finite induction using the formula \[ \tilde{R}=R-2u^{-1}\Delta u \] for the scalar curvature of the metric $\tilde{g}=g+u^2dt^2$.
For $j=k,\ldots,n-1$ let $\bar{g}_j=g_k+\sum_{p=j}^{n-1}u_p^2dt_p^2$. Note that $\bar{g}_k=\hat{g}_k$ and $\bar{g}_{k+1}=\tilde{g}_k$. We prove the formula \[ \bar{R}_j=R_k-2\sum_{p=j}^{n-1}u_p^{-1}\Delta_k u_p-2\sum_{j\leq p<q\leq n-1} \langle\nabla_k log\ u_p,\nabla_k log\ u_q\rangle \] by a finite reverse induction on $j$. First note that for $j=n-1$ the formula follows from the one above. Now assume the formula is correct for $\bar{g}_{j+1}$. We then apply the formula above to obtain \[ \bar{R}_j=\bar{R}_{j+1}-2u_j^{-1}\bar{\Delta}_j u_j. \] Since $u_j$ does not depend on the extra variables $t_p$, we have \[ u_j^{-1}\bar{\Delta}_j u_j=u_j^{-1}\rho_{j+1}^{-1}div_k(\rho_{j+1}\nabla_k u_j)= u_j^{-1}\Delta_k u_j +\sum_{p=j+1}^{n-1}\langle\nabla_k log\ u_p, \nabla_k log\ u_j\rangle \] where as above $\rho_{j+1}=u_{j+1}\cdots u_{n-1}$ (recall that $u_n=1$). The statement now follows from the inductive assumption. Since $\bar{g}_{k+1}=\tilde{g}_k$, we have proven the required statement. \end{pf} We now consider consequences of having a minimal $k$-slicing of a manifold of positive scalar curvature. \begin{thm} \label{thm:eval} Assume that the scalar curvature of $\Sigma_n$ is bounded below by a constant $\kappa$. If $\Sigma_k$ is a leaf in a minimal $k$-slicing, then we have the following scalar curvature formula and eigenvalue estimate
\[ \hat{R}_k= R_n+2\sum_{p=k}^{n-1}\lambda_p+\frac{1}{4}\sum_{p=k}^{n-1}(|\tilde{A}_p|^2
-\frac{1}{n}\sum_{q=p+1}^n(|\nabla_plog\ u_q|^2+|\tilde{A}_q|^2)) \]
\[ \int_{\Sigma_k}(\kappa+\frac{3}{4}\sum_{j=k+1}^n|\nabla_klog\ u_j|^2-R_k)\varphi^2\ d\mu_k\leq 4\int_{\Sigma_k}|\nabla_k\varphi|^2\ d\mu_k \] where $\varphi$ is any smooth function with compact support in ${\cal R}_k$. \end{thm} \begin{pf} First note that from (\ref{eqn:qform}) and (\ref{eqn:secvar}) we have \begin{eqnarray*}
Q_j(\varphi,\varphi)&=&\int_{\Sigma_j}[|\nabla_j\varphi|^2- \frac{1}{2}(\hat{R}_{j+1}-\tilde{R}_j)\varphi^2 \\
&-&\frac{1}{8}(|\tilde{A}_j|^2-\frac{1}{n}\sum_{p=j+1}^n(|\nabla_jlog\ u_p|^2+|\tilde{A}_p|^2))\varphi^2]\rho_{j+1}\ d\mu_j, \end{eqnarray*} and therefore $u_j$ satisfies the equation $L_ju_j=-\lambda_ju_j$ where \begin{equation}
\label{eqn:operator} L_j=\tilde{\Delta}_j+\frac{1}{2}(\hat{R}_{j+1}-\tilde{R}_j)+\frac{1}{8}(|\tilde{A}_j|^2-\frac{1}{n}\sum_{p=j+1}^n(|\nabla_jlog\ u_p|^2+|\tilde{A}_p|^2)). \end{equation}
We derive the scalar curvature formula by a finite downward induction beginning with $k=n-1$. In this case the eigenvalue estimates follow from the standard stability inequality (\ref{eqn:secvar}) since $\rho_n=u_n=1$ and $\tilde{R}_{n-1}=R_{n-1}$. We also have from Lemma \ref{lem:rcalc} that $\hat{R}_{n-1}=R_{n-1}-2u_{n-1}^{-1}\Delta_{n-1}u_{n-1}$. The equation satisfied by $u_{n-1}$ is
\[ \Delta_{n-1}u_{n-1}+\frac{1}{2}(R_n-R_{n-1})u_{n-1}+\frac{1}{8}|\tilde{A}_{n-1}|^2u_{n-1}=-\lambda_{n-1}u_{n-1} \] and so we have
$\hat{R}_{n-1}=R_n+2\lambda_{n-1}+\frac{1}{4}|\tilde{A}_{n-1}|^2$. This proves the result for $k=n-1$.
Now we assume the conclusions are true for integers $k$ and larger, and we will derive them for $k-1$. We first observe that $\hat{g}_{k-1}=\tilde{g}_{k-1}+u_{k-1}^2\ dt_{k-1}^2$ and so $\hat{R}_{k-1}=\tilde{R}_{k-1}-2u_{k-1}^{-1}\tilde{\Delta}_{k-1}u_{k-1}$. On the other hand from (\ref{eqn:operator}) applied with $j=k-1$ we see that $u_{k-1}$ satisfies the equation \begin{eqnarray*}
\tilde{\Delta}_{k-1}u_{k-1}&+&\frac{1}{2}(\hat{R}_k-\tilde{R}_{k-1})u_{k-1}+\frac{1}{8}(|\tilde{A}_{k-1}|^2
\\ &-&\frac{1}{n}\sum_{p=k}^n(|\nabla_{k-1}log\ u_p|^2+|\tilde{A}_p|^2))u_{k-1}=-\lambda_{k-1}u_{k-1}. \end{eqnarray*} Substituting this above we have \begin{eqnarray*}
\hat{R}_{k-1}&=&\tilde{R}_{k-1}+2[\lambda_{k-1}+\frac{1}{2}(\hat{R}_k-\tilde{R}_{k-1}) \\
&+&\frac{1}{8}(|\tilde{A}_{k-1}|^2-\frac{1}{n}\sum_{q=k}^n(|\nabla_{k-1}log\ u_q|^2+|\tilde{A}_q|^2))], \end{eqnarray*} so we have
\[ \hat{R}_{k-1}=2\lambda_{k-1}+\hat{R}_k+\frac{1}{4}(|\tilde{A}_{k-1}|^2-\frac{1}{n}\sum_{q=k}^n(|\nabla_{k-1}log\ u_q|^2+|\tilde{A}_q|^2)). \] Using the inductive hypothesis we get the desired formula
\[\hat{R}_{k-1}= R_n+2\sum_{p=k-1}^{n-1}\lambda_p+\frac{1}{4}\sum_{p=k-1}^{n-1}(|\tilde{A}_p|^2
-\frac{1}{n}\sum_{q=p+1}^n(|\nabla_p log\ u_q|^2+|\tilde{A}_q|^2)). \]
Now observe that \begin{eqnarray*}
\sum_{p=k}^{n-1}(n|\tilde{A}_p|^2&-&\sum_{q=p+1}^n(|\nabla_p log\ u_q|^2+|\tilde{A}_q|^2)) \\
&\geq& \sum_{p=k}^{n-1}(\sum_{r=k}^n|\tilde{A}_r|^2-\sum_{q=p+1}^n(|\nabla_p log\ u_q|^2+|\tilde{A}_q|^2)) \\
&\geq& \sum_{p=k}^{n-1}\sum_{q=p+1}^n(\sum_{r=k}^{p-1}(\nu_rlog\ (u_q))^2-|\nabla_plog\ u_q|^2)\\
&=&-\sum_{p=k}^{n-1}\sum_{q=p+1}^n|\nabla_{k-1}log\ u_q|^2\geq -n\sum_{q=k}^n|\nabla_{k-1}log\ u_q|^2. \end{eqnarray*} This formula implies that for each $k$ we have
\begin{equation}\label{eqn:scbound} \hat{R}_k\geq\kappa-1/4\sum_{j=k}^n|\nabla_{k-1}log\ u_j|^2 \end{equation} and so the following eigenvalue estimate follows from (\ref{eqn:secvar})
\[ \int_{\Sigma_k}(\kappa-\frac{1}{4}\sum_{j=k+1}^n|\nabla_k log\ u_j|^2-\tilde{R}_k)\varphi^2\rho_{k+1}\ d\mu_k
\leq 2\int_{\Sigma_k}|\nabla_k\varphi|^2\rho_{k+1}\ d\mu_k \] The remainder of the proof derives the eigenvalue estimate from this one. Since $\varphi$ is arbitrary we may replace $\varphi$ by $\varphi(\rho_{k+1})^{-1/2}$ to obtain \begin{eqnarray*}
\int_{\Sigma_k}(\kappa-\frac{1}{4}\sum_{j=k+1}^n|\nabla_klog\ u_j|^2-\tilde{R}_k)\varphi^2\ d\mu_k&\leq&
2\int_{\Sigma_k}|\nabla_k(\varphi/\sqrt{\rho_{k+1}})|^2\rho_{k+1}\ d\mu_k \\
&\leq& 4\int_{\Sigma_k}|\nabla_k(\varphi/\sqrt{\rho_{k+1}})|^2\rho_{k+1}\ d\mu_k \end{eqnarray*} where we used the inequality $2\leq 4$. After expanding, the term on the right becomes
\[ 4\int_{\Sigma_k}(|\nabla_k\varphi|^2-\varphi\langle\nabla_k\varphi,\nabla_klog\ \rho_{k+1}\rangle
+1/4\varphi^2|\nabla_k log\ \rho_{k+1}|^2)\ d\mu_k. \] Rewriting the middle term in terms of $\nabla_k(\varphi^2)$ and integrating by parts the term becomes
\[ 4\int_{\Sigma_k}(|\nabla_k\varphi|^2+1/2\varphi^2[\sum_{p=k+1}^{n-1}(u_p^{-1}\Delta_k u_p
-|\nabla_k log\ u_p|^2)+1/2|\nabla_k log\ \rho_{k+1}|^2])\ d\mu_k. \] Now recall from Lemma \ref{lem:rcalc} that \[ \tilde{R}_k=R_k-2\sum_{p=k+1}^{n-1}u_p^{-1}\Delta_k u_p-2\sum_{k+1\leq p<q\leq n-1} \langle\nabla_k log\ u_p,\nabla_k log\ u_q\rangle. \] Thus we see that the terms involving $\Delta_k u_p$ cancel out, and note also that
\[ |\nabla_k log\ \rho_{k+1}|^2=\sum_{p=k+1}^{n-1}|\nabla_k\ log\ u_p|^2+2\sum_{k+1\leq p<q\leq n-1} \langle\nabla_k log\ u_p,\nabla_k log\ u_q\rangle \] so the second term also cancels. Thus we are left with \begin{eqnarray*}
\int_{\Sigma_k}(\kappa-\frac{1}{4}\sum_{j=k+1}^n|\nabla_klog\ u_j|^2&-&R_k)\varphi^2\ d\mu_k \\
&\leq& 4\int_{\Sigma_k}(|\nabla_k\varphi|^2-\frac{1}{4} \sum_{j=k+1}^n|\nabla_k\ log\ u_j|^2\varphi^2)\ d\mu_k. \end{eqnarray*} This gives the desired eigenvalue estimate. \end{pf} This theorem will be central to the regularity proof in the next section and it also has an important geometric consequence which is the main tool in the applications of Section 5. \begin{thm} \label{thm:12slicing} Assume that $R_n\geq \kappa>0$. If $\Sigma_k$ is regular, then $(\Sigma_k,g_k)$ is a Yamabe positive conformal manifold. If $\Sigma_2$ lies in a minimal $2$-slicing, $\Sigma_2$ is regular, and $\partial\Sigma_2=\emptyset$, then each connected component of $\Sigma_2$ is homeomorphic to the two sphere. If $\Sigma_1$ lies in a minimal $1$-slicing and $\Sigma_1$ is regular, then each component of $\Sigma_1$ is an arc of length at most $2\pi/\sqrt{\kappa}$. \end{thm} \begin{pf} Recall that the condition that $g_k$ be Yamabe positive is that the lowest eigenvalue of the conformal Laplacian $-\Delta_k+c(k)R_k$ be positive where $c(k)=\frac{k-2}{4(k-1)}$. In variational form this condition says
\[ -\int_{\Sigma_k}R_k\varphi^2\ d\mu_k<c(k)^{-1}\int_{\Sigma_k}|\nabla_k\varphi|^2\ d\mu_k \] for all nonzero functions $\varphi$ which vanish on $\partial\Sigma_k$ (if $\Sigma_k$ has a boundary). Since $4<c(k)^{-1}$ we see that this follows from the eigenvalue estimate of Theorem \ref{thm:eval}.
Now consider $\Sigma_2$, and apply the eigenvalue estimate of Theorem \ref{thm:eval} with $\varphi=1$ to a component $S$ of $\Sigma_2$ to see that $\int_S R_2\ d\mu_2>0$. It then follows from the Gauss-Bonnet Theorem that $S$ is homeomorphic to the two sphere (note that $S$ is orientable).
Finally, if $\gamma$ is a connected component of $\Sigma_1$ of length $l$, then the eigenvalue estimate of Theorem \ref{thm:eval} implies that the lowest Dirichlet eigenvalue of $\gamma$ is at least $\kappa/4$. Thus $\kappa/4\leq \pi^2/l^2$ and $l\leq 2\pi/\sqrt{\kappa}$ as claimed. \end{pf}
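The last step uses the elementary fact that the lowest Dirichlet eigenvalue of an interval of length $l$ is $\pi^2/l^2$. A quick finite-difference sketch (illustrative only; the values of $l$ and the grid size $N$ below are arbitrary choices) confirms this eigenvalue and the resulting length bound:

```python
import numpy as np

# Lowest Dirichlet eigenvalue of -d^2/dx^2 on an interval of length l is
# (pi/l)^2; combined with the estimate kappa/4 <= pi^2/l^2 this yields
# l <= 2*pi/sqrt(kappa).  We discretize with the standard second-difference
# matrix on N interior grid points.
l = 2.0                      # arc length (arbitrary example value)
N = 500                      # number of interior grid points
h = l / (N + 1)
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2
lam1 = np.linalg.eigvalsh(A)[0]
assert abs(lam1 - (np.pi / l)**2) < 1e-4
# The length bound obtained from kappa/4 <= lam1:
kappa = 4.0 * lam1           # largest kappa consistent with this l
assert l <= 2.0 * np.pi / np.sqrt(kappa) + 1e-9
```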
\section{\bf Compactness and regularity of minimal $k$-slicings} The main goal of this section is to prove Theorem \ref{thm:reg}. In order to do this we first must clarify some analytic issues concerning the domain of the quadratic form $Q_j$. We let $L^2(\Sigma_j)$ denote the space of square integrable functions on $\Sigma_j$ with respect to the measure $\rho_{j+1}\mu_j$. We let
\[ \|\varphi\|^2_{0,j}=\int_{\Sigma_j}\varphi^2 \rho_{j+1}\ d\mu_j \] denote the square norm on $L^2(\Sigma_j)$. We also introduce the function $P_j$ defined on $\Sigma_j$ by
\[ P_j=|A_j|^2+\sum_{p=j+1}^n|\nabla_jlog\ u_p|^2. \]
We will say that a minimal $k$-slicing in an open set $\Omega$ is {\it partially regular} if $dim({\cal S}_j)\leq j-3$ for $j=k,\ldots,n-1$. It follows from Proposition \ref{prop:topreg} that if the $(k+1)$-slicing associated to a minimal $k$-slicing is partially regular, then $dim({\cal S}_k)\leq max\{dim({\cal S}_{k+1}),k-7\}\leq k-2$.
For functions $\varphi$ which are Lipschitz (with respect to ambient distance) on $\Sigma_j$ with compact support in ${\cal R}_j\cap\bar{\Omega}$, we define a square norm by
\[ \|\varphi\|_{1,j}^2=\|\varphi\|^2_{0,j}+ \int_{\Sigma_j}(|\nabla_j\varphi|^2+P_j\varphi^2)\rho_{j+1}\ d\mu_j. \] We let ${\cal H}_j$ denote the Hilbert space which is the completion with respect to this norm. Note that functions in ${\cal H}_j$ are clearly locally in $W_{1,2}$ on ${\cal R}_j$. We will assume from now on that $u_j\in {\cal H}_j$ for $j\geq k$; in fact, we take this as part of the definition of a bounded minimal $k$-slicing. We define ${\cal H}_{j,0}$ to be the closed subspace of ${\cal H}_j$ consisting of the completion of the Lipschitz functions with compact support in ${\cal R}_j\cap\Omega$. In order to handle boundary effects we also assume that there is a larger domain $\Omega_1$ which contains $\bar{\Omega}$ as a compact subset and that the $k$-slicing is defined and boundaryless in $\Omega_1$. Note that this is automatic if $\partial\Sigma_j=\phi$. Thus ${\cal H}_{j,0}$ consists of those functions in ${\cal H}_j$ with $0$ boundary data on $\Sigma_j\cap\partial\Omega$. The existence of eigenfunctions $u_j$ in this space will be discussed in the next section. The following estimate of the $L^2(\Sigma_k)$ norm near the singular set will be used both in this section and the next. The result may be thought of as a non-concentration result for the weighted $L^2$ norm near the singular set in case the ${\cal H}_k$ norm is bounded. \begin{prop} \label{prop:l2con} Let ${\cal S}$ be a closed subset of $\Omega_1$ with zero $(k-1)$-dimensional Hausdorff measure. Let $\Sigma_k$ be a member of a bounded minimal $k$-slicing such that $\Sigma_{k+1}$ is partially regular in $\Omega_1$. For any $\eta>0$ there exists an open set $V\subset \Omega_1$ containing ${\cal S}\cap\bar{\Omega}$ such that whenever ${\cal S}_k\cap\bar{\Omega}\subset V$ we have the following estimate \[ \int_{\Sigma_k\cap V}\varphi^2\rho_{k+1}\ d\mu_k\leq
\eta\int_{\Sigma_k\cap\Omega}[|\nabla_k\varphi|^2+(1+P_k)\varphi^2]\rho_{k+1}\ d\mu_k \] for all $\varphi\in {\cal H}_{k,0}$. \end{prop} \begin{pf} Let $\varepsilon>0,\ \delta>0$ be given. We may choose a finite covering of the compact set ${\cal S}\cap \bar{\Omega}$ by balls $B_{r_\alpha}(x_\alpha)$ with $r_\alpha\leq \delta/5$ such that \[ \sum_\alpha r_\alpha^{k-1}\leq \varepsilon. \] We let $V$ denote the union of the balls, $V=\cup_\alpha B_{r_\alpha}(x_\alpha)$.
Assume that ${\cal S}_k\cap\bar{\Omega}\subset V$ and let $\varphi\in {\cal H}_{k,0}$. We may extend $\varphi$ to $\Sigma_k\cap\Omega_1$ by taking $\varphi=0$ in $\Omega_1\sim\Omega$.
By a standard first variation argument for submanifolds of $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^N$, for a nonnegative function $\varphi$ we have \begin{eqnarray*}
k\int_{\Sigma_k\cap B_r}\varphi^2\rho_{k+1}\ d\mu_k&\leq& r\int_{\Sigma_k\cap B_r}(|\nabla_k
(\varphi^2\rho_{k+1})|+|H_k|\varphi^2\rho_{k+1})\ d\mu_k \\ &+&r\int_{\Sigma_k\cap\partial B_r} \varphi^2\rho_{k+1}\ d\mu_{k-1}. \end{eqnarray*} Let $L_\alpha(r)=\int_{\Sigma_k\cap B_r(x_\alpha)}\varphi^2\rho_{k+1}\ d\mu_k$ and
\[M_\alpha(r)=\int_{\Sigma_k\cap B_r(x_\alpha)}(|\nabla_k
(\varphi^2\rho_{k+1})|+|H_k|\varphi^2\rho_{k+1})\ d\mu_k. \] The above inequality then implies \[ kL_\alpha(r)\leq rM_\alpha(r)+r\frac{d}{dr}(L_\alpha(r)). \] Now for any $\alpha$ and a small constant $\varepsilon_0$ we consider two cases: (1) There exists $r$ with $r_\alpha\leq r\leq \delta/5$ such that \[ \varepsilon_0L_\alpha(5r)\leq rM_\alpha(r); \] we denote such a choice of $r$ by $r_\alpha'$. (2) For all $r$ with $r_\alpha\leq r\leq \delta/5$ we have \[ rM_\alpha(r)< \varepsilon_0L_\alpha(5r). \] The collection of $\alpha$ for which the first case holds will be labeled $A_1$, and that for which the second holds $A_2$. We will handle the two cases separately.
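The differential inequality for $L_\alpha$ above can be checked from the first variation estimate together with the coarea formula; a sketch (with $B_r=B_r(x_\alpha)$):

```latex
% Since |\nabla_k r|\le 1 on \Sigma_k, the coarea formula gives, for a.e. r,
\frac{d}{dr}L_\alpha(r)
  =\int_{\Sigma_k\cap\partial B_r(x_\alpha)}\varphi^2\rho_{k+1}\,|\nabla_k r|^{-1}\ d\mu_{k-1}
  \ \geq\ \int_{\Sigma_k\cap\partial B_r(x_\alpha)}\varphi^2\rho_{k+1}\ d\mu_{k-1}.
% Inserting this boundary term into the first variation inequality
%   k\,L_\alpha(r)\le r\,M_\alpha(r)+r\int_{\Sigma_k\cap\partial B_r}\varphi^2\rho_{k+1}\,d\mu_{k-1}
% yields  k\,L_\alpha(r)\le r\,M_\alpha(r)+r\,\frac{d}{dr}L_\alpha(r).
```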
For the collection of balls with radius $r_\alpha'$ indexed by $A_1$ we may apply the five times covering lemma to extract a subset $A_1'\subseteq A_1$ for which the balls in $A_1'$ are disjoint and such that \[ V_1\equiv \cup_{\alpha\in A_1}B_{r_\alpha}(x_\alpha)\subseteq \cup_{\alpha\in A_1}B_{r_\alpha'}(x_\alpha)\subseteq \cup_{\alpha\in A_1'}B_{5r_\alpha'}(x_\alpha). \] From the inequality of case (1) above applied for $\alpha\in A_1'$ we have \[ L_\alpha(r_\alpha)\leq L_\alpha(5r_\alpha')\leq \varepsilon_0^{-1}r_\alpha'M_\alpha(r_\alpha')\leq \varepsilon_0^{-1}\delta M_\alpha(r_\alpha'). \] Summing over $\alpha\in A_1'$ and using disjointness of the balls $B_{r_\alpha'}(x_\alpha)$ we have \begin{equation} \label{eqn:case1} \int_{\Sigma_k\cap V_1}\varphi^2\rho_{k+1}\ d\mu_k\leq
\varepsilon_0^{-1}\delta\int_{\Sigma_k\cap\Omega}(|\nabla_k (\varphi^2\rho_{k+1})|+|H_k|\varphi^2\rho_{k+1})\ d\mu_k. \end{equation}
Now for $\alpha\in A_2$ we have \[ kL_\alpha(r)\leq \varepsilon_0L_\alpha(5r)+r\frac{d}{dr}(L_\alpha(r)) \] for $r_\alpha\leq r\leq \delta/5$. For $j=0,1,2,\ldots$ define $\sigma_j=5^jr_\alpha$ and let $p$ be the positive integer such that $\sigma_{p-1}<\delta/5\leq \sigma_p$. We define $\Lambda_j$ by $\Lambda_j=L_\alpha(\sigma_j)$ for $j=0,1,\ldots,p$. For $\sigma_j\leq r\leq \sigma_{j+1}$ we then have \[ kL_\alpha(r)\leq \varepsilon_0\Lambda_{j+2}\Lambda_j^{-1}L_\alpha(r)+r\frac{d}{dr}(L_\alpha(r)). \] Integrating we find \[ \Lambda_{j+1}\Lambda_j^{-1}\geq 5^{k-\varepsilon_0\Lambda_{j+2}\Lambda_j^{-1}}. \] Setting $R_j=\Lambda_{j+1}\Lambda_j^{-1}$ we have shown \[ R_j\geq 5^{k-\varepsilon_0R_jR_{j+1}}. \] Now if $R_j\leq 5^{k-1}$ then we would have $5^{k-1}\geq 5^{k-\varepsilon_0R_jR_{j+1}}$ which in turn implies $\varepsilon_05^{k-1}R_{j+1}\geq \varepsilon_0R_jR_{j+1}\geq 1$. Thus if we choose $\varepsilon_0=5^{-3k+3}$ we find $R_{j+1}\geq 5^{2(k-1)}$ and hence it follows that $R_jR_{j+1}\geq 5^{2(k-1)}$. Thus we have shown that for any $j=0,1,\ldots, p-1$ we either have $R_j\geq 5^{k-1}$ or $R_jR_{j+1}\geq 5^{2(k-1)}$. This implies that $\Lambda_p\Lambda_0^{-1}\geq 5^{(p-1)(k-1)}\geq 5^{1-k}(\delta/r_\alpha)^{k-1}$ and therefore we have $L_\alpha(r_\alpha)\leq c(r_\alpha/\delta)^{k-1}L_\alpha(\sigma_p)$ for each $\alpha\in A_2$. Summing this over these $\alpha$ and using the choice of the covering we have \[ \int_{\Sigma_k\cap V_2}\varphi^2\rho_{k+1}\ d\mu_k\leq c\varepsilon\delta^{1-k}\int_{\Sigma_k\cap\Omega}\varphi^2\rho_{k+1}\ d\mu_k, \] where $V_2\equiv\cup_{\alpha\in A_2}B_{r_\alpha}(x_\alpha)$, so that $V=V_1\cup V_2$. Combining this with (\ref{eqn:case1}) we finally obtain
\[ \int_{\Sigma_k\cap V}\varphi^2\rho_{k+1}\ d\mu_k\leq c\varepsilon\delta^{1-k}\int_{\Sigma_k\cap\Omega}\varphi^2\rho_{k+1}\ d\mu_k+c\delta\int_{\Sigma_k\cap\Omega}(|\nabla_k (\varphi^2\rho_{k+1})|+|H_k|\varphi^2\rho_{k+1})\ d\mu_k, \] where the now fixed constant $\varepsilon_0$ has been absorbed into $c$. We can estimate the second term on the right using
\[ |\nabla_k (\varphi^2\rho_{k+1})|+|H_k|\varphi^2\rho_{k+1}\leq (\varphi^2+|\nabla_k\varphi|^2)\rho_{k+1}
+\frac{1}{2}\varphi^2(2+|\nabla_k\log\ \rho_{k+1}|^2+|H_k|^2)\rho_{k+1}. \] This implies the bound
\[ \int_{\Sigma_k\cap V}\varphi^2\rho_{k+1}\ d\mu_k\leq c(\varepsilon\delta^{1-k}+\delta)\int_{\Sigma_k\cap\Omega}\varphi^2\rho_{k+1}\ d\mu_k+c\delta\int_{\Sigma_k\cap\Omega}[|\nabla_k \varphi|^2+P_k\varphi^2]\rho_{k+1}\ d\mu_k. \] The desired conclusion now follows by choosing $\delta$ so that $c\delta=\eta/2$ and then choosing $\varepsilon$ so that $c\varepsilon\delta^{1-k}=\eta/2$. This completes the proof. \end{pf}
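The integration step in the case (2) iteration above can be verified as follows: for $\sigma_j\leq r\leq\sigma_{j+1}$ one has $L_\alpha(5r)\leq\Lambda_{j+2}$ and $L_\alpha(r)\geq\Lambda_j$, so the case (2) bound closes up to a differential inequality in $L_\alpha$ alone:

```latex
kL_\alpha(r)\ \leq\ \varepsilon_0\Lambda_{j+2}\Lambda_j^{-1}L_\alpha(r)+r\frac{d}{dr}L_\alpha(r)
\quad\Longrightarrow\quad
\frac{d}{dr}\log L_\alpha(r)\ \geq\ \frac{k-\varepsilon_0\Lambda_{j+2}\Lambda_j^{-1}}{r}.
% Integrating from \sigma_j to \sigma_{j+1}=5\sigma_j:
\log\frac{\Lambda_{j+1}}{\Lambda_j}\ \geq\ (k-\varepsilon_0\Lambda_{j+2}\Lambda_j^{-1})\log 5,
\qquad\text{i.e.}\qquad
\Lambda_{j+1}\Lambda_j^{-1}\ \geq\ 5^{\,k-\varepsilon_0\Lambda_{j+2}\Lambda_j^{-1}}.
```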
The following coercivity bound will be useful both in this section and in the next. We assume here that we have a partially regular minimal $k$-slicing. \begin{prop} \label{prop:coercive} Assume that our $k$-slicing is bounded. There is a constant $c$ such that for $\varphi\in {\cal H}_{k,0}$ we have \begin{equation*}
c^{-1}\int_{\Sigma_k}[|\nabla_k\varphi|^2+(P_k+|\nabla_k\log u_k|^2)\varphi^2] \rho_{k+1}\ d\mu_k \leq Q_k(\varphi,\varphi)+\int_{\Sigma_k}\varphi^2\rho_{k+1}\ d\mu_k. \end{equation*} Moreover we have the bound
\[ c^{-1}\int_{\Sigma_k}(|\nabla_k(\varphi\sqrt{\rho_{k+1}})|^2+|A_k|^2\varphi^2\rho_{k+1})\ d\mu_k\leq Q_k(\varphi,\varphi)+\int_{\Sigma_k}\varphi^2\rho_{k+1}\ d\mu_k. \] \end{prop} \begin{pf} We can see from (\ref{eqn:qform}) that
\[ Q_k(\varphi,\varphi)\geq S_k(\varphi,\varphi)+\frac{1}{8n}\int_{\Sigma_k}(\sum_{p=k}^n|\tilde{A}_p|^2+\sum_{p=k+1}^n|\nabla_k\log u_p|^2)\varphi^2\rho_{k+1}d\mu_k. \] Using the stability of $\Sigma_k$ we have \begin{equation} \label{eqn:qbound}
Q_k(\varphi,\varphi)\geq \frac{1}{8n}\int_{\Sigma_k}(\sum_{p=k}^n|\tilde{A}_p|^2+\sum_{p=k+1}^n|\nabla_k\log u_p|^2)\varphi^2\rho_{k+1}d\mu_k. \end{equation} Finally we use Lemma \ref{lem:2ff} to conclude that (note that $\tilde{A}_n=0$)
\[ \sum_{p=k}^n|\tilde{A}_p|^2\geq \sum_{p=k}^{n-1}|A_p^{\nu_p}|^2\geq \sum_{p=k}^{n-1}|A_k^{\nu_p}|^2=|A_k|^2, \] and thus we have \[ Q_k(\varphi,\varphi)\geq \frac{1}{8n}\int_{\Sigma_k}P_k\varphi^2\rho_{k+1}\ d\mu_k. \]
Recall that $S_k(\varphi,\varphi)=\int_{\Sigma_k}(|\nabla_k\varphi|^2-q_k\varphi^2)\rho_{k+1}\ d\mu_k$ where
\[ q_k=\frac{1}{2}(|\tilde{A}_k|^2+\hat{R}_{k+1}-\tilde{R}_k) \] where $\hat{R}_{k+1}$ is given in Theorem \ref{thm:eval} and $\tilde{R}_k$ is given in Lemma \ref{lem:rcalc}. We will need an upper bound on $q_k$, so we first see from Theorem \ref{thm:eval} with $k$ replaced by $k+1$ that
\[ q_k\leq c+\frac{1}{2}\sum_{p=k}^{n-1}|\tilde{A}_p|^2-\frac{1}{2}\tilde{R}_k \] where the constant bounds the curvature of $\Sigma_n$ and the eigenvalues. Now from Lemma \ref{lem:rcalc} we can obtain the bound
\[ -\frac{1}{2}\tilde{R}_k\leq \frac{1}{2}|R_k|+\sum_{p=k+1}^{n-1}|\nabla_k\log u_p|^2+div_k({\cal X}_k) \]
where ${\cal X}_k=\sum_{p=k+1}^{n-1}\nabla_k\log u_p$. We observe that the Gauss equation implies that $|R_k|\leq c(1+|A_k|^2)$, and so we have
\[ q_k\leq c+c\sum_{p=k}^{n-1}|\tilde{A}_p|^2+\sum_{p=k+1}^{n-1}|\nabla_k\log u_p|^2+div_k({\cal X}_k). \]
Now observe that $Q_k\geq S_k$ and so we have
\[ \int_{\Sigma_k}(|\nabla_k\varphi|^2+\frac{1}{8n}P_k\varphi^2)\rho_{k+1}\ d\mu_k\leq 2Q_k(\varphi,\varphi) +\int_{\Sigma_k}q_k\varphi^2\rho_{k+1}\ d\mu_k. \] We want to bound the second term on the right by a constant times the first plus the square of the weighted $L^2$ norm of $\varphi$, so we use the bound for $q_k$ to obtain
\begin{eqnarray*} \int_{\Sigma_k}q_k\varphi^2\rho_{k+1}\ d\mu_k&\leq& c\int_{\Sigma_k}(1+\sum_{p=k}^{n-1}|\tilde{A}_p|^2+\sum_{p=k+1}^{n-1}|\nabla_k\log u_p|^2)\varphi^2\rho_{k+1}d\mu_k\\ &+&\int_{\Sigma_k}div_k({\cal X}_k)\varphi^2\rho_{k+1}d\mu_k.\\ \end{eqnarray*} Now since $\varphi$ has compact support we have \[ \int_{\Sigma_k}div_k({\cal X}_k)\varphi^2\rho_{k+1}\ d\mu_k=-\int_{\Sigma_k}\langle {\cal X}_k,\nabla(\varphi^2\rho_{k+1})\rangle\ d\mu_k. \] Easy estimates then imply the bound
\[ |\int_{\Sigma_k}div_k({\cal X}_k)\varphi^2\rho_{k+1}\ d\mu_k|\leq \frac{1}{2}
\int_{\Sigma_k}|\nabla_k\varphi|^2\rho_{k+1}\ d\mu_k+c\int_{\Sigma_k}(\sum_{p=k+1}^{n-1}|\nabla_k\log u_p|^2)\varphi^2\rho_{k+1}\ d\mu_k. \] We may now absorb the first term back to the left and use (\ref{eqn:qbound}) to obtain the bound
\[ \int_{\Sigma_k}(|\nabla_k\varphi|^2+P_k\varphi^2)\rho_{k+1}\ d\mu_k\leq cQ_k(\varphi,\varphi)+ \int_{\Sigma_k}\varphi^2\rho_{k+1}d\mu_k. \]
To bound the term involving $|\nabla_k\log u_k|^2$ we recall that on the regular set we have \[ \tilde{\Delta}_ku_k+q_ku_k=-\lambda_ku_k \] where $\lambda_k\geq 0$. This implies by direct calculation
\[ \tilde{\Delta}_k\log u_k=-q_k-\lambda_k-|\nabla_k\log u_k|^2. \] (Note that $\tilde{\nabla}_k=\nabla_k$ on functions which do not depend on the extra variables $t_p$.) Now if $\varphi$ has compact support in ${\cal R}_k$, we multiply by $\varphi^2$ and integrate by parts to obtain
\[ \int_{\Sigma_k}(|\nabla_k\log u_k|^2+q_k)\varphi^2\rho_{k+1}\ d\mu_k\leq 2\int_{\Sigma_k}\varphi\langle\nabla_k\varphi, \nabla_k\log u_k\rangle\rho_{k+1}\ d\mu_k. \] By the arithmetic-geometric mean inequality \begin{eqnarray*}
\int_{\Sigma_k}(|\nabla_k\log u_k|^2+q_k)\varphi^2\rho_{k+1}\ d\mu_k&\leq& \frac{1}{2}\int_{\Sigma_k}(|\nabla_k\log u_k|^2+q_k)\varphi^2\rho_{k+1}\ d\mu_k \\
&+&2\int_{\Sigma_k}|\nabla_k\varphi|^2\rho_{k+1}\ d\mu_k. \end{eqnarray*} This implies
\[ \frac{1}{2}\int_{\Sigma_k}|\nabla_k\log u_k|^2\varphi^2\rho_{k+1}\ d\mu_k\leq\frac{1}{2}Q_k(\varphi,\varphi)
+\frac{3}{2}\int_{\Sigma_k}|\nabla_k\varphi|^2\rho_{k+1}\ d\mu_k. \] The first inequality then follows from this and our previous estimate.
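The identity for $\tilde{\Delta}_k\log u_k$ used above is the standard logarithmic computation:

```latex
\tilde{\Delta}_k\log u_k
  =\frac{\tilde{\Delta}_k u_k}{u_k}-\Big|\frac{\nabla_k u_k}{u_k}\Big|^2
  =(-q_k-\lambda_k)-|\nabla_k\log u_k|^2,
% where the first equality is the chain rule for the (drift) Laplacian
% and the second uses the eigenvalue equation
%   \tilde{\Delta}_k u_k + q_k u_k = -\lambda_k u_k.
```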
The second conclusion follows since $|\nabla_k\log\rho_{k+1}|^2\leq cP_k$, so that the integrand on the left, $|\nabla_k(\varphi\sqrt{\rho_{k+1}})|^2+|A_k|^2\varphi^2\rho_{k+1}$,
is bounded pointwise by a constant times $(|\nabla_k\varphi|^2+P_k\varphi^2)\rho_{k+1}$. \end{pf}
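The pointwise bound behind this last step can be spelled out: writing $\nabla_k(\varphi\sqrt{\rho_{k+1}})=\sqrt{\rho_{k+1}}\,\nabla_k\varphi+\frac{1}{2}\varphi\sqrt{\rho_{k+1}}\,\nabla_k\log\rho_{k+1}$ and using $|a+b|^2\leq 2|a|^2+2|b|^2$,

```latex
|\nabla_k(\varphi\sqrt{\rho_{k+1}})|^2
  \ \leq\ 2|\nabla_k\varphi|^2\rho_{k+1}
         +\tfrac{1}{2}\varphi^2|\nabla_k\log\rho_{k+1}|^2\rho_{k+1}
  \ \leq\ c\,(|\nabla_k\varphi|^2+P_k\varphi^2)\rho_{k+1},
% the last step using |\nabla_k\log\rho_{k+1}|^2\le cP_k; together with the
% |A_k|^2\varphi^2\rho_{k+1} term this bounds the full integrand as claimed.
```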
Recall that an important analytic step in the minimal hypersurface regularity theory is the local reduction to the case in which the hypersurface is the boundary of a set. This makes comparisons particularly simple and reduces consideration to a multiplicity one setting. We will need an analogous reduction in our situation. Since the leaves of a $k$-slicing can be singular, we must consider the possibility that local topology comes into play and prohibits such a reduction to boundaries of sets. What saves us here is the fact that $k$-slicings come with a natural trivialization of the normal bundle (on the regular set). We have the following result. \begin{prop} \label{prop:boundary} Assume that $U$ is compactly contained in $\Omega$, and that $U\cap\Sigma_n$ is diffeomorphic to a ball. Assume that we have a minimal $k$-slicing in $\Omega$ such that the associated $(k+1)$-slicing is partially regular. Let $\hat{\Sigma}_k$ denote the closure of any connected component of $\Sigma_k\cap U\cap{\cal R}_{k+1}$. Then it follows that $\hat{\Sigma}_k$ divides the corresponding connected component (denoted $\hat{\Sigma}_{k+1}$) of $\Sigma_{k+1}$ into a union of two relatively open subsets, and choosing the one, denoted $U_{k+1}$, for which the unit normal of $\hat{\Sigma}_k$ points outward, we have $\hat{\Sigma}_k=\partial U_{k+1}$ as a point set boundary in $\hat{\Sigma}_{k+1}$, and as an oriented boundary in ${\cal R}_{k+1}$. \end{prop} \begin{pf} Since $\hat{\Sigma}_k\cap{\cal R}_{k+1}$ and $\hat{\Sigma}_{k+1}\cap{\cal R}_{k+1}$ are connected, it follows that the complement of $\hat{\Sigma}_k\cap{\cal R}_{k+1}$ in $\hat{\Sigma}_{k+1}\cap{\cal R}_{k+1}$ has either $1$ or $2$ connected components. These consist of the connected components of points lying near $\hat{\Sigma}_k$ on either side. Locally these are separate components, but they may reduce globally to a single connected component. 
If this were to happen, then since $dim({\cal S}_{k+1})\leq k-2$, we could find a smooth embedded closed curve $\gamma(t)$ parametrized by a periodic variable $t\in [0,1]$ with $\gamma(0)\in\hat{\Sigma}_k\cap{\cal R}_{k+1}$ and $\gamma(t)\in {\cal R}_{k+1}\sim \hat{\Sigma}_k$ for $t\neq 0$. We may also assume that $\gamma'(0)$ is transverse to $\hat{\Sigma}_k$. We choose local coordinates $x^1,\ldots, x^k$ for $\hat{\Sigma}_k$ in a neighborhood $V$ of $\gamma(0)$ and we may find an embedding $F$ of $V\times S^1$ in ${\cal R}_{k+1}$ with the property that $F(0,t)=\gamma(t)$, $F(x,0)\in \hat{\Sigma}_k$, $F(x,t)\not\in \hat{\Sigma}_k$ for $t\neq 0$, and $\frac{\partial F}{\partial t}(x,0)$ is transverse to $\hat{\Sigma}_k$. The $k$-form $\omega=\zeta(x)dx^1\wedge\ldots\wedge dx^k$, where $\zeta$ is a nonnegative and nonzero function with compact support in $V$, is a closed form which has positive integral over $\hat{\Sigma}_k$. Since the image $V_1=F(V\times S^1)$ is compactly contained in ${\cal R}_{k+1}$ and the normal bundle of $\hat{\Sigma}_{k+1}$ is trivial, we may choose coordinates $x^{k+2},\ldots, x^n$ for a normal disk, and the coordinates $x^1,\ldots, x^k,t,x^{k+2},\ldots, x^n$ are then coordinates on a neighborhood of $V_1$ in $\Sigma_n$. We may then extend $\omega$ to an $(n-1)$-form on this neighborhood by setting \[ \omega_1=\omega\wedge \zeta_1(x^{k+2},\ldots,x^n)dx^{k+2}\wedge\ldots\wedge dx^n \] where $\zeta_1$ is a nonzero, nonnegative function with compact support in the domain of $x^{k+2},\ldots,x^n$. Thus $\omega_1$ is a closed $(n-1)$-form with compact support in $U\cap \Sigma_n$ which has positive integral on $\hat{\Sigma}_{n-1}$, the connected component of $\Sigma_{n-1}$ containing $\gamma(0)$. 
This contradicts the condition that each connected component of $\Sigma_{n-1}$ must divide the ball $U\cap \Sigma_n$ into $2$ connected components and is the oriented boundary of one of them, say $\hat{\Sigma}_{n-1}=\partial U_n$, since Stokes' theorem would imply that $\int_{\hat{\Sigma}_{n-1}}\omega_1=\int_{U_n}d\omega_1=0$ (note that $\omega_1$ has compact support in $U\cap\Sigma_n$). \end{pf}
We will prove a boundedness theorem which will be needed in the proof of the compactness theorem. Note that we will obtain the partial regularity theorem by finite induction down from dimension $n-1$, so we may assume in the following theorems that we have already established partial regularity for $(k+1)$-slicings. In the following result we will consider the restriction of a $k$-slicing to a small ball $B_\sigma(x)$ where $x\in \ifmmode{\Bbb R}\else{$\Bbb R$}\fi^N$. We consider the rescaled $k$-slicing of the unit ball given by $\Sigma_{j,\sigma}=\sigma^{-1}(\Sigma_j-x)$ with $u_{j,\sigma}(y)=a_ju_j(x+\sigma y)$ with $a_j$ chosen so that $\int_{\Sigma_{j,\sigma}}(u_{j,\sigma})^2\rho_{j+1,\sigma}\ d\mu_j=1$. We note that by Proposition \ref{prop:boundary} we may assume that each $\Sigma_j$ in $B_\sigma(x)$ is the oriented boundary of a relatively open set $O_{j+1}\subseteq \Sigma_{j+1}$. We take $O_{j+1,\sigma}$ to be the rescaled open set. The following result implies that the rescaled $k$-slicing remains $\Lambda$-bounded for a suitably chosen $\Lambda$. \begin{thm} \label{thm:bdness} Assume that all bounded $(k+1)$-slicings are partially regular. If we take any bounded minimal $k$-slicing $(\Sigma_j,u_j)$ in $\Omega$ and a ball $B_\sigma(x)$ compactly contained in $\Omega$, then there is a $\Lambda$ depending only on $\Sigma_n$ such that $(\Sigma_{j,\sigma},u_{j,\sigma})$, $j=k,\ldots,n-1$ is $\Lambda$-bounded in $B_{1/2}(0)$. \end{thm} \begin{pf} The proof is by a finite induction beginning with $k=n-1$. The boundedness of $\mu_{n-1}(\Sigma_{n-1,\sigma})$ follows by comparison with a portion of the sphere of radius $1$ in a standard way (see a similar argument below). We normalize $\int_{\Sigma_{n-1,\sigma}}(u_{n-1,\sigma})^2\ d\mu_{n-1}=1$, so it remains to show
\[ \int_{\Sigma_{n-1,\sigma}\cap B_{1/2}(0)}|A_{n-1,\sigma}|^2u_{n-1,\sigma}^2\ d\mu_{n-1}\leq \Lambda. \] To see this, we use stability with the variation $\zeta u_{n-1,\sigma}$ to obtain
\[ \frac{1}{4}\int_{\Sigma_{n-1,\sigma}}|A_{n-1,\sigma}|^2\zeta^2u_{n-1,\sigma}^2\ d\mu_{n-1}\leq Q_{n-1,\sigma}(\zeta u_{n-1,\sigma},\zeta u_{n-1,\sigma}). \] Now we have by direct calculation for any $W_{1,2}(\Sigma_{n-1,\sigma})$ function $v$ \[ Q_{n-1,\sigma}(\zeta v,\zeta v)=Q_{n-1,\sigma}(\zeta^2 v,v)+
\int_{\Sigma_{n-1,\sigma}}v^2|\nabla_{n-1,\sigma}\zeta|^2\ d\mu_{n-1}. \] Taking $v=u_{n-1,\sigma}$ and choosing $\zeta$ to be a function which is $1$ on $B_{1/2}(0)$ with support in $B_1(0)$ and with bounded gradient we find
\[ \int_{\Sigma_{n-1,\sigma}}|A_{n-1,\sigma}|^2u_{n-1,\sigma}^2\ d\mu_{n-1}\leq 4\lambda_{n-1,\sigma}+c\leq \Lambda \] for a constant $\Lambda$ where we have used the eigenvalue condition \[ Q_{n-1,\sigma}(\zeta^2 u_{n-1,\sigma},u_{n-1,\sigma})=\lambda_{n-1,\sigma}\int_{\Sigma_{n-1,\sigma}}\zeta^2u_{n-1,\sigma}^2\ d\mu_{n-1} \] and the obvious relation $\lambda_{n-1,\sigma}=\sigma^2\lambda_{n-1}$. This proves $\Lambda$-boundedness for $k=n-1$.
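The identity $Q_{n-1,\sigma}(\zeta v,\zeta v)=Q_{n-1,\sigma}(\zeta^2 v,v)+\int v^2|\nabla_{n-1,\sigma}\zeta|^2\,d\mu_{n-1}$ used above is the usual cutoff identity; schematically (with any weight suppressed):

```latex
|\nabla(\zeta v)|^2
  =\zeta^2|\nabla v|^2+2\zeta v\,\langle\nabla\zeta,\nabla v\rangle+v^2|\nabla\zeta|^2
  =\langle\nabla(\zeta^2 v),\nabla v\rangle+v^2|\nabla\zeta|^2,
% while the zeroth order terms agree exactly: q\,(\zeta v)^2 = q\,(\zeta^2 v)\,v.
% Integrating gives Q(\zeta v,\zeta v)=Q(\zeta^2 v,v)+\int v^2|\nabla\zeta|^2.
```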
Now assume that we have $\Lambda$-boundedness for $j\geq k+1$ in $B_{3/4}(0)$. Thus it follows that $\int_{\Sigma_{k+1,\sigma}\cap B_{3/4}(0)}(1+(u_{k+1,\sigma})^2)\rho_{k+2,\sigma}\ d\mu_{k+1}$ is bounded and hence $\int_{\Sigma_{k+1,\sigma}\cap B_{3/4}(0)}\rho_{k+1,\sigma}\ d\mu_{k+1}$ is bounded. We may then use the coarea formula to find a radius $r\in (1/2,3/4)$ so that \[ \int_{\Sigma_{k+1,\sigma}\cap \partial B_r(0)}\rho_{k+1,\sigma}\ d\mu_k\leq \Lambda. \] Using the portion of $\Sigma_{k+1,\sigma}\cap \partial B_r(0)$ lying outside $O_{k+1,\sigma}$ as a comparison surface we find \[ Vol_{\rho_{k+1,\sigma}}(\Sigma_{k,\sigma}\cap B_{1/2}(0))\leq Vol_{\rho_{k+1,\sigma}}(\Sigma_{k+1,\sigma}\cap \partial B_r(0))\leq \Lambda. \] Finally we prove the bound
\[ \int_{\Sigma_{k,\sigma}\cap B_{1/2}(0)}(|A_{k,\sigma}|^2+\sum_{p=k+1}^n|\nabla_{k,\sigma}\log u_{p,\sigma}|^2)u_{k,\sigma}^2 \rho_{k+1,\sigma}\ d\mu_k\leq \Lambda \] by the use of stability as we did above for the case $k=n-1$. \end{pf}
We will now formulate and prove a compactness theorem for minimal $k$-slicings under the assumption that the associated $(k+1)$-slicings for the sequence are partially regular. We will say that a $\Lambda$-bounded sequence of $k$-slicings $(\Sigma_j^{(i)},u_j^{(i)})$, $j=k,\ldots, n-1$ {\it converges} to a minimal $k$-slicing $(\Sigma_j,u_j)$ in an open set $U$ if $\Sigma_j^{(i)}$ converges in $C^2$ norm to $\Sigma_j$ in $\bar{U}$ locally on the complement of the singular set (of the limit) ${\cal S}_j$, and such that for $j=k,\ldots,n-1$ \begin{equation} \label{eqn:conv1} \lim_{i\to\infty}V_{\rho_{j+1}^{(i)}}(\Sigma_j^{(i)}\cap U_i)=V_{\rho_{j+1}}(\Sigma_j\cap U), \end{equation} \begin{eqnarray} \label{eqn:conv2}
\lim_{i\to\infty}\|u^{(i)}_j\|^2_{0,j,U_i}&=&\|u_j\|_{0,j,U}^2 \\
\lim_{i\to\infty} \int_{\Sigma_j^{(i)}\cap U_i}(|\nabla_ju_j^{(i)}|^2+P_j^{(i)}(u_j^{(i)})^2)\rho_{j+1}^{(i)}\ d\mu_j&=&\int_{\Sigma_j\cap U}(|\nabla_ju_j|^2+P_ju_j^2)\rho_{j+1}\ d\mu_j \nonumber \end{eqnarray} where $U_i$ is a sequence of compact subdomains of $U$ with $U_i\subseteq U_{i+1}\subseteq U$ and $U=\cup_iU_i$.
Making precise the meaning of convergence on compact subsets for this problem involves some subtlety, since changing the $u_p$, $p\geq j+1$, by multiplication by a positive constant has no effect on the $\Sigma_j$; in order to get nontrivial limits for the $u_p$ we must normalize them appropriately. In case $\Sigma_j\cap U$ has multiple components this normalization must be done on each component. If $(\Sigma_j,u_j)$ is a minimal $k$-slicing with $\Sigma_j$ being partially regular for $j\geq k+1$, then we call a compact subdomain $U$ of $\Omega$ {\it admissible for $(\Sigma_j,u_j)$} if $U$ is a smooth domain which meets $\partial\Sigma_j$ transversally and $dim(\partial U\cap{\cal S}_j)\leq j-3$. It follows from the coarea formula that any smooth domain can be perturbed to be admissible. We make the following definition. \begin{defn} We say that a sequence of $k$-slicings $(\Sigma_j^{(i)},u_j^{(i)})$ {\it converges on compact subsets} to a $k$-slicing $(\Sigma_j,u_j)$ if for any compact subdomain $U$ of $\Omega$ which is admissible for $(\Sigma_j,u_j)$ and for any admissible domains $U_i$ for $(\Sigma_j^{(i)},u_j^{(i)})$ with $U_i\subseteq U_{i+1}\subseteq U$ compactly contained in $U$ it is true that each connected component of $\Sigma_j\cap{\cal R}_{j+1}\cap U$ is a limit of connected components of $\Sigma_j^{(i)}\cap{\cal R}_{j+1}^{(i)}\cap U_i$ in the sense of (\ref{eqn:conv1}) and (\ref{eqn:conv2}) with $u_j$ appropriately normalized on each connected component. \end{defn} \begin{rem} Because of the connectedness of the regular set and the Harnack inequality, we may normalize the $u_j$ to be equal to $1$ at a point $x_0\in {\cal R}_k$ about which we have a uniform ball on which the $\Sigma_j$ have bounded curvature. That this normalization suffices for the connected component of $\Sigma_k\cap U$, for any compact admissible domain for $(\Sigma_j,u_j)$, is a consequence of the compactness theorem below. \end{rem}
The following compactness and regularity theorem includes Theorem \ref{thm:reg} as a special case. \begin{thm} \label{thm:cptness} Assume that all bounded minimal $(k+1)$-slicings are partially regular. Given a $\Lambda$-bounded sequence of $k$-slicings, there is a subsequence which converges to a $\Lambda$-bounded $k$-slicing on compact open subsets of $\Omega$. Furthermore $\Sigma_k$ is partially regular. \end{thm} \begin{pf} We will proceed as usual by downward induction beginning with $k=n-1$. We will break the proof into two separate steps, the first establishing the statement (\ref{eqn:conv1}) for convergence of the $\Sigma_k$ and the second showing the two statements (\ref{eqn:conv2}) involving convergence of the $u_k$. For $k=n-1$ the first step follows from the usual compactness theorem for volume minimizing hypersurfaces (see \cite{simon}). To complete the proof we will need to develop some monotonicity ideas both for the $\Sigma_j$ and for the $u_j$. We digress on this topic and return to the proof below.
We now prove a version of the monotonicity of the frequency-type function. This idea is due to F. Almgren \cite{almgren}, and it gives a method to prove that solutions of variationally defined elliptic equations are approximately homogeneous on a small scale. The importance of this method for us is that it works in the presence of singularities provided certain integrals are defined. We will apply this to show that the $u_k$ become homogeneous upon rescaling at a given singular point. Assume that $C$ is a $k$-dimensional cone in $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^n$ which is regular except for a set $\cal S$ with $dim({\cal S})\leq k-3$. Assume that $Q$ is a quadratic form on $C$ of the form
\[ Q(\varphi,\varphi)=\int_C(|\nabla\varphi|^2-q(x)\varphi^2)\rho\ d\mu \] where $\rho$ is a homogeneous weight function on $C$ of degree $p$; i.e. assume that $\rho(\lambda x)=\lambda^p\rho(x)$ for $x\in C$ and $\lambda>0$. Assume also that $\rho$ is smooth and positive on the regular set ${\cal R}$ of $C$ and that $\rho$ is locally $L^1$ on $C$. Assume also that $q$ is smooth on ${\cal R}$ and is homogeneous of degree $-2$; i.e. assume that $q(\lambda x)=\lambda^{-2}q(x)$ for $x\in C$ and $\lambda>0$. Finally assume that
$u$ is a minimizer for $Q$ in a neighborhood of $0$ and in particular that $u$ is smooth and positive on ${\cal R}$. Assume also that $q=div({\cal X})+\bar{q}$ where $|{\cal X}|^2+|\bar{q}|\leq P$ for some positive function $P$ and that the following integral bound holds
\[ \int_C[|\nabla u|^2+(1+|\nabla\log\rho|^2+P)u^2]\rho\ d\mu<\infty. \] Under these conditions we may define the frequency function $N(\sigma)$ which is a function of a radius $\sigma>0$ such that $B_\sigma(0)$ is contained in the domain of definition of $u$. It is defined by \begin{equation} \label{eqn:freqfcn} N(\sigma)=\frac{\sigma Q_{\sigma}(u)}{I_{\sigma}(u)} \end{equation} where $Q_\sigma(u)$ and $I_\sigma(u)$ are defined by
\[ Q_\sigma(u)=\int_{C\cap B_\sigma(0)}(|\nabla u|^2-q(x)u^2)\rho\ d\mu_k,\ I_\sigma(u)=\int_{C\cap \partial B_\sigma(0)} u^2\rho\ d\mu_{k-1} \] where the last integral is taken with respect to $(k-1)$-dimensional Hausdorff measure. We may now prove the following monotonicity result for $N(\sigma)$. \begin{thm} \label{thm:freq} Assume that $u$ is a critical point of $Q$ which is integrable as above. The function $N(\sigma)$ is monotone increasing in $\sigma$, and for almost all $\sigma$ we have \[ N'(\sigma)=\frac{2\sigma}{I_\sigma(u)^2}(I_\sigma(u_r)I_\sigma(u)-\langle u_r, u\rangle_\sigma^2) \] where $u_r$ denotes the radial derivative of $u$ and $\langle\cdot,\cdot\rangle_\sigma$ denotes the $\rho$-weighted $L^2$ inner product taken on $C\cap\partial B_\sigma(0)$. The limit of $N(\sigma)$ as $\sigma$ goes to $0$ exists and is finite. The function $N(\sigma)$ is equal to a constant $N(0)$ if and only if $u$ is homogeneous of degree $N(0)$. \end{thm} \begin{pf} The argument can be done variationally and combines two distinct deformations of the function $u$. The first involves a radial deformation of $C$; precisely, let $\zeta(r)$ be a function which is nonnegative, decreasing, and has support in $B_\sigma(0)$. Let $X$ denote the vector field on $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^n$ given by $X=\zeta(r) x$ where $x$ denotes the position vector. The flow $F_t$ of $X$ then preserves $C$, and we may write
\[ Q_\sigma(u\circ F_t)=\int_{C\cap B_\sigma(0)}(|\nabla_t u|^2-(q\circ F_{t})u^2)\rho\circ F_{t}\ d\mu_t \] where we have used a change of variable and $\nabla_t$ and $\mu_t$ denote the gradient operator and volume measure with respect to $F_t^*(g)$ where $g$ is the induced metric on $C$ from $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^n$. Differentiating with respect to $t$ and setting $t=0$ we obtain
\[ 0=\int_C\{(\langle-{\cal L}_Xg,du\otimes du\rangle-X(q)u^2)\rho+(|\nabla u|^2-qu^2) (X(\rho)+\rho\ div(X))\}\ d\mu \] where ${\cal L}$ denotes the Lie derivative. By direct calculation we have $X(q)=-2\zeta q$, $X(\rho)=p\zeta \rho$, $div(X)=r\zeta'(r)+k\zeta$, and ${\cal L}_X g=2r\zeta'(r)(dr\otimes dr)+2\zeta g$. Substituting in this information and collecting terms we have
\[ 0=\int_C\{(p+k-2)\zeta(|\nabla u|^2-qu^2)+r\zeta'(|\nabla u|^2-2u_r^2-qu^2)\}\ \rho\ d\mu. \] Letting $\zeta$ approach the characteristic function of $B_\sigma(0)$ this implies \begin{eqnarray*}
(p+k-2)Q_\sigma(u)&=&\sigma\int_{C\cap\partial B_\sigma(0)}(|\nabla u|^2-2u_r^2-qu^2)\ \rho\ d\mu_{k-1}\\ &=&\sigma\frac{dQ_\sigma(u)}{d\sigma}-2\sigma\int_{C\cap\partial B_\sigma(0)}u_r^2\rho\ d\mu_{k-1}. \end{eqnarray*}
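The pointwise identities substituted in the computation above all follow from homogeneity; for instance, Euler's relation $x\cdot\nabla f=(\deg f)\,f$ for a homogeneous function $f$ gives, with $X=\zeta(r)x$:

```latex
X(q)=\zeta\,x\cdot\nabla q=-2\zeta q,\qquad
X(\rho)=\zeta\,x\cdot\nabla\rho=p\,\zeta\rho,
% while on the k-dimensional cone C, with r=|x| and x tangential to C,
div(X)=\zeta\,div(x)+\langle x,\nabla\zeta\rangle=k\zeta+r\zeta'(r),\qquad
{\cal L}_Xg=2\zeta g+2r\zeta'(r)\,dr\otimes dr,
% the last since the flow of the position field x is dilation and x^\flat = r\,dr.
```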
The second ingredient we need comes from the deformation $u_t=(1+t\zeta(r))u$ where $\zeta$ is as above. Since $\dot{u}=\zeta u$ this deformation implies \[ 0=\int_C(\langle \nabla u,\nabla(\zeta u)\rangle-q\zeta u^2)\rho\ d\mu. \] Expanding this and letting $\zeta$ approach the characteristic function of $B_\sigma(0)$ we have \[ Q_\sigma(u)=\int_{C\cap\partial B_\sigma(0)}uu_r\ \rho\ d\mu_{k-1}. \]
The proof will now follow by combining these. First we have \[ N'(\sigma)=I_\sigma(u)^{-2}\{(Q_\sigma+\sigma Q'_\sigma)I_\sigma-\sigma Q_\sigma I'_\sigma\}. \] Substituting in for the terms involving derivatives (here $\sigma Q'_\sigma=(p+k-2)Q_\sigma+2\sigma\int_{C\cap\partial B_\sigma(0)}u_r^2\rho\ d\mu_{k-1}$ from the first deformation, while $\sigma I'_\sigma=(p+k-1)I_\sigma+2\sigma Q_\sigma$ by homogeneity of $\rho$ together with the identity $Q_\sigma=\int_{C\cap\partial B_\sigma(0)}uu_r\rho\ d\mu_{k-1}$) this implies \begin{eqnarray*} N'(\sigma)&=&I_\sigma^{-2}\{(Q_\sigma+(p+k-2)Q_\sigma)I_\sigma -(p+k-1)Q_\sigma I_\sigma\} \\ &+&2\sigma I_\sigma^{-2}\{I_\sigma\int_{C\cap\partial B_\sigma(0)} u_r^2\rho\ d\mu_{k-1}-Q_\sigma^2\}. \end{eqnarray*} Since the first term on the right is $0$, we may write this as \[ N'(\sigma)=2\sigma I_\sigma(u)^{-2}(I_\sigma(u)I_\sigma(u_r)-\langle u_r, u\rangle_\sigma^2) \] which is the desired formula.
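The derivative of the boundary integral $I_\sigma$ used in this substitution can be obtained by scaling: using $\rho(\sigma\theta)=\sigma^p\rho(\theta)$ and the scaling of $(k-1)$-dimensional measure on the cone,

```latex
I_\sigma(u)=\sigma^{p+k-1}\int_{C\cap\partial B_1(0)}u(\sigma\theta)^2\rho(\theta)\,d\mu_{k-1}(\theta),
% so differentiating in \sigma:
I'_\sigma(u)=(p+k-1)\sigma^{-1}I_\sigma(u)
  +2\int_{C\cap\partial B_\sigma(0)}u\,u_r\,\rho\ d\mu_{k-1}
  =(p+k-1)\sigma^{-1}I_\sigma(u)+2Q_\sigma(u),
% the last equality by the second deformation identity
%   Q_\sigma(u)=\int_{C\cap\partial B_\sigma(0)}u u_r\rho\,d\mu_{k-1}.
```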
To see that $N(\sigma)$ is bounded from below as $\sigma$ goes to $0$ we observe that \[ N(\sigma)=\frac{1}{2}\sigma\frac{d}{d\sigma}\log(\bar{I}_\sigma(u)),\ \bar{I}_\sigma(u)=\frac{\int_{C\cap\partial B_\sigma(0)}u^2\rho\ d\mu_{k-1}}{\int_{C\cap\partial B_\sigma(0)}\rho\ d\mu_{k-1}}, \] and the monotonicity expresses the condition that $\log\bar{I}_\sigma(u)$ is a convex function of $t=\log\sigma$. This function is defined for all $t\leq 0$, and by the coarea formula, for any $\sigma_1>0$ there is a $\sigma\in [\sigma_1,2\sigma_1]$ with $I_\sigma(u)\leq c\sigma^{-1}$; it follows that there is a sequence $t_i=\log\sigma_i$ tending to $-\infty$ such that $\bar{I}_{\sigma_i}(u)\leq c\sigma_i^{-K}$ for some $K>0$, and hence $\log\bar{I}_{\sigma_i}(u)\leq -ct_i$. It follows that the slope (that is, $N(\sigma)$) of the convex function $\log\bar{I}_\sigma(u)$ is bounded from below as $t$ tends to $-\infty$.
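The expression $N(\sigma)=\frac{1}{2}\sigma\frac{d}{d\sigma}\log\bar{I}_\sigma(u)$ can be checked directly: since $\int_{C\cap\partial B_\sigma(0)}\rho\,d\mu_{k-1}=\sigma^{p+k-1}\int_{C\cap\partial B_1(0)}\rho\,d\mu_{k-1}$ by homogeneity, and $I'_\sigma=(p+k-1)\sigma^{-1}I_\sigma+2Q_\sigma$ by the same scaling argument,

```latex
\frac{d}{d\sigma}\log\bar{I}_\sigma(u)
  =\frac{I'_\sigma}{I_\sigma}-\frac{p+k-1}{\sigma}
  =\frac{2Q_\sigma(u)}{I_\sigma(u)}
  =\frac{2N(\sigma)}{\sigma},
% using the definition N(\sigma)=\sigma Q_\sigma(u)/I_\sigma(u).
```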
Now if $N(\sigma)=N(0)$ is constant, we must have equality in the Schwarz inequality for each $\sigma$, and hence we would have $u_r=f(r)u$ for some function $f(r)$. Now this implies that $Q_\sigma=f(\sigma)I_\sigma$ and hence we have $rf(r)=N(0)$. Therefore it follows that $f(r)=r^{-1}N(0)$, and $ru_r=N(0)u$ so $u$ is homogeneous of degree $N(0)$ by Euler's formula. \end{pf} We will need to extend the usual monotonicity formula for the volume of minimal submanifolds to the setting in which the submanifold under consideration minimizes a weighted volume with a homogeneous weight function within a partially regular cone. Precisely, let $C$ be a $(k+1)$-dimensional cone in $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^n$ with a singular set $\cal S$ of Hausdorff dimension at most $k-2$. Let $\rho$ be a positive weight function which is homogeneous of degree $p$; i.e. we have $\rho(\lambda x)=\lambda^p\rho(x)$ for $x\in C$ and $\lambda>0$. Assume that $\rho$ is smooth and positive on the regular set of $C$, and that $\rho$ is locally integrable with respect to Hausdorff measure on $C$. \begin{thm} \label{thm:mncty} Let $\Sigma$ be a hypersurface in a $(k+1)$-dimensional cone $C$ which minimizes the weighted volume $V_\rho$ for a homogeneous weight function $\rho$. We then have the monotonicity formula \[ \frac{d}{d\sigma}(\sigma^{-k-p}Vol_\rho(\Sigma\cap B_\sigma(0)))=
\int_{\Sigma\cap\partial B_\sigma(0)}r^{-p-k-2}|x^\perp|^2\rho\ d\mu_{k-1} \] where $x^\perp$ denotes the component of the position vector $x$ perpendicular to $\Sigma$. \end{thm} \begin{pf} We take a function $\zeta(r)$ which is decreasing, nonnegative, and equal to $0$ for $r>\sigma$, and we consider the vector field $X=\zeta x$ where $x$ denotes the position vector. The first variation formula for the $\rho$-weighted volume then implies \[ 0=\int_\Sigma(X(\rho)+div_\Sigma(X)\rho)\ d\mu_k. \]
Since $\rho$ is homogeneous we have $X(\rho)=p\zeta \rho$, and by direct calculation $div_\Sigma(X)=k\zeta+r^{-1}\zeta'|x^T|^2$ where $x^T$ denotes the component of $x$ tangential to $\Sigma$. Thus we have
\[ 0=\int_\Sigma\{(p+k)\zeta+r^{-1}\zeta'|x^T|^2\}\rho\ d\mu_k. \] Taking $\zeta$ to approximate the characteristic function of $B_\sigma(0)$ we may write this as \[ (p+k)Vol_\rho(\Sigma\cap B_\sigma(0))=\sigma\frac{d}{d\sigma}Vol_\rho(\Sigma\cap B_\sigma(0))-
\int_{\Sigma\cap\partial B_\sigma(0)} r^{-1}|x^\perp|^2\rho\ d\mu_{k-1} \] where $x^\perp$ is the component of $x$ normal to $\Sigma$ in $C$. Note that
$r^2=|x^T|^2+|x^\perp|^2$ because $C$ is a cone and so $x$ is tangential to $C$. This may be rewritten as the desired monotonicity formula and completes the proof. \end{pf}
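The final rewriting in the proof above is the following computation, using $r=\sigma$ on $\partial B_\sigma(0)$ and writing $V(\sigma)=Vol_\rho(\Sigma\cap B_\sigma(0))$:

```latex
\frac{d}{d\sigma}\left(\sigma^{-p-k}V(\sigma)\right)
  =\sigma^{-p-k-1}\left(\sigma V'(\sigma)-(p+k)V(\sigma)\right)
  =\sigma^{-p-k-1}\int_{\Sigma\cap\partial B_\sigma(0)}r^{-1}|x^\perp|^2\rho\ d\mu_{k-1}
  =\int_{\Sigma\cap\partial B_\sigma(0)}r^{-p-k-2}|x^\perp|^2\rho\ d\mu_{k-1}.
```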
We now show that there can be no tangent minimal $2$-slicing with $C_2$ having an isolated singularity at $\{0\}$. \begin{thm} \label{thm:2dcone} If $C_2$ is a cone lying in a tangent minimal $2$-slicing such that $C_2\sim\{0\}\subseteq{\cal R}_2$, then $C_2$ is a plane and ${\cal R}_2=C_2$. \end{thm} \begin{pf} From the eigenvalue estimate of Theorem \ref{thm:eval} we have
\[ \int_{C_2}(\frac{3}{4}\sum_{j=3}^n|\nabla_2\log u_j|^2-R_2)\varphi^2\ d\mu_2\leq 4\int_{C_2}|\nabla_2\varphi|^2\ d\mu_2 \] for test functions $\varphi$ with compact support in $C_2\sim \{0\}$. Since $C_2$ is a two dimensional cone we have $R_2=0$ away from the origin, and hence we have
\[ \int_{C_2}\sum_{j=3}^n|\nabla_2\log u_j|^2\varphi^2\ d\mu_2\leq c\int_{C_2}|\nabla_2\varphi|^2\ d\mu_2. \] Letting $r$ denote the distance to the origin, we take $\varepsilon$ and $R$ so that $0<\varepsilon<<R$ and choose $\varphi$ to be a function of $r$ which is equal to $0$ for $r\leq \varepsilon^2$, equal to $1$ for $\varepsilon\leq r\leq R$, and equal to $0$ for $r\geq R^2$. In the range $\varepsilon^2\leq r\leq \varepsilon$ we choose \[ \varphi(r)=\frac{\log(\varepsilon^{-2}r)}{\log(\varepsilon^{-1})} \] and for $R\leq r\leq R^2$ \[ \varphi(r)=\frac{\log(R^2r^{-1})}{\log R}. \]
Thus for $\varepsilon^2\leq r\leq \varepsilon$ we have $|\nabla_2\varphi|^2=(r|log\ \varepsilon|)^{-2}$ and for
$R\leq r\leq R^2$ we have $|\nabla_2\varphi|^2=(r\ log\ R)^{-2}$. It thus follows that
\[ \int_{C_2}|\nabla_2\varphi|^2\ d\mu_2\leq c(|log\ \varepsilon|^{-1}+(log\ R)^{-1}). \] Thus we may let $\varepsilon$ tend to $0$ and $R$ tend to $\infty$ to conclude that the functions $u_3,\ldots, u_n$ are constant on $C_2$. This implies that $C_2$ has zero mean curvature and hence is a plane. If all of the cones $C_3,\ldots C_{n-1}$ are regular near the origin, then it follows that $0\in {\cal R}_2$, and we have completed the proof. Otherwise there is a $C_m$ for $m\geq 3$ which denotes the largest dimensional cone in the minimal $2$-slicing for which the origin is a singular point. It follows that $C_m$ is a volume minimizing cone in $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^{m+1}=C_{m+1}$, and hence $u_m$ must be homogeneous of a negative degree (see Lemma \ref{lem:negdeg} below) contradicting the fact that $u_m$ is constant along $C_2$. This completes the proof. \end{pf} {\it Completion of proof of Theorem \ref{thm:cptness}:} We first prove the compactness of the $\Sigma_k$ in the sense of (\ref{eqn:conv1}) under the assumption that we have the partial regularity of bounded minimal $(k+1)$-slicings and the compactness (both (\ref{eqn:conv1}) and (\ref{eqn:conv2})) for $j\geq k+1$. We need the following lemma. \begin{lem} \label{lem:locbd} Assume that both the compactness and partial regularity hold for $(k+1)$-slicings. Given any $x\in {\cal S}_{k+1}$, there are constants $c$ and $r_0$ (depending on $x$ and $\Sigma_{k+1}$) so that for $r\in (0,r_0]$ we have \[ \int_{\Sigma_{k+1}\cap B_{2r}(x)} u_{k+1}^2\rho_{k+2}\ d\mu_{k+1}\leq cr^2\int_{\Sigma_{k+1}\cap B_r(x)}P_{k+1}u_{k+1}^2\rho_{k+2}\ d\mu_{k+1}, \] and \[ Vol_{\rho_{k+2}}(\Sigma_{k+1}\cap B_{2r}(x))\leq cVol_{\rho_{k+2}}(\Sigma_{k+1}\cap B_r(x)). \] \end{lem} \begin{pf} Since the left hand side of the inequality is continuous under convergence and the right hand side is lower semicontinuous (Fatou's theorem) it is enough to establish the inequality for $r=1$ on a cone $C_{k+1}$. 
This we can do by a compactness argument since we can normalize \[ \int_{C_{k+1}\cap B_1(0)} u_{k+1}^2\rho_{k+2}\ d\mu_{k+1}=1 \] and if we had a sequence of singular cones for which the right hand side tends to zero we would have a limiting cone $C_{k+1}$ on which $P_{k+1}=0$. It follows that $u_{k+2},\ldots, u_{n-1}$ are constant on $C_{k+1}$. Note that the highest dimensional {\it singular} cone $C_{n_0}$ in the slicing is minimal and hence $u_{n_0}$ is homogeneous of a negative degree (see Lemma \ref{lem:negdeg} below). Therefore if $n_0>k+1$ we have a contradiction, and we conclude that $C_{k+1}$ is minimal and $C_{k+2},\ldots, C_{n-1}$ are planes. Thus it follows that $\tilde{A}_{k+1}=A_{k+1}=0$ and hence $C_{k+1}$ is also a plane. Thus the cones are regular sufficiently far out in the sequence; a contradiction. The second inequality follows easily by reduction to cones. This proves the bounds. \end{pf} Given a sequence $(\Sigma_j^{(i)},u_j^{(i)})$ of $\Lambda$-bounded minimal $k$-slicings, we may apply the inductive assumption to obtain a subsequence (with the same notation) for which the corresponding sequence of $(k+1)$-slicings converges in the sense of (\ref{eqn:conv1}) and (\ref{eqn:conv2}). By standard compactness theorems we may assume that $\Sigma_k^{(i)}$ converges on compact subsets of $\Omega\sim {\cal S}_{k+1}$ to a limiting submanifold $\Sigma_k$ which minimizes $Vol_{\rho_k}$ (and is therefore regular outside a closed set of dimension at most $k-7$). To establish (\ref{eqn:conv1}) we choose a neighborhood $U$ of ${\cal S}_{k+1}$ such that \[ Vol_{\rho_{k+2}}(\Sigma_{k+1}\cap \bar{U})<\varepsilon.
\] We apply Lemma \ref{lem:locbd} and compactness to find a finite collection of points $x_\alpha\in {\cal S}_{k+1}$ and balls $B_{r_\alpha}(x_\alpha)\subset U$ so that \[ \int_{\Sigma_{k+1}\cap B_{2r_\alpha}(x_\alpha)} u_{k+1}^2\rho_{k+2}\ d\mu_{k+1}< cr_\alpha^2\int_{\Sigma_{k+1}\cap B_{r_\alpha}(x_\alpha)}P_{k+1}u_{k+1}^2\rho_{k+2}\ d\mu_{k+1} \] and \[ Vol_{\rho_{k+2}}(\Sigma_{k+1}\cap B_{2r_\alpha}(x_\alpha))< cVol_{\rho_{k+2}}(\Sigma_{k+1}\cap B_{r_\alpha}(x_\alpha)). \] Now apply the Besicovitch covering lemma to extract a finite number of disjoint collections ${\cal B}_\alpha$, $\alpha=1,\ldots, K$ of such balls whose union covers ${\cal S}_{k+1}$. If $V$ denotes the union of these balls, then $V$ is a neighborhood of ${\cal S}_{k+1}$, and hence for $i$ sufficiently large we have ${\cal S}_{k+1}^{(i)}\subset V$. Because of convergence of the left sides and lower semicontinuity of the right side, we have for $i$ sufficiently large \[ \int_{\Sigma_{k+1}^{(i)}\cap B_{2r_\alpha}(x_\alpha)} (u_{k+1}^{(i)})^2\rho_{k+2}^{(i)}\ d\mu_{k+1}< cr_\alpha^2\int_{\Sigma_{k+1}^{(i)}\cap B_{r_\alpha}(x_\alpha)} P_{k+1}^{(i)}(u_{k+1}^{(i)})^2\rho_{k+2}^{(i)}\ d\mu_{k+1} \] and \[ Vol_{\rho_{k+2}^{(i)}}(\Sigma_{k+1}^{(i)}\cap B_{2r_\alpha}(x_\alpha))< cVol_{\rho_{k+2}^{(i)}}(\Sigma_{k+1}^{(i)}\cap B_{r_\alpha}(x_\alpha)). \] By the coarea formula, for each such ball $B_{r_0}(x)$ we may find $s\in [r_0,2r_0]$ ($s$ depending on $i$) so that \[ Vol_{\rho_{k+1}^{(i)}}(\Sigma_{k+1}^{(i)}\cap\partial B_s(x))\leq 2r_0^{-1}\int_{\Sigma_{k+1}^{(i)}\cap B_{2r_0}} u_{k+1}^{(i)}\rho_{k+2}^{(i)}\ d\mu_{k+1}. \] Using the minimizing property of $\Sigma_k^{(i)}$ and simple inequalities we find \begin{eqnarray*}
Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)}\cap B_{r_0})&\leq& \varepsilon_1^{-1}\int_{\Sigma_{k+1}^{(i)}\cap B_{2r_0}(x)}\rho_{k+2}^{(i)}\ d\mu_{k+1} \\
&+&\varepsilon_1r_0^{-2}\int_{\Sigma_{k+1}^{(i)}\cap B_{2r_0}}(u_{k+1}^{(i)})^2\rho_{k+2}^{(i)}\ d\mu_{k+1} \end{eqnarray*} for any $\varepsilon_1>0$. Applying the inequalities above and summing over the balls (using disjointness and a bound on $K$) we find \[ Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)}\cap V)\leq c\varepsilon_1^{-1}Vol_{\rho_{k+2}^{(i)}}(\Sigma_{k+1}^{(i)}\cap \bar{U}) +c\varepsilon_1\int_{\Sigma_{k+1}^{(i)}}P_{k+1}^{(i)}(u_{k+1}^{(i)})^2\rho_{k+2}^{(i)}\ d\mu_{k+1}. \] For $i$ sufficiently large this implies \[ Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)}\cap V)\leq c\varepsilon_1^{-1}\varepsilon+c\varepsilon_1, \] so that we may fix $\varepsilon_1$ sufficiently small and then choose $\varepsilon$ as small as we wish to make the right hand side smaller than any preassigned amount. Since we have \[ \lim_{i\to\infty}Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)}\sim V)=Vol_{\rho_{k+1}}(\Sigma_k\sim V), \] we can conclude that $\lim_{i\to\infty}Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)})=Vol_{\rho_{k+1}}(\Sigma_k)$, establishing (\ref{eqn:conv1}).
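To make the final choice of constants explicit (a routine step, spelled out for convenience): given any $\delta>0$, one may first fix $\varepsilon_1=\delta/(2c)$ and then choose the neighborhood $U$ so that $\varepsilon\leq \delta\varepsilon_1/(2c)$; the inequality above then gives
\[ Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)}\cap V)\leq c\varepsilon_1^{-1}\varepsilon+c\varepsilon_1\leq \frac{\delta}{2}+\frac{\delta}{2}=\delta \]
for $i$ sufficiently large.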
Now assume that we have established the partial regularity of all bounded minimal $(k+1)$-slicings and that we have proven the compactness for the $\Sigma_k$ in the sense of (\ref{eqn:conv1}). We can then use the results we have obtained above together with dimension reduction to prove partial regularity for $\Sigma_k$. Precisely, we have $dim({\cal S}_k)\leq k-2$, and if $dim({\cal S}_k)>k-3$, then we can choose a number $d$ with \[ k-3<d<dim({\cal S}_k), \] and go to a point $x\in {\cal S}_k$ of density for the measure ${\cal H}^d_\infty$ (since ${\cal H}^d_\infty({\cal S}_k)>0$). Taking successive tangent cones in the standard way and using the upper-semicontinuity of ${\cal H}^d_\infty({\cal S}_k)$ we would eventually produce a minimal $2$-slicing by cones such that $C_2\times \ifmmode{\Bbb R}\else{$\Bbb R$}\fi^{k-2}$ has singular set with Hausdorff dimension at most $k-2$ (by partial regularity of $(k+1)$-slicings) and greater than $k-3$. Therefore the cone $C_2$ must have an isolated singularity at the origin. This in turn contradicts Theorem \ref{thm:2dcone}. Therefore it follows that $dim({\cal S}_k)\leq k-3$ and $\Sigma_k$ is partially regular.
The final step of the proof is to show that the compactness statement holds for the $u_k$ under the assumption that it holds for $(\Sigma_j,u_j)$ for $j\geq k+1$ and also for $\Sigma_k$ (as established above). Assume that we have a sequence of minimal $k$-slicings such that the associated $(k+1)$-slicings and $\Sigma_k^{(i)}$ converge on compact subsets in the sense of (\ref{eqn:conv1}) and (\ref{eqn:conv2}). We choose a compact domain $U$ which is admissible for $(\Sigma_j,u_j)$ and a nested sequence of domains $U_i$ admissible for $(\Sigma_j^{(i)},u_j^{(i)})$. We work with a connected component of $\Sigma_k\cap U$ which, by abuse of notation, we again call $\Sigma_k$.
We may assume that the $u_k^{(i)}$ converge uniformly to $u_k$ on compact subsets of $\Omega\sim {\cal S}_k$ (where we can write $\Sigma_k^{(i)}$ locally as a normal graph over $\Sigma_k$ and compare corresponding values of $u_k^{(i)}$ to $u_k$). In particular, if $W$ is a compact subdomain of $\Omega\cap{\cal R}_k$ we have convergence of weighted $L^2$ norms of $u_k^{(i)}$ to the corresponding $L^2$ norm of $u_k$ on $W$. If $U$ is any compact subdomain of $\Omega$ and $\eta>0$, then by Proposition \ref{prop:l2con} applied with ${\cal S}={\cal S}_k$ we can find an open neighborhood $V$ of ${\cal S}\cap\bar{U}$ so that for $i$ sufficiently large ${\cal S}_k^{(i)}\cap\bar{U}\subset V$, and \[ \int_{\Sigma_k^{(i)}\cap V}(u_k^{(i)})^2 \rho_{k+1}^{(i)}\ d\mu_k\leq \eta\int_{\Sigma_k^{(i)}\cap\Omega}
[|\nabla_ku_k^{(i)}|^2+(1+P_k^{(i)})(u_k^{(i)})^2]\rho_{k+1}^{(i)}\ d\mu_k. \] The same inequality holds for the limit, and by the boundedness of the sequence the integral on the right is uniformly bounded. Thus by choosing $\eta$ small enough we can make the right hand side less than any prescribed $\varepsilon>0$. On the other hand if we take $W=U\setminus\bar{V}$ we then have convergence of the weighted $L^2$ norms on $W$, so we can make the difference as small as we wish on $W$. It follows that the difference of $L^2$ norms can be made arbitrarily small on $U$. This completes the proof that the weighted $L^2$ integrals converge.
Completing the proof will require the construction of a proper locally Lipschitz function $\Psi_k$ on ${\cal R}_k$
such that $u_k|\nabla_k\Psi_k|$ is bounded in $L^2(\Sigma_k)$. We give the construction of such a function in Proposition \ref{prop:proper} below. It also follows that we may construct a subsequence so that $\Psi_k^{(i)}$ are uniformly close to $\Psi_k$ on compact subsets of $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^N\sim{\cal S}_k$ for $i$ large. We can now prove the second part of the convergence (\ref{eqn:conv2}). Assume that $U\subset U_1\subset \Omega$ are compact domains. Given $\varepsilon>0$, we may choose a neighborhood $V$ of ${\cal S}_k$ so small that $\int_{V\cap\bar{U_1}}u_k^2\rho_{k+1}\ d\mu_k<\varepsilon$. Because $\Psi_k$ is proper on ${\cal R}_k$, we may choose $\Lambda$ sufficiently large that $E_k(\Lambda)\subset V$ where $E_k(\Lambda)$ is the subset of $\Sigma_k$ on which $\Psi_k>\Lambda$. We now let $\gamma(t)$ be a nondecreasing Lipschitz function such that $\gamma(t)=0$ for $t<\Lambda$, $\gamma(t)=1$ for $t>2\Lambda$, and $\gamma'(t)\leq \Lambda^{-1}$. We let $\varphi$ be a spatial cutoff function which is $1$ on $U$, $0$ outside $U_1$, and has bounded gradient. We then have the inequality by Proposition \ref{prop:coercive}
\[ \int_{\Sigma_k^{(i)}}(|\nabla_k\psi_k^{(i)}|^2+ P_k^{(i)}(\psi_k^{(i)})^2)\rho_{k+1}^{(i)}\ d\mu_k\leq cQ_k(\psi_k^{(i)},\psi_k^{(i)}) \] where $\psi_k^{(i)}=\varphi(\gamma\circ\Psi_k^{(i)})u_k^{(i)}$. Since the support of $\psi_k^{(i)}$ is contained in $V$ for $i$ sufficiently large we then have
\[ \int_{\Sigma_k^{(i)}}(|\nabla_k\psi_k^{(i)}|^2+P_k^{(i)}(\psi_k^{(i)})^2)\rho_{k+1}^{(i)}\ d\mu_k\leq c\int_{\Sigma_k^{(i)}\cap V}(1+\Lambda^{-2}|\nabla_k\Psi_k^{(i)}|^2)(u_k^{(i)})^2\rho_{k+1}^{(i)}\ d\mu_k. \] Since we have convergence of the $L^2$ norms of $u_k^{(i)}$ and boundedness of the $L^2$
norms of $u_k^{(i)}|\nabla_k\Psi_k^{(i)}|$, we then conclude that
\[ \int_{\Sigma_k^{(i)}}(|\nabla_k\psi_k^{(i)}|^2+ P_k^{(i)}(\psi_k^{(i)})^2)\rho_{k+1}^{(i)}\ d\mu_k\leq c\varepsilon+c\Lambda^{-2}. \] If we let $V_1$ be a neighborhood of ${\cal S}_k$ such that $\Sigma_k\cap V_1\subset E_k(3\Lambda)$, then for $i$ sufficiently large we will have $\Sigma_k^{(i)}\cap V_1\subset E_k^{(i)}(2\Lambda)$ and hence
\[ \int_{\Sigma_k^{(i)}\cap V_1}(|\nabla_ku_k^{(i)}|^2+ P_k^{(i)}(u_k^{(i)})^2)\rho_{k+1}^{(i)}\ d\mu_k\leq c\varepsilon+c\Lambda^{-2}. \] Since this can be made arbitrarily small, we have shown (\ref{eqn:conv2}) and completed the proof of Theorem \ref{thm:cptness}. \end{pf} We will need the following lemma concerning minimal cones $C_m\subset \ifmmode{\Bbb R}\else{$\Bbb R$}\fi^{m+1}$. \begin{lem} \label{lem:negdeg} Assume that $C_m$ is a volume minimizing cone in $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^{m+1}$ and that $u_m$ is a positive minimizer for $Q_m$ which is homogeneous of degree $d$ on $C_m$. There is a positive constant $c$ depending only on $m$ so that $d\leq -c$. \end{lem}
\begin{pf} We write $u_m=r^dv(\xi)$ where $\xi\in S^m$. If we let $\Sigma=C_m\cap S^m$, then $v$ satisfies the eigenvalue equation $\Delta v+5/8|A_m|^2v=-\mu v$ where we must have $d(d+m-2)=\mu$. This implies that $d=1/2(2-m+\sqrt{(m-2)^2+4\mu})$ or
$d=1/2(2-m-\sqrt{(m-2)^2+4\mu})$. Since $v$ and $|\nabla v|$ are in $L^2(\Sigma)$ we must
have $\mu<0$ and this implies that $d<0$. To prove the negative upper bound on $d$ recall that the set of volume minimizing cones is a compact set, and we have proven the compactness theorem above for the $L^2$ norms, so if we had a sequence $(C_m^{(i)},u_m^{(i)})$ such that $d^{(i)}$ tends to $0$ we could extract a convergent subsequence of the $(\Sigma^{(i)},v^{(i)})$ which converges to $(\Sigma,v)$ where we could normalize $\int_{\Sigma^{(i)}}(v^{(i)})^2\ d\mu_{m-1}=1$ (hence $\int_\Sigma v^2\ d\mu_{m-1}=1$). Since we have smooth convergence on compact subsets of the complement of the singular set of $\Sigma$ we would then have $\Delta v+5/8|A_m|^2v=0$ and therefore we would have $\mu=0$ for the limiting cone, a contradiction. \end{pf} As the final topic of this section we construct the proper functions which were used in the proof of Theorem \ref{thm:cptness}. This result will also be used in the next section. \begin{prop} \label{prop:proper} Suppose we have a $\Lambda$-bounded minimal $k$-slicing in $\Omega$. There exists a positive function $\Psi_k$ which is locally Lipschitz on ${\cal R}_k$ and such that for any domain $U$ compactly contained in $\Omega$, the function $\Psi_k$ is proper on
${\cal R}_k\cap\bar{U}$. Moreover, the function $u_k|\nabla_k\Psi_k|$ is bounded in $L^2(\Sigma_k\cap U)$ for any domain $U$ compactly contained in $\Omega$. \end{prop} \begin{pf} We define $\Psi_k=\max\{1,\log u_k,\log u_{k+1},\ldots,\log u_{n-1}\}$ and we show that it has the properties claimed. First note that $\Psi_k$ is locally Lipschitz on ${\cal R}_k$ since it is the maximum of a finite number of smooth functions on ${\cal R}_k$. The bound
\[ \int_{\Sigma_k\cap U}(u_k|\nabla_k\Psi_k|)^2\rho_{k+1}\ d\mu_k\leq
\sum_{j=k}^{n-1} \int_{\Sigma_k\cap U}(u_k|\nabla_k\log u_j|)^2\rho_{k+1}\ d\mu_k \] together with Proposition \ref{prop:coercive} implies the claimed $L^2$ bound on $u_k|\nabla_k\Psi_k|$. (Note that we may replace $\varphi$ by $\varphi u_k$ in the first inequality of Proposition \ref{prop:coercive}, where $\varphi$ is a cutoff function which is equal to $1$ on $U$.)
It remains to prove that $\Psi_k$ is proper on ${\cal R}_k\cap\bar{U}$. Since $\bar{U}$ is compact it suffices to show that for any $x_0\in {\cal S}_k\cap\bar{U}$ we have \[ \lim_{x\to x_0}\Psi_k(x)=\infty. \] If we let $m\geq k$ be the largest integer such that $\Sigma_m$ is singular at $x_0$, then there is an open neighborhood $V$ of $x_0$ in which $\Sigma_m$ is a volume minimizing hypersurface in a smooth Riemannian manifold. We will show that $u_m$ tends to infinity at $x_0$ by first showing that this is true for any homogeneous approximation of $u_m$ at $x_0$. In order to construct homogeneous approximations we need to have the compactness theorem for this top dimensional case, but our proof of compactness used the result we are trying to prove, so we must find another argument for establishing (\ref{eqn:conv2}) since (\ref{eqn:conv1}) is a standard result for volume minimizing hypersurfaces in smooth manifolds. Our proof of the first part of (\ref{eqn:conv2}) did not require the function $\Psi_k$, so we need only deal with the second part. First recall that $dim({\cal S}_m)\leq m-7$, so it follows from a standard result that given any $\varepsilon,\delta>0$ and $a\in (0,7)$ we can find a Lipschitz function $\psi$ so that $\psi=1$ in a neighborhood of ${\cal S}_m$, $\psi(x)=0$ for points $x$ with $dist(x,{\cal S}_m)\geq \delta$, and
\[ \int_{\Sigma_m\cap V}|\nabla_m\psi|^a\ d\mu_m<\varepsilon^a. \] We show that
\[ \int_{\Sigma_m\cap V}|\nabla_m\psi|^2u_m^2\ d\mu_m\leq c\varepsilon^2. \] If we can establish this inequality, then we can complete the proof of compactness for $k=m$ in the set $V$ as in the proof of Theorem \ref{thm:cptness}. To establish the inequality, we observe that the equation satisfied by $u_m$ is of the form
\[ \Delta_m u_m+5/8|A_m|^2u_m+qu_m=0 \] where $q$ is a bounded function (since $\Sigma_m$ is volume minimizing in a smooth manifold). On the other hand the stability implies that
\[ \int_{\Sigma_m} |A_m|^2\varphi^2\ d\mu_m\leq \int_{\Sigma_m} (|\nabla\varphi|^2+c\varphi^2)\ d\mu_m. \] We may then replace $\varphi$ by $u_m^{8/5}\varphi$ and use the equation for $u_m$ to obtain
\[ \int_{\Sigma_m}|\nabla_m(u_m)^{8/5}|^2\varphi^2\ d\mu_m\leq c\int_{\Sigma_m}u_m^{16/5}(|\nabla_m\varphi|^2+\varphi^2)\ d\mu_m. \] We may then apply the Sobolev inequality for minimal submanifolds to conclude that $u_m$ satisfies \[ \int_{\Sigma_m\cap V}u_m^{\frac{16m}{5(m-2)}}\ d\mu_m\leq c. \] We then apply the H\"older inequality to obtain
\[ \int_{\Sigma_m\cap V}|\nabla_m\psi|^2u_m^2\ d\mu_m\leq \|\nabla_m\psi\|_{\frac{16m}{3m+10}}^2
\|u_m\|_{\frac{16m}{5(m-2)}}^2. \] Setting $a=\frac{16m}{3m+10}<7$ we have from above
\[ \int_{\Sigma_m\cap V}|\nabla_m\psi|^2u_m^2\ d\mu_m\leq c\varepsilon^2 \] as desired.
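The exponent arithmetic behind the choice of $a$ is the H\"older conjugacy condition: with $q=\frac{16m}{5(m-2)}$ we need $\frac{2}{a}+\frac{2}{q}=1$, that is,
\[ \frac{2}{a}=1-\frac{5(m-2)}{8m}=\frac{3m+10}{8m}, \qquad\mbox{so}\qquad a=\frac{16m}{3m+10}, \]
and $a<7$ since $16m<21m+70$ for every $m\geq 1$.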
Thus we have the compactness theorem for $(\Sigma_m,u_m)$ in $V$ and we can construct tangent cones to $\Sigma_m$ at $x_0$ and homogeneous approximations to $u_m$ at $x_0$. By Lemma \ref{lem:negdeg} any such homogeneous approximation $v_m$ has strictly negative degree $d\leq -c$ on its cone $C_m$ of definition. If we let ${\cal R}_m(C)$ denote the regular set of $C$, then it follows that for any $\mu>1$, we have \[ \inf_{{\cal R}_m(C)\cap B_{\alpha\sigma}(0)}v_m\geq \mu\inf_{{\cal R}_m(C)\cap B_\sigma(0)}v_m \] for a fixed constant $\alpha\in (0,1)$ depending on $\mu$, but independent of which cone and which homogeneous approximation we choose. Note that $\Delta_mu_m\leq cu_m$ and $\Delta_mv_m\leq 0$, so by the mean value inequality on volume minimizing hypersurfaces (see \cite{bg}) we have \[ u_m(x)\geq cr^{-m}\int_{\Sigma_m\cap B_r(x)}u_m\ d\mu_m,\ v_m(x)\geq cr^{-m}\int_{C_m\cap B_r(x)}v_m\ d\mu_m \] for any $r$ so that $B_r(x_0)$ is compactly contained in $V$. It follows that the essential infima of both $u_m$ and $v_m$ are positive on any compact subset. We now show that there exists $\alpha\in (0,1)$ such that \[ \inf_{{\cal R}_m\cap B_{\alpha\sigma}(x_0)}u_m\geq 2\inf_{{\cal R}_m\cap B_\sigma(x_0)}u_m \] for $\sigma$ sufficiently small. If we establish this, we have finished the proof that $u_m$ tends to infinity at $x_0$ and hence we will have the desired properness conclusion for $\Psi_k$. To establish this inequality we observe that if $(\Sigma_m^{(i)},u_m^{(i)})$ is a sequence converging to $(\Sigma_m,u_m)$ in the sense of (\ref{eqn:conv1}) and (\ref{eqn:conv2}) and $K$ is a compact set such that ${\cal R}_m\cap K\neq \phi$ we have \[ \inf_{{\cal R}_m\cap K}u_m\leq \liminf_{i\to\infty}\inf_{{\cal R}_m^{(i)}\cap K}u_m^{(i)}\leq \limsup_{i\to\infty}\inf_{{\cal R}_m^{(i)}\cap K}u_m^{(i)}\leq c\inf_{{\cal R}_m\cap K}u_m \] for a fixed constant $c$. 
The first and second inequalities are obvious, and to get the third we observe that for a small radius $r$ and any $x\in {\cal R}_m\cap K$ we have from above \[ u_m(x)\geq cr^{-m}\int_{\Sigma_m\cap B_r(x)}u_m\ d\mu_m, \] and hence for $i$ sufficiently large \[ u_m(x)\geq cr^{-m}\int_{\Sigma_m^{(i)}\cap B_r(x)}u_m^{(i)}\ d\mu_m\geq \varepsilon_0\inf_{\Sigma_m^{(i)}\cap B_r(x)} u_m^{(i)} \] for a positive constant $\varepsilon_0$. This establishes the third inequality. The proof can now be completed by using rescalings at $x_0$ which converge to $(C_m,v_m)$ for some cone and homogeneous function together with the corresponding result for the homogeneous case. \end{pf}
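To make the concluding iteration explicit: applying the doubling inequality above $j$ times gives
\[ \inf_{{\cal R}_m\cap B_{\alpha^j\sigma}(x_0)}u_m\geq 2^j\inf_{{\cal R}_m\cap B_\sigma(x_0)}u_m \]
for all $j\geq 1$ and $\sigma$ sufficiently small. Since the infimum on the right is positive, $u_m$ (and hence $\Psi_k$) tends to infinity at $x_0$, which is the properness assertion.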
\section{\bf Existence of minimal $k$-slicings} The main purpose of this section is to prove Theorem \ref{thm:exst}. We begin with the construction of the eigenfunction $u_k$ assuming that $\Sigma_k$ has already been constructed and is partially regular in the sense that $dim({\cal S}_k)\leq k-3$. We define the Hilbert spaces ${\cal H}_k$
and ${\cal H}_{k,0}$ as in the last section, namely, ${\cal H}_k$ (respectively ${\cal H}_{k,0}$) is the completion in $\|\cdot\|_{1,k}$ of the Lipschitz functions with compact support in ${\cal R}_k\cap\bar{\Omega}$ (respectively ${\cal R}_k\cap\Omega$). In order to handle boundary effects we also assume that there is a larger domain $\Omega_1$ which contains $\bar{\Omega}$ as a compact subset and that the $k$-slicing is defined and boundaryless in $\Omega_1$. Note that this is automatic if $\partial\Sigma_j=\emptyset$. Thus ${\cal H}_{k,0}$ consists of those functions in ${\cal H}_k$ with $0$ boundary data on $\Sigma_k\cap\partial\Omega$. The quadratic form $Q_k$ is nonnegative definite on the Lipschitz functions with compact support in ${\cal R}_k\cap\Omega$, and so the standard Schwarz inequality holds for any pair of such functions $\varphi,\psi$ \begin{equation} \label{eqn:schwartz} Q_k(\varphi,\psi)\leq \sqrt{Q_k(\varphi,\varphi)}\sqrt{Q_k(\psi,\psi)}. \end{equation} We now have the following result. \begin{thm} \label{thm:qcomplete} The function $Q_k(\varphi,\psi)$ is continuous with respect to the norm
$\|\cdot\|_{1,k}$ in both variables and therefore extends as a continuous nonnegative definite bilinear form on ${\cal H}_{k,0}$. The Schwarz inequality (\ref{eqn:schwartz}) holds for $\varphi,\psi\in {\cal H}_{k,0}$. The function $Q_k(\varphi,\varphi)$ is strongly continuous and weakly lower semicontinuous on ${\cal H}_{k,0}$. \end{thm} \begin{pf} From Proposition \ref{prop:coercive} we have for $\varphi_1,\varphi_2$ Lipschitz functions with compact support in ${\cal R}_k\cap\Omega$
\[ Q_k(\varphi_1-\varphi_2,\varphi_1-\varphi_2)\leq c\|\varphi_1-\varphi_2\|_{1,k}^2, \] so it follows from (\ref{eqn:schwartz}) that
\[ |Q_k(\varphi_1,\psi)-Q_k(\varphi_2,\psi)|\leq \sqrt{Q_k(\varphi_1-\varphi_2,\varphi_1-\varphi_2)}\sqrt{Q_k(\psi,\psi)}. \] Combining these we see that $Q_k$ is continuous in the first slot and, by symmetry, in both slots. Therefore $Q_k$ extends as a continuous nonnegative definite bilinear form on ${\cal H}_{k,0}$ and the Schwarz inequality holds on ${\cal H}_{k,0}$ by continuity.
To complete the proof we must prove that $Q_k(\varphi,\varphi)$ is weakly lower semicontinuous on ${\cal H}_{k,0}$. Note that the squared norm $\|\varphi\|_{0,k}^2+Q_k(\varphi,\varphi)$ is equivalent to $\|\varphi\|_{1,k}^2$ by Proposition \ref{prop:coercive}. Therefore these have the same bounded linear functionals and hence determine the same weak topology on ${\cal H}_{k,0}$. Assume we have a sequence $\varphi_i\in {\cal H}_{k,0}$ which converges weakly to $\varphi\in{\cal H}_{k,0}$. We then have for any $\psi\in{\cal H}_{k,0}$ \[ Q_k(\varphi,\psi)=\lim_{i\to\infty}Q_k(\varphi_i,\psi). \] This implies that for $i$ sufficiently large \[ Q_k(\varphi,\varphi)=Q_k(\varphi-\varphi_i,\varphi)+Q_k(\varphi_i,\varphi)\leq \varepsilon+ \sqrt{Q_k(\varphi_i,\varphi_i)}\sqrt{Q_k(\varphi,\varphi)} \] for any chosen $\varepsilon>0$. It follows that \[ Q_k(\varphi,\varphi)\leq \sqrt{Q_k(\varphi,\varphi)}\liminf_{i\to\infty}\sqrt{Q_k(\varphi_i,\varphi_i)} \] which implies the desired weak lower semicontinuity. \end{pf} In order to construct a lowest eigenfunction $u_k$ we will need the following Rellich-type compactness theorem. \begin{thm} \label{thm:rellich} The inclusion of ${\cal H}_{k,0}$ into $L^2(\Sigma_k)$ is compact in the sense that any bounded sequence in ${\cal H}_{k,0}$ has a convergent subsequence in $L^2(\Sigma_k)$. \end{thm} \begin{pf} This statement follows from Proposition \ref{prop:l2con} and the standard Rellich theorem. Assume that we have a bounded sequence $\varphi_i\in {\cal H}_{k,0}$; that is,
$\|\varphi_i\|_{1,k}^2\leq c$. We may extend the $\varphi_i$ to $\Omega_1$ by taking $\varphi_i=0$ in $\Omega_1\sim\Omega$, and by the standard Rellich compactness theorem we may assume by extracting a subsequence that the $\varphi_i$ converge in $L^2$ norm on compact subsets of $\bar{\Omega}\sim {\cal S}_k$ and weakly in ${\cal H}_{k,0}$ to a limit $\varphi\in {\cal H}_{k,0}$. We show that $\varphi_i$ converges to $\varphi$ in $L^2(\Sigma_k)$. Given any $\varepsilon_1>0$, we can choose $\varepsilon>0,\ \delta>0$ in Proposition \ref{prop:l2con} so that for each $i$ we have \[ (\int_{\Sigma_k\cap V}\varphi_i^2\rho_{k+1}\ d\mu_k)^{1/2}\leq \varepsilon_1/3 \] where $V$ is an open neighborhood of ${\cal S}_k\cap\bar{\Omega}$. The Fatou theorem then implies \[ (\int_{\Sigma_k\cap V}\varphi^2\rho_{k+1}\ d\mu_k)^{1/2}\leq \varepsilon_1/3. \] Since $K=(\Sigma_k\sim V)\cap\bar{\Omega}$ is a compact subset of $\bar{\Omega}\sim {\cal S}_k$, we have for $i$ sufficiently large \[ (\int_K(\varphi_i-\varphi)^2\rho_{k+1}\ d\mu_k)^{1/2}\leq \varepsilon_1/3. \] Combining these bounds we find
\[ \|\varphi_i-\varphi\|_0\leq (\int_K(\varphi_i-\varphi)^2\rho_{k+1}\ d\mu_k)^{1/2}+ (\int_{\Sigma_k\cap V}(\varphi_i-\varphi)^2\rho_{k+1}\ d\mu_k)^{1/2}\leq \varepsilon_1 \] for $i$ sufficiently large. This completes the proof. \end{pf} We are now ready to prove the existence, positivity, and uniqueness of $u_k$ on $\Sigma_k\cap\Omega$. \begin{thm} \label{thm:spectrum} The quadratic form $Q_k$ on ${\cal H}_{k,0}$ has discrete spectrum with respect to the $L^2(\Sigma_k)$ inner product and may be diagonalized in an orthonormal basis for $L^2(\Sigma_k)$. The eigenfunctions are smooth on ${\cal R}_k\cap\Omega$, and if we choose a first eigenfunction $u_k$, then $u_k$ is nonzero on ${\cal R}_k\cap\Omega$ and is therefore either strictly positive or strictly negative since ${\cal R}_k\cap\Omega$ is connected. Furthermore any first eigenfunction is a multiple of $u_k$ which we may take to be positive. \end{thm} \begin{pf} This follows from the standard minmax variational procedure for defining eigenvalues and constructing eigenfunctions. For example, to construct the lowest eigenvalue and eigenfunction we let
\[ \lambda_k=\inf\{Q_k(\varphi,\varphi):\ \varphi\in {\cal H}_{k,0},\ \|\varphi\|_{0,k}=1\}. \]
By Theorem \ref{thm:rellich} and Theorem \ref{thm:qcomplete} we may achieve this infimum with a function $u_k\in{\cal H}_{k,0}$ with $\|u_k\|_{0,k}=1$. The Euler-Lagrange equation for $u_k$ is then the eigenfunction equation with eigenvalue $\lambda_k$. The higher eigenvalues and eigenfunctions can be constructed by imposing orthogonality constraints with respect to the $L^2(\Sigma_k)$ inner product. We omit the standard details. The smoothness on ${\cal R}_k\cap\Omega$ follows from elliptic regularity theory.
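To record the variational step: for any $\psi\in{\cal H}_{k,0}$, differentiating the Rayleigh quotient at the minimizer gives
\[ 0=\frac{d}{dt}\bigg|_{t=0}\frac{Q_k(u_k+t\psi,u_k+t\psi)}{\|u_k+t\psi\|_{0,k}^2} =2\left(Q_k(u_k,\psi)-\lambda_k\langle u_k,\psi\rangle_{0,k}\right), \]
using $Q_k(u_k,u_k)=\lambda_k$ and $\|u_k\|_{0,k}=1$, so $u_k$ is a weak solution of the eigenfunction equation with eigenvalue $\lambda_k$.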
The fact that a lowest eigenfunction $u$ is nonzero follows from the observation that if $u\in {\cal H}_{k,0}$ then $|u|\in {\cal H}_{k,0}$ and $Q_k(u,u)=Q_k(|u|,|u|)$, a property which can be easily checked on the dense subspace of Lipschitz functions with compact support in ${\cal R}_k\cap\Omega$ and then follows by continuity. The multiplicity one property of the lowest eigenspace follows from this property in the usual way. We omit the details. \end{pf}
We now come to the existence results. We first discuss Theorem \ref{thm:exst} and we then generalize the existence proof to a more precise form. Suppose $X$ is a closed $k$-dimensional oriented manifold with $k<n$. We assume that $\Sigma_n$ is a closed oriented $n$-manifold and that there is a smooth map $F:\Sigma_n\to X\times T^{n-k}$ of degree $s\neq 0$. We let $\Omega$ denote a (unit volume) volume form of $X$ and let $\Theta=F^*\Omega$ so that $\Theta$ is a closed $k$-form on $\Sigma_n$. We let $t^p$ for $p=k+1,\ldots, n$ denote the coordinates on the circles and we assume they are periodic with period $1$. For $p=k+1,\ldots, n$ we let $\omega^p$ be the closed $1$-form $\omega^p=F^*(dt^p)$. The assumption on the degree of $F$ implies that $\int_{\Sigma_n}\Theta\wedge\omega^{k+1}\wedge\ldots\wedge \omega^n=s$.
We will need the following elementary lemma. \begin{lem} \label{lem:exact} Suppose $N^m$ is a closed oriented Riemannian manifold and let $\Omega$ be its volume form. Given any open set $U$ of $N$ which is not dense in $N$, the form $\Omega$ is exact on $U$. Moreover, given an open set $V$ compactly contained in $U$, we can find a closed $m$-form $\Omega_1$ which agrees with $\Omega$ on $N\setminus U$ and such that $\Omega_1=0$ in $V$. \end{lem} \begin{pf} Let $f$ be a smooth function which is equal to $1$ in $U$ and such that $\int_Nf\,\Omega=0$. Let $u$ be a solution of $\Delta u=f$ and let $\theta$ be the $(m-1)$-form $\theta=*du$. We then have $d\theta=d*du=(\Delta u)\Omega$, so we have $d\theta=\Omega$ on $U$.
To prove the last statement, we let $\zeta$ be a smooth cutoff function which is equal to $1$ in $V$ and has compact support in $U$. We then define $\Omega_1=\Omega-d(\zeta*du)$. We then have $\Omega_1=0$ in $V$ and $\Omega_1$ differs from $\Omega$ by an exact form. \end{pf}
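The required properties of $\Omega_1$ follow by direct computation: on $V$ we have $\zeta\equiv 1$, so
\[ d(\zeta *du)=d*du=(\Delta u)\Omega=f\Omega=\Omega \]
since $f\equiv 1$ on $U\supseteq V$, and hence $\Omega_1=\Omega-\Omega=0$ there; outside the support of $\zeta$, and in particular on $N\setminus U$, the correction term vanishes and $\Omega_1=\Omega$.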
We now restate the existence theorem. \begin{thm} For a manifold $M=\Sigma_n$ as described above, there is a $\Lambda$-bounded, partially regular, minimal $k$-slicing. Moreover, if $k\leq j\leq n-1$ and $\Sigma_j$ is regular, then $\int_{\Sigma_j}\Theta\wedge\omega^{k+1}\wedge\ldots\wedge\omega^j=s$. \end{thm} \begin{pf} We begin with the $1$-form $\omega^n$ and we integrate to get a map $u_n:\Sigma_n\to S^1$ so that $\omega^n=du_n$. Let $t$ be a regular value of $u_n$ and consider the hypersurface $S_n=u_n^{-1}(t)$. Because the map $F$ has degree $s$ and we have normalized our forms in $X\times T^{n-k}$ to have integral $1$, we see that $\int_{S_n}\Theta\wedge\omega^{k+1}\wedge\ldots \wedge\omega^{n-1}=s$. Let $\Sigma_{n-1}$ be a least volume cycle in $\Sigma_n$ with the property that $\int_{\Sigma_{n-1}}\Theta\wedge\omega^{k+1}\wedge\ldots\wedge\omega^{n-1}=s$. The existence follows from standard results of geometric measure theory.
Now suppose for $j\geq k$ we have constructed a partially regular minimal $(j+1)$-slicing with the property that there is a form $\Theta_{j+1}$ of compact support which is cohomologous to $\Theta\wedge\omega^{k+1}\wedge\ldots\wedge\omega^{j+1}$ such that $\int_{\Sigma_{j+1}}\Theta_{j+1}=s$. Since the slicing is partially regular, the Hausdorff dimension of ${\cal S}_{j+1}$ is at most $j-2$, so it follows that the image $F_j({\cal S}_{j+1})$ under the projection map $F_j:\Sigma_n\to X\times T^{j-k}$ is a compact set of Hausdorff dimension at most $j-2$. It follows from Lemma \ref{lem:exact} that the form $\Omega\wedge dt^{k+1}\wedge\ldots\wedge dt^j$ is exact in a neighborhood $U$ of $F_j({\cal S}_{j+1})$; given a neighborhood $V$ of $F_j({\cal S}_{j+1})$ which is compactly contained in $U$, we can find a form $\Omega_j$ which is cohomologous to $\Omega\wedge dt^{k+1}\wedge\ldots\wedge dt^j$ and vanishes in $V$. Pulling back, we see that $\Theta_j=F^*\Omega_j$ vanishes in a neighborhood of ${\cal S}_{j+1}$ and is cohomologous to $\Theta\wedge\omega^{k+1}\wedge\ldots\wedge\omega^j$. We let $u_{j+1}$ be the map gotten by integrating $\omega^{j+1}$ and consider its restriction to $\Sigma_{j+1}$. Since $u_{j+1}$ is in $L^2$ with respect to the weight $\rho_{j+2}$, we see that $\rho_{j+1}=u_{j+1}\rho_{j+2}$ is integrable on $\Sigma_{j+1}$. It then follows from the coarea formula that we can find a regular value $t$ of $u_{j+1}$ in ${\cal R}_{j+1}$ so that the hypersurface $S_j\subset \Sigma_{j+1}$ given by $S_j=u_{j+1}^{-1}(t)$ has finite $\rho_{j+1}$-weighted volume and satisfies $\int_{S_j}\Theta_j=s$. We can then solve the minimization problem for the $\rho_{j+1}$-weighted volume among integer multiplicity rectifiable currents $T$ with support in $\Sigma_{j+1}$, with no boundary in ${\cal R}_{j+1}$, and with $T(\Theta_j)=s$. A minimizer for this problem gives us $\Sigma_j$ and completes the inductive step for the existence.
\end{pf} \begin{rem} The existence proof above does not specify the homology class of the minimizers even if the minimizers are smooth since we are minimizing among cycles for which the integral of $\Theta_j$ is fixed. In general there may be homology classes for which the integral of $\Theta_j$ vanishes. We have chosen the class to do the minimization in order to avoid a precise discussion of the homology of the singular spaces in which we are working. In the following we give a more precise existence theorem which specifies the homology classes and allows them to be general integral homology classes, possibly torsion classes. \end{rem}
We now formulate and prove a more general existence theorem for minimal $k$-slicings. In the theorem we let $[\Sigma_n]$ denote the fundamental homology class in $H_n(\Sigma_n,\mathbb Z)$ and, for a cohomology class $\alpha\in H^p(\Sigma_n,\mathbb Z)$, we let $\alpha\cap [\Sigma_n]$ denote its Poincar\'e dual in $H_{n-p}(\Sigma_n,\mathbb Z)$. \begin{thm} \label{thm:exst2} Let $\Sigma_n$ be a smooth oriented manifold of dimension $n$ and let $k$ be an integer with $1\leq k\leq n-1$. Let $\alpha^1,\ldots,\alpha^{n-k}$ be cohomology classes in $H^1(\Sigma_n, \mathbb Z)$, and suppose that $\alpha^{n-k}\cap\alpha^{n-k-1}\cap\ldots\cap\alpha^1\cap[\Sigma_n] \neq 0$ in $H_k(\Sigma_n,\mathbb Z)$. There exists a partially regular minimal $k$-slicing with $\Sigma_j$ representing the homology class $\alpha^{n-j}\cap\ldots\cap\alpha^1\cap [\Sigma_n]$. \end{thm}
\begin{pf} Assume that we are given a partially regular $\Lambda$-bounded minimal $(k+1)$-slicing which represents $\alpha^1,\ldots,\alpha^{n-k-1}$. We thus have the weight function $\rho_{k+1}$ defined on $\Sigma_{k+1}$ which we use to produce $\Sigma_k$. By partial regularity, the singular set ${\cal S}_{k+1}$ of $\Sigma_{k+1}$ has Hausdorff dimension at most $k-2$.
We consider the class of integer multiplicity rectifiable currents which are relative cycles in $H_k(\Sigma_n,{\cal S}_{k+1},\mathbb Z)$; that is, for any $(k-1)$-form $\theta$ of compact support in $\Sigma_{k+1}\setminus{\cal S}_{k+1}$ we have $T(d\theta)=0$. Because the set ${\cal S}_{k+1}$ has zero $(k-1)$-dimensional Hausdorff measure we have $H_k(\Sigma_n,{\mathbb Z})= H_k(\Sigma_n,{\cal S}_{k+1},\mathbb Z)$. This follows because a current $T$ which is a relative cycle in $\Sigma_n\setminus{\cal S}_{k+1}$ is also a cycle in $\Sigma_n$: its boundary $\partial T$ is unchanged by adding a set of zero $(k-1)$-dimensional measure, and hence vanishes.
We use $\rho_{k+1}$ weighted volume to set up a minimization problem. We consider the class of relative cycles $T$ with support contained in $\Sigma_{k+1}$ which have finite weighted mass; that is, $T=(S_k,\Theta,\xi)$ where $S_k$ is a countably $k$-rectifiable set, $\Theta$ a $\mu_k$-measurable integer valued function on $S_k$, and $\xi$ a $\mu_k$-measurable map from $S_k$ to $\wedge^k\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^N$ such that $\xi(x)$ is a unit simple vector for $\mu_k$ a.e. $x\in S_k$. Such a $k$-current $T_k$ is $\rho_{k+1}$-finite if
\[ Vol_{\rho_{k+1}}(T_k)\equiv\int_{S_k}\rho_{k+1}|\Theta|\ d\mu_k<\infty. \]
Since we have already constructed $\Sigma_{k+1}$ so that it is $\Lambda$-bounded we have \[ \int_{\Sigma_{k+1}}\rho_{k+1}\ d\mu_{k+1}\leq \Lambda. \] Now we can find a smooth closed hypersurface $H_k$ which is Poincar\'e dual to $\alpha^{n-k}$, and we may perturb it and use the coarea formula in a standard way to arrange that $\bar{\Sigma}_k\equiv\Sigma_{k+1}\cap H_k$ is a smooth embedded submanifold away from ${\cal S}_{k+1}$ and \[ \int_{\bar{\Sigma}_k}\rho_{k+1}\ d\mu_k\leq c \] for some constant $c$. In particular the associated current $\bar{T}_k\equiv(\bar{\Sigma}_k,1,\bar{\xi})$ (where $\bar{\xi}$ is the oriented unit tangent plane of $\bar{\Sigma}_k$) is $\rho_{k+1}$-finite and is a competitor in our variational problem.
The standard theory of integral currents now allows us to construct a minimizer for our variational problem which gives us the next slice $\Sigma_k$ which could be disconnected and with integer multiplicity. Thus $\Sigma_k$ represents the homology class $\alpha^{n-k}\cap\ldots\cap\alpha^1\cap [\Sigma_n]$. This completes the proof of Theorem \ref{thm:exst2}. \end{pf}
\section{\bf Application to scalar curvature problems} In this section we prove two theorems for manifolds with positive scalar curvature. The first of these is for compact manifolds and the second is the Positive Mass Theorem for asymptotically flat manifolds. Our first theorem, which we will need to prove the Positive Mass Theorem, is the following. \begin{thm} \label{thm:psc0} Let $M_1$ be any closed oriented $n$-manifold. The manifold $M=M_1\#T^n$ does not have a metric of positive scalar curvature. \end{thm} \begin{pf} Such a manifold $M$ admits a map $F:M\to T^n$ of degree $1$, and so by Theorem \ref{thm:exst} there exists a closed minimal $1$-slicing of $M$, in contradiction to Theorem \ref{thm:12slicing}.
\end{pf} We also prove the following more general theorem. \begin{thm} \label{thm:psc1} Assume that $M$ is a compact oriented $n$-manifold with a metric of positive scalar curvature. If $\alpha_1,\ldots,\alpha_{n-2}$ are classes in $H^1(M,\ifmmode{\Bbb Z}\else{$\Bbb Z$}\fi)$ with the property that the class $\sigma_2$ given by $\sigma_2=\alpha_{n-2}\cap\alpha_{n-3}\cap\ldots\cap\alpha_1\cap[M]\in H_2(M,\ifmmode{\Bbb Z}\else{$\Bbb Z$}\fi)$ is nonzero, then the class $\sigma_2$ can be represented by a sum of smooth two-spheres. If $\alpha_{n-1}$ is any class in $H^1(M,\ifmmode{\Bbb Z}\else{$\Bbb Z$}\fi)$, then we must have $\alpha_{n-1}\cap\sigma_2=0$. In particular, if $M$ has classes $\alpha_1,\ldots,\alpha_{n-1}$ with $\alpha_{n-1}\cap\ldots\cap\alpha_1\cap[M]\neq 0$, then $M$ cannot carry a metric of positive scalar curvature. \end{thm} \begin{pf} By the existence and regularity results of Sections 3 and 4, there is a minimal $2$-slicing such that $\Sigma_2\in\sigma_2$ is regular and satisfies the eigenvalue bound of Theorem \ref{thm:eval}. Choosing $\varphi=1$ on any given component of $\Sigma_2$ and applying the Gauss-Bonnet theorem we see that each component must be topologically $S^2$.
In particular it follows that for any other $\alpha_{n-1}\in H^1(M,\ifmmode{\Bbb Z}\else{$\Bbb Z$}\fi)$ we have that $\alpha_{n-1}\cap\sigma_2$ is a class in $H_1(\Sigma_2,\ifmmode{\Bbb Z}\else{$\Bbb Z$}\fi)$, and therefore is zero. \end{pf}
We now prove a Riemannian version of the positive mass theorem. Assume that $M$ is a complete manifold with the property that there is a compact subset $K\subset M$ such that $M\setminus K$ is a union of a finite number of connected components, each of which is an asymptotically flat end. This means that each of the components is diffeomorphic to the exterior of a compact set in $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^n$ and admits asymptotically flat coordinates $x^1,\ldots,x^n$ in which the metric $g_{ij}$ satisfies \begin{equation} \label{eqn:af}
g_{ij}=\delta_{ij}+O(|x|^{-p}),\ |x||\partial g_{ij}|+|x|^2|\partial^2g_{ij}|=O(|x|^{-p}),\ |R|=O(|x|^{-q}) \end{equation} where $p>(n-2)/2$ and $q>n$. Under these assumptions the ADM mass is well defined by the formula (see \cite{sc} for the $n$ dimensional case) \[ m=\frac{1}{4(n-1)\omega_{n-1}}\lim_{\sigma\to\infty}\int_{S_\sigma}\sum_{i,j}(g_{ij,i}-g_{ii,j})\nu_j\ d\xi(\sigma) \] where $S_\sigma$ is the euclidean sphere in the $x$ coordinates, $\omega_{n-1}=Vol(S^{n-1}(1))$, and the unit normal and volume integral are with respect to the euclidean metric. We may now state the Positive Mass Theorem. \begin{thm} \label{thm:psc2} Assume that $M$ is an asymptotically flat manifold with $R\geq 0$. For each end the ADM mass is nonnegative. Furthermore, if any of the masses is zero, then $M$ is isometric to $\ifmmode{\Bbb R}\else{$\Bbb R$}\fi^n$. \end{thm} \begin{pf} The theorem can be reduced to the case when there is a single end by capping off the other ends, keeping the scalar curvature nonnegative. We will show only that $m\geq 0$; the equality statement can be derived from this (see \cite{sy2}). We will reduce the proof to the compact case using results of \cite{sy3} and an observation of J. Lohkamp. \begin{prop} If the mass of $M$ is negative, there is a metric of nonnegative scalar curvature on $M$ which is euclidean outside a compact set. This produces a metric of positive scalar curvature on a manifold $\hat{M}$ which is gotten by replacing a ball in $T^n$ by the interior of a large ball in $M$. \end{prop} \begin{pf} Results of \cite{sy3} and \cite{sc} imply that if $m<0$ we can construct a new metric on $M$ with nonnegative scalar curvature, negative mass, and which is conformally flat and scalar flat near infinity. In particular, we have $g=u^{4/(n-2)}\delta$ near infinity where $u$ is a euclidean harmonic function which is asymptotic to $1$. Thus $u$ has the expansion
\[ u(x)=1+\frac{m}{|x|^{n-2}}+O(|x|^{1-n}) \] where $m$ is the mass. Now we use an observation of Lohkamp \cite{lohkamp}. Since $m<0$, we can choose
$0<\varepsilon_2<\varepsilon_1$ and $\sigma$ sufficiently large so that we have $u(x)<1-\varepsilon_1$ for $|x|=\sigma$ and
$u(x)>1-\varepsilon_2$ for $|x|\geq 2\sigma$. If we define $v(x)=u(x)$ for $|x|\leq \sigma$ and $v(x)=\min\{1-\varepsilon_2,u(x)\}$
for $|x|>\sigma$, then we see that $v(x)$ is weakly superharmonic for $|x|\geq \sigma$, and so may be
approximated by a smooth superharmonic function with $v(x)=u(x)$ for $|x|\leq \sigma$ and $v(x)=1-\varepsilon_2$ for $|x|$ sufficiently large. The metric which agrees with the original inside $S_\sigma$ and is given by $v^{4/(n-2)}\delta$ outside then has nonnegative scalar curvature and is euclidean near infinity.
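The key point in the truncation is that the minimum of a harmonic function and a constant is weakly superharmonic. This can be illustrated with a discrete Laplacian for $u(x)=1+m/|x|$ in ${\mathbb R}^3$, with the illustrative choices $m=-1$, cap $1-\varepsilon_2=0.9$, and grid size $h=0.01$ (none of these values are from the paper):

```python
# Discrete illustration: u(x) = 1 + m/|x| is harmonic away from the
# origin for m < 0, and v = min(cap, u) is weakly superharmonic, i.e.
# its discrete Laplacian is <= 0 up to O(h^2) discretization error.
def u(x, y, z, m=-1.0):
    return 1.0 + m / (x * x + y * y + z * z) ** 0.5

def v(x, y, z, cap=0.9):
    return min(cap, u(x, y, z))

h = 0.01
laps = []
# Sample points on both sides of the matching sphere |x| = 10,
# where u crosses the cap value 0.9.
for (x, y, z) in [(5.0, 1.0, 0.5), (9.99, 0.0, 0.0),
                  (10.01, 0.0, 0.0), (15.0, 2.0, 1.0)]:
    lap = (v(x + h, y, z) + v(x - h, y, z) + v(x, y + h, z) + v(x, y - h, z)
           + v(x, y, z + h) + v(x, y, z - h) - 6.0 * v(x, y, z)) / h ** 2
    laps.append(lap)

# Nonpositive up to a small tolerance for discretization error.
assert max(laps) <= 1e-4
```

Near the kink the discrete Laplacian is strictly negative, reflecting the distributional superharmonicity that the smoothing step then regularizes.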
By extending this metric periodically we then produce a metric on $\hat{M}$ with nonnegative scalar curvature which is not Ricci flat. Therefore the metric can be perturbed to have positive scalar curvature. \end{pf} Using this result the theorem follows from Theorem \ref{thm:psc1} since the standard $1$-forms on $T^n$ can be pulled back to $\hat{M}$ to produce the $\alpha_1,\ldots,\alpha_{n-1}$ of that theorem. This completes the proof of Theorem \ref{thm:psc2}. \end{pf}
\end{document}
\begin{document}
\maketitle \begin{abstract}
Fake projective planes are smooth complex surfaces of general type with Betti numbers equal to those of the usual projective plane. Recent explicit constructions of fake projective planes embed them via their bicanonical embedding in $\mathbb P^9$. In this paper, we study Keum's fake projective plane $(a=7, p=2, \{7\}, D_3 2_7)$ and use the equations of \cite{Borisov} to construct an embedding of the fake projective plane in $\mathbb P^5$. We also simplify the 84 cubic equations defining the fake projective plane in $\mathbb P^9$. \end{abstract} \section{Introduction}
The Enriques–Kodaira classification splits compact complex surfaces $S$ into 10 classes based largely on their Kodaira dimension $k(S)$. While surfaces with Kodaira dimension $<2$ are better understood, those of general type with maximum Kodaira dimension $k(S)=2$ still need a detailed classification.
To each minimal model of a surface $S$ one associates a triple of numerical invariants $(p_g,q,K_S^2)$, where $p_g=h^0(S,K_S)$ is the geometric genus, $q=h^1(S, {\mathcal O}_S)$ is the irregularity, and $K_S^2$ is the self-intersection number of the canonical class $K_S$. These determine all the other classical invariants such as the topological Euler characteristic $e_{top}(S)=12\chi({\mathcal O}_S)-K^2_S$ and the plurigenera $P_m(S)=h^0(S,mK_S)$ \cite{Hartshorne}. It turns out that producing surfaces with low $p_g$ and $q$ is quite difficult and a complete classification appears far away \cite{Borisov and Fatighenti}. In the case of $p_g=q=0$, one has the Bogomolov-Miyaoka-Yau inequality $K^2_S\leq 9$. The focus of this paper is the extreme case of surfaces with $p_g=q=0$ and $K^2_S=9$. These are the \textit{fake projective planes} (often called FPPs for short) which by definition are complex projective surfaces of general type with Hodge diamond
$$
\begin{array}{ccccc}
&&1&&\\
&0 &&0&\\
0&&1&&0\\
&0&&0&\\
&&1&&
\end{array}
$$ which is the same as that of ${\mathbb C \mathbb P}^2$. The existence of a fake projective plane was first proved by Mumford \cite{Mumford} by expressing the surface as a quotient of a 2-adic analog of the complex two-dimensional ball \[
\mathcal B ^2 = \{(z_1, z_2) \in {\mathbb C}^2: |z_1|^2 + |z_2|^2 < 1\} \] by a finitely generated group.
The general theory ensures that each fake projective plane is algebraic. Since $K_S^2=9$ we have $c_1^2=9$, and Noether's formula then gives $c_2=3$, so all FPPs satisfy $c_1^2=3c_2=9$, where $c_1, c_2$ are the Chern numbers. This implies that each FPP is a quotient of $\mathcal B ^2$ by an infinite discrete group \cite{Yau}. These ball quotients are determined by their fundamental group up to holomorphic or anti-holomorphic isomorphism \cite{Mostow} and come in complex conjugate pairs \cite{Kharlamov and Kulikov}. Each of the groups is arithmetic \cite{Klinger}, and they fall into a finite list of classes \cite{Prasad and Yeung}.
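The numerical bookkeeping above can be cross-checked directly: with $p_g=q=0$ and $K_S^2=9$, Noether's formula forces $c_2=3$, and the standard plurigenus formula for minimal surfaces of general type gives $P_2=10$, matching the ten bicanonical sections $P_0,\ldots,P_9$ used later in the paper. A minimal sketch:

```python
# Invariant bookkeeping for a fake projective plane: p_g = q = 0, K^2 = 9.
# Noether's formula chi(O_S) = (c_1^2 + c_2)/12 forces c_2 = 3, and the
# plurigenus formula P_m = chi + m(m-1)/2 * K^2 (minimal, general type)
# gives P_2 = 10, matching the ten bicanonical sections P_0, ..., P_9.
p_g, q, K2 = 0, 0, 9
chi = 1 - q + p_g                     # holomorphic Euler characteristic
c1_sq = K2                            # c_1^2 = K^2
c2 = 12 * chi - c1_sq                 # Noether's formula

assert c2 == 3
assert c1_sq == 3 * c2                # equality in Bogomolov-Miyaoka-Yau
P2 = chi + (2 * (2 - 1) // 2) * K2    # second plurigenus
assert P2 == 10                       # the bicanonical embedding lands in P^9
```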
Based on the work of Prasad and Yeung \cite{Prasad and Yeung}, a complete classification was obtained by Cartwright and Steger \cite{Cartwright and Steger}. All fake projective planes are quotients of $\mathcal B^2$ by explicit co-compact torsion-free arithmetic subgroups of $\text{PU} (2,1)$. The classification was accomplished with significant use of computer calculations. There are 50 conjugate pairs of fake projective planes split among 28 classes. Each FPP is a ball quotient $\mathcal B^2 / \Gamma$ where $\Gamma$ is the fundamental group, and where the automorphism group is $ N(\Gamma)/\Gamma $ with $N(\Gamma)$ the normalizer of $\Gamma$ in $\text{PU} (2,1)$. The torsion of the Picard group of ${\mathbb P}^2_{fake}$ is equal to the abelianization of $\Gamma$. Various cover relations between related surfaces are also known \cite{Cartwright and Steger}.
\subsection{The Geometry of Keum's Fake Projective Plane} In this paper, we will focus on the fake projective plane $(a=7, p=2, \{7\}, D_3 2_7)$ in the Cartwright-Steger classification. First constructed in \cite{Keum 6}, it is known as Keum's fake projective plane and we will denote it by ${\mathbb P}^2_{Keum}$. Its automorphism group has maximal order among all FPPs, being the semi-direct product of a normal cyclic subgroup $C_7$ of order 7 and a non-normal cyclic subgroup $C_3$ of order 3. By the Cartwright-Steger classification, there are three other fake projective planes in its class, including Mumford's original fake projective plane.
For the rest of this paper, we will let $K$ denote the canonical class of Keum's fake projective plane. The minimal resolution $Y$ of the quotient ${\mathbb P}^2_{Keum}/C_7$ by the subgroup $C_7$ of its automorphism group has interesting geometry which we describe briefly.
Recall that a singular point of type $\frac{1}{m}(1,a)$ is a cyclic quotient singularity given locally analytically by the action $(x,y) \mapsto(\zeta x, \zeta^a y)$ on ${\mathbb C}^2$ for $\zeta$ a primitive $m$th root of unity. The surface $Y$ has three singular points of type $\frac{1}{7}(1,3)$ permuted by the residual $C_3$ automorphism group of ${\mathbb P}^2_{Keum}$. It is also a Dolgachev surface fibered over ${\mathbb P}^1$, with generic fibers of genus 1, two multiple fibers, three nodal fibers, and one fiber of type $I_9$. The two multiple fibers are $2F_3$ and $3F_2$, which have multiplicity 2 and 3 respectively. The reductions $F_3$ and $F_2$ are linearly equivalent to $3K_Y$ and $2K_Y$. We refer to \cite{Keum 6, Borisov} for more details.
\subsection{Explicit Construction of \texorpdfstring{${\mathbb P}^2_{Keum}$}{P2Keum}} In \cite{Borisov}, Keum's fake projective plane was explicitly constructed via its bicanonical embedding as the vanishing set of 84 cubic equations in ${\mathbb P}^9$. One first constructs a birational model $Y_0$ of $Y$ as a system of quadrics in 8 variables defined over ${\mathbb Q}(\sqrt{-7})$. Included is a construction of the double and triple fibers and the $C_3$ action on $Y_0$. A degree 7 extension of the field of rational functions of $Y_0$ gives the sevenfold cover of $Y_0$, which is exactly ${\mathbb P}^2_{Keum}$. Ten sections of ${\mathcal O}(2K)$ are constructed from this description and the embedding in ${\mathbb P}^9$ is finally given by 84 cubic equations in the 10 variables $P_0, \ldots, P_9$.
A perennial question is how to simplify the equations of a fake projective plane, which can have polynomials with coefficients hundreds to thousands of decimal digits long. In this paper, we give a simplified version of the equations of Keum's fake projective plane in \cite{Borisov}. We use the equations to find an embedding of ${\mathbb P}^2_{Keum}$ as a degree 25 surface in ${\mathbb P}^5$. The embedding is given by sections of ${\mathcal O}(5H)$, where $H$ is a divisor such that $3H$ is linearly equivalent to $K$. Finally, we exhibit the surface as a system of 56 sextics in ${\mathbb P}^5$ with coefficients in the field ${\mathbb Q}(\sqrt{-7})$.
The paper is organized as follows. In Section \ref{section2} we outline the steps to simplify the 84 cubics defining ${\mathbb P}^2_{Keum}$ in ${\mathbb P}^9$. We follow the strategy described in \cite{Borisov} by explicitly calculating the nonreduced linear cuts on ${\mathbb P}^2_{Keum}$ corresponding to $2$-torsion in the Picard group. Using these equations, in Section \ref{section3} we describe the steps to embed ${\mathbb P}^2_{Keum}$ in ${\mathbb P}^5$. Specifically, we compute global sections of ${\mathcal O}(5H)$ as global sections of the divisor $18H-9H-4H$ and explain the key idea that allowed us to find $H^0\left({\mathbb P}^2_{fake}, {\mathcal O}(4H)\right)$. Section \ref{section4} concludes with future directions.
\begin{rem} A defining feature of recent constructions of fake projective planes is their heavy use of computer algebra software. To that end, this project depended heavily on the use of the Mathematica software system \cite{Math} and the computer algebra systems Magma \cite{Magma} and Macaulay2 \cite{Macaulay2}. \end{rem}
\begin{rem} The 84 cubics in ${\mathbb P}^9$ and the 56 sextics in ${\mathbb P}^5$ are still too large to be included in the printed paper. \end{rem}
\section{Simplification of Keum's Fake Projective Plane} \label{section2} We will begin by simplifying the explicit equations of Keum's fake projective plane ${\mathbb P}^2_{Keum}$ found in \cite{Borisov}. This is done by looking for nonreduced curves on ${\mathbb P}^2_{Keum}$ which correspond to $2$-torsion in the Picard group. We then make a coordinate change that puts the corresponding curve in a nicer form in the new basis.
\subsection*{Step 1: Finite Field Search for Nonreduced Curves}
By the Cartwright-Steger classification, the torsion in the Picard group of ${\mathbb P}^2_{Keum}$ is $C_2^3$. In addition, the automorphism group is $ C_7 \rtimes C_3$, the semidirect product of $C_7$ and $C_3$.
We claim that 2-torsion classes give nonreduced curves in $|2K|$. Let $L$ be a 2-torsion class in the Picard group. By \cite{GKS}, we have $h^0({\mathbb P}^2_{Keum}, K+L) = 1$. Hence, up to scaling, there is a unique section $s_{L}\in H^0({\mathbb P}^2_{Keum}, K+L)$. The square of $s_L$ lies in $H^0({\mathbb P}^2_{Keum}, 2K)$ and gives rise to a nonreduced curve.
We will further assume that the nonreduced curve is $C_3$ invariant. This reduces the search to nonreduced curves of the form \[ a_0 P_0 + a_1(P_1+P_2+P_3) + a_2(P_4+P_5+P_6)+a_3(P_7+P_8+P_9) \]
up to scaling (so we subsequently set $a_0=1$). To look for such curves we work with a finite field reduction of ${\mathbb P}^2_{Keum}$ over ${\mathbb F}_p$ for a suitable prime $p$: one for which ${\mathbb F}_p$ contains a square root of $-7$ and the Hilbert polynomial of ${\mathbb P}^2_{Keum}$ modulo $p$ agrees with that in characteristic 0. We picked $p=43$ with $\sqrt{-7} \equiv 6 \mod{43}$, an arbitrary small prime satisfying these conditions. Using Magma, we ran an exhaustive search over all $a_1, a_2, a_3$ in ${\mathbb F}_{43}$ and checked whether the corresponding curve is nonreduced. We obtained the curve \[ P_0 + 24 (P_1 + P_2 + P_3) + 0 (P_4 + P_5 + P_6) + 28 (P_7 + P_8 + P_9). \]
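The arithmetic preconditions on the prime can be checked mechanically; the nonreducedness test itself requires the explicit equations of the surface, so the sketch below (in Python rather than Magma) verifies only the square roots of $-7$ used in this paper and lists small candidate primes:

```python
# The finite-field reductions in this paper all need a prime p such
# that -7 is a square mod p.  This checks the square roots quoted in
# the text: sqrt(-7) = 6 mod 43 (here), 17 mod 37 and 103 mod 1327
# (Section 3).  Illustration only; the nonreducedness search itself
# needs the 84 cubics and was run in Magma.
def is_sqrt_of_minus7(r, p):
    return (r * r + 7) % p == 0

assert is_sqrt_of_minus7(6, 43)
assert is_sqrt_of_minus7(17, 37)
assert is_sqrt_of_minus7(103, 1327)

# Small primes admitting a square root of -7 (candidates for reduction):
candidates = [p for p in [11, 23, 29, 37, 43, 53]
              if any((r * r + 7) % p == 0 for r in range(p))]
assert 37 in candidates and 43 in candidates
```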
\subsection*{Step 2: Lift to Characteristic 0} We lift this curve to ${\mathbb Q}(\sqrt{-7})$ as follows. Using Magma, we calculate some points in ${\mathbb F}_{43}$ lying on ${\mathbb P}^2_{Keum}$ and the nonreduced curve. We then apply a variant of Hensel lifting to lift the curve to ${\mathbb Z}/43^k{\mathbb Z}$ for higher $k$ at each step, obtaining a $p$-adic approximation.
The lifting process was done by finding, at each point, two linearly-independent tangent vectors in ${\mathbb P}^9({\mathbb F}_{43})$ that are orthogonal to all polynomials defining ${\mathbb P}^2_{Keum}$ and the linear cut. We modified the points, tangent vectors, and the linear cut at each stage to lift them to higher powers of $43$ such that the orthogonality conditions held; this reduced to solving a system of linear equations modulo $43$. After a sufficiently high power of $43$ we identify the corresponding algebraic numbers by applying a lattice reduction algorithm. We obtain the curve \begin{align*} &P_0 + \frac{(-1 + \sqrt{-7})}{2} (P_1 + P_2 + P_3) + \frac{(272 - 848 \sqrt{-7})}{7}
(P_4 + P_5 + P_6) \\
&+
\frac{(832 - 192 \sqrt{-7}) }{7}(P_7 + P_8 + P_9) \end{align*} which we verify is nonreduced numerically.
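The lifting strategy can be illustrated on its simplest ingredient: Newton iteration lifts $\sqrt{-7}\equiv 6 \bmod 43$ to arbitrarily high powers of $43$, doubling the $43$-adic accuracy at each step. The paper applies the same idea to points, tangent vectors, and the linear cut; the sketch below is a simplified stand-in, not the paper's code, and lifts only the square root itself:

```python
# Hensel/Newton lifting of a square root of a modulo powers of p.
# Each iteration roughly doubles the p-adic accuracy; the same scheme
# underlies lifting points and linear cuts to Z/43^k Z in the text.
def hensel_sqrt(a, r, p, k):
    """Lift r with r^2 = a (mod p) to a root mod p^k (p odd, 2r invertible)."""
    mod = p
    while mod < p ** k:
        mod = min(mod * mod, p ** k)
        inv = pow(2 * r, -1, mod)          # Newton step: r -= (r^2 - a)/(2r)
        r = (r - (r * r - a) * inv) % mod
    return r

r = hensel_sqrt(-7, 6, 43, 50)
assert (r * r + 7) % 43 ** 50 == 0   # a genuine square root mod 43^50
assert r % 43 == 6                   # lift of the chosen residue
```

After lifting, a lattice reduction step recognizes the resulting $43$-adic approximations as elements of ${\mathbb Q}(\sqrt{-7})$.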
Thus we have found one nontrivial $C_3$-invariant torsion line bundle. It is not $C_7$-invariant because the corresponding nonreduced linear cut is not $C_7$-invariant. Its orbit therefore has $7$ elements, which, combined with the fact that the torsion of the Picard group is $C_2^3$, shows that the automorphism group acts transitively on the nontrivial torsion classes of the Picard group.
\subsection*{Step 3: Setting Up the Coordinate Change} Finally we set up the coordinate change to find a nicer basis for $H^0({\mathbb P}^2_{Keum}, 2K)$ in order to simplify the equations defining the fake projective plane. We use a coordinate change from $P_i$ to $Q_i$ that respects the automorphisms of the surface and such that the nonreduced cut becomes \[ Q_0 + Q_1+Q_2+Q_3+Q_7+Q_8+Q_9. \]
These conditions leave one free parameter in the coordinate change. We fix the free parameter by choosing it so that the ``simplest'' coefficient in the equations becomes 1. This allows us to find a version of the 84 equations with significantly smaller coefficients.
We simplify the equations further by reducing the number of monomials in the equations. We take random linear combinations of the seven equations in each $C_7$ weight and select those that span the space and have the fewest monomials.
\section{Embedding of a fake projective plane into \texorpdfstring{${\mathbb P}^5$}{P5}} \label{section3} In this section, we will describe the process that led us to find the equations of an embedding of ${\mathbb P}^2_{Keum}$ in ${\mathbb P}^5$.
Let $H$ be a divisor such that $3H = K$, where $K=K_{{\mathbb P}^2_{Keum}}$ is the canonical divisor of ${\mathbb P}^2_{Keum}$. Calculations of $h^0({\mathbb P}^2_{Keum}, nH)$ show that $h^0({\mathbb P}^2_{Keum}, 5H)=6$, so the corresponding map to projective space has target ${\mathbb P}^5$. Thus we aim to construct $|5H|$ explicitly, which will give the desired map ${\mathbb P}^2_{Keum} \to {\mathbb P}^5$.
\begin{table}[!h] \begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|} \hline
$ n$ & 3&4&5&6&7&8&9&10&11&12 \\
\hline
$h^0\left({\mathbb P}^2_{Keum},nH\right)$ &0&3&6&10&15&21&28&36&45&55 \\
\hline \end{tabular}
\caption{Dimensions of $H^0({\mathbb P}^2_{Keum}, nH)$ for different values of $n$, where $3H=K$.} \end{center} \end{table}
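The values in the table are consistent with Riemann-Roch on a surface: with $K=3H$ and $H^2=K^2/9=1$ one gets $\chi(nH)=1+n(n-3)/2$, and for $n\geq 4$ this equals $h^0$ provided the higher cohomology vanishes (for $n=3$, $h^0(K)=p_g=0$ even though $\chi(K)=1$). A quick consistency check:

```python
# Riemann-Roch consistency check for the table of h^0(nH):
# chi(nH) = chi(O) + nH.(nH - K)/2 with K = 3H and H^2 = 1 gives
# chi(nH) = 1 + n(n-3)/2.  For n >= 4 the table values equal chi
# (assuming higher cohomology vanishes); n = 3 is the lone exception,
# since h^0(K) = p_g = 0 while chi(K) = 1.
table = {3: 0, 4: 3, 5: 6, 6: 10, 7: 15, 8: 21, 9: 28,
         10: 36, 11: 45, 12: 55}
chi = lambda n: 1 + n * (n - 3) // 2

for n, h0 in table.items():
    if n >= 4:
        assert h0 == chi(n)
assert chi(3) == 1 and table[3] == 0
```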
Recall that $Y$ denotes the quotient ${\mathbb P}^2_{Keum}/C_7$ of Keum's fake projective plane by its $C_7$ automorphism subgroup. It has residual automorphism group $C_3$ and has a double fiber whose reduction $F$ is linearly equivalent to $3K_Y$.
We will construct $|5H|$ as the space $|18H-9H-4H|$. We first find $|9H|=|18H-9H|$ by expressing 112 cubic equations in the $Q_i$ (which lie in $18H=6K$) that vanish on $9H$. Crucial to this construction is the preimage of the double fiber $F$ of $Y$ which we use to find points on $9H$. We then compute $|4H|$. This required the use of several important ideas which are detailed in Step 2 below. Finally, after constructing $4H$ we may find $5H$ as linear combinations of the equations of $9H$ vanishing on $4H$. We conclude by using the explicit equations in ${\mathbb P}^5$ to reconstruct the $C_3$ action on ${\mathbb P}^2_{Keum}$ in its embedding into ${\mathbb P}^5$.
\subsection*{Step 1: Constructing $|9H|$}
The preimage of the double fiber on $Y$ has divisor class $3K = 9H$ \cite{Borisov}. Hence to construct $|9H|$ we are led to find polynomials on $Y$ vanishing on the double fiber. Recall that \cite{Borisov} constructs the surface $Y$ as a system of quadrics in the variables $u_0,u_1, w_1, \ldots, w_6$ with the double fiber given by $\{u_1=0\}$. We compute a number of random points on the double fiber of $Y$ and use the equations to construct points on ${\mathbb P}^2_{Keum}$ lying on the preimage of the double fiber. We then look for polynomials vanishing on these points to compute $H^0({\mathbb P}^2_{Keum}, 9H)$. The search for cubic polynomials gave 112 cubics with 16 in each $C_7$ weight.
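The count of 112 cubics is consistent with a dimension count, assuming the cubic monomials surject onto $H^0({\mathbb P}^2_{Keum},18H)$ and the relevant higher cohomology vanishes: the kernel consists of the 84 defining cubics, and the sections of $18H$ vanishing on the preimage of the double fiber form $H^0(9H)$ of dimension 28:

```python
from math import comb

# Dimension count behind the "112 cubics": cubic monomials in the ten
# variables Q_0..Q_9 map onto H^0(18H) with kernel the 84 defining
# cubics (assuming surjectivity and vanishing of higher cohomology),
# and cubics cutting out the preimage of the double fiber correspond
# to H^0(18H - 9H) = H^0(9H).  Uses chi(nH) = 1 + n(n-3)/2.
monomials = comb(10 + 3 - 1, 3)        # cubic monomials in 10 variables
h0_18H = 1 + 18 * 15 // 2              # = 136
h0_9H = 1 + 9 * 6 // 2                 # = 28

defining_cubics = monomials - h0_18H
assert monomials == 220
assert defining_cubics == 84           # the 84 equations of the FPP
assert defining_cubics + h0_9H == 112  # cubics vanishing on 9H
assert 112 // 7 == 16                  # 16 in each C_7 weight
```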
\subsection*{Step 2: Constructing $|4H|$} \label{section2step2}
We may attempt to construct $|4H|$ as follows. The action of the $C_7$ automorphism subgroup on $H^0({\mathbb P}^2_{Keum}, 4H)$ gives a $C_7$-representation which splits $H^0({\mathbb P}^2_{Keum}, 4H)$ into three one-dimensional $C_7$-eigenspaces. The holomorphic Lefschetz fixed-point formula shows that the eigenvalues are $\xi^3, \xi^5, \xi^6$, where $\xi$ is a primitive seventh root of unity. Thus $H^0({\mathbb P}^2_{Keum}, 4H)\cong {\mathbb C} r_3 \, \oplus\, {\mathbb C} r_5\, \oplus\, {\mathbb C} r_6 $ where $r_3, r_5, r_6$ are sections of $4H$ with $C_7$-weights 3, 5, and 6 respectively. In addition, the $C_3$-action on the surface implies $r_5=\sigma(r_3), r_6=\sigma^2(r_3)$ for $\sigma$ an order 3 automorphism on ${\mathbb P}^2_{Keum}$. The product $d=r_3 r_5 r_6$ is therefore a $C_3$-invariant section with $C_7$-weight 0 in $H^0({\mathbb P}^2_{Keum}, 12H)$ (it is then invariant under the whole automorphism group).
Set $s_i=r_i^3 \in H^0({\mathbb P}^2_{Keum}, 12H)$ for $i\in \{3,5,6\}$. The equation \[ s_3 s_5 s_6 = d^3 \]
in $H^0({\mathbb P}^2_{Keum}, 36H)$ allows us to narrow down parameters in the search for $r_3, r_5, r_6$. Since $s_3, s_5, s_6,$ and $d$ lie in $H^0({\mathbb P}^2_{Keum}, 12H)$, they are quadratic in the variables $Q_0, \ldots, Q_9$ for the fake projective plane. It is sufficient to construct $s_3$ since $s_5$ and $s_6$ may be constructed from $s_3$ with the $C_3$ action. Additionally, since $s_3$ has $C_7$ weight $3 \times 3\equiv 2 \mod{7}$, we narrow the search down to $C_7$-weight 2 quadratics.
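The $C_7$-weight bookkeeping in the last two paragraphs reduces to arithmetic modulo 7, which the following sketch records:

```python
# C_7 weight bookkeeping from the text: r_3, r_5, r_6 have weights
# 3, 5, 6, so d = r_3 r_5 r_6 is C_7-invariant, while s_3 = r_3^3
# has weight 2, which is why the search is restricted to weight-2
# quadratics in Q_0, ..., Q_9.
w_r = [3, 5, 6]
assert sum(w_r) % 7 == 0        # d has C_7 weight 0
assert (3 * 3) % 7 == 2         # s_3 = r_3^3 has weight 2
```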
We may further reduce the number of parameters with additional data. The curve $\{r_3=0\}$ passes through the two $C_7$ fixed points \begin{align*} p_1&=(0\colon0\colon0\colon0\colon0\colon0\colon0\colon1\colon0\colon0),\\ p_2&=(0\colon0\colon0\colon0\colon0\colon0\colon0\colon0\colon1\colon0).\end{align*} It follows that at these points the curve $\{s_3=0\}$ vanishes with multiplicity 3, which places additional conditions on the coefficients of $s_3$.
Now we describe the details of the calculation. We first calculate the order 3 neighborhoods of the points $p_1$ and $p_2$. This was done by computing the tangent space and solving the conditions for the neighborhoods to vanish on the FPP. We began by solving for the order 2 neighborhood and then for the order 3 neighborhood. To speed up the calculations, it was sufficient to take some equations for ${\mathbb P}^2_{Keum}$ locally cutting out the point. After computing these neighborhoods, we posit the general form for $s_3$ as a weight 2 quadratic in the variables and then solve the conditions for it to vanish identically on the higher order neighborhoods. We are able to solve for two of these variables, narrowing the general form for $s_3$ down to 6 variables.
We now want to solve for the sextic equation $s_3s_5s_6 -d^3=0$. The requirement that $d$ be invariant under the full automorphism group forces it to be of the form \[ e_1 Q_0^2 + e_2 (Q_1 Q_6 + Q_2 Q_4 + Q_3 Q_5) + e_3 (Q_1 Q_9 + Q_2 Q_7 + Q_3 Q_8) \] for undetermined coefficients $e_1, e_2, e_3$. We also obtain the general forms for $s_5$ and $s_6$ by applying the $C_3$ automorphism to $s_3$. To solve for the coefficients, we compute some points of ${\mathbb P}^2_{Keum}$ with high accuracy and substitute them into $s_3s_5s_6 -d^3=0$ to obtain a system of 24 cubics in 6 variables. We solve this system of equations by applying the trick of \cite{Borisov and Fatighenti}. The Hilbert polynomial of the system of equations modulo 37 with $\sqrt{-7} \equiv 17 \mod{37}$ is 3, which suggests that there are 3 solutions for this system. By applying successive linear conditions on the system and checking the Hilbert polynomial at each step, we are able to take linear cuts that drop the Hilbert polynomial eventually to 1. At some point there are 3 different choices for the linear cuts corresponding to our 3 solutions. We were able to lift these 3 solutions modulo $37^{200}$ and then use the lattice reduction algorithm to obtain the corresponding solutions over ${\mathbb Q}(\sqrt{-7})$. The three solutions differed by a cube root of unity. We selected the solution defined over the desired field of definition to proceed.
The solutions for these coefficients allow us to fully determine $s_3,s_5,s_6$, and $d$. The equations for $s_3$ and $d$ are given below, with $s_5$ and $s_6$ obtained by applying the $C_3$ automorphism. Points on $\{r_3=0\}$ may then be calculated by solving the simultaneous conditions $\{s_3=0, d=0\}$. These points were used later in the construction. \begin{align*}
s_3 &= \frac{\left(-212275+26525 i \sqrt{7}\right) Q_0 Q_5}{2470336}+\frac{\left(22575+51275 i \sqrt{7}\right) Q_0
Q_8}{1235168}\\
&+\frac{\left(139475+17575 i \sqrt{7}\right) Q_1 Q_2}{9881344}+\frac{\left(196875-91425 i \sqrt{7}\right)
Q_3 Q_4}{2470336}\\
&+\frac{\left(-303625-270725 i \sqrt{7}\right) Q_3 Q_7}{4940672}+\frac{\left(139475+17575 i
\sqrt{7}\right) Q_6^2}{1235168}\\
&+\frac{\left(795725-287175 i \sqrt{7}\right) Q_6 Q_9}{4940672}+\frac{\left(-57575-549675 i
\sqrt{7}\right) Q_9^2}{9881344}\\
d&=\frac{25}{9881344} \Bigl(3407 \sqrt{-7} Q_0^2+17045 Q_0^2-2812 \sqrt{-7} Q_1 Q_6-22316 Q_1 Q_6\\
&+329 \sqrt{-7}
Q_1 Q_9-21987 Q_1 Q_9-2812 \sqrt{-7} Q_2 Q_4-22316 Q_2 Q_4+329 \sqrt{-7} Q_2
Q_7\\
&-21987 Q_2 Q_7-2812 \sqrt{-7} Q_3 Q_5-22316 Q_3 Q_5+329 \sqrt{-7} Q_3 Q_8-21987
Q_3 Q_8 \Bigr) \end{align*}
\subsection*{Step 3: The map ${\mathbb P}^2_{Keum} \to {\mathbb P}^5$} With the computations of $9H$ and $4H$ we may now find $5H$. We look at suitable linear combinations of the 112 polynomials vanishing on $18H-9H=9H$ additionally vanishing on $4H$ to obtain $18H-9H-4H=5H$.
We first compute some random points on $4H$ by solving for $\{d=0, s_3=0\}$ on the FPP. $5H$ is then found by looking for linear combinations of the cubics defining $9H$ for each weight that vanish on these points. To verify that they are in $|5H|$ we also check that they do not vanish on the whole fake projective plane.
The six resulting degree 3 polynomials give us the map ${\mathbb P}^2_{Keum} \to {\mathbb P}^5$. We calculate points in the image of this map in ${\mathbb P}^5$ and find 56 degree 6 polynomials in new variables $Z_1, \ldots, Z_6$ that vanish at these points. These give the desired embedding of the fake projective plane.
\begin{rem} The $C_7$-weights on the variables $Z_1, Z_2,\ldots, Z_6$ are $1,2,\ldots, 6$. There is no weight 0 variable. The construction required that we shift the $C_7$ weights by $3$. This may be explained by viewing our construction of $H^0({\mathbb P}^2_{Keum},5H)$ as given by an embedding \[ H^0({\mathbb P}^2_{Keum},5H) \hookrightarrow H^0({\mathbb P}^2_{Keum}, 18H) \] with the map given by tensoring with $s_3 \otimes f$ for $s_3\in H^0({\mathbb P}^2_{Keum}, 4H)$ and $f\in H^0({\mathbb P}^2_{Keum}, 9H)$. While $f$ has weight 0, $s_3$ has weight $3$ and therefore shifts the weights of $H^0({\mathbb P}^2_{Keum}, 5H)$ by 3. \end{rem}
We take care to reconstruct the automorphism group. While the $C_7$-action is preserved under our construction, the non-$C_3$-invariance of $s_3$ introduces a scaling factor in the $C_3$ action. We fix the coefficients of this scaling factor and recompute the equations with the scaling to find a better basis for the action. As before, we take random linear combinations of the equations that span the space and take the simplest ones to further simplify the equations.
Finally, we use Magma to verify that the Hilbert polynomial is as expected. The verification process for the FPP is carried out as in \cite{Borisov} working modulo $p=1327$ with $\sqrt{-7}=103 \mod{1327}$. Thus we have constructed Keum's fake projective plane as a degree 25 surface in ${\mathbb P}^5$.
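As a sanity check on the stated finite-field setup (a check added here for illustration, not part of the original verification), one can confirm that $103$ is indeed a square root of $-7$ modulo $1327$:

```python
# 103 plays the role of sqrt(-7) in the finite field F_p with p = 1327.
p = 1327
r = 103
assert pow(r, 2, p) == (-7) % p  # 103^2 = 10609 = 8 * 1327 - 7
```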
\section{Future Directions} \label{section4}
One hopes to find a coordinate change to additionally simplify the 56 equations in ${\mathbb P}^5$.
A related construction of interest is that of Mumford's original fake projective plane \cite{Mumford}. This surface has not been explicitly constructed yet. It lies in the same class as ${\mathbb P}^2_{Keum}$ and two other fake projective planes. We are currently attempting to find this surface by computing a seven-to-one cover of ${\mathbb P}^2_{Keum}$, after which several cover relations may yield the surface and the two fake projective planes in the same class.
\end{document}
\begin{document}
\title{Randomised Buffer Management with Bounded Delay against Adaptive Adversary}
\section{Introduction}
We study the Buffer Management with Bounded Delay problem, introduced by Kesselman~et~al.~\cite{DBLP:journals/siamcomp/KesselmanLMPSS04}, or, in standard scheduling terminology, the problem of online scheduling of unit jobs to maximise weighted throughput. The adaptive-online adversary model for this problem has recently been studied by Bie{\'n}kowski~et~al.~\cite{DBLP:conf/waoa/BienkowskiCJ08}, who proved a lower bound of \(\frac{4}{3}\) on the competitive ratio and provided a matching upper bound for \(2\)-bounded sequences. In particular, the authors of~\cite{DBLP:conf/waoa/BienkowskiCJ08} claim that the algorithm $\textsc{RMix}$~\cite{DBLP:journals/jda/ChinCFJST06} is \(\frac{\mathrm{e}}{\mathrm{e}-1}\)-competitive against an adaptive-online adversary. However, the original proof of Chin~et~al.~\cite{DBLP:journals/jda/ChinCFJST06} holds only in the oblivious adversary model, for two reasons: the potential function used in the proof depends on the adversary's future schedule, and the proof assumes that the adversary follows the earliest-deadline-first policy. Neither assumption is valid in the adaptive-online adversary model, as the whole schedule of such an adversary depends on the random choices of the algorithm. We give an alternative proof that {\textsc{RMix}} is indeed \(\frac{\mathrm{e}}{\mathrm{e}-1}\)-competitive against an adaptive-online adversary.
A similar claim about {\textsc{RMix}} was made in another paper by Bie{\'n}kowski~et~al.~\cite{DBLP:conf/soda/BienkowskiCDHJJS09}, which studies a slightly more general problem: the algorithm does not know the exact deadlines of the packets, only the order of their expirations, and in each step an arbitrary prefix of the deadline-ordered sequence of packets may expire. The new proof that we provide holds even in this more general model, as both the algorithm and its analysis rely only on the relative order of packets' deadlines.
\section{{\textsc{RMix}} and its new analysis}
The algorithm $\textsc{RMix}$ works as follows. In each step, let $h$ be the heaviest pending packet. Select a real $x \in [-1,0]$ uniformly at random and transmit $f$, the earliest-deadline pending packet with $w_f \geq \mathrm{e}^x \cdot w_h$.
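A single step of the algorithm can be sketched as follows (an illustrative sketch, not code from the paper; packets are modelled as hypothetical (weight, deadline) pairs):

```python
import math
import random

def rmix_step(pending):
    """One step of RMix: pick x uniformly in [-1, 0], then transmit the
    earliest-deadline pending packet of weight at least e^x * w_h."""
    w_h = max(w for w, _ in pending)       # weight of the heaviest pending packet
    x = random.uniform(-1.0, 0.0)
    threshold = math.exp(x) * w_h          # e^x * w_h <= w_h, so some packet qualifies
    eligible = [p for p in pending if p[0] >= threshold]
    f = min(eligible, key=lambda p: p[1])  # earliest deadline among eligible packets
    pending.remove(f)
    return f
```

Note that the transmitted packet always has weight at least $\mathrm{e}^{-1} w_h$.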
We write $a \lhd b$ ($a \unlhd b$) to denote that the deadline of packet $a$ is earlier than (not later than) the deadline of packet $b$. This is consistent with the convention of~\cite{DBLP:conf/soda/BienkowskiCDHJJS09} for the more general problem studied therein.
\begin{theorem} \textsc{RMix} is $\mathrm{e} / (\mathrm{e}-1)$-competitive against an adaptive-online adversary. \end{theorem}
\begin{proof} We use the paradigm of modifying the adversary's buffer used in the paper of Li et al.~\cite{DBLP:conf/soda/LiSS05}. Namely, in each time step we assume that $\textsc{RMix}$ and the adversary $\textsc{Adv}$ have the same buffers. Both $\textsc{RMix}$ and $\textsc{Adv}$ transmit a packet. If after doing so, the contents of their buffers become different, we modify the adversary's buffer to make it identical with that of $\textsc{RMix}$. To do so, we may have to let the adversary transmit another packet and keep the one originally transmitted in the buffer, or upgrade one of the packets in its buffer by increasing its weight and deadline. We show that in each step the expected gain of $\textsc{RMix}$ is at least $\frac{\mathrm{e}-1}{\mathrm{e}}$ times the expected {\em amortized gain} of the adversary, denoted $\textsc{Adv}'$. The latter is defined as the sum of weights of the packets that $\textsc{Adv}$ eventually transmitted in the step. Both expected values are taken over possible random choices of $\textsc{RMix}$.
First, we compute the expected gain of $\textsc{RMix}$ in a single step. \[
\mathbf{E}[\textsc{RMix}] = \mathbf{E}[w_f] = \int_{-1}^0 w_f \; dx
\enspace. \]
Assume now that $\textsc{Adv}$ transmits a packet $j$. Without loss of generality, we may assume that for each packet $k$ in the buffer, either $w_j \geq w_k$ or $j \unlhd k$. We call this the {\em greediness property}. We consider two cases.
\begin{enumerate} \item $f \lhd j$. By the greediness property, $w_j \geq w_f$. After both $\textsc{Adv}$ and $\textsc{RMix}$ transmit their packets, we replace $f$ in the buffer of $\textsc{Adv}$ by $j$. \item $j \unlhd f$. After both $\textsc{Adv}$ and $\textsc{RMix}$ transmit their packets, we let $\textsc{Adv}$ transmit additionally $f$ in this round and we reinsert $j$ into its buffer. \end{enumerate}
Therefore the amortized gain of $\textsc{Adv}$ is $w_j$ and additionally $w_f$ if $j \unlhd f$. By the definition of the algorithm, $j \unlhd f$ only if $w_f \geq w_j$. Let $y = \ln (w_j/w_h)$. Then, \[
\mathbf{E}[\textsc{Adv}'] = w_j + \mathbf{E}\bigl[w_f \cdot \mathbf{1}_{\{w_f \geq w_j\}}\bigr] =
w_j + \int_{y}^0 w_f \; dx
\enspace. \] Finally, we compare the gains, obtaining \begin{align*} \frac{\mathbf{E}[\textsc{RMix}]}{\mathbf{E}[\textsc{Adv}']} \;= &\;
\frac{\int_{-1}^y w_f \; dx + \int_y^0 w_f \; dx}{w_j + \int_{y}^0 w_f \; dx} \geq
\frac{\int_{-1}^y \mathrm{e}^x w_h \; dx + \int_y^0 \mathrm{e}^x w_h \; dx}{w_j + \int_{y}^0 \mathrm{e}^x w_h \; dx} \\
= &\; \frac{\int_{-1}^0 \mathrm{e}^x w_h \; dx}{w_j + \int_{y}^0 \mathrm{e}^x w_h \; dx}
= \frac{w_h \cdot (1-1/\mathrm{e})}{w_j + w_h \cdot (1-w_j/w_h)} \\
= &\; 1 - 1/\mathrm{e} \enspace, \end{align*} which concludes the proof. \qed \end{proof}
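The closed form at the end of the proof can be sanity-checked numerically: replacing $w_f$ by its lower bound $\mathrm{e}^x w_h$ and evaluating both integrals by quadrature reproduces the ratio $1 - 1/\mathrm{e}$ for any choice of $w_j$ with $w_h/\mathrm{e} \leq w_j \leq w_h$, so that $y \in [-1,0]$ (a check added for illustration, not part of the original argument):

```python
import math

def ratio_lower_bound(w_h, w_j, steps=100000):
    """Evaluate the bound E[RMix]/E[Adv'] >= I(-1,0) / (w_j + I(y,0)),
    where I(a,b) = int_a^b e^x * w_h dx and y = ln(w_j / w_h)."""
    y = math.log(w_j / w_h)

    def integral(a, b):
        # midpoint rule for int_a^b e^x * w_h dx
        h = (b - a) / steps
        return sum(math.exp(a + (i + 0.5) * h) for i in range(steps)) * h * w_h

    return integral(-1.0, 0.0) / (w_j + integral(y, 0.0))
```

For example, `ratio_lower_bound(3.0, 2.0)` agrees with $1 - 1/\mathrm{e} \approx 0.632$ to quadrature accuracy.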
\end{document}
\begin{document}
\title[Unbounded operator valued local positive maps ]{Factorization properties for unbounded local positive maps } \author{Maria Joi\c{t}a} \address{Department of Mathematics, Faculty of Applied Sciences, University Politehnica of Bucharest, 313 Spl. Independentei, 060042, Bucharest, Romania and Simion Stoilow Institute of Mathematics of the Romanian Academy, P.O. Box 1-764, 014700, Bucharest, Romania} \email{mjoita@fmi.unibuc.ro and maria.joita@pub.ro} \urladdr{http://sites.google.com/a/g.unibuc.ro/maria-joita/} \subjclass[2000]{ 46L05} \keywords{locally $C^{\ast }$-algebras, quantized domain, local completely positive maps, local completely contractive maps, local decomposable maps, local completely copositive maps } \thanks{This work was partially supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS/CCCDI--UEFISCDI, project number PN-III-P4-ID-PCE-2020-0458, within PNCDI III}
\begin{abstract} In this paper we present some factorization properties for unbounded local positive maps. We show that an unbounded local positive map $\phi $ on the minimal tensor product of the locally $C^{\ast }$-algebras $\mathcal{A}$ and $C^{\ast }(\mathcal{D}_{\mathcal{E}}),$ where $\mathcal{D}_{\mathcal{E}}$ is a Fr\'{e}chet quantized domain, that is dominated by $\varphi \otimes $id is of the form $\psi \otimes $id, where $\psi $ is an unbounded local positive map dominated by $\varphi $. As an application of this result, we show that given a local positive map $\varphi :\mathcal{A}\rightarrow \mathcal{B},$ the local positive map $\varphi \otimes $id$_{M_{n}\left( \mathbb{C}\right) }$ is local decomposable for some $n\geq 2$ if and only if $\varphi $ is a local $CP$-map. Also, we show that an unbounded local $CCP$-map $\phi $ on the minimal tensor product of the unital locally $C^{\ast }$-algebras $\mathcal{A}$ and $\mathcal{B},$ that is dominated by $\varphi \otimes \psi $ is of the form $\varphi \otimes \widetilde{\psi }$, where $\widetilde{\psi }$ is an unbounded local $CCP$-map dominated by $\psi $, whenever $\varphi $ is pure. \end{abstract}
\maketitle
\section{Introduction}
Locally $C^{\ast }$-algebras are generalizations of $C^{\ast }$-algebras in which the topology, instead of being given by a single $C^{\ast }$-norm, is defined by an upward directed family of $C^{\ast }$-seminorms. The concrete models for locally $C^{\ast }$-algebras are $\ast $-algebras of unbounded linear operators on a Hilbert space. In the literature, locally $C^{\ast }$-algebras are studied under different names, such as pro-$C^{\ast }$-algebras (D. Voiculescu, N.C. Phillips), $LMC^{\ast }$-algebras (G. Lassner, K. Schm\"{u}dgen), $b^{\ast }$-algebras (C. Apostol) and multinormed $C^{\ast }$-algebras (A. Dosiev). The term locally $C^{\ast }$-algebra is due to A. Inoue \cite{I}.
A locally $C^{\ast }$-algebra is a complete Hausdorff complex topological $\ast $-algebra $\mathcal{A}$ whose topology is determined by an upward filtered family $\{p_{\lambda }\}_{\lambda \in \Lambda }\ $of $C^{\ast }$-seminorms defined on $\mathcal{A}$. An element $a\in \mathcal{A}$ is positive if there exists $b\in \mathcal{A}$ such that $a=b^{\ast }b$, and it is local positive if there exist $c,d\in \mathcal{A}$ and $\lambda \in \Lambda $ such that $a=c^{\ast }c+d$ with $p_{\lambda }\left( d\right) =0$. Thus, the notion of (local) completely positive maps appeared naturally in the study of linear maps between locally $C^{\ast }$-algebras. The structure of strictly continuous completely positive maps between locally $C^{\ast }$-algebras is described in \cite{J}; the same is done for strongly bounded completely positive maps of order zero in \cite{MJ2}. Dosiev \cite{D1} proved a Stinespring type theorem for unbounded local completely positive and local completely contractive maps on unital locally $C^{\ast }$-algebras. A Radon-Nikodym type theorem for such maps was proved by Bhat, Ghatak and Kumar \cite{BGK}. In \cite{MJ1}, we obtained a structure theorem for unbounded local completely positive maps of local order zero.
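A standard illustrative example, not drawn from this paper: for a $\sigma $-compact, locally compact Hausdorff space $X$, the $\ast $-algebra $C(X)$ of continuous complex-valued functions on $X$, with the $C^{\ast }$-seminorms indexed by the compact subsets of $X$,

```latex
\begin{equation*}
p_{K}\left( f\right) =\sup \{\left\vert f(x)\right\vert ;x\in K\},\qquad
K\subseteq X\text{ compact},
\end{equation*}
```

is a locally $C^{\ast }$-algebra; it is a $C^{\ast }$-algebra exactly when $X$ is compact, and $b\left( C(X)\right) $ is the algebra of bounded continuous functions on $X$.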
Bhat and Osaka \cite{BO} proved some factorization properties for bounded positive maps on $C^{\ast }$-algebras. In this paper, we extend the results of \cite{BO} to unbounded local positive maps on locally $C^{\ast }$ -algebras.
The paper is organized as follows. In Section 2 we gather some basic facts on locally $C^{\ast }$-algebras, concrete models for locally $C^{\ast }$-algebras, and unbounded local completely positive and local completely contractive maps needed for understanding the main results of this paper. In Section 3, we show that a linear map between locally $C^{\ast }$-algebras is local $CP$ (completely positive) if and only if it is continuous and completely positive (Proposition \ref{5}). Therefore, the local $CP$-maps on unital locally $C^{\ast }$-algebras are exactly the strictly continuous completely positive maps on unital locally $C^{\ast }$-algebras \cite[Remark 4.4]{J}, and the structure theorem \cite[Theorem 4.6]{J} is valid for local $CP$-maps on unital locally $C^{\ast }$-algebras. As in the case of bounded completely positive maps on $C^{\ast }$-algebras, we show that two unbounded local $CCP$ (local completely contractive and local completely positive) maps on unital locally $C^{\ast }$-algebras determine an unbounded local $CCP$-map on the minimal tensor product, and a minimal Stinespring dilation for the tensor product map can be obtained in terms of the minimal Stinespring dilations for each map (Proposition \ref{3}). In Section 4, we show that if $\mathcal{D}_{\mathcal{E}}$ is a Fr\'{e}chet quantized domain, then an unbounded local positive map $\phi $ on the minimal tensor product of the locally $C^{\ast }$-algebras $\mathcal{A}$ and $C^{\ast }(\mathcal{D}_{\mathcal{E}}),$ that is dominated by $\varphi \otimes $id$_{C^{\ast }(\mathcal{D}_{\mathcal{E}})}$ factorizes as $\psi \otimes $id$_{C^{\ast }(\mathcal{D}_{\mathcal{E}})}$, where $\psi $ is an unbounded local positive map dominated by $\varphi $ (Theorem \ref{6}).
As an application of this result, we show that given a local positive map $\varphi :\mathcal{A}\rightarrow \mathcal{B}$, the local positive map $\varphi \otimes $id$_{M_{n}\left( \mathbb{C}\right) }$ is local decomposable for $n\geq 2$ if and only if $\varphi $ is a local $CP$-map (Theorem \ref{7}). Also, we show that given unbounded local $CCP$-maps $\phi :\mathcal{A\otimes B}\rightarrow C^{\ast }(\mathcal{D}_{\mathcal{E\otimes F}}),$ $\varphi :\mathcal{A}\rightarrow C^{\ast }(\mathcal{D}_{\mathcal{E}})$ and $\psi :\mathcal{B}\rightarrow C^{\ast }(\mathcal{D}_{\mathcal{F}})$, if $\phi $ is dominated by $\varphi \otimes \psi $ and $\varphi $ is pure, then $\phi $ factorizes as $\varphi \otimes \widetilde{\psi }$, where $\widetilde{\psi }$ is an unbounded local $CCP$-map dominated by $\psi $ (Theorem \ref{8}).
\section{Preliminaries}
Let $\mathcal{A}$ be a locally $C^{\ast }$-algebra with the topology defined by the family of $C^{\ast }$-seminorms $\left\{ p_{\lambda }\right\} _{\lambda \in \Lambda }.$
An element $a\in \mathcal{A}$ is \textit{bounded} if $\sup \{p_{\lambda }\left( a\right) ;\lambda \in \Lambda \}<\infty $. The subset $b\left( \mathcal{A}\right) =\{a\in \mathcal{A};\left\Vert a\right\Vert _{\infty }:=\sup \{p_{\lambda }\left( a\right) ;\lambda \in \Lambda \}<\infty \}\ $is a $C^{\ast }$-algebra with respect to the $C^{\ast }$-norm $\left\Vert \cdot \right\Vert _{\infty }.$ Moreover, $b\left( \mathcal{A}\right) $ is dense in $\mathcal{A}.$
Let us observe that $\mathcal{A}$ can be realized as a projective limit of an inverse family of $C^{\ast }$-algebras as follows: For each $\lambda \in \Lambda $, let $\mathcal{I}_{\lambda }=\{a\in \mathcal{A};p_{\lambda }\left( a\right) =0\}$. Clearly, $\mathcal{I}_{\lambda }$ is a closed two sided $ \ast $-ideal in $\mathcal{A}$ and $\mathcal{A}_{\lambda }=\mathcal{A}/ \mathcal{I}_{\lambda }$ is a $C^{\ast }$-algebra with respect to the norm induced by $p_{\lambda }$. The canonical quotient $\ast $-morphism from $ \mathcal{A\ }$to $\mathcal{A}_{\lambda }$ is denoted by $\pi _{\lambda }^{ \mathcal{A}}$. For each $\lambda _{1},\lambda _{2}\in \Lambda $ with $ \lambda _{1}\leq \lambda _{2}$, there is a canonical surjective $\ast $ -morphism $\pi _{\lambda _{2}\lambda _{1}}^{\mathcal{A}}:$ $\mathcal{A} _{\lambda _{2}}\rightarrow \mathcal{A}_{\lambda _{1}}$ defined by $\pi _{\lambda _{2}\lambda _{1}}^{\mathcal{A}}\left( a+\mathcal{I}_{\lambda _{2}}\right) =a+\mathcal{I}_{\lambda _{1}}$ for $a\in \mathcal{A}$. Then, $\{ \mathcal{A}_{\lambda },\pi _{\lambda _{2}\lambda _{1}}^{\mathcal{A}}\}$\ forms an inverse system of $C^{\ast }$-algebras, since $\pi _{\lambda _{1}}^{ \mathcal{A}}=$ $\pi _{\lambda _{2}\lambda _{1}}^{\mathcal{A}}\circ \pi _{\lambda _{2}}^{\mathcal{A}}$ whenever $\lambda _{1}\leq \lambda _{2}$. 
The projective limit \begin{equation*} \lim\limits_{\underset{\lambda }{\leftarrow }}\mathcal{A}_{\lambda }=\{\left( a_{\lambda }\right) _{\lambda \in \Lambda }\in \tprod\limits_{\lambda \in \Lambda }\mathcal{A}_{\lambda };\pi _{\lambda _{2}\lambda _{1}}^{\mathcal{A}}\left( a_{\lambda _{2}}\right) =a_{\lambda _{1}}\text{ whenever }\lambda _{1}\leq \lambda _{2},\lambda _{1},\lambda _{2}\in \Lambda \} \end{equation*} of the inverse system of $C^{\ast }$-algebras $\{\mathcal{A}_{\lambda },\pi _{\lambda _{2}\lambda _{1}}^{\mathcal{A}}\}$ is a locally $C^{\ast }$ -algebra that can be identified with $\mathcal{A}$ via the map $a\mapsto \left( \pi _{\lambda }^{\mathcal{A}}\left( a\right) \right) _{\lambda \in \Lambda }.$
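To illustrate the projective limit picture with a standard example (not drawn from this paper): take $\mathcal{A}=C(\mathbb{R})$ with the $C^{\ast }$-seminorms $p_{n}\left( f\right) =\sup \{\left\vert f(t)\right\vert ;\left\vert t\right\vert \leq n\}$, $n\in \mathbb{N}$. Then $\mathcal{I}_{n}$ consists of the functions vanishing on $[-n,n]$, $\mathcal{A}_{n}\cong C([-n,n])$, the connecting morphisms are the restriction maps, and

```latex
\begin{equation*}
C(\mathbb{R})\cong \lim\limits_{\underset{n}{\leftarrow }}C([-n,n]).
\end{equation*}
```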
An element $a\in \mathcal{A}$ is \textit{self-adjoint} if $a^{\ast }=a$ and it is \textit{positive} if $a=b^{\ast }b$ for some $b\in \mathcal{A}$. An element $a\in \mathcal{A}$ is called \textit{local self-adjoint} if $a=a^{\ast }+c$, where $c\in \mathcal{A}$ with $p_{\lambda }\left( c\right) =0$ for some $\lambda \in \Lambda ,$ and \textit{local positive} if $a=b^{\ast }b+c$, where $b,c\in \mathcal{A}$ with $p_{\lambda }\left( c\right) =0\ $for some $\lambda \in \Lambda $. In the first case, we say that $a$ is $\lambda $-self-adjoint, and in the second case, we say that $a$ is $\lambda $-positive and write $a\geq _{\lambda }0$. We write $a=_{\lambda }0$ whenever $p_{\lambda }\left( a\right) =0$. Note that $a\in \mathcal{A}$ is local self-adjoint if and only if there is $\lambda \in \Lambda $ such that $\pi _{\lambda }^{\mathcal{A}}\left( a\right) $ is self-adjoint in $\mathcal{A}_{\lambda }$, and $a\in \mathcal{A}$ is local positive if and only if there is $\lambda \in \Lambda $ such that $\pi _{\lambda }^{\mathcal{A}}\left( a\right) $ is positive in $\mathcal{A}_{\lambda }.$
Let $\mathcal{A}$ and $\mathcal{B}$ be two locally $C^{\ast }$-algebras with the topology defined by the family of $C^{\ast }$-seminorms $\left\{ p_{\lambda }\right\} _{\lambda \in \Lambda }$ and $\left\{ q_{\delta }\right\} _{\delta \in \Delta }$, respectively. For each $n\in \mathbb{N},$ $ M_{n}(\mathcal{A})$ denotes the collection of all matrices of size $n$ with elements in $\mathcal{A}$. Note that $M_{n}(\mathcal{A})$ is a locally $ C^{\ast }$-algebra where the associated family of $C^{\ast }$-seminorms is denoted by $\{p_{\lambda }^{n}\}_{\lambda \in \Lambda }.$
For each $n\in \mathbb{N}$, the $n$-amplification of a linear map $\varphi : \mathcal{A}\rightarrow \mathcal{B}$ is the map $\varphi ^{\left( n\right) }:M_{n}(\mathcal{A})$ $\rightarrow $ $M_{n}(\mathcal{B})$ defined by \begin{equation*} \varphi ^{\left( n\right) }\left( \left[ a_{ij}\right] _{i,j=1}^{n}\right) = \left[ \varphi \left( a_{ij}\right) \right] _{i,j=1}^{n} \end{equation*} for $\left[ a_{ij}\right] _{i,j=1}^{n}\in M_{n}(\mathcal{A})$ .
A linear map $\varphi :\mathcal{A}\rightarrow \mathcal{B}$ is called:
\begin{enumerate} \item \textit{local contractive} if for each $\delta \in \Delta $, there exists $\lambda \in \Lambda $ such that \begin{equation*} q_{\delta }\left( \varphi \left( a\right) \right) \leq p_{\lambda }\left( a\right) \text{ for all }a\in \mathcal{A}; \end{equation*}
\item \textit{local positive} if for each $\delta \in \Delta ,$ there exists $\lambda \in \Lambda $\ such that $\varphi \left( a\right) \geq _{\delta }0$ whenever $a\geq _{\lambda }0$ and $\varphi \left( a\right) =_{\delta }0$ whenever $a=_{\lambda }0;$
\item \textit{local completely contractive }(\textit{local }$CC$\textit{-map} )\textit{\ }if for each $\delta \in \Delta $, there exists $\lambda \in \Lambda $ such that \begin{equation*} q_{\delta }^{n}\left( \varphi ^{\left( n\right) }\left( \left[ a_{ij}\right] _{i,j=1}^{n}\right) \right) \leq p_{\lambda }^{n}\left( \left[ a_{ij}\right] _{i,j=1}^{n}\right) \text{ } \end{equation*} for all $\left[ a_{ij}\right] _{i,j=1}^{n}\in M_{n}(\mathcal{A})\ $and for all $n\in \mathbb{N};$
\item \textit{local completely positive }(\textit{local }$CP$\textit{-map}) \textit{\ }if for each $\delta \in \Delta $, there exists $\lambda \in \Lambda $ such that $\varphi ^{\left( n\right) }\left( \left[ a_{ij}\right] _{i,j=1}^{n}\right) \geq _{\delta }0\ $whenever $\left[ a_{ij}\right] _{i,j=1}^{n}\geq _{\lambda }0$ and $\varphi ^{\left( n\right) }\left( \left[ a_{ij}\right] _{i,j=1}^{n}\right) =_{\delta }0\ \ $whenever $\left[ a_{ij} \right] _{i,j=1}^{n}=_{\lambda }0,\ $for all $n\in \mathbb{N}.$ \end{enumerate}
Throughout the paper, $\mathcal{H}$ is a complex Hilbert space and $B( \mathcal{H})$ is the algebra of all bounded linear operators on $\mathcal{H}$ .
Let $(\Lambda ,\leq )$ be a directed poset. A \textit{quantized domain }in a Hilbert space $\mathcal{H}$ is a triple $\{\mathcal{H};\mathcal{E};\mathcal{D }_{\mathcal{E}}\}$, where $\mathcal{E}=\{\mathcal{H}_{\lambda };\lambda \in \Lambda \}$ is an upward filtered family of closed subspaces with dense union $\mathcal{D}_{\mathcal{E}}=\tbigcup\limits_{\lambda \in \Lambda } \mathcal{H}_{\lambda }$ in $\mathcal{H\ }$\cite{D1}.
A quantized family $\mathcal{E}=\{\mathcal{H}_{\lambda };\lambda \in \Lambda \}$ determines an upward filtered family $\{P_{\lambda };\lambda \in \Lambda \}$ of projections in $B(\mathcal{H})$, where $P_{\lambda }$ is a projection onto $\mathcal{H}_{\lambda }$.
We say that a quantized domain $\mathcal{F}=\{\mathcal{K}_{\lambda };\lambda \in \Lambda \}$ of $\mathcal{H}$ with its union space $\mathcal{D}_{\mathcal{F}}$ and $\mathcal{K}=\overline{\tbigcup\limits_{\lambda \in \Lambda }\mathcal{K}_{\lambda }}$ is a \textit{quantized subdomain} of $\mathcal{E}$, if $\mathcal{K}_{\lambda }\subseteq \mathcal{H}_{\lambda }$ for all $\lambda \in \Lambda $. In this case, we write $\mathcal{F}\subseteq \mathcal{E}$.
Let $\mathcal{E}^{i}=\{\mathcal{H}_{\lambda }^{i};\lambda \in \Lambda \}$ be a quantized domain in a Hilbert space $\mathcal{H}^{i}$ for $i=1,2$. Given a linear operator $V:\mathcal{D}_{\mathcal{E}^{1}}\rightarrow \mathcal{H}^{2}$, we write $V(\mathcal{E}^{1})\subseteq \mathcal{E}^{2}$ if $V(\mathcal{H}_{\lambda }^{1})\subseteq \mathcal{H}_{\lambda }^{2}$\ for all $\lambda \in \Lambda $.
Let $\mathcal{H}$ and $\mathcal{K}$ be Hilbert spaces. A linear operator $T$ $:$ dom$(T)$ $\subseteq $ $\mathcal{H}$ $\rightarrow $ $\mathcal{K}$ is said to be densely defined if dom$(T)$ is a dense subspace of $\mathcal{H}$. The adjoint of $T$ is a linear map $T^{\bigstar }:$ dom$(T^{\bigstar })$ $ \subseteq $ $\mathcal{K}\rightarrow \mathcal{H},$ where \begin{equation*} \text{dom}(T^{\bigstar })=\{\xi \in \mathcal{K};\eta \rightarrow \left\langle T\eta ,\xi \right\rangle _{\mathcal{K}}\ \text{is continuous for every }\eta \in \text{dom}(T)\} \end{equation*} satisfying $\left\langle T\eta ,\xi \right\rangle _{\mathcal{K} }=\left\langle \eta ,T^{\bigstar }\xi \right\rangle _{\mathcal{H}}$ for all $ \xi \in $dom$(T^{\bigstar })$ and $\eta \in $dom$(T).$
Let $\mathcal{E}=\{\mathcal{H}_{\lambda };\lambda \in \Lambda \}$ be a quantized domain in a Hilbert space $\mathcal{H}$ and \begin{equation*} C(\mathcal{D}_{\mathcal{E}})=\{T\in \mathcal{L}(\mathcal{D}_{\mathcal{E} });TP_{\lambda }=P_{\lambda }TP_{\lambda }\in B(\mathcal{H})\text{ for all } \lambda \in \Lambda \} \end{equation*} where $\mathcal{L}(\mathcal{D}_{\mathcal{E}})$ is the collection of all linear operators on $\mathcal{D}_{\mathcal{E}}$. If $T\in \mathcal{L}( \mathcal{D}_{\mathcal{E}})$, then $T\in C(\mathcal{D}_{\mathcal{E}})$ if and only if $T(\mathcal{H}_{\lambda })\subseteq \mathcal{H}_{\mathcal{\lambda }}$ and $\left. T\right\vert _{\mathcal{H}_{\lambda }}\in B(\mathcal{H}_{\lambda })$ for all $\lambda \in \Lambda $, and so $C(\mathcal{D}_{\mathcal{E}})$ is an algebra. Let \begin{equation*} C^{\ast }(\mathcal{D}_{\mathcal{E}})=\{T\in C(\mathcal{D}_{\mathcal{E} });P_{\lambda }T\subseteq TP_{\lambda }\text{ for all }\lambda \in \Lambda \}. \end{equation*} If $T\in C(\mathcal{D}_{\mathcal{E}})$, then $T\in C^{\ast }(\mathcal{D}_{ \mathcal{E}})$ if and only if $T(\mathcal{H}_{\lambda }^{\bot }\cap \mathcal{ D}_{\mathcal{E}})\subseteq \mathcal{H}_{\lambda }^{\bot }\cap \mathcal{D}_{ \mathcal{E}}$ for all $\lambda \in \Lambda .\ $
If $T\in C^{\ast }(\mathcal{D}_{\mathcal{E}})$, then $\mathcal{D}_{\mathcal{E }}$ $\subseteq $ dom$(T^{\bigstar })$. Moreover, $T^{\bigstar }(\mathcal{H} _{\lambda })\subseteq \mathcal{H}_{\lambda }$ for all $\lambda \in \Lambda $ . Now, let $T^{\ast }=\left. T^{\bigstar }\right\vert _{\mathcal{D}_{ \mathcal{E}}}$. It is easy to check that $T^{\ast }\in C^{\ast }(\mathcal{D} _{\mathcal{E}})$, and so $C^{\ast }(\mathcal{D}_{\mathcal{E}})$ is a unital $ \ast $-algebra.
For each $\lambda \in \Lambda $, the map $\left\Vert \cdot \right\Vert _{\lambda }:C^{\ast }(\mathcal{D}_{\mathcal{E}})\rightarrow \lbrack 0,\infty )$, \begin{equation*} \left\Vert T\right\Vert _{\lambda }=\left\Vert \left. T\right\vert _{\mathcal{H}_{\lambda }}\right\Vert =\sup \{\left\Vert T\left( \xi \right) \right\Vert ;\xi \in \mathcal{H}_{\lambda },\left\Vert \xi \right\Vert \leq 1\} \end{equation*} is a $C^{\ast }$-seminorm on $C^{\ast }(\mathcal{D}_{\mathcal{E}})$. Moreover, $C^{\ast }(\mathcal{D}_{\mathcal{E}})$ is a locally $C^{\ast }$-algebra with respect to the topology determined by the family of $C^{\ast }$-seminorms $\{\left\Vert \cdot \right\Vert _{\lambda }\}_{\lambda \in \Lambda }$, and $b(C^{\ast }(\mathcal{D}_{\mathcal{E}}))$ is identified with the $C^{\ast }$-algebra $\{T\in B\left( \mathcal{H}\right) ;P_{\lambda }T=TP_{\lambda }\ $for all $\lambda \in \Lambda \}$ via the map $T\mapsto \widetilde{T}$, where $\widetilde{T}$ is the extension of $T$ to $\mathcal{H}$ (see the proof of \cite[Lemma 3.1]{D}).
If $\mathcal{E}=\{\mathcal{H}_{\lambda };\lambda \in \Lambda \}$ is a quantized domain in a Hilbert space $\mathcal{H}$, then $\mathcal{D}_{ \mathcal{E}}$ can be regarded as a strict inductive limit of the direct family of Hilbert spaces $\mathcal{E}=\{\mathcal{H}_{\lambda };\lambda \in \Lambda \},$ $\mathcal{D}_{\mathcal{E}}=\lim\limits_{\rightarrow }\mathcal{H} _{\lambda }$, and it is called a locally Hilbert space (see \cite{I}). If $ T\in C^{\ast }(\mathcal{D}_{\mathcal{E}})$, then $T$ is continuous \cite[ Lemma 5.2 ]{I}.
For every locally $C^{\ast }$-algebra $\mathcal{A}$ there is a quantized domain $\mathcal{E}$ in a Hilbert space $\mathcal{H}$ and a local isometric $\ast $-homomorphism $\pi :\mathcal{A\rightarrow }C^{\ast }(\mathcal{D}_{\mathcal{E}})$ \cite[Theorem 7.2]{D1}. This result can be regarded as an unbounded analogue of the Gelfand-Naimark theorem.
\section{Local completely positive maps}
Let $\mathcal{A}$ and $\mathcal{B}$ be two locally $C^{\ast }$-algebras with the topology defined by the family of $C^{\ast }$-seminorms $\left\{ p_{\lambda }\right\} _{\lambda \in \Lambda }$ and $\left\{ q_{\delta }\right\} _{\delta \in \Delta }$, respectively.
\begin{proposition} \label{4} Let $\varphi :\mathcal{A}\rightarrow $ $\mathcal{B}$ be a linear map. If $\varphi $ is local positive, then $\varphi $ is continuous. \end{proposition}
\begin{proof} Since $\varphi $ is local positive, for each $\delta \in \Delta $, there exists $\lambda \in \Lambda $ such that $\varphi \left( a\right) \geq _{\delta }0\ $whenever $a\geq _{\lambda }0$ and $\varphi \left( a\right) =_{\delta }0\ \ \ $whenever $a=_{\lambda }0$. Define the map $\varphi _{\delta }^{+}:\left( \mathcal{A}_{\lambda }\right) _{+}\rightarrow \mathcal{ B}_{\delta }$ by $\varphi _{\delta }^{+}\left( \pi _{\lambda }^{\mathcal{A} }\left( a\right) \right) =\pi _{\delta }^{\mathcal{B}}\left( \varphi \left( a\right) \right) $, and extend it to a linear map $\varphi _{\delta }: \mathcal{A}_{\lambda }\rightarrow \mathcal{B}_{\delta }$. This map is positive, and so continuous. Therefore, there is $C_{\delta }>0\ $such that \begin{equation*} q_{\delta }\left( \varphi \left( a\right) \right) =\left\Vert \pi _{\delta }^{\mathcal{B}}\left( \varphi \left( a\right) \right) \right\Vert _{\mathcal{ B}_{\delta }}=\left\Vert \varphi _{\delta }\left( \pi _{\lambda }^{\mathcal{A }}\left( a\right) \right) \right\Vert \leq C_{\delta }\left\Vert \pi _{\lambda }^{\mathcal{A}}\left( a\right) \right\Vert _{\mathcal{A}_{\lambda }}=C_{\delta }p_{\lambda }\left( a\right) \end{equation*} for all $a\in \mathcal{A}$. \end{proof}
Proposition \ref{4} is a particular case of \cite[Lemma 4.4]{D1}.
\begin{remark} \label{1}Let $\varphi :\mathcal{A}\rightarrow $ $\mathcal{B}$ be a local positive linear map. Then, for each $\delta \in \Delta $, there exist $ \lambda \in \Lambda $ and a positive map $\varphi _{\delta }:\mathcal{A} _{\lambda }\rightarrow \mathcal{B}_{\delta }$ such that $\varphi _{\delta }\left( \pi _{\lambda }^{\mathcal{A}}\left( a\right) \right) =\pi _{\delta }^{\mathcal{B}}\left( \varphi \left( a\right) \right) $ for all $a\in \mathcal{A}$. \end{remark}
\begin{proposition} \label{5}Let $\mathcal{A}$ and $\mathcal{B}$\ be two locally $C^{\ast }$ -algebras and $\varphi :\mathcal{A}\rightarrow $ $\mathcal{B}$ be a linear map. Then $\varphi $ is local completely positive if and only if $\varphi $ is continuous and completely positive. \end{proposition}
\begin{proof} If $\varphi $ is local completely positive, then, by Proposition \ref{4}, for each $n\in \mathbb{N}$, the map $\varphi ^{\left( n\right) }$ is continuous and by \cite[Proposition 2.1]{MJ1}, it is positive. Therefore, $ \varphi $ is continuous and completely positive.
Conversely, suppose that $\varphi $ is continuous and completely positive. Since $\varphi $ is continuous, for each $\delta \in \Delta $, there exist $ \lambda \in \Lambda $ and a positive map $\varphi _{\delta }:\mathcal{A} _{\lambda }\rightarrow \mathcal{B}_{\delta }$ such that $\varphi _{\delta }\left( \pi _{\lambda }^{\mathcal{A}}\left( a\right) \right) =\pi _{\delta }^{\mathcal{B}}\left( \varphi \left( a\right) \right) $ for all $a\in \mathcal{A}$. Let $\left[ a_{ij}\right] _{i,j=1}^{n}\in M_{n}\left( \mathcal{ A}\right) $ such that $\left[ a_{ij}\right] _{i,j=1}^{n}\geq _{\lambda }0$. This means that there exist $\left[ b_{ij}\right] _{i,j=1}^{n},$ $\left[ c_{ij}\right] _{i,j=1}^{n}\in M_{n}\left( \mathcal{A}\right) $ such that $ \left[ a_{ij}\right] _{i,j=1}^{n}=\left( \left[ b_{ij}\right] _{i,j=1}^{n}\right) ^{\ast }\left[ b_{ij}\right] _{i,j=1}^{n}+$ $\left[ c_{ij}\right] _{i,j=1}^{n}\ $and $p_{\lambda }^{n}\left( \left[ c_{ij}\right] _{i,j=1}^{n}\right) =0$. Then \begin{eqnarray*} \pi _{\delta }^{M_{n}(\mathcal{B)}}\left( \varphi ^{\left( n\right) }\left( \left[ c_{ij}\right] _{i,j=1}^{n}\right) \right) &=&\left[ \pi _{\delta }^{ \mathcal{B}}\left( \varphi \left( c_{ij}\right) \right) \right] _{i,j=1}^{n}= \left[ \varphi _{\delta }\left( \pi _{\lambda }^{\mathcal{A}}\left( c_{ij}\right) \right) \right] _{i,j=1}^{n} \\ &=&\varphi _{\delta }^{\left( n\right) }\left( \pi _{\lambda }^{M_{n}\left( \mathcal{A}\right) }\left( \left[ c_{ij}\right] _{i,j=1}^{n}\right) \right) = \left[ 0\right] _{i,j=1}^{n} \end{eqnarray*} and, since $\varphi \ $is completely positive, \begin{equation*} \pi _{\delta }^{M_{n}(\mathcal{B)}}\left( \varphi ^{\left( n\right) }\left( \left[ a_{ij}\right] _{i,j}^{n}\right) \right) =\pi _{\delta }^{M_{n}( \mathcal{B)}}\left( \varphi ^{\left( n\right) }\left( \left( \left[ b_{ij} \right] _{i,j=1}^{n}\right) ^{\ast }\left[ b_{ij}\right] _{i,j=1}^{n}\right) \right) \geq 0. 
\end{equation*} If $\left[ a_{ij}\right] _{i,j}^{n}=_{\lambda }\left[ 0\right] _{i,j=1}^{n}$, $\ $then $p_{\lambda }^{n}\left( \left[ a_{ij}\right] _{i,j}^{n}\right) =0\ $ and \begin{equation*} \pi _{\delta }^{M_{n}(\mathcal{B)}}\left( \varphi ^{\left( n\right) }\left( \left[ a_{ij}\right] _{i,j=1}^{n}\right) \right) =\varphi _{\delta }^{\left( n\right) }\left( \pi _{\lambda }^{M_{n}\left( \mathcal{A}\right) }\left( \left[ a_{ij}\right] _{i,j=1}^{n}\right) \right) =\left[ 0\right] _{i,j=1}^{n}. \end{equation*} Therefore, \begin{equation*} \varphi ^{\left( n\right) }\left( \left[ a_{ij}\right] _{i,j}^{n}\right) \geq _{\delta }0 \end{equation*} for all $\left[ a_{ij}\right] _{i,j=1}^{n}\in M_{n}\left( \mathcal{A}\right) $ such that $\left[ a_{ij}\right] _{i,j=1}^{n}\geq _{\lambda }0,\ $and $ \varphi ^{\left( n\right) }\left( \left[ a_{ij}\right] _{i,j}^{n}\right) =_{\delta }\left[ 0\right] _{i,j=1}^{n}$ for all $\left[ a_{ij}\right] _{i,j=1}^{n}\in M_{n}\left( \mathcal{A}\right) $ such that $\left[ a_{ij} \right] _{i,j=1}^{n}=_{\lambda }\left[ 0\right] _{i,j=1}^{n}$ and for all $ n, $ and so $\varphi $ is local completely positive. \end{proof}
\begin{corollary} \label{localpositive}Let $\varphi :\mathcal{A}\rightarrow $ $\mathcal{B}$ be a linear map. Then $\varphi $ is local positive if and only if $\varphi $ is continuous and positive. \end{corollary}
\begin{remark} \label{London} Let $\varphi :\mathcal{A}\rightarrow $ $\mathcal{B}$ be a local completely positive map. If the locally $C^{\ast }$-algebra $\mathcal{A }$ is unital, then $\varphi $ is a strictly continuous completely positive\ map \cite[Remark 4.4]{J}. In particular, the structure theorem \cite[Theorem 4.6]{J} is valid for local completely positive maps on unital locally $ C^{\ast }$-algebras. \end{remark}
Let $\mathcal{CPCC}_{\text{loc}}(\mathcal{A},C^{\ast }(\mathcal{D}_{\mathcal{ E}}))$ denote the set of all maps $\varphi :\mathcal{A}\rightarrow C^{\ast }( \mathcal{D}_{\mathcal{E}})$ which are local completely positive and local completely contractive. If $\varphi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{ A},C^{\ast }(\mathcal{D}_{\mathcal{E}})),$ then $\varphi \left( b(\mathcal{A} )\right) \subseteq b(C^{\ast }(\mathcal{D}_{\mathcal{E}}))$. Moreover, there is a completely contractive and completely positive map $\left. \varphi \right\vert _{b(\mathcal{A})}:b(\mathcal{A})\rightarrow B(\mathcal{H})$ such that $\left. \left. \varphi \right\vert _{b(\mathcal{A})}\left( a\right) \right\vert _{\mathcal{D}_{\mathcal{E}}}=\varphi \left( a\right) \ $for all $ a\in b(\mathcal{A}).$
The following result is a version of the Stinespring theorem for unbounded local completely positive and local completely contractive maps.
\begin{theorem} \label{s} \cite[Theorem 5.1]{D1} Let $\mathcal{A}$ be a unital locally $ C^{\ast }$-algebra and $\varphi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A} ,C^{\ast }(\mathcal{D}_{\mathcal{E}}))$. Then there exist a quantized domain $\{\mathcal{H}^{\varphi },\mathcal{E}^{\varphi },\mathcal{D}_{\mathcal{E} ^{\varphi }}\}$, where $\mathcal{E}^{\varphi }=\{\mathcal{H}_{\lambda }^{\varphi };\lambda \in \Lambda \}$ is an upward filtered family of closed subspaces of $\mathcal{H}^{\varphi }$, a contraction $V_{\varphi }:\mathcal{H }\rightarrow \mathcal{H}^{\varphi }$ and a unital local contractive $\ast $ -homomorphism $\pi _{\varphi }:\mathcal{A\rightarrow }C^{\ast }(\mathcal{D}_{ \mathcal{E}^{\varphi }})$ such that
\begin{enumerate} \item $V_{\varphi }\left( \mathcal{E}\right) \subseteq \mathcal{E}^{\varphi };$
\item $\varphi \left( a\right) \subseteq V_{\varphi }^{\ast }\pi _{\varphi }\left( a\right) V_{\varphi };$
\item $\mathcal{H}_{\lambda }^{\varphi }=\left[ \pi _{\varphi }\left( \mathcal{A}\right) V_{\varphi }\mathcal{H}_{\lambda }\right] $ for all $ \lambda \in \Lambda .$
Moreover, if $\varphi \left( 1_{\mathcal{A}}\right) =$id$_{\mathcal{D}_{ \mathcal{E}}}$, then $V_{\varphi }$ is an isometry. \end{enumerate} \end{theorem}
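For orientation, when $\mathcal{E}=\{\mathcal{H}\}$, so that $C^{\ast }(\mathcal{D}_{\mathcal{E}})=B(\mathcal{H})$ and all operators involved are everywhere defined and bounded, condition (2) of Theorem \ref{s} becomes an equality and the statement reduces to the classical Stinespring representation
\begin{equation*}
\varphi \left( a\right) =V_{\varphi }^{\ast }\pi _{\varphi }\left( a\right) V_{\varphi }\quad \text{for all }a\in \mathcal{A}\text{;}
\end{equation*}
the graph inclusion in (2) is needed in general only because $\pi _{\varphi }\left( a\right) $ is an unbounded operator defined on $\mathcal{D}_{\mathcal{E}^{\varphi }}$.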
The triple $\left( \pi _{\varphi },V_{\varphi },\{\mathcal{H}^{\varphi },\mathcal{E}^{\varphi },\mathcal{D}_{\mathcal{E}^{\varphi }}\}\right) $ constructed in Theorem \ref{s} is called a minimal Stinespring dilation associated to $\varphi $. It is unique up to unitary equivalence in the following sense: if $\left( \pi _{\varphi },V_{\varphi },\{\mathcal{H}^{\varphi },\mathcal{E}^{\varphi },\mathcal{D}_{\mathcal{E}^{\varphi }}\}\right) $ and $\left( \widetilde{\pi }_{\varphi },\widetilde{V}_{\varphi },\{\widetilde{\mathcal{H}}^{\varphi },\widetilde{\mathcal{E}}^{\varphi },\widetilde{\mathcal{D}}_{\widetilde{\mathcal{E}}^{\varphi }}\}\right) $ are two minimal Stinespring dilations associated to $\varphi $, then there is a unitary operator $U_{\varphi }:\mathcal{H}^{\varphi }\rightarrow \widetilde{\mathcal{H}}^{\varphi }$ such that $U_{\varphi }V_{\varphi }=\widetilde{V}_{\varphi }$ and $U_{\varphi }\pi _{\varphi }\left( a\right) \subseteq \widetilde{\pi }_{\varphi }\left( a\right) U_{\varphi }$ for all $a\in \mathcal{A}$ \cite[Theorem 3.4]{BGK}.
If $\left( \pi _{\varphi },V_{\varphi },\{\mathcal{H}^{\varphi },\mathcal{E}^{\varphi },\mathcal{D}_{\mathcal{E}^{\varphi }}\}\right) $ is a minimal Stinespring dilation associated to $\varphi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A},C^{\ast }(\mathcal{D}_{\mathcal{E}}))$, then $\left( \left. \pi _{\varphi }\right\vert _{b(\mathcal{A})},V_{\varphi },\mathcal{H}^{\varphi }\right) $, where $\left. \left. \pi _{\varphi }\right\vert _{b(\mathcal{A})}\left( a\right) \right\vert _{\mathcal{D}_{\mathcal{E}^{\varphi }}}=\pi _{\varphi }\left( a\right) $ for all $a\in b(\mathcal{A})$, is a minimal Stinespring dilation associated with $\left. \varphi \right\vert _{b(\mathcal{A})}$.
Let $\mathcal{E}=\{\mathcal{H}_{\iota };\iota \in \Upsilon \}$ and $\mathcal{ F}=\{\mathcal{K}_{\gamma };\gamma \in \Gamma \}\ $be quantized domains in Hilbert spaces $\mathcal{H}$ and $\mathcal{K}$, respectively. Then \begin{equation*} \mathcal{E}\otimes \mathcal{F}=\left\{ \mathcal{H}_{\iota }\otimes \mathcal{K }_{\gamma };\left( \iota ,\gamma \right) \in \Upsilon \times \Gamma \right\} \end{equation*} is a quantized domain in the Hilbert space $\mathcal{H}\otimes \mathcal{K}$, with the union space $\mathcal{D}_{\mathcal{E}\otimes \mathcal{F} }=\tbigcup\limits_{\left( \iota ,\gamma \right) \in \Upsilon \times \Gamma } \mathcal{H}_{\iota }\otimes \mathcal{K}_{\gamma }.$\ If $\mathcal{E}=\{ \mathcal{H}\}$, then $\mathcal{H}\otimes \mathcal{F}=\left\{ \mathcal{H} \otimes \mathcal{K}_{\gamma };\gamma \in \Gamma \right\} $ is a quantized domain in the Hilbert space $\mathcal{H}\otimes \mathcal{K}$ with the union space $\mathcal{D}_{\mathcal{H}\otimes \mathcal{F}}=\tbigcup\limits_{\gamma \in \Gamma }\mathcal{H}\otimes \mathcal{K}_{\gamma }$.
The map $\Phi :C^{\ast }(\mathcal{D}_{\mathcal{E}})\otimes _{\text{alg} }C^{\ast }(\mathcal{D}_{\mathcal{F}})\rightarrow C^{\ast }(\mathcal{D}_{ \mathcal{E}\otimes \mathcal{F}})$ given by \begin{equation*} \Phi \left( T\otimes S\right) \left( \xi \otimes \eta \right) =T\xi \otimes S\eta ,T\in C^{\ast }(\mathcal{D}_{\mathcal{E}}),S\in C^{\ast }(\mathcal{D}_{ \mathcal{F}}),\xi \in \mathcal{D}_{\mathcal{E}},\eta \in \mathcal{D}_{ \mathcal{F}} \end{equation*} identifies $C^{\ast }(\mathcal{D}_{\mathcal{E}})\otimes _{\text{alg}}C^{\ast }(\mathcal{D}_{\mathcal{F}})\ $with a $\ast $-subalgebra of $C^{\ast }( \mathcal{D}_{\mathcal{E}\otimes \mathcal{F}})$. The minimal tensor product of the locally $C^{\ast }$-algebras $C^{\ast }(\mathcal{D}_{\mathcal{E}})$ and $C^{\ast }(\mathcal{D}_{\mathcal{F}})$ is the locally $C^{\ast }$ -algebra $C^{\ast }(\mathcal{D}_{\mathcal{E}})\otimes C^{\ast }(\mathcal{D}_{ \mathcal{F}})$ obtained by the completion of the $\ast $-subalgebra $C^{\ast }(\mathcal{D}_{\mathcal{E}})\otimes _{\text{alg}}C^{\ast }(\mathcal{D}_{ \mathcal{F}})$ in $C^{\ast }(\mathcal{D}_{\mathcal{E}\otimes \mathcal{F}})$ (see for example \cite{G}).
Let $\mathcal{A}$ and $\mathcal{B}$ be two locally $C^{\ast }$-algebras. Recall that $\mathcal{A}$ and $\mathcal{B}$ can be identified with a locally $C^{\ast }$-subalgebra in $C^{\ast }(\mathcal{D}_{\mathcal{E}})$ and $ C^{\ast }(\mathcal{D}_{\mathcal{F}})$, respectively for some quantized domains $\mathcal{D}_{\mathcal{E}}$ and $\mathcal{D}_{\mathcal{F}}$. The minimal or spatial tensor product of the locally $C^{\ast }$-algebras $ \mathcal{A}$ and $\mathcal{B}$ is the locally $C^{\ast }$-algebra $\mathcal{A }\otimes \mathcal{B}$ obtained by the completion of the $\ast $-subalgebra $ \mathcal{A}\otimes _{\text{alg}}\mathcal{B}$\ in $C^{\ast }(\mathcal{D}_{ \mathcal{E}\otimes \mathcal{F}})$. In fact, $\mathcal{A}\otimes \mathcal{B}$ can be identified with the projective limit of the projective system of $ C^{\ast }$-algebras $\{\mathcal{A}_{\lambda }\otimes \mathcal{B}_{\delta };\pi _{\lambda _{1}\lambda _{2}}^{\mathcal{A}}\otimes \pi _{\delta _{1}\delta _{2}}^{\mathcal{B}},\lambda _{1}\geq \lambda _{2},\delta _{1}\geq \delta _{2}\}_{\left( \lambda ,\delta \right) \in \Lambda \times \Delta }$ (see \cite{Ph,F}).
If $\mathcal{A}$ is a $C^{\ast }$-algebra acting nondegenerately on a Hilbert space $\mathcal{H}$ and $\mathcal{B}$ is a locally $C^{\ast }$ -algebra that is identified with a locally $C^{\ast }$-subalgebra in $ C^{\ast }(\mathcal{D}_{\mathcal{F}})$, then the minimal tensor product $ \mathcal{A}\otimes \mathcal{B}$ of $\mathcal{A}$ and $\mathcal{B}$ is the completion of the $\ast $-subalgebra $\mathcal{A}\otimes _{\text{alg}} \mathcal{B}$\ in $C^{\ast }(\mathcal{D}_{\mathcal{H}\otimes \mathcal{F}})$.
Let $\mathcal{A}$ and $\mathcal{B}$ be two locally $C^{\ast }$-algebras, $ \pi _{1}:$ $\mathcal{A\rightarrow }C^{\ast }(\mathcal{D}_{\mathcal{E}})$ and $\pi _{2}:$ $\mathcal{B\rightarrow }C^{\ast }(\mathcal{D}_{\mathcal{F}})$ be two local contractive $\ast $-morphisms. By the functorial property of the minimal tensor product of $C^{\ast }$-algebras and taking into account the above discussion we conclude that there is a unique local $\ast $-morphism $ \pi _{1}\otimes \pi _{2}:\mathcal{A\otimes B\rightarrow }C^{\ast }(\mathcal{D }_{\mathcal{E}\otimes \mathcal{F}})$ such that \begin{equation*} \left( \pi _{1}\otimes \pi _{2}\right) \left( a\otimes b\right) =\pi _{1}\left( a\right) \otimes \pi _{2}\left( b\right) \end{equation*} where $\left( \pi _{1}\left( a\right) \otimes \pi _{2}\left( b\right) \right) \left( \xi \otimes \eta \right) =\pi _{1}\left( a\right) \xi \otimes \pi _{2}\left( b\right) \eta $ for all $\xi \in \mathcal{D}_{\mathcal{E} },\eta \in \mathcal{D}_{\mathcal{F}}$.
\begin{proposition} \label{3} Let $\mathcal{A}$ and $\mathcal{B}$ be two unital locally $C^{\ast }$-algebras and let $\varphi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A} ,C^{\ast }(\mathcal{D}_{\mathcal{E}}))$ and $\psi \in \mathcal{CPCC}_{\text{ loc}}(\mathcal{B},C^{\ast }(\mathcal{D}_{\mathcal{F}}))$. Then there is a unique map $\varphi \otimes \psi \in $ $\mathcal{CPCC}_{\text{loc}}(\mathcal{ A\otimes B},C^{\ast }(\mathcal{D}_{\mathcal{E}\otimes \mathcal{F}}))$ such that \begin{equation*} \left( \varphi \otimes \psi \right) \left( a\otimes b\right) =\varphi \left( a\right) \otimes \psi \left( b\right) \ \text{for all }a\in \mathcal{A\ } \text{and }b\in \mathcal{B}. \end{equation*} Moreover, if $\left( \pi _{\varphi },V_{\varphi },\{\mathcal{H}^{\varphi }, \mathcal{E}^{\varphi },\mathcal{D}_{\mathcal{E}^{\varphi }}\}\right) $ is a minimal Stinespring dilation associated to $\varphi $ and $\left( \pi _{\psi },V_{\psi },\{\mathcal{H}^{\psi },\mathcal{E}^{\psi },\mathcal{D}_{\mathcal{E }^{\psi }}\}\right) $ is a minimal Stinespring dilation associated to $\psi $ , then $\left( \pi _{\varphi }\otimes \pi _{\psi },V_{\varphi }\otimes V_{\psi },\{\mathcal{H}^{\varphi }\otimes \mathcal{H}^{\psi },\mathcal{E} ^{\varphi }\otimes \mathcal{E}^{\psi },\mathcal{D}_{\mathcal{E}^{\varphi }\otimes \mathcal{E}^{\psi }}\}\right) \ $is a minimal Stinespring dilation associated to $\varphi \otimes \psi .$ \end{proposition}
\begin{proof} Since $\pi _{\varphi }:\mathcal{A\rightarrow }C^{\ast }(\mathcal{D}_{\mathcal{E}^{\varphi }})$ and $\pi _{\psi }:\mathcal{B\rightarrow }C^{\ast }(\mathcal{D}_{\mathcal{E}^{\psi }})$ are local contractive $\ast $-morphisms, there is a local contractive $\ast $-morphism $\pi _{\varphi }\otimes \pi _{\psi }:\mathcal{A\otimes B\rightarrow }C^{\ast }(\mathcal{D}_{\mathcal{E}^{\varphi }\otimes \mathcal{E}^{\psi }})$ such that \begin{equation*} \left( \pi _{\varphi }\otimes \pi _{\psi }\right) \left( a\otimes b\right) =\pi _{\varphi }\left( a\right) \otimes \pi _{\psi }\left( b\right) , \end{equation*} where $\left( \pi _{\varphi }\left( a\right) \otimes \pi _{\psi }\left( b\right) \right) \left( \xi \otimes \eta \right) =\pi _{\varphi }\left( a\right) \xi \otimes \pi _{\psi }\left( b\right) \eta $ for all $\xi \in \mathcal{D}_{\mathcal{E}^{\varphi }},\eta \in \mathcal{D}_{\mathcal{E}^{\psi }}$. Since $V_{\varphi }:\mathcal{H}\rightarrow \mathcal{H}^{\varphi }$ and $V_{\psi }:\mathcal{K}\rightarrow \mathcal{H}^{\psi }$ are contractions, we get a contraction $V_{\varphi }\otimes V_{\psi }:\mathcal{H}\otimes \mathcal{K}\rightarrow \mathcal{H}^{\varphi }\otimes \mathcal{H}^{\psi }$. Since $V_{\varphi }\left( \mathcal{E}\right) \subseteq \mathcal{E}^{\varphi }$ and $V_{\psi }\left( \mathcal{F}\right) \subseteq \mathcal{E}^{\psi }$, we have $\left( V_{\varphi }\otimes V_{\psi }\right) \left( \mathcal{H}_{\iota }\otimes \mathcal{K}_{\gamma }\right) \subseteq \mathcal{H}_{\iota }^{\varphi }\otimes \mathcal{H}_{\gamma }^{\psi }$ for all $\left( \iota ,\gamma \right) \in \Upsilon \times \Gamma $, thus \begin{equation*} \left( V_{\varphi }\otimes V_{\psi }\right) \left( \mathcal{E}\otimes \mathcal{F}\right) \subseteq \mathcal{E}^{\varphi }\otimes \mathcal{E}^{\psi }.
\end{equation*} Therefore, we may consider the map $\varphi \otimes \psi :\mathcal{A\otimes B\rightarrow }C^{\ast }(\mathcal{D}_{\mathcal{E}\otimes \mathcal{F}})$ given by \begin{equation*} \left( \varphi \otimes \psi \right) \left( c\right) =\left. \left( V_{\varphi }\otimes V_{\psi }\right) ^{\ast }\left( \pi _{\varphi }\otimes \pi _{\psi }\right) \left( c\right) \left( V_{\varphi }\otimes V_{\psi }\right) \right\vert _{\mathcal{D}_{\mathcal{E}\otimes \mathcal{F}}}\text{.} \end{equation*} Since $\pi _{\varphi }\otimes \pi _{\psi }$ is a local contractive $\ast $-morphism and $V_{\varphi }\otimes V_{\psi }$ is a contraction, $\varphi \otimes \psi $ is a local completely positive and local completely contractive map from $\mathcal{A\otimes B}$ to $C^{\ast }(\mathcal{D}_{\mathcal{E}\otimes \mathcal{F}})$. Clearly, $\left( \varphi \otimes \psi \right) \left( a\otimes b\right) =\varphi \left( a\right) \otimes \psi \left( b\right) $ for all $a\in \mathcal{A}$ and $b\in \mathcal{B}$.
To show that $\left( \pi _{\varphi }\otimes \pi _{\psi },V_{\varphi }\otimes V_{\psi },\{\mathcal{H}^{\varphi }\otimes \mathcal{H}^{\psi },\mathcal{E} ^{\varphi }\otimes \mathcal{E}^{\psi },\mathcal{D}_{\mathcal{E}^{\varphi }\otimes \mathcal{E}^{\psi }}\}\right) \ $is a minimal Stinespring dilation associated with $\varphi \otimes \psi $ it remains to show that $\mathcal{H} _{\iota }^{\varphi }\otimes \mathcal{H}_{\gamma }^{\psi }=\left[ \left( \pi _{\varphi }\otimes \pi _{\psi }\right) \left( \mathcal{A\otimes B}\right) \left( V_{\varphi }\otimes V_{\psi }\right) \left( \mathcal{H}_{\iota }\otimes \mathcal{K}_{\gamma }\right) \right] $ for all $\left( \iota , \mathcal{\gamma }\right) \in \Upsilon \times \Gamma $.
Let $\left( \iota ,\mathcal{\gamma }\right) \in \Upsilon \times \Gamma $. Then \begin{eqnarray*} \left[ \left( \pi _{\varphi }\otimes \pi _{\psi }\right) \left( \mathcal{ A\otimes B}\right) \left( V_{\varphi }\otimes V_{\psi }\right) \left( \mathcal{H}_{\iota }\otimes \mathcal{K}_{\gamma }\right) \right] &=&\left[ \pi _{\varphi }\left( \mathcal{A}\right) V_{\varphi }\left( \mathcal{H} _{\iota }\right) \otimes \pi _{\psi }\left( \mathcal{B}\right) V_{\psi }\left( \mathcal{K}_{\gamma }\right) \right] \\ &=&\mathcal{H}_{\iota }^{\varphi }\otimes \mathcal{H}_{\gamma }^{\psi }, \end{eqnarray*} as required.
To show the uniqueness of the map $\varphi \otimes \psi $, let $\phi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A\otimes B},C^{\ast }(\mathcal{D}_{\mathcal{E}\otimes \mathcal{F}}))$ be such that $\phi \left( a\otimes b\right) =\varphi \left( a\right) \otimes \psi \left( b\right) $ for all $a\in \mathcal{A}$ and $b\in \mathcal{B}$. Then $\phi $ and $\varphi \otimes \psi $ are continuous and agree on the dense $\ast $-subalgebra $\mathcal{A\otimes }_{\text{alg}}\mathcal{B}$, and it follows that $\phi =\varphi \otimes \psi $. \end{proof}
\section{Main results}
Suppose that $\{\mathcal{H};\mathcal{E};\mathcal{D}_{\mathcal{E}}\}$ is a Fr\'{e}chet quantized domain in a Hilbert space $\mathcal{H}$ (that is, $\mathcal{E}=\{\mathcal{H}_{n}\}_{n\in \mathbb{N}}$). For each $n\in \mathbb{N}^{\ast }$, let $\mathcal{H}_{n}^{c}$ be the orthogonal complement of the closed subspace $\mathcal{H}_{n-1}$ in $\mathcal{H}_{n}$, and put $\mathcal{H}_{0}^{c}=\mathcal{H}_{0}$. Then $\mathcal{H}_{n}=\tbigoplus\limits_{k\leq n}\mathcal{H}_{k}^{c}$. For each $n\in \mathbb{N}$ and for each $\xi \in \mathcal{H}_{k}^{c},k\leq n$, the rank one operator $\theta _{\xi ,\xi }:\mathcal{D}_{\mathcal{E}}\rightarrow \mathcal{D}_{\mathcal{E}}$, $\theta _{\xi ,\xi }(\eta )=\xi \left\langle \xi ,\eta \right\rangle $, is an element of $C^{\ast }(\mathcal{D}_{\mathcal{E}})$. The closed two-sided $\ast $-ideal of $C^{\ast }(\mathcal{D}_{\mathcal{E}})$ generated by these rank one operators is denoted by $K(\mathcal{D}_{\mathcal{E}})$ and is called the locally $C^{\ast }$-algebra of all compact operators on $\mathcal{D}_{\mathcal{E}}$. Clearly, for each $n\in \mathbb{N}$, $K(\mathcal{D}_{\mathcal{E}})_{n}\subseteq K\left( \mathcal{H}_{n}\right) $, and $K(\mathcal{D}_{\mathcal{E}})_{n}$ is a closed two-sided $\ast $-ideal of $C^{\ast }(\mathcal{D}_{\mathcal{E}})_{n}$.
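As a simple illustration, take $\mathcal{H}=\ell ^{2}(\mathbb{N})$ with its standard orthonormal basis $\{e_{k}\}_{k\in \mathbb{N}}$ and $\mathcal{H}_{n}=\mathrm{span}\{e_{0},\ldots ,e_{n}\}$. Then $\mathcal{H}_{0}^{c}=\mathbb{C}e_{0}$ and $\mathcal{H}_{n}^{c}=\mathbb{C}e_{n}$ for $n\geq 1$, and for $\xi =e_{n}$ the rank one operator above is
\begin{equation*}
\theta _{e_{n},e_{n}}\left( \eta \right) =e_{n}\left\langle e_{n},\eta \right\rangle ,\quad \eta \in \mathcal{D}_{\mathcal{E}},
\end{equation*}
that is, the restriction to $\mathcal{D}_{\mathcal{E}}$ of the orthogonal projection onto $\mathbb{C}e_{n}$.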
Let $\mathcal{A}$ be a unital locally $C^{\ast }$-algebra, $\{\mathcal{H}; \mathcal{E};\mathcal{D}_{\mathcal{E}}\}$ be a Fr\'{e}chet quantized domain in a Hilbert space $\mathcal{H}$ with $\mathcal{E=\{H}_{n}\mathcal{\}}_{n\in \mathbb{N}}$, $a\in \mathcal{A}$ and $c\in \mathcal{A\otimes }C^{\ast }( \mathcal{D}_{\mathcal{E}})$. If $\xi \in \mathcal{H}_{n}^{c}$ and $a\otimes \theta _{\xi ,\xi }\geq _{\left( \lambda ,n\right) }c$ $\geq _{\left( \lambda ,n\right) }0$, then there is $b\in \mathcal{A}$ such that $ c=_{\left( \lambda ,n\right) }b\otimes \theta _{\xi ,\xi }\ $. Indeed, from $ a\otimes \theta _{\xi ,\xi }\geq _{\left( \lambda ,n\right) }c$ $\geq _{\left( \lambda ,n\right) }0$ we deduce that \begin{equation*} \pi _{\lambda }^{\mathcal{A}}\left( a\right) \otimes \left. \theta _{\xi ,\xi }\right\vert _{\mathcal{H}_{n}}=\pi _{\left( \lambda ,n\right) }^{ \mathcal{A\otimes }C^{\ast }(\mathcal{D}_{\mathcal{E}})}\left( a\otimes \theta _{\xi ,\xi }\right) \geq \pi _{\left( \lambda ,n\right) }^{\mathcal{ A\otimes }C^{\ast }(\mathcal{D}_{\mathcal{E}})}\left( c\right) \geq 0 \end{equation*} and by \cite[Lemma 2.1]{BO}, there is $b_{\lambda }\in \mathcal{A}_{\lambda } $ such that $\pi _{\left( \lambda ,n\right) }^{\mathcal{A\otimes }C^{\ast }(\mathcal{D}_{\mathcal{E}})}\left( c\right) =b_{\lambda }\otimes \left. \theta _{\xi ,\xi }\right\vert _{\mathcal{H}_{n}}.$ Therefore, there is $ b\in \mathcal{A}$ such that \begin{equation*} \pi _{\left( \lambda ,n\right) }^{\mathcal{A\otimes }C^{\ast }(\mathcal{D}_{ \mathcal{E}})}\left( c\right) =b_{\lambda }\otimes \left. \theta _{\xi ,\xi }\right\vert _{\mathcal{H}_{n}}=\pi _{\lambda }^{\mathcal{A}}\left( b\right) \otimes \left. \theta _{\xi ,\xi }\right\vert _{\mathcal{H}_{n}}=\pi _{\left( \lambda ,n\right) }^{\mathcal{A\otimes }C^{\ast }(\mathcal{D}_{ \mathcal{E}})}\left( b\otimes \theta _{\xi ,\xi }\right) . \end{equation*}
\begin{theorem} \label{6} Let $\mathcal{A}$ and $\mathcal{B}$\ be two unital locally $ C^{\ast }$-algebras, $\{\mathcal{H};\mathcal{E};\mathcal{D}_{\mathcal{E}}\}$ be a Fr\'{e}chet quantized domain in a Hilbert space $\mathcal{H}$ with $ \mathcal{E=\{H}_{n}\mathcal{\}}_{n\in \mathbb{N}},$ $\varphi :$ $\mathcal{A}$ $\rightarrow $ $\mathcal{B\ }$and $\phi :\mathcal{A\otimes }C^{\ast }( \mathcal{D}_{\mathcal{E}})\rightarrow \mathcal{B\otimes }C^{\ast }(\mathcal{D }_{\mathcal{E}})$ be two linear maps. Then the following conditions are equivalent:
\begin{enumerate} \item $\phi $ and $\varphi \otimes $id$_{C^{\ast }(\mathcal{D}_{\mathcal{E} })}-\phi $ are local positive.
\item There exists a local positive map $\psi :\mathcal{A\rightarrow B\ }$ such that $\varphi -\psi $ is local positive and $\phi =\psi \otimes $ id$ _{C^{\ast }(\mathcal{D}_{\mathcal{E}})}$. \end{enumerate} \end{theorem}
\begin{proof} $(1)\Rightarrow (2)$ Since the maps $\phi $ and $\varphi \otimes $id$ _{C^{\ast }(\mathcal{D}_{\mathcal{E}})}-\phi $ are local positive, the map $ \varphi \otimes $id$_{C^{\ast }(\mathcal{D}_{\mathcal{E}})}$ is local positive. Then, $\varphi $ is local positive, and by Corollary \ref {localpositive} \begin{equation*} 0\leq \phi \left( a\otimes T\right) \leq \varphi \left( a\right) \otimes T \end{equation*} for all $a\in \mathcal{A}\ $with $a\geq 0$ and for all $T\in C^{\ast }( \mathcal{D}_{\mathcal{E}})$ with $T\geq 0.$ Therefore, for each $\left( \delta ,n\right) \in \Delta \times \mathbb{N},$ there are $\lambda _{0}\in \Lambda $ and $C_{0}>0$ such that \begin{equation*} \left\Vert \phi \left( a\otimes T\right) \right\Vert _{\left( \delta ,n\right) }\leq \left\Vert \varphi \left( a\right) \otimes T\right\Vert _{\left( \delta ,n\right) }=\left\Vert \varphi \left( a\right) \right\Vert _{\delta }\left\Vert T\right\Vert _{n}\leq C_{0}p_{\lambda _{0}}\left( a\right) \left\Vert T\right\Vert _{n} \end{equation*} for all $a\in \mathcal{A}\ $with $a\geq 0$ and for all $T\in C^{\ast }( \mathcal{D}_{\mathcal{E}})$ with $T\geq 0$. Therefore, there exist maps $ \phi _{\left( \delta ,n\right) }:$ $\mathcal{A}_{\lambda _{0}}\mathcal{ \otimes }C^{\ast }(\mathcal{D}_{\mathcal{E}})_{n}\rightarrow \mathcal{B} _{\delta }\mathcal{\otimes }C^{\ast }(\mathcal{D}_{\mathcal{E}})_{n}$ and $ \varphi _{\delta }:\mathcal{A}_{\lambda _{0}}\rightarrow \mathcal{B}_{\delta }$ such that \begin{equation*} \pi _{\left( \delta ,n\right) }^{\mathcal{B\otimes }C^{\ast }(\mathcal{D}_{ \mathcal{E}})}\circ \phi =\phi _{\left( \delta ,n\right) }\circ \pi _{\left( \lambda _{0},n\right) }^{\mathcal{A\otimes }C^{\ast }(\mathcal{D}_{\mathcal{E }})}\text{ and }\pi _{\delta }^{\mathcal{B}}\circ \varphi =\varphi _{\delta }\circ \pi _{\lambda _{0}}^{\mathcal{A}}. 
\end{equation*} Moreover, $\phi _{\left( \delta ,n\right) }$ and $\varphi _{\delta }\otimes $ id$_{C^{\ast }(\mathcal{D}_{\mathcal{E}})_{n}}-\phi _{\left( \delta ,n\right) }$ are positive. Thus, by the proof of \cite[Theorem 2]{BO}, there is a positive map $\widetilde{\psi }_{\delta }:\mathcal{A}_{\lambda _{0}}\rightarrow \mathcal{B}_{\delta }$ such that \begin{equation*} \phi _{\left( \delta ,n\right) }\left( \pi _{\lambda _{0}}^{\mathcal{A} }\left( a\right) \otimes \left. \theta _{\xi ,\xi }\right\vert _{\mathcal{H} _{n}}\right) =\widetilde{\psi }_{\delta }\left( \pi _{\lambda _{0}}^{ \mathcal{A}}\left( a\right) \right) \otimes \left. \theta _{\xi ,\xi }\right\vert _{\mathcal{H}_{n}} \end{equation*} for all $a\in \mathcal{A}$ such that $a\geq _{\lambda _{0}}0$ and $\xi \in \mathcal{H}_{k}^{c},k\leq n$, and $\varphi _{\delta }-\widetilde{\psi } _{\delta }$ is positive. Since $K(\mathcal{D}_{\mathcal{E}})_{n}\ $is generated by rank one operators $\theta _{\xi ,\xi },\xi \in \mathcal{H} _{k}^{c},$ $k\leq n,$ \begin{equation*} \phi _{\left( \delta ,n\right) }\left( \pi _{\lambda _{0}}^{\mathcal{A} }\left( a\right) \otimes \left. T\right\vert _{\mathcal{H}_{n}}\right) = \widetilde{\psi }_{\delta }\left( \pi _{\lambda _{0}}^{\mathcal{A}}\left( a\right) \right) \otimes \left. T\right\vert _{\mathcal{H}_{n}} \end{equation*} for all $a\in \mathcal{A}$ and for all $T\in K(\mathcal{D}_{\mathcal{E}}).$
Suppose that $\mathcal{B}_{\delta }$ acts nondegenerately on a Hilbert space $\mathcal{K}$. Let $\{u_{i}\}_{i\in I}$ be an approximate unit for $K( \mathcal{D}_{\mathcal{E}})_{n}$, $\eta \in \mathcal{K}$ and $\xi \in \mathcal{H}_{k}^{c},k\leq n,\left\Vert \xi \right\Vert \neq 0.$ Since $ \mathcal{H}_{n}=\tbigoplus\limits_{k\leq n}\mathcal{H}_{k}^{c}$ and $K( \mathcal{D}_{\mathcal{E}})_{n}$ is a closed two sided $\ast $-ideal of $ C^{\ast }(\mathcal{D}_{\mathcal{E}})_{n}$, we have \begin{eqnarray*} &&\left\langle \phi _{\left( \delta ,n\right) }\left( \pi _{\lambda _{0}}^{ \mathcal{A}}\left( a\right) \otimes \left. T\right\vert _{\mathcal{H} _{n}}\right) \left( \eta \otimes \xi \right) ,\left( \eta \otimes \xi \right) \right\rangle \\ &=&\lim\limits_{i}\left\langle \phi _{\left( \delta ,n\right) }\left( \pi _{\lambda _{0}}^{\mathcal{A}}\left( a\right) \otimes u_{i}\left. T\right\vert _{\mathcal{H}_{n}}u_{i}\right) \left( \eta \otimes \xi \right) ,\eta \otimes \xi \right\rangle \\ &=&\lim\limits_{i}\left\langle \left( \widetilde{\psi }_{\delta }\left( \pi _{\lambda _{0}}^{\mathcal{A}}\left( a\right) \right) \otimes u_{i}\left. T\right\vert _{\mathcal{H}_{n}}u_{i}\right) \left( \eta \otimes \xi \right) ,\eta \otimes \xi \right\rangle \\ &=&\lim\limits_{i}\left( \left\langle \widetilde{\psi }_{\delta }\left( \pi _{\lambda _{0}}^{\mathcal{A}}\left( a\right) \right) \eta ,\eta \right\rangle \otimes \left\langle u_{i}\left. T\right\vert _{\mathcal{H} _{n}}u_{i}\left. \theta _{\frac{1}{\left\Vert \xi \right\Vert }\xi ,\frac{1}{ \left\Vert \xi \right\Vert }\xi }\right\vert _{\mathcal{H}_{n}}\left( \xi \right) ,\xi \right\rangle \right) \\ &=&\left\langle \widetilde{\psi }_{\delta }\left( \pi _{\lambda _{0}}^{ \mathcal{A}}\left( a\right) \right) \eta ,\eta \right\rangle \otimes \left\langle \left. T\right\vert _{\mathcal{H}_{n}}\left. 
\theta _{\frac{1}{ \left\Vert \xi \right\Vert }\xi ,\frac{1}{\left\Vert \xi \right\Vert }\xi }\right\vert _{\mathcal{H}_{n}}\left( \xi \right) ,\xi \right\rangle \\ &=&\left\langle \widetilde{\psi }_{\delta }\left( \pi _{\lambda _{0}}^{ \mathcal{A}}\left( a\right) \right) \eta ,\eta \right\rangle \otimes \left\langle \left. T\right\vert _{\mathcal{H}_{n}}\left( \xi \right) ,\xi \right\rangle \\ &=&\left\langle \left( \widetilde{\psi }_{\delta }\left( \pi _{\lambda _{0}}^{\mathcal{A}}\left( a\right) \right) \otimes \left. T\right\vert _{ \mathcal{H}_{n}}\right) \left( \eta \otimes \xi \right) ,\eta \otimes \xi \right\rangle \end{eqnarray*} for all $a\in \mathcal{A}$ and for all $T\in C^{\ast }(\mathcal{D}_{\mathcal{ E}})$. Therefore, \begin{equation*} \phi _{\left( \delta ,n\right) }\left( \pi _{\lambda _{0}}^{\mathcal{A} }\left( a\right) \otimes \left. T\right\vert _{\mathcal{H}_{n}}\right) = \widetilde{\psi }_{\delta }\left( \pi _{\lambda _{0}}^{\mathcal{A}}\left( a\right) \right) \otimes \left. T\right\vert _{\mathcal{H}_{n}} \end{equation*} for all $a\in \mathcal{A}$ and for all $T\in C^{\ast }(\mathcal{D}_{\mathcal{ E}}).$
Let $\psi _{\delta }:\mathcal{A}\rightarrow \mathcal{B}_{\delta },\psi _{\delta }=\widetilde{\psi }_{\delta }\circ \pi _{\lambda _{0}}^{\mathcal{A} } $. Clearly, $\psi _{\delta }$ is a local positive map, and \begin{equation*} \pi _{\left( \delta ,n\right) }^{\mathcal{B\otimes }C^{\ast }(\mathcal{D}_{ \mathcal{E}})}\left( \phi \left( a\otimes T\right) \right) =\psi _{\delta }\left( a\right) \otimes \left. T\right\vert _{\mathcal{H}_{n}} \end{equation*} for all $a\in \mathcal{A}$ and $T\in $ $C^{\ast }(\mathcal{D}_{\mathcal{E}})$ .
Let $\delta _{1},\delta _{2}\in \Delta $ with $\delta _{1}\geq \delta _{2},$ $n\in \mathbb{N}$ and $a\in \mathcal{A}$. Since \begin{eqnarray*} \pi _{\delta _{1}\delta _{2}}^{\mathcal{B}}\left( \psi _{\delta _{1}}\left( a\right) \right) \otimes \left. T\right\vert _{\mathcal{H}_{n}} &=&\pi _{\left( \delta _{1},n\right) \left( \delta _{2},n\right) }^{\mathcal{ B\otimes }C^{\ast }(\mathcal{D}_{\mathcal{E}})}\left( \psi _{\delta _{1}}\left( a\right) \otimes \left. T\right\vert _{\mathcal{H}_{n}}\right) \\ &=&\pi _{\left( \delta _{1},n\right) \left( \delta _{2},n\right) }^{\mathcal{ B\otimes }C^{\ast }(\mathcal{D}_{\mathcal{E}})}\left( \pi _{\left( \delta _{1},n\right) }^{\mathcal{B\otimes }C^{\ast }(\mathcal{D}_{\mathcal{E} })}\left( \phi \left( a\otimes T\right) \right) \right) \\ &=&\pi _{\left( \delta _{2},n\right) }^{\mathcal{B\otimes }C^{\ast }( \mathcal{D}_{\mathcal{E}})}\left( \phi \left( a\otimes T\right) \right) =\psi _{\delta _{2}}\left( a\right) \otimes \left. T\right\vert _{\mathcal{H} _{n}} \end{eqnarray*} for all $T\in $ $C^{\ast }(\mathcal{D}_{\mathcal{E}})$, it follows that $\pi _{\delta _{1}\delta _{2}}^{\mathcal{B}}\left( \psi _{\delta _{1}}\left( a\right) \right) =\psi _{\delta _{2}}\left( a\right) $. Therefore, there is a linear map $\psi :\mathcal{A\rightarrow B}$ such that \begin{equation*} \psi \left( a\right) =\left( \psi _{\delta }\left( a\right) \right) _{\delta \in \Delta }\text{.} \end{equation*} Moreover, since for each $\delta \in \Delta $, $\psi _{\delta }$ is local positive, $\psi $ is local positive and \begin{eqnarray*} \pi _{\left( \delta ,n\right) }^{\mathcal{B\otimes }C^{\ast }(\mathcal{D}_{ \mathcal{E}})}\left( \left( \psi \otimes \text{id}_{C^{\ast }(\mathcal{D}_{ \mathcal{E}})}\right) \left( a\otimes T\right) \right) &=&\psi _{\delta }\left( a\right) \otimes \left. 
T\right\vert _{\mathcal{H}_{n}} \\ &=&\pi _{\left( \delta ,n\right) }^{\mathcal{B\otimes }C^{\ast }(\mathcal{D} _{\mathcal{E}})}\left( \phi \left( a\otimes T\right) \right) \end{eqnarray*} for all $a\in \mathcal{A}$ and $T\in $ $C^{\ast }(\mathcal{D}_{\mathcal{E}})$ , and for all $\left( \delta ,n\right) \in \Delta \times \mathbb{N}$. Therefore, \begin{equation*} \phi =\psi \otimes \text{id}_{C^{\ast }(\mathcal{D}_{\mathcal{E}})}\text{.} \end{equation*}
To show that $\varphi -\psi $ is local positive, let $\delta \in \Delta $. We have seen that there exists $\lambda _{0}\in \Lambda $ such that $\varphi _{\delta }-\widetilde{\psi }_{\delta }$ is positive, where $\pi _{\delta }^{\mathcal{B}}\circ \varphi =\varphi _{\delta }\circ \pi _{\lambda _{0}}^{\mathcal{A}}$ and $\pi _{\delta }^{\mathcal{B}}\circ \psi =\psi _{\delta }=\widetilde{\psi }_{\delta }\circ \pi _{\lambda _{0}}^{\mathcal{A}}$. Clearly, $\left( \varphi -\psi \right) \left( a\right) \geq _{\delta }0$ whenever $a\geq _{\lambda _{0}}0$ and $\left( \varphi -\psi \right) \left( a\right) =_{\delta }0$ whenever $a=_{\lambda _{0}}0$. Therefore, $\varphi -\psi $ is local positive.
$\left( 2\right) \Rightarrow \left( 1\right) $ By Proposition \ref{3}, $\phi $ is local positive, and since \begin{equation*} \varphi \otimes \text{id}_{C^{\ast }(\mathcal{D}_{\mathcal{E}})}-\phi =\left( \varphi -\psi \right) \otimes \text{id}_{C^{\ast }(\mathcal{D}_{ \mathcal{E}})} \end{equation*} and $\varphi -\psi $ are local positive, $\varphi \otimes $id$_{C^{\ast }( \mathcal{D}_{\mathcal{E}})}-\phi $ is local positive. \end{proof}
\begin{corollary} \label{positive} Let $\mathcal{A}$ and $\mathcal{B}$ be two unital locally $C^{\ast }$-algebras, let $\mathcal{H}$ be a Hilbert space, and let $\varphi :\mathcal{A}\rightarrow \mathcal{B}$ and $\phi :\mathcal{A\otimes }B(\mathcal{H})\rightarrow \mathcal{B\otimes }B(\mathcal{H})$ be two linear maps. Then $\phi $ and $\varphi \otimes $id$_{B(\mathcal{H})}-\phi $ are local positive if and only if there is a local positive map $\psi :\mathcal{A\rightarrow B}$ such that $\varphi -\psi $ is local positive and $\phi =\psi \otimes $id$_{B(\mathcal{H})}$. \end{corollary}
As an application of Theorem \ref{6}, we show that, given a local positive map $\varphi :\mathcal{A}\rightarrow \mathcal{B}$, the map $\varphi \otimes $id$_{M_{n}\left( \mathbb{C}\right) }$ is local decomposable for some $n\geq 2$ if and only if $\varphi $ is a local $CP$-map.
\begin{definition} \cite{MJ3} A linear map $\varphi :\mathcal{A}\rightarrow \mathcal{B}$ is called \textit{local $n$-copositive} if the map $\varphi \otimes t:\mathcal{A\otimes }M_{n}\left( \mathbb{C}\right) \rightarrow \mathcal{B\otimes }M_{n}\left( \mathbb{C}\right) $ defined by \begin{equation*} \left( \varphi \otimes t\right) \left( \left[ a_{ij}\right] _{i,j=1}^{n}\right) =\left[ \varphi \left( a_{ji}\right) \right] _{i,j=1}^{n} \end{equation*} is local positive, where $t$ denotes the transpose map on $M_{n}\left( \mathbb{C}\right) $.
We say that $\varphi $ is local completely copositive if for each $\delta \in \Delta $, there exists $\lambda \in \Lambda $ such that $\left( \varphi \otimes t\right) \left( \left[ a_{ij}\right] _{i,j=1}^{n}\right) \geq _{\delta }0\ $whenever $\left[ a_{ij}\right] _{i,j=1}^{n}\geq _{\lambda }0$ and $\left( \varphi \otimes t\right) \left( \left[ a_{ij}\right] _{i,j=1}^{n}\right) =_{\delta }0\ \ $whenever $\left[ a_{ij}\right] _{i,j=1}^{n}=_{\lambda }0$,$\ $for all $n\in \mathbb{N}.$ \end{definition}
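A standard example, recalled here for orientation, shows that copositivity differs from complete positivity already in the $C^{\ast }$-algebra case. Take $\mathcal{A}=\mathcal{B}=M_{2}\left( \mathbb{C}\right) $, $\varphi =$id and $n=2$. The matrix
\begin{equation*}
X=\left[
\begin{array}{cc}
E_{11} & E_{12} \\
E_{21} & E_{22}
\end{array}
\right] \in M_{2}\left( M_{2}\left( \mathbb{C}\right) \right)
\end{equation*}
is positive, since $X=vv^{\ast }$ for $v=e_{1}\otimes e_{1}+e_{2}\otimes e_{2}$, but
\begin{equation*}
\left( \varphi \otimes t\right) \left( X\right) =\left[
\begin{array}{cc}
E_{11} & E_{21} \\
E_{12} & E_{22}
\end{array}
\right]
\end{equation*}
is the flip operator, which has eigenvalue $-1$. Hence the identity map, although completely positive, is not $2$-copositive.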
\begin{remark} \cite{MJ3}\label{copositive} Let $\mathcal{A}$ and $\mathcal{B}$ be two locally $C^{\ast }$-algebras and $\varphi :\mathcal{A}\rightarrow \mathcal{B}$ be a \textit{local $n$-copositive} map. Then:
\begin{enumerate} \item $\varphi $ is local positive, and so it is continuous and positive.
\item for each $\delta \in \Delta $, there exist $\lambda \in \Lambda $ and an $n$-copositive map $\varphi _{\delta }:\mathcal{A}_{\lambda }\rightarrow \mathcal{B}_{\delta }$ such that $\pi _{\delta }^{\mathcal{B}}\circ \varphi =\varphi _{\delta }\circ \pi _{\lambda }^{\mathcal{A}}$. \end{enumerate} \end{remark}
\begin{definition} \cite{MJ3} A linear map $\varphi :\mathcal{A}\rightarrow \mathcal{B}$ is \textit{local decomposable} if it is the sum of a local completely positive map and a local completely copositive map. \end{definition}
If $\varphi :\mathcal{A}\rightarrow \mathcal{B}$ is local decomposable, then for each\textit{\ }$\delta \in \Delta $, there exist $\lambda \in \Lambda $ and a decomposable positive map $\varphi _{\delta }:\mathcal{A}_{\lambda }\rightarrow \mathcal{B}_{\delta }$ such that $\pi _{\delta }^{\mathcal{B} }\circ \varphi =\varphi _{\delta }\circ \pi _{\lambda }^{\mathcal{A}}.$
\begin{theorem} \label{7}Let $\mathcal{A}$ and $\mathcal{B}$\ be two unital locally $C^{\ast }$-algebras and $\varphi :\mathcal{A\rightarrow B}$ be a linear map. If for some $n\geq 2$, $\varphi \otimes $id$_{M_{n}\left( \mathbb{C}\right) }: \mathcal{A\otimes }M_{n}\left( \mathbb{C}\right) \rightarrow \mathcal{ B\otimes }M_{n}\left( \mathbb{C}\right) $ is local decomposable, then $ \varphi $ is local completely positive. \end{theorem}
\begin{proof} We adapt the proof of \cite[Theorem 3.1]{BO}. If $\varphi \otimes $id$_{M_{n}\left( \mathbb{C}\right) }$ is local decomposable, there are a local completely positive map $\phi $ and a local completely copositive map $\psi $ such that $\varphi \otimes $id$_{M_{n}\left( \mathbb{C}\right) }=\phi +\psi $. By Corollary \ref{positive}, there exist two local positive maps $\phi _{1}:$ $\mathcal{A\rightarrow B}$ and $\psi _{1}:$ $\mathcal{A\rightarrow B}$ such that $\phi =\phi _{1}\otimes $id$_{M_{n}\left( \mathbb{C}\right) }$ and $\psi =\psi _{1}\otimes $id$_{M_{n}\left( \mathbb{C}\right) }$.
Since $\phi =\phi _{1}\otimes $id$_{M_{n}\left( \mathbb{C}\right) }$ and $ \phi $ is local completely positive, $\phi _{1}$ is local completely positive.
Since $\psi $ and $\psi _{1}$ are continuous (as $\psi $ is local completely copositive and $\psi _{1}$ is local positive) and $\psi =\psi _{1}\otimes $id$_{M_{n}\left( \mathbb{C}\right) }$, for each $\delta \in \Delta $ there exist $\lambda \in \Lambda $, a completely copositive map $\psi _{\delta }:\mathcal{A}_{\lambda }\rightarrow \mathcal{B}_{\delta }$ and a positive map $\psi _{1\delta }:$ $\mathcal{A}_{\lambda }\rightarrow \mathcal{B}_{\delta }$ such that $\pi _{\delta }^{\mathcal{B}}\circ \psi =\psi _{\delta }\circ \pi _{\lambda }^{\mathcal{A}},$ $\pi _{\delta }^{\mathcal{B}}\circ \psi _{1}=\psi _{1\delta }\circ \pi _{\lambda }^{\mathcal{A}}$ and $\psi _{\delta }=\psi _{1\delta }\otimes $id$_{M_{n}\left( \mathbb{C}\right) }$. Then, since $n\geq 2$, by \cite[Lemma 3.2]{BO}, for each $\delta \in \Delta ,$ $\psi _{\delta }=0$. Consequently, $\psi =0$, and so $\varphi \otimes $id$_{M_{n}\left( \mathbb{C}\right) }=\phi =\phi _{1}\otimes $id$_{M_{n}\left( \mathbb{C}\right) }$, whence $\varphi =\phi _{1}.$ \end{proof}
Let $\{\mathcal{H};\mathcal{E};\mathcal{D}_{\mathcal{E}}\}$ be a quantized domain in the Hilbert space $\mathcal{H}$ with $\mathcal{E}=\{\mathcal{H}_{\iota }\}_{\iota \in \Upsilon }$.
For a local contractive $\ast $-morphism $\pi :\mathcal{A\rightarrow }C^{\ast }(\mathcal{D}_{\mathcal{E}})$, put \begin{equation*} \pi \left( \mathcal{A}\right) ^{^{\prime }}=\{T\in B\left( \mathcal{H}\right) ;T\pi \left( a\right) \subseteq \pi \left( a\right) T\text{ for all }a\in \mathcal{A}\}. \end{equation*}
\begin{remark} \begin{enumerate} \item $\pi \left( \mathcal{A}\right) ^{^{\prime }}$ is a von Neumann algebra. Indeed, \begin{eqnarray*} \pi \left( \mathcal{A}\right) ^{^{\prime }} &=&\{T\in B\left( \mathcal{H}\right) ;T\pi \left( a\right) \subseteq \pi \left( a\right) T\text{ for all }a\in \mathcal{A}\} \\ &=&\{T\in B\left( \mathcal{H}\right) ;T\pi \left( a\right) \subseteq \pi \left( a\right) T\text{ for all }a\in b(\mathcal{A})\} \\ &=&\{T\in B\left( \mathcal{H}\right) ;T\left. \pi \right\vert _{b(\mathcal{A})}\left( a\right) =\left. \pi \right\vert _{b(\mathcal{A})}\left( a\right) T\text{ for all }a\in b(\mathcal{A})\} \\ &=&\left. \pi \right\vert _{b(\mathcal{A})}\left( b(\mathcal{A})\right) ^{\prime } \end{eqnarray*} where $\left. \pi \right\vert _{b(\mathcal{A})}:b(\mathcal{A})\rightarrow B\left( \mathcal{H}\right) $ is the $\ast $-representation of the $C^{\ast }$-algebra $b(\mathcal{A})$ of all bounded elements of $\mathcal{A}$, given by $\left. \left. \pi \right\vert _{b(\mathcal{A})}\left( a\right) \right\vert _{\mathcal{D}_{\mathcal{E}}}=\pi \left( a\right) .$
\item $\pi \left( \mathcal{A}\right) ^{^{\prime }}\cap C^{\ast }(\mathcal{D}_{\mathcal{E}})$ is identified with a von Neumann algebra on $\mathcal{H}$. Indeed, \begin{eqnarray*} \pi \left( \mathcal{A}\right) ^{^{\prime }}\cap C^{\ast }(\mathcal{D}_{\mathcal{E}}) &=&\{T\in B\left( \mathcal{H}\right) \cap C^{\ast }(\mathcal{D}_{\mathcal{E}});T\pi \left( a\right) \subseteq \pi \left( a\right) T\text{ for all }a\in \mathcal{A}\} \\ &=&\{S\in b(C^{\ast }(\mathcal{D}_{\mathcal{E}}));S\pi \left( a\right) =\pi \left( a\right) S\ \text{ for all }a\in \mathcal{A}\} \\ &=&b(\pi \left( \mathcal{A}\right) ^{c}) \end{eqnarray*} where $\pi \left( \mathcal{A}\right) ^{c}=\{S\in C^{\ast }(\mathcal{D}_{\mathcal{E}});S\pi \left( a\right) =\pi \left( a\right) S$ for all $a\in \mathcal{A}\}$.
On the other hand, $\pi \left( \mathcal{A}\right) ^{c}$ is a locally von Neumann algebra \cite[p.~4198]{D}, and therefore $b(\pi \left( \mathcal{A}\right) ^{c})$ is identified with a von Neumann algebra on $\mathcal{H}$ \cite[Proposition 3.2]{D}.
\item By \cite[Proposition 3.2]{D}, $b(C^{\ast }(\mathcal{D}_{\mathcal{E}}))$ is identified with a von Neumann algebra on $\mathcal{H}$, which is spatially isomorphic to the von Neumann algebra $\{T\in B\left( \mathcal{H} \right) ;P_{\iota }T=TP_{\iota },\forall \iota \in \Upsilon \}.$
\item The von Neumann algebras $\pi \left( \mathcal{A}\right) ^{^{\prime }}\cap C^{\ast }(\mathcal{D}_{\mathcal{E}})$ and $\left. \pi \right\vert _{b(\mathcal{A})}\left( b(\mathcal{A})\right) ^{\prime }\cap b(C^{\ast }(\mathcal{D}_{\mathcal{E}}))$ are isomorphic. \end{enumerate} \end{remark}
\begin{definition} \cite[Definition 4.1]{BGK} Let $\varphi ,\psi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A},C^{\ast }(\mathcal{D}_{\mathcal{E}}))$. We say that $\psi $ is dominated by $\varphi $, and write $\varphi \geq \psi $, if $\varphi -\psi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A},C^{\ast }(\mathcal{D}_{\mathcal{E}}))$. \end{definition}
Let $\varphi ,\psi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A},C^{\ast }(\mathcal{D}_{\mathcal{E}}))$ such that $\psi $ is dominated by $\varphi $. If $(\pi _{\varphi },V_{\varphi },\{\mathcal{H}^{\varphi },\mathcal{E}^{\varphi },\mathcal{D}_{\mathcal{E}^{\varphi }}\})$ is a minimal Stinespring dilation associated to $\varphi $, then, by the Radon--Nikodym type theorem \cite[Theorem 4.5]{BGK}, there is a unique element $T\in \pi _{\varphi }\left( \mathcal{A}\right) ^{^{\prime }}\cap C^{\ast }(\mathcal{D}_{\mathcal{E}^{\varphi }})$ such that \begin{equation*} \psi \left( a\right) =\varphi _{T}\left( a\right) =\left. V_{\varphi }^{\ast }T\pi _{\varphi }\left( a\right) V_{\varphi }\right\vert _{\mathcal{D}_{\mathcal{E}}} \end{equation*} for all $a\in \mathcal{A}.$
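A simple illustrative instance of this Radon--Nikodym correspondence: for a scalar $\alpha \in \left[ 0,1\right] $, the map $\psi =\alpha \varphi $ is dominated by $\varphi $ and corresponds to $T=\alpha \,$id$_{\mathcal{D}_{\mathcal{E}^{\varphi }}}$, since
\begin{equation*}
\varphi _{\alpha \text{id}}\left( a\right) =\left. \alpha V_{\varphi }^{\ast }\pi _{\varphi }\left( a\right) V_{\varphi }\right\vert _{\mathcal{D}_{\mathcal{E}}}=\alpha \varphi \left( a\right)
\end{equation*}
for all $a\in \mathcal{A}$.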
\begin{definition} Let $\varphi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A},C^{\ast }(\mathcal{D }_{\mathcal{E}}))$. We say that $\varphi $ is pure if whenever $\psi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A},C^{\ast }(\mathcal{D}_{\mathcal{E} })) $ and $\varphi \geq \psi ,$ there is a positive number $\alpha $ such that $\psi =\alpha \varphi .$ \end{definition}
Let $\varphi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A},C^{\ast }(\mathcal{D }_{\mathcal{E}}))$ and $\left( \pi _{\varphi },V_{\varphi },\{\mathcal{H} ^{\varphi },\mathcal{E}^{\varphi },\mathcal{D}_{\mathcal{E}^{\varphi }}\}\right) $ be a minimal Stinespring dilation associated to $\varphi $. Then $\varphi $ is pure if and only if $\pi _{\varphi }\left( \mathcal{A} \right) ^{^{\prime }}\cap C^{\ast }(\mathcal{D}_{\mathcal{E}^{\varphi }})$ $ =\{\alpha $id$_{\mathcal{D}_{\mathcal{E}^{\varphi }}};\alpha \in \mathbb{C} \}.$
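As an example of this criterion, we recall a standard fact from the bounded setting, viewed here through the trivial quantized domain $\mathcal{D}_{\mathcal{E}}=\mathcal{H}$, for which $C^{\ast }(\mathcal{D}_{\mathcal{E}})=B\left( \mathcal{H}\right) $: if $\pi :\mathcal{A}\rightarrow B\left( \mathcal{H}\right) $ is an irreducible representation of a unital $C^{\ast }$-algebra and $\xi \in \mathcal{H}$ is a unit vector, then
\begin{equation*}
\varphi \left( a\right) =\left\langle \pi \left( a\right) \xi ,\xi \right\rangle =V_{\xi }^{\ast }\pi \left( a\right) V_{\xi },\qquad V_{\xi }\lambda =\lambda \xi ,
\end{equation*}
is pure: $(\pi ,V_{\xi },\mathcal{H})$ is a minimal Stinespring dilation (minimality since $\pi \left( \mathcal{A}\right) \xi $ is dense), and by irreducibility $\pi \left( \mathcal{A}\right) ^{\prime }=\mathbb{C}\,$id$_{\mathcal{H}}$.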
\begin{proposition} Let $\varphi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A},C^{\ast }(\mathcal{D }_{\mathcal{E}}))$. If $\varphi $ is pure, then it is a bounded operator valued completely positive map. \end{proposition}
\begin{proof} Let $\left( \pi _{\varphi },V_{\varphi },\{\mathcal{H}^{\varphi },\mathcal{E}^{\varphi },\mathcal{D}_{\mathcal{E}^{\varphi }}\}\right) $ be a minimal Stinespring dilation associated to $\varphi $. Since $\varphi $ is pure, $\pi _{\varphi }\left( \mathcal{A}\right) ^{^{\prime }}\cap C^{\ast }(\mathcal{D}_{\mathcal{E}^{\varphi }})=\{\alpha $id$_{\mathcal{D}_{\mathcal{E}^{\varphi }}};\alpha \in \mathbb{C}\}$. On the other hand, for each $\iota \in \Upsilon ,$ $\left. P_{\iota }\right\vert _{\mathcal{D}_{\mathcal{E}^{\varphi }}}\in \pi _{\varphi }\left( \mathcal{A}\right) ^{^{\prime }}\cap C^{\ast }(\mathcal{D}_{\mathcal{E}^{\varphi }})$, and so $P_{\iota }=$id$_{\mathcal{H}^{\varphi }}$. Therefore, for each $\iota \in \Upsilon ,$ $\mathcal{H}_{\iota }^{\varphi }=\mathcal{H}^{\varphi }$, $\mathcal{D}_{\mathcal{E}^{\varphi }}=\mathcal{H}^{\varphi }$ and $C^{\ast }(\mathcal{D}_{\mathcal{E}^{\varphi }})=B\left( \mathcal{H}^{\varphi }\right) $. Consequently, $\pi _{\varphi }\left( a\right) \in B\left( \mathcal{H}^{\varphi }\right) $ for all $a\in \mathcal{A}.$ Therefore, $V_{\varphi }^{\ast }\pi _{\varphi }\left( a\right) V_{\varphi }\in B\left( \mathcal{H}\right) $ for all $a\in \mathcal{A}$, and \begin{equation*} \varphi \left( a\right) =\left. V_{\varphi }^{\ast }\pi _{\varphi }\left( a\right) V_{\varphi }\right\vert _{\mathcal{D}_{\mathcal{E}}}\in b\left( C^{\ast }(\mathcal{D}_{\mathcal{E}})\right) \end{equation*} for all $a\in \mathcal{A}$. \end{proof}
\begin{theorem} \label{8} Let $\mathcal{A}$ and $\mathcal{B}$ be two unital locally $C^{\ast }$-algebras, $\varphi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A},C^{\ast }(\mathcal{D}_{\mathcal{E}})),$ $\psi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{B},C^{\ast }(\mathcal{D}_{\mathcal{F}}))$ and $\phi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{A\otimes B},C^{\ast }(\mathcal{D}_{\mathcal{E\otimes F}}))$. If $\phi $ is dominated by $\varphi \otimes \psi $ and $\varphi $ is pure, then there exists $\widetilde{\psi }\in \mathcal{CPCC}_{\text{loc}}(\mathcal{B},C^{\ast }(\mathcal{D}_{\mathcal{F}}))$ dominated by $\psi $ such that $\phi =\varphi \otimes \widetilde{\psi }.$ \end{theorem}
\begin{proof} Let $\left( \pi _{\varphi },V_{\varphi },\{\mathcal{H}^{\varphi },\mathcal{E} ^{\varphi },\mathcal{D}_{\mathcal{E}^{\varphi }}\}\right) $ and $\left( \pi _{\psi },V_{\psi },\{\mathcal{H}^{\psi },\mathcal{E}^{\psi },\mathcal{D}_{ \mathcal{E}^{\psi }}\}\right) $ be the minimal Stinespring dilations associated to $\varphi $ and $\psi $. Since $\varphi $ is pure, $\mathcal{D} _{\mathcal{E}^{\varphi }}=\mathcal{H}^{\varphi }$ and $C^{\ast }(\mathcal{D} _{\mathcal{E}^{\varphi }})=B\left( \mathcal{H}^{\varphi }\right) $. By Proposition \ref{3}, $\ (\pi _{\varphi }\otimes \pi _{\psi },V_{\varphi }\otimes V_{\psi },\{\mathcal{H}^{\varphi }\otimes \mathcal{H}^{\psi }, \mathcal{H}^{\varphi }\otimes \mathcal{E}^{\psi },$ $\mathcal{D}_{\mathcal{H} ^{\varphi }\otimes \mathcal{E}^{\psi }}\})$ is a minimal Stinespring dilation associated to $\varphi \otimes \psi $. We have: \begin{eqnarray*} \left( \left( \pi _{\varphi }\otimes \pi _{\psi }\right) \left( \mathcal{ A\otimes B}\right) \right) ^{\prime } &=&\left( \left( \left. \pi _{\varphi }\right\vert _{b\mathcal{(A)}}\otimes \left. \pi _{\psi }\right\vert _{b \mathcal{(B)}}\right) \left( b(\mathcal{A)\otimes }_{\text{alg}}b\mathcal{(B) }\right) \right) ^{\prime } \\ &=&\left( \left. \pi _{\varphi }\right\vert _{b\mathcal{(A)}}\left( b( \mathcal{A)}\right) \right) ^{\prime }\overline{\otimes }\left( \left. 
\pi _{\psi }\right\vert _{b\mathcal{(B)}}\left( b\mathcal{(B)}\right) \right) ^{\prime } \end{eqnarray*} where ``$\overline{\otimes }$'' denotes the tensor product of von Neumann algebras, and \begin{eqnarray*} b(C^{\ast }(\mathcal{D}_{\mathcal{H}^{\varphi }\otimes \mathcal{E}^{\psi }})) &=&\{R\in B(\mathcal{H}^{\varphi }\otimes \mathcal{H}^{\psi });R(\text{id}_{\mathcal{H}^{\varphi }}\otimes P_{\iota })=(\text{id}_{\mathcal{H}^{\varphi }}\otimes P_{\iota })R,\forall \iota \in \Upsilon \} \\ &=&B(\mathcal{H}^{\varphi })\overline{\otimes }\{S\in B(\mathcal{H}^{\psi });SP_{\iota }=P_{\iota }S,\forall \iota \in \Upsilon \} \\ &=&B(\mathcal{H}^{\varphi })\overline{\otimes }b(C^{\ast }(\mathcal{D}_{\mathcal{E}^{\psi }})). \end{eqnarray*} Thus, \begin{eqnarray*} &&\left( \left( \pi _{\varphi }\otimes \pi _{\psi }\right) \left( \mathcal{A\otimes B}\right) \right) ^{\prime }\cap b(C^{\ast }(\mathcal{D}_{\mathcal{H}^{\varphi }\otimes \mathcal{E}^{\psi }})) \\ &=&\left( \left( \left. \pi _{\varphi }\right\vert _{b\mathcal{(A)}}\left( b(\mathcal{A)}\right) \right) ^{\prime }\overline{\otimes }\left( \left. \pi _{\psi }\right\vert _{b\mathcal{(B)}}\left( b\mathcal{(B)}\right) \right) ^{\prime }\right) \cap \left( B(\mathcal{H}^{\varphi })\overline{\otimes }b(C^{\ast }(\mathcal{D}_{\mathcal{E}^{\psi }}))\right) \\ &=&\left( \pi _{\varphi }\left( \mathcal{A}\right) ^{\prime }\cap B(\mathcal{H}^{\varphi })\right) \overline{\otimes }\left( \left( \left. \pi _{\psi }\right\vert _{b\mathcal{(B)}}\left( b\mathcal{(B)}\right) \right) ^{\prime }\cap b(C^{\ast }(\mathcal{D}_{\mathcal{E}^{\psi }}))\right) \\ &&\text{(since }\varphi \text{ is pure) } \\ &=&\{\alpha \text{id}_{\mathcal{H}^{\varphi }};\alpha \in \mathbb{C}\}\otimes \left( \pi _{\psi }\left( \mathcal{B}\right) ^{\prime }\cap C^{\ast }(\mathcal{D}_{\mathcal{E}^{\psi }})\right) .
\end{eqnarray*} Therefore, \begin{equation*} \left( \left( \pi _{\varphi }\otimes \pi _{\psi }\right) \left( \mathcal{A\otimes B}\right) \right) ^{\prime }\cap C^{\ast }(\mathcal{D}_{\mathcal{H}^{\varphi }\otimes \mathcal{E}^{\psi }})=\{\alpha \text{id}_{\mathcal{H}^{\varphi }};\alpha \in \mathbb{C}\}\otimes \left( \pi _{\psi }\left( \mathcal{B}\right) ^{\prime }\cap C^{\ast }(\mathcal{D}_{\mathcal{E}^{\psi }})\right) . \end{equation*} Since $\phi $ and $\varphi \otimes \psi $ are local completely contractive and local completely positive and $\phi $ is dominated by $\varphi \otimes \psi $, by the Radon--Nikodym type theorem \cite[Theorem 4.5]{BGK}, there is a unique positive contractive linear operator $R\in \left( \left( \pi _{\varphi }\otimes \pi _{\psi }\right) \left( \mathcal{A\otimes B}\right) \right) ^{\prime }\cap C^{\ast }(\mathcal{D}_{\mathcal{H}^{\varphi }\otimes \mathcal{E}^{\psi }})$ such that $\phi =\left( \varphi \otimes \psi \right) _{R}$. Therefore, there is $T\in \pi _{\psi }\left( \mathcal{B}\right) ^{\prime }\cap C^{\ast }(\mathcal{D}_{\mathcal{E}^{\psi }})$ such that $R=$id$_{\mathcal{H}^{\varphi }}\otimes T$ and \begin{eqnarray*} \phi \left( a\otimes b\right) &=&\left( \varphi \otimes \psi \right) _{\text{id}_{\mathcal{H}^{\varphi }}\otimes T}\left( a\otimes b\right) \\ &=&\left( V_{\varphi }^{\ast }\otimes V_{\psi }^{\ast }\right) \left( \text{id}_{\mathcal{H}^{\varphi }}\otimes T\right) \left( \pi _{\varphi }\otimes \pi _{\psi }\right) \left( a\otimes b\right) \left( V_{\varphi }\otimes V_{\psi }\right) \\ &=&V_{\varphi }^{\ast }\pi _{\varphi }\left( a\right) V_{\varphi }\otimes V_{\psi }^{\ast }T\pi _{\psi }\left( b\right) V_{\psi }=\varphi \left( a\right) \otimes \psi _{T}\left( b\right) \end{eqnarray*} for all $a\in \mathcal{A}$ and $b\in \mathcal{B}$. Hence, there is $\widetilde{\psi }=\psi _{T}\in \mathcal{CPCC}_{\text{loc}}(\mathcal{B},C^{\ast }(\mathcal{D}_{\mathcal{F}}))$ such that $\phi =\varphi \otimes \widetilde{\psi }$.
Moreover, $\widetilde{\psi }$ is dominated by $\psi $. \end{proof}
\begin{corollary} Let $\mathcal{A}$ be a unital $C^{\ast }$-algebra, $\mathcal{B}$ be a unital locally $C^{\ast }$-algebra, $\varphi \in \mathcal{CP}(\mathcal{A},B( \mathcal{H})),$ $\psi \in \mathcal{CPCC}_{\text{loc}}(\mathcal{B},C^{\ast }( \mathcal{D}_{\mathcal{F}}))$ and $\phi \in \mathcal{CPCC}_{\text{loc}}( \mathcal{A\otimes B},C^{\ast }(\mathcal{D}_{\mathcal{H\otimes F}}))$. If $ \phi $ is dominated by $\varphi \otimes \psi $ and $\varphi $ is pure, then there is $\widetilde{\psi }\in \mathcal{CPCC}_{\text{loc}}(\mathcal{B} ,C^{\ast }(\mathcal{D}_{\mathcal{F}}))$ which is dominated by $\psi $ and such that $\phi =\varphi \otimes \widetilde{\psi }$. \end{corollary}
\end{document}
\begin{document}
\title[Universality for Ergodic Jacobi Matrices] {Bulk Universality and Clock Spacing\\ of Zeros for Ergodic Jacobi Matrices\\ with A.C.\ Spectrum} \author[A.~Avila, Y.~Last, and B.~Simon]{Artur Avila$^1$, Yoram Last$^{2,4}$, and Barry Simon$^{3,4}$}
\thanks{$^1$ CNRS UMR 7599, Laboratoire de Probabilit\'es et Mod\`eles Al\'eatoires, Universit\'e Pierre et Marie Curie--Bo\^{i}te Courrier 188, 75252--Paris Cedex 05, France. Current address: IMPA, Estrada Dona Castorina 110, Rio de Janeiro, 22460-320, Brazil. E-mail: artur@math.sunysb.edu.}
\thanks{$^2$ Institute of Mathematics, The Hebrew University, 91904 Jerusalem, Israel. E-mail: ylast@math.huji.ac.il. Supported in part by The Israel Science Foundation (grant no.\ 1169/06)}
\thanks{$^3$ Mathematics 253-37, California Institute of Technology, Pasadena, CA 91125, USA. E-mail: bsimon@caltech.edu. Supported in part by NSF grant DMS-0652919}
\thanks{$^4$ Research supported in part by Grant No.\ 2006483 from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel}
\date{September 29, 2008} \keywords{Orthogonal polynomials, clock behavior, almost Mathieu equation} \subjclass[2000]{42C05,26C10,47B36}
\begin{abstract} By combining some ideas of Lubinsky with some soft analysis, we prove that universality and clock behavior of zeros for OPRL in the a.c.\ spectral region is implied by convergence of $\frac{1}{n} K_n(x,x)$ for the diagonal CD kernel and boundedness of the analog associated to second kind polynomials. We then show that these hypotheses are always valid for ergodic Jacobi matrices with a.c.\ spectrum and prove that the limit of $\frac{1}{n} K_n(x,x)$ is $\rho_\infty(x)/w(x)$ where $\rho_\infty$ is the density of zeros and $w$ is the a.c.\ weight of the spectral measure. \end{abstract}
\maketitle
\section{Introduction} \label{s1}
Given a finite measure, $d\mu$, of compact and not finite support on ${\mathbb{R}}$, one defines the orthonormal polynomials, $p_n(x)$ (or $p_n (x,d\mu)$ if the $\mu$-dependence is important), by applying Gram--Schmidt to $1,x,x^2, \dots$. Thus, $p_n$ is a polynomial of degree exactly $n$ with leading positive coefficient so that \begin{equation} \label{1.1} \int p_n(x) p_m(x)\, d\mu(x) = \delta_{nm} \end{equation} See \cite{SzBk,FrBk,Rice} for background on these OPRL (orthogonal polynomials on the real line).
Associated to $\mu$ is a family of Jacobi parameters $\{a_n,b_n\}_{n=1}^\infty$, $a_n >0$, $b_n$ real, determined by the recursion relation ($p_{-1}(x)\equiv 0$) \begin{equation} \label{1.2} xp_n(x) = a_{n+1} p_{n+1}(x) + b_{n+1} p_n(x) + a_n p_{n-1}(x) \end{equation} The $\{p_n(x)\}_{n=0}^\infty$ are an orthonormal basis of $L^2({\mathbb{R}},d\mu)$ (since $\text{\rm{supp}}(d\mu)$ is compact) and \eqref{1.2} says that multiplication by $x$ is given in this basis by the tridiagonal Jacobi matrix \begin{equation} \label{1.3} J= \begin{pmatrix} b_1 & a_1 & 0 & \cdots \\ a_1 & b_2 & a_2 & \cdots \\ 0 & a_2 & b_3 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \end{equation}
If we restrict (as we normally will) to $\mu$ normalized by $\mu({\mathbb{R}})=1$, then $\mu$ can be recovered from $J$ as the spectral measure for the vector $(1,0,0,\dots)^t$. Favard's theorem says there is a one-one correspondence between sets of bounded Jacobi parameters, that is, \begin{equation} \label{1.4} \sup_n\, \abs{a_n} =\alpha_+ <\infty \qquad \sup_n \, \abs{b_n}=\beta <\infty \end{equation} and probability measures with compact and not finite support under this $\mu\to J\to\mu$ correspondence.
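A standard example of this correspondence: the free Jacobi matrix, $a_n\equiv 1$, $b_n\equiv 0$, corresponds to the measure \begin{equation*} d\mu(x) = \frac{1}{2\pi}\, \sqrt{4-x^2}\, \chi_{[-2,2]}(x)\, dx \end{equation*} whose orthonormal polynomials are $p_n(2\cos\theta) =\sin((n+1)\theta)/\sin\theta$, the rescaled Chebyshev polynomials of the second kind.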
We will use this to justify spectral theory notation for things like $\text{\rm{supp}}(d\mu)$ which we will denote $\sigma(d\mu)$ since it is the spectrum of $J$, $\sigma(J)$. We will use $\sigma_\text{\rm{ess}}(d\mu)$ for the essential spectrum, and if \begin{equation} \label{1.5} d\mu(x) =w(x)\, dx + d\mu_\text{\rm{s}} (x) \end{equation} where $d\mu_\text{\rm{s}}$ is Lebesgue singular, then we define \begin{equation} \label{1.6} \Sigma_\text{\rm{ac}} (d\mu) =\{x\mid w(x) >0\} \end{equation} determined up to sets of Lebesgue measure $0$, so $\Sigma_\text{\rm{ac}}\neq\emptyset$ means $d\mu$ has a nonvanishing a.c.\ part.
We will also suppose \begin{equation} \label{1.7} \inf_n\, a_n = \alpha_- >0 \end{equation} which is no loss since it is known \cite{Dom78} that if the $\inf$ is $0$, then $\Sigma_\text{\rm{ac}} =\emptyset$, and we will only be interested in cases where $\Sigma_\text{\rm{ac}} \neq\emptyset$.
One of our concerns in this paper is the zeros of $p_n(x,d\mu)$. These are not only of intrinsic interest; they enter in Gaussian quadrature and also as the eigenvalues of $J_{n;F}$, the upper left $n\times n$ corner of $J$, and so, relevant to statistics of eigenvalues in large boxes, a subject on which there is an enormous amount of discussion in both the mathematics and the physics literature.
These zeros are all simple and lie in ${\mathbb{R}}$. $d\nu_n$ is the normalized counting measure for the zeros, that is, \begin{equation} \label{1.8} \nu_n(S)=\frac{1}{n}\, \#(\text{zeros of $p_n$ in $S$}) \end{equation} In many cases, $d\nu_n$ converges to a weak limit, $d\nu_\infty$, called the density of zeros or density of states (DOS). If this weak limit exists, we say that the DOS exists. It often happens that $d\nu_\infty$ is $d\rho_{\frak{e}}$, the equilibrium measure for ${\frak{e}} =\sigma_\text{\rm{ess}}(d\mu)$. This is true, for example, if $\rho_{\frak{e}}$ is equivalent to $dx\restriction{\frak{e}}$ and $\Sigma_\text{\rm{ac}}={\frak{e}}$, a theorem of Widom \cite{Wid} and Van Assche \cite{vA} (see also Stahl--Totik \cite{StT} and Simon \cite{EqMC}). If $d\nu_\infty$ has an a.c.\ part, we use $\rho_\infty (x)$ for $d\nu_\infty /dx$ and we use $\rho_{\frak{e}}(x)$ for $d\rho_{\frak{e}}/dx$. More properly, $d\nu_\infty$ is the ``density of states measure'' (so $\int_{-\infty}^x d\nu_\infty$ is the ``integrated density of states'') and $\rho_\infty (x)$ the ``density of states.''
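For example, for ${\frak{e}}=[-1,1]$, the equilibrium measure is the arcsine distribution, \begin{equation*} d\rho_{[-1,1]}(x) = \frac{dx}{\pi\sqrt{1-x^2}} \end{equation*} so, by the Widom--Van Assche theorem just quoted, any measure with $\sigma_\text{\rm{ess}}(d\mu)=\Sigma_\text{\rm{ac}}(d\mu)=[-1,1]$ has DOS $\rho_\infty(x) = (\pi\sqrt{1-x^2})^{-1}$.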
We are especially interested in the fine structure of the zeros near some point $x_0\in\sigma(d\mu)$. We define $x_j^{(n)}(x_0)$ by \begin{equation} \label{1.9} x_{-2}^{(n)}(x_0) < x_{-1}^{(n)}(x_0) < x_0 \leq x_0^{(n)}(x_0) < x_1^{(n)}(x_0) < \dots \end{equation} requiring these to be all of the zeros near $x_0$. It is known that if $x_0$ is not isolated from $\sigma(d\mu)$ on either side, that is, for all $\delta >0$, \begin{equation} \label{1.10} (x_0-\delta, x_0) \cap \sigma(d\mu) \neq \emptyset \neq (x_0, x_0+\delta) \cap \sigma(d\mu) \end{equation} then for each fixed $j$, \begin{equation} \label{1.11} \lim_{n\to\infty}\, x_j^{(n)}(x_0) =x_0 \end{equation} We are interested in clock behavior named after the spacing of numerals on a clock---meaning equal spacing of the zeros nearby to $x_0$:
\begin{definition} We say that there is {\it quasi-clock behavior\/} at $x_0\in\sigma(d\mu)$ if and only if for each fixed $j\in{\mathbb{Z}}$, \begin{equation} \label{1.12} \lim_{n\to\infty}\, \frac{x_{j+1}^{(n)}(x_0) - x_j^{(n)}(x_0)}{x_1^{(n)}(x_0) -x_0^{(n)}(x_0)} =1 \end{equation} We say there is {\it strong clock behavior\/} at $x_0$ if and only if the DOS exists and for each fixed $j\in{\mathbb{Z}}$, \begin{equation} \label{1.13} \lim_{n\to\infty}\, n(x_{j+1}^{(n)}(x_0) - x_j^{(n)}(x_0)) = \frac{1}{\rho_\infty(x_0)} \end{equation} \end{definition}
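The model case of strong clock behavior is the Chebyshev weight: for $d\mu = (\pi\sqrt{1-x^2})^{-1}\, dx$ on $[-1,1]$, one has $p_n(\cos\theta)=\sqrt{2}\cos(n\theta)$ for $n\geq 1$, so the zeros of $p_n$ are $\cos(\frac{(j+\frac12)\pi}{n})$, $j=0,\dots,n-1$, and near $x_0=\cos\theta_0\in (-1,1)$, \begin{equation*} n(x_{j+1}^{(n)}(x_0) - x_j^{(n)}(x_0)) \to \pi\sin\theta_0 = \pi\sqrt{1-x_0^2} = \frac{1}{\rho_\infty(x_0)} \end{equation*} since here $\rho_\infty(x)=(\pi\sqrt{1-x^2})^{-1}$ is the arcsine density.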
Obviously, strong clock behavior implies quasi-clock behavior. Thus far, in the only cases where quasi-clock behavior has been proven, one in fact has strong clock behavior; but, as we will explain in Section~\ref{s7}, we think there are examples where one has quasi-clock behavior at $x_0$ but not strong clock behavior. Before this paper, all examples known with strong clock behavior have $\rho_\infty =\rho_{\frak{e}}$, but we will find several examples where there is strong clock behavior with $\rho_\infty \neq \rho_{\frak{e}}$ in Section~\ref{s7}. In that section, we will say more about:
\begin{conjecture} For any $\mu$, quasi-clock behavior holds at a.e.\ $x_0\in\Sigma_\text{\rm{ac}}(d\mu)$. \end{conjecture}
In this paper, one of our main goals is to prove this result for ergodic Jacobi matrices. A major role will be played by the CD (for Christoffel--Darboux) kernel, defined for $x,y\in{\mathbb{C}}$ by \begin{equation} \label{1.14} K_n(x,y) =\sum_{j=0}^n \, \overline{p_j(x)}\, p_j(y) \end{equation} the integral kernel for the orthogonal projection onto polynomials of degree at most $n$ in $L^2 ({\mathbb{R}},d\mu)$; see Simon \cite{CD} for a review of some important aspects of the properties and uses of this kernel. We will repeatedly make use of the CD formula, \begin{equation} \label{1.15} K_n(x,y) = \frac{a_{n+1} [\, \overline{p_{n+1}(x)}\, p_n(y) - \overline{p_n(x)}\, p_{n+1}(y)]}{\bar x-y}, \end{equation} the Schwarz inequality, \begin{equation} \label{1.16} \abs{K_n(x,y)}^2 \leq K_n(x,x) K_n(y,y) \end{equation} and the reproducing property, \begin{equation} \label{1.17} \int K_n(x,y) K_n(y,z)\, d\mu(y) =K_n(x,z). \end{equation}
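For the reader's convenience, we recall the standard derivation of \eqref{1.15} (for $x,y$ real; since the polynomials have real coefficients, the general case follows): multiplying \eqref{1.2} by $p_j(y)$ and subtracting the same identity with $x$ and $y$ interchanged gives \begin{equation*} (x-y)\, p_j(x) p_j(y) = F_j(x,y) - F_{j-1}(x,y), \qquad F_j(x,y) = a_{j+1}[p_{j+1}(x) p_j(y) - p_j(x) p_{j+1}(y)] \end{equation*} and summing over $j=0,\dots,n$ telescopes (using $p_{-1}\equiv 0$) to $(x-y) K_n(x,y) = F_n(x,y)$, which is \eqref{1.15}.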
It is a theorem (see Simon \cite{weak-CD}) that if the DOS exists, then \begin{equation} \label{1.18} \frac{1}{n+1}\, K_n(x,x)\, d\mu(x) \overset{\text {weak}}{\longrightarrow} d\nu_\infty(x) \end{equation} and, in general, $\frac{1}{n+1} K_n(x,x)\, d\mu(x)$ has the same weak limit points as $d\nu_n$. This suggests that a.c.\ parts converge pointwise, that is, one hopes that for a.e.\ $x_0\in\Sigma_\text{\rm{ac}}$, \begin{equation} \label{1.19} \frac{1}{n+1}\, K_n(x_0,x_0) \to \frac{\rho_\infty(x_0)}{w(x_0)} \end{equation} This has been proven for regular (in the sense of Stahl--Totik \cite{StT}; see also Simon \cite{EqMC}) measures with a local Szeg\H{o} condition in a series of papers of which the seminal ones are M\'at\'e--Nevai-Totik \cite{MNT91} and Totik \cite{Tot}. We will prove it for ergodic Jacobi matrices.
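In the Chebyshev model case, \eqref{1.19} can be checked by hand: for $w(x)=(\pi\sqrt{1-x^2})^{-1}$ with $x=\cos\theta$, $\theta\in (0,\pi)$, \begin{equation*} \frac{1}{n+1}\, K_n(x,x) = \frac{1}{n+1} \Bigl( 1+ 2\sum_{k=1}^n \cos^2 (k\theta)\Bigr) = 1 + \frac{1}{n+1} \sum_{k=1}^n \cos(2k\theta) \to 1 = \frac{\rho_\infty(x)}{w(x)} \end{equation*} since the last sum is bounded in $n$ and, in this case, $\rho_\infty = w$ is the arcsine density.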
We say {\it bulk universality\/} holds at $x_0\in\text{\rm{supp}}(d\mu)$ if and only if uniformly for $a,b$ in compact subsets of ${\mathbb{R}}$, we have \begin{equation} \label{1.20} \frac{K_n(x_0 + \frac{a}{n}, x_0 + \frac{b}{n})}{K_n(x_0,x_0)} \to \frac{\sin(\pi\rho(x_0)(b-a))}{\pi\rho(x_0)(b-a)} \end{equation} We use the term ``bulk'' here because \eqref{1.20} fails at edges of the spectrum; see Lubinsky \cite{Lub2008}. We also note that when \eqref{1.20} holds, typically (and in all cases below) for $z,w$ complex, one has \begin{equation} \label{1.21} \frac{K_n(x_0 + \frac{z}{n}, x_0 + \frac{w}{n})}{K_n(x_0,x_0)} \to \frac{\sin(\pi\rho(x_0)(w-\bar z))}{\pi\rho(x_0) (w-\bar z)} \end{equation}
Freud \cite{FrBk} proved bulk universality for measures on $[-1,1]$ with $d\mu_\text{\rm{s}} =0$ and strong conditions on $w(x)$. Because of related results (but with variable weights) in random matrix theory, this result was re-examined and proven in multiple interval support cases with analytic weights by Kuijlaars--Vanlessen \cite{KV}. A significant breakthrough was made by Lubinsky \cite{Lub}, whose contributions we return to shortly.
It is a basic result of Freud \cite{FrBk}, rediscovered by Levin (in \cite{LL}), that
\begin{theorem}[Freud--Levin Theorem]\label{T1.1} Bulk universality at $x_0$ implies strong clock behavior at $x_0$. \end{theorem}
\begin{remarks} 1. The proof (see \cite{FrBk,LL,CD}) relies on two facts: by the CD formula, \eqref{1.15}, if $y_0$ is a zero of $p_n$, then the other zeros of $p_n$ are precisely the points $y$ solving $K_n(y,y_0)=0$; and the zeros of $\sin (\pi\rho(x_0)(b-a))$ are at $b-a = j/\rho(x_0)$ with $j\in{\mathbb{Z}}$.
2. Szeg\H{o} \cite{SzBk} proved strong clock behavior for Jacobi polynomials and Erd\H{o}s--Tur\'an \cite{ET} for a more general class of measures on $[-1,1]$. Simon \cite{Fine1,Fine2,Fine3,Fine4} has a series on the subject. The paper with Last \cite{Fine4} was one motivation for Levin--Lubinsky \cite{LL}.
3. Lubinsky (private communication) has emphasized to us that this part of \cite{LL} is due to Levin alone---hence our name for the result. \end{remarks}
It is also useful to define \begin{equation} \label{1.22} \rho_n = \frac{1}{n}\, w(x_0) K_n(x_0,x_0) \end{equation} so \eqref{1.19} is equivalent to \begin{equation} \label{1.23} \rho_n \to \rho_\infty (x_0) \end{equation} We say {\it weak bulk universality} holds at $x_0$ if and only if, uniformly for $a,b$ on compact subsets of ${\mathbb{R}}$, we have \begin{equation} \label{1.24} \frac{K_n(x_0 + \frac{a}{n\rho_n}, x_0 + \frac{b}{n\rho_n})}{K_n(x_0,x_0)} \to \frac{\sin(\pi (b-a))}{\pi(b-a)} \end{equation} the form in which universality is often written, especially in the random matrix literature. Notice that \begin{equation} \label{1.25} \text{weak universality} + \text{\eqref{1.23}} \Rightarrow \text{universality} \end{equation} Notice also that \eqref{1.24} could hold in cases where $\rho_n$ does not converge as $n\to\infty$. The same proof that verifies Theorem~\ref{T1.1} implies
\begin{theorem}[Weak Freud--Levin Theorem] \label{T1.2} Weak bulk universality at $x_0$ implies quasi-clock behavior at $x_0$. \end{theorem}
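To see why \eqref{1.25} holds, fix $a,b$ and apply \eqref{1.24} with $a\rho_n, b\rho_n$ in place of $a,b$ (these stay in a compact set since $\rho_n$ converges): by \eqref{1.23} and the uniformity in \eqref{1.24}, \begin{equation*} \frac{K_n(x_0 + \frac{a}{n}, x_0 + \frac{b}{n})}{K_n(x_0,x_0)} = \frac{K_n(x_0 + \frac{a\rho_n}{n\rho_n}, x_0 + \frac{b\rho_n}{n\rho_n})}{K_n(x_0,x_0)} \to \frac{\sin(\pi\rho_\infty(x_0)(b-a))}{\pi\rho_\infty(x_0)(b-a)} \end{equation*} which is \eqref{1.20}.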
With this background in place, we can turn to describing the main results of this paper: five theorems, proven one per section in Sections~\ref{s2}--\ref{s6}.
The first theorem is an abstraction, extension, and simplification of Lubinsky's second approach to universality \cite{Lub-jdam}. In \cite{Lub}, Lubinsky found a beautiful way of going from control of the diagonal CD kernel to the off-diagonal (i.e., to universality). It depended on the ability to control limits not only of $\frac{1}{n} K_n(x_0,x_0)$ but also of $\frac{1}{n} K_n(x_0 + \frac{a}{n}, x_0 +\frac{a}{n})$---what we call the Lubinsky wiggle. We will especially care about the {\it Lubinsky wiggle condition}: \begin{equation} \label{1.26} \lim_{n\to\infty}\, \frac{K_n(x_0 + \frac{a}{n}, x_0 + \frac{a}{n})}{K_n(x_0,x_0)} =1 \end{equation} uniformly for $a\in [-A,A]$ for each $A$. In addition to this, in \cite{Lub}, Lubinsky needed a simple but clever inequality and, most significantly, a comparison model example where one knows universality holds. For $[-1,1]$, he took Legendre polynomials (i.e., $d\mu = \f12 \chi_{[-1,1]}(x)\,dx$). In extending this to more general sets, one uses approximation by finite gap sets as pioneered by Totik \cite{Tot-acta}. Simon \cite{2exts} then used Jacobi matrices in isospectral tori for a comparison model on these finite gap sets, while Totik \cite{Tot-prep} used polynomial mappings and the results for $[-1,1]$.
For ergodic Jacobi matrices where $\sigma(d\mu)$ is often a Cantor set, it is hard to find comparison models, so we will rely on a second approach developed by Lubinsky \cite{Lub-jdam} that seems to be able to handle any situation that his first approach can and which does not rely on a comparison model. Our first theorem, proven in Section~\ref{s2}, is a variant of this approach. We need a preliminary definition:
\begin{definition} Let $d\mu$ be given by \eqref{1.5}. A point $x_0$ is called a {\it Lebesgue point} of $d\mu$ if and only if $w(x_0) >0$ and \begin{align} \lim_{\delta\downarrow 0}\, (2\delta)^{-1} \int_{x_0-\delta}^{x_0+\delta} \abs{w(x)-w(x_0)}\, dx &=0 \label{1.27} \\ \lim_{\delta\downarrow 0}\, (2\delta)^{-1} \mu_\text{\rm{s}} (x_0 -\delta, x_0 + \delta) &= 0 \label{1.28} \end{align} \end{definition}
Standard maximal function methods (see Rudin \cite{Rudin}) show Lebesgue a.e.\ $x_0\in\Sigma_\text{\rm{ac}} (d\mu)$ is a Lebesgue point.
\begin{t1} Let $x_0$ be a Lebesgue point of $\mu$. Suppose that \begin{SL} \item[{\rm{(1)}}] The Lubinsky wiggle condition \eqref{1.26} holds uniformly for $a\in [-A,A]$ and any $A<\infty$.
\item[{\rm{(2)}}] We have \begin{equation} \label{1.29} \liminf_{n\to\infty}\, \frac{1}{n+1}\, K_n (x_0,x_0) >0 \end{equation}
\item[{\rm{(3)}}] For any $\varepsilon$, there is $C_\varepsilon >0$ so that for any $R<\infty$, there is an $N$ so that for all $n>N$ and all $z\in{\mathbb{C}}$ with $\abs{z}<R$, we have \begin{equation} \label{1.30} \frac{1}{n+1}\, K_n \biggl(x_0 + \frac{z}{n}\, , x_0+ \frac{z}{n}\biggr) \leq C_\varepsilon \exp (\varepsilon\abs{z}^2) \end{equation} \end{SL} Then weak bulk universality, and so, quasi-clock behavior, holds at $x_0$. \end{t1}
\begin{remarks} 1. If one replaces \eqref{1.30} by \begin{equation} \label{1.31} C \exp (A\abs{z}) \end{equation} then the result can be proven by following Lubinsky's argument in \cite{Lub-jdam}. He does not assume \eqref{1.31} directly but rather makes hypotheses that he shows imply it (but which are invalid in case $\text{\rm{supp}}(d\mu)$ is a Cantor set).
2. Because our Theorem~3 below is so general, we doubt there are examples where \eqref{1.30} holds but \eqref{1.31} does not, but we feel our more general abstract result is clarifying.
3. The strategy we follow is Lubinsky's, but the tactics differ and, we feel, are more elementary and illuminating. \end{remarks}
In \cite{Lub-jdam}, the only examples where Lubinsky can verify his wiggle condition are the situations where Totik \cite{Tot-prep} proves universality using Lubinsky's first method. To go beyond that, we need the following, proven in Section~\ref{s3}:
\begin{t2} Let $\Sigma\subset\Sigma_\text{\rm{ac}}$. Suppose for a.e.\ $x_0\in\Sigma$, we have that condition~{\rm{(3)}} of Theorem~1 holds and that \begin{SL} \item[{\rm{(4)}}] $\lim_{n\to\infty} \frac{1}{n+1} K_n(x_0,x_0)$ exists and is strictly positive. \end{SL} Then condition {\rm{(1)}} of Theorem~1 holds for a.e.\ $x_0\in\Sigma$. \end{t2}
Of course, (4) implies condition (2). So we obtain:
\begin{corollary}\label{L1.3} If {\rm{(3)}} and {\rm{(4)}} hold for a.e.\ $x_0\in\Sigma$, then for a.e. $x_0\in\Sigma$, we have weak universality and quasi-clock behavior. \end{corollary}
By \eqref{1.25}, we see
\begin{corollary}\label{C1.4} If {\rm{(3)}} and {\rm{(4)}} hold for a.e.\ $x_0\in\Sigma$, and if the DOS exists and the limit in {\rm{(4)}} is $\rho_\infty(x)/w(x)$, then for a.e.\ $x\in \Sigma$, we have universality and strong clock behavior. \end{corollary}
Next, we need to examine when \eqref{1.30} holds. We will not only obtain a bound of the type \eqref{1.31} but one that does not need to vary $N$ with $R$ and is universal in $z$. We will use transfer matrix techniques and notation.
Given Jacobi parameters, $\{a_n,b_n\}_{n=1}^\infty$, we define \begin{equation} \label{1.32} A_j(z) = \begin{pmatrix} \frac{z-b_j}{a_j} & - \frac{1}{a_j} \\ a_j & 0 \end{pmatrix} \end{equation} so that \eqref{1.2} is equivalent to \begin{equation} \label{1.33} \begin{pmatrix} p_n(x) \\ a_n p_{n-1} (x) \end{pmatrix} = A_n(x) \begin{pmatrix} p_{n-1}(x) \\ a_{n-1} p_{n-2}(x) \end{pmatrix} \end{equation} We normalize, placing $a_n$ on the lower component, so that \begin{equation} \label{1.34} \det(A_j(z)) =1 \end{equation}
The transfer matrix is then defined by \begin{equation} \label{1.35} T_n(z) = A_n(z) \dots A_1(z) \end{equation} so \begin{equation} \label{1.36} \begin{pmatrix} p_n(x) \\ a_n p_{n-1}(x) \end{pmatrix} =T_n(x) \begin{pmatrix} 1 \\ 0 \end{pmatrix} \end{equation} If $\tilde p_n$ are the OPRL associated to the once stripped Jacobi parameters $\{a_{n+1}, b_{n+1}\}_{n=1}^\infty$, and \begin{equation} \label{1.37} q_n(x) = -a_1^{-1} \tilde p_{n-1}(x) \end{equation} with $q_0=0$, then \begin{equation} \label{1.38} T_n(z) = \begin{pmatrix} p_n(z) & q_n(z) \\ a_n p_{n-1}(z) & a_n q_{n-1}(z) \end{pmatrix} \end{equation}
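As a quick consistency check (not from the text), the recursion above is easy to verify numerically. The following Python sketch builds the matrices \eqref{1.32} for the illustrative choice $a_n\equiv 1$, $b_n\equiv 0$, forms the product \eqref{1.35}, and confirms \eqref{1.34} and \eqref{1.36} against the three-term recurrence:

```python
import numpy as np

def transfer(x, a, b, n):
    """T_n(x) = A_n(x) ... A_1(x), with A_j as in (1.32); a, b are 1-indexed lists."""
    T = np.eye(2)
    for j in range(1, n + 1):
        A = np.array([[(x - b[j]) / a[j], -1.0 / a[j]], [a[j], 0.0]])
        T = A @ T
    return T

# illustrative Jacobi parameters (free case): a_n = 1, b_n = 0
N, x = 6, 0.3
a = [None] + [1.0] * N
b = [None] + [0.0] * N

# orthonormal polynomials from a_j p_j = (x - b_j) p_{j-1} - a_{j-1} p_{j-2}
p = [1.0, (x - b[1]) / a[1]]
for j in range(2, N + 1):
    p.append(((x - b[j]) * p[j - 1] - a[j - 1] * p[j - 2]) / a[j])

T = transfer(x, a, b, N)
assert abs(np.linalg.det(T) - 1.0) < 1e-12                              # (1.34)
assert np.allclose(T @ np.array([1.0, 0.0]), [p[N], a[N] * p[N - 1]])   # (1.36)
```

The same sketch works for any bounded Jacobi parameters with $\inf a_n > 0$; only the illustrative choice of $a$, $b$, and $x$ above is ad hoc.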
Here is how we will establish \eqref{1.30}/\eqref{1.31}:
\begin{t3} Fix $x_0\in{\mathbb{R}}$. Suppose that \begin{equation} \label{1.39} \sup_n \frac{1}{n+1} \sum_{j=0}^n\, \norm{T_j(x_0)}^2 \leq C <\infty \end{equation} Then for all $z\in{\mathbb{C}}$ and all $n$, \begin{equation} \label{1.40}
\frac{1}{n+1} \sum_{j=0}^n\, \biggl\| T_j\biggl(x_0 + \frac{z}{n+1}\biggr)\biggr\|^2 \leq C \exp (2C\alpha_-^{-1} \abs{z}) \end{equation}
Moreover, if \begin{equation} \label{1.41} \sup_n\, \norm{T_n(x_0)}^2 = C <\infty \end{equation} then for all $z\in{\mathbb{C}}$ and $n$, \begin{equation} \label{1.42}
\biggl\| T_n \biggl( x_0 + \frac{z}{n+1}\biggr)\biggr\| \leq C^{1/2} \exp (C\alpha_-^{-1} \abs{z}) \end{equation} \end{t3}
\begin{remarks} 1. Our proof is an abstraction of ideas of Avila--Krikorian \cite{AvKr} who only treated the ergodic case.
2. $\alpha_-$ is given by \eqref{1.7}.
3. There is a conjecture, called the Schr\"odinger conjecture (see \cite{MMG}), that says \eqref{1.41} holds for a.e.\ $x_0\in\Sigma_\text{\rm{ac}}(d\mu)$. \end{remarks}
Our last two theorems below are special to the ergodic situation. Let $\Omega$ be a compact metric space, $d\eta$ a probability measure on $\Omega$, and $S\colon\Omega\to\Omega$ an ergodic invertible map of $\Omega$ to itself. Let $A,B$ be continuous real-valued functions on $\Omega$ with $\inf_\omega A(\omega) >0$. Let \begin{equation} \label{1.43} \alpha_+ = \norm{A}_\infty \qquad \beta =\norm{B}_\infty \qquad \alpha_- = \norm{A^{-1}}_\infty^{-1} \end{equation} For each $\omega\in\Omega$, $J_\omega$ is the Jacobi matrix with \begin{equation} \label{1.44} a_n(\omega) = A(S^{n-1}\omega) \qquad b_n(\omega) = B(S^{n-1}\omega) \end{equation} \eqref{1.43} is consistent with \eqref{1.4} and \eqref{1.7}. Usually one only takes $\Omega$, a measure space, and $A,B$ bounded measurable functions, but by replacing $\Omega$ by $([\alpha_-,\alpha_+]\times [-\beta,\beta])^\infty\equiv \widetilde \Omega$ and mapping $\Omega\to\widetilde \Omega$ by $\omega\mapsto (A(S^n\omega), B(S^n\omega))_{n=-\infty}^\infty$, we get a compact space model equivalent to the original measure model. We use $d\mu_\omega$ for the spectral measure of $J_\omega$ and $p_n(x,\omega)$ for $p_n(x,d\mu_\omega)$.
The canonical example of the setup with a.c.\ spectrum is the almost Mathieu equation. $\alpha$ is a fixed irrational, $\lambda$ a nonzero real, $\Omega=\partial{\mathbb{D}}$, the unit circle $\{e^{i\theta}\mid\theta\in [0,2\pi)\}$ \[ a_n\equiv 1 \qquad b_n=2\lambda\cos(\pi\alpha n + \theta) \] (so $S(e^{i\theta})=e^{i\theta} e^{i\pi\alpha}$, $d\eta(\theta) = d\theta/2\pi$). If $0\neq\abs{\lambda} <1$, it is known (see \cite{Av-prep1,AD,AJ,Jit07}) that the spectrum is purely a.c.\ and is a Cantor set. It is also known \cite{Jit07} that if $\abs{\lambda} \geq 1$, there is no a.c.\ spectrum.
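In code, the covariance structure \eqref{1.44} for this example reads as follows (a Python sketch; the particular $\alpha$, $\lambda$, $\theta$ are illustrative choices, not from the text):

```python
import numpy as np

alpha = (np.sqrt(5.0) - 1.0) / 2.0   # a fixed irrational (golden mean; illustrative)
lam, theta = 0.5, 0.3                # 0 < |lambda| < 1: purely a.c. Cantor spectrum
n = np.arange(1, 11)

a = np.ones(len(n))                                # a_n = 1
b = 2.0 * lam * np.cos(np.pi * alpha * n + theta)  # b_n = 2 lambda cos(pi alpha n + theta)

# covariance (1.44): sampling b along the orbit of S(e^{i theta}) = e^{i theta} e^{i pi alpha}
# is the same as shifting theta by pi * alpha
b_next = 2.0 * lam * np.cos(np.pi * alpha * (n + 1) + theta)
assert np.allclose(b_next, 2.0 * lam * np.cos(np.pi * alpha * n + (theta + np.pi * alpha)))
```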
We will prove the following in Section~\ref{s5}:
\begin{t4} Let $\{J_\omega\}_{\omega\in\Omega}$ be an ergodic family with $\Sigma_\text{\rm{ac}}$, the common essential support of the a.c.\ spectrum of $J_\omega$, of positive Lebesgue measure. Then for a.e.\ pairs $(x,\omega)\in\Sigma_\text{\rm{ac}}\times\Omega$, \begin{alignat}{2} & \text{\rm{(i)}} \qquad && \lim_{n\to\infty} \frac{1}{n+1} \sum_{j=0}^n \, \abs{p_j(x,\omega)}^2
\text{ exists} \label{1.45} \\ & \text{\rm{(ii)}} && \lim_{n\to\infty} \frac{1}{n+1} \sum_{j=0}^n \, \abs{q_j(x,\omega)}^2 \text{ exists} \notag \end{alignat} \end{t4}
In Section~\ref{s6}, we will prove
\begin{t5} For a.e.\ $(x,\omega)$ in $\Sigma_\text{\rm{ac}}\times\Omega$, the limit in \eqref{1.45} is $\rho_\infty(x)/w_\omega(x)$ where $\rho_\infty$ is the density of the a.c.\ part of the DOS. \end{t5}
\noindent {\it Note}. This is, of course, an analog of the celebrated results of M\'at\'e--Nevai--Totik \cite{MNT91} (for $[-1,1]$) and Totik \cite{Tot} (for general sets ${\frak{e}}$ containing open intervals) for regular measures obeying a local Szeg\H{o} condition.
Theorems~3--5 show the applicability of Theorem~2, and so lead to
\begin{corollary}\label{C1.5} For any ergodic Jacobi matrix, we have universality and strong clock behavior for a.e.\ $\omega$ and a.e.\ $x_0\in\Sigma_\text{\rm{ac}}$. \end{corollary}
In particular, the almost Mathieu equation has strong clock behavior for the zeros.
\begin{remark}
It is possible to show that for the almost Mathieu equation there is universality for a.e.\ $x_0 \in \Sigma_\text{\rm{ac}}$ and {\it every} $\omega$. Our current approach to this uses the fact that the Schr\"odinger conjecture is true for the almost Mathieu operator, a recently announced result \cite{AFK}.
\end{remark}
For $n=1,2,3,4,5$, Theorem~$n$ is proven in Section~$n+1$. Section~\ref{s7} has some further remarks.
\noindent{\bf Acknowledgments.} A.A.\ would like to thank M.~Flach and T.~Tombrello for the hospitality of Caltech. B.S.\ would like to thank E.~de~Shalit for the hospitality of Hebrew University. This research was partially conducted during the period A.A.\ served as a Clay Research Fellow. We would like to thank H.~Furstenberg and B.~Weiss for useful comments.
\section{Lubinsky's Second Approach} \label{s2}
In this section, we will prove Theorem~1. We begin with two overall visions relevant to the proof. First, the so-called ``sinc kernel'' \cite{LB}, $\sin\pi z/\pi z$, enters as the Fourier transform of a suitable multiple of the characteristic function of $[-\pi,\pi]$.
Second, the ultimate goal of quasi-clock spacing is that on a $1/n\rho_n$ scale, zeros are a unit distance apart, so on this scale \begin{equation} \label{2.1} \# \text{ of zeros in } [0,n] \sim n \end{equation} Lubinsky's realization is that the Lubinsky wiggle condition and Markov--Stieltjes inequalities (see below) imply the difference of the two sides of \eqref{2.1} is bounded by $1$. This is close enough that, together with some complex variable magic, one gets unit spacing.
The complex variable magic is encapsulated in the following result whose proof we defer until the end of the section:
\begin{theorem}\label{T2.1} Let $f$ be an entire function with the following properties: \begin{SL} \item[{\rm{(a)}}] \begin{equation} \label{2.2}
f(0)=1 \end{equation}
\item[{\rm{(b)}}] \begin{equation} \label{2.3} \sup_{x\in{\mathbb{R}}} \, \abs{f(x)}<\infty \end{equation}
\item[{\rm{(c)}}] \begin{equation} \label{2.4}
\int_{-\infty}^\infty \abs{f(x)}^2 \, dx \leq 1 \end{equation}
\item[{\rm{(d)}}] $f$ is real on ${\mathbb{R}}$.
\item[{\rm{(e)}}] All the zeros of $f$ lie on ${\mathbb{R}}$ and if these zeros are labelled by \begin{equation} \label{2.5} \ldots \leq z_{-2} \leq z_{-1} < 0 < z_1 \leq z_2 \leq \dots \end{equation} with $z_0\equiv 0$, then \begin{equation} \label{2.6} \abs{z_j-z_k} \geq \abs{j-k} -1 \end{equation}
\item[{\rm{(f)}}] For each $\varepsilon >0$, there is $C_\varepsilon$ with \begin{equation} \label{2.7} \abs{f(z)} \leq C_\varepsilon e^{\varepsilon\abs{z}^2} \end{equation} \end{SL} Then \begin{equation} \label{2.8} f(z) = \frac{\sin (\pi z)}{\pi z} \end{equation} \end{theorem}
\begin{remarks} 1. \eqref{2.6} allows $f$ a priori to have double zeros but not triple or higher zeros.
2. It is easy to see there are examples where \eqref{2.7} holds for some but not all $\varepsilon$ and where \eqref{2.8} is false, so \eqref{2.7} is sharp. \end{remarks}
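As a numerical sanity check (not part of the proof), one can verify directly in Python that the sinc kernel \eqref{2.8} satisfies conditions (a)--(c) and has its zeros at the nonzero integers; the grid and truncation sizes below are arbitrary:

```python
import numpy as np

x, dx = np.linspace(-400.0, 400.0, 2_000_001, retstep=True)
f = np.sinc(x)   # numpy's sinc(x) = sin(pi x)/(pi x)

assert np.sinc(0.0) == 1.0                              # condition (a)
assert np.max(np.abs(f)) <= 1.0                         # condition (b)
l2 = np.sum(f ** 2) * dx                                # Riemann sum for the L^2 norm
assert abs(l2 - 1.0) < 2e-3                             # condition (c), up to the truncated tail
assert np.allclose(np.sinc(np.arange(1.0, 10.0)), 0.0)  # zeros at the nonzero integers
```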
\begin{proof}[Proof of Theorem~1 given Theorem~\ref{T2.1}] (This part of the argument is essentially in Lubinsky \cite{Lub-jdam}.) Fix $a\in{\mathbb{R}}$ and let \begin{equation} \label{2.9} f_n(z) = \frac{K_n(x_0 + \frac{a}{n\rho_n}, x_0 + \frac{a+z}{n\rho_n})}{K_n(x_0, x_0)} \end{equation} By \eqref{1.29}, \eqref{1.30}, and \eqref{1.16}, the $f_n$ are uniformly bounded on each disk $\{z\mid\abs{z}<R\}$, so, by Montel's theorem, they form a normal family, and it suffices to prove that any limit point $f(z)$ has the form \eqref{2.8}. We will show that any such limit point obeys conditions (a)--(f) of Theorem~\ref{T2.1}.
By the Lubinsky wiggle condition \eqref{1.26}, (a) holds. By the Schwarz inequality, \eqref{1.11}, and the wiggle condition, \begin{equation} \label{2.10} \sup_{x\in{\mathbb{R}}}\, \abs{f(x)} =1 \end{equation} which is stronger than (b).
By \eqref{1.17}, \begin{equation} \label{2.12} \int_{\abs{y-x_0-\frac{a}{n\rho_n}} \leq \frac{R}{n\rho_n}} \abs{K_n(x,y)}^2 w(y)\, dy \leq K_n(x,x) \end{equation} for each $R<\infty$. Changing variables and using the Lebesgue point condition leads to \begin{equation} \label{2.13} \int_{-R}^R \abs{f(y)}^2\, dy \leq 1 \end{equation} which yields \eqref{2.4} (see Lubinsky \cite{Lub-jdam} for more details). In this, one uses \eqref{1.29} and \eqref{1.30} to see that \begin{equation} \label{2.14x} 0 < \inf \rho_n < \sup \rho_n <\infty. \end{equation}
That $f$ is real on ${\mathbb{R}}$ is immediate; the reality of zeros follows from Hurwitz's theorem and the fact (see, e.g., \cite{CD}) that $p_{n+1}(x) -cp_n(x)$ has only real zeros for $c$ real.
The Markov--Stieltjes inequalities (see \cite{Markov,FrBk,CD}) assert that if $x_1,x_2, \dots$ are successive zeros of $p_n(x)-cp_{n-1}(x)$ for some $c$, then for $j\geq k+2$, \begin{equation} \label{2.14} \mu([x_j,x_k]) \geq \sum_{\ell=k+1}^{j-1} \frac{1}{K_n(x_\ell,x_\ell)} \end{equation} Using the fact that the $z_j$ (including $z_0$) are, by Hurwitz's theorem, limits of $x_j$'s scaled by $n\rho_n$ and the Lubinsky wiggle condition to control limits of $n\rho_n/K_n(x_\ell,x_\ell)$, one finds (see Lubinsky \cite{Lub-jdam} for more details) that \eqref{2.6} holds. Here one uses that $x_0$ is a Lebesgue point to be sure that \begin{equation} \label{2.15} \frac{1}{x_k -x_j} \int_{x_j}^{x_k} d\mu(y)\to w(x_0) \end{equation}
Finally, \eqref{1.30} implies \eqref{2.7}. Thus, \eqref{2.8} holds. \end{proof}
The following will reduce the proof of Theorem~\ref{T2.1} to using conditions (a)--(e) to improve the bound \eqref{2.7} to \eqref{2.20}.
\begin{proposition}\label{P2.2} \begin{SL} \item[{\rm{(a)}}] Fix $a>0$. If $f$ is measurable, real-valued and supported on $[-a,a]$ with \begin{equation} \label{2.16} \int_{-a}^a f(x)^2\, dx \leq 2a \quad\text{and}\quad \int_{-a}^a f(x)\, dx = 2a \end{equation} then \begin{equation} \label{2.17} f(x) =\chi_{[-a,a]} (x) \quad\hbox{a.e.} \end{equation}
\item[{\rm{(b)}}] If $f$ is real-valued and continuous on ${\mathbb{R}}$ and $\widehat f$ is supported on $[-\pi,\pi]$ with \begin{equation} \label{2.18} \int_{-\infty}^\infty f(x)^2\, dx \leq 1 \quad\text{and}\quad f(0)=1 \end{equation} then \begin{equation} \label{2.19} f(x) =\frac{\sin(\pi x)}{\pi x} \end{equation}
\item[{\rm{(c)}}] If $f$ is an entire function, real on ${\mathbb{R}}$ with \eqref{2.18}, and for all $\delta >0$, there is $C_\delta$ with \begin{equation} \label{2.20} \abs{f(z)}\leq C_\delta \exp ((\pi+\delta)\abs{\Ima z}) \end{equation} then \eqref{2.8} holds. \end{SL} \end{proposition}
\begin{proof} (a) \ Essentially this follows from equality in the Schwarz inequality. More precisely, \eqref{2.16} implies \begin{equation} \label{2.21} \int_{-a}^a \abs{f(x)-\chi_{[-a,a]}(x)}^2\, dx \leq 0 \end{equation}
(b) \ Apply part (a) to $(2\pi)^{1/2} \widehat f(k)$ with $a=\pi$.
(c) \ By the Paley--Wiener theorem, \eqref{2.20} implies that $\widehat f$ is supported on $[-\pi,\pi]$. \end{proof}
Thus, we are reduced to going from \eqref{2.7} to \eqref{2.20}.
\comm{ The key tool is the Phragm\'en--Lindel\"of principle in the following form (see Titchmarsh \cite[Sect.~5.61]{Tit}): Let $S$ be a sector of open angle $\pi/\alpha$, that is, for some $z_0\in\partial{\mathbb{D}}$, \begin{equation} \label{2.22} S=\{z\mid z=z_0 re^{i\theta};\, 0 < r <\infty,\, \abs{\theta} < \pi/2\alpha\} \end{equation} Suppose $f$ is analytic in $S$ and continuous on $\bar S$ and obeys \begin{equation} \label{2.23} \abs{f(z)} \leq C\exp (\abs{z}^\beta) \end{equation} for some \begin{equation} \label{2.24} \beta <\alpha \end{equation} Then \begin{equation} \label{2.25x} \sup_{z\in\bar S}\, \abs{f(z)} \leq \sup_{z\in\partial S}\, \abs{f(z)} \end{equation} and, in particular, if $f$ is bounded on $\partial S$, it is bounded on $S$.
As a warmup, we prove a variant of Theorem~\ref{T2.1} that suffices for most applications, which is somewhat simpler to prove and which we will need in the proof of Theorem~\ref{T2.1}:
\begin{theorem}\label{T2.3} Suppose {\rm{(a)}}--{\rm{(d)}} of Theorem~\ref{T2.1} hold but {\rm{(e)}} is replaced by the weaker and {\rm{(f)}} by the stronger \begin{SL} \item[{\rm{(e$^\prime$)}}] $\abs{z_j} \geq \abs{j}-1$ \item[{\rm{(f$^\prime$)}}] For some $\delta <2$, there is $C$ with \begin{equation} \label{2.25} \abs{f(z)} \leq Ce^{\abs{z}^\delta} \end{equation} \end{SL} Then \eqref{2.8} holds. \end{theorem}
\begin{proof} }
By $f(0)=1$, the reality of the zeros and \eqref{2.7}, we have, by the Hadamard factorization theorem (see Titchmarsh \cite[Sect.~8.24]{Tit}) that \begin{equation} \label{2.26} f(z) = e^{Az} \prod_{j\neq 0} \biggl( 1-\frac{z}{z_j}\biggr) e^{z/z_j} \end{equation} with $A$ real. For $x\in{\mathbb{R}}$, define $z_j(x)$ to be a renumbering of the $z_j$, so \begin{equation} \label{2.24x} \ldots \leq z_{-1}(x) < x \leq z_0(x) \leq z_1(x) \leq \ldots \end{equation} By $\abs{z_j-z_k}\geq \abs{k-j}-1$, we see that \begin{equation} \label{2.25y} z_{n+1}(x) - x \geq n \qquad x-z_{-(n+1)}(x) \geq n \end{equation}
In particular, $(x-1.1,\, x+ 1.1)$ can contain at most $z_0(x), z_{\pm 1}(x), z_{\pm 2}(x)$. Removing the open intervals of size $2/10$ about each of the five points $\abs{z_\ell(x)-x}$ ($\ell=0,\pm 1, \pm 2$) from $[0,1]$ leaves at least one point; that is, we can pick $\delta=\delta(x)$ in $[0,1]$ so that for all $j$, \begin{equation} \label{2.25a} \abs{z_j(x) - (x\pm \delta)} \geq \tfrac{1}{10} \end{equation} Moreover, by \eqref{2.25y}, for $n=1,2,\dots$, \begin{equation} \label{2.25b} \abs{z_{\pm(n+2)}(x) - (x\pm\delta)} \geq n \end{equation}
Since \begin{equation} \label{2.25c} \frac{\abs{1-(x+iy)/z_j}^2}{\abs{(1-(x+\delta)/z_j)(1-(x-\delta)/z_j)}} \leq 1 + \frac{(y^2 + \delta^2)}{\abs{z_j-(x+\delta)} \abs{z_j-(x-\delta)}} \end{equation} we conclude from \eqref{2.26} that \begin{align} \frac{\abs{f(x+iy)}^2}{\abs{f(x-\delta)} \abs{f(x+\delta)}} & \leq \biggl[ 1 + \frac{y^2+1}{(\frac{1}{100})}\biggr]^5 \prod_{n=1}^\infty \biggl( 1 + \frac{1+y^2}{n^2}\biggr)^2 \notag \\ & \leq C(1+y^{10}) \biggl( \frac{\sinh \pi\sqrt{y^2+1}}{\pi \sqrt{y^2+1}}\biggr)^2 \label{2.25d} \end{align} Thus, for any $\varepsilon$, there is a $C_\varepsilon$ with \begin{equation} \label{2.25e} \abs{f(x+iy)} \leq C_\varepsilon \exp ((\pi+\varepsilon) \abs{y}) \end{equation} for every $x+iy \in {\mathbb{C}}$, which is \eqref{2.20}. This concludes the proof of Theorem~\ref{T2.1}.
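The passage to the $\sinh$ factor in \eqref{2.25d} rests on the Euler product $\prod_{n=1}^\infty (1 + c/n^2) = \sinh(\pi\sqrt{c})/(\pi\sqrt{c})$. Here is a quick Python check with $c = 1+y^2$ (the value of $y$ and the truncation level are arbitrary):

```python
import math

y = 0.7
c = 1.0 + y ** 2
prod = 1.0
for n in range(1, 200_000):
    prod *= 1.0 + c / n ** 2          # partial Euler product

s = math.sqrt(c)
target = math.sinh(math.pi * s) / (math.pi * s)
assert abs(prod / target - 1.0) < 1e-4
```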
\comm{
For $x \in {\mathbb{R}}$, choose $0 \leq \delta=\delta(x) \leq 1$ such that $|(z-x_j)^2-\delta^2| \geq \frac {1} {100}$.
Notice that for every $y \in {\mathbb{R}}$ we have \begin{equation}
\frac {|f(x+iy)|^2} {|f(x+\delta) f(x-\delta)|}=\prod_{j \neq 0}
\biggl( 1+\frac {\delta^2+y^2} {|(z_j-x)^2-\delta^2|} \biggr). \end{equation}
From the condition $|z_j-z_k| \geq |k-j|-1$, it follows that we can relabel all $z_j$, except possibly $10$ of them, as $w_j$, $j \neq 0$, with $|w_j-x|
\geq |j|$. Thus \begin{align}
\frac {|f(x+iy)|^2} {|f(x+\delta) f(x-\delta)|} & \leq (101+100 y^2)^{10} \prod_{j=1}^\infty \biggl (1+\frac {1+y^2} {j^2} \biggr )\\ \nonumber &\leq (101+100 y^2)^{10} \biggl (\frac {\sinh(\pi y)} {\pi y} \biggr )^2. \end{align} }
\comm{ Thus, for $\abs{y}\geq 1$ and $y$ real, \begin{align} \abs{f(iy)}^2 &= \prod_{j\neq 0} \biggl( 1 + \frac{y^2}{z_j^2}\biggr) \label{2.27} \\ &\leq (1+Cy^2)^2 \prod_{j=1}^\infty \biggl( 1+\frac{y^2}{j^2}\biggr)^2 \label{2.28} \\ &= (1+Cy^2)^2 \biggl( \frac{\sinh (\pi y)}{\pi y}\biggr)^2 \label{2.29} \end{align} \eqref{2.28}, with $C=\max(\frac{1}{z_1^2}, \frac{1}{z_{-1}^2})$, follows from $\abs{z_{j+1}} \geq \abs{j}$ and then \eqref{2.29} from the Euler product for $\sin (\pi z)$. }
\begin{remark} It is possible to show, using the Phragm\'en--Lindel\"of principle \cite{Tit}, that if one assumes, instead of \eqref{2.7}, the stronger $|f(z)| \leq C e^{|z|^\delta}$ for some $\delta<2$, then it is possible to weaken \eqref{2.6} to \begin{equation} \label{2.26x}
|z_j| \geq |j|-1 \end{equation} for if \eqref{2.26x} holds, then \eqref{2.26} implies that \begin{equation} \label{2.27x} \abs{f(iy)} \leq C(1+\abs{y}) e^{\pi\abs{y}} \end{equation} Applying Phragm\'en--Lindel\"of to $(1-iz)^{-1} f(z) e^{i\pi z}$ on the sectors $\arg z\in [0,\pi/2]$ and $[\pi/2,\pi]$ proves that \begin{equation} \label{2.28x} \abs{f(x+iy)} \leq C (1+\abs{z}) e^{\pi\abs{y}} \end{equation} \end{remark}
\comm{ Define \begin{equation} \label{2.31} g(z) = e^{i(\pi+\varepsilon)z} f(z) \end{equation}
On the sector $S\equiv \{z\mid \arg z\in (0, \pi/2)\}$ with opening angle $\pi/2$ (so $\alpha =2$ in the language of \eqref{2.24}), we have \eqref{2.23} with $\beta=\gamma <\alpha$ and, by (b), $g$ is bounded on $\arg z=0$ and, by \eqref{2.30} and $e^{i(\pi+\varepsilon)(iy)}=e^{-(\pi+\varepsilon)y}$, $g$ is bounded on $\arg z=\pi/2$. By the Phragm\'en--Lindel\"of principle, $g$ is bounded, that is, on $S$, \begin{equation} \label{2.32} \abs{f(z)} \leq Ce^{(\pi+\varepsilon)\abs{\Ima y}} \end{equation}
A similar argument works on the other three quadrants, so $f$ obeys \eqref{2.20}. By Proposition~\ref{2.2}(c), \eqref{2.8} holds. \end{proof}
\begin{lemma}\label{L2.4} \begin{SL} \item[{\rm{(i)}}] Let $y_1\in [-1,1]$ and let $f$ be $C^1$ in a neighborhood of $[-1,1]$. Suppose $f(y_1)=0$. Then for $x\in [-1,1]$, \begin{equation} \label{2.33} g(x) = \frac{f(x)}{x-y_1} \end{equation} obeys \begin{equation} \label{2.34} \abs{g(x)} \leq \sup_{\abs{y}\leq 1}\, \abs{f'(y)} \end{equation}
\item[{\rm{(ii)}}] Let $y_1,y_2\in [-1,1]$ be distinct and let $f$ be $C^2$ in a neighborhood of $[-1,1]$. Suppose $f(y_1)=f(y_2)=0$. Then for $x\in [-1,1]$, \begin{equation} \label{2.35} h(x) = \frac{f(x)}{(x-y_1)(x-y_2)} \end{equation} obeys \begin{equation} \label{2.36} \abs{h(x)} \leq \tfrac12\, \sup_{\abs{y}\leq 1}\, \abs{f''(y)} \end{equation} \end{SL} \end{lemma}
\begin{remark} $g$ and $h$ are initially defined for $x\neq y_1,y_2$ but have unique continuous extensions. \end{remark}
\begin{proof} (i) \ We have that \begin{align} f(x)-f(y_1) &= \int_0^1 \biggl[ \frac{df}{ds}\, ((1-s)x+sy_1)\biggr]\, ds \label{2.37} \\ &= (y_1-x) \int_0^1 f' ((1-s)x+sy_1)\, ds \label{2.38} \end{align} which implies that \begin{equation} \label{2.39} g(x) = -\int_0^1 f'((1-s)x+sy_1)\, ds \end{equation} from which we immediately get \eqref{2.34} but also \begin{equation} \label{2.40} \abs{g'(x)} \leq \tfrac12\, \sup_{\abs{y}\leq 1}\, \abs{f''(y)} \end{equation}
(ii) \ Clearly, if $g$ is given by \eqref{2.33}, then $g(y_2)=0$ and \begin{equation} \label{2.41} h(x) = \frac{g(x)}{x-y_2} \end{equation} so, by (i), \begin{equation} \label{2.42} \abs{h(x)} \leq \sup_{\abs{y}\leq 1}\, \abs{g'(y)} \end{equation} and \eqref{2.40} implies \eqref{2.36}. \end{proof}
\begin{lemma} \label{L2.5} Suppose $f$ is analytic in a neighborhood of $\bar S$ where \begin{equation} \label{2.43} S = \{z\mid \abs{\arg z} < \pi/4\} \end{equation} and on $\bar S$, \begin{equation} \label{2.44} \abs{f(z)} \leq Ce^{\abs{z}^2} \end{equation} Suppose also for $x>0$, \begin{equation} \label{2.45} \abs{f(x)} \leq C \end{equation} Then on $S$, \begin{equation} \label{2.46} \abs{f(x+iy)} \leq Ce^{2x\abs{y}} \end{equation} \end{lemma}
\begin{proof} We will prove the result for $y>0$ and then use symmetry. Let \begin{equation} \label{2.46x} S_+ = \{z\in S\mid \Ima z >0\} \end{equation} so $S_+$ has opening angle $\pi/4$ (i.e., $\alpha =4$). Let \begin{equation} \label{2.47} g(z) = e^{iz^2} f(z) \end{equation}
Clearly, for $x >0$, \begin{equation} \label{2.48} \abs{g(x)} = \abs{f(x)} \leq C \end{equation} while \begin{equation} \label{2.49} \abs{g(xe^{i\pi/4})} = e^{-x^2} \abs{f(xe^{i\pi/4})} \leq C \end{equation} so, by the Phragm\'en--Lindel\"of principle on $S_+$, $\abs{g(z)} \leq C$ on $S_+$, that is, there \begin{equation} \label{2.50} \abs{f(z)} \leq Ce^{\Real (iz^2)} = Ce^{2xy} \end{equation} which is \eqref{2.46}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{T2.1}] By Lemma~\ref{L2.5}, \begin{equation} \label{2.51} x\in{\mathbb{R}},\, \abs{z-x} \leq 1 \Rightarrow \abs{f(z)} \leq Ce^{2\abs{x}} \end{equation} Thus, for $x\in{\mathbb{R}}$, by a Cauchy estimate, \begin{equation} \label{2.52} \abs{f'(x)} + \abs{f''(x)} \leq Ce^{2\abs{x}} \end{equation}
Let $x_0\in{\mathbb{R}}$ and define $z_j(x_0)$ to be the zeros of $f(z)$ labelled by \begin{equation} \label{2.53} \ldots < z_{-1} (x_0) < x_0 \leq z_1(x_0) < z_2(x_0) \leq \dots \end{equation} By \eqref{2.6}, \begin{equation} \label{2.54} \abs{z_j(x_0) -x_0} \geq \abs{j}-1 \end{equation}
Let \begin{equation} \label{2.55} h(z) = \frac{f(z+x_0)}{(z-z_1(x_0) + x_0)(z-z_{-1}(x_0)+x_0)} \end{equation} Then, by Lemma~\ref{L2.4} (if $\abs{z_1(x_0)-x_0} >1$, use $\abs{z_1(x_0)-x_0}^{-1} <1$ and use the single zero estimate), \begin{equation} \label{2.56} \abs{h(0)} \leq Ce^{2\abs{x_0}} \end{equation}
If now $w_j$ are the zeros of $h$ (i.e., for $j>0$, $w_{\pm j}=z_{\pm(j-1)}(x_0)-x_0$), we have \begin{equation} \label{2.57} \abs{w_j} \geq \abs{j} \end{equation} So, in particular, \begin{equation} \label{2.58} \sum_{+j} \, \abs{w_j}^{-2} <\infty \end{equation}
By the sharp form of Hadamard's theorem (see, e.g., the proof of Reed--Simon \cite[Thm.~XIII.106]{RS4}), we have \begin{equation} \label{2.59} h(z) = h(0) e^{Az} \prod_{j\neq 0} \biggl( 1-\frac{z}{w_j}\biggr) e^{z/w_j} \end{equation} with $A$ real, so \begin{equation} \label{2.60} \abs{h(iy)} \leq \abs{h(0)}\, \frac{\sinh(\pi y)}{\pi y} \end{equation} as in the proof of \eqref{2.29}.
It follows from \eqref{2.56} and \eqref{2.60} that \begin{equation} \label{2.61} \abs{f(x_0 + iy)} \leq Ce^{2\abs{x_0}} e^{\pi\abs{y}} \end{equation} so \begin{equation} \label{2.62} \abs{f(z)} \leq Ce^{(2+\pi)\abs{z}} \end{equation}
Thus, condition (f$^\prime$) of Theorem~\ref{T2.3} with $\gamma=1 <2$ holds, and using that theorem, we get \eqref{2.8}. \end{proof} }
\section{Doing the Lubinsky Wiggle} \label{s3}
Our goal in this section is to prove Theorem~2.
\begin{proof}[Proof of Theorem~2] By Egorov's theorem (see Rudin \cite[p.~73]{Rudin}), for every $\varepsilon$, there exists a compact set ${\mathcal {L}}\subset\Sigma$ with $\abs{\Sigma\setminus {\mathcal {L}}} <\varepsilon$ (with $\abs{\cdot}=$ Lebesgue measure) so that on ${\mathcal {L}}$, $\frac{1}{n+1} K_n(x,x)\equiv \tilde{q}_n(x)$ converges uniformly to a limit we will call $\tilde{q}(x)$. If we prove that \eqref{1.26} holds for a.e.\ $x_0\in {\mathcal {L}}$, then by taking a sequence of $\varepsilon$'s going to $0$, we get that \eqref{1.26} holds for a.e.\ $x_0\in\Sigma$.
By Lebesgue's theorem on differentiability of integrals of $L^1$-functions (see Rudin \cite[Thm~7.7]{Rudin}) applied to the characteristic function of ${\mathcal {L}}$, for a.e.\ $x_0\in {\mathcal {L}}$, \begin{equation} \label{3.1} \lim_{\delta\downarrow 0}\, (2\delta)^{-1} \abs{(x_0-\delta, x_0+\delta) \cap {\mathcal {L}}} =1 \end{equation} We will prove that \eqref{1.26} holds for all $x_0$ with \eqref{3.1} and with condition~(4).
$\frac{1}{n+1} K_n(x+\frac{a}{n} + \frac{\bar z}{n}, x+\frac{a}{n} + \frac{z}{n})$ is analytic in $z$, so by a Cauchy estimate and $a$ real, \begin{align}
\biggl| \frac{d}{da}\, \tilde{q}_n\biggl(x + \frac{a}{n}\biggr)\biggr|
& \leq \sup_{\abs{z}\leq 1} \frac{1}{n+1} \biggl| K_n\biggl( x+\frac{a}{n} + \frac{\bar z}{n}\, , x+\frac{a}{n} + \frac{z}{n}\biggr) \biggr| \notag \\
&= \sup_{\abs{z}\leq 1} \biggl| \tilde{q}_n \biggl( x+\frac{a}{n} + \frac{z}{n}\biggr)\biggr| \label{3.2} \end{align} By the Schwarz inequality, for $x,y\in{\mathbb{C}}$, \begin{equation} \label{3.3} \frac{1}{n+1}\, \abs{K_n(x,y)} \leq (\tilde{q}_n(x) \tilde{q}_n(y))^{1/2} \end{equation}
Thus, using the assumed \eqref{1.30}, for any $x_0$ for which \eqref{1.30} holds and any $A<\infty$, there are $N_0$ and $C$ so for $n\geq N_0$, \begin{equation} \label{3.4}
\biggl| \tilde{q}_n \biggl( x_0 + \frac{a}{n}\biggr) - \tilde{q}_n \biggl( x_0 + \frac{b}{n}\biggr)\biggr| \leq C\abs{a-b} \end{equation} for all $a,b$ with $\abs{a}\leq A$, $\abs{b}\leq A$.
Since each $\tilde{q}_n$ is continuous and the convergence is uniform on ${\mathcal {L}}$, $\tilde{q}$ is continuous on ${\mathcal {L}}$. Thus, we have for each $A<\infty$, \begin{equation} \label{3.5}
\sup \biggl\{\bigg| \tilde{q} \biggl( x_0 + \frac{a}{n}\biggr) - \tilde{q} (x_0)\biggr| \biggm| \abs{a}<A,\, x_0 + \frac{a}{n}\in {\mathcal {L}}\biggr\} \to 0 \end{equation} as $n\to\infty$. By the uniform convergence, \begin{equation} \label{3.6x}
\sup \biggl\{\biggl| \tilde{q}_n \biggl( x_0 + \frac{a}{n}\biggr) - \tilde{q}_n(x_0)\biggr| \biggm| \abs{a}<A,\, x_0 + \frac{a}{n}\in {\mathcal {L}}\biggr\} \to 0 \end{equation}
We next use the fact that \eqref{3.1} holds. It implies that \begin{equation} \label{3.6} \sup_{\abs{b}\leq A}\, n\, \text{\rm{dist}} \biggl( x_0 + \frac{b}{n}\, , {\mathcal {L}}\biggr) \to 0 \end{equation} or equivalently, for any $\varepsilon$, there is an $N_1$ so for $n\geq N_1$ and $\abs{b}< A$, there exists $\abs{a} <A$ ($a$ will be $n$-dependent) so that $\abs{a-b}<\varepsilon$ and $x_0 + \frac{a}{n}\in {\mathcal {L}}$. We have that \begin{equation} \label{3.7}
\biggl| \tilde{q}_n \biggl( x_0 + \frac{b}{n}\biggr) - \tilde{q}_n(x_0)\biggr| \leq \biggl| \tilde{q}_n \biggl( x_0 + \frac{b}{n}\biggr)
- \tilde{q}_n \biggl( x_0 + \frac{a}{n}\biggr)\biggr| + \biggl| \tilde{q}_n \biggl( x_0 + \frac{a}{n}\biggr) - \tilde{q}_n (x_0)\biggr| \end{equation} where $\abs{b-a}<\varepsilon$ and $x_0 + \frac{a}{n}\in {\mathcal {L}}$. By \eqref{3.4}, if $n\geq\max(N_0,N_1)$, the first term is bounded by $C\varepsilon$, and by \eqref{3.6x}, the second term goes to zero, that is, \begin{equation} \label{3.8}
\sup_{\abs{b}<A}\, \biggl| \tilde{q}_n \biggl( x_0 + \frac{b}{n}\biggr) - \tilde{q}_n(x_0)\biggr| \to 0 \end{equation} Since $\tilde{q}_n(x_0)\to \tilde{q}(x_0)\neq 0$, we have \begin{equation} \label{3.9}
\sup_{\abs{b}<A}\, \biggl| \frac{\tilde{q}_n(x_0 + \frac{b}{n})}{\tilde{q}_n(x_0)} -1 \biggr| \to 0 \end{equation} as $n\to\infty$, which is \eqref{1.26}. \end{proof}
\section{Exponential Bounds for Perturbed Transfer Matrices} \label{s4}
In this section, our goal is to prove Theorem~3. As noted in the introduction, our approach is an extension of a theorem of Avila--Krikorian \cite[Lemma~3.1]{AvKr} exploiting that one can avoid using cocycles and so go beyond the apparent limitation to ergodic situations. The argument here is related to but somewhat different from variation of parameters techniques (see, e.g., Jitomirskaya--Last \cite{JL} and Killip--Kiselev--Last \cite{KKL}) and should have wide applicability.
\begin{proof}[Proof of Theorem~3] Fix $n$ and define for $j=1,2,\dots, n$, \begin{align} \tilde A_j &= A_j \biggl( x_0 + \frac{z}{n+1}\biggr) \label{4.1} \\ A_j &= A_j(x_0) \label{4.2} \\ T_j & = A_j \dots A_1 \qquad \tilde T_j = \tilde A_j \dots \tilde A_1 \label{4.3} \end{align} (Note that $\tilde A_j$ and $\tilde T_j$ are both $j$- and $n$-dependent.)
Note that, by \eqref{1.32}, \begin{equation} \label{4.4} \tilde A_j - A_j = a_j^{-1} \begin{pmatrix} \frac{z}{n+1} & 0 \\ 0 & 0 \end{pmatrix} \end{equation} so that \begin{equation} \label{4.5} \norm{\tilde A_j - A_j} \leq \alpha_-^{-1}\, \frac{\abs{z}}{n+1} \end{equation}
Write \begin{align} T_j^{-1} \tilde T_j &= (T_j^{-1} \tilde A_j T_{j-1})(T_{j-1}^{-1} \tilde A_{j-1} T_{j-2}) \dots (T_1^{-1} \tilde A_1 T_0) \label{4.6} \\ &= (1+B_j) (1+B_{j-1}) \dots (1+B_1) \label{4.7} \end{align} where \begin{equation} \label{4.8} B_k = T_k^{-1} (\tilde A_k -A_k) T_{k-1} \end{equation} Here we used \begin{equation} \label{4.9} A_k T_{k-1} = T_k \end{equation}
Since $T_k$ has determinant $1$ (see \eqref{1.34}), we have \begin{equation} \label{4.10} \norm{T_k^{-1}} = \norm{T_k} \end{equation} So, by \eqref{4.5}, \begin{equation} \label{4.11} \norm{B_k} \leq \norm{T_k}\, \norm{T_{k-1}} \alpha_-^{-1}\, \frac{\abs{z}}{n+1} \end{equation} Thus, since \begin{equation} \label{4.12} \norm{1+B_j} \leq 1 + \norm{B_j} \leq \exp(\norm{B_j}) \end{equation} \eqref{4.7} implies that \begin{equation} \label{4.13} \norm{\tilde T_j} \leq \norm{T_j} \exp\biggl( \alpha_-^{-1} \abs{z} \biggl[ \frac{1}{n+1} \sum_{k=1}^j \, \norm{T_k}\, \norm{T_{k-1}}\biggr]\biggr) \end{equation}
By the Schwarz inequality, for $j=1,2,\dots, n$, \begin{align} \frac{1}{n+1} \sum_{k=1}^j\, \norm{T_k}\, \norm{T_{k-1}} &\leq \frac{1}{n+1} \sum_{k=0}^j \, \norm{T_k}^2 \notag \\ &\leq \frac{1}{n+1} \sum_{k=0}^n\, \norm{T_k}^2 \label{4.14} \end{align} Using \eqref{1.39} and \eqref{4.13}, we find \begin{equation} \label{4.15} \norm{\tilde T_j} \leq \norm{T_j} \exp(C\alpha_-^{-1} \abs{z}) \end{equation} This clearly holds for $j=0$ also. Squaring and summing, \begin{equation} \label{4.16} \frac{1}{n+1} \sum_{j=0}^n \, \norm{\tilde T_j}^2 \leq \biggl( \frac{1}{n+1} \sum_{j=0}^n \, \norm{T_j}^2\biggr) \exp (2C\alpha_-^{-1} \abs{z}) \end{equation} which is \eqref{1.40}.
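The Schwarz inequality enters \eqref{4.14} in the form \[ \sum_{k=1}^j \, \norm{T_k}\, \norm{T_{k-1}} \leq \biggl(\, \sum_{k=1}^j \, \norm{T_k}^2 \biggr)^{1/2} \biggl(\, \sum_{k=1}^j \, \norm{T_{k-1}}^2 \biggr)^{1/2} \leq \sum_{k=0}^j \, \norm{T_k}^2 \] since each factor in the middle is bounded by $(\sum_{k=0}^j \norm{T_k}^2)^{1/2}$.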
Note that \eqref{1.41} implies \eqref{1.39} so that \eqref{1.42} is just \eqref{4.15}. \end{proof}
We note that the argument above can also be used for more general perturbative bounds. For example, suppose that \begin{equation} \label{4.16a} C_1 \equiv \sup_n \norm{T_n(x_0)} <\infty \end{equation} for a given set of Jacobi parameters. Let $a'_n =a_n + \delta a_n$ and $b'_n =b_n + \delta b_n$ with \begin{equation} \label{4.17} C_2 \equiv \sum_{n=1}^\infty\, \abs{\delta a_n} + \abs{\delta b_n} <\infty \end{equation} and \begin{equation} \label{4.18} \alpha'_- =\inf\, a'_n >0 \end{equation}
Defining $\tilde A_n, \tilde T_n$ at energy $x_0$ but with $\{a'_n, b'_n\}_{n=1}^\infty$ Jacobi parameters, one gets \begin{equation} \label{4.19} \norm{\tilde A_k -A_k} \leq C_3 [\alpha_-^{-1} + (\alpha'_-)^{-1}] (\abs{\delta a_k} + \abs{\delta b_k}) \end{equation} for some universal constant $C_3$. Thus \begin{equation} \label{4.20} \norm{B_k} \leq C_3 C_1^2 [\alpha_-^{-1} + (\alpha'_-)^{-1}] (\abs{\delta a_k} + \abs{\delta b_k}) \end{equation} and \begin{equation} \label{4.21} \norm{\tilde T_n} \leq C_1 \exp (C_1^2 C_2 C_3 [\alpha_-^{-1} + (\alpha'_-)^{-1}]) \end{equation} providing another proof of a standard $\ell^1$ perturbation result.
\section{Ergodic Jacobi Matrices and Ces\`aro Summability} \label{s5}
In this section, our goal is to prove Theorem~4. We fix an ergodic Jacobi matrix setup. We will need to use special solutions found by Deift--Simon in 1983:
\begin{theorem}[Deift--Simon \cite{S169}] \label{T5.1} For any Jacobi matrix with $\Sigma_\text{\rm{ac}} (d\mu_\omega)$ {\rm{(}}which is a.e.\ $\omega$-independent{\rm{)}} of positive measure, for a.e.\ pairs $(x,\omega)\in \Sigma_\text{\rm{ac}}\times\Omega$ {\rm{(}}a.e.\ with respect to $dx\otimes d\eta(\omega)${\rm{)}}, there exist sequences $\{u_n^\pm (x,\omega)\}_{n=-\infty}^\infty$ so that \begin{equation} \label{5.1} T_n (x,\omega) \binom{u_1^\pm (x,\omega)}{a_0u_0^\pm(x,\omega)} = \binom{u_{n+1}^\pm (x,\omega)}{a_n u_n^\pm (x,\omega)} \end{equation} with the following properties: \begin{alignat}{2} &\text{\rm{(i)}} \qquad && u_n^- (x,\omega) = \overline{u_n^+ (x,\omega)} \label{5.2} \\ &\text{\rm{(ii)}} \qquad && a_n (u_{n+1}^+ u_n^- - u_{n+1}^- u_n^+) =-2i \label{5.3} \\ &\text{\rm{(iii)}} \qquad && \abs{u_n^+ (x,\omega)} = \abs{u_0^+ (x, S^n \omega)} \label{5.4} \\ &\text{\rm{(iv)}} \qquad && \int \abs{u_n^+ (x,\omega)}^2\, d\eta(\omega) <\infty \label{5.5} \\ &\text{\rm{(v)}} \qquad && u_0^\pm \text{ is real} \label{5.6} \end{alignat} \end{theorem}
Of course, by \eqref{5.4}, the integral in \eqref{5.5} is $n$-independent. For later purposes (see Section~\ref{s6}), we will need an explicit formula for this integral. In fact, we will need explicit formulae for $u_0^+, u_1^+$ in terms of the $m$-function.
For $\Ima z>0$, one defines $\tilde u_n^+ (z,\omega)$ to solve the difference equation (cf.\ \eqref{5.1}) \begin{equation} \label{5.7} a_n \tilde u_{n+1}^+ + (b_n-z) \tilde u_n^+ + a_{n-1} \tilde u_{n-1}^+ =0 \end{equation} with $\sum_{n=1}^\infty \abs{\tilde u_n^+}^2 <\infty$. This determines $\tilde u_n^+$ up to a constant, and so, \begin{equation} \label{5.8} m(z,\omega) = -\frac{\tilde u_1^+ (z,\omega)}{a_0 \tilde u_0^+ (z,\omega)} \end{equation} is normalization-independent and obeys, by \eqref{5.7}, \begin{equation} \label{5.9} m(z,\omega) = \frac{1}{-z+b_1 -a_1^2 m(z,S\omega)} \end{equation} [{\it Note}: We have suppressed the $\omega$-dependence of $a_n,b_n$.]
As usual with solutions of \eqref{5.9}, \begin{equation} \label{5.10} m(z,\omega) = \int \frac{d\mu_\omega^+ (x)}{x-z} \end{equation} where $d\mu_\omega^+$ is the measure associated to the half-line Jacobi matrix, $J_\omega$.
For a.e.\ $x\in\Sigma_\text{\rm{ac}}$ and a.e.\ $\omega$, $m(x+i0,\omega)$ exists and obeys \begin{equation} \label{5.11} \Ima m(x+i0, \omega) >0 \qquad (\text{a.e. } x\in\Sigma_\text{\rm{ac}}) \end{equation} We normalize the solution $u^+$ obeying Theorem~\ref{T5.1} by defining: \begin{align} u_0^+ (x,\omega) &= \frac{1}{a_0 [\Ima m(x+i0,\omega)]^{1/2}} \label{5.12} \\ u_1^+ (x,\omega) &= -\frac{m(x+i0,\omega)}{[\Ima m(x+i0,\omega)]^{1/2}} \label{5.13} \end{align} (We have listed all the formulae because \cite{S169} only considers the case $a_n\equiv 1$.) The $u_n^+$ are then determined by the difference equation and the $u_n^-$ by \eqref{5.2}.
Of course, we have \begin{equation} \label{5.14} p_n = \frac{u_{n+1}^+ - u_{n+1}^-}{u_1^+ - u_1^-} \end{equation} since both sides obey the same difference equations with $p_{-1}=0$ (since $u_0^+ = u_0^-$) and $p_0=1$.
By \eqref{5.14}, to prove Theorem~4 we need to show that \begin{equation} \label{5.14a} \lim_{n\to\infty}\, \frac{1}{n}\, \sum_{j=0}^{n-1} (u_{j+1}^+ - u_{j+1}^-)^2 \end{equation} exists. This follows from the existence of \begin{equation} \label{5.14b} \lim_{n\to\infty}\, \frac{1}{n}\, \sum_{j=1}^n \, \abs{u_j^+}^2 \end{equation} and \begin{equation} \label{5.14c} \lim_{n\to\infty}\, \frac{1}{n}\, \sum_{j=1}^n (u_j^+)^2 \end{equation}
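Indeed, since $u_j^- = \overline{u_j^+}$ by \eqref{5.2}, \[ (u_j^+ - u_j^-)^2 = (2i \Ima u_j^+)^2 = 2\Real ((u_j^+)^2) - 2\abs{u_j^+}^2 \] so the Ces\`aro averages in \eqref{5.14a} are determined by the limits \eqref{5.14b} and \eqref{5.14c}.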
From \eqref{5.4} and the ergodic theorem (plus \eqref{5.5}), the a.e.\ $\omega$ existence of the limit in \eqref{5.14b} is immediate. In cases like the almost Mathieu equation with Diophantine frequencies where $u_n^+$ is almost periodic, one also gets the existence of the limit in \eqref{5.14c} directly, but there are examples, like the almost Mathieu equation with frequencies whose dual has singular continuous spectrum, where the phase of $u_n^+$ is not almost periodic. So this argument does not work in general. In fact, we will eventually prove that for a.e.\ $(x,\omega)$ in $\Sigma_\text{\rm{ac}} \times \Omega$ (see Theorem~\ref{T6.3}) \begin{equation} \label{5.14d} \lim_{n\to\infty}\, \frac{1}{n}\, \sum_{j=1}^n (u_j^+)^2 =0 \end{equation} It would be interesting to have a direct proof of this (for the periodic case, see \cite{Rice}) rather than the indirect path we will take.
Define the $2\times 2$ matrix \begin{equation} \label{5.15} U_n(x,\omega) = \frac{1}{(-2i)^{1/2}} \begin{pmatrix} u_{n+1}^+ (x,\omega) & u_{n+1}^- (x,\omega) \\ a_n u_n^+ (x,\omega) & a_n u_n^- (x,\omega) \end{pmatrix} \end{equation} (where we fix once and for all a choice of $\sqrt{-2i}$). By \eqref{5.3}, \begin{equation} \label{5.16} \det(U_n(x,\omega)) =1 \end{equation} and, by \eqref{5.1}, \begin{equation} \label{5.17} T_n (x,\omega) U_0(x,\omega) = U_n(x,\omega) \end{equation} or \begin{equation} \label{5.18} T_n(x,\omega) = U_n(x,\omega) U_0 (x,\omega)^{-1} \end{equation}
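Explicitly, by \eqref{5.3}, \[ \det (U_n(x,\omega)) = \frac{1}{-2i}\, a_n (u_{n+1}^+ u_n^- - u_{n+1}^- u_n^+) = \frac{-2i}{-2i} = 1 \] which is \eqref{5.16}.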
For now, we fix $x\in\Sigma_\text{\rm{ac}}$ with \begin{equation} \label{5.19} E([a_0(\omega)^2 \Ima m(x+i0,\omega)]^{-1}) <\infty \end{equation} (known Lebesgue a.e.\ by Kotani theory; see \cite{S168,S169}), so $U_n$ can be defined and is in $L^2$. We are heading towards a proof of
\begin{theorem}\label{T5.2} For any fixed matrix $Q$ and a.e.\ $\omega$, the matrix limit \begin{equation} \label{5.20} \lim_{n\to\infty}\, \frac{1}{n}\, \sum_{j=0}^{n-1} T_j(x,\omega)^t QT_j(x,\omega) \end{equation} exists. \end{theorem}
\begin{proof}[Proof of Theorem~4 given Theorem~\ref{T5.2}] Pick $Q=\left(\begin{smallmatrix} 1&0 \\ 0&0 \end{smallmatrix}\right)$. Then the $1,1$ matrix element of $T_j(x,\omega)^t QT_j (x,\omega)$ is $p_j(x,\omega)^2$, so \eqref{1.45} holds. Similarly, the $2,2$ matrix element is $q_j(x,\omega)^2$. \end{proof}
Equations \eqref{5.18} and \eqref{5.5} will imply critical a priori bounds on $\norm{T_n(x,\,\cdot\,)}_{L^1(d\eta)}$. It will be convenient to use the Hilbert--Schmidt norm on these $2\times 2$ matrices.
\begin{lemma} \label{L5.3} We have \begin{equation} \label{5.21} \sup_n \int \norm{T_n (x,\omega)}\, d\eta(\omega) <\infty \end{equation} \end{lemma}
\begin{proof} Since $\det(U_n) =1$, \begin{equation} \label{5.22} \norm{U_n(x,\omega)^{-1}} = \norm{U_n(x,\omega)} \end{equation} Thus, by \eqref{5.18}, \begin{equation} \label{5.23} \norm{T_n(x,\omega)} \leq \norm{U_n(x,\omega)}\, \norm{U_0(x,\omega)} \end{equation} By the Schwarz inequality, \begin{align*} \sup_n \int \norm{T_n(x,\omega)} \, d\eta(\omega) &\leq \sup_n \int \norm{U_n(x,\omega)}^2 \, d\eta(\omega) \\ &= \int \norm{U_0(x,\omega)}^2 \, d\eta(\omega) \\ &< \infty \end{align*} by \eqref{5.5} and the fact that since \eqref{5.4} holds and we use Hilbert--Schmidt norms, \begin{equation} \label{5.24} \norm{U_j (x,\omega)} = \norm{U_0(x,S^j\omega)} \end{equation} \end{proof}
Let $A_j(\omega)$ be the matrix \eqref{1.32} with $a_j =a_j(\omega)$, $b_j=b_j(\omega)$ and let \begin{equation} \label{5.25a} A(\omega) \equiv A_1(\omega) \end{equation} so \begin{equation} \label{5.25b} A_j(\omega) = A(S^{j-1} \omega) \end{equation} and the transfer matrix for $J_\omega$ is \begin{equation} \label{5.25c} T_n(\omega) = A(S^{n-1}\omega) \dots A(\omega) \end{equation}
Now form the suspension \begin{equation} \label{5.26} \widehat\Omega =\Omega\times {\mathbb{S}}{\mathbb{L}}(2,{\mathbb{C}}) \end{equation} and define $\widehat S\colon\widehat\Omega \to\widehat\Omega$ by \begin{equation} \label{5.27} \widehat S(\omega,C) = (S\omega, A(\omega) C) \end{equation} so \begin{equation} \label{5.28} \widehat S^n(\omega,C) = (S^n \omega, T_n(\omega)C) \end{equation}
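Indeed, \eqref{5.28} follows from \eqref{5.27} by induction, using \eqref{5.25c}: \[ \widehat S(S^{n-1}\omega, T_{n-1}(\omega)C) = (S^n\omega, A(S^{n-1}\omega) T_{n-1}(\omega) C) = (S^n \omega, T_n(\omega)C) \]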
\begin{theorem} \label{T5.4} There exists an $\widehat S$-invariant probability measure, $d\nu$, on $\widehat\Omega$ whose projection onto $\Omega$ is $d\eta$ and with \begin{equation} \label{5.29} \int \norm{C}\, d\nu(\omega,C) <\infty \end{equation} \end{theorem}
\begin{proof} Pick any probability measure $\mu_0$ on ${\mathbb{S}}{\mathbb{L}}(2,{\mathbb{C}})$ with $\int \norm{C}^k\, d\mu_0 (C) <\infty$ for all $k$. For example, one could take $d\mu_0(C)=Ne^{-\norm{C}^2} d\,\text{Haar}(C)$ where $N$ is a normalization constant. Let $\widehat S_*$ be induced on measures on $\widehat\Omega$ by $[\widehat S_*(\nu)](f) =\nu(f\circ\widehat S)$. Let \begin{equation} \label{5.30} \nu_n = \widehat S^n_* (\eta\otimes \mu_0) \end{equation}
Then the invariance of $\eta$ under $S_*$ implies the projection of $\nu_n$ is $\eta$ and \begin{align} \int \norm{C}\, d\nu_n &= \int \norm{T_n(\omega)C}\, d\eta\otimes d\mu_0 \notag \\ &\leq \biggl( \int \norm{T_n(\omega)}\, d\eta\biggr) \biggl( \int \norm{C}\, d\mu_0\biggr) \label{5.31} \end{align} which, by \eqref{5.21}, is uniformly bounded in $n$.
Let $\tilde \nu_n$ be the Ces\`aro averages of $\nu_n$, that is, \begin{equation} \label{5.32} \tilde \nu_n =\frac{1}{n}\, \sum_{j=0}^{n-1} \nu_j \end{equation} So, by \eqref{5.31}, \begin{equation} \label{5.33} \sup_n \int \norm{C}\, d\tilde \nu_n <\infty \end{equation} so $\{\tilde \nu_n\}$ are tight, that is, \[ \lim_{K\to\infty}\, \sup_n\, \tilde \nu_n \{(\omega,C)\mid \norm{C}\geq K\} = 0 \] which implies that $\tilde \nu_n$ has a weak limit point in probability measures on $\widehat\Omega$. This weak limit point is invariant and, by \eqref{5.33}, it obeys \eqref{5.29}. \end{proof}
\begin{lemma}\label{L5.5} Let $L<\infty$. Let \begin{equation} \label{5.33a} \widehat\Omega_L =\{(\omega,C)\mid \norm{U_0(\omega)} < L, \, \norm{C} <L\} \end{equation} Then for any $\varepsilon$, there is a $K$ so that for a.e.\ $(\omega,C)\in\widehat\Omega_L$, \begin{equation} \label{5.33b} \limsup_{n\to\infty}\, \frac{1}{n}\, \sum_{\substack{ j\in B(K,\omega,C) \\ 0\leq j \leq n-1}} \norm{T_j(\omega)C}^2 \leq \varepsilon \end{equation} where \begin{equation} \label{5.33c} B(K,\omega,C) =\{j\mid\norm{T_j(\omega)C} \geq K\} \end{equation} \end{lemma}
\begin{proof} Since $U_0(\omega)\in L^2 (d\eta)$, we have \begin{equation} \label{5.33d} \lim_{s\to\infty}\, \int_{\norm{U_0(\omega)}\geq s} \norm{U_0(\omega)}^2 \, d\eta(\omega) =0 \end{equation} so for any $\delta >0$, there exists $s(\delta)$ so that the integral is less than $\delta$.
Let $\widetilde B(\widetilde K,\omega)$ be defined by \begin{equation} \label{5.33e} \widetilde B(\widetilde K,\omega) = \{j\mid \norm{U_j (\omega)} \geq \widetilde K\} \end{equation} By the Birkhoff ergodic theorem and \eqref{5.24} for a.e.\ $\omega$, \begin{equation} \label{5.33f} \lim_{n\to\infty}\, \frac{1}{n} \sum_{\substack{ j\in\widetilde B(\widetilde K,\omega) \\ 0\leq j \leq n-1}} \norm{U_j(\omega)}^2=\int_{\norm{U_0(\omega)}\geq\widetilde K} \norm{U_0(\omega)}^2 \, d\eta\leq\delta \end{equation} if $\widetilde K\geq s(\delta)$.
Given $\varepsilon$ and $L$, let $\delta = \varepsilon/L^4$ and $K\geq L^2 s(\delta)$. Since \begin{equation} \label{5.33g} \norm{T_j(\omega)C}\leq \norm{U_j(\omega)} L^2 \end{equation} if $(\omega,C)\in \widehat\Omega_L$, \[ B(K,\omega,C) \subset \widetilde B\biggl( \frac{K}{L^2}\, ,\omega\biggr) \] So, by \eqref{5.33f} and \eqref{5.33g}, \begin{equation} \label{5.33h} \limsup_{n\to\infty}\, \frac{1}{n} \sum_{\substack{ j\in B(K,\omega,C) \\ 0\leq j \leq n-1}} \norm{T_j(\omega) C}^2 \leq L^4 \delta =\varepsilon \end{equation} which is \eqref{5.33b}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{T5.2}] Without loss, suppose $\norm{Q}\leq 1$. Define on $\widehat\Omega$ \begin{equation} \label{5.34} f_n(\omega,C) =\frac{1}{n}\, \sum_{j=0}^{n-1} C^t T_j(x,\omega)^t QT_j (x,\omega) C \end{equation} If we prove that this has a pointwise limit for $\nu$ a.e.\ $(\omega,C)$, we are done: since $\eta$ is the projection of $\nu$, for $\eta$ a.e.\ $\omega$, there is some $C$ for which \eqref{5.34} has a limit. But $C$ is invertible, so $(C^t)^{-1} f_n C^{-1}$ has a limit, that is, \eqref{5.20} does.
Notice that if \begin{equation} \label{5.35} h(\omega,C) =C^t QC \end{equation} then $f_n(\omega,C)$ is a Ces\`aro average of $h(\widehat S^j(\omega,C))$, so we could almost use the ergodic theorem, except that we only know a priori that $\int \norm{h(\omega,C)}^{1/2}\, d\nu <\infty$, not $\int \norm{h(\omega,C)}\, d\nu <\infty$, so we need to use Lemma~\ref{L5.5}.
Fix $L$ and consider $(\omega,C)\in \widehat \Omega_L$. Let \begin{equation} \label{5.36} h_K(\omega,C) = \begin{cases} C^t QC &\text{if } \norm{C}\leq K \\ 0 & \text{if } \norm{C} >K \end{cases} \end{equation} Then, since $\norm{Q} \leq 1$, \begin{equation} \label{5.37} \norm{h_K(\widehat S^j(\omega,C)) - h(\widehat S^j(\omega,C))} \leq \begin{cases} 0 & \text{if } j\notin B(K,\omega,C) \\ \norm{T_j(\omega)C}^2 & \text{if } j\in B(K,\omega,C) \end{cases} \end{equation} It follows that if \begin{equation} \label{5.38} f_n^{(K)} (\omega,C) =\frac{1}{n}\, \sum_{j=0}^{n-1} h_K (\widehat S^j(\omega,C)) \end{equation} then \[ \norm{f_n^{(K)}(\omega,C) -f_n(\omega,C)} \leq \text{sum on left side of \eqref{5.33b}} \] So, by Lemma~\ref{L5.5}, \begin{equation} \label{5.39} \limsup_{n\to\infty}\, \norm{f_n^{(K)}(\omega,C)-f_n(\omega,C)}\leq \varepsilon \end{equation} if \begin{equation} \label{5.40} K\geq K(\varepsilon,L) \end{equation} given by the lemma.
For any finite $K$, $h_K$ is bounded, so the Birkhoff ergodic theorem and the invariance of $\nu$ imply, for a.e.\ $(\omega,C)$, $\lim f_n^{(K)}(\omega,C)$ exists. Thus \eqref{5.39} and \eqref{5.40} imply that the limits $\lim_n f_n^{(K)}(\omega,C)$ form a Cauchy sequence as $K \to \infty$ (along, say, integer values), and that their limit is also $\lim_n f_n(\omega,C)$, for a.e.\ $(\omega,C) \in \widehat \Omega_L$.
Since $L$ is arbitrary and $\nu (\widehat\Omega\setminus\widehat\Omega_L)\to 0$ as $L\to\infty$ (on account of \eqref{5.29} and $\int \norm{U_0(\omega)}^2\, d\eta<\infty$), we see that $f_n$ has a limit for a.e.\ $(\omega,C)$. \end{proof}
\section{Equality of the Local and Microlocal DOS} \label{s6}
Our main goal in this section is to prove Theorem~5. We know from Theorem~4 that for a.e.\ $\omega\in\Omega$ and $x_0\in\Sigma_\text{\rm{ac}}$, we have that \begin{equation} \label{6.1} \frac{1}{n+1}\, K_n (x_0,x_0) \to k_\omega (x_0) \end{equation} for some positive function $k_\omega(x_0)$. By Theorems~1 and 2, this implies that the spacing of zeros at a.e.\ Lebesgue point is \begin{equation} \label{6.2} x_{j+1}^{(n)}(x_0) - x_j^{(n)}(x_0) \sim \frac{1}{nw_\omega(x_0) k_{\omega}(x_0)} \end{equation}
Thus, for fixed $K$ large, in an interval $(x_0-\frac{K}{n}, x_0+\frac{K}{n})$, the number of zeros is approximately $2K w_\omega(x_0) k_\omega(x_0)$. On the other hand, if $\rho_\infty(x_0)$ is the density of states, for a.e.\ $x_0$ in the a.c.\ part of the support of $d\nu_\infty$, the number of zeros in $(x_0-\delta, x_0+\delta)$ is approximately $2\delta n\rho_\infty(x_0)$. If $\delta$ were $K/n$, this would tell us that \begin{equation} \label{6.3} w_\omega(x_0) k_\omega(x_0) =\rho_\infty (x_0) \end{equation} which is precisely \eqref{1.23}.
Of course, $\rho_\infty$ is defined by first taking $n\to\infty$ and then $\delta\downarrow 0$, so we cannot set $\delta=K/n$, but \eqref{6.3} is an equality of a local density of zeros obtained by taking intervals with $O(n)$ zeros as $n\to\infty$ and a microlocal individual spacing as in \eqref{6.2}.
So define \begin{equation} \label{6.4} \rho_L (x_0,\omega) =w_\omega(x_0) k_\omega(x_0) \end{equation} the microlocal DOS. Notice that we have indicated an $\omega$-dependence of $\rho_L$ because, at this point, we have not proven $\omega$-independence. $\omega$-independence often comes from the ergodic theorem---we determined the existence of $k_\omega(x_0)$ using the ergodic theorem, but unlike for $\rho_\infty$, the underlying measure was only invariant, not ergodic, and indeed $k_\omega$, the object we controlled, is {\it not\/} $\omega$-independent.
Of course, once we prove $\rho_L=\rho_\infty$, $\rho_L$ will be proven $\omega$-independent, but we will, in fact, go the other way: we first prove that $\rho_L$ is $\omega$-independent, use that to show that if $u$ is the Deift--Simon wave function, then the average of $u^2$ (not $\abs{u}^2$) is zero, and use that to prove that $\rho_L=\rho_\infty$.
\begin{theorem}\label{T6.1} Suppose that $J_\omega$ is a family of ergodic Jacobi matrices. Let $\rho_L(x,\omega)$ be given by \eqref{6.1}/\eqref{6.4} for $x\in\Sigma_\text{\rm{ac}}$, $\omega\in\Omega$. Then for a.e.\ $x\in\Sigma_\text{\rm{ac}}$, $\rho_L(x,\omega)$ is a.e.\ $\omega$-independent. \end{theorem}
\begin{proof} Since $\rho_L (x,\omega)$ is jointly measurable for $(x,\omega)\in\Sigma_\text{\rm{ac}}\times\Omega$, $\rho_L(x,\,\cdot\,)$ is measurable for a.e.\ $x$. Since $S$ is ergodic, it suffices to prove that $\rho_L (x,S\omega)=\rho_L(x,\omega)$ for a.e.\ $(x,\omega)$.
Let $p_n (x,\omega)$ be the OPs for $J_\omega$. Then the zeros of $p_{n-1}(x,S\omega)$ and $p_n(x,\omega)$ interlace. It follows that for any interval $[x_0-\frac{A}{n}, x_0+\frac{A}{n}] =I_{n,A}(x_0)$, \begin{equation} \label{6.5} \abs{\#\text{ of zeros of } p_n(x,\omega) \text{ in } I_{n,A}(x_0) - \# \text{ of zeros of } p_{n-1} (x,S\omega) \text{ in } I_{n,A}(x_0)} \leq 2 \end{equation} If $\rho_L(x_0,S\omega)\neq\rho_L(x_0,\omega)$ and $A=k\rho_L(x_0,\omega)^{-1}$ with $k$ large, it is easy to get a contradiction between \eqref{6.5} and \eqref{6.2}. Thus, $\rho_L(x,\omega)=\rho_L(x,S\omega)$ as claimed. \end{proof}
Next, we need a connection between $\rho_L$ and $u$. Recall (see \eqref{5.14}) \begin{equation} \label{6.6} p_n(x,\omega) = \frac{\Ima u_{n+1}^+ (x,\omega)}{\Ima u_1^+(x,\omega)} \end{equation} and, by \eqref{5.13}, \begin{equation} \label{6.7} \Ima u_1^+(x,\omega) = -[\Ima m(x+i0,\omega)]^{1/2} \end{equation} and, by \eqref{5.10}, for a.e.\ $x\in\Sigma_\text{\rm{ac}}$, \begin{equation} \label{6.8} \Ima m(x+i0,\omega)=\pi w_\omega(x) \end{equation}
Thus, if we define \begin{equation} \label{6.9} \text{\rm{Av}}_\omega(f_j(\omega)) \equiv \lim_{n\to\infty}\, \frac{1}{n}\, \sum_{j=1}^n f_j(\omega) \end{equation} then \begin{equation} \label{6.10} \rho_L(x,\omega) = \frac{1}{\pi}\, \text{\rm{Av}}_\omega ([\Ima u_j^+ (x,\omega)]^2) \end{equation}
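To check \eqref{6.10}: by \eqref{6.1}/\eqref{6.4} and $K_n(x,x) = \sum_{j=0}^n p_j(x,\omega)^2$, we have $\rho_L(x,\omega) = w_\omega(x)\, \text{\rm{Av}}_\omega (p_j(x,\omega)^2)$, while \eqref{6.6}--\eqref{6.8} give \[ p_j(x,\omega)^2 = \frac{[\Ima u_{j+1}^+ (x,\omega)]^2}{[\Ima u_1^+ (x,\omega)]^2} = \frac{[\Ima u_{j+1}^+ (x,\omega)]^2}{\pi w_\omega(x)} \]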
Note that $\Ima u_j^+(x,\omega)$ is not $\Ima u_0^+ (x,S^j \omega)$, so we cannot write \eqref{6.10} as an integral. In fact, the $\omega$-independence of the right side of \eqref{6.10} (because of $\omega$-independence of the left side) will have important consequences.
To see where we are heading, we note the following result of Kotani \cite{Kot97}; see Damanik \cite[Thm.~5]{Dam}:
\begin{theorem}[Kotani \cite{Kot97}]\label{t6.2} For a.e.\ $x\in\Sigma_\text{\rm{ac}}$, \begin{equation} \label{6.11} \rho_\infty(x) = \frac{1}{2\pi} \int \abs{u_0^+(x,\omega)}^2\, d\eta(\omega) \end{equation} \end{theorem}
\begin{remarks} 1. \cite{Kot97,Dam} treat $a_n\equiv 1$, but it is easy to accommodate general $a_n$.
2. Kotani's theorem is not stated in this form but rather as (see eqn.~(22) in Damanik \cite{Dam}) \begin{equation} \label{6.12} \pi\rho_\infty (x) = \int \Ima G_\omega(0,0; x+i0)\, d\eta(\omega) \end{equation} where $G_\omega$ is the whole-line Green's function. Because $G_\omega$ is reflectionless, $G_\omega$ is pure imaginary and \begin{align} \Ima (G_\omega (0,0; x+i0)) &= [2a_0^2 \Ima m(x+i0,\omega)]^{-1} \label{6.13} \\ &= \tfrac12\, \abs{u_0^+ (x,\omega)}^2 \label{6.14} \end{align} by \eqref{5.12}. \end{remarks}
Thus, the key to proving $\rho_L =\rho_\infty$ will be to show that \begin{equation} \label{6.15} \text{\rm{Av}}_\omega ([\Ima u_j^+ (x,\omega)]^2) = \text{\rm{Av}}_\omega([\Real u_j^+ (x,\omega)]^2) \end{equation} Note that \eqref{6.10} includes that the $\text{\rm{Av}}_\omega ([\Ima u_j^+]^2)$ exists and, by the ergodic theorem, $\text{\rm{Av}}_\omega(\abs{u_j^+}^2)$ exists, so we know for a.e.\ $(x,\omega)\in\Sigma_\text{\rm{ac}}\times\Omega$ that $\text{\rm{Av}}_\omega([\Real u_j^+ (x,\omega)]^2)$ exists. We are heading towards:
\begin{theorem}\label{T6.3} Suppose $x\in\Sigma_\text{\rm{ac}}$ is such that $\rho_L(x,\omega)$ exists for a.e.\ $\omega$ and is $\omega$-independent, and that \begin{equation} \label{6.16} \nu_\infty ((-\infty,x])\neq \tfrac12 \end{equation} Then for a.e.\ $\omega$, \begin{equation} \label{6.17} \text{\rm{Av}}_\omega((u_j^+(x,\omega))^2) =0 \end{equation} \end{theorem}
\begin{proof}[Proof of Theorem~5 given Theorem~\ref{T6.3}] \eqref{6.16} fails at at most a single $x$ in $\Sigma_\text{\rm{ac}}$, so \eqref{6.17} holds for a.e.\ $(x,\omega)\in\Sigma_\text{\rm{ac}}\times\Omega$. Its real part, together with $\Real((u_j^+)^2) = (\Real u_j^+)^2 - (\Ima u_j^+)^2$, implies \eqref{6.15}, and so, since $\abs{u_j^+}^2 = (\Real u_j^+)^2 + (\Ima u_j^+)^2$, for a.e.\ $(x,\omega)$, \begin{align} \text{\rm{Av}}_\omega([\Ima u_j^+ (x,\omega)]^2) &= \tfrac12\, \text{\rm{Av}}_\omega(\abs{u_j^+ (x,\omega)}^2) \label{6.18} \\ &= \tfrac12 \int \abs{u_0^+ (x,\omega)}^2\, d\eta(\omega) \label{6.19} \end{align} by the ergodic theorem. By \eqref{6.10}, \eqref{6.11}, and the definition \eqref{6.1}/\eqref{6.4} of $\rho_L$, we see that the limit in \eqref{1.45} is $\rho_\infty (x)/w_\omega(x)$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{T6.3}] Fix $x\in\Sigma_\text{\rm{ac}}$ (at each stage, we work up to sets of Lebesgue measure $0$). Define $\varphi(\omega)\in (0,2\pi)$ by \begin{equation} \label{6.20} \text{\rm{Arg}} (-m(x+i0,\omega)) = -\varphi(\omega) \end{equation} Then $\varphi(\omega)\in (0,\pi)$ since $\Ima m>0$. Let ($\varphi$ and $s_n$ also depend on $x$) \begin{equation} \label{6.21} s_n(\omega) =\sum_{j=1}^n \varphi (S^{j-1}\omega) \end{equation} Then by \eqref{5.8} and \eqref{5.4}, \begin{equation} \label{6.22} u_n^+ (x,\omega) =e^{-is_n(\omega)} u_0^+ (x,S^n \omega) \end{equation} and \begin{equation} \label{6.23} u_{n+j}^+ (x,\omega) = e^{-is_n(\omega)} u_j^+ (x,S^n \omega) \end{equation} It follows that for each fixed $n$, \begin{equation} \label{6.24} \text{\rm{Av}}_\omega ((\Ima u_j^+(x,S^n\omega))^2) =\text{\rm{Av}}_\omega ((\Ima e^{is_n(\omega)} u_j^+ (x,\omega))^2) \end{equation}
If $s,x,y$ are real, \begin{align} (\Ima (e^{is}(x+iy)))^2 &= (x\sin s + y\cos s)^2 \notag \\ &= y^2 + (\sin^2 s)(x^2 -y^2) + xy (\sin 2s) \label{6.25} \end{align} and thus, \begin{equation} \label{6.26} \begin{split} \text{LHS of \eqref{6.24}} &= \text{\rm{Av}}_\omega ([\Ima (u_j^+ (x,\omega))]^2) + \sin^2 s_n(\omega) R(\omega) \\ & \qquad\qquad +\tfrac12\, \sin (2s_n(\omega)) I(\omega) \end{split} \end{equation} where \begin{align} R(\omega) &= \text{\rm{Av}}_\omega (\Real ((u_j^+ (x,\omega))^2)) \label{6.27} \\ I(\omega) &= \text{\rm{Av}}_\omega (\Ima ((u_j^+ (x,\omega))^2)) \label{6.28} \end{align} (all such averages having been previously shown to exist).
We know that for a.e.\ $(x,\omega)$, for $n=0,1,2, \dots$, LHS of \eqref{6.24} exists and is $n$-independent (and equal to $\rho_L (x,\omega)$). For such $(x,\omega)$, \eqref{6.26} implies that for all $n$, \begin{equation} \label{6.29} \sin s_n(\omega) [\sin s_n(\omega) R(\omega) + \cos s_n(\omega) I(\omega)] =0 \end{equation}
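To see \eqref{6.29}: comparing \eqref{6.26} with its $n=0$ case (where $s_0(\omega)=0$), the $n$-independence of the LHS of \eqref{6.24} forces \[ \sin^2 s_n(\omega)\, R(\omega) + \tfrac12\, \sin(2 s_n(\omega))\, I(\omega) = 0 \] and since $\tfrac12 \sin(2s) = \sin s \cos s$, factoring out $\sin s_n(\omega)$ gives \eqref{6.29}.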
We want to consider two cases: \begin{SL} \item[{\it Case 1.}] For a positive measure set of $\omega$, \begin{equation} \label{6.30} s_2(\omega) = \pi \qquad s_4(\omega) =2\pi \qquad s_6(\omega) = 3\pi \qquad \dots \end{equation}
\item[{\it Case 2.}] For a.e.\ $\omega$, there is an $n=n(\omega)$ so that \begin{equation} \label{6.31} s_{2j}(\omega) = j\pi \quad (j=1, \dots, n-1) \qquad s_{2n}(\omega) \neq n\pi \end{equation} \end{SL}
In Case~1, for such $\omega$, we have $\frac {s_n(\omega)} {n \pi} \to \frac {1} {2}$. It follows by standard Sturm oscillation theory (see, e.g., \cite{JoMo}) that $\frac {s_n(\omega)} {n \pi} \to \nu_\infty ((-\infty, x])$ for almost every $\omega$. Thus, the hypothesis \eqref{6.16} eliminates Case~1.
For Case~2, suppose first that $n$ is odd, so $s_{2(n-1)}(\omega)$ is a multiple of $2\pi$, and \eqref{6.29} for $2n-1$ and $2n$ (writing $\varphi_j \equiv \varphi(S^{j-1}\omega)$) imply \begin{gather} \sin(\varphi_{2n-1}) [\sin (\varphi_{2n-1}) R+\cos(\varphi_{2n-1}) I] = 0 \label{6.33} \\ \sin(\varphi_{2n-1} + \varphi_{2n}) [\sin (\varphi_{2n-1} + \varphi_{2n}) R + \cos (\varphi_{2n-1} +\varphi_{2n}) I] = 0 \label{6.34} \end{gather} Since $\varphi_{2n-1}\in (0,\pi)$, $\sin(\varphi_{2n-1}) \neq 0$ and since $\varphi_{2n-1} + \varphi_{2n} \in (0,2\pi)\setminus \{\pi\}$ (for if it equals $\pi$, then $s_{2n} =n\pi$!), $\sin(\varphi_{2n-1} + \varphi_{2n}) \neq 0$.
The determinant of equations \eqref{6.33}/\eqref{6.34} is \begin{equation} \label{6.35} -\sin(\varphi_{2n-1}) \sin(\varphi_{2n-1} + \varphi_{2n}) \sin (\varphi_{2n}) \neq 0 \end{equation} since \begin{equation} \label{6.36} \sin(A) \cos(B) -\sin(B) \cos(A) =\sin(A-B) \end{equation} Here $\neq 0$ in \eqref{6.35} comes from $\varphi_{2n}\in (0,\pi)$, so $\sin(\varphi_{2n})\neq 0$.
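In detail, after dividing \eqref{6.33} by $\sin(\varphi_{2n-1})$ and \eqref{6.34} by $\sin(\varphi_{2n-1}+\varphi_{2n})$ (both nonzero), the resulting $2\times 2$ linear system in $(R,I)$ has determinant \[ \sin(\varphi_{2n-1}) \cos(\varphi_{2n-1}+\varphi_{2n}) - \sin(\varphi_{2n-1}+\varphi_{2n}) \cos(\varphi_{2n-1}) = -\sin(\varphi_{2n}) \] by \eqref{6.36}, consistent with \eqref{6.35}.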
The nonzero determinant means that \eqref{6.33}/\eqref{6.34} $\Rightarrow I=R=0$, that is, $\text{\rm{Av}}_\omega ((u_j^+)^2) =0$ for a.e.\ $\omega$. If $n$ is even, $s_{2(n-1)}(\omega)$ is an odd multiple of $\pi$ and all equations pick up minus signs, so the argument is unchanged. \end{proof}
\section{Assorted Remarks} \label{s7}
1. \ We have proven for general ergodic Jacobi matrices that for a.e.\ $(x,\omega)\in\Sigma_\text{\rm{ac}}\times\Omega$, \begin{equation} \label{7.1} \frac{1}{n+1}\, K_n(x,x;\omega) \to \frac{\rho_\infty(x)}{w_\omega (x)} \end{equation} Here $\rho_\infty$ is the Radon--Nikodym derivative of the a.c.\ part of $d\nu_\infty$. Based on \cite{MNT91,Tot}, where results of this type are proven for regular measures, one expects \begin{equation} \label{7.2} \rho_\infty(x)=\rho_{\frak{e}}(x) \end{equation} Here ${\frak{e}}$ is the essential spectrum of $J_\omega$ and $\rho_{\frak{e}}$ its equilibrium measure. In \cite{EqMC}, it is proven (see Thm.~1.15 there)
\begin{theorem}\label{T7.1} If $\Sigma_\text{\rm{ac}}$ is not empty, then \eqref{7.2} holds if and only if, for $\rho_{\frak{e}}$ a.e.\ $x$, \begin{equation} \label{7.3} \gamma(x)=0 \end{equation} \end{theorem}
In particular, for examples where \eqref{7.3} fails on a set of positive Lebesgue measure in ${\frak{e}}$ (e.g., \cite{Bour,Bour2002,FK2005,FK2006}), \eqref{7.2} may not hold. On the other hand, for examples like the almost Mathieu equation where it is known that \eqref{7.3} holds on all of ${\frak{e}}$ (see \cite{BJ}), \eqref{7.2} holds. The moral is that \eqref{7.2} holds some, but not all, of the time for ergodic Jacobi matrices.
2. \ Here is an interesting example that provides a deterministic problem where one has strong clock behavior but with a density of zeros, $\rho_\infty$, which is not $\rho_{\frak{e}}$. Let $d\mu$ be a measure on $[-2,2]$ of the form ($N$ is a normalization constant) \begin{equation} \label{7.4} d\mu(x) = N^{-1} \biggl[ \chi_{[-1,1]} (x)\, dx + \sum_{n=1}^\infty e^{-n^2} \delta_{x_n}\biggr] \end{equation} where $\{x_n\}$ is a dense subset of $[-2,2]\setminus (-1,1)$. Then, as in Example~5.8 of \cite{EqMC}, $\rho_\infty$ exists and is the equilibrium measure for $[-1,1]$ (not ${\frak{e}}=[-2,2]$). Moreover, the method of \cite{Lub} shows that for $x\in (-1,1)$, \begin{equation} \label{7.5} \frac{1}{n+1}\, K_n (x,x) \to \frac{\rho_\infty (x)}{N^{-1}} \end{equation} Using either the method of this paper (i.e., of \cite{Lub-jdam}) or the method of \cite{Lub}, one proves universality with $\rho_\infty$.
3. \ Example~5.8 of \cite{EqMC} provides a measure with $\sigma_\text{\rm{ess}}(\mu)=[-2,2]$ but $\Sigma_\text{\rm{ac}} = [-2,0]$ and where $\nu_n$ has multiple weak limits, including the equilibrium measures for $[-2,0]$ and for $[-2,2]$. By general principles \cite{StT}, the set of limits is connected, so uncountable. One would like to prove that quasi-clock behavior nevertheless holds for the a.c.\ spectrum of this model as this will provide a key test for the conjecture that quasi-clock behavior always holds on $\Sigma_\text{\rm{ac}}$.
4. \ What has sometimes been called the Schr\"odinger conjecture (see \cite{MMG}) says that for any Jacobi matrix and a.e.\ $x\in\Sigma_\text{\rm{ac}}(\mu)$, we have a solution, $u_n$, with \begin{equation} \label{7.6} 0 < \inf_n \, \abs{u_n} \leq \sup_n \, \abs{u_n} <\infty \end{equation} and $u_{-1}=0$. Invariance of $\Sigma_\text{\rm{ac}}$ under rank one perturbations then proves that for a.e.\ $x\in \Sigma_\text{\rm{ac}}(\mu)$, the transfer matrix is bounded. Thus, Theorem~3 in the strong form would always be applicable.
5. \ While \eqref{6.16} is harmless since it only eliminates at most one $x$, one can ask if \eqref{6.17} holds even if \eqref{6.16} fails. Using periodic problems, it is easy to construct ergodic cases where $\arg u_n^+ =-\pi n/2$, so \eqref{6.29} provides no information on $I(\omega)$. Nevertheless, in these cases, one can show $R(\omega)=I(\omega)=0$. We have not been able to find an example where for a set of positive measure $\omega$'s, $s_{2n}(\omega)=n\pi$, $s_{2n+1}(\omega) = n\pi+\varphi$ with $\varphi$ some fixed point in $(0,\pi)\setminus \{\frac{\pi}{2}\}$. In that case, it might happen that $R(\omega)\neq 0$, $I(\omega)\neq 0$. So it remains open if we need to exclude the $x$ with \eqref{6.16}.
6. \ While we could use soft methods in Section~\ref{s3}, at one point in our research we used an explicit formula for the derivative of $\frac{1}{n} K_n(x_0 + \frac{a}{n}, x_0 + \frac{a}{n})$ as a function of $a$ that may be useful in other contexts, so we want to mention it. We start with a variation of parameters formula (discussed, e.g., in \cite{JL,KKL}) which, in terms of the second kind polynomials of \eqref{1.38}, reads \begin{equation} \label{7.7} p_n(x)-p_n(x_0) = (x-x_0) \sum_{m=0}^{n-1} (p_n(x_0) q_m(x_0)-p_m(x_0) q_n(x_0)) p_m(x) \end{equation} which implies \begin{equation} \label{7.8} p'_n (x_0) =\sum_{m=0}^{n-1} (p_n(x_0) q_m(x_0) - p_m(x_0) q_n(x_0)) p_m(x_0) \end{equation}
Since \begin{equation} \label{7.9}
\left. \frac{d}{da}\, \frac{1}{n}\, K_n \biggl(x_0 + \frac{a}{n}\, , x_0 + \frac{a}{n}\biggr) \right|_{a=0} = \frac{1}{n^2} \sum_{j=0}^n 2p'_j(x_0) p_j(x_0) \end{equation} this leads to \begin{equation} \label{7.10} \begin{split}
&\left. \frac{d}{da}\, \frac{1}{n}\, K_n\biggl(x_0 + \frac{a}{n}, x_0 + \frac{a}{n}\biggr) \right|_{a=0} \\ &\quad = \frac{2}{n^2} \sum_{j=0}^n \biggl[ p_j(x_0)^2 \biggl(\, \sum_{k=0}^j p_k(x_0) q_k(x_0)\biggr) -q_j (x_0) p_j(x_0) \biggl(\, \sum_{k=0}^j p_k(x_0)^2 \biggr) \biggr] \end{split} \end{equation}
As noted in \cite{CD}, if $\frac{1}{n} \sum_{j=0}^n p_j(x_0)^2$ and $\frac{1}{n} \sum_{j=0}^n p_j(x_0) q_j(x_0)$ have limits and $\sup_n [\frac{1}{n} \sum_{j=0}^n q_j(x_0)^2]<\infty$, then the right side of \eqref{7.10} goes to $0$.
\end{document}
\begin{document}
\begin{frontmatter}
\title{Existence of Entropy Solutions to Two-Dimensional Steady Exothermically Reacting Euler Equations} \author{Gui-Qiang Chen, Changguo Xiao \& Yongqian Zhang} \address{Mathematical Institute, University of Oxford,
Oxford, OX2 6GG, UK;\\
School of Mathematical Sciences, Fudan University,
Shanghai 200433, China.\\
E-mail: chengq@maths.ox.ac.uk}
\address{
School of Mathematical Sciences, Fudan University, Shanghai 200433, China. \\
E-mail: 09110180021@fudan.edu.cn; yongqianz@fudan.edu.cn}
\begin{abstract} We are concerned with the global existence of entropy solutions of the two-dimensional steady Euler equations for an ideal gas, which undergoes a one-step exothermic chemical reaction under the Arrhenius-type kinetics. The reaction rate function $\phi(T)$ is assumed to have a positive lower bound. We first consider the Cauchy problem (the initial value problem), that is, we seek a supersonic downstream reacting flow when the incoming flow is supersonic, and establish the global existence of entropy solutions when the total variation of the initial data is sufficiently small.
Then we analyze the problem of steady supersonic, exothermically reacting Euler flow past a Lipschitz wedge, generating an additional detonation wave attached to the wedge vertex, which can then be formulated as an initial-boundary value problem. We establish the global existence of entropy solutions containing the additional detonation wave (weak or strong, determined by the wedge angle at the wedge vertex) when the total variation of both the slope of the wedge boundary and the incoming flow is suitably small.
The downstream asymptotic behavior of the global solutions is also obtained.
\noindent \textit{2010 Mathematics Subject Classification}: $\,$ 35L65; 76N10; 35B40; 35A01; 35L45; 35L50; 35L67; 76V05
\end{abstract}
\begin{keyword} Combustion, detonation wave, stability, Glimm scheme, fractional-step, supersonic flow, reacting Euler flow, Riemann problem, entropy solutions, two-dimensional, steady flow, asymptotic behavior. \end{keyword} \end{frontmatter}
\section{Introduction} We are concerned with
the two-dimensional steady supersonic Euler flow of an exothermically reacting ideal gas, which is governed by \begin{eqnarray} (\rho u)_x+(\rho v)_y = 0,\label{d1}\\ (\rho u^2+p)_x+(\rho uv)_y = 0,\\ (\rho uv)_x+(\rho v^2+p)_y = 0,\\ \big((\rho E+p)u\big)_x+\big((\rho E+p)v\big)_y = 0,\\
(\rho uZ)_x+(\rho vZ)_y = -\rho Z\phi(T). \label{d2} \end{eqnarray} Here $(u,v)$ is the velocity, $p$ the scalar pressure, $\rho$ the density, $Z$ the fraction of unburned gas in the mixture, $\phi(T)$ the reaction rate, $q$ the specific binding energy of unburned gas, and $E=\frac{1}{2}(u^2+v^2)+e(\rho,p)+qZ$ the specific total energy with the specific internal energy $e$ that is a given function of $(\rho,p)$ defined through thermodynamical relations.
For an ideal gas, \begin{equation} p=R\rho T,\quad e=c_vT,\quad \gamma=1+\frac {R}{c_v}>1, \end{equation} where $R$ and $c_v$ are positive constants, and $\gamma$ is the adiabatic exponent.
We identify $c_v+R=c_p$ as the specific heat at constant pressure.
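As a quick numerical illustration of these thermodynamic relations, the following sketch checks the ideal-gas law, the adiabatic exponent, and the equivalent form $c^2=\gamma RT$ of the sonic speed; the sample constants (air-like) are chosen by us and are not values from the paper.

```python
# Illustrative check of the ideal-gas relations:
# p = R*rho*T, e = c_v*T, gamma = 1 + R/c_v, c_p = c_v + R,
# and the sonic speed c = sqrt(gamma*p/rho). Sample values only.
import math

R, c_v = 287.0, 718.0           # hypothetical gas constants (air-like)
gamma = 1.0 + R / c_v           # adiabatic exponent, here about 1.4
c_p = c_v + R                   # specific heat at constant pressure

rho, T = 1.2, 300.0             # hypothetical density and temperature
p = R * rho * T                 # ideal-gas law
e = c_v * T                     # specific internal energy
c = math.sqrt(gamma * p / rho)  # local sonic speed

assert gamma > 1.0 and c_p == c_v + R
# equivalent form of the sonic speed: c^2 = gamma*R*T
assert abs(c**2 - gamma * R * T) < 1e-6
```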
\par We assume for simplicity that the specific heats and molecular weights of the reactant and product gases are the same and that the reaction rate function $\phi$ is monotonically increasing and Lipschitz continuous. In addition, inadmissible discontinuous solutions are eliminated by requiring the following entropy condition: \begin{equation} (\rho uS)_x+(\rho vS)_y \ge \frac{q\rho Z \phi(T)}{T}. \end{equation}
\par We first consider the Cauchy problem (the initial value problem) for \eqref{d1}--\eqref{d2} in the region $\{x\ge 0, y\in \mathbb{R}\}$, with initial incoming flow (initial data): \begin{equation} (u,v,p,\rho,Z)(0,y)=(u_0,v_0,p_0,\rho_0,Z_0)(y),\qquad \text{$y \in \mathbb{R}$}.\label{d3} \end{equation} We assume that $u_0(y),v_0(y),p_0(y),\rho_0(y)$, and $Z_0(y)$ are bounded and have bounded total variation with $Z_0(-\infty)=\lim_{y \to -\infty}Z_0(y)=0$. We further assume that there are positive constants $u', \rho '$, and $T'$ such that \begin{equation}\label{d3-a} u_0 > c_0\ge u'>0,\qquad \rho_0\ge \rho '>0,\qquad T_0\ge T'>0, \end{equation} where $c=\sqrt{\frac{\gamma p}{\rho}}$ is the local sonic speed. We make this assumption on the initial data to ensure that the flow is supersonic ({\it i.e.} $u^2+v^2>c^2$).
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(60,60)(0,-30) \linethickness{1pt} \put(0,0){\vector(1,0){60}} \put(0,-30){\vector(0,1){60}} \thinlines \put(0,8){\line(1,1){20}} \put(0,-4){\line(1,1){20}} \put(0,-16){\line(1,1){20}} \put(0,-28){\line(1,1){20}} \thinlines \put(-4,-1){$O$} \put(61,-2){$x$} \put(-3,28){$y$} \put(-15,14){$u_0>c_0$} \put(-15,8){$v_0$} \put(-15,2){$p_0$} \put(-15,-4){$\rho_0$} \put(-15,-10){$Z_0$} \put(-20,-34){Fig.1. Supersonic Euler flow through the left boundary $x=0$ } \end{picture} \end{center}
We assume the initial data to be such that the reaction rate function $\phi(T)$ never vanishes, so that there is a positive minimum value $\Phi:=\phi(T^{\prime})>0$. In a sense, this is a very realistic condition. Typically, $\phi (T)$ has the Arrhenius form: \begin{equation} \phi(T)=T^{\alpha}e^{-\frac{E}{RT}}, \label{arr} \end{equation} where $\alpha$ is a positive constant; this rate vanishes only at absolute zero temperature.
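A minimal numerical sketch of the Arrhenius form \eqref{arr}: for $\alpha, E, R>0$ the rate is positive and increasing in $T$, so on $T\ge T'$ it admits the positive lower bound $\Phi=\phi(T')$ used above. All parameter values below are hypothetical.

```python
# Arrhenius-type rate phi(T) = T^alpha * exp(-E/(R*T)); it vanishes only
# as T -> 0+, and is increasing in T since d/dT log(phi) = alpha/T + E/(R*T^2) > 0.
# Parameter values are hypothetical.
import math

alpha, E, R = 1.0, 8000.0, 8.314   # sample activation energy and gas constant

def phi(T):
    """Arrhenius reaction rate."""
    return T**alpha * math.exp(-E / (R * T))

T_prime = 250.0                    # assumed positive lower temperature bound
Phi = phi(T_prime)                 # uniform positive lower bound of the rate

assert Phi > 0.0
# monotonicity on a sample grid above T'
Ts = [250.0, 300.0, 400.0, 600.0]
assert all(phi(a) < phi(b) for a, b in zip(Ts, Ts[1:]))
```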
We make this assumption in order to obtain the uniform decay of the reactant to zero. Although the total variation of the solution may very well increase while the reaction is active, the reaction must eventually die out along the flow trajectories. Consequently, the increase in total variation can be estimated rigorously.
\par In Chen-Wagner \cite{chen1}, the large-time existence of entropy solutions to the Cauchy problem has been established for the time-dependent equations of planar flow of an exothermically reacting ideal gas.
\iffalse \begin{eqnarray*} \rho_t+(\rho u)_x=0,\\ (\rho u)_t+(\rho u^2+p)_x=0,\\ (\rho E)_t+((\rho E+p)u)_x=q\rho z\phi(T),\\ (\rho z)_t+(\rho u z)_x=-\rho z\phi(T), \end{eqnarray*} with initial data
$(\rho,u,E,z)(x,0)=(\rho_0,u_0,E_0,z_0)(x)$.
\fi
The admissible total variation of the initial data is bounded in terms of the parameter $\epsilon=\gamma-1$ and grows arbitrarily large as $\epsilon\to 0$, the limiting case being the isothermal gas. Global entropy solutions are obtained by using a fractional-step scheme based on the Glimm scheme.
In this paper, we first establish a global existence theory for entropy solutions of the Cauchy problem for two-dimensional, exothermically reacting steady Euler equations by further developing the Glimm scheme, under the condition that the total variation of the initial data in \eqref{d3} is small.
Then this approach is further developed for solving the supersonic reacting Euler flow past Lipschitz wedges. For a non-reacting supersonic flow past a straight wedge, an attached plane shock is generated at the wedge vertex. When the supersonic flow is governed by the exothermically reacting steady Euler equations, the attached detonation wave is no longer a plane wave even for the straight wedge, whose strength (weak or strong) is determined by the wedge angle and the incoming flow. Nevertheless, we establish that, when the total variation of both the incoming supersonic flow and the slope of the wedge boundary is suitably small, there exists a global entropy solution containing the (weak or strong) detonation wave.
The downstream asymptotic behavior of entropy solutions is also obtained.
The organization of this paper is as follows. In Section 2, we discuss some basic features of the exothermically reacting Euler equations \eqref{d1}--\eqref{d2}. The Glimm fractional-step scheme is described for the Cauchy problem \eqref{d3} for system \eqref{d1}--\eqref{d2} in Section 3. In Section 4, we establish uniform bounds on the total variation in the $y$-direction of the Glimm fractional-step approximate solutions for the Cauchy problem \eqref{d3}.
In Section 5, we establish uniform bounds on the total variation of the Glimm fractional-step approximate solutions in the $y$--variable for the initial-boundary value problem \eqref{dd3}--\eqref{dd4} for \eqref{d1}--\eqref{d2} concerning the supersonic reacting Euler flow past Lipschitz wedges, when the wedge angle at the wedge vertex
is small. In Section 6, the convergence of approximate solutions to an entropy solution is established for both the Cauchy problem \eqref{d3} and the initial-boundary value problem \eqref{dd3}--\eqref{dd4} for \eqref{d1}--\eqref{d2}. The downstream asymptotic behavior of entropy solutions is also clarified in Section 7. In Section 8, we extend the results in Sections 5--6 for the case of small wedge angle to the case of large wedge angle, for which the entropy solution contains a strong detonation wave generated between the incoming fluid and the wedge boundary at the wedge vertex.
\section{Basic Features of the Exothermically Reacting Euler Equations}
In this section, we discuss some basic features of system \eqref{d1}--\eqref{d2}.
\subsection{\textbf{Euler equations}}
\par System \eqref{d1}--\eqref{d2} can be rewritten in the following form: \begin{equation} W(U)_x+H(U)_y=G(U), \label{2} \end{equation} with $U=(u,v,p,\rho,Z)$, where \begin{eqnarray*} && W(U)=(\rho u,\rho u^2+p,\rho uv,\rho u(\bar{h}+\frac{u^2+v^2}{2}),\rho uZ),\\[1.5mm]
&& H(U)=(\rho v,\rho uv,\rho v^2+p,\rho v(\bar{h}+\frac{u^2+v^2}{2}),\rho vZ),\\[1.5mm]
&& G(U)=(0,0,0, q\rho \phi(T)Z,-\rho \phi(T)Z) \end{eqnarray*} with $\bar{h}=\frac {\gamma p}{(\gamma -1)\rho}$.
\par In the case when $G(U)$ is identically zero, system \eqref{2} becomes a system of conservation laws: \begin{equation} W(U)_x+H(U)_y=0. \label{3} \end{equation} For a smooth solution $U(x,y)$, system \eqref{3} is equivalent to \begin{equation} \nabla_UW(U)U_x+\nabla_UH(U)U_y=0. \end{equation} Then the eigenvalues of \eqref{3} are the roots of the fifth-order polynomial in $\lambda$: \begin{equation} \mathrm{det}(\lambda\nabla_UW(U)-\nabla_UH(U)), \end{equation} that is, the solutions of the equation: \begin{equation} (v-\lambda u)^3\big((v-\lambda u)^2-c^2(1+\lambda ^2)\big)=0, \end{equation} where $c=\sqrt{\frac{\gamma p}{\rho}}$ is the sonic speed.
If the flow is supersonic ({\it i.e.} $u^2+v^2>c^2$), system \eqref{2} is hyperbolic. In particular, when $u>c$, the system has five eigenvalues in the $x$-direction: \begin{equation} \lambda _i=\frac{v}{u}, \quad i=2,3,4; \qquad\quad\, \lambda_j=\frac{uv+(-1)^{\frac{j+3}{4}} c\sqrt{u^2+v^2-c^2}}{u^2-c^2}, \quad j=1,5, \end{equation} and the corresponding linearly independent eigenvectors: \begin{equation} r_j=\kappa_j(-\lambda_j,1,\rho(\lambda_j u-v), \frac{\rho (\lambda_ju-v)}{c^2},0)^\top, \quad j=1,5; \end{equation} \begin{equation} r_2=(u,v,0,0,0)^\top, \quad r_3=(0,0,0,\rho,0)^\top, \quad r_4=(0,0,0,0,\frac{1}{\rho u})^\top, \end{equation} where $\kappa_j$ are chosen so that $r_j\cdot\nabla \lambda_j=1$ since the $j$th-characteristic fields are genuinely nonlinear, $j=1,5$. Note that $r_j\cdot\nabla \lambda _j=0,j=2,3,4$, that is, these characteristic fields are always linearly degenerate.
In particular, at a constant state $\tilde{U}=(\tilde{u},0,\tilde{p},\tilde{\rho}, \tilde{Z})$, $$ \lambda_2(\tilde{U})=\lambda_3(\tilde{U})=\lambda_4(\tilde{U})=0, \qquad \lambda_1(\tilde{U})=-\frac{\tilde{c}}{\sqrt{\tilde{u}^2-\tilde{c}^2}}=-\lambda_5(\tilde{U})<0. $$
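The following sketch evaluates the characteristic speeds at a hypothetical supersonic state and checks that $\lambda_1<\lambda_{2,3,4}=v/u<\lambda_5$ and that $\lambda_{1,5}$ solve $(v-\lambda u)^2=c^2(1+\lambda^2)$; the state values are ours, not data from the paper.

```python
# Characteristic speeds in the x-direction for a supersonic state (u > c):
# lambda_{2,3,4} = v/u and
# lambda_{1,5} = (u*v -/+ c*sqrt(u^2+v^2-c^2)) / (u^2-c^2).
# The sample state below is hypothetical.
import math

def eigenvalues(u, v, c):
    disc = math.sqrt(u*u + v*v - c*c)   # real since the flow is supersonic
    lam1 = (u*v - c*disc) / (u*u - c*c)
    lam5 = (u*v + c*disc) / (u*u - c*c)
    return lam1, v/u, lam5

u, v, c = 2.0, 0.5, 1.0                 # sample supersonic state with u > c
lam1, lam234, lam5 = eigenvalues(u, v, c)

assert lam1 < lam234 < lam5             # nonlinear fields bracket v/u
# each lambda_{1,5} solves (v - lam*u)^2 = c^2 (1 + lam^2)
for lam in (lam1, lam5):
    assert abs((v - lam*u)**2 - c*c*(1 + lam*lam)) < 1e-12
```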
\begin{definition}[Entropy Solutions] A function $U=U(x,y)\in BV(\mathbb{R}^+\times \mathbb{R})$ is called an entropy solution of problem \eqref{d3} for system \eqref{d1}--\eqref{d2} provided that \begin{enumerate} \item[\rm (i)] $U$ is a weak solution of problem \eqref{d3} for system \eqref{d1}--\eqref{d2}, that is, \begin{equation} \int_{-\infty}^{\infty}\int_{0}^{\infty} \big(W(U)\phi_x+H(U)\phi_y+G(U)\phi\big)\, dxdy+\int_{-\infty}^{\infty}W(U_0(y))\phi(0,y)\, dy= 0 \end{equation} for any $\phi \in C_0^{\infty}([0,\infty)\times(-\infty,\infty))$;
\item[\rm (ii)] {For any convex entropy pair $(\eta,q)$ with respect to $W(U)$, the following inequality} \begin{equation} \eta(W(U))_x+q(W(U))_y\le \nabla_W\eta(W(U))G(U) \end{equation} holds in the sense of distributions, that is, \begin{eqnarray} \int_{-\infty}^{\infty}\int_{0}^{\infty}\big(\eta(W(U))\phi_x+q(W(U))\phi_y+\nabla_W \eta(W(U))G(U)\phi\big)\,dxdy\\ +\int_{-\infty}^{\infty}\eta(W(U_0(y)))\phi(0,y)\, dy\ge 0 \end{eqnarray} for any $\phi \in C_0^{\infty}([0,\infty)\times(-\infty,\infty))$ and $\phi(x,y)\ge 0$. \end{enumerate} \end{definition}
\begin{remark} In particular, $\eta(W)=-\rho uS$ is an entropy which is convex with respect to $W$, while $q(W)=-\rho vS$ is the corresponding entropy flux, when $u>c>0$. \end{remark}
\par As in \cite{chen1}, if we rewrite system \eqref{3} in Lagrangian coordinates: \begin{eqnarray} (x', m)=(x, m(x,y)) \end{eqnarray} with
$\mathrm{d}m=\rho u \mathrm{d}y-\rho v \mathrm{d}x$,
then the fifth equation in \eqref{3} becomes \begin{equation} Z_{x'}=0. \end{equation} It states that the $Z$-component is decoupled from $(u,v,p,\rho)^\top$ in the solution of the non-reacting Riemann problem.
\subsection{\textbf{Wave curves in the phase space}}
We now analyze some basic properties of nonlinear waves. We focus on the case when $u>c>0$ in the state space. We seek self-similar solutions to system \eqref{3}:
\begin{equation} (u,v,p,\rho,Z)(x,y)=(u,v,p, \rho,Z)(\xi),\quad \xi =\frac {y}{x}, \end{equation} which connect to a fixed constant state $U_0=(u_0,v_0,p_0,\rho_0,Z_0)$. Then we have \begin{equation} \mathrm{det}\big(\xi \nabla_UW(U)-\nabla_UH(U)\big)=0, \label{4} \end{equation} which implies \begin{equation} \xi =\lambda_i(U)=\frac{v}{u}, \,\,\, i=2,3,4;\quad \text{or}\quad \xi =\lambda_j(U),
\,\,\, j=1,5. \end{equation}
\par Plugging $\xi=\lambda_i(U), i=2,3,4$, into \eqref{4}, we obtain \begin{equation*} dp=0,\qquad vdu-udv=0, \end{equation*} which yields the contact discontinuity curves $C_i(U_0)$ in the phase space: \begin{equation*} C_i(U_0):\,\, p=p_0,\, w=\frac{v}{u}=\frac{v_0}{u_0}, \qquad i=2,3,4. \end{equation*} More precisely, we have \begin{equation} C_2(U_0):\,\, U=(u_0e^{\sigma_2},v_0e^{\sigma_2},p_0,\rho_0,Z_0)^\top \label{a}, \end{equation} with strength $\sigma_2$ and slope $\frac{v_0}{u_0}$, which is determined by
\begin{equation}
\left\{
\begin{array}{ll}
\frac{dU}{d\sigma_2}=r_2(U),\\ [2mm]
U|_{\sigma_2=0}=U_0;
\end{array} \right.
\end{equation}
and \begin{equation} C_3(U_0):\,\, U=(u_0,v_0,p_0,\rho_0e^{\sigma_3},Z_0)^\top,\label{b} \end{equation} with strength $\sigma_3$ and slope $\frac{v_0}{u_0}$, which is determined by \begin{equation} \left\{
\begin{array}{ll}
\frac{dU}{d\sigma_3}=r_3(U),\\[2mm]
U|_{\sigma_3=0}=U_0;
\end{array} \right.
\end{equation}
and \begin{equation} C_4(U_0):\,\, U=(u_0,v_0,p_0,\rho_0,Z_0+\frac{\sigma_4}{\rho_0u_0})^\top, \label{c} \end{equation}
with strength $\sigma_4$ and slope $\frac{v_0}{u_0}$, which is determined by \begin{equation} \left\{
\begin{array}{ll}
\frac{dU}{d\sigma_4}=r_4(U),\\[2mm]
U|_{\sigma_4=0}=U_0.
\end{array} \right.
\end{equation} We can see that $\sigma_4$ is the jump of
$w_5=\rho uZ$ across the corresponding wave in the Riemann problem.
\par Plugging $\xi=\lambda_j(U), j=1,5$, into \eqref{4}, we obtain the $j$-th rarefaction wave curve $R_j(U_0)$, $j=1,5$, in the phase space through $U_0$:
\begin{equation}
R_j(U_0):\,\, dp=c^2d\rho,\,\, du=-\lambda_jdv,\,\, \rho(\lambda_ju-v)dv=dp,\,\, dZ=0,\qquad j=1,5.
\end{equation}
\par Now we consider discontinuous solutions so that the equations in \eqref{3} are satisfied in the distributional sense. This implies that the following Rankine-Hugoniot conditions hold along the discontinuity with speed $s$, which connects to a state $U_0=(u_0,v_0,p_0,\rho_0,Z_0)$:
\begin{alignat}{2}
s[\rho u]&=[\rho v], \label{5}\\[1.5mm]
s[\rho u^2+p]&=[\rho uv], \\[1.5mm]
s[\rho uv]&=[\rho v^2+p], \\[1.5mm]
s[\rho u(\bar h+\frac{u^2+v^2}{2})]&=[\rho v(\bar h+\frac{u^2+v^2}{2})],\\[1.5mm]
s[\rho uZ]&=[\rho vZ], \label{6}
\end{alignat} where the jump symbol $[\cdot]$ stands for the value of the quantity in the front state minus that in the back state. Then we have \begin{equation*}
(v_0-su_0)^3\big((v_0-su_0)^2-\bar{c}^2(1+s^2)\big)=0, \end{equation*} where $\bar{c}^2=\frac{c_0^2}{b}\frac{\rho}{\rho_0}$ and $b=\frac{\gamma+1}{2}-\frac{\gamma-1}{2}\frac{\rho}{\rho_0}$. This implies \begin{equation} s=s_i=\frac{v_0}{u_0}, \qquad i=2,3,4,
\end{equation}
or \begin{equation} s=s_j=\frac{u_0v_0+(-1)^{\frac{j+3}{4}}\bar{c}\sqrt{u_0^2+v_0^2-\bar{c}^2}}{u_0^2-\bar{c}^2},\qquad j=1,5, \end{equation} where $u_0>\bar{c}$ for small shocks.
\par Plugging $s_i$, $i=2,3,4$, into \eqref{5}--\eqref{6}, we obtain the same $C_i(U_0)$, $i=2,3,4$, as defined in \eqref{a}, \eqref{b}, and \eqref{c}; while plugging $s_j$, $j=1,5$, into \eqref{5}--\eqref{6}, we obtain the $j$th shock wave curve $S_j(U_0)$, $j=1,5$, through $U_0$: \begin{equation} S_j(U_0):\, \, [p]=\frac{c_0^2}{b}[\rho],\,\, [u]=-s_j[v], \,\, \rho_0(s_ju_0-v_0)[v]=[p],\,\, [Z]=0. \end{equation} Note that the shock wave curve $S_j(U_0)$ has second-order contact with $R_j(U_0)$ at $U_0$.
\par Following Lax \cite{lax}, we can parameterize any physically admissible wave curve in a neighborhood $O_{\epsilon}(\tilde{U})$ of a constant state $\tilde{U}$ by $\sigma_j \mapsto \Phi_j(\sigma_j;U_b)$, with $\Phi_j \in C^2$, $\Phi_j|_{\sigma_j=0}=U_b$, and $\frac{\partial \Phi_j}{\partial \sigma_j}|_{\sigma_j=0}=r_j(U_b)$. Set $$ \Phi(\sigma_5,\sigma_4,\sigma_3,\sigma_2,\sigma_1;U_b)=\Phi_5(\sigma_5,\Phi_4(\sigma_4,\Phi_3(\sigma_3,\Phi_2(\sigma_2, \Phi_1(\sigma_1;U_b))))). $$ We denote $\Psi_j(\sigma_j,W(U_b))=W(\Phi_j(\sigma_j;U_b))$ and \begin{eqnarray*} \Psi(\sigma_5,\sigma_4,\sigma_3,\sigma_2,\sigma_1;W(U_b)) &=&\Psi_5(\sigma_5,\Psi_4(\sigma_4,\Psi_3(\sigma_3,\Psi_2(\sigma_2,\Psi_1(\sigma_1 ;W(U_b))))))\\ &=&W(\Phi(\sigma_5,\sigma_4,\sigma_3,\sigma_2,\sigma_1;U_b)). \end{eqnarray*}
\par Finally, we denote \begin{equation} \boldsymbol{\sigma}=(\sigma_1,\sigma_2,\sigma_3,\sigma_4,\sigma_5), \end{equation}
and
\begin{equation}
\Psi(\boldsymbol{\sigma},W(U_b))=\Psi_5(\sigma_5,\Psi_4(\sigma_4,\Psi_3(\sigma_3,\Psi_2(\sigma_2,\Psi_1(\sigma_1,W(U_b)))))). \end{equation}
\section{The Glimm Fractional-Step Scheme} We employ a fractional-step scheme for the inhomogeneous system \eqref{2} as described in \cite{chen1} based on the Glimm scheme. As before, we regard the $x$-direction as the time-like direction.
\par Choose mesh lengths $h>0$ and $l>0$ in the $x$-direction and $y$-direction, respectively, such that the Courant-Friedrichs-Lewy condition holds: \begin{equation}
\Lambda=\max_{1\le j \le 5}|\lambda_j(U)|\le \frac{l}{2h}. \label{c2} \end{equation}
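Concretely, given the mesh length $l$ and a uniform bound $\Lambda$ on the characteristic speeds, any $h\le l/(2\Lambda)$ satisfies \eqref{c2}. The sketch below uses hypothetical numbers.

```python
# Choosing mesh lengths for the CFL condition (c2):
# Lambda = max_j |lambda_j(U)| <= l / (2h).
# Sample numbers are hypothetical.
Lambda = 1.25           # assumed uniform bound on |lambda_j| over the states
l = 0.1                 # mesh length in the y-direction
h = l / (2.0 * Lambda)  # largest admissible mesh length in the x-direction

assert Lambda <= l / (2.0 * h) + 1e-15
# halving h keeps the condition satisfied with room to spare
assert Lambda <= l / (2.0 * (h / 2.0))
```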
\par Partition $\mathbb{R}^+$ by the sequence $x_k=kh,k\in \mathbb{Z}^+$, and partition $\mathbb{R}$ into cells with the $j$th cell centered at \begin{equation*}
y_j=jl, \qquad j=0,\pm 1,\pm 2, \cdots. \end{equation*} We begin by approximating the initial data $U_0(y)$ by a function $U^h(0,y)$, which is constant for $y$ in the interval $[y_{j-1}, y_{j+1}]$ for $j$ even and converges to $U_0(y)$ both pointwise a.e. and in $L^1$ on any bounded interval as $h\to 0$. Choose a random sequence $\theta_k, k=0,1,2,\cdots$, in the interval $(-1,1)$ with the uniform probability distribution.
\par We then construct the approximate solution $W(U^h(x,y))$ as follows:
\par Assume that $W(U^h(x,y))$ is defined for $x<kh$. Then we construct the approximate solution $W(U^h(x,y))$ in the strip $[kh,(k+1)h)\times (-\infty, \infty)$ as follows:
\par \textit{Step 1 (Random step):} Define \begin{eqnarray*} &&W(U_j^k)=W(U^h(kh-,(j+\theta_k)l)),\\[1.5mm] &&W(U^h(kh+0,y))\equiv W(U_j^k), \qquad (j-1)l\le y <(j+1)l, \end{eqnarray*} where $j+k$ is even, and $\theta_k$ is the $k$th element of the random sequence $(\theta_1, \cdots,\theta_k,\cdots)$.
\par \textit{Step 2 (Solving the Riemann problem):} In the strip $[kh,(k+1)h)\times (-\infty,\infty)$, we solve the following Riemann problem in each domain $(kh,(k+1)h)\times ((j-1)l,(j+1)l)$: \begin{eqnarray*} \left\{ \begin{array}{ll} W(U)_{\tau}+H(U)_y=0,\\[2mm]
W(U)|_{\tau=0}= \begin{cases} W(U_{j-1}^k) & y< jl,\\[1.5mm] W(U_{j+1}^k) & y> jl, \end{cases} \end{array} \right. \end{eqnarray*} where $j+k$ is odd and $\tau =x-kh$. The resulting solution is denoted as $W(U_0^h(x,y))$. \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(48,60)(0,-60) \linethickness{1pt} \multiput(0,0)(0,-3){16}{\line(0,-2){2}} \multiput(24,0)(0,-3){16}{\line(0,-2){2}} \put(0,-24){\line(5,4){20}} \thicklines \put(0,-24){\line(4,1){20}} \thinlines \put(0,-24){\line(3,-1){20}} \put(8,-4){$W(U_{j+1}^k)$} \put(8,-40){$W(U_{j-1}^k)$} \put(12,-10){$\sigma_5$} \put(14,-17){$\sigma_{2(3,4)}$} \put(15,-28){$\sigma_1$} \put(-4,-60){Fig. 2. Riemann problem} \put(-4,-52){$x=kh$} \put(20,-52){$(k+1)h$} \put(-14,-26){$(kh,jl)$} \end{picture} \end{center}
\par \textit{Step 3 (Reacting step):} Define \begin{equation*} W(U^h(x,y))=W(U_0^h(x,y))+G(U_0^h(x,y))(x-kh), \end{equation*} where $G(U)=(0,0,0,q\rho Z\phi(T),-\rho Z\phi(T))$ as before and $kh\le x<(k+1)h$.
\par Therefore, we can construct the approximate solution $W(U^h(x,y))$ in the strip $[kh,(k+1)h)\times (-\infty, \infty)$ as long as the Riemann problems in Step 2 are solvable.
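The three steps above can be sketched on the scalar model problem $z_x + a\,z_y = -\Phi z$, treating $x$ as time-like; for pure advection the Riemann solution is a shift of the profile, so the random step samples a shifted cell value, and the reacting step applies the first-order update of Step 3. This is only a toy analogue (the Riemann solutions of \eqref{3} are vector-valued), with all parameters hypothetical.

```python
# Toy fractional-step random-choice (Glimm) scheme for z_x + a*z_y = -Phi*z
# on a periodic grid. Parameters are illustrative.
import random

def glimm_step(z, a, Phi, h, l, theta):
    # Steps 1+2: sample the Riemann (advection) solution at the random point
    # y_j + theta*l; for pure advection that value is the old profile at
    # y_j + theta*l - a*h, i.e., a shift by round(theta - a*h/l) cells.
    shift = int(round(theta - a * h / l))
    zs = [z[(j + shift) % len(z)] for j in range(len(z))]
    # Step 3: reacting step, first-order analogue of W <- W + G(W)*h.
    return [val * (1.0 - Phi * h) for val in zs]

random.seed(0)
a, Phi, l = 1.0, 2.0, 0.1
h = l / (2.0 * abs(a))                   # CFL: |a| <= l/(2h)
z = [1.0 if 4 <= j < 8 else 0.0 for j in range(16)]
m0 = sum(z)
for _ in range(20):
    z = glimm_step(z, a, Phi, h, l, random.uniform(-1.0, 1.0))

# the reactant decays roughly like (1 - Phi*h)^k <= e^{-Phi*k*h}
assert sum(z) < m0
assert all(0.0 <= val <= 1.0 for val in z)
```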
\section{$BV$--Stability}
\par In this section, we estimate the approximate solutions $W(U^h(x,y))$ in the total variation norm and prove that the total variation of the approximate solutions $W(U^h(x,y))$ in $y$, for any fixed $x$, is uniformly bounded with respect to the mesh length $h$. We measure the total variation of approximate solutions by using the sum of the absolute values of the strengths of waves in the solution of each Riemann problem in Step 2 as in Section 3.
\par We define a weighted $l_1$--norm \begin{equation}\label{4.1a}
\|v\|_1=|v_1|+|v_2|+|v_3|+M|v_4|+|v_5| \qquad \text{for a vector $v=(v_1,v_2,v_3,v_4,v_5) \in \mathbb{R}^5$}, \end{equation} where $M>0$ is a constant to be determined later.
\par We define another norm \begin{equation}
\|v\|=|v_1|+|v_2|+|v_3|+|v_4| \qquad \text{for a vector $v=(v_1,v_2,v_3,v_4)\in \mathbb{R}^4$}. \end{equation}
\par Let $U_g\equiv (u,v,p,\rho)$ and $W_g\equiv (\rho u,\rho u^2+p,\rho uv, \rho u(\bar{h}+\frac{u^2+v^2}{2}))$ denote the first four components of $U$ and $W$, respectively.
\subsection{\textbf{Interaction estimates on the non-reacting step}}
\iffalse We first estimate the interactions among weak waves. We will use the following elementary identities, whose proof is straightforward: \begin{enumerate} \item[\rm (i)] If $f\in C^1(\mathbb{R})$, then for any $x\in \mathbb{R}$, \begin{equation} f(x)-f(0)=x\int_0^1f_x(rx)\mathrm{d}r. \end{equation} \item[\rm (ii)] If $f\in C^2(\mathbb{R}^2)$, then for any $(x,y)\in \mathbb{R}^2$, \begin{equation} f(x,y)-f(x,0)-f(0,y)+f(0,0)=xy\int_0^1\int_0^1f_{xy}(rx,sy)\mathrm{d}r\mathrm{d}s. \end{equation} \end{enumerate} \fi
The interaction estimate for \eqref{3} is similar to the argument for Proposition 3.1 in \cite{chen2}.
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(50,74)(-2,-4) \linethickness{1pt} \multiput(0,0)(0,3){24}{\line(0,2){2}} \multiput(24,0)(0,3){24}{\line(0,2){2}} \multiput(48,0)(0,3){24}{\line(0,2){2}} \put(0,54){\line(3,2){20}} \put(0,54){\line(3,-1){20}} \put(0,54){\line(5,2){20}} \put(0,18){\line(1,1){20}} \put(0,18){\line(5,3){20}} \put(0,18){\line(4,-1){20}} \put(24,40){\line(5,-1){20}} \put(24,40){\line(3,2){20}} \put(24,40){\line(5,1){20}} \put(2,70){$U_a$} \put(2,30){$U_m$} \put(2,5){$U_b$} \put(36,15){$U_b$} \put(36,60){$U_a$} \put(12,66){$\beta_5$} \put(12,56){$\beta_{2(3,4)}$} \put(12,46){$\beta_1$} \put(12,36){$\alpha_5$} \put(12,23){$\alpha_{2(3,4)}$} \put(12,11){$\alpha_1$} \put(40,54){$\gamma_5$} \put(38,46){$\gamma_{2(3,4)}$} \put(40,38){$\gamma_1$} \put(-1,-6){Fig. 7. Weak wave interaction} \end{picture} \end{center}
\begin{lemma}\label{4.1b} Suppose that $U_b, U_m$, and $U_a$ are three states in a small neighborhood $O_{\varepsilon}(U_+)$ with \begin{equation*} \{U_b, U_m\}=(\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_5),\{U_m,U_a\}=(\beta_1,\beta_2,\beta_3,\beta_4,\beta_5), \{U_b,U_a\}=(\gamma_1,\gamma_2,\gamma_3,\gamma_4,\gamma_5). \end{equation*} Then \begin{equation*}\label{33} \gamma_i=\alpha_i+\beta_i+O(1)\Delta(\alpha,\beta), \end{equation*} where
$\Delta(\alpha,\beta)=|\alpha_5|(|\beta_1|+|\beta_2|+|\beta_3|+|\beta_4|)+|\beta_1|(|\alpha_2|+|\alpha_3|+|\alpha_4|) +\sum_{j=1,5}\Delta_j(\alpha,\beta)$ with \begin{equation*} \Delta_j(\alpha,\beta)=\left\{
\begin{array}{ll}
0,&\quad \mbox{$\alpha_j \ge 0$ and $\beta_j \ge 0$},\\
|\alpha_j||\beta_j|,&\quad \mbox{otherwise.}
\end{array}
\right. \end{equation*} \end{lemma}
\subsection{\textbf{Estimates on the reacting step}}
\par For convenience, we use $\tilde{U}$ to denote the value of $U$ before reaction, while $U$ after reaction. That is, \begin{equation*} W(U(x,y))=W(\tilde{U}(x,y))+G(\tilde{U}(x,y))\tau, \end{equation*} where $\tau = x-kh$ and $kh\le x <(k+1)h$.
\begin{lemma} Let \begin{equation} \boldsymbol{\sigma}=(\sigma_1,\sigma_2,\sigma_3,\sigma_4,\sigma_5)=B(W(\tilde{U}_b),W(\tilde{U}_a)) \end{equation} be the vector of signed wave strengths in the solution of the Riemann problem with Riemann data $(W(\tilde{U}_b),W(\tilde{U}_a))$. Let \begin{equation} \Gamma(W(\tilde{U}_b),\boldsymbol{\sigma},h)=B(W(U_b),W(U_a)), \label{a4} \end{equation} where $W(U_b)=W(\tilde{U}_b)+G(\tilde{U}_b)h$ and $W(U_a)=W(\tilde{U}_a)+G(\tilde{U}_a)h$. Then \begin{equation}
\Gamma(W(\tilde{U}_b),\boldsymbol{\sigma},h)=\boldsymbol{\sigma}+O(||\boldsymbol{\sigma}||_1)h. \end{equation} \end{lemma}
\par Lemma 4.2 implies that the total variation of the fractional-step approximate solutions increases at most at an exponential rate.
\begin{lemma}
$\|\Gamma(W(\tilde{U}_b),\boldsymbol{\sigma},h)
-\big(\boldsymbol{\sigma}+\frac{\partial \Gamma}{\partial h}(W(\tilde{U}_b),\boldsymbol{\sigma},0)h\big)\|_1
\le C\|\boldsymbol{\sigma}\|_1\frac{h^2}{2}. $
\end{lemma}
\par Lemma 4.3 shows that we can estimate the increase in the total variation for the reacting step by calculating the first derivatives of the solution operator for the Riemann problem.
\par The proof of Lemmas 4.2--4.3 can be found in \cite{chen1}.
In particular, for \eqref{d1}--\eqref{d2}, we need to analyze the reacting step, which takes the form: \begin{equation*} W(U^h(x,y))=W(U_0^h(x,y))+G(U_0^h(x,y))\tau, \end{equation*} where all the quantities $\rho_0^h, \rho^h$, {\it etc.} are evaluated at $(kh+\tau,y)$ and $\tau =x-kh$. More precisely, it takes the form: \begin{equation} \begin{split} \rho^hu^h&=\rho_0^hu_0^h,\\[1.5mm] \rho^h(u^h)^2+p^h&=\rho_0^h(u_0^h)^2+p_0^h,\\[1.5mm] \rho^hu^hv^h&=\rho_0^hu_0^hv_0^h,\\[1.5mm] (\rho^hE^h+p^h)u^h&=(\rho_0^hE_0^h+p_0^h)u_0^h+q\rho_0^hZ_0^h\phi(T_0^h)\tau,\\[1.5mm] \rho^hu^hZ^h&=\rho_0^hu_0^hZ_0^h-\rho_0^hZ_0^h\phi(T_0^h)\tau. \end{split} \label{diff} \end{equation} We need to estimate the change in $(u,v,p,\rho,Z,T)$ due to the reacting step.
\par First, we have \begin{equation*} (T^h-T_0^h)(kh+\tau,y)=\frac{\partial T}{\partial w_4}q\rho_0^hZ_0^h\phi(T_0^h)\tau =\frac{(\gamma -1)((u_0^h)^2-RT_0^h)}{R\rho_0^hu_0^h((u_0^h)^2-\gamma RT_0^h)}q\rho_0^hZ_0^h\phi(T_0^h)\tau. \end{equation*} Since $u^2>c^2=\frac{\gamma p}{\rho}=\gamma RT$, we have $T^h(x,y)\ge T_0^h(x,y)$, which shows that the temperature $T$ does not decrease due to the reaction.
\par Second, from the fifth equation: $Z^h-Z_0^h=-\frac {Z_0^h\phi(T_0^h)\tau}{u_0^h}$. Since $\phi(T)$ is assumed to be Lipschitz continuous, nonnegative, and increasing, there exists a constant $\Phi_1>0$ such that \begin{equation} Z^h-Z_0^h\le -Z_0^h\Phi_1 \tau. \end{equation} Then we conclude \begin{equation} Z^h\le Z_0^h(1-\Phi_1 \tau)\le Z_0^h e^{-\Phi_1 \tau}, \qquad 0\le \tau<h. \label{a1} \end{equation} According to the scheme and by induction, we can actually obtain \begin{equation}
Z_0^h\le \|Z_0\|_{\infty}e^{-\Phi_1 kh}, \qquad kh \le x<(k+1)h. \end{equation}
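The two ingredients of this decay estimate, the elementary inequality $1-s\le e^{-s}$ behind \eqref{a1} and the induction over the strips $[kh,(k+1)h)$, can be checked numerically; the values below are illustrative.

```python
# Check of the decay estimates: Z^h <= Z_0^h (1 - Phi_1*tau) <= Z_0^h e^{-Phi_1*tau},
# and by induction over the strips the cumulative bound ||Z_0||_inf e^{-Phi_1*k*h}.
# Values are illustrative.
import math

Phi1, h, Z0 = 0.8, 0.05, 1.0

# elementary inequality behind (a1): 1 - s <= e^{-s}
for s in [0.0, 0.01, 0.04, 0.5]:
    assert 1.0 - s <= math.exp(-s) + 1e-15

# induction over k strips: one per-strip factor per reacting step
Z = Z0
for k in range(1, 11):
    Z *= (1.0 - Phi1 * h)
    assert Z <= Z0 * math.exp(-Phi1 * k * h) + 1e-12
```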
\par Third, from the first three equations, we know that $u^h=\frac{\rho_0^hu_0^h}{\rho^h}$, $v^h=v_0^h$, and $p^h=p_0^h+ \rho_0^h(u_0^h)^2-\frac{\rho_0^h(u_0^h)^2}{\rho^h}$. Substituting these into the fourth equation, we obtain \begin{equation} \frac{\gamma+1}{2}(\rho_0^h u_0^h)^2(\frac{1}{\rho^h})^2 -\gamma\big(\rho_0^h(u_0^h)^2+p_0^h\big)\frac{1}{\rho^h} +\big(\frac{\gamma-1}{2}(u_0^h)^2+\gamma \frac{p_0^h}{\rho_0^h}+Z_0^hO(h)\big)=0. \end{equation} Therefore, we obtain \begin{equation} \frac{1}{\rho^h}=\frac{\gamma\big(\rho_0^h(u_0^h)^2+p_0^h\big) +\sqrt{\big(\rho_0^h(u_0^h)^2-\gamma p_0^h\big)^2+(\rho_0^h u_0^h)^2Z_0^hO(h)}}{(\gamma+1)(\rho_0^hu_0^h)^2}. \end{equation} Using the Taylor expansion, we find \begin{equation} \frac{1}{\rho_0^h}=\frac{1}{\rho^h}+Z_0^hO(h). \end{equation} That is, \begin{equation}
\rho^h-\rho_0^h=\|Z_0\|_{\infty}e^{-\Phi_1 kh}O(h). \end{equation} \par Similar calculations also apply to $u$ and $p$. Therefore, we have
\begin{lemma} There are positive constants $C_0$ and $\Phi_1$ such that \begin{equation} \begin{split} T^h \ge T_0^h &\ge C_0>0,\\
u^h-u_0^h&=\|Z_0\|_{\infty}e^{-\Phi_1 kh} O(h),\\ v^h-v_0^h&=0,\\
p^h-p_0^h&=\|Z_0\|_{\infty}O(h)e^{-\Phi_1 kh},\\
\rho^h-\rho_0^h&=\|Z_0\|_{\infty}O(h)e^{-\Phi_1 kh},\\ Z^h &\le Z_0^h e^{-\Phi_1 \tau}, \qquad 0\le \tau<h. \end{split} \end{equation} Furthermore, \begin{equation}
Z_0^h\le \|Z_0\|_{\infty}e^{-\Phi_1 kh}, \qquad kh \le x<(k+1)h. \end{equation} All the quantities are evaluated at $(kh+\tau,y)$ with $\tau =x-kh$. \label{q1} \end{lemma}
\subsection{\textbf{Glimm functional for the fractional-step scheme}}
\par Following Glimm's method \cite{glimm}, we define a functional on the restriction of the approximate solution $W(U^h)$ to certain mesh curves $J$. We define a mesh point to be a point $(x,y)=(kh,(j+\theta_k)l)$, where $k\in \mathbb{N}$ and $j\in \mathbb{Z} $ such that $j+k$ is even. A mesh curve $J$ is a piecewise linear curve in the $(x,y)$--plane, which successively connects the mesh points $(kh,(j+\theta_k)l)$ to the mesh points $((k\pm 1)h,(j+1+\theta_{k\pm 1})l)$. We define a partial order on the set of mesh curves by stating that larger curves lie toward larger $x$. We call $J_2$ an immediate successor of $J_1$ if $J_2$ connects the same mesh points as $J_1$, except for one mesh point, and if $J_2>J_1$. Let $J_k$ be the unique mesh curve which connects the mesh points on $x=kh$ to the mesh points on $x=(k+1)h$. Note that $J_k$ crosses all the waves in the Riemann solutions of $W(U_0^h(x,y))$ in the strip $kh\le x<(k+1)h$.
\par We now define a functional $F$ on the set of mesh curves. For any mesh curve $J$, we define \begin{equation}
L_i(J)=\sum\{|\alpha|:\text{$\alpha$ is the $i$th wave crossing $J$}\} \qquad\mbox{for $1\le i \le 5$}. \end{equation}
\par Next, we define \begin{equation} L(J)=\sum_{1\le i\le 5,i\ne 4}L_i(J)+M L_4(J), \end{equation} and \begin{equation}
Q(J)=\sum \{|\alpha| |\beta|: \text{both $\alpha$ and $\beta$ cross $J$ and approach each other}\}, \end{equation} where $M>0$ is a constant to be determined as in \eqref{4.1a}.
By the standard procedure in \cite{smoller1}, when $\mathrm{TV}(U_0(\cdot))$ is small enough, we can choose a positive constant $K_0$ sufficiently large such that the Glimm functional \begin{equation} F(J)=L(J)+ K_0Q(J) \end{equation} is non-increasing in the non-reacting step.
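\par For completeness, we recall the mechanism behind this monotonicity (a sketch of the standard argument; see \cite{smoller1}). If $J_2$ is an immediate successor of $J_1$ and $\Lambda$ is the diamond enclosed by $J_1$ and $J_2$, the wave interaction estimates yield
\begin{equation*}
L(J_2)\le L(J_1)+CQ(\Lambda),\qquad Q(J_2)\le Q(J_1)-\big(1-CL(J_1)\big)Q(\Lambda),
\end{equation*}
where $Q(\Lambda)$ is the sum of $|\alpha||\beta|$ over the pairs of approaching waves entering $\Lambda$. Hence
\begin{equation*}
F(J_2)-F(J_1)\le \big(C-K_0\big(1-CL(J_1)\big)\big)Q(\Lambda)\le 0,
\end{equation*}
provided that $K_0$ is sufficiently large and $L(J_1)$, which is controlled by $\mathrm{TV}(U_0(\cdot))$, is sufficiently small.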
\subsection{\textbf{BV-stability of the reaction step}}
\par We now prove the BV-stability of the approximate solutions during the reaction step. Our total variation bounds imply bounds on the length of $W(U^h(J))$, but we must also deal with the ``drift'' of the solution due to the reaction term $G(U)$.
\par In order to discuss the effect of the exothermic reaction on the functionals $L$ and $Q$, it is convenient to identify a new ``mesh curve'' $\tilde J$, which, as a curve, is the same as a given mesh curve $J$, but upon which the value of $W(U)$ differs from the value of $W(U)$ on $J$ by a single reaction step along all of $J$. We take $\tilde J$ to represent the values before the reaction and $J$ to represent the values after the reaction step.
\begin{lemma} There is a positive constant C such that \begin{gather}
L(J_k)\le L(\tilde{J_k})+ Cqh\|w_{5,0}\|_{\infty}e^{-\Phi_1 kh}L(\tilde{J_k}),\\[2mm]
Q(J_k)\le Q(\tilde{J_k})+ Cqh \|w_{5,0}\|_{\infty}e^{-\Phi_1kh}L(\tilde{J_k})^2. \end{gather} \end{lemma}
\begin{pf} For simplicity of presentation, we denote $\textbf{c}=(0,0,0,1,-\frac{1}{q})^\top, \boldsymbol{\tilde{\sigma}}_i=(\tilde{\sigma}_{1i},\tilde{\sigma}_{2i},\tilde{\sigma}_{3i},\tilde{\sigma}_{4i}, \tilde{\sigma}_{5i})$, $\textbf{c}_g=(0,0,0,1)^\top$, $W(\tilde{U}_{i+1})=\Psi(\boldsymbol{\tilde{\sigma}}_i,W(\tilde{U_i}))$, $B=(B_1,B_2,B_3,B_4,B_5)^\top$, and $B_g=(B_1,B_2,B_3,B_4)^\top$.
\par Let \begin{equation} \Gamma(W(\tilde{U_i}),\boldsymbol{\tilde{\sigma}_i},h)=B(W(U_i),W(U_{i+1})) \end{equation} as before, where $W(U_i)=W(\tilde{U_i})+G(\tilde{U_i})h$ and $W(U_{i+1})=W(\tilde{U}_{i+1})+G(\tilde{U}_{i+1})h$. Then we have \begin{equation*} \begin{split} \frac{\partial \Gamma}{\partial h}(W(\tilde{U_i}),\boldsymbol{\tilde{\sigma}}_i,0) &=\tilde{\rho}_{i}\tilde{Z}_{i}\phi(\tilde{T}_{i}){\partial}_1Bq\textbf{c}
+\tilde{\rho}_{i+1}\tilde{Z}_{i+1}\phi(\tilde{T}_{i+1}){\partial}_2Bq\textbf{c}\\ &=\tilde{w}_{5,i}\frac{\phi(\tilde{T}_i)}{\tilde{u}_i}{\partial}_1Bq\textbf{c} +\tilde{w}_{5,i+1}\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}{\partial}_2Bq\textbf{c}\\ &=\tilde{w}_{5,i}({\partial_1} B \frac{\phi(\tilde{T}_i)}{\tilde{u}_i}+{\partial}_2 B\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}})q\textbf{c}+(\tilde{w}_{5,i+1}-\tilde{w}_{5,i}){\partial}_2 B\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}q\textbf{c}, \end{split} \end{equation*} where $\tilde{w}_{5,i}=\tilde{\rho}_{i}\tilde{u}_{i}\tilde{Z}_{i}$ and $\tilde{w}_{5,i+1}=\tilde{\rho}_{i+1}\tilde{u}_{i+1}\tilde{Z}_{i+1}$. Since $Z$ is decoupled from $(u,v,p,\rho)^\top$ in the solution of the non-reacting Riemann problem, this means that ${\partial}_1B$ and ${\partial}_2B$ are the block $5\times 5$ matrices with the upper left $4\times 4$ block relating to non-reacting gas dynamics. The remaining $1\times 1$ block contains the derivative of wave strength of $Z$-contact with respect to $\rho uZ$---the value of this derivative is $-1$ for $\frac{\partial B_5}{\partial w_{5,i}}$ and $1$ for $\frac{\partial B_5}{\partial w_{5,i+1}}$, since $B_5=\rho_{i+1} u_{i+1} Z_{i+1}-\rho_{i} u_{i} Z_{i}=w_{5,i+1}-w_{5,i}$. Then we have \begin{equation} {\partial}_1 Bq\textbf{c}=\left(\begin{array}{ccc} {\partial}_{1W_g} B_g&0\\ 0&-1 \end{array}\right) \left(\begin{array}{ccc} q\textbf{c}_g\\ -1 \end{array}\right) = \left(\begin{array}{ccc} {\partial}_{1W_g} B_gq\textbf{c}_g\\ 0 \end{array}\right)+(0,0,0,0,1)^\top, \end{equation} \begin{equation} {\partial}_2 Bq\textbf{c}=\left(\begin{array}{ccc} {\partial}_{2W_g} B_g&0\\ 0&1 \end{array}\right) \left(\begin{array}{ccc} q\textbf{c}_g\\ -1 \end{array}\right) = \left(\begin{array}{ccc} {\partial}_{2W_g} B_gq\textbf{c}_g\\ 0 \end{array}\right)-(0,0,0,0,1)^\top, \end{equation} where $W_g=(w_1, \cdots, w_4)$. 
Then \begin{equation} \begin{split} \frac{\partial \Gamma}{\partial h}(W(\tilde{U_i}),\boldsymbol{\tilde{\sigma}}_i,0) =&\tilde{w}_{5,i}\Bigg[\frac{\phi(\tilde{T}_{i})}{\tilde{u}_{i}}\left(\begin{array}{ccc} {\partial}_{1W_g} B_gq\textbf{c}_g\\ 1 \end{array}\right) + \frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}\left(\begin{array}{ccc} {\partial}_{2W_g} B_gq\textbf{c}_g\\ -1 \end{array}\right)\Bigg]\\&+(\tilde{w}_{5,i+1}-\tilde{w}_{5,i})\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}\left(\begin{array}{ccc} {\partial}_{2W_g} B_gq\textbf{c}_g\\ -1 \end{array}\right). \label{b1} \end{split} \end{equation}
Thus, the first four components of \eqref{b1} have the form: \begin{equation} \begin{split} \frac{\partial \Gamma_g}{\partial h}(W(\tilde{U}_i),\boldsymbol{\tilde{\sigma}}_i,0) =&\tilde{w}_{5,i}\Big[{\partial}_{1W_g} B_gq\textbf{c}_g\frac{\phi(\tilde{T}_{i})}{\tilde{u}_{i}}
+ {\partial}_{2W_g} B_gq\textbf{c}_g\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}\Big]\\
&+(\tilde{w}_{5,i+1}-\tilde{w}_{5,i}){\partial}_{2W_g} B_gq\textbf{c}_g\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}\\ =& \tilde{w}_{5,i}A(W(\tilde{U}_i),\boldsymbol{\tilde{\sigma}}_i)+(\tilde{w}_{5,i+1} -\tilde{w}_{5,i}){\partial}_{2W_g} B_gq\textbf{c}_g \frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}, \label{b2} \end{split} \end{equation} where \begin{equation} A(W(\tilde{U}_i),\boldsymbol{\tilde{\sigma}}_i)={\partial}_{1W_g} B_gq\textbf{c}_g\frac{\phi(\tilde{T}_{i})}{\tilde{u}_{i}} + {\partial}_{2W_g} B_gq\textbf{c}_g\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}. \end{equation} It is easy to see that, if $\boldsymbol{\tilde{\sigma}}_{g,i}=(\tilde{\sigma}_{1,i},\tilde{\sigma}_{2,i},\tilde{\sigma}_{3,i},\tilde{\sigma}_{4,i})=\boldsymbol{0}$, then $\tilde{W}_{g,i+1}=\tilde{W}_{g,i}:=W_g(\tilde{U}_i)$ and, in particular, $\tilde{T}_{i+1}=\tilde{T}_i$. Since $B_g(\tilde{W}_{g,i},\tilde{W}_{g,i})$ is the vector of wave strengths for a Riemann problem with equal states, \begin{equation}
{\partial}_{1W_g} B_g|_{\boldsymbol{\tilde{\sigma}}_{g,i}=0}={\partial}_{2W_g} B_g|_{\boldsymbol{\tilde{\sigma}}_{g,i}=0}=0. \end{equation} Therefore, there exists some positive constant $C$ such that the first term in \eqref{b2} can be estimated by \begin{equation}
\|\tilde{w}_{5,i}A(W(\tilde{U}_i),\boldsymbol{\tilde{\sigma}}_i)\|
\le C\tilde{w}_{5,i} \|\boldsymbol{\tilde{\sigma}}_{g,i}\|q. \end{equation}
\par We next examine the last term of \eqref{b2}, which has the form \begin{equation} (\tilde{w}_{5,i+1}-\tilde{w}_{5,i}){\partial}_{2W_g} B_gq\textbf{c}_g \frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}. \label{b3} \end{equation} The fifth component of \eqref{b1} is the equation for the strength of the $Z$-wave. This equation is \begin{equation*} \frac{\partial}{\partial h}(\tilde{w}_{5,i+1}-\tilde{w}_{5,i}) =\tilde{w}_{5,i}(\frac{\phi(\tilde{T}_i)}{\tilde{u}_i}-\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}) -(\tilde{w}_{5,i+1}-\tilde{w}_{5,i})\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}, \end{equation*} so that \begin{equation*}
\frac{\partial}{\partial h}|\tilde{w}_{5,i+1}-\tilde{w}_{5,i}|
\le \tilde{w}_{5,i}|\frac{\phi(\tilde{T}_i)}{\tilde{u}_i}
-\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}|-|\tilde{w}_{5,i+1}-\tilde{w}_{5,i}|\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}. \end{equation*}
\par Thus, the reaction step produces possible increases in the total variation, which are bounded by \[
C\tilde{w}_{5,i}\|\boldsymbol{\tilde{\sigma}}_{g,i}\|qh+|\tilde{w}_{5,i+1}
-\tilde{w}_{5,i}|\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}\|{\partial}_{2W_g} B_g\textbf{c}_g\|qh. \] The reaction step also produces a decrease in total variation for the $w_5=\rho uZ$ component---the fifth component of
$\frac{\partial \Gamma}{\partial h}(W(\tilde{U}_{i}),\boldsymbol{\tilde{\sigma}}_i,0)$---in the amount $|\tilde{w}_{5,i+1}-\tilde{w}_{5,i}|\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}$. We now use the decrease in the $w_5$--component proportional to $|\tilde{w}_{5,i+1}-\tilde{w}_{5,i}|$. Since ${\partial}_{2W_g} B_g$ is Lipschitz continuous, there exists an upper bound $M$ for $\|{\partial}_{2W_g} B_g\|q$. Thus, the effect of term \eqref{b3} on $(\rho u,\rho u^2+p,\rho uv, \rho u (\bar{h}+\frac{u^2+v^2}{2}))$ of
$\frac{\partial \Gamma}{\partial h}(W(\tilde{U}_{i}),\boldsymbol{\tilde{\sigma}}_i,0)$ is bounded by $M|\tilde{w}_{5,i+1}-\tilde{w}_{5,i}|\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}h$, and this increase is offset by a decrease in the term $M|\tilde{w}_{5,i+1}-\tilde{w}_{5,i}|$.
\par Thus, the change in $L$ is estimated as follows: \begin{equation} \begin{split} L(J_k)- L(\tilde{J_k})&=\sum_{1\le j \le 5,j\ne 4}\big(L_j(J_k)- L_j(\tilde{J_k})\big)+M\big(L_4(J_k)- L_4(\tilde{J_k})\big)
\\&= \sum_{1\le j \le 5,j\ne 4}\sum_{-\infty <i<\infty}(|\sigma_{j,i}|-|\tilde{\sigma_{j,i}}|)+M \sum_{-\infty <i<\infty}(|\sigma_{4,i}|-|\tilde{\sigma_{4,i}}|)
\\& \le \sum_{-\infty <i<\infty}\|\frac{\partial \Gamma}{\partial h}(W(\tilde{U}_i),\boldsymbol{\tilde{\sigma}}_i,0)\|_1h
\\& \le \sum_{-\infty <i<\infty} \Big(C q\tilde{w}_{5,i}\|\boldsymbol{\tilde{\sigma}}_{g,i}\|h
+|\tilde{w}_{5,i+1}-\tilde{w}_{5,i}|\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}q\|{\partial}_{2W_g} B_g\textbf{c}_g\|h \\& \qquad \qquad\qquad
+M \big(\tilde{w}_{5,i}|\frac{\phi(\tilde{T}_i)}{\tilde{u}_i}-\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}|h
-|\tilde{w}_{5,i+1}-\tilde{w}_{5,i}|\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}h \big)\Big)
\\&\le Cqh \|\tilde{w_5}\|_{\infty}L(\tilde{J_k})
\\&\le Cqh\|w_{5,0}\|_{\infty}e^{-\Phi_1kh}L(\tilde{J_k}), \end{split} \end{equation} where we have chosen $M>0$ large enough to make the third inequality hold, and the last inequality comes from Lemma \ref{q1}.
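\par We make the choice of $M$ explicit: it suffices to take $M\ge q\|{\partial}_{2W_g} B_g\textbf{c}_g\|$, for then
\begin{equation*}
|\tilde{w}_{5,i+1}-\tilde{w}_{5,i}|\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}q\|{\partial}_{2W_g} B_g\textbf{c}_g\|h
-M|\tilde{w}_{5,i+1}-\tilde{w}_{5,i}|\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}h\le 0,
\end{equation*}
while the remaining term $M\tilde{w}_{5,i}|\frac{\phi(\tilde{T}_i)}{\tilde{u}_i}-\frac{\phi(\tilde{T}_{i+1})}{\tilde{u}_{i+1}}|h$ is bounded by $CM\tilde{w}_{5,i}\|\boldsymbol{\tilde{\sigma}}_{g,i}\|h$, by the Lipschitz continuity of $\frac{\phi(T)}{u}$ with respect to the state. Summing over $i$ and using $\sum_i\|\boldsymbol{\tilde{\sigma}}_{g,i}\|\le CL(\tilde{J_k})$ and $\tilde{w}_{5,i}\le\|\tilde{w_5}\|_{\infty}$ then gives the bound $Cqh\|\tilde{w_5}\|_{\infty}L(\tilde{J_k})$.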
\par Consequently, we have \begin{equation} \begin{split}
Q(J_k)-Q(\tilde{J_k})&=\sum_{App}\big(|\alpha_i||\beta_j|-|\tilde{\alpha_i}||\tilde{\beta_j}|\big)
\\&= \sum_{App}\big(|\alpha_i|(|\beta_j|- |\tilde{\beta_j}|)+|\tilde{\beta_j}|( |\alpha_i|- |\tilde{\alpha_i}|)\big)
\\&\le C\big(L(J_k)- L(\tilde{J_k})\big)L(\tilde{J_k}).
\end{split} \end{equation} Therefore,
\begin{equation}
\begin{split}
Q(J_k)\le Q(\tilde{J_k})+Cqh\|w_{5,0}\|_{\infty}e^{-\Phi_1kh}L(\tilde{J_k})^2
\le Q(\tilde{J_k})+Cqh\|w_{5,0}\|_{\infty}e^{-\Phi_1kh}F(\tilde{J_k})^2. \end{split} \end{equation} This completes the proof. \end{pf}
\par Since \begin{equation} F(J)=L(J)+ K_0Q(J), \end{equation} we have actually proved the following lemma.
\begin{lemma} Let $J_k$ be a mesh curve between $x=kh$ and $x=(k+1)h$. Then \begin{equation}
F(J_k)\le F(\tilde{J_k})\big(1+Cqh\|w_{5,0}\|_{\infty}e^{-\Phi_1 kh}(1+ F(\tilde{J_k}))\big), \label{11} \end{equation} where C is a constant independent of the mesh lengths $l$ and $h$. \end{lemma}
\par We need to obtain a uniform bound on $F$. First of all, we suppose that such a bound exists, namely, $F(\tilde{J}_k)\le A$ for some positive constant $A$. Then, by (\ref{11}), we have \[
F(J_k)\le F(\tilde{J_k})\big(1+Cq\|w_{5,0}\|_{\infty}e^{-\Phi_1 kh}(1+A)h\big). \]
Since $F$ is non-increasing in the non-reacting step, $F(\tilde{J}_k)\le F(J_{k-1})$. Then we have \[
F(J_k)\le F(\tilde{J}_0)\prod_{j=0}^{k}\big(1+Cq\|w_{5,0}\|_{\infty}d^{j}(1+A)h\big), \] where $d=e^{-\Phi_1h}$. Using the inequality $\mathrm{ln}(1+x)\le x$ for $x\ge 0$, \begin{equation} \begin{split} \mathrm{ln}\Big(\frac{F(J_k)}{F(\tilde{J}_0)}\Big)
&\le \sum_{j=0}^k \mathrm{ln}\big(1+Cq\|w_{5,0}\|_{\infty}d^j(1+A)h\big)
\\&\le \sum_{j=0}^k Cqh \|w_{5,0}\|_{\infty}d^j(1+A)
\\&\le Cqh \|w_{5,0}\|_{\infty}(1+A)\frac{1}{1-d}. \end{split} \end{equation} Thus we obtain \begin{equation}
F(J_k)\le F(\tilde{J}_0)\mathrm{exp}\Big(\frac{Cqh\|w_{5,0}\|_{\infty}(1+A)}{1-e^{-\Phi_1 h}}\Big). \label{15} \end{equation} The function $f(h)=\frac{h}{1-e^{-\Phi_1 h}}$ is increasing for $h>0$ and tends to $\frac{1}{\Phi_1}$ as $h\to 0$; in particular, $f(h)\le \frac{2}{\Phi_1}$ for $h$ sufficiently small. Thus, for $h$ sufficiently small, we obtain \begin{equation}
F(J_k)\le F(\tilde{J}_0)\mathrm{exp}\Big(\frac{C_1q\|w_{5,0}\|_{\infty}(1+A)}{\Phi_1}\Big), \label{12} \end{equation} where $C_1=2C$. Estimate (\ref{12}) is valid as long as $F(\tilde{J}_k)\le A$. Since $F(\tilde{J}_k)\le F(J_{k-1})$, the condition required for this result is that \begin{equation}
F(\tilde{J}_0)\le \mathrm{exp}\Big(-\frac{C_1q\|w_{5,0}\|_{\infty}(1+A)}{\Phi_1}\Big)A=:g(A).\label{13} \end{equation}
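\par Indeed, setting $c:=\frac{C_1q\|w_{5,0}\|_{\infty}}{\Phi_1}$ so that $g(A)=e^{-c(1+A)}A$, we have
\begin{equation*}
g'(A)=e^{-c(1+A)}(1-cA),
\end{equation*}
which is positive for $A<\frac{1}{c}$ and negative for $A>\frac{1}{c}$; hence $g$ attains its maximum at $A=\frac{1}{c}=\frac{\Phi_1}{C_1q\|w_{5,0}\|_{\infty}}$, with maximum value $g\big(\frac{1}{c}\big)=\frac{1}{c}\,e^{-1-c}$.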
The value of $A$ that maximizes $g(A)$ is $A=\frac{\Phi_1}{C_1q\|w_{5,0}\|_{\infty}}$. Thus, our least-restrictive condition on $F(\tilde{J}_0)$ is \begin{equation} F(\tilde{J}_0)
\le \mathrm{exp}\Big(-1-\frac{C_1q\|w_{5,0}\|_{\infty}}{\Phi_1}\Big)\frac{\Phi_1}{C_1 q\|w_{5,0}\|_{\infty}}. \label{14} \end{equation} We summarize these estimates with the following lemma.
\begin{lemma} If $F(\tilde{J}_0)$ satisfies \eqref{13}, then $F(\tilde{J}_k)\le A$ for all $k\ge 1$. In particular, if $F(\tilde{J}_0)$ satisfies \eqref{14}, then \[
F(\tilde{J}_k)\le A= \frac{\Phi_1}{C_1 q\|w_{5,0}\|_{\infty}}\qquad\,\mbox{for all $k\ge 1$}. \] Furthermore, if $F(\tilde{J}_0)$ satisfies \eqref{14}, then
\[
F(J_k)\le F(\tilde{J}_0)\mathrm{exp}\Big(\frac{C_1q \|w_{5,0}\|_{\infty}}{\Phi_1}+1\Big) \qquad\,\mbox{for all $k\ge 1$}. \] \end{lemma}
\par Next, we need to estimate the amount that the solution ``drifts'' from its original base point due to the source term $G(U)$. We use $W(U_0(-\infty))={\lim}_{y\to -\infty}W(U_0(y))$ as our base point. From our scheme, \begin{equation} W(U^h(x,y))=W(U_0^h(x,y))+G(U_0^h(x,y))(x-kh). \end{equation} We denote $U_k^{\infty}=\lim_{y\to -\infty}U^h((k+1)h-,y)$ and $U_0(-\infty)=\lim_{y\to -\infty}U_0(y)$. Then $W(U_{0}^{\infty})=W(U_{0}(-\infty))+G(U_{0}(-\infty))h$ and $W(U_{k+1}^{\infty})=W(U_{k}^{\infty})+G(U_{k}^{\infty})h$ for $k\ge 0$. Since $Z_0(-\infty)=0$, $G(U_{0}(-\infty))=0$. We deduce that $W(U_{k+1}^{\infty})=W(U_{k}^{\infty})=W(U_{0}(-\infty))$. Therefore, for all $(x,y)\in J_k$, we have \begin{equation} \begin{split}
\|W(U^h(x,y))-W(U_0(-\infty))\|
&\le \|W(U^h(x,y))-W(U_k^{\infty})\|+\|W(U_k^{\infty})-W(U_0(-\infty))\|
\\& =\|W(U^h(x,y))-W(U_k^{\infty})\|
\\& \le \mathrm{TV}(W(U^h(x,\cdot)))
\\& \le CF(J_k). \end{split} \end{equation}
In summary, we have established the following theorem.
\begin{theorem} If $\mathrm{TV}\big(W(U_0)\big)$ is sufficiently small, then the fractional-step Glimm scheme generates the approximate solutions $U^h(x,y)$ which exist in the whole domain $\{x\ge 0, y\in \mathbb{R}\}$ and have uniformly bounded total variation in the $y$--direction. Moreover, there is a null set $N\subset \Pi_{k=0}^{\infty}(-1,1)$ such that, for each $\theta \in \Pi_{k=0}^{\infty}(-1,1)\setminus N$, there exists a sequence $h_i\to 0$ so that \begin{equation} U_{\theta}=\lim _{h_i\to 0}U_{h_i,\theta} \end{equation} is an entropy solution to problem \eqref{d3} for system \eqref{d1}--\eqref{d2}, where the limit is taken in $L_{loc}^{1}(\Omega)$. Furthermore, $U_{\theta}$ has uniformly bounded total variation in the $y$--direction. \end{theorem}
The proof of the convergence part will be given in Section 6.
\section{Initial-Boundary Value Problem}
\par In this section, we are concerned with reacting supersonic flows past Lipschitz curved wedges. The problem can be formulated as the initial-boundary value problem for system \eqref{d1}--\eqref{d2} in $\Omega$ with initial data \begin{equation}
(u,v,p,\rho,Z)|_{x=0}=(u_0,v_0,p_0,\rho_0,Z_0)(y)\equiv U_0(y),\qquad \text{$y \in \mathbb{R}$}, \label{dd3} \end{equation} and boundary condition \begin{equation} (u,v)\cdot \textbf{n}=0 \qquad \text{on $\Gamma$},\label{dd4}\\ \end{equation} where \begin{equation*} \Omega=\{(x,y)\, :\, y<g(x), x>0\},\quad \Gamma=\{(x,y)\, :\, y=g(x), x>0\}, \end{equation*} and $\textbf{n}(x\pm)=\frac{(-g'(x\pm),1)}{\sqrt{(g'(x\pm))^2+1}}$ is the outer unit normal vector to $\Gamma$ at the point $x\pm$ (see Fig. 3).
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(120,70)(-2,-10) \linethickness{1pt} \put(30,0){\vector(0,1){50}} \put(0,30){\vector(1,0){120}} \put(3,3){\vector(4,1){20}} \put(3,7){\vector(4,1){20}} \put(3,11){\vector(4,1){20}} \put(3,-1){\vector(4,1){20}} \qbezier(30,30)(40,31)(50,33) \qbezier(50,33)(90,25)(110,32) \put(24,26){$O$} \put(116,26){$x$} \put(27,50){$y$} \put(6,20){$U_0$} \put(34,30){\line(1,2){2}} \put(38,31){\line(1,2){2}} \put(42,31){\line(1,2){2}} \put(46,32){\line(1,2){2}} \put(50,32){\line(1,2){2}} \put(54,32){\line(1,2){2}} \put(58,31){\line(1,2){2}} \put(62,31){\line(1,2){2}} \put(66,31){\line(1,2){2}} \put(70,30){\line(1,2){2}} \put(74,30){\line(1,2){2}} \put(78,29){\line(1,2){2}} \put(82,29){\line(1,2){2}} \put(86,29){\line(1,2){2}} \put(90,29){\line(1,2){2}} \put(94,29){\line(1,2){2}} \put(98,30){\line(1,2){2}} \put(100,20){\vector(-2,1){16}} \put(100,18){$y=g(x)$} \put(52,10){$\Omega$} \put(18,-8){Fig. 3. Supersonic flow past a Lipschitz curved wedge} \end{picture} \end{center}
The assumptions on $U_0(y):= (u_0,v_0,p_0,\rho_0,Z_0)(y)$ are the same as before. The boundary function $y=g(x)$ is a small perturbation of the straight line $y=\frac{v_0(-\infty)}{u_0(-\infty)}x$ such that $y=g(x)$ is Lipschitz continuous with $g(0)=0, g'(0+)=\frac{v_0(-\infty)}{u_0(-\infty)}$, and $g' \in BV(\mathbb{R}^+;\mathbb{R})$.
Without loss of generality, we may assume that \begin{equation} v_0(-\infty)=0, \qquad Z_0(-\infty)=0. \end{equation}
The initial-boundary value problem is derived from the original physical problem of supersonic flow past a symmetric wedge through a coordinate transformation. For the non-reacting supersonic flow past a straight symmetric wedge, {\it i.e.} $g'(x)=0$, a plane shock is generated, which is attached to the wedge vertex (see Fig. 4). When the supersonic flow is governed by the exothermically reacting steady Euler equations, the attached shock is no longer a plane shock even for the straight wedge, though it can be handled as an approximate shock wave.
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(100,80)(-2,-10)
\linethickness{1pt} \put(0,0){\vector(1,0){30}} \put(-2,10){\vector(1,0){30}} \put(-4,20){\vector(1,0){30}} \put(-2,30){\vector(1,0){30}} \put(0,40){\vector(1,0){30}} \put(30,20){\line(3,4){20}} \put(30,20){\line(3,-4){20}} \put(30,20){\line(5,1){58}} \put(38,26){\vector(4,1){20}} \put(30,20){\line(5,-1){58}} \put(38,13){\vector(4,-1){20}} \put(41,-4){\it{S}} \put(43,44){\it{S}} \put(33,19){\line(5,3){5}} \put(38,18){\line(5,4){6}} \put(43,17){\line(4,3){10}} \put(48,16){\line(5,3){18}} \put(53,15){\line(5,3){23}} \put(58,14){\line(5,3){29}} \put(63,13){\line(5,3){25}} \put(68,12){\line(5,3){21}} \put(73,11){\line(5,3){16}} \put(78,10){\line(5,3){11}} \put(83,9){\line(5,3){6}} \put(18,-14){Fig. 4. Non-reacting supersonic flow past a straight wedge} \end{picture} \end{center}
\subsection{\textbf{Homogeneous initial-boundary value problem}}
\par We first recall some basic properties on the initial-boundary value problem for the homogeneous system \eqref{3}.
\subsubsection{\textbf{Lateral Riemann problem}}
The simplest case of problem (\ref{d1})--(\ref{d2}) is $g\equiv 0$. It has been shown in \cite{courant} that, if $g\equiv 0$, the homogeneous system \eqref{3} with initial condition: \begin{equation}
(u,v,p,\rho,Z)|_{x<0}=(u_{-},v_{-},p_{-},\rho_{-},Z_{-})\equiv U_{-} \end{equation} yields an entropy solution that consists of the constant states $U_{-}$ and $U_{+}:=(u_{+},0,p_{+},\rho_{+},Z_{+})$ with $u_{+}>c_{+}>0$ in the subdomain of $\Omega$, separated by a straight shock-front emanating from the vertex. That is, the state ahead of the shock-front is $U_{-}$, whilst the state behind the shock-front is $U_{+}$ (see Figs. 5--6). When the angle between the flow direction of the front state and the wedge boundary at a boundary vertex is larger than $\pi$, the entropy solution contains a rarefaction wave that separates the front state from the back state (see Fig. 6).
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(90,60)(-2,-10) \linethickness{1pt} \put(30,0){\vector(0,1){50}} \put(0,30){\vector(1,0){70}} \put(3,3){\vector(1,1){20}} \put(30,30){\line(1,-1){25}} \put(50,20){\vector(1,0){20}} \put(30,30){\line(1,2){2}} \put(34,30){\line(1,2){2}} \put(38,30){\line(1,2){2}} \put(42,30){\line(1,2){2}} \put(46,30){\line(1,2){2}} \put(50,30){\line(1,2){2}} \put(54,30){\line(1,2){2}} \put(58,30){\line(1,2){2}} \put(62,30){\line(1,2){2}} \put(22,26){$O$} \put(66,26){$x$} \put(23,50){$y$} \put(6,20){$U_{-}$} \put(54,14){$U^{+}$} \put(52,2){Shock} \put(-2,-8){Fig. 5. Unperturbed case when $g\equiv 0$} \end{picture} \end{center}
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(180,40)(-2,-4) \linethickness{1pt} \put(0,24){\line(1,0){30}} \multiput(30,24)(3,0){14}{\line(1,0){2}} \put(2,24){\line(1,2){2}} \put(6,24){\line(1,2){2}} \put(10,24){\line(1,2){2}} \put(14,24){\line(1,2){2}} \put(18,24){\line(1,2){2}} \put(22,24){\line(1,2){2}} \put(26,24){\line(1,2){2}} \put(30,24){\line(1,2){2}} \put(34,23){\line(1,1){4}} \put(38,21){\line(1,1){4}} \put(42,20){\line(1,1){4}} \put(46,19){\line(1,1){4}} \put(50,17){\line(1,1){4}} \put(54,16){\line(1,1){4}} \put(58,15){\line(1,1){4}} \put(62,13){\line(1,1){4}} \put(70,20){$x$} \put(68,24){\vector(1,0){3}} \put(85,24){\line(1,0){30}} \put(89,24){\line(0,1){3}} \put(93,24){\line(0,1){3}} \put(97,24){\line(0,1){3}} \put(101,24){\line(0,1){3}} \put(105,24){\line(0,1){3}} \put(109,24){\line(0,1){3}} \put(113,24){\line(0,1){3}} \put(116,24){\line(1,2){2}} \put(120,25){\line(1,2){2}} \put(124,26){\line(1,2){2}} \put(128,27){\line(1,2){2}} \put(132,28){\line(1,2){2}} \put(136,29){\line(1,2){2}} \put(140,30){\line(1,2){2}} \put(144,31){\line(1,2){2}} \put(148,32){\line(1,2){2}} \put(152,33){\line(1,2){2}} \put(30,24){\line(3,-1){40}} \put(42,17){\vector(3,-1){18}} \put(30,24){\line(4,-3){25}} \put(50,2){Shock} \put(3,16){\vector(1,0){20}} \put(115,24){\line(4,1){40}} \put(126,22){\vector(4,1){20}} \multiput(115,24)(3,0){13}{\line(1,0){2}} \put(153,24){\vector(1,0){3}} \put(115,24){\line(2,-1){18}} \put(115,24){\line(3,-4){12}} \put(115,24){\line(1,-2){10}} \put(90,20){\vector(1,0){16}} \put(116,0){$\text{Rarefaction wave}$} \put(154,20){$x$} \put(40,-4){Fig. 6. Lateral Riemann solutions} \end{picture} \end{center}
\subsubsection{\textbf{Riemann problem}} Consider the Riemann problem for (\ref{3}): \begin{eqnarray}
U|_{x=x_0}=U_{-}=\left\{ \begin{array}{ll} U_b, &\quad y< y_0,\\[2mm] U_a, &\quad y> y_0, \end{array} \right. \end{eqnarray} where $U_a$ and $U_b$ are the constant states above and below the line $y=y_0$, respectively. It is well known that this Riemann problem is solvable if the states $U_b$ and $U_a$ are sufficiently close.
\subsubsection{\textbf{Estimates on wave interactions for \eqref{3}}} The estimates on weak wave interactions are the same as in Lemma \ref{4.1b}.
\subsection{\textbf{Estimates of the reflection on the boundary for system \eqref{3}}} \par Following the notation in \cite{zhang1}, we denote by $\{C_k(a_k,b_k)\}_{k=0}^{\infty}$ the points $\{(a_k,b_k)\}_{k=0}^{\infty}$ in the $(x, y)$--plane with $a_{k+1}>a_k\ge 0$. Set \begin{eqnarray} \omega_{k,k+1}=\arctan\big(\frac{b_{k+1}-b_k}{a_{k+1}-a_k}\big), \quad \omega_k=\omega_{k,k+1}-\omega_{k-1,k},\quad \omega_{-1,0}=0,\\[2mm] \Omega_{k}=\{(x,y):x\in [a_k,a_{k+1}), y<b_k+(x-a_k)\tan(\omega_{k,k+1})\},\\[2mm] \Gamma_{k}=\{(x,y):x\in [a_k,a_{k+1}), y=b_k+(x-a_k)\tan(\omega_{k,k+1})\}, \end{eqnarray} and the outer unit normal vector to $\Gamma_{k+1}$: \begin{equation} \textbf{n}_{k+1}=\frac{(-b_{k+1}+b_k,a_{k+1}-a_k)}{\sqrt{(b_{k+1}-b_k)^2+(a_{k+1}-a_k)^2}}=(-\sin (\omega_{k,k+1}),\cos(\omega_{k,k+1})). \end{equation} \par We then consider the initial-boundary value problem: \begin{eqnarray} \left\{ \begin{array}{ll} (\ref{3}) \qquad \text{in $\Omega_{k}$},\\[2mm]
U|_{x=a_k}=\underline{U},\\[2mm] (u,v)\cdot \textbf{n}_{k}=0 \qquad \text{on $\Gamma_{k}$}, \end{array} \right. \end{eqnarray} where $\underline{U}$ is a constant state.
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(50,72)(-2,-6) \linethickness{1pt} \multiput(0,0)(0,3){21}{\line(0,2){2}} \multiput(24,2)(0,3){17}{\line(0,1){2}} \multiput(48,-1)(0,3){20}{\line(0,2){2}} \put(0,18){\line(1,1){20}} \put(0,18){\line(5,3){20}} \put(0,62){\line(5,-2){32}} \put(23,53){\line(5,-3){18}} \put(12,57){\vector(1,3){4}} \put(24,52){\line(4,1){24}} \put(0,62){\line(1,-1){18}} \put(36,55){\vector(-1,4){3}} \put(3,61){\line(5,2){4}} \put(7,59){\line(5,2){4}} \put(12,57){\line(5,2){4}} \put(15,56){\line(5,2){4}} \put(19,54){\line(5,2){4}} \put(24,52){\line(1,2){2}} \put(28,53){\line(1,2){2}} \put(32,54){\line(1,2){2}} \put(36,55){\line(1,2){2}} \put(40,56){\line(1,2){2}} \put(44,57){\line(1,2){2}} \put(-1,65){$\Gamma_k$} \put(49,60){$\Gamma_{k+1}$} \put(11,70){$n_k$} \put(35,68){$n_{k+1}$} \put(9,54){$U_k$} \put(38,49){$U_{k+1}$} \put(16,50){$C_{k+1}$} \put(2,34){$U_m$} \put(2,10){$U_b$} \put(36,20){$U_b$} \put(29,51){$\omega_{k+1}$} \put(12,4){$\Omega_k$} \put(36,4){$\Omega_{k+1}$} \put(36,40){$\delta_1$} \put(12,43){$\beta_1$} \put(12,36){$\alpha_5$} \put(12,23){$\alpha_{2(3,4)}$} \put(-12,-6){Fig. 8. Weak wave reflections on the boundary.} \end{picture} \end{center}
\begin{lemma} Let $\{U_b, U_m\}=(0,\alpha_2,\alpha_3,\alpha_4,\alpha_5)$ and $\{U_m,U_k\}=(\beta_1, 0,0,0,0)$ with \begin{equation*} (u_k,v_k) \cdot \textbf{n}_k=0. \end{equation*} Then there exists $U_{k+1}$ such that \begin{equation*} \{U_b, U_{k+1}\}=(\delta_1,0,0,0,0)\qquad \text{with} \quad (u_{k+1},v_{k+1}) \cdot \textbf{n}_{k+1}=0. \end{equation*} Furthermore, \begin{equation*} \delta_1=\beta_1+K_{b5}\alpha_5+K_{b4}\alpha_4+K_{b3}\alpha_3+K_{b2}\alpha_2+K_{b0}\omega_k, \end{equation*} where $K_{b5},K_{b4},K_{b3},K_{b2}$, and $K_{b0}$ are $C^2$--functions of $(\alpha_5,\alpha_4,\alpha_3,\alpha_2,\beta_1,\omega_k;U_b)$ satisfying \begin{equation*}
K_{b5}|_{\omega_k=\alpha_5=\alpha_4=\alpha_3=\alpha_2=\beta_1=0,U_b=U_+}=1,\qquad K_{bi}|_{\omega_k=\alpha_5=\alpha_4=\alpha_3=\alpha_2=\beta_1=0,U_b=U_+}=0,\,\,\, \text{$i=2,3,4$}, \end{equation*} and $K_{b0}$ is bounded. \end{lemma}
\par The proof of this lemma is similar to that of Proposition 3.2 in \cite{chen2}.
\subsection{\textbf{Construction of approximate solutions}}
\par In this subsection, we develop a modified Glimm difference scheme to construct a family of approximate solutions consistent with conditions (\ref{dd3})--\eqref{dd4} and establish the necessary estimates for the initial-boundary value problem for system \eqref{d1}--(\ref{d2}) in the corresponding domains $\Omega_{h}$.
\par We first use the fact that the boundary is a perturbation of the straight wedge: \begin{equation}
\sup_{x\ge 0}|g'(x)|<\varepsilon \qquad \text{for sufficiently small $\varepsilon>0$.} \label{612} \end{equation} Let $h>0$ and $l>0$ denote the step-lengths in the $x$-direction and $y$-direction, respectively. Set $a_k:=kh$ and $b_k:=y_k=g(kh)$ and follow the notations in Section 2.4. Then \begin{equation}
m:=\sup_{k>0}\Big\{\frac{|y_k-y_{k-1}|}{h}\Big\}<\varepsilon. \end{equation} Define \begin{equation} \Omega_{h}=\bigcup_{k\ge 0}\Omega_{h,k}, \end{equation} where $\Omega_{h,k}=\{(x,y):kh\le x <(k+1)h, \quad y\le g_h(x) \}$ with $g_h(x)=y_k+(x-kh)\tan(\omega_{k,k+1})$ when $kh\le x< (k+1)h$. We also need the Courant-Friedrichs-Lewy type condition: \begin{equation}
\max_{1\le j\le 5}\Big(\sup_{U\in O_{\varepsilon}(U_{+})}|\lambda_j(U)|\Big)\le \frac{l-mh}{2h}. \end{equation}
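\par In particular, this condition ensures that, over one step of length $h$, the waves issuing from a mesh point travel at most $\frac{l-mh}{2}$ in the $y$--direction, while the boundary drifts by at most $mh$, so that
\begin{equation*}
\max_{1\le j\le 5}\Big(\sup_{U\in O_{\varepsilon}(U_{+})}|\lambda_j(U)|\Big)h+mh\le \frac{l-mh}{2}+mh=\frac{l+mh}{2}<l,
\end{equation*}
and the waves generated by neighboring Riemann problems do not interact before $x=(k+1)h$.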
\par Define \begin{equation} a_{k,n}=(2n+1+\theta_k)l+y_k, \end{equation} where $\theta_k$ is randomly chosen in $(-1,1)$. Then we choose \begin{equation} P_{k,n}=(kh,a_{k,n}),\qquad \text{$k\ge 0, n=0,-1,-2,\cdots $}, \end{equation} to be the mesh points and define the approximate solutions $W(U^h(x,y))$ in $\Omega_h$ for any $\theta=(\theta_0,\theta_1,\theta_2,\cdots)$ in an inductive way.
\par We denote by $T_{k,0}$ the diamond domain whose vertices are $(kh,y_k),(kh,-l+y_k),((k+1)h,-l+y_{k+1})$, and $((k+1)h,y_{k+1})$. For $n\le -1$, we denote by $T_{k,n}$ the diamond whose vertices are $(kh,(2n+1)l+y_k),(kh,(2n-1)l+y_k),((k+1)h,(2n-1)l+y_{k+1})$, and $((k+1)h,(2n+1)l+y_{k+1})$.
\par Now we can define the difference scheme in $\Omega_{h}$, that is, define the global approximate solution $W(U^h(x,y))$ in $\Omega_{h}$. This can be done by carrying out the following steps inductively, similar to the construction in Section 3.
\par Assume that $W(U^h(x,y))$ is defined for $x<kh$. Then we define $W(U^h(kh+0,y))$ as follows: \par We define, for $n\le -1$, \begin{equation} W(U_0^k):=W(U^h(kh-,a_{k,n})) \qquad \text{for $\,\,\, 2nl+y_k\le y <2(n+1)l+y_k$}, \end{equation} and \begin{equation} W(U^h(kh+0,y)):=W(U_0^k). \end{equation}
\par First, we define $W(U_0^h(x,y))$ in $T_{k,0}$ by solving the following lateral Riemann problem: \begin{equation} \left \{ \begin{array}{ll}
W(U_k)_{x}+H(U_k)_y=0 \qquad \text{in $T_{k,0}$},\\[2mm]
W(U_k)|_{x=kh}=W(U_0^k),\\[2mm]
(u_k,v_k)\cdot \textbf{n}_k=0 \qquad \text{on $\Gamma_k$}.
\end{array} \right. \end{equation} We can obtain the above lateral Riemann solution $W(U_k)$ in $T_{k,0}$ and define \begin{equation} W(U_0^h)=W(U_k) \qquad \text{in $T_{k,0}$}. \end{equation}
\par Second, we solve the following Riemann problem in each diamond $T_{k,n}$ for $n\le -1$: \begin{equation} \left \{
\begin{array}{ll}
W(U_k)_{x}+H(U_k)_y=0 \qquad \text{in $T_{k,n}$},\\[2mm]
W(U_k)|_{x=kh}=W(U_0^k),
\end{array} \right. \end{equation} to obtain the Riemann solution $W(U_k)$ in $T_{k,n}$ and define \begin{equation} W(U_0^h)=W(U_k) \qquad \text{in $T_{k,n}, n\le -1$}. \end{equation}
\par Finally, we use the Glimm fractional-step operator to obtain the desired approximate solutions: \begin{equation} W(U^h(x,y))=W(U_0^h(x,y))+G(U_0^h(x,y))(x-kh)\qquad \text{for $kh\le x<(k+1)h$}. \end{equation}
\par In this way, we have constructed the approximate solution $W(U^h(x,y))$ globally, provided that we can obtain a uniform bound on the approximate solutions. To achieve this, we estimate the total variation of $W(U^h(x,y))$ on a class of space-like curves.
\iffalse \begin{definition} A k-mesh curve $J$ is defined to be an unbounded space-like curve lying in the strip $\{(k-1)h\le x\le(k+1)h\}$ and consisting of the segments of the form $P_{k,n-1}N(\theta_{k+1},n),P_{k,n-1}S(\theta_k,n),S(\theta_k,n)P_{k,n}$ and $N(\theta_{k+1},n)P_{k,n}$, \end{definition} where \begin{equation} N(\theta_{k+1},n)=\left\{
\begin{array}{ccc}
P_{k+1,n}&\mbox{if $\theta_{k+1}\le 0$},\\
P_{k+1,n-1}&\mbox{if $\theta_{k+1}> 0$},
\end{array}
\right. \quad S(\theta_{k},n)=\left\{
\begin{array}{ccc}
P_{k-1,n-1}&\mbox{if $\theta_{k}\le 0$},\\
P_{k+1,n}&\mbox{if $\theta_{k}> 0$}.
\end{array}
\right. \end{equation}
\par This definition means that we can connect the mesh points $P_{k,n}$ by two line segments to the two mesh points $P_{k-1,n-1}$ and $P_{k-1,n}$ if $\theta_k\le 0$, or we can connect the mesh points $P_{k,n}$ by two line segments to the two mesh points $P_{k-1,n}$ and $P_{k-1,n+1}$ if $\theta_k> 0$. Therefore, the value of $W(U^h)$ on the segment $N(\theta_{k+1},n)P_{k,n}$ and $P_{k,n-1}N(\theta_{k+1},n)$ is determined by the value of $W(U^h)$ on the segment $P_{k,n-1}P_{k,n}$ (see Fig. 9).
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(40,44)(-2,-10) \put(28,-4){$P_{k,n-1}$} \put(42,20){$N(\theta_{k+1},n)$} \put(-14,12){$S(\theta_k,n)$} \put(23,32){$P_{k,n}$} \put(25,30){\line(3,-2){17}} \put(25,30){\line(-4,-3){24}} \put(25,-4){\line(3,4){17}} \put(25,-4){\line(-3,2){24}} \put(-28,-12){Fig. 9. Interaction diamond $\Lambda_{k,n}$ and orientation of the segments} \end{picture} \end{center}
\par Clearly, for any $k>0$, each $k$-mesh curve $I$ divides the plane $\mathbb{R}^2$ into an $I^+$ part and $I^-$ part, where $I^-$ is the part containing the set $\{x<0\}$. Furthermore, the partial order of mesh curves are defined to be the same as in \cite{chen1}.
\par We denote by $J_k$ the unique mesh curve which connects the mesh points on $x=kh$ to the mesh points on $x=(k+1)h$. Note that $J_k$ crosses all the waves in the Riemann solutions of $W(U_0^h(x,y))$ in the strip $kh\le x<(k+1)h$. \fi
\par As before, for the mesh curves $J$ in $x>0$, we give the following definition: \begin{definition} \begin{align*}
& L_0(J)=\sum\{\omega(C_k):C_k \in \Omega_J\},\\
& L_j(J)=\sum\{|\alpha_j|:\text{$\alpha_j$ crosses $J$}\},\quad j=1,2,3,4,5,\\
& L(J)=K^*L_0(J)+L_1(J)+K^*\big(L_2(J)+L_3(J)+L_4(J)+L_5(J)\big),\\
& Q(J)=\sum\{|\alpha_i||\beta_j|:\text{$\alpha_i$ and $\beta_j$ both cross $J$ and are approaching}\},\\
& F(J)=L(J)+K Q(J), \end{align*} where $K>0$ is a constant to be determined later and $\Omega_J$ is the set of corner points $C_k$ with $k\ge 0$:
\begin{equation} \Omega_J=\{C_k \, :\, \,\, C_k \in J\cap \partial \Omega_{h},\mbox{$C_k=(kh,g(kh)),k\ge 0$}\}, \end{equation} and $K^*$ is a positive constant satisfying $K^*>\max_{2\le i\le 5} |K_{bi}|+1$. \end{definition}
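To fix ideas, the functional in the definition above can be computed mechanically from a list of waves. The following sketch is a schematic illustration under our own simplifying assumptions (waves modeled as `(family, strength)` pairs, an abstract `approaching` predicate); it is not the paper's construction.

```python
# Schematic illustration of the Glimm-type functional defined above.
# Waves are (family, strength) pairs and `approaching` is an abstract
# predicate; both are our own simplifications, not the paper's setup.

def linear_part(corner_strengths, waves, K_star):
    # L(J) = K* L_0(J) + L_1(J) + K* (L_2 + L_3 + L_4 + L_5)(J)
    L0 = sum(corner_strengths)
    L = {j: sum(abs(s) for fam, s in waves if fam == j) for j in range(1, 6)}
    return K_star * L0 + L[1] + K_star * (L[2] + L[3] + L[4] + L[5])

def quadratic_part(waves, approaching):
    # Q(J): sum of |alpha||beta| over approaching pairs crossing J
    Q = 0.0
    for i in range(len(waves)):
        for j in range(i + 1, len(waves)):
            if approaching(waves[i], waves[j]):
                Q += abs(waves[i][1]) * abs(waves[j][1])
    return Q

def glimm_functional(corner_strengths, waves, K, K_star, approaching):
    # F(J) = L(J) + K Q(J)
    return (linear_part(corner_strengths, waves, K_star)
            + K * quadratic_part(waves, approaching))
```

The point of the weights $K^*$ and $K$ is that interactions and boundary reflections decrease $F$, which is what the estimates below verify.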
\par Next, we estimate the functional $F$. To do this, let $I$ and $J$ be two $k$-mesh curves for some $k>0$ such that $J$ is an immediate successor to $I$, and let $\Lambda$ be the diamond between $I$ and $J$. Due to the location of $\Lambda$, two cases are to be considered:
\begin{enumerate} \item[\rm (i)] Case $\Lambda \subset \Omega_h$: If $\alpha$ and $\beta$ are the waves entering $\Lambda$, we define \begin{equation}
Q(\Lambda)=\sum|\alpha_i||\beta_j|, \end{equation} where the sum is taken over all the pairs for which the $i$-wave from $\alpha$ and $j$-wave from $\beta$ are approaching;
\item[\rm (ii)] Case $\Lambda \cap \partial \Omega_h \neq \emptyset$: Let $\Omega_J=\Omega_I \setminus \{C_k\}$ with $C_k=(kh,y_k)$ for some $k\ge 0$, let $I=I_0\cup I'$ and $J=I_0\cup J'$ such that $\partial \Lambda=I'\cup J'$, and let $\beta_1$ and $\alpha_i$ be the $1$-wave and $i$-wave respectively crossing $I'$ with $\alpha_i$ lying below $\beta_1$ on $I$, where $i=2,3,4,5$. In addition, by the construction of approximate solutions, let $\delta_1$ be the weak $1$-wave crossing $J'$ (see Fig. 9 below). \end{enumerate}
\par Define \begin{eqnarray} E_{h,\theta}(\Lambda)=\left\{ \begin{array}{ll}
|\omega_k|+\sum_{i=2}^{5}|\alpha_i| &\quad \text{if $\Lambda \cap \partial \Omega_h \neq \emptyset$},\\[2mm] Q(\Lambda) & \quad \text{if $\Lambda \subset \Omega_h$}. \end{array} \right. \end{eqnarray}
\subsection{\textbf{Estimates of the non-reacting step involving the boundary}}
\par By choosing a suitable constant $K$, we now prove that the Glimm-type functional $F$ is non-increasing in the non-reacting step.
\begin{theorem} Suppose that the wedge function $g(x)$ satisfies {\rm (\ref{612})}, and $I$ and $J$ are two mesh curves such that $J$ is an immediate successor of $I$. Then there exist constants $\varepsilon>0$ and $K>0$ such that, if $F(I)\le \varepsilon$, then \begin{equation} F(J)\le F(I)-\frac{1}{4}E_{h,\theta}(\Lambda). \end{equation} \end{theorem}
\begin{pf} We divide the proof into two cases, depending on the location of the diamond.
\par \textbf{Case 1} (interior weak-weak interaction): $\Lambda$ lies in the interior of $\Omega_h$. Denote by $Q(\Lambda)=\Delta(\alpha,\beta)$ the quantity defined in Lemma {\rm 2.1}. Then, for some constant $M>0$, \begin{equation} L(J)-L(I)\le (1+4K^*)MQ(\Lambda). \end{equation} Since $L(I_0)<\varepsilon$ from $F(I)<\varepsilon$, we have \begin{equation} \begin{split} Q(J)-Q(I)&=\big(Q(I_0)+\sum_{i=1}^{5}Q(\gamma_i,I_0)\big)-\big(Q(I_0)+Q(\Lambda)+\sum_{i=1}^{5}Q(\alpha_i,I_0)+ \sum_{i=1}^{5}Q(\beta_i,I_0)\big) \\&\le Q(MQ(\Lambda),I_0)-Q(\Lambda)\\[1.5mm] &\le \big(ML(I_0)-1\big)Q(\Lambda)\\ &\le -\frac{1}{2}Q(\Lambda). \end{split} \end{equation} Hence, by choosing a suitably large $K$, we obtain \begin{equation} F(J)-F(I)\le \big((1+4K^*)M-\frac{K}{2}\big)Q(\Lambda)\le -\frac{1}{4}Q(\Lambda). \end{equation}
\par \textbf{Case 2} (near the boundary): $\Lambda$ touches the approximate boundary $\partial \Omega_h$. Then $\Omega_J=\Omega_I \setminus \{C_k\}$ for some $k$.
\par Let $\delta_1$ be the weak $1$-wave going out of $\Lambda$ through $J'$, and let $\beta_1,\alpha_2,\alpha_3,\alpha_4$, and $\alpha_5$ be the weak waves entering $\Lambda$ through $I'$, as shown in Fig. 9. Then
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(50,76)(-2,0) \linethickness{1pt} \multiput(0,0)(0,3){21}{\line(0,2){2}} \multiput(24,2)(0,3){22}{\line(0,2){2}} \multiput(48,-1)(0,3){20}{\line(0,2){2}} \put(0,18){\line(1,1){20}} \put(0,18){\line(5,3){21}} \put(24,26){\line(4,3){24}} \put(24,26){\line(3,-4){12}} \put(0,62){\line(5,-2){32}} \put(23,53){\line(5,-4){20}} \put(24,52){\line(4,1){24}} \put(0,62){\line(1,-1){18}} \put(24,66){\line(-1,-1){24}} \put(24,65){\line(6,-5){24}} \put(0,42){\line(3,-2){24}} \put(3,61){\line(5,2){4}} \put(7,59){\line(5,2){4}} \put(12,57){\line(5,2){4}} \put(15,56){\line(5,2){4}} \put(19,54){\line(5,2){4}} \put(24,52){\line(1,2){2}} \put(28,53){\line(1,2){2}} \put(32,54){\line(1,2){2}} \put(36,55){\line(1,2){2}} \put(40,56){\line(1,2){2}} \put(44,57){\line(1,2){2}} \put(19,49){$C_k$} \put(37,10){$I_0$} \put(2,35){$I'$} \put(36,32){$J'$} \put(29,51){$\omega_k$} \put(36,44){$\delta_1$} \put(12,43){$\beta_1$} \put(12,36){$\alpha_5$} \put(12,23){$\alpha_{2(3,4)}$} \put(-2,-4){Fig. 9. Near the boundary.} \end{picture} \end{center}
\begin{equation*}
L_0(J)-L_0(I)=-|\omega_k|, \end{equation*} \begin{equation*}
L_i(J)-L_i(I)=\sum_{\text{$\gamma_i$ cross $I_0$}}|\gamma_i|
-\big(|\alpha_i|+\sum_{\text{$\gamma_i$ cross $I_0$}}|\gamma_i|\big)
=-|\alpha_i|, \qquad \text{$i=2,3,4,5$}, \end{equation*} \begin{equation*} \begin{split}
L_1(J)-L_1(I)&=\Big(|\delta_1|+\sum_{\text{$\gamma_1$ cross $I_0$}}|\gamma_1|\Big)
-\Big(|\beta_1|+\sum_{\text{$\gamma_1$ cross $I_0$}}|\gamma_1|\Big)
\\& =|\delta_1|-|\beta_1|\\
&\le \sum_{i=2}^{5}|K_{bi}||\alpha_i|+|K_{b0}||\omega_k|, \end{split} \end{equation*} where the last step is from Lemma 5.1. Thus, \begin{equation} \begin{split}
L(J)-L(I)&\le (|K_{b0}|-K^*)|\omega_k|+\sum_{i=2}^{5}(|K_{bi}|-K^*)|\alpha_i|
\\& \le-\big(|\omega_k|+\sum_{i=2}^{5}|\alpha_i|\big), \end{split} \end{equation} since $K^*>|K_{bi}|+1$ for $i=0,2,3,4,5$. Moreover, we have \begin{equation} \begin{split} Q(J)-Q(I)&=\big(Q(I_0)+Q(\delta_1,I_0)\big)
-\Big(Q(I_0)+Q(\beta_1,I_0)+\sum_{i=2}^{5}Q(\alpha_i,I_0)+|\beta_1|\sum_{i=2}^{5}|\alpha_i|\Big)
\\& \le \Big(\sum_{i=2}^{5}|K_{bi}||\alpha_i|+|K_{b0}||\omega_k|\Big)L(I_0). \end{split} \end{equation} Then we obtain \begin{equation} \begin{split} F(J)-F(I)&=\big(L(J)-L(I)\big)+K\big(Q(J)-Q(I)\big)
\\& \le -\Big(|\omega_k|+\sum_{i=2}^{5}|\alpha_i|\Big)
+K\Big(\sum_{i=2}^{5}|K_{bi}||\alpha_i|+|K_{b0}||\omega_k|\Big)L(I_0)
\\& \le -\frac{1}{4}\Big(|\omega_k|+\sum_{i=2}^{5}|\alpha_i|\Big), \end{split} \end{equation} provided that $\varepsilon$ is chosen sufficiently small. This completes the proof. \end{pf}
\subsection{\textbf{Estimates of the reacting step involving the boundary}}
\par We first consider the change of the wave strength before and after reaction near the boundary. We denote by $(\tilde{U}_b, \tilde{U}_*)$ and $\tilde{\beta}_1$ the two states and the wave strength before reaction, and by $(U_b, U_*)$ and $\beta_1$ those after reaction (see Fig. 10). According to the boundary condition, we have \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(100,78)(-2,-6) \linethickness{1pt} \multiput(0,0)(0,3){13}{\line(0,2){2}} \multiput(25,0)(0,3){15}{\line(0,2){2}} \multiput(73,-1)(0,3){13}{\line(0,2){2}} \multiput(98,-1)(0,3){15}{\line(0,2){2}} \put(0,38){\line(4,1){25}} \put(73,37){\line(4,1){25}} \put(73,37){\line(4,-1){20}} \put(0,38){\line(4,-1){20}} \put(21,33){$\tilde{\beta}_1$} \put(14,37){$\tilde{U}_*$} \put(94,33){$\beta_1$} \put(87,37){$U_*$} \put(10,26){$\tilde{U}_b$} \put(83,26){$U_b$} \put(15,42){\vector(-1,3){4}} \put(88,41){\vector(-1,3){4}} \put(15,50){$n_{k}$} \put(88,50){$n_{k}$} \put(4,-11){Fig. 10. Change of wave strength near the boundary} \put(38,28){after reaction} \put(29,26){\vector(1,0){38}} \put(-4,-4){$x=kh$} \put(69,-4){$x=kh$} \put(21,-4){$(k+1)h$} \put(94,-4){$(k+1)h$} \end{picture} \end{center}
\noindent where \begin{equation} W(U_b(x,y))=W(\tilde{U}_b(x,y))+G(\tilde{U}_b(x,y))(x-kh),\qquad \text{$kh\le x<(k+1)h$}, \end{equation} and \begin{equation} W(U_*(x,y))=W(\tilde{U}_*(x,y))+G(\tilde{U}_*(x,y))(x-kh),\qquad \text{$kh\le x<(k+1)h$}. \end{equation}
From Lemma \ref{q1}, $U_b-\tilde{U}_b=\|Z_0\|_{\infty}O(h)e^{-\Phi_1 kh}$
and $U_*-\tilde{U}_*=\|Z_0\|_{\infty}O(h)e^{-\Phi_1 kh}$. Therefore, we obtain \begin{equation}
\beta_1-\tilde{\beta}_1=\|Z_0\|_{\infty}O(h)e^{-\Phi_1 kh}.\label{key} \end{equation}
As for the interior part, performing the same procedure as in the case of the Cauchy problem yields a similar estimate: \begin{equation}
L(J_k)- L(\tilde{J_k})\le Ch\|w_{5,0}\|_{\infty}e^{-\Phi_1 kh}L(\tilde{J_k}). \end{equation} Combining these two parts together, we have the following global estimate: \begin{equation}
L(J_k)- L(\tilde{J_k})\le Ch\|w_{5,0}\|_{\infty}e^{-\Phi_1 kh}\big(L(\tilde{J_k})+1\big). \end{equation} Therefore, we can repeat the same procedure as before to establish
\begin{theorem} If $\mathrm{TV}\big(W(U_0)\big)+\mathrm{TV}(g')$ is sufficiently small, then the fractional-step Glimm scheme generates approximate solutions $U^h(x,y)$ that exist in the whole domain $\Omega$
and have uniformly bounded total variation in the $y$--direction. Moreover, there is a null set $N\subset \prod_{k=0}^{\infty}(-1,1)$ such that, for each $\theta \in \prod_{k=0}^{\infty}(-1,1)\setminus N$, there exists a sequence $h_i\to 0$ such that \begin{equation} U_{\theta}=\lim _{h_i\to 0}U_{h_i,\theta} \end{equation} is a weak solution to problem \eqref{dd3}--\eqref{dd4} for system \eqref{d1}--\eqref{d2}, where the limit is taken in $L_{loc}^{1}(\Omega)$. Furthermore, $U_{\theta}$ has uniformly bounded total variation in the $y$--direction. \end{theorem}
The proof of the convergence part of Theorem 5.2 is given in Section 6.
\section{\textbf{Convergence to Entropy Solutions}} In this section we show that the limit function of the approximate solutions is an entropy solution to the Cauchy problem \eqref{d3}--\eqref{d3-a} and the initial-boundary value problem \eqref{dd3}--\eqref{dd4} for system \eqref{d1}--\eqref{d2}.
\par Let $d\theta_k$ denote the uniform probability measure on $(-1,1)$, and let $d\theta$ denote the induced product probability measure for the random sample $\{\theta_k\}_{k=1}^{\infty}$ in the Cartesian product space $\mathscr{A}=\prod_{k=1}^{\infty}(-1,1)$.
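The random choice just described can be sketched concretely. The following illustration (the seed and sample length are arbitrary choices of ours) draws each $\theta_k$ independently and uniformly from $(-1,1)$, producing a sample point of the product space $\mathscr{A}$ with the product measure $d\theta$.

```python
import random

# Illustrative sketch of the random choice driving the Glimm scheme:
# each theta_k is drawn independently and uniformly from (-1, 1), so
# {theta_k} is a sample from A = prod_k (-1, 1) equipped with the
# product of the uniform measures d(theta_k).  Seed and length are
# arbitrary choices for this illustration.

def sample_theta(n, seed=0):
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]
```

In the scheme, $\theta_k$ selects the sampling point within the $k$-th strip at which the piecewise-constant data are re-evaluated before the next Riemann step.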
\begin{theorem} Suppose that
\begin{enumerate} \item[\rm (i)] The sequence $U^h(x,y)$ is constructed by using the Glimm fractional-step scheme with the random sample $\{\theta_k\}_{k=0}^{\infty}$ chosen from $\mathscr{A}$.
\item[\rm (ii)] There exists a null set $\mathscr{N}\subset \mathscr{A}$ such that, for $\{\theta_k\}\subset \mathscr{A}\setminus\mathscr{N}$, the sequence $U^h(x,y)$ is uniformly bounded in $L^{\infty}$ and converges pointwise a.e. to the function $U(x,y)$. \end{enumerate} Then the function $U(x,y)$ is an entropy solution of the corresponding problem \eqref{d3}--\eqref{d3-a}, or problem \eqref{dd3}--\eqref{dd4}, for system \eqref{d1}--\eqref{d2}. That is, for any convex entropy pair $(\eta,q)$ with respect to $W(U)$, the following inequality \begin{equation} \eta(W(U))_x+q(W(U))_y\le \nabla_W \eta(W(U))G(U) \label{u} \end{equation} holds in the sense of distributions in $\mathbb{R}^2$ for problem \eqref{d3}--\eqref{d3-a} and in $\Omega$ including the boundary for problem \eqref{dd3}--\eqref{dd4}, which means that \begin{eqnarray} \iint\limits_{\Omega}\big(\eta(W(U))\phi_x+q(W(U))\phi_y+\nabla_W \eta(W(U))G(U)\phi\big)dxdy \\+\int_{-\infty}^{\infty}\eta(W(U_0(y)))\phi(0,y)dy\ge 0, \end{eqnarray} where $\phi(x,y)\ge 0$: for the Cauchy problem \eqref{d3} with $\Omega=\mathbb{R}^2$ and $\phi \in C_0^{\infty}(\mathbb{R}^2)$; and for the initial-boundary value problem \eqref{dd3}--\eqref{dd4}, either $\phi \in C_0^{\infty}(\Omega)$, or
$\phi \in C_0^{\infty}(\mathbb{R}^2)$ and $(\eta, q)=\alpha(W(U))(u,v)$ for
any smooth function $\alpha(W)$ of $W$. \end{theorem}
\begin{pf} We focus our proof on the initial-boundary value problem \eqref{dd3}--\eqref{dd4}, since the proof for the Cauchy problem \eqref{d3}--\eqref{d3-a} is simpler.
We define \begin{equation} \begin{split} L(\theta,h,\phi) =&\iint\limits_{\Omega_h}\big(\eta(W(U^h))\phi_x+q(W(U^h))\phi_y+\nabla_W \eta(W(U^h))G(U^h)\phi\big)dxdy\\ & +\int_{-\infty}^{0}\eta(W(U_0(y)))\phi(0,y)dy. \end{split}\label{54} \end{equation} We only need to prove that $\lim\limits_{h\to 0} L(\theta,h,\phi)\ge 0$ for $\{\theta_k\}\subset \mathscr{A}-\mathscr{N}$.
\par Since $U_0^h(x,y)$ is an entropy solution of the conservation laws $W(U)_x+H(U)_y=0$ in the domain $\Omega_{h,k}$, we have \begin{equation} \begin{split} &\iint\limits_{\Omega_{h,k}}\big(\eta(W(U_0^h))\phi_x+q(W(U_0^h))\phi_y\big)dxdy +\int_{-\infty}^{y_k}\eta(W(U_0^h(kh+0,y)))\phi(kh,y)\, dy\\ &-\int_{-\infty}^{y_{k+1}}\eta(W(U_0^h((k+1)h-,y)))\phi((k+1)h-,y)\,dy\ge 0, \end{split} \end{equation} that is, \begin{equation} \begin{split} &\iint\limits_{\Omega_{h,k}}\big(\eta(W(U_0^h))\phi_x+q(W(U_0^h))\phi_y\big)dxdy +\int_{-\infty}^{0}\eta(W(U_0^h(kh+0,y+y_k)))\phi(kh,y+y_k)\,dy\\ &-\int_{-\infty}^{0}\eta(W(U_0^h((k+1)h-,y+y_{k+1})))\phi((k+1)h-,y+y_{k+1})\,dy\ge 0. \end{split}\label{55} \end{equation} Here we have used the fact that $(u_0^h,v_0^h)\cdot n_k=0$ on the boundary, and the assumptions on $(\eta,q)$ and $\phi$. Since $W(U^h(x,y))=W(U_0^h(x,y))+G(U_0^h(x,y))(x-kh)$, we have \begin{equation} \begin{split} &\eta(W(U^h(x,y)))-\eta(W(U_0^h(x,y)))\\ &=\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))(x-kh)+\varepsilon(x-kh;x,y)(x-kh) \end{split} \end{equation} for some function $\varepsilon(s;x,y)$, which converges uniformly to $0$ as $s\to 0$. Multiplying the above equation by $\phi_x$ on both sides and integrating over $\Omega_{h,k}$, we have \begin{equation} \begin{split} &\iint\limits_{\Omega_{h,k}}\big(\eta(W(U^h))-\eta(W(U_0^h))\big)\phi_xdxdy\\ &=\iint\limits_{\Omega_{h,k}}\varepsilon(x-kh;x,y)(x-kh)\phi_xdxdy
+\iint\limits_{\Omega_{h,k}}\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))(x-kh)\phi_xdxdy\\ &=\iint\limits_{\Omega_{h,k}}\varepsilon(x-kh;x,y)(x-kh)\phi_xdxdy
-\iint\limits_{\Omega_{h,k}}\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))\phi\, dxdy\\ &\quad -\iint\limits_{\Omega_{h,k}}\frac{\partial}{\partial x}\big(\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))\big)(x-kh)\phi\, dxdy\\ &\quad +\int\limits_{\Gamma_k}\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))(x-kh)\phi n_k^1ds\\ &\quad +h\int_{-\infty}^{0}\nabla_W\eta(W(U_0^h((k+1)h-,y+y_{k+1})))G(U_0^h((k+1)h-,y+y_{k+1}))\\ &\qquad\qquad\quad\times \phi((k+1)h-,y+y_{k+1})\,dy. \end{split} \end{equation} Therefore, we use equation (\ref{55}) to obtain \begin{equation} \begin{split} &\iint\limits_{\Omega_{h,k}}\eta(W(U^h))\phi_x\,dxdy\\ &\ge -\iint\limits_{\Omega_{h,k}}\big(q(W(U_0^h))\phi_y+\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))\phi\big) dxdy\\ &\quad +\iint\limits_{\Omega_{h,k}}\varepsilon(x-kh;x,y)(x-kh)\phi_x\, dxdy\\ &\quad +\int_{-\infty}^{0}\big(\eta(W(U_0^h((k+1)h-,y+y_{k+1})))\phi((k+1)h,y+y_{k+1})\\ &\qquad\qquad\,\,\,\,\, -\eta(W(U_0^h(kh+0,y+y_k)))\phi(kh,y+y_k)\big)dy\\ &\quad -\iint\limits_{\Omega_{h,k}}\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))\phi\, dxdy\\ &\quad -\iint\limits_{\Omega_{h,k}}\frac{\partial}{\partial x}\big(\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))\big)(x-kh)\phi\, dxdy\\ &\quad +\int\limits_{\Gamma_k}\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))(x-kh)\phi n_k^1\, ds\\ &\quad +h\int_{-\infty}^{0}\nabla_W\eta(W(U_0^h((k+1)h-,y+y_{k+1})))G(U_0^h((k+1)h-,y+y_{k+1}))\\ &\qquad\qquad\quad\times \phi((k+1)h-,y+y_{k+1})\, dy. \end{split} \end{equation}
Summing over $k$, we have \begin{equation} L(\theta,h,\phi)\ge \mathscr{A}(\theta,h,\phi)+\sum_{k=0}^{\infty}\mathscr{B}_k(\theta,h,\phi) +\sum_{k=0}^{\infty}\mathscr{C}_k(\theta,h,\phi) +\sum_{k=0}^{\infty}\mathscr{D}_k(\theta,h,\phi),
\end{equation} where \begin{equation*} \begin{split} \mathscr{A}(\theta,h,\phi)=&\sum_{k=0}^{\infty}\mathscr{A}_k(\theta,h,\phi),\\ \mathscr{A}_0(\theta,h,\phi)=&\int_{-\infty}^{0}\big(\eta(W(U_0(y)))-\eta(W(U_0^h(0,y)))\big)\phi(0,y)dy,\\ \mathscr{A}_k(\theta,h,\phi)=&\int_{-\infty}^{0} \big(\eta(W(U_0^h(kh-,y+y_k)))-\eta(W(U_0^h(kh+0,y+y_k)))\big)\phi(kh,y+y_k)dy\\ &\, + h\int_{-\infty}^{0}\nabla_W\eta(W(U_0^h((k+1)h-,y+y_{k+1})))\\ &\, \qquad\qquad \times G(U_0^h((k+1)h-,y+y_{k+1}))\phi((k+1)h-,y+y_{k+1})dy,\qquad k\ge 1,\\
\mathscr{B}_k(\theta,h,\phi)=&\iint\limits_{\Omega_{h,k}}\big(q(W(U^h))-q(W(U_0^h))\big)\phi_y dxdy\\ &\, +\iint\limits_{\Omega_{h,k}}\big(\nabla_W \eta(W(U^h))G(U^h)-\nabla_W \eta(W(U_0^h))G(U_0^h)\big)\phi dxdy \\ &\, +\iint\limits_{\Omega_{h,k}}\varepsilon(x-kh;x,y)(x-kh)\phi_xdxdy,\\
\mathscr{C}_k(\theta,h,\phi)=&\iint\limits_{\Omega_{h,k}}\frac{\partial}{\partial x} \big(\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))\big)(x-kh)\phi dxdy,\\
\mathscr{D}_k(\theta,h,\phi)=&\int\limits_{\Gamma_k}\nabla_W \eta(W(U_0^h(x,y)))G(U_0^h(x,y))(x-kh)\phi n_k^1ds. \end{split} \end{equation*} The proof that each component converges to zero as $h\to 0$ is similar to that in \cite{chen1}, and we omit it here. This completes the proof. \end{pf}
\section{\textbf{Asymptotic Behavior involving the Boundary}}
\par Let $\theta \in \prod_{k=0}^{\infty}(-1,1)\setminus \mathscr{N}$ be equidistributed. To determine the asymptotic behavior of the solution $U(x,y)$, we need further estimates on $U_{h,\theta}$.
\begin{lemma} There exists a constant $M_1>0$, independent of $U_{h,\theta}, \theta$ and $h$, such that \begin{equation} \sum_{\Lambda}E_{h,\theta}(\Lambda)\le M_1, \end{equation} where the summation is over all the diamonds. \end{lemma}
\begin{pf} \par First, from the conclusion of the non-reacting step, {\it i.e.} Theorem 5.1, we know \begin{equation} F(J)-F(I)\le -\frac{1}{4}E_{h,\theta}(\Lambda), \end{equation} where $J$ is an immediate successor of $I$. Then we conclude \begin{equation} F(\tilde{J}_k)-F(J_{k-1})\le -\frac{1}{4}\sum_{k-1}^{k+1}E_{h,\theta}(\Lambda), \end{equation} where the summation is over all the diamonds between $x=(k-1)h$ and $x=(k+1)h$.
\par Then we know from the reacting step that \begin{equation}
F(J_k)-F(\tilde{J_k})\le Ch\|w_{5,0}\|_{\infty}e^{-\Phi_1 kh}\big(F(\tilde{J_k})+1\big)^2. \end{equation} Combining these two steps and summing over $k$ from $1$ to $\infty$, we obtain \begin{equation*} \begin{split} \sum_{k=1}^{\infty}\sum_{k-1}^{k+1}E_{h,\theta}(\Lambda)
& \le CF(J_0)+\sum_{k=1}^{\infty}Ch\|w_{5,0}\|_{\infty}e^{-\Phi_1 kh}(F(\tilde{J_k})+1)^2\\
& \le C\big(F(J_0)+\|w_{5,0}\|_{\infty}\big)<\infty. \end{split} \end{equation*} The proof is completed. \end{pf}
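The uniform bound above rests on the summability of the reacting-step errors: the factor $h\,e^{-\Phi_1 kh}$ forms a geometric-type series bounded independently of $h$, since $\sum_{k\ge 1} h\,e^{-\Phi_1 kh} = h/(e^{\Phi_1 h}-1) \le 1/\Phi_1$. The following sketch (illustrative only; the truncation level is our own choice) checks this closed-form bound numerically.

```python
import math

# Numerical check (illustrative only) of the uniform-in-h bound behind
# the summation above:
#   sum_{k>=1} h * exp(-Phi1*k*h) = h / (exp(Phi1*h) - 1) <= 1/Phi1,
# so the reacting-step contributions stay summable as h -> 0.

def reacting_error_sum(phi1, h, kmax=100000):
    # truncated series; the tail beyond kmax is negligible here
    return sum(h * math.exp(-phi1 * k * h) for k in range(1, kmax + 1))
```

Because $e^{x}-1\ge x$, the closed form $h/(e^{\Phi_1 h}-1)$ never exceeds $1/\Phi_1$, which is why the constant in Lemma 7.1 is independent of $h$.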
\par Moreover, let $\Gamma_g=\cup_{k=0}^{\infty}\bar{\Lambda}_{k,0}$, where $\Lambda_{k,0}$ is the diamond centered at $C_k$, and let $L_{h,\theta}(\Gamma_g)$ be the summation of the strength of waves leaving $\Gamma_g$. Then we have \begin{lemma} There exists a constant $M_2$ independent of $U_{h,\theta}, h$, and $\theta$ such that \begin{equation} L_{h,\theta}(\Gamma_g)\le M_2 \sum_{\Lambda}E_{h,\theta}(\Lambda). \end{equation} \end{lemma}
This can be obtained by employing Lemmas 5.1--5.2 and (\ref{key}) and by summing them over $\Gamma_g$.
\par For $i=2, 3, 4, 5$, let $L_i(a-)$ be the amount of all $i$-waves in $U_{\theta}$ crossing the line $x=a$ for any $a>0$. Also, let $\tilde{L}_{i}^{h,\theta}(a)$ and $L_{i}^{h,\theta}(a)$ denote the amount of $i$-waves before reaction and after reaction, respectively, in $U_{h,\theta}$ crossing the line $x=a$ for any $a>0$.
\begin{lemma} $L_i(x-)\to 0$ as $x\to \infty$, for $i=2, 3, 4 ,5$. \end{lemma}
\begin{pf} In fact, for $kh\le x< (k+1)h$, \begin{equation}
\tilde{L}_{i}^{h,\theta}(x)-L_{i}^{h,\theta}(x)\le L(J_k)-L(\tilde J_k)
\le Ch\|w_{5,0}\|_{\infty}e^{-\Phi_1 kh}\big(L(\tilde{J_k})+1\big). \end{equation} Then, by Lemmas 7.1--7.2, we can perform the same procedure as in \cite{zhang2} and conclude this result. \end{pf}
\par Next, we study the asymptotic behavior of the trace of $U$ on the boundary. To this end, from Lemmas 7.1--7.2, we can first deduce
\begin{lemma} Let \begin{equation} B_{h,\theta}(x)=U_{h,\theta}(x, g_h(x)). \end{equation} Then there exists a constant $M>0$ depending only on the system such that \begin{equation} \mathrm{TV}\{B_{h,\theta}; [0,\infty)\}\le M. \end{equation} \end{lemma}
\par Then, by Lemma 7.4, we can choose a subsequence $\{h_{i_l}\}$ of $\{h_i\}$ so that \begin{equation} B_{h_{i_l},\theta} \to B_{\theta}\label{ee} \end{equation} in $L_{loc}^1([0,\infty))$ as $h_{i_l} \to 0$ for some $B_{\theta}\in L^{\infty}$. From the construction of approximate solutions, we have
\begin{lemma} Let $B_{\theta}$ be given by \eqref{ee}. Then \begin{equation*} B_{\theta}\in BV([0,\infty)) \end{equation*} and \begin{equation*} B_{\theta}(x-)\cdot (-g'(x-),1,0,0,0)=0. \end{equation*} \end{lemma}
\begin{pf} Since \begin{equation} \begin{split} &B_{h_{i_l},\theta}(x-)\cdot (-g'_{h_{i_l}}(x-),1,0,0,0)\\ &=(B_{h_{i_l},\theta}(x-)-\tilde{B}_{h_{i_l},\theta}(x-))\cdot (-g'_{h_{i_l}}(x-),1,0,0,0) +\tilde{B}_{h_{i_l},\theta}(x-)\cdot (-g'_{h_{i_l}}(x-),1,0,0,0), \end{split} \end{equation} the first term on the right-hand side tends to $0$ as $h_{i_l} \to 0$, while the second term equals $0$. Then we conclude the result. \end{pf}
\par Moreover, we can determine the asymptotic behavior of the traces of $U_{\theta}$ on $\partial \Omega$ as follows.
\begin{lemma} The following holds: \begin{equation}
\sup_{\hat \lambda x\le y \le g(x)}|U_{\theta}(x-,y)-B_{\theta}(x-)|\to 0 \qquad \text{as $x\to \infty$} \end{equation} for any $\hat \lambda \in(\sup \lambda_1, \inf g')$. \end{lemma}
\begin{pf} Notice that
\begin{eqnarray*}
&&\sup_{\hat \lambda x\le y \le g(x)}|U_{\theta}(x-,y)-B_{\theta}(x-)|\\ &&\le \sup_{\hat \lambda x
\le y \le g(x)}|U_{\theta}(x-,y)-\tilde{U}_{\theta}(x-,y)|
+\sup_{\hat \lambda x\le y \le g(x)}|\tilde{U}_{\theta}(x-,y)-B_{\theta}(x-)|. \end{eqnarray*}
By Lemma 7.3, the first term on the right-hand side tends to zero. In the same way as in \cite{zhang2}, the second term also tends to zero. The proof is completed. \end{pf}
\par From Lemmas 7.3 and 7.6, it follows that
\begin{lemma} Let \begin{equation} B_{\theta}(\infty)=\lim_{x\to \infty}B_{\theta}(x-) \end{equation} and let \begin{equation} g'(\infty)=\lim_{x\to \infty}g'_{+}(x). \end{equation} Then \begin{equation}
\lim _{x\to \infty}\sup_{\hat \lambda x\le y \le g(x)}|\lambda_1(U_{\theta}(x-,y))-\lambda_1(B_{\theta}(x-))|= 0, \end{equation} and \begin{equation*} B_{\theta}(\infty)\cdot (-g'(\infty),1)=0. \end{equation*} \end{lemma}
\par Repeating the argument in \cite{liu2} and using Lemmas 7.3 and 7.7, we can prove
\begin{lemma} \par Let $U_{\infty}=\lim_{y\to -\infty}U_0(y)$ for the initial data $U_0(y)$ at $x=0$. \begin{enumerate} \item[\rm (i)] If $\lambda_1(B_{\theta}(\infty))>\lambda_1(U_{\infty})$, then \begin{equation} B_{\theta}(\infty)\in R_1^+(U_{\infty}). \end{equation}
\item[\rm (ii)] If $\lambda_1(B_{\theta}(\infty))\le \lambda_1(U_{\infty})$, then \begin{equation} B_{\theta}(\infty)\in S_1^-(U_{\infty}). \end{equation} \end{enumerate} Therefore, the equation \begin{equation} \Phi(0,0,0,0,\alpha_{\infty}; U_{\infty})=B_{\theta}(\infty) \end{equation} has a unique solution $\alpha_{\infty}$.\label{ff} \end{lemma}
\par Considering the geometry of the boundary and proceeding in the same way as in \cite{zhang2}, we obtain \begin{lemma}
Suppose that $|g'(\infty)|$ is small. Then: \begin{enumerate} \item[\rm (i)] If $g'(\infty)<0$, then $\lambda_1(B_{\theta}(\infty))>\lambda_1(U_{\infty})$; \item[\rm (ii)] If $g'(\infty)=0$, then $\lambda_1(B_{\theta}(\infty))=\lambda_1(U_{\infty})$; \item[\rm (iii)] If $g'(\infty)>0$, then $\lambda_1(B_{\theta}(\infty))<\lambda_1(U_{\infty})$. \end{enumerate} \end{lemma}
\par By carrying out the same arguments as in \cite{zhang2} and employing the above lemmas, we finally have the asymptotic behavior of entropy solutions.
\begin{theorem} Suppose that $\mathrm{TV}(U_0)+\mathrm{TV}(g')$ is sufficiently small. \begin{enumerate} \item[\rm (i)] If $g'(\infty)<0$, then there exists a $1$-shock that approaches the shock wave with $(\alpha_{\infty},0,0,0,0)$ both in strength and speed as $x\to \infty$; moreover, the total variation of $U_{\theta}$ outside this shock approaches zero as $x\to \infty$.
\item[\rm (ii)] If $g'(\infty)=0$, then $\sup_{y<g(x)}|U_{\theta}(x,y)-U_{\infty}| \to 0$ as $x\to \infty$. \item[\rm (iii)] If $g'(\infty)>0$, then the amount of shocks approaches zero as $x\to \infty$ and $U(x,y)$ approaches the rarefaction wave with $(\alpha_{\infty},0,0,0,0)$, where $(\alpha_{\infty},0,0,0,0)$ is given in Lemma {\rm \ref{ff}}. \end{enumerate} \end{theorem}
\section{\textbf{Supersonic Reacting Euler Flow past Lipschitz Wedge with Large Angle}}
Now we consider the general case in which the wedge angle is arbitrarily large but less than the sonic angle. We establish a theory of global existence and asymptotic behavior of entropy solutions for the initial-boundary value problem \eqref{dd3}--\eqref{dd4} for system \eqref{d1}--\eqref{d2}, for which $v_0(-\infty)$ is not zero in general.
\subsection{\textbf{Initial-boundary value problem involving a strong shock}} For the wedge with large vertex angle, as in \cite{chen2}, we choose a suitable coordinate system (by rotation when necessary) such that the wedge has the lower boundary $\{y=g(x),x\ge 0\}$ with \begin{equation} g(0)=g'(0)=0, \qquad g\in C([0,\infty)), \qquad g'\in \mathrm{BV}. \end{equation}
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(120,70)(-2,-10) \linethickness{1pt} \put(30,0){\vector(0,1){50}} \put(0,30){\vector(1,0){120}} \put(3,3){\vector(2,1){20}} \put(3,7){\vector(2,1){20}} \put(3,11){\vector(2,1){20}} \put(3,-1){\vector(2,1){20}} \qbezier(30,30)(40,31)(50,33) \qbezier(50,33)(90,25)(110,32) \qbezier(30,30)(50,15)(90,2) \put(90,4){$Shock$} \put(24,26){$O$} \put(116,26){$x$} \put(27,50){$y$} \put(-6,20){$(u_0(y), v_0(y))$} \put(34,30){\line(1,2){2}} \put(38,31){\line(1,2){2}} \put(42,31){\line(1,2){2}} \put(46,32){\line(1,2){2}} \put(50,32){\line(1,2){2}} \put(54,32){\line(1,2){2}} \put(58,31){\line(1,2){2}} \put(62,31){\line(1,2){2}} \put(66,31){\line(1,2){2}} \put(70,30){\line(1,2){2}} \put(74,30){\line(1,2){2}} \put(78,29){\line(1,2){2}} \put(82,29){\line(1,2){2}} \put(86,29){\line(1,2){2}} \put(90,29){\line(1,2){2}} \put(94,29){\line(1,2){2}} \put(98,30){\line(1,2){2}} \put(100,20){\vector(-2,1){16}} \put(100,18){$y=g(x)$} \put(52,10){$\Omega$} \put(18,-8){Fig. 11. Initial-boundary problem with large vertex angle} \end{picture} \end{center}
For the non-reaction problem with straight boundary $\{x\ge 0, y\equiv 0\}$ and uniform incoming flow $U_0(-\infty)$, if we assume that \begin{equation} 0<\arctan\Big(\frac{v_0(-\infty)}{u_0(-\infty)}\Big)<\omega_{crit}, \end{equation} then there exists a supersonic state $U_{+}=(u_{+},0,p_{+},\rho_{+},Z_{+})\in S_1(U_0(-\infty))$ satisfying the entropy condition $u_{+}<u_0(-\infty)$ such that the corresponding non-reaction problem \eqref{5}--\eqref{6} has a shock solution with a leading shock front issuing from the vertex (see Fig. 12).
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(90,60)(-2,-10) \linethickness{1pt} \put(30,0){\vector(0,1){50}} \put(0,30){\vector(1,0){70}} \put(3,3){\vector(1,1){20}} \put(30,30){\line(1,-1){25}} \put(50,20){\vector(1,0){20}} \put(30,30){\line(1,2){2}} \put(34,30){\line(1,2){2}} \put(38,30){\line(1,2){2}} \put(42,30){\line(1,2){2}} \put(46,30){\line(1,2){2}} \put(50,30){\line(1,2){2}} \put(54,30){\line(1,2){2}} \put(58,30){\line(1,2){2}} \put(62,30){\line(1,2){2}} \put(22,26){$O$} \put(66,26){$x$} \put(23,50){$y$} \put(-3,20){$(u_0(-\infty),v_0(-\infty))$} \put(54,14){$(u_{+},v_{+})$} \put(52,2){Shock} \put(-2,-8){Fig. 12. The background solution for the no-reaction problem} \end{picture} \end{center} Moreover, there exist $r_1>0$ and $r_2>0$ such that, for any $U_1\in O_{r_2}(U_0(-\infty))$, the shock polar $S_1(U_1)\cap O_{r_1}(U_+)$ can be parameterized by the form \begin{equation} U=D(s,U_1) \qquad \text{with $U_+=D(s,U_{-\infty})$}, \end{equation} where $s$ is the shock speed.
\subsection{\textbf{Riemann problem with a strong shock}} To construct the approximate solutions, we need to solve the Riemann problem with a strong shock.
\begin{lemma} Let $U_1\in O_{r_1}(U_0(-\infty))$ and $U_2\in O_{r_2}(U_+)$ with small positive constants $r_1>0$ and $r_2>0$. Then the Riemann problem \begin{eqnarray} \left\{ \begin{array}{ccc} W(U)_{x}+H(U)_y=0,\\
U|_{x=0}= \begin{cases} U_1 &\quad y< y_0,\\U_2 &\quad y> y_0, \end{cases} \end{array} \right. \label{rm2} \end{eqnarray} has a unique solution consisting of weak waves $\alpha_2,\alpha_3,\alpha_4,\alpha_5$ and a strong shock $s$; that is, \begin{equation} \Psi(\alpha_5,\alpha_4,\alpha_3,\alpha_2,0;D(s,U_1))=U_2. \label{rm1} \end{equation} \end{lemma}
This lemma can be proved in the same way as in \cite{chen2} by solving (\ref{rm1}). Besides the Riemann problem for the interacting weak waves and the fractional steps in the previous sections, we also employ (\ref{rm2}) to deal with the interaction between the weak waves and the strong wave. More precisely, we have the following lemmas involving the strong shock.
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(50,90)(-2,-22) \linethickness{1pt} \multiput(0,-12)(0,3){26}{\line(0,2){2}} \multiput(24,-12)(0,3){25}{\line(0,1){2}} \multiput(48,-13)(0,3){25}{\line(0,2){2}} \put(0,1){\line(1,1){20}} \put(0,1){\line(3,-2){19}} \put(0,1){\line(5,3){20}} \put(0,1){\line(5,-2){24}} \put(0,62){\line(5,-2){24}} \put(0,62){\line(2,-3){16}} \put(0,33){\line(5,-2){24}} \put(24,52){\line(4,1){24}} \put(24,23){\line(4,1){24}} \put(24,-8){\line(4,1){24}} \put(24,-8){\line(2,3){18}} \put(24,-8){\line(1,1){18}} \put(24,-8){\line(5,-1){18}} \put(0,62){\line(1,-1){18}} \put(9,54){$U_a$} \put(38,33){$U_a$} \put(2,24){$U_m$} \put(2,-6){$U_b$} \put(25,-13){$U_b$} \put(-16,-20){Fig. 13. Interaction with the strong wave below} \put(38,2){$\delta_{2(3,4)}$} \put(36,18){$\delta_{5}$} \put(44,-13){$s'$} \put(12,35){$\beta_1$} \put(14,50){$\beta_{2(3,4)}$} \put(12,18){$\alpha_{5}$} \put(12,6){$\alpha_{2(3,4)}$} \put(15,-13){$s$} \end{picture} \end{center}
\begin{lemma} Suppose that $U_b\in O_{r_1}(U_0(-\infty))$ and $U_a$, $U_m\in O_{r_2}(U_+)$ with \begin{eqnarray} \{U_m,U_a\}=(\beta_1,\beta_2,\beta_3,\beta_4,0),\\ \{U_b,U_m\}=(s,\alpha_2,\alpha_3,\alpha_4,\alpha_5), \end{eqnarray} and \begin{equation} \{U_b,U_a\}=(s',\delta_2,\delta_3,\delta_4,\delta_5). \end{equation} Then \begin{eqnarray} && s'=s+K_{s_1}\beta_1+O(1)\Delta,\\ && \delta_j=\alpha_j+\beta_j+K_{s_j}\beta_1+O(1)\Delta, \qquad \text{$j=2,3,4$},\\ && \delta_5=\alpha_5+K_{s_5}\beta_1+O(1)\Delta, \end{eqnarray} with \begin{equation}
|K_{s_5}|<1,\qquad \text{$\sum_j|K_{s_j}|\le M$ for some $M>0$},
\Delta=|\alpha_5|(|\beta_2|+|\beta_3|+|\beta_4|). \end{equation} \label{lem7} \end{lemma}
\begin{lemma} Suppose that \begin{eqnarray} \{U_b,U_m\}=(\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_5),\qquad \{U_m,U_a\}=(s,\beta_2,\beta_3,\beta_4,\beta_5), \end{eqnarray} and \begin{equation} \{U_b,U_a\}=(s',\delta_2,\delta_3,\delta_4,\delta_5), \end{equation} with $U_b$, $U_m\in O_{r_2}(U_0(-\infty))$ and $U_a\in O_{r_1}(U_{+})$. Then \begin{eqnarray*}
s'=s+K_{s_1}\alpha_1+O(1)\sum_{i=1}^{5}|\alpha_i|,\qquad
\delta_j=\beta_j+O(1)\sum_{i=1}^{5}|\alpha_i|. \end{eqnarray*} \label{lem8} \end{lemma}
\begin{pf} If $\alpha_j=0$ for all $j$, then $s'=s$ and $\delta_j=\beta_j$ for all $j$; the general case follows by a standard Taylor expansion about this configuration. \end{pf}
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(50,74)(-2,-4) \linethickness{1pt} \multiput(0,0)(0,3){24}{\line(0,1){2}} \multiput(24,0)(0,3){24}{\line(0,1){2}} \multiput(48,0)(0,3){24}{\line(0,1){2}} \put(0,54){\line(3,2){20}} \put(0,54){\line(3,-1){20}} \put(0,54){\line(5,2){20}} \put(0,18){\line(1,1){20}} \put(0,18){\line(5,3){20}} \put(0,18){\line(4,-1){20}} \put(24,40){\line(5,-1){20}} \put(24,40){\line(3,2){20}} \put(24,40){\line(5,1){20}} \put(2,70){$U_a$} \put(2,30){$U_m$} \put(2,5){$U_b$} \put(36,15){$U_b$} \put(36,60){$U_a$} \put(12,66){$\beta_5$} \put(12,56){$\beta_{2(3,4)}$} \put(12,46){$s$} \put(12,36){$\alpha_5$} \put(12,23){$\alpha_{2(3,4)}$} \put(12,11){$\alpha_1$} \put(40,54){$\delta_5$} \put(38,46){$\delta_{2(3,4)}$} \put(40,38){$s'$} \put(-10,-6){Fig. 14. Interaction with the strong wave above} \end{picture} \end{center}
\subsection{\textbf{Glimm-type functional involving the strong shock}} We use the same grid points and mesh curves as in the previous sections. For the strip $\Omega_k$, we denote the strong shock in $\Omega_k$ by $s_k$. When no confusion arises, we also denote its speed by $s_k$ and its location by $y=\chi_k(x)$. \par Let \begin{equation} \Omega_{k+}=\{\chi_k(x)<y\}\cap \Omega_k, \qquad \text{$\Omega_{k-}=\{\chi_k(x)>y\}\cap \Omega_k$}. \end{equation} For $J_k<J<J_{k+1}$, we denote $J_+=J\cap \Omega_{k+}$ and $J_-=J\cap \Omega_{k-}$.
\begin{definition} \begin{eqnarray*}
&&L_j(J_{\pm})=\sum \{|\alpha|: \text{$\alpha$ is weak $j$-wave crossing $J_{\pm}$}\},\\[1.5mm]
&&Q(J_{\pm})=\sum \{|\alpha||\beta|: \text{$\alpha, \beta$ are weak waves, approaching and crossing $J_{\pm}$}\},\\[1.5mm] &&L(J_+)=K_0^*L_0(J)+L_1(J_+)+K_2^*L_2(J_+)+K_3^*L_3(J_+)+K_4^*L_4(J_+)+K_5^*L_5(J_+),\\[1.5mm] &&L(J_-)=L_1(J_-)+K_2^{**}L_2(J_-)+K_3^{**}L_3(J_-)+K_4^{**}L_4(J_-)+K_5^{**}L_5(J_-),\\[1.5mm] &&F(J)=L(J_+)+KL(J_-)+K'Q(J_+)+KK''Q(J_-),\\[1.5mm]
&&F_s(J)=|s_J-s_*|+C_*F(J), \end{eqnarray*} where $K, K', K'', K_j^*, K_j^{**}$, and $C_*$ are all positive constants with $$
K_0^*>|K_{b0}|, \qquad |K_{b5}|<K_5^*<\frac{1}{|K_{s_5}|}. $$ \end{definition}
\begin{proposition} Let $J_k<I<J<\tilde{J}_{k+1}$ be such that $J$ is an immediate successor of $I$. Suppose that \begin{eqnarray*}
&&\big|s_I-s_*\big|<\varepsilon,\\[1.5mm]
&&\big|U_{h,\theta}|_{I_+}-U_+\big|<\varepsilon_1,\\[1.5mm]
&&\big|U_{h,\theta}|_{I_-}-U_0(-\infty)\big|<\varepsilon_2 \end{eqnarray*} for some $\varepsilon, \varepsilon_1$, and $\varepsilon_2>0$. Then there exist positive constants $K, K',K'', K_j^*, K_j^{**}, C_*$, and $\tilde{\varepsilon}$, which are independent of $I, J$, and $k$, such that, if $F_s(I)<\tilde{\varepsilon}$, then \begin{equation*} F_s(J)<F_s(I). \end{equation*} Furthermore, we have \begin{eqnarray*}
&&\big|s_J-s_*\big|<\varepsilon,\\[1.5mm]
&&\big|U_{h,\theta}|_{J_+}-U_+\big|<\varepsilon_1,\\[1.5mm]
&&\big|U_{h,\theta}|_{J_-}-U_0(-\infty)\big|<\varepsilon_2. \end{eqnarray*} \end{proposition}
\begin{pf} We consider only the case near the strong 1-shock, since the other cases can be treated in the same way as in the previous sections.
\par Let $\Lambda$ be the diamond domain between the mesh curves $I$ and $J$.
\par {\textbf{Case 1}}: By Lemma \ref{lem7}, we have \begin{eqnarray*}
&& L_1(J_+)-L_1(I_+)=-|\beta_1|,\\[1.5mm]
&& L_j(J_+)-L_j(I_+)\le |K_{s_j}||\beta_1|+O(1)\Delta, \qquad \text{$j=2,3,4$},\\[1.5mm]
&& L_5(J_+)-L_5(I_+)\le |K_{s_5}||\beta_1|+O(1)\Delta, \\[1.5mm] && L(J_-)-L(I_-)=0,\\[1.5mm] && Q(J_-)-Q(I_-)=0. \end{eqnarray*}
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(50,92)(-2,-22) \linethickness{1pt} \multiput(0,-12)(0,3){26}{\line(0,1){2}} \multiput(24,-12)(0,3){25}{\line(0,1){2}} \multiput(48,-13)(0,3){25}{\line(0,1){2}} \put(0,1){\line(1,1){20}} \put(0,1){\line(3,-2){19}} \put(0,1){\line(5,3){20}} \put(0,1){\line(5,-2){24}} \put(0,62){\line(5,-2){24}} \put(0,62){\line(2,-3){16}} \put(0,33){\line(5,-2){24}} \put(24,52){\line(4,1){24}} \put(24,23){\line(4,1){24}} \put(24,-8){\line(4,1){24}} \put(24,-8){\line(2,3){18}} \put(24,-8){\line(1,1){18}} \put(24,-8){\line(5,-1){18}} \put(0,62){\line(1,-1){18}} \put(5,-20){Fig. 15. Case 1} \put(38,2){$\delta_{2(3,4)}$} \put(36,18){$\delta_{5}$} \put(38,-15){$s_{k+1}$} \put(12,35){$\beta_1$} \put(14,50){$\beta_{2(3,4)}$} \put(12,18){$\alpha_{5}$} \put(12,6){$\alpha_{2(3,4)}$} \put(15,-13){$s_k$} \end{picture} \end{center} Then we conclude that \begin{equation*}
L(J_+)-L(I_+)\le (-1+\sum_{j=2}^5K_j^*|K_{s_j}|)|\beta_1|+O(1)\Delta, \end{equation*} and \begin{equation*}
Q(J_+)-Q(I_+)\le O(1)\Delta+O(1)|\beta_1|L_1(J_+). \end{equation*} Moreover, \begin{equation*}
|s_{k+1}-s_*|\le |s_k-s_*|+|K_{s_1}||\beta_1|+O(1)\Delta. \end{equation*} Combining this with the above estimates, and choosing suitable constants $K_j^*$ and large constants $K$, $K^{\prime}$, and $K^{\prime\prime}$, we conclude \begin{equation*} F_s(J)\le F_s(I), \quad \text{for $F_s(I)\le \tilde{\varepsilon}$}. \end{equation*}
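Schematically, and with the $O(1)$ absorbing the fixed constants, the above estimates combine as
\begin{equation*}
F_s(J)-F_s(I)\le \Big(|K_{s_1}|+C_*\Big(-1+\sum_{j=2}^{5}K_j^*|K_{s_j}|\Big)+O(1)L_1(J_+)\Big)|\beta_1|+O(1)\Delta.
\end{equation*}
This is only a heuristic bookkeeping of the choice of constants: once the $K_j^*$ are chosen with $\sum_{j=2}^{5}K_j^*|K_{s_j}|<1$ (possible since $K_5^*<1/|K_{s_5}|$) and $C_*$ is taken large, the coefficient of $|\beta_1|$ is negative for $F_s(I)\le\tilde{\varepsilon}$ small, while the quadratic term $O(1)\Delta$ is controlled by the $Q$-part of the functional.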
\par {\textbf{Case 2}}: By Lemma \ref{lem8}, we have \begin{eqnarray*}
&& s_{k+1}=s_k+O(1)|\boldsymbol{\beta}|, \\[1.5mm]
&& \delta_j=\alpha_j+O(1)|\boldsymbol{\beta}|, \qquad j=1, \cdots, 5, \end{eqnarray*}
where $|\boldsymbol{\beta}|=\sum_{j=1}^5|\beta_j|$. Then \begin{equation*}
L(J_-)-L(I_-)\le -|\boldsymbol{\beta}| \end{equation*} for a suitable choice of the constants $K_j^{**}$. Choosing $K$ sufficiently large then yields the desired result.
The proof is complete.
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(50,74)(-2,-4) \linethickness{1pt} \multiput(0,0)(0,3){24}{\line(0,1){2}} \multiput(24,0)(0,3){24}{\line(0,1){2}} \multiput(48,0)(0,3){24}{\line(0,1){2}} \put(0,54){\line(3,2){20}} \put(0,54){\line(3,-1){20}} \put(0,54){\line(5,2){20}} \put(0,18){\line(1,1){20}} \put(0,18){\line(5,3){20}} \put(0,18){\line(4,-1){20}} \put(24,40){\line(5,-1){20}} \put(24,40){\line(3,2){20}} \put(24,40){\line(5,1){20}} \put(12,66){$\beta_5$} \put(12,56){$\beta_{2(3,4)}$} \put(12,46){$s_k$} \put(12,36){$\alpha_5$} \put(12,23){$\alpha_{2(3,4)}$} \put(12,11){$\alpha_1$} \put(40,54){$\delta_5$} \put(38,46){$\delta_{2(3,4)}$} \put(40,38){$s_{k+1}$} \put(10,-6){Fig. 16. Case 2} \end{picture} \end{center}
\end{pf}
\subsection{\textbf{Estimates of reaction steps for the strong shock}} By Lemma \ref{q1}, we have \begin{equation*}
U_b-\tilde{U}_b=\|Z_0\|_{\infty}e^{-\Phi_1 kh}O(h), \end{equation*} \begin{equation*}
U_a-\tilde{U}_a=\|Z_0\|_{\infty}e^{-\Phi_1 kh}O(h). \end{equation*} Then \begin{equation}
\tilde{s}_k-s_k=\|Z_0\|_{\infty}e^{-\Phi_1 kh}O(h). \end{equation}
\begin{center} \setlength{\unitlength}{1mm} \begin{picture}(100,52)(-2,-10) \linethickness{1pt} \multiput(0,0)(0,3){13}{\line(0,1){2}} \multiput(25,0)(0,3){13}{\line(0,1){2}} \multiput(73,-1)(0,3){13}{\line(0,1){2}} \multiput(98,-1)(0,3){13}{\line(0,1){2}} \put(73,18){\line(4,-1){20}} \put(0,18){\line(4,-1){20}} \put(19,9){$\tilde{s}_k$} \put(14,20){$\tilde{U}_a$} \put(92,10){$s_k$} \put(87,20){$U_a$} \put(10,6){$\tilde{U}_b$} \put(83,6){$U_b$} \put(8,-11){Fig. 17. Change of the strength of the strong shock} \put(38,28){after reaction} \put(29,26){\vector(1,0){38}} \put(-4,-4){$x=kh$} \put(69,-4){$x=kh$} \put(21,-4){$(k+1)h$} \put(94,-4){$(k+1)h$} \end{picture} \end{center}
As in the previous sections, we still have \begin{equation}
F_s(J_k)-F_s(\tilde{J}_k)\le Ch\|w_{5,0}\|_{\infty}e^{-\Phi_1 kh}(F_s(\tilde{J}_k)+1)^2.
\end{equation} This gives the uniform bounds on $F_s(J_k)$.
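Heuristically, the uniform bound follows because the reaction-step increments are summable in $k$:
\begin{equation*}
\sum_{k\ge 0}Ch\|w_{5,0}\|_{\infty}e^{-\Phi_1 kh}
=\frac{Ch\|w_{5,0}\|_{\infty}}{1-e^{-\Phi_1 h}}
\le \frac{C'\|w_{5,0}\|_{\infty}}{\Phi_1}
\qquad \text{for $h$ small},
\end{equation*}
while $F_s$ does not increase between consecutive reaction steps; hence $F_s(J_k)$ stays bounded, uniformly in $h$ and $k$, provided $\|w_{5,0}\|_{\infty}$ is sufficiently small.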
\subsection{\textbf{Global existence and asymptotic behavior of entropy solutions for the Lipschitz wedge with large angle}}
We finally have the following theorem.
\begin{theorem} Suppose that $0<\arctan\big(\frac{v_0(-\infty)}{u_0(-\infty)}\big)<\omega_{crit}$. If $\mathrm{TV}(W(U_0))+\mathrm{TV}(g')$ is sufficiently small, then the fractional-step Glimm scheme can generate a family of approximate solutions $U_{h,\theta}(x,y)$ that have uniformly bounded variation in the $y$--direction. Moreover, there exists a null set $N\subset \Pi_{k=0}^{\infty}(-1,1)$ such that, for every $\theta \in \Pi_{k=0}^{\infty}(-1,1)\setminus N$, there exists a sequence $h_i\to 0$ such that \begin{equation} U_{\theta}\stackrel{L_{loc}^1}{=}\lim_{h_i\to 0}U_{h_i,\theta} \end{equation} is a weak solution to problem \eqref{dd3}--\eqref{dd4} for system \eqref{d1}--\eqref{d2}. Furthermore, $U_{\theta}$ has uniformly bounded variation in the $y$--direction. \end{theorem}
\par In the same way as in \cite{chen2}, we have \begin{theorem}[Asymptotic behavior] Let $\omega_{\infty}=\lim_{x\to \infty} \arctan(g'(x+))$. Then \begin{equation}
\lim_{x\to \infty}\sup_{\chi_{\theta}(x)<y<g(x)}\big|
\arctan\big(\frac{v_{\theta}(x,y)}{u_{\theta}(x,y)}\big)-\omega_{\infty}\big|=0, \end{equation} and \begin{equation} \lim_{x\to \infty}\sup_{y<\chi_{\theta}(x)}
\big|\arctan\big(\frac{v_{\theta}(x,y)}{u_{\theta}(x,y)}\big)\big|=0. \end{equation} \end{theorem}
\noindent \textbf{Acknowledgements:} The research of Gui-Qiang Chen was supported in part by the UK EPSRC Science and Innovation Award to the Oxford Centre for Nonlinear PDE (EP/E035027/1), the NSFC under a joint project Grant 10728101, and the Royal Society--Wolfson Research Merit Award (UK). Changguo Xiao was supported in part by the NSFC under a joint project Grant 10728101. Yongqian Zhang was supported in part by NSFC Project 11031001, NSFC Project 11121101, and the 111 Project B08018 (China).
\centerline{\bf References}
\end{document}
\begin{document}
\title{\raggedright Towards Improved Prediction of Ship Performance: A Comparative Analysis on In-service Ship Monitoring Data for Modeling the Speed--Power Relation}
\author{\raggedright
\IEEEauthorblockN{\href{https://orcid.org/0000-0002-1322-7621}{\includegraphics[scale=0.1]{orcid.pdf}\hspace{1mm}\normalsize Simon DeKeyser}\IEEEauthorrefmark{1}, \normalsize Casimir Morob\'{e}\IEEEauthorrefmark{1}, \href{https://orcid.org/0000-0002-0529-0962}{\includegraphics[scale=0.1]{orcid.pdf}\hspace{1mm}\normalsize Malte Mittendorf}\IEEEauthorrefmark{2}}\\
\IEEEauthorblockA{\IEEEauthorrefmark{1}\small \emph{Toqua -- Ghent, Belgium}}\\
\IEEEauthorblockA{\IEEEauthorrefmark{2}\emph{Technical University of Denmark -- Kgs. Lyngby, Denmark} } }
\IEEEtitleabstractindextext{ \begin{abstract} Accurate modeling of ship performance is crucial for the shipping industry to optimize fuel consumption and subsequently reduce emissions. However, predicting the speed-power relation in real-world conditions remains a challenge. In this study, we used in-service monitoring data from multiple vessels with different hull shapes to compare the accuracy of data-driven machine learning (ML) algorithms to traditional methods for assessing ship performance. Our analysis consists of two main parts: (1) a comparison of sea trial curves with calm-water curves fitted on operational data, and (2) a benchmark of multiple added wave resistance theories with an ML-based approach. Our results showed that a simple neural network outperformed established semi-empirical formulas following first principles. The neural network only required operational data as input, while the traditional methods required extensive ship particulars that are often unavailable. These findings suggest that data-driven algorithms may be more effective for predicting ship performance in practical applications. \end{abstract} \begin{IEEEkeywords} Ship Performance, Speed-Power, Added Wave Resistance, Resistance model, Neural Networks \end{IEEEkeywords} }
\maketitle\thispagestyle{plain} \begin{tikzpicture}[remember picture,overlay] \node[anchor=north west,yshift=-15pt,xshift=20pt]
at (current page.north west)
{\includegraphics[height=2 cm]{Figures/logo.png}}; \end{tikzpicture}
\IEEEpeerreviewmaketitle \IEEEdisplaynontitleabstractindextext
\section{Introduction} \IEEEPARstart{A}{ccurate} ship performance modeling is a powerful decision-making instrument for the shipping industry, offering the ability to save fuel and reduce emissions \parencite{toqua}, e.g., through optimized hull maintenance and voyage optimization. In the last decades, a spectrum of strategies has been developed, ranging from simplified empirical formulas derived from model tests to advanced full-scale 3-D numerical simulations \parencite{tezdogan2015full}. However, considering both accuracy and computational effort, Liu and Papanikolaou \parencite*{liu2020regression} argue that for practical applications where only a limited amount of ship parameters is available, a semi-empirical formula seems to be the most efficient method that captures the underlying physics of the problem. Unfortunately, the development of such methods leads to a loss of accuracy, which we classify into three categories. \\\\ \textbf{(1)} Approximations are made to generalize the actual hull shape to an efficient number of parameters, i.e., main particulars. The hull form and its displacement distribution significantly impact the magnitude of the wave resistance in calm water. Nevertheless, incorporating the full hull geometry into a semi-empirical model would reduce its practical advantage over full-scale numerical simulations. \\\\ \textbf{(2)} Assumptions about the range of validity need to be made to enforce regimes where certain physical relations hold. Extrapolating results outside particular validity ranges may result in poor accuracy and non-physical transitions. \\\\ \textbf{(3)} The theoretical expressions are fitted on experimental and numerical data obtained from controlled environments, which are far from reality. Most of the experiments are performed on scaled models in towing tanks, causing an increase in errors when scaling the measurements back to full size. 
The problem lies in a lack of Reynolds number similarity between the ship and the model, as only Froude number similarity is obtained. In addition, further assumptions are made on, e.g., hull smoothness and weather conditions, as imitating every possible ship/environment combination would be infeasible. Hence, we face data scarcity when assessing the in-service ship performance. \\\\ The overarching goal of this article is to show that machine learning (ML) approaches may complement or possibly surpass traditional methods in the assessment of in-service ship performance. ML is a subfield of artificial intelligence that involves the use of algorithms and statistical models to enable a system to learn from data and improve its performance on a specific task. It consists of a set of computational techniques to discover patterns and relationships in data and make predictions or decisions based on those patterns. ML algorithms can be thought of as an extension of statistical regression analysis, which is a technique used to model the relationship between a dependent variable (the outcome or response of interest) and one or more independent variables (the predictors or explanatory variables). \\\\ Whereas regression analysis typically involves fitting a fixed model to data and then using the model to make predictions or inferences about the data, ML algorithms can be trained on data and adapt as new data becomes available, allowing the system to learn and improve over time without being explicitly programmed for each task.
\\\\ In the past, naval engineers have pursued finding universal semi-empirical formulas that rely on ship-specific inputs by performing fits (regression analysis) on experimental data to model ship performance. Instead, we propose training ship-specific ML models on real-world data while ensuring physical relevance without needing ship particulars. Nowadays, a lot of in-service sensor data is acquired, allowing us to leverage the scalability of ML and accurately model at once all of the ship-specific intricacies that semi-empirical formulas cannot. \\\\ ML is a natural progression for the shipping industry's practical purposes, as it allows us to predict unobserved outcomes in high-dimensional parameter spaces. In fact, several studies have examined the use of ML in the maritime industry, such as fuel consumption prediction and optimization for Diesel engines \parencite{parlak2006application}, condition-based maintenance of naval propulsion plants \parencite{coraddu2016machine}, and modeling marine fouling speed loss \parencite{coraddu2019data, gupta2022ship}. \\\\ In this work, the focus lies on predicting the power delivered by the engine as a function of the ship's speed. This so-called speed-power relationship forms the basis of ship performance modeling as it is the gateway to crucial economic parameters such as fuel consumption. An assessment of the current state of ship performance modeling research is made by comparing the accuracy of older, simpler theories with state-of-the-art (semi-)empirical formulas on in-service monitoring data. Herein, engine power sensor measurements will form our ground truth. We follow this approach to enable the extension to a supervised learning problem, where a learning algorithm is trained on observed data (ship speed, weather, and loading conditions) to predict specific targets (engine power) in unobserved situations.
\section{Methodology} In the following sections, we use the ship resistance model according to the ISO 19030 \parencite*{ISO19} industry standard to model the speed-power relationship. This model splits the resistance experienced by a ship into different parts: \begin{equation}
R = R_{calm} + R_{AA} + R_{AW} + R_{AH} + R_{others} \end{equation} where $R_{calm}$ is the calm-water resistance, $R_{AA}$ and $R_{AW}$ respectively are the added resistances due to wind and waves, $R_{AH}$ is the added resistance due to changes in hull condition such as the accumulation of marine growth on hull and propeller, and $R_{others}$ combines the effect of all other contributions such as steering and shallow water resistance. \\\\ Several (semi-)empirical techniques are elaborated on and critically examined for the different resistance contributions in section \ref{sec:model}. To simplify our analysis, we will discuss the calculation of the first three resistance terms, which constitute the most considerable contribution to $R$ \parencite{dalheim2020added}. The other terms require knowledge of ship-specific and environmental parameters, which are often unavailable in real-world situations. \\ In this study, we select neural networks (NNs) to prove the value of ML algorithms in predicting ship performance. We chose NNs for their simplicity and effectiveness in demonstrating the potential of ML without adding unnecessary complexity to the comparison. In section \ref{sec:nn}, we provide a brief overview of the NN architecture, pre-processing techniques tailored for in-service vessel data, and the training and evaluation process. \\\\ Section \ref{sec:results} discusses a benchmark of different $R_{AW}$ estimation procedures evaluated on in-service ship monitoring data to highlight the uncertainties of using experimentally fitted formulas for practical purposes. The benchmark, focusing on the resistance due to calm water, wind, and waves, will provide a clear and functional assessment of the accuracy of the (semi-)empirical techniques discussed in section \ref{sec:model}. \\ The goodness of fit between the actual $P_i$ and predicted $\hat{P}_i$ engine powers (for the observations $i = 1, ..., N$) is assessed with several metrics. 
We use the well-known mean absolute error (MAE): \begin{equation}
\text{MAE} = \frac{1}{N} \sum_{i=1}^N \left |P_i - \hat{P}_i\right |
\label{MAE} \end{equation} , the mean absolute percentage error (MAPE): \begin{equation}
\text{MAPE} = \frac{1}{N} \sum_{i=1}^N \left | \frac{P_i - \hat{P}_i}{P_i} \right | \end{equation} , the mean bias error (MBE): \begin{equation}
\text{MBE} = \frac{1}{N} \sum_{i=1}^N \left ( \hat{P}_i - P_i \right ) \end{equation} , and the R-squared score (R2): \begin{equation}
\text{R2} = 1 - \frac{\sum_{i=1}^N (P_i - \hat{P}_i)^2}{\sum_{i=1}^N (P_i - \bar{P})^2} \end{equation} with $\bar{P} = \frac{1}{N} \sum_{i=1}^N P_i$ the mean observed power. \\\\ The in-service dataset combines high-frequency sensor measurements of ship speed and main engine brake power\footnote{The brake power of a ship's engine is the power output of the engine measured at the engine's crankshaft, before any transmission losses. It can be calculated by measuring the torque and angular speed of the crankshaft.} with weather data and loading conditions. Table \ref{tab:variables} summarizes the variables into input and target categories. \begin{table}[] \centering \caption{Input and target variables of the in-service dataset.} \label{tab:variables} \begin{tabular}{@{}cl@{}} \toprule Category & Variables \\ \midrule
& Draft aft, Draft forward, Displacement \\
& Wind direction, Wind speed \\ Input & Wave direction, Significant wave height, Wave period \\
& Current speed, Current direction \\
& Speed-through-water, Ship heading \\ \midrule Target & Brake power \\ \bottomrule \end{tabular} \end{table} The data is filtered to \emph{steady state} voyages in full sea conditions, where ship accelerations, shallow water effects, and steering maneuvers are eliminated. \\ To further isolate our analysis to the $R_{calm}$, $R_{AA}$ and $R_{AW}$ resistance terms, measurements are taken from periods after cleaning events, such that $R_{AH}$ can be disregarded. Note that the relative contribution of $R_{AH}$ to $R$ will increase over time, and for practical use cases, accurately modeling $R_{AH}$ is crucial. Although it remains a challenging task to solve using empirical models \parencite{guo2022combined, demirel2017effect}, recent research has shown that digital twin ML-based methods may offer a remedy \parencite{coraddu2019data, gupta2022ship}. \\\\ Finally, a comparison is made with a simple ship-specific NN model, which incorporates all of the resistance contributions while requiring only operational data as input. Here, we also consider some limitations of using ML for ship performance modeling. Section \ref{sec:conclusion} will summarize our findings and use them to propose ML as the natural next step towards improved prediction of ship performance.
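For concreteness, the four goodness-of-fit metrics above can be computed directly from paired observations and predictions. The sketch below is plain Python with an illustrative helper name (\texttt{fit\_metrics}) and hypothetical sample values, not taken from the dataset; MAPE is evaluated here relative to the observed power, the usual convention.

```python
# Goodness-of-fit metrics for speed--power models; helper name and sample
# values are hypothetical, for illustration only.

def fit_metrics(P, P_hat):
    """Return (MAE, MAPE, MBE, R2) for observed powers P and predictions P_hat."""
    N = len(P)
    mean_P = sum(P) / N
    mae = sum(abs(p - ph) for p, ph in zip(P, P_hat)) / N
    # MAPE taken relative to the observed power P_i
    mape = sum(abs((p - ph) / p) for p, ph in zip(P, P_hat)) / N
    mbe = sum(ph - p for p, ph in zip(P, P_hat)) / N  # positive = over-prediction
    ss_res = sum((p - ph) ** 2 for p, ph in zip(P, P_hat))
    ss_tot = sum((p - mean_P) ** 2 for p in P)
    r2 = 1.0 - ss_res / ss_tot
    return mae, mape, mbe, r2

# Hypothetical brake powers in kW
P_obs = [8000.0, 9000.0, 10000.0, 11000.0]
P_pred = [8100.0, 8900.0, 10200.0, 10900.0]
mae, mape, mbe, r2 = fit_metrics(P_obs, P_pred)
```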
\section{Ship Resistance Model} \label{sec:model} The brake power $P_B$ of the ship's main engine is calculated according to: \begin{equation}
P_B = \frac{R V_S}{\eta_D \eta_M}
\label{eq:Pe} \end{equation} with $V_S$ the ship's speed-through-water (STW), $\eta_D$ the propulsive efficiency, $\eta_M$ the mechanical (shaft + gearbox) efficiency, and $R$ the total resistance. As with $R_{AH}$, empirically modeling the propulsive and mechanical efficiencies with limited information remains difficult \parencite{shigunov2017added}. \\ In particular, $\eta_D$ depends on the wake fraction, thrust deduction, propeller diameter, and even the total resistance experienced by the hull. All these factors but the propeller diameter are speed and seaway dependent. Several curves and empirical models have been proposed to estimate $\eta_D$ \parencite{fluid_mechanics_1995, kristensen2012prediction, holtrop1982approximate}. However, these are only valid for calm water, and there is a lack of data in waves. Therefore, we use the default values recommended by ISO19030.
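Numerically, Eq. \ref{eq:Pe} is a one-line conversion; the sketch below only illustrates the unit handling, with hypothetical resistance and efficiency values (not the ISO 19030 defaults).

```python
# Brake power P_B = R * V_S / (eta_D * eta_M); all numbers are hypothetical.

KNOTS_TO_MPS = 0.514444  # 1 knot in m/s

def brake_power(R_kN, V_S_mps, eta_D, eta_M):
    """Brake power in kW from total resistance R (kN), speed-through-water (m/s),
    propulsive efficiency eta_D, and mechanical efficiency eta_M."""
    return R_kN * V_S_mps / (eta_D * eta_M)

# Illustrative: 600 kN total resistance at 13 kn, eta_D = 0.70, eta_M = 0.99
P_B = brake_power(R_kN=600.0, V_S_mps=13.0 * KNOTS_TO_MPS, eta_D=0.70, eta_M=0.99)
```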
\subsection{Calm-Water Resistance} A direct method for calculating a ship's calm-water resistance is to conduct a sea trial, during which the speed-power relation is measured under calm weather conditions. Usually, these measurements are performed around the design speed of the vessel. In recent years, however, operational velocities have been lowered to reduce emissions \parencite{psaraftis2014ship}, meaning that operating speeds do not always match the design speeds anymore. \\ It is widely accepted that the correlation between speed and power follows a relation $P \approx V^c$ with constant exponent $c \approx 3$, which holds around the design speed \parencite{solutions2018basic}. However, previous studies have found that the \emph{cubic law} underestimates power at speeds below the design speed \parencite{adland2020optimal, taskar2020benefit, tillig2018analysis}. These findings suggest that simply extrapolating sea trial curves to operational speeds lower than the design speed can lead to underestimations of power. \\\\ In this work, we will omit three of the main issues associated with the use of sea trial curves to estimate $R_{calm}$: \begin{itemize}
\item Sea trial curve availability
\item Operational speed $\neq$ design speed
\item Time-dependent performance loss (e.g., fouling) \end{itemize} by using the data-driven method proposed by Berthelsen and Nielsen \parencite*{berthelsen2021prediction}. This method fits a calm-water curve using in-service data. Their model starts from a simple least-squares fit of ($x_1, x_2$) to a log-linearized power law: \begin{align}
P &= x_1V^{x_2}\\
\ln(P) &= \ln(x_1) + x_2 \ln(V) \end{align} They extend the regression model to incorporate the ship's draft $T$: \begin{equation}
\ln(P) = \ln(x_1) + x_2 \ln(V) + x_3 T+ x_4 \ln(V)T \end{equation} and make the exponent speed dependent by introducing breakpoints that separate different speed intervals where the exponent remains constant. With one breakpoint $B_p$, the regression is performed on: \begin{align} \begin{split}
\ln(P) = \ln(x_1) &+ x_2 \ln(V) + x_3 T+ x_4 \ln(V)T \\
&+ x_5 (\ln(V) - \ln(B_p))V_d \end{split}
\label{eq:calm} \end{align} In their paper, $V_d$ is a Heaviside function centered at $B_p$. We, however, propose a differentiable dummy index to make the speed-power relation smooth: \begin{equation}
V_d = \frac{1}{2}\left(1 + \tanh\left(\frac{V-B_p}{\delta}\right)\right) \end{equation} with $\delta$ the smoothing factor. \\\\ In practice, the breakpoints are detected with a binary segmentation algorithm, as implemented in the \texttt{ruptures} Python library \parencite{truong2020selective}. The algorithm is applied to the measured power of the speed-sorted data, and it finds change points in the signal where the slope of the speed--power relation alters. Finally, the regression is performed on the power data, which is first corrected for weather conditions (see next sections) to ensure that we fit calm-water data. As we do not correct for the fouling power loss, a part of $R_{AH}$ will be included in the fitted calm-water curves.
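A minimal sketch of the regression of Eq. \ref{eq:calm} with the smooth dummy index, assuming the breakpoint $B_p$ has already been detected (e.g., with \texttt{ruptures}); the data below are synthetic and the helper name is illustrative.

```python
import numpy as np

def fit_calm_water(V, T, P, Bp, delta=0.5):
    """Least-squares fit of
        ln(P) = ln(x1) + x2 ln(V) + x3 T + x4 ln(V) T + x5 (ln(V) - ln(Bp)) V_d,
    with the smooth dummy V_d = (1 + tanh((V - Bp)/delta)) / 2."""
    lnV = np.log(V)
    Vd = 0.5 * (1.0 + np.tanh((V - Bp) / delta))
    X = np.column_stack([np.ones_like(V), lnV, T, lnV * T, (lnV - np.log(Bp)) * Vd])
    coef, *_ = np.linalg.lstsq(X, np.log(P), rcond=None)
    return coef, X

# Synthetic data following an exact power law P = x1 * V^x2 at constant draft,
# so the fitted curve must reproduce the data
V = np.linspace(8.0, 16.0, 50)   # speed, kn
T = np.full_like(V, 10.0)        # draft, m
P = 2.0 * V ** 3.1               # "measured" calm-water power
coef, X = fit_calm_water(V, T, P, Bp=12.0)
P_fit = np.exp(X @ coef)
```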
\subsection{Added Resistance due to Wind} The industry standard for calculating $R_{AA}$ is: \parencite{ISO15} \begin{equation}
R_{AA} = \frac{1}{2}\rho_A A_{XV}C_{AA}(\theta_{rel})V_{wrel}^2 - \frac{1}{2}\rho_A A_{XV}C_{AA}(0^{\circ})V_{G}^2
\label{eq:RAA} \end{equation} with $\rho_A$ the air density, $V_{wrel}$ the relative wind speed calculated according to ISO 15016, $V_G$ the ship's measured speed over ground, $A_{XV}$ the transverse projected area of the ship above the water line, $C_{AA}$ the wind resistance coefficient and $\theta_{rel}$ the relative wind direction. The negative term is the air resistance due to the ship moving forward with a headwind caused by $V_G$ ($\theta_{rel} = 0^{\circ}$), which is already included in $R_{calm}$. \\\\ $C_{AA}$ is estimated with the regression formula based on wind tunnel tests developed by Fujiwara \parencite*{fujiwara2006new}: \begin{align} \begin{split}
C_{AA}(\theta_{rel}) = \: &C_{LF}\cos(\theta_{rel}) + \\
&C_{XLI}\left(\sin(\theta_{rel}) - \frac{1}{2}\sin(\theta_{rel})\cos^3(\theta_{rel})\right) +\\
&C_{ALF}\sin(\theta_{rel})\cos^3(\theta_{rel}) \end{split} \end{align} where the coefficients $C_{LF}$, $C_{XLI}$, and $C_{ALF}$ consist of different regression expressions for $\theta_{rel} < 90^{\circ}$ and $\theta_{rel} > 90^{\circ}$. Unfortunately, the latter coefficients depend on several detailed ship geometry-related parameters, such as the bridge height and longitudinal projected area of superstructures, which are usually unknown. \\ However, Kitamura et al. \parencite*{kitamura2017estimation} developed ship-type-specific regression formulas that estimate the input parameters $P$ of Fujiwara's formula and $A_{XV}$ from the ship's overall length $L_{OA}$ and beam $B$: \begin{equation}
\begin{rcases}
P\\
P/L_{OA}\\
P/B\\
P/L_{OA}^2\\
P/(L_{OA}B)\\
P/B^2
\end{rcases}
=
\begin{cases}
aB + bL_{OA} + c\\
aB + c\\
bL_{OA} + c
\end{cases} \end{equation} with ($a$, $b$, $c$) the regression coefficients and where the left-hand-side and right-hand-side expressions were carefully chosen for each specific parameter to maximize the accuracy. In the following, this approach is used to calculate the inputs for Fujiwara's expression, and while it introduces additional approximations, it is necessary to keep the required parameters at a feasible level. \\\\ Fig. \ref{winddrags} shows $R_{AA}$ calculated according to this method for a bulk carrier ($L_{OA} = 190$ m, $B = 32$ m) with $V_{wrel} = 8$ m/s and $V_G = V_S = 13$ kn, where a distinction is made between laden and ballast loading conditions. The absolute value of $R_{AA}$ is higher for ballast conditions, which is expected as the area above water is larger. Also, the added resistance is more considerable for headwinds and drops to negative values for following winds, meaning that the ship is being pushed forward. The reader may find it counter-intuitive that $R_{AA} = 0$ kN occurs at angles of only $\pm 40^{\circ}$, but it is stressed that we plot the added resistance relative to the ship's headwind (Eq. \ref{eq:RAA}). \begin{figure}
\caption{Polar plot of the added resistance due to wind as a function of the relative wind direction, calculated with Fujiwara's and Kitamura's regression formulas for different ship load conditions.}
\label{winddrags}
\end{figure}
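Equation \ref{eq:RAA} itself is straightforward to evaluate once a wind-coefficient curve is available. In the plain-Python sketch below, the $C_{AA}(\theta_{rel})$ profile is a simple hypothetical cosine shape standing in for Fujiwara's regression, and the projected area is an arbitrary illustrative value.

```python
import math

RHO_A = 1.225  # air density, kg/m^3

def c_aa(theta_rel_deg):
    """Hypothetical wind-resistance coefficient curve (stand-in for Fujiwara's
    regression): positive for headwinds, negative for following winds."""
    return 0.8 * math.cos(math.radians(theta_rel_deg))

def added_wind_resistance(A_XV, theta_rel_deg, V_wrel, V_G):
    """R_AA = 0.5 rho_A A_XV C_AA(theta) V_wrel^2 - 0.5 rho_A A_XV C_AA(0) V_G^2 (N)."""
    return (0.5 * RHO_A * A_XV * c_aa(theta_rel_deg) * V_wrel ** 2
            - 0.5 * RHO_A * A_XV * c_aa(0.0) * V_G ** 2)

A_XV = 900.0  # hypothetical transverse projected area, m^2
R_head = added_wind_resistance(A_XV, 0.0, V_wrel=10.0, V_G=6.7)     # headwind: added drag
R_follow = added_wind_resistance(A_XV, 180.0, V_wrel=8.0, V_G=6.7)  # following wind: push
```

Note that, by construction, the correction vanishes when the relative wind reduces to the ship-induced headwind ($\theta_{rel}=0^{\circ}$, $V_{wrel}=V_G$), consistent with the negative term in Eq. \ref{eq:RAA}.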
\subsection{Added Resistance due to Waves} Modeling a ship's added resistance due to waves is a highly complex, non-linear problem, and most methods rely on simplified assumptions. In practice, $R_{AW}$ is defined as ``the unsteady longitudinal force a ship experiences apart from the calm water, and wind resistances in a realistic seaway'' \parencite{mittendorf2022data}. \\ The force is a second-order quantity, which depends on the incident wave's amplitude and speed \parencite{liu2011prediction}. One of the oldest and simplest modeling approaches is that of Kreitner, used in ITTC2005 \parencite*{ITTC2005}: \begin{equation}
R_{AW} = 0.64 g H_S^2 C_B \rho_w \frac{B^2}{L_{pp}} \end{equation} with $g$ the gravitational constant, $H_S$ the significant wave height, $C_B$ the block coefficient, $\rho_w$ the water density, and $L_{pp}$ the length between perpendiculars. As this expression only holds for head waves, a cosine law can be used to account for waves with an arbitrary heading \parencite{hansen2011performance}: \begin{equation}
R_{AW} = 0.64 g H_S^2 C_B \rho_w \frac{B^2}{3L_{OA}}(2 + \cos(\alpha_{rel})) \end{equation} with $\alpha_{rel}$ the relative wave direction.\footnote{We use $\alpha_{rel} = 0^{\circ}$ for head waves, and $\alpha_{rel} = 180^{\circ}$ for following waves.} \\ Another simple, but more recent standard of ITTC 2014 \parencite{ITTC2014}, is the empirical STAwave-1 formula: \begin{equation}
R_{AW} = \frac{1}{16} g H_S^2 \rho_w B \sqrt{\frac{B}{L_{B}}} \qquad |\alpha_{rel}| \leq 45^{\circ} \end{equation} where $L_{B}$ is the length of the bow at the waterline. \\\\ More advanced (semi-)empirical approaches split the added wave resistance into two contributions: \begin{equation}
R_{AW} = R_{AWM} + R_{AWR} \end{equation} where the motion induced $R_{AWM}$ and wave reflection $R_{AWR}$ resistances are a function of the wave frequency $\omega$. To accurately model the ship's resistance in irregular sea conditions with waves of varying frequencies, integration is performed over a wave spectrum $S(\omega)$ to calculate the mean added resistance: \begin{equation}
\overline{R}_{AW} = 2\int_0^{\infty} S(\omega) \frac{R_{AW}(\omega)}{\zeta_a^2} d\omega
\label{eq:int} \end{equation} with $\zeta_a$ the wave amplitude. Hereby, we assume that the superposition principle holds and that the calculation is valid for long-crested waves. Often, a Pierson-Moskowitz type spectrum is used in fully developed sea states: \begin{equation}
S(\omega) = \frac{5}{16}H_S^2 \frac{\omega_p^4}{\omega^5} \exp\left(-\frac{5}{4}\left(\frac{\omega_p}{\omega}\right)^4\right) \end{equation} where $\omega_p = 2\pi/T_p$ and $T_p$ is the wave peak period. \\
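As a concrete illustration of the spectrum integration in Eq. \ref{eq:int}, the following sketch evaluates $\overline{R}_{AW}$ numerically; the quadratic transfer function \texttt{qtf} is a purely illustrative placeholder, not one of the cited methods:

```python
import numpy as np

def pm_spectrum(omega, hs, tp):
    """Pierson-Moskowitz spectrum S(omega) for significant wave height hs [m]
    and peak period tp [s]."""
    wp = 2.0 * np.pi / tp
    return (5.0 / 16.0) * hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega)**4)

def trapezoid(y, x):
    # Simple trapezoidal rule, avoiding version-specific NumPy helpers.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def mean_added_resistance(qtf, hs, tp, w_min=0.05, w_max=3.0, n=2000):
    """Mean added resistance 2 * integral of S(w) * R_AW(w)/zeta_a^2 dw,
    assuming long-crested waves and superposition."""
    w = np.linspace(w_min, w_max, n)
    return 2.0 * trapezoid(pm_spectrum(w, hs, tp) * qtf(w), w)

# Illustrative placeholder transfer function peaking near 0.8 rad/s:
qtf = lambda w: 1e4 * np.exp(-((w - 0.8) / 0.3) ** 2)
r_aw_mean = mean_added_resistance(qtf, hs=4.0, tp=10.0)
```

A quick sanity check on the spectrum is that its zeroth moment $\int_0^\infty S(\omega)\,d\omega$ equals $H_S^2/16$.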
Using this frequency response framework, ITTC2014 \parencite*{ITTC2014} and ISO 15016 \parencite*{ISO15} recommend using the semi-empirical method called STAwave-2, which again holds for $|\alpha_{rel}| \leq 45^{\circ}$. \\\\ From 2016 to 2020, Liu and Papanikolaou \parencite*{liu2016fast, liu2020regression} derived an improved formula with regression analysis on model test data, which holds for arbitrary wave headings. In 2022, Mittendorf et al. \parencite*{mittendorf2022towards} enhanced their formula by performing multivariate regression on the parameter vector with model test data, thereby also including the 90 \% prediction interval of $R_{AW}$. We note that a discontinuity is present in the Liu/Mittendorf formulas, which we found to be caused by the non-linear form factor in the expression for $R_{AWR}$: \begin{equation}
\left(\frac{0.87}{C_B}\right)^{(1+4\sqrt{Fr})f(\alpha_{rel})} \end{equation} with $Fr$ the Froude number and: \begin{equation} f(\alpha_{rel}) =
\begin{cases}
-\cos(\alpha_{rel}) & \pi - E_1 \leq \alpha_{rel} \leq \pi \\
0 & \alpha_{rel} < \pi - E_1
\end{cases} \end{equation} where $E_1$ is an angle that defines where the ship's bow ends. The discontinuity can be solved by shifting the cosine of $f(\alpha_{rel})$ to 0 when $\alpha_{rel} = \pi - E_1$, while still forcing $f(\alpha_{rel}) = 1$ for $\alpha_{rel} = \pi$: \begin{equation} f(\alpha_{rel}) =
\begin{cases}
-\frac{\cos(\alpha_{rel}) + 1}{\cos(\pi - E_1) + 1} + 1 & \pi - E_1 \leq \alpha_{rel} \leq \pi \\
0 & \alpha_{rel} < \pi - E_1
\end{cases} \end{equation} We will use this new form of $f(\alpha_{rel})$ in the $R_{AWR}$ expression of the Liu and Mittendorf methods, as discontinuities will not be present in the measured data. \\\\ To give the reader a visual understanding of the different wave resistance formulas, Fig. \ref{wavedrags} plots $R_{AW}$ calculated with the aforementioned methods for a bulk carrier ($L_{OA} = 190$ m, $B = 32$ m, $C_B = 0.7$) with $H_S = 4$ m, $T_p = 10$ s and $V_G = V_S = 13$ kn in laden condition. Fig. \ref{wavedrags} shows that, while the theories' $R_{AW}$ predictions are of the same order of magnitude, their angular dependence on $\alpha_{rel}$ differs considerably. \begin{figure}
\caption{Polar plot of the added resistance due to waves as a function of the relative wave direction, calculated with different wave resistance theories in laden condition.}
\label{wavedrags}
\end{figure} \mbox{}\\\\ Although the discontinuity has been resolved, the semi-empirical Liu and Mittendorf methods still exhibit non-differentiable kinks where the resistance abruptly increases or decreases. The reason is that their formulas consist of separate expressions for the bow and stern of the ship, including form factors to incorporate non-linear behavior. Of course, in natural seas, the waves come from multiple directions at once, and Eq. \ref{eq:int} can be extended to a double integration over an angular wave spectrum. Accounting for short-crested waves in this way smooths the $R_{AW}(\alpha_{rel})$ curves towards the physically expected behavior. Unfortunately, performing a two-dimensional numerical integration comes at the cost of computational time. \\ Another observation is that the more advanced methods predict a larger wave resistance for oblique waves than for head waves. This effect is confirmed in model tests, where oblique waves induce stronger pitch motions that dissipate energy \parencite{valanto2015experimental}. The wave angle-dependent Kreitner formula predicts a similar magnitude for the different wave headings, lying somewhere between the STAwave and the Mittendorf methods. \\\\ Lastly, to highlight the influence of the peak wave period on the frequency response-type methods, Fig. \ref{peakperiod} shows a surface plot of $R_{AW}$ as a function of $T_p$ and the wave heading $\alpha_{rel}$ calculated with the Mittendorf method. The same values as in Fig. \ref{wavedrags} were used for $B$, $H_S$, etc., with $V_G = V_S = 0$ kn. The ship experiences the highest added resistance in sea states with short waves arriving obliquely to the ship's heading. \begin{figure}
\caption{Surface plot of $R_{AW}$ as a function of $T_p$ and the wave heading $\alpha_{rel}$ calculated with the Mittendorf method in laden condition.}
\label{peakperiod}
\end{figure}
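The effect of the continuity fix to the form factor exponent introduced above can also be verified numerically. A minimal sketch, where the bow-entrance angle $E_1$ is an assumed illustrative value:

```python
import math

E1 = math.radians(60.0)  # illustrative bow-entrance angle; ship-specific in practice

def f_original(a_rel):
    """Original form factor exponent: jumps at a_rel = pi - E1."""
    return -math.cos(a_rel) if a_rel >= math.pi - E1 else 0.0

def f_shifted(a_rel):
    """Shifted cosine: continuous at pi - E1 and still equal to 1 at pi."""
    if a_rel >= math.pi - E1:
        c = math.cos(math.pi - E1)
        return -(math.cos(a_rel) + 1.0) / (c + 1.0) + 1.0
    return 0.0
```

At $\alpha_{rel} = \pi - E_1$ the original factor jumps from $0$ to $\cos E_1$, while the shifted version passes through $0$ continuously and still reaches $1$ at $\alpha_{rel} = \pi$.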
\section{An ML-based Approach} \label{sec:nn} Neural networks (NNs) are ML models inspired by the structure and function of the human brain. They are composed of interconnected processing nodes, called neurons, which work together to recognize patterns in data and make predictions. Here, we will use one of the most straightforward NN architectures, the feedforward NN (FNN). This section will briefly explain the architecture of the FNN, the data pre-processing steps, and how a standard training procedure is performed and validated.
\subsection{The Feedforward Architecture} An FNN is organized into three main parts: the input layer, the hidden layers, and the output layer. The input layer receives input data, the hidden layers process that data and the output layer yields the final output of the network. Each layer consists of several neurons, which receive input from the previous layer and produce the output that is conveyed to the next layer. \\\\ Mathematically, a neuron can be represented as a function that takes in a set of inputs, $x_1, x_2, ..., x_n$, and produces a single output, $y$. The inputs are multiplied by a set of weights, $w_1, w_2, ..., w_n$, and summed with a bias term, $b$, to produce an intermediate value, $z$, which is then passed through a non-linear activation function, $f$, to produce the output: \begin{align}
z &= \sum_{i=1}^{n} w_i x_i + b\\
y &= f(z) \end{align} A commonly used activation function is the rectified linear unit (ReLU): \begin{equation}
f(z) = \text{max}(0, z) \end{equation} which has the disadvantage of being non-differentiable at $z = 0$. In this work, we use a smooth approximation of the ReLU function, namely the softplus activation function, defined as: \begin{equation}
f(z) = \log(1 + e^z) \end{equation} The main premise of activation functions is to replicate the behavior of brain neurons, which fire when the electrical impulses coming from connections with other neurons reach a certain threshold (at $z = 0$ here). \subsection{Feature Engineering} The input features are the variables presented to the network's input layer. The process of selecting and transforming the input features to improve the performance of a NN is called feature engineering. It involves identifying the most relevant features in the dataset and pre-processing them in a way that makes them more suitable for the NN to learn from. \\\\ In this work, we deliberately reduce the NN's complexity by combining different input variables from Table \ref{tab:variables} into as few features as possible without losing predictive power. \\ Table \ref{tab:features} summarizes the engineered features, which mimic theoretical formulas of wave and wind resistance. E.g., the \emph{Wind Product} feature resembles the form of Eq. \ref{eq:RAA}, while the \emph{Wave Power} mimics the power per unit wave crest in deep water: \begin{equation}
P = \frac{\rho_w g}{64 \pi} H_S^2 T_e \end{equation} with $T_e$ the wave energy period. \begin{table}[!b] \centering \caption{Features used as input of the NN.} \label{tab:features} \bgroup \def\arraystretch{1.2} {\setlength{\tabcolsep}{15pt} \begin{tabular}{@{}ll@{}} \toprule Feature & Formula \\ \midrule STW & \\ Draft Average & $(T_{aft}$ + $T_{fwd})/2$ \\ Wind Product (long.) & $V_{wrel}^2 \cos(\theta_{rel})$ \\ Wind Product (trans.) & $V_{wrel}^2 \sin(\theta_{rel})$ \\ Wave Power (long.) & $H_S^2 T_p \cos(\alpha_{rel})$ \\ Wave Power (trans.) & $H_S^2 T_p \sin(\alpha_{rel})$ \\ \bottomrule \end{tabular}} \egroup \end{table} \mbox{}\\ Although power is not a vector quantity, we separate all directional variables into their longitudinal (long.) and transversal (trans.) components relative to the vessel's heading. Doing so results in a correct angular dependence of the model w.r.t. the wind direction $\theta_{rel}$ and wave direction $\alpha_{rel}$ relative to the vessel's heading. \\\\ The final FNN architecture consists of an input layer with six features, four fully-connected hidden layers (64, 32, 16, and 8 neurons, respectively), and an output layer that predicts the brake power.
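The feature engineering of Table \ref{tab:features} and the forward pass of the described 6-64-32-16-8-1 softplus network can be sketched as follows. The weights here are randomly initialized placeholders, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(z):
    # Smooth approximation of ReLU: log(1 + e^z), in a numerically stable form.
    return np.logaddexp(0.0, z)

def engineer_features(stw, t_aft, t_fwd, v_wrel, theta_rel, hs, tp, alpha_rel):
    """Build the six input features (angles in radians)."""
    return np.array([
        stw,                              # speed through water
        0.5 * (t_aft + t_fwd),            # draft average
        v_wrel**2 * np.cos(theta_rel),    # wind product (long.)
        v_wrel**2 * np.sin(theta_rel),    # wind product (trans.)
        hs**2 * tp * np.cos(alpha_rel),   # wave power (long.)
        hs**2 * tp * np.sin(alpha_rel),   # wave power (trans.)
    ])

# Layer sizes: 6 inputs, four hidden layers, 1 output (brake power).
sizes = [6, 64, 32, 16, 8, 1]
params = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

def forward(x):
    """Softplus activations on the hidden layers, linear output layer."""
    for i, (w, b) in enumerate(params):
        z = x @ w + b
        x = softplus(z) if i < len(params) - 1 else z
    return float(x[0])

power = forward(engineer_features(13.0, 8.0, 8.4, 12.0, 0.5, 4.0, 10.0, 2.8))
```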
\subsection{Training and Validation} During training, the weights and biases of the neurons are adjusted to minimize a loss function that measures the difference between the predicted output and the true output (e.g., with the MAE (Eq. \ref{MAE}) or mean squared error (MSE)). The optimization is generally conducted with algorithms like stochastic gradient descent, which calculates the gradient of the loss function with respect to the weights and biases and updates the weights and biases in the direction that reduces the loss. \\ In this study, we employed ADAM \parencite{kingma2014adam}, a frequently used variation of gradient descent that incorporates momentum \parencite{rumelhart1985learning} to accelerate convergence to the minimum of the loss function in the high-dimensional space of weights and biases. \\\\ To assess the performance of the NN, we use k-fold cross-validation. Here, the data is divided into k \emph{folds}, and the model is trained on k-1 of those folds while leaving the remaining fold as a validation set. This process is repeated k times, with a different fold being used as the validation set each time. The final performance of the model is then calculated as the average performance across all k iterations. This approach ensures that the NN is tested on completely unseen data, giving us a reliable representation of how the model would perform in unobserved real-world conditions.
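The k-fold procedure above can be sketched as follows; the mean-predictor ``model'' is a stand-in for the actual FNN training step:

```python
import numpy as np

def kfold_mape(x, y, fit, predict, k=5, seed=0):
    """k-fold cross-validation: average MAPE [%] over the k validation folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), k)
    scores = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(x[train], y[train])          # train on k-1 folds
        y_hat = predict(model, x[val])           # validate on the held-out fold
        scores.append(100.0 * np.mean(np.abs(y_hat - y[val]) / y[val]))
    return float(np.mean(scores))

# Stand-in model: always predicts the mean of the training targets.
fit_mean = lambda x_tr, y_tr: float(np.mean(y_tr))
predict_mean = lambda model, x_va: np.full(len(x_va), model)

x = np.arange(100.0)
y = np.full(100, 5.0)
score = kfold_mape(x, y, fit_mean, predict_mean, k=5)
```

Because every fold serves once as validation data, the averaged score reflects performance on data the model never saw during fitting.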
\section{Discussion} \label{sec:results} In the following, we will use in-service data of different vessels to compare the performance of the resistance models described above and extend the analysis to include a simple NN. Before proceeding, we will first justify the use of a speed-dependent exponent in the calm-water speed-power relation. \subsection{Calm-Water Resistance}
Sea trial curves of a tanker were extrapolated to lower speeds, and weather corrections $R_{AA}$, $R_{AW}$ were calculated with Fujiwara's and Mittendorf's semi-empirical formulas, respectively. The main engine power is then obtained with Eq. \ref{eq:Pe}, ignoring $R_{AH}$ and $R_{others}$. Fig. \ref{seatrial} shows the predicted and measured power as a function of STW, along with the extrapolated laden and ballast sea trial curves. Predicted points $\hat{P}$ are colored based on their absolute percentage error (APE) = $|\hat{P} - P| / P$ with respect to the measured values $P$. \\\\ The scatter plot in Fig. \ref{seatrial} supports our earlier hypothesis that extrapolating sea trial curves below the design speed leads to an underestimation of the power, indicating that the cubic law does not hold in this range. Additionally, time-dependent performance losses such as fouling can also cause an underestimation of the power, which increases over time. \\\\ To address the limitations of the extrapolated sea trial curves, we perform a least-squares fit of Eq. \ref{eq:calm} on calm-water data according to the method by Berthelsen and Nielsen. The calm-water data is obtained by correcting the $H_S \leq 1$ m in-service power data for wind and waves with Fujiwara's and Mittendorf's formulas.\footnote{A histogram of $H_S$ can be found in Fig. \ref{app_Hs}.} First, the binary segmentation algorithm was used to find the breakpoint at $V_S = 11.53$ kn. Fig. \ref{calmwater} shows the results in the same way as before, but with the fitted calm-water curves. The speed-power exponent changes smoothly ($\delta = 0.5$) at the breakpoint from $\approx 1.8$ to $\approx 2.8$ for the laden curve. \\\\ Table \ref{tab:calmbench} compares the accuracy metrics obtained with the sea trial curves and the fitted calm-water model. Our analysis shows that the data-driven calm-water method significantly improves the accuracy compared to extrapolated sea trial curves in terms of R2, MAPE, and MBE.
This suggests that the data-driven calm-water model is a more reliable approach for predicting power requirements in realistic operating conditions. However, as shown in Fig. \ref{calmwater}, the fitted calm-water model still tends to underestimate the power (negative MBE) at low STW. This error is likely due to the presence of \emph{bands} in the speed-power data, where the vessel encounters heavy weather while operating in constant power or constant shaft rpm autopilot, resulting in large power values at low STW. \begin{table}[!b] \centering \caption{Accuracy metrics evaluated on the predictions from different calm-water models in combination with Mittendorf's and Fujiwara's resistance corrections.} \label{tab:calmbench} \begin{tabular}{@{}lccc@{}} \toprule Calm-Water Model & R2 & MAPE [\%] & MBE [kW] \\ \midrule Extrapolated Sea Trial Curve & 0.72 & 15 & -1522 \\ Fitted Calm-Water Curve & 0.89 & 7.1 & -443 \\ \bottomrule \end{tabular} \end{table}
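For reference, the metrics reported in Table \ref{tab:calmbench} can be computed as follows; \texttt{accuracy\_metrics} is a hypothetical helper using the standard definitions of R2, MAPE, and MBE:

```python
import numpy as np

def accuracy_metrics(y_true, y_pred):
    """R2, MAPE [%] and MBE for power predictions y_pred against measurements y_true."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mape = 100.0 * np.mean(np.abs(err) / y_true)   # mean absolute percentage error
    mbe = float(np.mean(err))                      # mean bias error (< 0: underestimation)
    r2 = 1.0 - np.sum(err**2) / np.sum((y_true - y_true.mean())**2)
    return float(r2), float(mape), mbe
```

A negative MBE, as in Table \ref{tab:calmbench}, directly signals a systematic underestimation of the power.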
\begin{figure}
\caption{Scatter plot of the measured and predicted main engine power (normalized w.r.t maximum) as a function of STW, together with the extrapolated laden and ballast sea trial curves. (Log-scaled color map.)}
\label{seatrial}
\end{figure} \begin{figure}
\caption{Scatter plot of the measured and predicted main engine power (normalized w.r.t maximum) as a function of STW, together with the fitted calm-water laden and ballast curves. (Log-scaled color map.)}
\label{calmwater}
\end{figure} \mbox{}
\subsection{Added Wave Resistance Benchmark} Up to this point, we have only used Mittendorf's formula to calculate the power corrections due to waves. In the following analysis, we perform a benchmark of different added wave resistance theories to evaluate their performance, along with sea trials and the fitted calm-water regression model, using data from both an oil tanker and a dry cargo carrier. The method is similar to that used to produce Figs. \ref{seatrial} and \ref{calmwater}, except that different wave theories are used in combination with Fujiwara's $R_{AA}$. Fig. \ref{benchmark} shows the results, including the MAPE scores for each vessel and wave resistance theory, along with a comparison of the fitted calm-water model to sea trials. \\\\ Interestingly, we find that the accuracy of all the added resistance theories is very similar, despite significant differences in their complexity. One possible explanation for this finding is that the more advanced approaches are too complex for noisy data. When examining Fig. \ref{wavedrags}, we see that the Liu theory exhibits abrupt changes (non-differentiable points) in added wave resistance, which are not physically realistic. Based on these results, one might even consider using the wave angle-dependent Kreitner's formula, which has the advantage of requiring fewer ship-specific parameters and can be evaluated approximately ten times faster than Mittendorf's method ($\approx 0.1$ ms versus $\approx 1$ ms).
\begin{figure*}
\caption{MAPE obtained with the different combinations of wave resistance theories and calm-water models evaluated on multiple vessels.}
\label{benchmark}
\end{figure*} \subsection{A Simple NN} \label{sec:simple_nn} In this section, we evaluate the performance of the simple NN by comparing it with the fitted calm-water model using different added wave resistance theories on the in-service data of a chemical tanker.\footnote{The FNN was trained using mini-batches of 32 samples with a learning rate of 0.015 for five epochs on each k-fold.} In contrast to the previous vessels, including sea trial curves in the analysis is not possible as they are unavailable. This further highlights the benefit of using the data-driven calm-water model by Berthelsen and Nielsen. \\\\ Fig. \ref{MAPE_NN} compares the MAPE score achieved with the fitted calm-water curve using different wave resistance theories and the simple NN. The Kreitner and STAwave-2 theories perform worse than the model without weather corrections due to multiple data points falling outside the valid ranges of these empirical formulas. The simple NN achieves a MAPE of 7.4 \%, which represents a roughly 3 \% improvement over the best (semi-)empirical formula. This is a remarkable result considering the simplicity of our NN architecture, and investing more effort in the NN architecture could improve the accuracy further. \begin{figure}
\caption{MAPE obtained with the different combinations of wave resistance theories and the fitted calm-water model. A comparison is made with a simple NN.}
\label{MAPE_NN}
\end{figure} \mbox{}\\\\ To further analyze the performance of these models, we can perform an error analysis using binned scatter plots. In these plots, the data points are grouped into bins according to a variable of interest. Then, the MAPE is calculated for each bin, which allows us to visualize the relative accuracy of the model as a function of the variable of interest, providing insight into the factors that may influence the model's performance. \\\\ Figs. \ref{binned_mape_wave} and \ref{binned_mape_wind} show a binned scatter plot with respect to the significant wave height and wind speed, respectively. Here, we compare the fitted calm-water model using Mittendorf's added wave resistance corrections with the simple NN from Fig. \ref{MAPE_NN}. The histograms show the data spread in the variable of interest, and the error bars denote the standard deviation of the binned MAPE. From both plots, we conclude that the NN achieves a better accuracy \emph{globally}, and thus it generalizes better than the theoretical model. \\\\ Taking the significant wave height as the variable of interest, the theoretical model performs particularly poorly in bad weather conditions, with a binned MAPE reaching over 70 \%. This effect is not as noticeable when evaluating the overall MAPE because the data spread gives increased weight to low wave heights. Similarly, using the relative wind speed as the variable of interest yields similar results for the NN, while the theoretical model performs worse for a wide range of wind speeds with a binned MAPE of up to 45 \%. \begin{figure}
\caption{MAPE binned w.r.t. the wave height, evaluated on a chemical tanker. A comparison is made between the fitted calm-water model in combination with Mittendorf's wave resistance and a simple NN. The histogram shows the data spread of the wave height.}
\label{binned_mape_wave}
\end{figure} \mbox{} \\\\ It is notable that, despite having less data available for high wave heights and wind speeds, the NN's accuracy remains in a similar range. This finding is remarkable because NNs generally require considerable amounts of training data to learn and generalize effectively. The fact that the NN can maintain a similar level of accuracy with less data suggests that our choices of feature engineering were effective. \begin{figure}
\caption{MAPE binned w.r.t. the wind speed, evaluated on a chemical tanker. A comparison is made between the fitted calm-water model using Mittendorf's wave resistance and a simple NN. The histogram shows the data spread of the wind speed.}
\label{binned_mape_wind}
\end{figure}
\subsection{Machine Learning as the Next Step} \label{sec:next_step} Our findings above demonstrate the potential of ML methods to improve the prediction of ship performance in various operating conditions compared to traditional methods. Indeed, ML has previously been proposed as a valuable tool for shipping companies. Alexiou et al. \parencite*{alexiou2022towards} compared a number of recent data-driven models of the speed-power relation from the literature and concluded that data-driven models were more suitable for shipping companies, provided sufficient representative historical data was available. \\\\ The key aspect of this work lies in the direct comparison of the data-driven methods and multiple theoretical models on several vessels with different design types. However, before we propose ML as the natural next step for ship performance modeling, we must carefully weigh the benefits against the limitations. \\\\ One of the main benefits of ML models is that they can be trained on existing data from a ship's performance, which means that they can be customized to a specific vessel and its operating conditions. This can provide more accurate and relevant results than using traditional methods. \\ Another advantage of ML is that it can automate some of the more tedious and time-consuming aspects of ship performance modeling, freeing engineers to focus on other tasks. Machine learning algorithms can also learn from data in real-time, which means that they can adapt to changes in a ship's operating conditions and provide updated predictions on an ongoing basis. \\\\ In contrast, one of the main constraints is the need for substantial amounts of high-quality data to train the models. This may be challenging for some ships or operating conditions where data may be limited or difficult to collect (e.g., only noon report data available or sensor drift).
\\\\ Additionally, ML models can be difficult to interpret and explain, making it challenging for engineers to apply the results to their work. This issue can be addressed through the use of explainable artificial intelligence (XAI) techniques \parencite{gunning2019xai} and physics-informed ML (PIML) \parencite{karniadakis2021physics}. XAI helps to make ML algorithms more transparent and interpretable, while PIML allows ML methods to follow physical relations. \\\\ Furthermore, ML algorithms can be subject to bias if the training data is not representative of the full range of conditions that a ship may encounter. This can lead to inaccurate or misleading results, which can be challenging to detect and correct. However, careful feature engineering and the use of PIML can help to ensure that the ML algorithm follows physical relationships beyond the range of the training data and is not biased by the data used to train it. This can help to improve the robustness and generalizability of the ML model \parencite{julie}. \\\\ Overall, while the use of ML in ship performance modeling offers many potential benefits, there are also significant limitations that one must consider. However, as the availability of data increases and the expertise in ML continues to grow, these restrictions are becoming easier to overcome.
\section{Conclusion} \label{sec:conclusion} We have outlined several (semi-)empirical approaches developed in the last decades of ship hydrodynamics. Most of these methods use theoretical approximations, after which they are fitted to experimental data. Researchers have been using increasingly advanced regression algorithms to enhance the experimental fits of their semi-empirical formulas. \\ Although their findings are crucial to obtaining a better understanding of ship performance in a research-based way, the developed methods might not be the best choice in practical applications as they require extensive ship particulars that are often unavailable. \\\\ Our benchmark of different added wave resistance theories, performed on in-service data from multiple vessels, supports this idea. Results showed that simple theories had similar accuracy to more complex methods, and that a data-driven calm-water resistance regression method proposed by Berthelsen and Nielsen \parencite*{berthelsen2021prediction} was more useful than extrapolated sea trial curves for performance monitoring. \\\\ A natural next step to further improve accuracy and reduce the need for empirical methodologies and ship particulars may be to leverage the data scalability of ML techniques. By applying ML techniques to ship-specific sensor data, the whole ship and its environment can be \emph{learned} directly from real-world data. \\ Our results showed that a simple NN outperforms all of the (semi-)empirical added wave resistance methods, which were used in combination with the data-driven calm-water model. This suggests that ML techniques may be a valuable tool for improving ship performance modeling in practical applications. Even a small increase in accuracy could have significant economic and ecological benefits for the shipping industry.
Only a small change of perspective is required, transitioning from theoretical regression analysis to a data-driven approach, with potentially significant benefits. \printbibliography
\appendix \begin{figure}
\caption{Histogram of the significant wave height $H_S$.}
\label{app_Hs}
\end{figure}
\end{document}
\begin{document}
\allowdisplaybreaks
\title{\bf Nonlinear Fokker--Planck Equations for Probability Measures on Path Space and Path-Distribution Dependent SDEs}
\begin{abstract} By investigating path-distribution dependent stochastic differential equations, the following type of nonlinear Fokker--Planck equation for probability measures $(\mu_t)_{t \geq 0}$ on the path space $\mathcal C:=C([-r_0,0];\mathbb R^d)$ is analyzed: $$\partial_t \mu(t)=L_{t,\mu_t}^*\mu_t,\ \ t\ge 0,$$ where $\mu(t)$ is the image of $\mu_t$ under the projection $\mathcal C\ni\xi\mapsto \xi(0)\in\mathbb R^d$, and $$L_{t,\mu}(\xi):= \frac 1 2\sum_{i,j=1}^d a_{ij}(t,\xi,\mu)\frac{\partial^2}{\partial \xi(0)_i\, \partial \xi(0)_j} +\sum_{i=1}^d b_i(t,\xi,\mu)\frac{\partial}{\partial \xi(0)_i},\ \ t\ge 0,\ \xi\in \mathcal C,\ \mu\in \mathcal P^{\mathcal C}.$$ Under reasonable conditions on the coefficients $a_{ij}$ and $b_i$, the existence, uniqueness, Lipschitz continuity in Wasserstein distance, total variation norm and entropy, as well as derivative estimates are derived for the martingale solutions. \end{abstract} \noindent
AMS subject Classification:\ 60J75, 47G20, 60G52. \\ \noindent
Keywords: Nonlinear PDE for probability measures, path-distribution dependent SDEs, Wasserstein distance, Harnack inequality, coupling by change of measure.
\vskip 2cm
\section{Introduction}
In this paper, we investigate nonlinear PDEs for probability measures on the path space using path-distribution dependent SDEs. To explain the motivation of the study, let us start from the
following classical PDE on $\mathcal P(\mathbb R^d)$, the set of probability measures on $\mathbb R^d$ equipped with the weak topology: \begin{equation}\label{E1} \partial_t \mu(t)= L^* \mu(t),\ \ t\ge 0,\end{equation} for a second-order differential operator $$L:= \frac 1 2\sum_{i,j=1}^d a_{ij} \partial_i\partial_j+\sum_{i=1}^d b_i \partial_i,$$ where $a=(a_{ij}): \mathbb R^d\rightarrow \mathbb R^d\otimes \mathbb R^d$ and $b=(b_i): \mathbb R^d\rightarrow \mathbb R^d$ are locally integrable. \eqref{E1} is just the (linear) Fokker--Planck--Kolmogorov equation (FPKE) associated to the operator $L$ in the sense of \cite{BKRS}. We call $\mu\in C(\mathbb R_+; \mathcal P(\mathbb R^d))$ a solution of \eqref{E1} if $$\int_{\mathbb R^d} f\,\mathrm{d}\mu(t)= \int_{\mathbb R^d} f\,\mathrm{d}\mu(0) +\int_0^t \mathrm{d} s\int_{\mathbb R^d}(Lf)\,\mathrm{d}\mu(s),\ \ t\ge 0,\ f\in C_0^\infty(\mathbb R^d).$$ To construct and analyze solutions of \eqref{E1} using the time marginal distributions of Markov processes, as proposed by A.~N. Kolmogorov \cite{KL}, K. It\^o developed the theory of stochastic differential equations (SDEs); see e.g. \cite{IW}. Let $\sigma$ be a matrix-valued function such that $a=\sigma\sigma^*$, and let $W(t)$ be a $d$-dimensional Brownian motion.
Consider the following It\^o SDE: \begin{equation}\label{E2} \mathrm{d} X(t)= b(X(t))\,\mathrm{d} t+ \sigma(X(t))\, \mathrm{d} W(t).\end{equation} By It\^o's formula, the time marginals $\mu(t):=\mathcal L_{X(t)}$, the law of $X(t)$ for $t \geq 0$, solve equation \eqref{E1}. This enables one to investigate FPKEs using a probabilistic approach.
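For completeness, here is the short computation behind this claim, assuming enough regularity to take expectations. Applying It\^o's formula to $f\in C_0^\infty(\mathbb R^d)$ along a solution of the SDE gives

```latex
f(X(t)) = f(X(0)) + \int_0^t (Lf)(X(s))\,\mathrm{d} s
        + \int_0^t \big\langle \sigma^*(X(s))\nabla f(X(s)),\, \mathrm{d} W(s)\big\rangle .
```

Taking expectations removes the stochastic integral term, and since $\mathbb E f(X(t)) = \int_{\mathbb R^d} f\,\mathrm{d}\mu(t)$, this is exactly the weak formulation of the linear FPKE stated above.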
Obviously, \eqref{E1} is a linear equation. In applications, many important PDEs for probability measures (or probability densities) are nonlinear; see, for instance, \cite{CA, DV1, DV2, FG, Gu, V2} and references therein for the study of Landau type equations. Such PDEs are also of Fokker--Planck type, but are non-linear (see Sections 6.7 and 9.8(v) in \cite{BKRS}). To analyze non-linear FPKEs for probability measures, McKean--Vlasov equations were introduced by using SDEs with coefficients depending on the distribution of the solution.
Consider the following distribution-dependent version of \eqref{E2}: \beq\label{E3} \text{\rm{d}} X(t)= b(t,X(t),\scr L_{X(t)})\text{\rm{d}} t+ \sigma(t,X(t),\scr L_{X(t)}) \text{\rm{d}} W(t),\end{equation} where $$b: \mathbb R_+\times \mathbb R^d\times\scr P(\mathbb R^d)\rightarrow \mathbb R^d,\ \ \sigma: \mathbb R_+\times \mathbb R^d\times\scr P(\mathbb R^d)\rightarrow \mathbb R^d\otimes\mathbb R^d$$ are measurable. For any $t\ge 0$ and $\mu\in \scr P(\mathbb R^d)$, consider the second-order differential operator $$L_{t,\mu}:= \ff 1 2 \sum_{i,j=1}^d (\sigma\sigma^*)_{ij}(t,\cdot,\mu)\pp_i\pp_j +\sum_{i=1}^d b_i(t,\cdot,\mu)\pp_i.$$ Under reasonable integrability conditions on $\sigma$ and $b$, by It\^o's formula we see that for a solution $X(t)$ of \eqref{E3}, $\mu(t):=\scr L_{X(t)}$ solves the nonlinear FPKE \beq\label{E4} \pp_t \mu(t)= L_{t,\mu(t)}^* \mu(t)\end{equation} in the sense that $$\int_{\mathbb R^d} f\text{\rm{d}}\mu(t)= \int_{\mathbb R^d} f\text{\rm{d}}\mu(0) +\int_0^t \text{\rm{d}} s\int_{\mathbb R^d}(L_{s,\mu(s)}f)\text{\rm{d}}\mu(s),\ \ t\ge 0,~ f\in C_0^\infty(\mathbb R^d).$$
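A standard illustrative example (our own, not needed in the sequel): take $\sigma=\ss 2\, I_d$ and $b(t,x,\mu)= -\nabla V(x)-(\nabla W*\mu)(x)$ for smooth functions $V,W$, where $(\nabla W*\mu)(x):=\int_{\mathbb R^d}\nabla W(x-y)\,\mu(\text{\rm{d}} y)$. If $\mu(t)(\text{\rm{d}} x)=\rho(t,x)\,\text{\rm{d}} x$, then \eqref{E4} becomes the granular media equation
$$\pp_t \rho= \Delta \rho+ {\rm div}\big(\rho\,\nabla V+ \rho\,\nabla W*\rho\big),$$
whose drift depends on the solution itself through the convolution term; this is the prototype of the nonlinearity considered here.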
There are plentiful references on SDEs of this type, but most concern existence, uniqueness and moment estimates; see \cite{SZ, MV} and references therein.
In the recent paper \cite{W16}, regularity estimates on the distribution, including exponential convergence and Harnack type inequalities, are presented. {\bf See also \cite{But, DV1, DV2, EGZ} for the study of ergodicity of distribution dependent SDEs.}
In the above two situations, the stochastic systems are Markovian (or memory-free); i.e.\ the evolution of the system does not depend on its past. However, many real-world models, in particular those arising from mathematical finance and biology, involve memory, so that the associated evolution equations are path dependent; {\bf see, for instance, the monograph \cite{Moh} for specific models. See also \cite{BWY17, BWY18, HMS} and references therein for the study of regularity and ergodicity of the associated FPKEs (i.e.\ of the distributions of the functional solutions).}
In this paper, we investigate nonlinear FPKEs on the path space by using path-distribution dependent SDEs. In Section 2, we introduce the framework and state the main results on nonlinear FPKEs for probability measures on the path space. To prove these results, we investigate the corresponding path-distribution dependent SDEs in Sections 3--5, where strong/weak existence and uniqueness of solutions as well as Harnack type inequalities are derived, respectively. We mainly follow the ideas of \cite{W16}, but substantial additional effort is needed to generalize the results therein to the case where the coefficients depend not only on the time marginals but also on the distribution of the path.
\section{Nonlinear PDEs for measures on path space}
Throughout the paper, we fix $r_0>0$ and consider the path space $\scr C:= C([-r_0,0];\mathbb R^d)$ equipped with the uniform norm $\|\xi\|_\infty:=\sup_{\theta\in [-r_0,0]}|\xi(\theta)|.$ Let $\scr P_2^{\scr C}$
be the class of probability measures on $\scr C$ with finite second moment, i.e.\ $\mu(\|\cdot\|_\infty^2):=\int_{\scr C} \|\xi\|_\infty^2\mu(\text{\rm{d}}\xi)<\infty.$ On $\scr P_2^{\scr C}$ we consider the Wasserstein distance
$$\mathbb W_2(\mu,\nu):=\inf_{\pi\in \scr C(\mu,\nu)} \bigg(\int_{\scr C\times\scr C} \|\xi-\eta\|_\infty^2\,\pi(\text{\rm{d}}\xi,\text{\rm{d}}\eta)\bigg)^{\ff 1 2},$$ where $\scr C(\mu,\nu)$ denotes the class of couplings of $\mu$ and $\nu$. It is well known that $(\scr P_2^{\scr C}, \mathbb W_2)$ is a Polish space and that the $\mathbb W_2$-metric is consistent with the weak topology. We will study nonlinear FPKEs on $\scr P_2^{\scr C}$.
Let \beq\label{*0D} b: \mathbb R_+\times \scr C\times \scr P_2^{\scr C}\rightarrow \mathbb R^d;\ \ \sigma: \mathbb R_+\times \scr C\times \scr P_2^{\scr C}\rightarrow \mathbb R^d\otimes\mathbb R^d\end{equation} be measurable.
For any $t\ge 0$ and $\mu\in \scr P_2^{\scr C}$, consider the following differential operator $L_{t,\mu}$ from $C_0^\infty(\mathbb R^d)$ to the set of all $\B(\scr C)$-measurable functions: for $f\in C_0^\infty(\mathbb R^d)$,
\begin{align*}
(L_{t,\mu}f)(\xi):=&\ \ff 1 2 \sum_{i,j=1}^d (\sigma\sigma^*)_{ij}(t, \xi, \mu) (\pp_i\pp_j f)(\xi(0))\\
&+\sum_{i=1}^d b_i(t,\xi,\mu) (\pp_if)(\xi(0)),\ \ \xi \in \scr C.
\end{align*} Then the associated nonlinear FPKE for probability measures $(\mu_t)_{t\ge 0}$ on the path space $\scr C$ is \beq\label{EM} \pp_t \mu(t) = L_{t,\mu_t}^* \mu_t,\end{equation} where $\mu(t)$ is the marginal distribution of $\mu_t$ at $\theta=0$; i.e. $$\{\mu(t)\}(\text{\rm{d}} x):= \mu_t(\{\xi\in \scr C: \xi(0)\in \text{\rm{d}} x\}). $$
A continuous functional $\mu_\cdot: \mathbb R_+\rightarrow \scr P_2^{\scr C}$ is called a solution to \eqref{EM} if $\int_0^t \text{\rm{d}} s\int_{\scr C}|L_{s,\mu_s}f|\text{\rm{d}}\mu_s<\infty$ for $f\in C_0^\infty(\mathbb R^d)$ and \beq\label{S}\int_{\mathbb R^d} f\text{\rm{d}}\mu(t)=\int_{\mathbb R^d} f\text{\rm{d}}\mu(0) +\int_0^t \text{\rm{d}} s\int_{\scr C}(L_{s,\mu_s}f)\text{\rm{d}}\mu_s,\ \ t\ge 0,~ f\in C_0^\infty(\mathbb R^d).\end{equation} {\bf Since \eqref{S} only characterizes the evolution of the marginal distribution $\mu(t)$ of $\mu_t$, the solution may not be unique. For instance, when $L_{t,\mu}=L$ is distribution independent and has an invariant probability measure $\mu(0)$, any $\mu_t$ with marginal distribution $\mu(0)$ at $\theta=0$ solves \eqref{EM} in the sense of \eqref{S}. To select the unique solution associated with the corresponding path-distribution dependent SDEs, we only consider martingale solutions of \eqref{EM}, a special class of solutions realized as marginals of probability measures on the infinite-time path space $\scr C_\infty:=C([-r_0,\infty);\mathbb R^d)$. As shown in Theorem \ref{T1.1} below, in many cases the martingale solution is unique.}
For a probability measure $\mu^\infty$ on $\scr C_\infty$, consider its marginal distributions $$\mu^\infty(t):= \mu^\infty \circ \{\pi(t)\}^{-1}\in \scr P(\mathbb R^d),\ \ \mu_t^\infty:= \mu^\infty \circ\pi_t^{-1}\in \scr P(\scr C),\ \ t\ge 0,$$ where $\pi(t): C([-r_0,\infty);\mathbb R^d)\rightarrow\mathbb R^d$ and $\pi_t: C([-r_0,\infty);\mathbb R^d)\rightarrow\scr C$ are the projection operators defined by $$\pi(t) \xi= \xi(t)\in \mathbb R^d,\ \ \ \pi_t\xi= \xi_t\in \scr C \ \text{with}\ \xi_t(\theta):=\xi(t+\theta)\ \text{for}\ \theta\in [-r_0,0].$$
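Note that the two projections are compatible: $\pi(t)\xi=(\pi_t\xi)(0)$ for $t\ge 0$, so that
$$\mu^\infty(t)= \mu_t^\infty\circ\{\xi\mapsto \xi(0)\}^{-1};$$
i.e.\ the point marginal $\mu^\infty(t)$ is recovered from the path marginal $\mu_t^\infty$ exactly as $\mu(t)$ is recovered from $\mu_t$ below \eqref{EM}.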
\begin{defn} A solution $(\mu_t)_{t\ge 0}$ of \eqref{EM} is called a martingale solution if there exists a probability measure $\mu^\infty$ on $\scr C_\infty$ such that \begin{enumerate} \item[(1)] $\mu_t=\mu_t^\infty$ for all $t\ge 0$. \item[(2)] For any $f\in C_0^\infty(\mathbb R^d)$, the family of functionals $$M^f(t):= f(\pi(t)\cdot)-\int_0^t (L_{s, \mu_s} f)(\pi_s\cdot)\text{\rm{d}} s,\ \ t\ge 0$$ on $\scr C_\infty$ is a $\mu^\infty$-martingale; that is, $$\int_{A} M^f(t_2) \text{\rm{d}}\mu^\infty = \int_{A} M^f(t_1) \text{\rm{d}}\mu^\infty,\ \ t_2>t_1\ge 0,~ A\in \sigma (\pi(s): s\le t_1),$$ where $\sigma (\pi(s): s\le t_1)$ is the $\sigma$-field on $\scr C_\infty$ induced by the projections $\pi(s)$ for $s\in [-r_0,t_1]$. \end{enumerate} \end{defn}
To construct the martingale solutions of \eqref{EM} using path-distribution dependent SDEs, we need the following assumptions.
\begin{enumerate} \item[$(H1)$] (Continuity) For every $t\ge 0$, $b(t,\cdot,\cdot)$ is continuous on $\scr C \times\scr P_2^{\scr C}$, and there exist locally bounded functions $\aa_1,\aa_2: \mathbb R_+\rightarrow \mathbb R_+$ such that
$$\|\sigma(t,\xi,\mu)- \sigma(t,\eta,\nu)\|^2\le \aa_1(t)\|\xi-\eta\|_{\infty}^2+ \aa_2(t) \mathbb W_2(\mu,\nu)^2,\ \ t\ge 0;~ \xi,\eta\in \scr C;~ \mu,\nu\in \scr P_2^{\scr C}.$$
\item[$(H2)$] (Monotonicity) There exist a constant $\kk\ge 0$ and locally bounded functions $\bb_1,\bb_2: \mathbb R_+\rightarrow \mathbb R_+$ such that
\begin{align*} &2\langle b(t,\xi,\mu)- b(t,\eta,\nu), \xi(0)-\eta(0)\rangle+\|\sigma(t,\xi,\mu)- \sigma(t,\eta,\nu)\|_{HS}^2\\
&\le \bb_1(t) \|\xi-\eta\|_{\infty}^2+ \bb_2(t)\mathbb W_2(\mu,\nu)^2 - \kk |\xi(0)-\eta(0)|^2,\ \ t\ge 0;~ \xi,\eta\in \scr C;~ \mu,\nu\in \scr P_2^{\scr C}.\end{align*} \item[$(H3)$] (Growth) $b$ is bounded on bounded sets in $[0,\infty)\times \scr C\times \scr P_2^{\scr C}$, and there exists a locally bounded function $K: \mathbb R_+\rightarrow\mathbb R_+$ such that
$$|b(t,0,\mu)|^2+ \|\sigma(t,0,\mu)\|^{2}\le K(t) \big\{1+\mu(\|\cdot\|_{\infty}^2)\big\},\ \ t\ge 0,~ \mu\in \scr P_2^{\scr C}.$$ \end{enumerate}
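To illustrate the assumptions, here is a simple example of admissible coefficients (a toy example of ours, not used later): for constants $\lambda,\alpha,\beta\ge 0$ and a constant matrix $\sigma_0$, let
$$b(t,\xi,\mu):= -\lambda \xi(0)+\alpha\int_{-r_0}^0 \xi(\theta)\,\text{\rm{d}}\theta+ \beta\int_{\scr C} \eta(0)\,\mu(\text{\rm{d}}\eta),\ \ \sigma(t,\xi,\mu):=\sigma_0.$$
Then $(H1)$ and $(H3)$ hold trivially, and by Young's inequality,
$$2\langle b(t,\xi,\mu)-b(t,\eta,\nu), \xi(0)-\eta(0)\rangle \le -(2\lambda-\alpha r_0-\beta)|\xi(0)-\eta(0)|^2 + \alpha r_0\|\xi-\eta\|_\infty^2+\beta\,\mathbb W_2(\mu,\nu)^2,$$
so that $(H2)$ holds with $\bb_1=\alpha r_0$, $\bb_2=\beta$ and $\kk=2\lambda-\alpha r_0-\beta$, provided $\lambda\ge (\alpha r_0+\beta)/2$.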
The following result establishes existence and uniqueness of martingale solutions of \eqref{EM}, together with a $\mathbb W_2$-Lipschitz estimate.
\begin{thm}\label{T1.1} Assume $(H1)$--$(H3)$. Then for any $\mu_0\in \scr P_2^{\scr C}$, there exists a unique martingale solution $(\mu_t)_{t\ge 0}$ of \eqref{EM}. Moreover,
\begin{enumerate} \item[$(1)$] $\mu_t(\|\cdot\|_\infty^2)$ is locally bounded in $t$. \item[$(2)$] For any two martingale solutions $(\mu_t)_{t\ge 0}$ and $(\nu_t)_{t\ge 0}$ of \eqref{EM}, \begin{align*} &\mathbb W_2(\mu_t, \nu_t)^2\le \inf_{\vv \in (0,1)} \left\{\ff{ \mathbb W_2(\mu_0,\nu_0)^2 }{1-\vv} \right.\\
&\left. \times \inf_{\delta\in [0,\kk]} \exp\bigg[(r_0-t)\delta+ \ff{\text{\rm{e}}^{\delta r_0}}{1-\vv}\int_0^t\Big\{\ff{4 (\aa_1(r)+\aa_2(r))}\vv + \bb_1(r)+\bb_2(r) \Big\}\text{\rm{d}} r\bigg] \right\} \end{align*} holds for all $t\ge 0$. \end{enumerate} \end{thm}
From now on, for any $\mu_0,\nu_0\in \scr P_2^{\scr C}$, we denote by $\mu_t$ and $\nu_t$ the martingale solutions of \eqref{EM} starting at $\mu_0$ and $\nu_0$ respectively.
To estimate the continuity of $\mu_t$ in $\mu_0$ with respect to relative entropy and the total variation norm, we make the following stronger assumption.
\begin{enumerate} \item[{\bf(A)}] The diffusion coefficient depends only on time and the present state, $\sigma(t,\xi,\mu)=\sigma(t,\xi(0))$; $\sigma(t,x)$ is invertible; and there exist increasing functions $\kk_0,\kk_1,\kk_2, \ll: \mathbb R_+\rightarrow \mathbb R_+$ such that for any $t\ge 0$, $x,y\in\mathbb R^d$, $\xi,\eta\in \scr C$
and $\mu,\nu\in \scr P_2^{\scr C}$,
\begin{align*}
& |b(t,0,\mu)|^2+\|\sigma(t,x)\|^2\le \kk_0(t)(1+|x|^2+\mu(\|\cdot\|_{\infty}^2)),\\
& \|\sigma(t,\cdot)^{-1}\|_\infty\le \ll(t),\ \ \|\sigma(t,x)-\sigma(t,y)\|_{HS}^2 \le \kk_1(t)|x-y|^2,\\
&|b(t,\xi,\mu)-b(t,\eta,\nu)|\le\kk_{2}(t)(\|\xi-\eta\|_{\infty}+\mathbb W_2(\mu,\nu)).\end{align*} \end{enumerate}
Recall that for any two probability measures $\mu,\nu$ on a measurable space $(E,\scr F)$, the relative entropy and total variation norm are defined by
$$\Ent(\nu|\mu):= \begin{cases} \int \big(\log \ff{\text{\rm{d}}\nu}{\text{\rm{d}}\mu}\big)\text{\rm{d}}\nu, \ &\text{if}\ \nu\ \text{is absolutely continuous with respect to}\ \mu,\\
\infty,\ &\text{otherwise;}\end{cases}$$ and
$$\|\mu-\nu\|_{var} := \sup_{A\in\F}|\mu(A)-\nu(A)|.$$ By Pinsker's inequality (see \cite{CK, Pin}),
\beq\label{ETX} \|\mu-\nu\|_{var}^2\le \ff 1 2 \Ent(\nu|\mu),\ \ \mu,\nu\in \scr P(E).\end{equation} Then \eqref{TT2} below implies
\beq\label{TT}\|\mu_t-\nu_t\|_{var}^2\le \ff{\psi(t)}{2(t-r_0)}\mathbb W_2(\mu_0,\nu_0)^2 ,\ \ t>r_0,\ \mu_0,\nu_0\in \scr P_2^{\scr C}, \end{equation} for some $\psi\in C(\mathbb R_+;\mathbb R_+)$.
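Explicitly, \eqref{TT} follows by chaining the two estimates:
$$\|\mu_t-\nu_t\|_{var}^2\ \le\ \ff 1 2 \Ent(\nu_t|\mu_t)\ \le\ \ff{\psi(t)}{2(t-r_0)}\mathbb W_2(\mu_0,\nu_0)^2,\ \ t>r_0,$$
where the first inequality is \eqref{ETX} and the second is \eqref{TT2}.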
There are many examples where $\mathbb W_2(\mu_n,\mu_0)\rightarrow 0$ while $\mu_n$ is singular with respect to $\mu_0$, so that $\Ent(\mu_n|\mu_0)=\infty$ and $\|\mu_n-\mu_0\|_{var}=1.$ Hence
both \eqref{TT} and \eqref{TT2} are non-trivial. Indeed, these estimates correspond to the log-Harnack inequality for the associated semigroup; see Theorem \ref{T3.1} below for details.
\begin{thm}\label{T1.2} Assume {\bf (A)}. \begin{enumerate} \item[$(1)$] There exists $\psi\in C(\mathbb R_+;\mathbb R_+)$ such that
\beq\label{TT2} \Ent(\nu_t|\mu_t) \le \ff{\psi(t)}{t-r_0}\mathbb W_2(\mu_0,\nu_0)^2 ,\ \ t>r_0,~ \mu_0,\nu_0\in \scr P_2^{\scr C}. \end{equation}
\item[$(2)$] If there exists an increasing function $\kk_3: \mathbb R_+\rightarrow \mathbb R_+$ such that
\beq\label{*P} \|\sigma(t,x)-\sigma(t,y)\| \le \kk_3(t)(1\land |x-y|), \ \ t\ge 0,~ x,y\in\mathbb R^d,\end{equation} then there exists a positive continuous function $H$ defined on the domain
$$D:=\{(p,t): t\ge 0,~ p>(1+\kk_3(t)\ll(t))^2\},$$ such that
$$\int_{\scr C} \Big(\ff{\text{\rm{d}}\nu_t}{\text{\rm{d}}\mu_t}\Big)^{\ff 1 p} \text{\rm{d}}\nu_t\le \inf_{\pi\in\scr C(\mu_0,\nu_0)} \int_{\scr C\times\scr C} \text{\rm{e}}^{H(p,t)\big(1+\ff{|\xi(0)-\eta(0)|^2}{t-r_0} + \|\xi-\eta\|_\infty^2\big)}\text{\rm{d}}\pi$$
holds for all
$t>r_0$ and $p>(1+\kk_3(t)\ll(t))^2.$
\end{enumerate} \end{thm}
\paragraph{Remark 2.1.} According to Theorem \ref{T1.1}(2), if there exists a constant $\vv\in (0,1)$ such that \beq\label{EXO} \limsup_{t\rightarrow\infty} \ff 1 t \int_0^t \Big(\ff{4(\aa_1(s)+\aa_2(s))}{\vv(1-\vv)} + \ff{\bb_1(s)+\bb_2(s)}{1-\vv}\Big)\text{\rm{d}} s <\sup_{\delta\in [0,\kk]}\delta\text{\rm{e}}^{-\delta r_0},\end{equation} then \beq\label{EXO2} \mathbb W_2(\mu_t, \nu_t)^2\le c\text{\rm{e}}^{-\ll t} \mathbb W_2(\mu_0,\nu_0)^2,\ \ t\ge 0, \end{equation} holds for some constants $c,\ll>0$; i.e.\ the solution to \eqref{EM} is exponentially contractive in $\mathbb W_2$. If $\sigma(t,\cdot,\cdot)$ and $b(t,\cdot,\cdot)$ do not depend on $t$, i.e.\ the equation is time-homogeneous, we write $\mu_t= P_t^*\mu_0$. By uniqueness, $P_t^*$ is a semigroup, i.e.\ $P^*_{t+s}=P_t^*P_s^*$ for $s,t\ge 0.$ Then \eqref{EXO} implies that $P_t^*$ has a unique invariant probability measure $\mu\in \scr P_2^{\scr C}$. Combining \eqref{EXO2} with the semigroup property of $P_t^*$ and \eqref{TT}--\eqref{TT2}, we conclude that \eqref{EXO} also implies exponential convergence in entropy and total variation norm:
$$\max\{\Ent(\nu_t|\mu), \|\mu-\nu_t\|^2_{var}\} \le c_1 \mathbb W_2(\mu,\nu_{t-1})^2\le c_2\text{\rm{e}}^{-\ll t} \mathbb W_2(\mu,\nu_0)^2,\ \ t\ge 1,~ \nu_0\in\scr P_2^{\scr C}$$ for some constants $c_1,c_2>0$.
Finally, we investigate the shift quasi-invariance and differentiability of $\mu_t$ along Cameron--Martin vectors in $\H^1:= \{\xi\in \scr C: \int_{-r_0}^0 |\xi'(s)|^2\text{\rm{d}} s<\infty\}.$ For $\eta\in \scr C$ and a
probability measure $\mu$ on $\scr C$, we say that $\mu$ is differentiable along $\eta$ if for any $A\in \B(\scr C)$, $\pp_\eta \mu(A):= \ff{\text{\rm{d}}}{\text{\rm{d}} \vv} \mu(A+\vv \eta)\big|_{\vv=0}$ exists and $\pp_\eta\mu(\cdot)$ is a
signed measure on $\scr C$.
\begin{thm}\label{T1.3} Assume {\bf (A)}, let $b(t,\cdot,\mu)$ be differentiable on $\scr C$, and let $\sigma(t,x)=\sigma(t)$ be independent of $x$. Then for any $t>r_0$, $\eta\in \H^1$ and $\mu_0\in \scr P_2^{\scr C}$,
$\mu_t$ is differentiable along $\eta$, both $\pp_\eta\mu_t$ and $\mu_t(\cdot+\eta)$ are absolutely continuous with respect to $\mu_t$, and for some $\Psi\in C(\mathbb R_+;\mathbb R_+)$,
\begin{align*} &\int_{\scr C} \Big(\log \ff{\text{\rm{d}}\mu_t(\cdot+\eta)}{\text{\rm{d}}\mu_t}\Big)\text{\rm{d}}\mu_t (\cdot+\eta) \le \Psi(t) \Big(\ff{|\eta(-r_0)|^2}{t-r_0} + \|\eta\|_{\H^1}^2\Big),\\
&\int_{\scr C} \Big( \ff{\text{\rm{d}}\mu_t(\cdot+\eta)}{\text{\rm{d}}\mu_t}\Big)^{\ff 1 p} \text{\rm{d}}\mu_t (\cdot+\eta)\le \exp\bigg[\Psi(t) \Big(\ff{|\eta(-r_0)|^2}{t-r_0} + \|\eta\|_{\H^1}^2\Big)\bigg],\ \ p>1,\\
& \int_{\scr C}\Big| \ff{\text{\rm{d}}\pp_\eta\mu_t}{\text{\rm{d}}\mu_t}\Big|^2 \text{\rm{d}}\mu_t \le \Psi(t) \Big(\ff{|\eta(-r_0)|^2}{t-r_0} + \|\eta\|_{\H^1}^2\Big).\end{align*}
\end{thm}
\begin{proof}[Proof of Theorems \ref{T1.1}--\ref{T1.3}] For $\mu_0\in \scr P_2^{\scr C}$, take an $\F_0$-measurable random variable $X_0$ on $\scr C$ such that $\scr L_{X_0}=\mu_0$. According to Theorem \ref{T2.1}, Corollary \ref{C3.3}, Corollary \ref{C4.2} and \eqref{ETX}, $\mu_t:= \scr L_{X_t}$ satisfies the estimates in Theorems \ref{T1.1}--\ref{T1.3} under the corresponding assumptions. So, it suffices to show that $(\scr L_{X_t})_{t\ge 0}$ is
the unique martingale solution of \eqref{EM}.
Let $\mu^\infty = \scr L_{\{X(s)\}_{s\in [-r_0,\infty)}}$. Then $\scr L_{X_t} =\mu_t^\infty$. By \eqref{ED1} and It\^o's formula, for any $f\in C_0^\infty(\mathbb R^d)$, $(M^f(t))_{t\ge 0}$ is a $\mu^\infty$-martingale, and $\mu_t:=\scr L_{X_t}$ satisfies \begin{align*} \int_{\mathbb R^d} f\text{\rm{d}}\mu(t)&=\E f(X(t))= \E f(X(0)) +\int_0^t \E (L_{s,\mu_s}f)(X_s)\text{\rm{d}} s\\ &= \int_{\mathbb R^d} f\text{\rm{d}}\mu(0) + \int_0^t\text{\rm{d}} s\int_{\scr C} (L_{s,\mu_s}f)\text{\rm{d}}\mu_s,\ \ t\ge 0,~ f\in C_0^\infty(\mathbb R^d).\end{align*} Therefore, $\scr L_{\{X(s)\}_{s\in [-r_0,\infty)}}$ is a martingale solution of \eqref{EM}. When the coefficients are distribution-free, it is well known that weak solutions of \eqref{ED1} are equivalent to martingale solutions, so that the uniqueness of the martingale solutions of \eqref{EM} follows from Theorem \ref{T2.1}(3) below. In the following, we explain why the same remains true in the present distribution dependent case.
Let $\mu_t=\mu_t^\infty$, for some probability measure $\mu^\infty$ on $\scr C_\infty$, be a martingale solution of \eqref{EM}. We intend to prove $\mu^\infty= \scr L_{\{X(s)\}_{s\in [-r_0,\infty)}},$ so that the martingale solution is unique. Let $\bar \OO:=\scr C_\infty$, let $\bar \F_t$ for $t\ge 0$ be the completion of $\sigma(\pi(s): s\le t)$ with respect to $\mu^\infty$, and let $\bar \P:=\mu^\infty$. By Theorem \ref{T2.1}(3) below, it suffices to prove that the coordinate process $$\bar X (t)(\oo):= \oo(t),\ \ t\ge 0,~ \oo\in \bar\OO$$ is a weak solution to \eqref{ED1}. To this end, for the given $(\mu_t)_{t\ge 0}$, define $$\bar\sigma(t,\xi):= \sigma(t,\xi,\mu_t),\ \ \ \bar b(t,\xi):= b(t,\xi,\mu_t),\ \ t\ge 0,~ \xi\in\scr C,$$ and consider the corresponding operator $$(\bar L_t f)(\xi):= \ff 1 2\sum_{i,j=1}^d (\bar\sigma\bar\sigma^*)_{ij}(t,\xi)(\pp_i\pp_jf)(\xi(0)) +\sum_{i=1}^d \bar b_i(t,\xi) (\pp_if)(\xi(0)),\ \ t\ge 0,~ \xi\in \scr C$$ for $f\in C_0^\infty(\mathbb R^d).$ Since $(\mu_t)_{t\ge 0}$ is a martingale solution of \eqref{EM}, for any $f\in C_0^\infty(\mathbb R^d)$, the process $$ M^f(t):=f(\bar X(t))-f(\bar X(0))-\int_0^t (\bar L_sf)(\bar X_s)\text{\rm{d}} s,\ \ t\ge 0$$ is a martingale on the probability space $(\bar\OO, (\bar\F_t)_{t\ge 0}, \bar\P).$
By $(H1)$-$(H3)$, the martingale property extends to polynomials $f$ of degree at most $2$.
In particular, taking $f(x)= x$ (componentwise) we see that \beq\label{AB} M(t):=M^f(t)= \bar X(t)-\bar X(0)-\int_0^t \bar b(s, \bar X_s)\text{\rm{d}} s\end{equation} is an $\mathbb R^d$-valued martingale, and with $f(x):=x_ix_j$ we conclude that $$\langle M_i,M_j\rangle(t) =\int_0^t (\bar\sigma\bar\sigma^*)_{ij}(s, \bar X_s)\text{\rm{d}} s,\ \ 1\le i,j\le d.$$ Then according to Stroock--Varadhan (see, for example, Theorems 4.5.1 and 4.5.2 in \cite{SV}), we may construct a $d$-dimensional Brownian motion $\tilde W(t)$ on a product probability space of $(\tilde \OO, \tilde\F_t,\tilde \P)$ with $(\bar \OO, \bar\F_t,\bar \P)$ as a marginal space (when $\sigma$ is invertible these two spaces coincide), such that $$M(t)=\int_0^t \bar\sigma(s,\bar X_s)\text{\rm{d}} \tilde W(s),\ \ t\ge 0.$$ Combining this with \eqref{AB}, we see that $\bar X(t)$ solves the
stochastic functional differential equation
\beq\label{DD0} \text{\rm{d}} \bar X(t)= \bar b(t,\bar X_t)\text{\rm{d}} t+\bar \sigma(t,\bar X_t)\text{\rm{d}} \tilde W(t)\end{equation} with $\scr L_{\bar X_0}|_{\tilde\P}=\scr L_{\bar X_0}|_{\bar\P}=\mu_0.$
Since, by definition, $\mu_t=\scr L_{\bar X_t}|_{\bar \P}=\scr L_{\bar X_t}|_{\tilde \P}$, $\bar X(t)$ solves the path-distribution dependent SDE
$$\text{\rm{d}} \bar X(t)= b(t,\bar X_t, \scr L_{\bar X_t}|_{\tilde\P}) \text{\rm{d}} t+ \sigma(t,\bar X_t, \scr L_{\bar X_t}|_{\tilde\P})\text{\rm{d}} \tilde W(t);$$ i.e.\ $(\bar X, \tilde W)$ is a weak solution of \eqref{ED1}. Noting that $\mu^\infty:= \scr L_{\bar X}|_{\bar\P} =\scr L_{\bar X}|_{\tilde\P},$ by the weak uniqueness of \eqref{ED1} due to Theorem \ref{T2.1}(3) below, we obtain $\mu^\infty = \scr L_{\{X(s)\}_{s\in [-r_0,\infty)}}$ as desired. \end{proof}
\section{Path-distribution dependent SDEs}
Recall that for $\gg(\cdot)\in C([-r_0,\infty);\mathbb R^d)$, the segment functional $\gg_\cdot\in C(\mathbb R_+;\scr C)$ is defined by $$\gg_t(\theta):= \gg(t+\theta),\ \ \theta\in [-r_0,0],~ t\ge 0.$$ For $\sigma,b$ as in \eqref{*0D}, consider the following path-distribution dependent SDE on $\mathbb R^d$: \beq\label{ED1} \text{\rm{d}} X(t)= b(t,X_t,\scr L_{X_t})\,\text{\rm{d}} t+ \sigma(t,X_t, \scr L_{X_t})\,\text{\rm{d}} W(t), \end{equation} where $W=(W(t))_{t\geq 0}$ is a $d$-dimensional standard Brownian motion on a complete filtered probability space $(\OO, \F, \{\F_{t}\}_{t\ge 0}, \P)$, and $\scr L_{X_t}$ is the distribution of $X_t$. We investigate the strong solutions of \eqref{ED1} and properties of their distributions.
We first recall the definitions of strong and weak solutions; see for instance \cite[Definition 1.1]{W16} in the path independent setting. For simplicity, we only consider square integrable solutions. \begin{defn} $(1)$ For any $s\ge 0$, a continuous adapted process $(X_{s,t})_{t\ge s}$ on $\scr C$ is called a (strong) solution of \eqref{ED1} from time $s$, if
$$\E\|X_{s,t}\|_\infty^2 +\int_s^t \E\big\{|b(r,X_{s,r},\scr L_{X_{s,r}})|+\|\sigma(r,X_{s,r}, \scr L_{X_{s,r}})\|^2\big\}\text{\rm{d}} r<\infty,\ \ t\ge s,$$ and $(X_{s,\cdot}(t):= X_{s,t}(0))_{t\ge s}$ satisfies $\P$-a.s. $$X_{s,\cdot}(t) = X_{s,\cdot}(s) +\int_s^t b(r,X_{s,r}, \scr L_{X_{s,r}})\text{\rm{d}} r + \int_s^t \sigma(r,X_{s,r},\scr L_{X_{s,r}})\text{\rm{d}} W(r),\ \ t\ge s.$$
We say that \eqref{ED1} has (strong or pathwise) existence and uniqueness if for any $s\ge 0$ and any $\F_s$-measurable random variable $X_{s,s}$ with $\E\|X_{s,s}\|_{\infty}^2<\infty$, the equation from time $s$ has a unique solution $(X_{s,t})_{t\ge s}$. When $s=0$ we simply write $X_{0,\cdot}=X$; i.e.\ $X_{0,\cdot}(t)=X(t)$ and $X_{0,t}=X_t$ for $t\ge 0$.
$(2)$ A couple $(\tilde} \def\Ric{\text{\rm{Ric}} X_{s,t}, \tilde} \def\Ric{\text{\rm{Ric}} W(t))_{t\ge s}$ is called a weak solution to \eqref{ED1} from time $s$, if $\tilde} \def\Ric{\text{\rm{Ric}} W(t)$ is a $d$-dimensional Brownian motion on a complete filtered probability space $ (\tilde} \def\Ric{\text{\rm{Ric}}\OO, \{\tilde} \def\Ric{\text{\rm{Ric}}\F_t\}_{t\ge s}, \tilde} \def\Ric{\text{\rm{Ric}}\P)$, and $\tilde} \def\Ric{\text{\rm{Ric}} X_{s,t}$ solves
\beq\label{E1'} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde} \def\Ric{\text{\rm{Ric}} X_{s,}(t)= b(t,\tilde} \def\Ric{\text{\rm{Ric}} X_{s,t}, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde} \def\Ric{\text{\rm{Ric}} X_{s,t}}|_{\tilde} \def\Ric{\text{\rm{Ric}}\P})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \sigma} \def\ess{\text{\rm{ess}}(t,\tilde} \def\Ric{\text{\rm{Ric}} X_{s,t}, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde} \def\Ric{\text{\rm{Ric}} X_{s,t}}|_{\tilde} \def\Ric{\text{\rm{Ric}}\P})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde} \def\Ric{\text{\rm{Ric}} W(t),\ \ t\ge s.\end{equation}
$(3)$ \eqref{ED1} is said to satisfy weak uniqueness, if for any $s\ge 0$, the distribution of a weak solution $(X_{s,t})_{t\ge s}$ to \eqref{ED1} from $s\ge 0$ is uniquely determined by $\scr L_{X_{s,s}}$. \end{defn}
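To fix ideas, here is a simple illustrative special case (it is not taken from the cited references and is not needed in the sequel): take $\sigma(t,\xi,\mu)=I_d$ and the linear drift
$$b(t,\xi,\mu)=\alpha\,\xi(-r_0)+\beta\int_{\scr C}\eta(0)\,\mu(\text{\rm{d}}\eta),\ \ \alpha,\beta\in\mathbb R.$$
Then \eqref{ED1} becomes $\text{\rm{d}} X(t)=\{\alpha X(t-r_0)+\beta\,\E X(t)\}\text{\rm{d}} t+\text{\rm{d}} W(t)$, which depends on the past of the path through the delay term $X(t-r_0)$ and on the law of the solution through $\E X(t)$.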
When \eqref{ED1} has strong existence and uniqueness, the solution $(X_t)_{t\ge 0}$ is a Markov process in the sense that for any $s\ge 0$, $(X_t)_{t\ge s}$ is determined by solving the equation
from time $s$ with initial state $X_s$. More precisely, letting $\{X_{s,t}^{\xi}\}_{t\ge s}$ denote the solution of the equation from time $s$ with initial state $X_{s,s}=\xi$,
the existence and uniqueness imply
\beq\label{MK}
X_{s,t}^{\xi}= X_{u,t}^{X_{s,u}^{\xi}},\ \ t\ge u\ge s\ge 0, \xi\ {\rm is}\ \F_s\text{-measurable\ with\ } \E\|\xi\|_{\infty}^2<\infty.
\end{equation}
When \eqref{ED1} also has weak uniqueness, we may define a semigroup $(P_{s,t}^*)_{t\ge s}$ on $\scr P_2^\scr C} \def\aaa{\mathbf{r}} \def\r{r$ by letting $P_{s,t}^*\mu=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{s,t}}$ for $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{s,s}}=\mu\in \scr P_2^\scr C} \def\aaa{\mathbf{r}} \def\r{r$. Indeed, by \eqref{MK} we have \beq\label{SM} P_{s,t}^*= P_{u,t}^* P_{s,u}^*,\ \ t\ge u\ge s\ge 0. \end{equation} For simplicity we set $P_t^\ast=P_{0,t}^\ast,~ t\ge 0$.
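For later reference, let us spell out the short verification of \eqref{SM}: for an $\F_s$-measurable initial value $\xi$ with $\scr L_{\xi}=\mu$, \eqref{MK} and the weak uniqueness of the equation from time $u$ give
$$P_{s,t}^*\mu=\scr L_{X_{s,t}^{\xi}}=\scr L_{X_{u,t}^{X_{s,u}^{\xi}}}=P_{u,t}^*\scr L_{X_{s,u}^{\xi}}=P_{u,t}^*P_{s,u}^*\mu,\ \ t\ge u\ge s\ge 0.$$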
\begin} \def\beq{\begin{equation}} \def\F{\scr F{thm}\label{T2.1} Assume $(H1)$-$(H3)$. \begin} \def\beq{\begin{equation}} \def\F{\scr F{enumerate} \item[$(1)$] For any $s\geq 0$ and $X_{s,s}\in L^2(\OO\rightarrow}\def\l{\ell}\def\iint{\int \scr C} \def\aaa{\mathbf{r}} \def\r{r;\F_s)$, $\eqref{ED1}$ has a unique strong solution $(X_{s,t})_{t\ge s}$ with \beq\label{ES1}
\E \sup_{t\in [s,T]}\|X_{s,t}\|_{\infty}^{2}\le H(T) (1+ \E\|X_{s,s}\|_{\infty}^{2}),\ \ T\ge t\ge s\ge 0 \end{equation} for some increasing function $H:\mathbb R} \def\ff{\frac} \def\ss{\sqrt_+\rightarrow}\def\l{\ell}\def\iint{\int \mathbb R} \def\ff{\frac} \def\ss{\sqrt_+$. \item[$(2)$] For any two solutions $X_{s,t}$ and $Y_{s,t}$ of $\eqref{ED1}$ with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{s,s}},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_{s,s}}\in \scr P_2^\scr C} \def\aaa{\mathbf{r}} \def\r{r$, \begin} \def\beq{\begin{equation}} \def\F{\scr F{align*}
&\E \|X_{s,t}-Y_{s,t}\|_{\infty}^2\le \inf_{\vv\in (0,1)} \left\{ \ff{ \E\|X_{s,s}-Y_{s,s}\|_{\infty}^2 }{1-\vv} \right. \\ &\left.\times \inf_{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho\in [0,\kk]} \exp\bigg[(r_0+s-t)\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho + \ff{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho r_0}}{1-\vv}\int_s^t\Big\{\ff{4 (\aa_1(r)+\aa_2(r))}\vv + \bb_1(r)+\bb_2(r) \Big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\bigg] \right\}.\end{align*} \item[$(3)$] $\eqref{ED1}$ satisfies weak uniqueness, and for any $t\ge 0$, \begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} &\mathbb W_2(P_t^*\mu_0, P_t^*\nu_0)^2\le \inf_{\vv \in (0,1)} \left\{ \ff{ \mathbb W_2(\mu_0,\nu_0)^2 }{1-\vv} \right. \\ &\left.\times \inf_{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho\in [0,\kk]}\exp\bigg[(r_0-t)\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho+\ff{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho r_0}}{1-\vv}\int_0^t\Big\{\ff{4 (\aa_1(r)+\aa_2(r))}\vv + \bb_1(r)+\bb_2(r) \Big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\bigg] \right\}. \end{align*} \end{enumerate} \end{thm}
We will prove this result by using the argument of \cite{W16}. For fixed $s\ge 0$ and $\F_s$-measurable $\scr C} \def\aaa{\mathbf{r}} \def\r{r$-valued random variable $X_{s,s}$ with $\E\|X_{s,s}\|_{\infty}^2<\infty$, we construct the solution of \eqref{ED1} by iterating in distribution as follows. Firstly, let $$X^{(0)}_{s,t}(\theta)=X_{s,s}\big(0\land(t-s+\theta)\big)\ \text{for}\ \theta\in [-r_0,0], \ \mu_{s,t}^{(0)}=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X^{(0)}_{s,t}},\ \ t\ge s.$$ For any $n\ge 1,$ let $(X_{s,t}^{(n)})_{t\ge s}$ solve the classical path-dependent SDE \beq\label{EN} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X^{(n)}_{s,}(t)= b(t,X^{(n)}_{s,t}, \mu_{s,t}^{(n-1)}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \sigma} \def\ess{\text{\rm{ess}}(t,X^{(n)}_{s,t},\mu_{s,t}^{(n-1)})\,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(t),\ \ X_{s,s}^{(n)}=X_{s,s}, t\ge s, \end{equation} where $\mu_{s,t}^{(n-1)}:=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_{s,t}^{(n-1)}}$ and $X^{(n)}_{s,t}(\theta):= X^{(n)}_{s,}(t-s+\theta)$ for $\theta\in [-r_0, 0]$.
\begin} \def\beq{\begin{equation}} \def\F{\scr F{lem} \label{L2.1} Assume $(H1)$-$(H3)$. For every $n\ge 1$, the path-dependent SDE $\eqref{EN}$ has a unique strong solution $X^{(n)}_{s,t}$ with
\beq\label{*2} \E\sup_{t\in [s-r_0,T]} |X^{(n)}_{s,}(t)|^2<\infty,\ \ T>s, n\ge 1.\end{equation} Moreover, for any $T>0$, there exists $t_0>0$ such that for all $s\in [0,T]$ and $X_{s,s}\in L^2(\OO\rightarrow}\def\l{\ell}\def\iint{\int\scr C} \def\aaa{\mathbf{r}} \def\r{r;\scr F_s)$, \beq\label{*3}
\E \sup_{ t\in [s, s+t_0]} |X^{(n+1)}_{s,}(t)-X^{(n)}_{s,}(t)|^2\le 4 \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-n} \E\sup_{t\in [s,s+t_0]} |X^{(1)}_{s,}(t)|^2,\ \ s\in [0,T], n\ge 1. \end{equation} \end{lem}
\begin} \def\beq{\begin{equation}} \def\F{\scr F{proof} The proof is similar to that of \cite[Lemma 2.1]{W16}. Without loss of generality, we may assume that $s=0$ and simply denote $X_{0,}(t)=X(t), X_{0,t}=X_t, t\ge 0$.
(1) We first prove that the SDE \eqref{EN} has a unique strong solution and \eqref{*2} holds.
For $n=1$, let $$\bar b(t,\xi)= b(t,\xi, \mu_{t}^{(0)}),\ \ \bar\sigma} \def\ess{\text{\rm{ess}}(t,\xi)= \sigma} \def\ess{\text{\rm{ess}} (t,\xi, \mu_{t}^{(0)}),\ \ t\ge 0, \xi\in\scr C} \def\aaa{\mathbf{r}} \def\r{r.$$ Then \eqref{EN} reduces to \beq\label{EN*} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X^{(1)}(t)= \bar b(t, X_{t}^{(1)})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \bar\sigma} \def\ess{\text{\rm{ess}}(t, X_{t}^{(1)})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(t),\ \ X_{0}^{(1)}=X_{0}, t\ge 0.\end{equation} By $(H1)$-$(H3)$, the coefficients $\bar b$ and $\bar \sigma} \def\ess{\text{\rm{ess}}$ satisfy the standard monotonicity condition, which implies strong existence, uniqueness and non-explosion for the stochastic functional differential equation \eqref{EN*}; see e.g. \cite[Corollary 4.1.2]{Wbook} with $D=\mathbb{R}^d$ and $u_n=1$. It is also standard to prove \eqref{*2} using It\^o's formula
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D |X^{(1)}(t)|^2 &=2\big\langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\sigma} \def\ess{\text{\rm{ess}}(t,X_t^{(1)},\mu_t^{(0)})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(t), X^{(1)}(t)\big\>\\
&\quad + \big\{ 2\big\<b(t, X_t^{(1)}, \mu_t^{(0)}), X^{(1)}(t)\big\> + \|\sigma} \def\ess{\text{\rm{ess}}(t,X_t^{(1)},\mu_t^{(0)})\|_{HS}^2\big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t.\end{align*} By $(H1)$-$(H3)$, there exists an increasing function $H:\mathbb R} \def\ff{\frac} \def\ss{\sqrt_+\rightarrow}\def\l{\ell}\def\iint{\int\mathbb R} \def\ff{\frac} \def\ss{\sqrt_+$ such that
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*}& 2\big\<b(t,\xi,\mu_t^{(0)}), \xi(0)\big\> + \|\sigma} \def\ess{\text{\rm{ess}}(t,\xi,\mu_t^{(0)})\|_{HS}^2\\ &\le 2\big\<b(t,\xi,\mu_t^{(0)})-b(t, 0,\mu_t^{(0)}), \xi(0) \big\>
+ 2|b(t,0,\mu_t^{(0)})|\cdot |\xi(0)|\\
&+ 2\|\sigma} \def\ess{\text{\rm{ess}}(t,\xi,\mu_t^{(0)})-\sigma} \def\ess{\text{\rm{ess}}(t,0,\mu_t^{(0)})\|^2_{HS}+2\|\sigma} \def\ess{\text{\rm{ess}}(t,0,\mu_t^{(0)})\|^2_{HS}\\
&\le H(t) \big\{1+ \|\xi\|_{\infty}^2+\mu_t^{(0)}(\|\cdot\|_{\infty}^2)\big\},\ \ \ t\ge 0, \xi\in \scr C} \def\aaa{\mathbf{r}} \def\r{r.\end{align*}
Combining this with $(H3)$ and applying the BDG inequality for $p = 1$, for any $N\in [1,\infty)$ and $\tau_N:= \inf\{t\ge 0: |X^{(1)}(t)|\ge N\}$, we have
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} &\E \sup_{s\in [-r_0,t\land \tau_N]} |X^{(1)}(s)|^2
\le 4\mathbb{E}\|X^{(1)}_0\|_{\infty}^2+2H(t) \E\int_0^{t\land \tau_N} \big(1+\|X_s^{(1)}\|_{\infty}^2 + \mu_s^{(0)}(\|\cdot\|_{\infty}^2)\big)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s \\
&\qquad + 4 H(t) \E\bigg(\int_0^{t\land \tau_N} |X^{(1)}(s)|^2 \big(1+ \|X_s^{(1)}\|_{\infty}^2 + \mu_s^{(0)}(\|\cdot\|_{\infty}^2)\big)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\bigg)^{\ff 1 2}\\
&\le 4\mathbb{E}\|X^{(1)}_0\|_{\infty}^2+ \ff 1 2 \E \sup_{s\in [-r_0,t\land \tau_N]} |X^{(1)}(s)|^2\\
&+\{2H(t)+ 8H(t)^2\} \E \int_0^{t\land \tau_N} \big(1+\|X_s^{(1)}\|_{\infty}^2 + \mu_s^{(0)}(\|\cdot\|_{\infty}^2) \big)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s,\ \ t\ge 0.\end{align*} This implies \begin{equation*}\begin{split}
&\E \sup_{s\in [-r_0,t\land \tau_N]} |X^{(1)}(s)|^2 \le 8\mathbb{E}\|X^{(1)}_0\|_{\infty}^2\\
&+ \{4H(t)+ 16H(t)^2\} \int_0^{t} \big\{1+ \E \sup_{r\in [-r_0,s\land \tau_N]}|X^{(1)}(r)|^2 + \mu_s^{(0)}(\|\cdot\|_{\infty}^2)\big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s,\ \ t\ge 0. \end{split}\end{equation*} By first applying Gronwall's lemma and then letting $N\rightarrow}\def\l{\ell}\def\iint{\int\infty$, we arrive at
$$\E \sup_{s\in [-r_0,t]} |X^{(1)}(s)|^2<\infty,\ \ t\ge 0.$$ Therefore, \eqref{*2} holds for $n=1$.
Now, assuming that the assertion holds for $n=k$ for some $k\ge 1$, we intend to prove it for $n=k+1$. This can be done by repeating the above argument with
$(X_\cdot^{(k+1)}, \mu_\cdot^{(k)}, X_\cdot^{(k)})$ replacing $(X_\cdot^{(1)}, \mu_\cdot^{(0)},X_\cdot^{(0)})$, so we omit the details.
(2) To prove \eqref{*3}, let \begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} &\xi^{(n)}(t) = X^{(n+1)}(t)- X^{(n)}(t),\\ &\LL_t^{(n)}= \sigma} \def\ess{\text{\rm{ess}}(t,X_t^{(n+1)},\mu_t^{(n)})- \sigma} \def\ess{\text{\rm{ess}}(t,X_t^{(n)},\mu_t^{(n-1)}),\\ &B_t^{(n)}= b(t,X_t^{(n+1)}, \mu_t^{(n)} ) - b(t,X_t^{(n)}, \mu_t^{(n-1)} ).\end{align*}
By $(H2)$ and It\^o's formula, there exists an increasing function $K_1:\mathbb R} \def\ff{\frac} \def\ss{\sqrt_+\rightarrow}\def\l{\ell}\def\iint{\int\mathbb R} \def\ff{\frac} \def\ss{\sqrt_+$ such that
$$\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D |\xi^{(n)}(t)|^2 \le 2 \langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\LL_t^{(n)} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(t), \xi^{(n)}(t)\>+ K_1(t) \big\{\|\xi_t^{(n)}\|_{\infty}^2 + \mathbb W_2(\mu_t^{(n)}, \mu_t^{(n-1)})^2\big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t.$$
By the BDG inequality for $p = 1$ and since $\mathbb W_2(\mu_s^{(n)}, \mu_s^{(n-1)})^2\le \E\|\xi_s^{(n-1)}\|_\infty^2$, we obtain
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*}& \E \sup_{s\in [0,t]} |\xi^{(n)}(s)|^2\le 2 \E\sup_{s\in [0,t]} \int_0^s \langle} \def\>{\rangle} \def\GG{\Gamma} \def\gg{\gamma\LL_r^{(n)}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(r), \xi^{(n)}(r)\>\\
&\qquad + K_1(t) \int_0^t \Big\{\E \|\xi_s^{(n)}\|_{\infty}^2 + \mathbb W_2(\mu_s^{(n)}, \mu_s^{(n-1)})^2\Big\} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\\
& \le 4\E\bigg(\int_0^t \big\{|\xi^{(n)}(s)|^2 \|\LL_s^{(n)}\|^2\big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\bigg)^{\ff 1 2}
+ K_1(t) \int_0^t \Big\{\E \|\xi_s^{(n)}\|_{\infty}^2 + \mathbb W_2(\mu_s^{(n)}, \mu_s^{(n-1)})^2\Big\} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\\
&\le \ff 1 2\E \sup_{s\in [0,t]} |\xi^{(n)}(s)|^2 + 8\int_0^t\E \|\LL_s^{(n)}\|^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s + K_1(t) \int_0^t \Big\{\E \|\xi_s^{(n)}\|_{\infty}^2 + \mathbb W_2(\mu_s^{(n)}, \mu_s^{(n-1)})^2\Big\} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s. \end{align*}
Combining this and $(H1)$ we deduce that
$$\E \sup_{s\in [0,t]} |\xi^{(n)}(s)|^2 \le K_2(t) \int_0^t \Big\{\E \sup_{r\in[0,s]}|\xi^{(n)}(r)|^2 + \mathbb W_2(\mu_s^{(n)}, \mu_s^{(n-1)})^2\Big\} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s,\ \ t\ge 0 $$ for some increasing function $K_2: \mathbb R} \def\ff{\frac} \def\ss{\sqrt_+\rightarrow}\def\l{\ell}\def\iint{\int \mathbb R} \def\ff{\frac} \def\ss{\sqrt_+.$ By
Gronwall's Lemma, we obtain
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} & \E \sup_{s\in [0,t]} |\xi^{(n)}(s)|^2 \le tK_2(t) \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{tK_2(t)} \sup_{s\in [0,t]} \mathbb W_2(\mu_s^{(n)}, \mu_s^{(n-1)})^2\\ &\le tK_2(t) \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{tK_2(t)}
\E \sup_{s\in [0,t]}|\xi^{(n-1)}(s)|^2,\ \ \ t\ge 0.\end{align*}
Taking $t_0>0$ such that $ t_0K_2(T) \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{t_0K_2(T)}\le \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-1}$, we arrive at
$$\E \sup_{s\in [0,t_0]} |\xi^{(n)}(s)|^2\le \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{-1}\E \sup_{s\in [0,t_0]} |\xi^{(n-1)}(s)|^2,\ \ n\ge 1.$$ Since
$$\E \sup_{s\in [0,t_0]} |\xi^{(0)}(s)|^2\le 2 \E \Big\{|X(0)|^2+ \sup_{s\in [0,t_0]} |X^{(1)}(s)|^2\Big\}\le 4 \E \sup_{s\in [0,t_0]} |X^{(1)}(s)|^2,$$ we obtain \eqref{*3}. \end{proof}
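Let us note explicitly how the contraction estimate $\eqref{*3}$ will be used: by the triangle inequality in $L^2$, for all $m>n\ge 1$,
$$\Big(\E\sup_{t\in[s,s+t_0]}|X^{(m)}_{s,}(t)-X^{(n)}_{s,}(t)|^2\Big)^{\ff 1 2}\le 2\sum_{k=n}^{m-1}\text{\rm{e}}^{-k/2}\Big(\E\sup_{t\in[s,s+t_0]}|X^{(1)}_{s,}(t)|^2\Big)^{\ff 1 2},$$
and the right-hand side tends to $0$ as $n\rightarrow\infty$, so $(X^{(n)}_{s,})_{n\ge 1}$ is a Cauchy sequence in $L^2(\OO;C([s,s+t_0];\mathbb R^d))$. This is the source of the limit process constructed in the proof of Theorem \ref{T2.1} below.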
\begin} \def\beq{\begin{equation}} \def\F{\scr F{proof}[Proof of Theorem \ref{T2.1}] Without loss of generality, we only consider $s=0$ and simply denote $X_{0,}=X$; i.e.\,$X_{0,}(t)=X(t), X_{0,t}=X_t, t\ge 0$.
(1) Since the uniqueness follows from Theorem \ref{T2.1}(2), which will be proved in the next step, in this step we only prove existence and estimate \eqref{ES1}. By Lemma \ref{L2.1}, there exists a unique adapted continuous process $(X_t)_{t\in [0,t_0]}$ such that
\beq\label{A01}\lim_{n\rightarrow}\def\l{\ell}\def\iint{\int\infty} \sup_{t\in [0,t_0]} \mathbb W_2(\mu_t^{(n)},\mu_t)^2\le \lim_{n\rightarrow}\def\l{\ell}\def\iint{\int\infty} \E \sup_{t\in [0,t_0]} |X^{(n)}(t)- X(t)|^2=0, \end{equation} where $\mu_t$ is the distribution of $X_t$. By \eqref{EN}, $$X^{(n)}(t)=X(0)+ \int_0^t b(s,X^{(n)}_s,\mu_s^{(n-1)}) \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s +\int_0^t\sigma} \def\ess{\text{\rm{ess}}(s,X^{(n)}_s,\mu_s^{(n-1)})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(s). $$ Then \eqref{A01}, $(H1)$, $(H3)$ and the dominated convergence theorem imply that $\P$-a.s. $$X(t)= X(0)+\int_0^t b(s,X_s, \mu_s)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s +\int_0^t \sigma} \def\ess{\text{\rm{ess}}(s,X_s, \mu_s)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(s),\ \ t\in [0,t_0].$$
Therefore, $(X_t)_{t\in [0,t_0]}$ solves \eqref{ED1} up to time $t_0$, and \eqref{A01} implies $ \E \sup_{s\in [0,t_0]} |X(s)|^2<\infty.$ The same holds for $(X_{s,t})_{t\in [s,(s+t_0)\land T]}$ and $s\in [0,T]$. So, by solving the equation piecewise in time, and using the arbitrariness of $T>0$,
we conclude that \eqref{ED1} has a strong solution
$(X_t)_{t\ge 0}$ with
\beq\label{*X} \E \sup_{s\in [0,t]}|X(s)|^2<\infty,\ \ \ t\geq 0.\end{equation}
(2) By It\^o's formula and $(H2)$, we have \begin{equation*}\begin{split}
\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk t}|X(t)-Y(t)|^2\}\le& 2\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk t} \big\<X(t)-Y(t), \{\sigma} \def\ess{\text{\rm{ess}}(t,X_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t})-\sigma} \def\ess{\text{\rm{ess}}(t,Y_t,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_t})\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(t)\big\>\\
+&\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk t} \big\{\bb_1(t) \|X_t-Y_t\|_{\infty}^2+\bb_2(t)\mathbb W_2(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_t})^2\big\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t.
\end{split}\end{equation*} Noting that $\mathbb W_2(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_t})^2\le \E\|X_t-Y_t\|_{\infty}^2$, we see that
$\gg_t:= \sup_{s\in [-r_0,t]} \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk s^+}|X(s)-Y(s)|^2$ satisfies
\beq\label{KD1} \begin} \def\beq{\begin{equation}} \def\F{\scr F{split} \E\gg_t \le &\E\|X_0-Y_0\|_\infty^2 + \E\int_0^t (\bb_1+\bb_2)(r)\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk r} \|X_r-Y_r\|_\infty^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D r\\ &+2\E \sup_{s\in [0,t]} \int_0^s \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk r} \big\<X(r)-Y(r), \{\sigma} \def\ess{\text{\rm{ess}}(r,X_r,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_r})-\sigma} \def\ess{\text{\rm{ess}}(r,Y_r,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_r})\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(r)\big\>.\end{split}\end{equation} By $(H1)$, the BDG inequality for $p = 1$ and since
$\mathbb W_2(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_r},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_r})^2\le \E\|X_r-Y_r\|_{\infty}^2$, we have \begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} & 2\E \sup_{s\in [0,t]} \int_0^s \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk r} \big\<X(r)-Y(r), \{\sigma} \def\ess{\text{\rm{ess}}(r,X_r,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_r})-\sigma} \def\ess{\text{\rm{ess}}(r,Y_r,\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_r})\}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(r)\big\>\\
&\le 4 \E\bigg(\int_0^t \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{2\kk s} |X(s)-Y(s)|^2 \big(\aa_1(s)\|X_s-Y_s\|_\infty^2 + \aa_2(s) \mathbb W_2(\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_s},\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_s})^2\big)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\bigg)^{\ff 1 2}\\
&\le \vv \E \gg_t + \ff 4 \vv \int_0^t (\aa_1(s)+\aa_2(s)) \E[\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk s}\|X_s-Y_s\|_\infty^2]\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\\ &\le \vv \E\gg_t+ \ff 4 \vv \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk r_0} \int_0^t (\aa_1(s)+\aa_2(s)) \E\gg_s \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s.\end{align*} Combining this with \eqref{KD1} we obtain
$$\E\gg_t\le \ff {\E\|X_0-Y_0\|_\infty^2}{1-\vv} + \ff{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk r_0}}{1-\vv} \int_0^t \Big\{\ff{4}\vv (\aa_1(s)+\aa_2(s)) + \bb_1(s)+\bb_2(s)\Big\}\E\gg_s \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s,\ \ t\ge 0.$$ So, Gronwall's Lemma implies
$$\E\gg_t\le \ff {\E\|X_0-Y_0\|_\infty^2}{1-\vv} \exp\bigg[\ff{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk r_0}}{1-\vv} \int_0^t \Big\{\ff{4}\vv (\aa_1(s)+\aa_2(s)) + \bb_1(s)+\bb_2(s)\Big\} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\bigg].$$ Noting that $\E\gg_t\ge \text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{(t-r_0)\kk} \E\|X_t-Y_t\|_\infty^2$, this implies
\begin} \def\beq{\begin{equation}} \def\F{\scr F{align*} &\E \|X_t-Y_t\|_\infty^2 \le \ff{\E \|X_0-Y_0\|_\infty^2}{1-\vv} \\
&\times \exp\bigg[(r_0-t)\kk +\ff{\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{\kk r_0}}{1-\vv} \int_0^t \Big\{\ff{4}\vv (\aa_1(s)+\aa_2(s)) + \bb_1(s)+\bb_2(s)\Big\} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\bigg].\end{align*}
Since $(H2)$ remains true if $\kk$ is replaced by a smaller constant $\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho$, this estimate also holds for $\delta} \def\DD{\Delta} \def\vv{\varepsilon} \def\rr{\rho\in [0,\kk]$ replacing $\kk$. Therefore, the estimate in Theorem \ref{T2.1}(2) holds.
(3) Let $(X_t)_{t\ge 0}$ solve \eqref{ED1} with $\scr L_{X_0}=\mu_0$, and let $(\tilde} \def\Ric{\text{\rm{Ric}} X_t,\tilde} \def\Ric{\text{\rm{Ric}} W(t))$ on
$(\tilde} \def\Ric{\text{\rm{Ric}}\OO, \{\tilde} \def\Ric{\text{\rm{Ric}}\F_t\}_{t\ge 0}, \tilde} \def\Ric{\text{\rm{Ric}}\P)$ be a weak solution of \eqref{ED1} such that $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}|_{\P}= \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde} \def\Ric{\text{\rm{Ric}} X_0}|_{\tilde} \def\Ric{\text{\rm{Ric}}\P}=\mu_0$, i.e. $\tilde} \def\Ric{\text{\rm{Ric}} X_t$ solves
\beq\label{E1''} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde} \def\Ric{\text{\rm{Ric}} X(t) = b(t,\tilde} \def\Ric{\text{\rm{Ric}} X_t, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde} \def\Ric{\text{\rm{Ric}} X_t}|_{\tilde} \def\Ric{\text{\rm{Ric}}\P})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \sigma} \def\ess{\text{\rm{ess}}(t,\tilde} \def\Ric{\text{\rm{Ric}} X_t, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde} \def\Ric{\text{\rm{Ric}} X_t}|_{\tilde} \def\Ric{\text{\rm{Ric}}\P})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde} \def\Ric{\text{\rm{Ric}} W(t),\ \ \ \scr L_{\tilde} \def\Ric{\text{\rm{Ric}} X_0}=\mu_0.\end{equation}
We aim to prove $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X}|_{\P}=\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde} \def\Ric{\text{\rm{Ric}} X}|_{\tilde} \def\Ric{\text{\rm{Ric}}\P}$.
Let $\mu_t= \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_t}|_{\P}$ and $$\bar b(t,\xi)= b(t,\xi, \mu_t),\ \ \bar \sigma} \def\ess{\text{\rm{ess}}(t,\xi)= \sigma} \def\ess{\text{\rm{ess}}(t,\xi,\mu_t),\ \ t\ge 0, \xi\in\scr C} \def\aaa{\mathbf{r}} \def\r{r.$$
By $(H1)$-$(H3)$, the stochastic functional differential equation \beq\label{E10} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \bar X(t) = \bar b(t,\bar X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \bar \sigma} \def\ess{\text{\rm{ess}}(t,\bar X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde} \def\Ric{\text{\rm{Ric}} W(t),\ \ \bar X_0= \tilde} \def\Ric{\text{\rm{Ric}} X_0 \end{equation} has a unique solution. By the Yamada--Watanabe principle, it then also satisfies weak uniqueness. Noting that
$$\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X(t) = \bar b(t, X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \bar \sigma} \def\ess{\text{\rm{ess}}(t, X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(t),\ \ \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}|_{\P}= \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde} \def\Ric{\text{\rm{Ric}} X_0}|_{\tilde} \def\Ric{\text{\rm{Ric}}\P},$$ the weak uniqueness of \eqref{E10} implies
\beq\label{HW} \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\bar X}|_{\tilde} \def\Ric{\text{\rm{Ric}}\P}= \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_X|_{\P}.\end{equation} So, \eqref{E10} reduces to
$$ \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \bar X(t) = b(t,\bar X_t, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\bar X_t}|_{\tilde} \def\Ric{\text{\rm{Ric}}\P})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t + \sigma} \def\ess{\text{\rm{ess}}(t,\bar X_t, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\bar X_t}|_{\tilde} \def\Ric{\text{\rm{Ric}}\P})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D \tilde} \def\Ric{\text{\rm{Ric}} W(t),\ \ \bar X_0=\tilde} \def\Ric{\text{\rm{Ric}} X_0.$$
Since the strong uniqueness of \eqref{E1'} is ensured by Step (1), we obtain $\bar X=\tilde} \def\Ric{\text{\rm{Ric}} X$. Therefore, \eqref{HW} implies $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{\tilde} \def\Ric{\text{\rm{Ric}} X}|_{\tilde} \def\Ric{\text{\rm{Ric}} \P} = \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_X|_{\P}$ as wanted.
Finally, since $\scr C} \def\aaa{\mathbf{r}} \def\r{r$ is a Polish space, for any $\mu_0,\nu_0\in \scr P_2^\scr C} \def\aaa{\mathbf{r}} \def\r{r$, we can take $\F_0$-measurable random variables $X_0$, $Y_0$ such that $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu_0, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_0}=\nu_0$ and
$\mathbb W_2(\mu_0, \nu_0)^2=\E\|X_0-Y_0\|_{\infty}^2$. Combining this with $\mathbb W_2(P_t^*\mu_0, P_t^*\nu_0)^2\le \E\|X_t-Y_t\|_{\infty}^2,$ we deduce the estimate in Theorem \ref{T2.1}(3) from that in Theorem \ref{T2.1}(2). \end{proof}
\section{Harnack inequality and applications}
To prove Theorem \ref{T1.2}, we investigate Harnack inequalities of the operator $P_t$ defined by \beq\label{*D2} (P_tf)(\mu_0)= \int_{\scr C} \def\aaa{\mathbf{r}} \def\r{r} f \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D(P_t^*\mu_0),\ \ f\in \B_b(\scr C} \def\aaa{\mathbf{r}} \def\r{r), t\ge 0, \mu_0\in \scr P_2^\scr C} \def\aaa{\mathbf{r}} \def\r{r.\end{equation} We will consider the Harnack inequality with a power $p>1$ introduced in \cite{W97}, and the log-Harnack inequality developed in \cite{RW10, W10}, where classical SDEs on $\mathbb R} \def\ff{\frac} \def\ss{\sqrt^d$ and manifolds are considered. To establish these inequalities for the present path-distribution dependent SDEs, we will adopt coupling by change of measures introduced in \cite{ATW06,W07}. We refer to \cite{Wbook} for a general theory on this method and applications.
To construct the desired coupling for the segment solution $X_t$, we need to assume that $\sigma} \def\ess{\text{\rm{ess}}(t,\xi,\mu)=\sigma} \def\ess{\text{\rm{ess}}(t,\xi(0))$; that is, we consider the following simpler version of \eqref{ED1}: \begin{equation}\label{E11} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X(t)= b(t,X_t,\scr L_{X_t})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t +\sigma} \def\ess{\text{\rm{ess}}(t,X(t))\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D W(t). \end{equation}
\begin} \def\beq{\begin{equation}} \def\F{\scr F{thm}\label{T3.1} Assume {\bf (A)}. Then there exists $H_1\in C(\mathbb R} \def\ff{\frac} \def\ss{\sqrt_+;\mathbb R} \def\ff{\frac} \def\ss{\sqrt_+)$ such that
for any $ \mu_0,\nu_0\in \scr P_2^\scr C} \def\aaa{\mathbf{r}} \def\r{r $, $\F_0$-measurable random variables $X_0, Y_0$ with $\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu_0, \scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{Y_0}=\nu_0$, and $f\in \B_{b}^+(\scr C} \def\aaa{\mathbf{r}} \def\r{r)$,
\beq\label{LH11}
(P_{T}\log f)(\nu_0)\le \log (P_{T}f)(\mu_0)+ H_1(T) \mathbb{E}\bigg( \ff{|X(0)-Y(0)|^2}{T-r_0} + \|X_0-Y_0\|_\infty^2 \bigg), \ \ T>r_0. \end{equation}
If moreover $\eqref{*P}$ holds for some increasing $\kk_3:\mathbb R} \def\ff{\frac} \def\ss{\sqrt_+\rightarrow}\def\l{\ell}\def\iint{\int\mathbb R} \def\ff{\frac} \def\ss{\sqrt_+$, then there exists $H_2\in C(D;\mathbb R} \def\ff{\frac} \def\ss{\sqrt_+)$, where $D$ is as in Theorem \ref{T1.2}, such that
\beq\label{HI11} (P_{T}f)(\nu_0)\le (P_{T}f^p)^{\frac{1}{p}}(\mu_0)\mathbb{E}\Big(\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{H_2(p,T) \big(1+\ff{|X(0)-Y(0)|^2}{T-r_0} + \|X_0-Y_0\|_\infty^2\big)}\Big),\ \ T>r_0, (p,T)\in D
\end{equation} holds for $\mu_0,\nu_0$ and $X_0,Y_0$ as above. \end{thm}
As a consequence of Theorem \ref{T3.1}, we have the following result; see, for instance, the proof of \cite[Proposition 3.1]{WY11}.
\begin} \def\beq{\begin{equation}} \def\F{\scr F{cor}\label{C3.3} Assume {\bf (A)} and let $T> r_0$. For any $\mu_0,\nu_0\in\scr P_2^{\scr C}$, $P_{T}^*\mu_0$ and $P_{T}^*\nu_0$ are equivalent and the Radon--Nikodym derivative satisfies the entropy estimate $$
\int_{\scr C} \def\aaa{\mathbf{r}} \def\r{r} \bigg(\log \ff{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D P_{T}^*\nu_0}{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D P_{T}^*\mu_0}\bigg)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D P_{T}^*\nu_0\le\inf_{\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu_0, \scr L_{Y_0}=\nu_0} \mathbb{E}\bigg[H_1(T)\bigg(\ff{|X(0)-Y(0)|^2}{T-r_0} +\|X_0-Y_0\|_\infty^2\bigg)\bigg] ,\ \ T>r_0. $$ If $\eqref{*P}$ holds, then for any $T>r_0$ and $p>(1+\kk_3(T)\ll(T))^2$, $$\int_{\scr C} \def\aaa{\mathbf{r}} \def\r{r} \bigg(\ff{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D P_{T}^*\nu_0}{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D P_{T}^*\mu_0}\bigg)^{\ff 1 p} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D (P_{T}^*\nu_0)
\le \inf_{\scr L}\def\Tt{\tt} \def\TT{\tt}\def\II{\mathbb I_{X_0}=\mu_0, \scr L_{Y_0}=\nu_0} \mathbb{E}\Big(\text{\rm{e}}} \def\ua{\underline a} \def\OO{\Omega} \def\oo{\omega^{H_2(p,T) \big(1+\ff{|X(0)-Y(0)|^2}{T-r_0} + \|X_0-Y_0\|_\infty^2\big)}\Big).$$ \end{cor}
\begin{proof}[Proof of Theorem \ref{T3.1}] For $\mu_t:= P_t^*\mu_0$ and $\nu_t:=P_t^*\nu_0$, we may rewrite \eqref{E11} as
\begin{equation}\label{barX}
\mathrm{d} X(t)= \bar{b}(t,X_t)\,\mathrm{d} t +\sigma(t,X(t))\,\mathrm{d} \bar{W}(t),\ \ \mathscr L_{X_0}=\mu_0,
\end{equation}
where
\begin{equation*}\begin{split}
&\bar{b}(t,\xi):=b(t,\xi,\nu_t),\ \ \mathrm{d} \bar{W}(t):=\mathrm{d} W(t)+ \bar{\gamma}(t)\,\mathrm{d} t,\\
& \bar{\gamma}(t):=\sigma^{-1}(t,X(t))[b(t,X_t,\mu_t)- b(t,X_t,\nu_t)].
\end{split}\end{equation*}
By assumption {\bf (A)} and Theorem \ref{T2.1}(3), we have
\begin{equation}\label{EbarG}
|\bar{\gamma}(t)|\leq \lambda(t) \kappa_2(t)\mathbb W_2(\mu_t,\nu_t)\le K(t) \mathbb W_2(\mu_0,\nu_0), \ \ t\in[0,T]
\end{equation}
for some increasing function $K:\mathbb R_+\rightarrow\mathbb R_+$. Let
\begin{equation}\label{EB2}
\bar{R}_t=\exp\left\{-\int_0^t\langle\bar{\gamma}(s),\mathrm{d} W(s)\rangle-\frac{1}{2}\int_0^t|\bar{\gamma}(s)|^2\,\mathrm{d} s\right\},\ \ t\in[0,T].
\end{equation}
By Girsanov's theorem, $\{\bar{W}(t)\}_{t\in[0,T]}$ is a $d$-dimensional Brownian motion under the probability measure $\bar{\mathbb P}_T:= \bar R_T\mathbb P$.
Next, according to the proof of \cite[Theorem 4.3.1]{Wbook} or \cite[Theorem 1.1]{WY11}, we can construct an adapted process $\tilde\gamma(t)$ on $\mathbb R^d$ such that:
\begin{enumerate}
\item[(a)] Under the probability measure $\bar{\mathbb P}_T$,
$$\tilde{R}_t:=\exp\left\{-\int_0^t\langle\tilde{\gamma}(s),\mathrm{d} \bar{W}(s)\rangle-\frac{1}{2}\int_0^t|\tilde{\gamma}(s)|^2\,\mathrm{d} s\right\}, \ \ t\in[0,T]$$
is a martingale such that $\tilde{\mathbb P}_T:= \tilde R_T\bar{\mathbb P}_T=\tilde R_T\bar R_T\mathbb P$ is a probability measure, under which
$$\tilde W(t):= \bar{W}(t)+ \int_0^t \tilde{\gamma}(s)\,\mathrm{d} s = W(t)+ \int_0^t \big(\bar{\gamma}(s)+\tilde{\gamma}(s)\big)\mathrm{d} s,\ \ t\in [0,T]$$
is a $d$-dimensional Brownian motion.
\item[(b)] Letting $Y(t)$ solve the stochastic functional differential equation
\begin{equation}\label{barCY}
\mathrm{d} Y(t) = \bar{b}(t,Y_t)\,\mathrm{d} t+ \sigma(t,Y(t))\, \mathrm{d} \tilde{W}(t)
\end{equation}
under the probability measure $\tilde{\mathbb P}_T$ with the given initial value $Y_0$, we have $\mathscr L_{Y_0|\tilde{\mathbb P}_T}=\mathscr L_{Y_0}=\nu_0$ and $X_T=Y_T$, $\tilde{\mathbb P}_T$-a.s.
\item[(c)] There exists $C\in C(\mathbb R_+;\mathbb R_+)$ such that
$$ \mathbb E_{\tilde{\mathbb P}_T} \int_0^T |\tilde\gamma(s)|^2\,\mathrm{d} s \le C(T)\, \mathbb E\Big(\frac{|X(0)-Y(0)|^2}{T-r_0}+\|X_0-Y_0\|_\infty^2\Big).$$
\end{enumerate}
By the definition of $\bar b$ we see that $(Y_t,\tilde W(t))$ is a weak solution to equation \eqref{barX} with initial distribution $\nu_0$, so that by weak uniqueness,
$\mathscr L_{Y_t}|_{\tilde{\mathbb P}_T}=\nu_t$, $t\in [0,T]$. Combining this with (b) we obtain
$$(P_Tf)(\nu_0)= \mathbb E_{\tilde{\mathbb P}_T} [f(Y_T)] = \mathbb E_{\tilde{\mathbb P}_T}[f(X_T)] = \mathbb E [\bar R_T\tilde R_T f(X_T)],\ \ f\in \mathscr B_b^+(\mathscr C).$$
Letting $R_T= \bar R_T\tilde R_T$, by Young's inequality and H\"older's inequality respectively, we obtain
\begin{equation}\label{LHI}
(P_T\log f)(\nu_0) \le \mathbb E [ R_T \log R_T ]+ \log\mathbb E[f(X_T)]
= \mathbb E [ R_T \log R_T ]+ \log(P_T f)(\mu_0),
\end{equation}
and
\begin{equation}\label{HI}
\big((P_T f)(\nu_0)\big)^{p}\le \big(\mathbb E R_T^{\frac {p}{p-1}}\big)^{p-1} \mathbb E f^{p}(X_T)= \big(\mathbb E R_T^{\frac {p}{p-1}}\big)^{p-1} (P_T f^{p})(\mu_0),\ \ p>1.
\end{equation}
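The first inequality in \eqref{LHI} is the classical variational inequality for relative entropy; since it is used without comment, we recall the one-line argument. By Young's inequality $ab\le a\log a-a+\mathrm{e}^b$ for $a\ge 0$, $b\in\mathbb R$, applied with $a=R_T$ and $\mathrm{e}^b=f(X_T)/\mathbb E f(X_T)$,
$$\mathbb E\Big[R_T\log\frac{f(X_T)}{\mathbb E f(X_T)}\Big]\le \mathbb E[R_T\log R_T]-\mathbb E R_T+\frac{\mathbb E f(X_T)}{\mathbb E f(X_T)} = \mathbb E[R_T\log R_T],$$
using $\mathbb E R_T=1$. Rearranging gives $\mathbb E[R_T\log f(X_T)]\le \mathbb E[R_T\log R_T]+\log\mathbb E f(X_T)$, which is the first step of \eqref{LHI} since $(P_T\log f)(\nu_0)=\mathbb E[R_T\log f(X_T)]$.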
We are now ready to prove assertions (1) and (2) as follows.
By \eqref{EbarG}, (c) and $\mathbb W_2(\mu_0,\nu_0)^2\le \mathbb E\|X_0-Y_0\|_\infty^2$, we have
\begin{equation}\label{LR}\begin{split}
\mathbb E[R_T\log R_T]&\le \frac{1}{2}\,\mathbb E_{\tilde{\mathbb P}_T}\int_0^T|\bar{\gamma}(s)+\tilde{\gamma}(s)|^2\,\mathrm{d} s\\
&\le \mathbb E_{\tilde{\mathbb P}_T}\int_0^T|\tilde{\gamma}(s)|^2\,\mathrm{d} s+ \int_0^T|\bar{\gamma}(s)|^2\,\mathrm{d} s\\
&\le \mathbb E_{\tilde{\mathbb P}_T}\int_0^T|\tilde{\gamma}(s)|^2\,\mathrm{d} s+\int_0^T\lambda(t)^2\kappa_2(t)^2\mathbb W_2(\mu_t,\nu_t)^2\,\mathrm{d} t\\
&\le H_1(T)\, \mathbb{E}\bigg(\frac{|X(0)-Y(0)|^2}{T-r_0}+ \|X_0-Y_0\|_\infty^2\bigg), \ \ T>r_0,
\end{split}\end{equation}
for some $H_1\in C(\mathbb R_+;\mathbb R_+)$. Combining this with \eqref{LHI} we obtain \eqref{LH11}.
Finally, according to the proof of \cite[Theorem 4.1]{WY11}, there exists $C\in C(D;\mathbb R_+)$ such that
$$\Big(\mathbb E_{\bar{\mathbb P}_T} \tilde R_T^{\frac {p}{p-1}}\Big)^{\frac{p-1}{p}} \le \mathbb E \Big(\mathrm{e}^{C(p,T)\big(1+\frac{|X(0)-Y(0)|^2}{T-r_0} +\|X_0-Y_0\|_\infty^2\big)}\Big),\ \ T>r_0,\ (p,T)\in D. $$
For any $p>p(T):= (1+\kappa_3(T)\lambda(T))^2$, applying this estimate to $p_1:=\frac 1 2\big(p+p(T)\big)$ and combining with $R_T=\tilde R_T \bar R_T$, \eqref{EbarG}, \eqref{EB2} and
$\mathbb W_2(\mu_0,\nu_0)^2\le \mathbb E\|X_0-Y_0\|_\infty^2$, we arrive at
\begin{align*} \Big(\mathbb E R_T^{\frac {p}{p-1}}\Big)^{\frac{p-1}{p}} &=\Big(\mathbb E_{\bar{\mathbb P}_T} \tilde R_T^{\frac {p}{p-1}} \bar R_T^{\frac 1 {p-1}}\Big)^{\frac{p-1}{p}}
\le \Big(\mathbb E_{\bar{\mathbb P}_T} \tilde R_T^{\frac {p_1}{p_1-1}}\Big)^{\frac{p_1-1}{p_1}} \Big(\mathbb E_{\bar{\mathbb P}_T} \bar R_T^{\frac {p_1}{p-p_1}}\Big)^{\frac{p-p_1}{pp_1}}\\
&\le \mathbb E \Big(\mathrm{e}^{C(p_1,T)\big(1+\frac{|X(0)-Y(0)|^2}{T-r_0} +\|X_0-Y_0\|_\infty^2\big)}\Big) \Big(\mathbb E\bar R_T^{\frac {p}{p-p_1}}\Big)^{\frac{p-p_1}{pp_1}}\\
&\le \mathbb E \Big(\mathrm{e}^{H_2(p,T)\big(1+\frac{|X(0)-Y(0)|^2}{T-r_0} +\|X_0-Y_0\|_\infty^2\big)}\Big),\ \ T>r_0,\ (p,T)\in D,
\end{align*}
for some $H_2\in C(\mathbb R_+;\mathbb R_+)$. Therefore, \eqref{HI11} follows from \eqref{HI}.
\end{proof}
\section{Shift Harnack inequality and integration by parts formula}
To prove Theorem \ref{T1.3}, we investigate the shift Harnack inequality and integration by parts formula introduced in \cite{W14a}. Assume that $\sigma(t,\xi,\mu)=\sigma(t)$ is invertible. Then the path-distribution dependent SDE \eqref{ED1} becomes $$ \mathrm{d} X(t)= b(t,X_t, \mathscr L_{X_t})\,\mathrm{d} t +\sigma(t)\, \mathrm{d} W(t),\ \ \mathscr L_{X_0}=\mu_0.$$
To apply the existing shift Harnack inequality and integration by parts formula, we let
$$\bar b(t,\xi):= b(t, \xi,\mu_t),\ \ \mu_t:=\mathscr L_{X_t}=P_t^*\mu_0,\ \ t\ge 0,\ \xi\in \mathscr C,$$ and
rewrite this equation as $$\mathrm{d} X(t)= \bar b(t,X_t)\, \mathrm{d} t +\sigma(t)\, \mathrm{d} W(t),\ \ \ \mathscr L_{X_0}=\mu_0.$$ Then the following result follows from \cite[Theorem 4.2.3]{Wbook}.
\begin{thm}\label{T4.1} Let $\sigma: [0,\infty)\rightarrow \mathbb R^d\otimes \mathbb R^d$ and $b: [0,\infty)\times \mathscr C\times\mathscr P_2^{\mathscr C}\rightarrow \mathbb R^d$ satisfy {\bf (A)}, and assume that for any $(t,\mu)\in \mathbb R_+\times \mathscr P_2^{\mathscr C}$, $b(t,\cdot,\mu)$ is differentiable. Then
$$\Lambda(T):=\sup_{t\in[0,T]}\|\sigma(t)^{-1}\|^2<\infty,\ \ K(T):= \sup_{t\in [0,T],\mu\in \mathscr P_2^{\mathscr C}} \|\nabla b(t,\cdot,\mu)\|_\infty^2<\infty,\ \ T\geq 0.$$
Moreover:
\begin{enumerate}
\item[$(1)$] For any $p>1$, $T>r_0$, $\mu_0\in \mathscr P_2^{\mathscr C}$, $\eta\in\mathbb{H}^{1}$ and $f\in \mathscr B_b^+(\mathscr C)$,
\begin{align*} (P_{T}f)^p(\mu_0)\le &(P_{T}f^p(\eta+\cdot))(\mu_0)\\
&\times \exp\bigg[\frac{p\, \Lambda(T)\left(1+T^2K(T) \right)\left(\frac{|\eta(-r_0)|^2}{T-r_0}+\|\eta\|_{\mathbb{H}^{1}}^{2}\right) }{(p-1)^2}\bigg], \end{align*}
and
$$ (P_{T}\log f)(\mu_0)\le \log (P_{T} f(\eta+\cdot))(\mu_0) + \Lambda(T)\left(1+T^2K(T) \right)\left(\frac{|\eta(-r_0)|^2}{T-r_0}+\|\eta\|_{\mathbb{H}^{1}}^{2}\right). $$
\item[$(2)$] For any $T>r_0$, let
\begin{equation*}\begin{split}
&\Phi (t)=1_{[0,T-r_0]}(t)\frac{\eta(-r_{0})}{T-r_0}+1_{(T-r_0,T]}(t)\eta'(t-T),\\
&\Theta (t)=\int_0^{t^+}\Phi (s)\,\mathrm{d} s,\ \ t\in[-r_0,T].\end{split}\end{equation*}
Then for any $f\in C^1(\mathscr C)$, $\eta\in \mathbb{H}^{1}$ and $\mu_0\in \mathscr P_2^{\mathscr C}$,
$$\mathbb E(\nabla_\eta f)(X_{T})= \mathbb E\bigg[f(X_{T})\int_0^T \big\langle\sigma(t)^{-1}\big(\Phi (t) -\nabla_{\Theta_t} b(t,\cdot, P_{t}^*\mu_0)(X_t)\big),\ \mathrm{d} W(t)\big\rangle\bigg].$$
\end{enumerate}
\end{thm}
As a consequence of Theorem \ref{T4.1} we have the following result.
\begin{cor}\label{C4.2} Assume the situation of Theorem \ref{T4.1}. Then for any $\mu_0\in\mathscr P_2^{\mathscr C}$, $\eta\in\mathbb H^1$ and $T>r_0$, $\mu_T:= P_T^*\mu_0$ satisfies
\begin{align*}&\int_{\mathscr C} \Big(\log \frac{\mathrm{d} \mu_T(\cdot+\eta)}{\mathrm{d}\mu_T}\Big)\mathrm{d}\mu_T(\cdot+\eta)\le \Lambda(T) (1+T^2 K(T))\Big(\frac{|\eta(-r_0)|^2}{T-r_0}+ \|\eta\|_{\mathbb H^1}^2\Big),\\
&\int_{\mathscr C} \Big( \frac{\mathrm{d} \mu_T(\cdot+\eta)}{\mathrm{d}\mu_T}\Big)^{\frac 1 p} \mathrm{d}\mu_T(\cdot+\eta)\le \exp\bigg[\frac{\Lambda(T) (1+T^2 K(T))}{(p-1)^2}\Big(\frac{|\eta(-r_0)|^2}{T-r_0}+ \|\eta\|_{\mathbb H^1}^2\Big) \bigg],\ \ p>1,\\
&\int_{\mathscr C} \Big|\frac{\mathrm{d} \partial_\eta\mu_T}{\mathrm{d}\mu_T}\Big|^2\mathrm{d}\mu_T \le \Lambda(T) \big(1+K(T)T^2\big) \Big(\frac{|\eta(-r_0)|^2}{T-r_0}+ \|\eta\|_{\mathbb H^1}^2\Big).\end{align*} \end{cor}
\begin{proof} The first two estimates follow from Theorem \ref{T4.1}(1), see \cite{W14a} or \cite[\S 1.4]{Wbook}. As the last estimate is not explicitly given in these references, we present a brief proof below. It is easy to see that
$$M(T):= \int_0^T \big\langle\sigma(t)^{-1}\big(\Phi (t) -\nabla_{\Theta_t} b(t,\cdot, P_{t}^*\mu_0)(X_t)\big),\ \mathrm{d} W(t)\big\rangle$$
satisfies
$$\mathbb E M(T)^2\le C(T):= \Lambda(T) \big(1+K(T)T^2\big) \Big(\frac{|\eta(-r_0)|^2}{T-r_0}+ \|\eta\|_{\mathbb H^1}^2\Big).$$
Then, Theorem \ref{T4.1}(2) implies that
$$C^1(\mathscr C)\ni f\mapsto (\partial_\eta \mu_T)(f):= \bigg(\frac{\mathrm{d} }{\mathrm{d}\varepsilon} \int_{\mathscr C} f\,\mathrm{d}\mu_T(\cdot+\varepsilon \eta)\bigg)\bigg|_{\varepsilon=0}$$
is a densely defined bounded linear functional on $L^2(\mu_T)$ with
$$\big|(\partial_\eta \mu_T)(f)\big|^2 \le \mu_T(f^2)\,\mathbb E M(T)^2\le C(T)\mu_T(f^2).$$
By the Riesz representation theorem, it extends uniquely to a bounded linear functional
$$(\partial_\eta\mu_T)(f):= \int_{\mathscr C} fg\,\mathrm{d}\mu_T,\ \ \ f\in L^2(\mu_T)$$
for some $g\in L^2(\mu_T)$ with $\mu_T(g^2)\le C(T)$. Consequently, $\mu_T$ is differentiable along $\eta$ with $(\partial_\eta\mu_T)(A)=\int_A g\,\mathrm{d}\mu_T$, $A\in \mathscr B(\mathscr C)$, and $\partial_\eta \mu_T$ is absolutely continuous with respect to $\mu_T$ with
$$\int_{\mathscr C} \Big(\frac{\mathrm{d}\partial_\eta\mu_T}{\mathrm{d}\mu_T}\Big)^2\mathrm{d}\mu_T= \int_{\mathscr C} g^2\,\mathrm{d}\mu_T\le C(T).$$
\end{proof}
\paragraph{Acknowledgement.} The authors would like to thank the referee for helpful comments on an earlier version of the paper.
\end{document}
\begin{document}
\title{A Nanoscale Coherent Light Source}
\author{Raphael Holzinger} \affiliation{Institut f\"ur Theoretische Physik, Universit\"at Innsbruck, Technikerstr. 21a, A-6020 Innsbruck, Austria} \author{David Plankensteiner} \affiliation{Institut f\"ur Theoretische Physik, Universit\"at Innsbruck, Technikerstr. 21a, A-6020 Innsbruck, Austria} \author{Laurin Ostermann} \affiliation{Institut f\"ur Theoretische Physik, Universit\"at Innsbruck, Technikerstr. 21a, A-6020 Innsbruck, Austria} \author{Helmut Ritsch} \affiliation{Institut f\"ur Theoretische Physik, Universit\"at Innsbruck, Technikerstr. 21a, A-6020 Innsbruck, Austria} \date{\today}
\begin{abstract} Generically, a laser is composed of an optical resonator coupled to a gain medium. If the light amplification via stimulated emission dominates the mirror losses, the emitted light is coherent. Recent studies have shown that sub-wavelength sized rings of quantum emitters possess subradiant eigenmodes which mimic high-$Q$ optical resonators. We add a continuously pumped atom as a gain medium in the ring's center creating a minimalistic coherent light source. The system behaves like a thresholdless laser, featuring a narrow linewidth well below the natural linewidth of the constituent atoms. \end{abstract}
\pacs{42.50.Ar, 42.50.Lc, 42.72.-g}
\maketitle
\section{Introduction}
Conventional lasers consist of an optical cavity filled with a gain medium, typically an ensemble of energetically inverted emitters amplifying the light field via stimulated emission. Pioneering experiments have realized lasers with the most minimalistic gain medium yet, a single atom~\cite{brune1987realization,mcKeever2003experimental,davidovich1987quantum,walther1988single,an1994microlaser,astafiev2007single, rastelli2019single,loeffler1997spectral}. Corresponding theoretical quantum models have already been studied extensively for several decades~\cite{meschede1985one, mu1992one, pellizzari1994photon,salzburger2005theory}. Standard models of a single-atom laser still feature a macroscopic optical resonator supporting the corresponding laser light mode. Technically, the noise of the cavity mirrors is a major limiting factor for the frequency stability of a laser. This can be reduced when working in the bad cavity regime, such that the coherence is stored in the atomic dipoles rather than the light field. In such superradiant lasers~\cite{meiser2009prospects,meiser2010steady,bonnet2012,maier2014superradiant,hotter2019cooling} the properties of the emitted light are governed by the gain medium rather than the resonator.
In this work we go one step further, removing the cavity altogether, and consider a nano-scale system where atomic quantum emitters provide the necessary gain while simultaneously acting as a resonator. Thus, in principle, the size of the entire setup can be reduced to even below the order of the laser wavelength. Such a device is characterized solely by the spectral properties of the atoms.
\begin{figure}
\caption{\emph{Coherent Light Emission from a Partially Pumped Atomic Array.} (a) A ring of atoms with an additional atom in its center incoherently pumped with a rate $\nu$. (b) The atoms decay at a spontaneous decay rate $\Gamma_0$ and are collectively coupled to the center atom with dispersive coupling $\Omega\ts{p}$ and dissipative coupling $\Gamma\ts{p}$, respectively. In turn, the ring atoms have couplings $\Omega_{ij}$ and $\Gamma_{ij}$ amongst each other. The symmetric excitation exhibits a collective decay rate $\Gamma\ts{coll}$. (c) The field intensity generated in the steady state according to eq. \eqref{eq:intensity} for a ring of $N = 11$ atoms in the $xz$-plane with $y = 2.5\lambda_0$ and interatomic distance $d=\lambda_0/5$ and pumping rate $\nu = 0.1\Gamma_0$. (d) The field intensity in the $xy$-plane with $z = 2.5\lambda_0$.}
\label{model}
\end{figure}
As discovered recently, tailored dipole-coupled atomic arrays possess collective eigenmodes with a very long lifetime demonstrating analogous characteristics to a high-$Q$ optical cavity mode~\cite{moreno2019extraordinary,manzoni2018optimization}. Such arrangements could be implemented, e.g.\ by means of optical tweezers~\cite{Barredo2016atom,Barredo2018synthetic,wang2019preparation} or superconducting qubit setups operating in the microwave regime~\cite{blais2004cavity}. We study the prospects of implementing a minimalistic sub-wavelength sized laser by incoherently pumping some of the dipoles in such a nano array. As our generic setup we consider a single atom placed in the center of a small ring comprised of identical emitters. The collective coupling to the other emitters in the ring is mediated by virtual photon exchange through the electromagnetic vacuum~\cite{lehmberg1970radiation,hood2016atom,astafiev2007single}. The collective eigenmodes of the outer ring take on the role of a resonator mode.
We show that such a minimal model constitutes a steady-state coherent light source with a spectral linewidth well below the single atom decay rate. Therefore, it can be viewed as a minimal implementation of a laser. Depending on the number of atoms and the configuration of the array, the collective nature of the dipole-dipole couplings leads to strong quantum correlations within the atoms and an inherent emission of a coherent field. Optimal operation is achieved when the collective state in the ring atoms features a single subradiant excitation only.
\section{Model}
We consider $N$ identical two-level atoms with excited state $\ket{e}$ and ground state $\ket{g}$ each, separated in frequency by $\omega_0$ and arranged in a ring geometry at an inter-atomic distance of $d \lesssim \lambda_0 = 2\pi c/\omega_0$. An additional gain atom is placed in the center of the ring as depicted in~\fref{model}a and is assumed to be pumped to its upper level incoherently at a rate $\nu$ (after having eliminated auxiliary levels). The corresponding raising (lowering) operators of the $i$th atom are $\sigma^\pm_i$ for $i \in \lbrace 1, 2, \ldots, N, p \rbrace$ (the index $p$ corresponds to the central, pumped atom). The excited state is subject to spontaneous emission with a rate $\Gamma_0$. All transition dipoles $\pmb{\mu}_i$ are chosen such that they point in $z$-direction.
At the considered distances, the fields emitted by each of the atoms interfere resulting in effective dipole-dipole interactions~\cite{lehmberg1970radiation}, so that the atomic ring acts like a resonator~\cite{moreno2019extraordinary} coupled to the gain atom in its center.
Using standard quantum optical techniques~\cite{gardiner2004quantum} we obtain a master equation for the internal dynamics of the emitters, $ \dot{\rho} = i \left[ \rho, H \right] +\mathcal{L}_\Gamma \left[\rho \right] + \mathcal{L}_\nu \left[\rho \right], $ where the Lindblad term describing the incoherent pumping of the central atom is given by $ \mathcal{L}_\nu \left[ \rho \right] = \frac{\nu}{2}\left(2 \sigma^+_\mathrm{p} \rho \sigma^-_\mathrm{p} - \sigma^-_\mathrm{p} \sigma^+_\mathrm{p} \rho -\rho \sigma^-_\mathrm{p} \sigma^+_\mathrm{p} \right). $ The corresponding Hamiltonian in a frame rotating at the atomic transition frequency $\omega_0$ is \begin{equation} \label{eq:hamiltonian}
H = \sum_{i,j:i \neq j} \Omega_{ij} \sigma^+_i \sigma^-_j, \end{equation} while the Lindblad operator accounting for collective spontaneous emission reads \begin{equation} \label{eq:lindblad_gamma} \mathcal{L}_\Gamma \left[ \rho\right] = \sum_{i,j} \frac{\Gamma_{ij}}{2}\left(2 \sigma^-_i\rho \sigma^+_j -\sigma^+_i \sigma^-_j \rho - \rho \sigma^+_i \sigma^-_j \right). \end{equation}
The collective coupling rates $\Omega_{ij}$ and $\Gamma_{ij}$ are given as the real and imaginary part of the overlap of the transition dipole of the $i$th atom with the electric field emitted by the $j$th atom (see Eq.~\eqref{eq:greens_tensor}).
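The explicit Green's-tensor expressions of Eq.~\eqref{eq:greens_tensor} are deferred to the appendix (not reproduced in this excerpt). As a sketch, the standard free-space results for two atoms with parallel dipoles (Lehmberg-type formulas) can be written as follows; the function names are illustrative, not from the paper, with lengths in units of $\lambda_0$ and rates in units of $\Gamma_0$:

```python
import numpy as np

GAMMA0 = 1.0      # single-atom decay rate Gamma_0
K0 = 2 * np.pi    # wavenumber k_0 = 2 pi / lambda_0, lengths in units of lambda_0

def couplings(r_vec):
    """Dispersive (Omega_ij) and dissipative (Gamma_ij) couplings between two
    atoms separated by r_vec, for unit dipoles along z (standard free-space
    Lehmberg-type expressions)."""
    r = np.linalg.norm(r_vec)
    x = K0 * r
    c = r_vec[2] / r            # cos(angle) between dipole axis and separation
    a, b = 1 - c**2, 1 - 3 * c**2
    omega = 0.75 * GAMMA0 * (-a * np.cos(x) / x
                             + b * (np.sin(x) / x**2 + np.cos(x) / x**3))
    gamma = 1.5 * GAMMA0 * (a * np.sin(x) / x
                            + b * (np.cos(x) / x**2 - np.sin(x) / x**3))
    return omega, gamma
```

In the limit $k_0 r \to 0$ the dissipative part tends to $\Gamma_0$ for any orientation, while $\Omega_{ij}$ diverges as $(k_0 r)^{-3}$, the near-field divergence relevant for $d\to 0$.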
The emitted electric field $\pmb{E}^+(\pmb{r})$ (see Eq.~\eqref{eq:field}) can be used to compute the field intensity, \begin{equation} \label{eq:intensity} I(\pmb{r}) = \left \langle \pmb{E}^+(\pmb{r}) \pmb{E}^-(\pmb{r}) \right \rangle. \end{equation}
The steady-state intensity is shown in~\fref{model}c and~\fref{model}d for typical operating conditions.
\section{Continuous Collective Emission}
Our goal is to find operating regimes where the system emits coherent light with a narrow linewidth. As the configuration is symmetric with respect to the coupling of the ring atoms to the gain atom in the center, we can expect the ring atoms to be driven into a symmetric excitation state given as \begin{equation} \ket{\psi\ts{sym}} = \frac{1}{\sqrt{N}}\sum_{j=1}^N \sigma^+_j \ket{g}^{\otimes N}. \end{equation}
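In the single-excitation sector the ring Hamiltonian is a circulant matrix, so $\ket{\psi\ts{sym}}$ is automatically one of its eigenstates, with eigenvalue $\Omega\ts{sym}=\sum_{j\geq 2}\Omega_{1j}$. This can be checked numerically with a small sketch (illustrative names; the coupling formula is the standard free-space one given in the appendix):

```python
import numpy as np

GAMMA0, K0 = 1.0, 2 * np.pi   # rates in Gamma_0, lengths in lambda_0

def omega_ij(rv):
    # coherent part of the standard free-space dipole-dipole coupling,
    # for dipoles polarized along z
    r = np.linalg.norm(rv); x = K0 * r; c = rv[2] / r
    a, b = 1 - c**2, 1 - 3 * c**2
    return 0.75 * GAMMA0 * (-a*np.cos(x)/x + b*(np.sin(x)/x**2 + np.cos(x)/x**3))

N, d = 8, 0.2                      # ring atoms and interatomic spacing
R = d / (2 * np.sin(np.pi / N))    # circumradius of the ring
phi = 2 * np.pi * np.arange(N) / N
pos = np.stack([R*np.cos(phi), R*np.sin(phi), np.zeros(N)], axis=1)

# single-excitation block of the ring Hamiltonian: H1[i, j] = Omega_ij
H1 = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            H1[i, j] = omega_ij(pos[i] - pos[j])

psi_sym = np.ones(N) / np.sqrt(N)     # symmetric single-excitation state
omega_sym = H1[0].sum()               # = sum_{j >= 2} Omega_{1j}
residual = np.linalg.norm(H1 @ psi_sym - omega_sym * psi_sym)
```

The residual vanishes to machine precision, confirming that the symmetric state diagonalizes the single-excitation block with the collective shift $\Omega\ts{sym}$.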
In accordance with standard laser theory we will target parameters for which a symmetric excitation of the ring atoms constitutes a good cavity, i.e.\ the radiative loss is sufficiently small. To this end, we study the stationary populations of different eigenstates of our Hamiltonian from Eq.~\eqref{eq:hamiltonian} during a time evolution starting from the ground state as depicted in~\fref{symmetric_state}a.
Indeed, as shown in~\fref{symmetric_state}, we find that, apart from the ground state, the two eigenstates involving the symmetric single-excitation state in the ring are predominantly occupied at all times. These states are given by \begin{align}\label{eq:dominant_states} \ket{\Psi_{i}} = a_i \ket{g}^{\otimes N }\otimes \ket{e} + b_i \ket{\psi\ts{sym}}\otimes\ket{g}, \end{align}
for $i \in \lbrace 1, 2 \rbrace$, where $a_i$ and $b_i$ depend on the particular geometry with $\left| a_i \right|^2 + \left| b_i \right|^2 = 1$.
\begin{figure}\label{symmetric_state}
\end{figure}
Note that the gain atom can only emit one photon into the ring at a time. Hence, the single-excitation manifold dominates the dynamics even for pump rates substantially larger than the single-atom decay rate. This is shown in~\fref{symmetric_state}b, where we plot the occupation probability of different eigenstates in the steady state as a function of $\nu$.
The fact that the ring does indeed form a resonator can be seen more clearly as follows. Let us assume that only the symmetric state in the ring is populated. Thus, we can rewrite the Hamiltonian in the subspace spanned by the ground and excited state of the gain atom in the center, as well as the ground state of the ring and its symmetric state, obtaining (see Appendix) \begin{equation}\label{eq:Hamiltonian_sym} H\ts{sym} = \Omega\ts{sym} \sigma\ts{sym}^+ \sigma\ts{sym}^- + \sqrt{N}\Omega_\mathrm{p} \left(\sigma\ts{sym}^+\sigma_\mathrm{p}^{-} + \text{H.c.}\right), \end{equation} where $\Omega\ts{sym}=\sum_{j=2}^N \Omega_{1j}$ is the dipole energy shift of the symmetric state. Written like this, the Hamiltonian resembles the Jaynes-Cummings Hamiltonian with the ring taking on the role of the cavity mode. In this sense, the symmetric subspace lowering operator $ \sigma\ts{sym}^- := \ket{g}^{\otimes{N}}\bra{\psi\ts{sym}}\otimes\mathbbm{1}\ts{p} $ can be interpreted as the photon annihilation operator of our ``cavity''. The coupling between the gain atom and the cavity is then determined by $\Omega_\mathrm{p}$.
If we neglect the dissipative coupling between the central atom and the atoms forming the ring, i.e.\ $\Gamma_\mathrm{p} = 0$, we can rewrite the decay of the system as $ \mathcal{L} \left[ \rho \right] = \mathcal{L}_\nu \left[ \rho \right] + \mathcal{L}_0 \left[ \rho \right] + \mathcal{L}\ts{sym} \left[ \rho \right], $ with \begin{subequations} \begin{align} \mathcal{L}_0 \left[ \rho \right] &= \frac{\Gamma_0}{2}\left(2\sigma_\mathrm{p}^{-}\rho\sigma_\mathrm{p}^{+} - \sigma_\mathrm{p}^{+}\sigma_\mathrm{p}^-\rho - \rho\sigma_\mathrm{p}^{+}\sigma_\mathrm{p}^-\right), \\ \mathcal{L}\ts{sym}\left[ \rho \right] &= \frac{\Gamma\ts{sym}}{2} \left(2\sigma\ts{sym}^-\rho\sigma\ts{sym}^+ - \sigma\ts{sym}^+\sigma\ts{sym}^-\rho \right.\\ &- \left.\rho\sigma\ts{sym}^+\sigma\ts{sym}^-\right). \notag \end{align} \end{subequations}
\begin{figure}
\caption{\emph{Super- and Subradiance of the Symmetric State}. The decay rate of the symmetric state $\Gamma\ts{sym}$ as a function of the atom number in the ring and their interatomic distance. The white dots highlight specific interatomic distances where the decay of the symmetric state is the smallest (subradiant).}
\label{symmetric_decay}
\end{figure}
Minimizing the decay rate of the ring atoms is important, but in order to build up population within the ring we need an efficient coupling to the gain atom as well. In analogy to the Jaynes-Cummings model we thus define a cooperativity parameter (see Appendix) $ C := N\Omega_\mathrm{p}^2/\left(\Gamma_0\Gamma\ts{sym}\right). $ An efficient coherent coupling of the ring atoms to the gain atom is achieved when $C>1$. As we can see in~\fref{decay_rates}b, we reach this limit at extremely small distances or at a distance where $\Gamma\ts{sym}$ is minimal (see~\fref{symmetric_decay}). The cooperativity becomes large at $d<0.1\lambda_0$ since for $d \to 0$ the coherent coupling diverges. Yet, this is also the limit where the energy difference $\Omega\ts{sym}$ is large, which detunes the ring atoms from the gain atom. Furthermore, as we will show later, due to the superradiant loss of the ring in this limit the emitted light features thermal statistics rather than coherence. Consequently, we find that the optimal parameter regime indeed lies where the ring atoms show a subradiant behaviour, i.e.\ at the points highlighted in~\fref{symmetric_decay}.
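The competition between $\Gamma\ts{sym}$ and the center-to-ring coupling can be made concrete with a small sketch of the cooperativity $C = N\Omega_\mathrm{p}^2/(\Gamma_0\Gamma\ts{sym})$; the geometry helper and parameter choices below are ours, and the coupling formulas are the standard free-space ones:

```python
import numpy as np

GAMMA0, K0 = 1.0, 2 * np.pi   # rates in Gamma_0, lengths in lambda_0

def couplings(rv):
    # standard free-space dipole-dipole couplings for z-polarized dipoles
    r = np.linalg.norm(rv); x = K0 * r; c = rv[2] / r
    a, b = 1 - c**2, 1 - 3 * c**2
    omega = 0.75 * GAMMA0 * (-a*np.cos(x)/x + b*(np.sin(x)/x**2 + np.cos(x)/x**3))
    gamma = 1.5 * GAMMA0 * (a*np.sin(x)/x + b*(np.cos(x)/x**2 - np.sin(x)/x**3))
    return omega, gamma

def ring_figures_of_merit(N, d):
    """Gamma_sym = sum_j Gamma_1j and C = N Omega_p^2 / (Gamma_0 Gamma_sym)
    for a ring of N atoms with spacing d and the gain atom in the center."""
    R = d / (2 * np.sin(np.pi / N))   # circumradius of the ring
    phi = 2 * np.pi * np.arange(N) / N
    pos = np.stack([R*np.cos(phi), R*np.sin(phi), np.zeros(N)], axis=1)
    gamma_sym = GAMMA0 + sum(couplings(pos[j] - pos[0])[1] for j in range(1, N))
    omega_p = couplings(pos[0])[0]    # center-to-ring coupling (distance R)
    return gamma_sym, N * omega_p**2 / (GAMMA0 * gamma_sym)

g_small, C_small = ring_figures_of_merit(5, 0.02)   # nearly overlapping atoms
g_half, C_half = ring_figures_of_merit(5, 0.5)      # spacing lambda_0 / 2
```

For tiny spacings the symmetric state is strongly superradiant ($\Gamma\ts{sym}\approx N\Gamma_0$) while the diverging near-field coupling still yields a large $C$; at spacings of order $\lambda_0/2$ the same state becomes subradiant, which is the regime favored in the text.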
\begin{figure}\label{decay_rates}
\end{figure}
As seen in~\fref{decay_rates}a, the dissipative coupling of the central atom vanishes at points where the symmetric state shows suppressed spontaneous emission (see~\fref{symmetric_decay}). Hence, the loss during the excitation transport from the gain medium to the ring is reduced as well.
\section{Photon Statistics and Spectral Properties}
We have now identified a regime where our system resembles the typical setup of a single-atom laser. In order to study the statistical properties of the emitted light we calculate the normalized second-order correlation at zero time delay, $g^{(2)}(0)$, of the electric field intensity. In the far field, $r \gg \lambda_0$, the intensity correlation function becomes independent of the position (see Appendix) and is given by \begin{equation}\label{eq:13}
g^{(2)}(0) = \frac{\sum_{ijkl} \left \langle \sigma^+_i \sigma^+_j \sigma^-_k \sigma^-_l \right \rangle}{|\sum_{mn} \left \langle \sigma^+_m \sigma^-_n \right \rangle|^2}. \end{equation}
Coherent light exhibits Poissonian statistics, implying $g^{(2)}(0)=1$~\cite{gardiner2004quantum,mandel_wolf_1995}. Therefore, operation in the previously identified parameter regimes leads to the emission of coherent light. In addition, we calculate the amount of emitted light, i.e.\ $ I\ts{out} := \sum_{ij} \Gamma_{ij} \left \langle \sigma^{+}_i \sigma^{-}_j \right \rangle. $
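For small rings the master equation can be solved exactly and Eq.~\eqref{eq:13} evaluated in the steady state. The following minimal numerical sketch uses our own illustrative parameters ($N=4$ ring atoms, $d=0.4\lambda_0$, $\nu=0.1\Gamma_0$) and the standard free-space coupling formulas; all function names are ours, not from the paper:

```python
import numpy as np

GAMMA0, K0, NU = 1.0, 2 * np.pi, 0.1     # decay rate, wavenumber, pump rate

def couplings(rv):
    # standard free-space dipole-dipole couplings, dipoles polarized along z
    r = np.linalg.norm(rv); x = K0 * r; c = rv[2] / r
    a, b = 1 - c**2, 1 - 3 * c**2
    return (0.75*GAMMA0*(-a*np.cos(x)/x + b*(np.sin(x)/x**2 + np.cos(x)/x**3)),
            1.5*GAMMA0*(a*np.sin(x)/x + b*(np.cos(x)/x**2 - np.sin(x)/x**3)))

def lowering(i, n):
    # sigma_i^- on an n-atom register (tensor ordering: atom 0 first)
    s, one = np.array([[0., 1.], [0., 0.]]), np.eye(2)
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, s if k == i else one)
    return out

# ring of N atoms plus the gain atom (index N) at the origin
N, d = 4, 0.4
R = d / (2 * np.sin(np.pi / N))
phi = 2 * np.pi * np.arange(N) / N
pos = [np.array([R*np.cos(p), R*np.sin(p), 0.0]) for p in phi] + [np.zeros(3)]
n, dim = N + 1, 2**(N + 1)
sm = [lowering(i, n) for i in range(n)]
sp = [s.T for s in sm]

H = np.zeros((dim, dim))
G = GAMMA0 * np.eye(n)
for i in range(n):
    for j in range(n):
        if i != j:
            w, g = couplings(pos[i] - pos[j])
            G[i, j] = g
            H += w * sp[i] @ sm[j]

# Liouvillian, column-stacking convention: vec(A rho B) = kron(B.T, A) vec(rho)
Id = np.eye(dim)
L = -1j * (np.kron(Id, H) - np.kron(H.T, Id))
for i in range(n):
    for j in range(n):
        B = sp[i] @ sm[j]
        L += G[i, j] * (np.kron(sm[j], sm[i])
                        - 0.5*np.kron(Id, B) - 0.5*np.kron(B.T, Id))
p = n - 1                                # incoherent pump, jump operator sigma_p^+
Bp = sm[p] @ sp[p]
L += NU * (np.kron(sp[p], sp[p]) - 0.5*np.kron(Id, Bp) - 0.5*np.kron(Bp.T, Id))

# steady state: null vector of L, reshaped, hermitized and trace-normalized
rho = np.linalg.svd(L)[2][-1].conj().reshape((dim, dim), order='F')
rho = rho / np.trace(rho)
rho = 0.5 * (rho + rho.conj().T)

Sm = sum(sm); Sp = Sm.T                  # collective lowering/raising operators
g2 = float(np.real(np.trace(rho @ Sp @ Sp @ Sm @ Sm))
           / abs(np.trace(rho @ Sp @ Sm))**2)
I_out = float(np.real(sum(G[i, j] * np.trace(rho @ sp[i] @ sm[j])
                          for i in range(n) for j in range(n))))
```

For the larger rings of \fref{g2_plot} the full Hilbert space ($2^{N+1}$ states) becomes impractical, which is why the text truncates at the two-excitation manifold.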
In~\fref{g2_plot}a we can see that points of coherent light emission where $g^{(2)}(0)=1$ are achieved along a curve strongly resembling the optimal subradiance parameters shown in~\fref{symmetric_decay}. The points where $g^{(2)}(0)=0$ correspond to the situation where the gain atom decouples from the cavity atoms, since then only the single atom in the center can emit light. Since a single atom cannot emit more than one photon at a time, we observe anti-bunching. However, this regime does not coincide with ``lasing'', since the ring atoms are not occupied. Simultaneously, the intensity shown in \fref{g2_plot}b is small, but still finite when the emitted light is coherent. This is because coherences can only build up when the loss from the atoms in the ring is sufficiently low ($\Gamma\ts{sym}$ is small), which also reduces the amount of light emitted.
\begin{figure}
\caption{\emph{Intensity and Statistics of the Emitted Light.} (a) Steady-state second-order correlation as a function of the ring atom number and atom spacing. For each atom number $N$ there are specific interatomic distances $d$ where the emitted light changes from thermal-like light emission (red), passing over regions of Poissonian statistics (white), to sub-Poissonian properties (blue). (b) The radiated intensity $I\ts{out}$ for the same parameter region. Where $g^{(2)}(0) = 1$ the intensity is maximal, regardless of the atom number. (c) The spectral linewidth $\Delta \nu$ for the same parameters. It reduces to well below $\Gamma_0$. The pump rate was $\nu=0.1\Gamma_0$.}
\label{g2_plot}
\end{figure}
In order to analyze the emitted light in more detail, we compute its spectral linewidth. To this end, we calculate the emission spectrum by means of the Wiener-Khinchin theorem (see eq. \eqref{eq:spectrum})~\cite{carmichael2009open}. It is given as the Fourier transform of the first-order coherence function, $ g^{(1)}(\tau):=\sum_{i,j}\left \langle \sigma_i^+(\tau) \sigma^-_j \right \rangle. $ The spectrum has a Lorentzian shape, thus we compute the linewidth $\Delta\nu$ as the full width at half maximum (FWHM). In~\fref{g2_plot}c, we show the linewidth as a function of $N$ and the interatomic distance $d$. Once again, we find that the linewidth is small ($\Delta \nu < \Gamma_0$) at the points where the symmetric state is subradiant. It can be seen that, in order to maintain coherent light emission, the interatomic distances need to become smaller for an increasing number of atoms in a ring of constant radius.
Note that, in order to treat larger atom numbers in the above calculations, we have truncated the Hilbert space at the second-excitation manifold (see Appendix). Since the single-excitation subspace usually dominates [as shown in \fref{symmetric_state}], neglecting any state containing more than two excitations is well justified.
\section{Threshold-Less Behavior}
In standard lasing models, coherent output light is achieved only above a certain input power threshold. Above threshold, the intensity of the emitted light increases drastically. In an effort to identify such a threshold in our setup, we compute the properties of the output light as a function of the pump strength of the gain atom.
\begin{figure}
\caption{\textit{Threshold-Less Coherent Light Emission.} (a) $I\ts{out}$ as a function of the pump rate $\nu$ for $N=5$, $d=\lambda_0/2$, reaching its maximum around $\nu \approx 4\Gamma_0$. (b) A zoom into the weak-pump region shows the immediate onset of the intensity $I_\mathrm{out}$ at small $\nu$. (c) The second-order correlation $g^{(2)}(0)$ in steady state is $1$ for finite, but small $\nu$. (d) The radiative linewidth $\Delta \nu$ (blue) in the steady state stays well below the pump-broadened linewidth $\Gamma_0 + \nu$ of a single emitter (gray), and approaches the decay rate $\Gamma_\mathrm{sym}$ of the symmetric state (yellow, dashed line).}
\label{output_field}
\end{figure}
The system does not exhibit a threshold. Such threshold-less behavior has been observed in single-atom lasing setups~\cite{mcKeever2003experimental}. As we can see in Figs.~\ref{output_field}a and~\ref{output_field}b, the output intensity grows as soon as the pump rate becomes nonzero, rather than requiring a sufficiently large pump rate. At the same time, the photon statistics of the emitted field are Poissonian, i.e. $g^{(2)}(0)=1$, for arbitrarily low pumping rates (see~\fref{output_field}c). The photon statistics only change when the pump rate becomes large, $\nu\sim 10\Gamma_0$, such that the emitted light starts to reproduce the thermal statistics of the input field. It can also be seen in~\fref{output_field}a that above this point the output intensity is actually reduced. As one would expect, the linewidth of the emitted field is small ($\Delta \nu < \Gamma_0$) as long as the light is coherent (see~\fref{output_field}d). When the incoherent pumping rate $\nu$ is increased, states outside the symmetric subspace are occupied, which leads to a slight increase in the linewidth. However, upon increasing $\nu$ further, the linewidth decreases again and approaches $\Gamma_\mathrm{sym}$: as the central atom decouples from the ring atoms, the light is emitted from the ring in the subradiant symmetric state.
\section{Conclusions}
We predict that a continuously pumped single atom surrounded by a nano-ring of identical atoms could act as a minimal, sub-wavelength sized implementation of a laser. Under suitable operating conditions the system will emit spatially and temporally coherent light with Poisson statistics. Our analysis reveals a close analogy to the Jaynes-Cummings model, where the outer ring atoms take on the role of a high-$Q$ cavity mode with the central atom providing the gain. The system works best when driven into a collective subradiant state with a single excitation. In this limit, spontaneous emission is suppressed and the operation strongly resembles the behavior of a threshold-less laser~\cite{Boca04}. While the implementation of such a system in a pure form could be envisioned in optical tweezer arrays of neutral atoms~\cite{Barredo2016atom}, analogous setups based on quantum dots have been implemented and are already operational in the pulsed excitation regime~\cite{le2018colloidal}.
Let us note here that there are no fundamental lower physical limits on the size of the system apart from the technical implementation of the structure and its pumping. Hence, very high density arrays of such lasers on a surface are possible.
\section{Appendix}
\subsection{Green's Tensor}
The collective coupling rates $\Omega_{ij}$ and $\Gamma_{ij}$ are given as the real and imaginary part of the overlap of the transition dipole of the $i$th atom with the electric field emitted by the $j$th atom, i.e.\ \begin{subequations} \begin{align} \label{eq:couplings1} \Omega_{ij} &= -\frac{3\pi \Gamma_0}{k_0}\Re \left( {\pmb{\mu}}_i^*\cdot \pmb{G} \left( \pmb{r}_i-\pmb{r}_j,\omega_0 \right)\cdot \pmb{\mu}_j \right), \\ \label{eq:couplings2} \Gamma_{ij} &= \frac{6\pi \Gamma_0}{k_0} \Im \left( \pmb{\mu}_i^*\cdot \pmb{G} \left( \pmb{r}_i-\pmb{r}_j,\omega_0 \right)\cdot \pmb{\mu}_j \right). \end{align} \end{subequations}
In the above, $\pmb{G} \left(\pmb{r},\omega_0 \right)$ is the electromagnetic Green's tensor of an oscillating dipole source in the vacuum~\cite{jackson2007classical} which reads \begin{equation} \begin{aligned}\label{eq:greens_tensor} \pmb{G} \left(\pmb{r},\omega_0 \right)\cdot \pmb{\mu} &= \frac{e^{i k_0 r}}{4\pi r}\Big[ \left(\hat{\pmb{r}} \times \pmb{\mu} \right) \times \hat{\pmb{r}} + \\ &+ \left(\frac{1}{k_0^2 r^2} - \frac{i}{k_0 r}\right) \left(3\hat{\pmb{r}} \left( \hat{\pmb{r}}\cdot \pmb{\mu} \right)- \pmb{\mu} \right) \Big], \end{aligned} \end{equation}
where $r = \left| \pmb{r} \right|$ and $\hat{\pmb{r}} = \pmb{r}/r$ is the position unit vector, $k_0 = \omega_0/c$ and $\Gamma_0 = \left|\pmb{\mu}\right|^2 k_0^3/ \left( 3\pi \epsilon_0 \right)$ is the spontaneous emission rate of a single atom.
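As a sanity check, the coupling rates of Eqs.~\eqref{eq:couplings1} and \eqref{eq:couplings2} can be evaluated numerically. The sketch below (Python; function names are ours, with $\Gamma_0=1$ and distances in units of $\lambda_0$) specializes Eq.~\eqref{eq:greens_tensor} to $z$-polarized dipoles separated in the $xy$-plane and verifies the known limit $\Gamma_{ij}\rightarrow\Gamma_0$ for vanishing separation.

```python
import cmath
import math

GAMMA0 = 1.0          # single-atom decay rate (units of Gamma_0)
K0 = 2 * math.pi      # wave number; distances measured in lambda_0

def g_zz(r):
    """Transverse (zz) component of the vacuum Green's tensor of
    Eq. (greens_tensor) for a z-polarized dipole observed at an
    in-plane distance r (r_hat perpendicular to the dipole moment)."""
    x = K0 * r
    return cmath.exp(1j * x) / (4 * math.pi * r) * (1 - 1 / x**2 + 1j / x)

def couplings(r):
    """Coherent coupling Omega_ij and collective decay Gamma_ij,
    Eqs. (couplings1)-(couplings2), for mu_i = mu_j = e_z."""
    g = g_zz(r)
    omega = -3 * math.pi * GAMMA0 / K0 * g.real
    gamma = 6 * math.pi * GAMMA0 / K0 * g.imag
    return omega, gamma

# Sanity check: Gamma_ij approaches the single-atom rate Gamma_0
# as the two atoms are brought together.
_, gamma_close = couplings(1e-4)
assert abs(gamma_close - GAMMA0) < 1e-6
```

At $d=\lambda_0/2$ in the plane the same routine gives $\Gamma_{ij}=-3\Gamma_0/(2\pi^2)\approx-0.15\,\Gamma_0$, i.e. the dissipative coupling between neighbouring ring atoms is much weaker than the single-atom decay.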
The electric field operator can be obtained directly from the atomic operators~\cite{PhysRevX.7.031024,PhysRevA.95.033818} as \begin{equation} \label{eq:field}
\pmb{E}^+(\pmb{r}) = \frac{|\pmb{\mu}|k_0^2}{\epsilon_0}\sum_i \pmb{G}\left(\pmb{r}-\pmb{r}_i, \omega_0 \right)\cdot \pmb{\mu}_i \, \sigma^-_i \end{equation} for the positive frequency component. This is the electric field generated by an ensemble of $N$ atoms at the position $\pmb{r}$ in the vacuum.
In order to arrive at the expression for the normalized second-order correlation function $g^{(2)}(0)$ in the far-field we use the fact that the overlap of the Green's tensor with the atomic dipole becomes approximately independent of the atomic position for $\left| \pmb{r}-\pmb{r}_i \right|\gg \lambda_0$. In our case of identical two-level emitters polarized in the $z$-direction and distributed in the $xy$-plane, Eq.~\eqref{eq:greens_tensor} simplifies to \begin{equation} \label{eq:green_simple} \pmb{G} \left(\pmb{r}-\pmb{r}_i,\omega_0 \right)\cdot \pmb{\mu}_i \approx \frac{e^{ikr}}{4\pi r} \hat{e}_z \Big(1-\frac{1}{k_0^2 r^2}+\frac{i}{k_0 r}\Big), \end{equation}
where $\left| \pmb{r}-\pmb{r}_i \right| \gg \lambda_0$; therefore, upon normalization, the second-order correlation function becomes independent of $\pmb{r}$ and $\pmb{r}_i$.
\subsection{Symmetric Subspace}
As mentioned in the main text, during the whole time evolution and for any incoherent pumping rate $\nu$ the ring is mainly in the symmetric state. This allows us to restrict ourselves to a subspace within the single-excitation manifold where either the central atom is excited or the symmetric state of the ring is populated. The Hilbert space is spanned by these two states and the ground state of the system, i.e.\ \begin{equation} \label{subspace}
\Big\{ |\phi_1 \rangle,|\phi_2 \rangle,|\phi_3 \rangle \Big\}\equiv \Big\{ |\psi_\mathrm{sym}\rangle \otimes |g\rangle,|g\rangle^{\otimes N}\otimes |e\rangle,|g\rangle^{\otimes N}\otimes |g\rangle\Big\}. \end{equation}
Within this subspace the nonzero matrix elements of the Hamiltonian are given by \begin{subequations}\label{eq:expect_ham} \begin{align}
\langle \phi_1| H |\phi_1\rangle &= \Omega_{\mathrm{sym}}, \\
\langle \phi_1| H |\phi_2 \rangle &= \sqrt{N}\Omega_{\mathrm{p}}. \end{align} \end{subequations}
In turn, this allows us to rewrite the Hamiltonian in this basis as \begin{align}\label{eq:Hamiltonian_sym_appendix} H\ts{sym} &= \Omega\ts{sym}\sigma\ts{sym}^+\sigma\ts{sym}^- + \sqrt{N}\Omega_\mathrm{p}\left(\sigma\ts{sym}^+\sigma_\mathrm{p}^{ge} + \text{H.c.}\right), \end{align} where the subspace lowering operator is given by \begin{align}\label{new_transitions} \sigma\ts{sym}^- = \ket{g}^{\otimes{N}}\bra{\psi\ts{sym}}\otimes \mathbbm{1}_p. \end{align}
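For concreteness, the effective Hamiltonian of Eq.~\eqref{eq:Hamiltonian_sym_appendix} can be written explicitly as a $3\times3$ matrix in the basis of Eq.~\eqref{subspace}. The following toy sketch (Python; the parameter values are arbitrary) reproduces the matrix elements of Eqs.~\eqref{eq:expect_ham}:

```python
import math

def h_sym(n_ring, omega_sym, omega_p):
    """H_sym of Eq. (Hamiltonian_sym_appendix) in the basis
    {|phi_1> = symmetric ring excitation, |phi_2> = excited center,
     |phi_3> = global ground state}."""
    g = math.sqrt(n_ring) * omega_p   # collectively enhanced coupling
    return [[omega_sym, g, 0.0],
            [g, 0.0, 0.0],
            [0.0, 0.0, 0.0]]

H = h_sym(n_ring=5, omega_sym=0.3, omega_p=0.2)
assert H[0][0] == 0.3                              # <phi_1|H|phi_1> = Omega_sym
assert abs(H[0][1] - math.sqrt(5) * 0.2) < 1e-12   # <phi_1|H|phi_2> = sqrt(N) Omega_p
assert H[1][0] == H[0][1]                          # Hermiticity
```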
If there is only a single excitation present in the system, the Lindblad operator accounting for the collective spontaneous emission can be written in the standard form \begin{align} \mathcal{L}_\Gamma[\rho] = \sum_{i,j} \frac{\Gamma_{ij}}{2}\Big( 2\sigma^{ge}_j \rho\, \sigma^{eg}_i - \sigma^{eg}_i \sigma^{ge}_j \rho - \rho\, \sigma^{eg}_i \sigma^{ge}_j \Big), \end{align}
where the atomic density matrix in the steady state will live in the symmetric subspace, such that
\begin{align}
\rho \propto \sum_{i,j=1}^{2} \left| \phi_i \right\rangle \left\langle \phi_j \right|.
\end{align}
Applying the Lindblad superoperator yields the decay rates $\Gamma_\mathrm{p}$, $\Gamma_\mathrm{sym}$ and $\Gamma_\mathrm{0}$ for $i \neq j$, $i=j=1$ and $i=j=2$, respectively. The collective decay $\Gamma_\mathrm{p}$ between the central atom and the ring atoms is approximately zero for the distances where $g^{(2)}(0) = 1$ and can be neglected, as discussed in the main text. The ring features the collective decay $\Gamma_\mathrm{sym} = \langle \psi_\mathrm{sym}|\mathcal{L}_\Gamma[\rho]|\psi_\mathrm{sym}\rangle$, while the central atom undergoes independent spontaneous emission at the rate $\Gamma_0$, with the decay operators $\sigma^-_\mathrm{sym}$ and $\sigma^-_\mathrm{p}$, respectively. Therefore, the Lindblad term can be split into \begin{align}\label{eq:lindblad_single_ex} \mathcal{L}_\Gamma[\rho] = \mathcal{L}_{\Gamma_\mathrm{sym}}[\rho]+\mathcal{L}_{\Gamma_0}[\rho]. \end{align}
This leads to a form of the Hamiltonian and Master equation which resembles the Jaynes-Cummings model, where the good cavity is given by the subradiant symmetric state of the ring atoms.
The definition of the cooperativity parameter can be understood as follows. As can be seen in $H_\mathrm{sym}$, the coupling coefficient between the central atom and the symmetric ring mode is given by $\sqrt{N} \Omega_\mathrm{p}$, whereas the spontaneous decay into the vacuum modes for the central atom is simply $\Gamma_0$. As analyzed in the main text, the ring atoms are predominantly in the symmetric state with a collective decay rate $\Gamma_\mathrm{sym}$ for the parameters where the symmetric state is maximally subradiant. Interpreting the ring in the symmetric state as a cavity and the central atom as the gain medium leads to the definition of the cooperativity parameter $C$.
\begin{figure}
\caption{(a) Scaling behaviour of $I_\mathrm{out}$ in the steady state as a function of the atom number in the ring, where the interatomic distance for each $N$ is chosen along the white circles in fig.\ 3 of the main text and the incoherent pumping rate is $\nu = 10^{-1}\Gamma_0$. (b) Comparison of the steady-state intensity obtained with the cut-off at the second-excitation manifold to that of the full Master equation, as a function of the incoherent pumping rate $\nu$ for $N=8$ atoms in the ring. (c,d) The intensity correlation function $g^{(2)}(0)$ and the linewidth $\Delta \nu$ as a function of $N$ for a pumping rate $\nu = 10^{-1}\Gamma_0$ and an interatomic distance $\lambda_0/2$ between neighbouring atoms.}
\label{scaling}
\end{figure}
\subsection{Truncating the Hilbert Space at Low Excitation}
Concerning the scaling behaviour of the system, a mean-field treatment, even with the inclusion of correlations to second order, is not sufficient, since the properties of the steady state strongly depend on higher-order correlations; in particular, $g^{(2)}(0)$ involves products of four operators. In order to analyze the scaling behaviour for a larger number of emitters in the ring we restrict the system to two excitations. This cut-off can only be a good approximation to the full model for small enough incoherent pumping rates $\nu$. In fig.~\ref{scaling}b the output intensity $I_\mathrm{out}$ in the steady state of the reduced Hilbert space is compared to the full model for eight atoms in the ring; good agreement is found for $\nu \leq \Gamma_0$. For $I_\mathrm{out}$, the intensity correlation function $g^{(2)}(0)$ and the linewidth $\Delta \nu$ in figs.~\ref{scaling}a, c and d, the pumping rate is $10^{-1}\Gamma_0$ and the interatomic distances are chosen along the white circles in fig.\ 3. The linewidth $\Delta \nu$ is well below $\Gamma_0$ for $N\geq 4$ emitters in the ring and for distances $d$ where $g^{(2)}(0) \approx 1$, but it reaches a minimum which is above $\Gamma_0/2$.
\subsection{Computing the Spectrum}
The spectrum we use in order to compute the linewidth via its FWHM is given by the Fourier transform of the first-order correlation function. As for the second-order correlation function, in the far field this expression becomes independent of the geometry and is given by \begin{align} g^{(1)}(\tau) = \sum_{i,j}\langle\sigma_i^+\sigma_j^-\rangle. \end{align}
The spectrum can then be computed as~\cite{carmichael2009open} \begin{align}\label{eq:spectrum} S(\omega) = 2 \Re \Bigg\{ \int_0^\infty d\tau e^{-i\omega \tau}\sum_{i,j}\langle \sigma_i^+(\tau) \sigma^-_j \rangle \Bigg\}. \end{align}
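As an illustration of how the linewidth is extracted in practice, the sketch below evaluates Eq.~\eqref{eq:spectrum} by direct numerical integration for a model coherence function $g^{(1)}(\tau)=e^{-\gamma\tau/2}$ (an assumption standing in for the full atomic correlation), which yields a Lorentzian of FWHM $\gamma$:

```python
import cmath
import math

GAMMA = 0.2  # model linewidth (units of Gamma_0); an illustrative choice

def g1(tau):
    # Model first-order coherence decaying at rate GAMMA/2,
    # i.e. a Lorentzian line of FWHM = GAMMA.
    return math.exp(-0.5 * GAMMA * tau)

def spectrum(omega, dt=0.01, t_max=400.0):
    """Eq. (spectrum): S(w) = 2 Re int_0^inf dtau e^{-i w tau} g1(tau),
    evaluated by a midpoint Riemann sum."""
    acc = 0j
    for k in range(int(t_max / dt)):
        tau = (k + 0.5) * dt
        acc += cmath.exp(-1j * omega * tau) * g1(tau) * dt
    return 2 * acc.real

peak = spectrum(0.0)
half = spectrum(GAMMA / 2)  # Lorentzian: half maximum at omega = +-GAMMA/2
assert abs(peak - 4 / GAMMA) < 1e-2 * peak
assert abs(half - peak / 2) < 1e-2 * peak
```

In the full calculation $g^{(1)}(\tau)$ is obtained from the Master-equation dynamics, but the FWHM extraction proceeds in the same way.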
\end{document} |
\begin{document}
\title{Density matrix of the superposition of excitation on coherent states with thermal light and its statistical properties } \author{Li-yun Hu} \affiliation{Department of Physics, Shanghai Jiao Tong University, Shanghai 200030, China} \author{Hong-yi Fan} \affiliation{Department of Physics, Shanghai Jiao Tong University, Shanghai 200030, China} \keywords{excitation on coherent states, thermal light, Wigner function} \pacs{}
\begin{abstract} The density matrix of a beam described by the superposition of excitation on coherent states with thermal noise (SECST) is presented, and its matrix elements in Fock space are calculated. The maximum information transmitted by the SECST beam is derived. It is larger than that transmitted by a coherent light beam and increases as the excitation photon number increases. In addition, the nonclassicality of the density matrix is demonstrated by calculating its Wigner function.
{\small PACS numbers: 42.50.Dv, 03.67.Hk, 03.65.Ud}
\end{abstract} \date{5 July 2008} \maketitle
\section{Introduction}
Recently, much attention has been paid to the excitation on coherent states (ECS) \cite{1,2,3,4,5,6}. As pointed out in Refs.\cite{2,3}, the single-photon ECS causes a classical-to-quantum (nonclassical) transition. The ECSs can be considered as a generalization of coherent states \cite{7,8} and number eigenstates. All these states can be used as signal beams in the field of optical communication, in which the nonclassicality of signals plays an important role.
However, in reality, signal beams are usually mixed with thermal noise. Statistical properties of the superposition of (squeezed) coherent states with thermal light (SCST) have been investigated by calculating the photon number matrix elements $\left\langle N\right\vert \rho\left\vert M\right\rangle $ of the SCST's density matrix \cite{9,10}. These properties are useful in quantum optics and quantum electronics (e.g., how lasers work well above threshold, heterodyne detection of light, etc.) \cite{11}. Some general properties of the density matrices which describe coherent, squeezed and number eigenstates in thermal noise are studied in Ref.\cite{12}. It is found that the information transmitted by the superposition of number eigenstates with thermal light (SNET) beam is less than that transmitted by the SCST beam \cite{13}.
In this paper, we investigate statistical properties of the superposition of ECS with thermal light (SECST). We present the relevant density matrix in Fock space and derive the Mandel $Q$ parameter. The SECST field can exhibit a significant amount of super-Poissonian photon statistics (super-PPS) due to the presence of thermal noise for excitation photon number $m=0$, while for $m\neq0$ the SECST field can present sub-PPS when the thermal mean photon number is less than a threshold value. In addition, the threshold value increases as $m$ increases. We also calculate the maximum information (channel capacity) transmitted by the SECST beam, which increases as $m$ increases. Finally, the nonclassicality of the density matrix is demonstrated by calculating the Wigner function of the SECST.
Our paper is arranged as follows. In Sec. II we present the density matrix $\rho$ that describes the SECST and calculate its matrix elements in Fock space by using the normal ordered form of $\rho$. The PPS distributions are discussed in Sec III. The maximum information is calculated in Sec. IV. Sec. V is devoted to deriving the Wigner function of the SECST and discussing its nonclassicality in details. Conclusions are summarized in the last section.
\section{Excitation on coherent states with thermal noise}
Firstly, let us briefly review the excitation on coherent states (ECSs). The ECSs, first introduced by Agarwal and Tara \cite{1}, are the result of successive elementary one-photon excitations of a coherent state, and are intermediate states between the Fock states and the coherent states, since they exhibit the sub-Poissonian character. Theoretically, an ECS can be obtained by repeatedly applying the photon creation operator $a^{\dag}$ to a coherent state, so its density operator is \begin{equation} \rho_{0}=C_{\alpha,m}a^{\dag m}\left\vert \alpha\right\rangle \left\langle \alpha\right\vert a^{m}, \label{1} \end{equation} where $C_{\alpha,m}=[m!L_{m}(-\left\vert \alpha\right\vert ^{2})]^{-1}$ is the normalization factor, $\left\vert \alpha\right\rangle =\exp(-\left\vert \alpha\right\vert ^{2}/2+\alpha a^{\dagger})\left\vert 0\right\rangle $ is the coherent state \cite{7,8}, and $L_{m}\left( x\right) $ is the $m$th-order Laguerre polynomial.
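The normalization factor quoted below Eq.~(\ref{1}) relies on the identity $\langle\alpha|a^{m}a^{\dag m}|\alpha\rangle=m!L_{m}(-|\alpha|^{2})$ \cite{1}. A short numerical sketch (Python; the helper names are ours) checks it by expanding $a^{\dag m}\left\vert \alpha\right\rangle$ in Fock states:

```python
import math

def laguerre(m, x):
    """L_m(x) via the three-term recurrence
    (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}."""
    if m == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - x
    for k in range(1, m):
        prev, cur = cur, ((2 * k + 1 - x) * cur - k * prev) / (k + 1)
    return cur

def norm_series(m, alpha2, n_max=100):
    """<alpha| a^m a^{dag m} |alpha> = e^{-|a|^2} sum_n |a|^{2n} (n+m)!/(n!)^2,
    from the Fock expansion of a^{dag m}|alpha>; alpha2 = |alpha|^2."""
    term = float(math.factorial(m))  # n = 0 term: (0+m)!/(0!)^2
    total = term
    for n in range(1, n_max):
        term *= alpha2 * (n + m) / n**2   # ratio of consecutive terms
        total += term
    return math.exp(-alpha2) * total

# Check <alpha|a^m a^{dag m}|alpha> = m! L_m(-|alpha|^2) for a few m.
for m in (0, 1, 3):
    assert abs(norm_series(m, 1.5) - math.factorial(m) * laguerre(m, -1.5)) < 1e-6
```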
The SECST is described by the density matrix \cite{12} \begin{align} \rho & =\int\frac{d^{2}z}{\pi}P\left( z\right) D\left( z\right) \rho _{0}D^{\dag}\left( z\right) ,\label{2}\\ P\left( z\right) & =\frac{1}{\bar{n}_{t}}\exp\left[ -\frac{\left\vert z\right\vert ^{2}}{\bar{n}_{t}}\right] ,\label{3} \end{align} where $D\left( z\right) =\exp(za^{\dag}-z^{\ast}a)$ is the displacement operator, and $\bar{n}_{t}$ is the mean number of thermal photons for $\rho_{0}\rightarrow\left\vert 0\right\rangle \left\langle 0\right\vert $. We can easily prove that $\mathtt{Tr}\rho=1,$ as it should be. In fact, \begin{align} \mathtt{Tr}\rho & =\int\frac{d^{2}z}{\pi}P\left( z\right) \mathtt{Tr} \left[ D\left( z\right) \rho_{0}D^{\dag}\left( z\right) \right] \nonumber\\ & =\int\frac{d^{2}z}{\pi}P\left( z\right) \mathtt{Tr}\left( \rho _{0}\right) \nonumber\\ & =\int\frac{d^{2}z}{\pi}P\left( z\right) =1.\label{4} \end{align}
\subsection{Normal ordering form of the SECST}
For simplicity in our later calculations, we first perform the integration in Eq.(\ref{2}) by using the technique of integration within an ordered product (IWOP) of operators \cite{14,15}. Using the normal ordering form of the vacuum projector $\left\vert 0\right\rangle \left\langle 0\right\vert =\colon\exp(-a^{\dag}a)\colon,$ we can rewrite Eq.(\ref{2}) in the following form \begin{align} \rho & =C_{\alpha,m}e^{-\left\vert \alpha\right\vert ^{2}}\int\frac{d^{2} z}{\pi}P\left( z\right) D\left( z\right) \colon a^{\dag m}\nonumber\\ & \times\exp\left( \alpha a^{\dag}+\alpha^{\ast}a-a^{\dag}a\right) a^{m}\colon D^{\dag}\left( z\right) \nonumber\\ & =\frac{C_{\alpha,m}}{\bar{n}_{t}}\colon\exp\left( -\left\vert \alpha\right\vert ^{2}-a^{\dag}a+\allowbreak a^{\dag}\alpha+a\alpha^{\ast }\right) \nonumber\\ & \times\int\frac{d^{2}z}{\pi}\exp\left[ -\frac{1+\bar{n}_{t}}{\bar{n}_{t} }\left\vert z\right\vert ^{2}\right] \nonumber\\ & \times\exp\left[ \left( a^{\dag}-\alpha^{\ast}\right) \allowbreak z+\left( a-\alpha\right) z^{\ast}\right] \nonumber\\ & \times\left( a^{\dag}-z^{\ast}\right) ^{m}\left( a-z\right) ^{m} \colon.\label{5} \end{align} In the last step of (\ref{5}), we used the fact that for any operator $f(a^{\dag },a)$ \begin{equation} D\left( z\right) f(a^{\dag},a)D^{\dag}\left( z\right) =f(a^{\dag}-z^{\ast },a-z).\label{6} \end{equation} Making two independent variable displacements, \[ a^{\dag}-z^{\ast}\rightarrow\beta^{\ast},\quad a-z\rightarrow\beta, \] (note that the operators $a^{\dag},a$ can be treated as c-numbers within the normal ordering symbol $\colon\colon$), Eq.(\ref{5}) can be rewritten as \begin{align} \rho & =\frac{C_{\alpha,m}}{\bar{n}_{t}}\colon\exp\left( -\left\vert \alpha\right\vert ^{2}-\frac{1}{\bar{n}_{t}}a^{\dag}a\right) \nonumber\\ & \times\int\frac{d^{2}\beta}{\pi}\beta^{\ast m}\beta^{m}\exp\left[ -\lambda_{t}^{-2}\left\vert \beta\right\vert ^{2}\right. \nonumber\\ & \left. 
+\left( \allowbreak\alpha^{\ast}+\allowbreak\frac{a^{\dag}} {\bar{n}_{t}}\right) \beta+\left( \frac{a}{\bar{n}_{t}}+\alpha\right) \beta^{\ast}\right] \colon\nonumber\\ & =\frac{C_{\alpha,m}}{\bar{n}_{t}}\lambda_{t}^{2m+2}\colon\exp\left( -\left\vert \alpha\right\vert ^{2}-\frac{1}{\bar{n}_{t}}a^{\dag}a\right) \nonumber\\ & \times\int\frac{d^{2}\beta}{\pi}\beta^{\ast m}\beta^{m}\exp\left[ -\left\vert \beta\right\vert ^{2}+A^{\dag}\beta+A\beta^{\ast}\right] \colon,\label{7} \end{align} where we have set $\lambda_{t}=\sqrt{\bar{n}_{t}/(1+\bar{n}_{t})}$ and $A=\lambda_{t}(\frac{1}{\bar{n}_{t}}a+\alpha).$ Then using the integration expression of two-variable Hermite polynomial $H_{m,n}$ \cite{16}, \begin{align} & (-1)^{n}e^{-\xi\eta}H_{m,n}\left( \xi,\eta\right) \nonumber\\ & =\int\frac{d^{2}z}{\pi}z^{n}z^{\ast m}\exp\left[ -\left\vert z\right\vert ^{2}+\xi z-\eta z^{\ast}\right] ,\label{8} \end{align} we can put Eq.(\ref{7}) into \begin{align} \rho & =\frac{C_{\alpha,m}}{\bar{n}_{t}}\lambda_{t}^{2m+2}\colon\left( -1\right) ^{m}H_{m,m}\left( A^{\dag},-A\right) \nonumber\\ & \times\exp\left[ -\frac{\left( a-\alpha\right) \left( a^{\dag} -\alpha^{\ast}\right) }{\bar{n}_{t}+1}\right] \colon.\label{9} \end{align} In particular, when $m=0$, corresponding to the case of superposition of coherent state with thermal noise, Eq.(\ref{9}) reduces to \begin{equation} \rho=\frac{1}{\bar{n}_{t}+1}D\left( \alpha\right) \colon e^{-\frac{a^{\dag }a}{\bar{n}_{t}+1}}\colon D^{\dag}\left( \alpha\right) ,\label{9.1} \end{equation} which can be directly checked by using Eqs.(\ref{2}) and (\ref{3}) as well as noticing $\rho_{0}=\left\vert \alpha\right\rangle \left\langle \alpha \right\vert .$
Further employing the relation between Hermite polynomial and Laguerre polynomial \cite{16}, \begin{equation} H_{m,n}\left( \xi,\kappa\right) =\left\{ \begin{array} [c]{cc} n!\left( -1\right) ^{n}\xi^{m-n}L_{n}^{m-n}\left( \xi\kappa\right) , & m>n\\ m!\left( -1\right) ^{m}\kappa^{n-m}L_{m}^{n-m}\left( \xi\kappa\right) , & m<n \end{array} \right. , \label{10} \end{equation} we can see that \begin{align} \rho & =\frac{1}{L_{m}(-\left\vert \alpha\right\vert ^{2})}\frac{\bar{n} _{t}^{m}}{(1+\bar{n}_{t})^{m+1}}\colon L_{m}\left( -A^{\dag}A\right) \nonumber\\ & \times\exp\left[ -\frac{\left( a-\alpha\right) \left( a^{\dag} -\alpha^{\ast}\right) }{\bar{n}_{t}+1}\right] \colon. \label{11} \end{align} Eqs.(\ref{9}) and (\ref{11}) are the normal ordering form of the SECST. From these it is convenient to calculate the phase space distributions, such as Q-function, P-representation and Wigner function.
\subsection{The matrix elements $\left\langle N\right\vert \rho\left\vert M\right\rangle $}
Now we calculate the matrix elements of $\rho$ in Eq.(\ref{2}) between two number states $\left\langle N\right\vert $ and $\left\vert M\right\rangle ,$ i.e., $\left\langle N\right\vert \rho\left\vert M\right\rangle .$ Employing the overcompleteness of coherent states, one can express the matrix elements $\left\langle N\right\vert \rho\left\vert M\right\rangle $ as \begin{equation} \left\langle N\right\vert \rho\left\vert M\right\rangle =\int\frac{d^{2}\beta d^{2}\gamma}{\pi^{2}}\left\langle N\right. \left\vert \beta\right\rangle \left\langle \beta\right\vert \rho\left\vert \gamma\right\rangle \left\langle \gamma\right. \left\vert M\right\rangle ,\label{12} \end{equation} where the overlap between the coherent state and the number state is given by \begin{equation} \left\langle \gamma\right. \left\vert M\right\rangle =\frac{1}{\sqrt{M!} }e^{-\left\vert \gamma\right\vert ^{2}/2}\gamma^{\ast M},\label{13} \end{equation} and the matrix elements $\left\langle \beta\right\vert \rho\left\vert \gamma\right\rangle $ can be obtained from Eq.(\ref{9}) due to $\rho^{\prime} $s normal ordering form, \begin{align} \left\langle \beta\right\vert \rho\left\vert \gamma\right\rangle & =\left( -1\right) ^{m}\frac{C_{\alpha,m}}{\bar{n}_{t}}\lambda_{t}^{2m+2} e^{-\left\vert \alpha\right\vert ^{2}/(\bar{n}_{t}+1)}\nonumber\\ & \times\frac{\partial^{2m}}{\partial\tau^{m}\partial\tau^{\prime m}} \exp\left[ -\tau\tau^{\prime}+\lambda_{t}\tau\alpha^{\ast}-\lambda_{t} \alpha\tau^{\prime}\right] \nonumber\\ & \times\exp\left\{ \left( \frac{\alpha+\bar{n}_{t}\allowbreak\gamma} {\bar{n}_{t}+1}+\frac{\lambda_{t}\tau}{\bar{n}_{t}}\right) \beta^{\ast} -\frac{1}{2}\left\vert \beta\right\vert ^{2}\right. \nonumber\\ & -\left. 
\frac{1}{2}\left\vert \gamma\right\vert ^{2}+\left( \frac {\alpha^{\ast}}{\bar{n}_{t}+1}-\frac{\lambda_{t}\tau^{\prime}}{\bar{n}_{t} }\right) \gamma\right\} _{\tau=\tau^{\prime}=0},\label{14} \end{align} where we have used the generating function of the two-variable Hermite polynomial $H_{m,n},$ \begin{equation} H_{m,n}\left( x,y\right) =\left. \frac{\partial^{m+n}}{\partial t^{m}\partial t^{\prime n}}\exp\left[ -tt^{\prime}+tx+t^{\prime}y\right] \right\vert _{t=t^{\prime}=0}.\label{15} \end{equation} When $M=N,$ $\left\langle N\right\vert \rho\left\vert N\right\rangle $ is just the photon number distribution of the SECST. Then, combining Eqs.(\ref{14}), (\ref{12}) and (\ref{13}), after a lengthy but straightforward calculation, one can get the matrix elements $\left\langle N\right\vert \rho\left\vert M\right\rangle $ (without loss of generality, let $M\geqslant N$) \begin{align} \left\langle N\right\vert \rho\left\vert M\right\rangle & =\frac{\left( -1\right) ^{N}}{\sqrt{M!N!}}\frac{\lambda_{t}^{2N}C_{\alpha,m}}{\bar{n} _{t}+1}e^{-\left\vert \alpha\right\vert ^{2}}\frac{\partial^{2m}} {\partial\upsilon^{m}\partial\upsilon^{\prime m}}\nonumber\\ & \times\left\{ e^{\lambda_{t}^{2}\upsilon\upsilon^{\prime}}H_{M,N}\left( \frac{\upsilon^{\prime}}{\bar{n}_{t}+1},-\frac{\upsilon}{\bar{n}_{t}}\right) \right\} _{\upsilon=\alpha,\upsilon^{\prime}=\alpha^{\ast}},\label{16} \end{align} where we have used the integral formula \cite{17} \begin{equation} \int\frac{d^{2}\beta}{\pi}f\left( \beta^{\ast}\right) \exp\left\{ -\left\vert \beta\right\vert ^{2}+\tau\beta\right\} =f\left( \tau\right) ,\label{17} \end{equation} and another expression of the two-variable Hermite polynomial $H_{m,n},$ \begin{equation} H_{m,n}\left( \xi,\kappa\right) =\sum_{l=0}^{\min(m,n)}\frac{m!n!\left( -1\right) ^{l}\xi^{m-l}\kappa^{n-l}}{l!\left( n-l\right) !\left( m-l\right) !}.\label{18} \end{equation} In particular, when $m=0$, noticing $M\geqslant N$ and Eq.(\ref{10}), Eq.(\ref{16}) reduces to \begin{align} 
\left\langle N\right\vert \rho\left\vert M\right\rangle & =\sqrt{\frac {N!}{M!}}\alpha^{\ast M-N}\frac{\left( \bar{n}_{t}\right) ^{N}}{\left( \bar{n}_{t}+1\right) ^{M+1}}\nonumber\\ & \times e^{-\left\vert \alpha\right\vert ^{2}/(\bar{n}_{t}+1)}L_{N} ^{M-N}\left[ -\frac{\left\vert \alpha\right\vert ^{2}}{\bar{n}_{t}\left( \bar{n}_{t}+1\right) }\right] ,\label{19} \end{align} which is just the Glauber-Lachs formula \cite{9} when $\bar{n}_{t} =(e^{\beta\omega}-1)^{-1}$. For $\alpha=0,$ corresponding to the case of the superposition of a number state with thermal light, using Eq.(\ref{18}), Eq.(\ref{16}) becomes \begin{equation} \left\langle N\right\vert \rho\left\vert M\right\rangle =\delta_{M,N} P_{N},\label{e20} \end{equation} where ($k_{0}=\max[0,m-N]$) \begin{equation} P_{N}=\frac{m!N!}{\bar{n}_{t}+1}\sum_{k=k_{0}}^{m}\frac{1}{k!}\frac{\left( \frac{\bar{n}_{t}}{\bar{n}_{t}+1}\right) ^{k+N}\left[ \bar{n}_{t}\left( \bar{n}_{t}+1\right) \right] ^{k-m}}{\left( k+N-m\right) !\left[ (m-k)!\right] ^{2}}.\label{e21} \end{equation} Eq.(\ref{e20}) is just the result of Ref. \cite{13}.
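Eq.(\ref{e21}) can be checked numerically: for $\alpha=0$ the SECST is a Fock state $\left\vert m\right\rangle$ smeared by thermal noise, so $P_{N}$ must sum to one and carry the mean photon number $m+\bar{n}_{t}$ that follows from Eq.(\ref{29}) at $\alpha=0$. A sketch in Python (function names are ours):

```python
import math

def p_n(N, m, nt):
    """P_N of Eq. (e21): photon-number distribution of the alpha = 0 SECST,
    i.e. a Fock state |m> smeared by thermal noise of mean occupation nt."""
    s = 0.0
    for k in range(max(0, m - N), m + 1):
        s += ((nt / (nt + 1))**(k + N) * (nt * (nt + 1))**(k - m)
              / (math.factorial(k) * math.factorial(k + N - m)
                 * math.factorial(m - k)**2))
    return math.factorial(m) * math.factorial(N) / (nt + 1) * s

m, nt = 2, 0.5
probs = [p_n(N, m, nt) for N in range(120)]
assert abs(sum(probs) - 1.0) < 1e-10          # normalization
mean = sum(N * p for N, p in enumerate(probs))
assert abs(mean - (m + nt)) < 1e-8            # <n> = m + n_t at alpha = 0
```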
\section{Sub-Poissonian photon statistics}
To see clearly the photon statistics properties of the SECST, in this section we turn our attention to the variance of the photon number operator $\left\langle \left( \Delta\hat{n}\right) ^{2}\right\rangle =\left\langle \hat{n}^{2}\right\rangle -\left\langle \hat{n}\right\rangle ^{2}.$ In particular, we will examine the evolution of the Mandel $Q$ parameter defined as \begin{align} Q & =\frac{\left\langle \left( a^{\dag}a\right) ^{2}\right\rangle }{\left\langle a^{\dag}a\right\rangle }-\left\langle a^{\dag}a\right\rangle \nonumber\\ & =\frac{\left\langle a^{2}a^{\dag2}\right\rangle -\left\langle aa^{\dag }\right\rangle ^{2}-\left\langle aa^{\dag}\right\rangle }{\left\langle aa^{\dag}\right\rangle -1}, \label{26} \end{align} which measures the deviation of the variance of the photon number distribution of the field state under consideration from the Poissonian distribution of the coherent state. $Q=1,Q>1$ and $Q<1$ correspond to Poissonian photon statistics (PPS), super-PPS and sub-PPS, respectively.
In order to calculate the average value in Eq.(\ref{26}), we first calculate the value of $\left\langle \alpha\right\vert a^{n}a^{\dag m}\left\vert \alpha\right\rangle .$ In fact, using \begin{equation} \left\langle \alpha\right\vert a^{m+n}a^{\dag m+n}\left\vert \alpha \right\rangle =\left( m+n\right) !L_{m+n}(-\left\vert \alpha\right\vert ^{2}) \label{27} \end{equation} and \begin{equation} \int\frac{d^{2}z}{\pi}z^{n}z^{\ast m}P\left( z\right) =\bar{n}_{t} ^{m}m!\delta_{m,n}, \label{28} \end{equation} we can evaluate (for writing's convenience, let $L_{m}$ denote $L_{m} (-\left\vert \alpha\right\vert ^{2})$) \begin{equation} \left\langle a^{\dag}a\right\rangle =\frac{1+m}{L_{m}}L_{m+1}+\bar{n}_{t}-1, \label{29} \end{equation} and \begin{equation} \left\langle a^{2}a^{\dag2}\right\rangle =2\bar{n}_{t}^{2}+\frac{m+1}{L_{m} }\left[ 4\bar{n}_{t}L_{m+1}+\left( m+2\right) L_{m+2}\right] . \label{30} \end{equation} Substituting Eqs.(\ref{29}) and (\ref{30}) into (\ref{26}) leads to \begin{align} Q & =\frac{\bar{n}_{t}\left( \bar{n}_{t}-1\right) L_{m}+\left( 2\bar {n}_{t}-1\right) \left( m+1\right) L_{m+1}}{\left( 1+m\right) L_{m+1}+\left( \bar{n}_{t}-1\right) L_{m}}\nonumber\\ & +\frac{(m+1)(m+2)L_{m+2}-\frac{\left( m+1\right) ^{2}}{L_{m}}L_{m+1}^{2} }{\left( 1+m\right) L_{m+1}+\left( \bar{n}_{t}-1\right) L_{m}}. \label{31} \end{align} At the zero-temperature limit $(\bar{n}_{t}\rightarrow0)$, Eq.(\ref{31}) just reduces to Eq.(2.20) in Ref.\cite{1}. \begin{figure}
\caption{{\protect\small (Color online) The Mandel $Q$ parameter as a function of $(\bar{n}_{t},\left\vert \alpha\right\vert )$ for different values of $m$.}}
\label{Fig1}
\end{figure}
In Fig.1, we display the parameter $Q\left( \bar{n}_{t},\left\vert \alpha \right\vert \right) $ as a function of $\left( \bar{n}_{t},\left\vert \alpha\right\vert \right) $ for different values of $m.$ From Fig.1, we see that, for the excitation photon number $m=0$ (see Fig.1 (a)), $Q\left( \bar{n}_{t}=0,\left\vert \alpha\right\vert \right) =1$, corresponding to a coherent state (PPS), while $Q\left( \bar{n}_{t}\neq0,\left\vert \alpha\right\vert \right) >1$, i.e., the SECST field exhibits a significant amount of super-PPS due to the presence of $\bar{n}_{t}$. From Figs.1 (b) and (c), we see that, when $m\neq0,$ the SECST field presents sub-PPS when $\bar{n}_{t}$ is less than a threshold value for a given $\left\vert \alpha\right\vert$; the threshold value increases as $m$ increases. For example, for $\left\vert \alpha\right\vert =0,$ the threshold values are about 0.414 and 0.481 for $m=1$ and $m=6$, respectively.
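The quoted thresholds can be reproduced directly from Eq.(\ref{31}). The sketch below (Python; the bisection helper is ours) evaluates $Q$ at $\alpha=0$ and locates the crossing $Q=1$; the closed forms $\sqrt{2}-1\approx0.414$ and $\sqrt{42}-6\approx0.481$ follow from Eq.(\ref{31}) at $\alpha=0$ for $m=1$ and $m=6$:

```python
import math

def laguerre(m, x):
    # L_m(x) via the three-term recurrence.
    if m == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - x
    for k in range(1, m):
        prev, cur = cur, ((2 * k + 1 - x) * cur - k * prev) / (k + 1)
    return cur

def mandel_q(nt, alpha2, m):
    """Mandel Q of the SECST, Eq. (31), with alpha2 = |alpha|^2."""
    lm = laguerre(m, -alpha2)
    lm1 = laguerre(m + 1, -alpha2)
    lm2 = laguerre(m + 2, -alpha2)
    den = (1 + m) * lm1 + (nt - 1) * lm
    num = (nt * (nt - 1) * lm + (2 * nt - 1) * (m + 1) * lm1
           + (m + 1) * (m + 2) * lm2 - (m + 1)**2 * lm1**2 / lm)
    return num / den

def threshold(m):
    """Bisect for the thermal occupation where Q crosses 1 at alpha = 0."""
    lo, hi = 1e-6, 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mandel_q(mid, 0.0, m) < 1 else (lo, mid)
    return lo

assert abs(threshold(1) - (math.sqrt(2) - 1)) < 1e-9   # about 0.414
assert abs(threshold(6) - (math.sqrt(42) - 6)) < 1e-9  # about 0.481
```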
\section{Information transmitted by the SECST beam}
According to the negentropy principle of Brillouin \cite{18}, the maximum information $I$ transmitted by a beam is \begin{equation} I=S_{\max}-S_{act}, \label{e32} \end{equation} in which $S_{\max}$ and $S_{act}$ represent the maximum entropy and the actual entropy, respectively, possessed by the quantum mechanical system described by a density matrix $\rho$. Here the maximum information $I$ is the ideal amount transmitted through an ideal optical communication system.
For the SECST system, the actual entropy is \begin{equation} S_{act}=-\mathtt{Tr}\left( \rho\ln\rho\right) =-\sum_{N}\sigma_{N}\ln \sigma_{N},\label{e33} \end{equation} where $\rho=\sum_{N}\sigma_{N}\left\vert N\right\rangle \left\langle N\right\vert ,$ and $\sigma_{N}=\left\langle N\right\vert \rho\left\vert N\right\rangle .$ $\sigma_{N}$ can be obtained from Eq.(\ref{16}), i.e., \begin{align} \sigma_{N} & =\frac{\bar{n}_{t}^{N}e^{-\left\vert \alpha\right\vert ^{2} }C_{\alpha,m}}{\left( \bar{n}_{t}+1\right) ^{N+1}}\frac{\partial^{2m} }{\partial\upsilon^{m}\partial\upsilon^{\prime m}}\nonumber\\ & \times\left\{ e^{\lambda_{t}^{2}\upsilon\upsilon^{\prime}}L_{N}\left( \frac{-\upsilon\upsilon^{\prime}}{\bar{n}_{t}\left( \bar{n}_{t}+1\right) }\right) \right\} _{\upsilon=\alpha,\upsilon^{\prime}=\alpha^{\ast} },\label{e34} \end{align} which is independent of the phase of $\alpha.$ On the other hand, for a system in thermal equilibrium, described by the density matrix $\rho_{th}$, with mean photon number $\bar{n}_{t}$, its entropy is \begin{equation} S=-\sum_{N}P_{N}\ln P_{N}=\ln(1+\bar{n}_{t})+\bar{n}_{t}\ln\frac{\bar{n} _{t}+1}{\bar{n}_{t}},\label{e35} \end{equation} where $P_{N}=\bar{n}_{t}^{N}/\left( \bar{n}_{t}+1\right) ^{N+1}$ is obtained from Eq.(\ref{19}) under the condition $m=0,\alpha=0.$ Note that the maximum entropy of the system is equal to the entropy of a system in thermal equilibrium with an equal mean number of photons. The mean photon number of the SECST is given by Eq.(\ref{29}). Therefore, using Eq.(\ref{e35}), we have \begin{align} S_{\max} & =\ln\left( \left( 1+m\right) \frac{L_{m+1}}{L_{m}}+\bar{n} _{t}\right) \nonumber\\ & +\left( \left( 1+m\right) \frac{L_{m+1}}{L_{m}}+\bar{n}_{t}-1\right) \nonumber\\ & \times\ln\left( \frac{\left( 1+m\right) L_{m+1}+\bar{n}_{t}L_{m} }{\left( 1+m\right) L_{m+1}+\left( \bar{n}_{t}-1\right) L_{m}}\right) .\label{e36} \end{align}
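The closed form of Eq.(\ref{e35}) can be verified against the defining sum over the thermal distribution. A minimal sketch in Python:

```python
import math

def entropy_closed(nt):
    """Thermal entropy, closed form of Eq. (e35)."""
    return math.log(1 + nt) + nt * math.log((nt + 1) / nt)

def entropy_sum(nt, n_max=2000):
    """Direct evaluation of -sum_N P_N ln P_N with P_N = nt^N/(nt+1)^(N+1)."""
    r = nt / (nt + 1)
    s = 0.0
    for n in range(n_max):
        p = r**n / (nt + 1)
        if p == 0.0:           # geometric tail underflowed; remainder negligible
            break
        s -= p * math.log(p)
    return s

for nt in (0.3, 1.0, 5.0):
    assert abs(entropy_closed(nt) - entropy_sum(nt)) < 1e-8
```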
\begin{figure}
\caption{{\protect\small (Color online) The maximum information }$I\left( \bar{n}_{t},\left\vert \alpha\right\vert \right) ${\protect\small \ as a function of }$\left( \bar{n}_{t},\left\vert \alpha\right\vert \right) ${\protect\small \ for different values of }$m${\protect\small : (a) }$m=0${\protect\small , (b) }$m=1$ {\protect\small (truncating the infinite sum at }$N_{\max}=70${\protect\small ).}}
\label{Fig7}
\label{Fig8}
\end{figure}\begin{figure}
\caption{{\protect\small (Color online) The maximum information }$I\left( \bar{n}_{t},\left\vert \alpha\right\vert =1\right) ${\protect\small \ as a function of }$\bar{n}_{t}${\protect\small \ for different values of }$m${\protect\small : (a) }$m=0${\protect\small , (b) }$m=1${\protect\small , (c) }$m=2$ {\protect\small (truncating the infinite sum at }$N_{\max}=70${\protect\small ).}}
\label{Fig9}
\end{figure}
From Eqs.(\ref{e32}), (\ref{e33}) and (\ref{e36}), we can calculate the maximum information transmitted by the SECST beam. In Fig. 2, the maximum information $I\left( \bar{n}_{t},\left\vert \alpha\right\vert \right) $ is plotted as a function of $\left( \bar{n}_{t},\left\vert \alpha\right\vert \right) $ for several values of $m$ (truncating the infinite sum at $N_{\max}=70$). From Fig. 2, we can see that, for a given $\bar{n}_{t},$ $I\left( \bar{n}_{t},\left\vert \alpha\right\vert \right) $ increases as $\left\vert \alpha\right\vert $ increases; for a given $\left\vert \alpha\right\vert ,$ $I\left( \bar{n}_{t},\left\vert \alpha\right\vert \right) $ in general increases as $\bar{n}_{t}$ increases. To see clearly the effect of the parameter $m$ on $I\left( \bar{n}_{t},\left\vert \alpha\right\vert \right) ,$ we present a plot in Fig. 3, from which it is obvious that $I\left( \bar{n}_{t},\left\vert \alpha\right\vert \right) $ becomes larger in the presence of $m$ and increases as $m$ increases. In other words, the maximum information transmitted by the SECST beam is larger than that by the SCST ($m=0$): the ECS channel can carry more information than the coherent-state channel. In Ref. \cite{13}, Vourdas pointed out that coherent signals (of known phase) can transmit more information than number-eigenstate signals. Thus, among these three beams, the SECST beam can transmit the most information.
\section{The Wigner function of the SECST}
\subsection{The Wigner function}
The Wigner function (WF) plays an important role in quantum optics, especially since the WF can be reconstructed from measurements \cite{19,20}. The WF is a powerful tool to investigate the nonclassicality of optical fields. The presence of negativity in the WF of an optical field is a signature of its nonclassicality and is often used to describe the decoherence of quantum states. In this section, using the normally ordered form of the SECST, we evaluate its WF. For a single-mode system, the WF is given by \cite{21} \begin{equation} W\left( \gamma,\gamma^{\ast}\right) =\frac{e^{2\left\vert \gamma\right\vert ^{2}}}{\pi}\int\frac{d^{2}\beta}{\pi}\left\langle -\beta\right\vert \rho\left\vert \beta\right\rangle e^{2\left( \beta^{\ast}\gamma-\beta \gamma^{\ast}\right) },\label{20} \end{equation} where $\left\vert \beta\right\rangle $ is the coherent state and $\gamma =x+iy$. From Eq.(\ref{20}) it is easy to see that once the normally ordered form of $\rho$ is known, we can conveniently obtain the WF of $\rho.$
On substituting Eq.(\ref{9}) into Eq.(\ref{20}) we obtain the WF of the SECST \begin{align} W\left( \gamma,\gamma^{\ast}\right) & =\frac{\left( \lambda_{t}^{2} A_{1}^{2}\right) ^{m}C_{\alpha,m}}{\pi\left( 2\bar{n}_{t}+1\right) } \exp\left\{ -\frac{2\left\vert \alpha-\gamma\right\vert ^{2}}{2\bar{n}_{t} +1}\right\} \nonumber\\ & \times\left( -1\right) ^{m}H_{m,m}\left( \frac{A_{2}^{\ast}}{A_{1} },-\frac{A_{2}}{A_{1}}\right) \nonumber\\ & =\frac{\left( \lambda_{t}^{2}A_{1}^{2}\right) ^{m}\exp\left\{ -\frac{2\left\vert \alpha-\gamma\right\vert ^{2}}{2\bar{n}_{t}+1}\right\} }{\pi\left( 2\bar{n}_{t}+1\right) L_{m}\left( -\left\vert \alpha\right\vert ^{2}\right) }L_{m}\left( -\left\vert A_{2}\right\vert ^{2}/A_{1}^{2}\right) , \label{21} \end{align} where we have set \begin{align} A_{1}^{2} & =1-\frac{1}{\left( 2\bar{n}_{t}+1\right) \bar{n}_{t} },\nonumber\\ A_{2} & =\frac{\lambda_{t}\left( \bar{n}_{t}+1\right) }{\left( 2\bar {n}_{t}+1\right) \bar{n}_{t}}\left( 2\bar{n}_{t}\alpha-\alpha+2\gamma \right) . \label{22} \end{align}
\begin{figure}
\caption{{\protect\small (Color online) The evolution of the Wigner function of the SECST with }$\alpha=0.2+0.2i$ {\protect\small for several values of }$m$ {\protect\small and }$\bar{n}_{t}.$}
\label{Fig4}
\label{Fig5}
\label{Fig6}
\end{figure}Noticing that $L_{m}(-\left\vert \alpha\right\vert ^{2})>0$ and that $L_{m}[-\left\vert A_{2}\right\vert ^{2}/A_{1}^{2}]>0$ when $1-\frac {1}{\left( 2\bar{n}_{t}+1\right) \bar{n}_{t}}>0,$ we see that the WF of the SECST is always positive under the condition $\bar{n}_{t}>1/2$. In particular, when $m=0$, Eq.(\ref{21}) becomes \begin{equation} W\left( \gamma,\gamma^{\ast}\right) =\frac{1}{\pi\left( 2\bar{n} _{t}+1\right) }\exp\left\{ -\frac{2\left\vert \alpha-\gamma\right\vert ^{2} }{2\bar{n}_{t}+1}\right\} ,\label{23} \end{equation} which corresponds to the thermal state with mean photon number $\bar{n}_{t}$. For $\alpha=0,$ $A_{2}\rightarrow2\gamma\lambda_{t}\left( \bar{n} _{t}+1\right) /[\left( 2\bar{n}_{t}+1\right) \bar{n}_{t}],$ $\left\vert A_{2}\right\vert ^{2}/A_{1}^{2}\rightarrow4\left\vert \gamma\right\vert ^{2}\left( \bar{n}_{t}+1\right) /\{\left( 2\bar{n}_{t}+1\right) \left[ \left( 2\bar{n}_{t}+1\right) \bar{n}_{t}-1\right] \}\equiv\xi$, and Eq. (\ref{21}) yields \begin{equation} W\left( \gamma,\gamma^{\ast}\right) =\frac{\left[ \left( 2\bar{n} _{t}+1\right) \bar{n}_{t}-1\right] ^{m}}{\pi\left( 2\bar{n}_{t}+1\right) ^{m+1}\left( \bar{n}_{t}+1\right) ^{m}}e^{-\frac{2\left\vert \gamma \right\vert ^{2}}{2\bar{n}_{t}+1}}L_{m}\left( -\xi\right) .\label{24} \end{equation} In the zero-temperature limit, $T\rightarrow0,\bar{n}_{t}\rightarrow0,$ Eq.(\ref{23}) reduces to $\frac{1}{\pi}\exp(-2\left\vert \alpha -\gamma\right\vert ^{2}),$ i.e., the WF of the coherent state (a Gaussian form), which can be seen from Eq.(\ref{2}) yielding $\rho=\left\vert \alpha \right\rangle \left\langle \alpha\right\vert $ under the condition $m=0$; while Eq.(\ref{24}) becomes $\frac{1}{\pi}(-1)^{m}e^{-2\left\vert \gamma\right\vert ^{2}}L_{m}(4\left\vert \gamma\right\vert ^{2}),$ corresponding to the WF of the number state.
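The sign behavior of Eq.~(\ref{24}) can be probed numerically. The following small Python sketch (our own illustration; the function names are ours) evaluates Eq.~(\ref{24}) at the origin of phase space: consistent with the discussion above, for $m=1$ the WF is negative when $\bar{n}_{t}<1/2$ and positive when $\bar{n}_{t}>1/2$, and the $\bar{n}_{t}\rightarrow0$ limit reproduces $-1/\pi$, the number-state value.

```python
import math

def laguerre(m, x):
    # Laguerre polynomial L_m(x) via the standard three-term recurrence
    if m == 0:
        return 1.0
    L0, L1 = 1.0, 1.0 - x
    for k in range(1, m):
        L0, L1 = L1, ((2 * k + 1 - x) * L1 - k * L0) / (k + 1)
    return L1

def wigner_alpha0(gamma, m, n_bar):
    # Eq.(24): WF of the SECST for alpha = 0 (gamma is a complex phase-space point)
    g2 = abs(gamma) ** 2
    a = (2 * n_bar + 1) * n_bar - 1          # sign of 'a' controls negativity
    xi = 4 * g2 * (n_bar + 1) / ((2 * n_bar + 1) * a)
    pref = a ** m / (math.pi * (2 * n_bar + 1) ** (m + 1) * (n_bar + 1) ** m)
    return pref * math.exp(-2 * g2 / (2 * n_bar + 1)) * laguerre(m, -xi)

assert wigner_alpha0(0j, 1, 0.1) < 0         # below threshold: negative at origin
assert wigner_alpha0(0j, 1, 1.0) > 0         # above threshold: positive
```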
Using Eq.(\ref{21}), the WFs of the SECST are depicted in Fig. 4 in phase space with $\alpha=0.2+0.2i$ for several values of $m$ and $\bar{n}_{t}.$ It is easy to see that the negative region of the WF gradually disappears as $m$ and $\bar{n}_{t}$ increase.
\subsection{The marginal distributions of the SECST}
We now find the probability distributions of position and momentum (the marginal distributions) by integrating the WF over the variable $y$ or the variable $x$, respectively. Using Eqs.(\ref{21}) and (\ref{22}) we can derive (denoting $\gamma=x+iy,$ $\alpha=q+ip$) \begin{align} \mathrm{P}\left( x,\bar{n}_{t}\right) & \equiv\int W\left( x,y\right) dy\nonumber\\ & =\frac{\left( \lambda_{t}^{2}A_{1}^{2}\right) ^{m}C_{\alpha,m}} {\sqrt{2\pi\left( 2\bar{n}_{t}+1\right) }}\frac{\left[ m!\right] ^{2}e^{-\frac{2\left( q-x\right) ^{2}}{2\bar{n}_{t}+1}}}{\left( 2\bar {n}_{t}-1\right) ^{m}}\nonumber\\ & \times\sum_{k=0}^{m}\frac{2^{2k-m}\bar{n}_{t}^{k}}{k!\left[ (m-k)!\right] ^{2}}\left\vert H_{m-k}\left( E_{1}\right) \right\vert ^{2},\label{25} \end{align} where $H_{m}\left( x\right) $ is the single-variable Hermite polynomial and $E_{1}=\left[ \left( 2\bar{n}_{t}-1\right) \alpha+2x+2ip\right] /\sqrt{2\left( 2\bar{n}_{t}+1\right) }$. Eq.(\ref{25}) is the marginal distribution of the WF of the SECST in the \textquotedblleft$x$-direction\textquotedblright.
On the other hand, performing the integration over $dx$ yields the other marginal distribution in \textquotedblleft$y$-direction\textquotedblright, \begin{align} \mathrm{P}\left( y,\bar{n}_{t}\right) & =\frac{\left( \lambda_{t} ^{2}A_{1}^{2}\right) ^{m}C_{\alpha,m}}{\sqrt{2\pi\left( 2\bar{n} _{t}+1\right) }}\frac{\left[ m!\right] ^{2}e^{-\frac{2(p-y)^{2}}{2\bar {n}_{t}+1}}}{\left( 2\bar{n}_{t}-1\right) ^{m}}\nonumber\\ & \times\sum_{k=0}^{m}\frac{2^{2k-m}\bar{n}_{t}^{k}}{k!\left[ (m-k)!\right] ^{2}}\left\vert H_{m-k}\left( E_{2}\right) \right\vert ^{2},\label{e26} \end{align} where $E_{2}=i\left( 2\bar{n}_{t}\alpha-\alpha+2q+2iy\right) /[\sqrt {2\left( 2\bar{n}_{t}+1\right) }]$. As expected, the two marginal distributions are both real.
\section{Conclusions}
In summary, we have investigated the photon statistical properties of the SECST, described by the density matrix $\rho$ (\ref{2}). We have calculated the matrix elements $\left\langle N\right\vert \rho\left\vert M\right\rangle $ in Fock space and the Mandel $Q$ parameter. It is found that the SECST field exhibits a significant amount of super-PPS due to the presence of thermal noise ($\bar{n}_{t}$) for excitation photon number $m=0$, and that, for $m\neq0$ and a given $\left\vert \alpha\right\vert ,$ the SECST field presents sub-PPS when $\bar{n}_{t}$ is less than a threshold value. In addition, the threshold value increases as $m$ increases. We have presented the maximum information (channel capacity) transmitted by the SECST beam. It is shown that the maximum information transmitted increases as $m$ increases. This implies that among the coherent signals, the number-eigenstate signals and the ECS in thermal light, the last can transmit the most information. Further, as one of the photon statistical properties, the Wigner function and the marginal distributions of the SECST have also been derived, from which one can clearly see the nonclassicality. The negative region cannot be present when the average photon number $\bar{n}_{t}$ of the thermal noise exceeds $1/2.$ The marginal distributions are related to the Hermite polynomials.
\begin{acknowledgments} This work is supported by the National Natural Science Foundation of China (Grant No 10775097). L.-Y. Hu's email address is hlyun@sjtu.edu.cn or hlyun2008@126.com. \end{acknowledgments}
\end{document}
\begin{document}
\title{The Sufficiency of Off-Policyness and Soft Clipping: \
PPO is still Insufficient according to an Off-Policy Measure}
\begin{abstract}
The popular Proximal Policy Optimization (PPO) algorithm approximates the solution in a clipped policy space. Do better policies exist outside of this space? By using a novel surrogate objective that employs the sigmoid function (which also provides an interesting mechanism for exploration), we found that the answer is ``YES'', and that the better policies are in fact located very far from the clipped space. We show that PPO is insufficient in ``off-policyness'', according to an off-policy metric called DEON. Our algorithm explores in a much larger policy space than PPO, and it maximizes the Conservative Policy Iteration (CPI) objective better than PPO during training. To the best of our knowledge, all current PPO methods have the clipping operation and optimize in the clipped policy space. Our method is the first of this kind, which advances the understanding of CPI optimization and policy gradient methods. Code is available at https://github.com/raincchio/P3O. \end{abstract}
\section{Introduction} Real-world problems like medication dosing and autonomous driving pose a great challenge for Artificial Intelligence (AI), which is expected to improve human life and safety. Applications like these require significant interaction with the environment for learning algorithms to be effective. Humans can learn from others by observing their experience, quickly picking up new skills even without first-hand exposure. Subsequently, when there is a chance to practice, the skills obtained from previous or others' experience can be quickly adapted and improved. But it is clear that we are still far from achieving this remarkable learning ability in AI.
In our context, we consider problems where the environment state cannot be reset, which is also the case in real life. In such situations, we can only sample a limited number of the possible trajectories, which can easily cause learning to fail.
Off-policy learning is one promising paradigm to address this challenge: it provides an effective discipline for learning by sampling potential trajectories starting from any state. This means we can evaluate a target policy using a behavior policy that generates the experience \citep{offpolicy_doina,gtd}. Moreover, the paradigm is general enough to cover obtaining skills from other sources. With off-policy learning, an agent can reuse experience from itself or even from other agents, where the samples are collected with methods that differ from real-time on-policy interaction. Off-policy learning holds significant promise, but it is tricky in practice. The mismatch between the distribution of the behavior policy and that of the target policy poses a big stability challenge for the learning process. Even for policy evaluation, following the temporal difference update easily diverges under linear function approximation \citep{bert_countertd,boyan_safely,gordon_stable,tsi_97}.
\citet{doina_trace_2000} were the first to use importance sampling for off-policy learning. They used an online-updated product of importance sampling ratios to correct the distribution of the behavior policy to that of the target policy, and developed an algorithm that gives a consistent estimate for off-policy evaluation in the lookup-table setting.\footnote{There is another class of off-policy learning methods, which does not require importance sampling but instead stabilizes the underlying ordinary differential equation, e.g., see \citep{gtd,tdc}.} However, importance sampling suffers from high variances, especially when the behavior and target policies are very different. For policy gradient methods, the situation is no different: the importance sampling ratios, widely adopted in recent popular policy gradient algorithms, can be problematic too.
In this context, the Conservative Policy Iteration (CPI) objective~\citep{kakade2002approximately} is a key element in the recent spectrum of popular algorithms, including Trust Region Policy Optimization (TRPO) \citep{schulman2015trust}, Proximal Policy Optimization (PPO) \citep{schulman2017proximal} and many others. CPI is based on the importance sampling ratio between the new and the old policy, which can cause high variances and lead to poor gradient estimation and unstable performance for policy gradient methods. TRPO avoids this problem by using a fixed threshold for the policy change. PPO clips the importance sampling ratio so that it is not too far from one, and the final objective is the minimum of the clipped and un-clipped objectives.
A more general topic than off-policy learning is sample-efficient learning. In deep reinforcement learning, sample efficiency is characterized by the following:
1) Efficient policy representation. Ensuring that the updated policy is close to the old policy is a good practice, although there is some approximation error \citep{tomar2021mirror}. The CPI objective with clipping used by PPO~\citep{schulman2017proximal}, TPPO~\citep{wang2020truly}, and TR-PPO~\citep{wang2019trust} ensures we consider only new policies that are not too far from the old policy. \citet{sun2022you} showed that a ratio regularizer can have a similar effect, and proposed a policy optimization method based on early stopping. 2) Convex optimization. \citet{tomar2021mirror} simplified the problem of maximizing the trajectory's return using convex optimization solvers that minimize the Bregman divergence. 3) Second-order methods that take advantage of the Hessian matrix. For example, identification of a trust region is done by computing an approximation to the second-order gradient, as in TRPO~\citep{schulman2015trust} and ACKTR~\citep{wu2017scalable}. However, these methods usually have a high computation cost. 4) Off-policy learning. Besides the CPI objective-based methods, \citet{wang2016sample} truncated the importance sampling ratio with bias correction and used a stochastic ``dueling network'' to help achieve stable learning. Furthermore, \citet{haarnoja2018soft} proposed an off-policy formulation that reuses previously collected data for efficiency. Other popular off-policy learning algorithms include soft actor-critic \citep{haarnoja2018soft} and TD3 \citep{fujimoto2018addressing}.
In this paper, we aim to improve on the policy representation induced by clipping methods, providing better control of the variance caused by importance sampling. Our goal is achieved by applying a preconditioning technique to the CPI objective, aided by a regularization loss on the policy change. Note that preconditioning is usually applied to iterative methods such as linear system solvers \citep{saad03:IMS,yao2008preconditioned}. Recently, there has been research on applying preconditioning in deep learning to accelerate the learning process, e.g., see ~\citep{li2016preconditioned,sappl2019deep}. Our work is a new application of preconditioning to control variance in policy gradient estimation. Moreover, our preconditioning technique has an interesting property: it encourages exploration when the policy change is small and switches to exploitation when the policy change is large.
\section{Background} Here we review the basis of Markov Decision Processes (MDPs) and recent popular algorithms TRPO and PPO. The key to TRPO and PPO are their objective functions, both of which are based on an approximation to the value function of a new policy.
{\bfseries Markov Decision Processes}. An MDP is defined by $(\mathbb{S},\mathbb{A},\mathcal{P},R,\lambda)$, where $\mathbb{S}$ is the state space, $\mathbb{A}$ is the action space, and $\mathcal{P}(\cdot|s,a)$ is the transition probability measure over next states given the state $s\in \mathbb{S}$ and action $a\in \mathbb{A}$. Define $R: \mathbb{S}\times \mathbb{A} \to \mathbb{R}$ as the reward function, where $\mathbb{R}$ is the set of real numbers.
$\lambda \in (0,1)$ is the discount factor. Here we consider stochastic policies; denote a stochastic policy by a probability measure $\pi$ applied to a state $s$: $\pi(\cdot|s) \to [0,1]$. At a time step $t$, the agent observes the current state $s_t$ and takes an action $a_t$. The environment provides the agent with the next state $s_{t+1}$ and a scalar reward $R_{t+1}=R(s_t, a_t)$. The main task of the agent is to find an optimal policy that maximizes the expected sum of discounted future rewards: \begin{gather*}
V_\pi(s) = \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty } \lambda^t R_{t+1} \,\Big|\, s_0 = s \Big],\notag
\\
\mbox{where $a_t \sim \pi(\cdot|s_t)$ and $s_{t+1} \sim \mathcal{P}(\cdot|s_t,a_t)$ for all $t\ge 0$. } \end{gather*}
The state-action value function for the policy is defined similarly, except the initial action (at $t=0$) is not necessarily chosen according to the policy: \begin{gather*}
Q_\pi(s, a) = \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty } \lambda^t R_{t+1} \,\Big|\, s_0 = s, a_0 = a \Big],\\ \mbox{where $a_t \sim \pi(\cdot|s_t)$ for $t\ge 1$.} \end{gather*} We will use $A_{\pi}$ to denote {\em the advantage function}, which can be used to determine the advantage of an action $a$ at a state $s$ by
$A_{\pi}(s, a)=Q_{\pi}(s, a) - V_{\pi}(s)$. Note that the advantage is zero in expectation when $a \sim \pi(\cdot|s)$. So this measure computes the advantage of an action with respect to the actions suggested by the current policy $\pi$. In the remainder of our paper, $\hat{A}$ is an approximation to the advantage, which is simply the difference between the state-action value function and the value function, both estimated by a specific algorithm.
Let $\rho_0$ be the initial state distribution. Let $\eta(\pi)$ be the expected discounted reward: \[ \eta(\pi) = \mathbb{E}_{s\sim \rho_0}\Big[V_\pi(s)\Big]. \] Now suppose we are interested in another policy $\tilde{\pi}$. Let $d_{\tilde{\pi}}$ be the stationary distribution of the policy. According to \citet{kakade2002approximately}, the expected return of $\tilde{\pi}$ can be calculated in terms of $\eta(\pi)$ and its advantage over $\pi$ in a straight-forward way: \[
\eta(\Tilde{\pi}) = \eta(\pi) + \sum_s \rho_{\Tilde{\pi}}(s) \sum_a \Tilde{\pi}(a|s) A_\pi(s,a). \] Here $\rho_{\Tilde{\pi}}(s) = \sum_{t=0}^{\infty}\lambda^t \Pr(s_t = s)$, where $s_0 \sim \rho_0$ and the actions are chosen according to $\Tilde{\pi}$; this is just the sum of discounted visitation probabilities. \citet{schulman2015trust} approximated $\eta(\Tilde{\pi})$ by replacing $\rho_{\Tilde{\pi}}(s)$ with $\rho_\pi(s)$ on the right-hand side: \[
\hat{\eta}_\pi(\tilde{\pi}) = \eta(\pi) + \sum_s \rho_\pi(s) \sum_a \Tilde{\pi}(a|s) A_\pi(s,a). \]
Note that, in our algorithm, the new policy will be $\tilde{\pi}$. The benefit of this approximation is that the expected return of the new policy $\tilde{\pi}$ can be approximated based on the previous samples and the old policy $\pi$. Note further, for any parameter value $\theta_0$, because of how $\hat{\eta}$ is defined, we have \begin{align} \hat{\eta}_{\pi_{\theta_0}}(\pi_{\theta_0})&=\eta(\pi_{\theta_0}), \label{eq:value_equal_trpo} \\
\nabla_\theta \hat{\eta}_{\pi_{\theta_0}}(\pi_\theta)|_{\theta=\theta_0}&=\nabla_\theta \eta(\pi_\theta)|_{\theta=\theta_0}.\label{eq:value_equal_trpo_2} \end{align} This means that a small gradient ascent update of $\theta_0$ to improve $ \hat{\eta}_{\pi_{\theta_0}}(\pi_{\theta_0})$ also improves $\eta(\pi_{\theta_0})$.
\paragraph{The TRPO objective.}
TRPO, PPO and our algorithm P3O all aim to improve a policy incrementally by maximizing the advantage of the new policy over the old one, by considering the influence from importance sampling. Sample-based TRPO maximizes the following Conservative Policy Iteration (CPI) objective \begin{equation}
L^{cpi}(\theta) = \hat{\mathbb{E}}_{t}\left[ r_t(\theta) \hat{A}_{\pi_{old}}(s_t, a_t)\right],\notag \end{equation}
where $r_t(\theta) = {\pi_{\theta}(a_t|s_t)}/{\pi_{\theta_{old}}(a_t|s_t)}$, while ensuring that the difference between the new policy and the old policy is smaller than a threshold. Here the operator $\hat{\mathbb{E}}_{t}$ refers to an empirical average over a finite number of samples. Note that the importance sampling ratio $r_t(\theta)$ can be very large; to avoid this problem, TRPO uses a hard threshold for the policy change instead of a regularization because ``it is difficult to choose a single regularization factor that would work for different problems'', according to \citet{schulman2017proximal}. TRPO then uses the trust region method, a second-order method that maximizes the objective function with a quadratic approximation; under that method, the advantage $\hat{A}_{\pi_{old}}$ is replaced by $Q_{\pi_{old}}$.
\paragraph{The PPO objective.} This policy objective function is used to define the PPO algorithm \citep{schulman2017proximal}, which was motivated as a first-order extension of TRPO. The PPO algorithm first samples a number of trajectories using policy $\pi_{old}$, extending each trajectory for $T$ time steps. For each trajectory, the advantage is computed according to \[ \hat{A}_t = \lambda^{T-t} V(s_T) + \sum_{k=t}^{T-1} \lambda^{k-t} r_k -V(s_t). \] Given the advantages and the importance sampling ratios (to re-weigh the advantages), PPO maximizes the following objective: \begin{equation*}
L^{ppo}(\theta) = \hat{\mathbb{E}}_t \min\left\{ r_t(\theta) \hat{A}_t, clip(r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t \right\}. \end{equation*} The role of the $clip$ operator is to make the policy update more on-policy, and the role of the $min$ operator is to keep the objective a lower bound of the unclipped objective. Together, the two composed operators prevent the potential instability caused by importance sampling and maintain performance across different tasks. Note that this is not the first time that importance sampling causes trouble for reinforcement learning. For discrete-action problems, both off-policy evaluation and off-policy control are known to suffer from high variances due to the product of a series of such ratios, each of which can be bigger than expected, especially when the behavior policy and the target policy are dissimilar, e.g., see \citep{offpolicy_doina,gtd}.
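The advantage estimate and the clipped PPO objective above can be written in a few lines of Python. This is our own sketch (not code from the paper): `values` holds the estimates $V(s_0),\dots,V(s_T)$, `lam` is the discount factor $\lambda$, and the function names are ours.

```python
def advantages(rewards, values, lam):
    # A_t = lam^(T-t) * V(s_T) + sum_{k=t}^{T-1} lam^(k-t) * r_k - V(s_t)
    T = len(rewards)
    return [
        lam ** (T - t) * values[T]
        + sum(lam ** (k - t) * rewards[k] for k in range(t, T))
        - values[t]
        for t in range(T)
    ]

def ppo_objective(ratios, advs, eps=0.2):
    # Empirical mean of min(r * A, clip(r, 1 - eps, 1 + eps) * A)
    total = 0.0
    for r, a in zip(ratios, advs):
        r_clip = min(max(r, 1.0 - eps), 1.0 + eps)
        total += min(r * a, r_clip * a)
    return total / len(ratios)
```

For a single sample with $\hat{A}_t=1$ and ratio $r=2$, the objective evaluates to $1.2$ with $\epsilon=0.2$: pushing the ratio further out of the clip range yields no additional gain.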
We note that some works in the literature call PPO an on-policy algorithm. This might be a historical mistake. {\em\bfseries The nature of reinforcement learning is truly off-policy}. For example, the familiar class of algorithms including Q-learning \citep{watkins1992q}, experience replay \citep{lin1992self}, DQN \citep{mnih2015human}, DDPG \citep{lillicrap2015continuous}, distributional RL such as C51 \citep{bellemare2017distributional}, QR-DQN \citep{dabney2018distributional} and DLTV \citep{dltv}, Horde \citep{sutton2011horde}, Unreal \citep{jaderberg2016reinforcement}, Rainbow \citep{hessel2018rainbow}, LSPI \citep{lagoudakis2003least}, LAM-API \citep{yao2012approximate}, kernel-regression MDPs \citep{grunewalder2012modelling} and even MCTS \citep{gelly2011monte}, are all off-policy methods.
In off-policy learning, the agent needs to constantly improve its behavior using experience that is imperfect in the sense that it is not optimal. Importance sampling ratios arise because one needs to correct the weighting of the objectives under the behavior policy towards the weighting that would hold under a target, improved policy, {\em e.g.}, see \citep{offpolicy_doina,schulman2015trust}. Such re-weighting often leads to high variances in the policy gradient estimation, e.g., see the interesting discussion by \citet{truely_policy_gradient}. PPO uses experience from the past (old policies) and also applies importance sampling to correct the sample distribution. So clearly PPO is an off-policy algorithm.
Clipping the importance sampling ratio loses the gradient information of policies in an infinitely large policy space. We define $\tilde{\Pi}_{\epsilon}^+=\{\pi; \frac{\pi(s,a)}{\pi_{old}(s,a)} > 1+\epsilon \}$ and $\tilde{\Pi}_{\epsilon}^-=\{\pi; \frac{\pi(s,a)}{\pi_{old}(s,a)} < 1-\epsilon \}$. Note that PPO's optimization never crosses into these two spaces for non-negative and negative advantages, respectively. By clipping, PPO loses the gradient information for any policy in $\tilde{\Pi}_{\epsilon}^+$ and $\tilde{\Pi}_{\epsilon}^-$. In our experiments, we found that there are much better policies within these two spaces that PPO fails to discover. All the PPO algorithms we reviewed extend PPO in some way; however, they unanimously inherit the clipping operation from PPO.
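The loss of gradient information described above can be verified directly: outside the clipped region, the per-sample PPO objective is flat in the ratio $r$. A small finite-difference sketch (our own illustration, assuming $\epsilon=0.2$):

```python
def ppo_term(r, adv, eps=0.2):
    # Per-sample PPO objective: min(r * A, clip(r, 1 - eps, 1 + eps) * A)
    r_clip = min(max(r, 1.0 - eps), 1.0 + eps)
    return min(r * adv, r_clip * adv)

def slope(r, adv, eps=0.2, h=1e-6):
    # Central finite difference of the per-sample objective w.r.t. the ratio r
    return (ppo_term(r + h, adv, eps) - ppo_term(r - h, adv, eps)) / (2 * h)

# Positive advantage: for r > 1 + eps the objective is flat, so no gradient signal
assert abs(slope(1.5, 1.0)) < 1e-9
# Negative advantage: flat for r < 1 - eps
assert abs(slope(0.5, -1.0)) < 1e-9
# Inside the clip region the CPI slope (equal to the advantage) is preserved
assert abs(slope(1.0, 1.0) - 1.0) < 1e-6
```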
\section{Method} In this section, we propose a new objective function, obtained by preconditioning, for better exploration in the parameter space. We also use a KL divergence term to ensure small and smooth policy changes between updates, balancing exploration and exploitation.
\begin{figure*}
\caption{ Left: The objective ($L$) versus the importance sampling ratio ($r$). Right: $\nabla_r L$, i.e., the gradient with respect to $r$. The $L$ comparison includes the CPI objective (black), the PPO objective (blue), and the Scopic objective (orange). Both plots are for a positive advantage ($\hat{A}_t>0$), a single sample, and $\tau=2$. The red point shows the starting point of the optimization, which corresponds to on-policy learning. }
\label{fig:grdients}
\end{figure*}
\subsection{The Scopic Objective} Inspired by the CPI and PPO objectives,
we propose the following refined objective:
\begin{align}
L^{sc}({\theta})
= \hat{\mathbb{E}}_t \Big[ \sigma\left( \tau \left(r_t(\theta) -1 \right)\right) \frac{4}{\tau}\hat{A}_t\Big]
\label{eq:psf} \end{align} where $\sigma$ is the sigmoid function and $\tau$ is the temperature. The advantage $\hat{A}_t$ is computed in the same way as in PPO. We term this new objective function the {\bfseries Scopic} objective, short for {\bfseries s}igmoidal {\bfseries c}onservative {\bfseries p}olicy {\bfseries i}teration objective without {\bfseries c}lipping.
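A minimal Python sketch of Eq.~(\ref{eq:psf}) follows (our own illustration; the function names are ours). Note that at $r_t=1$ and $\tau=2$ the per-sample value equals the advantage, matching the CPI objective on-policy, while for arbitrarily large ratios the value stays bounded by $4\hat{A}_t/\tau$:

```python
import math

def scopic_objective(ratios, advs, tau=2.0):
    # Empirical mean of sigmoid(tau * (r - 1)) * (4 / tau) * A  (the Scopic objective)
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))
    total = 0.0
    for r, a in zip(ratios, advs):
        total += sigmoid(tau * (r - 1.0)) * (4.0 / tau) * a
    return total / len(ratios)

# On-policy sample (r = 1, tau = 2): value equals the advantage, as in CPI
assert abs(scopic_objective([1.0], [1.0]) - 1.0) < 1e-12
```

Unlike the clipped objective, the value (and its gradient) remains defined and informative for any ratio $r\in(0,\infty)$.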
Intuitively, according to the Scopic objective, the agent learns to maximize the scaled advantages of the new policy over the old one while maintaining stability by feeding the importance sampling ratio to the sigmoid function. Theoretically, by using the sigmoid, the importance sampling ratio is allowed to range from zero to infinity while the output is still in a small range, $[\sigma(-\tau), 1)$. So the new policy can be optimized in a policy space that is much larger than that of the clipped surrogate objective. Because the PPO objective is clipped, PPO has no gradient information for new policies whose importance sampling ratios with respect to the current policy fall inside the two policy spaces defined above.
In addition to constraining the importance sampling ratio range in a ``soft'' way, the Scopic objective has an interesting property that is very beneficial for reinforcement learning. {\em The input of the sigmoid is zero if there is no change in the policy at a state.} Note that the gradient of the sigmoid achieves its maximum in this case. This means that when the policy change is zero or small, the sigmoid drives a big parameter update and hence further {\em exploration} in the policy space. A big change in $\theta$ leads to a big change in $\pi_\theta$, and thus a big change in action selection, meaning that our method effectively adapts the parameter update magnitude to explore the action space. On the other hand, when the new policy differs greatly from the old policy, the gradient of the sigmoid becomes small, so the parameters receive little update. The effect is that the agent focuses on a close neighborhood of the policy and uses the knowledge built into the policy,
which leads to {\em exploitation}. So by using the Scopic objective, the agent learns to balance exploration and exploitation automatically via the gradient magnitude, which is adapted by the sigmoid function. To the best of our knowledge, this method of exploration is novel for reinforcement learning and has not been studied in previous research.
Existing methods of exploration are mostly based on the novelty of states and actions, typically having roots in count-based methods such as UCT \citep{uct} and UCB-1 \citep{auer2002using}. The count-based methods have wide applications in computer games, {\em e.g.}, the use of UCT in AlphaGo \citep{silver2016mastering}, and the contextual bandits \citep{bubeck2012regret} in recommendation systems \citep{Li_2010}, {\em etc}. The count-based methods, by definition, only apply to discrete spaces. However, it is possible to extend them to smoothed versions for the continuous case, such as kernel regression UCT \citep{kr_uct}; such methods depend on a choice of kernels and a regression procedure performed on a data set of samples. \citet{parameter_exploration} proposed a method that adds Gaussian noise to parameters for exploration, and \citet{dltv} discussed parameter uncertainty versus intrinsic uncertainty; their method implements the UCB principle without counting, by using the distribution information of the value function for uncertainty estimation. In general, the discussion on ``should we be optimistic or pessimistic in the face of uncertainty'' has attracted a lot of interest in the literature, e.g., see \citep{ciosek2019better,zhang2019quota,keramati2020being,zhou2020non,kuznetsov2020controlling,zhou2021nondecr}. Our approach via the sigmoidally preconditioned objective is different from the above methods: it balances exploration and exploitation given a single online sample, without involving other samples, which is computationally efficient.
\paragraph{Preconditioning} At first sight, the factor $4/\tau$ in the Scopic objective may appear odd. Here we explain this choice and what the preconditioner is.
First, note the important case when there is no change in the new policy. This momentary on-policy learning is recovered with $\tau=2$, when the input of the sigmoid function is zero: the learning reduces to on-policy learning, at least for the first mini-batch update. Consider the definition of $\hat{\eta}$ in Eqs.~\ref{eq:value_equal_trpo} and \ref{eq:value_equal_trpo_2}. For any parameter value $\theta_0$, with the factor $4/\tau$ we can derive \[
L^{sc}(\theta_0)= \hat{\eta}(\theta_0), \quad \quad
\nabla_\theta L^{sc}(\theta)|_{\theta=\theta_0} =\nabla_\theta \hat{\eta}(\theta)|_{\theta=\theta_0}.
\] This ensures that the gradient ascent update of the Scopic objective will improve $\hat{\eta}$, and hence $\eta$, in the on-policy case.
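These identities can be checked directly from elementary properties of the sigmoid; a short verification that we add here, using the notation above. Since $r_t(\theta_0)=1$, $\sigma(0)=\tfrac{1}{2}$, and $\sigma'(0)=\tfrac{1}{4}$, with $\tau=2$ we have
\begin{align*}
L^{sc}(\theta_0)&=\hat{\mathbb{E}}_t\Big[\sigma(0)\,\frac{4\hat{A}_t}{\tau}\Big]=\hat{\mathbb{E}}_t\big[\hat{A}_t\big],\\
\nabla_\theta L^{sc}(\theta)\big|_{\theta=\theta_0}&=\hat{\mathbb{E}}_t\Big[4\,\sigma'(0)\,\hat{A}_t\,\nabla_\theta r_t(\theta)\big|_{\theta=\theta_0}\Big]=\hat{\mathbb{E}}_t\big[\hat{A}_t\,\nabla_\theta r_t(\theta)\big|_{\theta=\theta_0}\big],
\end{align*}
matching the value and gradient of the CPI objective $\hat{\mathbb{E}}_t[r_t(\theta)\hat{A}_t]$ at $\theta_0$. In fact, the gradient identity holds for any $\tau>0$, since $4\sigma'(0)=1$.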
\begin{figure*}
\caption{Performance of our P3O versus five baselines for discrete tasks (first row) and continuous tasks (second row).}
\label{fig:performence}
\end{figure*}
The Scopic loss can be viewed as a preconditioning technique. Let $p(\theta) = \sigma(\tau (r_t(\theta) -1))$; then the gradients of the Scopic objective and the CPI objective are as follows: \begin{align}
\nabla_\theta L^{sc} & = \hat{\mathbb{E}}_{t}[ 4 p(\theta) (1-p(\theta)) \nabla_\theta r_t(\theta) \hat{A}_t ], \notag
\\
\nabla_\theta L^{cpi} &= \hat{\mathbb{E}}_{t}[ \nabla_\theta r_t(\theta) \hat{A}_t ].\notag \end{align}
So the stochastic gradient ascent update for the Scopic objective is a modification of that for the CPI objective. This is similar to preconditioning in iterative methods \citep{saad03:IMS}; note, however, that here the preconditioner is stochastic and applies to the stochastic gradient. Figure \ref{fig:grdients} shows the objective function and the gradient for the CPI, PPO, and our Scopic objectives.
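The preconditioner $4p(1-p)$ can be checked numerically. The following sketch (our illustration, not the authors' code) compares the Scopic and CPI gradients with respect to the ratio $r$ at $r=1$:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def scopic(r, A, tau=2.0):
    # L^sc as a function of the importance sampling ratio r
    return sigmoid(tau * (r - 1.0)) * 4.0 * A / tau

def scopic_grad(r, A, tau=2.0):
    # d L^sc / d r = 4 p (1 - p) A, with p = sigma(tau (r - 1))
    p = sigmoid(tau * (r - 1.0))
    return 4.0 * p * (1.0 - p) * A

def cpi_grad(r, A):
    # d (r A) / d r = A
    return A

# At r = 1 (no policy change) the preconditioner 4 p (1 - p) equals 1,
# so the Scopic gradient matches the CPI gradient for any tau.
A = 2.5
assert abs(scopic_grad(1.0, A, tau=2.0) - cpi_grad(1.0, A)) < 1e-12
assert abs(scopic_grad(1.0, A, tau=4.0) - cpi_grad(1.0, A)) < 1e-12
# With tau = 2 the objective value also matches CPI at r = 1: sigma(0) * 4A/2 = A.
assert abs(scopic(1.0, A, tau=2.0) - 1.0 * A) < 1e-12
```

Away from $r=1$ the preconditioner $4p(1-p)$ shrinks toward zero, which is exactly the ``soft clipping'' effect described above.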
\paragraph{Surrogate Function}
The surrogate function was first introduced in the TRPO paper. The core idea is that when we need to maximize an objective, we can maximize a lower bound of it instead. A simple analysis shows that the PPO algorithm optimizes a proxy function that is smaller than the CPI objective. From the perspective of optimization, PPO modifies a one-step on-policy policy gradient into a multi-step mini-batch stochastic optimization, and then uses gradient clipping to ensure the stability of the optimization. This process includes both on-policy (for the first mini-batch) and off-policy phases. Our method also uses a surrogate function: when $\tau>2$, the Scopic objective is a lower bound of the CPI objective. In experiments, we show that using the sigmoid function works better than gradient clipping for the CPI objective.
\subsection{KL Divergence }\label{sec:KL_rel_MILB} The Scopic objective function can facilitate more exploration, but sometimes the current policy may only need a little exploration. Therefore, for a better balance between exploration and exploitation, we consider minimizing the KL divergence between the new policy and the old policy. This operation ensures that learning is close to on-policy learning and that the importance sampling ratio will be close to one, especially when the learning rate is decayed. In particular, KL divergence can work well if the current policy is good enough. Our method has two networks: a policy network and a value network. The final objective for our policy network is \begin{multline}
L^{p3o}(\theta)=
\mathbb{\hat{E}}_t \Big[ \sigma\left( \tau\left( r_t(\theta) - 1\right)\right) \frac{4\hat{A}_t}{\tau} \\-\beta KL\left(\pi_{\theta_{old}}(\cdot|s_t),\pi_{\theta}(\cdot|s_t)\right)\Big],\notag \end{multline}
where $\beta \ge 0 $ is the regularization coefficient. The value network is trained with the TD(0) algorithm \citep{sutton2018reinforcement}, with the TD update \[
\hat{\nabla}_w L^{vf}(w)= \left[r_t+\lambda_v V_w(s_{t+1}) - V_w(s_t)\right]\nabla_w V_w(s_t), \] which is, up to a constant factor, the semi-gradient of the squared TD-error objective $
L^{vf}(w)= \hat{\mathbb{E}}_t\big[\big(r_t+\lambda_v V_w(s_{t+1}) - V_w(s_t)\big)^2\big]$ with the bootstrap target $r_t+\lambda_v V_w(s_{t+1})$ held fixed.
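The TD(0) value update can be sketched as follows; a minimal tabular illustration with made-up numbers (the paper uses a neural value network $V_w$ rather than a table):

```python
# Tabular TD(0): V(s) <- V(s) + alpha * [r + gamma * V(s') - V(s)].
def td0_update(V, s, r, s_next, alpha=0.5, gamma=0.99):
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return V

V = {0: 0.0, 1: 1.0}
V = td0_update(V, s=0, r=2.0, s_next=1)
# td_error = 2.0 + 0.99 * 1.0 - 0.0 = 2.99, so V[0] becomes 0.5 * 2.99 = 1.495
assert abs(V[0] - 1.495) < 1e-12
```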
Our Preconditioned Proximal Policy Optimization (P3O) algorithm\footnote{We note that another algorithm is also called P3O \citep{ppo_onpolicy4}.} consists of gradient ascent maximizing $L^{p3o}$ and gradient descent minimizing $L^{vf}$.
\section{Empirical Evaluation} We tested the performance of our P3O algorithm against baselines on both continuous and discrete tasks in OpenAI Gym \citep{brockman2016openai} and the Arcade Learning Environment \citep{bellemare2013arcade}. The continuous tasks are Ant-v2, HalfCheetah-v2, and Walker2d-v2, for which the policy is parameterized by a Gaussian distribution. The discrete tasks are Enduro-v4, Breakout-v4, and BeamRider-v4; the observations in these environments are stacks of four RGB frames of the screen, and the policy is parameterized by a softmax distribution. In addition to TRPO and PPO, we also include A2C \citep{mnih2016asynchronous}, ACKTR \citep{wu2017scalable}, and DualClip-PPO \citep{ye2020mastering} as baselines. We evaluate the episodic accumulated reward during the training process of each algorithm. We run each algorithm in the six environments with four random seeds, setting the training time steps to ten million for the discrete tasks and three million for the continuous tasks. Neither PPO nor our P3O uses augmented data over iterations; both algorithms use the data from the latest policy.
\subsection{Performance Comparison}
In Figure \ref{fig:performence}, the learning curves of TRPO and A2C are very flat, showing their ineffectiveness in these discrete environments. TRPO's performance is similar to the empirical results in \citep{wu2017scalable}. The poor performance may be because it is hard to set a proper parameter for the KL-divergence constraint, since it varies across environments. ACKTR is a second-order, natural-gradient algorithm with a much higher computational cost per time step; however, it still did not outperform PPO, which is a first-order method. DualClip-PPO inherits the clipping operation from the PPO objective and adds another max operator with an additional parameter \citep{ye2020mastering}. The algorithm was applied to the game Honor of Kings and achieved competitive play against human professionals; however, there was no baseline comparison. In our experiments, the algorithm performed close to or worse than PPO. That PPO beats the other four baselines shows that it is indeed important to control the high variance of importance sampling. Our P3O outperformed all the baselines, including the best-performing PPO, on these tasks.
In the next subsection, we consider reasons why this happened.
\subsection{The DEON Off-policy Measure and Policy Space Comparison}\label{sec:deon}
In order to understand the performance gap between PPO and our P3O, we compared the maximum {\em {\bfseries de}viation from {\bfseries on}-policy learning} (DEON) measure, defined by $y=\max(|r-1|)$, where the maximum is taken over the importance sampling ratios $r$ in the collected trajectories of samples. This measure is computed during the training of PPO and P3O and compared in Figure \ref{fig:max_ratio}. It shows that the deviation of our P3O from on-policy learning is much bigger than that of PPO during training: {\em P3O is more off-policy than PPO}. This means P3O explores a much bigger policy space than PPO. The clipping and minimum operations in the PPO objective prevent the algorithm from exploring the policy spaces $\tilde{\Pi}_{\epsilon}^+$ and $\tilde{\Pi}_{\epsilon}^-$ in the cases of non-negative and negative advantages, respectively. Together with the performance comparison in Figure \ref{fig:performence}, this shows that the new Scopic objective via sigmoidal preconditioning used by our P3O is a very effective way of exploring the parameter space, which results in efficient exploration of the action space.
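The DEON measure is straightforward to compute from a batch of importance sampling ratios; a minimal sketch (the ratio values below are made up for illustration):

```python
def deon(ratios):
    # Maximum deviation from on-policy learning: y = max |r - 1|
    return max(abs(r - 1.0) for r in ratios)

assert deon([1.0, 1.0, 1.0]) == 0.0   # exactly on-policy
assert deon([0.5, 1.2, 3.0]) == 2.0   # dominated by the ratio r = 3.0
```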
\begin{figure}
\caption{The DEON off-policy measure of our P3O vs. PPO during training. For PPO, the metric is computed without clipping the importance sampling ratio. Both algorithms use their {\em raw} importance sampling ratios computed during their individual training. P3O's deviations are much higher than PPO's, which confirms that P3O explores a policy space much bigger than PPO's; this is because the clipping in PPO's objective prevents it from discovering better policies beyond the clip range. }
\label{fig:max_ratio}
\end{figure}
Note that, for the discrete tasks, all the importance sampling ratios finally converge close to one due to the decaying learning rate. This is interesting because it shows that learning rate decay can yield an on-policy learning algorithm in the long run. For the continuous tasks, where a fixed learning rate is used, the DEON measure of P3O is still increasing at the end, while for PPO it drops close to zero. In particular, the average DEON measure reaches values as big as $60.0$ for P3O, which performs better than PPO. This shows that clipping the importance sampling ratio to the range $[1-\epsilon, 1+\epsilon]$, as in the PPO objective, is far from sufficient to cover good policies. In our experiments, we set $\epsilon=0.2$, as in the PPO paper \citep{schulman2017proximal}. The DEON metric remaining large at the end of learning means P3O is still exploring. The larger policy search space and the consistent exploration lead to a bigger performance improvement for continuous tasks than for discrete tasks (see Figure \ref{fig:performence}, and note the continuous tasks have a much coarser scale on the y-axis).
Previously, there have been a few measures of policy dissimilarity, mostly based on the $L_1$ distance. For example,
\citet{off-policyness-remi-2} and \citet{off-policyness-remi} used $\norm{\pi(\cdot |s) - \mu(\cdot |s)}_1$ to measure the dissimilarity of the two policies at a state $s$. We believe they are also the first to propose the notion of ``off-policyness.''
This idea can also be extended to the action dissimilarity at a state \citep{off-policy-Qualitative}, $|\pi(a |s) - \mu(a |s)|$. However, the $L_1$ and absolute-value based measures are not sufficiently sensitive. Consider two cases: (1) $\pi(a|s)=0.10001, \mu(a|s)=0.00001$; (2) $\pi(a|s)=0.2, \mu(a|s)=0.1$. The dissimilarity according to the absolute-value measure is $0.1$ in both cases. However, the policies apparently deviate more in the first case: under the behaviour policy $\mu$, taking $a$ at $s$ is a rare event. This suggests that, from the last iteration, we have insufficient samples for that state and action, whereas in the second case there is still a significant percentage of samples. For measuring off-policyness, we therefore need more sensitive metrics. This is captured well by our DEON measure, which is based on the ratio of the two policies at a state: the DEON measure is $10000$ for the first case and only $1.0$ for the second.
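The two example cases can be computed directly; a short sketch reproducing the numbers above (the function names are ours):

```python
def l1_dissim(pi, mu):
    # Absolute-value (L1-style) dissimilarity at a single action
    return abs(pi - mu)

def ratio_dissim(pi, mu):
    # DEON-style ratio-based dissimilarity: |pi/mu - 1|
    return abs(pi / mu - 1.0)

# Case 1: mu makes the action a rare event; Case 2: both policies sample it often.
pi1, mu1 = 0.10001, 0.00001
pi2, mu2 = 0.2, 0.1

assert abs(l1_dissim(pi1, mu1) - 0.1) < 1e-12
assert abs(l1_dissim(pi2, mu2) - 0.1) < 1e-12        # identical under L1 ...
assert abs(ratio_dissim(pi1, mu1) - 10000.0) < 1e-6  # ... but very different
assert abs(ratio_dissim(pi2, mu2) - 1.0) < 1e-12     # under the ratio measure
```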
\subsection{CPI Objective Comparison} \begin{figure}
\caption{Plotting the CPI objective during training of {P3O} and {PPO}. The CPI objective (see the Background Section) has no clipping involved for PPO as well. P3O maximizes the CPI objective much better than PPO except for BeamRider.}
\label{fig:cpi_objective}
\end{figure} This experiment was motivated by noting that both the PPO objective and the Scopic objective originate from the CPI objective. As shown by the TRPO and PPO papers, directly maximizing the CPI objective is problematic because of the high variance of importance sampling. PPO and our P3O can be viewed as special methods for maximizing the CPI objective, so one important question is how well the CPI objective is maximized by each of the two algorithms. We thus calculated the CPI objective (without any clipping or sigmoid preconditioning) during training. The result is shown in Figure \ref{fig:cpi_objective}. For both the discrete and continuous tasks (except BeamRider), P3O consistently maximizes the CPI objective better than PPO. This means that with sigmoid preconditioning the advantage of the new policy over the old one is consistently larger than with clipping. For the discrete tasks, the CPI objective of P3O finally converges close to that of PPO. This is again because the decayed learning rate used for the discrete tasks becomes very small in the end, driving the importance sampling ratio to one and thus reducing the Scopic objective to the CPI objective. For the continuous tasks, the CPI objective is even much bigger: because of the fixed learning rate, P3O still actively explores at the end of learning and keeps discovering new policies whose advantage is much bigger than the old one's.
\subsection{Sensitivity to Hyper-parameters} The hyper-parameter studies were performed over three dimensions: the number of epoch updates, the batch size, and the learning rate. The number of epoch updates was either 5 or 10; the batch size was either 32 or 64; and the learning rate was either a constant $10^{-4}$ or a decay schedule. The decay schedule started with a learning rate of $3\times 10^{-4}$ and decayed linearly, as used in OpenAI's PPO implementation. This yields eight hyper-parameter combinations, whose results are shown in Figure \ref{fig:halfcheetah-hyper-parameter}. Group (c) performed best for PPO (consistent with the best result in the PPO paper) and Group (h) was the best for P3O. This shows that PPO prefers the decay schedule while our method prefers the fixed learning rate for continuous tasks. With the best hyper-parameter group for each algorithm, P3O's variance is much lower than PPO's. \begin{figure}
\caption{ Hyper-parameter sensitivity studies of PPO (green) and our P3O (red) on HalfCheetah, with two settings for each of the three dimensions: the number of epochs over the samples, the batch size, and the learning rate. In terms of the best case, P3O (group h) performed better than PPO (group c; consistent with the original PPO paper), with higher mean reward and lower variance. }
\label{fig:halfcheetah-hyper-parameter}
\end{figure}
\subsection{Ablation Studies} As the P3O objective has two parts, the sigmoidal preconditioner and the KL-divergence regularizer, we conducted an ablation study. The following three variants were studied: \textbf{P3O-S} removes the sigmoidal preconditioner and replaces it with the identity mapping; \textbf{P3O-K} removes the KL-divergence regularizer; \textbf{P3O-SK} removes both the preconditioner and the KL-divergence, reducing to the TRPO objective. Figure \ref{fig:ablation} shows that the performance of all these variants decreased sharply. Notably, P3O-K (with only the sigmoid preconditioning) outperforms P3O-S (with only the KL-divergence regularizer), showing that the sigmoid preconditioning is the primary driver of improvement in P3O.
\begin{figure}
\caption{Ablation studies. P3O-S removes the sigmoid preconditioner and P3O-K removes the KL-divergence regularizer, while P3O-SK removes both, reducing to the TRPO objective. This shows both components are important to the performance of P3O; moreover, the preconditioning contributes more than the regularization. }
\label{fig:ablation}
\end{figure}
\section{Conclusion}
We proposed a new surrogate objective that applies the sigmoid function to the importance sampling ratio. Our surrogate objective enables us to find much better policies outside of the clipped policy space, and can be viewed as a ``soft clipping'' technique, with a nice exploration property. This extends our understanding of the PPO algorithm and many later developments based on it, and suggests we should look into optimizing the CPI objective beyond the clipped policy space. We found that PPO is insufficient in {\em off-policyness}, and our P3O deviates more from on-policy learning than PPO, according to a measure of off-policyness during training which is called DEON. We can use this metric to measure the policy disparity introduced by the importance sampling method. We compared our P3O algorithm with five recent deep reinforcement learning baselines in both discrete and continuous environments. Results show that our method achieves better performance than the baselines.
\section{Acknowledgments} This work is supported in part by the National Natural Science Foundation of China under grants Nos. 61902145, 61976102, and U19A2065; the International Cooperation Project under grant No. 20220402009GH; and the National Key R\&D Program of China under grants Nos. 2021ZD0112501 and 2021ZD0112502. Dongcui and Bei are partly funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery grant RGPIN-2022-03034. Bei and Randy are supported by the Alberta Machine Intelligence Institute.
\appendix
\section{Algorithm} The pseudo-code of our P3O algorithm is shown in Algorithm \ref{alg_pa_opt}.
\begin{algorithm} \caption{Preconditioned Proximal Policy Optimization} \label{alg_pa_opt} \begin{algorithmic} \STATE {\bfseries Input:} a simulation environment \STATE {\bfseries Output:} policy $\pi_{\theta}$ for the environment \STATE Initialize policy parameters $\theta_{\pi}, \theta_{\pi_{old}}$ and value network parameters $\theta_w$, learning rates $\lambda_{\pi}, \lambda_v$, update number $T$, number of samples $N$, data buffer $\mathbb{D}$ \STATE Simulate as many agents as possible (depending on hardware) with policy $\pi_{\theta}$ to interact with the environment
\FOR{$iteration=1$ {\bfseries to} $m$ } \REPEAT
\STATE $a_t \sim \pi_\theta(a_t|s_t)$
\STATE $s_{t+1} \sim \mathcal{P}(s_{t+1}|s_t,a_t)$ \STATE $\mathbb{D} \leftarrow \mathbb{D}\cup {(s_t,a_t,r_t,s_{t+1})}$ \UNTIL{$t \geq NT$} \STATE Take all the buffer data to compute advantage $\hat{A}$
\STATE $\pi_{\theta_{old}}= \pi_\theta$ \FOR {$epoch=1$ {\bfseries to} $T$} \STATE Sample $N$ mini-batch samples from Buffer $\mathbb{D}$ \STATE $\theta_\pi \leftarrow\theta_\pi + \lambda_\pi \hat{\nabla}_\theta L^{p3o}$ \STATE $\theta_w \leftarrow\theta_w - \lambda_v \hat{\nabla}_w L^{vf} $ \ENDFOR
\STATE Clear data buffer $\mathbb{D}$
\ENDFOR \end{algorithmic} \end{algorithm}
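As a minimal illustration of the policy loss in Algorithm \ref{alg_pa_opt}, the following sketch (our simplification: NumPy, categorical policies, and a single batch; the actual implementation uses neural networks and automatic differentiation) computes $L^{p3o}$, i.e.\ the Scopic term minus the KL-divergence regularizer:

```python
import numpy as np

def p3o_policy_loss(pi_new, pi_old, actions, advantages, tau=4.0, beta=1.0):
    """Scopic term with KL regularization, averaged over the batch.

    pi_new, pi_old: arrays of shape (batch, n_actions) with action probabilities.
    """
    idx = np.arange(len(actions))
    ratio = pi_new[idx, actions] / pi_old[idx, actions]       # r_t(theta)
    sig = 1.0 / (1.0 + np.exp(-tau * (ratio - 1.0)))          # sigma(tau (r - 1))
    scopic = sig * 4.0 * advantages / tau
    # KL(pi_old || pi_new), summed over actions at each sampled state
    kl = np.sum(pi_old * (np.log(pi_old) - np.log(pi_new)), axis=1)
    return float(np.mean(scopic - beta * kl))

# With pi_new == pi_old the ratio is 1 and the KL term vanishes, so with
# tau = 2 each sample contributes exactly its advantage: sigma(0) * 4A/2 = A.
pi = np.array([[0.5, 0.5], [0.25, 0.75]])
loss = p3o_policy_loss(pi, pi, actions=np.array([0, 1]),
                       advantages=np.array([1.0, -1.0]), tau=2.0)
assert abs(loss - 0.0) < 1e-12
```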
\section{Related Works}
In this section, we briefly review the related works from four aspects.
\paragraph{Convex optimization methods.} Some work has developed approaches that simplify the optimization problem of reinforcement learning into a convex optimization problem. In \citep{yao2008preconditioned}, a framework of policy evaluation algorithms called preconditioned temporal difference (PTD) learning is introduced, and \citep{li2016preconditioned,sappl2019deep} use preconditioning for the effective training of deep neural networks. DAPO~\citep{wang2019divergence} proved that convex optimization methods can be used for the reinforcement learning problem and includes a Bregman divergence to ensure small and safe policy updates with off-policy data; Euclidean distance and Kullback-Leibler (KL) divergence are instances of Bregman divergence. MDPO~\citep{tomar2021mirror} adds to the common expected-reward objective a proximity term that restricts two consecutive policies to be close to each other, and iteratively updates the policy by approximately solving a trust-region problem.
\paragraph{Computing the Hessian Matrix.} Recently, many methods have improved sample efficiency based on trust-region second-order gradient optimization algorithms~\citep{schulman2015trust,wu2017scalable}. These methods calculate an approximate Hessian matrix to find an ideal gradient direction. However, the computational cost is enormous, and the Hessian matrix is intractable for complex neural networks. Although \citep{wu2017scalable} designed a particular neural network structure to simplify the computation, the structure does not generalize to complex application scenarios.
\paragraph{Policy Approximation.} Several studies go deeper into policy approximation in RL; one of the most famous examples is the clipping mechanism of the PPO algorithm, which relies on optimizing parametrized policies by a heuristic method. \citep{wang2020truly} adopts a new clipping function that supports a rollback behavior to restrict the ratio between the new policy and the old one. In \citep{ye2020mastering}, when the estimate of the advantage is negative, a dual-clip operator yields better policy performance. \citep{hsu2020revisiting} reviews multiple failure modes of the PPO algorithm and uses a beta distribution instead of the Gaussian distribution to get better performance; however, this does not improve the algorithm itself but rather substitutes a structure more suitable for specific environments. These studies only analyzed the effects of PPO's clip operation and did not put forward a theory explaining why the clipping is needed. For the estimation of $\eta(\pi)$ under a behavior policy $\Tilde{\pi}$, possible methods include Retrace~\citep{wang2016sample}, which provides an estimator of the state-action value, and V-trace~\citep{espeholt2018impala}, which provides an estimator of the state value; but these two methods also suffer from estimation variance.
\paragraph{Off-Policy Method.} For value-based algorithms, sample efficiency has also been considered from the perspective of data usage. The DQN \citep{mnih2015human} algorithm uses a replay buffer of previous experiences to improve sample efficiency. Double-DQN \citep{van2016deep} eliminates the overestimation problem by decoupling the two steps of target-Q action selection and target-Q calculation. DDPG~\citep{lillicrap2019continuous} is a deterministic policy gradient algorithm based on actor-critic that solves continuous action-space tasks DQN cannot handle. TD3~\citep{fujimoto2018addressing} follows the idea of double-DQN, using two independent critics to prevent overestimation, but it can only be used in environments with continuous action spaces. SAC~\citep{haarnoja2018soft} introduces entropy into deep reinforcement learning, providing sample-efficient learning while retaining the benefits of entropy maximization.
Compared with these previous methods, our method uses a preconditioner to better deal with the potentially infinite variance of importance sampling. Low-variance gradients make learning more efficient, allowing fixed step sizes to be used, potentially reducing the overall number of steps needed to reach convergence and resulting in faster training. Moreover, we can guarantee monotonic policy improvement by combining with the KL-divergence constraint.
\section{Hyper-parameter}\label{ap:parameter}
The detailed hyper-parameters are shown in Table~\ref{tab:atari_hyperparameters}. \begin{table} \centering \begin{tabular}{lrr} \toprule Env (type) & MuJoCo & Atari \\ \midrule Horizon ($T$) & 2048 & 256 \\ Adam stepsize & $1 \times 10^{-4}$ & $2.5 \times 10^{-4}$ \\ Num. epochs & 10 & 4 \\ Minibatch size & 64 & 4 \\ Discount ($\gamma$) & 0.99 & 0.99 \\ GAE parameter ($\lambda$) & 0.95 & 0.95 \\ Number of actors & 1 & 8 \\ KL coeff. ($\beta_{kl}$) & 1 & 0.1 \\ Sigma coeff. ($\beta_s$) & 1 & 1 \\ VF coeff. & 1 & 1 \\ Entropy coeff. & 0 & 0.01 \\ Temperature ($\tau$) & 4 & 4 \\ \bottomrule \end{tabular} \caption{Hyper-parameters used for the MuJoCo (3 million time steps) and Atari (10 million time steps) benchmarks.}
\label{tab:atari_hyperparameters} \end{table}
\end{document}
\begin{document}
\title[]{Arithmetics of 2-friezes}
\author{Sophie Morier-Genoud}
\address{Sophie Morier-Genoud, Institut de Math\'ematiques de Jussieu, UMR 7586, Universit\'e Pierre et Marie Curie, 4 place Jussieu, case 247, 75252 Paris Cedex 05 }
\email{sophiemg@math.jussieu.fr }
\date{}
\begin{abstract} We consider the variant of Coxeter-Conway frieze patterns called 2-frieze. We prove that there exist infinitely many closed integral 2-friezes (i.e. containing only positive integers) provided the width of the array is bigger than 4. We introduce operations on the integral 2-friezes generating bigger or smaller closed integral 2-friezes. \end{abstract}
\maketitle
\section{Introduction}
A frieze is a finite or infinite array whose entries (integers, real numbers, or more generally elements of a ring) satisfy a local rule. The most classical friezes are the ones introduced by Coxeter \cite{Cox}, and later studied by Conway and Coxeter \cite{CoCo}, for which the rule is the following: \textit{every four neighboring entries form a matrix of determinant 1}.
\begin{figure}
\caption{Fragment of an integral Coxeter-Conway frieze of width 4}
\label{exCoCox}
\end{figure}
Figure \ref{exCoCox} gives an example of a Coxeter-Conway frieze filled in with positive integers. In this example one can easily check that the local rule is satisfied: $$
\begin{array}{ccccccc}
&B&\\
A&&D\\
&C& \end{array}\quad \Longrightarrow \quad AD-BC=1. $$
A particularly interesting class of friezes is the class of \textit{integral closed friezes}. \textit{Closed} means that the array is bounded above and below by rows of 1s; in this case we call the \textit{width} of the frieze the number of rows strictly between the top and bottom rows of 1s. \textit{Integral} means that the frieze is filled in with positive integers.
Let us mention the following remarkable properties for Coxeter-Conway closed friezes,
\cite{CoCo}.
{\it \begin{enumerate} \item[(CC1)] Every row in a closed frieze of width $n-3$ is $n$-periodic, \item[(CC2)] Integral closed friezes of width $n-3$ are in one-to-one correspondence with the triangulations of an $n$-gon. The first non-trivial row
in the frieze gives the number of incident triangles at each vertex (enumerating in a cyclic order), see Figure \ref{triang}. \end{enumerate}}
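Properties (CC1) and (CC2) can be illustrated computationally. The following sketch (our illustration; the shifted-row recurrence and the pentagon quiddity sequence $(1,3,1,2,2)$ are our choices) builds the closed frieze of width $2$ attached to a triangulation of a pentagon ($n=5$) and checks that the array closes with a row of 1s and contains only positive integers:

```python
from fractions import Fraction

def frieze_rows(quiddity, width):
    """Rows of a Coxeter-Conway frieze from its (cyclic) quiddity sequence.

    Row rule in a shifted layout: r_{k+1}(i) = (r_k(i) r_k(i+1) - 1) / r_{k-1}(i+1),
    which encodes the unimodular rule AD - BC = 1 on each diamond.
    """
    n = len(quiddity)
    rows = [[Fraction(1)] * n, [Fraction(a) for a in quiddity]]
    for _ in range(width):
        prev, cur = rows[-2], rows[-1]
        rows.append([(cur[i] * cur[(i + 1) % n] - 1) / prev[(i + 1) % n]
                     for i in range(n)])
    return rows

# Pentagon (n = 5), width n - 3 = 2; quiddity = incident-triangle counts.
rows = frieze_rows([1, 3, 1, 2, 2], width=2)
assert rows[2] == [2, 2, 1, 3, 1]   # the second nontrivial row
assert all(x == 1 for x in rows[3])  # the frieze closes with a row of 1s
assert all(x > 0 and x.denominator == 1 for row in rows for x in row)  # integral
```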
\begin{figure}
\caption{Triangulation associated to the frieze of Figure \ref{exCoCox}}
\label{triang}
\end{figure}
Coxeter and Conway established many other surprising connections between frieze patterns and classical objects in mathematics, like Gauss' {\it pentagramma mirificum}, Fibonacci numbers, Farey sequences\footnote{See also Richard Schwartz' applet at http://www.math.brown.edu/$\sim$res/Java/Frieze/Main.html}, continued fractions, etc.
The study of frieze patterns is currently reviving due to connections with Fomin-Zelevinsky's cluster algebras. This new wave of interest started in 2005 with the work of Caldero and Chapoton~\cite{CaCh}, where they connected Coxeter-Conway frieze patterns to cluster algebras of type A. New versions of frieze patterns have been introduced to extend this connection to other types \cite{BaMa}, \cite{ARS}, and to provide new information on cluster variables; see also \cite{AD}, \cite{KeSc},~\cite{ADSS}.
In 2005, J. Propp suggested a variant of friezes \cite{Pro}. This variant is called a \textit{$2$-frieze} in \cite{MOT}. The defining local rule of a 2-frieze is the following: \textit{each entry in the frieze is equal to the determinant of the matrix formed by its four neighbors}.
\begin{figure}
\caption{Fragment of an integral closed 2-frieze of width 4}
\label{ex2frieze}
\end{figure}
Figure \ref{ex2frieze} gives an example of an integral closed 2-frieze, in which one can easily check that the local rule is satisfied: $$
\begin{array}{ccccccc}
*&B&*\\
A&E&D\\
*&C&* \end{array}\quad \Longrightarrow \quad AD-BC=E. $$
Propp announced and conjectured some results on 2-friezes and also referred to unpublished work of D. Hickerson. It seems that nobody had studied this type of frieze in detail until \cite{MOT}, where 2-friezes were introduced to study the moduli space of polygons in the projective plane. It turned out that the space of closed 2-friezes of width $n-4$ can be identified with the space of $n$-gons in the projective plane (provided $n$ is not a multiple of 3).
In the present paper we are interested in combinatorial and algebraic aspects of 2-friezes. Our study concerns the particular class of \textit{integral closed 2-friezes}.
The natural question, posed in \cite{Pro} and \cite{MOT}, is:
\begin{ques}\label{classif} How many integral closed 2-friezes exist for a given width? \end{ques}
Let us stress that the answer is known in the case of Coxeter-Conway friezes: they are counted by the Catalan numbers! (This is a consequence of property (CC2) above.)
In the case of 2-friezes of width $m$ we have the following information: \begin{enumerate} \item[\textbullet] for $m=1$ and $m=2$, there exist respectively 5 and 51 integral closed 2-friezes; this was announced in \cite{Pro} and proved in \cite{MOT}, \item[\textbullet] for $m=3$, there exist at least 868 integral closed 2-friezes; these friezes were found using two independent computer programs (by J. Propp \cite{Pro} and by R. Schwartz, used in \cite{MOT}), \item[\textbullet] for $m>4$, V. Fock conjectured (in a private communication) that there are infinitely many integral closed 2-friezes. \end{enumerate}
The case $m=4$ still needs to be investigated. The main result of the paper is the following.
\begin{thm}\label{inffrieze} For any $m>4$ there exist infinitely many integral closed 2-friezes of width $m$. \end{thm}
Fock's intuition was based on the fact that closed 2-friezes of width $m$ are related to the cluster algebra associated with the quiver $A_2\times A_m$, which is of infinite type for $m>4$. Our proof is based on this idea, using a procedure to construct integral closed 2-friezes from evaluations of cluster variables. However, this procedure does not give all the possible friezes. Let us mention that our proof of Theorem \ref{inffrieze} uses the positivity conjecture in cluster algebras (see Section \ref{thcla}).
Our next series of results describe operations on the integral 2-friezes and procedures to get new friezes from old ones. These operations are given in Theorems \ref{GluThm}, \ref{CutThm}, \ref{Gluxy} in Section \ref{OpSect}.
The paper is organized as follows. The main sections, Section \ref{ClustSect} and Section \ref{OpSect}, can be read independently. In Section \ref{ClustSect}, we describe the connection between closed 2-friezes and cluster algebras. The main definitions and results from the theory of cluster algebras that we need are recalled. We explain how to get integral friezes from cluster algebras. We finally prove Theorem~\ref{inffrieze} in Section \ref{proofth1}. In Section \ref{OpSect}, we recall the main properties of 2-friezes and introduce a series of algebraic operations on friezes. In particular, we recall the link between closed friezes and moduli spaces of polygons. This link is helpful to interpret the algebraic operations. In Section \ref{Conc}, we conclude the paper by refining Question \ref{classif} and mentioning further directions.
\section{Closed 2-friezes and cluster algebra}\label{ClustSect}
Theorem \ref{inffrieze} will be proved with the help of the theory of cluster algebras. These algebras were defined by Fomin and Zelevinsky in the early 2000s. The subject has experienced exponential growth due to its connections with many different fields of mathematics. In Sections \ref{cadef} and \ref{thcla}, we recall the main definitions and results about cluster algebras that will be useful for us. All the material can be found in the original works \cite{FZ1}, \cite{FZ2}, \cite{FZ3}, or in \cite[Chapter 3]{GSV}. We follow below the presentation made in \cite{MOT}.
\subsection{Closed 2-frieze}\label{defriz}
A $2$-frieze can be defined as a map $v:(i,j)\mapsto v_{i,j}$ from $(\half+\mathbb{Z})^2\cup\mathbb{Z}^2$ to an arbitrary unital (division) ring, such that the following relation holds for all $(i,j) \in (\half+\mathbb{Z})^2\cup\mathbb{Z}^2$ \begin{equation}\label{friezerule} \textstyle v_{i-1,j}\,v_{i,j+1}-\,v_{i,j}\,v_{i-1,j+1}\,=v_{i-\half,j+\half}. \end{equation} A $2$-frieze can be pictured as an infinite array as in Figure \ref{vij} below. \begin{figure}
\caption{Indexing the entries of a 2-frieze.}
\label{vij}
\end{figure}
A \textit{closed $2$-frieze} is a map $v:(i,j)\mapsto v_{i,j}$, where $(i,j)$ is, as before, a pair of integers or of half-integers, restricted to the stripe $$ -1\leq{}i-j\leq{}m, $$ where $m$ is a fixed integer called the \textit{width} of the frieze, and satisfying the local rule \eqref{friezerule} together with the boundary conditions $v_{i-1,i}=v_{i+\frac{m}{2},i-\frac{m}{2}}=1$ for all $i\in\mathbb{Z}$ or $\mathbb{Z}+\half$.
A closed $2$-frieze is represented by an infinite stripe \begin{equation}\label{closedfr}
\begin{matrix} \cdots& 1&1&1&1&1&\cdots&1&\cdots
\\[4pt] \cdots&v_{0,0}&v_{\half,\half}&v_{1,1}&v_{\frac{3}{2},\frac{3}{2}}&v_{2,2}&\cdots&v_{i,i}&\cdots
\\ &\vdots &\vdots &\vdots &\vdots &\vdots &&\vdots&\\ \cdots&\vdots &\vdots &\vdots &\vdots &\vdots &&v_{i+\frac{m-1}{2},i-\frac{m-1}{2}}&\cdots&\\
\cdots & 1&1&1&1&1&\cdots&1&\cdots \\[4pt] \end{matrix} \end{equation}
The following statement was known to J. Propp and D. Hickerson and was written down in \cite{MOT} (it is an analog of the property (CC1) of the Coxeter-Conway friezes mentioned in the introduction).
\begin{prop} \cite{MOT}\label{perio} In a closed $2$-frieze of width $m$, every row is $2n$-periodic, where $n=m+4$, i.e. $v_{i+n,j+n}=v_{i,j}$ for all $(i,j)$. \end{prop}
\subsection{Formal closed 2-frieze}
A closed $2$-frieze is generically determined by two consecutive columns. Given $2m$ independent variables $x_1, \ldots, x_{2m}$, the following proposition defines a closed $2$-frieze with values in the field of rational functions $\mathbb{C}(x_1,x_2,\ldots,x_{2m})$, containing the variables $x_1, \ldots, x_{2m}$ in two consecutive columns.
\begin{prop}\cite{MOT}\label{formal} There exists a unique closed $2$-frieze of width $m$, with values in the field of rational functions $\mathbb{C}(x_1,x_2,\ldots,x_{2m})$, containing the following two consecutive columns \begin{equation} \label{ClustF}
\begin{array}{cccccc} \cdots&1& 1&1&1&\cdots
\\[4pt] &&x_1&x_{m+1}&&
\\[4pt] &&x_{m+2}&x_2&&
\\[4pt] &&x_3&x_{m+3}&&
\\[4pt] &&\vdots&\vdots&&
\\ &&\vdots &\vdots &&\\ \cdots&1& 1&1&1&\cdots \end{array} \end{equation} where $x_1$ is in position $v_{0,0}$. Furthermore, all the entries of the $2$-frieze are Laurent polynomials in $x_1, \ldots, x_{2m}$. \end{prop}
The formal $2$-frieze characterized in the above Proposition is denoted by $F(x_1, \ldots, x_{2m})$.
\begin{ex} \label{ClustFive} Case $m=1$ $$ \begin{array}{cccccccccccccccc} \cdots &1 & 1 & 1 & 1& 1& 1& 1& \cdots\\[6pt] \cdots & x_1 & x_2& \frac{x_2+1}{x_1}& \frac{x_1+x_2+1}{x_1x_2}& \frac{x_1+1}{x_2}&x_1 & x_2&\cdots \\[6pt] \cdots &1& 1& 1& 1& 1& 1& 1& \cdots \\ \end{array} $$ \end{ex}
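In the width-one case, the local rule \eqref{friezerule} amounts to the recurrence $a_{k+1}=(a_k+1)/a_{k-1}$ on the middle row $(a_k)$, which makes the example easy to verify numerically. The following Python sketch (an illustration only, not part of the original construction) checks the $5$-periodicity visible above, which divides the period $2n=10$ predicted by Proposition \ref{perio}:

```python
from fractions import Fraction

def width_one_row(x1, x2, length=12):
    """Middle row of the width-1 closed 2-frieze; the local rule
    reduces to the recurrence a_{k+1} = (a_k + 1) / a_{k-1}."""
    row = [Fraction(x1), Fraction(x2)]
    while len(row) < length:
        row.append((row[-1] + 1) / row[-2])
    return row

# Generic rational initial values: the row is 5-periodic
# (5 divides the period 2n = 10 predicted for n = m + 4 = 5).
row = width_one_row(Fraction(3, 2), Fraction(7, 3))
assert row[5] == row[0] and row[6] == row[1]

# Evaluating at x1 = x2 = 1 gives the integral row 1, 1, 2, 3, 2, ...
assert width_one_row(1, 1)[:5] == [1, 1, 2, 3, 2]
```

Exact rational arithmetic (rather than floating point) is used so that the periodicity test is a genuine identity check.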
\begin{ex} \label{ClustSix} Case $m=2$ \begin{equation*} \begin{array}{cccccccccccccccc} \cdots &1 & 1 & 1 & 1& 1& 1& 1& 1& \cdots\\[6pt] \cdots & x_1 & x_3& \frac{x_3+x_2}{x_1}& \frac{(x_3+x_2)(x_4+x_1)}{x_1x_3x_4} & \frac{(x_1+x_4)(x_2+x_3)}{x_2x_3x_4} & \frac{x_1+x_4}{x_2}&x_4 & x_2&\cdots \\[6pt] \cdots & x_4 & x_2& \frac{x_2+x_3}{x_4}& \frac{(x_2+x_3)(x_1+x_4)}{x_4x_2x_1} & \frac{(x_4+x_1)(x_3+x_2)}{x_3x_1x_2} & \frac{x_4+x_1}{x_3}&x_1 & x_3&\cdots \\[6pt] \cdots &1& 1& 1& 1& 1& 1& 1& 1& \cdots \\ \end{array} \end{equation*} \end{ex}
\begin{rem} The Laurent phenomenon described in Proposition \ref{formal} was mentioned in \cite{Pro} and proved in \cite{MOT} using a link to cluster algebras (this link also implies the periodicity described in Proposition \ref{perio}, although periodicity has been established by elementary methods in~\cite{MOT}). As an easy consequence of the Laurent phenomenon, one can obtain integral closed 2-friezes by setting the initial variables $x_1, \ldots, x_{2m}$ to be equal to 1. The link to cluster algebras will actually provide more information. \end{rem}
\subsection{Cluster algebras: basic definitions}\label{cadef}
A cluster algebra is a commutative associative algebra; it is a subalgebra of a field of rational functions in $N$ variables, presented by generators and relations. The generators are collected in packages called \textit{clusters} of fixed cardinality $N$, where the constant $N$ is called the rank of the algebra. The generators and relations are not given from the beginning: they are obtained recursively using a combinatorial procedure encoded in a matrix, or equivalently in an oriented graph with no loops and no $2$-cycles.
We give here an explicit construction of the (complex or real) cluster algebra $\mathcal{A}(\mathcal{Q})$ starting from a finite oriented connected graph $\mathcal{Q}$ with no loops and no $2$-cycles (there exist more general constructions of cluster algebras, but the one given here is enough for our purpose). Let $N$ be the number of vertices of $\mathcal{Q}$; the set of vertices is then identified with the set $\{1, \ldots, N\}$. The algebra $\mathcal{A}(\mathcal{Q})$ is a subalgebra of the field of fractions $\mathbb{C}(x_1,\ldots, x_N)$ in $N$ variables $x_1,\ldots, x_N$ (or over $\mathbb{R}$, in the real case). The generators and relations of $\mathcal{A}(\mathcal{Q})$ are given using a recursive procedure called \textit{seed mutations} that we describe below.
A \textit{seed} is a pair $$ \Sigma=\left((t_1, \ldots, t_N) , \;\mathcal{R}\right), $$ where $\mathcal{R}$ is an arbitrary finite oriented graph with $N$ vertices and where $t_1, \ldots, t_N$ are free generators of $\mathbb{C}(x_1,\ldots, x_N)$ labeled by the vertices of the graph $\mathcal{R}$. The \textit{mutation at vertex} $k$ of the seed $\Sigma$ is a new seed $\mu_k(\Sigma )$ defined by \begin{enumerate} \item[\textbullet] $\mu_k(t_1, \ldots, t_N)=(t_1, \ldots, t_{k-1},t'_k, t_{k+1},\ldots, t_N)$ where \begin{equation}\label{exrel} \displaystyle t'_k=\dfrac{1}{t_k}\left(\prod\limits_{\substack{\text{arrows in }\mathcal{R}\\ i\rightarrow k }}\; t_i
\quad+\quad \prod\limits_{\substack{\text{arrows in }\mathcal{R}\\ i\leftarrow k }}\;t_i\right) \end{equation} \item[\textbullet] $\mu_k(\mathcal{R})$ is the graph obtained from $\mathcal{R}$ by applying the following transformations \begin{enumerate} \item for each possible path $i\rightarrow k \rightarrow j$ in $\mathcal{R}$, add an arrow $i\rightarrow j$, \item reverse all the arrows leaving or arriving at $k$, \item remove a maximal collection of 2-cycles, \end{enumerate} \end{enumerate} (see Example \ref{exmut} below for a seed mutation).
Starting from the initial seed $\Sigma_0=((x_1,\ldots, x_N), \mathcal{Q})$, one produces $N$ new seeds $\mu_k(\Sigma_0)$, $k=1,\ldots, N$. Then one applies all the possible mutations to all of the new seeds thus created, and so on. The set of rational functions appearing in a seed produced during the mutation process is called a \textit{cluster}. The functions in a cluster are called \textit{cluster variables}. The cluster algebra $\mathcal{A}(\mathcal{Q})$ is the subalgebra of $\mathbb{C}(x_1,\ldots, x_N)$ generated by all the cluster variables.
\begin{ex}\label{exmut} In the case $N=4$, consider the seed $\Sigma=$ $$ (t_1,t_2,t_3,t_4), \quad \mathcal{R}= \xymatrix{ 1\ar@{->}[r] &2\ar@{->}[d] \\ 3\ar@{<-}[r]\ar@{->}[u] &4 }.$$ The mutation at vertex 1 gives $$ \mu_1(t_1,t_2,t_3,t_4)=\Big(\frac{t_2+t_3}{t_1},t_2,t_3,t_4\Big), \quad \mu_1(\mathcal{R})=\quad \xymatrix{ 1\ar@{<-}[r] &2\ar@{->}[d]\ar@{<-}[ld] \\ 3\ar@{<-}[r]\ar@{<-}[u] &4}. $$ Performing the mutation $\mu_2$ on $\mu_1(\mathcal{R})$ leads to the following graph $$ \mu_2\mu_1(\mathcal{R})=\quad \xymatrix{ 1\ar@{->}[r] &2\ar@{<-}[d]\ar@{->}[ld] \\ 3 &4}. $$ The underlying non-oriented graph of $\mu_2\mu_1(\mathcal{R})$ is the Dynkin diagram of type $D_4$. The algebra $\mathcal{A}(\mathcal{R})$ is referred to as the cluster algebra of type $D_4$ in the terminology of \cite{FZ2}. It is known that in this case the mutation process is finite, meaning that applying all the possible mutations to all the seeds leads to a finite number of seeds and therefore to a finite number (24) of cluster variables. \end{ex}
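The graph mutation can be encoded in the skew-symmetric matrix $B$ with $b_{ij}$ equal to the number of arrows $i\rightarrow j$ minus the number of arrows $j\rightarrow i$; steps (a)-(c) above then become a single formula for the mutated matrix. The following Python sketch (an illustrative reformulation, not used in the text) reproduces the computations of this example:

```python
from fractions import Fraction

def mutate_matrix(B, k):
    """Matrix mutation at vertex k (0-indexed); this single formula
    encodes steps (a)-(c) of the graph mutation."""
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(n)]

def mutate_cluster(t, B, k):
    """Exchange relation: t_k -> (prod over arrows i->k + prod over arrows k->i) / t_k."""
    p_in = p_out = Fraction(1)
    for i in range(len(t)):
        if B[i][k] > 0:          # arrows i -> k
            p_in *= t[i] ** B[i][k]
        elif B[i][k] < 0:        # arrows k -> i
            p_out *= t[i] ** (-B[i][k])
    return t[:k] + [(p_in + p_out) / t[k]] + t[k + 1:]

# The 4-cycle quiver R of the example: 1->2, 2->4, 4->3, 3->1.
R = [[0, 1, -1, 0],
     [-1, 0, 0, 1],
     [1, 0, 0, -1],
     [0, -1, 1, 0]]

# mu_1 replaces t_1 by (t_2 + t_3)/t_1; check on sample values.
t = [Fraction(v) for v in (2, 3, 5, 7)]
assert mutate_cluster(t, R, 0)[0] == Fraction(3 + 5, 2)

# mu_2 mu_1 (R) is the star at vertex 2 with arrows 1->2, 2->3, 4->2 (type D_4).
D4 = mutate_matrix(mutate_matrix(R, 0), 1)
assert D4 == [[0, 1, 0, 0], [-1, 0, 1, -1], [0, -1, 0, 0], [0, 1, 0, 0]]
```

The 2-cycle cancellation of step (c) is automatic in the matrix formula, which is why no explicit removal step appears in the code.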
\subsection{Cluster algebras: fundamental results}\label{thcla}
To prove Theorem \ref{inffrieze} we will need the following fundamental theorems on cluster algebras.
The first result relates the classification of cluster algebras to that of Lie algebras, using Dynkin graphs, i.e., arbitrary orientations of Dynkin diagrams.
\begin{theo}[Classification \cite{FZ2}] The cluster algebra $\mathcal{A}(\mathcal{Q})$ has finitely many cluster variables if and only if the initial graph $\mathcal{Q}$ is mutation-equivalent to a Dynkin graph of type $A,D,E$. \end{theo}
A more general definition of cluster algebras allows one to include all the Dynkin types in the classification; we do not need such a general construction here.
The second result establishes a surprising phenomenon of simplification in the expressions of cluster variables. \begin{theo}[Laurent Phenomenon \cite{FZ1}] Every cluster variable can be expressed as a Laurent polynomial with integer coefficients in the variables of any given cluster. \end{theo}
\begin{conj}[Positivity \cite{FZ1}] The Laurent polynomials in the above theorem have positive integer coefficients. \end{conj}
The positivity conjecture has been proved in several cases. In particular, it has been proved\footnote{I am grateful to the anonymous referee for providing me with the reference.} by Nakajima \cite{Nak} in the case where the graph is bipartite. We will be interested in the cluster algebra associated to the quiver \eqref{Graph} (see below) which is a bipartite graph. So, in our case, the positivity conjecture is a theorem; we will need this statement in the proof of Lemma \ref{infclasse}.
\subsection{Algebra of functions on the closed 2-frieze }
We denote by $\mathcal{A}_m$ the subalgebra of $\mathbb{C}(x_1, \ldots, x_{2m})$ generated by the entries of the frieze $F(x_1, \ldots, x_{2m})$, defined by \eqref{ClustF}.
\begin{theo}[\cite{MOT}] The algebra $\mathcal{A}_m$ associated with the 2-frieze $F(x_1, \ldots, x_{2m})$ is a subalgebra of the cluster algebra $\mathcal{A}( \mathcal{Q}_m)$, where $\mathcal{Q}_m$ is the following oriented graph \begin{equation} \label{Graph} \xymatrix{ &1\ar@{->}[r] &2\ar@{<-}[r]\ar@{->}[d] &3\ar@{->}[r] &\cdots &\cdots \ar@{<-}[r] &m-1\ar@{->}[r] &m\ar@{->}[d]\\ & m+1\ar@{<-}[r]\ar@{->}[u] &m+2\ar@{->}[r] &m+3\ar@{->}[u]\ar@{<-}[r] &\cdots&\cdots \ar@{->}[r] &2m-1\ar@{->}[u]\ar@{<-}[r] &2m\\ } \end{equation} Moreover, the set of variables contained in two consecutive columns of the 2-frieze $F(x_1, \ldots, x_{2m})$ is a cluster in $\mathcal{A}( \mathcal{Q}_m)$. \end{theo}
Note that the orientation of the last square in \eqref{Graph} depends on the parity of $m$. Note also that $\mathcal{Q}_m$ is the Cartesian product of two Dynkin graphs: $\mathcal{Q}_m=A_2\times A_m$.
\begin{rem} It is well known that the graph $\mathcal{Q}_m$ is mutation equivalent to the Dynkin graph of type $D_4, E_6, E_8$ for $m=2,3,4$ respectively. For $m\geq 5$, the graph $\mathcal{Q}_m$ is not mutation equivalent to any Dynkin graph. Therefore, the number of cluster variables in $\mathcal{A}( \mathcal{Q}_m)$ is infinite for $m\geq{}5$. \end{rem}
\begin{rem}\label{bipart} (i) The algebra $\mathcal{A}_m$ is related to what Fomin and Zelevinsky \cite{FZ4} called the bipartite belt. Indeed, the graph $\mathcal{Q}_m$ is bipartite, i.e. one can associate a sign $\varepsilon(i)=\pm$ to each vertex $i$ of the graph so that any two connected vertices in $\mathcal{Q}_m$ have different signs. Let us assume that $\varepsilon(1)=+$ (this determines automatically all the signs of the vertices).
Consider the iterated mutations $$ \mu_+=\prod_{i: \varepsilon(i)=+}\;\mu_i, \qquad \mu_-=\prod_{i:\varepsilon(i)=-}\;\mu_{i}. $$ Note that the mutations $\mu_i$, for a fixed value of $\varepsilon(i)$, commute with each other, and therefore $\mu_+$ and $\mu_-$ are involutions.
One can easily check that the result of the mutation of the graph \eqref{Graph} by $\mu_+$ and $\mu_-$ is the same graph with reversed orientation: $$ \mu_+(\mathcal{Q}_m)=\mathcal{Q}_m^{\hbox{op}}, \qquad \mu_-(\mathcal{Q}_m^{\hbox{op}})=\mathcal{Q}_m. $$
Consider the seeds of $\mathcal{A}(\mathcal{Q}_m)$ obtained from $\Sigma_0$ by applying successively $\mu_+$ or $\mu_{-}$: \begin{equation*} \Sigma_0,\quad \mu_+(\Sigma_0),\quad \mu_-\mu_+(\Sigma_0),\quad \ldots, \quad \mu_{\pm}\mu_{\mp}\cdots \mu_-\mu_+(\Sigma_0),\quad \ldots \end{equation*} The cluster variables in each of the above seeds correspond precisely to two consecutive columns in the 2-frieze pattern \eqref{ClustF}. This set of seeds is called the bipartite belt of $\mathcal{A}(\mathcal{Q}_m)$, see \cite{FZ4}.
(ii) Entries of a 2-frieze $F(x_1, \ldots, x_{2m})$ are the cluster variables in $\mathcal{A}(\mathcal{Q}_m)$ that can be obtained from the initial seed by applying sequences of $\mu_+$ and $\mu_{-}$. It is not known how to characterize the cluster variables of $\mathcal{A}(\mathcal{Q}_m)$ that do not appear in the 2-frieze.
\end{rem}
\subsection{Counting integral 2-friezes}
We are now interested in the integral closed 2-friezes, i.e. arrays as \eqref{closedfr} in which the entries $v_{i,j}$ are positive integers. Due to periodicity, see Proposition~\ref{perio}, we represent a closed 2-frieze by a fundamental fragment of size $2n\times(n-4)$ (we will always choose the fragment whose entries in the first row are $v_{0,0},\ldots, v_{n-\half, n-\half}$). Repeating the same fragment infinitely many times to the left and right of the initial one leads to the complete infinite 2-frieze.
It is clear that two different fragments can produce the same infinite frieze. For instance, permuting cyclically the columns of a fragment gives another fragment that produces the same infinite frieze. Also, rewriting a fragment from right to left leads to another fragment of a well-defined infinite frieze.
Define the following two operations on a fragment \begin{equation}\label{tausig} \begin{array}{lcl} \tau \;\cdot \;
\begin{matrix}
1&1&\cdots&1&1 \\
a_1&a_2&\cdots&a_{2n-1}&a_{2n}\\
b_1&b_2&\cdots&b_{2n-1}&b_{2n}\\
\vdots& \vdots& &\vdots& \vdots\\
1&1&\cdots&1&1 \\[4pt] \end{matrix}
&=&
\begin{matrix}
1&1&\cdots&1&1 \\
a_2&a_3&\cdots&a_{2n}&a_{1}\\
b_2&b_3&\cdots&b_{2n}&b_{1}\\
\vdots& \vdots& &\vdots& \vdots\\
1&1&\cdots&1&1 \\[4pt] \end{matrix} \\[40pt] \sigma\; \cdot
\begin{matrix}
1&1&\cdots&1&1 \\
a_1&a_2&\cdots&a_{2n-1}&a_{2n}\\
b_1&b_2&\cdots&b_{2n-1}&b_{2n}\\
\vdots& \vdots& &\vdots& \vdots\\
1&1&\cdots&1&1 \\[4pt] \end{matrix} &=&
\begin{matrix}
1&1&\cdots&1&1 \\
a_{1}&a_{2n}&\cdots&a_3&a_2\\ b_{1} &b_{2n}&\cdots&b_3&b_2\\
\vdots& \vdots& &\vdots& \vdots\\
1&1&\cdots&1&1 \\[4pt] \end{matrix} \\ \end{array} \end{equation} Using the definition of closed 2-friezes as maps $v:(i,j)\mapsto{}v_{i,j}$, one has $$ \tau\cdot{}v:(i,j)\mapsto{}v_{i+\half,j+\half}, \qquad \sigma\cdot{}v:(i,j)\mapsto{}v_{-j,-i}\,. $$
\begin{prop} The operations $\tau$ and $\sigma$ generate an action of the dihedral group of order $4n$ on the set of fragments of integral closed 2-friezes of width $n-4$. \end{prop}
\begin{proof} One checks the relations $\sigma\tau\sigma=\tau^{-1}$ and $\sigma^2=\tau^{2n}=\textup{Id}$. \end{proof}
In the sequel we are interested in the problem of counting fragments of integral closed 2-friezes of a given width.
\begin{ex} \label{notsurj} It was proved in \cite{MOT} that the following five fragments of 2-friezes produce all the integral 2-friezes of width 2 (modulo the action of the dihedral group). \begin{equation} \label{SixThree} \begin{array}{rrrrrrrrrrrrr} 1&1&1&1&1&1&1&1&1&1&1&1\\ 1&1&2&4&4&2&1&1&2&4&4&2\\ 1&1&2&4&4&2&1&1&2&4&4&2\\ 1&1&1&1&1&1&1&1&1&1&1&1 \end{array} \end{equation} \begin{equation} \label{SixFour} \begin{array}{rrrrrrrrrrrrr} 1&1&1&1&1&1&1&1&1&1&1&1\\ 1&1&3&6&3&1&1&2&3&3&3&2\\ 1&2&3&3&3&2&1&1&3&6&3&1\\ 1&1&1&1&1&1&1&1&1&1&1&1 \end{array} \end{equation} \begin{equation} \label{SixFive} \begin{array}{rrrrrrrrrrrrr} 1&1&1&1&1&1&1&1&1&1&1&1\\ 1&1&4&6&2&1 & 2&3&2&2&4&3 \\ 2&3&2&2&4&3 &1&1&4&6&2&1\\ 1&1&1&1&1&1&1&1&1&1&1&1 \end{array} \end{equation} \begin{equation} \label{SixTwo} \begin{array}{rrrrrrrrrrrrr} 1&1&1&1&1&1&1&1&1&1&1&1\\ 1&3&5&2&1&3&5&2&1&3&5&2\\ 5&2&1&3&5&2&1&3&5&2&1&3\\ 1&1&1&1&1&1&1&1&1&1&1&1 \end{array} \end{equation} \begin{equation} \begin{array}{rrrrrrrrrrrrr} 1&1&1&1&1&1&1&1&1&1&1&1\\ 2&2&2&2&2&2&2&2&2&2&2&2\\ 2&2&2&2&2&2&2&2&2&2&2&2\\ 1&1&1&1&1&1&1&1&1&1&1&1 \end{array}\label{SixOne} \end{equation} The action of the dihedral group on the above fragments leads to 51 different fragments: the orbits of the fragments \eqref{SixThree}-\eqref{SixOne} contain 6, 12, 24, 8 and 1 elements, respectively. \end{ex}
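The dihedral relations of the proposition above, as well as the orbit sizes just listed, can be checked mechanically. In the Python sketch below (an illustration, not part of the argument), a fragment is stored as the tuple of its interior rows over one period of $2n=12$ columns; $\tau$ is a cyclic shift of the columns and $\sigma$ fixes the first column and reverses the remaining ones, as in \eqref{tausig}:

```python
def tau(frag):
    """Cyclic shift of the columns of a fragment (operation tau)."""
    return tuple(row[1:] + row[:1] for row in frag)

def sigma(frag):
    """Keep the first column, reverse the order of the remaining ones (operation sigma)."""
    return tuple(row[:1] + row[:0:-1] for row in frag)

def orbit(frag):
    """Orbit of a fragment under the group generated by tau and sigma."""
    seen, todo = {frag}, [frag]
    while todo:
        f = todo.pop()
        for g in (tau(f), sigma(f)):
            if g not in seen:
                seen.add(g)
                todo.append(g)
    return seen

# Relations sigma^2 = tau^{2n} = Id and sigma tau sigma = tau^{-1},
# checked on a fragment with pairwise distinct entries (n = 6, 2n = 12 columns).
f = (tuple(range(12)), tuple(range(12, 24)))
assert sigma(sigma(f)) == f
g = f
for _ in range(12):
    g = tau(g)
assert g == f
tau_inv = tuple(row[-1:] + row[:-1] for row in f)
assert sigma(tau(sigma(f))) == tau_inv

# Interior rows of the five fragments of width 2; orbit sizes 6, 12, 24, 8, 1.
fragments = [
    ((1, 1, 2, 4, 4, 2, 1, 1, 2, 4, 4, 2), (1, 1, 2, 4, 4, 2, 1, 1, 2, 4, 4, 2)),
    ((1, 1, 3, 6, 3, 1, 1, 2, 3, 3, 3, 2), (1, 2, 3, 3, 3, 2, 1, 1, 3, 6, 3, 1)),
    ((1, 1, 4, 6, 2, 1, 2, 3, 2, 2, 4, 3), (2, 3, 2, 2, 4, 3, 1, 1, 4, 6, 2, 1)),
    ((1, 3, 5, 2, 1, 3, 5, 2, 1, 3, 5, 2), (5, 2, 1, 3, 5, 2, 1, 3, 5, 2, 1, 3)),
    ((2,) * 12, (2,) * 12),
]
assert [len(orbit(fr)) for fr in fragments] == [6, 12, 24, 8, 1]
```

The total $6+12+24+8+1=51$ recovers the count of fragments stated in the example.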
\subsection{Integral 2-friezes as evaluation of cluster variables}\label{defev}
The easiest way to obtain a closed integral 2-frieze is to evaluate the formal 2-frieze $F( x_1, \ldots, x_{2m})$ by setting all the initial variables $x_i=1$. All the entries in the resulting frieze will be positive integers. Indeed, the Laurent phenomenon ensures that the entries are well defined and integral, and the local rule \eqref{friezerule} ensures that the entries are positive (here we do not need the positivity conjecture).
The above idea can be extended by setting all the variables in an arbitrary cluster to be equal to~1. Given an arbitrary cluster $\mathbf{c}=(c_1,\ldots,c_{2m})$ in $\mathcal{A}(\mathcal{Q}_m)$, every entry in the 2-frieze $F( x_1, \ldots, x_{2m})$ can be expressed as a Laurent polynomial in the $c_i$. This gives a new formal 2-frieze $F(\mathbf{x}(\mathbf{c}))$. Setting $c_i=1$ for all $i$, all the entries of $F(\mathbf{x}(\mathbf{c}))$ become positive integers. Indeed, they are integers since their expressions are Laurent polynomials in $\mathbf{c}$, and they are positive because the expressions are obtained from $\mathbf{c}$ using a sequence of exchange relations that are subtraction free. This procedure defines a map $$ \begin{array}{rcl} \mathbf{ev}: \{ \text{clusters of } \mathcal{A}(\mathcal{Q}_m)\} &\rightarrow& \{\text{fragments of integral 2-friezes of width }m\}\\[6pt]
\mathbf{c} &\mapsto & F(\mathbf{x}(\mathbf{c}))\left|_{\mathbf{c}=(1,\ldots,1)}\right.,\\[4pt] \end{array} $$ where the fragment representing the frieze is chosen starting with the two columns containing the values of $(x_1,x_2,\ldots)$. \begin{defn} Integral friezes produced by the map $\mathbf{ev}$ are called \textit{unitary friezes}. \end{defn}
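As a small sanity check of the map $\mathbf{ev}$, evaluating the first row of the formal width-2 frieze of Example \ref{ClustSix} at the initial cluster $x_1=\cdots=x_4=1$ recovers the fragment \eqref{SixThree}. In the Python snippet below (an illustration only), the rational expressions are copied from Example \ref{ClustSix}:

```python
from fractions import Fraction as F

x1 = x2 = x3 = x4 = F(1)

# First nontrivial row of the formal width-2 frieze of Example ClustSix.
row1 = [x1, x3, (x3 + x2) / x1,
        (x3 + x2) * (x4 + x1) / (x1 * x3 * x4),
        (x1 + x4) * (x2 + x3) / (x2 * x3 * x4),
        (x1 + x4) / x2, x4, x2]

# The evaluation ev at (1,1,1,1) yields the integral frieze of fragment SixThree.
assert row1 == [1, 1, 2, 4, 4, 2, 1, 1]
```

The second row of the formal frieze is obtained from the first by exchanging $x_i \leftrightarrow x_{i+2}$, so it evaluates to the same integer sequence, in agreement with \eqref{SixThree}.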
\begin{rem} The map $\mathbf{ev}$ is not necessarily surjective. For instance, in the case of $2$-friezes of width 2,
there are exactly 51 fragments, see Example \ref{notsurj},
but the corresponding cluster algebra is of Dynkin type $D_4$ which is known to have 50 clusters.
One therefore deduces that at least one fragment is not a unitary frieze. \end{rem}
\begin{lem}\label{unitdihed} If a fragment of $2$-frieze is in the image of $\mathbf{ev}$, then all the fragments obtained under the action of the dihedral group are also in the image of $\mathbf{ev}$. \end{lem}
\begin{proof} Let us consider a fragment $\mathbf{ev}(\mathbf{c})$, for some cluster $\mathbf{c}=\mu_{i_1}\cdots \mu_{i_k}(\mathbf{x})$. To describe the action $\sigma$ we introduce the sequence of mutations
$\mu_+:=\mu_{m+1}\mu_2\mu_{m+3}\mu_4\cdots$ (note that $\mu_+^2=\textup{Id}$, see Remark \ref{bipart}). One has $$
\sigma\cdot \mathbf{ev}(\mathbf{c})= \mathbf{ev}(\mu_{{i_1}}\cdots \mu_{{i_k}}\mu_+\mathbf{x}). $$ The action that gives the fragment with the first two columns in the reverse order,
i.e. the action of $\sigma\tau $, is easily obtained by reversing the roles of the indices $i\leftrightarrow i+m$. Hence, $$
\sigma\tau\cdot \mathbf{ev}(\mathbf{c})= \mathbf{ev}(\mu_{\overline{i_1}}\cdots \mu_{\overline{i_k}}\mathbf{x}), $$ where we use the notation $\bar{i}:=i+m \mod 2m$. One finally deduces $$ \tau\cdot \mathbf{ev}(\mathbf{c})= \mathbf{ev}(\mu_{\overline{i_1}}\cdots \mu_{\overline{i_k}}\mu_{+}\mathbf{x}). $$
\end{proof}
\begin{prop}\label{unitfrieze} The integral 2-friezes \eqref{SixThree}-\eqref{SixTwo} are unitary friezes. Under the action of the dihedral group, the friezes \eqref{SixThree}-\eqref{SixTwo} produce 50 different friezes coming from the evaluation of the 50 clusters of the cluster algebra $\mathcal{A}(\mathcal{Q}_2)\simeq \mathcal{A}(D_4)$. Consequently, \eqref{SixOne} is not a unitary frieze. \end{prop}
\begin{proof} The formal frieze of width 2 is displayed in Example \ref{ClustSix}. We construct the fragments of friezes \eqref{SixThree}-\eqref{SixTwo} as images of the map $\mathbf{ev}$ (with the convention that the first two columns of the fragment contain the initial cluster variables $(x_1,\ldots, x_{4})$).
The fragment of frieze \eqref{SixThree} is realized as $\mathbf{ev}(\mathbf{x})$ where $\mathbf{x}=(x_1,\ldots, x_{4})$ is the initial cluster.
The fragment of frieze \eqref{SixFour} is easily obtained as $\mathbf{ev}(\mathbf{c})$ where $\mathbf{c}=\mu_2(\mathbf{x})$.
The fragment of frieze \eqref{SixFive} is realized as $\mathbf{ev}(\mathbf{d})$ where $\mathbf{d}$ is obtained from the initial cluster by performing the mutations $\mu_2\mu_4$. Indeed, one can check this sequence of mutations transforms the initial seed $(\mathbf{x},\mathcal{Q}_2)$ into $$ \xymatrix{& &1\ar@{<-}[rd] &2\ar@{->}[d] \\ & (d_1,d_2,d_3, d_{4})\;, &3\ar@{->}[r] &4 } $$ where $$ \left\{ \begin{array}{rcl} d_1&=&x_1,\\[10pt] d_2&=&\dfrac{x_1+x_4}{x_2},\\[10pt] d_3&=&x_3,\\[10pt] d_4&=&\dfrac{x_1x_2+x_1x_3+x_3x_4}{x_2x_4}. \end{array} \right. $$ One then sees $$ (d_1,d_2,d_3, d_{4})=(1,1,1,1)\Longleftrightarrow (x_1,x_2,x_3, x_{4})=(1,3,1,2). $$
The fragment of frieze \eqref{SixTwo} is realized as $\mathbf{ev}(\mathbf{e})$ where $\mathbf{e}$ is obtained from the initial cluster by applying the mutations $\mu_4\mu_2\mu_3\mu_4\mu_2$. Indeed, one can check this sequence of mutations transforms the initial seed $(\mathbf{x},\mathcal{Q}_2)$ into $$ \xymatrix{& &1\ar@{->}[rd] &2\ar@{->}[d] \\ & (e_1,e_2,e_3, e_{4})\;, &3\ar@{->}[r] &4 } $$ where $$ \left\{ \begin{array}{rcl} e_1&=&x_1,\\[10pt] e_2&=&\dfrac{x_2+x_3}{x_4},\\[10pt] e_3&=&\dfrac{x_1x_2+x_1x_3+x_2x_4+x_3x_4}{x_2x_3x_4},\\[10pt] e_4&=&\dfrac{x_1x_2+x_1x_3+x_2x_4}{x_3x_4}. \end{array} \right. $$ One can then check that $$ (e_1,e_2,e_3, e_{4})=(1,1,1,1)\Longleftrightarrow (x_1,x_2,x_3, x_{4})=(1,2,3,5). $$
By Lemma \ref{unitdihed}, one deduces that the 50 fragments produced by \eqref{SixThree}-\eqref{SixTwo} under the action of the dihedral group are all realized as evaluations of a cluster at $(1,\ldots,1)$. Since in this case there are exactly 50 clusters, the remaining fragment \eqref{SixOne} cannot be realized this way. This can also be checked by hand: evaluating the initial cluster at $(2,2,2,2)$ and performing a sequence of mutations in all the possible directions will always lead to two different sets, $\{2,2,2,2\}$ and $\{2,2,2,3\}$, for the values of the cluster variables. \end{proof}
\begin{rem}\label{width3} For the 2-friezes of width $m=3$, the associated cluster algebra is of type $A_2\times A_3\simeq E_6$. In this type there are 833 clusters, but we have already found 868 friezes, so we expect at least 35 non-unitary friezes. \end{rem}
\subsection{Proof of Theorem \ref{inffrieze}}\label{proofth1}
In this section we fix $m>4$. We denote by $\mathcal{C}$ the set of all clusters in the cluster algebra $\mathcal{A}(\mathcal{Q}_m)$. It is known that $\mathcal{C}$ is infinite. We define the following relation on $\mathcal{C}$: $$ \mathbf{c}\sim \mathbf{d} \text{ if and only if } c_1=\cdots=c_{2m}=1 \text{ implies } d_1=\cdots=d_{2m}=1, $$ where the $c_i$ and $d_i$ are the variables in the clusters $\mathbf{c}$ and $\mathbf{d}$ respectively.
\begin{lem}\label{Pfequiv} The relation $\sim$ is an equivalence relation on $\mathcal{C}$. \end{lem}
\begin{proof} It is not clear from the definition that $\sim$ is symmetric. Given two clusters $\mathbf{c}$ and $\mathbf{d}$, there exist two $2m$-tuples of Laurent polynomials in $2m$ variables $P_{\mathbf{c},\mathbf{d}}=(P_1,\cdots, P_{2m})$ and $P_{\mathbf{d},\mathbf{c}}=(Q_1,\cdots, Q_{2m})$ such that $$ \mathbf{c}=P_{\mathbf{c},\mathbf{d}}(\mathbf{d})=(P_1(\mathbf{d}),\cdots, P_{2m}(\mathbf{d}))\; \text{ and } \;\mathbf{d}=P_{\mathbf{d},\mathbf{c}}(\mathbf{c})=(Q_1(\mathbf{c}),\cdots, Q_{2m}(\mathbf{c})). $$ The transition maps $P_{\mathbf{c},\mathbf{d}}$ and $P_{\mathbf{d},\mathbf{c}}$ define two bijections, inverse to each other, from $(\mathbb{C}^*)^{2m}$ to $(\mathbb{C}^*)^{2m}$. Therefore, $$ (1,\ldots,1)=P_{\mathbf{c},\mathbf{d}}(1,\ldots,1)\Longleftrightarrow (1,\ldots,1)=P_{\mathbf{d},\mathbf{c}}(1,\ldots,1). $$ This proves the symmetry of the relation. \end{proof}
We denote by $\mathcal{C}/\!\sim$ the set of all equivalence classes, and we denote by $\bar{\mathbf{c}}$ the equivalence class of an element $\mathbf{c} \in \mathcal{C}$.
\begin{lem}\label{bev} The following map is well defined and injective: $$ \begin{array}{cccl} \overline{\mathbf{ev}}: &\mathcal{C}/\!\sim&\longrightarrow& \{\text{fragments of integral 2-friezes of width }m\}\\[6pt] &\bar{\mathbf{c}}&\mapsto& \mathbf{ev}(\mathbf{c}) \end{array} $$ where $\mathbf{ev}$ is the function introduced in Section \ref{defev}. \end{lem}
\begin{proof} We need to prove that $\mathbf{ev}(\mathbf{c})=\mathbf{ev}(\mathbf{d})$ if and only if $\mathbf{c}\sim \mathbf{d}$. This can be done, as in the proof of Lemma \ref{Pfequiv}, using the transition functions $P_{\mathbf{c},\mathbf{d}}$, $P_{\mathbf{c},\mathbf{x}}$, $P_{\mathbf{x},\mathbf{d}}$ between the different clusters, together with their inverses. \begin{eqnarray*} \mathbf{ev}(\mathbf{c})=\mathbf{ev}(\mathbf{d}) &\Longleftrightarrow&P_{\mathbf{x},\mathbf{c}}(1,\ldots,1)=P_{\mathbf{x},\mathbf{d}}(1,\ldots,1)\\ &\Longleftrightarrow&(1,\ldots,1)=P_{\mathbf{c},\mathbf{x}}P_{\mathbf{x},\mathbf{d}}(1,\ldots,1)\\ &\Longleftrightarrow&(1,\ldots,1)=P_{\mathbf{c},\mathbf{d}}(1,\ldots,1)\\ &\Longleftrightarrow&\mathbf{c}\sim\mathbf{d} \end{eqnarray*} \end{proof}
To complete the proof of Theorem \ref{inffrieze}, we show that the set $\mathcal{C}/\!\sim$ is infinite.
\begin{lem}\label{infclasse} If $\mathbf{c}\sim\mathbf{d}$ then $\mathbf{c}=(c_1,\ldots,c_{2m})$ is a permutation of $\mathbf{d}=(d_1,\ldots,d_{2m})$. \end{lem}
\begin{proof} Given two clusters $\mathbf{c}=(c_1,\ldots,c_{2m})$ and $\mathbf{d}=(d_1,\ldots,d_{2m})$, one can express every variable in one of the clusters as a Laurent polynomial with positive integer coefficients in the variables of the other cluster (thanks to Nakajima's results \cite{Nak}). The equivalence $\mathbf{c}\sim \mathbf{d}$ implies that the expressions are actually unitary Laurent monomials. Write for instance $c_i=\prod_{1\leq j\leq 2m}d_j^{k_j}$ with $k_j\in \mathbb{Z}$.
If the exponent $k_{j}$ is negative, then the expansion of $c_i$ in the variables of $\mu_j(\mathbf{d})$ is not a Laurent polynomial. Therefore, one deduces that the expressions of the $c_i$'s are just monomials (not Laurent) in the variables of $\mathbf{d}$, and by symmetry, the $d_i$'s are also monomials in the variables of $\mathbf{c}$. This happens if and only if the set of variables in $\mathbf{c}$ is the same as the set of variables in~$\mathbf{d}$. \end{proof}
By Lemma \ref{infclasse}, there are only finitely many clusters in a given equivalence class $\bar{\mathbf{c}}$ of $\mathcal{C}/\sim$. Since $\mathcal{C}$ is infinite, one deduces that there are infinitely many classes in $\mathcal{C}/\sim$. The injective map $\overline{\mathbf{ev}}$ of Lemma~\ref{bev} produces infinitely many integral 2-friezes.
Theorem \ref{inffrieze} is proved.
\section{Cutting and gluing $2$-friezes}\label{OpSect}
\subsection{The algebraic operations}
The first operation that we describe on the $2$-friezes is equivalent to the connected sum defined in \cite{MOT}. It produces a new integral closed 2-frieze starting from two smaller 2-friezes.
\begin{thm} \label{GluThm} Given two integral closed 2-friezes, of widths $m$ and $\ell$, the following gluing of two columns, one on top of the other, over the pair $1\;1$ \begin{equation}\label{glucol} \begin{array}{rccccl} \cdots&1 & 1 & 1 &1&\cdots\\
&&\circ&\circ&&\\ & &\vdots & \vdots & &\\ & &\vdots & \vdots & &\\ & & \circ & \circ & &\\ \cdots&1 & \mathbf{1} & \mathbf{1} &1& \cdots \end{array}, \begin{array}{rccccl} \cdots&1 & \mathbf{1} & \mathbf{1} &1&\cdots\\
&&\bullet &\bullet&&\\ & &\vdots & \vdots & &\\ & & \bullet & \bullet& &\\ \cdots&1 & 1 & 1 &1& \cdots \end{array} \qquad\longmapsto \begin{array}{rccccl} \cdots&1 & 1 & 1 &1&\cdots\\
&&\circ&\circ&&\\ & &\vdots & \vdots & &\\ & &\vdots & \vdots & &\\ & & \circ & \circ & &\\ & & \mathbf{1} & \mathbf{1} &&\\
&&\bullet &\bullet&&\\ & &\vdots & \vdots & &\\ & & \bullet & \bullet& &\\ \cdots&1 & 1 & 1 &1& \cdots \end{array} \end{equation} leads to a new integral closed 2-frieze of width $m+\ell+1$. \end{thm}
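A small sketch illustrating the theorem in the smallest case, under the assumption that the glued columns are taken at matching positions as in \eqref{glucol}: we glue two copies of the width-1 frieze with row $\ldots,1,1,2,3,2,\ldots$ and propagate the local rule (in its array form: each entry equals the product of its left and right neighbours minus the product of its upper and lower neighbours) to the right. The resulting array closes up into an integral 2-frieze of width $1+1+1=3$:

```python
from fractions import Fraction

def next_column(prev, cur):
    """Propagate the 2-frieze rule v = left*right - up*down one column
    to the right. Columns include the boundary rows of 1's at both ends."""
    new = [Fraction(1)]
    for r in range(1, len(cur) - 1):
        new.append((cur[r] + cur[r - 1] * cur[r + 1]) / prev[r])
    new.append(Fraction(1))
    return new

# Glue matching column pairs (interior entries 2 then 3) of two copies of the
# width-1 frieze ...,1,1,2,3,2,... over the merged pair 1 1: these are the
# seed columns of the candidate width-3 frieze.
one = Fraction(1)
cols = [[one, Fraction(2), one, Fraction(2), one],
        [one, Fraction(3), one, Fraction(3), one]]
for _ in range(16):
    cols.append(next_column(cols[-2], cols[-1]))

# The glued frieze is closed: integral, positive, 2n-periodic with n = 3 + 4.
assert all(v.denominator == 1 and v > 0 for col in cols for v in col)
assert cols[14] == cols[0] and cols[15] == cols[1]
```

The divisions performed during the propagation all turn out to be exact, as predicted by the integrality statement of the theorem.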
The second operation breaks a 2-frieze into a smaller 2-frieze. \begin{thm} \label{CutThm} Cutting above a pair $x,y$ in an integral 2-frieze \begin{equation} \label{BigPat} \begin{array}{rccl} \cdots & 1 & 1 & \cdots\\
&\vdots &\vdots&\\ & x & y &\\
& u & v &\\ &\vdots&\vdots&\\ \cdots & 1 & 1 & \cdots \end{array} \longmapsto \begin{array}{rccl} \cdots & 1 & 1 & \cdots\\
&\vdots&\vdots &\\ & x & y &\\ \cdots & 1 & 1 & \cdots \end{array} \end{equation} gives a new integral 2-frieze if and only if \begin{equation} \label{Cond} u\equiv 1 \mod y, \qquad v\equiv 1\mod x. \qquad \end{equation} \end{thm}
The last operation glues 2-friezes in a more general way than the operation of Theorem \ref{GluThm}. \begin{thm} \label{Gluxy} Gluing two integral friezes, of widths $m$ and $\ell$, over a pair $x,y$, as follows, \begin{equation} \label{glufrixy} \begin{array}{rccl} \cdots & 1 & 1 & \cdots\\
&\vdots &\vdots&\\ & r& s &\\
& x & y &\\ \cdots & 1 & 1 & \cdots \end{array}, \begin{array}{rccl} \cdots & 1 & 1 & \cdots\\ & x & y &\\
& u & v &\\ &\vdots&\vdots&\\ \cdots & 1 & 1 & \cdots \end{array} \longmapsto \begin{array}{rccl} \cdots & 1 & 1 & \cdots\\
&\vdots &\vdots&\\ & r& s &\\
& x & y &\\
& u & v &\\ &\vdots&\vdots&\\ \cdots & 1 & 1 & \cdots \end{array} \end{equation} gives a new 2-frieze of width $m+\ell-1$ if and only if \begin{equation} \label{Cond2} u\equiv r\equiv 1 \mod y, \qquad v\equiv s\equiv 1\mod x. \qquad \end{equation} \end{thm}
Let us mention that conditions \eqref{Cond} and \eqref{Cond2} hold true for $(x,y)=(1,1), (2,1)$ and $(1,2)$ independently of the values of $u,v$, because of the local rule. This allows us to cut or glue a frieze whenever one of these pairs appears in the pattern.
\subsection{Entries in the 2-friezes} Recall that the entries of a 2-frieze are given by two sequences $(v_{i,j})_{(i,j)\in \mathbb{Z}^2}$ and $(v_{i+\half,j+\half})_{(i,j)\in \mathbb{Z}^2}$ (see Figure \ref{vij} in Section \ref{defriz}). In the geometric situation it will be useful to complete the closed 2-friezes by two rows of 0's above and under the frieze.
$$
\begin{matrix}
\cdots &0& 0&0&0&\cdots
\\ \cdots&0& 0&0&0&\cdots
\\ \cdots& 1&1&1&1&\cdots
\\ \ldots&v_{i-\half,i-\half}&v_{i,i}&v_{i+\half,i+\half}& v_{i+1,i+1}&\ldots
\\[4pt]
&v_{i,i-1}&v_{i+\half,i-\half}&v_{i+1,i}&v_{i+\thalf,i+\half}&\\ &\vdots &\vdots &\vdots &\vdots &&\\
\cdots & 1&1&1&1&\cdots \\ \cdots& 0&0&0&0&\cdots
\\ \cdots& 0&0&0&0&\cdots \end{matrix} $$
\subsection{2-friezes and moduli space of polygons}
The space of all 2-friezes (with complex coefficients) is an interesting algebraic variety closely related to the famous moduli space $\mathcal{M}_{0,n}$ of genus-zero algebraic curves with $n$ marked points. This geometric interpretation will be useful in the sequel.
We call \textit{$n$-gon} a cyclically ordered $n$-tuple of points $$ V_1,\ldots,V_n\in\mathbb{C}^3, $$ i.e., we assume $V_{i+n}=V_i$, such that any three consecutive points obey the normalization condition \begin{equation} \label{Normal} \det(V_{i}, V_{i+1}, V_{i+2})=1. \end{equation} Following \cite{MOT}, we consider the space of $n$-gons in $\mathbb{C}^3$ modulo the action of the Lie group $\mathrm{SL}(3,\mathbb{C})$, via $$ \mathcal{C}_n:=\lbrace n\hbox{-gons}\rbrace/\mathrm{SL}(3,\mathbb{C}). $$ This space has a natural structure of algebraic variety.
\begin{prop}\cite{MOT} \label{Isop} The set of closed $2$-friezes over $\mathbb{C}$ of width $m$ is in bijection with $\mathcal{C}_{m+4}$. \end{prop}
Starting from a closed 2-frieze one can construct the $n$-gon $(V_i)$ from any three consecutive diagonals, for instance: $$ V_1=\left( \begin{array}{l} v_{1,j-2}\\[4pt] v_{1,j-1}\\[4pt] v_{1,j} \end{array} \right), \quad V_2= \left( \begin{array}{l} v_{2,j-2}\\[4pt] v_{2,j-1}\\[4pt] v_{2,j}\end{array} \right), \quad \ldots, \quad V_{n}=\left( \begin{array}{l} v_{n,j-2}\\[4pt] v_{n,j-1}\\[4pt] v_{n,j} \end{array} \right). $$ One can then show (cf. \cite{MOT}) that this construction provides the bijection of Proposition~\ref{Isop}. This also exhibits 2-friezes as $\mathrm{SL}_3$-tilings, see \cite{BeRe}.
\begin{ex} Considering the 2-frieze \eqref{SixFive} (completed with two rows of 0s above and below), we obtain the following hexagon $$ \left( \begin{array}{c} 1\\ 0\\ 0 \end{array} \right), \quad \left( \begin{array}{c} 1\\ 1\\ 0 \end{array} \right), \quad \left( \begin{array}{c} 2\\ 6\\ 1 \end{array} \right), \quad \left( \begin{array}{c} 1\\ 4\\ 1 \end{array} \right), \quad \left( \begin{array}{c} 0\\ 1\\ 1 \end{array} \right), \quad \left( \begin{array}{c} 0\\ 0\\ 1 \end{array} \right). $$ \end{ex}
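The normalization condition \eqref{Normal} can be checked directly on this hexagon. The following short sketch (our verification, not part of the original text) confirms that every cyclically consecutive triple of vertices has determinant $1$:

```python
# The hexagon from the example, read as column vectors in C^3 (here: Z^3)
V = [(1, 0, 0), (1, 1, 0), (2, 6, 1), (1, 4, 1), (0, 1, 1), (0, 0, 1)]

def det3(a, b, c):
    # determinant of the 3x3 matrix with columns a, b, c
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - b[0] * (a[1] * c[2] - a[2] * c[1])
            + c[0] * (a[1] * b[2] - a[2] * b[1]))

# normalization det(V_i, V_{i+1}, V_{i+2}) = 1, indices read cyclically
for i in range(6):
    assert det3(V[i], V[(i + 1) % 6], V[(i + 2) % 6]) == 1
```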
Thanks to the above geometric interpretation of 2-friezes, we obtain useful geometric formulas for the entries of a 2-frieze in terms of the corresponding $n$-gon.
\begin{prop}\cite{MOT}\label{Entry} Given a closed $2$-frieze $(v_{i,j})$ of width $m=n-4$ and its corresponding $n$-gon $(V_i)$, the entries of the frieze are given by \begin{equation*} v_{i-\half,j-\half}= \det( V_{i-1}, V_i, V_{j-3} ), \qquad v_{i,j}=\det( V_{j-3}, V_{j-2}, V_i ). \end{equation*} \end{prop}
Note that in the sequel, we simplify the notation by using $|\cdot,\cdot,\cdot|=\det(\cdot,\cdot,\cdot)$.
Finally, we will also need the following linear recurrence relation.
\begin{prop}\cite{MOT} The points of the $n$-gon $(V_i)$ satisfy the following linear recurrence relation: \begin{equation} \label{Recur} V_i=v_{i,i}\,V_{i-1}-v_{i-\half,i-\half}\,V_{i-2}+V_{i-3}. \end{equation} \end{prop}
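As an illustration (ours; the vertex data is the hexagon from the earlier example), the coefficients of the recurrence \eqref{Recur} can be recovered by Cramer's rule: since $\det(V_{i-3},V_{i-2},V_{i-1})=1$ by the normalization, the coefficients are automatically integers, and the coefficient of $V_{i-3}$ is always $1$:

```python
# Hexagon from the earlier example (vectors in Z^3)
V = [(1, 0, 0), (1, 1, 0), (2, 6, 1), (1, 4, 1), (0, 1, 1), (0, 0, 1)]

def det3(a, b, c):
    # determinant of the 3x3 matrix with columns a, b, c
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - b[0] * (a[1] * c[2] - a[2] * c[1])
            + c[0] * (a[1] * b[2] - a[2] * b[1]))

for i in range(6):
    p, q, r, s = V[i - 3], V[i - 2], V[i - 1], V[i]  # negative indices wrap
    # Cramer's rule for s = a*r - b*q + c*p, using det(p, q, r) = 1
    a = det3(p, q, s)     # coefficient of V_{i-1}
    b = -det3(p, s, r)    # minus the coefficient of V_{i-2}
    c = det3(s, q, r)     # coefficient of V_{i-3}
    assert c == 1 and a > 0 and b > 0
    assert all(s[t] == a * r[t] - b * q[t] + c * p[t] for t in range(3))
```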
\subsection{Connecting two 2-friezes} We give examples of the operations of gluing of friezes described in Theorem \ref{GluThm}. Note that this gluing preserves more than the two columns of the initial friezes. Triangular fragments of the old friezes appear in the new frieze (in the array \eqref{frag} the white bullets, resp. black bullets, stand for the initial entries in the top frieze, resp. bottom frieze, that still appear in the new frieze). The gluing can also be described in terms of gluing of diagonals, starting from or ending at the pair $1\;1$, instead of gluing of columns above and below the pair $1\;1$.
\begin{equation}\label{frag} \begin{array}{rccccccccccl} \cdots&1&1&1&1 & 1 & 1 &1&1&1&1&\cdots\\ && \circ & \circ & \circ &\circ&\circ& \circ & \circ& \circ& & \\ &&& \circ & \circ &\circ & \circ & \circ & \circ &&&\\ &&&& \circ & \circ & \circ & \circ & &&\\ &&& && 1 & 1 &&&&&\\
&&&&\bullet&\bullet &\bullet&\bullet&&&&\\ &&&\bullet &\bullet& \bullet & \bullet&\bullet &\bullet&&&\\ \cdots&1&1&1&1 & 1 & 1 &1&1&1& 1&\cdots \end{array} \end{equation}
\begin{ex} (a) The friezes \eqref{SixThree}-\eqref{SixFive} are all obtained as a gluing of the trivial frieze
$$ \begin{array}{cccccccccccccccc} \cdots &1 & 1 & 1 & 1& 1& 1& 1& \cdots\\[6pt] \cdots &1& 1& 1& 1& 1& 1& 1& \cdots \\ \end{array} $$
and the unique frieze of width 1 $$ \begin{array}{cccccccccccccccc} \cdots &1 & 1 & 1 & 1& 1& 1& 1& \cdots\\[6pt] \cdots & 1 & 1& 2& 3& 2&1& 1&\cdots \\[6pt] \cdots &1& 1& 1& 1& 1& 1& 1& \cdots \\ \end{array} $$
(b) The 2-frieze in Figure \ref{ex2frieze} in Introduction, is obtained as the gluing of the frieze of width 2 given in \eqref{SixOne} and the above unique frieze of width 1. This can be viewed for instance as follows, as the gluing of two columns, or equivalently two diagonals $$ \begin{array}{cc} 1&1\\ 2&2\\ 2&2\\ 1&1 \end{array} \,+\, \begin{array}{cc} 1&1\\ 2&3\\ 1&1 \end{array}\quad \text{ or } \quad \begin{array}{ccccc} 1&1&&&\\ &2&2&&\\ &&2&2&\\ &&&1&1 \end{array} + \begin{array}{cccc} 1&1&&\\ &3&2&\\ &&1&1 \end{array} \quad \text{or} \quad \begin{array}{ccccc} &&&1&1\\ &&2&2&\\ &2&2&\\ 1&1&&& \end{array} + \begin{array}{cccc} &&1&1\\ &1&2&\\ 1&1&& \end{array} . $$
\end{ex}
\subsection{Cutting and gluing friezes} We give below an example of Theorem \ref{Gluxy}.
\begin{ex} The pair $(2,1)$ appears in both friezes \eqref{SixFive} and \eqref{SixTwo} of width 2. The operation of Theorem \ref{Gluxy} gives the following new frieze $$ \begin{array}{cccccccccccccccc} 1&1&1&1&1&1&1&1&1&1&1&1&1&1\\ 4&3&1&3&13&5&\mathbf{1}&\mathbf{3}&\mathbf{5}&\mathbf{2}&2&6&4&2\\ 2&1&8&10&2&8&14&\mathbf{2}&\mathbf{1}&8&10&2&8&14\\ 3&5&2&2&6&4&\mathbf{2}&\mathbf{4}&\mathbf{3}&\mathbf{1}&3&13&5&1\\ 1&1&1&1&1&1&1&1&1&1&1&1&1&1 \end{array} $$ \end{ex}
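One can verify directly (our check, assuming the displayed $14$ columns form one full period) that every interior entry of the glued array satisfies the local rule $AD=E+BC$:

```python
# The glued width-3 frieze from the example, one full period of 14 columns
rows = [
    [1] * 14,
    [4, 3, 1, 3, 13, 5, 1, 3, 5, 2, 2, 6, 4, 2],
    [2, 1, 8, 10, 2, 8, 14, 2, 1, 8, 10, 2, 8, 14],
    [3, 5, 2, 2, 6, 4, 2, 4, 3, 1, 3, 13, 5, 1],
    [1] * 14,
]

# local rule A*D = E + B*C at each interior entry E, columns read mod 14
for i in range(1, 4):
    for j in range(14):
        A, D = rows[i][j - 1], rows[i][(j + 1) % 14]
        B, C, E = rows[i - 1][j], rows[i + 1][j], rows[i][j]
        assert A * D == E + B * C
```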
\subsection{Proof of Theorem \ref{GluThm}} The initial two friezes are associated to the polygons $U=(U_1,\ldots , U_n)$, $n=m+4$, and $V=(V_1, \ldots, V_k)$, $k=\ell+4$. The entries $(u_{i,j})$ in the first frieze and the entries $(v_{i,j})$ in the second frieze are given by $$ \begin{array}{rcl}
u_{i,j}&=&|
U_{j-3}, U_{j-2}, U_i |\\[6pt] u_{i+\half,j+\half}&=&
|U_{j-2}, U_{i}, U_{i+1}| \end{array}, \qquad \begin{array}{rcl}
v_{i,j}&=&|
V_{j-3}, V_{j-2}, V_i |\\[6pt] v_{i+\half,j+\half}&=&
|V_{j-2}, V_{i}, V_{i+1}| \end{array} $$ We assume that the pair $1\; 1$ where the friezes are connected corresponds to the entries $u_{4,n},\; u_{4+\half, n+\half}$ of the first frieze and to $v_{4,4}, \; v_{4+\half, 4+\half}$ in the second. Define $$
U'_1:=U_1+|V_1,V_3,V_4| U_2 $$ Since the polygons are defined up to the action of $\mathrm{SL}(3,\mathbb{C})$, one can assume that the following two sequences of consecutive vertices are the same: $$V_{1}=U'_1,\;V_{2}=U_{2},\;V_3=U_n.$$ One considers the $(n+k-3)$-gon $(W_{i})$ consisting of $$ W_1=U'_1, \;W_2=U_2, \;\ldots, W_{n-1}=U_{n-1},\; W_n=V_3, \;W_{n+1}=V_4,\ldots, W_{n+k-3}=V_{k},$$ see Figure \ref{glu11}. \begin{figure}
\caption{Geometric situation when gluing two columns of friezes over the pair $1\;1$}
\label{glu11}
\end{figure}
The change from $U_1$ to $U'_1$ has been made in order to have
$$1=|U_{n-1},V_3,V_4 |=|W_{n-1},W_n,W_{n+1}|.$$ Thus, any three consecutive vertices of the polygon $(W_i)$ form a matrix of determinant 1. The 2-frieze associated to $(W_{i})$ is exactly the glued frieze obtained in Theorem \ref{GluThm} (the pair $1 1$ corresponds now to
the entries $w_{4,n}=|W_1,W_2,W_n|$ and $w_{4+\half,n+\half}=|W_2,W_n, W_{n+1}|$).
It remains to show that the entries are actually positive integers.
Since we already have two consecutive columns with positive entries, the positivity of the entire frieze $(w_{i,j})$ is guaranteed by the local rule: $$
\begin{array}{ccccccc}
*&B&*\\
A&E&D\\
*&C&* \end{array}\quad \Longrightarrow \quad D=(E+BC)/A. $$ The vertices in $(V_i)$ and $(U_i)$ satisfy recurrence relations of the form $$ V_{i}=a_iV_{i-1}-b_iV_{i-2}+V_{i-3},\;\; U_{i}=c_iU_{i-1}-d_iU_{i-2}+U_{i-3}, $$ where $a_i,b_i,c_i,d_i$ are integers. Thus, by induction each vertex $V_i$ is a linear combination with integer coefficients of the first three points $V_1,V_2,V_3$. And similarly, each vertex $U_i$ is a linear combination with integer coefficients of the three points $U_n,U_1,U_2$ and therefore of the three points $U_n,U'_1,U_2$, which are the same as $V_3,V_1,V_2$. One deduces that each vertex of $(W_i)$ is a linear combination of $V_1,V_2,V_3$ with integer coefficients. It follows, using the determinantal formulas of Proposition \ref{Entry}, that the entries in the frieze associated to $(W_i)$ are all integers.
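To illustrate the propagation argument on the smallest case: for the width-$1$ frieze, the local rule together with the bordering rows of $1$'s reduces to $u_{j+1}=(u_j+1)/u_{j-1}$ on the middle row. A short sketch (ours) shows that starting from the positive pair $1,\,1$ every entry stays a positive integer and the row is $5$-periodic, in agreement with the unique width-$1$ frieze displayed earlier:

```python
# middle row of the width-1 frieze, propagated by u_{j+1} = (u_j + 1)/u_{j-1}
seq = [1, 1]
for _ in range(10):
    nxt, rem = divmod(seq[-1] + 1, seq[-2])
    assert rem == 0          # integrality at every step
    seq.append(nxt)

assert all(v > 0 for v in seq)                        # positivity
assert seq[:7] == [1, 1, 2, 3, 2, 1, 1]               # matches the frieze
assert all(seq[j] == seq[j + 5] for j in range(len(seq) - 5))  # 5-periodic
```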
\subsection{Proof of Theorem \ref{CutThm}} The initial 2-frieze pattern \eqref{BigPat} corresponds to some $n$-gon $V=(V_1,\ldots,V_n)$ in $\mathbb{R}^3$. The entries of the 2-frieze are given by: $$ v_{i-\half,j-\half}=
\left| V_{i-1}, V_i, V_{j-3}
\right|,
\qquad v_{i,j}=\left| V_{j-3}, V_{j-2}, V_i
\right|, $$ for integer $i,j\leq n-4$. We fix $i,j$ such that $v_{i-\half,j-\half}=x$, $v_{i,j}=y$, $v_{i-\half,j-1}=u$, $v_{i+\half,j-\half}=v$.
Let us show that the condition \eqref{Cond} is necessary and sufficient for the existence of a point $W$, such that the polygon \begin{equation}\label{gonW} V_1,V_2,\ldots,V_i,W,V_{j-3},V_{j-2},\ldots,V_n \end{equation} defines a positive integer 2-frieze pattern. As before, the positivity of the 2-frieze is guaranteed by the positivity of two consecutive columns. \begin{figure}
\caption{Cutting a 2-frieze above $x,y$.}
\label{FirstFig}
\end{figure}
We can express the point $W$ as a linear combination of three consecutive points: $$ W=aV_i-bV_{i-1}+cV_{i-2}. $$ The sequence \eqref{gonW} defines a closed 2-frieze if and only if $a,b,c$ are positive integers and $$
\left| V_{i-1}, V_i, W
\right|=
\left| V_i, W, V_{j-3}
\right|=
\left| W, V_{j-3}, V_{j-2}
\right|=1. $$ The first condition gives immediately $c=1$. The second determinant can be written using the recurrence relation $V_{i+1}=a_{i+1}V_i-b_{i+1}V_{i-1}+V_{i-2}$, $$
\left| V_i, W, V_{j-3}
\right|=|V_i\,,\,-bV_{i-1}+V_{i-2}\,,\,V_{j-3}|=bx+|V_i, b_{i+1}V_{i-1}+V_{i+1},V_{j-3}|=bx-b_{i+1}x+v. $$ Hence, the condition $
\left| V_i, W, V_{j-3}
\right|=1$ leads to $$ b=b_{i+1}+\frac{1-v}{x}. $$ So, $v\equiv 1 \mod x$ is a necessary and sufficient condition. By symmetry we deduce similarly $u\equiv 1 \mod y$.
\subsection{Proof of Theorem \ref{Gluxy}} The condition \eqref{Cond2} is necessary because of Theorem \ref{CutThm}. Let us prove that \eqref{Cond2} is also sufficient. The two initial friezes are associated to polygons, the $n$-gon $(U_i)$, $n=m+4$, and the $k$-gon $(V_i)$, $k=\ell+4$. Assume that the entries appearing in the two friezes are given by \begin{equation}\label{rsxyuv} \begin{array}{rcl} r&=&
|U_{2}, U_{n-2}, U_{n-1} |\\[6pt] s&=&
|U_{2}, U_{3}, U_{n-1}|\\[6pt]
x&=&|
U_{1}, U_{2}, U_{n-1} |\\[6pt] y&=&
|U_{2}, U_{n-1}, U_{n}| \end{array}, \qquad \begin{array}{rcl} x&=&
|V_1,V_2,V_4 |\\[6pt] y&=&
|V_{2},V_4,V_5|\\[6pt]
u&=&|
V_1,V_4,V_5 |\\[6pt] v&=&
|V_1,V_2,V_5| \end{array} \end{equation} Define $$ U'_1:=U_1+\alpha U_2 $$ where $\alpha$ is given by $u=1+\alpha y$ from \eqref{Cond2}.
\begin{figure}
\caption{Gluing friezes/polygons over $x,y$}
\label{Figxy}
\end{figure}
We are allowed to identify the following vertices $$ V_1=U'_1,\;V_2=U_2,\;V_4=U_{n-1}. $$ Now we consider the $(n+k-5)$-gon $(W_i)$ consisting of $$ W_1=U'_1, W_2=U_2, \ldots, W_{n-2}=U_{n-2}, W_{n-1}=V_4, \ldots, W_{n+k-5}=V_k, $$ see Figure \ref{Figxy}. The choice of $U'_1$ has been made to guarantee $$
1=|U_{n-2},U_{n-1}, V_5|=|W_{n-2}, W_{n-1}, W_{n}| $$ Indeed, using the formula \eqref{rsxyuv} one can express \begin{eqnarray*} V_5&=&\frac{1}{x}\,(vV_4-uV_2+yV_1)\\[6pt] &=&\frac{1}{x}\,(vU_{n-1}+(\alpha y -u)U_2+yU_1) \end{eqnarray*} and then compute \begin{eqnarray*}
|U_{n-2},U_{n-1}, V_5|&=&\frac{\alpha y -u}{x}\,|U_{n-2},U_{n-1},U_2|+\frac{y}{x}\,|U_{n-2},U_{n-1},U_1|\\[6pt] &=& \frac{(\alpha y -u)r}{x}+\frac{y}{x}\frac{x+r}{y}\\[4pt] &=&1, \end{eqnarray*}
where we used the fact that $|U_{n-2},U_{n-1},U_1|$ is equal to $(x+r)/y$, since it is the entry at the left of $x$ in the first frieze.
It follows that any three consecutive vertices of the polygon $(W_i)$ form a matrix of determinant~1. The 2-frieze associated to $(W_{i})$ is exactly the glued frieze obtained in Theorem \ref{Gluxy}.
\section{A classification problem and other open questions}\label{Conc}
We have described several natural procedures to construct integral closed 2-friezes. The first one is based on evaluation of cluster variables in the cluster algebra of type $A_2\times A_m$. The friezes obtained this way are called unitary.
We have proved that for $m\geq5$, there exist infinitely many integral closed 2-friezes with $m$ non-trivial rows. This is due to the fact that in this case the cluster algebra is of infinite type and therefore one can construct infinitely many unitary friezes. We also proved that \textit{not every integral $2$-frieze is unitary}. Proposition \ref{unitfrieze} and Remark \ref{width3} provide examples of non-unitary friezes. The problem of classification of integral closed 2-friezes (see Question \ref{classif}) can now be reformulated:
\begin{ques} How many non-unitary integral $2$-friezes are there for a given width? \end{ques}
It is also natural to ask the following. Are the integral 2-friezes in bijective correspondence with a set of combinatorial objects similar to triangulations? This would give an analog of the property (CC2) mentioned in the introduction.
The second type of constructions introduced in the paper comes from the geometric interpretation of 2-friezes in terms of polygons in the space. It would be interesting to determine whether the set of unitary friezes is stable under the operations \eqref{glucol}, \eqref{BigPat}, \eqref{glufrixy}.
In a more algebraic setting, let us denote by $\mathcal{F}_m$ the set of integral closed 2-friezes of width $m$, and let us consider the vector space over an arbitrary field $K$ with basis $\bigcup_{m\geq 1}\mathcal{F}_m$: $$ \mathbb{A}=\bigoplus_{m\geq 1}K \mathcal{F}_m. $$ The operation \eqref{glucol} gives a structure of associative algebra on $\mathbb{A}$. This algebra is graded by $\deg(F)=m-1$ for $F\in\mathcal{F}_m$. It could be interesting to study the algebra $\mathbb{A}$, for instance to determine generators and relations.
\noindent \textbf{Acknowledgements}. Many ideas and results in the present paper were discussed with Valentin Ovsienko. It is a pleasure to thank the CIRM, which offered us a Recherche en Bin\^ome stay. It is also a pleasure to thank Brown University for its warm hospitality and the chance it offered me to discuss friezes and related topics with V. Ovsienko, R. Schwartz and S. Tabachnikov. My special gratitude goes to V. Fock, who gave the first idea for the proof of Theorem~\ref{inffrieze}, and to the anonymous referee for various valuable comments.
\end{document}
\begin{document}
\textheight= 195 mm \textwidth = 125 mm
\newcommand{{\mathcal V}}{{\mathcal V}} \renewcommand{{\mathcal O}}{{\mathcal O}} \newcommand{\mathcal L}{\mathcal L} \newcommand{\hbox{\rm Ext}}{\hbox{\rm Ext}} \newcommand{\hbox{\rm Tor}}{\hbox{\rm Tor}} \newcommand{\hbox{Hom}}{\hbox{Hom}} \newcommand{\hbox{Proj}}{\hbox{Proj}} \newcommand{\hbox{GrMod}}{\hbox{GrMod}} \newcommand{\hbox{gr-mod}}{\hbox{gr-mod}} \newcommand{\hbox{Tors}}{\hbox{Tors}} \newcommand{\hbox{gr}}{\hbox{gr}} \newcommand{\hbox{tors}}{\hbox{tors}} \newcommand{\hbox{rank}}{\hbox{rank}} \newcommand{\hbox{{\rm End}}}{\hbox{{\rm End}}} \newcommand{\hbox{Der}}{\hbox{Der}} \newcommand{\hbox{GKdim}}{\hbox{GKdim}} \newcommand{\hbox{gldim}}{\hbox{gldim}} \newcommand{\hbox{im}}{\hbox{im}} \renewcommand{\hbox{ker}}{\hbox{ker}} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \newcommand{{\rm coker}}{{\rm coker}}
\newcommand{{\protect \longrightarrow\!\!\!\!\!\!\!\!\longrightarrow}}{{\protect \longrightarrow\!\!\!\!\!\!\!\!\longrightarrow}}
\renewcommand{\cancel}{\cancel}
\newcommand{\frak h}{\frak h} \newcommand{\frak p}{\frak p} \newcommand{{\mu}}{{\mu}} \newcommand{\gl}{{\frak g}{\frak l}} \newcommand{\ssl}{{\frak s}{\frak l}} \newcommand{{\rm tw}}{{\rm tw}}
\newcommand{\displaystyle}{\displaystyle} \newcommand{\sigma}{\sigma} \renewcommand{\lambda}{\lambda} \renewcommand{\alpha}{\alpha} \renewcommand{\beta}{\beta} \newcommand{\Gamma}{\Gamma} \newcommand{\gamma}{\gamma} \newcommand{\zeta}{\zeta} \newcommand{\epsilon}{\epsilon} \renewcommand{\delta}{\delta} \newcommand{\rho}{\rho} \renewcommand{\tau}{\tau} \newcommand{\nu}{\nu} \newcommand{\chi}{\chi} \newcommand{\omega}{\omega}
\newcommand{{\Bbb A}}{{\Bbb A}} \newcommand{{\Bbb C}}{{\Bbb C}} \newcommand{{\Bbb N}}{{\Bbb N}} \newcommand{{\Bbb Z}}{{\Bbb Z}} \newcommand{{\Bbb Z}}{{\Bbb Z}} \newcommand{{\Bbb Q}}{{\Bbb Q}} \renewcommand{\mathbb K}{\mathbb K}
\newcommand{{\mathcal E}}{{\mathcal E}} \newcommand{{\mathcal K}}{{\mathcal K}} \renewcommand{{\mathcal S}}{{\mathcal S}} \newcommand{{\mathcal T}}{{\mathcal T}}
\newcommand{{GL}}{{GL}}
\newcommand{(x\ y)}{(x\ y)} \newcommand{\colxy}{ \left({\begin{array}{c} x \\ y \end{array}}\right)} \newcommand{\scolxy}{\left({\begin{smallmatrix} x \\ y \end{smallmatrix}}\right)}
\renewcommand{{\Bbb P}}{{\Bbb P}}
\newcommand{\langle}{\langle} \newcommand{\rangle}{\rangle} \newcommand{\otimes}{\otimes} \newcommand{\tensor}{\otimes} \newcommand{\overline}{\overline}
\newtheorem{thm}{Theorem}[section] \newtheorem{lemma}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition}
\theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{notn}[thm]{Notation} \newtheorem{ex}[thm]{Example} \newtheorem{rmk}[thm]{Remark} \newtheorem{rmks}[thm]{Remarks} \newtheorem{note}[thm]{Note} \newtheorem{example}[thm]{Example} \newtheorem{problem}[thm]{Problem} \newtheorem{ques}[thm]{Question} \newtheorem{thingy}[thm]{}
\newcommand{{\protect \rightarrow\!\!\!\!\!\rightarrow}}{{\protect \rightarrow\!\!\!\!\!\rightarrow}}
\newcommand{\donto}{\put(0,-2){$|$}\put(-1.3,-12){$\downarrow$}{\put(-1.3,-14.5)
{$\downarrow$}}}
\newcounter{letter} \renewcommand{\rom{(}\alph{letter}\rom{)}}{\rom{(}\alph{letter}\rom{)}}
\newenvironment{lcase}{\begin{list}{~~~~\rom{(}\alph{letter}\rom{)}} {\usecounter{letter} \setlength{\labelwidth4ex}{\leftmargin6ex}}}{\end{list}}
\newcounter{rnum} \renewcommand{\rom{(}\roman{rnum}\rom{)}}{\rom{(}\roman{rnum}\rom{)}}
\newenvironment{lnum}{\begin{list}{~~~~\rom{(}\roman{rnum}\rom{)}}{\usecounter{rnum} \setlength{\labelwidth4ex}{\leftmargin6ex}}}{\end{list}}
\thispagestyle{empty}
\title[Classification of certain graded twisted tensor products]{Classification, Koszulity and Artin-Schelter regularity of certain graded twisted tensor products}
\keywords{Koszul algebras, quadratic algebras, twisted tensor products, Artin-Schelter regular algebras}
\author[ Conner, Goetz ]{ }
\subjclass[2010]{16S37, 16S38} \maketitle
\begin{center}
\vskip-.2in Andrew Conner \\
Department of Mathematics and Computer Science\\ Saint Mary's College of California\\ Moraga, CA 94575\\
Peter Goetz \\
Department of Mathematics\\ Humboldt State University\\ Arcata, California 95521 \\ \ \\
\end{center}
\setcounter{page}{1}
\thispagestyle{empty}
\begin{abstract}
Let $\mathbb K$ be an algebraically closed field. We classify all of the quadratic twisted tensor products $A \tensor_{\tau} B$ in the cases where $(A, B) = (\mathbb K[x], \mathbb K[y])$ and $(A, B) = (\mathbb K[x, y], \mathbb K[z])$. We determine when a quadratic twisted tensor product of this form is Koszul, and when it is Artin-Schelter regular. \end{abstract}
\section{Introduction}
Let $\mathbb K$ be an algebraically closed field, and let $A$ and $B$ be associative $\mathbb K$-algebras. In \cite{Cap}, C\v{a}p, Schichl, and Van\v{z}ura introduced a very general notion of product for $A$ and $B$ called a \emph{twisted tensor product}, denoted $A\tensor_{\tau} B$. (For precise definitions of terminology we refer the reader to Section 2.) Efforts to understand how ring-theoretic and homological properties behave with respect to this product have been the source of several recent papers (see, for example, \cite{C-G}, \cite{GKM}, \cite{JPS}, \cite{SZL}, \cite{ShepWi2}). In the article \cite{C-G}, we initiated a detailed study of the Koszul property for graded twisted tensor products. The current paper grew out of our efforts to address two problems left unresolved in \cite{C-G}.
\begin{problem}\label{quad example} If $A$ and $B$ are Koszul algebras and $A\tensor_{\tau} B$ is quadratic, must $A\tensor_{\tau} B$ be a Koszul algebra? \end{problem}
\begin{problem}\label{classification} Classify quadratic twisted tensor products of $\mathbb K[x]$ and $\mathbb K[y]$ up to isomorphism (of twisted tensor products). \end{problem}
Our solutions to these problems are related. By \cite[Proposition 5.5]{C-G}, all quadratic twisted tensor products of $\mathbb K[x]$ and $\mathbb K[y]$ are Koszul. By \cite[Theorem 5.3]{C-G}, the same is true if $\mathbb K[x]$ or $\mathbb K[y]$ is replaced by $\mathbb K[x]/\langle x^2\rangle$ or $\mathbb K[y]/\langle y^2\rangle$. Hence a negative answer to Problem \ref{quad example} would require $A\tensor_{\tau} B$ to have at least three algebra generators. This requirement is the impetus for our study, and subsequent classification, of quadratic twisted tensor products of $\mathbb K[x,y]$ and $\mathbb K[z]$. A significant part of this classification reduces to Problem \ref{classification}.
In \cite[Section 6]{C-G}, we partially settled Problem \ref{classification}, but we were unable to handle one case. Our partial result described possible isomorphism types of $\mathbb K[x]\tensor_{\tau}\mathbb K[y]$ in terms of a one-parameter family of algebras. The parameter could take on any value, except the zeros of an interesting family of polynomials related to the Catalan numbers. The appearance of these polynomials motivated us to completely resolve Problem \ref{classification}, which we do in Section 3. Our first result is the following.
\begin{thm}[Proposition \ref{oneSidedTwoGen} and Theorem \ref{quadratic ttps of polynomial rings of one variable}] \label{introClassification2D} Every quadratic twisted tensor product of $\mathbb K[x]$ and $\mathbb K[y]$ is isomorphic to either $C(a,b,1)=\mathbb K \langle x, y \rangle/\langle yx-ax^2-bxy-y^2 \rangle$ or $C(a,b,0)=\mathbb K \langle x, y \rangle/\langle yx-ax^2-bxy \rangle$ for some $a,b\in \mathbb K$. Moreover, \begin{itemize} \item[(1)] $C(a,b,0)$ is a graded twisted tensor product of $\mathbb K[x]$ and $\mathbb K[y]$ for any $a,b\in\mathbb K$; \item[(2)] $C(a,b,1)$ is a graded twisted tensor product of $\mathbb K[x]$ and $\mathbb K[y]$ if and only if $a, b \in \mathbb K$ satisfy $f_n(a,b) \ne 0$ for all $n \geq 0$, where the $f_n(t,u)$ are a family of polynomials generalizing the family described in \cite[Section 6]{C-G}. \end{itemize} \end{thm}
In Proposition \ref{two dim as ttp}, we classify the algebras $C(a,b,0)$ and $C(a,b,1)$ up to isomorphism of graded twisted tensor products. We classify the algebras $C(a,b,1)$ and $C(a,b,0)$ up to isomorphism of graded algebras in Theorem \ref{two dim up to isom}.
Section 4 is concerned with the classification of quadratic twisted tensor products of $\mathbb K[x,y]$ and $\mathbb K[z]$. The analysis is considerably more complicated than that of Section 3. We describe twisted tensor products $\mathbb K[x,y]\tensor_{\tau} \mathbb K[z]$ in terms of a \emph{graded twisting map} $\tau:\mathbb K[z]\tensor \mathbb K[x,y]\to \mathbb K[x,y]\tensor \mathbb K[z]$. By \cite[Theorem 1.2]{C-G}, a quadratic twisted tensor product of $\mathbb K[x,y]$ and $\mathbb K[z]$ is determined up to isomorphism by the values of $\tau(z\tensor x)$ and $\tau(z\tensor y)$. Suppressing the tensors, we write \begin{align*} \tau(zx) &= ax^2+bxy+cy^2+dxz+eyz+fz^2,\tag{\textdagger} \label{tauDef} \\ \tau(zy) &= Ax^2+Bxy+Cy^2+Dxz+Eyz+Fz^2, \end{align*} where the coefficients are elements of $\mathbb K$. Our main results in Section 4 are summarized in the following theorem.
\begin{thm}[Lemma \ref{JNFs}, Lemma \ref{cases}] \label{introClassification3D} A quadratic twisted tensor product of $\mathbb K[x,y]$ and $\mathbb K[z]$ is determined, up to isomorphism of twisted tensor products, by a twisting map $\tau$ of the form {\rm (\ref{tauDef})}
where $D=F=0$ and $e,f,A\in\{0,1\}$. Moreover, $\tau$ belongs to one of the following types: \begin{enumerate} \item (Ore type) $f=0$, \item (Reducible type) $f=1$, $A=0$, \item (Elliptic type) $f=1$, $A=1$, $d=-1$. \end{enumerate} \end{thm}
Not all combinations of the parameters produce twisted tensor products. For detailed descriptions of parameter restrictions in each case, see Theorems \ref{Ore extension classification}, \ref{ttps with G_2 = 0}, and \ref{characterization of elliptic type ttps}, respectively. The ``reducible'' and ``elliptic'' terminology reflects the fact that, generically, a reducible-type algebra has a reducible point scheme and an elliptic-type algebra has an elliptic curve as its point scheme. (Here ``point scheme'' refers to the scheme that represents the functor of point modules, as in \cite{ATVI}.) We intend to study the non-commutative algebraic geometry of these algebras, and of twisted tensor products in general, in a forthcoming paper.
In \cite[Example 5.4]{C-G} we presented an example of Koszul algebras $A$ and $B$ such that $A\tensor_{\tau} B$ is not Koszul. However, Koszul algebras are necessarily quadratic algebras, and, in the example we provided, the algebra $A\tensor_{\tau} B$ is not quadratic. This motivated Problem \ref{quad example}. In Section 5 we consider the Koszul property for the algebras classified in Section 4 and present a family of examples that provide a negative answer to Problem \ref{quad example}.
\begin{thm}[Theorem \ref{Koszul property for Ore extensions, A = 0}, Theorem \ref{Koszul property for T(g, d)}] \label{introKoszul} Let $T$ be a quadratic twisted tensor product of $\mathbb K[x,y]$ and $\mathbb K[z]$. If $T$ is of Ore type or reducible type, then $T$ is Koszul. If $T$ is of elliptic type, then $T$ is Koszul if and only if $c-(a-1)(C+a-1)\neq 0$. \end{thm}
The non-Koszul elliptic-type algebras provide examples of non-Koszul quadratic twisted tensor products of Koszul algebras, solving Problem \ref{quad example}. These counterexamples seem interesting in their own right, so we also study the Yoneda algebra of the non-Koszul elliptic-type algebras. In particular, Theorem \ref{Yoneda presentations} gives an explicit finite presentation of these algebras, in terms of generators and relations.
The classification of the algebras appearing in Theorem \ref{introClassification2D} includes all skew polynomial algebras and the Jordan plane $\mathbb K\langle x,y\rangle/\langle yx-xy-y^2\rangle$. These are the two-dimensional \emph{Artin-Schelter regular} algebras. Having classified quadratic twisted tensor products of $\mathbb K[x,y]$ and $\mathbb K[z]$ in Section 4, we take up the question of Artin-Schelter regularity in Section 6, where we prove the following result.
\begin{thm}[Theorem \ref{AS-regular algebras}] \label{introASreg} Let $T$ denote a quadratic twisted tensor product of $\mathbb K[x,y]$ and $\mathbb K[z]$. \begin{itemize} \item[(1)] If $T$ is an algebra of Ore type, then $T$ is AS-regular if and only if $T\cong \mathbb K[x,y][z;\sigma,\delta]$ where $\sigma \in \hbox{{\rm End}}(\mathbb K[x,y])$ is invertible. \item[(2)] If $T$ is an algebra of reducible type, then $T$ is AS-regular if and only if $E \ne 0$ and $a+d \ne 0$. \item[(3)] Assume that $\text{char}\, \mathbb K \ne 2$. If $T$ is an algebra of elliptic type, then $T$ is AS-regular if and only if $c-(a-1)(C+a-1) \ne 0$. \end{itemize}
\end{thm}
Finally, we remark that, in general, the graded twisted tensor product $R\tensor_{\tau}S$ defined by a twisting map $\tau:S\tensor R\to R\tensor S$ and the graded twisted tensor product $S\tensor_{\tau'}R$ defined by a twisting map $\tau':R\tensor S\to S\tensor R$ are not the same, and need not be related. However, if $R=\mathbb K[x,y]$ and $S=\mathbb K[z]$, then $R\tensor_{\tau} S\cong (S\tensor_{\tau^{\rm op}} R)^{\rm op}$ where $\tau^{\rm op} = \zeta\tau \zeta$ is a graded twisting map with $\zeta:R\tensor S\to S\tensor R$ given by $\zeta(r\tensor s)=s\tensor r$. Thus, by considering opposite algebras, our results characterize all graded twisted tensor products of $\mathbb K[x,y]$ and $\mathbb K[z]$, in either order.
\section{Preliminaries} \label{preliminaries}
Throughout the paper, let $\mathbb K$ denote an algebraically closed field. Tensor products taken with respect to $\mathbb K$ are denoted by $\tensor$. We write $\mathbb K^*$ for $\mathbb K - \{0\}$.
We work extensively in categories where the objects are graded $\mathbb K$-vector spaces. If $V$ and $W$ are ${\Bbb N}$-graded $\mathbb K$-vector spaces, then $V \tensor W$ is ${\Bbb N}$-graded by the K\"unneth formula $$(V \tensor W)_m = \bigoplus_{k+l = m} V_k \tensor W_l.$$ Whenever we refer to $V\tensor W$ as a graded space, we assume this K\"unneth grading.
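At the level of graded dimensions, the K\"unneth grading says that the Hilbert function of $V\tensor W$ is the Cauchy product of those of $V$ and $W$. A minimal sketch (with hypothetical dimension vectors, not data from the paper):

```python
def kunneth_dims(dv, dw):
    # dim (V (x) W)_m = sum over k+l = m of (dim V_k)(dim W_l)
    out = [0] * (len(dv) + len(dw) - 1)
    for k, a in enumerate(dv):
        for l, b in enumerate(dw):
            out[k + l] += a * b
    return out

# hypothetical graded spaces with dim V = (1, 2, 1) and dim W = (1, 3)
assert kunneth_dims([1, 2, 1], [1, 3]) == [1, 5, 7, 3]
```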
The term {\it graded algebra} refers to a unital, associative $\mathbb K$-algebra, $A = \oplus_{n \geq 0} A_n$, that is {\it connected} ($A_0 = \mathbb K$) and generated by finitely many homogeneous elements. These assumptions imply the graded algebras we consider are ${\Bbb N}$-graded and {\it locally finite} ($\dim A_n < \infty$ for all $n \geq 0$). Almost all of the graded algebras in this paper are generated in homogeneous degree $1$. If $A$ is a graded algebra, we denote the (graded) multiplication map $A\tensor A\to A$ by ${\mu}_A$. The kernel of the canonical graded algebra homomorphism $A\to \mathbb K$ is the graded radical of $A$, denoted $A_+ = \bigoplus_{i > 0} A_i$.
\subsection{Twisted tensor products and twisting maps} Let $A$ and $B$ be graded algebras. A {\emph{graded twisted tensor product}} of $A$ and $B$ is a triple $(C, i_A, i_B)$ consisting of a graded algebra $C$ and injective homomorphisms of graded algebras $i_A: A \to C$ and $i_B: B \to C$ such that the graded $\mathbb K$-linear map $A \tensor B \to C$ given by $a \tensor b \mapsto i_A(a)i_B(b)$ is an isomorphism of $\mathbb K$-vector spaces.
Suppose that ${\mathcal T} = (C, i_A, i_B)$ and ${\mathcal T}' = (C', i'_A, i'_B)$ are two twisted tensor products of $A$ and $B$. We say that ${\mathcal T}$ and ${\mathcal T}'$ are {\it isomorphic} if there exist graded algebra isomorphisms $\alpha: A \to A$, $\beta: B \to B$ and $\gamma: C \to C$ such that $\gamma i_A = i'_A \alpha$ and $\gamma i_B = i'_B \beta$. Note that this notion of isomorphism is stronger than that of algebra isomorphism. Indeed it is possible to have non-isomorphic triples ${\mathcal T}$ and ${\mathcal T}'$ where $C$ and $C'$ are isomorphic as graded algebras (see Proposition \ref{two dim as ttp} and Theorem \ref{two dim up to isom}).
As shown in \cite{Cap}, the study of twisted tensor products is greatly facilitated by the notion of a twisting map. We adopt the most general graded version for our work below. By a {\it graded twisting map} we mean a graded $\mathbb K$-linear map $\tau: B \tensor A \to A \tensor B$ such that $\tau(1 \tensor a) = a \tensor 1$ and $\tau(b \tensor 1) = 1 \tensor b$ and $$\tau(\mu_B \tensor \mu_A) = ({\mu}_A \tensor {\mu}_B)(1 \tensor \tau \tensor 1)(\tau \tensor \tau)(1 \tensor \tau \tensor 1).$$ The condition that $\tau$ is graded is simply that $\tau((B \tensor A)_n) \subseteq (A \tensor B)_n$ for all $n \geq 0$ with respect to the K\"unneth grading. It is common in the literature to see the phrase ``graded twisting map'' refer to the more restrictive condition: $\tau(B_i \tensor A_j) \subseteq A_j \tensor B_i$ for all $i, j \geq 0$. Since a graded twisting map $\tau$ satisfies $$\tau(B_+ \tensor A_+) \subseteq (A_+ \tensor \mathbb K) \oplus (A_+ \tensor B_+) \oplus (\mathbb K \tensor B_+),$$ we call $\tau$ \emph{one-sided} if either $$\tau(B_+ \tensor A_+) \subseteq (A_+ \tensor \mathbb K) \oplus (A_+ \tensor B_+)$$ or $$\tau(B_+ \tensor A_+) \subseteq (A_+ \tensor B_+) \oplus (\mathbb K \tensor B_+).$$
The relationship between twisting maps and twisted tensor products was established in the ungraded case in \cite{Cap} and in the graded case in \cite{C-G}.
\begin{prop} \label{twisting maps and ttps} \cite[Proposition 2.3]{C-G} Let $A$ and $B$ be graded algebras. Let $\tau: B \tensor A \to A \tensor B$ be a graded $\mathbb K$-linear map. Define ${\mu}_{\tau}: A \tensor B \tensor A \tensor B \to A \tensor B$ by ${\mu}_{\tau} = ({\mu}_A \tensor {\mu}_B)(1 \tensor \tau \tensor 1)$. Then $\tau: B \tensor A \to A \tensor B$ is a graded twisting map if and only if ${\mu}_{\tau}$ defines an associative multiplication giving $A \tensor B$ the structure of a graded algebra. \end{prop}
If $\tau: B \tensor A \to A \tensor B$ is a graded twisting map, we use the notation $A \tensor_{\tau} B$ for the algebra $(A \tensor B, {\mu}_{\tau})$. Note that the triple $(A \tensor_{\tau} B, i_A, i_B)$, where $i_A: A \to A \tensor B$ and $i_B: B \to A \tensor B$ are the canonical inclusions, is a twisted tensor product of $A$ and $B$. We will abuse notation and also write $A \tensor_{\tau} B$ for this triple.
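As a concrete sanity check (our illustration; this example is not singled out in the text here), the quantum plane $\mathbb K\langle x,y\rangle/\langle yx-qxy\rangle$ arises as $\mathbb K[x]\tensor_{\tau}\mathbb K[y]$ with the twisting map $\tau(y^i\tensor x^j)=q^{ij}\,x^j\tensor y^i$; one can verify numerically that the resulting ${\mu}_{\tau}$ is associative on monomials, as Proposition \ref{twisting maps and ttps} requires:

```python
from itertools import product

q = 2  # sample nonzero scalar; any invertible element of K works

def tau(i, j):
    # twisting map on basis elements: tau(y^i (x) x^j) = q^(i*j) x^j (x) y^i
    return j, i, q ** (i * j)

def mul(u, v):
    # mu_tau = (mu_A (x) mu_B)(1 (x) tau (x) 1); elements of K[x] (x) K[y]
    # are dicts {(i, j): coeff} representing sums of x^i y^j
    out = {}
    for (a1, b1), c1 in u.items():
        for (a2, b2), c2 in v.items():
            x_exp, y_exp, s = tau(b1, a2)
            key = (a1 + x_exp, y_exp + b2)
            out[key] = out.get(key, 0) + c1 * c2 * s
    return out

x = {(1, 0): 1}
y = {(0, 1): 1}
assert mul(y, x) == {(1, 1): q}  # the defining relation yx = q xy

# associativity of mu_tau on all monomials of degree < 3 in each variable
mons = [{(i, j): 1} for i, j in product(range(3), repeat=2)]
for u, v, w in product(mons, repeat=3):
    assert mul(mul(u, v), w) == mul(u, mul(v, w))
```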
The ungraded version of the following fundamental result first appeared in \cite[Proposition 2.7]{Cap}.
\begin{prop} \label{ttp yields twisting map} \cite[Proposition 2.4]{C-G} Let $(C, i_A, i_B)$ be a graded twisted tensor product of graded algebras $A$ and $B$. Then there exists a unique graded twisting map $\tau$ such that $(C, i_A, i_B)$ is isomorphic to $A \tensor_{\tau} B$ as graded twisted tensor products of $A$ and $B$. \end{prop}
The graded twisted tensor product $A \tensor_{\tau} B$ can also be identified with a certain quotient of the free product algebra $A \ast B$. As a $\mathbb K$-vector space $$A \ast B = \bigoplus_{i \geq 0; \ \epsilon_1, \epsilon_2 \in \{0, 1\}} A^{\epsilon_1}_+ \tensor (B_+ \tensor A_+)^{\tensor i} \tensor B_+^{\epsilon_2}.$$ The algebra $A \ast B$ is $\mathbb N$-graded by the usual K\"unneth grading. Moreover, there are natural inclusions of $A$ and $B$ into $A \ast B$. Define an ideal of $A \ast B$ by $$I_{\tau} = \langle b \tensor a - \tau(b \tensor a) : a \in A, b \in B \rangle.$$ By \cite[Proposition 2.5]{C-G}, we have $A \tensor_{\tau} B \cong (A \ast B)/I_{\tau}$ as graded algebras.
\subsection{Quadratic twisted tensor products} Let $C$ be a graded algebra that is generated in degree $1$, and let $T(C_1)$ denote the tensor algebra on the vector space $C_1$. The algebra $C$ is called {\it quadratic} if the kernel of the canonical projection $T(C_1) \to C$ is generated in degree 2. Let $\tau: B \tensor A \to A \tensor B$ be a graded twisting map. In \cite{C-G}, the authors characterized when the algebra $A \tensor_{\tau} B$ is quadratic in terms of the structure of $\tau$. We briefly recall the relevant definitions to state this result.
If $V$ and $W$ are graded vector spaces and $f: V \to W$ is a graded linear map, we denote the degree-$n$ component of $f$ by $f_n$ and define $f_{\leq n} = \oplus_{i = 0}^n f_i$ and $f_{>n} = \oplus_{i > n} f_i$.
We say a graded linear map $t: (B \tensor A)_{\leq n} \to (A \tensor B)_{\leq n}$ is {\emph{graded twisting in degree}} $n$ if $t(1 \tensor a) = a \tensor 1$ and $t(b \tensor 1) = 1 \tensor b$ for all $a \in A_n$ and $b \in B_n$, and $$t_n({\mu}_B \tensor {\mu}_A) = ({\mu}_A \tensor {\mu}_B)(1 \tensor t_{\leq n} \tensor 1)(t_{\leq n} \tensor t_{\leq n})(1 \tensor t_{\leq n} \tensor 1)$$ as maps defined on $(B \tensor B \tensor A \tensor A)_n$. If $t$ is graded twisting in degree $i$ for all $i \leq n$, we say $t$ is {\emph{graded twisting to degree $n$}}.
\begin{defn}\cite[Definition 4.2]{C-G} \label{UEP defn} A graded twisting map $\tau: B \tensor A \to A \tensor B$ has the {\emph{unique extension property to degree $n$}} if, whenever $\tau': B \tensor A \to A \tensor B$ is a graded linear map that is twisting to degree $n$ such that $\tau_i = \tau'_i$ for all $i < n$, it follows that $\tau_n = \tau'_n$.
The graded twisting map $\tau$ has the {\emph{unique extension property}} if $\tau$ has the unique extension property to degree $n$ for all $n \geq 3$. \end{defn}
We note that every one-sided graded twisting map has the unique extension property (\cite[Proposition 5.2]{C-G}).
\begin{thm} \label{quadratic iff uep} Let $A$ and $B$ be quadratic algebras and let $\tau: B \tensor A \to A \tensor B$ be a graded twisting map. The following are equivalent: \begin{enumerate} \item the graded algebra $A \tensor_{\tau} B$ is quadratic, \item the graded twisting map $\tau$ has the unique extension property, \item the ideal $I_{\tau}$ of $A\ast B$ is generated in degree 2. \end{enumerate} \end{thm}
\begin{proof} The equivalence of (1) and (2) is \cite[Theorem 1.2]{C-G}. Statements (1) and (3) are equivalent by \cite[Proposition 2.5]{C-G}. \end{proof}
When the algebra $A \tensor_{\tau} B$ is quadratic we will refer to it as a \emph{quadratic twisted tensor product} of $A$ and $B$.
The following result is very helpful in classifying twisted tensor products up to isomorphism.
\begin{prop}\cite[Proposition 2.2]{C-G} \label{isomorphisms of ttps} Let $A$ and $B$ be algebras, and let $\alpha: A \to A$ and $\beta: B \to B$ be algebra automorphisms. If $\tau: B \tensor A \to A \tensor B$ is a twisting map, then the map $\tau': B \tensor A \to A \tensor B$ defined by $\tau' = (\alpha \tensor \beta) \tau (\beta^{-1} \tensor \alpha^{-1})$ is a twisting map. Furthermore, $A \tensor_{\tau} B$ and $A \tensor_{\tau'} B$ are isomorphic as twisted tensor products of $A$ and $B$. \end{prop}
\subsection{Koszul algebras} There are many equivalent ways to define the notion of a Koszul algebra; see the book by Polishchuk and Positselski \cite{PP}. In this paper, we call a graded algebra $A$ a \emph{Koszul algebra} if the trivial module $_A \mathbb K = A_0 = A/A_{+}$ admits a graded projective resolution $$\cdots \rightarrow P_3 \rightarrow P_2 \rightarrow P_1 \rightarrow P_0 \rightarrow \mathbb K \to 0,$$ where, for all $i \geq 0$, $P_i$ is generated in degree $i$. It is well known (see \cite{PP}, for example) that every Koszul algebra is quadratic.
\section{Quadratic twisted tensor products of $\mathbb K[x]$ and $\mathbb K[z]$} \label{ttps of two one-variable polynomial algebras}
In this section we classify all of the quadratic twisted tensor products of two polynomial rings of one variable. This completes the work begun in \cite[Section 6]{C-G}; see especially Theorems 6.2, 6.5 and Proposition 6.6 of that paper. We also use this classification in the next section.
Throughout this section we fix the following notation. Let $A = \mathbb K[x]$, $B = \mathbb K[z]$, and let $\tau:B\tensor A\to A\tensor B$ be a graded twisting map. Then $$\tau(z \tensor x) = ax^2 \tensor 1 + bx \tensor z + 1 \tensor cz^2$$ for some $a, b, c \in \mathbb K$. Let $$C = \mathbb K \langle x, z \rangle/ \langle zx-ax^2-bxz-cz^2\rangle.$$ To indicate dependence on the parameters we will also write $C = C(a, b, c)$.
\begin{prop}\label{oneSidedTwoGen} If $a = 0$ or $c = 0$, then $\tau$ has the unique extension property, hence $A\tensor_{\tau} B$ is a quadratic algebra isomorphic to $C(a,b,c)$. \end{prop}
\begin{proof} The hypotheses imply the graded twisting map $\tau$ is one-sided. The result follows from \cite[Proposition 5.2]{C-G} and Theorem \ref{quadratic iff uep}. \end{proof}
When $ac \ne 0$, applying Proposition \ref{isomorphisms of ttps} to the automorphism of $B$ given by $z\mapsto z/c$ shows there is no loss of generality in assuming $c = 1$. Thus we consider $$C(a, b,1) = \mathbb K \langle x, z \rangle/\langle zx-ax^2-bxz-z^2 \rangle.$$ By Theorem \ref{quadratic iff uep}, determining when $\tau$ has the unique extension property is equivalent to determining when $C$ is a graded twisted tensor product of $A$ and $B$. Note that for the latter to hold, it is necessary that the Hilbert series of $C$ be $(1-t)^{-2}$.
\begin{lemma} \label{quad hilb series} The Hilbert series of the algebra $C(a, b,1)$ is $(1-t)^{-2}$ unless $a=1$ and $b=-1$. \end{lemma}
\begin{proof} By \cite[p. 126]{PP}, the Hilbert series of $C=C(a,b,1)$ is either $(1-t)^{-2}$ or the ``Fibonacci series'' $\sum_{n} F_n t^n$, where $F_0 = 1$, $F_1 = 2$ and $F_{n+2} = F_{n+1} + F_n$, for $n \geq 0$. We claim the Hilbert series is $\sum_{n} F_n t^n$ if and only if $a = 1$ and $b = -1$.
To see this, order the generators of $C$ as $x < z$, and order monomials using left-lexicographic order. By the Diamond Lemma (see, for example, \cite{Berg}), the element $$G = (1+b)zxz + (a-1)zx^2+(b^2-a)x^2z+a(b+1)x^3$$ of the free algebra $\mathbb K\langle x,z\rangle$ is (up to scaling) the only degree-3 element of a Gr\"obner basis with respect to the chosen monomial term order.
It follows that $\dim C_3 = 5$ if and only if $G = 0$, which happens if and only if $a = 1$ and $b = -1$. \end{proof}
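The case analysis in the proof can be corroborated numerically: the four coefficients of $G$ vanish simultaneously exactly at $(a,b)=(1,-1)$. The following short Python scan over an integer sample grid is our own illustration (a spot check, not a proof over all of $\mathbb K$):

```python
# Coefficients of G = (1+b)zxz + (a-1)zx^2 + (b^2-a)x^2z + a(b+1)x^3,
# the degree-3 Grobner-basis element from the proof above.
def G_coeffs(a, b):
    return [1 + b, a - 1, b * b - a, a * (b + 1)]

# Scan an integer grid: G vanishes identically only at (a, b) = (1, -1),
# since 1 + b = 0 and a - 1 = 0 force the remaining two coefficients to 0.
vanishing = [(a, b)
             for a in range(-5, 6)
             for b in range(-5, 6)
             if all(c == 0 for c in G_coeffs(a, b))]
assert vanishing == [(1, -1)]
```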
Next we define some sequences of polynomials that are the key to the rest of the classification problem. Let $\{e_n(t, u)\}, \{f_n(t, u)\}, \{g_n(t, u)\}, \{h_n(t, u)\}$ in $\mathbb K[t, u]$ be defined as follows: \begin{align*} e_0(t,u) &= 1, &f_0(t,u) &= 1, \\ e_n(t,u) &= ue_{n-1}(t,u)+f_{n-1}(t,u), &f_n(t,u) &= -t e_{n-1}(t,u)+f_{n-1}(t,u), & n&\ge 1,\\ g_n(t,u) &= (1-u)e_n(t,u)-f_n(t,u), &h_n(t,u) &= -te_n(t,u), & n&\ge 0.
\end{align*}
\begin{lemma}\label{key relation}\ \begin{enumerate} \item For all $n\ge 0$, $f_n(t,-t )= (1-t)^n$. \item For all $n\ge 0$, the relation $$e_nzx^nz = f_nzx^{n+1}+g_nx^{n+1}z+h_nx^{n+2}$$ holds in $C(a,b,1)$, where $e_n = e_n(a, b)$, $f_n = f_n(a, b)$, $g_n = g_n(a,b)$, and $h_n = h_n(a,b)$. \item If $f_i(a,b)\neq 0$ for all $1\le i\le n-1$, then $zx^k\in {\rm Span}\{x^i z^j : i, j \geq 0\}\subseteq C(a,b,1)$ for $1 \leq k \leq n$. \end{enumerate} \end{lemma}
\begin{proof} From the recursive formulas it is clear that $e_n(t, -t) = f_n(t,-t)$ for all $n\ge 0$. Thus for $n\ge 1$ we have \begin{align*} f_n(t,-t) &= -te_{n-1}(t,-t)+f_{n-1}(t,-t) \\ &= -tf_{n-1}(t,-t)+f_{n-1}(t,-t) \\ &= f_{n-1}(t,-t)(1-t), \end{align*} and hence (1) follows by induction.
The relation in (2) holds for $n = 0$ by definition of $C(a,b,1)$. Suppose that the relation \[e_{n-1}zx^{n-1}z = f_{n-1}zx^{n}+g_{n-1}x^{n}z+h_{n-1}x^{n+1} \tag{\textasteriskcentered} \label{inductHyp}\] holds for some $n \geq 1$. Multiplying through by $z$ on the right and using the defining relation of $C(a,b,1)$ yields $$e_{n-1}zx^{n-1}(zx-bxz-ax^2) = f_{n-1}zx^{n}z+g_{n-1}x^{n}(zx-bxz-ax^2)+h_{n-1}x^{n+1}z.$$ Next, using the relation (\ref{inductHyp}) on the first term, rearranging and collecting terms we have $$(be_{n-1}+f_{n-1})zx^nz = (-ae_{n-1}+f_{n-1})zx^{n+1}+(bg_{n-1}-h_{n-1})x^{n+1}z+(h_{n-1}+ag_{n-1})x^{n+2}.$$ By the recursive definitions \begin{align*} g_n &= (1-b)[be_{n-1}+f_{n-1}]-[-ae_{n-1}+f_{n-1}] \\ &= b[(1-b)e_{n-1}-f_{n-1}]+ae_{n-1} \\ &= bg_{n-1}-h_{n-1}, \\ h_n &= -ae_n \\ &= -a(be_{n-1}+f_{n-1}) \\ &= -ae_{n-1}+a[(1-b)e_{n-1}-f_{n-1}] \\ &= h_{n-1}+ag_{n-1}, \end{align*} and therefore $$e_nzx^nz = f_nzx^{n+1}+g_nx^{n+1}z+h_nx^{n+2},$$ as desired.
Statement (3) now follows by a straightforward induction.
\end{proof}
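The recurrences for $e_n, f_n, g_n, h_n$ and the identity $f_n(t,-t)=(1-t)^n$ of Lemma \ref{key relation}(1) are easy to check by machine at sample points. The following Python sketch is our own verification aid (exact arithmetic via the standard \texttt{fractions} module):

```python
from fractions import Fraction

def efgh(t, u, N):
    """Return the lists [e_0..e_N], [f_0..f_N], [g_0..g_N], [h_0..h_N]
    evaluated at (t, u), following the recurrences in the text."""
    e, f = [Fraction(1)], [Fraction(1)]
    for n in range(1, N + 1):
        e.append(u * e[n - 1] + f[n - 1])
        f.append(-t * e[n - 1] + f[n - 1])
    g = [(1 - u) * e[n] - f[n] for n in range(N + 1)]
    h = [-t * e[n] for n in range(N + 1)]
    return e, f, g, h

# Lemma (1): f_n(t, -t) = (1 - t)^n, checked at sample rational points.
for t in [Fraction(2), Fraction(-3), Fraction(1, 2)]:
    _, f, _, _ = efgh(t, -t, 8)
    assert all(f[n] == (1 - t) ** n for n in range(9))
```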
\begin{thm} \label{quadratic ttps of polynomial rings of one variable} The algebra $C(a,b,1)$ is a graded twisted tensor product of $\mathbb K[x]$ and $\mathbb K[z]$ if and only if $a, b \in \mathbb K$ satisfy $f_n(a,b) \ne 0$ for all $n \geq 0$. \end{thm}
\begin{proof} For all $n \geq 0$, write $e_n, f_n, g_n, h_n$ for $e_n(a,b), f_n(a,b), g_n(a,b), h_n(a,b)$, respectively. Let $A=\mathbb K[x]$, $B=\mathbb K[z]$, and $C=C(a,b,1)$.
If $a=0$, then $f_n=1$ for all $n\ge 0$, and by Proposition \ref{oneSidedTwoGen}, $C$ is a graded twisted tensor product of $A$ and $B$. Henceforth we assume $a\neq 0$.
Suppose $f_n \ne 0$ for all $n \geq 0$. Let $i_A:A\to C$ and $i_B:B\to C$ be the graded algebra homomorphisms determined respectively by $i_A(x)=x$ and $i_B(z)=z$. Let $$S={\rm Span}\{x^i z^j : i, j \geq 0\}\subseteq C.$$ By Lemma \ref{key relation}(3), since $f_n \ne 0$ for all $n \geq 0$, we know $zx^k \in S$ for all $k \geq 0$. It follows that $C = S$. In particular, the canonical linear map $$(i_A,i_B):A \tensor B \to C$$ given by $a\tensor b\mapsto i_A(a)i_B(b)$ is surjective.
Since $f_1 = 1-a \ne 0$, we know $a \ne 1$. Thus by Lemma \ref{quad hilb series}, the Hilbert series of $C$ is $(1-t)^{-2}$. Hence $i_A$ and $i_B$ are injective and $(i_A,i_B):A \tensor B \to C$ is a linear isomorphism. It follows that $C$ is a twisted tensor product of $A$ and $B$.
Conversely, suppose that $a, b \in \mathbb K$ are such that $f_n(a,b) = 0$ for some $n \geq 1$.
If $f_1 = 0$, then $a = 1$. Suppose $b = -1$. Then by Lemma \ref{quad hilb series} the Hilbert series of $C$ is not $(1-t)^{-2}$, so $C$ is not a twisted tensor product of $A$ and $B$. If $b \ne -1$, the relation in Lemma \ref{key relation}(2) yields $$(1+b)zxz +(b^2-1)x^2z+(b+1)x^3 = 0.$$ Substituting $zx = ax^2+bxz+z^2$ results in $$(1+b)(ax^2+bxz+z^2)z + (b^2-1)x^2z+(b+1)x^3 = 0.$$ Considering that the coefficient of $x^3$ is nonzero we see that this is a nontrivial dependence relation in $S$. Thus $C$ is not a twisted tensor product of $A$ and $B$. This concludes the case $f_1 = 0$.
Henceforth we assume $n\ge 2$ is minimal such that $f_n = 0$. Since $f_1\neq 0$, we have $a\neq 1$. Thus Lemma \ref{key relation}(1) implies $b \ne -a$. We claim that $e_n \ne 0$.
Suppose, to the contrary, that $e_n = 0$. Then the recurrence formulas for $f_n$ and $e_n$ give $f_{n-1} = ae_{n-1}$ and $f_{n-1} = -be_{n-1}.$ Since $b \ne -a$, we have $e_{n-1} = 0$ and hence $f_{n-1}=0$, contradicting the minimality of $n$. Thus $e_n\ne 0$.
By Lemma \ref{key relation}(2) the relation \[e_nzx^nz - g_nx^{n+1}z + ae_nx^{n+2} = 0\tag{\textasteriskcentered\textasteriskcentered} \label{depRel}\] holds in $C$, and since $f_1, \ldots, f_{n-1}$ are all nonzero we know that $zx^n \in S$ by Lemma \ref{key relation}(3). Since $a\neq 0$ and $e_n\neq 0$, the coefficient of $x^{n+2}$ in (\ref{depRel}) is nonzero, so (\ref{depRel}) is a nontrivial dependence relation in $S$. Thus $C$ is not a twisted tensor product of $A$ and $B$.
\end{proof}
Noting that under the change of variables $z\mapsto z/c$ employed above, the coefficient of $x^2$ in $\tau(z\tensor x)$ becomes $ac$, we have the following.
\begin{cor} \label{abc} Let $A = \mathbb K[x]$, $B = \mathbb K[z]$, and let $\tau:B\tensor A\to A\tensor B$ be a graded twisting map. Let $a, b, c \in \mathbb K$ be given by $$\tau(z \tensor x) = ax^2 \tensor 1 + bx \tensor z + 1 \tensor cz^2.$$ Then $A\tensor_{\tau} B$ is quadratic if and only if $f_n(ac,b)\neq 0$ for all $n\ge 0$. \end{cor}
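Computationally, the condition of Corollary \ref{abc} can only be tested up to a finite bound. The following Python sketch is our own; the function name and the cutoff $N$ are ours, and a run up to $N$ certifies nothing beyond that bound:

```python
from fractions import Fraction

def quadratic_obstruction(a, b, c, N=50):
    """Return the least n <= N with f_n(ac, b) = 0 (the obstruction in
    Corollary abc), or None if no such n occurs up to the cutoff.
    The bound N is a practical choice, not part of the statement."""
    t, u = Fraction(a) * Fraction(c), Fraction(b)
    e, f = Fraction(1), Fraction(1)   # e_0 = f_0 = 1
    for n in range(1, N + 1):
        e, f = u * e + f, -t * e + f  # the recurrences defining e_n, f_n
        if f == 0:
            return n
    return None

assert quadratic_obstruction(1, 0, 1) == 1     # f_1 = 1 - ac = 0
assert quadratic_obstruction(3, 2, 0) is None  # c = 0: one-sided, f_n = 1
```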
To end this section we will classify the quadratic graded twisted tensor products of $A=\mathbb K[x]$ and $B=\mathbb K[z]$ up to: (1) isomorphism of twisted tensor products of $A$ and $B$, and (2) isomorphism of graded algebras.
\subsection{Classification up to isomorphism of graded twisted tensor products}
Recall that $$C(a, b, c) = \mathbb K \langle x, z \rangle/\langle zx-ax^2-bxz-cz^2 \rangle.$$ By Corollary \ref{abc}, the algebra $C(a, b, c)$ is a twisted tensor product of $A$ and $B$ with respect to the obvious inclusions of $A$ and $B$ if and only if $f_n(ac,b)\neq 0$ for all $n\ge 0$. In the classification that follows, we assume that $f_n(ac,b)\neq 0$ for all $n\ge 0$. Above we showed that if $c\ne 0$, then $C(a,b,c)\cong C(ac,b,1)$ as graded twisted tensor products. Similarly, if $c=0\neq a$, the automorphism $x\mapsto x/a$ induces an isomorphism $C(a,b,0)\cong C(1,b,0)$. Otherwise, as twisted tensor products of $A$ and $B$, the $C(a, b, c)$ are completely rigid in the following sense.
\begin{prop} \label{two dim as ttp} If $C(a, b,1) \cong C(a', b',1)$ as twisted tensor products of $A$ and $B$, then $a = a'$ and $b = b'$. If $C(1,b,0)\cong C(1,b',0)$ or $C(0,b,0)\cong C(0,b',0)$ as graded twisted tensor products of $A$ and $B$, then $b=b'$. \end{prop}
\begin{proof} By definition, any isomorphism of twisted tensor products $\gamma: C(a, b, 1) \to C(a', b', 1)$ must commute with the inclusions. It follows that $\gamma(x) = \alpha x$ and $\gamma(z) = \beta z$ for some $\alpha, \beta \in \mathbb K^*$. Considering the defining relations of $C(a, b, 1)$ and $C(a', b', 1)$ makes it evident that $a = a'$ and $b = b'$. An analogous argument holds in the $c=0$ case. \end{proof}
\subsection{Classification up to isomorphism of graded algebras} By Proposition \ref{two dim as ttp}, it suffices to classify the twisted tensor products: $C(a,b,1)$, $C(1,b,0)$, and $C(0,b,0)$ up to isomorphism.
Let $R = \mathbb K \langle x, z \rangle/ \langle f \rangle$, where $0 \ne f$ is homogeneous of degree 2. It is well known (see, for example, \cite[Exercise 2.4.3 (2)]{Rogalski}) that $R$ is isomorphic (as a graded algebra) to exactly one of: \begin{itemize} \item[(i)] $\mathbb K_q[x, z] = \mathbb K \langle x, z \rangle/ \langle zx-qxz \rangle$, for some $q \in \mathbb K$; \item[(ii)] $J(x, z) = \mathbb K \langle x, z \rangle/\langle zx-xz-z^2\rangle$ (the {\it Jordan plane});
\item[(iii)] $\mathbb K \langle x, z \rangle/\langle x^2 \rangle$. \end{itemize} We handle the $c=0$ case first.
\begin{prop}\ The following graded algebra isomorphisms hold. \begin{enumerate} \item $C(1,b,0)\cong \mathbb K_b[x,z]$ if $b\neq 1$; \item $C(0,b,0)=\mathbb K_b[x,z]$ if $b\neq 0$; \item $C(1,1,0)\cong J(x,z)$; \item $C(1,0,0)\cong C(0,0,0)=\mathbb K \langle x, z \rangle/\langle zx \rangle$. \end{enumerate} \end{prop}
\begin{proof} The first isomorphism can be obtained by $x\mapsto (1-b)x$ and $z\mapsto x+z$. The third is induced by the substitution $x\mapsto -z$, $z\mapsto x$, which carries the relation $zx-x^2-xz$ to $zx-xz-z^2$. The rest are obvious. \end{proof}
To determine the graded algebra isomorphism type of $C(a, b,1)$ we use the following result.
\begin{lemma}\cite[Exercise 2.4.3]{Rogalski} \label{rogalski lemma} Let $R = \mathbb K \langle x, z \rangle/\langle f \rangle$ and $R' = \mathbb K \langle x, z \rangle/\langle f' \rangle,$ where $f$ and $f'$ are nonzero homogeneous elements of degree 2. Write $f = x \phi(x) + z \phi(z),$ for some unique linear transformation $\phi: \mathbb K x \oplus \mathbb K z \to \mathbb K x \oplus \mathbb K z$, and likewise $f' = x \phi'(x) + z \phi'(z)$. If $\phi(x) = m_{11} x + m_{12}z$ and $\phi(z) = m_{21} x + m_{22} z$, define a matrix $M = (m_{ij})$, and define $M'$ analogously from $\phi'$. Then $R \cong R'$ as graded algebras if and only if there exists an invertible matrix $N$ such that $M' = N^t M N$. \end{lemma}
\begin{thm} \label{two dim up to isom} Suppose that $C(a, b,1)$ is a quadratic twisted tensor product of $A$ and $B$.
Then, as graded algebras, \begin{itemize}
\item[(1)] $C(a, -1,1) \cong \mathbb K_{-1}[x,z]$, if ${\rm char}\ \mathbb K\neq 2$; \item[(2)] if $4a-(b-1)^2 = 0$, then $C(a, b,1) \cong J(x, z)$; \item[(3)] if $b \ne -1$ and $4a-(b-1)^2 \ne 0$, then $C(a, b,1) \cong \mathbb K_q[x,z]$, where $q$ satisfies $(a+b)q^2+(2a-b^2-1)q + a + b = 0$. \end{itemize} \end{thm}
Note that if ${\rm char}\ \mathbb K\neq 2$ and $b=-1$ then $4a-(b-1)^2=0$ implies $a=1$. This violates Theorem \ref{quadratic ttps of polynomial rings of one variable}, so there is no conflict between statements (1) and (2).
\begin{proof} First, note that by Theorem \ref{quadratic ttps of polynomial rings of one variable}, $f_1(a,b)\neq 0$ so $a\neq 1$.
To prove the theorem, we simply write down an invertible matrix $N$ and check the condition of Lemma \ref{rogalski lemma}.
Using the notation of Lemma \ref{rogalski lemma}, for $\mathbb K_q[x,z]$, $J(x,z)$, and $C(a,b,1)$ we have $$M_q = \begin{bmatrix} 0 & -q \\ 1 & 0 \end{bmatrix},\qquad M_J = \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix},\qquad\text{ and }\qquad M_C = \begin{bmatrix} -a & -b \\1 & -1 \end{bmatrix}$$ respectively.
To prove (1), let $$N = \begin{bmatrix} \frac{1+\sqrt{1-a}}{2} & -\frac{1}{2} \\ -1+\sqrt{1-a} & 1 \end{bmatrix}.$$ The determinant of $N$ is $\sqrt{1-a}$, which is nonzero since $a \ne 1$, so $N$ is invertible. It is straightforward to check that $N^t M_{-1} N = M_C$.
For (2), assume that $4a-(b-1)^2 = 0$. Let $\sqrt{a}$ be chosen such that $2\sqrt{a}=b-1$ and put $$N = \begin{bmatrix} 1+\sqrt{a} & 0 \\ \sqrt{a} & 1\end{bmatrix}.$$ Clearly $N$ is invertible and one checks that $N^t M_J N = M_C$.
Finally, for (3) assume that $b \ne -1$ and $4a-(b-1)^2 \ne 0$. Suppose that $q \in \mathbb K$ satisfies $(a+b)q^2+(2a-b^2-1)q + a + b = 0$. We claim that $q \in \mathbb K - \{1, -1\}$. To see this suppose $q \in \{1, -1\}$. If $q = 1$, then $4a-(b-1)^2 = 0$; if $q = -1$, then $b = -1$. We conclude that $q \in \mathbb K - \{1, -1\}$. Define $$N = \begin{bmatrix} \frac{bq-1}{q^2-1} & \frac{1}{q-1} \\ \frac{b-q}{q+1} & 1 \end{bmatrix}.$$ Then $\det N = (1+b)/(1+q) \ne 0$, so $N$ is invertible and one checks that $N^t M_q N = M_C$.
\end{proof}
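The matrix identities asserted in the proof are routine but fiddly, and can be confirmed with exact rational arithmetic. The following Python sketch is our own check of case (1) at the sample point $a=-3$ (so $\sqrt{1-a}=2$) and of case (3) at $b=0$, $q=2$ (for which the quadratic forces $a=2/9$):

```python
from fractions import Fraction

F = Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def congruent(N, M):
    # The congruence N^t M N of the Rogalski lemma.
    return matmul(transpose(N), matmul(M, N))

# Case (1): b = -1; sample a = -3, so sqrt(1-a) = 2.
a, s = F(-3), F(2)
N1 = [[(1 + s) / 2, F(-1, 2)], [-1 + s, F(1)]]
M_minus1 = [[F(0), F(1)], [F(1), F(0)]]
assert congruent(N1, M_minus1) == [[-a, F(1)], [F(1), F(-1)]]  # M_C with b = -1

# Case (3): sample b = 0, q = 2; then (a+b)q^2 + (2a-b^2-1)q + a+b = 0
# forces a = 2/9.
a, b, q = F(2, 9), F(0), F(2)
assert (a + b) * q**2 + (2 * a - b**2 - 1) * q + a + b == 0
N3 = [[(b * q - 1) / (q**2 - 1), 1 / (q - 1)], [(b - q) / (q + 1), F(1)]]
M_q = [[F(0), -q], [F(1), F(0)]]
M_C = [[-a, -b], [F(1), F(-1)]]
assert congruent(N3, M_q) == M_C
```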
\section{Quadratic twisted tensor products of $\mathbb K[x,y]$ and $\mathbb K[z]$} \label{sec:QuadTTPs}
Throughout this section, and for the rest of the paper, we fix the notation $R = \mathbb K[x, y]$ and $S = \mathbb K[z]$. For readability, henceforth, when we write $T$ is a (quadratic) graded twisted tensor product, we mean that $T$ is a (quadratic) graded twisted tensor product of $R$ and $S$. Also, if $T$ and $T'$ are graded twisted tensor products and we write $T$ is isomorphic to $T'$, then we mean that $T$ and $T'$ are isomorphic as graded twisted tensor products of $R$ and $S$. The main result of this section is the determination of all of the isomorphism classes of the quadratic twisted tensor products of $R$ and $S$. Let $a, b, c, d, e, f, A, B, C, D, E, F \in \mathbb K$ and define elements of the tensor algebra $\mathbb K \langle x, y, z \rangle$ (suppressing tensors):
\begin{align*} \tau(zx) &= ax^2+bxy+cy^2+dxz+eyz+fz^2, \\ \tau(zy) &= Ax^2+Bxy+Cy^2+Dxz+Eyz+Fz^2. \end{align*}
The problem under consideration is to determine when the algebra $$T = \mathbb K \langle x, y, z \rangle/ \langle zx-\tau(zx), zy-\tau(zy), xy-yx \rangle$$ is a graded twisted tensor product of $R$ and $S$ relative to the obvious maps $i_R:R \to T$ and $i_S:S \to T$. To indicate the dependence of $T$ on the parameters, we will also use the notation $$T = T(a, b, c, d, e, f; A, B, C, D, E, F).$$ We begin with a reduction in the number of parameters.
\begin{lemma} \label{JNFs} If $T$ is a graded twisted tensor product, then $T$ is isomorphic to $T(a', b', c', d', e', f'; A', B', C', 0, E', 0)$ where $e', f'\in \{0,1\}$. Furthermore, if $e'=0$ then $A'\in\{0,1\}$, if $e'=A'=0$, then $C'\in\{0,1\},$ and if $e'=1$, then $d'=E'$. \end{lemma}
\begin{proof} We will apply Proposition \ref{isomorphisms of ttps} repeatedly to automorphisms of $R$ and $S$.
If $F=0\neq f$, applying Proposition \ref{isomorphisms of ttps} to the automorphism of $S$ given by $z \mapsto f^{-1} z$, we see that $T\cong T(a', b', c', d', e', 1; A', B', C', D', E', 0).$
If $F\neq 0$, then applying Proposition \ref{isomorphisms of ttps} to the automorphism of $R$ given by $x \leftrightarrow y$ and the automorphism $z \mapsto F^{-1} z$, we have $$T\cong T(a', b', c', d', e', 1; A', B', C', D', E', F').$$ Then we use the automorphism of $R$ given by $x \mapsto x$, $y \mapsto y + F'x$ to see that, without loss of generality, $F' = 0$ and $f'=1$.
We have proved that if $T$ is any twisted tensor product of $R$ and $S$, then $$T\cong T(a', b', c', d', e', f'; A', B', C', D', E', 0)$$ for $f'\in \{0,1\}$, so we assume henceforth that $F=0$ and $f\in \{0,1\}$.
The graded automorphism group of $R$ is ${\rm GL}_2(\mathbb K)$, where an invertible matrix $P=\begin{bmatrix} p_{11} & p_{12}\\ p_{21} & p_{22}\\ \end{bmatrix}$ determines a graded algebra automorphism $\alpha_P:R\to R$ by $\alpha_P(x) = p_{11}x+p_{12}y$ and $\alpha_P(y)=p_{21}x+p_{22}y$. Since $\mathbb K$ is algebraically closed we choose an invertible matrix $P$ such that $P^{-1} \begin{bmatrix} d & e \\D & E \end{bmatrix} P$ is in Jordan normal form. Applying Proposition \ref{isomorphisms of ttps} to the automorphism $\alpha_P$, we have $$T\cong T(a', b', c', d', e', f; A', B', C', 0, E', 0)$$ where $e'=0$, or $e'=1$ and $d'=E'$. Note that this isomorphism does not affect the coefficients of $z^2$.
If $e'=0$ and $A'\neq 0$, we may rescale $y\mapsto A'y$ to obtain $$T\cong T(a'', b'', c'', d'', 0, f; 1, B'', C'', 0, E'', 0).$$ To complete the proof, if $e'=A'=0$, and $C'\neq 0$, we may rescale $y\mapsto C'^{-1}y$ to obtain $$T\cong T(a'', b'', c'', d'', 0, f; 0, B'', 1, 0, E'', 0).$$ \end{proof}
We refer to the isomorphism class of Lemma \ref{JNFs} as the \emph{Jordan normal form} of the twisted tensor product $T$.
Let us first dispense with the case $f=0$. Let $\sigma\in \hbox{{\rm End}}(R)$ be a graded ring endomorphism and let $\delta$ be a graded left $\sigma$-derivation, meaning $\delta(r_1r_2)=\sigma(r_1)\delta(r_2)+\delta(r_1)r_2$ for all $r_1, r_2\in R$. Then one may construct the \emph{graded (left) Ore extension} $R[z;\sigma,\delta]$, which is freely generated as an $R$-algebra by $z$, subject to the relations $zr=\sigma(r)z+\delta(r)$ for $r\in R$.
\begin{prop} \label{oneSidedOre} The algebra $T$ is a graded twisted tensor product with $f=F=0$ if and only if $T$ is a graded Ore extension. \end{prop}
\begin{proof} Suppose $\sigma$ is a graded ring endomorphism of $R$ and $\delta$ is a graded left $\sigma$-derivation. From the relations of $R[z;\sigma,\delta]$ it is clear that the set ${\mathcal S} = \{x^i y^j z^k : i, j, k \geq 0\}$ is a $\mathbb K$-linear spanning set. Since $z$ is a free generator and $\{x^iy^j : i, j\ge 0\}$ is a $\mathbb K$-basis for $R$, it follows that ${\mathcal S}$ is a $\mathbb K$-basis for $R[z;\sigma,\delta]$. Hence the $\mathbb K$-linear map $R\tensor S\to R[z;\sigma,\delta]$ given by $r\tensor z^i\mapsto rz^i$ is an isomorphism, and $R[z;\sigma,\delta]$ is a graded twisted tensor product. The graded twisting map associated to $R[z;\sigma,\delta]$ is given by $\tau(z\tensor r)=\sigma(r)\tensor z+\delta(r) \tensor 1$, and it follows that $f=F=0$.
Now suppose $T$ is a graded twisted tensor product of $R$ and $S$ with $f=F=0$. Since $T$ is quadratic, the definitions of $\tau(z\tensor x)$ and $\tau(z\tensor y)$ uniquely determine a one-sided graded twisting map $\tau$ for $T$ by Theorem \ref{quadratic iff uep}. Since $\tau$ respects multiplication in $R$, we have $\tau(z\tensor r)\in (R\tensor \mathbb K z) \oplus (R\tensor \mathbb K 1_S)$ for all $r\in R$. Define graded $\mathbb K$-linear maps $\sigma, \delta:R\to R$ by $\tau(z\tensor r) = \sigma(r)\tensor z + \delta(r)\tensor 1_S$. Then for any $r_1, r_2\in R$, we have \begin{align*} \sigma(r_1r_2)\tensor z+\delta(r_1r_2)\tensor 1_S&=\tau(z\tensor r_1r_2)\\ &= ({\mu}_R\tensor 1)(1\tensor \tau)((\sigma(r_1)\tensor z + \delta(r_1)\tensor 1_S)\tensor r_2)\\
&=\sigma(r_1)\sigma(r_2)\tensor z + (\sigma(r_1)\delta(r_2)+\delta(r_1)r_2)\tensor 1_S. \end{align*} This shows $\sigma$ is a ring endomorphism and $\delta$ is a (left) $\sigma$-derivation. By Theorem \ref{quadratic iff uep}(3), for all $r\in R$, the relations $zi_R(r)=\sigma(i_R(r))z+\delta(i_R(r))$ hold in $T$. Since $T$ is a graded twisted tensor product, ${\mathcal S} = \{x^i y^j z^k : i, j, k \geq 0\}$ is a $\mathbb K$-basis for $T$, thus $T$ is freely generated as an $i_R(R)$-algebra by $z$. Thus $T$ is a graded Ore extension. \end{proof}
When $T$ is isomorphic to a graded twisted tensor product with $f=F=0$, we say $T$ is of \emph{Ore type}.
\subsection{The Ore-type case}
Let $\sigma \in \hbox{{\rm End}}(R)$ be the ring endomorphism given by $\sigma(x) = dx + ey$ and $\sigma(y) = Dx+Ey$. Proposition \ref{oneSidedOre} shows that to determine which values of the parameters ensure $T$ is a graded twisted tensor product with $f=F=0$, it suffices to characterize when the formulas $$\delta(x) = ax^2+bxy+cy^2, \ \ \ \delta(y) = Ax^2+Bxy+Cy^2$$ determine a well-defined left $\sigma$-derivation $\delta:R\to R$. In turn, the definitions in the last display determine a well-defined left $\sigma$-derivation $\delta:R\to R$ if and only if $\delta(xy)=\delta(yx)$, or equivalently if $$\sigma(x)\delta(y)+\delta(x)y = \sigma(y)\delta(x) + \delta(y) x, $$ holds in the ring $R$.
\begin{thm} \label{Ore extension classification} Suppose $T$ is an Ore-type twisted tensor product in Jordan normal form. \begin{itemize} \item[(1)] If $e=0$, then $A\in\{0,1\}$, and if $A=0$, then $C\in\{0,1\}$. Furthermore, one of the following mutually exclusive cases is true: \begin{itemize} \item[(i)] $d =E = 1$ and $a, b, c, B$ are arbitrary; \item[(ii)] $d \ne 1$, $E = 1$, $A = B = C = 0$, and $a, b, c$ are arbitrary; \item[(iii)] $d =1$, $E \ne 1$, $a = b = c = 0$, and $B$ is arbitrary; \item[(iv)] $d \ne 1$, $E \ne 1$, $A = c = 0$, $a=B(d-1)/(E-1)$, $b=C(d-1)/(E-1)$, and $B$ is arbitrary. \end{itemize}
\item[(2)] If $e=1$, then $d=E$, and one of the following mutually exclusive cases is true:
\begin{itemize}
\item[(i)] $d = 1$, $A = B = C = 0$, and $a, b, c$ are arbitrary; \item[(ii)] $d \ne 1$, $A = 0$, $B=a=(b-C)(d-1)$, $c=C/(d-1)$, and $b, C$ are arbitrary. \end{itemize}
\end{itemize}
\end{thm}
\begin{proof} Assume $T$ is an Ore-type twisted tensor product in Jordan normal form, so $f=F=D=0$ and $e\in\{0,1\}$.
For (1), suppose that $e = 0$. By Lemma \ref{JNFs}, we have $A\in\{0,1\}$ and if $A=0$ then $C\in\{0,1\}$. Equating coefficients in $\sigma(x)\delta(y)+\delta(x)y = \sigma(y)\delta(x) + \delta(y) x$ yields the system of equations \begin{align*} A(d-1) &= 0 \\ B(d-1)+a(1-E) &= 0 \\ C(d-1)+b(1-E) &= 0 \\ c(E-1) &= 0. \end{align*} The result then follows by considering the possible normal forms and whether $d$ or $E$ equals 1.
The proof of (2) is similar, and is left to the reader. \end{proof}
This completes the classification, up to isomorphism, of twisted tensor products of Ore type.
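The compatibility condition $\sigma(x)\delta(y)+\delta(x)y = \sigma(y)\delta(x)+\delta(y)x$ underlying Theorem \ref{Ore extension classification} can be verified directly at sample parameter values. The following Python sketch is our own check of case (1)(iv), with the sample values $d=2$, $E=3$, $B=C=1$ (all names and values are ours):

```python
from fractions import Fraction

F = Fraction

# Polynomials in K[x, y]: dicts (i, j) -> coefficient of x^i y^j.
def add(p1, p2):
    out = dict(p1)
    for k, c in p2.items():
        out[k] = out.get(k, F(0)) + c
    return {k: c for k, c in out.items() if c != 0}

def mul(p1, p2):
    out = {}
    for (i, j), c1 in p1.items():
        for (k, l), c2 in p2.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), F(0)) + c1 * c2
    return {k: c for k, c in out.items() if c != 0}

# Case (1)(iv): d != 1, E != 1, A = c = 0, a = B(d-1)/(E-1), b = C(d-1)/(E-1);
# here e = D = 0, so sigma(x) = d x and sigma(y) = E y.
d, E, B, C = F(2), F(3), F(1), F(1)
A, c = F(0), F(0)
a, b = B * (d - 1) / (E - 1), C * (d - 1) / (E - 1)

sigma_x = {(1, 0): d}
sigma_y = {(0, 1): E}
delta_x = {(2, 0): a, (1, 1): b, (0, 2): c}
delta_y = {(2, 0): A, (1, 1): B, (0, 2): C}

x, y = {(1, 0): F(1)}, {(0, 1): F(1)}
lhs = add(mul(sigma_x, delta_y), mul(delta_x, y))
rhs = add(mul(sigma_y, delta_x), mul(delta_y, x))
assert lhs == rhs   # delta extends to a sigma-derivation of K[x, y]
```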
We now turn our attention to the case where $T$ is a quadratic twisted tensor product in Jordan normal form with $f=1$. As the associated twisting map is no longer one-sided, this case is more difficult. Surprisingly, as we will show, knowing $\dim T_3=10$ is sufficient for proving $T$ is a quadratic twisted tensor product. For this reason, the remainder of this section relies upon a Gr\"obner-basis calculation.
First we record a useful fact.
\begin{prop} \label{not a zero of fn} If $T$ is a twisted tensor product in Jordan normal form with $A=0$, $E \ne 0$ and $f = 1$, then $y$ is normal in $T$ and $(a,d)$ is not a zero of $f_n(t,u)$ for any $n\ge 1$. In particular, $a\neq 1$. \end{prop}
\begin{proof} Let $T$ be a twisted tensor product in Jordan normal form with $A = 0$, $E \ne 0$ and $f=1$. Then $T$ may be presented as $$\mathbb K\langle x, y, z \rangle/\langle xy-yx, zx-ax^2-byx-cy^2-dxz-eyz-z^2, zy-Byx-Cy^2-Eyz \rangle.$$ Since $E\neq 0$, it is clear from this presentation that $y$ is normal in $T$.
Next, it is easy to show that $$T/\langle y \rangle \cong \mathbb K \langle x, z \rangle/\langle zx-ax^2-dxz-z^2 \rangle.$$
By assumption, $T$ has $\{y^i x^j z^k : i, j, k \geq 0\}$ as a $\mathbb K$-basis, so it follows that $T/\langle y \rangle$ has $\{x^j z^k : j, k \geq 0\}$ as a $\mathbb K$-basis. Hence $T/\langle y \rangle$ is a twisted tensor product of $\mathbb K[x]$ and $\mathbb K[z]$. Then Theorem \ref{quadratic ttps of polynomial rings of one variable} implies that $(a, d)$ is not a zero of $f_n(t,u)$ for any $n \geq 1$.
Finally, recall that $f_1(t,u) = 1-t$, so we have $a \neq 1$. \end{proof}
We briefly recall some terminology and notation for Gr\"obner-basis calculations, loosely following \cite{Berg}.
Let $\mathbb K\langle X\rangle$ be the free algebra on a (finite) set $X$. Fix a total order on $X$ and well-order monomials in $\mathbb K\langle X\rangle$ using degree-lexicographic order. Let $\mathscr B$ be a set of homogeneous elements of the form $W-f_W$ where $W$ is a monomial in $\mathbb K\langle X\rangle$ and $f_W\in\mathbb K\langle X\rangle$ is a linear combination of monomials less than $W$. Denote the set of all monomials $W$ such that $W-f_W\in\mathscr B$ for some $f_W\in\mathbb K\langle X\rangle$ by ${\rm ht}(\mathscr B)$; this is the set of \emph{high terms} of $\mathscr B$. For simplicity, and to suit our purposes, we assume $\mathscr B$ has been chosen so that $W\in {\rm ht}(\mathscr B)$ implies no proper subword of $W$ is in ${\rm ht}(\mathscr B)$.
If $m$ and $m'$ are monomials and $W-f_{W} \in\mathscr B$, let $r_{mWm'}:\mathbb K\langle X\rangle\to \mathbb K\langle X\rangle$ be the $\mathbb K$-linear map that is the identity on all monomials of $\mathbb K\langle X\rangle$ except $r_{mW m'}(mWm')=mf_Wm'$. We refer to the maps $r_{mWm'}$ as \emph{reductions}. Note that any infinite sequence of reductions applied to an element of $\mathbb K\langle X\rangle$ eventually stabilizes, though two such sequences need not result in the same element of $\mathbb K\langle X\rangle$. An \emph{overlap} occurs when there exist monomials $m, m', m''$ and elements $W_1-f_{W_1}, W_2-f_{W_2}\in \mathscr B$ such that $W_{1}=mm'$ and $W_{2}=m'm''$. An overlap is \emph{resolvable} if there are sequences of reductions $r$ and $r'$ such that $r(f_{W_1}m'')=r'(mf_{W_2})$.
Consider the graded algebra $R=\mathbb K\langle X\rangle/\langle\mathscr B\rangle$. If all overlaps of $\mathscr B$ in homogeneous degrees $\le d$ are resolvable, then monomials of degree $\le d$ that do not contain any element of ${\rm ht}(\mathscr B)$ as a subword form a $\mathbb K$-basis for $R_{\le d}$. In this case we say $\mathscr B$ is a \emph{Gr\"obner basis to degree }$d$. If an overlap does not resolve, one may ``force'' it to resolve by adding to $\mathscr B$ a new element $W-f_W$ where $W$ is the largest monomial such that $\lambda(W-f_W)=r(f_{W_1}m''-mf_{W_2})$ for some $\lambda\in\mathbb K^*$ and any sequence of reductions $r$ that eventually stabilizes on $f_{W_1}m''-mf_{W_2}$. Note that adding such elements to $\mathscr B$ does not change $R$.
We apply this theory to the algebra $T$ in Jordan normal form. Order the monomials in $\mathbb K \langle x, y, z \rangle$ with left lexicographical order determined by $y < x < z$. Let $$\mathscr B=\{ xy-yx, z^2-(zx-ax^2-byx-cy^2-dxz-eyz), zy-(Ax^2+Byx+Cy^2+Eyz)\}.$$ Under the chosen term-order there are two overlaps in degree 3: $z^3$ and $z^2y$. Reducing the differences $(f_{z^2})z-z(f_{z^2})$ and $(f_{z^2})y-z(f_{zy})$ as much as possible yields the following expressions:
\begin{align*} G_1 &= (1+d) zxz +(a-1)zx^2 - (a-d^2-eA) x^2z+ (a+ad+bA)x^3 +(b+e)Eyzx\\ & +(eB-deE+2de-b)yxz+(b+bd+bB+cA+cAE+ae-aeE)yx^2\\ &+(cE^2-c+eC-e^2E+e^2)y^2z +(c+cd+bC+cB+cBE-beE+be)y^2x\\ &+(cC+cCE-ceE+ce)y^3, \\ G_2 &= -Azx^2 - AEx^2z +(A-dA-AB)x^3+(E-BE-E^2)yzx\\ &-(dE+BE-dE^2)yxz+(B-a-dB-B^2-AC-ACE+aE^2-eA)yx^2\\ &-(CE^2+CE+eE-eE^2)y^2z+(C-b-dC-2BC-BCE+bE^2-eB)y^2x\\
&-(c+C^2+C^2E-cE^2+eC)y^3. \end{align*}
\begin{lemma} \label{cases} We have $\dim T_3=10$ if and only if $\dim {\rm Span}_{\mathbb K}\{G_1, G_2\}=1$. If $T$ is a twisted tensor product in Jordan normal form with $f=1$, then either \begin{enumerate} \item $G_2= 0\neq G_1$ in $\mathbb K\langle x,y,z\rangle$ and $(a,d)$ is not a root of any $f_n(t,u)$, or \item $G_2\neq 0$, $e=0$, $d=-1$, $A=1$ and $G_1=(1-a)G_2$. \end{enumerate} \end{lemma}
\begin{proof} There are eleven monomials of degree 3 in $\mathbb K\langle x,y,z\rangle$ that do not contain an element of ${\rm ht}(\mathscr B)=\{xy, zy, z^2\}$ as a subword. If $G_1=G_2=0$, then $\mathscr B$ is a Gr\"obner basis for $T$ to degree 3 and $\dim T_3=11$. If $G_1$ and $G_2$ are linearly independent, then in order for both overlaps $z^3$ and $z^2y$ to resolve, suitable rescalings of both $G_1$ and $G_2$ must be appended to $\mathscr B$, implying $\dim T_3=9$. The first statement follows.
If $T$ is a twisted tensor product, then the Hilbert series of $T$ is $(1-t)^{-3}$, hence $\dim T_3=10$. If $G_2=0$, then $A=0$, so $(a,d)$ is not a root of any $f_n(t,u)$ by Proposition \ref{not a zero of fn}. In particular, $a\neq 1$ and thus $G_1\neq 0$.
If $G_2\neq 0$, then $G_1=\lambda G_2$ for some scalar $\lambda$. Since $G_2$ has no $zxz$ term, we must have $d=-1$. Since $T$ is in Jordan normal form, $e\in\{0,1\}$. We claim $e=0$. If $e=1$, then $d=E=-1$. Examining the coefficients of $zx^2$ and $x^2z$, we have $1-a=\lambda A$ and $A+1-a=\lambda A$. This implies $A=0=1-a$, contradicting Proposition \ref{not a zero of fn}. Thus $e=0$.
By Lemma \ref{JNFs}, since $e=0$, we have $A\in\{0,1\}$. We claim $A=1$. Suppose to the contrary that $A=0$. Then $$G_2 = E(1-B-E) yzx+E(1-B-E) yxz + S$$ where $S\in {\rm Span}_{\mathbb K} \{y^i x^j z^k\}.$ The image of $G_2$ in $T$ is zero, and since $T$ is a twisted tensor product, $\{y^ix^jz^k\}$ is a $\mathbb K$-basis for $T$; hence if the coefficient $E(1-B-E)$ of the first two terms vanished, we would have $G_2=S=0$, contradicting $G_2\neq 0$. Thus $E\neq 0$. Since $G_1=\lambda G_2$ and the $zx^2$ coefficient of $G_2$ is $-A=0$, comparing coefficients of $zx^2$ shows that $a-1=0$, again contradicting Proposition \ref{not a zero of fn}. Thus $A=1$, and the fact that $\lambda = 1-a$ follows by comparing the coefficients of the $zx^2$ terms. \end{proof}
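The dimension counts in Lemma \ref{cases} can be sanity-checked by brute force. The following Python sketch (helper name ours) counts the degree-3 words over $\{x,y,z\}$ avoiding the high terms $\{xy,zy,z^2\}$, then the effect of appending the high terms $zxz$ of $G_1$ (when $1+d\neq0$) and $zx^2$ of $G_2$ (when $A\neq0$), matching the counts $11$, $10$, and $9$ used in the proof.

```python
from itertools import product

def count_avoiding(n, forbidden):
    """Number of degree-n words over {x,y,z} containing no forbidden subword."""
    return sum(1 for w in product("xyz", repeat=n)
               if not any(f in "".join(w) for f in forbidden))

ht = ["xy", "zy", "zz"]                        # high terms of the original B
print(count_avoiding(3, ht))                   # 11 normal words in degree 3
print(count_avoiding(3, ht + ["zxz"]))         # 10 after appending G_1
print(count_avoiding(3, ht + ["zxz", "zxx"]))  # 9 after appending G_1 and G_2
```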
In a forthcoming paper we will study the geometry, in the sense of \cite{ATVI}, of twisted tensor products. In the case $f = 1$, $G_2 = 0$, the point scheme of $T$ is reducible, whereas in the case $f = 1$, $G_2 \neq 0$, the point scheme of $T$ is generically an elliptic curve. We refer to quadratic twisted tensor products (in Jordan normal form) where $f=1$ and $G_2=0$ as \emph{reducible}, and those where $f=1$ and $G_2\neq 0$ as \emph{elliptic}.
We consider these two cases in the following subsections.
\subsection{The reducible case.} In this subsection we fix $$T = T(a, b, c, d, e, 1; A, B, C, 0, E, 0)$$ and show that if $G_2=0$ and if $(a,d)$ is not a root of any $f_n(t,u)$, then $T$ is a quadratic twisted tensor product.
We define a filtration on $T$ following Section 4.2.1 of \cite{Loday-Vallette}. Let $V_1 = \mathbb K y$ and $V_2 = \mathbb K x \oplus \mathbb K z$, so $T_1=V_1\oplus V_2$. Then the natural identification $$(T_1)^{\tensor n} \cong \bigoplus_{(i_1,\ldots, i_n)\in \{1,2\}^n} V_{i_1}\tensor \cdots\tensor V_{i_n}$$ endows the tensor algebra $T(T_1)$ with the structure of a $\mathbb K$-vector space graded by the ordered monoid $\mathcal M=\bigcup_{n=0}^{\infty}\{1,2\}^n$ of all tuples with entries in $\{1,2\}$. The set $\mathcal M$ is a monoid under concatenation of tuples and the left-lexicographic order $$\emptyset<(1)<(2)<(1,1)<(1,2)<(2,1)<(2,2)<\cdots$$ is compatible with concatenation. It is clear that the $\mathcal M$-grading respects the multiplication in $T(T_1)$. Observe that this is a refinement of the ${\Bbb N}$-grading given by tensor degree.
The filtration naturally associated to this $\mathcal M$-grading is given by $$F_{(i_1,\ldots,i_n)}T(T_1)= \bigoplus_{(j_1,\ldots,j_m)\le (i_1,\ldots,i_n)} V_{j_1}\tensor\cdots\tensor V_{j_m}.$$ The canonical projection $T(T_1)\to T$ induces a filtration on $T$, and we denote the associated $\mathcal M$-graded algebra by $\hbox{gr}_F T$ or just $\hbox{gr}\ T$.
There is a canonical $\mathcal M$-graded algebra homomorphism $T(T_1)\to \hbox{gr}\ T$, and we denote the kernel of its restriction to $T_1\tensor T_1$ by $R_{\rm lead}$. Let $T^{\circ} = T(T_1)/(R_{\rm lead})$. Noting that $G_2=0$ implies $A=0$, we have $$T^{\circ} = \mathbb K \langle y, x, z\rangle/\langle zx-ax^2-dxz-z^2, zy, xy \rangle.$$
\begin{lemma} \label{Tcirc} Assume $G_2=0$ and $(a,d)$ is not a root of any $f_n(t,u)$. Then the algebra $T^{\circ}$ is Koszul with Hilbert series $H_{T^{\circ}} = (1-t)^{-3}$. The set $\{y^i x^j z^k : i, j, k \geq 0\}$ is a $\mathbb K$-basis for $T^{\circ}$. \end{lemma}
\begin{proof} Let $D=\mathbb K\langle x,z\rangle/\langle zx-ax^2-dxz-z^2\rangle$ and define a graded twisting map $\rho: D \tensor \mathbb K[y] \to \mathbb K[y] \tensor D$ by ${\rho}(1 \tensor y) = y\tensor 1$, ${\rho}(x \tensor 1) = 1\tensor x$, ${\rho}(z \tensor 1) = 1\tensor z$, and ${\rho}_{\ge 2}=0$. Note that $\mathbb K[y] \tensor_{\rho} D$ and $T^{\circ}$ are isomorphic as ${\Bbb N}$-graded algebras.
Since $(a,d)$ is not a root of any $f_n(t,u)$, \cite[Theorem 6.2]{C-G} and our results in Section \ref{ttps of two one-variable polynomial algebras} imply that $D = \mathbb K[x] \tensor_{\sigma} \mathbb K[z]$ for the obvious twisting map $\sigma$. Thus $T^{\circ}\cong \mathbb K[y] \tensor_{\rho} (\mathbb K[x] \tensor_{\sigma} \mathbb K[z])$. This shows $\{y^i x^j z^k : i, j, k \geq 0\}$ is a $\mathbb K$-basis for $T^{\circ}$ and hence the Hilbert series of $T^{\circ}$ is $(1-t)^{-3}$.
Finally, by \cite[Theorem 6.2 and Theorem 5.5]{C-G} we see that $D$ is Koszul. By \cite[Theorem 5.3 (2)]{C-G}, $\mathbb K[y] \tensor_{\rho} D$ is Koszul, hence $T^{\circ}$ is Koszul. \end{proof}
\begin{thm} \label{ttps with A=0, diagonal Jordan form} If $G_2=0$ and $(a,d)$ is not a root of any $f_n(t,u)$, then $$T = T(a, b, c, d, e, 1; A, B, C, 0, E, 0)$$ is a quadratic twisted tensor product. Moreover, $T$ is Koszul. \end{thm}
\begin{proof} Since $(a,d)$ is not a root of $f_1(t,u) = 1-t$, $a\neq 1$ and hence $G_1\neq 0$. By Lemma \ref{cases}, $\dim T_3 = 10$. As graded vector spaces, $T$ and $\hbox{gr} \ T$ are isomorphic, so $\dim (\hbox{gr}\ T)_3=10$. By Lemma \ref{Tcirc}, the Hilbert series of $T^{\circ}$ is $(1-t)^{-3}$, so the canonical graded projection $T^{\circ} \to \hbox{gr} \ T$ is injective in homogeneous degree 3.
Lemma \ref{Tcirc} also shows that the algebra $T^{\circ}$ is Koszul, so, by \cite[Theorem 4.2.4]{Loday-Vallette}, the canonical graded projection $T^{\circ} \to \hbox{gr} \ T$ is an isomorphism and $T$ is Koszul.
By Lemma \ref{Tcirc}, the algebra $T^{\circ}$ has a linear basis of the form $\{y^i x^j z^k : i, j, k \geq 0\}$. Since the projection $T^{\circ}\to \hbox{gr}\ T$ is an isomorphism, $\hbox{gr} \ T$ also has this basis, and consequently $T$ does as well. It follows that $T$ is a twisted tensor product. \end{proof}
We conclude this subsection by recording the solution set of the system of equations determined by setting $G_2=0$, and consequently, the parameter values for which $T$ is a twisted tensor product of $R$ and $S$. The following is immediate from the form of the expression $G_2$.
\begin{lemma} \label{system for z^2y to resolve} We have $G_2=0$ if and only if $A=0$ and all of the following hold: \begin{align*} &(1) \ \ E(1-B-E) = 0, \\ &(2) \ \ E(-d-B+dE) = 0, \\ &(3) \ \ B(1-d-B)-a(1-E^2) = 0, \\ &(4) \ \ E(C+CE+e-eE) = 0, \\ &(5) \ \ C(1-d-2B-BE)-b(1-E^2)-eB = 0, \\ &(6) \ \ (1+E)(-c(1-E)-C^2)-eC = 0. \end{align*}
\end{lemma}
\begin{thm} \label{ttps with G_2 = 0} Suppose that $T = T(a, b, c, d, e, 1; A, B, C, 0, E, 0)$ is a quadratic twisted tensor product in Jordan normal form. If $G_2=0$, then $A = 0$, $(a, d)$ is not a zero of $f_n(t,u)$ for all $n \geq 1$, and the parameters satisfy one of the following cases: \begin{itemize} \item[(i)] $a = B(1-d-B)$, $b = 0$, $c = 0$, $e=C = E = 0$; \item[(ii)] $e=B = C = 0$, $E = 1$; \item[(iii)] $d = -1$, $B = 2$, $e=C = 0$, $E = -1$; \item[(iv)] $a = B(1-d-B)$, $b = 1-d-2B$, $c=-1$, $C = 1$, $e=E = 0$; \item[(v)] $e=0$, $d= E= -1$, $B = 2$, $C = 1$; \item[(vi)] $e=1$, $d=E=0$, $a=B(1-B)$, $b=C-B-2BC$, $c=-C(1+C)$; \item[(vii)] $B=C=0$, $e=d=E=1$. \end{itemize} Conversely, if $(a, d)$ is not a zero of $f_n(t,u)$ for all $n \geq 1$, and if the parameters satisfy $A=0$ and the conditions in one of $(i)$--$(vii)$, then $T$ is a twisted tensor product of $R$ and $S$. Moreover, all of the algebras described in $(i)$--$(vii)$ are Koszul. \end{thm}
\begin{proof} By Lemmas \ref{cases} and \ref{system for z^2y to resolve}, since $G_2=0$ we have $A=0$ and $(a, d)$ is not a zero of $f_n(t,u)$ for all $n \geq 1$. Since $T$ is assumed to be in Jordan normal form, $e\in\{0,1\}$. If $e=0$, then $C\in\{0,1\}$ and if $e=1$ then $d=E$ by Lemma \ref{JNFs}. Furthermore, equation (1) in Lemma \ref{system for z^2y to resolve} implies $E=0$ or $B+E=1$. Thus to prove the first part of the theorem, it suffices to consider the cases $(e=C=E=0)$, $(e=C=0, E\neq 0)$, $(e=E=0, C=1)$, $(e=0, C=1, E\neq 0)$, $(e=1, d=E=0)$, and $(e=1, d=E\neq 0)$.
Case (i) results from setting $e=C=E=0$ in the equations of Lemma \ref{system for z^2y to resolve}. If $e=C=0$ and $E\neq 0$, then $B+E=1$ and equation (2) in Lemma \ref{system for z^2y to resolve} becomes $BE(1+d)=0$. Since $E\neq 0$, we have $B=0$ or $d=-1$. If $B=0$ we have $E=1$, which is case (ii). If $d=-1$, then equation (3) in Lemma \ref{system for z^2y to resolve} can be rewritten as $(1-a)(1-E^2)=0$. Since $f_1(a,d)=1-a\neq 0$, we have $E=\pm 1$. If $E=1$, the parameters belong to case (ii), otherwise we have case (iii).
Setting $e=E=0$ and $C=1$ in the equations of Lemma \ref{system for z^2y to resolve} results in case (iv). If $e=0$, $C=1$, and $E\neq 0$, then $B+E=1$ and, from equation (4) of Lemma \ref{system for z^2y to resolve}, $1+E=0$. These two equations imply $B=2$ so, using equation (5) of Lemma \ref{system for z^2y to resolve}, we have $d=-1$, which is case (v).
Now we turn to the case $e=1$ and $d=E$. If $d=E=0$, then the equations of Lemma \ref{system for z^2y to resolve} determine $a, b,$ and $c$ in terms of $B$ and $C$ as in case (vi). If $d=E\neq 0$, then $B+E=1$ and equation (2) of Lemma \ref{system for z^2y to resolve} becomes $E(E^2-1)=0$, so $E^2-1=0$. Using $B+E=1$ and $E^2-1=0$ in concert with equation (5) shows $B=0$, hence $E=1$ and, by equations (4) and (6), $C=0$. This is case (vii), and the first part of the proof is complete.
The converse, and the fact that $T$ is Koszul in each case, follow from Theorem \ref{ttps with A=0, diagonal Jordan form} and Lemma \ref{system for z^2y to resolve}, after verifying that all six equations of the Lemma are satisfied in each case. \end{proof}
\subsection{The elliptic case.}
In this subsection we treat the case where $f = 1$ and $G_2 \neq 0$. \begin{lemma} \label{caseA=1} If $T$ is a graded twisted tensor product in Jordan normal form such that $f = 1$ and $G_2\neq 0$, then $A=1$, $d = E = -1$, $e=0$, and $b = (1-a)(2-B)$. \end{lemma}
\begin{proof} By Lemma \ref{cases}, we have $e=0$, $d=-1$, $A=1$, and $G_1=(1-a)G_2$. Now $G_1$ and $G_2$ simplify to: \begin{align*} G_1 &= (a-1)zx^2 - (a-1) x^2z+ bx^3 +bEyzx -byxz\\ &+(bB+c+cE)yx^2+(cE^2-c)y^2z +(bC+cB+cBE)y^2x\\ &+(cC+cCE)y^3 \\ G_2 &= -zx^2 - Ex^2z +(2-B)x^3+(E-BE-E^2)yzx\\ &+(E-BE-E^2)yxz+(2B-a-B^2-C-CE+aE^2)yx^2\\ &-(CE^2+CE)y^2z+(2C-b-2BC-BCE+bE^2)y^2x\\
&-(c+C^2+C^2E-cE^2)y^3. \end{align*} The fact that $b=(1-a)(2-B)$ follows by considering the coefficients of $x^3$.
Denote the coefficients of $x^2z, x^3, yzx,$ and so on, in the above expression for $G_2$ by $\alpha_{x^2z}, \alpha_{x^3}, \alpha_{yzx}$, etc., respectively. Since $G_1=(1-a)G_2$, adjoining $$w=zx^2-\alpha_{x^2z}x^2z-\alpha_{x^3}x^3-\alpha_{yzx}yzx-\alpha_{yxz}yxz-\alpha_{yx^2}yx^2-\alpha_{y^2z}y^2z- \alpha_{y^2x}y^2x-\alpha_{y^3}y^3$$ to the set $\mathscr B$ defined above makes both overlaps $z^3$ and $z^2y$ resolvable. So $\mathscr B'=\mathscr B\cup \{w\}$ is a Gr\"obner basis to degree 3 for $T$. Applying reductions corresponding to elements of $\mathscr B'$ to the difference $(zx^2-f_{zx^2})y-zx(xy-yx)$, we obtain \begin{align*} (1+E)x^4&-[2(\alpha_{x^3}+\alpha_{yzx})-B(1+E)]yx^3\\ &-[2(\alpha_{yx^2}+B\alpha_{yzx})-C(1+E)^2]y^2x^2\\ &-[2(\alpha_{y^2x}+C\alpha_{yzx})-BCE(1+E)]y^3x\\ &-[2(\alpha_{y^3}+C\alpha_{y^2z})-C^2E(1+E)]y^4. \end{align*} As $T$ is a twisted tensor product of $\mathbb K[x,y]$ and $\mathbb K[z]$, the obvious map $\mathbb K[x,y]\to T$ is injective. Thus the expression above, whose image in $T$ vanishes, must also vanish in the free algebra $\mathbb K\langle x,y,z\rangle$. Hence we have $E=-1$. \end{proof}
We note that the conditions $A=1$, $d=E=-1$, $e=0$ and $b=(1-a)(2-B)$ imply $G_1=(1-a)G_2$, so there are no additional restrictions on the parameters to consider. We also note that the calculation used to deduce $E=-1$ shows that the overlap $zx^2y$ created by adjoining $w$ to $\mathscr B$ is resolvable when these conditions hold.
Now we will prove that the conditions of Lemma \ref{caseA=1} ensure that $T$ is a graded twisted tensor product.
\begin{prop} \label{G-basis for ttp diagonal Jordan form, A=1} Let $T = T(a, (1-a)(2-B), c, -1, 0, 1; 1, B, C, 0, -1, 0)$, so $b = (1-a)(2-B)$. Let $\beta = 2-B$. The ideal of relations that defines $T$ has a finite Gr\"obner basis given by: \begin{align*} z^2 &- (zx-ax^2-byx-cy^2+xz) \\ zy &- (x^2+Byx+Cy^2-yz) \\ xy &- yx \\ zx^2 &- (x^2z-\beta yxz+\beta x^3+B\beta yx^2+ C \beta y^2x - \beta yzx). \end{align*} In particular, $\{y^i x^j (zx)^k z^l : i, j, k \geq 0, l \in \{0,1\} \}$ is a $\mathbb K$-linear basis for $T$, and the Hilbert series of $T$ is $1/(1-t)^3$. \end{prop}
\begin{proof} Since the conditions $A=1$, $d=E=-1$, $e=0$ and $b=(1-a)(2-B)$ imply $G_1=(1-a)G_2$, adjoining $w=zx^2-(x^2z-\beta yxz+\beta x^3+B\beta yx^2+ C \beta y^2x - \beta yzx)$ to the set $\mathscr B$ makes both overlaps $z^3$ and $z^2y$ resolvable. Two new overlaps are created: $z^2x^2$ and $zx^2y$. Straightforward calculations show these overlaps are resolvable as well. Thus $\mathscr B\cup\{w\}$ determines a finite Gr\"obner basis for $T$.
The stated basis is precisely the set of monomials that do not contain any of $xy, zy, z^2,$ or $zx^2$ as subwords. The Hilbert series follows by an easy counting argument. \end{proof}
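The counting argument can be made explicit: the snippet below (ours) verifies in low degrees that the words over $\{x,y,z\}$ avoiding the subwords $xy, zy, z^2, zx^2$ are counted by $\binom{n+2}{2}$, in agreement with both the stated basis $\{y^ix^j(zx)^kz^l\}$ and the Hilbert series $1/(1-t)^3$.

```python
from itertools import product
from math import comb

FORBIDDEN = ["xy", "zy", "zz", "zxx"]  # the high terms xy, zy, z^2, zx^2

def dim_normal_words(n):
    """Degree-n words over {x,y,z} avoiding every forbidden subword."""
    return sum(1 for w in product("xyz", repeat=n)
               if not any(f in "".join(w) for f in FORBIDDEN))

def dim_stated_basis(n):
    """Monomials y^i x^j (zx)^k z^l of degree n with l in {0, 1}."""
    return sum(1 for i in range(n + 1) for j in range(n + 1)
               for k in range(n // 2 + 1) for l in (0, 1)
               if i + j + 2 * k + l == n)

for n in range(8):
    assert dim_normal_words(n) == dim_stated_basis(n) == comb(n + 2, 2)
print([dim_normal_words(n) for n in range(8)])  # coefficients of 1/(1-t)^3
```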
\begin{prop} \label{basis for ttp diagonal Jordan form, A=1} The set ${\mathcal S} = \{y^i x^j z^k : i, j, k \geq 0\}$ is a $\mathbb K$-basis for $T$. \end{prop}
\begin{proof} Let $U$ denote the $\mathbb K$-linear span of ${\mathcal S}$. To show that $U = T$, by Proposition \ref{G-basis for ttp diagonal Jordan form, A=1} and the fact that $x$ and $y$ commute, it suffices to prove that $(zx)^i \in U$ for all $i \geq 0$.
To start we recall that $zy = x^2+B yx + Cy^2-yz$. An easy induction shows that for all $j\ge 1$, there exist polynomials $p_j(x,y)$ such that $$zy^j = p_j(x,y) + (-1)^jy^jz.$$ In particular, $zy^j$ is in $U$ for all $j\ge 0$.
Next we claim that $z^2 x^j$ and $zx^j$ are in $U$. One checks that there exist polynomials $f_1(x,y)$ and $f_2(x,y)$ such that $$z^2x = f_1(x,y) + f_2(x,y)z^2.$$ Now for $j > 1$, $$z^2x^j = (f_1(x,y)+f_2(x,y)z^2)x^{j-1},$$ so inductively we see that $z^2x^j \in U$. Furthermore, $zx = ax^2+byx+cy^2+z^2-xz \in U$ and hence $$zx^j = (ax^2+byx+cy^2+z^2-xz)x^{j-1},$$ so inductively $zx^j \in U$.
Now we claim that $(zx)^i$ is in $U$ for all $i\ge 1$. We have noted that $zx\in U$. Suppose, inductively, that $(zx)^i \in U$ for some $i\ge 1$. Since $x$ and $y$ commute, it follows that $x(zx)^i\in U$. Write $x(zx)^i = \sum a_{jkl}y^jx^kz^l$. Then $$(zx)^{i+1} = zx(zx)^i =\sum a_{jkl}zy^jx^kz^l=\sum a_{jkl}p_j(x,y)x^kz^l+\sum a_{jkl}(-1)^jy^jzx^kz^l,$$ and it is apparent from the observations above that $(zx)^{i+1} \in U$.
Since ${\mathcal S}$ spans $T$ and the number of elements of ${\mathcal S}$ in degree $n$ is $\binom{n+2}{2}$, the fact that the Hilbert series of $T$ is $1/(1-t)^3$ implies that ${\mathcal S}$ is linearly independent. \end{proof}
\begin{thm} \label{characterization of elliptic type ttps} The algebra $T = T(a, (1-a)(2-B), c, -1, 0, 1; 1, B, C, 0, -1, 0)$ is a twisted tensor product of $R$ and $S$. Moreover, the generators $x, y, z$ of $T$ are left and right regular. \end{thm}
\begin{proof} The first statement follows from Proposition \ref{basis for ttp diagonal Jordan form, A=1} and the Hilbert series of $T$. For the second statement, note that $T \cong T^{op}$ as graded algebras. It follows that an element of $T$ is left regular if and only if it is right regular. It is clear from the basis $\{y^i x^j z^k : i, j, k \geq 0\}$, and the fact that $x$ and $y$ commute, that $x$ and $y$ are right regular and that $z$ is left regular. \end{proof}
We conclude this section by remarking that, up to isomorphism, all of the quadratic twisted tensor products of $R$ and $S$ are described in Theorem \ref{Ore extension classification} (Ore type), Theorem \ref{ttps with G_2 = 0} (reducible type), and Theorem \ref{characterization of elliptic type ttps} (elliptic type).
\section{The Koszul property and Yoneda algebras}
We continue to use the notation $R = \mathbb K[x,y]$, $S = \mathbb K[z]$ and $$T = T(a, b, c, d, e, f; A, B, C, D, E, F)$$ (cf. the beginning of Section \ref{sec:QuadTTPs}). In this section we determine which of the quadratic twisted tensor products of $R$ and $S$ are Koszul. In \cite{C-G} it was asked whether there exist Koszul algebras $R$ and $S$, and a twisting map $\tau: S \tensor R \to R \tensor S$, such that the algebra $R \tensor_{\tau} S$ is quadratic but not Koszul. Theorem \ref{Koszul property for T(g, d)} below affords examples of this phenomenon. We also compute the structure of the Yoneda algebra for these non-Koszul examples. (Recall that for a Koszul algebra, the Yoneda algebra and the quadratic dual algebra are isomorphic; see \cite[Definition 1, p. 19]{PP} for example.)
\subsection{The Koszul property}
In the case of an Ore-type or reducible twisted tensor product, the Koszul property follows immediately from the results in the preceding section.
\begin{thm} \label{Koszul property for Ore extensions, A = 0} Let $T$ be a quadratic twisted tensor product of $R$ and $S$. If $T$ is of Ore type or reducible type, then $T$ is Koszul. \end{thm}
\begin{proof} If $T$ is of Ore type, then $T$ is a graded Ore extension of $R$ by Proposition \ref{oneSidedOre}. Since $R$ is Koszul, it follows that $T$ is Koszul (see \cite[Chapter 4, Section 7, Example 2]{PP}, for example).
If $T$ is of reducible type, then $T$ is Koszul by
Theorem \ref{ttps with A=0, diagonal Jordan form}.
\end{proof}
It remains to consider the quadratic twisted tensor products where $A = 1$. Recall that these are described in Theorem \ref{characterization of elliptic type ttps}. It is convenient, for this section and the remainder of the paper, to change the presentation of the algebras of this type. This change of presentation, albeit motivated by easing computations, is also natural in a certain sense: the cubic equation defining the point scheme (generically an elliptic curve) is in Weierstrass form (see, for example, \cite[Chapter III.1]{Sil}). In order to make this change of presentation it is necessary to assume that $\text{char}\, {\mathbb K} \ne 2$. We note that our main results, Theorem \ref{Koszul property for T(g, d)} and Theorem \ref{AS-regular algebras}(3) below, can be proved by the exact same methods without changing presentation, hence they can be seen to hold over any field.
\begin{lemma} \label{simple presentation, type A=1} Suppose that ${\rm char}\, {\mathbb K} \ne 2$. Define $$\beta = 2-B, \ \ \gamma = C+2(a-1), \ \ g = \gamma-\beta^2/4, \ \ h = c-(a-1)(C+a-1).$$ The algebra $T(a, (1-a)(2-B), c, -1, 0, 1; 1, B, C, 0, -1, 0)$ can be presented as $$\frac{\mathbb K \langle x, y, w \rangle}{\langle wy+yw-x^2-gy^2, w^2+h y^2, xy-yx \rangle}.$$ \end{lemma}
\begin{proof} This is a straightforward computation using the invertible change of variables: $x \mapsto x-(\beta/2)y$, $y \mapsto y$, $w \mapsto -x+(a-1)y+z$. \end{proof}
Using this lemma, we change notation and write $$T(g, h) = T(a, (1-a)(2-B), c, -1, 0, 1; 1, B, C, 0, -1, 0).$$ When we use this notation, we are implicitly assuming that $\text{char} \, {\mathbb K} \ne 2$. Below we will establish when the Koszul property holds for $T(g,h)$ by computing a minimal graded free resolution of the trivial module $_T\mathbb K$. That calculation makes use of the Gr\"obner basis described in the next lemma.
\begin{lemma} \label{Groebner basis for T(g, d)} Order the generators of $T(g, h)$ as $y < x < w$ and use left lexicographical ordering on the monomials in $\mathbb K \langle y, x, w \rangle$. Then the defining ideal of $T(g, h)$ has a finite Gr\"obner basis consisting of \begin{align*} &xy-yx \\ &wy+yw-x^2-gy^2 \\ &w^2+h y^2 \\ &wx^2 -x^2w. \end{align*} Consequently, $\{y^i x^j (wx)^k w^l : i, j, k \geq 0, l \in \{0, 1\}\}$ is a $\mathbb K$-basis for $T(g, h)$. The elements $x^2$ and $y^2$ are central in $T(g, h)$. \end{lemma}
\begin{proof} The proof of the first statement is a straightforward computation using Bergman's diamond lemma. The second and third statements follow immediately from the Gr\"obner basis. \end{proof}
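The rewriting system of Lemma \ref{Groebner basis for T(g, d)} can also be checked mechanically. The Python sketch below (names ours) implements reduction to normal form for the four rules, with sample rational values substituted for $g$ and $h$, and verifies that the defining relations reduce to zero and that $x^2$ and $y^2$ commute with each generator.

```python
from fractions import Fraction as F

g, h = F(2), F(3)  # sample rational values for the parameters g, h

# Each rule sends a high term to the polynomial f_W, stored as {word: coeff}.
RULES = {
    "xy":  {"yx": F(1)},
    "wy":  {"yw": F(-1), "xx": F(1), "yy": g},
    "ww":  {"yy": -h},
    "wxx": {"xxw": F(1)},
}

def add_term(p, word, coeff):
    c = p.get(word, F(0)) + coeff
    if c:
        p[word] = c
    else:
        p.pop(word, None)

def normal_form(p):
    """Rewrite high terms until none remains; this terminates since every
    rule replaces a word by strictly smaller words in left-lex order y < x < w."""
    p = {w: F(c) for w, c in p.items() if c}
    while True:
        hit = next(((word, word.find(w), w) for word in p for w in RULES
                    if word.find(w) != -1), None)
        if hit is None:
            return p
        word, i, w = hit
        coeff = p.pop(word)
        for m, c in RULES[w].items():
            add_term(p, word[:i] + m + word[i + len(w):], coeff * c)

# The defining relations reduce to zero ...
for rel in [{"xy": 1, "yx": -1},
            {"wy": 1, "yw": 1, "xx": -1, "yy": -g},
            {"ww": 1, "yy": h},
            {"wxx": 1, "xxw": -1}]:
    assert normal_form(rel) == {}

# ... and x^2, y^2 are central: t*u and u*t have equal normal forms.
for u in ("xx", "yy"):
    for t in "yxw":
        assert normal_form({t + u: 1}) == normal_form({u + t: 1})
```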
For later reference, we fix the following sequences of graded free modules, which we will prove are resolutions of $T/T_+={_T\mathbb K}$.
\begin{defn} \label{resolutions} Let $T=T(g,h)$. \begin{enumerate} \item If $h\neq 0$, define $(Q_{\bullet},d^Q_{\bullet})$ to be the sequence $$0\to T(-3) \xrightarrow{d^Q_3} T(-2)^{3} \xrightarrow{d^Q_2} T(-1)^{3} \xrightarrow{d^Q_1} T,$$ where $$d^Q_3 = \begin{bmatrix} h y & w & -h x \end{bmatrix}, \ \ d^Q_2 = \begin{bmatrix} -x & w-gy & y \\ 0 & h y & w \\ -y & x & 0 \end{bmatrix}, \ \ d^Q_1 = \begin{bmatrix} x \\ y \\ w \end{bmatrix}.$$ \item If $h=0$, define for each $i \geq 0$ a graded free left $T$-module by $$P_i = \begin{cases} T & i = 0 \\ T(-1)^3 & i = 1 \\ T(-2)^3 & i = 2 \\ T(-i) \oplus T(-i-1) & i \geq 3. \end{cases}$$ Also, define a map $d^P_i: P_i \to P_{i-1}$ via
$$d^P_1 = \begin{bmatrix} x \\ y \\ w \end{bmatrix}, \ \ d^P_2 = \begin{bmatrix} -x & w-g y & y \\ 0 & 0 & w \\ -y & x & 0 \end{bmatrix}, \ \ d^P_3 = \begin{bmatrix} 0 & w & 0 \\ wy & -y^2 & -wx \end{bmatrix}, \ \ d^P_4 = \begin{bmatrix} w & 0 \\ y^2 & w \end{bmatrix},$$ and for all $i \geq 5$, $$d^P_i = \begin{bmatrix} w & 0 \\ y^2 & -w \end{bmatrix}.$$ \end{enumerate}
\end{defn}
Using Lemma \ref{Groebner basis for T(g, d)}, it is easy to check that $(Q_{\bullet},d^Q_{\bullet})$ and $(P_{\bullet},d^P_{\bullet})$ are complexes of graded free modules with ${\rm coker}\ d^Q_1$ and ${\rm coker}\ d^P_1$ isomorphic to $T/T_+={_T \mathbb K}$.
The following standard fact is very useful for proving exactness of complexes of locally-finite graded modules.
\begin{lemma} \label{dimSum} Let $C_{\bullet}:0 \to V_n \to V_{n-1} \to \cdots \to V_1 \to V_0 \to 0$ be a finite complex where the $V_i$ are finite-dimensional vector spaces. Then \[\sum_{i = 0}^n (-1)^i \dim(V_i) = \sum_{i = 0}^n (-1)^i \dim(H_i(C_{\bullet})). \] \end{lemma}
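Lemma \ref{dimSum} follows from rank-nullity applied at each spot; the toy computation below (ours) illustrates it on a short complex over $\mathbb Q$, computing ranks by exact Gaussian elimination with the \texttt{fractions} module.

```python
from fractions import Fraction as F

def rank(M):
    """Rank of a rational matrix via exact Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A short complex 0 -> V_2 -> V_1 -> V_0 -> 0 over Q with d_1 d_2 = 0.
d2 = [[F(1)], [F(-1)]]   # V_2 = Q   -> V_1 = Q^2
d1 = [[F(1), F(1)]]      # V_1 = Q^2 -> V_0 = Q
dims = [1, 2, 1]         # dim V_0, dim V_1, dim V_2

# dim H_i = dim V_i - rank d_i - rank d_{i+1}, with d_0 = d_3 = 0.
r1, r2 = rank(d1), rank(d2)
hdims = [dims[0] - r1, dims[1] - r1 - r2, dims[2] - r2]

# Both alternating sums agree, as the lemma asserts.
assert dims[0] - dims[1] + dims[2] == hdims[0] - hdims[1] + hdims[2]
```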
Now we are ready to use Lemma \ref{Groebner basis for T(g, d)} to prove the main theorem of this section.
\begin{thm} \label{Koszul property for T(g, d)} Let $T=T(g,h)$. The complexes $(Q_{\bullet},d^Q_{\bullet})$ and $(P_{\bullet},d^P_{\bullet})$ are graded free $T$-module resolutions of $_{T}\mathbb K$ when $h\neq 0$ and $h=0$, respectively. In particular, the algebra $T(g, h)$ is Koszul if and only if $h = c-(a-1)(C+a-1)$ is nonzero. \end{thm}
\begin{proof} Since each $Q_i$ is generated in degree $i$, but $P_i$ is generated in degrees $i$ and $i+1$ for $i\ge 3$, it suffices to prove that $(Q_{\bullet},d^Q_{\bullet})$ and $(P_{\bullet},d^P_{\bullet})$ are exact in homological degrees $>0$.
Assume that $h \ne 0$. We claim that the map $d^Q_3$ is injective. Suppose that $u \in \hbox{ker}(d^Q_3)$. Then $ux = uy = uw = 0$, so $uT_1=0$. Then Theorem \ref{characterization of elliptic type ttps} implies that $u = 0$, as desired. We have shown that the complex $$0 \xrightarrow{} T(-3) \xrightarrow{d^Q_3} T(-2)^{3} \xrightarrow{d^Q_2} T(-1)^{3} \xrightarrow{d^Q_1} T$$ is exact at $T(-3)$; since $T$ is quadratic, the complex is exact at $T(-1)^3$. Because the Hilbert series of $T$ is $(1-t)^{-3}$, it follows that the complex is exact. Since $_T\mathbb K$ has a linear free resolution, we conclude that $T$ is Koszul when $h \ne 0$.
Now assume that $h = 0$. Notice that $w^2 = 0$ in $T$, so the map $\begin{bmatrix} 0 & w & 0 \end{bmatrix}$ is not injective. Since $T$ is quadratic, the complex $(P_{\bullet}, d^P_{\bullet})$ is exact at $T(-1)^3$.
Now we prove exactness at $P_3=T(-3)\oplus T(-4)$. Suppose that $\begin{bmatrix} u_1 & u_2 \end{bmatrix} \in \hbox{ker}(d^P_3)$. Then it follows that $u_2 wx = 0$. From the $\mathbb K$-basis described in Lemma \ref{Groebner basis for T(g, d)} we see that $u_2 = rw$ for some $r \in T$. We also have $u_1 w - u_2 y^2 = 0$, so, using the fact that $y^2$ is central, we have $(u_1-ry^2)w = 0$. The left annihilator ideal of $w$ is $Tw$, so $u_1-ry^2 = uw$ for some $u \in T$. Therefore $$\begin{bmatrix} u_1 & u_2 \end{bmatrix} = \begin{bmatrix} ry^2+uw & rw \end{bmatrix} = u \begin{bmatrix} w & 0 \end{bmatrix} +r \begin{bmatrix} y^2 & w \end{bmatrix} \in \hbox{im}(d^P_4).$$ We conclude exactness at $P_3$.
A similar, and easier, argument is used to show exactness in homological degree $i$ for all $i \geq 4$. We leave this argument to the reader.
To show exactness at $T(-2)^3$, note that for any fixed homogeneous degree $j$, the complex $(P_{\bullet},d^P_{\bullet})$ restricts to a finite complex of finite-dimensional vector spaces. Let $q(t)$ denote the Hilbert series of the homology group $H_2(P_{\bullet})$, and let $H_T$ denote the Hilbert series of $T$. Then the Hilbert series of $T(-i)$ is $t^iH_T$. Appending $T \to {_T\mathbb K} \to 0$ to $(P_{\bullet},d^P_{\bullet})$ and applying Lemma \ref{dimSum} yields $$1-H_T+3tH_T -3t^2 H_T + t^3 H_T = -q(t).$$ Rearranging and using the fact that $H_T = (1-t)^{-3}$ we have $$1+q(t) = H_T(1-3t+3t^2-t^3) = 1,$$ so $q(t) = 0$. We conclude exactness of $P_{\bullet}$ at $T(-2)^3$. Thus, $(P_{\bullet}, d^P_{\bullet})$ is a graded free resolution of $_T \mathbb K$.
\end{proof}
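The power-series manipulation at the end of the proof is easy to verify numerically; the snippet below (ours) checks, up to a truncation, that $H_T\cdot(1-3t+3t^2-t^3)=1$ when $H_T=(1-t)^{-3}$, so that $q(t)=0$ as claimed.

```python
from math import comb

N = 12
H = [comb(n + 2, 2) for n in range(N)]   # dim T_n for H_T = (1-t)^{-3}
p = [1, -3, 3, -1]                       # coefficients of (1-t)^3

# Truncated Cauchy product H_T * (1-t)^3; it should be the constant series 1.
prod = [sum(p[k] * H[n - k] for k in range(len(p)) if n >= k)
        for n in range(N)]
assert prod == [1] + [0] * (N - 1)
```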
\subsection{The Yoneda algebra of $T(g, h)$}
For any graded algebra $A$, the graded Hom functor for graded left $A$-modules is $$\hbox{Hom}_A(M,N)=\bigoplus_{n\in{\Bbb Z}} \hbox{Hom}^n_A(M,N)=\bigoplus_{n\in{\Bbb Z}} \hom_A(M,N[n]),$$ where $\hom_A(M,N[n])$ is the space of degree-0 graded $A$-module homomorphisms $M\to N[n]$. This functor is left exact, and its $i$-th right derived functor is denoted $\hbox{\rm Ext}^i_A(M,N)$. The space $\hbox{\rm Ext}^i_A(M,N)$ inherits a grading from the graded Hom functor, and we denote the homogeneous degree-$j$ component by $\hbox{\rm Ext}^{i,j}_A(M,N)$.
One may compute the space $\hbox{\rm Ext}^i_A(M,N)$ as the cohomology group $H^i(\hbox{Hom}_A(P_{\bullet},N))$ where $(P_{\bullet},d_{\bullet})$ is a graded free resolution of $M$. Thus $$\hbox{\rm Ext}_A(M,N)=\bigoplus_i \hbox{\rm Ext}^i_A(M,N)=\bigoplus_{i,j} \hbox{\rm Ext}^{i,j}_A(M,N)$$ is bigraded. When $M=N={_A}\mathbb K$, we abbreviate $E^{i,j}(A)=\hbox{\rm Ext}^{i,j}_A(\mathbb K,\mathbb K)$, $E^i(A)=\bigoplus_j E^{i,j}(A)$ and $E(A)=\bigoplus_i E^i(A)$. The vector space $E(A)=\hbox{\rm Ext}_A(\mathbb K,\mathbb K)$ admits the structure of a bigraded algebra, called the \emph{Yoneda algebra}, via the Yoneda composition product. We briefly recall the definition of this product; see \cite[p. 4]{PP} for more details.
Let $\alpha \in E^i(A)$ and $\beta \in E^j(A)$. The product $\alpha \star \beta \in E^{i+j}(A)$ is defined as follows. Choose representatives $f \in \hbox{Hom}_A(P_i, \mathbb K)$ and $g \in \hbox{Hom}_A(P_j, \mathbb K)$ for the classes $\alpha$ and $\beta$, respectively. Using projectivity, one lifts $g$ as in the first two rows of the following diagram to obtain a map $g_i \in \hbox{Hom}_A(P_{i+j}, P_i)$. Then $\alpha \star \beta$ is defined to be $[f \circ g_i] \in E^{i+j}(A)$.
\centerline{\xymatrix{ P_{i+j} \ar[r] \ar[d]^{g_i} & P_{i+j-1} \ar[r] \ar[d]^{g_{i-1}} & \cdots \ar[r] & P_j \ar[d]^{g_0} \ar[rd] ^{g}\\ P_i \ar[r] \ar[d]^{f} & P_{i-1} \ar[r] & \cdots \ar[r] &P_0 \ar[r]_{\epsilon} & \mathbb K \\ \mathbb K }}
We conclude this section by establishing a presentation for the Yoneda algebra of $T(g, h)$. This family of examples illustrates that, in general, relationships between the Yoneda algebras of $R$, $S$ and $R \tensor_{\tau} S$ may not be simple or obvious.
Let $T = T(g, h)$. Let $(Q_{\bullet},d^Q_{\bullet})$ and $(P_{\bullet},d^P_{\bullet})$ be the complexes of graded free $T$-modules constructed in Definition \ref{resolutions}. Recall that by Theorem \ref{Koszul property for T(g, d)}, these are graded free resolutions of $_T \mathbb K$ when $h\neq 0$ and $h=0$, respectively. Since $\hbox{im}\ d^Q_i\subset T_+Q_{i-1}$ and $\hbox{im}\ d^P_i\subset T_+P_{i-1}$, these resolutions are \emph{minimal}: all differentials in the complexes $\hbox{Hom}_T(Q_{\bullet},\mathbb K)$ and $\hbox{Hom}_T(P_{\bullet},\mathbb K)$ are trivial. Thus by Theorem \ref{Koszul property for T(g, d)}, we may take $E^i(T) = \hbox{Hom}_T(Q_i, \mathbb K)$ when $h\neq 0$ and $E^i(T)=\hbox{Hom}_T(P_i, \mathbb K)$ when $h=0$ as our models of $\hbox{\rm Ext}^i_T(\mathbb K, \mathbb K)$.
Let $\chi, \nu, \omega \in E^{1,1}(T)$ denote dual basis vectors to $x, y, w \in T_1$, respectively. In the case $h=0$, let $\rho \in E^{3,4}(T)$ denote the graded $T$-linear map $\rho: T(-3) \oplus T(-4) \to \mathbb K$ given by $\rho(1, 0) = 0$, $\rho(0, 1) = 1$.
\begin{thm}\label{Yoneda presentations} Retain the notation of the previous paragraph. Then the Yoneda algebra $E(T)$ of $T(g,h)$ can be presented as follows. \begin{itemize} \item[(1)] If $h \ne 0$, then $E(T)$ is generated by $\chi, \nu, \omega$ subject to the quadratic relations: $$\chi \nu+\nu \chi, \chi \omega, \omega \chi, \omega \nu-\nu \omega, \nu \omega+\chi^2, \nu^2 - h \omega^2-g \chi^2.$$ \item[(2)] If $h = 0$, then $E(T)$ is generated by $\chi, \nu, \omega, \rho$ subject to the quadratic relations: $$\chi \nu+\nu \chi, \chi \omega, \omega \chi, \omega \nu-\nu \omega, \nu \omega+\chi^2, \nu^2-g \chi^2,$$ quartic relations: $$\chi \rho, \nu \rho, \rho \chi, \rho \nu, \omega \rho+\rho \omega,$$ and one sextic relation: $\rho^2$.
Moreover, for all $i \geq 3$, $E^{i, i}(T) = \mathbb K \omega^i$ and $E^{i, i+1}(T) = \mathbb K \omega^{i-3} \rho$. \end{itemize} \end{thm}
\begin{proof} By Theorem \ref{Koszul property for T(g, d)}, $T$ is Koszul if $h \ne 0$. In this case it is well known (see \cite[Chapter 2, Definition 1 (c)]{PP}, for example) that $E(T)$ is isomorphic to the quadratic dual algebra, $T^!$. The algebra $T^!$ is generated by the space $T_1^*$, and its defining (quadratic) relations are those elements of $T_1^*\tensor T_1^*$ orthogonal to all quadratic relations of $T$ under the natural pairing. Thus statement (1) follows by checking orthogonality of the stated relations with those of $T$, and a simple dimension count.
Now we prove (2). Assume that $h = 0$, and set $T = T(g, 0)$. It follows from \cite[Proposition 3.1, p. 7]{PP} that the diagonal subalgebra, $\bigoplus_i E^{i,i}(T)\subset E(T)$ is isomorphic to the quadratic dual algebra, $T^!$. The quadratic relations of $T^!$, in this case, are obtained from the quadratic relations in (1) by setting $h = 0$.
In order to prove that the quartic expressions given in (2) are in fact relations of $E(T)$, we use the model $E^i(T) = \hbox{Hom}_T(P_i, \mathbb K)$ for $\hbox{\rm Ext}^i_T(\mathbb K, \mathbb K)$, where $(P_{\bullet},d^P_{\bullet})$ is the graded free resolution of $_T \mathbb K$ constructed in Definition \ref{resolutions}. The differentials are written in terms of matrices with respect to the natural canonical bases for the free modules appearing in the resolution. Let $\xi \in E^{4, 5}(T)$ denote the graded $T$-linear map: $\xi: T(-4) \oplus T(-5) \to \mathbb K$ given by $\xi(1,0) = 0$, $\xi(0,1) = 1$. With respect to these bases, we have $$\chi = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \ \ \nu = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \ \ \omega = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \ \ \rho = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \ \ \xi = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
Now we check that $\rho \omega + \omega \rho = 0$ in $E(T)$. We will show that $\omega \rho = \xi$ and $\rho \omega = -\xi$.
In order to compute $\omega \rho$, we must lift $\rho$ as in the first two rows of the following diagram.
\centerline{\xymatrix{ \cdots \ar[r] & P_7 \ar@{.>}[d]^{\rho_4} \ar[r]^{d_7^P} & P_6 \ar@{.>}[d]^{\rho_3} \ar[r]^{d_6^P} & P_5 \ar@{.>}[d]^{\rho_2} \ar[r]^{d_5^P} & P_4 \ar@{.>}[d]^{\rho_1} \ar[r]^{d_4^P} & P_3 \ar@{.>}[d]^{\rho_0} \ar[rd]^{\rho}\\ \cdots \ar[r] & P_4 \ar[r]^{d_4^P} & P_3 \ar[r]^{d_3^P} & P_2 \ar[r]^{d_2^P} & P_1 \ar[r]^{d_1^P} \ar[d]^{\omega} & P_0 \ar[r]_{\epsilon} & \mathbb K \\ & & & & \mathbb K }}
Let $$\rho_0 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \ \ \rho_1 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \ \ \rho_2 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \end{bmatrix},$$ and $$ \rho_i = \begin{bmatrix} 0 & 0 \\ (-1)^{i+1} & 0 \end{bmatrix}, \text{ for all } i \geq 3.$$ Then it is straightforward to check that these maps do indeed complete the above diagram. The map $\omega \circ \rho_1$ is given by the matrix $\begin{bmatrix} 0 & 1 \end{bmatrix}^t$, so we conclude that $ \omega \rho = \xi$.
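To make the last step explicit: under the convention used above, in which a map of graded free modules is recorded with one row for each generator of its source, composing with the functional $\omega$ amounts to the matrix product
$$\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix},$$
which is precisely the matrix of $\xi$.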
Now we compute $ \rho \omega$. We must lift $\omega$ through the first two rows of the following diagram.
\centerline{\xymatrix{ \cdots \ar[r] & P_5 \ar[r]^{d_5^P} \ar@{.>}[d]^{\omega_4} & P_4 \ar[r]^{d_4^P} \ar@{.>}[d]^{\omega_3} & P_3 \ar[r]^{\ \ \ d_3^P} \ar@{.>}[d]^{\omega_2} & P_2 \ar[r]^{d_2^P} \ar@{.>}[d]^{\omega_1} & P_1 \ar[rd]^{\omega} \ar@{.>}[d]^{\omega_0} \\ \cdots \ar[r] & P_4 \ar[r]^{d_4^P} & P_3 \ar[r]^{d_3^P} \ar[d]^{\rho} & P_2 \ar[r]^{ \ \ \ d_2^P} & P_1 \ar[r]^{d_1^P} & T \ar[r]^{\epsilon} & \mathbb K \\ & & \mathbb K }}
Let $$\omega_0 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \ \ \omega_1 = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \ \ \omega_2 = \begin{bmatrix} 0 & 1 & 0 \\ -y & 0 & x \end{bmatrix}, \ \ \omega_3 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},$$ and $$ \omega_i = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \text{ for all } i \geq 4.$$
One checks that these maps complete the last diagram. The map $\rho \circ \omega_3$ is given by the matrix $\begin{bmatrix} 0 & -1 \end{bmatrix}^t$, so we conclude that $\rho \omega = - \xi$. It follows that $\rho \omega + \omega \rho = 0$ in $E(T)$, as claimed. The other quartic relations can be proved via similar computations.
Note that because the Yoneda algebra is bigraded, $\rho^2 \in E^{6, 8}(T)$. The minimal resolution $(P_{\bullet},d^P_{\bullet})$ constructed in Definition \ref{resolutions} has $P_6 = T(-6) \oplus T(-7)$, so $E^{6, 8}(T) = 0$. Hence $\rho^2 = 0$ in $E(T)$.
Now we prove the last statement of (2). Using the definitions of the maps $\rho_i, \omega_i$ given above, it is easy to compute that for all $i \geq 3$, $\omega^i \in E^{i, i}(T)$ is given by the matrix $\begin{bmatrix} 1 & 0 \end{bmatrix}^t$; and for all $j \geq 1$, $\omega^j \rho \in E^{j+3, j+4}(T)$ is given by the matrix $\begin{bmatrix} 0 & (-1)^{j-1} \end{bmatrix}^t$. Therefore, since we know by Definition \ref{resolutions} (2) that $E^{i, i}(T)$ and $E^{j+3, j+4}(T)$ are both 1-dimensional, we have $E^{i, i}(T) = \mathbb K \omega^i$ for all $i \geq 3$ and $E^{j+3, j+4}(T) = \mathbb K \omega^j \rho$ for all $j \geq 0$.
Using the minimal resolution $(P_{\bullet},d^P_{\bullet})$, we know the Hilbert series of $E(T)$. Then a straightforward Gr\"obner-basis argument shows that the algebra with generators $\chi, \nu, \omega, \rho$ and defining relations as in (2) has the same Hilbert series as $E(T)$. Hence, the relations given in (2) are in fact a complete set of defining relations for $E(T)$.
\end{proof}
When $T(g, h)$ is Koszul, it is natural to ask if $E(T)$ is isomorphic as a $\mathbb K$-algebra to a twisted tensor product of the Yoneda algebras $E(R)$ and $E(S)$ since, as graded vector spaces, we have $E(T) = E(R) \tensor E(S)$.
\begin{prop} If $T(g, h)$ is Koszul, then $E(T(g,h))$ is not isomorphic as a $\mathbb K$-algebra to a twisted tensor product of $E(R)$ and $E(S)$. \end{prop}
\begin{proof} Let $T = T(g, h)$ and assume $T$ is Koszul. Suppose, to the contrary, that $E(T)$ is isomorphic to a twisted tensor product of $E(R)$ and $E(S)$. By the standard fact that the polynomial ring $R$ is a Koszul algebra, $E(R)$ is isomorphic to the exterior algebra $\Lambda(x, y) = \mathbb K \langle x, y \rangle/ \langle x^2, y^2, xy+yx \rangle$. It follows that $E(T)$ has a subalgebra isomorphic to $\Lambda(x, y)$. Therefore there exist linearly independent elements $\alpha, \beta \in E^1(T)$, such that $\alpha^2 = \beta^2 = \alpha\beta + \beta \alpha = 0$.
To rule out the existence of such elements, thereby obtaining a contradiction, we use the presentation for $E(T)$ given in Theorem \ref{Yoneda presentations} (1). Order the generators by $\omega < \chi < \nu$, and order monomials in the free algebra $\mathbb K \langle \chi, \nu, \omega \rangle$ by degree and left-lexicographic order. Then the associated Gr\"obner basis in degree 2 is given by: $$\nu \chi+\chi \nu, \chi \omega, \omega \chi, \nu \omega - \omega \nu, \chi^2 + \omega \nu, \nu^2-g \chi^2-h \omega^2.$$ Consider an element $\gamma = a \chi + b \omega \ne 0$, where $a, b \in \mathbb K$. Then, since $\chi \omega = \omega \chi = 0$ and $\chi^2 = -\omega \nu$ in $E(T)$, in terms of the monomial basis determined by the Gr\"obner basis, $$\gamma^2 = a^2 \chi^2 + b^2 \omega^2 = -a^2 \omega \nu + b^2 \omega^2,$$ so $\gamma^2 \ne 0$.
Write $\alpha = a_{\omega} \omega + a_{\chi} \chi + a_{\nu} \nu$ and $\beta = b_{\omega} \omega + b_{\chi} \chi + b_{\nu} \nu$ for some $a_{\omega}, a_{\chi}, a_{\nu}, b_{\omega}, b_{\chi}, b_{\nu} \in \mathbb K$. It follows from the last paragraph that $a_{\nu}$ and $b_{\nu}$ are both nonzero. Without loss of generality, by multiplying $\alpha$ and $\beta$ by appropriate scalars, we may assume that $a_{\nu} = b_{\nu} = 1$. Since $\alpha$ and $\beta$ are linearly independent, $\alpha - \beta \ne 0$, and the $\nu$ component of $\alpha - \beta$ is zero. Thus, $(\alpha - \beta)^2 \ne 0$. However, computing directly and using our assumptions, $$(\alpha - \beta)^2 = \alpha^2 - \alpha \beta - \beta \alpha +\beta^2 = 0,$$ a contradiction.
Hence, $E(T)$ is not isomorphic to a twisted tensor product of $E(R)$ and $E(S)$. \end{proof}
\begin{rmk} \label{regrading not necessarily Koszul} It is also natural to ask if the algebra $E(T(g, 0))$ becomes Koszul after regrading: $\deg(\rho) = 1$. However, this is not the case. For example, if $g \in \{0, 1\}$, then, after regrading, $E(T(g, 0))$ is not Koszul. Nevertheless, further study of $E(T(g,0))$ seems interesting. \end{rmk}
\section{Artin-Schelter regularity}
We continue to use the notation $R = \mathbb K[x,y]$ and $S = \mathbb K[z]$. In this section we determine when a quadratic twisted tensor product of $R$ and $S$ is Artin-Schelter regular. This notion was introduced by Artin and Schelter in \cite{AS}.
\begin{defn} \label{AS-regular} \cite{AS} A finitely-presented graded algebra, $A$, generated in degree 1 is called \emph{AS-regular} of dimension $d$ if (i) $\hbox{gldim}(A) = d < \infty$; (ii) $\hbox{GKdim}(A) < \infty$; and (iii) $A$ is \emph{Gorenstein}: $\hbox{\rm Ext}^n_A(\mathbb K, A) = 0$ if $n \ne d$, and $\hbox{\rm Ext}^d_A(\mathbb K, A) \cong \mathbb K$. \end{defn}
Recall that every quadratic twisted tensor product of $R$ and $S$ is isomorphic as a twisted tensor product of $R$ and $S$ to one of the algebras described in Theorem \ref{Ore extension classification} (Ore type), Theorem \ref{ttps with G_2 = 0} (reducible type), and Theorem \ref{characterization of elliptic type ttps} (elliptic type). The property of AS-regularity is an algebra-isomorphism invariant, so the following result completely determines when a quadratic twisted tensor product of $R$ and $S$ is AS-regular.
\begin{thm} \label{AS-regular algebras} Let $T$ denote a quadratic twisted tensor product of $R$ and $S$. \begin{itemize} \item[(1)] If $T$ is an algebra of Ore type, then $T$ is AS-regular if and only if $T\cong R[z;\sigma,\delta]$ where $\sigma \in \hbox{{\rm End}}(R)$ is invertible. \item[(2)] If $T$ is an algebra of reducible type, then $T$ is AS-regular if and only if $E \ne 0$ and $a+d \ne 0$. \item[(3)] Assume that ${\rm char}\, \mathbb K \ne 2$. If $T$ is an algebra of elliptic type, then $T$ is AS-regular if and only if $h = c-(a-1)(C+a-1) \ne 0$. \end{itemize} \end{thm}
\begin{proof} Since $R$ and $S$ are finitely-presented and generated in degree 1, the same is true of $T$.
Let us begin by showing that an algebra in the context of (1) or (2) has global dimension equal to $3$. If $T$ is an algebra of Ore type or reducible type, then $T$ is Koszul by Theorem \ref{Koszul property for Ore extensions, A = 0}. Then the fact that the Hilbert series of $T$ is $(1-t)^{-3}$ implies that $\hbox{gldim}(T) = 3$.
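The last implication is a standard numerical consequence of Koszulity (the Fr\"oberg relation): the Hilbert series of a Koszul algebra and of its Yoneda algebra satisfy
$$H_{E(T)}(t)\, H_T(-t) = 1,$$
so $H_T(t) = (1-t)^{-3}$ forces $H_{E(T)}(t) = (1+t)^3$. In particular $\hbox{\rm Ext}^i_T(\mathbb K, \mathbb K)$ vanishes for $i > 3$ and is nonzero for $i = 3$, which gives $\hbox{gldim}(T) = 3$.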
Let $T$ denote an algebra of Ore type. By Proposition \ref{oneSidedOre}, $T$ is an Ore extension of the form $R[z; \sigma, \delta]$. If $\sigma$ is not invertible, then $T$ is not a domain. It is known that every AS-regular algebra of dimension 3 is a domain (see \cite[Theorem 8.1]{ATVI} and \cite[Theorem 3.9]{ATVII}), so if $\sigma$ is not invertible, then $T$ is not AS-regular. Conversely, by \cite[Proposition 2]{AST}, if $\sigma$ is invertible, then $T$ is AS-regular.
Now we prove (2). Let $T$ be an algebra of reducible type. First we show that the conditions $E \ne 0$ and $a+d \ne 0$ are necessary for $T$ to be AS-regular. If $E = 0$, then the equation $(z-Bx-Cy)y = 0$ holds in $T$, showing that $T$ is not a domain. It follows that $T$ is not AS-regular when $E = 0$.
Now assume that $E \ne 0$. Then Proposition \ref{not a zero of fn} implies that $y \in T$ is normal, and the quotient algebra $T/\langle y \rangle$ is isomorphic to $\mathbb K \langle x, z \rangle/\langle z^2-zx+dxz + ax^2 \rangle$. Notice that if $a+d = 0$, so that $d = -a$, we have $$z^2-zx+dxz +ax^2 = z^2-zx-axz+ax^2 = (z-ax)(z-x).$$ By Proposition \ref{not a zero of fn}, we have $a \ne 1$ since $a$ is not a zero of $f_1 = 1-t$. Therefore the algebra $T/\langle y \rangle$ is isomorphic to $\mathbb K \langle x, y \rangle/ \langle xy \rangle$, which is not noetherian. Hence $T$ is not noetherian. As shown in \cite[Theorem 8.1]{ATVI}, all AS-regular algebras of global dimension 3 are noetherian. Consequently, if $a+d = 0$, then $T$ is not AS-regular.
Finally, we show that $E \ne 0$ and $a+d \ne 0$ are sufficient conditions for $T$ to be AS-regular. Suppose that $E \ne 0 \ne a+d$. Then Proposition \ref{not a zero of fn} ensures that $y \in T$ is normal, and using Theorem \ref{two dim up to isom}, the quotient algebra $T/\langle y \rangle$ is noetherian. It follows that $T$ is noetherian (see, for example, \cite[Lemma 8.2]{ATVI}). Then a result of Stephenson and Zhang, \cite[Corollary 0.2]{StZ}, implies that $T$ is AS-regular.
For (3), we use the notation $T(g, h)$ introduced in Section 5 for the algebra $T$. It is well known (see \cite[Theorem 2.2]{Shelton-Tingey}, for example) that quadratic AS-regular algebras are Koszul, so by Theorem \ref{Koszul property for T(g, d)}, if $h=0$, then $T$ is not AS-regular. Now suppose that $h \ne 0$. Recall from Theorem \ref{Koszul property for T(g, d)} that $$0 \xrightarrow{} T(-3) \xrightarrow{d^Q_3} T(-2)^{3} \xrightarrow{d^Q_2} T(-1)^{3} \xrightarrow{d^Q_1} T,$$ where $$d^Q_3 = \begin{bmatrix} h y & w & -h x \end{bmatrix}, \ \ d^Q_2 = \begin{bmatrix} -x & w-gy & y \\ 0 & h y & w \\ -y & x & 0 \end{bmatrix}, \ \ d^Q_1 = \begin{bmatrix} x \\ y \\ w \end{bmatrix},$$ is a minimal resolution of $_T \mathbb K$ by graded free $T$-modules. In particular, the global dimension of $T$ is $3$.
Since the Hilbert series of $T$ is $(1-t)^{-3}$, we know that the Gelfand-Kirillov dimension of $T$ is $3$. Hence to show that $T$ is AS-regular, we only need to verify the Gorenstein condition; it is clear that this condition is equivalent to the exactness of the dual complex of graded right $T$-modules
$$0 \to T \xrightarrow{d^Q_1} T(1)^3 \xrightarrow{d^Q_2} T(2)^3 \xrightarrow{d^Q_3} T(3) \to \mathbb K_T(3) \to 0.$$
Since $h \ne 0$, this complex is exact at $T(3)$. It is straightforward to check that in the tensor algebra $T \langle x, y, w \rangle$, the entries of $d^Q_3 d^Q_2$ give a basis for the space of defining relations of $T$. Thus the complex is exact at $T(2)^3$. By Theorem \ref{characterization of elliptic type ttps}, we know that $x$ is left regular, so the complex is exact at $T$. Finally, using Lemma \ref{dimSum} and the fact that the Hilbert series of $T$ is $(1-t)^{-3}$ (as in the proof of Theorem \ref{Koszul property for T(g, d)}) the complex is exact at $T(1)^3$.
\end{proof}
\noindent {\bf Acknowledgement.} We are very grateful to the anonymous referee whose numerous helpful suggestions greatly improved the quality of this paper.
\end{document} |
\begin{document}
\begin{titlepage}
\title{\LARGE{\textbf{Some remarks on the optimality of the Bruno-R{\"u}ssmann condition}}}
\author{Abed Bounemoura \\
CNRS - PSL Research University\\
(Universit{\'e} Paris-Dauphine and Observatoire de Paris)} \end{titlepage}
\maketitle
\begin{abstract} We prove that the Bruno-R{\"u}ssmann condition is optimal for the analytic preservation of a quasi-periodic invariant curve for an analytic twist map. The proof is based on Yoccoz's corresponding result for analytic circle diffeomorphisms and the uniqueness of invariant curves with a given irrational rotation number. We also prove a similar result for analytic Tonelli Hamiltonian flows with $n=2$ degrees of freedom; for $n \geq 3$ we only obtain a weaker result which recovers and slightly improves a theorem of Bessi. \end{abstract}
\section{Introduction}\label{s1}
Given $n \geq 2$, a vector $\omega \in \mathbb R^n$ satisfies the Bruno-R{\"u}ssmann condition, and we will write $\omega \in \mathrm{BR}$, if \begin{equation}\label{BR} \int_{1}^{+\infty}\frac{\ln(\Psi_{\omega}(Q))}{Q^2}dQ < +\infty \tag{$\mathrm{BR}$} \end{equation} where \begin{equation*}
\Psi_{\omega}(Q)=\max\left\{|k\cdot\omega|^{-1}\; | \; k \in \mathbb Z^n, \; 0 < |k|\leq Q\right\}. \end{equation*} The expression in~\eqref{BR} is just one of the many equivalent ways of defining the Bruno-R{\"u}ssmann condition. Bruno (\cite{Bru71}, ~\cite{Bru72}, ~\cite{Bru89}) and R{\"u}ssmann (\cite{Rus80}, ~\cite{Rus89}, ~\cite{Rus94}, ~\cite{Rus01}) have proved that $\omega \in \mathrm{BR}$ is a sufficient condition for several analytic small divisor problems: among others, for the linearization of a holomorphic germ at a non-resonant fixed point, for the linearization of a torus diffeomorphism isotopic to the identity (respectively a torus vector field) close to a non-resonant translation (respectively close to a non-resonant constant vector field) and for the preservation of a non-resonant quasi-periodic invariant torus in a non-degenerate Hamiltonian system close to integrable.
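For instance, every Diophantine vector satisfies \eqref{BR}: if $|k \cdot \omega| \geq \gamma |k|^{-\tau}$ for all $k \in \mathbb Z^n\setminus\{0\}$ and some constants $\gamma>0$, $\tau>0$, then $\Psi_{\omega}(Q) \leq \gamma^{-1}Q^{\tau}$ and therefore
$$\int_{1}^{+\infty}\frac{\ln(\Psi_{\omega}(Q))}{Q^2}dQ \leq \int_{1}^{+\infty}\frac{\ln(\gamma^{-1})+\tau\ln Q}{Q^2}dQ = \ln(\gamma^{-1})+\tau < +\infty,$$
using $\int_{1}^{+\infty}Q^{-2}dQ=\int_{1}^{+\infty}(\ln Q)\,Q^{-2}dQ=1$.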
For $n=2$, $\omega=(1,\alpha) \in \mathrm{BR}$ if and only if $\alpha$ satisfies the following Bruno condition, that we shall write $\alpha \in \mathrm{B}$: \begin{equation}\label{B} \sum_{n \in \mathbb N}\frac{\log q_{n+1}}{q_n} < +\infty \tag{$\mathrm{B}$} \end{equation} where $q_n$ is the denominator of the $n^{th}$ convergent of $\alpha$. It is a deep result of Yoccoz (\cite{Yoc88},~\cite{Yoc95}) that if $\alpha \notin \mathrm{B}$, then the quadratic polynomial \[ P_\lambda(z)=\lambda z+z^2, \quad \lambda=e^{2\pi i \alpha} \] is not analytically linearizable. Other examples of non-Bruno non-linearizable germs were later given by Geyer (\cite{Gey01}). Using this, Yoccoz was able to prove that if $\alpha \notin \mathrm{B}$, there exist, arbitrarily close to the rotation $\alpha$, analytic circle diffeomorphisms which are topologically but not analytically conjugate to $\alpha$; thus, in the continuous case, if $\omega \notin \mathrm{BR}$, there exist, arbitrarily close to the constant vector field $\omega$, analytic vector fields on $\mathbb T^2$ which are topologically but not analytically conjugate to $\omega$ (see Theorems~\ref{thY1} and~\ref{thY2} below for more precise statements). The condition $\alpha \in \mathrm{B}$ (or equivalently $\omega \in \mathrm{BR}$) is also known to be optimal in other problems in $\mathbb C^2$: for vector fields close to a non-resonant singular point (\cite{PM97}) and for the complex area-preserving map known as the semi-standard map (\cite{Mar90}).
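As a simple illustration, every $\alpha$ of constant type (i.e.\ with bounded partial quotients, $a_n \leq M$) satisfies \eqref{B}: the recurrence $q_{n+1}=a_{n+1}q_n+q_{n-1}$ gives $q_{n+1} \leq (M+1)q_n$ as well as $q_{n+1} \geq 2q_{n-1}$, so that $q_n \geq 2^{(n-1)/2}$ and
$$\sum_{n \in \mathbb N}\frac{\log q_{n+1}}{q_n} \leq \sum_{n \in \mathbb N}\frac{\log(M+1)+\log q_n}{q_n} < +\infty,$$
the last sum converging since $q_n$ grows at least geometrically.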
Unfortunately, to the best of our knowledge, the Bruno-R{\"u}ssmann condition is not known to be optimal for low-dimensional Hamiltonian problems such as the analytic preservation of invariant curves for twist maps. Here it is important to point out that, unlike the other problems we mentioned, which deal only with the existence of an analytic conjugacy to the linear model, in the Hamiltonian case the conclusions of KAM-like theorems are two-fold: they give the existence of an analytic invariant curve together with an analytic conjugacy of the restricted dynamics on the curve to the linear model. The best known result for twist maps is due to Forni (\cite{For94})\footnote{Unfortunately, at several places in the literature (for instance~\cite{Gen15} and other references by the same author) it is stated that $\alpha \in \mathrm{B}$ is optimal for the existence of an analytic invariant circle for the standard map in the perturbative regime depending analytically on the small parameter; we would like to point out that this statement is incorrectly deduced from results of Marmi (\cite{Mar90}) and Davie (\cite{Dav94}), and thus the optimality of $\alpha \in \mathrm{B}$ for the standard map is still an open question (see \cite{MM00} where this observation is also made).}. To describe his result, let us first remark that $\alpha \in \mathrm{B}$ obviously implies that $\alpha \in \mathrm{R}$ in the sense that \begin{equation}\label{R} \lim_{n \rightarrow +\infty} \frac{\log q_{n+1}}{q_n}=0 \tag{$\mathrm{R}$} \end{equation} but clearly the converse is not true. The condition $\alpha \in \mathrm{R}$ is in fact the necessary and sufficient condition for the linearized problem (the so-called cohomological equation) to have a solution in the analytic topology (\cite{Rus75}). 
Using results of Mather (\cite{Mat86}, \cite{Mat88}) and Herman (\cite{Her83}), Forni proved that if an integrable twist map has an invariant curve with rotation number $\alpha \notin \mathrm{R}$, then there exist arbitrarily small analytic perturbations for which there are no (necessarily Lipschitz) invariant curves with rotation number $\alpha$. Observe that this strongly violates the conclusion of the KAM theorem, as the latter would give an analytic invariant curve on which the dynamic is analytically linearizable. For Tonelli Hamiltonian flows with $n \geq 2$ degrees of freedom, a result analogous to Forni's has been obtained by Bessi (\cite{Bes00}). To state it, observe that a generalization of the condition $\alpha \in \mathrm{R}$ is (keeping the same notation) $\omega \in \mathrm{R}$ where \begin{equation}\label{Rvect} \lim_{Q \rightarrow +\infty} \frac{\ln(\Psi_{\omega}(Q))}{Q}=0 \tag{$\mathrm{R}$} \end{equation} and that again this is the necessary and sufficient condition to solve the cohomological equation in the analytic topology. Bessi proved that if $\omega \notin \mathrm{R}$, then there exist arbitrarily small perturbations of the integrable Hamiltonian $H_0(I)=\frac{1}{2}(I_1^2+\cdots+I_n^2)$ in the analytic topology for which there is no invariant $C^1$ Lagrangian graph on which the dynamic is $C^1$ conjugate to the linear flow of frequency $\omega$.
The purpose of this note is to prove that the condition $\alpha \in \mathrm{B}$ is optimal for the analytic KAM theorem for twist maps, in the sense that if $\alpha \notin \mathrm{B}$, then there exist arbitrarily small perturbations of an arbitrary integrable twist map for which there are no analytic invariant curves on which the dynamic is analytically conjugate to $\alpha$. We refer to Theorem~\ref{th1} for a more precise statement. One has to observe that this result does not improve Forni's result: in our example, the perturbed map will have an analytic invariant curve on which the dynamic is topologically conjugate to $\alpha$, yet there will be no analytic conjugacy, and this is sufficient to guarantee that the conclusions of the KAM theorem do not hold. One can consider Forni's result as a ``destruction'' of the invariant circle with rotation number $\alpha \notin \mathrm{R}$, while our result can be considered as a ``destruction'' of the dynamic on the invariant circle with rotation number $\alpha \notin \mathrm{B}$. For perturbations of Tonelli Hamiltonians, we will obtain in Theorem~\ref{th2} a similar result showing the optimality of $\omega \in \mathrm{BR}$ for $n=2$, while for $n \geq 3$ we will only obtain in Theorem~\ref{th3} a result similar to Bessi's showing that $\omega \in \mathrm{R}$ is necessary: for $n \geq 3$, it is unlikely that $\omega \in \mathrm{R}$ is sufficient, and one should not expect $\omega \in \mathrm{BR}$ to be necessary either\footnote{Yoccoz, private communication.}. Even though we will use action-minimizing properties of invariant quasi-periodic curves and tori in an indirect way, our method of proof is very different from those of Forni and Bessi. For Theorem~\ref{th1}, we will use Yoccoz's result showing the necessity of $\alpha \in \mathrm{B}$ for the analytic linearization of circle diffeomorphisms, and the well-known fact that an invariant curve for a twist map with a given irrational rotation number is unique. 
Under some more assumptions, this uniqueness property has been shown to be true for Tonelli Hamiltonians in any number of degrees of freedom by Fathi, Giuliani and Sorrentino (\cite{FGS09}). Using this and a continuous version of Yoccoz's result, we will obtain Theorem~\ref{th2} for the case $n=2$; for $n \geq 3$, we will make use of a result of Fayad (\cite{Fay02}) on reparametrized linear flows to obtain Theorem~\ref{th3}.
\section{The case of a twist map}\label{s2}
It will be more convenient for us to represent exact area-preserving maps of the annulus $\mathbb T \times \mathbb R$, where $\mathbb T=\mathbb R/\mathbb Z$, by a ``Hamiltonian'' generating function defined on the universal cover $\mathbb R^2$ of $\mathbb T \times \mathbb R$ (unlike~\cite{For94} where a ``Lagrangian'' generating function is used). Given a smooth function $h: \mathbb R^2 \rightarrow \mathbb R$, the map \[ \bar{f}=\bar{f}_h : \mathbb R^2 \rightarrow \mathbb R^2\] defined by \begin{equation*} \bar{f}(\theta,I)=(\Theta,\mathcal{I}) \Longleftrightarrow \begin{cases} \mathcal{I}=I-\partial_\theta h(\theta,\mathcal{I}),\\ \Theta=\theta+\partial_{\mathcal{I}} h(\theta,\mathcal{I}) \end{cases} \end{equation*} projects to an exact area-preserving map \[ f: \mathbb T \times \mathbb R \rightarrow \mathbb T \times \mathbb R. \] Such a map is an exact area-preserving twist map, or twist map for short in the sequel, if it satisfies the following two conditions: \begin{itemize} \item[(a1)] for all $(\theta,I) \in \mathbb R^2$, $\partial_I \Theta(\theta,I)>0$;
\item[(a2)] for all $\theta \in \mathbb R$, $|\Theta| \rightarrow +\infty$ as $|I| \rightarrow +\infty$ uniformly in $\theta$. \end{itemize}
Given such a twist map $f$, an invariant curve $T$ for $f$ will be an essential topological circle such that $f(T)=T$; necessarily, $T$ is a Lipschitz Lagrangian graph. Since $f$ preserves orientation, the restriction $f_{|T}$ has a well-defined rotation number. The following uniqueness result is well-known (see~\cite{KH95} for instance).
\begin{proposition}\label{unique1}
Let $T_0$ and $T_1$ be two invariant curves for a twist map such that $f_{|T_0}$ and $f_{|T_1}$ have the same irrational rotation number. Then $T_0=T_1$. \end{proposition}
Now let us explain the local setting in which the KAM theorem applies. Consider a smooth function $h_0: (-1,1) \rightarrow \mathbb R$ which satisfies the following conditions: \begin{itemize} \item[(b1)] $h_0''(\mathcal{I}) \neq 0$ for all $\mathcal{I} \in (-1,1)$; \item[(b2)] $h_0'(0)=\alpha$. \end{itemize} Then the exact area-preserving map $f_0$ generated by $h_0$ is integrable, and the dynamic restricted to the invariant curve $T_0=\mathbb T \times \{0\}$ is the rotation by $\alpha$. To state the KAM theorem of Bruno and R{\"u}ssmann, we need to define norms for real-analytic functions. Let $h : \mathbb R \times (-1,1) \rightarrow \mathbb R$ be a real-analytic function and suppose it admits a holomorphic and bounded extension (still denoted by $h$) to the domain
\[ \mathbb T_s \times D=\{z=(z_1,z_2) \in \mathbb C^2 \; | \; |\mathrm{Im} \; z_1|<s, \; |z_2|<1\} \] for some $s>0$. In such a case, we simply define
\[ |h|_s=\sup_{z \in \mathbb T_s \times D}|h(z)|. \]
Assume that $h_0$ satisfies conditions (b1) and (b2) with $\alpha \in \mathrm{B}$. Then the KAM theorem states that for any $s>0$, there exists $\varepsilon>0$ such that for any $h_1$ satisfying $|h_1-h_0|_s \leq \varepsilon$, the exact area-preserving map $f_1$ generated by $h_1$ has an analytic invariant curve $T_1$ such that $f_{1|T_1}$ is analytically conjugate to the rotation $\alpha$ (and moreover, $T_1$ analytically converges to $T_0$ as $\varepsilon$ goes to zero).
The following result shows that the condition that $\alpha \in \mathrm{B}$ cannot be weakened.
\begin{Main}\label{th1}
Assume that $h_0$ satisfies conditions (b1) and (b2) with $\alpha \notin \mathrm{B}$. Then for all $\varepsilon>0$ sufficiently small and all $s>0$, there exists $h_1$ such that $|h_1-h_0|_s \leq \varepsilon$ and the exact area-preserving map $f_1$ generated by $h_1$ has no analytic invariant curve $T_1$ such that $f_{1|T_1}$ is analytically conjugate to the rotation $\alpha$. \end{Main}
The restriction on $\varepsilon$ only comes from condition (a1) and is thus independent of the choice of $s$. The proof of Theorem~\ref{th1} will follow easily from the following theorem of Yoccoz.
\begin{theorem}[Yoccoz]\label{thY1} Assume $\alpha \notin \mathrm{B}$. Then for all $\varepsilon>0$ and all $s>0$, there exists an orientation-preserving analytic circle diffeomorphism with a lift of the form
\[ u(\theta)=\theta+\alpha+v(\theta), \quad |v|_s \leq \varepsilon \] which is topologically but not analytically conjugate to the rotation $\alpha$. \end{theorem}
\begin{proof}[Proof of Theorem~\ref{th1}]
Let us fix $s>0$ and $\varepsilon>0$, and consider the function $v : \mathbb R \rightarrow \mathbb R$ given by Theorem~\ref{thY1} which extends to $\mathbb T_s$ and satisfies $|v|_s \leq \varepsilon$. We set \[ h_1(\theta,\mathcal{I})=h_0(\mathcal{I})+v(\theta)\mathcal{I} \] so that obviously
\[ |h_1-h_0|_s \leq |v|_s \leq \varepsilon. \] Let $f_0$ and $f_1$ be the maps generated by $h_0$ and $h_1$ respectively. Obviously, condition (b1) implies that condition (a1) is satisfied by $f_0$, but only for $(\theta,I) \in \mathbb R \times (-1,1)$; assuming $\varepsilon$ is sufficiently small, the same remains true for $f_1$. Now, using a bump function, the map $f_1$, initially defined on $\mathbb T \times (-1,1)$, can be extended to a smooth twist map from $\mathbb T \times \mathbb R$ to itself in such a way that both (a1) and (a2) hold true.
Now for all $(\theta,I) \in \mathbb R \times (-1,1)$, the lift $\bar{f}_1$ is defined by \[ \bar{f}_1(\theta,I)=(\Theta,\mathcal{I}) \] where \begin{equation}\label{eq1} \begin{cases} \mathcal{I}=I-\partial_\theta h_1(\theta,\mathcal{I})=I-v'(\theta)\mathcal{I},\\ \Theta=\theta+\partial_{\mathcal{I}} h_1(\theta,\mathcal{I})=\theta+h_0'(\mathcal{I})+v(\theta). \end{cases} \end{equation} Since $u(\theta)=\theta+\alpha+v(\theta)$ is the lift of an orientation-preserving diffeomorphism of the circle, we have \[ u'(\theta)=1+v'(\theta)>0 \] and thus~\eqref{eq1} can be written again as \begin{equation}\label{eq2} \begin{cases} \mathcal{I}=(u'(\theta))^{-1}I,\\ \Theta=\theta+h_0'\left((u'(\theta))^{-1}I\right)+v(\theta). \end{cases} \end{equation}
From~\eqref{eq2} it is now clear that $T_0=\mathbb T \times \{0\}$ is invariant by $f_1$, and since $h_0'(0)=\alpha$ by (b2), the restriction $f_{1|T_0}$ is nothing but the dynamic induced by $u$ given by Theorem~\ref{thY1}; hence it is topologically but not analytically conjugate to the rotation $\alpha$.
To conclude, we argue by contradiction and assume the existence of an analytic invariant curve $T_1$ such that $f_{1|T_1}$ is analytically conjugate to the rotation $\alpha$. Since both $T_0$ and $T_1$ are invariant by the twist map $f_1$ and have the same irrational rotation number $\alpha$, it follows from Proposition~\ref{unique1} that $T_0=T_1$ but then $f_{1|T_0}=f_{1|T_1}$ is analytically conjugate to the rotation $\alpha$, which is absurd. \end{proof}
\section{The case of a Hamiltonian flow}\label{s3}
By a suspension argument (see for instance~\cite{KP94} or~\cite{TreBook} for the analytic case), Theorem~\ref{th1} gives a result for Hamiltonian systems with $n=1.5$ degrees of freedom with a convex (non-degenerate) integrable part, and thus also for Hamiltonian systems with $n=2$ degrees of freedom with a quasi-convex (iso-energetically non-degenerate) integrable part.
For Hamiltonian systems with $n \geq 2$ degrees of freedom and a convex integrable part, to use the argument in the proof of Theorem~\ref{th1} one first needs an analog of Proposition~\ref{unique1}, and fortunately such a result was proved in~\cite{FGS09}. The setting is that of Tonelli Hamiltonians, a natural generalization of exact area-preserving twist maps. For more details on Tonelli Hamiltonians and what we will describe next, we refer to~\cite{Sor15}.
Let $H : \mathbb T^n \times \mathbb R^n \rightarrow \mathbb R$ be a smooth Hamiltonian; it is said to be Tonelli if it satisfies the following two conditions: \begin{itemize} \item[(A1)] for all $(\theta,I) \in \mathbb T^n \times \mathbb R^n$, $\nabla_I^2 H(\theta,I)$ is a (uniformly) positive definite quadratic form; \item[(A2)] for all $\theta \in \mathbb T^n$, one has
\[ \lim_{|I| \rightarrow +\infty} \frac{H(\theta,I)}{|I|}=+\infty. \] \end{itemize} In this context, the role of invariant curves is played by Lipschitz Lagrangian graphs, so let $T$ be such a graph, and assume it is invariant by the flow of a Tonelli Hamiltonian $H$. Given a measure supported on $T$ and invariant by the Hamiltonian flow, one can define a rotation vector (or a Schwartzman asymptotic cycle) as an element of $H_1(\mathbb T^n,\mathbb R)\simeq \mathbb R^n$, and by considering all invariant measures, one can define a rotation set for the Hamiltonian flow restricted to $T$. We will say that $T$ is Schwartzman strictly ergodic with rotation vector $\omega \in \mathbb R^n$ if its rotation set reduces to $\omega$ and there exists at least one invariant measure with full support in $T$. Simple examples (the ones we will actually use later) are when the restricted flow on $T$ is either topologically conjugate to the linear flow of frequency $\omega$ or obtained from the latter by a smooth reparametrization. The main result of~\cite{FGS09} gives the following statement.
\begin{theorem}[Fathi-Giuliani-Sorrentino]\label{unique2} Let $T_0$ and $T_1$ be two Lipschitz Lagrangian graphs invariant by the flow of a Tonelli Hamiltonian and which are Schwartzman strictly ergodic with the same rotation vector $\omega \in \mathbb R^n$. Then $T_0=T_1$. \end{theorem}
As before, we now come back to a local setting in which the KAM theorem applies. Consider a smooth Hamiltonian function $H_0: B \rightarrow \mathbb R$, where $B=(-1,1)^n \subseteq \mathbb R^n$ is the open unit cube, and assume it satisfies the following conditions: \begin{itemize} \item[(B1)] $\nabla^2 H_0(I)$ is a positive definite quadratic form for all $I \in B$; \item[(B2)] $\nabla H_0 (0)=\omega$. \end{itemize} Let us observe that for the KAM theorem, condition (B1) is not needed, as the weaker assumption that $\nabla^2 H_0(I)$ is a non-degenerate quadratic form is sufficient. If $H_0$ satisfies (B1) and (B2), its vector field $X^{H_0}$ is integrable and its restriction to the invariant torus $T_0=\mathbb T^n \times \{0\}$ is given by the constant vector field $\omega$.
Let $H : \mathbb T^n \times B \rightarrow \mathbb R$ be a real-analytic function and suppose it admits a holomorphic and bounded extension to the domain
\[ \mathbb T^n_s \times D=\{z=(z_1,\dots,z_{2n}) \in \mathbb C^{n}/\mathbb Z^n \times \mathbb C^n \; | \; \max_{1 \leq i \leq n}|\mathrm{Im} \; z_i|<s, \; \max_{1 \leq i \leq n}|z_{n+i}|<1\} \] for some $s>0$, so that we can define
\[ |H|_s=\sup_{z \in \mathbb T^n_s \times D}|H(z)|. \]
The following is a formulation of the KAM theorem for Tonelli Hamiltonians. Assume $H_0$ satisfies conditions (B1) and (B2) with $\omega \in \mathrm{BR}$. Then for any $s>0$, there exists $\varepsilon>0$ such that for any $H_1$ with $|H_1-H_0|_s \leq \varepsilon$, the Hamiltonian flow of $H_1$ has an analytic Lagrangian invariant torus $T_1$ which is a graph and such that the restriction $X^{H_1}_{|T_1}$ is analytically conjugate to the vector field $\omega$ (and moreover, $T_1$ analytically converges to $T_0$ as $\varepsilon$ goes to zero).
For $n=2$, we can prove that the condition that $\omega \in \mathrm{BR}$ cannot be weakened.
\begin{Main}\label{th2}
Let $n=2$, and assume that $H_0$ satisfies conditions (B1) and (B2) with $\omega \notin \mathrm{BR}$. Then for all $\varepsilon>0$ sufficiently small and all $s>0$, there exists $H_1$ such that $|H_1-H_0|_s \leq \varepsilon$ and the Hamiltonian flow of $H_1$ has no analytic Lagrangian invariant graph $T_1$ such that $X^{H_1}_{|T_1}$ is analytically conjugate to the vector field $\omega$. \end{Main}
To prove Theorem~\ref{th2}, we will need the following continuous version of Theorem~\ref{thY1}.
\begin{theorem}[Yoccoz]\label{thY2} Assume $\omega \notin \mathrm{BR}$. Then for all $\varepsilon>0$ and all $s>0$, there exists an analytic vector field on $\mathbb T^2$ of the form
\[ U(\theta)=\omega+V(\theta), \quad |V|_s \leq \varepsilon \] which is topologically but not analytically conjugate to the vector field $\omega$. \end{theorem}
\begin{proof}[Proof of Theorem~\ref{th2}] The proof is just a continuous version of the proof of Theorem~\ref{th1}. Fixing $s>0$ and $\varepsilon>0$, we define \[ H_1(\theta,I)=H_0(I)+V(\theta)\cdot I\] where $V$ is given by Theorem~\ref{thY2} (for the value $\varepsilon/\sqrt{2}$ instead of $\varepsilon$). Clearly
\[ |H_1-H_0|_s \leq \sqrt{2}|V|_s \leq \varepsilon. \] Observe that the condition (B1) and a smallness assumption on $\varepsilon$ again allow us to extend $H_1$ to a smooth function defined on $\mathbb T^2 \times \mathbb R^2$ which satisfies both (A1) and (A2).
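For the record, the factor $\sqrt{2}$ in this estimate can be traced as follows (assuming, as the notation suggests, that $|V|_s$ denotes the supremum over $\mathbb T^2_s$ of the Euclidean norm of the extension of $V$): by Cauchy--Schwarz, for $(\theta,I) \in \mathbb T^2_s \times D$,

```latex
% Hedged derivation; |V|_s is assumed to be the sup of the Euclidean norm of V on T^2_s.
\[
|H_1(\theta,I)-H_0(I)| = |V(\theta)\cdot I|
    \le |V(\theta)|\,|I|
    \le |V|_s\,\sqrt{|I_1|^2+|I_2|^2}
    \le \sqrt{2}\,|V|_s ,
\]
```

since $|I_i|<1$ on $D$.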
It is clear from Hamilton's equations that $T_0=\mathbb T^2 \times \{0\}$ is invariant by the flow of $H_1$, and since $\nabla H_0(0)=\omega$ by (B2), the restriction $X^{H_1}_{|T_0}$ is nothing but the vector field $U$ given by Theorem~\ref{thY2}, hence it is topologically but not analytically conjugate to the vector field $\omega$.
To conclude, we argue by contradiction and assume the existence of an analytic Lagrangian invariant graph $T_1$ such that $X^{H_1}_{|T_1}$ is analytically conjugate to the vector field $\omega$. As $T_0$ and $T_1$ are invariant by the Hamiltonian flow of $H_1$ which is Tonelli and are both Schwartzman strictly ergodic with the same rotation vector $\omega$, it follows from Theorem~\ref{unique2} that $T_0=T_1$. But then $X^{H_1}_{|T_0}=X^{H_1}_{|T_1}$ is analytically conjugate to the vector field $\omega$, which is absurd. \end{proof}
Theorem~\ref{thY2} is not known (and unlikely to be true) for $n \geq 3$, yet the following result was proved by Fayad in~\cite{Fay02}.
\begin{theorem}[Fayad]\label{thF} Let $n \geq 2$ and assume $\omega \notin \mathrm{R}$. Then for all $s>0$ sufficiently small and all $\varepsilon>0$, there exists an analytic vector field on $\mathbb T^n$ of the form
\[ U(\theta)=\omega+\varphi(\theta)\omega, \quad |\varphi|_s \leq \varepsilon, \quad \int_{\mathbb T^n}\varphi(\theta)d\theta=0, \] which is not topologically conjugate to the vector field $\omega$. \end{theorem}
The restriction on $s$ is as follows: if $\omega \notin \mathrm{R}$, then there exists $s_0>0$ such that \[ \limsup_{Q \rightarrow +\infty} \frac{\ln(\Psi_{\omega}(Q))}{Q}\geq s_0 \] and one has to choose $s<s_0$. We point out that Fayad's result is in fact much more general than the one we stated (it is not perturbative, it is valid for a $G_\delta$-dense set of functions $\varphi$, and the resulting vector field $U$ is in fact weakly mixing), but we will only use the above statement. Observe that since the flow of $U$ is a reparametrization (with a function of unit average) of the linear flow of frequency $\omega$, it is Schwartzman strictly ergodic with rotation vector $\omega$.
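This last observation can also be checked numerically. The sketch below (our own illustration, not from the paper, with an arbitrary small zero-mean $\varphi$ and $\omega$ the golden-mean vector) integrates a reparametrized flow of the form $U(\theta)=(1+\varphi(\theta))\omega$ and estimates its rotation vector, which stays exactly parallel to $\omega$ and within $O(\varepsilon)$ of it.

```python
# Numerical illustration (our own, hypothetical choices of omega and phi): the flow
# U(theta) = (1 + phi(theta)) * omega, with phi small and of zero mean, is a
# reparametrization of the linear flow of frequency omega, so every orbit drifts
# with average velocity close to omega.
import math

omega = (1.0, (1 + math.sqrt(5)) / 2)    # golden-mean frequency vector
eps = 0.01

def phi(th1, th2):
    # a small zero-mean analytic function on the 2-torus
    return eps * math.cos(2 * math.pi * (th1 + th2))

def U(th):
    f = 1.0 + phi(th[0], th[1])
    return (f * omega[0], f * omega[1])

def rk4_step(th, dt):
    k1 = U(th)
    k2 = U((th[0] + 0.5 * dt * k1[0], th[1] + 0.5 * dt * k1[1]))
    k3 = U((th[0] + 0.5 * dt * k2[0], th[1] + 0.5 * dt * k2[1]))
    k4 = U((th[0] + dt * k3[0], th[1] + dt * k3[1]))
    return (th[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            th[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

th, T, dt = (0.0, 0.0), 200.0, 0.01
for _ in range(int(T / dt)):
    th = rk4_step(th, dt)

rho = (th[0] / T, th[1] / T)             # estimated rotation vector (lift coordinates)
print(rho)
```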
Replacing Theorem~\ref{thY2} by Theorem~\ref{thF} in the proof of Theorem~\ref{th2}, one immediately arrives at the following statement.
\begin{Main}\label{th3}
Let $n\geq 2$, and assume that $H_0$ satisfies condition (B1) and (B2) with $\omega \notin \mathrm{R}$. Then for all $\varepsilon>0$ sufficiently small and all $s>0$ sufficiently small, there exists $H_1$ such that $|H_1-H_0|_s \leq \varepsilon$ and the Hamiltonian flow of $H_1$ has no Lipschitz Lagrangian invariant graph $T_1$ such that $X^{H_1}_{|T_1}$ is topologically conjugate to the vector field $\omega$. \end{Main}
As we already explained, this statement is similar to the main result of~\cite{Bes00}. Yet Bessi's result depends on the choice of $H_0(I)=\frac{1}{2}(I_1^2+\cdots+I_n^2)$ while we can deal with an arbitrary integrable Hamiltonian $H_0$ which is convex in a neighborhood of the origin. Also as it is stated, the main result of~\cite{Bes00} claims the non-existence of a $C^1$ Lagrangian invariant graph $T_1$ such that $X^{H_1}_{|T_1}$ is $C^1$-conjugate to the vector field $\omega$ and thus our conclusion is slightly stronger; yet it seems to us that what is really proved in~\cite{Bes00} is the non-existence of a Lipschitz Lagrangian invariant graph $T_1$ such that $X^{H_1}_{|T_1}$ has all orbits with the same rotation vector $\omega$, in which case our conclusion could be slightly weaker.
\section{Some questions}\label{s4}
Let us conclude with some questions. It is clear from Forni's result, Bessi's result or Theorem~\ref{th3} that when $\omega \notin \mathrm{R}$, invariant tori with frequency $\omega$ are destroyed in a rather strong sense. But in Theorem~\ref{th1} and Theorem~\ref{th2} this is not the case if $\omega \notin \mathrm{BR}$, as an analytic invariant torus still exists on which the dynamics is topologically linearizable. So one may ask the following question.
\begin{question} Assume that $\omega \in \mathrm{R} \setminus \mathrm{BR}$. Is it possible to have a ``regular'' invariant Lagrangian torus on which the conjugacy to the linear model is ``less regular''? \end{question}
We have used quotation marks since we have no idea of what can be expected; the question is basically whether it is possible to prove anything non-trivial under the sole assumption that $\omega \in \mathrm{R}$, which, as we already explained, is the condition that guarantees that the cohomological equation can be solved with an arbitrarily small loss of analyticity. Of course, it may well be the case that when $\omega \notin \mathrm{BR}$, the conclusions of Theorem~\ref{th1} and Theorem~\ref{th2} can be strengthened to reach conclusions similar to Forni's and Bessi's results.
A second question concerns the assumptions (a1) and (A1). Clearly, (a1) is not a restriction as it is the natural non-degeneracy assumption under which an invariant curve with a prescribed frequency persists. But this is not the case for (A1) as we already pointed out, so we may ask the following question.
\begin{question} Is it possible to prove Theorem~\ref{th2} and Theorem~\ref{th3} replacing the condition (A1) by the weaker condition that $\nabla_I^2 H_0$ is non-degenerate in a neighborhood of $0$? \end{question}
We expect the answer to be yes, at the expense of restricting the conclusion of non-existence to a neighborhood of the unperturbed torus. The role of the condition (A1) is to obtain global uniqueness of invariant tori with a prescribed frequency; without (A1) no such global uniqueness is to be expected, yet in view of the statement of the KAM theorem, only local uniqueness would be required. This local uniqueness is known to hold true within the context of KAM theory (see~\cite{Sal04} for instance), but this is not directly applicable to our context; we believe that with extra work this can be reached, even though we did not pursue this further.
\textit{Acknowledgements.} This work was done while the author was in Cuba, in particular in the very nice caf\'{e} ``Tu t\'{e}" in Santa Clara. The author has also benefited from partial funding from the ANR project Beyond KAM.
\addcontentsline{toc}{section}{References}
\end{document}
\begin{document}
\title{What is a Singular Knot?}
\author{Zsuzsanna Dancso} \thanks{} \address{Zsuzsanna Dancso,
School of Mathematics and Statistics\\
The University of Sydney\\
Sydney NSW 2006\\
Australia } \email{zsuzsanna.dancso@sydney.edu.au}
\begin{abstract} A singular knot is an immersed circle in ${\mathbb R}^{3}$ with finitely many transverse double points. The study of singular knots was initially motivated by the study of Vassiliev invariants. Namely, singular knots give rise to a decreasing filtration on the infinite dimensional vector space spanned by isotopy types of knots: this is called the Vassiliev filtration, and the study of the corresponding associated graded space has led to many insights in knot theory. The Vassiliev filtration has an alternative, more algebraic definition for many flavours of knot theory, for example braids and tangles, but notably not for knots. This view gives rise to connections between knot theory and quantum algebra. Finally, we review results -- many of them recent -- on extensions of non-numerical knot invariants to singular knots. \end{abstract}
\maketitle
\section{Introduction} A \textbf{singular knot} is an immersed circle in ${\mathbb R}^{3}$, whose singularities are limited to finitely many {\em transverse double points}. We will restrict our attention to {\em oriented} singular knots, which are equipped with a direction: Figure~\ref{fig:SingKnotExample} shows an example. Other knotted objects, such as braids, tangles, or knotted graphs have similar singular versions, see Figure~\ref{fig:SingKnotExample} for an example of a singular braid. A \textbf{singular knot diagram} is a planar projection of a singular knot, which has two types of {\em crossings}: regular crossings with over/under strand information, and double points. Reidemeister moves can be formulated to describe singular knots modulo ambient isotopy, see Figure~\ref{fig:RMoves}. For minimal sets of oriented Reidemeister moves see \cite{BEHY}.
\begin{figure}
\caption{An example of a 2-singular knot -- that is, a knot with two double points; and a 3-singular braid on four strands.}
\label{fig:SingKnotExample}
\end{figure}
\begin{figure}
\caption{Reidemeister moves for singular knots, in addition to the usual Reidemeister 1, 2 and 3 moves.}
\label{fig:RMoves}
\end{figure}
\section{The Vassiliev filtration}
Singular knots give rise to a decreasing filtration, called the {\em Vassiliev filtration} on the infinite dimensional vector space generated by isotopy classes of knots. Knot invariants which vanish on some step of this filtration are called {\em finite type} or {\em Vassiliev} invariants. Examples include many famous knot invariants, for example the coefficients of any knot polynomial -- after an appropriate variable substitution -- are of finite type \cite{BN}. In this section we describe the Vassiliev filtration for knots and other knotted objects.
Let ${\mathcal K}$ denote the set of (isotopy classes of) knots, and let ${\mathbb Q}{\mathcal K}$ be the ${\mathbb Q}$-vector space\footnote{One may replace ${\mathbb Q}$ with one's favourite field of characteristic zero, at no cost.} spanned by elements of ${\mathcal K}$. In other words, elements of ${\mathbb Q}{\mathcal K}$ are formal linear combinations of knots.
Let ${\mathcal K}^{n}$ denote the set of (isotopy classes of) $n$-singular knots, and let $K\in {\mathcal K}^{n}$. Each singularity of $K$ can be {\em resolved} in two ways: by replacing the double point with an over-crossing, or with an under-crossing. Note that the notions of ``over'' and ``under'' crossings do not depend on the choice of knot projection -- this is true not only in ${\mathbb R}^{3}$ but in any orientable manifold. We define a \textbf{resolution map} $\rho: {\mathcal K}^{n} \to {\mathbb Q}{\mathcal K}$ as follows: replace each singularity of $K$ by the difference of its two resolutions, as shown in Figure~\ref{fig:Resolve}. This produces a linear combination of $2^{n}$ knots with $\pm1$ coefficients; we call this the \textbf{resolution} of $K$. By an abuse of notation, we denote also by ${\mathcal K}^{n}$ the linear span of the image $\rho({\mathcal K}^{n})$ in ${\mathbb Q}{\mathcal K}$.
\begin{figure}
\caption{Resolution of singularities.}
\label{fig:Resolve}
\end{figure}
The subspaces ${\mathcal K}^{n}$ are a decreasing filtration on ${\mathbb Q}{\mathcal K}$, called the \textbf{Vassiliev filtration}: \begin{equation} {\mathbb Q}{\mathcal K}={\mathcal K}^{0} \supset {\mathcal K}^{1} \supset {\mathcal K}^{2} \supset {\mathcal K}^{3} \supset ... \end{equation}
It's a worthwhile exercise to show that ${\mathcal K}^{1}$ is the set of elements whose coefficients sum to zero: $${\mathcal K}^{1}=\{\sum_{i} \alpha_{i}K_{i} \in {\mathbb Q}{\mathcal K} \, | \sum_{i} \alpha_{i}=0\}.$$
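Both the $2^{n}$ count and one direction of this exercise can be checked in a toy model. The following sketch (our own encoding, not from the text; a ``resolution'' here simply records the over/under choice made at each double point) expands $\rho(K)$ formally.

```python
# Toy model of the resolution map rho: each double point is replaced by
# (over-crossing) - (under-crossing), so an n-singular knot expands into a signed
# formal sum of 2^n resolutions; for n >= 1 the coefficients sum to zero,
# which is the containment rho(K^n) in K^1.
from itertools import product

def resolve(n):
    """Formal sum rho(K) for an n-singular knot K, as (coefficient, choices) pairs."""
    terms = []
    for choices in product(("over", "under"), repeat=n):
        sign = (-1) ** choices.count("under")    # each under-resolution contributes a -1
        terms.append((sign, choices))
    return terms

terms = resolve(2)
print(len(terms))                    # 4 = 2^2 resolutions for a 2-singular knot
print(sum(c for c, _ in terms))      # 0: the resolution lies in K^1
```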
When one encounters a filtered space, a natural idea is to study its {\em associated graded} space instead: this enables inductive arguments and degree-by-degree computations. The associated graded space of knots equipped with the Vassiliev filtration is, by definition, $\mathcal A:= {\bigoplus}_{n=0}^{\infty} {\mathcal K}^{n}/{\mathcal K}^{n+1}$. The space $\mathcal A$ has a useful combinatorial--diagrammatic description in terms of {\em chord diagrams}. The study of finite type invariants is closely tied to the study of the space $\mathcal A$.
We now examine the Vassiliev filtration from a more algebraic perspective through the example of braid groups. Recall that the \textbf{braid group} $B_{n}$ consists of braids on $n$ strands up to braid isotopy, and the group operation is given by vertical stacking. The {\em Artin presentation} is a finite presentation for the group $B_{n}$ in terms of generators and relations (see also Figure~\ref{fig:BraidGens}):
$$B_{n}=\langle \sigma_{1}, ... , \sigma_{n-1} \, | \, \sigma_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1}, \text{and} \, \,
\sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i} \, \, \text{when} \, |i-j|\geq 2 \rangle.$$
Following the strands of a braid from bottom to top determines a permutation of $n$ points; this gives rise to a group homomorphism $S: B_{n} \to S_{n}$ to the symmetric group. For example, (any resolution of) the braid of Figure~\ref{fig:SingKnotExample} is mapped to the cycle $(1324)$. The image $S(b)\in S_{n}$ of a braid $b$ is called the \textbf{skeleton} of $b$; the kernel of this homomorphism is the \textbf{pure braid group} $PB_{n}$. The pure braid group also has a finite presentation due to Artin, in terms of the generators $\sigma_{ij} \, (i<j)$, shown in Figure~\ref{fig:BraidGens}, and somewhat more complicated relations. For more on pure braid presentations see \cite{MM}.
\begin{figure}
\caption{The generators $\sigma_{i}$ of $B_{n}$, and the generators $\sigma_{ij}$ of $PB_{n}$.}
\label{fig:BraidGens}
\end{figure}
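The skeleton homomorphism is easy to make concrete. Below is a small sketch (our own encoding, not from the text): a braid word is a list of nonzero integers, with $i$ standing for $\sigma_{i}$ and $-i$ for its inverse; both map to the transposition $(i\ i{+}1)$, since $S$ forgets the over/under information at each crossing.

```python
# Skeleton homomorphism S: B_n -> S_n, computed from a braid word.
def skeleton(word, n):
    """Permutation of {0, ..., n-1} induced by following the strands of the braid."""
    perm = list(range(n))
    for g in word:
        i = abs(g) - 1                 # sigma_i (or its inverse) swaps positions i, i+1
        perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return tuple(perm)

# The braid relation sigma_1 sigma_2 sigma_1 = sigma_2 sigma_1 sigma_2 survives in S_n:
print(skeleton([1, 2, 1], 4) == skeleton([2, 1, 2], 4))   # True
# sigma_1^2 is a nontrivial braid in the kernel of S, i.e. a pure braid:
print(skeleton([1, 1], 3))                                 # (0, 1, 2)
```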
The Vassiliev filtration can be defined for braid groups the same way as it is defined for knots; we first discuss it for the pure braid group $PB_{n}$. In this case ${\mathbb Q} PB_{n}$ is the {\em group algebra} of the pure braid group over the field ${\mathbb Q}$. Denote the linear subspace generated by the $\rho$-image of $n$-singular pure braids by ${\mathcal B}^{n}$. Just like in the case of knots, \linebreak
${\mathcal B}^{1}=\{\sum_{i} \alpha_{i}b_{i} \in {\mathbb Q} PB_{n} \, | \sum_{i} \alpha_{i}=0\}.$ This is a two-sided ideal in ${\mathbb Q} PB_{n}$, called the \textbf{augmentation ideal}.
However, in this case much more is true. Any $n$-singular braid can be written as a product of $n$ $1$-singular braids: one can ``comb'' the braid into $n$ horizontal levels so that each level has exactly one singularity. As a result, ${\mathcal B}^{n}$ is simply the $n$-th power of the ideal ${\mathcal B}^{1}$.
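The statement ${\mathcal B}^{n}=({\mathcal B}^{1})^{n}$ can be made concrete in the simplest case. The sketch below (our own illustration, not from the text) uses $PB_{2}\cong\mathbb Z$, generated by $\sigma_{12}$, so that ${\mathbb Q}PB_{2}$ is the Laurent polynomial ring ${\mathbb Q}[t,t^{-1}]$ with $t=\sigma_{12}$; membership in the $n$-th power of the augmentation ideal is then divisibility by $(t-1)^{n}$, which can be tested by the vanishing of derivatives at $t=1$.

```python
# Toy check: in Q[PB_2] = Q[t, t^{-1}], the resolution of a double point is (t - 1),
# so an n-singular pure braid resolves to a Laurent polynomial divisible by (t - 1)^n,
# i.e. an element of the n-th power of the augmentation ideal.
def lmul(a, b):
    """Multiply Laurent polynomials given as {exponent: coefficient} dicts."""
    out = {}
    for i, x in a.items():
        for j, y in b.items():
            out[i + j] = out.get(i + j, 0) + x * y
    return out

def derivative_at_1(p, order):
    """Value at t = 1 of the given derivative of the Laurent polynomial p."""
    total = 0
    for e, c in p.items():
        fall = 1
        for r in range(order):
            fall *= (e - r)            # falling factorial e(e-1)...(e-order+1)
        total += c * fall
    return total

t = {1: 1}
singular = {1: 1, 0: -1}               # resolution of one double point: t - 1
# resolution of the 3-singular pure braid  s^2 X s^(-1) X s X  (X = double point):
p = lmul(lmul({2: 1}, singular), lmul({-1: 1}, lmul(singular, lmul(t, singular))))

# p vanishes to order exactly 3 at t = 1, so p lies in I^3 but not I^4:
print([derivative_at_1(p, r) for r in range(4)])   # [0, 0, 0, 6]
```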
A similar statement holds for the braid group $B_{n}$, with the difference that in the definition of ${\mathbb Q} B_{n}$ linear combinations are only allowed within each coset of $PB_{n}$; in other words, only braids of the same skeleton can be combined. Steps of the Vassiliev filtration once again correspond to powers of the augmentation ideal. In fact, this phenomenon is much more general. Many flavours of knotted objects can be finitely presented as an algebraic structure of some kind: {\em tangles} form a {\em planar algebra}; {\em knotted trivalent graphs} have their own special set of operations \cite{Th}; virtual and welded tangles form {\em circuit algebras} \cite{BD}. The Vassiliev filtration -- or its appropriate generalizations -- coincide with the powers of the augmentation ideal in all of these examples.
A notable exception is the case of knots. A ``multiplication operation'' does exist for knots: it is called \textbf{connected sum} and denoted $\#$; see Figure~\ref{fig:ConnSum} for a pictorial definition. It is not true, however, that any $n$-singular knot is the connected sum of $n$ 1-singular knots. Knots which cannot be expressed as a non-trivial connected sum are called \textbf{prime knots}, and there exist prime knots of arbitrary crossing number; in particular, all {\em torus knots} are prime. In other words, knots are not {\em finitely generated} as an ``algebraic structure''.
\begin{figure}
\caption{The connected sum operation of knots: the reader is encouraged to check that it is well-defined.}
\label{fig:ConnSum}
\end{figure}
The algebraic view outlined above gives rise to a deep connection between knot theory and quantum algebra. Let ${\mathcal K}$ denote some class of knotted objects (such as knots, braids, tangles, etc). A central question in the study of finite type invariants is to find a {\em universal finite type invariant}, which contains all of the information that any finite type invariant can retain about $\mathcal K$. More precisely, a universal finite type invariant is a filtered map $Z: {\mathbb Q} \mathcal K \to \hat{\mathcal A}$ which takes values in the {\em degree completed} associated graded space, and which satisfies a certain universality property. Perhaps the most famous universal finite type invariant is the {\em Kontsevich integral} of knots \cite{Ko}. It is still an open problem whether the Kontsevich integral separates knots.
Assume that some class of knotted objects $\mathcal K=\langle g_{1}, ..., g_{k}| R_{1},...,R_{l}\rangle$ forms a finitely presented algebraic structure with generators $g_{i}$ and relations $R_{j}$ (e.\ g.\ $\mathcal K$ may be the braid group). Assume one looks for a universal finite type invariant $Z$ which respects operations in the appropriate sense (e.\ g.\ $Z$ may be an algebra homomorphism). Then it is enough to find the values of the generators $Z(g_{i})\in \mathcal A$, subject to equations arising from each $R_{j}$. In other words, one needs to solve a set of equations in a graded space. This set of equations often turns out to be interesting in its own right: for knotted trivalent graphs or {\em parenthesized tangles} they are the equations which define {\em Drinfeld associators} in quantum algebra \cite{MO, BN2, Da}; for {\em welded foams} they are the {\em Kashiwara-Vergne} equations of Lie theory \cite{BD}.
\section{Invariants of singular knots}
Vassiliev's idea in the early '90s was to extend number-valued knot invariants to singular knots using the resolution of singularities discussed above. This led to an explosion of activity in knot theory at the time. Since then singular knots have been studied in their own right by many researchers, and various non-numerical invariants have been extended to singular knots. Here we summarise some of these results.
In \cite{MOY}, Murakami, Ohtsuki and Yamada developed a skein theory for the HOMFLY polynomial using a generalisation of singular knots called {\em abstract singular knots}. We present a simplified version of their result, which is used in several applications. The HOMFLY polynomial is determined by the skein relation shown on the left of Figure~\ref{fig:HOMFLY}, and its value on the unknot.
To define abstract singular knots, one replaces double points with an oriented {\em thick edge} from the incoming to the outgoing edges of the double point, creating an oriented trivalent graph embedded in ${\mathbb R}^{3}$, as in Figure \ref{fig:ThickEdge}. These trivalent graphs are called {\bf oriented abstract singular knots} and they are characterised by two properties: \begin{enumerate} \item Each vertex is incident to one thick and two thin edges. \item At each vertex the thin edges are oriented the same way, while the thick edge is oriented oppositely. \end{enumerate}
Embedded trivalent graphs which satisfy condition (1) are called {\bf abstract singular knots}. Not every such abstract singular knot can be oriented to satisfy (2), and hence they don't all arise from ordinary singular knots with double points: an example is shown in Figure~\ref{fig:ThickEdge}.
\begin{figure}
\caption{Creating an oriented abstract singular knot from a singular knot, and an abstract singular knot which cannot be oriented.}
\label{fig:ThickEdge}
\end{figure}
Let $P_{n}$ be the one-variable specialization of the HOMFLY polynomial with the substitution $x=q^{n}$, $y=q-q^{-1}$. In particular, $P_{0}$ is the Alexander polynomial, and $P_{2}$ is the Jones polynomial. Then $P_{n}$ satisfies the skein relations shown on the right of Figure~\ref{fig:HOMFLY}, reducing its computation to the case of planar trivalent graphs, that is, abstract singular knots with no crossings. For these graphs the authors of \cite{MOY} provide a further set of skein relations which uniquely determine the value of $P_{n}$.
\begin{figure}
\caption{The original HOMFLY skein relation on the left; the HOMFLY skein relations for abstract singular knots \cite{MOY} on the right.}
\label{fig:HOMFLY}
\end{figure}
Khovanov and Rozansky \cite{KR} use the \cite{MOY} calculus, and the homological algebra of {\em matrix factorizations}, in their categorification of the polynomials $P_{n}$. In \cite{OSS}, Ozsv\'ath, Stipsicz and Szab\'o generalize knot Floer homology to abstract singular knots.
In \cite{KV} Kauffman and Vogel extend the Kauffman polynomial to singular knots -- in fact to knotted four-valent graphs with rigid vertices. In \cite{JL} Juyumaya and Lambropoulou introduce a Jones-type invariant for singular knots, using the theory of singular braids and a Markov trace on Yokonuma--Hecke algebras. In \cite{Fi}, Fiedler extends the Kauffman state models of the Jones and Alexander polynomials to the context of singular knots.
Quandle-type invariants have been generalised and studied for singular knots and other types of knot-like objects (virtual knots, flat knots, pseudoknots). Authors who have contributed to this research include Churchill, Elhamdadi, Hajij, Henrich, Nelson, Oyamaguchi, and Sazdanovi\'{c} \cite{HN,NOS,CEHN}.
This short summary cannot aim to be a comprehensive treatment of the rich body of research, spanning nearly three decades, on singular knots and related knotted objects. We hope that the reader will be inspired to explore some of the many pointers and references, and to contribute to the future of the subject.
\end{document}
\begin{document}
\title{On modular linear differential operators and their applications}
\begin{abstract} A formal definition of the graded algebra $\mathcal{R}$ of modular linear differential operators is given and its properties are studied. An algebraic structure of the solutions to modular linear differential equations (MLDEs) is shown. It is also proved that any quasimodular form of weight $k$ and depth $s$ becomes a solution to a monic MLDE of weight $k-s$. By using the algebraic properties of $\mathcal{R}$, linear differential operators which map the solution space of a monic MLDE to that of another are determined for sufficiently low weights and orders. Furthermore, a lower bound of the order of monic MLDEs satisfied by ${E_4}^m{E_6}^n$ is found. \end{abstract}
\section{Introduction} A modular linear differential equation (MLDE) is a linear ordinary differential equation written in terms of the Ramanujan-Serre differential operators and modular forms on $\mathrm{SL}(2,\mathbb{Z})$. One of the characteristic properties of MLDEs is that the solution space of an MLDE is invariant under the modular transformations of $\mathrm{SL}(2,\mathbb{Z})$. The most well-known example of an MLDE is the Kaneko-Zagier equation, which arose from the study of elliptic curves (\cite{kaneko_zagier1998}). The solutions and solution spaces of the Kaneko-Zagier equation have been studied in many papers (\cite{kaneko_koike2003}, \cite{kaneko_nagatomo_sakai2017}, etc.).
Since an MLDE is a linear differential equation, its solution space is the kernel of the corresponding differential operator. Therefore, the properties of such operators are important for studying MLDEs. Although some papers mention these differential operators, they have not been the subject of systematic research and their algebraic properties remain unknown.
In this paper, we introduce modular linear differential operators (MLDOs). They are the differential operators underlying MLDEs, and MLDEs can be expressed in terms of them. MLDOs constitute a graded $\mathbb{C}$-algebra, which we denote by $\mathcal{R}$. We then investigate algebraic properties of MLDOs. In particular, we determine a $\mathbb{C}$-basis of $\mathcal{R}$ (Theorems \ref{thm201712301243} and \ref{thm201712301244}), prove that $\mathcal{R}$ is isomorphic to a graded skew polynomial ring (Theorem \ref{thm201805311336}) and establish a division with remainder for MLDOs (Theorems \ref{thm201712301245} and \ref{thm201712301246}). The division properties are used many times in the subsequent sections.
Utilizing the results mentioned above, we show that the solutions to MLDEs have some algebraic structure (Theorems \ref{thm201804191022}, \ref{thm201804240953} and \ref{thm20180624}). We also prove that a quasimodular form of weight $k$ and depth $s$ satisfies a monic MLDE of weight $k-s$ (Theorem \ref{thm201804241015}).
We next study linear differential operators acting on the solution spaces of monic MLDEs. The algebraic properties of MLDOs enable us to give a condition under which an MLDO maps the solution space of a monic MLDE to that of another (Corollary \ref{cor201801191507}). Note that the idea of such a linear differential operator is utilized in the study of lower order monic MLDEs (see, for example, Proposition 1 and the following Lemma in \cite{kaneko_koike2003}) although it has not been systematically studied.
As applications of Corollary \ref{cor201801191507}, we introduce a family of third order monic MLDEs $\phi_pf=0$ (Proposition \ref{prop201801151023}), where $p\in\mathbb{R}$ is a parameter and $f$ is the unknown function. The solutions to the MLDE $\phi_pf=0$ are related to the theta functions of the $\mathrm{D}_n$ lattices ($p=2n$, Proposition \ref{prop201801151033}). Then we determine the linear differential operators of low order and weight which map the solution space of the MLDE $\phi_pf=0$ to the solution space of a third order monic MLDE (Example \ref{ex201807171044}).
Moreover, we apply the algebraic properties of MLDOs to the monic MLDEs satisfied by the modular forms ${E_4}^m{E_6}^n$, where $E_k$ is the Eisenstein series of weight $k$. Although it is known that ${E_4}^m{E_6}^n$ satisfies some monic MLDE, the order of such a monic MLDE is not clear. Using the division properties of MLDOs, we give a lower bound of the order of monic MLDEs satisfied by ${E_4}^m{E_6}^n$ (Theorem \ref{thm201801111417}).
The paper is organized as follows. In Section \ref{201712301223}, we review the theory of modular forms and MLDEs. At the end of the section, we slightly generalize the setting of Mason's theorem (Theorem \ref{thm201712301224}). In Section \ref{section201712301238}, we give the formal definition of MLDOs, and then study their algebraic properties. In Section \ref{section20180614}, we show an algebraic structure of the solutions to MLDEs and prove that every quasimodular form satisfies a monic MLDE. In Section \ref{section201712301252}, we apply the results of Section \ref{section201712301238} to the linear differential operators which map the solution space of a monic MLDE to that of another. In Section \ref{section201712301302}, we give a lower bound of the order of monic MLDEs satisfied by ${E_4}^m{E_6}^n$. In Section \ref{section20180220}, we show some properties of left ideals of $\mathcal{R}$. We also give $\mathbb{C}$-algebras containing $\mathcal{R}$ and determine their endomorphisms.
\section{Preliminaries} \label{201712301223}
\subsection{Modular forms} Hereafter, the term modular form will refer to a vector-valued modular form.
Recall that the \textit{modular group} $\mathrm{SL}(2,\mathbb{Z})=\{A\in\mathrm{M}(2,\mathbb{Z})\mid\det(A)=1\}$ is generated by $S=(\begin{smallmatrix}0&-1\\1&0\end{smallmatrix})$ and $T=(\begin{smallmatrix}1&1\\0&1\end{smallmatrix})$.
We will denote by $\mathrm{Hol}$ (resp.\ $\mathrm{Mer}$) the space of all holomorphic (resp.\ meromorphic) functions on the upper half plane $\mathcal{H}=\{z\in\mathbb{C}\mid\Im(z)>0\}$, where $\Im(z)$ is the imaginary part of $z$.
For $z\in\mathcal{H}$ and $\alpha\in\mathbb{C}$, we set $e(\alpha)=e^{2\pi i\alpha}$, $q=e^{2\pi i z}$, $q^{\alpha}=e^{2\pi i\alpha z}$ and $\log q=2\pi iz$. In this paper, we use $z$ (resp.\ $w$) as a variable in $\mathcal{H}$ (resp.\ $\mathbb{C}$). For $w\neq 0$, $\arg(w)\in(-\pi,\pi]$ is the argument of $w$. Unless otherwise specified, the logarithm and the power of $w$ are calculated by using $\arg(w)$.
For $A=(\begin{smallmatrix}a&b\\c&d\end{smallmatrix})\in\mathrm{SL}(2,\mathbb{Z})$, we set $Az=\frac{az+b}{cz+d}$ and $j(A,z)=cz+d$. For $f\in\mathrm{Hol}$ and $k\in\mathbb{R}$, we set $(f|_kA)(z)=j(A,z)^{-k}f(Az)$ and for $F=(f_1,\ldots,f_n)^{\mathrm{T}}\in\mathrm{Hol}^n$, $F|_kA=(f_1|_k A,\ldots,f_n|_k A)^{\mathrm{T}}$.
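As a quick sanity check of these definitions (our own, with an arbitrary test function and an even integer weight to avoid branch issues), the slash operator is a right action, $(f|_kA)|_kB = f|_k(AB)$, a consequence of the cocycle identity $j(AB,z)=j(A,Bz)\,j(B,z)$:

```python
# Numerical check that |_k is a right action on functions of z in the upper half plane.
def act(f, k, A):
    a, b, c, d = A
    return lambda z: (c * z + d) ** (-k) * f((a * z + b) / (c * z + d))

def matmul(A, B):
    a, b, c, d = A
    e, f_, g, h = B
    return (a * e + b * g, a * f_ + b * h, c * e + d * g, c * f_ + d * h)

f = lambda z: z ** 3 + 1 / z          # any test function holomorphic near the orbit of z
k = 4                                  # even integer weight: no branch-cut issues
A, B = (0, -1, 1, 0), (1, 1, 0, 1)     # the generators S and T of SL(2, Z)
z = 0.3 + 0.7j

lhs = act(act(f, k, A), k, B)(z)       # (f|_k A)|_k B evaluated at z
rhs = act(f, k, matmul(A, B))(z)       # f|_k (AB) evaluated at z
print(abs(lhs - rhs) < 1e-12)          # True
```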
We say that $f$ \textit{has at most exponential growth around infinity} if there exist $C>0$ and $M>0$ such that $|f(z)|<e^{C\Im(z)}$ for all $z$ with $\Im(z)>M$. We say that $f$ \textit{is bounded around infinity} if there exist $\epsilon>0$ and $M>0$ such that $|f(z)|<\epsilon$ for all $z$ with $\Im(z)>M$. We say that $f$ \textit{vanishes around infinity} if for all $\epsilon>0$ there exists an $M>0$ such that $|f(z)|<\epsilon$ for all $z$ with $\Im(z)>M$.
Let $\Gamma$ be a subgroup of $\mathrm{SL}(2,\mathbb{Z})$, $v$ a multiplier of weight $k\in\mathbb{R}$ on $\Gamma$, and $R$ a representation of $\Gamma$ of dimension $n\in\mathbb{Z}_{>0}$. Then $F\in\mathrm{Hol}^n$ is called a \textit{weakly holomorphic modular form} (resp.\ \textit{holomorphic modular form}, resp.\ \textit{cusp form}) \textit{of weight $k$ on $\Gamma$ with respect to $v$ and $R$} if (1) for all $B\in\Gamma$, $F|_kB=v(B)R(B)F$ and (2) for all $A\in\mathrm{SL}(2,\mathbb{Z})$, each component of $F|_k A$ has at most exponential growth (resp.\ is bounded, resp.\ vanishes) around infinity. We will denote the spaces of weakly holomorphic modular forms, holomorphic modular forms and cusp forms by $\mathcal{M}^!(k,\Gamma,v,R)$, $\mathcal{M}(k,\Gamma,v,R)$ and $\mathcal{S}(k,\Gamma,v,R)$, respectively. Additionally, we set $\mathcal{X}(k,v)=\mathcal{X}(k,\mathrm{SL}(2,\mathbb{Z}),v,\mathrm{triv})$ and $\mathcal{X}(k)=\mathcal{X}_k=\mathcal{X}(k,1)$ for $\mathcal{X}=\mathcal{M}$, $\mathcal{S}$ or $\mathcal{M}^!$, where $\mathrm{triv}$ is the trivial one-dimensional representation of $\mathrm{SL}(2,\mathbb{Z})$.
Let $f(z)=q^{\lambda}\sum_{n=0}^{\infty}a_n q^{mn}$ be a convergent $q$-series with $\lambda\in\mathbb{C}$, $a_n\in\mathbb{C}$, $a_0\neq0$ and $m\in\mathbb{R}_{>0}$. Then $\lambda$ is called the \textit{leading exponent} and $1/m$ is called the \textit{width}. Suppose that there exists $N>0$ such that $f(z)\neq 0$ for $\Im(z)>N$. Then for $p\in\mathbb{R}$, the $p$-th power of $f(z)$ is defined by $f^p(z)=q^{p\lambda}{a_0}^p(1+\binom{p}{1}\frac{a_1}{a_0}q^m+\cdots)$ for $\Im(z)>N$. Note that $|f^p(z)|=|f(z)|^p$ and $f^{p+p'}(z)=f^p(z)f^{p'}(z)$.
For an even integer $k\ge 2$, the weight $k$ \textit{Eisenstein series} $E_k(z)=1+O(q)$ is defined as usual. For $k\ge 4$, $E_k\in\mathcal{M}(k)$ holds, but $E_2\notin \mathcal{M}(2)$ and we have \begin{equation} \label{eq201807171108}
(E_2|_2 A)(z)=E_2(z)+\frac{12c}{2\pi i(cz+d)} \end{equation} for $A=(\begin{smallmatrix}a&b\\c&d\end{smallmatrix})\in\mathrm{SL}(2,\mathbb{Z})$. The \textit{Dedekind eta function} $\eta(z)=q^{1/24}+O(q^{25/24})\in\mathcal{S}(1/2,\chi)$ never vanishes on $\mathcal{H}$, where $\chi$ is a multiplier of weight $1/2$. The \textit{Ramanujan delta function} is $\Delta=\frac{1}{1728}({E_4}^3-{E_6}^2)=\eta^{24}\in\mathcal{S}(12)$.
In this paper, we denote $f'(z)=\frac{1}{2\pi i}\frac{df}{dz}(z)$. The following identities are called the \textit{Ramanujan identities}: $E_2'=\frac{1}{12}(E_2^2-E_4)$, $E_4'=\frac{1}{3}(E_2E_4-E_6)$ and $E_6'=\frac{1}{2}(E_2E_6-E_4^2)$. Recall $\eta'=\frac{1}{24}E_2 \eta$.
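The Ramanujan identities determine the derivative of every element of $\mathbb{C}[E_2,E_4,E_6]$ and are used throughout the paper; they can be checked directly on truncated $q$-expansions. The following pure-Python sketch (the truncation order and all helper names are ours, not the paper's) verifies all three identities with exact rational coefficients modulo $q^{12}$.

```python
from fractions import Fraction

N = 12  # truncation order: all q-expansions are taken modulo q^N

def sigma(k, n):
    """Divisor sum sigma_k(n) = sum of d^k over the divisors d of n."""
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eisenstein(k, c):
    """Coefficient list of E_k = 1 + c * sum_{n>=1} sigma_{k-1}(n) q^n."""
    return [Fraction(1)] + [Fraction(c) * sigma(k - 1, n) for n in range(1, N)]

def mul(f, g):
    """Product of two truncated q-series."""
    h = [Fraction(0)] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

def dq(f):
    """The normalized derivative f' = q df/dq."""
    return [n * a for n, a in enumerate(f)]

def sub(f, g):
    return [a - b for a, b in zip(f, g)]

def scale(c, f):
    return [c * a for a in f]

E2 = eisenstein(2, -24)
E4 = eisenstein(4, 240)
E6 = eisenstein(6, -504)

# The three Ramanujan identities, checked modulo q^N:
assert dq(E2) == scale(Fraction(1, 12), sub(mul(E2, E2), E4))
assert dq(E4) == scale(Fraction(1, 3), sub(mul(E2, E4), E6))
assert dq(E6) == scale(Fraction(1, 2), sub(mul(E2, E6), mul(E4, E4)))
```

The same truncated-series arithmetic suffices to check any identity of this kind below, to any desired order.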
For $k\in\mathbb{R}$, the \textit{Ramanujan-Serre differential operator} $D_k$ is defined by \begin{equation} D_k f=f'-\frac{k}{12}E_2 f. \end{equation} This operator was originally introduced by Ramanujan \cite{ramanujan1916}. It also appears in the theory of $p$-adic modular forms \cite{serre1973}. We set $\srdiff{0}{k}=1$ and $\srdiff{n}{k}=D_{k+2n-2}\cdots D_{k+2}D_k$. By Eq.\ (\ref{eq201807171108}), we have \begin{equation} \label{eq201712171730}
(\srdiff{n}{k}f)|_{k+2n}A=\srdiff{n}{k}(f|_k A) \end{equation} for $A\in\mathrm{SL}(2,\mathbb{Z})$. The \textit{Leibniz rule} \begin{equation} \label{eq201801200956} \srdiff{n}{k+l}(fg)=\sum_{i=0}^n\binom{n}{i}(\srdiff{i}{k}f)\srdiff{n-i}{l}g \end{equation} will be used later.
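As a concrete check of the operator $D_k$, the sketch below (same truncated $q$-expansion conventions as elsewhere in our examples; the helper names are ours) verifies $D_4E_4=-\frac{1}{3}E_6$, $D_6E_6=-\frac{1}{2}{E_4}^2$, which follow from the Ramanujan identities, together with the $n=1$ instance of the Leibniz rule.

```python
from fractions import Fraction

N = 10  # q-expansions modulo q^N

def sigma(k, n):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eis(k, c):
    return [Fraction(1)] + [Fraction(c) * sigma(k - 1, n) for n in range(1, N)]

def mul(f, g):
    h = [Fraction(0)] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

E2, E4, E6 = eis(2, -24), eis(4, 240), eis(6, -504)

def D(k, f):
    """Ramanujan-Serre operator D_k f = f' - (k/12) E_2 f, with f' = q df/dq."""
    return [n * a - Fraction(k, 12) * b for n, (a, b) in enumerate(zip(f, mul(E2, f)))]

# D_k maps M(k) into M(k+2): D_4 E_4 = -E_6/3 and D_6 E_6 = -E_4^2/2.
assert D(4, E4) == [Fraction(-1, 3) * a for a in E6]
assert D(6, E6) == [Fraction(-1, 2) * a for a in mul(E4, E4)]

# The n = 1 case of the Leibniz rule: D_{k+l}(fg) = (D_k f) g + f (D_l g).
lhs = D(10, mul(E4, E6))
rhs = [a + b for a, b in zip(mul(D(4, E4), E6), mul(E4, D(6, E6)))]
assert lhs == rhs
```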
A \textit{modular linear differential equation} (MLDE) of weight $k$ and order $n$ is an ordinary differential equation of the form \begin{equation} (g_0\srdiff{n}{k}+g_1\srdiff{n-1}{k}+\cdots+g_{n-1}D_k+g_n)f=0, \end{equation} where $g_i\in\mathcal{M}(l+2i)$ for some $l\in\mathbb{Z}_{\ge0}$, $g_0\neq 0$ and $f$ is the unknown (holomorphic or meromorphic) function (see \cite{mason2007}). An MLDE is called \textit{monic} if $l=0$ and $g_0=1$. The solutions to a monic MLDE are defined on all of $\mathcal{H}$ since $\mathcal{H}$ is simply connected. The following result is Theorem 4.1 in \cite{mason2007}.
\begin{prop} \label{prop201801121402} The solution space of a weight $k$ MLDE $(g_0\srdiff{n}{k}+g_1\srdiff{n-1}{k}+\cdots+g_{n-1}D_k+g_n)f=0$ is invariant under the modular transformations of weight $k$. \end{prop}
\begin{proof}
Let $A\in\mathrm{SL}(2,\mathbb{Z})$. By the modular invariance of $g_i$ and the compatibility (\ref{eq201712171730}), we have $((g_0\srdiff{n}{k}+g_1\srdiff{n-1}{k}+\cdots+g_{n-1}D_k+g_n)f)|_{k+l+2n}A=(g_0\srdiff{n}{k}+g_1\srdiff{n-1}{k}+\cdots+g_{n-1}D_k+g_n)(f|_kA)$. \end{proof}
Note that $g_i$ are holomorphic functions of $q$ and $f'=q\frac{df}{dq}$. Therefore, a monic MLDE has a singularity only at $q=0$, which turns out to be a regular singularity, hence the solutions are given by the Frobenius method. If the indicial roots are all simple, each solution is a finite sum of $q$-series of the form $q^{\lambda}\sum_{i=0}^{\infty}a_i q^i$ (see \cite{coddington_levinson1955}).
\subsection{Generalization of Mason's theorem}
\begin{lem} \label{masons_theorem_lemma_1} Let $v$ be a weight $0$ multiplier on $\mathrm{SL}(2,\mathbb{Z})$. Then the following hold. \begin{enumerate} \item $\mathcal{S}(0,v)=\{0\}$. \item $\mathcal{M}(0,v)= \begin{cases} \mathbb{C} & \text{if }v=1,\\ \{0\} & \text{if }v\neq 1. \end{cases}$ \end{enumerate} \end{lem}
\begin{proof} Since the case $v=1$ is clear, assume $v\neq 1$. Choose $i=2,4,\ldots,10$ so that $v=\chi^{2i}$. (1) $\eta^{24-2i}\mathcal{S}(0,v)\subset\mathcal{S}(12-i,1)=0$. (2) $\eta^{24-2i}\mathcal{M}(0,v)\subset\mathcal{S}(12-i,1)=0$. \end{proof}
\begin{lem} \label{masons_theorem_lemma_2}
Let $u:\mathrm{SL}(2,\mathbb{Z})\to\mathbb{C}$ be a function and $k\in\mathbb{R}$ a real number. Suppose that there exists $f\in\mathrm{Hol}\backslash\{0\}$ such that $f|_k A=u(A)f$ for all $A\in\mathrm{SL}(2,\mathbb{Z})$. Then $u$ is a weight $k$ multiplier on $\mathrm{SL}(2,\mathbb{Z})$. \end{lem}
\begin{proof}
Since $f\neq 0$, $u(\mathrm{SL}(2,\mathbb{Z}))\subset\mathbb{C}^{\times}$. Set $g=f/\eta^{2k}$. Since $g|_0 A=(f|_k A) /(\eta^{2k}|_k A)\allowbreak=u(A)g/\chi^{2k}(A)$, $u/\chi^{2k}$ is a one-dimensional representation of $\mathrm{SL}(2,\mathbb{Z})$, so $u=\chi^{2k}\chi^{2n}$ for some $n\in\{0,1,\ldots,11\}$ and $|u(A)|=1$ for all $A\in\mathrm{SL}(2,\mathbb{Z})$. The other conditions for a multiplier are clear. \end{proof}
The following theorem generalizes Theorem 4.3 in \cite{mason2007}.
\begin{thm} \label{thm201712301224} Let $v$ be a weight $k\in\mathbb{R}$ multiplier on $\mathrm{SL}(2,\mathbb{Z})$, $R$ an $n$-dimensional representation and $F=(f_1,\ldots,f_n)^{\mathrm{T}}\in\mathcal{M}^!(k,\mathrm{SL}(2,\mathbb{Z}),v,R)$. Suppose that each $f_i$ has the $q$-expansion $q^{\lambda_i}\sum_{j=0}^{\infty}a_{ij}q^{mj}$ with $m>0$ and $a_{i0}\neq 0$, and that $\lambda_1,\ldots,\lambda_n\in\mathbb{R}$ are distinct. \begin{enumerate} \item $n(k+n-1)\ge 12(\lambda_1+\cdots+\lambda_n)$. \item $f_1,\ldots,f_n$ are solutions to an MLDE of order $n$ and weight $k$. \item The following conditions are equivalent:
\noindent (a) $n(k+n-1)= 12(\lambda_1+\cdots+\lambda_n)$,
\noindent (b) $f_1,\ldots,f_n$ are solutions to a monic MLDE of order $n$ and weight $k$. \end{enumerate} \end{thm}
\begin{proof}
(1) Set $\lambda=\lambda_1+\cdots+\lambda_n$ and $l=n(k+n-1)$. The \textit{modular Wronskian} $W(F)=\det(F,D_k F, \srdiff{2}{k} F,\ldots,\srdiff{n-1}{k}F)$ is a nonzero $q$-series with the leading exponent $\lambda$ and the width $1/m$ since $\lambda_1,\ldots,\lambda_n$ are distinct (cf.\ the proof of Lemma 3.6 in \cite{mason2007}). Moreover, $W(F)|_l A=v(A)^n \det(R(A))W(F)$ for all $A\in\mathrm{SL}(2,\mathbb{Z})$ (cf.\ Lemmas 3.1 and 3.4 in \cite{mason2007}). By Lemma \ref{masons_theorem_lemma_2}, $v^n \det(R)$ is a weight $l$ multiplier on $\mathrm{SL}(2,\mathbb{Z})$. Therefore, $u:=\frac{v^n \det(R)}{\chi^{2l}}$ is a weight $0$ multiplier on $\mathrm{SL}(2,\mathbb{Z})$ and so $\frac{W(F)}{\eta^{2l}}\in\mathcal{M}^!(0,u)\backslash\{0\}$ has the leading exponent $\lambda-\frac{l}{12}$. If $\lambda-\frac{l}{12}>0$, then $\frac{W(F)}{\eta^{2l}}\in\mathcal{S}(0,u)\backslash\{0\}$, which contradicts Lemma \ref{masons_theorem_lemma_1}. Therefore, $\lambda-\frac{l}{12}\le 0$ and $n(k+n-1)\ge 12(\lambda_1+\cdots+\lambda_n)$.
(3) The proof of (b)$\Rightarrow$(a) is similar to that of Theorem 4.3 in \cite{mason2007}. We prove (a)$\Rightarrow$(b). Since $\lambda-\frac{l}{12}=0$, we have $\frac{W(F)}{\eta^{2l}}\in\mathcal{M}(0,u)\backslash\{0\}$. By Lemma \ref{masons_theorem_lemma_1}, $u=1$ and $v^n \det(R)=\chi^{2l}$. For each $i=1,\ldots,n$, we have \begin{equation} \label{masons_theorem_mlde} \det \begin{pmatrix} f_i & D_k f_i & \cdots & \srdiff{n}{k} f_i\\ F & D_k F & \cdots & \srdiff{n}{k} F \end{pmatrix} =\sum_{j=0}^n (-1)^j W^j (F) \srdiff{j}{k} f_i=0, \end{equation}
where $W^j (F)=\det(F,\ldots,\srdiff{j-1}{k}F,\srdiff{j+1}{k}F,\ldots,\srdiff{n}{k} F)$. The leading exponent of $W^j (F)$ is equal to or greater than $\lambda$, and $W^j (F)|_{l+2n-2j}A=v(A)^n \det(R(A))W^j (F)$ for all $A\in\mathrm{SL}(2,\mathbb{Z})$. Therefore, $W^j(F)\in\mathcal{M}^!(l+2n-2j,v^n \det(R))$ and $\frac{W^j(F)}{\eta^{2l}}\in\mathcal{M}(2n-2j)$. In particular, $\frac{W^n (F)}{\eta^{2l}}=\frac{W(F)}{\eta^{2l}}\in\mathcal{M}(0)\backslash\{0\}=\mathbb{C}^{\times}$. Dividing both sides of Eq.\ (\ref{masons_theorem_mlde}) by $\eta^{2l}$, we obtain \begin{equation} (-1)^n \frac{W(F)}{\eta^{2l}}\srdiff{n}{k} f_i +\sum_{j=0}^{n-1} (-1)^{j} \frac{W^j (F)}{\eta^{2l}} \srdiff{j}{k} f_i=0. \end{equation}
(2) We have $\frac{W(F)}{\eta^{2l}}\in\mathcal{M}^!(0,u)\backslash\{0\}$ with the leading exponent $\lambda-\frac{l}{12}$. Write $u=\chi^{2i}$ with $i\in\{0,2,\ldots,10\}$ and choose an $N\in\mathbb{Z}$ such that $N\ge \frac{i}{12}-\lambda+\frac{l}{12}$. Then $\eta^{24N-2i}\frac{W(F)}{\eta^{2l}}\in\mathcal{M}(12N-i)\backslash\{0\}$. Similarly, we have $\eta^{24N-2i}\frac{W^j (F)}{\eta^{2l}}\in\mathcal{M}(12N-i+2n-2j)$. Multiplying both sides of Eq.\ (\ref{masons_theorem_mlde}) by $\frac{\eta^{24N-2i}}{\eta^{2l}}$, we obtain \begin{equation} (-1)^n \eta^{24N-2i}\frac{W(F)}{\eta^{2l}}\srdiff{n}{k}f_i +\sum_{j=0}^{n-1} (-1)^{j} \eta^{24N-2i}\frac{W^j (F)}{\eta^{2l}}\srdiff{j}{k}f_i=0. \end{equation} \end{proof}
\section{Modular linear differential operators} \label{section201712301238}
In this section, we define modular linear differential operators (Subsection \ref{subsection201801141056}) and show their algebraic properties (Subsections \ref{section201801110942} and \ref{subsection201801181005}). The division properties shown in Subsection \ref{subsection201801181005} will be utilized for the proofs in the subsequent sections.
\subsection{Definition of MLDO} \label{subsection201801141056} In this paper, an algebra or a ring is associative and unital, but not necessarily commutative. For any graded abelian group $A$, we denote by $A_k$ the degree $k$ homogeneous subgroup of $A$ and by $A_*$ the set of all homogeneous elements of $A$.
We set $\mathcal{M}=\bigoplus_{n\in 2\mathbb{Z}}\mathcal{M}_n$ and $\mathcal{S}=\bigoplus_{n\in 2\mathbb{Z}}\mathcal{S}_n$. Note that $\mathcal{M}=\mathbb{C} [E_4,E_6]$ is a graded polynomial ring and $\mathcal{S}=\Delta\mathcal{M}$ is a prime ideal of $\mathcal{M}$.
We set $H_n=\mathrm{Hol}$ and $H=\bigoplus_{n\in\mathbb{R}}H_n$. For $f\in H_n$, we define the \textit{weight} of $f$ by $\mathrm{wt}(f)=n$. If necessary, we denote an element of $H_n$ as $(f,n)$, where $f\in\mathrm{Hol}$. The space $H$ becomes a graded $\mathbb{C}$-algebra with identity $(1,0)$ under the product $(f,n)(g,m)=(fg,n+m)$.
We denote by $\mathrm{End}^i(H)$ the $\mathbb{C}$-vector space $\{ \phi \in \mathrm{End}_{\mathbb{C}}(H) \mid \phi(H_n)\subset H_{n+i}$ for all $n\in\mathbb{R}\}$ and set $\mathrm{End}(H)=\bigoplus_{i\in\mathbb{R}}\mathrm{End}^i(H)$, which is a graded $\mathbb{C}$-algebra. For $a\in \mathrm{End}^i (H)$, we define the \textit{weight} of $a$ by $\mathrm{wt}(a)=i$.
Consider the following three homogeneous elements in $\mathrm{End} (H)$: \begin{align} \delta&\in\mathrm{End}^2 (H),\quad\delta(f,n)=(D_n f,n+2),\\ e_4&\in\mathrm{End}^4 (H),\quad e_4(f,n)=(E_4 f,n+4),\\ e_6&\in\mathrm{End}^6 (H),\quad e_6(f,n)=(E_6 f,n+6). \end{align} It is easy to check the commutation relations: \begin{equation} \label{eq201805150955} [\delta,e_4]=-\frac{1}{3}e_6,\quad [\delta,e_6]=-\frac{1}{2}{e_4}^2,\quad [e_4,e_6]=0, \end{equation} where $[x,y]=xy-yx$ is the commutator.
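The commutation relations can be confirmed numerically by applying both sides to a test element, say $(E_6,6)$ for $[\delta,e_4]$ and $(E_4,4)$ for $[\delta,e_6]$. The following sketch does this on truncated $q$-expansions with exact rational coefficients (the helper names and the choice of test elements are ours).

```python
from fractions import Fraction

N = 10  # q-expansions modulo q^N

def sigma(k, n):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eis(k, c):
    return [Fraction(1)] + [Fraction(c) * sigma(k - 1, n) for n in range(1, N)]

def mul(f, g):
    h = [Fraction(0)] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

E2, E4, E6 = eis(2, -24), eis(4, 240), eis(6, -504)

def D(k, f):
    """D_k f = f' - (k/12) E_2 f with f' = q df/dq."""
    return [n * a - Fraction(k, 12) * b for n, (a, b) in enumerate(zip(f, mul(E2, f)))]

# [delta, e_4] = -e_6/3, tested on (E_6, 6):
#   delta e_4 (E_6, 6) - e_4 delta (E_6, 6) = (D_10(E_4 E_6) - E_4 D_6 E_6, 12)
comm4 = [a - b for a, b in zip(D(10, mul(E4, E6)), mul(E4, D(6, E6)))]
assert comm4 == [Fraction(-1, 3) * a for a in mul(E6, E6)]

# [delta, e_6] = -e_4^2/2, tested on (E_4, 4):
comm6 = [a - b for a, b in zip(D(10, mul(E6, E4)), mul(E6, D(4, E4)))]
assert comm6 == [Fraction(-1, 2) * a for a in mul(E4, mul(E4, E4))]
```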
By $\mathcal{R}=\bigoplus_{i\in\mathbb{Z}}\mathcal{R}_i$, we denote the graded $\mathbb{C}$-subalgebra of $\mathrm{End} (H)$ generated by $\delta$, $e_4$ and $e_6$ and call it the \textit{algebra of modular linear differential operators} or the \textit{MLDO algebra}. An element of $\mathcal{R}$ is called a \textit{modular linear differential operator} (MLDO). We remark that $H$ is a left $\mathcal{R}$-module and $\mathcal{M}\subset H$ is a graded $\mathbb{C}$-subalgebra and a graded left $\mathcal{R}$-submodule.
By $\mathcal{M}'=\bigoplus_{i\in\mathbb{Z}}\mathcal{M}'_i$, we denote the graded $\mathbb{C}$-subalgebra of $\mathrm{End} (H)$ generated by $e_4$ and $e_6$. We define the $\mathbb{C}$-algebra homomorphism $\iota:\mathcal{M}\to\mathcal{M}'$ by $\iota(E_4)=e_4$ and $\iota(E_6)=e_6$. We identify $\mathcal{M}$ with $\mathcal{M}'$ since $\iota$ is a grade-preserving isomorphism. Note that an MLDE can be expressed by an MLDO as follows: \begin{equation} (g_n \srdiff{n}{k}+\cdots+g_1 D_k +g_0)f=0 \Leftrightarrow (g_n {\delta}^n+\cdots+g_1 \delta +g_0)(f,k)=0. \end{equation}
For $f=\sum_{k}(f_k,k) \in H$ and $A\in\mathrm{SL}(2,\mathbb{Z})$, we denote $\sum_k (f|_k A,k)$ by $fA$. \begin{prop} Let $f,g\in H$, $a\in\mathcal{R}$ and $A,B\in\mathrm{SL}(2,\mathbb{Z})$. \begin{enumerate} \item \label{item201801120946} $\delta(fg)=(\delta f)g+f(\delta g)$. (That is, $\delta$ is a derivation of $H$.) \item $e_4(fg)=(e_4 f)g=f(e_4 g)$ and $e_6(fg)=(e_6 f)g=f(e_6 g)$. \item If $f\in\bigoplus_{n\in\mathbb{Z}}H_n$, then $f(AB)=(fA)B$. \item \label{item201801120946_2} $(fg)A=(fA)(gA)$. \item \label{item201801120946_3} $(af)A=a(fA)$. \end{enumerate} \end{prop}
\begin{proof} The results follow from the definitions. \end{proof}
\subsection{Basis of the MLDO algebra} \label{section201801110942} In this subsection, we prove that ${e_4}^i{e_6}^j{\delta}^k$ form a $\mathbb{C}$-basis of $\mathcal{R}$ (Theorem \ref{thm201712301243}) and show that $\mathcal{R}$ is isomorphic to a graded skew polynomial ring (Theorem \ref{thm201805311336}).
For $a\in\mathcal{R}$, we set $D[a]=[\delta,a]$, $D^0[a]=a$ and $D^n[a]=D[D^{n-1}[a]]$ for $n>0$. The Leibniz rule $D[ab]=D[a]b+aD[b]$ is clear. By induction on $n\ge0$, $ D^n[ab]=\sum_{i=0}^n \binom{n}{i}D^i[a]D^{n-i}[b]. $
\begin{lem} \label{lem201805151227} \begin{enumerate} \item For $a\in\mathcal{M}'$, $D[a]\in\mathcal{M}'$. \item For $a\in\mathcal{M}(k)$, $D[\iota(a)]=\iota(D_k a)$. \item For $a\in\mathcal{R}$ and $n\in\mathbb{Z}_{\ge 0}$, ${\delta}^n a=\sum_{i=0}^n\binom{n}{i}D^i [a]{\delta}^{n-i}$ and $a{\delta}^n=\sum_{i=0}^n\binom{n}{i}\allowbreak (-1)^i \allowbreak {\delta}^{n-i} \allowbreak D^i[a]$. \end{enumerate} \end{lem}
\begin{proof} The proof is straightforward. \end{proof}
For $z_0\in\mathcal{H}$, consider the following operator \begin{equation} I_{z_0}\in\mathrm{End}^{-2}(H),\quad(f,n)\mapsto\left(2\pi i \eta^{2n-4}(z)\int_{z_0}^z\eta^{-2n+4}(\tau)f(\tau)\,d\tau,n-2\right). \end{equation} Since $\delta I_{z_0}=1$, $\delta$ is surjective and $I_{z_0}$ is injective. The triple $(H,\delta,I_{z_0})$ is an \textit{integro-differential algebra} since $R=I_{z_0}\delta$ satisfies the \textit{Rota-Baxter relation} \begin{equation} R(f,n)R(g,m)=(R(f,n))(g,m)+(f,n)R(g,m)-R((f,n)(g,m)) \end{equation} (see \cite{rogensburger_rosenkranz_middeke2009}). Note $R(f,n)=(f(z)-\eta^{2n}(z)\eta^{-2n}(z_0)f(z_0),n)$. We also have the decomposition $H=\ker(\delta)\oplus\mathrm{im}(I_{z_0}).$
\begin{thm} \label{thm201712301243} The set $\{{e_4}^i {e_6}^j {\delta}^k\mid i,j,k\in\mathbb{Z}_{\ge 0}\}$ forms a $\mathbb{C}$-basis of $\mathcal{R}$. \end{thm}
\begin{proof} By Eq.\ (\ref{eq201805150955}), ${e_4}^i {e_6}^j {\delta}^k$ span $\mathcal{R}$. In order to show their linear independence, assume $\sum_{i,j,k\ge0}C_{ijk}{e_4}^i {e_6}^j {\delta}^k=0$. We prove $C_{ijk}=0$ by induction on $k$.
(1) The case $k=0$. Since $0=(\sum_{i,j,k\ge0}C_{ijk}{e_4}^i {e_6}^j {\delta}^k)(1,0)\allowbreak=\sum_{i,j\ge0}C_{ij0}{e_4}^i {e_6}^j(1,0)\allowbreak=\sum_{i,j\ge0}(C_{ij0}{E_4}^i {E_6}^j,4i+6j)$, we have $\sum_{4i+6j=d}C_{ij0}{E_4}^i {E_6}^j \allowbreak =0$ for all $d\ge 0$. Since $E_4$ and $E_6$ are algebraically independent, $C_{ij0}=0$ for all $i,j\ge0$.
(2) The case $k>0$. Assume $C_{ijk'}=0$ for all $k'<k$. Since $0=(\sum_{i,j,l\ge0}C_{ijl}{e_4}^i {e_6}^j {\delta}^l)\allowbreak(I_{z_0})^k(\eta^{4k},2k)=\sum_{i,j\ge0}C_{ijk}{e_4}^i {e_6}^j(\eta^{4k},2k)=\sum_{i,j\ge0}(C_{ijk}{E_4}^i {E_6}^j \eta^{4k},4i+6j+2k)$, we have $\sum_{4i+6j=d}C_{ijk}{E_4}^i {E_6}^j \allowbreak =0$ for all $d\ge 0$. \end{proof}
By Theorem \ref{thm201712301243}, we have, for example, $\mathcal{R}_0=\mathbb{C}$, $\mathcal{R}_2=\mathbb{C} \delta$, $\mathcal{R}_4=\mathbb{C} e_4\oplus \mathbb{C} {\delta}^2$, $\mathcal{R}_6=\mathbb{C} e_6\oplus \mathbb{C} e_4 \delta\oplus \mathbb{C} {\delta}^3$ and $\mathcal{R}_{n}=\{0\}$ if $n<0$ or $n$ is odd.
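In general, $\dim_{\mathbb{C}}\mathcal{R}_n$ is the number of monomials ${e_4}^i{e_6}^j{\delta}^k$ of weight $4i+6j+2k=n$. A small counting sketch (the function name is ours) reproduces the examples above:

```python
def dim_R(n):
    """dim of R_n: count monomials e4^i e6^j delta^k with 4i + 6j + 2k = n."""
    if n < 0 or n % 2 != 0:
        return 0
    count = 0
    for i in range(n // 4 + 1):
        for j in range(n // 6 + 1):
            rem = n - 4 * i - 6 * j
            if rem >= 0 and rem % 2 == 0:  # then k = rem // 2 works
                count += 1
    return count

# Matches the examples in the text:
assert [dim_R(n) for n in (0, 2, 4, 6)] == [1, 1, 2, 3]
assert dim_R(-2) == 0 and dim_R(5) == 0
```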
\begin{thm} \label{thm201712301244} The set $\{{\delta}^k {e_4}^i {e_6}^j \mid i,j,k\in\mathbb{Z}_{\ge 0}\}$ forms a $\mathbb{C}$-basis of $\mathcal{R}$. \end{thm}
\begin{proof} The result follows from Lemma \ref{lem201805151227} (3) and Theorem \ref{thm201712301243}. \end{proof}
Let $x,y,d$ be formal symbols and set $V=\mathbb{C} x\oplus \mathbb{C} y\oplus \mathbb{C} d$. Let $I$ be the two-sided ideal of the tensor algebra $T(V)$ generated by $[x,y]$, $[d,x]+\frac{1}{3}y$ and $[d,y]+\frac{1}{2}x\otimes x$. Let $J$ be the left $\mathcal{R}$-submodule of $\mathcal{R}\otimes_{\mathbb{C}}\mathbb{C}$ generated by $\delta\otimes 1$ and $J'$ the left $\mathcal{R}$-submodule of $\mathcal{R}\otimes_{\mathbb{C}}\mathbb{C}\Delta$ generated by $\delta\otimes \Delta$. Note that $\mathbb{C}=\mathcal{M}_0$.
Let $A_2(\mathbb{C})$ denote the second Weyl algebra, generated by $x$, $y$, $\partial/\partial x$ and $\partial/\partial y$.
\begin{thm} \label{thm201801111434} \begin{enumerate} \item As $\mathbb{C}$-algebras, $T(V)/I$ is isomorphic to $\mathcal{R}$. \item As left $\mathcal{R}$-modules, $(\mathcal{R}\otimes_{\mathbb{C}}\mathbb{C})/J$ is isomorphic to $\mathcal{M}$. \item As left $\mathcal{R}$-modules, $(\mathcal{R}\otimes_{\mathbb{C}}\mathbb{C}\Delta)/J'$ is isomorphic to $\mathcal{S}$. \item There is a $\mathbb{C}$-algebra embedding of $\mathcal{R}$ in $A_2(\mathbb{C})$. \end{enumerate} \end{thm}
\begin{proof} (1) The surjective $\mathbb{C}$-algebra homomorphism \begin{equation} T(V)/I\to\mathcal{R},\quad x\mapsto e_4,\quad y\mapsto e_6,\quad d\mapsto \delta \end{equation} is injective by Theorem \ref{thm201712301243}.
(2) The homomorphism \begin{equation} \psi:(\mathcal{R}\otimes\mathbb{C})/J\to\mathcal{M},\quad a\otimes w+J\mapsto aw \end{equation} is clearly surjective. To show the injectivity, let $a=\sum_{i,j,k\ge0}C_{ijk}({e_4}^i{e_6}^j {\delta}^k)\otimes 1+J\in\ker(\psi)$. Then $\psi(a)=\sum_{i,j,k\ge0}C_{ijk}({e_4}^i{e_6}^j {\delta}^k) 1=\sum_{i,j\ge0}C_{ij0}{E_4}^i{E_6}^j=0$ and $C_{ij0}=0$ for all $i,j\ge0$, so that $a=0$.
(3) The proof is similar to that of (2).
(4) Since $\{x^iy^j(\partial/\partial x)^k(\partial/\partial y)^l \mid i,j,k,l\ge0 \}$ is a $\mathbb{C}$-basis of $A_2(\mathbb{C})$, the map \begin{equation} \mathcal{R}\to A_2(\mathbb{C}),\quad e_4\mapsto x,\quad e_6\mapsto y,\quad \delta\mapsto -\frac{1}{3}y\frac{\partial}{\partial x}-\frac{1}{2}x^2\frac{\partial}{\partial y} \end{equation} is a $\mathbb{C}$-algebra embedding. \end{proof}
Recall the properties of \textit{skew polynomial rings} (cf.\ Chapter 1, Section 2 in \cite{mcconnell_robson2001}). Let $R$ be a ring, $s:R\to R$ a ring homomorphism and $d:R\to R$ an $s$-derivation, that is, $d(a+b)=da+db$ and $d(ab)=(sa)db+(da)b$. For a \textit{variable} $\xi$, the skew polynomial ring $R[\xi;s,d]$ satisfies the following properties: (1) every element of $R[\xi;s,d]$ is uniquely expressed as a finite sum of $r \xi^i$ with $r\in R$ and (2) $\xi r=(sr)\xi+dr$ for $r\in R$. When $R=\bigoplus_n R_n$ is a graded ring, $s$ preserves the grade and $d$ elevates the grade by $m$, we can equip $R[\xi;s,d]$ with a natural grading by $\mathrm{wt}(\xi)=m$ and $\mathrm{wt}(a)=n$ for $a\in R_n$ (called a \textit{graded skew polynomial ring}).
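To make rule (2) concrete, here is a minimal sketch of skew polynomial multiplication in the simplest case we could pick for illustration: base ring $R=\mathbb{Q}[t]$, $s=\mathrm{id}$ and $d=d/dt$, so that $R[\xi;1,d]$ is the first Weyl algebra (this example and all helper names are ours, not the paper's). Multiplication uses $\xi^m b=\sum_i\binom{m}{i}(d^ib)\xi^{m-i}$, the commutative-base analogue of Lemma \ref{lem201805151227} (3).

```python
from fractions import Fraction
from math import comb

def norm(p):
    """Strip trailing zeros from a polynomial in Q[t] (list of coefficients)."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def padd(p, q):
    n = max(len(p), len(q))
    return norm([(p[i] if i < len(p) else Fraction(0)) +
                 (q[i] if i < len(q) else Fraction(0)) for i in range(n)])

def pmul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return norm(r)

def pderiv(p):
    """The derivation d = d/dt (an s-derivation for s = id)."""
    return norm([Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)])

def smul(u, v):
    """Multiply in Q[t][xi; 1, d]; u, v are lists of Q[t]-coefficients of xi^k.
    Uses xi^m b = sum_i binom(m, i) (d^i b) xi^(m - i)."""
    out = [[Fraction(0)] for _ in range(len(u) + len(v) - 1)]
    for m, a in enumerate(u):
        for n, b in enumerate(v):
            db = b
            for i in range(m + 1):
                term = pmul([Fraction(comb(m, i))], pmul(a, db))
                out[m - i + n] = padd(out[m - i + n], term)
                db = pderiv(db)
    return out

ONE = [Fraction(1)]
T = [Fraction(0), Fraction(1)]  # the polynomial t
XI = [[Fraction(0)], ONE]       # the skew variable xi

# The defining relation xi t = t xi + 1:
assert smul(XI, [T]) == [ONE, T]
# Iterating it: xi^2 t = t xi^2 + 2 xi:
XI2 = smul(XI, XI)
assert smul(XI2, [T]) == [[Fraction(0)], [Fraction(2)], T]
```

The same data structure, with $\mathbb{Q}[t]$ replaced by $\mathcal{M}=\mathbb{C}[E_4,E_6]$ and $d$ by the derivation of Lemma \ref{lem201805151227} (2), would model $\mathcal{R}\cong\mathcal{M}[\xi;1,\delta]$ itself.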
\begin{thm} \label{thm201805311336} As $\mathbb{C}$-algebras, $\mathcal{M}[\xi;1,\delta]$ is isomorphic to $\mathcal{R}$. \end{thm}
\begin{proof} Since $\{{E_4}^i {E_6}^j \xi^k \mid i,j,k\ge0\}$ is a $\mathbb{C}$-basis of $\mathcal{M}[\xi;1,\delta]$, the map \begin{equation} \mathcal{M}[\xi;1,\delta]\to\mathcal{R},\quad {E_4}^i{E_6}^j \xi^k\mapsto{e_4}^i{e_6}^j{\delta}^k \end{equation} is a $\mathbb{C}$-linear isomorphism. Since the commutation relation of $E_4,E_6,\xi$ is the same as that of $e_4,e_6,\delta$, we have a $\mathbb{C}$-algebra isomorphism. \end{proof}
\begin{cor} \label{cor201804181423} The $\mathbb{C}$-algebra $\mathcal{R}$ is right and left Noetherian and a right and left Ore domain. \end{cor}
\begin{proof} Since $\mathcal{M}$ is a right and left Noetherian domain, so is $\mathcal{R}=\mathcal{M}[\xi;1,\delta]$ (cf.\ Theorem 1.2.9 in \cite{mcconnell_robson2001}). A right (resp.\ left) Noetherian domain is right (resp.\ left) Ore (cf.\ Theorem 2.1.15 in \cite{mcconnell_robson2001}). \end{proof}
\subsection{Division of MLDO} \label{subsection201801181005} In this subsection, we show some properties of division of two MLDOs (Theorems \ref{thm201712301245} and \ref{thm201712301246}).
By Theorem \ref{thm201712301243}, every $a\in\mathcal{R}\backslash\{0\}$ can be uniquely expressed as $a_n {\delta}^n+\cdots+a_1 \delta+a_0$, where $a_i\in\mathcal{M}$ and $a_n\neq 0$. We define the \textit{top} of $a$ by $\mathrm{top}(a)=a_n$ and the \textit{order} of $a$ by $\mathrm{ord}(a)=n$. When $a=0$, we set $\mathrm{top}(a)=0$ and $\mathrm{ord}(a)=-\infty$. The conditions $a=0$, $\mathrm{top}(a)=0$ and $\mathrm{ord}(a)=-\infty$ are equivalent to one another. We call an MLDO $a\in\mathcal{R}$ \textit{monic} when $\mathrm{top}(a)\allowbreak=1$ and \textit{quasimonic} when $\mathrm{top}(a)(\infty)\allowbreak=1$. We set ${}^{\mathrm{M}}\calr=\{a\in\mathcal{R}\mid a \text{ is monic}\}$ and ${}^{\mathrm{QM}}\calr=\{a\in\mathcal{R}\mid a \text{ is quasimonic}\}$. For $i\in[-\infty,\infty]$, we set \begin{align} \mathcal{R}^i&=\{a\in\mathcal{R}\mid \mathrm{ord}(a)=i\},&\mathcal{R}_n^i&=\mathcal{R}^i\cap\mathcal{R}_n, \label{eq201806240958} \\ \mathcal{R}^{\le i}&=\{a\in\mathcal{R}\mid \mathrm{ord}(a)\le i\}, &\mathcal{R}_n^{\le i}&=\mathcal{R}^{\le i}\cap\mathcal{R}_n,\\ \mathcal{R}^{<i}&=\{a\in\mathcal{R}\mid \mathrm{ord}(a)<i\},& \mathcal{R}_n^{<i}&=\mathcal{R}^{<i}\cap\mathcal{R}_n, \label{eq201806240958_3} \end{align} \begin{align} {}^{\mathrm{M}}\calr^i&={}^{\mathrm{M}}\calr\cap\mathcal{R}^i, & {}^{\mathrm{M}}\calr_n&={}^{\mathrm{M}}\calr\cap\mathcal{R}_n, & {}^{\mathrm{M}}\calr_n^i&={}^{\mathrm{M}}\calr\cap\mathcal{R}_n^i,\\ {}^{\mathrm{QM}}\calr^i&={}^{\mathrm{QM}}\calr\cap\mathcal{R}^i, & {}^{\mathrm{QM}}\calr_n&={}^{\mathrm{QM}}\calr\cap\mathcal{R}_n, & {}^{\mathrm{QM}}\calr_n^i&={}^{\mathrm{QM}}\calr\cap\mathcal{R}_n^i. \end{align}
\begin{prop} \label{prop201801051831} For $a,b\in\mathcal{R}$, $\mathrm{top}(ab)=\mathrm{top}(a)\mathrm{top}(b)$ and $\mathrm{ord}(ab)=\mathrm{ord}(a)+\mathrm{ord}(b)$. \end{prop}
\begin{proof} The result follows from Lemma \ref{lem201805151227} (3). \end{proof}
\begin{thm} \label{thm201712301245} (Division of MLDO, inhomogeneous version)
Let $a\in\mathcal{R}^i$ and $b\in\mathcal{R}^j$ with $i\ge j\ge 0$, and let $d\in\mathcal{M}$ be a common divisor of $\mathrm{top}(a)$ and $\mathrm{top}(b)$. Let $a',b'\in\mathcal{M}$ satisfy $\mathrm{top}(a)=a'd$ and $\mathrm{top}(b)=b'd$. \begin{enumerate} \item $\mathrm{top}(b)^{i-j}b'a=cb+c'$ for some $c\in\mathcal{R}^{i-j}$ and $c'\in\mathcal{R}^{<j}$. \item $a\mathrm{top}(b)^{i-j}b'=bc+c'$ for some $c\in\mathcal{R}^{i-j}$ and $c'\in\mathcal{R}^{<j}$. \end{enumerate} \end{thm}
\begin{proof} (1) Note that $d$, $a'$ and $b'$ cannot be $0$. We prove (1) by induction on $i$.
(I) The case $i=j$. Set $c=a'\in\mathcal{R}^0$ and $c'=b'a-cb\in\mathcal{R}^{<j}$.
(II) The case $i>j$. Set $e=b'a-a'{\delta}^{i-j}b\in\mathcal{R}^{<i}$.
(II-i) The case $e\in\mathcal{R}^{<j}$. Set $c=\mathrm{top}(b)^{i-j}a'{\delta}^{i-j}\in\mathcal{R}^{i-j}$ and $c'=\mathrm{top}(b)^{i-j}e\in\mathcal{R}^{<j}$.
(II-ii) The case $e\in\mathcal{R}^{j'}$ with $i> j' \ge j$. Set $k=i-j'-1\ge 0$. By the induction hypothesis, $\mathrm{top}(b)^{j'-j+1}e=fb+g$ for some $g\in\mathcal{R}^{<j}$ and $f\in\mathcal{R}^{j'-j}$. We have $\mathrm{top}(b)^{i-j}b'a =(\mathrm{top}(b)^{i-j}a'{\delta}^{i-j}+\mathrm{top}(b)^k f)b+\mathrm{top}(b)^k g$. Set $c=\mathrm{top}(b)^{i-j}a'{\delta}^{i-j}+\mathrm{top}(b)^k f\in\mathcal{R}^{i-j}$ and $c'=\mathrm{top}(b)^k g\in\mathcal{R}^{<j}$.
(2) We only give a sketch. (I) Set $c=a'$ and $c'=ab'-ba'$. (II) Set $e=ab'-ba'{\delta}^{i-j}$. (II-i) Set $c=a'{\delta}^{i-j}\mathrm{top}(b)^{i-j}$ and $c'=e\mathrm{top}(b)^{i-j}$. (II-ii) Set $c=a'{\delta}^{i-j}\mathrm{top}(b)^{i-j}+f\mathrm{top}(b)^k$ and $c'=g\mathrm{top}(b)^k$. \end{proof}
\begin{thm} \label{thm201712301246} (Division of MLDO, homogeneous version)
Let $a\in\mathcal{R}_n^i$ and $b\in\mathcal{R}_m^j$ with $i\ge j\ge 0$ and $m,n\ge 0$, and let $d\in\mathcal{M}_l$ be a common divisor of $\mathrm{top}(a)\in\mathcal{M}_{n-2i}$ and $\mathrm{top}(b)\in\mathcal{M}_{m-2j}$. Let $a'\in\mathcal{M}_{n-2i-l}$ and $b'\in\mathcal{M}_{m-2j-l}$ satisfy $\mathrm{top}(a)=a'd$ and $\mathrm{top}(b)=b'd$. Then, for $p=(m-2j)(i-j+1)-l-m+n$ and $q=(m-2j)(i-j+1)-l+n$, the following hold: \begin{enumerate} \item $\mathrm{top}(b)^{i-j}b'a=cb+c'$ for some $c\in\mathcal{R}_{p}^{i-j}$ and $c'\in\mathcal{R}_{q}^{<j}$, \item $a\mathrm{top}(b)^{i-j}b'=bc+c'$ for some $c\in\mathcal{R}_{p}^{i-j}$ and $c'\in\mathcal{R}_{q}^{<j}$. \end{enumerate} \end{thm}
\begin{proof} Trace the proof of the inhomogeneous version. Note that in (II-ii), $\mathrm{wt}(e)=(m-2j)-l+n$, $\mathrm{wt}(f)=(m-2j)(j'-j+2)-l-m+n$ and $\mathrm{wt}(g)=(m-2j)(j'-j+2)-l+n$. \end{proof}
\begin{cor} \label{cor201802231435} Let $a\in\mathcal{R}$ and $b\in\mathcal{R}^j$ with $\mathrm{top}(b)=1$. \begin{enumerate} \item $a=cb+c'$ for some $c\in\mathcal{R}$ and $c'\in\mathcal{R}^{<j}$. \item $a=bc+c'$ for some $c\in\mathcal{R}$ and $c'\in\mathcal{R}^{<j}$. \end{enumerate} \end{cor}
\begin{proof} We only prove (1). Set $a=\sum_{i} a_i+\sum_{i} \tilde{a}_i$, where $a_i\in\mathcal{R}^{n_i}$, $\tilde{a}_i\in\mathcal{R}^{m_i}$ and $n_i<j\le m_i$. Then $\tilde{a}_i=c_i b+c'_i$ for some $c_i\in\mathcal{R}^{m_i-j}$ and $c'_i\in\mathcal{R}^{<j}$. Set $c=\sum_{i} c_i\in\mathcal{R}$ and $c'=\sum_{i} a_i +\sum_{i} c'_i\in\mathcal{R}^{<j}$. \end{proof}
\begin{cor} Let $a\in\mathcal{R}_k$ and $b\in\mathcal{R}^j_{2j}$ with $\mathrm{top}(b)=1$. \begin{enumerate} \item $a=cb+c'$ for some $c\in\mathcal{R}_{k-2j}$ and $c'\in\mathcal{R}^{<j}_{k}$. \item $a=bc+c'$ for some $c\in\mathcal{R}_{k-2j}$ and $c'\in\mathcal{R}^{<j}_{k}$. \end{enumerate} \end{cor}
\begin{proof} The proof is similar to that of Corollary \ref{cor201802231435}. \end{proof}
Let $a\in\mathcal{R}$ and $b\in\mathcal{R}\backslash\{0\}$. If $a=cb$ (resp.\ $a=bc$) for some $c\in\mathcal{R}$, we say that $a$ is \textit{divisible} by $b$ from the right (resp.\ left). Such $c$ is unique since $\mathcal{R}$ is an integral domain. We denote it by $a/b$ (resp.\ $b\backslash a$).
\begin{rem} All the results in this subsection hold for a graded skew polynomial ring $S=R[x;1,d]$, where $R$ is a graded commutative integral domain and $d$ is a graded derivation of $R$. \end{rem}
\section{Algebraic structure of the solutions to MLDEs} \label{section20180614}
In this section, we study an algebraic structure of the solutions to MLDEs. We also show that every quasimodular form satisfies a monic MLDE of some weight. Subsection \ref{subsct201806231428} is devoted to the monic case and \ref{subsct201806231430} to the non-monic case.
\subsection{Monic case} \label{subsct201806231428} We set: \begin{align} S_k&=\{(f,k)\in H_k\mid a(f,k)=0\text{ for some }a\in{}^{\mathrm{M}}\calr\},\\ \Sigma_k&=\{f \in\mathrm{Hol}\mid f\text{ satisfies some monic MLDE of weight }k\}. \end{align} Note that $f \in \Sigma_k$ if and only if $(f,k)\in S_k$.
\begin{lem} \label{lem201804181457} For $a\in\mathcal{R}$ and $b\in{}^{\mathrm{M}}\calr$, there exist $a'\in{}^{\mathrm{M}}\calr$ and $b'\in\mathcal{R}$ such that $a'a=b'b$. \end{lem}
\begin{proof} By Lemma 1.2 in \cite{jategaonkar72} or Proposition 2.2 in \cite{resco_small_stafford82}, ${}^{\mathrm{M}}\calr$ is a left Ore set in $\mathcal{R}$. \end{proof}
\begin{lem} \label{lem201804240917} For $\phi\in H_k$, the following conditions are equivalent. \begin{enumerate} \item $\phi\in S_k$. \item The $\mathcal{M}$-module $\mathcal{R} \phi=\sum_{i=0}^{\infty}\mathcal{M} {\delta}^i \phi$ is finitely generated. \item There exists $n\in\mathbb{Z}_{\ge 0}$ such that $\phi,\delta \phi,\ldots,{\delta}^n\phi$ generate $\mathcal{R} \phi$ over $\mathcal{M}$. \end{enumerate} \end{lem}
\begin{proof} The equivalence of (1) and (2) is easy (see Lemma 2.1 in \cite{resco_small_stafford82}) and it is clear that (3) implies (2).
Assume (1). Then $a \phi=0$ for some $a\in{}^{\mathrm{M}}\calr^n$. Therefore, $\phi,\delta \phi,\ldots,{\delta}^{n-1}\phi$ generate $\mathcal{R} \phi$ over $\mathcal{M}$. \end{proof}
\begin{thm} \label{thm201804191022} \begin{enumerate} \item $S_k+S_k\subset S_k$. \item $S_k S_l\subset S_{k+l}$. \item $\mathcal{R}_k S_l\subset S_{k+l}$. \item $\mathcal{M}_k\subset S_k$. \end{enumerate} Therefore, $\bigoplus_{k\in\mathbb{R}}S_k\subset H$ is a $\mathbb{C}$-subalgebra and a left $\mathcal{R}$-submodule. \end{thm}
\begin{proof} Let $\phi\in S_k$ and $\psi\in S_l$. Choose $m,n\ge 0$ such that $\phi,\delta \phi,\ldots, {\delta}^m \phi$ (resp.\ $\psi,\delta \psi,\ldots, {\delta}^n \psi$) generate $\mathcal{R} \phi$ (resp.\ $\mathcal{R}\psi$) over $\mathcal{M}$.
(1) Assume $l=k$. We have $\mathcal{R}(\phi+\psi)\subset\mathcal{R}\phi+\mathcal{R}\psi$, where the latter is generated by $\phi,\delta \phi,\ldots, {\delta}^m \phi$ and $\psi,\delta \psi,\ldots, {\delta}^n \psi$. Since $\mathcal{M}$ is Noetherian, $\mathcal{R}(\phi+\psi)$ is finitely generated.
(2) By the Leibniz rule, ${\delta}^p(\phi\psi)=\sum_{q=0}^p\binom{p}{q}({\delta}^q\phi){\delta}^{p-q}\psi$ is a finite sum of $({\delta}^i\phi){\delta}^j\psi$ over $\mathcal{M}$, where $0\le i\le m$ and $0\le j\le n$. Thus $\mathcal{R}(\phi\psi)\subset \sum_{i=0}^{m}\sum_{j=0}^{n}\mathcal{M} ({\delta}^i\phi){\delta}^j\psi$.
(3) $\mathcal{R}(a \psi)\subset \mathcal{R}\psi$ for $a\in \mathcal{R}_k$.
(4) Since $(1,0)\in S_0$ and $\mathcal{M}_k\subset \mathcal{R}_k$, the result follows from (3).
We can also prove (1) and (3) by utilizing Lemma \ref{lem201804181457}. \end{proof}
We apply Theorem \ref{thm201804191022} to the theory of monic MLDEs. We set $Y^i=(1,-i)\in H_{-i}$. Then $\delta Y^i=\frac{i}{12}E_2 Y^i$. Note $E_2\in H_2$, $E_4\in H_4$ and $E_6\in H_6$.
\begin{thm} \label{thm201804240953} \begin{enumerate} \item $\Sigma_k\subset \Sigma_{k-1}$. \item $(\log q)\Sigma_k\subset \Sigma_{k-1}$. \item $E_2 \Sigma_k\subset \Sigma_{k+1}$. \item If $k-l\in\mathbb{Z}$, then $\Sigma_k+\Sigma_l\subset \Sigma_{\min\{k,l\}}$. \end{enumerate} \end{thm}
\begin{proof} (1) (2) $Y$ and $(\log q)Y$ are annihilated by ${\delta}^2+\frac{1}{144}e_4$, so $Y,(\log q)Y\in S_{-1}$ and the claims follow from Theorem \ref{thm201804191022} (2). (3) $E_2 Y$ is annihilated by ${\delta}^3-\frac{23}{144}e_4 \delta-\frac{1}{216}e_6$, so $E_2 Y\in S_1$ and the claim again follows from Theorem \ref{thm201804191022} (2). (4) The result follows from (1). \end{proof}
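The first annihilation claim in the proof can be double-checked numerically: with $Y=(1,-1)$ we have $\delta Y=(D_{-1}1,1)=(\frac{1}{12}E_2,1)$ and $\delta^2 Y=(-\frac{1}{144}E_4,3)$, so $(\delta^2+\frac{1}{144}e_4)Y=0$. The following pure-Python sketch verifies both steps on truncated $q$-expansions (exact rational coefficients; all helper names are ours).

```python
from fractions import Fraction

N = 8  # q-expansions modulo q^N

def sigma(k, n):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eis(k, c):
    return [Fraction(1)] + [Fraction(c) * sigma(k - 1, n) for n in range(1, N)]

def mul(f, g):
    h = [Fraction(0)] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

E2, E4 = eis(2, -24), eis(4, 240)

def D(k, f):
    """D_k f = f' - (k/12) E_2 f with f' = q df/dq."""
    return [n * a - Fraction(k, 12) * b for n, (a, b) in enumerate(zip(f, mul(E2, f)))]

one = [Fraction(1)] + [Fraction(0)] * (N - 1)

step1 = D(-1, one)   # the first component of delta Y; equals E_2/12
assert step1 == [Fraction(1, 12) * a for a in E2]

step2 = D(1, step1)  # the first component of delta^2 Y; equals -E_4/144
assert step2 == [Fraction(-1, 144) * a for a in E4]
```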
Set $\mathcal{QM}=\mathbb{C}[E_2,E_4,E_6]\subset H$. Recall that a \textit{quasimodular form} is a homogeneous element of $\mathcal{QM}$ and its \textit{depth} is the maximum degree in $E_2$. For example, ${E_2}^2 E_4+3 E_2 E_6-2 {E_4}^2$ is a quasimodular form of weight $8$ and depth $2$. We denote by $\mathcal{QM}_k^{\le s}$ the set of quasimodular forms of weight $k$ and depth at most $s$. We formally set $\mathcal{QM}_k^{\le s}=0$ for $s\in\mathbb{Z}_{<0}$. Note that $\mathcal{M}_k=\mathcal{QM}_k^{\le 0}$, $\mathcal{QM}_k^{\le s}=\bigoplus_{i=0}^s\mathcal{M}_{k-2i}{E_2}^i$ and $\delta(\mathcal{QM}_k^{\le s})\subset \mathcal{QM}_{k+2}^{\le s+1}$ for $s\in\mathbb{Z}$.
For $f\in\mathrm{Hol}$, we denote by $\mathrm{dwt}(f)\in\mathbb{Z}\cup\{\pm\infty\}$ the largest $k\in\mathbb{Z}$ such that $f\in \Sigma_k$, with $\mathrm{dwt}(f)=\infty$ if $f\in\Sigma_k$ for every $k\in\mathbb{Z}$. When $f$ does not satisfy a monic MLDE of any integral weight, we set $\mathrm{dwt}(f)=-\infty$.
\begin{thm} \label{thm201804241015} \begin{enumerate} \item $\mathrm{dwt}(fg)\ge\mathrm{dwt}(f)+\mathrm{dwt}(g)$, where we formally set $\infty+(-\infty):=-\infty$. \item If $\mathrm{dwt}(f)=\mathrm{dwt}(g)$, then $\mathrm{dwt}(f+g)\ge\mathrm{dwt}(g)$. \item If $\mathrm{dwt}(f)>\mathrm{dwt}(g)$, then $\mathrm{dwt}(f+g)=\mathrm{dwt}(g)$. \item If $f$ is a nonzero quasimodular form of weight $k$ and depth $s$, then $\mathrm{dwt}(f)=k-s$. \end{enumerate} \end{thm}
\begin{proof} (1) (2) Straightforward.
(3) If $-\infty<\mathrm{dwt}(g)<\infty$, then $\mathrm{dwt}(f+g)\ge \mathrm{dwt}(g)$. Since $f\in \Sigma_{\mathrm{dwt}(g)+1}$, the inequality $\mathrm{dwt}(f+g)\ge\mathrm{dwt}(g)+1$ would give $g=(f+g)-f\in\Sigma_{\mathrm{dwt}(g)+1}$, hence $\mathrm{dwt}(g)\ge\mathrm{dwt}(g)+1$, which is a contradiction. The case $\mathrm{dwt}(g)=-\infty$ is clear.
(4) Assume $f={E_2}^s f'$, where $f'\in\mathcal{M}_{k'}\backslash\{0\}$ and $k'=k-2s$. By Theorem \ref{thm201804240953} (3), $f\in \Sigma_{k-s}$. By induction on $n\ge 0$, ${\delta}^n(Y^{s-1}{E_2}^{s}f')=Y^{s-1}(\frac{n!}{(-12)^n}{E_2}^{n+s}f'+g)$ for some $g\in\mathcal{QM}_{k'+2n+2s}^{\le n+s-1}$. Since $E_2$, $E_4$ and $E_6$ are algebraically independent and $f'\in\mathbb{C}[E_4,E_6]\backslash\{0\}$, a nonzero MLDO never annihilates $Y^{s-1}{E_2}^{s}f'$. Therefore, $\mathrm{dwt}(f)=k-s$. For general $f$, express it as $f=\sum_{i=0}^s{E_2}^i f_i$ with $f_i\in\mathcal{M}_{k-2i}$ and $f_s\neq 0$. The result follows from (3). \end{proof}
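As an illustration of (4), the quasimodular form ${E_2}^2 E_4+3 E_2 E_6-2 {E_4}^2$ of weight $8$ and depth $2$ considered above has $\mathrm{dwt}=8-2=6$: it satisfies a monic MLDE of weight $6$, but of no larger integral weight. Similarly, $\mathrm{dwt}(E_2)=2-1=1$.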
\begin{rem} By Theorem \ref{thm201804241015}, every quasimodular form of weight $k$ and depth $s>0$ satisfies a monic MLDE of weight $k-s$, which is lower than the \textit{original} weight $k$. This phenomenon has already been found in the study of Kaneko-Zagier equation. See the paragraph just after the proof of Theorem 1 in \cite{kaneko_koike2003}. See also Subsection 3.3 in \cite{kaneko_nagatomo_sakai2017}. \end{rem}
\begin{rem} Let us generalize Theorem \ref{thm201804191022}. Let $G$ be an abelian group, $(R'=\bigoplus_{g\in G}R'_g,D)$ be a graded differential commutative ring and $(R=\bigoplus_{g\in G}R_g,D)\subset R'$ a graded differential subring. Denote by $T_g$ the set of all $s\in R'_g$ satisfying $(D^n+\cdots+r_1 D+r_0)s=0$ for some $r_i\in R$. Assume that $R$ is graded-Noetherian, that is, every graded ideal of $R$ is finitely generated. (For graded rings and modules, see, for example, \cite{nastasescu_oystaeyen2004}.) Then $\bigoplus_{g\in G}T_g$ is a graded differential ring containing $R$.
By Theorem \ref{thm201804241015}, $E_2$ does not satisfy a monic MLDE of weight $2$. Consider the setting $G=\mathbb{R}$, $R'=H$, $R=\mathcal{QM}$ and $D=\delta$. Since $R$ is Noetherian, $E_2$ satisfies a monic linear differential equation of weight $2$ with coefficients in $\mathcal{QM}$. Indeed, $(\srdiff{3}{2}+x E_2 \srdiff{2}{2}+(y {E_2}^2-\frac{13}{72} E_4)D_2+\frac{1}{288}(1-4x+24y) {E_2}^3+\frac{1}{288}(-3-4x+24y)E_2E_4+\frac{1}{216}(1-6x)E_6)E_2=0$ for every $x,y\in\mathbb{C}$. \end{rem}
\subsection{Non-monic case} \label{subsct201806231430}
We set $M=\bigoplus_{k\in\mathbb{R}}M_k=\bigoplus_{k\in\mathbb{R}}\mathrm{Mer}$ and $\overline{\mathcal{M}}=\{f/g\in\mathrm{Mer}\mid f\in\mathcal{M},g\in\mathcal{M}_*\backslash\{0\}\}$, where the latter is the set of \textit{meromorphic modular forms} and has a natural grading by $\mathrm{wt}(f/g)=\mathrm{wt}(f)-\mathrm{wt}(g)$ for $f,g\in\mathcal{M}_*$ and $g\neq 0$. It is obvious that $\overline{\mathcal{M}}$ and $M$, with $\overline{\mathcal{M}}\subset M$, are \textit{graded fields}, that is, they are nonzero graded commutative rings and every nonzero homogeneous element is invertible (cf.\ \cite{geel_oystaeyen1981} and \cite{nastasescu_oystaeyen2004}, for example). Note that a graded field is not necessarily a field although a graded ring is always a ring.
We denote by $\overline{\mathcal{R}}=\bigoplus_{k\in\mathbb{Z}}\overline{\mathcal{R}}_k$ the graded subring of $\mathrm{End}(M)$ generated by $\delta$ and $\overline{\mathcal{M}}$, and call an element of $\overline{\mathcal{R}}$ a \textit{meromorphic MLDO} or a \textit{merMLDO} for short. Every merMLDO can be uniquely expressed as a finite sum of $f {\delta}^i$ with $f\in\overline{\mathcal{M}}$. The order, top and monicness of a merMLDO are defined as with those of an MLDO. We also define $\overline{\mathcal{R}}^n$, $\overline{\mathcal{R}}^n_k$ etc.\ in the same way as Eqs.\ \ref{eq201806240958} through \ref{eq201806240958_3}. We have $\overline{\mathcal{R}}\simeq\overline{\mathcal{M}}[\xi;1,\delta]$.
Note that all the results in \cite{ore1933} are applicable to $\overline{\mathcal{R}}$. For example, $\overline{\mathcal{R}}$ is a left (resp.\ right) graded-PID, that is, $\overline{\mathcal{R}}$ is an integral domain and every left (resp.\ right) graded ideal of $\overline{\mathcal{R}}$ is generated by one homogeneous element.
For $a,b\in\overline{\mathcal{R}}_*\backslash\{0\}$, we denote by $\mathrm{gcrd}(a,b)$ (called the \textit{greatest common right divisor}) the monic $c\in\overline{\mathcal{R}}_*$ such that $c$ divides $a,b$ from the right and if $c'\in\overline{\mathcal{R}}_*$ divides $a,b$ from the right, then $c'$ divides $c$ from the right. We also denote by $\mathrm{lclm}(a,b)$ (called the \textit{least common left multiple}) the monic $c\in\overline{\mathcal{R}}_*$ such that $c$ is divided by $a,b$ from the right and if $c'\in\overline{\mathcal{R}}_*$ is divided by $a,b$ from the right, then $c'$ is divided by $c$ from the right. By Chapter I, Sections 2 and 3 in \cite{ore1933}, $\mathrm{gcrd}(a,b)$ and $\mathrm{lclm}(a,b)$ always exist uniquely, $\mathrm{ord}(\mathrm{gcrd}(a,b))+\mathrm{ord}(\mathrm{lclm}(a,b))=\mathrm{ord}(a)+\mathrm{ord}(b)$ and $a'a+b'b=\mathrm{gcrd}(a,b)$ for some $a',b'\in\overline{\mathcal{R}}_*$. The proof of the following proposition is straightforward.
\begin{prop} Let $\phi\in M_*$ and $a,b\in\overline{\mathcal{R}}_*\backslash\{0\}$. \begin{enumerate} \item $\mathrm{gcrd}(a,b)\phi=0$ if and only if $a\phi=b\phi=0$. \item $\mathrm{lclm}(a,b)\phi=0$ if $a\phi=0$ or $b\phi=0$. \end{enumerate} \end{prop}
For $n\in\mathbb{Z}_{\ge 0}$ we set: \begin{align} \overline{S}^n_k&=\{(f,k)\in M_k\mid a(f,k)=0\text{ for some }a\in\overline{\mathcal{R}}^n\},\\ \overline{\Sigma}^n_k&=\{f \in\mathrm{Mer}\mid f\text{ satisfies some merMLDE of weight }k\text{ and order }n\}. \end{align} It is clear that $f \in \overline{\Sigma}^n_k$ if and only if $(f,k)\in \overline{S}^n_k$. We have $\overline{S}^n_k\subset \overline{S}^{n+1}_k$ and $\overline{\Sigma}^n_k\subset \overline{\Sigma}^{n+1}_k$.
We record the following facts. Let $A$ be a graded field and $N\neq 0$ a graded $A$-module. If $\emptyset \neq N_1\subset N_2\subset N_*$ are such that $N_1$ is linearly independent and $N_2$ generates $N$ over $A$, then $N$ has an $A$-basis $N'$ such that $N_1\subset N'\subset N_2$. The cardinality of a basis is independent of the choice of basis since $A$ is a commutative ring.
\begin{lem} For $\phi\in M_k$, the following conditions are equivalent. \begin{enumerate} \item $\phi\in\overline{S}^n_k$. \item $\phi,\delta\phi,\ldots,{\delta}^{n-1}\phi$ generate $\overline{\mathcal{R}}\phi$ over $\overline{\mathcal{M}}$. \item $\dim_{\overline{\mathcal{M}}}\overline{\mathcal{R}} \phi\le n$. \end{enumerate} \end{lem}
\begin{proof} It is straightforward to prove $(1)\Leftrightarrow (2)\Rightarrow (3)$. Assume (3). Then $\phi,\ldots,\allowbreak{\delta}^{n}\phi$ are linearly dependent over $\overline{\mathcal{M}}$. Therefore, $\phi\in\overline{S}^m_k$ for some $m\le n$, so $\phi\in\overline{S}^n_k$ and (1) holds. \end{proof}
\begin{thm} \label{thm20180624} \begin{enumerate} \item $\overline{S}^n_k+\overline{S}^m_k\subset \overline{S}^{n+m}_k$. \item $\overline{S}^n_k\overline{S}^m_l\subset \overline{S}^{nm}_{k+l}$. \item $\overline{\mathcal{R}}_k \overline{S}_l^m\subset \overline{S}_{k+l}^m$. \item $\overline{\mathcal{M}}_k\subset \overline{S}_k^1$. \end{enumerate} \end{thm}
\begin{proof} Let $\phi\in\overline{S}^n_k$ and $\psi\in\overline{S}^m_l$.
(1) Assume $k=l$. We have $a\phi=b\psi=0$ for some $a\in\overline{\mathcal{R}}_*^n$ and $b\in\overline{\mathcal{R}}_*^m$, so $\mathrm{lclm}(a,b)(\phi+\psi)=0$. Note $\mathrm{ord}(\mathrm{lclm}(a,b))\le \mathrm{ord}(a)+\mathrm{ord}(b)=n+m$.
(2) Since $\overline{\mathcal{R}} (\phi\psi)$ is generated by $({\delta}^i\phi){\delta}^j\psi$ over $\overline{\mathcal{M}}$ with $0\le i\le n-1$ and $0\le j \le m-1$, $\dim_{\overline{\mathcal{M}}}\overline{\mathcal{R}} (\phi\psi)\le nm$.
(3) $\overline{\mathcal{R}}(a\psi)\subset \overline{\mathcal{R}}\psi$ for $a\in\overline{\mathcal{R}}_k$.
(4) $(f D_{k}-(D_k f))f=0$ for $f\in\overline{\mathcal{M}}_k$. \end{proof}
\begin{rem} We have $\Sigma_k\subset \mathrm{Hol}\cap (\bigcup_{n=0}^{\infty} \overline{\Sigma}^n_k)$, but the equality does not hold. Let $g=q^{l}+\cdots\in\mathcal{M}_{12m}$ with $m\in\mathbb{Z}_{>0}$ and $l<m$ (for example, $g={E_4}^3=1+\cdots\in\mathcal{M}_{12}$) and $f=\exp(g/\Delta^m)\in\mathrm{Hol}$. Since $(\Delta^m D_0-D_{12m}g)f=0$, $f\in \mathrm{Hol}\cap\overline{\Sigma}_0^1$. Since $f$ has an essential singularity at $q=0$, it cannot be constructed by the Frobenius method and $f\notin \Sigma_k$ for any $k\in\mathbb{R}$. (We can also show $f\notin \Sigma_k$ by $\srdiff{n}{k}f=((-m+l)^n q^{-n(m-l)}+\cdots)f$.) Therefore, $f\in \mathrm{Hol}\cap (\bigcup_{n=0}^{\infty} \overline{\Sigma}^n_0)\backslash \Sigma_0$. It follows that $\eta^{2k} f\in\mathrm{Hol}\cap (\bigcup_{n=0}^{\infty} \overline{\Sigma}^n_k)\backslash \Sigma_k$ for $k\in\mathbb{R}$. \end{rem}
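The relation $(\Delta^m D_0-D_{12m}g)f=0$ in the preceding remark can also be verified directly: since $f=\exp(g/\Delta^m)$ and $q\frac{d\Delta}{dq}=E_2\Delta$ by Ramanujan's identity, \begin{equation} D_0 f=\left(q\frac{d}{dq}\frac{g}{\Delta^m}\right)f=\frac{q\frac{dg}{dq}-mE_2 g}{\Delta^m}\,f=\frac{D_{12m}g}{\Delta^m}\,f, \end{equation} and multiplying by $\Delta^m$ gives the relation.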
\section{Operators on solution spaces of monic MLDEs} \label{section201712301252} We define the monic and quasimonic MLDOs (Subsection \ref{subsection201801081050}) and give a condition under which an MLDO maps the solution space of a monic MLDE to that of another (Subsection \ref{subsection201804181356}, Corollary \ref{cor201801191507}). We then introduce a family of third order monic MLDEs and apply the results to the solution spaces (Subsections \ref{sct201801121359} and \ref{subsection201804181403}, Example \ref{ex201807171044}).
\subsection{Monic and quasimonic MLDOs} \label{subsection201801081050} Recall that an MLDO $a\in\mathcal{R}$ is called monic when $\mathrm{top}(a)\allowbreak=1$ and quasimonic when $\mathrm{top}(a)(\infty)\allowbreak=1$.
Let $a\in\mathcal{R}$ and $k\in\mathbb{Z}$. We define the $\mathbb{C}$-linear map \begin{equation} a[k]:\mathrm{Hol}\to\mathrm{Hol},\quad a[k]f=\pi(a(f,k)), \end{equation} where $\pi(\sum_{n} (g_n,n))= \sum_{n}g_n$. The map $a\mapsto a[k]$ is $\mathbb{C}$-linear.
For each $a\in\mathcal{R}$, there exists a unique $c\in\mathbb{C}$ such that $a[k]q^{\lambda}=cq^{\lambda}+O(q^{\lambda+1})$. We denote this $c$ by $F(k,a,\lambda)$. It is a polynomial in $\lambda$ and linear in $a$.
\begin{lem} If $a=\sum_{i=0}^n a_i {\delta}^i$ with $a_i\in\mathcal{M}$, then $a[k]=\sum_{i=0}^n a_i \srdiff{i}{k}$ and $F(k,a,\lambda)=\sum_{i=0}^n a_i(\infty)P_i(\lambda),$ where $P_0(\lambda)=1$ and $P_i(\lambda)=(\lambda-\frac{k+2i-2}{12})\cdots(\lambda-\frac{k}{12})$ for $i>0$. \end{lem}
\begin{proof} The proof is easy. \end{proof}
The equality $F(k,a,\lambda)=F(0,a,\lambda-\frac{k}{12})$ holds since $D_k q^{\lambda}=(q\frac{d}{dq}-\frac{k}{12}E_2)q^{\lambda}=(\lambda-\frac{k}{12})q^{\lambda}+O(q^{\lambda+1})$.
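For example, for the monic operator $a={\delta}^2-\frac{k(k+2)}{144}e_4\in{}^{\mathrm{M}}\calr_4^2$ (the Kaneko-Zagier operator of Section \ref{section201712301302}), since $E_4(\infty)=1$, \begin{equation} F(k,a,\lambda)=\left(\lambda-\frac{k}{12}\right)\left(\lambda-\frac{k+2}{12}\right)-\frac{k(k+2)}{144}=\lambda\left(\lambda-\frac{k+1}{6}\right), \end{equation} so $a[k]q^{\lambda}=\lambda(\lambda-\frac{k+1}{6})q^{\lambda}+O(q^{\lambda+1})$.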
We set \begin{equation} \label{eq20180707} Z=\{\sum_{i=0}^n a_i {\delta}^i \in\mathcal{R} \mid n\ge 0,\ a_i\in\mathcal{M},\ a_i(\infty)=0\}. \end{equation} Note that $Z=\{a\in\mathcal{R}\mid F(k,a,\lambda)\equiv0\text{ for all }k\in\mathbb{Z}_{\ge 0}\}=\{a\in\mathcal{R}\mid F(k,a,\lambda)\equiv0\text{ for some }k\in\mathbb{Z}_{\ge 0}\}$. It is straightforward to show $Z\cap\mathcal{R}_l=\Delta\mathcal{R}_{l-12}$ and $\Delta\mathcal{R}\subset Z$.
\begin{prop} \label{prop201801091422} For $k,n\in\mathbb{Z}_{\ge0}$, the following hold. \begin{enumerate} \item ${}^{\mathrm{M}}\calr_{2n}^n={}^{\mathrm{QM}}\calr_{2n}^n$. \item ${}^{\mathrm{M}}\calr\cap Z=\emptyset$ and ${}^{\mathrm{QM}}\calr\cap Z=\emptyset$. \item ${}^{\mathrm{M}}\calr_k^n\neq\emptyset$ if and only if $k=2n$. \item \label{item201801191423} ${}^{\mathrm{QM}}\calr_k^n\neq\emptyset$ if and only if $k=2n$ or $k=2n+4,2n+6,\ldots$. \end{enumerate} \end{prop}
\begin{proof} The proof is straightforward. \end{proof}
\begin{lem} For $a\in\mathcal{R}_l$ and $b\in\mathcal{R}$, $(ba)[k]=b[k+l]a[k]$. \end{lem}
\begin{proof} For $f\in\mathrm{Hol}$, $b[k+l]a[k]f=b[k+l]\pi(a(f,k))=\pi(ba(f,k))=(ba)[k]f$. \end{proof}
\begin{lem} For $a\in\mathcal{R}_k$ and $b\in\mathcal{R}$, $F(l,ba,\lambda)=F(l+k,b,\lambda)F(l,a,\lambda)$. \end{lem}
\begin{proof} $(ba)[l]q^{\lambda}=b[l+k]a[k]q^{\lambda}=b[l+k](F(k,a)q^{\lambda}+O(q^{\lambda+1}))=F(l+k,b)F(l,a)q^{\lambda}+O(q^{\lambda+1})$. \end{proof}
\begin{thm} \label{proposition_mldo_divisible_from_the_right} Let $a\in{}^{\mathrm{M}}\calr_{2n}^n$, $b\in\mathcal{R}$ and $k\in\mathbb{Z}_{\ge 0}$. Then the following conditions are equivalent: \begin{enumerate} \item $\ker(a[k])\subset \ker(b[k])$, \item $b$ is divisible by $a$ from the right. \end{enumerate} \end{thm}
\begin{proof} If $b=ca$ with some $c\in\mathcal{R}$, then $b[k]=c[k+2n]a[k]$ and $\ker(a[k])\subset \ker(b[k])$.
Assume (1). By Corollary \ref{cor201802231435}, $b=ca+c'$ for some $c\in\mathcal{R}$ and $c'\in\mathcal{R}^{<n}$. Let $f\in\ker(a[k])$. We have $a[k]f=0$ and $b[k]f=0$. Since $b[k]=c[k+2n]a[k]+c'[k]$, we have $c'[k]f=0$ and $\ker(a[k])\subset\ker(c'[k])$. If $c'\neq 0$, then $\dim\ker(c'[k])<n=\dim\ker(a[k])$, which is a contradiction. Therefore, $c'=0$ and $b=ca$. \end{proof}
\subsection{Characteristic roots of MLDOs} \label{subsection201804181356}
For $a \in\mathcal{R}$, $F(k,a,\lambda)=0$ is an algebraic equation in the variable $\lambda$. We denote the multiset (unordered tuple) of its roots (the \textit{characteristic roots}) by $\mathrm{ch}(k,a)$. When $F(k,a,\lambda)\equiv 0$, we set $\mathrm{ch}(k,a)=\mathbb{C}$. If $\mathrm{ch}(0,a)=\{\lambda_1,\ldots,\lambda_n\}$, then $\mathrm{ch}(k,a)=\{\lambda_1+\frac{k}{12},\ldots,\lambda_n+\frac{k}{12}\}$. Recall the set $Z$ defined in Eq.\ (\ref{eq20180707}).
\begin{lem} Let $a,b\in{}^{\mathrm{QM}}\calr$. Then $\mathrm{ch}(k,a)=\mathrm{ch}(k,b)$ if and only if $a-b\in Z$. \end{lem}
\begin{proof} Assume $a-b\in Z$. Then $F(k,a,\lambda)-F(k,b,\lambda)=F(k,a-b,\lambda)=0$ and $\mathrm{ch}(k,a)=\mathrm{ch}(k,b)$.
Assume $\mathrm{ch}(k,a)=\mathrm{ch}(k,b)$. Since $a,b\notin Z$, $F(k,a,\lambda)$ and $F(k,b,\lambda)$ are nonzero monic polynomials in $\lambda$. Thus, $F(k,a,\lambda)=F(k,b,\lambda)$, $0=F(k,a,\lambda)-F(k,b,\lambda)=F(k,a-b,\lambda)$ and so $a-b\in Z$. \end{proof}
\begin{lem} \label{lem201712130947} If $a\in{}^{\mathrm{M}}\calr^n_{2n}$, then the sum of $\lambda\in\mathrm{ch}(k,a)$ is $\frac{n(k+n-1)}{12}$. \end{lem}
\begin{proof} Set $a={\delta}^n+\sum_{i=0}^{n-1} a_i {\delta}^i$ with $a_i\in\mathcal{M}_{2n-2i}$. We have $a_{n-1}=0$ since $\mathcal{M}_2=\{0\}$. Therefore, $a[k]=\srdiff{n}{k}+a_{n-2}\srdiff{n-2}{k}+\cdots$, so that $F(k,a,\lambda)=(\lambda-\frac{k}{12})\cdots(\lambda-\frac{k+2n-2}{12})+O(\lambda^{n-2})=\lambda^n-\frac{n(k+n-1)}{12}\lambda^{n-1}+O(\lambda^{n-2})$. \end{proof}
\begin{lem} \label{lem201712130954} Let $k\in\mathbb{Z}_{\ge0}$ and $n\in\mathbb{Z}_{>0}$. Then the following hold. \begin{enumerate} \item For $\lambda_1,\ldots,\lambda_n\in\mathbb{C}$ and even $l\in\mathbb{Z}$ such that $l\ge 2n+4$, there exists $a\in{}^{\mathrm{QM}}\calr_{l}^n$ such that $\mathrm{ch}(k,a)=\{\lambda_1,\ldots,\lambda_n\}$. \item For $\lambda_1,\ldots,\lambda_n\in\mathbb{C}$ such that $\lambda_1+\cdots+\lambda_n=\frac{n}{12}(k+n-1)$, there exists $a\in{}^{\mathrm{M}}\calr_{2n}^n$ such that $\mathrm{ch}(k,a)=\{\lambda_1,\ldots,\lambda_n\}$. \end{enumerate} \end{lem}
\begin{proof} Set $P_0(\lambda)=1$ and $P_i(\lambda)=(\lambda-\frac{k}{12})\cdots(\lambda-\frac{k+2i-2}{12})$ for $i>0$.
(1) Choose $x_0,\ldots,x_{n-1}\in\mathbb{C}$ so that the roots of $P_n(\lambda)+\sum_{i=0}^{n-1}x_i P_i(\lambda)=0$ are $\lambda=\lambda_1,\ldots,\lambda_n$. If $N\in2\mathbb{Z}$ and $N\ge4$, then $f(\infty)=1$ for some $f\in\mathcal{M}_N$. Therefore, since $l$ is even and $l-2i\ge l-2n\ge4$, for $0\le i\le n$ we can choose $a_i\in\mathcal{M}_{l-2i}$ such that $a_i(\infty)=1$. Set $a=a_n {\delta}^n+\sum_{i=0}^{n-1}x_i a_i {\delta}^i\in{}^{\mathrm{QM}}\calr_l^n$.
(2) Since $\lambda_1+\cdots+\lambda_n=\frac{n}{12}(k+n-1)$, there exist $x_0,\ldots,x_{n-2}\in\mathbb{C}$ so that the roots of $P_n(\lambda)+\sum_{i=0}^{n-2}x_i P_i(\lambda)=0$ are $\lambda=\lambda_1,\ldots,\lambda_n$. For $0\le i\le n-2$, choose $a_i\in\mathcal{M}_{2n-2i}$ such that $a_i(\infty)=1$ and set $a={\delta}^n+\sum_{i=0}^{n-2}x_i a_i {\delta}^i\in{}^{\mathrm{M}}\calr_{2n}^n$. \end{proof}
\begin{thm} \label{thm201801191511} Let $l,n,N\in\mathbb{Z}_{>0}$, $a\in{}^{\mathrm{M}}\calr^n_{2n}$ and $b\in{}^{\mathrm{QM}}\calr_l$. Then the following hold. \begin{enumerate} \item If $N\le n$ and there exists $c\in{}^{\mathrm{M}}\calr_{2N}^N$ such that $cb$ is divisible by $a$ from the right, then $\mathrm{ch}(0,a)\cap\mathrm{ch}(0,b)\neq\emptyset$.
\item If $\mathrm{ch}(0,a)\cap\mathrm{ch}(0,b)\neq\emptyset$, $N\le|\mathrm{ch}(0,a)\cap\mathrm{ch}(0,b)|$ and $l+2n-2N\le8$, then there exists $c\in{}^{\mathrm{M}}\calr_{2n-2N+2}^{n-N+1}$ such that $cb$ is divisible by $a$ from the right. \end{enumerate} \end{thm}
\begin{proof} Note that $\mathrm{ch}(\bullet,\bullet)$ is a multiset, not a set. We assume $b\in{}^{\mathrm{QM}}\calr^m_l$ for some $m\in\mathbb{Z}$. By Proposition \ref{prop201801091422} (\ref{item201801191423}), $l=2m,2m+4,2m+6,\ldots$.
(1) $cb=da$ for some $d\in\mathcal{R}_{l-2n+2N}^{m-n+N}$. By comparing the tops of both sides, $d\in{}^{\mathrm{QM}}\calr_{l-2n+2N}^{m-n+N}$. Thus, $\mathrm{ch}(l,c)\cup\mathrm{ch}(0,b)=\mathrm{ch}(2n,d)\cup\mathrm{ch}(0,a)$. If $\mathrm{ch}(0,a)\cap\mathrm{ch}(0,b)=\emptyset$, then $\mathrm{ch}(0,a)\subset\mathrm{ch}(l,c)$. Since $|\mathrm{ch}(0,a)|=n$ and $|\mathrm{ch}(l,c)|=N$, we have $n=N$ and $\mathrm{ch}(0,a)=\mathrm{ch}(l,c)$, which contradicts Lemma \ref{lem201712130947}. Therefore, $\mathrm{ch}(0,a)\cap\mathrm{ch}(0,b)\neq\emptyset$.
(2) Set $\mathrm{ch}(0,a)=\{\lambda_1,\ldots,\lambda_n\}$ and $\mathrm{ch}(0,b)=\{\mu_1,\ldots,\mu_m\}$ so that $\lambda_1=\mu_1,\ldots,\lambda_N\allowbreak=\mu_N$. Choose $\lambda$ so that $\lambda+\lambda_{N+1}+\cdots+\lambda_n=\frac{1}{12}(n-N+1)(l+n-N)$. By Lemma \ref{lem201712130954}, there exists $c\in{}^{\mathrm{M}}\calr_{2n-2N+2}^{n-N+1}$ such that $\mathrm{ch}(l,c)=\{\lambda,\lambda_{N+1},\ldots,\lambda_n\}$.
(2-1) The case $l=2m$. $\lambda+\sum_{i=N+1}^m\mu_i=\lambda+\sum_{i=1}^m\mu_{i}-\sum_{i=1}^N\mu_{i}=\lambda+\sum_{i=1}^m\mu_{i}-\sum_{i=1}^N\lambda_{i}=\lambda+\sum_{i=1}^m\mu_{i}-\sum_{i=1}^n\lambda_{i}+\sum_{i=N+1}^n\lambda_{i}=\sum_{i=1}^m\mu_{i}-\sum_{i=1}^n\lambda_{i}+(\lambda+\sum_{i=N+1}^n\lambda_{i})=\frac{1}{12}m(m-1)-\frac{1}{12}n(n-1)+\frac{1}{12}(n-N+1)(2m+n-N)=\frac{1}{12}(m-N+1)(2n+m-N)$. By Lemma \ref{lem201712130954}, $\mathrm{ch}(2n,d)=\{\lambda,\mu_{N+1},\ldots,\mu_m\}$ for some $d\in{}^{\mathrm{M}}\calr_{2m-2N+2}^{m-N+1}$. Since $cb,da\in{}^{\mathrm{M}}\calr_{2m+2n-2N+2}^{m+n-N+1}$ and $\mathrm{ch}(0,cb)=\mathrm{ch}(0,da)$, we have $cb-da\in Z\cap\mathcal{R}_{2m+2n-2N+2}$. By the assumption, $2m+2n-2N+2\le10$. Therefore, $ Z\cap\mathcal{R}_{2m+2n-2N+2}=\{0\}$ and so $cb=da$.
(2-2) The case $l\ge 2m+4$. By Lemma \ref{lem201712130954}, $\mathrm{ch}(2n,d)=\{\lambda,\mu_{N+1},\ldots,\mu_m\}$ for some $d\in{}^{\mathrm{M}}\calr_{l-2N+2}^{m-N+1}$. The rest of the proof is clear. \end{proof}
The following corollary gives a condition under which a quasimonic MLDO maps the solution space of a monic MLDE to another solution space.
\begin{cor} \label{cor201801191507} Let $k\in\mathbb{Z}_{\ge 0}$, $l,n,N\in\mathbb{Z}_{>0}$, $a\in{}^{\mathrm{M}}\calr^n_{2n}$ and $b\in{}^{\mathrm{QM}}\calr_l$. Then the following hold. \begin{enumerate} \item If $N\le n$ and there exists $c\in{}^{\mathrm{M}}\calr^N_{2N}$ such that $b[k]$ maps the kernel of $a[k]$ to the kernel of $c[k+l]$, then $\mathrm{ch}(k,a)\cap\mathrm{ch}(k,b)\neq\emptyset$.
\item If $\mathrm{ch}(k,a)\cap\mathrm{ch}(k,b)\neq\emptyset$, $N\le|\mathrm{ch}(k,a)\cap\mathrm{ch}(k,b)|$ and $l+2n-2N\le8$, then there exists $c\in{}^{\mathrm{M}}\calr_{2n-2N+2}^{n-N+1}$ such that $b[k]$ maps the kernel of $a[k]$ to the kernel of $c[k+l]$. \end{enumerate} \end{cor}
\begin{proof} Since $b[k]\ker(a[k])\subset\ker(c[k+l])$ is equivalent to $\ker(a[k])\subset\ker((cb)[k])$, the results follow from Theorem \ref{thm201801191511}. \end{proof}
\begin{rem} Let $a\in{}^{\mathrm{M}}\calr^n_{2n}$ and $b\in\mathcal{R}_l$. By Lemma \ref{lem201804181457}, there exist homogeneous $a'\in\mathcal{R}$ and $c\in{}^{\mathrm{M}}\calr$ such that $a'a=cb$, therefore $b[k](\ker (a[k]))\subset \ker (c[k+l])$. \end{rem}
\subsection{Example I} \label{sct201801121359}
We introduce a family of third order monic MLDEs $\phi_pf=0$ whose solutions are related to the Dedekind eta function (Subsection \ref{sct201801121359}) and the theta functions of the $\mathrm{D}_n$ lattices (Subsection \ref{subsection201804181403}). At the end of Subsection \ref{subsection201804181403}, we apply Corollary \ref{cor201801191507} to the solution space of $\phi_pf=0$ (Example \ref{ex201807171044}).
\begin{lem} \label{lemma_eta_functions} \begin{enumerate} \item
$\eta(2z)|_{1/2}T=e(\frac{1}{12})\eta(2z)$ and $\eta(2z)|_{1/2}S=\frac{1}{\sqrt{2}}e(-\frac{1}{8})\eta(\frac{z}{2})$. \item
$\eta(\frac{z}{2})|_{1/2}T=\eta(\frac{z+1}{2})$ and $\eta(\frac{z}{2})|_{1/2}S=\sqrt{2}e(-\frac{1}{8})\eta(2z)$. \item
$\eta(\frac{z+1}{2})|_{1/2}T=e(\frac{1}{24})\eta(\frac{z}{2})$ and $\eta(\frac{z+1}{2})|_{1/2}S=e(-\frac{1}{8})\eta(\frac{z+1}{2})$. \item $\eta(2z)\eta(\frac{z}{2})\eta(\frac{z+1}{2})=e(\frac{1}{48})\eta(z)^3$. \item $16\eta(2z)^8+\eta(\frac{z}{2})^8-e(-\frac{1}{6})\eta(\frac{z+1}{2})^8=0$. \end{enumerate} \end{lem}
\begin{proof}
(1), (2) and the first part of (3) can be proved by the relations $\eta(z+1)=e(\frac{1}{24})\eta(z)$ and $\eta(-\frac{1}{z})=\sqrt{-iz}\eta(z)$. We prove the second part of (3). Since $\eta(\frac{Sz+1}{2})=\eta(\frac{1}{2}-\frac{1}{2z})=\eta((\begin{smallmatrix}1&-1\\2&-1\end{smallmatrix})\frac{z+1}{2})=\chi((\begin{smallmatrix}1&-1\\2&-1\end{smallmatrix}))\sqrt{z}\eta(\frac{z+1}{2})=e(-\frac{1}{8})\sqrt{z}\eta(\frac{z+1}{2})$, we have $\eta(\frac{z+1}{2})|_{1/2}S=\frac{1}{\sqrt{z}}\eta(\frac{Sz+1}{2})=e(-\frac{1}{8})\eta(\frac{z+1}{2})$. In order to calculate $\chi((\begin{smallmatrix}1&-1\\2&-1\end{smallmatrix}))$, we have used Theorem 2 in Chapter 4 in \cite{knopp1970}.
(4) The equality is proved by the definition $\eta(z)=q^{1/24}(1-q)(1-q^2)\cdots$.
(5) By (1) through (3), the left-hand side is a modular form of weight $4$ on $\mathrm{SL}(2,\mathbb{Z})$. The identity follows from the valence formula (cf.\ Theorem 4.1.4 in \cite{rankin1977}). \end{proof}
Let $p\in\mathbb{R}$. Since $\eta(z)$ never vanishes on $\mathcal{H}$, $\eta^p(z)$ is defined on $\mathcal{H}$ and has the $q$-expansion $ q^{p/24}(1-pq+\frac{p(p-3)}{2}q^2+\cdots). $ The $\mathbb{C}$-vector space spanned by $\eta^p(2z)=q^{p/12}(1-pq^2+\frac{p(p-3)}{2}q^4+\cdots)$, $\eta^p(\frac{z}{2})=q^{p/48}(1-pq^{1/2}+\frac{p(p-3)}{2}q+\cdots)$ and $\eta^p(\frac{z+1}{2})=e(\frac{p}{48})q^{p/48}(1+pq^{1/2}+\frac{p(p-3)}{2}q+\cdots)$ is invariant under the weight $\frac{p}{2}$ modular transformations. We have the following $q$-expansions \begin{align} \eta^p\left(\frac{z}{2}\right)+e\left(-\frac{p}{48}\right)\eta^p\left(\frac{z+1}{2}\right)&=2q^{p/48}\left(1+\frac{p(p-3)}{2}q+\cdots\right),\\ -\eta^p\left(\frac{z}{2}\right)+e\left(-\frac{p}{48}\right)\eta^p\left(\frac{z+1}{2}\right)&=2q^{p/48}\left(pq^{1/2}+\cdots\right). \end{align} If $p\neq 0,8$, then the leading exponents of $\eta^p(2z)$, $\pm\eta^p(\frac{z}{2})+e(-\frac{p}{48})\eta^p(\frac{z+1}{2})$ are distinct real numbers. By Mason's theorem, they satisfy a monic MLDE of weight $\frac{p}{2}$ and order $3$. A monic MLDE of weight $\frac{p}{2}$ and order $3$ has the form $(\srdiff{3}{p/2}+x E_4 D_{p/2}+y E_6)f=0$ with $x,y\in\mathbb{C}$. Calculating the indicial roots, we can uniquely determine $x$ and $y$ as $x=-\frac{3p^2-24p+128}{2304}$ and $y=-\frac{p^2(p-24)}{55296}$.
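As a consistency check, the three leading exponents sum to \begin{equation} \frac{p}{12}+\frac{p}{48}+\left(\frac{p}{48}+\frac{1}{2}\right)=\frac{p}{8}+\frac{1}{2}=\frac{3\left(\frac{p}{2}+3-1\right)}{12}, \end{equation} in agreement with (a formal extension to real weights of) Lemma \ref{lem201712130947} for a monic MLDE of weight $\frac{p}{2}$ and order $3$.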
\begin{prop} \label{prop201801151023} Let $p\in\mathbb{R}$ and $\phi_p=\srdiff{3}{p/2}-\frac{3p^2-24p+128}{2304}E_4 D_{p/2}-\frac{p^2(p-24)}{55296}E_6$. Then the following hold. \begin{enumerate} \item If $p\neq 0,8$, then $\eta^p(2z)$, $\eta^p(\frac{z}{2})$ and $\eta^p(\frac{z+1}{2})$ are independent solutions to $\phi_p f=0$. \item If $p=0$, then $1$, $\int E_2^{(2)}(z)dz$ and $\int \sqrt{\Delta_4^{(2)}(z)}dz$ are independent solutions, where $E_2^{(2)}(z)=2E_2(2z)-E_2(z)$ and $\Delta_4^{(2)}(z)=\frac{\eta(2z)^{16}}{\eta(z)^8}$. \item If $p=8$, then $\eta^8(2z)$ and $\eta^8(\frac{z}{2})$ are independent solutions. \end{enumerate} \end{prop}
\begin{proof} (1) Already proved.
(2) The linear independence is clear from the $q$-expansions. According to Theorem 1 in \cite{kaneko_koike2003}, the solutions to $(\srdiff{2}{2}-\frac{1}{18}E_4)f=0$ are $E_2^{(2)}$ and $\sqrt{\Delta_4^{(2)}}$.
(3) By Lemma \ref{lemma_eta_functions} (5), the $\mathbb{C}$-vector space spanned by $\eta^8(2z)=q^{2/3}(1-8q^2+\cdots)$ and $\eta^8(\frac{z}{2})=q^{1/6}(1-8q^{1/2}+\cdots)$ is invariant under the weight $4$ modular transformations. By Mason's theorem, they satisfy $(\srdiff{2}{4}-\frac{1}{18}E_4)f=0$. \end{proof}
\begin{rem} If $p\neq0,3,6$, then $\eta^p(3z)$, $\eta^p(\frac{z}{3})$, $\eta^p(\frac{z+1}{3})$ and $\eta^p(\frac{z+2}{3})$ are independent solutions to $(\srdiff{4}{p/2}-\frac{p^2-6p+18}{216}E_4\srdiff{2}{p/2}-\frac{p^3-12p^2+45p-81}{5832}E_6D_{p/2}\allowbreak-\frac{p^2(p^2-36p+288)}{559872}{E_4}^2)f=0.$ \end{rem}
\subsection{Example II} \label{subsection201804181403}
For an even lattice $L\subset\mathbb{R}^n$ and a point $p\in\mathbb{R}^n$, the theta function $\theta_{p+L}$ is given by $\theta_{p+L}(z)=\sum_{x\in L}q^{(x+p,x+p)/2},$ where $(\bullet,\bullet)$ is the standard inner product on $\mathbb{R}^n$. We denote by $L^*$ the dual lattice of $L$. Then the $\mathbb{C}$-vector space spanned by $\theta_{p+L}$ with $p\in L^*/L$ is invariant under the weight $\frac{n}{2}$ modular transformations (cf.\ Proposition 3.2 in \cite{ebeling2013}).
For $n\ge 3$, $\mathrm{D}_n$ is the root lattice $\{(x_1,\ldots,x_n)\in\mathbb{Z}^n\mid x_1+\cdots+x_n\in 2\mathbb{Z}\}$. We have $\mathrm{D}_n^*=\mathbb{Z}^n+\mathbb{Z} t$ and $\mathrm{D}_n^*/\mathrm{D}_n=\{0,s,t,s+t\}$, where $s=(1,0,\ldots,0)$ and $t=(\frac{1}{2},\ldots,\frac{1}{2})$.
We formally set $\mathrm{D}_2=\sqrt{2}\mathbb{Z}\oplus\sqrt{2}\mathbb{Z}$ and $\mathrm{D}_1=2\mathbb{Z}$.
\begin{prop} \label{prop201801151033} For $n\ge 1$ and $p\in \mathrm{D}_n^*/\mathrm{D}_n$, $\theta_{p+\mathrm{D}_n}$ satisfies \begin{equation} \left(\srdiff{3}{n/2}-\frac{3n^2-12n+32}{576}E_4D_{n/2}-\frac{n^2(n-12)}{6912}E_6\right)f=0. \end{equation} Equivalently, $ \phi_{2n}(\eta^n\theta_{p+\mathrm{D}_n})=0. $ \end{prop}
\begin{proof} (1) The case $n\ge3$. Since $\theta_2(z)=2\frac{\eta(2z)^2}{\eta(z)}$, $\theta_3(z)=e(-\frac{1}{24})\frac{\eta((z+1)/2)^2}{\eta(z)}$ and $\theta_4(z)=\frac{\eta(z/2)^2}{\eta(z)}$, the result follows from the formulae $\theta_{\mathrm{D}_n}=\frac{1}{2}({\theta_3}^n+{\theta_4}^n)$, $\theta_{s+\mathrm{D}_n}=\frac{1}{2}({\theta_3}^n-{\theta_4}^n)$ and $\theta_{t+\mathrm{D}_n}=\theta_{s+t+\mathrm{D}_n}=\frac{1}{2}{\theta_2}^n$ (cf.\ Chapter 4, Section 7.1 in \cite{conway_sloane1999}). However, we give an alternative proof without using the formulae. We have \begin{align} \theta_{s+t+\mathrm{D}_n}&=\sum_{x_1+\cdots+x_n\in2\mathbb{Z}}q^{((x_1+3/2)^2+(x_2+1/2)^2+\cdots+(x_n+1/2)^2)/2}\\ &=\sum_{x_1+\cdots+x_n\in2\mathbb{Z}}q^{((-x_1-1/2)^2+(x_2+1/2)^2+\cdots+(x_n+1/2)^2)/2}=\theta_{t+\mathrm{D}_n}, \end{align} so the $\mathbb{C}$-vector space spanned by $\theta_{\mathrm{D}_n}$, $\theta_{s+\mathrm{D}_n}$ and $\theta_{t+\mathrm{D}_n}$ is invariant under the weight $\frac{n}{2}$ modular transformations. The leading terms are $\theta_{\mathrm{D}_n}=1+\cdots$, $\theta_{s+\mathrm{D}_n}=2nq^{1/2}+\cdots$ and $\theta_{t+\mathrm{D}_n}=2^{n-1}q^{n/8}+\cdots$.
(1-i) The case $n\ge3$ and $n\neq4$. The result follows from Mason's theorem.
(1-ii) The case $n=4$. By Theorem 3.2 in \cite{ebeling2013}, we have $\theta_{s+\mathrm{D}_n},\theta_{t+\mathrm{D}_n}\in\mathcal{M}(2,\Gamma(2),1,\mathrm{triv})$ and the valence formula shows $\theta_{s+\mathrm{D}_n}=\theta_{t+\mathrm{D}_n}$. Therefore, $\theta_{\mathrm{D}_n}$ and $\theta_{s+\mathrm{D}_n}$ satisfy $(\srdiff{2}{2}-\frac{1}{18}E_4)f=0$.
(2) The case $n=1$. The proof is the same as (1).
(3) The case $n=2$. $\mathrm{D}_2^*=\frac{\sqrt{2}}{2}\mathbb{Z}\oplus\frac{\sqrt{2}}{2}\mathbb{Z}$ and $\mathrm{D}_2^*/\mathrm{D}_2=\{0,s',t',s'+t'\}$, where $s'=(\frac{\sqrt{2}}{2},0)$ and $t'=(0,\frac{\sqrt{2}}{2})$. We have $\theta_{\mathrm{D}_2}=1+4q+4q^2+\cdots$, $\theta_{s'+\mathrm{D}_2}=\theta_{t'+\mathrm{D}_2}=2q^{1/4}+4q^{5/4}+2q^{9/4}+\cdots$ and $\theta_{s'+t'+\mathrm{D}_2}=4q^{1/2}+8q^{5/2}+4q^{9/2}+\cdots$. The result follows from Mason's theorem. \end{proof}
\begin{exmp} \label{ex201807171044} We apply Corollary \ref{cor201801191507} to the setting $k=0$, $n=3$, $N=1$, $m=2$, $l=4$ and $a={\delta}^3-\frac{3p^2-24p+128}{2304}e_4 \delta-\frac{p^2(p-24)}{55296}e_6\in{}^{\mathrm{M}}\calr_6^3$. It is clear that $\mathrm{ch}(0,a)=\{\frac{p}{24},-\frac{p}{48},\frac{1}{2}-\frac{p}{48}\}$. The monic MLDO $b\in{}^{\mathrm{QM}}\calr_4^2={}^{\mathrm{M}}\calr_4^2$ has the form $b={\delta}^2+x e_4$, where $x\in\mathbb{C}$.
If $\frac{p}{24}\in\mathrm{ch}(0,b)$, then $x=-\frac{p(p-4)}{576}$ and $b[0]\ker(a[0])\subset\ker(c[4])$, where $c={\delta}^3-\frac{3p^2+72p+512}{2304}e_4\delta-\frac{(p+16)^2 (p-8)}{55296}e_6\in{}^{\mathrm{M}}\calr_6^3$.
If $-\frac{p}{48}\in\mathrm{ch}(0,b)$, then $x=-\frac{p(p+8)}{2304}$ and $b[0]\ker(a[0])\subset\ker(c[4])$, where $c={\delta}^3-\frac{3p^2-72p+512}{2304}e_4\delta-\frac{(p-8)^2 (p-32)}{55296}e_6\in{}^{\mathrm{M}}\calr_6^3$.
If $\frac{1}{2}-\frac{p}{48}\in\mathrm{ch}(0,b)$, then $x=-\frac{(p-16)(p-24)}{2304}$ and $b[0]\ker(a[0])\subset\ker(c[4])$, where $c={\delta}^3-\frac{3p^2-72p+1664}{2304}e_4\delta-\frac{(p+16)(p-8)(p-56)}{55296}e_6\in{}^{\mathrm{M}}\calr_6^3$. \end{exmp}
\section{Eisenstein series and monic MLDE} \label{section201712301302} Utilizing the properties of $\mathcal{R}$ established in the preceding sections, we give a lower bound of the order of monic MLDEs of weight $4m+6n$ satisfied by ${E_4}^m{E_6}^n$.
By Theorem 1 in \cite{kaneko_koike2003}, the Eisenstein series $E_k$ for $k\in\{4,6,10\}$ satisfies the second order monic MLDE $(\srdiff{2}{k}-\frac{k(k+2)}{144}E_4)f=0$, the \textit{Kaneko-Zagier equation}. Explicitly, $(\srdiff{2}{4}-\frac{1}{6}E_4)E_4=0$, $(\srdiff{2}{6}-\frac{1}{3}E_4)E_6=0$ and $(\srdiff{2}{10}-\frac{5}{6}E_4)E_{10}=0$. For $k\in\{4,6,10\}$, the independent solutions to $(\srdiff{2}{k}-\frac{k(k+2)}{144}E_4)f=0$ are $E_k$ and $F_k$, where $F_k$ is a $q$-series whose leading exponent is $\frac{k+1}{6}$.
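With the normalization $D_k=q\frac{d}{dq}-\frac{k}{12}E_2$, Ramanujan's identities give $D_4E_4=-\frac{1}{3}E_6$ and $D_6E_6=-\frac{1}{2}{E_4}^2$, so the case $k=4$ can be checked directly: \begin{equation} \srdiff{2}{4}E_4=D_6D_4E_4=-\frac{1}{3}D_6E_6=\frac{1}{6}{E_4}^2=\frac{1}{6}E_4\cdot E_4. \end{equation}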
\begin{prop} \label{prop201801111416} Let $n\in\mathbb{Z}_{\ge0}$ and $k\in\{4,6,10\}$. Then ${E_k}^n$ satisfies a monic MLDE of weight $nk$ and order $n+1$. \end{prop}
\begin{proof} The case $n=0$ is trivial. Assume $n>0$. Since the $\mathbb{C}$-vector space spanned by $E_k$ and $F_k$ is invariant under the weight $k$ modular transformations, the $\mathbb{C}$-vector space spanned by ${E_k}^i {F_k}^{n-i}$ ($i=0,\ldots,n$) is invariant under the weight $nk$ modular transformations. The leading exponents of ${E_k}^i {F_k}^{n-i}$ are $\frac{(n-i)(k+1)}{6}$, so the result follows from Mason's theorem. \end{proof}
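For example, for $k=4$ and $n=2$, the space spanned by ${E_4}^2$, $E_4F_4$ and ${F_4}^2$ is invariant under the weight $8$ modular transformations, and the leading exponents $0$, $\frac{5}{6}$ and $\frac{5}{3}$ are distinct; hence $E_8={E_4}^2$ satisfies a monic MLDE of weight $8$ and order $3$.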
By Mason's theorem, ${E_4}^m{E_6}^n$ satisfies a (possibly non-monic) MLDE of weight $4m+6n$ and order $1$. By Theorem \ref{thm201804191022} (4), it satisfies a monic MLDE of weight $4m+6n$ and some order. The following theorem gives a lower bound for such orders. Note that $E_{10}=E_4E_6$.
\begin{thm} \label{thm201801111417} Let $m,n\in\mathbb{Z}_{\ge0}$ and $N\in\mathbb{Z}_{>0}$. Suppose that ${E_4}^m{E_6}^n$ satisfies a monic MLDE of weight $4m+6n$ and order $N$. Then $N\ge\max\{m,n\}+1$. \end{thm}
\begin{proof} Suppose $(\srdiff{N}{4m+6n}+g_2 \srdiff{N-2}{4m+6n}+\cdots+g_N){E_4}^m{E_6}^n=0$ with $g_i\in\mathcal{M}_{2i}$. Set $a={\delta}^N+g_2 {\delta}^{N-2}+\cdots+g_N\in{}^{\mathrm{M}}\calr_{2N}^N$ and $b=e_4 e_6 \delta+\frac{n}{2}{e_4}^3+\frac{m}{3}{e_6}^2\in{}^{\mathrm{QM}}\calr_{12}^1$. Then $b({E_4}^m{E_6}^n)=0$. By the division of MLDO, ${e_4}^N{e_6}^N a=cb+c'$ for some $c\in\mathcal{R}_{12N-12}^{N-1}$ and $c'\in\mathcal{R}_{12N}^{<1}$. Since ${E_4}^m{E_6}^n\neq0$, we have $c'=0$. Set $c=h_1 {\delta}^{N-1}+\cdots +h_N$, where $h_i\in\mathcal{M}_{10N+2i-12}$. Comparing the tops of ${e_4}^N{e_6}^N a=cb$, we see $h_1={e_4}^{N-1}{e_6}^{N-1}$. Comparing the coefficients of ${\delta}^{N-i}$ for $i\in\{1,\ldots,N-1\}$, we see \begin{align} \label{eq201712161731} {e_4}^N{e_6}^N g_i=&e_4e_6 h_{i+1}+\sum_{j=1}^i h_j\binom{N-j}{N-i-1}D^{i-j+1}[e_4e_6]\nonumber\\ &+\sum_{j=1}^i h_j\binom{N-j}{N-i}D^{i-j}\left[\frac{n}{2}{e_4}^3+\frac{m}{3}{e_6}^2\right], \end{align} since the coefficient of ${\delta}^{N-i}$ in $h_j{\delta}^{N-j}(e_4 e_6\delta+\frac{n}{2}{e_4}^3+\frac{m}{3}{e_6}^2)$ is \begin{equation} \begin{cases} h_j(\binom{N-j}{N-i-1}D^{i-j+1}[e_4e_6]+\binom{N-j}{N-i}D^{i-j}[\frac{n}{2}{e_4}^3+\frac{m}{3}{e_6}^2]) & \text{if }1\le j\le i,\\ e_4e_6 h_j & \text{if }j=i+1,\\ 0 & \text{if } i+1<j\le N. \end{cases} \end{equation}
Let us prove \begin{align} \label{201712221059} h_i=&{e_4}^{N-i}{e_6}^{N-i}\left( \frac{(N-(n+i-1))\cdots(N-(n+1))}{2^{i-1}}{e_4}^{3i-3}\right.\nonumber\\ &\left.+\frac{(N-(m+i-1))\cdots(N-(m+1))}{3^{i-1}}{e_6}^{2i-2}+e_4 e_6 R_i \right) \end{align} by induction on $i\ge2$, where $R_i\in\mathbb{Q}[e_4,e_6,g_1,\ldots,g_{i-1}]\subset\mathcal{M}$.
(1) By setting $i=1$ in Eq.\ (\ref{eq201712161731}), \begin{equation} {e_4}^N{e_6}^N g_1=e_4e_6h_2+h_1\binom{N-1}{N-2}D[e_4e_6]+h_1\left(\frac{n}{2}{e_4}^3+\frac{m}{3}{e_6}^2\right), \end{equation} so $h_2={e_4}^{N-2}{e_6}^{N-2}(\frac{N-(n+1)}{2}{e_4}^3+\frac{N-(m+1)}{3}{e_6}^2+e_4e_6g_1)$.
(2) Assume that the result holds for all $i'\le i$. By Eq.\ (\ref{eq201712161731}), \begin{align} &e_4e_6h_{i+1}={(e_4 e_6)}^N g_i-h_i\binom{N-i}{N-i-1}D[e_4e_6]-h_i\left(\frac{n}{2}{e_4}^3+\frac{m}{3}{e_6}^2\right)\nonumber\\ &\quad-\sum_{j=1}^{i-1} h_j\binom{N-j}{N-i-1}D^{i-j+1}[e_4e_6]-\sum_{j=1}^{i-1} h_j\binom{N-j}{N-i}D^{i-j}\left[\frac{n}{2}{e_4}^3+\frac{m}{3}{e_6}^2\right]\\ &=h_i\left(\frac{N-(n+i)}{2}{e_4}^3+\frac{N-(m+i)}{3}{e_6}^2\right)+{(e_4 e_6)}^{N-i+1}\mathbb{Q}[e_4,e_6,g_1,\ldots,g_{i-2},g_i]\\ &={(e_4 e_6)}^{N-i}\frac{(N-(n+i))\cdots(N-(n+1))}{2^i}{e_4}^{3i}\nonumber\\ &\quad+{(e_4 e_6)}^{N-i}\frac{(N-(m+i))\cdots(N-(m+1))}{3^i}{e_6}^{2i}+{(e_4 e_6)}^{N-i+1}\mathbb{Q}[e_4,e_6,g_1,\ldots,g_i], \end{align} therefore Eq.\ (\ref{201712221059}) holds for $i+1$.
By comparing the coefficients of ${\delta}^0$ in ${e_4}^N{e_6}^N a=cb$, it follows from Eq.\ (\ref{201712221059}) that \begin{align} {e_4}^N{e_6}^Ng_N&=\sum_{i=1}^Nh_iD^{N-i}\left[\frac{n}{2}{e_4}^3+\frac{m}{3}{e_6}^2\right]\\ &=\frac{n(N-(n+N-1))\cdots(N-(n+1))}{2^N}{e_4}^{3N}\nonumber\\ &\quad+\frac{m(N-(m+N-1))\cdots(N-(m+1))}{3^N}{e_6}^{2N}+e_4e_6\mathcal{M}. \end{align} Therefore, $n(N-(n+N-1))\cdots(N-(n+1))=0$ and $m(N-(m+N-1))\cdots(N-(m+1))=0$, so that $N>n$ and $N>m$. \end{proof}
For $\phi\in H=\bigoplus_{n\in\mathbb{R}}H_n$, we denote by $\mathrm{mord}(\phi)$ the least nonnegative integer $n$ such that some element of ${}^{\mathrm{M}}\calr^n$ annihilates $\phi$. If there is no such $n$, we formally set $\mathrm{mord}(\phi)=\infty$. We call $\mathrm{mord}(\phi)$ the \textit{modular order} of $\phi$.
By Proposition \ref{prop201801111416} and Theorem \ref{thm201801111417}, $\mathrm{mord}({E_4}^m{E_6}^n)\ge\max\{m,n\}+1$ and $\mathrm{mord}({E_4}^n)=\mathrm{mord}({E_6}^n)=\mathrm{mord}({E_4}^n{E_6}^n)=n+1$. We conjecture the following. \begin{conj} $\mathrm{mord}({E_4}^m{E_6}^n)=\max\{m,n\}+1$ for all $m,n\ge0$. \end{conj}
\section{Miscellaneous results} \label{section20180220} In this section, we show some properties of $\mathcal{R}$ that are independent of the preceding sections.
\subsection{Ideals of $\mathcal{R}$} In this subsection, we study some ideals of $\mathcal{R}$.
\begin{lem} The center $Z(\mathcal{R})$ is equal to $\langle \Delta\rangle$ (the $\mathbb{C}$-subalgebra of $\mathcal{R}$ generated by $\Delta$). \end{lem}
\begin{proof} Since $[\delta,1]=[\delta,\Delta]=0$, $Z(\mathcal{R})\supset \langle \Delta\rangle$. To prove $Z(\mathcal{R})\subset\langle \Delta\rangle$, let $a\in Z(\mathcal{R})$ be homogeneous. Set $a=\sum_{i=0}^n a_i {\delta}^i$, where $a_i\in\mathcal{M}(l-2i)$ for some $l\in\mathbb{Z}$ and $a_n\neq 0$. By comparing the coefficients of ${\delta}^{n-1}$ in $a e_4=e_4 a$, we have $a_{n-1}e_4+n a_nD[e_4]=e_4 a_{n-1}$, so $n=0$ and $a=a_0\in\mathcal{M}(l)$. Since $\delta a=a\delta$, we have $0=D[a]=D_l a$, so that the leading exponent of $a$ is $\frac{l}{12}$. Since the leading exponent of an element of $\mathcal{M}(l)$ is a nonnegative integer, $\frac{a}{\Delta^{l/12}}\in\mathcal{M}(0)=\mathbb{C}$, which means $a\in\langle \Delta\rangle$. \end{proof}
\begin{prop} $\Delta\mathcal{R}$ is a completely prime two-sided ideal of $\mathcal{R}$. \end{prop}
\begin{proof} Since $\Delta\in Z(\mathcal{R})$, $\Delta\mathcal{R}$ is a two-sided ideal. Note that $\Delta\mathcal{R}=\{\sum_{i=0}^n a_i {\delta}^i\mid a_i\in\mathcal{S},\ n\ge 0\}$. Let $a,b\in\mathcal{R}\backslash\{0\}$ be such that $ab\in\Delta\mathcal{R}$ and $a\notin\Delta\mathcal{R}$. Set $a=\sum_{i=0}^n a_i {\delta}^i$ and $b=\sum_{j=0}^m b_j {\delta}^j$, where $a_i,b_j\in\mathcal{M}$, $a_n,b_m\neq0$ and $n,m\ge 0$. Since $a\notin\Delta\mathcal{R}$, the number $N=\max\{i\in[0,n]\mid a_i\notin\mathcal{S}\}$ exists, and $\sum_{i=0}^N a_i {\delta}^i \sum_{j=0}^m b_j {\delta}^j\in\Delta\mathcal{R}$. Since the top is $a_N b_m$, we have $a_N b_m\in\mathcal{S}$ and $b_m\in\mathcal{S}$, therefore $\sum_{i=0}^N a_i {\delta}^i \sum_{j=0}^{m-1}b_j {\delta}^j\in\Delta\mathcal{R}$. By downward induction on $j$, we see that every $b_j$ belongs to $\mathcal{S}$, so $b\in\Delta\mathcal{R}$. \end{proof}
Since $\mathcal{R}$ admits division with remainder, it enjoys a PID-like property. For $a\in\mathcal{R}$, let $[a]$ denote $\{b\in\mathcal{R} \mid \text{there exists }f\in\mathcal{M}\backslash\{0\}\text{ such that }fb\in\mathcal{R} a\}$.
\begin{thm} \label{thm201804181551} \begin{enumerate} \item For any $a\in\mathcal{R}$, $[a]$ is a left ideal of $\mathcal{R}$. \item For any left ideal $I$ of $\mathcal{R}$, there exists $a\in I$ such that $\mathcal{R} a\subset I \subset [a]$. \item For any $\phi\in H$, there exists $a\in\mathcal{R}$ such that $\mathrm{ann}_{\mathcal{R}}(\phi)=[a]$. \end{enumerate} \end{thm}
\begin{proof} (1) It is clear that $[a]$ is an abelian group. Let $b\in[a]$ and $c\in\mathcal{R}$. Then $fb=pa$ for some $f\in\mathcal{M}\backslash\{0\}$ and $p\in\mathcal{R}$. By the division, $gc=qf$ for some $g\in\mathcal{M}\backslash\{0\}$ and $q\in\mathcal{R}$. Therefore, $g(cb)=qfb=qpa\in\mathcal{R} a$ and $cb\in[a]$.
(2) If $I \neq\{0\}$, choose an element $a\in I$ of the lowest order.
(3) If $\mathrm{ann}_{\mathcal{R}}(\phi)\neq\{0\}$, choose an element $a\in \mathrm{ann}_{\mathcal{R}}(\phi)$ of the lowest order. \end{proof}
\begin{rem} Just like the division properties, Theorem \ref{thm201804181551} holds for a skew polynomial ring $S=R[x;1,d]$, where $R$ is a commutative integral domain and $d$ is a derivation of $R$, and for a left $S$-module $M$ such that $fm=0$ implies $m=0$ for $f\in R\backslash\{0\}$ and $m\in M$. \end{rem}
\subsection{$\mathbb{C}$-algebras containing $\mathcal{R}$ and their endomorphisms}
\begin{lem} \label{201712241805} Suppose \begin{equation} \sum_{i,j,k,l\ge0,\ 2i+2j+4k+6l=n}C_{ijkl}z^{-i}{E_2}^j{E_4}^k{E_6}^l=0, \label{eq201802231002} \end{equation} where $C_{ijkl}\in\mathbb{C}$ and $n\ge0$. Then $C_{ijkl}=0$ for all $i,j,k,l\ge 0$ with $2i+2j+4k+6l=n$. \end{lem}
\begin{proof} For $p\in\mathbb{Z}$, set $A_p=(\begin{smallmatrix}1&p\\1&p+1\end{smallmatrix})\in\mathrm{SL}(2,\mathbb{Z})$. By replacing $z$ with $A_p z$ and dividing both sides of Eq.\ (\ref{eq201802231002}) by $(z+p+1)^n$, we have \begin{equation} \sum_{2i+2j+4k+6l=n}C_{ijkl}(z+p)^{-i}(z+p+1)^{-i}\left(E_2+\frac{6}{\pi i(z+p+1)}\right)^j{E_4}^k{E_6}^l=0. \end{equation} Letting $p\to\infty$, we obtain $\sum_{2j+4k+6l=n}C_{0jkl}{E_2}^j{E_4}^k{E_6}^l=0$ and $C_{0jkl}=0$ for all $j,k,l\ge0$ with $2j+4k+6l=n$. The result is proved by induction on $i$. \end{proof}
Consider the following three homogeneous elements of $\mathrm{End}(H)$: \begin{align} e_1&\in \mathrm{End}^1(H),\quad (f,k)\mapsto (f,k+1),\\ e_2&\in \mathrm{End}^2(H),\quad (f,k)\mapsto (E_2 f,k+2),\\ f_2&\in \mathrm{End}^2(H),\quad (f,k)\mapsto \left(\frac{f}{\log q},k+2\right). \end{align} It is straightforward to show $[\delta,e_1]=-\frac{1}{12}e_1 e_2$, $[\delta,e_2]=-\frac{1}{12}({e_2}^2+e_4)$ and $[\delta,f_2]=-{f_2}^2-\frac{1}{6}f_2 e_2$. Consider the following three graded $\mathbb{C}$-subalgebras of $\mathrm{End}(H)$: \begin{align} \mathcal{QR}&=\langle e_2,e_4,e_6,\delta \rangle,\\ \mathcal{QR}\langle e_1 \rangle&=\langle e_1,e_2,e_4,e_6,\delta \rangle,\\ \mathcal{QR}\langle f_2 \rangle&=\langle f_2,e_2,e_4,e_6,\delta \rangle. \end{align}
\begin{thm} \begin{enumerate} \item The set $\{{e_2}^j{e_4}^k{e_6}^l{\delta}^m\mid j,k,l,m\ge0\}$ forms a $\mathbb{C}$-basis of $\mathcal{QR}$. \item The set $\{{e_1}^i{e_2}^j{e_4}^k{e_6}^l{\delta}^m\mid i,j,k,l,m\ge0\}$ forms a $\mathbb{C}$-basis of $\mathcal{QR}\langle e_1 \rangle$. \item The set $\{{f_2}^i{e_2}^j{e_4}^k{e_6}^l{\delta}^m\mid i,j,k,l,m\ge0\}$ forms a $\mathbb{C}$-basis of $\mathcal{QR}\langle f_2 \rangle$. \end{enumerate} \end{thm}
\begin{proof} The proof is similar to that of Theorem \ref{thm201712301243}. Utilize Lemma \ref{201712241805} for (3). \end{proof}
\begin{thm} \label{thm201806010922} \begin{enumerate} \item As graded $\mathbb{C}$-algebras, $\mathcal{QR}$ is isomorphic to $T(E_2\oplus E_4\oplus E_6\oplus \delta)/I_1$, where $I_1$ is the two-sided ideal generated by $[\delta,E_2]+\frac{1}{12}({E_2}\otimes E_2+E_4)$, $[\delta,E_4]+\frac{1}{3}E_6$, $[\delta,E_6]+\frac{1}{2}{E_4}\otimes E_4$, $[E_2,E_4]$, $[E_2,E_6]$ and $[E_4,E_6]$. \item As graded $\mathbb{C}$-algebras, $\mathcal{QR}\langle e_1 \rangle$ is isomorphic to $T(E_1\oplus E_2\oplus E_4\oplus E_6\oplus \delta)/I_2$, where $I_2$ is generated by $[\delta,E_1]+\frac{1}{12}E_1\otimes E_2$, $[\delta,E_2]+\frac{1}{12}({E_2}\otimes E_2+E_4)$, $[\delta,E_4]+\frac{1}{3}E_6$, $[\delta,E_6]+\frac{1}{2}{E_4}\otimes E_4$, $[E_1,E_2]$, $[E_1,E_4]$, $[E_1,E_6]$, $[E_2,E_4]$, $[E_2,E_6]$ and $[E_4,E_6]$. \item As graded $\mathbb{C}$-algebras, $\mathcal{QR}\langle f_2 \rangle$ is isomorphic to $T(F_2\oplus E_2\oplus E_4\oplus E_6\oplus \delta)/I_3$, where $I_3$ is generated by $[\delta,F_2]+{F_2}\otimes F_2+\frac{1}{6}F_2\otimes E_2$, $[\delta,E_2]+\frac{1}{12}({E_2}\otimes E_2+E_4)$, $[\delta,E_4]+\frac{1}{3}E_6$, $[\delta,E_6]+\frac{1}{2}{E_4}\otimes E_4$, $[F_2,E_2]$, $[F_2,E_4]$, $[F_2,E_6]$, $[E_2,E_4]$, $[E_2,E_6]$ and $[E_4,E_6]$. \end{enumerate} \end{thm}
\begin{proof} The proof is similar to that of Theorem \ref{thm201801111434} (1). \end{proof}
We set $E_1=(1,1)\in H_1$ and $F_2=(\frac{1}{\log q},2)\in H_2$. It is easy to see that $\{E_1,E_2,E_4,E_6\}$ and $\{F_2,E_2,E_4,E_6\}$ are algebraically independent in $H$ (cf. Lemma \ref{201712241805}). Note $\delta E_1=-\frac{1}{12}E_1 E_2$, $\delta F_2=-{F_2}^2-\frac{1}{6}F_2 E_2$ and $\delta E_2=-\frac{1}{12}({E_2}^2+E_4)$.
\begin{thm} \begin{enumerate} \item As $\mathbb{C}$-algebras, $\mathcal{QR}$ is isomorphic to $\mathcal{QM}[\xi;1,\delta]$. \item As $\mathbb{C}$-algebras, $\mathcal{QR}\langle e_1 \rangle$ is isomorphic to $\mathbb{C}[E_1,E_2,E_4,E_6][\xi;1,\delta]$. \item As $\mathbb{C}$-algebras, $\mathcal{QR}\langle f_2 \rangle$ is isomorphic to $\mathbb{C}[F_2,E_2,E_4,E_6] [\xi;1,\delta]$. \end{enumerate} \end{thm}
\begin{proof} The proof is similar to that of Theorem \ref{thm201805311336}. \end{proof}
The grade-preserving endomorphisms of $\mathcal{R}$, $\mathcal{QR}$, $\mathcal{QR}\langle e_1 \rangle$ and $\mathcal{QR}\langle f_2 \rangle$ can be directly calculated by Theorem \ref{thm201806010922}. Let $\mathcal{X}$ be either $\mathcal{R}$, $\mathcal{QR}$, $\mathcal{QR}\langle e_1 \rangle$ or $\mathcal{QR}\langle f_2 \rangle$. Then the monoid of grade-preserving endomorphisms of $\mathcal{X}$ consists of the following: For $\mathcal{X}=\mathcal{R}$, \begin{align} (e_4,e_6,\delta)&\mapsto(a^2e_4,a^3e_6,a\delta). \end{align} For $\mathcal{X}=\mathcal{QR}$, \begin{align} (e_2,e_4,e_6,\delta)&\mapsto(ae_2,a^2e_4,a^3e_6,a\delta+be_2),\\ &\mapsto(0,0,0,a\delta+be_2). \end{align} For $\mathcal{X}=\mathcal{QR}\langle e_1 \rangle$, \begin{align} (e_1,e_2,e_4,e_6,\delta)&\mapsto(ae_1,be_2,b^2e_4,b^3e_6,b\delta+c{e_1}^2+de_2),\\ &\mapsto(0,0,0,0,b\delta+c{e_1}^2+de_2). \end{align} For $\mathcal{X}=\mathcal{QR}\langle f_2 \rangle$, \begin{align} (f_2,e_2,e_4,e_6,\delta)&\mapsto(af_2,ae_2,a^2e_4,a^3e_6,a\delta+bf_2+ce_2),\\ &\mapsto(-af_2,12af_2+ae_2,a^2e_4,a^3e_6,a\delta+bf_2+ce_2), \label{eq201802221820} \\ &\mapsto(0,12af_2+ae_2,a^2e_4,a^3e_6,a\delta+bf_2+ce_2),\\ &\mapsto(0,ae_2,a^2e_4,a^3e_6,a\delta+bf_2+ce_2),\\ &\mapsto(0,0,0,0,a\delta+bf_2+ce_2), \end{align} where $a,b,c,d\in\mathbb{C}$ are arbitrary.
As an application of the endomorphisms above, we prove Proposition \ref{prop201805061103}. Note that $(\delta-a {e_1}^2)(q^a,0)=0$ and $(\delta-a f_2)((\log q)^a,0)=0$ for $a\in\mathbb{C}$.
\begin{prop} \label{prop201805061103} Let $F_i(x,y,z),G(x,y,z)\in\mathbb{C}[x,y,z]$ be polynomials for $0\le i\le n$. Suppose that $\mathrm{wt} (F_i(e_2,e_4,e_6))+2i=l$ is constant for $0\le i\le n$, and $\mathrm{wt} (G(e_2,e_4,e_6))=k$. If $\sum_{i=0}^n F_i(E_2,E_4,E_6)\srdiff{i}{k} G(E_2,E_4,E_6)=0$, then \begin{equation} \sum_{i=0}^n F_i\left(\frac{12}{\log q}+E_2,E_4,E_6\right)\left(D_k-\frac{a}{\log q}\right)^{(i)} \left((\log q)^a G\left(\frac{12}{\log q}+E_2,E_4,E_6\right)\right)=0 \end{equation} for $a\in\mathbb{C}$, where $(D_k-\frac{a}{\log q})^{(i)}=(D_{k+2i-2}-\frac{a}{\log q})\cdots (D_{k+2}-\frac{a}{\log q})(D_k-\frac{a}{\log q})$. \end{prop}
\begin{proof} Since $\sum_{i=0}^n F_i(e_2,e_4,e_6){\delta}^i G(e_2,e_4,e_6)(1,0)=0$, $\sum_{i=0}^n F_i(e_2,e_4,e_6){\delta}^i G(e_2,e_4,e_6)\allowbreak=x\delta$ for some $x\in\mathcal{QR}$. By applying the endomorphism (\ref{eq201802221820}) with $a=1$ and $c=0$, we obtain $\sum_{i=0}^n F_i(12 f_2+e_2,e_4,e_6)(\delta+b f_2)^i G(12 f_2+e_2,e_4,e_6)=x'(\delta+b f_2)$ for some $x'\in\mathcal{QR}\langle f_2 \rangle$. Replace $b$ with $-a$. Then $0=\sum_{i=0}^n F_i(12 f_2+e_2,e_4,e_6)(\delta-a f_2)^i G(12 f_2+e_2,e_4,e_6)((\log q)^a,0)=(\sum_{i=0}^n F_i(\frac{12}{\log q}+E_2,E_4,E_6)(D_k-\frac{a}{\log q})^{(i)} ((\log q)^a G(\frac{12}{\log q}+E_2,E_4,E_6)),\allowbreak k+l)$. \end{proof}
\end{document}
\begin{document}
\title{Bichromatic Compatible Matchings}
\author{Greg Aloupis\thanks{D\'epartement d'Informatique, Universit\'e Libre de Bruxelles, Brussels, Belgium {\tt aloupis.greg@gmail.com,} \tt{\{lbarbafl,slanger\}@ulb.ac.be}}\ \thanks{Charg\'e de recherches du F.R.S.-FNRS.} \and Luis Barba$^*$\thanks{School of Computer Science, Carleton University, Ottawa, Canada}\ \thanks{Boursier FRIA du FNRS} \and Stefan Langerman$^*$\thanks{Directeur de recherches du F.R.S.-FNRS.} \and Diane L. Souvaine\thanks{NSF Grant \#CCF-0830734., Department of Computer Science, Tufts University, Medford, MA. {\tt dls@cs.tufts.edu}} } \date{}
\maketitle \begin{abstract} For a set $R$ of $n$ red points and a set $B$ of $n$ blue points, a \emph{$BR$-matching} is a non-crossing geometric perfect matching where each segment has one endpoint in $B$ and one in $R$. Two $BR$-matchings are compatible if their union is also non-crossing. We prove that, for any two distinct $BR$-matchings $M$ and $M'$, there exists a sequence of $BR$-matchings $M = M_1, \ldots, M_k = M'$ such that $M_{i-1}$ is compatible with $M_i$ for $1 < i \leq k$. This implies the connectivity of the \emph{compatible bichromatic matching graph}, which contains one node for each $BR$-matching and an edge joining each pair of compatible $BR$-matchings, thereby answering the open problem posed by Aichholzer et al. in~\cite{CompatibleMatchingsForPSLG}. \end{abstract}
\section{Introduction}
A planar straight line graph (PSLG) is a geometric graph in which the vertices are points embedded in the plane and the edges are non-crossing line segments. There are many special types of PSLGs of which we name a few. A triangulation is a PSLG to which no more edges may be added between existing vertices. A \emph{geometric matching} of a given point set $P$ is a 1-regular PSLG consisting of pairwise disjoint line segments in the plane joining points of $P$. A geometric matching is \emph{perfect} if every point in $P$ belongs to exactly one segment.
Two branches of study on PSLGs include those of geometric augmentation and geometric reconfiguration. A typical augmentation problem on PSLG $G = (V,E)$ asks for a set of new edges $E'$ such that the graph $(V,E \cup E')$ retains or gains some desired properties (see survey by Hurtado and T\'oth~\cite{Hurtado2012}).
A typical reconfiguration problem on a pair of PSLGs $G$ and $G'$ sharing some property asks for a sequence of PSLGs $G = G_0, \ldots, G_k = G'$ where each successive pair of PSLGs $G_{i-1}$, $G_i$ jointly satisfy some geometric constraints.
In some situations, a bound on the value of $k$ is desired as well~\cite{Aichholzer200619, Aichholzer20023, Aichholzer2009617, Buchin2009, Hurtado1999, Hurtado1996, Razen2008}.
One such solved problem is that of reconfiguring triangulations: given two triangulations $T$ and $T'$, one can compute a sequence of triangulations $T=T_0 , \ldots, T_k = T'$ on the same point set such that $T_{i-1}$ can be reconfigured to $T_i$ by flipping one edge.
Furthermore, bounds on the value of $k$ are known: $O(n^2)$ edge flips are always sufficient~\cite{Hurtado1996} and $\Omega (n^2)$ edge flips are sometimes necessary~\cite{Hurtado1999}.
Two PSLGs on the same vertex set are {\it compatible} if their union is planar. Compatible geometric matchings have been the object of study in both augmentation and reconfiguration problems. For example, the {\it Disjoint Compatible Matching Conjecture}~\cite{Aichholzer2009617} was recently solved in the affirmative~\cite{Ishaque2011}: every perfect planar matching $M$ of $2n$ segments on $4n$ points can be augmented by $2n$ additional segments to form a PSLG that is the union of simple polygons.
Let $M$ and $M'$ be two perfect planar matchings of a given point set. The reconfiguration problem asks for a \emph{compatible sequence} of matchings $M = M_0 , \ldots, M_k = M'$ such that $M_{i-1}$ is compatible with $M_i$ for all $i \in \{1, \ldots,k\}$. Aichholzer et al.~\cite{Aichholzer2009617} proved that there is always a compatible sequence of $O(\log n)$ matchings that reconfigures any given matching into a canonical matching. Thus, the
\emph{compatible matching graph}, which has one node for each perfect planar matching and an edge between any two compatible matchings, is connected with diameter
$O(\log n)$. Razen~\cite{Razen2008} proved that the distance between two nodes in this graph is sometimes $\Omega(\log n/ \log \log n)$.
A natural way to extend this research is to ask what happens for bichromatic point sets, in which segments must join points of different colors.
Let $P= B\cup R$ be a set of points in the plane in general position where $|R|=|B| = n$. A straight-line segment with one endpoint in $B$ and one in $R$ is called a \emph{bichromatic segment}. A perfect planar matching of $P$ where every segment is bichromatic is called a \emph{$BR$-matching}. Sharir and Welzl~\cite{Sharir2006} proved that the number of $BR$-matchings of $P$ is at most $O(7.61^n)$. Hurtado et al.~\cite{Hurtado200814} showed that any $BR$-matching can be augmented to a crossing-free bichromatic spanning tree in $O(n \log n)$ time. Aichholzer et al.~\cite{CompatibleMatchingsForPSLG} proved that for any $BR$-matching $M$ of $P$, there are at least $\lceil\frac{n-1}{2}\rceil$ bichromatic segments spanned by $P$ that are compatible with $M$. Furthermore, there are $BR$-matchings with at most $3n/4$ compatible bichromatic segments.
At least one $BR$-matching can always be produced by recursively applying \emph{ham-sandwich cuts}; see Fig.~\ref{fig:CanonicalMatching} for an illustration. A $BR$-matching produced in this way is called a \emph{ham-sandwich matching}. Notice that the general position assumption is sometimes necessary to guarantee the existence of a $BR$-matching. However, not all $BR$-matchings can be produced using ham-sandwich cuts. Furthermore, some point sets admit only one $BR$-matching, which must be a ham-sandwich matching.
Two $BR$-matchings $M$ and $M'$ are \emph{connected} if there is a sequence of $BR$-matchings $M = M_0, \ldots, M_k = M'$, such that $M_{i-1}$ is compatible with $M_{i}$, for $1\leq i\leq k$. An open problem posed by Aichholzer et al.~\cite{CompatibleMatchingsForPSLG} was to prove that all $BR$-matchings of a given point set are connected\footnote{This problem was also posed during the EuroGIGA meeting that took place after EuroCG 2012.}. We answer this in the affirmative by using a ham-sandwich matching $H$ as a canonical form. Consider the first ham-sandwich cut line $\ell$ used to construct $H$. We show how to reconfigure any given $BR$-matching via a compatible sequence, so that the last matching in the sequence contains no segment crossing $\ell$. We use this result recursively, on every ham-sandwich cut used to generate $H$, to show that any given $BR$-matching is connected with $H$.
\section{Ham-sandwich matchings}\label{Section:Ham-Sandwich} In this paper, a \emph{ham-sandwich cut} of $P$ is a line passing through no point of $P$ and having exactly $\lfloor \frac{n}{2}\rfloor$ blue and $\lfloor \frac{n}{2}\rfloor$ red points on one side. Notice that if $n$ is even, then this matches the \emph{classical} definition of ham-sandwich cuts (see Chapter 3 of~\cite{MatousekBorsukUlam}). However, when $n$ is odd, a ham-sandwich cut $\ell$ according to the classical definition goes through one red and one blue point of $P$. In this case, we obtain a ham-sandwich cut, according to our definition, by slightly moving $\ell$ away from these two points without changing its slope and without reaching another point of $P$. By the general position assumption this is always possible.
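For small instances, a cut in the above sense can be found by brute force: in general position, some ham-sandwich cut can always be realized as a slightly perturbed copy of a line through two points of $P$. A Python sketch (the function names, search strategy, and sample points are ours, for illustration only):

```python
from itertools import combinations

def side(a, b, c):
    # Orientation sign of c relative to the directed line through a and b.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def ham_sandwich_cut(B, R):
    """Search all lines through two points of P.  Returns (a, b, take_a, take_b)
    meaning: perturb the line through a and b so that a lands on the left side
    iff take_a (likewise b for take_b); then exactly floor(n/2) blue and
    floor(n/2) red points lie strictly to the left, as the definition requires."""
    n = len(B)
    for a, b in combinations(B + R, 2):
        left_blue = sum(1 for p in B if side(a, b, p) > 0)
        left_red = sum(1 for p in R if side(a, b, p) > 0)
        for take_a in (0, 1):          # which side each on-line point goes to
            for take_b in (0, 1):
                if (left_blue + take_a * (a in B) + take_b * (b in B) == n // 2
                        and left_red + take_a * (a in R) + take_b * (b in R) == n // 2):
                    return a, b, bool(take_a), bool(take_b)
    return None  # cannot happen for points in general position

B = [(0, 0), (2, 3), (5, 1)]   # blue points (an arbitrary small example)
R = [(1, 2), (4, 4), (3, 0)]   # red points
assert ham_sandwich_cut(B, R) is not None
```

This quadratic family of candidate lines suffices because a cut can be translated and rotated, without changing the counts on its open sides, until it touches two points of $P$.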
Since every bichromatic point set admits a ham-sandwich cut, $P$ admits at least one $BR$-matching resulting from recursively applying ham-sandwich cuts. We call this a \emph{ham-sandwich matching}; see Fig.~\ref{fig:CanonicalMatching}. Notice that $P$ may admit several ham-sandwich matchings.
\begin{figure}
\caption{A ham-sandwich matching obtained by recursively applying ham-sandwich cuts.}
\label{fig:CanonicalMatching}
\end{figure}
Let $M$ be a $BR$-matching of $P$. In this section we prove that $M$ is connected with a ham-sandwich matching $H$ of $P$. Consider a ham-sandwich cut $\ell$ used to construct $H$. The idea of the proof is to show the existence of a $BR$-matching $M'$, compatible with $M$, such that $M'$ has ``fewer'' crossings with $\ell$ according to some measure defined on $BR$-matchings. By repeatedly applying this result, we end up with a matching $M^\ell$, connected with $M$, such that no segment of $M^\ell$ crosses $\ell$. Once we know how to avoid a ham-sandwich cut, we can apply the same result recursively on every ham-sandwich cut used to generate $H$. In this way, we obtain a sequence of compatible $BR$-matchings that connect $M$ with $H$.
The main ingredient to obtain these results is Lemma~\ref{lemma:CompatibleBRMatching}. Before stating this lemma, we need a few more definitions.
Given a line $\ell$ that contains no point of $P$, let $S_{M,\ell}$ be the set of segments of $M$ that cross $\ell$. We say that $\ell$ is a \emph{chromatic cut} of $M$ if $|S_{M,\ell}| \geq 2$ and not all endpoints of $S_{M,\ell}$ on one side of $\ell$ have the same color. Without loss of generality, we can assume that if a chromatic cut $\ell$ exists, then it is vertical and no segment of $M$ is parallel to $\ell$. The following lemma shows the relation between chromatic cuts and ham-sandwich cuts.
\begin{lemma}\label{lemma:HamCutsAreChromaticCuts} Given any $BR$-matching $M$ of $P$, every ham-sandwich cut of $P$ either crosses no segment of $M$, or is a chromatic cut of $M$. \end{lemma} \begin{proof} Let $\ell$ be a ham-sandwich cut of $P$. Recall that the number of blue and red points to the right of $\ell$ must be the same. Moreover, every segment not crossed by $\ell$ has both of its endpoints on one side of $\ell$. Therefore, if $\ell$ crosses a segment of $M$ having a red endpoint to the right of $\ell$, then, to maintain the balance of red and blue points, $\ell$ must cross another segment having a blue endpoint to the right of $\ell$. That~is, $\ell$ is a chromatic cut of~$M$. \end{proof}
We say that a point lies above (\emph{resp.} below) a segment if it lies above (\emph{resp.} below) the line extending that segment. We proceed to state Lemma~\ref{lemma:CompatibleBRMatching}. However, its proof is deferred to Section~\ref{Section:Avoiding Chromatic Cuts} for ease of readability.
\begin{lemma}\label{lemma:CompatibleBRMatching} Let $M$ be a $BR$-matching of $P$ and let $\ell$ be a chromatic cut of $M$. There exists a $BR$-matching $M'$ of $P$, compatible with $M$, with the following properties. There is a segment $s$ in $M\setminus M'$ that crosses $\ell$ such that all segments of $M$ that cross $\ell$ below $s$ belong also to $M'$. Moreover, these are the only segments of $M'$ crossing $\ell$ below $s$. \end{lemma}
In other words, Lemma~\ref{lemma:CompatibleBRMatching} states that we can find a $BR$-matching $M'$, compatible with $M$, such that a segment $s$ from $M$ that crosses $\ell$ does not appear in $M'$. Moreover, every segment of $M$ that crosses $\ell$ below $s$ is preserved in $M'$, and no new segment crossing $\ell$ below $s$ is introduced. However, we have no control over what happens above $s$.
\begin{lemma}\label{lemma:CompatibleSequence} Given a $BR$-matching $M$ of $P$ and a ham-sandwich cut $\ell$, there is a $BR$-matching $M^\ell$ connected with $M$ such that no segment of $M^\ell$ crosses $\ell$. \end{lemma} \begin{proof} Assume that $\ell$ is a chromatic cut of $M$, i.e., that it crosses at least one segment of $M$. Otherwise, the result follows trivially. Given a $BR$-matching $W$ of $P$ such that $\ell$ is a chromatic cut of $W$, let $\textsc{Next}(W)$ be the matching, compatible with $W$, that exists as a consequence of Lemma~\ref{lemma:CompatibleBRMatching} when applied on $W$.
Let $M_0 = M$. If $S_{M_i, \ell} \neq \emptyset$, i.e., there are segments of $M_i$ that cross~$\ell$, then let $M_{i+1} = \textsc{Next}(M_i)$. We claim that the sequence $\varphi = (M_0, M_1,\ldots, M_h)$ is finite and hence, that $M^\ell := M_h$ has no segment crossing $\ell$. Note that the sequence $\varphi$ is well defined by Lemma~\ref{lemma:HamCutsAreChromaticCuts}.
Assume without loss of generality that $\ell$ is a vertical line. Let $\mathcal C_P = \{z_1, z_2, \ldots, z_m\}$ be the set of all possible $O(n^2)$ bichromatic segments that cross $\ell$. Assume that the segments of $\mathcal C_P$ are sorted, from bottom to top, according to their intersection with $\ell$. Given a $BR$-matching $W$ of $P$, let $\chi_{_W} = b_1b_2\ldots b_m$ be a binary number where $b_i$ is defined as follows: $$b_i = \left\{ \begin{array}{lll} 1&&\text{if $z_i$ belongs to $W$,}\\ 0&&\text{otherwise.} \end{array}\right.$$
Let $M_i$ and $M_{i+1}$ be two consecutive matchings in $\varphi$. By Lemma~\ref{lemma:CompatibleBRMatching}, there is a segment $s$, corresponding to some segment $z_k$ in $\mathcal C_P$, such that $s= z_k$ belongs to $M_i$ but not to $M_{i+1}$. Moreover, if $z_j$ is a segment that crosses $\ell$ below $z_k$, then $z_j$ belongs to $M_i$ if and only if $z_j$ belongs to $M_{i+1}$. Therefore, the $k$-th digit of $\chi_{_{M_i}}$ is 1 while the $k$-th digit of $\chi_{_{M_{i+1}}}$ is 0. Moreover, for every $j< k$, the $j$-th digit of $\chi_{_{M_i}}$ is identical to the $j$-th digit of $\chi_{_{M_{i+1}}}$. This implies that $\chi_{_{M_i}} > \chi_{_{M_{i+1}}}$. Therefore, $\Phi =\chi_{_{M_0}}, \chi_{_{M_1}}, \ldots$ is a strictly decreasing sequence of nonnegative integers. Hence no $BR$-matching is repeated and the sequence $\varphi$ is finite, yielding our claim: we reach a $BR$-matching containing no segment that crosses $\ell$. \end{proof}
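The potential argument above can be made concrete in a few lines of code: list the candidate crossing segments bottom to top and read off the bit string $\chi$, with the bottommost candidate as the most significant digit; lexicographic comparison of tuples then coincides with comparison of the binary numbers. A toy sketch with abstract segment labels (the labels and the particular matchings are ours, chosen only to illustrate the one-step decrease):

```python
def chi(matching, candidates):
    # candidates: the bichromatic segments crossing l, sorted bottom to top;
    # the bottommost candidate contributes the most significant digit.
    return tuple(1 if z in matching else 0 for z in candidates)

candidates = ["z1", "z2", "z3", "z4"]   # bottom to top
M_i = {"z1", "z2", "z4"}                # crossings of the current matching
M_next = {"z1", "z3"}                   # z2 dropped; above z2 anything may change
assert chi(M_i, candidates) == (1, 1, 0, 1)
assert chi(M_next, candidates) == (1, 0, 1, 0)
assert chi(M_next, candidates) < chi(M_i, candidates)   # potential strictly decreases
```

Since each step flips some bit from 1 to 0 while leaving all more significant bits unchanged, the potential can decrease at most $2^m$ times, so the sequence $\varphi$ is finite.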
\begin{theorem}\label{theorem:The graph is connected}
Let $P= B\cup R$ be a set of points in the plane in general position where $|R|=|B|$. If $M$ is a $BR$-matching of $P$ and $H$ is a ham-sandwich matching of $P$, then $M$ and $H$ are connected, i.e., there is a sequence of $BR$-matchings $M = M_0, \ldots, M_r = H$ such that $M_{i-1}$ is compatible with $M_i$ for $1\leq i\leq r$. \end{theorem}
\begin{proof} The proof goes by induction on the size of $P$.
Notice that the result follows trivially if $|P| = 2$, since $P$ then admits a unique $BR$-matching.
Assume that the result holds for any bichromatic point set with fewer than $n$ points. Let $\ell$ be the first ham-sandwich cut line used to construct $H$. By Lemma~\ref{lemma:CompatibleSequence}, there is a matching $M^\ell$ such that $M$ and~$M^\ell$ are connected, and no segment of $M^\ell$ crosses $\ell$. Let $\Pi_1$ and $\Pi_2$ be the two halfplanes supported by $\ell$.
For~$i\in \{1,2\}$, let $P_i$ be the set of points of $P$ that lie in $\Pi_i$ and let $M_i$ and $H_i$ be, respectively, the set of segments of $M^\ell$ and $H$ that are contained in $\Pi_i$. Because $|P_i| < |P|$, $M_i$ and $H_i$ are connected by the induction hypothesis.
Since every $BR$-matching of $P_1$ is compatible with every $BR$-matching of $P_2$, we can merge the two compatible sequences obtained by the recursive construction that certify that $M_i$ and $H_i$ are connected. Thus, $M^\ell$ is connected with $H$ and because $M$ is connected to $M^\ell$, $M$ and $H$ are also connected. \end{proof}
Let $V$ be the set of all $BR$-matchings of $P$ and let $G_P$ be the \emph{compatible bichromatic matching graph} of~$P$ with vertex set $V$, where there is an edge between two vertices if their corresponding $BR$-matchings are compatible.
\begin{corollary}\label{corollary:G_P is connected}
Given a set of points $P= B\cup R$ in general position such that $|R|=|B| = n$, the graph $G_P$ is connected. \end{corollary}
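The connectivity statement can be sanity-checked by exhaustive search on small instances: enumerate every $BR$-matching, join compatible pairs, and run a graph traversal. A brute-force Python sketch (the predicates, names, and sample instance are ours; feasible only for small $n$):

```python
from itertools import permutations
from collections import deque

def orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def properly_cross(s, t):
    # Proper crossing test; segments sharing an endpoint do not cross
    # (the points are assumed to be in general position).
    (a, b), (c, d) = s, t
    if len({a, b, c, d}) < 4:
        return False
    return (orient(a, b, c) * orient(a, b, d) < 0
            and orient(c, d, a) * orient(c, d, b) < 0)

def noncrossing(segments):
    return all(not properly_cross(s, t)
               for i, s in enumerate(segments) for t in segments[i + 1:])

def br_matchings(B, R):
    # Every perfect bichromatic matching pairs B[i] with perm[i].
    return [tuple(zip(B, perm)) for perm in permutations(R)
            if noncrossing(list(zip(B, perm)))]

def compatible(m1, m2):
    return noncrossing(list(set(m1) | set(m2)))

B = [(0, 0), (2, 3), (5, 1)]   # an arbitrary small instance in general position
R = [(1, 2), (4, 4), (3, 0)]
V = br_matchings(B, R)

# BFS in the compatible bichromatic matching graph G_P, starting from V[0].
seen, queue = {V[0]}, deque([V[0]])
while queue:
    u = queue.popleft()
    for v in V:
        if v not in seen and compatible(u, v):
            seen.add(v)
            queue.append(v)
assert len(seen) == len(V)     # every BR-matching is reachable: G_P is connected
```

Such a search visits all $n!$ bichromatic perfect matchings, so it is only a verification aid; the bound of Sharir and Welzl~\cite{Sharir2006} shows the non-crossing ones are far fewer.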
While Theorem~\ref{theorem:The graph is connected} implies the connectedness of $G_P$, proving non-trivial upper bounds on its diameter remains elusive. Lemma~\ref{lemma:Lower Bound}, depicted in Fig.~\ref{fig:LowerBound}, provides a linear lower bound on the diameter of $G_P$. Since Sharir and Welzl~\cite{Sharir2006} proved that the number of vertices in $G_P$ is at most $O(7.61^n)$, we obtain a trivial exponential bound on its diameter.
\begin{figure}
\caption{$a)$ Two $BR$-matchings $M$ and $M'$ at distance $\Omega(n)$ in the graph $G_P$ that are both ham-sandwich matchings. $b)$ The only two segments compatible with $M$ are the topmost and bottommost segments of $M'$.}
\label{fig:LowerBound}
\end{figure}
\begin{lemma}\label{lemma:Lower Bound} There exists a bichromatic set $P = B\cup R$ of $4n$ points that admits two $BR$-matchings at distance $\Omega(n)$ in $G_P$. \end{lemma} \begin{proof} Let $P$ be the set of vertices of a regular $4n$-gon $Q$. Partition $P$ into four disjoint sets $P_0, \ldots, P_3$, each consisting of $n$ consecutive points along the boundary of $Q$. Assume without loss of generality that the unique edge on the boundary of $Q$ joining a point from $P_3$ with a point in $P_0$ is parallel to the $x$-axis. Moreover, assume that this edge is the topmost edge of $Q$. Let $B = P_0\cup P_2$ and let $R = P_1\cup P_3$. Note that for any $0\leq i\leq 3$, both bichromatic point sets $P_i\cup P_{i+1}$ and $P_i\cup P_{i-1}$ have unique $BR$-matchings, where indices are taken modulo~$4$; see Fig.~\ref{fig:LowerBound}$(a)$.
Let $M$ be the union of the unique $BR$-matchings of $P_0\cup P_1$ and $P_2\cup P_3$. Analogously, let $M'$ be the union of the $BR$-matchings of $P_1\cup P_2$ and $P_0\cup P_3$; see Fig.~\ref{fig:LowerBound}$(a)$ for an illustration. Let $p$ be a point of $P$ and let $M(p)$ and $M'(p)$ be the points matched with $p$ in $M$ and $M'$, respectively. We claim that in any $BR$-matching $W$ of $P$, the point $p$ is matched with either $M(p)$ or $M'(p)$. If this is not the case, then the segment $s$ joining $p$ with its neighbor in $W$ separates $P$ into two sets, one of them having an unbalanced number of blue and red points---a contradiction as the points on each side of $s$ support a $BR$-matching that does not cross $s$.
Let $s_1, \ldots, s_{n}$ be the last $n$ segments of $M$ when sorted from left to right. We claim that, to connect~$M$ with a $BR$-matching that does not contain $s_i$, we need a compatible sequence of $BR$-matchings of length at least $i$. The proof goes by induction on $i$.
For the base case, note that only the topmost and bottommost segments of $M'$ are compatible with $M$: any other bichromatic segment either crosses an edge of $M$ or splits $P$ into two unbalanced sets. By adding these two edges and removing the segments of $M$ that form a cycle with them, we obtain the unique $BR$-matching compatible with $M$ in which $s_1$ is not present; see Fig.~\ref{fig:LowerBound}$(b)$.
Assume that the result holds for any $j<i$. Let $p$ be an endpoint of $s_i$ and note that the segment joining $p$ with $M'(p)$ crosses $s_{i-1}$. That is, $p$ is matched through $s_i$ in any $BR$-matching that contains~$s_{i-1}$. Consequently, to reach a $BR$-matching where $s_i$ is not present, we need to first remove $s_{i-1}$, which requires at least $i-1$ steps by the induction hypothesis, and then at least one more step to remove $s_i$. Therefore, a sequence of at least~$i$ $BR$-matchings is needed to connect $M$ with a $BR$-matching that does not contain $s_i$.
Thus, to connect $M$ with $M'$, which does not contain $s_n$, we need a compatible sequence of $\Omega(n)$ $BR$-matchings, proving our result. \end{proof}
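The lower-bound construction above can be made concrete. The following Python sketch (an illustration of ours, not part of the proof; all names are hypothetical) builds the $4n$ points on a circle with the coloring $B = P_0\cup P_2$, $R = P_1\cup P_3$, and describes $M$ and $M'$ explicitly as nested index pairs, the nested pairing being one way to realize the unique $BR$-matchings of consecutive arcs. A chord-crossing test for points in convex position lets one verify planarity.

```python
import math

def lower_bound_instance(n):
    """Build the lower-bound example: 4n points on a circle split into four
    arcs P0..P3 of n consecutive points, with B = P0 u P2 and R = P1 u P3.
    M nests P0 with P1 and P2 with P3; M' nests P1 with P2 and P3 with P0
    (the last pairing wraps around the topmost edge)."""
    pts = [(math.cos(2 * math.pi * i / (4 * n)), math.sin(2 * math.pi * i / (4 * n)))
           for i in range(4 * n)]
    color = ['B' if (i // n) % 2 == 0 else 'R' for i in range(4 * n)]
    M = [(j, 2 * n - 1 - j) for j in range(n)] \
      + [(j, 6 * n - 1 - j) for j in range(2 * n, 3 * n)]
    Mp = [(j, 4 * n - 1 - j) for j in range(n, 2 * n)] \
       + [(3 * n + t, n - 1 - t) for t in range(n)]
    return pts, color, M, Mp

def crossing(a, b, c, d, m):
    """Chords (a, b) and (c, d) of a convex m-gon cross iff exactly one of
    c, d lies strictly inside the circular arc from a to b."""
    def inside(x):
        return 0 < (x - a) % m < (b - a) % m
    return inside(c) != inside(d)
```

For points in convex position, a planar matching is exactly a set of pairwise non-crossing chords, which is what `crossing` detects.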
\section{Well-colored graphs and basic tools}\label{Section:Tools} In this section, we introduce some tools that will help us prove Lemma~\ref{lemma:CompatibleBRMatching} in Section~\ref{Section:Avoiding Chromatic Cuts}.
Given a face $F$ of a PSLG, we denote its interior by $int(F)$ and its boundary by $\partial F$. In the remainder, we will only consider bounded faces when we refer to a face of a PSLG. A vertex $v$ is \emph{reflex in $F$} if the intersection of $int(F)$ with every sufficiently small disk centered at $v$ has a non-convex connected component. Notice that a vertex can be reflex in at most one face of a PSLG. A vertex of a PSLG is \emph{reflex} if it is reflex in one of its bounded faces.
Let $F$ be a face of a given PSLG whose reflex vertices are colored either blue or red. We say that $F$ is \emph{well-colored} if the sequence of reflex vertices along its boundary alternates in color. In the same way, a PSLG is well-colored if all its faces are well-colored. Since a vertex is reflex in at most one face, a well-colored PSLG has an even number of reflex vertices.
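Once the reflex vertices of a face are listed in boundary order, the alternation condition is a simple cyclic scan. The following Python helper (our own sketch, not part of the paper's construction) decides it for one face:

```python
def is_well_colored(colors):
    """Decide whether a cyclic sequence of reflex-vertex colors alternates.
    `colors` lists the colors ('B' or 'R') of the reflex vertices of one
    face in boundary order; a face without reflex vertices is trivially
    well-colored."""
    n = len(colors)
    if n == 0:
        return True
    if n % 2 == 1:
        return False  # a cyclic sequence of odd length cannot alternate
    return all(colors[i] != colors[(i + 1) % n] for i in range(n))
```

In particular, cyclic alternation forces an even number of reflex vertices on each face, which is the parity fact noted above.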
Let $G$ be a well-colored PSLG. The \emph{boundary of $G$}, denoted by $\ensuremath{\partial G} $, is the union of all the edges in $G$. The interior of $G$ is the union of the interior of its faces. Let $M$ be a $BR$-matching such that all segments of $M$ are contained in the interior of $G$, i.e., no segment of $M$ crosses an edge of $G$. We show how to \emph{glue} the segments of $M$ to $G$ in such a way that every endpoint of $M$ becomes a reflex vertex. We then provide a technique to construct a new $BR$-matching, compatible with $M$, by matching the reflex vertices of $G$ after the gluing.
\subsection{Coloring a PSLG}
We say that two points $x$ and $y$ are \emph{visible} if the open segment $(x,y)$ is contained in the interior of $G$ and crosses no segment of $M$. Throughout this paper, the color of a point not in $P$, either on a bichromatic segment or on $\ensuremath{\partial G} $, depends on the position from which it is viewed; see Fig.~\ref{fig:ColoringTheBoundary} for an example.
Assume that $F$ is a well-colored face of $G$ and let $x$ be a point on $\partial F$. Let $y$ be a point visible from $x$. Walk in a straight line from $y$ towards $x$ and make a left turn when reaching $x$, following the boundary of $F$ counterclockwise until reaching a reflex vertex $r$ (if $x$ is reflex, then $r=x$). We say that $x$ is blue (\emph{resp.} red) when viewed from $y$ if $r$ is blue (\emph{resp.} red). If $F$ contains no reflex vertex, then the color of $x$ when viewed from $y$ can be arbitrarily chosen to be blue or red.
This coloring scheme can be used for segments as well. For $r\in R$ and $b\in B$, let $x$ be a point in the interior of the bichromatic segment $s = [r,b]$. Let $y$ be a point in the plane, not collinear with $r$ and $b$. We say that $x$ is \emph{blue} when viewed from $y$ if the triple $y,x,b$ makes a left turn. Otherwise, we say that $x$ is \emph{red} when viewed from $y$.
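The color of a segment point as seen from a viewpoint reduces to a standard orientation test. A minimal Python sketch (the function names are ours; `turn` is the usual signed-area predicate):

```python
def turn(p, q, r):
    """Sign of the orientation of the triple p, q, r:
    +1 for a left turn, -1 for a right turn, 0 if collinear."""
    cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (cross > 0) - (cross < 0)

def segment_color_seen_from(y, x, b):
    """Color of an interior point x of the bichromatic segment [r, b],
    with blue endpoint b, when viewed from y: blue iff the triple y, x, b
    makes a left turn, exactly as in the definition above."""
    t = turn(y, x, b)
    assert t != 0, "the viewpoint must not be collinear with the segment"
    return 'B' if t > 0 else 'R'
```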
Let $z$ and $z'$ be two points such that each one lies either on $\ensuremath{\partial G} $ or on a segment of $M$. We say that $z$ and $z'$ are \emph{color-visible\xspace} if they are visible and the color of $z$ when viewed from $z'$ is equal to the color of $z'$ when viewed from $z$.
\begin{figure}
\caption{The coloring of the boundary points of a PSLG, as well as of a bichromatic segment. The point $y$ is blue when viewed from $x$ but red when viewed from $z$. Moreover, $y$ and $z$ are color-visible\xspace.}
\label{fig:ColoringTheBoundary}
\end{figure}
\subsection{Basic operators for well-colored PSLGs}
Let $z$ be a point in the interior of a segment $s = [a,b]$ of $M$ and let $z'$ be a non-reflex point on $\ensuremath{\partial G} $ such that $z$ and $z'$ are color-visible\xspace. Operator $\textsc{Glue}$ produces a new PSLG by attaching $s$ to $\partial G$ using $z$ and $z'$ as points of attachment. Formally, if $z'$ is not a vertex of $G$, then insert it as a vertex by splitting the edge of $G$ that contains $z'$. Add the vertices $z, a $ and $b$ and the edges $[z,z']$, $[z,a]$ and $[z,b]$ to $G$. In the resulting PSLG, denoted by $\textsc{Glue}(G,z, z')$, $a$ and $b$ are both reflex vertices of degree one; see Fig.~\ref{fig:GlueCutOperator}.
Let $y$ and $y'$ be two color-visible\xspace points on $\ensuremath{\partial G} $ such that neither $y$ nor $y'$ is a reflex vertex. Operator $\textsc{Cut}$ joins $y$ with $y'$ in the following way. Let $F$ be the face of $G$ that contains the segment $[y,y']$. If either $y$ or $y'$ is not a vertex of $G$, insert it by splitting the edge on which it lies. Thus, $[y,y']$ is a chord of $F$, so adding the edge $[y,y']$ to $G$ forms two cycles and splits $F$ into two new faces. In this way, we obtain a new PSLG $\textsc{Cut}(G, y, y')$ with one face more than $G$; see Fig.~\ref{fig:GlueCutOperator} for an illustration of this operation.
Since both operators join two points by adding the edge between them, we can define an operator $\textsc{GlueCut}$ on $G$, $z$ and $z'$ that behaves like $\textsc{Glue}$ when $z$ belongs to a segment of $M$, and like $\textsc{Cut}$ when both $z$ and $z'$ belong to $\ensuremath{\partial G} $. The PSLG output by this operator is denoted by $\textsc{GlueCut}(G, z,z')$.
A \emph{Glue-Cut Graph (GCG)} is a well-colored PSLG where every reflex vertex has degree one. Although this definition is more general, we can think of a GCG as a PSLG obtained by repeatedly applying $\textsc{GlueCut}$ operations between a convex polygon and the segments of a $BR$-matching contained in its interior; see Fig.~\ref{fig:GlueCutOperator} for an example.
\begin{figure}
\caption{Two pairs of color-visible\xspace points $z,z'$ and $y,y'$, where $z$ and $z'$ can be joined by the $\textsc{Glue}$ operator and $y$ and $y'$ by the $\textsc{Cut}$ operator.}
\label{fig:GlueCutOperator}
\end{figure}
\begin{lemma}\label{lemma:Closed Under GlueCut} The family of Glue-Cut Graphs is closed under the $\textsc{GlueCut}$ operator. \end{lemma} \begin{proof} Let $G$ be a GCG and let $z$ be a point in a bichromatic segment $s$ contained in the interior of $G$. Let $z', y$ and $y'$ be points on $\partial G$ such that $z$ and $z'$ (\emph{resp.} $y$ and $y'$) are color-visible\xspace. When constructing $\textsc{Glue}(G,z,z')$, the endpoints of $s$ become reflex vertices of degree one. That is, we add one red and one blue reflex vertex to $G$. Therefore, to prove that $\textsc{Glue}(G,z,z')$ is well-colored, it suffices to show that the points are added in the correct order, which is guaranteed by the color-visibility\xspace of $z$ and $z'$; see Fig.~\ref{fig:GlueCutOperator}.
On the other hand, $\textsc{Cut}(G, y, y')$ neither adds nor removes reflex vertices of $G$. This operation divides a well-colored face of $G$ into two, by inserting a new edge. Consider either of the new faces. Let $a,b$ be the first reflex vertices found when following the boundary from this edge on each side. Since $y$ and $y'$ are color-visible\xspace when $\textsc{Cut}$ is invoked, we know that $a$ and $b$ are of different color. Thus each new face, and therefore $\textsc{Cut}(G, y, y')$, is well-colored; see Fig.~\ref{fig:GlueCutOperator}. \end{proof}
\subsection{Simplification of a GCG}
Let $F = (v_1,v_2,\ldots,v_k,v_1)$ be a face of a GCG given as the sequence of its vertices in clockwise order along its boundary. For each vertex $v_i$, if the triple $v_{i-1},v_i,v_{i+1}$ makes a right turn, let $x_i$ be the point at distance $\varepsilon>0$ from $v_i$ lying on the bisector of the convex angle formed by $[v_{i-1},v_i]$ and $[v_i,v_{i+1}]$. If $v_i$ is reflex in $F$, let $x_i = v_i$. Otherwise, if $v_{i-1},v_i,v_{i+1}$ are collinear, leave $x_i$ undefined. Let $\mathcal P_F=(x_1, \ldots, x_k, x_1)$ (considering only the indices where $x_i$ is defined). By choosing $\varepsilon$ sufficiently small, $\mathcal P_F$ is a simple polygon contained in $F$ such that every reflex vertex $v_j$ of $F$ remains reflex in $\mathcal P_F$ and no reflex vertex is created; see Fig.~\ref{fig:SimpleExtension}. We call $\mathcal P_F$ a \emph{simplification} of $F$. Though the simplification of a face $F$ is not unique, since it depends on the choice of $\varepsilon$, the results presented in this paper hold for any simplification. Therefore, when alluding to $\mathcal P_F$, we refer to any simplification of $F$.
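The replacement point $x_i$ for a convex vertex is elementary to compute. The following Python sketch (our own helper, with a hypothetical name) returns the point at distance $\varepsilon$ from $v_i$ on the bisector of the convex angle:

```python
import math

def offset_point(prev, v, nxt, eps):
    """Replacement point x_i for a convex vertex v (the triple prev, v,
    nxt makes a right turn): the point at distance eps from v on the
    bisector of the convex angle formed by [prev, v] and [v, nxt]."""
    def unit(a, b):
        dx, dy = b[0] - a[0], b[1] - a[1]
        norm = math.hypot(dx, dy)
        return (dx / norm, dy / norm)
    u1 = unit(v, prev)          # direction towards the previous vertex
    u2 = unit(v, nxt)           # direction towards the next vertex
    bx, by = u1[0] + u2[0], u1[1] + u2[1]
    norm = math.hypot(bx, by)   # nonzero since the triple is not collinear
    return (v[0] + eps * bx / norm, v[1] + eps * by / norm)
```

The direction $u_1+u_2$ points into the convex wedge at $v$, so for small $\varepsilon$ the point lies inside the face, as required by Observation~\ref{obs:SimpleExtension}.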
\begin{observation}\label{obs:SimpleExtension} For every bounded face $F$ of a GCG, $\mathcal P_F$ is a simple polygon contained in $F$ such that $F$ and $\mathcal P_F$ share the same set of reflex vertices. \end{observation}
\begin{figure}
\caption{A face $F$ of a GCG $G$ and its simplification $\mathcal P_F$, contained in $F$, with the same set of reflex vertices.}
\label{fig:SimpleExtension}
\end{figure}
Let $F_1, \ldots, F_k$ be the bounded faces of a GCG $G$. We call $\mathcal P_G = \bigcup \mathcal P_{F_i}$ the \emph{simplification of $G$}. Note that $\mathcal P_G$ is the union of a set of disjoint simple polygons.
\subsection{Merging a matching with a GCG}\label{section:GluingTheSegments}
\begin{lemma}\label{lemma:AbellanasAlgorithm} (Rephrasing of Lemma 5 of~\cite{Abellanas2008220}) Let $\mathcal P$ be a simple polygon with an even number of reflex vertices. There exists a perfect planar matching $M$ of the reflex vertices of $\mathcal P$, such that each segment of $M$ is contained in $\mathcal P$ (or on its boundary). \end{lemma}
Let $C = \{r_0, \ldots, r_k\}$ be the set of reflex vertices of a simple polygon $\mathcal P$ sorted along its boundary. Let $M$ be a perfect planar matching of $C$ that exists by Lemma~\ref{lemma:AbellanasAlgorithm}. Let $[r_i, r_j]$ be a segment of $M$, and note that this segment splits $\mathcal P$ into two sub-polygons. Notice that if $[r_i,r_j]$ is contained in the boundary of $\mathcal P$, then one sub-polygon is a segment and the other one is $\mathcal P$ itself. In order for $M$ to be perfect and planar, each sub-polygon must contain an even number of reflex vertices. Therefore, if a segment $[r_i, r_j]$ belongs to $M$, then $i \bmod 2 \neq j \bmod 2$. This implies that if $\mathcal P$ is well-colored, then $M$ is a $BR$-matching.
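The parity and color facts above can be illustrated combinatorially. The following Python sketch (our own illustration; it captures only the parenthesis-style structure of such matchings, not the geometric construction of Lemma~\ref{lemma:AbellanasAlgorithm}) matches positions in boundary order so that the resulting arcs are non-crossing and matched indices have opposite parity:

```python
def noncrossing_matching(colors):
    """Match the positions 0..len(colors)-1, given in boundary order, so
    that matched pairs have different colors and, drawn as arcs over the
    sequence, no two pairs cross.  A parenthesis-matching scan: matched
    indices always have opposite parity, mirroring the argument above."""
    stack, pairs = [], []
    for i, c in enumerate(colors):
        if stack and colors[stack[-1]] != c:
            pairs.append((stack.pop(), i))  # match with the nearest open vertex
        else:
            stack.append(i)
    assert not stack, "color sequence admits no bichromatic matching"
    return pairs
```

On an alternating (well-colored) sequence the scan always succeeds, and every produced pair is bichromatic with indices of opposite parity.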
The main tool to construct $BR$-matchings of the reflex vertices of a GCG comes from the following lemma; see Fig.~\ref{fig:ReflexBRMatching} for an illustration.
\begin{figure}
\caption{A GCG and a $BR$-matching of its reflex vertices.}
\label{fig:ReflexBRMatching}
\end{figure}
\begin{lemma}\label{lemma:ReflexMatching} If $G$ is a GCG, then there is a $BR$-matching $M$ of the reflex vertices of $G$, such that each segment of $M$ is contained in $\mathcal P_G$ (or on its boundary). \end{lemma} \begin{proof} Let $F_1, \ldots, F_k$ be the well-colored faces of $G$. By Observation~\ref{obs:SimpleExtension}, each $F_i$ and its simplification $\mathcal P_{F_i}$ share the same set of reflex vertices. By Lemma~\ref{lemma:AbellanasAlgorithm}, there is a matching $M_i$ of the reflex vertices of $\mathcal P_{F_i}$, such that each segment lies either in the interior or on the boundary of $\mathcal P_{F_i}$. Since $F_i$ is well-colored, $M_i$ is a $BR$-matching. Note that a vertex can be reflex in at most one face of $G$. Therefore, $M = \bigcup M_i$ is a $BR$-matching of the reflex vertices of $G$ and each segment of $M$ lies either in the interior or on the boundary of $\mathcal P_G$. \end{proof}
\subsection{Gluing BR-matchings}
Let $X$ be a GCG and let $M$ be a $BR$-matching contained in the interior of $X$. In this section, we show how to glue the segments of $M$ to the boundary of $X$. In this way, we obtain a GCG $G$ such that the endpoints of the segments of $M$ are all reflex vertices of $G$. Thus, by Lemma~\ref{lemma:ReflexMatching}, we can obtain a $BR$-matching $M'$ of the reflex vertices of $G$ where every segment is contained in $\mathcal P_G$, i.e., we can obtain a $BR$-matching $M'$ whose union with $M$ contains no crossings.
Assume that the vertices of $X$ and the endpoints of $M$ are in general position and that no two points have the same $x$-coordinate.
Let $s$ be the segment with the rightmost endpoint among all segments of $M$. We may assume that the left (\emph{resp.} right) endpoint of $s$ is blue (\emph{resp.} red) and hence that $s$ is blue (\emph{resp.} red) when viewed from below (\emph{resp.} above).
Extend $s$ to the right until it reaches the interior of a segment $s'$ on $\partial X$ at a point $y$. Depending on the color of $s'$ when viewed from $s$, choose a point $y'$ in the interior of $s'$ above (\emph{resp.} below) $y$ if $s'$ is red (\emph{resp.} blue). Choose $y'$ sufficiently close to $y$ so that the whole segment $s$ is visible from $y'$. This is always possible because $y$ is visible from the right endpoint of $s$. Let $m$ be the midpoint of $s$ and note that $m$ and $y'$ are color-visible\xspace by construction. Let $X' = \textsc{Glue}(X, m, y')$ and note that by Lemma~\ref{lemma:Closed Under GlueCut}, $X'$ is a GCG. Moreover, the endpoints of $s$ become reflex vertices of $X'$; see Fig.~\ref{fig:GluingTheSegments}. Remove $s$ from $M$, let $X = X'$ and repeat this construction recursively until $M$ is empty. We obtain the following result.
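The only geometric primitive needed in this step is extending $s$ as a ray and finding where it first meets the boundary. A Python sketch (our own helper, hypothetical name) of the ray-segment intersection:

```python
def ray_hits_segment(origin, direction, a, b):
    """Parameter t >= 0 with origin + t*direction on the segment [a, b],
    or None if the ray misses it.  Used to extend the rightmost segment
    of M until it reaches the boundary of X."""
    ox, oy = origin
    dx, dy = direction
    ex, ey = b[0] - a[0], b[1] - a[1]
    denom = dx * ey - dy * ex
    if denom == 0:
        return None  # ray parallel to the segment
    wx, wy = a[0] - ox, a[1] - oy
    t = (wx * ey - wy * ex) / denom   # position along the ray
    u = (wx * dy - wy * dx) / denom   # position along the segment
    return t if t >= 0 and 0 <= u <= 1 else None
```

Scanning all edges of $\partial X$ and all segments of $M$ and taking the smallest positive $t$ identifies the first segment $s'$ hit by the extension; the attachment point $y'$ is then chosen slightly above or below that hit, as described above.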
\begin{figure}
\caption{Gluing a bichromatic segment with the boundary of a GCG.}
\label{fig:GluingTheSegments}
\end{figure}
\begin{lemma}\label{lemma:Gluing the segments} Let $X$ be a GCG and $M$ be a $BR$-matching contained in the interior of $X$. There is a GCG $G$ augmenting $X$ such that all reflex vertices of $X$ and all endpoints in $M$ are reflex vertices in $G$. Moreover, every segment of $M$ is contained in $\partial G$. \end{lemma}
\section{Augmented matchings}\label{Section:Avoiding Chromatic Cuts} In this section, we provide the proof of Lemma~\ref{lemma:CompatibleBRMatching} presented in Section~\ref{Section:Ham-Sandwich}.
Let $M$ be a $BR$-matching of $P$ and let $\ell$ be a chromatic cut of $M$. Recall that $S_{M,\ell}$ denotes the set of segments of $M$ that cross $\ell$. We show that it is possible to obtain a new $BR$-matching $M'$ in which at least one segment $s$ of $S_{M,\ell}$ is absent. Furthermore, every segment of $S_{M,\ell}$ that crosses $\ell$ below $s$ is preserved in $M'$, and no new segment crossing $\ell$ below $s$ is introduced.
A vertex $v$ of a GCG $X$ is \emph{isolated} if no line through $v$, intersecting the interior of $X$, supports a closed halfplane containing all the neighbors of $v$. The following observation is depicted in Fig.~\ref{fig:IsolatedVertex}.
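Ignoring the extra requirement that the supporting line intersect the interior of $X$, isolation amounts to asking whether the neighbor directions of $v$ fit into a closed halfplane, which can be tested via the largest angular gap. A simplified Python sketch (our own, under that stated simplification):

```python
import math

def is_isolated(v, neighbors):
    """Simplified isolation test: v is isolated when no closed halfplane
    bounded by a line through v contains all its neighbors, i.e. when the
    largest angular gap between consecutive neighbor directions is
    strictly smaller than pi.  (The paper's definition additionally
    requires the supporting line to meet the interior of X.)"""
    angles = sorted(math.atan2(p[1] - v[1], p[0] - v[0]) for p in neighbors)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps) < math.pi
```

Since the halfplane is closed, a gap of exactly $\pi$ (two opposite neighbors) does not make $v$ isolated, hence the strict inequality.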
\begin{observation}\label{obs:ConvexVertex} If $v$ is an isolated vertex of a GCG $X$, then $v$ lies outside of $\mathcal P_X$. Moreover, if $v'$ is a vertex of $X$ lying outside $\mathcal P_X$, then the open segments joining $v'$ with its neighbors also lie outside of $\mathcal P_X$.
\end{observation}
\begin{figure}
\caption{An isolated vertex $v$ lying outside of the simplifications of each of its adjacent faces.}
\label{fig:IsolatedVertex}
\end{figure}
Let $\ell$ be a chromatic cut of $M$ and assume that $S_{M,\ell}= \{s_1, \ldots, s_k\}$ is sorted from bottom to top according to the intersection, $x_i$, of $s_i$ with $\ell$.
The idea of the proof is to construct a GCG $X$ augmenting $M$, using the \textsc{GlueCut} operation and then Lemma~\ref{lemma:Gluing the segments}, in such a way that $x_1, \ldots, x_j$ become isolated vertices of $X$ for some $1\leq j\leq k$. Furthermore, we require $X$ to contain the edge between $x_i$ and $x_{i+1}$ for every $1\leq i < j$. By Observation~\ref{obs:ConvexVertex}, these edges will lie outside of $\mathcal P_X$ and so will the portion of $\ell$ lying below $s_j$. Thus, this portion of $\ell$ will not be crossed by a $BR$-matching, compatible with $M$, obtained from Lemma~\ref{lemma:ReflexMatching} when applied on $X$.
Notice that we can only glue $x_i$ with $x_{i+1}$ if they are color-visible\xspace. Although the following lemma shows that there is at least one pair of consecutive color-visible\xspace points among $x_1, \ldots, x_k$, we may not be able to glue all of them. Thus, we will resort to a different strategy that allows us to alter the color of a segment.
\begin{lemma}\label{lemma:ChangeInColor} There exist two consecutive segments $s_i$ and $s_{i+1}$ in $S_{M,\ell}$ such that $x_i$ and $x_{i+1}$ are color-visible\xspace. \end{lemma}
\begin{proof} Because $\ell$ is a vertical chromatic cut, there exist two segments $s_j$ and $s_h$ in $S_{M,\ell}$ such that the left endpoint of $s_j$ has a different color than the left endpoint of $s_h$. Therefore, there must exist two consecutive segments $s_i$ and $s_{i+1}$ whose left endpoints have different colors. This implies that the color of $s_i$ when viewed from above is the same as the color of $s_{i+1}$ when viewed from below. Finally, since $s_i$ and $s_{i+1}$ are consecutive segments in $S_{M,\ell}$, $x_i$ and $x_{i+1}$ are visible. \end{proof}
\begin{figure}
\caption{Case where there is at least one point, lying above $s_1$, on segments $s_2,s_L$ or $s_R$ that is color-visible\xspace with $x_1$.}
\label{fig:InputPaste}
\end{figure}
We proceed to describe the construction of the GCG that augments $M$. Let $\mathcal R$ be a convex polygon strictly containing all segments of $M$. Assume without loss of generality that the left endpoint of $s_1$ is blue, implying that $s_1$ is red from above and blue from below. Let $x_0$ be the bottom intersection between $\ell$ and $\mathcal R$. Since the bounded face of $\mathcal R$ contains no reflex vertex, we can assume that $x_0$ is blue when viewed from $x_1$. That is, $x_0$ and $x_1$ are color-visible\xspace. Finally, let $X_1 = \textsc{Glue}(\mathcal R, x_1, x_0)$ be the GCG obtained by joining $x_0$ with $x_1$; see Fig.~\ref{fig:InputPaste}.
Consider the edge of $\mathcal R$ containing $x_0$ to be a segment $s_0$. The following invariants on the GCG $X_i$ hold initially for $i = 1$ and are maintained through a sequence of iterations.
\begin{itemize} \item The points $x_i$ and $x_{i+1}$ are visible, while $x_i$ and $x_{i-1}$ are neighbors. \item Besides $x_{i-1}$, the vertex $x_i$ neighbors two vertices on $s_i$, one to the left and one to the right of $\ell$. \item The endpoints of $s_i$ are reflex vertices of $X_i$. \item The endpoints of $s_{i-1}$ are not reflex in $X_i$ and $x_{i-1}$ is an isolated vertex. \item The color of $s_i$, when viewed from a point lying above $s_i$, is given by the color of the right endpoint of $s_i$. \end{itemize}
Our objective is to find a point, color-visible\xspace with $x_i$, that lies above the line extending $s_i$. If such a point exists, by gluing it to $x_i$ and then merging the remaining segments of $M$ using Lemma~\ref{lemma:Gluing the segments}, we obtain a GCG $X$ that augments $M$ with the desired properties (a full explanation is presented in Section~\ref{section:Processing after augment}; an example is depicted in Fig.~\ref{fig:InputPaste}).
As long as no such point exists, we iteratively augment $X_i$, maintaining the above properties as an invariant. This is done with procedure $\textsc{Augment}(i)$, which takes a GCG $X_i$ and adds edges (including the edge between $x_i$ and $x_{i+1}$) to produce a new GCG $X_{i+1}$ where the above properties hold. After several augmentations, we will produce a GCG where the desired color-visible\xspace point will be found.
\subsubsection*{Procedure \textsc{Augment}$(i)$} Refer to Fig.~\ref{fig:Augment} for an illustration of this procedure. Let $l_i$ and $r_i$ be the left and right endpoints of $s_i$, respectively. Assume without loss of generality that $l_i$ is colored blue (and $r_i$ is red). Thus, $s_i$ is red when viewed from above. Extend $s_i$ on both sides and let $s_L$ (\emph{resp.} $s_R$) be the first segment reached to the left (\emph{resp.} right). This procedure is only used when the points on $s_{i+1}$, $s_L$ and $s_R$ appear blue when viewed from $x_i$. Otherwise, \textsc{Augment} is not required, as there is a point on either $s_{i+1}$, $s_L$ or $s_R$, lying above $s_i$, that is color-visible\xspace with $x_i$.
Notice that $s_{i+1}$, $s_L$, and $s_R$ could belong either to $M$, or to $\partial X_i$. Let $y_L$ and $y_R$ be the points where the line extending $s_i$ intersects $s_L$ and $s_R$, respectively. Let $X_i'$ be the PSLG obtained by adding the edges $[l_i,y_L]$ and $[r_i,y_R]$ to $X_i$ ($y_L$ and $y_R$ are added as vertices).
\begin{figure}
\caption{$a)$ Example where procedure $\textsc{Augment}(1)$ is required. Points above $s_1$ that lie on segments $s_2$, $s_L$ and $s_R$ are not color-visible\xspace with $x_1$. $b)$ The construction obtained by extending $s_1$, where two reflex vertices $l_1,r_1$ disappear to let $x_1$ and $x_2$ become color-visible\xspace.}
\label{fig:Augment}
\end{figure}
This may create new faces depending on whether $s_L$ or $s_R$ belong to $M$. Vertices $y_L, l_i, x_i, r_i, y_R$ are collinear, meaning $l_i$ and $r_i$ are no longer reflex vertices in $X_i'$. Thus the color of $s_i$ will now be blue when viewed from above or from below. Furthermore, if $s_L$ or $s_R$ belong to $M$, then their endpoints are now reflex vertices of $X_i'$. One can verify that $X_i'$ is well-colored since $y_L$ and $y_R$ are both blue when viewed from $x_i$, hence $X_i'$ is a GCG. See Fig.~\ref{fig:Augment}$(b)$ for an illustration. Notice that, when viewed from above, the color of $x_i$ is now blue, in contrast with the red color that $x_i$ had on $X_i$. Therefore, since $x_{i+1}$ is blue when viewed from below, $x_{i+1}$ and $x_i$ are now color-visible\xspace in $X_i'$ and can be glued.
Let $X_{i+1} = \textsc{GlueCut}(X_i', x_{i+1}, x_i)$. This way, the endpoints of $s_{i+1}$ become (if they were not already) reflex vertices of the GCG $X_{i+1}$ and $x_i$ becomes an isolated vertex.
Notice that no vertex on $s_{i+1}$ neighbors a point lying above $s_{i+1}$. Therefore, the color of every point on segment $s_{i+1}$, when viewed from above, is given by the color of the right endpoint of $s_{i+1}$. In fact, the invariant properties are maintained, should there be a subsequent use of $\textsc{Augment}$.
\subsection{Analysis of AUGMENT}
\begin{observation}\label{obs:ReflexRemain} On each iteration of $\textsc{Augment}$, all reflex vertices of $X_i$ are preserved in $X_{i+1}$, except for the two endpoints of $s_i$ that become non-reflex. \end{observation}
\begin{lemma}\label{lemma:AugmentFinishes} The procedure $\textsc{Augment}$ will only be used $O(n)$ times before producing a GCG $X_j$, where there exists a point, lying above the segment $s_j$, that is color-visible\xspace with $x_j$ (for some $1\leq j \leq k-1$). \end{lemma} \begin{proof} By Lemma~\ref{lemma:ChangeInColor}, there exist segments $s_h, s_{h+1}\in S_{M,\ell}$ such that $x_h$ and $x_{h+1}$ are color-visible\xspace before executing $\textsc{Augment}$ on $X_1$. We claim that $\textsc{Augment}$ can only go as far as to construct $X_h$. If $X_h$ is not constructed, it is because a GCG $X_j$ was constructed (for some $0\leq j<h$), where there exists a point, lying above the segment $s_j$, that is color-visible\xspace with $x_j$. Otherwise, if $X_h$ is constructed, then, by the preserved invariants, the color of $x_h$, when viewed from above, remains unchanged and hence $x_h$ and $x_{h+1}$ are color-visible\xspace. \end{proof}
\begin{figure}
\caption{Case where $x_1$ and $x_2$ are not color-visible\xspace, but a point $y$ can be found in $s_L$ so that $x_1$ and $y$ are color-visible\xspace.}
\label{fig:AfterAugment}
\end{figure}
\subsection{Processing after AUGMENT}\label{section:Processing after augment}
By Lemma~\ref{lemma:AugmentFinishes}, we know that after the last call to $\textsc{Augment}$ we obtain a GCG $X_j$ such that there is a point in either $s_{j+1}$, $s_L$ or $s_R$, lying above $s_j$, that is color-visible\xspace with $x_j$. Assume without loss of generality that $x_j$ is red when viewed from above. If $s_{j+1}$ is red when viewed from below, then $x_j$ and $x_{j+1}$ are color-visible\xspace. In this case, we define $G_{M,\ell}=\textsc{GlueCut}(X_j, x_{j+1}, x_j)$.
Instead, if $x_{j+1}$ is blue when viewed from $x_j$, we follow a different approach. Recall that the endpoints of $s_j$ are reflex vertices. If $s_L$ is red when viewed from the left endpoint of $s_j$, choose a point $y$, slightly above $y_L$ on $s_L$, such that the whole segment $s_j$ is visible from $y$. Since $x_j$ is red when viewed from above, $x_j$ and $y$ are color-visible\xspace. Let $G_{M,\ell}= \textsc{GlueCut}(X_j, y, x_j)$; see Fig.~\ref{fig:AfterAugment}. An analogous construction of $G_{M,\ell}$ follows if $s_R$ is red when viewed from the right endpoint of $s_j$. We call $G_{M,\ell}$ the \emph{extension of $X_j$}.
\begin{lemma}\label{lemma:ConstructionProperties} If $G_{M,\ell}$ is an extension of $X_j$, then the following properties hold:
\begin{itemize} \item The endpoints of $s_j$ are reflex vertices of $G_{M,\ell}$, but $s_j$ is not contained in $\mathcal P_{G_{M,\ell}}$. \item The downwards ray with apex at $x_j$ does not intersect $\mathcal P_{G_{M,\ell}}$. \item For every $1\leq i< j$, the endpoints of $s_i$ are not reflex vertices of $G_{M,\ell}$. Moreover, $s_i$ is not contained in $\mathcal P_{G_{M,\ell}}$. \end{itemize} \end{lemma} \begin{proof} By the invariants of $\textsc{Augment}$, $x_j$ neighbors $x_{j-1}$ as well as two vertices on $s_j$, one to the left and one to the right of $\ell$. Since $x_j$ also neighbors a vertex in $G_{M,\ell}$ lying above the segment $s_j$, $x_j$ is an isolated vertex in $G_{M,\ell}$. Thus, by the preserved invariants and by Observation~\ref{obs:ConvexVertex}, for every $1\leq i\leq j$, $x_i$ lies outside of $\mathcal P_{G_{M,\ell}}$ and hence the segment $s_i$ is not contained in $\mathcal P_{G_{M,\ell}}$. Furthermore, the segment joining $x_i$ with $x_{i-1}$ also lies outside of $\mathcal P_{G_{M,\ell}}$ and so does the downwards ray with apex at $x_j$. Finally, Observation~\ref{obs:ReflexRemain} tells us that, for every $1\leq i <j$, no endpoint of $s_i$ is a reflex vertex of $X_j$ (nor of $G_{M,\ell}$). \end{proof}
We are now ready to provide the proof of Lemma~\ref{lemma:CompatibleBRMatching} which is restated for ease of readability.\\
\textsc{Lemma}~\ref{lemma:CompatibleBRMatching}.\emph{ Let $M$ be a $BR$-matching of $P$ and let $\ell$ be a chromatic cut of $M$. There exists a $BR$-matching $M'$ of $P$, compatible with $M$, with the following properties. There is a segment $s$ of $M\setminus M'$ that crosses $\ell$ such that all segments of $M$ that cross $\ell$ below $s$ also belong to $M'$. Moreover, these are the only segments of $M'$ crossing $\ell$ below $s$. }
\begin{proof} Let $G_{M,\ell}$ be the GCG obtained using the construction presented in this section on $M$ and $\ell$. Recall that $S_{M,\ell}$ is the set of segments of $M$ that cross $\ell$. Lemma~\ref{lemma:ConstructionProperties} states that there is a segment $s_j\in S_{M,\ell}$, such that its endpoints are reflex vertices of $G_{M,\ell}$ but $s_j$ is not contained in $\mathcal P_{G_{M,\ell}}$. Let $W$ be the set of segments in $M$ that are contained in the interior of $G_{M,\ell}$ and let $Z_\ell = \{s_1, \ldots, s_{j-1}\}$ be the set of segments of $S_{M,\ell}$ that cross $\ell$ below $x_j$. By Lemma~\ref{lemma:ConstructionProperties} we know that $Z_\ell\cap W = \emptyset$.
By Lemma~\ref{lemma:Gluing the segments}, since $W$ is contained in the interior of $G_{M,\ell}$, we can augment $G_{M,\ell}$ by gluing the segments of $W$ to its boundary such that the endpoints of every segment in $W$ become reflex vertices in $G_{M,\ell}$. Moreover, the reflex vertices of $G_{M,\ell}$ are preserved.
By Lemma~\ref{lemma:ReflexMatching}, there exists a $BR$-matching $W'$ of the reflex vertices of $G_{M,\ell}$ such that each segment in $W'$ is contained in $\mathcal P_{G_{M,\ell}}$. Notice that the endpoints of $s_j$ are re-matched in $W'$. However, since $s_j$ is not contained in $\mathcal P_{G_{M,\ell}}$, $s_j$ does not belong to $W'$. Moreover, Lemma~\ref{lemma:ConstructionProperties} implies that the ray, shooting downwards from $x_j$, lies outside $\mathcal P_{G_{M,\ell}}$. Thus, no segment in $W'$ crosses $\ell$ below $x_j$.
Let $M' = W'\cup Z_\ell$ be a set of bichromatic segments. Every point in $P$ is matched in $M'$ since every point in $P$ is either a reflex vertex of $G_{M,\ell}$, or an endpoint of a segment in $Z_\ell$. Lemma~\ref{lemma:ConstructionProperties} implies that the endpoints of the segments in $Z_\ell$ are not reflex vertices in $G_{M,\ell}$. Therefore, $M'$ is a $BR$-matching of $P$. Since $W$ and $W'$ are compatible, $M$ and $M'$ are compatible $BR$-matchings. \end{proof}
\section{Remarks}
Although the techniques developed in this paper appear tailored for this specific problem, they have a more general underlying scope. At a deeper level, our tools generate a \emph{balanced} convex partition of the plane. Roughly speaking, in Lemma~\ref{lemma:AbellanasAlgorithm} a simple polygon is partitioned into a set of convex polygons obtained by shooting rays from the reflex vertices towards the interior of the polygon until hitting its boundary. Once the polygon is partitioned, a matching can be found on each convex piece. By using Lemma~\ref{lemma:AbellanasAlgorithm} in the bichromatic setting, we generate a convex partition of the GCGs, where each convex face is in charge of matching a balanced number of red and blue points. Convex partitions, usually constructed by extending segments until they reach another segment or a previously extended section, have been extensively used to solve several augmentation and reconfiguration problems~\cite{Aichholzer2009617, CompatibleMatchingsForPSLG, Ishaque2011, Hurtado200814, Hoffmann201035}. Therefore, the techniques provided in this paper are of independent interest. In conjunction with Lemma~\ref{lemma:ReflexMatching}, operators like \textsc{Glue} and \textsc{Cut} can be used to find special convex partitions that provide new ways to construct compatible PSLGs.\\
\textbf{Acknowledgements. } We thank the authors of~\cite{CompatibleMatchingsForPSLG} for highlighting the open problem from their paper to the audience of the EuroGIGA meeting that took place after EuroCG 2012. We would also like to thank Tillmann Miltzow for useful comments.
This research was done during a visit of Diane L. Souvaine to ULB supported by FNRS.
\end{document}
\begin{document}
\title{Non-topological condensates for the self-dual Chern-Simons-Higgs model}
\author{Manuel del Pino\footnote{Departamento de Ingenier\'ia Matem\'atica and CMM, Universidad de Chile, Casilla 170, Correo 3, Santiago, Chile. E-mail: delpino@dim.uchile.cl. Author supported by grants Fondecyt 1070389 and FONDAP (Chile).}\quad Pierpaolo Esposito \footnote{Dipartimento di Matematica e Fisica, Universit\`a degli Studi ``Roma Tre", Largo S. Leonardo Murialdo, 1 -- 00146 Roma, Italy. E-mail: esposito@mat.uniroma3.it. Author supported by the PRIN project ``Critical Point Theory and Perturbative Methods for Nonlinear Differential Equations" and the Firb-Ideas project ``Analysis and Beyond".} \quad Pablo Figueroa\footnote{Departamento de Matem\'atica, Pontificia Universidad Cat\'olica de Chile, Avenida Vicuna Mackenna 4860, Macul, Santiago, Chile. E-mail: pfigueros@mat.puc.cl. Author supported by grants Proyecto Anillo ACT-125 and Fondecyt Postdoctorado 3120039 (Chile).} \quad Monica Musso \footnote{Departamento de Matem\'atica, Pontificia Universidad Cat\'olica de Chile, Avenida Vicuna Mackenna 4860, Macul, Santiago, Chile. E-mail: mmusso@mat.puc.cl. Author supported by Fondecyt grant 1040936 (Chile), and by PRIN project ``Metodi variazionali e topologici nello studio di fenomeni non lineari''.}}
\maketitle
\begin{abstract} \noindent For the abelian self-dual Chern-Simons-Higgs model we address existence issues of periodic vortex configurations -- the so-called condensates -- of non-topological type as $k \to 0$, where $k>0$ is the Chern-Simons parameter. We provide a positive answer to the long-standing problem on the existence of non-topological condensates with magnetic field concentrated at some of the vortex points (as a sum of Dirac measures) as $k \to 0$, a question which is of definite physical interest. \end{abstract}
\vskip 0.2truein
\noindent {\bf Keywords}:
\noindent {\bf AMS subject classification}:
\vskip 0.2truein
\section{Introduction and statement of main results} The Chern-Simons vortex theory is a planar theory which is physically relevant in connection with high critical temperature superconductivity, the quantum Hall effect and anyonic particle physics, as widely discussed by Dunne \cite{D}. Hong-Kim-Pac \cite{HKP} and Jackiw-Weinberg \cite{JW} have proposed an abelian self-dual model where the electrodynamics is governed only by the Chern-Simons term. Over the Minkowski space $(\mathbb{R}^{1+2},g)$, with metric tensor $g=\hbox{diag }(1,-1,-1)$, the model is described by the following Lagrangian density: $${\cal L}({\cal A},\phi)=\frac{k}{4}\epsilon^{\alpha \beta \gamma}A_\alpha F_{\beta \gamma}+D_\alpha \phi \overline{D^\alpha
\phi}-\frac{1}{k^2}|\phi|^2\left(|\phi|^2-1 \right)^2,$$ where the Chern-Simons coupling parameter $k>0$ measures the strength of the Chern-Simons term and the antisymmetric Levi-Civita tensor $\epsilon^{\alpha \beta \gamma}$ is fixed with $\epsilon^{0 1 2}=1$. The metric tensor $g$ is used to lower and raise indices in the usual way, and the standard summation convention over repeated indices is adopted. The gauge potential ${\cal A}=-i A_\alpha dx^{\alpha}$ is a $1$-form (a connection over the principal bundle $\mathbb{R}^{1+2}\times U(1)$), $ A_\alpha:\mathbb{R}^{1+2}\to \mathbb{R}$ for $\alpha=0,1,2$, and the Higgs field $\phi:\mathbb{R}^{1+2} \to \mathbb{C}$ is the matter field. The gauge field $F_{\cal A}=-\frac{i}{2}F_{\alpha \beta}dx^\alpha \wedge dx^\beta$ is a $2-$form (the curvature of ${\cal A}$), where $F_{\alpha \beta}=\partial_\alpha A_\beta-\partial_\beta A_\alpha$, and the Higgs field $\phi$ is weakly coupled with the gauge potential ${\cal A}$ through the covariant derivative $D_A$ as follows: $D_A \phi=D_\alpha \phi\, dx^\alpha$, $D_\alpha \phi=\partial_\alpha \phi-i A_\alpha \phi$ for $\alpha=0,1,2$.
\noindent The self-dual regime has been identified by Hong-Kim-Pac \cite{HKP} and Jackiw-Weinberg \cite{JW} through the choice of the
``triple-well" potential $\frac{1}{k^2}|\phi|^2 (|\phi|^2-1)^2$ which leads to a Bogomol'nyi reduction \cite{Bog} for the Chern-Simons-Higgs model, as we discuss below. Vortices are time-independent ($x^0$ is the time-variable) configurations $(\mathcal{A},\phi)$ which solve the Euler-Lagrange equations \begin{equation} \label{ELequations}
\left\{ \begin{array}{l} \displaystyle D_\mu D^\mu \phi=-\frac{1}{k^2}(|\phi|^2-1) (3|\phi|^2-1) \phi \\ \displaystyle \frac{k}{2}\epsilon^{\mu \alpha \beta} F_{\alpha \beta}=J^\mu:=i \left(\phi \overline{D^\mu \phi}-\overline{\phi}D^\mu \phi\right) \end{array} \right. \end{equation} and have finite energy. In the self-dual regime, for energy-minimizing vortices (at given magnetic flux) the second-order Euler-Lagrange equations are equivalent to the first-order self-dual equations \begin{equation}\label{CSeqs} \left\{ \begin{array}{l} D_\pm\phi=0 \\
F_{12} \pm \frac{2}{k^2}|\phi|^2(|\phi|^2-1)=0 \\
kF_{12}+2A_0|\phi|^2=0, \end{array}\right. \end{equation} where $D_{\pm}=D_1 \pm i D_2 $ and the last equation is usually referred to as the Gauss law. In the sequel, we restrict our attention to energy-minimizing vortices (at given magnetic flux), and we will simply refer to them as vortices.
\noindent In the physical interpretation, the electric field $\vec{E}=(\partial_1 A_0,\partial_2 A_0,0)$ is planar, the magnetic field $\vec{B}=(0,0,F_{12})$ is in the orthogonal direction, and $J^0$, $\vec{J}=(J^1,J^2)$ can be identified with the charge density and the current density, respectively, as in the classical Maxwell theory. Thanks to the Gauss law, vortices are both electrically and magnetically charged, a physically relevant property which was absent in the abelian Maxwell-Higgs model \cite{JaTa,Taubes}. Notice that ${\cal A}$ and $\phi$ are not observable quantities, as they are defined only up to a gauge transformation, whereas the electric and magnetic fields as well as the magnitude $|\phi|$ of the Higgs field define gauge-independent quantities. The second and third equations in (\ref{CSeqs}) only involve observable quantities, whereas the first one $D_+ \phi=0$ (or $D_-\phi=0$) -- a gauge invariant version of the Cauchy-Riemann equations -- implies holomorphic-type properties for the Higgs field $\phi$ (or $\bar{\phi}$) in a suitable gauge. Following an approach first developed by Taubes \cite{Taubes} for the abelian Maxwell-Higgs model, vortices $(\phi,\mathcal{A})$ can be found in the form: \begin{equation} \label{1917} \phi=e^{\frac{u}{2} \pm i\sum_{j=1}^N Arg(z-p_j)},\quad A_0=\pm
\frac{1}{k}(|\phi|^2-1), \quad A_1\pm iA_2=-i (\partial_1\pm i\partial_2) \log \phi \end{equation}
as soon as $u=\log |\phi|^2$ does solve the elliptic problem \begin{equation}\label{1} -\Delta u= \frac{1}{\epsilon^2}e^u(1-e^u)-4\pi \sum_{j=1}^N \delta_{p_j}, \end{equation} where $\epsilon=\frac{k}{2}$ and $p_1,\dots,p_N$ are the zeroes of $\phi$ (repeated according to their multiplicities)-- usually referred to as the vortex points (with the convention $N=0$ if $\phi \not= 0$). We refer the interested reader to \cite{Tbook,Ybook} and the references therein for more details and for an extensive discussion of several gauge field theories.
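The relation between electric charge and magnetic flux invoked above can be read off directly from the field equations; the following short computation (a routine check, not spelled out in the text) combines the $\mu=0$ component of the second equation in \eqref{ELequations} with the identification of $J^0$ as the charge density:

```latex
% mu = 0 component of the second Euler-Lagrange equation in (\ref{ELequations}):
\frac{k}{2}\,\epsilon^{0\alpha\beta}F_{\alpha\beta}=k\,F_{12}=J^0
\quad\Longrightarrow\quad
Q=\int J^0 = k\int F_{12}=k\,\Phi .
% In particular, a quantized flux Phi = \pm 2\pi N forces Q = \pm 2\pi k N.
```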
\noindent For planar vortices, the finite energy condition $\int_{\mathbb{R}^2} e^u(1-e^u)<+\infty$ imposes two possible asymptotic behaviors at infinity. The topological behavior $|\phi|^2=e^u \to 1$ as $|z|\to \infty$ gives the vortex number $N$ the topological meaning of winding number for $\phi$ at infinity (up to a $\pm$ sign, depending on whether $D_+ \phi=0$ or $D_-\phi=0$), yielding quantization effects for the energy $E$, the magnetic flux $\Phi$ and the electric charge $Q$ in the class of topological $N-$vortices: $E=2\pi N$, $\Phi=\pm 2\pi N$ and $Q=\pm 2\pi kN$. The existence of planar topological vortices has been addressed in
\cite{H,SY2,RWa}. The non-topological behavior $|\phi|^2=e^u \to 0$ as $|z|\to \infty$ has no counterpart in the abelian Maxwell-Higgs model, and the possible coexistence of topological and non-topological $N-$vortices is the main new feature in Chern-Simons theories. After the seminal work \cite{SY1} in a radial setting with a single vortex point (see also \cite{CHMY} for related results), it has been a challenging problem to find planar non-topological $N-$vortices \cite{ChI,CFL} for an arbitrary configuration of $p_1,\dots,p_N$. Surprisingly, two different classes have been found by using different limiting problems: the singular Liouville equation in \cite{ChI} or the Chern-Simons equation $-\Delta U=e^U(1-e^U)-4\pi \delta_0$ in \cite{CFL}. Since the latter problem has no scale-invariance, in \cite{CFL} the points $p_1,\dots,p_N$ are taken along the vertices of a regular $N-$polygon in order to glue together $U(\frac{x-p_j}{\epsilon})$, $j=1,\dots,N$; there is no freedom to adjust the height at each $p_j$ to account for the interaction, but the approximating function has an invertible linearized operator.
\noindent Since the theoretical prediction by Abrikosov \cite{Abr}, the appearance of lattice structure, in the form of spatially periodic vortices, has been experimentally observed. To account for it, the model is formulated on $$\Omega=\{z=t\omega_1+s \omega_2: \:(t,s) \in (-\frac{1}{2},\frac{1}{2}) \times (-\frac{1}{2},\frac{1}{2})\},$$ where $\omega_1,\: \omega_2 \in \mathbb{C} \setminus \{0\}$ satisfy $\hbox{Im }(\frac{\omega_2}{\omega_1})>0$. Condensates are time-independent configurations $(\mathcal{A},\phi)$ which solve the Euler-Lagrange equations \eqref{ELequations}, have finite energy and satisfy the 't Hooft boundary conditions \cite{tHo}: \begin{equation}\label{tH} e^{i\xi_k(z+\omega_k)}\phi(z+\omega_k)=e^{i\xi_k(z)}\phi(z),\quad A_0(z+\omega_k)=A_0(z), \quad \left(A_j +\partial_j \xi_k\right)(z+\omega_k)=\left(A_j +\partial_j \xi_k\right)(z) \end{equation} for all $z\in \Gamma^1\cup \Gamma^2 \setminus \Gamma^k$ and $k=1,2$, where $\Gamma^1=\{z=t \omega_1
-\frac{1}{2}\omega_2:\:|t|<\frac{1}{2} \}$,
$\Gamma^2=\{z=-\frac{1}{2}\omega_1+t \omega_2:\:|t|<\frac{1}{2} \}$ and $\xi_1$, $\xi_2$ are real-valued smooth functions defined in a neighborhood of $\Gamma^2 \cup\{\omega_1+\Gamma^2\}$, $\Gamma^1 \cup\{\omega_2+\Gamma^1\}$, respectively. For energy-minimizing vortices (at given magnetic flux) the Euler-Lagrange equations \eqref{ELequations} are still equivalent to the self-dual ones \eqref{CSeqs}. Since \eqref{tH} just reduces to a double periodicity for the observable quantities $F_{12}$ and
$|\phi|$ in $\Omega$, a configuration $(\mathcal{A},\phi)$ in the form \eqref{1917} does solve \eqref{CSeqs} as soon as $u=\log
|\phi|^2$ is a doubly-periodic solution of \eqref{1} in $\Omega$, see \cite{CY,T} for an exact derivation.
\noindent Hereafter, up to a translation, let us assume that $\phi\not=0$ on $\partial \Omega$ (i.e. $p_1,\dots,p_N \in \Omega$) in such a way that the winding number $\hbox{deg
}(\phi,\partial\Omega,0)$ is well-defined, and the vortex number $N$ is simply given by $|\hbox{deg }(\phi,\partial\Omega,0)|$. By \eqref{tH} we still have quantization effects as in the case of planar topological vortices: $E=2\pi N$, $\Phi=\pm 2\pi N$ and $Q=\pm 2\pi kN$, where the $\pm$ sign depends on whether $D_+\phi=0$ or $D_-\phi=0$. Hereafter, up to replacing $\phi$ with $\bar \phi$, let us assume that $D_+\phi=0$ and restrict our attention to energy-minimizing condensates (at given magnetic flux), simply referred to as condensates.
\noindent Letting $G(z,p)$ be the Green function of $-\Delta$ in $\Omega$ with pole at $p$:
$$\left\{ \begin{array}{ll} -\Delta G(z,p)=\delta_p-\frac{1}{|\Omega|}&\hbox{in }\Omega\\ \int_\Omega G(z,p)dz=0, & \end{array}\right.$$
one is led to consider the following equivalent regular version of (\ref{1}): \begin{equation} \label{2}-\Delta v=\frac{1}{\epsilon^2} e^{u_0+v}
(1-e^{u_0+v})-\frac{4\pi N}{|\Omega|}\qquad \hbox{in } \Omega\end{equation} in terms of $v=u-u_0$, where $u_0=-4\pi \displaystyle \sum_{j=1}^N G(z,p_j)$ and the potential $e^{u_0}$ is a smooth non-negative function which vanishes exactly at $p_1,\dots,p_N$. By translation invariance, notice that $G(z,p)=G(z-p,0)$, and $G(z,0)$ can be decomposed as
$G(z,0)=-{1\over 2\pi}\log|z|+H(z)$, where $H$ is a (not doubly-periodic) function with $\Delta H= \frac{1}{|\Omega|}$ in $\Omega$. If $v$ is a solution of \eqref{2}, by integration over ${\Omega}$ notice that \begin{equation}\label{ci0} \int_{\Omega} e^{u_0+v}(1-e^{u_0+v})=\int_{\Omega}
|\phi|^2(1-|\phi|^2)=2\epsilon^2 \int_\Omega F_{12} =4\pi N\epsilon^2 \end{equation} in view of \eqref{CSeqs}, yielding the necessary condition
$$16\pi N\epsilon^2=|\Omega| -4 \int_{\Omega}\left(e^{u_0+v}-{1\over2}\right)^2<|\Omega|$$
for the solvability. In \cite{CY}, Caffarelli and Yang show the existence of $0<\epsilon_c< \sqrt{\frac{|\Omega|}{16\pi N}}$ so that \eqref{1} has a maximal doubly-periodic solution $u_\epsilon$ for $0<\epsilon<\epsilon_c$, while no solution exists for $\epsilon >\epsilon_c$. Notice that \eqref{2} admits a variational structure with energy functional
$$J_\epsilon(v)={1\over2}\int_{\Omega}|\nabla v|^2+{1\over2\epsilon^2}\int_{\Omega}\left(e^{u_0+v}-1\right)^2+{4\pi N\over|{\Omega}|}\int_{\Omega} v$$ where $v \in H^1({\Omega})=\{v \in H_{\text{loc}}^1({\mathbb{R}}^2):\, v\text{ doubly periodic in }{\Omega}\}$. Later, Tarantello \cite{T} shows that the maximal solution $u_\epsilon$ is a local minimum for $J_\epsilon$ in $H^1({\Omega})$, and a second solution $u^\epsilon$ is found as a mountain-pass critical point for $J_\epsilon$.
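The necessary condition $16\pi N\epsilon^2<|\Omega|$ displayed above is \eqref{ci0} combined with completing the square; explicitly (a routine verification, recorded here for convenience):

```latex
e^{u_0+v}\left(1-e^{u_0+v}\right)=\frac14-\left(e^{u_0+v}-\frac12\right)^2
\quad\Longrightarrow\quad
4\pi N\epsilon^2=\frac{|\Omega|}{4}-\int_\Omega\left(e^{u_0+v}-\frac12\right)^2,
```

and multiplying by $4$ gives the displayed identity; the inequality is strict since $e^{u_0+v}$ vanishes at the vortex points and hence cannot coincide with $\frac12$ on all of $\Omega$.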
\noindent To each solution $u$ of \eqref{1} we can associate the $N-$condensate $(\mathcal{A},\phi)$ in the form \eqref{1917} (with the $+$ sign as we agreed), and let $(\mathcal{A}_\epsilon,\phi_\epsilon)$, $(\mathcal{A}^\epsilon,\phi^\epsilon)$ be the ones corresponding to $u_\epsilon$, $u^\epsilon$. Concerning the asymptotic behavior as $\epsilon \to 0$, by \eqref{ci0} we can expect two classes of $N-$condensates: \begin{itemize}
\item $|\phi| \to 1$ as $\epsilon \to 0$ (``topological" behavior),
\item $|\phi| \to 0$ as $\epsilon \to 0$ (``non-topological" behavior), \end{itemize} to be understood in suitable norms. For example, $(\mathcal{A}_\epsilon,\phi_\epsilon)$ exhibits ``topological" behavior:
$$|\phi_\epsilon| \to 1 \hbox{ in }C_{\hbox{loc}}(\bar{\Omega}\setminus\{p_1,\dots, p_N\}),$$ with \begin{equation} \label{1820} (F_{12})_\epsilon \rightharpoonup 2\pi \sum_{j=1}^N \delta_{p_j} \quad \hbox{in the sense of measures} \end{equation} as $\epsilon \to 0$ according to \eqref{ci0}, see \cite{T}. The concentration property \eqref{1820} for the magnetic field has a definite physical interest, and supports the use of the terminology ``vortex points" for the zeroes $p_1,\dots,p_N$ of the Higgs field $\phi$. The $N-$condensate $(\mathcal{A}^\epsilon,\phi^\epsilon)$ has in general a different asymptotic behavior as $\epsilon \to 0$: \begin{itemize}
\item[(i)] when $N=1$, $|\phi^\epsilon|\to 0$ in $C^m(\bar \Omega)$, for all $m \geq 0$, and $(F_{12})^\epsilon$ is a compact sequence in $L^1(\Omega)$ (see \cite{T}); \item[(ii)] when $N=2$,
$|\phi^\epsilon|\to 0$ in $C(\bar \Omega)$ and either $(F_{12})^\epsilon$ is a compact sequence in $L^1(\Omega)$ or $(F_{12})^\epsilon \rightharpoonup 4\pi \delta_q$ in the sense of measures, for some $q \not=p_1,\:p_2$ with $u_0(q)=\max_\Omega u_0$, depending on whether
$$I(v)={1\over2}\int_{\Omega}|\nabla v|^2-8\pi \log \left(\int_\Omega e^{u_0+v} \right) +{8\pi
\over|{\Omega}|}\int_{\Omega} v $$ attains its infimum or not in $H^1({\Omega})$
(see \cite{NoTa3}, and also \cite{DJLW2}); \item[(iii)] when $N\geq 3$, $|\phi^\epsilon|\to 0$ in $C(\bar \Omega)$ and $ (F_{12})^\epsilon \rightharpoonup 2\pi N \delta_q $ in the sense of measures, for some $q \not=p_1,\dots,p_N$ with $u_0(q)=\max_\Omega u_0$ (see \cite{Ch}). \end{itemize} In \cite{DJLPW} the existence of $N-$condensates $(\mathcal{A},\phi)$ such that $|\phi|\to 0$ a.e. in $\Omega$ as $\epsilon \to 0$ is shown.
Concerning the case $N=2$, it is a very difficult question, which has been discussed in \cite{CLW,LiWa} for $p_1=p_2$, to know whether or not $I$ attains its infimum in $H^1(\Omega)$. An alternative approach of perturbative type has proved to be successful for $N=2$ \cite{LinYan1} (see also \cite{EsFi}, among others) by constructing a sequence of $2-$condensates for which the second alternative in (ii) does hold, for a critical point $q$ of $u_0$. The same approach works as well for $N\geq 3$, provided the concentration points of the magnetic field are not vortex points.
\noindent The existence of non-topological $N-$condensates with magnetic field concentrated at vortex points as $\epsilon \to 0$ (like in \eqref{1820}) is the main issue from a physical viewpoint and has not received an answer so far. A first partial answer has been provided by Lin and Yan \cite{LinYan}, who construct $N-$condensates $(\mathcal{A}_\epsilon,\phi_\epsilon)$ so that $(F_{12})_\epsilon \rightharpoonup 2 \pi N \delta_{p_j}$ in the sense of measures as $\epsilon \to 0$, as soon as $N>4$ and $p_j$ is a simple vortex point in $\{p_1,\dots,p_N\}$. As in \cite{CFL}, they make use of the Chern-Simons equation $-\Delta U=e^U(1-e^U)-4\pi \delta_0$ as limiting problem, which is not suited to handle multiple concentration points. Moreover, such a condensate does satisfy
$\max_\Omega |\phi_\epsilon|\geq c>0$ for $\epsilon$ small and $|\phi_\epsilon|\to 0$ in $C_{\hbox{loc}} (\bar \Omega \setminus \{p_j\})$, which fits the notion of ``non-topological" behavior in a weak sense. Our aim is to extend to $N-$condensates the perturbative approach developed by Chae and Imanuvilov \cite{ChI} for planar $N-$vortices, based on the use of the singular Liouville equation as limiting problem. As far as non-topological behavior is concerned, let us stress that the problem on the torus is much more rigid than the planar case, as well illustrated by the quantization property $\Phi=2\pi N$ (valid just in the doubly-periodic situation). For example, when $F_{12}$ is concentrated like a Dirac measure at a vortex point $p_l$, by the use of Liouville profiles it is natural, as we will see, to have $4\pi(n_l+1)$ as concentration mass of $F_{12}$ at $p_l$, where $n_l$ is the multiplicity of $p_l$ in the set $\{p_1,\dots,p_N\}$, and then the relation $2\pi N=4\pi \displaystyle \sum_{l=1}^m (n_l+1)$ does hold as soon as $F_{12} \rightharpoonup 4\pi \displaystyle \sum_{l=1}^m (n_l+1) \delta_{p_l}$ in the sense of measures. In particular, the concentration of the magnetic field cannot take place at all the vortex points $p_1,\dots,p_N$ as in the planar case \cite{ChI}. Let us stress that the $N-$condensates constructed in \cite{Nol} have exactly such a concentration property and thus violate the balancing condition \eqref{hhh}.
\noindent Our aim is to provide a general answer to the long-standing question on the existence of non-topological $N-$condensates with magnetic field concentrated at some vortex points. Compared with \cite{ChI}, our main result is rather surprising and reads as follows. \begin{thm} \label{mainbb} Let $\{p_1,\dots,p_m\}$ be a subset of the vortex set $\{p_1,\dots,p_N\} \subset \Omega$, $\{p_j\}_j$ be the remaining points and $n_l$, $n_j$ be the corresponding multiplicities so that \begin{equation} \label{hhh} 2\pi N=4\pi \sum_{l=1}^m (n_l+1). \end{equation} Letting $\mathcal{H}_0$ be a meromorphic function in $\Omega$
so that $|\mathcal{H}_0(z)|^2=e^{u_0+8\pi \sum_{l=1}^m (n_l+1) G(z,p_l)}$ (which exists and is unique up to rotations), assume that $\mathcal{H}_0$ has zero residue at each $p_1,\dots,p_m$. Letting $\sigma_0(z)=-\left( \int^z \mathcal{H}_{0}(w) dw \right)^{-1}$ (a well-defined meromorphic function), assume that
\begin{equation} \label{ggg} D_0=\frac{1}{\pi} \left[ \int_{\Omega \setminus \sigma_0^{-1}(B_\rho(0))} e^{u_0+8\pi \sum_{l=1}^m (n_l+1)G(z,p_l)} - \sum_{l=1}^m (n_l+1) \int_{\mathbb{R}^2 \setminus B_\rho(0)}
\frac{dy}{|y|^4}\right]<0 \end{equation} for small $\rho>0$ and the ``non-degeneracy condition" $\hbox{det }A \not=0$, where $A$ is given by \eqref{matrixA}. Then, for $\epsilon>0$ small there exists an $N-$condensate
$(\mathcal{A}_\epsilon,\phi_\epsilon)$ so that $|\phi_\epsilon| \to 0$ in $C(\bar \Omega)$ and \begin{equation} \label{magconc} (F_{12})_\epsilon \rightharpoonup 4\pi \displaystyle \sum_{l=1}^m (n_l+1) \delta_{p_l} \end{equation} weakly in the sense of measures, as $\epsilon \to 0$. \end{thm} \noindent Notice that we can also allow some concentration point not to be a vortex point, by simply adding it to the vortex set with null multiplicity. In section \ref{examples} we will see that in the double-vortex case $N=2$ Theorem \ref{mainbb} essentially recovers the result in \cite{EsFi,LinYan1} concerning single-point concentration, since the assumptions just reduce to having the concentration point $q \not=p_1,p_2$ as a non-degenerate critical point of $u_0$ with $D_0<0$ (for similar results concerning the Liouville equation, see \cite{BaPa,dkm,EGP} in case of bounded domains with Dirichlet b.c. and \cite{Fi} in case of a flat two-torus). Despite the complex statement, for a rectangle $\Omega$ with $p_1=0$, $p_2=\frac{\omega_1}{2}$, $p_3=\frac{\omega_2}{2}$ and $p_4=\frac{\omega_1+\omega_2}{2}$, and $n_1,n_2,n_3,n_4$ even multiplicities with $\frac{n_4}{2}$ odd, we will check in section \ref{examples} that the assumptions of Theorem \ref{mainbb} do hold for $m=1$ and concentration point $p_1$, up to performing a small translation so that $p_j \in \Omega$. For computational simplicity, the ``non-degeneracy condition" will be checked just for a square with $n=n_3=2$ and $(n_1,n_2)=(2,0)$ or vice versa. More importantly, examples with $m\geq 2$ will be discussed in section \ref{general}.
\noindent Following an approach developed by Tarantello \cite{T} and exploited in \cite{NoTa3}, \eqref{2} can be seen as a perturbed mean-field equation \eqref{3} with potential $e^{u_0}$ and unperturbed part \begin{equation} \label{10100}
-\Delta w= 4\pi N \left(\frac{e^{u_0+w} }{\int_\Omega e^{u_0+w}}-\frac{1}{|\Omega|}\right). \end{equation}
Since $e^{u_0}$ vanishes like $|z-p_l|^{2n_l}$ near each $p_l$, $l=1,\dots,m$, the Liouville equation $-\Delta U=|z|^{2n} e^U$ will play a central role in the construction of an approximating function in the perturbative approach. Since $U_{\delta,\sigma_0}=\log \frac{8 \delta^2}{(\delta^2 +|\sigma_0|^2)^2}$ does solve $-\Delta U= |\sigma_0'|^2 e^U$ in $\Omega \setminus \{\hbox{poles of }\sigma_0\}$, a natural choice is $\sigma_0=z^{n+1}$ when $m=1$ and $p_1=0$. Letting $P$ be a projection operator on the space of doubly-periodic functions, the approximation rate of $PU_{\delta,z^{n+1}}$ is unfortunately not sufficiently small to carry out the argument, a problem which often arises in perturbation arguments and is usually overcome by refining the ansatz via linear theory around the approximating function. However, such a procedure would require several subsequent refinements, yielding in general a high level of complexity. Inspired by \cite{DEM4}, in section \ref{improved} we will take advantage of the inner parameter $\sigma_0$, present in the Liouville formula, to get improved profiles. Since $PU_{\delta,\sigma_0} \sim U_{\delta,\sigma_0}-\log(8\delta^2)+\log |\sigma_0|^4+8\pi (n+1)G(z,0)$ as $\delta \to 0$, $PU_{\delta,\sigma_0}$ is a good approximate solution of \eqref{10100} if $\frac{|\sigma_0'|^2}{|\sigma_0|^4}=|(\frac{1}{\sigma_0})'|^2=e^{u_0+8\pi(n+1)G(z,0)}$. By definition of $\mathcal{H}_0$, it is enough to find a meromorphic $\sigma_0$ with $(\frac{1}{\sigma_0})'=\mathcal{H}_0$, a solvable equation if and only if $\mathcal{H}_0$ has zero residue at its unique pole $0$. As we will discuss precisely in Remark \ref{remark2bis}, the assumption on the residues of $\mathcal{H}_0$ is then necessary in our context.
Moreover, since $\mathcal{H}_0$ has a pole at $0$ of multiplicity $n+2$ and zeroes $p_j$'s of multiplicities $n_j$, by the property $\mathcal{H}_0(z+\omega_j)=e^{i\theta_j}\mathcal{H}_0(z)$, $j=1,2$, near $\partial \Omega$ for some $\theta_1,\theta_2 \in \mathbb{R}$ we deduce that $$0=\frac{1}{2\pi i} \int_{\partial \Omega} \frac{\mathcal{H}_0'}{\mathcal{H}_0}dz=n+2-\sum_j n_j=2(n+1)-N,$$ providing \eqref{hhh} as a necessary and sufficient condition for the existence of such $\mathcal{H}_0$ (the sufficiency is shown in the next section). $D_0<0$ and the ``non-degeneracy condition'' will be necessary to determine $\delta$ and $a$, a sort of small translation parameter accounting for the perturbation term in \eqref{3}, according to the asymptotic expansion for the corresponding ``reduced equations" as derived in section \ref{reduced}. Theorem \ref{mainbb} is proved in section \ref{mainresults} when $m=1$ and in section \ref{general} when $m \geq 2$.
\section{Improved Liouville profiles} \label{improved} \noindent Let us decompose any solution $v$ of (\ref{2}) as
$v=w+c$, where $c=\frac{1}{|\Omega|}\int_\Omega v$. In this way, $w$ has zero average: $\int_\Omega w dz=0$, and by (\ref{ci0}) one has $$e^{2c} \int_\Omega e^{2u_0+2w}-e^c \int_\Omega e^{u_0+w}+4\pi N \epsilon^2=0.$$ This last identity then provides a relation between $c$ and $w$ in the form $c=c_\pm (w)$, where \begin{equation}\label{cc} e^{c_\pm(w)}=\frac{8\pi N \epsilon^2}{\int_\Omega e^{u_0+w} \mp \sqrt{(\int_\Omega e^{u_0+w})^2-16\pi N \epsilon^2 \int_\Omega e^{2u_0+2w}}}, \end{equation} whenever $\big(\int_{\Omega} e^{u_0+w}\big)^2-16\pi N \epsilon^2 \int_{\Omega} e^{2u_0+2w}\ge 0$. The two possible choices of ``plus'' or ``minus'' sign in \eqref{cc} are another indication of the multiplicity of solutions for \eqref{2}. In \cite{T}, topological solutions are characterized as those satisfying \eqref{cc} with the ``plus'' sign. Since we are interested in non-topological solutions, it is natural to restrict our attention to the case $c=c_-(w)$, reducing problem (\ref{2}) to the following equation in $\Omega$:
\begin{equation} \label{3} \left\{ \begin{array}{rl} -\Delta w=& \displaystyle 4\pi N\left(\frac{e^{u_0+w} }{\int_\Omega e^{u_0+w}}-\frac{1}{|\Omega|}\right) \\ &\displaystyle+\frac{64 \pi^2N^2 \epsilon^2 \int_\Omega e^{2u_0+2w}}{(\int_\Omega e^{u_0+w}+\sqrt{(\int_\Omega e^{u_0+w})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2w}})^2}\left(\frac{e^{u_0+w}}{\int_\Omega e^{u_0+w}}-\frac{ e^{2u_0+2w}}{\int_\Omega e^{2u_0+2w}}\right) \\ \displaystyle \int_\Omega w=0. \end{array}\right.\end{equation}
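Formula \eqref{cc} is simply the explicit resolution of the quadratic relation above. Writing $A=\int_\Omega e^{2u_0+2w}$ and $B=\int_\Omega e^{u_0+w}$ (shorthands introduced here only), one has the routine computation:

```latex
A\,e^{2c}-B\,e^{c}+4\pi N\epsilon^2=0
\;\Longrightarrow\;
e^{c_\pm(w)}=\frac{B\pm\sqrt{B^2-16\pi N\epsilon^2 A}}{2A}
            =\frac{8\pi N\epsilon^2}{B\mp\sqrt{B^2-16\pi N\epsilon^2 A}},
```

where the last equality follows by multiplying numerator and denominator by $B\mp\sqrt{B^2-16\pi N\epsilon^2 A}$; in particular $c_-(w)$, the choice made above, corresponds to the smaller of the two roots.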
\noindent Here and in the next sections, we first discuss the case $m=1$ in Theorem \ref{mainbb}. Assume that $p$ appears $n$ times in $\{p_1,\dots,p_N\}$, and denote by $p_j$'s the remaining points in the set $\{p_1,\dots,p_N\}$ with corresponding multiplicities $n_j$'s. Up to a translation, we are assuming that $p_j \in \Omega$
for $j=1,\dots,N$, a crucial property which will simplify the arguments below. Since the assumptions in Theorem \ref{mainbb} for the concentration at $p$ are just local properties, for notational simplicity let us consider the case $p=0$.
\noindent Since $e^{u_0}$ behaves like $|z|^{2n}$ as $z \to 0$, the local profile of $w$ near $0$ will be given in terms of solutions of the ``singular" Liouville equation: \begin{equation}\label{starr}
-\Delta U= |z|^{2n}e^U. \end{equation} Recall that, by the Liouville formula, the function
$$\log \frac{8|F'|^2}{(1+|F|^2)^2}$$ does solve $-\Delta U=e^U$ in the set $\{F'\not= 0 \}$, for any holomorphic map $F$. For entire solutions of \eqref{starr} with finite-energy:
$\int_{\mathbb{R}^2} |z|^{2n}e^U<+\infty$, it is well known that necessarily $F(z)=\frac{z^{n+1}-a}{\delta}$, and then all the entire finite-energy solutions of \eqref{starr} are classified as
$$U_{\delta,a}(z)=\log \frac{8 (n+1)^2 \delta^2}{(\delta^2 +|z^{n+1}-a|^2)^2},\quad \delta>0, \:a \in \mathbb{C}.$$ Moreover, we have that $\int_{\mathbb{R}^2}
|z|^{2n}e^{U_{\delta,a}}=8\pi(n+1)$. Since by construction the corresponding $v=w+c_-(w)$ will satisfy $$ \frac{1}{\epsilon^2} e^{u_0+v}\left(1-e^{u_0+v}\right) \rightharpoonup 8\pi (n+1) \delta_0$$ in the sense of measures, the balance condition \begin{equation} \label{balance} 2\pi N=4 \pi(n+1) \end{equation} is necessary in view of (\ref{ci0}).
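The mass identity $\int_{\mathbb{R}^2}|z|^{2n}e^{U_{\delta,a}}=8\pi(n+1)$ can also be checked numerically; the following sketch (the sample values of $n$, $\delta$, $a$ are ours, chosen purely for illustration) approximates the integral by a midpoint Riemann sum in polar coordinates:

```python
# Midpoint-rule check of  int_{R^2} |z|^{2n} e^{U_{delta,a}} = 8*pi*(n+1).
# The parameters below are arbitrary sample values, not taken from the paper.
import numpy as np

n, delta, a = 1, 0.7, 0.3 + 0.2j
R, Nr, Nt = 80.0, 4000, 512                 # truncation radius; the tail is O(R^{-2n-2})
dr, dt = R / Nr, 2 * np.pi / Nt
r = (np.arange(Nr) + 0.5) * dr              # radial midpoints
t = (np.arange(Nt) + 0.5) * dt              # angular midpoints
rr, tt = np.meshgrid(r, t, indexing="ij")
z = rr * np.exp(1j * tt)
U = 8 * (n + 1) ** 2 * delta ** 2 / (delta ** 2 + np.abs(z ** (n + 1) - a) ** 2) ** 2
mass = np.sum(rr ** (2 * n) * U * rr) * dr * dt   # extra rr: polar area element
print(mass, 8 * np.pi * (n + 1))            # the two values should agree closely
```

Varying $\delta$ or $a$ leaves the total mass unchanged, reflecting the scaling and translation invariance of the problem in the variable $w=z^{n+1}$.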
\noindent Assume for simplicity $e^{u_0}=|z|^{2n}$. Since $ \int_\Omega
|z|^{2n} e^{U_{\delta,a}}\to 8\pi(n+1)$ as $\delta \to 0$, by
(\ref{balance}) we have the asymptotic matching of $-\Delta U_{\delta,a}= |z|^{2n} e^{U_{\delta,a}}$ and $4\pi N
\frac{|z|^{2n} e^{U_{\delta,a}}}{\int_\Omega
|z|^{2n}e^{U_{\delta,a}}}$ as $\delta \to 0$. To correct $U_{\delta,a}$ into a doubly-periodic function, we consider the projection $PU_{\delta,a}$ of $U_{\delta,a}$ as the solution of
$$\left\{ \begin{array}{ll} -\Delta PU_{\delta,a}=-\Delta U_{\delta,a} +\frac{1}{|\Omega|} \int_\Omega \Delta U_{\delta,a}& \hbox{in }\Omega\\ \int_\Omega PU_{\delta,a}=0.& \end{array} \right.$$ In this way, we gain the constant term
$$\frac{1}{|\Omega|} \int_\Omega \Delta U_{\delta,a} =-\frac{1}{|\Omega|} \int_\Omega |z|^{2n} e^{U_{\delta,a}} \to -\frac{4\pi N}{|\Omega|} \qquad \hbox{as }\delta \to 0$$ in view of (\ref{balance}), and
we still need to check that the difference between $-\Delta U_{\delta,a}= |z|^{2n}e^{U_{\delta,a}}$ and $4\pi N \frac{|z|^{2n}
e^{PU_{\delta,a}}}{\int_\Omega |z|^{2n} e^{PU_{\delta,a}}}$ is asymptotically small. Thanks to an asymptotic expansion of $PU_{\delta,a}$ in terms of $U_{\delta,a}$, we will see that the difference is small (i.e. $PU_{\delta,a}$ is an approximating function of (\ref{3})) but behaves at most like
$|z|^{2n}e^{U_{\delta,a}}O(|z|+\delta^2)$ which is not sufficiently small. A first refinement of the ansatz via the linear theory around $PU_{\delta,a}$ could improve the pointwise error estimate into $|z|^{2n}e^{U_{\delta,a}}O(|z|^2+\delta^2)$, which unfortunately is in general still not enough. Since there is a strong mismatch between the dependence of $U_{\delta,a}$ on $z^{n+1}$ and that of the error on $z$ (or even on $z^2$), we should push such a procedure through several subsequent refinements. Instead, we play directly with the inner parameters present in the Liouville formula, for we have more flexibility in the choice of $F(z)$ on bounded domains. Hereafter, let us fix an open simply-connected domain $\tilde \Omega$ so that $\overline{\Omega}\subset \tilde \Omega$ and $\tilde \Omega \cap \,\left(\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}\right)=\{0\}$, and set $\mathcal{M}(\overline{\Omega})=\{ \sigma
\Big|_{\overline{\Omega}}: \sigma\hbox{ meromorphic in }\tilde \Omega\}$. Let $\delta \in (0,+\infty)$, $a\in \mathbb{C}$ and $\sigma \in \mathcal{M}(\overline{\Omega})$ be a function which vanishes only at $0$ with multiplicity $n+1$. Since $\log
|\sigma'(z)|^2$ is harmonic in $\{ \sigma' \not= 0\}$, the choice $F(z)=\frac{\sigma(z)-a}{\delta}$ yields to solutions
$$U_{\delta,a,\sigma}(z)=\log \frac{8 \delta^2}{(\delta^2 +|\sigma(z)-a|^2)^2}$$
of $-\Delta U= |\sigma'(z)|^2 e^U$ in $\Omega \setminus \{\hbox{poles of }\sigma\}$, for $U_{\delta,a,\sigma}$ is a smooth function up to $\{\sigma'=0\}$.
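That $U_{\delta,a,\sigma}$ solves $-\Delta U=|\sigma'(z)|^2e^U$ away from the zeroes and poles of $\sigma$ is a direct consequence of the Liouville formula; the verification (routine, recorded here for convenience) reads:

```latex
% With F = (sigma - a)/delta in the Liouville formula:
\log\frac{8|F'|^2}{(1+|F|^2)^2}
 =\log\frac{8\,\delta^2\,|\sigma'|^2}{(\delta^2+|\sigma-a|^2)^2}
 =\log|\sigma'|^2+U_{\delta,a,\sigma};
% since log|sigma'|^2 is harmonic in {sigma' != 0}, applying -Delta gives
-\Delta U_{\delta,a,\sigma}
 =e^{\log|\sigma'|^2+U_{\delta,a,\sigma}}
 =|\sigma'|^2\,e^{U_{\delta,a,\sigma}}.
```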
\noindent The idea is then to find a better local approximating function $PU_{\delta,a,\sigma}$ for a suitable choice of $\sigma$, where $PU_{\delta,a,\sigma}$ does solve \begin{equation} \label{lll} \left\{ \begin{array}{ll} -\Delta PU_{\delta,a,\sigma} =
|\sigma'(z)|^2 e^{U_{\delta,a,\sigma}} -\frac{1}{|\Omega|} \int_\Omega |\sigma'(z)|^2 e^{U_{\delta,a,\sigma}}& \hbox{in }\Omega\\ \int_\Omega PU_{\delta,a,\sigma}=0.& \end{array} \right. \end{equation} Notice that $PU_{\delta,a,\sigma}$ is well-defined and smooth as long as $\sigma \in \mathcal{M}(\overline{\Omega})$, whether or not $\sigma$ has poles.
\noindent Recall that $G(z,0)$ can be thought of as a doubly-periodic function in $\mathbb{C}$ with singularities on the lattice vertices $\omega_1 \mathbb{Z}+\omega_2\mathbb{Z}$, and $H(z)=G(z,0)+\frac{1}{2\pi} \log |z|$ is then a smooth function in $2\Omega$ with $\Delta H=\frac{1}{|\Omega|}$. Since $2 \Omega$ is simply-connected, we can find a holomorphic function $H^*$ in $2 \Omega$ having the harmonic function $H-\frac{|z|^2}{4|\Omega|}$ as real part. Since $p_j \in \Omega$, take $\tilde \Omega$ close to $\Omega$ so that $\tilde \Omega-p_j \subset 2 \Omega$ for all $j=1,\dots, N$. The function \begin{equation} \label{definitionH} \mathcal{H}(z)= \prod_j (z-p_j)^{n_j} \hbox{exp} \left(
4\pi(n+1) H^*(z) -2\pi\sum_{j=1}^N H^*(z-p_j)-\frac{\pi}{2|\Omega|}\sum_{j=1}^N |p_j|^2
+\frac{\pi}{|\Omega|}z \overline{\sum_{j=1}^N p_j}\right) \end{equation} is holomorphic in $\tilde \Omega$ with \begin{equation} \label{keyrelationH}
|\mathcal{H}(z)|^2=\frac{1}{|z|^{2n}} e^{u_0+8\pi(n+1) H(z)}=e^{4\pi(n+2)H(z)-4 \pi \sum_j n_j G(z,p_j)} \qquad \hbox{in }\tilde \Omega \end{equation}
in view of \eqref{balance}. The meromorphic function $\mathcal{H}_0(z)=\frac{\mathcal{H}(z)}{z^{n+2}}$ satisfies $|\mathcal{H}_0(z)|^2=e^{u_0+8\pi(n+1) G(z,0)}$ in $\tilde \Omega$. \begin{rem} \label{1149} To simplify notation, we are considering the case $p=0$. When $p \not=0$, by assuming $\tilde \Omega-p \subset 2\Omega$ the function \begin{eqnarray*} \mathcal{H}^p(z)&=& \prod_j (z-p_j)^{n_j} \hbox{exp} \left(
4\pi(n+1) H^*(z-p) +\frac{\pi(n+1)}{|\Omega|}|p|^2-\frac{2\pi(n+1)}{|\Omega|}z \bar p \right) \times\\
&&\times \hbox{exp} \left(-2\pi\sum_{j=1}^N H^*(z-p_j)-\frac{\pi}{2|\Omega|}\sum_{j=1}^N |p_j|^2
+\frac{\pi}{|\Omega|}z \overline{\sum_{j=1}^N p_j}\right) \end{eqnarray*} is holomorphic in $\tilde \Omega$ with
$$|\mathcal{H}^p(z)|^2=\frac{1}{|z-p|^{2n}} e^{u_0+8\pi(n+1) H(z-p)}=e^{4\pi(n+2)H(z-p)-4 \pi \sum_j n_j G(z,p_j)} \qquad \hbox{in }\tilde \Omega$$
in view of \eqref{balance}. The meromorphic function $\mathcal{H}_0^p(z)=\frac{\mathcal{H}^p(z)}{(z-p)^{n+2}}$ satisfies $|\mathcal{H}^p_0(z)|^2=e^{u_0+8\pi(n+1) G(z,p)}$ in $\tilde \Omega$. \end{rem}
\noindent Hereafter, for a meromorphic function $g$ in $\tilde \Omega$ the notation $\int^z g(w) dw$ stands for the anti-derivative of $g(z)$, which is a well-defined meromorphic function in the simply-connected domain $\tilde \Omega$ as soon as $g$ has zero residues at each of its poles. Since $\mathcal{H}(0)\not=0$ by \eqref{keyrelationH}, we define \begin{equation} \label{sigma0} \sigma_0(z)=-\left(\int^z \mathcal{H}_0(w) e^{-c_0 w^{n+1}} dw \right)^{-1}=-\left(\int^z \frac{\mathcal{H}(w) e^{-c_0 w^{n+1}}}{w^{n+2}} dw \right)^{-1}, \end{equation} where \begin{equation} \label{c0} c_0=\frac{1}{\mathcal{H}(0) (n+1)! }\frac{d^{n+1} \mathcal{H}}{dz^{n+1}}(0) \end{equation} guarantees that the residue of $\mathcal{H}_0(z) e^{-c_0 z^{n+1}}$ at $0$ vanishes. By construction $\sigma_0 \in \mathcal{M}(\overline{\Omega})$ vanishes only at zero with multiplicity $n+1$, as needed, with \begin{equation} \label{0942} \lim_{z \to 0} \frac{z^{n+1}}{\sigma_0(z)}=\frac{\mathcal{H}(0)}{n+1}, \end{equation} and solves \begin{equation} \label{eq sigma0}
|\sigma_0'(z)|^2= |\sigma_0(z)|^4 e^{u_0+8\pi(n+1)G(z,0)} e^{-2 \re [c_0 z^{n+1}]} \end{equation} in view of \eqref{keyrelationH}.
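\noindent As a check on the choice \eqref{c0}, notice that the residue of $\mathcal{H}_0(z) e^{-c_0 z^{n+1}}=\frac{\mathcal{H}(z) e^{-c_0 z^{n+1}}}{z^{n+2}}$ at $0$ is the coefficient of $z^{n+1}$ in the Taylor expansion
$$\mathcal{H}(z) e^{-c_0 z^{n+1}}=\left(\mathcal{H}(0)+\dots+\frac{1}{(n+1)!}\frac{d^{n+1} \mathcal{H}}{dz^{n+1}}(0)\, z^{n+1}+\dots\right)\left(1-c_0 z^{n+1}+\dots\right),$$
namely $\frac{1}{(n+1)!}\frac{d^{n+1} \mathcal{H}}{dz^{n+1}}(0)-c_0 \mathcal{H}(0)$, which vanishes precisely for $c_0$ as in \eqref{c0} in view of $\mathcal{H}(0)\not=0$.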
\noindent Let $\sigma \in \mathcal{M}(\overline{\Omega})$ be a function which vanishes only at zero with multiplicity $n+1$. For $a \in \mathbb{C}$ small there exist $a_0,\dots,a_n$ so that $\{z \in \tilde \Omega: \, \sigma(z)=a \}=\{a_0,\dots,a_n\}$ (distinct points when $a \not=0$). For $a$ small the function \begin{eqnarray}
\mathcal{H}_{a,\sigma}(z) &=& \prod_j (z-p_j)^{n_j} \hbox{exp}\left( 4\pi \sum_{k=0}^n H^*(z-a_k)-\frac{2\pi}{|\Omega|}z \overline{\sum_{k=0}^n a_k}-2\pi\sum_{j=1}^N H^*(z-p_j)\right. \label{Hasigma} \\
&&\left.-\frac{\pi}{2|\Omega|}\sum_{j=1}^N |p_j|^2
+\frac{\pi}{|\Omega|}z \overline{\sum_{j=1}^N p_j}\right) \nonumber \end{eqnarray} is holomorphic in $\tilde \Omega$ with \begin{equation} \label{keyrelation}
|\mathcal{H}_{a,\sigma}(z)|^2=\frac{1}{|z|^{2n}} e^{u_0+8\pi
\sum_{k=0}^n H(z-a_k)-\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2} \qquad \hbox{in }\tilde \Omega \end{equation} in view of \eqref{balance}. The advantage of our construction of $\mathcal{H}_{a,\sigma}$, which could also be carried out in a simpler and more direct way, is the holomorphic/anti-holomorphic dependence on the $a_k$'s as well as on $z$, a crucial property as we will see in Appendix A. When $a=0$, then $a_0=\dots=a_n=0$ and $\mathcal{H}=\mathcal{H}_{0,\sigma}$.
\noindent Endowed with the norm $\|\sigma\|:=\| \frac{\sigma}{\sigma_0}\|_{\infty,\tilde \Omega}$, the set $\mathcal{M}'(\overline{\Omega})=\{ \sigma \in \mathcal{M}(\overline{\Omega}):\,\|\sigma\|<\infty\}$ is a Banach space. Let $\mathcal{B}_r$ be the closed ball centered at $\sigma_0$ with radius $r>0$, i.e. \begin{equation} \label{setB} \mathcal{B}_r=\bigg\{ \sigma \in
\mathcal{M}({\overline{\Omega}}):\:\Big\|
\frac{\sigma}{\sigma_0}-1\Big\|_{\infty,\tilde \Omega} \leq r \bigg\}. \end{equation} For $a\not=0$ and $r$ small, the aim is to find a solution $\sigma_a \in \mathcal{B}_r$ of $$\sigma(z)= -\left[ \int^z \left(\frac{\sigma(w)-a}{\prod_{k=0}^n (w-a_k)} \frac{w^{n+1}}{\sigma(w)} \right)^2 \frac{\mathcal{H}_{a,\sigma}(w)}{w^{n+2}} e^{-c_{a,\sigma}w^{n+1}}dw\right]^{-1}$$ for a suitable coefficient $c_{a,\sigma}$. To be more precise, letting $$g_{a,\sigma}(z)=\frac{\sigma(z)-a}{\prod_{k=0}^{n}(z-a_k)}$$
for $|a|<\rho$ and $\sigma \in \mathcal{B}_r$, by Lemma \ref{gomme} we have that $g_{a,\sigma} \in \mathcal{M}(\overline{\Omega})$ never vanishes, and the problem above can be rewritten as \begin{equation} \label{sigmaa} \sigma(z)= -\left[ \int^z \frac{g^2_{a,\sigma}(w)}{g^2_{0,\sigma}(w)} \frac{\mathcal{H}_{a,\sigma}(w)}{w^{n+2}} e^{-c_{a,\sigma}w^{n+1}}dw\right]^{-1}. \end{equation} The choice \begin{equation} \label{ca} c_{a,\sigma}=\frac{1}{(n+1)!}\frac{d^{n+1}}{dz^{n+1}}\left[ \frac{g^2_{a,\sigma}(z) g^2_{0,\sigma}(0)}{g^2_{a,\sigma}(0) g^2_{0,\sigma}(z)} \frac{\mathcal{H}_{a,\sigma}(z)}{\mathcal{H}_{a,\sigma}(0) } \right](0) \end{equation} makes the residue of the integrand in \eqref{sigmaa} vanish, so that the R.H.S. is well-defined. Any solution $\sigma_a \in \mathcal{B}_r$ of \eqref{sigmaa} vanishes only at zero with multiplicity $n+1$, and satisfies \begin{equation} \label{eq sigmaa}
|\sigma_a'(z)|^2= |\sigma_a(z)-a|^4 \hbox{exp}\left(u_0+8\pi
\sum_{k=0}^n G(z,a_k)-\frac{2\pi}{|\Omega|} \sum_{k=0}^n
|a_k|^2-2\re [c_{a,\sigma_a}z^{n+1}]\right) \end{equation} in view of \eqref{keyrelation}. The resolution of problem \eqref{sigmaa}-\eqref{ca} will be addressed in Appendix A.
\noindent We have the following expansion for $PU_{\delta,a,\sigma}$ as $\delta\to0$: \begin{lem}\label{expPU} There holds \begin{eqnarray} \label{1138}
PU_{\delta,a,\sigma}=U_{\delta,a,\sigma}-\log (8 \delta^2)+4 \log |g_{a,\sigma}|+8\pi \sum_{k=0}^n H(z-a_k)+\Theta_{\delta,a,\sigma}+2 \delta^2 f_{a,\sigma}+O(\delta^4) \end{eqnarray}
in $C(\overline{\Omega})$, uniformly for $|a|< \rho$ and $\sigma \in \mathcal{B}_r$, where
$$\Theta_{\delta,a,\sigma}=-\frac{1}{|\Omega|}\int_{\Omega} \log {|\sigma(z)-a|^4\over
(\delta^2+|\sigma(z)-a|^2)^2}$$ and $f_{a,\sigma}$ is defined in \eqref{FaQ}. In particular, there holds
$$PU_{\delta,a,\sigma}=8\pi \sum_{k=0}^n G(z,a_k)+\Theta_{\delta,a,\sigma}+2\delta^2 \left(f_{a,\sigma}-{1\over |\sigma(z)-a|^2}\right)+O(\delta^4)$$
in $C_{\text{loc}}(\overline{\Omega} \setminus\{0 \})$, uniformly for $|a|< \rho$ and $\sigma \in \mathcal{B}_r$. \end{lem} \begin{proof} Define $$r_{\delta,a,\sigma}=PU_{\delta,a,\sigma}-U_{\delta,a,\sigma}+\log
(8 \delta^2)-4 \log |g_{a,\sigma}|-8\pi \sum_{k=0}^n H(z-a_k).$$ The function $U_{\delta,a,\sigma}$ satisfies $-\Delta U_{\delta,a,\sigma}=|\sigma'(z)|^2 e^{U_{\delta,a,\sigma}}$ only in $\Omega \setminus \{\hbox{poles of }\sigma\}$. At the same time, the function $-4\log |g_{a,\sigma}|$ is harmonic in $\Omega \setminus \{\hbox{poles of }\sigma\}$, and has exactly the same singular behavior as $U_{\delta,a,\sigma}$ near each pole of $\sigma$. This means that \begin{equation} \label{yth}
-\Delta \left[U_{\delta,a,\sigma}+4\log |g_{a,\sigma}|\right]=|\sigma'(z)|^2 e^{U_{\delta,a,\sigma}} \end{equation}
holds in all of $\Omega$. Since $\Delta H=\frac{1}{|\Omega|}$, by (\ref{lll}) and (\ref{yth}) we get that \begin{eqnarray*}
-\Delta r_{\delta,a,\sigma}= \frac{1}{|\Omega|}\left[ 8\pi(n+1)-\int_\Omega |\sigma'(z)|^2 e^{U_{\delta,a,\sigma}}\right] . \end{eqnarray*} By Green's representation formula we have that \begin{eqnarray} \label{repr}
r_{\delta,a,\sigma}(z)=\frac{1}{|\Omega|}\int_\Omega r_{\delta,a,\sigma}+\int_{\partial \Omega}[\partial_\nu r_{\delta,a,\sigma}(w) G(w,z)-r_{\delta,a,\sigma}(w) \partial_\nu G(w,z)]ds(w), \end{eqnarray} where $\nu$ is the unit outward normal of $\partial \Omega$ and $ds(w)$ is the line integral element. Since as $\delta \to 0$ there holds
$$r_{\delta,a,\sigma}(w)=PU_{\delta,a,\sigma}(w)-8\pi \sum_{k=0}^n G(w,a_k) +2 \frac{\delta^2}{|\sigma(w)-a|^2}+O(\delta^4)$$
in $C^1(\partial \Omega)$ uniformly in $|a|< \rho$ and $\sigma \in \mathcal{B}_r$, by double-periodicity of $PU_{\delta,a,\sigma}-8\pi \displaystyle \sum_{k=0}^n G(\cdot,a_k)$ we get that \begin{eqnarray} \label{repr1} \int_{\partial \Omega}[\partial_\nu r_{\delta,a,\sigma}(w) G(w,z)-r_{\delta,a,\sigma}(w) \partial_\nu G(w,z)]ds(w)= 2 \delta^2 f_{a,\sigma}(z) +O(\delta^4) \end{eqnarray} in $C(\bar \Omega)$, where \begin{eqnarray}\label{FaQ}
f_{a,\sigma}(z)=\int_{\partial \Omega}\Big[\partial_\nu \frac{1}{|\sigma(w)-a|^2} G(w,z)-\frac{1}{|\sigma(w)-a|^2} \partial_\nu G(w,z)\Big]ds(w).\end{eqnarray} Inserting \eqref{repr1} into \eqref{repr} we get that \begin{eqnarray} \label{repr2} r_{\delta,a,\sigma}(z)=\Theta_{\delta,a,\sigma}+2 \delta^2 f_{a,\sigma}(z) +O(\delta^4) \end{eqnarray}
in $C(\overline{\Omega})$ uniformly in $|a|< \rho$ and $\sigma \in \mathcal{B}_r$, where
$$\Theta_{\delta,a,\sigma}:=\frac{1}{|\Omega|}\int_\Omega r_{\delta,a,\sigma}=-\frac{1}{|\Omega|}\int_{\Omega} \log {|\sigma(z)-a|^4\over
(\delta^2+|\sigma(z)-a|^2)^2}.$$ The estimate \eqref{repr2} yields the desired expansion for $PU_{\delta,a,\sigma}$ as $\delta \to 0$. \qed \end{proof}
\noindent Letting $\sigma_a \in \mathcal{B}_r$ be the solution of \eqref{sigmaa}-\eqref{ca}, we build up the correct approximating function as $W=PU_{\delta,a,\sigma_a}$. We need to control the approximation rate of $W$ for $\delta$ and $\epsilon$ small enough, by estimating the error term \begin{eqnarray}\label{R}
R&=&\Delta W+4\pi N\left(\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{1}{|\Omega|}\right)\\ &&+ \frac{64 \pi^2N^2 \epsilon^2 \int_\Omega e^{2u_0+2W}}{\left(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2W}}\right)^2}\left(\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{ e^{2u_0+2W}}{\int_\Omega e^{2u_0+2W}}\right).\nonumber \end{eqnarray} In order to simplify the notations, we set $U_{\delta,a}=U_{\delta,a,\sigma_a}$, $c_a=c_{a,\sigma_a}$, $\Theta_{\delta,a}=\Theta_{\delta,a,\sigma_a}$, $f_a=f_{a,\sigma_a}$, and omit the subscript $a$ in $\sigma_a$. We have the following crucial result.
\begin{thm}\label{estrr01550} Let $|a|<\frac{\rho}{2}$ and set \begin{eqnarray} \label{rateeps}
\eta=\epsilon^2 \delta^{-\frac{2}{n+1}} \max\Big\{1, \frac{|a|}{\delta}\Big\}^{\frac{2n}{n+1}}. \end{eqnarray} The following expansions hold \begin{eqnarray}
&&\Delta W+4\pi N\left( \frac{ e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{1}{|\Omega|}\right) \nonumber\\
&&=|\sigma'(z)|^2 e^{U_{\delta,a}}
\left[\frac{e^{2\re[c_a z^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1\right] \nonumber \\
&&\quad +|\sigma'(z)|^2 e^{U_{\delta,a}}O(\delta^2 |z| +\delta^2|a|^{\frac{1}{n+1}}+\delta^2|c_a|+\delta^{\frac{2n+3}{n+1}})+O(\delta^2) \label{imp} \end{eqnarray} and \begin{eqnarray} &&\ds \frac{64 \pi^2 N^2 \epsilon^2 \int_\Omega e^{2u_0+2W}}{(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2W}})^2} \left(\frac{ e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{e^{2u_0+2W}}{\int_\Omega e^{2u_0+2W}}\right) \nonumber \\
&&= |\sigma'(z)|^2 e^{U_{\delta,a}}
\left[\frac{8(n+1)^2\epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}}
\delta^{\frac{2}{n+1}}} E_{a,\delta}-\epsilon^2 |\sigma'(z)|^2 e^{U_{\delta,a}}\right] \left[1+O(|c_a||z|^{n+1}+\eta)+o(1) \right] \label{eps4} \end{eqnarray} as $\epsilon, \delta \to 0$, where $\alpha_a$, $F_a$ and $G_a$, $D_a$, $E_{a,\delta}$ are given in \eqref{alpha0}, \eqref{FG}, \eqref{Da}, \eqref{Eadelta}, respectively. \end{thm} \begin{proof} Recall that \eqref{sigmaa} implies the validity of \eqref{eq sigmaa}, which, combined with Lemma \ref{expPU}, yields the following crucial estimate: \begin{eqnarray} \label{cinema}
W=U_{\delta,a}-\log (8 \delta^2)+\log |\sigma'(z)|^2-u_0+\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+2\re [c_a z^{n+1}]+\Theta_{\delta,a}+2\delta^2 f_a+O(\delta^4) \end{eqnarray}
in $C(\overline{\Omega})$ as $\delta \to 0$, uniformly for $|a|< \rho$. Since by Lemma \ref{gomme} $\sigma=q^{n+1}$ in $\sigma^{-1}(B_\rho(0))$, through the change of variables $y=q(z)$ in $\sigma^{-1}(B_\rho(0))=q^{-1}(B_{\rho^{\frac{1}{n+1}}}(0))$, by (\ref{cinema}) we have that \begin{eqnarray}
&& \frac{8 \delta^2} {e^{\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+\Theta_{\delta,a}+2\delta^2 f_a(0)}}\int_{\sigma^{-1}(B_{\rho}(0))} e^{u_0+W} = \int_{q^{-1}(B_{\rho^{\frac{1}{n+1}}}(0))}
|\sigma'(z)|^2 e^{U_{\delta,a}+2\re[c_a z^{n+1}] +O(\delta^2|z|+\delta^4)} \nonumber \\
&&= \int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{8(n+1)^2 \delta^2 |y|^{2n}}{(\delta^2+|y^{n+1}-a|^2)^2}
e^{2\re[c_a (q^{-1}(y))^{n+1}] +O(\delta^2 |y|+\delta^4)}. \label{1045} \end{eqnarray} Since $q^{-1}(y)\sim y $ at $y=0$, the following Taylor expansion does hold \begin{equation} \label{keyexp} e^{c_a(q^{-1}(y))^{n+1}}=1+c_a y^{n+1} \sum_{k=0}^{+\infty} \alpha_a^k y^k \end{equation} in $B_{\rho^{\frac{1}{n+1}}}(0)$, where the coefficients $\alpha_a^k$ depend on $a$ through $\sigma=\sigma_a$. In particular, we have that $\alpha_a:=\alpha_a^0$ takes the form \begin{eqnarray} \label{alpha0} \alpha_a=\displaystyle \lim_{z \to 0}\frac{z^{n+1}}{\sigma(z)} \not=0. \end{eqnarray} By \eqref{keyexp} we then deduce that \begin{equation} \label{keyexp1}
e^{2\re[c_a (q^{-1}(y))^{n+1}]}=\big|e^{c_a (q^{-1}(y))^{n+1}}\big|^2=1+2\re\bigg[c_a y^{n+1} \sum_{k=0}^{+\infty} \alpha_a^k y^k\bigg]+|c_a|^2 |y|^{2n+2}\sum_{k,s=0}^{+\infty} \alpha_a^k \overline{\alpha}_a^s y^k \overline{y}^s. \end{equation} Since $$\sum_{j=0}^n [e^{i\frac{2\pi}{n+1}j}]^k=0$$ for all integers $k\notin (n+1)\mathbb{N}$, by the change of variables $y \to e^{i\frac{2\pi}{n+1}j} y$ we have that \begin{eqnarray}
\int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{|y|^m y^k}{(\delta^2+|y^{n+1}-a|^2)^2}
&=&\sum_{j=0}^n \int_{B_{\rho^{\frac{1}{n+1}}}(0)\cap C_j} \frac{|y|^m y^k}{(\delta^2+|y^{n+1}-a|^2)^2} \nonumber\\
&=& \int_{B_{\rho^{\frac{1}{n+1}}}(0)\cap C_0} \frac{|y|^m y^k}{(\delta^2+|y^{n+1}-a|^2)^2} \sum_{j=0}^n [e^{i\frac{2\pi}{n+1}j}]^k=0 \label{symmetryint} \end{eqnarray} for all $m\geq 0$ and all integers $k \notin (n+1)\mathbb{N}$, where $C_j$ is the sector of the plane between the angles $\frac{2\pi j}{n+1}$ and $\frac{2\pi (j+1)}{n+1}$. Formula \eqref{symmetryint} tells us that many terms of the expansion \eqref{keyexp1} will give no contribution when inserted in an integral formula like \eqref{1045}. Using the notation $\dots$ to denote such terms, we can rewrite \eqref{keyexp1} as \begin{eqnarray} \label{keyexp2}
e^{2\re[c_a (q^{-1}(y))^{n+1}]}&=&1+2\re\bigg[c_a \sum_{k=0}^{+\infty} \alpha_a^{k(n+1)} y^{(k+1)(n+1)}\bigg]+|c_a|^2 |y|^{2n+2} \sum_{k=0}^{+\infty} |\alpha_a^k|^2 |y|^{2k} \\
&&+2|c_a|^2 |y|^{2n+2}\re \bigg[\sum_{k=0}^{+\infty}\sum_{m=1}^{+\infty} \overline{\alpha}_a^k \alpha_a^{k+m(n+1)} |y|^{2k} y^{m(n+1)}\bigg]+\dots \nonumber \end{eqnarray} Setting \begin{equation} \label{FG}
F_a(y)=\sum_{k=0}^{+\infty} \alpha_a^{k(n+1)} y^{k+1},\quad G_a(y)=|y|^2 \left[2 \sum_{k=0}^{+\infty}\sum_{m=1}^{+\infty} \overline{\alpha}_a^k \alpha_a^{k+m(n+1)} |y|^{\frac{2k}{n+1}} y^m+\sum_{k=0}^{+\infty} |\alpha_a^k|^2 |y|^{\frac{2k}{n+1}} \right], \end{equation} through the change of variables $y \to y^{n+1}$ we can rewrite \eqref{1045} as \begin{eqnarray}
&&\hspace{-1.1cm} \frac{8 \delta^2}{(n+1) e^{\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+\Theta_{\delta,a}+2\delta^2 f_a(0)}}\int_{\sigma^{-1}(B_\rho(0))} e^{u_0+W} \nonumber\\
&&\hspace{-1.1cm}= \int_{B_\rho(0)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2}
\left(1+\re[2 c_a F_a(y)+|c_a|^2 G_a(y)]+O(\delta^2 |y|^{\frac{1}{n+1}}+\delta^4)\right)\nonumber\\
&&\hspace{-1.1cm} = 8\pi-\int_{\mathbb{R}^2 \setminus B_\rho(0)} \frac{8\delta^2}{|y|^4}+
\int_{B_\rho(0)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2}
\re[2 c_a F_a(y)+|c_a|^2 G_a(y)]+O(\delta^2|a|^{\frac{1}{n+1}}+\delta^{\frac{2n+3}{n+1}}). \label{1046} \end{eqnarray}
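\noindent The vanishing \eqref{symmetryint} used above rests on an elementary root-of-unity identity: setting $\zeta=e^{i\frac{2\pi}{n+1}}$, for every integer $k \notin (n+1)\mathbb{N}$ the geometric sum gives
$$\sum_{j=0}^n \zeta^{jk}=\frac{\zeta^{(n+1)k}-1}{\zeta^k-1}=0$$
in view of $\zeta^{n+1}=1$ and $\zeta^k \not=1$, while both $B_{\rho^{\frac{1}{n+1}}}(0)$ and $|y^{n+1}-a|$ are invariant under $y \to \zeta y$.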
Since $|a|<\frac{\rho}{2}$ and $F_a$ is a holomorphic function in $B_{\frac{\rho}{2}}(a) \subset B_\rho(0)$, we can expand $F_a$ in a power series around $y=a$: \begin{eqnarray} \label{expF} F_a(y)=\sum_{k=0}^\infty \frac{F_a^{(k)}(a)}{k!} (y-a)^k, \end{eqnarray} and then get \begin{eqnarray}
2 \int_{B_\rho(0)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2} \re[c_a F_a(y)]&=&
2 \int_{B_{\frac{\rho}{2}}(a)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2} \re[c_a F_a(y)]+O(\delta^2|c_a|)\nonumber\\
&=& 16\pi \re[c_a F_a(a)] +O(\delta^2|c_a|) \label{1047} \end{eqnarray} in view of
$$\int_{B_{\frac{\rho}{2}}(a)} \frac{(y-a)^k}{(\delta^2+|y-a|^2)^2}=0$$ for all integers $k\geq 1$. The map $\re G_a$ is only of class $C^{2+\frac{2}{n+1}}(B_\rho(0))$ and can be expanded up to second order at $y=a$: \begin{eqnarray}\label{expG}
\re G_a(y)=\re G_a(a)+\langle\nabla \re G_a(a),y-a\rangle+\frac{1}{2}\langle D^2 \re G_a(a) (y-a),y-a\rangle+O(|y-a|^{\frac{2(n+2)}{n+1}}) \end{eqnarray} for $y \in B_{\frac{\rho}{2}}(a)$, yielding \begin{eqnarray}
&& |c_a|^2 \int_{B_\rho(0)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2} \re G_a(y)= |c_a|^2 \int_{B_{\frac{\rho}{2}}(a)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2} \re G_a(y)+O(\delta^2|c_a|^2) \nonumber \\
&&=8\pi |c_a|^2 \re G_a(a)+\frac{|c_a|^2}{4} \Delta \re G_a(a) \int_{B_{\frac{\rho}{2}}(a)} \frac{8\delta^2}{(\delta^2+|y-a|^2)^2} |y-a|^2+O(\delta^2|c_a|^2) \nonumber\\
&&=8\pi |c_a|^2 \re G_a(a)+4\pi |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta} +O(\delta^2|c_a|^2) \label{1048} \end{eqnarray} in view of \begin{eqnarray*}
&&\int_{B_{\frac{\rho}{2}}(a)} \frac{(y-a)_1}{(\delta^2+|y-a|^2)^2}=\int_{B_{\frac{\rho}{2}}(a)} \frac{(y-a)_2}{(\delta^2+|y-a|^2)^2}=
\int_{B_{\frac{\rho}{2}}(a)} \frac{(y-a)_1(y-a)_2}{(\delta^2+|y-a|^2)^2}=0\\
&&\int_{B_{\frac{\rho}{2}} (a)} \frac{(y-a)_1^2}{(\delta^2+|y-a|^2)^2}=\int_{B_{\frac{\rho}{2}} (a)} \frac{(y-a)_2^2}{(\delta^2+|y-a|^2)^2}=
\frac{1}{2}\int_{B_{\frac{\rho}{2}}(a)} \frac{|y-a|^2}{(\delta^2+|y-a|^2)^2}. \end{eqnarray*} By inserting \eqref{1047}, \eqref{1048} into \eqref{1046} we get that \begin{eqnarray}
&&\frac{8\delta^2}{(n+1) e^{\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+\Theta_{\delta,a}+2\delta^2 f_a(0)}}\int_{\sigma^{-1}(B_\rho(0))} e^{u_0+W}\nonumber\\
&&= 8\pi-\int_{\mathbb{R}^2 \setminus B_\rho(0)} \frac{8\delta^2}{|y|^4}
+16\pi \re[c_a F_a(a)] +8\pi |c_a|^2 \re G_a(a)+4 \pi |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta} \nonumber \\
&&+O(\delta^2|a|^{\frac{1}{n+1}}+\delta^2|c_a|+\delta^{\frac{2n+3}{n+1}}).\label{1049} \end{eqnarray} By Lemma \ref{expPU}, \eqref{1049} and Lemma \ref{gomme} we get that \begin{eqnarray}
&&\frac{ \delta^2}{\pi (n+1) e^{\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+\Theta_{\delta,a}+2\delta^2f_{a}(0)}}\int_\Omega e^{u_0+W} = 1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a) \nonumber \\
&&+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a+O(\delta^2|a|^{\frac{1}{n+1}}+\delta^2|c_a|+\delta^{\frac{2n+3}{n+1}}), \label{const} \end{eqnarray} where \begin{equation} \label{Da}
\pi D_a=\int_{\Omega \setminus \sigma^{-1}(B_\rho(0))} e^{u_0+8\pi \sum_{k=0}^n G(z,a_k)-\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2} -\int_{\mathbb{R}^2 \setminus B_\rho(0)} \frac{n+1}{|y|^4}. \end{equation}
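\noindent The two integral facts exploited in \eqref{1047} and \eqref{1048} follow from polar coordinates centered at $a$: writing $y-a=re^{i\theta}$, there holds
$$\int_{B_{\frac{\rho}{2}}(a)} \frac{(y-a)^k}{(\delta^2+|y-a|^2)^2}=\int_0^{\frac{\rho}{2}} \frac{r^{k+1}}{(\delta^2+r^2)^2}\, dr \int_0^{2\pi} e^{ik\theta}\, d\theta=0$$
for all integers $k\geq 1$, while the substitution $s=r^2$ gives
$$\int_{B_{\frac{\rho}{2}}(a)} \frac{8\delta^2 |y-a|^2}{(\delta^2+|y-a|^2)^2}=8\pi \delta^2 \int_0^{\frac{\rho^2}{4}} \frac{s\, ds}{(\delta^2+s)^2}=16\pi \delta^2 \log \frac{1}{\delta}+O(\delta^2),$$
which accounts for the $\delta^2 \log \frac{1}{\delta}$-term in \eqref{1048}.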
In view of (\ref{balance}) and $\int_\Omega |\sigma'(z)|^2 e^{U_{\delta,a}}=8\pi(n+1)+O(\delta^2)$, by (\ref{cinema}) and (\ref{const}) we have that \begin{eqnarray*}
&&\Delta W+4\pi N\left( \frac{ e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{1}{|\Omega|}\right) \\
&&=|\sigma'(z)|^2 e^{U_{\delta,a}}
\left[4\pi N \frac{e^{2\re[c_a z^{n+1}]+O(\delta^2 |z|+\delta^4)}
}{8 \delta^2 e^{-\frac{2\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2-\Theta_{\delta,a}-2 \delta^2 f_a
(0)} \int_\Omega e^{u_0+W}}-1 \right] +\frac{1}{|\Omega|}\left(\int_\Omega |\sigma'(z)|^2 e^{U_{\delta,a}}-4\pi N \right)\\
&&=|\sigma'(z)|^2 e^{U_{\delta,a}}
\left[\frac{e^{2\re[c_a z^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1\right] \\
&& +|\sigma'(z)|^2 e^{U_{\delta,a}}O(\delta^2 |z| +\delta^2|a|^{\frac{1}{n+1}}+\delta^2|c_a|+\delta^{\frac{2n+3}{n+1}})+O(\delta^2) \end{eqnarray*} as $\delta \to 0$, which proves \eqref{imp}.
\noindent Introducing the notation $B(w)=16\pi N (\int_{\Omega} e^{2u_0+2w})(\int_{\Omega} e^{u_0+w})^{-2}$, we can write the following expansion \begin{equation} \label{BWW} \frac{16 \pi N \int_\Omega e^{2u_0+2W}}{(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2W}})^2}={B(W) \over 4}+O(\epsilon^2 B^2(W)). \end{equation} Arguing as for (\ref{const}), the change of variables $y=\sigma(z)$ yields \begin{eqnarray} && {64 \delta^{4+\frac{2}{n+1}}\over
e^{\frac{4\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+2\Theta_{\delta,a}}}\int_\Omega e^{2u_0+2W}=\delta^{\frac{2}{n+1}}\int_{\sigma^{-1} (B_\rho(0))} |\sigma'(z)|^4 e^{2U_{\delta,a} +O(|c_a||z|^{n+1}+\delta^2)}+O(\delta^{4+\frac{2}{n+1}})\nonumber \\
&&= 64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \int_{B_\rho (0)} \frac{\delta^{4+\frac{2}{n+1}} |y|^{\frac{2n}{n+1}}}{(\delta^2+|y-a|^2)^4}\left(1+O(|c_a||y|+\delta^2+|y|^{\frac{1}{n+1}})\right)+O(\delta^{4+\frac{2}{n+1}}) \nonumber \\
&&= 64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \int_{B_\rho (0)} \frac{\delta^{4+\frac{2}{n+1}} |y+a|^{\frac{2n}{n+1}}}{(\delta^2+|y|^2)^4}\left(1+O(\delta^2+|y|^{\frac{1}{n+1}}+|a|^{\frac{1}{n+1}})\right)+O(\delta^{4+\frac{2}{n+1}}) \label{const1} \end{eqnarray} in view of \begin{eqnarray} \label{1852}
|\sigma'(z)|^2=(n+1)^2 |\alpha_a|^{-2} |z|^{2n}(1+O(|z|))=(n+1)^2 |\alpha_a|^{-\frac{2}{n+1}} |\sigma(z)|^{\frac{2n}{n+1}}(1+O(|\sigma(z)|^{\frac{1}{n+1}})), \end{eqnarray} where $\alpha_a$ is given by \eqref{alpha0}. We have that
$$\int_{B_\rho (0)} \frac{\delta^{4+\frac{2}{n+1}} |y+a|^{\frac{2n}{n+1}}}{(\delta^2+|y|^2)^4}=
\int_{\mathbb{R}^2} \frac{|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^4} +O(\delta^{4+\frac{2}{n+1}}) $$
if $|a|=O(\delta)$ and
$$\int_{B_\rho (0)} \frac{\delta^{4+\frac{2}{n+1}} |y+a|^{\frac{2n}{n+1}}}{(\delta^2+|y|^2)^4}=
\Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}} \int_{\mathbb{R}^2} \frac{1}{(1+|y|^2)^4}\left[1 +O\Big(\frac{\delta}{|a|}+\delta^6\Big)\right]$$
if $|a|\gg\delta$, where in the latter case we have used the estimate:
$$|y+a|^{\frac{2n}{n+1}}=|a|^{\frac{2n}{n+1}}+O(|a|^{\frac{n-1}{n+1}}|y|+|y|^{\frac{2n}{n+1}}).$$ Setting \begin{eqnarray} \label{Eadelta} E_{a,\delta}:= \left\{\begin{array}{ll}
\ds\int_{\mathbb{R}^2} \frac{|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^4} &\hbox{if }|a|=O(\delta)\\ \\
\ds\frac{\pi}{3} \Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}} &\hbox{if }|a|\gg\delta, \end{array} \right. \end{eqnarray} by (\ref{const1}) we get that \begin{eqnarray} {64 \delta^{4+\frac{2}{n+1}}\over
e^{\frac{4\pi}{|\Omega|} \sum_{k=0}^n |a_k|^2+2\Theta_{\delta,a}}}\int_\Omega e^{2u_0+2W}= 64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} (1+o(1)) E_{a,\delta}.\label{const1517} \end{eqnarray} Since by a combination of (\ref{const}) and (\ref{const1517}) we have for $B(W)$ that \begin{equation} \label{BW}
B(W)=32 \frac{(n+1)^2}{ \pi \delta^{\frac{2}{n+1}}}|\alpha_a|^{-\frac{2}{n+1}}(1+o(1))E_{a,\delta} \end{equation} in view of (\ref{balance}), by \eqref{BWW} and \eqref{BW} we get that \begin{eqnarray} \label{eps0}
\frac{16 \pi N \int_\Omega e^{2u_0+2W}}{(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2W}})^2} =8 \frac{(n+1)^2}{ \pi \delta^{\frac{2}{n+1}}}|\alpha_a|^{-\frac{2}{n+1}} (1+o(1)+O(\eta)) E_{a,\delta}, \end{eqnarray} where $\eta$ is given by \eqref{rateeps}. As we have already seen in deriving (\ref{imp}), by (\ref{cinema}) we have that \begin{eqnarray}
4\pi N\frac{ e^{u_0+W}}{\int_\Omega e^{u_0+W}}=|\sigma'(z)|^2 e^{U_{\delta,a}}
\left[1+O(|c_a| |z|^{n+1})+O(|c_a||a|+\delta^2 |\log \delta| ) \right], \label{eps1} \end{eqnarray} and in a similar way one can show that \begin{eqnarray}
{64(n+1)^3\over \delta^{\frac{2}{n+1}}} |\alpha_a|^{-\frac{2}{n+1}} \frac{e^{2u_0+2W} }{\int_\Omega e^{2u_0+2W} }
E_{a,\delta}= |\sigma'(z)|^4 e^{2U_{\delta,a}}\left[1+ O(|c_a||z|^{n+1})+o(1)\right] \label{eps2} \end{eqnarray} in view of (\ref{const1517}). In conclusion, by (\ref{eps0})-(\ref{eps2}) we have for the $\epsilon^2$-term in $R$ that \begin{eqnarray*} &&\ds \frac{64 \pi^2 N^2 \epsilon^2 \int_\Omega e^{2u_0+2W}}{(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2W}})^2} \left(\frac{ e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{e^{2u_0+2W}}{\int_\Omega e^{2u_0+2W}}\right) \\
&&= |\sigma'(z)|^2 e^{U_{\delta,a}}
\left[\frac{8(n+1)^2\epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}} \delta^{\frac{2}{n+1}}} E_{a,\delta}-\epsilon^2
|\sigma'(z)|^2 e^{U_{\delta,a}}\right] \left[1+O(|c_a||z|^{n+1}+\eta)+o(1) \right] \end{eqnarray*} in view of (\ref{balance}), which proves \eqref{eps4}. This completes the proof. \qed \end{proof}
\noindent Let us introduce the following weighted norm \begin{equation}\label{wn}
\| h \|_*=\sup_{z\in {\Omega}}
\frac{(\delta^2+|\sigma(z)-a|^2)^{1+\frac{\gamma}{2}}}{\delta^{\gamma} (|\sigma'(z)|^2+\delta^{\frac{2n}{n+1}})}\; |h(z)| \end{equation} for any $h\in L^\infty({\Omega})$, where $0<\gamma<1$ is a small fixed constant. We have that \begin{cor}\label{estrr0cor} There exist positive constants $\delta_0$, $\epsilon_0$ and $C_0$ such that \begin{equation}\label{ere}
\|R\|_*\le C_0 \left(\delta |c_a|+\delta^{2-\gamma} +\delta^{\frac{2}{n+1}-\gamma} |a|^{2+\gamma}+|c_a||a|^{\frac{n+2}{n+1}}+\eta+\eta^2\right) \end{equation} for any $\delta \in (0,\delta_0)$ and $\epsilon\in(0,\epsilon_0)$, where $\eta$ is given by \eqref{rateeps}. \end{cor} \begin{proof} Since \begin{eqnarray*}
&&\frac{e^{2\re[c_a z^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1\\
&&=\frac{e^{2\re[c_a z^{n+1}]}-1}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-2 \re[c_a F_a(a)]\\
&&+O(|c_a|^2|a|^2+\delta^2 |\log \delta|)=2\re[c_a (z^{n+1}-\alpha_a a)]+O(|c_a|^2 |z|^{2n+2}+ |c_a| |a|^2+\delta^2 |\log \delta|)\\
&&=2\re[\alpha_a c_a (\sigma(z)-a)]+O(|c_a| |z|^{n+2}+ |c_a| |a|^2+\delta^2 |\log \delta|), \end{eqnarray*} by Theorem \ref{estrr01550} we deduce that \begin{eqnarray*}
R = |\sigma'(z)|^2 e^{U_{\delta,a}}O\left(|c_a||\sigma(z)-a|+|c_a||z|^{n+2}+ |c_a||a|^2+\delta^2|\log \delta|+\eta +\eta^2 \right) +\epsilon^2 |\sigma'(z)|^4 e^{2U_{\delta,a}}(1+O(\eta))+ O(\delta^2) \end{eqnarray*}
as $\delta \to 0$, where $\eta$ is given in \eqref{rateeps}. In view of the estimates $|z|=O(|\sigma(z)|^{\frac{1}{n+1}})$ and $|\sigma'(z)|^2=O(|\sigma(z)|^{\frac{2n}{n+1}})$ near $0$, by setting $y=\sigma(z)$ in $\sigma^{-1}(B_\rho(0))$ we get that \begin{eqnarray*}
\|R\|_*&=&O\left(\sup_{y \in B_\rho(0)} \frac{\delta^{2-\gamma}}{(\delta^2+|y-a|^2)^{1-\frac{\gamma}{2}}}\left[|c_a||y-a|+|c_a||y|^{\frac{n+2}{n+1}} +|c_a||a|^2+\delta^2 |\log \delta|+\eta +\eta^2 \right]\right)\\
&&+O\left( \sup_{y \in B_\rho(0)} \frac{\epsilon^2 \delta^{4-\gamma}|y|^{\frac{2n}{n+1}}}{(\delta^2+|y-a|^2)^{3-\frac{\gamma}{2}}} [1+O(\eta)]\right)+O\left( \sup_{y \in B_\rho(0)} \frac{\delta^{2-\gamma} (\delta^2+|y-a|^2)^{1+\gamma/2}}{(|y|^{\frac{2n}{n+1}}+\delta^{\frac{2n}{n+1}})} \right) +O(\delta^{2-\gamma})\\
&=& O\left(\sup_{y \in B_{2\rho/\delta}(0)} \frac{1}{(1+|y|^2)^{1-\frac{\gamma}{2}}}\left[\delta |c_a||y|+\delta^{\frac{n+2}{n+1}} |c_a| |y|^{\frac{n+2}{n+1}}+|c_a||a|^{\frac{n+2}{n+1}}+\delta^2 |\log \delta|+\eta +\eta^2 \right]\right)\\
&&+O\left( \sup_{y \in B_{2\rho/\delta}(0)} \frac{\epsilon^2 \delta^{-2}(\delta^{\frac{2n}{n+1}}|y|^{\frac{2n}{n+1}}+|a|^{\frac{2n}{n+1}})}{(1+|y|^2)^{3-\frac{\gamma}{2}}}[1+O(\eta)] \right)\\
&&+O\left( \sup_{y \in B_{\rho/\delta}(0)} \frac{\delta^{\frac{2}{n+1}-\gamma} (\delta^{2+\gamma}+|a|^{2+\gamma}+\delta^{2+\gamma} |y|^{2+\gamma})}{(|y|^{\frac{2n}{n+1}}+1)} \right)+O(\delta^{2-\gamma})\\
&=& O\left(\delta |c_a|+\delta^{2-\gamma} +\delta^{\frac{2}{n+1}-\gamma} |a|^{2+\gamma}+|c_a||a|^{\frac{n+2}{n+1}}+\eta+\eta^2 \right) \end{eqnarray*} as claimed. \qed \end{proof}
\section{The reduced equations}\label{reduced} As we will discuss in detail in the next section, it will be crucial to study the system $\int_\Omega R \, PZ_0=0$ and $\int_\Omega R\, PZ=0$, where $PZ_0$ and $PZ$ are the unique solutions with zero average of
$\Delta PZ_{0} =\Delta Z_{0}-\frac{1}{|{\Omega}|}\int_{\Omega} \Delta Z_0$
and $\Delta PZ =\Delta Z-\frac{1}{|{\Omega}|}\int_{\Omega} \Delta Z$ in $\Omega$. Here, the functions $Z_0$ and $Z$ are defined as follows:
$$Z_0(z)=\frac{\delta^2-|\sigma(z)-a|^2}{\delta^2+|\sigma(z)-a|^2}\quad\text{and}\quad Z(z)= \frac{\delta(\sigma(z)-a)}{\delta^2+|\sigma(z)-a|^2},$$
and are (not doubly-periodic) solutions of $-\Delta \phi =|\sigma'(z)|^2 e^{U_{\delta,a,\sigma}} \phi$ in $\Omega$. Through the changes of variables $y=\sigma(z)$ and $y \to \frac{y-a}{\delta}$, notice that \begin{eqnarray}
\int_\Omega \Delta Z_0&=&-\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2e^{U_{\delta,a,\sigma}} Z_0+O(\delta^2) =-8(n+1) \delta^2 \int_{B_\rho(0)}\frac{\delta^2-|y-a|^2}{(\delta^2+|y-a|^2)^3} +O(\delta^2)\nonumber \\
&=&-8(n+1) \int_{B_{\rho/\delta}(0)}\frac{1-|y|^2}{(1+|y|^2)^3} +O(\delta^2)=O(\delta^2) \label{deltaZ0} \end{eqnarray} and \begin{eqnarray}
\int_\Omega \Delta Z&=&-\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a,\sigma}} Z+O(\delta^3) =-8(n+1) \delta^3 \int_{B_\rho(0)}\frac{y-a}{(\delta^2+|y-a|^2)^3} +O(\delta^3) \nonumber \\
&=&-8(n+1) \int_{B_{\rho/\delta}(0)}\frac{y}{(1+|y|^2)^3} +O(\delta^3)=O(\delta^3) \label{deltaZ} \end{eqnarray} in view of
$$\int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^3}=0\,,\quad \int_{\mathbb{R}^2} \frac{y}{(1+|y|^2)^3}=0.$$ By \eqref{deltaZ0}-\eqref{deltaZ} the following expansions, useful in the sequel, are easily deduced: \begin{equation}\label{pzij}
PZ_{0}=Z_{0} - {1\over|{\Omega}|}\int_{\Omega} Z_{0}+O(\delta^2)\:,\qquad PZ=Z-{1\over|{\Omega}|}\int_{\Omega} Z+O(\delta) \end{equation}
in $C(\overline{\Omega})$, uniformly in $|a|< \rho$ and $\sigma \in \mathcal{B}_r$.
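\noindent The two vanishing integrals invoked above can be checked directly: the substitution $s=|y|^2$ gives
$$\int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^3}=\pi \int_0^{+\infty} \frac{1-s}{(1+s)^3}\, ds=\pi \int_0^{+\infty} \left[\frac{2}{(1+s)^3}-\frac{1}{(1+s)^2}\right] ds=\pi(1-1)=0,$$
while $\int_{\mathbb{R}^2} \frac{y}{(1+|y|^2)^3}=0$ by oddness of the integrand under $y \to -y$.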
\noindent Notice that up to now there is no relation between $a$ and $\delta$. However, as we will show in Remarks \ref{remark1} and \ref{remark2}, the range $|a|\gg\delta$ is not compatible with solving simultaneously $\int_\Omega R \, PZ_0=0$ and $\int_\Omega R\, PZ=0$. Hence, we shall restrict our attention to the case $a=O(\delta)$ in the next sections, so that we can assume that $\eta=\epsilon^2 \delta^{-\frac{2}{n+1}}$ in \eqref{rateeps} and $E_{a,\delta}=\int_{\mathbb{R}^2} \frac{|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^4}$ in \eqref{Eadelta}. We have that \begin{prop} \label{reducedequations}
Assume $|a|\leq C_0 \delta$ for some $C_0>0$. The following expansions hold as $\delta,\eta\to 0$ \begin{eqnarray} \label{solve1b}
\int_\Omega R \, PZ_0&=&- 16 \pi (n+1) |\alpha_a|^2 |c_a|^2 \delta^2 \log \frac{1}{\delta}-8\pi \delta^2 D_a +64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \eta \int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^5} \nonumber \\
&&+o(\delta^2+\eta)+O(\delta^2 |c_a|+|a|^{\frac{1}{n+1}}\delta^2 |\log \delta|+\eta^2), \end{eqnarray} and \begin{eqnarray} \label{solve2b} \int_\Omega R \, PZ = 4 \pi (n+1) \delta \overline{\alpha_a c_a}
-64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \eta \int_{\mathbb{R}^2} {|y+\frac{a}{\delta}|^{2n\over n+1} y
\over(1+|y|^2)^5}+o(\delta|c_a|+\delta|a|+\eta+\delta^2)+O(\eta^2), \end{eqnarray} where $\eta=\epsilon^2 \delta^{-\frac{2}{n+1}}$ and $c_a=c_{a,\sigma_a}$, $\alpha_a$, $D_a$ are given by \eqref{ca}, \eqref{alpha0}, \eqref{Da}, respectively. \end{prop} \begin{proof} Through the changes of variables $y=q(z)$ in $\sigma^{-1}(B_\rho(0))$, $ y \to y^{n+1}$ and $y \to \frac{y-a}{\delta}$ we get that \begin{eqnarray}
&&\int_\Omega \frac{\delta^{\gamma} (|\sigma'(z)|^2+\delta^{\frac{2n}{n+1}})}{(\delta^2+|\sigma(z)-a|^2)^{1+\frac{\gamma}{2}}}=\int_{\sigma^{-1}(B_\rho(0))} \frac{\delta^{\gamma} (|\sigma'(z)|^2+\delta^{\frac{2n}{n+1}})}{(\delta^2+|\sigma(z)-a|^2)^{1+\frac{\gamma}{2}}}+O(\delta^\gamma) \label{1458} \\
&&=O\left(\int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{\delta^{\gamma} (|y|^{2n}+\delta^{\frac{2n}{n+1}})}{(\delta^2+|y^{n+1}-a|^2)^{1+\frac{\gamma}{2}}}\right)+O(\delta^\gamma)=O\left(\int_{B_\rho(0)} \frac{\delta^{\gamma} (1+\delta^{\frac{2n}{n+1}} |y|^{-\frac{2n}{n+1}})}{(\delta^2+|y-a|^2)^{1+\frac{\gamma}{2}}}\right)+O(\delta^\gamma) \nonumber \\
&&=O\left(\int_{B_{\rho/\delta}(0)} \frac{1+ |y+\frac{a}{\delta}|^{-\frac{2n}{n+1}}}{(1+|y|^2)^{1+\frac{\gamma}{2}}}\right)+O(\delta^\gamma)=O(1) \nonumber \end{eqnarray} in view of
$$\int_{B_{\rho/\delta}(0)} \frac{|y+\frac{a}{\delta}|^{-\frac{2n}{n+1}}}{(1+|y|^2)^{1+\frac{\gamma}{2}}}
\leq \int_{B_1(0)} |y|^{-\frac{2n}{n+1}}+\int_{\mathbb{R}^2} \frac{1}{(1+|y|^2)^{1+\frac{\gamma}{2}}}<+\infty.$$ Hence, by Corollary \ref{estrr0cor} we get that
\begin{eqnarray} \label{int|R|}
\int_\Omega |R|= O\left(\delta |c_a|+\delta^{2-\gamma} +\delta^{\frac{2}{n+1}-\gamma} |a|^{2+\gamma}+|c_a||a|^{\frac{n+2}{n+1}}+\eta+\eta^2 \right). \end{eqnarray}
By \eqref{pzij} and \eqref{int|R|} we deduce that \begin{eqnarray} \label{zerotermbisb} \int_\Omega R \, PZ_0=\int_\Omega R (Z_0+1)+o(\delta^2)+O(\eta \delta^2+\eta^2 \delta^2) \end{eqnarray} in view of $\int_{\Omega} R=0$. Since by the H\"older inequality \begin{eqnarray*}
\int_\Omega |Z_0+1|&=& \int_{\sigma^{-1}(B_\rho(0))} \frac{2
\delta^2}{\delta^2+|\sigma(z)-a|^2}+O(\delta^2)=
O\bigg(\int_{B_\rho(0)} |y|^{-{2n \over n+1}} \frac{\delta^2}{\delta^2+|y-a|^2}\bigg)+O(\delta^2)\\ &=& O\bigg(\delta^{1\over n+1} \int_{B_\rho(0)}
\frac{1}{|y|^{2n \over n+1} |y-a|^{1 \over n+1}}\bigg)+O(\delta^2)\\ &=&O\bigg( \delta^{1\over n+1} \bigg[\int_{B_\rho(0)}
\frac{1}{|y|^{2n+1 \over n+1}} \bigg]^{2n \over 2n+1} \bigg[\int_{B_\rho(0)} \frac{1}{|y-a|^{2n+1 \over n+1}} \bigg]^{1 \over 2n+1} \bigg)+O(\delta^2)=O(\delta^{1\over n+1}), \end{eqnarray*} by (\ref{imp}) we have that \begin{eqnarray} \label{firsttermbis} &&\hspace{-0.5cm} \int_{\Omega} (Z_0+1)\left[\lap W+4\pi N\left({e^{u_0+W} \over \int_{\Omega} e^{u_0+W}}-{1\over
|{\Omega}|}\right)\right] \\
&&\hspace{-0.5cm} =\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}(Z_0+1)
\left[\frac{e^{2\re[c_a z^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1\right] \nonumber \\
&&\hspace{-0.5cm} +O(\delta^2 |c_a|)+o(\delta^2)\nonumber\\
&&\hspace{-0.5cm} = \int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{16(n+1)^2 \delta^4 |y|^{2n}}{(\delta^2+|y^{n+1}-a|^2)^3}
\left[\frac{e^{2\re[c_a (q^{-1}(y))^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1 \right] \nonumber \\
&&\hspace{-0.5cm} +O(\delta^2 |c_a|)+o(\delta^2). \nonumber \end{eqnarray} The expansion \eqref{keyexp2} still holds in this context, where the notation $\dots$ stands for terms that give no contribution to the integral term of \eqref{firsttermbis} in view of the analogue of formula \eqref{symmetryint}: \begin{eqnarray} \label{symmetryintbis}
\int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{|y|^m y^k}{(\delta^2+|y^{n+1}-a|^2)^3}=0 \end{eqnarray} for all $m\geq 0$ and integer $k \notin (n+1)\mathbb{N}$. Hence, through the changes of variables $y \to y^{n+1}$ and $y \to \frac{y-a}{\delta}$, by the symmetries we have that \begin{eqnarray}
&&\int_{B_{\rho^{\frac{1}{n+1}}}(0)} \frac{16(n+1)^2 \delta^4 |y|^{2n}}{(\delta^2+|y^{n+1}-a|^2)^3}
e^{2\re[c_a (q^{-1}(y))^{n+1}]}= \int_{B_\rho(0)} \frac{16 (n+1) \delta^4}{(\delta^2+|y-a|^2)^3}
\re[1+2 c_a F_a(y)+|c_a|^2 G_a(y)] \nonumber \\
&&=\int_{B_{\rho}(a)} \frac{16 (n+1) \delta^4}{(\delta^2+|y-a|^2)^3}
\left[1+2 \re[c_a F_a(a)]+|c_a|^2 \re G_a(a)+\frac{1}{4} |c_a|^2 \Delta \re G_a(a) |y-a|^2 +O(|y-a|^{\frac{2(n+2)}{n+1}}) \right]\nonumber\\
&&+O(\delta^4)=8 \pi (n+1) \left[1+2 \re[c_a F_a(a)]+|c_a|^2 \re G_a(a)+\frac{1}{4}|c_a|^2 \Delta \re G_a(a) \delta^2\right] +O(\delta^{\frac{2(n+2)}{n+1}}) \label{1156} \end{eqnarray} in view of \eqref{expF}, \eqref{expG} and
$$\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^3}=\int_{\mathbb{R}^2} \frac{|y|^2}{(1+|y|^2)^3}dy=\frac{\pi}{2},$$ where $F_a$ and $G_a$ are given by \eqref{FG}. By \eqref{1156} we can re-write \eqref{firsttermbis} as \begin{eqnarray} &&\hspace{-0.5cm}\int_{\Omega} (Z_0+1)\left[\lap W+4\pi N\left({e^{u_0+W} \over \int_{\Omega} e^{u_0+W}}-{1\over
|{\Omega}|}\right)\right] \nonumber \\
&&\hspace{-0.5cm}= 8\pi (n+1) \left[ \frac{1+2 \re[c_a F_a(a)]+|c_a|^2 \re G_a(a)+\frac{1}{4}|c_a|^2 \Delta \re G_a(a) \delta^2}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}
-1 \right] +O(\delta^2 |c_a|) \nonumber\\
&&\hspace{-0.5cm}+o(\delta^2)= -16 \pi (n+1) |\alpha_a|^2 |c_a|^2 \delta^2 \log \frac{1}{\delta}-8\pi \delta^2 D_a +O(\delta^2 |c_a|+|a|^{\frac{1}{n+1}}\delta^2 |\log \delta|)+o(\delta^2) \label{1046pr} \end{eqnarray}
in view of $\Delta \re G_a(a)=4|\alpha_a|^2+O(|a|^{\frac{1}{n+1}})$. By (\ref{eps4}) we also deduce that \begin{eqnarray} &&\int_{\Omega}\frac{64 \pi^2 N^2 \epsilon^2 \int_\Omega e^{2u_0+2W}}{(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2W}})^2} (Z_0+1) \left({e^{u_0+W}\over \int_{\Omega} e^{u_0+W}}-{e^{2u_0+2W}\over \int_{\Omega} e^{2u_0+2W}}\right)\nonumber \\
&&=\,\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}(Z_0+1)
\left[\frac{8(n+1)^2\epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}} \delta^{\frac{2}{n+1}}}
E_{a,\delta}-\epsilon^2 |\sigma'(z)|^2 e^{U_{\delta,a}} \right]\left[1+O(|c_a||z|^{n+1}+\eta)+o(1) \right]\nonumber \\
&&+O(\delta^4 \eta)=\frac{128 (n+1)^3\epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}} \delta^{\frac{2}{n+1}}}
E_{a,\delta} \int_{B_\rho(0)}{\delta^4\over(\delta^2+|y-a|^2)^3} \left[1+O(|c_a||y|+\eta)+o(1) \right] \nonumber \\
&&-128(n+1)^3\epsilon^2 |\alpha_a|^{-\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^6|y|^{2n\over n+1}}{(\delta^2+|y-a|^2)^5}\left[1+O(|y|^{1\over n+1}+\eta)+o(1) \right] +O(\delta^4 \eta)\nonumber \\
&& =64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \epsilon^2 \delta^{-\frac{2}{n+1}} E_{a,\delta}-128(n+1)^3\epsilon^2 |\alpha_a|^{-\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^6|y+a|^{2n\over n+1}}{(\delta^2+|y|^2)^5}\left[1+O(|y|^{1\over n+1}+\eta)+o(1) \right]\nonumber \\ && +o(\eta+\delta^2)+O(\eta^2) \nonumber \end{eqnarray} in view of \eqref{1852}. Since
$$\delta^{\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^6|y+a|^{2n\over n+1}}{(\delta^2+|y|^2)^5}\left[1+O(|y|^{1\over n+1}+\eta)+o(1) \right] =\int_{\mathbb{R}^2}\frac{|y+\frac{a}{\delta}|^{2n\over n+1}}{(1+|y|^2)^5}+o(1)+O(\eta)$$
when $|a|=O(\delta)$, we then have that \begin{eqnarray} &&\int_{\Omega}\frac{64 \pi^2 N^2 \epsilon^2 \int_\Omega e^{2u_0+2W}}{(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2W}})^2} (Z_0+1) \left({e^{u_0+W}\over \int_{\Omega} e^{u_0+W}}-{e^{2u_0+2W}\over \int_{\Omega} e^{2u_0+2W}}\right)\nonumber \\
&& =64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \eta \int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^5}+o(\eta+\delta^2)+O(\eta^2) \label{2024} \end{eqnarray} in view of \eqref{Eadelta}. Inserting \eqref{1046pr} and \eqref{2024} into \eqref{zerotermbisb}, we get the validity of \eqref{solve1b}. \begin{rem} \label{remark1}
Notice that in the range $|a|\gg\delta$ we find that
$$\delta^{\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^6|y+a|^{2n\over n+1}}{(\delta^2+|y|^2)^5}\left[1+O\bigg(|y|^{1\over n+1}+\eta\Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}}\bigg)+o(1) \right] =\frac{\pi}{4} \Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}}\bigg[1+o(1)+O\Big(\eta\Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}}\Big)\bigg]$$
in view of the inequality $|y+a|^{\frac{2n}{n+1}}=|a|^{\frac{2n}{n+1}}+O(|a|^{\frac{n-1}{n+1}}|y|+|y|^{\frac{2n}{n+1}})$, so that the main order of $\int_\Omega R PZ_0$ in this range is essentially given by \begin{eqnarray*}
- 16 \pi (n+1) |\alpha_a|^2 |c_a|^2 \delta^2 \log \frac{1}{\delta}-8\pi \delta^2 D_a - \frac{32 \pi}{3} (n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \eta \Big(\frac{|a|}{\delta}\Big)^{\frac{2n}{n+1}}.\end{eqnarray*} \end{rem}
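\noindent For the reader's convenience, the constant $\frac{\pi}{4}$ appearing in the previous remark comes from the elementary computation (polar coordinates and $t=1+r^2$):

```latex
$$\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^5}
  = 2\pi \int_0^{+\infty} \frac{r\,dr}{(1+r^2)^5}
  = \pi \int_1^{+\infty} \frac{dt}{t^5}
  = \frac{\pi}{4}.$$
```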
\noindent By \eqref{pzij} and \eqref{int|R|} we deduce that \begin{eqnarray} \label{zerotermbisc}
\int_\Omega R \, PZ =\int_\Omega R Z+o(\delta |c_a|+\delta |a|+\eta+\delta^2)+O(\eta^2 \delta) \end{eqnarray} in view of $\int_{\Omega} R=0$. Since, as before, \begin{eqnarray*}
\int_\Omega |Z|&=& \int_{\sigma^{-1}(B_\rho(0))} \frac{
\delta|\sigma(z)-a|}{\delta^2+|\sigma(z)-a|^2}+O(\delta)=
O\bigg(\int_{B_\rho(0)} |y|^{-{2n \over n+1}} \frac{\delta |y-a|}{\delta^2+|y-a|^2}\bigg)+O(\delta)\\ &=& O\bigg(\delta^{1\over n+1} \int_{B_\rho(0)}
\frac{1}{|y|^{2n \over n+1} |y-a|^{1 \over n+1}}\bigg)+O(\delta)=O(\delta^{1\over n+1}), \end{eqnarray*} by (\ref{imp}) we have that \begin{eqnarray}
&&\hspace{-0.5cm} \int_{\Omega} Z\,\left[\lap W+4\pi N\left({ e^{u_0+W} \over \int_{\Omega} e^{u_0+W}}-{1\over |{\Omega}|}\right)\right] \label{1011} \\
&&\hspace{-0.5cm} =\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}Z
\left[\frac{e^{2\re[c_a z^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}-1\right]\nonumber\\
&&\hspace{-0.5cm} +O(\delta^2 |c_a|)+o(\delta^2) \nonumber\\
&&\hspace{-0.5cm}=\int_{B_{\rho^{\frac{1}{n+1}} }(0)} \frac{8(n+1)^2 \delta^3 |y|^{2n}(y^{n+1}-a)}{(\delta^2+|y^{n+1}-a|^2)^3} \frac{e^{2\re[c_a (q^{-1}(y))^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}\nonumber \\
&&\hspace{-0.5cm} -\int_{B_\rho(0)} \frac{8(n+1) \delta^3 (y-a)}{(\delta^2+|y-a|^2)^3}+O(\delta^2 |c_a|)+o(\delta^2)\nonumber \\
&&\hspace{-0.5cm} =\frac{\int_{B_{\rho^{\frac{1}{n+1}} }(0)} \frac{8(n+1)^2 \delta^3 |y|^{2n}(y^{n+1}-a)}{(\delta^2+|y^{n+1}-a|^2)^3} e^{2\re[c_a (q^{-1}(y))^{n+1}]}}{1+2 \re[c_a F_a(a)] + |c_a|^2 \re G_a(a)+\frac{1}{2} |c_a|^2 \Delta \re G_a(a) \delta^2 \log \frac{1}{\delta}+\frac{\delta^2}{n+1} D_a}+O(\delta^2 |c_a|)+o(\delta^2)\nonumber \end{eqnarray} in view of
$$\int_{B_\rho(a)} \frac{8(n+1) \delta^3 (y-a)}{(\delta^2+|y-a|^2)^3}=0.$$ Since expansion \eqref{keyexp2} is still valid in view of \eqref{symmetryintbis}, through the changes of variables $y \to y^{n+1}$ and $y \to \frac{y-a}{\delta}$, by the symmetries we have that \begin{eqnarray} \label{1158}
&&\int_{B_{\rho^{\frac{1}{n+1}} }(0)} \frac{8(n+1)^2 \delta^3 |y|^{2n}(y^{n+1}-a)}{(\delta^2+|y^{n+1}-a|^2)^3} e^{2\re[c_a (q^{-1}(y))^{n+1}]}\nonumber \\
&&=\int_{B_\rho(0)} \frac{8(n+1) \delta^3(y-a)}{(\delta^2+|y-a|^2)^3} \re[1+2 c_a F_a(y)+|c_a|^2 G_a(y)] \nonumber \\
&&=\int_{B_{\rho}(a)} \frac{8 (n+1) \delta^3}{(\delta^2+|y-a|^2)^3}
\left[\overline{c_a F_a'(a)} |y-a|^2+\frac{1}{2} |c_a|^2 (\partial_1+i\partial_2) \re G_a(a) |y-a|^2 +O(|c_a|^2 |y-a|^3) \right]+O(\delta^3)\nonumber\\
&&=4 \pi (n+1) \delta \left[\overline{c_a F_a'(a)} +\frac{1}{2} |c_a|^2 (\partial_1+i\partial_2) \re G_a (a) \right] +O(\delta^2 |c_a|^2+\delta^3) \end{eqnarray}
in view of \eqref{expF}, \eqref{expG} and $\int_{\mathbb{R}^2} \frac{|y|^2}{(1+|y|^2)^3}dy=\frac{\pi}{2}$, where $F_a$ and $G_a$ are given by \eqref{FG}. By \eqref{1158} we can re-write \eqref{1011} as \begin{eqnarray} &&\int_{\Omega} Z \left[\lap W+4\pi N\left({e^{u_0+W} \over \int_{\Omega} e^{u_0+W}}-{1\over
|{\Omega}|}\right)\right] =4 \pi (n+1) \delta \left[\overline{c_a F_a'(a)} +\frac{1}{2} |c_a|^2 (\partial_1+i\partial_2) \re G_a(a) \right] \nonumber\\
&&+o(\delta |c_a|+\delta^2)=4 \pi (n+1) \delta \overline{\alpha_a c_a} +o(\delta |c_a|+\delta^2) \label{firstterm} \end{eqnarray}
in view of $F_a'(a)=\alpha_a+O(|a|)$ and $\frac{1}{2} (\partial_1+i\partial_2) \re G_a(a)=O(|a|)$. As for the second term of $R$, by (\ref{eps4}) we have that \begin{eqnarray} &&\int_{\Omega}\frac{64 \pi^2 N^2 \epsilon^2 \int_\Omega e^{2u_0+2W}}{(\int_\Omega e^{u_0+W}+\sqrt{(\int_\Omega e^{u_0+W})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2W}})^2} Z \left({e^{u_0+W}\over \int_{\Omega} e^{u_0+W}}-{e^{2u_0+2W}\over \int_{\Omega} e^{2u_0+2W}}\right)\nonumber \\
&&=\,\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}Z
\left[\frac{8(n+1)^2\epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}} \delta^{\frac{2}{n+1}}}
E_{a,\delta}-\epsilon^2 |\sigma'(z)|^2 e^{U_{\delta,a}} \right] \left[1+O(|c_a||z|^{n+1}+\eta)+o(1) \right] \nonumber\\
&&+O(\delta^3\eta) = \frac{64(n+1)^3 \epsilon^2}{\pi |\alpha_a|^{\frac{2}{n+1}} \delta^{\frac{2}{n+1}}}
E_{a,\delta} \int_{B_\rho(0)}\frac{\delta^3 (y-a)}{(\delta^2+|y-a|^2)^3}dy \left[1+O(|c_a||y|+\eta)+o(1) \right] \nonumber \\
&&-64 (n+1)^3 \epsilon^2 |\alpha_a|^{-\frac{2}{n+1}} \int_{B_\rho(0)} \frac{\delta^5 |y|^{2n \over n+1}(y-a)}{(\delta^2+|y-a|^2)^5}\left[1+O(|y|^{\frac{1}{n+1}}+\eta)+o(1) \right]+O(\delta^3\eta) \nonumber \\
&&=\,-64(n+1)^3 |\alpha_a|^{-\frac{2}{n+1}} \eta \int_{\mathbb{R}^2} {|y+\frac{a}{\delta}|^{2n\over n+1} y
\over(1+|y|^2)^5}+o(\eta)+O(\eta^2) \label{secondterm} \end{eqnarray} in view of \eqref{1852} and
$$\int_{B_\rho(0)}\frac{\delta^3 (y-a)}{(\delta^2+|y-a|^2)^3}dy=\int_{B_\rho(a)}\frac{\delta^3 (y-a)}{(\delta^2+|y-a|^2)^3}dy+O(\delta^3)=O(\delta^3).$$ Inserting (\ref{firstterm}) and (\ref{secondterm}) into (\ref{zerotermbisc}), we get the validity of (\ref{solve2b}).\qed \end{proof}
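\noindent In the proof above we have repeatedly used the two elementary values, obtained in polar coordinates via the substitution $t=1+r^2$:

```latex
$$\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^3}
  = \pi \int_1^{+\infty} \frac{dt}{t^3} = \frac{\pi}{2}\,,\qquad
\int_{\mathbb{R}^2} \frac{|y|^2\,dy}{(1+|y|^2)^3}
  = \pi \int_1^{+\infty} \frac{t-1}{t^3}\,dt = \frac{\pi}{2}.$$
```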
\begin{rem}\label{remark2} Since for $|a|\gg\delta$ and $n>1$ \begin{eqnarray*}
\delta^{\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^5 |y|^{2n\over n+1}(y-a)}{(\delta^2+|y-a|^2)^5}=\delta^{\frac{2}{n+1}} \int_{B_\rho(0)}\frac{\delta^5 |y+a|^{2n\over n+1}y}{(\delta^2+|y|^2)^5}+o(1)= \frac{\pi n}{12(n+1)} \Big(\frac{|a|}{\delta}\Big)^{-\frac{2}{n+1}} \frac{a}{\delta} [1+o(1)] \end{eqnarray*} in view of
$$\int_{\mathbb{R}^2} \frac{|y|^2}{(1+|y|^2)^5}=\int_{\mathbb{R}^2} \frac{1}{(1+|y|^2)^4}-\int_{\mathbb{R}^2} \frac{1}{(1+|y|^2)^5}= \frac{\pi}{12}$$ and the inequality
$$|y+a|^{\frac{2n}{n+1}}=|a|^{\frac{2n}{n+1}}+\frac{n}{n+1} |a|^{-\frac{2}{n+1}}(a\overline{y}+\overline{a}y)+O(|a|^{-\frac{2}{n+1}}|y|^2+|y|^{\frac{2n}{n+1}}),$$ notice that the main order of $\int_\Omega R PZ$, in this range, is essentially given by
$$4 \pi (n+1) \delta \overline{\alpha_a c_a}-\frac{16}{3} \pi n (n+1)^2 \epsilon^2 \delta^{-\frac{2}{n+1}} |\alpha_a|^{-\frac{2}{n+1}} \Big(\frac{|a|}{\delta}\Big)^{-\frac{2}{n+1}} \frac{a}{\delta}.$$
Since $\alpha_a$ stays uniformly away from zero, the vanishing of $\int_\Omega R\, PZ$, which is equivalent to having $\epsilon^2 \delta^{-\frac{2}{n+1}} (\frac{|a|}{\delta})^{\frac{2n}{n+1}} \sim \overline{\alpha_a c_a a}$, is generally not compatible in the range $|a|\gg\delta$ with the vanishing of $\int_\Omega R\,PZ_0$ in view of Remark \ref{remark1}, which can take place only if $c_0=0$ (in which case $c_a \sim a$). Indeed, the vanishing of both $\int_\Omega R\,PZ$ and $\int_\Omega R\,PZ_0$ in the range $|a|\gg\delta$ would imply the contradiction $|a|^2 \sim \delta^2$. This explains why we do not consider the case $|a|\gg\delta$. \end{rem}
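\noindent For the reader's convenience, the value $\frac{\pi}{12}$ used in Remark \ref{remark2} can also be computed directly in polar coordinates:

```latex
$$\int_{\mathbb{R}^2} \frac{|y|^2}{(1+|y|^2)^5}\,dy
  = 2\pi \int_0^{+\infty} \frac{r^3\,dr}{(1+r^2)^5}
  = \pi \int_1^{+\infty} \frac{t-1}{t^5}\,dt
  = \pi\left(\frac{1}{3}-\frac{1}{4}\right) = \frac{\pi}{12}.$$
```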
\section{Proof of the main results}\label{mainresults} In the previous section, we have built up an approximating function $W=PU_{\delta,a,\sigma_a}$. We will now look for solutions $w$ of the form $w=W+\phi$, where $\phi$ is a small correcting term. In terms of $\phi$, problem \eqref{3} is equivalent to finding a doubly-periodic solution $\phi$ of \begin{equation}\label{ephi} L(\phi)=-[R+N(\phi)]\qquad\text{ in ${\Omega}$} \end{equation} with $\int_\Omega \phi=0$. Recalling the notation $B(w)=16 \pi N (\int_{\Omega} e^{2u_0+2w})(\int_{\Omega} e^{u_0+w})^{-2}$, the linear operator $L$ is given by $$L(\phi) = \Delta \phi + \mathcal{K} \phi+\tilde \gamma(\phi),$$ where $$\mathcal{K}=4\pi N {e^{u_0+W}\over\int_{\Omega} e^{u_0+W}} +\frac{4 \pi N \epsilon^2 B(W)}{\left(1+\sqrt{1-\epsilon^2 B(W)}\right)^2} \left({e^{u_0+W}\over\int_{\Omega} e^{u_0+W}}- 2 \frac{e^{2u_0+2W}}{\int_\Omega e^{2u_0+2W}}\right) $$ and \begin{equation*} \begin{split} \tilde \gamma(\phi)&=-4\pi N {e^{u_0+W} \int_{\Omega} e^{u_0+W} \phi \over(\int_{\Omega} e^{u_0+W})^2 }-\frac{4 \pi N \epsilon^2 B(W)}{\left(1+\sqrt{1-\epsilon^2 B(W)}\right)^2} {e^{u_0+W} \over (\int_{\Omega} e^{u_0+W})^2} \int_{\Omega} e^{u_0+W} \phi \\ &+\frac{8 \pi N \epsilon^2 B(W)}{\left(1+\sqrt{1-\epsilon^2 B(W)}\right)^2} \frac{ e^{2u_0+2W}}{(\int_\Omega e^{2u_0+2W})^2} \int_\Omega e^{2u_0+2W} \phi \\ &+4 \pi N \epsilon^2 \frac{DB(W)[\phi]}{(1+\sqrt{1-\epsilon^2 B(W)})^2\sqrt{1-\epsilon^2 B(W)}} \left(\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{e^{2u_0+2W}}{\int_\Omega e^{2u_0+2W}}\right) \end{split} \end{equation*} with $$DB(W)[\phi]= 2 B(W) \left( {\int_{\Omega} e^{2u_0+2W} \phi \over \int_{\Omega} e^{2u_0+2W}}- {\int_{\Omega} e^{u_0+W} \phi \over \int_{\Omega} e^{u_0+W}}\right).$$ The nonlinear term $N(\phi)$, which is quadratic in $\phi$, is given by \begin{eqnarray} \label{nlt} &&N(\phi)=4\pi N\left[\frac{e^{u_0+W+\phi}}{\int_\Omega e^{u_0+W+\phi}}-\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-{e^{u_0+W}\over\int_{\Omega} 
e^{u_0+W}}\left(\phi-\frac{\int_{\Omega} e^{u_0+W}\phi}{\int_{\Omega} e^{u_0+W}}\right)\right]\nonumber\\ &&+\left[\frac{4 \pi N \epsilon^2 B(W+\phi)}{(1+\sqrt{1-\epsilon^2 B(W+\phi)})^2}-\frac{4 \pi N \epsilon^2 B(W)}{(1+\sqrt{1-\epsilon^2 B(W)})^2}-\frac{4 \pi N \epsilon^2 DB(W)[\phi]}{(1+\sqrt{1-\epsilon^2 B(W)})^2\sqrt{1-\epsilon^2 B(W)}} \right]\times \nonumber\\ &&\times \left(\frac{e^{u_0+W+\phi}}{\int_\Omega e^{u_0+W+\phi}}-\frac{ e^{2(u_0+W+\phi)}}{\int_\Omega e^{2(u_0+W+\phi)}}\right)\nonumber \\ &&+\frac{4 \pi N \epsilon^2 B(W)}{\left(1+\sqrt{1-\epsilon^2 B(W)}\right)^2}\left[\frac{e^{u_0+W+\phi}}{\int_\Omega e^{u_0+W+\phi}}-\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-{e^{u_0+W} \over\int_{\Omega} e^{u_0+W}}\left(\phi-\frac{\int_{\Omega} e^{u_0+W}\phi}{\int_{\Omega} e^{u_0+W}}\right) \right] \\ &&-\frac{4 \pi N \epsilon^2 B(W)}{\left(1+\sqrt{1-\epsilon^2 B(W)}\right)^2}\left[\frac{e^{2(u_0+W+\phi)}}{\int_\Omega e^{2(u_0+W+\phi)}}-\frac{e^{2(u_0+W)}}{\int_\Omega e^{2(u_0+W)}}-2 \frac{ e^{2(u_0+W)}}{\int_\Omega e^{2(u_0+W)}}\left( \phi-\frac{\int_\Omega e^{2(u_0+W)} \phi}{\int_\Omega e^{2(u_0+W)}} \right)\right] \nonumber\\ &&+\frac{4 \pi N \epsilon^2 DB(W)[\phi]}{(1+\sqrt{1-\epsilon^2 B(W)})^2\sqrt{1-\epsilon^2 B(W)}} \left(\frac{e^{u_0+W+\phi}}{\int_\Omega e^{u_0+W+\phi}}-\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{ e^{2(u_0+W+\phi)}}{\int_\Omega e^{2(u_0+W+\phi)}}+\frac{ e^{2(u_0+W)}}{\int_\Omega e^{2(u_0+W)}}\right). \nonumber \end{eqnarray} Notice that we can re-write $\tilde \gamma (\phi)$ as \begin{equation*} \begin{split} \tilde \gamma(\phi)&=- \mathcal{K} {\int_{\Omega} e^{u_0+W}\phi \over \int_{\Omega} e^{u_0+W}} +\frac{8\pi N \epsilon^2 B(W)}{(1+\sqrt{1-\epsilon^2 B(W)})^2 \sqrt{1-\epsilon^2 B(W)}}
\left({\int_{\Omega} e^{2(u_0+W)} \phi \over \int_{\Omega} e^{2(u_0+W)}}- {\int_{\Omega} e^{u_0+W} \phi \over \int_{\Omega} e^{u_0+W} }\right) \left[{ e^{u_0+W} \over \int_{\Omega} e^{u_0+W}}\right.\\ &\left.+(\sqrt{1-\epsilon^2 B(W)}-1){ e^{2(u_0+W)} \over \int_{\Omega} e^{2(u_0+W)}}\right]\\ &=\mathcal{K}\left[- {\int_{\Omega} e^{u_0+W}\phi \over \int_{\Omega} e^{u_0+W}} +\frac{\epsilon^2 B(W)}{(1+\sqrt{1-\epsilon^2 B(W)})\sqrt{1-\epsilon^2 B(W)}}
\left({\int_{\Omega} e^{2(u_0+W)} \phi \over \int_{\Omega} e^{2(u_0+W)}}- {\int_{\Omega} e^{u_0+W} \phi \over \int_{\Omega} e^{u_0+W}}\right) \right], \end{split} \end{equation*} and $L$ as \begin{equation}\label{ol} L(\phi) = \Delta \phi + \mathcal{K} \left[ \phi+ \gamma(\phi)\right], \end{equation} where $$\gamma(\phi)=- {\int_{\Omega} e^{u_0+W}\phi \over \int_{\Omega} e^{u_0+W}} +\frac{\epsilon^2 B(W)}{(1+\sqrt{1-\epsilon^2 B(W)})\sqrt{1-\epsilon^2 B(W)}}
\left({\int_{\Omega} e^{2(u_0+W)} \phi \over \int_{\Omega} e^{2(u_0+W)}}- {\int_{\Omega} e^{u_0+W} \phi \over \int_{\Omega} e^{u_0+W}}\right).$$ Let us observe that $$\int_{\Omega} R=\int_{\Omega} L(\phi)=\int_{\Omega} N(\phi)=0.$$
\noindent Since the operator $L$ is not invertible, the equation $L(\phi)=-R-N(\phi)$ is not generally solvable. The linear theory we will develop in Appendix B states that $L$ has a kernel which is almost generated by $PZ_0$, $PZ$ and $\overline{PZ}$, leading to the following: \begin{prop} \label{prop4.1} Let $M_0>0$. There exists $\eta_0>0$ small such that for any $0<\delta\leq \eta_0$,
$|\log \delta| \epsilon^2 \leq \eta_0 \delta^{2\over n+1}$, $|a|\leq M_0 \delta$ and $h\in L^\infty({\Omega})$ with $\int_{\Omega} h=0$ there is a unique solution $\phi$, $d_0\in{\mathbb{R}}$ and $d \in{\mathbb C}$ to \begin{equation}\label{plcobis} \left\{\begin{array}{ll} L(\phi) =h + d_0 \Delta PZ_{0}+\re[d \lap PZ] &\text{in }{\Omega}\\ \int_{\Omega } \phi=\int_{\Omega } \phi \Delta PZ_0 = \int_{\Omega} \phi \Delta PZ=0.& \end{array} \right. \end{equation} Moreover, there is a constant $C>0$ such that
$$\|\phi \|_\infty \le C\left(\log \frac 1\delta \right)\|h\|_*,\qquad
|d_{0}|+|d| \le C\|h\|_*.$$ \end{prop} \noindent As a consequence, in Appendix C we will show \begin{prop}\label{nlp} Let $M_0>0$. There exists $\eta_0>0$ small such that for any
$0<\delta\leq\eta_0$, $|\log \delta|^2 \epsilon^2\leq \eta_0 \delta^{2\over n+1}$
and $|a|\leq M_0 \delta$ there is a unique solution $\phi=\phi(\delta,a)$, $d_0=d_0(\delta,a)\in{\mathbb{R}}$ and $d=d(\delta,a)\in{\mathbb C}$ to \begin{equation}\label{linear} \left\{\begin{array}{ll} L(\phi) =-[R+N(\phi)] + d_0 \Delta PZ_{0}+\re[d \lap PZ] &\text{in }{\Omega}\\ \int_{\Omega } \phi=\int_{\Omega } \phi \Delta PZ_0= \int_{\Omega} \phi \Delta PZ=0.& \end{array} \right. \end{equation} Moreover, the map $(\delta,a)\mapsto \phi(\delta,a)$ is $C^1$ with \begin{equation}\label{estphi}
\|\phi\|_\infty\le C |\log \delta| \|R\|_*.
\end{equation} \end{prop} \noindent The function $W+\phi$ will be a true solution of equation (\ref{3}) once we adjust $\delta$ and $a$ to have $d_0(\delta,a)=d(\delta,a)=0$. \noindent The crucial point is the following: \begin{lem} \label{1039} Let $\phi=\phi(\delta,a)$, $d_0=d_0(\delta,a)\in{\mathbb{R}}$ and $d=d(\delta,a)\in{\mathbb C}$ be the solution of \eqref{linear} given by Proposition \ref{nlp}. There exists $\eta_0>0$ such that if
$0<\delta \leq \eta_0$, $|a| \leq \eta_0$ and \begin{equation} \label{solve} \int_\Omega (L(\phi)+N(\phi)+R) PZ_0=0,\qquad \int_\Omega (L(\phi)+N(\phi)+R) PZ=0 \end{equation} do hold, then $W+\phi$ is a solution of \eqref{3}, i.e. $d_0(\delta,a)=d(\delta,a)=0$. \end{lem}
\begin{proof} Since by (\ref{pzij}) and $\|Z_0\|_\infty+\|Z\|_\infty\leq 2$ there hold \begin{eqnarray*}
\int_{\Omega} \lap PZ_0PZ_0&=& \int_{\Omega}\lap Z_0 PZ_0=-\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}Z_0(Z_0+1)+O(\delta^2)\\ &=&- 16 (n+1) \delta^4 \int_{B_\rho(0)}
\frac{\delta^2-|y-a|^2}{(\delta^2+|y-a|^2)^4} +O(\delta^2) =- \frac{8 \pi}{3} (n+1) +O(\delta^2) \end{eqnarray*} and \begin{eqnarray*}
\int_{\Omega} \lap PZPZ_0&=&\int_{\Omega}\lap Z PZ_0=-\int_{\sigma^{-1}(B_\rho(0))}|\sigma'(z)|^2 e^{U_{\delta,a}}Z(Z_0+1)+O(\delta^2)\\ &=& -\int_{B_\rho(0)}{16(n+1)\delta^5 (y-a)
\over(\delta^2+|y-a|^2)^4} +O(\delta^2)=-
\int_{B_\rho(0)}{16(n+1)\delta^5 y \over(\delta^2+|y|^2)^4} +O(\delta^2)=O(\delta^2) \end{eqnarray*} in view of \eqref{deltaZ0}-\eqref{deltaZ} and
$$\int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^4}dy=2 \int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^4}-\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^3}=\frac{\pi}{6},$$ by (\ref{linear}) we rewrite the first of (\ref{solve}) as \begin{eqnarray*} 0=d_0 \int_\Omega \lap PZ_0PZ_0+\int_{\Omega}\re[d \lap PZPZ_0]=-\frac{8}{3}\pi
(n+1) d_0+O(\delta^2 |d_0|+\delta^2 |d|). \end{eqnarray*} Similarly, the second of (\ref{solve}) gives that \begin{eqnarray*} 0&=& d_0 \int_\Omega \lap PZ_0 PZ+\int_{\Omega} {1\over 2}\left[d \lap PZ+\bar d \lap\overline{PZ}\right]PZ=
-\int_{\sigma^{-1}(B_\rho(0))} {1\over 2} |\sigma'(z)|^2 e^{U_{\delta,a}} \left[d Z +\bar d\ \overline{Z} \right] Z\\
&&+ O(\delta^2 |d_0|+\delta |d|)= -4 (n+1) \bar d\int_{\mathbb{R}^2} \frac{|y|^2 }{(1+|y|^2)^4}+ O(\delta^2
|d_0|+\delta |d|) \end{eqnarray*} in view of $\int_\Omega \lap PZ_0 PZ=\int_\Omega \lap PZ PZ_0=O(\delta^2)$, \eqref{deltaZ} and \eqref{pzij}. Hence, (\ref{solve}) can be simply re-written as $d_0+O(\delta^2
|d_0|+\delta^2 |d|)=0$, $d+O(\delta^2 |d_0|+\delta |d|)=0$. Summing up the two relations, we then obtain $|d_0|+|d|=\delta O(|d_0|+|d|)$ which implies $d_0=d=0$.\qed \end{proof}
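\noindent The value $\frac{\pi}{6}$ in the proof above follows from the decomposition $1-|y|^2=2-(1+|y|^2)$ together with the elementary values

```latex
$$\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^4}
  = \pi \int_1^{+\infty} \frac{dt}{t^4} = \frac{\pi}{3}\,,\qquad
\int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^3} = \frac{\pi}{2}\,,$$
```

so that $\int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^4}\,dy = \frac{2\pi}{3}-\frac{\pi}{2}=\frac{\pi}{6}$, as claimed.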
\begin{rem} \label{remark2bis} Since $\phi$ is sufficiently small, the system \eqref{solve} will be a perturbation of the reduced equations $\int_\Omega R \, PZ_0=0$, $\int_\Omega R \, PZ=0$. The integral coefficient in \eqref{solve1b} is negative for all $\frac{a}{\delta}$, as we will see in Appendix D. Since $\alpha_a \to \alpha_0 =\frac{\mathcal{H}(0)}{n+1}\not=0$ and $c_a \to c_0$ as $a \to 0$, we can always exclude the case $c_0 \not=0$. Indeed, in such a case the equation $\int_\Omega R \, PZ_0=0$ yields $\epsilon^2\delta^{-{2\over n+1}} \sim \delta^2 |\log \delta|$ as $\delta \to 0$ by means of \eqref{solve1b} (we are implicitly assuming $\epsilon^2\delta^{-{2\over n+1}} \to 0$, which is a natural range for solving the reduced equations through \eqref{solve1b}-\eqref{solve2b}). This is not compatible with $\int_\Omega R \, PZ=0$, which allows at most $\delta=O(\epsilon^2\delta^{-{2\over n+1}})$ by means of \eqref{solve2b}. \end{rem}
\noindent The last ingredient is an expansion of the system (\ref{solve}) with the aid of Proposition \ref{reducedequations}: \begin{prop} \label{1219}
Assume $c_0=0$ and $|a|\leq M_0\delta$ for some $M_0>0$. The following expansions hold as $\delta\to 0$ and $\epsilon\to 0$: \begin{eqnarray}
\int_\Omega (L(\phi)+N(\phi)+R) PZ_0&=& -8\pi\delta^2 D_0 +64(n+1)^{\frac{3n+5}{n+1}} |\mathcal{H}(0)|^{-\frac{2}{n+1}} \epsilon^2 \delta^{-{2 \over n+1}}
\int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\frac{a}{\delta} |^{\frac{2n}{n+1}}}{(1+|y|^2)^5} \nonumber \\
&&+ o(\delta^2+\epsilon^2\delta^{-{1\over n+1}})+O(\epsilon^4 \delta^{-{2\over n+1}}|\log \delta|^2+\epsilon^8 \delta^{-{4\over n+1}}|\log \delta|^2) \label{solve1} \end{eqnarray} and \begin{eqnarray}
\int_\Omega (R+L(\phi)+N(\phi)) PZ &=& 4 \pi \delta (\bar \Upsilon a+ \bar \Gamma \bar a) -64(n+1)^{\frac{3n+5}{n+1}} |\mathcal{H}(0)|^{-\frac{2}{n+1}} \epsilon^2 \delta^{-{2\over n+1}}
\int_{\mathbb{R}^2} {|y+\frac{a}{\delta}|^{2n\over n+1} y
\over(1+|y|^2)^5} \nonumber \\
&&+o(\delta^2+\epsilon^2 \delta^{-\frac{2}{n+1}})+O(\epsilon^4 \delta^{-{2\over n+1}}|\log \delta|^2+\epsilon^8 \delta^{-{4\over n+1}}|\log \delta|^2), \label{solve2} \end{eqnarray} where $D_0$ and $\Gamma$, $\Upsilon$ are defined in \eqref{ggg} and Lemma \ref{derivca}, respectively. \end{prop} \begin{proof}
First, note that from the assumptions and \eqref{ere}, we find that $\|R\|_*=O(\delta^{2-\gamma}+\eta +\eta^2)$, where $\eta=\epsilon^2 \delta^{-\frac{2}{n+1}}$. Hence, since $|\gamma(\phi)|=O((1+\eta)\|\phi\|_\infty)$ in view of \eqref{BW}, by (\ref{estphi}), (\ref{diesis}), (\ref{diesisdiesis}) and (\ref{star}) we have that \begin{eqnarray} \label{zerotermbis}
\int_\Omega (R+L(\phi)+N(\phi)) PZ_0&=& \int_\Omega R PZ_0+O\left( (1+\eta) \Big\|\tilde L\Big(PZ_0+\frac{1}{|\Omega|}\int_\Omega Z_0\Big)\Big\|_*\|\phi\|_\infty+\|\phi\|_\infty^2\right)\\ &=& \int_\Omega R P Z_0+ o(\delta^2+\eta)+O(\eta^2+\eta^4) \nonumber \end{eqnarray} and \begin{eqnarray} \label{zeroterm}
\int_\Omega (R+L(\phi)+N(\phi)) PZ&=&\int_\Omega RPZ+O\left( (1+\eta) \Big\|\tilde L\Big(PZ+\frac{1}{|\Omega|}\int_\Omega Z\Big)\Big\|_*\|\phi\|_\infty+\|\phi\|_\infty^2\right)\\ &=& \int_\Omega R PZ+ o(\delta^2+\eta)+O(\eta^2+ \eta^4)\nonumber \end{eqnarray} in view of $PZ_0=O(1)$ and $PZ=O(1)$, where $\tilde L(\phi)=\lap \phi+\mathcal{K}\phi$. Since by Lemma \ref{derivca} $\mathcal{H}(0)
c_a=\Gamma a+\Upsilon \bar a+o(|a|)$ as $a \to 0$ in view of $c_0=0$, the desired expansions \eqref{solve1}-\eqref{solve2} follow by a combination of \eqref{solve1b}-\eqref{solve2b} and \eqref{zerotermbis}-\eqref{zeroterm}. We have used that $\alpha_a \to \alpha_0=\frac{\mathcal{H}(0)}{n+1}$ as $a \to 0$ in view of \eqref{0942}, where $\alpha_a$ is given by (\ref{alpha0}), and $D_a \to D_0$ as $a \to 0$, where $D_a$ is given by \eqref{Da}.\qed \end{proof}
\noindent Thanks to \eqref{solve1}-\eqref{solve2}, the aim is to find $(\delta(\epsilon),a(\epsilon))$ so that \eqref{solve} holds. To simplify the notation, we denote $$\varphi_0(\delta,a,\epsilon)=\int_\Omega (L(\phi)+N(\phi)+R) PZ_0, \qquad \varphi(\delta,a,\epsilon)=\overline{ \int_\Omega (L(\phi)+N(\phi)+R) PZ},$$ and \eqref{solve} reduces to finding a solution of \begin{equation}\label{solve3} \varphi_0(\delta(\epsilon),a(\epsilon),\epsilon)=\varphi(\delta(\epsilon),a(\epsilon),\epsilon)=0 \end{equation} for $\epsilon$ small. We are now ready to prove our first main result, which clearly implies the validity of Theorem \ref{mainbb} with $m=1$.
\begin{thm} \label{main} Let $\mathcal{H}_0=\frac{\mathcal{H}}{z^{n+2}}$, where $\mathcal{H}$ is given in \eqref{definitionH}, be a meromorphic function in $\Omega$ with $|\mathcal{H}_0(z)|^2=e^{u_0+8\pi(n+1)G(z,0)}$ (which exists in view of \eqref{balance} and is unique up to rotations), and $\sigma_0(z)=-(\int^z \mathcal{H}_0 (w) dw)^{-1}$. Assume that \begin{equation}\label{pc} \frac{d^{n+1} \mathcal{H}}{dz^{n+1}}(0)=0 \end{equation} and for some small $\rho>0$ \begin{equation} \label{D0} D_0:=\frac{1}{\pi}\left[\int_{\Omega \setminus \sigma_0^{-1} (B_\rho(0))} e^{u_0+8\pi(n+1)G(z,0)} - \int_{\mathbb{R}^2 \setminus B_\rho(0)}
\frac{n+1}{|y|^4}\right]<0. \end{equation} If the ``non-degeneracy condition'' \begin{equation} \label{nondegenracy}
|\Gamma| \not= \Big|\Upsilon+{ n(2n+3) \over n+1} D_0\Big| \end{equation} holds, where $\Gamma$ and $\Upsilon$ are given in Lemma \ref{derivca}, then for $\epsilon>0$ small there exist $a(\epsilon)$, $\delta(\epsilon)>0$ small so that $w_\epsilon=PU_{\delta(\epsilon),a(\epsilon), \sigma_{a(\epsilon)}}+\phi(\delta(\epsilon),a(\epsilon))$ solves \eqref{3} with \begin{eqnarray*} &&4\pi N \frac{e^{u_0+w_\epsilon}}{\int_\Omega e^{u_0+w_\epsilon}}+ \frac{64 \pi^2N^2 \epsilon^2 \int_\Omega e^{2u_0+2w_\epsilon}}{(\int_\Omega e^{u_0+w_\epsilon}+\sqrt{(\int_\Omega e^{u_0+w_\epsilon})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2w_\epsilon}})^2}\left(\frac{e^{u_0+w_\epsilon}}{\int_\Omega
e^{u_0+w_\epsilon}} -\frac{e^{2u_0+2w_\epsilon}}{\int_\Omega e^{2u_0+2w_\epsilon}}\right)\\ &&\hspace{2cm} \rightharpoonup 8\pi(n+1) \delta_0 \end{eqnarray*} in the sense of measures as $\epsilon \to 0$. \end{thm} \begin{rem} \label{0923} For simplicity, we are considering the case $p=0$ in Theorem \ref{main}, which however is still true for $p \not=0$ by simply replacing in the statement $\mathcal{H}$, $\mathcal{H}_0$ and corresponding quantities with $\mathcal{H}^p$, $\mathcal{H}_0^p$ and corresponding quantities at $p$, where the latter have been defined in Remark \ref{1149}. \end{rem}
\begin{proof} Since the equation $\varphi_0(\delta,a,\epsilon)=0$ naturally requires $\delta^2 \sim \epsilon^2 \delta^{-\frac{2}{n+1}}$ in view of \eqref{solve1}, we make the following change of variables: $\delta=[\frac{(n+1)\epsilon^{n+1} }{|\mathcal{H}(0)|}]^{\frac{1}{n+2}} \mu$ and $\zeta=\frac{a}{\delta}$. The system \eqref{solve3} is then equivalent to finding zeroes of
$$\Gamma_\epsilon(\mu,\zeta):=\left[ \frac{(n+1) \epsilon^{n+1}}{|\mathcal{H}(0)|} \right] ^{-\frac{2}{n+2}} \left(- \frac{1}{8} \varphi_0, \frac{1}{4\pi \mu^2 } \varphi\right)
\left(\bigg[\frac{ (n+1) \epsilon^{n+1}}{|\mathcal{H}(0)|}\bigg]^{\frac{1}{n+2}} \mu , \bigg[\frac{ (n+1) \epsilon^{n+1}}{|\mathcal{H}(0)|}\bigg]^{\frac{1}{n+2}} \mu \zeta,\epsilon\right),$$ which has the expansion $\Gamma_\epsilon(\mu,\zeta)=\Gamma_0(\mu, \zeta)+o(1)$ as $\epsilon \to 0^+$, uniformly for $\mu$ in compact subsets of $(0,+\infty)$, in view of (\ref{solve1})-(\ref{solve2}), where the map $\Gamma_0: \mathbb{R} \times \mathbb{C} \to \mathbb{R} \times \mathbb{C}$ is defined as
$$\Gamma_0(\mu,\zeta)= \left(\pi D_0 \mu^2-\frac{8 (n+1)^3 }{\mu^{{2\over n+1}}} \int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}, \Gamma \zeta+\Upsilon \bar \zeta-{16 (n+1)^3 \over \pi \mu^{{2(n+2)\over n+1}}} \int_{\mathbb{R}^2} {|y+\zeta|^{2n\over n+1} \bar y \over(1+|y|^2)^5} \right).$$ We need to exhibit ``stable" zeroes of $\Gamma_0$ in $(0,+\infty)\times \mathbb{C}$, which persist under $L^\infty-$small perturbations, yielding zeroes of $\Gamma_\epsilon$ as required. The easiest case is given by the point $(\mu_0,0)$, which solves $\Gamma_0=0$ for $\mu_0=({8 (n+1)^3 I_0 \over \pi D_0})^{n+1\over 2(n+2)}>0$ in view of the assumption \eqref{D0} and (see \eqref{1228})
$$I_0:=\int_{\mathbb{R}^2} \frac{(|y|^2-1)|y|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}<0.$$ Regarding $\Gamma_0$ as a map from $\mathbb{R}^3$ into $\mathbb{R}^3$ and setting $\Gamma=\Gamma_1+i\Gamma_2$, $\Upsilon=\Upsilon_1+i\Upsilon_2$, we have that $$D \Gamma_0(\mu_0,0)=\left(\begin{array}{ccc} \frac{2(n+2)}{n+1}\pi D_0 \mu_0 & 0 & 0 \\ 0 & \Gamma_1+\Upsilon_1 +{ n(2n+3)\over n+1} D_0 & \Upsilon_2-\Gamma_2 \\ 0 & \Gamma_2+\Upsilon_2 & \Gamma_1-\Upsilon_1 -{ n(2n+3)\over n+1} D_0 \end{array}\right)$$ in view of \eqref{1228} and
$$ \int_{\mathbb{R}^2}\frac{|y|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}dy=\pi \int_0^\infty \frac{\rho^{\frac{n}{n+1}}}{(1+\rho)^5}d\rho=\pi I_5^{\frac{n}{n+1}}.$$ Since
$$\hbox{det }D\Gamma_0(\mu_0,0)=\frac{2(n+2)}{n+1}\pi D_0 \mu_0 \left(|\Gamma |^2-\left|\Upsilon+{ n(2n+3) \over n+1} D_0\right|^2\right) \not= 0$$ in view of assumption (\ref{nondegenracy}), the point $(\mu_0,0)$ is an isolated zero of $\Gamma_0$ with non-trivial local index. Since $D\Gamma_0(\mu_0,0)$ is an invertible matrix, there exists $\nu>0$ small so that
$|D\Gamma_0(\mu_0,0)(\mu-\mu_0,\zeta)|\geq \nu
|(\mu-\mu_0,\zeta)|$. By a Taylor expansion of $\Gamma_0$ we can find $r_0>0$ small so that
$$|\Gamma_\epsilon(\mu,\zeta)|=|\Gamma_0(\mu,\zeta)|+o(1) \geq \nu |(\mu-\mu_0,\zeta)|+O\left((\mu-\mu_0)^2+|\zeta|^2\right)+o(1) \geq {\nu \over 2}|(\mu-\mu_0,\zeta)|$$ for all $(\mu,\zeta) \in \partial B_{r}(\mu_0,0)$ and all $r \leq r_0$, for $\epsilon$ sufficiently small depending on $r$. Then, the map $\Gamma_\epsilon$ has a well-defined degree in $B_{r_0}(\mu_0,0)$ for all $\epsilon$ small, which coincides with the local index of $\Gamma_0$ at $(\mu_0,0)$. In this way, the map $\Gamma_\epsilon$ has a zero of the form $(\mu_\epsilon,\zeta_\epsilon)$ with $\mu_\epsilon \to \mu_0$
and $|\zeta_\epsilon|\to 0$ as $\epsilon \to 0$. Therefore, we have solved \eqref{solve3} for $\delta(\epsilon)=[\frac{(n+1)\epsilon^{n+1} }{|\mathcal{H}(0)|}]^{\frac{1}{n+2}} \mu_\epsilon$ and $a(\epsilon)=\delta(\epsilon)\zeta_\epsilon$, and the corresponding $w_\epsilon$ solves (\ref{3}) and satisfies the required concentration property as stated in Theorem \ref{main}.\qed \end{proof} \begin{rem} With some extra work, it is rather standard to see that \eqref{solve1} holds in a $C^1-$sense. For $\zeta$ in a bounded set, by the Implicit Function Theorem we can find $\epsilon>0$ small so that the first equation in $\Gamma_\epsilon(\mu,\zeta)=0$ can be solved by $\mu(\epsilon,\zeta)$, depending continuously on $\zeta$, so that
$$\mu(\epsilon,\zeta) \to \mu(\zeta):= \left(\frac{8 (n+1)^3}{\pi D_0} \int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}\right)^{\frac{n+1}{2(n+2)}}$$
as $\epsilon \to 0$. In Appendix D it is proved that $ \int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}<0$ for all $\zeta \in \mathbb{C}$, yielding $\mu(\zeta)>0$ when $D_0<0$. Plugging $\mu(\epsilon,\zeta)$ into the second equation in $\Gamma_\epsilon(\mu,\zeta)=0$, we are reduced to finding a ``stable" zero of
$$\int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5} \left(\bar \Upsilon \zeta +\bar \Gamma \bar \zeta\right)-2 D_0 \int_{\mathbb{R}^2} {|y+\zeta|^{2n\over n+1} y \over(1+|y|^2)^5}=0.$$ Notice that $\bar \Upsilon \zeta+\bar \Gamma \bar \zeta$ acts in real notation as multiplication by the matrix $$A=\left(\begin{array}{cc} \re (\Gamma+ \Upsilon)& \im (\Upsilon-\Gamma) \\ -\im (\Gamma+ \Upsilon) & \re (\Upsilon-\Gamma) \end{array}\right).$$ Since by Appendix D we have that
$$\int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}=f(|\zeta|),\qquad \int_{\mathbb{R}^2} {|y+\zeta|^{2n\over n+1} y \over(1+|y|^2)^5}=g(|\zeta|)\zeta,$$
we can re-write the above equation as $A\zeta=\frac{2D_0 g(|\zeta|)}{f(|\zeta|)}\zeta$. Letting $(\lambda_1,e_1)$ be an eigenpair of $A$ with $|e_1|=1$, we can find a solution $\zeta_0=|\zeta_0|e_1$ as soon as $|\zeta_0|\not= 0$ solves $\frac{2D_0 g(|\zeta_0|)}{f(|\zeta_0|)}=\lambda_1$. Since by Appendix D we know that $f<0<g$, we can find solutions $(\mu_\epsilon,\zeta_\epsilon)$ of $\Gamma_\epsilon(\mu,\zeta)=0$ with $\zeta_\epsilon$ bifurcating from $\zeta_0 \not= 0$ as soon as one of the eigenvalues of $A$ is positive and belongs to $\frac{2D_0 g}{f}(0,+\infty)$. In particular, by \eqref{1228}-\eqref{1902} and \eqref{1903}-\eqref{1904} we have that
$$\frac{g(0)}{f(0)}=-\frac{(2n+3)(3n+1)}{4(n+1)},\qquad \frac{g(|\zeta|)}{f(|\zeta|)} \to -\frac{51}{356} \hbox{ as }|\zeta|\to \infty,$$
and the condition above is fulfilled if one of the eigenvalues of $A$ lies in $(\frac{51}{178}|D_0|, \frac{(2n+3)(3n+1)}{2(n+1)}|D_0|)$. \end{rem}
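\noindent The invertibility of $D\Gamma_0(\mu_0,0)$ used in the proof of Theorem \ref{main} rests on an elementary determinant identity for its lower $2\times 2$ block. The following minimal sketch (with randomly generated real values standing in for $\re \Gamma$, $\im \Gamma$, $\re \Upsilon$, $\im \Upsilon$ and the real constant $c={n(2n+3)\over n+1}D_0$; the variable names are illustrative only) checks that algebra numerically:

```python
import random

def block_det(G1, G2, U1, U2, c):
    # Determinant of the lower 2x2 block of D Gamma_0(mu_0, 0):
    # [[G1+U1+c, U2-G2], [G2+U2, G1-U1-c]]
    return (G1 + U1 + c) * (G1 - U1 - c) - (U2 - G2) * (G2 + U2)

def closed_form(G1, G2, U1, U2, c):
    # |Gamma|^2 - |Upsilon + c|^2 with Gamma = G1 + i G2, Upsilon = U1 + i U2, c real
    return (G1 ** 2 + G2 ** 2) - ((U1 + c) ** 2 + U2 ** 2)

random.seed(0)
samples = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(1000)]
assert all(abs(block_det(*s) - closed_form(*s)) < 1e-9 for s in samples)
print("identity det = |Gamma|^2 - |Upsilon + c|^2 verified on 1000 samples")
```

Together with the positive $(1,1)$-entry $\frac{2(n+2)}{n+1}\pi D_0\mu_0$ (whose sign is that of $D_0$), this reproduces the determinant formula preceding assumption \eqref{nondegenracy}.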
\section{Examples and comments}\label{examples} In this section, we will discuss the validity of \eqref{pc}-\eqref{nondegenracy} by providing some examples. Recall that in Theorem \ref{main} we were implicitly assuming that $\{p_1,\dots,p_N\} \subset \Omega$ and denoting for simplicity the concentration point $p$ as $0$. The assumption $\{p_1,\dots,p_N\} \subset \Omega$ simplifies the global construction in $\tilde \Omega$ of $\mathcal{H}$, but \eqref{pc}-\eqref{nondegenracy} just require the local existence of such an $\mathcal{H}$ at $0$, as well as of $\sigma_0$ and $H^*$. In this respect, the only relevant assumption is that the concentration point lies in $\Omega$, and so we will provide examples with $0 \in \{\tilde p_1,\dots,\tilde p_N\} \subset \bar \Omega$. To be more precise, let us explain the general strategy we will adopt below. Since we are in a doubly-periodic setting, the configuration of the vortex points has to be periodic in $\bar \Omega$: for all $j=1,\dots,N$ the points $(\tilde p_j +\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z})\cap \bar \Omega$ belong to $\{\tilde p_1,\dots,\tilde p_N\}$ and all have the same multiplicity. Then, we can find $J \subset \{1,\dots,N\}$ so that the points $\{\tilde p_j:\, j \in J\}$ are all non-zero, distinct modulo $\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}$ and $\left(\{\tilde p_j:\, j \in J\}+\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}\right) \cap \bar \Omega=\{\tilde p_1,\dots, \tilde p_N\}\setminus \{0\}$. Take now a translation vector $\tau \in \Omega$ so that $\{\tilde p_1+\tau,\dots,\tilde p_N+\tau\}\cap \partial \Omega=\emptyset$, or equivalently $\left(\{\tilde p_1,\dots,\tilde p_N\}+\tau+ \omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}\right)\cap \partial \Omega=\emptyset$. Then, it follows that $\left(\tilde p_j+\tau+\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}\right)\cap \Omega$ consists of a single point $p_j$, for all $j=1,\dots,N$. 
The idea is to apply Theorem \ref{main}, as formulated in Remark \ref{0923}, to the translated vortex configuration $\{ \tau\} \cup \{p_j:j \in J\}\subset \Omega$ with $\tau$ as concentration point. The validity of \eqref{pc}-\eqref{nondegenracy} in the translated situation will follow from appropriate assumptions on $\{\tilde p_1,\dots,\tilde p_N\}$.
\noindent Before stating our first result, let us introduce the notion of even vortex configuration: $-\tilde p_j\in \{\tilde p_1,\dots,\tilde p_N\}+\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}$ with the same multiplicity as $\tilde p_j$, for all $j=1,\dots,N$. In the periodic case, notice that $\{\tilde p_j: j \in J\}$ is still an even configuration. The validity of \eqref{pc} is discussed in the following: \begin{prop} \label{propp1} Assume $n$ is even and the periodic vortex configuration is even with $0 \in \{\tilde p_1,\dots,\tilde p_N\}$. Let $\mathcal{H}^\tau$ be the function corresponding to $p=\tau$ and remaining vortex points $\{p_j:\, j \in J\}\subset \Omega$, as given in Remark \ref{1149}. Then, there holds $$\frac{d^k \mathcal{H}^\tau}{dz^k}(\tau)=0$$ for every odd number $k$. \end{prop}
\begin{proof} Since $-\Omega=\Omega$ and the periodic vortex configuration $\{\tilde p_1,\dots,\tilde p_N\}$ is even, we have that $G(z)$, $H(z)$ and $e^{-4\pi \sum_{j \in J} n_j G(z,\tilde p_j)}$ are even functions in view of $G(z,p)=G(z-p,0)$. So, it follows that $e^{4\pi(n+2)H(z-\tau)-4\pi \sum_{j\in J} n_j G(z,\tilde p_j+\tau)}=e^{4\pi(n+2)H(z-\tau)-4\pi \sum_{j\in J} n_j G(z,p_j)}$ takes the same value at $\pm z+\tau$ for all $z \in \Omega$. The function $\mathcal{H}^\tau$ satisfies $|\mathcal{H}^\tau|(z+\tau)=|\mathcal{H}^\tau|(-z+\tau)$ for all $z \in \Omega$, and then $\mathcal{H}^\tau(z+\tau)=\mathcal{H}^\tau(-z+\tau)$ for all $z$ since $\mathcal{H}^\tau$ is a holomorphic function. Differentiating $k$ times at $\tau$ yields $\frac{d^k \mathcal{H}^\tau}{dz^k}(\tau)=0$ when $k$ is odd.\qed \end{proof}
\noindent The discussion of \eqref{D0} is more interesting and will make use of the Weierstrass elliptic function $\wp$ to represent $D_0$ in the case of an even periodic vortex configuration. Furthermore, when $\Omega$ is a rectangle, the points $p_j$ are half-periods and all the multiplicities are even numbers, we will show, by some ideas in \cite{CLW}, that assumption \eqref{D0} holds if and only if $\frac{n_3}{2}$ is an odd number, where $n_3$ is the multiplicity of the half-period $\frac{\omega_1+\omega_2}{2}$. Due to the presence of high order derivatives ($2(n+1)$th order) in \eqref{nondegenracy}, we will verify the validity of the ``non-degeneracy" condition in the simplest case $n=n_3=2$ and $\Omega$ a square torus. As we will see, the validity of \eqref{nondegenracy} is just a computational matter which could be carried out in full generality for each case of interest.
\noindent We have the following representation formula: \begin{prop} \label{1014} Assume that the periodic vortex configuration is even with $0 \in \{\tilde p_1,\dots,\tilde p_N\}$, and $n_j$ is even when $\tilde p_j \in \{{\omega_1\over 2}, {\omega_2\over 2},{\omega_1+\omega_2\over 2}\}$. Let $D_0^\tau$ be the coefficient corresponding to $p=\tau$ and remaining vortex points $\{p_j:\, j \in J\}\subset \Omega$, as given in Theorem \ref{main}. Then, for $\tau$ small we have that $D_0^\tau$ is given by \eqref{1846}, and does not depend on $\tau$. \end{prop} \begin{proof} The Weierstrass elliptic function $$\wp(z)=\frac{1}{z^2}+\sum_{(n,m)\not=(0,0)} \left( \frac{1}{(z+n\omega_1+m\omega_2)^2}-\frac{1}{(n\omega_1+m\omega_2)^2}\right)$$
is a doubly-periodic meromorphic function with a single pole in $\Omega$ at $0$ of multiplicity $2$. Moreover, the only branching points of $\wp$ are simple and given by the three half-periods ${\omega_1\over 2}$, ${\omega_2\over 2}$ and $\frac{\omega_3}{2}={\omega_1+\omega_2\over 2}$, i.e. $\wp'(\frac{\omega_j}{2})=0$ and $\wp''(\frac{\omega_j}{2})\not=0$ for $j=1,2,3$. For $p\in \bar \Omega \setminus \{ 0\}$, note that $2\pi[2G(z,0)-G(z,p)-G(z,-p)]$ is a doubly-periodic harmonic function in $\Omega$ with a singular behavior $-2\log|z|$ at $z=0$. Moreover, it behaves like $\log|z-p|$ at $z=p$ and $\log|z+p|$ at $z=-p$ when $p \not=\frac{\omega_1}{2},\frac{\omega_2}{2},\frac{\omega_3}{2}$, and like $2\log|z-p|$ if $p\in \{\frac{\omega_1}{2},\frac{\omega_2}{2},\frac{\omega_3}{2}\}$. Thus, we have that
$$2\pi[2G(z,0)-G(z,p)-G(z,-p)]=\log|\wp(z)-\wp(p)|+\text{const.}$$ whether or not $p$ is a half-period, in view of $\wp(p)=\wp(-p)$, $\wp'(p)=-\wp'(-p)\not=0$ if $p \not=\frac{\omega_1}{2},\frac{\omega_2}{2},\frac{\omega_3}{2}$ and $\wp'(p)=0$, $\wp''(p)\not=0$ if $p\in \{\frac{\omega_1}{2},\frac{\omega_2}{2},\frac{\omega_3}{2}\}$. Since the periodic vortex configuration is even, take $I$ as the minimal subset of $J$ so that $\left(\{\tilde p_k,-\tilde p_k: \, k \in I\}+\omega_1 \mathbb{Z}+\omega_2 \mathbb{Z}\right) \cap \{\tilde p_j:\,j \in J\}=\{\tilde p_j:\,j \in J\}$ and $$\hat n_k=\left\{ \begin{array}{ll} \frac{n_k}{2} &\hbox{if }\tilde p_k \hbox{ is a half-period}\\ n_k& \hbox{otherwise}. \end{array}\right.$$ Letting $N=n+\sum_{j \in J} n_j$ and $u_0(z)=-4\pi nG(z,0)-4\pi \sum_{j \in J}n_j G(z,\tilde p_j)$, assumption \eqref{balance} implies that $$u_0+8\pi(n+1)G(z,0)=4\pi\sum_{k \in I} \hat n_k [2G(z,0)- G(z,\tilde p_k)-G(z,-\tilde p_k)],$$ yielding \begin{equation*}
e^{u_0+8\pi(n+1)G(z,0)}= \hbox{const.}\: \big| \prod_{k \in I} (\wp(z)-\wp(\tilde p_k))^{\hat n_k} \big|^2. \end{equation*} The additional assumption that $n_j$ is even when $\tilde p_j$ is a half-period is crucial to have $(\wp(z)-\wp(\tilde p_j))^{\hat n_j}$ as a single-valued function. The function \begin{equation} \label{explicitH} \mathcal{H}_0(z)= \lambda_0 \prod_{k \in I} (\wp(z)-\wp(\tilde p_k))^{\hat n_k},\quad \lambda_0=e^{2\pi(n+2)H(0)-2\pi \sum_{j \in J} n_j G(0,\tilde p_j)} \end{equation} is an elliptic function with a single pole at $0$ of zero residue, which satisfies \begin{equation} \label{1328}
|\mathcal{H}_0|^2=e^{u_0+8\pi(n+1)G(z,0)}. \end{equation} Then \begin{equation} \label{1010} \sigma_0(z)=-\left(\int^z \mathcal{H}_0(w) dw \right)^{-1} =-\lambda_0^{-1}\left(\int^z \prod_{k \in I} (\wp(w)-\wp(\tilde p_k))^{\hat n_k} dw \right)^{-1} \end{equation} is a well-defined meromorphic function in $2\Omega$ which satisfies \begin{equation} \label{1345}
\Big| \Big( \frac{1}{\sigma_0} \Big)'(z) \Big|^2=|\mathcal{H}_0|^2(z) = e^{u_0+8\pi(n+1)G(z,0)}. \end{equation} Switching now to the translated vortex configuration $\{\tau\}\cup \{p_j:\,j \in J\}$, let us first notice that the total multiplicity is still $N$, and introduce $u_0^\tau=u_0(z-\tau)=-4\pi nG(z,\tau)-4\pi \sum_{j \in J}n_j G(z,p_j)$. We have that $\mathcal{H}_0^\tau(z)=\mathcal{H}_0(z-\tau)$ is a meromorphic function in $\Omega$ with
$$|\mathcal{H}_0^\tau|^2=e^{u_0^\tau+8\pi(n+1)G(z,\tau)}$$ in view of (\ref{1328}). Since such a function $\mathcal{H}_0^\tau$ is unique up to rotations, we can assume that $\mathcal{H}_0^\tau$ coincides with the function $\mathcal{H}_0$ corresponding to $p=\tau$ and remaining vortex points $\{p_j:\, j \in J\}\subset \Omega$, as given in Theorem \ref{main}. Setting $\mathcal{H}(z)=z^{n+2}\mathcal{H}_0(z)$, we also have that \begin{equation} \label{1344} \mathcal{H}^\tau(z)=\mathcal{H} (z-\tau) \end{equation} for all $z \in \Omega$. Letting $$\sigma_0^\tau(z)=-\left(\int^z \mathcal{H}_0^\tau(w) dw \right)^{-1}$$ with the correct choice of the constant in the integration $\int^z$, we easily deduce that \begin{equation} \label{0935} \sigma_0^\tau(z)=\sigma_0(z-\tau) \end{equation} for all $z \in \Omega$ in view of $(\frac{1}{\sigma_0^\tau})'(z)=(\frac{1}{\sigma_0})'(z-\tau)$. Since $(\sigma_0^\tau)^{-1}(B_\rho(0))-\tau=(\sigma_0)^{-1}(B_\rho(0))$ in view of \eqref{0935}, according to \eqref{D0} let us re-write $D_0^\tau$ as \begin{eqnarray*} \pi D_0^\tau&=&\int_{\Omega \setminus (\sigma_0^\tau)^{-1} (B_\rho(0))} e^{u_0^\tau+8\pi(n+1)G(z,\tau)}
-\int_{\mathbb{R}^2 \setminus B_\rho(0)}\frac{n+1}{|y|^4}\\ &=& \int_{(\Omega-\tau) \setminus (\sigma_0)^{-1}(B_\rho(0))} e^{u_0+8\pi(n+1)G(z,0)}
-\int_{\mathbb{R}^2 \setminus B_\rho(0)}\frac{n+1}{|y|^4}\\ &=&\int_{\Omega \setminus (\sigma_0)^{-1}(B_\rho(0))} e^{u_0+8\pi(n+1)G(z,0)}
-\int_{\mathbb{R}^2 \setminus B_\rho(0)}\frac{n+1}{|y|^4} \end{eqnarray*} by the double-periodicity of $e^{u_0+8\pi(n+1)G(z,0)}$, once we assume for $\tau$ small that $(\sigma_0)^{-1}(B_\rho(0)) \subset \Omega \cap (\Omega-\tau)$. By \eqref{1345} and the change of variable $z \to \frac{1}{\sigma_0}(z)$ we get that \begin{eqnarray}
\pi D_0^\tau&=&\pi D_0= \int_{\Omega \setminus (\sigma_0)^{-1}(B_\rho(0)) } \Big|\left(\frac{1}{\sigma_0}\right)'\Big|^2
-\int_{\mathbb{R}^2 \setminus B_\rho(0)}\frac{n+1}{|y|^4} \nonumber \\ &=&\hbox{Area } \left[ \frac{1}{\sigma_0}\left(\Omega \setminus \sigma_0^{-1} (B_\rho(0)) \right) \right] - (n+1) \hbox{Area} \left( B_{\frac{1}{\rho}}(0) \right). \label{1846} \end{eqnarray} By the Cauchy argument principle the number of pre-images in $\Omega \setminus \sigma_0^{-1}(B_\rho(0))$ through the map $\frac{1}{\sigma_0}$ is constant for all values in each connected component of $\mathbb{C} \setminus \left(\frac{1}{\sigma_0}(\partial \Omega) \cup \partial B_{\frac{1}{\rho}}(0) \right)$, and the area of each of these components has to be counted in \eqref{1846} according to the multiplicity of pre-images. \qed \end{proof}
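\noindent The change of variables behind \eqref{1846} is the classical fact that, for a holomorphic $f$, $\int_A |f'|^2$ equals the area of the image $f(A)$ counted with multiplicity. A toy numerical illustration with the model map $f(z)=z^2$ on an annulus (not the map $\frac{1}{\sigma_0}$ of the proof, just the underlying area principle; the parameter choices are ad hoc):

```python
import math

def dirichlet_energy(r, n=200_000):
    # \int_{r<|z|<1} |f'|^2 dA for f(z) = z^2 (so |f'|^2 = 4|z|^2),
    # in polar coordinates: 2*pi * \int_r^1 4 s^3 ds, via the midpoint rule.
    h = (1.0 - r) / n
    return 2 * math.pi * h * sum(4 * (r + (k + 0.5) * h) ** 3 for k in range(n))

def covered_area(r):
    # f(z) = z^2 double-covers the annulus {r^2 < |w| < 1}:
    # area counted with multiplicity = 2 * pi * (1 - r^4).
    return 2 * math.pi * (1 - r ** 4)

assert abs(dirichlet_energy(0.3) - covered_area(0.3)) < 1e-6
print(dirichlet_energy(0.3), covered_area(0.3))
```

In \eqref{1846} the same bookkeeping is performed for $\frac{1}{\sigma_0}$, with the covering numbers supplied by the argument principle.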
\noindent Thanks to \eqref{1846}, we can now discuss the validity of \eqref{D0}. \begin{prop} \label{propp2} Let ${\Omega}$ be a rectangle, and assume that the vortex configuration is the periodic one generated by $\{0,{\omega_1\over 2},{\omega_2 \over 2},{\omega_1+\omega_2 \over 2}\}$ with even multiplicities $n,n_1,n_2,n_3 \geq 0$. Suppose that \begin{equation}\label{balanceex} \frac{n_1}{2}+\frac{n_2}{2}+\frac{n_3}{2}=\frac{n}{2}+1. \end{equation} Given $D_0^\tau$ as in Proposition \ref{1014}, then $D_0^\tau<0$ $(>0)$ when $\frac{n_3}{2}$ is odd $(\hbox{even})$. \end{prop} \begin{proof} The balance condition \eqref{balance} is satisfied in view of \eqref{balanceex}. Let $\tilde p_1={\omega_1\over 2}$, $\tilde p_2={\omega_2\over 2}$ and $\tilde p_3={\omega_1+\omega_2\over 2}$ be the three half-periods. When $\Omega$ is a rectangle, the function $\wp$ takes real values on $\partial \Omega$ and $\wp''(\tilde p_j)>0$ for $j=1,2$, $\wp''(\tilde p_3)<0$. As a consequence, we have that \begin{equation} \label{2026} \wp(\tilde p_1)-\wp(z),\: \wp(z)-\wp(\tilde p_2),\: \wp(\pm \tilde p_1+it)-\wp(\tilde p_3) ,\: \wp(\tilde p_3)-\wp(\pm \tilde p_2+t) \ge 0 \end{equation} for all $z\in{\partial}{\Omega}$ and $t\in{\mathbb{R}}$. Write $\sigma_0(z)$ in \eqref{1010} as $$ \sigma_0(z)=(-1)^{\frac{n+n_2}{2}}\lambda_0^{-1} \left(\int^z (\wp(\tilde p_1)-\wp(w))^{\frac{n_1}{2}}(\wp(w)-\wp(\tilde p_2))^{\frac{n_2}{2}}(\wp(\tilde p_3)-\wp(w))^{\frac{n_3}{2}} dw \right)^{-1}$$ in view of \eqref{balanceex}. Since $$\frac{d}{dt}\left[ \frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0 (\pm \tilde p_2+t)} \right]= \lambda_0 (\wp(\tilde p_1)-\wp(\pm \tilde p_2+t))^{\frac{n_1}{2}}(\wp(\pm \tilde p_2+t)-\wp(\tilde p_2))^{\frac{n_2}{2}}(\wp(\tilde p_3)-\wp(\pm \tilde p_2+t))^{\frac{n_3}{2}} \geq 0$$ in view of \eqref{2026}, the function $\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0}$ maps the horizontal sides of $\partial \Omega$ into horizontal segments with the same orientation. 
In the same way, the vertical sides of $\partial \Omega$ are mapped into vertical segments with the same/opposite orientation depending on whether $\frac{n_3}{2}$ is an even/odd number. So, $T:=\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0} (\partial \Omega)$ is still a rectangle with the same/opposite orientation and $\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0 (\tilde p_3)}$ is the right upper/lower corner of $T$ depending on whether $\frac{n_3}{2}$ is an even/odd number. For $\rho$ small, we then have that $\mathbb{C} \setminus \left(\frac{1}{\sigma_0}(\partial \Omega) \cup \partial B_{\frac{1}{\rho}}(0) \right)$ has three connected components: the interior $\Omega'$ of $(-1)^{\frac{n+n_2}{2}} T$, $B_{\frac{1}{\rho}}(0) \setminus \overline{\Omega'}$ and $\mathbb{C} \setminus \overline{B_{\frac{1}{\rho}}(0)}$. By Lemma \ref{gomme} we have that values in $B_{\frac{1}{\rho}}(0) \setminus \overline{\Omega'}$, $\mathbb{C} \setminus \overline{B_{\frac{1}{\rho}}(0)}$ have exactly $n+1$ and $0$ pre-images in $\Omega \setminus \sigma_0^{-1}(B_\rho(0))$ through the map $\frac{1}{\sigma_0}$, respectively. By \eqref{1846} we have that $\pi D_0^\tau=[k - (n+1)] \hbox{Area} (\Omega')$, where $k$ is the number of pre-images corresponding to values in $\Omega'$.
\noindent Since $\wp(z)-\wp(\tilde p_3)={\wp''(\tilde p_3)\over 2}(z-\tilde p_3)^2+O(|z-\tilde p_3|^3)$ as $z \to \tilde p_3$, we obtain that
$$\left[\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0}\right]'(z)= \mu (z-\tilde p_3)^{n_3}+O(|z-\tilde p_3|^{n_3+1})$$ and
$$\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0(z)}-\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0(\tilde p_3)}=\mu {(z-\tilde p_3)^{n_3+1}\over n_3+1}+O(|z-\tilde p_3|^{n_3+2})$$ as $z \to \tilde p_3$, where $\mu:=\lambda_0 \left(-{\wp''(\tilde p_3)\over 2}\right)^{\frac{n_3}{2}} [\wp(\tilde p_1)-\wp(\tilde p_3)]^{\frac{n_1}{2}}[\wp(\tilde p_3)-\wp(\tilde p_2)]^{\frac{n_2}{2}}>0$. When $\frac{n_3}{2}$ is an odd number, $\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0 (\tilde p_3)}$ is the right lower corner of $T$ and the function $\frac{(-1)^{\frac{n+n_2}{2}}}{\sigma_0}$ maps $\{z=\tilde p_3+\rho e^{i\theta}\mid \pi\le\theta\le {3\pi\over 2}, 0\le\rho<\rho_0\}$ onto a region whose part inside/outside $T$ is covered ${n_3-2\over 4}$/${n_3-2\over 4}+1$ times, respectively, in view of $$(n_3+1)\pi\le(n_3+1)\theta\le(n_3+1){3\pi\over 2}=(n_3+1)\pi+2\pi {n_3-2\over 4}+\pi+{\pi\over 2}.$$ Hence, near $\tilde p_3$ the map $\frac{1}{\sigma_0}$ covers ${n_3-2\over 4}$/${n_3-2\over 4}+1$ times the interior/exterior part of $\Omega'$ near $\frac{1}{\sigma_0(\tilde p_3)}$. Since $\frac{1}{\sigma_0}$ covers every value in $B_{\frac{1}{\rho}}(0)\setminus \overline{\Omega'}$ exactly $n+1$ times, there must be $n-{n_3-2\over 4}$ distinct points $x\in {\Omega}\setminus \sigma_0^{-1}(B_\rho(0))$, away from $\tilde p_1,\tilde p_2, \tilde p_3$, so that $\sigma_0(x)=\sigma_0(\tilde p_3)$. Since $\sigma_0'(x) \not= 0$ if $x\not= \tilde p_1,\tilde p_2,\tilde p_3$, it follows that around any such $x$, $\frac{1}{\sigma_0}$ is a local homeomorphism, and then $\frac{1}{\sigma_0}$ covers exactly $n$/$n+1$ times the interior/exterior part of $\Omega'$ near $\frac{1}{\sigma_0(\tilde p_3)}$. Hence, it follows that $k=n$ and $\pi D_0^\tau=-\hbox{Area} (\Omega')<0$. When $\frac{n_3}{2}$ is even, in a similar way we get that $k=n+2$ and $\pi D_0^\tau=\hbox{Area} (\Omega')>0$.\qed \end{proof}
\noindent Now, to discuss \eqref{nondegenracy} we further restrict the attention to the case $n=n_3=2$ to get \begin{prop} \label{propp3} Let ${\Omega}$ be a square of side $a$, $a>0$, and assume that the vortex configuration is the periodic one generated by $\{0,{a\over 2},{ia \over 2},{a+ia \over 2}\}$ with multiplicities $2,n_1,n_2,2$ and $(n_1,n_2)=(2,0)$ (or vice versa). Then, for $\tau \in \Omega$ assumption \eqref{nondegenracy} holds for the vortex configuration $\{\tau\}\cup \{p_j:\, j \in J\}\subset \Omega$. \end{prop} \begin{proof} We are restricting the attention to the cases $(n_1,n_2)=(2,0),\, (0,2)$, since these are the only possibilities for even multiplicities satisfying \eqref{balanceex} for $2,n_1,n_2,2$. Letting $\tilde p_1={a \over 2}$, $\tilde p_2={ia \over 2}$ and $\tilde p_3={a+ia \over 2}$ be the three half-periods, the ``non-degeneracy condition" reads as \begin{equation}\label{ceae2}
\bigg| 3 (\mathcal{H}^\tau)''(\tau )f_3'(\tau)+\mathcal{H}^\tau(\tau) f_3'''(\tau) \bigg|\ne
\left|{6\pi\over a^2} \overline{b_{3}} (\mathcal{H}^\tau)''(\tau)-{28\over 3} D_0^\tau \right| \end{equation} in view of $(\mathcal{H}^\tau)'(\tau)=(\mathcal{H}^\tau)'''(\tau)=0$ by Proposition \ref{propp1},
where $$f_l(z)=\frac{1}{l!}\frac{d^l}{dw^l} \left[2\log \frac{w-q_0^\tau(z)}{(q_0^\tau)^{-1}(w)-z}+4\pi H^*(z-(q_0^\tau)^{-1}(w))\right](0)\:,\quad b_l=\frac{1}{l!}\frac{d^l (q_0^\tau)^{-1}}{dw^l}(0).$$ Since $\sigma_0^\tau(z)=\sigma_0(z-\tau)$ by \eqref{0935}, we deduce that $q_0^\tau(z)=q_0(z-\tau)$ and $(q_0^\tau)^{-1}=\tau +q_0^{-1}$, where $q_0=z [\frac{\sigma_0(z)} {z^{n+1}}]^{\frac{1}{n+1}}$ is defined in terms of $\sigma_0$ as in Appendix A. Since $\mathcal{H}^\tau(z)=\mathcal{H}(z-\tau)$ in view of (\ref{1344}), by \eqref{1846} the ``non-degeneracy condition" \eqref{ceae2} gets re-written in the original variables as: \begin{equation}\label{ceae21357}
\bigg| 3 \mathcal{H}''(0)f_3'(0)+\lambda_0 f_3'''(0) \bigg|\ne
\left|{6\pi\over a^2} \overline{b_{3}} \mathcal{H}''(0)-{28\over 3} D_0 \right| \end{equation} in view of $\mathcal{H}(0)=\lambda_0$ (see \eqref{explicitH}), where $$f_l(z)=\frac{1}{l!}\frac{d^l}{dw^l} \left[2\log \frac{w-q_0(z)}{q_0^{-1}(w)-z}+4\pi H^*(z-q_0^{-1}(w))\right](0)\:,\quad b_l=\frac{1}{l!}\frac{d^l q_0^{-1}}{dw^l}(0).$$ Since ${d^k \mathcal{H} \over dz^k}(0)=0$ for all odd $k\in{\mathbb{N}}$, we have that \begin{eqnarray*} \frac{z^3}{\sigma_0(z)}=\frac{\lambda_0}{3}+\frac{\mathcal{H}''(0)}{2} z^2-\frac{\mathcal{H}^{(4)}(0)}{24}z^4 -\frac{\mathcal{H}^{(6)}(0)}{2160 }z^6+ O(z^8), \end{eqnarray*} and then $$\sigma_0(z)=\frac{3}{\lambda_0}z^3 -\frac{9\mathcal{H}''(0)}{2\lambda_0^2 }z^5+O(z^7),\quad q_0(z)=\frac{3^{\frac{1}{3}}}{\lambda_0^{\frac{1}{3}}}z-\frac{3^{\frac{1}{3}}\mathcal{H}''(0)}{2 \lambda_0^{\frac{4}{3}}}z^3+O(z^5),\quad q_0^{-1}(w)=\frac{\lambda_0^{\frac{1}{3}}}{3^{\frac{1}{3}}}w+\frac{\mathcal{H}''(0)}{6}w^3+O(w^5)$$ as $z,w \to 0$. Direct computation shows that $b_3=\frac{\mathcal{H}''(0)}{6}$ and \begin{eqnarray*} f_3(z)&=&-\frac{2}{3\sigma_0(z)}+\frac{2\lambda_0 }{9 z^3}+\frac{2b_3}{z}-\frac{2\pi \lambda_0}{9}(H^*)'''(z)-4\pi b_3 (H^*)'(z)\\ &=&\frac{\mathcal{H}^{(4)}(0)}{36}z +\frac{\mathcal{H}^{(6)}(0)}{3240}z^3-\frac{2\pi \lambda_0}{9}(H^*)'''(z)-\frac{2\pi}{3} \mathcal{H}''(0) (H^*)'(z)+O(z^5) \end{eqnarray*} as $z \to 0$. Since then $$f_3'(0)=\frac{\mathcal{H}^{(4)}(0)}{36}-\frac{2\pi \lambda_0}{9}(H^*)^{(4)}(0)-\frac{2\pi}{3} \mathcal{H}''(0) (H^*)''(0),\quad f_3'''(0)=\frac{\mathcal{H}^{(6)}(0)}{540} -\frac{2\pi \lambda_0}{9}(H^*)^{(6)}(0)-\frac{2\pi}{3} \mathcal{H}''(0) (H^*)^{(4)}(0),$$ condition \eqref{ceae21357} is equivalent to \begin{eqnarray*}
&&\bigg| \frac{\mathcal{H}''(0)\mathcal{H}^{(4)}(0)}{12}+\frac{\lambda_0 \mathcal{H}^{(6)}(0)}{540}
-2\pi (\mathcal{H}''(0))^2 (H^*)''(0)-\frac{4 \pi \lambda_0}{3} \mathcal{H}''(0) (H^*)^{(4)}(0)-\frac{2\pi \lambda_0^2}{9}(H^*)^{(6)}(0)\bigg|\\ &&\ne
\left|{\pi\over a^2} |\mathcal{H}''(0)|^2-{28\over 3} D_0\right|. \end{eqnarray*} By the explicit expression \eqref{explicitH} of $\mathcal{H}_0$ we have that $$\mathcal{H}(z)= \lambda_0 z^4 (\wp(z)-\wp(\tilde p_1)) (\wp(z)-\wp(\tilde p_3)).$$ Replacing $\mathcal{H}$ with $\frac{\mathcal{H}}{\lambda_0}$, we can assume $\lambda_0=1$ and simply study the stronger condition \begin{eqnarray}\label{yyy}
\hspace{-0.2cm} \bigg| \frac{\mathcal{H}''(0)\mathcal{H}^{(4)}(0)}{4}+\frac{\mathcal{H}^{(6)}(0)}{180}
-6\pi (\mathcal{H}''(0))^2 (H^*)''(0)- 4 \pi \mathcal{H}''(0) (H^*)^{(4)}(0)-\frac{2\pi}{3}(H^*)^{(6)}(0)\bigg|< {3 \pi\over a^2} |\mathcal{H}''(0)|^2\end{eqnarray} in view of Proposition \ref{1014} and \eqref{1846}. Letting $G_l=\displaystyle \sum_{(n,m) \not= (0,0)}{1\over (n\omega_1+m\omega_2)^l}$, $l\geq 3$, be the Eisenstein series, the Laurent expansion of $\wp$ near $0$ simply re-writes as $$\wp(z)={1\over z^2}+\sum_{l=1}^\infty(2l+1)G_{2l+2}z^{2l},$$ and then \begin{eqnarray*} \mathcal{H}(z)=1-(\wp(\tilde p_1)+\wp(\tilde p_3))z^2+\left(\wp(\tilde p_1)\wp(\tilde p_3)+6 G_4 \right) z^4 +\left(10 G_6 -3G_4 \wp(\tilde p_1)-3G_4 \wp(\tilde p_3)\right) z^6+O(z^8) \end{eqnarray*} as $z \to 0$. Letting $e_j=\wp(\tilde p_j)$ for $j=1,2,3$, recall that \begin{equation} \label{propej} e_2<e_3\le0<e_1,\quad e_1+e_2+e_3=0,\quad 15 G_4=-(e_1e_2+e_1e_3+e_2e_3),\quad 35 G_6=e_1 e_2 e_3, \end{equation} with $e_3=0$ if and only if $\Omega$ is a square (see \cite{AbSte}). By the expansion of $\mathcal{H}$ and \eqref{propej}, we deduce that $$\mathcal{H}''(0)=2 e_2,\:\mathcal{H}^{(4)}(0)=24(e_1 e_3 +6 G_4),\:\mathcal{H}^{(6)}(0)= 720(10 G_6 +3G_4 e_2),$$ and condition \eqref{yyy} gets re-written as \begin{eqnarray} \label{ceae3}
\bigg| 460 G_6 +84 G_4e_2-24\pi e_2^2 (H^*)''(0)-8\pi e_2 (H^*)^{(4)}(0) -\frac{2\pi}{3} (H^*)^{(6)}(0)\bigg|< {12\pi\over a^2} e_2^2 \end{eqnarray} in view of \eqref{propej}.
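\noindent On the square torus, the quantities entering \eqref{propej} can be probed numerically: truncating the lattice sums for $G_4$ and $G_6$ with $\omega_1=1$, $\omega_2=i$ (i.e.\ $a=1$) confirms $G_6=0$ and lets $G_4$ be cross-checked against the classical $q$-series value at $\tau=i$ recalled below. A sketch, with ad hoc truncation sizes:

```python
import math

def eisenstein(l, N=200):
    # Truncated Eisenstein sum G_l = sum_{(n,m) != (0,0)} (n + m*i)^{-l}
    # for the square lattice Z + iZ; the sum is real by conjugate symmetry.
    s = sum((n + m * 1j) ** -l for n in range(-N, N + 1)
            for m in range(-N, N + 1) if (n, m) != (0, 0))
    return s.real

G4, G6 = eisenstein(4), eisenstein(6)

# Multiplication by i permutes the square lattice and flips the sign of
# each term of the G_6 sum, so G_6 = 0 on a square torus.
assert abs(G6) < 1e-6

# q-series at tau = i: G_4 = pi^4/45 + (16 pi^4 / 3) sum_{m,k>=1} k^3 e^{-2 pi k m}
q = sum(k ** 3 * math.exp(-2 * math.pi * k * m)
        for k in range(1, 20) for m in range(1, 20))
assert abs(G4 - (math.pi ** 4 / 45 + 16 * math.pi ** 4 / 3 * q)) < 1e-3
print(G4, G6)  # G4 close to 3.1512, G6 close to 0
```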
\noindent From an explicit formula for the Green's function (see \cite{ChO}) we have that \begin{equation*} \begin{split}
H(z)-{|z|^2\over 4|\Omega|}
=\re\left(-{z^2\over 4 a^2}+{iz\over 2a}+{1\over 12}\right)-\frac{1}{2\pi}\log\left|{1-e\left(\frac{z}{a}\right)\over z}
\times\prod_{k=1}^\infty\left(1-e\left(\frac{kai+z}{a}\right)\right)\left(1-e\left(\frac{kai-z}{a}\right)\right)\right|, \end{split} \end{equation*} where $e(z)=e^{2\pi iz}$, yielding \begin{equation*} H^*(z)= -{z^2\over 4 a^2}+{iz\over 2a}+{1\over 12}-\frac{1}{2\pi} \log \left[\left({1-e\left(\frac{z}{a}\right)\over z}\right) \times\prod_{k=1}^\infty\left(1-e\left(\frac{kai+z}{a}\right)\right)\left(1-e\left(\frac{kai-z}{a}\right)\right) \right].\end{equation*} Direct, but tedious, computations show that \begin{eqnarray*} &&(H^*)''(0)=-{1 \over 2 a^2}+{\pi\over 6a^2}-{4\pi \over a^2}\sum_{k=1}^\infty \lambda_k(\lambda_k+1),\quad (H^*)^{(4)}(0)={\pi^3\over 15a^4}+{16\pi^3\over a^4}\sum_{k=1}^\infty \lambda_k(\lambda_k+1)(6\lambda_k^2+6\lambda_k+1),\\ && (H^*)^{(6)}(0)={8\pi^5\over 63a^6}-{64\pi^5\over a^6}\sum_{k=1}^\infty \lambda_k(\lambda_k+1)(120\lambda_k^4+240\lambda_k^3+150\lambda_k^2+30\lambda_k+1), \end{eqnarray*} where $\lambda_k:={1\over e^{2\pi k}-1}$. On a square torus the Green function $G(z,0)$ has an additional symmetry, the invariance under $\frac{\pi}{2}-$rotations. Therefore, $H^*(iz)=H^*(z)$ for all $z\in{\Omega}$, and then $(H^*)''(0)=(H^*)^{(6)}(0)=0$. Since $e_3=G_6=0$, condition \eqref{ceae3} becomes \begin{eqnarray} \label{ceae4}
\bigg| \frac{28}{5}e_1^2 -8\pi (H^*)^{(4)}(0) \bigg|< {12\pi\over a^2} e_1 \end{eqnarray} in view of \eqref{propej} and $e_1=-e_2>0$. From the study of the Weierstrass function $\wp$ it is known that (see \cite{Ap}) \begin{equation*} \sum_{(n,m)\ne (0,0)} {1\over (n+m\tau)^4}={\pi^4\over 45}+{16\pi^4\over 3}\sum_{m,k=1}^\infty k^3e^{2\pi i km\tau} \end{equation*} for $\tau\in{\mathbb C}$ with $\im \tau >0$. The choice $\tau=i$ yields $$15 a^4 G_4=a^4 e_1^2={\pi^4\over 3}+80 \pi^4\sum_{m,k=1}^\infty k^3e^{-2\pi km}$$ in view of \eqref{propej}, which turns \eqref{ceae4} into \begin{eqnarray} \label{ceae5}
\hspace{-0.3cm}\bigg| {\pi^4 \over 3}+112 \pi^4\sum_{m,k=1}^\infty k^3e^{-2\pi km} -32\pi^4 \sum_{k=1}^\infty \lambda_k(\lambda_k+1)(6\lambda_k^2+6\lambda_k+1) \bigg| < 3\pi \sqrt{{\pi^4\over 3}+80 \pi^4\sum_{m,k=1}^\infty k^3e^{-2\pi km}}. \end{eqnarray} Since numerically we can approximately compute $$32 \pi^4\sum_{k=1}^\infty \lambda_k(\lambda_k+1)(6\lambda_k^2+6\lambda_k+1)\approx 5.9194, \qquad 80 \pi^4 \sum_{m,k=1}^\infty k^3e^{-2\pi km} \approx 14.7985,$$ we get the validity of \eqref{ceae5}, or equivalently \eqref{nondegenracy} for the vortex configuration $\{\tau\}\cup \{p_j:\, j \in J\}\subset \Omega$. \qed \end{proof}
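\noindent The two numerical constants quoted in the proof, and the resulting inequality \eqref{ceae5}, can be reproduced in a few lines; the inner geometric series collapses the double sum via $\sum_{m\geq 1}e^{-2\pi km}=\lambda_k$. A sketch (truncation at $k=29$ is ad hoc but far beyond double precision):

```python
import math

pi4 = math.pi ** 4
lam = [1.0 / (math.exp(2 * math.pi * k) - 1) for k in range(1, 30)]

# 32 pi^4 sum_k lambda_k (lambda_k + 1)(6 lambda_k^2 + 6 lambda_k + 1)
S1 = 32 * pi4 * sum(l * (l + 1) * (6 * l * l + 6 * l + 1) for l in lam)
# sum_{m,k>=1} k^3 e^{-2 pi k m} = sum_k k^3 lambda_k (geometric series in m)
S2 = 80 * pi4 * sum(k ** 3 * l for k, l in enumerate(lam, start=1))
assert abs(S1 - 5.9194) < 5e-3 and abs(S2 - 14.7985) < 5e-3

# Inequality (ceae5); note that 112 pi^4 * (double sum) = (112/80) * S2.
lhs = abs(pi4 / 3 + 1.4 * S2 - S1)
rhs = 3 * math.pi * math.sqrt(pi4 / 3 + S2)
assert lhs < rhs
print(S1, S2, lhs, rhs)
```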
\noindent As a combination of Propositions \ref{propp1}, \ref{propp2} and \ref{propp3} we finally get that \begin{thm} \label{thmexample}
Let ${\Omega}$ be a square of side $a$, $a>0$, and assume that the vortex configuration is the periodic one generated by $\{0,{a\over 2},{ia \over 2},{a+ia \over 2}\}$ with multiplicities $2,n_1,n_2,2$ and $(n_1,n_2)=(2,0)$ (or vice versa). Then, for $\tau$ small the assumptions of Theorem \ref{main} hold for the slightly translated vortex configuration $\{-\tau(1+i), -\tau(1+i)+\frac{a}{2}, -\tau(1+i)+\frac{ia}{2}, -\tau(1+i)+\frac{a+ia}{2} \}$. In particular, for $\epsilon>0$ small we can find an $N-$condensate $(\mathcal{A}_\epsilon,\phi_\epsilon)$ so that $|\phi_\epsilon| \to 0$ in $C(\bar \Omega)$ and \begin{equation} \label{magconc} (F_{12})_\epsilon \rightharpoonup 12\pi \delta_0 \end{equation} weakly in the sense of measures, as $\epsilon \to 0$, where $\{0,{a\over 2},{ia \over 2},{a+ia \over 2}\}$ are the zeroes of $\phi_\epsilon$ with multiplicities $2,n_1,n_2,2$ and $(n_1,n_2)=(2,0)$ (or vice versa). \end{thm}
\noindent As a final remark, observe that for $n=0$ Theorem \ref{main} essentially recovers the result in \cite{LinYan1} concerning single-point concentration in any torus ${\Omega}$ (see also \cite{EsFi}). Notice that for $n=0$ the concentration point $0$ is not really a singular point, and a simpler approach is possible as in the above-mentioned papers. By \eqref{balance} the total multiplicity $N$ is $2$, produced by two vortex points $p_1,p_2\in \Omega \setminus \{0 \}$. Assumption \eqref{pc} is equivalent to $(\log \mathcal{H})'(0)=0$. By the Cauchy-Riemann equations, the latter condition can be rewritten as
$$ \nabla [2 \re \log \mathcal{H}](0)=\nabla \log |\mathcal{H}|^2(0)=\nabla [8\pi H+u_0](0)=0.$$ Since $\nabla H(0)=0$ in view of $H(z)=H(-z)$, we have that \eqref{pc} simply reads as: $0$ is a critical point of $u_0$. As for \eqref{D0}, notice that $D_0$ does not depend on $\rho>0$ small, since \begin{eqnarray*} \int_{\sigma_0^{-1} (B_\rho(0)) \setminus \sigma_0^{-1} (B_r(0))} e^{u_0+8\pi G(z,0)} - \int_{B_\rho(0) \setminus B_r(0)}
\frac{dy}{|y|^4}= \hbox{Area}\left( B_{\frac{1}{r}}(0) \setminus B_{\frac{1}{\rho}}(0) \right)-\pi \Big(\frac{1}{r^2}-\frac{1}{\rho^2}\Big)=0 \end{eqnarray*} for all $0<r\leq \rho$, in view of \eqref{eq sigma0} with $c_0=0$. Therefore, $D_0$ can be re-written as \begin{eqnarray*} D_0 = \frac{1}{\pi}\left[\int_{\Omega \setminus \sigma_0^{-1} (B_\rho(0))} e^{u_0+8\pi G(z,0)} - \int_{\mathbb{R}^2 \setminus B_\rho(0)}
\frac{dy}{|y|^4}\right]=\frac{1}{\pi}\lim_{r\to0} \bigg[\int_{\Omega \setminus \sigma_0^{-1}(B_r(0))}
{e^{8\pi H(z,0)+u_0} \over |z|^4} -
\int_{\mathbb{R}^2 \setminus B_{r}(0)} \frac{1}{|y|^4}\bigg]. \end{eqnarray*}
Since $\sigma_0(z)=\frac{z}{\lambda_0}+\frac{\mathcal{H}''(0)}{2\lambda_0^2}z^3+O(|z|^5)$ and $\sigma_0^{-1}(z)=\lambda_0z+O(|z|^3)$ with $\lambda_0=e^{4\pi H(0)-\frac{u_0(0)}{2}}$, notice that $B_{\lambda_0r-Cr^3}(0)\subset \sigma_0^{-1}(B_r(0)) \subset B_{\lambda_0 r+Cr^3}(0)$ for all $r>0$ small, for some constant $C>0$. Thus, there holds \begin{eqnarray*}
&&\bigg|\int_{\Omega \setminus \sigma_0^{-1}(B_r(0))}
{1\over |z|^4}e^{8\pi[H(z,0)-H(0,0)]+[u_0(z)-u_0(0)]}-\int_{\Omega \setminus B_{\lambda_0 r}(0)}
{1\over |z|^4}e^{8\pi[H(z,0)-H(0,0)]+[u_0(z)-u_0(0)]}\bigg|\\
&&=O\left( \int_{ B_{\lambda_0 r+Cr^3}(0) \setminus B_{\lambda_0 r-Cr^3}(0)} \frac{1}{|z|^2}\right) =o(1) \end{eqnarray*} as $r \to 0$ in view of $\nabla [8\pi H+u_0](0)=0$, yielding the same expression for $D_0$ as in \cite{EsFi,LinYan1}: $$D_0=\frac{\lambda_0^2}{\pi}\lim_{r\to0} \bigg[\int_{\Omega \setminus B_r(0)}
{1\over |z|^4}e^{8\pi[H(z,0)-H(0,0)]+[u_0(z)-u_0(0)]} -
\int_{\mathbb{R}^2 \setminus B_{r}(0)} \frac{1}{|y|^4}\bigg].$$ The ``non-degeneracy condition'' \eqref{nondegenracy} reads as
$$\left| \frac{\mathcal{H}''(0)}{\mathcal{H}(0)}-4\pi (H^*)''(0)\right|=\left| (\log \mathcal{H})''(0)-4\pi (H^*)''(0)\right|\ne {2\pi\over |\Omega|},$$ in view of $\sigma_0=q_0$, $b_1=\lambda_0$, $f_1(z)=-4\pi \lambda_0 (H^*)'(z)+\frac{2 \lambda_0}{z}-\frac{2}{\sigma_0(z)}$ and $\mathcal{H}'(0)=0$. Setting
$\mathcal{H}_1(z)=e^{-4\pi H^*(z)} \mathcal{H}(z)$, we have that $|\mathcal{H}_1(z)|^2=e^{u_0+\frac{2\pi}{|\Omega|}|z|^2}$ and
\begin{eqnarray*}(\log \mathcal{H})''(0)-4\pi (H^*)''(0)&=&(\log \mathcal{H}_1)''(0)=2 (\re \log \mathcal{H}_1)''(0)= (\log |\mathcal{H}_1|^2 )''(0)=\Big(u_0+\frac{2 \pi}{|\Omega|} |z|^2\Big)''(0)\\ &=& \frac{1}{4}[(u_0)_{xx}(0)-(u_0)_{yy}(0)-2i (u_0)_{xy}(0)] \end{eqnarray*} in view of \eqref{definitionH}-\eqref{keyrelationH}, and the above condition turns into
\begin{eqnarray*} 0 &\not=&\frac{1}{16}\left| (u_0)_{xx}(0)-(u_0)_{yy}(0)-2i (u_0)_{xy}(0) \right|^2 -{4\pi^2\over |\Omega|^2}=
\frac{1}{16} \left((u_0)_{xx}(0)-(u_0)_{yy}(0)\right)^2+\frac{1}{4}(u_0)^2_{xy}(0) -{4\pi^2\over|\Omega|^2}\\
&=&\frac{1}{16}(\Delta u_0)^2 (0)-\frac{1}{4}\hbox{det}\,D^2 u_0(0)-{4\pi^2\over |\Omega|^2}=-\frac{1}{4}\hbox{det}\,D^2 u_0(0). \end{eqnarray*} In conclusion, when $n=0$ the assumptions in Theorem \ref{main} are equivalent to requiring that $0$ be a non-degenerate critical point of $u_0(z)=-4\pi G(z,p_1)-4\pi G(z,p_2)$ with $D_0<0$.
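The purely algebraic step in the last display, rewriting $\frac{1}{16}\left|(u_0)_{xx}-(u_0)_{yy}-2i(u_0)_{xy}\right|^2$ through $(\Delta u_0)^2$ and $\hbox{det}\,D^2u_0$, can be verified on arbitrary Hessian entries; a small Python sanity check (the variable names are ours):

```python
import random

random.seed(0)
for _ in range(100):
    # uxx, uyy, uxy play the role of the second derivatives of u_0 at 0
    uxx, uyy, uxy = (random.uniform(-10, 10) for _ in range(3))
    lhs = abs(uxx - uyy - 2j * uxy) ** 2 / 16
    mid = (uxx - uyy) ** 2 / 16 + uxy**2 / 4
    # (1/16)(Delta u_0)^2 - (1/4) det D^2 u_0
    rhs = (uxx + uyy) ** 2 / 16 - (uxx * uyy - uxy**2) / 4
    assert abs(lhs - mid) < 1e-9 and abs(mid - rhs) < 1e-9
print("identity verified")
```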
\section{A more general result}\label{general} In this section we deal with the case $m\geq 2$ in Theorem \ref{mainbb}. For clarity, let us denote the concentration points by $\xi_l$, $l=1,\dots,m$, the remaining points in the vortex set by $p_j$, and by $n_l,n_j$ the corresponding multiplicities.
\noindent From section $2$ recall that $H(z)= G(z,0)+\frac{1}{2\pi} \log|z|$ is a smooth function in $2\Omega$ with $\Delta H=\frac{1}{|\Omega|}$, and $H^*$ is a holomorphic function in $2 \Omega$ with $\re H^*=H-\frac{|z|^2}{4|\Omega|}$. Up to a translation, we assume that $p_j \in \Omega$ for all $j=1,\dots,N$, and take $\tilde \Omega$ close to $\Omega$ so that $\tilde \Omega -p_j \subset 2 \Omega$ for all $j=1,\dots,N$. Arguing as for \eqref{definitionH}, the function \begin{eqnarray*} && \mathcal{H}(z)= \prod_j (z-p_j)^{n_j} \hbox{exp}\left( 4\pi \sum_{l=1}^m (n_l+1) H^*(z-\xi_l) -2\pi\sum_{j=1}^N H^*(z-p_j)\right.\\
&&\left. +\frac{\pi}{|\Omega|} \sum_{l=1}^m (n_l+1)(\xi_l-2z) \overline{\xi_l} -\frac{\pi}{2|\Omega|}\sum_{j=1}^N |p_j|^2+\frac{\pi}{|\Omega|}z \overline{\sum_{j=1}^N p_j} \right) \end{eqnarray*} is holomorphic in $\tilde \Omega$ and satisfies
$$|\mathcal{H}(z)|^2=\left(\prod_{l=1}^m |z-\xi_l|^{-2n_l}\right) \hbox{exp}\left(u_0+8\pi \sum_{l=1}^m (n_l+1) H(z-\xi_l)\right)$$ in view of \eqref{hhh}. For $l=1,\dots,m$ the function $$\mathcal{H}^l(z)= \mathcal{H}(z) \prod_{l' \not= l} (z-\xi_{l'})^{-(n_{l'}+2)}$$ is holomorphic near $\xi_l$ and satisfies \begin{eqnarray}
|\mathcal{H}^l(z)|^2=\hbox{exp}\left(4\pi (n_l+2)H(z-\xi_l)+4\pi \sum_{l'\not=l} (n_{l'}+2) G(z, \xi_{l'})-4\pi \sum_j n_j G(z,p_j) \right). \label{1849} \end{eqnarray}
\noindent To be clearer, let us spend a few words comparing the cases $m=1$ and $m\geq 2$. When $m=1$ notice that $\mathcal{H}$ satisfies $|\mathcal{H}|^2=e^{u_0+8\pi(n+1)H(z)-2n \log |z|}$ in view of \eqref{keyrelationH}. The function $e^{u_0+8\pi(n+1)H(z)-2n \log |z|}$ is a sort of effective potential for \eqref{3} at $0$, where $e^{u_0-2n \log |z|}$ is the non-vanishing part of $e^{u_0}$ and $e^{8\pi(n+1) H(z)}$ is the self-interaction of the concentration point $0$ driven by $PU_{\delta,0,\sigma_0}$ through \eqref{1138}. When $m\geq 2$, \eqref{1849} can be re-written as
$$|\mathcal{H}^l(z)|^2=\hbox{exp} \left(u_0+8\pi(n_l+1)H(z-\xi_l)+8\pi \sum_{l'\not=l} (n_{l'}+1) G(z, \xi_{l'})-2n_l \log |z-\xi_l|\right)$$ for $l=1,\dots,m$, yielding an effective potential for \eqref{3} at $\xi_l$ which exhibits an additional interaction term $e^{8\pi \sum_{l'\not=l} (n_{l'}+1) G(z, \xi_{l'})}$ generated by the effect of the concentration points $\xi_{l'}$, $l'\not=l$, through \eqref{1139}.
\noindent Setting $\mathcal{H}_{0}=\frac{\mathcal{H}}{(z-\xi_1)^{n_1+2}\dots (z-\xi_m)^{n_m+2}}$, we now define $\sigma_0$ as \begin{equation} \label{1143} \sigma_0(z)=-\left( \int^z \mathcal{H}_{0}(w) \hbox{exp}\left[-\sum_{l=1}^m c_0^l (w-\xi_l)^{n_l+1} \prod_{l' \not=l}(w-\xi_{l'})^{n_{l'}+2}\right] dw \right)^{-1}, \end{equation} where $$c_0^l=\frac{1}{\mathcal{H}_0(\xi_l) (n_l+1)!} \frac{d^{n_l+1} \mathcal{H}^l }{dz^{n_l+1}}(\xi_l),\quad l=1,\dots,m,$$ guarantee that all the residues of the integrand function in the definition of $\sigma_0$ vanish. The presence of the term $ \prod_{l' \not=l}(w-\xi_{l'})^{n_{l'}+2}$ is crucial in order to compute the $c_0^l$'s explicitly, since $$c_0^l (w-\xi_l)^{n_l+1} \prod_{l' \not=l}(w-\xi_{l'})^{n_{l'}+2}=O((w-\xi_{l'})^{n_{l'}+2})$$ has a high-order effect near any other $\xi_{l'}$, $l' \not= l$. By construction $\sigma_0 \in \mathcal{M}(\overline{\Omega})$ vanishes only at the $\xi_l$'s with multiplicity $n_l+1$ and $$\lim_{z \to \xi_l} \frac{(z-\xi_l)^{n_l+1}}{\sigma_0(z)}=\frac{\mathcal{H}^l(\xi_l)}{n_l+1},$$ and satisfies
$$|\sigma_0'(z)|^2= |\sigma_0(z)|^4 \hbox{exp}\left(u_0+8\pi\sum_{l=1}^m (n_l+1)G(z,\xi_l) -2 \sum_{l=1}^m \re \bigg[c_0^l (z-\xi_l)^{n_l+1} \prod_{l'\not=l}(z-\xi_{l'})^{n_{l'}+2}\bigg] \right).$$ Under the assumptions of Theorem \ref{mainbb}, notice that $c_0^l=0$ for all $l=1,\dots,m$ and
$$\left|\left(1\over\sigma_0\right)'(z)\right|^2=|\mathcal{H}_0(z)|^2=e^{u_0+8\pi \sum_{l=1}^m (n_l+1) G(z,\xi_l)}.$$
\noindent Since each $\xi_l$ contributes to the dimension of the kernel of the linearized operator \eqref{ol}, the parameters $\delta$ and $a$ are no longer enough to recover all the degeneracies induced by the ansatz $PU_{\delta,a,\sigma}$, for $\sigma \in \mathcal{M}(\overline{\Omega})$ a function which vanishes only at the points $\xi_l$, $l=1,\dots,m$, with multiplicity $n_l+1$. In our construction, the correct number of parameters to use is $2m+1$, given by $m$ small complex numbers $a_1,\dots,a_m$ and $\delta>0$ small, where the latter gives rise to the concentration parameter $\delta_l$ at $\xi_l$, $l=1,\dots,m$, by means of \eqref{repla2}. The requirement that all the $\delta_l$'s tend to zero at the same rate is necessary, as we will discuss later.
\noindent We need to construct an ansatz that looks as $PU_{\delta_l,a_l,\sigma_{a,l}}$ near each $\xi_l$, for a suitable $\sigma_{a,l}$ which makes the approximation near $\xi_l$ good enough. In order to localize our previous construction, let us define $PU_{\delta_l,a_l,\sigma}$ as the solution of $$\left\{ \begin{array}{ll} -\Delta PU_{\delta_l,a_l,\sigma} =
\chi(|z-\xi_l|) |\sigma'(z)|^2 e^{U_{\delta_l,a_l,\sigma}} -\frac{1}{|\Omega|} \int_\Omega \chi(|z-\xi_l|) |\sigma'(z)|^2 e^{U_{\delta_l,a_l,\sigma}}& \hbox{in }\Omega\\ \int_\Omega PU_{\delta_l,a_l,\sigma}=0,& \end{array} \right.$$
where $\chi$ is a smooth radial cut-off function so that $\chi=1$ in $[-\eta,\eta]$, $\chi=0$ in $(-\infty,-2\eta]\cup [2\eta,+\infty)$, $0<\eta<\frac{1}{2} \min\{|\xi_l-\xi_{l'}|, \hbox{dist }(\xi_l,\partial \Omega): l,l'=1,\dots,m,\, l\not= l' \}$. The approximating function is then built as $W=\displaystyle \sum_{l=1}^m PU_l$, where $U_{\delta_l,a_l,\sigma_{a,l}}$ and $PU_{\delta_l,a_l,\sigma_{a,l}}$ will be simply denoted by $U_l$ and $PU_l$.
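A cut-off with the stated properties can be produced by the standard bump-function construction; a minimal Python sketch (the function names are ours, and any $C^\infty$ transition between $0$ and $1$ would do equally well):

```python
import math

def smooth_step(t):
    # Standard C^infinity step: 0 for t <= 0, 1 for t >= 1.
    def f(s):
        return math.exp(-1.0 / s) if s > 0 else 0.0
    return f(t) / (f(t) + f(1.0 - t))

def chi(t, eta):
    # chi = 1 on [-eta, eta], chi = 0 outside (-2*eta, 2*eta),
    # smooth and even in t, as required for the localized projection.
    return smooth_step(2.0 - abs(t) / eta)

print(chi(0.5, 1.0), chi(1.5, 1.0), chi(2.5, 1.0))  # prints 1.0 0.5 0.0
```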
\noindent Let us now explain how to find the functions $\sigma_{a,l}$, $l=1,\dots,m$. Setting
$$\mathcal{B}_r^l=\bigg\{ \sigma \hbox{ holomorphic in }B_{2\eta}(\xi_l):\:\Big\| \frac{\sigma}{\sigma_0}-1\Big\|_{\infty,B_{2\eta}(\xi_l)} \leq r \bigg\}$$
for $l=1,\dots,m$, Lemma \ref{gomme} still holds in this context for all $\sigma \in \mathcal{B}_r^l$, by simply replacing $0$, $n$ with $\xi_l$, $n_l$ and $\tilde \Omega$ with $B_{2\eta}(\xi_l)$. Then, for all $\sigma=(\sigma_1,\dots,\sigma_m) \in \mathcal{B}_r:=\mathcal{B}_r^1 \times \dots \times \mathcal{B}_r^m$ and $a=(a_1,\dots,a_m) \in \mathbb{C}^m$ with $\|a\|_\infty <\rho$ there exist points $a_i^l$, $l=1,\dots,m$ and $i=0,\dots,n_l$, so that $\{z \in B_{2\eta}(\xi_l): \, \sigma_l(z)=a_l \}=\{\xi_l+a_0^l,\dots,\xi_l+a_{n_l}^l\}$ for all $l=1,\dots,m$. Arguing as for \eqref{Hasigma}, for $l=1,\dots,m$ the function \begin{eqnarray*} && \hspace{-0.3cm} \mathcal{H}_{a,\sigma}^l(z)= \prod_j (z-p_j)^{n_j} \prod_{l' \not= l} (z-\xi_{l'})^{n_{l'}} \prod_{l' \not= l} \prod_{i=0}^{n_{l'}} (z-\xi_{l'}-a_i^{l'})^{-2} \hbox{exp}\left( 4\pi \sum_{l'=1}^m \sum_{i=0}^{n_{l'}} H^*(z-\xi_{l'}-a_i^{l'}) \right.\\
&&\hspace{-0.3cm} \left. -2\pi\sum_{j=1}^N H^*(z-p_j)+\frac{\pi}{|\Omega|} \sum_{l'=1}^m (n_{l'}+1) (\xi_{l'}-2z) \overline{ \xi_{l'}}
-\frac{\pi}{2|\Omega|}\sum_{j=1}^N |p_j|^2-\frac{2\pi}{|\Omega|}\sum_{l'=1}^m (z-\xi_{l'}) \overline{\sum_{i=0}^{n_{l'}} a_i^{l'}}
+\frac{\pi}{|\Omega|}z \overline{\sum_{j=1}^N p_j} \right) \end{eqnarray*} is holomorphic near $\xi_l$ and satisfies \begin{equation} \label{keyrelationgen}
|\mathcal{H}_{a,\sigma}^l(z)|^2= |z-\xi_l|^{-2n_l} \exp[u_0+8\pi \sum_{i=0}^{n_l} H(z-\xi_l-a_i^l)+8\pi \sum_{l'\not= l} \sum_{i=0}^{n_{l'}}
G(z,\xi_{l'}+a_i^{l'})-\frac{2\pi}{|\Omega|} \sum_{l'=1}^m
\sum_{i=0}^{n_{l'}} |a_i^{l'}|^2] \end{equation} in view of \eqref{hhh}. Setting $$g_{a_l,\sigma_l}^l(z)=\frac{\sigma_l(z)-a_l}{\prod_{i=0}^{n_l}(z-\xi_l-a_i^l)},\quad z \in B_{2\eta}(\xi_l),$$ and \begin{equation} \label{cagen} c_{a,\sigma}^l=\frac{\prod_{l' \not=l}(\xi_l-\xi_{l'})^{-(n_{l'}+2)}}{(n_l+1)!}\frac{d^{n_l+1}}{dz^{n_l+1}}\left[ \Big(\frac{g^l_{a_l,\sigma_l}(z) g^l_{0,\sigma_l}(\xi_l)}{g^l_{a_l,\sigma_l}(\xi_l) g^l_{0,\sigma_l}(z)}\Big)^2 \frac{\mathcal{H}_{a,\sigma}^l(z)}{\mathcal{H}_{a,\sigma}^l(\xi_l) } \right](\xi_l), \end{equation} the aim is to find a solution $\sigma_a =(\sigma_{a,1},\dots, \sigma_{a,m})\in \mathcal{B}_r$ of the system $(l=1,\dots,m)$: \begin{equation} \label{sigmaagen} \sigma_l(z)= -\left( \int^z \Big(\frac{g^l_{a_l,\sigma_l}(w)}{g^l_{0,\sigma_l}(w)}\Big)^2 \frac{\mathcal{H}^l_{a,\sigma}(w)}{(w-\xi_l)^{n_l+2}} \hbox{exp}\left[-\sum_{l'=1}^m c_{a,\sigma}^{l'} (w-\xi_{l'})^{n_{l'}+1} \prod_{l'' \not=l'}(w-\xi_{l''})^{n_{l''}+2}\right] dw\right)^{-1}, \end{equation} where the definition of $c_{a,\sigma}^l$ makes the residue at $\xi_l$ of the integrand function in \eqref{sigmaagen} vanish. The function $\sigma_{a,l}$ will vanish only at $\xi_l$ with multiplicity $n_l+1$ and satisfy \begin{eqnarray} \label{eq sigmaagen}
|\sigma_{a,l}'(z)|^2&=& |\sigma_{a,l}(z)-a_l|^4 \hbox{exp}\left(u_0+8\pi \sum_{l'=1}^m \sum_{i=0}^{n_{l'}} G(z,\xi_{l'}+a_i^{l'})-\frac{2\pi}{|\Omega|} \sum_{l'=1}^m \sum_{i=0}^{n_{l'}} |a_i^{l'}|^2\right.\\ &&\left. -2\sum_{l'=1}^m \re \Big[c^{l'}_{a,\sigma_{a}} (z-\xi_{l'})^{n_{l'}+1} \prod_{l''\not=l'}(z-\xi_{l''})^{n_{l''}+2}\Big] \right) \nonumber \end{eqnarray} in view of \eqref{keyrelationgen}.
\noindent Since $\mathcal{H}_{0,\sigma}^l=\mathcal{H}^l$ and $c^l_{0,\sigma}=c_0^l$ for all $l=1,\dots,m$, when $a=0$ the system \eqref{sigmaagen} reduces to $m$-copies of \eqref{1143} in each $B_{2\eta}(\xi_l)$, $l=1,\dots,m$, and it is natural to find $\sigma_a$ branching off $(\sigma_0,\dots,\sigma_0)$ for $a$ small by the Implicit Function Theorem. Let us emphasize that each $\sigma_{a,l}$, $l=1,\dots,m$, is close to $\sigma_0\Big|_{B_{2\eta}(\xi_l)}$, a crucial property in order to have $D_0$ defined in terms of a unique $\sigma_0$ (see \eqref{ggg}). Letting $q_{0,l}$ be the function so that $\sigma_0=q_{0,l}^{n_l+1}$ near $\xi_l$, arguing as in Lemma \ref{derivca} we have that \begin{lem}\label{derivcagen} Up to taking $\rho$ smaller, there exists a $C^1-$map $a \in B_\rho(0) \to \sigma_a \in \mathcal{B}_r$ so that $\sigma_a$ solves the system \eqref{cagen}-\eqref{sigmaagen}. Moreover, the map $a \in B_\rho(0) \to c_a^l:=c^l_{a,\sigma_a}$ is $C^1$ with \begin{eqnarray} && \Gamma^{ll}:=\mathcal{H}(\xi_l) \partial_{a_l} c_a^l
\Big|_{a=0}=\frac{1}{n_l !}\frac{d^{n_l+1}}{dz^{n_l+1}}\bigg[ \mathcal{H}^l(z)f_{n_l+1}^l(z)\bigg] (\xi_l) \label{primo}\\
&&\Upsilon^{ll}:=\mathcal{H}(\xi_l) \partial_{\bar a_l} c_a^l \Big|_{a=0}=-{2\pi(n_l+1)\over |{\Omega}| n_l!}\overline{b_{n_l+1}^l}\,\frac{d^{n_l} \mathcal{H}^l }{dz^{n_l}}(\xi_l) \label{secondo} \end{eqnarray} and for $j \not= l$ \begin{eqnarray} &&\Gamma^{lj}:=\mathcal{H}(\xi_l) \partial_{a_j} c_a^l
\Big|_{a=0}=\frac{n_j+1}{(n_l+1)!}\frac{d^{n_l+1}}{dz^{n_l+1}}\bigg[ \mathcal{H}^l(z) \tilde f_{n_j+1}^j(z) \bigg] (\xi_l) \label{terzo}\\
&&\Upsilon^{lj}:=\mathcal{H}(\xi_l) \partial_{\bar a_j} c_a^l \Big|_{a=0}=-{2\pi(n_j+1)\over |{\Omega}| n_l!}\overline{b_{n_j+1}^j}\,\frac{d^{n_l} \mathcal{H}^l}{dz^{n_l}}(\xi_l) \label{quarto}, \end{eqnarray} where $$f_{n+1}^l(z)=\frac{1}{(n+1)!} \frac{d^{n+1}}{dw^{n+1}}\left[2\log \frac{w-q_{0,l}(z)}{q_{0,l}^{-1}(w)-z}+4\pi H^*(z-q_{0,l}^{-1}(w))\right] (0)\:,\qquad b_{n+1}^l=\frac{1}{(n+1)!}\frac{d^{n+1} q_{0,l}^{-1}}{dw^{n+1}}(0)$$ and for $j \not=l$ $$\tilde f_{n+1}^j(z)=\frac{1}{(n+1)!} \frac{d^{n+1}}{dw^{n+1}} \bigg[-2\log(z-q_{0,j}^{-1}(w))+ 4\pi H^*(z-q_{0,j}^{-1}(w))\bigg](0).$$ \end{lem}
\noindent Letting $n=\min\{n_l :\: l=1,\dots,m\}$, up to re-ordering, assume that $n=n_1=\dots=n_{m'}<n_l$ for all $l=m'+1,\dots,m$, where $1\leq m'\leq m$. The matrix $A$ in Theorem \ref{mainbb} is the $2m \times 2m-$matrix of the form \begin{equation} \label{matrixA} A=\left( \begin{array}{ccc} A_{1,2}^{1,2}& \dots & A_{1,2}^{2m-1,2m}\\ \vdots& \vdots &\vdots\\ A_{2m-1,2m}^{1,2}& \dots& A_{2m-1,2m}^{2m-1,2m} \end{array} \right), \end{equation} where the $2\times 2$-blocks are given by
$$A_{2l-1,2l}^{2l'-1,2l'}=\left(\begin{array}{cc} \re [\Gamma^{ll'}+\Upsilon^{ll'}+\frac{n(2n+3)}{n+1} D_0 \frac{|\mathcal{H}^l(\xi_l)|^{-\frac{2}{n+1}}}{\sum_{j=1}^{m'}|\mathcal{H}^j(\xi_j)|^{-\frac{2}{n+1}}} \delta_{ll'}]& \im [\Upsilon^{ll'}-\Gamma^{ll'}]\\
\im [\Gamma^{ll'}+\Upsilon^{ll'}]& \re [\Gamma^{ll'}-\Upsilon^{ll'}-\frac{n(2n+3)}{n+1} D_0 \frac{|\mathcal{H}^l(\xi_l)|^{-\frac{2}{n+1}}}{ \sum_{j=1}^{m'}|\mathcal{H}^j(\xi_j)|^{-\frac{2}{n+1}}} \delta_{ll'}] \end{array}\right)$$ when $l=1,\dots,m'$ and by $$A_{2l-1,2l}^{2l'-1,2l'}=\left(\begin{array}{cc} \re [\Gamma^{ll'}+\Upsilon^{ll'}]& \im [\Upsilon^{ll'}-\Gamma^{ll'}]\\ \im [\Gamma^{ll'}+\Upsilon^{ll'}]& \re [\Gamma^{ll'}-\Upsilon^{ll'}] \end{array}\right)$$ when $l=m'+1,\dots,m$, with $\Gamma^{ll'}$ and $\Upsilon^{ll'}$ given by \eqref{primo}, \eqref{terzo} and \eqref{secondo}, \eqref{quarto}, respectively, and $\delta_{ll'}$ the Kronecker symbol.
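\noindent Let us point out the structure of the $2\times 2$ blocks (a standard remark which we include for the reader's convenience): given $\Gamma,\Upsilon \in \mathbb{C}$, the $\mathbb{R}$-linear map $a \mapsto \Gamma a+\Upsilon \bar a$ on $\mathbb{C}$, written in real coordinates $a=x+iy$, satisfies $\Gamma a+\Upsilon \bar a=(\Gamma+\Upsilon)x+i(\Gamma-\Upsilon)y$ and is therefore represented by the real matrix
$$\left(\begin{array}{cc} \re\,[\Gamma+\Upsilon]& \im\,[\Upsilon-\Gamma]\\ \im\,[\Gamma+\Upsilon]& \re\,[\Gamma-\Upsilon] \end{array}\right),$$
which is precisely the pattern of the blocks $A_{2l-1,2l}^{2l'-1,2l'}$, the additional $D_0$-terms on the diagonal accounting for the dependence on $\delta$ when $l=l'\in\{1,\dots,m'\}$.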
\noindent Arguing as in Lemma \ref{expPU}, for $l=1,\dots,m$ we have that \begin{eqnarray*}
PU_{\delta_l,a_l,\sigma_l}&=&\chi(|z-\xi_l|) \left[U_{\delta_l,a_l,\sigma_l}-\log (8 \delta^2_l)+4 \log |g_{a_l,\sigma_l}^l| \right]\\
&&+8\pi \sum_{i=0}^{n_l} \left[ \frac{1}{2\pi} (\chi(|z-\xi_l|)-1) \log |z-\xi_l-a_i^l|+H(z-\xi_l-a_i^l)\right]+\Theta_{\delta_l,a_l,\sigma_l}+2\delta^2_l f_{a_l,\sigma_l}+O(\delta^4_l) \end{eqnarray*} and \begin{eqnarray} \label{1139}
PU_{\delta_l,a_l,\sigma_l}=8\pi \sum_{i=0}^{n_l} G(z,\xi_l+a_i^l)+\Theta_{\delta_l,a_l,\sigma_l}+2\delta^2_l \left( f_{a_l,\sigma_l}-{\chi(|z-\xi_l|)\over|\sigma_l(z)-a_l|^2}\right)+O(\delta^4_l) \end{eqnarray}
do hold in $C(\overline{\Omega})$ and $C_{\text{loc}}(\overline{\Omega} \setminus\{\xi_l\})$, respectively, uniformly for $|a|< \rho$ and $\sigma_l \in \mathcal{B}_r^l$, where
$$\Theta_{\delta_l,a_l,\sigma_l}=-\frac{1}{|\Omega|}\int_{\Omega} \chi(|z-\xi_l|) \log {|\sigma_l(z)-a_l|^4\over
(\delta_l^2+|\sigma_l(z)-a_l|^2)^2}$$ and $f_{a_l,\sigma_l}$ is a smooth function in $z$ (with a uniform control in $a_l$ and $\sigma_l$ of it and its derivatives in $z$). Choosing $\sigma_l=\sigma_{a,l}$ and summing up over $l=1,\dots,m$, by \eqref{eq sigmaagen} for our approximating function there hold \begin{eqnarray}\label{ieagr}
W&=& U_{\delta_l,a_l,\sigma_l}-\log (8 \delta^2_l)+\log |\sigma_l'|^2
-u_0+\frac{2\pi}{|\Omega|} \sum_{l'=1}^m \sum_{i=0}^{n_{l'}} |a_i^{l'}|^2+\Theta^l(a,\delta) \\ &&+2 \re \Big[c^l_{a,\sigma_l} (z-\xi_l)^{n_l+1} \prod_{l'\not=l}(z-\xi_{l'})^{n_{l'}+2}\Big]
+O(|z-\xi_l|^{n_l+2}\sum_{l'\ne l}|c^{l'}_{a,\sigma_{l'}}|)+\sum_{l'=1}^m O(\delta^2_{l'}
|z-\xi_l|+\delta^4_{l'})\nonumber \end{eqnarray} and $$W= 8\pi \sum_{l=1}^m \sum_{i=0}^{n_l} G(z,\xi_l+a_i^l)+O\bigg(
\sum_{l'=1}^m \delta^2_{l'} \log|\delta_{l'}|\bigg)$$ uniformly in $B_\eta(\xi_l)$ and in $\Omega \setminus \cup_{l=1}^m B_\eta(\xi_l)$, respectively, where $$\Theta^l(a,\delta):=\sum_{l'=1}^m [\Theta_{\delta_{l'},a_{l'},\sigma_{l'}}+\delta^2_{l'} f_{a_{l'},\sigma_{l'}}(\xi_l)].$$ As a consequence, we have that
$$\int_\Omega e^{u_0+W}= \sum_{l'=1}^m \left[\int_{B_\rho(0)} \frac{n_{l'}+1}{(\delta_{l'}^2+|y-a_{l'}|^2)^2} +o\Big(\frac{1}{\delta_{l'}^2}\Big)\right]= \pi \sum_{l'=1}^m \frac{n_{l'}+1}{\delta_{l'}^2} [1+o(1)],$$ and then near $\xi_l$ there holds
$$4\pi N \frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}=4\pi N \frac{|\sigma_l'|^2 e^{U_{\delta_l,a_l,\sigma_l}+O(|z-\xi_l|^{n_l+1})+o(1)}}{8\pi \sum_{l'=1}^m (n_{l'}+1) \delta_l^2 \delta_{l'}^{-2}(1+o(1))}.$$ In order to construct an $N-$condensate $(\mathcal{A}_\epsilon,\phi_\epsilon)$ which satisfies \eqref{magconc} as $\epsilon \to 0$, we look for a solution $w_\epsilon$ of \eqref{3} in the form $w_\epsilon=\displaystyle \sum_{l=1}^m PU_{\delta_l,a_l,\sigma_l}+\phi$, where $\phi$ is a small remainder term and $\delta_l=\delta_l(\epsilon)$, $a_l=a_l(\epsilon)$ are suitable small parameters, so that \begin{eqnarray*} &&4\pi N \frac{e^{u_0+w_\epsilon}}{\int_\Omega e^{u_0+w_\epsilon}}+ \frac{64 \pi^2N^2 \epsilon^2 \int_\Omega e^{2u_0+2w_\epsilon}}{(\int_\Omega e^{u_0+w_\epsilon}+\sqrt{(\int_\Omega e^{u_0+w_\epsilon})^2-16\pi N\epsilon^2\int_\Omega e^{2u_0+2w_\epsilon}})^2}\left(\frac{e^{u_0+w_\epsilon}}{\int_\Omega
e^{u_0+w_\epsilon}} -\frac{e^{2u_0+2w_\epsilon}}{\int_\Omega e^{2u_0+2w_\epsilon}}\right) \\ &&\hspace{3cm} \rightharpoonup 8\pi \sum_{l=1}^m (n_l+1) \delta_{\xi_l} \end{eqnarray*}
in the sense of measures as $\epsilon \to 0$. Since $|\sigma_l'|^2 e^{U_{\delta_l,a_l,\sigma_l}} \rightharpoonup 8\pi (n_l+1) \delta_{\xi_l}$ as $\delta_l,a_l \to 0$, to have the correct concentration property we need that $$8\pi \sum_{l'=1}^m (n_{l'}+1) \delta_l^2 \delta_{l'}^{-2} \to 4\pi N$$ for all $l=1,\dots,m$, and then $\frac{\delta_l}{\delta_{l'}} \to 1$ for all $l,l'=1,\dots,m$ in view of \eqref{hhh}. It is then natural to introduce just one parameter $\delta$ and to choose the $\delta_l$'s as \begin{equation}\label{repla2} \delta_l=\delta \qquad l=1,\dots,m. \end{equation}
\noindent We restrict our attention to the case $c_0^l=0$ for all $l=1,\dots,m$, which is necessary in our context and is simply a re-formulation of the assumption that $\mathcal{H}_0$ has zero residues at $\xi_1,\dots,\xi_m$. As in Theorem \ref{main}, we will work in the parameter range: $$a_l=o(\delta),\qquad \delta \sim \epsilon^{n+1\over n+2}$$ as $\epsilon\to 0^+$. Since then
$$K^{-1} \le \frac{\delta^2+|z-\xi_l|^{2n_l+2}}{\delta^2+|\sigma_l(z)-a_l|^2} \le K, \qquad K^{-1}|z-\xi_l|^{2n_l}\leq |\sigma_l'(z)|^2 \leq K |z-\xi_l|^{2n_l}$$ in $B_{2\eta}(\xi_l)$ for all $\sigma_l \in \mathcal{B}_r^l$ and $l=1,\dots,m$, where $K>1$, the norm \eqref{wn} can now simply be defined as
$$\| h \|_*=\sup_{z\in {\Omega}}\left[ \sum_{l=1}^m\frac{\delta^{\gamma}
\left(|z-\xi_l|^{2n_l}+\delta^{\frac{2n_l}{n_l+1}}\right)}{(\delta^2+|z-\xi_l|^{2n_l+2})^{1+\frac{\gamma}{2}}}\right]^{-1}\;
|h(z)|$$ for any $h\in L^\infty({\Omega})$, where $0<\gamma<1$ is a small fixed constant. In order to simplify notation, we set $U_l=U_{\delta_l,a_l,\sigma_l}$, $c_a^l=c_{a,\sigma_l}^l$, $\Theta_l=\Theta_{\delta_l,a_l,\sigma_l}$ and $f_l=f_{a_l,\sigma_l}$. We have that \begin{lem}\label{estrrm} There exists a constant $C>0$ independent of $\delta$ such that \begin{equation}\label{erem}
\|R\|_*\le C\delta^{2-\gamma}. \end{equation} \end{lem} \begin{proof} We shall sketch the proof of \eqref{erem} by following the ideas used in the proof of Theorem \ref{estrr01550}. Through the change of variable $y=\sigma_l(z)$ in $\sigma_l^{-1}(B_\rho(0))$, by Lemma \ref{derivcagen}, \eqref{ieagr}, \eqref{repla2} and $c_0^l=0$ for all $l=1,\dots,m$ we find that \begin{eqnarray*}
&&{8\delta^2\over e^{{2\pi\over|{\Omega}|}\sum_{l'=1}^m\sum_{i=0}^{n_{l'}}|a_i^{l'}|^2+\Theta^l(a,\delta)}}\int_{\sigma^{-1}_l(B_{\rho}(0))} e^{u_0+W}
=\int_{\sigma^{-1}_l(B_{\rho}(0))} |\sigma_l'|^{2}
e^{U_{l}+O(|z-\xi_l|^{n_l+1} \sum_{l'=1}^m |c_{a}^{l'}|+\delta^2|z-\xi_l|+\delta^4)}\\ &&=8\pi(n_l+1)
- \int_{\mathbb{R}^2 \setminus B_{\rho}(0)} \frac{8(n_l+1) \delta^2}{|y|^4}+ O\Big(\|a\|^2+\delta\|a\|+\delta^{2n_l+3\over n_l+1}\Big), \end{eqnarray*}
where $\|a\|^2=\displaystyle \sum_{l=1}^m|a_l|^2$. Setting ${\Omega}_\rho=\cup_{l=1}^m \sigma_l^{-1}(B_\rho(0))$ we get that \begin{eqnarray*} &&
{8\delta^2\over e^{{2\pi\over|{\Omega}|}\sum_{l'=1}^m\sum_{i=0}^{n_{l'}}|a_i^{l'}|^2+\sum_{l'=1}^m\Theta_{l'}}}\,\int_\Omega e^{u_0+W} = \sum_{l=1}^m e^{\delta^2\sum_{l'=1}^mf_{l'}(\xi_l)}\bigg[8\pi(n_l+1)-\int_{\mathbb{R}^2 \setminus B_{\rho}(0)}
\frac{8(n_l+1) \delta^2}{|y|^4}\\
&&+O\Big(\|a\|^2+\delta\|a\|+\delta^{2n_l+3\over n_l+1}\Big)\bigg]+8\delta^2 \int_{\Omega \setminus{\Omega}_\rho} e^{u_0+8\pi \sum_{l=1}^m\sum_{i=0}^{n_l} G(z,\xi_l+a_{i}^l)}+O(\delta^4|\log \delta| +\delta^2\|a\|^{\frac{2}{\max_l n_l+1}})\\
&&=\sum_{l=1}^m\bigg[8\pi(n_l+1)+8\pi(n_l+1)\delta^2\sum_{l'=1}^m f_{l'}(\xi_l)-8(n_l+1)\delta^2\int_{\mathbb{R}^2 \setminus B_{\rho}(0)} \frac{1}{|y|^4}\bigg]\\ &&+8\delta^2 \int_{\Omega \setminus {\Omega}_\rho} e^{u_0+8\pi \sum_{l=1}^m\sum_{i=0}^{n_l} G(z,\xi_l+a_i^l)}+o(\delta^2)=\;4\pi N\left[1+{2\over N}\delta^2D_a+{2\over N}\delta^2\sum_{l,l'=1}^m(n_l+1)f_{l'}(\xi_l)+o(\delta^2)\right] \end{eqnarray*} in view of \eqref{hhh}, where $D_a$ is given by $$\pi D_a=\int_{\Omega \setminus \Omega_\rho} e^{u_0+8\pi \sum_{l=1}^m\sum_{i=0}^{n_l}G(z,\xi_l+a_i^l)} -
\sum_{l=1}^m (n_l+1)\int_{\mathbb{R}^2 \setminus B_{\rho}(0)} \frac{1}{|y|^4}.$$
Hence, for $|z-\xi_l| \leq \eta$ we have that \begin{eqnarray}\label{impm}
&&\Delta W+4\pi N\left( \frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{1}{|\Omega|}\right)=|\sigma_l'|^2 e^{U_l} \bigg[2\hbox{Re }\Big[c_a^l(z-\xi_l)^{n_l+1}\prod_{l' \ne l}(z-\xi_{l'})^{n_{l'}+2}\Big]\\ &&+\delta^2\sum_{l'=1}^m f_{l'}(\xi_l)-{2D_a\over N}\delta^2-{2\delta^2\over N}\sum_{j,l'=1}^m(n_j+1)
f_{l'}(\xi_j)+O(\|a\| |z-\xi_l|^{n_l+2}+\delta^2|z-\xi_l|)+o(\delta^2)\bigg] +O(\delta^2)\nonumber \end{eqnarray} as $\delta \to 0$, in view of \eqref{hhh} and $\int_\Omega \chi_l
|\sigma_l'|^{2} e^{U_{l}}=8\pi(n_l+1)+O(\delta^2)$ for all $l=1,\dots,m$. For $z \in \Omega \setminus \cup_{l=1}^mB_\eta(\xi_l)$, we have that \begin{eqnarray} \Delta W+4\pi N\left( \frac{e^{u_0+W} }{\int_\Omega e^{u_0+W}
}-\frac{1}{|\Omega|}\right)=O(\delta^2). \label{impextm} \end{eqnarray} On the other hand, arguing as in \eqref{const1}, we have that \begin{eqnarray*}
{64\delta^{4}\over e^{{4\pi\over|{\Omega}|}\sum_{l'=1}^m\sum_{i=0}^{n_{l'}}|a_i^{l'}|^2+2\sum_{l'=1}^m\Theta_{l'}}}\int_\Omega e^{2u_0+2W}
=64\sum_{l=1}^{m'}{(n+1)^3\over |\alpha_{a,l}|^{{2\over n+1}}\delta^{{2\over n+1}} }\int_{\mathbb{R}^2}
\frac{|y+a_l \delta^{-1} |^{\frac{2n}{n+1}}}{(1+|y|^2)^4} +O(\delta^{-\frac{1}{n+1}}), \end{eqnarray*} where $\ds\alpha_{a,l}=\lim_{z\to \xi_l}{(z-\xi_l)^{n_l+1}\over \sigma_l(z)}$. Recall that $n=\min\{n_l: l=1,\dots,m\}=n_1=\dots=n_{m'}<n_l$ for all $l=m'+1,\dots,m$. Setting
$$\ds\tilde D_{a,\delta}=\sum_{l=1}^{m'}{(n+1)^3\over|\alpha_{a,l}|^{{2\over n+1}}\delta^{{2\over n+1}}}\int_{\mathbb{R}^2}
\frac{|y+a_l\delta^{-1} |^{\frac{2n}{n+1}}}{(1+|y|^2)^4}\,dy,$$ we have that $$\frac{4\pi N\epsilon^2B(W)}{(1+\sqrt{1-\epsilon^2B(W)})^2}=64\epsilon^2\tilde D_{a,\delta} +o(\epsilon^2\delta^{-\frac{2}{n+1}}),$$ and there hold \begin{equation}\label{eps4m}
\frac{4\pi N\epsilon^2B(W)}{(1+\sqrt{1-\epsilon^2B(W)})^2}\left(\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{e^{2u_0+2W}}{\int_\Omega e^{2u_0+2W}}\right)=|\sigma_l'|^{2}e^{U_l}\bigg[{16\epsilon^2\over
\pi N}\tilde D_{a,\delta}-\epsilon^2|\sigma_l'|^{2}e^{U_l}+o(\epsilon^2\delta^{-2\over n+1})\bigg] \end{equation} in $B_\eta(\xi_l)$, $l=1,\dots,m$, and \begin{equation}\label{eps5m} \frac{4\pi N\epsilon^2B(W)}{(1+\sqrt{1-\epsilon^2B(W)})^2}\left(\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}-\frac{e^{2u_0+2W}}{\int_\Omega e^{2u_0+2W}}\right)=O(\epsilon^2 \delta^{\frac{2n}{n+1}}) \end{equation} in $\Omega \setminus \cup_{l=1}^m B_\eta(\xi_l)$. Therefore, we conclude that
$\|R\|_*=O(\delta^{2-\gamma}+\|a\|^2+\epsilon^2\delta^{-{2\over n+1}})$ and \eqref{erem} follows. \qed \end{proof}
\noindent As mentioned in section $4$, when we look for a solution of \eqref{3} in the form $w=W+\phi$, we are led to study \eqref{ephi}. In order to state the invertibility of the linear operator $L$ in a suitable functional setting, for $l=1,\dots,m$ let us introduce the functions:
$$Z_{0l}(z)=\frac{\delta^2-|\sigma_l(z)-a_l|^2}{\delta^2+|\sigma_l(z)-a_l|^2},\quad Z_l(z)=
\frac{\delta(\sigma_l(z)-a_l)}{\delta^2+|\sigma_l(z)-a_l|^2}\qquad z\in B_{2\eta}(\xi_l).$$ Also, let $PZ_{0l}$ and $PZ_l$ be the unique solutions with zero average of
$$\Delta PZ_{0l} =\chi_l \Delta Z_{0l}-\frac{1}{|{\Omega}|}\int_{\Omega} \chi_l \Delta Z_{0l},\qquad \Delta PZ_l =\chi_l \Delta Z_l-\frac{1}{|{\Omega}|}\int_{\Omega} \chi_l \Delta Z_l$$
where $\chi_l(z):=\chi(|z-\xi_l|)$, and set $PZ_0=\displaystyle \sum_{l=1}^m PZ_{0l}$. As in Propositions \ref{prop4.1}-\ref{nlp}, it is possible to prove: \begin{prop} Let $M_0>0$. There exists $\eta_0>0$ small such that for any $0<\delta\leq\eta_0$,
$|\log\delta|^2\epsilon^2\leq \eta_0 \delta^{2\over n+1}$ and $\|a\|\leq M_0 \delta$ there is a unique solution $\phi=\phi(\delta,a)$, $d_0=d_0(\delta,a)\in{\mathbb{R}}$ and $d_l=d_l(\delta,a)\in{\mathbb C}$, $l=1,\dots,m$, to $$\left\{\begin{array}{ll} L(\phi) =-[R+N(\phi)] + d_0 \Delta PZ_{0}+\displaystyle \sum_{l=1}^m\re[d_l \Delta PZ_l] &\text{in }{\Omega}\\ \int_{\Omega}\phi=\int_{\Omega } \phi \Delta PZ_l= 0&l=0,\dots,m. \end{array} \right. $$ Moreover, the map $(\delta,a)\mapsto \phi(\delta,a)$ is $C^1$ with \begin{equation}\label{estphim}
\|\phi\|_\infty\le C \delta^{2-\gamma}|\log \delta|. \end{equation} \end{prop}
\noindent The function $W+\phi$ is a solution of (\ref{3}) if we adjust $\delta$ and $a$ so as to have $d_l(\delta,a)=0$ for all $l=0,1,\dots,m$. Similarly to Lemma \ref{1039}, we have that \begin{lem}
There exists $\eta_0>0$ such that if $0<\delta\leq \eta_0$, $\|a\|\leq \eta_0 \delta$ and \begin{equation} \label{solvem} \int_\Omega (L(\phi)+N(\phi)+R) PZ_l=0 \end{equation} does hold for all $l=0,\dots,m$, then $W+\phi$ is a solution of \eqref{3}, i.e. $d_l(\delta,a)=0$ for all $l=0,\dots,m$. \end{lem} \noindent Since there hold the expansions \begin{equation*}
PZ_{0}=\sum_{l=1}^m\bigg[\chi_l(Z_{0l}+1)-{1\over|{\Omega}|}\int_{\Omega} \chi_l(Z_{0l}+1)\bigg]+O(\delta^2)
\:,\quad PZ_l=\chi_l Z_l-{1\over|{\Omega}|}\int_{\Omega} \chi_l Z_l+O(\delta)\:\:l=1,\dots,m \end{equation*} in $C(\bar {\Omega})$, arguing as in Proposition \ref{1219}, by \eqref{hhh} and \eqref{impm}-\eqref{estphim} we can deduce the following expansion for \eqref{solvem}: \begin{lem}
Assume $c_0^l=0$ for all $l=1,\dots,m$ and $\|a\|\leq \eta_0 \delta$. The following expansions do hold as $\epsilon \to 0$: \begin{eqnarray*} \int_\Omega (L(\phi)+N(\phi)+R) PZ_0&=& -8\pi D_0 \delta^2 +64(n+1)^{\frac{3n+5}{n+1}} \epsilon^2 \delta^{-{2 \over n+1}}
\sum_{l=1}^{m'} |\mathcal{H}^l(\xi_l)|^{-\frac{2}{n+1}} \int_{\mathbb{R}^2}
\frac{(|y|^2-1)|y+\frac{a_l}{\delta}|^{\frac{2n}{n+1}}}{(1+|y|^2)^5} dy\\ &&+ o(\delta^2+\epsilon^2\delta^{-{1\over n+1}})+O(\epsilon^4
\delta^{-\frac{2}{n+1}}|\log \delta|^2+\epsilon^8 \delta^{-\frac{4}{n+1}}|\log \delta|^2 )\end{eqnarray*} and \begin{eqnarray*} \int_\Omega (R+L(\phi)+N(\phi)) PZ_l &=& 4 \pi \delta
\sum_{l'=1}^m (\overline{\Upsilon^{ll'}} a_{l'}+ \overline{\Gamma^{ll'}} \bar a_{l'})-64 (n+1)^{\frac{3n+5}{n+1}} \epsilon^2 \delta^{-{2\over n+1}} |\mathcal{H}^l(\xi_l)|^{-\frac{2}{n+1}} \chi_M(l) \int_{\mathbb{R}^2}
\frac{|y+\frac{a_l}{\delta}|^{\frac{2n}{n+1}}y}{(1+|y|^2)^5} dy \\ &&+o(\delta^2+\epsilon^2 \delta^{-\frac{2}{n+1}})+O(\epsilon^4
\delta^{-\frac{2}{n+1}}|\log \delta|^2+\epsilon^8 \delta^{-\frac{4}{n+1}}|\log \delta|^2 ), \end{eqnarray*} where $D_0$ is defined in \eqref{ggg} and $\chi_M$ is the characteristic function of the set $M=\{1,\dots,m'\}$. \end{lem} \noindent Finally, arguing as in the proof of Theorem \ref{main}, we can establish Theorem \ref{mainbb} thanks to $D_0<0$ and the invertibility of the matrix $A$.
\noindent Let us now discuss some examples with $m\geq 2$. As already explained at the beginning of section \ref{examples}, we can consider the case $\xi_1,\dots,\xi_m \in \Omega$ and $p_j \in \bar \Omega$ for all $j$. In general, it is very difficult to establish the sign of $D_0$ as required in \eqref{ggg}. The key idea is to start from a configuration of the vortex points $\{p_1,\dots,p_N\}$ which is obtained in a periodic way from a simpler configuration having just one concentration point. In this case, \eqref{ggg} easily follows, but Theorem \ref{mainbb} is not really needed. One can use Theorem \ref{main} to obtain a solution with such a simpler configuration and then repeat it periodically. We then slightly move some of the vortex points in order to: \begin{itemize} \item keep zero residue of the corresponding $\mathcal{H}_0$ at each concentration point; \item break down the periodicity of the configuration. \end{itemize} In this way, assumption \eqref{ggg} is still valid but Theorem \ref{main} is no longer applicable in the trivial way we explained above. We now really need to resort to Theorem \ref{mainbb}. To exhibit some concrete examples, let us focus for simplicity on the case $m=2$; the general situation can be dealt with in the same way. Let $\Omega$ be a rectangle generated by $\omega_1=a$ and $\omega_2=ib$, $a,b>0$, and let $p_1,p_2,p_3$ be the three half-periods. Assume that the vortex set is $\{-\frac{p_1}{2}, \frac{p_1}{2},0,p_1,p_2,p_3\}$, and the concentration points are $\xi_1=-\frac{p_1}{2}$, $\xi_2=\frac{p_1}{2}$ with multiplicity $n$.
Supposing that $0$, $p_1$ have even multiplicity $n_1$ and $p_2,p_3$ have even multiplicity $n_2$ with $n_1+n_2=n+2$, we have that such a configuration is not only $\omega_1=2p_1$-periodic but also $p_1$-periodic: it can be thought of as a double repetition (in a $p_1$-periodic way) of the vortex configuration $\{-\frac{p_1}{2}, 0,p_2\}$ in $\Omega_-:=[-\frac{a}{2},0]\times [-\frac{b}{2},\frac{b}{2}]$ with corresponding multiplicities $n$, $n_1$ and $n_2$. If $n$ is even, it is easy to see that $\frac{d^{n+1} \mathcal{H}^i}{d z^{n+1}}(\xi_i)=0$ for $i=1,2$ since the given vortex configuration is even with respect to $\xi_1$ and $\xi_2$. Notice that this is still true if we replace $0$ and $p_1$ by $-it$ and $p_1+it$, respectively, for $t \in \mathbb{R}$, provided they keep the same multiplicity $n_1$. Arguing as in \eqref{1846}, notice that $D_0$ can be written as $$\pi D_0=\hbox{Area } \left[ \frac{1}{\sigma_0}\left(\Omega_- \setminus \sigma_0^{-1} (B_\rho(0)) \right) \right]+\hbox{Area } \left[ \frac{1}{\sigma_0}\left(\Omega_+ \setminus \sigma_0^{-1} (B_\rho(0)) \right) \right] - 2(n+1) \hbox{Area} \left( B_{\frac{1}{\rho}}(0) \right),$$ where $\Omega_+:=[0,\frac{a}{2}]\times [-\frac{b}{2},\frac{b}{2}]$. Since $$u_0+8\pi(n+1)G(z,\xi_1)+8\pi(n+1)G(z,\xi_2)=-4\pi n_1 \tilde G(z,0)-4\pi n_2 \tilde G(z,p_2)+4\pi(n+2) \tilde G(z,\xi_1)$$ in $\Omega_-$, where $\tilde G(z,p)$ is the Green function in the torus $\Omega_-$ with pole at $p$, the function $\mathcal{H}_0$ can be expressed as in \eqref{explicitH} in terms of the Weierstrass function of $\Omega_+$ and the points $-\frac{p_1}{2}$, $0$ and $p_2$. Arguing exactly as in section \ref{examples}, we have that $$\hbox{Area } \left[ \frac{1}{\sigma_0}\left(\Omega_- \setminus \sigma_0^{-1} (B_\rho(0)) \right) \right]-(n+1) \hbox{Area} \left( B_{\frac{1}{\rho}}(0) \right)<0$$ provided the multiplicity $n_2$ for the corner of $\Omega_-$ is such that $\frac{n_2}{2}$ is odd. 
Arguing similarly in $\Omega_+$, we get that $D_0<0$ as soon as $\frac{n_2}{2}$ is an odd number. The example then follows by replacing $0$, $p_1$ with $-it$, $p_1+it$ for $t$ small, since the corresponding $D_{0,t} \to D_0$ as $t \to 0$.
\begin{appendices} \section{\hspace{-0.5cm}: The construction of $\sigma_a$} Letting $\sigma_0$ be the solution of \eqref{eq sigma0} of the form \eqref{sigma0}, where $c_0$ is given by \eqref{c0}, we have that $Q_0(z)=\frac{\sigma_0(z)}{z^{n+1}}$ is an holomorphic function near $z=0$ with $Q_0(0)=\frac{n+1}{\mathcal{H}(0)}$ (see \eqref{0942}). Since $Q_0(0)\not=0$, the $(n+1)-$root $Q_0^{\frac{1}{n+1}}$ of $Q_0$ is a well-defined holomorphic function locally at $z=0$, and it makes sense to define $q_0(z)=z Q_0^{\frac{1}{n+1}}(z)$ near $z=0$.
\noindent For $\sigma \in \mathcal{B}_r$, where $\mathcal{B}_r$ is given in \eqref{setB}, in a similar way we have that $Q(z)=\frac{\sigma(z)}{z^{n+1}}$ is an holomorphic function near $z=0$ with $|\frac{Q(z)}{Q_0(z)}-1| \leq r$ for all $z$. Since in particular $|Q(z)-\frac{n+1}{\mathcal{H}(0)}|\leq r|Q_0(z)|+|Q_0(z)-\frac{n+1}{\mathcal{H}(0)}|$, we can find $r$ and $\eta>0$ small so that $q(z)=z Q^{\frac{1}{n+1}}(z)$ is a well-defined holomorphic function in $B_{3\eta}(0)$ for all $\sigma \in \mathcal{B}_r$, with $\sigma(z)=q^{n+1}(z)$ for all $z \in B_{3\eta}(0)$. Since $q'(0)=Q^{\frac{1}{n+1}}(0)$ satisfies $|q'(0)|\geq [\frac{(1-r)(n+1)}{|\mathcal{H}(0)|}]^{\frac{1}{n+1}}>0$, then $q$ is locally bi-holomorphic at $0$. In order to have uniform invertibility of $q$ for all $\sigma \in \mathcal{B}_r$, let us evaluate the following quantity: \begin{eqnarray*}
|1-\frac{q'(z)}{q'(0)}|&\leq& \frac{\sup_{B_{\eta}(0)}|q''|}{|q'(0)|} |z|\leq
\frac{2}{\eta^2}[\frac{(1-r)(n+1)}{|\mathcal{H}(0)|}]^{-\frac{1}{n+1}} \left(\sup_{B_{2\eta}(0)}|q|\right) |z| \\
&\leq& \frac{2}{\eta^2} \left(\frac{|\mathcal{H}(0)|}{n+1}\right)^{\frac{1}{n+1}} (\frac{1+r}{1-r})^{\frac{1}{n+1}} \left(\sup_{B_{2\eta}(0)}|q_0|\right) |z| \end{eqnarray*}
for all $z \in B_\eta(0)$, in view of Cauchy's inequality and $|\frac{\sigma(z)}{\sigma_0(z)}-1|= |\frac{q^{n+1}(z)}{q^{n+1}_0(z)}-1| \leq r$ for all $z \in B_{3\eta}(0)$. Therefore, we can find $\rho_1$ small so that $|1-\frac{q'(z)}{q'(0)}|\leq \frac{1}{2}$ for all $z \in B_{\rho_1^{\frac{1}{n+1}}}(0)$ and $2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}\leq 2\rho_1^{\frac{1}{n+1}}[\frac{|\mathcal{H}(0)|}{n+1}]^{\frac{1}{n+1}} (1-r)^{-\frac{1}{n+1}}\leq 2 \eta$, uniformly for $\sigma\in \mathcal{B}_r$. Hence, the inverse map $q^{-1}$ of $q$ is defined from $B_{\rho_1^{\frac{1}{n+1}}}(0)$ into $B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0)$: for all $y \in B_{\rho_1^{\frac{1}{n+1}}}(0)$ there exists a unique $z \in B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0)$ so that $q(z)=y$, given by $z=q^{-1}(y)$. Since $\sigma=q^{n+1}$ in $B_{3\eta}(0)$, we have that
$$\hbox{Card } \{z\in B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0): \sigma(z)=y\}=n+1 \qquad \forall\: y \in B_{\rho_1}(0)\setminus \{0\},$$ for all $\sigma\in \mathcal{B}_r$. Since
$$|\sigma(z)|\geq (1-r) \inf_{\tilde \Omega \setminus B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0)} |\sigma_0(z)|
\geq (1-r) \inf_{\tilde \Omega \setminus B_{2\rho_1^{\frac{1}{n+1}} [\frac{|\mathcal{H}(0)|}{n+1}]^{\frac{1}{n+1}} (1+r)^{-\frac{1}{n+1}}}(0)} |\sigma_0(z)|>0 $$
for all $z \in \tilde \Omega \setminus B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0)$ we can find $\rho$ ($\leq \rho_1$) small so that
$$\hbox{Card } \{z\in \tilde \Omega: \sigma(z)=y\}=\hbox{Card } \{z\in B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}}(0): \sigma(z)=y\}=n+1 \qquad \forall\: y \in B_{\rho}(0)\setminus \{0\},$$ for all $\sigma\in \mathcal{B}_r$. Since
$$\sigma^{-1}(B_\rho(0)) \subset B_{2\rho_1^{\frac{1}{n+1}} |Q(0)|^{-\frac{1}{n+1}}} (0) \subset B_{2\rho_1^{\frac{1}{n+1}}[\frac{|\mathcal{H}(0)|}{n+1}]^{\frac{1}{n+1}} (1-r)^{-\frac{1}{n+1}} }(0) \subset B_{2\eta}(0),$$ for all $z\in \partial \sigma^{-1}(B_\rho(0))=\sigma^{-1}(\partial B_\rho(0))$ and $\sigma \in \mathcal{B}_r$ we have that
$$\frac{|z|^{n+1}}{\rho}=\frac{|z|^{n+1}}{|\sigma(z)|}=\frac{1}{|Q(z)|}\geq \frac{1}{(1+r)} \inf_{ B_{2\eta}(0)} |Q_0(z)|^{-1} >0$$ since $q_0$ is well-defined in $B_{3\eta}(0)$. We can summarize the above discussion as follows: \begin{lem} \label{gomme} There exist $r,\:\rho >0$ such that $q(z)=z Q(z)^{\frac{1}{n+1}}$ is a locally bi-holomorphic map with $\sigma=q^{n+1}$ and inverse $q^{-1}$ defined on $B_{\rho^{\frac{1}{n+1}}}(0)$, for all $\sigma\in \mathcal{B}_r$. In particular, there exists a neighborhood $V$ of $0$ so that, for all $\sigma \in \mathcal{B}_r$, we have $V \subset \sigma^{-1}(B_\rho(0))$ and $\sigma:\sigma^{-1}(B_\rho(0)) \to B_\rho(0)$ is an $(n+1)$-to-$1$ map in the following sense: $$\hbox{Card } \{z\in \tilde \Omega:\sigma(z)=y\}=n+1 \qquad \forall\: y \in B_\rho(0)\setminus \{0\}.$$ \end{lem}
\noindent For $|a|<\rho$ and $\sigma\in \mathcal{B}_r$, by Lemma \ref{gomme} we have that $$\sigma^{-1}(a)=\{z \in \tilde \Omega: \: \sigma(z)=a\}=\{a_0,\dots,a_n \},$$ where $a_k=q^{-1}(\hat a_k)$ and $\hat a_k$, $k=0,\dots,n$, are the $(n+1)-$roots of $a$, and then $g_{a,\sigma}(z):=\displaystyle \frac{\sigma(z)-a}{\prod_{k=0}^{n}(z-a_k)} \in \mathcal{M}(\overline{\Omega})$ is a non-vanishing function. We are now in a position to prove the following. \begin{lem}\label{derivca} Up to taking $\rho$ smaller, there exists a $C^1-$map $a \in B_\rho(0) \to \sigma_a \in \mathcal{B}_r$ so that $\sigma_a$ solves \eqref{sigmaa}-\eqref{ca}. Moreover, the map $a \in B_\rho(0) \to c_a=c_{a,\sigma_a}$ is $C^1$ with \begin{eqnarray*} && \Gamma:=\mathcal{H}(0) \partial_a c_a
\Big|_{a=0}=\frac{1}{n!}\frac{d^{n+1}}{dz^{n+1}}\bigg[ \mathcal{H}(z)f_{n+1}(z)\bigg] (0)\\
&&\Upsilon:=\mathcal{H}(0) \partial_{\bar a} c_a \Big|_{a=0}=-{2\pi(n+1)\over |{\Omega}| n!}\overline{b_{n+1}}\,\frac{d^n \mathcal{H} }{dz^n}(0), \end{eqnarray*} where $$f_{n+1}(z)=\frac{1}{(n+1)!} \frac{d^{n+1}}{dw^{n+1}}\left[2\log \frac{w-q_0(z)}{q_0^{-1}(w)-z}+4\pi H^*(z-q_0^{-1}(w))\right] (0)\:,\qquad b_{n+1}=\frac{1}{(n+1)!}\frac{d^{n+1} q_0^{-1}}{dw^{n+1}}(0).$$ \end{lem} \begin{proof} Given $c_{a,\sigma}$ as in \eqref{ca}, equation \eqref{sigmaa} is equivalent to find zeroes of the map $\Lambda: (a,\sigma) \in B_\rho(0) \times \mathcal{B}_r \to \mathcal{M}(\overline{\Omega})$ given as $$\Lambda(a,\sigma)= \sigma(z)+\left[ \int^z \frac{g^2_{a, \sigma}(w)}{g^2_{0, \sigma}(w)} \frac{\mathcal{H}_{a,\sigma}(w)}{w^{n+2}} e^{-c_{a,\sigma}w^{n+1}} dw \right]^{-1}.$$ Observe that the zeroes $a_k=a_k(a,\sigma)=q^{-1}(\hat a_k)$ are continuously differentiable in $\sigma$. Differentiating the relation $\sigma(a_k)=a$ at $\sigma_0$ along a direction $R \in
\mathcal{M}'(\overline{\Omega})$, we have that $\sigma_0'(a_k(a,\sigma_0)) \partial_\sigma a_k(a,\sigma_0) [R]+R(a_k(a,\sigma_0))=0$. Since $\sigma_0'(a_k) \sim a_k^n$ and $R(a_k)\sim a_k^{n+1}$ in view of $\|R\|<\infty$, we get that $\partial_\sigma a_k(0,\sigma_0)[R]=0$ for all $R \in \mathcal{M}'(\overline{\Omega})$. For $z \not= 0$ the function $\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}$ is continuously differentiable in $\sigma$ with $$\partial_\sigma \left(\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)} \right) [R]=a \frac{z^{n+1}}{\prod_{k=0}^{n}(z-a_k)} \frac{R(z)}{\sigma^2(z)} + \frac{\sigma(z)-a }{\prod_{k=0}^{n}(z-a_k)}\frac{z^{n+1}}{\sigma(z)} \sum_{j=0}^n \frac{1}{z-a_j} \partial_\sigma a_j(a,\sigma) [R]$$ for every $R \in \mathcal{M}'(\overline{\Omega})$. In particular, we get that
$\partial_\sigma \left(\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)} \right)\Big|_{a=0} [R]=0$ for every $z \not= 0$ and $R \in \mathcal{M}'(\overline{\Omega})$. Since we can write $\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}$ as \begin{equation} \label{1310} \frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}=\frac{z^{n+1}}{\sigma(z)} \prod_{k=0}^{n} \frac{q(z)-q(a_k)}{z-a_k}= \frac{z^{n+1}}{\sigma(z)} \prod_{k=0}^{n} \int_0^1 q'(a_k+t(z-a_k))dt \end{equation} for $z$ small in view of $\sigma=q^{n+1}$, we get that $\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}$ is continuously differentiable in $\sigma$ and the linear operator $\partial_\sigma \left( \frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}\right)$ is continuous at
$z=0$. In particular, we get that $\partial_\sigma\left(\frac{g_{a,\sigma_0}(z)}{g_{0,\sigma_0}(z)}\right)\Big|_{a=0}[R]=0$ for every $z$ and $R \in \mathcal{M}'(\overline{\Omega})$. By \eqref{Hasigma} we have that $\mathcal{H}_{a,\sigma}$ is continuously differentiable in $\sigma$ with $\partial_\sigma \mathcal{H}_{0,\sigma}[R]=0$ for every $R \in \mathcal{M}'(\overline{\Omega})$. We have that $c_{a,\sigma}$ is also continuously differentiable in $\sigma$ with $\partial_\sigma c_{0,\sigma_0}[R]=0$ for every $R \in \mathcal{M}'(\overline{\Omega})$, and so is $\Lambda(a,\sigma)$, with $\partial_\sigma \Lambda(0,\sigma_0)=\hbox{Id}$.
\noindent Since $a_k \sim |a|^{\frac{1}{n+1}}$, the smooth dependence on $a$ is much more delicate, and holds just for symmetric expressions of the $a_k$'s thanks to the symmetries of $\hat a_k=q(a_k)$. To fully exploit the symmetries, it is crucial that the expression \eqref{Hasigma} of $\mathcal{H}_{a,\sigma}$ is in terms of an holomorphic function $H^*$. Indeed, we have that \begin{eqnarray*}
2\sum_{k=0}^n H^*(z-a_k)-\frac{z}{|\Omega|} \overline{\sum_{k=0}^n a_k} &=&2 \sum_{l=0}^{\infty} g_l(z) \sum_{k=0}^n \hat a_k^l -{z \over
|{\Omega}|}\overline{\sum_{l=1}^\infty b_l \sum_{k=0}^n \hat a_k^l}\\ &=& 2 (n+1)\sum_{l=0}^{\infty} g_{(n+1)l}(z) a^l -{n+1 \over
|{\Omega}|}z\overline{\sum_{l=1}^\infty b_{(n+1)l} a^l} \end{eqnarray*} in view of $\displaystyle \sum_{k=0}^n \hat a_k^l=0$ for all $l \notin (n+1)\mathbb{N}$, where $g_l(z)=\frac{1}{l!}\frac{d^l}{dw^l} [H^*(z-q^{-1}(w))](0)$ and $b_l=\frac{1}{l!}\frac{d^l q^{-1}}{dw^l}(0)$ (recall that $b_0=q^{-1}(0)=0$). Since for $z$ small there holds \begin{eqnarray*}
\sum_{k=0}^n \log \frac{q(z)-q( a_k)}{z-a_k}=\sum_{l=0}^\infty h_l(z) \sum_{k=0}^n \hat a_k^l=(n+1) \sum_{l=0}^\infty h_{(n+1)l}(z) a^l \end{eqnarray*} in view of $a_k=q^{-1}(\hat a_k)$, where $h_l(z)=\frac{1}{l!} \frac{d^l}{dw^l} \left[\log \frac{w-q(z)}{q^{-1}(w)-z}\right](0),$ we have that $\frac{g_{a,\sigma}(z)}{g_{0,\sigma}(z)}$ is continuously differentiable in $a,\, \bar a$ for all $z$ in view of \eqref{1310} (for $z$ far from $0$ it is obvious). Hence, by \eqref{Hasigma} $\frac{g_{a,\sigma}^2}{g_{0,\sigma}^2} \mathcal{H}_{a,\sigma}$, $c_{a,\sigma}$ and $\Lambda(a,\sigma)$ are continuously differentiable also in $a,\, \bar a$, and then $\Lambda$ is a $C^1-$map with $\Lambda(0,\sigma_0)=0$, $\partial_\sigma \Lambda(0,\sigma_0)=\hbox{Id}$. Up to taking $\rho$ smaller, by the Implicit Function Theorem we find a $C^1$-map $a \in B_\rho(0) \to \sigma_a$ so that $\Lambda(a,\sigma_a)=0$, and the function $a \to c_a=c_{a,\sigma_a}$ is $C^1$. By $$\partial_a [\frac{g_{a,\sigma}^2(z) g_{0,\sigma}^2(0)}{g_{a,\sigma}^2(0) g_{0,\sigma}^2(z) } \frac{\mathcal{H}_{a,\sigma}(z)}{\mathcal{H}_{a,\sigma}(0)}](0)= \frac{g_{0,\sigma}^2(0)}{g_{0,\sigma}^2(z) }\partial_a [e^{2\log g_{a,\sigma}(z)-2\log g_{a,\sigma}(0)}\frac{\mathcal{H}_{a,\sigma}(z)}{\mathcal{H}_{a,\sigma}(0)}](0)= (n+1)\frac{\mathcal{H}(z)}{\mathcal{H}(0)}[f_{n+1}(z)-f_{n+1}(0)]$$ and $$\partial_{\bar a} [\frac{g_{a,\sigma}^2(z) g_{0,\sigma}^2(0)}{g_{a,\sigma}^2(0) g_{0,\sigma}^2(z) } \frac{\mathcal{H}_{a,\sigma}(z)}{\mathcal{H}_{a,\sigma}(0)}](0)=
-\frac{2\pi (n+1)}{|\Omega|} \frac{\mathcal{H}(z)}{\mathcal{H}(0)} \overline{b_{n+1}}z$$ we deduce the desired expression for $\Gamma$ and $\Upsilon$ in view of $\partial_\sigma c_{0,\sigma_0}=0$ and \eqref{pc}. \qed \end{proof}
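\noindent For the reader's convenience, we make explicit the elementary symmetry fact used in the proof above. If $\hat a_0,\dots,\hat a_n$ are the $(n+1)-$roots of $a\not=0$, then $\hat a_k=\omega^k \hat a_0$ with $\omega=e^{\frac{2\pi i}{n+1}}$, and for $l \notin (n+1)\mathbb{N}$ the geometric sum gives
$$\sum_{k=0}^n \hat a_k^l=\hat a_0^l \sum_{k=0}^n \omega^{kl}=\hat a_0^l\, \frac{\omega^{(n+1)l}-1}{\omega^l-1}=0,$$
since $\omega^{n+1}=1$ and $\omega^l\not=1$, while $\displaystyle \sum_{k=0}^n \hat a_k^{(n+1)l}=(n+1)a^l$. This is precisely what makes only integer powers of $a$, and no fractional powers $a^{\frac{l}{n+1}}$, survive in the expansions above, yielding the $C^1-$dependence on $a,\,\bar a$.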
\section{\hspace{-0.5cm}: The linear theory}
\noindent In this section, we will prove the invertibility of the linear operator $L$ given by (\ref{ol}) under suitable orthogonality conditions. The operator $L$ can be described asymptotically by the following linear operator in ${\mathbb{R}}^2$
$$L_0(\phi)=\Delta\phi+{8(n+1)^2|y|^{2n}\over (1+|y^{n+1}-\zeta_0|^2)^2}\phi,$$ where $\zeta_0=\lim \frac{a}{\delta}$. When $\zeta_0=0$, as in the case $n=0$ \cite{BaPa}, by using a Fourier decomposition of $\phi$ it can be shown in a rather direct way that the bounded solutions of $L_0(\phi)=0$ in ${\mathbb{R}}^2$ are precisely linear combinations of \begin{equation*}
Y_{0}(y) = \,{1-|y|^{2n+2}\over 1+|y|^{2n+2}}
\qquad\text{and}\qquad Y_l(y) = { (y^{n+1})_l \over 1+|y|^{2n+2}} ,\:l=1,2. \end{equation*} Note that $L_0$ is the linearized operator at the radial solution
$U=U_{1,0}$ of $-\Delta U=|z|^{2n} e^U$.
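\noindent Let us briefly point out where $Y_0$ comes from. Assuming the standard normalization $U_{1,0}(y)=\log \frac{8(n+1)^2}{(1+|y|^{2n+2})^2}$, which is consistent with the expression of $L_0$ above, the equation $-\Delta U=|z|^{2n}e^U$ is invariant under the dilations $U_\mu(y)=U_{1,0}(\mu y)+(2n+2)\log \mu$. Differentiating at $\mu=1$ gives
$$\frac{d}{d\mu}\Big|_{\mu=1} U_\mu(y)=y\cdot \nabla U_{1,0}(y)+2n+2=(2n+2)\,\frac{1-|y|^{2n+2}}{1+|y|^{2n+2}}=(2n+2)\,Y_0(y),$$
so that $L_0(Y_0)=0$; similarly, $Y_1$ and $Y_2$ arise by differentiating the family $U_{1,\zeta}$ in $\zeta$ at $\zeta=0$.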
\noindent For the linearized operator at $U_{1,\zeta_0}$ with $\zeta_0 \not=0$, the Fourier decomposition is useless since $U_{1,\zeta_0}$ is not radial w.r.t. any point if $n \geq 1$. However, the same property is still true as recently proved in \cite{DEM5}, and the argument below could be carried out in full generality in the range $a=O(\delta)$. Since in Theorem \ref{main} we are concerned with the case $a=o(\delta)$, for simplicity we will discuss the linear theory just in this case.
\noindent Recall that \begin{equation*}
Z_{0}(z) = {\delta^2-|\sigma(z)-a|^2\over
\delta^2+|\sigma(z)-a|^2}, \qquad\quad Z_l(z) = {\delta[\sigma(z)-a]_l \over
\delta^2+|\sigma(z)-a|^2},\qquad l=1,2, \end{equation*} and $PZ_l$, $l=0,1,2$, denote the projection of $Z_l$ onto the doubly-periodic functions with zero average: \begin{equation*}
\left\{ \begin{array}{ll} \Delta PZ_l =\Delta Z_l-\frac{1}{|{\Omega}|}\int_{\Omega} \Delta Z_l & \text{in $\Omega$}\\ \int_{\Omega} PZ_l=0.& \end{array}\right. \end{equation*} Given $h\in L^\infty({\Omega})$ with $\int_{\Omega} h=0$, consider the problem of finding a function $\phi$ in ${\Omega}$ with zero average and numbers $d_l$, $l=0,1,2$, such that \begin{equation}\label{plco} \left\{ \begin{array}{ll} L(\phi) =h + \displaystyle \sum_{l=0}^{2}d_l \Delta PZ_l &\text{ in ${\Omega}$}\\ \int_{\Omega } \Delta PZ_l \phi = 0 &\forall\: l=0,1,2. \end{array} \right. \end{equation} Since $Z=Z_1+iZ_2$, observe that (\ref{plco}) is equivalent to solving (\ref{plcobis}) with $d=d_1-id_2$. Let us stress that the orthogonality conditions in \eqref{plco} are taken with respect to the elements of the approximate kernel due to translations and to an extra element which involves dilations. A similar situation already appears in \cite{DDeMW}.
\noindent First, we will prove an a-priori estimate for problem \eqref{plco} when $d_l=0$ for all $l=0,1,2$, w.r.t. the
$\|\cdot\|_*$-norm defined as
$$\|h\|_*=\sup_{z\in {\Omega}}{(\delta^2+|\sigma(z)-a|^2)^{1+\gamma/2}\over \delta^\gamma(|\sigma'(z)|^2+\delta^{2n\over n+1})}|h(z)|,$$ where $0<\gamma <1$ is a small fixed constant. \begin{prop} \label{p1} There exist $\eta_0>0$ small and $C>0$ such that for any $0<\delta\leq \eta_0$, $\epsilon^2\leq \eta_0 \delta^{2\over n+1}$,
$|a|\leq \eta_0 \delta$ and any solution $\phi$ to \begin{equation}\label{plco1} \left\{ \begin{array}{ll} L(\phi)=h &\text{in }{\Omega}\\ \int_{\Omega } \Delta PZ_l \phi = 0 &\forall\:l=0,1,2\\ \int_\Omega \phi=0,& \end{array} \right. \end{equation} one has \begin{equation}\label{est}
\|\phi \|_\infty \le C \log \frac{1}{\delta} \|h\|_*. \end{equation} \end{prop}
\begin{proof} The proof of estimate \equ{est} consists of several steps. Assume by contradiction the existence of sequences $\delta_k \to 0$, $\epsilon_k$ with $\epsilon_k^2=o(\delta_k^{2\over n+1})$, $a_k$ with $a_k=o(\delta_k)$, functions $h_k$ with $|\log \delta_k| \, \|h_k\|_*=o(1)$ as $k \to +\infty$, and solutions $\phi_k$ of (\ref{plco1}) with $\|\phi_k\|_\infty=1$. Since by (\ref{ol}) the operator $L$ acts as $L(\phi) = \Delta \phi + \mathcal{K} \left[ \phi+ \gamma(\phi)\right]$, where $\gamma(\phi) \in \mathbb{R}$, the function $\psi_k=\phi_k+\gamma(\phi_k)$ does solve \begin{equation*} \left\{ \begin{array}{ll} \Delta \psi_k+\mathcal{K}_k \psi_k= h_k &\text{in ${\Omega}$}\\ \int_{\Omega } \Delta PZ_{k,l} \psi_k= 0 &\forall \: l=0,1,2, \end{array}\right. \end{equation*} where $W_k$, $\mathcal{K}_k$, $Z_{k,l}$ denote the functions $W$, $\mathcal{K}$, $Z_l$, respectively, along the given sequence.
\begin{claim}
$\displaystyle \liminf_{k \to +\infty} \|\psi_k\|_\infty >0$ and, up to a subsequence, $\psi_k \to \tilde c \in \mathbb{R}$ as $k \to+\infty$ in $C^{1,\alpha}_{\hbox{loc}}(\bar {\Omega}\setminus\{0\})$, for all $\alpha \in (0,1)$. \end{claim} \noindent Indeed, assume by contradiction that $\displaystyle
\liminf_{k \to +\infty} \|\psi_k\|_\infty =0$. Up to a subsequence, assume that
$\|\psi_k\|_\infty=\left\|\phi_k+\gamma(\phi_k)\right\|_\infty\to 0$ as $k\to+\infty$. Since $\epsilon_k^2=o(\delta_k^{2\over n+1})$, by (\ref{BW}) it follows that $$\gamma(\phi_k)=-\frac{\int_{\Omega} e^{u_0+W_k}\phi_k}{\int_{\Omega} e^{u_0+W_k}}+o(1)=O(1).$$ Up to a subsequence we have that $\frac{\int_{\Omega} e^{u_0+W_k}\phi_k}{\int_{\Omega} e^{u_0+W_k}} \to c$, and then $\phi_k \to c$ uniformly in $\Omega$ as $k \to +\infty$. Since $\int_{\Omega}\phi_k=0$, we get $c=0$ and $\phi_k \to 0$ in
$L^\infty(\Omega)$, in contradiction with $\|\phi_k\|_\infty=1$. Moreover, since $\|\psi_k\|_\infty=O(1)$, by (\ref{eps1})-(\ref{eps2}) we have that $\Delta\psi_k=o(1)$ in $C_{\hbox{loc}}(\bar \Omega \setminus \{0\})$. Up to a subsequence, we have that $\psi_k \to \psi$ as $k\to+\infty$ in
$C^{1,\alpha}_{\hbox{loc}}(\bar {\Omega}\setminus\{0\})$. Since $\|\psi_k
\|_\infty =O(1)$, $\psi$ is a bounded function which can be extended to a harmonic doubly-periodic function in ${\Omega}$. Therefore, $\psi=\tilde c$ in ${\Omega}$ with $\ds\tilde c=\lim_{k\to+\infty}\gamma(\phi_k)$, since $\ds{1\over
|{\Omega}|}\int_{\Omega}\psi_k=\gamma(\phi_k)$.
\noindent Now, consider the function $\Psi_{k}(y)=\psi_k ( \delta_k^{1\over n+1} y)$. Then, $\Psi_k$ satisfies $$\Delta \Psi_{k} + K_{k}(y)\Psi_{k} =\hat h_{k}(y)\qquad\text{in } \delta_k^{-\frac{1}{n+1}} \Omega ,$$ where $ K_{k}(y)=\delta_k^{2\over n+1} \mathcal{K}_k (\delta_k^{1\over n+1}y)$ and $ \hat h_{k}(y)=\delta_k^{2\over n+1} h_k (\delta_k^{1\over n+1} y)$. Also, we set $\sigma_k(y)=\delta_k^{-1} \sigma_{a_k}(\delta_k^{1\over n+1}y)$ for $y$ in compact subsets of ${\mathbb{R}}^2$.
\begin{claim} $\Psi_{k} \to \Psi=0$ in $C_{\hbox{loc}}({\mathbb{R}}^2)$ as $k\to+\infty$. \end{claim} \noindent Indeed, observe that by (\ref{BW}) and (\ref{eps1})-(\ref{eps2}) we have the following expansions: \begin{equation} \label{mlK}
\mathcal{K}(z)=|\sigma'(z)|^2 e^{U_{\delta,a}}
\left[1+O(|c_a||z|^{n+1})+O(|c_a||a|+\delta^2|\log \delta|) \right]+O(\epsilon^2
|\sigma'(z)|^4 e^{2U_{\delta,a}}). \end{equation} Since $\epsilon_k^2=o(\delta_k^{2\over n+1})$, the first estimate above can be rewritten along our sequence as
$$ K_{k}(y)=(1+o(1)+O(\delta_k|y|^{n+1})){8|\sigma_k'(y)|^2 \over \left(1+\big|\sigma_k(y)-a_k \delta_k^{-1}\big|^2\right)^2}+o(1) {64|\sigma_k'(y)|^4 \over \left(1+\big|\sigma_k(y)-a_k \delta_k^{-1}\big|^2\right)^4}$$ uniformly in $\delta_k^{-\frac{1}{n+1}} \Omega$ as $k\to+\infty$. Since $\sigma=z^{n+1}Q$, we have that $\sigma_k(y)= y^{n+1} Q_{a_k}( \delta_k^{\frac{1}{n+1}} y)$ and $\sigma_k'(y)=(n+1) y^n Q_{a_k}(\delta_k^{\frac{1}{n+1}} y)+
\delta_k^{\frac{1}{n+1}} y^{n+1} Q_{a_k}'(\delta_k^{\frac{1}{n+1}} y)$. Since $Q_{a_k}(0) \to \frac{n+1}{\mathcal{H}(0)}=:\gamma\not=0$ and $\|Q_{a_k}'\|_{\infty,\Omega}\leq C \|Q_{a_k}\|_{\infty,\tilde \Omega}\leq C'$, we have that
$$\sigma_k(y)=y^{n+1}[\gamma+o(1)+O(\delta_k^{1\over n+1}|y|)] \:,\qquad
\sigma_k'(y)=(n+1) y^n [\gamma+o(1)+ O(\delta_k^{1 \over n+1}|y|)]$$ as $k \to +\infty$. Then we get that \begin{equation} \label{Kk}
K_{k}(y)=\left[{8(n+1)^2 |\gamma|^2 |y|^{2n} \over \left(1+\big|\sigma_k(y)-a_k
\delta_k^{-1}\big|^2\right)^2}+o(1) {64(n+1)^4|\gamma|^4 |y|^{4n} \over
\left(1+\big|\sigma_k(y)-a_k \delta_k^{-1}\big|^2\right)^4} \right][1+o(1)+O(\delta_k^{1\over n+1}|y|)] \end{equation}
uniformly in $\delta_k^{-\frac{1}{n+1}} \Omega$. Choose $\eta$ small so that $|\sigma_k(y)|\geq
\frac{|\gamma|}{2}|y|^{n+1}$ in $B_{\delta_k^{-\frac{1}{n+1}}\eta}(0)$ for $k$ large. Since $\|\Psi_k\|_\infty=O(1)$ and $|\hat h_k(y)|\le C\|h_k \|_* \to 0$ on compact sets, by elliptic estimates and (\ref{Kk}) we get that $\Psi_k(\gamma^{-\frac{1}{n+1}} y) \to \hat \Psi$ in $C_{\hbox{loc}}({\mathbb{R}}^2)$ as $k \to+\infty$, where $\hat \Psi$ is a bounded solution of $L_0(\hat \Psi) = 0$ (with $\zeta_0=0$). Then $\hat \Psi(y)=\displaystyle \sum_{j=0}^2 b_{j}Y_j(y)$ for some $b_{j}\in{\mathbb{R}}$, $j=0,1,2$.\\
Since $\lap Z_{k,l}+|\sigma_k'|^2 e^{U_{\delta_k,a_k}}Z_{k,l}=0$ for $l=0,1,2$ (where $U_{\delta_k,a_k}$ stands for $U_{\delta_k,a_k,\sigma_{a_k}}$), for $l=1,2$ we have that \begin{equation*} \begin{split} \int_{\Omega} \psi_k \Delta Z_{k,l}
=-\int_\Omega |\sigma_k'(z)|^2 \psi_k e^{U_{\delta_k,a_k}}Z_{k,l}=-\int_{B_{\delta_k^{-\frac{1}{n+1}} \eta}(0)}
{8 |\sigma_k'(z)|^2 (\sigma_k-a_k \delta_k^{-1}) \Psi_k \over
(1+|\sigma_k-a_k \delta_k^{-1} |^2 )^3}\,dy+O(\delta_k^3). \end{split} \end{equation*} Since for all $l=0,1,2$ $$0=\int_{\Omega} \psi_k\Delta PZ_{k,l}=\int_{\Omega} \psi_k \left[ \Delta Z_{k,l} -
{1\over |{\Omega}|}\int_{\Omega} \Delta Z_{k,l}\right]=\int_{\Omega} \psi_k \Delta Z_{k,l}+o(1)$$ as $k \to \infty$ in view of \eqref{deltaZ0}-\eqref{deltaZ}, by dominated convergence we get that
$$\int_{{\mathbb{R}}^2} \hat \Psi(y)\,\frac{|y|^{2n}(y^{n+1})_l}{(1+|y|^{2n+2})^3}\, dy =0 \qquad \hbox{for }l=1,2,$$ and we conclude that $b_{1}=b_{2}=0$. Similarly, for $l=0$ we deduce that
$$\int_{{\mathbb{R}}^2} \hat \Psi(y)\,{|y|^{2n}(1-|y|^{2n+2})\over (1+|y|^{2n+2})^3}\, dy=0,$$ which implies that $b_0=0$. Thus, the claim follows.
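\noindent Concerning the last step, let us sketch the computation showing that the integral against $Y_0$ does not vanish, which is what forces $b_0=0$ (the cross terms with $Y_1,Y_2$ vanish by angular symmetry): passing to polar coordinates and setting $t=|y|^{2n+2}$, so that $|y|^{2n}\,dy=\frac{\pi}{n+1}\,dt$, one finds
$$\int_{{\mathbb{R}}^2} Y_0(y)\,{|y|^{2n}(1-|y|^{2n+2})\over (1+|y|^{2n+2})^3}\, dy={\pi \over n+1}\int_0^{+\infty} \frac{(1-t)^2}{(1+t)^4}\,dt={\pi\over 3(n+1)}>0.$$
An analogous computation for $l=1,2$ shows that $\int_{{\mathbb{R}}^2} Y_l(y)\,\frac{|y|^{2n}(y^{n+1})_l}{(1+|y|^{2n+2})^3}\,dy\not=0$, yielding $b_1=b_2=0$.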
\noindent On the other hand, from the equation of $\psi_k$ we have the following integral representation \begin{equation}\label{irsn}
\psi_k(z)={1\over |{\Omega}|}\int_{{\Omega}} \psi_k +\int_{\Omega} G(y,z) \left[\mathcal{K}_k(y) \psi_k(y)-h_k(y) \right]\, dy. \end{equation} \begin{claim} $\tilde c=0$ \end{claim} \noindent Indeed, Claims 1 and 2 imply that $\psi_k(0)=\Psi_k(0)
\to 0$ and ${1\over |{\Omega}|}\int_{{\Omega}} \psi_k =\gamma(\phi_k)\to \tilde c$ as $k \to +\infty$ by definition. So, by \eqref{irsn} we deduce that $$\int_{\Omega} G(y,0) \left[\mathcal{K}_k(y) \psi_k(y)-h_k(y) \right]\, dy \to -\tilde c $$ as $k \to +\infty$. Now, we first estimate the integral involving
$h_k$. Since $\int_{B_{\delta_k}(0)}|\log|y||\,dy=O(\delta_k^2 \log \delta_k),$ we get that \begin{equation*}
\left|\int_{B_{\delta_k}(0)} G(y,0) h_k(y) dy\right| \le {C\over
\delta_k^2}\|h_k\|_* \int_{B_{\delta_k}(0)} G(y,0) dy \le C
|\log\delta_k| \|h_k\|_*. \end{equation*} By \eqref{1458} we have that \begin{equation*}
\left|\int_{{\Omega}\setminus B_{\delta_k}(0)} G(y,0) h_k(y) dy\right| \le C
|\log\delta_k| \int_\Omega |h_k|\leq C |\log \delta_k| \|h_k\|_*, \end{equation*} and we conclude that
$$\left|\int_{{\Omega}} G(y,0) h_k(y) dy\right|\le C|\log \delta_k|\|h_k\|_* \to 0$$
in view of $|\log \delta_k| \, \|h_k\|_*=o(1)$ as $k \to +\infty$. By \eqref{mlK} we have that \begin{equation*} \begin{split} &\int_{{\Omega}} G(y,0) \mathcal{K}_k(y) \psi_k(y) dy=\int_{B_\eta(0)} G(y,0) \mathcal{K}_k(y) \psi_k(y) dy+O(\delta_k^2)\\ &=\int_{B_{ \delta_k^{-\frac{1}{n+1}}\eta}(0)} \bigg[-{1\over 2\pi
}\log |y|-{1\over 2\pi(n+1)} \log \delta_k+H( \delta_k^{1\over n+1} y,0)\bigg] K_k(y) \Psi_k(y) dy+O(\delta_k^2). \end{split} \end{equation*}
Since by (\ref{Kk}) $K_{k}=O({|y|^{2n} \over (1+|y|^{2n+2})^2 })$ does hold uniformly in $B_{\delta_k^{-\frac{1}{n+1}}\eta}(0)
\setminus B_1(0)$ and $ K_{k}(y) \to {8(n+1)^2|y|^{2n} \over
(1+|y|^{2n+2})^2}$ as $k\to+\infty$, by dominated convergence we get that \begin{eqnarray*}
&&\int_{B_{\delta_k^{-\frac{1}{n+1}}\eta}(0)} \bigg[-{1\over 2\pi }\log |y|+H( \delta_k^{1\over n+1} y,0)\bigg] K_k(y) \Psi_k(y) dy\\
&& \to \int_{\mathbb{R}^2} \bigg[-{1\over 2\pi }\log |y|+H(0,0)\bigg] {8(n+1)^2|y|^{2n} \over (1+|y|^{2n+2})^2} \Psi(y) dy = 0 \end{eqnarray*} as $k \to +\infty$. Since $\int_{\Omega} h_k=0$, integrating the equation satisfied by $\psi_k$ gives that $\int_{\Omega} \mathcal{K}_k \psi_k=0$. Then, by \eqref{mlK} we get that $$\int_{B_{\delta_k^{-\frac{1}{n+1}}\eta}(0)} K_k \Psi_k dy=\int_{B_\eta(0)} \mathcal{K}_k \psi_k dy=-\int_{{\Omega}\setminus B_\eta(0)} \mathcal{K}_k \psi_k=O(\delta_k^2),$$ which implies that $$\log \delta_k \int_{B_{\delta_k^{-\frac{1}{n+1}}\eta}(0)} K_k \Psi_k dy=O(\delta_k^2 \log \delta_k).$$ In conclusion, we have shown that $\int_{{\Omega}} G(y,0) \mathcal{K}_k(y) \psi_k(y)dy \to 0$ as $k \to +\infty$, yielding $\tilde c=0$.
\noindent In the following Claims, we will omit the subscript $k$. Let us denote $\tilde L(\psi)=\lap\psi + \mathcal{K} \psi$. \begin{claim} The operator $\tilde L$ satisfies the maximum principle in $ B_\eta(0) \setminus B_{R\delta^{1\over n+1}}(0)$ for $R$ large enough. \end{claim} \noindent Indeed, as already noticed in the proof of the previous Claim in terms of $K_{k}$, there is $C_1>0$ such that \begin{equation} \label{salsi}
\mathcal{K}(z)\le C_1 {(n+1)^2 \delta^2|z|^{2n}\over
(\delta^2+|z|^{2n+2})^2} \end{equation} in $B_\eta(0)\setminus B_{\delta^{1\over n+1}}(0)$. The function
$$\tilde Z(z)=- Y_0\left({ \mu z\over\delta^{1\over n+1}}\right)=\frac{\mu ^{2n+2}|z|^{2n+2}-\delta^2}{\mu^{2n+2}|z|^{2n+2}+\delta^2}$$ satisfies
$$-\lap \tilde Z(z)=16(n+1)^2 \frac{\delta^2 \mu^{2n+2} |z|^{2n} (\mu^{2n+2}|z|^{2n+2}-\delta^2)}{(\mu^{2n+2}|z|^{2n+2}+\delta^2)^3}.$$ For $R$ large so that $\mu^{2n+2}R^{2n+2}>{5\over 3}$ we have that \begin{equation*} \begin{split}
-\lap \tilde Z(z)&\ge 16(n+1)^2\frac{\delta^2 \mu^{2n+2} |z|^{2n} }{(\mu^{2n+2}|z|^{2n+2}+\delta^2)^2}\,{\mu^{2n+2}R^{2n+2}-1\over \mu^{2n+2}R^{2n+2}+1}\\ &\ge 4(n+1)^2\frac{\delta^2 \mu^{2n+2} R^{4n+4}}{(\mu^{2n+2}R^{2n+2}+1)^2}\,{1\over
|z|^{2n+4}}\ge{(n+1)^2\over \mu^{2n+2}}{\delta^2\over |z|^{2n+4}} \end{split} \end{equation*} in $B_\eta(0) \setminus B_{R\delta^{1\over n+1}}(0)$. On the other hand, since $\tilde Z \le 1$ we have that
$$\mathcal{K} (z)\tilde Z(z)\le C_1{(n+1)^2\delta^2|z|^{2n}\over
(\delta^2+|z|^{2n+2})^2}\le C_1{(n+1)^2\delta^2\over |z|^{2n+4}}$$ in $B_\eta(0)\setminus B_{\delta^{1\over n+1}}(0)$, and for $0<\mu<{1\over\sqrt{C_1}}$ we then get that $$\tilde L(\tilde Z)\le \left(-{1\over \mu^{2n+2}}+C_1\right){(n+1)^2 \delta^2 \over
|z|^{2n+4}}<0$$ in $B_\eta(0) \setminus B_{R\delta^{1\over n+1}}(0)$. Since
$$\tilde Z(z)\ge {\mu^{2n+2}R^{2n+2}-1\over \mu^{2n+2}R^{2n+2}+1}>{1\over 4}$$ for $|z|\geq R\delta^{1\over n+1}$, we have provided the existence of a positive super-solution for $\tilde L$, a sufficient condition for $\tilde L$ to satisfy the maximum principle.
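\noindent For completeness, we recall the standard argument behind the last assertion. Given $\psi$ with $\tilde L(\psi)\geq 0$ in $A:=B_\eta(0) \setminus B_{R\delta^{1\over n+1}}(0)$ and $\psi \leq 0$ on $\partial A$, set $w=\psi/\tilde Z$, which is well-defined thanks to $\tilde Z>{1\over 4}$. A direct computation gives
$$\tilde Z \Delta w+2 \nabla \tilde Z \cdot \nabla w+w\, \tilde L(\tilde Z)=\tilde L(\psi) \geq 0 \qquad \hbox{in } A.$$
At an interior maximum point of $w$ with $w>0$ we would have $\Delta w \leq 0$, $\nabla w=0$ and $w\, \tilde L(\tilde Z)<0$, a contradiction; hence $w \leq 0$, i.e. $\psi \leq 0$ in $A$.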
\begin{claim} There exists a constant $C>0$ such that
$$\|\psi\|_{\infty, B_\eta(0)\setminus B_{R\delta^{1\over n+1}}(0)}\le C[\|\psi\|_i+\|h\|_*],$$ where
$$\|\psi\|_i=\|\psi\|_{\infty, \partial B_{R\delta^{1\over n+1}}(0)}+\|\psi\|_{\infty, {\partial} B_{\eta}(0)}.$$ \end{claim} \noindent Indeed, letting $\Phi$ be the solution of $$ \left\{ \begin{array}{ll}
-\lap \Phi=2 \displaystyle \sum_{i=1}^2 {\delta^{\sigma_i \over n+1} \over |z|^{2+\sigma_i}}&\hbox{for }R\delta^{1\over n+1} \leq |z| \leq r\\
\Phi=0 &\text{for }|z|=r,\, R\delta^{1\over n+1} \end{array}\right.$$ with $r \in (\eta,2\eta)$, $\sigma_1=\sigma (n+1)$ and
$\sigma_2=2n+\sigma (n+1)$, we construct a barrier function of the form $\tilde\Phi=4\|\psi\|_i \tilde Z + \|h\|_* \Phi$. A direct computation shows that
$$\Phi(z)=2 \sum_{i=1}^2 \delta^{\sigma_i \over n+1}\left[-\frac{1}{\sigma_i^2 |z|^{\sigma_i}} + \alpha_i \log |z| +\beta_i \right],$$ where $$\alpha_i={1\over \sigma_i^2 \log {R\delta^{1\over n+1}\over r}}\left({1\over R^{\sigma_i}\delta^{\sigma_i \over n+1}}-{1\over r^{\sigma_i}}\right)<0,\quad\quad \beta_i={1\over \sigma_i^2 r^{\sigma_i}}-{\log r\over \sigma_i^2\log {R\delta^{1\over n+1}\over r}}\left({1\over R^{\sigma_i} \delta^{\sigma_i \over n+1}}-{1\over r^{\sigma_i}}\right)$$ for $i=1,2$. Since $$0\le \Phi(z)\le 2 \sum_{i=1}^2 \delta^{\sigma_i \over n+1}\left[-{1\over \sigma_i^2 r^{\sigma_i}}+\alpha_i \log R\delta^{1\over n+1}+\beta_i\right]=2 \sum_{i=1}^2 \delta^{\sigma_i \over n+1} \alpha_i \log {R\delta^{1\over n+1}\over r}\le \sum_{i=1}^2 {2 \over \sigma_i^2 R^{\sigma_i}},$$ we get that \begin{equation*} \begin{split}
\tilde L(\tilde\Phi)&\le \| h\|_*\left[-2 {\delta^{\sigma}\over
|z|^{2+\sigma(n+1)}}-2{\delta^{\sigma+{2n \over n+1}} \over
|z|^{2+2n+\sigma(n+1)}}
+C_1(n+1)^2 {\delta^2|z|^{2n}\over (\delta^2+|z|^{2n+2})^2}\sum_{i=1}^2 {2\over \sigma_i^2 R^{\sigma_i}} \right]\\
&\le \| h\|_*\left[-2 {\delta^{\sigma}\over |z|^{2+\sigma(n+1)}}-{
\delta^{\sigma+{2n \over n+1}} \over (\delta^2+|z|^{2n+2})^{1+\sigma/2} }
+{\delta^\sigma |z|^{2n}\over(\delta^2+|z|^{2n+2})^{1+\sigma/2} } \right]\\
&\le - \| h\|_* {\delta^\sigma(|z|^{2n}+\delta^{2n\over n+1})\over
(\delta^2+|z|^{2n+2})^{1+\sigma/2}} \end{split} \end{equation*} in view of (\ref{salsi}), for $R$ large so that $C_1(n+1)^2
\displaystyle \sum_{i=1}^2 {2\over \sigma_i^2 R^{\sigma_i}} \leq 1$. Since $|\psi| \leq \tilde \Phi$ on ${\partial} B_{R\delta^{1\over n+1}}(0)\cup{\partial} B_r(0)$ in view of $4\tilde Z\ge 1$, by the maximum principle we conclude that $|\psi|\le \tilde\Phi$ in $B_\eta(0)\setminus B_{R\delta^{1\over n+1}}(0)$ and the claim follows.
\noindent Since Claims 2 and 3 provide that $\|\psi_k \|_i \to 0$ as $k \to \infty$, by Claim 5 we conclude that
$\|\psi_k \|_\infty=o(1)$ as $k\to+\infty$, a contradiction with
$\liminf_{k \to +\infty} \|\psi_k\|_\infty>0$ according to Claim 1. This completes the proof. \qed \end{proof}
\noindent We are now in a position to solve problem \eqref{plco}. \begin{prop} \label{p2} There exists $\eta_0>0$ small such that for any $0<\delta\leq \eta_0$, $|\log \delta| \epsilon^2\leq \eta_0 \delta^{2\over n+1}$, $|a|\leq \eta_0 \delta$ and $h \in L^\infty(\Omega)$ with $\int_\Omega h=0$, problem \eqref{plco} admits a unique solution $\phi:=T(h)$, with $\int_\Omega \phi=0$, and unique numbers $d_0,d_1,d_2 \in \mathbb{R}$. Moreover, there is a constant $C>0$ so that \begin{equation}\label{est1}
\|\phi \|_\infty \le C\left(\log \frac 1\delta \right)\|h\|_*,\quad
\sum_{l=0}^2 |d_{l}|\le C\|h\|_*. \end{equation} \end{prop}
\begin{proof} Since $-\Delta Z_l=|\sigma'(z)|^{2} e^{U_{\delta,a}} Z_l$ in ${\Omega}$ (where $U_{\delta,a}$ stands for $U_{\delta,a,\sigma_a}$) and $\int_\Omega \Delta Z_l=O(\delta^2)$ in view of \eqref{deltaZ0}-\eqref{deltaZ}, we have that $\Delta PZ_l =O(|\sigma'(z)|^{2} e^{U_{\delta,a}})+O(\delta^2)$ in view of $Z_l=O(1)$, yielding $\|\lap PZ_{l}\|_*\le C$ for all $l=0,1,2$. By Proposition \ref{p1} every solution of \equ{plco} satisfies
$$\|\phi\|_\infty\le C\left(\log{1\over\delta}\right)\left[\|h\|_*+\sum_{l=0}^2|d_{l}|\right].$$ Set $\langle f,g\rangle=\int_{\Omega} fg$ and notice that \begin{equation}\label{diesis} \ds\langle L(\phi),PZ_{j}\rangle = \ds\langle L(\phi),PZ_{j}+t\rangle =\langle \phi+\gamma(\phi),\tilde L(PZ_{j}+t)\rangle \end{equation}
for any $t\in \mathbb{R}$, in view of $\int_\Omega L(\phi)=0$. To estimate the $|d_{l}|$'s, let us test equation \equ{plco} against $PZ_{j}$, $j=0,1,2$, to get $$\big\langle \phi+\gamma(\phi),\tilde L(PZ_{j}+t_j)\big\rangle =\langle h,PZ_{j}\rangle + \sum_{l=0}^2d_{l}\langle \lap PZ_{l},PZ_{j}\rangle$$
where $t_j=\frac{1}{|\Omega|}\int_\Omega Z_j$, $j=0,1,2$. From the proof of Lemma \ref{1039} we know that for $Z_0$ and $Z=Z_1+iZ_2$ there hold the following: \begin{eqnarray*}
&& \int_{\Omega} \lap PZ_0PZ_0=- 16 (n+1) \int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^4} +O(\delta^2)\,,\qquad \int_{\Omega} \lap PZPZ_0=O(\delta^2)\\ &&\int_{\Omega} \lap PZ \overline{PZ}=- 8 (n+1) \int_{\mathbb{R}^2}
\frac{|y|^2 }{(1+|y|^2)^4} +O(\delta)\,, \qquad \int_{\Omega} \lap PZ PZ=O(\delta) \end{eqnarray*}
where $ \int_{\mathbb{R}^2} \frac{dy}{(1+|y|^2)^4}=2
\int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^4}=\frac{\pi}{3}$. In terms of the $Z_l$'s we then have that $$\langle \lap PZ_{l},PZ_{j}\rangle=-(n+1)c_{jj} \delta_{lj}+O(\delta^2),$$
where $\delta_{lj}$ denotes the Kronecker delta and $c_{00}={8\pi\over 3}$, $c_{11}=c_{22}={4\pi\over 3}$. For $j=0,1,2$ let us now estimate $\big\|\tilde L(PZ_j+t_j)\big\|_*$: \begin{equation}\label{diesisdiesis}
\big\|\tilde L(PZ_j+t_j)\big\|_*=\big\|-|\sigma'(z)|^2 e^{U_{\delta,a}}
Z_j+\mathcal{K}(PZ_j+t_j)+O(\delta^2) \big\|_*=O( \delta+\epsilon^2
\delta^{-\frac{2}{n+1}}+\delta|c_a|) \end{equation} in view of \eqref{deltaZ0}-(\ref{pzij}) and (\ref{mlK}). Since
$|\gamma(\phi)|=O(\|\phi\|_\infty)$ in view of (\ref{BW}) and $\epsilon^2 \delta^{-\frac{2}{n+1}}=o(1)$, by \eqref{1458} we get that
$$\langle \phi+\gamma(\phi),\tilde L(PZ_{j}+t_j)\rangle=O(\delta+\epsilon^2 \delta^{-\frac{2}{n+1}}) \|\phi\|_\infty,$$ which together with the previous estimates yields \begin{equation}\label{estcij} \begin{split}
|d_j|\le C\bigg[(\delta+\epsilon^2\delta^{-{2\over n+1}})\|\phi\|_\infty+\|h\|_*+\delta \sum_{l=0}^2|d_l| \bigg] \end{split} \end{equation} in view of $PZ_j=O(1)$. Since (\ref{estcij}) gives that
$\displaystyle \sum_{l=0}^2|d_{l}|=O(\delta+\epsilon^2\delta^{-{2\over n+1}})\|\phi\|_\infty+O(\|h\|_*)$, we have that every solution of \equ{plco} satisfies
$$\|\phi\|_\infty\le C\left(\log{1\over\delta}\right)\left[\|h\|_*+\sum_{l=0}^2|d_{l}|\right] \leq C\log{1\over\delta}(\delta+\epsilon^2\delta^{-{2\over n+1}})\|\phi\|_\infty+C \log{1\over\delta} \|h\|_*.$$ In view of $\log{1\over\delta} (\delta+\epsilon^2\delta^{-{2\over n+1}})=o(1)$ as $\eta_0\to 0$, the a-priori estimates \eqref{est1} immediately follow.
\noindent To solve \equ{plco}, consider now the space $$H=\left\{\phi\in H^1(\Omega) \hbox{ doubly-periodic}: \: \int_\Omega \phi=0\,,\:\int_{\Omega}\lap PZ_{l}\,\phi=0 \hbox{ for }l=0,1,2\right\}$$ endowed with the usual inner product $[\phi,\psi]=\int_{\Omega}\nabla\phi\nabla\psi.$ Problem \eqref{plco} is equivalent to finding $\phi\in H$ such that $$[\phi,\psi]=\int_{\Omega}\left[\mathcal{K} \left(\phi+\gamma(\phi)\right)-h\right]\psi\qquad\text{for all }\psi\in H.$$ With the aid of Riesz's representation theorem, the equation has the form $(\hbox{Id}-\hbox{compact operator})\phi= \tilde h$. Fredholm's alternative guarantees unique solvability of this problem for any $h$ provided that the homogeneous equation has only the trivial solution. This is equivalent to \eqref{plco} with $h\equiv 0$, which has only the trivial solution by the a-priori estimates \eqref{est1}. The proof is now complete.\qed \end{proof}
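The reduction to ``$(\hbox{Id}-\hbox{compact operator})\phi= \tilde h$'' is precisely the setting of the Fredholm alternative. As an informal finite-dimensional caricature (the matrix $K$ and the datum $h$ below are illustrative values, not objects from this paper), injectivity of $\mathrm{Id}-K$ together with smallness of $K$ gives unique solvability via the Neumann series:

```python
# Finite-dimensional caricature of the Fredholm alternative used above:
# K is a small illustrative matrix with norm < 1, h an arbitrary datum.
K = [[0.2, 0.1], [0.0, 0.3]]
h = [1.0, 2.0]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Neumann series x = sum_{m >= 0} K^m h converges since ||K|| < 1,
# and its limit solves (Id - K) x = h.
x = [0.0, 0.0]
term = h[:]
for _ in range(200):
    x = [xi + ti for xi, ti in zip(x, term)]
    term = matvec(K, term)

# Check the residual of (Id - K) x = h.
Kx = matvec(K, x)
residual = [x[i] - Kx[i] - h[i] for i in range(2)]
assert all(abs(r) < 1e-10 for r in residual)
```

In the infinite-dimensional problem above, smallness of the perturbation is not available; it is the a-priori estimate \eqref{est1} that rules out a nontrivial kernel.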
\section{\hspace{-0.5cm}: The nonlinear problem} We consider the following nonlinear problem \begin{equation}\label{pnla} \left\{\begin{array}{ll} L(\phi)= -[R+N(\phi)] +\displaystyle \sum_{l=0}^{2}d_l \Delta PZ_{l} & \text{in }{\Omega}\\ \int_{\Omega } \Delta PZ_l\phi = 0 \hbox{ for all }l=0,1,2 & \\ \int_{{\Omega}}\phi=0,& \end{array} \right. \end{equation} where $R$, $N(\phi)$ and $L$ are given by \eqref{R}, \eqref{nlt} and \eqref{ol}, respectively. Notice that \eqref{linear} and (\ref{pnla}) are equivalent by setting $d=d_1-id_2$.
\begin{lem}\label{lpnla} There exists $\eta_0>0$ small such that for any $0<\delta<\eta_0$,
$|\log \delta|^2 \epsilon^2\leq \eta_0 \delta^{2\over n+1}$, $|a|\leq \eta_0 \delta$ problem \eqref{pnla} admits a unique solution $\phi$ and $d_l$, $l=0,1,2$. Moreover, there exists $C>0$ so that \begin{equation}\label{cotapsi}
\|\phi\|_\infty\le C|\log\delta|\|R\|_*. \end{equation} \end{lem} \begin{proof} In terms of the operator $T$ defined in Proposition \ref{p2}, problem \eqref{pnla} reads as $$\phi=-T\left(R+N(\phi)\right):=\mathcal{A}(\phi).$$ For a given number $M>0$, let us consider the space $$
\mathcal{F}_M = \{\phi\in L^\infty({\Omega}) \hbox{ doubly-periodic }:\: \|
\phi \|_\infty \le M|\log\delta| \,\|R\|_* \}.$$ It is a straightforward but tedious computation to show that \begin{equation}\label{star}
\|N(\phi_1) - N(\phi_2)\|_* \leq C_1 (\|\phi_1\|_\infty
+\|\phi_2\|_\infty) \|\phi_1-\phi_2\|_\infty. \end{equation}
Just to give an idea on how (\ref{star}) can be proved, observe that $0\leq \frac{e^{u_0+W+\phi}}{\int_\Omega e^{u_0+W+\phi}} \leq e^{2\|\phi\|_\infty} \frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}$ and
$|\int_\Omega e^{u_0+W+\phi} \phi|\leq \|\phi\|_\infty \int_\Omega e^{u_0+W+\phi}$. For $\|\phi\|_\infty\leq 1$ we can then get that
$$\|\phi\|_\infty \|D [\frac{e^{u_0+W+\phi}}{\int_\Omega e^{u_0+W+\phi}}][\phi]\|_*+\|D^2 [\frac{e^{u_0+W+\phi}}{\int_\Omega e^{u_0+W+\phi}}][\phi,\phi]\|_*=O(\|\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}\|_* \|\phi\|_\infty^2)
=O(\|\phi\|_\infty^2)$$ in view of $\|\frac{e^{u_0+W}}{\int_\Omega e^{u_0+W}}\|_*=O(1)$ by (\ref{eps1}). This is exactly what we need to estimate in $\|\cdot\|_*$-norm the difference between the first terms of $N(\phi_1)$ and $N(\phi_2)$. For the other terms we can argue in a similar way to get
$$\|\phi\|_\infty \|D [\frac{e^{2(u_0+W+\phi)}}{\int_\Omega e^{2(u_0+W+\phi)}}][\phi]\|_*+\|D^2 [\frac{e^{2(u_0+W+\phi)}}{\int_\Omega e^{2(u_0+W+\phi)}}][\phi,\phi]\|_*=O(\| \frac{e^{2(u_0+W)}}{\int_\Omega e^{2(u_0+W)}}\|_* \|\phi\|_\infty^2)=O(\|\phi\|_\infty^2)$$
in view of $\| \frac{e^{2(u_0+W)}}{\int_\Omega e^{2(u_0+W)}}\|_* =O(1)$ by (\ref{eps2}), and
$$\|\phi\|_\infty \|D[B(W+\phi)][\phi]\|_*+\|D^2[B(W+\phi)][\phi,\phi]\|_*=O(B(W)\|\phi\|_\infty^2)=O(\delta^{-\frac{2}{n+1}}\|\phi\|_\infty^2)$$ in view of (\ref{BW}). Since $\epsilon^2 \delta^{-\frac{2}{n+1}}=o(1)$ we can deduce the validity of (\ref{star}).
\noindent Denote by $C'$ the constant appearing in \eqref{est1}. By Proposition \ref{p2} and (\ref{star}) we get that
$$\|\mathcal{A}(\phi_1)-\mathcal{A}(\phi_2)\|_\infty \leq C'|\log \delta| \|N(\phi_1)-N(\phi_2)\|_*\leq 2C'C_1 M \|R\|_* \log^2 \delta \|\phi_1-\phi_2\|_\infty$$ for all $\phi_1,\phi_2 \in \mathcal{F}_M$. By Proposition \ref{p2} we also have that \begin{equation*}
\|\mathcal{A}(\phi)\|_\infty \le C' | \log \delta |\left[ \|R\|_* +
\|N(\phi)\|_*\right]\leq C' | \log \delta | \|R\|_*+C' C_1|\log
\delta| \|\phi\|_\infty^2 \end{equation*} for all $\phi\in \mathcal{F}_M$. Fix now $M$ as $M=2C'$, and by (\ref{ere}) take
$\eta_0$ small so that $4(C')^2 C_1 \log^2 \delta \|R\|_* < \frac{1}{2}$ in order to have that $\mathcal{A}$ is a contraction mapping of $\mathcal{F}_M$ into itself. Therefore $\mathcal{A}$ has a unique fixed point $\phi$ in $\mathcal{F}_M$, which satisfies (\ref{cotapsi}) with $C=M$.\qed \end{proof}
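The final step is Banach's fixed-point theorem on $\mathcal{F}_M$. As a toy illustration of this mechanism (the scalar contraction below is chosen only for the example and is unrelated to the operator $\mathcal{A}$), Picard iteration of a contraction converges to its unique fixed point:

```python
from math import cos

# Toy contraction x -> cos(x) on [0, 1] (Lipschitz constant sin(1) < 1).
# Picard iteration converges to the unique fixed point, exactly as the
# unique fixed point of the contraction A in F_M is produced above.
x = 0.0
for _ in range(100):
    x = cos(x)

# x now solves the fixed-point equation x = cos(x) up to round-off.
assert abs(x - cos(x)) < 1e-12
```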
\section{\hspace{-0.5cm}: The integral coefficients in \eqref{solve1b}-\eqref{solve2b}} Letting $\zeta=\frac{a}{\delta}$, we aim to investigate the integral coefficients
$$I:=\int_{\mathbb{R}^2} \frac{(|y|^2-1)|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}\,dy\:,\qquad K:=\int_{\mathbb{R}^2} \frac{|y+\zeta|^{\frac{2n}{n+1}}y}{(1+|y|^2)^5}\,dy$$
which appear in \eqref{solve1b}-\eqref{solve2b} or \eqref{solve1}-\eqref{solve2}. We will show below that $I=f(|\zeta|)$ and $K=g(|\zeta|)\zeta$ with $f<0<g$, and the asymptotic behavior of $f$ and $g$ as $|\zeta|\to +\infty$ will be identified.
\noindent By the change of variable $y \to y+\zeta$ and the Taylor expansion
$$(1-x)^{-5}=\sum_{k=0}^{+\infty} c_k x^k \quad\hbox{for }|x|<1$$ with $c_k=\frac{(4+k)!}{24\,k!}$, we can re-write $I$ as \begin{eqnarray*}
I&=&\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(|y-\zeta|^2-1)}{(1+|y-\zeta|^2)^5}dy
=\sum_{k=0}^{+\infty} c_k \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(|y|^2+|\zeta|^2-1-y\bar \zeta-\bar y \zeta)(y\bar \zeta+\bar y \zeta)^k}{(1+|y|^2+|\zeta|^2)^{5+k}}dy \end{eqnarray*} in view of
$$(1+|y-\zeta|^2)^{-5}=(1+|y|^2+|\zeta|^2)^{-5}(1-\frac{y\bar \zeta+\bar y \zeta}{1+|y|^2+|\zeta|^2})^{-5}$$ with
$$\frac{|y\bar \zeta+\bar y \zeta|}{1+|y|^2+|\zeta|^2} \leq \frac{|y|^2+|\zeta|^2}{1+|y|^2+|\zeta|^2}<1.$$ Since $$(y\bar \zeta+\bar y \zeta)^k= \sum_{j=0}^k \left(\begin{array}{l} k \\ j \end{array}\right) y^j \bar \zeta^j \bar y^{k-j} \zeta^{k-j}=
\sum_{0\leq j <\frac{k}{2} } \left(\begin{array}{l} k \\ j \end{array}\right) \zeta^{k-2j} \bar y^{k-2j} |\zeta|^{2j} |y|^{2j}
+\sum_{\frac{k}{2}<j\leq k} \left(\begin{array}{l} k \\ j \end{array}\right) \bar \zeta^{2j-k} y^{2j-k} |\zeta|^{2k-2j} |y|^{2k-2j} $$ for $k$ odd and $$(y\bar \zeta+\bar y \zeta)^k=
\sum_{0\leq j <\frac{k}{2} } \left(\begin{array}{l} k \\ j \end{array}\right) \zeta^{k-2j} \bar y^{k-2j} |\zeta|^{2j} |y|^{2j}
+\sum_{\frac{k}{2}<j\leq k} \left(\begin{array}{l} k \\ j \end{array}\right) \bar \zeta^{2j-k} y^{2j-k} |\zeta|^{2k-2j} |y|^{2k-2j}
+ \left(\begin{array}{l} k \\ \frac{k}{2} \end{array}\right) |\zeta|^k |y|^k $$ for $k$ even, by symmetry we can simplify the expression of $I$ as follows: \begin{eqnarray*} I&=&
\sum_{k=0}^{+\infty} c_k \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(|y|^2+|\zeta|^2-1)(y\bar \zeta+\bar y \zeta)^k}{(1+|y|^2+|\zeta|^2)^{5+k}}dy
-\sum_{k=0}^{+\infty} c_k \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(y\bar \zeta+\bar y \zeta)^{k+1}}{(1+|y|^2+|\zeta|^2)^{5+k}}dy\\
&=&\sum_{k=0}^{+\infty} c_{2k} \left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} (|y|^2+|\zeta|^2-1)}{(1+|y|^2+|\zeta|^2)^{5+2k}} dy
-\sum_{k=1}^{+\infty} c_{2k-1} \left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy \end{eqnarray*} Since $I^p_q=\displaystyle \int_0^{\infty} \frac{ \rho^p}{(1+\rho)^q}d\rho$, $q>p+1$, satisfies the relations \begin{equation} \label{Ipq} I^p_{q+1}=\frac{q-p-1}{q}I^p_q\:,\qquad I^{p+1}_q=\frac{p+1}{q-p-2} I^p_q, \end{equation}
through the change of variable $\rho^2=\lambda t$, $\lambda=1+|\zeta|^2$, in polar coordinates we have that \begin{eqnarray} \label{1748}
\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{5+2k}} dy &=& \pi \lambda^{\frac{n}{n+1}-4-k} I^{\frac{n}{n+1}+k}_{5+2k} =\pi \frac{3+k-\frac{n}{n+1}}{4+2k}\lambda^{\frac{n}{n+1}-4-k} I^{\frac{n}{n+1}+k}_{4+2k}\nonumber\\
&=&\frac{3+k-\frac{n}{n+1}}{2(2+k)(1+|\zeta|^2)}
\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy \end{eqnarray} and \begin{eqnarray} \label{1818}
\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}-2+2k} }{(1+|y|^2+|\zeta|^2)^{2+2k}} dy &=& \pi \lambda^{\frac{n}{n+1}-2-k} I^{\frac{n}{n+1}-1+k}_{2+2k} =\pi \frac{(2+2k)(3+2k)}{(k+\frac{n}{n+1})(2+k-\frac{n}{n+1})}\lambda^{\frac{n}{n+1}-2-k} I^{\frac{n}{n+1}+k}_{4+2k}\nonumber\\
&=&\frac{(2+2k)(3+2k)}{(k+\frac{n}{n+1})(2+k-\frac{n}{n+1})}(1+|\zeta|^2)
\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy \end{eqnarray} Inserting \eqref{1748} and \eqref{1818} into $I$, we get that \begin{eqnarray*} I&=&
\sum_{k=0}^{+\infty} c_{2k} \left(1-\frac{3+k-\frac{n}{n+1}}{(2+k)(1+|\zeta|^2)}\right)\left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy\\
&&-\sum_{k=1}^{+\infty} c_{2k-1} \left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy\\ &=&
\sum_{k=1}^{+\infty} \left[\frac{2(3+2k)c_{2k-2}}{k+\frac{n}{n+1}} \left(\frac{1+k}{2+k-\frac{n}{n+1}}-\frac{1}{1+|\zeta|^2}\right)\left(\begin{array}{c} 2k-2 \\ k-1 \end{array}\right) (1+|\zeta|^2)-c_{2k-1} \left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^2 \right]\times\\
&&\times |\zeta|^{2k-2} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy. \end{eqnarray*}
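The recurrences \eqref{Ipq}, which produced \eqref{1748} and \eqref{1818}, follow from the closed Beta-function form $I^p_q=\frac{\Gamma(p+1)\Gamma(q-p-1)}{\Gamma(q)}$. A numerical sanity check (the sample values $p=\frac34$, corresponding to $n=3$, and $q=5$ are chosen only for illustration):

```python
from math import gamma

def I(p, q):
    # Closed form: I^p_q = int_0^inf rho^p/(1+rho)^q drho
    #            = Gamma(p+1) Gamma(q-p-1) / Gamma(q), valid for q > p+1.
    return gamma(p + 1) * gamma(q - p - 1) / gamma(q)

p, q = 0.75, 5.0  # p = n/(n+1) with n = 3; illustrative choice

# Cross-check the closed form by midpoint quadrature after the substitution
# rho = t/(1-t), which turns the integral into int_0^1 t^p (1-t)^(q-p-2) dt.
N = 200_000
h = 1.0 / N
quad = sum(((k + 0.5) * h) ** p * (1 - (k + 0.5) * h) ** (q - p - 2)
           for k in range(N)) * h
assert abs(quad - I(p, q)) < 1e-6

# The two recurrences in (Ipq):
assert abs(I(p, q + 1) - (q - p - 1) / q * I(p, q)) < 1e-12
assert abs(I(p + 1, q) - (p + 1) / (q - p - 2) * I(p, q)) < 1e-12
```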
Since $2(3+2k)c_{2k-2} \left(\begin{array}{c} 2k-2 \\ k-1 \end{array}\right)=k c_{2k-1} \left(\begin{array}{c} 2k \\ k \end{array}\right)$ for all $k \geq 1$, setting $\beta_k=c_{2k-1} \left(\begin{array}{c} 2k \\ k \end{array}\right) |\zeta|^{2k-2} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k} }{(1+|y|^2+|\zeta|^2)^{4+2k}} dy$ we deduce that \begin{eqnarray*} I&=&
\sum_{k=1}^{+\infty} \left[\frac{k}{k+\frac{n}{n+1}} \left(\frac{1+k}{2+k-\frac{n}{n+1}}-\frac{1}{1+|\zeta|^2}\right) (1+|\zeta|^2)- |\zeta|^2 \right] \beta_k\\ &=&
\sum_{k=1}^{+\infty} \left[\frac{k}{k+\frac{n}{n+1}} \left(\frac{|\zeta|^2}{1+|\zeta|^2}-\frac{1}{(2+k)(n+1)-n} \right) (1+|\zeta|^2)- |\zeta|^2 \right] \beta_k<\sum_{k=1}^{+\infty} \left[\frac{k}{k+\frac{n}{n+1}}-1\right] |\zeta|^2 \beta_k<0. \end{eqnarray*}
In conclusion, we have shown that $I=f(|\zeta|)$ with $f<0$.
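The collapse of the two series for $I$ into a single one relied on the coefficient identity $2(3+2k)c_{2k-2}\binom{2k-2}{k-1}=k\,c_{2k-1}\binom{2k}{k}$ stated above. Since $c_k=\frac{(4+k)!}{24\,k!}$, this is an identity of integers and can be checked mechanically for small $k$:

```python
from math import comb, factorial

def c(k):
    # Taylor coefficient of (1-x)^(-5): c_k = (4+k)!/(24 k!)
    return factorial(4 + k) // (24 * factorial(k))

# Identity used above to rewrite I as a single series, for k >= 1:
# 2(3+2k) c_{2k-2} C(2k-2, k-1) = k c_{2k-1} C(2k, k).
for k in range(1, 50):
    assert 2 * (3 + 2 * k) * c(2 * k - 2) * comb(2 * k - 2, k - 1) \
        == k * c(2 * k - 1) * comb(2 * k, k)
```

(Both sides reduce to $\frac{2\,(2k+3)!}{24\,((k-1)!)^2}$, which proves the identity for all $k\geq 1$.)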
\noindent By the change of variable $y \to y+\zeta$ and the Taylor expansion of $(1-x)^{-5}$, arguing as before $K$ can be re-written as \begin{eqnarray*}
K&=&\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(y-\zeta)}{(1+|y-\zeta|^2)^5}dy
=\sum_{k=0}^{+\infty} c_k \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}(y-\zeta)(y\bar \zeta+\bar y \zeta)^k}{(1+|y|^2+|\zeta|^2)^{5+k}}dy. \end{eqnarray*} By the previous expansions of $(y\bar \zeta+\bar y \zeta)^k$ and \begin{eqnarray*}
\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2+2k}}{(1+|y|^2+|\zeta|^2)^{6+2k}}dy &=&\pi \lambda^{\frac{n}{n+1}-4-k} I^{\frac{n}{n+1}+1+k}_{6+2k}= \pi \frac{\frac{n}{n+1}+1+k}{5+2k} \lambda^{\frac{n}{n+1}-4-k} I^{\frac{n}{n+1}+k}_{5+2k}\\ &=&
\frac{\frac{n}{n+1}+1+k}{5+2k}\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k}}{(1+|y|^2+|\zeta|^2)^{5+2k}}dy, \end{eqnarray*} by symmetry $K$ reduces to \begin{eqnarray*} K&=& \zeta \, \sum_{k=0}^{+\infty} \left[c_{2k+1} \frac{\frac{n}{n+1}+1+k}{5+2k}\left(\begin{array}{c} 2k+1 \\ k \end{array}\right) -c_{2k} \left(\begin{array}{c} 2k \\ k \end{array}\right) \right]
|\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k}}{(1+|y|^2+|\zeta|^2)^{5+2k}}dy. \end{eqnarray*} Since $(1+k) c_{2k+1} \left(\begin{array}{c} 2k+1 \\ k \end{array}\right)=(5+2k) c_{2k} \left(\begin{array}{c} 2k \\ k \end{array}\right)$ for all $k \geq 0$, we get that \begin{eqnarray*} K&=& \zeta \, \sum_{k=0}^{+\infty} \frac{n}{(n+1)(1+k)} c_{2k} \left(\begin{array}{c} 2k \\ k \end{array}\right)
|\zeta|^{2k} \int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}+2k}}{(1+|y|^2+|\zeta|^2)^{5+2k}}dy. \end{eqnarray*}
In conclusion, we have shown that $K=g(|\zeta|)\zeta$ with $g>0$.
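The reduction of $K$ used the companion identity $(1+k)c_{2k+1}\binom{2k+1}{k}=(5+2k)c_{2k}\binom{2k}{k}$; both sides equal $\frac{(2k+5)!}{24\,(k!)^2}$, so it too can be checked mechanically:

```python
from math import comb, factorial

def c(k):
    # Taylor coefficient of (1-x)^(-5): c_k = (4+k)!/(24 k!)
    return factorial(4 + k) // (24 * factorial(k))

# Identity used above in the reduction of K, for k >= 0:
# (1+k) c_{2k+1} C(2k+1, k) = (5+2k) c_{2k} C(2k, k) = (2k+5)!/(24 (k!)^2).
for k in range(50):
    lhs = (1 + k) * c(2 * k + 1) * comb(2 * k + 1, k)
    rhs = (5 + 2 * k) * c(2 * k) * comb(2 * k, k)
    assert lhs == rhs == factorial(2 * k + 5) // (24 * factorial(k) ** 2)
```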
\noindent In order to determine the asymptotic behavior of $f$ and $g$ as $|\zeta|\to +\infty$, we will use complex analysis to derive integral representations of $f$ and $g$, see \eqref{exprI-2J} and \eqref{exprK}. We split $I$ as $I=J_1-2J_2$, and we compute separately the integrals
$$J_1=\int_{\mathbb{R}^2} \frac{|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^4}dy\:,\quad J_2=\int_{\mathbb{R}^2} \frac{|y+\zeta|^{\frac{2n}{n+1}}}{(1+|y|^2)^5}dy.$$ Concerning $J_1$, we re-write it in polar coordinates as \begin{eqnarray*}
J_1&=&\int_{\mathbb{R}^2} \frac{|y|^{\frac{2n}{n+1}}}{(1+|y-\zeta|^2)^4}dy=\int_0^{+\infty} \rho^{\frac{2n}{n+1}+1} d\rho \int_0^{2\pi} \frac{d\theta}{(1+\rho^2+|\zeta|^2-\zeta\rho e^{-i\theta}-\overline{\zeta}\rho e^{i\theta})^4}\\
&=&- i \int_0^{+\infty} \rho^{\frac{2n}{n+1}+1} d\rho \int_{\partial^+ B_1(0)} \frac{w^3}{(\overline{\zeta}\rho)^4 (w^2-\frac{1+\rho^2+|\zeta|^2}{\overline{\zeta}\rho}w+\frac{\zeta^2}{|\zeta|^2})^4}dw. \end{eqnarray*}
Since $w^2- \displaystyle \frac{1+\rho^2+|\zeta|^2}{\overline{\zeta}\rho}w+\frac{\zeta^2}{|\zeta|^2}$ vanishes only at
$$w_\pm=\frac{1+\rho^2+|\zeta|^2\pm \sqrt{(1+\rho^2+|\zeta|^2)^2-4\rho^2|\zeta|^2}}{2 \overline{\zeta} \rho}$$
with $|w_-|<1<|w_+|$, by the Residue Theorem we have that \begin{eqnarray*} J_1= - i \int_0^{+\infty} \rho^{\frac{2n}{n+1}+1} d\rho \int_{\partial^+ B_1(0)} \frac{w^3}{(\overline{\zeta}\rho)^4 (w-w_-)^4(w-w_+)^4}dw=2\pi \int_0^{\infty} \frac{ \rho^{\frac{2n}{n+1}+1} }{6 (\overline{\zeta}\rho)^4} \frac{d^3}{d w^3} \left[ \frac{w^3}{(w-w_+)^4}\right](w_-) d \rho . \end{eqnarray*} A straightforward computation shows that $$ \frac{d^3}{d w^3}\left[ \frac{w^3}{(w-w_+)^4}\right]=-6\frac{w^3+w_+^3+9w w_+(w+w_+)}{(w-w_+)^7},$$ and then
$$ \frac{d^3}{d w^3}\left[ \frac{w^3}{(w-w_+)^4}\right](w_-)=6 (\overline{\zeta}\rho)^4 \frac{(1+\rho^2+|\zeta|^2)[(1+\rho^2+|\zeta|^2)^2+6\rho^2 |\zeta|^2]}{[(1+\rho^2+|\zeta|^2)^2-4\rho^2 |\zeta|^2]^{\frac{7}{2}}}.$$
Recalling that $\lambda=1+|\zeta|^2$, through the change of variable $\rho \to \rho^2$ we finally get for $J_1$ the expression \begin{eqnarray}\label{exprI} J_1=\pi \int_0^{\infty} \rho^{\frac{n}{n+1}} \frac{(\lambda+\rho)[(\lambda+\rho)^2+6(\lambda-1)\rho]}{[(\lambda+\rho)^2-4(\lambda-1)\rho]^{\frac{7}{2}}} d \rho. \end{eqnarray}
\noindent In a similar way, we first re-write $J_2$ as \begin{eqnarray*} J_2= i \int_0^{+\infty} \rho^{\frac{2n}{n+1}+1} d\rho \int_{\partial^+ B_1(0)} \frac{w^4}{(\overline{\zeta}\rho)^5 (w-w_-)^5 (w-w_+)^5}dw= -2\pi \int_0^{+\infty} \frac{ \rho^{\frac{2n}{n+1}+1} }{24 (\overline{\zeta}\rho)^5} \frac{d^4}{d w^4} \left[ \frac{w^4}{(w-w_+)^5}\right](w_-) d\rho \end{eqnarray*} in view of the Residue Theorem. Since $$ \frac{d^4}{d w^4}\left[ \frac{w^4}{(w-w_+)^5}\right]=24\frac{w^4+w_+^4+16w w_+(w^2+w_+^2)+36 w^2 w_+^2}{(w-w_+)^9},$$ we get that
$$ \frac{d^4}{d w^4}\left[ \frac{w^4}{(w-w_+)^5}\right](w_-)=-24 (\overline{\zeta}\rho)^5 \frac{(1+\rho^2+|\zeta|^2)^4 +12 \rho^2|\zeta|^2 (1+\rho^2+|\zeta|^2)^2+42\rho^4 |\zeta|^4}{[(1+\rho^2+|\zeta|^2)^2-4\rho^2 |\zeta|^2]^{\frac{9}{2}}},$$ and then \begin{eqnarray}\label{exprJ} J_2=\pi \int_0^{\infty} \rho^{\frac{n}{n+1}} \frac{(\lambda+\rho)^4+12(\lambda-1)\rho (\lambda+\rho)^2+42(\lambda-1)^2\rho^2}{[(\lambda+\rho)^2-4(\lambda-1)\rho]^{\frac{9}{2}}} d \rho. \end{eqnarray}
\noindent By \eqref{exprI}-\eqref{exprJ} we finally get that $f(|\zeta|)$ takes the form \begin{eqnarray} \label{exprI-2J} f=\pi \int_0^{\infty} \hspace{-0,3cm} \rho^{\frac{n}{n+1}} \frac{(\lambda+\rho)^5-2(\lambda+\rho)^4+2(\lambda-1) \rho (\lambda+\rho)^3 -24 \lambda (\lambda-1) \rho(\rho+1) (\lambda+\rho) -84(\lambda-1)^2 \rho^2 }{[(\lambda+\rho)^2-4(\lambda-1)\rho]^{\frac{9}{2}}} d \rho \end{eqnarray}
where $\lambda=1+|\zeta|^2$.
\noindent Observe that for $\zeta=0$ (i.e. $\lambda=1$) we simply have that \begin{equation} \label{1228} f(0)=J_1-2J_2=\pi [I^{\frac{n}{n+1}}_4-2 I^{\frac{n}{n+1}}_5]=-\frac{2\pi}{2n+3}I^{\frac{n}{n+1}}_5 \end{equation} in view of \eqref{Ipq}. By the change of variable $\rho=\lambda+\sqrt \lambda t$ and the Lebesgue Theorem we get that \begin{eqnarray*} \lambda^{-\frac{n}{n+1}} J_1=\pi \int_{-\sqrt \lambda}^\infty (1+\frac{t}{\sqrt \lambda})^{\frac{n}{n+1}} \frac{(2+\frac{t}{\sqrt \lambda})^3+6\frac{\lambda-1}{\lambda} (1+\frac{t}{\sqrt \lambda})(2+\frac{t}{\sqrt \lambda})}{(t^2+4+\frac{4t}{\sqrt \lambda})^{\frac{7}{2}}} dt \to 20 \pi \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{7}{2}}} \end{eqnarray*} and \begin{eqnarray*} \lambda^{-\frac{n}{n+1}} J_2&=&\pi \int_{-\sqrt \lambda}^\infty (1+\frac{t}{\sqrt \lambda})^{\frac{n}{n+1}} \frac{(2+\frac{t}{\sqrt \lambda})^4+12 \frac{\lambda-1}{\lambda} (1+\frac{t}{\sqrt \lambda})(2+\frac{t}{\sqrt \lambda})^2 +42 (\frac{\lambda-1}{\lambda})^2 (1+\frac{t}{\sqrt \lambda})^2 }{(t^2+4+\frac{4t}{\sqrt \lambda})^{\frac{9}{2}}} dt\\ & \to & 106 \pi \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{9}{2}}} \end{eqnarray*}
as $|\zeta|\to +\infty$ (i.e. $\lambda \to +\infty$). Since $\displaystyle \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{7}{2}}}=\frac{14}{3} \displaystyle \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{9}{2}}},$ we get that \begin{equation} \label{1902}
\frac{f(|\zeta|)}{|\zeta|^{\frac{2n}{n+1}}} \to -\frac{356}{3} \pi \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{9}{2}}} \end{equation}
as $|\zeta|\to \infty$.
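The two constants just used come from the classical formula $\int_{\mathbb{R}}\frac{dt}{(t^2+a^2)^q}=\frac{\sqrt{\pi}\,\Gamma(q-\frac12)}{a^{2q-1}\Gamma(q)}$ (a standard fact, not proved here); with $a=2$ it gives the ratio $\frac{14}{3}$ between the exponents $\frac72$ and $\frac92$, and hence the coefficient $-\frac{356}{3}=20\cdot\frac{14}{3}-212$ in \eqref{1902}. A numerical double check:

```python
from math import gamma, pi, sqrt

def J(q, a=2.0):
    # int_R dt/(t^2 + a^2)^q = sqrt(pi) Gamma(q - 1/2) / (a^(2q-1) Gamma(q)),
    # valid for q > 1/2.
    return sqrt(pi) * gamma(q - 0.5) / (a ** (2 * q - 1) * gamma(q))

# Ratio used above: int (t^2+4)^(-7/2) dt = (14/3) int (t^2+4)^(-9/2) dt.
assert abs(J(3.5) - 14.0 / 3.0 * J(4.5)) < 1e-12

# Coefficient in (1902): 20 * J(3.5) - 2 * 106 * J(4.5) = -(356/3) J(4.5).
assert abs(20 * J(3.5) - 212 * J(4.5) + 356.0 / 3.0 * J(4.5)) < 1e-12
```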
\noindent In a similar way, for $K$ we have that \begin{eqnarray*} K= i \int_0^{+\infty} \rho^{\frac{2n}{n+1}+1} d\rho \int_{\partial^+ B_1(0)} \frac{w^4(\rho w-\zeta)}{(\overline{\zeta}\rho)^5 (w-w_-)^5 (w-w_+)^5}dw= -2\pi \int_0^{+\infty} \frac{\rho^{\frac{2n}{n+1}+1} }{24 (\overline{\zeta}\rho)^5} \frac{d^4}{d w^4} \left[ \frac{w^4(\rho w-\zeta)}{(w-w_+)^5}\right](w_-) d\rho \end{eqnarray*} in view of the Residue Theorem. Since $$ \frac{d^4}{d w^4}\left[ \frac{w^4(\rho w-\zeta)}{(w-w_+)^5}\right]=24\frac{5\rho w w_+[w^3+w_+^3+6ww_+(w+w_+)]-\zeta [w^4+w_+^4+16w w_+(w^2+w_+^2)+36 w^2 w_+^2]}{(w-w_+)^9},$$ we get that $$ \frac{d^4}{d w^4}\left[ \frac{w^4(\rho w- \zeta)}{(w-w_+)^5}\right](w_-)=12 (\overline{\zeta}\rho)^5 \zeta \frac{(\lambda+\rho^2)^4+2\rho^2 (\lambda-6-5\rho^2) (\lambda+\rho^2)^2+6(\lambda-1)\rho^4 (2\lambda-7-5\rho^2)}{[(\lambda+\rho^2)^2-4(\lambda-1)\rho^2 ]^{\frac{9}{2}}},$$ and then \begin{eqnarray}\label{exprK}
g(|\zeta|)=-\frac{\pi}{2} \int_0^{\infty} \rho^{\frac{n}{n+1}} \frac{(\lambda+\rho)^4+2\rho (\lambda-6-5\rho) (\lambda+\rho)^2+6(\lambda-1)\rho^2 (2\lambda-7-5\rho)}{[(\lambda+\rho)^2-4(\lambda-1)\rho]^{\frac{9}{2}}} d \rho. \end{eqnarray} So, we have that \begin{equation} \label{1903} g(0)=\frac{\pi}{2}(9I_5^{\frac{n}{n+1}}-10 I_6^{\frac{n}{n+1}})=\frac{3n+1}{2(n+1)} \pi I_5^{\frac{n}{n+1}} \end{equation} in view of \eqref{Ipq}, and, by the change of variable $\rho=\lambda+\sqrt \lambda t$ and the Lebesgue Theorem, \begin{eqnarray} \label{1904}
\frac{g(|\zeta|)}{|\zeta|^{\frac{2n}{n+1}}} \to 17 \pi \int_{\mathbb{R}} \frac{dt}{(t^2+4)^{\frac{9}{2}}} \end{eqnarray}
as $|\zeta|\to +\infty$, in view of $$\int_{-\sqrt \lambda}^\infty (1+\frac{t}{\sqrt \lambda})^{\frac{n}{n+1}} \frac{ (2+\frac{t}{\sqrt \lambda})^4-2(1+\frac{t}{\sqrt \lambda}) (4+\frac{6+5 \sqrt \lambda t}{\lambda}) (2+\frac{t}{\sqrt \lambda})^2-6\frac{\lambda-1}{\lambda} (1+\frac{t}{\sqrt \lambda})^2 (3+\frac{7+5 \sqrt \lambda t}{\lambda})}{(t^2+4+\frac{4t}{\sqrt \lambda})^{\frac{9}{2}}} dt\to - \int_{\mathbb{R}} \frac{34\, dt}{(t^2+4)^{\frac{9}{2}}}$$ as $\lambda \to +\infty$.
\noindent {\bf Acknowledgements:} The work for this paper began while the second author was visiting the Departamento de Matem\'atica, Pontificia Universidad Cat\'olica de Chile (Santiago, Chile). He would like to thank M. Musso and M. del Pino for their kind invitation and hospitality. \end{appendices}
\end{document}
\begin{document}
\title{Second Countable Virtually Free Pro-$p$ Groups whose Torsion Elements have Finite Centralizer}
\noindent MSC classification: 20E18
\noindent Key-words: Profinite group, pro-$p$ group, HNN-extension, profinite module.
\begin{abstract} A second countable \vfpp\ group\ all of whose torsion elements have finite centralizer is a free pro-$p$\ product of finite $p$-groups and a free pro-$p$\ factor. The proof explores a connection between $p$-adic representations of finite $p$-groups and virtually free pro-$p$ groups. In order to utilize this connection, we first prove a version of a remarkable theorem of A. Weiss for infinitely generated profinite modules that allows us to detect freeness of profinite modules. The proof then proceeds using techniques developed in the combinatorial theory of profinite groups. Using an HNN-extension we embed our group into a semidirect product $F\rtimes K$ of a free pro-$p$ group $F$ and a finite $p$-group $K$ that preserves the conditions on centralizers and such that every torsion element is conjugate to an element of $K$. We then show that the $\mathbb{Z}_pK$-module $F/[F,F]$ is free using the detection theorem mentioned above. This allows us to deduce the result for $F\rtimes K$, and hence for our original group, using the pro-$p$ version of the Kurosh subgroup theorem. \end{abstract}
\section{Introduction}
The objective of this paper is to give a complete description of a second countable virtually free pro-$p$ group whose torsion elements have finite centralizer. The description is a generalization of the main theorem of \cite{bulletin}, where the result was obtained in the finitely generated case. Note however that finite generation is a rather restrictive condition in Galois theory and that the finite centralizer condition for torsion elements arises naturally in the study of maximal pro-$p$ Galois groups. In particular, D.\,Haran \cite{H93} (see also I.\,Efrat in \cite{E96} for a different proof, or \cite[Proposition 19.4.3]{efrat}) proved Theorem \ref{main} in the case where $G$ is an extension of a free pro-$2$ group with a group of order $2$.
Our main result is the following:
\begin{theorem}\label{main} Let $G$ be a second countable \vfpp\ group\ such that the centralizer of every torsion element in $G$ is finite. Then $G$ is a free pro-$p$\ product of subgroups that are finite or free pro-$p$. \end{theorem}
The proof of Theorem \ref{main} explores a connection between $p$-adic representations of finite $p$-groups and virtually free pro-$p$ groups. One of the main ingredients of the proof is the following result, which can be considered to be a first step towards a generalization of a remarkable theorem of A. Weiss \cite[Theorem 2]{weiss} to infinitely generated pro-$p$ modules.
\begin{theorem}\label{infgenfree} Let $G$ be a finite $p$-group and let $U$ be a profinite $\mathbb{Z}_p G$-lattice. Suppose that there is a normal subgroup $N$ of $G$ such that
\begin{itemize}
\item The restriction $U\res{N}$ is a free $\mathbb{Z}_p N$-module,
\item The module $U^N$ of $N$-fixed points is a free $\mathbb{Z}_p[G/N]$-module.
\end{itemize} Then $U$ itself is a free $\mathbb{Z}_pG$-lattice. \end{theorem}
The connection to representation theory cannot be used in a straightforward way, however. Indeed, if one factors out the commutator subgroup of a free open normal subgroup $F$ then the $G/F$-module one obtains is, in general, not a free module. In order to apply Theorem \ref{infgenfree}, we first use
pro-$p$\ HNN-extension s to embed $G$ into a rather special \vfpp\ group\ $\tilde G$, in which, after factoring out the commutator of a free open normal subgroup $\tilde F$, the obtained $\tilde G/\tilde F$-module is free. With the aid of this module we prove Theorem \ref{main} for $\tilde G$ and apply the Kurosh subgroup theorem to deduce the result for $G$.
We note also that the second countability condition is essential: for larger cardinalities the result fails, and a counterexample can be found in \cite[Section 4]{HZ}. We use notation for profinite and pro-$p$ groups from \cite{RZ12010}.
\section{Modules}
We prove in this section a detection theorem for infinitely generated profinite permutation modules over finite $p$-groups. Theorem \ref{infgen} is inspired by Theorem 2 of the article \cite{weiss} by A. Weiss and our proof follows his closely. The theorems in this section are of independent interest.
Let $G$ be a finite group and $R$ a profinite commutative ring. A profinite permutation $RG$-lattice $U$ is a profinite $RG$-lattice (that is, a profinite $RG$-module that is free as a profinite $R$-module) with a profinite $R$-basis left invariant under the action of $G$ on $U$.
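For intuition only (a rank-two toy computed with integer coordinates, not a statement from this paper): the swap action of $C_2=\{1,g\}$ on a basis $\{e_1,e_2\}$ is a permutation lattice, and when $2$ is invertible in the coefficient ring (e.g. $R=\mathbb{Z}_p$ with $p$ odd) it decomposes as the trivial module $\langle e_1+e_2\rangle$ plus the sign module $\langle e_1-e_2\rangle$:

```python
# Toy permutation lattice: C_2 = {1, g} acts on the basis {e1, e2} by swapping.
def g(v):
    return (v[1], v[0])

e1, e2 = (1, 0), (0, 1)
trivial = (e1[0] + e2[0], e1[1] + e2[1])   # e1 + e2
sign = (e1[0] - e2[0], e1[1] - e2[1])      # e1 - e2

assert g(trivial) == trivial                # g fixes e1 + e2 (trivial module)
assert g(sign) == tuple(-x for x in sign)   # g negates e1 - e2 (sign module)
# When 2 is invertible, e1 = (trivial + sign)/2 and e2 = (trivial - sign)/2,
# so the two submodules span the lattice: trivial + sign is a decomposition
# into finitely generated indecomposables.
```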
A profinite permutation module is an inverse limit of permutation modules of finite rank (this can be seen by applying \cite[Lemma 5.6.4]{RZ12010} to the profinite $R$-basis of $U$), but this does not immediately tell us much about the structure of $U$ from a practical point of view. For instance, there exist profinite modules for finite rings that do not possess indecomposable summands. So before we begin our investigation of permutation $\mathbb{Z}_pG$-lattices, we provide the general Theorem \ref{limits are products}, which tells us in particular that a profinite permutation lattice is simply a product of indecomposable finitely generated permutation lattices.
We fix some notation that will remain in force for Proposition \ref{AddM equivalent to ProjE} and Theorem \ref{limits are products}. Let $R$ be a commutative local noetherian profinite ring, let $G$ be a finite group and let $\{M_1,\dots,M_s\}$ be a finite set of non-isomorphic finitely generated indecomposable left $RG$-modules. Set $M=\bigoplus_{i\in\{1,\dots, s\}}M_i$ and let $E = \tn{End}_{RG}(M)$ be the endomorphism ring of $M$. The category $\tn{add}(M)$ has objects all finite direct sums of direct summands of $M$ and morphisms as in the full module category. The category $\limproj{} \tn{add}(M)$ has objects all inverse limits of inverse systems in $\tn{add}(M)$. The full subcategory $\tn{Add}(M)$ of $\limproj{} \tn{add}(M)$ consists of all modules of the form $$\bigoplus_{i\in \{1,\dots,s\}}\prod_{\kappa_i}M_i$$ where $\kappa_i$ is a cardinal number. Finally, denote by $\tn{Proj}(E)$ the category of profinite projective \textit{right} $E$-modules.
The following proposition is dual to \cite[Proposition 2.1]{symondsstructure}, but since duality over the given coefficient rings can be rather complicated, we choose to present a direct proof.
\bp{}\label{AddM equivalent to ProjE} The categories $\tn{Add}(M)$ and $\tn{Proj}(E)$ are equivalent. \ep
\begin{proof} We make frequent use of how \tn{Hom} commutes with limits, products and sums, see \cite[Lemma 5.1.4]{RZ12010} (or more explicitly, \cite[\S 2.3]{symondspermcom1}). Throughout, the notation $(X,Y)$ will be used as short-hand for $\tn{Hom}_{RG}(X,Y)$. Define a functor $\Gamma:\tn{Add}(M)\to \tn{Proj}(E)$ by $U\mapsto (M,U)$ in the usual way. The image $(M,U)$ is indeed projective: if $U = \limproj{i} U_i$ for modules $U_i$ in $\tn{add}(M)$ then $(M,U) = \limproj{i} (M,U_i)$ and each $(M,U_i)$ is easily seen to be projective. By \cite[Lemma 5.4.1]{RZ12010}, to check the projectivity of $(M,U)$ we only need to complete diagrams of the form
$$ \xymatrix{
& (M,U)\ar[d] \\ X \ar@{->>}[r] & \,Y} $$ with the modules $X,Y$ finite, in which case the given map from $(M,U)$ factors through a finitely generated projective quotient, giving the map we require. We need to show that $\Gamma$ is essentially surjective and fully faithful.
To see that $\Gamma$ is essentially surjective, observe that an arbitrary projective $E$-module has the form $P = \bigoplus_{i}\prod_{\kappa_i}(M,M_i)$ by \cite[\S 2.5]{symondspermcom1}. But now $$\Gamma(\bigoplus_{i}\prod_{\kappa_i}M_i) = (M,\bigoplus_{i}\prod_{\kappa_i}M_i)\cong \bigoplus_{i}\prod_{\kappa_i}(M,M_i) = P,$$ as required.
We demonstrate next that $\Gamma$ is faithful. We need to show that if $\alpha:\prod U_i\to V$ is continuous and non-zero ($V$ some other object of $\tn{Add}(M)$), then the corresponding map $\Gamma(\alpha):(M,\prod U_i)\to (M,V)$ is non-zero. Consider the open subset $Y=V\backslash\{0\}$ of $V$. Then the inverse image $X$ of $Y$ under $\alpha$ is a non-empty (because $\alpha\neq 0$) open subset of $\prod U_i$. But if $X$ is open then it contains an element $x$ of the form $(u_i)$ where $u_i$ is 0 for almost all coordinates. Let the non-zero coordinates of $x$ be $i_1,\dots,i_n$. Now the map $$U_{i_1}\oplus\dots\oplus U_{i_n}\hookrightarrow \prod U_i \xrightarrow{\alpha} V$$ is non-zero since $\alpha(x)\neq 0$. Since this leftmost module is a direct sum (coproduct), the corresponding universal property gives that there must be some index ($i_1$, say) so that $$U_{i_1}\hookrightarrow U_{i_1}\oplus\dots\oplus U_{i_n}\hookrightarrow \prod U_i \xrightarrow{\alpha} V$$ is non-zero. We have a projection $M\twoheadrightarrow U_{i_1}$. Now $$\Gamma(\alpha)(M\twoheadrightarrow U_{i_1}\to \prod U_i)\neq 0$$ by construction, and hence $\Gamma(\alpha)\neq 0$.
It remains to show that $\Gamma$ is full. Given $\gamma:(M,U)\to (M,V)$, we need to find a continuous map $\alpha:U\to V$ such that $\gamma(-) = \alpha\circ-$. We split the work into two cases: first, when $R$ is finite, and second, when $R$ is general. Write $U = \prod_i U_i$ with each $U_i\in \{M_1,\dots,M_s\}$.
Case 1: Let $V$ be an object of $\tn{add}(M)$, hence finite, and fix $\gamma:(M,\prod_i U_i)\to (M,V)$ continuous. Then $(M,V)$ is finite since $M$ is finitely generated, so the kernel of $\gamma$ is an open subgroup of $(M,\prod_i U_i) \cong \prod_i (M,U_i)$. In particular, almost all the factors $(M,U_i)$ map to 0 under $\gamma$. Denote by $1,\dots,m$ the indices $i$ for which $(M,U_i)$ does not map to 0 under $\gamma$. Restricting $\gamma$ to this finite sum, we have a map $$\gamma':\bigoplus_{1,\dots,m}(M,U_i)\to (M,V).$$ Now, the version of the result for finitely generated modules (see e.g. \cite[Proposition 2.1]{symondsstructure}) tells us that there is a unique $\alpha':\bigoplus_{1,\dots,m}U_i\to V$ such that $\gamma' = \alpha'\circ-$. Extend $\alpha'$ to a continuous map $\alpha:\prod_i U_i\to V$ by setting it to be $\alpha'$ on the factors $1,\dots,m$ and 0 elsewhere. Now given $\rho = (\rho_i)\in \prod_i (M,U_i)$ we have $$\gamma(\rho) = \prod_i\gamma(\rho_i) = \gamma'(\rho_1 +\dots +\rho_m) + 0 = \alpha'\circ(\rho_1 +\dots +\rho_m) + 0 = \alpha\circ\rho,$$ as required.
Next let $V$ be an arbitrary object of $\tn{Add}(M)$, and write it as $V=\limproj{j} \{V_j, \varphi_{jk}\}$, with the $V_j$ objects of $\tn{add}(M)$. Again we consider some continuous homomorphism $\gamma:(M,U)\to (M,V)$. We have that $(M,V)\cong \limproj{j} \{(M,V_j), \varphi_{jk}\circ-\}$. For each $j$ the composite $\varphi_j\circ\gamma(-):(M,U)\to (M,V_j)$ is continuous, so by the previous paragraph we have a unique continuous map $\alpha_j:U\to V_j$ such that $\varphi_j\circ\gamma(-) = \alpha_j\circ-$. Note that for any $\rho:M\to U$ we have $$\alpha_j\rho = \varphi_j\gamma(\rho) = \varphi_{jk}\varphi_k\gamma(\rho) = \varphi_{jk}\alpha_k\rho,$$ so that $\alpha_j = \varphi_{jk}\alpha_k$. That is, the $\alpha_j$ provide a map of inverse systems $U\to \{V_j, \varphi_{jk}\}$, and hence a unique continuous map $\alpha:U\to V$. But now $\gamma = \alpha\circ-$ because for all $\rho:M\to U$: $$\gamma(\rho) = (\varphi_j\gamma(\rho)) = (\alpha_j\rho) = (\varphi_j\alpha\rho) = \alpha\rho.$$ This completes case 1.
Case 2: Let $\pi$ generate the maximal ideal of $R$. We are given $\gamma:(M,U)\to (M,V)$. For each $n\in \mathbb{N}$, this induces a map $\gamma_n:(M/\pi^nM, U/\pi^nU)\to (M/\pi^nM, V/\pi^nV)$. By Case 1, $\gamma_n = \alpha_n\circ-$, for some continuous map $\alpha_n:U/\pi^nU\to V/\pi^nV$.
But we also have that $U\cong\limproj{n} U/\pi^nU$ and $V\cong \limproj{n} V/\pi^nV$. Let $n\leqslant m$ and note that the diagram $$ \xymatrix{ U/\pi^mU \ar[r]^{\alpha_m}\ar[d] & V/\pi^mV\ar[d] \\ U/\pi^nU \ar[r]^{\alpha_n} & V/\pi^nV} $$ commutes. It follows that the $\alpha_n$ are a map of inverse systems, so yield a unique continuous map $\alpha:U\to V$. This map has the property that $\gamma = \alpha\circ-$. \end{proof}
\begin{theorem}\label{limits are products} The categories $\tn{Add}(M)$ and $\limproj{}\tn{add}(M)$ are equivalent. \end{theorem}
\begin{proof} Note that $E=\tn{End}_{RG}(M)$ is profinite because $M$ is finitely generated. The inclusion $\tn{Add}(M)\to \limproj{} \tn{add}(M)$ is clearly fully faithful, so we need only check that it is essentially surjective. Let $U = \limproj{I} U_i$ be an object of $\limproj{} \tn{add}(M)$. Apply the functor $\Gamma = \textnormal{Hom}_{RG}(M,-)$ to the inverse system for $U$ to get an inverse system of projective right $E$-modules. The inverse limit is $(M,U)$, which we observed in the previous proof is a projective right $E$-module. The theorem is now immediate from Proposition \ref{AddM equivalent to ProjE}. \end{proof}
\begin{corollary}\label{permprod} Let $G$ be a finite $p$-group, $R$ a profinite local commutative noetherian ring and let $U$ be a profinite permutation $R G$-lattice. Then $U$ can be expressed as a cartesian product of cyclic permutation modules
$$U=\prod_{H\leqslant G}\prod_{\kappa_H}R[G/H],$$
where $\kappa_H$ is a cardinal number. \end{corollary}
\begin{proof}There are finitely many isomorphism classes of indecomposable permutation modules (indexed by the conjugacy classes of subgroups of $G$), so the result follows from Theorem \ref{limits are products}. \end{proof}
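To illustrate the shape of the decomposition in Corollary \ref{permprod} in the simplest case (an aside for the reader; it is not used in what follows), take $G=C_p$. The only subgroups of $G$ are the trivial group and $G$ itself, so the indecomposable permutation modules are the regular module $R[C_p]$ and the trivial module $R$, and every profinite permutation $RC_p$-lattice has the form $$U\cong \prod_{\kappa_1} R[C_p]\times \prod_{\kappa_{C_p}} R$$ for suitable cardinals $\kappa_1$ and $\kappa_{C_p}$.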
We fix some notation that will remain in force throughout this section. Given a finite group $G$ with subgroup $H$, a commutative ring $R$ and a profinite $RH$-module $V$, the profinite $RG$-module $V\ind{G}$ is defined to be $RG\hat\otimes_{RH}V$, where $G$ acts on the left factor as with the usual induced module, and where ``$\hat\otimes$'' denotes the completed tensor product \cite[\S 5.5]{RZ12010}. The restriction of the $RG$-module $U$ to $RH$ is denoted $U\res{H}$. When $N$ is a normal subgroup of $G$ and $U$ is an $RG$-module, the $R[G/N]$-modules $U^N, U_N$ denote, respectively, the submodule of $N$-invariants of $U$, and the module of $N$-coinvariants $U/I_NU$ of $U$, where $I_N$ is the kernel of the augmentation map $\varepsilon:RN\to R$.
\bl{gp-rg}\label{inv iso coinv} Let $N$ be a normal subgroup of the finite group $G$, $R$ a profinite integral domain and $U$ a profinite $RG$-module that is free as an $RN$-module. The map $\phi:U\to U^N$ defined by $\phi(u):=\sum_{n\in N}nu$ is an epimorphism\ of $RG$-modules with kernel $I_NU$. Thus $U^N\cong U_N$. \end{lemma}
\begin{proof} The element $\sum_{n\in N}n$ belongs to the centre of $RG$, hence $\phi$ is a well-defined $RG$-module homomorphism. The kernel of $\phi$ clearly contains $I_NU$. We can check the other assertions after first restricting to $N$. Write $U\res{N}$ as a product of modules of the form $RN$, indexed by a set $I$. For a subset $X$ of $I$ let $U_X$ be the subproduct of $U$ indexed by $X$ and when $Y\subseteq X$ are finite subsets of $I$, let $\rho_{XY}: U_X\to U_Y$ be the obvious projection. Restricting the inverse system $\{U_X, \rho_{XY}\}$ to $U^N$ yields $U^N = \limproj{X}\{(U_X)^N,\rho_{XY}\}$. For each finite $X$ the obvious map $\phi_X:(U_X)_N\to (U_X)^N$ is the component at $X$ of an isomorphism of inverse systems \cite[Lemma 4]{bulletin}, and hence $\phi$ is an isomorphism.\end{proof}
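As a sanity check of Lemma \ref{inv iso coinv} (this special case is not needed later), take $G=N$ and $U=RN$ itself. For $u\in RN$ we have $\phi(u)=\sum_{n\in N}nu=\bigl(\sum_{n\in N}n\bigr)u=\varepsilon(u)\sum_{n\in N}n$, so the image of $\phi$ is $U^N=R\cdot\sum_{n\in N}n\cong R$, its kernel is exactly the augmentation ideal $I_N=I_NU$, and indeed $U_N=RN/I_N\cong R\cong U^N$.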
Let $\zeta$ be a primitive $p$th root of unity and $\mathbb{Z}_p(\zeta)$ the corresponding totally ramified extension of $\mathbb{Z}_p$ of degree $p-1$. Let $\pi$ be a generator of the unique maximal ideal of $\mathbb{Z}_p(\zeta)$. The following may be viewed as a special case of a generalization of \cite[Theorem 3]{weiss} to infinitely generated modules.
\begin{lemma}\label{rootunity} Let $G$ be a finite $p$-group and $V$ a $\mathbb{Z}_p(\zeta)G$-lattice. Suppose that $V/\pi V$ is a free $\mathbb{F}_pG$-module. Then $V$ is a free $\mathbb{Z}_p(\zeta)G$-module.\end{lemma}
\begin{proof} Let $\phi:V\longrightarrow V/\pi V$ be the natural epimorphism. By \cite[Proposition 2.2.2]{RZ12010}, $\phi$ admits a continuous section $\delta: V/\pi V \longrightarrow V$ with $\delta(\pi V)=0$. Consider a profinite space $\Omega$ of free generators of $V/\pi V$ converging to $0$. Put $\mathcal{X}=\delta\left( \Omega \right)$. Let $A$ be a free pro-$p$ $\mathbb{Z}_p(\zeta)G$-module on $\mathcal{X}$ and let $f:A\longrightarrow V$ be the $\mathbb{Z}_p(\zeta)G$-homomorphism induced by sending $\mathcal{X}$ identically to its copy in $V$. Then as a pro-$p$ $\mathbb{Z}_p(\zeta)$-module, $A$ is free pro-$p$ on the basis $G\mathcal{X}$. Since $V/\pi V$ is a free $\mathbb{F}_p G$-module on $\Omega$, it is a free $\mathbb{F}_p$-module on $G\Omega$.
Notice that as $\mathbb{Z}_p(\zeta)$-modules, the radicals of $A, V$ respectively are $\pi A, \pi V$, and that the map $\bar f: A/\pi A\to V/\pi V$ is an isomorphism, so that $f$ is an isomorphism by \cite[Lemma 7.4.4]{W}, as required. \end{proof}
As in \cite{weiss}, a finitely generated generalized permutation $\mathbb{Z}_p(\zeta)G$-lattice is a finite direct sum of modules of the form $\varphi\ind{G}$, where ``$\varphi$'' represents a rank one $\mathbb{Z}_p(\zeta)H$-lattice, $H$ some subgroup of $G$, with action from $H$ coming only from a group homomorphism $\varphi:H\to \langle\zeta\rangle$. A profinite generalized permutation $\mathbb{Z}_p(\zeta)G$-lattice is an inverse limit of finite rank generalized permutation $\mathbb{Z}_p(\zeta)G$-lattices. Since $G$ is finite, there are only finitely many isomorphism types of indecomposable finite rank generalized permutation $\mathbb{Z}_p(\zeta)G$-lattices. Thus, by Theorem \ref{limits are products}, profinite generalized permutation $\mathbb{Z}_p(\zeta)G$-lattices are simply products of indecomposable finite rank generalized permutation $\mathbb{Z}_p(\zeta)G$-lattices.
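By way of illustration (this example is not used in the sequel): for $H=G=\langle g\rangle$ cyclic of order $p$ and $\varphi:G\to\langle\zeta\rangle$ an isomorphism, the lattice $\varphi\ind{G}$ is $\mathbb{Z}_p(\zeta)$ itself, of rank one, with $g$ acting as multiplication by $\zeta$; taking instead $H$ trivial yields the regular lattice $\mathbb{Z}_p(\zeta)G$.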
We give one more technical lemma before proving the main result of this section.
\begin{lemma}\label{indexing sets coincide} Let $W$ be a second countable profinite $\mathbb{F}_pG$-module that can be expressed as a product of finitely generated indecomposable submodules. Let $\prod_{s\in S}\prod_{\kappa_s}W_s$ and $\prod_{s\in S}\prod_{\nu_s}W_s$ be two continuous decompositions of $W$, where the $W_s$ are pairwise distinct finitely generated indecomposable $\mathbb{F}_pG$-modules and $\kappa_s, \nu_s$ are cardinals. Then for each $s\in S$ we have $\kappa_s=\nu_s$. \end{lemma}
\begin{proof} Fix $s$, suppose that $\kappa_s=n$ for some finite number $n$ and suppose for contradiction that $\nu_s$ is strictly greater than $n$. Then, by considering the second decomposition, we obtain a continuous and split projection onto a product of $n+1$ copies of $W_s$. Let $X$ be the kernel of this projection. Since $X$ is open in $W$, we can find some cofinite subset $I$ of the indexing set for the first decomposition such that the product $Y$ indexed by $I$ is contained in $X$. We obtain a continuous surjection $W/Y\to W/X$, which splits since the map $W\to W/X$ splits. Thus $W/X$ is a direct summand of $W/Y$. But the former is a product of $n+1$ copies of $W_s$, while the latter has at most $n$ direct summands isomorphic to $W_s$, contradicting the Krull-Schmidt theorem for finitely generated $\mathbb{F}_pG$-modules.
Meanwhile, if $W_s$ appears infinitely many times in both decompositions, then $\kappa_s = \nu_s = \aleph_0$ since $W$ is second countable. \end{proof}
\begin{theorem}\label{infgen} Let $G$ be a finite $p$-group and let $U$ be a second countable profinite $\mathbb{Z}_p G$-lattice. Suppose that there is a central subgroup $N$ of $G$ of order $p$ such that
\begin{itemize}
\item The restriction $U\res{N}$ is a free $\mathbb{Z}_p N$-module,
\item $U^N$ is a permutation $\mathbb{Z}_p[G/N]$-module,
\item $U/U^N$ is a generalized permutation $\mathbb{Z}_p(\zeta)G$-lattice.
\end{itemize} Then $U$ itself is a permutation $\mathbb{Z}_pG$-lattice. \end{theorem}
\begin{proof} Let $N = \langle n\rangle$. Consider the ideals $I_N = \ker(\varepsilon:\mathbb{Z}_pN\to \mathbb{Z}_p) = \langle n-1, n^2-1,\cdots, n^{p-1}-1\rangle$ and $\mathbb{Z}_pN^N = (1+n+\cdots+n^{p-1})\mathbb{Z}_p$ of $\mathbb{Z}_pN$. As in \cite{weiss}, we have a pullback diagram
$$ \xymatrix{ \mathbb{Z}_pN \ar[r]\ar[d] & \mathbb{Z}_pN/\mathbb{Z}_pN^N\ar[d] \\ \mathbb{Z}_pN/I_N \ar[r] & \mathbb{Z}_pN/(I_N+\mathbb{Z}_pN^N).} $$ Fix an isomorphism $\psi:N\to \langle\zeta\rangle$. Via $\psi$ we have an isomorphism $\mathbb{Z}_pN/\mathbb{Z}_pN^N\cong \mathbb{Z}_p(\zeta)$, and the above pullback can be expressed as
$$ \xymatrix{ \mathbb{Z}_pN \ar[r]\ar[d] & \mathbb{Z}_p(\zeta)\ar[d] \\ \mathbb{Z}_p \ar[r] & \,\mathbb{F}_p.} $$
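As an aside for the reader (not needed in the argument), consider the case $p=2$: then $\zeta=-1$ and $\mathbb{Z}_2(\zeta)=\mathbb{Z}_2$, and the pullback above recovers the familiar description of $\mathbb{Z}_2N$ for $N=\langle n\rangle$ of order $2$. Under the two quotient maps $n\mapsto 1$ and $n\mapsto -1$, the ring $\mathbb{Z}_2N$ is identified with the fibre product $$\{(a,b)\in\mathbb{Z}_2\times\mathbb{Z}_2\mid a\equiv b \pmod 2\}.$$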
Apply the functor $-\widehat{\otimes}_{\mathbb{Z}_pN} U$ to the above diagram. Since $U\res{N}$ is free, we can write it as an inverse limit of finite rank free $\mathbb{Z}_pN$-modules. Since inverse limits are functorial and commute naturally with completed tensor products, the argument in \cite{weiss} applied to the finite rank quotients shows that our diagram is an inverse limit of pullbacks, and is hence itself the pullback $$ \xymatrix{ U \ar[r]\ar[d] & U/U^N\ar[d] \\ U_N \ar[r] & \overline{U_N} = \overline{U/U^N}.} $$
By \preflemma{gp-rg} we have that $U_N\cong U^N$, so that $U_N$ is a permutation module by assumption, and hence so too is $\overline{U_N}$.
Thus we have a permutation module $U_N$ and a generalized permutation module $U/U^N$, which by Theorem \ref{limits are products} are products of indecomposable modules of the same type. The indexing sets of the indecomposable summands of these modules both biject naturally onto corresponding indexing sets of indecomposable summands of $\overline{U_N}$, and hence by Lemma \ref{indexing sets coincide} their indexing sets coincide. Now as in \cite{weiss}, we can write $$U_N \cong \prod_r \mathbb{Z}_p\ind{G}_{G_r},\,\, U/U^N \cong \prod_r\varphi_r\ind{G}_{G_r}$$ where $\varphi_r:G_r\to \langle \zeta\rangle$ is a group homomorphism whose restriction to $N$ is $\psi$. Let $H_r=\ker(\varphi_r)$. Again following \cite{weiss} we obtain that
$$ \xymatrix{ \mathbb{Z}_p\ind{G}_{H_r} \ar[r]\ar[d] & \varphi_r\ind{G}_{G_r}\ar[d] \\ \mathbb{Z}_p\ind{G}_{G_r} \ar[r] & \mathbb{F}_p\ind{G}_{G_r}} $$ is a pullback. The product of these pullbacks as $r$ varies is itself a pullback diagram, whose upper left component is the permutation module $L = \prod_r \mathbb{Z}_p\ind{G}_{H_r}$. We want to show that $L\cong U$. The discussion above gives isomorphisms $h:L_N\to U_N$ and $f:L/L^N\to U/U^N$ (of $\mathbb{Z}_pG$ and $\mathbb{Z}_p(\zeta)G$-modules respectively). Let $\overline{h},\overline{f}$ denote the isomorphisms $\overline{L_N}\to \overline{U_N}$ induced by $h,f$ respectively, so that $\overline{h}^{-1}\overline{f}$ is an $\mathbb{F}_pG$-module automorphism of $\overline{L_N}$. By \cite[Lemma 3.23]{symondspermcom1} we can lift this automorphism to a $\mathbb{Z}_pG$-module endomorphism $k$ of $L_N$. Since $L_N$ is a free $\mathbb{Z}_p$-module, \cite[7.4.4, 2.5.2]{W} tells us that $k$ is an automorphism. Replace the isomorphism $h$ by the isomorphism $hk$ and note that $\overline{hk}=\overline{f}$. Now a diagram chase gives the required isomorphism $L\to U$, completing the proof. \end{proof}
\begin{proof}[of Theorem \ref{infgenfree}] Suppose first that $N$ has order $p$. We maintain the notation from the proof of Theorem \ref{infgen}. The second hypothesis implies that $\overline{U^N}$ is a free $\mathbb{F}_p G$-module, and now Lemma \ref{rootunity} tells us that $U/U^N$ is a free $\mathbb{Z}_p(\zeta)G$-lattice. The argument in Theorem \ref{infgen} thus shows that $U$ is a pullback of free modules (the second countable hypothesis is not required because we do not need to use Lemma \ref{indexing sets coincide}). The result now follows immediately from the uniqueness of pullbacks.
We now proceed by induction on the order of $N$. Let $K$ be a normal subgroup of $G$ contained as a subgroup of index $p$ in $N$. The module $U_K$ has the properties that $U_K\res{N/K}$ and $(U_K)^{N/K}\cong U^N$ (see Lemma \ref{inv iso coinv}) are free $\mathbb{Z}_p[G/K]$-modules, so that $U_K$ ($\cong U^K$) is free by the first paragraph. Since the order of $K$ is less than the order of $N$, by induction we see that $U$ is free. \end{proof}
\section{HNN-embedding}
For the convenience of the reader we recall some concepts and terminology from \cite{HZ07}, adapting the definitions to the category of pro-$p$ groups.
\bd{boolean sp} A {\em boolean} or {\em profinite} space is, by definition, an inverse limit of finite discrete spaces, i.e. a compact, Hausdorff, totally disconnected topological space. Morphisms in the category of boolean spaces are continuous maps. \end{definition}
A profinite space $X$ with a profinite group $G$ acting continuously on it will be called a $G$-{\em space}.
\bd{TG}\label{finite subgroups} An inverse system of finite groups with projective limit $G$ induces an inverse system of the sets of subgroups of the groups in the inverse system, whose limit is the space of closed subgroups of $G$. If $G$ is virtually torsion free, the subspace $\Fin G$ of non-trivial finite subgroups of $G$ is closed, hence inherits a natural profinite topology (the {\em subgroup topology}). Equipped with this topology, $\Fin G$ with $G$ acting by conjugation becomes a $G$-space.\end{definition}
We recall the notion of a pro-$p$ HNN-group\ as defined in \pcite{HZ07} for the pro-${\cal C}$ case.
\bd{sheaf} A {\em sheaf} of pro-$p$ groups (over a profinite space $X$) is a triple $({\cal G},\gamma,X)$, where ${\cal G}$ and $X$ are profinite spaces, and $\gamma$ is a continuous map from ${\cal G}$ onto $X$, satisfying the following two conditions: \begin{rmenumerate} \item for every $x\in X$, the fiber ${\cal G}(x)=\gamma^{-1}(x)$ over
$x$ is a pro-$p$ group; \item if ${\cal G}^2$ denotes the subspace of ${\cal G}\times{\cal G}$ consisting of
pairs $(g,h)$ such that $\gamma(g)=\gamma(h)$, then the map
$\mu_{{\cal G}}:{\cal G}^2\longrightarrow {\cal G}$, defined by $\mu_{{\cal G}}(g,h):=
g^{-1}h \in {\cal G}(\gamma(g))={\cal G}(\gamma(h))\subseteq{\cal G}$,
is continuous. \end{rmenumerate}
If there is no danger of confusion we shall write $({\cal G},X)$ instead of $({\cal G},\gamma,X)$.
A {\em morphism} of sheaves of pro-$p$ groups $(\alpha,\bar\alpha):({\cal G},\gamma,X) \rightarrow ({\cal H},\eta,Y)$ is a pair of continuous maps $\alpha: {\cal G} \rightarrow {\cal H}$, $\bar\alpha:X\rightarrow Y$ such that the diagram
$$\xymatrix{ {\cal G}\ar@{->}^\alpha[rr]\ar@{->}^\gamma[d] & &{\cal H}\ar@{->}^\eta[d]\\ X\ar@{->}^{\bar\alpha}[rr] & &Y }$$ is commutative and, for all
$x\in X$, the restriction $\alpha_x:=\alpha_{|{\cal G}(x)}$ of $\alpha$ to the fiber ${\cal G}(x)$ is a homomorphism from ${\cal G}(x)$ to ${\cal H}(\bar\alpha(x))$.
In the special case when $Y=\{y\}$ is a single element set, we obtain with $H:={\cal H}(y)$ the definition of a {\em fiber morphism} $\alpha:{\cal G}\longrightarrow H$, from the sheaf ${\cal G}$ of pro-$p$ groups to the pro-$p$ group $H$. We shall say that $\alpha$ is a {\em fiber monomorphism} if $\alpha_x$ is injective for every $x\in X$.\end{definition}
The simplest example of a sheaf of pro-$p$ groups is that of the {\em constant sheaf}
$(G\times X,{\rm pr}_X,X)$, where $G$ is some pro-$p$ group\ and ${\rm pr}_X:G\times X\longrightarrow X$ is the projection. For every $x\in X$, the fiber $(G\times X)(x)=G\times \{ x\}$ is isomorphic to $G$.
\bd{free product} Given a sheaf $({\cal G},X)$ of pro-$p$ groups, the free pro-$p$ product $G = \coprod_{x\in X}{\cal G}(x)$ is a pro-$p$ group $G$ together with a fiber homomorphism $\upsilon:({\cal G},X)\longrightarrow G$, having the following universal property: for every pro-$p$ group\ $K$ and every fiber homomorphism $\beta:({\cal G},X)\rightarrow K$, there exists a unique homomorphism \[\omega:G\longrightarrow K,\] such that $\beta=\omega\upsilon$.\end{definition}
When considering free pro-$p$ products, we make frequent use of the following:
\bt[\cite[Theorem 9.1.12]{RZ12010} and \cite[Theorem 4.2]{RZ2}]\label{conjugacy} Let $G=\coprod_{i=1}^n G_i$ be a free profinite (pro-$p$) product. Then $G_i\cap G_j^g=1$ whenever $i\neq j$ or $g\not\in G_j$.
Every finite subgroup of $G$ is conjugate to a subgroup of one of the factors $G_i$. \end{theorem}
Next we state the pro-$p$ analogue of the concept of an HNN-group, given in \cite{HZ07} for pro-${\cal C}$-groups.
\bd{HNN-grp} Let $H$ be a pro-$p$ group, let $\partial_0,\partial_1:({\cal G},T)\rightarrow H$ be fiber monomorphisms and let $K$ be a pro-$p$ group. A specialization into $K$ consists of a homomorphism $\beta:H\longrightarrow K$ and a continuous map $\beta_1: T\longrightarrow K$ such that for all $t\in T$ and $g\in {\cal G}(t)$ the equality $\beta(\partial_{0}(g))= \beta_1(t)^{-1}\beta(\partial_{1}(g))\beta_1(t)$ is valid. Such a specialization into $K$ will be denoted $(\beta,\beta_1):(H,{\cal G},T)\to K$.
The pro-$p$ HNN-group is then a pro-$p$ group $G$ together with a specialization $(\upsilon,\upsilon_1):(H,{\cal G},T)\longrightarrow G$, with the following universal property: for every pro-$p$ group\ $K$ and every specialization $(\beta,\beta_1):(H,{\cal G},T)\rightarrow K$, there exists a unique homomorphism \[\omega:G\longrightarrow K,\] such that $\omega\upsilon_1=\beta_1$ and $\beta=\omega\upsilon$. We shall denote the pro-$p$ HNN-group by ${\it HNN}(H,{\cal G},T)$. The group $H$ is called the {\em base group}, and elements $t\in T$ are called the {\em stable letters}. \end{definition}
Note that for $T$ a singleton set, identifying ${\cal G}(t)$ with its image under $\partial_0$ and setting $f:=\partial_1$, the definition of a pro-$p$-{\it HNN}\ extension given in \cite[Section 9.4]{RZ12010} is recovered. By \cite[Proposition 9]{HZ07}, the pro-$p$ HNN-group $G={\it HNN}(H,{\cal G},T)$ exists and is unique.
A pro-$p$ HNN-group is a special case of the fundamental pro-$p$ group $\Pi_1({\cal G},\Gamma)$ of a profinite graph of pro-$p$ groups $({\cal G},\Gamma)$ as introduced in \cite{ZM90}. Namely, a pro-$p$ HNN-group can be thought of as $\Pi_1({\cal G},\Gamma)$, where $\Gamma$ is a connected profinite graph having just one vertex.
For the rest of this section let $G$ be a second countable \vfpp\ group, and fix an open free pro-$p$\ normal subgroup $F$ of $G$ of minimal index. Also suppose that the centralizer $C_F(t)=\{1\}$ for every torsion element $t\in G$. Let $K:=G/F$ and form the free pro-$p$\ product $G_0:=G\amalg K$. Let $\psi:G\to K$ denote the canonical projection. It extends to an epimorphism\ $\psi_0:G_0\to K$, by sending $g\in G$ to $gF\in K$ and each $k\in K$ identically to $k$, and extending to $G_0$ using the universal property of the free pro-$p$\ product. Notice that the kernel $L$ of $\psi_0$ is an open subgroup of $G_0$ and, since $L\cap G=F$ and $L\cap K=\{1\}$, the pro-$p$\ version of the Kurosh subgroup theorem\, \cite[Theorem 4.3]{M89} tells us that $L$ is free pro-$p$. By \cite[Lemma 5.6.7]{RZ12010} there exists a continuous section $Fin(G)/G\longrightarrow Fin(G)$. Define the subspace $\Sigma$ of $Fin(G_0)$ to be the union of the image of this section with the subgroup $K$. Since by Theorem \ref{conjugacy} all finite subgroups of $G_0$ can be conjugated either into $G$ or into $K$, the natural map $Fin(G_0)\longrightarrow Fin(G_0)/G_0$ restricted to $\Sigma$ is a homeomorphism. Define a sheaf $({\cal G},\Sigma)$ by setting ${\cal G}=\{(g,S)\in G_0\times \Sigma\mid g\in S\}$ and defining $\gamma:{\cal G}\longrightarrow \Sigma$ to be the restriction to ${\cal G}$ of the natural projection $G_0\times \Sigma\longrightarrow \Sigma$. Define an equivalence relation on $\Sigma$ by putting $S_1\sim S_2$ if $S_1$ and $S_2$ are contained in the same maximal finite subgroup of $G_0$. This defines an equivalence relation since maximal finite subgroups have trivial intersection
by \cite[Lemma 9]{bulletin}, and it is easy to see that $\sim$ is closed (indeed if $K_1, K_2$ are maximal finite subgroups containing $S_1$ and $S_2$ respectively and such that $K_1\cap K_2=1$, then there is a finite quotient of $G_0$ where the images of $K_1$ and $K_2$ intersect trivially as well). Put $T=\Sigma/\sim$. Given $t\in T$, denote by $K_t$ the unique maximal finite subgroup corresponding to $t$ (i.e. if $S\in \Sigma$ is a representative of an equivalence class $t$ then $K_t$ is the unique maximal finite subgroup containing $S$). Let $({\cal K},T)$ be the subset of $G_0\times T$ defined by $({\cal K},T)=\{(k,t)\mid k\in K_t, t\in T\}$.
\begin{lemma} $({\cal K},T)$ is a subsheaf of $G_0\times T$.\end{lemma}
\begin{proof} We show only that ${\cal K}$ is closed in $G_0\times T$, since the rest follows easily. Let $(g,t)\in G_0\times T$ be such that $g\not\in K_t$. Note that the set of non-trivial torsion elements is closed in $G_0$, because they are of bounded order and cannot have $1$ as an accumulation point since $G$ is virtually torsion-free. Therefore there is an open torsion-free subgroup $U$ such that $(gU\times T)\cap {\cal K}=\emptyset$, as needed. \end{proof}
With notation as above we define the pro-$p$ HNN-group\ $\tilde G=HNN(G_0,{\cal K},T)$ by setting $\partial_0$ to
be the fiber homomorphism that sends each $K_t$ identically to its
copy in $G$ and setting $\partial_1(k_t)=\psi(k_t)$ for every $k_t\in K_t$. Here $G_0$ is the base group, the $K_t$ are associated subgroups, and $T$ is a set of stable letters in the sense of \prefdef{HNN-grp}.
The objective of this section is to show that the centralizers of torsion elements of $\tilde G$ are finite. It was already proved in \cite[Theorem 12]{HZ07} that $G$ embeds into $\tilde G$ and that the torsion elements of $\tilde G$ are conjugate to elements of $K$ (in fact, \cite[Theorem 12]{HZ07} has a slightly different statement, namely that the space $T$ is a subspace of $\Fin G$, but this was not used in the proof). Thus, in the following lemma we need to prove only item (iv), since items (i)-(iii) are the subject of the proof of \cite[Theorem 12]{HZ07}.
\bl{centr} Let $\tilde G= {\it HNN}(G_0,K_t,\phi_t,T)$ be as explained above, and let $\tilde F$ be the kernel of the map $\tilde G\to K$ induced from the universal property of pro-$p$ HNN groups -- an open normal free pro-$p$ subgroup of $\tilde G$. Then
\begin{enumerate}
\item[(i)] $G_0$ and therefore $G$ canonically embed in $\tilde G$.
\item [(ii)] $\tilde G = \tilde F\rtimes K$ is a semidirect product.
\item [(iii)] In $\tilde G$ every finite subgroup is conjugate to a subgroup of $K$.
\item[(iv)] $C_{\tilde F}(g)=1$ for every torsion element $g\in \tilde G$.\end{enumerate}
\end{lemma}
\begin{proof} (iv) There is a standard pro-$p$ tree $S:=S(\tilde G)$ associated to $\tilde G:= {\it HNN}(G_0,K_t,\phi_t,T)$ on which $\tilde G$ acts naturally such that the vertex stabilizers are conjugates of $G_0$ and each edge stabilizer is a conjugate of some $K_t$ (cf. \pcite{RZ2} and \cite[\S 3]{ZM90}).
\noindent{\em Claim: Let $e_1,e_2$ be two edges of $S$ with a common vertex $v$ which is not the terminal vertex of both of them. Then the intersection of the stabilizers $\tilde G_{e_1}\cap\tilde G_{e_2}$ is trivial. }
By translating $e_1,e_2, v$ if necessary we may assume that $G_0$ is the stabilizer of $v$. Then we have two cases:
1) $v$ is the initial vertex of both $e_1$ and $e_2$. Then $\tilde G_{e_1}=K_t^g$ and $\tilde G_{e_2}=K_{t'}^{g'}$ with $g,g'\in G_0$, where either $t\neq t'$ or $g\not\in K_tg'$; by construction of $\tilde G$ we have $K_t^g\neq K_{t'}^{g'}$ if $t\neq t'$. Suppose that $K_t^g\cap K_{t'}^{g'}\neq\{1\}$. Then, since $G_0=G\amalg K$, we may apply Theorem \ref{conjugacy} to deduce the existence of $g_0\in G_0$ with $K_t^{gg_0}\cap K_{t'}^{g'g_0}\le G_0$. By \cite[Lemma 9]{bulletin}, two maximal finite subgroups of $G_0$ have trivial intersection. So we have $K_t^g\cap K_{t'}^{g'}=\{1\}$, as needed.
2) $v$ is the terminal vertex of $e_1$ and the initial vertex of $e_2$. Then $\tilde G_{e_1}=K^g$ and $\tilde G_{e_2}=K_t^{g'}$ for $g,g'\in G_0$ so they intersect trivially by the definition of $G_0$ and Theorem \ref{conjugacy}. So the claim holds.
Now pick a torsion element $x\in \tilde G$ and a non-trivial element $f\in\tilde F$ with $x^f=x$. Let $e$ be an edge of $S$ stabilized by $x$. Then $fe$ is also stabilized by $x$ and, since by \cite[Theorem 3.7]{RZ2} the fixed set $S^x$ is a subtree, the path $[e,fe]$ is fixed by $x$ as well. Note that a common vertex of $e$ and $fe$ (if it exists) can be terminal to only one of them, since $f$ cannot stabilize any vertex. So $[e,fe]$ has at least one vertex which is initial for its incident edge. Thus by the claim $x=1$.\end{proof}
\section{Proof of the main theorem}
We require the following before turning to the key proposition.
\begin{lemma}\label{group-module connection} Let $G$ be a semidirect product of a free \pp\ group\ $F$ with a finite $p$-group\ $K$ such that every torsion element is conjugate to a subgroup of $K$. Then \begin{enumerate}
\item[(i)] $G=(K_G)\rtimes F_0$, where $K_G$ is the normal closure of $K$ in $G$ and $F_0$ is free pro-$p$.
\item[(ii)] The natural homomorphism $G\longrightarrow F_0$
induces a natural epimorphism $M=F/[F,F]\longrightarrow
F_0/[F_0,F_0]$ of $\mathbb{Z}_pK$-modules with kernel $I_KM$.
\end{enumerate}
\end{lemma}
\begin{proof} For any group $X$ we denote by $\tor{X}$ the set of torsion elements of $X$. By \cite[Proposition 1.7]{Z2003} $\torfactor G$ is free pro-$p$, so we can fix a section $F_0$ of $\torfactor G$ in $G$, proving (i). In fact, we can choose $F_0$ to live inside $F$, which we may do since the restriction of $G\longrightarrow \torfactor G$ to $F$ is an epimorphism. Thus we can write $U:=F/[F,F]=F_0/[F_0,F_0]\oplus L$ as a direct sum of $\mathbb{Z}_p$-modules, for some $\mathbb{Z}_p$-submodule $L$. We claim that a profinite $\mathbb{Z}_p$-basis of $F_0/[F_0,F_0]$ is a profinite free $\mathbb{Z}_pK$-basis of $U$. Note that $\torgp{G}$ coincides with the normal closure $K_G$ of $K$ in $G$. Thus we have the following commutative diagram $$\xymatrix{G\ar[r]\ar[d]& G/K_G\ar[d]\\
\bar G=F/[F,F]\rtimes K\ar[r]& \bar G/K_{\bar G}}$$ of natural epimorphisms. The lower horizontal map restricted to $F/[F,F]$ has kernel $[F/[F,F],K]$, which coincides with $I_KU$ ($I_K$ the augmentation ideal of $\mathbb{Z}_pK$) when we view $U=F/[F,F]$ as a $\mathbb{Z}_pK$-module.\end{proof}
\bp{model1} Let $G$ be a semidirect product of a free \pp\ group\ $F$ with a finite $p$-group\ $K$ such that every torsion element is conjugate to a subgroup of $K$. Suppose that $C_F(t)=\{1\}$ holds for every torsion element $t\in G$. Then $G=K\amalg F_0$ for a free pro-$p$\ factor $F_0$. \ep
\begin{proof} Suppose that the proposition is false. Then there is a counterexample with $K$ of minimal order. By \cite[Theorem 1.2]{Z2003}, when $K$ has order $p$ we have that $G=\coprod_{x\in X} (C_p\times H_x)\amalg H_0$, where $H_x, H_0$ are free pro-$p$, so the proposition holds in this case; we can therefore suppose that $K$ is of order at least $p^2$.
Let $H$ be a central subgroup of $K$ of order $p$. Then $F\rtimes H$ satisfies the premises of the proposition and hence $F\rtimes H$ is of the form $H\amalg F_1$ for some free factor $F_1$. Let a bar denote passage to the quotient modulo the normal closure $H_G$ of $H$ in $G$. By \cite[Proposition 1.7]{Z2003}
$\bar F$ is free pro-$p$ and $\overline{\tor{G}}=\tor{\bar
G}$. It follows that $\bar G=\bar F\rtimes (K/H)$ and
every torsion element in $\bar G$ can be lifted to a conjugate of an element in $K$. So every torsion element in $\bar G$ is conjugate to an element of $\bar K=K/H$ and we deduce from Theorem \ref{conjugacy} that $H_G=\torgp{FH}$. Thus by the minimality of $K$ we have \be{FH} \bar G=\bar K\amalg \bar F_0 \end{equation} for some free pro-$p$ group $\bar F_0$.
Consider $U:=F/[F,F]$ as a $\mathbb{Z}_pK$-module and let $I_H$ denote the augmentation ideal of $\mathbb{Z}_pH$. Since $F\rtimes H=H\amalg F_1=\left(\coprod_{h\in H}F_1^h\right)\rtimes H$, $H$ acts by permuting the free factors $F_1^h$, so that $U$ is a free $\mathbb{Z}_pH$-module. Passing in \prefeq{FH} to the quotient modulo the commutator subgroup of $\bar F=(F_0)_{\bar G}$, one can see using Lemma \ref{group-module connection} (ii) that $U/I_HU$ is a free $\mathbb{Z}_p\bar K$-module.
Now an application of Lemma \ref{inv iso coinv} and Theorem \ref{infgenfree} shows that $U$ itself is a free $\mathbb{Z}_pK$-lattice.
By part (ii) of Lemma \ref{group-module connection}, a closed free basis of $F_0/[F_0,F_0]$ is a free $\mathbb{Z}_pK$-basis of $U$; this follows from the fact that a closed subset of $U$ is a free $\mathbb{Z}_pK$-basis if and only if its image in $U/I_KU$ is a free $\mathbb{Z}_p$-basis.
Consider $\tilde G:=K\amalg \tilde F_0$ with $\tilde F_0\cong \torfactor G$, and define a homomorphism\ $\phi:\tilde G\to G$ by sending $K$ to $K$, $\tilde F_0$ to $F_0$ and extending to $\tilde G$ using the universal property of the free pro-$p$ product. The images under the Frattini quotient of $K$ and $F_0$ together generate the image of $G$ (since the image of $K$ is equal to the image of $\torgp{G}$), and hence $G$ is generated by $K$ and $F_0$. Thus $\phi$ is an epimorphism. Let $\tilde F=\phi^{-1}(F)$, and note that as the subgroup $\tilde F$ of $\tilde G$ is the kernel of a homomorphism onto $K$ that restricts to the identity on $K$, we have $\tilde G = \tilde F\rtimes K$. Since $\tilde F\rtimes K=K\amalg \tilde F_0=\left(\coprod_{k\in K}\tilde F_0^k\right)\rtimes K$, $K$ acts by permuting the free factors $\tilde F_0^k$, so that $\tilde F/[\tilde F,\tilde F]$ is a free $\mathbb{Z}_pK$-module with basis the image of the basis of $\tilde F_0$ in $\tilde F/[\tilde F,\tilde F]$. On the other hand from the paragraphs above we know that $U$ is a free module with basis the image of the basis of $F_0$ in $U$. Therefore the kernel of $\phi$ must be contained in $[\tilde F,\tilde F]$. But since $\tilde F$ and $F$ are free this implies $\tilde F\cap \ker\phi=\{1\}$. Since $K\cap\ker\phi=\{1\}$, we conclude that $\phi$ is an isomorphism, as required. \end{proof}
\begin{proof}[of Theorem \ref{main}] First form $\tilde G$ as described before \preflemma{centr}, in order to embed $G$ into a group $\tilde G$ whose finite subgroups have finite centralizers (by \preflemma{centr}), and, moreover, which has a single conjugacy class of maximal finite subgroups. By \prefprop{model1} the group $\tilde G$ is of the form $\tilde G=K\amalg F_0$ where $K$ is finite and $F_0$ is free pro-$p$. One now deduces the result from the pro-$p$ version of the Kurosh subgroup theorem \cite[Theorem 4.3]{M89}. \end{proof}
\section{Acknowledgments}
We warmly thank Peter Symonds, whose input has been of great help to this project.
\end{document} |
\begin{document}
\title{Probabilistic Stable Functions on Discrete Cones \\ are Power Series (long version).}
\author{Rapha\"elle Crubill\'e}
\maketitle \begin{abstract}
We study the category $\mathbf{Cstab_m}$ of measurable cones and measurable stable functions\textemdash a denotational model of a higher-order language with \emph{continuous} probabilities and full recursion~\cite{pse}. We look at $\mathbf{Cstab_m}$ as a model for \emph{discrete} probabilities, by showing the existence of a full and faithful functor, preserving the cartesian closed structure, which embeds probabilistic coherence spaces\textemdash a \emph{fully abstract} denotational model of a higher-order language with full recursion and \emph{discrete} probabilities~\cite{EPT15}\textemdash into $\mathbf{Cstab_m}$. The proof is based on a generalization of Bernstein's theorem from real analysis, which allows us to see stable functions between discrete cones as generalized power series. \end{abstract}
\section{Introduction}
Probabilistic reasoning allows us to describe the behavior of systems with inherent uncertainty, or about which we have incomplete knowledge. To handle statistical models, one can employ probabilistic programming languages: they give us tools to build, evaluate and transform such models. While for some applications it is enough to consider \emph{discrete} probabilities, we sometimes want to model systems where the underlying space of events has inherently \emph{continuous} aspects: for instance in hybrid control systems~\cite{alur1996hybrid}, as used e.g. in flight management. In the machine learning community~\cite{gordon2014probabilistic,goodman2013principles}, statistical models are also used to express our \emph{beliefs} about the world, which we may then update using \emph{Bayesian inference}\textemdash the ability to condition values of variables via observations.
As a consequence, several \emph{continuous} probabilistic languages have been introduced and studied, such as Church~\cite{church} and Anglican~\cite{anglican}, as well as formal operational semantics for them~\cite{borgstrom2016lambda}. Giving a \emph{fully abstract} \emph{denotational} semantics to a higher-order probabilistic language with full recursion, however, has proved to be harder than in the non-probabilistic case. For discrete probabilities, two such fully abstract models exist: in~\cite{danos2002probabilistic}, Danos and Harmer introduced a fully abstract denotational semantics, based on game semantics, of a probabilistic extension of Idealized Algol; and in~\cite{EPT15} Ehrhard, Pagani and Tasson showed that the category $\mathbf{Pcoh}$ of \emph{probabilistic coherence spaces} gives a fully abstract model for $\textsf{PCF}_\oplus$, a discrete probabilistic variant of Plotkin's $\textsf{PCF}$.
While there is currently no known fully abstract denotational semantics for a higher-order language with full recursion and continuous probabilities, several denotational models have been introduced. The pioneering work of Kozen~\cite{kozen1979semantics} gave a denotational semantics to a \emph{first-order} while-language endowed with a random real number generator. In~\cite{staton2016semantics}, Staton et al.\ give a denotational semantics to a higher-order language: they first develop a distributive category based on measurable spaces as a model of the first-order fragment of their language, and then extend it into a cartesian closed category using a standard construction based on the functor category.
Recently, Ehrhard, Pagani and Tasson introduced in~\cite{pse} the category $\mathbf{Cstab_m}$ as a denotational model of an extension of $\textsf{PCF}$ with continuous probabilities. It is presented as a refinement, with \emph{measurability constraints}, of the category $\mathbf{Cstab}$ of abstract cones and so-called \emph{stable} functions between cones, a generalization of the \emph{absolutely monotonic functions} of real analysis.
Here, we look at the category $\mathbf{Cstab_m}$ from the point of view of \emph{discrete} probabilities. It was noted in~\cite{pse} that there is a natural way to see any probabilistic coherence space as an object of $\mathbf{Cstab}$. In this work, we show that this connection leads to a full and faithful functor $\mathcal F$ from $\mathbf{Pcoh}_!$\textemdash the Kleisli category of $\mathbf{Pcoh}$\textemdash into $\mathbf{Cstab}$. This is done by showing that every stable function between probabilistic coherence spaces can be seen as a power series, using McMillan's extension~\cite{mcmillan} of Bernstein's theorem for absolutely monotonic functions to an abstract setting. We then show that the functor $\mathcal F$ we have built is cartesian closed, i.e. respects the cartesian closed structure of $\mathbf{Pcoh}_!$. In the last part, we turn $\mathcal F$ into a functor $\mathcal F^m:\mathbf{Pcoh}_! \rightarrow \mathbf{Cstab_m}$, and we show that $\mathcal F^m$ too is cartesian closed.
To sum up, the contribution of this paper is to show that there is a cartesian closed full embedding from $\mathbf{Pcoh}_!$ into $\mathbf{Cstab_m}$. Since $\mathbf{Pcoh}_!$ is known to be a fully abstract denotational model of $\textsf{PCF}_\oplus$, a corollary of this result is that $\mathbf{Cstab_m}$ too is a fully abstract model of $\textsf{PCF}_\oplus$.
\section{Discrete and Continuous Probabilistic Extensions of $\textsf{PCF}$: an Overview}\label{sect:overview} A simple way to add probabilities to a (higher-order) programming language is to add a fair probabilistic choice operator to the syntax. Such an approach has been applied to various extensions of the $\lambda$-calculus~\cite{DLZ}. To fix ideas, we give here the syntax of a (minimal) probabilistic variant of Plotkin's $\textsf{PCF}$~\cite{plotkin1977lcf}, which we call $\textsf{PCF}_\oplus$. It is a typed language, whose types are given by: $A ::= N \; \; \mbox{\Large{$\mid$}}\;\; A \rightarrow A$, where $N$ is the base type of natural numbers. The programs are generated as follows: \begin{align*}
M, & N \in \textsf{PCF}_\oplus ::=\; x \; \; \mbox{\Large{$\mid$}}\;\; \lambda {x^A}\cdot M \; \; \mbox{\Large{$\mid$}}\;\; (M N) \; \; \mbox{\Large{$\mid$}}\;\; (Y N) \\ & \; \; \mbox{\Large{$\mid$}}\;\; \text{ifz }(M, N, L) \; \; \mbox{\Large{$\mid$}}\;\; \text{let}(x, M, N)\; \; \mbox{\Large{$\mid$}}\;\; M \oplus N \\ & \; \; \mbox{\Large{$\mid$}}\;\; \underline{n} \; \; \mbox{\Large{$\mid$}}\;\; \text{ succ }(M) \; \; \mbox{\Large{$\mid$}}\;\; \text{ pred }(M) \end{align*} The operator $\oplus$ is the fair probabilistic choice operator, $Y$ is a recursion operator, and $n$ ranges over natural numbers. The $\text{ifz}$ construct tests whether its first argument (of type $N$) is $0$, reducing to its second argument if so, and to its third otherwise. We endow this language with a natural operational semantics~\cite{EPT15}, which we take to be call-by-name. However, for expressiveness we need to be able to simulate a call-by-value discipline on terms of ground type $N$: this is enabled by the $\text{let}$-construct.
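To make the operational reading concrete, the following sketch (our own encoding, not part of the paper) computes the exact output distribution of closed ground terms built from numerals, $\oplus$ and $\text{ifz}$; abstraction and recursion are omitted, so every distribution here is total:

```python
from fractions import Fraction

# Terms of a tiny fragment of PCF_oplus, encoded as nested tuples:
# ('num', n), ('oplus', M, N), ('ifz', M, N, L). Encoding is ours.
def dist(term):
    """Return the exact output distribution {n: probability} of a term."""
    tag = term[0]
    if tag == 'num':                      # numeral underline{n}
        return {term[1]: Fraction(1)}
    if tag == 'oplus':                    # fair choice M (+) N
        d = {}
        for sub in (term[1], term[2]):
            for n, p in dist(sub).items():
                d[n] = d.get(n, Fraction(0)) + p / 2
        return d
    if tag == 'ifz':                      # ifz(M, N, L)
        d = {}
        for n, p in dist(term[1]).items():
            branch = dist(term[2] if n == 0 else term[3])
            for m, q in branch.items():
                d[m] = d.get(m, Fraction(0)) + p * q
        return d
    raise ValueError(tag)

# ifz(0 (+) 1, 10, 20) yields 10 and 20 each with probability 1/2.
t = ('ifz', ('oplus', ('num', 0), ('num', 1)), ('num', 10), ('num', 20))
print(dist(t))  # {10: Fraction(1, 2), 20: Fraction(1, 2)}
```

Exact rationals (rather than floats) keep the computed distributions free of rounding, which matches the exact vectors used in the denotational semantics below.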
We can see that the kind of probabilistic behavior captured by $\textsf{PCF}_\oplus$ is \emph{discrete}, in the sense that it manipulates distributions on countable sets. In~\cite{pcsaamohpc}, Danos and Ehrhard introduced a model of Linear Logic designed to lead to denotational models for discrete higher-order probabilistic computation: the category $\mathbf{Pcoh}$ of \emph{probabilistic coherence spaces (PCSs)}. It was indeed shown in~\cite{EPT15} that $\mathbf{Pcoh}_!$, the Kleisli category of $\mathbf{Pcoh}$, is a \emph{fully abstract} model of $\textsf{PCF}_\oplus$, while the Eilenberg-Moore category of $\mathbf{Pcoh}$ is a \emph{fully abstract} model of a probabilistic variant of Levy's call-by-push-value calculus.
We now illustrate through examples the ideas behind the denotational semantics of $\textsf{PCF}_\oplus$ in $\mathbf{Pcoh}_!$. The basic idea is that the denotation of a program is a vector in $\Rp{X}$, where $X$ is the countable set of possible outcomes. For instance, the denotation of the program $\underline{0} \oplus \underline{1}$ of type $N$ is the vector $x \in \Rp{\mathbb{N}}$ with $x_0 = \frac 1 2$, $x_1 = \frac 1 2$, and $x_k = 0$ for $k \not \in \{0,1\}$. Morphisms in $\mathbf{Pcoh}_!$, on the other hand, can be seen as analytic functions (i.e. power series) between real vector spaces. Let us look at the denotation of the simple $\textsf{PCF}_\oplus$ program below. $$M := \lambda {x^N}\cdot \left(\underline{0} \oplus \text{ifz}(x,\underline{1}, \text{ifz}(x,\underline{0},\Omega)) \right),$$ where $\Omega$ is the usual encoding, using the recursion operator, of a term that never terminates. The denotation of $M$ is the following function $\Rp{\mathbb{N}} \rightarrow \Rp{\mathbb{N}}$: $$f(x)_k = \begin{cases}
\frac 1 2 + \frac 1 2 \sum_{i \neq 0} x_i \cdot x_0 \qquad \text{ if } k=0 \\
\frac 1 2 x_0 \qquad \text{ if }k = 1 \\
0 \qquad \text{ if }k \not \in \{0,1\} \end{cases} $$ We can see that $f(x)_k$ indeed corresponds to the probability of obtaining $\underline{k}$ when we pass to $M$ a term $N$ whose denotation is $x$. Observe that $f$ here is a polynomial in $x$; however, since we have recursion in our language, there are programs that make an unbounded number of calls to their arguments: their denotations are then no longer polynomials, but they are still analytic functions. The analytic nature of $\mathbf{Pcoh}_!$ morphisms plays a key role in the proof of full abstraction for $\textsf{PCF}_\oplus$.
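The displayed denotation can be transcribed directly. The sketch below (hypothetical code of ours, with subprobability vectors over $\mathbb{N}$ encoded as dictionaries) evaluates $f$ on the denotation of $\underline{0} \oplus \underline{1}$ and checks that some probability mass is lost to divergence:

```python
from fractions import Fraction

def f(x):
    """The displayed denotation of M = \\x.(0 (+) ifz(x, 1, ifz(x, 0, Omega))),
    transcribed term by term; x maps an index in N to its mass."""
    x0 = x.get(0, Fraction(0))
    rest = sum(p for k, p in x.items() if k != 0)   # sum over i != 0 of x_i
    return {0: Fraction(1, 2) + Fraction(1, 2) * rest * x0,
            1: Fraction(1, 2) * x0}

# Feed f the denotation of 0 (+) 1, i.e. x_0 = x_1 = 1/2:
x = {0: Fraction(1, 2), 1: Fraction(1, 2)}
y = f(x)
print(y[0], y[1])            # 5/8 1/4
print(sum(y.values()) < 1)   # True: mass 1/8 is lost to divergence (Omega)
```

The total mass $\frac 5 8 + \frac 1 4 = \frac 7 8 < 1$ reflects the run where both calls to the argument return a non-zero value the first time and zero the second, sending the program into $\Omega$.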
Observe that this way of building a model for $\textsf{PCF}_\oplus$ depends crucially on the fact that we consider discrete probabilities over countable sets of values. In recent years, however, there has been much focus on continuous probabilities in higher-order languages. The aim is to be able to handle classical mathematical distributions on the reals, such as Gaussian distributions, that are widely used to build generic physical or statistical models, as well as transformations over these distributions.
We illustrate the basic idea here by presenting the language $\textsf{PCF}_{\texttt{sample}}$, following~\cite{pse}, that can be seen as the continuous counterpart to the discrete language $\textsf{PCF}_\oplus$. It is a typed language, with types generated as $A ::= R \; \; \mbox{\Large{$\mid$}}\;\; A \rightarrow A$, and terms generated as follows: \begin{align*}
M \in \textsf{PCF}_{\text{sample}} ::=\;& x \; \; \mbox{\Large{$\mid$}}\;\; \lambda {x^A}\cdot M \; \; \mbox{\Large{$\mid$}}\;\; (M N) \; \; \mbox{\Large{$\mid$}}\;\; (Y N) \\ & \; \; \mbox{\Large{$\mid$}}\;\; \text{ifz }(M, N, L) \; \; \mbox{\Large{$\mid$}}\;\; \text{let}(x, M, N) \\ & \; \; \mbox{\Large{$\mid$}}\;\; \underline{r} \; \; \mbox{\Large{$\mid$}}\;\; \texttt{sample} \; \; \mbox{\Large{$\mid$}}\;\; \underline{f}(M_1, \ldots, M_n)
\end{align*} where $r$ is any real number, and $f$ ranges over a fixed countable set of measurable functions $\mathbb{R}^n \rightarrow \mathbb{R}$. The constant \texttt{sample} stands for the uniform distribution over $[0,1]$. Observe that admitting every measurable function as a primitive of the language makes it possible to encode every distribution that can be obtained in a measurable way from the uniform distribution, for instance Gaussian distributions. This language is actually expressive enough to simulate other probabilistic features, for instance Bayesian conditioning, as highlighted in~\cite{pse}. Moreover, we can argue it is also \emph{more general} than $\textsf{PCF}_\oplus$: first, it allows us to encode integers (since $\mathbb{N} \subseteq \mathbb{R}$) and basic arithmetic operations over them. Secondly, since the order operator $ \geq : \mathbb{R} \times \mathbb{R} \rightarrow \{0,1\} \subseteq \mathbb{R}$ is measurable, we can construct in $\textsf{PCF}_{\text{sample}}$ terms like the following one: $$\text{ifz}(\underline{\geq}(\texttt{sample}, \frac 1 2), M, N),$$ which encodes a fair choice between $M$ and $N$.
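The fair-choice encoding above can be simulated operationally. In the sketch below (our own code), \texttt{sample} is the uniform generator, and the $\text{ifz}$ test selects $M$ exactly when $\underline{\geq}(\texttt{sample},\frac 1 2)$ evaluates to $0$, i.e. when the sampled value falls below $\frac 1 2$:

```python
import random

def sample():
    """The 'sample' primitive: uniform distribution on [0, 1]."""
    return random.random()

def fair_choice(m, n):
    """ifz(geq(sample, 1/2), m, n): ifz takes its second argument when
    the test term evaluates to 0, i.e. when the sampled value is < 1/2."""
    return m() if sample() < 0.5 else n()

random.seed(0)  # fixed seed for reproducibility
trials = 100_000
ones = sum(fair_choice(lambda: 1, lambda: 0) for _ in range(trials))
print(ones / trials)  # close to 1/2: the encoded choice is fair
```

A Monte Carlo check of this kind only witnesses fairness empirically, of course; the denotational argument of the paper is exact.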
We see, however, that $\mathbf{Pcoh}_!$ cannot be a model for $\textsf{PCF}_{\texttt{sample}}$: indeed, it does not even seem possible to write a probabilistic coherence space for the real type. In~\cite{pse}, Ehrhard, Pagani and Tasson introduced the cartesian closed category $\mathbf{Cstab_m}$ of measurable cones and measurable stable functions, and showed that it provides an adequate and sound denotational model for $\textsf{PCF}_{\texttt{sample}}$. The denotation of the base type $R$ is taken to be the set of finite measures over the reals, and the denotation of higher-order types is then built naturally using the cartesian closed structure. From there, it is natural to ask: how \emph{good} is $\mathbf{Cstab_m}$ as a model of probabilistic higher-order languages?
The present paper is devoted to giving a partial answer to this question, in the case where we restrict ourselves to a \emph{discrete fragment} of $\textsf{PCF}_{\texttt{sample}}$. To make this precise, let us consider a continuous language with an \emph{explicit} discrete fragment, having both $R$ and $N$ as base types: we consider the language $\textsf{PCF}_{\oplus,\texttt{sample}}$ with all the syntactic constructs of both $\textsf{PCF}_{\oplus}$ and $\textsf{PCF}_{\texttt{sample}}$, as well as an operator $\texttt{real}$ with the typing rule: $$ \AxiomC{$\strut \wfjt {\Gamma} M N $} \UnaryInfC{$\wfjt {\Gamma} {\mathtt{real}(M)} R $} \DisplayProof, $$ designed to let the continuous constructs act on the discrete fragment, by giving a way to see any distribution on $\mathbb{N}$ as a distribution on $\mathbb{R}$. We can indeed extend in a natural way the denotational semantics of $\textsf{PCF}_{\texttt{sample}}$ given in~\cite{pse} to $\textsf{PCF}_{\oplus,\texttt{sample}}$: in the same way that the denotational semantics of $R$ is taken to be the set of all finite measures on $\mathbb{R}$, we take the denotational semantics of $N$ to be the set $\meas \mathbb{N}$ of all finite measures over $\mathbb{N}$. We take as denotational semantics of the operator $\texttt{real}$ the function: $$\semm {\texttt{real}}: \mu \in \meas \mathbb{N} \mapsto \left(A \in \Sigma_\mathbb{R} \mapsto \sum_{n \in \mathbb{N} \cap A} \mu(n) \right) \in \meas {\mathbb{R}}.$$ We will see later that this function is indeed a morphism in $\mathbf{Cstab_m} \allowbreak(\meas{\mathbb{N}}, \meas {\mathbb{R}})$. What we would like to know is: what is the structure of the sub-category of $\mathbf{Cstab_m}$ given by the \emph{discrete types of $\textsf{PCF}_{\oplus,\texttt{sample}}$}, i.e.\ the one generated inductively by $\semm N$, $\Rightarrow$, $\times$?
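The displayed semantics of $\texttt{real}$ is just the pushforward of a discrete measure along the inclusion $\mathbb{N} \subseteq \mathbb{R}$. A minimal sketch (our own encoding: finite measures on $\mathbb{N}$ as dictionaries, measurable sets as membership predicates):

```python
from fractions import Fraction

def real_sem(mu):
    """Semantics of the 'real' operator: view a finite measure mu on N
    (a dict n -> mass) as a measure on R, evaluated on a set A of reals.
    Sets are represented by membership predicates; encoding ours."""
    return lambda A: sum((p for n, p in mu.items() if A(n)), Fraction(0))

mu = {0: Fraction(1, 2), 3: Fraction(1, 4)}   # a finite measure on N
nu = real_sem(mu)
print(nu(lambda r: r < 1))   # 1/2 : the mass of N in (-inf, 1)
print(nu(lambda r: True))    # 3/4 : the total mass is preserved
```

The second print illustrates that $\semm{\texttt{real}}$ preserves the norm $\mu \mapsto \mu(\mathbb{N})$, which is the first thing to check when verifying it is a morphism of cones.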
The starting point of our work is the connection highlighted in~\cite{pse} between PCSs and complete cones: every PCS can be seen as a complete cone, in such a way that the denotational semantics of $N$ in $\mathbf{Pcoh}_!$ becomes the set of finite measures over $\mathbb{N}$. We formalize this connection by a functor $F^m : \mathbf{Pcoh}_! \rightarrow \mathbf{Cstab_m}$. However, to be able to use $\mathbf{Pcoh}_!$ to obtain information about the discrete-types sub-category of $\mathbf{Cstab_m}$, we need to know whether this connection is preserved at higher-order types: does the $\Rightarrow$ construct in $\mathbf{Cstab_m}$ make wild functions appear, e.g. non-analytic ones, that are not representable in $\mathbf{Pcoh}_!$? The main technical part of this paper consists in showing that this is not the case, meaning that the functor $F^m$ is full and faithful, and cartesian closed. It tells us that the discrete-types sub-category of $\mathbf{Cstab_m}$ actually has the same structure as the subcategory of $\mathbf{Pcoh}_!$ generated by $\sem \mathbb{N}^\mathbf{Pcoh}$, $\Rightarrow$ and $\times$. Since $\mathbf{Pcoh}_!$ is a fully abstract model of $\textsf{PCF}_\oplus$, it tells us that the discrete fragment of $\textsf{PCF}_{\oplus,\texttt{sample}}$ is fully abstract in $\mathbf{Cstab_m}$.
\section{Cones and Stable Functions} The category of measurable cones and measurable stable functions, $\mathbf{Cstab_m}$, was introduced by Ehrhard, Pagani and Tasson in~\cite{pse} with the aim of giving a model for $\textsf{PCF}_{\texttt{sample}}$.
They actually introduced it as a refinement of the category of complete cones and stable functions, denoted $\mathbf{Cstab}$.
Stable functions on cones are a generalization of the well-known \emph{absolutely monotonic functions} of real analysis: those functions $f: [0, \infty) \rightarrow \Rp{}$ which are infinitely differentiable and, moreover, have all their derivatives non-negative. The relevance of such functions comes from a result due to Bernstein: every absolutely monotonic function coincides with a power series. Moreover, it is possible to characterize absolutely monotonic functions without explicitly asking for them to be differentiable: they are exactly those functions whose so-called \emph{higher-order differences}, quantities defined only by sums and differences of values of the form $f(x)$, are all non-negative (see~\cite{widder}, Chapter 4). The definition of \emph{pre-stable functions} in~\cite{pse} generalizes this characterization.
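The classical characterization can be checked numerically. The sketch below (hypothetical helper of ours) computes the $n$-th forward difference $\Delta_h^n f(x) = \sum_k (-1)^{n-k}\binom{n}{k} f(x+kh)$ and verifies its non-negativity for the absolutely monotonic function $\exp$:

```python
import math

def nth_difference(f, x, h, n):
    """Classical n-th forward difference:
    Delta_h^n f(x) = sum_k (-1)^(n-k) * C(n,k) * f(x + k*h)."""
    return sum((-1) ** (n - k) * math.comb(n, k) * f(x + k * h)
               for k in range(n + 1))

# For f = exp one has Delta_h^n f(x) = e^x * (e^h - 1)^n >= 0, so exp is
# absolutely monotonic on [0, inf), as Bernstein's characterization demands.
ok = all(nth_difference(math.exp, x, h, n) >= 0
         for x in (0.0, 0.5, 1.0)
         for h in (0.1, 0.5)
         for n in range(6))
print(ok)  # True
```

The subset-indexed differences used for pre-stable functions below reduce to exactly these iterated differences when all the increments $u_i$ are equal to a single step $h$.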
In this section, we first recall basic facts about cones and stable functions, all extracted from~\cite{pse}. Then we prove a generalization of Bernstein's theorem for pre-stable functions over a particular class of cones, which is the main technical contribution of this paper. We do so following the work of McMillan~\cite{mcmillan} on a generalization of Bernstein's theorem for functions over abstract domains endowed with partition systems.
\subsection{Cones} The use of a notion of cones in denotational semantics to deal with probabilistic behavior goes back to Kozen in~\cite{kozen1979semantics}.
We take here the same definition of cone as in~\cite{pse}.
\begin{definition}
A cone $C$ is a $\Rp{}$-semimodule given together with an $\Rp{}$-valued function $\norm C {\cdot}$, called the \emph{norm of $C$}, verifying:
\begin{align*}
& \left(x + y = x + y'\right)\, \Rightarrow\, y = y' && \norm C {\alpha x} = \alpha {\norm C x} \\
& \norm {C}{x + x'} \leq \norm C x + \norm C {x'} &&
\norm C x = 0 \Rightarrow x = 0\\
& \norm C x \leq \norm C {x + x'}
\end{align*} \end{definition}
The most immediate example of a cone is the non-negative real half-line, with the identity as norm. Another example is the positive quadrant of the 2-dimensional plane, endowed with the Euclidean norm. In a way, the notion of cone generalizes the idea of a space where all elements are \emph{non-negative}. This analogy gives us a generic way to define a pre-order, using the $+$ of the cone structure.
\begin{definition}\label{def:coneorder} Let $C$ be a cone. We define a partial order $\order C$ on $C$ by: $x {\order C} y $ if there exists $z \in C$ with $y = x + z$. \end{definition}
We define $\boule C$ as the set of elements of $C$ of norm at most $1$; we will sometimes call it the \emph{unit ball} of $C$. We will also be interested in the \emph{open unit ball} $\bouleopen C$, defined as the set of elements of $C$ of norm strictly smaller than $1$.
In~\cite{pse}, the authors restrict themselves to cones verifying a completeness criterion: it allows them to define the denotation of the recursion operator of $\textsf{PCF}_{\text{sample}}$, by ensuring the existence of fixed points. \begin{definition}\label{def:cc}
A cone $C$ is said to be:
\begin{itemize}
\item \emph{sequentially complete} if any non-decreasing sequence $(x_n)_{n \in \mathbb{N}}$ of elements of $\boule C$ has a least upper bound $\sup_{n \in \mathbb{N}} x_n \in \boule C$.
\item \emph{directed complete} if for any directed subset $D$ of $\boule C$, $D$ has a least upper bound $\sup D \in\boule C$.
\item \emph{a lattice cone} if any two elements $x,y$ of $C$ have a least upper bound $x \vee y $.
\end{itemize} \end{definition} Observe that a directed-complete cone is always sequentially complete.
\longv{
\begin{lemma}\label{lemma:lattice}
Let $C$ be a lattice cone. Then:
\begin{itemize}
\item Any two elements $x,y$ of $C$ have a greatest lower bound $x \wedge y$.
\item Decomposition property: if $z \leq x + y$, then there exist $z_1, z_2 \in C$ such that $z = z_1 + z_2$, $z_1 \leq x$, and $z_2 \leq y$.
\end{itemize}
\end{lemma}
\begin{proof}
Recall that, if $a \geq b$, we denote by $a-b$ the element $c$ such that $a = b+c$.
\begin{itemize}
\item We consider $z = x+y -(x \vee y)$, and we show that $z$ is indeed the greatest lower bound of $x$ and $y$.
\item We take $z_2 = (x \vee z)-x$, and $z_1 = z - z_2$. First, we see that $z_2 \leq (x+y) - x$, and so $z_2 \leq y$. Moreover, $z_1 = x -( (x \vee z)-z) \leq x$.
\end{itemize}
\end{proof}
} We illustrate Definition~\ref{def:cc} by giving the complete cone used in~\cite{pse} as the denotational semantics of the base type $R$ in $\textsf{PCF}_{\text{sample}}$. \begin{example}\label{ex:R}
We take $\meas \mathbb{R}$ to be the set of \emph{finite} measures over $\mathbb{R}$, with norm $\norm {\meas {\mathbb{R}}} \mu = \mu(\mathbb{R})$. Then $\meas \mathbb{R}$ is a directed-complete cone. For every $r \in \mathbb{R}$, the denotational semantics of the term $\underline{r}$ in~\cite{pse} is $\delta_r$, the \emph{Dirac measure at $r$}, defined by taking
$\delta_r(U) = 1 \text{ if } r \in U$, and $\delta_r(U) = 0$ otherwise. \end{example} \hide{
\begin{proof}
Let $D$ be a directed subset of $\boule {C_R}$. Recall that the elements of $\boule{C_R}$ are the finite measures $\nu$ on $\mathbb{R}$ such that $\nu(\mathbb{R}) \leq 1$. We define a function $\mu : \Sigma_\mathbb{R} \rightarrow \Rp{}$ by $\mu(X) = \sup_{\nu \in D} \nu(X)$.
We are going to show that $\mu$ is a finite measure on $\mathbb{R}$, and that moreover it is the lub of $D$.
We first show that $\mu$ is a measure: it is immediate that it is non-negative and that $\mu(\emptyset) = 0$. We now show countable additivity: let $(X_n)_{n \in \mathbb{N}}$ be a sequence of disjoint measurable subsets of $\mathbb{R}$. First, we see that:
\begin{align*}
\mu(\sum_{n \in \mathbb{N}} X_n) & = \sup_{\nu \in D} \nu(\sum_{n \in \mathbb{N}} X_n) \\
& = \sup_{\nu \in D} \sum_{n \in \mathbb{N}} \nu( X_n) \text{ since every }\nu\text{ is a measure}\\
& \leq \sup_{\nu \in D} \sum_{n \in \mathbb{N}} \mu( X_n) = \sum_{n \in \mathbb{N}} \mu( X_n).
\end{align*}
We still have to show the reverse inequality. Let $N \in \mathbb{N}$. Since $D$ is directed, finitely many elements of $D$ always admit a common upper bound in $D$, so the suprema over the finitely many sets $X_0, \ldots, X_N$ can be realized along a single $\nu$:
\begin{align*}
\sum_{n \leq N} \mu( X_n) & = \sum_{n \leq N} \sup_{\nu \in D} \nu( X_n) = \sup_{\nu \in D} \sum_{n \leq N} \nu( X_n)\\
& \leq \sup_{\nu \in D} \nu(\sum_{n \in \mathbb{N}} X_n) = \mu(\sum_{n \in \mathbb{N}} X_n).
\end{align*}
Letting $N$ go to infinity, we obtain $\sum_{n \in \mathbb{N}} \mu(X_n) \leq \mu(\sum_{n \in \mathbb{N}} X_n)$. Together with the inequality already shown, this proves that $\mu$ is a measure. Looking at the definition of $\mu$, it is immediate that $\mu$ is finite and that it is the least upper bound of $D$.
\end{proof}} In a similar way, we define $\meas X$ as the directed-complete cone of finite measures over $X$, for any measurable space $X$.
In~\cite{pse}, the authors only ask for the cones they consider to be sequentially complete. This is because they want to add measurability requirements to their cones, and, as a rule, sequential completeness interacts better with measurability than directed completeness, since measurable sets are closed under \emph{countable} unions, but not under \emph{arbitrary} unions. \longv{ We illustrate this point in the example below. \begin{example}
Let $A$ be a measurable space, and $\mu$ a finite measure on $A$. We consider the cone of measurable functions $A \rightarrow \mathbb{R}_+$, with $\norm{}f = \int_{A} f d\mu$. Lebesgue's monotone convergence theorem shows that this cone is sequentially complete, but it is not directed complete. \end{example}} In this work, however, we are only interested in cones arising from \emph{probabilistic coherence spaces}, in a way we will develop in Section~\ref{sect:pcs}. Since those cones have an underlying \emph{discrete} structure, we will be able to show that they are actually \emph{directed complete}. We will need this fact, since we will apply McMillan's results~\cite{mcmillan}, obtained in the more general framework of abstract domains with partitions, in which directed completeness is required. Indeed, directed completeness also ensures the existence of \emph{infima}, as stated in the lemma below, whose proof can be found in the long version. \begin{lemma}
If a cone $C$ is:
\begin{itemize}
\item \emph{sequentially} complete, then every non-increasing sequence $(x_n)_{n \in \mathbb{N}}$ has a greatest lower bound $\inf (x_n)_{n \in \mathbb{N}}$.
\item \emph{directed} complete, then for every $D \subseteq C$ directed for the reverse order, $D$ has a greatest lower bound $\inf D$.
\end{itemize} \end{lemma} \longv{\begin{proof} We do the proof when $C$ is \emph{directed} complete; the sequentially complete case is exactly the same.
Let $D$ be a reverse directed set. If all the elements of $D$ are zero, then $\inf D = 0$. Otherwise, let $x \in D$ with $x \neq 0$. We consider the subset $E = \{\frac {x - y}{\norm{C} x} \mid y \leq x \wedge y \in D \}$. It is easy to see that $E$ is a directed subset of $\boule C$, so, since $C$ is directed complete, it has a supremum. We can therefore take $z = x - \norm {C}x \cdot \sup E$, and show that $z$ is the greatest lower bound of $D$.
\end{proof} } It is shown in~\cite{pse} that addition and scalar multiplication are Scott-continuous in complete cones, in a sequential sense. In directed-complete cones, this also holds in a \emph{directed} sense. \begin{lemma}
The addition $+ : C \times C \rightarrow C$ and the scalar multiplication $\cdot : \Rp{} \times C \rightarrow C$ are Scott-continuous:
\begin{itemize}
\item for any directed subsets $D$ and $E$ of $C$, and $K$ of $\Rp{}$:
\begin{align*}
&\sup{\{x + y \mid x \in D, y \in E\}} = \sup D + \sup E;\\
\text{and }\quad & \sup\{\lambda \cdot x \mid \lambda \in K, \, x \in E \} = \sup K \cdot \sup E.
\end{align*}
\item for any reverse directed subsets $D$, $E$ of $C$, and $K$ of $\Rp{}$: \begin{align*}
&\inf{\{x + y \mid x \in D, y \in E\}} = \inf D + \inf E; \\
\text{and }\quad &\inf\{\lambda \cdot x \mid \lambda \in K, \, x \in E \} = \inf K \cdot \inf E.
\end{align*}
\end{itemize} \end{lemma}
\subsection{Pre-Stable Functions between Cones}\label{subsect:cstab}
As said before, the notion of pre-stable function generalizes that of \emph{absolutely monotonic} real function. More precisely, the idea is to define so-called \emph{higher-order differences}, and to require that they all be non-negative.
First, we want to be able to talk about those tuples $\vec u = (u_1, \ldots ,u_n)$ such that $\norm{C}{x + \sum u_i} \leq 1$, for a fixed $x \in \boule C$ and $n \in \mathbb{N}$. To that end, we introduce a cone $C_x^n$ whose unit ball consists exactly of such tuples. It is an adaptation of the definition given in~\cite{pse} for the case $n=1$, and one shows in the same way that it is indeed a cone.
\begin{definition}[Local Cone]
Let $C$ be a cone, $n \in \mathbb{N}$, and $x \in \bouleopen C$. We call \emph{$n$-local cone at $x$}, and denote by $C_x^n$, the cone $C^n$ endowed with the following norm:
$$\norm {C_x^n} {(u_1, \ldots, u_n)} = \inf {\{ \frac 1 r \mid x + r \cdot \sum_{1 \leq i \leq n} u_i \in \boule C \wedge r>0 \}}.$$
\end{definition} We can show that whenever $C$ is a directed-complete cone, $C_x^n$ is also directed-complete.
For $n\in \mathbb{N}$, we use $\parteps{+} {n}$ (respectively $\parteps{-}n$) for the set of all subsets $I$ of $\{1,\ldots, n\}$ such that $n - {\card I}$ is even (respectively odd).
We are now ready to introduce \emph{higher-order differences}. Since cones come with addition but not subtraction, we define separately the positive part $\diff + n$ and the negative part $\diff - n$ of these differences: for $f: \boule C \rightarrow D$, $x \in \boule C$, $\vec u \in \boule C_x^n$, and $\epsilon \in \{-,+\}$, we define:
\begin{align*} \diff \epsilon n (f)(x \mid \vec u) &= \sum_{I \in \parteps \epsilon n} f(x + \sum_{i \in I } u_i)
\end{align*}
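On the simplest cone $\Rp{}$ (with the identity norm), the two halves of the higher-order differences can be computed directly from their defining sums over subsets. The sketch below (our own encoding, with floats standing in for cone elements) checks the pre-stability inequality $\diff - 2 (f) \leq \diff + 2 (f)$ for $f(x)=x^2$, whose second difference collapses to $2u_1u_2$:

```python
from itertools import combinations

def delta(f, x, us, sign):
    """Delta^+_n (sign=+1) or Delta^-_n (sign=-1) of f at x along
    us = (u_1, ..., u_n): the sum of f(x + sum_{i in I} u_i) over the
    subsets I of {1..n} with n - |I| even, respectively odd."""
    n = len(us)
    total = 0.0
    for k in range(n + 1):
        if (-1) ** (n - k) != sign:   # keep only the requested parity
            continue
        for I in combinations(range(n), k):
            total += f(x + sum(us[i] for i in I))
    return total

# On the cone R+ (identity norm), f(x) = x^2 is pre-stable: its second
# difference Delta_2 f(x | u1, u2) equals 2*u1*u2 >= 0, and all
# differences of order >= 3 vanish (f is a degree-2 power series).
f = lambda x: x ** 2
x, us = 0.1, (0.2, 0.3)
d = delta(f, x, us, +1) - delta(f, x, us, -1)
print(abs(d - 2 * us[0] * us[1]) < 1e-12)  # True
```

The vanishing of all differences of order above the degree is the finite-dimensional shadow of the Taylor-expansion argument developed in the rest of this section.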
\begin{definition}
We say that $f: \boule C \rightarrow D$ is \emph{pre-stable} if, for every $n \in \mathbb{N}$, for every $x \in \boule C$, $\vec u \in \boule C_x^n$, it holds that:
$$\diff - n (f)(x \mid \vec u) \leq \diff + n (f)(x \mid \vec u).$$
\end{definition}
If $f$ is pre-stable, we set $\diff{}n f (x \mid \vec u) = \diff + n f (x \mid \vec u) - \diff - n f (x \mid \vec u)$. Observe that the quantity $\diff{}n f(x \mid \vec u)$ is symmetric in $\vec u$, i.e. invariant under permutations of the coordinates of $\vec u$. \begin{definition} A function $f: \boule C \rightarrow D$ is called a \emph{stable function from $C$ to $D$} if it is pre-stable, sequentially Scott-continuous, and moreover there exists $\lambda \in \mathbb{R}_+$ such that $f(\boule C) \subseteq \lambda \cdot \boule D$.
\end{definition}
\begin{definition} $\mathbf{Cstab}$ is the category whose objects are sequentially complete cones, and morphisms from $C$ to $D$ are the stable functions $f$ from $C$ to $D$ such that $f(\boule C) \subseteq \boule D$.
\end{definition} It was shown in~\cite{pse} that $\mathbf{Cstab}$ can be endowed with a cartesian closed structure. The product cone is defined as $\prod_{i \in I} C_i = \{(x_i)_{i \in I} \mid \forall i \in I, x_i \in C_i \} $, with $\norm{\prod_{i \in I}C_i}{x} = \sup_{i \in I} \norm {C_i}{x_i}$. The function cone $C \Rightarrow D$ is the set of all stable functions, with $\norm {C \Rightarrow D} f = \sup_{x \in \boule C} \norm D {f(x)}$. It was also shown in~\cite{pse} that these cones are indeed sequentially complete, and that the lub in $C \Rightarrow D$ is computed pointwise. We will also use the cone of pre-stable functions from $C$ to $D$, which is sequentially complete as well.
\subsection{A generalization of Bernstein's theorem for pre-stable functions}\label{subsect:bernstein}
We are now going to show an analogue of Bernstein's Theorem for pre-stable functions on directed-complete cones. The idea is to first define an analogue of derivatives for pre-stable functions, and to show that pre-stable functions can be written as the infinite sum generated by an analogue of Taylor expansion on $\bouleopen C$. This result is actually an application of McMillan's work~\cite{mcmillan} in the setting of abstract domains. Here, we give the main steps of the construction directly on cones, and highlight some properties of the Taylor series which are true for cones, but not in the general framework McMillan considered.
\subsubsection{Derivatives of a pre-stable function} Following McMillan~\cite{mcmillan}, we now construct derivatives for pre-stable functions on directed complete cones. This construction is based on the use of a notion of \emph{partition}: a \emph{partition} of $x \in \boule C$ is a multiset $\pi = [u_1, \ldots,u_n] \in \mfin{C}$ such that $x = \sum_{1 \leq i \leq n} u_i $. We write $\partit {\pi} x$ when the multiset $\pi$ is a partition of $x$. We will denote by $+$ the usual union on multisets: $[y_1, \ldots, y_n] + [z_1, \ldots,z_m] = [y_1, \ldots, y_n,z_1, \ldots,z_m]$. We call $\parts x$ the set of partitions of $x$.
\begin{definition}[Refinement Preorder]
If $\pi_1$, $\pi_2$ are in $\parts x$, we say that $\pi_1 \leq \pi_2$ if $\pi_1 = [u_1, \ldots, u_n]$, and $\pi_2 = \alpha_1 + \ldots + \alpha_n$ with each of the $\alpha_i$ a partition of $u_i$. \end{definition}
Observe that when $\pi_1$ and $\pi_2$ are partitions of $x$, $\pi_2 \leq \pi_1$ means that $\pi_1$ is a more \emph{finely grained} decomposition of $x$. If $\vec u$ is an $n$-tuple in $\boule C$, we extend the refinement order to $ \parts {\vec u} = \parts{u_1} \times \ldots \times \parts{u_n}$. \begin{lemma}\label{lemma:refinmentdirected}
Let $C$ be a lattice cone.
Then for every $x \in C$, $\parts x$ is a directed set. \end{lemma} \shortv{The proof of Lemma~\ref{lemma:refinmentdirected} may be found in the long version. } \longv{\begin{proof}
We are going to use the following notion: we say that two non-zero elements $x$ and $y$ of $C$ are \emph{orthogonal}, and we write $x \perp y$, if $x \wedge y = 0$.
Let $\pi_1, \pi_2 \in \parts x$. We first show that there cannot exist $z \in \pi_1$ which is orthogonal to all the elements of $\pi_2$. Indeed, suppose it is the case: we take $y_1, \ldots,y_n$ such that $\pi_2 = [y_1, \ldots,y_n]$. Then by hypothesis, $z \leq x = \sum_{1 \leq i \leq n} y_i$. We can now use the decomposition property from Lemma~\ref{lemma:lattice}: it means that $z = \sum_{1\leq i \leq n} z_i$, with $z_i \leq y_i$. But since $z \perp y_i$ for all $i$, it follows that $z_i=0$ for all $i$, hence $z=0$, a contradiction.
Now, we present a procedure to construct a partition $\pi$ of $x$ with $\pi_1 \leq \pi$ and $\pi_2 \leq \pi$. We can suppose that all elements of $\pi_1$ and $\pi_2$ are non-zero. We start from $\pi=[]$, $\theta_1 = \pi_1$, $\theta_2= \pi_2$, and $w = x$, $v=0$. Throughout the procedure, we maintain the following invariants:
\begin{itemize}
\item $\theta_1, \theta_2 \in \parts w$, $\pi \in \parts v$, and $w + v = x$;
\item all the elements of $\theta_1$ and $\theta_2$ are non-zero;
\item $\pi_1 \leq \pi + \theta_1$, and $\pi_2 \leq \pi + \theta_2$ (for the refinement order). \end{itemize}
Then at each step of the procedure, if $\theta_1$ is non-empty, we do the following: let $\theta_1 = [a_1, \ldots, a_n]$ and $\theta_2 = [b_1, \ldots ,b_m]$. Then we know that there is a $j$ such that $a_1$ and $b_j$ are not orthogonal. We modify the variables as follows:
\begin{align*}
\pi &= \pi + [a_1 \wedge b_j] \\
v &= v + a_1 \wedge b_j \\
\theta_1 &= \begin{cases}[a_1 - a_1 \wedge b_j, a_2, \ldots a_n] \text{ if } a_1 \wedge b_j \neq a_1\\
[a_2, \ldots, a_n] \text{ otherwise.}
\end{cases}\\
\theta_2 &= \begin{cases}
[b_1, \ldots, b_{j-1}, b_j - a_1 \wedge b_j, b_{j+1}, \ldots, b_m] \text{ if }a_1 \wedge b_j \neq b_j\\
[b_1, \ldots, b_{j-1}, b_{j+1}, \ldots, b_m] \text{ otherwise.}
\end{cases}\\
w &= w - a_1 \wedge b_j
\end{align*}
At every step of the procedure presented above, the quantity:
$$\card{\{(i,j) \mid \lnot\,(a_i \perp b_j)\}} $$
decreases. Indeed:
\begin{itemize}
\item either we remove $a_1$ from $\theta_1$, or $b_j$ from $\theta_2$, and then the statement above holds;
\item or we replace $a_1$ by $(a_1 - a_1 \wedge b_j)$, and $b_j$ by $(b_j - a_1 \wedge b_j)$. Then we see that $(a_1 - a_1 \wedge b_j) \perp (b_j - a_1 \wedge b_j)$. Moreover, the pairs that were orthogonal before are still orthogonal: indeed, for every $z$ with $z \perp a_1$ it holds that $z \perp a_1 - a_1 \wedge b_j$, and the same for $b_j$.
\end{itemize}
As a consequence, the procedure terminates: we reach a state where $\theta_1$ is empty, and all the invariants presented above hold. Then we see that $\pi \in \parts x$, and $\pi_1, \pi_2 \leq \pi$.
We are going to illustrate the procedure above on a very basic example. We consider the cone consisting of the positive quadrant of $\mathbb{R}^2$, endowed with the order defined as: $x \leq y$ if $x_1 \leq y_1$ and $x_2 \leq y_2$. We take two partitions of a vector $x \in \mathbb{R}^2$: $\pi_1 = [b_1, b_2]$, and $\pi_2 = [a_1, a_2]$, where $a_1, a_2,b_1,b_2$ are taken as pictured in Figure~\ref{fig:firststepprod}. We are going to apply our procedure in order to obtain a refinement of both $\pi_1$ and $\pi_2$. At the beginning, we have $\theta_1 = \pi_1$, $\theta_2 = \pi_2$, $w=x$, $v=0$.
\begin{itemize}
\item The first step is represented in Figure~\ref{fig:firststepprod}. Observe that the procedure is actually non-deterministic: we may choose any $(a,b)$ with $a \in \pi_1$, $b \in \pi_2$, and $a$ and $b$ not orthogonal. Here, we choose to start from $(b_1, a_1)$. We take $v = a_1 \wedge b_1$ (represented by a red vector in Figure~\ref{fig:firststepprod}): it is going to be the first element of our new partition $\pi$. Accordingly, we take $\pi = [v]$. We now update the partitions $\theta_1$ and $\theta_2$ into partitions of $w = x-v$: $\theta_2$ becomes $[a'_1, a_2]$, and $\theta_1$ becomes $[b'_1,b_2]$, where $a'_1 = a_1 - b_1 \wedge a_1$ and $b'_1 = b_1 - a_1 \wedge b_1$ are also represented in red in Figure~\ref{fig:firststepprod}.
\item The second step is represented in Figure~\ref{fig:sndstepprod}. Observe that now $a'_1$ and $b'_1$ are orthogonal, so we have to choose another pair. Here, we choose $(b'_1, a_2 )$. As before, we add to $\pi$ the glb of $b'_1$ and $a_2$: we obtain $\pi = [a_1 \wedge b_1, b'_1 \wedge a_2]$. Observe that now (as can be seen in Figure~\ref{fig:sndstepprod}) $b'_1 \leq a_2$, and so $b'_1 \wedge a_2 = b'_1$. So when we update the partitions $\theta_1$ and $\theta_2$, we take:
$\theta_1 = [b_2]$, and $\theta_2=[a'_1,a'_2]$, where $a'_2 = a_2 - b'_1$ is represented in purple in Figure~\ref{fig:sndstepprod}.
\item By doing two more steps of the procedure, we see that the final partition $\pi$ is $[a_1 \wedge b_1, b'_1, a'_1, a'_2]$. We can see by looking at Figure~\ref{fig:sndstepprod} that it is indeed a refinement of both $\pi_1$ and $\pi_2$.
\end{itemize}
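The procedure illustrated above can be sketched concretely on the positive quadrant of $\mathbb{R}^2$, where the lattice meet is the componentwise minimum. This is an illustrative sketch only (all names are ours, and the non-deterministic choice is resolved by picking the first suitable pair):

```python
def meet(a, b):
    # Lattice meet in the positive quadrant: componentwise minimum.
    return (min(a[0], b[0]), min(a[1], b[1]))

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def is_zero(a, eps=1e-12):
    return abs(a[0]) < eps and abs(a[1]) < eps

def common_refinement(pi1, pi2):
    """Given two partitions of the same x, return a partition refining
    both, following the procedure of the proof above."""
    theta1, theta2, pi = list(pi1), list(pi2), []
    while theta1:
        a = theta1[0]
        # pick some b in theta2 not orthogonal to a; one must exist
        j = next(j for j, b in enumerate(theta2) if not is_zero(meet(a, b)))
        m = meet(a, theta2[j])
        pi.append(m)  # m joins the common refinement
        ra, rb = sub(a, m), sub(theta2[j], m)
        theta1 = ([ra] if not is_zero(ra) else []) + theta1[1:]
        theta2 = theta2[:j] + ([rb] if not is_zero(rb) else []) + theta2[j+1:]
    return pi

pi1 = [(1.0, 3.0), (3.0, 1.0)]   # two partitions of x = (4, 4)
pi2 = [(2.0, 2.0), (2.0, 2.0)]
pi = common_refinement(pi1, pi2)
total = (sum(u[0] for u in pi), sum(u[1] for u in pi))
assert abs(total[0] - 4.0) < 1e-9 and abs(total[1] - 4.0) < 1e-9
```

The invariants of the proof guarantee that the returned multiset is again a partition of $x$ and refines both inputs.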
\begin{figure}
\caption{First step of the Procedure}
\label{fig:firststepprod}
\caption{Second Step of the Procedure}
\label{fig:sndstepprod}
\caption{Illustration of the Proof of Lemma\ref{lemma:refinmentdirected}}
\label{fig:refinment}
\end{figure}
\end{proof}} Observe that, as a consequence, the refinement preorder also turns $\parts {\vec u}$ into a directed set. \begin{definition}[from~\cite{mcmillan}]\label{def:dersn1}
Let $C$ be a lattice cone, $D$ a cone, and let $f: \boule C \rightarrow D$ be a pre-stable function. Then for every $x \in \boule C$, and $\vec u = (u_1, \ldots ,u_n)\in \boule C_x^n $,
we define $\Phi_{x,\vec u}^{f,n} : \parts {\vec{u}} \rightarrow D$ as:
$$\Phi_{x,\vec u}^{f,n} (\pi_1, \ldots, \pi_n) = \sum_{y_1 \in \pi_1} \ldots \sum_{y_n \in \pi_n} \diff{} n f (x \mid y_1, \ldots, y_n) .$$ \end{definition}
It holds (see~\cite{mcmillan} for more details) that $\Phi_{x, \vec u}^{f,n}$ is a non-increasing function whenever $f$ is pre-stable\longv{ (it is shown in Lemma 3.2 of~\cite{mcmillan} by looking at the definition of higher-order differences)}. Since $\parts {\vec{u}}$ is a directed set, $\Phi_{x,\vec u}^{f,n}$ has a greatest lower bound whenever $D$ is a directed-complete lattice cone.
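The monotonicity of $\Phi_{x,\vec u}^{f,n}$ can be observed in the simplest case: on the half-line with $n = 1$, $\Phi$ sums first differences over a partition, and refining the partition can only decrease the sum when $f$ is convex. A small illustrative check (assuming $C = D = \mathbb{R}_+$ and $f = \exp$):

```python
import math

f = math.exp
x = 0.1

def phi(pi):
    # n = 1: sum of the first differences of f at x over the partition pi
    return sum(f(x + y) - f(x) for y in pi)

coarse = [0.4]            # three nested partitions of 0.4
finer = [0.2, 0.2]
finest = [0.1] * 4
# refining the partition does not increase phi
assert phi(coarse) >= phi(finer) >= phi(finest)
```

The infimum of these values over all partitions is exactly the rank-$1$ derivative of Definition~\ref{def:dersn}.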
\begin{definition}[from~\cite{mcmillan}]\label{def:dersn}
Let $C$ be a lattice cone, $D$ a directed-complete lattice cone, and $f: \boule C \rightarrow D$ a pre-stable function.
Let $\vec u \in \boule C_x^n $. Then the \emph{derivative of $f$ in $x$ at rank $n$ towards the direction $\vec u$} is the function $\ders n f(x \mid \cdot): \boule{C_x^n} \rightarrow D$ defined as
$$\ders n f (x \mid \vec u) = \inf_{\vec \pi \in \parts{\vec{u}} } \Phi_{x,\vec u}^{f,n} (\vec \pi).$$ \end{definition}
We are now going to illustrate Definition~\ref{def:dersn} on a basic case where we take $f:\Rp{} \rightarrow \Rp{}$, in order to highlight the link with differentiation in real analysis. \begin{example}\label{ex:derdiff}
We take $C$ and $D$ as the positive real half-line, and $x \in [0,1[$. Let $h$ be such that $x + h \leq 1$. Then:
$$\ders 1 f (x \mid h) =\inf_{\pi \text{ with } \partit {\pi}{h} } \sum_{y \in \pi} \big( f(x+y) - f(x) \big) $$
We already know, since $f$ is pre-stable and hence absolutely monotone as a function on the reals, that $f$ is convex, and moreover differentiable (see~\cite{widder}). From there, by considering a particular family of partitions, we can show that $\ders 1 f (x \mid h) = h \cdot f'(x)$\shortv{ (the proof can be found in the long version)}.
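This computation can be checked numerically; the following is a minimal sketch, assuming $f = \exp$ and approximating the infimum by the uniform partitions $[\frac h n, \ldots, \frac h n]$ (on which the sum of first differences collapses to a single term):

```python
import math

def ders1_uniform(f, x, h, n):
    """Sum of the first differences of f at x over the uniform partition
    [h/n, ..., h/n]; equals n * (f(x + h/n) - f(x))."""
    return n * (f(x + h / n) - f(x))

f = math.exp
x, h = 0.2, 0.5
approx = ders1_uniform(f, x, h, 10**6)
# tends to h * f'(x); here f' = exp
assert abs(approx - h * math.exp(x)) < 1e-4
```

By convexity every partition gives a value at least $h \cdot f'(x)$, so the uniform partitions already realize the infimum in the limit.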
\longv{
\begin{proof}
First, let $\pi$ be any partition of $h$. Since $f$ is differentiable and convex, it holds that:
$$\forall z , \, f(x+z) - f(x) \geq f'(x)\cdot z. $$
As a consequence, we see that for any partition $\pi$ of $h$, it holds that $ \sum_{y \in \pi} (f(x+y) - f(x)) \geq f'(x) \cdot h$, and it implies that $\ders 1 f (x \mid h) \geq f'(x)\cdot h$. To show the reverse inequality, it is enough to consider the particular family of partitions $\pi_n = [\frac h n, \ldots, \frac h n]$ of $h$: we see that \begin{align*}
\sum_{y \in \pi_n} \big( f(x+y) - f(x) \big) & = n \cdot \Big( f(x + \frac{h}{n}) - f(x) \Big)\\ & = h \cdot \frac{ f(x + \frac{h}{n}) - f(x)}{\frac h n} \rightarrow_{n \rightarrow \infty} h \cdot f'(x).
\end{align*}
\end{proof}
} \end{example} \begin{lemma}\label{lemma:aux1}
Let $C$ be a lattice cone, $D$ a directed-complete cone, and $f$ a pre-stable function from $C$ to $D$. Let $x \in \bouleopen C$.
Then $\ders n f(x \mid \cdot)$ is a symmetric function $\boule{({C_x^n})} \rightarrow D$ such that moreover:
\begin{itemize}
\item $0 \leq \ders n f (x \mid \vec u) \leq \diff{}n f (x \mid \vec u)$.
\item Both $\vec u \mapsto \ders n f (x \mid \vec u )$ and $\vec u \mapsto \diff {} n f(x \mid \vec u) - \ders n f (x \mid \vec u)$ are pre-stable functions from $C_x^n$ to $D$.
\end{itemize} \end{lemma} \begin{proof} The proof is given in Lemma 3.31 in~\cite{mcmillan}. It comes almost directly from Definition~\ref{def:dersn}.
\end{proof}
We have seen in Example~\ref{ex:derdiff} that our so-called derivatives of pre-stable functions play the same role as the differentials of a differentiable function, which are actually \emph{linear} operators $df_x^n: \mathbb{R}^n \rightarrow \mathbb{R}$. While the abstract domains considered in~\cite{mcmillan} do not have to be $\mathbb{R}_+$ semi-modules, and thus have no notion of linearity, we are able to show in the complete cone case\shortv{ (see the long version for the proof)} that the $\ders n f$ are linear in the sense of Lemma~\ref{lemma:additivity} below. \begin{lemma}\label{lemma:additivity}
Let $C$, $D$ be two directed complete lattice cones, $x \in \bouleopen C$.
\begin{itemize}
\item Let $f:\boule C \rightarrow D$ be a pre-stable function. Then $\ders n f(x \mid \cdot) : \boule {(C_x^n)} \rightarrow D$ is $n$-linear, in the sense that, for each of its arguments, it commutes with the sum and multiplication by a scalar.
\item For any $\vec u \in \boule {(C_x^n)}$, the function $f \in \mathbf{Cstab}(C, D) \mapsto \ders n f(x \mid \vec u) \in D$ is linear and directed Scott-continuous.
\end{itemize}
\end{lemma} \longv{\begin{proof}
We are going to use the following auxiliary lemma:
\begin{lemma}[from~\cite{mcmillan}]\label{lemma:scdersf}
Let $C$ and $D$ be two directed cones, and $f:C \rightarrow D$ linear and non-decreasing, such that moreover for all subset $F$ of $C$ directed for the reverse order, $f(\inf F) = \inf f(F)$. Then $f$ is directed Scott-continuous.
\end{lemma}
\begin{proof}
Let $E$ be a directed subset of $C$.
We define $F = \{ \sup E - x \mid x \in E \}$. Since $E$ is directed, $F$ is directed for the reverse order, and as a consequence, $\inf f(F) = f(\inf F)$. But we see that $\inf F = 0$. Therefore, since $f$ is linear, $f(\inf F) = 0$. As a consequence (and again by linearity of $f$): $f(\sup E) - \sup f(E) = \inf f(F) = 0$.
\end{proof}
We are now going to show Lemma~\ref{lemma:additivity}.
\begin{itemize} \item We first show that $\ders n f(x \mid \cdot) : \boule {(C_x^n)} \rightarrow D$ is $n$-linear.
The additivity is given by Lemma 3.72 of~\cite{mcmillan}. The commutation with scalar multiplication is not proved in this form in~\cite{mcmillan}, because they have a more general notion of a system of partitions. We first show that the result holds when $\lambda$ is a rational number. To do that, we use the fact that $\pi = [\frac x n , \ldots, \frac x n]$ is always a partition of $x$. Then, let $\lambda \in \Rp{}$ and $\vec u =(u_1, \ldots,u_n)$ be such that both $\vec u$ and $\vec v = (u_1, \ldots, \lambda u_i, \ldots ,u_n)$ are in $\boule {C_x^n}$. Let $\overline r = (r_m)_{m \in \mathbb{N}}$ and $\overline q = (q_m)_{m \in \mathbb{N}}$ be two sequences of rational numbers such that $\overline r$ tends to $\lambda$ from below, and $\overline q$ tends to $\lambda$ from above. We see that:
$$\ders n f(x \mid \vec v) = 2\cdot \ders n f (x \mid u_1, \ldots, \frac \lambda 2 \cdot u_i, \ldots u_n) .$$
We take $N$ such that for every $m \geq N$, $q_m \leq 2 \cdot \lambda$: since $\ders n f(x\mid \cdot)$ is non-decreasing, we see that:
\begin{align*}
\ders n f &(x \mid u_1, \ldots, \frac {r_m} 2 \cdot u_i, \ldots ,u_n) \\&\leq
\ders n f (x \mid u_1, \ldots, \frac \lambda 2 \cdot u_i, \ldots, u_n) \\ &\leq
\ders n f (x \mid u_1, \ldots, \frac {q_m} 2 \cdot u_i, \ldots ,u_n)
\end{align*}
Applying now the linearity for rational numbers, we see that for every $m \geq N$:
\begin{align*}
r_m \cdot \ders n f &(x \mid u_1, \ldots, \frac {1} 2 \cdot u_i, \ldots ,u_n) \\&\leq
\ders n f (x \mid u_1, \ldots, \frac \lambda 2 \cdot u_i, \ldots, u_n) \\ &\leq
q_m \cdot \ders n f (x \mid u_1, \ldots, \frac {1} 2 \cdot u_i, \ldots ,u_n)
\end{align*}
As a consequence:
\begin{align*}
\sup_{m \geq N} r_m \cdot \ders n f &(x \mid u_1, \ldots, \frac {1} 2 \cdot u_i, \ldots ,u_n) \\&
\leq \ders n f (x \mid u_1, \ldots, \frac \lambda 2 \cdot u_i, \ldots, u_n) \\ &\leq \inf_{m \geq N}
q_m \ders n f (x \mid u_1, \ldots, \frac {1} 2 \cdot u_i, \ldots ,u_n)
\end{align*}
and by Scott-continuity of $\cdot$, it tells us that $\ders n f (x \mid u_1, \ldots, \frac \lambda 2 \cdot u_i, \ldots, u_n) = \lambda \ders n f (x \mid u_1, \ldots, \frac {1} 2 \cdot u_i, \ldots ,u_n) $. We can now conclude: recall that $
\ders n f(x \mid \vec v) = 2\cdot \ders n f (x \mid u_1, \ldots, \frac \lambda 2 \cdot u_i, \ldots u_n) .$ Therefore:
\begin{align*}
\ders n f(x \mid \vec v) &=2 \cdot \lambda \cdot \ders n f (x \mid u_1, \ldots, \frac {1} 2 \cdot u_i, \ldots ,u_n)\\
& = \lambda \cdot \ders n f(x \mid \vec u) \quad \text{ since }\frac 1 2 \in \mathbb{Q}.
\end{align*}
\item We now show that $f \in \mathbf{Cstab}(C, D) \mapsto \ders n f(x \mid \vec u) \in D$ is linear and Scott-continuous. It is immediate that it is linear, since each of the $f \mapsto \diff {}n f{(x\mid u)}$ is. We now use Lemma~\ref{lemma:scdersf} to show the Scott-continuity: it tells us that we only have to check that for every $E \subseteq C \Rightarrow_m D$ directed for the reverse order, $\ders n {(\inf E)} (x \mid \vec u) = \inf_{f \in E} \ders n f (x \mid \vec u)$. Observe that:
\begin{align*}
\ders n {(\inf E)} (x \mid \vec u) &= \inf_{\vec \pi \in \parts{\vec{u}} } \sum_{y_1 \in \pi_1} \ldots \sum_{y_n \in \pi_n} \diff {}n {(\inf E)} (x \mid y_1, \ldots, y_n) \\
&= \inf_{\vec \pi \in \parts{\vec{u}} } \sum_{y_1 \in \pi_1} \ldots \sum_{y_n \in \pi_n} \inf_{f \in E} \{\diff{} n {f} (x \mid y_1, \ldots, y_n)\}\\
&= \inf_{\vec \pi \in \parts{\vec{u}} } \inf_{f \in E}\{\sum_{y_1 \in \pi_1} \ldots \sum_{y_n \in \pi_n} \diff{}n {f} (x \mid y_1, \ldots, y_n)\}\\
& = \inf_{f \in E} \ders n f (x \mid \vec u) \text{ since the infs can be exchanged}.
\end{align*}
\end{itemize} \end{proof}} \shortv{The proof of Lemma~\ref{lemma:additivity} can be found in the long version.} The linearity of the derivatives means that for every $x \in \bouleopen C$, we can extend $\ders n f(x \mid \cdot)$ to a function $C_x^n \rightarrow D$. We will implicitly use this extension in the following, especially in Definition~\ref{def:TSn}. \subsubsection{Taylor Series for pre-stable functions} We have seen above that the $\ders n f$ are a notion of differential for pre-stable functions. Following this idea further, and the work of McMillan~\cite{mcmillan}, we define an analogue of the Taylor expansion. Throughout this section, $C$ and $D$ are directed-complete lattice cones, and $f: \boule C \rightarrow D$ is a pre-stable function. \begin{definition}~\label{def:TSn}
Let $x \in \bouleopen C$.
We call \emph{Taylor partial sum of $f$ in $x$ at rank $N$} the function
$Tf^N(x \mid \cdot): \boule {C_x^1} \rightarrow D$ defined as: $$Tf^N(x \mid y) = f(x) + \sum_{k=1}^N \frac 1 {k!} \ders k f (x \mid y, \ldots, y) .$$
\end{definition} The next step consists in establishing that the $Tf^N$ actually form a non-decreasing bounded sequence in the cone of pre-stable functions from $C$ to $D$, which will allow us to define the \emph{Taylor series of $f$} as the supremum of the $Tf^N$ (see the long version for more details on the proof).
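On the half-line, the partial sums can be computed directly. The sketch below is illustrative: it assumes $C = D = \mathbb{R}_+$ and the identification $\ders k f (0 \mid y, \ldots, y) = f^{(k)}(0) \cdot y^k$, which extrapolates Example~\ref{ex:derdiff} to higher ranks.

```python
import math

def taylor_partial(dk, y, N):
    """T f^N(0 | y) = sum_{k <= N} f^(k)(0) * y^k / k!, under the assumed
    identification of the rank-k derivative with f^(k)(0) * y^k."""
    return sum(dk(k) * y**k / math.factorial(k) for k in range(N + 1))

# f(y) = 1/(2 - y) is absolutely monotone on [0,1): f^(k)(0) = k! / 2^(k+1)
f = lambda y: 1.0 / (2.0 - y)
dk = lambda k: math.factorial(k) / 2.0**(k + 1)

y = 0.9
sums = [taylor_partial(dk, y, N) for N in range(40)]
assert all(a <= b + 1e-12 for a, b in zip(sums, sums[1:]))  # non-decreasing
assert all(s <= f(y) + 1e-12 for s in sums)                 # bounded by f(y)
assert abs(sums[-1] - f(y)) < 1e-3                          # converges to f(y)
```

The sequence is non-decreasing, bounded by $f(y)$, and its supremum recovers $f(y)$, which is exactly the content of the extended Bernstein theorem below in this one-dimensional case.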
\longv{ To that end, we are first going to establish an alternative characterization of the Taylor series, which is the one used in~\cite{mcmillan} in the framework of abstract domains. It consists in substituting each of the $\ders n f(x \mid y, \ldots, y)$ with its expression given by Lemma~\ref{lemma:dersn} below. The validity of Lemma~\ref{lemma:dersn}, and thus the equivalence of the two definitions, depends on the fact that we work with directed-complete cones.
\begin{lemma}[Alternative Characterisation of Derivatives]\label{lemma:dersn}
Let $x \in \bouleopen C, y \in \boule {C_x^1}$, and $k \in \mathbb{N}$. Then it holds that $\ders k f (x \mid y, \ldots, y) $ is equal to: $$\sup_{\pi = [u_1, \ldots, u_n] \in \parts y}\sum_{\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, n ]\!] } \ders k f (x \mid u_{\sigma(1)}, \ldots, u_{\sigma(k)}) $$ \end{lemma} \shortv{The proof of Lemma~\ref{lemma:dersn} can be found in the long version.} \longv{\begin{proof}
We first introduce the following notation: if $\pi = [u_1, \ldots, u_n]$ is a partition of $y$, and $\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, n ]\!]$ an injective function, we denote $\sigma(\pi) = (u_{\sigma(1)}, \ldots, u_{\sigma(k)})$.
We denote $A = \sup_{\pi \in \parts y}\sum_{\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, n ]\!] } \ders k f (x \mid {\sigma(\pi)})$.
We show separately the two inequalities.
\begin{itemize}
\item We first show that $A \geq \ders k f (x \mid y, \ldots, y)$. For every $n \in \mathbb{N}$, it holds that $\pi = (\frac 1 n \cdot y, \ldots, \frac 1 n \cdot y)$ is a partition of $y$. Therefore for every $n \in \mathbb{N}$: \begin{align*}
A &\geq \sum_{\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, n ]\!] } \ders k f (x \mid \frac y n, \ldots, \frac y n)\\
& = \frac{n!}{(n-k)!} \ders k f (x \mid \frac y n, \ldots, \frac y n)\\
&= \frac{n!}{(n-k)! \cdot n^k} \ders k f (x \mid y, \ldots, y) \end{align*} The sequence $\frac{n!}{(n-k)! \cdot n^k}$ tends to $1$ when $n$ tends to infinity. By Scott-continuity, it means that $A \geq \ders k f (x \mid y, \ldots, y)$. \item Let us now show that $A \leq \ders k f (x \mid y, \ldots, y)$. Let $\pi = (u_1, \ldots, u_n) \in \parts y$. Then: \begin{align*}
& \sum_{\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, n ]\!] } \ders k f (x \mid \sigma(\pi)) \\
& \leq \sum_{i_1 \in \{1, \ldots, n\}} \ldots \sum_{i_k \in \{1, \ldots, n\}} \ders k f (x \mid u_{i_1}, \ldots, u_{i_k}) \\
& = \ders k f(x \mid y, \ldots, y) \text{ by }n\text{-linearity of }\ders k f(x \mid \cdot)
\end{align*} Since $A = \sup_{\pi \in \parts y}\sum_{\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, n ]\!] } \ders k f (x \mid {\sigma(\pi)}) $, we see that $A \leq \ders k f(x \mid y, \ldots, y) $, which ends the proof. \end{itemize} \end{proof}
With this characterization,~\cite{mcmillan} shows that the sequence of functions $(x \in \boule C_y^1 \mapsto Tf^N(x \mid y))$ is bounded by $(x \in \boule C_y^1 \mapsto f(x + y))$ in the cone of pre-stable functions from $C_y^1$ to $D$. } } \begin{lemma}\label{lemma:tfN} Let $y$ be in $\bouleopen C$, and $x$ in $\boule C_y^1$. Then $\forall N \in \mathbb{N}$, $Tf^N(x \mid y) \leq f(x + y)$, and the function $(x \in \boule C_y^1 \mapsto f(x + y) - Tf^N(x \mid y))$ is pre-stable. \end{lemma} \longv{ \begin{proof}
Let $x \in \bouleopen C$ and $y \in \boule C_x^1$. We are able to express $f(x+y)$ using $f(x)$ and finite differences on any partition of $y$: indeed, for every partition $\pi$ of $y$, it holds that:
\begin{equation}\label{eq:idf}
f(x + y) = f(x) + \sum_{1 \leq k\leq n} \frac 1 {k!}\sum_{\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, n ]\!] } \diff{} k f (x, \sigma(\pi))
\end{equation}
\shortv{ (The proof of~\eqref{eq:idf} is done in~\cite{mcmillan} in a purely combinatorial way, by induction on the cardinal of the partition $\pi$. It can be found in the long version for the case $n=2$.)}
\longv{ \begin{proof}
It is an algebraic calculation, done in~\cite{mcmillan}. We give here the proof for $n=2$. Let $\pi = [y_1, y_2]$ be a partition of $y$. Then we see that: \begin{align*}
f(x) &+ \sum_{1 \leq k\leq n} \frac 1 {k!}\sum_{\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, n ]\!] } \diff{} k f (x, \sigma(\pi)) \\
&= f(x) + \diff{} 1 f (x,y_1) + \diff{}1 f(x,y_2) \\
& \qquad +\frac 1 2 \cdot (\diff{}2 f(x,y_1,y_2) + \diff{}2 f(x,y_2,y_1) )\\
& = f(x) + (f(x+y_1) - f(x)) + (f(x+y_2)-f(x)) \\
& \qquad + (f(x+y_1+y_2) - f(x+y_1) - f(x+y_2)+f(x)) \\
& = f(x+y_1+y_2) = f(x+y).
\end{align*}
\end{proof}
}
Moreover, we are also able to express the derivatives of $f$ at $x$ towards the direction $y$ using the partitions of $y$ (this is the sense of Lemma~\ref{lemma:dersn}). Accordingly:
\begin{align*}
&Tf^N(x \mid y) = f(x) + \sum_{k=1}^N \frac 1 {k!} \ders k f (x \mid y, \ldots, y) \\
&= f(x) + \sum_{k=1}^N \frac 1 {k!} \sup_{\pi \in \parts y}\sum_{\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, \#(\pi) ]\!] } \ders k f (x \mid \sigma(\pi))
\end{align*}
Using Lemma~\ref{lemma:aux1}, we see that it implies:
$$Tf^N(x \mid y) \leq f(x) + \sum_{k=1}^N \frac 1 {k!} \sup_{\pi \in \parts y}\sum_{\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, \#(\pi) ]\!] } \diff{} k f (x \mid \sigma(\pi))$$
We can now use the Scott continuity of $+$ and $\cdot$, and we obtain:
$$Tf^N(x \mid y) \leq \sup_{\pi \in \parts y} f(x) + \sum_{k=1}^N \frac 1 {k!}\sum_{\sigma:[\![ 1, k ]\!] \hookrightarrow [\![ 1, \#(\pi) ]\!] } \diff{} k f (x \mid \sigma(\pi)) $$
We can now conclude using~\eqref{eq:idf}:
$$Tf^N(x\mid y) \leq \sup_{\pi \in \parts y} f(x+y) \leq f(x+y)$$
The proof of the pre-stability of the function can be found in~\cite{mcmillan}. It is based on the fact that each of the above inequalities can be seen as an inequality in the cone of pre-stable functions.
\end{proof}} Since we have shown that the partial sums of the Taylor series of $f$ form a bounded non-decreasing sequence in the complete cone of pre-stable functions from $C_x^1$ to $D$, we can now define the \emph{Taylor series of $f$} as its supremum. \begin{definition}~\label{def:TS}
We define $Tf(x \mid \cdot): \boule {C_x^1} \rightarrow D$ \emph{the Taylor series of $f$ in $x$}, and $Rf(x \mid \cdot): \boule {C_x^1} \rightarrow D$ \emph{the Remainder of $f$ in $x$} as:
\begin{align*}
Tf(x \mid y) &= \sup_{N \in \mathbb{N}}Tf^N(x \mid y)\\
Rf(x \mid y) &= f(x + y) - Tf(x \mid y).
\end{align*} \end{definition}
\subsubsection{Extended Bernstein's theorem}
Our goal from here is to show that for any $x \in \bouleopen C$, $Rf(0 \mid x) = 0$. We recall here the main steps of the proof of~\cite{mcmillan}. It is based on two technical lemmas that analyze more precisely the behavior of the remainder of $f$. \longv{The first one is actually a summary of several technical results shown separately in~\cite{mcmillan}.}\shortv{ The proofs are done in~\cite{mcmillan}, and a sketch can be found in the long version.}
\begin{lemma}\label{lemma:auxb1}
Let $x \in \bouleopen C$.
Then it holds that both:
\begin{align*}
Rf_x : y \in \boule{C_x^1} &\mapsto Rf(x \mid y) \in D \\
\text{and }\quad Rf^y: x \in \boule{C_y^1} &\mapsto Rf(x\mid y) \in D
\end{align*}
are pre-stable functions. Moreover $Rf_x(0) = 0$, and
for every $x \in \boule{C_y^1}$, it holds that $T(Rf^y)(0 \mid x) = 0$.
\end{lemma}
\longv{\begin{proof}
We give here only sketches of the proofs. The detailed proof can be found in~\cite{mcmillan}.
\begin{itemize}
\item For $Rf^y$, it is a consequence of the fact that both $f^y: x \in \boule C_y^1 \mapsto f(x+y)$ and $Tf^y: x \in \boule C_y^1 \mapsto Tf(x \mid y)$ are pre-stable functions, with $Tf^y \leq f^y$ in the cone of pre-stable functions, and $Rf^y = f^y - Tf^y$.
\item The pre-stability of $Rf_x$ is stated in Theorem 4.1 of~\cite{mcmillan}. It is based on a previous technical lemma shown in~\cite{mcmillan}, which says that, for a function to be pre-stable, it is sufficient that all its differences \emph{at $0$} are non-negative. Then the idea is to fix $x$, and to consider for every $N \in \mathbb{N}$ the function $g_N:\, y \in \boule C_x^1 \mapsto f(x+ y) - Tf^N(x \mid y)$. It is then possible to show that for any $n \in \mathbb{N}$, and $\vec u \in \boule C_y^n$, $\diff{} n {g_N} (0 \mid \vec u) = \diff{} n f(x \mid \vec u) - \diff{} n (Tf^N_x)(0 \mid \vec u)$, with $Tf^N_x: y \mapsto Tf^N(x \mid y)$. By a computation on the $\diff{} n (Tf^N_x)(0 \mid \vec u)$, we see that the $\diff {} n {g_N}(0 \mid \vec u)$ are non-negative. Then, we conclude using the fact that $Rf_x(y) = \inf_{N \in \mathbb{N}} g_N(y)$. \item The fact that $Rf_x(0) = 0$ is a direct consequence of the $n$-linearity of the map $\vec u \mapsto \ders n f(x \mid \vec u)$ for $n \geq 1$.
\item The fact that $T(Rf^y)(0 \mid x) =0$ is shown in Lemma 5.26 of~\cite{mcmillan}. It is based on the fact that the Scott-continuity of $f \mapsto \ders n f (x \mid \vec u)$ allows us to show that if we take $g(y) = \ders n f(y \mid \vec u)$, then $\ders k g(x \mid \vec v) = \ders {n+k} f(x \mid \vec u, \vec v)$, and from there to compute the Taylor series of $Rf^y$.
\end{itemize}
\end{proof}}
The second technical lemma gives us a way to decompose $Rf(x \mid y)$ into smaller pieces. It is stated in Theorem 5.3 in~\cite{mcmillan}. \longv{
\begin{lemma}\label{lemma:auxb2}
Let $x, y$ be such that $x + y \in \boule C$. Then $Rf(0 \mid x+y) \leq Rf(y \mid x) + Rf( x \mid y)$; furthermore $Rf(0 \mid x+y) \geq Rf(x \mid y)$ and $Rf(0\mid x+y) \geq Rf(y \mid x)$, and moreover all these inequalities hold in the cone of pre-stable functions.
\end{lemma}} \longv{
\begin{proof}
We give here a brief sketch of the proof of the first statement. More details can be found in~\cite{mcmillan}. We introduce the function $Rf^{+ x}: y \in \boule C_x^1 \mapsto Rf(0 \mid x+y)$.
The proof is based on the fact that it is possible to establish (see~\cite{mcmillan}):
\begin{align}
T(Rf^{+ x})(0 \mid y) &= T(Rf^x)(0 \mid y) \label{eq:proofmckey1}\\
\text{ and }R(Rf^{+ x})(0 \mid y) &=R (Rf_x)(0 \mid y) \label{eq:proofmckey2}
\end{align}
As a consequence, we can write:
\begin{align*}
&Rf(x \mid y) + Rf(y \mid x) = Rf_x(y) + Rf^x(y)\\
& = T(Rf_x)(0 \mid y) + R(Rf_x)(0 \mid y) + T(Rf^x)(0 \mid y) + R(Rf^x)(0 \mid y) \\
& = T(Rf_x)(0 \mid y) + R(Rf^{+ x})(0 \mid y) + T(Rf^{+ x})(0 \mid y) + R(Rf^x)(0 \mid y)\\
& \quad \text{by~\eqref{eq:proofmckey1} and~\eqref{eq:proofmckey2}}\\
& = Rf^{+x}(y) + T(Rf_x)(0 \mid y) + R(Rf^x)(0 \mid y) \\
& \geq Rf^{+x}(y) = Rf(0 \mid x+y),
\end{align*}
and we see that we have also shown that the difference is pre-stable. The other two statements are shown in a similar way.
\end{proof}
We use Lemma~\ref{lemma:auxb2} to show a more involved upper bound on $Rf(0 \mid x)$.}
\begin{lemma}\label{lemma:auxbern1}
Let $x \in \boule C$, and let $\pi = [x_1, \ldots, x_n]$ be a partition of $x$ such that for every $x_i \in \pi$, $x + x_i \in \boule C$. Then $Rf(0 \mid x) \leq \sum_{1 \leq i \leq n} \inf_{\pi_i \mid \partit{\pi_i} x_i} \sum_{z \in \pi_i} Rf(x \mid z).$
\end{lemma} \longv{ \begin{proof}
For every $x \in \boule C$, we denote $g_x: y \in \boule C_x^1 \mapsto f(x+y)$. From the definition of the derivatives, we see that $Rg_x(0 \mid y) = Rf(x \mid y)$.
Let $\pi_1, \ldots, \pi_n$ be such that each $\pi_i$ is a partition of $x_i$. Then $\pi_1 + \ldots +\pi_n$ is a partition of $x$. Lemma~\ref{lemma:auxb2}, applied several times and combined with the fact that $Rg_x(0 \mid y) = Rf(x \mid y)$, tells us that:
$$Rf(0 \mid x) \leq \sum_{z \in \pi_1 + \ldots +\pi_n} Rf (z' \mid z), $$
where $z' = \sum_{u \in \pi_1 + \ldots +\pi_n \mid u \neq z } u $.
Moreover, we know that $Rf^z$ is pre-stable (by Lemma~\ref{lemma:auxb1}). Since, for every $z \in \pi_1 + \ldots +\pi_n$, $z' \leq x$ (this is immediate, since $\pi_1 + \ldots +\pi_n$ is a partition of $x$), it follows that $ Rf(z' \mid z) =Rf^z(z') \leq Rf^z(x) = Rf (x \mid z) $.
As a direct consequence, we see that
$Rf(0 \mid x) \leq \sum_i \sum_{z \in \pi_i} Rf (x \mid z), $
which leads to the result.
\end{proof}}
We are now ready to show the main result of this section.
\begin{proposition}[Extended Bernstein's Theorem]\label{prop:ebt}
Let $C$, $D$ be directed-complete lattice cones, and $f: \boule C \rightarrow D$ a pre-stable function.
Then for every $x \in \bouleopen{{C}}$, it holds that $f(x) = Tf(0 \mid x)$.
\end{proposition}
\begin{proof}
Let $x \in \bouleopen C$.
First, we consider the partition $\pi = [\frac x N, \ldots, \frac x N]$ of $x$, with $N$ taken such that $x + \frac x N \in \boule C$. We know that such an $N$ exists since $x$ is in the open unit ball $\bouleopen C$.
We use Lemma~\ref{lemma:auxbern1} on $Rf(0\mid x)$, and the partition $\pi$, and it tells us that:
\begin{equation}\label{eq:auxeb2}
Rf(0 \mid x) \leq \sum_{1 \leq j \leq N}\inf_{\pi=(u_1, \ldots, u_n) \in \parts {\frac x N}} \sum_{1 \leq i \leq n} Rf (x \mid u_i) . \end{equation} Observe that the above expression is valid, since for every $u_i$ in a partition $\pi$ of $\frac x N$, $x+u_i \in \boule C$. We know, by Lemma~\ref{lemma:auxb1} that $Rf(x \mid 0) = 0$. Therefore, we can rewrite~\eqref{eq:auxeb2} as:
\begin{equation}\label{eq:auxeb3}
Rf(0 \mid x) \leq \sum_{1 \leq j \leq N} \inf_{\pi=(u_1, \ldots, u_n) \in \parts {\frac x N}}\sum_{1 \leq i \leq n} Rf (x \mid u_i) - Rf (x \mid 0) .
\end{equation}
Moreover, we are able to express the right-hand side of~\eqref{eq:auxeb3} in terms of the finite differences of the pre-stable function $Rf_{x}$; indeed, for each $i$,
\begin{equation}\label{eq:auxeb4}
\diff {}1 {Rf_{x} }(0,u_i) = Rf (x \mid u_i) - Rf (x \mid 0) .
\end{equation}
By the definition of derivatives (see Definition~\ref{def:dersn}), we see that
\begin{equation}\label{eq:auxeb5}
\ders 1 {Rf_{x}}(0 \mid \frac x N) = \inf_{\pi \in \parts {\frac x N}} \sum_{v \in \pi} \diff{} 1 {Rf_{x}}(0,v)
\end{equation}
We see now that combining~\eqref{eq:auxeb3},~\eqref{eq:auxeb4} and~\eqref{eq:auxeb5} leads us to $ Rf(0 \mid x) \leq \sum_{1 \leq j \leq N}\ders 1 {Rf_{x}}(0 \mid \frac x N).$
Moreover, we know that for every $y \in \boule{(C_x^1)}$, $\ders 1 {Rf_{x}}(0 \mid y) \leq T(Rf_{x} )(0 \mid y).$
Hence, using Lemma~\ref{lemma:auxb1} again, which says that $T(Rf_{x})(0 \mid {\frac x N}) = 0$, it holds that $Rf(0 \mid x) = 0$.
\end{proof}
\section{$\mathbf{Cstab}$ is a conservative extension of $\mathbf{Pcoh}_!$}\label{sect:pcs}
Probabilistic coherence spaces (PCS) were introduced by Danos and Ehrhard in~\cite{pcsaamohpc} as a model of higher-order probabilistic computation. They have been successfully used to give a fully abstract model both of $\textsf{PCF}_\oplus$ and of a discrete probabilistic extension of Levy's Call-by-Push-Value. In this section, we briefly present the basic definitions from~\cite{pcsaamohpc} and highlight an embedding of PCSs into cones.
\subsection{Probabilistic Coherence Spaces} The definition of the PCS model of Linear Logic follows the tradition initiated by Girard with Coherence Spaces in~\cite{girard1986system}, and followed for instance by Ehrhard in~\cite{Ehrhard93} when defining hypercoherence spaces. A coherence space interpreting a type can be seen as a symmetric graph, and the interpretation of a program of this type is a \emph{clique} of this graph. Interestingly, such a graph $A$ can be alternatively characterized by giving its set of vertices (which we will call its \emph{web}), together with a family of subsets of this web, meant to be the family of cliques of $A$. An arbitrary family of subsets of a given web then arises as the family of cliques of some graph precisely when a certain \emph{duality criterion} is satisfied.
PCSs are designed to express the probabilistic behavior of programs. As a consequence, a clique is no longer a subset of the web, but a \emph{quantitative} way to associate a non-negative real coefficient to every element of the web. \begin{definition}[Pre-Probabilistic Coherent Spaces]\label{def:prepcs} A pre-PCS is a pair $X = (\web X, \prog X)$, where $\web X$ is a countable set called the \emph{web of $X$}, and $\prog X \subseteq \Rp{\web X}$ is a set whose elements are called the \emph{cliques of $X$}. \end{definition} We need to introduce some notation to deal with infinite-dimensional $\mathbb{R}$-vector spaces. Given a countable web $A$ and an element $a$ of $A$, we denote by $e_a$ the vector in $\Rp A$ which is $1$ at $a$ and $0$ elsewhere. We also introduce a scalar product on vectors in $\Rp A$: if $u, v \in \Rp A$, we write $\scal u v = \sum_{a \in A} u_a v_a \in \mathbb{R} \cup \{\infty\} $. Moreover, if $A$ and $B$ are countable sets, $x \in \Rp{A \times B}$, and $u \in \Rp{A}$, we denote by $x \cdot u$ the vector in $(\Rp{}\cup \{\infty\})^{B}$ given by $(x \cdot u)_b = \sum_{a \in A} x_{a,b} u_a$ for every $b \in B$.
We now give examples of pre-PCSs modeling discrete data types. First, we define a pre-PCS $\mathbf{1}$ corresponding to the unit type. Since unit-type programs have only one possible outcome (which they may or may not reach), $\mathbf{1}$ has only one vertex: $\web \mathbf{1} = \{\star\}$. We want the denotation of a unit-type program to express its probability of termination, so we take the set of cliques $\prog \mathbf{1}$ to be the interval $[0,1]$.
Let us now look at what happens when we consider programs of type $N$: a program can now have countably many possible outcomes, so the web will be $\mathbb{N}$, and cliques will be sub-distributions on these vertices.
\begin{example}[Pre-PCS of Natural Numbers]\label{ex:Nat} We define the pre-PCS $\mathbb{N}^{\mathbf{Pcoh}}$ by taking $\web {\mathbb{N}^\mathbf{Pcoh}} = \mathbb{N}$, and $\prog {\mathbb{N}^\mathbf{Pcoh}} = \{u \in \Rp {\mathbb{N}} \mid \sum_{n \in \mathbb{N}} u_n \leq 1\}$. It corresponds to the denotational semantics of the base type $N$ of $\textsf{PCF}_\oplus$ in $\mathbf{Pcoh}_!$.
\end{example}
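Concretely, a clique of $\mathbb{N}^{\mathbf{Pcoh}}$ is just a sub-probability distribution on $\mathbb{N}$. The membership test can be sketched as follows (our illustration; finitely supported vectors are represented as Python dictionaries):

```python
def is_clique_nat(u):
    """Check that a finitely supported u: N -> R+ belongs to
    P(N^Pcoh), i.e. is a sub-probability distribution on N:
    non-negative coefficients with total mass at most 1."""
    return all(p >= 0 for p in u.values()) and sum(u.values()) <= 1

assert is_clique_nat({0: 0.5, 3: 0.25})       # total mass 0.75 <= 1
assert not is_clique_nat({0: 0.75, 1: 0.75})  # total mass 1.5 > 1
```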
We now need to give a \emph{quantitative} bi-duality criterion, to specify which of the families $\prog X \subseteq \Rp{\web X}$ are indeed \emph{valid} families of cliques. To do so, we first define a \emph{duality operator}: if $X = (\web X , \prog X)$ is a pre-PCS, we define the pre-PCS $\dual X = (\web X, \{u \in \Rp {\web X} \mid \forall v \in \prog X, \, \scal u v \leq 1 \})$. We are now ready to give the conditions for a pre-PCS to actually be a PCS:
\begin{definition}[Probabilistic Coherent Spaces]\label{def:pcs}
A pre-PCS $X$ is a \emph{PCS} if $\dual {\dual X} = X$ and moreover the following two conditions hold:
\begin{itemize}
\item $ \forall a \in \web X$, there exists $\lambda >0$ such that $\lambda e_a \in \prog X$.
\item $\forall a \in \web X$, there exists $M \geq 0$, such that for every $u \in \prog X$, $u_a \leq M$.
\end{itemize} \end{definition} It is easy to check that both $\mathbf{1}$ and $\mathbb{N}^{\mathbf{Pcoh}}$ are indeed PCSs.
As highlighted in Example 4.4 of~\cite{pse}, we can associate a cone to any PCS in a generic way. The idea is to close the set of cliques under scaling by arbitrary non-negative reals. We formalize this idea in Definition~\ref{def:pcstocone} below. \begin{definition}\label{def:pcstocone}
Let $X$ be a PCS. We define the cone $\pcstocone X$ as the $\Rp{}$-semimodule $\{\alpha \cdot x \mid \alpha \geq 0, x\in \prog X \}$, where $+$ is the usual addition on vectors. We endow it with the norm $\norm {\pcstocone X}{\cdot}$ defined by:
$$
\norm {\pcstocone X}{x} = \sup_{y \in \prog{\dual X}} {\scal x y} = \inf\{\frac 1 r \mid \, r \cdot x \in \prog X \}.$$ \end{definition} It is easily seen that this is indeed a cone (the proof uses the so-called technical conditions of Definition~\ref{def:pcs}). Moreover, we can see that $\boule{\pcstocone X} $ is exactly the set $\prog X$ of cliques of $X$. Looking at the cone order $\order{\pcstocone X}$, as defined in Definition~\ref{def:coneorder}, we see that it coincides on $\prog X$ with the pointwise order on $\Rp{\web X}$. This is relevant because we already know from~\cite{pcsaamohpc} that $\prog X$ is a bounded-complete and $\omega$-continuous cpo with respect to this pointwise order.
\begin{lemma}\label{lemma:pcsdccones} For every PCS $X$, it holds that $\pcstocone X$ is a directed-complete lattice cone. \end{lemma} \begin{proof} To show that $\pcstocone X$ is directed-complete, we use the fact that $\prog X$ is a complete partial order. To show that it is a lattice, we observe that $x \vee y $ can be defined by $(x \vee y)_a = \max(x_a,y_a)$ for all $a \in \web X$. \end{proof}
\subsection{The Category $\mathbf{Pcoh}$.}
Intuitively, a morphism in $\mathbf{Pcoh}(X,Y)$ is a linear map from $\Rp{\web X}$ to $\Rp{\web Y}$ \emph{preserving} cliques. \begin{definition}[Morphisms of PCSs]\label{def:morphismspcs} Let $X$, $Y$ be two PCSs. A \emph{morphism of PCSs from $X$ to $Y$} is a matrix $x \in \Rp{\web X \times \web Y}$ such that for every $u \in \prog X$, it holds that $x \cdot u \in \prog Y$. \end{definition}
We now illustrate Definition~\ref{def:morphismspcs} by looking at the morphisms from $\textbf{Bool}$ to itself: they are the $x \in \Rp{\{\textbf{t}, \textbf{f}\} \times \{\textbf{t}, \textbf{f}\}}$ such that ${x_{\textbf{t}, \textbf{t}} + x_{\textbf{t}, \textbf{f} } \leq 1}$, and similarly ${x_{\textbf{f}, \textbf{t}} + x_{\textbf{f}, \textbf{f} } \leq 1}$. These are exactly the sub-stochastic matrices specifying the transitions of a probabilistic Markov chain with two states $\textbf{t}$ and $\textbf{f}$.
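The morphism condition of Definition~\ref{def:morphismspcs} can be checked directly on this example; a small sketch of ours, representing the matrix and the sub-distributions as nested dictionaries:

```python
# A morphism Bool -> Bool is a matrix x indexed by {t, f} x {t, f},
# mapping every sub-distribution u to x . u, where
# (x . u)_b = sum_a x[a][b] * u[a].
def apply_matrix(x, u):
    return {b: sum(x[a][b] * u[a] for a in u) for b in ("t", "f")}

def is_subdistribution(u):
    return all(p >= 0 for p in u.values()) and sum(u.values()) <= 1

# Row sums <= 1 (sub-stochasticity) is exactly what preserves cliques:
x = {"t": {"t": 0.6, "f": 0.3}, "f": {"t": 0.1, "f": 0.9}}
u = {"t": 0.5, "f": 0.5}
assert is_subdistribution(apply_matrix(x, u))
```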
In the following, we call $\mathbf{Pcoh}$ the category of PCSs and morphisms of PCSs. In~\cite{pcsaamohpc}, it is endowed with the structure of a model of linear logic. We only partly recall the exponential structure here, since our main focus will be on the Kleisli category associated with $\mathbf{Pcoh}$.
In~\cite{pcsaamohpc}, the construction of the exponential was done by defining a functor $!$, as well as dereliction and digging, making $\mathbf{Pcoh}$ a Seely category, and consequently a model of linear logic. Here, we only recall explicitly the effect of $!$ on PCSs. We denote by $\mfin {\web X}$ the set of finite multisets over the web of $X$, and we take it as the web of the PCS $\bang X$. If $\mu \in \mfin A$, we call the \emph{support of $\mu$}, denoted $\supp \mu$, the set of elements $a$ of $A$ that appear in $\mu$. Moreover, we will use the following notation: for every $x \in \Rp{\web X}$ and $\mu \in \mfin{\web X}$, we write $x^\mu = \prod_{a \in \supp \mu} x_a^{\mu(a)} \in \Rp{}$. \begin{definition}
Let $X$ be a PCS. We define the \emph{promotion} of $x \in \prog X$ as the element $x^! \in \Rp{\mfin{\web X}}$ given by $x^!_\mu = x^\mu.$ We define $\bang X = (\mfin {\web X}, \{x^! \mid x \in \prog X\}^{\bot \bot})$.
\end{definition}
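The coefficients of the promotion are simple monomials; for instance, with web $\{a,b\}$ and $\mu = [a,a,b]$, $x^\mu = x_a^2\, x_b$. A small sketch of ours, representing multisets as tuples:

```python
import math
from collections import Counter

def monomial(x, mu):
    """Promotion coefficient x^!_mu = x^mu = prod over the support
    of mu of x_a raised to the multiplicity mu(a)."""
    counts = Counter(mu)
    return math.prod(x[a] ** m for a, m in counts.items())

x = {"a": 0.5, "b": 0.25}
assert monomial(x, ("a", "a", "b")) == 0.5**2 * 0.25  # = 0.0625
assert monomial(x, ()) == 1  # empty multiset: x^[] = 1
```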
\subsection{The Kleisli Category of Probabilistic Coherence Spaces}\label{subsect:pcoh!} The idea, as usual, is that morphisms in the Kleisli category can use their argument several times, while morphisms in the original category are \emph{linear}. The Kleisli category of $\mathbf{Pcoh}$, denoted $\mathbf{Pcoh}_!$, also has PCSs as objects, while $\mathbf{Pcoh}_!(X,Y) = \mathbf{Pcoh}(\bang X, Y)$.
We give here a direct characterization of $\mathbf{Pcoh}_!$ morphisms.
\begin{lemma}[from~\cite{pcsaamohpc}]\label{lemma:morphpcoh} Let $f \in \Rp{\mfin{\web {X}} \times {\web Y}}$. Then $f$ is a morphism in $\mathbf{Pcoh}_!(X, Y)$ if and only if for every $x \in \prog X$, $f \cdot {x^!} \in \prog Y$.
\end{lemma} Lemma~\ref{lemma:morphpcoh} tells us that any $f \in \mathbf{Pcoh}_!(X, Y)$ is entirely characterized by the map $\mapm f : x \in \prog X \mapsto f \cdot x^! \in \prog Y$. We denote by $\mathcal E^{X, Y}$ the set of all maps $\prog X \rightarrow \prog Y$ that are equal to $\mapm f$ for some $f \in \mathbf{Pcoh}_!(X, Y)$. It was shown in~\cite{pcsaamohpc} that $\mapm{(\cdot)}$ is actually a bijection from $\mathbf{Pcoh}_!(X, Y)$ to $\mathcal E^{X, Y}$.
Observe that we can see the maps in $\mathcal E^{X, Y}$ as \emph{power series}, in the sense that they can be written as the supremum of a sequence of polynomials. Indeed, for any morphism $f$ and $x \in \prog X$, we can write: $$\mapm f (x) = \sup_{N \in \mathbb{N}} {\sum_{b \in \web Y} \Big(\sum_{\mu \text{ with }\card \mu \leq N} f_{\mu,b } \cdot x^\mu \Big) \cdot e_b} $$
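For instance, with $X = Y = \mathbf{1}$, a morphism amounts to a family of coefficients $c_k = f_{[\star^k],\star}$ with $\sum_k c_k \leq 1$, and $\mapm f$ is the power series $x \mapsto \sum_k c_k x^k$ on $[0,1]$. A sketch of ours of this one-dimensional evaluation:

```python
def eval_morphism(coeffs, x):
    """mapm(f)(x) = sum_k c_k * x^k: the sup of the monotone partial
    sums of the series, computed exactly here since the coefficient
    list is finite."""
    return sum(c * x**k for k, c in enumerate(coeffs))

coeffs = [0.2, 0.3, 0.5]  # coefficients sum to 1: cliques map to cliques
assert abs(eval_morphism(coeffs, 0.5) - 0.475) < 1e-12
assert eval_morphism(coeffs, 1.0) <= 1.0
```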
As the Kleisli category of the comonad $!$ in a Seely category, $\mathbf{Pcoh}_!$ is a cartesian closed category. We give here explicitly the construction of the product and arrow constructs: if $X$ and $Y$ are PCSs, $X \Rightarrow Y$ is defined by $\web{X \Rightarrow Y} = \mfin {\web X} \times \web Y$ and $\prog{(X \Rightarrow Y )} = \mathbf{Pcoh}_!(X, Y)$. If $(X_i)_{i \in I}$ is a family of PCSs, $\prod_{i \in I} X_i$ is defined by
$ \web{\prod_{i \in I}{X_i}} = \cup_{i \in I}\{i\} \times \web {X_i}$ and $ \prog {\prod_{i \in I}{X_i}} = \{x \in \Rp{\web{\prod_{i \in I}{X_i}}} \mid \forall i \in I, \, \pi_i(x) \in \prog{X_i} \}$, where $\pi_i(x)_{a} = x_{(i,a)}$.
\subsection{A fully faithful functor $\mathcal F: \mathbf{Pcoh}_! \rightarrow \mathbf{Cstab}$.}\label{sect:proofconsext}
Recall that Definition~\ref{def:pcstocone} gave a way to see a PCS as a cone. Moreover, as stated in Proposition~\ref{prop:fmsf} below, a morphism in $\mathbf{Pcoh}_!$ can also be seen as a stable function, in the sense that $\mathcal E^{X, Y} \subseteq \mathbf{Cstab}(\pcstocone X, \pcstocone Y)$. \begin{proposition}\label{prop:fmsf} Let $f \in \mathbf{Pcoh}_!(X, Y)$. Then $\mapm f$ is a stable function from ${\pcstocone X}$ to $\pcstocone Y$.
\end{proposition} \begin{proof}
We know from~\cite{pcsaamohpc} that $\mapm f: \prog X \rightarrow \prog Y$ is sequentially Scott-continuous with respect to the orders $\order{\pcstocone X}$, $\order{\pcstocone Y}$.
Moreover, $\mapm f$ is pre-stable: this follows from the fact that $\mapm f$ can be written as a power series with non-negative coefficients.
Finally, we have to show that $\mapm f(\boule {\pcstocone X}) \subseteq \boule {\pcstocone Y}$. Since $\boule {\pcstocone X} = \prog X$ and $\boule {\pcstocone Y} = \prog Y$, and moreover $f$ is a morphism in $\mathbf{Pcoh}_!(X, Y)$, we see that the result holds.
\end{proof} Thus we can define a functor $\mathcal F: \mathbf{Pcoh}_! \rightarrow \mathbf{Cstab}$, by taking $\mathcal F X = \pcstocone X$, and $\mathcal F f = \mapm f$. Our goal now is to show that $\mathcal F$ is full and faithful, which will make $\mathbf{Pcoh}_!$ a full subcategory of $\mathbf{Cstab}$.
As mentioned before, it was shown in~\cite{pcsaamohpc} that $\,\mapm{(\cdot)}\,$ is a bijection from $\mathbf{Pcoh}_!(X, Y)$ to $\mathcal{E}^{X, Y}$. This tells us directly that $\mathcal F$ is indeed \emph{faithful}.
In the remainder of this section, we show that $\mathcal F$ is actually also \emph{full}, hence that it makes $\mathbf{Cstab}$ a conservative extension of $\mathbf{Pcoh}_!$.
In the following, we fix $X$ and $Y$ two PCSs, and $g \in \mathbf{Cstab}(\mathcal F X, \mathcal F Y)$. Our goal is to show that there exists $f \in \mathbf{Pcoh}_!(X, Y)$ such that $\mapm f = g$.
First, recall that we have shown in Lemma~\ref{lemma:pcsdccones} that for every PCS $Z$, the cone $\mathcal F Z$ is a directed-complete lattice cone. This means that all the results of Section~\ref{subsect:bernstein} apply here: in particular, $g$ has higher-order derivatives $D^n g$, which makes Definition~\ref{def:fullfunctor} below valid.
\begin{definition}\label{def:fullfunctor} We define $f\in \Rp{\mfin{\web {X}} \times \web Y}$ by taking: $${f}_{[a_1, \ldots, a_k], b} = \frac {\alpha_{[a_1, \ldots, a_k]}}{k!}\left(\ders k g(0 \mid e_{a_1}, \ldots, e_{a_k} ) \right)_b \in \mathbb{R}^+,$$ where $\alpha_\mu = \# {\{(c_1, \ldots,c_k) \in \web X^k \text{ with }\mu = [c_1, \ldots, c_k] \}}.$ \end{definition} We now have to show that $f \in \mathbf{Pcoh}_!(X, Y)$, and that $\mapm {f}$ coincides with $g$ on $\prog X$. The key observation is that we have built $f$ precisely so that it coincides with $Tg(0 \mid \cdot)$\textemdash the Taylor series of $g$ defined in Definition~\ref{def:TS}. We first show this for the elements of $\prog X$ with \emph{finite support}, using the finite additivity of the $\ders k g (0 \mid \cdot)$.
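The combinatorial coefficient $\alpha_\mu$ counts the tuples enumerating the multiset $\mu$, so it equals the multinomial coefficient $k!/\prod_a \mu(a)!$. A sketch of ours checking this closed form against brute-force enumeration:

```python
import math
from collections import Counter
from itertools import permutations

def alpha(mu):
    """Number of tuples (c_1, ..., c_k) whose multiset is mu:
    the multinomial coefficient k! / prod_a mu(a)!."""
    counts = Counter(mu)
    k = sum(counts.values())
    return math.factorial(k) // math.prod(math.factorial(m) for m in counts.values())

# Brute-force check on a small multiset: 3 distinct orderings of [a,a,b].
mu = ("a", "a", "b")
assert alpha(mu) == len(set(permutations(mu))) == 3
```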
\begin{lemma}\label{lemma:densetaylor}
Let $x \in \prog{X}$ be such that $\text{Supp}(x) = \{a \in \web X \mid x_a > 0\}$ is finite. Then it holds that
$ f \cdot x^!$ is finite (i.e.\ for every $b \in \web Y$, $(f \cdot x^!)_b < \infty$), and moreover $f \cdot x^! = Tg(0 \mid x)$. \end{lemma} \begin{proof}
Let $A= \{a_1, \ldots, a_m\} \subseteq \web X$ be the set $\text{Supp}(x)$. For any $b \in \web Y$, we can deduce from the definition of $f$ that:
\begin{align*}
(f & \cdot x^!)_b = \sum_{k=0}^{\infty} \sum_{\mu = [c_1, \ldots, c_k] \in \mfink k {A}}\frac {\alpha_\mu}{k !}\cdot \ders {k} g(0 \mid e_{c_1}, \ldots, e_{c_k} )_b \cdot x^\mu,
\end{align*}
where $\mfink k A$ stands for the set of multisets over $A$ of cardinality $k$. Looking at the definition of $\alpha_\mu$, we see that this implies:
\begin{equation}\label{eq:pcsbern1}
(f \cdot x^!)_b = \sum_{k=0}^{\infty} \sum_{(c_1, \ldots, c_k) \in A^k}\frac {1}{k !} \ders {k} g(0 \mid e_{c_1}, \ldots, e_{c_k} )_b \cdot \prod_{i=1}^k x_{c_i}
\end{equation}
By Lemma~\ref{lemma:additivity}, we know that $\ders k g(0 \mid \cdot)$ is $k$-linear. As a consequence, since $x = \sum_{i=1}^m x_{a_i}\cdot e_{a_i}$ and moreover $A$ is \emph{finite}, we see that~\eqref{eq:pcsbern1} implies the result:
$$(f \cdot x^!)_b = \sum_{k=0}^{\infty} \frac {1}{k !} \ders {k} g(0 \mid x, \ldots, x )_b = (Tg(0 \mid x))_b.$$ \end{proof}
We now apply the generalized Bernstein theorem, as stated in Proposition~\ref{prop:ebt}, to the stable function $g$ from $\mathcal F X$ to $\mathcal F Y$. It tells us that: \begin{equation}\label{eq:bernkey}
\forall x \in \bouleopen{\pcstocone X}, \quad g( x) = Tg(0 \mid x).
\end{equation}
Combining~\eqref{eq:bernkey} with Lemma~\ref{lemma:densetaylor}, we obtain that: \begin{equation}\label{eq:eqdense}
\forall x \in \bouleopen{\pcstocone X} \text{ with } \text{Supp}(x) \text{ finite},\quad f \cdot x^! = g(x).
\end{equation}
We can now use~\eqref{eq:eqdense} to show that $\mapm f$ and $g$ coincide on $\prog X$: the key point is that the set of elements of $\prog X$ with norm smaller than $1$ and finite support is dense, and that moreover $g$ is Scott-continuous. \begin{lemma}\label{lemma:fullnessaux2}
$\forall x \in \prog X, \, f \cdot x^! = g(x)$, and moreover $f \in \mathbf{Pcoh}_!(X, Y)$. \end{lemma} \begin{proof}
Let $x \in \prog X$. We define a sequence $(y_n)_{n \in \mathbb{N}}$ of elements of $\prog X$ by taking:
$$(y_n)_a = \begin{cases}
(1 - \frac 1 {2^n})\cdot x_a \text{ if }\lambda(a) \leq n\\
0 \text{ otherwise,}
\end{cases}$$
where we have fixed an arbitrary enumeration $\lambda$ of the elements of $\web X$\textemdash such a $\lambda$ exists since $\web X$ is countable. Observe that the sequence $(y_n)_{n \in \mathbb{N}}$ is non-decreasing, with $x = \sup_{n \in \mathbb{N}} y_n$.
Moreover, for every $n$, $y_n$ has finite support and $\norm{\pcstocone X}{y_n} < 1$.
This means that we can apply~\eqref{eq:eqdense} to every $y_n$: we see that $g(y_n) = f \cdot y_n^!$. Since $g$ is a morphism in $\mathbf{Cstab}$, $g$ is sequentially Scott-continuous, hence: \begin{equation}\label{eq:auxf1}
g(x) = \sup_{n \in \mathbb{N}} g(y_n). \end{equation}
Moreover, we know from~\cite{pcsaamohpc} that both $x \mapsto x^!$ and $x \mapsto u \cdot x$ are Scott-continuous. This means that: \begin{equation}\label{eq:auxf2}
f\cdot x^! = \sup_{n \in \mathbb{N}} f \cdot {y_n^!}.
\end{equation}
Combining~\eqref{eq:auxf1} and~\eqref{eq:auxf2}, we obtain $f \cdot x^! = g(x)$.
Since $g(\boule{\pcstocone X}) \subseteq \boule{\pcstocone Y}$, this also implies that $\mapm f(\prog X)\subseteq \prog Y$. Thus, by Lemma~\ref{lemma:morphpcoh}, $f \in \mathbf{Pcoh}_!(X, Y)$. \end{proof} Since Lemma~\ref{lemma:fullnessaux2} shows that for any fixed stable function $g$ in $\mathbf{Cstab}(\mathcal F X, \mathcal F Y)$ there exists an $f \in \mathbf{Pcoh}_!(X, Y)$ such that $\mathcal F f = g$, we have indeed shown that $\mathcal F$ is full. \subsection{$\mathcal F$ preserves the cartesian structure.} We now want to give a stronger guarantee on the functor $\mathcal F$: we want to show that it is a \emph{cartesian closed functor}, meaning that it embeds the cartesian closed category $\mathbf{Pcoh}_!$ into the cartesian closed category $\mathbf{Cstab}$ in such a way that: \begin{itemize} \item $\mathcal F$ preserves products: for every family $(X_i)_{i \in I}$ of PCSs, $\mathcal F{(\prod_{i \in I}^{\mathbf{Pcoh}_!} X_i)}$ is isomorphic to $\prod_{i \in I}^{\mathbf{Cstab}} \mathcal F{X_i}$;
\item $\mathcal F$ preserves function spaces: for every $X, Y$ PCSs, $\mathcal F{(X \Rightarrow Y)}$ is isomorphic to $\mathcal F X \Rightarrow \mathcal F Y$.
\end{itemize}
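Before turning to products, note that the approximating sequence $(y_n)_{n \in \mathbb{N}}$ used in the proof of Lemma~\ref{lemma:fullnessaux2} can be computed concretely. A small sketch of ours, assuming $\web X = \mathbb{N}$ and taking $\lambda$ to be the identity enumeration:

```python
def approximant(x, n):
    """y_n: keep only the coordinates a with lambda(a) = a <= n and
    scale by (1 - 1/2^n), so that y_n has finite support and norm < 1."""
    return {a: (1 - 0.5**n) * xa for a, xa in x.items() if a <= n}

x = {0: 0.5, 1: 0.25, 2: 0.25}  # a clique of N^Pcoh of mass exactly 1
y2 = approximant(x, 2)
assert sum(y2.values()) < 1  # strictly inside the unit ball
# y_n converges to x from below as n grows:
assert all(abs(approximant(x, 50)[a] - x[a]) < 1e-9 for a in x)
```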
\begin{lemma}\label{lemma:fpresprod} $\mathcal F$ preserves cartesian products. \end{lemma} \begin{proof}
We fix a family $\mathscr{I} = (X_i)_{i \in I}$ of PCSs.
In order to construct an isomorphism, we have a canonical candidate, given by: \begin{equation}\label{eq:morphismcartproduct}
\Psi^{\mathscr{I}} = \langle \mathcal F (\pi_i) \mid i \in I \rangle \in \mathbf{Cstab}( \mathcal F{(\prod_{i \in I}^{\mathbf{Pcoh}_!} X_i)} , \prod_{i \in I}^{\mathbf{Cstab}} \mathcal F{X_i}) .
\end{equation} Let us now see that $\Psi^{\mathscr{I}}$ is an isomorphism, \shortv{i.e. that it has an inverse}.\longv{ Looking at the definition of the cartesian product in $\mathbf{Pcoh}_!$ given in Section~\ref{subsect:pcoh!}, and that of the cartesian product in $\mathbf{Cstab}$, given in Section~\ref{subsect:cstab}, we see that for every $x \in \boule {\mathcal F{(\prod_{i \in I}^{\mathbf{Pcoh}_!} X_i)}} $: $$\Psi^\mathscr{I}(x) = (y_i)_{i \in I} \quad \text{where} \quad \forall i \in I, \forall a \in \web {X_i}, \, (y_i)_a = x_{(i,a)} .$$ We now want to show that $\Psi^{\mathscr{I}}$ has an inverse.} The only candidate is $\Theta^{\mathscr{I}}: y \in \boule {(\prod_{i \in I}^{\mathbf{Cstab}} \mathcal F{X_i})} \mapsto \Theta^{\mathscr{I}}(y) \in {(\mathcal F{(\prod_{i \in I}^{\mathbf{Pcoh}_!} X_i)} )}$, defined by: $\forall i \in I, a \in \web{X_i}, \ \Theta^{\mathscr{I}}(y)_{i,a} = (y_i)_a .$
We see immediately that $\Theta^{\mathscr{I}}$ is linear, hence pre-stable, and that moreover it is Scott-continuous. Besides, it also preserves the unit ball, since for every $y$ in the unit ball of the product cone, $\norm {\mathcal F{(\prod_{i \in I}^{\mathbf{Pcoh}_!} X_i)}}{\Theta^{\mathscr{I}}(y)} = \norm {\prod_{i \in I}^{\mathbf{Cstab}} \mathcal F{X_i}}y$\shortv{ (the proof can be found in the long version)}.\longv{ We now show that $\Theta^\mathscr{I}$ preserves the unit ball:
\begin{align*}
&\norm{\mathcal F{(\prod_{i \in I}^{\mathbf{Pcoh}_!} X_i)}}{\Theta^{\mathscr{I}}(y)} = \inf\{\frac 1 r \mid r \cdot \Theta^{\mathscr{I}}(y) \in \prog{\prod_{i \in I}^{\mathbf{Pcoh}_!} X_i} \} \\
&\quad =\inf\{\frac 1 r \mid \forall i \in I, r \cdot y_i \in \prog X_i \}
=\sup_{i \in I} \norm{\mathcal F X_i}{y_i}
=\norm{\prod_{i \in I}^{\mathbf{Cstab}} \mathcal F X_i} y.
\end{align*} }
Thus $\Theta^\mathscr{I}$ is a morphism in $\mathbf{Cstab}$. \end{proof} \begin{lemma}\label{lemma:fpresarr}
$\mathcal F$ preserves function spaces. \end{lemma} \begin{proof} Let $X, Y$ be two PCSs. As previously, there is a canonical candidate for the isomorphism: we define $\Upsilon^{X, Y}$ as the currying in $\mathbf{Cstab}$ of the morphism: $$ \mathcal F{(X \Rightarrow Y)} \times \mathcal F X\stackrel{\Theta^{X \Rightarrow Y, X}}{\xrightarrow{\hspace*{1.2cm}}} \mathcal F((X \Rightarrow Y) \times X) \stackrel{\mathcal F(\text{eval}_{X, Y})}{\xrightarrow{\hspace*{1.3cm}}} \mathcal F Y,$$ where $\Theta^{X \Rightarrow Y, X}$ is as defined in the proof of Lemma~\ref{lemma:fpresprod} above.
Unfolding the definition, we see that in fact $\Upsilon^{X, Y}: f \in \boule{\mathcal F(X \Rightarrow Y)} \mapsto \mapm f \in (\mathcal F X \Rightarrow \mathcal F Y)$. Since we have shown that $\mathcal F$ is full and faithful, we can consider the inverse function $\Xi^{X, Y}$ of $\Upsilon^{X, Y}$. Recall from the proof of the fullness of $\mathcal F$ in Section~\ref{sect:proofconsext} that for every $\mu = [a_1, \ldots, a_k] \in \mfin {\web X}$, and $b \in \web Y$: $${\Xi^{X, Y}(f)}_{\mu, b} = \frac {\alpha_{[a_1, \ldots, a_k]}}{k!}\left(\ders k f(0 \mid e_{a_1}, \ldots, e_{a_k} ) \right)_b. $$ Recall from Lemma~\ref{lemma:additivity} that for any $\vec u \in \boule {(C_x^k)}$, the function $f \in \mathbf{Cstab}(\mathcal F X, \mathcal F Y) \mapsto \ders k f(x \mid \vec u) \in \mathcal F Y$ is linear and Scott-continuous. As a consequence, $\Xi^{X, Y}$ too is linear and Scott-continuous. \shortv{Moreover, it also preserves the unit ball (see the proof in the long version), hence is a morphism in $\mathbf{Cstab}(\mathcal F X \Rightarrow \mathcal F Y, \mathcal F (X \Rightarrow Y))$.}\longv{
To show that $\Xi^{X, Y}$ is stable, we still have to show that it is bounded: we are actually going to show that it preserves the norm. Indeed, for every $f \in \boule {(\mathcal F X \Rightarrow \mathcal F Y)}$, we see, using the definition of the norm on the cone obtained from a PCS (see Definition~\ref{def:pcstocone}), that:
\begin{equation}\label{eq:nonexpansivexi1} \norm{\mathcal F(X \Rightarrow Y)}{\Xi^{X, Y}(f)} = \inf \{\frac 1 r \mid r \cdot \Xi^{X, Y}(f) \in \prog(X \Rightarrow Y)\}
\end{equation}
It was shown in~\cite{pcsaamohpc} that:
\begin{equation}\label{eq:nonexpansivexi}
r \cdot \Xi^{X, Y}(f) \in \prog(X \Rightarrow Y) \,\Leftrightarrow\, \forall x \in \prog X, \, (r \cdot \Xi^{X, Y}(f)) \cdot x^! \in \prog Y.
\end{equation}
We see that $(r \cdot \Xi^{X, Y}(f)) \cdot x^! = r \cdot f(x)$ since $\Xi^{X, Y}$ has been defined as the inverse of $\Upsilon^{X, Y}$. It means that we can rewrite~\eqref{eq:nonexpansivexi} as:
\begin{equation}\label{eq:nonexpansivexibis}
r \cdot \Xi^{X, Y}(f) \in \prog(X \Rightarrow Y) \,\Leftrightarrow\, \forall x \in \prog X, \, r\cdot f(x) \in \prog Y.
\end{equation} Since for every PCS $Z$ it holds that $\prog Z = \boule {\mathcal F Z}$, we can now use~\eqref{eq:nonexpansivexibis} to rewrite~\eqref{eq:nonexpansivexi1} as:
\begin{equation}\label{eq:nonexpansivexi3} \norm{\mathcal F(X \Rightarrow Y)}{\Xi^{X, Y}(f)} = \inf \{\frac 1 r \mid \forall x \in \boule \mathcal F X, r \cdot f(x) \in \boule \mathcal F Y\}
\end{equation}
Looking now at the definition of the norm in the cone $\mathcal F X \Rightarrow \mathcal F Y$, we can complete the proof using~\eqref{eq:nonexpansivexi3} and the homogeneity of the norm. Indeed:
\begin{align*}
\norm{\mathcal F(X \Rightarrow Y)}{\Xi^{X, Y}(f)} &= \inf \{\frac 1 r \mid \norm{\mathcal F X \Rightarrow \mathcal F Y}{ r \cdot f} \leq 1\} \\
& = \inf \{\frac 1 r \mid r \cdot \norm{\mathcal F X \Rightarrow \mathcal F Y}{ f} \leq 1\} \\
&= \norm {\mathcal F X \Rightarrow \mathcal F Y}{ f}
\end{align*} }
\end{proof}
As a direct consequence of Lemma~\ref{lemma:fpresprod} and Lemma~\ref{lemma:fpresarr}, we can state the following theorem:
\begin{theorem}\label{th:fullness} $\mathcal F$ is full and faithful, and it respects the cartesian closed structures.
\end{theorem}
\section{Adding Measurability Requirements} In~\cite{pse}, the authors developed a sound and adequate model of $\textsf{PCF}_{\text{sample}}$ based on stable functions. However, as explained in more detail in~\cite{pse}, they need to add some \emph{measurability requirements} to their morphisms, both on cones and on functions between them, since the denotational semantics of the $\texttt{let}(x, M, N)$ construct uses an integral, to model the fact that $M$ is evaluated before being passed as an argument to $N$.
We call measurable functions $\mathbb{R}^n \rightarrow \mathbb{R}^k$ those functions that are measurable when both $\mathbb{R}^n$ and $\mathbb{R}^k$ are endowed with the Borel $\sigma$-algebra associated with the standard topology of $\mathbb{R}$. The relevant property of the class of measurable functions $\mathbb{R}^n \rightarrow \mathbb{R}^k$ is that it is closed under arithmetic operations, composition, and pointwise limits; see for example Chapter 21 of~\cite{schechter1996handbook}. \subsection{The category $\mathbf{Cstab_m}$}
$\mathbf{Cstab_m}$ is built as a \emph{refinement} of the category $\mathbf{Cstab}$. The objects of $\mathbf{Cstab_m}$ are going to be complete cones, endowed with a family of \emph{measurability tests}.
If $C$ is a complete cone, we denote by $C'$ the set of linear and Scott-continuous functions $C \rightarrow \mathbb{R}_+$. \begin{definition}\label{def:meascone}
A \emph{measurable cone} (MC) is a pair consisting of a cone $C$ and a collection of \emph{measurability tests} $(\meastests C n )_{n \in \mathbb{N}}$, where for every $n$, $\meastests C n \subseteq {{C'}^{\mathbb{R}^n }}$, such that:
\begin{itemize}
\item for every $n \in \mathbb{N}$, $0 \in \meastests C n$;
\item for every $n,p \in \mathbb{N}$, if $l \in \meastests C n$, and $h : \mathbb{R}^p \rightarrow \mathbb{R}^n$ is a measurable function, then $l \circ h \in \meastests C p$;
\item for any $l \in \meastests C n$, and $x \in C$, the function $u \in \mathbb{R}^n \mapsto l(u)(x) \in \mathbb{R}$ is measurable.
\end{itemize} \end{definition} \begin{example}[from~\cite{pse}]
Let $X$ be a measurable space.
We endow the cone of finite measures $\meas X$ with the family $\meastests{X} {}$ of measurability tests defined as:
$$\meastests {X}n = \{\epsilon_U \mid U \in \Sigma_X \} \quad \text{where}\quad \epsilon_U(\vec r)(\mu) = \mu(U),$$
where $\Sigma_X$ is the set of all measurable subsets of $X$. Observe that in this case, the measurable tests correspond to the measurable sets. In the following, we will denote $\measm X$ the measurable cone $(\meas X, (\meastests X n)_{n \in \mathbb{N}})$.
\end{example} We now define \emph{measurable paths}, which are meant to be the \emph{admissible} ways to send $\mathbb{R}^n$ into an MC $C$.
\begin{definition}[Measurable Paths] Let $(C, (\meastests C n)_{n \in \mathbb{N}})$ be a measurable cone. A \emph{measurable path of arity $n$ on $C$} is a function $\gamma:\mathbb{R}^n \rightarrow C$ such that $\gamma(\mathbb{R}^n)$ is bounded in $C$, and for every $k \in \mathbb{N}$ and every $l \in \meastests C k$, the function $(\vec r, \vec s) \in \mathbb{R}^{k+n} \mapsto l(\vec r) (\gamma(\vec s)) \in \Rp{}$ is measurable. \end{definition} We denote by $\pathes C n$ the set of measurable paths of arity $n$ on the MC $C$. When a measurable path $\gamma$ satisfies $\gamma(\mathbb{R}^n) \subseteq \boule C$, we say it is \emph{unitary}. Using measurable paths, the authors of~\cite{pse} add \emph{measurability requirements} to their definition of stable functions. \begin{definition}\label{def:measstabfunctions}
Let $C, D$ be two MCs. A stable function $f:\boule C \rightarrow D$ is \emph{measurable} if for every unitary $\gamma \in \pathes C n$, $f \circ \gamma \in \pathes D n$.
\end{definition} The category $\mathbf{Cstab_m}$ is therefore the category whose objects are MCs, and whose morphisms are measurable stable functions between MCs. \begin{example}
Recall the function $\semm {\texttt{real}}$ defined in Section~\ref{sect:overview}:
$$\semm {\texttt{real}}: \mu \in \meas \mathbb{N} \mapsto (U \in \Sigma_\mathbb{R} \mapsto \sum_{n \in \mathbb{N} \cap U} \mu(n) ) \in \meas {\mathbb{R}}.$$
We can see that $\semm{\texttt{real}}$ is a measurable function from $\measm \mathbb{N}$ to $\measm \mathbb{R}$. Moreover, it is linear, Scott-continuous, and norm-preserving, which makes it a morphism in $\mathbf{Cstab_m}$. In the same way, taking $\measm \mathbb{N}$ as the denotational semantics of the type $N$, we could complete the denotational semantics given in~\cite{pse} for $\textsf{PCF}_{\texttt{sample}}$ in $\mathbf{Cstab_m}$ into a denotational semantics for $\textsf{PCF}_{\oplus, \texttt{sample}}$.
Observe that $\semm{\texttt{real}}$ would not be measurable if we endowed $\meas \mathbb{N}$ with, for instance, $\{0\}$ as measurability tests instead of $\meastests \mathbb{N}{}$: indeed, in that case, every $\gamma: \mathbb{R}^n \rightarrow \meas \mathbb{N}$ would be a measurable path. As a consequence, for $\semm {\texttt{real}}$ to be measurable, it would have to satisfy: for every arbitrary function $\gamma: \mathbb{R}^n \rightarrow \meas \mathbb{N}$, $\semm{\texttt{real}} \circ \gamma$ is a measurable path on $\measm \mathbb{R}$. However, we can see that this is not the case, for instance by considering $\gamma$ of the form $\gamma (s)=\alpha(s)\cdot \dirac{0}$, where $\alpha: \mathbb{R} \rightarrow \Rp{}$ is not Borel measurable.
\end{example}
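The action of $\semm{\texttt{real}}$ on a discrete sub-distribution can be sketched directly: it sends $\mu \in \meas{\mathbb{N}}$ to the measure $U \mapsto \sum_{n \in \mathbb{N}\cap U} \mu(n)$ on $\mathbb{R}$. A small sketch of ours, with finitely supported $\mu$ as a dictionary and a Borel set $U$ as a membership predicate:

```python
def sem_real(mu):
    """Push a discrete measure mu on N forward to a measure on R:
    the measure of U is the total mu-mass of the integers lying in U."""
    def measure(U):  # U: membership predicate of a Borel subset of R
        return sum(m for n, m in mu.items() if U(n))
    return measure

mu = {0: 0.5, 2: 0.25}
nu = sem_real(mu)
assert nu(lambda r: 0.0 <= r < 1.5) == 0.5  # only n = 0 lies in [0, 1.5)
assert nu(lambda r: True) == 0.75           # the total mass is preserved
```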
In~\cite{pse}, the cartesian closed structure of $\mathbf{Cstab_m}$ is derived from that of $\mathbf{Cstab}$ by endowing its exponentials and products with the measurability tests presented in Figure~\ref{fig:cccstabm}.
\begin{figure}
\caption{Cartesian Closed structure of $\mathbf{Cstab_m}$.}
\label{fig:cccstabm}
\end{figure}
\subsection{$\mathbf{Pcoh}_!$ is a full subcategory of $\mathbf{Cstab_m}$} We now want to convert the functor $\mathcal F: \mathbf{Pcoh}_! \rightarrow \mathbf{Cstab}$ into a functor $\mathcal F^m: \mathbf{Pcoh}_! \rightarrow \mathbf{Cstab_m}$. To build $\mathcal F^m$, we endow each $\mathcal F X$ with measurability tests, in such a way that $\mathcal F(f)$ is a \emph{measurable} stable function for any morphism $f \in \mathbf{Pcoh}_!$.
Observe that this requirement does not uniquely determine the choice of measurability tests. For instance, it would be satisfied if we chose $\{0\}$ as measurability tests for every $\mathcal F X$. However, as explained in Section~\ref{sect:overview}, we also want $\mathcal F^m (\mathbb{N}^{\mathbf{Pcoh}})$ to be isomorphic to $\semm{N}$: we would like to be able to inject any discrete distribution on $\mathbb{N}$ into a distribution on $\mathbb{R}$.
A natural way to ensure this is to use the discrete structure of the web to give the following definition of the MC arising from a PCS.
\begin{definition}
For any $X \in \mathbf{Pcoh}$, we define $\pcstoconem X$ as the measurable cone $\pcstocone X$ endowed with the family $(\meastests X n)_{n \in \mathbb{N}}$ of measurability tests defined as
$\meastests X n = \{0 \} \cup\{ \epsilon_a \mid a \in \web X\}$, where $\epsilon_a(\vec r,x) = x_a.$
\end{definition} We see that the $\epsilon_a$ are indeed linear (i.e.\ they commute with linear combinations), and moreover Scott-continuous: hence they are indeed elements of $\pcstocone {X}'$. It is easy to check that the other conditions hold, and so $\pcstoconem X$ is indeed a MC.
\begin{lemma}\label{lemma:mpforpcs}
Let $X$ be a PCS. Then $\pathes {\pcstoconem X} n$ is the set of those $\gamma: \mathbb{R}^n \rightarrow \pcstocone X$ such that:
\begin{itemize}
\item $\exists \lambda \in \mathbb{R}, \gamma(\mathbb{R}^n) \subseteq \lambda \boule \pcstocone X$
\item $\forall a \in \web X$, $\gamma_a: \vec r \in \mathbb{R}^n \mapsto \gamma(\vec r)_a \in \Rp{}$ is measurable.
\end{itemize} \end{lemma} Two MCs with the same underlying cone, but different measurability tests, may be isomorphic in $\mathbf{Cstab_m}$: it is enough for them to have the same \emph{measurable paths}. This is what happens in the example below, where we consider $\pcstoconem{\mathbb{N}^\mathbf{Pcoh}}$ and $\measm \mathbb{N}$. It is actually also what happens at higher-order types, as we will explain in Section~\ref{subsect:fmcc}. \begin{example} The two measurable cones $\pcstoconem {\mathbb{N}^\mathbf{Pcoh}}$ and $\measm \mathbb{N}$ have the same underlying cone, but they do not have the same measurability tests. Indeed: $$
\meastests {\pcstoconem{\mathbb{N}^\mathbf{Pcoh}}}n = \{\epsilon_n \mid n \in \mathbb{N} \}; \quad \meastests{\measm \mathbb{N}} n = \{ \epsilon_U \mid U \subseteq \mathbb{N}\}.$$
But we can prove that they have the same measurable paths. It is immediate that $\pathes{\measm \mathbb{N}} n \subseteq \pathes {\pcstoconem{\mathbb{N}^\mathbf{Pcoh}}} n$, since $\meastests {\pcstoconem{\mathbb{N}^\mathbf{Pcoh}}}n $ is a subset of $ \meastests{\measm \mathbb{N}} n $. We now detail the proof of
the reverse inclusion. Let $\gamma \in \pathes {\pcstoconem{\mathbb{N}^\mathbf{Pcoh}}}n$. We have to show: for every $U \subseteq \mathbb{N}$, the function $$\vec r, \vec s \in \mathbb{R}^{k+n} \mapsto \epsilon_U(\vec r)(\gamma(\vec s)) \qquad \text{is Borel measurable.}$$
The key observation now is that $\epsilon_U(\vec r)(\gamma(\vec s)) = \sum_{m \in U} \epsilon_m(\vec r)(\gamma(\vec s))$.
Since $\gamma \in \pathes {\pcstoconem{\mathbb{N}^\mathbf{Pcoh}}}n$, it holds that for every $m \in \mathbb{N}$, the function $(\vec r,\vec s) \in \mathbb{R}^{k+n} \mapsto \epsilon_m(\vec r)(\gamma(\vec s)) \in \Rp{}$ is Borel measurable. Since the class of Borel measurable functions is closed under finite sums and pointwise limits, the result follows.
\end{example}
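The decomposition of $\epsilon_U$ into pointwise tests used in the example above can be checked concretely on finitely supported subdistributions. The following Python sketch is purely illustrative (it drops the real-valued parameters $\vec r$ of the tests, and the names are not from the formal development):

```python
# Illustrative sketch: for a finitely supported subdistribution x on N,
# the set-indexed test eps_U agrees with the sum of the pointwise tests
# eps_m over m in U. This mirrors eps_U = sum_{m in U} eps_m above.

def eps_m(m, x):
    """Pointwise test: the mass of x at the point m."""
    return x.get(m, 0.0)

def eps_U(U, x):
    """Set-indexed test: the total mass of x on the set U."""
    return sum(x.get(m, 0.0) for m in U)

# x is a finitely supported subdistribution on the natural numbers.
x = {0: 0.25, 2: 0.5, 5: 0.1}
U = {0, 1, 2}

assert abs(eps_U(U, x) - sum(eps_m(m, x) for m in U)) < 1e-12
```

For an infinite $U$, the same identity holds as a countable sum, which is why closure of Borel measurable functions under pointwise limits is invoked in the example.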
\begin{lemma}\label{lemma:functormorph1}
Let $X, Y$ be two PCSs, and
$f \in \mathbf{Pcoh}_!(X, Y)$. Then $\mathcal F f$ is measurable from $\pcstoconem X$ into $\pcstoconem Y$. \end{lemma} \longv{\begin{proof}
We have to show that $\mathcal F f$ preserves measurable paths. Let $\gamma$ be a unitary path in $\pathes {\pcstoconem X} n$: our goal is to show that $\mathcal F f \circ \gamma \in \pathes{\pcstoconem Y} n$. Recall that Lemma~\ref{lemma:mpforpcs} gives us a characterization of $\pathes{\pcstoconem Y} n$. Since $\gamma$ and $\mathcal F f$ are bounded, we see immediately that $\mathcal F f \circ \gamma$ is bounded. Let $b$ be in $\web Y$. We see that:
$$(\mathcal F f\circ \gamma)_b (\vec r) = \sum_{\mu \in \mfin {\web X}} f_{\mu,b}\cdot \prod_{a \in \text{Supp}(\mu)} {\gamma_a(\vec r)}^{\mu(a)} .$$
Since $\gamma \in \pathes{\pcstoconem X} n$, it holds that $\gamma_a : \mathbb{R}^n \rightarrow \Rp{}$ is measurable for all $a \in \web X$. We conclude by using the fact that the class of measurable functions $\mathbb{R}^n \rightarrow \Rp{}$ is closed under multiplication, finite sums and limits of non-decreasing sequences: it tells us that $\vec r \in \mathbb{R}^n \mapsto (\mathcal F f\circ \gamma)_b (\vec r) \in \Rp{} $ is measurable, and the result follows. \end{proof}} \shortv{
The proof can be found in the long version. It uses the characterization of $\pathes{\pcstoconem X} n$ given in Lemma~\ref{lemma:mpforpcs}.
}
\begin{theorem}
The functor $\mathcal F^m : \mathbf{Pcoh}_! \rightarrow \mathbf{Cstab_m}$ defined as $\mathcal F^m X = \pcstoconem X$, and $\mathcal F^m f = \mathcal F f$, is full and faithful.
\end{theorem} \begin{proof}
Observe that we can decompose the functor $\mathcal F$ as $\mathcal F = \texttt{Forget} \circ \mathcal F^m$, where $\texttt{Forget}$ is the forgetful functor from $\mathbf{Cstab_m}$ to $\mathbf{Cstab}$. We know from Section~\ref{sect:proofconsext} that $\mathcal F$ is full and faithful. Moreover, it holds that $\texttt{Forget}$ is faithful. From there, we are able to deduce the result\shortv{ (see the long version for more details).}\longv{:
\begin{itemize}
\item $\mathcal F^m$ is faithful: this follows from the fact that $\mathcal F$ is faithful.
\item $\mathcal F^m$ is full: indeed, suppose that it is not the case: then there exist two PCSs $X, Y$, and $f \in \mathbf{Cstab_m}(\mathcal F^m X, \mathcal F^m Y)$, such that $f$ is not in the image by $\mathcal F^m$ of $\mathbf{Pcoh}(X, Y)$. Then we consider $g \in \mathbf{Cstab}(\mathcal F X, \mathcal F Y)$ defined by $g = \texttt{Forget} (f)$. Since $\texttt{Forget}$ is faithful, there is no $f' \neq f$ such that $g = \texttt{Forget} {(f')}$: it means that $g$ is not in the image by $\texttt{Forget} \circ \mathcal F^m$ of $\mathbf{Pcoh}(X, Y)$. But since $\mathcal F = \texttt{Forget} \circ \mathcal F^m$ is full, we have a contradiction.
\end{itemize}}
\end{proof}
\subsection{$\mathcal F^m$ is cartesian closed.}\label{subsect:fmcc} We now want to show that, just as $\mathcal F$, the functor $\mathcal F^m$ is cartesian closed. Since the forgetful functor from $\mathbf{Cstab_m}$ to $\mathbf{Cstab}$ is cartesian closed, we only have to show that the $\mathbf{Cstab}$-morphisms $\Psi^{\mathscr{I}}$, $\Theta^{\mathscr{I}}$, $\Upsilon^{X, Y}$ and $\Xi^{X, Y}$, defined in the proofs of Lemmas~\ref{lemma:fpresprod} and~\ref{lemma:fpresarr}, are also morphisms in $\mathbf{Cstab_m}$.
\begin{lemma}\label{lemma:auxmeaspath}
Let $X$ be a PCS, $\overline{C}$ any measurable cone, and $f \in \mathbf{Cstab}(\texttt{Forget} ({\overline{C}}), \mathcal F X)$. We suppose that for every unitary $\gamma \in \pathes \overline{C} n$:
$$\forall a \in \web X, \, (f\circ \gamma)_a : \mathbb{R}^n \rightarrow \Rp{} \text{ is (Borel) measurable.}$$
Then it holds that $f \in \mathbf{Cstab_m}(\overline{C}, \mathcal F^m X)$. \end{lemma} \begin{proof}
Since we already know that $f$ is a morphism in $\mathbf{Cstab}$, we only have to show that it preserves measurable paths. Let $\gamma$ be unitary in $\pathes \overline{C} n$. We are going to use Lemma~\ref{lemma:mpforpcs} to show that $f \circ \gamma$ is a measurable path for $\mathcal F^m X$. The second condition in Lemma~\ref{lemma:mpforpcs} holds by hypothesis. The first condition also holds: since both $f$ and $\gamma$ are bounded, $f \circ \gamma$ is bounded too. So Lemma~\ref{lemma:mpforpcs} tells us that $f \circ \gamma \in \pathes {\mathcal F^m X} n$.
\end{proof}
\begin{lemma}\label{lemma:functormcartesian}
For every finite family $\mathscr{I} = (X_i)_{i \in I}$ of PCSs,
\begin{align*}
\Psi^{\mathscr{I}} & \in \mathbf{Cstab_m}( {\mathcal F^m{(\prod_{i \in I}^{\mathbf{Pcoh}_!} X_i)} }, \prod_{i \in I}^{\mathbf{Cstab_m}} \mathcal F^m{X_i} )\\ \text{and} \quad \Theta^{\mathscr{I}} & \in \mathbf{Cstab_m}( \prod_{i \in I}^{\mathbf{Cstab_m}} \mathcal F^m{X_i}, {\mathcal F^m{(\prod_{i \in I}^{\mathbf{Pcoh}_!} X_i)} } ).
\end{align*} \end{lemma} \begin{proof}
\begin{itemize}
\item Recall that $\Psi^{\mathscr{I}}$ is defined canonically in Equation~\eqref{eq:morphismcartproduct} as $\Psi^{\mathscr{I}} = \langle \mathcal F (\pi_i) \mid i \in I \rangle$, where $\langle \cdot \rangle$ is the cartesian product on morphisms in $\mathbf{Cstab}$. Since the cartesian product on morphisms in $\mathbf{Cstab_m}$ is the same as the one in $\mathbf{Cstab}$ (see~\cite{pse}), and moreover $\mathcal F(\pi_i) = \mathcal F^m(\pi_i)$, we see that $\Psi^{\mathscr{I}}$ is also a morphism of $\mathbf{Cstab_m}$.
\item Using Lemma~\ref{lemma:auxmeaspath}, we see that it is enough to show that for all $\gamma$ in $\pathes {\prod_{i \in I}^{\mathbf{Cstab_m}} \mathcal F^m{X_i}} n$,
for all $(i,a_i) \in \web {\prod_{i \in I}^{\mathbf{Pcoh}_!} X_i} $, $(\Theta^\mathscr{I}\circ \gamma)_{(i,a_i)}$ is measurable. By looking at the definition of $\Theta^\mathscr{I}$,
we see that $(\Theta^\mathscr{I}\circ \gamma)_{(i,a_i)} (\vec r) = (\gamma(\vec r)_i)_{a_i}$. We see now that we can construct a measurability test $m \in\meastests{{\prod_{i \in I}^{\mathbf{Cstab_m}} \mathcal F^m{X_i}}} 0 $ such that $ (\gamma(\vec r)_i)_{a_i} = m(\cdot)(\gamma(\vec r))$: it is enough to take $m = \oplus_{j \in I} l_j$, with $l_j = 0$ if $j \neq i$, and $l_i = \epsilon_{a_i}$. Since $\gamma$ is a measurable path, it means that $\vec r \in \mathbb{R}^n \mapsto m(\cdot)(\gamma(\vec r)) \in \Rp{}$ is measurable, and so the result holds.
\end{itemize} \end{proof} Lemma~\ref{lemma:functormcartesian} allows us to see that $\mathcal F^m$ is a cartesian functor. We want now to show that it also respects the $\Rightarrow$ construct. First, we show that the $\mathbf{Cstab}$ morphism $\Upsilon^{X, Y}$ is also a morphism in $\mathbf{Cstab_m}$. \begin{lemma}
For all $X$, $Y$ PCSs,
$$\Upsilon^{X, Y} \in \mathbf{Cstab_m}( \mathcal F^m (X \Rightarrow Y), \mathcal F^m X \Rightarrow \mathcal F^m Y)$$ \end{lemma} \begin{proof} Recall that $\Upsilon^{X, Y}$ is defined using currying in $\mathbf{Cstab}$, $\Theta^{X \Rightarrow Y, X}$, and the eval morphism in $\mathbf{Cstab}$. Since currying and structural morphisms are the same in $\mathbf{Cstab_m}$ as in $\mathbf{Cstab}$, and moreover we have shown in Lemma~\ref{lemma:functormcartesian} that $\Theta^{X \Rightarrow Y, X}$ is a morphism in $\mathbf{Cstab_m}$, we have the result. \end{proof} We show now that $\Xi^{X, Y}$ is also a $\mathbf{Cstab_m}$ morphism, by
using Lemma~\ref{lemma:auxmeaspath}. To do that, we need to show that $(\Xi^{X, Y} \circ \gamma)_{\mu,b}$ is Borel measurable for every $(\mu, b) \in \web{X \Rightarrow Y}$. Our proof strategy is the following: first we show that it can be written as a higher-order partial derivative of a (Borel) measurable function, and then we show that under some conditions on the domain, the partial derivative of a Borel measurable function is again Borel measurable. \begin{lemma}\label{lemma:derivaux1}
Let $\mu \in \mfin {\web X}$ and $b \in \web Y$, and let $\{a_1, \ldots, a_p\}$ be the support of $\mu$. Then there exist $\delta \in \pathes{\mathcal F^m X} p$ and $\alpha_\mu >0$, such that for every $f \in \prog{(X \Rightarrow Y)}$, the function:
$$\psi_{\delta}^f: \vec t \in \mathbb{R}^p \mapsto (\trmeas {\epsilon_b}{\delta})(\vec t)(\mapm f) \in \Rp{} $$
satisfies: there exists $c>0$ such that the partial derivative $\frac {\partial {(\psi_\delta^f)}^{\card \mu}}{\partial {t_1}^{\mu(a_1)} \ldots \partial{t_p}^{\mu(a_p)}}$ exists on $[0,c]^p \subseteq \mathbb{R}^p$, and moreover its value at $\vec 0$ is $\alpha_\mu \cdot f_{\mu,b}$.
\end{lemma} \begin{proof}
We take $$ \delta: \vec t \in \mathbb{R}^p \mapsto \begin{cases}
\sum_{1 \leq i\leq p} t_i \cdot e_{a_i} \text{ if } t_i \geq 0 \ \forall i \text{ and }\sum_{1 \leq i \leq p} t_i \leq 1; \\
0 \text{ otherwise.}
\end{cases} $$
Using Lemma~\ref{lemma:mpforpcs}, we see that indeed $\delta \in \pathes {\mathcal F^m X} p$.
Observe that $\psi_{\delta}^f(\vec t) = \sum_{\nu \mid \supp \nu \subseteq{\{a_1, \ldots,a_p\}}} f_{\nu,b} \cdot \vec t^\nu.$
From there, by using theorems of real analysis on normally convergent series of functions, we can deduce the result (the complete proof may be found in the long version).
\end{proof}
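To make the constant $\alpha_\mu$ explicit (a sketch of the computation; the term-by-term differentiation is justified by the normal-convergence argument mentioned above): after taking $\mu(a_i)$ derivatives in each variable $t_i$ and evaluating at $\vec t = \vec 0$, every monomial $\vec t^{\nu}$ of the series for $\psi_\delta^f$ with $\nu \neq \mu$ vanishes, either because some derivative annihilates it or because a factor $t_i$ survives. Hence

```latex
\frac {\partial {(\psi_\delta^f)}^{\card \mu}}
      {\partial {t_1}^{\mu(a_1)} \ldots \partial{t_p}^{\mu(a_p)}}(\vec 0)
  \;=\; f_{\mu,b} \cdot \prod_{1 \leq i \leq p} \mu(a_i)! \,,
  \qquad \text{so that } \alpha_\mu \;=\; \prod_{1 \leq i \leq p} \mu(a_i)! \,.
```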
\longv{\begin{proof}
Since $\gamma$ is a measurable path, we know that for every $p \in \mathbb{N}$, and $l \in \pathes {\mathcal F^m X} p$, $\trmeas {\epsilon_b}{l}$ is a measurability test on ${\mathcal F^m X \Rightarrow \mathcal F^m Y}$, and therefore: \begin{equation}\label{eq:fmarroweqaux1}
(\vec r, \vec u) \in \mathbb{R}^{p+n} \mapsto (\trmeas {\epsilon_b} l (\vec r))(\gamma(\vec u)) \in \Rp{} \text{ is measurable.}
\end{equation} We are going to apply~\eqref{eq:fmarroweqaux1} to a particular measurable path on ${\mathcal F^m X}$. Let $p$ be the cardinality of $\text{Supp}(\mu)$, and $\{a_1, \ldots, a_p\} = \text{Supp}(\mu)$. We define $l^{\mu}: \mathbb{R}^{p} \rightarrow \mathcal F^m X$ as:
$$ l^{\mu}: \vec r \in \mathbb{R}^p \mapsto \begin{cases}
\sum_{1 \leq i\leq p} r_i \cdot e_{a_i} \text{ if } r_i \geq 0 \ \forall i \text{ and }\sum_{1 \leq i \leq p} r_i \leq 1; \\
0 \text{ otherwise.} \end{cases} $$ We see that $l^{\mu}(\mathbb{R}^p)$ is bounded in $\mathcal F^m X$, and moreover for every $a \in \web X$, the function $\vec r \in \mathbb{R}^p \mapsto l^{\mu}(\vec r)_a \in \mathbb{R}^+$ is measurable. Using the characterization of $\pathes {\mathcal F^m X} p$ in Lemma~\ref{lemma:mpforpcs}, we see that $l^{\mu}$ is in $\pathes {\mathcal F^m X} p$. Thus we can apply~\eqref{eq:fmarroweqaux1} with $l = l^{\mu}$. Observe that
$$(\trmeas {\epsilon_b} l^{\mu} (\vec r))(\gamma(\vec u)) = \left(\gamma(\vec u)(l^\mu(\vec r))\right)_b.$$ Therefore~\eqref{eq:fmarroweqaux1} tells us that $\phi^{\mu,b}: \mathbb{R}^{p+n} \rightarrow \Rp{}$ is measurable,
with $\phi^{\mu,b}$ defined as $\phi^{\mu,b}: (\vec r, \vec u) \in \mathbb{R}^{p+n} \mapsto \gamma(\vec u)(l^\mu(\vec r))_b \in \mathbb{R}_+.$ We define $J \subseteq \mathbb{R}^p$ as $[0,\frac 1 p[^p$. We are going to look at the restriction of the function $\phi^{\mu,b}$ to $J \times \mathbb{R}^n$: indeed we are going to show that $\phi^{\mu,b}$ has partial derivatives on that interval. We define $\psi^{\mu,b} : J \times \mathbb{R}^n \rightarrow \Rp{}$ as the restriction of $\phi^{\mu,b}$ to $J \times \mathbb{R}^n$. Since $\phi^{\mu,b}$ is a measurable function, and $J \times \mathbb{R}^n$ a measurable subset of $\mathbb{R}^{p+n}$, $\psi^{\mu,b}$ is also measurable.
Lemma~\ref{lemma:partder1} below (which is proved in the long version) is key: it says that we can recover the coefficients of the power series $\psi^{\mu,b}$ by looking at its partial derivatives. We will then show that we can do it in a \emph{measurable} way. \begin{lemma}\label{lemma:partder1}
For every multiset $\nu \in \mfin{\{1, \ldots, p\}}$, there exists an interval $K$ of the form $[0,c]^p$ such that the partial derivative
$\partial^{\nu} \psi^{\mu,b} = \frac {\partial {(\psi^{\mu,b}_{\mid K \times \mathbb{R}^n})}^{\card \nu}}{\partial {r_1}^{\nu(1)} \ldots \partial{r_p}^{\nu(p)}}: K \times \mathbb{R}^n \rightarrow \Rp{}$ exists, and moreover:
$$\partial^{\nu}\psi^{\mu,b}(\vec 0, \vec u) = {\Xi^{X, Y}(\gamma(\vec u))}_{\nu,b} \cdot \prod_{1 \leq i \leq p} {\nu(i)!} $$ \end{lemma} \longv{ \begin{proof}
Since $l^{\mu}(\vec r) = \sum_{1 \leq i \leq p} r_i \cdot e_{a_i}$ for $\vec r \in J$, we see that: $$
\phi^{\mu,b}(\vec r, \vec u) = \sum_{\nu \in \mfin{\web X}} {\Xi^{X, Y}({\gamma(\vec u)})}_{\nu,b} \cdot \vec r^{\nu} \in \Rp{}. $$
For a fixed $\vec u$, we can see it as a generalization of power series in real analysis. There are well-known results about the differentiation of such series: for instance, a power series is differentiable on the interior of its domain of convergence. Here, we are going to show the counterpart of some of these properties for what we call \emph{multiset series in $p$ real variables}: those are the series of the form $$S(\vec r) = \sum_{\nu \in \mfin{\{1, \ldots,p\}}} a_\nu \cdot \vec r^\nu \quad \text{where} \quad \vec r \in \mathbb{R}^p.$$
First, we observe that for each $\vec r$, we can look at $S(\vec r)$ as an infinite sum over natural numbers:
$$S(\vec r) = \sum_{n \in \mathbb{N}} (\sum_{\mu \in \mfin{\{1, \ldots,p\}} \mid \card \mu = n} a_\mu \cdot\vec r^\mu) .$$ We recall here a classical result of real analysis on series of functions, which we will use in the following. \begin{lemma}[Differentiation of a series]\label{derserieonev} Let $f_n : I \rightarrow \mathbb{R}$ be a sequence of functions on a bounded interval $I$. We suppose that $f(x) = \sum_{n \in \mathbb{N}} f_n(x)$ is convergent for every $x \in I$, and moreover that for each $n \in \mathbb{N}$, $f_n$ is differentiable and $\sum_{n \in \mathbb{N}} f_n'$ is uniformly convergent on $I$. Then $f$ is differentiable, and moreover $f' = \sum_{n \in \mathbb{N}} f'_n$.
\end{lemma}
\begin{lemma}\label{lemma:derpart1}
Let $p \in \mathbb{N}$, and $S(\vec r) = \sum_{\nu \in \mfin{\{1, \ldots,p\}}} a_\nu \cdot \vec r^\nu $ with non-negative coefficients $a_\nu$, such that $S$ is convergent on an interval $I=[ -c ,c ]^p$, with $c >0$.
Then there exists $0 <b < c$, such that the function
$g:\vec r \in ]-b, b[^p \mapsto S(\vec r) \in \mathbb{R} $ is partially differentiable in each of the variables $r_i$, and moreover:
$$\frac {\partial g}{\partial {r_i}}(\vec r) = \sum_{\nu \in \mfin{\{1, \ldots,p\}} \mid i \in \nu} a_\nu \cdot \vec r^{\nu - [i]} \cdot \nu(i).$$ \end{lemma}
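As a sanity check (not part of the original argument), with $p = 1$ a multiset over $\{1\}$ is determined by its cardinality, so the lemma specializes to the classical term-by-term differentiation of a power series with non-negative coefficients:

```latex
S(r) \;=\; \sum_{n \in \mathbb{N}} a_n \cdot r^n
\quad \Longrightarrow \quad
\frac {\partial g}{\partial r}(r) \;=\; \sum_{n \geq 1} a_n \cdot n \cdot r^{n-1}
\qquad \text{for } r \in \left]-b, b\right[ .
```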
\begin{proof}
We take $b = \frac c 2$, and set $J=]-b, b[$. To simplify the notation, we suppose here that $i = 1$, but the proof is the same in the other cases.
We want to show that for any fixed $\vec u \in J^{p-1}$, the function
$ h_{\vec u}: r \mapsto g(r, \vec u)$ is differentiable on $J$.
Let us fix $\vec u \in ]-b, b[^{p-1}$. We are going to use Lemma~\ref{derserieonev} on $h_{\vec u}$.
We see that $h_{\vec u}( r) = \sum_{n \in \mathbb{N}} h_n(r)$, where $$h_n(r) = (\sum_{\nu \in \mfin{\{2, \ldots,p\}} } a_{\nu+[1^n]} \cdot \vec u ^\nu ) \cdot r^n,$$
where $[1^n]$ is the multiset consisting of $n$ occurrences of $1$. We see that for every $n \in \mathbb{N}$, the function $h_n$ is differentiable on $J$, and:
$$h_n'(r) =(\sum_{\nu \in \mfin{\{2, \ldots,p\}}} a_{\nu+[1^n]} \cdot \vec u ^\nu )\cdot n \cdot r^{n-1} .$$
We now show that the series $\sum_{n \in \mathbb{N}} h_n'$ is uniformly convergent on $J$: for every $r \in J$, it holds that:
\begin{align*}
\lvert h_n'(r) \rvert &\leq (\sum_{\nu \in \mfin{2, \ldots,p}} a_{\nu+[1^n]} \cdot \lvert \vec u \rvert ^\nu ) \cdot n \cdot \lvert r \rvert ^{n-1} \\
&= (\sum_{\nu \in \mfin{2, \ldots,p}} a_{\nu+[1^n]} \cdot \lvert \vec u \rvert ^\nu \cdot c^n ) \cdot n \cdot \frac{1}{c}\cdot \left(\frac{\lvert r \rvert}{c}\right) ^{n-1} \\
& = (\sum_{\eta \in \mfin{1, \ldots,p} \mid \eta(1) = n} a_{\eta} \cdot \lvert (c,\vec u) \rvert ^\eta ) \cdot n \cdot \frac{1}{c}\cdot \frac{\lvert r \rvert^{n-1}}{c^{n-1}}
\end{align*}
Since the series $S(\vec r) = \sum a_\mu \cdot \vec r ^\mu$ is convergent on $I$, and $(c, \lvert \vec u \rvert ) \in I$, there exists $M \geq 0$ such that, for every $n \in \mathbb{N}$, $\sum_{\nu \in \mfin{1, \ldots,p} \mid \nu(1) = n} a_{\nu} \cdot \lvert (c,\vec u) \rvert ^\nu \leq M.$
As a consequence, and since moreover for each $r \in J$, it holds that $\lvert r \rvert \leq b$, we can now write:
\begin{equation}\label{eq:auxderivhn}
\forall r \in J, \quad \lvert h_n'(r) \rvert \leq M \cdot \frac {n} c \cdot \left(\frac{\lvert r \rvert}{c}\right)^{n-1} \leq M \cdot \frac {n} c \cdot \left(\frac{b}{c}\right)^{n-1}
\end{equation}
Since $ b< c$, we know that the right-hand side of~\eqref{eq:auxderivhn} defines a convergent series. As a consequence, the series $\sum_{n \in \mathbb{N}} h_n'$ is uniformly convergent on $J$, which means that we are able to apply Lemma~\ref{derserieonev}: we see that $\frac {\partial g}{\partial {r_1}}$ exists on $J^p$, and moreover: \begin{align*}
\frac {\partial g}{\partial {r_1}} ( \vec r) &= \sum_{n \in \mathbb{N}} h_n'(r_1) \qquad \text{with } \vec u = (r_2, \ldots,r_p) \\
& = \sum_{n \in \mathbb{N}}(\sum_{\nu \in \mfin{\{2, \ldots,p\}}} a_{\nu+[1^n]} \cdot (r_2, \ldots,r_p) ^\nu )\cdot n \cdot r_1^{n-1}\\
& = \sum_{\nu \in \mfin{\{1, \ldots,p\}} \mid 1 \in \nu} a_\nu \cdot \vec r^{\nu-[1]} \cdot \nu(1) \quad \text{and the result holds.} \end{align*}
\end{proof} We iterate now Lemma~\ref{lemma:derpart1} in order to look at higher-order partial derivatives. \begin{lemma}\label{lemma:derpart2}
Let $S(\vec r) = \sum_{\nu \in \mfin{\{1, \ldots,p\}}} a_\nu\cdot (\vec r)^\nu $ with $a_\nu \geq 0$. We suppose $S$ convergent on $I=] - a ,a [^p$, with $a>0$.
Then for every multiset $\nu \in \mfin{\{1, \ldots, p\}}$, there exists $0 <b \leq a$, such that, when we define
$g:\vec r \in [-b, b]^p \mapsto S(\vec r) $, the higher-order partial derivative
$\frac {\partial g^{\card \nu}}{\partial {r_1}^{\nu(1)} \ldots \partial{r_p}^{\nu(p)}}$ exists, and moreover:
$$\frac {\partial g^{\card\nu}}{\partial {r_1}^{\nu(1)} \ldots \partial{r_p}^{\nu(p)}}(\vec r) = \sum_{\eta \in \mfin{\{1, \ldots,p\}} } a_{\nu+\eta} \cdot {\vec r}^{\eta} \cdot \prod_{1 \leq i \leq p} \frac {(\eta+\nu)(i)!}{\eta(i) !}. $$ \end{lemma} \begin{proof}
The proof is by induction on $\card \nu$, and uses Lemma~\ref{lemma:derpart1}. It is clear that the result holds for $\nu= \emptyset$. Now, we suppose that it holds for every $\nu$ of cardinality $N$. Let $\kappa$ be a multiset of cardinality $N+1$, and we take $\nu$, and $i$ such that $\kappa = \nu + [i]$. By the induction hypothesis, there exists $c >0$, such that, when we define $g:\vec r \in [-c, c]^p \mapsto S(\vec r) $, the partial derivative $\frac {\partial g^{\card\nu}}{\partial {r_1}^{\nu(1)} \ldots \partial{r_p}^{\nu(p)}}$ exists, and is equal to: $$T(\vec r)= \sum_{\eta \in \mfin{\{1, \ldots,p\}} } a_{\eta+\nu} \cdot {\vec r}^{\eta} \cdot \prod_{1 \leq j \leq p} \frac {(\eta+\nu)(j)!}{\eta(j) !}. $$
We see that we can apply Lemma~\ref{lemma:derpart1} with $T$ as the multiset series, and $I = [-c, c]^p$. It means that there exists $0 < d < c$ such that $\frac{\partial T}{\partial {r_i}}$ exists on $]-d,d[^p$, and
\begin{align*}
\frac{\partial T}{\partial {r_i}}(\vec r) &=\sum_{\eta \in \mfin{\{1, \ldots,p\}} \mid i \in \eta} a_{\eta+\nu} \cdot \vec r^{\eta-[i]}\cdot \eta(i) \cdot \prod_{1 \leq j \leq p} \frac {(\eta+\nu)(j)!}{\eta(j) !} \\
& = \sum_{\iota \in \mfin{\{1, \ldots,p\}}} a_{\iota+\kappa} \cdot \vec r^\iota \cdot {(\iota+[i])(i)} \cdot \prod_{1 \leq j \leq p} \frac {(\iota+\kappa)(j)!}{(\iota+[i])(j) !} \\
& = \sum_{\iota \in \mfin{\{1, \ldots,p\}}} a_{\iota+\kappa} \cdot \vec r^\iota \cdot \prod_{1 \leq j \leq p} \frac {(\iota+\kappa)(j)!}{\iota(j) !}.
\end{align*} \end{proof} We end the proof of Lemma~\ref{lemma:partder1} by using Lemma~\ref{lemma:derpart2} for each $\vec u \in \mathbb{R}^n$ on the multiset series given by
$$S_{\vec u}(\vec r)= \sum_{\nu \in \mfin{\{1, \ldots,p\}}} (\Xi^{X, Y}{\gamma(\vec u)})_{\nu,b} \cdot \vec r^{\nu}.$$ We see that it is indeed absolutely convergent on $I = ]-\frac 1 p, \frac 1 p[^p$, using the fact that $S_{\vec u}(\vec r) = \psi^{\mu,b}(\vec r, \vec u)$ for $\vec r \in [0, \frac 1 p]^p$. \end{proof}}
\end{proof}}
\begin{lemma} For every unitary $\gamma \in \pathes{\mathcal F^m X \Rightarrow \mathcal F^m Y} n$ and every $(\mu,b) \in \web{X \Rightarrow Y}$, the function $(\Xi^{X, Y} \circ \gamma)_{\mu,b}$ is Borel measurable.
\end{lemma}
\begin{proof}
We take $\alpha_\mu, \delta, c$ as given by Lemma~\ref{lemma:derivaux1}. Since $\trmeas{\epsilon_b}{\delta}$ is a measurability test for $\mathcal F^m X \Rightarrow \mathcal F^m Y$, we see that the function $$(\vec t, \vec s) \in \mathbb{R}^{p+n} \mapsto \psi_\delta^{\gamma(\vec s )}(\vec t) = {(\trmeas {\epsilon_b}{\delta})(\vec t)(\gamma(\vec s))} \in \Rp{}$$ is measurable. Since $K = [0,c[^p \times \mathbb{R}^n$ is a measurable subset of $\mathbb{R}^{p+n}$, the restriction (which we denote $\phi$) of this function to $K$ is measurable too.
Moreover, observe that $\Xi^{X, Y} \circ \gamma (\vec s) \in \prog{(X \Rightarrow Y)}$ and $\mapm{{\Xi^{X, Y} \circ \gamma(\vec s)}} = \gamma(\vec s) $. This tells us that we can apply Lemma~\ref{lemma:derivaux1}, and we see that: \begin{equation}\label{eq:auxdermeaas}
(\Xi^{X, Y}\circ \gamma)(\vec s)_{\mu,b} = \frac 1 {\alpha_\mu} \cdot \frac {\partial {\phi}^{\card \mu}}{\partial {t_1}^{\mu(a_1)} \ldots \partial{t_p}^{\mu(a_p)}}(\vec 0, \vec s)
\end{equation}
(observe that this partial derivative exists, since it exists for $\psi_\delta^{\gamma(\vec s)}$ for every fixed $\vec s$).
We are now going to show that every partial derivative of $\phi$, when it exists, is measurable too. This is based on the fact that the class of real-valued measurable functions is closed under addition, multiplication by a scalar and pointwise limits. Indeed, there exists a positive sequence $(r_n)_{n \in \mathbb{N}}$ in $[0,c[$ which tends towards $0$. As a consequence, the partial derivative of $\phi$ with respect to $t_1$ (for instance) may be written as: $ \frac {\partial \phi}{\partial t_1}(0, \vec t,\vec s) = \lim_{n \rightarrow \infty} f_n( \vec t, \vec s)$, with
$f_n(\vec t, \vec s) = \frac {\phi((r_n, \vec t), \vec s) - \phi((0, \vec t), \vec s)}{r_n}$. It tells us that $(\vec t,\vec s) \in \mathbb{R}^{p-1+n} \mapsto \frac {\partial \phi}{\partial t_1}(0, \vec t,\vec s) $ is the pointwise limit of a sequence of measurable functions, hence is measurable. By iterating this reasoning, we see that this is also the case for higher-order partial derivatives (when they exist), and~\eqref{eq:auxdermeaas} allows us to conclude the proof.
\end{proof}
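The difference-quotient argument in the proof above can be illustrated numerically. The sketch below is purely illustrative (function and sequence names are ours, not the paper's): it shows a derivative arising as the pointwise limit of quotients along a positive sequence $r_n \to 0$, each quotient being measurable whenever the original function is.

```python
# Illustrative sketch of the proof's difference-quotient technique:
# the derivative at a point is the pointwise limit of the quotients
# (phi(t + r_n) - phi(t)) / r_n along the positive sequence r_n = 2^(-n).

def phi(t):
    # A stand-in for (one variable of) the measurable function phi above.
    return t * t

def quotient(n, t):
    """Difference quotient of phi at t along r_n = 2^(-n)."""
    r_n = 2.0 ** (-n)
    return (phi(t + r_n) - phi(t)) / r_n

# The quotients converge to phi'(t) = 2t; at t = 1 the limit is 2.
assert abs(quotient(40, 1.0) - 2.0) < 1e-6
```

Each `quotient(n, .)` is a scalar combination of translates of `phi`, which is the closure-under-addition-and-scaling step; taking $n \to \infty$ is the closure-under-pointwise-limits step.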
\begin{lemma}\label{lemma:fmarrowclosed}
For all $X$, $Y$ PCSs,
$$ \Xi^{X, Y} \in \mathbf{Cstab_m}(\mathcal F^m X \Rightarrow \mathcal F^m Y, \mathcal F^m (X \Rightarrow Y)).$$ \end{lemma}
\hide{\begin{proof} Since the class of real-valued measurable functions is closed by addition, multiplication by a scalar and pointwise limit, and that $\psi^{\mu,b} : K \times \mathbb{R}^p \rightarrow \Rp{}$ is measurable, it holds that the partial derivatives (when they exist) are measurable too. Indeed, observe that $\frac{\partial{\psi^{\mu,b}}}{\partial r_1}((r_1, \vec r), \vec u) = \lim_{n \rightarrow \infty} f_n((r_1, \vec r), \vec u)$, with $$f_n((r_1, \vec r), \vec u) = n \cdot {f((r_1+ \frac 1 n, \vec r), \vec u) - f((r_1, \vec r), \vec u)}.$$ As a consequence, applying Lemma~\ref{lemma:partder1} with $\nu = \mu$ leads us to: $$\vec u \in \mathbb{R}^n \mapsto \frac {\partial {(\psi^{\mu,b})}^{\card \mu}}{\partial {r_1}^{\mu(1)} \ldots \partial{r_p}^{\mu(p)}}(\vec 0, \vec u) \text{ is measurable.}$$ Therefore (again by Lemma~\ref{lemma:partder1}), $(\vec u \in \mathbb{R}^n \mapsto {\Xi^{X, Y}(\gamma(\vec u))}_{\mu,b} \cdot \prod_{1 \leq i \leq p} {\mu(i)!} \text{ is measurable})$, and the result holds.
\end{proof} }
\begin{theorem} $\mathcal F^m$ is a cartesian closed full and faithful functor. \end{theorem}
\hide{\subsection{Programming Languages Considerations} We want now to give a denotational semantics of $\textsf{PCF}_\oplus$ in the category $\mathbf{Cstab_m}$. We want it to be as compatible as possible with the semantics of $\textsf{PCF}_{\text{sample}}$ given in~\cite{pse}, in order to be able to see it as a fragment of the semantics of a language $\textsf{PCF}_{\oplus, \text{sample}}$ containing both $\textsf{PCF}_{\oplus}$ and $\textsf{PCF}_{\text{sample}}$, which would be a language with continuous probabilities with an explicit discrete probabilistic fragment.
To ensure this compatibility, we take $\semm \mathbb{N} = \meas \mathbb{N}$, and the semantics of every type and typing context in $\textsf{PCF}_\oplus$ is given inductively by $\semm \mathbb{N}$, $\mathbf{Cstab_m}$ and $\times_m$. We would like now to use the functor $\mathcal F^m$ in order to send $\sem{\textsf{PCF}_\oplus}_{\mathbf{Pcoh}_!}$ into $\mathbf{Cstab_m}$, in such a way that the semantics of types is as specified above. To do that, we need to have guarantees about the behavior of $\mathcal F^m$ regarding the cartesian structure. However, it is not true that $\mathcal F^m$ respects the cartesian structure of $\mathbf{Pcoh}_!$: it comes from the fact that measurability tests for $\mathcal F^m{X} \Rightarrow_m \mathcal F^m {Y}$ are not the same as those in $\mathcal F^m{X \Rightarrow Y}$. But, it turns out that the \emph{measurable paths} of these two MCs are the same. It leads us to a weaker form of preservation of the cartesian structure, stated below in Theorem~\ref{th:cartclosedfunctorm}. \begin{definition} We say that two MCs $C_m$ and $D_m$ are \emph{cone-isomorphic}, and we note $C_m \equiv D_m$, if there exists an isomorphism $\phi$ in $\mathbf{Cstab}$ between their underlying cones, and moreover $\pathes {D_m} n = \phi \circ \pathes {C_m} n$. \end{definition} Observe that $\meas \mathbb{N} \equiv \mathcal F \mathbb{N}$. The relevance of the equivalence relation comes from Theorem~\ref{th:cartclosedfunctorm} below:
\begin{theorem}\label{th:cartclosedfunctorm} Let $X$ and $Y$ be two PCSs. Let $C_m \equiv \mathcal F^m X$, and $D_m \equiv \mathcal F^m Y$. Then $(C_m \Rightarrow_m D_m) \equiv \mathcal F^m{(X \Rightarrow Y)}$, and $(C_m \times_m D_m) \equiv \mathcal F^m{(X \times Y)}$. \end{theorem} The proof, quite technical, of Theorem~\ref{th:cartclosedfunctorm} can be found in Appendix.
Now, we denote by $\mathbf{D_m}$ the subcategory of $\mathbf{Cstab_m}$ built inductively from $C_\mathbb{N}$, $\times_m$ and $\Rightarrow_m$, and $\mathbf{D}$ the subcategory of $\mathbf{Pcoh}_!$ obtained in the same way from $\mathbb{N}$, $\Rightarrow$ and $\times$. Observe that the $\mathbf{Pcoh}_!$ semantics of $\textsf{PCF}_{\oplus}$ actually lives in $\mathbf{D}$, and that we want to embed it into $\mathbf{D_m}$. We do this by using Theorem~\ref{th:cartclosedfunctorm} to convert $\mathcal F^m$ into a convenient functor $\mathbf{D} \rightarrow \mathbf{D_m}$.
\begin{proposition}
There exists a functor $\mathcal G : \mathbf{D} \rightarrow \mathbf{D_m}$, that respects the cartesian structure, and is full and faithful. \end{proposition} \begin{proof} We first define the effect of $\mathcal G$ on objects in $\mathbf{D}$. For every $\sigma$ a discrete type, we take $\mathcal G{\sem \sigma} = \semm \sigma$. Now, we are going to define the effect of $\mathcal G$ on morphisms. For every $\sigma, \tau$ discrete type, we can see by iterating Theorem~\ref{th:cartclosedfunctorm} that $\mathcal F^m {\sem \sigma} \equiv \semm \sigma$, as well as $\mathcal F^m {\sem \tau} \equiv \semm \tau$. Let $\phi^\sigma , \phi^\tau$ be the relevant cones isomorphisms between underlying cones. Let $f \in \mathbf{Pcoh}_!(\sem \sigma, \sem \tau)$: we take $\mathcal F^m f = \phi^{\tau} \circ \mathcal F f \circ {\phi^{\sigma}}^{-1}$. To show that it is full and faithful, we use the fact that $\mathcal F^m$ is so.
\end{proof}
\begin{theorem} $\mathcal G(\sem{PCF_\oplus})$ is a fully abstract denotational model of $PCF_\oplus$ in $\mathbf{Cstab_m}$. \end{theorem} \begin{proof}
We know from~\cite{pcsaamohpc} that $\sem{{\textsf{PCF}_\oplus}}$ is a fully abstract denotational model of $\textsf{PCF}_\oplus$. Since $\mathcal G$ is a full and faithful functor, it is also the case for $\mathcal G(\sem{\textsf{PCF}_\oplus})$.
\end{proof}} \begin{section}{Conclusion}
Our full embedding of $\mathbf{Pcoh}_!$ into $\mathbf{Cstab}$ implies that every stable function $f$ from $\prog X$ to $\prog Y$ can be characterized by an element $\Xi^{X, Y} (f) \in \mathbb{R}^{\mfin{\web{X}}\times \web Y}$, which has to be seen as a power series. This gives us a \emph{concrete representation} of stable functions on discrete cones, similar to the notion of trace introduced by Girard in~\cite{girard1986system} for stable functions on quantitative domains. There are well-known real analysis results on power series, such as the \emph{uniqueness theorem}\textemdash any power series which is null on an open subset has all its coefficients equal to $0$\textemdash on which the proof of full abstraction for $\textsf{PCF}_\oplus$ in $\mathbf{Pcoh}_!$~\cite{EPT15} is based.
While we have not been able to extend such a concrete representation to cones which are not \emph{directed-complete}, such as the cone $\meas \mathbb{R} \Rightarrow_m \meas \mathbb{R}$, our result is hopefully a first step in this direction. This kind of characterization could pave the way towards a full abstraction result for the continuous language $\textsf{PCF}_{\text{sample}}$ in $\mathbf{Cstab_m}$ and, more generally, gives us new tools to reason about continuous probabilistic programs.
\end{section}
\end{document}
\begin{document}
\subjclass[2010]{13A18 (12J10, 12J20, 14E15)}
\begin{abstract} For an arbitrary valued field $(K,v)$ and a given extension $v(K^*)\hookrightarrow \Lambda$ of ordered groups, we analyze the structure of the tree formed by all $\Lambda$-valued extensions of $v$ to the polynomial ring $K[x]$. As an application, we find a model for the tree of all equivalence classes of valuations on $K[x]$ (without fixing their value group), whose restriction to $K$ is equivalent to $v$. \end{abstract}
\maketitle
\section*{Introduction} A valuation on a commutative ring $A$ is a mapping $\mu\colon A\to \Lambda\infty$, where $\Lambda$ is an ordered abelian group, satisfying the following conditions:
(0) \ $\mu(1)=0$, \ $\mu(0)=\infty$,
(1) \ $\mu(ab)=\mu(a)+\mu(b),\quad\forall\,a,b\in A$,
(2) \ $\mu(a+b)\ge\min\{\mu(a),\mu(b)\},\quad\forall\,a,b\in A$.
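These axioms are satisfied, for instance, by the classical $p$-adic valuations. The following illustration is not part of the paper; it merely instantiates conditions (0)--(2) in the simplest case $A=\mathbb{Z}$, $\Lambda=\mathbb{Z}$.

```latex
% Hedged illustration: the 2-adic valuation on A = \mathbb{Z},
% with value group \Lambda = \mathbb{Z}.
v_2(a) = \max\{k\in\mathbb{N} \mid 2^k \text{ divides } a\}\ \text{ for }a\ne 0,
\qquad v_2(0)=\infty.
% Axiom (1): v_2(12\cdot 10) = v_2(120) = 3 = 2 + 1 = v_2(12)+v_2(10).
% Axiom (2): v_2(4+6) = v_2(10) = 1 \ge \min\{v_2(4),v_2(6)\} = \min\{2,1\}.
```

Both checks amount to the factorizations $120=2^3\cdot 15$, $12=2^2\cdot 3$, $10=2\cdot 5$, $4=2^2$, $6=2\cdot 3$.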
The \emph{support} of $\mu$ is the prime ideal $\mathfrak{p}=\op{supp}(\mu)=\mu^{-1}(\infty)\in\operatorname{Spec}(A)$.
The \emph{value group} of $\mu$ is the subgroup $\g_\mu\subset \Lambda$ generated by $\mu\left(A\setminus\mathfrak{p}\right)$.
Two valuations $\mu$, $\nu$ on $A$ are \emph{equivalent} if there is an isomorphism of ordered groups $\iota\colon \g_\mu\xrightarrow{\ \sim\ }\g_\nu$ such that $\nu=\iota\circ \mu$. In this case, we write $\mu\sim\nu$.
The \emph{valuative spectrum} of $A$ is the set $\op{Spv}(A)$ of equivalence classes of valuations on $A$. We denote by $[\mu]\in\op{Spv}(A)$ the equivalence class of $\mu$.
Any ring homomorphism $A\to B$ induces a restriction of valuations which behaves well on equivalence classes and determines a mapping $\op{Spv}(B)\to\op{Spv}(A)$.
For any field $K$ we may consider the relative affine line $\op{Spv}(K[x])\to\op{Spv}(K)$.
Given any valuation $v$ on $K$, the fiber $\mathcal{T}_v$ of the equivalence class $[v]\in\op{Spv}(K)$ is called the \emph{valuative tree} over the valued field $(K,v)$. $$ \ars{1.2} \begin{array}{ccc} \mathcal{T}_v&\hooklongrightarrow &\op{Spv}(K[x])\\ \downarrow&&\downarrow\\ \mbox{$[v]$}&\hooklongrightarrow &\op{Spv}(K) \end{array} $$
This terminology is borrowed from Favre and Jonsson's book \cite{FJ}, where valuations of certain 2-dimensional local rings are studied. In the case $\op{rk}(\Gamma)=1$ and $K$ algebraically closed, the valuative tree admits a structure of a Berkovich space and has relevant analytical properties \cite{Bch}.
The main aim of this paper is to obtain a thorough description of the tree $\mathcal{T}_v$ for an arbitrary valued field $(K,v)$.
In the first part of the paper, composed of sections 1--5, we fix an extension $\Gamma\hookrightarrow\Lambda$ of ordered abelian groups, and we describe the tree $\mathcal{T}=\mathcal{T}(\Lambda)$ formed by all $\Lambda$-valued extensions of $v$ to $K[x]$.
Section 1 includes some background on key polynomials of valued fields. For any valuation $\mu$ on $K[x]$, let $\op{KP}(\mu)$ be the set of Maclane-Vaqui\'e key polynomials for $\mu$. If $\op{KP}(\mu)\ne\emptyset$, then $\mu$ has a \emph{degree} and a \emph{singular value}, defined as $\deg(\mu)=\deg(\phi)$, $\operatorname{sv}(\mu)=\mu(\phi)$, for any key polynomial $\phi\in\op{KP}(\mu)$ of minimal degree.
Section 2 discusses tangent directions and the tangent space of $\mathcal{T}$. The leaves of $\mathcal{T}$ (maximal nodes) are characterized by the property $\op{KP}(\mu)=\emptyset$. The set of tangent directions of an inner node $\mu\in\mathcal{T}$ is parametrized by the set $\op{KP}(\mu)/\!\sim_\mu$ of $\mu$-equivalence classes of key polynomials.
Section 3 describes the set $\mathcal{L}_\fin(\mathcal{T})$ of \emph{finite leaves} of the tree, determined by all valuations with non-trivial support. There is a bijection between $\mathcal{L}_\fin(\mathcal{T})$ and the set of monic irreducible polynomials in $K^h[x]$, where $K^h$ is a henselization of $K$. This result is just a reformulation of classical valuation-theoretic results.
Section 4 describes the set of \emph{infinite leaves} of $\mathcal{T}$, which are a kind of limit of certain totally ordered families of inner nodes of $\mathcal{T}$. This section contains a detailed analysis of \emph{limit augmentations} of valuations too. This concept was introduced by Vaqui\'e in his fundamental papers \cite{Vaq0,Vaq} extending to arbitrary valued fields the pioneering work of Maclane for discrete rank-one valued fields \cite{mcla}.
Limit augmentations are based on \emph{continuous families} of iterated augmentations. In the literature, we find different conditions imposed on these families, serving different purposes. We define a continuous family as a totally ordered family of valuations in $\mathcal{T}$, containing no maximal element, and having a stable degree. There is a natural equivalence relation between these families and we show, in Lemma \ref{specialCont}, that every equivalence class of continuous families contains a family satisfying all relevant conditions that are attributed to these families in the literature.
Section 5 gives a detailed description of $\mathcal{T}$. Section 5.1 reviews the fundamental result of Maclane-Vaqui\'e describing how to reach all nodes of $\mathcal{T}$ by a combination of ordinary augmentations, limit augmentations and stable limits. Every node $\mu\in\mathcal{T}$ may be linked to some degree-one node in $\mathcal{T}$ by an essentially unique \emph{Maclane-Vaqui\'e (MLV) chain}, supporting data intrinsically associated to $\mu$ \cite{MLV}. For instance, each node $\mu\in\mathcal{T}$ has a \emph{depth}, defined as the length of its MLV chain, which is either a natural number or infinity. In Section 5.2, we show that these intrinsic data encode arithmetic or geometric invariants of $\mu$, depending on the context in which the base valued field $(K,v)$ is considered. Sections 5.3--5.5 describe the different kinds of paths we may find in $\mathcal{T}$. In Section 5.6, we prove that every two nodes of $\mathcal{T}$ have a greatest common lower node in $\mathcal{T}$ and relate our description of $\mathcal{T}$ to the notion of $\Lambda$-tree.
In the second part of the paper, composed of sections 6--7, we find a concrete model for the valuative tree $\mathcal{T}_v$.
For any valuation $\mu$ on $K[x]$ extending $v$, the embedding $\Gamma\hookrightarrow\g_\mu$ is a \emph{small extension} of ordered groups; that is, if $\Gamma'\subset\g_\mu$ is the relative divisible closure of $\Gamma$ in $\g_\mu$, then the quotient $\g_\mu/\Gamma'$ is a cyclic group \cite[Thm. 1.5]{Kuhl}.
In \cite{csme}, a universal extension $\Gamma\hookrightarrow\R^\I_{\lx}$ of ordered groups is constructed, which contains all small extensions of $\Gamma$ up to $\Gamma$-isomorphism as ordered groups. On a certain subset $\R_{\mbox{\tiny $\op{sme}$}}\subset\R^\I_{\lx}$, an equivalence relation $\sim_{\mbox{\tiny $\op{sme}$}}$ is defined such that the quotient set $\R_{\mbox{\tiny $\op{sme}$}}/\!\sim_{\mbox{\tiny $\op{sme}$}}$ parametrizes the quasi-cuts of the divisible hull of $\Gamma$. Also, there is a canonical subset $\g_{\op{sme}}\subset\R_{\mbox{\tiny $\op{sme}$}}$ which faithfully represents all $\sim_{\mbox{\tiny $\op{sme}$}}$ classes.
In Section 6, we consider the subtree $\mathcal{T}^0\subset \mathcal{T}(\R^\I_{\lx})$ formed by all nodes $\mu$ such that $\g_\mu\subset \R_{\mbox{\tiny $\op{sme}$}}$. Then, we characterize equivalence of valuations in $\mathcal{T}^0$ as follows.
\noindent{\bf Proposition 6.3. }{\it Let $\mu,\nu\in\mathcal{T}^0$ be two inner nodes. Then, $\mu\sim\nu$ if and only if the following three conditions hold:
(a) \ The valuations $\mu$, $\nu$ admit a common key polynomial of minimal degree.
(b) \ For all \,$a\in K[x]\,$ such that \,$\deg(a)<\deg(\mu)$, we have $\,\mu(a)=\nu(a)$.
(c) \ $\operatorname{sv}(\mu)\sim_{\mbox{\tiny $\op{sme}$}} \operatorname{sv}(\nu)$.
In this case, we have $\op{KP}(\mu)=\op{KP}(\nu)$.}
In Section 7, we consider the subtree $\mathcal{T}_\sme\subset\mathcal{T}^0$ formed by all leaves of $\mathcal{T}^0$, and all inner nodes $\mu$ such that $\operatorname{sv}(\mu)$ belongs to $\g_{\op{sme}}$. Then, we use Proposition 6.3 to obtain our main theorem.
\noindent{\bf Theorem 7.1. }{\it The mapping $\mu\mapsto[\mu]$ induces a bijection between $\mathcal{T}_\sme$ and $\mathcal{T}_v$.}
In the rest of the section, we discuss special features of the paths in $\mathcal{T}_\sme$ and we show the existence of \emph{primitive nodes}, leading to a certain stratification of the tree by \emph{limit-depth}, which is the number of limit augmentations in the MLV chains.
The techniques of this paper have been applied in two different contexts \cite{AGNR,Rig}. Let $(K^h,v^h)$ be a henselization of $(K,v)$. In \cite{AGNR}, we use the primitive nodes of the valuative tree to establish a complete parallelism between the arithmetic properties of irreducible polynomials $F\in K^h[x]$, encoded by their Okutsu frames, and the valuation-theoretic properties of their induced valuations $v_F$ on $K^h[x]$, encoded by their MLV chains. In \cite{Rig}, it is shown that the natural restriction mapping $\mathcal{T}_{v^h}\to\mathcal{T}_v$ is an isomorphism of posets.
\section{Key polynomials over valued fields}\label{secKP}
In this section we introduce notation and well-known facts on key polynomials. Proofs and a more detailed exposition can be found in the survey \cite{KP}.
For any field $L$, let $\op{Irr}(L)$ be the set of monic irreducible polynomials in $L[x]$.
Let $(K,v)$ be a valued field. Let $k$ be the residue class field, $\Gamma=v(K^*)$ the value group and $\g_\Q=\Gamma\otimes\mathbb Q$ the divisible hull of $\Gamma$.
Let $\Gamma\hookrightarrow\Lambda$ be an extension of ordered abelian groups. We write simply $\Lambda\infty$ instead of $\Lambda\cup\{\infty\}$. Consider a valuation on the polynomial ring $K[x]$ $$ \mu\colon K[x]\,\longrightarrow\, \Lambda\infty $$ whose restriction to $K$ is $v$. Let $\mathfrak{p}=\mu^{-1}(\infty)$ be the support of $\mu$.
The valuation $\mu$ induces in a natural way a valuation $\bar{\mu}$ on the field of fractions of $K[x]/\mathfrak{p}$; that is, $K(x)$ if $\mathfrak{p}=0$, or $K[x]/(f)$ if $\mathfrak{p}=fK[x]$ for some $f\in\op{Irr}(K)$.
The residue field $k_\mu$ of $\mu$ is, by definition, the residue field of $\bar{\mu}$.
We say that $\mu$ is \emph{commensurable} (over $v$) if $\Gamma_\mu/\Gamma$ is a torsion group. In this case, there is a canonical embedding $\Gamma_\mu\hookrightarrow \g_\Q$. All valuations with nontrivial support are commensurable.
For any $\alpha\in\Gamma_\mu$, consider the abelian groups: $$ \mathcal{P}_{\al}=\{g\in K[x]\mid \mu(g)\ge \alpha\}\supset \mathcal{P}_{\al}^+=\{g\in K[x]\mid \mu(g)> \alpha\}. $$ The \emph{graded algebra of $\mu$} is the integral domain: $$ \mathcal{G}_\mu:=\operatorname{gr}_{\mu}(K[x])=\bigoplus\nolimits_{\alpha\in\Gamma_\mu}\mathcal{P}_{\al}/\mathcal{P}_{\al}^+. $$
There is a natural \emph{initial term} mapping $\op{in}_\mu\colon K[x]\to \mathcal{G}_\mu$, given by $\op{in}_\mu\mathfrak{p}=0$ and $$ \op{in}_\mu g= g+\mathcal{P}_{\mu(g)}^+\in\mathcal{P}_{\mu(g)}/\mathcal{P}_{\mu(g)}^+, \qquad\mbox{if }\ g\in K[x]\setminus\mathfrak{p}. $$
There is a natural embedding of graded algebras \ $\mathcal{G}_v:=\operatorname{gr}_v(K)\hookrightarrow \mathcal{G}_\mu$.
The following definitions translate properties of the action of $\mu$ on $K[x]$ into algebraic relationships in the graded algebra $\mathcal{G}_\mu$.
\noindent{\bf Definition. } Let $g,\,h\in K[x]$.
We say that $g,h$ are \emph{$\mu$-equivalent}, and we write $g\sim_\mu h$, if $\op{in}_\mu g=\op{in}_\mu h$.
We say that $g$ is \emph{$\mu$-divisible} by $h$, and we write $h\mid_\mu g$, if $\op{in}_\mu h\mid \op{in}_\mu g$ in $\mathcal{G}_\mu$.
We say that $g$ is $\mu$-irreducible if $(\op{in}_\mu g)\mathcal{G}_\mu$ is a nonzero prime ideal.
We say that $g$ is $\mu$-minimal if $g\nmid_\mu f$ for all nonzero $f\in K[x]$ with $\deg(f)<\deg(g)$.
The $\mu$-minimality condition admits a relevant characterization \cite[Prop. 2.3]{KP}.
\begin{lemma}\label{minimal0} A polynomial $g\in K[x]\setminus K$ is $\mu$-minimal if and only if $\mu$ acts as follows on $g$-expansions: $$f=\sum\nolimits_{0\le s}a_s g^s,\quad \deg(a_s)<\deg(g)\ \ \Longrightarrow\ \ \mu(f)=\min\left\{\mu\left(a_sg^s\right)\mid 0\le s\right\}.$$ \end{lemma}
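As a hedged illustration, not taken from the paper, consider the degree-one case $g=x$: Lemma \ref{minimal0} then says that $x$ is $\mu$-minimal precisely when $\mu$ is monomial on $x$-expansions, as the Gauss-type valuations are.

```latex
% Sketch: take v = v_2 on K = \mathbb{Q} and fix \gamma \ge 0 in \Lambda.
% The monomial valuation \mu_\gamma acts on x-expansions by
\mu_\gamma\Big(\sum\nolimits_{0\le s} a_s x^s\Big)
   \;=\; \min\{\,v_2(a_s) + s\gamma \mid 0\le s\,\},
% which is exactly the minimum formula of the lemma for g = x,
% so x is \mu_\gamma-minimal. For \gamma = 1:
\mu_1(x^2 + 2x + 4) \;=\; \min\{0+2,\ 1+1,\ 2+0\} \;=\; 2.
```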
\noindent{\bf Definition. } A \emph{(Maclane-Vaqui\'e) key polynomial} for $\mu$ is a monic polynomial in $K[x]$ which is simultaneously $\mu$-minimal and $\mu$-irreducible.
The set of key polynomials for $\mu$ is denoted $\op{KP}(\mu)$.
All $\phi\in\op{KP}(\mu)$ are irreducible in $K[x]$. Let $[\phi]_\mu\subset\op{KP}(\mu)$ be the subset of all key polynomials $\mu$-equivalent to $\phi$. Two $\mu$-equivalent key polynomials have the same degree \cite[Prop. 6.6]{KP}; hence, it makes sense to consider the degree $\deg\, [\phi]_\mu$ of a class.
The existence of key polynomials can be characterized as follows \cite[Thm. 4.4]{KP}.
\begin{theorem}\label{KPempty}
The following conditions are equivalent. \begin{enumerate} \item $\op{KP}(\mu)=\emptyset$. \item $\mu$ is commensurable and $k_\mu/k$ is an algebraic extension of fields. \item $\mathcal{G}_\mu$ is a simple algebra (all nonzero homogeneous elements are units).
\end{enumerate} \end{theorem}
\noindent{\bf Definition. } Suppose that $\op{KP}(\mu)\ne\emptyset$ and take $\phi\in\op{KP}(\mu)$ of minimal degree. The \emph{degree} and \emph{singular value} of $\mu$ are defined as $$ \deg(\mu)=\deg(\phi),\qquad \operatorname{sv}(\mu)=\mu(\phi). $$ The singular value is well defined by \cite[Thm. 3.9]{KP}.
Another relevant invariant of a valuation $\mu$ on $K[x]$ is its field $\kappa=\kappa(\mu)$ of \emph{algebraic residues}, defined as the relative algebraic closure of $k$ in the residue field $k_\mu$ of $\mu$.
Let $\Delta=\Delta_\mu\subset\mathcal{G}_\mu$ be the subring of homogeneous elements of degree zero in the graded algebra. There are natural embeddings $$k\hookrightarrow\kappa\hookrightarrow\Delta\hookrightarrow k_\mu.$$
The structure of $\Delta$ as a $\kappa$-algebra plays an essential role in the description of the branches of a node in the valuative tree.
\begin{theorem}\label{Delta} Let $\mu$ be a valuation on $K[x]$, whose restriction to $K$ is $v$. \begin{enumerate} \item If $\op{KP}(\mu)=\emptyset$, then $\kappa=\Delta=k_\mu$ is a countably generated extension of $k$. \item If $\mu$ is incommensurable, then $\kappa=\Delta=k_\mu$ is a finite extension of $k$. \item If $\mu$ is commensurable and $\op{KP}(\mu)\ne\emptyset$, then $\Delta=\kappa[\xi]$ and $k_\mu=\kappa(\xi)$, for some $\xi\in\Delta$ which is transcendental over $\kappa$. \end{enumerate} \end{theorem}
The valuations $\mu$ falling in case (3) of Theorem \ref{Delta} are said to be \emph{residually transcendental}. There is a tight link between $\Delta$ and the set $\op{KP}(\mu)$ \cite[Thm. 6.7]{KP}.
\begin{theorem}\label{DeltaKP} If $\op{KP}(\mu)\ne\emptyset$, the residual ideal mapping $$\op{KP}(\mu)\,\longrightarrow\,\operatorname{Max}(\Delta),\qquad \phi\longmapsto \left(\op{in}_\mu(\phi)\mathcal{G}_\mu\right)\cap \Delta$$ induces a bijection between $\op{KP}(\mu)/\!\sim_\mu$ and the maximal spectrum of $\Delta$. \end{theorem}
If $\mu$ is incommensurable, then $\Delta$ is a field and $\operatorname{Max}(\Delta)$ is a one-element set. In this case, $\op{KP}(\mu)=[\phi]_\mu$, for any monic polynomial $\phi\in K[x]$ of minimal degree such that the class of $\mu(\phi)$ in $\Lambda/\Gamma$ is not a torsion element.
If $\mu$ is residually transcendental, Theorems \ref{Delta} and \ref{DeltaKP} yield a bijection $$\op{KP}(\mu)/\!\sim_\mu\quad \longleftrightarrow\quad\operatorname{Max}(\Delta)\quad \longleftrightarrow\quad \op{Irr}(\kappa),$$ which depends on the choice of a generator $\xi$ for $\Delta$, as shown in Theorem \ref{Delta}(3).
\section{Tree of valuations with values in a fixed group}\label{secTreeLa}
Let $\mathcal{T}=\mathcal{T}(\Lambda)$ be the set of all valuations $\mu\colon K[x]\to\Lambda\infty$ whose restriction to $K$ is $v$. This set admits a partial ordering. For $\mu,\nu\in \mathcal{T}$ we say that $\mu\le\nu$ if $$ \mu(f)\le \nu(f),\qquad \forall\,f\in K[x]. $$ As usual, we write $\mu<\nu$ to indicate that $\mu\le\nu$ and $\mu\ne\nu$.
If $\mu\le\nu$, there is a canonical homomorphism of graded $\mathcal{G}_v$-algebras: $$\mathcal{G}_\mu\,\longrightarrow\,\mathcal{G}_\nu,\qquad \op{in}_\mu f\longmapsto \begin{cases}\op{in}_\nu f,& \mbox{ if }\mu(f)=\nu(f),\\ 0,& \mbox{ if }\mu(f)<\nu(f).\end{cases} $$
This poset $\mathcal{T}$ has the structure of a \emph{tree}. By this, we simply mean that all intervals $$ (-\infty,\mu\,]:=\left\{\rho\in\mathcal{T}\mid \rho\le\mu\right\} $$ are totally ordered \cite[Thm. 2.4]{MLV}.
\noindent{\bf Definition. } A node $\mu\in\mathcal{T}$ is a \emph{leaf} if it is a maximal element with respect to the ordering $\le$. Otherwise, we say that $\mu$ is an \emph{inner node}.
\begin{theorem}\cite[Thm. 2.3]{MLV}\label{maximal} A node $\mu\in\mathcal{T}$ is a leaf if and only if $\op{KP}(\mu)=\emptyset$. \end{theorem}
All valuations with nontrivial support are leaves of $\mathcal{T}$, because they satisfy condition (2) of Theorem \ref{KPempty}. We call them \emph{finite leaves}. The leaves of $\mathcal{T}$ having trivial support are \emph{valuation-algebraic} in Kuhlmann's terminology \cite{Kuhl}. We call them \emph{infinite leaves}. We denote the set of leaves and its partition into finite and infinite leaves as follows: $$ \mathcal{L}(\mathcal{T})=\mathcal{L}_\fin(\mathcal{T})\sqcup\mathcal{L}_\infty(\mathcal{T}). $$
\noindent{\bf Definition. } For a leaf $\mu\in\mathcal{L}(\mathcal{T})$ we define its \emph{degree} as: $$ \deg(\mu)=\sup\left\{\deg(\rho)\mid \rho\in\mathcal{T},\ \rho<\mu\right\}\in\mathbb N\infty. $$\vskip0.2cm
A finite leaf $\mu\in\mathcal{L}_\fin(\mathcal{T})$ has $\op{supp}(\mu)=fK[x]$ for some monic irreducible $f\in K[x]$ and $\deg(\mu)=\deg(f)$. The infinite leaves may have finite or infinite degree.
\subsection{Tangent directions and augmentations}\label{subsecTanDir} Let $\mu,\,\nu$ be two nodes in $\mathcal{T}$ such that $\mu<\nu$. Let $\mathbf{t}(\mu,\nu)$ be the (nonempty) set of monic polynomials $\phi\in K[x]$ of minimal degree satisfying $\mu(\phi)<\nu(\phi)$.
We say that $\mathbf{t}(\mu,\nu)$ is the \emph{tangent direction} of $\mu$, determined by $\nu$. This terminology will be justified in Section \ref{subsecTanSpace}, when we study the tangent space of $\mathcal{T}$.
The following properties of $\mathbf{t}(\mu,\nu)$ were proven by Maclane for discrete rank-one valued fields, and generalized by Vaqui\'e to arbitrary valued fields \cite[Thm. 1.15]{Vaq}, \cite[Prop. 2.2, Cor. 2.5]{MLV}.
\begin{lemma}\label{propertiesTMN} Let $\mu<\nu$ be two nodes in $\mathcal{T}$ and let $\phi\in\mathbf{t}(\mu,\nu)$. \begin{enumerate} \item The polynomial $\phi$ belongs to $\op{KP}(\mu)$ and $\mathbf{t}(\mu,\nu)=[\phi]_\mu$. Also, $\deg(\mu)\le \deg(\nu)$. \item For all nonzero $f\in K[x]$ the equality $\mu(f)=\nu(f)$ holds if and only if $\phi\nmid_\mu f$.
\item If $\mu<\nu<\rho$ in $\mathcal{T}$, then $\mathbf{t}(\mu,\rho)=\mathbf{t}(\mu,\nu)$. In particular, $$ \mu(f)=\rho(f)\ \Longleftrightarrow\ \mu(f)=\nu(f),\qquad \forall\,f\in K[x]. $$
\end{enumerate} \end{lemma}
On the other hand, for any inner node $\mu\in\mathcal{T}$, every $\mu$-equivalence class in $\op{KP}(\mu)$ is the tangent direction of $\mu$ determined by some $\nu\in\mathcal{T}$ with $\mu<\nu$. Indeed, for any $\phi\in\op{KP}(\mu)$ and any $\gamma\in\Lambda\infty$ such that $\mu(\phi)<\gamma$, we may construct the \emph{augmented valuation} $\nu=[\mu;\phi,\gamma]$, defined in terms of $\phi$-expansions as $$ f=\sum\nolimits_{0\le s}a_s\phi^s,\quad \deg(a_s)<\deg(\phi)\ \Longrightarrow\ \nu(f)=\min\{\mu(a_s)+s\gamma\mid 0\le s\}. $$ Note that $\nu(\phi)=\gamma$. The following properties of this augmented valuation are also due to Maclane and Vaqui\'e \cite[Prop. 2.1]{MLV}.
\begin{lemma}\label{propertiesAug} Let $\nu=[\mu;\phi,\gamma]$ be an augmented valuation of $\mu$. \begin{enumerate} \item We have $\mu<\nu$ and $\,\mathbf{t}(\mu,\nu)=[\phi]_\mu$. \item The value group of $\nu$ is $\g_\nu=\gen{\Gamma_{\mu,\phi},\gamma}$, where $\Gamma_{\mu,\phi}$ is the subgroup $$\Gamma_{\mu,\phi}=\left\{\mu(a)\mid a\in K[x],\ 0\le \deg(a)<\deg(\phi)\right\}.$$ \item If $\gamma=\infty$, then $\op{supp}(\nu)=\phi K[x]$. If $\gamma<\infty$, then $\phi$ is a key polynomial for $\nu$ of minimal degree. In both cases, $\deg(\nu)=\deg(\phi)$. \end{enumerate} \end{lemma}
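To make the augmentation process concrete, here is a small computation, not from the paper, under the assumption that $(K,v)=(\mathbb{Q},v_2)$ and that $\mu$ is the monomial (Gauss-type) valuation determined by $\mu(x)=0$, for which $\phi=x$ is a key polynomial of minimal degree.

```latex
% Augment along \phi = x with \gamma = 1 > \mu(\phi) = 0:
\nu = [\mu;\ x,\ 1], \qquad
\nu\Big(\sum\nolimits_{0\le s} a_s x^s\Big) = \min\{\,v_2(a_s) + s \mid 0\le s\,\}.
% Sample values, compared with \mu (the case \gamma = 0):
\mu(x^2+2) = \min\{0,\,1\} = 0
\;<\;
\nu(x^2+2) = \min\{2,\,1\} = 1,
\qquad \nu(\phi) = \gamma = 1.
```

In accordance with Lemma \ref{propertiesAug}, one finds $\mu<\nu$, $\mathbf{t}(\mu,\nu)=[x]_\mu$ and $\deg(\nu)=\deg(x)=1$.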
For all $\phi_*\in\op{KP}(\mu)$, $\gamma_*\in\Lambda\infty$ such that $\mu(\phi_*)<\gamma_*$, \cite[Lem. 2.8]{MLV} shows that \begin{equation}\label{eqAug} [\mu;\,\phi,\gamma]=[\mu;\,\phi_*,\gamma_*] \ \Longleftrightarrow\ \mu(\phi_*-\phi)\ge \gamma=\gamma_*\ \Longrightarrow\ \phi\sim_\mu\phi_*. \end{equation}
\subsection{The tangent space of $\mathcal{T}$}\label{subsecTanSpace}
For any inner node $\mu\in\mathcal{T}$, consider the quotient set $$ \op{TD}(\mu)=\left\{\nu\in\mathcal{T}\mid \mu<\nu\right\}/\!\sim_{\op{tan}}, $$ where $\nu\sim_{\op{tan}}\nu'$ if and only if $ (\mu,\nu]\cap(\mu,\nu']\ne\emptyset$.
The transitivity of $\sim_{\op{tan}}$ follows easily from the fact that $\mathcal{T}$ is a tree. We denote by $[\nu]_{\op{tan}}$ the class of $\nu$. The elements of $\op{TD}(\mu)$ can be identified with the tangent directions of $\mu$ defined in the last section.
\begin{proposition}\label{td=td} For all inner nodes $\mu\in\mathcal{T}$, the association $$ \phi\longmapsto t_\mu(\phi):=\left[\,[\mu;\,\phi,\gamma]\,\right]_{\op{tan}},\quad \gamma\in\Lambda\infty, \ \gamma>\mu(\phi), $$ is independent of the choice of $\gamma$ and so it defines a mapping $t_\mu\colon\op{KP}(\mu)\to\op{TD}(\mu)$, which induces a bijection between $\op{KP}(\mu)/\!\sim_\mu$ and $\op{TD}(\mu)$. \end{proposition}
\begin{proof}
If $\mu(\phi)<\gamma<\gamma_*$, then $\mu<[\mu;\,\phi,\gamma]<[\mu;\,\phi,\gamma_*]$. Thus, $[\mu;\,\phi,\gamma]\sim_{\op{tan}}[\mu;\,\phi,\gamma_*]$, so that the mapping $t_\mu$ is well defined.
Take $\nu \in \mathcal{T}$ such that $\mu<\nu$. For all $\phi\in\mathbf{t}(\mu,\nu)$ we have $$ \mu<[\mu;\,\phi,\nu(\phi)]\le\nu, $$ by comparing their actions on $\phi$-expansions. Thus, $t_\mu(\phi)=[\nu]_{\op{tan}}$. This proves that $t_\mu$ is onto. Finally, let us show that, for all $\phi,\phi_*\in\op{KP}(\mu)$, the equality $t_\mu(\phi)=t_\mu(\phi_*)$ holds if and only if $\phi\sim_\mu\phi_*$.
If $\phi\sim_\mu\phi_*$ and $\gamma=\mu(\phi-\phi_*)>\mu(\phi)$, then $[\mu;\,\phi,\gamma]=[\mu;\,\phi_*,\gamma]$, by (\ref{eqAug}); thus $t_\mu(\phi)=t_\mu(\phi_*)$. Conversely, if $t_\mu(\phi)=t_\mu(\phi_*)$, there exists $\rho\in\mathcal{T}$ such that $$ \mu<\rho\le [\mu;\,\phi,\gamma]\qquad\mbox{and}\qquad \mu<\rho\le [\mu;\,\phi_*,\gamma_*], $$ for some $\gamma>\mu(\phi)$, $\gamma_*>\mu(\phi_*)$. By \cite[Lem. 2.7]{MLV}, there exist $\delta,\delta_*\in\Lambda$ such that $[\mu;\,\phi,\delta]=\rho=[\mu;\,\phi_*,\delta_*]$. By (\ref{eqAug}), we have $\phi\sim_\mu\phi_*$. \end{proof}
By the remarks following Theorem \ref{DeltaKP}, $\op{TD}(\mu)$ is a one-element set if $\mu$ is incommensurable, while there is a (non-canonical) bijection between $\op{TD}(\mu)$ and $\op{Irr}(\kappa(\mu))$, if $\mu$ is commensurable.
\noindent{\bf Definition. } The \emph{tangent space} of $\mathcal{T}$ is the set $\mathbb T(\mathcal{T})$ containing all pairs $(\mu,t)$, where $\mu$ is an inner node in $\mathcal{T}$ and $t\in\op{TD}(\mu)$ is a tangent direction of $\mu$.
\section{Finite leaves}\label{secFinLeaves} For any field $L$ and a monic irreducible polynomial $F\in\op{Irr}(L)$, we denote by $L_F$ the simple field extension $L[x]/(F)$.
In this section, we assume that $\Lambda$ contains the divisible closure of $\Gamma$; that is, $\g_\Q\subset\Lambda$. Under this assumption, the set $\mathcal{L}_\fin(\mathcal{T})$ of finite leaves of $\mathcal{T}$ may be parametrized as $$ \mathcal{L}_\fin(\mathcal{T})=\left\{(\phi,\bar{\nu})\mid \phi\in\op{Irr}(K),\ \bar{\nu}\ \mbox{ valuation on $K_\phi$ extending }v \right\}, $$ where we identify each pair $(\phi,\bar{\nu})$ with the following valuation with support $\phi K[x]$: $$ \nu\colon K[x]\longtwoheadrightarrow K_\phi\stackrel{\bar{\nu}}{\,\longrightarrow\,}\g_\Q\infty. $$
Every finite simple field extension $L/K$ admits a finite number of extensions of $v$ to $L$. Any such extension determines an infinite number (if $K$ is infinite) of finite leaves of $\mathcal{T}$, one for each $\phi\in\op{Irr}(K)$ such that $K_\phi$ is $K$-isomorphic to $L$. For instance, the valuation $v$ on $K$ determines the finite leaves $(x-a,v)$, for $a$ running in $K$.
Let us recall the description of all extensions of $v$ to simple finite extensions of $K$, which can be found (for instance) in \cite[Sec. 17]{endler}. Let us first describe all extensions of $v$ to an arbitrary algebraic extension $L$ of $K$. These extensions are commensurable over $v$; thus, we aim to describe the set: $$ \mathcal{E}(L)=\left\{w\colon L\,\longrightarrow\, \g_\Q\infty\,\mid\, w\, \mbox{ valuation extending }v\right\}. $$
Consider $K\subset K^{\op{sep}}\subset\overline{K}$, the separable closure of $K$ in a fixed algebraic closure $\overline{K}$.
Let $\bar{v}$ be a fixed extension of $v$ to $\overline{K}$. Let $K\subset K^h\subset K^{\op{sep}}$ be the henselization of $K$ determined by the choice of $\bar{v}$. Thus, $K^h$ is the fixed field of the decomposition group of the restriction of $\bar{v}$ to $K^{\op{sep}}$.
On the set $\op{Mon}(L/K)$ of all $K$-morphisms from $L$ to $\overline{K}$, we define the following equivalence relation $$ \lambda\sim_{K^h}\lambda' \ \ \Longleftrightarrow\ \ \lambda'=\sigma\circ\lambda\quad\mbox{for some}\quad \sigma\in\op{Aut}(\overline{K}/K^h). $$
\begin{theorem}\label{endler}
The mapping $\op{Mon}(L/K)\to\mathcal{E}(L)$, defined by $\lambda\mapsto \bar{v}\circ\lambda$,
induces a bijection between the quotient set $\op{Mon}(L/K){}/\!\sim_{K^h}$ and $\mathcal{E}(L)$. \end{theorem}
\begin{center} \setlength{\unitlength}{4mm} \begin{picture}(8,9) \put(4,0){$K$}\put(3.6,4){$\lambda(L)$}\put(4,8){$\overline{K}$} \put(0,2){$L$}\put(8,4.3){$K^h$}\put(8,6.8){$K^{\op{sep}}$} \put(4.4,1.1){\line(0,1){2.3}}\put(4.4,5.1){\line(0,1){2.4}}\put(8.4,5.2){\line(0,1){1.3}} \put(3.8,0.4){\vector(-2,1){2.9}}\put(1,2.5){\vector(2,1){2.5}} \put(5,.3){\line(4,5){2.9}}\put(5,8.2){\line(3,-1){2.7}} \put(1.6,3.2){\footnotesize{$\lambda$}} \end{picture} \end{center}
For instance, $\mathcal{E}(\overline{K})$ is in bijection with $\op{Aut}(\overline{K}/K^h)\backslash\op{Aut}(\overline{K}/K)$. Every right coset $\op{Aut}(\overline{K}/K^h)\,\sigma$ determines the valuation $\bar{v}\circ\sigma$.
Suppose now that $L/K$ is a simple finite extension; that is, $L=K_\phi$ for some $\phi\in\op{Irr}(K)$. Since $K^h/K$ is a separable extension, the factorization of $\phi$ into a product of monic irreducible polynomials in $K^h[x]$ takes the form $$ \phi=F_1\cdots F_r, $$ with pairwise different $F_1,\dots,F_r\in\op{Irr}(K^h)$.
Let $Z(\phi)\subset \overline{K}$ be the set of zeros of $\phi$, counted without multiplicities. We have a natural bijection $$ Z(\phi)\,\longrightarrow\,\op{Mon}(L/K),\qquad \theta\longmapsto \lambda_\theta, $$ where $\lambda_\theta$ is determined by $\lambda_\theta\left(x+\phi K[x]\right)=\theta$. Clearly, $$ \begin{array}{rccl} \lambda_\theta \sim_{K^h} \lambda_{\theta'}&\ \Longleftrightarrow\ &\lambda_{\theta'}=\sigma\circ \lambda_\theta&\quad\mbox{for some}\quad\sigma\in\op{Aut}(\overline{K}/K^h)\\ &\ \Longleftrightarrow\ &\theta'=\sigma(\theta)&\quad\mbox{for some}\quad\sigma\in\op{Aut}(\overline{K}/K^h) \end{array} $$ Therefore, $\lambda_\theta \sim_{K^h} \lambda_{\theta'}$ if and only if $\theta$ and $\theta'$ are roots of the same irreducible factor of $\phi$ over $K^h[x]$.
Let us choose an arbitrary root $\theta_i\in Z(F_i)$ for each irreducible factor of $\phi$. By Theorem \ref{endler}, the valuations $\overline{w}_{F_i}=\bar{v}\circ\lambda_{\theta_i}$ do not depend on the chosen roots, and they comprise all extensions of $v$ to $L$.
\begin{theorem}\label{CorEndler} There are $r$ extensions of $v$ to $L=K_\phi$, given by $\overline{w}_{F_1},\dots,\overline{w}_{F_r}$. \end{theorem}
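As a hedged numerical illustration, not from the paper, take $(K,v)=(\mathbb{Q},v_2)$, whose henselization $K^h$ sits inside $\mathbb{Q}_2$.

```latex
% Take \phi = x^2 + 7 \in \op{Irr}(\mathbb{Q}).
% Since -7 \equiv 1 \pmod 8, the element -7 is a square in \mathbb{Z}_2, and its
% square roots are algebraic over \mathbb{Q}, hence lie in K^h. Therefore
x^2 + 7 = F_1\,F_2 \ \text{ in } K^h[x],
\qquad F_1 \ne F_2 \ \text{ monic linear},
% so r = 2: there are exactly two extensions \overline{w}_{F_1}, \overline{w}_{F_2}
% of v_2 to L = \mathbb{Q}[x]/(x^2+7).
```

By contrast, $x^2+3$ remains irreducible over $\mathbb{Q}_2$ (as $-3\equiv 5 \pmod 8$ is not a square in $\mathbb{Z}_2$), so $v_2$ extends uniquely to $\mathbb{Q}[x]/(x^2+3)$.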
This description of the extensions of $v$ to simple finite extensions of $K$ yields a parametrization of the finite leaves by the set $\op{Irr}(K^h)$. For all $F\in\op{Irr}(K^h)$, consider the finite leaf $w_F\in\mathcal{L}_\fin$ given by $$ w_F(g)=\bar{v}(g(\theta))\quad \mbox{for all }g\in K[x], $$ where $\theta\in\overline{K}$ is any root of $F$ in $\overline{K}$. By the henselian property, this valuation $w_F$ is independent of the choice of $\theta$.
Clearly, $\op{supp}(w_F)=N(F)K[x]$, where the ``norm'' polynomial $N(F)\in\op{Irr}(K)$ is uniquely determined by the equality $\left(F K^h[x]\right)\cap K[x]=N(F)K[x]$.
Since the valuation induced by $w_F$ on $K_{N(F)}$ is precisely $\overline{w}_F=\bar{v}\circ\lambda_\theta$, Theorem \ref{CorEndler} implies the following result.
\begin{theorem}\label{Lfin=Hensel} If $\,\g_\Q\subset \Lambda$, we have a bijection $$ \op{Irr}(K^h)\,\longrightarrow\, \mathcal{L}_\fin(\mathcal{T}),\qquad F\longmapsto w_F=(N(F),\overline{w}_F). $$ \end{theorem}
The inverse mapping associates to each pair $(\phi,\bar{\nu})\in\mathcal{L}_\fin(\mathcal{T})$ the irreducible factor of $\phi$ over $K^h[x]$ canonically associated to $\bar{\nu}$ by Theorem \ref{CorEndler}.
\section{Infinite leaves and limit nodes}
In this section, we study the nodes of $\mathcal{T}$ which cannot be obtained by a finite chain of ordinary augmentations starting with a degree-one valuation. These nodes will be a kind of limit of certain totally ordered families of valuations in $\mathcal{T}$.
\subsection{Totally ordered families of valuations}\label{subsecTOF} Consider a totally ordered family of inner nodes of $\mathcal{T}$, not containing a maximal element: $$ \mathcal{A}=\left(\rho_i\right)_{i\in A},\quad \rho_i\in\mathcal{T}. $$ We assume that $\mathcal{A}$ is parameterized by a totally ordered set $A$ of indices, such that the mapping $A\to\mathcal{A}$ determined by $i\mapsto \rho_i$ is an isomorphism of totally ordered sets.
By Lemma \ref{propertiesTMN}, the degree function $\deg\colon \mathcal{A}\to\mathbb N$ is order-preserving. Hence, these families fall into two radically different cases:
(a) \ The set $\deg(\mathcal{A})$ is unbounded in $\mathbb N$. We say that $\mathcal{A}$ has \emph{unbounded degree}.
(b) \ There exists $i_0\in A$ such that $\deg(\rho_i)=\deg(\rho_{i_0})$ for all $i\ge i_0$. We say that $\mathcal{A}$ is a \emph{continuous family} of stable degree $m(\mathcal{A})=\deg(\rho_{i_0})$.
In any case, $\mathcal{A}$ determines a unique tangent direction of every valuation $\rho_i\in\mathcal{A}$. Indeed, Lemma \ref{propertiesTMN} shows that $\mathbf{t}(\rho_i,\rho_j)=\mathbf{t}(\rho_i,\rho_k)$ for all $i<j<k$ in $A$.
We denote by $\mathbf{t}(\rho_i,\mathcal{A})$ this common tangent direction. By Lemma \ref{propertiesTMN}, there exists a key polynomial $\varphi_i\in\op{KP}(\rho_i)$ such that $\mathbf{t}(\rho_i,\mathcal{A})=[\varphi_i]_{\rho_i}$, and for any nonzero polynomial $f\in K[x]$ we have $$ \varphi_i\mid_{\rho_i}f \ \ \Longleftrightarrow\ \ \rho_i(f)<\rho_j(f)\ \mbox{ for all }\ j>i \ \mbox{ in }A, $$ $$ \varphi_i\nmid_{\rho_i}f \ \ \Longleftrightarrow\ \ \rho_i(f)=\rho_j(f)\ \mbox{ for all }\ j>i \ \mbox{ in }A. $$\vskip0.2cm
\noindent{\bf Definition. } We say that a nonzero $f\in K[x]$ is \emph{$\mathcal{A}$-stable} if it satisfies $\varphi_i\nmid_{\rho_i}f$, for some index $i\in A$. In this case, we denote its stable value by $$\rho_\aa(f)=\max\{\rho_i(f)\mid i\in A\}.$$ We obtain in this way a \emph{stability function} $\rho_\aa\colon K[x]_\mathcal{A}\to \Lambda\infty$, defined only on the subset $K[x]_\mathcal{A}\subset K[x]$ formed by the $\mathcal{A}$-stable polynomials.
The following basic properties of the function $\rho_\aa$ are obvious: \begin{itemize} \item $(\rho_\aa)_{\mid K}=v$, \item $f,g\in K[x]_\mathcal{A}\ \ \Longrightarrow\ \ fg\in K[x]_\mathcal{A}\ \mbox{ and } \ \rho_\aa(fg)=\rho_\aa(f)+\rho_\aa(g)$, \item $f,g,f+g\in K[x]_\mathcal{A}\ \ \Longrightarrow\ \ \rho_\aa(f+g)\ge\min\{\rho_\aa(f),\rho_\aa(g)\}$. \end{itemize}
In particular, if $K[x]_\mathcal{A}=K[x]$, then the function $\rho_\aa$ is a valuation in $\mathcal{T}$.
\noindent{\bf Definition. } If all the polynomials in $K[x]$ are $\mathcal{A}$-stable, we say that the valuation $\rho_\aa$ is the \emph{stable limit} of $\mathcal{A}$. In this case, we write $\rho_\aa=\lim(\mathcal{A})=\lim_{i\in A}\rho_i$.
\begin{proposition}\cite[Prop. 3.1]{MLV}\label{leaves} If $\mathcal{A}=\left(\rho_i\right)_{i\in A}$ has a stable limit, then $\rho_\aa$ has trivial support and $\op{KP}(\rho_\aa)=\emptyset$. In particular, $\rho_\aa$ is an infinite leaf of the tree $\mathcal{T}$. \end{proposition}
Let us see a necessary condition for a polynomial to be $\mathcal{A}$-unstable.
\begin{lemma}\label{aunstab} All $\mathcal{A}$-unstable polynomials $f$ satisfy $\deg(f)\ge \deg(\rho_i)$ for all $i\in A$. \end{lemma}
\begin{proof} Let $\mathbf{t}(\rho_i,\mathcal{A})=[\varphi_i]_{\rho_i}$ for some $i\in A$. If $f\in K[x]$ has $\deg(f)<\deg(\rho_i)$, then $\deg(f)<\deg(\varphi_i)$. Hence, $\varphi_i\nmid_{\rho_i}f$, contradicting the instability of $f$. \end{proof}
\begin{corollary}\label{unbIsStab} Every totally ordered family of unbounded degree has a stable limit. \end{corollary}
\subsection{Continuous families and limit augmentations}\label{subsecCont}
Let $\mathcal{C}=\left(\rho_i\right)_{i\in A}$ be a \emph{continuous family} of valuations in $\mathcal{T}$ of stable degree $m=m(\mathcal{C})$.
The following result shows that the valuations of maximal degree $m$ in $\mathcal{C}$ are ``close'' to one another in a certain sense.
\begin{lemma}\label{samedegree} Let $\mu<\nu$ be two inner nodes of $\mathcal{T}$ of degree $m$. Take any $\phi\in\op{KP}(\nu)$ of degree $m$. Then, $\phi\in\op{KP}(\mu)$ and $\nu= [\mu;\,\phi, \nu(\phi)]$. \end{lemma}
\begin{proof} For all $a \in K[x]$ with $\deg(a)<m$, we have $\mu(a)=\nu(a)$, because any $\varphi\in \mathbf{t}(\mu,\nu)$ satisfies $\varphi\nmid_\mu a$.
Necessarily $\mu(\phi)<\nu(\phi)$, because the equality $\mu(\phi)=\nu(\phi)$ leads to a contradiction. Indeed, for all $f\in K[x]$ with $\phi$-expansion $f=\sum_{0\le s}a_s\phi^s$, we would have $$ \mu(f)\ge\min_{0\le s}\{\mu\left(a_s\phi^s\right)\}=\min_{0\le s}\{\nu\left(a_s\phi^s\right)\}=\nu(f), $$ showing that $\mu\ge\nu$, against our assumption.
By Lemma \ref{propertiesTMN}, $\phi$ is a key polynomial for $\mu$ such that $\mathbf{t}(\mu,\nu)=[\phi]_\mu$. The equality $\nu= [\mu;\,\phi, \nu(\phi)]$ follows then from the action of both valuations on $\phi$-expansions. \end{proof}
\subsubsection{Stable group of a continuous family}\label{subsubsecG0} \mbox{\null}
Let $\rho\in\mathcal{T}$ be an inner node. By Lemma \ref{minimal0}, applied to any key polynomial for $\rho$ of minimal degree, we have $\g_\rho=\gen{\g_\rho^0,\operatorname{sv}(\rho)}$, where $\g_\rho^0$ is the subgroup: $$ \g_\rho^0=\left\{\rho(a)\mid a\in K[x],\ 0\le\deg(a)<\deg(\rho)\right\}. $$
Moreover, $\g_\rho^0$ is commensurable over $\Gamma$. Indeed, for all $a\in K[x]$ of degree less than $\deg(\rho)$, the initial term $\op{in}_\rho a\in\mathcal{G}_\rho$ is algebraic over the graded algebra $\mathcal{G}_v$ \cite[Prop. 3.5]{KP}; thus, $n\rho(a)$ belongs to $\Gamma$ for some $n\in\mathbb N$.
The index $e_{\op{rel}}(\rho)=\left(\g_\rho\colon \g_\rho^0\right)$ is called the \emph{relative ramification index} of $\rho$.
If $\rho$ is incommensurable over $v$, then $e_{\op{rel}}(\rho)=\infty$.
Let us denote the set of stable values of nonzero $\mathcal{C}$-stable polynomials by $$\g_\cc=\rho_\cc\left(K[x]_\mathcal{C}\setminus\{0\}\right)\subset\Lambda.$$ The following lemma shows that $\g_\cc$ is a subgroup of $\Lambda$. It is called the \emph{stable group} of the continuous family $\mathcal{C}$.
\begin{lemma}\label{GAcomm} Let $\mathcal{C}=\left(\rho_i\right)_{i\in A}$ be a \emph{continuous family} in $\mathcal{T}$ of stable degree $m$. Then, $\Gamma_{\rho_i}^0=\g_\cc$ \,for all $\rho_i\in \mathcal{C}$ of degree $m$. \end{lemma}
\begin{proof} Suppose that $\deg(\rho_i)=m$ for some $i\in A$. Since $\mathcal{C}$ contains no maximal element, there exists $j\in A$ such that $i<j$. Since the degree function preserves the ordering, we have $\deg(\rho_i)=\deg(\rho_j)=m$. By Lemma \ref{samedegree}, $\mathbf{t}(\rho_i,\rho_j)=[\varphi]_{\rho_i}$, for any $\varphi\in\op{KP}(\rho_j)$ of minimal degree $m$.
Now, for all $a\in K[x]$ with $0\le\deg(a)<m$, we have $\rho_i(a)=\rho_j(a)=\rho_\cc(a)$ because $\varphi\nmid_{\rho_i}a$. This already proves $\Gamma_{\rho_i}^0=\Gamma_{\rho_j}^0\subset \g_\cc$.
Let us show the inclusion $\g_\cc\subset \Gamma_{\rho_i}^0$. Let $\gamma=\rho_\cc(f)\in\g_\cc$ be the stable value of a nonzero $\mathcal{C}$-stable $f\in K[x]$. Take $i\le k<\ell$ in $A$ such that $\deg(\rho_k)=\deg(\rho_\ell)=m$ and $\rho_k(f)=\rho_\ell(f)$. By \cite[Cor. 2.6]{MLV}, the element $\op{in}_{\rho_\ell} f$ is a unit. By \cite[Prop. 3.5]{KP}, there exists $a\in K[x]$ of degree less than $m$ such that $\op{in}_{\rho_\ell} f=\op{in}_{\rho_\ell} a$. In particular, $\rho_\ell(f)=\rho_\ell(a)$ belongs to $\Gamma_{\rho_\ell}^0=\Gamma_{\rho_i}^0$. \end{proof}
\subsubsection{Limit key polynomials and limit augmentations}\label{subsubsecLimit}\mbox{\null}
The set $\op{KP}_\infty(\mathcal{C})$ of \emph{limit key polynomials} for $\mathcal{C}$ is the set of all monic $\mathcal{C}$-unstable polynomials of minimal degree. Since the product of stable polynomials is stable, all limit key polynomials are irreducible in $ K[x]$.
The \emph{unstable degree} $m_\infty=m_\infty(\mathcal{C})$ is the common degree of all limit key polynomials for $\mathcal{C}$.
If all polynomials in $K[x]$ are $\mathcal{C}$-stable, then $\op{KP}_\infty(\mathcal{C})=\emptyset$ and we agree that $m_\infty=\infty$.
By Lemma \ref{aunstab}, $m_\infty\ge m$.
Take any limit key polynomial $\phi\in\op{KP}_\infty\left(\mathcal{C}\right)$, and choose $\gamma\in\Lambda\infty$ such that $$\rho_i(\phi)<\gamma\quad \mbox{for all } \ i\in A.$$ The \emph{limit augmentation} $\nu=[\mathcal{C};\phi,\gamma]$ acts as follows on $\phi$-expansions: $$f=\sum\nolimits_{0\le s}a_s\phi^s,\quad\deg(a_s)<m_\infty\ \ \Longrightarrow\ \ \nu(f)=\min\{\rho_\cc(a_s)+s\gamma\mid 0\le s\}. $$
The following properties of $\nu$ can be found in \cite[Sec. 1.4]{Vaq} and \cite[Sec. 7]{KP}.
\begin{proposition}\label{extensionlim} The augmentation $\nu=[\mathcal{C};\phi,\gamma]$ is a valuation in $\mathcal{T}$ such that $\rho_i<\nu$ for all $i\in A$. If $\gamma<\infty$, then $\nu$ is an inner node, $\g_\nu=\gen{\g_\cc,\gamma}$ and $\phi$ is a key polynomial for $\nu$, of minimal degree; thus, $\deg(\nu)=m_\infty$ and $\,\operatorname{sv}(\nu)=\gamma$.
If $\gamma=\infty$, then $\g_\nu=\g_\cc$ and the support of $\nu$ is $\phi K[x]$; thus, $\nu$ is a finite leaf of $\mathcal{T}$. \end{proposition}
If $m_\infty=m$, then for all $i\in A$ with maximal degree $\deg(\rho_i)=m$, Lemma \ref{samedegree} shows that $\nu=[\rho_i;\,\phi,\nu(\phi)]$. Thus, any limit augmentation of $\mathcal{C}$ is equal to an ordinary augmentation of some $\rho_i$.
Therefore, we may discard the families $\mathcal{C}$ of stable degree $m=m_\infty$, since their limit augmentations produce no new nodes in $\mathcal{T}$.
Summing up, a continuous family $\mathcal{C}$ of valuations has three possibilities:
(a) \ If $m<m_\infty=\infty$, then $\mathcal{C}$ has a stable limit.
(b) \ If $m=m_\infty<\infty$, we say that $\mathcal{C}$ is \emph{inessential}.
(c) \ If $m<m_\infty<\infty$, we say that $\mathcal{C}$ is \emph{essential}.
The continuous families having a stable limit determine infinite leaves of the tree $\mathcal{T}$, by Proposition \ref{leaves}. The essential continuous families determine inner \emph{limit nodes} of $\mathcal{T}$ as limit augmentations of the family. By the uniqueness of Maclane--Vaqui\'e\ chains \cite[Thm. 4.7]{MLV}, these limit nodes cannot be obtained by chains of ordinary augmentations starting with valuations that are smaller than some $\rho_i\in\mathcal{C}$.
The limit key polynomials are an essential ingredient in the construction of these limit nodes. Let us describe them in more detail.
\begin{lemma}\label{allLKP} Let $\mathcal{C}$ be an essential continuous family and let $\phi\in\op{KP}_\infty(\mathcal{C})$. Then, $$ \op{KP}_\infty(\mathcal{C})=\left\{\phi+a\mid a\in K[x],\ \deg(a)<m_\infty, \ \rho_\cc(a)>\rho_i(\phi) \ \mbox{ for all }i\in A\right\}. $$ \end{lemma}
\begin{proof} Let $\varphi\in K[x]$ be a monic polynomial of degree $m_\infty$. Then, $\varphi=\phi+a$ for some $a\in K[x]$ with $\deg(a)<m_\infty$. Since $a$ is $\mathcal{C}$-stable, there exists $i_0\in A$ such that $$ \rho_\cc(a)=\rho_j(a)\quad\mbox{ for all }\ i_0\le j. $$
For all indices $i_0\le j<k$ we have $\rho_j(\phi)<\rho_k(\phi)$ and $\rho_j(a)=\rho_k(a)=\rho_\cc(a)$.
Suppose that $\rho_\cc(a)>\rho_i(\phi)$ for all $i\in A$. From $\rho_\cc(a)>\rho_k(\phi)$ we deduce that $\rho_j(\varphi)=\rho_j(\phi)<\rho_k(\phi)=\rho_k(\varphi)$. Thus, $\varphi$ belongs to $\op{KP}_\infty(\mathcal{C})$.
Suppose that $\rho_\cc(a)\le \rho_i(\phi)$ for some $i\in A$. Then, for all $\max\{i_0,i\}<j<k$ we have $\rho_j(\varphi)=\rho_\cc(a)=\rho_k(\varphi)$. Thus, $\varphi$ is $\mathcal{C}$-stable. \end{proof}
\subsection{Equivalence of totally ordered families of valuations}\label{subsecEquivCont}
Let $\mathcal{A}=\left(\rho_i\right)_{i\in A}$, ${\mathcal B}=\left(\zeta_j\right)_{j\in B}$ be totally ordered families in $\mathcal{T}$, not containing a maximal element.
We say that $\mathcal{A}$ and ${\mathcal B}$ are \emph{equivalent}, and we write $\mathcal{A}\sim{\mathcal B}$, if they are cofinal in each other.
Obviously, two equivalent families either both have unbounded degree, or both have stable degree. Also, in the latter case they have the same stable and unstable degrees, and the same stable group.
Two totally ordered families in the same class have the same limit behavior.
\begin{proposition}\label{equiv=lim} Let $\mathcal{A}$, ${\mathcal B}$ be totally ordered families in $\mathcal{T}$, not containing a maximal element. Suppose that none of them is a continuous inessential family. Then, $$ \mathcal{A}\sim{\mathcal B} \ \ \Longleftrightarrow\ \ K[x]_\mathcal{A}=K[x]_{\mathcal B} \ \mbox{ and }\ \rho_\aa=\rho_\bb. $$ \end{proposition}
\begin{proof} Suppose $\mathcal{A}\sim{\mathcal B}$. Take any $f\in K[x]_\mathcal{A}$, so that there exists $i_0\in A$ such that $$ \rho_\aa(f)=\rho_i(f)\quad \mbox{for all }\ i\ge i_0. $$ Since ${\mathcal B}$ is cofinal in $\mathcal{A}$, there exists $j_0\in B$ such that $\rho_{i_0}<\zeta_{j_0}$. Take any $j\in B$, $j>j_0$; since $\mathcal{A}$ is cofinal in ${\mathcal B}$, there exists $i\in A$ such that $$ \rho_{i_0}<\zeta_{j_0}<\zeta_j<\rho_i. $$ Necessarily, $\rho_{i_0}(f)=\zeta_{j_0}(f)=\zeta_j(f)=\rho_\bb(f)$. Hence, $f$ is ${\mathcal B}$-stable and $\rho_\aa(f)=\rho_\bb(f)$.
The symmetry of the argument shows that $K[x]_\mathcal{A}=K[x]_{\mathcal B}$ and $\rho_\aa=\rho_\bb$.
Conversely, suppose that $K[x]_\mathcal{A}=K[x]_{\mathcal B}$ and $\rho_\aa=\rho_\bb$. Let us first assume that both families have a stable limit; that is, $K[x]_\mathcal{A}=K[x]_{\mathcal B}=K[x]$.
Since all valuations $\rho_i, \zeta_j$ are bounded above by the valuation $\rho_\aa=\rho_\bb$, the set $\mathcal{A}\cup{\mathcal B}$ is totally ordered. If $\mathcal{A}$ and ${\mathcal B}$ were not cofinal in each other, there would exist (for instance) some $j\in B$ such that $\zeta_j>\rho_i$ for all $i\in A$. But this implies $\zeta_j\ge\rho_\aa=\rho_\bb$, leading to $\zeta_k>\rho_\bb$ for all $k>j$ in $B$. This is a contradiction.
Suppose now $K[x]_\mathcal{A}=K[x]_{\mathcal B}\subsetneq K[x]$. Take any $\phi\in\op{KP}_\infty(\mathcal{A})=\op{KP}_\infty({\mathcal B})$. Since $\rho_\aa=\rho_\bb$, the following limit augmentations coincide $$ \nu:=[\mathcal{A};\,\phi, \infty]=[{\mathcal B};\,\phi, \infty], $$ because they have the same action on $\phi$-expansions. As above, the set $\mathcal{A}\cup{\mathcal B}$ is totally ordered and $\mathcal{A}$ and ${\mathcal B}$ are cofinal in each other, unless there exists (for instance) some $j\in B$ such that $\zeta_j>\rho_i$ for all $i\in A$. This leads to a contradiction too.
Indeed, let $\mathbf{t}(\zeta_j,\zeta_k)=[\varphi]_{\zeta_j}$ for some $k\in B$ such that $k>j$. Since $\zeta_j(\varphi)<\zeta_k(\varphi)$, the augmentation $\mu=[\zeta_j;\,\varphi,\zeta_k(\varphi)]$ satisfies $\zeta_j<\mu\le\zeta_k$. Since $$ \deg(\varphi)=\deg(\mu)\le\deg(\zeta_k)\le m({\mathcal B})<m_\infty({\mathcal B}), $$ we deduce that $\varphi$ is ${\mathcal B}$-stable. By our hypothesis, $\varphi$ is $\mathcal{A}$-stable too, and this implies $\zeta_j(\varphi)=\rho_\aa(\varphi)=\zeta_k(\varphi)$, which is a contradiction. \end{proof}
\begin{corollary}\label{sameLKP} Let $\mathcal{C}$, $\mathcal{C}'$ be two equivalent essential continuous families in $\mathcal{T}$. Then, $\op{KP}_\infty(\mathcal{C})=\op{KP}_\infty(\mathcal{C}')$. \end{corollary}
Since all totally ordered sets admit cofinal well-ordered subsets, in every class of totally ordered families of valuations in $\mathcal{T}$ there are well-ordered families.
In this vein, for any given class, we want to study the existence of ``nice'' families in the class, having special properties.
\begin{lemma}\label{numerable} Let $\mathcal{A}=\left(\rho_i\right)_{i\in A}$ be a totally ordered family of unbounded degree. Then, there is a countable equivalent family ${\mathcal B}=\left(\zeta_n\right)_{n\in \mathbb N}$ such that all $\zeta_n$ are commensurable over $v$ and $\deg(\zeta_{m})<\deg(\zeta_{n})$ for all $m<n$. \end{lemma}
\begin{proof} By Corollary \ref{unbIsStab}, $\mathcal{A}$ has a stable limit $\mu=\lim_{i\in A}\rho_i$. The Maclane--Vaqui\'e theorem (Theorem \ref{main}) shows that $\mu$ is the stable limit of a countably infinite Maclane--Vaqui\'e chain ${\mathcal B}=\left(\zeta_n\right)_{n\in \mathbb N}$ such that all $\zeta_n$ are commensurable over $v$ and $\deg(\zeta_{m})<\deg(\zeta_{n})$ for all $m<n$. By Proposition \ref{equiv=lim}, $\mathcal{A}$ and ${\mathcal B}$ are equivalent. \end{proof}
\begin{lemma}\label{specialCont} Let $\mathcal{C}=\left(\rho_i\right)_{i\in A}$ be a continuous family such that $m(\mathcal{C})<m_\infty(\mathcal{C})$. Then, there is an equivalent family ${\mathcal B}=\left(\zeta_j\right)_{j\in B}$ satisfying the following properties: \begin{enumerate} \item ${\mathcal B}$ is well-ordered and all valuations $\zeta_j\in{\mathcal B}$ have degree $m(\mathcal{C})$. \item All valuations $\zeta_j\in{\mathcal B}$ have relative ramification index equal to one. \item All valuations $\zeta_j\in{\mathcal B}$ have value group $\Gamma_{\zeta_j}=\g_\bb$. \item For all $j\in B$, there exists $\chi_j\in\op{KP}(\zeta_j)$ of minimal degree, such that $$ j<k\ \mbox{in }B \ \ \Longrightarrow\ \ \chi_j\not\sim_{\zeta_j}\chi_k\ \mbox{ and }\ \zeta_k=[\zeta_j;\,\chi_k,\operatorname{sv}(\zeta_k)]. $$ \end{enumerate} \end{lemma}
\begin{proof} By replacing $\mathcal{C}$ with a cofinal subfamily, we may assume that $\mathcal{C}$ is well-ordered and $\deg(\rho_i)=m(\mathcal{C})$ for all $i\in A$. By Lemma \ref{GAcomm}, $\Gamma_{\rho_i}^0=\g_\cc$ for all $i\in A$.
Let us construct an equivalent family ${\mathcal B}=\left(\zeta_j\right)_{j\in B}$ whose valuations all have degree $m(\mathcal{C})$, are commensurable over $v$, and have relative ramification index equal to one; that is, $\operatorname{sv}(\zeta_j)\in \g_\cc=\g_\bb$ for all $j\in B$.
Let $\mathcal{C}_{\op{bad}}$ be the subset of $\mathcal{C}$ formed by all $\rho_i$ such that $\operatorname{sv}(\rho_i)\not\in \g_\cc$. Let us first construct, for each $\rho_i\in\mathcal{C}_{\op{bad}}$, a valuation $\rho'_i$ such that $\operatorname{sv}(\rho'_i)\in\g_\cc$ and $\rho_i<\rho'_i\le \rho_k$ for some $k$ in $A$.
For any given $\rho_i\in\mathcal{C}_{\op{bad}}$, let $\varphi\in\op{KP}(\rho_i)$ be a key polynomial of minimal degree. Then, $\operatorname{sv}(\rho_i)=\rho_i(\varphi)\not\in\g_\cc$. Since $\deg(\varphi)=\deg(\rho_i)<m_\infty(\mathcal{C})$, the polynomial $\varphi$ is $\mathcal{C}$-stable. Take $i<j<k$ in $A$ such that $\rho_j(\varphi)=\rho_k(\varphi)=\rho_\cc(\varphi)$. Since $\rho_\cc(\varphi)\in \g_\cc$ while $\rho_i(\varphi)\not\in\g_\cc$, the inequality $\rho_i(\varphi)< \rho_k(\varphi)$ is strict.
Now, take a key polynomial $\chi\in\op{KP}(\rho_k)$ of minimal degree. By \cite[Thm. 3.9]{KP}, we have $\rho_k(\varphi)\le\rho_k(\chi)=\operatorname{sv}(\rho_k)$. By Lemma \ref{samedegree}, $\rho_k=[\rho_i;\, \chi,\operatorname{sv}(\rho_k)]$. Since $\rho_i(\varphi)< \rho_k(\varphi)\le \rho_k(\chi)$, the augmented valuation $$ \rho'_i:=[\rho_i;\,\chi,\rho_k(\varphi)] $$ satisfies $\operatorname{sv}(\rho'_i)=\rho'_i(\chi)=\rho_k(\varphi)\in\g_\cc$ and $\rho_i<\rho'_i\le \rho_k$.
Consider the totally ordered family $\mathcal{C}'=\mathcal{C}\cup \{\rho'_i\mid \rho_i\in\mathcal{C}_{\op{bad}}\}$. Obviously, $\mathcal{C}$ and $\mathcal{C}'$ are equivalent. Also, by construction, the subfamily ${\mathcal B}$ of $\mathcal{C}'$ formed by all valuations $\rho$ such that $\operatorname{sv}(\rho)\in\g_\cc$ is cofinal in $\mathcal{C}'$. Thus, the family ${\mathcal B}$ is equivalent to $\mathcal{C}$ and satisfies conditions (1) and (2). Since $\Gamma_{\zeta_j}=\g_\cc=\Gamma_{\zeta_j}^0$ for all $\zeta_j\in{\mathcal B}$, condition (3) follows from Lemma \ref{GAcomm}.
Let us prove that condition (4) holds too, for this family ${\mathcal B}$. Let us choose arbitrary key polynomials $\chi_j\in\op{KP}(\zeta_j)$ of minimal degree, for all valuations $\zeta_j\in{\mathcal B}$. By Lemma \ref{samedegree}, $\zeta_k=[\zeta_j;\,\chi_k,\operatorname{sv}(\zeta_k)]$ for all $j<k$ in $B$.
The class $\mathbf{t}(\zeta_j,{\mathcal B})=\mathbf{t}(\zeta_j,\zeta_k)=[\chi_k]_{\zeta_j}$ depends only on $\zeta_j$ and ${\mathcal B}$. Thus, the condition $\chi_j\not\sim_{\zeta_j}\chi_k$, which is equivalent to $\chi_j\not\in \mathbf{t}(\zeta_j,{\mathcal B})$, depends only on $\chi_j$.
Let us show that there exists a key polynomial $\chi'_j\in\op{KP}(\zeta_j)$ of minimal degree, satisfying $\chi'_j\not\in \mathbf{t}(\zeta_j,{\mathcal B})$. Since $\zeta_j$ has relative ramification index equal to one, we have $\operatorname{sv}(\zeta_j)\in\Gamma_{\zeta_j}^0$ and there exists $a\in K[x]$ of degree less than $m$ such that $\zeta_j(a)=\operatorname{sv}(\zeta_j)=\zeta_j(\chi_j)$. By \cite[Prop. 6.3]{KP}, $\chi'_j=\chi_j+a$ is a key polynomial for $\zeta_j$ of minimal degree $m$, such that $\chi'_j\not\sim_{\zeta_j}\chi_j$. Therefore, at least one of the two key polynomials $\chi_j$, $\chi'_j$ does not fall in the class $\mathbf{t}(\zeta_j,{\mathcal B})$. \end{proof}
\section{Paths in the tree $\mathcal{T}$}\label{secCtDepth}
\subsection{Maclane--Vaqui\'e\ chains}\label{subsecMLV}
In this section, we review the fundamental theorem of Maclane--Vaqui\'e\ describing how to reach all nodes in $\mathcal{T}$ by a combination of ordinary augmentations, limit augmentations and stable limits \cite{mcla,Vaq}. All results are extracted from the survey \cite{MLV}.
Consider a finite, or countably infinite, chain of nodes in $\mathcal{T}$ \begin{equation}\label{depthMLV} \mu_0\ \stackrel{\phi_1,\gamma_1}{\,\longrightarrow\,}\ \mu_1\ \stackrel{\phi_2,\gamma_2}{\,\longrightarrow\,}\ \cdots \ \,\longrightarrow\,\ \mu_{n-1} \ \stackrel{\phi_{n},\gamma_{n}}{\,\longrightarrow\,}\ \mu_{n} \ \,\longrightarrow\,\ \cdots \end{equation} in which every node is an augmentation of the previous node, of one of the following two types:
\emph{Ordinary augmentation}: \ $\mu_{n+1}=[\mu_n;\, \phi_{n+1},\gamma_{n+1}]$, for some $\phi_{n+1}\in\op{KP}(\mu_n)$.
\emph{Limit augmentation}: \ $\mu_{n+1}=[\mathcal{A}_n;\, \phi_{n+1},\gamma_{n+1}]$, for some $\phi_{n+1}\in\op{KP}_\infty(\mathcal{A}_n)$, where $\mathcal{A}_n$ is an essential continuous family whose first valuation is $\mu_n$.
We consider an implicit choice of a key polynomial $\phi_0\in\op{KP}(\mu_0)$ of minimal degree, and we denote $\gamma_0=\mu_0(\phi_0)$.
Therefore, for all $n$ such that $\mu_n$ is an inner node of $\mathcal{T}$, the polynomial $\phi_n$ is a key polynomial for $\mu_n$ of minimal degree, and we have $$ m_n:=\deg(\mu_n)=\deg(\phi_n),\qquad \operatorname{sv}(\mu_n)=\mu_n(\phi_n)=\gamma_n. $$
\noindent{\bf Definition. } A chain of mixed augmentations as in (\ref{depthMLV}) is said to be a \emph{Maclane--Vaqui\'e\ (abbreviated MLV) chain} if $\deg(\mu_0)=1$ and every augmentation step satisfies: \begin{itemize} \item If $\,\mu_n\to\mu_{n+1}\,$ is ordinary, then $\ \deg(\mu_n)<\deg\,\mathbf{t}(\mu_n,\mu_{n+1})$. \item If $\,\mu_n\to\mu_{n+1}\,$ is limit, then $\ \deg(\mu_n)=m(\mathcal{A}_n)$ and $\ \phi_n\not \in\mathbf{t}(\mu_n,\mu_{n+1})$. \end{itemize}
Let $0\le r\le \infty$ be the length of a MLV chain. For $n<r$, all nodes $\mu_n$ are residually transcendental valuations. Indeed, in all augmentations of the chain, either ordinary or limit, we have $$ \phi_n\in\op{KP}(\mu_n),\quad \phi_n\not\in\mathbf{t}(\mu_n,\mu_{n+1}). $$ Hence, $\op{KP}(\mu_n)$ contains at least two different $\mu_n$-equivalence classes. By the remarks at the end of Section \ref{secKP}, this implies that $\mu_n$ is commensurable.
\begin{theorem}\label{main} Every node $\nu\in\mathcal{T}$ falls in one, and only one, of the following cases.
(a) \ It is the last valuation of a finite MLV chain. $$ \mu_0\ \stackrel{\phi_1,\gamma_1}{\,\longrightarrow\,}\ \mu_1\ \stackrel{\phi_2,\gamma_2}{\,\longrightarrow\,}\ \cdots\ \,\longrightarrow\,\ \mu_{r-1}\ \stackrel{\phi_{r},\gamma_{r}}{\,\longrightarrow\,}\ \mu_{r}=\nu.$$
(b) \ It is the stable limit of a continuous family $\mathcal{A}=\left(\rho_i\right)_{i\in A}$ of augmentations whose first valuation $\mu_r$ falls in case (a): $$ \mu_0\ \stackrel{\phi_1,\gamma_1}{\,\longrightarrow\,}\ \mu_1\ \stackrel{\phi_2,\gamma_2}{\,\longrightarrow\,}\ \cdots\ \,\longrightarrow\,\ \mu_{r-1}\ \stackrel{\phi_{r},\gamma_{r}}{\,\longrightarrow\,}\ \mu_{r}\ \stackrel{\left(\rho_i\right)_{i\in A}}{\,\longrightarrow\,}\ \rho_\aa=\nu. $$ Moreover, we may assume that $\deg(\mu_r)=m(\mathcal{A})$ and $\phi_r\not\in\mathbf{t}(\mu_r,\nu)$.
(c) \ It is the stable limit, $\nu=\lim_{n\in\mathbb N}\,\mu_n$, of an infinite MLV chain. \end{theorem}
The inner nodes and the finite leaves of $\mathcal{T}$ fall in case (a). These are the ``bien-sp\'ecifi\'ees'' valuations in Vaqui\'e's terminology.
We denote by $\mathcal{L}\cc_\infty(\mathcal{T}),\,\mathcal{LU}_\infty(\mathcal{T})\subset\mathcal{L}_\infty(\mathcal{T})$ the subsets of infinite leaves falling in cases (b), (c), respectively. The infinite leaves in $\mathcal{L}\cc_\infty(\mathcal{T})$ have finite degree and those in $\mathcal{LU}_\infty(\mathcal{T})$ have infinite degree.
Also, Lemma \ref{propertiesTMN} shows that in all cases displayed in Theorem \ref{main}, we have $$ \phi_n\not\in\mathbf{t}(\mu_n,\nu) \quad \mbox{ and }\quad \nu(\phi_n)=\mu_n(\phi_n)=\gamma_n=\operatorname{sv}(\mu_n)\quad \mbox{for all }\ n. $$
The main advantage of MLV chains is that their nodes are essentially unique, so that we may read off from them several data intrinsically associated to the valuation $\nu$.
For instance, the sequence $(m_n)_{n\ge0}$ and the character ``ordinary'' or ``limit'' of each augmentation step $\mu_n\to\mu_{n+1}$ are intrinsic features of $\nu$ \cite[Thm. 4.7]{MLV}.
Thus, we may define order-preserving functions $$ \op{\mbox{\small depth}},\op{\mbox{\small lim-depth}}\colon\mathcal{T}\,\longrightarrow\,\mathbb N\infty, $$ where $\op{\mbox{\small depth}}(\nu)$ is the length of the MLV chain underlying $\nu$, and $\op{\mbox{\small lim-depth}}(\nu)$ counts the number of limit augmentations in this MLV chain\footnote{Thus, all valuations of both types (a) and (b) have a finite depth. At this point, we are not following the convention of \cite{MLV}, where the valuations of type (a) were said to have \emph{finite depth}, while those of type (b) were said to have \emph{quasi-finite depth}.}.
The arguments in the proof of \cite[Lem. 4.2]{MLV} show that these functions preserve the ordering.
\subsection{Decoding MLV chains for arithmetic and geometric applications} \label{subsecAMdata}\mbox{\null}
Besides their intrinsic theoretical interest, MLV chains encode a large amount of information which can be useful in several contexts. In this section, we describe a concrete MLV chain in full generality, and then we interpret it from both the number theoretic and the geometric perspective. In the former case, we will see how to describe the decomposition of primes in number fields, while in the geometric context we will provide the desingularization of a curve.
Let $(\mathcal{O},v)$ be a valuation ring with fraction field $K$ and value group $\Gamma_v=\mathbb Z$. Let $p\in \mathcal{O}$ be a uniformizing element. Consider the following polynomials in $\mathcal{O}[x]$: \begin{equation}\label{poli} \ars{1.2} \begin{array}{rl} \phi_0=&x,\\ \phi_1=&x^5+p^3,\\ \phi_2=&\phi_1^3+p^{10}=x^{15}+3p^3x^{10}+3p^6x^5+p^9+p^{10}, \\ \phi_3=&\phi_2^2+p^{11}\phi_0^4\phi_1^2, \\
= & x^{30}+ 6p^3x^{25}+ 15p^6x^{20} + 2p^{9}(p+10)x^{15}+ p^{11}x^{14} +\\ &(6p+15)p^{12}x^{10}+2p^{14}x^9 +6(p+1)p^{15}x^5+p^{17}x^4+p^{18}(p+1)^2.
\end{array} \end{equation} Let us build a MLV chain of valuations on $K[x]$. We start with the valuation: $$ \displaystyle\mu_0\left(\sum\nolimits_i a_i x^i\right):=\min_{i} \{ v(a_i)+(3/5)i\}, $$ and consider the augmentations: $$ \displaystyle\mu_1=[\mu_0; \phi_1,10/3], \qquad \displaystyle\mu_2=[\mu_1; \phi_2, 301/30], \qquad \displaystyle\mu_3=[\mu_2; \phi_3, \infty]. $$
Note that these valuations are distributed along a path of the valuative tree $\mathcal{T}(K)$, reaching the finite leaf $\mu_3$. We get the following MLV chain for $\mu_3$: \begin{equation}\label{exampleMLVchain}
\mu_0 \stackrel{\phi_1,10/3}{\,\longrightarrow\,} \mu_1 \stackrel{\phi_2,301/30}{\,\longrightarrow\,} \mu_2 \stackrel{\phi_3,\infty}{\,\longrightarrow\,}\ \mu_3. \end{equation}
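Let us check that each step satisfies the admissibility requirement $\mu_n(\phi_{n+1})<\gamma_{n+1}$; this follows directly from the action of the augmented valuations on the relevant $\phi$-expansions:
$$
\begin{array}{l}
\mu_0(\phi_1)=\min\{(3/5)\cdot 5,\;v(p^3)\}=3<10/3,\\
\mu_1(\phi_2)=\min\{3\cdot(10/3),\;v(p^{10})\}=10<301/30,\\
\mu_2(\phi_3)=\min\{2\cdot(301/30),\;11+4\cdot(3/5)+2\cdot(10/3)\}=301/15<\infty.
\end{array}
$$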
Let us look at this MLV chain in an arithmetic context, by taking $K=\mathbb Q$ as our base field, fixing a rational prime $p\in\mathbb Z$ and considering the $p$-adic valuation as our valuation $v$. In this setting, the MLV chain (\ref{exampleMLVchain}) encodes a significant amount of information on the ring of integers $\mathbb Z_L$ of the number field $L=\mathbb Q[x]/(\phi_3)$. For instance, the prime ideal decomposition of $p$ in $\mathbb Z_L$ is completely described by (\ref{exampleMLVchain}). This can be checked by applying the OM-algorithm \cite{gen} to the pair $K,p$. The algorithm yields an OM-representation of $\phi_3(x)$ consisting of the unique type of order 3: $$ \mathbf{t}=(y;(x,3/5,y+1);(\phi_2,5/3,y+1); (\phi_3,1/2,y+1)), $$ which can be seen as a computational representation of the MLV chain. It exhibits key polynomials, slopes of Newton polygons and \emph{residual polynomials}.
Since the OM-algorithm returns a unique type, we know that a unique prime ideal $\mathfrak{P}$ of $\mathbb Z_L$ lies over $p$. Its ramification index is the product of the denominators of the slopes in $\mathbf{t}$: $e(\mathfrak{P}/p)=5\cdot3\cdot2=30$. The residual degree $f(\mathfrak{P}/p)=1$ is the product of the degrees of the residual polynomials in $\mathbf{t}$.
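These invariants agree with the fundamental identity of ramification theory: since only one prime of $\mathbb Z_L$ lies over $p$, we must have $$ e(\mathfrak{P}/p)\,f(\mathfrak{P}/p)=30\cdot 1=30=[L:\mathbb Q]=\deg(\phi_3). $$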
For any root $\theta\in\overline{\mathbb Q}$ of $\phi_3$, we can derive from this data the following values: \begin{equation}\label{val-Okutsu} \bar{v}(\theta)=3/5,\qquad \bar{v}(\phi_1(\theta))=10/3, \qquad \bar{v}(\phi_2(\theta))=301/30, \end{equation} where $\bar{v}$ is the unique extension of $v$ to $L$.
Now, suppose that $p$ is an indeterminate, so that (\ref{poli}) can be thought of as the equations of the germs at the origin $(0,0)$ of the plane curves $f(p,x)=\phi_3(x)=0$, $f_i(p,x)=\phi_i(x)=0$ with $i=1,2$, $f_0(p,x)=p=0$, $f_{-1}(p,x)=\phi_0(x)=0$, over $\mathbb C$. Take $K=\mathbb C(p)$ as our base field, equipped with the $p$-adic valuation. The very same OM-algorithm shows that $f$ is irreducible in $\mathbb C((p))[x]$. There is a unique finite leaf $\mu_3\in\mathcal{T}(K)$ with support $f\mathbb C(p)[x]$, and a MLV chain of $\mu_3$ is given in (\ref{exampleMLVchain}). Let $\nu = 30\,\mu_3$. Then, for any $\phi \in \mathbb C((p))[x]$, $\nu(\phi)$ is the intersection multiplicity between the germs of the curves $f=0$ and $\phi = 0$.
The data supported by this chain contain completely analogous arithmetic information about $f$, but they also admit a geometric interpretation. Indeed, the equation (\ref{poli}) has the property that the line $p=0$ meets the curve $f=0$ only at its singular point $(0,0)$, where it is tangent to it. In this case, the OM algorithm parallels the Newton--Puiseux algorithm for desingularization \cite[Sec. 1.2]{casas}, and the slopes and key polynomials can be reinterpreted in terms of the sequence of blow-ups involved in the desingularization process. The number of finite leaves detected by the algorithm (one in our case) is the number of points in the normalized curve lying above $(0,0)$.
Any point $P$ blown up in the desingularization process of $f=0$ gives rise to an exceptional divisor $E_P$. For the sake of simplicity, we will use the same notation $E_P$ for any strict transform of this divisor. Any such $P$ either lies on the intersection of two exceptional divisors, in which case it is called \emph{satellite}, or it lies on only one exceptional divisor, in which case it is called \emph{free}.
We say that a satellite point is \emph{satellite of} the last free point preceding it.
Among all the points on $f=0$, there is a first satellite point, satellite of a free point $P_1$, which is followed by a sequence of satellite points, $Q_1$ being the last of them. Now, let $P_2$ ($P_3$) be the second (third) free point followed by some satellite point, and let $Q_2$ ($Q_3$) be the last point in the sequence of satellite points following $P_2$ ($P_3$).
The points $Q_i$ are special points in the desingularization of the curve, since they are \emph{rupture} points, that is, the exceptional divisor $E_{Q_i}$ intersects three or more other components in the pull-back of the curve.
Consider the divisorial valuation $\nu_i$ whose last centre is $Q_i$, for $1 \le i \le 3$.
It turns out that $\nu_1= 5\mu_0$, $\nu_2 = 15\mu_1$, $\nu_3= 30\mu_2$, and moreover $$ \nu (\phi_0)=18,\qquad \nu (\phi_1)=100, \qquad \nu (\phi_2)=301, $$ which are the values appearing in (\ref{val-Okutsu}) multiplied by $\nu (f_0) = \nu (p) = 30$.
Furthermore, the germ of curve $f_{i-1}(p,x)=0$ shares with $f=0$ all its singular points and some further free simple points up to $P_i$, for each $1 \le i \le 3$.
\subsection{Nodes of depth zero}\label{subsecDepth0}
For given $a\in K$ and $\delta\in \Lambda\infty$, the depth-zero node $\nu=\omega_{a,\delta}$ is defined as $$ \nu\left(\sum\nolimits_{0\le s}a_s(x-a)^s\right) = \min\left\{v(a_s)+s\delta\mid0\le s\right\}. $$
Clearly, $\omega_{a,\infty}$ is a finite leaf of $\mathcal{T}$ with support $(x-a)K[x]$, while for $\delta<\infty$ the valuation $\omega_{a,\delta}$ is an inner node admitting $x-a$ as a key polynomial.
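For instance, taking $K=\mathbb Q$ equipped with the $p$-adic valuation $v$, $a=0$ and $\delta=1$, the depth-zero node $\omega_{0,1}$ acts on $f=x^2+px+p^3$ as $$ \omega_{0,1}(f)=\min\{v(p^3)+0,\;v(p)+1,\;v(1)+2\}=\min\{3,\,2,\,2\}=2. $$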
Besides these (well-specified) inner nodes and finite leaves, $\mathcal{T}$ may have depth-zero infinite leaves which are the stable limit of a continuous family of augmentations of stable degree one: $$ \mu_0\stackrel{(\rho_i)_{i\in A}}{\,\longrightarrow\,} \nu, $$ where $\mu_0$ is an inner node of depth zero. By Theorem \ref{main}, all depth-zero nodes in $\mathcal{T}$ arise in one of these two ways.
For any fixed $a \in K$, the set $\Lambda\infty$ parametrizes a certain path in $\mathcal{T}$, containing all depth-zero nodes $\omega_{a,\delta}$:
\begin{center} \setlength{\unitlength}{4mm} \begin{picture}(20,3) \put(0.25,1.2){\line(1,0){16}} \put(6,0.9){$\bullet$}\put(20,0.9){$\bullet$}
\put(-2.5,1){\begin{footnotesize}$\cdots\cdots$\end{footnotesize}} \put(17,1){\begin{footnotesize}$\cdots\cdots$\end{footnotesize}} \put(21,1.1){\begin{footnotesize}$\omega_{a,\infty}$\end{footnotesize}} \put(5.8,2){\begin{footnotesize}$\omega_{a,\delta}$\end{footnotesize}} \end{picture} \end{center}
The node $\omega_{a,\delta}$ is commensurable if and only if $\delta\in\g_\Q\infty$.
The relative position of the paths corresponding to two different elements $a,b\in K$ is completely determined by the following easy observation: \begin{equation}\label{balls} \omega_{a,\delta}\le\omega_{b,\epsilon}\ \ \Longleftrightarrow\ \ \min\{v(b-a),\epsilon\}\ge\delta. \end{equation}
Thus, the depth-zero paths in $\mathcal{T}$ determined by any two $a,b\in K$ coincide for all parameters $\delta\in\Lambda$ such that $\delta\le v(b-a)$.
\begin{center} \setlength{\unitlength}{4mm} \begin{picture}(22,7.5) \put(4.75,1){$\bullet$}\put(11,1){$\bullet$}\put(11,3.16){$\bullet$} \put(18,1){$\bullet$}\put(18,5.4){$\bullet$}
\put(0.25,1.3){\line(1,0){16}}\put(5,1.3){\line(3,1){11.3}} \put(16.66,4.48){$\dot{}$}\put(16.99,4.59){$\dot{}$}\put(17.32,4.7){$\dot{}$}
\put(-2.4,1.05){\begin{footnotesize}$\cdots\cdots$\end{footnotesize}} \put(16.5,1.05){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(17.6,0){\begin{footnotesize}$\omega_{a,\infty}$\end{footnotesize}} \put(17.6,6.4){\begin{footnotesize}$\omega_{b,\infty}$\end{footnotesize}} \put(2.6,2){\begin{footnotesize}$\omega_{b,v(a-b)}$\end{footnotesize}} \put(2.6,0){\begin{footnotesize}$\omega_{a,v(a-b)}$\end{footnotesize}} \put(10.6,0){\begin{footnotesize}$\omega_{a,\delta}$\end{footnotesize}} \put(10.6,4.2){\begin{footnotesize}$\omega_{b,\delta}$\end{footnotesize}} \end{picture} \end{center}
In particular, for all depth-zero nodes $\mu_0,\nu_0\in \mathcal{T}$, there is a depth-zero node $\rho_0$ such that $\rho_0<\mu_0$ and $\rho_0<\nu_0$. By Theorem \ref{main}, for all nodes $\mu,\nu\in\mathcal{T}$ we have \begin{equation}\label{intersect} (-\infty,\mu\,]\cap (-\infty,\nu\,]\ne\emptyset. \end{equation}
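\noindent{\bf Example. } The criterion (\ref{balls}) can be checked on a toy example (again with an illustrative choice of $K$): let $K=\mathbb Q$ with $v=v_p$, and take $a=0$, $b=p$, so that $v(b-a)=1$. Then $$ \omega_{0,\delta}\le\omega_{p,\delta}\ \Longleftrightarrow\ \min\{1,\delta\}\ge\delta\ \Longleftrightarrow\ \delta\le1, $$ and symmetrically for the reverse inequality. Hence $\omega_{0,\delta}=\omega_{p,\delta}$ if and only if $\delta\le 1$, while for $\delta>1$ the two depth-zero paths have already branched at the node $\omega_{0,1}=\omega_{p,1}$.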
\subsection{Paths of constant depth obtained by ordinary augmentations}\label{subsecConstDepthOrd} Let us fix an inner node $\mu\in\mathcal{T}$. For all $\phi\in\op{KP}(\mu)$, we define the \emph{constant-depth path} beyond $\mu$ as the set: $$ \mathcal{P}_\mu(\phi)=\left\{\mu(\phi,\gamma)\mid \mu(\phi)<\gamma\le\infty\right\},\qquad \mu(\phi,\gamma):=[\mu;\,\phi,\gamma], $$ containing all ordinary augmentations of $\mu$ with respect to $\phi$. This path joins $\mu$ with the finite leaf $\mu(\phi,\infty)$. Actually, by \cite[Lem. 2.7]{MLV}, $\mathcal{P}_\mu(\phi)$ coincides with the semiopen interval $(\,\mu,\mu(\phi,\infty)\,]$ in $\mathcal{T}$.
\begin{center} \setlength{\unitlength}{4mm} \begin{picture}(18,4) \put(-2,1){$\bullet$}\put(-1.6,1.3){\line(1,0){16}}\put(18,1){$\bullet$} \put(6,1){$\bullet$}
\put(-2,2){\begin{footnotesize}$\mu$\end{footnotesize}} \put(16.5,1){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(15.2,1){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(16.8,2){\begin{footnotesize}$\mu(\phi,\infty)$\end{footnotesize}} \put(4.8,2){\begin{footnotesize}$\mu(\phi,\gamma)$\end{footnotesize}} \end{picture} \end{center}
Regardless of the commensurability or incommensurability of $\mu$, Lemma \ref{propertiesAug},(2) shows that $\mu(\phi,\gamma)$ is commensurable if and only if $\gamma\in\g_\Q\infty$.
\noindent{\bf Definition. } A key polynomial $\phi\in\op{KP}(\mu)$ is said to be \emph{strong} if $\deg(\phi)>\deg(\mu)$. We say that $\mathcal{P}_\mu(\phi)$ is \emph{strong} if $\phi$ is strong, and $\mathcal{P}_\mu(\phi)$ is \emph{weak} otherwise.
All the nodes in this path have the same degree: $\deg\left(\mu(\phi,\gamma)\right)=\deg(\phi)$, and the same depth too: $$ \op{\mbox{\small depth}}\left(\mu(\phi,\gamma)\right)= \begin{cases} \op{\mbox{\small depth}}(\mu),&\mbox{ if the path is weak},\\ \op{\mbox{\small depth}}(\mu)+1,&\mbox{ if the path is strong}. \end{cases} $$ Actually, for any given MLV chain of $\mu$: $$ \mu_0\ \stackrel{\phi_1,\gamma_1}\,\longrightarrow\,\ \mu_1\ \stackrel{\phi_2,\gamma_2}\,\longrightarrow\,\ \cdots\ \,\longrightarrow\,\ \mu_{r-1}\ \stackrel{\phi_{r},\gamma_{r}}\,\longrightarrow\,\ \mu_{r}=\mu,$$ we may obtain an MLV chain of $\mu(\phi,\gamma)$ as follows.
If the path is weak and $\mu$ has depth zero, then all $\mu(\phi,\gamma)$ have depth zero too.
If the path is weak and $\mu$ has a positive depth, we may consider $$ \mu_0\ \stackrel{\phi_1,\gamma_1}\,\longrightarrow\,\ \mu_1\ \stackrel{\phi_2,\gamma_2}\,\longrightarrow\,\ \cdots\ \,\longrightarrow\,\ \mu_{r-1}\ \stackrel{\phi,\gamma}\,\longrightarrow\,\ \mu(\phi,\gamma),$$ regardless of the fact that $\mu_{r-1}\to\mu_r$ is an ordinary or a limit augmentation.
If the path is strong, we may just add one more (ordinary) augmentation: $$ \mu_0\ \stackrel{\phi_1,\gamma_1}\,\longrightarrow\,\ \mu_1\ \stackrel{\phi_2,\gamma_2}\,\longrightarrow\,\ \cdots\ \,\longrightarrow\,\ \mu_{r-1}\ \stackrel{\phi_{r},\gamma_{r}}\,\longrightarrow\,\ \mu\ \stackrel{\phi,\gamma}\,\longrightarrow\,\ \mu(\phi,\gamma).$$
Therefore, the nodes in a strong path are ``properly'' derived from $\mu$, while the nodes in weak paths are derived from lower nodes.
Let us analyze the intersection of two paths of constant depth beyond the same node $\mu$, determined by different key polynomials $\phi,\phi_*\in\op{KP}(\mu)$. Obviously, $$\mathbf{t}(\mu,\rho)=[\phi]_\mu\quad \mbox{for all}\quad \rho\in \mathcal{P}_\mu(\phi). $$ Therefore, if $\phi\not\sim_\mu\phi_*$, Proposition \ref{td=td} shows that $\mathcal{P}_\mu(\phi)\cap\mathcal{P}_\mu(\phi_*)=\emptyset$.
If $\phi\sim_\mu\phi_*$, then $\mu(\phi-\phi_*)>\mu(\phi)=\mu(\phi_*)$. Thus, (\ref{eqAug}) shows that $$\mathcal{P}_\mu(\phi)\cap\mathcal{P}_\mu(\phi_*)=\left(\mu,\mu(\phi,\gamma_0)\right],\qquad \gamma_0=\mu(\phi-\phi_*). $$
\begin{center} \setlength{\unitlength}{4mm} \begin{picture}(22,8) \put(-2,1){$\bullet$}\put(4.75,1){$\bullet$} \put(11,1){$\bullet$}\put(11,3.16){$\bullet$} \put(20,1){$\bullet$}\put(20,6.06){$\bullet$}
\put(-1.6,1.3){\line(1,0){19}}\put(5,1.3){\line(3,1){12.3}} \put(17.76,4.88){$\dot{}$}\put(18.09,4.99){$\dot{}$}\put(18.42,5.11){$\dot{}$} \put(18.9,5.26){$\dot{}$}\put(19.23,5.37){$\dot{}$}\put(19.56,5.5){$\dot{}$}
\put(-3,1.1){\begin{footnotesize}$\mu$\end{footnotesize}} \put(17.7,1){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(18.9,1){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(21,1.2){\begin{footnotesize}$\mu(\phi,\infty)$\end{footnotesize}} \put(21,6.2){\begin{footnotesize}$\mu(\phi_*,\infty)$\end{footnotesize}} \put(2.5,2){\begin{footnotesize}$\mu(\phi_*,\gamma_0)$\end{footnotesize}} \put(2.6,0){\begin{footnotesize}$\mu(\phi,\gamma_0)$\end{footnotesize}} \put(9.6,0){\begin{footnotesize}$\mu(\phi,\gamma)$\end{footnotesize}} \put(9.6,4.2){\begin{footnotesize}$\mu(\phi_*,\gamma)$\end{footnotesize}} \end{picture} \end{center}\vskip.5cm
\subsection{Paths of constant depth obtained by limit augmentations}\label{subsecConstDepthLim} Let us fix an essential continuous family $\mathcal{A}=\left(\rho_i\right)_{i\in A}$. By taking a cofinal family, if necessary, we may assume that $\mathcal{A}$ contains a minimal valuation $\mu$.
For all limit key polynomials $\phi\in\op{KP}_\infty(\mathcal{A})$, we may consider the \emph{constant depth path} beyond $\mathcal{A}$: $$ \mathcal{P}_\aa(\phi)=\left\{\mathcal{A}(\phi,\gamma)\mid \rho_i(\phi)<\gamma\le\infty \ \mbox{ for all }i\in A\right\},\qquad \mathcal{A}(\phi,\gamma):=[\mathcal{A};\,\phi,\gamma], $$ containing all possible limit augmentations determined by $\phi$.
\begin{center} \setlength{\unitlength}{4mm} \begin{picture}(26,4) \put(-2,1){$\bullet$}\put(26,1){$\bullet$}\put(12,1){$\bullet$} \put(-1.6,1.25){\line(1,0){3.4}}\put(2.8,1){$\cdots$}\put(5,1){$\cdots$}\put(6.5,1.25){\line(1,0){16}} \multiput(4.5,0.1)(0,.25){10}{\vrule height1pt} \put(-1,2){$(\rho_i)_{i\in A}$}
\put(-3,1.1){\begin{footnotesize}$\mu$\end{footnotesize}} \put(22.8,1){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(24,1){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(24.8,2){\begin{footnotesize}$\mathcal{A}(\phi,\infty)$\end{footnotesize}} \put(10.8,2){\begin{footnotesize}$\mathcal{A}(\phi,\gamma)$\end{footnotesize}} \end{picture} \end{center}
As in the previous cases, the last node of the path, $\mathcal{A}(\phi,\infty)$, is a finite leaf. By Proposition \ref{extensionlim}, $\mathcal{A}(\phi,\gamma)$ is commensurable if and only if $\gamma\in\g_\Q\infty$. By \cite[Lem. 3.8]{MLV}, we have $\mathcal{P}_\aa(\phi)=\bigcap\nolimits_{i\in A}(\,\rho_i,\mathcal{A}(\phi,\infty)\,]$.
All the nodes in this path have the same degree and the same depth: $$\deg\left(\mathcal{A}(\phi,\gamma)\right)=\deg(\phi),\qquad\op{\mbox{\small depth}}\left(\mathcal{A}(\phi,\gamma)\right)=\op{\mbox{\small depth}}(\mu)+1. $$
For any $\phi,\phi_*\in\op{KP}_\infty(\mathcal{A})$, let $\delta=\rho_\mathcal{A}(\phi-\phi_*)$. By \cite[Lem. 3.7]{MLV}, we have $$ \mathcal{A}(\phi,\gamma)=\mathcal{A}(\phi_*,\gamma_*)\ \Longleftrightarrow\ \gamma=\gamma_*\ge\delta. $$ By Lemma \ref{allLKP}, $\mathcal{A}(\phi,\delta)=\mathcal{A}(\phi_*,\delta)$ belongs to $\mathcal{P}_\aa(\phi)\cap\mathcal{P}_\mathcal{A}(\phi_*)$.
Therefore, the intersection of the paths determined by $\phi$ and $\phi_*$ is completely analogous to the case of depth-zero valuations.
\subsection{Greatest common lower node}\label{subsecGCN}
Given $\mu,\nu\in\mathcal{T}$, their \emph{greatest common lower node} is defined as $$ \mu\wedge\nu=\max\left( (-\infty,\mu\,]\cap (-\infty,\nu\,]\right)\in \mathcal{T}, $$ provided that this maximal element exists.
\begin{proposition}\label{GCN} For all $\mu,\nu\in\mathcal{T}$, their greatest common lower node $\mu\wedge\nu$ exists. \end{proposition}
\begin{proof} If $\mu\le\nu$, then obviously $\mu\wedge\nu=\mu$. Suppose that neither $\mu\le\nu$ nor $\mu\ge\nu$. As we saw in (\ref{intersect}), $ (-\infty,\mu\,]\cap (-\infty,\nu\,]\ne\emptyset$. Let us prove that this totally ordered set always contains a maximal element.
Consider an MLV chain of $\mu$: $$ \mu_0\ \stackrel{\phi_{1},\gamma_{1}}\,\longrightarrow\,\ \mu_{1}\ \,\longrightarrow\,\ \cdots \ \,\longrightarrow\,\ \mu_{r-1} \ \stackrel{\phi_{r},\gamma_{r}}\,\longrightarrow\,\ \mu_{r}=\mu. $$
Since $\mu=\mu_r\not\in (-\infty,\nu\,]$, there exists a minimal index $i$ such that $\mu_i\not\in (-\infty,\nu\,]$. We need only show that $\mu_i\wedge\nu$ exists, because this clearly implies $\mu\wedge\nu=\mu_i\wedge\nu$.
Suppose that $i=0$. If $\mu_0=\omega_{a,\gamma}$, we have \cite[Sec. 2.2]{MLV} $$ \left(-\infty,\mu_0\,\right]=\left\{\omega_{a,\delta}\mid\delta\in\Lambda,\ \delta\le\gamma\right\}. $$ On the other hand, by comparing their action on $(x-a)$-expansions, we see that $\omega_{a,\delta}\le\nu$ if and only if $\delta\le\nu(x-a)$. Since $\mu_0\not\le\nu$, necessarily $\nu(x-a)<\gamma$. Thus, there is a maximal element in $\left(-\infty,\mu_0\,\right]\cap\left(-\infty,\nu\,\right]$, namely $$\mu_0\wedge\nu=\omega_{a,\nu(x-a)}.$$
Suppose that $i>0$, so that $\mu_{i-1}<\nu$, $\mu_{i}\not\le\nu$.
If $\mathbf{t}(\mu_{i-1},\mu_i)\ne\mathbf{t}(\mu_{i-1},\nu)$, then Proposition \ref{td=td} shows that $(\mu_{i-1},\mu_i\,]\cap (\mu_{i-1},\nu]=\emptyset$. Hence, $\mu_i\wedge\nu=\mu_{i-1}$. Suppose that $\mathbf{t}(\mu_{i-1},\mu_i)=\mathbf{t}(\mu_{i-1},\nu)$.
If $\mu_{i-1}\to\mu_i$ is an ordinary augmentation, then \cite[Lem. 2.7]{MLV} shows that $$ \left(\mu_{i-1},\mu_i\,\right]=\left\{[\mu_{i-1};\,\phi_i,\delta]\,\mid\,\mu_{i-1}(\phi_i)<\delta\le\gamma_i\right\}. $$ Since $\mathbf{t}(\mu_{i-1},\nu)=\mathbf{t}(\mu_{i-1},\mu_i)=[\phi_i]_{\mu_{i-1}}$, we have $\mu_{i-1}(\phi_i)<\nu(\phi_i)$ and $$ \mu_{i-1}(f)=\nu(f) \ \Longleftrightarrow\ \phi_i\nmid_{\mu_{i-1}}f. $$ In particular, $\mu_{i-1}(a)=\nu(a)$ for all $a\in K[x]$ with $\deg(a)<\deg(\phi_i)$. Hence, by comparing their action on $\phi_i$-expansions, we have $$ [\mu_{i-1};\,\phi_i,\delta]\le\nu\ \Longleftrightarrow\ \delta\le \nu(\phi_i). $$ Since $\mu_i\not\le\nu$, we have $\nu(\phi_i)<\gamma_i$. Thus, there is a maximal element in $\left(\mu_{i-1},\mu_i\,\right]\cap\left(\mu_{i-1},\nu\,\right]$, namely $$\mu_i\wedge\nu=[\mu_{i-1};\,\phi_i,\nu(\phi_i)].$$
Suppose that $\mu_{i-1}\to\mu_i$ is a limit augmentation with respect to an essential continuous family $\mathcal{A}=(\rho_j)_{j\in A}$ admitting $\mu_{i-1}$ as its first element. Then, $\phi_i\in\op{KP}_\infty(\mathcal{A})$. By Lemma \ref{specialCont} we may assume that, for all $j\in A$, $$ \rho_j=[\mu_{i-1};\,\chi_j,\beta_j],\qquad \chi_j\in\op{KP}(\mu_{i-1}),\ \beta_j=\operatorname{sv}(\rho_j). $$
If $\rho_j\not\le\nu$ for some $j\in A$, then we can mimic the arguments of the ordinary-augmentation case to conclude that $$\mu\wedge\nu=\rho_j\wedge\nu=[\mu_{i-1};\,\chi_j,\nu(\chi_j)].$$
Suppose that $\rho_j<\nu$ for all $j\in A$. By Lemma \ref{propertiesTMN}(3), we see that
$\nu$ coincides with $\rho_\aa$ on all $\mathcal{A}$-stable polynomials. Let $V=\left\{\rho_j(\phi_i)\mid j \in A\right\}$. By \cite[Lem. 3.8]{MLV}, $$ \left\{\rho\in(\mu_{i-1},\mu_i\,]\,\mid\,\rho_j<\rho\mbox{ for all }j\in A\right\}= \left\{[\mathcal{A};\,\phi_i,\delta]\mid V<\delta\le\gamma_i\right\}. $$ By comparing their action on $\phi_i$-expansions, we have $$ [\mathcal{A};\,\phi_i,\delta]\le\nu\ \Longleftrightarrow\ \delta\le\nu(\phi_i). $$ On the other hand, for any $\mathcal{A}$-unstable polynomial $f$, we have $\rho_j(f)<\rho_\ell(f)$ for all $j<\ell$ in $A$. Thus, $$\rho_j(f)<\rho_\ell(f)\le\nu(f),\quad\mbox{for all }j\in A. $$ In particular, $\nu(\phi_i)>V$. As a consequence, there is a maximal valuation in $(-\infty,\mu_i\,]$ which is less than $\nu$, namely $$ \mu_i\wedge\nu=[\mathcal{A};\,\phi_i,\nu(\phi_i)]. $$ This ends the proof of the proposition. \end{proof}
Suppose that $\mu,\nu\in\mathcal{T}$ are incomparable; that is, $\mu\not\le\nu$ and $\nu\not\le\mu$. Then,
their greatest common lower node, $\rho=\mu\wedge\nu$, has at least two different tangent directions: $\mathbf{t}(\rho,\mu)\ne \mathbf{t}(\rho,\nu)$. By Theorems \ref{Delta} and \ref{DeltaKP}, $\rho$ is an inner commensurable node; in other words, $\rho$ is a residually transcendental valuation.
\subsubsection{$\mathcal{T}$ as a $\Lambda$-tree}
Given an ordered abelian group $\Lambda$, a $\Lambda$-tree is defined \cite{lambdatrees} as a geodesic $\Lambda$-metric space $T$ such that \begin{enumerate}
\item If two geodesics of $T$ intersect in a single point, which is an endpoint of both, then their union is a geodesic;
\item The intersection of two geodesics with a common endpoint is also a geodesic. \end{enumerate} The existence of a greatest common lower node can be used to define a $\Lambda$-metric on the subtree $\mathcal{H}$ of $\mathcal{T}$ consisting of all inner nodes. Namely, we set \[ d(\mu,\nu)= \operatorname{sv}(\mu)+\operatorname{sv}(\nu)-2\,\operatorname{sv}(\mu\wedge\nu). \]
Note that $d(\mu,\nu)=\left|\operatorname{sv}(\mu) - \operatorname{sv}(\nu)\right|$ if $\mu$ and $\nu$ are comparable. It is easy to see that with this definition, $\mathcal{H}$ is a geodesic $\Lambda$-metric space, and the unique geodesic with endpoints $\mu$, $\nu$ is the union of the segments $[\mu\wedge\nu,\mu]$ and $[\mu\wedge\nu,\nu]$; the two properties above follow.
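\noindent{\bf Example. } For a concrete computation of this distance (with an illustrative choice of $K$), let $K=\mathbb Q$ with $v=v_p$ and consider the incomparable depth-zero nodes $\mu=\omega_{0,2}$, $\nu=\omega_{p,3}$. Since $v(p)=1<\min\{2,3\}$, their greatest common lower node is $\mu\wedge\nu=\omega_{0,1}=\omega_{p,1}$. As $x-a$ is a key polynomial of minimal degree for $\omega_{a,\delta}$, we have $\operatorname{sv}(\omega_{a,\delta})=\omega_{a,\delta}(x-a)=\delta$, so that $$ d(\mu,\nu)=2+3-2\cdot 1=3. $$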
We are not going to use any metric properties of the tree $\mathcal{H}$, noting only that this is a hyperbolic space. This fact, along with a plethora of additional information, can be found in the monograph \cite{lambdatrees}.
\section{Equivalence classes of valuations and small extensions of groups}\label{secSME}
For our given valued field $(K,v)$, consider a valuation $\mu\colon K[x]\to\Lambda\infty$, whose restriction to $K$ is equivalent to $v$. That is, there exists an order-preserving embedding $j\colon \Gamma\hookrightarrow\Lambda$, fitting into a commutative diagram $$ \ars{1.3} \begin{array}{ccc} K[x]&\stackrel{\mu}\,\longrightarrow\,&\Lambda\infty\\ \uparrow&&\ \uparrow\mbox{\tiny$j$}\\ K&\stackrel{v}\,\longrightarrow\,&\Gamma\infty \end{array} $$
The induced embedding $j\colon \Gamma\hookrightarrow\g_\mu$ is necessarily a \emph{small extension} of ordered abelian groups. That is, if $\Gamma'\subset\g_\mu$ is the relative divisible closure of $\Gamma$ in $\g_\mu$, then $\g_\mu/\Gamma'$ is a cyclic group \cite[Thm. 1.5]{Kuhl}.
Not all small extensions of $\Gamma$ arise from valuations on a polynomial ring. In \cite{Kuhl} it is shown that the divisible closure of $\Gamma$ in $\g_\mu$ must be countably generated over $\Gamma$, and finitely generated over $\Gamma$ if $\g_\mu/\Gamma$ is not a torsion group.
Our aim is to describe the tree $\mathcal{T}_v$ whose nodes are all equivalence classes of va\-lua\-tions on $K[x]$ whose restriction to $K$ is equivalent to $v$. The first natural step is to build up some universal ordered group $\Lambda$ containing all small extensions of $\Gamma$ up to order-preserving $\Gamma$-isomorphism.
\subsection{Maximal rank-preserving extension of $\Gamma$}\label{subsecRlex}
From now on, an \emph{embedding} of totally ordered sets is a mapping which strictly preserves the ordering. Also, an \emph{embedding} $\Lambda\hookrightarrow \Lambda'$ of totally ordered abelian groups is a group homomorphism which is an embedding as totally ordered sets.
A subgroup $H\subset \Gamma$ is \emph{convex} if for all positive $h\in H$, it holds $[-h,h]\subset H$. For all $a\in \Gamma$, the intersection of all convex subgroups of $\Gamma$ containing $a$ is the \emph{principal} convex subgroup of $\Gamma$ generated by $a$.
Let $\op{Prin}(\Gamma)$ be the totally ordered set of {\bf nonzero} principal convex subgroups of $\Gamma$, ordered by {\bf decreasing} inclusion.
Any embedding $j\colon \Gamma\hookrightarrow\Lambda$ induces an embedding of ordered sets $$\op{Prin}(\Gamma)\hookrightarrow\op{Prin}(\Lambda),$$ which maps the principal convex subgroup generated by $a\in\Gamma$ to the principal convex subgroup generated by $j(a)$ in $\Lambda$.
\noindent{\bf Definition. } We say that $j$ \emph{preserves the rank} if this mapping is bijective.
For instance, the canonical embedding $\Gamma\hookrightarrow\g_\Q$ preserves the rank. From now on, we shall consider the bijection between $\op{Prin}(\Gamma)$ and $\op{Prin}(\g_\Q)$ as an identity: $$ I:=\op{Prin}(\Gamma)=\op{Prin}(\g_\Q). $$
We may identify $I\infty$ with a set of indices parameterizing all principal convex subgroups of $\g_\Q$. For all $i\in I$ we denote by $H_i$ the corresponding principal convex subgroup. We agree that $H_\infty=\{0\}$. Then, according to our convention, for any pair of indices $i,j\in I\infty$, we have $$ i<j \ \Longleftrightarrow\ H_i\supsetneq H_j. $$
Let $\left\{I,(Q_i)_{i\in I}\right\}$ be the skeleton of $\g_\Q$. That is, $Q_i=H_i/H_i^*$ for all $i\in I$, where $H_i^*\subset H_i$ is the maximal proper convex subgroup of $H_i$. In other words, if $a\in H_i$ generates $H_i$ as a convex subgroup, then $H_i^*$ is the union of all convex subgroups of $\g_\Q$ not containing $a$. The convex subgroup $H_i^*$ is not necessarily principal.
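\noindent{\bf Example. } As a simple rank-two illustration, let $\Gamma=\mathbb Z\times\mathbb Z$ ordered lexicographically, so that $\g_\Q=\mathbb Q\times\mathbb Q$. The nonzero principal convex subgroups of $\g_\Q$ are $H_1=\g_\Q$, generated by $(1,0)$, and $H_2=\{0\}\times\mathbb Q$, generated by $(0,1)$; hence $I=\{1,2\}$ with $1<2$, since $H_1\supsetneq H_2$. Here $H_1^*=H_2$ and $H_2^*=\{0\}$, so the skeleton is given by $Q_1\simeq Q_2\simeq\mathbb Q$. In this case, $\mathbf{H}(\gq)\simeq\mathbb Q\times\mathbb Q$ and $\R^I_{\lx}=\mathbb R\times\mathbb R$, both with the lexicographical order.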
Consider the respective Hahn's products: $$ \mathbf{H}(\gq)\subset \prod\nolimits_{i \in I}Q_i,\qquad \R^I_{\lx}\subset \mathbb R^I, $$ equipped with the lexicographical order. That is, $\mathbf{H}(\gq),\,\R^I_{\lx}$ are the subsets of $\prod_{i \in I}Q_i,\,\mathbb R^I$, respectively, formed by all elements $x=(x_i)_{i\in I}$ whose support $$ \op{supp}(x)=\{i\in I\mid x_i\ne0\} $$is a well-ordered subset of $I$, with respect to the ordering induced by that of $I$.
By Hahn's embedding theorem \cite[Sec. A]{Rib}, there exists a (non-canonical) $\mathbb Q$-linear embedding $$\g_\Q\hooklongrightarrow\mathbf{H}(\gq)$$ which induces an isomorphism between the respective skeletons.
On the other hand, the ordered groups $Q_i$ have rank one; that is, they have only two convex subgroups: $\{0\}$ and $Q_i$. Hence, the choice of positive elements $1_i\in Q_i$ determines $\mathbb Q$-linear embeddings for all $i\in I$: $$ Q_i\hooklongrightarrow \mathbb R,\qquad 1_i\longmapsto 1. $$
Therefore, we have a natural embedding $\mathbf{H}(\gq)\hookrightarrow \R^I_{\lx}$. All in all, we obtain a rank-preserving extension $$ \tau\colon \Gamma\hooklongrightarrow\g_\Q\hooklongrightarrow\mathbf{H}(\gq)\hooklongrightarrow\R^I_{\lx}, $$ which is maximal among all rank-preserving extensions of $\Gamma$ \cite[Sec. A]{Rib}.
\begin{theorem}\label{MaxEqRk} For any rank-preserving extension $\Gamma\hookrightarrow \Lambda$, there exists an embedding $\Lambda\hookrightarrow\R^I_{\lx}$ fitting into a commutative diagram $$ \ars{1.2} \begin{array}{ccc} \Lambda&&\\ \uparrow&\raise.5ex\hbox{$\searrow$}&\\ \Gamma&\stackrel{\tau}\,\longrightarrow\,&\R^I_{\lx} \end{array} $$ \end{theorem}
The embedding $\Lambda\hookrightarrow\R^I_{\lx}$ is not unique. Thus, every rank-preserving extension of $\Gamma$ is $\Gamma$-equivalent to some subgroup of $\R^I_{\lx}$, but not to a unique one.
The nonzero principal convex subgroups of $\R^I_{\lx}$ are parametrized by $I$ via: $$ H_i=\{(x_j)_{j\in I}\mid x_j=0\mbox{ for all }j<i\}\quad\mbox{for all }i\in I. $$
The convex subgroups of $\R^I_{\lx}$ are parametrized by the set $\op{Init}(I)$ of initial segments of $I$, as follows: \begin{equation}\label{HSHahn} S\in \op{Init}(I)\quad \rightsquigarrow\quad H_S=\{(x_j)_{j\in I}\mid x_j=0\mbox{ for all }j\in S\}. \end{equation}
\subsection{A universal group for small extensions of $\Gamma$}\label{subsecRii}
For all $S\in\op{Init}(I)$, let $i_S$ be a formal symbol and consider the ordered set $$ I_S=S+\{i_S\}+(I\setminus S), $$ where $+$ is the usual addition of totally ordered sets.
We define the \emph{one-added-element hull} of $I$ as the set $$ \mathbb I:=I\cup\left\{i_S\mid S\in\op{Init}(I)\right\}, $$ equipped with the total ordering determined by \begin{enumerate} \item[(i)] For all $S\in\op{Init}(I)$, the inclusion $I_S\hookrightarrow \mathbb I$ preserves the order. \item[(ii)] For all $S,T\in\op{Init}(I)$, we have $i_S<i_T$ if and only if $S\subsetneq T$. \end{enumerate}
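\noindent{\bf Example. } In the simplest case, suppose $I=\{1\}$ has a single element, so that $\op{Init}(I)=\{\emptyset,\,I\}$. Condition (i) applied to $I_\emptyset=\{i_\emptyset\}+I$ and $I_I=I+\{i_I\}$ yields $i_\emptyset<1<i_I$, while condition (ii) gives $i_\emptyset<i_I$. Hence $$ \mathbb I=\left\{i_\emptyset<1<i_I\right\}, $$ and $\R^\I_{\lx}=\mathbb R^3$ with the lexicographical order. The new indices $i_\emptyset$ and $i_I$ create room for values which are infinitely large, or infinitely small, with respect to the copy of $\mathbb R$ indexed by $1$.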
Consider the Hahn product $\R^\I_{\lx}\subset \mathbb R^{\mathbb I}$, defined as above, just by replacing $I$ with $\mathbb I$.
The inclusions $I\subset I_S\subset \mathbb I$ determine canonical embeddings $$ \R^I_{\lx}\hooklongrightarrow \R^{I_S}_{\lx}\hooklongrightarrow \R^\I_{\lx},\quad\mbox{for all }S\in\op{Init}(I). $$ Altogether, we obtain an embedding $$ \ell\colon \Gamma\hooklongrightarrow\g_\Q\hooklongrightarrow\mathbf{H}(\gq)\hooklongrightarrow\R^I_{\lx}\hooklongrightarrow\R^\I_{\lx}. $$
As shown in \cite[Prop. 5.1]{csme}, $\R^\I_{\lx}$ is the universal ordered abelian group we are looking for.
\begin{proposition}\label{riiUniverse} Let $\mu$ be a valuation on $K[x]$ whose restriction to $K$ is equivalent to $v$. Then, there exists an embedding $j\colon\g_\mu\hookrightarrow \R^\I_{\lx}$ satisfying the following properties: \begin{enumerate} \item[(i)] The following diagram commutes: $$ \ars{1.3} \begin{array}{ccc} K[x]&\stackrel{j\circ\mu}\,\longrightarrow\,&\R^\I_{\lx}\infty\\ \uparrow&&\ \uparrow\mbox{\tiny$\ell$}\\ K&\stackrel{v}\,\longrightarrow\,&\Gamma\infty \end{array} $$ \item[(ii)] There exists $S\in\op{Init}(I)$ such that $j(\g_\mu)\subset \R^{I_S}_{\lx}$. \end{enumerate}
Moreover, $\g_\mu/\Gamma$ is commensurable if and only if $j(\g_\mu)\subset\ell(\Gamma)_\mathbb Q$. Also, $\g_\mu/\Gamma$ preserves the rank if and only if $j(\g_\mu)\subset\R^I_{\lx}$. \end{proposition}
Clearly, $\mu$ is equivalent to the valuation $j\circ \mu$ on $K[x]$, and $v$ is equivalent to the valuation $\ell\circ v$ on $K$. Also, the valuation $j\circ \mu$ restricted to $K$ is equal to $\ell\circ v$.
As a consequence, in order to describe all equivalence classes of valuations $\mu$ on $K[x]$ whose restriction to $K$ is equivalent to $v$, we may assume that $v$ and $\mu$ satisfy the following conditions:
(V1) \ The valuation $v$ takes values in $\R^I_{\lx}$. That is, $\Gamma=v(K^*)\subset \R^I_{\lx}$.
(V2) \ The valuation $\mu$ satisfies $\mu_{\mid K}=v$ and takes values in $\R^{I_S}_{\lx}$ for some $S\in\op{Init}(I)$.
\subsection{Small-extensions equivalence on a subset of the universal value group}\label{subsecSME} From now on, we assume that our valuation $v$ on $K$ satisfies (V1), so that the embedding $\ell$ of the last section is the canonical inclusion. Consider the subset $$\R_{\mbox{\tiny $\op{sme}$}}=\R_{\mbox{\tiny $\op{sme}$}}(I):=\bigcup_{S\in\op{Init}(I)}\R^{I_S}_{\lx}\subset \R^\I_{\lx}.$$
For all $\beta\in\R_{\mbox{\tiny $\op{sme}$}}$, we denote by $\gen{\g,\be}\subset \R^\I_{\lx}$ the subgroup generated by $\Gamma$ and $\beta$.
Let $\Lambda=\R^\I_{\lx}$ and $\mathcal{T}=\mathcal{T}(\La)$. Consider the subtree $$\mathcal{T}^0=\left\{\rho\in\mathcal{T}\mid \g_\rho\subset\R^{I_S}_{\lx}\ \mbox{ for some }\ S\in\op{Init}(I) \right\}\subset \mathcal{T}.$$ Note that all valuations in $\mathcal{T}^0$ satisfy the condition (V2).
On the set $\R_{\mbox{\tiny $\op{sme}$}}$ we define the following equivalence relation.
\noindent{\bf Definition.} We say that $\beta,\gamma\in \R_{\mbox{\tiny $\op{sme}$}}$ are $\,\operatorname{sme}$-equivalent if there exists an isomorphism of ordered groups $$ \gen{\g,\be}\ \lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$}\ \gen{\g,\ga}, $$ which acts as the identity on $\Gamma$ and sends $\beta$ to $\gamma$.
In this case, we write $\beta\sim_{\mbox{\tiny $\op{sme}$}}\gamma$. We denote by $[\beta]_{\mbox{\tiny $\op{sme}$}}\subset \R_{\mbox{\tiny $\op{sme}$}}$ the class of $\beta$.
The motivation for this definition lies in the following result.
\begin{proposition}\label{motivation} Let $\mu,\nu\in\mathcal{T}^0$ be two inner nodes. Then, $\mu\sim\nu$ if and only if the following three conditions hold:
(a) \ The valuations $\mu$, $\nu$ admit a common key polynomial of minimal degree.
(b) \ For all \,$a\in K[x]\,$ such that \,$\deg(a)<\deg(\mu)$, we have $\,\mu(a)=\nu(a)$.
(c) \ $\operatorname{sv}(\mu)\sim_{\mbox{\tiny $\op{sme}$}} \operatorname{sv}(\nu)$.
In this case, we have $\op{KP}(\mu)=\op{KP}(\nu)$. \end{proposition}
\begin{proof} Suppose that $\mu\sim\nu$. Then, there exists an isomorphism of ordered groups $\iota\colon \g_\mu\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\to\\\mbox{\tiny $\sim\,$}\end{array}$}\g_\nu$ such that $\nu=\iota\circ\mu$. The isomorphism $\iota$ induces an isomorphism between the graded algebras: $$ \mathcal{G}_\mu\ \lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$}\ \mathcal{G}_\nu,\qquad f+\mathcal{P}^+_\alpha(\mu)\longmapsto f+\mathcal{P}^+_{\iota(\alpha)}(\nu), $$ for all $f\in\mathcal{P}_\alpha(\mu)$, $\alpha\in\g_\mu$. Since key polynomials are characterized by algebraic properties of their initial terms in the graded algebra, this implies that both valuations have the same key polynomials: \,$\op{KP}(\mu)=\op{KP}(\nu)$.
Let $\phi$ be a common key polynomial of minimal degree. Since $\mu_{\mid K}=\nu_{\mid K}$, the isomorphism $\iota$ restricted to $\Gamma$ is the identity and $\iota(\mu(\phi))=\nu(\phi)$. Hence, $$\operatorname{sv}(\mu)=\mu(\phi)\sim_{\mbox{\tiny $\op{sme}$}}\nu(\phi)=\operatorname{sv}(\nu). $$
Finally, since $\iota$ restricted to $\Gamma$ is the identity, then $\iota$ acts as the identity on any torsion element in $\g_\mu/\Gamma$. Now, for all $a\in K[x]$ of degree less than $\deg(\phi)$, the values $\mu(a)$, $\nu(a)$ belong to $\g_\Q$ \cite[Lem. 1.3]{MLV}. Thus, $$ \nu(a)=\iota(\mu(a))=\mu(a). $$
Conversely, suppose that $\mu$ and $\nu$ satisfy conditions (a), (b) and (c). Take $\phi\in\op{KP}(\mu)\cap\op{KP}(\nu)$ of minimal degree in both sets. Let us denote $$\beta=\operatorname{sv}(\mu)=\mu(\phi), \qquad \gamma=\operatorname{sv}(\nu)=\nu(\phi).$$ By condition (c), there is an order-preserving $\Gamma$-isomor\-phism $\iota\colon\gen{\Gamma,\beta}\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\to\\\mbox{\tiny $\sim\,$}\end{array}$}\gen{\Gamma,\gamma}$, mapping $\beta$ to $\gamma$. As we saw in Section \ref{subsubsecG0}, the subgroup $$ H:=\g_\mu^0=\left\{\mu(a)\mid 0\le \deg(a)<\deg(\mu)\right\}\subset\g_\mu $$ is commensurable over $\Gamma$ and satisfies $\g_\mu=\gen{H,\mu(\phi)}=\gen{H,\beta}$. By condition (b), $H=\g_\nu^0$ and $\g_\nu=\gen{H,\nu(\phi)}=\gen{H,\gamma}$ too. Since $H/\Gamma$ is a torsion abelian group, the $\Gamma$-isomorphism $\iota$ induces an order-preserving isomorphism $$\iota\colon\g_\mu=\gen{H,\beta}\ \lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$}\ \g_\nu=\gen{H,\gamma},$$ which acts as the identity on $H$ and maps $\beta$ to $\gamma$. Let us check that $\nu=\iota\circ\mu$.
For $f\in K[x]$, consider its $\phi$-expansion $f=\sum_{0\le s}a_s\phi^s$, where $\deg(a_s)<\deg(\phi)=\deg(\mu)$. Since $\phi$ is $\mu$-minimal and $\nu$-minimal, Lemma \ref{minimal0} shows that $$ \mu(f)=\min\left\{\mu(a_s\phi^s)\mid 0\le s\right\},\qquad \nu(f)=\min\left\{\nu(a_s\phi^s)\mid 0\le s\right\}. $$ Let us denote $\delta_s=\mu(a_s)=\nu(a_s)\in H$, for all $s\ge0$. Clearly, $$ \iota\left(\mu(a_s\phi^s)\right)=\iota(\delta_s+s\beta)=\delta_s+s\gamma=\nu(a_s\phi^s). $$ Since $\iota$ preserves the ordering, for arbitrary indices $s,t$ we have $$ \mu(a_s\phi^s)\le\mu(a_t\phi^t)\ \Longrightarrow\ \nu(a_s\phi^s)\le\nu(a_t\phi^t). $$ Thus, there is a common index $s$ for which $\mu(f)=\mu(a_{s}\phi^{s})$ and $\nu(f)=\nu(a_{s}\phi^{s})$, simultaneously. Therefore, $ \iota\left(\mu(f)\right)=\iota\left(\mu(a_s\phi^s)\right)=\nu(a_s\phi^s)=\nu(f) $. \end{proof}
\begin{corollary}\label{motivation2} Take $\beta,\,\gamma\in \R_{\mbox{\tiny $\op{sme}$}}$. \begin{enumerate} \item[(i)] For all $a\in K$ we have $ \omega_{a,\beta}\sim\omega_{a,\gamma}$ if and only if $\beta\sim_{\mbox{\tiny $\op{sme}$}}\gamma$.
\item[(ii)] Let $\mu\in\mathcal{T}^0$ be an inner node and let $\phi\in\op{KP}(\mu)$. If $\beta,\gamma>\mu(\phi)$, then $$ [\mu;\,\phi,\beta]\sim [\mu;\,\phi,\gamma]\ \Longleftrightarrow\ \beta\sim_{\mbox{\tiny $\op{sme}$}}\gamma. $$ \item[(iii)] Let $\mathcal{A}=\left(\rho_i\right)_{i\in A}$ be an essential continuous family in $\mathcal{T}^0$ and let $\phi\in\op{KP}_\infty(\mathcal{A})$. If $\beta,\gamma>\rho_i(\phi)$ for all $i\in A$, then $$ [\mathcal{A};\,\phi,\beta]\sim [\mathcal{A};\,\phi,\gamma]\ \Longleftrightarrow\ \beta\sim_{\mbox{\tiny $\op{sme}$}}\gamma. $$ \end{enumerate} \end{corollary}
\begin{proof} All items follow immediately from Proposition \ref{motivation}, once we see that for the two involved valuations, conditions (a) and (b) hold in each case.
In case (i), the common key polynomial of minimal degree is $\phi=x-a$.
In cases (ii) and (iii), $\phi$ is a common key polynomial of minimal degree for both augmentations by Lemma \ref{propertiesAug} and Proposition \ref{extensionlim}, respectively. \end{proof}
\subsection{Quasi-cuts in $\g_\Q$ and small-extensions closure of $\Gamma$}\label{subsecQcuts}
Consider any subset $\g_{\op{sme}}\subset \R_{\mbox{\tiny $\op{sme}$}}$ which is a set of representatives of the quotient set $\R_{\mbox{\tiny $\op{sme}$}}/\!\sim_{\mbox{\tiny $\op{sme}$}}$.
The only $\Gamma$-automorphism of $\g_\Q$ as an ordered group is the identity. Thus, for all $\beta\in\g_\Q\subset\R_{\mbox{\tiny $\op{sme}$}}$, we have $\left[\beta\right]_{\mbox{\tiny $\op{sme}$}}=\{\beta\}$. Therefore, we have necessarily $$\g_\Q\subset\g_{\op{sme}}\subset \R_{\mbox{\tiny $\op{sme}$}}.$$
Any such ``small-extensions closure'' $\g_{\op{sme}}$ contains generators of all small extensions of $\Gamma$, up to the relative divisible closure of $\Gamma$. Let us give a precise explanation of this statement, which follows easily from the definition of $\sim_{\mbox{\tiny $\op{sme}$}}$.
\begin{proposition}\label{smallSme}Let $\Gamma\hookrightarrow\Omega$ be a small extension and let $\gamma\in\Omega$ be such that $\Omega=\gen{\Delta,\gamma}$, where $\Delta$ is the relative divisible closure of $\Gamma$ in $\Omega$. Let $\Delta\!\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\to\\\mbox{\tiny $\sim\,$}\end{array}$}\Delta_0\subset\g_\Q$ be the canonical embedding of $\Delta$ into $\g_\Q$. Then, there exists a unique $\beta\in\g_{\op{sme}}$ admitting an isomorphism of ordered groups $$\Omega\ \lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$}\ \gen{\Delta_0,\beta},\qquad \gamma\longmapsto \beta,$$ whose restriction to $\Delta$ is the canonical isomorphism $\Delta\!\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\to\\\mbox{\tiny $\sim\,$}\end{array}$} \Delta_0$. \end{proposition}
In \cite{csme}, a real model for the set of quasi-cuts of $\g_\Q$ is constructed, which serves as a canonical choice for $\g_{\op{sme}}$. Let us recall this construction.
A \emph{quasi-cut} in $\g_\Q$ is a pair $D=\left(D^L,D^R\right)$ of subsets such that $D^L\le D^R$ and $D^L\cup D^R=\g_\Q$. Then, $D^L$ is an initial segment of $\g_\Q$, $D^R$ is a final segment of $\g_\Q$ and $D^L\cap D^R$ consists of at most one element.
If $D^L\cap D^R=\{a\}$, we say that $D$ is the \emph{principal quasi-cut} determined by $a\in \g_\Q$. If $D^L\cap D^R=\emptyset$, we say that $D$ is a \emph{cut} in $\g_\Q$.
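For instance, taking $\Gamma=\mathbb{Z}$, so that $\g_\Q=\mathbb{Q}$ (a choice made purely for illustration), both notions are realized as follows: $$ D^L=\{a\in\mathbb{Q}\mid a\le 0\},\qquad D^R=\{a\in\mathbb{Q}\mid a\ge 0\}, $$ is the principal quasi-cut determined by $0$, while $$ D^L=\{a\in\mathbb{Q}\mid a<\sqrt{2}\},\qquad D^R=\{a\in\mathbb{Q}\mid a>\sqrt{2}\}, $$ is a cut, since $D^L\cap D^R=\emptyset$.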
The set $\operatorname{Qcuts}(\g_\Q)$ of all quasi-cuts in $\g_\Q$ admits a total ordering: $$ D=\left(D^L,D^R\right)\le E=\left(E^L,E^R\right) \ \ \Longleftrightarrow\ \ D^L\subset E^L\quad\mbox{and}\quad D^R\supset E^R. $$
For all $x\in\R_{\mbox{\tiny $\op{sme}$}}$ we consider the following quasi-cut $D_x$ in $\g_\Q$: $$ D_x^L=\{a\in\g_\Q\mid a\le x\},\qquad D_x^R=\{a\in\g_\Q\mid a\ge x\}. $$ We say that $x$ \emph{realizes} the quasi-cut $D_x$. The set $\R_{\mbox{\tiny $\op{sme}$}}$ contains realizations of all quasi-cuts in $\g_\Q$ \cite[Sec. 4]{csme}. Moreover, these quasi-cuts provide the following reinterpretation of the equivalence relation $\sim_{\mbox{\tiny $\op{sme}$}}$ \cite[Lem. 5.4]{csme}.
\begin{lemma} For all $x,y\in\R_{\mbox{\tiny $\op{sme}$}}$, we have $x\sim_{\mbox{\tiny $\op{sme}$}} y$ if and only if $D_x=D_y$. \end{lemma}
As a consequence, if we consider on $\g_{\op{sme}}$ the total ordering induced by that of $\R_{\mbox{\tiny $\op{sme}$}}$, we derive a natural isomorphism of ordered sets: $$\g_{\op{sme}}\,\longrightarrow\, \operatorname{Qcuts}(\g_\Q),\qquad x\longmapsto D_x.$$
\begin{corollary}\label{complete}
Equipped with the order topology, $\g_{\op{sme}}$ is complete and contains $\g_\Q$ as a dense subset. \end{corollary}
Indeed, it is well known that the ordered set $\operatorname{Qcuts}(\g_\Q)$ has these properties. We recall that being complete with respect to the order topology means that every non-empty subset of $\g_{\op{sme}}$ has a supremum and an infimum.
In \cite[Sec. 4]{csme}, a canonical choice for $\g_{\op{sme}}$ is described as $$ \g_{\op{sme}}=\g_\Q\sqcup \g_{\op{nbc}}\sqcup \g_{\op{bc}}, $$ for certain subsets $\g_{\op{nbc}}\subset \R^I_{\lx}\setminus\g_\Q$ and $\g_{\op{bc}}\subset \R_{\mbox{\tiny $\op{sme}$}}\setminus\R^I_{\lx}$.
The elements $x\in \g_\Q$ parametrize the principal quasi-cuts. The elements $x\in\g_{\op{nbc}}$ and $x\in\g_{\op{bc}}$ correspond to $D_x$ being a \emph{non-ball cut} or a \emph{ball cut}, respectively. Equivalently, the small extension $\Gamma\hookrightarrow\gen{\g_\Q,x}$ preserves or increases the rank, respectively.
Let us briefly describe $\g_{\op{nbc}}$. For all $S\in\op{Init}(I)$, consider the truncation by $S$: $$ \pi_S\colon \R^I_{\lx}\,\longrightarrow\,\R^I_{\lx},\qquad x=(x_i)_{i\in I}\longmapsto x_S=(y_i)_{i\in I}, $$ where $y_i=x_i$ for all $i\in S$ and $y_i=0$ otherwise. Note that $\pi_S^{-1}(\beta_S)=\beta+H_S$, where $H_S$ is the convex subgroup defined in (\ref{HSHahn}). The set $\g_{\op{nbc}}$ is stratified as: $$ \g_{\op{nbc}}=\bigsqcup\nolimits_{S\in\op{Init}(I)}\operatorname{nbc}(S), $$ where \ $\operatorname{nbc}(S)=\left\{x\in\pi_S\left(\R^I_{\lx}\right)\setminus \g_\Q\mid x_T\in\g_\Q \mbox{ for all } T\in\op{Init}(I),\ T\subsetneq S \right\}$.
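For instance, in the toy case $I=\{1,2\}$ (so that $\R^I_{\lx}=\mathbb{R}^2$ with the lexicographical order) and $S=\{1\}$, the truncation and its fibers read $$ \pi_S(x_1,x_2)=(x_1,0),\qquad \pi_S^{-1}(\beta_S)=\{(\beta_1,t)\mid t\in\mathbb{R}\}=\beta+H_S, $$ the convex subgroup being $H_S=\{0\}\times\mathbb{R}$.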
Now, let us describe $\g_{\op{bc}}$.
For all $b=(b_i)_{i\in I}\in\g_\Q$, $S\in\op{Init}(I)$, denote $$ b_S^-=((b_j)_{j\in S}\mid-1\mid0\cdots0)\in\mathbb R^{I_S}_{\operatorname{lex}},\qquad b_S^+=((b_j)_{j\in S}\mid1\mid0\cdots0)\in\mathbb R^{I_S}_{\operatorname{lex}}, $$ where $\pm1$ is placed at the $i_S$-th coordinate. Then, $\g_{\op{bc}}$ is constructed as: $$ \g_{\op{bc}}=\bigsqcup\nolimits_{S\in\op{Init}(I)}\left\{b_S^-,\,b_S^+\mid b\in\g_\Q\right\}. $$
The elements determined by the initial segment $S=\emptyset$ deserve a special notation: $$ -\infty=(-1\mid0\cdots 0)=\min(\g_{\op{sme}}),\qquad \infty^-=(1\mid0\cdots 0)=\max(\g_{\op{sme}}), $$ where $\pm1$ is placed at the $i_\emptyset$-th coordinate; that is, the first coordinate of $I_\emptyset=\{i_\emptyset\}+I$. The notation for $\infty^-$ is motivated by the fact that this element is the immediate predecessor of $\infty$ in the set $\g_{\op{sme}}\infty$.
\section{Construction of the valuative tree}\label{secConstruct}
We keep the notation of the previous section $$\R_{\mbox{\tiny $\op{sme}$}}\subset\Lambda:=\R^\I_{\lx}, \qquad \mathcal{T}^0\subset\mathcal{T}:=\mathcal{T}(\La),$$ and we assume that the valuation $v$ takes values in a subgroup of $\Lambda$.
Since $\g_{\op{sme}}$ is complete, we may extend the singular value function $\operatorname{sv}$ to the leaves of $\mathcal{T}^0$. For a finite leaf $\nu\in\mathcal{L}_\fin(\mathcal{T}^0)$, we agree that $\operatorname{sv}(\nu)=\infty$, while for an infinite leaf $\nu\in\mathcal{L}_\infty(\mathcal{T}^0)$ we define $$ \operatorname{sv}(\nu)=\sup\left\{\operatorname{sv}(\rho)\mid \rho\in(-\infty,\nu)\right\}\in\g_{\op{sme}}. $$
\subsection{Equivalence classes of commensurable extensions} Let $\mathcal{T}_v$ be the set of equivalence classes of valuations on $K[x]$ whose restriction to $K$ is equivalent to $v$. It is well-known how to describe the subset $\mathcal{T}_v^{\op{com}} \subset\mathcal{T}_v$ of the equivalence classes which are commensurable over $[v]$.
By Proposition \ref{riiUniverse}, any such valuation is equivalent to some commensurable node $\mu\in\mathcal{T}$; that is,
a node belonging to the subtree: $\mathcal{T}_\Q:=\mathcal{T}(\gq)\subset\mathcal{T}$.
Finally, it is easy to classify the nodes of $\mathcal{T}_\Q$ up to equivalence. Two commensurable valuations $\mu,\mu'\in\mathcal{T}_\Q$ are equivalent if and only if $\mu=\mu'$. Indeed, if two subgroups $$\Gamma\subset\Delta\subset\g_\Q,\qquad \Gamma\subset\Delta'\subset\g_\Q,$$ admit an order-preserving isomorphism $\iota\colon \Delta \lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\to\\\mbox{\tiny $\sim\,$}\end{array}$}\Delta'$ which is the identity on $\Gamma$, then necessarily $\Delta=\Delta'$ and $\iota$ is the identity mapping.
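Indeed, since $\Delta\subset\g_\Q$, every $\delta\in\Delta$ satisfies $n\delta\in\Gamma$ for some integer $n>0$, so that $$ n\,\iota(\delta)=\iota(n\delta)=n\delta\ \Longrightarrow\ \iota(\delta)=\delta, $$ the last implication because the divisible group $\g_\Q$ is torsion-free. Hence, $\iota$ is the identity mapping and $\Delta=\Delta'$.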
Therefore, we have a natural bijective mapping $$ \mathcal{T}_\Q\,\longrightarrow\,\mathcal{T}_v^{\op{com}},\qquad \mu\longmapsto[\mu]. $$
Since all leaves of $\mathcal{T}$ are commensurable, they are leaves of $\mathcal{T}_\Q$ too.
Therefore, both trees have the same leaves. More precisely, with the notation of Section \ref{subsecMLV}, we have: \begin{equation}\label{EqualLeaves}
\mathcal{L}_\fin(\mathcal{T}_\Q)=\mathcal{L}_\fin(\mathcal{T}),\quad \mathcal{LU}_\infty(\mathcal{T}_\Q)=\mathcal{LU}_\infty(\mathcal{T}),\quad \mathcal{L}\cc_\infty(\mathcal{T}_\Q)=\mathcal{L}\cc_\infty(\mathcal{T}).
\end{equation}
By Lemmas \ref{equiv=lim}, \ref{numerable} and \ref{specialCont}, every leaf in $\mathcal{LU}_\infty(\mathcal{T}_\Q)$ is the stable limit of a countable family of nodes in $\mathcal{T}_\Q$ with unbounded degree, and every leaf in $\mathcal{L}\cc_\infty(\mathcal{T}_\Q)$ is the stable limit of an essential continuous family of nodes in $\mathcal{T}_\Q$.
\subsection{Description of the valuative tree} Consider the subtree $$\mathcal{T}_\sme=\left\{\rho\in\mathcal{T}^0\mid \operatorname{sv}(\rho)\in\g_{\op{sme}}\right\}.$$
Since $\g_\Q\subset\g_{\op{sme}}$, we have $\mathcal{T}_\Q\subset\mathcal{T}_\sme\subset\mathcal{T}^0\subset\mathcal{T}$. In particular, from (\ref{EqualLeaves}) we deduce $$
\mathcal{L}_\fin(\mathcal{T}_\Q)=\mathcal{L}_\fin(\mathcal{T}_\sme),\quad \mathcal{LU}_\infty(\mathcal{T}_\Q)=\mathcal{LU}_\infty(\mathcal{T}_\sme),\quad \mathcal{L}\cc_\infty(\mathcal{T}_\Q)=\mathcal{L}\cc_\infty(\mathcal{T}_\sme).
$$
\begin{theorem}\label{main2} The mapping $\mu\mapsto[\mu]$ induces a bijection between $\mathcal{T}_\sme$ and $\mathcal{T}_v$. \end{theorem}
\begin{proof} Let $\mu$ be a valuation on $K[x]$ whose restriction to $K$ is equivalent to $v$. By Proposition \ref{riiUniverse}, $\mu$ is equivalent to some valuation in $\mathcal{T}^0$. Thus, we may suppose $\mu\in\mathcal{T}^0$. If $\mu$ is commensurable, then $\mu\in\mathcal{T}_\Q\subset\mathcal{T}_\sme$, so that $[\mu]$ is the image of some node of $\mathcal{T}_\sme$.
Suppose that $\mu$ is incommensurable. Then, it is the last node of a finite MLV chain $$ \mu_0\ \stackrel{\phi_1,\gamma_1}{\,\longrightarrow\,}\ \mu_1\ \,\longrightarrow\,\ \cdots\ \,\longrightarrow\,\ \mu_{r-1}\ \stackrel{\phi_{r},\gamma_{r}}{\,\longrightarrow\,}\ \mu_{r}=\mu.$$ If $\mu=\mu_0=\omega_{a,\gamma}$ has depth zero, then Corollary \ref{motivation2} shows that $\mu$ is equivalent to $\omega_{a,\beta}\in\mathcal{T}_\sme$, where $\beta\in\g_{\op{sme}}$ is the representative of the class $[\gamma]_{\mbox{\tiny $\op{sme}$}}$.
If $\mu$ has a positive depth and $\mu_{r-1}\to\mu$ is an ordinary augmentation, then Corollary \ref{motivation2} shows that $\mu$ is equivalent to $[\mu_{r-1};\,\phi_r,\beta]\in\mathcal{T}_\sme$, where $\beta\in\g_{\op{sme}}$ is the representative of the class $[\gamma_r]_{\mbox{\tiny $\op{sme}$}}$.
If $\mu$ has a positive depth and $\mu_{r-1}\to\mu$ is a limit augmentation, then $\mu=[\mathcal{A};\,\phi_r,\gamma_r]$ for some essential continuous family in $\mathcal{T}_\Q$ and Corollary \ref{motivation2} shows that $\mu$ is equivalent to $[\mathcal{A};\,\phi_r,\beta]\in\mathcal{T}_\sme$, where $\beta\in\g_{\op{sme}}$ is the representative of the class $[\gamma_r]_{\mbox{\tiny $\op{sme}$}}$.
This proves that the mapping $\mu\mapsto[\mu]$ is onto.
Finally, let us show that the mapping $\mu\mapsto [\mu]$ is injective. Suppose that $\mu,\nu\in\mathcal{T}_\sme$ are equivalent. Then $\operatorname{sv}(\mu)\sim_{\mbox{\tiny $\op{sme}$}}\operatorname{sv}(\nu)$ by Proposition \ref{motivation}. Since $\mu$ and $\nu$ belong to $\mathcal{T}_\sme$, we have $\operatorname{sv}(\mu),\,\operatorname{sv}(\nu)\in\g_{\op{sme}}$, so that necessarily $\operatorname{sv}(\mu)=\operatorname{sv}(\nu)$.
Also, Proposition \ref{motivation} shows that $\op{KP}(\mu)=\op{KP}(\nu)$ and $\nu(a)=\mu(a)$ for all $a\in K[x]$ of degree less than $\deg(\mu)$. This implies $\mu=\nu$ by comparison of their action on $\phi$-expansions, for any common key polynomial $\phi$ of minimal degree, having in mind that $\mu(\phi)=\operatorname{sv}(\mu)=\operatorname{sv}(\nu)=\nu(\phi)$. \end{proof}
This subtree $\mathcal{T}_\sme\subset\mathcal{T}$ shares many of the properties of $\mathcal{T}$ discussed in Sections \ref{secTreeLa} and \ref{secCtDepth}. Let us explicitly quote some of them.
$\bullet$\quad For all $\mu\in \mathcal{T}_\sme$, the nodes of a MLV chain of $\mu$, except possibly for $\mu$ itself, are commensurable. Thus, these nodes belong to $\mathcal{T}_\sme$ and the depth of $\mu$ can be described solely in terms of $\mathcal{T}_\sme$.
$\bullet$\quad If $\mu\in \mathcal{T}_\sme$ is an inner node and $\phi\in\op{KP}(\mu)$, then we may build up ordinary augmentations in $\mathcal{T}_\sme$: $$ \nu=[\mu;\,\phi,\gamma]\in \mathcal{T}_\sme,\quad \gamma\in\g_{\op{sme}},\ \gamma>\mu(\phi). $$ For any such augmentation, the interval $(\mu,\nu]\subset \mathcal{T}_\sme$ may be described as $$ (\mu,\nu]=\left\{[\mu;\,\phi,\delta]\mid \delta\in\g_{\op{sme}},\ \mu(\phi)<\delta\le\gamma\right\}. $$
$\bullet$\quad In particular, Proposition \ref{td=td} holds in $\mathcal{T}_\sme$ too. There is a canonical bijection between $\op{KP}(\mu)/\!\sim_\mu$ and the set of tangent directions of $\mu$ in the tree $\mathcal{T}_\sme$.
$\bullet$\quad Let $\mathcal{A}=(\rho_i)_{i\in A}$ be an essential continuous family in $\mathcal{T}$, and $\phi\in\op{KP}_\infty(\mathcal{A})$ a limit key polynomial. Then, we may build up limit augmentations in $\mathcal{T}_\sme$: $$ \nu=[\mathcal{A};\,\phi,\gamma]\in \mathcal{T}_\sme,\quad \gamma\in\g_{\op{sme}},\ \gamma>\rho_i(\phi)\ \mbox{ for all }i\in A. $$ By Lemma \ref{specialCont}, we may assume that all $\rho_i$ are commensurable. Thus, we may regard these limit augmentations as constructed solely from objects in the tree $\mathcal{T}_\sme$.
For any such augmentation, we may describe the following interval in $\mathcal{T}_\sme$: $$ \bigcap\nolimits_{i\in A}(\rho_i,\nu]=\left\{[\mathcal{A};\,\phi,\delta]\mid \delta\in\g_{\op{sme}},\ \rho_i(\phi)<\delta\le\gamma\ \mbox{ for all }i\in A\right\}. $$
$\bullet$\quad Every two nodes $\mu,\,\nu\in\mathcal{T}_\sme$ have a greatest common lower node $\mu\wedge\nu\in\mathcal{T}_\sme$.
Indeed, as remarked after Proposition \ref{GCN}, if neither $\mu\le\nu$ nor $\mu\ge\nu$, the greatest common lower node $\mu\wedge\nu\in\mathcal{T}$ is commensurable; thus, it belongs to $\mathcal{T}_\sme$.
\subsection{Paths of constant depth in $\mathcal{T}_\sme$}\label{subsecConstTs} The main difference between $\mathcal{T}_\sme$ and $\mathcal{T}$ lies in the fact that the paths of constant depth in $\mathcal{T}_\sme$ are ``compact", thanks to the completeness of $\g_{\op{sme}}$.
\subsubsection{Inner depth-zero nodes} With the notation in Section \ref{subsecDepth0}, the inner depth-zero nodes of $\mathcal{T}_\sme$ are of the form $\omega_{a,\gamma}$ for $a\in K$ and $\gamma\in\g_{\op{sme}}$. By (\ref{balls}), we have $$ \omega_{a,-\infty}=\omega_{b,-\infty}\le\omega_{c,\gamma} \quad\mbox{ for all}\quad a,b,c\in K,\ \gamma\in \g_{\op{sme}}. $$ Let us denote by $\omega_{-\infty}:=\omega_{a,-\infty}$ this minimal depth-zero valuation, which is independent of $a$. By Theorem \ref{main}, the node $\omega_{-\infty}$ is an absolute minimal node of $\mathcal{T}_\sme$. We say that $\omega_{-\infty}$ is the \emph{root node} of $\mathcal{T}_\sme$. As a valuation, it acts as follows: $$ \omega_{-\infty}\colon K[x]\,\longrightarrow\, \left(\mathbb Z\times\Gamma\right)\infty,\qquad f\longmapsto \left(-\deg(f),v(\op{lc}(f))\right), $$ where $\op{lc}(f)$ is the leading coefficient of a nonzero polynomial $f$. All valuations $\mu$ on $K[x]$ satisfying $\mu(x)<\g_\Q$ are equivalent to $\omega_{-\infty}$ \cite[Thm. 2.4]{RPO}.
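As a quick sanity check, this formula satisfies the valuation axioms, where $\mathbb Z\times\Gamma$ carries the lexicographical order. For nonzero $f,g\in K[x]$, $$ \omega_{-\infty}(fg)=\left(-\deg(fg),\,v(\op{lc}(fg))\right)=\omega_{-\infty}(f)+\omega_{-\infty}(g), $$ because degrees add up and leading coefficients multiply, while $\deg(f+g)\le\max\{\deg(f),\deg(g)\}$ yields $\omega_{-\infty}(f+g)\ge\min\{\omega_{-\infty}(f),\omega_{-\infty}(g)\}$; in the case $\deg(f+g)=\deg(f)=\deg(g)$, one uses $v(\op{lc}(f)+\op{lc}(g))\ge\min\{v(\op{lc}(f)),v(\op{lc}(g))\}$.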
Since $\omega_{-\infty}$ is incommensurable, it has a unique tangent direction. Actually, $$\op{KP}(\omega_{-\infty})=\{x-a\mid a\in K\}=[x]_{\omega_{-\infty}}.$$
All inner depth-zero nodes in $\mathcal{T}_\sme$ are obtained as a single ordinary augmentation of the root node $\omega_{-\infty}$: $$ \omega_{a,\gamma}=[\omega_{-\infty};\,x-a,\gamma]\quad\mbox{ for all }a\in K,\ \gamma\in\g_{\op{sme}},\ \gamma>-\infty. $$
In particular, the set of all inner depth-zero nodes is: $$ \left\{\omega_{-\infty}\right\}\cup\bigcup\nolimits_{a\in K}\mathcal{P}_{\omega_{-\infty}}(x-a). $$
For any key polynomial $x-a\in \op{KP}(\omega_{-\infty})$, the constant-depth path $\mathcal{P}_{\omega_{-\infty}}(x-a)$ is parametrized by the interval $(-\infty,\infty]\subset\g_{\op{sme}}\infty$:
\begin{center} \setlength{\unitlength}{4mm} \begin{picture}(20,3.5) \put(-1,1){$\bullet$}\put(-0.8,1.3){\line(1,0){15.6}}\put(18,1){$\bullet$} \put(6,1){$\bullet$}\put(20,1){$\bullet$}
\put(-3.2,1){\begin{footnotesize}$\omega_{-\infty}$\end{footnotesize}} \put(15.4,1){\begin{footnotesize}$\cdots\cdots$\end{footnotesize}} \put(17.4,2){\begin{footnotesize}$\omega_{a,\infty^-}$\end{footnotesize}} \put(21,1){\begin{footnotesize}$\omega_{a,\infty}$\end{footnotesize}} \put(5.6,2){\begin{footnotesize}$\omega_{a,\gamma}$\end{footnotesize}} \end{picture} \end{center}
Moreover, $\omega_{a,\gamma}$ is commensurable if and only if $\gamma\in\g_\Q\infty$, and it preserves the rank if and only if $\gamma\in\g_{\op{nbc}}\infty$. The finite leaf $\omega_{a,\infty}$ has an immediate predecessor node $\omega_{a,\infty^-}$, represented by the valuation $$ \omega_{a,\infty^-}\colon K[x]\,\longrightarrow\, \left(\mathbb Z\times\Gamma\right)\infty,\qquad f\longmapsto \left(\op{ord}_{x-a}(f),v(\op{init}(f))\right), $$ where $\op{init}(f)$ is the first nonzero coefficient of the $(x-a)$-expansion of $f\in K[x]$.
The intersection of the depth-zero paths in $\mathcal{T}_\sme$ determined by any two $a,b\in K$ may be computed as in Section \ref{subsecDepth0}: $$ \mathcal{P}_{\omega_{-\infty}}(x-a)\cap\mathcal{P}_{\omega_{-\infty}}(x-b)=[\omega_{-\infty},\omega_{a,v(b-a)}]. $$
\subsubsection{Ordinary augmentations}
Let $\mu\in\mathcal{T}_\sme$ be an inner node and let $\phi\in\op{KP}(\mu)$ be a key polynomial. The constant-depth path $\mathcal{P}_\mu(\phi)\subset\mathcal{T}_\sme$, of all nodes in $\mathcal{T}_\sme$ determined by an ordinary augmentation of $\mu$ with respect to $\phi$, is parametrized by all $\gamma\in\g_{\op{sme}}\infty$ such that $\gamma>\mu(\phi)$:
\begin{center} \setlength{\unitlength}{4mm} \begin{picture}(22,3.5) \put(-2,1){$\bullet$}\put(-1.6,1.3){\line(1,0){16}}\put(18,1){$\bullet$}\put(20,1){$\bullet$} \put(6,1){$\bullet$}
\put(-3,1){\begin{footnotesize}$\mu$\end{footnotesize}} \put(16.5,1){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(15.2,1){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(16.5,2){\begin{footnotesize}$\mu(\phi,\infty^-)$\end{footnotesize}} \put(21,1){\begin{footnotesize}$\mu(\phi,\infty)$\end{footnotesize}} \put(4.8,2){\begin{footnotesize}$\mu(\phi,\gamma)$\end{footnotesize}} \end{picture} \end{center}
Moreover, $\mu(\phi,\gamma)$ is commensurable if and only if $\gamma\in\g_\Q\infty$, and it preserves the rank if and only if $\gamma\in\g_{\op{nbc}}\infty$. The finite leaf $\mu(\phi,\infty)$ has an immediate predecessor node $\mu(\phi,\infty^-)$, represented by the valuation $$ \mu(\phi,\infty^-)\colon K[x]\,\longrightarrow\, \left(\mathbb Z\times\Gamma\right)\infty,\qquad f\longmapsto \left(\op{ord}_\phi(f),\mu(\op{init}(f))\right), $$ where $\op{init}(f)$ is the first nonzero coefficient of the $\phi$-expansion of $f\in K[x]$.
The intersection of the constant-depth paths in $\mathcal{T}_\sme$ determined by any two $\phi,\phi_*\in \op{KP}(\mu)$ may be computed as in Section \ref{subsecConstDepthOrd}: $$ \mathcal{P}_\mu(\phi)\cap\mathcal{P}_\mu(\phi_*)=\begin{cases} \emptyset,&\quad\mbox{ if }\quad\phi\not\sim_\mu\phi_*,\\ \left(\mu,\mu(\phi,\gamma_0)\right],&\quad\mbox{ if }\quad\gamma_0=\mu(\phi-\phi_*)>\mu(\phi). \end{cases} $$
\subsubsection{Limit augmentations}
Finally, let $\mathcal{A}=(\rho_i)_{i\in A}$ be an essential continuous family in $\mathcal{T}_\sme$, and let $\phi\in\op{KP}_\infty(\mathcal{A})$ be a limit key polynomial. Let $\mu\in\mathcal{A}$ be the first valuation in the family. The completeness of $\g_{\op{sme}}$ implies the existence of a minimal limit augmentation of $\mathcal{A}$ in $\mathcal{T}_\sme$ with respect to $\phi$; namely $$ \mu_\mathcal{A}:=[\mathcal{A};\,\phi,\gamma_\mathcal{A}],\qquad \gamma_\mathcal{A}:=\sup\left\{\rho_i(\phi)\mid i\in A\right\}\in\g_{\op{sme}}. $$ Note that $\gamma_\mathcal{A}>\rho_i(\phi)$ for all $i$, because $\mathcal{A}$ has no maximal element. Also, $\gamma_\mathcal{A}<\infty$.
The following result follows immediately from Lemma \ref{allLKP}.
\begin{lemma}\label{muaa} The value $\gamma_\mathcal{A}\in\g_{\op{sme}}$ and the valuation $\mu_\mathcal{A}\in\mathcal{T}_\sme$ are independent of the choice of the limit key polynomial $\phi$ in $\op{KP}_\infty(\mathcal{A})$. \end{lemma}
The constant-depth path $\mathcal{P}_\aa(\phi)\subset\mathcal{T}_\sme$, of all nodes determined by a limit augmentation of $\mathcal{A}$ with respect to $\phi$, is parametrized by all $\gamma\in\g_{\op{sme}}\infty$ such that $\gamma\ge\gamma_\mathcal{A}$:
\begin{center} \setlength{\unitlength}{4mm} \begin{picture}(28,4) \put(-2,0.9){$\bullet$}\put(4.25,1){$\bullet$}\put(25,1){$\bullet$}\put(27,1){$\bullet$}\put(12,1){$\bullet$} \put(-1.6,1.25){\line(1,0){4}}\put(2.8,1){$\cdots$}\put(4.5,1.3){\line(1,0){17}} \multiput(4.5,0.1)(0,.25){10}{\vrule height1pt} \put(-1,2){$(\rho_i)_{i\in A}$}
\put(-3,1.1){\begin{footnotesize}$\mu$\end{footnotesize}} \put(5,.4){\begin{footnotesize}$\mu_\mathcal{A}$\end{footnotesize}} \put(21.8,1){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(23,1){\begin{footnotesize}$\cdots$\end{footnotesize}} \put(23.2,2){\begin{footnotesize}$\mathcal{A}(\phi,\infty^-)$\end{footnotesize}} \put(28,1){\begin{footnotesize}$\mathcal{A}(\phi,\infty)$\end{footnotesize}} \put(10.8,2){\begin{footnotesize}$\mathcal{A}(\phi,\gamma)$\end{footnotesize}} \end{picture} \end{center}
Note that $\mu_\mathcal{A}\in\mathcal{P}_\aa(\phi)$. Again, $\mathcal{A}(\phi,\gamma)$ is commensurable if and only if $\gamma\in\g_\Q\infty$, and it preserves the rank if and only if $\gamma\in\g_{\op{nbc}}\infty$. The finite leaf $\mathcal{A}(\phi,\infty)$ has an immediate predecessor node $\mathcal{A}(\phi,\infty^-)$, represented by the valuation $$ \mathcal{A}(\phi,\infty^-)\colon K[x]\,\longrightarrow\, \left(\mathbb Z\times\Gamma\right)\infty,\qquad f\longmapsto \left(\op{ord}_\phi(f),\rho_\aa(\op{init}(f))\right), $$ where $\op{init}(f)$ is the first nonzero coefficient of the $\phi$-expansion of $f\in K[x]$.
The intersection of the constant-depth paths in $\mathcal{T}_\sme$ determined by any two $\phi,\phi_*\in \op{KP}_\infty(\mathcal{A})$ is an interval in $\mathcal{T}_\sme$ which may be computed as in Section \ref{subsecConstDepthLim}: $$ \mathcal{P}_\aa(\phi)\cap\mathcal{P}_\mathcal{A}(\phi_*)=[\mu_\mathcal{A},\mathcal{A}(\phi,\rho_\aa(\phi-\phi_*))]\subset\mathcal{T}_\sme. $$
Since the set $\left\{\rho_i(\phi)\mid i\in A\right\}\subset\g_\Q$ contains no maximal element, its supremum $\gamma_\mathcal{A}$ in $\g_{\op{sme}}$ is incommensurable. Indeed, if $\gamma_\mathcal{A}\in\g_\Q$, then it would admit an immediate predecessor $\gamma_\mathcal{A}^-<\gamma_\mathcal{A}$, defined as $\gamma_\mathcal{A}^-=b_S^-$, for $b=\gamma_\mathcal{A}$ and $S=I$ (cf. Section \ref{subsecQcuts}). Since $\gamma_\mathcal{A}^-\in\g_{\op{nbc}}$ is incommensurable, it is still an upper bound of the set $\left\{\rho_i(\phi)\mid i\in A\right\}$. This contradicts the minimality of $\gamma_\mathcal{A}$ as an upper bound of this set.
Thus, $\mu_\mathcal{A}$ is incommensurable. In particular, it has a unique tangent direction.
Since $\gamma_\mathcal{A}<\infty$, Proposition \ref{extensionlim} shows that $\phi$ is a key polynomial for $\mu_\mathcal{A}$ of minimal degree. Actually, by \cite[Thm. 4.2]{KP} and Lemma \ref{allLKP}, we have $$ \op{KP}(\mu_\mathcal{A})=[\phi]_{\mu_\mathcal{A}}= \left\{\phi+a\mid a\in K[x],\ \deg(a)<m_\infty, \ \rho_\mathcal{A}(a)>\gamma_\mathcal{A} \right\}=\op{KP}_\infty(\mathcal{A}). $$
Also, all limit augmentations $\mathcal{A}(\phi,\gamma)$ are ordinary augmentations of $\mu_\mathcal{A}$: $$ \mathcal{A}(\phi,\gamma)=[\mathcal{A};\,\phi,\gamma]=[\mu_\mathcal{A};\,\phi,\gamma]\quad \mbox{ for all }\gamma\in\g_{\op{sme}},\ \gamma>\gamma_\mathcal{A}, $$ by comparing the action of both valuations on $\phi$-expansions. Indeed, for all polynomials $a\in K[x]$ of degree less than $m_\infty=\deg(\phi)$, we have $\mu_\mathcal{A}(a)=\rho_\aa(a)$, by the definition of a limit augmentation.
The above picture might suggest that the interval $(\mu,\mu_\mathcal{A})$ is contained in a single constant-depth path beyond $\mu$. This is not the case.
By Lemma \ref{specialCont}, we may suppose that $\mathcal{A}=\left(\rho_i\right)_{i\in A}$, with $\rho_i=[\mu;\,\chi_i,\beta_i]$. Then, for each $i\in A$, the interval $(\mu,\rho_i]$ is contained in $\mathcal{P}_\mu(\chi_i)$; however, for $j>i$, the valuation $\rho_j$ belongs to $\mathcal{P}_\mu(\chi_j)$, but it does not belong to $\mathcal{P}_\mu(\chi_i)$. Therefore, a more appropriate picture of this interval would be the following one:
\begin{center} \setlength{\unitlength}{4mm} \begin{picture}(20,9.5) \put(-2,0.9){$\bullet$}\put(-1.6,1.25){\line(1,0){8.5}}\put(7.3,1){$\cdots$} \put(8.5,8){$\bullet$}\put(8.7,8.3){\line(1,0){3}}\put(12,8){$\cdots$} \multiput(8.8,1.2)(0,.21){33}{\vrule height1pt}
\put(1,1.2){\line(3,1){6}} \put(7.4,2.7){$\dot{}$}\put(7.82,2.85){$\dot{}$}\put(8.24,3){$\dot{}$} \put(3,1.85){\line(2,1){4}} \put(7.4,3.45){$\dot{}$}\put(7.82,3.66){$\dot{}$}\put(8.24,3.87){$\dot{}$} \put(5,2.85){\line(1,1){2}} \put(7.4,4.64){$\dot{}$}\put(7.82,5.06){$\dot{}$}\put(8.24,5.49){$\dot{}$}
\put(-3,1.1){\begin{footnotesize}$\mu$\end{footnotesize}} \put(7,8.2){\begin{footnotesize}$\mu_\mathcal{A}$\end{footnotesize}} \end{picture} \end{center}
\subsection{Primitive nodes}
The constant-depth paths beyond a limit augmentation have properties completely analogous to those of the depth-zero paths. For the reader's convenience, we include the depth-zero paths as a special case of the limit augmentation paths.
\noindent{\bf Convention. }We admit the empty set $\mathcal{A}=\emptyset$ as an essential continuous family in $\mathcal{T}_\sme$. We agree that this family has $\gamma_\mathcal{A}=-\infty$, $\mu_\mathcal{A}=\omega_{-\infty}$, and $$ \op{KP}_\infty(\mathcal{A})=\op{KP}(\mu_\mathcal{A})=\left\{x-a\mid a\in K\right\},\quad \mathcal{P}_{\mathcal{A}}(x-a)=\{\omega_{-\infty}\}\,\cup\,\mathcal{P}_{\omega_{-\infty}}(x-a). $$
\noindent{\bf Definition. } A \emph{primitive-limit} node in $\mathcal{T}_\sme$ is the inner limit node $\mu_\mathcal{A}$ associated to an essential continuous family $\mathcal{A}$ in $\mathcal{T}_\sme$. The set of primitive-limit nodes is in bijection with the set of equivalence classes of essential continuous families.
A \emph{primitive-ordinary\,} node in $\mathcal{T}_\sme$ is an inner node $\mu\in\mathcal{T}_\sme$ admitting strong constant-depth paths (cf. Section \ref{subsecConstDepthOrd}). That is, $\op{KP}_{\op{str}}(\mu)\ne\emptyset$, where $$ \op{KP}_{\op{str}}(\mu):=\{\phi\in\op{KP}(\mu)\mid\deg(\phi)>\deg(\mu)\}. $$ Since $\mu$ has key polynomials of different degrees, it is necessarily commensurable.
A \emph{primitive} node in $\mathcal{T}_\sme$ is a node which is either primitive-limit or primitive-ordinary. Let us denote by $\op{Prim}(\mathcal{T}_\sme)$ the set of all primitive nodes.
By our convention, the root node $\omega_{-\infty}$ is a primitive-limit node.
By Theorem \ref{main} and \cite[Thm. 4.7]{MLV}, the primitive-limit nodes cannot be obtained as an ordinary augmentation of a lower node.
\noindent{\bf Definition. } Let $\rho\in \mathcal{T}_\sme$ be a primitive node. Then, we define $$ \mathcal{P}(\rho)=\begin{cases} \bigcup_{\phi\in\op{KP}_{\operatorname{str}}(\rho)}\mathcal{P}_\rho(\phi),&\mbox{ if $\rho$ is primitive-ordinary},\\ \bigcup_{\phi\in\op{KP}_\infty(\mathcal{A})}\mathcal{P}_\mathcal{A}(\phi),&\mbox{ if $\rho=\mu_\mathcal{A}$ is primitive-limit}. \end{cases} $$
We emphasize that $\rho\in\mathcal{P}(\rho)$ if $\rho$ is a primitive-limit node, but $\rho\not\in\mathcal{P}(\rho)$ if $\rho$ is primitive-ordinary. However, in both cases, the arguments in Section \ref{subsecConstTs} show that \begin{equation}\label{remind} \mu\in\mathcal{P}(\rho),\ \rho<\mu\ \ \Longrightarrow\ \ \mu=[\rho;\,\phi,\operatorname{sv}(\mu)], \end{equation} for some $\phi\in\op{KP}(\rho)$. If $\rho$ is primitive-ordinary, then necessarily $\phi\in\op{KP}_{\op{str}}(\rho)$.
\begin{theorem}\label{previous} Let $\nu\in\mathcal{T}_\sme$ be either an inner node, or a finite leaf. There exists a unique primitive node $\rho\in\op{Prim}(\mathcal{T}_\sme)$ such that $\nu\in\mathcal{P}(\rho)$. In other words, $$ \mathcal{T}_\sme\setminus\mathcal{L}_\infty(\mathcal{T}_\sme)=\bigsqcup\nolimits_{\rho\in\op{Prim}(\mathcal{T}_\sme)}\mathcal{P}(\rho). $$ \end{theorem}
\begin{proof} If $\nu$ has depth zero, then $\nu$ belongs to $\mathcal{P}(\minf)$, as we saw in Section \ref{subsecConstTs}.
If $\nu$ has a positive depth, then it is the last node of a finite MLV chain $$ \mu_0\ \stackrel{\phi_1,\gamma_1}{\,\longrightarrow\,}\ \mu_1\ \,\longrightarrow\,\ \cdots\ \,\longrightarrow\,\ \mu_{r-1}\ \stackrel{\phi_{r},\gamma_{r}}{\,\longrightarrow\,}\ \mu_{r}=\nu.$$
If the last augmentation step $\mu_{r-1}\to\nu$ is ordinary, then $\mu_{r-1}$ is a primitive-ordinary node and $\nu\in\mathcal{P}(\mu_{r-1})$. Indeed, $\nu=[\mu_{r-1};\,\phi_r,\gamma_r]\in\mathcal{P}_{\mu_{r-1}}(\phi_r)$ and $\deg(\phi_r)>\deg(\mu_{r-1})$ by the definition of a MLV chain.
If $\mu_{r-1}\to\nu$ is a limit augmentation, then $\nu=[\mathcal{A};\,\phi_r,\gamma_r]\in\mathcal{P}(\mu_\aa)$.
Therefore, the union of all sets $\mathcal{P}(\rho)$, for $\rho$ running on all the primitive nodes in $\mathcal{T}_\sme$, covers $\mathcal{T}_\sme\setminus\mathcal{L}_\infty(\mathcal{T}_\sme)$. It remains only to show that $$ \rho,\eta\in\op{Prim}(\mathcal{T}_\sme), \ \ \rho\ne\eta\ \ \Longrightarrow\ \ \mathcal{P}(\rho)\cap \mathcal{P}(\eta)=\emptyset. $$
Since $\mathcal{T}_\sme$ is a tree, this is obvious if $\rho\not\le\eta$ and $\eta\not\le\rho$.
Suppose that $\rho<\eta$ and there exists $\mu\in\mathcal{P}(\rho)\cap\mathcal{P}(\eta)$. By (\ref{remind}), the valuation $\mu\in\mathcal{P}(\rho)$ may be obtained after a single ordinary augmentation step: $\mu=[\rho;\,\phi,\operatorname{sv}(\mu)]$, for a certain $\phi\in\op{KP}(\rho)$. Since $\eta$ belongs to the interval $(\rho,\mu)$, \cite[Lem. 2.7]{MLV} shows that $\eta=[\rho;\,\phi,\operatorname{sv}(\eta)]$ too. By Lemma \ref{propertiesAug}, $\deg(\eta)=\deg(\phi)=\deg(\mu)$.
This leads to a contradiction. Indeed, $\eta$ cannot be a primitive-limit node because it is an ordinary augmentation of a lower node. Hence, $\mathcal{P}(\eta)$ is the union of strong constant-depth paths and this implies $\deg(\eta)<\deg(\mu)$. \end{proof}
\subsection{Stratification of $\mathcal{T}_\sme$ by limit-depth}\label{subsecStrat}
Let $\rho\in\mathcal{T}_\sme$ be a primitive-limit node. The \emph{inductive tree} with root $\rho$ is the subset $\mathcal{T}^{\op{ind}}_\sme(\rho)\subset\mathcal{T}_\sme$ formed by all inner nodes, or finite leaves in $\mathcal{T}_\sme$, which may be obtained by a finite chain of {\bf ordinary} augmentations starting from $\rho$: $$ \rho\ \stackrel{\phi_1,\gamma_1}{\,\longrightarrow\,}\ \mu_1\ \,\longrightarrow\,\ \cdots \ \,\longrightarrow\,\ \mu_{n-1} \ \stackrel{\phi_{n},\gamma_{n}}{\,\longrightarrow\,}\ \mu_{n}=\mu. $$
We may consider the stratification by limit-depth $$ \mathcal{T}_\sme\setminus\mathcal{L}_\infty(\mathcal{T}_\sme)=\bigsqcup\nolimits_{n\in\mathbb N_0}\mathcal{T}_n, $$ where $\mathcal{T}_n$ is the subtree of all nodes in $\mathcal{T}_\sme\setminus\mathcal{L}_\infty(\mathcal{T}_\sme)$ whose limit-depth is equal to $n$. These subtrees may be recursively constructed as: $$ \mathcal{T}_0=\mathcal{T}^{\op{ind}}_\sme(\omega_{-\infty}),\qquad \mathcal{T}_{n+1}=\bigsqcup\nolimits_{[\mathcal{A}]\in\mathcal{N}_\infty(\mathcal{T}_n)}\mathcal{T}^{\op{ind}}_\sme(\mu_\mathcal{A}), $$ where $\mathcal{N}_\infty(\mathcal{T}_n)$ is the set of equivalence classes of essential continuous families in $\mathcal{T}_n$.
We could stratify $\mathcal{L}_\infty(\mathcal{T}_\sme)$ in a similar way, but we must add a stratum corresponding to the infinite leaves with an infinite limit-depth. In \cite{ILD} we showed that such infinite leaves do exist.
\end{document}
\begin{document}
\title{On the three-dimensional finite Larmor radius approximation: the case of electrons in a fixed background of ions}
\begin{abstract} This paper is concerned with the analysis of a mathematical model arising in plasma physics, more specifically in fusion research. It directly follows \cite{DHK1}, where the three-dimensional analysis of a Vlasov-Poisson equation with finite Larmor radius scaling was carried out, corresponding to the case of ions with massless electrons whose density follows a linearized Maxwell-Boltzmann law. We now consider the case of electrons in a background of fixed ions, which was only sketched in \cite{DHK1}. Unfortunately, there is evidence that the formal limit is false in general. Nevertheless, we formally derive a fluid system for particular monokinetic data. We prove the local in time existence of analytic solutions and rigorously study the limit (when the Debye length vanishes) to a new anisotropic fluid system. This is achieved thanks to Cauchy-Kovalevskaya type techniques, as introduced by Caflisch \cite{Caf} and Grenier \cite{Gre1}. We finally show that this approach fails in Sobolev regularity, due to multi-fluid instabilities. \end{abstract}
\textbf{Keywords}: Finite Larmor Radius Approximation - Anisotropic quasineutral limit - Anisotropic hydrodynamic systems - Cauchy-Kovalevskaya theorem - Ill-posedness in Sobolev spaces.
\section{Introduction} \subsection{Presentation of the problem}
The main goal of this paper is to derive a fluid model in order to understand the behaviour of a quasineutral gas of electrons in a neutralizing background of fixed ions, subjected to a strong magnetic field. For simplicity, we consider that the magnetic field has fixed direction and intensity. The density of the electrons is governed by the classical Vlasov-Poisson equation. We first introduce some notations:
\begin{notat}
\begin{itemize} \item Let $(e_1,e_2,e_\parallel)$ be a fixed orthonormal basis of $\mathbb{R}^3$. \item The subscript $\perp$ stands for the orthogonal projection on the plane $(e_1,e_2)$, while the subscript $\parallel$ stands for the projection on $e_\parallel$. \item For any vector $X=(X_1,X_2,X_\parallel)$, we define $X^\perp$ as the vector $(X_2,-X_1,0)=X\wedge e_\parallel$. \item We define the differential operators $\Delta_{x_\parallel}= \partial_{x_\parallel}^2$ and $\Delta_{x_\perp}=\partial^2_{x_1} + \partial^2_{x_2}$. \end{itemize} \end{notat}
The scaling we consider (we refer to the appendix for physical explanations) leads to the study of the scaled Vlasov-Poisson system (for $t>0, x \in \mathbb{T}^3:= \mathbb{R}^3/ \mathbb{Z}^3, v \in \mathbb{R}^3$):
\begin{equation}
\label{kinbegin} \left\{
\begin{array}{ll}
\partial_{t} f_\epsilon + \frac{v_\perp}{\epsilon}.\nabla_{x} f_\epsilon + v_\parallel.\nabla_{x} f_\epsilon + (E_\epsilon+ \frac{v\wedge e_\parallel}{\epsilon}).\nabla_{v} f_\epsilon = 0 \\
E_\epsilon= (-\nabla_{x_\perp} {V}_\epsilon, -\epsilon\nabla_{x_\parallel} {V}_\epsilon) \\
-\epsilon^2\Delta_{x_\parallel} {V}_\epsilon -\Delta_{x_\perp} {V}_\epsilon = \int f_\epsilon dv - \int f_\epsilon dvdx\\
f_{\epsilon, t=0}=f_{\epsilon,0}\geq 0, \quad \int f_{\epsilon,0}dvdx=1. \end{array}
\right. \end{equation} The quantity $f_\epsilon(t,x,v)$ is interpreted as the distribution function of the electrons: this means that $f_\epsilon(t,x,v) dx dv$ is the probability of finding particles at time $t$ in the volume element $dx\,dv$ around the position $x$ and velocity $v$; $V_\epsilon(t,x)$ and $E_\epsilon(t,x)$ are respectively the electric potential and the electric force.
This corresponds to the so-called finite Larmor radius scaling for the Vlasov-Poisson equation, which was introduced in the mathematical literature by Frénod and Sonnendrücker \cite{FS2}. The $2D$ version of the system (obtained when one restricts to the perpendicular dynamics) was studied in \cite{FS2} and more recently in \cite{Bos} and \cite{GHN}. A version of the full $3D$ system describing ions with massless electrons was studied by the author in \cite{DHK1}. In that work, we considered that the density of electrons follows a linearized Maxwell-Boltzmann law. This means that we studied the following Poisson equation for the electric potential: \begin{equation} V_\epsilon -\epsilon^2\Delta_{x_\parallel} {V}_\epsilon -\Delta_{x_\perp} {V}_\epsilon = \int f_\epsilon dv - \int f_\epsilon dvdx. \end{equation} In this case it was shown, after some filtering, that the number density $f_\epsilon$ weakly-* converges to a solution $f$ of another kinetic system exhibiting the so-called $E\times B$ drift in the orthogonal plane, but with trivial dynamics in the parallel direction. This last feature is somewhat disappointing.
We observed in \cite{DHK1} that in the case where the Poisson equation reads: \begin{equation}
-\epsilon^2\Delta_{x_\parallel} {V}_\epsilon -\Delta_{x_\perp} {V}_\epsilon = \int f_\epsilon dv - \int f_\epsilon dvdx,
\end{equation}
we could expect to make a pressure appear in the limit process $\epsilon \rightarrow 0$, due to the incompressibility constraint:
\[
\int f dv dx_\perp =\int f dvdx. \]
Unfortunately, we were not able to rigorously derive a kinetic limit or even a fluid limit from (\ref{kinbegin}). This is not only due to technical mathematical difficulties: it is related to the existence of instabilities for the Vlasov-Poisson equation, such as the double-humped instabilities (see Guo and Strauss \cite{GS}), and their counterpart in the multi-fluid Euler equations, such as the two-stream instabilities (see Cordier, Grenier and Guo \cite{CGG}). Such instabilities actually take over in the limit $\epsilon \rightarrow 0$ and the formal limit is false in general, unless $f_{\epsilon,0}$ does not depend on the parallel variables, which corresponds to the $2D$ problem studied by Frénod and Sonnendrücker \cite{FS2}.
Actually, we can observe that if on the contrary the initial data $f_{\epsilon,0}$ depends only on parallel variables, we obtain the one-dimensional quasineutral system:
\begin{equation} \left\{
\begin{array}{ll}
\partial_{t} f_\epsilon + v_\parallel \partial_{x_\parallel} f_\epsilon -\partial_{x_\parallel} V_\epsilon \partial_{v_\parallel} f_\epsilon = 0 \\
-\epsilon^2\partial^2_{x_\parallel} {V}_\epsilon = \int f_\epsilon dv - \int f_\epsilon dvdx_\parallel\\
f_{\epsilon, t=0}=f_{\epsilon,0}\geq 0, \quad \int f_{\epsilon,0}dvdx_\parallel=1. \end{array}
\right. \end{equation}
The formal limit is easily obtained, by taking $\epsilon=0$:
\begin{equation} \left\{
\begin{array}{ll}
\partial_{t} f + v_\parallel \partial_{x_\parallel} f -\partial_{x_\parallel} V \partial_{v_\parallel} f = 0 \\
\int f dv = \int f dvdx_\parallel\\
f_{ t=0}=f_{0}\geq 0, \quad \int f_{0}dvdx_\parallel=1. \end{array}
\right. \end{equation}
In \cite{Gr99}, an explicit example of Grenier shows that the formal limit is false in general, because of the double-humped instability:
\begin{thm}[\cite{Gr99}] For any $N$ and $s$ in $\mathbb{N}$, and for any $\epsilon<1$, there exist, for $i=1,2,3,4$, functions $v_i^\epsilon(x)\in H^s(\mathbb{T})$ with $\Vert v_1^\epsilon(x)+1 \Vert_{H^s} \leq \epsilon^N$, $\Vert v_2^\epsilon(x)+1/2 \Vert_{H^s} \leq \epsilon^N$, $\Vert v_3^\epsilon(x)-1/2 \Vert_{H^s} \leq \epsilon^N$, $\Vert v_4^\epsilon(x)-1 \Vert_{H^s} \leq \epsilon^N$, such that the following holds. Let $f_\epsilon(t,x,v)$ be the solution associated to the initial data: \begin{eqnarray*} f_{\epsilon,0}(x,v)&=&1 \quad \text{for} \quad v_1^\epsilon(x) \leq v \leq v_2^\epsilon(x) \text{ and } v_3^\epsilon(x) \leq v \leq v_4^\epsilon(x), \\ &=& 0 \quad \text{elsewhere}, \end{eqnarray*} and define $f_0$ by: \begin{eqnarray*} f_{0}(x,v)&=&1 \quad \text{for} \quad -1 \leq v \leq -1/2 \text{ and }1/2 \leq v \leq 1, \\ &=& 0 \quad \text{elsewhere}. \end{eqnarray*}
Then $f_\epsilon$ does not converge to $f_0$ in the following sense: \begin{equation} \liminf_{\epsilon\rightarrow 0} \sup_{t\leq T} \int \vert f_\epsilon(t,x,v) - f_0(v) \vert v^2 dv dx > 0 \end{equation} for any $T>0$ and also for $T= \epsilon^\alpha$, with $\alpha<1/2$. \end{thm}
In order to overcome the effects of these instabilities for the usual quasineutral limit, there are two possibilities: \begin{itemize} \item One consists in restricting to particular initial profiles chosen in order to be stable (this would imply in particular some monotonicity conditions on the data, such as the Penrose condition \cite{Pen}). \item The other one consists in considering data with analytic regularity, in which case the instabilities (which are essentially of ``Sobolev'' nature) do not have any effect. \end{itemize}
Here the situation is worse: in contrast with the usual quasineutral limit (see \cite{Br}, \cite{Gr99}), restricting to stable profiles is not sufficient. This is due to the anisotropy of the problem and to the dynamics in the perpendicular variables.
In this paper, we illustrate this phenomenon by formally deriving the following fluid system, obtained from the kinetic system (\ref{kinbegin}) by considering some physically relevant monokinetic data (we refer to the appendix for the detailed formal derivation).
\begin{equation} \label{sys}
\left\{
\begin{array}{ll}
\partial_t \rho_\epsilon + \nabla_\perp (E^\perp_\epsilon \rho_\epsilon) + \partial_\parallel(v_{\parallel,\epsilon} \rho_\epsilon)= 0 \\
\partial_t v_{\parallel,\epsilon} + \nabla_\perp (E^\perp_\epsilon v_{\parallel,\epsilon}) + v_{\parallel,\epsilon} \partial_\parallel(v_{\parallel,\epsilon}) = -\epsilon\partial_\parallel \phi_\epsilon(t,x) -\partial_\parallel V_\epsilon(t,x_\parallel) \\
E^\perp_\epsilon= -\nabla^\perp \phi_\epsilon \\ -\epsilon^2 \partial^2_{\parallel} \phi_\epsilon - \Delta_{\perp} \phi_\epsilon = \rho_\epsilon - \int \rho_\epsilon dx_\perp\\
-\epsilon \partial_{\parallel}^2 V_\epsilon =\int \rho_\epsilon dx_\perp -1,\\ \end{array}
\right. \end{equation} where:
\begin{itemize}\item$\rho_\epsilon(t,x_\perp,x_\parallel) : \mathbb{R}^+ \times \mathbb{T}^3 \rightarrow \mathbb{R}^+_*$ can be interpreted as a charge density,
\item $v_{\parallel,\epsilon}(t,x_\perp,x_\parallel) : \mathbb{R}^+ \times \mathbb{T}^3 \rightarrow \mathbb{R}$ can be interpreted as a ``parallel'' current density.
\item $\phi_\epsilon(t,x)$ and $V_{\epsilon}(t,x_\parallel)$ are electric potentials.
\end{itemize}
Although we have considered monokinetic data, (\ref{sys}) is intrinsically a ``multi-fluid'' system, because of the dependence on $x_\perp$. Hence, we still have to face the two-stream instabilities (\cite{CGG}): because of these, the limit is false in Sobolev regularity and we thus decide to study the associated Cauchy problem for analytic data.
We then prove the limit to a new fluid system which is strictly speaking compressible but also somehow ``incompressible in average''. This rather unusual feature is due to the anisotropy of the model. The fluid system is the following (obtained formally by taking $\epsilon=0$): \begin{equation} \label{simple} \left\{
\begin{array}{ll}
\partial_t \rho + \nabla_\perp (E^\perp \rho) + \partial_\parallel(v_\parallel \rho)= 0 \\
\partial_t v_\parallel + \nabla_\perp (E^\perp v_\parallel) + v_\parallel \partial_\parallel(v_\parallel) = -\partial_\parallel p(t,x_\parallel) \\
E^\perp= \nabla^\perp \Delta_\perp^{-1} \left(\rho - \int \rho dx_\perp\right)\\
\int \rho dx_\perp = 1.\\ \end{array}
\right. \end{equation}
We observe that this system can be interpreted as an infinite system of Euler-type equations, coupled together through the ``parameter'' $x_\perp$. It has some interesting features:
\begin{itemize}
\item This system is highly anisotropic in $x_\perp$ and $x_\parallel$. The $2D$ part of the dynamics of the equation for $\rho$ is nothing but the vorticity formulation of $2D$ incompressible Euler. Physically speaking, $\rho$ should be interpreted here as a density rather than a vorticity. The dynamics in the parallel direction is similar to the dynamics of incompressible Euler written in velocity. We finally observe that the pressure $p$ only depends on the parallel variable $x_\parallel$ and not on $x_\perp$.
\item This does not strictly speaking describe an incompressible fluid, since $(E^\perp, v_\parallel)$ is not divergence free. Somehow, the fluid is hence compressible. But the constraint $ \int \rho dx_\perp = 1$ can be interpreted as a constraint of ``incompressibility in average'' which allows one to recover the pressure law from the other unknowns. Indeed, we easily get, thanks to the equation on $\rho$: \begin{equation}
\partial_{x_\parallel} \int \rho v_\parallel dx_\perp =0. \end{equation} So by plugging this constraint in the equation on $\rho v_\parallel$: \begin{equation*}
\partial_t (\rho v_\parallel) + \nabla_\perp (E^\perp \rho v_\parallel) + \partial_\parallel(\rho v_\parallel^2) = -\partial_\parallel p(t,x_\parallel)\rho, \end{equation*} we get the (one-dimensional!) elliptic equation that allows us to recover $-\partial_{x_\parallel} p$: \begin{equation*}
- \partial^2_\parallel p(t,x_\parallel) = \partial^2_\parallel \int \rho v_\parallel^2 dx_\perp, \end{equation*} from which we get: \begin{equation}
- \partial_\parallel p(t,x_\parallel) = \partial_\parallel \int \rho v_\parallel^2 dx_\perp. \end{equation} \item From the point of view of plasma physics, $E^\perp.\nabla_\perp$ is the so-called electric drift. By analogy with the so-called drift-kinetic equations \cite{Wes}, we can call this system a drift-fluid equation. To the best of our knowledge, this is the very first time such a model is exhibited in the literature.
\end{itemize}
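The pressure-recovery computation above is purely algebraic and can be checked numerically. The following Python sketch is illustrative only (it is not part of the paper): it works on a reduced torus with one perpendicular and one parallel variable, and the data $\rho=1$, $v_\parallel=\cos(2\pi x_\perp)\sin(2\pi x_\parallel)$ are hypothetical, chosen so that $\int\rho\,dx_\perp=1$ and $\partial_\parallel\int\rho v_\parallel\,dx_\perp=0$ hold. It evaluates $-\partial_\parallel p=\partial_\parallel\int\rho v_\parallel^2\,dx_\perp$ by a spectral derivative.

```python
import numpy as np

# Hypothetical smooth data on a reduced torus (x_perp, x_par) in T^2,
# chosen to satisfy the constraints of the limit system:
# int rho dx_perp = 1 and d_par int rho v dx_perp = 0.
n = 64
x_perp = np.arange(n) / n
x_par = np.arange(n) / n
XP, XL = np.meshgrid(x_perp, x_par, indexing="ij")

rho = np.ones((n, n))                          # int rho dx_perp = 1
v = np.cos(2 * np.pi * XP) * np.sin(2 * np.pi * XL)

# q(x_par) = int rho v^2 dx_perp, then -d_par p = d_par q (spectral derivative)
q = np.mean(rho * v ** 2, axis=0)
k = np.fft.fftfreq(n, d=1.0 / n)
minus_dp = np.real(np.fft.ifft(2j * np.pi * k * np.fft.fft(q)))

# analytic value: q = sin^2(2 pi x_par)/2, so d_par q = pi sin(4 pi x_par)
expected = np.pi * np.sin(4 * np.pi * x_par)
print(np.max(np.abs(minus_dp - expected)))     # spectral accuracy (near machine precision)
```

Any other smooth data satisfying the two constraints could be used; only the pressure formula itself is being exercised here.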
From now on, when there is no risk of confusion, we will sometimes write $v$ and $v_\epsilon$ instead of $v_\parallel$ and $v_{\parallel,\epsilon}$.
\subsection{Organization of the paper}
The outline of this paper is as follows. In Section \ref{sec-results}, we will state the main results of this paper that are: the existence of analytic solutions to (\ref{sys}) locally in time but uniformly in $\epsilon$ (Theorem \ref{exi}), the strong convergence to (\ref{simple}) with a complete description of the plasma oscillations (Theorem \ref{con}) and finally the existence and uniqueness of local analytic solutions to (\ref{simple}), in Proposition \ref{exi2}.
Section \ref{sec-proof1} is devoted to the proof of Theorem \ref{exi}. First we recall some elementary features of the analytic spaces we consider (section \ref{sec-func}), then we implement an approximation scheme for our Cauchy-Kovalevskaya type existence theorem. The results are based on a decomposition of the electric field allowing for a good understanding of the so-called plasma waves (section \ref{sec-waves}).
In section \ref{sec-proof2}, we prove Theorem \ref{con}, by using the uniform in $\epsilon$ estimates we have obtained in the previous theorem. The proof relies on another decomposition of the electric field, in order to exhibit the effects of the plasma waves as $\epsilon$ goes to $0$.
Then, in section \ref{sec-sharp}, we discuss the sharpness of our results: \begin{itemize} \item In sections \ref{sec-analytic1} and \ref{sec-analytic2}, we discuss the analyticity assumption and explain why we cannot lower the regularity down to Sobolev spaces. In section \ref{sec-local}, we explain why it is not possible to obtain global in time results. We obtain these results by considering some well-chosen initial data and using results of Brenier on multi-fluid Euler systems \cite{Br3}.
\item Because of the multi-stream instabilities, studying the limit with the relative entropy method is bound to fail. Nevertheless, we found it interesting to apply the method and see exactly where it breaks down: this is the object of section \ref{sec-rela}, where we study a kinetic toy model which retains the main unstable feature of system (\ref{sys}).
\end{itemize}
The two last sections are respectively a short conclusion and an appendix where we explain the scaling and the formal derivation of system (\ref{sys}).
\section{Statement of the results} \label{sec-results} In order to prove both the existence of strong solutions to systems (\ref{sys}) and (\ref{simple}) and also prove the results of convergence, we follow the construction of Grenier \cite{Gre1}, with some modifications adapted to our problem.
In \cite{Gre1}, Grenier studies the quasineutral limit of the family of coupled Euler-Poisson systems:
\begin{equation} \label{coupledeuler} \left\{
\begin{array}{ll}
\partial_t \rho_\Theta^\epsilon + \operatorname{div}( \rho_\Theta^\epsilon v_\Theta^\epsilon)= 0 \\
\partial_t v_\Theta^\epsilon + v_\Theta^\epsilon.\nabla(v_\Theta^\epsilon) = E^\epsilon \\
\operatorname{rot} E^\epsilon = 0 \\
\epsilon \operatorname{div}E^\epsilon = \int_M \rho_\Theta^\epsilon \mu(d\Theta) - 1,\\ \end{array}
\right. \end{equation} with $(M, \Theta, \mu)$ a probability space.
Following the proof of the Cauchy-Kovalevskaya theorem given by Caflisch \cite{Caf}, Grenier proved the local existence of analytic functions (with respect to $x$) uniformly with respect to $\epsilon$ and then, after filtering the fast oscillations due to the force field, showed the strong convergence to the system: \begin{equation} \label{grenierS} \left\{
\begin{array}{ll}
\partial_t \rho_\Theta + \operatorname{div}( \rho_\Theta v_\Theta)= 0 \\
\partial_t v_\Theta + v_\Theta.\nabla(v_\Theta) = E \\
\operatorname{rot} E = 0 \\
\int \rho_\Theta \mu(d\Theta) = 1.\\ \end{array}
\right. \end{equation}
We notice that the class of systems studied by Grenier is close to system (\ref{sys}), if we take $x=x_\parallel$, $\Theta=x_\perp$ and $(M, \mu)=(\mathbb{T}^2, dx_\perp)$, the main difference being that we have to deal with a dynamics in $\Theta=x_\perp$.
Hence, we introduce the same spaces of analytic functions as in \cite{Gre1}, but this time depending also on $\Theta=x_\perp$.
\begin{deft}
Let $\delta>1$. We define $B_\delta$ as the space of real functions $\phi$ on $\mathbb{T}^3$ such that \begin{equation} \label{norm}
\vert \phi \vert_\delta= \sum_{k \in \mathbb{Z}^3} \vert \mathcal{F} \phi (k) \vert \delta^{\vert k \vert} < +\infty, \end{equation} where $\mathcal{F}\phi(k)$ is the k-th Fourier coefficient of $\phi$ defined by: $$\mathcal{F}\phi(k)= \int_{\mathbb{T}^3} \phi(x) e^{-i2\pi k.x} dx.$$ \end{deft}
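To make the definition concrete, here is a small numerical sketch (not from the paper) that evaluates $\vert\phi\vert_\delta$ for a one-dimensional trigonometric polynomial via the discrete Fourier transform. For $\phi(x)=\cos(2\pi x)$ the only nonzero coefficients are $\mathcal{F}\phi(\pm1)=1/2$, so $\vert\phi\vert_\delta=\delta$. A modest grid size keeps the weights $\delta^{\vert k\vert}$ from amplifying floating-point noise in the negligible coefficients.

```python
import numpy as np

def b_delta_norm_1d(phi_grid, delta):
    """Approximate |phi|_delta = sum_k |F phi(k)| delta^|k| on T^1.

    phi_grid samples phi on a uniform grid of [0, 1); the normalized
    FFT approximates F phi(k) = int phi(x) e^{-2 i pi k x} dx.
    """
    n = len(phi_grid)
    coeffs = np.fft.fft(phi_grid) / n
    k = np.fft.fftfreq(n, d=1.0 / n)        # signed integer frequencies
    return float(np.sum(np.abs(coeffs) * delta ** np.abs(k)))

x = np.arange(32) / 32.0
phi = np.cos(2 * np.pi * x)                 # F phi(+1) = F phi(-1) = 1/2
print(b_delta_norm_1d(phi, delta=1.5))      # ~ 1.5
```

The rapid growth of $\delta^{\vert k\vert}$ is exactly what makes finiteness of this norm an analyticity requirement: the Fourier coefficients must decay geometrically.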
The first theorem proves the existence of local analytic solutions of (\ref{sys}) with a life span uniform in $\epsilon$.
\begin{thm} \label{exi}
Let $\delta_0>1$. Let $\rho_\epsilon (0)$ and $v_\epsilon (0)$ be two bounded families of $B_{\delta_0}$ such that $\int \rho_\epsilon(0) dx=1$ and: \begin{equation}
\left\Vert \int \rho_\epsilon(0) dx_\perp - 1\right\Vert_{B_{\delta_0}} \leq C \sqrt{\epsilon}. \end{equation} Then there exists $\eta>0$ such that, for every $\delta_1 \in ]1, \delta_0[$ and any $\epsilon>0$, there exists a unique strong solution $(\rho_\epsilon, v_\epsilon)$ to (\ref{sys}), bounded uniformly in $\mathcal{C}([0,\eta(\delta_0-\delta_1)[,B_{\delta_1})$, with initial conditions $(\rho_\epsilon (0), v_\epsilon (0))$. Moreover, $\sqrt{\epsilon}\partial_\parallel V_\epsilon$ is uniformly bounded in $\mathcal{C}([0,\eta(\delta_0-\delta_1)[,B_{\delta_1})$. \end{thm}
\begin{rque} \begin{itemize}
\item The condition $\left\Vert \int \rho_\epsilon(0) dx_\perp - 1\right\Vert_{B_{\delta_0}} \leq C \sqrt{\epsilon}$ implies that $\sqrt{\epsilon}\partial_\parallel V_\epsilon(0)$ is bounded uniformly in $B_{\delta_0}$ (this is the correct scale in view of the energy conservation).
\item Note that for all $t\geq0, \int \rho_\epsilon dx=1$. Hence the Poisson equation $ -\epsilon \partial_{\parallel}^2 V_\epsilon =\int \rho_\epsilon dx_\perp -1$ can always be solved. \end{itemize} \end{rque}
We can then prove the convergence result:
\begin{thm}
\label{con} Let $(\rho_\epsilon, v_\epsilon)$ be solutions to the system (\ref{sys}) for $0\leq t \leq T$ satisfying for some $s>7/2$ the following estimate: \begin{equation}
(H): \sup_{t\leq T, \epsilon} \left( \Vert \rho_\epsilon \Vert_{H^s_{x_\perp,x_\parallel}}+\Vert v_\epsilon \Vert_{H^s_{x_\perp,x_\parallel}}+ \Vert \sqrt{\epsilon}\partial_{x_\parallel} V_\epsilon \Vert_{H^s_{x_\parallel}} \right) < +\infty. \end{equation} Then we get the following convergences: $$\rho_\epsilon \rightarrow \rho, $$ $$ v_\epsilon - \frac{1}{i}( E_+ e^{it/\sqrt{\epsilon}} - E_- e^{-it/\sqrt{\epsilon}}) \rightarrow v, $$ strongly respectively in $\mathcal{C}([0,T], H^{s'}_{x_\perp, x_\parallel})$ and $\mathcal{C}([0,T], H^{s'-1}_{x_\perp, x_\parallel})$ for all $s'<s$, and $$ \sqrt{\epsilon}\left(-\partial_{x_\parallel} V_\epsilon - ( E_+ e^{it/\sqrt{\epsilon}} + E_- e^{-it/\sqrt{\epsilon}})\right)\rightarrow 0, $$ strongly in $\mathcal{C}([0,T], H^{s'}_{x_\parallel})$ for all $s'<s-1$, and where $(\rho, v)$ is a solution to the asymptotic system (\ref{simple}) on $[0,T]$ with initial conditions: $$\rho(0)=\lim_{\epsilon\rightarrow 0} \rho_\epsilon(0),$$ $$v(0) =\lim_{\epsilon\rightarrow 0}\left( v_\epsilon(0) - \int \rho_\epsilon v_\epsilon dx_\perp (0)\right) $$ and $E_+(t,x_\parallel), E_-(t,x_\parallel)$ are gradient correctors which satisfy the transport equations: $$\partial_t E_\pm + \left(\int \rho v dx_\perp\right) \partial_{x_\parallel} E_\pm =0,$$ with initial data: $$\partial_{x_\parallel} E_+(0) = \lim_{\epsilon\rightarrow 0} \frac{1}{2} \partial_{x_\parallel} \left(-\sqrt{\epsilon}\partial_{x_\parallel}V_\epsilon(0) + i\int \rho_\epsilon v_\epsilon dx_\perp(0) \right), $$ $$\partial_{x_\parallel} E_-(0) = \lim_{\epsilon\rightarrow 0}\frac{1}{2} \partial_{x_\parallel}\left(-\sqrt{\epsilon}\partial_{x_\parallel}V_\epsilon(0) - i\int \rho_\epsilon v_\epsilon dx_\perp(0) \right). $$ \end{thm}
As explained in the introduction, due to the two-stream instabilities, we have to restrict to data with analytic regularity: the Sobolev version of these results is false in general (see \cite{CGG} and the discussion of Section \ref{sec-sharp}).
\begin{rque}
\begin{itemize}
\item It is clear that solutions built in Theorem \ref{exi} satisfy $(H)$.
\item If instead of $(H)$ we make the stronger assumption, for $\delta>1$ \begin{equation}
(H'): \sup_{t\leq T, \epsilon} \left( \Vert \rho_\epsilon \Vert_{B_\delta}+\Vert v_\epsilon \Vert_{B_\delta}+ \Vert \sqrt{\epsilon}\partial_{x_\parallel} V_\epsilon \Vert_{B_\delta} \right) < +\infty, \end{equation} then we get the same strong convergences in $\mathcal{C}([0,T], B_{\delta'})$ for all $\delta'<\delta$.
Using Lemma \ref{lemma} $(ii), (iv)$, the proof under assumption $(H')$ is the same as under assumption $(H)$.
\item The ``well-prepared'' case corresponds to the case when: $$\lim_{\epsilon\rightarrow 0} -\sqrt{\epsilon}\partial^2_{x_\parallel}V_\epsilon(0) =0,$$ $$\lim_{\epsilon\rightarrow 0} \partial_{x_\parallel}\int \rho_\epsilon v_\epsilon dx_\perp(0)=0. $$ Then there is no corrector. \end{itemize}
\end{rque}
With the same method as for Theorem \ref{exi}, we can also prove existence and uniqueness of analytic solutions to system (\ref{simple}). \begin{prop} \label{exi2}
Let $\delta_0>\delta_1>1$. For initial data $\rho(0), v(0) \in B_{\delta_0}$ satisfying
\begin{equation} \rho(0) \geq 0, \end{equation} \begin{equation} \int \rho(0) dx_\perp=1 \end{equation} and \begin{equation} \partial_\parallel \int \rho(0)v(0) dx_\perp=0, \end{equation} there exists $\eta>0$ depending on $\delta_0$ and on the initial conditions only such that there is a unique strong solution $(\rho, v_\parallel, p)$ to the system (\ref{simple}) with $\rho,v \in \mathcal{C}([0,\eta(\delta_0-\delta_1)[,B_{\delta})$ for all $\delta<\delta_1$. \end{prop}
\section{Proof of Theorem \ref{exi}} \label{sec-proof1} \subsection{Functional analysis on $B_\delta$ spaces} \label{sec-func} First we define the time dependent analytic spaces we will work with.
Let $\beta$ be an arbitrary constant in $]0,1[$ (take for instance $\beta=1/2$ to fix ideas) and $\eta>0$ a parameter to be chosen later.
\begin{deft}Let $\delta_0>1$. We define the space $B_{\delta_0}^\eta=\{u \in \mathcal{C}^0 ([0, \eta(\delta_0-1)], B_{\delta_0-t/\eta}) \}$, endowed with the norm
$$\Vert u \Vert_{\delta_0} = \sup_{\left\{
\begin{array}{ll} 1<\delta\leq \delta_0 \\ 0\leq t\leq \eta(\delta_0-\delta) \end{array}\right.} \left( \vert u(t)\vert_\delta + (\delta_0-\delta- \frac{t}{\eta})^\beta\vert \nabla u(t)\vert_\delta\right), $$ where the norm $\vert u\vert_\delta$ was defined in (\ref{norm}): \[
\vert u \vert_\delta= \sum_{k \in \mathbb{Z}^3} \vert \mathcal{F} u (k) \vert \delta^{\vert k \vert}. \] \end{deft}
We now gather from \cite{Gre1} a few elementary properties of these spaces, that we recall for the reader's convenience. \begin{lem} \label{lemma}
For all $\delta>1$:
\begin{enumerate}[(i)]
\item The spaces $B_\delta$ and $B_\delta^\eta$ are Banach algebras. \item If $\delta'<\delta$ then $B_{\delta} \subset B_{\delta'}$, the embedding being continuous and compact. \item For all $s \in \mathbb{R}$, $B_\delta \subset H^s$, the embedding being continuous and compact. \item For all $1<\delta'<\delta$, if $\phi \in B_\delta$, $$\vert\nabla \phi\vert_{\delta'} \leq \frac{\delta}{\delta-\delta'} \vert\phi\vert_\delta.$$
\item If $u$ is in $B_{\delta_0} ^\eta$ and if $\delta+t/\eta<\delta_0$ then $$\vert \partial^2_{x_i, x_j} u(t)\vert_\delta \leq 2^{1+\beta}\Vert u\Vert_{\delta_0} \delta_0 (\delta_0-\delta- \frac{t}{\eta})^{-\beta-1}.$$
\end{enumerate} \end{lem}
For further properties of these spaces we refer to the recent work of Mouhot and Villani \cite{MV}, in which similar analytic spaces (and more sophisticated versions) are considered.
\begin{proof}
We give an elementary proof of $(ii)$, which is not given in \cite{Gre1}. The embedding is obvious. For $N\in \mathbb{N}$ we consider the map $i_N$ defined by: $$i_N(\phi)= \sum_{\vert k \vert \leq N}\mathcal{F}\phi(k)e^{i2\pi k.x}. $$ We then compute: $$ \vert (Id - i_N) \phi \vert_{\delta'} = \sum_{\vert k \vert > N}\vert \mathcal{F}\phi(k)\vert \delta'^{\vert k \vert} \leq \left(\frac{\delta'}{\delta}\right)^N \sum_{\vert k \vert > N}\vert \mathcal{F}\phi(k)\vert \delta^{\vert k \vert} \leq \left(\frac{\delta'}{\delta}\right)^N \vert \phi \vert_\delta.$$
So the embedding $B_{\delta} \subset B_{\delta'}$ is compact, being the norm limit of the finite-rank operators $i_N$.
For $(v)$, take $\delta'=\delta+\frac{\delta_0-\delta-t/\eta}{2}$ and apply $(iv)$. We refer to \cite{Gre1} for the other proofs. \end{proof}
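The two estimates just used can be tested numerically on the Fourier side. The sketch below is illustrative only: it works in one dimension with hypothetical geometrically decaying coefficients $\vert\mathcal{F}\phi(k)\vert=r^{\vert k\vert}$, and it absorbs the derivative's $2\pi$ into the convention $\mathcal{F}(\phi')(k)=ik\,\mathcal{F}\phi(k)$, so constants may differ from the paper's. It checks the tail bound $\vert(\mathrm{Id}-i_N)\phi\vert_{\delta'}\leq(\delta'/\delta)^N\vert\phi\vert_\delta$ from the proof of $(ii)$ and the Cauchy-type estimate of $(iv)$.

```python
import numpy as np

delta, delta_p = 2.0, 1.5                  # 1 < delta' < delta
ks = np.arange(-60, 61)
c = 0.3 ** np.abs(ks)                      # hypothetical |F phi(k)|, analytic decay

def norm(coeffs, freqs, d):
    """Weighted sum: |phi|_d = sum_k |F phi(k)| d^|k|."""
    return float(np.sum(coeffs * d ** np.abs(freqs)))

full = norm(c, ks, delta)

# (ii): tail estimate |(Id - i_N) phi|_{delta'} <= (delta'/delta)^N |phi|_delta
for N in (5, 10, 20):
    tail = np.abs(ks) > N
    assert norm(c[tail], ks[tail], delta_p) <= (delta_p / delta) ** N * full

# (iv): |phi'|_{delta'} <= delta/(delta - delta') |phi|_delta
# (the derivative acts as multiplication by |k| in this convention)
assert norm(np.abs(ks) * c, ks, delta_p) <= delta / (delta - delta_p) * full

print("estimates verified")
```

The mechanism behind $(iv)$ is visible here: $\vert k\vert(\delta'/\delta)^{\vert k\vert}$ is bounded by $\sum_{j\geq 0}(\delta'/\delta)^j=\delta/(\delta-\delta')$, which is exactly the loss incurred when trading one derivative for a slightly smaller analyticity radius.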
We will also need the following elementary observation:
\begin{rque}
\label{rque} Let $\phi \in B_\delta$. Then: $$\left\vert \int \phi dx_\perp \right\vert_\delta \leq \vert \phi \vert_\delta.$$ \end{rque}
\begin{proof}We simply compute:
$$\left\vert \int \phi dx_\perp \right\vert_\delta= \sum_{k_\perp=0,\, k_\parallel \in \mathbb{Z}} \vert \mathcal{F}(\phi)(k_\perp,k_\parallel) \vert \delta^{\vert k\vert} \leq \sum_ {k \in \mathbb{Z}^3} \vert \mathcal{F}(\phi)(k)\vert \delta^{\vert k\vert}=\vert \phi \vert_\delta.$$ \end{proof}
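The mechanism of the remark, that averaging in $x_\perp$ keeps only the $k_\perp=0$ modes so the weighted sum can only shrink, can be seen numerically. A small illustrative sketch on a two-dimensional torus (one perpendicular and one parallel variable; the data are hypothetical, and $\vert k\vert$ is taken in the $\ell^1$ sense, though any monotone choice works here):

```python
import numpy as np

n = 16
x = np.arange(n) / n
XP, XL = np.meshgrid(x, x, indexing="ij")
# hypothetical smooth phi(x_perp, x_par)
phi = np.exp(np.cos(2 * np.pi * XP)) * np.sin(2 * np.pi * XL)

def b_delta_norm_2d(f, delta):
    """Weighted Fourier-coefficient sum with weight delta^(|k1|+|k2|)."""
    coeffs = np.abs(np.fft.fft2(f)) / f.size
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))
    K = k[:, None] + k[None, :]
    return float(np.sum(coeffs * delta ** K))

avg = np.mean(phi, axis=0)                 # int phi dx_perp, a function of x_par
avg2d = np.broadcast_to(avg, (n, n))       # constant in x_perp: only k_perp = 0 modes
print(b_delta_norm_2d(avg2d, 1.2) <= b_delta_norm_2d(phi, 1.2))  # True
```

Averaging and the discrete Fourier transform commute, so the left-hand side is literally the sub-sum over $k_\perp=0$ of the right-hand side.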
\subsection{Description of plasma oscillations} \label{sec-waves}
To simplify notations, we set $E_{\epsilon,\parallel} = -\partial_{x_\parallel} V_\epsilon(t,x_\parallel)$ (which has nothing to do with $E_\epsilon^\perp$). In this paragraph, we want to understand the oscillatory behaviour of $E_{\epsilon,\parallel}$. We will see that the dynamics in $x_\perp$ does not interfere too much with the equation on $E_{\epsilon,\parallel}$, so that we get almost the same description of the oscillations as in Grenier's paper \cite{Gre1}.
First we differentiate twice with respect to time the Poisson equation satisfied by $V_\epsilon$:
\begin{equation} \label{eq-deriv}
\epsilon \partial^2_t \partial_{x_\parallel} E_{\epsilon,\parallel} = \partial^2_t \int \rho_\epsilon dx_\perp. \end{equation}
We use the equation on $\rho_\epsilon$ to compute the right hand side of (\ref{eq-deriv}).
\begin{equation}
\partial_t \int\rho_\epsilon dx_\perp = \underbrace{- \int \nabla_\perp( E^\perp_\epsilon \rho_\epsilon)dx_\perp}_{=0} - \partial_{x_\parallel} \int \rho_\epsilon v_\epsilon dx_\perp. \end{equation} Then we integrate with respect to $x_\perp$ the equation satisfied by $\rho_\epsilon v_\epsilon$, that is:
\[ \partial_t (\rho_\epsilon v_\epsilon) + \nabla_\perp(E_\epsilon^\perp \rho_\epsilon v_\epsilon) + \partial_{x_\parallel}(v_\epsilon^2 \rho_\epsilon) = -\rho_\epsilon(\epsilon\partial_{x_\parallel} \phi_\epsilon(t,x) +\partial_{x_\parallel} V_\epsilon(t,x_\parallel)) \]
and we get:
\begin{equation} -\partial_t \int \rho_\epsilon v_\epsilon dx_\perp = \partial_{x_\parallel} \int \rho_\epsilon v_\epsilon^2 dx_\perp - E_{\epsilon,\parallel}\int\rho_\epsilon dx_\perp + \int \rho_\epsilon (\epsilon \partial_{x_\parallel} \phi_\epsilon) dx_\perp, \end{equation} so that: $$\partial_t^2 \int \rho_\epsilon dx_\perp = \partial^2_{x_\parallel} \int \rho_\epsilon v_\epsilon^2 dx_\perp - \partial_{x_\parallel}(E_{\epsilon,\parallel} \int \rho_\epsilon dx_\perp)+ \partial_{x_\parallel}\int \rho_\epsilon (\epsilon \partial_{x_\parallel} \phi_\epsilon) dx_\perp.$$ Thus we obtain:
\begin{equation} \label{waves}
\epsilon \partial_t^2 \partial_{x_\parallel}E_{\epsilon,\parallel} + \partial_{x_\parallel}E_{\epsilon,\parallel} = \partial^2_{x_\parallel} \int \rho_\epsilon v_\epsilon^2 dx_\perp - \epsilon \partial_{x_\parallel}[E_{\epsilon,\parallel} \partial_{x_\parallel}E_{\epsilon,\parallel} ]+\partial_{x_\parallel}\int \rho_\epsilon (\epsilon \partial_{x_\parallel} \phi_\epsilon) dx_\perp. \end{equation}
Equation (\ref{waves}) is the wave equation that describes the essential oscillations. At least formally, this equation indicates that there are time oscillations with frequency $\frac{1}{\sqrt{\epsilon}}$ and magnitude $\frac{1}{\sqrt{\epsilon}}$ created by the right-hand side of the equation which acts like a source. We observe here that the source is expected to be of order $\mathcal{O}(1)$: indeed, by assumption on the data at $t=0$, we can check that this quantity is bounded in a $B_\delta$ space.
In particular if we want to prove strong convergence results we will have to introduce non-trivial correctors in order to get rid of these oscillations. We notice also that (\ref{waves}) is very similar to the wave equation obtained in \cite{Gre1} (the only difference is a new term in the source), so that most of the calculations and estimates on $E_{\epsilon,\parallel}$ we will need are done in \cite{Gre1}.
\subsection{A priori estimates}
We have just observed that $E_{\epsilon,\parallel}$ roughly behaves like $\frac{1}{\sqrt{\epsilon}} e^{\pm it/\sqrt{\epsilon}}$. Hence if we consider the average in time \begin{equation}
G_\epsilon = \int_0^t E_{\epsilon,\parallel}(s,x_\parallel) ds, \end{equation} we expect that $G_\epsilon$ is bounded uniformly with respect to $\epsilon$ in some functional space. We also introduce the translated current (which corresponds to some filtering of the time oscillations created by the electric field): \begin{equation}
w_\epsilon =v_\epsilon - G_\epsilon, \end{equation} so that system (\ref{sys}) now writes: \begin{equation}
\left\{ \begin{array}{ll}
\partial_t \rho_\epsilon + \nabla_\perp (E^\perp_\epsilon \rho_\epsilon) + \partial_\parallel((w_{\epsilon}+G_\epsilon) \rho_\epsilon)= 0 \\
\partial_t w_{\epsilon} + \nabla_\perp (E^\perp_\epsilon (w_{\epsilon}+G_\epsilon)) + (w_{\epsilon} +G_\epsilon)\partial_\parallel(w_{\epsilon}+G_\epsilon) = -\epsilon\partial_\parallel \phi_\epsilon(t,x). \end{array} \right. \end{equation}
The goal is now to prove some a priori estimates for $G_\epsilon, \rho_\epsilon$ and $w_\epsilon$. We are also able to get similar estimates on $E_\epsilon^\perp$ and $\epsilon \partial_{x_\parallel} \phi_\epsilon$, thanks to the Poisson equation satisfied by $\phi_\epsilon$.
\subsubsection{Estimate on $G_\epsilon$ and $\sqrt{\epsilon} E_{\epsilon,\parallel}$}
We use Duhamel's formula for the wave equation (\ref{waves}) to get the following identity: \begin{equation}
\mathcal{F}_\parallel G_\epsilon(t,k_\parallel) = \int_0^t \left(\frac{1}{ik_\parallel} \left[1- \cos(\frac{t-s}{\sqrt{\epsilon}})\right] \mathcal{F}_\parallel g_\epsilon(s,k_\parallel)\right) ds + \mathcal{F}_\parallel G_\epsilon^0, \end{equation} denoting by $\mathcal{F}_\parallel$ the Fourier transform with respect to the parallel variable only and $k_\parallel$ the Fourier variable and where: $$g_\epsilon=\partial^2_{x_\parallel} \int \rho_\epsilon v_\epsilon^2 dx_\perp - \epsilon \partial_{x_\parallel}[E_{\epsilon,\parallel} \partial_{x_\parallel}E_{\epsilon,\parallel} ]+\partial_{x_\parallel}\int \rho_\epsilon (\epsilon \partial_{x_\parallel} \phi_\epsilon) dx_\perp,$$ \begin{equation}
\label{G0} G_\epsilon^0 = \sqrt{\epsilon}E_{\epsilon,\parallel}(0,x_\parallel) \sin \left(\frac{t}{\sqrt{\epsilon}}\right)-\epsilon \partial_t E_{\epsilon,\parallel}(0,x_\parallel) \left(\cos\left(\frac{t}{\sqrt{\epsilon}}\right)-1\right). \end{equation}
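This Duhamel formula can be sanity-checked on the underlying scalar ODE. The Python sketch below is illustrative only: it takes an arbitrary smooth $\mathcal{O}(1)$ source and works directly on a scalar unknown $u$ (standing for one Fourier mode of $\partial_{x_\parallel}E_{\epsilon,\parallel}$, ignoring the $\frac{1}{ik_\parallel}$ factor). It integrates $\epsilon u'' + u = g$ together with $G(t)=\int_0^t u\,ds$ and compares $G(T)$ with the closed form $\int_0^T[1-\cos(\frac{T-s}{\sqrt{\epsilon}})]g(s)\,ds + \sqrt{\epsilon}\,u(0)\sin(T/\sqrt{\epsilon}) - \epsilon\,u'(0)(\cos(T/\sqrt{\epsilon})-1)$.

```python
import numpy as np

eps = 0.01
w = 1.0 / np.sqrt(eps)                     # plasma-wave frequency 1/sqrt(eps)
g = lambda t: np.sin(t)                    # hypothetical O(1) smooth source
u0, u1 = 0.3, -0.2                         # initial u(0), u'(0)

# RK4 on (u, u', G) with G' = u, for eps u'' + u = g
def rhs(t, y):
    return np.array([y[1], (g(t) - y[0]) / eps, y[0]])

dt, T = 1e-4, 1.0
y, t = np.array([u0, u1, 0.0]), 0.0
for _ in range(int(round(T / dt))):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y, t = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + dt

# closed-form prediction for G(T): Duhamel solution integrated in time
s = (np.arange(200000) + 0.5) * (T / 200000)           # midpoint quadrature
duhamel = np.mean((1.0 - np.cos((T - s) * w)) * g(s)) * T
G_pred = duhamel + np.sqrt(eps) * u0 * np.sin(T * w) - eps * u1 * (np.cos(T * w) - 1.0)
print(abs(y[2] - G_pred))                  # close to zero
```

Note how the homogeneous contributions carry the factors $\sqrt{\epsilon}$ and $\epsilon$: this is the quantitative version of the claim that $u$ oscillates with frequency $1/\sqrt{\epsilon}$ while its time average $G$ stays bounded.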
We now estimate $\Vert G_\epsilon \Vert_{\delta_0}$.
\begin{eqnarray*}
\vert G_\epsilon \vert_\delta &\leq& \int_0^t \left\vert\mathcal{F}_\parallel^{-1}\left(\frac{1}{ik_\parallel} [1- \cos(\frac{t-s}{\sqrt{\epsilon}})] \mathcal{F}_\parallel g_\epsilon(s,k_\parallel)\right)\right\vert_\delta ds + \vert G_\epsilon^0 \vert_\delta. \end{eqnarray*}
$$\frac{1}{ik_\parallel}\mathcal{F}_\parallel g_\epsilon=\mathcal{F}_\parallel\left(\partial_{x_\parallel} \int \rho_\epsilon v_\epsilon^2 dx_\perp\right) + \epsilon \mathcal{F}_\parallel\left(E_{\epsilon,\parallel}\partial_{x_\parallel}E_{\epsilon,\parallel}\right) - \mathcal{F}_\parallel\left(\int \rho_\epsilon (\epsilon \partial_{x_\parallel} \phi_\epsilon) dx_\perp\right).$$
Thanks to Remark \ref{rque} and Lemma \ref{lemma}, $(i)$:
\begin{equation}
\left\vert \int \partial_{x_\parallel}(\rho_\epsilon v_\epsilon^2 )dx_\perp \right\vert_\delta \leq \left\vert \partial_{x_\parallel}(\rho_\epsilon v_\epsilon^2) \right\vert_\delta \leq (\delta_0-\delta- \frac{s}{\eta})^{-\beta}\Vert \rho_\epsilon \Vert_{\delta _0}\Vert v_\epsilon \Vert_{\delta _0}^2. \end{equation}
Similarly, we prove: $$\epsilon \left\vert E_{\epsilon,\parallel}\partial_{x_\parallel}E_{\epsilon,\parallel} \right\vert_\delta \leq \frac 1 2 (\delta_0-\delta- \frac{s}{\eta})^{-\beta} \Vert \sqrt{\epsilon} E_{\epsilon,\parallel}\Vert_{\delta _0}^2,$$
$$ \left\vert \int \partial_{x_\parallel}\left(\rho_\epsilon (\epsilon \partial_{x_\parallel} \phi_\epsilon)\right) dx_\perp \right\vert_\delta \leq (\delta_0-\delta- \frac{s}{\eta})^{-\beta}\Vert \rho_\epsilon \Vert_{\delta _0}\Vert \epsilon \partial_{x_\parallel} \phi_\epsilon \Vert_{\delta _0}. $$
Thus, we have: \begin{eqnarray*}
\vert G_\epsilon \vert_\delta \leq C \int_0^t (\delta_0-\delta- \frac{s}{\eta})^{-\beta}(\Vert \rho_\epsilon \Vert_{\delta _0} \Vert v_\epsilon \Vert_{\delta _0}^2 + \Vert \sqrt{\epsilon} E_{\epsilon,\parallel}\Vert_{\delta _0}^2+\Vert \rho_\epsilon \Vert_{\delta _0}\Vert \epsilon \partial_{x_\parallel} \phi_\epsilon \Vert_{\delta _0}) ds + \vert G_\epsilon^0 \vert_\delta. \end{eqnarray*}
Likewise, one can show (this time using Lemma \ref{lemma}, $(v)$): \begin{eqnarray*}
\vert \partial_{x_\parallel} G_\epsilon \vert_\delta \leq C \int_0^t (\delta_0-\delta- \frac{s}{\eta})^{-\beta-1}(\Vert \rho_\epsilon \Vert_{\delta _0} \Vert v_\epsilon \Vert_{\delta _0}^2 + \Vert \sqrt{\epsilon} E_{\epsilon,\parallel}\Vert_{\delta _0}^2+\Vert \rho_\epsilon \Vert_{\delta _0}\Vert \epsilon \partial_{x_\parallel} \phi_\epsilon \Vert_{\delta _0}) ds + \vert \partial_{x_\parallel} G_\epsilon^0 \vert_\delta. \end{eqnarray*} Hence, using the elementary estimates $$\int_0^t \frac{ds}{(\delta_0-\delta - \frac{s}{\eta})^\beta}\leq \eta \frac{2}{1-\beta}\delta_0^{1-\beta} ,$$ $$\int_0^t \frac{ds}{(\delta_0-\delta - \frac{s}{\eta})^{\beta+1}} \leq \frac{2\eta}{\beta}(\delta_0-\delta - \frac{t}{\eta})^{-\beta}, $$ and $\Vert v_\epsilon \Vert_{\delta_0} \leq \Vert w_\epsilon \Vert_{\delta_0} +\Vert G_\epsilon \Vert_{\delta_0}$, we get: \begin{equation}
\Vert G_\epsilon \Vert_{\delta_0} \leq \eta C(\delta_0,\beta) \left((\Vert w_\epsilon \Vert_{\delta_0}+ \Vert G_\epsilon \Vert_{\delta_0})^2\Vert \rho_\epsilon \Vert_{\delta_0}+ \Vert \sqrt{\epsilon}E_{\epsilon,\parallel} \Vert_{\delta_0}^2+\Vert \rho_\epsilon \Vert_{\delta _0}\Vert \epsilon \partial_{x_\parallel} \phi_\epsilon \Vert_{\delta _0}\right) + \Vert G_\epsilon^0 \Vert_{\delta_0}. \end{equation} If we compare two solutions $( w^{(1)}, \rho^{(1)})$ and $(w^{(2)}, \rho^{(2)})$ with the same initial data, we obtain: \begin{eqnarray} \label{eq-comp1}
\Vert G^{(1)}_\epsilon - G_\epsilon^{(2)} \Vert_{\delta_0} &\leq& \eta C\Big( (\Vert w_\epsilon^{(1)} - w^{(2)}_\epsilon \Vert_{\delta_0} + \Vert G_\epsilon^{(1)}- G^{(2)}_\epsilon \Vert_{\delta_0}) \nonumber \\ & &\times (\Vert w_\epsilon^{(1)} \Vert_{\delta_0} + \Vert w^{(2)}_\epsilon \Vert_{\delta_0} + \Vert G_\epsilon^{(1)} \Vert_{\delta_0} + \Vert G^{(2)}_\epsilon \Vert_{\delta_0}) ( \Vert\rho^{(1)}_\epsilon \Vert_{\delta_0} + \Vert \rho^{(2)}_\epsilon \Vert_{\delta_0}) \nonumber\\ &+& (\Vert w_\epsilon^{(1)} \Vert_{\delta_0}^2 + \Vert w^{(2)}_\epsilon \Vert_{\delta_0}^2 + \Vert G_\epsilon^{(1)} \Vert_{\delta_0}^2 + \Vert G^{(2)}_\epsilon \Vert_{\delta_0}^2)(\Vert \rho_\epsilon^{(1)} - \rho^{(2)}_\epsilon \Vert_{\delta_0}) \nonumber \\ &+& \Vert \rho_\epsilon^{(1)}-\rho_\epsilon^{(2)} \Vert_{\delta_0}(\Vert\epsilon \partial_{x_\parallel}\phi_\epsilon^{(1)} \Vert_{\delta_0}+ \Vert\epsilon \partial_{x_\parallel}\phi_\epsilon^{(2)} \Vert_{\delta_0})\nonumber \\ &+& (\Vert \rho_\epsilon^{(1)} \Vert_{\delta_0}+\Vert \rho_\epsilon^{(2)} \Vert_{\delta_0})\Vert \epsilon \partial_{x_\parallel}\phi_\epsilon^{(1)}-\epsilon \partial_{x_\parallel}\phi_\epsilon^{(2)}\Vert_{\delta_0} \nonumber \\ &+& \Vert \sqrt{\epsilon} E^{(1)}_{\epsilon,\parallel} - \sqrt{\epsilon}E^{(2)}_{\epsilon,\parallel} \Vert_{\delta_0}(\Vert \sqrt{\epsilon} E^{(1)}_{\epsilon,\parallel}\Vert_{\delta_0} + \Vert \sqrt{\epsilon} E^{(2)}_{\epsilon,\parallel}\Vert_{\delta_0})\Big) . \end{eqnarray}
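For the reader's convenience, the elementary integral bounds used above (stated for $0<\beta<1$, which the bounds require) follow from the change of variables $u=\delta_0-\delta-\frac{s}{\eta}$:
\begin{equation*}
\int_0^t \frac{ds}{(\delta_0-\delta - \frac{s}{\eta})^{\beta}} = \eta \int_{\delta_0-\delta-\frac{t}{\eta}}^{\delta_0-\delta} \frac{du}{u^{\beta}} \leq \frac{\eta}{1-\beta}(\delta_0-\delta)^{1-\beta} \leq \frac{\eta}{1-\beta}\,\delta_0^{1-\beta},
\end{equation*}
and similarly
\begin{equation*}
\int_0^t \frac{ds}{(\delta_0-\delta - \frac{s}{\eta})^{\beta+1}} = \eta \int_{\delta_0-\delta-\frac{t}{\eta}}^{\delta_0-\delta} \frac{du}{u^{\beta+1}} \leq \frac{\eta}{\beta}\left(\delta_0-\delta-\frac{t}{\eta}\right)^{-\beta}.
\end{equation*}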
Likewise we get the same estimates on $\Vert\sqrt{\epsilon}E_{\epsilon,\parallel}\Vert_{\delta_0}$ since we have the formula: \begin{equation}
\mathcal{F}_\parallel (\sqrt{\epsilon} E_{\epsilon,\parallel})(t,k_\parallel) = \int_0^t \left(\frac{1}{ik_\parallel} [\sin(\frac{t-s}{\sqrt{\epsilon}})] \mathcal{F}_\parallel g_\epsilon(s,k_\parallel)\right) ds + \mathcal{F}_\parallel (\sqrt{\epsilon} E_{\epsilon,\parallel}^0), \end{equation} with \begin{equation}
\label{E0} E_{\epsilon,\parallel}^0 = E_{\epsilon,\parallel}(0,x) \cos\left(\frac{t}{\sqrt{\epsilon}}\right)+\sqrt{\epsilon} \partial_t E_{\epsilon,\parallel}(0,x) \sin\left(\frac{t}{\sqrt{\epsilon}}\right). \end{equation}
\subsubsection{Estimate on $E_\epsilon^\perp$ and $\epsilon \partial_{x_\parallel} \phi_\epsilon$}
We now use the scaled Poisson equation satisfied by $\phi_\epsilon$ to get some a priori estimates.
The principle here is to look at the symbols of the operators involved in the Poisson equation. Accordingly, we compute in Fourier variables: \begin{equation} \epsilon^2k_\parallel^2 \mathcal{F}\phi_\epsilon + \vert k_\perp\vert^2 \mathcal{F}\phi_\epsilon= \mathcal{F}\left(\rho_\epsilon - \int \rho_\epsilon dx_\perp\right). \end{equation} This yields:
$$\mathcal{F}\phi_\epsilon=\frac{\mathcal{F}(\rho_\epsilon - \int \rho_\epsilon dx_\perp)}{\epsilon^2 k_\parallel^2 + \vert k_\perp\vert^2}.$$
Since $\int (\rho_\epsilon - \int \rho_\epsilon dx_\perp) dx_\perp =0$, we have for all $k_\parallel$: $$\mathcal{F}\left(\rho_\epsilon - \int \rho_\epsilon dx_\perp\right)(0,k_\parallel)=0.$$
Hence, for all $k_\perp, k_\parallel$ (the case $k_\perp=0$ being trivial): \begin{equation}
\vert \mathcal{F}\phi_\epsilon \vert \leq \frac{\vert \mathcal{F}(\rho_\epsilon - \int \rho_\epsilon dx_\perp)\vert}{\vert k_\perp\vert^2}. \end{equation} In particular we easily get, using the relation $E_\epsilon^\perp = -\nabla^\perp \phi_\epsilon$:
\begin{equation*}
\vert \mathcal{F}E^\perp_\epsilon \vert \leq \frac{\vert \mathcal{F}(\rho_\epsilon - \int \rho_\epsilon dx_\perp)\vert}{\vert k_\perp\vert} \leq \left\vert \mathcal{F}\left(\rho_\epsilon - \int \rho_\epsilon dx_\perp\right)\right\vert. \end{equation*}
Hence: \begin{equation} \label{elec}
\Vert E^\perp_\epsilon \Vert_{\delta_0} \leq C \Vert\rho_\epsilon \Vert_{\delta_0}. \end{equation}
Likewise, since $a b \leq \frac{1}{2}(a^2 + b^2)$ and $\vert k_\perp\vert \geq 1$:
\begin{equation*}
\vert \mathcal{F}(\epsilon \partial_{x_\parallel} \phi_\epsilon )\vert \leq \frac{\epsilon \vert k_\parallel \vert \vert \mathcal{F}(\rho_\epsilon - \int \rho_\epsilon dx_\perp)\vert}{\epsilon^2 k_\parallel^2 + \vert k_\perp\vert^2} \leq \frac{1}{2} \vert \mathcal{F}(\rho_\epsilon - \int \rho_\epsilon dx_\perp)\vert, \end{equation*}
and consequently: \begin{equation}
\Vert \epsilon \partial_{x_\parallel} \phi_\epsilon \Vert_{\delta_0} \leq C \Vert\rho_\epsilon \Vert_{\delta_0}. \end{equation}
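For completeness, the symbol bound used here is a direct consequence of Young's inequality: writing $a=\epsilon\vert k_\parallel\vert$ and $b=\vert k_\perp\vert$,
\begin{equation*}
\frac{\epsilon \vert k_\parallel\vert}{\epsilon^2 k_\parallel^2 + \vert k_\perp\vert^2} = \frac{ab}{a^2+b^2}\cdot\frac{1}{\vert k_\perp\vert} \leq \frac{1}{2\vert k_\perp\vert} \leq \frac 1 2,
\end{equation*}
the last step using $\vert k_\perp\vert \geq 1$ for the nonzero perpendicular modes; the modes with $k_\perp=0$ do not contribute, since $\mathcal{F}(\rho_\epsilon - \int \rho_\epsilon dx_\perp)(0,k_\parallel)=0$.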
And if we compare two solutions with the same initial data: \begin{equation}
\Vert E^{\perp,(1)}_\epsilon - E^{\perp,(2)}_\epsilon \Vert_{\delta_0} + \Vert \epsilon \partial_{x_\parallel} \phi^{(1)}_\epsilon - \epsilon\partial_{x_\parallel} \phi^{(2)}_\epsilon \Vert_{\delta_0} \leq C \Vert \rho^{(1)}_\epsilon - \rho^{(2)}_\epsilon\Vert_{\delta_0}. \end{equation}
\subsubsection{Estimate on $\rho_\epsilon$ and $w_\epsilon$} We now use the conservation laws satisfied by $\rho_\epsilon$ and $w_\epsilon$ to get the appropriate estimates.
The density $\rho_\epsilon$ satisfies the equation: $$ \partial_t \rho_\epsilon + \nabla_\perp (E^\perp_\epsilon \rho_\epsilon) + \partial_\parallel((w_{\epsilon}+G_\epsilon) \rho_\epsilon)= 0.$$ With the same kind of computations as before and thanks to estimate (\ref{elec}) we get: \begin{eqnarray*}
\vert \rho_\epsilon \vert_\delta &\leq& \int_0^t \vert \partial_t \rho_\epsilon \vert_\delta ds + \vert \rho_\epsilon(0) \vert_\delta \\ &\leq& \Vert \rho_\epsilon(0) \Vert_{\delta_0} + \int_0^t (\delta_0-\delta-\frac{s}{\eta})^{-\beta} \Vert \rho_\epsilon \Vert_{\delta_0}(\Vert \rho_\epsilon \Vert_{\delta_0}+\Vert w_\epsilon \Vert_{\delta_0}+\Vert G_\epsilon \Vert_{\delta_0})ds . \end{eqnarray*}
Similarly we estimate $\vert \partial_{x_i} \rho_\epsilon\vert_{\delta}$ by differentiating with respect to $x_i$ the equation satisfied by $\rho_\epsilon$. Finally we get: \begin{equation} \label{reg-rho}
\Vert \rho_\epsilon \Vert_{\delta_0} \leq \Vert \rho_\epsilon(0) \Vert_{\delta_0} + \eta C \Vert \rho_\epsilon \Vert_{\delta_0}(\Vert \rho_\epsilon \Vert_{\delta_0}+\Vert w_\epsilon \Vert_{\delta_0}+\Vert G_\epsilon \Vert_{\delta_0}). \end{equation}
If we compare two solutions with the same initial conditions, we get likewise: \begin{eqnarray} \label{eq-comp2}
\Vert \rho^{(1)}_\epsilon -\rho^{(2)}_\epsilon \Vert_{\delta_0} &\leq& \eta C \Big( (\Vert \rho^{(1)}_\epsilon \Vert_{\delta_0} + \Vert \rho^{(2)}_\epsilon \Vert_{\delta_0})(\Vert w^{(1)}_\epsilon - w^{(2)}_\epsilon \Vert_{\delta_0}+\Vert G^{(1)}_\epsilon - G^{(2)}_\epsilon \Vert_{\delta_0}) \nonumber \\ &+& (\Vert \rho^{(1)}_\epsilon \Vert_{\delta_0} + \Vert \rho^{(2)}_\epsilon \Vert_{\delta_0} +\Vert w^{(1)}_\epsilon \Vert_{\delta_0}+\Vert w^{(2)}_\epsilon \Vert_{\delta_0}+\Vert G^{(1)}_\epsilon \Vert_{\delta_0}+\Vert G^{(2)}_\epsilon \Vert_{\delta_0})\nonumber \\ &\times& (\Vert \rho^{(1)}_\epsilon -\rho^{(2)}_\epsilon \Vert_{\delta_0})\Big). \end{eqnarray}
In the same fashion, we estimate the $\delta_0$ norm of $w_\epsilon$: \begin{equation}
\Vert w_\epsilon \Vert_{\delta_0} \leq \eta C\left((\Vert w_\epsilon \Vert_{\delta_0}+1)\Vert \rho_\epsilon \Vert_{\delta_0} + (\Vert w_\epsilon \Vert_{\delta_0}+\Vert G_\epsilon \Vert_{\delta_0})^2+\Vert \epsilon \partial_\parallel \phi_\epsilon\Vert_{\delta_0} \right), \end{equation} and if we compare two solutions with the same initial data: \begin{eqnarray} \label{eq-comp3}
\Vert w^{(1)}_\epsilon -w^{(2)}_\epsilon \Vert_{\delta_0} &\leq& \eta C \Big( (\Vert \rho^{(1)}_\epsilon \Vert_{\delta_0}+ \Vert \rho^{(2)}_\epsilon \Vert_{\delta_0})(\Vert w^{(1)}_\epsilon \Vert_{\delta_0} + \Vert w^{(2)}_\epsilon \Vert_{\delta_0}) \nonumber \\ &+& (\Vert w^{(1)}_\epsilon - w^{(2)}_\epsilon \Vert_{\delta_0}+ \Vert \rho^{(1)}_\epsilon - \rho^{(2)}_\epsilon \Vert_{\delta_0}) \nonumber \\ &+& (\Vert w^{(1)}_\epsilon \Vert_{\delta_0}+\Vert w^{(2)}_\epsilon \Vert_{\delta_0}+\Vert G^{(1)}_\epsilon \Vert_{\delta_0}+\Vert G^{(2)}_\epsilon \Vert_{\delta_0})\nonumber \\ & & \times(\Vert w^{(1)}_\epsilon - w^{(2)}_\epsilon \Vert_{\delta_0}+\Vert G^{(1)}_\epsilon - G^{(2)}_\epsilon \Vert_{\delta_0}) \nonumber \\ &+& \Vert \epsilon \partial_\parallel \phi^{(1)}_\epsilon - \epsilon \partial_\parallel \phi^{(2)}_\epsilon\Vert_{\delta_0} \Big). \end{eqnarray}
\subsection{Approximation scheme} \label{sec-approx} We use the usual approximation scheme for Cauchy-Kovalevskaya type results (\cite{Caf}).
We define $\rho_{\epsilon}^{n},w_{\epsilon}^{n},G_{\epsilon}^{n},V_{\epsilon}^{n}, \phi_{\epsilon}^{n}$ by recursion:
\underline{Initialization.} For $0<t<\eta(\delta_0 -1)$, we define:
$$ \rho_\epsilon^0(t)=\rho_\epsilon(0), $$ $$ w_{\epsilon}^{0}=v_\epsilon(0) - G_\epsilon^0,$$ $$ -\epsilon^2 \partial^2_{x_\parallel} \phi^{0}_\epsilon - \Delta_{x_\perp} \phi^{0}_\epsilon = \rho^{0}_\epsilon - \int \rho^{0}_\epsilon dx_\perp,$$
$$E_{\epsilon}^{\perp, 0}= - \nabla^\perp \phi_\epsilon^0,$$
and $-\epsilon^2 \partial_{x_\parallel}^2 V_{\epsilon}^{0}=\rho_\epsilon^0 - \int \rho_\epsilon^0 dx$. Finally, $G_\epsilon^0$ is given by the formula $G_\epsilon^0= -\int_0^t \partial_\parallel V_\epsilon^{0} ds$.
\underline{Recursion.} We define $\rho_\epsilon^{n+1}, w_\epsilon^{n+1}$ by the equations: $$ \left\{ \begin{array}{ll}
\partial_t \rho_\epsilon^{n+1} + \nabla_\perp (E^{\perp,n}_\epsilon \rho^n_\epsilon) + \partial_\parallel((w^n_{\epsilon}+G^n_\epsilon) \rho^n_\epsilon)= 0 \\
\partial_t w_{\epsilon}^{n+1} + \nabla_\perp (E^{\perp,n}_\epsilon (w^n_{\epsilon}+G^n_\epsilon)) + (w^n_{\epsilon} +G^n_\epsilon)\partial_\parallel(v^n_{\epsilon}+G^n_\epsilon) = -\epsilon\partial_\parallel \phi^n_\epsilon(t,x_\parallel). \end{array} \right.$$
with the initial conditions: $\rho_\epsilon^{n+1}(0)=\rho_\epsilon(0)$ and $ w_{\epsilon}^{n+1}(0)=v_\epsilon(0) - G_\epsilon^0$.
Then we can define $\phi_\epsilon^{n+1}$ as the solution to the Poisson equation: $$-\epsilon^2 \partial^2_{x_\parallel} \phi^{n+1}_\epsilon - \Delta_{x_\perp} \phi^{n+1}_\epsilon = \rho^{n+1}_\epsilon - \int \rho^{n+1}_\epsilon dx_\perp,$$
and set $$E_{\epsilon}^{\perp, n+1}= - \nabla^\perp \phi_\epsilon^{n+1}.$$
Similarly, $$-\epsilon^2 \partial_{x_\parallel}^2 V_{\epsilon}^{n+1}=\rho_\epsilon^{n+1}- \int \rho_\epsilon^{n+1} dx.$$ Then we can define $G_\epsilon^{n+1}$ with the formula: $G_\epsilon^{n+1}= -\int_0^t \partial_\parallel V_\epsilon^{n+1} ds$.
Now let $C_1$ be a constant larger than $\Vert \rho_\epsilon(0) \Vert_{\delta_0}$, $\Vert w_\epsilon(0) \Vert_{\delta_0}$, $\Vert G_\epsilon(0) \Vert_{\delta_0}$, $\Vert \sqrt{\epsilon} E_\epsilon(0) \Vert_{\delta_0}$ and all the other constants in the previous estimates. It is possible to choose $\eta$ small enough with respect to $C_1$ to propagate the following estimates by recursion (we refer to \cite{Gre1} for details; we use in particular estimates (\ref{eq-comp1}),(\ref{eq-comp2}),(\ref{eq-comp3})). There exists $C_2>C_1$ such that, for all $n\geq 1$: \begin{enumerate}[(i)] \item
$$ \left\{ \begin{array}{ll} \Vert \rho_\epsilon^n \Vert_{\delta_0}\leq C_2, \\ \Vert w_\epsilon^n \Vert_{\delta_0}\leq C_2, \\ \Vert G_\epsilon^n \Vert_{\delta_0}\leq C_2, \\ \Vert \sqrt{\epsilon} E_{\epsilon,\parallel}^n \Vert_{\delta_0}\leq C_2. \end{array} \right.$$
\item
$$ \left\{ \begin{array}{ll} \Vert \rho_\epsilon^n - \rho_\epsilon^{n-1} \Vert_{\delta_0}\leq \frac{C_2}{2^n}, \\
\Vert w_\epsilon^n - w_\epsilon^{n-1} \Vert_{\delta_0}\leq \frac{C_2}{2^n}, \\
\Vert G_\epsilon^n - G_\epsilon^{n-1} \Vert_{\delta_0}\leq \frac{C_2}{2^n}, \\
\Vert \sqrt{\epsilon} E_{\epsilon, \parallel}^n -\sqrt{\epsilon} E_{\epsilon, \parallel}^{n-1} \Vert_{\delta_0}\leq \frac{C_2}{2^n}. \end{array} \right.$$
\end{enumerate}
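Indeed, the geometric bounds in $(ii)$ give the Cauchy property directly: for $m>n\geq 1$,
\begin{equation*}
\Vert \rho_\epsilon^m - \rho_\epsilon^n \Vert_{\delta_0} \leq \sum_{k=n+1}^{m} \Vert \rho_\epsilon^k - \rho_\epsilon^{k-1} \Vert_{\delta_0} \leq \sum_{k=n+1}^{m} \frac{C_2}{2^k} \leq \frac{C_2}{2^n},
\end{equation*}
and the same computation applies to $w_\epsilon^n$, $G_\epsilon^n$ and $\sqrt{\epsilon}E_{\epsilon,\parallel}^n$.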
This proves that the sequences $\rho_{\epsilon}^{n},w_{\epsilon}^{n},G_{\epsilon}^{n}, \sqrt{\epsilon}E_{\epsilon,\parallel}^{n}, E_{\epsilon}^{\perp, n},\epsilon\partial_{x_\parallel}\phi_{\epsilon}^{n}$ are Cauchy sequences (with respect to $n$) in $B_{\delta_0}^\eta$, and consequently converge strongly in $B_{\delta_0}^\eta$, the estimates being uniform in $\epsilon$. It is clear that the limit satisfies System (\ref{sys}).
The requirement $\delta_1<\delta_0$ and the explicit life span in Theorem \ref{exi} come directly from the definition of the $B_{\delta_0}^\eta$ spaces.
For the uniqueness part, one can simply notice that the estimates we have shown allow us to prove that the map $F$ defined by: $$ F(\rho_\epsilon, w_\epsilon ) =\begin{pmatrix} \int_0^t \left( - \nabla_\perp (E^{\perp}_\epsilon \rho_\epsilon) - \partial_\parallel((w_{\epsilon}+G_\epsilon) \rho_\epsilon) \right)ds \\ \int_0^t\left(-\nabla_\perp (E^{\perp}_\epsilon (w_{\epsilon}+G_\epsilon)) - (w_{\epsilon} +G_\epsilon)\partial_\parallel(v_{\epsilon}+G_\epsilon) -\epsilon\partial_\parallel \phi_\epsilon(s,x_\parallel)\right)ds \end{pmatrix} $$ is a contraction on the closed subset $B$ of $B_{\delta_0}\times B_{\delta_0}$ defined by: \[ B= \left\{(\rho, w) \in B_{\delta_0}\times B_{\delta_0};\ \Vert \rho\Vert_{{\delta_0}}\leq C,\ \Vert w\Vert_{{\delta_0}}\leq C \right\}, \] with $C$ large enough, provided that $\eta$ is chosen small enough. The uniqueness of the analytic solution then follows.
\subsection*{Proof of Proposition \ref{exi2}}
We can carry out the same analysis as for the proof of Theorem \ref{exi}, which is in fact simpler here since we no longer have to deal with the fast oscillations in time. The only slightly different point is the estimate of the norm of $\int_0^t -\partial_\parallel p ds = \int_0^t \partial_\parallel \int \rho v^2 dx_\perp ds$, which is straightforward:
$$\left\Vert \int_0^t \partial_\parallel p ds\right\Vert _{{\delta_0}} \leq \eta C \Vert \rho \Vert _{{\delta_0}}\Vert v \Vert _{{\delta_0}}^2.$$
Then as before, we can use a contraction argument to prove the proposition.
\section{Proof of Theorem \ref{con}} \label{sec-proof2} \subsection*{Step 1: Another average in time for $E_{\epsilon,\parallel}$}
We have observed previously that the wave equation (\ref{waves}) describing the time oscillations of $E_{\epsilon,\parallel}$ is the same as the one appearing in Grenier's work, except for a slight change in the source term. Therefore the following decomposition, taken from \cite{Gre1}, holds identically:
\begin{lem} \label{osci}
Under assumption $(H)$, there exist vector fields $E_\epsilon^1, E_\epsilon^2$ and $W_\epsilon$ such that $E_{\epsilon,\parallel}=E_\epsilon^1+E_\epsilon^2$, and a positive constant $C$ independent of $\epsilon$ such that: \begin{enumerate}[(i)]
\item $\Vert \sqrt{\epsilon}E_\epsilon^1 \Vert_{L^\infty(H^{s-1}_{x_\parallel})}\leq C$.
\item $\partial_t W_\epsilon = E_\epsilon^1$, $\Vert W_\epsilon \Vert_{L^\infty(H^{s-1}_{x_\parallel})}\leq C$ and $W_\epsilon \rightharpoonup 0$ in $L^2$.
\item $W_\epsilon(0)= -\epsilon \partial_t E_{\epsilon,\parallel}(0) = \int \rho_\epsilon(0)v_\epsilon(0) dx_\perp$.
\item $\Vert E_\epsilon^2 \Vert_{L^\infty(H^{s-1}_{x_\parallel})}\leq C$.
\item $\int E^1_\epsilon dx_\parallel = \int E^2_\epsilon dx_\parallel=0$. \end{enumerate}
\end{lem}
\begin{proof}[Idea of the proof] The idea for building $E^2_\epsilon$ is to filter out the essential temporal oscillations (of frequency $\frac{1}{\sqrt{\epsilon}}$). Hence, we define $E^2_\epsilon$ through its Fourier transform: $$\mathcal{F}_\parallel E^2_\epsilon (t,k_\parallel)= \frac{1}{2\pi \sqrt{\epsilon}} \int_t^{t+2\pi \sqrt{\epsilon}} \mathcal{F}_\parallel E_{\epsilon,\parallel}(s,k_\parallel) ds$$ and set $E^1_\epsilon=E_{\epsilon,\parallel} - E^2_\epsilon$, so that $E^1_\epsilon$ gathers the essential information on the oscillations.
We refer to \cite{Gre1} for details.
\end{proof}
\subsection*{Step 2: Uniform bound on $E_\epsilon^\perp$ and $\partial_{x_\parallel} \phi_\epsilon$}
Under hypothesis $(H)$ we clearly get that $E_\epsilon^\perp$ and $\partial_{x_\parallel} \phi_\epsilon$ are bounded in $L^\infty_t(H^{s-1})$ uniformly with respect to $\epsilon$ (we do not need any gain of elliptic regularity).
Since $$\int\left(\rho_\epsilon - \int \rho_\epsilon dx_\perp\right)dx_\perp=0,$$ we easily check that: $$\Vert \phi_\epsilon\Vert_{H^s_{x_\perp,x_\parallel}} \leq \left\Vert \rho_\epsilon -\int \rho_\epsilon dx_\perp \right\Vert_{H^s_{x_\perp,x_\parallel}}.$$ Hence the result.
\subsection*{Step 3: Passage to the strong limit} Let $w_\epsilon=v_\epsilon - W_\epsilon$. According to Lemma \ref{osci}, $w_\epsilon$ is uniformly bounded in $L^\infty_t(H^{s-1})$. On the other hand, we have : \begin{equation}
\partial_t w_\epsilon + \nabla_\perp(E_\epsilon^\perp w_\epsilon) + w_\epsilon \partial_{x_\parallel} w_\epsilon =-\epsilon \partial_{x_\parallel}\phi_\epsilon + E_\epsilon^2 - w_\epsilon \partial_{x_\parallel} W_\epsilon - W_\epsilon \partial_{x_\parallel} w_\epsilon -W_\epsilon \partial_{x_\parallel} W_\epsilon. \end{equation} (Notice that $\nabla_{\perp}(E_\epsilon^\perp W_\epsilon)= W_\epsilon \nabla_{\perp}(E_\epsilon^\perp)=0$, since $W_\epsilon$ does not depend on $x_\perp$.)
Thus, using the uniform bounds, we can see that $\partial_t w_\epsilon$ is bounded in $L^\infty_t(H^{s-2})$ and thanks to the Aubin-Lions lemma (see J. Simon \cite{sim}), $w_\epsilon$ converges strongly (up to a subsequence) to some function $w$ in $\mathcal{C}([0,T], H^{s'-1})$ for all $s'<s$.
According to Step 2, $\epsilon \partial_{x_\parallel} \phi_\epsilon \rightharpoonup 0$ in the distributional sense. The following convergence also holds in the sense of distributions, according to Lemma \ref{osci}: $$w_\epsilon \partial_{x_\parallel} W_\epsilon + W_\epsilon \partial_{x_\parallel} w_\epsilon \rightharpoonup 0,$$ and $W_\epsilon \partial_{x_\parallel} W_\epsilon+E_\epsilon^2$ converges weakly (up to a subsequence) to some function $F$, since it is clearly bounded in $L^\infty(H^{s-2}_{x_\parallel})$.
Furthermore, since: \[ \int \left(W_\epsilon \partial_{x_\parallel} W_\epsilon+E_\epsilon^2\right) dx_\parallel =0, \] this implies that $\int F dx_\parallel=0$, and thus there exists $p$ such that $F=-\partial_{x_\parallel} p$.
Since $E_\epsilon^\perp$ is uniformly bounded in $L^\infty_t(H^{s-1})$, it also weakly-* converges, up to a subsequence, to some function $E^\perp$.
We now use the strong limit of $w_\epsilon$ in $\mathcal{C}([0,T], H^{s'-1})$ in order to pass to the limit in the convection terms. Passing to the limit in the sense of distributions, we obtain: \begin{equation}
\partial_t w + \nabla_\perp(E^\perp w) + w \partial_{x_\parallel} w = -\partial_{x_\parallel}p. \end{equation}
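For completeness, the passage to the limit in the products relies on the standard weak-strong argument: since $w_\epsilon \rightarrow w$ strongly in $\mathcal{C}([0,T],H^{s'-1})$ and $E_\epsilon^\perp$ converges weakly-* in $L^\infty_t(H^{s-1})$ to $E^\perp$, we get, in the sense of distributions,
\begin{equation*}
E_\epsilon^\perp w_\epsilon = E_\epsilon^\perp (w_\epsilon - w) + E_\epsilon^\perp w \rightharpoonup E^\perp w,
\end{equation*}
the first term vanishing thanks to the uniform bound on $E_\epsilon^\perp$; similarly, $w_\epsilon \partial_{x_\parallel} w_\epsilon = \frac 1 2 \partial_{x_\parallel}(w_\epsilon^2) \rightharpoonup \frac 1 2 \partial_{x_\parallel}(w^2) = w \partial_{x_\parallel} w$.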
The equation satisfied by $\rho_\epsilon$ is: $$\partial_t \rho_\epsilon + \nabla_\perp(E_\epsilon^\perp \rho_\epsilon) + \partial_\parallel (w_\epsilon \rho_\epsilon) = - \partial_\parallel (W_\epsilon \rho_\epsilon).$$
The proof is similar for $\rho_\epsilon$ which converges strongly, up to a subsequence, to $\rho$ in $\mathcal{C}([0,T], H^{s'})$ for all $s'<s$. One can likewise take limits in the Poisson equations. We finally obtain (\ref{simple}). By uniqueness of the solutions to (\ref{simple}), the limits actually hold without extraction.
\subsection*{Step 4: Equations for the correctors}
The final step relies on the following lemma proved in Grenier's paper \cite{Gre1} (the main point is to notice that the application $f \mapsto e^{\pm it/\sqrt{\epsilon}} f$ is an isometry on $L^\infty(H^s)$ for any $s$.) \begin{lem} \label{osci2} There exist two correctors $E_+(t,x_\parallel)$ and $E_-(t,x_\parallel)$ in $\mathcal{C}(H^{s-1})$ such that, for all $s'<s$: \begin{itemize} \item $\Vert \sqrt{\epsilon} E^1_\epsilon - e^{it/\sqrt{\epsilon}}E_+ - e^{-it/\sqrt{\epsilon}}E_- \Vert_{\mathcal{C}(H^{s'-1})} \rightarrow 0$, \item $\Vert W_\epsilon - \frac{1}{i}\left(e^{it/\sqrt{\epsilon}}E_+ -e^{-it/\sqrt{\epsilon}}E_- \right)\Vert_{\mathcal{C}(H^{s'-1})} \rightarrow 0$. \end{itemize} \end{lem} In particular we can deduce that: \[ e^{-it/\sqrt{\epsilon}} \sqrt{\epsilon} E^1_\epsilon \rightharpoonup E_+ \] (and similarly $e^{it/\sqrt{\epsilon}} \sqrt{\epsilon} E^1_\epsilon \rightharpoonup E_-$).
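The first of these weak convergences can be justified as follows: by Lemma \ref{osci2},
\begin{equation*}
e^{-it/\sqrt{\epsilon}}\sqrt{\epsilon}E^1_\epsilon = E_+ + e^{-2it/\sqrt{\epsilon}}E_- + o(1) \quad \text{in } \mathcal{C}(H^{s'-1}),
\end{equation*}
and $e^{-2it/\sqrt{\epsilon}}E_- \rightharpoonup 0$, since $e^{-2it/\sqrt{\epsilon}} \rightharpoonup 0$ weakly-* in $L^\infty_t$ as $\epsilon \rightarrow 0$ (non-stationary phase in time).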
Then, the idea is to use Lemmas \ref{osci} and \ref{osci2} and the wave equation (\ref{waves}) in order to obtain the equations satisfied by $E_\pm$. By elementary (but rather tedious) computations we get: $$\partial_t (\partial_{x_\parallel} E_\pm) + \left(\int \rho v dx_\perp\right) \partial_{x_\parallel} (\partial_{x_\parallel} E_\pm) =0.$$ Lemma \ref{osci2} also provides the initial conditions for $E_\pm$.
The proof of the theorem is now complete.
\section{Discussion on the sharpness of the results} \label{sec-sharp} \subsection{On the analytic regularity} \label{sec-analytic1}
Let us recall that the multi-fluid system (\ref{grenierS}) is ill-posed in Sobolev spaces because of the two-stream instabilities (recall that this is due to the coupling between the different phases of the fluid).
For system (\ref{simple}), we expect the situation to be similar. Due to the dependence on $x_\perp$ and the constraint $\int \rho dx_\perp =1$, system (\ref{simple}) is by nature a multi-fluid system. Nevertheless, one could imagine that the dynamics in the $x_\perp$ variable might yield some mixing in $x_\perp$ and $x_\parallel$ (in the spirit of hypoellipticity results) and could thus perhaps bring stability. Here we explain why this is not the case.
The idea is to consider shear flow initial data for (\ref{simple}). This allows us to recover exactly the multi-fluid equations (\ref{grenierS}). We take: \[ E_0^\perp = (0, \varphi(x_1,x_\parallel),0), \] and consequently $\rho_0 =\nabla_\perp \wedge E_0^\perp = -\partial_{x_1}\varphi(x_1,x_\parallel)$. We also assume that $v_0(x_1, x_\parallel)$ does not depend on $x_2$.
Then we observe that: \begin{equation*} \begin{split} \nabla_\perp (E^\perp_0 \rho_0) = 0, \\ \nabla_\perp (E^\perp_0 v_0) = 0. \end{split} \end{equation*}
With such initial data, system (\ref{simple}) reduces to: \begin{equation} \label{eq-multi} \left\{
\begin{array}{ll}
\partial_t \rho + \partial_\parallel(v_\parallel \rho)= 0 \\
\partial_t v_\parallel + v_\parallel \partial_\parallel(v_\parallel) = -\partial_\parallel p(t,x_\parallel) \\
\int \rho dx_1 = 1,\\ \end{array}
\right. \end{equation} and we observe that there is no longer any dynamics in the $x_\perp$ variable. This is nothing but system (\ref{grenierS}) in dimension $1$, with $M=[0,1[$ and $\mu$ the Lebesgue measure.
Now, let us consider measure-type data in the $x_1$ variable for $\rho$ and $v$ (this corresponds to a ``degenerate'' version of the shear flows defined above). In particular, if we choose: \[ \varphi= \frac 1 2 \mathbbm{1}_{x_1\leq \frac 1 4} \rho_{0,1}(x_\parallel) + \frac 1 2 \mathbbm{1}_{x_1\leq \frac 1 2} \rho_{0,2}(x_\parallel), \] we get: \begin{equation} \begin{split} \rho_0= \frac 1 2 \delta_{x_1=\frac 1 4} \rho_{0,1}(x_\parallel) + \frac 1 2 \delta_{x_1=\frac 1 2} \rho_{0,2}(x_\parallel), \\ v_0=\frac 1 2 \delta_{x_1=\frac 1 4} v_{0,1}(x_\parallel) + \frac 1 2 \delta_{x_1=\frac 1 2} v_{0,2}(x_\parallel) \end{split} \end{equation} and we obtain the following system for $\alpha=1, 2$:
\begin{equation} \left\{
\begin{array}{ll}
\partial_t \rho_\alpha + \partial_\parallel(v_\alpha \rho_\alpha)= 0 \\
\partial_t v_\alpha + v_\alpha \partial_\parallel(v_\alpha) = -\partial_\parallel p(t,x_\parallel) \\
\rho_1 + \rho_2=1.\\ \end{array}
\right. \end{equation} This particular system was given as an example by Brenier in \cite{Br2} to illustrate ill-posedness in Sobolev spaces of the multi-fluid equations.
We denote $q=\rho_1 v_1$. Using the constraint $\rho_1 + \rho_2 =1$, we easily obtain that $$p = -q^2 \left(\frac{1}{\rho_1}+ \frac{1}{1-\rho_1} \right).$$
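Let us sketch this computation. Differentiating the constraint in time and using the mass equations gives $\partial_\parallel(\rho_1 v_1 + \rho_2 v_2)=0$; taking the (constant) total flux equal to zero yields $\rho_2 v_2 = -q$, hence $v_1 = q/\rho_1$ and $v_2 = -q/(1-\rho_1)$, so that
\begin{equation*}
\rho_1 v_1^2 + \rho_2 v_2^2 = \frac{q^2}{\rho_1} + \frac{q^2}{1-\rho_1}.
\end{equation*}
Summing the two momentum equations in conservative form then shows that the pressure must compensate this momentum flux, up to a harmless constant.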
We can then observe that the system: \begin{equation} \left\{
\begin{array}{ll}
\partial_t \rho_1 + \partial_\parallel q= 0 \\
\partial_t q + \partial_\parallel(\frac{q^2}{\rho_1}) = -\rho_1 \partial_\parallel p(t,x_\parallel) \\ \end{array}
\right. \end{equation} is elliptic in space-time, and consequently it is ill-posed in Sobolev spaces.
Actually this example is not completely satisfactory, since it is singular in $x_1$. Nevertheless, we can consider the convolution of this initial data with a standard mollifier, which yields the same qualitative behaviour.
\subsection{On the analytic regularity in the perpendicular variable} \label{sec-analytic2}
We observe that if the initial datum $(\rho(0),v(0))$ does not depend on $x_\parallel$, then the fluid system (\ref{simple}) reduces to: \begin{equation} \left\{
\begin{array}{ll}
\partial_t \rho + \nabla_\perp (E^\perp \rho) = 0 \\
\partial_t v_\parallel + \nabla_\perp (E^\perp v_\parallel) =0 \\
E^\perp= \nabla^\perp \Delta_\perp^{-1} \left(\rho - \int \rho dx_\perp\right)\\
\int \rho dx_\perp = 1.\\ \end{array}
\right. \end{equation}
Thus, $\rho$ satisfies the $2D$ incompressible Euler system, written in vorticity formulation. This system admits a unique global strong solution provided that $\rho(0) \in H^s(\mathbb{T}^2)$ (with $s>1$), by a classical result of Kato \cite{Kat}, and even a unique global weak solution provided that $\rho(0) \in L^\infty(\mathbb{T}^2)$, by a classical result of Yudovic \cite{Yud}.
On the other hand, $v_\parallel$ satisfies a transport equation with the force field $E^\perp$. If we assume for instance that $v_0$ is a bounded measure, then using the classical log-Lipschitz estimate on $E^\perp$, we get a unique global weak solution $v_\parallel$ by the method of characteristics.
One could think that it should be possible to build solutions to the final fluid system (\ref{simple}) with similarly ``weak'' regularity in the $x_\perp$ variable (while keeping analyticity in the $x_\parallel$ variable). Actually this is not possible in general: since $E^\perp$ also depends on $x_\parallel$, we also need analytic regularity in the $x_\perp$ variable to get analytic regularity in the $x_\parallel$ variable (see estimates such as (\ref{reg-rho})).
\subsection{On the local in time existence} \label{sec-local}
In \cite{Br3}, Brenier considers potential velocity fields, that is, velocity fields of the form $v_\Theta = \nabla_x \Phi_\Theta$, for the multi-fluid system: \begin{equation} \left\{
\begin{array}{ll}
\Theta=1,...,M \quad M\in \mathbb{N}^* \\
\partial_t \rho_\Theta + \operatorname{div}( \rho_\Theta v_\Theta)= 0 \\
\partial_t v_\Theta + v_\Theta.\nabla(v_\Theta) = E \\
\operatorname{rot} E = 0 \\
\sum_{\Theta=1}^M \rho_\Theta = 1.\\ \end{array}
\right. \end{equation} In this case the equation on the velocities becomes: \begin{equation} \partial_t \Phi_\Theta + \frac 1 2 \vert \nabla_x \Phi_\Theta \vert^2 +p =0. \end{equation} It is proved in \cite{Br3} that any strong solution satisfying \[ \inf_{\Theta, t, x} \rho_\Theta(t,x) >0 \] cannot be global in time unless the initial energy vanishes: \begin{equation} \sum_{\Theta=1}^M \int \rho_{\Theta, t=0} \vert u_{\Theta, t=0} \vert^2 dx =0. \end{equation} This striking result relies on a variational interpretation of these Euler equations. Using the same particular initial data as in Section \ref{sec-analytic1}, this indicates that for system (\ref{simple}) as well, there is no global strong solution, unless there is no dependence on $x_\perp$ or $x_\parallel$.
We observe that if the initial datum $(\rho(0),v(0))$ does not depend on $x_\perp$, the fluid system (\ref{simple}) does not make sense anymore (as for incompressible Euler in dimension $1$). When the initial datum $(\rho(0),v(0))$ does not depend on $x_\parallel$, we have seen that we recover $2D$ incompressible Euler and there is indeed global existence (of strong or weak solutions).
\subsection{The relative entropy method applied to a toy model: failure of the multi-current limit} \label{sec-rela}
It seems very appealing to try to use the relative entropy method (introduced by Brenier \cite{Br2} for Vlasov-type systems) to study the limit, as it would open the way to the study of the limit for solutions to the initial system (\ref{kinbegin}) with low regularity. The only requirement would be that the first two moments of the initial data for (\ref{kinbegin}) lie in a small neighborhood (say in the $L^2$ topology) of the smooth initial data for the limit system (\ref{simple}). Nevertheless, it is not possible to overcome the two-stream instabilities in this framework. We now explain why.
Let us consider the toy model: \begin{equation} \label{toy} \left\{
\begin{array}{ll}
\partial_t f_\epsilon^\theta + v.\nabla_x f_\epsilon^\theta +E_\epsilon.\nabla_v f^\theta_\epsilon=0\\
E_\epsilon=-\nabla_x V_\epsilon\\
-\epsilon\Delta_x V_\epsilon = \int \int f_\epsilon^\theta dv d\mu -1\\
f^\theta_{\epsilon}(t=0)= f^\theta_{\epsilon,0}. \end{array}
\right. \end{equation} with $t>0$, $x \in \mathbb{T}^3$, $v \in \mathbb{R}^3$ and where $\theta$ lies in $[0,1]$ equipped with a positive measure $\mu$ which is: \begin{itemize}
\item either a sum of Dirac masses with total mass $1$, for instance: \[ \mu=\sum_{i=0}^{N-1} \frac{1}{N} \delta_{\theta= i/N}. \] In this case, we model a plasma made of $N$ phases.
\item or the Lebesgue measure, in which case we model a continuum of phases.
\end{itemize}
Actually, we could have considered more general Borel measures, but we restrict ourselves to these cases for simplicity. This system can be seen as the kinetic counterpart of a simplified version of (\ref{sys}), which focuses on the unstable features of the system. Of course, we could have considered directly the fluid version, that is: \begin{equation} \left\{
\begin{array}{ll}
\partial_t \rho_\epsilon^\theta + \nabla_x (\rho_\epsilon^\theta u^\theta_\epsilon )=0\\
\partial_t u_\epsilon^\theta + u^\theta_\epsilon.\nabla_x u_\epsilon^\theta =E_\epsilon\\
E_\epsilon=-\nabla_x V_\epsilon\\
-\epsilon\Delta_x V_\epsilon = \int \rho_\epsilon^\theta d\mu -1\\ \end{array}
\right. \end{equation} but the proofs are essentially the same, and the study of system (\ref{toy}) is of interest in its own right.
We consider global weak solutions to (\ref{toy}), in the sense of Arsenev \cite{Ar}. We recall that the energy associated to (\ref{toy}) is the following non-increasing functional: \begin{equation} \mathcal{E}_\epsilon(t) = \frac 1 2\int \int f_\epsilon^\theta \vert v\vert^2 dvdx d\mu + \frac 1 2\epsilon \int \vert \nabla_x V_\epsilon \vert^2 dx. \end{equation}
We assume that there exists a constant $K>0$, independent of $\epsilon$, such that $\mathcal{E}_\epsilon(0) \leq K$. This implies that for any $\epsilon$ and $t>0$: \begin{equation} \mathcal{E}_\epsilon(t) \leq K. \end{equation}
Let $(\rho^\theta,u^\theta)$ be the local strong solution to the system: \begin{equation} \left\{
\begin{array}{ll}
\partial_t \rho^\theta + \nabla_x.(\rho^\theta u^\theta)=0\\
\partial_t u^\theta+ u^\theta.\nabla_x u^\theta =-\nabla_x V \\
\int \rho^\theta d\mu =1.\\ \end{array}
\right. \end{equation} with initial data $(\rho_0^\theta, u_0^\theta)$ (which we a priori have to take with analytic regularity). The ``incompressibility'' constraint reads: \begin{equation} \nabla_x . \int \rho^\theta u^\theta d\mu =0. \end{equation}
Following the approach of Brenier \cite{Br2} for the quasineutral limit with a single phase, we consider the relative entropy (built as a modulation of the energy $\mathcal{E}_\epsilon$): \begin{equation} \mathcal{H}_\epsilon(t) = \frac 1 2 \int \int f_\epsilon^\theta \vert v-u^\theta(t,x)\vert^2 dvdx d\mu + \frac 1 2\epsilon \int \vert \nabla_x V_\epsilon - \nabla_x V\vert^2 dx. \end{equation}
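Expanding the squares shows that $\mathcal{H}_\epsilon$ is indeed a modulation of the energy by terms which are linear in the first two moments of $f_\epsilon^\theta$: \begin{equation} \mathcal{H}_\epsilon(t) = \mathcal{E}_\epsilon(t) - \int \int f_\epsilon^\theta \, v.u^\theta \, dvdx d\mu + \frac 1 2 \int \int f_\epsilon^\theta \vert u^\theta \vert^2 dvdx d\mu - \epsilon \int \nabla_x V_\epsilon . \nabla_x V dx + \frac 1 2 \epsilon \int \vert \nabla_x V\vert^2 dx. \end{equation} This decomposition explains the structure of the right-hand side of (\ref{comp}) below.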
We assume that the system is well prepared in the sense that $\mathcal{H}_\epsilon(0) \rightarrow 0$. The goal is to find some stability inequality in order to show that we also have $\mathcal{H}_\epsilon(t) \rightarrow 0 $.
Our aim here is to show why the method fails unless $u^\theta$ actually does not depend on $\theta$. This can be interpreted as the effect of the two-stream instabilities \cite{CGG}.
We have, since the energy is non-increasing: \begin{equation} \label{comp} \begin{split} \frac{d}{dt}\mathcal{H}_\epsilon(t) \leq \int \int \partial_t f_\epsilon^\theta \left(\frac 1 2 \vert u^\theta\vert^2 -v .u^\theta\right)dvdx d\mu + \int \int f_\epsilon^\theta \partial_t \left(\frac 1 2 \vert u^\theta \vert ^2 -v. u^\theta \right)dvdx d\mu \\ + \frac 1 2\epsilon \int \partial_t \vert\nabla_x V\vert^2 dx - \epsilon \int \nabla_x V_\epsilon. \partial_t \nabla_x V dx - \epsilon \int \partial_t \nabla_x V_\epsilon. \nabla_x V dx. \end{split} \end{equation}
We clearly have $ \epsilon \int \partial_t \vert\nabla_x V\vert^2 dx=\mathcal{O}(\epsilon)$. Moreover, using the energy bound, we get \[ \epsilon \Big\vert \int \nabla_x V_\epsilon. \partial_t \nabla_x V dx \Big\vert \leq \sqrt{\epsilon} \Vert \sqrt \epsilon \nabla_x V_\epsilon\Vert_{L^2_x} \Vert \partial_t \nabla_x V\Vert_{L^2_x}, \] which is of order $\mathcal{O}(\sqrt \epsilon)$.
For the last term of (\ref{comp}), we compute: \begin{equation} \begin{split} - \epsilon \int \partial_t \nabla_x V_\epsilon.\nabla_x V dx =& \epsilon \int \partial_t \Delta_{x} V_\epsilon V dx \\ =& - \int \partial_t \left(\int f_\epsilon^\theta dv d\mu\right) V dx \\ =& \int \nabla_x.\left(\int f_\epsilon^\theta v dv d\mu\right) V dx \\ =&- \int \left(\int f_\epsilon^\theta v dv d\mu\right) .\nabla_x V dx . \end{split} \end{equation} In this computation we have used the Poisson equation and the local conservation of mass: \[ \partial_t \int f_\epsilon^\theta dv + \nabla_x. \left( \int v f_\epsilon^\theta dv\right) =0. \]
On the other hand, we can compute: \begin{equation} \begin{split} &\int \int \partial_t f_\epsilon^\theta \left(\frac 1 2 \vert u^\theta \vert ^2 -v. u^\theta\right)dvdx d\mu + \int \int f_\epsilon^\theta \partial_t \left(\frac 1 2 \vert u^\theta \vert ^2 -v. u^\theta\right)dvdx d\mu \\ =& \int \int f_\epsilon^\theta(u^\theta-v) .\left[(u^\theta-v).\nabla_x u^\theta\right] dv dx d\mu + \int\int f_\epsilon^\theta(u^\theta-v).(\partial_t u^\theta +u^\theta.\nabla_x u^\theta) dvdx d\mu \\
-& \int f_\epsilon^\theta E_\epsilon. u^\theta dv dx d\mu. \end{split} \end{equation}
All the trouble comes from this last term. When no assumption is made on $u^\theta$, it can be of order $\mathcal{O}(1/\sqrt{\epsilon})$. This wild term can be interpreted as the appearance of the two-stream instabilities.
Therefore we have to make an additional assumption in order to avoid this instability. This is done by assuming that $u^\theta$ initially does not depend on $\theta$ (which yields, by uniqueness, that $u^\theta$ does not depend on $\theta$ at any time), in which case we can write: \[ u^\theta=u \] and consequently, we have \begin{equation} \begin{split} - \int f_\epsilon^\theta E_\epsilon. u dv dx d\mu =& \int \left(\epsilon\Delta_x V_\epsilon -1\right)E_\epsilon. u dx . \end{split} \end{equation}
In addition, the incompressibility constraint becomes $\nabla_x.u=0$, and thus:
$$\int E_\epsilon . u \, dx = \int V_\epsilon \, \nabla_x.u \, dx=0.$$
Furthermore, we have:
\begin{equation} \begin{split} \int \left(\int f_\epsilon^\theta dv d\mu\right) u. \nabla_x V dx =& \int u. \nabla_x V dx - \epsilon \int \Delta_x V_\epsilon \, u. \nabla_x V dx. \end{split} \end{equation} The first term is equal to $0$ by the incompressibility constraint, while the second is of order $\mathcal{O}({\sqrt{\epsilon}})$, by the energy inequality.
We finally get the stability inequality: \begin{equation} \begin{split} \mathcal{H}_\epsilon(t) \leq \mathcal{H}_\epsilon(0) + R_\epsilon(t) + C\int_0^t \Vert \nabla_x u \Vert \mathcal{H}_\epsilon(s)ds\\ + \int_0^t \int \int f_\epsilon^\theta (u-v). (\partial_t u + u.\nabla_x u +\nabla_x V) d\mu dv dx ds, \end{split} \end{equation} where $R_\epsilon(t)\rightarrow 0$ as $\epsilon$ goes to $0$, and the last term vanishes by definition of $(u,V)$.
As a result, by Gronwall's inequality, $\mathcal{H}_\epsilon(t) \rightarrow 0$, uniformly locally in time.
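Explicitly, the last term of the stability inequality being zero, the integral form of Gronwall's lemma yields \begin{equation} \mathcal{H}_\epsilon(t) \leq \left( \mathcal{H}_\epsilon(0) + \sup_{s\in [0,t]} R_\epsilon(s) \right) \exp\left( C\int_0^t \Vert \nabla_x u(s) \Vert ds \right), \end{equation} and the right-hand side goes to $0$ with $\epsilon$, uniformly on compact intervals of time contained in the lifespan of the strong solution.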
To conclude, we notice that $\rho^\theta_\epsilon:=\int f_\epsilon^\theta dv$ is uniformly bounded in $L^\infty_t(L^1_{\theta,x})$. By the energy inequality, we can easily show that this is also the case for $J_\epsilon^\theta:= \int f_\epsilon^\theta v dv$. Thus, up to a subsequence, there exist $\rho^\theta$ and $J^\theta$ such that $\rho_\epsilon^\theta$ weakly-* converges in the sense of measures to $\rho^\theta$ (resp. $J_\epsilon^\theta$ to $J^\theta$). Passing to the limit in the local conservation of charge, which reads: \[ \partial_t \rho_\epsilon^\theta+ \nabla_x. J_\epsilon^\theta=0, \] we obtain: \[ \partial_t \rho^\theta+ \nabla_x. J^\theta=0. \]
The goal is now to prove that $J^\theta = \rho^\theta u$.
By a simple use of the Cauchy-Schwarz inequality, we have: \begin{equation} \int \int \frac{\vert \rho_\epsilon^\theta u - J_\epsilon^\theta\vert^2}{\rho^\theta_\epsilon} dx d\mu \leq \int \int f_\epsilon^\theta\vert v- u\vert^2 dv dx d\mu. \end{equation}
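For the reader's convenience, the pointwise inequality behind this estimate is obtained by writing $f_\epsilon^\theta = \sqrt{f_\epsilon^\theta}\sqrt{f_\epsilon^\theta}$, which gives, for each $(t,x,\theta)$, \begin{equation} \vert \rho_\epsilon^\theta u - J_\epsilon^\theta \vert^2 = \left\vert \int f_\epsilon^\theta (u-v) dv \right\vert^2 \leq \left( \int f_\epsilon^\theta dv \right) \left( \int f_\epsilon^\theta \vert u-v \vert^2 dv \right); \end{equation} it then remains to divide by $\rho_\epsilon^\theta$ and to integrate in $x$ and $\mu$.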
By a classical convexity argument due to Brenier \cite{Br}, the functional $(\rho,J) \mapsto \int \frac{\vert \rho u-J\vert^2}{\rho} dxd\mu$ is lower semi-continuous with respect to the weak convergence of measures. Passing to the limit, we finally obtain: \[ J^\theta = \rho^\theta u. \]
By uniqueness of the solution to the limit system, provided that the whole sequence $\rho_{\epsilon,0}^\theta$ weakly converges to $\rho^\theta_0$, we obtain the convergences without having to extract subsequences.
Finally we have proved the result:
\begin{prop} Let $(f_\epsilon^\theta,E_\epsilon)$ be a global weak solution to (\ref{toy}) such that the local conservation of charge and current are satisfied. Assume that for some smooth functions $(\rho_0^\theta, u_0)$ (we emphasize that $u_0$ does not depend on $\theta$, in order to avoid two-stream instabilities) satisfying: \begin{equation} \left\{
\begin{array}{ll} \int \rho_0^\theta d\mu=1, \\ \nabla_x. u_0=0,\\ \end{array}
\right. \end{equation} we have: \begin{equation}
\frac 1 2 \int \int f^\theta_\epsilon(t=0) \vert v-u_0(x)\vert^2 dvdx d\mu + \frac 1 2\epsilon \int \vert \nabla_x V_\epsilon(t=0) - \nabla_x V(t=0)\vert^2 dx \rightarrow 0 \end{equation} and $\int f_\epsilon^\theta(t=0) dv \rightharpoonup \rho_0^\theta$ in the weak sense of measures. Then, \begin{equation}
\frac 1 2 \int \int f_\epsilon^\theta \vert v-u(t,x)\vert^2 dvdx d\mu + \frac 1 2\epsilon \int \vert \nabla_x V_\epsilon - \nabla_x V \vert^2 dx \rightarrow 0 , \end{equation} where $(u,V)$ is the local strong solution to the incompressible Euler system: \begin{equation} \left\{
\begin{array}{ll}
\partial_t u + u.\nabla_x u =-\nabla_x V \\ \nabla_x. u=0.\\
\right. \end{equation}
Moreover, $\rho_\epsilon^\theta:=\int f_\epsilon^\theta dv$ converges in the weak sense of measures to $\rho^\theta$, the solution to: \begin{equation} \partial_t \rho^\theta + u. \nabla_x \rho^\theta=0, \end{equation} and $J_\epsilon^\theta:= \int f_\epsilon^\theta v dv$ converges in the weak sense of measures to $\rho^\theta u$. \end{prop}
\section{Conclusion}
In this work, we have provided a first analysis of the mathematical properties of the three-dimensional finite Larmor radius (FLR) approximation for electrons in a fixed background of ions. We have shown that the limit is unstable, in the sense that we have to restrict to initial data with both particular profiles and analytic regularity. In particular, we have pointed out that the analytic assumption is not merely a technical mathematical assumption, but is necessary for the existence of strong solutions. In addition, the results are only local in time.
On the other hand, we proved in \cite{DHK1} that the FLR approximation for ions with massless electrons is, by contrast, very stable, in the sense that we can deal with initial data with no prescribed profile and weak regularity (that is, in a Lebesgue space).
This rigorously justifies why physicists prefer to consider the equations for ions rather than those for electrons, especially for numerical experiments (we refer for instance to Grandgirard et al. \cite{Gra}).
\section{Appendix : Formal derivation of the drift-fluid problem}
\label{formal} \subsection*{Scaling of the Vlasov equation}
Let us recall that our purpose is to describe the behaviour of a gas of electrons in a neutralizing background of ions at thermodynamic equilibrium, subjected to a large magnetic field. For simplicity, we consider a magnetic field with a fixed direction $e_\parallel$ (also denoted by $e_z$) and a fixed large magnitude $\bar B$.
Because of the strong magnetic field, the dynamics of particles in the parallel direction $e_\parallel$ is completely different from their dynamics in the orthogonal plane. We therefore consider this time anisotropic characteristic spatial lengths: $$\tilde{x}_\perp =\frac{x_\perp}{L_\perp}, \quad \tilde{x}_\parallel =\frac{x_\parallel}{L_\parallel},$$
$$\tilde{t}= \frac t \tau \quad \tilde{v}= \frac v {v_{th}},$$ $$f(t,x_\perp,x_\parallel,v)= \bar f \tilde{f}(\tilde{t},\tilde{x}_\perp,\tilde{x}_\parallel,\tilde{v}) \quad V(t,x_\perp,x_\parallel)= \bar V \tilde V(\tilde{t},\tilde{x}_\perp,\tilde{x}_\parallel) \quad E(t,x_\perp,x_\parallel)=\bar E \tilde E(\tilde t, \tilde{x}_\perp,\tilde{x}_\parallel). $$
The subscript $\perp$ stands for the orthogonal projection on the perpendicular plane (to the magnetic field), while the subscript $\parallel$ stands for the projection on the parallel direction.
This yields:
\begin{equation} \left\{
\begin{array}{ll}
\partial_{\tilde t} \tilde f_\epsilon + \frac{v_{th}\tau}{L_\perp}\tilde v_\perp.\nabla_{\tilde x_\perp}\tilde f_\epsilon + \frac{v_{th}\tau}{L_\parallel}\tilde v_\parallel.\nabla_{\tilde x_\parallel}\tilde f_\epsilon +\left(\frac{e\bar E \tau}{mv_{th}}\tilde E_\epsilon+ \frac{e \bar B}{m} \tau \tilde v \wedge e_\parallel\right).\nabla_{\tilde v} \tilde f_\epsilon =0 \\
\frac{\bar E }{\bar V} \tilde E_\epsilon = \left(-\frac{1}{L_\perp}\nabla_{\tilde x_\perp} \tilde V_\epsilon ,-\frac{1}{L_\parallel}\nabla_{\tilde x_\parallel} \tilde V_\epsilon\right)\\ -\frac{\epsilon_0\bar V}{L_\perp^2}\Delta_{\tilde x_\perp} \tilde V_\epsilon -\frac{\epsilon_0\bar V}{L_\parallel^2}\Delta_{\tilde x_\parallel} \tilde V_\epsilon =e\bar f v_{th}^3\left(\int \tilde f_\epsilon d \tilde v - 1\right)\\
\tilde f_{\epsilon,\vert \tilde t=0} =\tilde f_{0,\epsilon}, \quad \bar f L^3 v_{th}^3\int \tilde f_{0,\epsilon} d \tilde vd \tilde x =1.\\
\end{array}
\right. \end{equation}
We set $\Omega= \frac{e\bar B}{m}$: this is the cyclotron frequency (also referred to as the gyrofrequency). We also consider the so-called electron Larmor radius (or electron gyroradius) $r_L$ defined by:
\begin{equation} r_L=\frac{ v_{th}}{\Omega}= \frac{m v_{th}}{e \bar B}. \end{equation} This quantity can be physically understood as the typical radius of the helix around the axis $e_\parallel$ described by the particles, due to the intense magnetic field.
The Vlasov equation now reads: $$ \partial_{\tilde t} \tilde f_\epsilon + \frac{r_L}{L_\perp}\Omega \tau\tilde v_\perp.\nabla_{\tilde x_\perp}\tilde f_\epsilon + \frac{r_L}{L_\parallel}\Omega \tau\tilde v_\parallel.\nabla_{\tilde x_\parallel}\tilde f_\epsilon +\left(\frac{\bar E }{\bar B v_{th}}\Omega \tau\tilde E_\epsilon+ \Omega \tau \tilde v \wedge e_\parallel\right).\nabla_{\tilde v} \tilde f_\epsilon =0 . $$
The gyrokinetic ordering consists in: $$\Omega \tau =\frac 1 \epsilon, \quad \frac{\bar E }{\bar B v_{th}}=\epsilon.$$
The spatial scaling we perform is the so-called finite Larmor radius scaling (see Fr\'enod and Sonnendr\"ucker \cite{FS2} for a reference in the mathematical literature): basically, the idea is to take the typical perpendicular spatial length $L_\perp$ of the same order as the electron Larmor radius.
By contrast, the parallel observation length $L_\parallel$ is taken to be much larger: \begin{equation} \frac{r_L}{L_\perp} = 1 , \quad \frac{r_L}{L_\parallel} = \epsilon. \end{equation} This is a typically anisotropic situation.
This particular scaling allows us, at least formally, to observe more precise effects in the orthogonal plane than the isotropic scaling (studied for instance in \cite{GSR1}):
$$ \frac{r_L}{L_\perp} = \epsilon , \quad \frac{r_L}{L_\parallel} = \epsilon. $$
In particular we wish to observe the so-called electric drift $E^\perp$ (also referred to as the $E \times B$ drift) whose effect is of great concern in tokamak physics (see \cite{DHK2} for instance).
The quasineutral ordering we adopt is the following:
\begin{equation} \frac{\lambda_D} {L_\parallel} = \sqrt{\epsilon}. \end{equation}
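Indeed, inserting the gyrokinetic ordering and the finite Larmor radius scaling into the rescaled Vlasov equation above, the coefficients become \begin{equation} \frac{r_L}{L_\perp}\Omega\tau = \frac 1 \epsilon, \quad \frac{r_L}{L_\parallel}\Omega\tau = 1, \quad \frac{\bar E}{\bar B v_{th}}\Omega\tau = 1, \quad \Omega\tau = \frac 1 \epsilon, \end{equation} which accounts for the powers of $\epsilon$ in the Vlasov equation of (\ref{fix}) below.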
After straightforward calculations (we refer to \cite{FS2} for details), we get the following Vlasov-Poisson system in dimensionless form, for $t\geq 0, x=(x_\perp,x_\parallel) \in \mathbb{T}^2\times\mathbb{T}, v=(v_\perp,v_\parallel) \in\mathbb{R}^2\times \mathbb{R}$:
\begin{equation} \label{fix} \left\{
\begin{array}{ll}
\partial_{t} f_\epsilon + \frac{v_\perp}{\epsilon}.\nabla_{x_\perp} f_\epsilon + v_\parallel.\nabla_{x_\parallel} f_\epsilon + (E_\epsilon+ \frac{v\wedge e_z}{\epsilon}).\nabla_{v} f_\epsilon = 0 \\
E_\epsilon= (-\nabla_{x_\perp} {V}_\epsilon, -\epsilon\nabla_{x_\parallel} {V}_\epsilon) \\
-\epsilon^2\Delta_{x_\parallel} {V}_\epsilon -\Delta_{x_\perp} {V}_\epsilon = \int f_\epsilon dv - \int f_\epsilon dvdx\\
f_{\epsilon, t=0}=f_{\epsilon,0}. \end{array}
\right. \end{equation}
\begin{rque} It seems physically relevant to consider scalings such that: \begin{equation} \lambda_D / L_\parallel \sim \epsilon^\alpha, \end{equation} with $\alpha \geq 1$. However, with such a scaling, the systems seem too degenerate with respect to $\epsilon$, and we have not been able to handle this situation. The scaling we study is nevertheless relevant for some extreme magnetic regimes in tokamaks. \end{rque}
\subsection*{Hydrodynamic equations}
In order to isolate this quasineutral problem, thanks to the linearity of the Poisson equation, we split the electric field into two parts:
\begin{equation} \label{model} \left\{
\begin{array}{ll} E_\epsilon= E^1_\epsilon + E^2_\epsilon, \\
E^1_\epsilon= (-\nabla_{x_\perp} {V}^1_\epsilon, -\epsilon\nabla_{x_\parallel} {V}^1_\epsilon), \\
-\epsilon^2\Delta_{x_\parallel} {V}^1_\epsilon -\Delta_{x_\perp} {V}^1_\epsilon = \int f_\epsilon dv - \int f_\epsilon dvdx_\perp,\\ E^2_\epsilon= - \partial_{x_\parallel} V^2_\epsilon, \\
-\epsilon\Delta_{x_\parallel} {V}^2_\epsilon= \int f_\epsilon dvdx_\perp - \int f_\epsilon dv dx. \\ \end{array}
\right. \end{equation}
In order to make the fast oscillations in time due to the term $\frac{v_\perp}{\epsilon}.\nabla_{x}$ disappear, we perform the same change of variables as in \cite{GHN}, to get the so-called gyro-coordinates: \begin{equation} x_g= x_\perp + v^\perp, \quad v_g=v_\perp. \end{equation} We easily compute the equation satisfied by the new distribution function $g_\epsilon(t,x_g,v_g,v_\parallel)=f_\epsilon(t,x,v)$: \begin{eqnarray*}
\partial_t g_\epsilon + v_\parallel \partial_{x_\parallel} g_\epsilon + E^1_{\epsilon, \parallel}(t,x_g-v_g^\perp)\partial_{v_\parallel} g_\epsilon + E^2_{\epsilon}(t,x_{g,\parallel})\partial_{v_\parallel} g_\epsilon \\ +E^1_{\epsilon, \perp}(t,x_g- v_g^\perp).(\nabla_{v_g} g_\epsilon - \nabla^\perp_{x_g} g_\epsilon) + \frac{1}{\epsilon} v_g^\perp. \nabla_{v_g} g_\epsilon=0. \end{eqnarray*} Notice here that in the process, the so-called electric drift $E^\perp$ appears since: $$ -E^1_{\epsilon, \perp}(t,x_g- v_g^\perp). \nabla^\perp_{x_g} g_\epsilon = E^{1,\perp}_{\epsilon}(t,x_g- v_g^\perp). \nabla_{x_g} g_\epsilon. $$
The equation satisfied by the charge density $\rho_\epsilon=\int g_\epsilon dv$ reads: \begin{equation}
\partial_t \rho_\epsilon + \partial_{x_\parallel} \int v_\parallel g_\epsilon dv + \nabla^\perp_{x_g}\int E^1_{\epsilon, \perp}(t,x_g- v_g^\perp) g_\epsilon dv = 0, \end{equation} and the one satisfied by the current density $J_\epsilon=\int g_\epsilon v dv\left(= \begin{pmatrix} \int g_\epsilon v_\perp dv \\ \int g_\epsilon v_\parallel dv \end{pmatrix}\right)$ is the following: \begin{eqnarray}
\partial_t J_\epsilon + \partial_{x_\parallel} \int v_\parallel \begin{pmatrix} v_g \\ v_\parallel \end{pmatrix} g_\epsilon dv + \nabla^\perp_{x_g}\int E^1_{\epsilon, \perp}(t,x_g- v_g^\perp) \begin{pmatrix} v_g \\ v_\parallel \end{pmatrix} g_\epsilon dv \nonumber \\ =\int \begin{pmatrix} E^1_{\epsilon, \perp}(t,x_g- v_g^\perp) \\ 0 \end{pmatrix} g_\epsilon dv + \int \begin{pmatrix} 0 \\ E^1_{\epsilon, \parallel}(t,x_g-v_g^\perp) \end{pmatrix}g_\epsilon dv \nonumber \\
+ \begin{pmatrix} 0 \\ E^2_{\epsilon}(t,x_{g,\parallel})\rho_\epsilon \end{pmatrix} + \frac{J_\epsilon^\perp}{\epsilon}. \end{eqnarray}
We now assume that we deal with special monokinetic data of the form: \begin{equation}
g_\epsilon(t,x,v) = \rho_\epsilon(t,x)\mathbbm{1}_{v_\parallel=v_{\parallel,\epsilon}(t,x)}\mathbbm{1}_{v_g=0}. \end{equation}
This assumption is nothing but the classical ``cold plasma'' approximation, together with the assumption that the transverse particle velocities are isotropically distributed (which is physically relevant, see \cite{Sul}): in other words, the average motion of particles in the perpendicular plane is only due to the advection by the electric drift $E^\perp$.
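Interpreting the indicator functions as Dirac masses in velocity, the moments of $g_\epsilon$ then close without any further approximation: \begin{equation} \int g_\epsilon dv = \rho_\epsilon, \quad \int g_\epsilon v_g dv = 0, \quad \int g_\epsilon v_\parallel dv = \rho_\epsilon v_{\parallel,\epsilon}, \quad \int g_\epsilon v_\parallel^2 dv = \rho_\epsilon v_{\parallel,\epsilon}^2, \end{equation} which is what allows one to pass from the moment equations on $\rho_\epsilon$ and $J_\epsilon$ to a closed fluid system.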
For the sake of readability, we write from now on $\nabla_{x_g}= \nabla_\perp$ and $\nabla_{x_\parallel}= \nabla_\parallel$. Then we formally get the hydrodynamic model:
\begin{equation}
\left\{
\begin{array}{ll}
\partial_t \rho_\epsilon + \nabla_\perp (E^\perp_\epsilon \rho_\epsilon) + \partial_\parallel(v_{\parallel,\epsilon} \rho_\epsilon)= 0 \\
\partial_t (\rho_\epsilon v_{\parallel,\epsilon}) + \nabla_\perp (E^\perp_\epsilon \rho_\epsilon v_{\parallel,\epsilon}) + \partial_\parallel(\rho_\epsilon v_{\parallel,\epsilon}^2) = -\epsilon\partial_\parallel \phi_\epsilon(t,x)\rho_\epsilon -\partial_\parallel V_\epsilon(t,x_\parallel)\rho_\epsilon \\
E^\perp_\epsilon= -\nabla^\perp \phi_\epsilon \\ -\epsilon^2 \partial^2_{\parallel} \phi_\epsilon - \Delta_{\perp} \phi_\epsilon = \rho_\epsilon - \int \rho_\epsilon dx_\perp\\
-\epsilon \partial_{\parallel}^2 V_\epsilon =\int \rho_\epsilon dx_\perp -1\\ \end{array}
\right. \end{equation}
One can use the first equation to simplify the second one (the systems are equivalent provided that we work with regular solutions and that $\rho_\epsilon>0$):
\begin{equation}
\left\{
\begin{array}{ll}
\partial_t \rho_\epsilon + \nabla_\perp (E^\perp_\epsilon \rho_\epsilon) + \partial_\parallel(v_{\parallel,\epsilon} \rho_\epsilon)= 0 \\
\partial_t v_{\parallel,\epsilon} + \nabla_\perp (E^\perp_\epsilon v_{\parallel,\epsilon}) + v_{\parallel,\epsilon} \partial_\parallel(v_{\parallel,\epsilon}) = -\epsilon\partial_\parallel \phi_\epsilon(t,x) -\partial_\parallel V_\epsilon(t,x_\parallel) \\
E^\perp_\epsilon= -\nabla^\perp \phi_\epsilon \\ -\epsilon^2 \partial^2_{\parallel} \phi_\epsilon - \Delta_{\perp} \phi_\epsilon = \rho_\epsilon - \int \rho_\epsilon dx_\perp\\
-\epsilon \partial_{\parallel}^2 V_\epsilon =\int \rho_\epsilon dx_\perp -1.\\ \end{array}
\right. \end{equation}
\begin{rques}\begin{enumerate} \item Notice here that we do not deal with the usual charge density and current density, since these are taken in the gyro-coordinates.
\item We mention that we could have considered the more general case: \begin{equation} \label{gene}
g_\epsilon(t,x,v) = \int_ M \rho_\epsilon^\Theta (t,x)\mathbbm{1}_{v_\parallel=v^\Theta_{\parallel,\epsilon}(t,x)} \nu(d\Theta) \mathbbm{1}_{v_g=0} \end{equation} where $(M,\Theta,\nu)$ is a probability space, which allows us to model more realistic plasmas than ``cold plasmas'' and covers many interesting physical data, like multi-sheet electrons or water-bag data (we refer for instance to \cite{BBBB} and references therein). We will not do so, for the sake of readability, but exactly the same analytic framework applies: the analogues of Theorems \ref{exi} and \ref{con} hold identically. We get in the end the system:
\begin{equation} \label{sysfinal} \left\{
\begin{array}{ll}
\partial_t \rho^\Theta + \nabla_\perp (E^\perp \rho^\Theta) + \partial_\parallel(v^\Theta_\parallel \rho^\Theta)= 0 \\
\partial_t v^\Theta_\parallel + \nabla_\perp (E^\perp v^\Theta_\parallel) + v^\Theta_\parallel \partial_\parallel(v^\Theta_\parallel) = -\partial_\parallel p(t,x_\parallel) \\
E^\perp= \nabla^\perp \Delta_\perp^{-1} \left(\int \rho^\Theta d\nu - \int \rho^\Theta dx_\perp d\nu\right)\\
\int \rho^\Theta(t,x) dx_\perp d\nu = 1.\\ \end{array}
\right. \end{equation} As before, the equations are coupled through $x_\perp$ and here also through the new parameter $\Theta$.
\item Actually, the choice: \begin{equation}
g_\epsilon(t,x,v) = \rho_\epsilon(t,x)\mathbbm{1}_{v=v_{\epsilon}(t,x)}
\end{equation}
leads to an ill-posed system. Indeed, we would have to solve in this case equations of the form $v^\perp_\epsilon=v_{\epsilon,\perp}(t,x-v^\perp_\epsilon)$, where $v_{\epsilon,\perp}$ is the unknown. We cannot say whether this relation is invertible, even locally.
\end{enumerate} \end{rques}
\end{ack}
\end{document} |
\begin{document}
\title{Finite 2-distance transitive graphs}
\begin{abstract} A non-complete graph $\Gamma$ is said to be $(G,2)$-distance transitive if $G$ is a subgroup of the automorphism group of $\Gamma$ that is transitive on the vertex set of $\Gamma$, and for any vertex $u$ of $\Gamma$, the stabilizer $G_u$ is transitive on the sets of vertices at distance~1 and~2 from $u$. This paper investigates the family of $(G,2)$-distance transitive graphs that are not $(G,2)$-arc transitive. Our main result is the classification of such graphs of valency not greater than~5. \end{abstract}
\section{Introduction}
Graphs that satisfy certain symmetry conditions have been a focus of research in algebraic graph theory. We usually measure the degree of symmetry of a graph by studying whether the automorphism group is transitive on certain natural sets formed by combining vertices and edges. For instance, $s$-arc transitivity requires that the automorphism group should be transitive on the set of $s$-arcs (see Section~\ref{sect:def} for precise definitions). The class of $s$-arc transitive graphs has been studied intensively, beginning with the seminal result of Tutte \cite{Tutte-1} that cubic $s$-arc transitive graphs must have $s\leqslant 5$. Later, in 1981, Weiss \cite{weiss}, using the classification of finite simple groups, showed that there are no $8$-arc transitive graphs of valency at least 3. For a survey on $s$-arc transitive graphs, see~\cite{seress}.
Recently, several papers have considered conditions on undirected graphs that are similar to, but weaker than, $s$-arc transitivity. For examples of such conditions, we mention local $s$-arc transitivity, local $s$-distance transitivity, $s$-geodesic transitivity, and $2$-path transitivity. Devillers et al.~\cite{locallysdist} studied the class of locally $s$-distance transitive graphs, using the normal quotient strategy developed for $s$-arc transitive graphs in~\cite{Praeger-4}. The condition of $s$-geodesic transitivity was investigated in several papers~\cite{DJLP-2,DJLP-prime,DJLP-compare}. A characterization of $2$-path transitive, but not $2$-arc transitive graphs was given by Li and Zhang~\cite{LZ-2path-2013}.
In this paper we study the class of $2$-distance transitive graphs. If $G$ is a subgroup of the automorphism group of a graph $\Gamma$, then $\Gamma$ is said to be $(G,2)$-distance transitive if $G$ acts transitively on the vertex set of $\Gamma$, and a vertex stabilizer $G_u$ is transitive on the neighborhood $\Gamma(u)$ of $u$ and on the second neighborhood $\Gamma_2(u)$ (see Section~\ref{sect:def}). The class of $(G,2)$-distance transitive graphs is larger than the class of $(G,2)$-arc transitive graphs, and in this paper we study the $(G,2)$-distance transitive graphs that are not $(G,2)$-arc transitive.
Our first theorem links the structure of $(G,2)$-distance transitive, but not $(G,2)$-arc transitive graphs to their valency and the value of the constant $c_2$ in the intersection array (see Definition \ref{definition: intersectionarray}).
\begin{theorem}\label{thm:valency stuff}
Let $\Gamma$ be a connected $(G,2)$-distance transitive,
but not $(G,2)$-arc transitive graph of girth $4$ and valency $k\geqslant 3$.
Then $2\leqslant c_2\leqslant k-1$ and the following are valid.
\begin{enumerate}
\item If $c_2=k-1$, then $\Gamma \cong \gridcomp{(k+1)}$ and
$G$ satisfies Condition~\ref{gridcondition}. \item If $c_2=2$, then $k$ is a prime power such
that $k\equiv 3 \pmod 4$ and $G_u$ acts $2$-homogeneously,
but not $2$-transitively on $\Gamma(u)$ for each $u\in V\Gamma$.
\end{enumerate}
\end{theorem}
The following corollary is a characterization of the family of connected $(G,2)$-distance transitive, but not $(G,2)$-arc transitive graphs of girth $4$ and prime valency.
\begin{corollary}\label{thm:primeval}
Let $\Gamma$ be a connected $(G,2)$-distance transitive, but not $(G,2)$-arc transitive graph of girth $4$ and prime valency $p$, and let $u\in V \Gamma$.
Then the following are valid.
\begin{enumerate}
\item Either $\Gamma\cong \overline{\grid 2{(p+1)}}$, or $c_2 \mid p-1$ and $2\leqslant c_2\leqslant (p-1)/2$.
\item
If $ c_2=2$, then $p\equiv 3 \pmod 4$ and $G_u$ is $2$-homogeneous, but not $2$-transitive on $\Gamma(u)$.
\item If $ c_2=(p-1)/2$, then $|\Gamma_2(u)|=2p$, and $G_u$
is imprimitive on $\Gamma_2(u)$.
\end{enumerate} \end{corollary}
Finally, our third main result determines all the possible $(G,2)$-distance transitive, but not $(G,2)$-arc transitive graphs of valency at most 5.
\begin{theorem}\label{thm:small val} Let $\Gamma$ be a connected $(G,2)$-distance transitive, but not $(G,2)$-arc transitive graph of valency $k\leqslant 5$. Then $\Gamma$ and $G$ must be as in one of the rows of Table~\ref{maintable}. \end{theorem}
{ \begin{table} \begin{center}
\begin{tabular}{|l|c|c|l|l|} \hline $\Gamma$ & valency & girth & $G$ & Reference \\ \hline $\gridcomp{4}$ & $3$ & $4$ & satisfies Condition~\ref{gridcondition} & Section \ref{sec:gridcomp} \\ \hline Octahedron & $4$ & $3$ & {\setlength\extrarowheight{0pt}\begin{tabular}{l}
$G\leqslant S_2\wr S_3$,\\ $|S_2\wr S_3:G|\in\{1,2\}$,\\ $G$ projects onto $S_3$ \end{tabular}} & Lemma \ref{lem:Octahedron} \\ \hline $\Hamming(2,3)$ & $4$ & $3$ & {\setlength\extrarowheight{0pt}\begin{tabular}{l}
$G\leqslant S_3\wr S_2$,\\ $|S_3\wr S_2:G|\in\{1,2\}$\\ $G$ projects onto $S_2$ \end{tabular}} & Proposition \ref{2dtval4-girth3}\\ \hline {\setlength\extrarowheight{0pt}\begin{tabular}{l} the line graph of a connected\\ $(G,3)$-arc transitive graph \end{tabular}} & 4 & 3 & & Proposition \ref{2dtval4-girth3} \\ \hline $\gridcomp 5$ & $4$ & $4$ & satisfies Condition~\ref{gridcondition} & Section \ref{sec:gridcomp}\\ \hline Icosahedron & $5$ & $3$ & $G\in\{A_5,A_5\times C_2\}$ & Lemma \ref{lem:Icosahedron}\\ \hline $\gridcomp 6$ & $5$ & $4$ & satisfies Condition~\ref{gridcondition} & Section \ref{sec:gridcomp}\\ \hline \end{tabular} \end{center} \caption{$(G,2)$-distance transitive, but not $(G,2)$-arc transitive graphs of valency at most 5} \label{maintable} \end{table}}
In Section~\ref{sect:def} we state the most important definitions and some basic results related to $2$-distance transitivity. In Section~\ref{sect:exam}, we study some examples, such as grids, their complements, Hamming graphs, complete bipartite graphs, and Platonic solids from the point of view of $2$-distance transitivity. In Section~\ref{sect:girth4}, we consider $2$-distance transitive graphs of girth 4. Finally, the proofs of our main results are given in Section~\ref{sect:proofs}.
\subsection*{Acknowledgment} During the preparation of this article, the first and second authors held {\em Ci\^encia sem Fronteiras/Jovem Talento} and {\em Programa Nacional de P\'os-doutorado} fellowships, respectively, awarded by the Brazilian Government. In addition, the second author was also awarded the NNSF grant 11301230 (China). The third author was supported by the
research projects 302660/2013-5 {\em (CNPq, Produtividade em Pesquisa)},
475399/2013-7 {\em (CNPq, Universal)}, and
the APQ-00452-13 {\em (Fapemig, Universal)}.
\section{Basic Definitions and useful facts}\label{sect:def}
In this paper, graphs are finite, simple, and undirected. For a graph $\Gamma$, let $\VGamma$ and $\Aut\Gamma$ denote its vertex set and automorphism group, respectively. Let $\Gamma$ be a graph and let $u$ and $v$ be vertices in $\Gamma$ that belong to the same connected component. Then the {\em distance} between $u$ and $v$ is the length of a shortest path between $u$ and $v$ and is denoted by $d_{\Gamma}(u,v)$. We denote by $\Gamma_s(u)$ the set of vertices at distance $s$ from $u$ in $\Gamma$ and we set $\Gamma(u)=\Gamma_1(u)$. The \emph{diameter} $\diam\Gamma$ of $\Gamma$ is the greatest distance between vertices in $\Gamma$. Let $G\leqslant\Aut\Gamma$ and let $s\leqslant\diam\Gamma$. We say that $\Gamma$ is \emph{$(G,s)$-distance transitive} if $G$ is transitive on $\VGamma$ and $G_u$ is transitive on $\Gamma_i(u)$ for all $i\leqslant s$.
If $\Gamma$ is $(G,s)$-distance transitive for $s= \diam \Gamma$, then we simply say that it is \emph{$G$-distance transitive}. By our definition, if $s>\diam\Gamma$, then $\Gamma$ is not $(G,s)$-distance transitive. For instance, the complete graph is not $(G,2)$-distance transitive for any group $G$.
In the characterization of $(G,s)$-distance transitive graphs, the following constants are useful. Our definition is inspired by the concept of intersection arrays defined for distance regular graphs (see \cite{BCN}).
\begin{definition}\label{definition: intersectionarray} Let $\Gamma$ be a $(G,s)$-distance transitive graph, $u\in\VGamma$, and let $v\in\Gamma_i(u)$, $i\leqslant s$. Then the number of edges from $v$ to $\Gamma_{i-1}(u)$, $\Gamma_i(u)$, and $\Gamma_{i+1}(u)$ does not depend on the choice of $v$ and these numbers are denoted, respectively, by $c_i$, $a_i$, $b_i$. \end{definition}
Clearly we have that $a_i+b_i+c_i$ is equal to the valency of $\Gamma$ whenever the constants are well-defined. Note that for $(G,2)$-distance transitive graphs, the constants are always well-defined for $i=1,\ 2$.
A sequence $(v_0,\ldots,v_{s})$ of vertices of a graph is said to be an {\em $s$-arc} if $v_i$ is adjacent to $v_{i+1}$ for all $i\in\{0,\ldots,s-1\}$ and $v_i\neq v_{i+2}$ for all $i\in\{0,\ldots,s-2\}$. A graph $\Gamma$ is called \emph{$(G,s)$-arc transitive} if $G$ acts transitively on the set of vertices and on the set of $s$-arcs of $\Gamma$. (We note that some authors define $(G,s)$-arc transitivity requiring only that $G$ be transitive on the set of $s$-arcs.) It is well known that $\Gamma$ is $(G,2)$-arc transitive if and only if $G$ is transitive on $\VGamma$ and the stabilizer $G_u$ is 2-transitive on $\Gamma(u)$ for some, and hence for all, $u\in\VGamma$. We will use this fact without further reference in the rest of the paper.
The {\em girth} of a graph $\Gamma$ is the length of a shortest cycle in $\Gamma$. Let $\Gamma$ be a connected $(G,2)$-distance transitive graph. If $\Gamma$ has girth at least 5, then for any two vertices $u$ and $v$ with $d_{\Gamma}(u,v)=2$, there exists a unique 2-arc between $u$ and $v$. Hence if $\Gamma$ is $(G,2)$-distance transitive, then it is $(G,2)$-arc transitive. On the other hand, if the girth of $\Gamma$ is 3, and $\Gamma$ is not a complete graph, then some $2$-arcs are contained in a triangle, while some are not. Hence $\Gamma$ is not $(G,2)$-arc transitive. We record the conclusion of this argument in the following lemma.
\begin{lemma}\label{lem:girth5}
Suppose that $\Gamma$ is a $(G,2)$-distance transitive graph. If $\Gamma$ has girth at least $5$, then $\Gamma$ is $(G,2)$-arc transitive. If $\Gamma$ has
girth~$3$, then $\Gamma$ is not $(G,2)$-arc transitive. \end{lemma}
If $\Gamma$ has girth 4, then $\Gamma$ can be $(G,2)$-distance transitive, but not $(G,2)$-arc transitive. An infinite family of examples can be constructed using Lemma~\ref{lem:gridcomp}.
We close this section with two results from permutation group theory and one on $2$-geodesic transitive graphs. They will be needed in our analysis in Sections~\ref{sect:girth4}--\ref{sect:proofs}. Recall that a permutation group $G$ acting on $\Omega$ is said to be {\em $2$-homogeneous} if $G$ is transitive on the set of $2$-subsets of $\Omega$.
\begin{lemma}[\cite{kantor}] \label{2dt-2homonot2t}
Let $G$ be a $2$-homogeneous permutation group of
degree $n$ which is not $2$-transitive. Then
the following statements are valid: \begin{enumerate} \item $n=p^e\equiv 3 \pmod 4$ where $p$ is a prime;
\item $|G|$ is odd and is divisible by $p^e(p^e-1)/2$. \end{enumerate} \end{lemma}
\begin{lemma}{\rm(\cite[Theorem 1.51]{Gorenstein-1})}\label{val-2p-1} If $G$ is a primitive, but not $2$-transitive permutation group on $2p$ letters where $p$ is a prime, then $p=5$ and $G\cong A_5$ or $S_5$. \end{lemma}
An \emph{$s$-geodesic} in a graph $\Gamma$ is a shortest path of length $s$ between vertices in $\Gamma$. In particular, a vertex triple $(u,v,w)$ with $v$ adjacent to both $u$ and $w$ is called a \emph{$2$-geodesic} if $u$ and $w$ are not adjacent. A non-complete graph $\Gamma$ is said to be \emph{$(G,2)$-geodesic transitive} if $G$ is transitive both on the arc set and on the set of 2-geodesics of $\Gamma$. Recall that the {\em line graph} $L(\Gamma)$ of a graph $\Gamma$ is the graph whose vertices are the edges of $\Gamma$, and two vertices of $L(\Gamma)$ are adjacent if and only if the corresponding edges of $\Gamma$ share a vertex. For a natural number $n$, we denote by $\K_n$ the {\em complete graph} on $n$ vertices.
\begin{lemma}{\rm (\cite[Theorem 1.3]{DJLP-2})}\label{2gt-val4}
Let $\Gamma$ be a connected, non-complete graph of valency $4$ and girth $3$.
Then $\Gamma$ is $(G,2)$-geodesic transitive if and only if either $\Gamma=L(\K_4)$ or $\Gamma=L(\Sigma)$, where $\Sigma$ is a connected cubic $(G,3)$-arc transitive graph. \end{lemma}
We observe that the line graph of $\K_4$ is precisely the octahedral graph (see Lemma \ref{lem:Octahedron}).
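This identification is small enough to confirm by brute force. The following sketch (plain Python; the helper names are ours, not from any library) builds $L(\K_4)$ and the octahedron $\K_{2,2,2}$ as adjacency dictionaries and tests them for isomorphism by enumerating all $6!$ vertex bijections.

```python
from itertools import combinations, permutations

# L(K4): vertices are the 6 edges of K4; two are adjacent iff they share an endpoint.
edges_k4 = list(combinations(range(4), 2))
line_adj = {e: {f for f in edges_k4 if f != e and set(e) & set(f)} for e in edges_k4}

# Octahedron K_{2,2,2}: vertices 0..5; i ~ j iff {i, j} is not an antipodal pair.
antipodal = [{0, 3}, {1, 4}, {2, 5}]
oct_adj = {v: {w for w in range(6) if w != v and {v, w} not in antipodal}
           for v in range(6)}

def isomorphic(adj1, adj2):
    """Brute-force isomorphism test; feasible for 6 vertices (6! = 720 maps)."""
    v1, v2 = list(adj1), list(adj2)
    for perm in permutations(v2):
        m = dict(zip(v1, perm))
        # A bijection is an isomorphism iff it preserves adjacency on every pair.
        if all((m[b] in adj2[m[a]]) == (b in adj1[a])
               for a, b in combinations(v1, 2)):
            return True
    return False
```

Both graphs are $4$-regular on six vertices, and the enumeration confirms they are isomorphic.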
\section{Constructions, Examples \& non-Examples}\label{sect:exam}
\subsection{Complements of grids and complete bipartite graphs}\label{sec:gridcomp}
For $n,\ m\geqslant 2$, we define the $\grid{n}{m}$ as the graph having vertex set $\{(i,j) \mid 1 \leqslant i\leqslant n,\ 1\leqslant j \leqslant m\}$, and two distinct vertices $(i,j)$ and $(r,s)$ are adjacent if and only if $i=r$ or $j=s$.
The automorphism group of the $\grid{n}{m}$, when $n \neq m$, is the direct product $S_n \times S_m$; when $n=m$, it is $S_n \wr S_2$. The complement $\overline{\Gamma}$ of a graph $\Gamma$ is the graph with vertex set $V\Gamma$, and two vertices are adjacent in $\overline{\Gamma}$ if and only if they are not adjacent in $\Gamma$. Clearly, $\Aut\Gamma=\Aut\overline\Gamma$. Of particular interest to us is the complement graph $\gridcomp{m}$. The graph in Figure \ref{fig:gridcomp} is the $\gridcomp{4}$. Observe that for $\Gamma=\gridcomp{m}$, we have $\diam\Gamma = 3$, and \[ c_1 = 1,\ a_1 = 0,\ b_1 = m-2,\ c_2 = m-2,\ a_2 = 0,\ b_2 = 1. \]
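The diameter and intersection numbers just displayed can be recomputed mechanically by breadth-first search. The sketch below (plain Python; `grid_complement` and `sphere_profile` are our own names) does this for $\gridcomp{4}$, recording at each vertex the number of edges into the previous, same, and next distance sphere.

```python
from collections import deque

def grid_complement(m):
    """Complement of the 2 x m grid: (r, i) ~ (s, j) iff r != s and i != j."""
    V = [(r, i) for r in (1, 2) for i in range(1, m + 1)]
    return {v: {w for w in V if w[0] != v[0] and w[1] != v[1]} for v in V}

def sphere_profile(adj, u):
    """BFS distances from u, plus the set of (c_i, a_i, b_i) triples seen in each sphere."""
    dist, queue = {u: 0}, deque([u])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    prof = {}
    for v, i in dist.items():
        if i > 0:
            triple = tuple(sum(1 for w in adj[v] if dist[w] == i + t)
                           for t in (-1, 0, 1))
            prof.setdefault(i, set()).add(triple)
    return dist, prof

m = 4
dist, prof = sphere_profile(grid_complement(m), (1, 1))
```

Each sphere yields a single triple, confirming that the constants are well-defined here and agree with the displayed array.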
\begin{figure}
\caption{The grid complement $\gridcomp{4}$; on the right, displayed according to a distance-partition.}
\label{fig:gridcomp}
\end{figure}
\begin{condition}\label{gridcondition} Let $m\geqslant 3$ and let $\pi: S_2\times S_m\rightarrow S_2$ be the natural projection. We say that a subgroup $G$ of $S_2\times S_m$ satisfies Condition~\ref{gridcondition} if $G\pi=S_2$ and $G\cap S_m$ is a $2$-transitive, but not $3$-transitive subgroup of $S_m$. \end{condition}
\begin{lemma}\label{lem:gridcomp} Let $\Gamma = \gridcomp{m}$ with $m\geqslant 4$, and let $G\leqslant \Aut\Gamma =S_2\times S_m$. Then $\Gamma$ is $(G,2)$-distance transitive, but not $(G,2)$-arc transitive if and only if $G$ satisfies Condition \ref{gridcondition}. \end{lemma} \begin{proof} Let $\Delta_1=\{(1,i)\mid i=1,2,\ldots,m\}$ and $\Delta_2=\{(2,i)\mid i=1,2,\ldots,m\}$ be the two biparts of $V\Gamma$. Let $u=(1,1)\in \Delta_1$. Suppose first that $\Gamma$ is $(G,2)$-distance transitive, but not $(G,2)$-arc transitive. Since $G$ is transitive on $V\Gamma$, $G$ projects onto $S_2$, that is, $G\pi=S_2$. Let $H=G\cap S_m$. Then $G_u=H_1$, $\Delta_2=\Gamma(u)\cup \{(2,1)\}$ and $\Gamma_2(u)=\Delta_1\setminus \{u\}$. Since $\Gamma$ is $(G,2)$-distance transitive, $G_u=H_1$ is transitive on both $\Gamma(u)$ and $\Gamma_2(u)$. Hence $H_1$ is transitive on $\{2,\ldots,m\}$, and so $H$ is a $2$-transitive subgroup of $S_m$. Since $\Gamma$ is not $(G,2)$-arc transitive, $G_u=H_1$ is not 2-transitive on $\{2,3,\ldots,m\}$, so $H$ is not $3$-transitive. Thus $G$ satisfies Condition \ref{gridcondition}.
Conversely, suppose that $G$ satisfies Condition \ref{gridcondition}. Then $H=G\cap S_m$ is transitive on $\Delta_1$ and $\Delta_2$, and $G$ swaps these two sets. Thus $G$ is transitive on $\VGamma$. As $H$ is a 2-transitive, but not 3-transitive subgroup of $S_m$, $H_1$ is transitive, but not 2-transitive on $\Gamma(u)=\{(2,i)\mid i=2,\ldots,m\}$ and on $\Gamma_2(u)=\{(1,i)\mid i=2,\ldots,m\}$. Hence $\Gamma$ is $(G,2)$-distance transitive, but not $(G,2)$-arc transitive. \end{proof}
A list of $2$-transitive, but not $3$-transitive permutation groups can be found in~\cite[pp.~194-197]{cameron}.
Complete bipartite graphs appear frequently in this paper. Since $\K_{m,n}$ with $m\neq n$ is not regular, we study $\K_{m,m}$. The full automorphism group of $\K_{m,m}$ is $S_m \wr S_2$, and this automorphism group acts $2$-arc transitively on $\K_{m,m}$. In the lemma below, we show that there is no $2$-distance transitive
action on $\K_{m,m}$ which is not $2$-arc transitive.
\begin{lemma}\label{lem:complete bipartite} Let $\Gamma\cong \K_{m,m}$ with $m\geqslant 2$ and let $G\leqslant\Aut \Gamma$. Then $\Gamma$ is $(G,2)$-distance transitive if and only if it is $(G,2)$-arc transitive. \end{lemma}
\begin{proof}[Proof] If $\Gamma$ is $(G,2)$-arc transitive, then, by definition, it is $(G,2)$-distance transitive. Conversely, suppose that $\Gamma$ is $(G,2)$-distance transitive with some $G\leqslant\Aut \Gamma$. Let $\VGamma=\Delta_1\cup\Delta_2$ be the bipartition of $V \Gamma$ where $\Delta_1=\{(1,i)\mid i=1,\ldots,m\}$ and $\Delta_2=\{(2,i)\mid i=1,\ldots,m\}$. The full automorphism group of $\Gamma$ is $S_m\wr S_2$. Since $G\leqslant\Aut \Gamma$ is assumed to be vertex transitive,
$G_{\Delta_1}=G_{\Delta_2}$ is transitive on both $\Delta_1$ and $\Delta_2$. Set $G_0=G_{\Delta_1}$. Thus $G_0$ is a subdirect subgroup in $M^{(1)}\times M^{(2)}$ where $M^{(i)}\leqslant S_m$ and $M^{(i)}$ is the image of $G_0$ under the $i$-th coordinate projection $S_m\times S_m\rightarrow S_m$. Further, $G$ projects onto $S_2$ under the natural projection $\Aut \Gamma\rightarrow S_2$. If $x=(x_1,x_2)\sigma\in G$ with $x_i\in S_m$ and $\sigma=(1,2)\in S_2$, then $(M^{(1)})^{x_1}=M^{(2)}$, and so $M^{(1)}$ and $M^{(2)}$ are conjugate subgroups of $S_m$. Hence possibly replacing $G$ with its conjugate $G^{(x_1,1)}$, we may assume without loss of generality that $M^{(1)}=M^{(2)}=M$.
Let $u=(1,1)\in\VGamma$. Then $\Gamma(u)=\Delta_2$ and $\Gamma_2(u)=\Delta_1\setminus \{u\}$. Further, $G_u$ stabilizes $\Delta_1$, and hence $G_u\leqslant G_0$. Since $\Gamma$ is $(G,2)$-distance transitive, it follows that $G_u$ is transitive on both $\Delta_2$ and $\Delta_1\setminus \{u\}$. Set $H=M_1$. Since $G_u\leqslant H\times M$, the stabilizer $H$ must be transitive on $\{2,\ldots,m\}$, and hence $M$ is a 2-transitive subgroup of $S_m$. In particular $M$ contains a unique minimal normal subgroup $N$, and this minimal normal subgroup is either elementary abelian or simple. Since $N$ is transitive, we can write $M=NH$.
We have that $G_0$ contains $1\times N$ if and only if it contains $N\times 1$. Hence we need to consider two cases: the first is when $G_0$ contains $N\times N$ and the second is when it does not.
Suppose first that $G_0$ contains $N\times N$. In particular, $1\times N\leqslant G_u$. For all $h_2\in H$, there is some $n_1h_1\in M$ with $n_1\in N$ and $h_1\in H$ such that $(n_1h_1,h_2)\in G_0$. Since $N\times 1\leqslant G_0$, this implies that $(h_1,h_2)\in G_0$ and also $(h_1,h_2)\in G_u$. Thus $G_u$ projects onto $NH=M$ by the second projection. Hence $G_u$ is 2-transitive on $\Delta_2=\Gamma(u)$, which shows that $\Gamma$ is $(G,2)$-arc transitive.
Suppose now that $N\times N$ is not contained in $G_0$. Since $G_0\cap (1\times M)$ is a normal subgroup of $M$ and $N$ is the unique minimal normal subgroup of $M$, we find that $G_0\cap (1\times M)=1$ and, similarly, that $G_0\cap (M\times 1)=1$. Therefore $G_0$ is a diagonal subgroup; that is, $$ G_0=\{(t,\alpha(t))\mid t\in M\} $$ with some $\alpha\in\Aut M$. As $H$ is the stabilizer of $1$ in $M$, we have that $G_u=\{(t,\alpha(t))\mid t\in H\}$. On the other hand, $G_u$ is transitive on $\Delta_2$, and hence $\alpha(H)$ is a transitive subgroup of $M$. Thus we obtain the factorization $M=H\alpha(H)$. The following possibilities are listed in~\cite[Theorem~1.1]{baum}. \begin{enumerate} \item[(a)] Either $M$ is affine and is isomorphic to $[(\F{2})^3\rtimes \PSL(3,2)]\wr X$ where $X$ is a transitive permutation group; \item[(b)] or $\Soc M\cong \pomegap 8q$, $\Sp(4,q)$ ($q$ even with $q\geqslant 4$), $A_6$, $M_{12}$. \end{enumerate} In case~(a), if $X\neq 1$, then $M$ is contained in a wreath product in product action, and such a wreath product is never 2-transitive. Thus $X=1$, $m=8$, $M=(\F{2})^3\rtimes \PSL(3,2)$, and $G_u\cong\PSL(3,2)$ acting transitively on $\Delta_2$. However, this transitive action of $\PSL(3,2)$ is $2$-transitive, which gives that $\Gamma$ is $(G,2)$-arc transitive.
In case~(b), inspecting the list of almost simple 2-transitive groups in~\cite{cameron}, we find that there are no 2-transitive groups with socle $\pomegap 8q$ or $\Sp(4,q)$ with $q$ even and $q\geqslant 4$. Hence $\Soc M= A_6$ or $M_{12}$. Then $G_u$ is either $A_5$, $S_5$ or $M_{11}$ acting transitively on $\Delta_2$. These actions are all 2-transitive, which implies that $\Gamma$ is $(G,2)$-arc transitive. \end{proof}
\subsection{Hamming graphs and platonic solids}
For $d,\ q\geqslant 2$, the vertex set of the {\em Hamming graph} $\Hamming(d,q)$ is the set $\{1,\ldots,q\}^d$ and two vertices $u=(\alpha_1,\ldots,\alpha_d)$ and $v=(\beta_1,\ldots,\beta_d)$ are adjacent if and only if their Hamming distance is one; that is, they differ in precisely one coordinate. The Hamming graph has diameter $d$ and has girth $4$ when $q=2$ and girth $3$ when $q > 2$. The wreath product $W = S_q\wr S_d$ is the full automorphism group of $\Hamming(d,q)$, acting distance transitively, see \cite[Section 9.2]{BCN}. The Hamming graphs are well studied, due in part to their applications to coding theory. Hamming graphs arise in two cases of our research. The first case is the cube $\Gamma=\Hamming(3,2)$. The standard construction of the cube graph is precisely the same as for the Hamming graphs with $d=3$ and $q=2$, and so this graph is the `standard' cube with $8$ vertices (the cube $\Hamming(3,2)$ is also isomorphic to the grid complement $\gridcomp{4}$). The second case is $\Gamma=\Hamming(d,2)$ when $d>2$; see Lemma~\ref{lem:cube1}.
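The isomorphism $\Hamming(3,2)\cong\gridcomp{4}$ mentioned above can be verified mechanically, since both graphs have only $8$ vertices. The sketch below (plain Python; the helper names are ours) constructs both adjacency dictionaries and tests isomorphism by enumerating vertex bijections.

```python
from itertools import combinations, permutations, product

def hamming(d, q):
    """Hamming graph H(d, q): length-d words over {0,...,q-1},
    adjacent iff they differ in exactly one coordinate."""
    V = list(product(range(q), repeat=d))
    return {u: {v for v in V if sum(a != b for a, b in zip(u, v)) == 1} for u in V}

def grid_complement(m):
    """Complement of the 2 x m grid: (r, i) ~ (s, j) iff r != s and i != j."""
    V = [(r, i) for r in (1, 2) for i in range(1, m + 1)]
    return {v: {w for w in V if w[0] != v[0] and w[1] != v[1]} for v in V}

def isomorphic(adj1, adj2):
    """Brute-force isomorphism test; feasible for 8 vertices (8! maps)."""
    v1, v2 = list(adj1), list(adj2)
    if len(v1) != len(v2):
        return False
    for perm in permutations(v2):
        m = dict(zip(v1, perm))
        if all((m[b] in adj2[m[a]]) == (b in adj1[a])
               for a, b in combinations(v1, 2)):
            return True
    return False
```

Conceptually the isomorphism is transparent: the cube is $\K_{4,4}$ minus the perfect matching pairing each binary word with its complement, which is exactly the grid complement $\gridcomp{4}$.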
Some Platonic solids (the cube, the octahedron and the icosahedron) appear in some form in our investigation. The cube appears as the $\gridcomp{4}$. We discuss the octahedron and the icosahedron in more detail. The octahedron (see Figure \ref{fig:Octahedron}) has 6 vertices and diameter 2. Its automorphism group $S_2 \wr S_3$ acts imprimitively, preserving the partition of the vertices into antipodal pairs. We denote by $\pi$ the natural projection $S_2 \wr S_3\rightarrow S_3$. \begin{figure}
\caption{The octahedron, displayed according to its distance-partition.}
\label{fig:Octahedron}
\end{figure}
\begin{lemma}\label{lem:Octahedron} Let $\Gamma$ be the octahedron, and let $G \leqslant \Aut\Gamma $. Then $\Gamma$ is not $(G,2)$-arc transitive. Further, $\Gamma$ is $(G,2)$-distance transitive, if and only if either $G= S_2\wr S_3$, or $G$ is an index $2$ subgroup of $S_2\wr S_3$ and $G\pi =S_3$. \end{lemma} \begin{proof}
Since $\Gamma$ is non-complete of girth 3, $\Gamma$ is not $(G,2)$-arc transitive. Now assume that $\Gamma$ is $(G,2)$-distance transitive. Let $u=a$ be the vertex in the graph of Figure \ref{fig:Octahedron}. Since $G_u$ is transitive on $\Gamma(u)$ and $|\Gamma(u)|=4$,
$|G_u|$ is divisible by 4. Further, $|G:G_u|=6$, and so $|G|$ is divisible by 24. Suppose that $G$ is a proper subgroup of $\Aut
\Gamma=S_2\wr S_3$. Then $|G|=24$. As $|\Aut \Gamma|=48$, $G$ is an index $2$ subgroup of $S_2\wr S_3$. The three antipodal blocks of $V\Gamma$ in the graph of Figure \ref{fig:Octahedron} are $\Delta_1=\{a,a'\}$, $\Delta_2=\{b,b'\}$ and $\Delta_3=\{c,c'\}$. Since $G$ is transitive on $V\Gamma$, $G$ is transitive on the three antipodal blocks. Thus the image $G\pi$ of $G$ in $S_3$ is $\mathbb{Z}_3$ or $S_3$. Assume $G\pi=\mathbb{Z}_3$. Then $G_u$ acts on the three antipodal blocks trivially. Hence $G_u$ does not map $\Delta_2$ to $\Delta_3$, contradicting that $G_u$ is transitive on $\Gamma(u)$. Therefore $G\pi=S_3$. A simple calculation shows that the conditions stated in the lemma are sufficient for $(G,2)$-distance transitivity. \end{proof}
The icosahedron has automorphism group $S_2 \times A_5$ acting arc transitively.
\begin{lemma}\label{lem:Icosahedron}
Let $\Gamma$ be the icosahedron, and let $G \leqslant \Aut\Gamma $. The
graph $\Gamma$ is $(G,2)$-distance transitive if and only if
$G = S_2 \times A_5$ or $G=A_5$. In particular, $\Gamma$ is not $(G,2)$-arc transitive. \end{lemma} \begin{proof}
By \cite[Theorem 1.5]{DJLP-prime}, $\Aut\Gamma\cong S_2 \times A_5$. It is easy to see that for $G\in\{S_2\times A_5,A_5\}$,
$\Gamma$ is $(G,2)$-distance
transitive. Conversely, suppose that $\Gamma$ is $(G,2)$-distance transitive.
Then $G$ is transitive on $V\Gamma$ and $G_u$ is transitive on $\Gamma(u)$, and so $12=|V\Gamma|$ divides $|G|$ and $|\Gamma(u)|=5$ divides $|G_u|$. Thus 60 divides $|G|$. Since $G\leqslant \Aut\Gamma\cong S_2 \times A_5$, it follows that $G = S_2 \times A_5$ or $G=A_5$. Finally, as $\Gamma$ is a non-complete graph of girth 3, $\Gamma$ is not $(G,2)$-arc transitive. \end{proof}
\begin{figure}
\caption{The icosahedron, displayed according to its distance-partition.}
\label{fig:Icosahedron}
\end{figure}
\section{Graphs of girth $4$}\label{sect:girth4}
By the assertion of Lemma \ref{lem:girth5}, to study the family of $(G,2)$-distance transitive, but not $(G,2)$-arc transitive graphs, we only need to consider the graphs with girth $3$ or $4$. This section is devoted to the girth $4$ case, and the structure of such graphs depends strongly upon the value of the constant $c_2$ as in Definition~\ref{definition: intersectionarray}. We begin with a simple combinatorial result:
\begin{lemma}\label{lem:sizeofgamma2}
Let $\Gamma$ be a $(G,2)$-distance transitive graph with valency $k$ and girth at least $4$. Let $u\in V\Gamma$. Then there are $k(k-1)$ edges between $\Gamma(u)$ and $\Gamma_2(u)$, and $k(k-1) = c_2 |\Gamma_2(u)|$. \end{lemma} \begin{proof}
Consider a vertex $v \in \Gamma(u)$. Since $\Gamma$ has girth more than $3$, all of the neighbors of $v$, except for $u$, lie in $\Gamma_2(u)$. Thus, there are $k-1$ edges from $v$ to $\Gamma_2(u)$. Since there are $k$ such vertices $v$, there are $k(k-1)$ edges between $\Gamma(u)$ and $\Gamma_2(u)$. As $\Gamma$ is $(G,2)$-distance transitive, the equation $k(k-1) = c_2 |\Gamma_2(u)|$ follows by counting the same quantity from the other side: each vertex in $\Gamma_2(u)$ is incident with exactly $c_2$ edges between $\Gamma_2(u)$ and $\Gamma(u)$. \end{proof}
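The double count in this proof can be illustrated on a concrete girth-$4$ example, the Hamming graph $\Hamming(4,2)$, where $k=4$, $c_2=2$ and $|\Gamma_2(u)|=6$. The sketch below (plain Python; helper names ours) recomputes these quantities by breadth-first search and checks the identity $k(k-1)=c_2|\Gamma_2(u)|$.

```python
from collections import deque
from itertools import product

def hamming_binary(d):
    """H(d, 2): binary words of length d, adjacent iff they differ in one bit."""
    V = list(product((0, 1), repeat=d))
    return {u: {v for v in V if sum(a != b for a, b in zip(u, v)) == 1} for u in V}

def bfs(adj, u):
    dist, queue = {u: 0}, deque([u])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

adj = hamming_binary(4)          # valency k = 4, girth 4
u = (0, 0, 0, 0)
dist = bfs(adj, u)
k = len(adj[u])
sphere2 = [v for v in adj if dist[v] == 2]
# c_2: edges from one vertex of Gamma_2(u) back into Gamma(u)
c2 = sum(1 for w in adj[sphere2[0]] if dist[w] == 1)
```

Here $k(k-1)=12$ and $c_2|\Gamma_2(u)|=2\cdot 6=12$, as the lemma predicts.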
For a vertex $u\in\VGamma$, we denote by $G_u^{\Gamma_i(u)}$ the permutation group induced by $G_u$ on $\Gamma_i(u)$.
\begin{lemma}\label{lem:c_2=2}
Let $\Gamma$ be a $(G,2)$-distance transitive, but not $(G,2)$-arc transitive graph with valency $k$ and girth $4$, and suppose that $c_2=2$.
Then $G_u$ acts $2$-homogeneously, but not $2$-transitively on
$\Gamma(u)$ for each $u\in \VGamma$. Further,
$k=p^e\equiv 3 \pmod 4$ where $p$ is a prime.
\end{lemma} \begin{proof} Since $c_2=2$, each vertex $w\in\Gamma_2(u)$ uniquely determines a $2$-subset in $\Gamma(u)$, namely the intersection $\Gamma(w) \cap \Gamma(u)$. We claim that the map $\psi:w\mapsto\Gamma(w)\cap\Gamma(u)$ is a bijection between $\Gamma_2(u)$ and the set of 2-subsets of $\Gamma(u)$. Suppose that $\psi(w_1)=\psi(w_2)=\{v_1,v_2\}$. Then $u,\ w_1,\ w_2\in\Gamma(v_1)$ and $v_2\in\Gamma_2(v_1)$. On the other hand, as $v_2$ is adjacent to $u,\ w_1,\ w_2$, there are three edges from $v_2$ to $\Gamma(v_1)$, which is impossible, as $c_2=2$. Hence $\psi$ is injective. Since $\Gamma$ has girth 4, it follows from Lemma
\ref{lem:sizeofgamma2} that $|\Gamma_2(u)| = k(k-1)/2=\binom{k}{2}$, and so the map $\psi$ is a bijection. Hence $G_u$ is transitive on $\Gamma_2(u)$ if and only if it is transitive on the set of $2$-subsets in $\Gamma(u)$, that is, $G_u^{\Gamma(u)}$ acts $2$-homogeneously on $\Gamma(u)$. Since $\Gamma$ is not $(G,2)$-arc transitive, $G_u^{\Gamma(u)}$ is not $2$-transitive on $\Gamma(u)$. Thus by Lemma \ref{2dt-2homonot2t}, $k=p^e\equiv 3 \pmod 4$ where $p$ is a prime.
\end{proof}
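The bijection $\psi$ from the preceding proof can be observed concretely in $\Hamming(5,2)$, where $c_2=2$. The sketch below (plain Python; helper names ours) sends each vertex $w\in\Gamma_2(u)$ to the pair $\Gamma(w)\cap\Gamma(u)$ and checks that this map is a bijection onto the $2$-subsets of $\Gamma(u)$. For $u$ the all-zeros word, the distance from $u$ is the Hamming weight, so the spheres can be read off directly.

```python
from itertools import product
from math import comb

def hamming_binary(d):
    """H(d, 2): binary words of length d, adjacent iff they differ in one bit."""
    V = list(product((0, 1), repeat=d))
    return {u: {v for v in V if sum(a != b for a, b in zip(u, v)) == 1} for u in V}

d = 5
adj = hamming_binary(d)
u = (0,) * d
# With u the all-zeros word, Gamma(u) is the weight-1 words, Gamma_2(u) the weight-2 words.
sphere1 = {v for v in adj if sum(v) == 1}
sphere2 = [v for v in adj if sum(v) == 2]
# psi sends w in Gamma_2(u) to its pair of common neighbours with u.
psi = {w: frozenset(adj[w] & sphere1) for w in sphere2}
```

Every image is a $2$-subset and the images are pairwise distinct, so $\psi$ is a bijection onto the $\binom{d}{2}$ two-subsets of $\Gamma(u)$, exactly as in the proof.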
In the following lemma we characterize $(G,2)$-distance transitive, but not $(G,2)$-arc transitive Hamming graphs over an alphabet of size $2$.
\begin{lemma}\label{lem:cube1} Let $\Gamma = \Hamming(d,2)$ with $d > 2$, and let $G \leqslant\Aut\Gamma \cong S_2\wr S_{d}$. Then $\Gamma$ is $(G,2)$-distance transitive, but not $(G,2)$-arc transitive if and only if $G=S_2\wr H$ where $H$ is a $2$-homogeneous, but not $2$-transitive subgroup of $S_d$. Further, in this case, $d=p^e\equiv 3 \pmod 4$.
\end{lemma} \begin{proof}
By \cite[p.~222]{BCN}, $\Gamma$ is $\Aut \Gamma$-distance
transitive of girth 4, valency $d$, and $c_2=2$.
Assume that the action of $G$
on $\Gamma$ is 2-distance transitive, but not $2$-arc
transitive. Then by Lemma~\ref{lem:c_2=2}, $G_u$ is 2-homogeneous, but not $2$-transitive
on $\Gamma(u)$, for all $u$.
Further, $d=p^e\equiv 3 \pmod 4$
where $p$ is a prime.
Let
$A=\Aut\Gamma=M\rtimes S_d$ where $M=(S_2)^d$.
Let $u$ be the vertex $(1,\ldots,1)$ and set $H=G_u$.
If $g\in G$, then $g=mh$ where $m\in M$ and $h\in S_d$, and
so $h\in H$. Hence $G\leqslant MH$.
Then, by Dedekind's Modular Law, $(G\cap M)H=G\cap (MH)=G$.
Thus $G\cap M$ is a transitive subgroup of $G$. Since $M$ is regular,
$G\cap M=M$, and so $M\leqslant G$. Thus
$G=M\rtimes H=S_2\wr H$. As the action of $H$ on
$\Gamma(u)$ is faithful, $H=G_u^{\Gamma(u)}$.
Conversely, assume that $G=S_2\wr H$ and $H$ is a $2$-homogeneous,
but not $2$-transitive subgroup of $S_d$. Then $G$ is transitive on $V\Gamma$. Since $G_u^{\Gamma(u)}=G_u=H$, $G_u^{\Gamma(u)}$ acts $2$-homogeneously, but not $2$-transitively on $\Gamma(u)$ for each $u\in \VGamma$. Hence $\Gamma$ is not $(G,2)$-arc transitive and $G_u^{\Gamma(u)}$ is transitive on the set of 2-subsets of $\Gamma(u)$. Since $\Gamma$ has girth 4 and $c_2=2$, we can construct a one-to-one correspondence between the 2-subsets of $\Gamma(u)$ and vertices of $\Gamma_2(u)$ as in the proof of Lemma~\ref{lem:c_2=2}. Thus $G_u$ is transitive on $\Gamma_2(u)$, so $\Gamma$ is $(G,2)$-distance transitive. \end{proof}
We have treated the case where $c_2=2$. When $c_2$ is `large' (that is, close to the valency) we can say a lot about the structure of $\Gamma$.
\begin{lemma}\label{lem:c_2=k}
If $\Gamma$ is a connected
$(G,2)$-distance transitive graph with valency $k$
and girth $4$, then the following are valid.
\begin{enumerate}
\item If $c_2=k$, then $\Gamma = \K_{k,k}$.
\item If $k\geqslant 3$ and $c_2=k-1$, then $\Gamma = \gridcomp{(k+1)}$.
\end{enumerate} \end{lemma} \begin{proof} (i) Let $(u,v,w)$ be a $2$-arc. Since $\Gamma$ has girth 4, $u$ and
$w$ are nonadjacent, so $w$ has $k$ neighbors in $\Gamma(u)$, as
$c_2=k$. Since the valency of $\Gamma$ is $k$, this forces
$\Gamma(u) = \Gamma(w)$. By the $(G,2)$-distance transitivity of
$\Gamma$, every vertex in $\Gamma_2(u)$ has all its neighbors in $\Gamma(u)$, and this implies that $\Gamma_3(u)$ is empty and there
are no edges in $\Gamma_2(u)$. Thus $\Gamma$ is a bipartite graph
and the two biparts are $\Gamma(u)$ and $\{ u \} \cup
\Gamma_2(u)$. Every edge between the two biparts is present, so
$\Gamma$ is a complete bipartite graph. Since $\Gamma$ is regular of
valency $k$, we have $\Gamma = \K_{k,k}$.
(ii) Let $(u,v,w)$ be a 2-arc. Since $\Gamma$ has girth 4 and
$c_2=k-1$, by Lemma~\ref{lem:sizeofgamma2}, we have $|\Gamma_2(u)| =
k$.
Let $w'$ be the unique vertex in $\Gamma_2(u)$ that is not
adjacent to $v$. Assume that the induced subgraph $[\Gamma_2(u)]$ contains an edge. As $G_u$ is transitive on $\Gamma_2(u)$, every vertex of $\Gamma_2(u)$ is adjacent to some vertex of $\Gamma_2(u)$. Since $\Gamma$ has girth 4, the $k-1$ vertices in
$\Gamma_2(u)\cap \Gamma(v)$ are pairwise nonadjacent, so every vertex of $\Gamma_2(u)\cap \Gamma(v)$ is adjacent to $w'$, which is impossible, as $|\Gamma(u)\cap \Gamma(w')|=k-1$. Thus there are no edges in $[\Gamma_2(u)]$. Thus each vertex in $\Gamma_2(u)$ is adjacent to a unique vertex in $\Gamma_3(u)$.
Let $z\in \Gamma_3(u)\cap \Gamma(w)$. Since $c_2=k-1$, every pair of vertices at distance $2$ have $k-1$ common neighbors, so
$|\Gamma(v)\cap \Gamma(z)|=k-1$. Hence $z$ is adjacent to all vertices of $\Gamma_2(u)$ that are adjacent to $v$. If for all $v'\in \Gamma(u)$, $\Gamma_2(u)\cap
\Gamma(v)=\Gamma_2(u)\cap \Gamma(v')$, then $|\Gamma_2(u)|=k-1$,
which is a contradiction. Thus $\Gamma(u)$ contains a vertex $v'$
such that $\Gamma_2(u)\cap \Gamma(v)\neq \Gamma_2(u)\cap
\Gamma(v')$. In particular, $\Gamma_2(u)= \Gamma_2(u)\cap
(\Gamma(v)\cup \Gamma(v'))$. Now $v'$ and $z$ must have a common neighbor in $\Gamma_2(u)$, and so $v'$ and $z$ are at distance $2$. Thus, as $c_2=k-1$, $z$ is adjacent to all vertices of $\Gamma_2(u)$ that are adjacent to $v'$. Thus $z$ is adjacent to all vertices of $\Gamma_2(u)$. Since
$|\Gamma_2(u)|=k$, we find that there are no more vertices in $\Gamma$. Therefore, we have determined $\Gamma$ completely, and $\Gamma = \gridcomp{(k+1)}$. \end{proof}
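Both extreme cases of the lemma can be checked on small instances: $\K_{k,k}$ indeed has $c_2=k$, and $\gridcomp{(k+1)}$ has valency $k$ and $c_2=k-1$. The sketch below (plain Python; helper names ours) computes $c_2$ by breadth-first search for $k=4$.

```python
from collections import deque

def bfs(adj, u):
    dist, queue = {u: 0}, deque([u])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def complete_bipartite(k):
    """K_{k,k} on vertices (row, i), adjacent iff rows differ."""
    V = [(r, i) for r in (1, 2) for i in range(k)]
    return {v: {w for w in V if w[0] != v[0]} for v in V}

def grid_complement(m):
    """Complement of the 2 x m grid: (r, i) ~ (s, j) iff r != s and i != j."""
    V = [(r, i) for r in (1, 2) for i in range(1, m + 1)]
    return {v: {w for w in V if w[0] != v[0] and w[1] != v[1]} for v in V}

def c2_of(adj, u):
    """Number of edges from one vertex of Gamma_2(u) back into Gamma(u)."""
    dist = bfs(adj, u)
    w = next(v for v in adj if dist[v] == 2)
    return sum(1 for x in adj[w] if dist[x] == 1)

k = 4
```

Of course this only illustrates the forward implications on examples; the lemma says the extremes of $c_2$ in fact determine the graphs.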
\section{Proof of Main Results}\label{sect:proofs}
We first prove Theorem~\ref{thm:valency stuff}.
\begin{proof}[{\bf Proof of Theorem \ref{thm:valency stuff}}]
Since $\Gamma$ has girth $4$, it follows that $2\leqslant c_2\leqslant k$. If $c_2=k$, then, by Lemma \ref{lem:c_2=k}, $\Gamma = \K_{k,k}$. However, by Lemma \ref{lem:complete bipartite}, $\Gamma$ is $(G,2)$-arc transitive whenever it is $(G,2)$-distance transitive, and hence this case cannot arise. Thus $2\leqslant c_2\leqslant k-1$. Statement~(i) now follows from Lemmas~\ref{lem:c_2=k}(ii) and~\ref{lem:gridcomp}, while statement~(ii) follows from Lemma~\ref{lem:c_2=2}. \end{proof}
Next we prove Corollary~\ref{thm:primeval}.
\begin{proof}[{\bf Proof of Corollary \ref{thm:primeval}}] If $p=2$, then $\Gamma$ is a cycle graph, so $\Gamma$ is $(G,2)$-distance transitive if and only if it is $(G,2)$-arc transitive, which is a contradiction. Thus $p\geqslant 3$. Then by Theorem \ref{thm:valency stuff}, either $\Gamma \cong \gridcomp{(p+1)}$, or $2\leqslant c_2 \leqslant p-2$. Assume that $2\leqslant c_2 \leqslant p-2$. It follows from Lemma \ref{lem:sizeofgamma2} that $p(p-1) = c_2
|\Gamma_2(u)|$. Since $2\leqslant c_2 \leqslant p-2$, $p$ and $c_2$ are coprime, so $c_2$ divides $p-1$. As $c_2<p-1$, we get $2\leqslant c_2\leqslant (p-1)/2$ and this proves~(i). Statement~(ii) follows from Theorem \ref{thm:valency stuff}(ii). Assume that $c_2= (p-1)/2$. By Lemma~\ref{lem:sizeofgamma2},
$|\Gamma_2(u)|=2p$. If $G_u$ were primitive on $\Gamma_2(u)$, then by Lemma \ref{val-2p-1}, we would have $p=5$, and hence $c_2=2$. However, in this case Lemma~\ref{lem:c_2=2} gives
$p\equiv 3 \pmod 4$, which is a contradiction. Thus $G_u$ is imprimitive on $\Gamma_2(u)$ and this shows~(iii). \end{proof}
One can form an infinite family of examples that satisfy the conditions of Corollary \ref{thm:primeval} from Hamming graphs $\Hamming(p,2)$ using Lemma~\ref{lem:cube1}.
In the following, we prove Theorem \ref{thm:small val}, that is, we determine all $(G,2)$-distance transitive, but not $(G,2)$-arc transitive graphs of valency at most $5$. We split the proof into two parts, as we consider the girth 4 and~3 cases separately in Propositions~\ref{lem:valency 3} and~\ref{2dtval4-girth3}, respectively.
\begin{proposition}\label{lem:valency 3}
Let $\Gamma$ be a connected $(G,2)$-distance transitive, but not
$(G,2)$-arc transitive graph of girth $4$ and valency $k\in\{3,4,5\}$. Then
$\Gamma \cong \gridcomp{k+1}$, and $G$ satisfies
Condition \ref{gridcondition}. \end{proposition} \begin{proof}
We claim that $c_2=k-1$ in all cases.
By Theorem \ref{thm:valency stuff},
$c_2\leqslant k-1$. If $k=3$, then $c_2\geqslant 2=k-1$
follows from the girth condition, and so $c_2=k-1$.
If $k\in\{4,5\}$ and $c_2\leqslant k-2$, then we must have that $c_2=2$ (use Corollary~\ref{thm:primeval}
for $k=5$). Hence, by Lemma \ref{lem:c_2=2}, $k \equiv 3 \pmod{4}$:
a contradiction, as $k\in\{4,5\}$. Now the rest follows from
Theorem~\ref{thm:valency stuff}(i). \end{proof}
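For the three valencies covered by the proposition one can verify directly that $\gridcomp{(k+1)}$ has the stated parameters: valency $k$, no triangles, and a $4$-cycle, hence girth $4$. The sketch below (plain Python; helper names ours) runs this check for $k\in\{3,4,5\}$.

```python
from itertools import combinations

def grid_complement(m):
    """Complement of the 2 x m grid: (r, i) ~ (s, j) iff r != s and i != j."""
    V = [(r, i) for r in (1, 2) for i in range(1, m + 1)]
    return {v: {w for w in V if w[0] != v[0] and w[1] != v[1]} for v in V}

def has_triangle(adj):
    # A triangle exists iff some adjacent pair has a common neighbour.
    return any(adj[a] & adj[b] for a in adj for b in adj[a])

def has_4cycle(adj):
    # A 4-cycle exists iff some vertex pair has at least two common neighbours.
    return any(len(adj[a] & adj[b]) >= 2 for a, b in combinations(adj, 2))

checks = {}
for k in (3, 4, 5):
    adj = grid_complement(k + 1)
    valencies = {len(nbrs) for nbrs in adj.values()}
    checks[k] = (valencies == {k}, not has_triangle(adj), has_4cycle(adj))
```

All three cases pass, consistent with the proposition.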
\begin{proposition}\label{2dtval4-girth3}
Let $\Gamma$ be a connected
$(G,2)$-distance transitive graph of girth $3$ and valency $4$ or $5$,
and let $u\in V\Gamma$. Then one of the following is valid.
\begin{enumerate} \item $\Gamma$ is the octahedron and either $G=S_2\wr S_3$ or $G$ is an index $2$ subgroup of $S_2\wr S_3$ and $G$ projects onto $S_3$; \item $\Gamma\cong \Hamming(2,3)$ and either $G=S_3\wr S_2$ or $G$ is an index $2$ subgroup of $S_3\wr S_2$ and $G$ projects onto $S_2$;
\item $|\Gamma_2(u)|=8$ and $\Gamma$ is the line graph of a connected cubic $(G,3)$-arc transitive graph; \item $\Gamma$ is the icosahedron and $G = A_5$ or $A_5 \times S_2$.
\end{enumerate}
In cases (i)--(iii), the valency of $\Gamma$ is $4$, while
in case~(iv), the valency is $5$. \end{proposition}
\begin{proof}
Suppose first that the valency is~4. Since $\Gamma$ is
$(G,2)$-distance transitive of valency $4$ and girth $3$, it
follows that the induced graph
$[\Gamma(u)]$ is a vertex transitive graph with 4
vertices of valency $k$ where $1\leqslant k\leqslant 3$. If $[\Gamma(u)]$
has valency 3, then $[\Gamma(u)]$ is complete, and so $\Gamma$ is
complete, which is a contradiction. If $[\Gamma(u)]$ has valency 2, then
$[\Gamma(u)]\cong C_4$. Hence $|\Gamma_2(u)\cap \Gamma(v)|=1$ for
any arc $(u,v)$, so $G_{u,v}$ is transitive on $\Gamma_2(u)\cap
\Gamma(v)$, that is, $\Gamma$ is $(G,2)$-geodesic transitive. Thus
by \cite[Corollary 1.4]{DJLP-2}, $\Gamma$ is the octahedron. It
follows from Lemma \ref{lem:Octahedron} that either $G= S_2\wr S_3$,
or $G$ is an index 2 subgroup of $S_2\wr S_3$ and $G$ projects onto
$S_3$. Hence, case~(i) is valid.
Now suppose that $[\Gamma(u)]$ has valency 1. Then $[\Gamma(u)]\cong 2\K_2$ and there are 8 edges between $\Gamma(u)$ and $\Gamma_2(u)$. Further, each arc lies in a unique triangle. Let $\Gamma(u)=\{v_1,v_2,v_3,v_4\}$ be such that
$(v_1,v_2)$ and $(v_3,v_4)$ are two arcs. Then $|\Gamma_2(u)\cap
\Gamma(v_1)|=2$, say $\Gamma_2(u)\cap \Gamma(v_1)=\{w_1,w_2\}$. Since
$[\Gamma(v_1)]\cong 2\K_2$, it follows that $v_2$ is adjacent to neither $w_1$ nor $w_2$. As $|\Gamma_2(u)\cap \Gamma(v_2)|=2$, we have $|\Gamma_2(u)|\geqslant 4$. Since there are 8 edges between $\Gamma(u)$ and $\Gamma_2(u)$ and since $G_u$ is transitive on $\Gamma_2(u)$, we obtain that
$|\Gamma_2(u)|$ divides $8$, and so $|\Gamma_2(u)|\in\{4,8\}$.
Suppose first that $|\Gamma_2(u)|=4$. As noted above, $v_2$ is not adjacent to $w_1$ or $w_2$. Set $\Gamma_2(u)\cap\Gamma(v_2)=\{w_3,w_4\}$. Then $\Gamma_2(u)=\{w_1,w_2,w_3,w_4\}$.
Since $[\Gamma(v_1)]\cong [\Gamma(v_2)]
\cong 2\K_2$, it follows that $w_1,w_2$ are adjacent and, similarly, $w_3,w_4$
are adjacent.
Since $|\Gamma_2(u)|=4$ and there
are 8 edges between $\Gamma(u)$ and $\Gamma_2(u)$, we
must have $|\Gamma(u)\cap \Gamma(w_i)|=2$. Since $v_2,w_1$ are nonadjacent, $w_1$ is adjacent either to $v_3$ or to $v_4$, say $v_3$. Then $\Gamma(u)\cap \Gamma(w_1)=\{v_1,v_3\}$. As each arc lies in a unique triangle and $(v_1,w_1,w_2)$ is a triangle, it follows that $v_3$ is not adjacent to $w_2$. Hence $v_3$ is adjacent to either $w_3$ or $w_4$, say $w_3$. Then $\Gamma(v_3)=\{u,v_4,w_1,w_3\}$. Since $[\Gamma(v_3)]\cong 2\K_2$ and $u,v_4$ are adjacent, it follows that $w_1,w_3$ are adjacent. Thus,
$\Gamma(w_1)=\{v_1,w_2,v_3,w_3\}$. Finally, as $|\Gamma_2(u)\cap
\Gamma(v_4)|=2$ and $v_4$ is adjacent to neither $w_1$ nor $w_3$, $v_4$ is adjacent to both $w_2$ and $w_4$. Since $[\Gamma(v_4)] \cong 2\K_2$ and $(v_3,u,v_4)$ is a triangle, it follows that $w_2$, $w_4$ are adjacent. Now, the graph $\Gamma$ is completely determined and $\Gamma\cong \Hamming(2,3)$. By \cite[Theorem 9.2.1]{BCN}, $\Gamma$ is $(\Aut\Gamma,2)$-distance transitive where $\Aut\Gamma\cong S_3\wr S_2$. Suppose that $G$ is a proper subgroup of $\Aut\Gamma$. Since $G_u$ is transitive on
$\Gamma(u)$ and $|\Gamma(u)|=4$, $|G_u|$ is divisible by 4, so $|G|$
is divisible by $4|\VGamma|=36$. It follows that
$|G|=36$, so $G$ is an index $2$ subgroup of $S_3\wr S_2$. Finally, as $G_u$ is transitive on $\Gamma(u)$, $G_u$ projects onto $S_2$. Thus~(ii) is valid.
Let us now consider the case when $|\Gamma_2(u)|=8$. Then for each $z\in \Gamma_2(u)$, there is a unique 2-geodesic between $u$ and $z$. Hence there is a one-to-one correspondence between the set of 2-geodesics starting from $u$ and the set of vertices in $\Gamma_2(u)$. Since $G_u$ is transitive on $\Gamma_2(u)$, it follows that $G_u$ is transitive on the set of 2-geodesics starting from $u$, so $\Gamma$ is $(G,2)$-geodesic transitive. Therefore by Lemma \ref{2gt-val4}, $\Gamma$ is the line graph of a connected cubic $(G,3)$-arc transitive graph. Therefore~(iii) is valid.
Assume now that the valency is 5. Let $(u,v)$ be an arc. Since $\Gamma$ is $G$-arc transitive, the induced subgraph $[\Gamma(u)]$ is vertex transitive, say of valency $k$. As $\Gamma$ has girth 3 and is $G$-arc transitive, every edge lies in a triangle, so $k\geq 1$; as $\Gamma$ is not complete, $k\leq 3$. Since $[\Gamma(u)]$ is an undirected graph on 5 vertices, it has $5k/2$ edges, and so $k$ is even; that is, $k=2$. Thus $[\Gamma(u)]\cong C_5$.
Set $\Gamma(u)=\{v_1,v_2,v_3,v_4,v_5\}$ with $v_1=v$ and assume
$(v_1,\ldots,v_5)$ is a 5-cycle. Then $|\Gamma_2(u)\cap
\Gamma(v_1)|=2$ and say $\Gamma_2(u)\cap \Gamma(v_1)=\{w_1,w_2\}$. Then $\Gamma(v_1)=\{u,v_2,v_5,w_1,w_2\}$. As $[\Gamma(v_1)]\cong C_5$ and $(v_2,u,v_5)$ is a 2-arc, it follows that $w_1,w_2$ are adjacent, $v_2$ is adjacent to one of $w_1$ and $w_2$ and $v_5$ is adjacent to the other. Without loss of generality, assume $v_2$ is adjacent to $w_1$ and $v_5$ is adjacent to $w_2$. In particular, $v_2$ and $w_2$ are not adjacent. Moreover, $2\leqslant c_2\leqslant 4$. Since there are 10 edges between $\Gamma(u)$ and $
\Gamma_2(u)$, we have $10=c_2|\Gamma_2(u)|$; since $2\leqslant c_2\leqslant 4$ and $c_2$ divides 10, it follows that $c_2=2$ and
$|\Gamma_2(u)|=5$.
Since
$|\Gamma_2(u)\cap \Gamma(v_2)|=2$, there exists $w_3$ in $\Gamma_2(u)$ which is adjacent to $v_2$, and so $\Gamma(v_2)=\{u,v_1,v_3,w_1,w_3\}$. Note that $(w_1,v_1,u,v_3)$ is a 3-arc, and as $[\Gamma(v_2)]\cong C_5$, it follows that $w_3$ is adjacent to both $v_3$ and $w_1$. Since $G_u$ is transitive on $\Gamma_2(u)$, $[\Gamma_2(u)]$ is a vertex transitive graph. Recall that $w_1$ is adjacent to $w_2$ and $w_3$. It follows that
$[\Gamma_2(u)]\cong C_5$. Thus $|\Gamma_3(u)\cap \Gamma(w_1)|=1$, say $\Gamma_3(u)\cap \Gamma(w_1)=\{e\}$. Then $(v_1,w_1,e)$ and $(v_2,w_1,e)$ are two 2-geodesics. As $c_2=2$, $|\Gamma(v_1)\cap \Gamma(e)|=|\Gamma(v_2)\cap \Gamma(e)|=2$. Hence $\{w_1,w_2,w_3\}\subseteq \Gamma_2(u)\cap \Gamma(e)$.
Since $|\Gamma_2(u)\cap \Gamma(v_3)|=2$, there exists $w_4(\neq w_3)\in \Gamma_2(u)$ such that $v_3,w_4$ are adjacent. Noting that $\Gamma(u)\cap \Gamma(w_1)=\{v_1,v_2\}$ and $\Gamma(u)\cap \Gamma(w_2)=\{v_1,v_5\}$, we find $w_4\notin \{w_1,w_2,w_3\}$. Since $[\Gamma(v_3)]\cong C_5$ and $(w_3,v_2,u,v_4)$ is a 3-arc, it follows that $w_4$ is adjacent to both $v_4$ and $w_3$. As
$(v_3,w_3,e)$ is a 2-geodesic, $|\Gamma(v_3)\cap \Gamma(e)|=2$, so $w_4\in \Gamma_2(u)\cap \Gamma(e)$. Now $(v_4,w_4,e)$ is a 2-geodesic, so $|\Gamma(v_4)\cap \Gamma(e)|=2$, hence $\Gamma_2(u)\cap \Gamma(v_4)\subset \Gamma(e)$. Let the remaining vertex of $\Gamma_2(u)$ be $w_5$. Since $|\Gamma(u)\cap \Gamma(w_5)|=2$, it follows that $w_5$ is adjacent to both $v_4,v_5$. Hence $\Gamma_2(u)\cap \Gamma(v_4)=\{w_4,w_5\}\subset \Gamma(e)$. Thus $\Gamma_2(u)=\Gamma(e)$, so $\Gamma_3(u)=\{e\}$. Now we have completely determined the graph $\Gamma$, and this graph is the icosahedron. Finally, by Lemma \ref{lem:Icosahedron}, $G\cong S_2\times A_5$ or $A_5$. \end{proof}
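The configuration just determined can be double-checked computationally; the following self-contained Python sketch (an illustration only, not part of the proof) encodes the adjacency derived above, with vertices labelled $u$, $v_1,\dots,v_5$, $w_1,\dots,w_5$, $e$ as in the proof, and verifies the defining data of the icosahedron: 12 vertices of valency 5, every neighbourhood inducing $C_5$, and distance layers of sizes $1+5+5+1$ from $u$.

```python
from collections import deque

# Adjacency determined in the proof: u joined to the 5-cycle
# (v1,...,v5) = Gamma(u), the 5-cycle (w1,w3,w4,w5,w2) on Gamma_2(u),
# the matching edges v_i w_j found above, and the antipode e of u.
E  = [("u", f"v{i}") for i in range(1, 6)]
E += [("v1","v2"),("v2","v3"),("v3","v4"),("v4","v5"),("v5","v1")]
E += [("w1","v1"),("w1","v2"),("w2","v1"),("w2","v5"),("w3","v2"),
      ("w3","v3"),("w4","v3"),("w4","v4"),("w5","v4"),("w5","v5")]
E += [("w1","w2"),("w1","w3"),("w3","w4"),("w4","w5"),("w2","w5")]
E += [("e", f"w{i}") for i in range(1, 6)]

adj = {}
for a, b in E:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

# 12 vertices of valency 5; every neighbourhood induces a 2-regular
# graph on 5 vertices, which is necessarily C5.
assert len(adj) == 12 and all(len(adj[x]) == 5 for x in adj)
for x in adj:
    assert all(len(adj[y] & adj[x]) == 2 for y in adj[x])

# Breadth-first search from u: distance layers of sizes 1, 5, 5, 1.
dist, queue = {"u": 0}, deque(["u"])
while queue:
    x = queue.popleft()
    for y in adj[x]:
        if y not in dist:
            dist[y] = dist[x] + 1
            queue.append(y)
layers = {d: {v for v in dist if dist[v] == d} for d in range(4)}
assert [len(layers[d]) for d in range(4)] == [1, 5, 5, 1]
assert layers[3] == {"e"}
```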
\begin{proof}[The proof of Theorem~\ref{thm:small val}]
If the valency of $\Gamma$ is 2 or the girth is greater than 4,
then $\Gamma$ cannot be $(G,2)$-distance transitive
without being $(G,2)$-arc transitive.
Hence the valency is at least 3.
If the valency and the girth are both equal to
3, then $\Gamma=\K_4$.
Hence Theorem~\ref{thm:small val} follows from
Proposition~\ref{lem:valency 3} in the case of girth 4,
and from Proposition~\ref{2dtval4-girth3} in the case of
girth~3. \end{proof}
\end{document}
\begin{document}
\title{New constraint qualifications for
mathematical programs with equilibrium constraints via variational analysis} \author{Helmut Gfrerer\thanks{Institute of Computational Mathematics, Johannes Kepler University Linz, A-4040 Linz, Austria, e-mail: helmut.gfrerer@jku.at.} \and Jane J. Ye\thanks{Department of Mathematics and Statistics, University of Victoria, Victoria, B.C., Canada V8W 2Y2, e-mail: janeye@uvic.ca.}} \date{} \maketitle \begin{abstract} In this paper, we study the mathematical program with equilibrium constraints (MPEC) formulated as a mathematical program with a parametric generalized equation involving {the} regular normal cone. Compared with the usual way of formulating MPEC through a KKT condition, this formulation has {the} advantage that it does not involve extra multipliers as new variables, and it usually requires weaker assumptions on the problem data. {U}sing the so-called first order sufficient condition for metric subregularity, we derive verifiable sufficient conditions for the metric subregularity of the involved set-valued mapping, or equivalently the calmness of the perturbed generalized equation mapping.
\vskip 10 true pt
\noindent {\bf Key words}: mathematical programs with equilibrium constraints, constraint qualification, metric subregularity, calmness.
\vskip 10 true pt
\noindent {\bf AMS subject classification}: 49J53, 90C30, 90C33, 90C46.
\end{abstract}
\section{Introduction} A mathematical program with equilibrium constraints (MPEC) usually refers to an optimization problem in which the essential constraints are defined by a parametric variational inequality or complementarity system. Since many equilibrium phenomena arising in engineering and economics are characterized by either an optimization problem or a variational inequality, this justifies the name mathematical program with equilibrium constraints (\cite{Luo-Pang-Ralph, Out-Koc-Zowe}). During the last two decades, more and more applications of MPECs have been found, and much progress has been made in both theory and algorithms for solving MPECs.
For ease of discussion, consider the following mathematical program with a variational inequality constraint \begin{eqnarray} \mbox{(MPVIC)}\qquad\min_{(x,y)\in C} &&F(x,y)\nonumber\\
\mbox{s.t.}&& \langle \phi(x,y), y'-y\rangle\geq 0 \quad \forall y'\in \Gamma, \label{VI} \end{eqnarray}
where $C\subset \mathbb{R}^n\times\mathbb{R}^m$, $\Gamma:=\{y\in \mathbb{R}^m | g(y)\leq 0\}$, $F:\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}$, $\phi:\mathbb{R}^n\times \mathbb{R}^m\to\mathbb{R}^m$, $g:\mathbb{R}^m\to\mathbb{R}^q$ are sufficiently smooth. If the set $\Gamma$ is convex, then MPVIC can be equivalently written as a mathematical program with a generalized equation constraint
\begin{eqnarray*} \mbox{(MPGE)}\qquad\min_{(x,y)\in C} &&F(x,y)\\
\mbox{s.t.}&& 0\in \phi(x,y)+ {N}_\Gamma(y), \end{eqnarray*} where $N_\Gamma(y)$ is the normal cone to the set $\Gamma$ at $y$ in the sense of convex analysis. If $g$ is affine or a constraint qualification such as the Slater condition holds for the constraint $g(y)\leq 0$, then it is known that $ {N}_\Gamma(y)=\nabla g(y)^T N_{\mathbb{R}_-^q}(g(y)).$ Consequently \begin{equation}\label{GE-equiv} 0\in \phi(x,y) +{N}_\Gamma(y) \Leftrightarrow \exists \lambda: 0\in \left (\phi(x,y)+\nabla g(y)^T\lambda, g(y)\right )+N_{\mathbb{R}^m \times \mathbb{R}_+^q}(y, \lambda), \end{equation} where $\lambda$ is referred to as a multiplier. This suggests considering the mathematical program with a complementarity constraint \begin{eqnarray*} \mbox{(MPCC)}\qquad\min_{(x,y)\in C, \lambda\in \mathbb{R}^q} &&F(x,y)\\
\mbox{s.t.}&& 0\in \left ( \phi(x,y)+\nabla g(y)^T\lambda, g(y)\right )+N_{\mathbb{R}^m \times \mathbb{R}_+^q}(y, \lambda). \end{eqnarray*} In the case where the equivalence (\ref{GE-equiv}) holds with a unique multiplier $\lambda$ for each $y$, (MPGE) and (MPCC) are obviously equivalent. When the multipliers are not unique, however, the two problems are not necessarily equivalent if local optimal solutions are considered (see Dempe and Dutta \cite{Dam-Dut} in the context of bilevel programs). More precisely, it may happen that for a local solution $(\bar x, \bar y, \bar \lambda)$ of (MPCC), the pair $(\bar x, \bar y)$ is not a local solution of (MPGE). This is a serious drawback of the MPCC reformulation, since a numerical method computing a stationary point of (MPCC) may not have anything to do with a solution of the original MPEC. This shows that whenever possible, one should consider solving problem (MPGE) instead of problem (MPCC). Another fact we want to mention is that in many equilibrium problems, the constraint set $\Gamma$ or the function $g$ may not be convex. In this case, if $y$ solves the variational inequality (\ref{VI}), then $y'=y$ is a global minimizer of the optimization problem: $\displaystyle \min_{y'} ~\langle \phi(x,y), y' \rangle \
\mbox{ s.t. } y'\in \Gamma,$
and hence by Fermat's rule (see, e.g., \cite[Theorem 10.1]{RoWe98}) it is a solution of the generalized equation \begin{equation}\label{ge} 0\in \phi(x,y)+ \widehat{N}_\Gamma(y),\end{equation}
where $\widehat{N}_\Gamma(y)$ is the regular normal cone to $\Gamma$ at $y$ (see Definition \ref{normalcone}). In the nonconvex case, by replacing the original variational inequality constraint (\ref{VI}) by the generalized equation (\ref{ge}), the feasible region is enlarged and the resulting MPGE may not be equivalent to the original MPVIC. However, if the solution $(\bar x,\bar y)$ of MPGE is feasible for the original MPVIC, then it must be a solution of the original MPVIC; see \cite{Bouza} for this approach in the context of bilevel programs.
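To illustrate the possible nonuniqueness of the multiplier in (\ref{GE-equiv}), consider the following minimal example (given here only for illustration, not taken from the cited references): let $m=1$, $q=2$ and $g(y)=(y,2y)^T$, so that $\Gamma=\mathbb{R}_-$ is described by two redundant inequalities. At $y=0$ we have $N_{\mathbb{R}^2_-}(g(0))=\mathbb{R}^2_+$ and
\[
N_\Gamma(0)=\nabla g(0)^T N_{\mathbb{R}^2_-}(g(0))=\{\lambda_1+2\lambda_2 \mid (\lambda_1,\lambda_2)\in\mathbb{R}^2_+\}=\mathbb{R}_+,
\]
but each $v>0$ in $N_\Gamma(0)$ is generated by a whole segment of multipliers $\{\lambda\in\mathbb{R}^2_+ \mid \lambda_1+2\lambda_2=v\}$. Hence a single feasible point $(\bar x,0)$ of (MPGE) may correspond to infinitely many feasible points $(\bar x,0,\lambda)$ of (MPCC).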
Based on the above discussion, in this paper we consider MPECs in the form \begin{eqnarray*} \mbox{(MPEC)}\qquad\min
&&F(x,y)\\
\mbox{s.t.}&& 0\in \phi(x,y)+ \widehat{N}_\Gamma(y),
\\&&G(x,y)\leq 0, \end{eqnarray*} where $\Gamma$ is possibly non-convex and $G:\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}^p$ is smooth.
Besides the issue of equivalent problem formulations, one has to consider constraint qualifications as well. This task is of particular importance for deriving optimality conditions. For the problem (MPCC) there exist many constraint qualifications in the MPEC literature ensuring the Mordukhovich (M-)stationarity of locally optimal solutions. The weakest of these constraint qualifications appears to be MPEC-GCQ (Guignard constraint qualification) as introduced by Flegel and Kanzow \cite{FleKan05}; see \cite{FleKanOut07} for a proof of M-stationarity of locally optimal solutions under MPEC-GCQ. For the problem (MPEC) it was shown by Ye and Ye \cite{YeYe} that calmness of the perturbation mapping associated with the constraints of (MPEC) (called pseudo upper-Lipschitz continuity in \cite{YeYe}) guarantees M-stationarity of solutions. The two formulations (MPEC) and (MPCC) were compared in terms of calmness in \cite{Ad-Hen-Out}. The authors pointed out there that, very often, the calmness condition related to (MPEC) is satisfied at some $(\bar x, \bar y)$ while the one for (MPCC) is not fulfilled at $(\bar x,\bar y,\lambda)$ for certain multipliers $\lambda$. In particular, \cite[Example 6]{Ad-Hen-Out} shows that the constraints of (MPEC) may satisfy the calmness condition at $(\bar x,\bar y, 0)$ while those of the corresponding (MPCC) do not satisfy the calmness condition at $(\bar x,\bar y, \lambda, 0)$ for any multiplier $\lambda$. In this paper we first show that if the multipliers are not unique then the MPEC Mangasarian-Fromovitz constraint qualification (MFCQ) never holds for problem (MPCC). Then we present an example for which MPEC-GCQ is violated at $(\bar x,\bar y, \lambda, 0)$ for any multiplier $\lambda$ while calmness holds for the corresponding (MPEC) at $(\bar x,\bar y,0)$. Note that in contrast to \cite[Example 6]{Ad-Hen-Out}, $\Gamma$ in our example is even convex.
However, very little is known about how to verify calmness for (MPEC) when the multiplier $\lambda$ is not unique. When $\phi$, $g$ and $G$ are affine, calmness follows simply from Robinson's result on polyhedral multifunctions \cite{Robinson81}. Another approach is to verify calmness by showing the stronger Aubin property (also called pseudo Lipschitz continuity or the Lipschitz-like property) via the so-called Mordukhovich criterion, cf. \cite{Mor}. However, the Mordukhovich criterion involves the limiting coderivative of $\widehat N_\Gamma(\cdot)$, which is very difficult to compute in the case of nonunique $\lambda$; see \cite{GfrOut14b}.
The main goal of this paper is to derive an easily verifiable criterion for the so-called metric subregularity constraint qualification (MSCQ), which is equivalent to calmness; see Definition \ref{DefMetrSubregCQ}. Our sufficient condition for MSCQ involves only first-order derivatives of $\phi$ and $G$ and derivatives up to second order of $g$ at $(\bar x,\bar y)$, and is therefore efficiently checkable. Our approach is mainly based on the sufficient conditions for metric subregularity recently developed in \cite{Gfr11,Gfr13a,Gfr14b,GfrKl16} and some implications of metric subregularity which can be found in \cite{GfrMo16,GfrOut15}. A special feature is that the constraint qualification imposed on both the lower level system $g(y)\leq0$ and the upper level system $G(x,y)\leq0$ is only MSCQ, which is much weaker than the usually required MFCQ.
We organize the paper as follows. Section 2 contains basic definitions and preliminary results. In Section 3 we discuss the difficulties involved in formulating MPECs as (MPCC). Section 4 gives new verifiable sufficient conditions for MSCQ.
The following notation will be used throughout the paper. We denote by ${\cal B}_{\mathbb{R}^q}$ the closed unit ball in $\mathbb{R}^q$ or, when no confusion arises, simply by ${\cal B}$. By ${\cal B}(\bar z; r)$ we denote the closed ball centered at $\bar z$ with radius $r$. ${\cal S}_{\mathbb{R}^q}$ is the unit sphere in $\mathbb{R}^q$. For a matrix $A$, we denote by $A^T$ its transpose. The inner product of two vectors $x, y$ is denoted by $x^T y$ or $\langle x,y\rangle$, and by $x\perp y$ we mean $\langle x, y\rangle =0$. For $\Omega \subset \mathbb{R}^d$ and $z \in \mathbb{R}^d$, we denote by $\dist{z, \Omega}$ the distance from $z$ to the set $\Omega$. The polar cone of a set $\Omega$ is
$\Omega^\circ=\{x|x^Tv\leq 0 \ \forall v\in \Omega\}$ and $\Omega^\perp$ denotes the orthogonal complement of $\Omega$. For a set $\Omega$, we denote by ${\rm conv\,} \Omega$ and ${\rm cl\,}\Omega$ the convex hull and the closure of $\Omega$ respectively. For a differentiable mapping $P:\mathbb R^d\rightarrow \mathbb R^s$, we denote by $\nabla P(z)$ the Jacobian matrix of $P$ at $z$ if $s>1$ and the gradient vector if $s=1$. For a function $f:\mathbb{R}^d \rightarrow \mathbb{R}$, we denote by $\nabla^2 f(\bar z)$ the Hessian matrix of $f$ at $\bar z$. For a set-valued mapping $M:\mathbb{R}^d\rightrightarrows\mathbb{R}^s$, we denote its graph by $ {\rm gph}M:=\{(z,w)| w\in M(z)\}.$ Finally, $o:\mathbb{R}_+\rightarrow \mathbb{R}$ denotes a function with the property that $o(\lambda)/\lambda\rightarrow 0$ when $\lambda\downarrow 0$.
\section{{Basic definitions} and preliminary results} In this section we gather some preliminaries and preliminary results in variational analysis that will be needed in the paper. The reader may find more details in the monographs \cite{Clarke,Mor,RoWe98} and in the papers we refer to. \begin{definition} Given a set $\Omega\subset\mathbb R^d$ and a point $\bar z\in\Omega$, the (Bouligand-Severi) {\em tangent/contingent cone} to $\Omega$ at $\bar z$ is a closed cone defined by \begin{equation*}\label{normalcone} T_\Omega(\bar z)
:=\Big\{u\in\mathbb R^d\Big|\;\exists\,t_k\downarrow 0,\;u_k\to u\;\mbox{ with }\;\bar z+t_k u_k\in\Omega\ \forall k\Big\}. \end{equation*} The (Fr\'{e}chet) {\em regular normal cone} and the (Mordukhovich) {\em limiting/basic normal cone} to $\Omega$ at $\bar z\in\Omega$ are defined by \begin{eqnarray}
&& \widehat N_\Omega(\bar z):=\Big\{v^\ast\in\mathbb{R}^d\Big|\;\limsup_{z\stackrel{\Omega}{\to}\bar z}\frac{\skalp{ v^\ast,z-\bar z}}{\|z-\bar z\|}\le 0\Big\} \nonumber \\ \mbox{and } && N_\Omega(\bar z):=\left \{z^\ast \,\vert\, \exists z_{k}\stackrel{\Omega}{\to}\bar z \mbox{ and } z^\ast_k\rightarrow z^\ast \mbox{ such that } z^\ast_{k}\in \widehat{N}_{\Omega}(z_k) \ \forall k \right \}
\nonumber \end{eqnarray} respectively.
\end{definition} \noindent Note that $\widehat N_\Omega(\bar z)=(T_\Omega(\bar z))^\circ$. When the set $\Omega$ is convex, the tangent/contingent cone and the regular/limiting normal cone reduce to the classical tangent cone and normal cone of convex analysis respectively.
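As a standard example (recalled here only for illustration) showing that the inclusion $\widehat N_\Omega(\bar z)\subset N_\Omega(\bar z)$ may be strict, take $\Omega=\{(z_1,z_2)\in\mathbb{R}^2 \mid z_1z_2=0\}$, the union of the two coordinate axes, and $\bar z=0$. Then
\[
T_\Omega(0)=\Omega,\qquad \widehat N_\Omega(0)=(T_\Omega(0))^\circ=\{0\},\qquad N_\Omega(0)=\Omega,
\]
since at every point $(t,0)$ with $t\neq 0$ the set $\Omega$ locally coincides with the $z_1$-axis, so $\widehat N_\Omega((t,0))=\{0\}\times\mathbb{R}$, and analogously at $(0,t)$; passing to the limit yields $N_\Omega(0)=(\{0\}\times\mathbb{R})\cup(\mathbb{R}\times\{0\})=\Omega$.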
It is easy to see that $u\in T_\Omega(\bar z)$ if and only if $\liminf_{t\downarrow 0} t^{-1}\dist{\bar z+tu, \Omega}=0$. Recall that a set $\Omega$ is said to be geometrically derivable at a point $\bar z\in \Omega$ if the tangent cone coincides with the derivable cone at $\bar z$, i.e., $u\in T_\Omega(\bar z)$ if and only if $\lim_{t\downarrow 0} t^{-1}\dist{\bar z+tu, \Omega}=0$; see e.g. \cite{RoWe98}. From the definitions of the various tangent cones, it is easy to see that if a set $\Omega$ is Clarke regular in the sense of \cite[Definition 2.4.6]{Clarke} then it must be geometrically derivable, while the converse is in general false. The following proposition therefore improves the rule of tangents to product sets given in \cite[Proposition 6.41]{RoWe98}. The proof is omitted since it follows from the definitions of the tangent cone and derivability. \begin{proposition}[Rule of Tangents to Product Sets]\label{productset} Let $\Omega=\Omega_1\times \Omega_2$ with $\Omega_1\subset \mathbb{R}^{d_1}, \Omega_2\subset \mathbb{R}^{d_2}$ closed. Then at any $\bar z=(\bar z_1,\bar z_2)$ with $\bar z_1\in \Omega_1, \bar z_2\in \Omega_2$, one has $$T_\Omega(\bar z )\subset T_{\Omega_1}(\bar z_1 )\times T_{\Omega_2}(\bar z_2 ).$$ Furthermore the equality holds if at least one of the sets $\Omega_1, \Omega_2$ is geometrically derivable. \end{proposition} The following directional version of the limiting normal cone was introduced in \cite{Gfr13a}. \begin{definition}Given a set $\Omega\subset\mathbb R^d$, a point $\bar{z}\in \Omega$ and a direction $w\in \mathbb{R}^{d}$, the {\em limiting normal cone} to $\Omega$ in direction $w$ at $\bar{z}$ is defined by
\[
N_{\Omega}(\bar{z}; w):=\left \{z^{*} | \exists t_{k}\downarrow 0, w_{k}\rightarrow w, z^{*}_{k}\rightarrow z^{*}: z^{*}_{k}\in \widehat{N}_{\Omega}(\bar{z}+ t_{k}w_{k}) \ \forall k \right \}. \] \end{definition} By definition it is easy to see that $N_{\Omega}(\bar{z}; 0)=N_{\Omega}(\bar{z})$ and $N_{\Omega}(\bar{z}; u)=\emptyset$ if $u\not \in T_{\Omega}(\bar{z})$. Further by \cite[Lemma 2.1]{Gfr14b}, if $\Omega$ is a union of finitely many closed convex sets, then one has the following relationship between the limiting normal cone and its directional version. \begin{proposition}\cite[Lemma 2.1]{Gfr14b}\label{relationship} Let $\Omega\subset \mathbb{R}^d$ be a union of finitely many closed convex sets, $\bar z\in \Omega, u\in \mathbb{R}^d$. Then
$N_{\Omega}(\bar{z}; u)\subset N_{\Omega}(\bar{z})\cap \{u\}^\perp $ and the equality holds if
the set $\Omega$ is convex and $u\in T_\Omega(\bar z)$.
\end{proposition}
Next we consider constraint qualifications for a constraint system of the form \begin{equation}
\label{EqGenOptProbl} z\in\Omega:=\{z\,\vert\, P(z)\in D\}, \end{equation} where $P:\mathbb{R}^d\to\mathbb{R}^s$ and $D\subset\mathbb{R}^s$ is closed.
\begin{definition}[cf. \cite{FleKanOut07}]Let $\bar z\in \Omega$ where $\Omega$ is defined as in (\ref{EqGenOptProbl}) with $P$ smooth, and $T_\Omega^{\rm lin}(\bar z)$ be the linearized cone of $\Omega$ at $\bar z$ defined by \begin{equation}
T_\Omega^{\rm lin}(\bar z)=\{w|\nabla P(\bar z) w\in T_D (P(\bar z))\}. \label{lcone} \end{equation} We say that the {\em generalized Abadie constraint qualification} (GACQ) and the {\em generalized Guignard constraint qualification} (GGCQ) hold at $\bar z$, if \[T_\Omega(\bar z)=T_\Omega^{\rm lin}(\bar z) \mbox{ and } {(T_\Omega(\bar z))^\circ}=(T_\Omega^{\rm lin}(\bar z))^\circ\] respectively.
\end{definition} It is obvious that GACQ implies GGCQ, which is considered the weakest constraint qualification. In the case of a standard nonlinear program, GACQ and GGCQ reduce to the standard Abadie and Guignard constraint qualifications, respectively. Under GGCQ, any locally optimal solution to a disjunctive problem, i.e., an optimization problem where the constraint has the form (\ref{EqGenOptProbl}) with the set $D$ equal to a union of finitely many polyhedral convex sets, must be M-stationary (see e.g. \cite[Theorem 7]{FleKanOut07}).
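A one-dimensional example (given here only for illustration) where both GACQ and GGCQ fail: let $P(z)=z^2$ and $D=\mathbb{R}_-$, so that $\Omega=\{z\in\mathbb{R}\mid z^2\leq 0\}=\{0\}$. At $\bar z=0$,
\[
T_\Omega(0)=\{0\},\qquad T_\Omega^{\rm lin}(0)=\{w\mid \nabla P(0)w=0\cdot w\in T_{\mathbb{R}_-}(0)=\mathbb{R}_-\}=\mathbb{R},
\]
so $T_\Omega(0)\neq T_\Omega^{\rm lin}(0)$ and GACQ fails; taking polars, $(T_\Omega(0))^\circ=\mathbb{R}\neq\{0\}=(T_\Omega^{\rm lin}(0))^\circ$, so GGCQ fails as well.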
GACQ and GGCQ are weak constraint qualifications, but they are usually difficult to verify. Hence we are interested in constraint qualifications that are effectively verifiable, and yet not too strong. The following notion of metric subregularity is the base of the constraint qualification which plays a central role in this paper. \begin{definition}
Let $M:\mathbb{R}^d\rightrightarrows\mathbb{R}^s$ be a set-valued mapping and let $(\bar z,\bar w)\in{\rm gph\,} M$. We say that $M$ is {\em metrically subregular} at $(\bar z,\bar w)$ if there exist a neighborhood $W$ of $\bar z$ and a constant $\kappa>0$ such that
\begin{equation}\label{EqMetrSubReg}\dist{z,M^{-1}(\bar w)}\leq\kappa\dist{\bar w,M(z)}\ \; \forall z\in W.
\end{equation} \end{definition} The metric subregularity property was introduced in \cite{Ioffe79} for single-valued maps under the terminology ``regularity at a point''; the name ``metric subregularity'' was suggested in \cite{DoRo04}. Note that metric subregularity at $(\bar z,0)\in{\rm gph\,} M$ is also referred to as the existence of a local error bound at $\bar z$. It is easy to see that $M$ is metrically subregular at $(\bar z,\bar w)$ if and only if its inverse set-valued map $M^{-1}$ is calm at $(\bar w,\bar z)\in {\rm gph\,} M^{-1}$, i.e., there exist a neighborhood $W$ of $\bar z$, a neighborhood $V$ of $\bar w$ and a constant $\kappa>0$ such that
$$M^{-1}(w)\cap W\subset M^{-1}(\bar w) +\kappa \|w-\bar w\| {\cal B} \quad \forall w\in V. $$
The term ``calmness'' for set-valued maps was coined in \cite{RoWe98}; the same property was introduced as pseudo upper-Lipschitz continuity in \cite{YeYe}, taking into account that it is weaker than both the pseudo Lipschitz continuity of Aubin \cite{Aubin} and the upper Lipschitz continuity of Robinson \cite{Robinson75,Robinson76}.
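As a toy numerical illustration of (\ref{EqMetrSubReg}) (an illustration only, not part of the paper's development), compare the scalar affine system $z\leq 0$ with the nonlinear system $z^2\leq 0$: the first satisfies the error bound with $\kappa=1$, while for the second the ratio $\dist{z,M^{-1}(0)}/\dist{0,M(z)}$ blows up as $z\downarrow 0$, so no finite $\kappa$ works. The Python sketch below checks this on a few sample points.

```python
# For the scalar system P(z) <= 0, M(z) = P(z) - R_-, so
# M^{-1}(0) = {z : P(z) <= 0} and dist(0, M(z)) = max(P(z), 0).

def subreg_ratio(P, dist_to_feasible, z):
    """dist(z, M^{-1}(0)) / dist(0, M(z)) at an infeasible point z."""
    return dist_to_feasible(z) / max(P(z), 0.0)

samples = (0.1, 0.01, 0.001)

# Affine system P(z) = z: feasible set (-inf, 0], distance max(z, 0).
affine = [subreg_ratio(lambda z: z, lambda z: max(z, 0.0), z) for z in samples]
assert all(abs(r - 1.0) < 1e-9 for r in affine)  # kappa = 1 works: MSCQ holds

# Nonlinear system P(z) = z**2: feasible set {0}, distance |z|.
nonlin = [subreg_ratio(lambda z: z * z, abs, z) for z in samples]
assert nonlin[0] < nonlin[1] < nonlin[2]         # ratio 1/|z| is unbounded
assert nonlin[2] > 100.0                         # no finite kappa: MSCQ fails
```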
More general constraints can be easily written in the form $P(z)\in D$.
For instance, a set $\Omega=\{z\,\vert\, P_1(z)\in D_1, 0\in P_2(z)+Q(z)\}$ where $P_i:\mathbb{R}^{d}\to\mathbb{R}^{s_i}$, $i=1,2$ and $Q:\mathbb{R}^{d}\rightrightarrows\mathbb{R}^{s_2}$ is a set-valued map can also be written as \[\Omega=\{z\,\vert\, P(z)\in D\}\;\mbox{ with }\; P(z):=\left(\begin{array}{c}P_1(z)\\(z,-P_2(z))\end{array}\right),\ D:=D_1\times {\rm gph\,} Q.\]
We now show that for both representations of $\Omega$ the properties of metric subregularity
for the {maps} describing the constraints are equivalent. \begin{proposition}\label{PropEquGrSubReg}Let $P_i:\mathbb{R}^{d}\to\mathbb{R}^{s_i}$, $i=1,2$, {$D_1\subset \mathbb{R}^{s_1}$} be closed and $Q:\mathbb{R}^{d}\rightrightarrows\mathbb{R}^{s_2}$ be a set-valued map with a closed graph. Further assume that $P_1$ and $P_2$ are Lipschitz near $\bar z$. Then the set-valued map \[M_1(z):=\left(\begin{array}
{c}P_1(z)-D_1\\
P_2(z)+Q(z) \end{array}\right)\] is metrically subregular at $(\bar z,(0,0))$ if and only if the set-valued map \[M_2(z):=\left(\begin{array}
{c}P_1(z)\\(z,-P_2(z)) \end{array}\right)-D_1\times{\rm gph\,} Q\] is metrically subregular at $(\bar z,(0,0,0))$. \end{proposition} \begin{proof}
Assume without loss of generality that the image space $\mathbb{R}^{s_1}\times\mathbb{R}^{s_2}$ of $M_1$ is equipped with the norm $\norm{(y_1,y_2)}=\norm{y_1}+\norm{y_2}$, whereas we use the norm $\norm{(y_1,z,y_2)}=\norm{y_1}+\norm{z}+\norm{y_2}$ for the image space $\mathbb{R}^{s_1}\times\mathbb{R}^d\times\mathbb{R}^{s_2}$ of $M_2$. If $M_2$ is metrically subregular at $(\bar z,(0,0,0))$, then there are a neighborhood $W$ of $\bar z$ and a constant $\kappa$ such that for all $z\in W$ we have
\begin{eqnarray*}\dist{z,\Omega}&\leq& \kappa \distb{(0,0,0), M_2(z)}\\
&=&\kappa\big(\dist{P_1(z),D_1}+\inf\{\norm{z-\tilde z}+\norm{-P_2(z)-\tilde y}\,\vert\, (\tilde z,\tilde y)\in{\rm gph\,} Q\}\big)\\
&\leq & \kappa\big(\dist{P_1(z),D_1}+\inf\{\norm{-P_2(z)-\tilde y}\,\vert\, \tilde y\in Q(z)\}\big)=\kappa \distb{(0,0),M_1(z)} ,
\end{eqnarray*}
which shows metric subregularity of $M_1$. Now assume that $M_1$ is metrically subregular at $(\bar z,(0,0))$ and hence we can find a radius $r>0$ and a real $\kappa$ such that
\[\dist{z,\Omega}\leq\kappa \distb{(0,0),M_1(z)}\ \forall z\in {\cal B}(\bar z;r).\]
We may assume that $P_1, P_2$ are Lipschitz with modulus $L$ on ${\cal B}(\bar z;r)$, and consider $z\in {\cal B}(\bar z;r/(2+L))$. Since ${\rm gph\,} Q$ is closed, there is $(\tilde z,\tilde y)\in {\rm gph\,} Q$ with
\[\norm{z-\tilde z}+\norm{-P_2(z)-\tilde y}={\distb{(z,-P_2(z)),{\rm gph\,} Q}}.\]
Then
\[\norm{z-\tilde z}\leq \distb{(z,-P_2(z)),{\rm gph\,} Q}\leq \norm{z-\bar z}+\norm{-P_2(z)+P_2(\bar z)}\leq (1+L)\norm{z-\bar z}\]
implying $\norm{\bar z-\tilde z}\leq \norm{\bar z-z}+\norm{z-\tilde z}\leq (2+L)\norm{z-\bar z}\leq r$ and
\begin{eqnarray*}\dist{\tilde z,\Omega}&\leq& \kappa \distb{(0,0),M_1(\tilde z)}
=\kappa\Big(\dist{P_1(\tilde z),D_1}+\distb{-P_2(\tilde z),Q(\tilde z)}\Big)\\
&\leq& \kappa\big(\dist{P_1(\tilde z),D_1}+\norm{-P_2(\tilde z)-\tilde y}\big)\\
&\leq &\kappa\big(2L\norm{z-\tilde z}+\dist{P_1(z),D_1}+\norm{-P_2(z)-\tilde y}\big).
\end{eqnarray*}
Taking into account $\dist{z,\Omega}\leq\dist{\tilde z,\Omega}+\norm{z-\tilde z}$ we arrive at
\begin{eqnarray*}\dist{z,\Omega}&\leq&\kappa\max\{2L+\frac 1\kappa,1\}\big(\dist{P_1(z),D_1}+\norm{z-\tilde z}+\norm{-P_2(z)-\tilde y}\big)\\
&=&\kappa\max\{2L+\frac 1\kappa,1\}\distb{(0,0,0),M_2(z)},
\end{eqnarray*}
establishing metric subregularity of $M_2$ at {$(\bar z,(0,0,0))$}. \end{proof}
Since the metric subregularity of the set-valued map $M(z):=P(z)-D$ at $(\bar z,0)$ implies that GACQ holds at $\bar z$ (see e.g. \cite[Proposition 1]{HenOut05}), it can serve as a constraint qualification.
{Following \cite[Definition 3.2]{GfrMo15a}, we define it as a constraint qualification below.} \begin{definition}[\bf metric subregularity constraint qualification]\label{DefMetrSubregCQ} {Let $P(\bar z)\in D$.
We say that the {\sc metric subregularity constraint qualification (MSCQ)} holds at $\bar z$ for the system $P(z)\in D$} if the set-valued map $M(z):=P(z)-D$ is metrically subregular at $(\bar z,0)$, or equivalently the perturbed set-valued map $M^{-1}(w):=\{z| w\in P(z)-D\}$ is calm at $(0, \bar z)$. \end{definition} There exist several sufficient conditions for MSCQ in the literature. Here are the two most frequently used ones. The first case is when the linear CQ holds, i.e., when $P$ is affine and $D$ is a union of finitely many polyhedral convex sets.
The second case is when the no nonzero abnormal multiplier constraint qualification (NNAMCQ) holds at $\bar z$ (see e.g., \cite{Ye}): \begin{equation}\label{NNAMCQ} \nabla P(\bar z)^T\lambda=0,\;\lambda\in N_D(P(\bar z))\quad\Longrightarrow\quad\lambda=0.\end{equation}
It is known that NNAMCQ is equivalent to MFCQ in the case of standard nonlinear programming. Condition (\ref{NNAMCQ}) appears under different terminologies in the literature; e.g., while it is called NNAMCQ in \cite{Ye}, it is referred to as the generalized MFCQ (GMFCQ) in \cite{FleKanOut07}.
The linear CQ and NNAMCQ may still be too strong to hold for some problems. Recently some new constraint qualifications for standard nonlinear programs that are stronger than MSCQ and weaker than the linear CQ and/or NNAMCQ have been introduced in the literature; see e.g. \cite{AndHaeSchSilMP,AndHaeSchSilSIAM}. These CQs include the relaxed constant positive linear dependence condition (RCPLD) (see \cite[Theorem 4.2]{GuoZhangLin}), the constant rank of the subspace component condition (CRSC) (see \cite[Corollary 4.1]{GuoZhangLin}) and quasinormality (see \cite[Theorem 5.2]{guoyezhang-infinite}).
In this paper we will use the following sufficient conditions.
\begin{theorem}\label{ThSuffCondMS}Let $\bar z \in \Omega$ where $\Omega $ is defined as in (\ref{EqGenOptProbl}).
MSCQ holds at $\bar z$ if one of the following conditions is fulfilled: \begin{itemize}
\item First-order sufficient condition for metric subregularity (FOSCMS) for the system $P(z)\in D$ with $P$ smooth, cf. \cite[Corollary 1]{GfrKl16}: for every $0\not=w\in T_\Omega^{\rm lin}(\bar z)$ one has
\[\nabla P(\bar z)^T\lambda=0,\;\lambda\in N_D(P(\bar z);\nabla P(\bar z)w)\quad\Longrightarrow\quad\lambda=0.\]
\item Second-order sufficient condition for metric subregularity (SOSCMS) for the inequality system $P(z)\in \mathbb{R}^s_-$ with $P$ twice
Fr\'echet differentiable at $\bar z$, cf. \cite[Theorem 6.1]{Gfr11}: For every $0\not=w\in T_\Omega^{\rm lin}(\bar z)$ one has
\[\nabla P(\bar z)^T\lambda=0,\;\lambda\in N_{\mathbb{R}^s_-}(P(\bar z)),\; w^T\nabla^2(\lambda^TP)(\bar z)w\geq 0\quad\Longrightarrow\quad\lambda=0.\] \end{itemize} \end{theorem} In the case $T_\Omega^{\rm lin}(\bar z)=\{0\}$, FOSCMS is automatically satisfied. By the definition of the linearized cone (\ref{lcone}), $T_\Omega^{\rm lin}(\bar z)=\{0\}$ means that
\[\nabla P(\bar z)w=\xi, \quad \xi \in T_D(P(\bar z))\;\Longrightarrow\;w=0.\]
By the graphical derivative criterion for strong metric subregularity \cite[Theorem 4E.1]{DoRo14}, this is equivalent to saying that the set-valued map $M(z)=P(z)-D$ is strongly metrically subregular (or equivalently its inverse is isolated calm) at $(\bar z,0)$. When the set $D$ is convex, by the relationship between the limiting normal cone and its directional version in Proposition \ref{relationship}, $$N_D(P(\bar z);\nabla P(\bar z)w)=N_D(P(\bar z) )\cap \{\nabla P(\bar z)w\}^\perp.$$ Consequently in the case where $ T_\Omega^{\rm lin}(\bar z ) \not =\{0\}$ and $D$ is convex, FOSCMS reduces to NNAMCQ. Indeed, suppose that $\nabla P(\bar z)^T\lambda=0$ and $\lambda \in N_D(P(\bar z) )$. Then $\lambda^T (\nabla P(\bar z)w)=0$. Hence $\lambda \in N_D(P(\bar z);\nabla P(\bar z)w)$ which implies from FOSCMS that $\lambda=0$. Hence for convex $D$, FOSCMS is equivalent to saying that either the strong metric subregularity or the NNAMCQ (\ref{NNAMCQ}) holds at $(\bar z,0)$. In the case of an inequality system $P(z)\leq 0$ and $ T_\Omega^{\rm lin}(\bar z ) \not =\{0\}$,
SOSCMS is obviously weaker than NNAMCQ.
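To see that the gap can be strict, consider the following one-dimensional example of our own (not taken from the cited references): take $P(z)=-z^2$, $D=\mathbb{R}_-$ and $\bar z=0$. Then $\Omega=\{z\in\mathbb{R}\,\vert\,-z^2\leq 0\}=\mathbb{R}$ and $\nabla P(0)=0$, so NNAMCQ fails because every $\lambda\in N_{\mathbb{R}_-}(0)=\mathbb{R}_+$ satisfies $\nabla P(0)^T\lambda=0$. On the other hand $T_\Omega^{\rm lin}(0)=\mathbb{R}$ and, for every $w\not=0$,
\[\nabla P(0)^T\lambda=0,\;\lambda\in N_{\mathbb{R}_-}(0),\;w^T\nabla^2(\lambda P)(0)w=-2\lambda w^2\geq 0\quad\Longrightarrow\quad\lambda=0,\]
so SOSCMS holds; indeed $M(z)=P(z)-\mathbb{R}_-$ is trivially metrically subregular at $(0,0)$ since $\Omega=\mathbb{R}$.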
In many situations, the constraint system $ P(z)\in D$ can be split into two parts, one of which can easily be verified to satisfy MSCQ. For example
\begin{equation}\label{EqConstrSplit}P(z)=(P_1(z),P_2(z))\in D=D_1\times D_2\end{equation} where $P_i:\mathbb{R}^d\to\mathbb{R}^{s_i}$ are smooth and $D_i\subset \mathbb{R}^{s_i}$, $i=1,2$ are closed, and for one part, say $P_2(z)\in D_2$, it is known in advance that the map $P_2(\cdot)-D_2$ is metrically subregular at $(\bar z,0)$. In this case the following theorem is useful. \begin{theorem}
\label{ThSuffCondMSSplit}{Let $P(\bar z)\in D$ with $P$ smooth and $D$ closed
and assume that $P$ and $D$} can be written in the form \eqref{EqConstrSplit} such that the set-valued map $P_2(z)-D_2$ is metrically subregular at $(\bar z,0)$. Further assume that for every $0\not=w\in T_\Omega^{\rm lin}(\bar z)$ one has
\[\nabla P_1(\bar z)^T\lambda^1+\nabla P_2(\bar z)^T\lambda^2=0,\;\lambda^i\in N_{D_i}(P_i(\bar z);\nabla P_i(\bar z)w)\;i=1,2\;\Longrightarrow\;\lambda^1=0.\]
Then MSCQ holds at $\bar z$ {for the system $P(z)\in D$}. \end{theorem} \begin{proof}
Let the set-valued maps $M$ and $M_i$, $i=1,2$, be given by $M(z):=P(z)-D$ and $M_i(z):=P_i(z)-D_i$, respectively. Since $P_1$ is assumed to be smooth, it is also Lipschitz near $\bar z$ and thus $M_1$ has the Aubin property around $(\bar z,0)$. Consider any direction $0\not=w\in T_\Omega^{\rm lin}(\bar z)$. By \cite[Definition 2(3.)]{Gfr13a} the limit set critical for directional metric regularity ${\rm Cr}_{\mathbb{R}^{s_1}}M((\bar z,0); w)$ {with respect to $w$ and $\mathbb{R}^{s_1}$ at $(\bar z, 0)$} is defined as the collection of all elements $(v,z^\ast)\in\mathbb{R}^s\times\mathbb{R}^d$ such that there are sequences $t_k\searrow 0$, $(w_k,v_k,z_k^\ast)\to (w,v,z^\ast)$, $\lambda_k\in{\cal S}_{\mathbb{R}^s}$ and a real $\beta>0$ such that $(z_k^\ast,\lambda_k)\in\widehat N_{{\rm gph\,} M}(\bar z+t_kw_k, t_kv_k)$ and $\norm{\lambda_k^1}\geq\beta$ hold for all $k$, where $\lambda_k=(\lambda_k^1,\lambda_k^2)\in\mathbb{R}^{s_1}\times\mathbb{R}^{s_2}$. We claim that $(0,0)\not\in {\rm Cr}_{\mathbb{R}^{s_1}}M((\bar z,0); w)$. Assume on the contrary that $(0,0)\in {\rm Cr}_{\mathbb{R}^{s_1}}M((\bar z,0); w)$ and consider the corresponding sequences $(t_k,w_k,v_k,z_k^\ast,\lambda_k)$. The sequence $\lambda_k$ is bounded and by passing to a subsequence we can assume that $\lambda_k$ converges to some $\lambda=(\lambda^1,\lambda^2)$ satisfying $\norm{\lambda^1}\geq \beta>0$. Since $(z_k^\ast,\lambda_k)\in \widehat N_{{\rm gph\,} M}(\bar z+t_kw_k, t_kv_k)$ it follows from \cite[Exercise 6.7]{RoWe98} that
$-\lambda_k\in\widehat N_D(P(\bar z+t_kw_k)-t_kv_k)$ and $z_k^\ast=-\nabla P(\bar z+t_kw_k)^T\lambda_k$ implying $-\lambda\in N_D(P(\bar z);\nabla P(\bar z)w)$ and $\nabla P(\bar z)^T(-\lambda)=\nabla P_1(\bar z)^T(-\lambda^1)+\nabla P_2(\bar z)^T(-\lambda^2)=0$. From \cite[Lemma 1]{GfrKl16} we also conclude $-\lambda^i\in N_{D_i}(P_i(\bar z);\nabla P_i(\bar z)w)$ resulting in a contradiction to the assumption of the theorem. Hence our claim $(0,0)\not\in {\rm Cr}_{\mathbb{R}^{s_1}}M((\bar z,0); w)$ holds true and by \cite[Lemmas 2, 3, Theorem 6]{Gfr13a} it follows that $M$ is metrically subregular in direction $w$ at $(\bar z,0)$, where directional metric subregularity is defined in \cite[Definition 1]{Gfr13a}. Since by {definition} $M$ is metrically subregular in every direction $w\not \in T_\Omega^{\rm lin}(\bar z)$, we conclude from \cite[Lemma 2.7]{Gfr14b} that $M$ is metrically subregular at $(\bar z,0)$. \end{proof}
We now discuss some consequences of MSCQ. First we have the following change-of-coordinates formula for normal cones. \begin{proposition}\label{PropInclNormalcone}
Let $\bar z\in \Omega:=\{z| P(z)\in D\}$ with $P$ smooth and $D$ closed.
Then
\begin{equation}\label{EqInclRegNormalCone}
\widehat N_\Omega(\bar z)\supset \nabla P(\bar z)^T\widehat N_D(P(\bar z)).
\end{equation}
Further, if MSCQ holds at $\bar z$ for the system $P(z)\in D$, then
\begin{equation}\label{EqInclLimNormalCone}
\widehat N_\Omega(\bar z)\subset N_\Omega(\bar z)\subset \nabla P(\bar z)^T N_D(P(\bar z)).
\end{equation}
In particular if MSCQ holds at $\bar z$ for the system $P(z)\in D$ with convex $D$, then
\begin{equation} \label{EqInclNormalCone} \widehat N_\Omega(\bar z)= N_\Omega(\bar z)=\nabla P(\bar z)^T N_D(P(\bar z)).\end{equation} \end{proposition} \begin{proof}
The inclusion \eqref{EqInclRegNormalCone} follows from \cite[Theorem 6.14]{RoWe98}.
The first inclusion in {(\ref{EqInclLimNormalCone})} follows immediately from the definitions of the regular/limiting normal cone, whereas the second one follows from \cite[Theorem 4.1]{HenJouOut02}. When $D$ is convex, the regular normal cone coincides with the limiting normal cone and hence (\ref{EqInclNormalCone}) follows by combining (\ref{EqInclRegNormalCone}) {and} (\ref{EqInclLimNormalCone}). \end{proof}
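To illustrate formula (\ref{EqInclNormalCone}) in a simple convex instance, the following numerical sketch (a toy example of our own; the set, helper names, sampling scheme and tolerance are assumptions, not taken from the paper or its references) checks that elements of $\nabla P(\bar z)^T N_D(P(\bar z))$ are indeed normals to $\Omega$ for the unit disk:

```python
import math

# Toy instance (our own): Omega = {z in R^2 : P(z) <= 0} with
# P(z) = z1^2 + z2^2 - 1 and D = R_-, at the boundary point zbar = (1, 0).
# Then N_D(P(zbar)) = N_{R_-}(0) = [0, +inf), grad P(zbar) = (2, 0), and
# formula (EqInclNormalCone) predicts N_Omega(zbar) = {(t, 0) : t >= 0}.

zbar = (1.0, 0.0)
grad_P = (2.0, 0.0)

def predicted_normal(t):
    """An element grad P(zbar)^T * lam of the predicted cone, with lam = t >= 0."""
    return (grad_P[0] * t, grad_P[1] * t)

def is_normal_convex(zstar, n_samples=1000):
    """For convex Omega, zstar lies in N_Omega(zbar) iff <zstar, z - zbar> <= 0
    for every feasible z; we test this inequality on boundary samples."""
    for k in range(n_samples):
        phi = 2.0 * math.pi * k / n_samples
        z = (math.cos(phi), math.sin(phi))  # point of the boundary circle
        inner = zstar[0] * (z[0] - zbar[0]) + zstar[1] * (z[1] - zbar[1])
        if inner > 1e-12:
            return False
    return True

assert is_normal_convex(predicted_normal(3.0))  # (6, 0) is a normal
assert not is_normal_convex((0.0, 1.0))         # a tangential direction is not
```

As expected, the check fails for any $z^\ast$ outside the ray $\{(t,0)\,\vert\, t\geq 0\}$.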
In the case where $D =\mathbb{R}^{s_1}_-\times\{0\}^{s_2}$, it is well-known in nonlinear programming theory that MFCQ, or equivalently NNAMCQ, is a necessary and sufficient condition for the compactness of the set of Lagrange multipliers. In the case where $D\not =\mathbb{R}^{s_1}_-\times\{0\}^{s_2}$, NNAMCQ also implies the boundedness of the multipliers. However, MSCQ is weaker than NNAMCQ, and hence the set of Lagrange multipliers may be unbounded if MSCQ holds but NNAMCQ fails. Nevertheless, Theorem \ref{ThBMP} shows that under MSCQ one can always select multipliers from a uniformly bounded set.
\begin{definition}[cf. \cite{GfrMo16}]
Let $\bar z\in \Omega:=\{z| P(z)\in D\}$ with $P$ smooth and $D$ closed. We say that the {\em bounded multiplier property} (BMP) holds at $\bar z$ for the system $P(z)\in D$, if there is some modulus $\kappa\geq 0$ and some neighborhood $W$ of $\bar z$ such that for every $z\in W\cap \Omega$ and every $z^\ast\in N_{\Omega}(z)$ there is some $\lambda\in \kappa\norm{z^\ast}
{\cal B}_{\mathbb{R}^s}\cap N_D(P(z))$ satisfying
\[z^\ast=\nabla P(z)^T\lambda.\] \end{definition} The following theorem gives a sharper upper estimate for the normal cone than (\ref{EqInclLimNormalCone}). \begin{theorem}\label{ThBMP} Let $\bar z\in \Omega:=\{z\,\vert\, P(z)\in D\}$ and assume that MSCQ holds at the point $\bar z$ for the system $P(z)\in D$.
Let $W$ denote an open neighborhood of $\bar z$ and let $\kappa\geq 0$ be a real number such that
$$\dist{z, \Omega} \leq \kappa \dist{P(z), D} \quad \forall z\in W.$$ Then \[N_{\Omega}(z)\subset \left \{z^\ast\in\mathbb{R}^d\,\vert\, \exists \lambda\in \kappa\norm{z^\ast} {\cal B}_{\mathbb{R}^s}\cap N_D(P(z))\;\mbox{with}\; z^\ast=\nabla P(z)^T\lambda \right \}\quad \forall z\in W.\] In particular BMP holds at $\bar z$ for the system $P(z)\in D$. \end{theorem} \begin{proof} Under the assumption, the set-valued map $M(z):=P(z)-D$ is metrically subregular at $(\bar z,0)$. The definition of metric subregularity guarantees the existence of the open neighborhood $W$ and the constant $\kappa$ in the assumption. Hence
for each $z\in M^{-1}(0)\cap W=\Omega\cap W$ the map $M$ is also metrically subregular at $(z,0)$ and by applying {\cite[Proposition 4.1]{GfrOut15}} we obtain
\[N_\Omega(z)=N_{M^{-1}(0)}(z;0)\subset\{z^\ast\,\vert\, \exists \lambda\in \kappa\norm{z^\ast}{\cal B}_{\mathbb{R}^s}: (z^\ast,\lambda)\in N_{{\rm gph\,} M}((z,0);(0,0))\}.\] It follows from \cite[Exercise 6.7]{RoWe98} that $$N_{{\rm gph\,} M}((z,0);(0,0))=N_{{\rm gph\,} M}((z,0))= \{(z^\ast,\lambda)\,\vert\, -\lambda\in N_D(P(z)), z^\ast=\nabla P(z)^T(-\lambda)\}.$$ Hence the assertion follows. \end{proof}
\section{Failure of {MPCC-tailored constraint qualifications for problem (MPCC)}} In this section, we discuss the difficulties with {MPCC-tailored} constraint qualifications for problem (MPCC) by considering the constraint system of problem
(MPCC) in the following form $$ \widetilde \Omega :=\left \{(x,y,\lambda): \begin{array}{l}
0=h(x,y, \lambda):=\phi(x,y)+\nabla g(y)^T\lambda,\\
0\geq g(y) \perp -\lambda \leq 0\\
G(x,y)\leq 0\end{array}\right \}, $$ where $\phi:\mathbb{R}^n\times \mathbb{R}^m\to\mathbb{R}^m$ and $G:\mathbb{R}^n\times\mathbb{R}^m\to \mathbb{R}^p$ are continuously differentiable and $g:\mathbb{R}^m\to\mathbb{R}^q$ is twice continuously differentiable.
Given a triple $(\bar x,\bar y,\bar\lambda)\in \widetilde \Omega $ we define the following index sets of active constraints: \begin{eqnarray*}
&&{\cal I}_g:={\cal I}_g(\bar y,\bar\lambda):=\{i{\in \{1,\ldots,q\}}\,\vert\, g_i(\bar y)=0, \bar\lambda_i>0\},\\
&&{\cal I}_\lambda:={\cal I}_\lambda(\bar y,\bar\lambda):=\{i{\in \{1,\ldots,q\}}\,\vert\, g_i(\bar y)<0, \bar\lambda_i=0\},\\
&&{\cal I}_0:={\cal I}_0(\bar y,\bar\lambda):=\{i{\in \{1,\ldots,q\}}\,\vert\, g_i(\bar y)=0, \bar\lambda_i=0\},\\
&&{\cal I}_G:={\cal I}_G(\bar x,\bar y):=\{i{\in \{1,\ldots,p\}}\,\vert\, G_i(\bar x,\bar y)=0\}. \end{eqnarray*}
\begin{definition}[\cite{ScheelScholtes}] We say that MPCC-MFCQ holds at $(\bar x,\bar y,\bar\lambda)$ if the gradient vectors \begin{equation}\label{MPEC-MFCQ} \nabla h_i(\bar x, \bar y, \bar\lambda), i=1,\dots, m, \ (0, \nabla g_i(\bar y) ,0), i\in {\cal I}_g\cup {\cal I}_0, \ (0,0,e_i) , i\in {\cal I}_\lambda\cup {\cal I}_0, \end{equation} where $e_i$ denotes the unit vector with the $i$th component equal to $1$, are linearly independent and there exists a vector $(d_x,d_y,d_\lambda)\in\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^q$ orthogonal to the vectors in (\ref{MPEC-MFCQ}) such that $$\nabla G_i(\bar x,\bar y)(d_x,d_y)<0, i\in {\cal I}_G.$$ We say that {MPCC-LICQ} holds at $(\bar x,\bar y,\bar\lambda)$ if the gradient vectors $$\nabla h_i(\bar x, \bar y, \bar\lambda), i=1, \dots, m, \ (0,\nabla g_i(\bar y) ,0), i\in {\cal I}_g\cup {\cal I}_0, \ (0,0,e_i) , i\in {\cal I}_\lambda\cup {\cal I}_0 ,\ (\nabla G_i (\bar x,\bar y),0), i\in {\cal I}_G $$ are linearly independent. \end{definition} {MPCC-MFCQ} implies that for every partition $(\beta_1,\beta_2)$ of ${\cal I}_0$ the branch \begin{equation}\label{branch} \left \{ \begin{array}{l}
\phi(x,y)+\nabla g(y)^T\lambda=0,\\
g_i(y)=0,\ \lambda_i\geq 0,\ i\in {\cal I}_g,\; g_i(y) \leq 0,\ \lambda_i=0,\ i\in {\cal I}_\lambda,\\
g_i(y)=0, \lambda_i\geq 0,i\in\beta_1,\; g_i(y)\leq 0,\lambda_i=0,i\in\beta_2,\\
G(x,y)\leq 0
\end{array} \right. \end{equation} satisfies MFCQ at $(\bar x,\bar y,\bar\lambda)$.
We now show that {MPCC}-MFCQ never {holds} for (MPCC) if the lower level program has more than one multiplier. \begin{proposition}\label{Prop5} Let $(\bar x,\bar y,\bar\lambda)\in \widetilde \Omega$ and assume that there exists a second multiplier $\hat\lambda\not=\bar\lambda$ such that $(\bar x,\bar y,\hat \lambda)\in \widetilde \Omega$. Then for every partition $(\beta_1,\beta_2)$ of ${\cal I}_0$ the branch (\ref{branch})
does not fulfill MFCQ at $(\bar x,\bar y,\bar\lambda)$. \end{proposition} \begin{proof}
Since $\nabla g(\bar y)^T(\hat\lambda-\bar\lambda)=0$, {$(\hat\lambda-\bar\lambda)_i\geq 0$, $i\in {{\cal I}_\lambda\cup \beta_2}$ and $\hat\lambda-\bar\lambda\not =0$,} the assertion follows immediately. \end{proof} Since {MPCC}-LICQ is stronger than {MPCC}-MFCQ, the following corollary is immediate. \begin{corollary}\label{Cor1}Let $(\bar x,\bar y,\bar\lambda)\in \widetilde \Omega$ and assume that there exists a second multiplier $\hat\lambda\not=\bar\lambda$ such that $(\bar x,\bar y,\hat \lambda)\in \widetilde \Omega$. Then {MPCC}-LICQ fails at $(\bar x,\bar y,\bar\lambda)$. \end{corollary}
{It is worth noting that our {result in Proposition \ref{Prop5} is} only valid under the assumption that $g(y)$ is independent of $x$. In the case of bilevel programming where the lower level problem has a constraint dependent on the upper level variable, an example given in \cite[Example 4.10]{Mehlitz-Wachsmuth} shows that if the multiplier is not unique, then the corresponding MPCC-MFCQ may hold at some of the multipliers and fail to hold at others.}
\begin{definition}[see e.g. \cite{FleKanOut07}] Let $(\bar x,\bar y,\bar\lambda)$ be feasible for (MPCC). We say {MPCC-ACQ}
and
{MPCC-GCQ} hold if $$T_{\widetilde \Omega }(\bar x,\bar y,\bar\lambda)=T_{\rm MPCC}^{\rm lin}(\bar x,\bar y,\bar\lambda) \mbox{ and } \widehat N_{\widetilde \Omega}(\bar x,\bar y,\bar\lambda)=(T_{\rm MPCC}^{\rm lin}(\bar x,\bar y,\bar\lambda))^\circ$$ respectively, where \begin{eqnarray*}\lefteqn{T_{\rm MPCC}^{\rm lin}(\bar x,\bar y,\bar\lambda)}\\ &:=&\left \{(u,v,\mu)\in\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^q\,\vert\, \begin{array}{l}\nabla_{x} \phi(\bar x,\bar y)u+\nabla_y (\phi+\nabla_{y}(\lambda^Tg))(\bar x,\bar y)v +\nabla g(\bar y)^T\mu=0,\\ \nabla g_i(\bar y)v=0,i\in {\cal I}_g,\;\mu_i=0,i\in {\cal I}_\lambda,\\ \nabla g_i(\bar y)v\leq 0,\mu_i\geq 0,\mu_i\nabla g_i(\bar y)v=0, i\in {\cal I}_0,\\ \nabla G_i(\bar x,\bar y)(u,v)\leq 0,i\in {\cal I}_G \end{array}\right \}\end{eqnarray*} is the MPEC linearized cone at $(\bar x,\bar y,\bar\lambda)$. \end{definition} \noindent Note that {MPCC}-ACQ and {MPCC}-GCQ are the GACQ and GGCQ for the equivalent formulation of the set $\widetilde \Omega$ in the form of $P(z)\in D$ with $D$ involving the complementarity set
$$D_{cc}:=\{(a,b)\in \mathbb{R}_-^q\times \mathbb{R}_-^q| a^Tb=0\}$$ respectively. {MPCC}-MFCQ implies {MPCC}-ACQ (cf. \cite{FleKan05}) and from the definitions it is easy to see that {MPCC}-ACQ is stronger than {MPCC}-GCQ. Under {MPCC}-GCQ, it is known that a locally optimal solution of (MPCC) must be an M-stationary point (\cite[Theorem 14]{FleKanOut07}). Although {MPCC}-GCQ is weaker than most other {MPCC-tailored} constraint qualifications, the following example shows that the constraint qualification {MPCC}-GCQ can still be violated when the multiplier of the lower level problem is not unique. In contrast to \cite[Example 6]{Ad-Hen-Out}, all the constraints are convex.
\begin{example}\label{Ex1} Consider MPEC \begin{eqnarray} \min_{x,y} && F(x,y):=x_1-\frac{3}{2} y_1 + x_2-\frac 32 y_2- y_3 \nonumber \\ s.t. && 0\in \phi(x,y)+N_\Gamma( y), \label{EXGE}\\ && G_1(x,y)=G_1(x):=-x_1-2x_2\leq 0,\nonumber \\ && G_2(x,y)=G_2(x):=-2x_1-x_2\leq 0,\nonumber \end{eqnarray} where $$\phi(x,y):=\left(\begin{array}{c}
y_1-x_1\\
y_2-x_2\\
-1\end{array}\right), \quad \Gamma:=\left\{y\in \mathbb{R}^3| g_1(y):=y_3+\frac12 y_1^2\leq 0,\;g_2(y):=y_3+\frac12 y_2^2\leq 0 \right\}.$$ Let $\bar x=(0,0)$, $\bar y=(0,0,0)$. The lower level inequality system $g(y)\leq 0$ is convex satisfying the Slater condition and therefore $y$ is a solution to the parametric generalized equation (\ref{EXGE}) if and only if $y'=y$ is a global minimizer of the optimization problem: $\displaystyle \min_{y'} ~\langle \phi(x,y), y' \rangle \
\mbox{ s.t. } y'\in \Gamma,$ and if and only if there is a multiplier $\lambda$ fulfilling KKT-conditions \begin{eqnarray} \label{EqKKT} &&\left(\begin{array}{c}
y_1-x_1+\lambda_1y_1\\
y_2-x_2+\lambda_2y_2\\
-1+\lambda_1+\lambda_2\end{array}\right)=\left(\begin{array}
{c}0\\0\\0
\end{array}\right),\\ \nonumber && 0\geq y_3+\frac12 y_1^2\perp -\lambda_1\leq 0,\\ \nonumber &&0\geq y_3+\frac12 y_2^2\perp-\lambda_2\leq 0. \end{eqnarray} {Let ${\cal F}:=\{x\,\vert\, G_1(x)\leq 0, G_2(x)\leq 0\}$. Then ${\cal F}={\cal F}_1\cup {\cal F}_2\cup{\cal F}_3$ where \begin{eqnarray*}
{\cal F}_1&:=&\left \{(x_1,x_2)\in \mathbb{R}^2 \,\vert\, 2\vert x_2\vert\leq x_1 \right \},\\
{\cal F}_2&:=&\left\{(x_1,x_2)\in \mathbb{R}^2\,\vert\, \frac {x_1}2\leq x_2\leq 2x_1\right\},\\
{\cal F}_3&:=&\left\{(x_1,x_2)\in \mathbb{R}^2\,\vert\, 2\vert x_1\vert\leq x_2\right\}. \end{eqnarray*} Straightforward calculations yield that for each $x\in {\cal F}$ there exists a unique solution $y(x)$, which is given by \[y(x)=\begin{cases}(\frac{x_1}2, x_2,-\frac 18 x_1^2)& \mbox{if $x\in{\cal F}_1$,}\\ ( \frac{x_1+x_2}3, \frac{x_1+x_2}3,-\frac1{18}(x_1+x_2)^2)& \mbox{if $x\in{\cal F}_2$,}\\ ( x_1, \frac{x_2}2,-\frac 18 x_2^2)& \mbox{if $x\in{\cal F}_3$.} \end{cases}\]} Further, at {$\bar x=(0,0)$ {we have} $y(\bar x)=(0,0,0)$} and the set of the multipliers is
$${ \Lambda:=}\{\lambda\in \mathbb{R}_+^2| \lambda_1+\lambda_2=1\},$$ while for all {$x\not=(0,0)$} the gradients of the lower level constraints active at $y(x)$ are linearly independent and the unique multiplier is given by \begin{equation}\label{lambda}\lambda(x)=\begin{cases}(1,0)& \mbox{if $x\in{\cal F}_1$,}\\ ( \frac{2x_1-x_2}{x_1+x_2}, \frac{2x_2-x_1}{x_1+x_2})&\mbox{if $x\in{\cal F}_2$,}\\ ( 0,1)& \mbox{if $x\in{\cal F}_3$.} \end{cases}\end{equation} Since \[F(x,y(x))=\begin{cases} \frac 14 x_1-\frac 12 x_2+\frac 18 x_1^2&\mbox{if $x\in{\cal F}_1$},\\ \frac1{18}(x_1+x_2)^2&\mbox{if $x\in{\cal F}_2$},\\ \frac 14 x_2-\frac 12 x_1+\frac 18 x_2^2&\mbox{if $x\in{\cal F}_3$},\\ \end{cases}\] {and ${\cal F}={\cal F}_1\cup {\cal F}_2\cup{\cal F}_3$,} we see that $(\bar x,\bar y)$ is a globally optimal solution of the MPEC.
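The closed-form expressions for $y(x)$ and $\lambda(x)$ on ${\cal F}_2$ can be cross-checked numerically against the KKT system (\ref{EqKKT}); the following sketch (the helper names are ours, not part of the paper) verifies the stationarity equations and that both lower level constraints are active:

```python
# Numerical cross-check (helper names are ours) of the closed-form solution
# y(x) and multiplier lambda(x) on the region F2, against the KKT system (EqKKT).

def y_of_x(x1, x2):
    """Branch of y(x) valid for x in F2, i.e. x1/2 <= x2 <= 2*x1."""
    s = x1 + x2
    return (s / 3.0, s / 3.0, -s * s / 18.0)

def lam_of_x(x1, x2):
    """Branch of lambda(x) valid on F2 (with x1 + x2 > 0)."""
    s = x1 + x2
    return ((2.0 * x1 - x2) / s, (2.0 * x2 - x1) / s)

def kkt_residual(x1, x2):
    """Max violation of the three stationarity equations and of the two
    (active) lower level constraints g1(y) = g2(y) = 0."""
    y1, y2, y3 = y_of_x(x1, x2)
    l1, l2 = lam_of_x(x1, x2)
    r1 = y1 - x1 + l1 * y1
    r2 = y2 - x2 + l2 * y2
    r3 = -1.0 + l1 + l2
    c1 = y3 + 0.5 * y1 ** 2
    c2 = y3 + 0.5 * y2 ** 2
    return max(abs(r) for r in (r1, r2, r3, c1, c2))

for x in [(1.0, 1.0), (2.0, 1.5), (0.5, 0.9)]:  # sample points of F2
    assert kkt_residual(*x) < 1e-12
```

At $x=(1,1)$ the sketch reproduces $\lambda(x)=(0.5,0.5)$, in agreement with (\ref{lambda}).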
The original problem is equivalent to the following MPCC: \begin{eqnarray*}
\min_{x,y,\lambda}&& x_1-\frac 32 y_1 + x_2-\frac 32 y_2- y_3\\
\mbox{s.t.}&&\mbox{$x,y,\lambda$ {fulfill \eqref{EqKKT},}}\\
&&-2x_1-x_2\leq 0,\\
&&-x_1-2x_2\leq 0. \end{eqnarray*} The feasible region of this problem is \begin{eqnarray*} \widetilde \Omega=\bigcup_{\bar x\not=x\in {\cal F}}\{(x,y(x),\lambda(x))\}
{\cup(\{(\bar x,\bar y)\}\times \Lambda)}. \end{eqnarray*} Any $(\bar x,\bar y, \lambda)$ with $\lambda \in {\Lambda}$ is a globally optimal solution. However, it is easy to verify that, unless $\lambda_1=\lambda_2=0.5$, $(\bar x, \bar y, \lambda)$ is not even a weakly stationary point, implying by \cite[Theorem 7]{FleKanOut07} that {MPCC}-GCQ, and consequently {MPCC}-ACQ, fail to hold. Now consider $\lambda=(0.5,0.5)$. The MPEC linearized cone $T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda)$ is the collection of all $(u,v,\mu)$ such that \begin{equation}\label{KKTeqn}
\left(\begin{array}{c}
1.5v_1-u_1\\
1.5v_2-u_2\\
\mu_1+\mu_2\end{array}\right)=\left(\begin{array}{c}
0\\
0\\
0\end{array}\right),\quad \begin{array}{l}v_3=0,\\
-2u_1-u_2\leq 0,\; -u_1-2u_2\leq 0.\end{array} \end{equation} Next we compute the actual tangent cone $T_{\widetilde \Omega}(\bar x,\bar y,\lambda)$. Consider sequences $t_k\downarrow 0$, $(u^k,v^k,\mu^k)\to (u,v,\mu)$ such that $(\bar x,\bar y,\lambda)+t_k(u^k,v^k,\mu^k)\in\widetilde\Omega$. If $u^k\not=0$ for infinitely many $k$, then {$\bar x+t_k u^k\not =0$} and hence $(\bar y+t_kv^k,\lambda+t_k\mu^k)=(y(\bar x+t_ku^k),\lambda(\bar x+t_ku^k))$ for those $k$. Since $\lambda=(0.5,0.5)$, it follows from (\ref{lambda}) that $\bar x+t_ku^k\in{\cal F}_2$ for infinitely many $k$, implying, by passing to a subsequence if necessary, \[v=\lim_{k\to \infty}\frac{y(\bar x+t_ku^k)-\bar y}{t_k}=\frac 13(u_1+u_2,u_1+u_2,0)\] and \begin{eqnarray*}\mu&=&\lim_{k\to\infty}\frac{\lambda(\bar x+t_ku^k)-\lambda}{t_k}= \lim_{k\to\infty}\frac{(\frac{2u_1^k-u_2^k}{u_1^k+u_2^k}, \frac{2u_2^k-u_1^k}{u_1^k+u_2^k})-(0.5,0.5)}{t_k}\\ &=&\lim_{k\to\infty}1.5\frac{(\frac{u_1^k-u_2^k}{u_1^k+u_2^k}, \frac{u_2^k-u_1^k}{u_1^k+u_2^k})}{t_k}.\end{eqnarray*} Hence $v_1=v_2=\frac{1}{3}(u_1+u_2),\ v_3=0$ {and $\mu_1+\mu_2=0$.} {Also from (\ref{KKTeqn}), we have $u_1=u_2$ since $v_1=v_2$ and the tangent cone $T_{\widetilde \Omega}(\bar x,\bar y,\lambda)$ is always a subset of the MPEC linearized cone $T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda)$ (see e.g. \cite[Lemma 3.2]{FleKan05}).}
Further, since $\bar x+t_ku^k\in{\cal F}_2$, we must have $u_1\geq 0$. If $u^k=0$ for all but finitely many $k$, then we have $v^k=0$ and $\lambda+t_k \mu^k\in\Lambda$ implying $\mu_1+\mu_2=0$. Putting all together, we obtain that
the actual tangent cone $T_{\widetilde\Omega}(\bar x,\bar y,\lambda)$ to the feasible set is the collection of all $(u,v,\mu)$ satisfying
\begin{eqnarray*}
&&u_1=u_2\geq 0, v_1=v_2=\frac 23 u_1,\\
&&v_3=0, \mu_1+\mu_2=0.
\end{eqnarray*}
Now it is easy to see that $T_{\widetilde\Omega}(\bar x,\bar y,\lambda)\not=T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda)$. Moreover, since both $T_{\widetilde\Omega}(\bar x,\bar y,\lambda)$ and $T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda)$ are convex polyhedral {sets}, one also has $(T_{\widetilde\Omega}(\bar x,\bar y,\lambda))^\circ\not=(T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda))^\circ$ and thus {MPCC}-GCQ does not hold for $\lambda=(0.5,0.5)$ either.
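To make the discrepancy concrete, note that the direction $(u,v,\mu)=\bigl((1,0),(\tfrac23,0,0),(0,0)\bigr)$ (chosen by us purely for illustration) satisfies all the conditions in (\ref{KKTeqn}), since
\[1.5\cdot\tfrac23-u_1=0,\quad 1.5\cdot 0-u_2=0,\quad \mu_1+\mu_2=0,\quad v_3=0,\quad -2u_1-u_2=-2\leq 0,\quad -u_1-2u_2=-1\leq 0,\]
yet violates $u_1=u_2$; hence it belongs to $T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda)\setminus T_{\widetilde\Omega}(\bar x,\bar y,\lambda)$.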
\end{example}
\section{Sufficient condition for MSCQ} As we discussed in the introduction and in Section 3, there are many difficulties involved in formulating an MPEC as the problem (MPCC). In this section, we turn our attention to problem (MPEC) with the constraint system defined in the following form \begin{equation}\label{Omega} \Omega:=\left \{(x,y): \begin{array}{l} 0\in \phi(x,y)+ \widehat{N}_\Gamma(y)\\ G(x,y)\leq 0\end{array}\right \}, \end{equation} where
$\Gamma:=\{y\in \mathbb{R}^m | g(y)\leq 0\}$, $\phi:\mathbb{R}^n\times \mathbb{R}^m\to\mathbb{R}^m$ and $G:\mathbb{R}^n\times\mathbb{R}^m\to \mathbb{R}^p$ are continuously differentiable and $g:\mathbb{R}^m\to\mathbb{R}^q$ is twice continuously differentiable. Let $(\bar x,\bar y)$ be a feasible solution of problem (MPEC). We assume that MSCQ is fulfilled for the constraint $g(y)\leq 0$ at $\bar y$. Then by definition MSCQ also holds for all points $y\in\Gamma$ near $\bar y$ and by Proposition \ref{PropInclNormalcone} the following equations hold for such $y$: \[N_\Gamma(y)=\widehat N_\Gamma(y)=\nabla g(y)^T N_{\mathbb{R}^q_-}(g(y)),\] where $N_{\mathbb{R}^q_-}(g(y))=\{\lambda\in\mathbb{R}^q_+\,\vert\, \lambda_i=0, i\not\in {\cal I}(y)\}$ {and ${\cal I}(y):=\{i\in\{1,\ldots,q\}\,\vert\, g_i(y)=0\}$ is the index set of active inequality constraints.}
For the sake of simplicity we do not include equality constraints in either the upper or the lower level. Since we use MSCQ as the basic constraint qualification for both levels, an equality constraint $h(x)=0$ can equivalently be written as the two inequality constraints $h(x)\leq0,\ -h(x)\leq 0$ without affecting MSCQ.
In the case where $\Gamma$ is convex, MSCQ was proposed in \cite{YeYe} as a constraint qualification for the M-stationary condition, and two types of sufficient conditions for MSCQ were given there: one covers the case when all functions involved are affine, the other the case when metric regularity holds. In this section, by making use of FOSCMS for the split system in Theorem \ref{ThSuffCondMSSplit}, we derive a new sufficient condition for MSCQ for the constraint system (\ref{Omega}). Applying the new constraint qualification to the problem in Example \ref{Ex1}, we show that, in contrast to the MPCC reformulation, for which even the weakest constraint qualification MPCC-GCQ fails at $(\bar x, \bar y, \lambda)$ for all multipliers $\lambda$, MSCQ holds at $(\bar x, \bar y)$ for the original formulation.
In order to apply FOSCMS in Theorem \ref{ThSuffCondMSSplit}, we need to calculate the linearized cone $T_\Omega^{\rm lin} (\bar z)$ and consequently we need to calculate the tangent cone $T_{{\rm gph}\widehat{N}_\Gamma}(\bar y, -\phi(\bar x, \bar y))$. We now perform this task. First we introduce some notation.
Given vectors $y\in\Gamma$, $y^\ast\in\mathbb{R}^m$, consider the {\em set of multipliers} \begin{eqnarray}\label{Lambda}
\Lambda(y,y^\ast):=\big\{\lambda \in\mathbb{R}^q_+\big|\;\nabla g(y)^T\lambda=y^\ast, \lambda_i=0, i\not\in {\cal I}(y) \big\}. \end{eqnarray}
{For a multiplier $\lambda$, the corresponding collection of {\em strict complementarity indexes} is denoted by} \begin{eqnarray}\label{EqI+}
I^+(\lambda):=\big\{i\in \{1,\ldots,q\}\big|\;\lambda_i>0\big\}\;\mbox{ for }\;\lambda=(\lambda_1,\ldots,\lambda_q)\in\mathbb{R}^q_+. \end{eqnarray} Denote by ${\cal E}(y,y^\ast)$ the collection of all the {\em extreme points} of the closed and convex set of multipliers $\Lambda(y,y^\ast)$ and recall that $\lambda\in\Lambda(y,y^\ast)$ belongs to ${\cal E}(y,y^\ast)$ if and only if the family of gradients $\{\nabla g_i(y)\,\vert\, i\in I^+(\lambda)\}$ is linearly independent. Further ${\cal E}(y,y^\ast)\ne\emptyset$ if and only if $\Lambda(y,y^\ast)\ne\emptyset$.
To proceed further, recall the notion of the {\em critical cone} to $\Gamma$ at $(y,y^\ast)\in{\rm gph\,} \widehat{N}_\Gamma$ given by
$K(y,y^\ast):=T_\Gamma(y)\cap\{y^\ast\}^\perp$
and define the {\em multiplier set in a direction} $v\in K(y,y^\ast)$ by \begin{eqnarray}\label{EqDir-mult} \Lambda(y,y^\ast;v):=\mathop{\rm arg\,max}\limits\big\{v^T\nabla^2(\lambda^Tg)(y)v\,\vert\, \lambda\in\Lambda(y,y^\ast)\big\}. \end{eqnarray}
Note that $\Lambda(y,y^\ast;v)$ is the solution set of a linear optimization problem and therefore $\Lambda(y,y^\ast;v)\cap {\cal E}(y,y^\ast)\not=\emptyset$ whenever $\Lambda(y,y^\ast;v)\not=\emptyset$. Further we denote the corresponding optimal function value by \begin{eqnarray}\label{EqDir-multval} \theta(y,y^\ast;v):=\max\big\{v^T\nabla^2(\lambda^Tg)(y)v\,\vert\, \lambda\in\Lambda(y,y^\ast)\big\}. \end{eqnarray}
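As an illustration (computed by us) consider the data of Example \ref{Ex1} at $\bar y=(0,0,0)$ with $\bar y^\ast=-\phi(\bar x,\bar y)=(0,0,1)$: since $\nabla g_1(\bar y)=\nabla g_2(\bar y)=(0,0,1)$, $\nabla^2 g_1(\bar y)={\rm diag}(1,0,0)$ and $\nabla^2 g_2(\bar y)={\rm diag}(0,1,0)$, we get
\[\Lambda(\bar y,\bar y^\ast)=\{\lambda\in\mathbb{R}^2_+\,\vert\,\lambda_1+\lambda_2=1\},\qquad
\theta(\bar y,\bar y^\ast;v)=\max_{\lambda\in\Lambda(\bar y,\bar y^\ast)}\bigl(\lambda_1v_1^2+\lambda_2v_2^2\bigr)=\max\{v_1^2,v_2^2\},\]
and $\Lambda(\bar y,\bar y^\ast;v)$ equals $\{(1,0)\}$ if $v_1^2>v_2^2$, $\{(0,1)\}$ if $v_2^2>v_1^2$, and the whole of $\Lambda(\bar y,\bar y^\ast)$ if $v_1^2=v_2^2$.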
The critical cone to $\Gamma$ has the following two expressions. \begin{proposition}(see e.g. \cite[Proposition 4.3]{GfrMo15a})\label{criticalcone} Suppose that MSCQ holds for the {system} $g(y) \in \mathbb{R}_-^q$ at $y$. Then the critical cone to $\Gamma$ at $(y,y^\ast)\in{\rm gph\,}\widehat N_\Gamma$ is a convex polyhedron that can be explicitly expressed as
$$K(y,y^*)=\{v| \nabla g(y)v \in T_{\mathbb{R}^q_-}(g(y)), v^T y^*=0\}.$$ Moreover for any $\lambda \in \Lambda(y,y^\ast)$,
$$K(y,y^*)=\left \{v\,\Big\vert\, \nabla g_i(y)v \left \{\begin{array}{ll} =0 &\mbox{ if } \lambda_i>0\\ \leq 0 &\mbox{ if } \lambda_i=0 \end{array} \right. \right \}.$$ \end{proposition} Based on the expression for the critical cone, it is easy to see that the normal cone to the critical cone has the following expression. \begin{lemma}\cite[Lemma 1]{{GfrOut14}} \label{lemma1} Assume MSCQ holds at $y$ for the system $g(y)\in\mathbb{R}^q_-$. Let $v\in K(y,y^*), \lambda \in \Lambda(y, y^*)$. Then
$${N}_{K(y,y^*)}(v)=\{\nabla g(y)^T\mu|\mu^T\nabla g(y)v=0, \mu\in T_{N_{\mathbb{R}_-^q}(g(y))}(\lambda)\}.$$ \end{lemma}
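For instance, in Example \ref{Ex1} at $\bar y=(0,0,0)$ with $\bar y^\ast=-\phi(\bar x,\bar y)=(0,0,1)$ (our own computation), Proposition \ref{criticalcone} gives $K(\bar y,\bar y^\ast)=\{v\in\mathbb{R}^3\,\vert\, v_3=0\}$, and for $\lambda=(\frac12,\frac12)$ we have $T_{N_{\mathbb{R}^2_-}(g(\bar y))}(\lambda)=T_{\mathbb{R}^2_+}(\lambda)=\mathbb{R}^2$, so Lemma \ref{lemma1} yields, for every $v\in K(\bar y,\bar y^\ast)$,
\[N_{K(\bar y,\bar y^\ast)}(v)=\{\nabla g(\bar y)^T\mu\,\vert\,(\mu_1+\mu_2)v_3=0,\ \mu\in\mathbb{R}^2\}=\{(0,0,t)\,\vert\, t\in\mathbb{R}\},\]
as expected, since $K(\bar y,\bar y^\ast)$ is a linear subspace and thus $N_{K(\bar y,\bar y^\ast)}(v)=K(\bar y,\bar y^\ast)^\perp$.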
We are now ready to calculate the tangent cone to the graph of $\widehat N_\Gamma$. This result will be needed in the sufficient condition for MSCQ and it is also of an independent interest. The first equation in the formula \eqref{EqTanConeGrNormalCone} was first shown in \cite[Theorem~1]{GfrOut14} under the extra assumption that the metric regularity holds locally uniformly except for $\bar y$, whereas in \cite{ChiHi16} this extra assumption was removed. \begin{theorem}\label{ThTanConeGrNormalCone} Given $\bar y\in\Gamma$, assume that MSCQ holds at $\bar y$ for
the system $g(y)\in\mathbb{R}^q_-$. Then there are a real $\kappa>0$ and a neighborhood $V$ of $\bar y$ such that for any $y\in\Gamma\cap V$ and any $y^\ast\in\widehat N_\Gamma(y)$
the tangent cone to the graph of $\widehat N_\Gamma$ at $(y, y^*)$ can be calculated by \begin{eqnarray}\label{EqTanConeGrNormalCone} \lefteqn{T_{{\rm gph\,} \widehat N_\Gamma}(y,y^\ast)}\\ \nonumber
&=&\big\{(v,v^\ast)\in\mathbb{R}^{2m}\big|\;\exists\,\lambda\in\Lambda(y,y^\ast;v)\;\mbox{ with }\; v^\ast\in\nabla^2(\lambda^Tg)(y)v+N_{K(y,y^\ast)}(v)\big\}\\
\nonumber&=&\big\{(v,v^\ast)\in\mathbb{R}^{2m}\big|\;\exists\,\lambda\in\Lambda(y,y^\ast;v)\cap \kappa\norm{y^\ast} {\cal B}_{\mathbb{R}^q}\;\mbox{ with }\; v^\ast\in\nabla^2(\lambda^Tg)(y)v+N_{K(y,y^\ast)}(v)\big\}, \end{eqnarray} where the critical cone $K(y,y^\ast)$ and the normal cone $N_{K(y,y^\ast)}(v)$ can be calculated as in Proposition \ref{criticalcone} and Lemma \ref{lemma1} respectively, and
the set ${\rm gph\,} \widehat N_\Gamma$ is geometrically derivable at $(y, y^*)$.
\end{theorem}
\begin{proof} Since MSCQ holds at $\bar y$ for
the system $g(y)\in\mathbb{R}^q_-$, we can find an open neighborhood $V$ of $\bar y$ and a real $\kappa>0$ such that \begin{equation} \dist{y,\Gamma}\leq \kappa\dist{g(y),\mathbb{R}^q_-}\ \forall y\in V, \label{errorb} \end{equation} which means that MSCQ holds at every $y\in\Gamma\cap V$. Therefore $K(y,y^\ast)$ and $N_{K(y,y^\ast)}(v)$ can be calculated as in Proposition \ref{criticalcone} and Lemma \ref{lemma1} respectively. By the proof of the first part of \cite[Theorem~1]{GfrOut14} we obtain that for every $y^\ast\in\widehat N_\Gamma(y)$, \begin{eqnarray*}
\nonumber\lefteqn{\big\{(v,v^\ast)\in\mathbb{R}^{2m}\big|\;\exists\,\lambda\in\Lambda(y,y^\ast;v)\cap \kappa\norm{y^\ast} {\cal B}_{\mathbb{R}^q}\;\mbox{ with }\; v^\ast\in\nabla^2(\lambda^Tg)(y)v+ N_{K(y,y^\ast)}(v)\big\}}\\
\label{EqInclAux1}&\subset &\big\{(v,v^\ast)\in\mathbb{R}^{2m}\big|\;\exists\,\lambda\in\Lambda(y,y^\ast;v)\;\mbox{ with }\; v^\ast\in\nabla^2(\lambda^Tg)(y)v+ N_{K(y,y^\ast)}(v)\big\}\qquad\\
\label{EqInclAux2}& \subset & \big\{(v,v^\ast)\in\mathbb{R}^{2m}\big| \lim_{t\downarrow 0}t^{-1}\distb{(y+tv,y^\ast+tv^\ast), {\rm gph\,} \widehat N_\Gamma}=0\}\\ \label{EqInclAux3}&\subset & T_{{\rm gph\,} \widehat N_\Gamma}(y,y^\ast). \end{eqnarray*}
We now show the reversed inclusion \begin{eqnarray} \lefteqn{T_{{\rm gph\,} \widehat N_\Gamma}(y,y^\ast)}\label{reverse}\\
&\subset&\big\{(v,v^\ast)\in\mathbb{R}^{2m}\big|\;\exists\,\lambda\in\Lambda(y,y^\ast;v)\cap \kappa\norm{y^\ast} {\cal B}_{\mathbb{R}^q}\;\mbox{ with }\; v^\ast\in\nabla^2(\lambda^Tg)(y)v+N_{K(y,y^\ast)}(v)\big\}.\nonumber \end{eqnarray}
Although the proof technique is essentially the same as that of \cite[Theorem~1]{GfrOut14}, for completeness we provide a detailed proof.
Consider $y\in \Gamma\cap V$, $y^\ast\in \widehat N_\Gamma(y)$ and
let $(v,v^*)\in T_{{\rm gph\,} \widehat N_\Gamma}(y,y^\ast)$. Then by definition of the tangent cone, there exist sequences $t_k\downarrow 0, v_k\rightarrow v, v_k^* \rightarrow v^*$ such that
$ y_k^*:={y^*}+t_kv_k^* \in \widehat N_\Gamma(y_k)$, where $y_k:=y+t_kv_k$. By passing to a subsequence if necessary we can assume that $y_k\in V$ $\forall k$ and that there is some index set $\widetilde{{\cal I}}\subset {\cal I}(y)$ such that ${\cal I}(y_k)=\widetilde{{\cal I}}$ holds for all $k$. For every $i\in {{\cal I}(y)}$ we have
\begin{equation}\label{tylor}
g_i(y_k)=g_i(y)+t_k\nabla g_i(y)v_k +o(t_k)=t_k\nabla g_i(y)v_k +o(t_k)\left \{\begin{array}{ll}
=0 &\mbox{ if } i\in \widetilde{{\cal I}},\\
\leq 0 & \mbox{ if } i\in {\cal I}(y) \setminus \widetilde{{\cal I}}.\end{array} \right. \end{equation}
Dividing by $t_k$ and passing to the limit we obtain
\begin{equation}\label{(21)}
\nabla g_i(y)v \left \{\begin{array}{ll}
=0 &\mbox{ if } i\in \widetilde{{\cal I}},\\
\leq 0 & \mbox{ if } i\in {\cal I}(y) \setminus \widetilde{{\cal I}},\end{array} \right.
\end{equation}
which means $v\in T_\Gamma^{\rm lin}(y)$. Since MSCQ holds at every $y\in\Gamma\cap V$, we have that the GACQ holds at $y$ as well and hence $v\in T_\Gamma(y)$.
{Since (\ref{errorb}) holds and $y_k\in V$, $y_k^*\in \widehat N_\Gamma(y_k)=N_\Gamma(y_k)$,} by Theorem \ref{ThBMP}
for every $k\in\mathbb{N}$ there exists a multiplier $\lambda^k\in\Lambda(y_k,y_k^\ast)\cap\kappa\norm{y_k^\ast} {\cal B}_{\mathbb{R}^q}$. Since $(y_k^\ast)$ converges to $y^\ast$, there exists $c_1\geq 0$ such that $\|\lambda^k\|\leq c_1$ for all $k$.
Let
\begin{equation}\label{Psi}
\Psi_{\widetilde{{\cal I}}}(y^*):=\{\lambda\in \mathbb{R}^q|\nabla g(y)^T\lambda =y^*, \lambda_i \geq 0, i \in \widetilde{{\cal I}}, \lambda_i=0, i \not \in \widetilde{{\cal I}}\}.
\end{equation}
By Hoffman's Lemma there is some constant $\beta$ such that for every $y^*\in \mathbb{R}^m$ with
$\Psi_{\widetilde{{\cal I}}}(y^*)\not =\emptyset$ one has
\begin{equation}\dist{\lambda, \Psi_{\widetilde{{\cal I}}}(y^*)}\leq \beta(\|\nabla g(y)^T\lambda -y^*\|+\sum_{i\in \widetilde{{\cal I}}}{\max \{-\lambda_i}, 0 \}+\sum_{i\not \in \widetilde{{\cal I}}}|\lambda_i|) \quad \forall \lambda\in \mathbb{R}^q.\label{hoffman}
\end{equation}
Since
$$\nabla g(y)^T\lambda^k-y^*=t_kv_k^*+(\nabla g(y)-\nabla g(y_k))^T \lambda^k$$
and $\|\nabla g(y)-\nabla g(y_k)\|\leq c_2\|y_k-y\|=c_2 t_k \|v_k\|$ for some $c_2\geq 0$, by (\ref{hoffman}) we can find for each $k$ some
$\widetilde{\lambda}^k \in \Psi_{\widetilde{{\cal I}}}(y^*)\subset \Lambda (y,y^*)$ with
$\| \widetilde{\lambda}^k-\lambda^k\| \leq \beta t_k(\|v_k^*{\|}+c_1c_2\|v_k\|)$.
Taking $\mu^k:=(\lambda^k-\widetilde{\lambda}^k)/t_k$ we have that $(\mu^k)$ is uniformly bounded. By passing to a subsequence if necessary we may assume that $(\lambda^k)$ and $(\mu^k)$ are convergent to some $\lambda\in \Lambda(y,y^*)\cap \kappa \norm{y^\ast} {\cal B}_{\mathbb{R}^q}$,
and some $\mu$ respectively. Obviously the sequence $(\tilde \lambda^k)$ converges to $\lambda$ as well.
Since $\lambda_i^k=\widetilde{\lambda}^k_i=0, i \not \in \widetilde{{\cal I}}$, {by virtue of (\ref{(21)})} we have ${\mu^k}^T \nabla g(y) v=0 \ \forall k$ implying
\begin{equation} \label{mu}
\mu\in (\nabla g(y) v)^\perp .
\end{equation}
Taking into account {${\lambda^k}^T g(y_k)=0$} and (\ref{tylor}), we obtain
$$0=\lim_{k\rightarrow \infty} \frac{{{\lambda^k}^T }g(y_k)}{t_k}=\lim_{k\rightarrow \infty}{{\lambda^k}^T} \nabla g(y)v_k ={y^*}^Tv. $$ Therefore combining the above with $v\in T_\Gamma(y)$ we have
\begin{equation}\label{criticalc}
v\in K(y, y^*).
\end{equation}
Further, since $\widetilde{\lambda}^k\in \Lambda(y,y^\ast)$, we have for all $\lambda' \in \Lambda(y,y^\ast)$
\begin{eqnarray*}
0 & \leq & ({\widetilde{\lambda}^k}-\lambda')^T g(y_k) =({\widetilde{\lambda}^k}-\lambda')^T( g(y)+t_k\nabla g(y) v_k+\frac{1}{2} t_k^2 v_k^T \nabla^2 g(y) v_k +o(t_k^2))\\
&=& ({\widetilde{\lambda}^k}-\lambda')^T (\frac{1}{2} t_k^2 v_k^T \nabla^2 g(y) v_k +o(t_k^2)).
\end{eqnarray*}
Dividing by $t_k^2$ and passing to the limit we obtain
$(\lambda-\lambda')^T v^T \nabla^2 g(y) v\geq 0 \quad \forall \lambda' \in \Lambda(y,y^\ast)$
and hence $\lambda\in \Lambda(y,y^*;v)$.
Since $$y_k^*=\nabla g(y)^T \widetilde \lambda^k+t_k v_k^*=\nabla g(y_k)^T \lambda^k,$$
we obtain
\begin{eqnarray*}
v^*&=& \lim_{k\rightarrow \infty} v_k^* =\lim_{k\rightarrow \infty}\frac{\nabla g(y_k)^T \lambda^k-\nabla g(y)^T \widetilde \lambda^k}{t_k}\\
&=&\lim_{k\rightarrow \infty}\frac{(\nabla g(y_k)-\nabla g(y))^T {\lambda}^k+\nabla g({y})^T (\lambda^k-\widetilde \lambda^k)}{t_k}\\
&=& \nabla^2 ({{\lambda}}^Tg)(y)v+\nabla g(y)^T \mu.
\end{eqnarray*}
If $\mu \in T_{{N}_{\mathbb{R}_-^q}(g(y))}(\lambda)$,
since (\ref{mu}) holds, by using Lemma \ref{lemma1} we have $\nabla g(y)^T \mu\in {{N}}_{K(y,y^*)}(v)$ and hence the inclusion (\ref{reverse})
is proved.
Otherwise, by taking into account \[T_{{N}_{\mathbb{R}_-^q}(g(y))}(\lambda)=\{\mu\in \mathbb{R} ^q\,\vert\, \mu_i\geq 0 \mbox{ if }\lambda_i=0\}\] and $\mu_i=0$, $i\not\in \widetilde{{\cal I}}$,
the set $J:=\{ i\in \widetilde{\cal I} \,\vert\, \lambda_i=0, \mu_i <0\}$ is not empty. Since $\mu^k$ converges to $\mu$, we can choose some index $\bar{k}$ such that
$\mu^{\bar{k}}_i =(\lambda_i^{\bar{k}}-\widetilde\lambda_i^{\bar{k}})/ t_{\bar{k}}\leq \mu_i/2 \ \forall i \in J$. Set $\widetilde{\mu}:=\mu+2( \widetilde \lambda^{\bar{k}}-\lambda)/t_{\bar{k}}$.
Then for all $i$ with $\lambda_i=0$ we have $\widetilde{\mu}_i\geq \mu_i$ and for all $i\in J$ we have
$$ \widetilde{\mu}_i=\mu_i +2(
\widetilde\lambda_i^{\bar{k}}-\lambda_i)/t_{\bar{k}}\geq \mu_i +2(
\widetilde\lambda_i^{\bar{k}}-\tilde\lambda_i^{\bar k})/t_{\bar{k}}\geq 0$$
and therefore $\widetilde{\mu} \in T_{N_{\mathbb{R}_-^q}(g(y))}(\lambda)$. Observing that $\nabla g(y)^T\widetilde{\mu}=\nabla g(y)^T{\mu}$ because of $\lambda, \widetilde\lambda^{\bar k}\in \Lambda (y, y^*)$ and taking into account Lemma \ref{lemma1} we have $\nabla g(y)^T \widetilde{\mu}\in {N}_{K(y,y^*)}(v)$ and hence the inclusion (\ref{reverse})
is proved. This finishes the proof of the theorem.
\end{proof}
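As a side illustration of the error bound \eqref{hoffman} used in the proof above (this is not part of the argument), the following sketch checks Hoffman's inequality numerically on a toy polyhedron in $\mathbb{R}^2$; the instance, the sampling box, and the constant are chosen for illustration only.

```python
import numpy as np

# Toy instance of Hoffman's error bound: the polyhedron
#   Psi = { lam in R^2 : lam_1 + lam_2 = 1, lam >= 0 }
# is the segment between (1,0) and (0,1).  Hoffman's Lemma asserts
#   dist(lam, Psi) <= beta * ( |lam_1+lam_2-1| + max(-lam_1,0) + max(-lam_2,0) )
# for some constant beta independent of lam.

def dist_to_segment(lam):
    """Euclidean distance from lam to the segment conv{(1,0),(0,1)}."""
    lam = np.asarray(lam, dtype=float)
    # project onto the line lam_1 + lam_2 = 1 (parametrized as (t, 1-t)),
    # then clip t to [0, 1] to stay on the segment
    t = (lam[0] - lam[1] + 1.0) / 2.0
    t = min(max(t, 0.0), 1.0)
    return float(np.linalg.norm(lam - np.array([t, 1.0 - t])))

def residual(lam):
    """Constraint violation, i.e. the bracket on the right of the bound."""
    return abs(lam[0] + lam[1] - 1.0) + max(-lam[0], 0.0) + max(-lam[1], 0.0)

rng = np.random.default_rng(0)
samples = rng.uniform(-3.0, 3.0, size=(1000, 2))
ratios = [dist_to_segment(l) / residual(l) for l in samples if residual(l) > 1e-9]
beta_empirical = max(ratios)
```

For the sampled points the ratio stays below $1.5$, i.e. the bound holds with a uniform constant, as the lemma guarantees.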
Since the regular normal cone is the polar of the tangent cone, the following characterization of the regular normal cone of ${\rm gph\,}\widehat N_\Gamma$ follows from the formula for the tangent cone in Theorem \ref{ThTanConeGrNormalCone}. \begin{corollary}\label{CorSecOrd} Assume that MSCQ is satisfied for the system $g(y)\leq0$ at $\bar y\in\Gamma$. Then there is a neighborhood $V$ of $\bar y$ such that for every $(y,y^\ast)\in{\rm gph\,}\widehat N_\Gamma$ with $y\in V$ the following assertion holds: given any pair $(w^\ast,w)\in \widehat N_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)$ we have $w\in K(y,y^\ast)$ and \begin{equation}\label{EqBasicIneqTiltStab} \skalp{w^\ast,w}+ w^T\nabla^2(\lambda^Tg)(y)w \leq 0\;\mbox{ whenever }\;\lambda\in\Lambda(y,y^\ast;w). \end{equation} \end{corollary} \begin{proof} Choose $V$ such that \eqref{EqTanConeGrNormalCone} holds true for every $y\in \Gamma\cap V$ and consider any $(y,y^\ast)\in{\rm gph\,}\widehat N_\Gamma$ with $y\in V$ and $(w^\ast,w)\in \widehat N_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)$. By the definition of the regular normal cone we have $\widehat N_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)=\big( T_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)\big)^\circ$ and, since $\{0\}\times N_{K(y,y^\ast)}(0)\subset T_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)$, we obtain \[\skalp{w^\ast,0}+\skalp{w,v^\ast}\leq 0 \ \forall v^\ast \in N_{K(y,y^\ast)}(0)=K(y,y^\ast)^\circ,\] implying $w\in {\rm cl\,}{\rm conv\,} K(y,y^\ast)=K(y,y^\ast)$. By \eqref{EqTanConeGrNormalCone} we have $(w, \nabla^2(\lambda^Tg)(y)w)\in T_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)$ for every $\lambda\in\Lambda(y,y^\ast;w)$ and therefore the claimed inequality \[\skalp{w^\ast,w}+\skalp{w, \nabla^2(\lambda^Tg)(y)w}=\skalp{w^\ast,w}+ w^T\nabla^2(\lambda^Tg)(y)w \leq 0\] follows. \end{proof}
The following result will be needed in the proof of Theorem \ref{ThSuffCondMS_FOGE}. \begin{lemma}\label{LemBndSecOrdMult}Given $\bar y\in\Gamma$, assume that MSCQ holds at $\bar y$. Then there is a real $\kappa'>0$ such that for any $y\in\Gamma$ sufficiently close to $\bar y$, any normal vector $y^\ast\in\widehat N_\Gamma(y)$ and any critical direction $v\in K(y,y^\ast)$ one has \begin{equation}\label{EqBndSecOrdMult}\Lambda(y,y^\ast;v)\cap{\cal E}(y,y^\ast)\cap \kappa'\norm{y^\ast} {\cal B}_{\mathbb{R}^q}\not=\emptyset.\end{equation} \end{lemma}
\begin{proof}
Let $\kappa>0$ be chosen according to Theorem \ref{ThTanConeGrNormalCone} and consider $y\in\Gamma$ sufficiently close to $\bar y$ so that MSCQ holds at $y$ and \eqref{EqTanConeGrNormalCone} is valid for every $y^\ast\in \widehat N_\Gamma(y)$. Consider $y^\ast\in \widehat N_\Gamma(y)$ and a critical direction $v\in K(y,y^\ast)$. By \cite[Proposition 4.3]{GfrMo15a} we have $\Lambda(y,y^\ast;v)\not=\emptyset$ and, by taking any $\lambda\in \Lambda(y,y^\ast;v)$, we obtain from Theorem \ref{ThTanConeGrNormalCone} that $(v,v^\ast)\in T_{{\rm gph\,} \widehat N_\Gamma}(y,y^\ast)$ with $v^\ast=\nabla^2(\lambda^Tg)(y)v$. Applying Theorem \ref{ThTanConeGrNormalCone} once more, we see that $v^\ast\in \nabla^2(\tilde\lambda^Tg)(y)v+N_{K(y,y^\ast)}(v)$ with $\tilde\lambda\in\Lambda(y,y^\ast;v)\cap \kappa\norm{y^\ast} {\cal B}_{\mathbb{R}^q}$ showing that $\Lambda(y,y^\ast;v)\cap \kappa\norm{y^\ast}{\cal B}_{\mathbb{R}^q}\not=\emptyset$. Next consider a solution $\bar\lambda$ of the linear optimization problem
\[\min\sum_{i=1}^q\lambda_i\quad\mbox{subject to }\lambda\in \Lambda(y,y^\ast;v).\]
We can choose $\bar\lambda$ as an extreme point of the polyhedron $\Lambda(y,y^\ast;v)$ implying $\bar\lambda\in {\cal E}(y,y^\ast)$.
Since $\Lambda(y,y^\ast;v)\subset \mathbb{R}^q_+$, we obtain
$$\norm{\bar\lambda}\leq \sum_{i=1}^q\vert\bar\lambda_i\vert=\sum_{i=1}^q \bar \lambda_i\leq \sum_{i=1}^q\tilde\lambda_i\leq \sqrt{q}\norm{\tilde\lambda}\leq\sqrt{q}\kappa\norm{y^\ast},$$ and hence \eqref{EqBndSecOrdMult} follows with $\kappa'=\kappa\sqrt{q}$. \end{proof}
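The multiplier-selection step in this proof, minimizing $\sum_{i=1}^q\lambda_i$ over the multiplier polyhedron to obtain a bounded extreme-point multiplier, can be mimicked numerically. The sketch below uses hypothetical data and assumes SciPy is available; for simplicity the curvature constraint defining $\Lambda(y,y^\ast;v)$ is omitted, so the feasible set is the polyhedron $\Lambda(y,y^\ast)$.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: three active constraints in R^2 with gradients
#   g1 = (1,0), g2 = (0,1), g3 = (1,1),  and normal vector y* = (1,1).
# The multiplier set Lambda = { lam >= 0 : G^T lam = y* } is the segment
#   { (t, t, 1-t) : 0 <= t <= 1 }  with extreme points (1,1,0) and (0,0,1).
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # rows are the gradients of the g_i at y
y_star = np.array([1.0, 1.0])

# Minimize sum(lam) over Lambda: the optimum is attained at an extreme
# point of the polyhedron, here uniquely at lam = (0, 0, 1).
res = linprog(c=np.ones(3), A_eq=G.T, b_eq=y_star,
              bounds=[(0.0, None)] * 3)
lam_bar = res.x
```

The LP solver returns the extreme point $(0,0,1)$ of minimal $\ell_1$-norm, exactly the kind of multiplier the lemma bounds by $\sqrt{q}\,\kappa\norm{y^\ast}$.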
We are now in a position to state a verifiable sufficient condition for MSCQ to hold for problem (MPEC).
\begin{theorem}\label{ThSuffCondMS_FOGE}
Given $(\bar x,\bar y)\in \Omega$ defined as in (\ref{Omega}), assume that MSCQ holds both for the lower level problem constraints $g(y)\leq 0$ at $\bar y$ and for the upper level constraints $G(x,y)\leq0$ at
$(\bar x,\bar y)$. Further assume that
\begin{equation}\label{EqNonDegG}
\nabla_x G(\bar x,\bar y)^T\eta=0,\ \eta\in N_{\mathbb{R}^p_-}(G(\bar x,\bar y))\quad \Longrightarrow\quad \nabla_y G(\bar x,\bar y)^T\eta=0
\end{equation}
and assume that
there do not exist $(u,v)\not=0$, $\lambda\in\Lambda(\bar y,-\phi(\bar x,\bar y);v)\cap {\cal E}(\bar y,-\phi(\bar x,\bar y))$, $\eta\in\mathbb{R}^p_+$ and $w\not=0$ satisfying
\begin{eqnarray}
\label{EqSuffMS1}&&\nabla G(\bar x,\bar y)
(u,v)\in T_{\mathbb{R}^p_-}(G(\bar x,\bar y)),\; \\
\label{EqSuffMS2}
&&(v,-\nabla_{x}\phi(\bar x,\bar y)u-\nabla_{y}\phi(\bar x,\bar y)v)\in T_{{\rm gph\,} \widehat N_\Gamma}(\bar y,-\phi(\bar x,\bar y)),\\
\label{EqSuffMS3}&&-\nabla_{x}\phi(\bar x,\bar y)^Tw+\nabla_{x} G(\bar x,\bar y)^T\eta=0,\;\eta\in N_{\mathbb{R}^p_-}(G(\bar x,\bar y)),\; \eta^T\nabla G(\bar x,\bar y)(u,v)=0,\\
\label{EqSuffMS4}&&\nabla g_i(\bar y)w=0, i\in I^+(\lambda),\; w^T\left (\nabla_{y}\phi(\bar x,\bar y)+\nabla^2(\lambda^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w\leq 0,\qquad
\end{eqnarray}
where the tangent cone $ T_{{\rm gph\,} \widehat N_\Gamma}(\bar y,-\phi(\bar x,\bar y))$ can be calculated as in Theorem \ref{ThTanConeGrNormalCone}. Then the multifunction $M_{\rm MPEC}$ defined by \begin{equation}\label{MPEC} M_{\rm MPEC}{(x,y)}:=\vek{\phi(x,y)+\widehat N_\Gamma(y)\\G(x,y)-\mathbb{R}^p_-} \end{equation} is metrically subregular at $\big((\bar x,\bar y),0\big)$. \end{theorem} \begin{proof}By Proposition \ref{PropEquGrSubReg}, it suffices to show that the multifunction $P(x,y)-D$ with $P$ and $D$ given by $$ P(x,y):=\left(\begin{array}{c}y,-\phi(x,y)\\G(x,y)\end{array}\right) \mbox{ and } D:={\rm gph\,} \widehat N_\Gamma\times\mathbb{R}^p_-$$ is metrically subregular at $\big((\bar x,\bar y),0\big)$.
We now invoke Theorem \ref{ThSuffCondMSSplit} with
$$P_1(x,y):=(y,-\phi(x,y)),\ P_2(x,y):=G(x,y),\ D_1:={\rm gph\,} \widehat N_\Gamma,\ D_2:=\mathbb{R}^p_-.$$ By assumption, $P_2(x,y)-D_2$ is metrically subregular at $\big((\bar x,\bar y),0\big)$. Assume to the contrary that $P(\cdot,\cdot)-D$ is not metrically subregular at $\big((\bar x,\bar y),0\big)$. Then by Theorem \ref{ThSuffCondMSSplit}, there exist $0\not=z=(u,v) \in T^{\rm lin}_\Omega(\bar x,\bar y)$ and a directional limiting normal $z^\ast=(w^\ast,w,\eta)\in\mathbb{R}^m\times\mathbb{R}^m\times\mathbb{R}^p$ such that $\nabla P(\bar x,\bar y)^Tz^\ast=0$, $(w^\ast,w)\in N_{{\rm gph\,} \widehat N_\Gamma}(P_1(\bar x,\bar y); \nabla P_1(\bar x,\bar y)z)$, $\eta\in N_{\mathbb{R}^p_-}\big(G(\bar x,\bar y);\nabla G(\bar x,\bar y)(u,v)\big)$ and $(w^\ast,w)\not=0$.
Hence
\begin{equation}\label{w*eqn}
0=\nabla P(\bar x,\bar y)^Tz^\ast=\left(\begin{array}{c}-\nabla_{x}\phi(\bar x,\bar y)^Tw+\nabla_{x}G(\bar x,\bar y)^T\eta\\
w^\ast-\nabla_{y}\phi(\bar x,\bar y)^Tw+\nabla_y G(\bar x,\bar y)^T\eta
\end{array}\right).\end{equation} Since $z=(u,v) \in T^{\rm lin}_\Omega(\bar x,\bar y)$, by the rule of tangents to product sets from Proposition \ref{productset} we obtain
\[\nabla P(\bar x,\bar y)z=\left(\begin{array}{c}(v,-\nabla_{x}\phi(\bar x,\bar y)u-\nabla_{y}\phi(\bar x,\bar y)v)\\
\nabla G(\bar x,\bar y)(u,v)\end{array}\right)\in T_{{\rm gph\,} \widehat N_\Gamma}(\bar y,\yb^\ast{})\times T_{\mathbb{R}^p_-}\big(G(\bar x,\bar y)\big),\]
where $\yb^\ast{}:=-\phi(\bar x,\bar y)$.
It follows that $
(v,-\nabla_{x}\phi(\bar x,\bar y)u-\nabla_{y}\phi(\bar x,\bar y)v)\in T_{{\rm gph\,} \widehat N_\Gamma}(\bar y,\yb^\ast{})$ and consequently by Theorem \ref{ThTanConeGrNormalCone} we have $v\in K(\bar y,\yb^\ast{})$.
Further we deduce from Proposition \ref{relationship} that
\[\eta\in N_{\mathbb{R}^p_-}(G(\bar x,\bar y)), \ \eta^T\nabla G(\bar x,\bar y)(u,v)=0.\]
So far we have shown that $u,v,\eta,w$ fulfill \eqref{EqSuffMS1}-\eqref{EqSuffMS3}. Further we have $w\not=0$, because if $w=0$ then by virtue of \eqref{EqNonDegG} and \eqref{w*eqn} we would obtain $\nabla_xG(\bar x,\bar y)^T\eta=0$, $\nabla_yG(\bar x,\bar y)^T\eta=0$ and consequently $w^\ast=0$ contradicting $(w^\ast,w)\not=0$. If we can show the existence of $\lambda\in\Lambda(\bar y,\yb^\ast{};v)\cap {\cal E}(\bar y,\yb^\ast{})$ such that \eqref{EqSuffMS4} holds, then
we obtain the desired contradiction to our assumptions, which completes the proof.
Since $(w^\ast,w)\in N_{{\rm gph\,} \widehat N_\Gamma}(P_1(\bar x,\bar y); \nabla P_1(\bar x,\bar y)z)$, by the definition of the directional limiting normal cone, there are sequences $t_k\downarrow 0$, $d_k=(v_k,v_k^\ast)\in\mathbb{R}^m\times\mathbb{R}^m$ and $(w_k^\ast,w_k)\in\mathbb{R}^m\times\mathbb{R}^m$ satisfying $(w_k^\ast,w_k)\in\widehat N_{{\rm gph\,} \widehat N_\Gamma}(P_1(\bar x,\bar y)+t_kd_k)$ $\forall k$ and $(d_k,w_k^\ast,w_k)\to(\nabla P_1(\bar x,\bar y)z,w^\ast,w)$. That is, $(y_k,y_k^\ast):=(\bar y,\yb^\ast{})+t_k(v_k,v_k^\ast)\in {\rm gph\,} \widehat N_\Gamma$,
$(w_k^\ast,w_k)\in \widehat N_{{\rm gph\,} \widehat N_\Gamma}(y_k,y_k^\ast)$
and $(v_k, v_k^\ast)\to (v, -\nabla_{x}\phi(\bar x,\bar y)u-\nabla_{y}\phi(\bar x,\bar y)v)$.
By passing to a subsequence if necessary, we can assume that MSCQ holds for $g(y)\leq 0$ at $y_k$ for all $k$ and by invoking Corollary \ref{CorSecOrd} we obtain $w_k\in K(y_k,y_k^\ast)$, and
\begin{equation}\label{EqSecOrdAux1}{w_k^\ast}^Tw_k+w_k^T\nabla^2(\lambda^T g)(y_k)w_k\leq 0 \mbox{ whenever }\lambda\in\Lambda(y_k,y_k^\ast;w_k).
\end{equation}
By Lemma \ref{LemBndSecOrdMult} we can find a uniformly bounded sequence $\lambda^k\in\Lambda(y_k,y_k^\ast;w_k)\cap {\cal E}(y_k,y_k^\ast)$. In particular, as in the proof of Lemma \ref{LemBndSecOrdMult}, we can choose $\lambda^k$ as an optimal solution of the linear optimization problem
\begin{equation}\label{EqMinL1Norm}\min\sum_{i=1}^q\lambda_i\;\mbox{ subject to }\;\lambda\in \Lambda(y_k,y_k^\ast;w_k).
\end{equation}
By passing once more to a subsequence if necessary, we can assume that $\lambda^k$ converges to $\bar\lambda$, and we easily conclude $\bar\lambda\in\Lambda(\bar y,\yb^\ast{})$ and
${w^\ast}^Tw+w^T\nabla^2(\bar\lambda^T g)(\bar y)w\leq 0$, which together with $w^\ast-\nabla_{y}\phi(\bar x,\bar y)^Tw+\nabla_y G(\bar x,\bar y)^T\eta=0$ (see (\ref{w*eqn})) results in
\begin{equation}
\label{EqSecOrdAux2}w^T\left (\nabla_{y}\phi(\bar x,\bar y)+\nabla^2(\bar\lambda^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w\leq 0.
\end{equation}
Further, we can assume that $I^+(\bar\lambda)\subset I^+(\lambda^k)$ and therefore, because of $\lambda^k\in N_{\mathbb{R}^q_-}(g(y_k))$, $\bar\lambda^Tg(y_k)={\lambda^k}^Tg(y_k)=0$. Hence for every $\lambda\in\Lambda(\bar y,\yb^\ast{})$ we obtain
\begin{eqnarray*}
0&\geq& (\lambda-\bar\lambda)^Tg(y_k)\\
&=&(\lambda-\bar\lambda)^Tg(\bar y)+\nabla((\lambda-\bar\lambda)^Tg)(\bar y)(y_k-\bar y)\\
&&\quad+\frac 12 (y_k-\bar y)^T\nabla^2((\lambda-\bar\lambda)^Tg)(\bar y)(y_k-\bar y) +o(\norm{y_k-\bar y}^2)\\
&=&\frac{t_k^2}2 v_k^T\nabla^2((\lambda-\bar\lambda)^Tg)(\bar y)v_k+o(t_k^2\norm{v_k}^2).
\end{eqnarray*}
Dividing by $t_k^2/2$ and passing to the limit yields $0\geq v^T\nabla^2((\lambda-\bar\lambda)^Tg)(\bar y)v$ and thus $\bar\lambda\in\Lambda(\bar y,\yb^\ast{};v)$. Since $w_k\in K(y_k,y_k^\ast)$ by Proposition \ref{criticalcone} we have $\nabla g_i(y_k)w_k=0$, $i\in I^+(\lambda^k)$ from which $\nabla g_i(\bar y)w=0$, $i\in I^+(\bar\lambda)$ follows.
It is known that the polyhedron $\Lambda( \bar y, \bar y^*)$ can be represented as the sum of the convex hull of its extreme points $ {\cal E}(\bar y,\yb^\ast{})$ and its recession cone ${\cal R}:=\{\lambda \in N_{\mathbb{R}^q_-}(g(\bar y))| \nabla g(\bar y)^T\lambda=0 \}$. We show by contradiction that $\bar\lambda\in{\rm conv\,} {\cal E}(\bar y,\yb^\ast{})$. Assume on the contrary that $\bar\lambda\not\in {\rm conv\,} {\cal E}(\bar y,\yb^\ast{})$; then $\bar\lambda$ has the representation $\bar\lambda=\lambda^c+\lambda^r$ with $\lambda^c\in {\rm conv\,} {\cal E}(\bar y,\yb^\ast{})$ and some $\lambda^r\not=0$ belonging to the recession cone ${\cal R}$, i.e.
\begin{equation}\label{recessioncone}
\lambda^r\in N_{\mathbb{R}^q_-}(g(\bar y)),\ \nabla g(\bar y)^T\lambda^r=0.\end{equation}
Since $\lambda^k\in\Lambda(y_k,y_k^\ast;w_k)$, it is a solution to the linear program:
\begin{eqnarray*}
\max_{\lambda\geq 0} && w_k^T\nabla^2(\lambda^Tg)(y_k)(w_k)\\
s.t. && \nabla g(y_k)^T\lambda=y_k^*\\
&& \lambda^Tg(y_k)=0.
\end{eqnarray*}
By duality theory of linear programming, for each $k$ there is some $r_k\in\mathbb{R}^m$ verifying
\[\nabla g_i(y_k)r_k+w_k^T\nabla^2 g_i(y_k)w_k\leq 0,\ \lambda^k_i(\nabla g_i(y_k)r_k+w_k^T\nabla^2 g_i(y_k)w_k)=0,\ i\in {\cal I}(y_k).\]
Since $\Lambda(y_k,y_k^\ast;w_k)=\{\lambda\in\Lambda(y_k,y_k^\ast)\,\vert\, w_k^T\nabla^2(\lambda^Tg)({y_k})w_k\geq\theta(y_k,y_k^\ast;w_k)\}$ and $\lambda^k$ solves \eqref{EqMinL1Norm}, again by duality theory of linear programming we can find for each $k$ some $s_k\in\mathbb{R}^m$ and $\beta_k\in\mathbb{R}_+$ such that
\[\nabla g_i(y_k)s_k+\beta_k w_k^T\nabla^2 g_i(y_k)w_k\leq 1,\ \lambda^k_i(\nabla g_i(y_k)s_k+\beta_k w_k^T\nabla^2 g_i(y_k)w_k-1)=0,\ i\in {\cal I}(y_k).\]
Next we define for every $k$ the elements $\tilde \lambda^k\in\mathbb{R}^q_+$, $\xi_k^\ast\in\mathbb{R}^m$ by
\begin{eqnarray}
&& \tilde\lambda^k_i:=\begin{cases}
\lambda^r_i&\mbox{if $i\in I^+(\lambda^r)$,}\\
\frac 1k&\mbox{if $i\in I^+(\lambda^k)\setminus I^+(\lambda^r)$,}\\
0&\mbox{else,}\quad
\end{cases} \nonumber\\ && \xi_k^\ast:=\nabla g(y_k)^T\tilde\lambda^k.\label{xi} \end{eqnarray}
Since $I^+(\lambda^r)\subset I^+(\bar\lambda)\subset I^+(\lambda^k)$, we obtain $I^+(\tilde\lambda^k)=I^+(\lambda^k)$, $\tilde\lambda^k\in N_{\mathbb{R}^q_-}(g(y_k))$ and $\xi_k^\ast\in N_\Gamma(y_k)$.
Thus $w_k\in K(y_k,\xi_k^\ast)$ by Proposition \ref{criticalcone} and
\[\nabla g_i(y_k)r_k+w_k^T\nabla^2 g_i(y_k)w_k\leq 0,\ \tilde\lambda^k_i(\nabla g_i(y_k)r_k+w_k^T\nabla^2 g_i(y_k)w_k)=0,\ i\in {\cal I}(y_k)\]
implying $\tilde\lambda^k\in\Lambda(y_k,\xi_k^\ast;w_k)$ by duality theory of linear programming. Moreover, because of $I^+(\tilde\lambda^k)=I^+(\lambda^k)$ we also have
\[\nabla g_i(y_k)s_k+\beta_k w_k^T\nabla^2 g_i(y_k)w_k\leq 1,\ \tilde\lambda^k_i(\nabla g_i(y_k)s_k+\beta_k w_k^T\nabla^2 g_i(y_k)w_k-1)=0,\ i\in {\cal I}(y_k),\]
implying that $\tilde\lambda^k$ is a solution of the linear program
\[\min\sum_{i=1}^q\lambda_i\;\mbox{ subject to }\;\lambda\in \Lambda(y_k,\xi_k^\ast;w_k),\]
and, together with $\Lambda(y_k,\xi_k^\ast;w_k)\subset \mathbb{R}^q_+$,
\[\min\{\norm{\lambda}\,\vert\, \lambda\in \Lambda(y_k,\xi_k^\ast;w_k)\}\geq \frac 1{\sqrt{q}}\min\{\sum_{i=1}^q\lambda_i\,\vert\, \lambda\in \Lambda(y_k,\xi_k^\ast;w_k)\}{\geq}\frac{\sum_{i=1}^q\lambda_i^r}{\sqrt{q}}=:\beta>0.\]
Taking into account that $\lim_{k\to\infty}\tilde\lambda^k=\lambda^r$ and (\ref{recessioncone}), (\ref{xi}), we conclude $\lim_{k\to\infty}\norm{\xi_k^\ast}=0$, showing that for every real $\kappa'$ we have
\[\Lambda(y_k,\xi_k^\ast;w_k)\cap {\cal E}(y_k,\xi_k^\ast)\cap\kappa'\norm{\xi_k^\ast} {\cal B}_{\mathbb{R}^q}\subset \Lambda(y_k,\xi_k^\ast;w_k)\cap \kappa'\norm{\xi_k^\ast} {\cal B}_{\mathbb{R}^q}=\emptyset\]
for all $k$ sufficiently large, contradicting the statement of Lemma \ref{LemBndSecOrdMult}. Hence $\bar\lambda\in {\rm conv\,}{\cal E}(\bar y,\yb^\ast{})$ and thus $\bar\lambda$ admits a representation as a convex combination
\[\bar\lambda=\sum_{j=1}^N\alpha_j{\hat\lambda^j}\;\mbox{ with }\; \sum_{j=1}^N \alpha_j=1,\;0<\alpha_j\leq 1,\; {\hat\lambda^j}\in{\cal E}(\bar y,\yb^\ast{}),\;j=1,\ldots,N.\]
Since $\bar\lambda\in \Lambda(\bar y,\yb^\ast{};v)$ we have $\theta(\bar y,\yb^\ast{};v)=v^T\nabla^2(\bar\lambda^Tg)(\bar y)v=\sum_{j=1}^N \alpha_j v^T\nabla^2({\hat\lambda{}^j}^Tg)(\bar y)v$ implying, together with $v^T\nabla^2({\hat\lambda{}^j}^T g)(\bar y)v\leq \theta(\bar y,\yb^\ast{};v)$, that $v^T\nabla^2({{\hat\lambda{}^j}}^Tg)(\bar y)v= \theta(\bar y,\yb^\ast{};v)$ and consequently ${\hat\lambda^j}\in \Lambda(\bar y,\yb^\ast{};v)$. It follows from (\ref{EqSecOrdAux2}) that
\begin{eqnarray*}\lefteqn{\sum_{j=1}^N\alpha_j\left( w^T\big (\nabla_{y}\phi(\bar x,\bar y)+\nabla^2({\hat\lambda{}^j}^Tg)(\bar y)\big )w-\eta^T\nabla_y G(\bar x,\bar y)w\right)}\\
&&=w^T\left (\nabla_{y}\phi(\bar x,\bar y)+\nabla^2(\bar\lambda^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w\leq 0\end{eqnarray*}
and hence there exists some index $\bar j$ with
$$w^T\left (\nabla_{y}\phi(\bar x,\bar y)+\nabla^2({\hat\lambda{}^{\bar j}}^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w\leq 0.$$
Further, by Proposition \ref{criticalcone} we have $\nabla g_i(\bar y)w=0$ $\forall i\in I^+(\bar\lambda)\supset I^+({\hat\lambda^{\bar j}})$ and we see that \eqref{EqSuffMS4} is fulfilled with $\lambda={\hat\lambda^{\bar j}}$. \end{proof}
\if{ By virtue of \cite[Theorem 3.2]{YeYe}, we have the following necessary optimality condition for (MPEC). \begin{proposition}
Let $(\bar x,\bar y)\in \Omega$ defined as in (\ref{Omega}) be a local optimal solution of problem (MPEC) where all functions $F, \phi, G$ are continuously differentiable and $g$ is twice continuously differentiable. Suppose all constraint qualifications in Theorem \ref{ThSuffCondMS_FOGE} hold at $(\bar x,\bar y)$. Then there are multipliers $(\mu, \nu)\in \mathbb{R}^m\times \mathbb{R}^p$ and $\varrho\in \mathbb{R}^p$ which solve the system
\begin{eqnarray*}
&& 0=\nabla_x F(\bar x, \bar y)-\nabla_x \phi(\bar x,\bar y)^T \nu +\nabla_x G(\bar x,\bar y)^T \varrho,\\
&& 0=\nabla_y F(\bar x, \bar y)-\nabla_y\phi(\bar x,\bar y)^T \nu +\mu+\nabla_y G(\bar x,\bar y)^T \varrho,\\
&& (\mu, \nu) \in N_{gph\widehat{N}_\Gamma}(\bar y, -\phi(\bar x,\bar y)),\\
&&\varrho\in N_{\mathbb{R}^p_-}(G(\bar x,\bar y)).
\end{eqnarray*} \end{proposition}}\fi
\begin{example}[Example \ref{Ex1} revisited] \label{Ex1GE} Instead of reformulating the MPEC as a (MPCC), we consider the MPEC in the original form (MPEC). Since for the constraints $g(y)\leq 0$ of the lower level problem MFCQ is fulfilled at $\bar y$ and the gradients of the upper level constraints $G(x,y)\leq 0$ are linearly independent, MSCQ holds for both constraint systems. Condition \eqref{EqNonDegG} is obviously fulfilled due to $\nabla_y G(x,y)=0$. Setting $\yb^\ast{}:=-\phi(\bar x,\bar y)=(0,0,1)$, as in Example \ref{Ex1} we obtain \[ \Lambda(\bar y,\yb^\ast{})=\{(\lambda_1,\lambda_2)\in\mathbb{R}^2_+\,\vert\, \lambda_1+\lambda_2=1\}.\] Since $\nabla g_1(\bar y)=\nabla g_2(\bar y)=(0,0,1)$ and for every $\lambda\in\Lambda(\bar y,\yb^\ast{})$ either $\lambda_1>0$ or $\lambda_2>0$, we deduce \[W(\lambda):=\{w\in\mathbb{R}^3\,\vert\, \nabla g_i(\bar y)w=0,\;i\in I^+(\lambda)\}={\mathbb{R}^2}\times\{0\}\ \ \forall \lambda\in\Lambda(\bar y,\yb^\ast{}).\] Since \[w^T\left (\nabla_{y}\phi(\bar x,\bar y)+\nabla^2(\lambda^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w=(1+\lambda_1)w_1^2+(1+\lambda_2)w_2^2\geq 0\]
there cannot exist $0 \not=w \in W(\lambda)$ and $\lambda\in\Lambda(\bar y,\yb^\ast{})$ fulfilling \eqref{EqSuffMS4}. Hence by virtue of Theorem \ref{ThSuffCondMS_FOGE}, MSCQ holds at $(\bar x, \bar y)$.
\if{ we easily obtain \[K(\bar y,\yb^\ast{})=\mathbb{R}^2\times\{0\}, \ \Lambda(\bar y,\yb^\ast{})=\{(\lambda_1,\lambda_2)\in\mathbb{R}^2_+\,\vert\, \lambda_1+\lambda_2=1\}.\] Since $\nabla g_1(\bar y)=\nabla g_2(\bar y)=(0,0,1)$ and for every $\lambda\in\Lambda(\bar y,\yb^\ast{})$ either $\lambda_1>0$ or $\lambda_2>0$, we deduce \[\{w\in\mathbb{R}^3\,\vert\, \nabla g_i(\bar y)w=0,\;i\in I^+(\lambda)\}=R^2\times\{0\}\ \forall \lambda\in\Lambda(\bar y,\yb^\ast{})\] Since \[w^T\left (\nabla_{y}\phi(\bar x,\bar y)+\nabla^2(\lambda^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w=(1+\lambda_1)w_1^2+(1+\lambda_2)w_2^2\]
there cannot exist $w\not=0$}\fi \end{example}
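The positivity argument in Example \ref{Ex1GE} can be spot-checked numerically. The sketch below (illustration only, not part of the example) samples multipliers on the simplex $\lambda_1+\lambda_2=1$, $\lambda\geq 0$ and directions $w$, confirming that the quadratic form $(1+\lambda_1)w_1^2+(1+\lambda_2)w_2^2$ is positive for every $w\not=0$.

```python
import numpy as np

# For lam = (lam1, lam2) with lam1 + lam2 = 1, lam >= 0, and any
# w = (w1, w2) != 0, the form (1+lam1)*w1^2 + (1+lam2)*w2^2 has
# coefficients >= 1, hence it is strictly positive for w != 0.
def quad_form(lam1, w1, w2):
    lam2 = 1.0 - lam1
    return (1.0 + lam1) * w1**2 + (1.0 + lam2) * w2**2

rng = np.random.default_rng(1)
for lam1 in np.linspace(0.0, 1.0, 11):
    for _ in range(100):
        w = rng.normal(size=2)
        if np.linalg.norm(w) > 1e-9:
            assert quad_form(lam1, w[0], w[1]) > 0.0
```

Since no nonzero $w$ can make the form nonpositive, condition \eqref{EqSuffMS4} indeed cannot be satisfied, in line with the conclusion of the example.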
\end{document}
\begin{document}
\title{Scalable, symmetric atom interferometer for infrasound gravitational wave detection}
\def\affiqo{\affiliation{Leibniz Universit\"at Hannover, Institut f\"ur Quantenoptik, Welfengarten 1, 30167 Hannover, Germany}}
\author{C. Schubert} \email[]{schubert@iqo.uni-hannover.de} \author{D. Schlippert} \author{S. Abend} \affiqo \author{E. Giese} \affiliation{Department of Physics, University of Ottawa, 25 Templeton Street, Ottawa, Ontario, Canada K1N 6N5} \affiliation{Institut f\"ur Quantenphysik and Center for Integrated Quantum Science and Technology \textnormal{(IQ$^{\text{ST}}$)}, Universit\"at Ulm, Albert-Einstein-Allee 11, D-89081 Ulm, Germany}\author{A.~Roura} \affiliation{Institut f\"ur Quantenphysik and Center for Integrated Quantum Science and Technology \textnormal{(IQ$^{\text{ST}}$)}, Universit\"at Ulm, Albert-Einstein-Allee 11, D-89081 Ulm, Germany}\author{W.~P.~Schleich} \affiliation{Institut f\"ur Quantenphysik and Center for Integrated Quantum Science and Technology \textnormal{(IQ$^{\text{ST}}$)}, Universit\"at Ulm, Albert-Einstein-Allee 11, D-89081 Ulm, Germany}\affiliation{Institute of Quantum Technologies, German Aerospace Center (DLR), D-89069 Ulm, Germany} \affiliation{Institute for Quantum Science and Engineering \textnormal{(IQSE)}, Texas A\&M AgriLife Research and Hagler Institute for Advanced Study, Texas A\&M University, College Station, Texas 77843-4242, USA} \author{W. Ertmer} \author{E. M. Rasel} \affiqo
\date{\today}
\begin{abstract} We propose a terrestrial detector for gravitational waves with frequencies between 0.3\,Hz and 5\,Hz. To this end, we discuss a symmetric matter-wave interferometer with a single loop and a folded triple-loop geometry. The latter eliminates the need for atomic ensembles at femtokelvin energies imposed by the Sagnac effect in other atom interferometric detectors. It also combines several advantages of current vertical and horizontal matter wave antennas and enhances the scalability in order to achieve a peak strain sensitivity of $2\cdot10^{-21}\,/\sqrt{\mathrm{Hz}}$. \end{abstract}
\maketitle The direct observation of gravitational waves with laser interferometers~\cite{Abbott2016PRL_1, Abbott2016PRL_2} marks the beginning of a new era in astronomy with new searches targeting signals in a broader frequency band with a variety of detectors. One class of proposed detectors relies on atom interferometers rather than macroscopic mirrors as inertial references. Indeed, vertical~\cite{Dimopoulos2008PRD,Coleman2018arXiv,magiswebsite} or horizontal~\cite{Canuel2017arXiv} baselines interrogated by common laser beams propagating along those baselines have recently been proposed for terrestrial observatories. In this Letter, we present a new class of symmetric atom interferometers enabling single- and multi-loop geometries for broad- and narrow-band detection in the frequency range of 0.3\,Hz to 5\,Hz.
Today's detectors are based on laser interferometers and operate in the acoustic frequency band between ten and hundreds of hertz~\cite{Abbott2016PRL_1, Abbott2016PRL_2, Acernese2015CQG, Grote2010CQG}. Future space-borne interferometers, such as LISA~\cite{Robson2018arXiv,AmaroSeoane2017arXiv,Danzman2013arXiv,Amaro-Seoane2012CQG}, are designed to target signals in the range of millihertz to decihertz. Detectors operating in the mid-frequency band~\cite{Kawamura2011CQG} or improving on the frequency band of current ground based devices~\cite{Punturo2010CQG} were also proposed and investigated.
Matter-wave interferometers~\cite{Canuel2017arXiv,Hogan2016PRA,Graham2013PRL,Yu2011GRG,Hogan2011GRG,Hohensee2011GRG,Dimopoulos2008PRD}, mechanical resonators~\cite{Aguiar2011RAA,Weber1960PR,Weber1969PRL}, and optical clocks~\cite{Kolkowitz2016PRD} are pursued to search for sources of gravitational waves in the infrasound domain featuring frequencies too low to be detected by today's ground-based detectors. Waves in this band are emitted, for example, by inspiraling binaries days or hours before they merge within fractions of a second~\cite{Sesana2017JPCS,Cutler2016arXiv}, such as the first observed event GW150914~\cite{Abbott2016PRL_1,Abbott2016PRL_2}. Hence, terrestrial observation in this band could be combined with standard astronomical observations. The vast benefits of joint observations manifested themselves in the case of the merger of neutron stars~\cite{Abbott2017AJL}.
Our detector is based on a novel type of interferometer where matter waves form a single loop or several folded loops of symmetric shape~\footnote{Kindred concepts~\cite{Dubetsky2006PRA} were discussed for space-borne detectors~\cite{Hogan2011GRG}.}. An antenna employing folded multi-loop interferometers shows three distinct advantages: (i) The detector is less susceptible to environmental perturbations and less restrictive on the expansion rate of the atomic ensemble, which otherwise needs to be at the energy level of femtokelvins. (ii) It combines the arm-length scalability of horizontal antennas with the single laser link of vertical ones. (iii) It is less susceptible to technical noise such as the pointing jitter of the atomic sources.
In this Letter, we explain the concept of our detector and compare a symmetric single-loop interferometer with a folded multi-loop one, contrasting their scaling, spectral responses, and critical parameters. Similar to the matter-wave interferometric gravitational antenna MIGA~\cite{Canuel2017arXiv,Chaibi2016PRD} and laser interferometers~\cite{Abbott2016PRL_1, Abbott2016PRL_2}, our detector concepts have two perpendicular, horizontal arms, depicted in Fig.~\ref{fig:GWD_3_pics}~(a), suppressing laser frequency noise at lower frequencies as in light interferometers~\cite{AmaroSeoane2017arXiv,Abbott2016PRL_1,Acernese2015CQG}. In each arm, two light-pulse atom interferometers separated by a distance $L$ are sensitive to the phase of the light pulses travelling along the $x$ and $y$ axes between the interferometers; these pulses are employed for the coherent manipulation of the atoms. A gravitational wave propagating along the $z$ axis modifies the geodesic of the light connecting the interferometers and modulates its phase, which appears in the differential signal of the two atom interferometers.
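To fix orders of magnitude for the differential signal: a gravitational wave of strain amplitude $h$ changes a baseline of length $L$ by $\delta L \approx hL/2$. The sketch below evaluates this for an illustrative strain and a hypothetical baseline; neither number is a design parameter of the proposal.

```python
# Order-of-magnitude arithmetic: a gravitational wave of strain amplitude h
# changes a baseline of length L by roughly delta_L = h * L / 2.
h = 2e-21               # dimensionless strain amplitude (illustrative value)
L = 1e4                 # baseline length in metres (hypothetical)
delta_L = h * L / 2.0   # resulting length change in metres, here 1e-17 m
```

This sub-attometre length change is what the phase of the light pulses, read out differentially between the two atom interferometers, must resolve.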
In contrast to Refs.~\cite{Chaibi2016PRD,Dimopoulos2008PRD}, our atom interferometers are symmetric~\cite{Gebbe2019arXiv,Ahlers2016PRL,Hogan2011GRG,Leveque2009PRL} as an identical number of photon recoils is transferred to both paths of the atom interferometer during beam splitting, deflection, and recombination. For this purpose we employ a twin-lattice, i.e. two counter-propagating lattices formed by retroreflecting a light beam with two frequency components. \begin{figure}
\caption{(Color online) (a) Matter-wave detector for infrasound gravitational waves. Two horizontal arms (along $x$ and $y$) each with at least two atom interferometers are sensitive to the phase of the light beams (red lines). It is maximally sensitive to gravitational waves travelling along $z$, and periodically stretching and squeezing the detector arms. (b) Spacetime diagram and geometric projections of a symmetric single-loop atom interferometer with a pulsed twin-lattice formed by retroreflection. Atoms are launched vertically along $z$ (orange arrows in the $z(y)$ and $z(t)$ diagrams) by a coherence preserving mechanism such as Bloch oscillations combined with double Bragg diffraction~\cite{Abend2016PRL}. The first interaction with the bottom twin-lattice at $t=0$ excites the atoms into a coherent superposition of two opposite momenta as shown in the $z(y)$ and $y(t)$ diagrams. The second interaction at the apex at $t=T$ inverts the momenta and, finally, a recombination pulse at the bottom lattice at $t=2T$ enables interference. (c) After exciting the atoms into a coherent superposition of two momenta at $t=0$, three subsequent interactions each inverting the momenta at $t=T$, $t=3T$, $t=5T$, form three loops before the recombination at $t=6T$. The two vertical relaunches at $t=(9/5)T$ and $t=(21/5)T$ fold the interferometer geometry which requires only a \textit{single} horizontal beam splitting axis for manipulating the atoms as opposed to (b). }
\label{fig:GWD_3_pics}
\end{figure} Neglecting the non-vanishing pulse duration of the atom-light interaction~\cite{Bonnin15PRA,Cheinet2008IEEE,Antoine07PRA}, the phase difference~\cite{Hohensee2011GRG,Dimopoulos2008PRD} in the single-loop atom interferometer of Fig.~\ref{fig:GWD_3_pics}~(b) induced by a gravitational wave of amplitude $h$ and angular frequency $\omega{\equiv}2\pi f$ reads \begin{equation*}
\phi_{\mathrm{SL}}=2\,h\,k\,L\left[\cos(2\pi f\,T)-1\right]. \end{equation*}
Hence, three parameters determine the spectral sensitivity of such a detector: (i) The distance $L$ between the interferometers, (ii) the number of photon momenta transferred differentially between the two interferometer paths corresponding to the wave number $k$, and (iii) the time $T$ during which the atoms are in free fall. The ability to change $T$ in this setup allows for adjusting the frequency of the maximum spectral response, which scales with the inverse of $T$.
Unfortunately, spurious effects can mask the phase induced by a gravitational wave. For example, in the single-loop interferometer the shot-to-shot jitter of the initial velocity $\mathbf{v}$ and position $\mathbf{r}$ causes noise as it enters the phase terms $2\left(\mathbf{k}\times\mathbf{v}\right)\mathbf{\Omega}T^2$, $\mathbf{k}\,\Gamma\,\mathbf{r}\,T^2$, and $\mathbf{k}\,\Gamma\,\mathbf{v}\,T^3$~\cite{Hogan2008arXiv,Borde2004GRG} arising from the Sagnac effect and gravity gradients. Here $\mathbf{\Omega}$ and $\Gamma$ denote the Earth's angular velocity and the gravity gradient tensor~\footnote{The nonzero duration of the atom-light interaction is neglected in the derivation of the phase terms since it does not qualitatively change the result.}. Whereas the impact of the gravity gradients can be compensated~\cite{Roura2014NJP,Roura2017PRL}, typical parameters targeting a competitive strain sensitivity enforce kinetic expansion energies of the atomic ensembles corresponding to femtokelvin temperatures in order to suppress the spurious Sagnac effect. Such a regime is beyond reach even with delta-kick collimation~\cite{Rudolph2016Diss,Kovachy2015PRL,Muntinga2013PRL}.
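The severity of the Sagnac term can be checked to order of magnitude (a sketch assuming worst-case alignment of $\mathbf{k}\times\mathbf{v}$ with $\mathbf{\Omega}$ and the parameter values quoted with Table~\ref{tab:parameter}; exact geometry factors are neglected):

```python
import math

# Sagnac phase term 2 (k x v) . Omega T^2: bound the tolerable
# shot-to-shot velocity jitter v for a 1 urad phase budget.
k     = 2000 * 2.0 * math.pi / 780e-9  # rad/m, 2000 recoils at 780 nm
Omega = 5.75e-5                        # rad/s, Earth's angular velocity
T     = 0.26                           # s, free-fall time
phase_budget = 1e-6                    # rad, assumed noise budget

v_max = phase_budget / (2.0 * k * Omega * T**2)
print(v_max)  # of order picometres per second, in line with the
              # few-pm/s (few-fK) requirement in the table
```

A velocity jitter bound of a few picometres per second is far below what delta-kick collimation can deliver, which is the quantitative core of the femtokelvin problem stated above.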
This challenge can be mitigated by a suitably tailored geometry with atomic trajectories following multiple loops inside the interferometer~\cite{Hogan2011GRG,Marzlin1996PRA} as depicted in Fig.~\ref{fig:GWD_3_pics}~(c). In this scheme the interferometer begins after a vertical launch at time $t=0$, with the first interaction with the symmetric twin lattice creating a coherent superposition of two momentum states. On their way downward, the atoms are coherently reflected at $t=T$ such that they complete their first loop and cross each other at the bottom at $t=(9/5)T$. Here, they are launched upwards again~\cite{Abend2016PRL} and their horizontal motion is inverted at the vertex in the middle of the interferometer at $t=3T$. After completion of the second loop, they are again relaunched at $t=(21/5)T$, once more redirected at $t=5T$, and finally recombined at $t=6T$ to close the interferometer. In order to suppress spurious phase contributions due to the Sagnac effect and gravity gradients, the wave numbers of the five pulses are adjusted~\cite{Hogan2011GRG} to $k$, $\frac{9}{4}k$, $\frac{5}{2}k$, $\frac{9}{4}k$, $k$, corresponding to the differential momentum between the two paths of the interferometer.
Compared to the symmetric single-loop interferometer, the adjusted wave numbers and additional loops change the phase difference between the two interferometers~\cite{Hogan2011GRG}, which now reads \begin{equation*}
\phi_{\mathrm{FTL}}=\frac{1}{2}h\,k\,L\left[5-9\cos\left(2\cdot2\pi f\,T\right)+4\cos\left(3\cdot2\pi f\,T\right)\right] \end{equation*} for our folded triple-loop sequence. This scheme shares the advantage of a horizontal detector~\cite{Canuel2017arXiv,Chaibi2016PRD} enabling us to operate the interferometers separated by up to several tens of kilometers similar to light interferometers~\cite{Abbott2016PRL_1,Abbott2016PRL_2}. Moreover, it requires only a \textit{single} horizontal beam axis in each arm for the coherent manipulation of the atoms. Hence, $T$ is not constrained by the difference in height between the upper and lower twin lattice, and can be tuned to specific frequencies of the gravitational waves similar to vertical antennas, where the light beam propagates along the direction of the free fall of the atoms.
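As a quick numerical comparison of the two responses (a minimal sketch; the common prefactor $h\,k\,L$ is factored out, so only the dimensionless transfer functions in $x=2\pi fT$ are scanned):

```python
import math

# Dimensionless transfer functions (phase per h*k*L), x = 2 pi f T.
def t_sl(x):   # symmetric single loop
    return abs(2.0 * (math.cos(x) - 1.0))

def t_ftl(x):  # folded triple loop
    return abs(0.5 * (5.0 - 9.0 * math.cos(2.0 * x)
                          + 4.0 * math.cos(3.0 * x)))

# Scan one full period of the single-loop response.
xs = [i * 2.0 * math.pi / 10000 for i in range(10001)]
print(max(t_sl(x) for x in xs))   # single-loop peak: 4
print(max(t_ftl(x) for x in xs))  # folded triple-loop peak: larger,
                                  # roughly twice the single-loop value
```

The folded geometry thus not only relaxes the source requirements but also offers a somewhat larger peak phase response for the same $h$, $k$, and $L$.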
Finally, the folded triple-loop interferometer mitigates several important drawbacks of other concepts. For example, the specific combination of three loops and the increased momentum transfer by the central pulses compared to the initial and final pulses renders this interferometer insensitive to fluctuations of the initial position and velocity~\cite{Hogan2011GRG}, mitigating the requirement for femtokelvin energies.
Unfortunately, folded multi-loop interferometers are susceptible to pointing noise of the relaunch vector. In a model with a vanishing atom-light interaction time and no timing errors, the pointing requirements remain comparable to those of a single-loop geometry and amount to picoradians. For relaunches not centered on the intersections of the trajectories (see Fig.~\ref{fig:GWD_3_pics}~(c)), new terms appear if their directions are not properly aligned~\footnote{See Supplemental Material at [URL will be inserted by publisher] for the impact of timing errors.}. Table~\ref{tab:parameter} summarizes the requirements imposed by the two geometries. The impact of mean field effects is discussed in Ref.~\footnote{See Supplemental Material at [URL will be inserted by publisher] for the impact of relative density fluctuations between the two interferometer arms.}.
Figure~\ref{fig:strain_sensitivity_comp} compares the spectral strain sensitivities, obtained in the broad-band mode of the single-loop (green lines) and triple-loop (red lines) detectors and in the narrow-band mode (cyan lines) of the multi-loop geometry, with a signal (orange dash-dotted line) generated by a black hole binary as discussed in Ref.~\cite{Sesana2017JPCS}. They are also confronted with the anticipated strain sensitivity of the proposed space-borne detector LISA~\cite{AmaroSeoane2017arXiv} (black dotted line) and the operating advanced LIGO~\cite{Abbott2016PRL_3,Moore2015CQG} (brown dash-dotted line). We emphasize that the sensitivity curves of our detector concepts fill the gap between LISA and advanced LIGO, enabling the terrestrial detection of infrasound gravitational waves.
With respect to the strain sensitivity, our designs share assumptions with other proposals~\cite{Canuel2017arXiv,Chaibi2016PRD} based on atom interferometry. For two 10\,km long arms, we foresee an intrinsic total phase noise of $1\,${\textmu}$\mathrm{rad}/\sqrt{\mathrm{Hz}}$ achieved by $10^9$\,atoms starting every 100\,ms with an upward velocity of $gT$/2 where $g$ is the gravitational acceleration. Moreover, we expect a 20\,dB sub-shot-noise detection~\cite{Hosten2016Nature}, a maximum of $T\sim260\,\mathrm{ms}$, and the symmetric transfer of 1000 photon recoils at the rubidium D2 line in each direction~\footnote{See Supplemental Material at [URL will be inserted by publisher] for parameter estimates.}, which is a factor of 5 beyond the current state of the art~\cite{Gebbe2019arXiv,Pagel2019arXiv,Jaffe2018PRL,Plotkin2018PRL,Kovachy2015Nature,McDonald2014EPL,Chiow2011PRL,Muller2009PRL,Clade2009PRL}.
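The quoted phase noise density follows directly from these numbers (a sketch assuming uncorrelated shots, so the spectral density is the per-shot quantum projection noise divided by the square root of the cycle rate):

```python
import math

# Phase noise budget: quantum projection noise of N atoms per shot,
# improved by 20 dB of squeezing, at 10 shots per second.
N          = 1e9      # atoms per shot
squeezing  = 10.0     # 20 dB sub-shot-noise -> factor 10 in amplitude
cycle_rate = 10.0     # shots per second (one ensemble every 100 ms)

per_shot = 1.0 / (squeezing * math.sqrt(N))  # rad per shot
density  = per_shot / math.sqrt(cycle_rate)  # rad / sqrt(Hz)
print(density)  # 1e-06, i.e. the quoted 1 urad/sqrt(Hz)
```

Without the assumed 20\,dB of squeezing the same source would reach only $10\,${\textmu}$\mathrm{rad}/\sqrt{\mathrm{Hz}}$, which shows how strongly the budget relies on entanglement-enhanced detection.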
To enable such a high efficiency for beam splitting and launching~\cite{Gebbe2019arXiv,Abend2016PRL,Szigeti2012NJP}, we require a residual atomic expansion rate of 100\,{\textmu}m/s corresponding to $\sim$100\,pK for $^{87}$Rb as achieved with delta-kick collimated ensembles of rubidium~\cite{Kovachy2015PRL} and Bose-Einstein condensates~\cite{Rudolph2016Diss,Muntinga2013PRL}. Using Bose-Einstein condensates is compatible with the future utilization of entangled atoms~\cite{Kruse2016PRL}.
In addition, the assumed production rate of atomic ensembles enables an interleaved~\cite{Savoie2018ScAdv,Dutta2016PRL} operation of several interferometers. Indeed, for the broad-band mode, we employ for the single- and triple-loop detector three interleaved interferometers with the free-fall times $T_1=T$, $T_2=0.9T$, and $T_3=0.7T$ to avoid peaks in the transfer function~\cite{Hogan2016PRA} resulting in the depicted intrinsic strain sensitivities (green, red dashed line).
Moreover, we extend the triple-loop scheme to a narrow-band mode~\cite{Graham2016PRD} featuring $3n$ loops, with a sensitivity enhanced by a factor $n\in\mathbb{N}$ at a specific frequency determined by the free-fall time $T$. The cyan line in Fig.~\ref{fig:strain_sensitivity_comp} shows the sensitivity for the frequency 0.85\,mHz, corresponding to $T=260\,\mathrm{ms}$, and $n=3$. The interleaved operation also ensures that the strain sensitivity features a high frequency cut-off at 5\,Hz. \begin{table}[tb]
\begin{center}
\caption{Requirements on key parameters of the atomic source and launch mechanisms for the single-loop and triple-loop interferometer.
Here, we list only the dominant contributions originating from the terrestrial gravity gradient and the Sagnac effect. The instabilities in pointing, position $\delta x$ and velocity $\delta v_{x}$, $\delta v_{y}$ refer to 1\,s operation time which corresponds to 10 interferometry cycles. We assume a maximum phase noise of 1\,{\textmu}rad in 1\,s with $k=2000\cdot2\pi/(780\,\mathrm{nm})$, and $T=0.26\,\mathrm{s}$. Typical values for gravity gradients, the Earth's rotation, and the gravitational acceleration are $\Gamma=1.5\cdot10^{-6}\,1/\mathrm{s}^2$, $\Omega=5.75\cdot10^{-5}\,\mathrm{rad/s}$, and $g=9.81\,\mathrm{m/s}^2$, respectively. The vertical distance between source and beam splitting zone is 30\,cm. The relative relaunch pointing refers to an instability in the angle between the relaunches at $t=(9/5)T$ and $t=(21/5)T$. A jitter in the pointing leads to a coupling to $\Gamma$ and $\Omega$; the latter dominates for our choice of parameters. The phase noise is given per arm. Assuming no correlation, which is a valid assumption for shot noise, the noise would increase by a factor $\sqrt{2}$ in the differential signal between both arms. The values in the upper part of the table assume no timing error for the relaunch. The constraint set by a timing error shifting the relaunches by 10\,ns is shown at the bottom.
\begin{tabular}{p{0.50\columnwidth}p{0.22\columnwidth}p{0.22\columnwidth}}\hline
& triple-loop & single-loop\\ \hline
$\delta x$ from $\Gamma\cdot\delta x\cdot kT^{2}$ & n/a & 0.4\,nm \\
-pointing launch & n/a & 1.4\,nrad \\
$\delta v_{x}$ from $\Gamma\cdot\delta v_{x}\cdot kT^{3}$ & n/a & 1.7\,nm/s \\
-pointing launch & n/a & 0.7\,nrad \\
$\delta v_{y}$ from Sagnac term & n/a & 5.6\,pm/s (few~fK)\\
-pointing launch & n/a & 2.2\,prad \\
pointing $k$ (g matched to $10^{-7}$) & 48\,prad & 100\,prad \\
relative relaunch pointing & 1\,prad & n/a \\ \hline
absolute relaunch pointing & 1\,nrad & n/a \\ \hline
\end{tabular}
\label{tab:parameter}
\end{center}
\end{table} \begin{figure}
\caption{(Color online) Spectral strain sensitivities of the single-loop (green) and folded triple-loop (red) detectors in the broad-band mode and of the multi-loop geometry in the narrow-band mode (cyan), compared with the signal of a black hole binary (orange dash-dotted), the anticipated sensitivity of LISA (black dotted), and advanced LIGO (brown dash-dotted).}
\label{fig:strain_sensitivity_comp}
\end{figure}
Next to the atomic shot noise, other technical and environmental noise sources deteriorate the strain sensitivity. Their impact is evaluated by the transfer function~\cite{Cheinet2008IEEE} of the interferometers. Figure~\ref{fig:strain_sensitivity_comp} shows that the intrinsic strain sensitivity of the single-loop (SL, green dashed line)~\cite{Chaibi2016PRD} and folded triple-loop (FTL, red dashed line) interferometer decays with $f^{-2}$ and $f^{-4}$ at lower frequencies. Below 1\,Hz, vibrations of the retroreflecting mirrors and Newtonian noise as modelled in Refs.~\cite{Chaibi2016PRD,Chaibi2016PRD_suppl,Harms2013PRD,Saulson1984PRD} limit the strain sensitivity (green, red, cyan solid lines). Mirror vibrations enter in the differential signal of the interferometers due to the finite speed $c$ of light causing a delay of $2L/c\approx70\,${\textmu}$\mathrm{s}$, and are modelled with a differential weighting function~\cite{Cheinet2008IEEE}. A suspension system isolating the retroreflection mirror against seismic noise~\cite{Chaibi2016PRD} according to the specifications of Ref.~\cite{Losurdo1999RSI} reduces this contribution to a level similar to the Newtonian noise.
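The stated $f^{-2}$ and $f^{-4}$ decays follow from the low-frequency limits of the two phase responses: with $x=2\pi fT$, expanding the cosines gives $\phi_{\mathrm{SL}}\approx -h\,k\,L\,x^{2}$ and $\phi_{\mathrm{FTL}}\approx \tfrac{15}{4}\,h\,k\,L\,x^{4}$, so for a fixed phase noise floor the resolvable strain grows as $f^{-2}$ and $f^{-4}$. A numerical sketch confirming the leading powers:

```python
import math

def t_sl(x):   # single-loop phase per h*k*L, x = 2 pi f T
    return abs(2.0 * (math.cos(x) - 1.0))

def t_ftl(x):  # folded triple-loop phase per h*k*L
    return abs(0.5 * (5.0 - 9.0 * math.cos(2.0 * x)
                          + 4.0 * math.cos(3.0 * x)))

# At small x the responses scale as x^2 and x^4: halving x therefore
# reduces them by factors of 4 and 16, i.e. the strain sensitivity
# deteriorates as f^-2 and f^-4 towards low frequencies.
x = 1e-3
print(t_sl(x) / t_sl(x / 2.0))    # close to 4
print(t_ftl(x) / t_ftl(x / 2.0))  # close to 16
```

The steeper $f^{-4}$ roll-off of the folded geometry is the price paid for its immunity to initial-condition jitter, and motivates the resonant narrow-band mode at low frequencies.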
We emphasize that the spectral sensitivity, in particular the low-frequency cut-off, can be tuned by the free-fall time~$T$ of the atoms. In the case of the triple-loop interferometer the maximum value of~$T$ is only determined by the free-fall height. For example, in a $\sim$1\,m-high chamber~$T$ can be tuned up to 260\,ms.
In contrast, in the single-loop interferometer, the tuning capability is restricted by the separation of the two twin lattices as illustrated in Fig.~\ref{fig:GWD_3_pics}~(b). With $T$\,=\,260\,ms and adding 40\,ms for beam splitting, a height difference of $45\,\mathrm{cm}$ is needed. In our scenario, the triple-loop detector surpasses the signal-to-noise ratio of the single-loop by a factor of about 2 for the broad-band, and about 9 for the narrow-band mode.
The relaunches introduce additional noise terms to the folded triple-loop interferometer. The absolute pointing of the relaunch vectors can be adjusted with the interferometer itself by tuning their timings and their pointing. Implementations at the most quiet sites, featuring a residual rotation noise of $2\cdot10^{-11}\,\mathrm{(rad/s)/\sqrt{\mathrm{Hz}}}$~\cite{Schreiber2011PRL}, require either additional measures such as damping mechanical noise by one order of magnitude, or atom interferometric measurements sharing a relaunch pulse between two interleaved cycles~\cite{Savoie2018ScAdv,Dutta2016PRL,Meunier2014PRA,Biedermann2013PRL} to comply with the requirement on pointing noise. Using the second relaunch of one cycle as the first of the subsequent one leads to a noise averaging behaviour of $\sim1/t$ for an integration time $t$. For a white pointing noise, assuming a normal distribution with $\sigma=2\cdot10^{-11}\,\mathrm{rad}$ at 1\,s, the limit set by the intrinsic noise would be reached after an integration time of 400\,s.
In conclusion, present designs of terrestrial detectors of infrasound gravitational waves based on atom interferometry face several severe challenges related to scalability and atomic expansion, necessitating femtokelvin temperatures which cannot be reached with today's atomic source concepts. Therefore, we propose atomic folded-loop interferometers for horizontal antennas which overcome these stringent requirements at the cost of a very stable relaunch of the atoms. In addition, they merge the advantages of horizontal detectors and vertical setups: As horizontal antennas, they display a scalability of the arm length, and do not rely on deep boreholes, while preserving the tunability of the spectral response of vertical detectors enabling broadband and resonant detection modes. Our scheme opens a new pathway to reach strain sensitivities of the order of $7\cdot10^{-21}\,/\sqrt{\mathrm{Hz}}$ at 1\,Hz in terrestrial detectors.
\begin{thebibliography}{71} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont
{Abbott~{\textit{et al}.}}(2016{\natexlab{a}})}]{Abbott2016PRL_1}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~P.}\ \bibnamefont
{Abbott~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages}
{061102} (\bibinfo {year} {2016}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont
{Abbott~{\textit{et al}.}}(2016{\natexlab{b}})}]{Abbott2016PRL_2}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~P.}\ \bibnamefont
{Abbott~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages}
{221101} (\bibinfo {year} {2016}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dimopoulos}\ \emph {et~al.}(2008)\citenamefont
{Dimopoulos}, \citenamefont {Graham}, \citenamefont {Hogan}, \citenamefont
{Kasevich},\ and\ \citenamefont {Rajendran}}]{Dimopoulos2008PRD}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Dimopoulos}}, \bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont
{Graham}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Hogan}},
\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Kasevich}}, \ and\
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Rajendran}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. D}\ }\textbf
{\bibinfo {volume} {78}},\ \bibinfo {pages} {122002} (\bibinfo {year}
{2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Coleman~{\textit{et al}.}}(2018)}]{Coleman2018arXiv}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Coleman~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {arXiv:1812.00482}\ } (\bibinfo {year} {2018})}\BibitemShut
{NoStop} \bibitem [{mag()}]{magiswebsite}
\BibitemOpen
\href@noop {} {}\bibinfo {note} {{MAGIS-100},
\url{https://qis.fnal.gov/magis-100/}, accessed: 2019-05-22}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Canuel~{\textit{et al}.}}(2018)}]{Canuel2017arXiv}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Canuel~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Scientific Reports}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages}
{14064} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Acernese~{\textit{et al}.}}(2015)}]{Acernese2015CQG}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Acernese~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Classical and Quantum Gravity}\ }\textbf {\bibinfo {volume}
{32}},\ \bibinfo {pages} {024001} (\bibinfo {year} {2015})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Grote~{\textit{et al}.}}(2010)}]{Grote2010CQG}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Grote~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Classical and Quantum Gravity}\ }\textbf {\bibinfo {volume} {27}},\ \bibinfo
{pages} {084003} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Robson}\ \emph {et~al.}(2018)\citenamefont {Robson},
\citenamefont {Cornish},\ and\ \citenamefont {Liu}}]{Robson2018arXiv}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Robson}}, \bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont {Cornish}},
\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Liu}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv:1803.01944}\
} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Amaro-Seoane~{\textit{et al}.}}(2017)}]{AmaroSeoane2017arXiv}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Amaro-Seoane~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {arXiv:1702.00786}\ } (\bibinfo {year} {2017})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Amaro-Seoane~{\textit{et al}.}}(2013)}]{Danzman2013arXiv}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Amaro-Seoane~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {arXiv:1305.5720}\ } (\bibinfo {year} {2013})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Amaro-Seoane~{\textit{et al}.}}(2012)}]{Amaro-Seoane2012CQG}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Amaro-Seoane~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Classical and Quantum Gravity}\ }\textbf {\bibinfo {volume}
{29}},\ \bibinfo {pages} {124016} (\bibinfo {year} {2012})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Kawamura~{\textit{et al}.}}(2011)}]{Kawamura2011CQG}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Kawamura~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Class. Quantum Grav.}\ }\textbf {\bibinfo {volume} {28}},\
\bibinfo {pages} {094011} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Punturo~{\textit{et al}.}}(2010)}]{Punturo2010CQG}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Punturo~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Class. Quantum Grav.}\ }\textbf {\bibinfo {volume} {27}},\
\bibinfo {pages} {194002} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hogan}\ and\ \citenamefont
{Kasevich}(2016)}]{Hogan2016PRA}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Hogan}}\ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Kasevich}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. A}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {033632}
(\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Graham}\ \emph {et~al.}(2013)\citenamefont {Graham},
\citenamefont {Hogan}, \citenamefont {Kasevich},\ and\ \citenamefont
{Rajendran}}]{Graham2013PRL}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont
{Graham}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Hogan}},
\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Kasevich}}, \ and\
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Rajendran}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {110}},\ \bibinfo {pages} {171102} (\bibinfo {year}
{2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yu}\ and\ \citenamefont {Tinto}(2011)}]{Yu2011GRG}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Yu}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Tinto}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Gen. Relativ.
Gravit.}\ }\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {1943}
(\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hogan~{\textit{et al}.}}(2011)}]{Hogan2011GRG}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Hogan~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Gen. Relativ. Gravit.}\ }\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages}
{1953} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hohensee}\ \emph {et~al.}(2011)\citenamefont
{Hohensee}, \citenamefont {Lan}, \citenamefont {Houtz}, \citenamefont {Chan},
\citenamefont {Estey}, \citenamefont {Kim}, \citenamefont {Kuan},\ and\
\citenamefont {M{\"u}ller}}]{Hohensee2011GRG}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Hohensee}}, \bibinfo {author} {\bibfnamefont {S.-Y.}\ \bibnamefont {Lan}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Houtz}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Chan}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Estey}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont {P.-C.}\
\bibnamefont {Kuan}}, \ and\ \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {M{\"u}ller}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Gen. Relativ. Gravit.}\ }\textbf {\bibinfo {volume}
{43}},\ \bibinfo {pages} {1905} (\bibinfo {year} {2011})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Aguiar}(2011)}]{Aguiar2011RAA}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.~D.}\ \bibnamefont
{Aguiar}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Res.
Astron. Astrophys.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {1}
(\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Weber}(1960)}]{Weber1960PR}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Weber}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev.}\ }\textbf {\bibinfo {volume} {117}},\ \bibinfo {pages} {306} (\bibinfo
{year} {1960})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Weber}(1969)}]{Weber1969PRL}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Weber}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. Lett.}\ }\textbf {\bibinfo {volume} {22}},\ \bibinfo {pages} {1320}
(\bibinfo {year} {1969})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kolkowitz}\ \emph {et~al.}(2016)\citenamefont
{Kolkowitz}, \citenamefont {Pikovski}, \citenamefont {Langellier},
\citenamefont {Lukin}, \citenamefont {Walsworth},\ and\ \citenamefont
{Ye}}]{Kolkowitz2016PRD}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Kolkowitz}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Pikovski}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Langellier}}, \bibinfo
{author} {\bibfnamefont {M.~D.}\ \bibnamefont {Lukin}}, \bibinfo {author}
{\bibfnamefont {R.~L.}\ \bibnamefont {Walsworth}}, \ and\ \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Ye}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {94}},\
\bibinfo {pages} {124043} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sesana}(2017)}]{Sesana2017JPCS}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Sesana}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {IOP
Conf. Series: Journal of Physics: Conf. Series}\ }\textbf {\bibinfo {volume}
{840}},\ \bibinfo {pages} {012018} (\bibinfo {year} {2017})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Cutler}\ and\ \citenamefont
{Thorne}(2002)}]{Cutler2016arXiv}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Cutler}}\ and\ \bibinfo {author} {\bibfnamefont {K.~S.}\ \bibnamefont
{Thorne}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{arXiv:gr-qc/0204090}\ } (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Abbott~{\textit{et al}.}}(2017)}]{Abbott2017AJL}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~P.}\ \bibnamefont
{Abbott~{\textit{et al}.}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Astrophys. J. Lett.}\ }\textbf {\bibinfo {volume} {848}},\ \bibinfo {pages}
{12} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \end{thebibliography}
\end{document}
\begin{document}
\runauthor{Simanjuntak and Murdiansyah} \begin{frontmatter} \title{Metric Dimension of \\Amalgamation of Regular Graphs
} \author{Rinovia Simanjuntak and Danang Tri Murdiansyah}
\address{Combinatorial Mathematics Research Group\\Faculty of Mathematics and Natural Sciences\\Institut Teknologi Bandung, Indonesia\\ {\small rino@math.itb.ac.id}}
\newtheorem{lemma}{Lemma} \newtheorem{theorem}{Theorem} \newtheorem{proposition}{Proposition} \newtheorem{corollary}{Corollary} \newtheorem{definition}{Definition} \newtheorem{observation}{Observation} \newtheorem{problem}{Open Problem}
\begin{abstract} A set of vertices $S$ resolves a graph $G$ if every vertex is uniquely determined by its vector of distances to the vertices in $S$. The metric dimension of $G$ is the minimum cardinality of a resolving set of $G$.
Let $\{G_1, G_2, \ldots, G_n\}$ be a finite collection of graphs and each $G_i$ has a fixed vertex $v_{0_i}$ or a fixed edge $e_{0_i}$ called a terminal vertex or edge, respectively. The \emph{vertex-amalgamation} of $G_1, G_2, \ldots, G_n$, denoted by $Vertex-Amal\{G_i;v_{0_i}\}$, is formed by taking all the $G_i$'s and identifying their terminal vertices. Similarly, the \emph{edge-amalgamation} of $G_1, G_2, \ldots, G_n$, denoted by $Edge-Amal\{G_i;e_{0_i}\}$, is formed by taking all the $G_i$'s and identifying their terminal edges.
Here we study the metric dimensions of vertex-amalgamation and edge-amalgamation for finite collections of regular graphs: complete graphs and prisms. \end{abstract} \begin{keyword} graph distance; resolving set; metric dimension; amalgamation; complete graphs; prisms \end{keyword} \end{frontmatter}
\section{Introduction}
In this paper we consider finite, simple, and connected graphs. The vertex and edge sets of a graph $G$ are denoted by $V(G)$ and $E(G)$, respectively.
The \textit{distance} $d(u,v)$ between two vertices $u$ and $v$ in a connected graph $G$ is the length of a shortest $u-v$ path in
$G$. For an ordered set $S = \{v_{1}, v_{2}, \ldots, v_{k}\} \subseteq V(G)$, we refer to the $k$-vector $r(v|S)=(d(v,v_{1}), d(v,v_{2}), \ldots, d(v,v_{k}))$ as the {\em(metric) representation of $v$ with respect to $S$}. The set $S$ is called a \textit{resolving set} for $G$ if $r(u|S)= r(v|S)$ implies that $u = v$ for all $u,v \in V(G)$. In a graph $G$, a resolving set with minimum cardinality is called a \textit{basis} for $G$. The \textit{metric dimension}, $dim(G)$, is the number of vertices in a basis for $G$.
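The definitions above lend themselves to a direct brute-force computation. The following sketch (our own illustration, not part of the paper; all function names are ours) computes the metric dimension of a small graph from BFS distances by exhaustive search over candidate sets $S$:

```python
from itertools import combinations

def distances(adj):
    """All-pairs distances of a connected graph via BFS.
    adj maps each vertex to the set of its neighbours."""
    dist = {}
    for s in adj:
        d, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in d:
                        d[w] = d[u] + 1
                        nxt.append(w)
            frontier = nxt
        dist[s] = d
    return dist

def is_resolving(S, verts, dist):
    # S resolves G iff the representations r(v|S) are pairwise distinct
    reps = {tuple(dist[v][s] for s in S) for v in verts}
    return len(reps) == len(verts)

def metric_dimension(adj):
    verts, dist = list(adj), distances(adj)
    for k in range(1, len(verts)):
        if any(is_resolving(S, verts, dist) for S in combinations(verts, k)):
            return k
    return 0  # only reached for the one-vertex graph

# known values: dim(P_n) = 1, dim(C_n) = 2, dim(K_n) = n - 1
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
```

On these examples the search reproduces the classical values $dim(P_4)=1$, $dim(C_5)=2$, and $dim(K_4)=3$.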
The metric dimension problem was first introduced in 1975 by Slater \cite{Sla}, and independently by Harary and Melter \cite{HM76} in 1976; however, the problem for hypercubes was studied (and solved asymptotically) much earlier, in 1963, by Erd\H{o}s and R\'{e}nyi \cite{ER63}. In general, it is difficult to obtain a basis and the metric dimension of an arbitrary graph. Garey and Johnson \cite{Garey}, and also Khuller \textit{et al}. \cite{Khu96}, showed that determining the metric dimension of an arbitrary graph is an NP-complete problem. The problem remains NP-complete even for some specific families of graphs, such as bipartite graphs \cite{MARR08} or planar graphs \cite{DPSL12}. Thus research in this area has been directed towards: characterizing graphs with particular metric dimensions, determining the metric dimensions of particular classes of graphs, and constructing algorithms that ``best'' approximate metric dimensions.
To date, only graphs of order $n$ with metric dimension $1$ (the paths), $n-3$, $n-2$, and $n-1$ (the complete graphs) have been characterized \cite{CEJO00,HMPSW10,JO}. On the other hand, researchers have determined the metric dimensions of many particular classes of graphs. There are also some results on the metric dimensions of graphs resulting from graph operations, for instance: Cartesian product graphs \cite{Melter1984,Khu96,CHMPPSW07}, join product graphs \cite{BCPZ03,CHMPPSW05}, strong product graphs \cite{RKYS}, corona product graphs \cite{YKR10,IBR11}, lexicographic product graphs \cite{SRUABSB13}, hierarchical product graphs \cite{FW13}, line graphs \cite{KY12,FXW13}, and permutation graphs \cite{HKY}.
In this paper, we study the metric dimension of graphs resulting from two further graph operations: vertex-amalgamation and edge-amalgamation. Let $\{G_1, G_2, \ldots, G_n\}$ be a finite collection of graphs, where each \emph{block} $G_i$ has a fixed vertex $v_{0_i}$ or a fixed edge $e_{0_i}$ called a \emph{terminal vertex} or \emph{edge}, respectively. The \emph{vertex-amalgamation} of $G_1, G_2, \ldots, G_n$, denoted by $Vertex-Amal\{G_i;v_{0_i}\}$, is formed by taking all the $G_i$'s and identifying their terminal vertices. Similarly, the \emph{edge-amalgamation} of $G_1, G_2, \ldots, G_n$, denoted by $Edge-Amal\{G_i;e_{0_i}\}$, is formed by taking all the $G_i$'s and identifying their terminal edges.
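Both constructions are easy to carry out mechanically. The sketch below (our own illustration; `vertex_amal` and `edge_amal` are hypothetical helper names, not from the paper) builds the two amalgamations as adjacency dictionaries by relabelling each block's terminal vertex, or the endpoints of its terminal edge, to a shared label:

```python
def vertex_amal(blocks, terminals):
    """Vertex-amalgamation: blocks[i] is an adjacency dict and terminals[i]
    its terminal vertex; all terminal vertices are identified to 'c'."""
    adj = {}
    def lab(i, v):
        return 'c' if v == terminals[i] else (i, v)
    for i, block in enumerate(blocks):
        for u, nbrs in block.items():
            adj.setdefault(lab(i, u), set()).update(lab(i, w) for w in nbrs)
    return adj

def edge_amal(blocks, term_edges):
    """Edge-amalgamation: term_edges[i] = (a, b) is the terminal edge of
    blocks[i]; the a-endpoints are identified to 'c1', the b-endpoints to 'c2'."""
    adj = {}
    def lab(i, v):
        a, b = term_edges[i]
        return 'c1' if v == a else 'c2' if v == b else (i, v)
    for i, block in enumerate(blocks):
        for u, nbrs in block.items():
            adj.setdefault(lab(i, u), set()).update(lab(i, w) for w in nbrs)
    return adj

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # K_3
bowtie = vertex_amal([triangle, triangle], [0, 0])        # two triangles sharing a vertex
book = edge_amal([triangle, triangle], [(0, 1), (0, 1)])  # two triangles sharing an edge
```

For example, the vertex-amalgamation of two triangles is the bowtie graph on $5$ vertices (the shared vertex has degree $4$), while their edge-amalgamation has $4$ vertices.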
Previous studies of amalgamation of graphs cover the vertex-amalgamation of two arbitrary graphs \cite{PZ02}, vertex-amalgamation of cycles \cite{IBSS10a,IBSS10b}, and edge-amalgamation of cycles \cite{SABISU}. Poisson and Zhang studied the vertex-amalgamation of two nontrivial connected graphs $G_1, G_2$ and provided the following lower bound.
\begin{theorem} \emph{\cite{PZ02}} Let $G$ be the vertex-amalgamation of nontrivial connected graphs $G_1$ and $G_2$. Then \[dim(G) \geq dim(G_1) + dim(G_2) - 2.\] \label{2} \end{theorem}
Other known results are vertex-amalgamation and edge-amalgamation of cycles. We denote by $C_n$ the cycle of order $n$.
\begin{theorem} \emph{\cite{IBSS10b,SABISU}} Let $\{C_{c_1}, C_{c_2}, \ldots, C_{c_n}\}$ be a collection of $n$ cycles with $n_e$ cycles of even order. Suppose that $G$ is the vertex-amalgamation of $C_{c_1}, C_{c_2}, \ldots, C_{c_n}$ and $H$ is the edge-amalgamation of $C_{c_1}, C_{c_2}, \ldots, C_{c_n}$. Then \[dim(G)=\left\{ \begin{array}{ll} \sum_{i=1}^{n} dim(C_{c_i}) - n & , n_e=0,\\ \sum_{i=1}^{n} dim(C_{c_i}) - n + n_e - 1& , n_e \geq 1 \end{array} \right.\] and \[\sum_{i=1}^{n} dim(C_{c_i}) - n - 2 \leq dim(H) \leq \sum_{i=1}^{n} dim(C_{c_i}) - n.\] \label{Cn} \end{theorem}
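As a sanity check, the vertex-amalgamation formula above can be verified by brute force on small instances. The following self-contained sketch (our own; it re-implements the needed helpers and is not part of the paper) confirms the formula for an all-odd pair $(C_3, C_5)$, where $n_e=0$, and an all-even pair $(C_4, C_4)$, where $n_e=2$:

```python
from itertools import combinations

def cycle(n):
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def vertex_amal(blocks, terminals):
    # identify the terminal vertex of every block to the single label 'c'
    adj = {}
    def lab(i, v):
        return 'c' if v == terminals[i] else (i, v)
    for i, block in enumerate(blocks):
        for u, nbrs in block.items():
            adj.setdefault(lab(i, u), set()).update(lab(i, w) for w in nbrs)
    return adj

def distances(adj):
    # all-pairs distances via BFS
    dist = {}
    for s in adj:
        d, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in d:
                        d[w] = d[u] + 1
                        nxt.append(w)
            frontier = nxt
        dist[s] = d
    return dist

def metric_dimension(adj):
    verts, dist = list(adj), distances(adj)
    for k in range(1, len(verts)):
        for S in combinations(verts, k):
            if len({tuple(dist[v][s] for s in S) for v in verts}) == len(verts):
                return k
    return 0  # single-vertex graph

# dim(C_n) = 2 for every cycle, so with n = 2 blocks the theorem gives:
odd = vertex_amal([cycle(3), cycle(5)], [0, 0])   # n_e = 0: 2 + 2 - 2 = 2
even = vertex_amal([cycle(4), cycle(4)], [0, 0])  # n_e = 2: 2 + 2 - 2 + 2 - 1 = 3
```

The brute-force values agree with the theorem on both instances.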
The previous theorem provided the metric dimensions of vertex and edge amalgamation of connected 2-regular graphs. In the next section, we shall consider metric dimensions of vertex-amalgamation and edge-amalgamation of other connected regular graphs: complete graphs and prisms.
\section{Main Results}
\subsection{Metric Dimension of Amalgamation of Complete Graphs}
Two vertices $u$ and $v$ of a graph $G$ are defined in \cite{SZ03} to be \emph{distance similar} if $d(u,x) = d(v,x)$ for all $x \in V(G) - \{u,v\}$. Clearly, distance similarity is an equivalence relation on $V(G)$. The following observation is useful.
\begin{observation} \emph{\cite{SZ03}}
Let $G$ be a graph and let $V_1, V_2, \ldots , V_k$ be the $k$ distinct distance-similar equivalence classes of $V(G)$. If $W$ is a resolving set of $G$, then $W$ contains at least $|V_i|-1$ vertices from each equivalence class $V_i$, and so $dim(G) \geq |V(G)|-k$. \label{ds} \end{observation}
Let $K_k$ be the complete graph of order $k$; obviously, $K_k$ is a $(k-1)$-regular graph. Consider the vertex-amalgamation of a collection $\{K_{k_1}, K_{k_2}, \ldots, K_{k_n}\}$ of $n$ complete graphs, where the $k_i$'s are in increasing order. We denote by $v_1^i, v_2^i, \ldots, v_{k_i}^i$ the vertices of the block ${K_{k_i}}$, by $c=v_{k_i}^i$ the terminal vertex, and by ${K_{k_i-1}}$ the subgraph obtained by deleting $c$ from the block ${K_{k_i}}$.
\begin{theorem} Let $\{K_{k_1}, K_{k_2}, \ldots, K_{k_n}\}$ be a collection of $n$ complete graphs with $n_2$ complete graphs of order $2$. If $G$ is the vertex-amalgamation of $K_{k_1}, \ldots, K_{k_n}$ then \[dim(G)=\left\{ \begin{array}{ll} \sum_{i=1}^{n} dim(K_{k_i}) - n + n_2 - 1 & , n_2 \geq 2,\\ \sum_{i=1}^{n} dim(K_{k_i}) - n & , \hbox{otherwise} \end{array} \right.\] \label{vKn} \end{theorem}
\begin{proof} For $n_2 \geq 2$, let $V_c = \{c\}$, $V_{0} = \{v_1^1, v_1^2, \ldots , v_1^{n_2}\}$ and $V_i = V({K_{k_{n_2+i}-1}})$, $i = 1, \ldots, n-n_2$. Clearly, $V_c, V_0, V_1, V_2, \ldots, V_{n-n_2}$ are the distance-similar equivalence classes of $V(G)$. By Observation \ref{ds}, a resolving set of $G$ contains at least $|V_i|-1$ vertices from each equivalence class $V_i$, and so $dim(G) \geq 0 + (n_2 - 1) + \sum_{i=1}^{n-n_2} ((k_{n_2+i}-1) - 1) = \sum_{i=1}^{n} dim(K_{k_i}) - n + n_2 - 1$. Consider $S = \{v_1^i \mid i = 1, \ldots, n_2-1\} \cup \{v_1^i, v_2^i, \ldots ,v_{k_i-2}^i \mid i = n_2+1, \ldots, n\}$. Then $r(c|S) = (1, \ldots , 1)$, $r(v_1^{n_2}|S) = (2, \ldots ,2)$, and the coordinates in $r(v_{k_i-1}^i|S)$ are 1 for those corresponding to vertices in the block $K_{k_i}$ and 2 otherwise. Hence $S$ is a resolving set and $dim(G) \leq \sum_{i=1}^{n} dim(K_{k_i}) - n + n_2 - 1$.
For $n_2 = 1$, let $V_c = \{c\}$, $V_0= \{v_1^1\}$, and $V_i = V({K_{k_{n_2+i}-1}})$, $i = 1, \ldots, n-n_2$; then $V_c, V_0, V_1, \ldots, V_{n-n_2}$ are the distance-similar equivalence classes of $V(G)$. By Observation \ref{ds}, a resolving set of $G$ contains at least $(k_{n_2+i}-1)-1=k_{n_2+i}-2$ vertices from each $V_i$; thus $dim(G) \geq \sum_{i=1}^{n} dim(K_{k_i}) - n$. Choose $S = \{v_1^i, v_2^i, \ldots ,v_{k_i-2}^i \mid i = n_2+1, \ldots, n\}$; then $r(c|S) = (1, \ldots , 1)$, $r(v_1^1|S)= (2, \ldots ,2)$, and the coordinates in $r(v_{k_i-1}^i|S)$ are 1 for those corresponding to vertices in the block $K_{k_i}$ and 2 otherwise. Therefore $S$ resolves $G$, and so $dim(G) \leq \sum_{i=1}^{n} dim(K_{k_i}) - n$.
For $n_2 = 0$, let $V_c = \{c\}$ and $V_i = V({K_{k_{i}-1}})$, $i = 1, \ldots, n$, so that $V_c, V_1, \ldots, V_{n}$ are the distance-similar equivalence classes of $V(G)$. Thus, $dim(G) \geq \sum_{i=1}^{n} (k_{i}-2) = \sum_{i=1}^{n} dim(K_{k_i}) - n$. Now define $S = \{v_1^i, v_2^i, \ldots ,v_{k_i-2}^i \mid i = 1, \ldots, n\}$; then $r(c|S) = (1, \ldots , 1)$ and the coordinates in $r(v_{k_i-1}^i|S)$ are 1 for those corresponding to vertices in the block $K_{k_i}$ and 2 otherwise. Therefore $S$ resolves $G$, which gives $dim(G) \leq \sum_{i=1}^{n} dim(K_{k_i}) - n$ and completes the proof. \end{proof}
Consider the edge-amalgamation of a collection $\{K_{k_1}, K_{k_2}, \ldots, K_{k_n}\}$ of $n$ complete graphs, where the $k_i$'s are in increasing order. We denote by $v_1^i, v_2^i, \ldots, v_{k_i}^i$ the vertices of the block ${K_{k_i}}$, by $c_1c_2=v_{k_i-1}^iv_{k_i}^i$ the terminal edge, and by ${K_{k_i-2}}$ the subgraph obtained by deleting $c_1$ and $c_2$ from the block ${K_{k_i}}$.
\begin{theorem} Let $\{K_{k_1}, K_{k_2}, \ldots, K_{k_n}\}$ be a collection of $n$ complete graphs with $n_3$ complete graphs of order $3$. If $G$ is the edge-amalgamation of $K_{k_1}, \ldots, K_{k_n}$ then \[dim(G)=\left\{ \begin{array}{ll} \sum_{i=1}^{n} dim(K_{k_i}) - 2n + 1, &n_3=0,\\ \sum_{i=1}^{n} dim(K_{k_i}) - 2n + 2, &n_3=1 \hbox{ and } n=2,\\
\sum_{i=1}^{n} dim(K_{k_i}) - 2n + n_3, &\hbox{otherwise}. \end{array} \right.\] \label{eKn} \end{theorem} \begin{proof}
For $n_3=0$, let $V_0 = \{c_1, c_2\}$ and $V_i = V({K_{k_i-2}})$, $i=1,2, \ldots, n$. Then $V_0, V_1, \ldots , V_n$ are the distance-similar equivalence classes of $V(G)$. By Observation \ref{ds}, a resolving set of $G$ contains at least $|V_i|-1$ vertices from each equivalence class $V_i$, and so $dim(G) \geq \sum_{i=1}^{n} dim(K_{k_i}) - 2n + 1$. Define a set $S = \{c_1\} \cup \{v_1^i, v_2^i, \ldots, v_{k_i-3}^i \mid i=1,2, \ldots, n\}$; then $r(c_2|S) = (1, \ldots , 1)$ and the coordinates in $r(v_{k_i-2}^{i}|S)$ are 1 for those corresponding to $c_1$ and to vertices in the block $K_{k_i}$ and 2 otherwise. Thus $S$ resolves $G$ and $dim(G) \leq \sum_{i=1}^{n} dim(K_{k_i}) - 2n + 1$.
For $n_3=1$ and $n=2$, we have $\{v_1^1\}$, $V(K_{k_2-2})$, and $\{c_1,c_2\}$ as the distance-similar equivalence classes of $V(G)$. By Observation \ref{ds}, a resolving set of $G$ contains at least $|V(K_{k_2-2})|-1$ vertices of $K_{k_2-2}$ and 1 vertex of $\{c_1,c_2\}$, so that $dim(G) \geq (k_2-3) + 1 = k_2-2$. Assume $R$ is a resolving set with cardinality $k_2-2$; then there exist $a \in V(K_{k_2-2})$ and $b\in \{c_1,c_2\}$ which are not contained in $R$. In this case $r(a|R) = (1,1, \ldots , 1) = r(b|R)$, a contradiction. Therefore $dim(G) \geq k_2-1$. Let $S= \{v_1^1\} \cup \{v_1^2, v_2^2, \ldots , v_{k_2-3}^{2}\} \cup \{c_1\}$. Then we have $r(c_2|S) = (1, \ldots , 1)$, $r(v_{k_2-2}^{2}|S) = (2, 1, 1, \ldots , 1)$, and so $dim(G) \leq 1 + (k_2-3) + 1= k_2-1$. Therefore $dim(G) = k_2-1 = \sum_{i=1}^{n} dim(K_{k_i}) - 2n + 2$.
For $n_3=n$, the sets $\{c_1,c_2\}$ and $\{v_1^1, v_1^2, \ldots , v_1^{n_3}\}$ are the distance-similar equivalence classes of $V(G)$. By Observation \ref{ds}, we have $dim(G) \geq 1 + (n_3-1) = n_3$. Let $S=\{c_1\} \cup \{v_1^1, v_1^2, \ldots, v_1^{n_3-1}\}$; then $r(c_2|S) = (1, 1, \ldots , 1)$ and $r(v_1^{n_3}|S) = (1, 2, 2, \ldots , 2)$. Thus $dim(G) \leq n_3$, and we have $dim(G) = n_3= \sum_{i=1}^{n} dim(K_{k_i}) - 2n + n_3$.
For the remaining cases, let $V_c = \{c_1, c_2\}$, $V_0 = \{v_1^1, v_1^2, \ldots , v_1^{n_3}\}$, $V_i = V(K_{k_{n_3+i}-2})$, $i=1,2, \ldots , n-n_3$. We can see that $V_c, V_0, V_1, \ldots, V_{n-n_3}$ are the distance-similar equivalence classes of $V(G)$. By Observation \ref{ds}, $dim(G) \geq 1 + (n_3-1) + \sum_{i=n_3+1}^n((k_i - 2) - 1) = \sum_{i=n_3+1}^n(k_i - 3) + n_3$. Choose $S=\{c_1\} \cup \{v_1^1, v_1^2, \ldots, v_1^{n_3-1}\} \cup \{v_1^{n_3+i}, v_2^{n_3+i}, \ldots , v_{k_{n_3+i}-3}^{n_3+i} \mid i=1,2, \ldots , n-n_3\}$. Then we have $r(c_2|S) = (1, \ldots ,1)$, $r(v_1^{n_3}|S) = (1, 2, \ldots , 2)$, and the coordinates in $r(v_{k_{n_3+i}-2}^{n_3+i}|S)$ are 1 for those corresponding to $c_1$ and to vertices in the block $K_{k_{n_3+i}}$ and 2 otherwise. Therefore $dim(G) \leq \sum_{i=n_3+1}^{n}(k_i - 3) + n_3$ and, consequently, $dim(G) = \sum_{i=n_3+1}^{n}(k_i - 3) + n_3 = \sum_{i=1}^{n} dim(K_{k_i}) - 2n + n_3$. \end{proof}
\subsection{Metric Dimension of Amalgamation of Prisms}
For $n\geq 3$, the \emph{prism} $Pr_n= C_n \times P_2$ is a 3-regular graph of order $2n$. Let $V(Pr_n)=\{u_1, \ldots, u_n, v_1, \ldots, v_n\}$ and $E(Pr_n)=\{u_i v_i \mid i=1, \ldots, n\} \cup \{u_i u_{i+1} \mid i=1,\ldots , n-1\} \cup \{u_n u_1\} \cup \{v_i v_{i+1} \mid i=1,\ldots , n-1\} \cup \{v_nv_1\}$. Consider the vertex-amalgamation of a collection $\{Pr_{p_1}, Pr_{p_2}, \ldots, Pr_{p_n}\}$ of $n$ prisms. We denote by $u_1^i, \ldots, u_{p_i}^i, v_1^i, \ldots, v_{p_i}^i$ the vertices of the block ${Pr_{p_i}}$, by $c=v_1^i$ the terminal vertex, and by ${Pr_{p_i-1}}$ the subgraph obtained by deleting $c$ from the block ${Pr_{p_i}}$.
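The block dimensions used below are $dim(Pr_n)=2$ for odd $n$ and $dim(Pr_n)=3$ for even $n$. As a sanity check, the following Python sketch (ours, not part of the paper) builds the prism as an adjacency dictionary and recovers these values by brute force over candidate resolving sets.

```python
from itertools import combinations
from collections import deque

def prism(n):
    """Adjacency dict of Pr_n = C_n x P_2 on vertices ('u',i), ('v',i)."""
    adj = {(x, i): set() for x in 'uv' for i in range(n)}
    for i in range(n):
        j = (i + 1) % n
        adj[('u', i)] |= {('u', j), ('v', i)}   # outer cycle edge and rung
        adj[('u', j)].add(('u', i))
        adj[('v', i)] |= {('v', j), ('u', i)}   # inner cycle edge and rung
        adj[('v', j)].add(('v', i))
    return adj

def bfs_dist(adj, src):
    """Shortest-path distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def metric_dimension(adj):
    """Smallest |W| whose distance vectors separate all vertices."""
    verts = list(adj)
    dist = {v: bfs_dist(adj, v) for v in verts}
    for k in range(1, len(verts)):
        for W in combinations(verts, k):
            if len({tuple(dist[v][w] for w in W) for v in verts}) == len(verts):
                return k
    return len(verts) - 1

assert metric_dimension(prism(3)) == 2   # odd prism
assert metric_dimension(prism(5)) == 2   # odd prism
assert metric_dimension(prism(4)) == 3   # even prism (the cube Q_3)
```

For instance, in $Pr_5$ the pair $\{u_1,u_3\}$ already resolves all ten vertices, while no pair resolves $Pr_4$.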
The following observations are needed in determining the metric dimension of vertex-amalgamation of prisms. \begin{observation}
If $R$ is a resolving set of $Vertex-Amal\{Pr_{p_i};v_1^i\}$ then $|Pr_{p_i} \cap R| \geq 1$ for all $i$. \label{vaPr>1} \end{observation} \begin{proof}
Suppose that there exists $j$ such that $Pr_{p_j} \cap R = \emptyset$; then $r(v_{2}^{j}|R) = r(v_{p_j}^{j}|R)$, a contradiction. \end{proof}
\begin{observation}
Let $R$ be a resolving set of $Vertex-Amal\{Pr_{p_i};v_1^i\}$. If $p_i$ is even then $|Pr_{p_i} \cap R| \geq 2$. \label{vaPre} \end{observation} \begin{proof}
By Observation \ref{vaPr>1}, $|Pr_{p_i} \cap R| \geq 1$. Suppose that $|Pr_{p_i} \cap R| = 1$ and let $x$ be the vertex in $Pr_{p_i} \cap R$. If $x\in \{u_1^i, u_{\frac{p_i}{2}+1}^i, v_{\frac{p_i}{2}+1}^i\}$ then $r(u_{2}^{i}|R) = r(u_{p_i}^{i}|R)$. If $x\in \{u_{\frac{p_i}{2}+2}^i, \ldots, u_{p_i}^i, v_2^i, \ldots, v_{\frac{p_i}{2}}^i\}$ then $r(u_{1}^{i}|R) = r(v_{p_i}^{i}|R)$. If $x\in \{u_2^i, \ldots, u_{\frac{p_i}{2}}^i, v_{\frac{p_i}{2}+2}^i, \ldots, v_{p_i}^i\}$ then $r(u_{1}^{i}|R) = r(v_{2}^{i}|R)$. All possible cases lead to a contradiction, and so $|Pr_{p_i} \cap R| \geq 2$. \end{proof}
\begin{observation}
Let $R$ be a resolving set of $Vertex-Amal\{Pr_{p_i};v_1^i\}$. If $p_{i}$ and $p_{j}$ are both odd then $|(Pr_{p_{i}} \cup Pr_{p_{j}}) \cap R| \geq 3$. \label{vaPro} \end{observation} \begin{proof}
By Observation \ref{vaPr>1}, $|(Pr_{p_i} \cup Pr_{p_{j}}) \cap R| \geq 2$. Suppose that $|(Pr_{p_i} \cup Pr_{p_{j}}) \cap R| = 2$ and, without loss of generality, let $x$ be the vertex in $Pr_{p_i} \cap R$. If $x = u_1^i$ then $r(u_{2}^{i}|R) = r(u_{p_i}^{i}|R)$. If $x\in \{u_{\frac{p_i+3}{2}}^i, \ldots, u_{p_i}^i, v_2^i, \ldots, v_{\frac{p_i+1}{2}}^i\}$ then $r(u_{1}^{i}|R) = r(v_{p_i}^{i}|R)$. If $x\in \{u_2^i, \ldots, u_{\frac{p_i+1}{2}}^i,$ $v_{\frac{p_i+3}{2}}^i, \ldots, v_{p_i}^i\}$ then $r(u_{1}^{i}|R) = r(v_{2}^{i}|R)$. Thus we have $|(Pr_{p_{i}} \cup Pr_{p_{j}}) \cap R| \geq 3$. \end{proof}
Now we are ready to prove the following. \begin{theorem} Let $\{Pr_{p_1}, Pr_{p_2}, \ldots, Pr_{p_n}\}$ be a collection of $n$ prisms with $n_o$ prisms of odd order. If $G$ is the vertex-amalgamation of $Pr_{p_1}, \ldots, Pr_{p_n}$ then \[dim(G)=\left\{ \begin{array}{ll} \sum_{i=1}^{n} dim(Pr_{p_i}) - n & , n_o=0,\\ \sum_{i=1}^{n} dim(Pr_{p_i}) - n + n_o - 1& , n_o \geq 1 \end{array} \right.\] \label{vPrn} \end{theorem} \begin{proof} For $n_o=0$, we have $dim(G) \geq 2n$ by Observation \ref{vaPre}. Now, we define $S= \bigcup_{i=1}^{n} \{u_{\frac{p_i}{2}}^i,v_{\frac{p_i}{2}}^i\}$. It is clear that if $x,y$ are two distinct vertices in $Pr_{p_i-1}$ with $d(x,u_{\frac{p_i}{2}}^i)=d(y,u_{\frac{p_i}{2}}^i)$ and $d(x,v_{\frac{p_i}{2}}^i)=d(y,v_{\frac{p_i}{2}}^i)$, then $d(x,c)\neq d(y,c)$. Hence $S$ is a resolving set, and so $dim(G) = 2n = \sum_{i=1}^{n} dim(Pr_{p_i}) - n$.
For $n_o \geq 1$: by Observation \ref{vaPre}, each $Pr_{p_{i}}$ with even $p_{i}$ contributes at least 2 vertices to a resolving set, and, by Observation \ref{vaPro}, each $Pr_{p_{i}}$ with odd $p_{i}$ contributes at least 2 vertices, except possibly one, which contributes at least 1 vertex. Therefore $dim(G) \geq 2(n-n_o) + 2n_o - 1 = 2n - 1$. For the upper bound, we denote by $p_{i_o}$ the minimum among the odd $p_i$'s. Define $S=\bigcup_{i\neq i_o} \{u_{\lceil\frac{p_i}{2}\rceil}^i,v_{\lceil\frac{p_i}{2}\rceil}^i\} \cup \{v_{\lceil\frac{p_{i_o}}{2}\rceil}^{i_o}\}$. It is a routine exercise to show that $S$ is a resolving set, and we obtain $dim(G) = 2n - 1 = \sum_{i=1}^{n} dim(Pr_{p_i}) - n + n_o - 1$. \end{proof}
Consider the edge-amalgamation of a finite collection $\{Pr_{p_1}, Pr_{p_2}, \ldots, Pr_{p_n}\}$ of prisms. We denote by $u_1^i, \ldots, u_{p_i}^i, v_1^i, \ldots, v_{p_i}^i$ the vertices of the block ${Pr_{p_i}}$, by $c_1c_2=v_1^iv_{p_i}^i$ the terminal edge, and by ${Pr_{p_i-2}}$ the subgraph obtained by deleting $c_1$ and $c_2$ from the block ${Pr_{p_i}}$. The following observations are essential and can be proved similarly to those for the vertex-amalgamation of prisms.
\begin{observation}
If $R$ is a resolving set of $Edge-Amal\{Pr_{p_i};v_1^iv_{p_i}^i\}$ then $|Pr_{p_i} \cap R| \geq 1$ for all $i$. \label{eaPr>1} \end{observation}
\begin{observation}
If $R$ is a resolving set of $Edge-Amal\{Pr_{p_i};v_1^iv_{p_i}^i\}$ then $|(Pr_{p_{i}} \cup Pr_{p_{j}}) \cap R| \geq 3$ for all $i \neq j$. \label{eaPr>3} \end{observation}
Now we are ready to prove the last theorem.
\begin{theorem} Let $\{Pr_{p_1}, Pr_{p_2}, \ldots, Pr_{p_n}\}$ be a collection of $n$ prisms with $n_o$ prisms of odd order. If $G$ is the edge-amalgamation of $Pr_{p_1}, \ldots, Pr_{p_n}$ then \[dim(G)=\sum_{i=1}^{n} dim(Pr_{p_i}) - n + n_o - 1.\] \label{ePrn} \end{theorem} \begin{proof} By Observation \ref{eaPr>3}, we have $dim(G) \geq 2n - 1$. Now, we denote by $p_{i_o}$ the minimum among the odd $p_i$'s and define $$S = \bigcup_{{\rm even} \ p_i} \{u_{\frac{p_i}{2}}^i,v_{\frac{p_i}{2}}^i\} \cup \bigcup_{{\rm odd} \ p_i,\, i\neq i_o} \{v_{\frac{p_i+1}{2}}^i,u_1^i\} \cup \{v_{\frac{p_{i_o}+1}{2}}^{i_o}\}.$$ It can be checked that $S$ is a resolving set, and so $dim(G) = 2n - 1 = \sum_{i=1}^{n} dim(Pr_{p_i}) - n + n_o - 1$. \end{proof}
The aforementioned results for complete graphs and prisms give rise to the following more general questions.
\begin{problem} Let $\{G_1, G_2, \ldots, G_n\}$ be a finite collection of graphs, where $v_{0_i}$ is a terminal vertex of $G_i$, $i=1,2,\ldots,n$. Determine $dim(Vertex-Amal\{G_i;v_{0_i}\})$ in terms of the $dim(G_i)$'s. \end{problem}
\begin{problem} Let $\{G_1, G_2, \ldots, G_n\}$ be a finite collection of graphs, where $e_{0_i}$ is a terminal edge of $G_i$, $i=1,2,\ldots,n$. Determine $dim(Edge-Amal\{G_i;e_{0_i}\})$ in terms of the $dim(G_i)$'s. \end{problem}
\end{document} |
\begin{document}
\title{Multi-qubit entanglement engineering via projective measurements}
\author{Witlef Wieczorek} \email[]{witlef.wieczorek@mpq.mpg.de}
\author{Nikolai Kiesel} \author{Christian Schmid} \author{Harald Weinfurter}
\affiliation{Max-Planck-Institut f{\"u}r Quantenoptik, Hans-Kopfermann-Strasse 1, D-85748 Garching, Germany} \affiliation{Department f\"ur Physik, Ludwig-Maximilians-Universit{\"a}t, D-80797 M{\"u}nchen, Germany}
\date{\today}
\begin{abstract}
So far, various multi-photon entangled states have been observed experimentally by using different experimental set-ups. Here, we present a scheme to realize many SLOCC-inequivalent states of three and four qubits via projective measurements on suitable entangled states. We demonstrate how these states can be observed experimentally in a single set-up and study the feasibility of the implementation with present-day technology.
\end{abstract}
\pacs{03.65.Ud, 03.67.Mn, 42.50.Ex, 42.65.Lm}
\maketitle
\section{Introduction}
Entangled states are an essential resource for quantum information applications. Recently, the equivalence under stochastic local operations and classical communication (SLOCC) was successfully used to classify multi-partite entanglement \cite{Ben00,Dur00,Ver02,Lam06}. This classification is particularly relevant for evaluating the use of states for multi-party quantum communication as states of the same SLOCC class can be employed for the same applications. Therefore, the experimental realization of different SLOCC-inequivalent states is highly desirable.
So far, several SLOCC-inequivalent states have been realized in various physical systems. The largest variety of states has been observed in experiments relying on photonic qubits (e.g.~\cite{Bou99}). However, this experimental approach is typically rather inflexible: the design of the necessary optical network is tailored to the particular state that is to be observed. Consequently, once a particular network is built, it does not, in general, offer a choice between different SLOCC-inequivalent states. Recently, a linear optics experiment broke with this inflexibility \cite{Wie08} by allowing the observation of an entire family of SLOCC-inequivalent four-photon entangled states. Essentially, this was achieved by multi-photon interference.
Here we show that projective measurements on subsystems can provide another means of preparing SLOCC-inequivalent classes of entangled states. It is well known that atomic entangled states, even from different SLOCC classes, can be remotely prepared by projective measurements on photons that have previously been entangled with the atoms \cite{Cab99,Bas07qph}, i.e., by a measurement of typically half of the total multi-partite entangled state. Here, in contrast, we focus on the property of certain symmetric multi-partite entangled states that allows a more flexible preparation of families of SLOCC-inequivalent types of entanglement by projective measurements on small subsystems. The initial $n$-qubit symmetric states can be observed in linear optics set-ups that distribute $n$ photons of a single spatial mode to $n$ different output modes. Subsequent projective measurements on these $n$-qubit states will yield states belonging to different SLOCC classes. We focus in the following on the cases $n=4$ and $n=5$ and demonstrate that our approach can be realized by using a single linear optical set-up only.
The paper is structured as follows. In \sref{sec-1} we discuss the effect of projective measurements on particular symmetric states. We begin our investigations with SLOCC-inequivalent three-qubit states obtained from the four-qubit symmetric Dicke state with two excitations $\ket{D_4^{(2)}}$ \cite{Dic54,Kie07,Sch07ProcNato}. Further, we show how to obtain SLOCC-inequivalent four-qubit entangled states like, e.g., the states $\ket{GHZ_4}$, $\ket{W_4}$ and even $\ket{D_4^{(2)}}$, via projective measurements on five-qubit states, which are given by superpositions of two symmetric Dicke states \cite{ScP07}. In \sref{sec-2} we will discuss the experimental implementation of the proposed schemes. We recapitulate the experiment of Ref.~\cite{Kie07} that led to the observation of $\ket{D_4^{(2)}}$ and discuss the feasibility of an extension in order to observe the five-qubit states.
\section{\label{sec-1}Projective measurements on particular symmetric states}
In their seminal work D\"ur \emph{et al.} \cite{Dur00} discovered that only two SLOCC-inequivalent classes of genuine tri-partite entanglement exist: the GHZ and W class. Well known representatives of these classes are the states $\ket{GHZ_3}=1/\sqrt{2}(\ket{HHH}+\ket{VVV})$ and $\ket{W_3}=1/\sqrt{3}(\ket{HHV}+\ket{HVH}+\ket{VHH})$, respectively. We use polarization encoding for qubits throughout this work, e.g.~$\vert HHV\rangle=\vert H\rangle_a\otimes\vert H\rangle_b\otimes\vert V\rangle_c$, where $\vert H\rangle$ and $\vert V\rangle$ denote linear horizontal ($H$) and vertical ($V$) polarization of photons, respectively, and the subscript denotes the spatial mode of each photon. In contrast to the three qubit case, the SLOCC classification of four-partite entangled states is much richer, containing infinitely many SLOCC-inequivalent four-partite entangled states \cite{Ver02,Lam07}.
In the following we show that via projective measurements on particular symmetric states, SLOCC-inequivalent entangled states of a lower qubit number can be obtained. To this end, we consider particular members of the family of symmetric Dicke states \cite{Dic54}. Generally, a symmetric $N$-qubit Dicke state with $m$ excitations, denoted as $\ket{D_N^{(m)}}$, is, again in the notation of polarization encoded photonic qubits, the equally weighted superposition of all possible permutations of $N$-qubit product states with $m$ vertically and $N-m$ horizontally polarized photons.
\subsection{Projections of the four-qubit Dicke state $\ket{D_4^{(2)}}$}
First, we aim at obtaining states from the two inequivalent tri-partite entanglement classes by applying projective measurements on a four-qubit entangled state. The symmetric Dicke state $\ket{D_4^{(2)}}=\frac{1}{\sqrt{6}}(\vert HHVV\rangle+\vert HVHV\rangle+\vert VHHV\rangle+\vert HVVH\rangle+\vert VHVH\rangle+\vert VVHH\rangle)$ turned out to be useful for this purpose \cite{Kie07}. Here, we will analyze in more detail which three-qubit states can be obtained.
Generally, an arbitrary projective measurement can be expressed by $P(\markproj{\alpha},\markproj{\epsilon}):=\ket{\markproj{\alpha},\markproj{\epsilon}}\bra{\markproj{\alpha},\markproj{\epsilon}}$ with $\ket{\markproj{\alpha},\markproj{\epsilon}}=\markproj{\alpha} \ket{H}+\markproj{\beta} e^{i\markproj{\epsilon}}\ket{V}$ (all parameters real and $\markproj{\alpha}^2+\markproj{\beta}^2=1$). The projection $P(\markproj{\alpha},\markproj{\epsilon})$ applied on $\ket{D_4^{(2)}}$ leads to the three-qubit states \begin{equation} \label{eq-3qubitDicke} \propto\markproj{\alpha}\ket{D_3^{(2)}}+\markproj{\beta}e^{-i\markproj{\epsilon}}\ket{D_3^{(1)}}, \end{equation} which are arbitrary superpositions of the two entangled, symmetric three-qubit Dicke states (\tref{tab-3qubit}).
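Equation \eref{eq-3qubitDicke} can be checked numerically. The following Python sketch (illustrative only; all function names are ours) represents states as dictionaries from computational-basis bitstrings ($0\equiv H$, $1\equiv V$) to amplitudes, projects the last qubit of $\ket{D_4^{(2)}}$ onto $\ket{\markproj{\alpha},\markproj{\epsilon}}$, and confirms the resulting three-qubit amplitudes.

```python
from itertools import product
from math import sqrt, comb, cos, sin
import cmath

def dicke(n, m):
    """Symmetric n-qubit Dicke state with m excitations: bitstring -> amplitude."""
    amp = 1 / sqrt(comb(n, m))
    return {''.join(map(str, b)): amp
            for b in product((0, 1), repeat=n) if sum(b) == m}

def project_last(state, a, b, eps):
    """Project the last qubit onto a|0> + b e^{i eps}|1> and renormalise."""
    coef = {'0': a, '1': b * cmath.exp(1j * eps)}
    out = {}
    for s, amp in state.items():
        out[s[:-1]] = out.get(s[:-1], 0) + coef[s[-1]].conjugate() * amp
    norm = sqrt(sum(abs(v) ** 2 for v in out.values()))
    return {s: v / norm for s, v in out.items() if abs(v) > 1e-12}

theta, eps = 0.3, 0.7
out = project_last(dicke(4, 2), cos(theta), sin(theta), eps)
# expected result: cos(theta)|D_3^(2)> + e^{-i eps} sin(theta)|D_3^(1)>
for s in ('110', '101', '011'):     # |D_3^(2)> components
    assert abs(out[s] - cos(theta) / sqrt(3)) < 1e-12
for s in ('100', '010', '001'):     # |D_3^(1)> components
    assert abs(out[s] - sin(theta) * cmath.exp(-1j * eps) / sqrt(3)) < 1e-12
```

The same two helper functions suffice for all projection checks in this section.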
\begin{table} \caption{\label{tab-3qubit} Three-qubit states obtained by a single projective measurement on the state $\ket{D_4^{(2)}}$, cp.~\eref{eq-3qubitDicke}.} \begin{ruledtabular}
\begin{tabular}{l|l|l||c} $\markproj{\alpha}$ & $\markproj{\beta}$ & $\markproj{\epsilon}$ & state\\\hline\hline $\cos{\markproj{\theta}}$ & $\sin{\markproj{\theta}}$ & $\markproj{\epsilon}$ & $\cos{\markproj{\theta}}\ket{D_3^{(2)}}+e^{-i\markproj{\epsilon}}\sin{\markproj{\theta}}\ket{D_3^{(1)}}$\\ $1$ & $0$ & $-$ & $\ket{D_3^{(2)}}\equiv\ket{\overline{W}_3}$ \\ $0$ & $1$ & $-$ & $\ket{D_3^{(1)}}\equiv\ket{W_3}$ \\ $\frac{1}{\sqrt{2}}$ & $\frac{1}{\sqrt{2}}$ & $0$ & $(\ket{D_3^{(1)}}+\ket{D_3^{(2)}})/\sqrt{2}\equiv\ket{G_3^+}$ \\ $\frac{1}{\sqrt{2}}$ & $\frac{1}{\sqrt{2}}$ & $\pi$ & $(\ket{D_3^{(1)}}-\ket{D_3^{(2)}})/\sqrt{2}\equiv\ket{G_3^-}$ \\ \end{tabular} \end{ruledtabular} \end{table}
\begin{figure}\label{fig-tangle}
\end{figure}
To analyze the entanglement of the states we choose as a suitable entanglement measure the three-tangle $\tau_3$ \cite{Cof00}, which distinguishes the W and GHZ classes, since $\tau_3$ is non-zero only for GHZ-type entangled states \cite{Dur00}. The solid line in \fref{fig-tangle} shows $\tau_3$ for the states of \eref{eq-3qubitDicke} as a function of $\markproj{\theta}$ ($\markproj{\alpha}=\cos{\markproj{\theta}}$). It is found that the three-tangle is zero for $\markproj{\theta}=0,\pi/2$ ($\markproj{\alpha}=1,0$), which corresponds to a measurement in the computational basis. There, we obtain states from the W class, namely $\ket{\overline{W}_3}=1/\sqrt{3}(\ket{VVH}+\ket{VHV}+\ket{HVV})\equiv\ket{D_3^{(2)}}$ and $\ket{W_3}\equiv\ket{D_3^{(1)}}$, respectively. For all other values of $\markproj{\theta}$, $\tau_3\neq0$, implying that these states belong to the GHZ class. The maximal value $\tau_3=1/3$ is obtained for $\markproj{\theta}=\pi/4$, corresponding to a measurement in the $(\pm)$-basis, where $\ket{\pm}=1/\sqrt{2}(\ket{H}\pm\ket{V})$. For $\markproj{\theta}=\pi/4$ and $\markproj{\epsilon}=0,\pi$ one obtains the $G_3$ states \cite{Sen03a} with $\ket{G_3^+}=\frac{1}{\sqrt{2}}(\ket{\overline{W}_3}+\ket{W_3})$ and $\ket{G_3^-}=\frac{1}{\sqrt{2}}(\ket{\overline{W}_3}-\ket{W_3})$, respectively.
The $G_3$ states can be transformed directly into the $GHZ_3$ state, which has the maximal possible three-tangle $\tau_3=1$, via the stochastic local operations (local filtering) \begin{eqnarray*} \mathcal{T}_{+}&=&\ensuremath{\mathcal{H}}\left[\frac{1}{2}\left((\frac{1}{\sqrt{3}}+i)\cdot\openone +(\frac{1}{\sqrt{3}}-i)\cdot\sigma_z\right)\right]\ensuremath{\mathcal{H}},\\ \mathcal{T}_{-}&=&\ensuremath{\mathcal{H}}\left[\frac{1}{2}\left((\frac{1}{\sqrt{3}}+i)\cdot\sigma_x +i(\frac{1}{\sqrt{3}}-i)\cdot\sigma_y\right)\right]\ensuremath{\mathcal{H}}, \end{eqnarray*} where $\sigma_x$, $\sigma_y$ and $\sigma_z$ are the Pauli spin matrices and $\ensuremath{\mathcal{H}}$ is the Hadamard transformation, in the following way: \begin{eqnarray} \nonumber\left(\mathcal{T}_{+}\otimes\mathcal{T}_{+}\otimes\mathcal{T}_{+}\right)\ket{G_3^+}&=&\frac{1}{3}\ket{GHZ_3},\\ \label{eq-trafosGHZ}\left(\mathcal{T}_{-}\otimes\mathcal{T}_{-}\otimes\mathcal{T}_{-}\right)\ket{G_3^-}&=&\frac{1}{3}\ket{GHZ_3}. \end{eqnarray} Though these operations perform the desired transformation, they only do so with a success probability of $1/9$. \fref{fig-tangle} shows the three-tangle when the operation $\mathcal{T}_{+}\otimes\mathcal{T}_{+}\otimes\mathcal{T}_{+}$ is applied successfully to all states of \eref{eq-3qubitDicke}. For $\markproj{\theta}=\pi/4$ the three-tangle is indeed increased to its maximal value of $\tau_3=1$.
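The three-tangle values quoted above can be reproduced with a short numerical check. The sketch below is ours (not from the paper) and evaluates $\tau_3$ via the hyperdeterminant form $\tau_3=4|d_1-2d_2+4d_3|$ of Ref.~\cite{Cof00} for the states of \eref{eq-3qubitDicke} with $\markproj{\epsilon}=0$.

```python
from math import sqrt, cos, sin, pi, isclose

def three_tangle(a):
    """tau_3 = 4|Hdet| for a pure 3-qubit state given as bitstring -> amplitude."""
    g = lambda s: a.get(s, 0)
    d1 = (g('000')**2 * g('111')**2 + g('001')**2 * g('110')**2
          + g('010')**2 * g('101')**2 + g('100')**2 * g('011')**2)
    d2 = (g('000') * g('111') * (g('011') * g('100') + g('101') * g('010') + g('110') * g('001'))
          + g('011') * g('100') * (g('101') * g('010') + g('110') * g('001'))
          + g('101') * g('010') * g('110') * g('001'))
    d3 = g('000') * g('110') * g('101') * g('011') + g('111') * g('001') * g('010') * g('100')
    return 4 * abs(d1 - 2 * d2 + 4 * d3)

def projected_state(theta):
    """cos(theta)|D_3^(2)> + sin(theta)|D_3^(1)>, cp. the projected states above."""
    a = {s: cos(theta) / sqrt(3) for s in ('110', '101', '011')}
    a.update({s: sin(theta) / sqrt(3) for s in ('100', '010', '001')})
    return a

ghz3 = {'000': 1 / sqrt(2), '111': 1 / sqrt(2)}
assert isclose(three_tangle(ghz3), 1.0)                       # maximal three-tangle
assert three_tangle(projected_state(0.0)) < 1e-12             # W-class state
assert isclose(three_tangle(projected_state(pi / 4)), 1 / 3)  # G_3^+ state
```

The check confirms that the projected family interpolates between $\tau_3=0$ (W class) and the maximum $\tau_3=1/3$ reached at $\markproj{\theta}=\pi/4$, while the filtered $GHZ_3$ state attains $\tau_3=1$.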
\subsection{Projections of superpositions of five-qubit Dicke states\label{sec-TDt5}}
Extending the idea described before, SLOCC-inequivalent four-partite entangled states can be obtained from suitable five-qubit symmetric states via projective measurements. Here we consider an arbitrary superposition of the two symmetric five-qubit Dicke states $\ket{D_5^{(2)}}$ and $\ket{D_5^{(3)}}$: \begin{equation} \label{eq-5photres} \ket{\Delta_5}=\alpha \ket{D_5^{(2)}}+\beta e^{i\epsilon}\ket{D_5^{(3)}}. \end{equation} We note that these states can also be seen as a natural choice as they are obtained via a single projective measurement on the six-qubit Dicke state $\ket{D_6^{(3)}}$. The states $\ket{\Delta_5}$ belong to two different SLOCC classes. The first class occurs for $\alpha=0$ or $\beta=0$, as the states $\ket{D_5^{(2)}}$ and $\ket{D_5^{(3)}}$ can be transformed into each other by spin-flipping all qubits. The second class is given for $\alpha\neq0$ and $\beta\neq0$. The weighting and phase between the terms of $\ket{\Delta_5}$ can be changed easily via the SLOCC-operations $\mathcal{T}_r^{\otimes 5}=\mathcal{T}_r\otimes\mathcal{T}_r\otimes\mathcal{T}_r\otimes\mathcal{T}_r\otimes\mathcal{T}_r$ with $\mathcal{T}_r=[(1+1/r)\openone+(1-1/r)\sigma_z]/2$ and $r\neq0$ complex. To obtain a new ratio of parameters $\overline{\alpha}/(\overline{\beta} e^{i\overline{\epsilon}})$, $r$ needs to be chosen as $\beta\overline{\alpha}e^{i\epsilon}/(\overline{\beta}\alpha e^{i\overline{\epsilon}})$. Note that this reasoning could also be applied for the states of \eref{eq-3qubitDicke}.
A single projective measurement $P(\markproj{\alpha},\markproj{\epsilon})$ applied on $\ket{\Delta_5}$ yields the four-qubit entangled states \begin{eqnarray} \label{eq-4photext} \nonumber\ket{\Delta_4}&\propto&\alpha\markproj{\beta} e^{-i\markproj{\epsilon}}\ket{D_4^{(1)}}+\markproj{\alpha} \beta e^{i\epsilon} \ket{D_4^{(3)}}\\ &&+(\alpha\markproj{\alpha} + \beta\markproj{\beta} e^{i (\epsilon-\markproj{\epsilon})})\sqrt{6/4}\ket{D_4^{(2)}}. \end{eqnarray} These states are superpositions of all symmetric four-qubit entangled Dicke states. In particular, these superpositions contain the SLOCC-inequivalent family of states $\mu(\alpha,\markproj{\alpha},\epsilon,\markproj{\epsilon})\ket{GHZ_4}+\nu(\alpha,\markproj{\alpha},\epsilon,\markproj{\epsilon})\ket{D_4^{(2)}}$ (for details see \cite{StatsTrafo}), which, according to the SLOCC classification by Verstraete \emph{et al.}~\cite{Ver02}, forms a subset of the generic family $G_{abcd}$ of four-qubit entangled states. In the following we discuss prominent SLOCC-inequivalent states of the family \eref{eq-4photext} (see also \tref{tab-4qubit}).
Remarkably, we obtain a four-qubit $GHZ_4$ state. This can be easily seen when we consider the state $\ket{GHZ_4^-}=1/\sqrt{2}(\ket{HHHH}-\ket{VVVV})$ under a Hadamard transformation $\mathcal{H}$ acting on each qubit: \begin{eqnarray*} \ket{GHZ_4^+}&\equiv&(\mathcal{H}\otimes\mathcal{H}\otimes\mathcal{H}\otimes\mathcal{H})\ket{GHZ_4^-}\\ &=&\frac{1}{\sqrt{2}}(\ket{D_4^{(1)}}+\ket{D_4^{(3)}}). \end{eqnarray*} We get $\ket{GHZ_4^+}$ when the amplitude of $\ket{D_4^{(2)}}$ is zero and the two remaining terms in \eref{eq-4photext} are equally balanced, i.e.~the conditions (i) $\alpha\markproj{\alpha} =-\beta\markproj{\beta} e^{i (\epsilon-\markproj{\epsilon})}$ and (ii) $\alpha\markproj{\beta} e^{-i\markproj{\epsilon}}=\markproj{\alpha} \beta e^{i\epsilon}$ are fulfilled. This holds only for $\alpha=\beta=\markproj{\alpha}=\markproj{\beta}=1/\sqrt{2}$ and $\epsilon=-\markproj{\epsilon}=\pi/2$ or $-\epsilon=\markproj{\epsilon}=\pi/2$. When we impose only condition (i), i.e.~the amplitude of $\ket{D_4^{(2)}}$ is zero [which holds for $\alpha=\sqrt{1-\markproj{\alpha}^2}$ and $\Delta\epsilon=\epsilon-\markproj{\epsilon}=(2n+1)\pi$ for $n\in\{0,1,2,...\}$], a continuous transition between the states $\ket{W_4}$ ($\alpha=1$), $\ket{GHZ_4^+}$ ($\alpha=\sqrt{1/2}$ and $\Delta\epsilon=\pi$) and $\ket{\overline{W}_4}$ ($\alpha=0$) can be achieved.
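The parameter choice yielding $\ket{GHZ_4^+}$ can be verified numerically. The sketch below (illustrative only; the helper functions are our own) prepares $\ket{\Delta_5}$ with $\alpha=\beta=1/\sqrt{2}$, $\epsilon=\pi/2$, projects one qubit with $\markproj{\alpha}=\markproj{\beta}=1/\sqrt{2}$, $\markproj{\epsilon}=-\pi/2$, and checks that the result is $(\ket{D_4^{(1)}}+\ket{D_4^{(3)}})/\sqrt{2}=\ket{GHZ_4^+}$ up to a global phase.

```python
from itertools import product
from math import sqrt, comb, pi
import cmath

def dicke(n, m):
    """Symmetric n-qubit Dicke state with m excitations: bitstring -> amplitude."""
    amp = 1 / sqrt(comb(n, m))
    return {''.join(map(str, b)): amp
            for b in product((0, 1), repeat=n) if sum(b) == m}

def project_last(state, a, b, eps):
    """Project the last qubit onto a|0> + b e^{i eps}|1> and renormalise."""
    coef = {'0': a, '1': b * cmath.exp(1j * eps)}
    out = {}
    for s, amp in state.items():
        out[s[:-1]] = out.get(s[:-1], 0) + coef[s[-1]].conjugate() * amp
    norm = sqrt(sum(abs(v) ** 2 for v in out.values()))
    return {s: v / norm for s, v in out.items()}

# |Delta_5> with alpha = beta = 1/sqrt(2), epsilon = pi/2
delta5 = {s: a / sqrt(2) for s, a in dicke(5, 2).items()}
delta5.update({s: cmath.exp(1j * pi / 2) * a / sqrt(2)
               for s, a in dicke(5, 3).items()})

out = project_last(delta5, 1 / sqrt(2), 1 / sqrt(2), -pi / 2)
for s, amp in out.items():
    if s.count('1') in (1, 3):                    # |D_4^(1)>, |D_4^(3)> components
        assert abs(abs(amp) - 1 / (2 * sqrt(2))) < 1e-12
    else:                                         # |D_4^(2)> amplitude cancels
        assert abs(amp) < 1e-12
# all surviving amplitudes are equal, i.e. |GHZ_4^+> up to a global phase
ref = next(a for a in out.values() if abs(a) > 1e-9)
assert all(abs(a - ref) < 1e-9 for a in out.values() if abs(a) > 1e-9)
```

The cancellation of the $\ket{D_4^{(2)}}$ amplitude corresponds exactly to condition (i), and the equality of the remaining amplitudes to condition (ii).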
\begin{table} \caption{\label{tab-4qubit} Four-qubit entangled states obtained from the states $\ket{\Delta_5}$ by a single projective measurement, cp.~\eref{eq-4photext}. In analogy to \eref{eq-5photres} the states $\cos{^2\theta}\ket{D_4^{(1)}}-\sin{^2\theta}e^{2i\epsilon}\ket{D_4^{(3)}}$ belong to two SLOCC classes given by (i) $\theta=0$ or $\theta=\pi/2$ and (ii) $\theta\in(0,\pi/2)$ with $\epsilon$ arbitrary.} \begin{ruledtabular}
\begin{tabular}{l|l|l||l|l|l||c} $\alpha$ & $\beta$ & $\epsilon$ & $\markproj{\alpha}$ & $\markproj{\beta}$ & $\markproj{\epsilon}$ & state\\\hline\hline $1$ & $0$ & $-$ & $1$ & $0$ & $-$ & $\ket{D_4^{(2)}}$ \\ $0$ & $1$ & $-$ & $0$ & $1$ & $-$ & $\ket{D_4^{(2)}}$ \\ $1$ & $0$ & $-$ & $0$ & $1$ & $-$ & $\ket{W_4}\equiv\ket{D_4^{(1)}}$ \\ $0$ & $1$ & $-$ & $1$ & $0$ & $-$ & $\ket{\overline{W}_4}\equiv\ket{D_4^{(3)}}$ \\ $\frac{1}{\sqrt{2}}$ & $\frac{1}{\sqrt{2}}$ & $\frac{\pi}{2}$ & $\frac{1}{\sqrt{2}}$ & $\frac{1}{\sqrt{2}}$ & $-\frac{\pi}{2}$ & $\ket{GHZ_4^+}$ \\ &&&&&& $\cos{^2\theta}\ket{D_4^{(1)}}$\\ \rbox{$\cos{\theta}$} & \rbox{$\sin{\theta}$} & \rbox{$\epsilon$} & \rbox{$\sin{\theta}$} & \rbox{$\cos{\theta}$} & \rbox{$\epsilon-\pi$} & $-\sin{^2\theta}e^{2i\epsilon}\ket{D_4^{(3)}}$ \\ \end{tabular} \end{ruledtabular} \end{table}
Further, three-qubit states are obtained by performing a projective measurement $P(\markproj{\markproj{\alpha}},\markproj{\markproj{\epsilon}})$ on $\ket{\Delta_4}$: \begin{eqnarray} \label{eq-threequbitDelta5} \nonumber\ket{\Delta_3}&\propto&\alpha\markproj{\beta}\markproj{\markproj{\beta}} e^{-i(\markproj{\epsilon}+\markproj{\markproj{\epsilon}})}\ket{D_3^{(0)}} +\markproj{\alpha} \beta \markproj{\markproj{\alpha}} e^{i\epsilon} \ket{D_3^{(3)}}\\ \nonumber&&+[(\alpha\markproj{\beta}\markproj{\markproj{\alpha}}e^{-i\markproj{\epsilon}})\\ \nonumber&&\hspace{2mm}+\sqrt{\frac{6}{4}}(\alpha\markproj{\alpha} + \beta\markproj{\beta} e^{i(\epsilon-\markproj{\epsilon})})\markproj{\markproj{\beta}}e^{-i\markproj{\markproj{\epsilon}}}] \sqrt{3}\ket{D_3^{(1)}}\\ \nonumber&&+[(\markproj{\alpha}\beta\markproj{\markproj{\beta}}e^{i(\epsilon-\markproj{\markproj{\epsilon}})})\\ &&\hspace{2mm}+\sqrt{\frac{6}{4}}(\alpha\markproj{\alpha} + \beta\markproj{\beta} e^{i(\epsilon-\markproj{\epsilon})})\markproj{\markproj{\alpha}}] \sqrt{3}\ket{D_3^{(2)}}. \end{eqnarray} These are {\textit{all}} permutation symmetric three-qubit states \cite{Bas08tbd}. In particular, we note that a $GHZ_3$ state can be obtained directly without the need for local operations. To show this, we consider $\ket{GHZ_3^-}=1/\sqrt{2}(\ket{HHH}-\ket{VVV})$ under a Hadamard transformation $\mathcal{H}$ acting on each qubit: \begin{eqnarray*} \ket{GHZ_3^+}&\equiv&(\mathcal{H}\otimes\mathcal{H}\otimes\mathcal{H})\ket{GHZ_3^-}\\ &=&\sqrt{\frac{3}{4}}\ket{D_3^{(1)}}+\sqrt{\frac{1}{4}}\ket{D_3^{(3)}}. \end{eqnarray*} The state $\ket{GHZ_3^+}$ is obtained for $\alpha=\beta=\markproj{\alpha}=\markproj{\beta}=1/\sqrt{2}$, $\markproj{\markproj{\alpha}}=1$ and $\epsilon=-\markproj{\epsilon}=\pi/2$.
\section{\label{sec-2}Experimental implementation}
The states $\ket{D_4^{(2)}}$ and $\ket{\Delta_5}$ are permutation symmetric. Hence, their experimental implementation can be achieved via a symmetric distribution of photons. For the observation of the state $\ket{D_4^{(2)}}$ the necessary four photons originate from the second order emission of a collinear, type II spontaneous parametric down conversion (SPDC). For observing the states $\ket{\Delta_5}$ we will consider different experimental implementations, which are extensions of the $\ket{D_4^{(2)}}$ set-up.
\subsection{\label{sec-2a}The Dicke state $\ket{D_4^{(2)}}$}
\begin{figure}\label{fig-setup}
\end{figure}
\fref{fig-setup} shows one of the possible set-ups for the states $\ket{\Delta_5}$. As can be seen, the set-up for observing the state $\ket{D_4^{(2)}}$ is at the core of it. The state $\ket{D_4^{(2)}}$ was observed after a symmetric distribution of two horizontally and two vertically polarized photons, initially in a single spatial mode $s$, onto four spatial modes ($a,b,c,d$) via three polarization-independent beam splitters (BS). The photons originate from a $\beta$-Barium borate (BBO) crystal in a type II, collinear SPDC process, which emits the state \cite{Kok00,Wei01} \begin{equation} \label{eq-SPDC}
\ket{\Psi_{\mathrm{dc}}}=\sqrt{1-|z_{\mathrm{dc}}|^2}\sum_{n=0}^\infty\frac{(iz_{\mathrm{dc}})^n}{n!}({s}_H^\dagger {s}_V^\dagger)^n\ket{\mathrm{vac}}, \end{equation}
where ${s_i}^\dagger$ is the creation operator for a photon in mode $s$ with polarization $i\in\{H,V\}$, $\ket{\rm{vac}}$ is the vacuum state, $z_{\mathrm{dc}}=|z_{\mathrm{dc}}|e^{i2\phi_{\mathrm{dc}}}$ with $|z_{\mathrm{dc}}|=\tanh{\tau}$, and $\tau$ depends on the pump amplitude and on the coupling between the electromagnetic field and the crystal ($\tau\ll 1$). The probability of creating a single pair is ${(1-|z_{\mathrm{dc}}|^2)}|z_{\mathrm{dc}}|^2$. Here, we are interested in the second order emission $\propto ({s}_H^\dagger {s}_V^\dagger)^2\ket{\rm{vac}}$.
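The photon-pair statistics implied by \eref{eq-SPDC} are geometric: normalising $({s}_H^\dagger {s}_V^\dagger)^n\ket{\mathrm{vac}}$ contributes a factor $n!$, so the $n$-pair probability is $(1-|z_{\mathrm{dc}}|^2)|z_{\mathrm{dc}}|^{2n}$. A short numerical sketch (illustration only; the value of $\tau$ is an assumption, not taken from the text):

```python
import math

def pair_probability(z_abs, n):
    """n-pair emission probability of the SPDC state: (1 - |z|^2) * |z|^(2n)."""
    return (1.0 - z_abs**2) * z_abs**(2 * n)

z_dc = math.tanh(0.1)   # |z_dc| = tanh(tau), with tau = 0.1 assumed for illustration
# the distribution is normalised, and each higher emission order used here is
# suppressed by a further factor |z_dc|^2 relative to the previous one
```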
The BBO crystal was pumped by a frequency-doubled, femtosecond, mode-locked Ti:Sapphire laser. The spatial mode $s$ of the photons is defined by coupling into a single mode (SM) fiber. The photons pass an interference filter (IF), reducing their spectral distinguishability. The polarization state of each photon is analyzed via a polarizing beam splitter (PBS) preceded by a half-wave plate (HWP) and a quarter-wave plate (QWP). Finally, the photons are detected by fiber-coupled single photon detectors. The experimental state was observed under the condition of detecting a photon in each of the four spatial modes $(a,b,c,d)$.
We found a fidelity of $F_{\mathrm{exp}}=\bra{D_4^{(2)}}\rho_{\mathrm{exp}}\ket{D_4^{(2)}}=0.844\pm0.008$ to the state $\ket{D_4^{(2)}}$. Using a generic entanglement witness $\mathcal{W}$ \cite{Bou04}, where we use the shorthand notation $\mathcal{W}(\Psi,\alpha)=\alpha\openone-\ketbra{\Psi}$ with $\alpha=\frac{2}{3}$ and $\ket{\Psi}=\ket{D_4^{(2)}}$, genuine four-partite entanglement of $\rho_{\mathrm{exp}}$ was verified: $\mathrm{Tr}[\mathcal{W}(D_4^{(2)},\frac{2}{3})\rho_{\mathrm{exp}}]=\frac{2}{3}-F_{\mathrm{exp}}=-0.177\pm0.008$. A value $<0$ is sufficient to prove genuine four-partite entanglement \cite{Bou04}. Further, by using the state-discrimination method described in \cite{Sch08DS} we were able to exclude W- and Cluster-type entanglement for the experimentally observed state.
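By linearity of the trace, the witness expectation follows from the fidelity alone, $\mathrm{Tr}[\mathcal{W}(\Psi,\alpha)\rho]=\alpha-\bra{\Psi}\rho\ket{\Psi}$; the quoted value can be reproduced directly (numbers from the text):

```python
alpha = 2.0 / 3.0          # witness offset for |D_4^(2)>
F_exp = 0.844              # measured fidelity
witness_value = alpha - F_exp
# witness_value is approximately -0.177 < 0, certifying genuine four-partite entanglement
```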
\begin{figure}
\caption{ (Color online) Experimental density matrices for the measured $W_3$ (a) and $\overline{W}_3$ (b) states. The density matrices for the $GHZ_3$ states of (c) and (d) are calculated from the measured $G_3^+$ and $G_3^-$ states. Displayed is the real part of the corresponding density matrix.}
\label{fig-states3}
\end{figure}
To demonstrate that we can experimentally access states from both inequivalent tri-partite entanglement classes, we performed a full state tomography to reconstruct the density matrices of the respective states. A projective measurement of the photon in mode $d$ in the computational basis yields the $W_3$ states characterized by the density matrices shown in \fref{fig-states3}(a) and (b). We calculated fidelities of $0.882\pm0.015$ and $0.835\pm0.015$ to the theoretical states $\ket{W_3}$ and $\ket{\overline{W}_3}$, respectively. Their genuine tri-partite entanglement is verified via the entanglement witnesses $\mathcal{W}(W_3,\frac{2}{3})$ and $\mathcal{W}(\overline{W}_3,\frac{2}{3})$ \cite{Bou04}, for which we determined values of $-0.215\pm0.015$ and $-0.168\pm0.015$, respectively.
A measurement in the $(\pm)$-basis yields $G_3$ states, which belong to the GHZ class. If we apply the corresponding transformations (see \eref{eq-trafosGHZ}) on the measured density matrices we indeed obtain $GHZ_3$ states, see \fref{fig-states3}(c) and (d). We determined fidelities of $0.719\pm0.022$ and $0.733\pm0.024$ to a $GHZ_3$ state, respectively. An entanglement witness detecting genuine tri-partite entanglement of these states is $\mathcal{W}(GHZ_3,\frac{1}{2})$ \cite{Bou04}. We find the negative values of $-0.219\pm0.022$ and $-0.233\pm0.024$, respectively. A witness that further excludes W type entanglement is given by $\mathcal{W}(GHZ_3,\frac{3}{4})$ \cite{Aci01}. The transformed $GHZ_3$ states do not fulfill the witness's entanglement condition. However, by applying local filtering operations on this witness \cite{Haf05} we obtain values of $-0.033\pm0.026$ and $-0.029\pm0.023$, respectively, which finally proves GHZ-type entanglement with a significance of one standard deviation.
\subsection{Towards $\ket{\Delta_5}$}
\subsubsection{Implementation}
For observing the states $\ket{\Delta_5}$, different implementations are possible. One possibility is the application of a projective measurement to the state $\ket{D_6^{(3)}}$ (see \sref{sec-TDt5}), where the necessary six photons originate from the third order SPDC emission. However, when implementing the state $\ket{\Delta_5}$ directly, only five photons are necessary and, thus, a higher count rate should be possible. These five photons can be obtained by superimposing the four photons from the second order SPDC emission with an additional photon. The polarization of the additional photon determines the parameters $\alpha$, $\beta$ and $\epsilon$ in \eref{eq-5photres}. In the ideal case, the additional photon is obtained from a single photon source (see e.g.~\cite{Oxb05,LoS05}) that acts on demand and matches the SPDC photons spectrally, temporally and spatially. However, to our knowledge, no such source exists. Alternatively, a heralded SPDC source \cite{Roh05} can be employed, which in practice results in low count rates, since again six photons have to be detected in total. Instead, we investigate whether the single photon source can be substituted by a weak coherent beam (WCB) \cite{Rar99}, i.e., whether this simplification influences the state quality.
This implementation is based on the set-up used for observing the state $\ket{D_4^{(2)}}$ described in \sref{sec-2a} (\fref{fig-setup}). The WCB can be derived via a beam splitter [and additional attenuation via optical density (OD) filters] from the Ti:Sapphire laser, which, after a frequency doubling stage, also pumps the BBO crystal for the SPDC process. The polarization of the WCB can be set arbitrarily via a polarizer, followed by a HWP and QWP. These settings determine the parameters $\alpha$, $\beta$ and $\epsilon$ in \eref{eq-5photres}. A delay line in the WCB allows one to adjust the temporal overlap with the SPDC emission. The photons of both sources are overlapped collinearly in the BBO crystal and coupled into the same single mode fiber. They are symmetrically distributed onto the modes $(a,b,c,d,e)$ via a beam splitter with 4:1 splitting ratio and further splitting as described in \sref{sec-2a}.
\subsubsection{Weak coherent beam: effects}
The state of the WCB that substitutes for the single photon source is \cite{Man95} \begin{equation} \label{eq-wcb}
\ket{\Psi_{\mathrm{w}}}=e^{-{|z_{\mathrm{w}}|}^2/2}\cdot\sum_{n=0}^\infty\frac{{(z_{\mathrm{w}})}^n}{n!}(w_j^\dagger)^n\ket{\mathrm{vac}}, \end{equation}
where $w_j^\dagger$ is the creation operator of a photon with polarization $j$ in mode $w$, $z_{\mathrm{w}}=|z_{\mathrm{w}}|e^{i\phi_{\mathrm{w}}}$, $|z_{\mathrm{w}}|^2$ is the mean photon number and $|z_{\mathrm{w}}|^2e^{-{|z_{\mathrm{w}}|}^2}$ is the probability for the single photon state $\ket{1}$. The photons of the WCB and the SPDC originate from the same laser, i.e., the Ti:Sapphire laser, but travel different paths before they are coupled into the same single mode fiber. As only their relative phase is relevant, we set $\phi_{\mathrm{dc}}=0$ for the following considerations, without loss of generality. In the experiment, the relative phase fluctuates, as the relative delay of the WCB and SPDC photons is not actively stabilized. Further, compared to a real single photon source, the WCB exhibits higher order terms resulting in multiple photons per pulse. We note that the phase dependence and the higher order terms have a much smaller influence if a heralded source is employed, of course with the disadvantage of requiring, effectively, a six-photon down conversion experiment.
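The photon-number statistics of the WCB in \eref{eq-wcb} are Poissonian with mean $|z_{\mathrm{w}}|^2$; in particular the single photon probability quoted above is $P(1)=|z_{\mathrm{w}}|^2e^{-|z_{\mathrm{w}}|^2}$. A minimal numerical sketch (the value of $|z_{\mathrm{w}}|$ is illustrative only):

```python
import math

def wcb_photon_probability(z_abs, n):
    """Poisson photon-number distribution of the coherent state:
    P(n) = exp(-|z|^2) * |z|^(2n) / n!, with mean photon number |z|^2."""
    return math.exp(-z_abs**2) * z_abs**(2 * n) / math.factorial(n)

z_w = 0.3   # assumed weak amplitude for illustration
# P(2)/P(1) = |z_w|^2 / 2, so multi-photon terms are suppressed for small |z_w|
```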
We will now demonstrate effects caused by using the WCB. First, we observe quantum interference, which occurs when there are at least two indistinguishable possibilities that lead to the same detection event. In our case, this becomes observable already when we consider only two photons. There, the following two possibilities exist: the two photons originate either from the SPDC emission \emph{or} from the WCB. To show the interference, let us assume a left circularly polarized WCB, whose two-photon term is \begin{eqnarray} \label{eq-wcbtwo} &\propto& e^{2i\phi_{\mathrm{w}}}\left(w_L^{\dagger}\right)^2=e^{2i\phi_{\mathrm{w}}}\left(w_H^{\dagger}-iw_V^{\dagger}\right)^2/2\nonumber\\ &=&e^{2i\phi_{\mathrm{w}}}\left[\left(w_H^{\dagger}\right)^2-\left(w_V^{\dagger}\right)^2-2i\left(w_H^{\dagger}w_V^{\dagger}\right)\right]/2. \end{eqnarray} For a coherent overlap, i.e., $w_j^{\dagger}\rightarrow {s}_j^{\dagger}$, the last term of \eref{eq-wcbtwo} is identical with the first order SPDC emission ($\propto {s}_H^{\dagger}{s}_V^{\dagger}$). Hence, for the two-fold coincidence detection event $HV$, both possibilities contribute and interfere in dependence on $\phi_{\mathrm{w}}$. This is shown in \fref{fig-inter}(a). When we change the path difference between the photons of both sources, we observe an oscillation in the coincidence count rate on the order of the wavelength ($<1\,\mu$m), which is due to the change of $\phi_{\mathrm{w}}$. The exact modulation is unresolved, as $\phi_{\mathrm{w}}$ was not actively stabilized. The width of the envelope of this interference is on the order of the coherence length of the photons ($\approx100\,\mu$m). It indicates the spatial region in which the mode overlap is different from zero.
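Since the creation operators $w_H^{\dagger}$ and $w_V^{\dagger}$ commute with each other, the expansion in \eref{eq-wcbtwo} is an ordinary polynomial identity and can be spot-checked numerically (a sketch added here, not part of the original):

```python
# Check (wH - i*wV)^2 / 2 == (wH^2 - wV^2 - 2i*wH*wV) / 2 at sample points;
# commuting variables make this a plain polynomial identity in two variables.
samples = [(1 + 2j, 0.5 - 1j), (0.3j, 2.0), (-1.5 + 0.25j, 1 + 1j)]
for wH, wV in samples:
    lhs = (wH - 1j * wV) ** 2 / 2
    rhs = (wH ** 2 - wV ** 2 - 2j * wH * wV) / 2
    assert abs(lhs - rhs) < 1e-12
```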
\begin{figure}
\caption{ Coherence between the weak coherent beam and the SPDC photons. (a) Interference of the two possibilities of obtaining an $HV$ coincidence. (b) Enhanced emission (cloning) due to the bosonic nature of photons. The solid line shows a Gaussian fit to the data points, giving an enhancement of $1.52\pm0.03$.}
\label{fig-inter}
\end{figure}
Furthermore, we can observe bosonic enhancement (cloning \cite{Lam02,Sci05,Mar05}), i.e., stimulation of the SPDC emission, which appears independently of the employed single photon source and enhances the total count rate. This enhancement is visible for, e.g., a horizontally polarized WCB as input and registration of a three-fold coincidence $HHV$, \fref{fig-inter}(b). The single photon term of the WCB ($\propto w_H^{\dagger}$) and the first order emission of the SPDC ($\propto {s}_H^{\dagger}{s}_V^{\dagger}$), when overlapped incoherently, lead to $\propto w_H^{\dagger}{s}_H^{\dagger}{s}_V^{\dagger}\ket{\rm{vac}}=\ket{H}_w\ket{HV}_{s}$. In contrast, a coherent superposition yields $\propto ({s}_H^{\dagger})^2{s}_V^{\dagger}\ket{\rm{vac}}=\sqrt{2}\ket{HHV}_{s}$ and hence an increase by a factor of two in the count rate due to the bosonic enhancement \cite{Sun07}. This effect occurs on the order of the coherence length of the photons ($\approx100\,\mu$m). In the experiment we observe an enhancement of $1.52\pm0.03$. We attribute the deviation from the expected value of two to higher order emissions of the WCB and the SPDC, which add an offset to the three photon count rate.
\subsubsection{Fidelity of the states $\ket{GHZ_4^+}$ and $\ket{W_4}$}
In the following we give a quantitative estimate of the influence on the quality of the desired four-photon states when a WCB is used instead of a single photon source. To this end, two effects that lead to the observation of imperfect states have to be considered. Firstly, the coherent superposition of different emission orders leads, in dependence on $\phi_{\mathrm{w}}$, to the observation of a different {\emph{pure}} state. Secondly, higher order emissions cause an {\emph{admixture}} of correlated noise. The first effect can be analyzed by considering all terms from \eref{eq-SPDC} and \eref{eq-wcb} that contribute directly to five photons (yielding the state on which a projective measurement is applied): \begin{eqnarray} \label{eq-5phot}
\propto&-&|z_{\mathrm{dc}}|^2|z_{\mathrm{w}}|e^{i\phi_{\mathrm{w}}}w_j^\dagger ({s}_H^\dagger {s}_V^\dagger)^2/2\nonumber\\
&+&i|z_{\mathrm{dc}}||z_{\mathrm{w}}|^3e^{i3\phi_{\mathrm{w}}}{w_j^\dagger}^3({s}_H^\dagger {s}_V^\dagger)/6\nonumber\\
&+&|z_{\mathrm{w}}|^5e^{i5\phi_{\mathrm{w}}}{w_j^\dagger}^5/120. \end{eqnarray} Only the first term is necessary to observe the state $\ket{\Delta_5}$. The other terms significantly modify the desired state.
As an example, we calculate the fidelity to the ideal $GHZ_4^+$ and $W_4$ states when the photons from \eref{eq-5phot} are symmetrically distributed onto five spatial modes and the respective projective measurement is performed. We obtain \begin{eqnarray*}
F_{W_4}&=&1/\big[1+|z_{\mathrm{w}}|^4/(9|z_{\mathrm{dc}}|^2)\big],\\
F_{GHZ_4^+}&=&1-\frac{1}{2+36\frac{|z_{\mathrm{dc}}|^2}{|z_{\mathrm{w}}|^4}-12\frac{|z_{\mathrm{dc}}|}{|z_{\mathrm{w}}|^2}\cos{(2\phi_{\mathrm{w}})}}, \end{eqnarray*}
see \fref{fig-simfid}(a) and (b). Both fidelities are better than $0.99$ for $|z_{\mathrm{w}}|<0.2$, whereas for higher $|z_{\mathrm{w}}|$ the fidelity decreases rapidly. This is the case because, with increasing $|z_{\mathrm{w}}|$, the second term of \eref{eq-5phot} grows relative to the first term and, thus, spoils the state quality. Obviously, the relative phase $\phi_{\mathrm{w}}$ is relevant only for $F_{GHZ_4^+}$. There, the highest fidelity values are found for $\phi_{\mathrm{w}}=\pi/2$.
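The two fidelity expressions above are easy to explore numerically. The sketch below assumes a representative $|z_{\mathrm{dc}}|=0.1$ (the text does not fix this value) and only illustrates the qualitative behaviour: both fidelities fall with growing $|z_{\mathrm{w}}|$, and $F_{GHZ_4^+}$ peaks at $\phi_{\mathrm{w}}=\pi/2$:

```python
import math

def fidelity_W4(z_w, z_dc):
    """F_{W4} = 1 / (1 + |z_w|^4 / (9 |z_dc|^2)); phase independent."""
    return 1.0 / (1.0 + z_w**4 / (9.0 * z_dc**2))

def fidelity_GHZ4(z_w, z_dc, phi_w):
    """F_{GHZ4+} = 1 - 1/(2 + 36 r^2 - 12 r cos(2 phi_w)), r = |z_dc| / |z_w|^2."""
    r = z_dc / z_w**2
    return 1.0 - 1.0 / (2.0 + 36.0 * r**2 - 12.0 * r * math.cos(2.0 * phi_w))
```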
\begin{figure}\label{fig-simfid}
\end{figure}
A second effect causes the admixture of correlated noise, which reduces the fidelity. This admixture is produced by the detection of additional five-fold coincidences that originate from six or more photons (higher order emissions from both the SPDC and the WCB), where multiple photons are registered by the same detector or some photons are not registered at all. As this leads to additional noise, the quality of the observed states depends on the photon detection efficiency. For our set-up we determined an efficiency for the photon coupling to the single mode fiber of $\eta_\mathrm{c}\approx\frac{1}{3}$ and a detection efficiency of $\eta_\mathrm{d}\approx\frac{1}{3}$. To calculate the fidelity in that case, these loss channels are accounted for by additional beam splitters with ancillary output modes \cite{Kok00}, where reflected photons are lost and transmitted photons (with probability $\eta_i$) correspond to detectable photons. We consider for this calculation all terms with five photons (see \eref{eq-5phot}) and the next higher order contribution from six photons, which are obtained from the multiplication of \eref{eq-SPDC} with \eref{eq-wcb}. The numerical results are shown in \fref{fig-simfid}(c) and (d). The fidelity of the state $\ket{W_4}$ reaches its maximum of $0.776$ for $|z_{\mathrm{w}}|=0.39$, independent of $\phi_{\mathrm{w}}$. For larger $|z_{\mathrm{w}}|$ the fidelity decreases due to the increase of the multiple photon terms of the WCB. For lower $|z_{\mathrm{w}}|$ the fidelity decreases as the contribution from the third order SPDC emission constitutes the major source of noise. The fidelity of the state $\ket{GHZ_4^+}$ reaches its maximum of $0.701$ for $|z_{\mathrm{w}}|=0.6$. Again, it is phase dependent, with maximal values for $\phi_{\mathrm{w}}=\pi/2$. The dependence on $|z_{\mathrm{w}}|$ follows the same arguments as given for the $W_4$ state.
The calculations show that the fidelity of each state is still high enough to demonstrate, e.g., four-photon entanglement via an entanglement witness, as for the state $\ket{W_4}$ ($\ket{GHZ_4^+}$) a fidelity larger than 0.75 (0.5) is sufficient for this purpose \cite{Haf05} (\cite{Tot05pra}). However, in the considerations so far we neglected other experimental imperfections, such as spectral distinguishability of the photons. For pulsed type II SPDC it is known that the broad pump spectrum results in the generation of photons with partial spectral distinguishability \cite{Gri97,Kel97}, which leads to an additional reduction in state fidelity. For example, the state $\ket{D_4^{(2)}}$ described in \sref{sec-2a} was observed with a fidelity of $F_{\mathrm{exp}}=0.844\pm0.008$ \cite{Kie07}. This fidelity value can be partly ascribed to higher order contributions of the SPDC emission, which give a reduction in fidelity of about $9\%$ \cite{Kie07PHD}. The missing $7\%$ can be attributed to a remaining degree of distinguishability of the SPDC photons and to non-ideal optical components. It is reasonable to expect that at least the same additional reduction of the fidelity occurs in the proposed implementation. Then, the fidelity for the state $\ket{W_4}$ is below the threshold for proving four-partite entanglement directly.
For this reason we suggest using a (heralded) single photon source instead of a WCB in order to achieve higher fidelity values. On the one hand, the utilization of a single photon source avoids noise from higher order contributions of the WCB and therefore also the phase dependence of the state. On the other hand, even the noise from the SPDC alone would become negligible, as the heralding signal from the single photon source serves as a trigger for a valid detection event and the SPDC noise is thus suppressed. Alternatively, one could realize the state $\ket{D_6^{(3)}}$ with photons coming from the third order SPDC emission, however at the cost of reintroducing SPDC higher order noise. Both alternative implementations demand new and stronger photon sources, which are currently being developed.
\section{Conclusion}
We have demonstrated possibilities for the observation of SLOCC-inequivalent families of three- and four-qubit entangled states. They are based on the property of the states $\ket{D_4^{(2)}}$ and $\ket{\Delta_5}$ to allow access to different classes of quantum states via projective measurements on single qubits.
We experimentally demonstrated that indeed all types of three-qubit entangled states can be obtained from $\ket{D_4^{(2)}}$. We presented a scheme by which the states $\ket{\Delta_5}$ can be observed experimentally. As this requires the use of an additional photon, it still poses a considerable challenge when reasonable count rates are to be achieved. We demonstrated that the simplest approach, i.e., substituting the single photon with a weak coherent beam, leads to a drastic reduction of the state quality. Yet, we identified two alternative possibilities to realize the powerful scheme we presented, both of which seem achievable in the near future.
Altogether, our scheme is an alternative to other methods \cite{Wie08,Bas07qph} for the observation of many different multi-partite entangled states. We are optimistic that sources for the observation of the presented states will soon be available and that schemes relying on the same kind of approach will allow the observation of many other interesting quantum states in the future.
\begin{acknowledgments}
We would like to thank Enrique Solano for stimulating discussions. We acknowledge the support of this work by the DFG-Cluster of Excellence MAP, the EU Project QAP and the DAAD exchange program. W.W.~acknowledges support by QCCC of the Elite Network of Bavaria and the Studienstiftung des dt.~Volkes.
\end{acknowledgments}
\copyright \, 2009 The American Physical Society
\end{document}
\begin{document}
\title{Mukai implies McKay: the McKay correspondence as an equivalence of derived categories}
\begin{abstract} Let $G$ be a finite group of automorphisms of a non\-singular complex threefold $M$ such that the canonical bundle $\omega_M$ is locally trivial as a $G$-sheaf. We prove that the Hilbert scheme $Y=\operatorname{\hbox{$G$}-Hilb}{M}$ parametrising $G$-clusters in $M$ is a crepant resolution of $X=M/G$ and that there is a derived equi\-valence (Fourier--Mukai transform) between coherent sheaves on $Y$ and coherent \hbox{$G$-sheaves} on $M$. This identifies the K~theory of $Y$ with the equi\-variant K~theory of $M$, and thus generalises the classical McKay correspondence. Some higher dimensional extensions are possible. \\ MSC2000: Primary 14E15, 14J30; Secondary 18E20, 18F20, 19L47. \end{abstract}
\section{Introduction}
The classical McKay correspondence relates representations of a finite subgroup $G\subset\operatorname{SL}(2,\mathbb C)$ to the cohomology of the well-known minimal resolution of the Kleinian singularity $\mathbb C^2/G$. Gonzalez-Sprinberg and Verdier \cite{GV} interpreted the McKay correspondence as an isomorphism on K~theory, observing that the representation ring of $G$ is equal to the $G$-equi\-variant K~theory of $\mathbb C^2$.
A natural generalisation is to replace $\mathbb C^2$ by a non\-singular quasi\-projective complex variety $M$ of dimension $n$ and $G$ by a finite group of automorphisms of $M$, with the property that the stabiliser subgroup of any point $x\in M$ acts on the tangent space $T_xM$ as a subgroup of $\operatorname{SL}(T_xM)$. Thus the canonical bundle $\omega_M$ is locally trivial as a $G$-sheaf, in the sense that every point of $M$ has a $G$-invariant open neighbourhood on which there is a nonvanishing $G$-invariant $n$-form. This implies that the quotient variety $X=M/G$ has only Gorenstein singularities.
The natural generalisation of the McKay correspondence should then be an isomorphism between the $G$-equi\-variant K~theory of $M$ and the ordinary K~theory of a crepant resolution $Y$ of $X$, that is, a resolution of singularities $\tau\colon Y\to X$ such that $\tau^*(\omega_X)=\omega_Y$. Crepant resolutions of Gorenstein quotient singularities are known to exist in dimension $n=3$, but only through a case by case analysis of the local linear actions by Ito, Markushevich and Roan (see Roan \cite{Roa} and references given there). In dimension $\ge4$, crepant resolutions exist only in rather special cases.
The point of view of this paper is that the derived category is the natural context for this formulation of the correspondence, and, more importantly, provides key tools for an appropriately general proof. Indeed, this point of view is not so revolutionary. Gonzalez-Sprinberg and Verdier were aware that their isomorphism on K~theory would lift to a derived equivalence and an explicit proof of this was given by Kapranov and Vasserot \cite{KV}. Moreover, the statement of the McKay correspondence in 3 dimensions in terms of K~theory and derived categories is contained in Reid \cite[Conjecture~4.1]{R}. One surprise, however, is that the methods of the derived category are powerful enough to prove the existence of a crepant resolution in 3 dimensions, without any case by case analysis.
A good candidate for a crepant resolution of $X$ is Nakamura's $G$-Hilbert scheme $\operatorname{\hbox{$G$}-Hilb}{M}$ parametrising $G$-clusters or `scheme theoretic $G$-orbits' on $M$: recall that a {\em cluster} $Z\subset M$ is a zero dimensional subscheme, and a {\em $G$-cluster} is a $G$-invariant cluster whose global sections $H^0(\mathcal O_Z)$ are isomorphic to the regular
representation $\mathbb C[G]$ of $G$. Clearly, a $G$-cluster has length $|G|$ and a free $G$-orbit is a $G$-cluster. There is a Hilbert--Chow morphism
\[
\tau\colon\operatorname{\hbox{$G$}-Hilb}{M}\longrightarrow X,
\] which, on closed points, sends a $G$-cluster to the orbit supporting it. Note that $\tau$~is a projective morphism, is onto and is birational on one component.
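As a small illustration (added here; not part of the original text), consider the simplest Kleinian case $M=\mathbb C^2$ with $G=\{\pm1\}$ acting by $(x,y)\mapsto(-x,-y)$. Every $G$-cluster supported at the origin has the form

```latex
\[
Z_{(a:b)}=V\bigl(ax+by,\;x^2,\;xy,\;y^2\bigr),\qquad (a:b)\in\mathbb P^1,
\]
```

and $H^0(\mathcal O_{Z_{(a:b)}})$ is spanned by $1$ (the trivial representation) together with the class of a linear form (the sign representation), so it is a copy of the regular representation $\mathbb C[G]$. These clusters sweep out the exceptional curve $\mathbb P^1$ of the minimal resolution of the $A_1$ singularity $\mathbb C^2/G$, and the Hilbert--Chow morphism $\tau$ contracts them all to the image of the origin.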
When $M=\mathbb C^3$ and $G\subset\operatorname{SL}(3,\mathbb C)$ is Abelian, Nakamura \cite{N} proved that $\operatorname{\hbox{$G$}-Hilb}{M}$ is irreducible and is a crepant resolution of $X$ (compare also Reid \cite{R} and Craw and Reid \cite{CR}). He conjectured that the same result holds for an arbitrary finite subgroup $G\subset\operatorname{SL}(3,\mathbb C)$. Ito and Nakajima \cite{IN} observed that the construction of Gonzalez-Sprinberg and Verdier \cite{GV} is the $M=\mathbb C^2$ case of a natural correspondence between the equi\-variant K~theory of $M$ and the ordinary K~theory of $\operatorname{\hbox{$G$}-Hilb}{M}$. They proved that this correspondence is an isomorphism when $M=\mathbb C^3$ and $G\subset\operatorname{SL}(3,\mathbb C)$ is Abelian by constructing an explicit resolution of the diagonal in Beilinson style. Our approach via Fourier--Mukai transforms leaves this resolution of the diagonal implicit (it appears as the object $\operatorname{\mathcal Q}$ of $\operatorname{D}(Y\times Y)$ in Section~\ref{Sec6!Proj_case}), and seems to give a more direct argument. Two of the main consequences of the results of this paper are that Nakamura's conjecture is true and that the natural correspondence on K~theory is an isomorphism for all finite subgroups of $\operatorname{SL}(3,\mathbb C)$.
As already indicated, the basic approach of the paper is to lift the McKay correspondence to the appropriate derived categories. We may then apply the techniques of Fourier--Mukai transforms, in particular the ideas of Bridgeland \cite{Br1} and \cite{Br2}, to show that it is an equivalence at this level. The more formal nature of the arguments means that they work equally well for arbitrary quasi\-projective varieties. In fact, they are somewhat simpler for projective varieties, and we therefore deal with this case first.
Since it is not known whether $\operatorname{\hbox{$G$}-Hilb}{M}$ is irreducible or even connected in general, we take as our initial candidate for a resolution $Y$ the {\em irreducible component} of $\operatorname{\hbox{$G$}-Hilb}{M}$ containing the free $G$-orbits, that is, the component mapping birationally to $X$. The aim is to show that $Y$ is a crepant resolution, and to construct an equivalence between the derived categories $\operatorname{D}(Y)$ of coherent sheaves on $Y$ and $\operatorname{D}^G(M)$ of coherent \hbox{$G$-sheaves} on $M$. A particular consequence of this equivalence is that $Y=\operatorname{\hbox{$G$}-Hilb}{M}$ when $M$ has dimension 3.
We now describe the correspondence and our results in more detail. Let $M$ be a non\-singular quasiprojective complex variety of dimension $n$ and let $G\subset\operatorname{Aut}(M)$ be a finite group of automorphisms of $M$ such that $\omega_M$ is locally trivial as a $G$-sheaf. Put $X=M/G$ and let $Y\subset\operatorname{\hbox{$G$}-Hilb}{M}$ be the irreducible component containing the free orbits, as described above. Write $\mathcal Z$ for the universal closed subscheme $\mathcal Z\subset Y\times M$ and $p$ and $q$ for its projections to $Y$ and $M$. There is a commutative diagram of schemes \begin{equation*}\label{maindiag}
\setlength{\unitlength}{36pt}
\begin{picture}(2.2,2.2)(0,0)
\put(1,0){\object{$X$}}
\put(0,1){\object{$Y$}}
\put(2,1){\object{$M$}}
\put(1,2){\object{$\mathcal Z$}}
\put(0.25,0.75){\vector(1,-1){0.5}}
\put(0.5,0.5){\nwlabel{$\tau$}}
\put(1.75,0.75){\vector(-1,-1){0.5}}
\put(1.5,0.5){\swlabel{$\pi$}}
\put(0.75,1.75){\vector(-1,-1){0.5}}
\put(0.5,1.5){\nelabel{$p$}}
\put(1.25,1.75){\vector(1,-1){0.5}}
\put(1.5,1.5){\selabel{$q$}}
\end{picture} \end{equation*} in which $q$ and $\tau$ are birational, $p$ and $\pi$ are finite, and $p$ is flat. Let $G$ act trivially on $Y$ and $X$, so that all morphisms in the diagram are equi\-variant.
Define the functor
\[
\Phi=\mathbf R q_*\circ p^*\colon\operatorname{D}(Y)\longrightarrow \operatorname{D}^G(M),
\] where a sheaf $E$ on $Y$ is viewed as a $G$-sheaf by giving it the trivial action. Note that $p^*$ is already exact, so we do not need to write $\mathbf L p^*$. Our main result is the following.
\goodbreak
\begin{thm} \label{second} Suppose that the fibre product
\[
Y\times_X Y= \Bigl\{(y_1, y_2)\in Y\times Y \Bigm|
\tau(y_1)=\tau(y_2)\Bigr\} \subset Y\times Y \] has dimension\/ $\le n+1$. Then\/ $Y$ is a crepant resolution of\/ $X$ and\/ $\Phi$ is an equivalence of categories. \end{thm}
When $n\le 3$ the condition of the theorem always holds because the exceptional locus of $Y\to X$ has dimension $\le2$. In this case we can also show that $\operatorname{\hbox{$G$}-Hilb}{M}$ is irreducible, so we obtain
\begin{thm} \label{IN} Suppose $n\le3$. Then $\operatorname{\hbox{$G$}-Hilb}{M}$ is irreducible and is a crepant resolution of $X$, and\/ $\Phi$ is an equivalence of categories. \end{thm}
The condition of Theorem~\ref{second} also holds whenever $G$ preserves a complex symplectic form on $M$ and $Y$ is a crepant resolution of $X$, because such a resolution is symplectic and hence semismall (see Verbitsky \cite{Vb}, Theorem~2.8 and compare Kaledin \cite{Ka}).
\begin{cor} \label{Kal} Suppose $M$ is a complex symplectic variety and $G$ acts by symplectic automorphisms. Assume that $Y$ is a crepant resolution of $X$. Then $\Phi$ is an equivalence of categories. \end{cor}
Note that the condition of Theorem~\ref{second} certainly fails in dimension $\ge4$ whenever $Y\to X$ has an exceptional divisor over a point. This is to be expected since there are many examples of finite subgroups $G\subset\operatorname{SL}(4,\mathbb C)$ for which the corresponding quotient singularity $\mathbb C^4/G$ has no crepant resolution.
\section{Category theory}
This section contains some basic category theory, most of which is well known. The only non\-trivial part is Section~\ref{trings} where we state a condition for an exact functor between triangulated categories to be an equivalence.
\subsection{Triangulated categories}
A triangulated category is an additive category $\mathcal A$ equipped with a {\em shift auto\-morphism} $T_{\mathcal A}\colon\mathcal A\to\mathcal A\colon a\mapsto a[1]$ and a collection of {\em distinguished triangles}
\[
a_1\lRa{f_1}a_2\lRa{f_2}a_3\lRa{f_3}a_1[1]
\] of morphisms of $\mathcal A$ satisfying certain axioms (see Verdier \cite{V}). We write $a[i]$ for $T_{\mathcal A}^i(a)$ and
\[
\operatorname{Hom}^i_{\mathcal A}(a_1,a_2)=\operatorname{Hom}_{\mathcal A}(a_1,a_2[i]).
\] A triangulated category $\mathcal A$ is {\em trivial}\/ if every object is a zero object.
The principal example of a triangulated category is the derived category $\operatorname{D}(A)$ of an Abelian category $A$. An object of $\operatorname{D}(A)$ is a bounded complex of objects of $A$ up to quasi-isomorphism, the shift functor moves a complex to the left by one place and a distinguished triangle is the mapping cone of a morphism of complexes. In this case, for objects $a_1,a_2\in A$, one has $\operatorname{Hom}^i_{\operatorname{D}(A)}(a_1,a_2)=\operatorname{Ext}^i_A(a_1,a_2)$.
A functor $F\colon\mathcal A\to\mathcal B$ between triangulated categories is {\em exact}\/ if it commutes with the shift automorphisms and takes distinguished triangles of $\mathcal A$ to distinguished triangles of $\mathcal B$. For example, derived functors between derived categories are exact.
\subsection{Adjoint functors}
Let $F\colon\mathcal A\to\mathcal B$ and $G\colon\mathcal B\to\mathcal A$ be functors. An adjunction for $(G,F)$ is a bifunctorial isomorphism
\[
\operatorname{Hom}_{\mathcal A}(G\kern.0mm\smash{-}\kern.0mm,\kern.0mm\smash{-}\kern.0mm)\,\cong\,\operatorname{Hom}_{\mathcal B}(\kern.0mm\smash{-}\kern.0mm,F\kern.0mm\smash{-}\kern.0mm).
\] In this case, we say that $G$ is left adjoint to $F$ or that $F$ is right adjoint to $G$. When it exists, a left or right adjoint to a given functor is unique up to isomorphism of functors. The adjoint of a composite functor is the composite of the adjoints. An adjunction determines and is determined by two natural transformations $\varepsilon\colon G\circ F\to\operatorname{id}_\mathcal A$ and $\eta\colon\operatorname{id}_\mathcal B\to F\circ G$ that come from applying the adjunction to $1_{Fa}$ and $1_{Gb}$ respectively (see Mac~Lane \cite[IV.1]{Mac} for more details).
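The counit and unit satisfy the usual triangle identities (see \cite[IV.1]{Mac}); in the notation above, for all $a\in\Ob\mathcal A$ and $b\in\Ob\mathcal B$,
\[
\varepsilon_{Gb}\circ G(\eta_b)=1_{Gb}
\qquad\text{and}\qquad
F(\varepsilon_a)\circ\eta_{Fa}=1_{Fa}.
\]
The analogous identity $H(\varepsilon_b)\circ\eta_{Hb}=1_{Hb}$ for a right adjoint $H$ of $F$ is used in the proof of Lemma~\ref{extra} below.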
The basic adjunctions we use in this paper are described in Section~\ref{adjunctions} below.
\subsection{Fully faithful functors and equivalences} \label{fff}
A functor $F\colon\mathcal A\to \mathcal B$ is {\em fully faithful}\/ if for any pair of objects $a_1$, $a_2$ of $\mathcal A$, the map
\[
F\colon\operatorname{Hom}_{\mathcal A}(a_1,a_2)\to\operatorname{Hom}_{\mathcal B}(Fa_1,Fa_2)
\]
is an isomorphism. One should think of $F$ as an `injective' functor. This is clearer when $F$ has a left adjoint $G\colon\mathcal B\to\mathcal A$ (or a right adjoint $H\colon\mathcal B\to\mathcal A$), in which case $F$ is fully faithful if and only if the natural transformation $G\circ F\to \operatorname{id}_{\mathcal A}$ (or $\operatorname{id}_{\mathcal A}\to H\circ F$) is an isomorphism.
A functor $F$ is an {\em equivalence} if there is an `inverse' functor $G\colon\mathcal B\to\mathcal A$ such that $G\circ F\cong \operatorname{id}_{\mathcal A}$ and $F\circ G\cong \operatorname{id}_{\mathcal B}$. In this case $G$ is both a left and right adjoint to $F$ (see Mac~Lane \cite[IV.4]{Mac}). In practice, we show that $F$ is an equivalence by writing down an adjoint (a priori, one-sided) and proving that it is an inverse. One simple example of this is the following.
\begin{lemma} \label{extra} Let $\mathcal A$ and $\mathcal B$ be triangulated categories and $F\colon\mathcal A\to \mathcal B$ a fully faithful exact functor with a right adjoint $H\colon\mathcal B\to\mathcal A$. Then $F$ is an equivalence if and only if $Hc\cong 0$ implies $c\cong 0$ for any object $c\in\Ob\mathcal B$. \end{lemma}
\begin{pf} By assumption $\eta\colon\operatorname{id}_\mathcal A\to H\circ F$ is an isomorphism, so $F$ is an equivalence if and only if $\varepsilon\colon F\circ H\to\operatorname{id}_\mathcal B$ is an isomorphism. Thus the `only if' part of the lemma is immediate, since $c\cong FHc$.
For the `if' part, take any object $b\in\Ob\mathcal B$ and embed the natural adjunction map $\varepsilon_b$ in a triangle \begin{equation} \label{semiortho}
c\to FHb\lRa{\varepsilon_b} b\to c[1]. \end{equation} If we apply $H$ to this triangle, then $H(\varepsilon_b)$ is an isomorphism, because $\eta_{Hb}$ is an isomorphism and $H(\varepsilon_b)\circ\eta_{Hb}=1_{Hb}$ (\cite[IV.1, Theorem 1]{Mac}). Hence $Hc\cong 0$ and so $c\cong 0$ by hypothesis. Thus $\varepsilon_b$ is an isomorphism, as required. \end{pf}
One may understand this lemma in a broader context as follows. The triangle~(\ref{semiortho}) shows that, when $F$ is fully faithful with right adjoint $H$, there is a `semi-orthogonal' decomposition $\mathcal B=(\operatorname{Im} F,\operatorname{Ker} H)$, where \begin{align*}
\operatorname{Im} F &= \{ b\in\Ob\mathcal B : \text{$b\cong Fa$ for some $a\in\Ob\mathcal A$} \},\\
\operatorname{Ker} H &= \{ c\in\Ob\mathcal B : Hc\cong 0 \}. \end{align*} Since $F$ is fully faithful, the fact that $b\cong Fa$ for some object $a\in\Ob\mathcal A$ necessarily means that $b\cong FHb$, so only zero objects are in both subcategories. The semi-orthogonality condition also requires that $\operatorname{Hom}_{\mathcal B}(b,c)=0$ for all $b\in\operatorname{Im} F$ and $c\in\operatorname{Ker} H$, which is immediate from the adjunction. The lemma then has the very reasonable interpretation that if $\operatorname{Ker} H$ is trivial, then $\operatorname{Im} F=\mathcal B$ and $F$ is an equivalence. Note that if $G$ is a left adjoint for $F$, then there is a similar semi-orthogonal decomposition on the other side $\mathcal B=(\operatorname{Ker} G,\operatorname{Im} F)$ and a corresponding version of the lemma. For more details on semi-orthogonal decompositions see Bondal \cite{Bo}.
\subsection{Spanning classes and orthogonal decomposition}
A {\em spanning class} for a triangulated category $\mathcal A$ is a subclass $\Omega$ of the objects of $\mathcal A$ such that for any object $a\in\Ob\mathcal A$
\[
\operatorname{Hom}^i_{\mathcal A}(a,\omega)=0 \quad\mbox{for all }
\omega\in\Omega, i\in\mathbb Z\quad\mbox{implies }a\cong 0
\] and
\[
\operatorname{Hom}^i_{\mathcal A}(\omega,a)=0\quad\mbox{for all }
\omega\in\Omega, i\in\mathbb Z\quad\mbox{implies }a\cong 0.
\] For example, the set of skyscraper sheaves $\{\mathcal O_x\colon x\in X\}$ on a non\-singular variety $X$ is a spanning class for $\operatorname{D}(X)$.
A triangulated category $\mathcal A$ is {\em decomposable} as an orthogonal direct sum of two full subcategories $\mathcal A_1$ and $\mathcal A_2$ if every object of $\mathcal A$ is isomorphic to a direct sum $a_1\oplus a_2$ with $a_j\in\Ob\mathcal A_j$, and if
\[
\operatorname{Hom}_{\mathcal A}^i(a_1,a_2)=\operatorname{Hom}_{\mathcal A}^i(a_2,a_1)=0
\] for any pair of objects $a_j\in\Ob\mathcal A_j$ and all integers $i$. The category $\mathcal A$ is indecomposable if for any such decomposition one of the two subcategories $\mathcal A_i$ is trivial. For example, if $X$ is a scheme, $\operatorname{D}(X)$ is indecomposable precisely when $X$ is connected. For more details see Bridgeland \cite{Br1}.
\subsection{Serre functors}
The properties of Serre duality on a non\-singular projective variety were abstracted by Bondal and Kapranov \cite{BK} into the notion of a Serre functor on a triangulated category. Let $\mathcal A$ be a triangulated category in which all the $\operatorname{Hom}$ sets are finite dimensional vector spaces. A {\em Serre functor} for $\mathcal A$ is an exact equivalence $S\colon\mathcal A\to\mathcal A$ inducing bifunctorial iso\-morphisms
\[
\operatorname{Hom}_{\mathcal A}(a,b)\to\operatorname{Hom}_{\mathcal A}(b,S (a))^{\vee}
\quad \text{for all $a,b\in\Ob\mathcal A$}
\] that satisfy a simple compatibility condition (see \cite{BK}). When a Serre functor exists, it is unique up to isomorphism of functors. We say that $\mathcal A$ has {\em trivial}\/ Serre functor if for some integer $i$ the shift functor $[i]$ is a Serre functor for $\mathcal A$.
The main example is the bounded derived category of coherent sheaves $\operatorname{D}(X)$ on a non\-singular projective $n$-fold $X$, having the Serre functor
\[
S_X(\kern.0mm\smash{-}\kern.0mm)=(\kern.0mm\smash{-}\kern.0mm\otimes\omega_X)[n].
\] Thus $\operatorname{D}(X)$ has trivial Serre functor if and only if the canonical bundle of $X$ is trivial.
\subsection{A criterion for equivalence} \label{trings}
Let $F\colon\mathcal A\to\mathcal B$ be an exact functor between triangulated categories with Serre functors $S_{\mathcal A}$ and $S_{\mathcal B}$. Assume that $F$ has a left adjoint $G\colon\mathcal B\to\mathcal A$. Then $F$ also has a right adjoint $H=S_{\mathcal A}\circ G\circ S_{\mathcal B}^{-1}$.
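To check that $H$ is indeed right adjoint to $F$, note that for all $a\in\Ob\mathcal A$ and $b\in\Ob\mathcal B$, Serre duality in $\mathcal A$ and $\mathcal B$ and the adjunction for $(G,F)$ give
\begin{align*}
\operatorname{Hom}_{\mathcal A}(a,Hb)&=\operatorname{Hom}_{\mathcal A}(a,S_{\mathcal A}GS_{\mathcal B}^{-1}b)\cong\operatorname{Hom}_{\mathcal A}(GS_{\mathcal B}^{-1}b,a)^{\vee}\\
&\cong\operatorname{Hom}_{\mathcal B}(S_{\mathcal B}^{-1}b,Fa)^{\vee}\cong\operatorname{Hom}_{\mathcal B}(Fa,b),
\end{align*}
using the fact that the $\operatorname{Hom}$ sets are finite dimensional vector spaces.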
\begin{thm} \label{tring} Suppose there is a spanning class $\Omega$ for $\mathcal A$ such that \begin{equation*} F\colon\operatorname{Hom}^i_{\mathcal A}(\omega_1,\omega_2)\to \operatorname{Hom}^i_{\mathcal B}(F\omega_1,F\omega_2) \end{equation*} is an isomorphism for all $i\in\mathbb Z$ and all $\omega_1,\omega_2\in\Omega$. Then $F$ is fully faithful. \end{thm}
\begin{pf} See \cite[Theorem 2.3]{Br1}. \end{pf}
\begin{thm} \label{tring2} Suppose further that $\mathcal A$ is non\-trivial, that $\mathcal B$ is indecomposable and that $FS_{\mathcal A}(\omega)\cong S_{\mathcal B}F(\omega)$ for all $\omega\in\Omega$. Then $F$ is an equivalence of categories. \end{thm}
\begin{pf} Consider an object $b\in\Ob\mathcal B$. For any $\omega\in\Omega$ and $i\in\mathbb Z$ we have isomorphisms \begin{eqnarray*}
&\operatorname{Hom}_{\mathcal A}^i(\omega,Gb)=\operatorname{Hom}_{\mathcal A}^i(Gb,S_{\mathcal A} \omega)^{\vee}= \operatorname{Hom}_{\mathcal B}^i(b,FS_{\mathcal A} \omega)^{\vee} \\ &=\operatorname{Hom}_{\mathcal B}^i(b,S_{\mathcal B} F \omega)^{\vee}= \operatorname{Hom}_{\mathcal B}^i(F\omega,b)=\operatorname{Hom}_{\mathcal A}^i(\omega,Hb), \end{eqnarray*} using Serre duality and the adjunctions for $(G,F)$ and $(F,H)$. Since $\Omega$ is a spanning class we can conclude that $Gb\cong 0$ precisely when $Hb\cong 0$. Then the result follows from \cite[Theorem 3.3]{Br1}. \end{pf}
The proof of Theorem 3.3 in \cite{Br1} may be understood as follows. If $\operatorname{Ker} H\subset\operatorname{Ker} G$, then the semi-orthogonal decomposition described at the end of Section~\ref{fff} becomes an orthogonal decomposition. Hence $\operatorname{Ker} H$ must be trivial, because $\mathcal B$ is indecomposable and $\mathcal A$, and hence $\operatorname{Im} F$, is non\-trivial. Thus $\operatorname{Im} F=\mathcal B$ and $F$ is an equivalence.
\section{Derived categories of sheaves}
This section is concerned with various general properties of complexes of $\mathcal O_X$-modules on a scheme $X$. Note that all our schemes are of finite type over $\mathbb C$. Given a scheme $X$, define $\operatorname{D}^{\mathrm{qc}}(X)$ to be the (unbounded) derived category of the Abelian category $\operatorname{Qcoh}(X)$ of quasi\-coherent sheaves on $X$. Also define $\operatorname{D}(X)$ to be the full subcategory of $\operatorname{D}^{\mathrm{qc}}(X)$ consisting of complexes with bounded and coherent cohomology.
\subsection{Geometric adjunctions} \label{adjunctions} Here we describe three standard adjunctions that arise in algebraic geometry and are used frequently in what follows. For the first example, let $X$ be a scheme and $E\in\operatorname{D}(X)$ an object of finite homological dimension. Then the derived dual
\[
E^{\vee}=\mathbf R\operatorname{\mathcal H\mathit{om}}_{\mathcal O_X}(E,\mathcal O_X)
\] also has finite homological dimension, and the functor $\kern.0mm\smash{-}\kern.0mm\Ltensor E$ is both left and right adjoint to the functor $\kern.0mm\smash{-}\kern.0mm\Ltensor E^{\vee}$.
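Explicitly, for objects $a,b\in\operatorname{D}(X)$ the tensor--$\operatorname{\mathcal H\mathit{om}}$ adjunction and the isomorphism $\mathbf R\operatorname{\mathcal H\mathit{om}}_{\mathcal O_X}(E,b)\cong b\Ltensor E^{\vee}$, valid because $E$ has finite homological dimension, give
\[
\operatorname{Hom}_{\operatorname{D}(X)}(a\Ltensor E,b)\cong\operatorname{Hom}_{\operatorname{D}(X)}(a,b\Ltensor E^{\vee});
\]
applying this with $E$ replaced by $E^{\vee}$ and using $E^{\vee\vee}\cong E$ gives the adjunction on the other side.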
For the second example take a morphism of schemes $f\colon X\to Y$. The functor
\[
\mathbf R f_*\colon \operatorname{D}^{\mathrm{qc}}(X)\longrightarrow \operatorname{D}^{\mathrm{qc}}(Y)
\] has the left adjoint
\[
\mathbf L f^*\colon \operatorname{D}^{\mathrm{qc}}(Y)\longrightarrow \operatorname{D}^{\mathrm{qc}}(X).
\] If $f$ is proper then $\mathbf R f_*$ takes $\operatorname{D}(X)$ into $\operatorname{D}(Y)$. If $f$ has finite Tor dimension (for example if $f$ is flat, or $Y$ is non\-singular) then $\mathbf L f^*$ takes $\operatorname{D}(Y)$ into $\operatorname{D}(X)$.
The third example is Grothendieck duality. Again take a morphism of schemes $f\colon X\to Y$. The functor $\mathbf R f_*$ has a right adjoint
\[
f^!\colon \operatorname{D}^{\mathrm{qc}}(Y)\longrightarrow \operatorname{D}^{\mathrm{qc}}(X)
\] and moreover, if $f$ is proper and of finite Tor dimension, there is an isomorphism of functors \begin{equation} \label{amos} f^!(\kern.0mm\smash{-}\kern.0mm)\,\cong\, \mathbf L f^*(\kern.0mm\smash{-}\kern.0mm)\Ltensor f^!(\mathcal O_Y). \end{equation} Neeman \cite{Ne} has recently given a completely formal proof of these statements in terms of the Brown representability theorem.
Let $X$ be a non\-singular projective variety of dimension $n$ and write $f\colon X\to Y=\operatorname{Spec}(\mathbb C)$ for the projection to a point. In this case $f^!(\mathcal O_Y)=\omega_X[n]$. The above statement of Grothendieck duality implies that the functor \begin{equation} \label{sefu} S_X(\kern.0mm\smash{-}\kern.0mm)=(\kern.0mm\smash{-}\kern.0mm\otimes\omega_X)[n] \end{equation} is a Serre functor on $\operatorname{D}(X)$.
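To spell this out, for $a,b\in\operatorname{D}(X)$ the tensor adjunctions and the adjunction for $(\mathbf Rf_*,f^!)$ give
\begin{align*}
\operatorname{Hom}_{\operatorname{D}(X)}(b,a\otimes\omega_X[n])&\cong\operatorname{Hom}_{\operatorname{D}(X)}(\mathbf R\operatorname{\mathcal H\mathit{om}}_{\mathcal O_X}(a,b),f^!\mathcal O_Y)\\
&\cong\operatorname{Hom}_{\operatorname{D}(Y)}(\mathbf Rf_*\mathbf R\operatorname{\mathcal H\mathit{om}}_{\mathcal O_X}(a,b),\mathcal O_Y)\cong\operatorname{Hom}_{\operatorname{D}(X)}(a,b)^{\vee},
\end{align*}
the last isomorphism because $\mathbf Rf_*=\mathbf R\Gamma$ here and $\operatorname{Hom}_{\operatorname{D}(X)}(a,b)=H^0(X,\mathbf R\operatorname{\mathcal H\mathit{om}}_{\mathcal O_X}(a,b))$.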
\subsection{Duality for quasi\-projective schemes} \label{qpdual}
In order to apply Grothendieck duality on quasi\-projective schemes, we need to restrict attention to sheaves with compact support. The {\em support} of an object $E\in\operatorname{D}(X)$ is the locus of $X$ where $E$ is not exact, that is, the union of the supports of the cohomology sheaves of $E$. It is always a closed subset of $X$.
Given a scheme $X$, define the category $\operatorname{D_{\mathrm c}}(X)$ to be the full subcategory of $\operatorname{D}(X)$ consisting of complexes whose support is proper. Note that when $X$ itself is proper, $\operatorname{D_{\mathrm c}}(X)$ is just the usual derived category $\operatorname{D}(X)$.
If $f\colon X\to Y$ is a morphism of schemes of finite Tor dimension, but not necessarily proper, then~(\ref{amos}) still holds for all objects in $\operatorname{D_{\mathrm c}}(Y)$. Using this we see that if $X$ is a non\-singular quasi\-projective variety of dimension $n$, the category $\operatorname{D_{\mathrm c}}(X)$ has a Serre functor given by~(\ref{sefu}).
\subsection{Crepant resolutions} Let $X$ be a variety and $f\colon Y\to X$ a resolution of singularities. Suppose that $X$ has rational singularities, that is, $f_*\mathcal O_Y=\mathcal O_X$ and
\[
\mathbf R^i f_* \mathcal O_Y=0 \qquad\text{for all } i>0.
\] Given a point $x\in X$ define $\operatorname{D}_x(Y)$ to be the full subcategory of $\operatorname{D_{\mathrm c}}(Y)$ consisting of objects whose support is contained in the fibre $f^{-1}(x)$. We have the following categorical criterion for $f$ to be crepant.
\begin{lemma} \label{crepancy} If\/ $\operatorname{D}_x(Y)$ has trivial Serre functor for each $x\in X$, then $X$ is Gorenstein and\/ $f\colon Y\to X$ is a crepant resolution. \end{lemma}
\begin{pf} The Serre functor on $\operatorname{D}_x(Y)$ is the restriction of the Serre functor on $\operatorname{D_{\mathrm c}}(Y)$. Hence, by Section~\ref{qpdual}, the condition implies that for each $x\in X$ the restriction of the functor $(\kern.0mm\smash{-}\kern.0mm\otimes\omega_Y)$ to the category $\operatorname{D}_x(Y)$ is isomorphic to the identity. Since $\operatorname{D}_x(Y)$ contains the structure sheaves of all fattened neighbourhoods of the fibre $f^{-1}(x)$ this implies that the restriction of $\omega_Y$ to each formal fibre of $f$ is trivial. To get the result, we must show that $\omega_X$ is a line bundle and that $f^*\omega_X=\omega_Y$. Since $\omega_X=f_*\omega_Y$, this is achieved by the following lemma. \end{pf}
\begin{lemma} A line bundle $L$ on $Y$ is the pullback $f^*M$ of some line bundle $M$ on $X$ if and only if the restriction of $L$ to each formal fibre of $f$ is trivial. Moreover, when this holds, $M=f_*L$. \end{lemma}
\begin{pf} For each point $x\in X$, the formal fibre of $f$ over $x$ is the fibre product
\[
Y\times_X \operatorname{Spec}(\widehat{\mathcal O}_{X,x}).
\] The restriction of the pullback of a line bundle from $X$ to each of these schemes is trivial because a line bundle has trivial formal stalks at points.
For the converse suppose that the restriction of $L$ to each of these formal fibres is trivial. The theorem on formal functions shows that the completion of the stalks of the sheaves $\mathbf R^i f_*\mathcal O_Y$ and $\mathbf R^i f_*L$ at any point $x\in X$ are isomorphic for each $i$. Since $X$ has rational singularities it follows that $\mathbf R^i f_*L=0$ for all $i>0$, and $M=f_*L$ is a line bundle on $X$.
Since $f^*M$ is torsion free, the natural adjunction map $\eta\colon f^* f_*L \to L$ is injective, so there is a short exact sequence \begin{equation} \label{seseta} 0\to f^* f_*L \lRa{\eta} L\to Q\to 0. \end{equation} By the projection formula and the fact that $X$ has rational singularities,
\[
\mathbf R^i f_*(f^* M)=M\otimes \mathbf R^i f_*\mathcal O_Y=0 \quad\mbox{for all }i>0.
\] The fact that $\eta$ is the unit of the adjunction for $(f^*,f_*)$ implies that $f_*\eta$ has a left inverse, and in particular is surjective. Applying $f_*$ to~(\ref{seseta}) we conclude that $f_* Q=0$.
Using the theorem on formal functions again, we can deduce that
\[
f_*(Q\otimes L^{-1})=0.
\] In particular, $Q\otimes L^{-1}$ has no global sections. Tensoring~(\ref{seseta}) with $L^{-1}$ gives a contradiction unless $Q=0$. Hence $\eta$ is an isomorphism and we are done. \end{pf}
\section{$G$-sheaves}
Throughout this section $G$ is a finite group acting on a scheme $X$ (on the left) by auto\-morphisms. As in the last section, all schemes are of finite type over $\mathbb C$. We list some results we need concerning the category of sheaves on $X$ equipped with a compatible $G$ action, or `$G$-sheaves' for short. Since $G$ is finite, most of the proofs are trivial and are left to the reader. The main point is that natural constructions involving sheaves on $X$ are canonical, so commute with automorphisms of $X$.
\subsection{Sheaves and functors}
A $G$-sheaf $E$ on $X$ is a quasi\-coherent sheaf of $\mathcal O_X$-modules together with a lift of the $G$ action to $E$. More precisely, for each $g\in G$, there is a lift $\lift{g}{E}\colon E\to g^*E$ satisfying $\lift{1}{E}=\operatorname{id}_E$ and $\lift{hg}{E}=g^*\left(\lift{h}{E}\right)\circ\lift{g}{E}$.
If $E$ and $F$ are $G$-sheaves, then there is a (right) action of $G$ on $\operatorname{Hom}_X(E,F)$ given by $\theta^g=\left(\lift{g}{F}\right)^{-1}\circ g^*\theta\circ\lift{g}{E}$, and the space $\operatorname{\hbox{$G$}-Hom}_X(E,F)$ of $G$-invariant maps gives the morphisms in the Abelian categories $\operatorname{Qcoh}^G(X)$ and $\operatorname{Coh}^G(X)$ of $G$-sheaves.
The category $\operatorname{Qcoh}^G(X)$ has enough injectives (Grothendieck \cite[Proposition 5.1.2]{Gr}) so we may take $G$-equivariant injective resolutions. Since $G$ is finite, if $X$ is a quasi\-projective scheme there is an ample invertible $G$-sheaf on $X$ and so we may also take \hbox{$G$-equivariant} locally free resolutions. The functors $\operatorname{\hbox{$G$}-Ext}^i_X(\kern.0mm\smash{-}\kern.0mm,\kern.0mm\smash{-}\kern.0mm)$ are the $G$-invariant parts of $\operatorname{Ext}^i_X(\kern.0mm\smash{-}\kern.0mm,\kern.0mm\smash{-}\kern.0mm)$ and are the derived functors of $\operatorname{\hbox{$G$}-Hom}_X(\kern.0mm\smash{-}\kern.0mm,\kern.0mm\smash{-}\kern.0mm)$. Thus if $X$ is non\-singular of dimension $n$, so that $\operatorname{Qcoh}(X)$ has global dimension $n$, then the category $\operatorname{Qcoh}^G(X)$ also has global dimension $n$.
The local functors $\operatorname{\mathcal H\mathit{om}}$ and $\otimes$ are defined in the obvious way on $\operatorname{Qcoh}^G(X)$, as are pullback $f^*$ and pushforward $f_*$ for any $G$-equivariant morphism of schemes $f\colon X\to Y$. Thus, for example, $\lift{g}{f^*E}=f^*\lift{g}{E}$. Natural isomorphisms such as $\operatorname{Hom}_X(f^*E,F)\cong\operatorname{Hom}_Y(E,f_*F)$ are canonical, that is, commute with isomorphisms of the base, and hence are $G$-equivariant. Therefore they restrict to natural iso\-morphisms
\[
\operatorname{\hbox{$G$}-Hom}_X(f^*E,F)\cong\operatorname{\hbox{$G$}-Hom}_Y(E,f_*F).
\] In other words, $f^*$ and $f_*$ are also adjoint functors between the categories $\operatorname{Qcoh}^G(X)$ and $\operatorname{Qcoh}^G(Y)$.
Similarly, the natural isomorphisms implicit in the projection formula, flat base change, etc. are canonical and hence $G$-equivariant.
It seems worthwhile to single out the following point:
\begin{lemma} \label{last} Let $E$ and $F$ be $G$-sheaves on $X$. Then, as a representation of $G$, we have a direct sum decomposition
\[
\operatorname{Hom}_X(E,F)=\bigoplus_{i=0}^k\operatorname{\hbox{$G$}-Hom}_X(E\otimes\rho_i,F)\otimes\rho_i
over the irreducible representations $\{\rho_0,\dots,\rho_k\}$. \end{lemma}
\begin{pf} The result amounts to showing that \[ \operatorname{\hbox{$G$}-Hom}(\rho_i, \operatorname{Hom}_X(E,F))=\operatorname{\hbox{$G$}-Hom}_X(E\otimes\rho_i,F). \] Let $f\colon X\to Y=\operatorname{Spec}(\mathbb C)$ be projection to a point, with $G$ acting trivially on $Y$ so that the map is equivariant. Then $\operatorname{Qcoh}^G(Y)$ is just the category of $\mathbb C[G]$-modules. Note that $\operatorname{Hom}_X(E,F)= f_*\operatorname{\mathcal H\mathit{om}} _{\mathcal O_X}(E,F)$ and $f^*\rho_i=\mathcal O_X\otimes\rho_i$, so that the adjunction between $f^*$ and $f_*$ gives \begin{eqnarray*}
\operatorname{\hbox{$G$}-Hom}_Y(\rho_i, f_* \operatorname{\mathcal H\mathit{om}}_{\mathcal O_X}(E,F))&=& \operatorname{\hbox{$G$}-Hom}_X(\mathcal O_X\otimes\rho_i, \operatorname{\mathcal H\mathit{om}}_{\mathcal O_X}(E,F)) \\ &=&\operatorname{\hbox{$G$}-Hom}_X(E\otimes\rho_i,F), \end{eqnarray*} as required. \end{pf}
\subsection{Trivial actions} \label{sec:triv}
If the group $G$ acts trivially on $X$, then any $G$-sheaf $E$ decomposes as a direct sum
\[
E=\bigoplus_i E_i\otimes \rho_i
\] over the irreducible representations $\{\rho_0,\rho_1,\dots,\rho_k\}$ of $G$ (where $\rho_0=\mathbf1$ is the trivial representation). The sheaves $E_i$ are just ordinary sheaves on $X$. Furthermore, $\operatorname{\hbox{$G$}-Hom}_X(E_i\otimes \rho_i,E_j\otimes \rho_j)=0$ for $i\ne j$. Thus the category $\operatorname{Qcoh}^G(X)$ decomposes as a direct sum $\bigoplus_i\operatorname{Qcoh}^i(X)$ and each summand is equivalent to $\operatorname{Qcoh}(X)$.
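The decomposition into isotypic pieces is effected by the character projectors $P_i=\frac{1}{|G|}\sum_{g\in G}\overline{\chi_i(g)}\,g$ (for one dimensional characters $\chi_i$). As a toy numerical illustration (not part of the paper's machinery: it treats a single fibre of a $G$-sheaf, that is, a plain representation), the following sketch decomposes $\mathbb C^3$ under the cyclic permutation action of $G=\mathbb Z/3\mathbb Z$, whose irreducible characters are $\chi_i(g^k)=\omega^{ik}$ with $\omega=e^{2\pi i/3}$:

```python
import cmath

# Cyclic group Z/3 acting on C^3 by cyclically permuting coordinates.
# (Toy illustration only; names and setup are ours, not the paper's.)
w = cmath.exp(2j * cmath.pi / 3)

def perm_matrix(k):
    # matrix of g^k sending e_j to e_{(j+k) mod 3}
    return [[1.0 if (i - j) % 3 == k else 0.0 for j in range(3)] for i in range(3)]

def mat_scale(c, M):
    return [[c * x for x in row] for row in M]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def trace(M):
    return sum(M[i][i] for i in range(3))

def projector(i):
    # isotypic projector P_i = (1/|G|) * sum_k conj(chi_i(g^k)) * g^k
    P = [[0.0] * 3 for _ in range(3)]
    for k in range(3):
        chi = w ** (i * k)  # character of rho_i at g^k
        P = mat_add(P, mat_scale(chi.conjugate() / 3, perm_matrix(k)))
    return P

# dimension of each isotypic piece = trace of the corresponding projector
dims = [round(trace(projector(i)).real) for i in range(3)]
print(dims)  # [1, 1, 1]
```

Each isotypic piece is one dimensional, since the permutation action of $\mathbb Z/3\mathbb Z$ on $\mathbb C^3$ is its regular representation; the same character bookkeeping underlies Lemma~\ref{last}.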
In particular, every $G$-sheaf $E$ has a fixed part $[E]^G$ and the functor
\[
[\kern.0mm\smash{-}\kern.0mm]^G\colon\operatorname{Qcoh}^G(X)\to\operatorname{Qcoh}(X)
\] is the left and right adjoint to the functor
\[
\kern.0mm\smash{-}\kern.0mm\otimes\rho_0\colon\operatorname{Qcoh}(X)\to\operatorname{Qcoh}^G(X),
\] that is, `let $G$ act trivially'. Both functors are exact.
\subsection{Derived categories}
The $G$-equivariant derived category $\operatorname{D}^G(X)$ is defined to be the full subcategory of the (unbounded) derived category of $\operatorname{Qcoh}^G(X)$ consisting of complexes with bounded and coherent cohomology.
The usual derived functors $\mathbf R\operatorname{\mathcal H\mathit{om}}$, $\Ltensor$, $\mathbf L f^*$ and $\mathbf R f_*$ may be defined on the equivariant derived category, and, as for sheaves, the standard properties of adjunctions, projection formula and flat base change then hold because the implicit natural iso\-morphisms are sufficiently canonical.
To obtain an equivariant Grothendieck duality we refer to Neeman's results \cite{Ne}. Let $f\colon X\to Y$ be an equivariant morphism of schemes. The only thing to check is that equivariant pushdown $\mathbf R f_*$ commutes with small coproducts. This is proved exactly as in \cite{Ne}. Then the functor $\mathbf R f_*$ has a right adjoint $f^!$, and~(\ref{amos}) holds when $f$ is proper and of finite Tor dimension. Moreover, the same result holds if $f$ is not proper, provided that we restrict $\mathbf R f_*$ to the subcategories of objects with compact support. Thus if $X$ is a non\-singular quasi\-projective variety of dimension $n$, the full subcategory $\operatorname{D_{\mathrm c}^G}(X)\subset\operatorname{D}^G(X)$ consisting of objects with compact supports has the Serre functor
\[
S_X(\kern.0mm\smash{-}\kern.0mm)\,=\,(\kern.0mm\smash{-}\kern.0mm\otimes\omega_X)[n],
\] where $\omega_X$ is the canonical bundle of $X$ with its induced $G$-structure.
\subsection{Indecomposability} If $G$ acts trivially on $X$ then the results of Section~\ref{sec:triv} show that $\operatorname{D}^G(X)$ decomposes as a direct sum of orthogonal subcategories indexed by the irreducible representations of $G$. More generally it is easy to see that $\operatorname{D}^G(X)$ is decomposable unless $G$ acts faithfully. We need the converse of this statement.
\begin{lemma} Suppose a finite group $G$ acts faithfully on a quasi\-projective variety $X$. Then $\operatorname{D}^G(X)$ is indecomposable. \end{lemma}
\begin{pf} Suppose that $\operatorname{D}^G(X)$ decomposes as an orthogonal direct sum of two subcategories $\mathcal A_1$ and $\mathcal A_2$. Any indecomposable object of $\operatorname{D}^G(X)$ lies in either $\mathcal A_1$ or $\mathcal A_2$ and
\[
\operatorname{Hom}_{\operatorname{D}^G(X)}(a_1,a_2)=0 \mbox{ for all } a_1\in \mathcal A_1, a_2\in\mathcal A_2.
\] Since the action of $G$ is faithful, the general orbit is free. Let $D=G\cdot x$ be a free orbit. Then $\mathcal O_D$ is indecomposable as a $G$-sheaf. Suppose without loss of generality that $\mathcal O_D$ lies in $\mathcal A_1$.
Let $\rho_i$ be an irreducible representation of $G$. The sheaf $\mathcal O_X\otimes\rho_i$ is indecomposable in $\operatorname{D}^G(X)$ and there exists an equivariant map $\mathcal O_X\otimes\rho_i\to\mathcal O_D$ so $\mathcal O_X\otimes\rho_i$ also lies in $\mathcal A_1$. Any indecomposable $G$-sheaf $E$ supported in dimension 0 has a section, so by Lemma~\ref{last} there is an equivariant map $\mathcal O_X\otimes\rho_i\to E$, and thus $E$ lies in $\mathcal A_1$.
Finally given an indecomposable $G$-sheaf $F$, take an orbit $G\cdot x$ contained in $\operatorname{Supp}(F)$ and let $i\colon G\cdot x\hookrightarrow X$ be the inclusion. Then $i_* i^*(F)$ is supported in dimension 0 and there is an equivariant map $F\to i_* i^*(F)$, so $F$ also lies in $\mathcal A_1$. Now $\mathcal A_2$ is orthogonal to all sheaves, hence is trivial. \end{pf}
\section{The intersection theorem}
Our proof that $\operatorname{\hbox{$G$}-Hilb}{M}$ is non\-singular follows an idea developed in Bridgeland and Maciocia \cite{Br2} for moduli spaces over K3 fibrations, and uses the following famous and difficult result of commutative algebra:
\begin{thm}[Intersection theorem] \label{com} Let $(A,m)$ be a local\/ \hbox{$\mathbb C$-algebra} of dimension $d$. Suppose that
\[
0\to M_s\to M_{s-1}\to\cdots\to M_0\to 0
\] is a nonexact complex of finitely generated free $A$-modules with each homology module $H_i(M_{{\scriptscriptstyle\bullet}})$ an $A$-module of finite length. Then $s\ge d$. Moreover, if $s=d$ and $H_0(M_{{\scriptscriptstyle\bullet}})\cong A/m$, then
\[
H_i(M_{{\scriptscriptstyle\bullet}})=0 \quad\text{for all $i\ne 0$,}
\] and $A$ is regular. \end{thm}
The basic idea is as follows. Serre's criterion states that any nonzero finite length $A$-module has homological dimension $\geq d$ and that $A$ is regular precisely when there is a finite length $A$-module of homological dimension exactly $d$. The intersection theorem gives the corresponding statements for complexes of $A$-modules with finite length homology. As a rough slogan, ``regularity is a property of the derived category''. For the main part of the proof, see Roberts \cite{Ro1}, \cite{Ro2}; for the final clause, see \cite{Br2}.
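For example, if $M$ is a nonzero finite length $A$-module of finite homological dimension $s$, then a free resolution
\[
0\to M_s\to\cdots\to M_0\to M\to 0
\]
yields a complex of the type appearing in Theorem~\ref{com}, with homology $M$ concentrated in degree $0$; the theorem then gives $s\ge d$, and if $M\cong A/m$ with $s=d$ it gives that $A$ is regular, recovering the module statements above.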
We may rephrase the intersection theorem
using the language of support and homological dimension. If $X$ is a scheme and $E$ an object in $\operatorname{D}(X)$, then it is easy to check \cite{Br2} that, for any closed point $x\in X$,
\[
x\in\operatorname{Supp} E\iff \operatorname{Hom}_{\operatorname{D}(X)}^i(E,\mathcal O_x)\ne 0 \text{ for some $i\in\mathbb Z$.}
\] The {\em homological dimension} of a nonzero object $E\in\operatorname{D}(X)$, written $\operatorname{hom\, dim} E$, is the smallest non\-negative integer $s$ such that $E$ is isomorphic in $\operatorname{D}(X)$ to a complex of locally free sheaves on $X$ of length $s$. If no such integer exists we put $\operatorname{hom\, dim} E =\infty$. One can prove \cite{Br2} that if $X$ is quasi\-projective, and $n$ is a non\-negative integer, then $\operatorname{hom\, dim} E \le n$ if and only if there is an integer $j$ such that for any point $x\in X$
\[
\operatorname{Hom}^i_{\operatorname{D}(X)}(E,\mathcal O_x)=0 \mbox{ unless }j\le i\le j+n.
\] The two parts of Theorem~\ref{com} now become the following (cf.\ \cite{Br2}).
\begin{cor} \label{ab} Let $X$ be a scheme and $E$ a nonzero object of\/ $\operatorname{D}(X)$. Then
\[
\operatorname{codim}(\operatorname{Supp} E)\,\le\,\operatorname{hom\, dim} E.
\] \end{cor}
\begin{cor} \label{smooth} Let $X$ be an irreducible $n$-dimensional scheme, and fix a point $x\in X$. Suppose that there is an object $E$ of\/ $\operatorname{D}(X)$ such that for any point $z\in X$, and any integer $i$,
\[
\operatorname{Hom}^i_{\operatorname{D}(X)}(E,\mathcal O_z)=0 \quad\mbox{unless }z=x \mbox{ and \/} 0\le i\le n.
\] Suppose also that $H_0(E)\cong\mathcal O_x$. Then $X$ is non\-singular at $x$ and $E\cong\mathcal O_x$. \end{cor}
\section{The projective case}
\label{Sec6!Proj_case}
The aim of this section is to prove Theorem~\ref{second} under the additional assumption that $M$ is projective. The quasi\-projective case involves some further technical difficulties that we deal with in the next section. Take notation as in the introduction. We break the proof into seven steps.
\Step1 Let $\pi_Y\colon Y\times M\to Y$ and $\pi_M\colon Y\times M\to M$ denote the projections. The functor $\Phi$ may be rewritten
\[
\Phi(\kern.0mm\smash{-}\kern.0mm)\cong \mathbf R\pi_{M_*}(\mathcal O_\mathcal Z\otimes\pi_Y^*(\kern.0mm\smash{-}\kern.0mm\otimes\rho_0)).
\] Note that $\mathcal O_\mathcal Z$ has finite homological dimension, because $\mathcal Z$ is flat over $Y$ and $M$ is non\-singular. Hence the derived dual $\mathcal O_\mathcal Z^{\vee}= \mathbf R\operatorname{\mathcal H\mathit{om}}_{\mathcal O_{Y\times M}}(\mathcal O_\mathcal Z,\mathcal O_{Y\times M})$ also has finite homological dimension
and we may define another functor $\Psi\colon\operatorname{D}^G(M)\to \operatorname{D}(Y)$, by the formula
\[
\Psi(\kern.0mm\smash{-}\kern.0mm)=[\mathbf R \pi_{Y_*}(\mathcal P\Ltensor \pi_M^*(\kern.0mm\smash{-}\kern.0mm))]^G,
\] where $\mathcal P=\mathcal O_\mathcal Z^{\vee}\otimes\pi_M^*(\omega_M)[n]$.
Now $\Psi$ is left adjoint to $\Phi$ because of the three standard adjunctions described in Section~\ref{adjunctions}. The functor $\pi_M^*$ is the left adjoint to $\mathbf R\pi_{M_*}$. The functor $\kern.0mm\smash{-}\kern.0mm\otimes\mathcal O_\mathcal Z$ has the (left and right) adjoint $\kern.0mm\smash{-}\kern.0mm\otimes\mathcal O_\mathcal Z^{\vee}$. Finally the functor $\pi_Y^!$ has the left adjoint $\mathbf R\pi_{Y_*}$ and
\[
\pi_Y^!(\kern.0mm\smash{-}\kern.0mm)=\pi_Y^*(\kern.0mm\smash{-}\kern.0mm)\otimes\pi_M^*(\omega_M)[n].
\]
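In detail, for $a\in\operatorname{D}(Y)$ and $b\in\operatorname{D}^G(M)$, applying these three adjunctions in turn, together with the adjunction between $[\kern.0mm\smash{-}\kern.0mm]^G$ and $\kern.0mm\smash{-}\kern.0mm\otimes\rho_0$ of Section~\ref{sec:triv}, and cancelling the invertible factor $\pi_M^*(\omega_M)[n]$ in the second step, gives
\begin{align*}
\operatorname{Hom}_{\operatorname{D}(Y)}(\Psi b,a)&\cong\operatorname{Hom}_{\operatorname{D}^G(Y)}(\mathbf R\pi_{Y_*}(\mathcal P\Ltensor\pi_M^*b),a\otimes\rho_0)\\
&\cong\operatorname{Hom}_{\operatorname{D}^G(Y\times M)}(\mathcal O_\mathcal Z^{\vee}\Ltensor\pi_M^*b,\pi_Y^*(a\otimes\rho_0))\\
&\cong\operatorname{Hom}_{\operatorname{D}^G(Y\times M)}(\pi_M^*b,\mathcal O_\mathcal Z\otimes\pi_Y^*(a\otimes\rho_0))\cong\operatorname{Hom}_{\operatorname{D}^G(M)}(b,\Phi a).
\end{align*}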
\Step2 The composite functor $\Psi\circ\Phi$ is given by
\[
\mathbf R\pi_{2_*}(\operatorname{\mathcal Q}\Ltensor\pi_1^*(\kern.0mm\smash{-}\kern.0mm)),
\] where $\pi_1$ and $\pi_2$ are the projections of $Y\times Y$ onto its factors, and $\operatorname{\mathcal Q}$ is some object of $\operatorname{D}(Y\times Y)$. This is just composition of correspondences (see Mukai \cite[Proposition~1.3]{muk}).
If $i_y\colon\{y\}\times Y\hookrightarrow Y\times Y$ is the closed embedding then $\mathbf L i_y^*(\operatorname{\mathcal Q})=\Psi\Phi\mathcal O_y$, so that for any pair of points $y_1, y_2$,
\begin{equation}
\label{support}
\operatorname{Hom}_{\operatorname{D}(Y\times Y)}^i(\operatorname{\mathcal Q},\mathcal O_{(y_1,y_2)})=
\operatorname{Hom}_{\operatorname{D}(Y)}^i(\Psi\Phi\mathcal O_{y_1},\mathcal O_{y_2}) =
\operatorname{\hbox{$G$}-Ext}^i_M(\mathcal O_{Z_{y_1}},\mathcal O_{Z_{y_2}}),
\end{equation} using the adjunction for $(\Psi,\Phi)$. Our first objective is to show that $\operatorname{\mathcal Q}$ is supported on the diagonal $\Delta\subset Y\times Y$, or equivalently that the groups in~(\ref{support}) vanish unless $y_1=y_2$. When $n=3$ this is the same as the assumption (4.8) of Ito and Nakajima \cite{IN}.
\Step3 \label{bike} Let $Z_1,Z_2\subset M$ be $G$-clusters. Then
\[
\operatorname{\hbox{$G$}-Hom}_M(\mathcal O_{Z_1},\mathcal O_{Z_2})
=\begin{cases}
\mathbb C &\text{if $Z_1=Z_2$,} \\
0 &\text{otherwise.} \end{cases}
\] To see this note that $\mathcal O_Z$ is generated as an $\mathcal O_M$-module by any nonzero constant section. But, since $H^0(\mathcal O_Z)$ is the regular representation of $G$, the constant sections are precisely the \hbox{$G$-invariant} sections. Hence any equivariant morphism maps a generator to a scalar multiple of a generator and so is determined by that scalar.
Let $y_1$ and $y_2$ be distinct points of $Y$. Serre duality, together with our assumption that $\omega_M$ is locally trivial as a $G$-sheaf, implies that
\[
\operatorname{\hbox{$G$}-Ext}^n_M(\mathcal O_{Z_{y_1}},\mathcal O_{Z_{y_2}})=
\operatorname{\hbox{$G$}-Hom}_M(\mathcal O_{Z_{y_2}},\mathcal O_{Z_{y_1}})=0,
\] so that
\[
\operatorname{\hbox{$G$}-Ext}^p_M(\mathcal O_{Z_{y_1}},\mathcal O_{Z_{y_2}})=0 \quad\text{unless $1\le p\le n-1$.}
\] Hence $\operatorname{\mathcal Q}$ restricted to $(Y\times Y)\setminus\Delta$ has homological dimension $\le n-2$.
\Step4 Now we apply the intersection theorem. If $y_1$ and $y_2$ are points of $Y$ such that $\tau(y_1)\ne \tau(y_2)$ then the corresponding clusters $Z_{y_1}$ and $Z_{y_2}$ are disjoint, so that the groups in~(\ref{support}) vanish. Thus the support of $\operatorname{\mathcal Q}|_{(Y\times Y)\setminus\Delta}$ is contained in the subscheme $Y\times_X Y$. By assumption this has codimension $>n-2$ so Corollary~\ref{ab} implies that
\[
\operatorname{\mathcal Q}|_{(Y\times Y)\setminus\Delta}\cong 0,
\] that is, $\operatorname{\mathcal Q}$ is supported on the diagonal.
\Step5 Fix a point $y\in Y$, and put $E=\Psi\Phi(\mathcal O_y)$. We proved above that $E$ is supported at the point $y$. We claim that $H_0(E)=\mathcal O_y$. Note that Corollary~\ref{smooth} then implies that $Y$ is non\-singular at $y$ and $E\cong\mathcal O_y$.
To prove the claim, note that there is a unique map $E\to\mathcal O_y$, so we obtain a triangle
\[
C\to E\to\mathcal O_y\to C[1]
\] for some object $C$ of $\operatorname{D}(Y)$. Using the adjoint pair $(\Psi,\Phi)$, this gives a long exact sequence
\begin{gather*}
\cdots\to\operatorname{Hom}_{\operatorname{D}(Y)}^0(\mathcal O_y,\mathcal O_y)\to
\operatorname{Hom}_{\operatorname{D}^G(M)}^0(\Phi\mathcal O_y,\Phi\mathcal O_y)\to \operatorname{Hom}_{\operatorname{D}(Y)}^0(C,\mathcal O_y) \\
\to \operatorname{Hom}_{\operatorname{D}(Y)}^1(\mathcal O_y,\mathcal O_y)\lRa{\varepsilon}
\operatorname{Hom}_{\operatorname{D}^G(M)}^1(\Phi\mathcal O_y,\Phi\mathcal O_y)\to \cdots.
\end{gather*} The homomorphism $\varepsilon$ is just the Kodaira--Spencer map for the family of clusters $\{\mathcal O_{Z_y}:y\in Y\}$ (Bridgeland \cite[Lemma~4.4]{Br1}), so is injective. It follows that
\[
\operatorname{Hom}_{\operatorname{D}(Y)}^i(C,\mathcal O_y)=0\quad\text{for all $i\le0$.}
\] An easy spectral sequence argument (see \cite[Example~2.2]{Br1}) shows that $H_i(C)=0$ for all $i\le0$. Taking homology sheaves of the above triangle gives $H_0(E)=\mathcal O_y$, which proves the claim.
\Step6 We have now proved that $Y$ is non\-singular, and that for any pair of points $y_1,y_2\in Y$, the homo\-morphisms
\[
\Phi\colon\operatorname{Ext}^i_Y(\mathcal O_{y_1},\mathcal O_{y_2})\to
\operatorname{\hbox{$G$}-Ext}^i_M(\mathcal O_{Z_{y_1}},\mathcal O_{Z_{y_2}})
\] are isomorphisms. By assumption, the action of $G$ on $M$ is such that $\omega_M$ is trivial as a $G$-sheaf on an open neighbourhood of each orbit $G\cdot x\subset M$. This implies that
\[
\mathcal O_{Z_y}\otimes\omega_M\cong\mathcal O_{Z_y}
\] in $\operatorname{Coh}^G(M)$, for each $y\in Y$. Applying Theorem~\ref{tring2} shows that $\Phi$ is an equivalence of categories.
\Step7 It remains to show that $\tau\colon Y\to X$ is crepant. Take a point $x\in X=M/G$. The equivalence $\Phi$ restricts to give an equivalence between the full subcategories $\operatorname{D}_x(Y)\subset\operatorname{D}(Y)$ and $\operatorname{D}^G_x(M)\subset \operatorname{D}^G(M)$ consisting of objects supported on the fibre $\tau^{-1}(x)$ and the orbit $\pi^{-1}(x)$ respectively.
The category $\operatorname{D}^G_x(M)$ has trivial Serre functor because $\omega_M$ is trivial as a $G$-sheaf on a neighbourhood of $\pi^{-1}(x)$. Thus $\operatorname{D}_x(Y)$ also has trivial Serre functor and Lemma~\ref{crepancy} gives the result.
This completes the proof of Theorem~\ref{second} in the case that $Y$ is projective.
\section{The quasi\-projective case}
In this section we complete the proof of Theorem~\ref{second}. Once again, take notation as in the introduction. The only problem with the argument of the last section is that when $M$ is not projective, Grothendieck duality in the form we need applies only to objects with compact support. Thus if we restrict $\Phi$ to a functor
\[
\Phi_{\mathrm c}\colon\operatorname{D_{\mathrm c}}(Y)\to\operatorname{D_{\mathrm c}^G}(M),
\] the argument of the last section carries through to show that $Y$ is non\-singular and crepant and that $\Phi_{\mathrm c}$ is an equivalence. It remains to show that $\Phi$ itself is an equivalence.
\Step8 The functor $\Phi$ has a right adjoint
\[
\Upsilon(\kern.0mm\smash{-}\kern.0mm)\,=\,[p_*\circ q^!(\kern.0mm\smash{-}\kern.0mm)]^G
\,=\,[\mathbf R\pi_{Y}{}_*(\omega_{Z/M}\Ltensor\pi_M^*(\kern.0mm\smash{-}\kern.0mm))]^G.
\] As before, the composition $\Upsilon\circ\Phi$ is given by
\[
\mathbf R\pi_2{}_*(\operatorname{\mathcal Q}\Ltensor\pi_1^*(\kern.0mm\smash{-}\kern.0mm)),
\] where $\pi_1$ and $\pi_2$ are the projections of $Y\times Y$ onto its factors, and $\operatorname{\mathcal Q}$ is some object of $\operatorname{D}(Y\times Y)$.
Since $\Phi_{\mathrm c}$ is an equivalence, $\Upsilon\Phi\mathcal O_y=\mathcal O_y$ for any point $y\in Y$, and it follows that $\operatorname{\mathcal Q}$ is actually the pushforward of a line bundle $L$ on $Y$ to the diagonal in $Y\times Y$. The functor $\Upsilon\circ\Phi$ is then just twisting by $L$, and to show that $\Phi$ is fully faithful we must show that $L$ is trivial.
There is a morphism of functors $\varepsilon\colon\operatorname{id}\to\Upsilon\circ\Phi$, which for any point $y\in Y$ gives a commutative diagram
\[
\begin{CD} \mathcal O_Y &@>\varepsilon(\mathcal O_Y)>> &L \\ @VfVV && @VVL\otimes fV \\ \mathcal O_y &@>\varepsilon(\mathcal O_y)>> &\,\mathcal O_y
\end{CD}
\] where $f$ is non\-zero. Since $\varepsilon$ is an isomorphism on the subcategory $\operatorname{D_{\mathrm c}}(Y)$, the maps $\varepsilon(\mathcal O_y)$ are all isomorphisms, so the section $\varepsilon(\mathcal O_Y)$ is an isomorphism.
\Step9 The fact that $\Phi$ is an equivalence follows from Lemma~\ref{extra} once we show that
\[
\Upsilon(E)\cong 0 \implies E\cong 0 \quad
\text{for any object $E$ of $\operatorname{D}^G(M)$.}
\] Suppose $\Upsilon(E)\cong 0$. Using the adjunction for $(\Phi,\Upsilon)$,
\[
\operatorname{Hom}^i_{\operatorname{D}^G(M)}(B,E)=0 \quad\text{for all $i$,}
\] whenever $B\cong\Phi(A)$ for some object $A\in\operatorname{D}(Y)$. In particular, this holds for any $B$ with compact support.
If $E$ is nonzero, let $D=G\cdot x$ be an orbit of $G$ contained in the support of $E$. Let $i\colon D\hookrightarrow M$ denote the inclusion, a projective equi\-variant morphism of schemes. Then the adjunction morphism $i_* i^!(E)\to E$ is non\-zero, which gives a contradiction.
This completes the proof of Theorem~\ref{second}. \qed
\section{Nakamura's conjecture}
Recall that in Theorem~\ref{second} we took the space $Y$ to be an irreducible component of $\operatorname{\hbox{$G$}-Hilb}{M}$. Note that when $Y$ is non\-singular and $\Phi$ is an equivalence, $Y$ is actually a connected component. This is simply because for any point $y\in Y$, the bijection
\[
\Phi\colon\operatorname{Ext}^1_Y(\mathcal O_y,\mathcal O_y)\to\operatorname{\hbox{$G$}-Ext}^1_M(\mathcal O_{Z_y},\mathcal O_{Z_y})
\] identifies the tangent space of $Y$ at $y$ with the tangent space of $\operatorname{\hbox{$G$}-Hilb}{M}$ at $y$. In this section we wish to go further and prove that when $M$ has dimension 3, $\operatorname{\hbox{$G$}-Hilb}{M}$ is in fact connected.
\paragraph{Proof of Nakamura's conjecture} Suppose for a contradiction that there exists a \hbox{$G$-cluster} $Z\subset M$ not contained among the $\{Z_y: y\in Y\}$. Since $\Phi$ is an equivalence we can take an object $E\in\operatorname{D_{\mathrm c}}(Y)$ such that $\Phi(E)=\mathcal O_Z$. The argument of Section~\ref{bike}, Step~3 shows that for any point $y\in Y$
\[
\operatorname{Hom}_{\operatorname{D}(Y)}^i(E,\mathcal O_y)=\operatorname{\hbox{$G$}-Ext}_M^i(\mathcal O_Z,\mathcal O_{Z_y})=0\quad
\mbox{unless }1\le i\le 2.
\] This implies that $E$ has homological dimension 1, or more precisely, that $E$ is quasi-isomorphic to a complex of locally free sheaves of the form \begin{equation} \label{ouch} 0\to L_2\lRa{f} L_1\to 0. \end{equation} But $\mathcal O_Z$ is supported on some $G$-orbit in $M$, so $E$ is supported on a fibre of $Y$, and hence in codimension $\ge1$. It follows that the complex~(\ref{ouch}) is exact on the left, so $E\cong\operatorname{coker} f[1]$. In particular $[E]=-[\operatorname{coker} f]$ in the Grothendieck group $K_{\mathrm c}(Y)$ of $\operatorname{D_{\mathrm c}}(Y)$.
Let $y$ be a point of the fibre that is the support of $E$. By Lemma~\ref{Kgrps} below, $[\mathcal O_{Z_y}]=[\mathcal O_Z]$ in $K^G_{\mathrm c}(M)$, so that $[\mathcal O_y]=[E]$ in $K_{\mathrm c}(Y)$, since the equivalence $\Phi$ gives an isomorphism of Grothendieck groups.
Let $\overline{Y}$ be a non\-singular projective variety with an open inclusion $i\colon Y\hookrightarrow \overline{Y}$. The functor $i_*\colon\operatorname{D_{\mathrm c}}(Y)\to\operatorname{D}(\overline{Y})$ induces a map on K groups, so $[\operatorname{coker} f]=-[\mathcal O_y]$ in $K_{\mathrm c}(\overline{Y})$. But this contradicts Riemann--Roch, because if $L$ is a sufficiently ample line bundle on $\overline{Y}$, then $\operatorname{\chi}(\operatorname{coker} f\otimes L)$ and $\operatorname{\chi}(\mathcal O_y\otimes L)$ are both positive.
\begin{lemma} \label{Kgrps} If $Z_1$ and $Z_2$ are two $G$-clusters on $M$ supported on the same orbit then the corresponding elements $[\mathcal O_{Z_1}]$ and $[\mathcal O_{Z_2}]$ in the Grothendieck group $K^G_{\mathrm c}(M)$ of $\operatorname{D}^G_{\mathrm c}(M)$ are equal. \end{lemma}
\begin{pf} We need to show that, as $G$-sheaves,
$\mathcal O_{Z_1}$ and $\mathcal O_{Z_2}$ have composition series with the same simple factors. Suppose that they are both supported on the $G$-orbit $D=G\cdot x\subset M$ and let $H$ be the stabiliser subgroup of $x$ in $G$. The restriction functor is an equivalence of categories from finite length $G$-sheaves supported on $D$ to finite length $H$-sheaves supported at $x$. The reverse equivalence is the induction functor $\left(\kern.0mm\smash{-}\kern.0mm\otimes_{\mathbb C[H]}\mathbb C[G]\right)$. Since the restriction of a $G$-cluster supported on $D$ is an $H$-cluster supported at $x$, it is sufficient to prove the result for $H$-clusters supported at $x$.
If $\{\rho_0,\cdots,\rho_k\}$ are the irreducible representations of $H$, then we claim that the simple $H$-sheaves supported at $x$ are precisely
\[
\{S_i=\mathcal O_x\otimes\rho_i:0\le i\le k\}.
\] These sheaves are certainly simple, since they are simple as $\mathbb C[H]$-modules. On the other hand, any $H$-sheaf $E$ supported at $x$ has a nonzero ordinary sheaf morphism $\mathcal O_x\to E$. By Lemma~\ref{last} there must be a nonzero $H$-sheaf morphism $S_i\to E$, for some $i$, and, if $E$ were simple, then this would have to be an isomorphism.
Thus a composition series as an $H$-sheaf is also a composition series as a $\mathbb C[H]$-module. Hence all $H$-clusters supported at $x$ have the same composition factors as $H$-sheaves, since as $\mathbb C[H]$-modules they are all the regular representation of $H$. \end{pf}
\section{K theoretic consequences of equivalence}
In this section we put $M=\mathbb C^n$ and assume that the functor $\Phi$ is an equivalence of categories. This is always the case when $n\le 3$. The main point is that such an equivalence of derived categories immediately gives an isomorphism of the corresponding Grothendieck groups.
\subsection{Restricting to the exceptional fibres} Let $\operatorname{D}^G_0(\mathbb C^n)$ denote the full subcategory of $\operatorname{D}^G(\mathbb C^n)$ consisting of objects supported at the origin of $\mathbb C^n$. Similarly, let $\operatorname{D}_0(Y)$ denote the full subcategory of $\operatorname{D}(Y)$ consisting of objects supported on the subscheme $\tau^{-1}(\pi(0))$ of $Y$.
The equivalence $\Phi$ induces an equivalence
\[
\Phi_0\colon\operatorname{D}_0(Y)\to\operatorname{D}^G_0(\mathbb C^n),
\] so we obtain a diagram
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{ccc}\operatorname{D}(Y) &\lRa{\Phi} &\operatorname{D}^G(\mathbb C^n) \\
\bigg\uparrow && \bigg\uparrow \\
\operatorname{D}_0(Y) &\lRa{\Phi} &\operatorname{D}^G_0(\mathbb C^n)
\end{array}
\] in which the vertical arrows are embeddings of categories.
Note that the Euler characteristic gives natural bilinear forms on both sides; if $E$ and $F$ are objects of $\operatorname{D}^G(\mathbb C^n)$ and $\operatorname{D}^G_0(\mathbb C^n)$ respectively, then we can compute
\[
\operatorname{\chi}^G(E,F)=\sum_i(-1)^i \dim \operatorname{Hom}_{\operatorname{D}^G(\mathbb C^n)}(E,F[i]),
\] since the fact that the cohomology of $F$ has finite length implies that the Ext groups are nonvanishing only in finitely many degrees. Similarly, we can compute the ordinary Euler characteristic on the left. The fact that $\Phi$ is an equivalence of categories commuting with the shift functors immediately gives
\[
\operatorname{\chi}^G(\Phi(A),\Phi(B))=\operatorname{\chi}(A,B),
\] for any objects $A$ of $\operatorname{D}(Y)$ and $B$ of $\operatorname{D}_0(Y)$.
\subsection{Equivalence of K groups}
Let $K(Y)$, $K^G(\mathbb C^n)$, $K_0(Y)$ and $K_0^G(\mathbb C^n)$ be the Grothendieck groups of the corresponding derived categories. The equivalences of categories from the last section immediately give isomorphisms of these groups. The following lemma is proved in the same way as in Gonzalez-Sprinberg and Verdier \cite[Proposition 1.4]{GV}.
\begin{lemma} The maps that send a representation $\rho$ of $G$ to the $G$-sheaves $\rho\otimes\mathcal O_{\mathbb C^n}$ and $\rho\otimes\mathcal O_0$ on $\mathbb C^n$ give ring isomorphisms of the representation ring $R(G)$ with $K^G(\mathbb C^n)$ and $K_0^G(\mathbb C^n)$ respectively. \end{lemma}
We obtain a diagram of groups
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{ccc}
K(Y) &\lRa{\varphi} & R(G) \\
\scriptstyle{i}\bigg\uparrow\hphantom{\scriptstyle{i}} &&
\hphantom{\scriptstyle{j}}\bigg\uparrow\scriptstyle{j} \\
K_0(Y) &\lRa{\varphi} & R(G)
\end{array}
\] in which the horizontal maps are isomorphisms but the vertical maps are not. In fact, if $Q$ is the representation induced by the inclusion $G\subset\operatorname{SL}(n,\mathbb C)$, then the map $j$ is multiplication by
\[
r=\sum_{i=0}^n (-1)^i \Lambda^i Q\in R(G).
\] This formula is obtained by considering a Koszul resolution of $\mathcal O_0$ on $M$, as in \cite[Proposition 1.4]{GV}. For example, in the case $n=2$ one has $r=2-Q$.
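The case $n=2$ mentioned above can be checked in one line; the only inputs are that $\Lambda^0 Q$ is the trivial representation and that $\Lambda^2 Q=\det Q$ is trivial because $G\subset\operatorname{SL}(2,\mathbb C)$:

```latex
% Worked check of r = 2 - Q in the case n = 2:
r = \sum_{i=0}^{2} (-1)^i \Lambda^i Q
  = \Lambda^0 Q - \Lambda^1 Q + \Lambda^2 Q
  = 1 - Q + \det Q
  = 2 - Q.
```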
The above bilinear forms descend to give pairings on the Grothendieck groups. These forms are non\-degenerate because if $\{\rho_0,\cdots,\rho_k\}$ are the irreducible representations of $G$ then the corresponding bases
\[
\{\rho_i\otimes\mathcal O_{\mathbb C^n}\}_{i=0}^k\subset K^G(\mathbb C^n) \quad\text{and}\quad \{\rho_i\otimes\mathcal O_{0}\}_{i=0}^k\subset K^G_0(\mathbb C^n)
\] are dual with respect to the pairing $\operatorname{\chi}^G(\kern.0mm\smash{-}\kern.0mm,\kern.0mm\smash{-}\kern.0mm)$. Applying $\varphi^{-1}$ gives dual bases
\[
\{\mathcal R_i\}_{i=0}^k\subset K(Y) \quad\text{and}\quad \{\mathcal S_i\}_{i=0}^k\subset K_0(Y)
\] as in Ito and Nakajima \cite{IN}.
\section{Topological K~theory and physics}
With notation as in the introduction, suppose that $M$ is projective, and further that $Y$ is non\-singular and $\Phi\colon\operatorname{D}(Y)\to\operatorname{D}^G(M)$ is an equivalence. For example suppose that $n=2$ or $3$.
\subsection{K~theory and the orbifold Euler number} Let $\mathcal K^*(Y)$ denote the topological complex K~theory of $Y$ and $\mathcal K_G^*(M)$ the $G$-equi\-variant topological K~theory of $M$. There are natural forgetful maps
\[
\alpha_Y\colon K(Y)\to \mathcal K^0(Y)
\quad\text{and}\quad \alpha_M\colon K^G(M)\to \mathcal K_G^0(M).
\] Since $\Phi$ and its inverse $\Psi$ are defined as correspondences, we may define correspondences
\[
\varphi\colon \mathcal K^*(Y)\to \mathcal K_G^*(M)
\quad\text{and}\quad
\psi\colon \mathcal K_G^*(M)\to\mathcal K^*(Y)
\] compatible with the maps $\alpha$, using the functors $\otimes$, $f^*$ and $f_*$ (also written $f_!$) on topological K~theory, which extend to equi\-variant K~theory, as usual, because they are canonical. Note that the definition and compatibility of $f_*$ is non\-trivial; see \cite{AH} for more details. But now the fact that $\Phi$ and $\Psi$ are mutually inverse implies that $\varphi$ and $\psi$ are mutually inverse, that is, we have a graded isomorphism \begin{equation} \label{Kiso}
\mathcal K^*(Y)\cong \mathcal K_G^*(M). \end{equation}
Atiyah and Segal \cite{AS} observed that the physicists' orbifold Euler number of $M/G$ is the Euler characteristic of $\mathcal K_G^*(M)$, that is,
\[
e(M,G)=\dim \mathcal K_G^0(M)\otimes\mathbb Q -\dim \mathcal K_G^1(M)\otimes\mathbb Q.
\] On the other hand, since the Chern character gives a $\mathbb Z/2$ graded isomorphism $\mathcal K^*(Y)\otimes\mathbb Q \cong H^*(Y,\mathbb Q)$, the Euler characteristic of $\mathcal K^*(Y)$ is just the ordinary Euler number $e(Y)$ of $Y$. Hence the isomorphism~(\ref{Kiso}) on topological K~theory provides a natural explanation for the physicists' Euler number conjecture
\[
e(M,G)=e(Y).
\] This was verified in the case $n=2$ as a consequence of the original McKay correspondence (cf.\ \cite{AS}). It was proved for $n=3$ by Roan \cite{Roa}, in the more general setting of quasi\-projective Gorenstein orbifolds, since the numerical statement reduces to the local linear case $M=\mathbb C^3$, $G\subset\operatorname{SL}(3,\mathbb C)$.
\subsection{An example: the Kummer surface}
One of the first interesting cases of the isomorphism~(\ref{Kiso}) is when $M$ is an Abelian surface (topologically, a 4-torus $T^4$), $G=\mathbb Z/2$ acting by the involution $-1$ and $Y$ is a K3 surface. In this case $Y$ is a non\-singular Kummer surface, having 16 disjoint $-2$-curves $C_1,\dots,C_{16}$ coming from resolving the images in $M/G$ of the 16 $G$-fixed points $x_1,\dots,x_{16}$ in $M$. Write $V=\{x_1,\dots,x_{16}\}$ for this fixed point set.
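As a numerical sanity check (not from the text): the physicists' orbifold Euler number can also be computed directly from the standard Dixon--Harvey--Vafa--Witten commuting-pairs formula $e(M,G)=\frac1{|G|}\sum_{gh=hg}e(M^{\langle g,h\rangle})$, and for the Kummer example it indeed matches $e(\mathrm{K3})=24$. The short script below is an illustrative sketch under these assumptions.

```python
# Orbifold Euler number of M = T^4 with G = Z/2 = {1, s} acting by -1,
# computed via the commuting-pairs formula (the usual physicists'
# definition; this formula is not stated in the text above). All pairs
# commute since G is abelian, and M^{g,h} is the common fixed locus.
euler_of_common_fixed_locus = {
    ("1", "1"): 0,   # all of T^4, whose Euler number is 0
    ("1", "s"): 16,  # Fix(s) = the 16 two-torsion points
    ("s", "1"): 16,
    ("s", "s"): 16,
}
e_orb = sum(euler_of_common_fixed_locus.values()) // 2  # divide by |G| = 2
assert e_orb == 24  # equals e(Y) for the Kummer K3 surface, as predicted
print(e_orb)
```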
On the Abelian surface $M$ there are 32 flat line $G$-bundles, arising from a choice of 2 $G$-actions on each of the 16 square roots of $\mathcal O_M$. Each such flat line $G$-bundle $L(\rho)$ is characterised by a map $\rho\colon V\to \mathbb F_2=\{0,1\}$ such that at a fixed point $x\in V$ the group $G$ acts on the fibre $L_x$ with weight $(-1)^{\rho(x)}$. Now the set $V$ naturally has the structure of an affine $4$-space over $\mathbb F_2$ and the maps $\rho$ that occur are precisely the affine linear maps, including the two constant maps corresponding to the two actions on $\mathcal O_M$.
On the other hand, on the K3 surface $Y$ one may consider the lattice $\mathbb Z^V\subset H^2(Y,\mathbb Z)$ spanned by $C_1,\dots,C_{16}$ and the smallest primitive sublattice $\Lambda$ containing $\mathbb Z^V$. The elements of $\Lambda$ give precisely the rational linear combinations of the divisors $C_1,\dots,C_{16}$ which are themselves divisors. It is easy to see that $\mathbb Z^V\subset\Lambda\subset(\frac12\mathbb Z)^V$ and it can also be shown that the image of $\Lambda$ in the quotient $(\frac12\mathbb Z)^V/\mathbb Z^V\cong\mathbb F_2^V$ consists of precisely the affine linear maps on $V$ (see Barth, Peters and Van de Ven \cite[Chapter~VIII, Proposition 5.5]{BPV}).
We claim that under the correspondence $\Psi$, the flat line $G$-bundle $L(\rho)$ is taken to the line bundle $\mathcal O_Y(D(\rho))$, where
\[
D(\rho)=\frac12\Bigl( \sum_i \rho(x_i) C_i \Bigr).
\] To check the claim note that $\mathcal O_M$ is taken to $\mathcal O_Y$, and that, in the local linear McKay correspondence for $\mathbb C^2/(\mathbb Z/2)$, the irreducible representation of weight $-1$ is taken to the line bundle $\mathcal O(\frac12 C)$, dual to the $-2$-curve $C$ resolving the singularity.
\noindent Tom Bridgeland, \\ Department of Mathematics and Statistics, \\ University of Edinburgh, King's Buildings, \\ Mayfield Road, Edinburgh EH9 3JZ, UK \\ e-mail: tab@maths.ed.ac.uk
\noindent Alastair King, \\ Department of Mathematical Sciences, \\ University of Bath, \\ Bath BA2 7AY, England \\ e-mail: a.d.king@maths.bath.ac.uk
\noindent Miles Reid, \\ Math Inst., Univ.\ of Warwick, \\ Coventry CV4 7AL, England \\ e-mail: miles@maths.warwick.ac.uk
\end{document} |
\begin{document}
\title{Parity proofs of the Kochen-Specker theorem based on 60 complex rays in four dimensions} \begin{abstract}
It is pointed out that the 60 complex rays in four dimensions associated with a system of two qubits yield over $10^{9}$ critical parity proofs of the Kochen-Specker theorem. The geometrical properties of the rays are described, an overview of the parity proofs contained in them is given, and examples of some of the proofs are exhibited. \end{abstract}
\section{\label{sec:Intro}Introduction}
In a recent paper \cite{Waegell2010} we showed that the 60 real rays in four dimensions derived from the vertices of a 600-cell yield over a hundred million critical parity proofs of the Kochen-Specker (KS) theorem \cite{KS1967}. In the present paper we point out that the 60 complex rays in four dimensions connected with a pair of qubits lead to a similarly large number of parity proofs. Both these 60-ray systems can be regarded as generalizations, in different ways, of the set of 24 real rays in four dimensions used by Peres \cite{Peres1991} and others \cite{Kernaghan1994,Cabello1996,Aravind2000,Pavicic2010,Waegell2011a} to give proofs of the KS theorem.\\
The real and complex systems of 60 rays have 12 rays in common, but do not share a single parity proof. The proofs in both systems are interesting for several reasons: they are very transparent and take no more than simple counting to verify; they are extremely numerous, numbering over a hundred million in each case; and they possess many geometrical features of interest \cite{BengtssonBook}. In addition to these purely mathematical reasons, they are of physical interest because they can be converted into experimentally testable Bell inequalities, as pointed out in \cite{Aolita,Cabello2010}, and also tie in with discussions of contextuality \cite{Klyachko,Liang,Bengtsson} and nonlocality \cite{Cabello2008} in four-state systems. Four-state systems provide an attractive setting for a discussion of these ideas because they can be realized experimentally using ions \cite{Kirchmair}, neutrons \cite{Bartosik}, photons \cite{Amselem} and nuclear spins \cite{Moussa}. The present parity proofs could also find application in protocols such as quantum key distribution \cite{BPPeres}, random number generation \cite{Svozil} and parity oblivious transfer \cite{Spekkens}.\\
The plan of this paper is as follows. Sec.~\ref{sec:60-105} introduces the 60 rays and 105 bases connected with a system of two qubits and points out their important properties. It also explains what a ``parity proof'' is (as well as the notion of a {\it critical} parity proof) and introduces the two alternative symbols we use for such proofs. Sec.~\ref{sec:40-40} introduces a 40-40 subsystem (i.e., one consisting of 40 rays and 40 bases) of the 60-105 system and explores its parity proofs. The proofs are of just six different sizes (although each consists of a number of geometrically distinct varieties), but their replicas under symmetry run into the thousands. We point out that a projective configuration originating in the work of Desargues holds the key to a geometrical construction of all the proofs of the smallest size. Sec.~\ref{sec:36-36} introduces a 36-36 subsystem whose parity proofs are more varied than those of the 40-40 system, but which can also be completely delineated. Sec.~\ref{sec:Full} discusses a variety of other interesting parity proofs in the 60-105 system. Finally, Sec.~\ref{sec:Discussion} contains some concluding remarks.
\section{\label{sec:60-105}The 60 complex rays and their bases}
A system of two qubits has 15 nontrivial observables, which can be taken to be the Pauli operators of the qubits and their direct products. These 15 observables form 15 triads of commuting observables that are shown in the first column of Table \ref{60States}. The simultaneous eigenstates of these triads give rise to the 60 rays (as we will term the eigenstates) shown in the second column of Table \ref{60States}. These 60 rays form the 105 bases shown in Table \ref{tab:Bases}, with the top 15 already being present in Table \ref{60States}. This 60-105 system provides the grand framework within which all the parity proofs of this paper are embedded. Table \ref{tab:Bases} has the following important properties:\\ \\(1) It contains two different types of bases: 15 ``pure'' bases that arise as eigenstates of the triads in Table \ref{60States} (and shown in the top part of the table), and 90 ``hybrid'' bases that arise through a mating (or hybridization) of a pair of pure bases. For example, the hybrid bases 1 2 15 16 and 3 4 13 14 arise through a mating of the pure bases 1 2 3 4 and 13 14 15 16 and consist of equal (and complementary) mixes of rays from the two of them. \\ \\(2) Each ray occurs in exactly seven bases. In addition to the brief symbol 60-105 we have introduced for this system, we will sometimes use the expanded symbol $60_{7}$-$105_{4}$ to indicate that each of the rays occurs in 7 bases (the subscript 4 simply indicates that each of the bases consists of four rays, a fact which is true of all the configurations to be discussed in this paper). Note the check $60\times7=105\times4$ on the numbers entering the expanded symbol.\\ \\(3) Each ray is orthogonal to 15 others and occurs three times with three of them and once each with the twelve others in the seven bases it occurs in. The fact that all the orthogonalities between the rays are represented in their basis table is sometimes conveyed by saying that the system is {\bf saturated}.
For a saturated system, such as the present one, the basis table is completely equivalent to the Kochen-Specker diagram of the rays. \\ \\(4) The 60-105 system contains (in ten different ways) the 24-24 system of rays and bases used by Peres and others to give proofs of the KS theorem. However it also contains an (astronomically!) large number of new proofs that cannot be reduced to those of the Peres system simply by shedding rays or bases. It is these new proofs that are the subject of this paper.\\ \\(5) The symmetry group of the 60-105 system (i.e. the group of unitary transformations that keeps its rays and bases invariant as a whole) has 11,520 elements in it. This can be seen as follows. Each symmetry operation maps a particular pure basis, say 1 2 3 4, into any of the pure bases, including itself, and so is described by an operator of the form $U = \mid x\rangle\langle 1\mid + a\mid y\rangle\langle 2\mid + b\mid z\rangle\langle 3\mid + c\mid w\rangle\langle 4\mid$, where $x\,y\,z\,w$ is the basis into which the mapping occurs. However all 24 permutations of the final basis states are allowed, and the phase factors $a,b$ and $c$ can each take on the values $\pm 1,\pm i$, subject to the restriction that only an even number of $i$'s occur. The order of the symmetry group is therefore $15\times24\times32 = 11,520$.\\
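The counting in item (5) is easy to verify mechanically. The sketch below (illustrative helper code, not part of the paper) enumerates the phase triples $(a,b,c)$ with an even number of imaginary entries and multiplies out the three factors:

```python
# Verify the order 15 x 24 x 32 = 11,520 of the symmetry group of the
# 60-105 system, following the counting argument in the text.
from itertools import product

phases = [1, -1, 1j, -1j]
# Triples (a, b, c) with an even number of entries equal to +i or -i:
triples = [t for t in product(phases, repeat=3)
           if sum(1 for z in t if z in (1j, -1j)) % 2 == 0]
assert len(triples) == 32  # 2^3 purely real, plus C(3,2) * 2^3 with two imaginary

order = 15 * 24 * len(triples)  # target basis x permutations x phases
print(order)  # 11520
```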
\begin{table}[ht] \centering
\begin{tabular}{|c | c c c c |} \hline Operators & ++ & + -- & -- + & -- -- \\ \hline $Z_1$, $Z_2$, $Z_1$$Z_2$ & $1=1000$ & $2=0100$ & $3=0010$ & $4=0001$ \\ $X_1$, $X_2$, $X_1$$X_2$ & $5=1111$ & $6=1\overline{1}1\overline{1}$ & $7=11\overline{1}\overline{1}$ & $8=1\overline{1}\overline{1}1$ \\ $Y_1$, $Y_2$, $Y_1$$Y_2$ & $9=1ii\overline{1}$ & $10=1\overline{i}i1$ & $11=1i\overline{i}1$ & $12=1\overline{i}\overline{i}\overline{1}$ \\ $Z_1$, $X_2$, $Z_1$$X_2$ & $13=1100$ & $14=1\overline{1}00$ & $15=0011$ & $16=001\overline{1}$ \\ $X_1$, $Y_2$, $X_1$$Y_2$ & $17=1i1i$ & $18=1\overline{i}1\overline{i}$ & $19=1i\overline{1}\overline{i}$ & $20=1\overline{i}\overline{1}i$ \\ $Y_1$, $Z_2$, $Y_1$$Z_2$ & $21=10i0$ & $22=010i$ & $23=10\overline{i}0$ & $24=010\overline{i}$ \\ $Z_1$, $Y_2$, $Z_1$$Y_2$ & $25=1i00$ & $26=1\overline{i}00$ & $27=001i$ & $28=001\overline{i}$ \\ $X_1$, $Z_2$, $X_1$$Z_2$ & $29=1010$ & $30=0101$ & $31=10\overline{1}0$ & $32=010\overline{1}$ \\ $Y_1$, $X_2$, $Y_1$$X_2$ & $33=11ii$ & $34=1\overline{1}i\overline{i}$ & $35=11\overline{i}\overline{i}$ & $36=1\overline{1}\overline{i}i$ \\ $Z_1$$X_2$, $X_1$$Z_2$, $Y_1$$Y_2$ & $37=111\overline{1}$ & $38=11\overline{1}1$ & $39=1\overline{1}11$ & $40=1\overline{1}\overline{1}\overline{1}$ \\ $X_1$$Y_2$, $Y_1$$X_2$, $Z_1$$Z_2$ & $41=100i$ & $42=01\overline{i}0$ & $43=01i0$ & $44=100\overline{i}$ \\ $Y_1$$Z_2$, $Z_1$$Y_2$, $X_1$$X_2$ & $45=1ii1$ & $46=1\overline{i}i\overline{1}$ & $47=1i\overline{i}\overline{1}$ & $48=1\overline{i}\overline{i}1$ \\ $Z_1$$Z_2$, $X_1$$X_2$, $-Y_1$$Y_2$ & $49=1001$ & $50=100\overline{1}$ & $51=0110$ & $52=01\overline{1}0$ \\ $Z_1$$X_2$, $X_1$$Y_2$, $-Y_1$$Z_2$ & $53=11\overline{i}i$ & $54=11i\overline{i}$ & $55=1\overline{1}ii$ & $56=1\overline{1}\overline{i}\overline{i}$ \\ $Z_1$$Y_2$, $X_1$$Z_2$, $-Y_1$$X_2$ & $57=1i1\overline{i}$ & $58=1i\overline{1}i$ & $59=1\overline{i}1i$ & $60=1\overline{i}\overline{1}\overline{i}$ \\ \hline \end{tabular} \caption{The 15 triads of commuting 
observables for a pair of qubits and their simultaneous eigenstates. $X_{i},Y_{i},Z_{i}$ $(i=1,2)$ are the Pauli operators of the two qubits. The eigenstates, or rays, are numbered from 1 to 60. The components of the (unnormalized) rays are given in an orthonormal basis, with commas omitted between components and a bar over a number indicating its negative. The eigenvalue signatures of the eigenstates with respect to the first two observables in the defining triad are shown at the top (the signature for the third observable is always such as to make the product of the three signatures a $+$).} \label{60States} \end{table}
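The entries of Table \ref{60States} are easy to check by machine. The following sketch (illustrative code, not part of the paper; pure Python, exact complex arithmetic) verifies two sample rows, confirming that the listed rays are simultaneous eigenstates with the stated eigenvalue signatures:

```python
# Check two rows of the table of 60 rays: the rays listed for a triad must be
# simultaneous eigenstates of the triad's operators with the stated signatures.

def kron(A, B):
    """Kronecker product of two square matrices (lists of lists)."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def is_eigvec(M, v, s):
    """Does M v = s v hold? (entries here are exact Gaussian integers)"""
    return all(abs(w - s * x) < 1e-9 for w, x in zip(matvec(M, v), v))

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]

X1, X2 = kron(X, I2), kron(I2, X)
Y1, Y2 = kron(Y, I2), kron(I2, Y)

# Row "X1, X2, X1X2": rays 5-8, with signatures ++, +-, -+, --
rays_X = {(+1, +1): [1, 1, 1, 1],   (+1, -1): [1, -1, 1, -1],
          (-1, +1): [1, 1, -1, -1], (-1, -1): [1, -1, -1, 1]}
# Row "Y1, Y2, Y1Y2": rays 9-12
rays_Y = {(+1, +1): [1, 1j, 1j, -1], (+1, -1): [1, -1j, 1j, 1],
          (-1, +1): [1, 1j, -1j, 1], (-1, -1): [1, -1j, -1j, -1]}

for (s1, s2), v in rays_X.items():
    assert is_eigvec(X1, v, s1) and is_eigvec(X2, v, s2)
for (s1, s2), v in rays_Y.items():
    assert is_eigvec(Y1, v, s1) and is_eigvec(Y2, v, s2)
print("rows (X1, X2) and (Y1, Y2) of the ray table check out")
```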
\begin{table}[ht] \centering
\begin{tabular}{|c c c c | c c c c | c c c c | c c c c | c c c c|} \hline 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 & 30 & 31 & 32 & 33 & 34 & 35 & 36 & 37 & 38 & 39 & 40 \\ 41 & 42 & 43 & 44 & 45 & 46 & 47 & 48 & 49 & 50 & 51 & 52 & 53 & 54 & 55 & 56 & 57 & 58 & 59 & 60 \\ \hline 1 & 2 & 15 & 16 & 1 & 2 & 27 & 28 & 1 & 3 & 22 & 24 & 1 & 3 & 30 & 32 & 1 & 4 & 42 & 43 \\ 1 & 4 & 51 & 52 & 2 & 3 & 41 & 44 & 2 & 3 & 49 & 50 & 2 & 4 & 21 & 23 & 2 & 4 & 29 & 31 \\ 3 & 4 & 13 & 14 & 3 & 4 & 25 & 26 & 5 & 6 & 19 & 20 & 5 & 6 & 31 & 32 & 5 & 7 & 14 & 16 \\ 5 & 7 & 34 & 36 & 5 & 8 & 46 & 47 & 5 & 8 & 50 & 52 & 6 & 7 & 45 & 48 & 6 & 7 & 49 & 51 \\ 6 & 8 & 13 & 15 & 6 & 8 & 33 & 35 & 7 & 8 & 17 & 18 & 7 & 8 & 29 & 30 & 9 & 10 & 23 & 24 \\ 9 & 10 & 35 & 36 & 9 & 11 & 18 & 20 & 9 & 11 & 26 & 28 & 9 & 12 & 38 & 39 & 9 & 12 & 49 & 52 \\ 10 & 11 & 37 & 40 & 10 & 11 & 50 & 51 & 10 & 12 & 17 & 19 & 10 & 12 & 25 & 27 & 11 & 12 & 21 & 22 \\ 11 & 12 & 33 & 34 & 13 & 14 & 27 & 28 & 13 & 15 & 34 & 36 & 13 & 16 & 39 & 40 & 13 & 16 & 55 & 56 \\ 14 & 15 & 37 & 38 & 14 & 15 & 53 & 54 & 14 & 16 & 33 & 35 & 15 & 16 & 25 & 26 & 17 & 18 & 31 & 32 \\ 17 & 19 & 26 & 28 & 17 & 20 & 43 & 44 & 17 & 20 & 54 & 56 & 18 & 19 & 41 & 42 & 18 & 19 & 53 & 55 \\ 18 & 20 & 25 & 27 & 19 & 20 & 29 & 30 & 21 & 22 & 35 & 36 & 21 & 23 & 30 & 32 & 21 & 24 & 47 & 48 \\ 21 & 24 & 53 & 56 & 22 & 23 & 45 & 46 & 22 & 23 & 54 & 55 & 22 & 24 & 29 & 31 & 23 & 24 & 33 & 34 \\ 25 & 28 & 46 & 48 & 25 & 28 & 59 & 60 & 26 & 27 & 45 & 47 & 26 & 27 & 57 & 58 & 29 & 32 & 38 & 40 \\ 29 & 32 & 58 & 60 & 30 & 31 & 37 & 39 & 30 & 31 & 57 & 59 & 33 & 36 & 42 & 44 & 33 & 36 & 57 & 60 \\ 34 & 35 & 41 & 43 & 34 & 35 & 58 & 59 & 37 & 38 & 55 & 56 & 37 & 39 & 58 & 60 & 37 & 40 & 49 & 52 \\ 38 & 39 & 50 & 51 & 38 & 40 & 57 & 59 & 39 & 40 & 53 & 54 & 41 & 42 & 54 & 56 & 41 & 43 & 57 & 60 \\ 41 & 44 & 51 & 52 & 42 & 43 & 49 & 50 & 42 & 44 & 58 & 59 & 43 
& 44 & 53 & 55 & 45 & 46 & 53 & 56 \\ 45 & 47 & 59 & 60 & 45 & 48 & 50 & 52 & 46 & 47 & 49 & 51 & 46 & 48 & 57 & 58 & 47 & 48 & 54 & 55 \\ \hline \end{tabular} \caption{Basis table of the 60 rays of Table \ref{60States}. The rays form 105 bases, with each ray occurring in 7 bases. The 15 pure bases are shown in the top section, and the 90 hybrid bases below.} \label{tab:Bases} \end{table}
We now explain what we mean by a parity proof. A parity proof of the KS theorem in $d$ dimensions, with $d$ even, is a set of $R$ rays and $B$ bases, with $B$ odd, with the property that each of the $R$ rays occurs an even number of times among the $B$ bases. Such a set provides a proof of the KS theorem because it is impossible to assign 0/1 values to the rays, in a noncontextual fashion, in such a way that each basis has one 1 and $d-1$ 0's in it. Indeed, summing the assigned values over all the bases gives the odd number $B$, since each basis contributes exactly one 1; but the same sum, taken ray by ray, is even, since each ray's value is counted an even number of times. It is this even-odd contradiction that gives the parity proof its name. The parity proofs presented in this paper will all be in dimension $d = 4$.\\
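The even-odd counting argument can be sketched in code. The following is a minimal illustration, not the search machinery used for the actual proofs; the function names are ours, and the three-basis example is a toy with $d = 2$.

```python
from collections import Counter
from itertools import product

def is_parity_proof(bases):
    """Structural test: an odd number of bases, with every ray
    occurring an even number of times among them."""
    counts = Counter(ray for basis in bases for ray in basis)
    return len(bases) % 2 == 1 and all(c % 2 == 0 for c in counts.values())

def has_valid_assignment(bases):
    """Brute-force search for a noncontextual 0/1 assignment with
    exactly one 1 per basis (feasible only for toy-sized systems)."""
    rays = sorted({ray for basis in bases for ray in basis})
    for values in product((0, 1), repeat=len(rays)):
        v = dict(zip(rays, values))
        if all(sum(v[r] for r in basis) == 1 for basis in bases):
            return True
    return False

# A toy parity proof with d = 2: three bases, each ray occurring twice.
toy = [("a", "b"), ("b", "c"), ("c", "a")]
assert is_parity_proof(toy)
assert not has_valid_assignment(toy)
```

Dropping any one basis from the toy example makes a valid assignment possible again, which is the notion of basis-criticality discussed below.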
A parity proof involving $R$ rays and $B$ bases will be denoted by the symbol $R$-$B$. However, this symbol is not very informative because it does not reveal the multiplicities (i.e. the numbers of occurrences) of the rays in the proof. This defect can be remedied by using the expanded symbol $R^{'}_{m'}R^{''}_{m''}\ldots-B_{d}$, which reveals that there are $R^{'}$ rays of multiplicity $m^{'}$, $R^{''}$ rays of multiplicity $m^{''}$, etc., in the $B$ bases of the proof (each of which contains $d$ rays). The foregoing numbers are not all independent but obey the constraint $m^{'}R^{'} + m^{''}R^{''} + \ldots = d\cdot B$. The expanded symbol makes it easy to check the validity of a parity proof. Consider a parity proof involving 40 rays and 25 bases with the expanded symbol $32_{2}6_{4}2_{6}$-$25_{4}$. This symbol reveals that there are 32 rays of multiplicity two, 6 rays of multiplicity four and 2 rays of multiplicity six, which together account for the $32\cdot 2 + 6\cdot 4 + 2\cdot 6 = 100$ rays in the 25 bases of the proof. We will usually use the brief symbol for a parity proof and resort to the expanded one only when the additional detail proves helpful. We should mention that we will sometimes use the brief and expanded symbols not for parity proofs but for systems of rays and bases housing parity proofs. For example, we have already used the symbols 60-105 and $60_{7}$-$105_{4}$ to refer to the system of rays and bases within which all our parity proofs are embedded. It should be obvious from the context whether a symbol refers to a parity proof or to a system of rays and bases housing parity proofs.\\
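The counting constraint is easy to verify mechanically; a minimal sketch (the function name and the dictionary encoding of the expanded symbol are ours):

```python
def check_symbol(multiplicities, B, d=4):
    """Check m'R' + m''R'' + ... = d*B, with the expanded symbol
    encoded as {multiplicity: number_of_rays_with_that_multiplicity}."""
    return sum(m * R for m, R in multiplicities.items()) == d * B

# The 40-25 proof with expanded symbol 32_2 6_4 2_6 - 25_4:
# 32*2 + 6*4 + 2*6 = 100 = 4*25.
assert check_symbol({2: 32, 4: 6, 6: 2}, B=25)
```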
The only parity proofs we will present in this paper are ones that are {\bf basis-critical}. An $R$-$B$ proof will be said to be basis-critical if dropping even a single basis from it makes it possible to assign noncontextual 0/1 values to the rays in the surviving $B-1$ bases in such a way that each basis has one 1 and $d-1$ 0's in it. Basis-critical proofs are of interest because they permit the tightest experimental tests of the KS theorem. Proofs that are not basis-critical can always be reduced to ones that are by shedding bases, and so we will ignore them. The use of basis-criticality as a filter allows us to pare down our list of proofs considerably by eliminating redundant proofs that contain smaller proofs within them.
\section{\label{sec:40-40}The 40-40 subsystem and its parity proofs}
The 60-105 system has six 40-40 subsystems in it. These subsystems are of interest because they have a simple pattern of parity proofs. We explain how to obtain these subsystems, and then proceed to extract all the parity proofs in one of them.\\
The 15 triads of observables in Table \ref{60States} can be grouped into six sets of five (see Table \ref{tab:MUBs}) in such a way that each set defines a system of mutually unbiased bases (MUBs) \cite{Durt}. Two bases are said to be mutually unbiased if the absolute value of the inner product of any normalized ray of one with any normalized ray of the other is always the same (and equal to $\frac{1}{2}$ in dimension 4). The maximum number of MUBs in dimension 4 is five. The triads in each row of Table \ref{tab:MUBs} define a maximal set of MUBs, with any two rows having one triad (and therefore one basis) in common. A $40_{4}$-$40_{4}$ system can be obtained by dropping the 20 rays defined by the triads in any row of Table \ref{tab:MUBs} from the full set of 60 rays and keeping only the 40 bases made up exclusively of the remaining 40 rays. This construction can be described by the schematic equation\\
$(60_{7}$-$105_{4})$ - (a maximal set of MUBs) = $40_{4}$-$40_{4}$ (in 6 different ways) \\
Table \ref{40States} shows the 40 rays that result if one drops the triads in the last row of Table \ref{tab:MUBs}, and Fig. \ref{tab:40Bases} shows the 40 bases formed by these rays. {\it Note that the numbering of the rays in Table \ref{40States} and Fig. \ref{tab:40Bases} is different from that in the earlier sections, and it is this new numbering that will be used throughout this section.}\\
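Mutual unbiasedness can be checked directly from ray components. A short sketch using the $Z$-, $X$- and $Y$-eigenbases as listed in Table \ref{40States} (the helper names are ours):

```python
import itertools
import math

def normalize(v):
    n = math.sqrt(sum(abs(x) ** 2 for x in v))
    return [x / n for x in v]

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

# Three of the pure bases, given by their unnormalized components.
Z = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
X = [(1, 1, 1, 1), (1, -1, 1, -1), (1, 1, -1, -1), (1, -1, -1, 1)]
Y = [(1, 1j, 1j, -1), (1, -1j, 1j, 1), (1, 1j, -1j, 1), (1, -1j, -1j, -1)]

def mutually_unbiased(A, B):
    """In dimension 4, |<a|b>| must equal 1/2 for every cross pair."""
    return all(abs(abs(inner(normalize(a), normalize(b))) - 0.5) < 1e-12
               for a, b in itertools.product(A, B))

assert all(mutually_unbiased(A, B)
           for A, B in itertools.combinations([Z, X, Y], 2))
```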
\begin{table}[ht] \centering
\begin{tabular}{|c | c c c c c |} \hline 1 & \{$Z_1$, $Z_2$, $Z_1$$Z_2$\} & \{$X_1$, $X_2$, $X_1$$X_2$\} & \{$Y_1$, $Y_2$, $Y_1$$Y_2$\} & \{$Z_1$$X_2$, $X_1$$Y_2$, $-Y_1$$Z_2$\} & \{$Z_1$$Y_2$, $X_1$$Z_2$, $-Y_1$$X_2$\}\\ 2 & \{$Z_1$, $Z_2$, $Z_1$$Z_2$\} & \{$X_1$, $Y_2$, $X_1$$Y_2$\} & \{$Y_1$, $X_2$, $Y_1$$X_2$\} & \{$Z_1$$X_2$, $X_1$$Z_2$, $Y_1$$Y_2$\} & \{$Y_1$$Z_2$, $Z_1$$Y_2$, $X_1$$X_2$\}\\ 3 & \{$X_1$, $X_2$, $X_1$$X_2$\} & \{$Y_1$, $Z_2$, $Y_1$$Z_2$\} & \{$Z_1$, $Y_2$, $Z_1$$Y_2$\} & \{$Z_1$$X_2$, $X_1$$Z_2$, $Y_1$$Y_2$\} & \{$X_1$$Y_2$, $Y_1$$X_2$, $Z_1$$Z_2$\}\\ 4 & \{$Y_1$, $Y_2$, $Y_1$$Y_2$\} & \{$Z_1$, $X_2$, $Z_1$$X_2$\} & \{$X_1$, $Z_2$, $X_1$$Z_2$\} & \{$X_1$$Y_2$, $Y_1$$X_2$, $Z_1$$Z_2$\} & \{$Y_1$$Z_2$, $Z_1$$Y_2$, $X_1$$X_2$\}\\ 5 & \{$Z_1$, $X_2$, $Z_1$$X_2$\} & \{$X_1$, $Y_2$, $X_1$$Y_2$\} & \{$Y_1$, $Z_2$, $Y_1$$Z_2$\} & \{$Z_1$$Z_2$, $X_1$$X_2$, $-Y_1$$Y_2$\} & \{$Z_1$$Y_2$, $X_1$$Z_2$, $-Y_1$$X_2$\}\\ 6 & \{$Z_1$, $Y_2$, $Z_1$$Y_2$\} & \{$X_1$, $Z_2$, $X_1$$Z_2$\} & \{$Y_1$, $X_2$, $Y_1$$X_2$\} & \{$Z_1$$Z_2$, $X_1$$X_2$, $-Y_1$$Y_2$\} & \{$Z_1$$X_2$, $X_1$$Y_2$, $-Y_1$$Z_2$\}\\ \hline \end{tabular} \caption{The 6 sets of maximal MUBs made up of the 15 observables of a two-qubit system. Each row shows the five triads of observables making up a maximal MUB set, with any two rows having exactly one triad in common.} \label{tab:MUBs} \end{table}
\begin{table}[ht] \centering
\begin{tabular}{|c | c c c c |} \hline Operators & ++ & + -- & -- + & -- -- \\ \hline $Z_1$, $Z_2$, $Z_1$$Z_2$ & $1=1000$ & $2=0100$ & $3=0010$ & $4=0001$ \\ $X_1$, $X_2$, $X_1$$X_2$ & $5=1111$ & $6=1\overline{1}1\overline{1}$ & $7=11\overline{1}\overline{1}$ & $8=1\overline{1}\overline{1}1$ \\ $Y_1$, $Y_2$, $Y_1$$Y_2$ & $9=1ii\overline{1}$ & $10=1\overline{i}i1$ & $11=1i\overline{i}1$ & $12=1\overline{i}\overline{i}\overline{1}$ \\ $Z_1$, $X_2$, $Z_1$$X_2$ & $13=1100$ & $14=1\overline{1}00$ & $15=0011$ & $16=001\overline{1}$ \\ $X_1$, $Y_2$, $X_1$$Y_2$ & $17=1i1i$ & $18=1\overline{i}1\overline{i}$ & $19=1i\overline{1}\overline{i}$ & $20=1\overline{i}\overline{1}i$ \\ $Y_1$, $Z_2$, $Y_1$$Z_2$ & $21=10i0$ & $22=010i$ & $23=10\overline{i}0$ & $24=010\overline{i}$ \\ $Z_1$$X_2$, $X_1$$Z_2$, $Y_1$$Y_2$ & $25=111\overline{1}$ & $26=11\overline{1}1$ & $27=1\overline{1}11$ & $28=1\overline{1}\overline{1}\overline{1}$ \\ $X_1$$Y_2$, $Y_1$$X_2$, $Z_1$$Z_2$ & $29=100i$ & $30=01\overline{i}0$ & $31=01i0$ & $32=100\overline{i}$ \\ $Y_1$$Z_2$, $Z_1$$Y_2$, $X_1$$X_2$ & $33=1ii1$ & $34=1\overline{i}i\overline{1}$ & $35=1i\overline{i}\overline{1}$ & $36=1\overline{i}\overline{i}1$ \\ $Z_1$$Y_2$, $X_1$$Z_2$, $-Y_1$$X_2$ & $37=1i1\overline{i}$ & $38=1i\overline{1}i$ & $39=1\overline{i}1i$ & $40=1\overline{i}\overline{1}\overline{i}$ \\ \hline \end{tabular} \caption{The 40-ray system obtained by discarding the five triads of observables in the last row of Table \ref{tab:MUBs} and keeping only the eigenstates associated with the remaining triads. {\it Note that the numbering of the rays here is different from that in Table \ref{60States}}.} \label{40States} \end{table}
The 40-40 system is small enough that all its parity proofs can be determined through an exhaustive computer search. A complete list of the proofs in this system is shown in Table \ref{tab:40List}. An interesting feature of this list is that the proofs come in {\bf basis-complementary} pairs, i.e. pairs that have no bases in common and whose union gives back the entire 40-40 set. Tables \ref{tab:30-15}, \ref{tab:32-17} and \ref{tab:34-19} give one example of each of the pairs of basis-complementary proofs listed in Table \ref{tab:40List}. It is an interesting numerological observation that the total number of distinct parity proofs in this system, which is twice the total of the last column in Table \ref{tab:40List}, or 32768, can also be expressed as $2^{15}$, where 15 is the number of hybrid bases in each of the proofs. This resonates with a similar observation we made recently \cite{Waegell2011a}: the 24-24 Peres system has $512 = 2^9$ distinct parity proofs in it, where 9 is again the number of hybrid bases involved in the proofs.\\
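The pairing into basis-complementary proofs is itself a parity phenomenon: every ray of the $40_{4}$-$40_{4}$ system occurs an even number of times (four) among an even number of bases (forty), so removing a parity proof leaves an odd number of bases that again covers each ray an even number of times. A toy sketch of this observation with $d = 2$ (not the actual 40-40 data):

```python
from collections import Counter

def is_parity_proof(bases):
    counts = Counter(ray for basis in bases for ray in basis)
    return len(bases) % 2 == 1 and all(c % 2 == 0 for c in counts.values())

def complement_is_parity_proof(system, proof):
    """In a system with an even number of bases in which every ray has
    even multiplicity, the basis-complement of a parity proof is one too."""
    rest = [b for b in system if b not in proof]
    return is_parity_proof(rest)

# Toy system: six bases, each ray occurring twice; the first three
# bases form a (structural) parity proof, and so do the other three.
system = [(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (6, 4)]
proof = [(1, 2), (2, 3), (3, 1)]
assert is_parity_proof(proof)
assert complement_is_parity_proof(system, proof)
```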
\begin{figure}
\caption{The 40 bases formed by the 40 rays of Table \ref{40States}. Each ray occurs in exactly four bases, so this is a $40_{4}$-$40_{4}$ system. The ten bases closest to the center are pure bases, while the remaining thirty are hybrids. The four pure bases along any edge of the central pentagram are mutually unbiased. The pure bases mate in pairs to produce all the hybrids. The lighter lines pass through a pair of pure bases and the two hybrids they give rise to.}
\label{tab:40Bases}
\end{figure}
\begin{table}[ht] \centering
\begin{tabular}{|c |c |c |} \hline Parity Proof & Complementary Proof & Number\\ \hline 30-15 ($30_{2}$-$15_{4}$) & 40-25 ($30_{2}10_{4}$-$25_{4}$) & 64 \\ 32-17 ($30_{2}2_{4}$-$17_{4}$) & 38-23 ($30_{2}8_{4}$-$23_{4}$) & 2880 \\ 34-19 ($30_{2}4_{4}$-$19_{4}$) & 36-21 ($30_{2}6_{4}$-$21_{4}$) & 13440 \\ \hline \end{tabular} \caption{Parity proofs in the 40-40 system. Each line shows a pair of basis-complementary proofs, by both their abbreviated and expanded symbols, and indicates the number of replicas of each in the 40-40 set.} \label{tab:40List} \end{table}
The five other 40-40 subsystems contain proofs that are identical to the ones above and related to them by symmetry. The role of symmetry in creating replicas of a basic proof pattern should now be clear. For example, the 30-15 proof shown in Table \ref{tab:30-15} has $32\times6 = 192$ replicas under symmetry in the full 60-105 set (on recalling that the 30-15 proofs come in two geometrically distinct types of equal numbers). In this case, and many of the others to be discussed, the amplification due to symmetry comes in two steps: first there is the replication due to the symmetries of the subsystem itself, and then there is a further amplification due to the symmetries that exchange that subsystem with other similar subsystems in the 60-105 set.\\
\begin{table}[ht] \centering
\begin{tabular}{|| c c c c || c c c c | c c c c | c c c c ||} \hline\hline 1 & 2 & 3 & 4 & \textbf{1} & \textbf{2} & \textbf{15} & \textbf{16} & 6 & 8 & 13 & 15 & \textbf{17} & \textbf{20} & \textbf{31} & \textbf{32}\\ 5 & 6 & 7 & 8 & \textbf{1} & \textbf{3} & \textbf{22} & \textbf{24} & 7 & 8 & 17 & 18 & 18 & 19 & 29 & 30\\ 9 & 10 & 11 & 12 & 1 & 4 & 30 & 31 & \textbf{9} & \textbf{10} & \textbf{23} & \textbf{24} & 21 & 24 & 35 & 36\\ 13 & 14 & 15 & 16 & \textbf{2} & \textbf{3} & \textbf{29} & \textbf{32} & 9 & 11 & 18 & 20 & \textbf{22} & \textbf{23} & \textbf{33} & \textbf{34}\\ 17 & 18 & 19 & 20 & 2 & 4 & 21 & 23 & \textbf{9} & \textbf{12} & \textbf{26} & \textbf{27} & \textbf{25} & \textbf{27} & \textbf{38} & \textbf{40}\\ 21 & 22 & 23 & 24 & 3 & 4 & 13 & 14 & 10 & 11 & 25 & 28 & 26 & 28 & 37 & 39\\ 25 & 26 & 27 & 28 & \textbf{5} & \textbf{6} & \textbf{19} & \textbf{20} & \textbf{10} & \textbf{12} & \textbf{17} & \textbf{19} & \textbf{29} & \textbf{31} & \textbf{37} & \textbf{40}\\ 29 & 30 & 31 & 32 & \textbf{5} & \textbf{7} & \textbf{14} & \textbf{16} & 11 & 12 & 21 & 22 & 30 & 32 & 38 & 39\\ 33 & 34 & 35 & 36 & 5 & 8 & 34 & 35 & 13 & 16 & 27 & 28 & 33 & 35 & 39 & 40\\ 37 & 38 & 39 & 40 & \textbf{6} & \textbf{7} & \textbf{33} & \textbf{36} & \textbf{14} & \textbf{15} & \textbf{25} & \textbf{26} & \textbf{34} & \textbf{36} & \textbf{37} & \textbf{38}\\ \hline\hline \end{tabular} \caption{A 30-15 proof (bold font) and its complementary 40-25 proof (ordinary font) in the 40-40 set of Fig. \ref{tab:40Bases}.} \label{tab:30-15} \end{table}
\begin{table}[ht] \centering
\begin{tabular}{|| c c c c || c c c c | c c c c | c c c c ||} \hline\hline \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{1} & \textbf{2} & \textbf{15} & \textbf{16} & 6 & 8 & 13 & 15 & 17 & 20 & 31 & 32\\ \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{1} & \textbf{3} & \textbf{22} & \textbf{24} & 7 & 8 & 17 & 18 & \textbf{18} & \textbf{19} & \textbf{29} & \textbf{30}\\ 9 & 10 & 11 & 12 & \textbf{1} & \textbf{4} & \textbf{30} & \textbf{31} & \textbf{9} & \textbf{10} & \textbf{23} & \textbf{24} & 21 & 24 & 35 & 36\\ 13 & 14 & 15 & 16 & 2 & 3 & 29 & 32 & \textbf{9} & \textbf{11} & \textbf{18} & \textbf{20} & \textbf{22} & \textbf{23} & \textbf{33} & \textbf{34}\\ 17 & 18 & 19 & 20 & 2 & 4 & 21 & 23 & 9 & 12 & 26 & 27 & 25 & 27 & 38 & 40\\ 21 & 22 & 23 & 24 & 3 & 4 & 13 & 14 & \textbf{10} & \textbf{11} & \textbf{25} & \textbf{28} & \textbf{26} & \textbf{28} & \textbf{37} & \textbf{39}\\ 25 & 26 & 27 & 28 & \textbf{5} & \textbf{6} & \textbf{19} & \textbf{20} & 10 & 12 & 17 & 19 & \textbf{29} & \textbf{31} & \textbf{37} & \textbf{40}\\ 29 & 30 & 31 & 32 & \textbf{5} & \textbf{7} & \textbf{14} & \textbf{16} & 11 & 12 & 21 & 22 & 30 & 32 & 38 & 39\\ 33 & 34 & 35 & 36 & \textbf{5} & \textbf{8} & \textbf{34} & \textbf{35} & 13 & 16 & 27 & 28 & \textbf{33} & \textbf{35} & \textbf{39} & \textbf{40}\\ 37 & 38 & 39 & 40 & 6 & 7 & 33 & 36 & \textbf{14} & \textbf{15} & \textbf{25} & \textbf{26} & 34 & 36 & 37 & 38\\ \hline\hline \end{tabular} \caption{A 32-17 proof (bold font) and its complementary 38-23 proof (ordinary font) in the 40-40 set of Fig. \ref{tab:40Bases}.} \label{tab:32-17} \end{table}
\begin{table}[ht] \centering
\begin{tabular}{|| c c c c || c c c c | c c c c | c c c c ||} \hline\hline \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{1} & \textbf{2} & \textbf{15} & \textbf{16} & 6 & 8 & 13 & 15 & 17 & 20 & 31 & 32\\ \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{1} & \textbf{3} & \textbf{22} & \textbf{24} & 7 & 8 & 17 & 18 & \textbf{18} & \textbf{19} & \textbf{29} & \textbf{30}\\ \textbf{9} & \textbf{10} & \textbf{11} & \textbf{12} & \textbf{1} & \textbf{4} & \textbf{30} & \textbf{31} & \textbf{9} & \textbf{10} & \textbf{23} & \textbf{24} & 21 & 24 & 35 & 36\\ 13 & 14 & 15 & 16 & 2 & 3 & 29 & 32 & \textbf{9} & \textbf{11} & \textbf{18} & \textbf{20} & \textbf{22} & \textbf{23} & \textbf{33} & \textbf{34}\\ 17 & 18 & 19 & 20 & 2 & 4 & 21 & 23 & \textbf{9} & \textbf{12} & \textbf{26} & \textbf{27} & \textbf{25} & \textbf{27} & \textbf{38} & \textbf{40}\\ 21 & 22 & 23 & 24 & 3 & 4 & 13 & 14 & 10 & 11 & 25 & 28 & 26 & 28 & 37 & 39\\ 25 & 26 & 27 & 28 & \textbf{5} & \textbf{6} & \textbf{19} & \textbf{20} & 10 & 12 & 17 & 19 & \textbf{29} & \textbf{31} & \textbf{37} & \textbf{40}\\ 29 & 30 & 31 & 32 & \textbf{5} & \textbf{7} & \textbf{14} & \textbf{16} & 11 & 12 & 21 & 22 & 30 & 32 & 38 & 39\\ 33 & 34 & 35 & 36 & \textbf{5} & \textbf{8} & \textbf{34} & \textbf{35} & 13 & 16 & 27 & 28 & \textbf{33} & \textbf{35} & \textbf{39} & \textbf{40}\\ \textbf{37} & \textbf{38} & \textbf{39} & \textbf{40} & 6 & 7 & 33 & 36 & \textbf{14} & \textbf{15} & \textbf{25} & \textbf{26} & 34 & 36 & 37 & 38\\ \hline\hline \end{tabular} \caption{A 34-19 proof (bold font) and its complementary 36-21 proof (ordinary font) in the 40-40 set of Fig. \ref{tab:40Bases}.} \label{tab:34-19} \end{table}
Though the proofs in Table \ref{tab:40List} were discovered through a computer search, we later found simple constructions for them based on deletions of selected subsets of rays from the 40-40 system. The most interesting of these constructions is the one for the 30-15 proof, which we now describe. Let us define a {\bf Desarguesian configuration} as any two sets of ten objects with the property that each object of either set is associated with three objects of the other. The original example of such a configuration was provided by Desargues \cite{Hilbert}, who showed that if two triangles are perspective from a point (i.e. the joins of corresponding vertices pass through a point), then they are also perspective from a line (i.e. the extensions of their corresponding sides intersect in three points that lie on a line). The two sets of objects in this case are ten points (namely, the vertices of the two triangles, the perspective point and the three points in which pairs of their sides intersect) and ten lines (namely, the sides of the two triangles, the lines joining corresponding vertices to the perspective point and the perspective line), and the association between the points and the lines is one of incidence. Because the ten objects of either set are each associated with three of the other, the symbol $10_{3}$ is sometimes used to denote a Desarguesian configuration. The English mathematician Cayley gave another example of a Desarguesian configuration: he pointed out \cite{Coxeter} that the ten lines and ten planes determined by five points in general position in projective 3-space meet an arbitrary plane in the ten points and ten lines of a Desarguesian configuration. A geometrical model of a Desarguesian configuration is provided by a 20-gon whose alternate vertices represent the two sets of ten objects, with the edges and certain diagonals representing the associations between the objects of the two sets (see Fig. \ref{Desarg}).\\
\begin{figure}
\caption{Levi graph of the Desarguesian configuration, $10_{3}$. The solid and open circles represent two sets of ten objects. Each object of either set is associated with the three objects of the other to which it is connected by line segments.}
\label{Desarg}
\end{figure}
The 40-40 system contains two types of Desarguesian configurations that hold the key to the construction of the 30-15 proofs. Let us call the rays of the 40-40 system ``points" and define a ``line" as any set of three ``points" (i.e. rays) with the property that the components of one can be expressed as a linear combination of those of the other two. With these definitions it is straightforward to check that the 40-40 system contains 32 Desarguesian configurations of ``points" and ``lines", with each ``point" being incident with three ``lines" of its configuration and vice-versa. One example of such a configuration is provided by the ten rays (or ``points") 4,8,11,13,18,21,28,30,35 and 39 and the ten lines (4,8,28), (4,11,35), (4,18,39), (8,11,30), (8,21,39), (11,13,39), (13,18,35), (13,21,30), (18,21,28) and (28,30,35), where each line has been indicated by the three ``points" on it. If all the bases containing any of the foregoing rays are omitted from the 40 bases of Fig. \ref{tab:40Bases}, the remaining 15 bases give the 30-15 proof of Table \ref{tab:30-15}. A second type of Desarguesian configuration is obtained by defining the ``points" as before but replacing the ``lines" with ``triangles", where a ``triangle" is defined as any set of three rays that are mutually unbiased (i.e., the squared modulus of the overlap of any two normalized rays is 1/4) but nevertheless do not lie on a ``line". An example of a ``triangle" is given by the rays 1,5 and 27. There are exactly 32 Desarguesian configurations of this second kind in a 40-40 set, and omitting all bases involving any of the rays in one of them from the full 40-40 set again leads to a 30-15 proof (of a geometrically different kind from the earlier one). Thus we have shown how all the 30-15 proofs in a 40-40 system can be obtained by dropping bases picked out by one of the two types of Desarguesian configurations in it.
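These definitions can be tested numerically: three rays lie on a ``line" exactly when the $3\times 4$ matrix of their components has rank 2, while the rays of a ``triangle" give rank 3. A pure-Python sketch using the rays quoted above, numbered as in Table \ref{40States} (the rank routine is ours):

```python
def matrix_rank(rows, tol=1e-9):
    """Row rank of a small complex matrix via Gaussian elimination."""
    m = [list(map(complex, row)) for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > tol), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > tol:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Rays 4, 8, 28 of Table 40States form a "line":
# ray 8 - ray 28 = (0,0,0,2) = 2 * ray 4, so the rank is 2.
line = [(0, 0, 0, 1), (1, -1, -1, 1), (1, -1, -1, -1)]
assert matrix_rank(line) == 2

# Rays 1, 5, 27 form a "triangle": mutually unbiased but independent.
triangle = [(1, 0, 0, 0), (1, 1, 1, 1), (1, -1, 1, 1)]
assert matrix_rank(triangle) == 3
```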
\section{\label{sec:36-36}The 36-36 subsystem and its parity proofs}
The 60 rays possess several subsets of 12 rays with the property that each ray is orthogonal to seven others in the group. Table \ref{12-gonTablesA} shows the 15 such sets of rays that exist within the 60 rays. We will term any such set of 12 rays a {\bf dodecagon} because its Kochen-Specker diagram, shown in Fig. \ref{KS12-gon}, has the form of a dodecagon with all its edges and thirty of its diagonals drawn in. Table \ref{12-gonTablesB} shows the six different coverings of all 60 rays by sets of five non-overlapping dodecagons. If one keeps all the rays in any three dodecagons in any row of Table \ref{12-gonTablesB}, one gets 36 rays, and if one picks out all the hybrid bases formed by these rays one gets a 36-36 system. The total number of 36-36 systems that can be constructed in this way is $10\times6 = 60$.\\
\begin{table}[ht] \centering
\begin{tabular}{||c | c c c c c c c c c c c c |} \hline
Index & \multicolumn{12}{|c|}{12 rays in a dodecagon} \\ \hline 1 & 1 & 3 & 2 & 4 & 13 & 15 & 14 & 16 & 25 & 27 & 26 & 28\\ 2 & 1 & 2 & 3 & 4 & 21 & 22 & 23 & 24 & 29 & 30 & 31 & 32\\ 3 & 1 & 2 & 4 & 3 & 41 & 42 & 44 & 43 & 49 & 51 & 50 & 52\\ 4 & 5 & 7 & 6 & 8 & 17 & 19 & 18 & 20 & 29 & 31 & 30 & 32\\ 5 & 5 & 6 & 7 & 8 & 13 & 14 & 15 & 16 & 33 & 34 & 35 & 36\\ 6 & 5 & 6 & 8 & 7 & 45 & 46 & 48 & 47 & 49 & 50 & 51 & 52\\ 7 & 9 & 11 & 10 & 12 & 21 & 23 & 22 & 24 & 33 & 35 & 34 & 36\\ 8 & 9 & 10 & 11 & 12 & 17 & 18 & 19 & 20 & 25 & 26 & 27 & 28\\ 9 & 9 & 10 & 12 & 11 & 37 & 38 & 40 & 39 & 50 & 49 & 51 & 52\\ 10 & 13 & 14 & 16 & 15 & 37 & 39 & 38 & 40 & 53 & 55 & 54 & 56\\ 11 & 17 & 18 & 20 & 19 & 41 & 43 & 42 & 44 & 53 & 54 & 55 & 56\\ 12 & 21 & 22 & 24 & 23 & 45 & 47 & 46 & 48 & 54 & 53 & 55 & 56\\ 13 & 25 & 26 & 28 & 27 & 45 & 46 & 47 & 48 & 57 & 59 & 58 & 60\\ 14 & 29 & 30 & 32 & 31 & 37 & 38 & 39 & 40 & 57 & 58 & 59 & 60\\ 15 & 33 & 34 & 36 & 35 & 41 & 42 & 43 & 44 & 58 & 57 & 59 & 60\\ \hline\hline \end{tabular} \caption{The 15 ``dodecagons" in the set of 60 rays.} \label{12-gonTablesA} \end{table}
\begin{table}[ht] \centering
\begin{tabular}{||c | c c c c c |} \hline
Index & \multicolumn{5}{|c|}{Covering} \\ \hline 1 & 1 & 4 & 9 & 12 & 15\\ 2 & 1 & 6 & 7 & 11 & 14\\ 3 & 2 & 5 & 9 & 11 & 13\\ 4 & 2 & 6 & 8 & 10 & 15\\ 5 & 3 & 4 & 7 & 10 & 13\\ 6 & 3 & 5 & 8 & 12 & 14\\ \hline\hline \end{tabular} \caption{The six coverings of the 60 rays by five non-overlapping dodecagons, with the dodecagons numbered as in Table \ref{12-gonTablesA}.} \label{12-gonTablesB} \end{table}
\begin{figure}
\caption{Kochen-Specker diagram of the 12 rays in a ``dodecagon". The rays are shown as filled and open circles at the vertices of a dodecagon and numbered from 1 to 12. Each ray is orthogonal to seven others, six of the opposite type and one of the same type. The rays form the three (mutually unbiased) pure bases 1 2 3 4, 5 6 7 8 and 9 10 11 12 and the six hybrid bases 1 3 6 8, 1 3 10 12, 5 7 2 4, 5 7 10 12, 9 11 2 4 and 9 11 6 8. The pure bases consist of consecutive sets of four vertices along the perimeter of the 12-gon, while the hybrid bases are made up of a pair of open circles from one pure basis and a pair of filled circles from another. This configuration can be characterized by the symbol $12_{3}$-$9_{4}$, because each of the twelve rays occurs in three of the nine bases. If the rays in each of the rows of Table \ref{12-gonTablesA} are arranged at the vertices of this dodecagon, beginning at the vertex marked 1 and proceeding clockwise, their orthogonalities are represented faithfully by the lines in the figure and the nine bases formed by them can be picked out by replacing the numbers 1-12 in the above listings by the rays at those positions.}
\label{KS12-gon}
\end{figure}
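The incidence claims for the $12_{3}$-$9_{4}$ configuration can be verified by direct counting, reading each hybrid basis as the alternate-vertex pair of one pure basis joined to the alternate-vertex pair of another (which fixes the sixth hybrid as 9 11 6 8):

```python
# Vertex labels 1-12 of the dodecagon; three pure and six hybrid bases.
pure = [(1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12)]
hybrids = [(1, 3, 6, 8), (1, 3, 10, 12), (5, 7, 2, 4),
           (5, 7, 10, 12), (9, 11, 2, 4), (9, 11, 6, 8)]
bases = pure + hybrids

# Each of the twelve rays occurs in exactly three of the nine bases
# (the 12_3-9_4 symbol) ...
for ray in range(1, 13):
    assert sum(ray in basis for basis in bases) == 3

# ... and shares a basis with exactly seven of the other eleven rays.
for ray in range(1, 13):
    partners = {r for basis in bases if ray in basis for r in basis} - {ray}
    assert len(partners) == 7
```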
The 36-36 system (like the 40-40 system) is small enough to allow all its parity proofs to be determined through an exhaustive computer search. The complete list of proofs in this system is 18-9, 22-11, 24-13, 26-13, 26-15, 28-15 (34-21), 29-15 (35-21), 30-15, 30-17 (32-19), 31-17 (33-19), 32-17 (34-19), 32-19 (30-17), 33-19 (31-17), 34-19 (32-17), 34-21 (28-15), 35-21 (29-15) and 36-21, where we have used the abbreviated symbols for the proofs and indicated the basis-complementary proof to a given one after it in brackets, if it is basis-critical. Since the 36-36 system has an even number of bases, the basis-complement of any parity proof contained in it is automatically another parity proof. However, the complement need not be basis-critical, in which case we discard it. This explains why not every proof in the list is accompanied by its basis-complement. We should also point out that the first proof in this list, 18-9, is actually one of the six different types of parity proofs in the 24-24 Peres set. However, all the other proofs are new.\\
Tables \ref{36States} and \ref{BasisTable36-36} show an example of a 36-36 set and a pair of basis-complementary parity proofs in it.
\begin{table}[ht] \centering
\begin{tabular}{|c | c c c c |} \hline Operators & ++ & + -- & -- + & -- -- \\ \hline $Z_1$, $Z_2$, $Z_1$$Z_2$ & $1=1000$ & $2=0100$ & $3=0010$ & $4=0001$ \\ $X_1$, $X_2$, $X_1$$X_2$ & $5=1111$ & $6=1\overline{1}1\overline{1}$ & $7=11\overline{1}\overline{1}$ & $8=1\overline{1}\overline{1}1$ \\ $Y_1$, $Y_2$, $Y_1$$Y_2$ & $9=1ii\overline{1}$ & $10=1\overline{i}i1$ & $11=1i\overline{i}1$ & $12=1\overline{i}\overline{i}\overline{1}$ \\ $Z_1$, $X_2$, $Z_1$$X_2$ & $13=1100$ & $14=1\overline{1}00$ & $15=0011$ & $16=001\overline{1}$ \\ $X_1$, $Y_2$, $X_1$$Y_2$ & $17=1i1i$ & $18=1\overline{i}1\overline{i}$ & $19=1i\overline{1}\overline{i}$ & $20=1\overline{i}\overline{1}i$ \\ $Z_1$, $Y_2$, $Z_1$$Y_2$ & $21=1i00$ & $22=1\overline{i}00$ & $23=001i$ & $24=001\overline{i}$ \\ $X_1$, $Z_2$, $X_1$$Z_2$ & $25=1010$ & $26=0101$ & $27=10\overline{1}0$ & $28=010\overline{1}$ \\ $Z_1$$X_2$, $X_1$$Z_2$, $Y_1$$Y_2$ & $29=111\overline{1}$ & $30=11\overline{1}1$ & $31=1\overline{1}11$ & $32=1\overline{1}\overline{1}\overline{1}$ \\ $Z_1$$Z_2$, $X_1$$X_2$, $-Y_1$$Y_2$ & $33=1001$ & $34=100\overline{1}$ & $35=0110$ & $36=01\overline{1}0$ \\ \hline \end{tabular} \caption{The 36 rays obtained by keeping all rays in the first three dodecagons in the first row of Table \ref{12-gonTablesB}. The rays form nine pure bases, corresponding to the triads of observables shown in the first column. {\it Note that the numbering of the rays here is different from that in the earlier sections.}} \label{36States} \end{table}
\begin{table}[ht] \centering
\begin{tabular}{|c c c c | c c c c | c c c c | c c c c |} \hline \textbf{1} & \textbf{2} & \textbf{15} & \textbf{16} & \textbf{5} & \textbf{6} & \textbf{27} & \textbf{28} & 9 & 12 & 30 & 31 & 15 & 16 & 21 & 22\\ \textbf{1} & \textbf{2} & \textbf{23} & \textbf{24} & \textbf{5} & \textbf{7} & \textbf{14} & \textbf{16} & 9 & 12 & 33 & 36 & 17 & 18 & 27 & 28\\ \textbf{1} & \textbf{3} & \textbf{26} & \textbf{28} & \textbf{5} & \textbf{8} & \textbf{34} & \textbf{36} & 10 & 11 & 29 & 32 & \textbf{17} & \textbf{19} & \textbf{22} & \textbf{24}\\ \textbf{1} & \textbf{4} & \textbf{35} & \textbf{36} & 6 & 7 & 33 & 35 & 10 & 11 & 34 & 35 & \textbf{18} & \textbf{20} & \textbf{21} & \textbf{23}\\ 2 & 3 & 33 & 34 & 6 & 8 & 13 & 15 & 10 & 12 & 17 & 19 & 19 & 20 & 25 & 26\\ 2 & 4 & 25 & 27 & \textbf{7} & \textbf{8} & \textbf{17} & \textbf{18} & 10 & 12 & 21 & 23 & 25 & 28 & 30 & 32\\ 3 & 4 & 13 & 14 & 7 & 8 & 25 & 26 & 13 & 14 & 23 & 24 & \textbf{26} & \textbf{27} & \textbf{29} & \textbf{31}\\ \textbf{3} & \textbf{4} & \textbf{21} & \textbf{22} & 9 & 11 & 18 & 20 & 13 & 16 & 31 & 32 & 29 & 32 & 33 & 36\\ \textbf{5} & \textbf{6} & \textbf{19} & \textbf{20} & 9 & 11 & 22 & 24 & \textbf{14} & \textbf{15} & \textbf{29} & \textbf{30} & \textbf{30} & \textbf{31} & \textbf{34} & \textbf{35}\\ \hline \end{tabular} \caption{The 36 hybrid bases formed by the 36 rays of Table \ref{36States}. Each ray occurs in four bases, so this is a $36_{4}$-$36_{4}$ system. A $26_{2}2_{4}$-$15_{4}$ parity proof is shown in bold, while the remaining bases make up the basis-complementary $26_{2}8_{4}$-$21_{4}$ proof.} \label{BasisTable36-36} \end{table}
\section{\label{sec:Full} Other proofs in the 60-105 system}
So far we have identified two subsystems of the 60-105 system, namely the 40-40 and 36-36 systems, and given a complete account of the parity proofs in them. The only smaller subsystem of interest is the 24-24 Peres system, whose parity proofs have been completely mapped out \cite{Waegell2011a}. Some larger subsystems of the 60-105 system that are of interest are the 36-45 system (obtained from a 36-36 system by adding on the 9 pure bases formed by the 36 rays) and the 48-60 and 48-72 systems (obtained by keeping the rays in any four dodecagons in any row of Table \ref{12-gonTablesB} together with just the hybrid bases formed by them, or with both the hybrid and pure bases formed by them, respectively). We have explored these larger subsystems and found a huge number of new proofs in them. The search for these new proofs proved more time-consuming because a larger number of bases had to be explored and rays with multiplicities greater than two also occurred more frequently among the proofs. The full 60-105 system is of course the most difficult one to mine, for the various reasons mentioned, and also because of the very large number of non-critical proofs that are netted and have to be weeded out. The strategy of starting with the smallest subsystems and then working upwards, which we have adopted in this work, thus seems the most fruitful approach.\\
We give below a small sample of some of the other proofs we have found in the 60-105 system, grouped according to the number of bases in the proof. \\
{\indent} 19 bases: {\space} $35_{2}1_{6}$,{\space \space}$33_{2}1_{4}1_{6}$,{\space \space}$31_{2}2_{4}1_{6}$,{\space \space}$29_{2}3_{4}1_{6}$\\ \\ {\indent} 21 bases: {\space} $39_{2}1_{6}$,{\space \space}$37_{2}1_{4}1_{6}$,{\space \space}$35_{2}2_{4}1_{6}$,{\space \space}$33_{2}3_{4}1_{6}$,{\space \space}$31_{2}4_{4}1_{6}$,{\space \space}$29_{2}5_{4}1_{6}${\space \space}\\ \\ {\indent} 23 bases: {\space} $43_{2}1_{6}$,{\space \space}$41_{2}1_{4}1_{6}$,{\space \space}$39_{2}2_{4}1_{6}$,{\space \space}$37_{2}3_{4}1_{6}$,{\space \space}$35_{2}4_{4}1_{6}$,{\space \space}$29_{2}7_{4}1_{6}$\\ \\ {\indent} 29 bases: {\space} $36_{2}11_{4}$,{\space \space}$37_{2}9_{4}1_{6}$,{\space \space}$38_{2}7_{4}2_{6}$,{\space \space}$39_{2}5_{4}3_{6}$,{\space \space}$40_{2}3_{4}4_{6}$\\ \\ {\indent} 31 bases: {\space} $42_{2}10_{4}$,{\space \space}$43_{2}8_{4}1_{6}$,{\space \space}$44_{2}6_{4}2_{6}$,{\space \space}$45_{2}4_{4}3_{6}$,{\space \space}$46_{2}2_{4}4_{6}$\\ \\
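Each expanded symbol above can be checked against the constraint $m^{'}R^{'} + m^{''}R^{''} + \ldots = d\cdot B$ with $d = 4$. A small sketch (the space-separated string encoding of the symbols is ours):

```python
import re

def balances(symbol, B, d=4):
    """Sum R*m over the R_m factors of an expanded symbol such as
    '36_2 11_4' and compare with d*B."""
    return sum(int(R) * int(m)
               for R, m in re.findall(r"(\d+)_(\d+)", symbol)) == d * B

listing = {
    19: ["35_2 1_6", "33_2 1_4 1_6", "31_2 2_4 1_6", "29_2 3_4 1_6"],
    21: ["39_2 1_6", "37_2 1_4 1_6", "35_2 2_4 1_6",
         "33_2 3_4 1_6", "31_2 4_4 1_6", "29_2 5_4 1_6"],
    23: ["43_2 1_6", "41_2 1_4 1_6", "39_2 2_4 1_6",
         "37_2 3_4 1_6", "35_2 4_4 1_6", "29_2 7_4 1_6"],
    29: ["36_2 11_4", "37_2 9_4 1_6", "38_2 7_4 2_6",
         "39_2 5_4 3_6", "40_2 3_4 4_6"],
    31: ["42_2 10_4", "43_2 8_4 1_6", "44_2 6_4 2_6",
         "45_2 4_4 3_6", "46_2 2_4 4_6"],
}
assert all(balances(s, B) for B, symbols in listing.items() for s in symbols)
```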
As one goes from left to right in any row of the above listing, rays of multiplicity 2 get traded for a smaller number of rays of multiplicities 4 and 6. The appearance of rays of multiplicity 6 in many of these proofs is a new feature that was not present in the 40-40 and 36-36 subsystems. Table \ref{tab:47-29} shows an example of the last proof in the fourth line above. The proofs in the fourth and fifth lines illustrate an interesting phenomenon we term {\bf isomerism}. In chemistry, isomers are compounds that have the same chemical formula but different structural formulas. Analogously, we term two parity proofs isomers if they have the same abbreviated symbol but different expanded symbols, reflecting the fact that their structures are different. For example, all the proofs in the fourth line have the abbreviated symbol 47-29, but they involve different numbers of rays of multiplicities 2, 4 and 6. Similarly, the proofs in the fifth line are all 52-31 proofs but again differ in their multiplicities. Another way of stating the difference between isomers is that they are unitarily inequivalent to each other and thus geometrically distinct (unlike the various replicas of a given proof under symmetry). Isomerism can be more subtle than in the examples just discussed. The 60-105 system has a 30-15 parity proof involving 30 rays that each occur twice. The 600-cell \cite{Waegell2010} also has a 30-15 proof involving 30 rays that occur twice each. However, these two proofs are not identical but are isomers of each other. The simplest way of seeing this is to note that all the rays in the latter proof are real, whereas there is no choice of basis that will allow all the rays in the former proof to be made real. We have found hundreds of examples of isomers among the proofs we have obtained. Isomerism is experimentally significant, because different experimental arrangements would be needed to test proofs that are isomers of each other. \\
\begin{table}[ht] \centering
\begin{tabular}{|c c c c | c c c c | c c c c | c c c c | c c c c|} \hline 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 & 30 & 31 & 32 & 33 & 34 & 35 & 36 & 37 & 38 & 39 & 40 \\ 41 & 42 & 43 & 44 & 45 & 46 & 47 & 48 & 49 & 50 & 51 & 52 & 53 & 54 & 55 & 56 & 57 & 58 & 59 & 60 \\ \hline \textbf{1} & \textbf{2} & \textbf{15} & \textbf{16} & \textbf{1} & \textbf{2} & \textbf{27} & \textbf{28} & \textbf{1} & \textbf{3} & \textbf{22} & \textbf{24} & 1 & 3 & 30 & 32 & \textbf{1} & \textbf{4} & \textbf{42} & \textbf{43} \\ 1 & 4 & 51 & 52 & 2 & 3 & 41 & 44 & 2 & 3 & 49 & 50 & 2 & 4 & 21 & 23 & 2 & 4 & 29 & 31 \\ \textbf{3} & \textbf{4} & \textbf{13} & \textbf{14} & 3 & 4 & 25 & 26 & \textbf{5} & \textbf{6} & \textbf{19} & \textbf{20} & \textbf{5} & \textbf{6} & \textbf{31} & \textbf{32} & \textbf{5} & \textbf{7} & \textbf{14} & \textbf{16} \\ \textbf{5} & \textbf{7} & \textbf{34} & \textbf{36} & \textbf{5} & \textbf{8} & \textbf{46} & \textbf{47} & \textbf{5} & \textbf{8} & \textbf{50} & \textbf{52} & \textbf{6} & \textbf{7} & \textbf{45} & \textbf{48} & \textbf{6} & \textbf{7} & \textbf{49} & \textbf{51} \\ \textbf{6} & \textbf{8} & \textbf{13} & \textbf{15} & \textbf{6} & \textbf{8} & \textbf{33} & \textbf{35} & \textbf{7} & \textbf{8} & \textbf{17} & \textbf{18} & \textbf{7} & \textbf{8} & \textbf{29} & \textbf{30} & 9 & 10 & 23 & 24 \\ \textbf{9} & \textbf{10} & \textbf{35} & \textbf{36} & 9 & 11 & 18 & 20 & 9 & 11 & 26 & 28 & 9 & 12 & 38 & 39 & \textbf{9} & \textbf{12} & \textbf{49} & \textbf{52} \\ 10 & 11 & 37 & 40 & 10 & 11 & 50 & 51 & \textbf{10} & \textbf{12} & \textbf{17} & \textbf{19} & 10 & 12 & 25 & 27 & 11 & 12 & 21 & 22 \\ 11 & 12 & 33 & 34 & 13 & 14 & 27 & 28 & 13 & 15 & 34 & 36 & 13 & 16 & 39 & 40 & 13 & 16 & 55 & 56 \\ 14 & 15 & 37 & 38 & 14 & 15 & 53 & 54 & 14 & 16 & 33 & 35 & 15 & 16 & 25 & 26 & 17 & 18 & 31 & 32 \\ 17 & 19 & 26 & 28 & 17 & 20 & 43 & 44 & 17 & 20 & 
54 & 56 & 18 & 19 & 41 & 42 & 18 & 19 & 53 & 55 \\ \textbf{18} & \textbf{20} & \textbf{25} & \textbf{27} & 19 & 20 & 29 & 30 & 21 & 22 & 35 & 36 & 21 & 23 & 30 & 32 & 21 & 24 & 47 & 48 \\ 21 & 24 & 53 & 56 & \textbf{22} & \textbf{23} & \textbf{45} & \textbf{46} & 22 & 23 & 54 & 55 & 22 & 24 & 29 & 31 & \textbf{23} & \textbf{24} & \textbf{33} & \textbf{34} \\ 25 & 28 & 46 & 48 & \textbf{25} & \textbf{28} & \textbf{59} & \textbf{60} & 26 & 27 & 45 & 47 & 26 & 27 & 57 & 58 & 29 & 32 & 38 & 40 \\ \textbf{29} & \textbf{32} & \textbf{58} & \textbf{60} & 30 & 31 & 37 & 39 & \textbf{30} & \textbf{31} & \textbf{57} & \textbf{59} & 33 & 36 & 42 & 44 & 33 & 36 & 57 & 60 \\ 34 & 35 & 41 & 43 & 34 & 35 & 58 & 59 & 37 & 38 & 55 & 56 & 37 & 39 & 58 & 60 & 37 & 40 & 49 & 52 \\ 38 & 39 & 50 & 51 & 38 & 40 & 57 & 59 & 39 & 40 & 53 & 54 & 41 & 42 & 54 & 56 & 41 & 43 & 57 & 60 \\ 41 & 44 & 51 & 52 & \textbf{42} & \textbf{43} & \textbf{49} & \textbf{50} & 42 & 44 & 58 & 59 & 43 & 44 & 53 & 55 & 45 & 46 & 53 & 56 \\ 45 & 47 & 59 & 60 & 45 & 48 & 50 & 52 & \textbf{46} & \textbf{47} & \textbf{49} & \textbf{51} & \textbf{46} & \textbf{48} & \textbf{57} & \textbf{58} & 47 & 48 & 54 & 55 \\ \hline \end{tabular} \caption{A $40_{2}3_{4}4_{6}$-$29_{4}$ parity proof is shown in bold within the bases of the 60-105 system. Rays 1,46,49 have multiplicity four, rays 5,6,7,8 have multiplicity six, and all the remaining rays have multiplicity two.} \label{tab:47-29} \end{table}
\section{\label{sec:Discussion} Discussion}
We have shown that the 60-105 system of rays and bases connected with a pair of qubits has a cornucopia of parity proofs of the KS theorem in it. The proofs involve anywhere from 9 to 41 bases (with all odd numbers in this range being possible), with several different numbers of rays generally being possible for each basis size. Each proof can be characterized most precisely through its expanded symbol, which specifies not just the number of rays and bases involved in the proof but the multiplicities of the rays as well. However, we have found that even the expanded symbol sometimes fails to characterize a proof completely, because different proofs with the same symbol can be unitarily inequivalent. The number of unitarily inequivalent parity proofs runs into the tens of thousands, and we have not attempted a systematic count. Each distinct type of proof typically has many replicas (sometimes thousands) under the symmetries of the system. All these effects conspire to make the total number of distinct parity proofs contained in the 60-105 system astronomically large. We find this number hard to estimate, but would guess that it is somewhere in the vicinity of $10^{9}$ (a thousand million). We have exhibited only a handful of proofs in this paper, but hope to make a much wider sample available on a website we plan to set up.\\
It is a matter of some astonishment to us that a set of 105 bases, which can be fitted easily into a page, should contain something like $10^{9}$ parity proofs in it. This ratio of parity proofs to bases is the largest in any system we are aware of. We recently investigated a 60-75 system derived from the 600-cell \cite{Waegell2010} and found about 100 million parity proofs in it. That seems like the closest competitor to the present system, but we believe the present one wins out. What makes both these systems attractive is the combination of the large number of parity proofs in them together with the fact that every one of the proofs is so transparent. It is worth pointing out that both these 60-ray systems contain many basis-critical KS sets that are not parity proofs. The proofs based on these critical sets are not as transparent as parity proofs, but they are just as conclusive. A distinguishing feature of these proofs is that they can involve an even number of bases, whereas parity proofs necessarily involve an odd number. In a recent paper \cite{Megill2011} we estimated the total number of critical KS sets of all types (both parity proofs and others) in the 60-75 system to be on the order of $10^{12}$. We think the total number of sets in the present system is just as large, but that still remains to be confirmed.\\
We pointed out at the beginning that the 15 observables connected with a pair of qubits also form 15 triads of commuting observables. This symmetry is further enhanced by the fact that each observable occurs in three triads and each triad involves three observables. In other words, the observables and the triads form a $15_{3}$ configuration, i.e., two sets of 15 objects with the property that each object of either set is associated with three objects of the other. Geometers have identified a variety of examples of $15_{3}$ configurations. One of the simplest is known as a {\it hexastigm} because it is based on the numbers 1 to 6. Let a {\it duad} be an unordered pair of numbers from this set and a {\it syntheme} a set of three duads that include all six numbers between them. There are 15 duads and 15 synthemes, and each is incident with three members of the opposite kind. Coxeter \cite{Coxeter} arranges the synthemes in a $6\times 6$ table symmetrical about its diagonal (and with no entries along the diagonal) in such a way that every row and every column contains five synthemes that include all 15 duads among them. This table can be made to pass into a symmetrical version of our Table \ref{tab:MUBs} (showing the maximal sets of MUBs in a two-qubit system) if one translates the duads and synthemes into observables and triads of observables in a fairly obvious fashion. We have not been able to push this analogy to obtain further insights into the two-qubit system or the KS proofs contained in it, but we nevertheless find it striking enough to be worth mentioning.\\
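The duad-syntheme counts quoted above are small enough to verify by brute force. The following sketch (ours, standard library only) enumerates the hexastigm and confirms the $15_{3}$ incidence:

```python
from itertools import combinations

points = range(1, 7)
duads = list(combinations(points, 2))       # unordered pairs from {1,...,6}

# A syntheme is a set of three duads that include all six numbers,
# i.e. a perfect matching of the complete graph K6.
synthemes = [s for s in combinations(duads, 3)
             if len({n for d in s for n in d}) == 6]

print(len(duads), len(synthemes))           # 15 15
# Each duad lies in exactly three synthemes, and (by construction)
# each syntheme contains three duads: a 15_3 configuration.
print({sum(d in s for s in synthemes) for d in duads})   # {3}
```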
An alternative approach \cite{Saniga} to the two-qubit Pauli group exploits its connection with the geometry of the generalized quadrangle GQ(2,2). This approach allows many significant properties of the observables, such as the sets of MUBs shown in Table \ref{tab:MUBs}, to be derived from purely geometrical considerations. These ideas can also be generalized, to some extent, to a system of several qudits \cite{Planat}.\\
There is one application of Table \ref{tab:MUBs} to quantum tomography, mentioned by one of us in an earlier work \cite{Aravind2006}, that is worth recalling. Following the seminal work of Wootters and others \cite{Wootters}, it is known that the most efficient method of determining the 15 parameters of an arbitrary two-qubit state is by carrying out measurements of five triads of observables that constitute a maximal set of MUBs. The triads that one uses for this purpose can be the ones in any of the rows of Table \ref{tab:MUBs}. However a superior strategy would be to measure all 15 triads instead of just the five in a particular row. This would involve three times as much work, but would allow the state reconstruction to be done in six different ways based on the rows of Table \ref{tab:MUBs}, and so lead to a two-to-one return on the investment. \\
{\bf Acknowledgements.} A part of this work was presented by one of us (PKA) at the Workshop on Topological Quantum Information held in Pisa, Italy, during May 16-17, 2011. PKA would like to thank the organizers, Professors Lou Kauffman and Sam Lomonaco, for the invitation to participate in the workshop and for the fruitful discussions that took place with them and the other participants during the visit. Thanks are also due to the Centro di Ricerca Matematica for its hospitality and support during the workshop. \\
\end{document}
\begin{document}
\begin{abstract} Our purpose in this note is to investigate the order properties of positive operators from a locally convex space into its conjugate dual. We introduce a natural generalization of the Busch-Gudder strength function and we prove Kadison's anti-lattice theorem and Ando's result on the infimum of positive operators in that context. \end{abstract}
\maketitle
\section{Introduction} Let $\mathcal H$ be a complex (possibly infinite dimensional) Hilbert space. It is well known that the set $B_{sa}(\mathcal H)$ of bounded self-adjoint operators is a partially ordered set with respect to the natural L\"owner order \begin{equation*}
A\leq B\quad \iff\quad \sip{Ax}{x}\leq \sip{Bx}{x}\qquad (\forall x\in\mathcal H). \end{equation*} That is to say, `$\leq $' is a reflexive, transitive and anti-symmetric relation. It is however proved by Kadison \cite{kadison1951} that the partially ordered set $(B_{sa}(\mathcal H),\leq )$ is as far from being a lattice as possible. In fact, the supremum $A\vee B$ of the self-adjoint operators $A$ and $B$ can exist only in the trivial case when they are comparable. The same is true for the question of the infimum, as we have \begin{equation*}
A\wedge B=-((-A)\vee(-B)). \end{equation*} If we examine the same questions in the cone $B_+(\mathcal H)$ of positive operators, the answer does not change in the case of the supremum.
However, it is not difficult to see that the infimum of two orthogonal projections $P,Q$ with mutually orthogonal ranges is $P\wedge Q=0$ in $B_+(\mathcal H)$. This simple example suggests that the infimum problem over the cone of positive operators is more subtle than the one over $B_{sa}(\mathcal H)$.
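In the finite dimensional case these order-theoretic phenomena are easy to probe numerically. The sketch below is our own illustration (assuming NumPy; `loewner_leq` is a hypothetical helper, not from the cited literature): it exhibits two upper bounds of a pair of non-commuting projections that are mutually incomparable, so neither of them can serve as a least upper bound.

```python
import numpy as np

def loewner_leq(A, B, tol=1e-10):
    """A <= B in the Loewner order iff B - A is positive semidefinite."""
    return np.linalg.eigvalsh(B - A).min() >= -tol

P = np.array([[1.0, 0.0], [0.0, 0.0]])   # projection onto span{e1}
Q = np.array([[0.5, 0.5], [0.5, 0.5]])   # projection onto span{e1 + e2}
I = np.eye(2)

# Both I and P + Q dominate P and Q ...
assert loewner_leq(P, I) and loewner_leq(Q, I)
assert loewner_leq(P, P + Q) and loewner_leq(Q, P + Q)
# ... yet the two upper bounds are incomparable with each other:
print(loewner_leq(I, P + Q), loewner_leq(P + Q, I))   # False False
```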
The latter issue was completely solved by T. Ando \cite{ando1999problem}. He showed that the infimum problem is closely related to the so-called Lebesgue-type decomposition of positive operators, the theory of which was also developed by him \cite{ando1976lebesgue}. Namely, given two positive operators $A,B$ on $\mathcal H$, there exist two positive operators $B_a,B_s$ on $\mathcal H$, where $B_a$ is absolutely continuous with respect to $A$, whilst $B_s$ is singular to $A$, such that $B_a+B_s=B$ (see \cite{ando1976lebesgue}). It is also proved by Ando that such a decomposition is not unique in general, but there is a distinguished positive operator (denoted by $[A]B\coloneqq B_a$) which satisfies the decomposition and which is the largest among those operators $C\geq 0$ such that $C\leq B$ and $C\ll A$. (We refer the reader to \cite{ando1976lebesgue} and \cite{tarcsay2013lebesgue} for the details.) This maximal operator $[A]B$ is called the \textit{$A$-absolutely continuous part} of $B$.
From the maximality property of the absolutely continuous parts it is easy to check that if $[B]A$ and $[A]B$ are comparable, then the infimum of $A$ and $B$ exists, namely, \begin{equation*}
A\wedge B=\min\{[A]B,[B]A\}. \end{equation*} The main result of Ando's paper \cite{ando1999problem} states that the converse is also true: $A\wedge B$ exists if and only if the corresponding absolutely continuous parts are comparable.
The aim of this note is to investigate the supremum and infimum problems in a much more general context, namely, in the cone of positive operators on so-called anti-dual pairs. By an anti-dual pair we mean a pair $(E,F)$ of complex vector spaces connected by a so-called anti-duality function $\dual{\cdot}{\cdot }:F\times E\to\mathbb{C}$. The latter differs from the well-known duality function (see \cite{Schaefer}*{Chapter IV}) only in that it is conjugate linear in its second variable. The notion of anti-dual pairs allows us to define positivity of linear operators of type $A:E\to F$ in a way that is formally identical to the positivity of operators over Hilbert spaces (cf. \cite{TARCSAY2020Lebesgue} and \cite{TarcsayMathNach}). The set $\mathscr{L}_+(E,F)$ of positive operators from $E$ to $F$ is then a partially ordered set via the ordering \begin{equation*}
A\leq B\qquad\overset{def}{\Longleftrightarrow}\qquad \dual{Ax}{x}\leq \dual{Bx}{x}\qquad (\forall x\in E). \end{equation*} In this article we present the analogues of Kadison's and Ando's theorems for the ordered set $(\mathscr{L}_+(E,F),\leq )$. In the case of the supremum, the key is a suitable generalization of the Busch-Gudder strength function \cite{busch1999effects} to positive operators in $\mathscr{L}_+(E,F)$, which enables us to give a numerical characterization of the ordering. The solution of the infimum problem is based on the Lebesgue decomposition theory for operators on anti-dual pairs developed in \cite{TARCSAY2020Lebesgue}.
\section{Preliminaries}
Throughout the paper, let $E$ and $F$ be complex vector spaces intertwined by a function $$\dual\cdot\cdot:F\times E\to\mathbb{C},$$ which is linear in its first argument, conjugate linear in its second one, and which separates the points of $E$ and $F$. We shall refer to $\dual\cdot\cdot$ as the \emph{anti-duality} function; the triple $(E,F,\dual\cdot\cdot)$ will be called an \emph{anti-dual pair}, briefly denoted by $\dual FE$.
The prototype of anti-dual pairs is the triple $(\mathcal H,\mathcal H,\sip{\cdot}{\cdot})$, where $\mathcal H$ is a Hilbert space and $\sip{\cdot}{\cdot}$ is a scalar product on $\mathcal H$. In that case, of course, the concepts of symmetric and positive operators coincide with the usual ones of functional analysis.
In fact, the most general example of an anti-dual pair is $(X,\anti{X},\dual{\cdot}{\cdot})$, where $X$ is a locally convex Hausdorff space, $\anti{X}$ denotes the set of continuous conjugate linear functionals on $X$, and $\dual{\cdot}{\cdot}$ is the evaluation \begin{equation*}
\dual{f}{x}\coloneqq f(x)\qquad (x\in X, f\in\anti X). \end{equation*}
If $(E,F)$ is an anti-dual pair, then we may consider the corresponding weak topologies $w\coloneqq w(E,F)$ on $E$, and $w^*\coloneqq w(F,E)$ on $F$, respectively. Both $(E,w)$ and $(F,w^*)$ are locally convex Hausdorff spaces such that \begin{equation}\label{E:E'=F}
\anti E=F,\qquad F'=E. \end{equation} The identities \eqref{E:E'=F} enable us to define the adjoint (that is, the topological transpose) of a weakly continuous operator. Let $\dual{F_1}{E_1}$ and $\dual{F_2}{E_2}$ be anti-dual pairs and $T:E_1\to F_2$ a weakly continuous linear operator; then the (necessarily weakly continuous) linear operator $T^*:E_2\to F_1$ satisfying \begin{equation*}
\dual{Tx_1}{x_2}_2=\overline{\dual{T^*x_2}{x_1}_1},\qquad x_1\in E_1,x_2\in E_2 \end{equation*} is called the adjoint of $T$. In the following, we will use two particularly important special cases of this: on the one hand, the adjoint $T^*$ of a weakly continuous operator $T:E\to F$ is also an operator of type $E\to F$. On the other hand, if $\mathcal H$ is a Hilbert space, then the adjoint of a weakly continuous operator $T:E\to \mathcal H$ acts as an operator $T^*:\mathcal H\to F$, so that their composition $T^*T:E\to F$ satisfies \begin{equation}\label{E:TadjTxx}
\dual{T^*Tx}{x}=\sip{Tx}{Tx} \qquad (x\in E). \end{equation} A linear operator $S:E\to F$ is called \textit{symmetric} if \begin{equation*}
\dual{Sx}{y}=\overline{\dual{Sy}{x}}\qquad (x,y\in E), \end{equation*} and positive, if \begin{equation*}
\dual{Sx}{x}\geq 0\qquad(x\in E). \end{equation*} It can be readily checked that every positive operator is symmetric, and every symmetric operator $S$ is weakly continuous (see \cite{TARCSAY2020Lebesgue}). We shall denote the set of weakly continuous operators $T:E\to F$ by $\mathscr{L}(E,F)$, while we write $\mathscr{L}_+(E,F)$ for the set of positive operators $A:E\to F$.
According to \eqref{E:TadjTxx}, $T^*T\in\mathscr{L}(E,F)$ is a positive operator whenever $T\in\mathscr{L}(E;\mathcal H)$. However, if we assume $F$ to be $w^*$-sequentially complete, then we can also state the converse: every positive operator $A\in\mathscr{L}_+(E,F)$ can be written in the form $A=T^*T$ for some auxiliary Hilbert space $\mathcal H$ and $T\in\mathscr{L}(E;\mathcal H)$ (cf. \cite{TARCSAY2020Lebesgue}). Recall that the anti-dual pair $(X,\anti X)$ is $w^*$-sequentially complete if $X$ is a barreled space (e.g., a Banach space). Also, $(X,\bar X^*)$ is $w^*$-sequentially complete for an arbitrary vector space $X$, where $\bar X^*$ denotes its algebraic anti-dual space.
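In the matrix model $E=F=\mathbb{C}^n$ the factorization $A=T^*T$ can be realized concretely with $T=A^{1/2}$. A minimal numerical sketch of this (our illustration, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = M.conj().T @ M                        # a positive semidefinite matrix

# T = A^{1/2} via the spectral theorem, so that A = T* T.
w, U = np.linalg.eigh(A)
T = U @ np.diag(np.sqrt(w.clip(min=0))) @ U.conj().T
assert np.allclose(T.conj().T @ T, A)

# Positivity then follows as in the displayed identity above:
# <Ax, x> = <Tx, Tx> = ||Tx||^2 >= 0.
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(np.isclose((x.conj() @ A @ x).real, np.linalg.norm(T @ x) ** 2))   # True
```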
Let us now briefly recall the Hilbert space factorization method for a positive operator $A$. Let $(E,F)$ be a $w^*$-sequentially complete anti-dual pair and let $A\in\mathscr{L}_+(E,F)$. Then \begin{equation*}
\sipa{Ax}{Ay}\coloneqq \dual{Ax}{y}\qquad (x,y\in E) \end{equation*} defines an inner product on the range space $\operatorname{ran} A$, under which it becomes a pre-Hilbert space. We shall denote the completion by $\hil_A$. The canonical embedding operator \begin{equation}\label{E:J_A}
J^{}_A(Ax)=Ax\qquad (x\in E)
\end{equation} of $\operatorname{ran} A\subseteq \hil_A$ into $F$ is weakly continuous. By $w^*$-sequential completeness, it extends uniquely to a weakly continuous operator $J_A\in \mathscr{L}(\hil_A,F)$. It can be checked that the adjoint operator $J_A^*\in\mathscr{L}(E,\hil_A)$ satisfies \begin{equation}\label{E:J*}
J_A^*x=Ax\in\hil_A \qquad (x\in E), \end{equation} which yields the following factorization of $A$: \begin{equation}\label{E:JAJA}
A=J_A^{}J_A^*. \end{equation}
In order to discuss the infimum problem for positive operators, we will need some concepts and results from their Lebesgue decomposition theory. Hence, for the sake of the reader, we briefly summarize those parts of \cite{TARCSAY2020Lebesgue} which are essential for the understanding of this article. The details can be found in the mentioned paper.
We say that the positive operator $B\in\mathscr{L}_+(E,F)$ is absolutely continuous with respect to another positive operator $A\in\mathscr{L}_+(E,F)$ (in notation, $B\ll A$) if, for every sequence $\seq x$ in $E$, $\dual{Ax_n}{x_n}\to0$ and $\dual{B(x_n-x_m)}{x_n-x_m}\to0$ imply $\dual{Bx_n}{x_n}\to0$. $A$ and $B$ are said to be mutually singular (in notation, $A\perp B$) if the only positive operator $C\in\mathscr{L}_+(E,F)$ for which $C\leq A$ and $C\leq B$ is $C=0$. It is easy to check that $B\leq A$ implies $B\ll A$; the converse, however, is not true in general (cf. \cite{TARCSAY2020Lebesgue}*{Theorem 5.1}).
In \cite{TARCSAY2020Lebesgue}*{Theorem 3.3} it was proved that every positive operator $B$ has a Lebesgue-type decomposition with respect to any other positive operator $A$. More precisely, there exists a pair $B_a$ and $B_s$ of positive operators such that \begin{equation}\label{E:LebDec}
B=B_a+B_s,\qquad B_a\ll A,\qquad B_s\perp A. \end{equation} Such a decomposition is not unique in general (cf. \cite{TARCSAY2020Lebesgue}*{Theorem 7.2}); however, there is a unique operator $B_a\in\mathscr{L}_+(E,F)$ satisfying \eqref{E:LebDec} which is maximal among those positive operators $C$ such that $C\leq B$ and $C\ll A$. We shall call this uniquely determined operator $B_a$ the $A$-\textit{absolutely continuous part} of $B$, and adopting Ando's \cite{ando1976lebesgue} notation, we shall write $[A]B\coloneqq B_a$ for it.
\section{The strength of a positive operator} Throughout the section let $(E,F)$ be a $w^*$-sequentially complete anti-dual pair. Let us introduce the partial order on $\mathscr{L}_+(E,F)$ by \begin{equation*}
A\leq B \qquad \iff \qquad\dual{Ax}{x}\leq \dual{Bx}{x}\qquad (x\in E). \end{equation*} In what follows, inspired by the paper \cite{busch1999effects} of Busch and Gudder, we associate a function to each positive operator $A$ (called the strength function) which can be used to characterize that ordering.
Let $f\in F$ be a non-zero vector and set \begin{equation*}
(f\otimes f)(x)\coloneqq \overline{\dual fx}\cdot f\qquad(x\in E), \end{equation*} so that $f\otimes f\in\mathscr{L}_+(E,F)$ is a rank one positive operator with range space $\mathbb{C}\cdot f$. For a given positive operator $A\in\mathscr{L}_+(E,F)$ we set
\begin{equation*}
\lambda(A,f)\coloneqq \sup \set{t\geq0}{t\cdot f\otimes f\leq A}. \end{equation*} Following the terminology of \cite{busch1999effects}, the nonnegative (finite) number $\lambda(A,f)$ will be called the strength of $A$ along the ray $f$, whilst the function \begin{equation*} \lambda(A,\cdot): F\setminus\{0\}\to[0,+\infty) \end{equation*} is the \textit{strength function} of $A$. To see that $\lambda(A,f)$ is always finite, pick $x\in E$ with $\dual{f}{x}\neq 0$ (such an $x$ exists because the anti-duality separates the points of $F$). If we had $\lambda(A,f)=+\infty$, then $t\cdot\abs{\dual fx}^2\leq \dual{Ax}{x}$ would hold for every $t\geq 0$, which is impossible.
At this point, we note that in \cite{busch1999effects} the strength function was only defined along vectors from the unit sphere of a Hilbert space; in that case it has the uniform upper bound $\|A\|$. With the definition adopted here, $\lambda(A,\cdot)$ need not be bounded, but this fact is of no importance for the applications below.
In our first result, we examine along which `rays' $f$ the strength function takes a positive value. The factorization \eqref{E:JAJA} of the positive operator $A$ and the auxiliary Hilbert space $\hil_A$ will play an important role in this. \begin{theorem}\label{T:3.1} Let $A\in\mathscr{L}_+(E,F)$ and $f\in F$, $f\neq 0$. The following are equivalent: \begin{enumerate}[label=\textup{(\roman*)}]
\item $\lambda(A,f)>0$,
\item there is a constant $m\geq0$ such that $\abs{\dual fx}^2\leq m\cdot\dual{Ax}{x}$, $(x\in E),$
\item there is a (unique) $\xi_f\in\hil_A$ such that $J_A\xi_f=f$. \end{enumerate} In any case, we have \begin{equation}
\lambda(A,f)=\frac{1}{\|\xi_f\|_A^2}=\frac{1}{m_f}, \end{equation} where $m_f>0$ is the smallest constant $m$ that satisfies (ii). \end{theorem} \begin{proof} (i)$\Rightarrow$(ii): Fix a real number $0<t<\lambda(A,f)$, then \begin{equation*}
\dual{(f\otimes f)x}{x}\leq \frac1t\cdot\dual{Ax}{x}\qquad (x\in E), \end{equation*} which implies (ii).
(ii)$\Rightarrow$(iii): Inequality (ii) expresses just that \begin{equation*}
\varphi(Ax)\coloneqq \dual fx,\qquad x\in E, \end{equation*}
defines a continuous conjugate linear functional from $\operatorname{ran} A\subseteq \hil_A$ to $\mathbb{C}$, namely $\|\varphi\|^2\leq m$. Denote by $\xi_f\in\hil_A$ the corresponding Riesz representing vector. Then \begin{equation*}
\dual{f}{x}=\sipa{\xi_f}{Ax}=\sipa{\xi_f}{J_A^*x}=\dual{J_A\xi_f}{x},\qquad x\in E, \end{equation*} and hence $f=J_A\xi_f$.
(iii)$\Rightarrow$(i): Suppose that $J_A\xi_f=f$ for some $\xi_f\in\hil_A$. Then \begin{equation*}
\abs{\dual fx}^2=\abs{\sipa{\xi_f}{Ax}}^2\leq \|\xi_f\|^2_A\dual{Ax}{x}, \end{equation*}
hence $\lambda f\otimes f\leq A$ with $\lambda=\|\xi_f\|_A^{-2}$. \end{proof} As a corollary we retrieve \cite{busch1999effects}*{Theorem 3}. The proof presented here is significantly shorter and simpler. \begin{corollary} Let $\mathcal H$ be a Hilbert space, let $A\in\mathscr{B}(\hil)$ be a positive operator, and let $0\neq f\in \mathcal H$. Then $\lambda(A,f)>0$ if and only if $f\in\operatorname{ran} A^{1/2}$. In that case, \begin{equation}\label{E:BGthm3}
\lambda(A,f)=\frac{1}{\|A^{-1/2}f\|^2}, \end{equation} where $A^{-1/2}$ denotes the (possibly unbounded) inverse of $A^{1/2}$ restricted to $\ker A^{\perp}$. \end{corollary} \begin{proof}
The first part of the statement is clear because $A^{1/2}A^{1/2}=A=J_A^{}J_A^*$ implies $\operatorname{ran} A^{1/2}=\operatorname{ran} J_A$. In order to prove \eqref{E:BGthm3}, consider the linear functional
\begin{equation*}
\varphi:\operatorname{ran} A^{1/2}\to \mathbb{C},\qquad \varphi(A^{1/2}x)\coloneqq \sip xf\qquad (x\in \mathcal H)
\end{equation*}
which is well-defined and bounded with norm $\|\varphi\|^2=\lambda(A,f)^{-1}$. By the Riesz representation theorem, there is a unique vector $\zeta\in\overline{\operatorname{ran} A^{1/2}}=\ker A^{\perp}$ such that
\begin{equation*}
\sip xf=\sip{A^{1/2}x}{\zeta}=\sip{x}{A^{1/2}\zeta}\qquad (x\in \mathcal H).
\end{equation*}
Hence $A^{1/2}\zeta=f$ and thus $\zeta=A^{-1/2}f$. Furthermore, $\|\zeta\|^2=\|\varphi\|^2=\lambda(A,f)^{-1}$, which completes the proof. \end{proof} Below we establish the most useful property of the strength function for our purposes, namely, that it characterizes the ordering. \begin{theorem} Let $A,B\in\mathscr{L}_+(E,F)$ be positive operators. Then the following assertions are equivalent: \begin{enumerate}[label=\textup{(\roman*)}]
\item $A\leq B$,
\item $\lambda(A,f)\leq \lambda (B,f)$ for every $0\neq f\in F$. \end{enumerate} \end{theorem} \begin{proof} (i)$\Rightarrow$(ii): If $A\leq B$ and $\lambda \cdot f\otimes f\leq A$ for some $\lambda $ then also $\lambda \cdot f\otimes f\leq B$ hence $\lambda (A,f)\leq \lambda(B,f)$.
(ii)$\Rightarrow$(i): Assume $\lambda(A,\cdot)\leq \lambda(B,\cdot)$ everywhere on $F\setminus\{0\}$. By contradiction, suppose $\dual{Ax}{x}>\dual{Bx}{x}$ for some $x$ and set $f\coloneqq \dual{Ax}{x}^{-1/2}\cdot Ax$. For every $y\in E$, \begin{equation*}
\dual{(f\otimes f)(y)}{y}=\frac{\abs{\dual{Ax}{y}}^2 }{\dual{Ax}{x}}\leq \dual{Ay}{y} \end{equation*} by the Cauchy-Schwarz inequality $\abs{\dual{Ax}{y}}^2\leq \dual{Ax}{x}\dual{Ay}{y}$. Hence $\lambda(A,f)\geq 1$ and thus, by assumption, \begin{equation*}
\dual{Bx}{x}\geq \dual{(f\otimes f)(x)}{x}=\dual{Ax}{x}, \end{equation*} which leads to a contradiction. \end{proof} \begin{corollary} Two positive operators are identical if and only if so are their strength functions. \end{corollary}
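For an invertible positive matrix, the Corollary gives the closed form $\lambda(A,f)=\|A^{-1/2}f\|^{-2}=\big(\sip{A^{-1}f}{f}\big)^{-1}$, and the order characterization of the Theorem can then be observed directly. A small numerical sketch (our illustration, assuming NumPy; the helper `strength` is hypothetical):

```python
import numpy as np

def strength(A, f):
    """lambda(A, f) = 1 / ||A^{-1/2} f||^2 = 1 / <A^{-1} f, f>
    for an invertible positive matrix A."""
    return 1.0 / np.real(f.conj() @ np.linalg.solve(A, f))

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = A + np.diag([1.0, 3.0])              # hence A <= B in the Loewner order
f = np.array([1.0, -1.0])

lam = strength(A, f)                     # here lam == 0.5
# t * (f tensor f) <= A precisely for t <= lam: at t = lam the matrix
# A - lam * f f* is positive semidefinite and singular.
print(np.linalg.eigvalsh(A - lam * np.outer(f, f.conj())).min())   # ~0.0
# Monotonicity of the strength function along every ray:
print(strength(A, f) <= strength(B, f))   # True
```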
\begin{remark} From the very definition of the strength function, one readily checks the following elementary properties:
Let $A,B\geq 0$ and $0\neq f\in F$, then \begin{enum1}
\item $\lambda(\alpha A,f )=\alpha\lambda(A,f )$ for every $\alpha\geq 0$,
\item $\lambda(\alpha A+(1-\alpha)B,f)\geq \alpha\lambda(A,f)+(1-\alpha)\lambda (B,f)$ for every $\alpha\in[0,1]$,
\item $\lambda(A+B,f)\geq \lambda(A,f)+\lambda(B,f)$. \end{enum1} \end{remark} \section{Supremum of positive operators} In this section, we prove Kadison's anti-lattice theorem in the setting of positive operators on an anti-dual pair. Namely, we show that the supremum of two positive operators $A$ and $B$ exists if and only if they are comparable (that is, $A\leq B$ or $B\leq A$).
\begin{theorem}\label{T:4.1} Let $(E,F)$ be a $w^*$-sequentially complete anti-dual pair and let $A, B\in\mathscr{L}_+(E,F)$ be positive operators. Then the following two statements are equivalent: \begin{enumerate}[label=\textup{(\roman*)}]
\item the supremum $A\vee B$ exists,
\item $A$ and $B$ are comparable. \end{enumerate} \end{theorem} \begin{proof} The comparability of $A$ and $B$ trivially implies the existence of $A\vee B$. For the opposite direction we suppose that $T\in\mathscr{L}_+(E,F)\setminus\{A, B\}$ satisfies $T\ge A$ and $T\ge B$, and we show that there exists $S\in\mathscr{L}_+(E,F)$ such that $S\ge A$ and $S\ge B$, but $S$ is not comparable with $T$.
In the first case we suppose that the intersection of $\operatorname{ran}(J_{T-A})$ and $\operatorname{ran}(J_{T-B})$ contains a non-zero vector $e\in\operatorname{ran}(J_{T-A})\cap\operatorname{ran}(J_{T-B})$. By Theorem \ref{T:3.1} there exists $\lambda>0$ such that $\lambda\cdot e\otimes e\le T-A$ and $\lambda\cdot e\otimes e\le T-B$. Let $f\in F\setminus\mathbb{C}\cdot e$, then with \begin{equation*}
S\coloneqq T-\lambda\cdot e\otimes e+f\otimes f \end{equation*} we have $S\ge A$ and $S\ge B$, but $S$ is evidently not comparable with $T$.
In the second case we suppose that $\operatorname{ran}(J_{T-A})\cap\operatorname{ran}(J_{T-B})=\{0\}$. By Theorem \ref{T:3.1} there exist non-zero vectors $e\in \operatorname{ran}(J_{T-A})$ and $f\in\operatorname{ran} (J_{T-B})$ such that $e\otimes e\le T-A$ and $f\otimes f\le T-B$. Let us define \begin{equation*}
S_0\coloneqq e\otimes e+2e\otimes f+2f\otimes e+f\otimes f \end{equation*} and \begin{equation*}
S:=T+\frac{1}{3}S_0. \end{equation*} Clearly, $S_0$ and $S$ are symmetric operators. We claim that $S\ge A$ and $S\ge B$. Indeed, we have \begin{equation*}
S_0+3e\otimes e=(2e+f)\otimes(2e+f)\ge 0, \end{equation*} which implies \begin{equation*}
-\frac{1}{3}S_0\le e\otimes e\le T-A, \end{equation*} therefore \begin{equation*}
\frac{1}{3}S_0\ge A-T. \end{equation*} A similar argument shows that $ \frac{1}{3}S_0\ge B-T$.
As a consequence we see that \begin{equation*}
S=T+\frac{1}{3}S_0\ge T+(B-T)=B, \end{equation*} and similarly, $S\geq A$. However, the symmetric operator $S_0$ is not positive. Indeed, since $e$ and $f$ are linearly independent, there exists $x\in E$ such that $\langle e,x\rangle=1$ and $\langle f,x\rangle=-1$; for this $x$ we have $\langle S_0x,x\rangle=-2$. Consequently, $S$ and $T$ are not comparable, which proves the theorem. \end{proof}
\section{Infimum of positive operators} Let $A,B\in\mathscr{L}_+(E,F)$ be positive operators and consider the corresponding $A$-, and $B$-absolute continuous parts $[A]B$ and $[B]A$, arising from the corresponding Lebesgue decompositions in the sense of \eqref{E:LebDec}. Thus, \begin{equation}\label{E:[A]B}
[A]B=\max\set{C\in\mathscr{L}_+(E,F)}{C\leq B, C\ll A}, \end{equation} and \begin{equation}\label{E:[B]A}
[B]A=\max\set{C\in\mathscr{L}_+(E,F)}{C\leq A, C\ll B}, \end{equation} see \cite{TARCSAY2020Lebesgue}*{Theorem 3.3}. Assume for a moment that $[A]B\leq [B]A$. Then clearly $[A]B\leq A$, $[A]B\leq B$, and for every $C\in\mathscr{L}_+(E,F)$ such that $C\leq A$ and $C\leq B$ we also have $C\leq [A]B$. This in turn means that $[A]B$ is the infimum of $A$ and $B$ in the cone $\mathscr{L}_+(E,F)$. The situation is essentially the same if we assume that $[B]A\leq [A]B$.
Ando's result \cite{ando1999problem}*{Theorem 6} states that, in the context of positive operators on a Hilbert space, the infimum of two operators exists only in one of the above two cases. That is, if $A\wedge B$ exists in the cone $B_+(\mathcal H)$, then $[A]B$ and $[B]A$ are comparable (and the smaller one is the infimum).
Our aim in the present section is to establish the anti-dual pair analogue of Ando's result: \begin{theorem}\label{T:5.1} Let $(E,F)$ be a $w^*$-sequentially complete anti-dual pair and let $A, B\in\mathscr{L}_+(E,F)$ be positive operators. Then the following two statements are equivalent: \begin{enumi}
\item the infimum $A\wedge B$ exists in $\mathscr{L}_+(E,F)$,
\item the corresponding absolutely continuous parts $[A]B$ and $[B]A$ are comparable. \end{enumi} In any case, $A\wedge B=\min\{[A]B,[B]A\}$. \end{theorem} \begin{proof} We only prove the non-trivial implication (i)$\Rightarrow$(ii). First of all we remark that $A\wedge B$ exists if and only if $[A]B\wedge [B]A$ exists, and in that case these two operators coincide. This is easily obtained from the maximality properties \eqref{E:[A]B} and \eqref{E:[B]A}. Recall also that $[A]B$ and $[B]A$ are mutually absolutely continuous according to \cite{TARCSAY2020Lebesgue}*{Theorem 3.6}, so we may assume without loss of generality that $A\ll B$ and $B\ll A$.
Consider now the auxiliary Hilbert space $\mathcal H\coloneqq \mathcal H^{}_{A+B}$ associated with the sum $A+B$, and denote by $\sip{\cdot}{\cdot}$ its inner product and by $J\coloneqq J_{A+B}^{}$ the corresponding canonical embedding of $\mathcal H$ into $F$. Since the nonnegative forms \begin{equation*}
((A+B)x,(A+B)y)\mapsto \dual{Ax}{y},\qquad ((A+B)x,(A+B)y)\mapsto \dual{Bx}{y}, \end{equation*}
defined on the dense subspace $\operatorname{ran} (A+B)\subseteq \mathcal H$ are obviously bounded with bound $1$, the Lax-Milgram lemma provides us with two positive operators $\widetilde A,\widetilde B\in\mathscr B(\mathcal H)$ with $\|\widetilde A\|,\|\widetilde B\|\leq 1$, which satisfy \begin{equation*}
\sip{\widetilde A(A+B)x}{(A+B)y}=\dual{Ax}{y},\qquad
\sip{\widetilde B(A+B)x}{(A+B)y}=\dual{Bx}{y}. \end{equation*} Using the canonical property \begin{equation*}
J^*x=(A+B)x,\qquad x\in E \end{equation*} we obtain that \begin{equation*}
A=J\widetilde AJ^*\qquad\mbox{and}\qquad B=J\widetilde{B} J^*. \end{equation*} Observe also that \begin{equation*}
\sip{(\widetilde A+\widetilde B)J^*x}{J^*y}=\dual{(A+B)x}{y}=\sip{J^*x}{J^*y},\qquad (x,y\in E), \end{equation*} whence \begin{equation*}
\widetilde A+\widetilde B=I, \end{equation*} where $I$ stands for the identity operator on $\mathcal H$. Note also that \begin{equation}\label{E:ker}
\ker \widetilde A=\ker \widetilde B =\{0\}, \end{equation} because $A\ll B$ and $B\ll A$ (see \cite{TARCSAY2020Lebesgue}*{Lemma 5.2}).
Set $C\coloneqq A\wedge B$ and let $\widetilde C\in\mathscr{B}(\hil)$ be the positive operator such that $C=J\widetilde C J^*$. We claim that $\widetilde C=\widetilde A\wedge \widetilde B$ in $\mathscr{B}_+(\mathcal H)$. Indeed, it is clear on the one hand that $\widetilde C\leq \widetilde A$ and $\widetilde C\leq \widetilde{B}$. On the other hand, consider a positive operator $\widetilde D\in \mathscr{B}_+(\mathcal H)$ such that $\widetilde D\leq \widetilde A, \widetilde{B}$. It is readily seen that $D\coloneqq J\widetilde{D}J^*\in \mathscr{L}_+(E,F)$ satisfies $D\leq A,B$, whence $D\leq C$, which implies $\widetilde D\leq \widetilde C$.
Let $E$ be the spectral measure of $\widetilde A$. Following the reasoning of \cite{ando1999problem}*{Lemma 1} it follows that \begin{equation}\label{E:tn1-t}
\widetilde A\wedge \widetilde B=\widetilde A\wedge (I-\widetilde A)=\int_{0}^{1} \min\{t,(1-t)\}\,dE(t). \end{equation} The statement of the theorem will be proved if we show that either $\widetilde A\wedge \widetilde B=\widetilde A$ or $\widetilde A\wedge \widetilde B=\widetilde{B}$. By \eqref{E:tn1-t} this is equivalent to prove that \begin{equation}\label{E:spectrum}
\sigma(\widetilde A)\subseteq [0,\tfrac12]\qquad\mbox{or}\qquad \sigma(\widetilde A)\subseteq [\tfrac12,1] \end{equation} holds. Suppose this is not the case. By \eqref{E:ker} we have $E(\{0\})=E(\{1\})=0$, hence there exists some $\varepsilon>0$ such that none of the spectral projections \begin{equation*}
P_1\coloneqq E([\tfrac12+3\varepsilon,1-3\varepsilon]) \qquad \mbox{and}\qquad P_2\coloneqq E([3\varepsilon,\tfrac12-3\varepsilon]) \end{equation*} is zero. Suppose that $0<\dim P_2\leq \dim P_1$ and take a partial isometry $V\in\mathscr{B}(\hil)$ such that $[\ker V]^\perp\subseteq \operatorname{ran} P_2$ and $\operatorname{ran} V\subseteq \operatorname{ran} P_1$. Then, following Ando's treatment used in the proof of \cite{ando1999problem}*{Theorem 1}, one can prove that \begin{equation*}
\widetilde D\coloneqq (\widetilde A-\varepsilon)\cdot P_2+(\widetilde B-\varepsilon)\cdot P_1+\sqrt{2}\varepsilon (VP_2+P_2V^*) \end{equation*} satisfies $0\leq \widetilde D\leq \widetilde A,\widetilde B$, but $\widetilde D$ is not comparable with $\widetilde C$. This, however, contradicts the fact that $\widetilde C=\widetilde A\wedge \widetilde B$, which proves the theorem. \end{proof}
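In finite dimensions the spectral integral \eqref{E:tn1-t} reduces to an eigendecomposition, and the dichotomy \eqref{E:spectrum} becomes a condition on the eigenvalues of $\widetilde A$. The following numpy sketch illustrates this (finite-dimensional toy only; the function names are ours):

```python
import numpy as np

def spectral_min(A_tilde):
    """Evaluate the integral of min(t, 1-t) against the spectral measure
    of a symmetric contraction A_tilde with 0 <= A_tilde <= I."""
    lam, Q = np.linalg.eigh(A_tilde)
    # Q * w multiplies column j of Q by w[j], so this is Q diag(w) Q^T.
    return (Q * np.minimum(lam, 1 - lam)) @ Q.T

def infimum_exists(A_tilde, tol=1e-12):
    """Ando's criterion in matrix form: A_tilde ∧ (I - A_tilde) exists in
    the positive cone iff the spectrum lies in [0, 1/2] or in [1/2, 1]."""
    lam = np.linalg.eigvalsh(A_tilde)
    return bool(np.all(lam <= 0.5 + tol) or np.all(lam >= 0.5 - tol))

# Spectrum {0.2, 0.4} lies in [0, 1/2]: the infimum exists and is A_tilde.
A1 = np.diag([0.2, 0.4])
assert infimum_exists(A1)
assert np.allclose(spectral_min(A1), A1)

# Spectrum {0.2, 0.8} straddles 1/2: no infimum in the cone.
A2 = np.diag([0.2, 0.8])
assert not infimum_exists(A2)
```

When the spectrum straddles $\tfrac12$, `spectral_min` still returns the candidate of \eqref{E:tn1-t}, but by the theorem it is not a greatest lower bound.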
As a direct application of Theorems \ref{T:4.1} and \ref{T:5.1}, we obtain the following result regarding the supremum and infimum of nonnegative forms over a given vector space. With the latter statement we retrieve \cite{titkos2012ando}*{Theorem 3}. \begin{corollary} Let $\sform$ and $\tform$ be nonnegative forms on a complex vector space $\mathscr{D}$. \begin{enuma}
\item The supremum $\tform\vee \sform$ exists if and only if $\tform\leq \sform$ or $\sform\leq \tform$,
\item The infimum $\tform\wedge\sform$ exists if and only if $[\tform]\sform\leq [\sform]\tform$ or $[\sform]\tform\leq [\tform]\sform$. \end{enuma} \end{corollary} \begin{proof} Denote by $\bar \mathscr{D}^*$ the conjugate algebraic dual of $\mathscr{D}$ (that is, $\bar\mathscr{D}^*$ consists of all anti-linear forms $f:\mathscr{D}\to\mathbb{C}$). Then $(\mathscr{D},\bar \mathscr{D}^*)$, endowed with the natural anti-duality $\dual\cdot\cdot$, forms a $w^*$-sequentially complete anti-dual pair. Note that every nonnegative form $\tform$ on $\mathscr{D}$ induces a positive operator $T:\mathscr{D}\to\bar \mathscr{D}^*$ by the correspondence \begin{equation*}
\dual{Tx}{y}\coloneqq \tform(x,y),\qquad (x,y\in\mathscr{D}). \end{equation*}
Conversely, every positive operator $T:\mathscr{D}\to\bar \mathscr{D}^*$ defines a nonnegative form in the most natural way. The correspondence $\tform\mapsto T$ is an order preserving bijection between nonnegative forms and positive operators, so that statements (a) and (b) of the corollary follow from Theorems \ref{T:4.1} and \ref{T:5.1}, respectively. \end{proof}
\end{document} |
\begin{document}
\title{Canonical Stratifications along Bisheaves}
\begin{abstract}A theory of bisheaves has been recently introduced to measure the homological stability of fibers of maps to manifolds. A bisheaf over a topological space is a triple consisting of a sheaf, a cosheaf, and compatible maps from the stalks of the sheaf to the stalks of the cosheaf. In this note we describe how, given a bisheaf constructible (i.e., locally constant) with respect to a triangulation of its underlying space, one can explicitly determine the coarsest stratification of that space for which the bisheaf remains constructible.\end{abstract}
\section*{Introduction}
The space of continuous maps from a compact topological space $\mathscr{X}$ to a metric space $\mathscr{M}$ carries a natural metric structure of its own --- the distance between $f,g:\mathscr{X}\to\mathscr{M}$ is given by $\sup_{x \in \mathscr{X}} d_\mathscr{M}[f(x),g(x)]$, where $d_\mathscr{M}$ is the metric on $\mathscr{M}$. It is natural to ask how sensitive the fibers $f^{-1}(p)$ over points $p \in \mathscr{M}$ are to perturbations of $f$ in this metric space of maps $\mathscr{X}\to\mathscr{M}$. The case $\mathscr{M}= \mathbb{R}$ (endowed with its standard metric) is already interesting, and lies at the heart of both Morse theory \cite{milnor} and the stability of persistent homology \cite{interlevelsets, cdsgo, cseh}.
The theory of {\bf bisheaves} was introduced in \cite{macpat} to provide stable lower bounds on the homology groups of such fibers in the case where $f$ is a reasonably tame (i.e., Thom-Mather stratified) map. The fibers of $f$ induce two algebraic structures generated by certain basic open subsets $\mathscr{U} \subset \mathscr{M}$ --- their Borel-Moore homology $\bH^\text{\tiny BM}_\bullet(f^{-1}(\mathscr{U})) = \text{H}_\bullet(\mathscr{X},\mathscr{X}-f^{-1}(\mathscr{U}))$ naturally forms a sheaf of abelian groups, whereas their singular homology $\text{H}_\bullet(f^{-1}(\mathscr{U}))$ naturally forms a cosheaf. If $\mathscr{M}$ is a $\mathbb{Z}$-orientable manifold, then its fundamental class---let's call it $o \in \text{H}^m_c(\mathscr{M})$---restricts to a generator $o_\mathscr{U}$ of the top compactly-supported cohomology $\text{H}^m_c(\mathscr{U})$ of basic open subsets $\mathscr{U} \subset \mathscr{M}$. The cap product \cite[Sec 3.3]{hatcher} with its pullback $f^*(o_\mathscr{U}) \in \text{H}^m_c(f^{-1}(\mathscr{U}))$ therefore induces group homomorphisms \[ \bH^\text{\tiny BM}_{m+\bullet}(f^{-1}(\mathscr{U})) {\longrightarrow} \text{H}_\bullet(f^{-1}(\mathscr{U})) \] from the ($m$-shifted) Borel-Moore to the singular homology over $\mathscr{U}$. These maps commute with restriction maps of the sheaf and extension maps of the cosheaf by naturality of the cap product. This data, consisting of a sheaf plus a cosheaf along with such maps, is the prototypical and motivating example of a bisheaf.
Fix an arbitrary open set $\mathscr{U} \subset \mathscr{M}$ and restrict the bisheaf described above to $\mathscr{U}$. We replace the restricted Borel-Moore sheaf with its largest sub {\em episheaf} (i.e., a sheaf whose restriction maps on basic opens are all surjective), and similarly, we replace the restricted singular cosheaf with its largest quotient {\em monocosheaf} (i.e., a cosheaf whose extension maps on basic opens are all injective). It is not difficult to confirm that even after the above alterations, one can induce canonical maps from the episheaf to the monocosheaf which form a new bisheaf over $\mathscr{U}$. The stalkwise-images of the maps from the episheaf to the monocosheaf in this new bisheaf form a {\em local system} over $\mathscr{U}$ --- this may be viewed as either a sheaf or a cosheaf depending on taste, since all of its restriction/extension maps are invertible. The authors of \cite{macpat} call this the {\bf persistent local system} of $f$ over $\mathscr{U}$. The persistent local system of $f$ over $\mathscr{U}$ is a collection of subquotients of $\text{H}_\bullet(f^{-1}(p))$ for all $p \in \mathscr{U}$ and provides a principled lower bound for the fiberwise homology of $f$ over $\mathscr{U}$ which is stable to perturbations. For a sufficiently small $\epsilon > 0$, let $\mathscr{U}_\epsilon$ be the shrinking of $\mathscr{U}$ by $\epsilon$. For all tame maps $g: \mathscr{X} \to \mathscr{M}$ within $\epsilon$ of $f$, the persistent local system of $f$ over $\mathscr{U}$ restricted to $\mathscr{U}_\epsilon$ is a fiberwise subquotient of the persistent local system of $g$ over $\mathscr{U}_\epsilon$.
The goal of this paper is to take the first concrete steps towards rendering this new theory of bisheaves amenable to explicit machine computation. In Sec \ref{sec:simpbish} we introduce the notion of a {\bf simplicial bisheaf}, i.e., a bisheaf which is constructible with respect to a fixed triangulation of the underlying manifold $\mathscr{M}$. Such bisheaves over simplicial complexes are not much harder to represent on computers than the much more familiar cellular (co)sheaves --- if we work with field coefficients rather than integers, for instance, a simplicial bisheaf amounts to the assignment of one matrix to each simplex $\sigma$ of $\mathscr{M}$ and two matrices to each face relation $\sigma \leq \sigma'$, subject to certain functoriality constraints --- more details can be found in Sec \ref{sec:simpbish} below.
On the other hand, bisheaves are profoundly different from (co)sheaves in certain fundamental ways --- as noted in \cite{macpat}, the category of bisheaves, simplicial or otherwise, over a manifold $\mathscr{M}$ is not abelian. Consequently, we have no direct recourse to bisheafy analogues of basic (co)sheaf invariants such as sheaf cohomology and cosheaf homology. Even so, some of the ideas which produced efficient algorithms for computing cellular sheaf cohomology \cite{cgn} can be suitably adapted towards the task of extracting the persistent local system from a given simplicial bisheaf. One natural way to accomplish this is to find the coarsest partition of the simplices of $\mathscr{M}$ into regions so that over each region the cap product map relating the Borel-Moore stalk to the singular costalk is locally constant. This idea is made precise in Sec \ref{sec:strat}.
Our main construction is described in Sec \ref{sec:main}. Following \cite{strat}, we use the bisheaf data over an $m$-dimensional simplicial complex $\mathscr{M}$ to explicitly construct a stratification by simplicial subcomplexes \[ \varnothing = \mathscr{M}_{-1} \subset \mathscr{M}_0 \subset \cdots \subset \mathscr{M}_{m-1} \subset \mathscr{M}_m = \mathscr{M}, \] called the {\bf canonical stratification of $\mathscr{M}$} along the given bisheaf; the connected components of each $\mathscr{M}_d - \mathscr{M}_{d-1}$, called the {\em canonical $d$-strata}, enjoy three remarkably convenient properties for our purposes. \begin{enumerate}
\item {\em Constructibility}: if two simplices lie in the same stratum, then the cap-product maps assigned to them by the bisheaf are related by invertible transformations.
\item {\em Homogeneity}: if two adjacent simplices $\sigma \leq \sigma'$ of $\mathscr{M}$ lie in different strata, then the (isomorphism class of the) bisheaf data assigned to the face relation $\sigma \leq \sigma'$ in $\mathscr{M}$ depends only on those strata.
\item {\em Universality:} this is the coarsest stratification (i.e., the one with fewest strata) satisfying both constructibility and homogeneity. \end{enumerate}
Armed with the canonical stratification of $\mathscr{M}$ along a bisheaf, one can reduce the computational burden of building the associated persistent local system as follows. Rather than extracting an episheaf and monocosheaf for {\em every} simplex and face relation, one only has to perform these calculations for each canonical stratum. The larger the canonical strata are, the more computationally beneficial this strategy becomes.
{\footnotesize
\section{Bisheaves around Simplicial Complexes} \label{sec:simpbish}
Let $\mathscr{M}$ be a simplicial complex and let $\category{Ab}$ denote the category of abelian groups. By a {\bf {sheaf} over $\mathscr{M}$} we mean a functor \[ \shf{F}:{\category{Fc}}(\mathscr{M}) \to \category{Ab} \] from the poset of simplices in $\mathscr{M}$ ordered by the face relation to the abelian category $\category{Ab}$. In other words, each simplex $\sigma$ of $\mathscr{M}$ is assigned an abelian group $\shf{F}(\sigma)$ called the {\em stalk} of $\shf{F}$ over $\sigma$, while each face relation $\sigma \leq \sigma'$ among simplices is assigned a group homomorphism $\shf{F}(\sigma \leq \sigma'):\shf{F}(\sigma) \to \shf{F}(\sigma')$ called its {\em restriction map}. These assignments of objects and morphisms are constrained by the usual functor-laws of associativity and identity. A morphism $\shf{\alpha}:\shf{F} \to \shf{G}$ of sheaves over $\mathscr{M}$ is prescribed by a collection of group homomorphisms $\{\shf{\alpha}_\sigma:\shf{F}(\sigma) \to \shf{G}(\sigma)\}$, indexed by simplices of $\mathscr{M}$, which must commute with restriction maps.
The dual notion is that of a {\bf {cosheaf} under $\mathscr{M}$}, which is a functor \[ \csh{F}:{\category{Fc}}(\mathscr{M})^\text{op} \to \category{Ab}; \] this assigns to each simplex $\sigma$ an abelian group $\csh{F}(\sigma)$ called its {\em costalk}, and to each face relation $\sigma \leq \sigma'$ a contravariant group homomorphism $\csh{F}(\sigma \leq \sigma'):\csh{F}(\sigma') \to \csh{F}(\sigma)$, called the {\em extension map}. As before, a morphism $\csh{\alpha}:\csh{F} \to \csh{G}$ of cosheaves under $\mathscr{M}$ is a simplex-indexed collection of abelian group homomorphisms $\{\csh{\alpha}_\sigma:\csh{F}(\sigma) \to \csh{G}(\sigma)\}$ which must commute with extension maps. For a thorough introduction to cellular (co)sheaves, the reader should consult \cite{curry}.
\subsection{Definition}
The following algebraic-topological object (see \cite[Def 5.1]{macpat}) coherently intertwines sheaves with cosheaves. \begin{definition}
\label{def:bisheaf}
A {\bf {bisheaf} around $\mathscr{M}$} is a triple $\bsh{F} = (\shf{F},\csh{F},F)$ defined as follows. Here $\shf{F}$ is a sheaf over $\mathscr{M}$, while $\csh{F}$ is a cosheaf under $\mathscr{M}$, and
\[
F = \{F_\sigma:\shf{F}(\sigma) \to \csh{F}(\sigma)\}
\] is a collection of abelian group homomorphisms indexed by the simplices of $\mathscr{M}$ so that the following diagram, denoted $\bsh{F}(\sigma \leq \sigma')$, commutes for each face relation $\sigma \leq \sigma'$:
\[
\xymatrixcolsep{1in}
\xymatrix{
\shf{F}(\sigma) \ar@{->}[d]_{F_\sigma}\ar@{->}[r]^{\shf{F}(\sigma \leq \sigma')}& \shf{F}(\sigma') \ar@{->}[d]^{F_{\sigma'}} \\
\csh{F}(\sigma) & \csh{F}(\sigma') \ar@{->}[l]^{\csh{F}(\sigma \leq \sigma')}\\
}
\]
(The right-pointing map is the restriction map of the sheaf $\shf{F}$, while the left-pointing map is the extension map of the cosheaf $\csh{F}$). \end{definition}
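With field coefficients, the data of Def \ref{def:bisheaf} is just a collection of matrices, and the commuting square $\bsh{F}(\sigma \leq \sigma')$ is one matrix identity per face relation. A minimal sketch over a hypothetical one-edge complex (the names, dimensions, and entries below are ours, chosen only for illustration):

```python
import numpy as np

# Toy bisheaf around the complex with one face relation sigma <= tau:
# every stalk/costalk is a column space, every map a matrix.
restrict = {('sigma', 'tau'): np.eye(2)}        # sheaf map  F(sigma) -> F(tau)
extend   = {('sigma', 'tau'): 2 * np.eye(2)}    # cosheaf map F(tau) -> F(sigma)
F = {'sigma': 2 * np.eye(2), 'tau': np.eye(2)}  # vertical maps F_sigma, F_tau

def square_commutes(face, tol=1e-12):
    """Bisheaf axiom for one face relation sigma <= tau: going right,
    down, then left must agree with going straight down, i.e.
    extend @ F_tau @ restrict == F_sigma."""
    lo, hi = face
    return np.allclose(extend[face] @ F[hi] @ restrict[face], F[lo], atol=tol)

assert square_commutes(('sigma', 'tau'))
```

Checking every face relation in this fashion verifies that a candidate triple of matrix assignments is indeed a simplicial bisheaf.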
\subsection{Bisheaves from Fibers}
The following construction is adapted from \cite[Ex 5.3]{macpat}. Consider a map $f:\mathscr{X}\to\mathscr{M}$ whose target space $\mathscr{M}$ is a connected, triangulated manifold of dimension $m$. Let $o$ be a generator of the top compactly-supported cohomology group $\text{H}_\text{\tiny c}^m(\mathscr{M})$. Our assumptions on $\mathscr{M}$ imply $\text{H}_\text{\tiny c}^m(\mathscr{M}) \simeq \mathbb{Z}$, so $o \in \{\pm 1\}$. Now the inclusion $\category{st } \sigma \subset \mathscr{M}$ of the open star\footnote{The open star of $\sigma \in \mathscr{M}$ is given by $\category{st } \sigma = \{\tau \in \mathscr{M} \mid \sigma \leq \tau\}$.} of any simplex $\sigma$ in $\mathscr{M}$ induces an isomorphism on $m$-th compactly supported cohomology, so let $o|_\sigma$ be the image of $o$ in $\text{H}^m_c(\category{st } \sigma)$ under this isomorphism. Since $f$ restricts to a map $f^{-1}(\category{st } \sigma) \to \category{st } \sigma$, the generator $o|_\sigma$ pulls back to a class $f^*(o|_\sigma)$ in $\text{H}^m_c(f^{-1}(\category{st } \sigma))$. The cap product with $f^*(o|_\sigma)$ therefore constitutes a map \[ \xymatrixcolsep{0.7in} \xymatrix{
\text{H}_{m+\bullet}^\text{\tiny BM}\left(f^{-1}(\category{st } \sigma)\right) \ar@{->}[r]^-{\frown f^*(o|_\sigma)} & \text{H}_\bullet\left(f^{-1}(\category{st } \sigma)\right) } \] from the Borel-Moore homology to the singular homology of the fiber $f^{-1}(\category{st } \sigma)$. We note that the former naturally forms a sheaf over $\mathscr{M}$ while the latter forms a cosheaf; as mentioned in the Introduction, the above data constitutes the primordial example of a bisheaf.
\section{Stratifications along Bisheaves} \label{sec:strat}
Throughout this section, we will assume that $\bsh{F} = (\shf{F},\csh{F},F)$ is a bisheaf of abelian groups over some simplicial complex $\mathscr{M}$ of dimension $m$ in the sense of Def \ref{def:bisheaf}. We do not require this $\mathscr{M}$ to be a manifold.
\begin{definition}
\label{def:fstrat} An {\bf $\bsh{F}$-{stratification} of $\mathscr{M}$} is a filtration $\mathscr{K}_\bullet$ by subcomplexes: \[ \varnothing = \mathscr{K}_{-1} \subset \mathscr{K}_0 \subset \cdots \subset \mathscr{K}_{m-1} \subset \mathscr{K}_m = \mathscr{M}, \] so that connected components of the (possibly empty) difference $\mathscr{K}_d - \mathscr{K}_{d-1}$, called the {\em $d$-dimensional strata} of $\mathscr{K}_\bullet$, obey the following axioms. \begin{enumerate}
\item{\bf Dimension:} The maximum dimension of simplices lying in a $d$-stratum should precisely equal $d$ (but we do not require every simplex in a $d$-stratum $S$ to be the face of some $d$-simplex in $S$).
\item{\bf Frontier:} The transitive closure of the following binary relation $\prec$ on the set of all strata forms a partial order: we say $S \prec S'$ if there exist simplices $\sigma \in S$ and $\sigma' \in S'$ with $\sigma \leq \sigma'$. Moreover, this partial order is graded in the sense that $S \prec S'$ implies $\dim S \leq \dim S'$, with equality of dimension occurring if and only if $S = S'$.
\item {\bf Constructibility:} $\bsh{F}$ is {locally constant} on each stratum. Namely, if two simplices $\sigma \leq \tau$ of $\mathscr{M}$ lie in the same stratum, then $\shf{F}(\sigma \leq \tau)$ and $\csh{F}(\sigma \leq \tau)$ are both isomorphisms. \end{enumerate} \end{definition}
\begin{remark}
It follows from constructibility (and the fact that strata must be connected) that the commuting diagram $\bsh{F}(\sigma \leq \sigma')$ assigned to simplices $\sigma \leq \sigma'$ of $\mathscr{M}$ depends, up to isomorphism, only on the strata containing $\sigma$ and $\sigma'$. That is, given any other pair $\tau \leq \tau'$ so that $\sigma$ and $\tau$ lie in the same stratum $S$ while $\sigma'$ and $\tau'$ lie in the same stratum $S'$, there exist four isomorphisms (depicted as dashed vertical arrows) which make the following cube of abelian groups commute up to isomorphism:
\[
\xymatrixcolsep{.4in}
\xymatrixrowsep{0.05in}
\xymatrix{
& \shf{F}(\sigma) \ar@{-->}[dddd]_\sim \ar@{->}[rr]^{\shf{F}(\sigma \leq \sigma')} \ar@{->}[dl]_{F_\sigma} & & \shf{F}(\sigma') \ar@{-->}[dddd]^\sim \ar@{->}[dr]^{F_{\sigma'}} &\\
\csh{F}(\sigma) & & & & \csh{F}(\sigma') \ar@{->}[llll]^{\csh{F}(\sigma \leq \sigma')} \\
& & & & \\
& & & & \\
& \shf{F}(\tau) \ar@{->}[rr]^{\shf{F}(\tau \leq \tau')} \ar@{->}[dl]_{F_\tau} & & \shf{F}(\tau') \ar@{->}[dr]^{F_{\tau'}} &\\
\csh{F}(\tau) \ar@{-->}[uuuu]^\sim & & & & \csh{F}(\tau') \ar@{->}[llll]^{\csh{F}(\tau \leq \tau')} \ar@{-->}[uuuu]_\sim
}
\]
These vertical isomorphisms are not unique, but rather depend on choices of paths lying in $S$ (from $\sigma$ to $\tau$) and in $S'$ (from $\sigma'$ to $\tau'$). \end{remark}
\begin{example} The first example of an $\bsh{F}$-stratification of $\mathscr{M}$ that one might consider is the {\bf skeletal} stratification, where the $d$-strata are simply the $d$-simplices. \end{example}
Since we are motivated by computational concerns, we seek an $\bsh{F}$-stratification with as few strata as possible. To make this notion precise, note that the set of all $\bsh{F}$-stratifications of $\mathscr{M}$ admits a partial order --- we say that $\mathscr{K}_\bullet$ {\em refines} another $\bsh{F}$-stratification $\mathscr{K}'_\bullet$ if every stratum of $\mathscr{K}_\bullet$ is contained inside some stratum of $\mathscr{K}'_\bullet$ (when both are viewed as subspaces of $\mathscr{M}$). The skeletal stratification refines all the others, and serves as the maximal object in this poset; the object that we wish to build here lies at the other end of this hierarchy.
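Viewing stratifications simply as partitions of the set of simplices, the refinement order just described can be sketched as follows (the toy complex and names are ours):

```python
def refines(finer, coarser):
    """`finer` refines `coarser` iff every block of `finer` is contained
    in some block of `coarser`; both are partitions of the same set of
    simplices, encoded as lists of frozensets."""
    return all(any(block <= big for big in coarser) for block in finer)

# Hypothetical complex with vertices v, w and edge vw.
skeletal = [frozenset({'v'}), frozenset({'w'}), frozenset({'vw'})]
coarse   = [frozenset({'v'}), frozenset({'w', 'vw'})]

assert refines(skeletal, coarse)    # the skeletal partition refines everything
assert not refines(coarse, skeletal)
```

(This ignores the dimension and frontier axioms, which constrain which partitions are admissible stratifications; it only illustrates the partial order itself.)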
\begin{definition}
\label{def:canstr} The {\bf canonical} $\bsh{F}$-stratification of $\mathscr{M}$ is the minimal object in the poset of $\bsh{F}$-stratifications of $\mathscr{M}$ ordered by refinement --- every other stratification is a refinement of the canonical one. \end{definition}
The reader may ask why this object is well-defined at all --- why should the poset of all $\bsh{F}$-stratifications admit a minimal element, and even if it does, why should that element be unique? Taking this definition as provisional for now, we will establish the existence and uniqueness of the {canonical $\bsh{F}$-stratification} of $\mathscr{M}$ via an explicit construction in the next section.
\section{The Main Construction} \label{sec:main}
As before, we fix a bisheaf $\bsh{F} = (\shf{F},\csh{F},F)$ on an $m$-dimensional simplicial complex $\mathscr{M}$. Our goal is to construct the canonical $\bsh{F}$-stratification, which was described in Def \ref{def:canstr} and will be denoted here by $\mathscr{M}_\bullet$: \[ \varnothing = \mathscr{M}_{-1} \subset \mathscr{M}_0 \subset \cdots \subset \mathscr{M}_{m-1} \subset \mathscr{M}_m = \mathscr{M}. \] We will establish the existence and uniqueness of this stratification by constructing the strata in reverse order: the $m$-dimensional canonical strata will be identified before the $(m-1)$-dimensional canonical strata, and so forth. There is a healthy precedent for such top-down constructions that dates back to work of Whitney \cite{whitney} and Goresky-MacPherson \cite[Sec 4.1]{gm2}.
\subsection{Localizations of the Face Poset} The key ingredient here, as in \cite{strat}, is the ability to {\em localize} \cite[Ch I.1]{gabzis} the poset ${\category{Fc}}(\mathscr{M})$ about a special sub-collection $W$ of face relations that is {closed} in the following sense: if $(\sigma \leq \tau)$ and $(\tau \leq \nu)$ both lie in $W$ then so does $(\sigma \leq \nu)$.
\begin{definition} \label{def:loc}
Let $W$ be a closed collection of face relations in ${\category{Fc}}(\mathscr{M})$ and let $W^+$ denote the union of $W$ with all equalities of the form $(\sigma = \sigma)$ for $\sigma$ ranging over simplices in $\mathscr{M}$. The {\bf localization} of ${\category{Fc}}(\mathscr{M})$ about $W$ is a category ${\category{Fc}}_W(\mathscr{M})$ whose objects are the simplices of $\mathscr{M}$, while morphisms from $\sigma$ to $\tau$ are given by equivalence classes of finite (but arbitrarily long) $W$-{\em zigzags}. These have the form
\[
(\sigma \leq \tau_0 \geq \sigma_0 \leq \cdots \leq \tau_k \geq \sigma_k \leq \tau), \text{ where:}
\]
\begin{enumerate}
\item only relations in $W^+$ can point backwards (i.e., $\geq$),
\item composition is given by concatenation, and
\item the trivial zigzag $(\sigma = \sigma)$ represents the identity morphism of each simplex $\sigma$.
\end{enumerate}
The equivalence between $W$-zigzags is generated by the transitive closure of the following basic relations. Two such zigzags are related
\begin{itemize}
\item {\em horizontally} if one is obtained from the other by removing internal equalities, e.g.:
\begin{align*}
\left(\cdots \leq \tau_0 \geq \sigma_0 = \sigma_0 \geq \tau_1 \leq \cdots \right) &\sim \left(\cdots \leq \tau_0 \geq \tau_1 \leq \cdots\right), \\
\left(\cdots \geq \sigma_0 \leq \tau_1 = \tau_1 \leq \sigma_1 \geq \cdots \right) &\sim \left(\cdots \geq \sigma_0 \leq \sigma_1 \geq \cdots\right),
\end{align*}
\item or {\em vertically}, if they form the rows of a grid:
\begin{alignat*}{11}
\sigma~ & ~\leq~ & ~\tau_0~ & ~\geq~ & ~\sigma_0~ & ~\leq~ & ~\cdots~ & ~\geq~ & ~\sigma_k~ & ~\leq~ & ~\tau \\
\roteq~ & & \leqdn~ & & \leqdn~ & & & & \leqdn~ & & \roteq \\
\sigma~ & ~\leq~ & ~\tau'_0~ & ~\geq~ & ~\sigma'_0~ & ~\leq~ & ~\cdots~ & ~\geq~ & ~\sigma'_k~ & ~\leq~ & ~\tau
\end{alignat*}
whose vertical face relations (also) lie in $W^+$.
\end{itemize} \end{definition}
\begin{remark} These horizontal and vertical relations are designed to render invertible all the face relations $(\sigma \leq \tau)$ that lie in $W$. The backward-pointing $\tau \geq \sigma$ which may appear in a $W$-zigzag serves as the formal inverse to its forward-pointing counterpart $\sigma \leq \tau$ --- one can use a vertical relation followed by a horizontal relation to achieve the desired cancellations whenever $(\cdots \geq \sigma \leq \tau \geq \sigma \leq \cdots)$ or $(\cdots \leq \tau \geq \sigma \leq \tau \geq \cdots)$ are encountered as substrings of a $W$-zigzag. \end{remark}
\subsection{Top Strata} Consider the subset of face relations in ${\category{Fc}}(\mathscr{M})$ to which $\bsh{F}$ assigns invertible maps, i.e., \begin{align} \label{eq:E} E = \{(\sigma \leq \tau) \text{ in } {\category{Fc}}(\mathscr{M}) \mid \shf{F}(\sigma \leq \tau) \text{ and }\csh{F}(\sigma \leq \tau) \text{ are isomorphisms}\}. \end{align} One might expect, in light of the constructibility requirement of Def \ref{def:fstrat}, that finding canonical strata would amount to identifying isomorphism classes in the localization of ${\category{Fc}}(\mathscr{M})$ about $E$. Unfortunately, this does not work --- the pieces of $\mathscr{M}$ obtained in such a manner do not obey the frontier axiom in general. To rectify this defect, we must suitably modify $E$. Define the set of simplices \begin{align*} U = \{\sigma \in {\category{Fc}}(\mathscr{M}) \mid (\sigma \leq \tau) \in E \text{ for all } \tau \in \category{st } \sigma\}, \end{align*} and consider the subset $W \subset E$ given by \begin{align} \label{eq:W} W = \{(\sigma \leq \tau) \in E \mid \sigma \in U\}. \end{align} Thus, a pair of adjacent simplices $(\sigma \leq \tau)$ of $\mathscr{M}$ lies in $W$ if and only if the sheaf $\shf{F}$ and cosheaf $\csh{F}$ assign isomorphisms not only to $(\sigma \leq \tau)$ itself, but also to {\em all other face relations} encountered among simplices in the open star of $\sigma$. For our purposes, it is important to note that $U$ is {\em upward closed} as a subposet of ${\category{Fc}}(\mathscr{M})$, meaning that $\sigma \in U$ and $\sigma' \geq \sigma$ implies $\sigma' \in U$.
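Under the matrix representation of Sec \ref{sec:simpbish} with field coefficients, the sets $E$, $U$ and $W$ of (\ref{eq:E}) and (\ref{eq:W}) can be extracted mechanically. A sketch (the encoding of simplices as strings and face relations as pairs is our assumption):

```python
import numpy as np

def is_iso(M, tol=1e-9):
    """Over a field, a stalk map is an isomorphism iff its matrix is
    square and invertible."""
    M = np.asarray(M)
    return M.shape[0] == M.shape[1] and abs(np.linalg.det(M)) > tol

def compute_W(simplices, faces, restrict, extend):
    """Return E, U, W as in (eq:E) and (eq:W).  `faces` holds pairs
    (sigma, tau) with sigma a proper face of tau; `restrict` and
    `extend` hold the matrices of the sheaf and cosheaf maps."""
    E = {f for f in faces if is_iso(restrict[f]) and is_iso(extend[f])}
    star = {s: {t for (a, t) in faces if a == s} for s in simplices}
    U = {s for s in simplices if all((s, t) in E for t in star[s])}
    W = {(s, t) for (s, t) in E if s in U}
    return E, U, W

# Toy complex: vertices v, w and edge e = vw.  The maps over (w <= e)
# are not invertible, so w is excluded from U and only (v <= e) enters W.
faces = {('v', 'e'), ('w', 'e')}
restrict = {('v', 'e'): np.eye(1), ('w', 'e'): np.zeros((1, 1))}
extend   = {('v', 'e'): np.eye(1), ('w', 'e'): np.eye(1)}
E, U, W = compute_W({'v', 'w', 'e'}, faces, restrict, extend)
assert E == {('v', 'e')} and U == {'v', 'e'} and W == {('v', 'e')}
```

Note that a top-dimensional simplex like $e$ lies in $U$ vacuously, since its open star contains no proper cofaces.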
\begin{proposition}
\label{prop:isostrat} Every simplex $\tau$ lying in an $m$-stratum of any $\bsh{F}$-stratification of $\mathscr{M}$ must be isomorphic in ${\category{Fc}}_{W}(\mathscr{M})$ to an $m$-dimensional simplex of $\mathscr{M}$. \end{proposition} \begin{proof} Assume $\tau$ lies in an $m$-dimensional stratum $S$ of an $\bsh{F}$-stratification of $\mathscr{M}$. By the dimension axiom, $S$ contains at least one $m$-simplex, which we call $\sigma$. Since $S$ is connected, there exists a zigzag of simplices lying entirely in $S$ that links $\sigma$ to $\tau$, say \[ \zeta = (\sigma \leq \tau_0 \geq \sigma_0 \leq \cdots \leq \tau_k \geq \sigma_k \leq \tau). \] By the constructibility requirement of Def \ref{def:fstrat}, every face relation in sight (whether $\leq$ or $\geq$) lies in $E$. And by the frontier requirement of that same definition, membership in $m$-strata is upward closed, so in particular all the $\sigma_\bullet$'s lie in $U$. Finally, since $\sigma$ is top-dimensional and $\tau_0 \geq \sigma$, we must have $\tau_0 = \sigma$. Thus, not only is our $\zeta$ a $W$-zigzag, but it also represents an invertible morphism in ${\category{Fc}}_{W}(\mathscr{M})$. Indeed, a $W$-zigzag representing its inverse can be obtained simply by traversing backwards: \[ \zeta^{-1} = (\tau \leq \tau \geq \sigma_k \leq \tau_k \geq \cdots \leq \tau_0 \geq \sigma \leq \sigma). \] This confirms that $\sigma$ and $\tau$ are isomorphic in ${\category{Fc}}_{W}(\mathscr{M})$, as desired. \end{proof}
Given the preceding result, the coarsest $m$-strata that one could hope to find are isomorphism classes of $m$-dimensional simplices in ${\category{Fc}}_{W}(\mathscr{M})$.
\begin{proposition} \label{prop:topstrat} The canonical $m$-strata of $\mathscr{M}_\bullet$ are precisely the isomorphism classes of $m$-dimensional simplices in ${\category{Fc}}_{W}(\mathscr{M})$. \end{proposition} \begin{proof} Let $\sigma$ be an $m$-simplex of $\mathscr{M}$. We will show that the set $S$ of all $\tau$ which are isomorphic to $\sigma$ forms an $m$-stratum by verifying the frontier and constructibility axioms from Def \ref{def:fstrat} --- the dimension axiom is trivially satisfied since $\sigma \in S$. Note that for any $\tau \in S$ there exists some $W$-zigzag whose simplices all lie in $S$, and which represents an isomorphism from $\sigma$ to $\tau$ in ${\category{Fc}}_{W}(\mathscr{M})$. (The existence of these zigzags shows that $S$ is connected). So let us fix for each $\tau \in S$ such a zigzag \[ \zeta_\tau = (\sigma \leq \tau_0 \geq \sigma_0 \leq \cdots \geq \sigma_k \leq \tau), \] and assume it is horizontally reduced in the sense that none of its order relations (except possibly the first and last $\leq$) are equalities. Thus, all the $\sigma_d$'s in $\zeta_\tau$ lie in $U$. Upward closure of $U$ now forces simplices in $\category{st } \sigma_k$, which contains $\category{st } \tau$, to also lie in $S$. This shows that $S$ satisfies the frontier axiom, because any simplex of $\mathscr{M}$ with a face in $S$ must itself lie in $S$. We now turn to establishing constructibility. Since $\sigma$ is top-dimensional, we know that $\tau_0 = \sigma$, so in fact the first $\leq$ in $\zeta_\tau$ must be an equality. Consider the bisheaf data $\bsh{F}(\zeta_\tau)$ living over our zigzag: \[ \xymatrix{
\shf{F}(\sigma) \ar@{->}[r] \ar@{->}[d] & \shf{F}(\tau_0) \ar@{->}[d] & \shf{F}(\sigma_0) \ar@{->}[d] \ar@{->}[r] \ar@{->}[l] & \cdots \ar@{->}[r] & \shf{F}(\tau_k) \ar@{->}[d] & \shf{F}(\sigma_k) \ar@{->}[d] \ar@{->}[l]\ar@{->}[r]& \shf{F}(\tau) \ar@{->}[d]\\
\csh{F}(\sigma) & \csh{F}(\tau_0) \ar@{->}[l]\ar@{->}[r]& \csh{F}(\sigma_0) & \cdots \ar@{->}[l] & \csh{F}(\tau_k)\ar@{->}[l] \ar@{->}[r] & \csh{F}(\sigma_k) &\ar@{->}[l]\csh{F}(\tau) } \] (All horizontal homomorphisms in the top row are restriction maps of $\shf{F}$, all horizontal homomorphisms in the bottom row are extension maps of $\csh{F}$, and the vertical morphism in the column of a simplex $\nu$ is $F_\nu$). By definition of $W$ (and the fact that $\sigma = \tau_0)$, all horizontal maps in sight are isomorphisms, so in particular we may replace all left-pointing arrows in the top row and all the right-pointing arrows in the bottom row by their inverses to get abelian group isomorphisms $\phi_\tau:\shf{F}(\sigma) \to \shf{F}(\tau)$ and $\psi_\tau:\csh{F}(\tau) \to \csh{F}(\sigma)$ that fit into a commuting square with $F_\sigma$ and $F_\tau$. Now given any other simplex $\tau' \geq \tau$ lying in $S$, one can repeat the argument above with the bisheaf data $\bsh{F}\left(\zeta_{\tau'} \circ \zeta_\tau^{-1}\right)$ to confirm that \[ \shf{F}(\tau \leq \tau') = \phi_{\tau'} \circ \phi_\tau^{-1} \quad \text{and} \quad \csh{F}(\tau \leq \tau') = \psi_\tau^{-1} \circ \psi_{\tau'}. \] Thus both maps are isomorphisms, as desired. \end{proof}
\subsection{Lower Strata}
Our final task is to determine which simplices lie in canonical strata of dimension $< m$. This is accomplished by iteratively modifying both the simplicial complex $\mathscr{M} = \mathscr{M}_m$ and the set of face relations $W = W_m$ which was defined in (\ref{eq:W}) above.
\begin{definition}
\label{def:subpair} Given $d \in \{0,1,\ldots,m-1\}$, assume we have the pair $(\mathscr{M}_{d+1},W_{d+1})$ consisting of a simplicial complex $\mathscr{M}_{d+1}$ of dimension $\leq (d+1)$ and a collection $W_{d+1}$ of face relations in ${\category{Fc}}(\mathscr{M})$. The subsequent pair $(\mathscr{M}_{d},W_{d})$ is defined as follows. \begin{enumerate}
\item The set $\mathscr{M}_{d}$ is obtained from $\mathscr{M}_{d+1}$ by removing all the simplices which are isomorphic to some $(d+1)$-simplex in the localization ${\category{Fc}}_{W_{d+1}}(\mathscr{M})$.
\item To define $W_{d}$, first consider the collection of simplices
\[
U_{d} = \{\sigma \in {\category{Fc}}(\mathscr{M}_{d}) \mid (\sigma \leq \tau) \in E \text{ for all } \tau \in \textbf{st}_{d}~\sigma\};
\]
here $\textbf{st}_{d}~\sigma$ is the open star of $\sigma$ in $\mathscr{M}_{d}$ (i.e., the collection of all $\tau \in \mathscr{M}_{d}$ satisfying $\tau \geq \sigma$), while $E$ is the set of face relations defined in (\ref{eq:E}). Now, set
\[
W_{d} = W_{d+1} \cup \{(\sigma \leq \tau) \mid \sigma \in U_{d} \text{ and }\tau \in \textbf{st}_{d}~\sigma\}.
\] \end{enumerate} \end{definition}
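The recipe in Def \ref{def:subpair} is effective and can be prototyped directly. Below is a minimal Python sketch under simplifying assumptions: simplices are modelled as frozensets of vertices, the sets $E$ and $W_m$ are supplied as sets of (face, coface) pairs, and isomorphism in the localization is replaced by connectivity through $W$-zigzags, which is what the zigzag arguments above reduce it to in this setting. All names are hypothetical; this is an illustration of the iteration, not an implementation from the paper.

```python
def canonical_filtration(simplices, E, W, m):
    """Compute the filtration [M_m, M_{m-1}, ..., M_0] of Def def:subpair.

    simplices: set of frozensets of vertices, closed under taking faces;
    E, W: sets of pairs (sigma, tau) with sigma a proper face of tau.
    Isomorphism in the localization is modelled by W-zigzag connectivity.
    """
    M = set(simplices)
    W = set(W)
    filtration = [set(M)]
    for d in range(m - 1, -1, -1):
        # step 1: remove every simplex W-connected to a (d+1)-simplex,
        # using union-find over the current complex
        parent = {s: s for s in M}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        for s, t in W:
            if s in parent and t in parent:
                parent[find(s)] = find(t)
        top = {find(s) for s in M if len(s) == d + 2}  # (d+1)-simplices
        M = {s for s in M if find(s) not in top}
        # step 2: enlarge W along open stars of simplices in U_d
        star = {s: {t for t in M if s < t} for s in M}
        U = {s for s in M if all((s, t) in E for t in star[s])}
        W |= {(s, t) for s in U for t in star[s]}
        filtration.append(set(M))
    return filtration
```

On the toy complex consisting of one edge with constant bisheaf data plus an isolated vertex, the edge (with its endpoints) forms the 1-stratum and the isolated vertex survives to $\mathscr{M}_0$.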
\begin{proposition} The sequence $\mathscr{M}_\bullet$ described in Def \ref{def:subpair} constitutes a filtration of the original simplicial complex $\mathscr{M}$ by subcomplexes with the property that $\dim \mathscr{M}_d \leq d$ for each $d \in \{0,1,\ldots,m\}$. \end{proposition} \begin{proof} Since $\mathscr{M}_m = \mathscr{M}$ is manifestly its own $m$-dimensional subcomplex, it suffices by induction to show that if $\mathscr{M}_{d+1}$ is a simplicial complex of dimension $\leq (d+1)$, then the simplices in $\mathscr{M}_{d} \subset \mathscr{M}_{d+1}$ constitute a subcomplex of dimension $\leq d$. To this end, we will confirm that the difference $\mathscr{M}_{d+1} - \mathscr{M}_{d}$ satisfies two properties --- it must: \begin{itemize}
\item contain all the $(d+1)$-simplices in $\mathscr{M}_{d+1}$, and
\item be upward closed with respect to the face partial order of $\mathscr{M}_{d+1}$. \end{itemize} Since every $(d+1)$-simplex is isomorphic to itself in ${\category{Fc}}_{W_{d+1}}(\mathscr{M})$ via the identity morphism, the first requirement is immediately met. And by definition of $W_{d+1}$, if an arbitrary simplex $\sigma$ of $\mathscr{M}_{d+1}$ is isomorphic to a $(d+1)$-simplex in ${\category{Fc}}_{W_{d+1}}(\mathscr{M})$, then so are all the simplices that lie in its open star $\textbf{st}_{d+1}~\sigma \subset \mathscr{M}_{d+1}$. Thus, our second requirement is also satisfied and the desired conclusion follows. \end{proof}
The structure of the sets $W_{\bullet}$ from Def \ref{def:subpair} enforces a convenient monotonicity among morphisms in the localization ${\category{Fc}}_{W_\bullet}(\mathscr{M})$. \begin{lemma}
\label{lem:mono} For each $d \in \{0,1,\ldots,m\}$, there are no morphisms in the localization ${\category{Fc}}_{W_d}(\mathscr{M})$ from any simplex $\sigma$ in the difference $\mathscr{M} - \mathscr{M}_d$ to a simplex $\tau$ of $\mathscr{M}_{d}$. \end{lemma} \begin{proof} Any putative morphism from $\sigma$ to $\tau$ in ${\category{Fc}}_{W_d}(\mathscr{M})$ would have to be represented by a $W_{d}$-zigzag, say \[ \zeta = (\sigma \leq \tau_0 \geq \sigma_0 \leq \cdots \leq \tau_k \geq \sigma_k \leq \tau). \] Note that all face relations appearing here, except possibly the first $(\sigma \leq \tau_0)$, must lie in $W_{d}$ by upward closure. Since $\sigma \in \mathscr{M} - \mathscr{M}_d$, it must be isomorphic in ${\category{Fc}}_{W_{i}}(\mathscr{M})$ to an $i$-simplex in $\mathscr{M}_{i}$ for some $i > d$. But the very existence of a zigzag representing such an isomorphism requires the bisheaf $\bsh{F}$ to be constant on the open star $\textbf{st}_{i}~\sigma$, meaning that $(\sigma \leq \tau_0)$ must lie in $W_{i} \subset W_d$. Thus, all the face relations ($\leq$ and $\geq$) encountered in $\zeta$ lie in $W_d$, whence $\zeta$ must be an isomorphism in the localization ${\category{Fc}}_{W_d}(\mathscr{M})$ (with its inverse being given by backwards traversal). But now, $\tau$ would also be isomorphic to some $i$-simplex in ${\category{Fc}}_{W_{i}}(\mathscr{M})$ with $i > d$, which forces the contradiction $\tau \not\in \mathscr{M}_d$. \end{proof}
Here is our main result.
\begin{theorem} The sequence $\mathscr{M}_\bullet$ of simplicial complexes described in Def \ref{def:subpair} is the canonical $\bsh{F}$-stratification of $\mathscr{M}$. Moreover, for each $d \in \{0,1,\ldots,m\}$, the canonical $d$-strata of $\mathscr{M}_\bullet$ are isomorphism classes of $d$-simplices from $\mathscr{M}_d$ in the localization ${\category{Fc}}_{W_d}(\mathscr{M})$. \end{theorem} \begin{proof} We proceed by reverse-induction on $d$, with the base case $d=m$ being given by Prop \ref{prop:topstrat}. So we assume that the statement holds up to $(d+1)$, and establish that the canonical $d$-strata must be isomorphism classes of $d$-simplices from $\mathscr{M}_d$ in the localization ${\category{Fc}}_{W_d}(\mathscr{M})$. Let $S$ denote the isomorphism class of a $d$-simplex $\sigma_*$ in $\mathscr{M}_d$. We will establish that $S$ satisfies all three axioms of Def \ref{def:fstrat}. \begin{itemize} \item {\bf Dimension:} clearly, $S$ contains a simplex $\sigma_*$ of dimension $d$; moreover, since $\dim \mathscr{M}_d \leq d$, all simplices of $\mathscr{M}$ with dimension $> d$ lie in $\mathscr{M} - \mathscr{M}_{d}$. None of these can be isomorphic in ${\category{Fc}}_{W_d}(\mathscr{M})$ to $\sigma_*$ without contradicting Lem \ref{lem:mono}. \item {\bf Frontier:} it suffices to check antisymmetry of the relation $\prec$: there should be no simplices $\sigma \leq \sigma'$ with $\sigma \in \mathscr{M}-\mathscr{M}_d$ and $\sigma' \in S$. But the existence of such a $\sigma \leq \sigma'$ would result in a $W_d$-zigzag from $\sigma$ to $\sigma_*$, which is prohibited by Lem \ref{lem:mono}. \item {\bf Constructibility:} it is straightforward to adapt the argument from the proof of Prop \ref{prop:topstrat} --- given simplices $\tau \leq \tau'$ both in $S$, one can find $W_d$-zigzags from $\sigma_*$ to $\tau$ and to $\tau'$ which guarantee that $\shf{F}(\tau \leq \tau')$ and $\csh{F}(\tau \leq \tau')$ are both isomorphisms. 
\end{itemize} To confirm that the strata obtained in this fashion are canonical, one can re-use the argument from the proof of Prop \ref{prop:isostrat} to show that a simplex which lies in a $d$-stratum of any $\bsh{F}$-stratification is isomorphic in ${\category{Fc}}_{W_d}(\mathscr{M})$ to a $d$-simplex from $\mathscr{M}_d$, meaning that the strata cannot be any larger than these isomorphism classes. \end{proof}
Finally, we remark that since the sets $W_\bullet$ defined in Def \ref{def:subpair} form a sequence that increases as $d$ decreases, the set of $W_d$-zigzags is contained in the set of $W_{d-1}$-zigzags and so forth. Therefore, successive localization of ${\category{Fc}}(\mathscr{M})$ about these $W_\bullet$'s creates a nested sequence of categories: \[ {\category{Fc}}_{W_m}(\mathscr{M}) \hookrightarrow {\category{Fc}}_{W_{m-1}}(\mathscr{M}) \hookrightarrow \cdots \hookrightarrow {\category{Fc}}_{W_1}(\mathscr{M}) \hookrightarrow {\category{Fc}}_{W_0}(\mathscr{M}). \] And thanks to the monotonicity guaranteed by Lem \ref{lem:mono}, isomorphism classes of $d$-simplices from $\mathscr{M}_d$ in ${\category{Fc}}_{W_d}(\mathscr{M})$ are stable under inclusion to ${\category{Fc}}_{W_i}(\mathscr{M})$ for $i \leq d$, since no simplex of $\mathscr{M}_i$ can ever become isomorphic to a simplex from $\mathscr{M}_d - \mathscr{M}_i$ in this entire sequence of categories. Consequently, we can extract all the canonical strata just by examining isomorphism classes in a single category.
\begin{corollary} The $d$-dimensional strata of the canonical $\bsh{F}$-stratification of $\mathscr{M}$ are isomorphism classes of $d$-simplices from $\mathscr{M}_{d}$ in ${\category{Fc}}_{W_0}(\mathscr{M})$. \end{corollary}
\end{document}
\begin{document}
\title{Induced subgraphs of graphs with large chromatic number.
X. Holes of specific residue} \begin{abstract} A large body of research in graph theory concerns the induced subgraphs of graphs with large chromatic number, and especially which induced cycles must occur. In this paper, we unify and substantially extend results from a number of previous papers, showing that, for every positive integer $k$, every graph with large chromatic number contains either a large complete subgraph or induced cycles of all lengths modulo $k$. As an application, we prove two conjectures of Kalai and Meshulam from the 1990's connecting the chromatic number of a graph with the homology of its independence complex. \end{abstract}
\section{Introduction} All graphs in this paper are finite and have no loops or parallel edges. We denote the chromatic number of a graph $G$ by $\chi(G)$, and its clique number (the cardinality of its largest clique) by $\omega(G)$. A {\em hole} in $G$ means an induced subgraph which is a cycle of length at least four.
What can we say about the hole lengths in a graph $G$ with large chromatic number? If $G$ is a complete graph then it has no holes at all, and the question becomes trivial. But if we bound the clique number of $G$ then the question becomes much more interesting, and much deeper. In an influential paper written thirty years ago, Gy\'arf\'as~\cite{gyarfas} made a number of conjectures about induced subgraphs of graphs with large chromatic number and bounded clique number. Three of these conjectures, concerning holes, are particularly well-known: \begin{thm}\label{gyarfasconj} For all $\kappa\ge 0$ \begin{itemize}
\item there exists $c$ such that every graph with chromatic number greater than $c$ contains either a complete subgraph on $\kappa$ vertices or a hole of odd length; \item for all $\ell\ge 0$ there exists $c$ such that every graph with chromatic number greater than $c$ contains either a complete subgraph on $\kappa$ vertices or a hole of length at least $\ell$; \item for all $\ell\ge 0$ there exists $c$ such that every graph with chromatic number greater than $c$ contains either a complete subgraph on $\kappa$ vertices or a hole whose length is odd and at least $\ell$. \end{itemize} \end{thm}
All three conjectures are now known to be true: the first was proved by the authors in~\cite{oddholes} (see \cite{cycles} for earlier work); the second jointly with Maria Chudnovsky in~\cite{longholes}; and the third (which is a strengthening of the first two) jointly with Chudnovsky and Sophie Spirkl in~\cite{longoddholes}.
The analogous result for long even holes is also known (it is enough to find two vertices joined by three long paths with no edges between them, and this follows from results of \cite{bananas}).
Another intriguing result on holes was shown by Bonamy, Charbit and Thomass\'e~\cite{bonamy}, who proved a conjecture of Kalai and Meshulam by showing the following.
\begin{thm}\label{bonamy}
Every graph with sufficiently large chromatic number contains either a triangle or a hole of length $0$ modulo $3$. \end{thm}
In this paper we prove the following theorem, which contains all the results mentioned above as special cases.
\begin{thm}\label{mainthm}
For all $\kappa,\ell\ge 0$ there exists $c$ such that every graph $G$ with $\chi(G)>c$ and $\omega(G)\le \kappa$ contains holes of every length modulo $\ell$. \end{thm}
Note that this result allows us to demand a {\em long} hole of length $i$ modulo $j$ by taking $\ell=Nj$ for large $N$ and then choosing a suitable residue. Thus it implies all three Gy\'arf\'as conjectures; and it extends \ref{bonamy} in several ways, allowing us to ask for any size of clique, and a hole of any residue and as long as we want. (Though we cannot demand a hole of any {\em specific} length: it is well-known that there are graphs with arbitrarily large girth and chromatic number.)
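For very small graphs, statements about hole lengths and their residues can be checked by brute force: a vertex subset induces a hole exactly when the induced subgraph is 2-regular and connected. The following Python sketch (illustrative only; the helper name is hypothetical) enumerates hole lengths of a graph given as an adjacency dictionary.

```python
from itertools import combinations

def hole_lengths(adj):
    """Lengths of all holes (induced cycles of length >= 4) in a small
    graph given as {vertex: set of neighbours}; brute force over subsets."""
    V = list(adj)
    lengths = set()
    for k in range(4, len(V) + 1):
        for S in combinations(V, k):
            Sset = set(S)
            # a hole is exactly a 2-regular connected induced subgraph
            if any(len(adj[v] & Sset) != 2 for v in S):
                continue
            seen, stack = set(), [S[0]]
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack.extend(adj[v] & Sset)
            if len(seen) == k:
                lengths.add(k)
    return lengths
```

For instance, a 6-cycle with one long chord has holes of length 4 only, while the 5-cycle has a single hole of length 5; the residues modulo any $k$ then follow directly.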
We will in fact prove an even stronger statement. We say $A,B\subseteq V(G)$ are {\em anticomplete} if $A\cap B=\emptyset$ and no vertex in $A$ has a neighbour in $B$; and subgraphs $P,Q$ of $G$ are {\em anticomplete} if $V(P),V(Q)$ are anticomplete. We prove the following. \begin{thm}\label{superkalai} Let $\kappa,n\ge 0$ be integers, and for $1\le i\le n$ let $p_i\ge 0$ and $q_i\ge 1$ be integers. Then there exists $c\ge 0$ with the following property. Let $G$ be a graph such that $\chi(G)>c$ and $\omega(G)\le \kappa$. Then there are $n$ holes $H_1,\ldots, H_{n}$ in $G$, pairwise anticomplete, such that $H_i$ has length $p_i$ modulo $q_i$ for $1\le i\le n$. \end{thm}
Let us restate this in slightly different language. An ideal of graphs is {\em $\chi$-bounded} if there is a function $f$ such that $\chi(G)\le f(\omega(G))$ for every graph $G$ in the class. Thus \ref{superkalai} can be reformulated as: \begin{thm}\label{boundedkalai} Let $n\ge 0$ be an integer, and for $1\le i\le n$ let $p_i\ge 0$ and $q_i\ge 1$ be integers. Let $\mathcal{C}$ be the ideal of all graphs that do not contain $n$ pairwise anticomplete holes $H_1,\ldots, H_n$ where $H_i$ has length $p_i$ modulo $q_i$ for $1\le i\le n$. Then $\mathcal{C}$ is $\chi$-bounded. \end{thm}
\ref{boundedkalai} (or equivalently \ref{superkalai}) implies \ref{mainthm}, and also implies the main theorem of~\cite{complement}, which is the case of \ref{superkalai} when $p_i=1$ and $q_i=2$ for each $i$. But it also has applications to further conjectures of Kalai and Meshulam~\cite{kalai}, connecting graph theory with topology, and in particular with the homology of the independence complex of $G$. We discuss these in the final section.
Let us say a hole $H$ in $G$ is {\em $d$-peripheral} if $\chi(G[X])>d$, where $X$ is the set of vertices of $G$ that are not in $V(H)$ and have no neighbours in $V(H)$. \ref{superkalai} follows easily from the following version of \ref{mainthm}, which will therefore be our main objective: \begin{thm}\label{peripheral} For all $\kappa,\ell,d\ge 0$ there exists $c$ such that every graph $G$ with $\chi(G)>c$ and $\omega(G)\le \kappa$ contains $d$-peripheral holes of every length modulo $\ell$. \end{thm} \noindent{\bf Proof of \ref{superkalai}, assuming \ref{peripheral}.\ \ } Let $\kappa, n$ and $p_i,q_i\;(1\le i\le n)$ be as in \ref{superkalai}. We may assume that $n\ge 1$ and $\kappa\ge 2$, and we proceed by induction on $n$, for fixed $\kappa$. Choose $d$ such that for every graph $G$ with $\chi(G)>d$ and $\omega(G)\le \kappa$, there are $n-1$ holes $H_1,\ldots, H_{n-1}$ in $G$, pairwise anticomplete, where $H_i$ has length $p_i$ modulo $q_i$ for $1\le i\le n-1$. Let $c$ satisfy \ref{peripheral} with $\ell$ replaced by $q_n$. We claim that $c$ satisfies \ref{superkalai}; for let $G$ be a graph such that $\chi(G)>c$ and $\omega(G)\le \kappa$. By \ref{peripheral}, $G$ has a $d$-peripheral hole $H_n$ of length $p_n$ modulo $q_n$. Let $X$ be the set of vertices of $G$ not in $H_n$ and with no neighbour in $H_n$. Thus $\chi(G[X])>d$. From the inductive hypothesis, $G[X]$ has $n-1$ holes $H_1,\ldots, H_{n-1}$, pairwise anticomplete, where $H_i$ has length $p_i$ modulo $q_i$ for $1\le i\le n-1$. But then $H_1,\ldots, H_n$ satisfy the theorem.~\bbox
In this paper, we are also interested in holes of nearly equal length. In the triangle-free case, a result is known that is even stronger than \ref{mainthm}: we proved in~\cite{holeseq} that
\begin{thm}\label{holeseq}
For all $\ell\ge 0$ there exists $c$ such that every triangle-free graph with chromatic number greater than $c$ contains holes of $\ell$ consecutive lengths. \end{thm}
We conjectured in~\cite{holeseq} that the same should be true if we exclude larger cliques: \begin{thm}\label{moreholeseq} {\bf Conjecture:}
For all integers $\kappa, \ell\ge 0$, there exists $c\ge 0$ such that every graph with chromatic number greater than $c$ contains either a complete subgraph on $\kappa$ vertices or holes of $\ell$ consecutive lengths. \end{thm} This conjecture remains open. However, we make a small step towards it: we will show that under the same hypotheses, there are (long) holes of two consecutive lengths. \begin{thm}\label{2holes}
For each $\kappa,\ell\ge 0$ there exists $c\ge 0$ such that every graph with chromatic number greater than $c$ contains either a complete subgraph on $\kappa$ vertices or holes of two consecutive lengths, both of length more than $\ell$. \end{thm} We have convinced ourselves that with a great deal of work, which we omit, we could get three consecutive ``long'' holes, but so far that is the best we can do.
As in several other papers of this series, the proof of \ref{peripheral} examines whether there is an induced subgraph of large chromatic number such that every ball of small radius in it has bounded chromatic number. Let us make this more precise. If $X\subseteq V(G)$, the subgraph of $G$ induced on $X$ is denoted by $G[X]$, and we often write $\chi(X)$ for $\chi(G[X])$. The {\em distance} or {\em $G$-distance} between two vertices $u,v$ of $G$ is the length of a shortest path between $u,v$, or $\infty$ if there is no such path. If $v\in V(G)$ and $\rho\ge 0$ is an integer, $N_G^{\rho}(v)$ or $N^{\rho}(v)$ denotes the set of all vertices $u$ with $G$-distance exactly $\rho$ from $v$, and $N_G^{\rho}[v]$ or $N^{\rho}[v]$ denotes the set of all $u$ with $G$-distance at most $\rho$ from $v$. If $G$ is a nonnull graph and $\rho\ge 1$, we define $\chi^{\rho}(G)$ to be the maximum of $\chi(N^{\rho}[v])$ taken over all vertices $v$ of $G$. (For the null graph $G$ we define $\chi^{\rho}(G)=0$.) Let $\mathbb{N}$ denote the set of nonnegative integers, and let $\phi:\mathbb{N}\rightarrow \mathbb{N}$ be a non-decreasing function. For $\rho\ge 1$, let us say a graph $G$ is {\em $(\rho,\phi)$-controlled} if $\chi(H)\le \phi(\chi^{\rho}(H))$ for every induced subgraph $H$ of $G$. Roughly, this says that in every induced subgraph $H$ of $G$ with large chromatic number, there is a vertex $v$ such that $H[N^{\rho}_H[v]]$ has large chromatic number. Let $\mathcal{C}$ be a class of graphs. We say $\mathcal{C}$ is an {\em ideal} if every induced subgraph of each member of $\mathcal{C}$ also belongs to $\mathcal{C}$. If $\rho\ge 2$ is an integer, an ideal $\mathcal{C}$ is {\em $\rho$-controlled} if there is a nondecreasing function $\phi:\mathbb{N}\rightarrow \mathbb{N}$ such that every graph in $\mathcal{C}$ is $(\rho,\phi)$-controlled. For $\ell\ge 4$, an {\em $\ell$-hole} means a hole of length exactly $\ell$. 
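On small examples the quantity $\chi^{\rho}$ is easy to compute by brute force. The sketch below (Python, purely illustrative, with hypothetical helper names) grows the closed ball $N^{\rho}[v]$ by breadth-first search and evaluates chromatic numbers by exhaustive colouring; for the 5-cycle, for example, it gives $\chi^{1}=2$ while $\chi=3$.

```python
from itertools import product

def chromatic_number(adj, S):
    """Brute-force chromatic number of the subgraph induced on S;
    vertices are assumed to be comparable (e.g. integers)."""
    S = sorted(S)
    if not S:
        return 0
    for k in range(1, len(S) + 1):
        for colouring in product(range(k), repeat=len(S)):
            colour = dict(zip(S, colouring))
            if all(colour[u] != colour[v]
                   for u in S for v in adj[u] if v in colour and u < v):
                return k

def chi_rho(adj, rho):
    """chi^rho(G): the maximum of chi(N^rho[v]) over all vertices v."""
    best = 0
    for v in adj:
        ball, frontier = {v}, {v}
        for _ in range(rho):  # grow the closed ball one step at a time
            frontier = {u for w in frontier for u in adj[w]} - ball
            ball |= frontier
        best = max(best, chromatic_number(adj, ball))
    return best
```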
The proof of \ref{peripheral} breaks into two parts, the 2-controlled case and the $\rho$-controlled case when $\rho>2$ (because if we can be sure that all 2-balls have small chromatic number then it is easier to piece together paths to make holes of any desired length.) We will prove the following two complementary results, which together imply \ref{peripheral}:
\begin{thm}\label{radthm} Let $\rho\ge 2$ be an integer, and let $\mathcal{C}$ be a $\rho$-controlled ideal of graphs. Let $\ell\ge 24$ if $\rho=2$, and $\ell\ge 8\rho^2+6\rho$ if $\rho>2$. Then for all $\kappa,d\ge 0$, there exists $c\ge 0$ such that every graph $G\in \mathcal{C}$ with $\omega(G)\le \kappa$ and $\chi(G)>c$ has a $d$-peripheral $\ell$-hole. \end{thm}
\begin{thm}\label{uncontrolled} For all integers $\ell\ge 2$ and $\tau,d\ge 0$ there is an integer $c\ge 0$ with the following property. Let $G$ be a graph such that $\chi^8(G)\le \tau$, and every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$ has chromatic number at most $\tau$. If $\chi(G)>c$ then there are $\ell$ $d$-peripheral holes in $G$ with lengths of all possible values modulo $\ell$. \end{thm} \noindent {\bf Proof of \ref{peripheral}, assuming \ref{radthm} and \ref{uncontrolled}.\ \ } Let $\kappa,\ell,d\ge 0$, and let $\mathcal{C}$ be the ideal of graphs with clique number at most $\kappa$ and with no $d$-peripheral hole of some length modulo $\ell$. By \ref{uncontrolled}, for each $\tau\ge 0$ there exists $c_{\tau}$ such that every $G\in \mathcal{C}$ with $\chi^8(G)\le \tau$ satisfies $\chi(G)\le c_{\tau}$, and so $\mathcal{C}$ is 8-controlled. By \ref{radthm} the theorem follows. This proves \ref{peripheral}.~\bbox
We prove the 2-controlled case of \ref{radthm} in the next section, and the $\rho>2$ case in section 3, deducing \ref{radthm} at the end of section 3. We prove \ref{uncontrolled} in section~\ref{sec:uncontrolled}, completing the proof of \ref{peripheral}; and prove the theorem about two consecutive long holes in section \ref{sec:2holes}.
\section{2-control}
First we handle the 2-controlled case. The proof here is very much like part of the proof of theorem 4.8 of~\cite{longoddholes}; the main difference is a strengthening of theorem 4.5 of that paper. First we need some definitions. If $G$ is a graph and $B,C\subseteq V(G)$, we say that $B$ {\em covers} $C$ if $B\cap C=\emptyset$ and every vertex in $C$ has a neighbour in $B$. Let $G$ be a graph, let $x\in V(G)$, let $N$ be some set of neighbours of $x$, and let $C\subseteq V(G)$ be disjoint from $N\cup \{x\}$, such that $x$ is anticomplete to $C$ and $N$ covers $C$. In this situation we call $(x,N)$ a {\em cover} of $C$ in $G$. For $C,X\subseteq V(G)$, a {\em multicover of $C$} in $G$ is a family $(N_x:x\in X)$ such that \begin{itemize} \item $X$ is stable; \item for each $x\in X$, the pair $(x,N_x)$ is a cover of $C$; \item for all distinct $x,x'\in X$, the vertex $x'$ is anticomplete to $N_x$ (and in particular all the sets $\{x\}\cup N_x$ are pairwise disjoint). \end{itemize}
Its {\em length} is $|X|$, and the multicover $(N_x:x\in X)$ is {\em stable} if each of the sets $N_x\;(x\in X)$ is stable. \begin{figure}
\caption{A multicover of length two (the wiggle indicates possible edges)}
\label{fig:1}
\end{figure}
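The multicover axioms are easy to verify mechanically on small instances. The following Python sketch (hypothetical names, for illustration only) checks whether a family $(N_x : x\in X)$ is a multicover of $C$ in a graph given by a symmetric adjacency dictionary; stability of the individual sets $N_x$, needed for a {\em stable} multicover, would be a separate check.

```python
def is_multicover(adj, X, N, C):
    """Check the multicover axioms for (N[x] : x in X) over C in the
    graph adj ({vertex: set of neighbours}, assumed symmetric)."""
    # X must be stable
    if any(y in adj[x] for x in X for y in X if x != y):
        return False
    for x in X:
        # (x, N[x]) must be a cover of C: N[x] is a set of neighbours of x
        # disjoint from C, x is anticomplete to C, and N[x] covers C
        if not N[x] <= adj[x] or (N[x] & C) or (adj[x] & C) or x in C:
            return False
        if any(not (adj[c] & N[x]) for c in C):
            return False
    # for distinct x, x' in X: x' is anticomplete to N[x]
    return all(y not in N[x] and not (adj[y] & N[x])
               for x in X for y in X if y != x)

def sym(edges):
    """Build a symmetric adjacency dictionary from an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

# a multicover of length two, schematically as in Figure 1
G = sym([('x1', 'a1'), ('x1', 'a2'), ('a1', 'c1'), ('a2', 'c2'),
         ('x2', 'b1'), ('b1', 'c1'), ('b1', 'c2')])
```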
Let $(N_x:x\in X)$ be a multicover of $C$, let $X'\subseteq X$, and for each $x\in X'$ let $N_x'\subseteq N_x$; and let $C'\subseteq C$ be covered by each of the sets $N_x'\;(x\in X')$. Then $(N_x':x\in X')$ is a multicover of $C'$, and we say it is {\em contained} in $(N_x:x\in X)$.
Again, let $(N_x:x\in X)$ be a multicover of $C$. Let $P$ be an induced path of $G$ with the following properties: \begin{itemize} \item $P$ has length three or five; \item the ends of $P$ are in $X$; \item no vertex of $X$ not an end of $P$ belongs to or has a neighbour in $V(P)$; and \item every vertex of $P$ belongs to $X\cup \bigcup_{x\in X}N_x\cup C$. \end{itemize} Let us call such a path $P$ an {\em oddity} for the multicover.
\begin{figure}
\caption{Oddities}
\label{fig:2}
\end{figure}
If $(N_x:x\in X)$ is a multicover of $C$, with an oddity $P$, and $(N_x':x\in X')$ is a multicover of $C'\subseteq C$ contained in $(N_x:x\in X)$, and $V(P)$ is anticomplete to $X'\cup \bigcup_{x\in X'}N_x'\cup C'$, we say that $(N_x':x\in X')$ is a multicover of $C'$ {\em compatible} with $P$. Let $H$ be the subgraph induced on $\bigcup_{x\in X}N_x$; we call the clique number of $H$ the {\em cover clique number} of $(N_x:x\in X)$.
First we need to show the following:
\begin{thm}\label{getoddity} Let $\tau,\kappa, m',c'\ge 0$ be integers, and let $0\le \kappa'\le \kappa$ be an integer. Then there exist integers $m,c\ge 0$ with the following property. Let $G$ be a graph such that \begin{itemize} \item $\omega(G)\le \kappa$; \item $\chi(H)\le \tau$ for every induced subgraph $H$ of $G$ with $\omega(H)<\kappa$; and \item $G$ admits a stable multicover $(N_x:x\in X)$ with length $m$, of a set $C$ with $\chi(C)>c$, with cover clique number at most $\kappa'$. \end{itemize} Then there is an oddity $P$ for the multicover, and a multicover $(N_x':x\in X')$
of $C'\subseteq C$ contained in $(N_x:x\in X)$ and compatible with $P$, such that $|X'|=m'$ and $\chi(C')>c'$. \end{thm} We proceed by induction on $\kappa'$, with $\tau,\kappa,m',c'$ fixed. Thus, inductively, there exist $m_0,c_0\ge 0$ such that the theorem holds if $m,c$ are replaced by $m_0,c_0$ respectively, and $\kappa'$ is replaced by any $\kappa_0$ with $0\le \kappa_0<\kappa'$. (Note that possibly $\kappa'=0$, when this statement is vacuous; in that case take $m_0=c_0=0$.)
Let $m=4+4m_0+2m'$. Define $c_m=4\tau+2^{m}(c_0+c')$, and for $i=m-1,\ldots, 1$ let $c_{i}=2c_{i+1}+\tau$. Let $c=2c_1+\tau$; we will show that $m,c$ satisfy the theorem.
Let $G$, $(N_x:x\in X)$ and $C$ be as in the theorem, where $|X|=m$, $\chi(C)>c$ and the cover clique number of $(N_x:x\in X)$ is at most $\kappa'$. We may assume (because otherwise the theorem follows from the inductive hypothesis) that: \\ \\
(1) {\em There is no multicover $(N_x':x\in X')$ of $C'\subseteq C$ contained in $(N_x:x\in X)$ with cover clique number less than $\kappa'$, and with $|X'|=m_0$ and $\chi(C')>c_0$.}
Let $X=\{x_1,\ldots, x_m\}$, and let us write $N_i$ for $N_{x_i}$ for $1\le i\le m$. \\ \\ (2) {\em For $1\le i\le m$, there exist disjoint $C_i,D_i\subseteq C$ with $\chi(C_i), \chi(D_i)>c_i$, and $A_h\subseteq N_h$ for $1\le h\le i$, such that each $A_h$ covers one of $C_i, D_i$ and is anticomplete to the other.} \\ \\ If $A\subseteq N_1\cup \cdots\cup N_m$, let $f(A)$ denote the set of vertices in $C$ with a neighbour in $A$. Since $\chi(C)>c$, there exists $A_1\subseteq N_1$ minimal such that $f(A_1)$ has chromatic number more than $c_1$. Let $C_1=f(A_1)$ and $D_1=C\setminus C_1$. From the minimality of $A_1$, it follows that $\chi(C_1)\le c_1+\tau$. Consequently $\chi(D_1)>\chi(C)-(c_1+\tau)\ge c_1$. Thus (2) holds for $i = 1$. Now we assume that $i>1$ and $C_{i-1}, D_{i-1}$ and the sets $A_1,\ldots, A_{i-1}$ satisfy (2) for $i-1$. Choose $A_i\subseteq N_i$ minimal such that one of $\chi(f(A_i)\cap C_{i-1})$, $\chi(f(A_i)\cap D_{i-1})$ is more than $c_i$; say the first (without loss of generality). Let $C_i=f(A_i)\cap C_{i-1}$. Now $\chi(f(A_i)\cap D_{i-1})\le c_i+\tau$, from the minimality of $A_i$, so $\chi(D_i)>c_{i-1}-c_i-\tau\ge c_i$, where $D_i = D_{i-1}\setminus f(A_i)$. Thus $A_i$ covers $C_i$
and is anticomplete to $D_i$. This proves (2).
From (2) with $i = m$, each $A_i$ covers one of $C_m, D_m$ and is anticomplete to the other. By exchanging $C_m,D_m$ if necessary, we may assume that for at least $m/2$ values of $i$, $A_i$ covers $C_m$ and is anticomplete to $D_m$. We may assume (by reordering $x_1,\ldots, x_m$) that $A_i$ covers $C_m$ and is anticomplete to $D_m$ for $1\le i\le m/2$. Let $B_i=N_i\setminus A_i$ for $1\le i\le m/2$. \\ \\ (3) {\em There is an oddity $P$ for $(N_x:x\in X)$ with ends $x_1,x_2$ and with interior in $B_1\cup B_2\cup D_m.$} \\ \\ Since $\chi(D_m)>c_m\ge \tau$, there is a clique $Z\subseteq D_m$
with $|Z|=\kappa$. Now $N_1$ covers $D_m$, but $A_1$ is anticomplete to $D_m$, so $B_1$ covers $D_m$. Similarly $B_2$ covers $D_m$. Choose a vertex $y_1\in B_{1}\cup B_{2}$ with as many neighbours in $Z$ as possible; and we may assume that $y_1\in B_{1}$. Not every vertex of $Z$ is adjacent to $y_1$ since $\omega(G)\le \kappa$; let $z_2\in Z$ be nonadjacent to $y_1$. Choose $y_2\in B_{2}$ adjacent to $z_2$. From the choice of $y_1$, there exists $z_1\in Z$ adjacent to $y_1$ and not to $y_2$. If $y_1,y_2$ are nonadjacent, then $x_1\hbox{-} y_1\hbox{-} z_1\hbox{-} z_2\hbox{-} y_2\hbox{-} x_2$ is an oddity, and if $y_1,y_2$ are adjacent then $x_1\hbox{-} y_1\hbox{-} y_2\hbox{-} x_2$ is an oddity. This proves (3).
Now there are at most four vertices of $P$ that have neighbours in $C_m$, and so there exists $F\subseteq C_m$ with $\chi(F)>c_m-4\tau=2^{m}(c_0+c')$ that is anticomplete to $V(P)$. There are two vertices of $P$ in $N_1\cup N_2$, and those are the only vertices of $P$ that might have neighbours in $A_i$ for $3\le i\le m/2$. Let these vertices be $p,q$, and for $3\le i\le m/2$ let $P_i$ be the set of vertices in $A_i$ adjacent to $p$, and $Q_i$ the set adjacent to $q$.
For each $v\in F$, let $I(v)$ be the set of $i$ with $3\le i\le m/2$
such that $v$ has a neighbour in $P_i$. For each subset $I\subseteq \{3,\ldots, m/2\}$ with $|I|=m_0$, the chromatic number of the set of $v\in F$ with $I\subseteq I(v)$ is at most $c_0$, by (1). Since there are at most $2^{m-1}$ such subsets $I$, the set of vertices $v\in F$ with $|I(v)|\ge m_0$ has chromatic number at most $2^{m-1}c_0$; and similarly the set of vertices adjacent to neighbours of $q$ in at least $m_0$ sets $A_i$ has chromatic number at most $2^{m-1}c_0$. Consequently there exists $F'\subseteq F$ with $$\chi(F')\ge \chi(F)-2^{m}c_0> 2^{m}c'$$ such that for each $v\in F'$, there are at most $2m_0$ values of $i\in \{3,\ldots, m/2\}$ such that $v$ is adjacent to a neighbour of $p$ or $q$ in $A_i$. There are only at most $2^{m}$ possibilities for the set of these values, so there exists $C'\subseteq F'$ with $\chi(C')\ge \chi(F')2^{-m}>c'$ such that all vertices in $C'$ have the same set of values,
and in particular there exists $I\subseteq \{3,\ldots, m/2\}$ with $|I|=m/2-2-2m_0=m'$ such that no vertex in $C'$
has a neighbour adjacent to $p$ or $q$ in any $A_i(i\in I)$. For each $i\in I$, let $N_{x_i}'$ be the set of vertices in $A_i$ nonadjacent to both $p,q$. Then $(N_{x}':x\in \{x_i:i\in I\})$ is a multicover of $C'$, contained in $(N_x:x\in X)$, and compatible with $P$. This proves \ref{getoddity}.~\bbox
By three successive applications of \ref{getoddity} (one for each oddity), we deduce:
\begin{thm}\label{gettriple} For all integers $\tau,\kappa\ge 0$, there exist integers $m,c\ge 0$ with the following property. Let $G$ be a graph such that \begin{itemize} \item $\omega(G)\le \kappa$; \item $\chi(H)\le \tau$ for every induced subgraph $H$ of $G$ with $\omega(H)<\kappa$; and
\item $G$ admits a stable multicover $(N_x:x\in X)$ of a set $C$, where $|X|=m$ and $\chi(C)>c$. \end{itemize} Then there are three oddities $P_1,P_2,P_3$ for the multicover, where $V(P_1), V(P_2), V(P_3)$ are pairwise anticomplete. \end{thm} (The same is true with ``three'' replaced by any other positive integer, but we only need three.) Next we need:
\begin{thm}\label{findhole} Let $\ell\ge 24$ be an integer. Take the complete bipartite graph $K_{\ell,\ell}$, with bipartition $A,B$. Add three more edges joining three disjoint pairs of vertices in $A$. Now subdivide every edge between $A$ and $B$ once, and subdivide each of the three additional edges either two or four times. The graph we produce has a hole of length $\ell$. \end{thm} We leave the proof to the reader (use the fact that if $x,y,z\in \{3,5\}$ then $\ell$ is expressible as a sum of some or none of $x,y,z$ and at least three 4's).
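The arithmetic fact invoked in this hint can be verified exhaustively over a range of values. A quick Python check (illustrative only; not part of the proof) confirms that every $\ell\ge 24$ is a sum of some (possibly empty) sub-multiset of $\{x,y,z\}$ and at least three 4's, for every choice of $x,y,z\in\{3,5\}$, and that the bound $24$ is needed when $x=y=z=5$.

```python
from itertools import combinations, product

def representable(ell, x, y, z):
    """Is ell a sum of some (possibly empty) sub-multiset of {x, y, z}
    together with at least three 4's?"""
    subset_sums = {sum(c) for r in range(4)
                   for c in combinations((x, y, z), r)}
    return any(ell - s >= 12 and (ell - s) % 4 == 0 for s in subset_sums)
```

Checking all $24\le\ell\le 200$ and all eight choices of $(x,y,z)$ confirms the claim in that range; the general case follows since the subset sums cover every residue class modulo 4.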
A multicover $(N_x:x\in X)$ of $C$ is said to be {\em stably $k$-crested} if there are vertices $a_1,\ldots, a_k$ and vertices $a_{ix}\;(1\le i\le k, x\in X)$ of $G$, all distinct, with the following properties: \begin{itemize} \item $a_1,\ldots, a_k$ and the vertices $a_{ix}\;(1\le i\le k, x\in X)$ do not belong to $X\cup C\cup \bigcup_{x\in X}N_x$; \item for $1\le i\le k$ and each $x\in X$, $a_{ix}$ is adjacent to $x$, and there are no other edges between the sets $\{a_1,\ldots, a_k\}\cup \{a_{ix}:1\le i\le k, x\in X\}$ and $X\cup C\cup \bigcup_{x\in X}N_x$; \item for $1\le i\le k$ and each $x\in X$, $a_{ix}$ is adjacent to $a_i$, and there are no other edges between $\{a_1,\ldots, a_k\}$ and $\{a_{ix}:1\le i\le k, x\in X\}$ \item $a_1,\ldots, a_k$ are pairwise nonadjacent; \item for all $i,j\in \{1,\ldots, k\}$ and all distinct $x,y\in X$, $a_{ix}$ is nonadjacent to $a_{jy}$. \end{itemize} (Thus the ``crest'' part is obtained from
$K_{k,|X|}$ by subdividing every edge once.) We deduce:
\begin{thm}\label{crested} Let $\ell\ge 24$, and let $\tau,\kappa\ge 0$. Then there exist $m,c\ge 0$ with the following property. Let $G$ be a graph such that \begin{itemize} \item $\omega(G)\le \kappa$; \item $\chi(H)\le \tau$ for every induced subgraph $H$ of $G$ with $\omega(H)<\kappa$; and \item $G$ admits a stably
$\ell$-crested stable multicover $(N_x:x\in X)$ of a set $C$, where $|X|=m$ and $\chi(C)>c$. \end{itemize} Then $G$ has a hole of length $\ell$. \end{thm} \noindent{\bf Proof.}\ \ Let $m,c$ satisfy \ref{gettriple}, choosing $m\ge \ell$ (note that if $m,c$ satisfy \ref{gettriple} then so do $m',c$ for every $m'\ge m$). By \ref{gettriple}, there are three oddities, pairwise anticomplete; and the result follows from \ref{findhole}. This proves \ref{crested}.~\bbox
Theorem 4.4 of~\cite{longoddholes} says: \begin{thm}\label{getbigtick}
For all $m,c,k,\kappa,\tau\ge 0$ there exist $m',c'\ge 0$ with the following property. Let $G$ be a graph with $\omega(G)\le \kappa$, such that $\chi(H)\le \tau$ for every induced subgraph $H$ of $G$ with $\omega(H)<\kappa$. Let $(N_x':x\in X')$ be a multicover in $G$ of some set $C'$, such that $|X'|\ge m'$ and $\chi(C')> c'$. Then there exist $X\subseteq X'$ with $|X|\ge m$, and $C\subseteq C'$ with $\chi(C)> c$, and a stable multicover $(N_x:x\in X)$ of $C$ contained in $(N_x':x\in X')$ that is stably $k$-crested. \end{thm}
Combining \ref{crested} and \ref{getbigtick}, we deduce the following (a strengthening of Theorem 4.5 of~\cite{longoddholes}):
\begin{thm}\label{nomult} Let $\ell\ge 24$, and let $\tau,\kappa\ge 0$. Then there exist $m,c\ge 0$ with the following property. Let $G$ be a graph such that \begin{itemize} \item $\omega(G)\le \kappa$; \item $\chi(H)\le \tau$ for every induced subgraph $H$ of $G$ with $\omega(H)<\kappa$; and \item $G$ admits a multicover of length $m$, of a set $C$ with $\chi(C)>c$. \end{itemize} Then $G$ has a hole of length $\ell$. \end{thm}
We need the following, a consequence of Theorem 9.7 of~\cite{chandeliers}. That involves ``trees of lamps'', but we do not need to define those here; all we need is that a cycle of length $\ell$ is a tree of lamps. (Note that what we call a ``multicover'' here is called a ``strongly-independent 2-multicover'' in that paper, and indexed in a slightly different way.) \begin{thm}\label{findchand5} Let $m,\kappa,c',\ell\ge 0$, and let $\mathcal{C}$ be a 2-controlled ideal, such that for every $G\in \mathcal{C}$: \begin{itemize} \item $\omega(G)\le \kappa$; \item $G$ does not admit a multicover of length $m$ of a set with chromatic number more than $c'$; and \item $G$ has no hole of length $\ell$. \end{itemize} Then there exists $c$ such that all graphs in $\mathcal{C}$ have chromatic number at most $c$. \end{thm}
Now we prove the main result of this section, that is, \ref{radthm} with $\rho=2$.
\begin{thm}\label{rad2} Let $\ell\ge 24$ and let $\mathcal{C}$ be a 2-controlled ideal of graphs. For all $\kappa,d\ge 0$ there exists $c$ such that every graph in $\mathcal{C}$ with clique number at most $\kappa$ and chromatic number more than $c$ has a $d$-peripheral hole of length $\ell$. \end{thm} \noindent{\bf Proof.}\ \ We proceed by induction on $\kappa$. The result holds for $\kappa\le 1$, so we assume that $\kappa\ge 2$ and that there exists $\tau$ such that every graph in $\mathcal{C}$ with clique number less than $\kappa$ has chromatic number at most $\tau$. Let $\mathcal{C}'$ be the ideal of graphs $G\in \mathcal{C}$ such that $\omega(G)\le \kappa$ and $G$ has no hole of length $\ell$. Choose $m,c'$ to satisfy \ref{nomult} (with $c$ replaced by $c'$). Choose $c''$ to satisfy \ref{findchand5} (with $\mathcal{C}$ replaced by $\mathcal{C}'$, and $c$ replaced by $c''$). Let $c=\max(c'',d+\ell \tau)$; we claim that $c$ satisfies the theorem. For let $G\in \mathcal{C}$ with $\omega(G)\le \kappa$ and $\chi(G)>c$. Suppose first that $G\in \mathcal{C}'$. By \ref{nomult}, $G$ does not admit a multicover of length $m$ of a set with chromatic number more than $c'$. From \ref{findchand5}, $\chi(G)\le c''$, a contradiction.
Thus $G\notin \mathcal{C}'$, and so $G$ has an $\ell$-hole $H$. For each vertex of $H$, its set of neighbours has chromatic number at most $\tau$; and so the set of all vertices of $G$ that belong to or have a neighbour in $H$ has chromatic number at most $\ell\tau$. Since $\chi(G)>d+\ell \tau$, it follows that $H$ is $d$-peripheral. This proves \ref{rad2}.~\bbox
\section{The $\rho$-controlled case for $\rho\ge 3$}
Let $G$ be a graph. We say a {\em grading} of $G$ is a sequence $(W_1,\ldots, W_n)$ of subsets of $V(G)$, pairwise disjoint and with union $V(G)$. If $\tau\ge 0$ is such that $\chi(G[W_i])\le \tau$ for $1\le i\le n$ we say the grading is {\em $\tau$-colourable}. We say that $u\in V(G)$ is {\em earlier} than $v\in V(G)$ (with respect to some grading $(W_1,\ldots, W_n)$) if $u\in W_i$ and $v\in W_j$ where $i<j$.
Let $G$ be a graph, and let $B,C\subseteq V(G)$, where $B$ covers $C$. Let $B=\{b_1,\ldots, b_m\}$. For $1\le i<j\le m$ we say that $b_i$ is {\em earlier} than $b_j$ (with respect to the enumeration $(b_1,\ldots, b_m)$). For $v\in C$, let $i\in \{1,\ldots, m\}$ be minimum such that $b_i,v$ are adjacent; we call $b_i$ the {\em earliest parent} of $v$. An edge $uv$ of $G[C]$ is said to be {\em square} (with respect to the enumeration $(b_1,\ldots, b_m)$) if the earliest parent of $u$ is nonadjacent to $v$, and the earliest parent of $v$ is nonadjacent to $u$. Now let $(W_1,\ldots, W_n)$ be a grading of $G[C]$. We say the enumeration $(b_1,\ldots, b_m)$ of $B$ and the grading $(W_1,\ldots, W_n)$ are {\em compatible} if for all $u,v\in C$ with $u$ earlier than $v$, the earliest parent of $u$ is earlier than the earliest parent of $v$.
A graph $H$ is a {\em $\rho$-ball} if either $V(H)=\emptyset$ or there is a vertex $z\in V(H)$ such that every vertex of $H$ has $H$-distance at most $\rho$ from $z$; and we call $z$ a {\em centre} of the $\rho$-ball. If $G$ is a graph, a subset $X\subseteq V(G)$ is said to be a {\em $\rho$-ball} if $G[X]$ is a $\rho$-ball. (Note that there may be vertices of $G$ not in $X$ that have $G$-distance at most $\rho$ from $z$; and also, for a pair of vertices in $X$, their $G$-distance and their $G[X]$-distance may be different.)
\begin{thm}\label{greentouchrad} Let $\phi$ be a nondecreasing function and $\rho\ge 3$, and let $G$ be a $(\rho,\phi)$-controlled graph. Let $\tau\ge 0$ be such that $\chi^{\rho-1}(G)\le \tau$ and $\chi(J)\le \tau$ for every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$. Let $c\ge 0$ and let $(W_1,\ldots, W_n)$ be a $\tau$-colourable grading of $G$. Let $H$ be a subgraph of $G$ (not necessarily induced) with $\chi(H)>\tau+1+\phi(c+\tau)$, and such that $W_i\cap V(H)$ is stable in $H$ for each $i\in \{1,\ldots, n\}$. Then there is an edge $uv$ of $H$, and a $\rho$-ball $X$ of $G$, such that \begin{itemize} \item $u,v$ are both earlier than every vertex in $X$; \item $v$ has a $G$-neighbour in $X$, and $u$ does not; and \item $\chi(G[X])>c$. \end{itemize} \end{thm} \noindent{\bf Proof.}\ \ Let us say that $v\in V(G)$ is {\em internally active} if there is a $\rho$-ball $X\ni v$ with $\chi(X)>c+\tau$ such that no vertex of $X$ is earlier than $v$. (Note that $X\cap W_i$ may have more than one element, so there may be vertices in $X$ that are neither earlier nor later than $v$.) Let $R_1$ be the set of internally active vertices. We claim first: \\ \\ (1) {\em $\chi(G\setminus R_1)\le \phi(c+\tau)$.} \\ \\ For suppose not. Then since $G$ is $(\rho,\phi)$-controlled, there is a $\rho$-ball $X\subseteq V(G)\setminus R_1$ with $\chi(X)>c+\tau$, which therefore contains an internally active vertex, a contradiction. This proves (1).
Let us say $v\in V(G)$ is {\em externally active} if there is a $\rho$-ball $X$ of $G$ with $\chi(X)>c+\tau$ such that every vertex of $X$ is later than $v$, and $v$ has an $H$-neighbour in $X$. Let $R_2$ be the set of externally active vertices. We claim: \\ \\ (2) {\em $R_1\setminus R_2$ is stable in $H$.} \\ \\ For suppose that $uv$ is an edge of $H$ with both ends in $R_1\setminus R_2$. Since each $W_i\cap V(H)$ is stable in $H$, we may assume that $u$ is earlier than $v$. Since $v$ is internally active, there is a $\rho$-ball $X$ containing $v$ with $\chi(X)>c+\tau$ such that no vertex of $X$ is earlier than $v$; but then $u$ is externally active, a contradiction. This proves (2). \\ \\ (3) {\em There is a subset $Y\subseteq V(H)$ such that $H[Y]$ is connected and has chromatic number more than $\tau$, and a $\rho$-ball $X$ of $G$ with $\chi(G[X])>c+\tau$, such that every vertex of $Y$ is earlier than every vertex of $X$, and some vertex of $Y$ has an $H$-neighbour in $X$.} \\ \\ Since $H$ has chromatic number more than $\tau+1+\phi(c+\tau)$, it follows from (1) and (2) that $\chi(H[R_2])>\tau$. Let $Y$ be the vertex set of a component of $H[R_2]$ with maximum chromatic number. Choose $v\in Y$ such that no vertex of $Y$ is later than $v$. Since $v$ is externally active, this proves (3).
Let $X,Y$ be as in (3). If some vertex of $Y$ has no $G$-neighbour in $X$, then since $H[Y]$ is connected, there is an edge $uv$ of $H[Y]$ such that $v$ has a $G$-neighbour in $X$ and $u$ does not, and the theorem holds. We assume then that every vertex of $Y$ has a $G$-neighbour in $X$. For each $y\in Y$, let $N(y)$ denote its set of $G$-neighbours in $X$. Let $z$ be a centre of $X$, and for $0\le i\le \rho$ let $L_i$ be the set of vertices in $X$ with $G[X]$-distance $i$ from $z$. Thus $L_0\cup\cdots\cup L_{\rho}=X$. Let $Y_0$ be the set of all $y\in Y$ with $N(y)\subseteq L_{\rho-1}\cup L_{\rho}$. \\ \\ (4) {\em $Y_0\ne\emptyset$.} \\ \\ Since $\chi(H[Y])>\tau$, it follows that $\chi(G[Y])>\tau$, and so, since $\chi^{\rho-1}(G)\le \tau$, some vertex $y\in Y$ has $G$-distance at least $\rho$ from $z$. Consequently $N(y)\subseteq L_{\rho-1}\cup L_{\rho}$. This proves (4).
Choose $y\in Y_0$, if possible with the additional property that $N(y)\cap L_{\rho-1}=\emptyset$. Let $U$ be the set of vertices in $L_{\rho}$ with a neighbour in $N(y)\cap L_{\rho-1}$. \\ \\ (5) {\em There is a vertex $y'$ of $Y$ with $N(y')\not\subseteq N(y)\cup U$.} \\ \\ For there is a vertex $y'\in Y$ with $G$-distance at least $\rho$ from $y$, since $\chi(G[Y])>\tau$. Since $\rho>2$, $N(y)\cap N(y')=\emptyset$. If $N(y')\subseteq U$, then $y'\in Y_0$ and $N(y')\cap L_{\rho-1}=\emptyset$; but then $N(y)\cap L_{\rho-1}=\emptyset$ from the choice of $y$, and so $U=\emptyset$, a contradiction. Thus $N(y')\not\subseteq U$. This proves (5).
Now $X\setminus (N(y)\cup U)$ is a $\rho$-ball $X'$ say, and some vertex (namely $y'$) of $Y$ has a $G$-neighbour in it, and another (namely $y$) has no $G$-neighbour in it. Since $H[Y]$ is connected, there is an edge $uv$ of $H[Y]$ such that $v$ has a $G$-neighbour in $X'$ and $u$ does not. But $\chi(X)>c+\tau$, and every vertex in $N(y)\cup U$ has $G$-distance at most two from $y$ and so $\chi(N(y)\cup U)\le \tau$, and consequently $\chi(X')\ge \chi(X)-\tau>c$. This proves \ref{greentouchrad}.~\bbox
We also need the following, proved in~\cite{longoddholes}:
\begin{thm}\label{getgreenedge} Let $G$ be a graph, and let $B,C\subseteq V(G)$, where $B$ covers $C$. Let every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$ have chromatic number at most $\tau$. Let the enumeration $(b_1,\ldots, b_m)$ of $B$ and the grading $(W_1,\ldots, W_n)$ of $G[C]$ be compatible. Let $H$ be the subgraph of $G$ with vertex set $C$ and edge set the set of all square edges. Let $(W_1,\ldots, W_n)$ be $\tau$-colourable; then $\chi(G[C])\le \tau^2\chi(H)$. \end{thm}
We deduce: \begin{thm}\label{greenedgerad} Let $\phi$ be a nondecreasing function and $\rho\ge 3$, and let $G$ be a $(\rho,\phi)$-controlled graph. Let $\tau\ge 0$ be such that $\chi^{\rho-1}(G)\le \tau$ and $\chi(J)\le \tau$ for every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$. Let $B,C\subseteq V(G)$, where $B$ covers $C$. Let the enumeration $(b_1,\ldots, b_m)$ of $B$ and the grading $(W_1,\ldots, W_n)$ of $G[C]$ be compatible. Let $c\ge 0$, let $(W_1,\ldots, W_n)$ be $\tau$-colourable, and let $\chi(G[C])>\tau^2 (\tau+1+\phi(c+\tau))$. Then there is a square edge $uv$, and a $\rho$-ball $X$ of $G$, such that \begin{itemize} \item $u,v$ are both earlier than every vertex in $X$; \item $v$ has a neighbour in $X$, and $u$ does not; and \item $\chi(X)>c$. \end{itemize} \end{thm} \noindent{\bf Proof.}\ \ Let $H$ be as in \ref{getgreenedge}. By \ref{getgreenedge}, $\chi(G[C])\le \tau^2\chi(H)$. Since $\chi(G[C])>\tau^2 (\tau+1+\phi(c+\tau))$ and $\chi^1(G)\le \tau$, it follows that $\chi(H)>\tau+1+\phi(c+\tau)$. By \ref{greentouchrad} applied to $G[C]$ and $H$, we deduce that there is an edge $uv$ of $H$, and a $\rho$-ball $X$ of $G$, satisfying the theorem. This proves \ref{greenedgerad}.~\bbox
A {\em $\rho$-comet} $(\mathcal{P}, X)$ in a graph $G$ consists of a set $\mathcal{P}$ of induced paths, each with the same pair of ends $x,y$ say, and a $\rho$-ball $X$, such that $y$ has a neighbour in $X$ and no other vertex of any member of $\mathcal{P}$ has a neighbour in $X$. We call $x$ the {\em tip} of the $\rho$-comet, and $\chi(X)$ its {\em chromatic number}, and the set of lengths of members of $\mathcal{P}$ its {\em spectrum}.
\begin{thm}\label{twotails} Let $\phi$ be a nondecreasing function, and let $\rho\ge 3$ and $\tau\ge 0$. For all integers $c\ge 1$ there exists $c'\ge 0$ with the following property. Let $G$ be a $(\rho,\phi)$-controlled graph such that $\chi^{\rho-1}(G)\le \tau$ and $\chi(J)\le \tau$ for every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$. Let $x\in V(G)$, and let $V(G)\setminus \{x\}$ be a $\rho$-ball, such that $x$ has a neighbour in $G\setminus x$. Let $\chi(V(G)\setminus \{x\})>c'$. Then there is a $\rho$-comet $(\{P,Q\}, C)$ in $G$ with tip $x$ and chromatic number more than $c$, where
$|E(Q)|=|E(P)|+1$, and $|E(P)|\le 2\rho+1$. \end{thm} \noindent{\bf Proof.}\ \ Let $c' = 2\tau^2 (\tau+1+\phi(c+\tau))$, and let $G,x$ be as in the theorem. Since $V(G)\setminus \{x\}$ is a $\rho$-ball, every vertex of $G$ has $G$-distance at most $2\rho+1$ from $x$; for $0\le k\le 2\rho+1$ let $L_k$ be the set of vertices of $G$ with $G$-distance exactly $k$ from $x$. Since $\chi(V(G)\setminus \{x\})>c'$, there exists $k$ such that $\chi(L_k)>c'/2$. Since $\chi^2(G)\le \tau$ it follows that $k\ge 3$. Let $(b_1,\ldots, b_n)$ be an enumeration of $L_{k-1}$, and for $1\le i\le n$ let $W_i$ be the set of vertices in $L_k$ that are adjacent to $b_i$ but not to $b_1,\ldots, b_{i-1}$. Then $(W_1,\ldots, W_n)$ is a $\tau$-colourable grading of $G[L_k]$, compatible with $(b_1,\ldots, b_n)$.
Since $\chi(L_k)>\tau^2 (\tau+1+\phi(c+\tau))$, by \ref{greenedgerad} there is a square edge $uv$ of $G[L_k]$, and a $\rho$-ball $C$ of $G[L_k]$, such that \begin{itemize} \item $u,v$ are both earlier than every vertex in $C$; \item $v$ has a neighbour in $C$, and $u$ does not; and \item $\chi(C)>c$. \end{itemize}
Let $u',v'$ be the earliest parents of $u,v$ respectively. Let $P$ consist of the union of the path $v\hbox{-} v'$ and a path of length $k-1$ between $v',x$ with interior in $L_1\cup\cdots\cup L_{k-2}$; and let $Q$ consist of the union of the path $v\hbox{-} u\hbox{-} u'$ and a path of length $k-1$ between $u',x$ with interior in $L_1\cup\cdots\cup L_{k-2}$. Then $|E(Q)|=|E(P)|+1$ and $|E(P)|\le 2\rho+1$. Moreover, no vertex in $C$ has a neighbour in $P\cup Q$ different from $v$, since all vertices in $C$ are later than $u,v$. This proves \ref{twotails}.~\bbox
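The proof above extracts a level $L_k$ with $\chi(L_k)>c'/2$ via the standard fact that in a distance levelling all edges lie within a level or between consecutive levels, so colouring even-indexed and odd-indexed levels with disjoint palettes gives $\chi(G)\le 2\max_k \chi(L_k)$. The following script (illustrative only, not part of the argument; all names are ours) checks this fact by brute force on small random connected graphs:

```python
import itertools
import random
from collections import deque

def chi(n, edges):
    """Brute-force chromatic number of a graph on vertices 0..n-1."""
    if n == 0:
        return 0
    for k in range(1, n + 1):
        for col in itertools.product(range(k), repeat=n):
            if all(col[u] != col[v] for u, v in edges):
                return k

def bfs_levels(n, edges, root=0):
    """BFS levels L_0, L_1, ... from root (graph assumed connected)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return [[v for v in range(n) if dist[v] == i] for i in range(max(dist.values()) + 1)]

random.seed(0)
for _ in range(30):
    n = random.randint(2, 7)
    edges = {(i, i + 1) for i in range(n - 1)}  # spanning path keeps the graph connected
    edges |= {e for e in itertools.combinations(range(n), 2) if random.random() < 0.4}
    edges = sorted(edges)
    best = 0
    for level in bfs_levels(n, edges):
        idx = {v: i for i, v in enumerate(level)}
        sub = [(idx[u], idx[v]) for u, v in edges if u in idx and v in idx]
        best = max(best, chi(len(level), sub))
    # chromatic number is at most twice the maximum over the levels
    assert chi(n, edges) <= 2 * best
```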
By repeated application of \ref{twotails} we deduce:
\begin{thm}\label{manytails} Let $\phi$ be a nondecreasing function, and let $\rho\ge 3$, $\ell\ge \rho(8\rho+6)$ and $\tau\ge 0$. For all integers $c\ge 1$ there exists $c'\ge 0$ with the following property. Let $G$ be a $(\rho,\phi)$-controlled graph such that $\chi^{\rho-1}(G)\le \tau$ and $\chi(J)\le \tau$ for every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$. Let $x\in V(G)$, and let $V(G)\setminus \{x\}$ be a $\rho$-ball, such that $x$ has a neighbour in $G\setminus x$ and $\chi(V(G)\setminus \{x\})>c'$. Then there is a $\rho$-comet $(\mathcal{P}, X)$ in $G$ with tip $x$ and chromatic number more than $c$, such that its spectrum includes $\{\ell+i:\:0\le i\le 2\rho+3\}$. \end{thm} \noindent{\bf Proof.}\ \ Let $c_{\ell+1}=c$, and for $i=\ell,\ldots, 1$ choose $c_i$ such that \ref{twotails} is satisfied with $c,c'$ replaced by $c_{i+1},c_{i}$ respectively. Let $c'=c_1$. \\ \\ (1) {\em For all $k\ge 1$ there exists $p_k$ with $0\le p_k\le 2\rho$ (and $p_k\ge 1$ for $k\ge 2$) and a $\rho$-comet in $G$ with tip $x$, chromatic number more than $c_k$, and spectrum including $\{p_1+\cdots+p_k+i:\:1\le i\le k\}$.} \\ \\ By hypothesis there is a $\rho$-comet in $G$ with chromatic number more than $c_1$, tip $x$ and spectrum $\{1\}$, so the statement holds when $k=1$, setting $p_1=0$; and it follows for $k\ge 2$ by repeated application of \ref{twotails}. This proves (1).
Now $p_1,\ldots, p_{\ell}$ exist and sum to at least $\ell-1$, so there exists $k\le \ell$ maximum such that $$p_1+\cdots+p_{k}\le \ell.$$ Since $p_1+\cdots+p_{4\rho+3}< 2(4\rho+3)\rho\le \ell$, it follows that $k\ge 4\rho+3$. From the maximality of $k$, and since $p_{k+1}\le 2\rho$, it follows that $p_1+\cdots+p_{k}> \ell-2\rho$. Consequently the spectrum of the corresponding $\rho$-comet contains $\{\ell+i:\:0\le i\le 2\rho+3\}$. This proves \ref{manytails}.~\bbox
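The counting in the last paragraph can be checked mechanically. The script below (illustrative only; the function name is ours) draws arbitrary sequences with $p_1=0$ and $1\le p_i\le 2\rho$ for $i\ge 2$, as statement (1) allows, and confirms that the maximal $k$ with $p_1+\cdots+p_k\le\ell$ satisfies $k\ge 4\rho+3$ and $\ell-2\rho<p_1+\cdots+p_k\le\ell$:

```python
import random

def check_counting(rho, ell, trials=500):
    assert ell >= rho * (8 * rho + 6)
    for _ in range(trials):
        # p_1 = 0 and 1 <= p_i <= 2*rho for i >= 2, as statement (1) allows
        p = [0] + [random.randint(1, 2 * rho) for _ in range(ell - 1)]
        s = k = 0
        for i, pi in enumerate(p, start=1):  # k maximum with p_1+...+p_k <= ell
            if s + pi > ell:
                break
            s, k = s + pi, i
        assert k >= 4 * rho + 3              # at least 4*rho+3 applications of twotails
        assert ell - 2 * rho < s <= ell      # the prefix sum lands in (ell-2*rho, ell]

random.seed(1)
for rho in (3, 4, 5):
    check_counting(rho, rho * (8 * rho + 6))
```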
\begin{thm}\label{manyholes} Let $\phi$ be a nondecreasing function, and let $\rho\ge 3$, $\ell\ge 8\rho^2+6\rho$, and $d,\tau\ge 0$. Then there exists $c$ with the following property. Let $G$ be a $(\rho,\phi)$-controlled graph with $\chi(G)>c$ such that $\chi^{\rho-1}(G)\le \tau$ and $\chi(J)\le \tau$ for every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$. Then there is a $d$-peripheral $\ell$-hole in $G$. \end{thm} \noindent{\bf Proof.}\ \ Define $c_4=\ell(2\rho+4)\tau$. Choose $c_3$ such that \ref{manytails} is satisfied replacing $c,c', \ell$ by $c_4,c_3,\ell-6\rho$ respectively. Let $c_2 =\rho\tau +\phi(c_3)$, $c_1=\tau\phi(c_2)$, and let $c=\max(\phi(c_1),\ell\tau+d)$. Let $G$ be as in the theorem with $\chi(G)>c$. Since $\chi(G)>c\ge \phi(c_1)$, there exists $z\in V(G)$ such that, denoting the set of vertices of $G$ with $G$-distance $i$ from $z$ by $L_i$, we have $\chi(L_{\rho})>c_1$. Since $L_1$ is $\tau$-colourable, there is a stable subset $A$ of $L_1$ such that the set $B$ of vertices in $L_{\rho}$ that are descendants of vertices in $A$ has chromatic number more than $c_1/\tau=\phi(c_2)$. Consequently there is a $\rho$-ball $C\subseteq B$ with $\chi(C)>c_2$. Choose $D\subseteq A$ minimal such that every vertex in $C$ has an ancestor in $D$. Let $v_1\in D$; then there exists $v_{\rho-1}\in L_{\rho-1}$ with a neighbour in $C$ such that $v_1$ is its only ancestor in $D$. Let $v_1\hbox{-} v_2\hbox{-}\cdots\hbox{-} v_{\rho-1}$ be an induced path, where $v_i\in L_i$ for $1\le i\le {\rho}-1$. The set of vertices in $C$ with $G$-distance less than $\rho$ from one of $v_1,v_2,\ldots, v_{\rho-1}$ has chromatic number at most $\rho\tau$, and so the set $E$ of vertices in $C$ with $G$-distance at least $\rho$ from each of $v_1,v_2,\ldots, v_{\rho-1}$ has chromatic number more than $c_2 -\rho\tau =\phi(c_3)$. Consequently there is a $\rho$-ball $F\subseteq E$, with chromatic number more than $c_3$.
Since $C$ is a $\rho$-ball and $v_{\rho-1}$ has a neighbour in $C$, there is an induced path $P$ of $G[C\cup \{v_{\rho-1}\}]$ from $v_{\rho-1}$ to some vertex $x\in C$ with a neighbour in $F$, of length at most $2\rho$, such that no vertex of $P$ different from $x$ has a neighbour in $F$. By \ref{manytails} applied to $x,F$, since $\chi(F)> c_3$, there is a vertex $v\in F$, $2\rho+4$ induced paths $P_0,\ldots, P_{2\rho+3}$ of $G[F\cup\{x\}]$ between $x,v$, and a $\rho$-ball $X\subseteq F$, such that: \begin{itemize}
\item $|E(P_i)|=\ell-6\rho+i$ for $0\le i\le 2\rho+3$; \item $V(P_i)\cap X=\emptyset$ for $0\le i\le 2\rho+3$; \item $v$ has a neighbour in $X$ and no other vertex of $P_i$ has a neighbour in $X$, for $0\le i\le 2\rho+3$; and \item $\chi(X)>c_4$. \end{itemize} Now every vertex of $X$ has $G$-distance at least $\rho$ from each of $v_1,\ldots, v_{\rho-1}$, but there may be vertices in $X$ with $G$-distance less than $\rho$ to a vertex in $P$ or in one of $P_0,\ldots, P_{2\rho+3}$. The union of these paths has at most $\ell(2\rho+4)$ vertices (in fact many fewer), and since $\chi(X)>\ell(2\rho+4)\tau$, there exists a vertex in $X$ with $G$-distance at least $\rho$ from all vertices of these paths. Let $y\in L_{\rho-1}$ be adjacent to this vertex, and let $Q$ be an induced path between $y,x$ with interior in $X$, of length at most $2\rho+2$. Let $R$ be a path of length $\rho-1$ between $y,z$ with interior in $D\cup L_2\cup L_3\cup \cdots\cup L_{\rho-2}$.
The union of the four paths $z\hbox{-} v_1\hbox{-} v_2\hbox{-}\cdots\hbox{-} v_{\rho-1}, P,Q,R$ has length at most $6\rho$, and at least $4\rho-3$, since $P,Q$ have lengths at least $\rho$ and at least $\rho-1$ respectively. Let their union have length $j$ where $4\rho-3\le j\le 6\rho$. Let $i = 6\rho-j$; then $0\le i\le 2\rho+3$, and so $P_i$ is defined and has length $\ell-6\rho+i=\ell-j$. Consequently the union of the five paths $z\hbox{-} v_1\hbox{-} v_2\hbox{-}\cdots\hbox{-} v_{\rho-1}$, $P$, $P_i$, $Q$ and $R$ is a cycle $H_i$ of length $\ell$, and we claim it is induced. Certainly it is a cycle; suppose that it is not induced, and so there is an edge $ab$ say that joins two nonconsecutive vertices of $H_i$. It follows that $a,b$ do not both belong to any of its five constituent paths. Certainly $a,b\ne z$. Suppose that $a=v_h$ for some $h$. Since every vertex in $E$ has $G$-distance at least $\rho$ from $v_h$, it follows that $b\notin V(P_i)$ and $b\notin V(Q)$. Also $b\notin V(P)$ since every vertex of $V(P)\setminus \{v_{\rho-1}\}$ belongs to $L_{\rho}$ and $P$ is an induced path containing $v_{\rho-1}$. Thus $b\in V(R)$. Since the $G$-distance between $y,a$ is at least $\rho-1$, and $R$ has length $\rho-1$, it follows that $b\in L_1$ and so $h \in \{1,2\}$; but $h\ne 1$ since $A$ is stable, and $h\ne 2$ since $v_{\rho-1}$, and hence $v_2$, has a unique ancestor in $D$. This proves that $a,b\notin \{v_1,\ldots, v_{\rho-1}\}$.
Next suppose that $a\in V(R)$, and so either $b\in V(P)\setminus \{v_{\rho-1}\}$, or $b\in V(P_i\cup Q)\setminus \{y\}$. In either case $b\in L_{\rho}$, and so $a=y$; hence $b\notin V(Q)$ since $Q$ is induced and $y\in V(Q)$, and $b\notin V(P\cup P_i)$ since $y$ has a neighbour in $X$ with $G$-distance at least $\rho$ from each vertex of $V(P\cup P_i)$, and $\rho\ge 3$. This proves that $a,b\notin V(R)$. Next suppose that $a\in V(P)\setminus \{x\}$. Then $b\in F$; but no vertex of $P$ except $x$ has a neighbour in $F$, a contradiction. Finally, suppose that $a\in V(P_i)$ and $b\in V(Q)$. No vertex of $P_i$ has a neighbour in $X$ except $v$, and $v\in V(Q)$, a contradiction. This proves that $H_i$ is an $\ell$-hole. Now the set of vertices of $G$ that belong to or have a neighbour in $H_i$ has chromatic number at most $\ell\tau$, and since $\chi(G)>c\ge \ell\tau+d$, it follows that $H_i$ is $d$-peripheral. This proves \ref{manyholes}.~\bbox
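The arithmetic behind the choice of $P_i$ in the proof above is a one-line check: for every admissible length $j$ of the union of the four paths, $i=6\rho-j$ indexes one of the available paths and the resulting cycle has length exactly $\ell$. A mechanical verification (illustrative only; the function name is ours):

```python
def hole_length_check(rho, ell):
    # the union of the four paths has length j with 4*rho-3 <= j <= 6*rho
    for j in range(4 * rho - 3, 6 * rho + 1):
        i = 6 * rho - j
        assert 0 <= i <= 2 * rho + 3            # P_i is one of P_0, ..., P_{2*rho+3}
        assert (ell - 6 * rho + i) + j == ell   # |E(P_i)| + j: the cycle H_i has length ell

for rho in (3, 4, 10):
    hole_length_check(rho, 8 * rho * rho + 6 * rho)
```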
Let us deduce \ref{radthm}, which we restate: \begin{thm}\label{radthm2} Let $\rho\ge 2$ be an integer, and let $\mathcal{C}$ be a $\rho$-controlled ideal of graphs. Let $\ell\ge 24$ if $\rho=2$, and $\ell\ge 8\rho^2+6\rho$ if $\rho>2$. Then for all $\kappa,d\ge 0$, there exists $c\ge 0$ such that every graph $G\in \mathcal{C}$ with $\omega(G)\le \kappa$ and $\chi(G)>c$ has a $d$-peripheral $\ell$-hole. \end{thm} \noindent{\bf Proof.}\ \ By induction on $\kappa$ we may assume that there exists $\tau_1$ such that every graph in $\mathcal{C}$ with clique number less than $\kappa$ and no $d$-peripheral $\ell$-hole has chromatic number at most $\tau_1$. Let $\mathcal{C}_2$ be the ideal of $G\in \mathcal{C}$ with clique number at most $\kappa$ and no $d$-peripheral $\ell$-hole. We suppose that there are graphs in $\mathcal{C}_2$ with arbitrarily large chromatic number, and so $\mathcal{C}_2$ is not 2-controlled, by \ref{rad2}. Consequently there exists $\tau_2$ such that if $\mathcal{C}_3$ denotes the class of graphs $G\in \mathcal{C}_2$ with $\chi^2(G)\le \tau_2$, there are graphs in $\mathcal{C}_3$ with arbitrarily large chromatic number. Hence by \ref{manyholes} with $\rho=3$, $\mathcal{C}_3$ is not 3-controlled, and so on; and we deduce that there is an ideal $\mathcal{C}_{\rho}$ of graphs in $\mathcal{C}$ that is not $\rho$-controlled, a contradiction. This proves \ref{radthm2}.~\bbox
\section{Controlling 8-balls}\label{sec:uncontrolled}
In this section we prove \ref{uncontrolled}. We use the following relative of \ref{greentouchrad}, proved in~\cite{longoddholes}:
\begin{thm}\label{greentouch} Let $\tau,c\ge 0$ and let $(W_1,\ldots, W_n)$ be a $\tau$-colourable grading of a graph $G$. Let $H$ be a subgraph of $G$ (not necessarily induced) with $\chi(H)>\tau+2(c+\chi^1(G))$. Then there is an edge $uv$ of $H$, and a subset $X$ of $V(G)$, such that \begin{itemize} \item $G[X]$ is connected; \item $u,v$ are both earlier than every vertex in $X$; \item $v$ has a neighbour in $X$, and $u$ does not; and \item $\chi(X)>c$. \end{itemize} \end{thm} We deduce a version of \ref{greenedgerad} that has no assumption of $\rho$-control:
\begin{thm}\label{greenedge} Let $G$ be a graph, and let $B,C\subseteq V(G)$, where $B$ covers $C$. Let every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$ have chromatic number at most $\tau$. Let the enumeration $(b_1,\ldots, b_m)$ of $B$ and the grading $(W_1,\ldots, W_n)$ of $G[C]$ be compatible. Let $(W_1,\ldots, W_n)$ be $\tau$-colourable, and let $\chi(G[C])>\tau^2 (2c+3\tau)$. Then there is a square edge $uv$, and a subset $X$ of $V(G)$, such that \begin{itemize} \item $G[X]$ is connected; \item $u,v$ are both earlier than every vertex in $X$; \item $v$ has a neighbour in $X$, and $u$ does not; and \item $\chi(X)>c$. \end{itemize} \end{thm} \noindent{\bf Proof.}\ \ Let $H$ be as in \ref{getgreenedge}; then by \ref{getgreenedge}, $\chi(G[C])\le \tau^2\chi(H)$. Since $\chi(G[C])>\tau^2 (2c+3\tau)$ and $\chi^1(G)\le \tau$, it follows that $\chi(H)>\tau+2(c+\chi^1(G))$. By \ref{greentouch} applied to $G[C]$ and $H$, we deduce that there is an edge $uv$ of $H$, and a subset $X$ of $V(G)$, satisfying the theorem. This proves \ref{greenedge}.~\bbox
A {\em shower} in $G$ is a sequence $(L_0, L_1,\ldots, L_k,s)$ where $L_0, L_1,\ldots, L_k$ are pairwise disjoint subsets of $V(G)$ and $s\in L_k$, such that \begin{itemize}
\item $|L_0|=1$; \item $L_{i-1}$ covers $L_i$ for $1\le i< k$; \item for $0\le i<j\le k$, if $j>i+1$ then no vertex in $L_j$ has a neighbour in $L_i$; and \item $G[L_k]$ is connected. \end{itemize} (Note that we do not require that $L_{k-1}$ covers $L_k$.) We call the vertex in $L_0$ the {\em head} of the shower, and $s$ its {\em drain}, and $L_0\cup \cdots\cup L_k$ is its {\em vertex set}. (Thus the drain can be any vertex of $L_k$.) The set of vertices in $L_k$ with a neighbour in $L_{k-1}$ is called the {\em floor} of the shower.
Let $\mathcal{S}$ be a shower with head $z_0$, drain $s$ and vertex set $V$. An induced path of $G[V]$ between $z_0,s$ is called a {\em jet} of $\mathcal{S}$. For $d\ge 0$, a jet $J$ is {\em $d$-peripheral} if there is a subset $X$ of the floor of the shower, anticomplete to $V(J)$, with $\chi(G[X])>d$. The set of all lengths of $d$-peripheral jets of $\mathcal{S}$ is called the {\em $d$-jetset} of $\mathcal{S}$. For integers $\ell\ge 2$ and $1\le k\le \ell$, we say a $d$-jetset is {\em $(k,\ell)$-complete} if there are
$k$ jets $J_0,\ldots, J_{k-1}$, all $d$-peripheral, such that $|E(J_j)|=|E(J_0)|+j$ modulo $\ell$ for $0\le j\le k-1$.
We will prove:
\begin{thm}\label{multijet} Let $\tau\ge 0$ and $\ell\ge 2$. For all integers $d\ge 0$ and $t$ with $1\le t\le \ell$ there exists $c_{d,t}\ge 0$ with the following property. Let $G$ be a graph such that $\chi^{3}(G)\le \tau$, and such that every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$ has chromatic number at most $\tau$. Let $\mathcal{S}=(L_0, L_1,\ldots, L_k,s)$ be a shower in $G$, with floor of chromatic number more than $c_{d,t}$. Then the $d$-jetset of $\mathcal{S}$ is $(t,\ell)$-complete. \end{thm} \noindent{\bf Proof.}\ \ Suppose first that $t=1$. Let $c_{d,1}=d+\tau$, and let $G$ and $\mathcal{S}=(L_0, L_1,\ldots, L_k,s)$ be as in the theorem, with floor $F$ where $\chi(F)>c_{d,1}$. Since $c_{d,1}\ge 0$, it follows that $F\ne \emptyset$, and so there is an induced path $P$ of $G[L_k]$ between $s$ and some vertex in $F$. Choose such a path $P$ of minimum length, and let its end in $F$ be $v$. If $s=v$ let $u=v$, and otherwise let $u$ be the neighbour of $v$ in $P$. It follows that no vertex of $P$ belongs to or has a neighbour in $F$ except $u,v$. Also there is a path $Q$ of length $k$ between $v$ and the head $z_0\in L_0$, with one vertex $w\in L_{k-1}$ and all others in $L_0\cup\cdots\cup L_{k-2}\cup \{v\}$. Thus $P\cup Q$ is a jet. Moreover, the only vertices of $P\cup Q$ that belong to or have a neighbour in $F$ are $u,v,w$; all vertices of $F$ that belong to or have a neighbour in $P\cup Q$ therefore have $G$-distance at most two from $v$, and consequently the set of such vertices has chromatic number at most $\tau$. Since $\chi(F)>c_{d,1}=d+\tau$, it follows that $P\cup Q$ is a $d$-peripheral jet, as required.
We may therefore assume that $2\le t\le \ell$, and inductively the result holds for $t-1$ (and all $d$). Define $d'=d+2\tau$, and let $c_{d,t}=2(\ell+1)\tau^2 (2c_{d',t-1}+7\tau)$; let $G$ be a graph, and let $\mathcal{S}=(L_0, L_1,\ldots, L_k,s)$ be a shower in $G$ with floor $F$ of chromatic number more than $c_{d,t}$. For $i\ge 0$, let $M_i$ be the set of vertices with $G[L_k]$-distance exactly $i$ from $s$. Then there exists $r\ge 0$ such that $\chi(F\cap M_r)\ge \chi(F)/2$; and $r\ge 3$ since $\chi^2(G)\le \tau<\chi(F)/2$.
For $0\le i\le r$, each vertex $v\in M_i$ is joined to $s$ by an induced path of length $i$ with interior in $M_1\cup\cdots\cup M_{i-1}$; let us call such a path a {\em bloodline} of $v$. \begin{figure}
\caption{$v$ is a grandparent of $u$.}
\label{fig:3}
\end{figure}
If $u\in F\cap M_r$ and $v\in M_0\cup\cdots\cup M_{r-2}$, we say that $v$ is a {\em grandparent} of $u$ if there exists $b\in L_{k-1}$ adjacent to both $u,v$. Let $C$ be the set of vertices in $M_r$ with a grandparent in $M_0\cup\cdots\cup M_{r-2}$. \\ \\ (1) {\em If $\chi(C)>\ell \tau^2 (2c_{d',t-1}+7\tau)$ then the theorem holds.} \\ \\ Let $(v_1,\ldots, v_n)$ be an enumeration of $M_0\cup\cdots \cup M_{r-2}$, where the vertex in $M_0$ is first, followed by the vertices in $M_1$ in some order, and so on; more precisely, for $1\le i<j\le n$, if $v_i\in M_a$ and $v_j\in M_b$ where $a,b\in \{0,\ldots, r-2\}$, then $a\le b$. Let $B_0$ be the set of vertices in $L_{k-1}$ that have neighbours in $M_0\cup\cdots \cup M_{r-2}$, and let $(b_1,\ldots, b_m)$ be an enumeration of $B_0$, enumerating the members of $B_0$ with the earliest neighbours in $(v_1,\ldots, v_n)$ first; more precisely, for $1\le i<j\le m$, if $b_j$ is adjacent to $v_q$ for some $q\in \{1,\ldots, n\}$, then there exists $p\in \{1,\ldots, q\}$ such that $b_i$ is adjacent to $v_p$. We say $v_i$ is the {\em earliest grandparent} of $u\in M_r$ if $i$ is minimum such that $v_i$ is a grandparent of $u$. For $1\le i\le n$, let $W_i$ be the set of vertices in $M_r$ whose earliest grandparent is $v_i$. Thus $C=W_1\cup\cdots\cup W_n$, and $(W_1,\ldots, W_n)$ is a grading of $G[C]$. It is $\chi^2(G)$-colourable, since $W_i\subseteq N^2_G[v_i]$ for each $i$, and hence $\tau$-colourable. For $u\in C$, if $v$ is the earliest grandparent of $u$, then the earliest parent of $u$ is adjacent to $v$. Consequently the enumeration $(b_1,\ldots, b_m)$ and the grading $(W_1,\ldots, W_n)$ are compatible; because if $u,v\in C$ with $u$ earlier than $v$, the earliest grandparent of $u$ is earlier than the earliest grandparent of $v$, and consequently the earliest parent of $u$ is earlier than the earliest parent of $v$.
For $0\le j<\ell$, let $C_j$ be the set of vertices $u\in C$ such that $i-j$ is a multiple of $\ell$, where $i$ is the length of the bloodline of the earliest grandparent of $u$. Thus $C_0\cup\cdots\cup C_{\ell-1} = C$. Choose $j$ such that $\chi(C_j)>\tau^2 (2c_{d',t-1}+7\tau)$. Then by \ref{greenedge}, there is a square edge $uv$ with $u,v\in C_j$, and a subset $X$ of $C_j$, such that \begin{itemize} \item $G[X]$ is connected; \item $u,v$ are both earlier than every vertex in $X$; \item $v$ has a $G$-neighbour in $X$, and $u$ does not; and \item $\chi(X)>c_{d',t-1}+2\tau$. \end{itemize}
\begin{figure}
\caption{The square edge and its relatives.}
\label{fig:4}
\end{figure}
Let the earliest parents of $u,v$ be $u', v'$ respectively, and let their earliest grandparents be $u'', v''$ respectively. Let $P$ be the induced path between $v,s$ consisting of the path $v\hbox{-} v'\hbox{-} v''$ and a bloodline of $v''$. Thus $P$ has length $j+2$ modulo $\ell$. Let $Q$ be the path between $v,s$ consisting of the edge $uv$, the path $u\hbox{-} u'\hbox{-} u''$, and a bloodline of $u''$. Note that $Q$ is induced, since $v$ is not adjacent to $u'$ (because $uv$ is square). Moreover, $Q$ has length $j+3$ modulo $\ell$.
Let $Z$ be the set of vertices in $X$ with $G$-distance at least four from both $u',v'$, and let $L_{k-1}'$ be the set of vertices in $L_{k-1}$ with a neighbour in $Z$. Let $L_{k-2}'$ be the set of vertices in $L_{k-2}$ with a neighbour in $L_{k-1}'$, and for $0\le i\le k-3$ let $L_i'=L_i$. It follows that $L_{i-1}'$ covers $L_i'$ for $1\le i\le k-1$, and so $(L_0',\ldots, L_{k-1}',X\cup \{v\},v)$ is a shower $\mathcal{S}'$. Let $V'$ be its vertex set.
We claim that $V'\cap V(P\cup Q)=\{v\}$, and every edge between $V'$ and $V(P\cup Q)$ is incident with $v$. For suppose that $a\in V'$ and $b\in V(P\cup Q)$ are equal or adjacent, and $a,b\ne v$. If $b=u$, then $a\ne b$ since $u\notin V'$; $a\notin X$ since $u$ has no neighbour in $X$; and so $a\in L_{i}'$ for some $i<k$. Then $i=k-1$ since $b\in L_k$, and so $a$ has a neighbour in $Z$ from the definition of $L_{k-1}'$; and so the $G$-distance between $a,u$ is at least two from the definition of $Z$, a contradiction. Thus $b\ne u$. Next suppose that $b\in \{u',v'\}$. Since $u',v'$ are the earliest parents of $u,v$ respectively, and $u,v$ are earlier than every vertex in $X$, it follows that no vertex in $X$ is adjacent to $b$; and so $a\ne b$, and $a\notin X$. Hence $a\in L_i'$ for some $i<k$, and since $b\in L_{k-1}$ it follows that $k-2\le i\le k-1$. But then the $G$-distance between $a$ and some vertex of $Z$ is at most two, and since the $G$-distance between $b,Z$ is at least four, it follows that $a,b$ are nonadjacent, a contradiction. So $b\notin \{u',v'\}$ and so $b\in M_j$ for some $j\le r-2$, and so $b$ is a vertex of a bloodline of the earliest grandparent of one of $u,v$ (that is, of $u''$ or $v''$ respectively). Consequently $a\ne b$, and $a\notin X$, and so $a\in L_{k-1}'$. Choose $z\in Z$ adjacent to $a$; then $u,v$ are earlier than $z$, and so $u'',v''$ are both earlier than the earliest grandparent of $z$. It follows that no parent of $z$ has a neighbour in a bloodline of either of $u'',v''$, and so $a,b$ are nonadjacent, a contradiction. This proves our claim that $V'\cap V(P\cup Q)=\{v\}$, and every edge between $V'$ and $V(P\cup Q)$ is incident with $v$. In particular $Z$ is anticomplete to $V(P\cup Q)$.
The floor of $\mathcal{S}'$ includes $Z$, and $\chi(Z)\ge \chi(X)-2\chi^3(G)>c_{d',t-1}$. Consequently the $d'$-jetset of $\mathcal{S}'$ is $(t-1,\ell)$-complete; let $J_0,\ldots, J_{t-2}$ be corresponding $d'$-peripheral jets of $\mathcal{S}'$. For $0\le h\le t-2$, both $J_h\cup P$ and $J_h\cup Q$ are jets of $\mathcal{S}$, and we claim they are $d$-peripheral. For let $0\le h\le t-2$, and let $Y$ be a subset of the floor of $\mathcal{S}'$ with $\chi(G[Y])>d'$, anticomplete to $V(J_h)$. Thus $Y\subseteq X\cup \{v\}\subseteq F$. Let $Y'=Y\cap Z$. Since every vertex in $Y\setminus Z$ has $G$-distance at most three from one of $u',v'$, it follows that $\chi(Y\setminus Z)\le 2\tau$, and so $\chi(Y')\ge \chi(Y)-2\tau> d$. Since $Y'$ is anticomplete to $V(P\cup Q)$, this proves our claim that both $J_h\cup P$ and $J_h\cup Q$ are $d$-peripheral jets of $\mathcal{S}$. Consequently the $d$-jetset of $\mathcal{S}$ is $(t,\ell)$-complete. This proves (1). \\ \\ (2) {\em If $\chi((F\cap M_r)\setminus C)>\tau^2 (2c_{d',t-1}+7\tau)$ then the theorem holds.} \\ \\ Let us write $C' = (F\cap M_r)\setminus C$; then $C'$ is the set of vertices in $F\cap M_r$ whose neighbours in $L_{k-1}$ all have no neighbour in $M_0\cup\cdots\cup M_{r-2}$. (These neighbours in $L_{k-1}$ might or might not have neighbours in $M_{r-1}$.) Take an arbitrary enumeration $(b_1,\ldots, b_n)$ of $M_{r-1}$, and for $1\le i\le n$ let $W_i$ be the set of vertices in $C'$ adjacent to $b_i$ and nonadjacent to $b_1,\ldots, b_{i-1}$. Thus $(W_1,\ldots, W_n)$ is a grading of $G[C']$, compatible with $(b_1,\ldots, b_n)$, and it is $\chi^1(G)$-colourable and hence $\tau$-colourable. By \ref{greenedge}, there is a square edge $uv$ of $G[C']$, and a subset $X$ of $C'$, such that \begin{itemize} \item $G[X]$ is connected; \item $u,v$ are both earlier than every vertex in $X$; \item $v$ has a neighbour in $X$, and $u$ does not; and \item $\chi(X)>c_{d',t-1}+2\tau$. 
\end{itemize} Let $u',v'$ be the earliest neighbours in $M_{r-1}$ of $u,v$ respectively. Let $P$ be an induced path between $v,s$ consisting of the edge $vv'$ and a bloodline of $v'$; and let $Q$ be the path consisting of the edges $uv,uu'$, and a bloodline of $u'$. They are both induced. Let $Z$ be the set of all vertices in $X$ with $G$-distance at least four from both of $u,v$. Let $L_{k-1}'$ be the set of vertices in $L_{k-1}$ with a neighbour in $Z$, and for $0\le i\le k-2$ let $L_i'=L_i$. Then $(L_0',\ldots, L_{k-1}', X\cup \{v\},v)$ is a shower $\mathcal{S}'$. Moreover, its vertex set $V'$ satisfies $V'\cap V(P\cup Q)=\{v\}$, and every edge between $V'$ and $V(P\cup Q)$ is incident with $v$ (the proof is as in (1), and we omit it). Its floor includes $Z$, and $\chi(Z)\ge \chi(X)-2\chi^3(G)>c_{d',t-1}$; and so the $d'$-jetset of $\mathcal{S}'$ is $(t-1,\ell)$-complete. But for each jet $J$ of $\mathcal{S}'$, $J\cup P, J\cup Q$ are both jets of $\mathcal{S}$, and as in (1) it follows that the $d$-jetset of $\mathcal{S}$ is $(t,\ell)$-complete. This proves (2).
Since $\chi(F)>c_{d,t}$, it follows that $\chi(F\cap M_r)>(\ell+1)\tau^2 (2c_{d',t-1}+7\tau)$, and so either $\chi(C)>\ell \tau^2 (2c_{d',t-1}+7\tau)$ or $\chi((F\cap M_r)\setminus C)>\tau^2 (2c_{d',t-1}+7\tau)$. Hence the result follows from (1) or (2). This proves \ref{multijet}.~\bbox
A {\em recirculator} for a shower $(L_0, L_1,\ldots, L_k,s)$ with head $z_0$ is an induced path $R$ with ends $s,z_0$ such that no internal vertex of $R$ belongs to the vertex set $V$ of the shower and no internal vertex of $R$ has any neighbours in $V\setminus \{s,z_0\}$. We need the following, proved in~\cite{holeseq}:
\begin{thm}\label{doubleshower} Let $c\ge 0$ be an integer, and let $G$ be a graph such that $\chi(G)>44c+4\chi^{8}(G)$. Then there is a shower in $G$, with floor of chromatic number more than $c$, and with a recirculator. \end{thm}
We deduce \ref{uncontrolled}, which we restate:
\begin{thm}\label{uncontrolled2} For all integers $\ell\ge 2$ and $\tau,d\ge 0$ there is an integer $c\ge 0$ with the following property. Let $G$ be a graph such that $\chi^8(G)\le \tau$, and every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$ has chromatic number at most $\tau$. If $\chi(G)>c$ then there are $\ell$ $d$-peripheral holes in $G$ with lengths of all possible values modulo $\ell$. \end{thm} \noindent{\bf Proof.}\ \ Let $c_{d,\ell}$ be as in \ref{multijet}, and let $c=44c_{d,\ell}+4\tau$. Let $G$ be a graph such that $\chi^8(G)\le \tau$, and every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$ has chromatic number at most $\tau$, with $\chi(G)>c$. By \ref{doubleshower} there is a shower in $G$, with floor of chromatic number more than $c_{d,\ell}$, and with a recirculator. By \ref{multijet} the $d$-jetset of this shower is $(\ell,\ell)$-complete. Thus adding the recirculator to each of the corresponding jets gives the $\ell$ $d$-peripheral holes we need. This proves \ref{uncontrolled2}.~\bbox
\section{Two consecutive holes}\label{sec:2holes}
Finally let us prove \ref{2holes}, which we restate.
\begin{thm}\label{2holesagain} For each $\kappa,\ell\ge 0$ there exists $c\ge 0$ such that every graph $G$ with $\omega(G)\le \kappa$ and $\chi(G)>c$ has holes of two consecutive lengths, both of length more than $\ell$. \end{thm} \noindent{\bf Proof.}\ \ We may assume that $\ell\ge 8$. By induction on $\kappa$, there exists $\tau_1$ such that every graph with clique number less than $\kappa$ and chromatic number more than $\tau_1$ has holes of two consecutive lengths, both more than $\ell$. Let $\mathcal{C}_2$ be the ideal of graphs with clique number at most $\kappa$ and with no holes of two consecutive lengths, both more than $\ell$. By \ref{rad2}, $\mathcal{C}_2$ is not 2-controlled, and so for some $\tau_2$ there are graphs $G$ in $\mathcal{C}_2$ with arbitrarily large chromatic number and $\chi^2(G)\le \tau_2$. Let $\mathcal{C}_3$ be the ideal of graphs $G$ in $\mathcal{C}_2$ with $\chi^2(G)\le \tau_2$. By \ref{manyholes} with $\rho=3$, $\mathcal{C}_3$ is not 3-controlled, and so on; and we deduce that there is an ideal $\mathcal{C}_{\ell}$ of graphs in $\mathcal{C}_2$, with unbounded chromatic number, and a number $\tau$ such that $\chi^{\ell}(G)\le \tau$ for each $G\in \mathcal{C}_{\ell}$.
Let $c'=14\tau^3$, and let $c=44c'+4\tau$; and choose $G\in \mathcal{C}_{\ell}$ with $\chi(G)>c$. By \ref{doubleshower}, there is a shower $\mathcal{S}=(L_0,\ldots, L_k,s)$ in $G$ with a recirculator and with floor $F$ with chromatic number more than $14\tau^3$. Define $d, M_0,\ldots, M_d$ and $C$ as in the proof of \ref{multijet}. \\ \\ (1) {\em If $\chi(C)> 7\tau^3$ then the theorem holds.} \\ \\ Define the enumeration $(v_1,\ldots, v_n)$ of $M_0\cup \cdots\cup M_{d-2}$, and $B_0$ and its enumeration $(b_1,\ldots, b_m)$, as in the proof of \ref{multijet}. Let $b_{m+1-i}'=b_i$; so $(b_1',\ldots, b_m')=(b_m,\ldots, b_1)$ is also an enumeration of the same set. For $i=1,\ldots, m$ let $W_i$ be the set of vertices in $M_d$ that are adjacent to $b_i'$ and to none of $b_1',\ldots, b_{i-1}'$. Thus $W_1\cup\cdots\cup W_m=C$, and $(W_1,\ldots, W_m)$ is a $\chi^1(G)$-colourable (and hence $\tau$-colourable) grading of $G[C]$, compatible with $(b_1',\ldots, b_m')$. By \ref{greenedge} (taking $c=2\tau$), there is a square edge $uv$ of $G[C]$, and a subset $X$ of $C$, such that \begin{itemize} \item $G[X]$ is connected; \item $u,v$ are both earlier than every vertex in $X$; \item $v$ has a neighbour in $X$, and $u$ does not; and \item $\chi(X)>2\tau$. \end{itemize} Let $u',v'\in B_0$ be the earliest parents of $u,v$ respectively. Since $\chi(X)>2\tau$, there exists $x\in X$ with $G$-distance at least four from each of $u',v'$. Let $x'\in B_0$ be its earliest parent, and let $R$ be an induced path of $G[X\cup \{v,x'\}]$ between $v$ and $x'$. Now no vertex of the interior of $R$ is adjacent to either of $u',v'$ since $u,v$ are earlier than every member of $X$ and $u',v'$ are their earliest parents. Also, $x'$ is nonadjacent to $u,v,u',v'$ since the $G$-distance between $x$ and $u',v'$ is at least four. 
Since $uv$ is square, the path $Q'$ obtained from $R$ by adding the edge $vv'$ is induced, and since $u$ has no neighbour in $X$, so is the path $P'$ obtained from $R$ by adding the path $v\hbox{-} u\hbox{-} u'$. Now there are paths between the apex of $\mathcal{S}$ (say $a$) and $u'$, and between $a,v'$, both of length $k-1\ge \ell$. No vertex of $L_k$ has a neighbour different from $u',v'$ in these paths. Also $x'$ has no neighbour in these paths; because any such neighbour would be in $L_k\cup L_{k-1}\cup L_{k-2}$, and hence would have $G$-distance at most one from one of $u',v'$, which is impossible since the $G$-distance between $x$ and $u',v'$ is at least four. Thus by taking the union of the first of these with $P'$ and the second with $Q'$, we obtain two paths $P,Q$, both induced and both between $a,x'$, with consecutive lengths. Choose $i$ minimum such that $x'$ is adjacent to $v_i$, and let $R'$ be a bloodline of $v_i$ (defined as in \ref{multijet}). Let $u'=b_f'$, $v'=b_g'$, $x'=b_h'$. Since $u,v$ are earlier than $x$, it follows that $f,g<h$. Now $u',v'$ are nonadjacent to $v_i$ since the $G$-distance between each of $u',v'$ and $x$ is at least four. Since $u'=b_{m+1-f}$ and $x'=b_{m+1-h}$, and $m+1-f>m+1-h$, it follows that no neighbour of $u'$ belongs to $V(R')$, and similarly for $v'$. Thus the union of $P$ with the edge $x'v_i$ and $R'$ is induced, and so is the union of $Q$ with $x'v_i$ and $R'$. Consequently there are jets of $\mathcal{S}$ with consecutive lengths. By taking their unions with the recirculator, we obtain holes of consecutive lengths. This proves (1). \\ \\ (2) {\em If $\chi((F\cap M_d)\setminus C)>7\tau^3$ then the theorem holds.} \\ \\ The proof of this is the same as that for step (2) of the proof of \ref{multijet} and we omit it.
From (1) and (2) the result follows.~\bbox
\section{Some connections with homology}
In the 1990s, Kalai and Meshulam made several intriguing conjectures connecting the chromatic number of a graph $G$ with the homology of a simplicial complex associated with $G$. (Most of them are mentioned in~\cite{kalai}, and see also~\cite{kalai2}.) The {\em $n$th Betti number} $b_n(X)$ of a simplicial complex $X$ is the rank of the $n$th homology group $H_n(X)$ (see, for instance, \cite{hatcher}). The {\em Euler characteristic\footnote{The standard notation for the Euler characteristic of a simplicial complex $X$ is $\chi(X)$; however, we will avoid using that notation here, as there is clearly some potential for confusion between $\chi(I(G))$ and $\chi(G)$.}} of $X$ is $\sum_{n\ge 0}(-1)^n c_n(X)$, where $c_n(X)$ is the number of $n$-faces of $X$; it turns out that the Euler characteristic is also equal to the alternating sum $\sum_{n\ge0}(-1)^n b_n(X)$ of Betti numbers. The {\em independence complex} $I(G)$ of a graph $G$ is the simplicial complex whose faces are the stable sets of vertices of $G$. For a graph $G$, we say that its {\em total Betti number} is $b(G):=\sum_{n\ge 0}b_n(I(G))$. The total Betti number of $G$ is at least the modulus of the Euler characteristic of $I(G)$, since the former is the sum of the Betti numbers of $I(G)$ and the latter is their alternating sum.
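To make the definitions concrete, here is a small sketch of ours (not from the paper; the helper names are hypothetical) that enumerates the stable sets of a graph and evaluates the alternating face count $\sum_{n\ge 0}(-1)^n c_n(X)$ for the independence complex of the five-cycle:

```python
from itertools import combinations

def stable_sets(n_vertices, edges):
    """Enumerate all stable (independent) sets of a graph, including the empty set."""
    adj = {frozenset(e) for e in edges}
    sets = []
    for k in range(n_vertices + 1):
        for S in combinations(range(n_vertices), k):
            if all(frozenset(p) not in adj for p in combinations(S, 2)):
                sets.append(S)
    return sets

def euler_characteristic(n_vertices, edges):
    """Euler characteristic of the independence complex I(G):
    sum over n >= 0 of (-1)^n times the number of n-faces,
    where an n-face is a stable set of size n + 1."""
    return sum((-1) ** (len(S) + 1) for S in stable_sets(n_vertices, edges) if S)

# C_5: its independence complex has 5 vertices (0-faces) and 5 edges (1-faces),
# so the Euler characteristic is 5 - 5 = 0 (the complex is topologically a circle).
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(euler_characteristic(5, c5))  # 0
```

For a complete graph the only faces are the vertices themselves, so the same routine returns the number of vertices.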
Kalai and Meshulam made several conjectures on this topic: one was already mentioned (at \ref{bonamy}), and two others are as follows:
\begin{thm}\label{kalaiconj2} For every integer $k\ge 0$ there exists $c$ such that the following holds. If $b(H)\le k$ for every induced subgraph $H$ of $G$ then $\chi(G)\le c$. \end{thm}
\begin{thm}\label{kalaiconj1} For every integer $k\ge 0$ there exists $c$ such that the following holds. For every graph $G$, if the Euler characteristic of $I(H)$ has modulus at most $k$ for every induced subgraph $H$ of $G$ then $\chi(G)\le c$. \end{thm}
Kalai and Meshulam also asked about the graphs $G$ that satisfy the condition in \ref{kalaiconj1} with $k=1$ (i.e.~for every induced subgraph $H$, the Euler characteristic of $I(H)$ lies in $\{-1,0,1\}$). They conjectured that $G$ has this property if and only if $G$ has no induced cycle of length divisible by three. We prove this conjecture in~\cite{ternary}, with Chudnovsky and Spirkl.
In this section we prove both conjectures \ref{kalaiconj2} and \ref{kalaiconj1}, using \ref{superkalai}. The second conjecture is clearly stronger, as the modulus of the Euler characteristic of $I(G)$ is at most the total Betti number of $G$.
Say that a graph $G$ is {\em $k$-balanced} if for every induced subgraph $H$ of $G$, the number of stable sets in $H$ of even cardinality differs by at most $k$ from the number of stable sets of odd cardinality. The condition in \ref{kalaiconj1} is exactly that $G$ is $k$-balanced, so \ref{kalaiconj1} (and therefore also \ref{kalaiconj2}) is an immediate consequence of the following result. \begin{thm}\label{kalaiconj} For every integer $k\ge 0$ there exists $c$ such that $\chi(G)\le c$ for every $k$-balanced graph $G$. \end{thm} \noindent{\bf Proof of \ref{kalaiconj}, assuming \ref{superkalai}.\ \ } Let $k\ge 1$ be an integer. By \ref{superkalai}, we may choose $c$ such that every graph $G$ with $\omega(G)\le k+1$ and $\chi(G)>c$ contains $k$ holes, pairwise anticomplete and each of length a multiple of three. We claim that every $k$-balanced graph has chromatic number at most $c$. For if $G$ is $k$-balanced, then $G$ has no clique of cardinality more than $k+1$ (because a complete subgraph $H$ with $k+2$ vertices has $k+2$ odd stable sets and only one even one); and $G$ does not have $k$ holes that are pairwise anticomplete, each of length a multiple of three. (We leave the reader to check this.) This proves \ref{kalaiconj}.~\bbox
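The parenthetical count for a clique can be checked mechanically; the following sketch of ours (the helper name is hypothetical) counts even and odd stable sets of a complete graph:

```python
from itertools import combinations

def parity_counts(n_vertices, edges):
    """Count stable sets of even and odd cardinality (the empty set counts as even)."""
    adj = {frozenset(e) for e in edges}
    even = odd = 0
    for k in range(n_vertices + 1):
        for S in combinations(range(n_vertices), k):
            if all(frozenset(p) not in adj for p in combinations(S, 2)):
                if k % 2 == 0:
                    even += 1
                else:
                    odd += 1
    return even, odd

# A complete graph on k + 2 vertices: its stable sets are the empty set and the
# k + 2 singletons, so it has k + 2 odd stable sets and only one even one,
# and hence is not k-balanced.
k = 3
n = k + 2
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]
print(parity_counts(n, complete))  # (1, 5)
```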
\end{document} |
\begin{document}
\title{Robust Pseudo-Markets for Reusable Public Resources}
\begin{abstract}
We study non-monetary mechanisms for the fair and efficient allocation of \emph{reusable public resources}, i.e., resources used for varying durations. We consider settings where a limited resource is repeatedly shared among a set of agents, each of whom may request to use the resource over multiple consecutive rounds, receiving utility only if they get to use the resource for the full duration of their request. Such settings are of particular significance in scientific research where large-scale instruments such as electron microscopes, particle colliders, or telescopes are shared between multiple research groups; this model also subsumes and extends existing models of repeated non-monetary allocation where resources are required for a single round only.
We study a simple pseudo-market mechanism where upfront we endow each agent with some budget of artificial credits, with the budget proportion reflecting the \emph{fair share} of the resource we want the agent to receive. The endowments thus define for each agent her \emph{ideal utility} as that which she derives from her favorite allocation with no competition, but subject to getting at most her fair share of the resource across rounds. Next, on each round, and for each available resource item, our mechanism runs a first-price auction with a selective reserve, wherein each agent submits a desired duration and a per-round bid, which must be at least the reserve price if requesting for multiple rounds; the bidder with the highest per-round bid wins, and gets to use the item for the desired duration. We consider this problem in a Bayesian setting and show that under a carefully chosen reserve price, irrespective of how others bid, each agent has a simple strategy that guarantees she receives a $1/2$ fraction of her \textit{ideal utility} in expectation. We also show this result is tight, i.e., no mechanism can guarantee that all agents get more than half of their ideal utility. \end{abstract}
\section{Introduction}
Our goal in this paper is to design mechanisms that enable fair and efficient utilization of reusable public resources between agents who share the resource. To formalize what we mean, it helps to consider the problem faced by researchers sharing some expensive scientific equipment, such as a telescope, mass spectrometer, gene sequencer, etc. Such settings exhibit several common features: \begin{itemize}
\item \emph{Resource constraints:} A telescope can at any time be used by only one researcher; a biology core facility may have multiple spectrometers, allowing simultaneous sharing by a few agents, but still not by all. This necessitates some form of centralized coordination.
\item \emph{Stochastic and time-sensitive demands:} Research is uncertain, and so a researcher may not always know beforehand when they may need to use the equipment. Moreover, requirements are often time-sensitive, and cannot be delayed. For example, astronomers often use less powerful telescopes to find potential astronomical events of interest; as soon as one is detected they need a large-scale telescope to take detailed measurements before the event ends. Consequently, the coordinator of the larger telescope needs to make allocation decisions on the fly while being uncertain about future events.
\item \emph{Multi-round demands:} Different researchers may need to use the instrument for different lengths of time to complete their experiments. Moreover, they may not be able to interrupt an experiment and resume it later. For example, stopping a gene sequencer renders the samples and chemicals unusable. In other cases, a very large setup time makes switching between tasks not viable, as is the case when focusing a large telescope towards a specific part of the sky. Hence, the coordinator may need to allow reserving the resource for long durations, which blocks the workflow of some researchers.
\item \emph{Strategic behavior by agents:} At the end of the day, given any coordination mechanism, competing agents will try to `game' the mechanism for their own good, and there is always a danger that this may lead to a tragedy of the commons, where the resource is inefficiently utilized. \end{itemize}
The way these challenges are handled in practice is often ad hoc. For example, modern large-scale telescopes such as the \href{https://webb.nasa.gov/}{James Webb telescope}~\cite{jameswebb} use complex protocols for time-sharing between researchers, based on a combination of guaranteed slots, proposal-based allocations, and on-demand slots. Of course, one common solution to any such coordination problem is to enable some form of `free market', and indeed, astronomers have proposed using credit-based market mechanisms~\cite{etherton2004free,saunders2018abstract}. However, our understanding of such non-monetary mechanisms is limited, especially when incorporating dynamics, uncertainty, and scheduling constraints. Our work aims to advance the understanding of \emph{pseudo-market} mechanisms for such settings.
At a high level, it seems clear that the principal who owns the instrument should decide when each agent gets to control the instrument in a way that aims for high overall utilization while ensuring equitable distribution of resource usage between agents. One way to do so could be via round-robin sharing -- however, while this is clearly equitable and maximizes utilization, in many cases it is undesirable, as it has no alignment with when the agents would most benefit from the instrument, leading to low utility guarantees for the agents. Unfortunately, without charging money, it is unclear how the principal can determine the exact utility that each agent gets from using the instrument on any given round. The principal could of course ask agents to report their utilities, but without money, the principal cannot incentivize agents to report truthfully. Additionally, there is no direct way to compare these reports, since there is no common scale for their utilities.
Pseudo-markets based on artificial currencies offer a way to guarantee fairness and ensure that the instrument is used in a way that generates high utilities for the agents. The basic idea is that the principal first endows each agent with some budget of artificial credits proportional to the \emph{fair share} of the resource each agent is entitled to receive. These fair shares are exogenously specified, in a way that reflects the ex-ante `priority/importance' accorded to each agent. After the initial allocation of credits, the principal uses some mechanism to allocate the resource to some agents, charging them in this artificial currency. In the case that the allocation process happens only once, a first-price auction offers an appealing mechanism, with the equilibrium being the outcome of the corresponding Fisher market. \cite{DBLP:conf/sigecom/GorokhBI21} studied the case when allocation happens repeatedly over a number of rounds but the item is required only for a single round at a time. They study a first-price mechanism and show approximate fairness properties of the resulting allocation. Their approach, however, depends on having only single-round allocations, and as we demonstrate below, performs poorly in settings that require multi-round allocations. Consequently, we need new ideas and techniques to extend their robustness notion to settings where the resources may be needed for varying amounts of time.
\subsection{Overview of our results and techniques}
Our basic setting is as follows: A principal has a single item that is to be shared between a set of agents over $T$ rounds. In each round, each agent has a random requirement which is comprised of a required duration, and a value for utilizing the item over that duration. The principal uses a pseudo-market to coordinate the agents, where she first endows each agent with some budget of artificial credits, and then whenever the resource is free, runs some mechanism to determine who should get to use it, and for how long.
Our aim is to give \emph{per-agent performance guarantees} under minimal behavioral assumptions on other agents. In particular, following~\cite{DBLP:conf/sigecom/GorokhBI21}, we define the \textit{ideal utility} of an agent with fair share $0 < \a < 1$ as the best long-run average utility she can achieve in a setting without competition, but where she is constrained to have the item for at most an $\a$ fraction of the rounds (see \cref{sec:ideal} for the formal definition). The ideal utility is defined independently of other agents' demands or behavior, thus making it a reliable measure of the maximum utility an agent can attain with their fair share. Our results focus on what fraction of her ideal utility an agent can achieve in a \emph{minimax} sense, i.e., irrespective of how other agents bid. This is in contrast to other measures like no-regret, where the resulting utility is compared with a benchmark that depends on how other agents bid. In this sense, our guarantees can be viewed as characterizing the robustness of the underlying mechanism with respect to an agent's ideal utility.
To better appreciate our mechanism's guarantees, it is instructive to first observe that simple mechanisms like round-robin have poor performance, even when agents have single-round demands. Consider a setting with $n$ agents, each having fair share $\a = \nicefrac 1 n$ and non-zero value with probability $\nicefrac 1 n$. Every round, the probability that round-robin allocates the item to an agent with positive value that round is only $\nicefrac 1 n$, even though the probability of some agent having positive value is $\Omega(1)$. Thus, every agent receives an $O(1/n)$ fraction of her ideal utility in expectation, even though our mechanism guarantees an $\Omega(1)$ fraction.
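The gap in this example can be made concrete with a quick calculation, a sketch of ours under the stated independence assumptions:

```python
def round_robin_gap(n):
    """For n agents, each with positive value independently w.p. 1/n:
    round-robin serves one fixed agent per round, so the served agent has
    positive value with probability only 1/n, while the probability that
    *some* agent has positive value that round is 1 - (1 - 1/n)^n."""
    p_round_robin = 1.0 / n
    p_some_agent = 1.0 - (1.0 - 1.0 / n) ** n
    return p_round_robin, p_some_agent

lo, hi = round_robin_gap(100)
print(lo, hi)  # 0.01 versus roughly 1 - 1/e, about 0.63
```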
In this regard, our main contribution is \textbf{\hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace}}, a new mechanism that guarantees that every agent can realize at least $\nicefrac{1}{2}$ of her ideal utility, even with multi-round allocations. In the setting of single-round allocations, this same guarantee is achieved by a simple first-price auction~\cite{DBLP:conf/sigecom/GorokhBI21}; however, that auction performs poorly with multi-round reservations, even if agents are truthful. The reason is that if we allow agents to reserve the item for many rounds, then in a round in which there is no serious competition, an agent can capture the item for a long time at a very low price, and such a reservation can block higher-valued demands that arrive in later rounds.
To overcome this difficulty, our mechanism uses a reserve price for multi-round reservations. In each round, each agent can request the resource for a consecutive sequence of rounds starting with the current one, by submitting a requested duration and a per-round bid for that duration. Bids that request the resource for more than one round must be at least the reserve price; aside from that rule, the agent with the highest per-round bid wins. We can think of our mechanism as a combination of a spot market for the current round and a buy-ahead reservation option with a price floor.
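A minimal sketch of a single round of this allocation rule (our own illustration; the data representation and the deterministic tie-breaking are assumptions, not part of the paper's specification):

```python
def run_round(bids, reserve):
    """One round of a first-price auction with a reserve for multi-round requests.

    bids: dict mapping agent name -> (per_round_bid, requested_duration).
    A bid requesting more than one round is valid only if its per-round bid is
    at least the reserve; among valid bids, the highest per-round bid wins
    (ties broken by agent name, purely for determinism).
    Returns (winner, duration, per_round_price), or None if no bid is valid.
    """
    valid = {a: (b, d) for a, (b, d) in bids.items() if d == 1 or b >= reserve}
    if not valid:
        return None
    winner = max(valid, key=lambda a: (valid[a][0], a))
    bid, duration = valid[winner]
    return winner, duration, bid  # first price: winner pays her own per-round bid

# A low multi-round bid is filtered out by the reserve, so the single-round
# bidder wins even with a smaller per-round bid.
print(run_round({"alice": (1.5, 4), "bob": (1.0, 1)}, reserve=2.0))  # ('bob', 1, 1.0)
```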
Our main technical contributions are as follows: \begin{itemize}
\item In \cref{sec:guar} we study our mechanism, where an agent with fair share $\a$ is endowed with budget $\a T$ and the other agents have a total budget of $(1-\a)T$.
Now when the reserve price is $r$, we prove that an agent with ideal utility $v^{\star}$ can guarantee $v^{\star} T \min\{ 1/r, 1-1/r \} - O(\sqrt T)$ utility in expectation over $T$ rounds, regardless of how the other agents behave (\cref{thm:guar:guarantee}). This quantity is maximized when $r = 2$, in which case the agent can guarantee half her ideal utility. We then show that this is the best possible bound in our mechanism: an agent cannot guarantee more than $v^{\star} T (\min\{ 1/r, 1-1/r \})^+$ expected utility (\cref{thm:guar:impossibility}). This shows that reserve prices are essential in our mechanism for multi-round reservations.
The $1/r$ part in the minimum of both results comes from the fact that an agent with budget $\a T$ and multi-round demands cannot get the item for more than $\a T / r$ rounds when the reserve price is $r$. The argument for being able to guarantee the $1-1/r$ part is the following: The other agents have budget at most $T$ which means they can win at most $T/r$ rounds with bids at or above the reserve price, leaving $T(1 - 1/r)$ for the agent if she is willing to pay the reserve price. However, with the multi-round demands, the rounds are not at all independent. For example, if the adversary could reserve every other round ahead of time, this could eliminate all value for an agent with only two-round demands. Using a martingale argument we prove that because the other agents' behavior is independent of the value of the agent, the agent can get $v^{\star}$ utility from each one of those rounds in expectation.
For the upper bound result, the other agents can get the item for $T/r$ rounds, and this leaves the agent with $T - T/r$ rounds in which the item is available. We prove that if the mechanism allows long reservation durations, then our agent may have high value in only an $\a$ fraction of the available rounds.
\item In \cref{sec:hardness} we prove that no non-monetary mechanism (pseudo-market or otherwise) can guarantee that every agent gets more than half her ideal utility (\cref{thm:imposs}), making our previous result optimal. We prove this by examining an example where each agent has, with low probability, a positive-value demand that lasts many rounds.
\item In \cref{sec:multi} we study the same setting as in \cref{sec:guar}, but where there are $L$ identical items that the principal can allocate to the agents (we still assume each agent wants at most one item at a time). In this setting, the ideal utility of an agent with fair share $\a$ allows her to have an item for an $\a L$ fraction of the rounds. We show that the agent can again guarantee half of her ideal utility (\cref{thm:mult:guarantee}). \end{itemize}
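The reserve-price trade-off behind \cref{thm:guar:guarantee} is easy to check numerically; the following is a sketch of ours, not part of the formal argument:

```python
def guarantee_fraction(r):
    """Fraction of ideal utility guaranteed under reserve price r:
    1/r bounds how many rounds the agent's own budget can buy at the reserve,
    while 1 - 1/r bounds what is left after the other agents spend theirs."""
    return min(1.0 / r, 1.0 - 1.0 / r)

# Scanning reserve prices shows the two bounds cross at r = 2,
# where the guaranteed fraction peaks at 1/2.
best = max(guarantee_fraction(1 + i / 100) for i in range(1, 400))
print(guarantee_fraction(2.0), best)  # 0.5 0.5
```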
\subsection{Paper Outline}
Before presenting our work, we survey related literature in~\cref{sec:litrev}. In \cref{sec:setting}, we define the simplest case of the reusable resource model, where the principal has a single resource to allocate in each round, and outline the general structure of the pseudo-market mechanism that we study in this work. For most of the paper, we focus on this single resource setting: we first define our per-agent benchmarks (\cref{sec:ideal}) and then state our main robustness guarantees (\cref{sec:guar}) and associated hardness results (\cref{sec:hardness}). We extend our basic setting to incorporate multi-unit settings in \cref{sec:multi}.
\section{Related Literature} \label{sec:litrev}
Our work sits at the intersection of two topics in mechanism design: (i) non-monetary mechanisms for resource allocation, and (ii) dynamic mechanisms with state.
We now briefly summarize the literature on these topics.
Classical non-monetary mechanism design utilizes a wide variety of models and objectives to realize good welfare outcomes despite having strategic agents. Some of these include targeting alternate solution concepts such as pairwise stability~\cite{roth1992}, disregarding incentives to focus on fairness properties of realized outcomes~\cite{moulin2002proportional,caragiannis2016unreasonable,banerjee2022online}, partial public information (e.g., utilities are public but feasibility is private~\cite{arpitaShaddin}), designing lotteries to approximate efficient outcomes~\cite{moulin2007scheduling,procaccia2013approximate,kojima2010incentives}, explicitly hurting efficiency to align incentives (i.e., `money burning'~\cite{hartline2008optimal,DBLP:conf/sigecom/ColeGG13}), and using ex-post verification~\cite{branzei2015verifiably}. Most of this literature, however, deals with one-shot (i.e., static) allocation settings.
More recently, there has been an increasing focus on \emph{pseudo-markets} -- simulating monetary mechanisms using an \emph{artificial currency} -- driven largely by their success in real-world deployment for university course allocation~\cite{budish2017course}, food banks~\cite{walsh2014allocation,prendergast2022allocation} and cloud computing platforms~\cite{dawson2013reserving,vasudevan2016customizable}. Theoretical foundations of such mechanisms have been studied in the context of one-shot combinatorial assignment problems~\cite{budish2011combinatorial,2017nashwelfare}, but also in dynamic settings, including redistribution mechanisms~\cite{cavallo2014incentive}, and approximate mechanisms for infinite-horizon Bayesian settings with knowledge of value distributions~\cite{jackson,guo2010,balseiro2017,gorokh2017}. The latter works all build on the core idea of \citewithauthor{jackson} of `linking' multiple allocation problems to mitigate any gains from strategic behavior in any one problem. More recently,~\citewithauthor{DBLP:conf/sigecom/GorokhBI21} showed how pseudo-markets can be used for repeated single-round allocations to get individual performance guarantees \emph{without} knowing demand distributions; our work adapts and extends their ideas to the more complex reusable resource setting.
The challenge of dealing with both fixed budgets and reusable resources also places our work in the active area of dynamic mechanisms with state: once the resource is allocated, it is unavailable for the next few rounds, and the agent holding the allocation has less remaining credit. The difficulty in these problems arises from factors that couple allocations across time; for example, incomplete information and learning~\cite{DBLP:conf/wine/KanoriaN14,Iyer2014,devanur2015perfect,nekipelov2015econometrics}, cross-round constraints including budget limits~\cite{nazerzadeh2008,gummadi2012repeated,leme2012sequential,balseiro2015repeated}, leftover packets in a queuing system \cite{DBLP:conf/sigecom/GaitondeT20,DBLP:conf/sigecom/GaitondeT21}, stochastic fluctuations in the underlying setting~\cite{gershkov2009dynamic,bergemann2010dynamic,bergemann2011dynamic}, adversarial environments \cite{DBLP:journals/mansci/BalseiroG19}, etc. Analyzing equilibria in repeated settings, however, can be difficult, and so authors have explored approximation techniques such as mean-field approaches~\cite{gummadi2012repeated,balseiro2015repeated} and bi-criteria approximations~\cite{nekipelov2015econometrics}.
Another relevant recent work is that of \cite{gaitonde2022budget}, where value-maximizing agents have budget constraints that correspond to real money. They show a regret-type guarantee against the strategy that spends at most the agent's average budget in expectation each round, but their result degrades significantly when the behavior of the other agents is adversarial. \cite{balseiro2021landscape,deng2021towards,pai2014optimal} study a similar setting where the agents' behavior is assumed to reach equilibrium; the first and third focus on revenue maximization and the second on welfare maximization.
\section{Reusable Public Resources and Pseudo-Markets -- Basic Setting} \label{sec:setting}
We now formally define the simplest case of the reusable resource model, where the principal has a single resource it can allocate in each round -- for example, time-sharing a single telescope. We then outline the general structure of pseudo-market mechanisms that we study in this work. In \cref{sec:ideal,sec:guar,sec:hardness}, we focus on this setting; subsequently, we extend our model to incorporate multi-unit settings in \cref{sec:multi}.
\subsection{Allocating a Single Reusable Public Resource}
There are $n$ agents and $T$ rounds. In each round, the principal has a single item to allocate. Agents have single-minded multi-round valuations; formally, in every round $t\in [T]$, each agent $i\in [n]$ samples a random \emph{type} $\theta_{i}[t]=(V_{i}[t], K_{i}[t])$, where $K_{i}[t]$ is the number of rounds that agent $i$ needs the item for starting from round $t$, and $V_{i}[t]$ is the \emph{per-round value} she gets if she is allocated the item for the next $K_{i}[t]$ rounds. In other words, if agent $i$ is allocated the item on rounds $t, t+1, \ldots, t+K_{i}[t]-1$ (henceforth denoted $[t,t+K_{i}[t]-1]$), then she receives a total utility of $K_{i}[t]V_{i}[t]$; on the other hand, if the agent is not allocated the item for all of these rounds, then she does not get any utility arising from her round $t$ demand. We henceforth use $\Theta=\mathbb{R}_+\times\mathbb{N}$ to denote the type space for each agent and round.
We assume that each agent $i$ in each round $t$ draws demand type $\theta_{i}[t]$ from some underlying distribution $\mathcal F_i$, that is \emph{independent across agents} and \emph{across rounds}. In particular, agent $i$'s demand $\theta_{i}[t]$ is drawn independently in round $t$ irrespective of her demand in previous rounds. Note that this means if agent $i$ has $K_{i}[t] > 1$ but is not allocated the item in round $t$, then in round $t+1$, the earlier demand is lost, and she draws a new demand $\theta_{i}[t+1]$. Having a demand that is lost if not immediately allocated is meaningful in many settings where it is not possible for agents to hold back on executing a demand till a later round. For example, a biologist may not be able to preserve a sample that she needs a microscope to study. Similarly, an astronomer might need to immediately redirect a shared telescope to point toward a transient phenomenon she wants to observe. In addition, even when the astronomer is denied use of the telescope to take measurements of any particular event, new opportunities may arise soon afterward.
Once an agent $i$ is allocated the item for rounds $[t, t+K_{i}[t]-1]$, we assume the item is unavailable for reallocation to \emph{any} agent (including agent $i$). In other words, in every round $t$, either the item is unavailable for allocation, or the principal commits it to some agent $i$ for rounds $[t,t+K_{i}[t]-1]$. The commitment to not interrupt an allocation models situations with large setup costs. For example, there is often some setup cost in preparing an instrument for an experiment, or in directing and focusing a telescope, and so it may be desirable for agents to complete any task they start.
Finally, we assume that each agent $i$ has a fair share $\a_i$, where $\sum_i \a_i = 1$. Fair shares are usually exogenously defined and represent the fraction of the resource that the principal wants each agent to have (which then determines her associated \emph{ideal utility}, see \cref{sec:ideal}).
\subsection{Artificial Currency Mechanism for Reusable Resources} \label{ssec:mechanism}
Given the above setting, the mechanism we study for allocating the resource is a \emph{pseudo-market} (or artificial credit) mechanism. The basic idea behind such mechanisms is to endow all agents competing for the shared resource with some budget of artificial credits, which they then use over time to compete in some form of repeated auction. Such mechanisms have been widely used in practice~\cite{prendergast2017food,budish2017course}, and also studied theoretically in one-shot allocation settings under known value distributions~\cite{guo2009,gorokh2017,balseiro2017}. Our presentation here is closest to that of~\cite{DBLP:conf/sigecom/GorokhBI21}, who consider the robustness properties of such mechanisms for repeated allocation with single-round demands.
Our mechanism, \hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace}, starts by endowing every agent with some artificial credits, proportional to their fair shares. Since the currency has no intrinsic value (and hence no fixed scale), we henceforth normalize the total budget of all agents to be $T$, of which every agent $i$ has a share of $\a_i$; in other words, each agent $i$ has an initial budget $B_i[1]=\a_i T$. At a high level, the budget fraction $\a_i$ corresponds to the idea that agent $i$ could get to use the item on an $\a_i$ fraction of rounds.
Following the initial endowment, the basic idea behind pseudo-market mechanisms is to then run some particular mechanism in each round, and allow agents to bid (and pay) in these mechanisms using their credits. Agents have no intrinsic value for these credits (i.e., their utility is not quasi-linear; it comes only from the allocations they gain), but they are unable to bid more than their remaining budget. While different works consider different per-round mechanisms, the most commonly studied is a first-price auction~\cite{DBLP:conf/sigecom/GorokhBI21,balseiro2017,gorokh2017,guo2009}.
Our mechanism handles multi-round allocations as follows: first, the principal declares a \emph{reserve price} $r$ for multi-round allocations; next, at the start of any round in which the resource is available, each agent declares a duration of rounds she wants to reserve the resource for, as well as a \emph{per-round} bid (which must exceed the reserve if the requested duration lasts multiple rounds). The agent with the \emph{highest per-round bid} is then awarded the item for her requested duration and is `charged' her bid times the duration from her credit budget.
We note that if we set a reserve price $r > 1$, then, if only multi-round requests are made, there are not enough credits among the agents for the resource to be allocated in every round. However, as we show in \cref{thm:guar:impossibility}, this wastage is necessary with multi-round demands in order to obtain any meaningful performance guarantee. Note, though, that in settings where preemption is not possible, even a single agent using the resource with full knowledge of all future demands might leave it unused at times, so as not to interfere with future higher-valued demands.
We present the mechanism in detail in \cref{algo:algo}. In~\cref{sec:multi}, we discuss how to extend it to settings where the principal has more than one resource to allocate.
\begin{algorithm}[t] \SetAlgoNoLine \caption{\hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace}} \label{algo:algo} \KwIn{Rounds $T$, agents $n$, reserve $r \ge 0$, and agents' fair shares $\{\a_i\}_{i\in[n]}$}
\textbf{Initialize} $t = 1$, agent budgets $B_i[1] = \a_i T\quad\forall\,i\in [n]$\; \While{$t \le T$} {
Collect bids $b_{1}[t], \ldots, b_{n}[t]$ and desired durations $d_{1}[t], \ldots, d_{n}[t]$\;
Let $\mathcal V = \big\{ i\in [n] : b_i[t]d_i[t] \le B_i[t] \textrm{ and } (d_{i}[t] = 1 \textrm{ or } b_i[t] \ge r) \big\}$
\tcp*{Determine valid bids}
\eIf{$\mathcal V=\emptyset$}{
Do not allocate item and set $t=t+1$\;
}{
Define $I_t = \argmax_{i\in\mathcal V} b_{i}[t]$ (ties broken arbitrarily)
\tcp*{Choose winning agent}
Update $B_{i}[t+1] = B_{i}[t] - b_{i}[t]d_{i}[t]\One{i=I_t}$
\tcp*{Update agents' budgets}
Allocate item to agent $I_t$ for rounds $[t, t + d_{I_t }[t] - 1]$\;
Set $t = t + d_{I_t }[t]$
\tcp*{Block item for requested duration}
} } \end{algorithm}
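To make the mechanics of the mechanism concrete, the following is a minimal Python sketch of the allocation loop in \cref{algo:algo}. It is illustrative only, not part of the formal development; the bid-collection interface \texttt{get\_bids} and the example bids are hypothetical.

```python
def run_auction(T, budgets, reserve, get_bids):
    """First-price pseudo-auction with multi-round reserves (sketch).

    budgets: dict agent -> remaining credits (mutated in place).
    get_bids(t): dict agent -> (per_round_bid, duration) for round t.
    Returns a dict mapping each allocated round to its winning agent.
    """
    allocation = {}
    t = 1
    while t <= T:
        bids = get_bids(t)
        # A bid is valid if affordable and, for multi-round requests,
        # at least the reserve price.
        valid = {i: (b, d) for i, (b, d) in bids.items()
                 if b * d <= budgets[i] and (d == 1 or b >= reserve)}
        if not valid:
            t += 1                      # no valid bid: item stays idle
            continue
        winner = max(valid, key=lambda i: valid[i][0])  # highest per-round bid
        bid, dur = valid[winner]
        budgets[winner] -= bid * dur    # charge bid times duration
        for s in range(t, t + dur):
            allocation[s] = winner      # item blocked for requested duration
        t += dur
    return allocation

# Hypothetical run: agent 2 wins round 1 with a single-round bid of 4;
# agent 1 then wins a two-round reservation at per-round bid 3.
bids = {1: {1: (3, 2), 2: (4, 1)}, 2: {1: (3, 2), 2: (1, 3)}}
budgets = {1: 10, 2: 10}
alloc = run_auction(3, budgets, reserve=2, get_bids=lambda t: bids.get(t, {}))
```

Note that agent 2's round-2 bid is rejected because it requests multiple rounds below the reserve, mirroring the validity check in \cref{algo:algo}.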
\section{Individual Agent Benchmarks: Fair Shares and the Ideal Utility} \label{sec:ideal}
In this section, we define the utility benchmarks we consider for the agents. When mechanisms can use real money and agents have quasi-linear utilities, then payments provide an easy way to compare different agents' values and utilities. In contrast, when there are no payments that affect the agents' utilities, then there is no way to make interpersonal comparisons between agents. For this reason, we need a welfare benchmark for each agent that is independent of other agents' values. To this end, we adapt an idea from~\cite{DBLP:conf/sigecom/GorokhBI21} (which in turn borrows ideas from the bargaining literature and the Fisher market model), wherein agents' benchmarks are defined by their (exogenous) fair shares as well as their own relative valuations for items in different rounds.
The main idea behind our benchmark is that for each agent $i$, her budget fraction $\a_i$ (with $\sum_j \a_j = 1$) determines the \textit{fair share} of the overall resource, i.e., the fraction of total rounds she is entitled to utilize while respecting the rights of other agents to access the resource. To see how this translates into our welfare benchmark, consider the following simple example: suppose we have $n$ agents, where each agent has fair share $\a_i = 1/n$. Moreover, suppose every agent $i$ has $(V_{i}[t], K_{i}[t]) = (1, 1)$ with probability $1$ in every round. An agent's maximum total utility \emph{without competition} is $T$, but it would be unreasonable to expect this to be attainable for any agent. In contrast, by symmetry, each agent could expect to win $T/n$ total rounds resulting in $T/n$ total utility, which is indeed easily achieved (for example, via a round-robin allocation scheme).
\citewithauthor{DBLP:conf/sigecom/GorokhBI21} extend the above idea to define the \textit{ideal utility} for agents in settings where every request lasts for just one round. Their basic definition asserts that an agent's ideal utility is \emph{the highest per-round utility she can get while ensuring that other agents can get at least their fair share of the resource}. Formally, for each agent $i$, they consider a simplified setting with no other agents, but where agent $i$ is constrained to request the item for at most an $\a_i$ fraction of the rounds, and define agent $i$'s ideal utility to be the maximum expected per-round utility she can achieve in this setting. For example, if agent $i$ has $(V_{i}[t], K_{i}[t]) = (1, 1)$ with probability $1$, then her ideal utility is $\a_i$ for any fair share $\a_i$; more generally, if the agent has value $V_{i}[t]\sim\mathcal{F}_i$ (and $K_i[t]=1$) and fair share $\a_i$, her ideal utility essentially corresponds to that achieved by requesting the item only on rounds in which the agent's value $V_i[t]$ is in the top $\a_i$ quantile of her value distribution, so that the agent requests the item with probability $\a_i$ overall.
A first challenge in extending the ideal utility to our setting with reusable resources is that it is now tricky to define what it means for an agent to request the item in each round of the no-competition setting while using only her fair share, since each request may need to reserve the resource for multiple rounds. To this end, given fair share $\a_i$, we define agent $i$'s ideal utility to be her \emph{long-run average utility in an infinite-horizon setting with no competition, subject to her long-run average resource utilization being at most $\a_i$}.
In more detail, let $\pi:\Theta\to[0,1]$ denote a (stationary) policy that specifies for each type $\theta = (V, K)$ the probability with which the agent requests to reserve the resource for $K$ rounds, conditioned on it being available. The agent's ideal utility (rate) is that which she obtains under the optimal policy $\pi$ (which is uniquely defined under mild technical conditions). Note though that requesting the item in any round affects her ability to request it in future rounds, even with no competition. To formalize this, let $Z[t] = 1$ denote that the agent chooses to reserve the item in round $t$ for $K[t]$ rounds; $Z[t] = 0$ otherwise. This affects the future availability of the item: if $Z[t] = 1$ then the item is unavailable for the next $K[t] - 1$ rounds. Let $A[t]$ be an indicator variable for item availability, with $A[t]=1$ if the item is available in round $t$, and $A[t] = 0$ if unavailable; then, if $Z[t] = 1$, we have that $A[t'] = 0$ for all $t'\in[t+1, t+K[t]-1]$. Now, given policy $\pi$, we can define $Z[t]$ as a Bernoulli random variable that is $1$ with probability $A[t]\pi(\theta[t])$. Note though that these definitions are not circular: even though the definition of $Z$ involves $A$ and vice versa, $A[t]$ depends only on $Z[t']$ for $t' < t$ and $Z[t]$ depends only on $A[t]$.
Now we can make the following definition:
\begin{definition}[Ideal Utility]\label{def:ideal:single}
Consider the single reusable resource setting, with a single agent $i$ with fair share $\a_i$ and type $\theta[t] = (V[t], K[t])$ in round $t$ drawn independently of other rounds from distribution $\mathcal F_i$.
For any policy $\pi:\Theta\rightarrow [0,1]$, let $Z[t]\sim \text{Bernoulli}(A[t]\pi(\theta[t]))$ denote a sequence of indicator variables each of which is $1$ in round $t$ if the resource is available (indicated by $A[t] = 1$) and is requested by the agent, else $0$; moreover if $Z[t]=1$, then $A[t'] = 0$ for all $t'\in[t+1, t+K[t]-1]$.
Now, the \textit{ideal utility $v^{\star}_i$} for agent $i$ is defined as the value of the following constrained infinite-horizon control problem:
\begin{equation}
\label{eq:ideal:MDP}
\begin{split}
\max_{\pi} \quad &\lim_{H\rightarrow\infty}\frac{1}{H}\sum_{t=1}^H V[t]K[t]Z[t]
\\
\textrm{such that} \quad
&\lim_{H\rightarrow\infty}\frac{1}{H}\sum_{t=1}^H K[t]Z[t] \le \a_i
\end{split}
\end{equation} \end{definition}
Note that the above problem does not depend on our true (finite) horizon $T$. Moreover, assuming $V[t], K[t]$ are bounded, the Markov chain ergodic theorem implies that for any policy $\pi$, the above time averages exist and equal their expected values under the stationary distribution of the resulting Markov chain. However, unlike in a standard average-cost MDP, due to the additional constraint the optimal policy here may not be deterministic.
Intuitively, the above definition extends the notion of the ideal utility to the single reusable resource setting by again considering a world with only a single agent $i$ with fair share $\a_i$, and allowing the agent to choose any stationary request policy (i.e., the probability, as a function of $\theta[t]$, with which the agent can reserve the resource whenever it is free) subject to it using the resource at most an $\a_i$ fraction of rounds on average. Defining the control problem over the infinite horizon allows us to ignore boundary issues (e.g., if there are $5$ rounds remaining but agent $i$'s demand lasts $6$ rounds); moreover, note that in the case of single-round demands, our definition recovers that of~\citewithauthor{DBLP:conf/sigecom/GorokhBI21}.
\subsection{Computing the ideal utility} \label{ssec:comp}
\begin{figure}
\caption{\it \small Example of an agent's ideal utility: The numbers on top are the agent's type $\theta[t] = (V[t], K[t])$ each round $t$; a type in red denotes that $\texttt{Req}(\theta[t]) = 1$. Each epoch is associated with a request that the agent made while the item was available: it includes the rounds when the agent held the item because of that request (green blocks) and the rounds before that while the item was free. For each epoch $j$ we also include its length, $\ell_j$, the number of rounds the agent holds the item for, $k_j$, and the total value the agent gets, $v_j$.}
\label{fig:epoch}
\end{figure}
One problem with defining the ideal utility via the infinite horizon control problem in \cref{eq:ideal:MDP} is that it is unclear if it can be solved efficiently, and moreover, how to interpret the solution for any given distribution $\mathcal{F}_i$ and fair share $\a_i$. We now show how the above definition of the ideal utility for agent $i$ can be re-formulated as a simpler optimization problem, which we show can be efficiently solved by converting it into a linear program. For ease of notation, we drop the subscript $i$ for the remainder of this section.
To reformulate the above program, we first define $\texttt{Req}(\theta)$ to be an indicator random variable that is $1$ if the agent has type $\theta$ and wants to request the item if available (in our earlier notation, for a given randomized policy $\pi$, we have $\texttt{Req}(\theta[t])\sim\text{Bernoulli}(\pi(\theta[t]))$). Now, given some function $\texttt{Req}$, we divide the entire horizon into a collection of discrete renewal cycles or \emph{epochs}, where each epoch comprises all rounds between successive times at which the resource is released by the agent: formally, if in round $t$ the agent requests the item, then the epoch associated with that request comprises all the rounds before $t$ during which the item was free since it was last released (possibly zero such rounds) and all the rounds from $t$ until the agent releases her hold of the item (i.e., rounds $[t,t+K[t]-1]$). We show an example in \cref{fig:epoch}. Under any stationary policy $\texttt{Req}$, any two epochs are independent and identically distributed. Now let $q = \Pr{ \texttt{Req}(V, K) = 1 }$ denote the probability that the agent requests the item if it is available (where the probability is over both $(V, K) \sim \mathcal F$ and any randomization in the agent's request policy). Then the following are true for each epoch: \begin{itemize}
\item The number of rounds in an epoch after its start and until (and including) the round in which the agent requests the item is distributed as $\text{Geometric}(q)$.
\item If $\ell_j$ is the length of an epoch $j$, then $\Ex{\ell_j} = \nicefrac{1}{q} - 1 + \Ex{K | \texttt{Req}(V, K) = 1}$.
\item If $v_j$ is the total utility the agent gets in epoch $j$, then $\Ex{v_j} = \Ex{V K | \texttt{Req}(V, K) = 1}$. Similarly, if $k_j$ is the number of rounds the agent holds the item in epoch $j$, then it holds that $\Ex{k_j} = \Ex{K | \texttt{Req}(V, K) = 1}$.
\item The agent's per-round utility is $\nicefrac{\sum_j v_j}{\sum_j \ell_j}$ and the total fraction of rounds she holds the item for is $\nicefrac{\sum_j k_j}{\sum_j \ell_j}$. Since $(\ell_j, k_j, v_j)$ is independent across different epochs, as the number of epochs approaches infinity, we get that her expected per-round utility is $\nicefrac{\Ex{v_1}}{\Ex{\ell_1}}$ and the expected fraction of rounds she holds the item for is $\nicefrac{\Ex{k_1}}{\Ex{\ell_1}}$. \end{itemize}
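For intuition, the expected epoch quantities above can be computed directly for any discrete type distribution. The following Python sketch (with a hypothetical distribution and the request rule ``request iff $V>0$'', both chosen for illustration) evaluates $\Ex{\ell_1}$, $\Ex{v_1}$, and $\Ex{k_1}$, and from them the per-round utility and usage fraction.

```python
from fractions import Fraction as F

def epoch_stats(types, req):
    """Expected epoch length, value, and hold time under request rule req.

    types: list of (prob, value, duration); req(v, k) -> bool.
    Implements E[l] = 1/q - 1 + E[K | Req=1], E[v] = E[VK | Req=1],
    and E[k] = E[K | Req=1] from the renewal argument.
    """
    q = sum(p for p, v, k in types if req(v, k))            # request probability
    ek = sum(p * k for p, v, k in types if req(v, k)) / q   # E[K | Req = 1]
    evk = sum(p * v * k for p, v, k in types if req(v, k)) / q  # E[VK | Req = 1]
    el = 1 / q - 1 + ek                                     # expected epoch length
    return el, evk, ek

# Hypothetical types: (V,K)=(1,1) w.p. 1/4, (1/2,2) w.p. 1/4, (0,1) otherwise.
types = [(F(1, 4), F(1), 1), (F(1, 4), F(1, 2), 2), (F(1, 2), F(0), 1)]
el, ev, ek = epoch_stats(types, lambda v, k: v > 0)  # request iff positive value
per_round_utility = ev / el   # E[v_1] / E[l_1]
usage_fraction = ek / el      # E[k_1] / E[l_1]
```

Here $q=\nicefrac12$, so the expected epoch length is $2 - 1 + \nicefrac32 = \nicefrac52$, giving per-round utility $\nicefrac25$ and usage fraction $\nicefrac35$.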
Using these facts, we can re-parameterize and re-write the optimization problem (\ref{eq:ideal:MDP}) as follows: \begin{equation} \label{eq:ideal:request} \begin{split}
\max_{\texttt{Req}} \qquad & \frac{\Ex{ V K \big| \texttt{Req}(V, K)=1 }}{\frac{1}{q} - 1 + \Ex{K \big| \texttt{Req}(V, K)=1}}
\\
\textrm{such that} \qquad
& \Pr{ \texttt{Req}(V,K)=1 } = q
\\
& \frac{\Ex{K \big| \texttt{Req}(V, K)=1}}{\frac{1}{q} - 1 + \Ex{K \big| \texttt{Req}(V, K)=1}} \le \a \end{split} \end{equation}
Before we show how an agent can efficiently solve the optimization problem (\ref{eq:ideal:request}), we make some observations about its optimal solution. \begin{itemize}
\item One natural question is whether the optimal request policy of the agent is independent of $K$ (and in particular, if $\texttt{Req}(V, K) = \One{V \ge \bar v}$ for some $\bar v$); note that this is the case in the settings with single-round demands.
However, the following example shows this is not the case for reusable resources:
Consider an agent with fair share $\a$ and the following distribution on $(V, K)$
\begin{align*}
(V, K) =
\begin{cases}
(1, 1), &\textrm{ with probability } \a/2 \\
(\e, 2), &\textrm{ with probability } \a/2 \\
(\e^2, 1), &\textrm{ otherwise}
\end{cases}
\end{align*}
for some $\e$ much smaller than $\a$. It is easy to see that the optimal request policy should have $\texttt{Req}(1, 1) = 1$ with probability $1$. However, if $\texttt{Req}(V, K) = 0$ in the other cases, then the agent gets the item for only an $\a/2$ fraction of the rounds. In order to increase her ideal utility the agent can set $\texttt{Req}(\e^2, 1) = 1$ with some probability, but in the optimal solution it should always be $\texttt{Req}(\e, 2) = 0$. Intuitively, getting the item for $2$ rounds while gaining only $2\e$ utility on those rounds hinders the agent from getting expected utility $\a/2$ on the next round (recall that $\a$ is much larger than $\e$). Formally, if $\texttt{Req}(\e, 2) = 1$ with positive probability, the denominator in the objective function of \cref{eq:ideal:request} becomes much larger, while the numerator increases only slightly, overall decreasing the ideal utility.
\item The above example also shows that given fair share $\alpha$, it is possible that the optimal request policy by the agent results in resource-usage fraction less than $\a$ (we will need to distinguish this in our proofs later). In particular, if the agent's value is only $(1, 1)$ or $(\e, 2)$, then as we argue above, the optimal request policy sets $\texttt{Req}(\e, 2) = 0$ with probability $1$, resulting in the agent holding the item with probability $\a/2$. \end{itemize}
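The effect in the example above can also be checked numerically. The sketch below evaluates the objective and usage fraction of \eqref{eq:ideal:request} for randomized request policies over the three types; the particular values $\a = 0.5$ and $\e = 0.01$ are our own illustrative choices. Skipping $(\e, 2)$ and instead randomizing over $(\e^2, 1)$ is both feasible and strictly better.

```python
def policy_value(alpha, eps, pi):
    """Objective and usage fraction of the request-policy program for the
    three-type example: (1,1) w.p. alpha/2, (eps,2) w.p. alpha/2,
    (eps^2,1) otherwise. pi = per-type request probabilities."""
    types = [(alpha / 2, 1.0, 1), (alpha / 2, eps, 2), (1 - alpha, eps ** 2, 1)]
    q = sum(p * r for (p, v, k), r in zip(types, pi))             # Pr[Req = 1]
    ek = sum(p * r * k for (p, v, k), r in zip(types, pi)) / q    # E[K | Req=1]
    evk = sum(p * r * v * k for (p, v, k), r in zip(types, pi)) / q
    denom = 1 / q - 1 + ek
    return evk / denom, ek / denom  # (per-round utility, usage fraction)

alpha, eps = 0.5, 0.01
# Never request (eps, 2); request (eps^2, 1) half the time to fill the share.
u_good, use_good = policy_value(alpha, eps, (1.0, 0.0, 0.5))
# Instead request (eps, 2) half the time.
u_bad, use_bad = policy_value(alpha, eps, (1.0, 0.5, 0.0))
```

Both policies satisfy the usage constraint, but the first achieves per-round utility just above $\a/2 = 0.25$ while the second drops to roughly $0.224$, matching the intuition that $2$-round requests of value $\e$ crowd out high-value single-round demands.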
Finally, in the case where $\Theta$ is a \emph{finite} type space, we can convert the optimization problem \eqref{eq:ideal:request} into a linear program, as shown in the lemma that follows; hence the optimal request policy $\texttt{Req}(\theta)$ underlying the ideal utility can be computed efficiently. Subsequently, we will use this policy as a black box for defining the agent's robust bidding strategy in the pseudo-market.
In the linear program below, for each type $\theta$, we use a variable $f_\theta$ to denote the expected fraction of rounds in which the resource is available, the agent has type $\theta$, and she requests to reserve the resource (note that this is not the fraction of rounds the agent uses the resource while \textit{having} demand type $\theta$ -- that is $k_\theta f_\theta$). Using these variables, we can restrict the agent to use the resource in at most an $\a$ fraction of rounds via a linear inequality. In addition, we need to bound each $f_\theta$ by the fraction of rounds in which the item is available and the agent has type $\theta$.
\begin{lemma} \label{lem:ideal:LP}
The optimization problem \eqref{eq:ideal:request} can be solved as follows:
Suppose that the agent has each type $\theta=(v_\theta, k_\theta)\in\Theta$ with probability $p_\theta$, and let $x_\theta = p_\theta \Pr{\texttt{Req}(\theta) = 1 | \theta}$ denote the probability that the agent has type $\theta$ and requests the item given its availability. Then we have $x_\theta = \frac{f_\theta}{1 - \sum_{\theta'} (k_{\theta'} - 1) f_{\theta'}},$ where $\{f_\theta\}$ is the solution to the following linear program.
\begin{equation}
\label{eq:ideal:LP}
\begin{split}
\max_{\{f_\theta\}_{\theta\in\Theta}} \qquad
& \sum_\theta v_\theta k_\theta f_\theta
\\
\textrm{such that} \qquad
& \sum_\theta k_\theta f_\theta \le \a
\\
& 0 \le f_\theta \le p_\theta \left( 1 - \sum_{\theta'} (k_{\theta'} - 1) f_{\theta'} \right)\qquad\forall\,\theta\in\Theta
\end{split}
\end{equation} \end{lemma}
To understand the conversion between $x_\theta$ and $f_\theta$ and the upper bound used for $f_\theta$, note that when the agent gets the item for $k$ rounds starting in some round $t$, then for the following $k-1$ rounds the item is not available. This means that the fraction of rounds in which the item is available is $1 - \sum_{\theta'} (k_{\theta'} - 1) f_{\theta'}$. On any of those rounds, the probability of having type $\theta$ is $p_\theta$, which yields the upper bound on $f_\theta$ given above.
\begin{proof}[Proof of~\cref{lem:ideal:LP}]
Using the variables $\{x_\theta\}_{\theta\in\Theta}$ we rewrite \cref{eq:ideal:request}:
\begin{equation*}
\begin{split}
\max_{\{x_{\theta}\}_{\theta\in\Theta}} \qquad
& \frac{\sum_\theta v_\theta k_\theta x_\theta}{1 + \sum_\theta (k_\theta - 1)x_\theta}
\\
\textrm{such that} \qquad
& \frac{\sum_\theta k_\theta x_\theta}{1 + \sum_\theta (k_\theta - 1) x_\theta} \le \a
\\
& 0 \le x_\theta \le p_\theta\qquad\qquad\qquad\forall\,\theta\in\Theta
\end{split}
\end{equation*}
We turn the above into an LP by setting $f_\theta = \frac{x_\theta}{1 + \sum_{\theta'} (k_{\theta'} - 1) x_{\theta'}}$, which is equivalent to the linear system $\left(I - \vec f \left(\vec k - 1\right)^\top\right) \vec{x} = \vec f$. Now, as long as $1 - \vec f^\top \left(\vec k - 1\right) = 1 - \sum_{\theta'} (k_{\theta'} - 1) f_{\theta'} \ne 0$ (which holds due to the constraints imposed on $\vec f$ as shown below), we can use the Sherman-Morrison matrix inversion formula\footnote{\url{https://en.wikipedia.org/wiki/Sherman-Morrison_formula}} to get $\left(I - \vec f \left(\vec k - 1\right)^\top\right)^{-1} = I + \frac{\vec f \left(\vec k - 1\right)^\top}{1 - \vec f^\top \left(\vec k - 1\right)}$. Thus, we get the unique solution $x_\theta = \frac{f_\theta}{1 - \sum_{\theta'} (k_{\theta'} - 1) f_{\theta'}}$, and substituting this in the above program, we get the promised LP in~\cref{eq:ideal:LP}. \end{proof}
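As a sanity check on the change of variables in the proof, the following Python sketch verifies on arbitrary illustrative values that the maps $x_\theta \mapsto f_\theta$ and $f_\theta \mapsto x_\theta$ are mutual inverses (since if $s = 1 + \sum_{\theta'}(k_{\theta'}-1)x_{\theta'}$, then $1 - \sum_{\theta'}(k_{\theta'}-1)f_{\theta'} = \nicefrac{1}{s}$).

```python
def x_to_f(xs, ks):
    """Map request probabilities x_theta to round fractions f_theta."""
    s = 1 + sum((k - 1) * x for x, k in zip(xs, ks))
    return [x / s for x in xs]

def f_to_x(fs, ks):
    """Inverse map, as derived via Sherman-Morrison in the proof."""
    s = 1 - sum((k - 1) * f for f, k in zip(fs, ks))
    return [f / s for f in fs]

xs = [0.25, 0.1, 0.05]   # hypothetical request probabilities x_theta
ks = [1, 2, 3]           # corresponding durations k_theta
back = f_to_x(x_to_f(xs, ks), ks)   # round trip should recover xs
```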
\subsection{Ideal Utility and Social Welfare}
As we mentioned at the beginning of the section, the ideal utility provides a benchmark for how much utility each agent can hope to achieve, independent of other agents. This is important in our setting since the absence of money implies no way to make interpersonal comparisons between agents. However, in the special case when all agents are \emph{symmetric} (i.e., have the same type distributions), then even without knowing this distribution, we can directly reason about their relative values, and consequently, maximizing social welfare becomes a reasonable goal. In this setting, if all fair shares are set to be the same (and therefore, where every agent has the same ideal utility $v^{\star}$), it is easy to see (for example, by symmetry) that $n v^{\star} T$ is an upper bound for the expected maximum social welfare. This means that every per-agent robustness guarantee also implies a social welfare guarantee. For example, our guarantee in \cref{thm:guar:guarantee} that every agent can realize at least half her ideal utility in expectation implies a $\nicefrac 1 2$ approximation ratio with respect to the optimal social welfare. We emphasize, however, that in most cases $n v^{\star} T$ is a loose upper bound for the maximum social welfare. For example, if there are $n$ agents, with single-round demands, and where each agent has value $1$ with probability $\nicefrac{1}{n}$, and $0$ otherwise, then with equal shares we have $v^{\star} = \nicefrac{1}{n}$, giving a bound of $n v^{\star} T = T$. However, since on average a $\nicefrac{1}{e}$ fraction of rounds have $0$ value for all agents (as $(1-\nicefrac{1}{n})^n \approx \nicefrac{1}{e}$ for large $n$), the social welfare is at most $(1-\nicefrac{1}{e})T$.
\section{Allocating a Single Reusable Resource: Robustness Guarantees} \label{sec:guar}
Given the above setup, we are now ready to characterize the performance of the \hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace} (\cref{algo:algo}). In particular, our main result is the following \emph{per-agent robustness guarantee} that the mechanism enjoys: we show that under a reserve price $r$, \emph{every agent can get a constant fraction (depending on $r$) of her ideal utility, irrespective of how other agents behave}. With respect to this robustness guarantee, we show that the minimax-optimal reserve is $r = 2$, in which case the agent can ensure she gets at least half of her ideal utility.
Before proceeding, we need to introduce some notation. Since we are studying guarantees from the perspective of a single agent, we henceforth drop the $i$ subscript. Throughout this section, we define $v^{\star}$ to be the ideal utility of the agent and $\b$ to be the fraction of rounds in which the agent claims the item under the optimal request policy $\texttt{Req}$, when there is no competition. More specifically, from \cref{eq:ideal:request}, we have \begin{equation} \label{eq:vstar_beta} \begin{split}
v^{\star} &= \frac{\Ex{V K \big| \texttt{Req}(V, K) = 1}}{\frac{1}{q} - 1 + \Ex{K \big| \texttt{Req}(V, K) = 1}}
\\
\b &= \frac{\Ex{K \big| \texttt{Req}(V, K) = 1}}{\frac{1}{q} - 1 + \Ex{K \big| \texttt{Req}(V, K) = 1}} \le \a \end{split} \end{equation}
Finally, we assume there is some upper bound $k_{\max}$ on the duration of demands that agents sample (i.e., $K_{i}[t] \le k_{\max}$ for all $i, t$).
To prove our robustness bound, we consider the following simple bidding strategy for an agent: \begin{tcolorbox}[standard jigsaw,opacityback=0,label=def:policy]
\textbf{\hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace}:\\
-- Agent solves the LP in \cref{lem:ideal:LP} to compute, for each type $\theta$, the request probability $\Pr{\texttt{Req}(\theta) = 1 | \theta}$ that realizes the ideal utility for her fair share $\a$.\\
-- In each round $t$, the agent re-samples $\texttt{Req}(\theta[t])$.\\
-- If in round $t$ the item is available, the agent has enough budget ($B[t]\geq rK[t]$), and $\texttt{Req}(\theta[t])=1$, then she competes in the auction with (per-round) bid $b[t]=r$ and duration $d[t]=K[t]$. \end{tcolorbox}
In other words, the agent computes her optimal stationary request policy in the no-competition setting, and then, while her budget is sufficient, bids at the reserve price according to this request policy. Note that this policy can be computed efficiently using~\cref{lem:ideal:LP}. Now, to understand the performance of this strategy, we first establish a simple lemma that shows that the agent's utility in each round under the \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace is directly proportional to her payment in that round. For simplicity, we henceforth assume that if an agent who follows \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace wins the auction in round $t$ for $K[t]$ rounds, then she pays the total amount $P[t] = r K[t]$ and realizes her total utility $U[t] = V[t] K[t]$ instantaneously.
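The per-round decision described above can be sketched as follows; the function name, the illustrative request rule, and the argument layout are assumptions of this sketch, not the paper's notation, and the request rule would in practice come from the LP of \cref{lem:ideal:LP}.

```python
# Sketch of the Robust Bidding Policy's per-round decision.

def robust_bid(available, budget, r, v, k, req):
    """Return (bid, duration) if the agent competes this round, else None."""
    if available and budget >= r * k and req(v, k):
        return (r, k)        # bid exactly the reserve, for the full demand
    return None

req = lambda v, k: v >= 1.0  # illustrative stationary request rule
print(robust_bid(True, 10.0, 2.0, 1.0, 3, req))   # -> (2.0, 3)
print(robust_bid(True, 5.0, 2.0, 1.0, 3, req))    # -> None (budget < r*k)
```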
\begin{lemma}\label{lem:guar:BPB}
Consider an agent with fair share $\a$, and compute her ideal utility $v^{\star}$ and ideal utilization fraction $\b\leq\a$.
Let $\theta[t]=(V[t], K[t])$ denote her demand type in round $t$, and let $(U[t], P[t])$ be her total realized utility and total payment in round $t$ under \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace (and any fixed policy of other agents).
Then, for all $t\in[T]$, we have
\begin{align*}
\Ex{U[t]} = \frac{v^{\star}}{\beta r} \Ex{P[t]}
\end{align*} \end{lemma}
This lemma utilizes the simplicity of the \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace, and in particular, the facts that the agent's utility is positive in round $t$ only if $P[t] = r K[t]$, and that the event that the agent wins round $t$ is independent of $\theta[t]$ conditioned on $\texttt{Req}(\theta[t]) = 1$.
\begin{proof}
Let $W[t]$ be an indicator random variable that is $1$ if the agent wins in round $t$, else $W[t] = 0$.
We are going to use the fact that the agent's type $\theta[t] = (V[t], K[t])$ does not directly depend on $W[t]$, but rather on whether the agent chooses to bid, which is determined by the request policy $\texttt{Req}(V[t], K[t])$ (formally, $\theta[t]$ is conditionally independent of $W[t]$ given $\texttt{Req}(\theta[t])$).
Now we write
\begin{alignat*}{3}
\Line{
\Ex{U[t]}
}{=}{
\Ex{K[t] V[t] W[t]}
}{}
\\
\Line{}{=}
{
\Ex{K[t] V[t] \big| W[t]=1} \Pr{W[t]=1}
}{}
\\
\Line{}{=}
{
\Ex{K[t] V[t] \big| \texttt{Req}(\theta[t]) = 1} \Ex{W[t]}
}{}
\\
\Line{}{=}
{
\frac{v^{\star}}{\frac{1}{q} - 1 + \Ex{K[t] \big| \texttt{Req}(\theta[t])=1}}
\Ex{W[t]}
}{}
\end{alignat*}
where in the last equality we used \cref{eq:vstar_beta}. Similarly,
\begin{alignat*}{3}
\Line{
\Ex{P[t]}
}{=}{
\Ex{r K[t] W[t]}
}{}
\\
\Line{}{=}
{
r \Ex{K[t] \big| W[t]=1} \Pr{W[t]=1}
}{}
\\
\Line{}{=}
{
r \Ex{K[t] \big| \texttt{Req}(\theta[t])=1} \Ex{W[t]}
}{}
\\
\Line{}{=}
{
\frac{r \b}{\frac{1}{q} - 1 + \Ex{K[t] \big| \texttt{Req}(\theta[t])=1}}
\Ex{W[t]}
}{}
\end{alignat*}
where in the last equality we used \cref{eq:vstar_beta}.
Combining the two above equalities we get the desired bound. \end{proof}
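The identity in \cref{lem:guar:BPB} can be checked exactly on a discrete instance: conditional on $\texttt{Req}=1$, winning is an independent event, so $\Ex{U[t]}$ and $\Ex{P[t]}$ scale with the same win probability. A sketch under invented values (the distribution, reserve, and win probability are all illustrative):

```python
# Check E[U] = (v*/(beta*r)) * E[P] exactly for a toy discrete distribution,
# with an arbitrary win probability w (independent of the type given Req=1).

types = [(1.0, 2, 0.2), (0.1, 1, 0.8)]   # (V, K, prob); illustrative
req = lambda v, k: v >= 1.0
r, w = 2.0, 0.37                         # reserve and an arbitrary win prob.

q = sum(p for v, k, p in types if req(v, k))
e_vk = sum(v * k * p for v, k, p in types if req(v, k)) / q
e_k = sum(k * p for v, k, p in types if req(v, k)) / q
v_star = e_vk / (1 / q - 1 + e_k)
beta = e_k / (1 / q - 1 + e_k)

# Per-round expectations when the agent wins with probability w given Req=1:
EU = q * w * e_vk        # E[U[t]] = E[VK | Req=1] * Pr[win]
EP = q * w * r * e_k     # E[P[t]] = r * E[K | Req=1] * Pr[win]

assert abs(EU - (v_star / (beta * r)) * EP) < 1e-9
```

The win probability $w$ cancels on both sides, which is exactly why the lemma holds for any (fixed) behavior of the other agents.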
We now proceed to prove the main result of this section, that using \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace, an agent can guarantee a fraction of her ideal utility.
\begin{theorem}\label{thm:guar:guarantee}
Consider \hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace} with reserve price $r \ge 1$, and any agent with fair share $\a$, corresponding ideal utility $v^{\star}$, and ideal utilization fraction $\b$. Then, using \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace, irrespective of how other agents bid, the agent can guarantee a total expected utility
\begin{align*}
\Ex{\sum_{t\in[T]}U[t]}\geq v^{\star} T \left(\min\left\{ \frac{1}{r} , 1 - \frac{1}{r} \right\} - \frac{1}{\b} O\left( \sqrt{\frac{k_{\max}}{T}}\right)\right)
\end{align*} \end{theorem}
\begin{remark}
The first term in the competitive ratio in~\cref{thm:guar:guarantee} is maximized when $r = 2$, in which case it becomes $\nicefrac{1}{2}$. We note also that via more careful accounting, the first term can be improved to $\min\left\{ \frac{1}{r} , 1 - \frac{1-\a}{r} \right\}$, which gives the optimal result for $\a=1$; for ease of presentation, we defer this improvement to~\cref{sec:multi} (see~\cref{thm:mult:guarantee} and~\cref{sec:app:missing_proofs}). \end{remark}
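As a quick numerical sanity check of the remark (illustrative code, not part of the analysis), the leading term $\min\{\nicefrac1r, 1-\nicefrac1r\}$ is indeed maximized at $r=2$:

```python
# The leading competitive-ratio term from the theorem, as a function of
# the reserve r, maximized over a grid.

def ratio(r):
    return min(1 / r, 1 - 1 / r)

grid = [1 + i / 100 for i in range(1, 401)]   # r in (1, 5]
best_r = max(grid, key=ratio)

print(best_r, ratio(best_r))   # best reserve is r = 2, giving 1/2
```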
At a high level (and ignoring the sub-linear in $T$ terms), the proof of the theorem rests on a critical property of the reserve price: irrespective of the bidding policy, \emph{it limits the number of rounds that other agents can claim at prices greater than or equal to the reserve}. The rest of the agents have a budget of $(1-\a)T$, and hence at a reserve price of $r$, they can collectively win at most $(1-\a)T/r$ rounds at a bid of at least the reserve. More precisely, owing to the structure of the \hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace}, the other agents can collectively \emph{block} the agent on at most $(1-\a)T/r$ rounds. The remaining rounds are available to the agent, and roughly a $\b$ fraction of these have high value (i.e., are requested under $\texttt{Req}$). In particular, if we choose $r=2$, then the other agents can block at most $T/2$ rounds, and among the remaining rounds, the agent wants roughly $\b T/2$ rounds and also has sufficient budget to claim these rounds at the reserve price.
The problem with formalizing the above intuition is that with multi-round allocations, it is not enough to show that the agent has access to a $\b/2$ fraction of the rounds on average, as in order to receive utility for a demand $(V[t], K[t])$, the agent must get the item for the entire interval $[t,t+K[t]-1]$. As an extreme case, suppose $\a$ is small, the agent has type either $(1,1)$ or $(0,1)$, and moreover, on a given sample path, has $(V[t], K[t])=(0,1)$ on almost all odd rounds. Now the remaining agents could `adversarially' bid $(2r,1)$ on all even rounds, and by blocking these, ensure the agent rarely wins the item. Of course, such a sample path is unlikely with i.i.d. demands, but the challenge is still to rule out all such bidding behavior by the other agents.
The main idea in our proof is to track all the rounds in which the agent is not \emph{blocked} -- i.e., where the item is available and all other agents bid lower than $r$ -- and argue that by playing the \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace, the agent wins close to a $q$ fraction of these rounds (recall $q$ is the probability that $\texttt{Req}(V[t], K[t]) = 1$ in the solution of \cref{eq:ideal:request}). Such a property is true for any set of rounds \emph{fixed upfront}; however, the set of non-blocked rounds may not be independent, both because they may be blocked adversarially, and also due to the lengths of the reservations. Additionally, because the agent's final payment is the minimum between her budget and a function of the unblocked rounds she has a high value for, in order to lower bound her payment we need a high probability bound on the second quantity. We do that by showing that at any time $t$, the difference between the utilization of the agent up to $t$, and the number of unblocked rounds up to $t$ scaled by $\b$, forms a sub-martingale, which lets us use the Azuma-Hoeffding inequality\footnote{\url{https://en.wikipedia.org/wiki/Azuma's_inequality}} to get the high probability bound. We then combine this with the fact that the number of blocked rounds overall is at most $T/r$ to get the desired result.
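To build intuition for the sub-martingale argument (this is not a verification of the proof), here is a small simulation sketch; the request probability, the duration distribution, and the adversary's blocking rule below are all invented for illustration:

```python
import math, random

# Illustrative simulation of Z_tau from the argument above: each round
# contributes K*Req*(1-Blk) - beta*(1-Blk). The blocking rule (Alice's own
# reservations plus an adversary that blocks every third free round) is an
# invented example; the sub-martingale property only needs Blk[t] to be
# determined before the round-t type is drawn.

T, k_max = 10_000, 4
q = 0.25                                   # Pr[Req = 1], illustrative
kappa = (1 + k_max) / 2                    # E[K | Req = 1], K uniform on {1,...,4}
beta = q * kappa / (1 - q + q * kappa)     # utilization, as in eq. (vstar_beta)

def run_once(rng):
    Z, held = 0.0, 0                       # held: rounds left on Alice's reservation
    for t in range(T):
        blk = 1 if (held > 0 or t % 3 == 0) else 0
        req = 1 if rng.random() < q else 0
        K = rng.randint(1, k_max)
        win = req * (1 - blk)
        Z += K * win - beta * (1 - blk)
        if held > 0:
            held -= 1
        elif win:
            held = K - 1                   # she holds the item for K rounds in total
    return Z

eps = 5 * math.sqrt(T * k_max)             # an Azuma-scale deviation
zs = [run_once(random.Random(seed)) for seed in range(20)]
assert all(z >= -eps for z in zs)          # large downward deviations are rare
```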
\begin{proof}[Proof of \cref{thm:guar:guarantee}]
Fix our agent, Alice, with fair share $\a$, and corresponding ideal utility $v^{\star}$ and utilization $\b\leq \a$. We assume that Alice plays \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace: in any round $t$, if the item is available and $\texttt{Req}(V[t], K[t]) = 1$, then she participates in \hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace} with per-round bid $b[t]=r$ and duration $d[t]=K[t]$.
Let $\texttt{Blk}[t]\in\{0,1\}$ be an indicator variable that Alice is \emph{blocked} from competing for the item in round $t$. In particular, we have $\texttt{Blk}[t] = 1$ if
\begin{itemize}
\item The item was reserved for round $t$ by Alice in a previous round (at a rate of $r$ credits per round for the item).
\item The item was previously reserved for round $t$ by an agent other than Alice, who is paying at least $r$ per round.
\item The item is available, but some agent other than Alice bids at least $r$ (for one or multiple rounds), and wins the round $t$ auction.
\end{itemize}
In all other cases, we have $\texttt{Blk}[t] = 0$.
Given $\texttt{Blk}[t]$, and assuming Alice has a remaining budget of at least $rK[t]\leq rk_{\max}$, we have that her payment $P[t]$ in round $t$ is
\begin{align*}
P[t] = r K[t] \One{\textrm{Alice wins auction in round } t}
= r K[t] \texttt{Req}(\theta[t]) (1 - \texttt{Blk}[t])
\end{align*}
Since Alice might become budget limited at some point, her overall payment is at least
\begin{align}\label{eq:1:1}
\sum_t P[t]
\ge
\min\left\{
\a T, r \sum_{t=1}^T K[t] \texttt{Req}(\theta[t]) (1 - \texttt{Blk}[t])
\right\} - r k_{\max}
\end{align}
We now study the second term in the minimum, which represents Alice's \emph{unconstrained} spending (i.e., if she was never budget limited), and prove a high probability lower bound on this. Subsequently, using~\cref{lem:guar:BPB}, we can translate this into a utility lower bound. To this end, we define
\begin{align*}
Z_\tau
=
\sum_{t=1}^\tau K[t] \texttt{Req}(\theta[t]) (1 - \texttt{Blk}[t])
-
\b\sum_{t=1}^\tau (1 - \texttt{Blk}[t])
\end{align*}
To understand the rationale behind this, briefly assume that Alice makes requests that last only one round, i.e., $\texttt{Req}(\theta[t]) = 1$ implies $K[t] = 1$. Given any set $\mathcal{T}$ of rounds chosen \emph{independently} of $\texttt{Req}$, if Alice wins \emph{all} the rounds in $\mathcal{T}$ that she requests under policy $\texttt{Req}$, then she would expect to utilize the item for at least $\b |\mathcal{T}|$ rounds in $\mathcal{T}$.
Now observe that, over the first $\tau$ rounds, the set $\{t\leq \tau: \texttt{Blk}[t]=0\}$ consists precisely of the rounds on which Alice is not blocked from the item, and $Z_\tau$ is the difference between the (budget-unconstrained) number of rounds for which Alice utilizes the item within this set, and $\b$ times the size of the set. If the set were chosen independently of Alice's policy, then $Z_\tau$ would be a martingale, which we could then use to estimate the total expected payment $\Ex{\sum_t P[t]}$, and hence the total utility.
Unfortunately, however, with no additional assumptions on the bidding behavior of other agents, we cannot assert that the set of unblocked rounds is independent of the $\texttt{Req}$ policy (for example, the adversary's policy may depend on her remaining budget). Nevertheless, we show below that $Z_\tau$ is a sub-martingale with respect to the history of the previous rounds $\mathcal H_{\tau-1}$.
First, recall we define $q = \Pr{\texttt{Req}(\theta[t])=1}$ under Alice's optimal request policy (\cref{eq:ideal:request}). Moreover, we also have $q = \Pr{\texttt{Req}(\theta[t])=1\big|\mathcal H_{t-1}}$, since $\texttt{Req}$ is a \emph{stationary} policy that only depends on $\theta[t]$, the agent's type in round $t$, which is independent of $\mathcal H_{t-1}$. Thus, we have:
\begin{alignat*}{3}
\Line{
\Ex{Z_\tau - Z_{\tau - 1} \big| \mathcal H_{\tau-1}}
}{=}{
\Ex{K[\tau] \texttt{Req}(\theta[\tau]) (1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}} - \b \Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}}
}{}
\\
\Line{}{=}{
q\Ex{K[\tau] (1 - \texttt{Blk}[\tau]) \big| \texttt{Req}(\theta[\tau])=1, \mathcal H_{\tau-1}} - \b \Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}}
}{}
\end{alignat*}
Next, from~\cref{eq:vstar_beta}, we get that $\Ex{K[t] \big| \texttt{Req}(\theta[t]) = 1} = \frac{1-q}{q}\frac{\b}{1-\b}$. Moreover, note that $K[\tau]$ and $\texttt{Blk}[\tau]$ are independent given that Alice wants to request the item in round $\tau$ (i.e., $K[\tau]$ and $\texttt{Blk}[\tau]$ are conditionally independent given $\texttt{Req}(\theta[\tau])=1$). Substituting in the above equation, we get
\begin{alignat*}{3}
\Line{
\Ex{Z_\tau - Z_{\tau - 1} | \mathcal H_{\tau-1}}
}{=}{
\frac{\b(1-q)}{(1-\b)}
\Ex{(1 - \texttt{Blk}[\tau]) \big| \texttt{Req}(\theta[\tau])=1, \mathcal H_{\tau-1}} - \b \Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}}
}{}
\\
\Line{}{\ge}{
\b
\Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}} - \b \Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}} = 0
}{}
\end{alignat*}
where the inequality follows from the facts that $\texttt{Blk}[\tau]$ can only increase if we remove the conditioning on $\texttt{Req}(\theta[\tau]) = 1$, and that $q \le \b$ (by \cref{eq:vstar_beta} and the fact that $K \ge 1$).
Now, since $\Ex{Z_\tau - Z_{\tau - 1} | \mathcal H_{\tau-1}} \ge 0$, we have that $Z_\tau$ is a sub-martingale. Moreover, since $|Z_\tau - Z_{\tau-1}| \le k_{\max}$ with probability $1$, we can use Azuma's inequality to get for any $\e > 0$,
\begin{align*}
\Pr{Z_T - Z_0 \le -\e}
\le
e^{-\frac{\e^2}{2Tk_{\max}}}.
\end{align*}
In other words, for any $\e>0$, we have that with probability at least $1 - e^{-\frac{\e^2}{2Tk_{\max}}}$
\begin{align*}
r \sum_{t=1}^T K[t] \texttt{Req}(V[t], K[t]) (1 - \texttt{Blk}[t])
\ge
r\b\sum_{t=1}^T (1 - \texttt{Blk}[t]) - r \e
\end{align*}
On the other hand, note that we have $\sum_t \texttt{Blk}[t] \le T \frac{1}{r}$ with probability $1$ -- this follows from the fact that for every round in which Alice is blocked, some agent (including possibly Alice) pays at least $r$ credits. Thus we have with probability at least $1 - e^{-\frac{\e^2}{2Tk_{\max}}}$
\begin{align}\label{eq:1:2}
r \sum_{t=1}^T K[t] \texttt{Req}(V[t], K[t]) (1 - \texttt{Blk}[t])
\ge
r\b T \left(1 - \frac{1}{r}\right) - r \e
\end{align}
Combining \cref{eq:1:1,eq:1:2}, we get that, with probability at least $1 - e^{-\frac{\e^2}{2Tk_{\max}}}$, Alice's payment satisfies
\begin{align*}
\sum_t P[t]
\ge
\min\left\{
\a T, r\b T \left(1 - \frac{1}{r}\right) - r \e
\right\} - r k_{\max}
\ge
\b T r \min\left\{ \frac{1}{r} , 1 - \frac{1}{r} \right\} - r \e - r k_{\max}
\end{align*}
Setting $\e=\Theta(\sqrt{Tk_{\max}})$ and taking expectations, we get that her expected payment is at least
\begin{align*}
\sum_t \Ex{P[t]}
\ge
\b T r \min\left\{ \frac{1}{r} , 1 - \frac{1}{r} \right\} - r k_{\max} - r O\left( \sqrt{T k_{\max}} \right)
\end{align*}
Finally, using \cref{lem:guar:BPB}, we get that Alice's expected utility satisfies
\begin{align*}
\sum_t \Ex{U[t]}
\ge
v^{\star} T \min\left\{ \frac{1}{r} , 1 - \frac{1}{r} \right\} - \frac{v^{\star}}{\b}k_{\max} - \frac{v^{\star}}{\b} O\left( \sqrt{Tk_{\max}} \right)
\end{align*}
Finally, since $k_{\max}\leq T$, we have $k_{\max}=O(\sqrt{Tk_{\max}})$, which gives the promised guarantee. \end{proof}
In \cref{thm:guar:guarantee} we proved that an agent can guarantee approximately a $\min\{\nicefrac{1}{r}, 1-\nicefrac{1}{r}\}$ fraction of her ideal utility, using a very simple strategy that involves only bidding $r$ or $0$. It is reasonable to wonder whether the agent can guarantee a larger fraction by using a more complicated strategy, or whether the choice of $r = 2$ is the optimal one. In the next theorem, we prove that the answer is negative. More specifically, we prove that our result in \cref{thm:guar:guarantee} is asymptotically tight under our mechanism as we jointly scale $T$, $\a$, and $k_{\max}$ (in particular, for large $T$, and assuming $k_{\max} = \omega(1)$ and $\a = o(1)$ with respect to $T$).
\begin{theorem}\label{thm:guar:impossibility}
Consider the \hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace} mechanism, with maximum reservation duration $k_{\max} \ge 2$, and reserve price $r \ge 0$. Then there is a strategy for the other agents such that an agent with budget $\a T$ and ideal utility $v^{\star}$ is limited to
\begin{align*}
\Ex{\sum_{t\in[T]}U[t]}\leq v^{\star} T \left(
1 - \frac{(1-\a)}{\max\{1,r\}}
+ \frac{1}{k_{\max}}
\right)
+
v^{\star}(k_{\max} - 1)
\end{align*}
\end{theorem}
In order to get the bound, we consider an agent with a (small) fair share $\a$ who in each round has demand $\theta[t]=(1,1)$ with probability $\a$, and $(0,1)$ otherwise. In contrast, suppose every time the item is available, at least one other agent demands $k_{\max}$ rounds at price $\max\{1,r\}$. This means that the agent has three options to get utility. First, she can bid slightly more than $\max\{r,1\}$ only when she has positive value and get the item if it is available (which only happens with probability $1/k_{\max}$) until all other agents run out of money. Second, she can bid slightly more than $\max\{r,1\}$ when she has zero value, to stop the other agents from claiming and blocking the item. Third, she can wait for the other agents' budgets to deplete, which happens in the last $T(1 - \frac{1-\a}{\max\{1,r\}})$ rounds.
\begin{proof}
Fix our agent, her budget $\a T$, and her type to be $(V[t], K[t]) = (1, 1)$ with probability $\a$ and $(V[t], K[t]) = (0, 1)$ otherwise. We notice that $v^{\star} = \a$ and $\b = \a$.
We will show the impossibility result by assuming that all the other agents follow the same deterministic strategy: each tries to reserve $k_{\max}$ rounds by bidding $\max\{1,r\}$ per round. For the rest of the proof, we merge all the other agents into a single adversary with a total budget of $(1-\a)T$.
Let $A[t] \in \{0,1\}$ denote the availability of the item: if $A[t] = 1$ then in round $t$ the agent has the ability to bid and reserve the item; if $A[t] = 0$ then the adversary has reserved the item in a previous round and still has it. As long as $A[t] = 1$, the agent can control if she wins the item in round $t$ or not by bidding slightly above $\max\{1,r\}$. This allows us to assume w.l.o.g. that she only requests the item for $1$ round at a time since requesting the item for multiple rounds can be simulated by requesting the item for $1$ round multiple times. We notice that the agent's utility is
\begin{align*}
\sum_{t=1}^T U[t]
=
\sum_{t=1}^T V[t] \One{ \textrm{agent wins in } t }
=
\sum_{t=1}^T A[t] V[t] \One{ \textrm{agent bids in } t }
\le
\sum_{t=1}^T A[t] V[t].
\end{align*}
We now notice that since the random variables $A[t]$ and $V[t]$ are independent, $\Ex{A[t] V[t]} = \a\Ex{A[t]}$. This proves that the agent's expected utility
\begin{align}\label{eq:1:3}
\Ex{\sum_{t=1}^T U[t]}
\le
\a\Ex{\sum_{t=1}^T A[t]}.
\end{align}
We now upper bound the right-hand side of \eqref{eq:1:3}. Let $U$ be the number of auctions the adversary wins; each win makes the item unavailable for $k_{\max}-1$ subsequent rounds, so $\sum_{t=1}^T A[t] = T - (k_{\max}-1) U$. We now lower bound $k_{\max} U$, the number of rounds the adversary holds the item, by observing that the adversary must eventually run out of budget (since as long as the adversary has budget, either she or the agent pays at least $1$ for the item each round). This means that $k_{\max} U \ge \frac{(1-\a)T}{\max\{1,r\}} - k_{\max}$, i.e., the adversary gets the item until her budget is depleted, up to an additive error of $k_{\max}$. Combining this with our previous bound on the agent's utility in \eqref{eq:1:3}, we have that
\begin{align*}
\Ex{\sum_{t=1}^T U[t]}
\le
\a\left(T - \frac{k_{\max}-1}{k_{\max}} \frac{(1-\a)T}{\max\{1,r\}} + k_{\max} - 1\right)
\end{align*}
By rearranging the above, we complete the proof. \end{proof} \section{Impossibility Result for Single Resource Allocation} \label{sec:hardness}
In this section, we study upper bounds on the fraction of the ideal utility an agent can get under \emph{any} mechanism, including mechanisms without a pseudo-market structure. More specifically, we show that our result in \cref{thm:guar:guarantee} is tight: as the number of agents $n$ and the maximum demand duration $k_{\max}$ become large, no mechanism can guarantee every agent more than half her ideal utility.
\begin{theorem}\label{thm:imposs}
There exists a sequence of settings with $n$ symmetric agents, each with fair share $\nicefrac{1}{n}$, and consequently the same ideal utility $v^{\star}$, but where, as $n\to\infty$, no mechanism can guarantee every agent total expected utility more than
\begin{align*}
v^{\star} T \left( \frac{1}{2} + O\left( \frac{1}{k_{\max}} \right) \right)
\end{align*}
\end{theorem}
We study an instance where the $n$ agents have zero value with high probability and positive values have duration $k_{\max}$. In our example, if every agent gets allocated the item every time she has a positive value, then her total expected utility is exactly $v^{\star} T$. However, because the demands of the agents overlap, no mechanism can guarantee such an allocation for every agent and in fact can guarantee at most half of that in expectation.
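Before the formal proof, the key quantities of this construction can be checked numerically; the parameter values below are illustrative.

```python
# Numeric check of the impossibility instance: with n agents and the stated
# type distribution, the fraction of rounds the item is allocated under the
# welfare-maximizing allocation approaches 1/2 for large n and k_max.

n, k_max = 10**6, 100                       # illustrative sizes
p = 1 / (k_max * (n - 1) + 1)               # Pr[(V, K) = (1, k_max)]
p_free = 1 - (1 - p) ** n                   # Pr[some agent has positive value]
frac_allocated = k_max / (k_max - 1 + 1 / p_free)

print(frac_allocated)                       # close to 1/2 for these sizes
assert 0.49 < frac_allocated < 0.52         # ~ 1/2 + O(1/k_max)
```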
\begin{proof}
Consider an example with $n$ agents, each with budget $T/n$ and the same type distribution
\begin{align*}
(V, K) =
\begin{cases}
(1, k_{\max}) , & \textrm{ with probability } \frac{1}{k_{\max}(n-1) + 1} := p
\\
(0, 1), & \textrm{ otherwise }
\end{cases}
.
\end{align*}
Note that given the above probability $p$, if an agent sets $\texttt{Req}(1, k_{\max}) = 1$ with probability $1$, then her ideal utilization is exactly $\b = \a = \nicefrac{1}{n}$. This means that
the ideal utility of every agent is $v^{\star} = \nicefrac{1}{n}$. We now calculate the expected maximum social welfare in this setting. Let $p' = 1 - (1-p)^n$ be the probability that some agent has positive value in a given round when the item is free. The expected number of free rounds before the item is next allocated is $\nicefrac{1}{p'} - 1$, after which the item is allocated for $k_{\max}$ rounds. This means that the fraction of rounds for which the item is allocated is
\begin{align*}
\frac{k_{\max}}{k_{\max} - 1 + \frac{1}{p'}}
\end{align*}
We show that the above fraction approaches $\nicefrac{1}{2}$ as $n$ and $k_{\max}$ approach infinity. We notice that if $n$ is large enough, then for any value of $k_{\max}$,
\begin{align*}
p'
=
1 - (1-p)^n
=
1 - \left( 1 - \frac{1}{k_{\max}(n-1) + 1} \right)^n
\approx
1 - e^{-1/k_{\max}}
\le
\frac{1}{k_{\max}}
\end{align*}
which makes
\begin{align*}
\frac{k_{\max}}{k_{\max} - 1 + \frac{1}{p'}}
\lessapprox
\frac{k_{\max}}{k_{\max} - 1 + k_{\max}}
=
\frac{1}{2} + O\left( \frac{1}{k_{\max}} \right)
\end{align*}
This implies that the expected maximum social welfare is $T(\nicefrac{1}{2} + O(\nicefrac{1}{k_{\max}}))$. This means that under any mechanism, at least one agent is going to have expected utility at most $\nicefrac{1}{n}$ times that, which proves the theorem. \end{proof} \section{Generalized Reusable Public Resource Allocation} \label{sec:multi}
In this section, we extend our results to the case where there are $L \ge 1$ identical items shared among the agents. We still assume that each agent has nothing to gain from holding more than one item in a single round. Simply adapting our single-resource result to multiple resources would result (ignoring lower order terms) in guaranteeing a $$\min\left\{ \frac{1}{r} , 1 - \frac{1 + \a (L-1)}{r} \right\} $$ fraction of the agent's ideal utility (formally defined below). In this section, we show an improved guarantee of a $$\min\left\{ \frac{1}{r} , 1 - \frac{1 - \a}{r} \right\} $$ fraction, eliminating the deterioration of the guarantee as $L$ gets larger. In fact, the improved bound also slightly improves our result in Section \ref{sec:guar}; there we presented the simpler proof, as the improvement in the case $L=1$ is not significant.
\subsection{Mechanism and Ideal Utility}
Our mechanism, \hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace}, is similar to the single resource case, except that now if in round $t$ there are $m$ available items, the agents with the top $m$ valid bids get allocated an item each. Additionally, in this case, we normalize the budgets such that the total budget is $L T$ and each agent $i$ has budget $\a_i L T$.
Our definition of ideal utility for agent $i$, in this case, is almost identical to the one in \cref{def:ideal:single}, except that we allow each agent to request the item for at most an $\a_i L$ fraction of the rounds, assuming $\a_i L\le 1$ (otherwise the ideal utility allows the agent to request the item every round). Specifically, the analogue of the LP in \cref{eq:ideal:request} that defines agent $i$'s ideal utility when she has fair share $\a_i$ and $(V, K) \sim \mathcal F_i$ is \begin{equation}\label{eq:mult:request} \begin{split}
v^{\star} = \max{}_\texttt{Req} \qquad
& \frac{\Ex{ V K \big| \texttt{Req}(V, K) }}{\frac{1}{q} - 1 + \Ex{K \big| \texttt{Req}(V, K) }}
\\
\textrm{such that} \qquad
& \Pr{ \texttt{Req}(V, K) } = q
\\
& \frac{\Ex{K \big| \texttt{Req}(V, K) }}{\frac{1}{q} - 1 + \Ex{K \big| \texttt{Req}(V, K) }}
= \b \le \a_i L \end{split} \end{equation}
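For small discrete type spaces, the program in \cref{eq:mult:request} can be solved by direct search over per-type request probabilities; \cref{lem:ideal:LP} gives the principled LP formulation. Below is a brute-force sketch on an invented two-type instance (all numbers are illustrative):

```python
# Brute-force the ideal utility of eq. (mult:request) on a toy two-type
# instance: type (V, K) = (1, 2) w.p. 0.2, and (0.1, 1) w.p. 0.8, with the
# utilization cap beta <= alpha * L = 0.5. All numbers are illustrative.

cap = 0.5
best = 0.0
for i in range(21):          # request prob. y for the high type, step 0.05
    for j in range(21):      # request prob. x for the low type
        y, x = i / 20, j / 20
        q = 0.2 * y + 0.8 * x
        if q == 0:
            continue
        q_e_vk = 0.4 * y + 0.08 * x   # q * E[VK | Req = 1]
        q_e_k = 0.4 * y + 0.8 * x     # q * E[K  | Req = 1]
        v = q_e_vk / (1 - q + q_e_k)  # candidate ideal utility
        beta = q_e_k / (1 - q + q_e_k)
        if beta <= cap + 1e-9:        # utilization constraint
            best = max(best, v)

print(best)   # maximum value, attained at y = 1, x = 0.25
```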
\subsection{Guarantees of \texorpdfstring{\hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace}}{First-Price Pseudo-Auction with Multi-Round Reserves} for multiple resources}
For the rest of the section, we focus on guarantees that an agent can get when using our mechanism. Henceforth, we drop the $i$ subscript.
We are going to focus on the case when $\a L$ is small, because this is the more interesting setting; otherwise, very simple mechanisms such as round-robin already provide strong guarantees.
We proceed to prove that even in this more complicated setting, an agent who follows \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace, exactly as described in \cref{sec:guar}, can guarantee almost a $1/2$ fraction of her ideal utility in expectation. We start with a lemma that is identical to \cref{lem:guar:BPB}. We defer the proofs of this section to \cref{sec:app:missing_proofs}.
\begin{lemma}\label{lem:mult:BPB}
Consider an agent with budget $\a L T$, and compute her ideal utility $v^{\star}$ and optimal utilization fraction $\b\leq\a L$.
Let $\theta[t]=(V[t], K[t])$ denote her demand type in round $t$, and let $(U[t], P[t])$ be her total realized utility and total payment in round $t$ under \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace (and any fixed policy of other agents).
Then, for all $t\in[T]$, we have
\begin{align*}
\Ex{U[t]} = \frac{v^{\star}}{\beta r} \Ex{P[t]}
\end{align*} \end{lemma}
We now proceed to show the main result of the section, that any agent can guarantee a fraction of her ideal utility in expectation.
\begin{theorem}\label{thm:mult:guarantee}
Consider \hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace} with reserve price $r \ge 1$, and any agent with budget $\a L T$, corresponding ideal utility $v^{\star}$, and optimal utilization fraction $\b$. Then, using \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace, irrespective of how other agents bid, the agent can guarantee a total expected utility
\begin{align*}
\Ex{\sum_{t\in[T]}U[t]}
\geq
v^{\star} T \left(\min\left\{ \frac{1}{r} , 1 - \frac{1 - \a}{r} \right\} - \frac{1}{\b} O\left( \sqrt{\frac{k_{\max}}{T}}\right)\right)
\end{align*} \end{theorem}
\begin{remark}
The term in \cref{thm:mult:guarantee} is maximized when $r = 2 - \a$, in which case it becomes $\frac{1}{2 - \a}$. \end{remark}
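The optimizing reserve in this remark can also be checked numerically (an illustrative sketch): at $r = 2-\a$ the two terms of $\min\{\nicefrac1r, 1-\nicefrac{1-\a}{r}\}$ coincide.

```python
# Check that r = 2 - alpha equalizes (and maximizes) the leading term
# min{1/r, 1 - (1 - alpha)/r} of the theorem above.

def ratio(r, alpha):
    return min(1 / r, 1 - (1 - alpha) / r)

alpha = 0.1                                    # illustrative fair share
r_opt = 2 - alpha
grid = [1 + i / 1000 for i in range(1, 2001)]  # r in (1, 3]
best_r = max(grid, key=lambda r: ratio(r, alpha))

assert abs(ratio(r_opt, alpha) - 1 / (2 - alpha)) < 1e-9
assert abs(best_r - r_opt) < 1e-3
```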
We defer the proof of this theorem to the Appendix. We note that even though the result is quite similar to \cref{thm:guar:guarantee}, its proof requires a more careful analysis. More specifically, as mentioned above, if we followed the steps of the proof of \cref{thm:guar:guarantee}, we would get a $\min\{\nicefrac{1}{r}, 1 - \nicefrac{1 + \a(L-1)}{r}\}$ bound instead. In order to improve the bound, we need to carefully analyze the rounds in which the item is not available because the agent still holds it from a previous round. Previously, we upper bounded the number of those rounds by $T \frac{\a}{r}$, which in this case would become $T \frac{\a L}{r}$ and in turn lead to the worse bound.
\printbibliography
\appendix \section{Missing proofs from Section \ref{sec:multi}} \label{sec:app:missing_proofs}
In this section, we include the missing proofs from \cref{sec:multi}. The proof of \cref{lem:mult:BPB} is identical to the proof of \cref{lem:guar:BPB}; we include it here for completeness. \begin{proof}[Proof of \cref{lem:mult:BPB}]
Let $W[t]$ be an indicator random variable that is $1$ if the agent wins in round $t$, else $W[t] = 0$.
We are going to use the fact that the agent's type $\theta[t] = (V[t], K[t])$ does not directly depend on $W[t]$, but rather on whether the agent chooses to bid, which is determined by the request policy $\texttt{Req}(V[t], K[t])$ (formally, $\theta[t]$ is conditionally independent of $W[t]$ given $\texttt{Req}(\theta[t])$).
Now we write
\begin{alignat*}{3}
\Line{
\Ex{U[t]}
}{=}{
\Ex{K[t] V[t] W[t]}
}{}
\\
\Line{}{=}
{
\Ex{K[t] V[t] \big| W[t]=1} \Pr{W[t]=1}
}{}
\\
\Line{}{=}
{
\Ex{K[t] V[t] \big| \texttt{Req}(\theta[t])=1} \Ex{W[t]}
}{}
\\
\Line{}{=}
{
\frac{v^{\star}}{\frac{1}{q} - 1 + \Ex{K[t] \big| \texttt{Req}(V[t], K[t])=1}}
\Ex{W[t]}
}{}
\end{alignat*}
where in the last equality we used \cref{eq:mult:request}. Similarly,
\begin{alignat*}{3}
\Line{
\Ex{P[t]}
}{=}{
\Ex{r K[t] W[t]}
}{}
\\
\Line{}{=}
{
r \Ex{K[t] \big| W[t]=1} \Pr{W[t]=1}
}{}
\\
\Line{}{=}
{
r \Ex{K[t] \big| \texttt{Req}(\theta[t])=1} \Ex{W[t]}
}{}
\\
\Line{}{=}
{
\frac{r \b}{\frac{1}{q} - 1 + \Ex{K[t] \big| \texttt{Req}(V[t], K[t])=1}}
\Ex{W[t]}
}{}
\end{alignat*}
where in the last equality we used \cref{eq:mult:request}.
Combining the two above equalities we get the desired bound. \end{proof}
We now prove the main result of \cref{sec:multi}, \cref{thm:mult:guarantee}.
\begin{proof}[Proof of \cref{thm:mult:guarantee}]
The start of the proof is analogous to the proof of \cref{thm:guar:guarantee}. We follow the same outline until the use of the Azuma-Hoeffding inequality. To get the improved bound, we need to do more careful accounting of the rounds in which the agent cannot bid on a resource, either because she already holds one or because all resources are blocked by others.
Fix our agent, Alice, with budget $\a L T$, and corresponding ideal utility $v^{\star}$ and utilization $\b\leq \a L$. We assume that Alice plays \hyperref[def:policy]{\color{black}Robust Bidding Policy}\xspace: in any round $t$, if an item is available and $\texttt{Req}(V[t], K[t]) = 1$, then she participates in \hyperref[algo:algo]{\color{black}First-Price Pseudo-Auction with Multi-Round Reserves\xspace} with per-round bid $b[t]=r$ and desired duration $d[t]=K[t]$.
Let $W[t]\in\{0,1\}$ be an indicator variable for the event that Alice wins the auction in round $t$ while following our policy with an unlimited budget. Using this notation, her total payment (with a limited budget) is
\begin{align}\label{eq:mult:1}
\sum_t P[t]
\ge
\min\left\{
\a L T, r \sum_{t=1}^T K[t] W[t]
\right\} - r k_{\max}
\end{align}
Let $\texttt{Blk}[t]\in\{0,1\}$ be an indicator variable that Alice is \emph{blocked} from competing for the item in round $t$. In particular, we have $\texttt{Blk}[t] = 1$ if
\begin{itemize}
\item An item was reserved for round $t$ by Alice in a previous round (at a rate of $r$ credits per round for the item).
\item \textit{All} the items were previously reserved for round $t$ by agents other than Alice, who are paying at least $r$ per round.
\item Some items are available, but agents other than Alice bid at least $r$ (for one or multiple rounds) for them, and win all the auctions in round $t$.
\end{itemize}
In all other cases, we have $\texttt{Blk}[t] = 0$.
Let $\kappa = \Ex{K[t] \big| \texttt{Req}[t]} = \frac{(1-q)\b}{q(1-\b)}$ and recall that $q = \Pr{\texttt{Req}(\theta[t])=1}$. Now we define
\begin{align*}
Z_\tau
=
\sum_{t=1}^\tau K[t] W[t]
-
q \kappa\sum_{t=1}^\tau (1 - \texttt{Blk}[t])
\end{align*}
We show below that $Z_\tau$ is a sub-martingale with respect to the history of the previous rounds $\mathcal H_{\tau-1}$. Note that $q = \Pr{\texttt{Req}(\theta[t])=1|\mathcal H_{t-1}}$, since $\texttt{Req}$ is a \emph{stationary} policy that only depends on $\theta[t]$, which is independent of $\mathcal H_{t-1}$. Thus, we have:
\begin{alignat*}{3}
\Line{
\Ex{Z_\tau - Z_{\tau - 1} \big| \mathcal H_{\tau-1}}
}{=}{
\Ex{K[\tau] W[\tau] \big| \mathcal H_{\tau-1}} - q \kappa \Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}}
}{}
\\
\Line{}{=}{
\Ex{K[\tau] \texttt{Req}(\theta[\tau]) (1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}} - q \kappa \Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}}
}{}
\\
\Line{}{=}{
q\Ex{K[\tau] (1 - \texttt{Blk}[\tau]) \big| \texttt{Req}(\theta[\tau])=1, \mathcal H_{\tau-1}} - q \kappa \Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}}
}{}
\end{alignat*}
Note that $K[\tau]$ and $\texttt{Blk}[\tau]$ are independent given that Alice wants to request the item in round $\tau$ (i.e., $K[\tau]$ and $\texttt{Blk}[\tau]$ are conditionally independent given $\texttt{Req}(\theta[\tau])=1$). Substituting into the above equation, we get
\begin{alignat*}{3}
\Line{
\Ex{Z_\tau - Z_{\tau - 1} | \mathcal H_{\tau-1}}
}{=}{
q \kappa
\Ex{(1 - \texttt{Blk}[\tau]) \big| \texttt{Req}(\theta[\tau])=1, \mathcal H_{\tau-1}} - q \kappa \Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}}
}{}
\\
\Line{}{\ge}{
q \kappa \Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}} - q \kappa \Ex{(1 - \texttt{Blk}[\tau]) \big| \mathcal H_{\tau-1}} = 0
}{}
\end{alignat*}
where the inequality follows from the fact that $\texttt{Blk}[\tau]$ can only increase if we remove the conditioning on $\texttt{Req}(\theta[\tau]) = 1$.
Now, since $\Ex{Z_\tau - Z_{\tau - 1} | \mathcal H_{\tau-1}} \ge 0$, we have that $Z_\tau$ is a sub-martingale. Moreover, since with probability $1$ it holds that $|Z_\tau - Z_{\tau-1}| \le k_{\max}$, we can use the Azuma-Hoeffding inequality to get for any $\e_1 > 0$,
\begin{align*}
\Pr{Z_T - Z_0 \le - \e_1}
\le
e^{-\frac{\e_1^2}{2Tk_{\max}}}.
\end{align*}
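As a concrete illustration of the concentration step, the following minimal sketch (our own, not part of the proof) checks the Azuma-Hoeffding tail bound empirically for a martingale with $\pm 1$ increments; here we use the standard normalization $\exp(-\e_1^2/(2Tc^2))$ for increments bounded by $c$, which the proof applies with $c = k_{\max}$.

```python
import math, random

random.seed(0)
T, runs = 200, 20000
eps = 2 * math.sqrt(T)                   # deviation threshold
# martingale with increments in {-1, +1}, so c = 1 in Azuma-Hoeffding
bad = sum(1 for _ in range(runs)
          if sum(random.choice((-1, 1)) for _ in range(T)) <= -eps)
bound = math.exp(-eps ** 2 / (2 * T))    # Azuma tail bound, here e^{-2}
assert bad / runs <= bound               # empirical tail is below the bound
```

The empirical tail frequency sits well below the bound, as expected for a sum of independent signs.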
In other words, for any $\e_1 > 0$, we have that with probability at least $1 - e^{-\frac{\e_1^2}{2Tk_{\max}}}$
\begin{align*}
\sum_{t=1}^T K[t] W[t]
\ge
q \kappa\sum_{t=1}^T (1 - \texttt{Blk}[t]) - \e_1
\end{align*}
For the proof of \cref{thm:guar:guarantee} this inequality was strong enough to yield the desired bound. To get the stronger bound claimed here, we now `split' the indicator variable $\texttt{Blk}[t]$ into two disjoint indicators, $\texttt{Blk}[t] = \texttt{Blk}_1[t] + \texttt{Blk}_2[t]$: $\texttt{Blk}_1[t] = 1$ if another agent blocked Alice in round $t$, and $\texttt{Blk}_2[t] = 1$ if Alice holds the item in round $t$ because she won it in a previous round. We notice that because of the reserve price it holds that $\sum_{t=1}^T \texttt{Blk}_1[t] \le T \frac{1-\a}{r}$. Additionally, because whenever Alice wins in round $t$ she blocks herself for the next $K[t] - 1$ rounds, it holds that
\begin{align*}
\sum_{t=1}^T \texttt{Blk}_2[t]
\le
\sum_{t=1}^T (K[t] - 1)W[t]
\end{align*}
Combining these facts with the bound above, we get that for any $\e_1 > 0$, we have that with probability at least $1 - e^{-\frac{\e_1^2}{2Tk_{\max}}}$
\begin{align}\label{eq:mult:21}
\sum_{t=1}^T \big( (q \kappa + 1)K[t] - q \kappa \big) W[t]
\ge
q \kappa T \left( 1 - \frac{1-\a}{r} \right) - \e_1
\end{align}
The above bound is not directly useful, because we are interested in $\sum_t K[t] W[t]$. We relate the two quantities using another martingale argument. We define
\begin{align*}
M_\tau
=
\sum_{t=1}^\tau \big( (q \kappa + 1)K[t] - q \kappa \big) W[t]
-
\big(1 + (\kappa - 1) q \big) \sum_{t=1}^\tau K[t] W[t]
\end{align*}
We are going to show that $M_\tau$ is a martingale with respect to the history of the previous rounds, $\mathcal H_{\tau-1}$. We utilize the fact that $W[\tau]$ and $K[\tau]$ are conditionally independent given $\texttt{Req}[\tau]$. More specifically, we have that (we omit the conditioning on $\mathcal H_{\tau-1}$ for brevity)
\begin{alignat*}{4}
\Line{
\Ex{\big( (q \kappa + 1)K[\tau] - q \kappa \big) W[\tau]}
}{=}{
q \Ex{\big( (q \kappa + 1)K[\tau] - q \kappa \big) W[\tau] \big| \texttt{Req}(\theta[\tau])}
}{}
\\
\Line{}{=}{
q
\Ex{W[\tau] \big| \texttt{Req}(\theta[\tau])}
\Ex{\big( (q \kappa + 1)K[\tau] - q \kappa \big) \big| \texttt{Req}(\theta[\tau])}
}{}
\\
\Line{}{=}{
q
\Ex{W[\tau] \big| \texttt{Req}(\theta[\tau])} \kappa
\frac{\Ex{\big( (q \kappa + 1)K[\tau] - q \kappa \big) \big| \texttt{Req}(\theta[\tau])}}{\kappa}
}{}
\\
\Line{}{=}{
q
\Ex{W[\tau] K[\tau] \big| \texttt{Req}(\theta[\tau])}
\frac{\Ex{\big( (q \kappa + 1)K[\tau] - q \kappa \big) \big| \texttt{Req}(\theta[\tau])}}{\kappa}
}{}
\\
\Line{}{=}{
\Ex{W[\tau] K[\tau]}
\frac{ (q \kappa + 1)\kappa - q \kappa }{\kappa}
}{}
\end{alignat*}
The above proves that $\Ex{M_\tau - M_{\tau-1} | \mathcal H_{\tau-1}} = 0$, i.e., $M_\tau$ is a martingale. Because it holds almost surely that $|M_\tau - M_{\tau-1}| \le k_{\max}$, we can again use the Azuma-Hoeffding inequality: for any $\e_2 > 0$, with probability at least $1 - e^{-\frac{\e_2^2}{2Tk_{\max}}}$
\begin{align*}
(1 + (\kappa - 1) q) \sum_{t=1}^T K[t] W[t]
\ge
\sum_{t=1}^T \big( (q \kappa + 1)K[t] - q \kappa \big) W[t]
-
\e_2
\end{align*}
Combining the above bound with \cref{eq:mult:21} and using the union bound, we get that with probability at least $1 - e^{-\frac{\e_1^2}{2Tk_{\max}}} - e^{-\frac{\e_2^2}{2Tk_{\max}}}$
\begin{align*}
\sum_{t=1}^T K[t] W[t]
\ge
\frac{q \kappa}{1 + (\kappa - 1) q} T \left( 1 - \frac{1-\a}{r} \right)
- \frac{\e_1 + \e_2}{1 + (\kappa - 1) q}
\end{align*}
Substituting $\kappa = \frac{(1-q)\b}{q(1-\b)}$ and using that $1 + (\kappa - 1) q \ge 1$, the above becomes
\begin{align*}
\sum_{t=1}^T K[t] W[t]
\ge
\b T \left( 1 - \frac{1-\a}{r} \right)
- (\e_1 + \e_2)
\end{align*}
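The substitution step above relies on the algebraic identity $\frac{q\kappa}{1+(\kappa-1)q} = \b$ when $\kappa = \frac{(1-q)\b}{q(1-\b)}$ (since $1+(\kappa-1)q = \frac{1-q}{1-\b}$). A quick numerical sanity check of this identity (an illustration only, not part of the proof):

```python
# Check q*kappa / (1 + (kappa - 1)*q) == b for kappa = (1-q) b / (q (1-b)),
# over a grid of request probabilities q and utilizations b.
for q in (0.1, 0.5, 0.9):
    for b in (0.2, 0.6, 0.95):
        kappa = (1 - q) * b / (q * (1 - b))
        assert abs(q * kappa / (1 + (kappa - 1) * q) - b) < 1e-12
```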
Combining the above with \cref{eq:mult:1} and the fact that $\a L \ge \b$, we get that with probability at least $1 - e^{-\frac{\e_1^2}{2Tk_{\max}}} - e^{-\frac{\e_2^2}{2Tk_{\max}}}$, Alice's payment satisfies
\begin{align*}
\sum_t P[t]
\ge
\b T r \min\left\{ \frac{1}{r} , 1 - \frac{1-\a}{r} \right\} - r (\e_1 + \e_2) - r k_{\max}
\end{align*}
Setting $\e_1 = \e_2 = \Theta(\sqrt{Tk_{\max}})$ and taking expectations, we get that her expected payment is at least
\begin{align*}
\sum_t \Ex{P[t]}
\ge
\b T r \min\left\{ \frac{1}{r} , 1 - \frac{1 - \a}{r} \right\} - r k_{\max} - r O\left( \sqrt{T k_{\max}} \right)
\end{align*}
Then, using \cref{lem:mult:BPB}, we get that Alice's expected utility satisfies
\begin{align*}
\sum_t \Ex{U[t]}
\ge
v^{\star} T \min\left\{ \frac{1}{r} , 1 - \frac{1 - \a}{r} \right\} - \frac{v^{\star}}{\b}k_{\max} - \frac{v^{\star}}{\b} O\left( \sqrt{Tk_{\max}} \right)
\end{align*}
Finally, since $k_{\max}\leq T$, we have $k_{\max}=O(\sqrt{Tk_{\max}})$, which yields the promised guarantee. \end{proof}
\end{document}
\begin{document}
\begin{abstract} We study the distribution of the zeroes of the zeta functions of the family of Artin-Schreier covers of the projective line over $\mathbb{F}_q$ when $q$ is fixed and the genus goes to infinity. We consider both the global and the mesoscopic regimes, proving that when the genus goes to infinity, the number of zeroes with angles in a prescribed non-trivial subinterval of $[-\pi,\pi)$ has a standard Gaussian distribution (when properly normalized). \end{abstract}
\keywords{Artin-Schreier covers, finite fields, distribution of zeroes of $L$-functions of curves.}
\subjclass[2010]{Primary 11G20; Secondary 11M50, 14G15}
\title{Distribution of zeta zeroes of Artin--Schreier covers}
\section{Introduction} Recently there has been a great deal of interest in statistics for numbers of rational points on curves over finite fields, where the curve varies in a certain family but is always defined over a fixed finite field. This is in contrast to situations studied using Deligne's equidistribution theorem \cite{deligne, deligne2}, which requires the size of the finite field to go to infinity, and which tends to produce statistics related to random matrices in certain monodromy groups. When one fixes the base field, one instead tends to encounter discrete probabilities, typically sums of independent identically distributed random variables. The first result in this direction is the work of Kurlberg and Rudnick for hyperelliptic curves \cite{kr}; other cases considered include cyclic $p$-fold covers of the projective line \cite{bdfl1, bdfl2} (for a slightly different approach see \cite{x2}), plane curves \cite{bdfl3}, complete intersections in projective spaces \cite{bk}, and general trigonal curves \cite{wood}.
The number of rational points on a curve over a finite field is determined by the zeta function, and statistical properties of the number of points may be interpreted as properties of the coefficients of the zeta function. A related but somewhat deeper question is to consider statistical properties of \textit{zeroes} of the zeta function. In the case of hyperelliptic curves, these properties were studied by Faifman and Rudnick \cite{fr}. A related family was studied in \cite{x1}.
In this paper, we make similar considerations for the family of \emph{Artin-Schreier covers of $\mathbb P^1$}; this family is interesting because the characteristic of the base field plays a more central role in the definition than in any of the other families mentioned so far. The Artin-Schreier construction is special because it cannot be obtained by base-change from a family of schemes over $\mathbb Z.$ Since Artin-Schreier covers are cyclic covers of $\mathbb{P}^1$, one obtains a direct link between their zeta functions and certain exponential sums; while this is also the case for cyclic $p$-fold covers in characteristics other than $p$, the Artin-Schreier case admits a much more precise analysis. One example of how to exploit this additional precision is the work of Rojas-Leon and Wan \cite{rw} refining the Weil bound for Artin-Schreier curves.
To explain our results in more detail, we introduce some notation. Fix an odd prime $p$ and a finite field $\mathbb F_q$ of characteristic $p.$ Each polynomial $f \in \mathbb F_q[X]$ whose degree $d$ is not divisible by $p$ defines an Artin-Schreier cover $C_f$ of $\mathbb P^1$ with affine model \begin{equation}\label{ascurve} Y^p-Y=f(X).\end{equation} Since $f$ is a polynomial rather than a more general rational function, $C_f$ has $p$-rank $0.$ For more details about the structure of the moduli space of Artin-Schreier curves and its $p$-rank strata, see \cite{pz}. The Riemann-Hurwitz formula implies that the genus of the above curve is $g= (d-1)(p-1)/2.$ As usual, the Weil zeta function of $C_f$ has the form
\[Z_{C_f} (u) = \frac{P_{C_f}(u)}{(1-u)(1-qu)}.\] Here $P_{C_f}(u)$ is a polynomial of degree $2g=(d -1)(p-1)$ which factors as
\begin{equation}\label{product}P_{C_f}(u) = \prod_{\psi \neq 1} L(u, f, \psi),\end{equation} where the product is taken over the non-trivial additive characters $\psi$ of $\mathbb F_p$ and $L(u,f,\psi)$ are certain $L$-functions (see \eqref{Euler-product} for the formula). Computing the distribution of the zeroes of the zeta functions $Z_{C_f}(u)$ as $C_f$ runs over the $\mathbb{F}_q$-points of the moduli space $\mathcal{AS}_{g,0}$ of Artin-Schreier covers of genus $g$ and $p$-rank $0$ amounts to computing the distribution of the zeroes of $\prod_{j=1}^{p-1}L(u,f,\psi^j)$ for a fixed non-trivial additive character $\psi$ as $f$ runs over polynomials of degree $d$. In fact, going over each $\mathbb F_q$-point of the moduli space $\mathcal{AS}_{g,0}$ once is equivalent to letting $f$ vary over the set $\mathcal{F}'_d$ of polynomials of degree $d$ containing no non-constant terms of degree divisible by $p$, as such terms can always be eliminated in a unique way without changing the resulting Artin-Schreier cover.
Some statistics for the zeroes in the family of Artin-Schreier covers were considered in the recent work of Entin \cite{entin}, who employs the methods of Kurlberg and Rudnick \cite{kr} to study the variation of the number of points on such a family, then translates the results into information about zeroes. In the present work, we consider the global and mesoscopic regime, as was done by Faifman and Rudnick \cite{fr} for the family of hyperelliptic curves.
More precisely, we write \begin{equation}\label{factorization-of-L}
L(u,f,\psi) = \prod_{j=1}^{d-1} (1-\alpha_{j}(f, \psi)u), \end{equation} \noindent where $\alpha_j(f, \psi) = \sqrt{q} e^{2 \pi i \theta_j(f, \psi)}$ and $\theta_j(f, \psi) \in [-1/2, 1/2)$. We study the statistics of the set of angles $\{\theta_j(f, \psi)\}$ as $f$ varies. For an interval $\mathcal{I} \subset[-1/2,1/2),$ let \[N_\mathcal{I}(f, \psi) := \#\{1\leq j \leq d-1:\,\theta_j(f, \psi) \in \mathcal{I}\},\] \[ N_{\mathcal{I}}(f,\psi, \bar\psi):=N_\mathcal{I}(f, \psi)+ N_\mathcal{I}(f,\bar \psi), \] and \[ N_\mathcal{I}(C_f):=\sum_{j=1}^{p-1}N_\mathcal{I}(f,\psi^j). \] We show that the number of zeroes with angle in a prescribed non-trivial subinterval $\mathcal I$
is asymptotic to $2g|\mathcal I|$ (Theorem \ref{boundN_I}), has variance asymptotic to $\frac{2(p-1)}{\pi^2} \log(g|\mathcal I|)$ and properly normalized has a Gaussian distribution.
\begin{thm}\label{mainthm} Fix a finite field $\mathbb F_q$ of characteristic $p$. Let $\mathcal{F}_d'$ be the family of polynomials defined in \eqref{ourfamily}. Then for any real numbers $a<b$ and $0<|\mathcal{I}|<1$ either fixed or
$|\mathcal{I}|\rightarrow 0$ while $d |\mathcal{I}|\rightarrow \infty$,
\[\lim_{d \rightarrow \infty} \mathrm{Prob}_{\mathcal{F}_d'}\left(a < \frac{N_\mathcal{I}(C_f)-(d-1)(p-1)|\mathcal{I}|}{\sqrt{\frac{2(p-1)}{\pi^2}\log(d|\mathcal{I}|)}} < b\right)=\frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2} dx.\] \end{thm}
As noted earlier, this result can also be stated in terms of the $\mathbb{F}_q$-points of $\mathcal{AS}_{g,0}$.
\begin{cor}\label{cor} Fix a finite field $\mathbb F_q$ of characteristic $p$. Then for any real numbers $a<b$ and $0<|\mathcal{I}|<1$ either fixed or
$|\mathcal{I}|\rightarrow 0$ while $g |\mathcal{I}|\rightarrow \infty$,
\[\lim_{g \rightarrow \infty} \mathrm{Prob}_{\mathcal{AS}_{g,0}(\mathbb{F}_q)}\left(a < \frac{N_\mathcal{I}(C_f)-2g|\mathcal{I}|}{\sqrt{\frac{2(p-1)}{\pi^2}\log\left(g|\mathcal{I}|\right)}} < b\right)=\frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2} dx.\] \end{cor}
Theorem \ref{mainthm} is obtained by computing the normalized moments of certain approximations of $N_\mathcal{I}(C_f)-(p-1)(d-1)|\mathcal{I}|$ given by Beurling-Selberg polynomials to verify that they fit the Gaussian moments. Our results are compatible with the following result for the distribution of zeroes of the $L$-functions $L(u,f, \psi)$ and $L(u, f, \bar \psi).$ \begin{prop}\label{prop}
Fix a finite field $\mathbb F_q$ of characteristic $p$. Then for any real numbers $a<b$ and $0<|\mathcal{I}|<1$ either fixed or
$|\mathcal{I}|\rightarrow 0$ while $d |\mathcal{I}|\rightarrow \infty$,
\[\lim_{d \rightarrow \infty} \mathrm{Prob}_{\mathcal{F}_d'}\left(a < \frac{N_{\mathcal{I}}(f,\psi, \bar\psi)-2(d-1)|\mathcal{I}|}{\sqrt{\frac{4}{\pi^2}\log(d|\mathcal{I}|)}} < b\right)=\frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2} dx.\] \end{prop}
\begin{rem} Analogous results hold for $N_{\mathcal I}(f,\psi)$ as long as the interval $\mathcal I$ is symmetric. \end{rem}
Notice that Proposition \ref{prop} is compatible with the philosophy of Katz and Sarnak, which predicts that when $q \rightarrow \infty$, the distribution of $N_{\mathcal{I}}(C_f)$ is the same as the distribution of $\hat{N}_{\mathcal{I}}(U)$, the number of eigenvalues of a $2g \times 2g$ matrix $U$ in the monodromy group of $C_f$ chosen uniformly at random with respect to the Haar measure. The monodromy groups of Artin-Schreier covers are computed by Katz in \cite{Katz1, Katz2}. In the large matrix limit, which corresponds to the limit as $d \rightarrow \infty$ for the family of Artin-Schreier covers because $g = (p-1)(d-1)/2$, the statistics on $\hat{N}_{\mathcal{I}}(U)$ have been found to have Gaussian fluctuations in various ensembles of random matrices.
\subsection{Outline of the article} This article is set up as follows. We begin by reviewing basic Artin-Schreier theory in Section \ref{a-s}. In Section \ref{explicit} we prove two explicit formulas for the zeroes of $L(u, f, \psi)$ which we will need later to compute the moments. In Section \ref{distrzeros} we prove a result about the number of zeroes of the zeta function for a fixed Artin-Schreier cover of $\mathbb P^1$. In Section \ref{trigapprox} we recall some facts on Beurling-Selberg polynomials and use them to prove some technical statements about their coefficients. A certain sum of these trigonometric polynomials approximates the characteristic function of the interval $\mathcal I$. We use the explicit formula to reduce the problem of studying this sum of Beurling-Selberg polynomials to a problem about sums of characters of traces of a polynomial $f$ evaluated at elements in extensions of $\mathbb F_q$. In Sections \ref{1mom}, \ref{2mom} and \ref{3mom} we analyze the first, second and third
moments of this sum. These moments give the expectation and the variance of the distribution. In Section \ref{genmom} we compute the general moments of our approximating function and conclude that it has a standard Gaussian limiting distribution as the degree $d$ of $f$ goes to infinity, for $\mathcal I$ either fixed or in the mesoscopic regime. Finally, in Section \ref{proof} we conclude the proof of Theorem \ref{mainthm} by proving that, after normalization, $N_{\mathcal I}(C_f)-(d-1)(p-1)|\mathcal I|$ converges in mean square, and hence in distribution, to our approximating function.
\section{Basic Artin-Schreier theory}\label{a-s}
We now recall some more facts about Artin-Schreier covers. For each integer $n\geq 1$, denote by $\tr_n: \mathbb F_{q^n} \to \mathbb F_p$ the absolute trace map (not the trace to $\mathbb F_q$). For each polynomial $g \in \mathbb F_q[X]$ and non-trivial additive character $\psi$ of $\mathbb F_p$, set \[ S_n(g, \psi) = \sum_{x\in \mathbb F_{q^n}} \psi(\tr_n(g(x))). \]
The $L$-functions that appear in \eqref{product} are given by \begin{equation}\label{Euler-product} L(u,f,\psi) = \exp\left(\sum_{n=1}^{\infty} S_n(f, \psi) \frac{u^n}{n}\right) = \prod_P \left(1-\psi_f(P)u^{\deg P}\right)^{-1}, \end{equation} where the product is taken over monic irreducible polynomials in $\mathbb F_q[X].$ In fact, throughout this paper $P$ will denote such a polynomial and, if $n=\deg P$ we have \[\psi_f(P) = \sum_{\alpha \in \mathbb F_{q^n} \atop P(\alpha)=0} \psi(f(\alpha))= \psi(\tr_n(f(\alpha))) \textrm{ for any root $\alpha$ of $P$.}\] To see that the exponential is equal to the product over primes in \eqref{Euler-product}, one has to write the exponential as an Euler product over the closed points of $\mathbb A^1.$ Namely, if we denote by $\mathcal S_n$ the set of closed points of $\mathbb A^1$ of degree $n,$ we can write \begin{eqnarray*} L(u,f,\psi) &= & \exp\left(\sum_{n=1}^{\infty} S_n(f, \psi) \frac{u^n}{n}\right)\\ &=& \exp\left(\sum_{n=1}^\infty \sum_{x \in \mathcal S_n} \sum_{k=1}^\infty \psi(\tr_{kn}(f(x))) \frac{u^{kn}}{k} \right). \end{eqnarray*}
The denominator of the fraction is $k$, not $kn,$ because each closed point $x \in \mathcal S_n$ corresponds to $n$ conjugate points with coordinates in $\mathbb F_{q^n}.$ Thus,
\begin{eqnarray*}L(u,f,\psi) &=& \prod_{n=1}^\infty \prod_{x \in \mathcal S_n} \exp\left(\sum_{k=1}^\infty \frac{\big(\psi(\tr_{n}(f(x)))u^n\big)^k}{k} \right)\\ &=& \prod_{n=1}^\infty \prod_{x \in \mathcal S_n} (1-\psi(\tr_{n}(f(x)))u^n)^{-1}\\ &= &\prod_{x \textrm{ closed point of } \mathbb A^1} (1-\psi(\tr_{\deg x}(f(x)))u^{\deg x})^{-1}, \end{eqnarray*} \noindent which is exactly the product over primes that appears in \eqref{Euler-product}.
Note that for the trivial character $\psi=1$, the same formula gives \[L(u,f,1)= Z_{\mathbb A^1}(u)= \frac{1}{1-qu}.\] The factor at infinity is then given by \[\psi_f(\infty) = \begin{cases} 1 & \psi = 1, \\ 0 & \psi \neq 1. \end{cases}\] Therefore we have \[Z_{C_f}(u) = \prod_{\psi} L^*(u,f,\psi),\] where $L^*(u,f,\psi)$ are the completed $L$-functions, \[L^*(u,f,\psi) = \prod_{v} \left(1-\psi_f(P_v) u^{\deg P_v}\right)^{-1}.\] Here the product is taken over all places $v$ of $\mathbb F_q(X).$
From now on we will fix a non-trivial additive character $\psi$ of $\mathbb F_p$ given by a certain choice $\zeta$ of a primitive $p$th root of unity in $\mathbb C.$ Then all the other non-trivial characters of $\mathbb F_p$ are of the form $\sigma \circ \psi$, where $\sigma$ is an automorphism of the cyclotomic field $\mathbb Q(\zeta).$ The reciprocals of the zeroes of $L(u,f, \sigma\circ \psi)$ are exactly the Galois conjugates $\sigma(\alpha_j(f,\psi)),$ $1\leq j \leq d-1,$ of the reciprocals of the zeroes of $L(u, f, \psi).$ In order to compute the distribution of the zeroes of the Weil zeta functions $Z_{C_f}$ as $C_f$ runs over $\mathcal{AS}_{g,0}(\mathbb F_q)$, we are going to compute the distribution of the angles $\theta_j(f, \psi), \theta_j(f, \bar \psi), 1 \leq j \leq d-1,$ for our specific choice of the additive character $\psi,$ as $f$ runs through $\mathcal F_d',$ where $g=(d-1)(p-1)/2.$ Since the zeroes of $L(u, f, \psi)$ and $L(u, f, \bar{\psi})$ are conjugate, it suffices to work with symmetric intervals. The distribution of the zeroes of the whole zeta function is then obtained by combining the $(p-1)/2$ distributions for the various choices of $\psi$.
As discussed in the introduction, we will consider $\mathbb F_q$-points of the moduli space $\mathcal{AS}_{g,0}$ of Artin-Schreier covers of $p$-rank $0.$ A cover consists of an Artin-Schreier curve for which we fix an automorphism of order $p$ and an isomorphism between the quotient and $\mathbb P^1.$ We also choose the ramification divisor to be $D=(\infty).$ Thus the one branch point of our $p$-rank $0$ covers is at infinity.
Concretely, we consider, up to $\mathbb F_q$-isomorphism, pairs consisting of a curve with affine model $C_f: Y^p - Y = f(X)$, where $f(X)$ is a polynomial of degree $d = 2g/(p-1)+1$ with $d$ not divisible by $p$, together with the automorphism $Y \mapsto Y+1.$
Using the $\mathbb F_q$-isomorphism $(X,Y) \mapsto (X, Y+aX^k)$, we get that $C_f$ is isomorphic to $C_g$ where $g(X) = f(X) + aX^k - a^p X^{kp}$. By using this isomorphism, we are reduced to considering the Artin-Schreier curves with model $C_f: Y^p-Y = f(X)$ where $f(X)$ is an element of the family $\mathcal F_d'$ defined in the introduction as \begin{eqnarray} \label{ourfamily} \mathcal F_d'=\left\{ a_dX^d + a_{d-1}X^{d-1} + \dots + a_0\in \mathbb F_q[X] : a_d \in \mathbb F_q^*, a_{pk}=0, 1 \leq k \leq \left\lfloor\frac{d}{p}\right\rfloor \right\}. \end{eqnarray} Except for the isomorphisms described above, no two such affine models are isomorphic. Therefore considering all affine
models $Y^p - Y = f(X)$ with $f(X) \in \mathcal F_d'$ is equivalent to considering all the $\mathbb F_q$- points of the moduli space $\mathcal{AS}_{g,0}.$ For more details on this one-to-one correspondence between our family and $\mathcal{AS}_{g,0} (\mathbb F_q),$ see \cite[Proposition 3.6]{pz}.
In \cite{entin}, the author considers a slightly different family by also allowing twists, i.e., isomorphisms over $\mathbb F_{q^p}$. This amounts to the models $C_f: Y^p-Y = f(X),$ with $f(X) \in \mathcal F_d''$, where \[ \mathcal F_d''=\left\{ a_dX^d + a_{d-1}X^{d-1} + \dots + a_0\in \mathbb F_q[X] : a_d \in \mathbb F_q^*, a_{pk}=0, 0 \leq k \leq\left\lfloor\frac{d}{p}\right\rfloor \right\}. \]
Finally, we will denote by \begin{eqnarray*} \mathcal F_d=\left\{ a_dX^d + a_{d-1}X^{d-1} + \dots + a_0\in \mathbb F_q[X] : a_d \in \mathbb F_q^* \right\}, \end{eqnarray*} \noindent the set of all polynomials of degree $d$ in $\mathbb F_q[X].$ We will also need the map $\mu: \mathcal{F}_d\rightarrow \mathcal{F}_d'$ defined by \begin{equation} \label{map-mu} \mu\left(\sum_{i=0}^d a_i X^i\right)= a_0+\sum_{{i=1}\atop{i \neq kp, k \geq 1}}^d \left( \sum_{j=0}^{\left\lfloor \log_p (d/i)\right\rfloor} a_{ip^j}^{p^{-j}}\right) X^i. \end{equation} This map is $q^{\left\lfloor\frac{d}{p}\right\rfloor}$-to-one and preserves the trace of $f(\alpha)$, which will allow us to work with $\mathcal F_d$ instead of $\mathcal F'_d$ when taking averages.
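As a small illustration (our own sketch, not code from the paper), the map $\mu$ of \eqref{map-mu} is easy to implement in the simplest case $q = p$ prime, where the inverse Frobenius $a \mapsto a^{p^{-j}}$ is the identity on $\mathbb F_p$: the coefficient of $X^{ip^j}$ is simply folded onto the exponent $i$, and $f$ and $\mu(f)$ take the same values on $\mathbb F_p$ because $x^{ip^j} = x^i$ there.

```python
def mu(coeffs, p):
    """coeffs[i] is the coefficient of X^i in f; returns the coefficients of mu(f)."""
    d = len(coeffs) - 1
    out = [0] * (d + 1)
    out[0] = coeffs[0] % p
    for i in range(1, d + 1):
        if i % p == 0:
            continue                       # exponents divisible by p are dropped
        j = 0
        while i * p ** j <= d:
            out[i] = (out[i] + coeffs[i * p ** j]) % p   # a^{p^{-j}} = a on F_p
            j += 1
    return out

def evaluate(coeffs, x, p):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

p = 3
f = [2, 1, 0, 2, 1]                        # f = X^4 + 2 X^3 + X + 2, degree 4
fm = mu(f, p)
assert fm == [2, 0, 0, 0, 1]               # mu(f) = X^4 + 2 lies in F'_4
# mu preserves values on F_p, hence the traces tr_1(f(alpha)):
assert all(evaluate(f, x, p) == evaluate(fm, x, p) for x in range(p))
```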
\subsection{Remark on the number of points}
For $d$ large enough, the elements of $\mathcal F_d'$ have the same chance as any random polynomial of degree $d$ in $\mathbb F_q[X]$ to take a given value in some extension of $\mathbb F_q.$ Thus, if $p \nmid n,$ as soon as $d - \left\lfloor d/p \right\rfloor >q^n,$ the distribution of $\{\#C_f(\mathbb F_{q^n}): f \in \mathcal F_d'\}$ is given by a sum of i.i.d. random variables, one variable for each closed point of $\mathbb P^1$ of degree $e\mid n.$ As long as we stay away from the point at infinity where $f(X)$ has a pole, the fiber above each closed point $x$ of $\mathbb P^1$ contains $pe$ rational points on the Artin-Schreier cover $C_f$ if $x$ happens to be in the kernel of the absolute trace map $\tr_{n}:\mathbb F_{q^n} \to \mathbb F_p,$ and no points otherwise. Hence each random variable in the sum takes the value $pe$ with probability $1/p$ and $0$ with probability $1-1/p.$ The average number of points is then $1+q^n,$ the constant $1$ coming from the point at infinity where the polynomial $f(X)$ has a pole and the fiber above it contains just $1$ point.
If $p\mid n,$ the average is higher because there are certain points of $\mathbb P^1$ of degree $e$ for which the fiber is forced to have $pe$ points (i.e. the points of degree $ e \mid \frac{n}{p}$). One adjusts the computation accordingly and obtains that the average number in $C_f(\mathbb F_{q^n})$ is now $1+q^n + (p-1)q^{n/p}.$ This is the essential reason behind Entin's result on the matter \cite[Theorem 4]{entin}, except that his count does not take into account the point at infinity.
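A minimal enumeration (our own illustration, for $q = p$ and $n = 1$) confirms the average count $1 + q^n$: over the prime field every $y$ satisfies $y^p - y = 0$, so the fiber above $x \in \mathbb F_p$ has $p$ points exactly when $f(x) = 0$ in $\mathbb F_p$ (the trace condition), and is empty otherwise. For $p = 3$ and $d = 2$ the family $\mathcal F_2'$ carries no vanishing-coefficient constraint, so it is just the set of degree-$2$ polynomials.

```python
from itertools import product

p = 3                                     # base field F_3, n = 1, degree d = 2
total = 0
family = list(product(range(1, p), range(p), range(p)))   # (a2, a1, a0), a2 != 0
for a2, a1, a0 in family:
    # the fiber above x has p points iff f(x) = 0 in F_p, since y^p - y = 0 for all y
    affine = sum(p for x in range(p) if (a2 * x * x + a1 * x + a0) % p == 0)
    total += affine + 1                   # plus the single point above infinity
assert len(family) == 18
assert total / len(family) == 1 + p       # matches the predicted average 1 + q^n
```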
\section{Explicit Formulas}\label{explicit}
Let $K$ be a positive integer, $e(\theta) = e^{2\pi i \theta}$ and let $h(\theta) = \sum_{{|k|}\leq K} a_k e(k\theta)$ be a trigonometric polynomial. Then the coefficients $a_k$ are given by the Fourier transform $$a_k = \widehat{h}(k) = \int_{-1/2}^{1/2} h(\theta) e (- k \theta) d\theta.$$
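The coefficient-recovery formula can be checked numerically; the following sketch (an illustration only, with an arbitrarily chosen $h$) approximates the Fourier integral by a Riemann sum on a uniform grid, which is exact for trigonometric polynomials once the grid has more than $2K$ points.

```python
import cmath, math

def e(x):
    """e(x) = e^{2 pi i x}, the notation used in this section."""
    return cmath.exp(2j * math.pi * x)

a = {-2: 0.5, -1: 1j, 0: 2.0, 1: -1j, 2: 0.5}   # coefficients of a degree K = 2 example
h = lambda t: sum(ak * e(k * t) for k, ak in a.items())

N = 4096                                         # uniform grid on [-1/2, 1/2)
grid = [-0.5 + n / N for n in range(N)]
for k, ak in a.items():
    approx = sum(h(t) * e(-k * t) for t in grid) / N   # Riemann sum for hat{h}(k)
    assert abs(approx - ak) < 1e-9
```

The discrete orthogonality of the exponentials $e(k\theta)$ on the grid is what makes the sum exact up to floating-point error.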
We prove in this section two explicit formulas for $L(u,f,\psi)$, written as an exponential of a sum or as a product over primes as in (\ref{Euler-product}). The first explicit formula (Lemma \ref{Explicit-Formula}) will be used to compute the moments over the family ${\mathcal{F}}_d'$, and the second explicit formula (Lemma \ref{Explicit-formula-relevant}) will be used to prove a result about the number of zeroes for a fixed $C_f$ (see Section \ref{distrzeros}).
\begin{lem}\label{Explicit-Formula}
Let $h(\theta) = \sum_{{|k|}\leq K}\widehat{h}(k)e(k\theta)$ be a trigonometric polynomial. Let $\theta_j(f, \psi)$ be the eigenangles of the $L$-function $L(u,f,\psi)$. Then we have \begin{equation} \sum_{j=1}^{d-1}h(\theta_j(f, \psi)) = (d-1)\widehat{h}(0) - \sum_{k=1}^{K}\frac{\widehat{h}(k)S_k(f,\psi) + \widehat{h}(-k)S_k(f,\overline{\psi})}{q^{k/2}}. \end{equation} \end{lem} \begin{proof} Recall from above that $$L(u,f,\psi) = \exp\left(\sum_{n=1}^{\infty}S_n(f,\psi)\frac{u^n}{n}\right) = \prod_{j=1}^{d-1}(1- \alpha_j(f,\psi)u).$$ Taking logarithmic derivatives, we have $$\frac{d}{du}\sum_{j=1}^{d-1}\log(1- \alpha_j(f,\psi)u) = \frac{d}{du}\sum_{n=1}^{\infty}S_n(f,\psi)\frac{u^n}{n}.$$ Multiplying both sides by $u,$ we get $$\sum_{j=1}^{d-1}\frac{-\alpha_j(f,\psi)u}{1-\alpha_j(f,\psi)u} = \sum_{n=1}^{\infty}S_n(f,\psi)u^n,$$ that is, $$-\sum_{j=1}^{d-1}\sum_{n=1}^{\infty}(\alpha_j(f,\psi)u)^n = \sum_{n=1}^{\infty}S_n(f,\psi)u^n.$$ Comparing coefficients, $$-\sum_{j=1}^{d-1}(\alpha_j(f,\psi))^n = S_n(f,\psi).$$ Thus, for $n>0,$ we get \begin{equation} \label{ngeq0} -\sum_{j=1}^{d-1}e^{2\pi i n \theta_j(f, \psi)} = \frac{S_n(f,\psi)}{q^{n/2}}.\end{equation} For $n<0,$ taking complex conjugates, we have by (\ref{factorization-of-L}) and (\ref{ngeq0}) \begin{eqnarray*} -\sum_{j=1}^{d-1}e^{2\pi i n \theta_j(f, \psi)}
&=& -\overline{\sum_{j=1}^{d-1}e^{2\pi i |n|\theta_j(f, \psi)}}
=-\overline{\sum_{j=1}^{d-1}\frac{\alpha_j(f, \psi)^{|n|}}{q^{|n|/2}}}\\
&=&\overline{\frac{S_{|n|}(f,\psi)}{q^{|n|/2}}}
= \frac{S_{|n|}(f,\overline{\psi})}{q^{|n|/2}} = \frac{S_{|n|}(f,\psi^{-1})}{q^{|n|/2}}.
\end{eqnarray*} Thus, \begin{eqnarray*} \sum_{j=1}^{d-1}h(\theta_j(f, \psi)) &=&\sum_{j=1}^{d-1}\sum_{k=-K}^K\widehat{h}(k)e(k\theta_j(f, \psi))\\ &=&(d-1)\widehat{h}(0)+\sum_{j=1}^{d-1}\sum_{k=1}^K\widehat{h}(k)e(k\theta_j(f, \psi)) +\sum_{j=1}^{d-1}\sum_{k = -K}^{-1}\widehat{h}(k)e(k\theta_j(f, \psi))\\ &=&(d-1)\widehat{h}(0) - \sum_{k=1}^K\widehat{h}(k)\left(\frac{S_k(f,\psi)}{q^{k/2}}\right) - \sum_{k = -K}^{-1}\widehat{h}(k)\left(\frac{S_{-k}(f,\overline{\psi})}{q^{-k/2}}\right)\\ &=&(d-1)\widehat{h}(0) - \sum_{k=1}^K\frac{\widehat{h}(k)S_k(f,\psi) + \widehat{h}(-k)S_k(f,\overline{\psi})}{q^{k/2}}. \end{eqnarray*} \end{proof}
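For $q = p$ prime and $d = 2$, the $L$-function $L(u,f,\psi) = 1 + S_1(f,\psi)u$ has the single inverse root $\alpha_1(f,\psi) = -S_1(f,\psi)$ by \eqref{ngeq0}, so the Riemann hypothesis for curves predicts $|S_1(f,\psi)| = \sqrt{q}$. The following sketch (a numerical illustration, not from the paper) checks this for $f = X^2 + X$:

```python
import cmath, math

def S1(f, p):
    """S_1(f, psi) = sum over x in F_p of psi(f(x)), with psi(a) = e^{2 pi i a / p}."""
    return sum(cmath.exp(2j * math.pi * (f(x) % p) / p) for x in range(p))

for p in (3, 5):
    alpha1 = -S1(lambda x: x * x + x, p)            # the single inverse root
    assert abs(abs(alpha1) - math.sqrt(p)) < 1e-9   # Weil bound met with equality
```

The check amounts to the classical fact that quadratic Gauss sums modulo an odd prime have modulus $\sqrt{p}$.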
\begin{lem}\label{Explicit-formula-relevant} Let $\theta_j(f, \psi)$ be the eigenangles of the $L$-function $L(u,f,\psi)$. Then for any $n\geq 1,$ $$ -\sum_{j=1}^{d-1}e^{2\pi i n \theta_j(f, \psi)} = \sum_{\deg (M) = n}\frac{\Lambda(M)\psi_f(M)}{q^{n/2}}$$ where $M$ runs over monic polynomials in $\mathbb F_q[X],$ \[\Lambda(M) = \begin{cases} \deg P&\textrm{ if } M = P^k\,\textrm{for some }k\geq 1\textrm{ and } P \textrm{ irreducible},\\ 0&\textrm { otherwise}, \end{cases}\] and $\psi_f(P^k) = \psi_f(P)^k.$ \end{lem}
\begin{proof} Comparing equations (\ref{Euler-product}) and (\ref{factorization-of-L}), we have $$\prod_{j=1}^{d-1}(1 - \alpha_j(f, \psi) u) = \prod_P(1 - \psi_f(P)u^{\deg P})^{-1},$$ where the product on the right hand side is taken over monic irreducible polynomials in $\mathbb F_q[X].$ Taking logarithmic derivatives and multiplying by $u,$ we deduce that $$-\sum_{j=1}^{d-1}\sum_{n=1}^{\infty}(\alpha_j(f, \psi) u )^n = \sum_M\Lambda(M)u^{\deg M}\psi_f(M).$$
Comparing the coefficients of $u^n,$ we get $$-\sum_{j=1}^{d-1}\alpha_j(f, \psi)^n = \sum_{\deg(M) = n}\Lambda(M)\psi_f(M),$$ and the result follows by dividing both sides by $q^{n/2}$. \end{proof}
\section{The distribution of zeroes of $L(u,f,\psi)$} \label{distrzeros}
In this section we use the Erd\H{o}s-Tur\'{a}n inequality (see \cite{M}, Corollary 1.1) to prove a result on the number of eigenangles $\theta_j(f, \psi)$ in an interval ${\mathcal{I}}$ for a fixed $L$-function $L(u,f,\psi)$.
\begin{thm}\label{Erdos-Turan}\text{[P. Erd\H{o}s, P. Tur\'{a}n]} Let $x_1,x_2,\dots,x_N$ be real numbers lying in the unit interval $[-1/2,1/2).$ For any interval $\mathcal{I} \subseteq [-1/2,1/2),$
let $A(\mathcal{I},N,\{x_n\})$ denote the number of elements from the above set in $\mathcal{I}$. Let $|\mathcal{I}|$ denote the length of the interval. There exist absolute constants $B_1$ and $B_2$ such that for any $K\geq 1,$
$$|A(\mathcal{I},N,\{x_n\}) - N |\mathcal{I}|| \leq \frac{B_1N}{K+1} + B_2\sum_{k=1}^K\frac{1}{k}\left|\sum_{n=1}^{N}e^{2\pi i k x_n}\right|.$$ \end{thm}
We now prove the following theorem, which is the analogue of Proposition 5.1 in \cite{fr}.
\begin{thm}\label{boundN_I} For any $\mathcal{I} \subseteq [-1/2,1/2),$ let $N_\mathcal{I}(f, \psi) := \#\{1\leq j \leq d-1:\,\theta_j(f, \psi) \in \mathcal{I}\}.$ Then
$$N_\mathcal{I}(f,\psi) = (d-1) |\mathcal{I}| +O\left(\frac{d}{\log d}\right).$$ \end{thm}
\begin{proof}
By the Erd\H{o}s-Tur\'{a}n inequality and Lemma \ref{Explicit-formula-relevant}, we have \begin{eqnarray*}
|N_\mathcal{I}(f,\psi) - (d-1) |\mathcal{I}| |
&\ll&\frac{d}{K} + \sum_{k=1}^K\frac{1}{k}\left|\sum_{\deg M = k}\frac{\Lambda(M)\psi_f(M)}{q^{k/2}}\right|\\ &\ll& \frac{d}{K} + \sum_{k=1}^K\frac{1}{q^{k/2}}\sum_{M = P^a,\,a\geq 1 \atop{\deg M = k}}1. \end{eqnarray*} Applying the function-field analogue of the prime number theorem, the above expression is $\ll \displaystyle \frac{d}{K} + \frac{q^{K/2}}{K}.$ Choosing $K = \left[\frac{\log d}{\log q}\right],$ we deduce the theorem. \end{proof}
\section{Beurling-Selberg functions} \label{trigapprox}
By the functional equation, the conjugate of a root of $Z_{C_f}(u)$ is also a root, so we can restrict to symmetric intervals. Let $0<\beta<1$ and set $\mathcal I = [-\beta/2, \beta/2] \subset [-1/2,1/2)$. We are going to approximate the characteristic function of ${\mathcal{I}}$, $\chi_{\mathcal{I}}$, with Beurling-Selberg polynomials $I_K^\pm$. We will use the following properties of the coefficients of Beurling-Selberg polynomials (see \cite{M}, Chapter 1.2).
\begin{itemize}
\item[{\bf (a)}] The $I^\pm_K$ are trigonometric polynomials of degree $\leq K$, i.e., $$I_K^{\pm}(x) = \sum_{|k| \leq K} \widehat{I}_K^{\pm}(k) e(k x).$$ \item[{\bf (b)}] The Beurling-Selberg polynomials bound the characteristic function from below and above: \[I_K^- \leq \chi_{\mathcal{I}}\leq I_K^+.\] \item[{\bf (c)}] The integral of Beurling-Selberg polynomials is close to the length of the interval: \[\int_{-1/2}^{1/2} I_K^\pm(x) dx =\int_{-1/2}^{1/2} \chi_{\mathcal{I}}(x) dx \pm \frac{1}{K+1}.\] \item[{\bf (d)}] The $I^\pm_K$ are even (since we are taking the interval $\mathcal{I}$ to be symmetric about the origin). It then follows that the Fourier coefficients are also even, i.e. $\widehat{I}_K^{\pm}(-k) = \widehat{I}_K^{\pm}(k)$ for
$|k| \leq K$. \item[{\bf (e)}] The nonzero Fourier coefficients are also close to those of the characteristic function:
\[|\widehat{I}_K^\pm (k) - \widehat{\chi}_{\mathcal{I}}(k) | \leq \frac{1}{K+1}
\quad \Longrightarrow \quad \widehat{I}^\pm_K(k)=\frac{\sin (\pi k|\mathcal{I}|)}{\pi k} + O \left( \frac{1}{K+1}\right), \quad k \geq 1. \] This implies the following bound:
\[|\widehat{I}_K^\pm (k)| \leq \frac{1}{K+1} +\min \left \{|\mathcal{I}|, \frac{\pi}{|k|}\right \}, \quad 0<|k|\leq K.\] \end{itemize}
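Property {\bf (e)} rests on the closed form $\widehat{\chi}_{\mathcal{I}}(k) = \sin(\pi k|\mathcal{I}|)/(\pi k)$ for the symmetric interval $\mathcal I = [-\beta/2,\beta/2]$. A quick numerical sanity check of this Fourier coefficient (a sketch only; the helper names are ours):

```python
import math

def chi_hat_numeric(k, beta, steps=200000):
    # k-th Fourier coefficient of the indicator of I = [-beta/2, beta/2],
    # via a midpoint Riemann sum over [-1/2, 1/2); by symmetry the
    # imaginary part cancels, so we integrate cos(2 pi k x) only
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = -0.5 + (i + 0.5) * h
        if -beta / 2 <= x <= beta / 2:
            total += math.cos(2 * math.pi * k * x) * h
    return total

def chi_hat_closed(k, beta):
    # closed form sin(pi k beta) / (pi k), as in property (e)
    return math.sin(math.pi * k * beta) / (math.pi * k)

beta = 0.3
errs = [abs(chi_hat_numeric(k, beta) - chi_hat_closed(k, beta)) for k in range(1, 6)]
print(errs)
assert max(errs) < 1e-3
```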
\begin{prop}(Proposition 4.1, \cite{fr}) \label{propFR} For $K\geq 1$ such that $K|\mathcal{I}|>1$, we have \begin{eqnarray*} \sum_{k \geq 1} \widehat{I}_K^\pm (2k)&=&O(1),\\
\sum_{k \geq 1} \widehat{I}_K^\pm (k)^2 k&=&\frac{1}{2\pi^2} \log (K|\mathcal{I}|) +O(1),\\
\sum_{k \geq 1} \widehat{I}_K^+ (k)\widehat{I}_K^- (k) k&=&\frac{1}{2\pi^2} \log (K|\mathcal{I}|) +O(1).\\ \end{eqnarray*} \end{prop} Note that for a given $K$ these sums are actually finite, since the Beurling-Selberg polynomials $I_K^\pm$ have degree at most $K$.
\begin{proof} The first two statements are proven in Proposition 4.1 of \cite{fr}. Since $$\widehat{I}_K^{\pm}(k) = \frac{\sin (\pi k|\mathcal{I}|)}{\pi k} + O \left( \frac{1}{K}\right)$$ holds for both $\widehat{I}_K^{+}(k)$ and $\widehat{I}_K^{-}(k),$ the third statement follows by exactly the same proof as the second statement. \end{proof}
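The logarithmic main term in the proposition can be seen numerically by substituting the main term $\sin(\pi k|\mathcal{I}|)/(\pi k)$ of property {\bf (e)} for $\widehat{I}_K^\pm(k)$; this is only a heuristic stand-in for the true Beurling-Selberg coefficients, but it already exhibits the $\frac{1}{2\pi^2}\log(K|\mathcal{I}|)$ growth with a bounded remainder:

```python
import math

def main_term_sum(K, beta):
    # sum_{k=1}^{K} (sin(pi k beta) / (pi k))^2 * k, with the main term of
    # property (e) standing in for the Beurling-Selberg coefficients
    return sum((math.sin(math.pi * k * beta) / (math.pi * k)) ** 2 * k
               for k in range(1, K + 1))

beta = 0.5
for K in (10 ** 3, 10 ** 4, 10 ** 5):
    s = main_term_sum(K, beta)
    target = math.log(K * beta) / (2 * math.pi ** 2)
    # the difference stabilizes, illustrating the O(1) term
    print(K, round(s, 4), round(target, 4), round(s - target, 4))
```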
We will also need the following estimates. \begin{prop} \label{propmanysums} For $\alpha_1,\dots, \alpha_r, \gamma_1,\dots, \gamma_r>0$, and $\beta_1,\dots,\beta_r \in \mathbb{R}$, we have \[\sum_{k_1,\dots,k_r \geq 1} {\widehat{I}_K^\pm(k_1)}^{\alpha_1} \dots {\widehat{I}_K^\pm(k_r)}^{\alpha_r}k_1^{\beta_1}\dots k_r^{\beta_r}q^{-\gamma_1k_1-\dots-\gamma_r k_r}=O(1).\] For $\alpha_1,\alpha_2,\gamma >0$, and $\beta \in \mathbb{R}$, \[\sum_{k\geq1} {\widehat{I}_K^\pm(k)}^{\alpha_1} {\widehat{I}_K^\pm(2k)}^{\alpha_2} k^\beta q^{-\gamma k}=O(1).\] \end{prop}
\[\sum_{k_1,\dots,k_r \geq 1} {\widehat{I}_K^\pm(k_1)}^{\alpha_1} \dots {\widehat{I}_K^\pm(k_r)}^{\alpha_r}k_1^{\beta_1}\dots k_r^{\beta_r}q^{-\gamma_1k_1-\dots-\gamma_r k_r}=O(1).\] For $\alpha_1,\alpha_2,\gamma >0$, and $\beta \in \mathbb{R}$, \[\sum_{k\geq1} {\widehat{I}_K^\pm(k)}^{\alpha_1} {\widehat{I}_K^\pm(2k)}^{\alpha_2} k^\beta q^{-\gamma k}=O(1).\] \end{prop}
\begin{proof} Since $\left|\widehat{I}_K^\pm(k)\right|\leq \frac{1}{K+1}+\min\left\{|\mathcal{I}|, \frac{\pi}{|k|}\right\}$, we obtain \begin{eqnarray*}
&&\left|\sum_{k_1,\dots,k_r \geq 1} {\widehat{I}_K^\pm(k_1)}^{\alpha_1} \dots {\widehat{I}_K^\pm(k_r)}^{\alpha_r}k_1^{\beta_1}\dots k_r^{\beta_r}q^{-\gamma_1k_1-\dots-\gamma_r k_r}\right| \ll \sum_{k_1,\dots,k_r\geq 1} k_1^{\beta_1}\dots k_r^{\beta_r}q^{-\gamma_1k_1-\dots-\gamma_r k_r}. \end{eqnarray*} Since $\sum_{k\geq 1} k^\beta q^{-\gamma k}=O(1)$ for $q>1$ and $\gamma>0$, the right-hand side above is also $O(1).$ The second bound follows by the same argument, since the factors ${\widehat{I}_K^\pm(k)}^{\alpha_1}{\widehat{I}_K^\pm(2k)}^{\alpha_2}$ are likewise bounded. \end{proof}
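The proof reduces everything to the elementary fact that $\sum_{k\geq 1} k^\beta q^{-\gamma k} = O(1)$ for fixed $q>1$ and $\gamma>0$. A small numerical check that the partial sums stabilize (the function name `decay_sum` is ours):

```python
def decay_sum(beta, gamma, q, K):
    # partial sum of sum_{k >= 1} k^beta * q^(-gamma * k)
    return sum(k ** beta * q ** (-gamma * k) for k in range(1, K + 1))

q, beta, gamma = 3, 2.0, 1.0
s50, s500 = decay_sum(beta, gamma, q, 50), decay_sum(beta, gamma, q, 500)
print(s50, s500)
assert abs(s500 - s50) < 1e-12  # the tail beyond k = 50 is negligible
```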
\section{First Moment} \label{1mom}
Recall that $N_{\mathcal I} (f,\psi)$ denotes the number of angles $\theta_j(f, \psi)$ of the zeroes of the $L$-function $L(u,f,\psi)$ in the interval $\mathcal I \subset [-1/2, 1/2)$ of length $0<|\mathcal I |<1.$
From now on, for a function $\phi:\mathcal{F}_d' \rightarrow\mathbb C$, we denote its average by \[\left<\phi(f)\right>:=\frac{1}{|\mathcal{F}_d'|} \sum_{f \in \mathcal{F}_d'}\phi(f).\]
We want to compute the first moment \begin{eqnarray*}
\left< N_{\mathcal I}(f,\psi) \right> = \frac{1}{|\mathcal{F}_d'|} \sum_{f \in \mathcal{F}_d'} N_{\mathcal I}(f,\psi) . \end{eqnarray*} We will do so by proving the following result. \begin{thm} \label{averageET} As $d \rightarrow \infty$, \begin{eqnarray*}
\left< N_{\mathcal I}(f,\psi) - (d-1) |\mathcal{I}| \right> = O(1). \end{eqnarray*} \end{thm}
\begin{rem} Recall that in Theorem \ref{boundN_I} we showed that
$$N_{\mathcal I}(f,\psi) - (d-1) |\mathcal{I}| = O\left(\frac{d}{\log d}\right).$$ Theorem \ref{averageET}, on the other hand, gives a far better estimate for the average $\left< N_{\mathcal I}(f,\psi) - (d-1) |\mathcal{I}| \right>$ than we could have derived from Theorem \ref{boundN_I}. \end{rem}
For the proof of Theorem \ref{averageET}, we will use the Beurling-Selberg approximation of the characteristic function of the interval $\mathcal I.$ By property {\bf (b)} of the Beurling-Selberg polynomials, $$ \sum_{j=1}^{d-1} I_K^{-}(\theta_j(f,\psi)) \leq N_\mathcal{I}(f, \psi) \leq \sum_{j=1}^{d-1} I_K^{+}(\theta_j(f,\psi)) . $$
With the explicit formula of Lemma \ref{Explicit-Formula} and property {\bf (c)}, we write \begin{eqnarray*}
\sum_{j=1}^{d-1} I_K^{\pm}(\theta_j(f,\psi)) &=& (d-1) |\mathcal{I}| - S^{\pm}(K, f, \psi) \pm \frac{d-1}{K+1} \\ \end{eqnarray*} where \begin{eqnarray} \label{defSK} S^{\pm}(K, f,\psi) := \sum_{k=1}^K \frac{\widehat{I}^\pm_K(k)S_k(f, \psi)+\widehat{I}^\pm_K(-k)S_k(f, \bar{\psi})}{q^{k/2}} . \end{eqnarray}
This gives \begin{eqnarray} \label{T-estimate}
- S^{-}(K, f,\psi) -\frac{d-1}{K+1} \leq N_\mathcal{I}(f, \psi) - (d-1)|\mathcal{I}| \leq - S^{+}(K, f,\psi) + \frac{d-1}{K+1}. \end{eqnarray}
In order to complete the proof it remains to estimate $\langle S^{\pm}(K, f, \psi) \rangle.$ We will need the following results from \cite{entin}. As we remarked in Section \ref{a-s}, we are using a slightly different description for the family of Artin-Schreier covers since we do not allow twists. Because of that, our results are slightly simpler than those stated in \cite{entin}. We have also modified the original notation so that it fits the generalization that we pursue in the next sections.
\begin{lem} (\cite{entin}, Lemma 5.2)\label{lem:avg} Let $h$ be an integer, $p\nmid h$. Assume $k<d$ and $\alpha \in \mathbb F_{q^k}.$ Then \[\left< \psi(h \tr_k f(\alpha))\right> = \begin{cases} 1 & p\mid k, \, \alpha \in \mathbb F_{q^{k/p}},\\ 0 & \textrm{otherwise.} \end{cases} \] \end{lem} \begin{proof} If $p \mid k$ and $\alpha \in \mathbb F_{q^{k/p}}$ then $\tr_k(f(\alpha))=p \tr_{\frac{k}{p}}(f(\alpha))=0$, so $\left< \psi(h\tr_k f(\alpha))\right>=1$. For the remaining case we first note that the average is the same if we average over the family $\mathcal F_d$ of degree $d$ polynomials (without the condition $a_{pk}=0$). This is due to the existence of the map $\mu$ defined by (\ref{map-mu}).
Denote by $u$ the degree of $\alpha$ over $\mathbb F_q$. Since $u\leq k<d$ the map \[ \tau : \mathcal F_d\rightarrow \mathbb F_{q^{u}} \] defined by $\tau(f)=f(\alpha)$ is $(q-1)q^{d-u}$-to-one. Thus as $f$ ranges over $\mathcal F_d$, $f(\alpha)$ takes each value in $\mathbb F_{q^u}$ an equal number of times. Since $p\nmid \frac{k}{u}$, $\tr_k(f(\alpha))=\frac{k}{u}\tr_u(f(\alpha))$ also takes every value in $\mathbb F_p$ the same number of times as $f$ ranges over $\mathcal F_d$ and the same is true for $h\tr_k(f(\alpha))$. Thus each $p$th root of unity occurs the same number of times in $\psi(h\tr_k(f(\alpha)))$ as $f$ ranges over $\mathcal F_d$ and so the average is $0$. \end{proof}
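The equidistribution argument can be checked by brute force for tiny parameters. The sketch below assumes $q=p=3$, restricts $\alpha$ to the prime field to keep the arithmetic elementary, and averages over the full family $\mathcal F_d$ of degree-$d$ polynomials (as the proof notes, this average coincides with the one over $\mathcal F_d'$); the helper names are ours:

```python
import cmath
import itertools

p = q = 3  # assumption: q = p = 3, the smallest odd case
omega = cmath.exp(2j * cmath.pi / p)

def trace_to_Fp(value, k):
    # for value in the base field F_p, tr_k(value) = k * value (mod p)
    return (k * value) % p

def family_average(d, k, alpha, h=1):
    # average of psi(h * tr_k f(alpha)) over degree-d polynomials over F_q
    # (leading coefficient nonzero); alpha is taken in the base field
    total, count = 0j, 0
    for coeffs in itertools.product(range(q), repeat=d + 1):
        if coeffs[d] == 0:
            continue  # enforce degree exactly d
        value = sum(c * pow(alpha, i, p) for i, c in enumerate(coeffs)) % p
        total += omega ** ((h * trace_to_Fp(value, k)) % p)
        count += 1
    return total / count

# p does not divide k: the "otherwise" case, so the average should vanish
avg_zero = family_average(d=2, k=1, alpha=1)
# p | k and alpha in F_{q^{k/p}}: the average should be 1
avg_one = family_average(d=4, k=3, alpha=2)
print(avg_zero, avg_one)
assert abs(avg_zero) < 1e-9 and abs(avg_one - 1) < 1e-9
```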
The lemma has the following consequence.
\begin{cor} (\cite{entin}, Corollary 5.3) \label{cor:moment1} Let $h$ be an integer, $p\nmid h$. Assume $k <d$ and set \[M^{k,1,h}_{1,d}:= \left<q^{-k/2} \sum_{\alpha \in \mathbb F_{q^k}} \psi(h \tr_k f(\alpha))\right>.\] Then \[M^{k,1,h}_{1,d}= e_{p,k}q^{-(1/2-1/p)k},\] where \[e_{p,k}=\begin{cases} 0 & p\nmid k, \\ 1 & p\mid k. \end{cases}\] \end{cor} We also denote \[M^{k,-1,h}_{1,d}:= \left<q^{-k/2} \sum_{\alpha \in \mathbb F_{q^k}} \psi(- h \tr_k f(\alpha))\right>.\] Clearly, $M^{k,-1,h}_{1,d}=\overline{M^{k,1,h}_{1,d}}.$
Notice that changing $h$ allows us to vary the character from $\psi$ to $\psi^h$. This will be useful later.
\begin{proof}(Theorem \ref{averageET}) We have that \begin{eqnarray*} \left< S^{\pm}(K, f, \psi) \right> &=& \sum_{k=1}^K \frac{\widehat{I}^\pm_K(k)\left<S_k(f,\psi) \right>+\widehat{I}^\pm_K(-k)\left<S_k(f,\bar{\psi})\right>}{q^{k/2}}\\ &=& \sum_{k=1}^K \widehat{I}^\pm_K(k)M^{k,1,1}_{1,d}+\widehat{I}^\pm_K(-k)M^{k,-1,1}_{1,d}\\ &= & 2\sum_{k=1}^K \widehat{I}^\pm_K(k)e_{p,k}q^{-(1/2-1/p)k}\\ \end{eqnarray*} and the result follows from property {\bf (e)} and \eqref{T-estimate} taking $K=cd$ with $c<1$. \end{proof}
\begin{rem} \label{Chantal's favorite nonconstant} We set \[C(K):=\sum_{k=1}^K \widehat{I}^\pm_K(k)e_{p,k}q^{-(1/2-1/p)k}\] and \[
C:=\sum_{k=1}^\infty \frac{\sin(\pi k |\mathcal I|)}{\pi k} e_{p,k} q^{-(1/2-1/p)k}. \] These terms will reappear in the computation of the higher moments. Note that, since $p>2,$ the above infinite series converges absolutely. By Proposition \ref{propmanysums}, $C(K)=O(1)$. By property {\bf (e)} of the Beurling-Selberg polynomials, $C=C(K) + O(1/K)$. \end{rem}
\section{Second moment} \label{2mom} Let \begin{eqnarray} S^\pm(K,C_f)=\sum_{h=1}^{p-1}S^{\pm}(K, f, \psi^h), \end{eqnarray} where $S^{\pm}(K, f, \psi)$ is defined in \eqref{defSK}.
In the next sections, we compute the moments of $S^{\pm}(K, C_f)$ and show that, when properly normalized, they fit the Gaussian moments (Theorem \ref{thm:sumisgaussian}). We will then use this result to show that $$ \frac{N_{\mathcal{I}}(C_f)
- (p-1)(d-1) |\mathcal{I}|}{\sqrt{\frac{2(p-1)}{\pi^2} \log(d |\mathcal{I}|)}}$$
converges to a normal distribution as $d \rightarrow \infty$ since it converges in mean square to $$\frac{S^{\pm}(K, C_f)}{{\sqrt{\frac{2(p-1)}{\pi^2} \log(d |\mathcal{I}|)}}}.$$
The following lemma is a generalization of Lemma 6.2 in \cite{entin} which also takes into account the difference in our family of Artin-Schreier covers.
Recall that $\psi^j(\alpha)=\psi(j\alpha)$ for $\alpha \in \mathbb F_p$. We have the following
\begin{lem} \label{lemmaindependence} Fix $h_1, h_2$ such that $p\nmid h_1 h_2$ and let $e_1, e_2 \in \{-1,1 \}$. Assume $k_1,k_2 > 0$, $k_1+k_2 < d$. Let $\alpha_1 \in \mathbb F_{q^{k_1}}$, $\alpha_2 \in \mathbb F_{q^{k_2}}$ with monic minimal polynomials $g_1,g_2$ of degrees $u_1, u_2$ over $\mathbb F_q$ respectively.
We have \begin{eqnarray*} \left< \psi(e_1 h_1\tr_{k_1} f(\alpha_1)+e_2h_2\tr_{k_2} f(\alpha_2)) \right> &=&\begin{cases} 1, & \textrm{$g_1=g_2$, $p \mid \frac{{e_1 h_1 k_1}+{e_2 h_2 k_2}}{u_1}$, $p \nmid \frac{k_1k_2}{u_1u_2}$} \\ & \textrm{or $p\mid \left(\frac{k_1}{u_1}, \frac{k_2}{u_2}\right)$}; \\ 0, & \textrm{otherwise.} \end{cases} \end{eqnarray*} \end{lem} \begin{proof} If $p \mid \frac{k_2}{u_2}$ then $\tr_{k_2} f(\alpha_2)=p\tr_{{\frac{k_2}{p}}}f(\alpha_2)=0$, so \[ \left< \psi(e_1 h_1\tr_{k_1} f(\alpha_1) +e_2 h_2 \tr_{k_2} f(\alpha_2)) \right> =\left< \psi( e_1 h_1 \tr_{k_1} f(\alpha_1)) \right>. \] By Lemma \ref{lem:avg}, this equals $0$ if $p \nmid \frac{k_1}{u_1}$ and $1$ if $p\mid \frac{k_1}{u_1}$ as $p \nmid e_1h_1$.
The only remaining case is when $p \nmid \frac{k_1k_2}{u_1u_2}$. We first suppose that $g_1 \neq g_2$. We note that we will have the same value if we average over $\mathcal F_d$ rather than $\mathcal F_d'$ due to the existence of the map $\mu$ defined by (\ref{map-mu}). Since $u_1+u_2\leq {k_1}+{k_2} <d$, the map \[ \tau : \mathcal F_d \rightarrow \mathbb F_q[X]/(g_1g_2) \simeq \mathbb F_{q^{u_1}}\times \mathbb F_{q^{u_2}} \] is exactly $(q-1)q^{d-u_1-u_2}$-to-one. Hence as $f$ ranges over $\mathcal F_d$, $(f(\alpha_1), f(\alpha_2))$ takes every value in $\mathbb F_{q^{u_1}}\times \mathbb F_{q^{u_2}}$ the same number of times. Now, since $p\nmid \frac{{e_1 h_1 k_1}}{u_1}$ and $p \nmid \frac{{e_2 h_2 k_2}}{u_2}$, \[(e_1h_1\tr_{k_1} f(\alpha_1), e_2h_2\tr_{k_2} f(\alpha_2))=\left(\frac{{e_1h_1k_1}}{u_1}\tr_{u_1}(f(\alpha_1)), \frac{{e_2h_2k_2}}{u_2}\tr_{u_2}(f(\alpha_2))\right)\] also takes every value in $\mathbb F_p\times \mathbb F_p$
the same number of times as $f$ ranges over $\mathcal F_d$. Then \begin{eqnarray*} &&\psi\left(e_1h_1\tr_{{k_1}}(f(\alpha_1))+e_2h_2 \tr_{{k_2}} (f(\alpha_2))\right)= \\&=&\psi\left(e_1h_1\frac{{k_1}}{u_1}\tr_{u_1}(f(\alpha_1))+e_2h_2\frac{{k_2}}{u_2} \tr_{u_2} (f(\alpha_2))\right) \end{eqnarray*} assumes every $p$th root of unity equally many times as we average over $\mathcal F_d$ and so the average is $0$.
If $g_1=g_2$, then $\alpha_1$ and $\alpha_2$ are conjugates over $\mathbb F_q$ and so are $f(\alpha_1)$ and $f(\alpha_2).$ Then $\tr_{u_1} f(\alpha_1)=\tr_{u_1} f(\alpha_2)$. This implies \begin{eqnarray*} e_1h_1\tr_{k_1} f(\alpha_1)+e_2h_2\tr_{k_2} f(\alpha_2) &=& e_1h_1\frac{{k_1}}{u_1}\tr_{u_1} f(\alpha_1) +e_2h_2 \frac{{k_2}}{u_1}\tr_{u_1} f(\alpha_2) = \\ &=& \frac{{e_1h_1k_1}+e_2h_2 {k_2}}{u_1} \tr_{u_1} f(\alpha_1), \end{eqnarray*} which is zero when $p\mid \frac{e_1h_1{k_1}+e_2h_2{k_2}}{u_1}$. If $p$ does not divide $\frac{e_1h_1{k_1}+e_2h_2{k_2}}{u_1}$ then \[\left< \psi(e_1h_1\tr_{k_1} f(\alpha_1)+e_2h_2 \tr_{k_2} f(\alpha_2)) \right> = \left< \psi\left(\frac{{e_1h_1k_1}+e_2h_2 {k_2}}{u_1} \tr_{u_1} f(\alpha_1)\right) \right> =0\] by Lemma \ref{lem:avg}.
\end{proof}
For positive integers $k_1,k_2,h_1,h_2$ with $p\nmid h_1h_2$ and $e_1, e_2 \in \{ -1, 1\}$, let \begin{eqnarray*} M_{2,d}^{(k_1,k_2),(e_1,e_2),(h_1,h_2)} &:=& \left< q^{-(k_1+k_2)/2} \sum_{{\alpha_1 \in \mathbb F_{q^{k_1}}}\atop{\alpha_2 \in \mathbb F_{q^{k_2}}}}\psi(e_1h_1\tr_{k_1} f(\alpha_1)+e_2h_2 \tr_{k_2} f(\alpha_2)) \right> \\ &=& q^{-(k_1+k_2)/2} \sum_{{\alpha_1 \in \mathbb F_{q^{k_1}}}\atop{\alpha_2 \in \mathbb F_{q^{k_2}}}} \left< \psi(e_1h_1\tr_{k_1} f(\alpha_1) +e_2h_2 \tr_{k_2} f(\alpha_2)) \right> . \end{eqnarray*} Then we have the following analogue of Theorem 8 in \cite{entin}. \begin{thm}\label{Mcovariance} Assume ${k_1} \geq {k_2} > 0$ and ${k_1}+{k_2} < d$. Let $0<h_1, h_2 \leq (p-1)/2$. Then \begin{eqnarray*} M_{2,d}^{({k_1},{k_2}),(e_1,e_2),(h_1,h_2)} &=&\delta_{{k_1},2{k_2}} O \left({k_1} q^{-{k_2}/2}\right) + O \left({k_1} q^{-{k_2}/2-{k_1}/6} + q^{-(1/2 - 1/p)({k_1}+{k_2})}\right)\\ &&+ \begin{cases}
\delta_{{k_1},{k_2}} {k_1} \left(1+O\left(q^{-{k_1}/2}\right)\right), & (e_1,e_2)=(1,-1), h_1=h_2,\\
0, & \text{ otherwise,} \end{cases} \end{eqnarray*} where \begin{eqnarray*} \delta_{{k_1},{k_2}} = \begin{cases} 1, & {k_1}={k_2}, \\0, & {k_1} \neq {k_2}. \end{cases} \end{eqnarray*} \end{thm} Before we proceed with the proof, we would like to make a few remarks. In the instances when we apply this result, we will choose $K=cd$, for $0<c<1/2$, and therefore ${k_1}, {k_2}\leq K$ will imply that $k_1 + k_2 < d$, so we will be able to apply Theorem \ref{Mcovariance} for all values of $k_1, k_2$ under consideration. Also note that the condition $k_1\geq k_2>0$ does not restrict the validity of the statement, since $M_{2,d}^{({k_2},{k_1}),(1,-1),(h_1,h_2)}=\overline{M_{2,d}^{({k_1},{k_2}),(1,-1), (h_2,h_1)}}$.
\begin{proof} From Lemma \ref{lemmaindependence}, \begin{eqnarray*} &&M_{2,d}^{({k_1},{k_2}),(e_1,e_2),(h_1,h_2)} = q^{-({k_1}+{k_2})/2}\left(e_{p,e_1h_1{k_1}+e_2h_2{k_2}}\sum_{{{m\mid ({k_1},{k_2})}\atop{mp\nmid {k_1},{k_2}}}\atop{mp \mid (e_1h_1{k_1}+e_2 h_2{k_2})}} \pi(m)m^2 +e_{p,{k_1}}e_{p,{k_2}}q^{({k_1}+{k_2})/p}\right), \end{eqnarray*} where $\pi(m)$ denotes the number of monic irreducible polynomials of degree $m$ in $\mathbb F_q[X]$. The prime number theorem for function fields (see \cite{rosen}, Theorem 2.2) states that $\pi(m) =\frac{q^m}{m}+O\left(\frac{q^{m/2}}{m}\right).$
When ${k_1}={k_2}$, the conditions on the summation indices become $m\mid {k_1}$, $mp\nmid {k_1}$, and $mp\mid (e_1h_1+e_2h_2){k_1}$, a contradiction unless $p\mid (e_1h_1+e_2h_2)$. Due to the range in which the $h_1, h_2$ take values, this can only happen when $e_1=-e_2$ and $h_1=h_2$. In this case, one gets \[\sum_{{m\mid {k_1}}\atop{mp\nmid {k_1}}} \pi(m)m^2 = {k_1}q^{k_1}+O\left({k_1}q^{{k_1}/2}\right).\] On the other hand, when ${k_1}=2{k_2}$, one gets \[\sum_{{{m\mid {k_2}}\atop{mp\nmid {k_2}}}\atop{mp \mid (2e_1h_1+e_2h_2){k_2}}} \pi(m)m^2=O({k_2}q^{k_2})=O\left({k_1}q^{{k_1}/2}\right).\] Finally, if ${k_1}>{k_2}$ but ${k_1}\not = 2{k_2}$, we have $({k_1},{k_2})\leq {k_1}/3$ and \[\sum_{{{m\mid ({k_1},{k_2})}\atop{mp\nmid {k_1},{k_2}}}\atop{mp \mid (e_1h_1{k_1}+e_2h_2{k_2})}} \pi(m)m^2=O\left({k_1} q^{{k_1}/3}\right).\] This concludes the proof of the theorem. \end{proof}
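Both estimates used above, the prime number theorem for function fields and the evaluation of $\sum_{m \mid k_1,\, mp\nmid k_1} \pi(m)m^2$, can be checked numerically for small parameters. The following sketch (helper names `pi_q` and `restricted_sum` are ours) computes $\pi(m)$ by M\"obius inversion of $q^m = \sum_{d \mid m} d\,\pi(d)$:

```python
def mobius(n):
    # Moebius function by trial factorization
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def pi_q(m, q):
    # number of monic irreducibles of degree m over F_q (Moebius inversion)
    return sum(mobius(d) * q ** (m // d) for d in range(1, m + 1) if m % d == 0) // m

def restricted_sum(k1, q, p):
    # sum of pi(m) * m^2 over m | k1 with m*p not dividing k1
    return sum(pi_q(m, q) * m ** 2 for m in range(1, k1 + 1)
               if k1 % m == 0 and k1 % (m * p) != 0)

q = p = 3
for k1 in range(2, 10):
    err = abs(restricted_sum(k1, q, p) - k1 * q ** k1)
    # the proof asserts err = O(k1 * q^(k1 / 2))
    print(k1, err, round(k1 * q ** (k1 / 2), 1))
    assert err <= 2 * k1 * q ** (k1 / 2)
```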
Finally, we are able to compute the covariances.
\begin{thm}\label{covariance}Let $h_1, h_2$ be integers such that $0< h_1, h_2\leq (p-1)/2$. Then for any $K$ with $\max\{1,1/|\mathcal I|\}<K<d/2$, \begin{eqnarray*}
\left< S^\pm(K, f, \psi^{h_1}) S^\pm(K, f, \psi^{h_2})\right> &=&\left< S^\pm(K, f, \psi^{h_1}) S^\mp(K, f, \psi^{h_2}) \right>
=\begin{cases}
\displaystyle \frac{1}{\pi^2} \log (K |\mathcal{I}|)+ O\left(1\right), & h_1=h_2,
\\
O\left(1\right), & h_1\neq h_2.
\end{cases} \end{eqnarray*}
\end{thm} \begin{proof}
By definition, \begin{eqnarray*}
&&\left< S^\pm(K, f, \psi^{h_1}) S^\pm(K, f, \psi^{h_2}) \right> \\ &&= \sum_{{k_1},{k_2} =1}^K \widehat{I}_K^\pm({k_1}) \widehat{I}_K^\pm({k_2}) M_{2,d}^{({k_1},{k_2}),(1,1),(h_1,h_2)} + \widehat{I}_K^\pm({k_1}) \widehat{I}_K^\pm(-{k_2}) M_{2,d}^{({k_1},{k_2}),(1,-1),(h_1,h_2)} \\ && \;\;\;\; + \widehat{I}_K^\pm(-{k_1}) \widehat{I}_K^\pm({k_2}) M_{2,d}^{({k_1},{k_2}),(-1,1),(h_1,h_2)}+\widehat{I}_K^\pm(-{k_1}) \widehat{I}_K^\pm(-{k_2}) M_{2,d}^{({k_1},{k_2}),(-1,-1),(h_1,h_2)}. \end{eqnarray*} Then, by repeated use of Theorem \ref{Mcovariance} and Proposition \ref{propmanysums}, the summation over $k_1, k_2$ is $O(1)$ if $h_1\neq h_2$. If $h_1=h_2$ then \begin{eqnarray*} \left< S^\pm(K, f, \psi^{h_1}) ^2 \right> &=& 2 \sum_{{k_1}=1}^K \widehat{I}_K^\pm({k_1})\widehat{I}_K^\pm({-k_1}) {k_1} + O(1) = 2 \sum_{{k_1}\geq 1} \widehat{I}_K^\pm({k_1})^2 {k_1}
+ O ( 1 ) \\
&=& \frac{1}{\pi^2} \log(K |\mathcal{I}|) + O(1) \end{eqnarray*} by applying Proposition \ref{propFR}. The proof for $\left< S^\pm(K, f, \psi^{h_1}) S^\mp(K, f, \psi^{h_2}) \right>$ follows along exactly the same lines. \end{proof}
\begin{cor}\label{cor:2ndmoment}
For any $K$ with $\max\{1,1/|\mathcal I|\}<K<d/2$, \[
\langle S^\pm(K,C_f)^2 \rangle=\langle S^+(K,C_f)S^-(K,C_f)\rangle=\frac{2(p-1)}{\pi^2}\log(K|\mathcal I|) + O(1). \] \end{cor} \begin{proof} First we note that \begin{eqnarray*} \langle S^\pm(K,C_f)^2\rangle= \sum_{h_1,h_2=1}^{p-1} \left\langle S^{\pm}(K, f, \psi^{h_1}) S^{\pm}(K, f, \psi^{h_2})\right\rangle. \end{eqnarray*}
Notice that by Theorem \ref{covariance}, each term with $h_1=h_2$ or $h_1=p-h_2$ contributes $\frac{1}{\pi^2}\log(K|\mathcal I|)+O(1)$, while all other terms contribute $O(1)$. Since $p$ is odd, there are exactly $2(p-1)$ such pairs $(h_1,h_2)$, which gives the main term. The proof for $\langle S^+(K,C_f)S^-(K,C_f)\rangle$ is identical. \end{proof}
\section{Third moment} \label{3mom}
Let $k_1, k_2, k_3$ be positive integers, $e_1, e_2, e_3$ take values $\pm 1$, and $h_1,h_2,h_3$ be integers such that $p\nmid h_i$. Denote ${\bf k}=(k_1,k_2,k_3)$, ${\bf e}=(e_1,e_2,e_3)$, and ${\bf h}=(h_1,h_2,h_3)$. For every ${\bm \alpha}=(\alpha_1,\alpha_2,\alpha_3) \in \mathbb F_{q^{k_1}} \times \mathbb F_{q^{k_2}} \times \mathbb F_{q^{k_3}} $, set $$ m_{3,d}^{{\bf k}, {\bf e}, {\bf h}}({\bm \alpha}) = \left< \psi(e_1 h_1 \tr_{k_1}f(\alpha_1) + e_2 h_2 \tr_{k_2}f(\alpha_2) + e_3 h_3 \tr_{k_{3}} f(\alpha_3)) \right>, $$ and $$M_{3, d}^{{\bf k}, {\bf e}, {\bf h}}=
\sum_{{\alpha_i \in \mathbb F_{q^{k_i}}} \atop {i=1,2,3}} q^{-(k_1 + k_2 + k_3)/2} m_{3,d}^{{\bf k}, {\bf e}, {\bf h}}({\bm \alpha}).$$
In an analogous manner to Section \ref{2mom}, one can prove the following. \begin{lem}\label{lemm3} Let $p\nmid h_1h_2h_3$ and let $e_1, e_2, e_3 \in \{-1, 1\}$. Assume $k_1, k_2, k_3 > 0$ and $k_1+k_2+k_3 < d$. For $i=1,2,3$, let $\alpha_i$ be an element of $\mathbb F_{q^{k_i}}$ with minimal polynomial $g_i$ over $\mathbb F_q$ of degree $u_i.$ We have $m_{3,d}^{{\bf k}, {\bf e}, {\bf h}}({\bm \alpha})=1$ in any of the following cases: \begin{itemize}
\item $g_1=g_2=g_3, p \mid \frac{(e_1h_1k_1+e_2h_2k_2+e_3h_3k_3)}{u_1}, p\nmid\frac {k_1k_2k_3}{u_1u_2u_3}$.
\item $g_{j_1}=g_{j_2}, p \mid \frac{(e_{j_1}h_{j_1}k_{j_1}+e_{j_2}h_{j_2}k_{j_2})}{ u_{j_1}}, p \nmid \frac{k_{j_1}k_{j_2}}{u_{j_1}u_{j_2}}, p \mid\frac{k_{j_3}}{u_{j_3}}$ , where $(j_1,j_2,j_3)$ is any permutation of $(1,2,3)$. \item $p \mid \frac{k_i}{u_i}, i=1,2,3$. \end{itemize} Otherwise $m_{3,d}^{{\bf k}, {\bf e}, {\bf h}}({\bm \alpha})=0$.
\end{lem}
\begin{thm}\label{thm3} Assume $k_1 \geq k_2 \geq k_3 > 0$ and $k_1 + k_2 + k_3 < d$. Then \begin{eqnarray*} &&M_{3, d}^{{\bf k}, {\bf e}, {\bf h}}\\ &=&M_{1,d}^{k_1,e_1, h_1}M_{2,d}^{(k_2,k_3),(e_2,e_3), (h_2,h_3)}+M_{1,d}^{k_2,e_2, h_2}M_{2, d}^{(k_1,k_3),(e_1,e_3), (h_1,h_3)}\\ &&+M_{1,d}^{k_3,e_3, h_3}M_{2, d}^{(k_1,k_2),(e_1,e_2),(h_1,h_2)} -2M_{1,d}^{k_1,e_1,h_1}M_{1,d}^{k_2,e_2,h_2}M_{1,d}^{k_3,e_3,h_3}\\ &&+O\left(\delta_{k_1,k_2,k_3} k_1^2q^{-k_1/2}+\delta_{k_1,k_2,2k_3} k_1^2 q^{-3k_1/4} + \delta_{k_1,2k_2,2k_3} k_1^2 q^{-k_1/2}+k_1^2 q^{-k_1/6-k_2-k_3} \right)\\ &=& e_{p,k_1}q^{-(1/2-1/p)k_1}M_{2,d}^{(k_2,k_3),(e_2,e_3),(h_2,h_3)}+e_{p,k_2}q^{-(1/2-1/p)k_2}M_{2,d}^{(k_1,k_3),(e_1,e_3),(h_1,h_3)}\\ &&+e_{p,k_3}q^{-(1/2-1/p)k_3}M_{2, d}^{(k_1,k_2),(e_1,e_2),(h_1,h_2)}+O \left(\delta_{k_1,k_2,k_3} k_1^2q^{-k_1/2}+\delta_{k_1,k_2,2k_3} k_1^2 q^{-3k_1/4}\right) \\ &&+ O\left( \delta_{k_1,2k_2,2k_3} k_1^2 q^{-k_1/2}+k_1^2 q^{-k_1/6-k_2-k_3} +q^{-(1/2 - 1/p)(k_1+k_2+k_3)} \right).\\ \end{eqnarray*} \end{thm}
\begin{proof} We can use induction in the same way as we used it in the proof of Lemma \ref{lemm3}. The only new term to be considered is given by the case $g_1=g_2=g_3$ and $p u_1 \mid (e_1h_1k_1+e_2h_2k_2+e_3h_3k_3)$. This term yields \[ q^{-(k_1+k_2+k_3)/2}e_{p,e_1h_1k_1+e_2h_2k_2+e_3h_3k_3}\sum_{{{m\mid (k_1,k_2,k_3)}\atop{mp\nmid k_1,k_2,k_3}}\atop{mp \mid (e_1h_1k_1+e_2h_2k_2+e_3h_3k_3)}} \pi(m)m^3. \] If $k_1=k_3$, we have \[\sum_{{{m\mid k_1}\atop{mp\nmid k_1}}\atop{mp \mid (e_1h_1+e_2h_2+e_3h_3)k_1}}\pi(m)m^3 = O\left(k_1^2q^{k_1}\right).\] If $k_1=2k_3$, and $k_2=k_1$ or $k_2=k_3$, we have \[\sum_{{{m\mid (k_1,k_2,k_3)}\atop{mp\nmid k_3}}\atop{mp \mid (e_1h_1k_1+e_2h_2k_2+e_3h_3k_3)}} \pi(m)m^3=O\left(k_1^2q^{k_1/2}\right). \] Finally, for the other cases, \[\sum_{{{m\mid (k_1,k_2,k_3)}\atop{mp\nmid k_1,k_2,k_3}}\atop{mp \mid (e_1h_1k_1+e_2h_2k_2+e_3h_3k_3)}} \pi(m)m^3=O\left(k_1^2q^{k_1/3}\right). \] Collecting these estimates yields the stated error terms.
\end{proof}
\begin{thm} Let $0< h_1, h_2, h_3 \leq (p-1)/2$. For any $K$ with $\max\{1, 1/|\mathcal I|\} <K< d/3$,
\begin{eqnarray*}
&&\left< S^\pm(K, f, \psi^{h_1}) S^\pm(K, f, \psi^{h_2})S^\pm(K, f, \psi^{h_3})\right>
\\&=& \begin{cases}\frac{3C}{\pi^2}\log(K|\mathcal{I}|)+O\left(1\right) & h_1=h_2= h_3,\\
\frac{C}{\pi^2}\log(K|\mathcal{I}|)+O\left(1\right) & h_{j_1}=h_{j_2}\not = h_{j_3}, \, (j_1,j_2,j_3)\, \text{a permutation of}\, (1,2,3),\\ O(1) & h_i \,\text{distinct}.
\end{cases}
\end{eqnarray*} where $C$ is the constant defined in Remark \ref{Chantal's favorite nonconstant}. \end{thm} \begin{cor}
For any $K$ with $\max\{1, 1/|\mathcal I|\} <K< d/3$, \[
\langle S^\pm(K,C_f)^3\rangle=\frac{6C(p-1)^2}{\pi^2}\log(K|\mathcal I|)+O(1). \] \end{cor}
\section{General Moments}\label{genmom}
Let $n, k_1, \dots, k_n$ be positive integers, let $e_1, \dots, e_n$ take values $\pm 1$ and let $h_1, \dots, h_n$ be integers such that $p\nmid h_i$, $1\leq i\leq n$. Let ${\bf k} = (k_1, \dots, k_n)$, ${\bf e} = (e_1, \dots, e_n)$ and ${\bf h}=(h_1,\dots, h_n)$. Let $\alpha_i \in \mathbb F_{q^{k_i}}$, $1 \leq i \leq n$, and let ${\bm \alpha}=(\alpha_1,\dots,\alpha_n)$. We define $$ m_{n, d}^{{\bf k}, {\bf e}, {\bf h}}({\bm \alpha}) = \left< \psi(e_1 h_1 \tr_{k_1}f(\alpha_1) + \dots + e_nh_n \tr_{k_{n}} f(\alpha_n)) \right> $$ and $$M_{n, d}^{{\bf k}, {\bf e}, {\bf h}} =
\sum_{{\alpha_i \in \mathbb F_{q^{k_i}}} \atop {i=1, \dots, n}} q^{-(k_1 + \dots + k_n)/2} m_{n, d}^{{\bf k}, {\bf e}, {\bf h}}({\bm \alpha}).$$
In this section we compute the general moments $$\left< S^\pm(K, f, \psi)^n \right> = \sum_{k_1, \dots, k_n=1}^K \sum_{e_1, \dots, e_n = \pm 1} \widehat{I}_K^{\pm}(e_1 k_1) \dots \widehat{I}_K^{\pm}(e_n k_n) M_{n,d}^{{\bf k}, {\bf e}, {\bf h}}, $$ with ${\bf h}=(1,\dots,1)$, and \[ \left< S^\pm(K, f, \psi^{h_1})\dots S^\pm(K, f, \psi^{h_n}) \right>= \sum_{k_1, \dots, k_n=1}^K \sum_{e_j = \pm 1 , \atop {1 \leq j \leq n}} \widehat{I}_K^{\pm}(e_1 k_1) \dots \widehat{I}_K^{\pm}(e_n k_n) M_{n,d}^{{\bf k}, {\bf e}, {\bf h}}. \]
\begin{lem} \label{generalcase} Assume $k_1, \dots, k_n > 0$, $k_1 + \dots + k_n < d$. Let $g_1, \dots, g_s$ of degree $u_1, \dots, u_s$ respectively be all the distinct minimal polynomials over $\mathbb F_q$ of $\alpha_1, \dots, \alpha_n$ (we allow the possibility that some $\alpha_i$'s are conjugate to each other, thus $s\leq n$), and let $$\epsilon_i = \frac{1}{u_i} \sum_{\alpha_j \in R(g_i)} {k_j} e_j h_j, \;\; 1 \leq i \leq s,$$ where $R(g)$ is the set of roots of $g$. Then $$m_{n, d}^{{\bf k}, {\bf e}, {\bf h}}({\bm \alpha}) = \left\{ \begin{array}{ll} 1 & \mbox{if $p \mid \epsilon_i$ for $1 \leq i \leq s$}, \\ 0 & \mbox{otherwise}. \end{array} \right.$$ \end{lem}
\begin{proof} As before, we can take the average over the family $\mathcal{F}_d$ of polynomials of degree $d$ without the condition that $a_{kp} = 0$ for $1 \leq k \leq d/p$. Renumbering, suppose that $\alpha_i$ has minimal polynomial $g_i$ for $1 \leq i \leq s$.
Since $\sum_{i=1}^s u_i \leq \sum_{i=1}^s k_i < d$, the map \begin{eqnarray*} \tau: \mathcal{F}_d \rightarrow \mathbb F_q[X]/(g_1 \dots g_s) \simeq \mathbb F_{q^{u_1}}\times \dots \times \mathbb F_{q^{u_s}} \end{eqnarray*} is exactly $(q-1) q^{d - (u_1+ \dots +u_s)}$-to-one, and as $f$ ranges over $\mathcal{F}_d$, $\left( f(\alpha_1), \dots , f(\alpha_s) \right)$ takes every value in $\mathbb F_{q^{u_1}} \times \dots \times \mathbb F_{q^{u_s}}$ the same number of times. Now, the tuple $\left( \tr_{u_1} f(\alpha_1), \dots, \tr_{u_s}f(\alpha_s) \right)$ also takes every value in $(\mathbb F_p)^s$ the same number of times as $f$ ranges over $\mathcal{F}_d$, and the same holds for any linear combination $$\gamma_1 \tr_{u_1} f(\alpha_1) + \dots + \gamma_s \tr_{u_s}f(\alpha_s), $$ unless $p$ divides every $\gamma_i$. This shows that, when $p$ does not divide all the $\gamma_i$, each $p$th root of unity occurs equally often as the value of $$\psi \left( \gamma_1 \tr_{u_1} f(\alpha_1) + \dots + \gamma_s \tr_{u_s}f(\alpha_s) \right).$$ We now determine the coefficients $\gamma_i$ for $$m_{n, d}^{{\bf k}, {\bf e}, {\bf h}}({\bm \alpha}) = \left< \psi \left( e_1 h_1 \tr_{k_1} f(\alpha_1) + \dots + e_nh_n \tr_{k_n}f(\alpha_n) \right) \right>.$$ Recall that $\tr_{k_i}f(\alpha_i)=\frac{k_i}{u_i} \tr_{u_i} f(\alpha_i)$ for $i=1, \dots, s$. Let $$\epsilon_i = \frac{1}{u_i} \sum_{\alpha_j \in R(g_i)} e_j h_j k_j, \;\; 1 \leq i \leq s.$$ Then $\gamma_i=\epsilon_i$, i.e., $$m_{n, d}^{{\bf k}, {\bf e}, {\bf h}}({\bm \alpha}) = \left< \psi \left( \epsilon_1 \tr_{u_1} f(\alpha_1) + \dots + \epsilon_s \tr_{u_s}f(\alpha_s) \right) \right>,$$ which implies that $m_{n, d}^{{\bf k}, {\bf e}, {\bf h}}({\bm \alpha})$ takes the value 1 if $p \mid \epsilon_i$ for $1 \leq i \leq s$, and 0 otherwise. \end{proof}
Recall that $\pi(m)$ denotes the number of monic irreducible polynomials of degree $m$ in $\mathbb F_q[X]$.
\begin{lem} \label{generalcase2} Assume $k_1, \dots, k_n > 0$, $k_1 + \dots + k_n < d$. Then $M_{n, d}^{{\bf k}, {\bf e}, {\bf h}}$ is bounded by a sum of terms made of products of elementary terms of the type \[q^{-(j_1+\dots+j_r)/2} \sum_{{{m\mid (j_1,\dots,j_r)}\atop{mp\mid \sum_{i=1}^r e_i h_i j_i}}} \pi(m) m^r \] where the indices $j_1, \dots, j_r$ of the elementary terms appearing in each product are in bijection with $k_1, \dots, k_n$.
Let $N_{n, d}^{{\bf k}, {\bf e}, {\bf h}}$ be the sum of the terms made exclusively of products of elementary terms $$q^{-(j_1+j_2)/2} \sum_{{{m\mid (j_1,j_2)}\atop{mp\mid e_1h_1 j_1+ e_2h_2 j_2}}} \pi(m) m^2. $$ If $n$ is odd, these terms will also be multiplied by an elementary term $$e_{p,j} q^{-j/2} \sum_{{{m\mid j}\atop{mp\mid e j}}} \pi(m) m = e_{p,j}\, q^{-j/2} \sum_{m \mid \frac{j}{p}} \pi(m)m = e_{p,j}\, q^{-j/2}\, \# \mathbb F_{q^{j/p}} = e_{p,j}\, q^{-(1/2-1/p)j}.$$ Let $E_{n, d}^{{\bf k}, {\bf e}, {\bf h}}$ be the sum of all the other terms appearing in $M_{n, d}^{{\bf k}, {\bf e}, {\bf h}}$. Then, $M_{n, d}^{{\bf k}, {\bf e}, {\bf h}} = N_{n, d}^{{\bf k}, {\bf e}, {\bf h}} + O \left( E_{n, d}^{{\bf k}, {\bf e}, {\bf h}} \right).$ \end{lem}
\begin{proof} We first remark that the number of $(\alpha_1, \dots, \alpha_t) \in \mathbb F_{q^{k_1}} \times \dots \times \mathbb F_{q^{k_t}}$ which are conjugate over $\mathbb F_q$ is $$\sum_{m \mid (k_1, \dots, k_t)} \pi(m) m^t.$$ Using Lemma \ref{generalcase}, we then have to count the contribution coming from the ${\bm \alpha} = (\alpha_1, \dots, \alpha_n)$ such that $p \mid \epsilon_i$ for $1 \leq i \leq s$. Let $\mathcal{P}$ be the set of partitions of $\{1,\dots,n\}$ into $s$ subsets $T_1, \dots, T_s$. Let $k(T_j)$ be the gcd of the $k_i$ such that $i \in T_j$ and let $s(T_j)=\sum_{i\in T_j} e_i h_i k_i$. Then, for any such partition, the number of ${\bm \alpha} = (\alpha_1, \dots, \alpha_n) \in \mathbb F_{q^{k_1}} \times \dots \times \mathbb F_{q^{k_n}}$ such that $\alpha_i$ is a root of $g_j$ when $i \in T_j$ is less than or equal to $$
\sum_{{{m\mid k(T_1)}\atop{mp \mid s(T_1)}}} \pi(m)m^{|T_1|} \dots
\sum_{{{m\mid k(T_s)}\atop{mp \mid s(T_s)}}} \pi(m)m^{|T_s|} . $$ This proves the first statement of the lemma. We remark that the above count is an over-count, as it may also count polynomials $g_1, \dots, g_s$ which are not distinct. For example, the number of $(\alpha_1, \alpha_2, \alpha_3, \alpha_4) \in \mathbb F_{q^{j_1}} \times \dots \times \mathbb F_{q^{j_4}}$ with minimal polynomials $g_1=g_2, g_3=g_4$ and $g_1 \neq g_3$
is \begin{eqnarray*} &&q^{-(j_1+\dots+j_4)/2}\sum_{{{m\mid (j_1,j_2)}\atop{mp\mid e_1 h_1 j_1 + e_2 h_2 j_2}}} \pi(m) m^2 \sum_{{{m\mid (j_3,j_4)}\atop{mp\mid e_3 h_3 j_3 + e_4 h_4 j_4}}} \pi(m) m^2 - q^{-(j_1+\dots+j_4)/2} \sum_{{{m\mid (j_1,\dots,j_4)}\atop{mp\mid e_1 h_1 j_1 + \dots + e_4 h_4 j_4}}} \pi(m) m^4, \end{eqnarray*} which can be written as a term in $N_{n, d}^{{\bf k}, {\bf e}, {\bf h}}$ and a term in $E_{n, d}^{{\bf k}, {\bf e}, {\bf h}}$. The general case is similar. Suppose that $n=2\ell$ is even. Then, using inclusion-exclusion, the number of $(\alpha_1, \dots, \alpha_n) \in \mathbb F_{q^{k_1}} \times \dots \times \mathbb F_{q^{k_n}}$ such that $\alpha_i$ and $\alpha_{\ell+i}$ have minimal polynomial $g_i$, and all the $g_i$ are distinct can be written as \begin{eqnarray*} &&q^{-(k_1+\dots+k_{2\ell})/2} \left( \sum_{{{m\mid (k_1,k_{\ell+1})\atop{mp\mid e_1 h_1 k_1 + e_{\ell+1} h_{\ell+1}k_{\ell+1}}}}} \pi(m) m^2 \dots \sum_{{{m\mid (k_{\ell},k_{2 \ell})}\atop{mp\mid e_\ell h_\ell k_\ell + e_{2 \ell} h_{2\ell}k_{2\ell}}}} \pi(m) m^2 \right) + S(k_1, \dots, k_n)\\ \end{eqnarray*} \noindent where $S(k_1, \dots, k_n)$ is a sum of terms in $E_{n, d}^{{\bf k}, {\bf e}, {\bf h}}$.
The case of $n=2\ell+1$ follows similarly, taking into account that one has to multiply by the factor $e_{p,k_n} q^{-k_n/2} \sum_{{{m\mid k_n}\atop{mp\mid e k_n}}} \pi(m) m$. \end{proof}
We now compute \[ \left< S^\pm(K, f, \psi^{h_1})\dots S^\pm(K, f, \psi^{h_n}) \right>=\sum_{{k_1, \dots, k_n =1}\atop{e_1, \dots, e_n = \pm 1}}^K I_K^{\pm}(e_1 k_1) \dots I_K^{\pm}(e_n k_n) M_{n,d}^{{\bf k}, {\bf e}, {\bf h}}. \]
We will use $K = cd$ where $0 < c < 1/n$. Then, $k_i \leq K$ implies that $k_1 + \dots + k_n <d$, and we can apply the lemmas above.
Using Lemma \ref{generalcase2}, we have to compute sums of the type \begin{eqnarray} \sum_{k=1}^K \widehat{I}_K^{\pm}(k) q^{-(1/2 - 1/p)k} = C(K)= O(1), \end{eqnarray} and for $r \geq 2$ \begin{eqnarray*} \sum_{k_1,\dots,k_r=1}^K\widehat{I}_K^\pm(e_1 k_1) \dots \widehat{I}_K^\pm(e_r k_r)q^{-(k_1+\dots+k_r)/2}\sum_{{m\mid (k_1,\dots,k_r)}\atop{mp \mid \sum_{i=1}^r e_i h_ik_i}} \pi(m)m^r. \end{eqnarray*}
If $r=2$, the condition $mp \mid e_1 h_1 k_1 + e_2 h_2 k_2$ forces $p \mid e_1 h_1 k_1 + e_2 h_2 k_2$, and we have \begin{eqnarray} \label{bigones} &&\sum_{k_1, k_2=1}^K\widehat{I}_K^\pm(e_1 k_1) \widehat{I}_K^\pm(e_2 k_2)
q^{-(k_1+k_2)/2}\sum_{{m\mid (k_1,k_2)}\atop{mp \mid (e_1h_1k_1+e_2h_2k_2)}} \pi(m)m^2 \nonumber \\&&=\left \{\begin{array}{ll} \frac{1}{2 \pi^2} \log{\left(K |\mathcal{I}|\right)} +O(1)&e_1h_1+e_2h_2\equiv 0\, \mathrm{mod}\, p,\\\\O(1) & \text{otherwise}\end{array}\right. \end{eqnarray} as we computed in the proof of Theorems \ref{Mcovariance} and \ref{covariance}. (In those theorems we had the extra condition $mp \nmid k_1,k_2$ in the sum, but those additional terms only add an $O(1)$ to the final sum, and we can ignore them.)
For the other terms, we have \begin{lem} \label{rbig} Let $r>2$, then \[S:=\sum_{k_1,\dots,k_r=1}^K\widehat{I}_K^\pm(k_1) \dots \widehat{I}_K^\pm(k_r)q^{-(k_1+\dots+k_r)/2}\sum_{{m\mid (k_1,\dots,k_r)}\atop{mp\nmid (k_1,\dots,k_r)}} \pi(m)m^r=O(1).\] \end{lem} \begin{proof} Suppose for the moment that $k_1\geq\dots\geq k_r$. If $k_1=k_r$, we have \[\sum_{{m\mid (k_1,\dots,k_r)}\atop{mp\nmid (k_1,\dots,k_r)}} \pi(m)m^r=O\left(k_1^r q^{k_1} \right).\] If $k_1=2k_r$, and all the other $k_i$ are equal to $k_1$ or $k_r$, we have \[\sum_{{m\mid (k_1,\dots,k_r)}\atop{mp\nmid (k_1,\dots,k_r)}} \pi(m)m^r=O\left(k_1^r q^{k_1/2} \right).\] In all the other cases, \[\sum_{{m\mid (k_1,\dots,k_r)}\atop{mp\nmid (k_1,\dots,k_r)}} \pi(m)m^r=O\left(k_1^r q^{k_1/3} \right).\] Putting things together, we get \begin{eqnarray*}S&\ll &\sum_{k=1}^K\widehat{I}_K^\pm(k)^r k^rq^{-(r-2)k/2}+\sum_{\ell=1}^{r-1}\sum_{k=1}^K\widehat{I}_K^\pm(2k)^\ell \widehat{I}_K^\pm(k)^{r-\ell}k^rq^{(1-r/2-\ell/2)k}\\ && +\sum_{k_1,\dots,k_r=1}^K\widehat{I}_K^\pm(k_1) \dots \widehat{I}_K^\pm(k_r)k_1^rq^{-k_1/6-(k_2+\dots+k_r)/2}\\&\ll& 1\end{eqnarray*} by Proposition \ref{propmanysums}. \end{proof}
\begin{thm}\label{thm:Smoments}
For any $K$ with $\max\{1, 1/|\mathcal I|\} <K< d/n$
\[\left< S^\pm(K, f, \psi)^{n} \right>= \left\{\begin{array}{ll}\frac{(2 \ell)!}{\ell! (2\pi^2)^\ell} \log^\ell(K|\mathcal{I}|) \left(1+O\left(\log^{-1}(K|\mathcal{I}|)\right)\right) & n=2\ell,\\ \\
C\frac{(2\ell+1)!}{\ell! (2\pi^2)^{\ell}}\log^\ell(K|\mathcal{I}|)\left(1+O\left({\log^{-1}\left(K|\mathcal{I}|\right)}\right)\right) & n=2\ell+1, \end{array}\right.\] where $C$ is defined in Remark $\ref{Chantal's favorite nonconstant}$. \end{thm}
\begin{proof}
By Lemmas \ref{generalcase2} and \ref{rbig}, we observe that the leading term in $S^{\pm}(K, f, \psi)^n$ will come from the contributions $N_{n, d}^{{\bf k}, {\bf e}}$. By equation (\ref{bigones}), if $n=2\ell$, the leading terms are of the form
$$\left( \frac{1}{2\pi^2} \log{\left( K|\mathcal{I}| \right)} \right)^\ell$$ and if $n=2\ell+1$, the leading terms are of the form $$
C \left( \frac{1}{2\pi^2} \log{\left(K |\mathcal{I}|\right)} \right)^\ell. $$
The final coefficient is obtained by counting the number of ways to choose the $\ell$ (or $\ell+1$) coefficients $k_i$ with positive sign ($e_i=1$) and to pair them with those with negative sign ($e_j=-1$).
\end{proof} As $S^\pm(K, f, \psi)=S^\pm(K,f,\bar\psi)$, it is sufficient to study the
sum of $S^\pm(K, f, \psi^j)$ for $j$ up to $(p-1)/2$ rather than $p-1$.
We let \[ \delta_n(C)=\begin{cases} 1 & n=2\ell \\ C & n=2\ell+1. \end{cases} \] \begin{thm}\label{thm:genmoments}
Let $\ell=\lfloor \frac{n}{2}\rfloor$. Let $0< h_1,\dots, h_n\leq (p-1)/2$. Then for any $K$ with $\max\{1, 1/|\mathcal I|\} <K< d/n$,
\begin{eqnarray*}&&\left< S^\pm(K, f, \psi^{h_1})\dots S^\pm(K, f, \psi^{h_n}) \right>\\ &=& \delta_n(C)\frac{\Delta(h_1,\dots,h_n)}{(2\pi^2)^\ell} \log^\ell(K|\mathcal{I}|) \left(1+O\left(\log^{-1}(K|\mathcal{I}|)\right)\right) \end{eqnarray*} where $C$ is defined in Remark $\ref{Chantal's favorite nonconstant}$ and
\begin{equation*}\Delta(h_1,\dots,h_{n})=\# \{(e_1,\dots,e_{n})\in \{-1,1\}^n, \sigma \in \mathbb{S}_{n} \mid e_1h_{\sigma(1)}+e_2h_{\sigma(2)}\equiv \dots \equiv e_{2\ell-1}h_{\sigma(2\ell-1)}+e_{2\ell}h_{\sigma(2\ell)}\equiv 0 \, \mathrm{mod}\, p \}\end{equation*} where $\mathbb{S}_{n}$ denotes the symmetric group on $n$ elements. \end{thm}
\begin{proof}
By Lemmas \ref{generalcase2} and \ref{rbig}, we observe that the leading term in the product $S^\pm(K, f, \psi^{h_1})\dots S^\pm(K, f, \psi^{h_n})$ will come from the contributions $N_{n, d}^{{\bf k}, {\bf e}, {\bf h}}$. By Theorem \ref{covariance}, if $n=2\ell$, the leading terms are of the form
$$\left( \frac{1}{2\pi^2} \log{\left( K|\mathcal{I}| \right)} \right)^\ell$$ and if $n=2\ell+1$, the leading terms are of the form $$
C \left( \frac{1}{2\pi^2} \log{\left(K |\mathcal{I}|\right)} \right)^\ell. $$
The final coefficient is obtained by counting the number of ways to choose the $\ell$ (or $\ell+1$) coefficients $k_i$ with positive sign ($e_i=1$) and to pair them with the $k_j$ with negative sign ($e_j=-1$) in such a way that $p$ divides $e_ih_i+e_jh_j$.
\end{proof}
We note that if $n=2\ell$,
\begin{equation}\label{combinatorics}
\sum_{h_1, \dots, h_n=1}^{(p-1)/2} \Delta(h_1,\dots,h_n)=\frac{(p-1)^\ell (2 \ell)!}{2^\ell\ell! }.\end{equation} There are $\frac{(2\ell)!}{\ell!2^\ell}$ ways of choosing the pairs $\{e_i,e_j\}$ (because the order within each pair does not matter). For each pair, either $e_i$ or $e_j$ can be negative and the other one positive, so there are $2^\ell$ choices for the signs in total. Finally, for each pair there are $(p-1)/2$ possible values for $h_i$, and this determines $h_j$.
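The counting argument above can be checked by brute force for small parameters. The following Python sketch (ours, not part of the paper) enumerates the triples (pairing, signs, $h$-values) described in the text directly: for each perfect matching of $\{1,\dots,2\ell\}$ it counts, pair by pair, the sign and $h$ choices with $e_ih_i+e_jh_j\equiv 0 \bmod p$, and compares the total with $(p-1)^\ell(2\ell)!/(2^\ell\ell!)$.

```python
from itertools import product
from math import factorial

def matchings(elems):
    """Enumerate all perfect matchings of a list of even length."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + m

def pair_count(p):
    """Number of (e1, e2, h1, h2) with signs +-1 and 1 <= h <= (p-1)/2
    satisfying e1*h1 + e2*h2 = 0 mod p; this equals p - 1."""
    H = range(1, (p - 1) // 2 + 1)
    return sum(1 for e1, e2, h1, h2 in product((1, -1), (1, -1), H, H)
               if (e1 * h1 + e2 * h2) % p == 0)

def count_solutions(p, ell):
    """Total number of (pairing, signs, h-values) triples for n = 2*ell."""
    return sum(pair_count(p) ** ell for _ in matchings(list(range(2 * ell))))

p, ell = 7, 2
predicted = (p - 1) ** ell * factorial(2 * ell) // (2 ** ell * factorial(ell))
assert count_solutions(p, ell) == predicted  # 3 pairings * 6^2 = 108
```

Each pair contributes $p-1$ choices independently (opposite signs force $h_i=h_j$ since both lie in $(0,(p-1)/2]$), so the count factors as (number of pairings) $\times (p-1)^\ell$, in agreement with the formula.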
\begin{rem} A consequence of Theorem \ref{thm:genmoments} is that the moments are given by sums of products of covariances, exactly in the same way as the moments of a multivariate normal distribution. Moreover, the generating function of the moments converges due to \eqref{combinatorics}. Therefore, our random variables are jointly normal. Since the variables are uncorrelated (cf.~Theorem \ref{covariance}), it follows that our random variables are independent. \end{rem}
Recall that \[ S^\pm(K,C_f)=\sum_{j=1}^{p-1}S^{\pm}(K, f, \psi^j). \]
\begin{thm}\label{thm:sumisgaussian}
Assume that $K=d/\log\log(d|\mathcal I|)$, $d \rightarrow \infty$ and either $0<|\mathcal{I}|<1$ is fixed or $|\mathcal{I}| \rightarrow 0$
while $d|\mathcal{I}| \rightarrow \infty$. Then \[
\frac{S^\pm(K, C_f)}{\sqrt{\frac {2(p-1)}{\pi^2}\log(d|\mathcal{I}|)}} \]
has a standard Gaussian limiting distribution when $d \rightarrow \infty$. \end{thm} \begin{proof} First we compute the moments and then we normalize them. Let $\ell=\lfloor \frac{n}{2}\rfloor$. We note that with our choice of $K$ we have \[
\frac{\log (K|\mathcal I|)}{ \log (d|\mathcal I |)}=1- \frac{\log\log\log(d|\mathcal I|)}{\log (d|\mathcal I |)}.\]
Therefore, we can replace $\log (K|\mathcal I|)$ by $\log (d|\mathcal I |)$ in our formulas.
Recalling that $S^\pm(K, f, \psi^j)=S^\pm(K, f, \psi^{p-j})$, we have \begin{eqnarray*}
S^\pm(K,C_f)^n=\left(2\sum_{j=1}^{(p-1)/2}S^\pm(K, f, \psi^j) \right)^n=2^n\sum_{j_1, \dots, j_n=1}^{(p-1)/2}S^\pm(K,f,\psi^{j_1})\dots S^\pm(K,f,\psi^{j_n}). \end{eqnarray*} Therefore, we can compute the moment \begin{eqnarray*} \left\langle S^\pm(K, C_f)^n \right\rangle &=&2^n\sum_{j_1, \dots, j_n=1}^{(p-1)/2}\langle S^\pm(K,f,\psi^{j_1})\dots S^\pm(K,f,\psi^{j_n})\rangle \end{eqnarray*} and then by Theorem \ref{thm:genmoments} this is asymptotic to \begin{eqnarray*}
&&\frac{2^n\delta_{n}(C)}{(2\pi^2)^\ell}\log^\ell(d|\mathcal{I}|)\sum_{j_1, \dots, j_n=1}^{(p-1)/2} \Delta(j_1,\dots,j_n). \end{eqnarray*} Finally we use equation \eqref{combinatorics} to conclude that when $n=2\ell$, \[
\left\langle S^\pm(K, C_f)^n \right\rangle=\frac{2^n(p-1)^\ell (2 \ell)!}{2^\ell \ell! (2\pi^2)^\ell}\log^\ell(d|\mathcal{I}|)=\frac{(2\ell)!}{\ell!\pi^{2\ell}} (p-1)^\ell\log^\ell(d|\mathcal{I}|) \]
and the variance is asymptotic to $\frac{2(p-1)}{\pi^2}\log(d|\mathcal{I}|)$.
Hence the normalized moment converges to $0$ for $n$ odd and for $n$ even,
\[\lim_{d\rightarrow \infty} \frac{\left\langle S^\pm(K, C_f)^{2\ell} \right\rangle}{\left( \sqrt{\frac{2(p-1)}{\pi^2} \log (d|\mathcal I|)}\right)^{2\ell}} = \frac{(2\ell)!}{\ell!2^\ell}.\] \end{proof}
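As a numerical sanity check (ours, not part of the paper), the limiting normalized even moments $(2\ell)!/(\ell!\,2^\ell)$ obtained above are exactly the even moments of a standard Gaussian, which is what Theorem \ref{thm:sumisgaussian} asserts. The following Python sketch verifies this identity by a plain midpoint quadrature of $\int x^{2\ell}e^{-x^2/2}\,dx/\sqrt{2\pi}$.

```python
import math

def normalized_moment(ell):
    """(2*ell)! / (ell! * 2**ell), i.e. (2*ell - 1)!!, the claimed limit."""
    return math.factorial(2 * ell) // (math.factorial(ell) * 2 ** ell)

def gaussian_moment_numeric(ell, lim=12.0, steps=200000):
    """2*ell-th moment of a standard normal via a midpoint Riemann sum."""
    h = 2 * lim / steps
    total = 0.0
    for i in range(steps):
        x = -lim + (i + 0.5) * h
        total += x ** (2 * ell) * math.exp(-x * x / 2)
    return total * h / math.sqrt(2 * math.pi)

for ell in range(1, 5):
    exact = normalized_moment(ell)
    assert abs(gaussian_moment_numeric(ell) - exact) < 1e-6 * exact
```

The odd moments vanish by symmetry, matching the vanishing of the odd normalized moments in the proof.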
\section{Proof of main theorem}\label{proof} We prove in this section that $$ \frac{N_{\mathcal{I}}(C_f)
- 2g |\mathcal{I}|}{\sqrt{(2(p-1)/\pi^2) \log(d |\mathcal{I}|)}}$$ converges in mean square to
$$\frac{S^{\pm}(K, C_f)}{{\sqrt{(2(p-1)/\pi^2) \log(d |\mathcal{I}|)}}}.$$ Then, using Theorem \ref{thm:sumisgaussian}, we get the result of Theorem \ref{mainthm} since convergence in mean square implies convergence in distribution.
\begin{lem} Assume that $K =d/\log \log(d|\mathcal{I}|)$, $d\rightarrow \infty$ and either $0<|\mathcal{I}|<1$ is fixed or
$|\mathcal{I}|\rightarrow 0$ while $d |\mathcal{I}|\rightarrow \infty$. Then
\[\left< \left| \frac{N_\mathcal{I}(C_f) - (d-1)(p-1) |\mathcal{I}| +S^{\pm}(K,C_f)}{\sqrt{(2(p-1)/\pi^2) \log (d |\mathcal{I}|)}}\right|^2 \right>\rightarrow 0.\] \end{lem} \begin{proof} From equation (\ref{T-estimate}) of Section \ref{1mom}, using the Beurling-Selberg polynomials and the explicit formula (Lemma \ref{Explicit-Formula}), we deduce that \begin{eqnarray*}
\frac{-(p-1)(d-1)}{K+1} &\leq& N_\mathcal{I}(C_f) - (p-1)(d-1) |\mathcal{I}| +S^-(K, C_f) \\& \leq &S^-(K, C_f) - S^+(K,C_f) +\frac{(p-1)(d-1)}{K+1} \end{eqnarray*} and \begin{eqnarray*}
\frac{-(p-1)(d-1)}{K+1} &\leq& -N_\mathcal{I}(C_f) + (p-1)(d-1) |\mathcal{I}| -S^+(K, C_f) \\ &\leq& S^-(K, C_f) - S^+(K,C_f) + \frac{(p-1)(d-1)}{K+1}. \end{eqnarray*}
Using these two inequalities to bound the absolute value of the central term, we obtain \begin{eqnarray*}
&&\left< \left ( N_\mathcal{I}(C_f) - (p-1)(d-1) |\mathcal{I}| + S^{\pm}(K, C_f) \right)^2 \right>\\ &\leq& \max \left \{ \left(\frac{(p-1)(d-1)}{K+1} \right)^2, \left< \left(S^{-}(K, C_f) - S^{+}(K,C_f) +\frac{(p-1)(d-1)}{K+1}\right)^2 \right> \right\} \\ &&\leq \left(\frac{(p-1)(d-1)}{K+1} \right)^2 \\&+& \max \left \{ 0, \left< \left( S^{-}(K, C_f) - S^{+}(K, C_f) \right)^2 \right> + 2 \frac{(p-1)(d-1)}{K+1} \left< S^{-}(K, C_f) - S^{+}(K, C_f) \right> \right\}. \end{eqnarray*}
Now using the estimate in the proof of Theorem \ref{averageET}, we have that \begin{eqnarray*}
\left\langle S^{-}(K, C_f) - S^{+}(K, C_f) \right\rangle &=& \left< S^{-}(K, C_f) \right> - \left< S^{+}(K, C_f) \right> =O(1). \end{eqnarray*} For the remaining term we note that \begin{eqnarray*} &&\left\langle \left(S^{-}(K, C_f) - S^{+}(K, C_f)\right)^2\right \rangle \\ &=&\left\langle \left(S^{-}(K, C_f)\right)^2\right\rangle + \left\langle \left(S^{+}(K, C_f)\right)^2\right\rangle-2\left\langle \sum_{j_1, j_2=1}^{p-1} S^{-}(K, f, \psi^{j_1}) S^{+}(K, f, \psi^{j_2})\right\rangle. \end{eqnarray*} By Corollary \ref{cor:2ndmoment}, this equals
\[\frac{4(p-1)}{\pi^2}\log(d|\mathcal I|)+O(1)-\frac{4(p-1)}{\pi^2}\log(d|\mathcal I|)+O(1)=O(1).\] Therefore,
\[\left< \left ( N_\mathcal{I}(C_f )- (p-1)(d-1) |\mathcal{I}| +S^{\pm}(K, C_f) \right)^2 \right>=O\left(\left(\frac{(p-1)(d-1)}{K+1}\right)^2\right)\] and
\[\left \langle \left( \frac{N_\mathcal{I}(C_f) - (p-1)(d-1) |\mathcal{I}|+S^\pm(K, C_f)}{\sqrt{(2(p-1)/\pi^2) \log(d|\mathcal{I}|)} }\right)^2 \right \rangle \rightarrow 0\]
when $d$ tends to infinity and $K =d/\log \log(d|\mathcal{I}|)$. \end{proof}
\begin{rem}
Proposition \ref{prop} is proved in a similar way. For this, one uses Theorem \ref{thm:Smoments} to examine the moments of \[
\frac{S^\pm(K, f, \psi)+S^\pm(K, f, \bar{\psi})}{\sqrt{\frac {4}{\pi^2}\log(d|\mathcal{I}|)}}=\frac{2S^\pm(K, f, \psi)}{\sqrt{\frac {4}{\pi^2}\log(d|\mathcal{I}|)}}. \] \end{rem}
\section*{Acknowledgments} The authors would like to thank Ze\'ev Rudnick for many useful discussions while preparing this paper. The authors are also grateful to Louis-Pierre Arguin, Andrew Granville and Rachel Pries for helpful conversations related to this work. The first, third and fifth named authors thank the Centre de Recherche Math\'ematique (CRM) and the Mathematical Sciences Research Institute (MSRI) for their hospitality.
This work was supported by the National Science Foundation of U.S. [DMS-1201446 to B. F.],
the Simons Foundation [\#244988 to A. B.], the UCSD Hellman Fellows Program [2012-2013 Hellman Fellowship to A. B.], the Natural Sciences and Engineering Research Council
of Canada [Discovery Grant 155635-2008 to C. D., 355412-2008 to M. L.] and the Fonds de recherche du Qu\'ebec - Nature et technologies [144987 to M. L., 166534 to C. D. and M. L.].
\end{document}
\begin{document}
\title[] {\bf Topology of Surfaces with Finite Willmore Energy} \author[ ] {Jie Zhou} \address{ \newline Department of Mathematical Sciences, Tsinghua University, Beijing, P. R. China, 100084 {\tt Email: zhoujiemath@mail.tsinghua.edu.cn}} \today
\maketitle \begin{abstract}
In this paper, we study the critical case of the Allard regularity theorem. Combining it with Reifenberg's topological disk theorem, we get a critical Allard-Reifenberg type regularity theorem. As a main result, we get the topological finiteness for a class of properly immersed surfaces in $\mathbb{R}^n$ with finite Willmore energy. In particular, we prove the removability of a singularity of a multiplicity one surface with finite Willmore energy and a uniqueness theorem for the catenoid under no a priori topological finiteness assumption. \end{abstract} \tableofcontents \section{Introduction} Assume $\Sigma\subset \mathbb{R}^n$ is a properly immersed smooth surface and denote the immersion by $f:\Sigma\to \mathbb{R}^n$. Let $g=f^*g_{\mathbb{R}^n}$ be the induced metric and $H_f=\triangle_g f$ be the mean curvature. If $H_f=0$, $f$ is called a minimal immersion and $\Sigma$ is called an immersed minimal surface in $\mathbb{R}^n$. One of the most important properties of minimal surfaces in $\mathbb{R}^n$ is the monotonicity formula, i.e., for $x\in \mathbb{R}^n$, $$\Theta(x,r)=\frac{\mathcal{H}^2(B_r(x)\cap \Sigma)}{\pi r^2}$$ is increasing in $r$, where $\mathcal{H}^2$ is the two dimensional Hausdorff measure in $\mathbb{R}^n$. It implies that the density $$\Theta(\Sigma,\infty)=\lim_{r\to +\infty}\Theta(x,r)\in [1,\infty]$$ of a minimal surface at infinity is well defined. A first important fact about the density of a minimal surface is the following corollary of the Allard regularity theorem \cite{A72}: if an immersed minimal surface satisfies $\Theta(\Sigma,\infty)<1+\varepsilon$ for $\varepsilon$ sufficiently small, then $\Sigma$ is a plane. For $\Theta(\Sigma,\infty)=2$, in the case $n=3$, there are two typical nontrivial examples---the catenoid ($x_1^2+x_2^2=\cosh^2 x_3$) and Scherk's singly-periodic surface.
They are both embedded minimal surfaces. The catenoid is rotationally symmetric, is the simplest minimal surface other than the plane, and can be regarded as the fundamental solution of the minimal surface equation. The catenoid has finite topology and finite total curvature, but Scherk's singly-periodic surface has infinite topology and infinite total curvature. Karcher \cite{K88} found that there is a one-parameter deformation $\Sigma_\theta, \theta\in (0,\frac{\pi}{2}]$, of Scherk's surface $\Sigma_{\frac{\pi}{2}}$. They are all embedded minimal surfaces with $\Theta(\Sigma_{\theta},\infty)=2$ and are also called Scherk's surfaces. Conversely, Meeks and Wolf proved: \begin{thmn}[\bf{Meeks-Wolf,\cite{MW07}}] A connected properly immersed minimal surface in $\mathbb{R}^3$ with infinite symmetry group and $\Theta(\Sigma,\infty)<3$ is a plane, a catenoid or a Scherk singly-periodic minimal surface $\Sigma_\theta,\theta\in (0,\frac{\pi}{2}]$. \end{thmn} They conjecture that the infinite symmetry condition can be removed (see also Conjecture 10 in \cite{M03}). For $3\le\Theta(\Sigma,\infty)<\infty$, there is no such clear classification, and Meeks and Wolf also conjecture that such minimal surfaces admit a unique tangent cone at infinity \cite[Conjecture 1]{MW07}.
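For the catenoid, the density at infinity can be seen concretely. In the parametrization $(\cosh t\cos s,\cosh t\sin s,t)$ the area element is $\cosh^2 t\, dt\, ds$, and the distance of a point from the origin is $\sqrt{\cosh^2 t+t^2}$, independent of $s$. The following Python sketch (ours, not from the paper) computes the area ratio $\Theta(0,r)$ from this closed form and confirms numerically that it approaches $2$ as $r\to\infty$.

```python
import math

def catenoid_density(r):
    """Area(catenoid in B_r(0)) / (pi r^2) for x^2 + y^2 = cosh^2(z),
    parametrized by (cosh t cos s, cosh t sin s, t)."""
    assert r > 1.0  # the catenoid's closest point to the origin is at distance 1
    # solve cosh(T)^2 + T^2 = r^2 by bisection; T <= log(2r) + 1 always suffices
    lo, hi = 0.0, math.log(2 * r) + 1
    for _ in range(200):
        mid = (lo + hi) / 2
        if math.cosh(mid) ** 2 + mid ** 2 < r * r:
            lo = mid
        else:
            hi = mid
    T = lo
    # Area = 2*pi * int_{-T}^{T} cosh(t)^2 dt = 2*pi*(T + sinh(T)*cosh(T))
    area = 2 * math.pi * (T + math.sinh(T) * math.cosh(T))
    return area / (math.pi * r * r)

# the density ratio approaches 2 at large radius
assert abs(catenoid_density(1e6) - 2.0) < 1e-3
```

Since $\sinh T\cosh T\approx\cosh^2T\approx r^2$ for large $r$, the ratio tends to $2$, consistent with $\Theta(\Sigma,\infty)=2$ for the catenoid.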
Besides the above uniqueness result of Meeks and Wolf, there are many classical classification theorems for minimal surfaces \cite{O64}\cite{O69}\cite{LR91}\cite{L92}\cite{C84}\cite{C89}\cite{C91}\cite{S83}\cite{HM90}. Their common requirement is that the minimal surface have finite total curvature, i.e., $$\int_{\Sigma}|A|^2d\mathcal{H}^2<\infty,$$ where $A$ is the second fundamental form of the surface. In particular, by the moving plane method, Schoen \cite{S83} proved that the only connected complete immersed minimal surface in $\mathbb{R}^3$ with finite total curvature and two embedded ends is the catenoid. There is a purely topological description of embedded minimal surfaces with finite total curvature. A surface is said to have finite topology if it is homeomorphic to a closed surface with finitely many points removed. The number of ends of a properly immersed minimal surface is defined as the number of noncompact connected components of the surface at infinity, i.e.,
$$e(\Sigma,\infty)=\lim_{r\to \infty}\tilde{\beta}_0(\Sigma\cap(\mathbb{R}^n\backslash B_r(0)))\in [1,\infty],$$
where by $\tilde{\beta}_0$ we mean the number of noncompact connected components of a topological space. Each such noncompact connected component at infinity is called an end of $\Sigma$. On the one hand, by Huber's result \cite{H57}, any surface in $\mathbb{R}^n$ with finite total curvature must have finite topology. On the other hand, with Meeks and Rosenberg's \cite{MR93} classification of the complex structure of properly embedded minimal surfaces with at least two ends, Collin \cite{C97} proved that a properly embedded minimal surface in $\mathbb{R}^3$ with at least two ends has finite total curvature if and only if it has finite topology. In both \cite{MR93} and \cite{C97}, the assumption $e(\Sigma,\infty)\ge 2$ is necessary to rule out helicoid type ends. Distinguishing the number of ends is also helpful for understanding Meeks' conjecture. The catenoid has two ends. But the ``two" tangent planes of Scherk's singly-periodic surface join together, which forces the surface to possess only one end. So a corollary of Meeks' conjecture is that the only connected properly immersed minimal surface in $\mathbb{R}^3$ with $\Theta(\Sigma,\infty)<3$ and at least two ends is the catenoid. By the results of Schoen and Collin recalled above, the only gap to the corollary is the topological finiteness of the surface. This topological finiteness is the main question we care about in this paper:
{\color{blue} For a surface immersed in $\mathbb{R}^n$ with finite Willmore energy $\int_{\Sigma}|H|^2d\mathcal{H}^2<\infty$ (or simply, $H=0$), when does it have finite topology?}
The counterexample of Scherk's singly-periodic minimal surface gives some geometric intuition: the number of ends should not be too small compared with the density. Otherwise, ``different" tangent planes at infinity will twist together to form infinitely many handles. And our answer is:
\begin{thm}[$\mathbf{Finite\ Topology}$]\label{main}Assume $\Sigma\subset \mathbb{R}^{2+k}$ is a properly immersed open surface with finite Willmore energy, i.e.,
$$\int_{\Sigma}|H|^2d\mathcal{H}^2<\infty.$$
If its number of ends is not less than the lower density at infinity, more precisely,
$$e(\Sigma,\infty)>\Theta_*(\Sigma,\infty)-1, \qquad \Theta_*(\Sigma,\infty)<+\infty,$$
then $\Sigma$ has finite topology and finite total curvature, and $\Theta(\Sigma,\infty)=e(\Sigma,\infty)$ is an integer.
\end{thm}
By a geometric measure theory argument \cite{KLS} (see Remark \ref{end density}), the assumption $e(\Sigma,\infty)>\Theta_*(\Sigma,\infty)-1$ in fact implies that $\Sigma$ has exactly $e=e(\Sigma,\infty)=\Theta(\Sigma,\infty)$ many ends and each of them has density one at infinity. By the compactness theorem for integral varifolds \cite{A72}, these ends blow down to planes with multiplicity one. Thus by Leon Simon's theorem on the uniqueness of the tangent cone with smooth cross section \cite{LS83b}\cite[page 269, the paragraph after Theorem 5.7]{LS85}, in the case of
$$H=0,$$
each end of $\Sigma$ is a graph over a tangent plane, hence already has finite topology.
In \cite{LS83b} and \cite{LS85}, by using the variational structure and PDE techniques, especially the monotonicity formula and the $3$-circle theorem, Leon Simon established a decay estimate around the isolated singularities of solutions of very general variational equations and obtained the uniqueness of the tangent cone at isolated singular points. Leon Simon also showed \cite{LS85} that the same method works for the tangent cone at infinity. This method is very powerful in analysing the asymptotic behavior of geometric PDEs. For decades, the general method has been applied to many geometric objects including minimal surfaces, harmonic maps, Einstein metrics and the corresponding geometric flows. These conclusions carry much more analytic information than topological finiteness. We are trying to understand whether, if we only care about the topology, we can get a soft result under looser conditions, without an equation. Theorem \ref{main} is the answer. Below we still take the case $H=0$ to explain our key observation. This does not lose generality.
The idea emerged while we were examining minimal surfaces under inversion. By combining the monotonicity formulae of a minimal surface $\Sigma$ and its inverted surface $\tilde{\Sigma}$ with a key conformal antisymmetric invariance we observe (see (\ref{localinvariant})), we get the following density identity:
\begin{align*}
\Theta(\tilde{\Sigma},p)
=\frac{1}{16\pi}\int_{\tilde{\Sigma}\backslash\{p\}}|\tilde{H}|^2d\mu_{\tilde{g}}
=\Theta(\Sigma,\infty), \ \ p\notin \Sigma,
\end{align*}
which means that the single quantity $\Theta(\Sigma,\infty)$ controls both the Willmore energy $\int_{\tilde{\Sigma}\backslash\{p\}}|\tilde{H}|^2d\mu_{\tilde{g}}$ and the local density $\Theta(\tilde{\Sigma},p)$ of $\tilde{\Sigma}$ at the inverting base point $p$.
This implies that if we invert only one end with density one, then we get a varifold with density one at the inverting point and bounded Willmore energy, which is on the borderline of the classical Allard regularity theorem. Recall that the Allard regularity theorem \cite{A72} says that if an integral $n$-varifold $V=\underline{v}(M,\theta)$ in $B_r(0)\subset\mathbb{R}^{n+k}$ satisfies
\begin{align*}
\Theta^n(V,0)<1+\varepsilon, \ \ \ \ \ \ \ \ \ (r^{p-n}\int_{B_r(0)}|H|^p)^{1/p}<\varepsilon
\end{align*}
for some $p>n$, $\varepsilon$ small and $0\in sptV$, then the varifold is a $C^{1,\alpha}$ graph with $\alpha=1-\frac{n}{p}$ in a small neighborhood of $0$. For a smooth immersion $f:M^n\to \mathbb{R}^{n+k}$, $H=\triangle_g f$. If we compare a varifold to a function, the generalized mean curvature should be regarded as the weak ``Laplacian". From this viewpoint, the Allard regularity theorem can be regarded as a geometric, nonlinear, perturbed version of the $W^{2,p}$ estimates for solutions of linear elliptic equations, combined with the Sobolev embedding theorem
$W^{2,p}\hookrightarrow C^{1,1-\frac{n}{p}}$.
\begin{table}[H]
\begin{tabular}{|c|c|c|c|c|} \hline Geometry & smooth manifold&Varifold& weak $H$ & {\color{blue} Allard Regularity}\\ \hline Analysis&smooth function& Sobolev function & $\triangle_{dist} f$ & $W^{2,p}$ Estimate\\ \hline \end{tabular} \end{table}
But the mean curvature equation is nonlinear: to get regularity, one first performs a linear approximation and then uses a supercritical index (here $p>n$) to run an iteration scheme and obtain a Campanato type regularity estimate. We are now in the critical case $p=n=2$. The best expected result is a regularity of type ($W^{2,2}\hookrightarrow$) $C^{\alpha}$. By experience with graphical estimates (see \cite[Lemma 2.11]{CM04} or \cite[Lemma 2.4]{CM11}), a graphical result always corresponds to a Lipschitz estimate, which seems impossible in our case. So we may not get a $C^{\alpha}$ graph, but we can still hope for a $C^{\alpha}$ parametrization, which is also enough to show that the end is embedded and has finite topology. There is further positive evidence. In \cite{SZ}, Sun and the author proved that a properly immersed smooth surface in the unit ball with finite area and small total curvature admits a $C^{\alpha}$ parametrization with uniform estimates at some uniformly small scale, which can be regarded as a geometric perturbed version of the Sobolev embedding $W^{2,2}\hookrightarrow C^{\alpha}$. This indicates that the $C^{\alpha}$ parametrization is plausible and encourages us to check the original proof of the Allard regularity theorem in the critical case. It turns out that there is no difficulty in obtaining the Lipschitz approximation \cite[Section 20]{LS83} from Leon Simon's monotonicity formula \cite{LS93}, but it is impossible to run the iteration scheme to get a decay of the tilt-excess \cite[Section 22]{LS83}.
Fortunately, there is a well developed criterion for the $C^{\alpha}$ regularity of a closed set in $\mathbb{R}^n$, namely Reifenberg's topological disk theorem \cite{R60}\cite{M66}\cite{LS96}, whose proof contains a geometric iteration scheme.
Reifenberg's theorem was established in 1960. In the last twenty years, many mathematicians have used this method to study the regularity of both Ricci limit spaces and Radon measures; we refer readers interested in related topics to \cite[Appendix]{CC97} \cite{DKT01}\cite{P98}\cite[section 7]{DP02}\cite{DP07}. In particular, Paolini \cite{P98} proved the $C^{\alpha}$ regularity of minimal boundaries in $\mathbb{R}^n$ with mean curvature in $L^n$. Similarly, in our critical case, combining with the Lipschitz approximation, we can check Reifenberg's condition. As a result, we get the $C^{\alpha}$ regularity of rectifiable $2$-varifolds with square integrable generalized mean curvature at those points with density close to one. See Theorem \ref{Holder Regularity} for the precise statement.
As an application of Theorem \ref{main}, we study isolated singularities of properly immersed surfaces with finite Willmore energy. We obtain the removability of such singularities under the assumption that the density is less than two. We do not assume a priori that the surface has finite topology or finite total curvature. See Corollary \ref{removability} for details. As corollaries of Theorem \ref{main}, we also give a simple proof of the uniqueness of the catenoid (see Corollary \ref{catenoid}) and analyze the structure of minimal ends in $\mathbb{R}^{2+k}$ with multiplicity less than two.
This paper is organized as follows. In section \ref{lipschitz approximation}, we prove the Lipschitz approximation theorem. In section \ref{reifenberg}, we check the Reifenberg condition and complete the proof of the $C^\alpha$ regularity. In section \ref{application}, we deduce the density identity for inverted minimal surfaces and apply the $C^{\alpha}$ regularity theorem to ends with density less than two to get the main theorem of this paper. In section \ref{realapplication}, we give the two applications.
\section{Lipschitz Approximation}\label{lipschitz approximation} In this section, we establish the Lipschitz approximation theorem in the critical case. It is the first step in proving the Allard regularity theorem, and many of the ideas are similar to those of \cite{LS83} (see also \cite[section 5.2, 5.4, 5.5, 5.6]{LS14}), except for a careful analysis involving the remainder term of Leon Simon's monotonicity identity (\ref{monotonicity equality}) in the proof of Lemma \ref{band} and some other details. We also focus on the semi-Reifenberg condition (\ref{semireifenberg}), which is essential for the proof of the $C^\alpha$ regularity theorem in section \ref{reifenberg}.
For a rectifiable $2$-varifold $V=\underline{v}(\Sigma,\theta)$ in an open set $U\subset \mathbb{R}^n$, we always denote the corresponding Radon measure by $\mu=\mu_V=\mathcal{H}^2\llcorner \theta$, i.e., for any Borel set $A\subset \mathbb{R}^n$, $$\mu(A)=\mu_V(A)=\int_{A\cap \Sigma}\theta d\mathcal{H}^2.$$
The following is the main result of this section---the Lipschitz approximation theorem. \begin{thm}[\textbf{Lipschitz Approximation for 2-varifold}]\label{Lipschitz Approximation for 2-varifold} Assume $V=\underline{v}(\Sigma,\theta)$ is a rectifiable 2-varifold in $U\supset B_{\rho}(0)\subset \mathbb{R}^{2+k}$ with generalized mean curvature $H\in L^2(d\mu_V)$, $0\in sptV$ and $\theta\ge 1$ for $\mu_V$-a.e. $x\in U$. Then there exists a small constant $\delta_6(=\frac{1}{2^{1688}k^{40}})$ such that for any $\delta\le \delta_6$, if
$$\frac{\mu_V(B_{\rho}(0))}{\pi \rho^2}\le1+\delta\text{\ and \ }\int_{B_{\rho}(0)}|H|^2\le \delta,$$ then for any $\xi\in B_{\frac{1}{2}\delta^{\frac{1}{2}}\rho}(0)$ and $\sigma\in (0,\frac{1}{2^{16}}\delta^{\frac{1}{2}}\rho)$, there exist $T=T(\xi, \sigma)\in G_{2+k,2}(\mathbb{R})$ passing through $\xi$ and a Lipschitz function $$f=(f^1,f^2,...f^k):B_{\sigma}(\xi)\cap T\to \mathbb{R}^k:=T^{\bot}$$
with \begin{align*}
&i) \ \ Lipf\le\delta^{\frac{1}{40}},\\
&ii) \ \ \operatorname*{sup}\limits_{x\in B_{\sigma}(\xi)}|f(x)|\le\delta^{\frac{1}{40}}\sigma,\\
&iii)\ \ \operatorname*{sup}\limits_{x\in B_{\sigma}(\xi)\cap spt\mu_V}|q(x)|\le\delta^{\frac{1}{40}}\sigma,\\
&iv) \ \ \mathcal{H}^2((graphf\backslash sptV)\cap B_{\sigma}(\xi))+\mu_V(B_{\sigma}(\xi)\backslash graphf)\le 2^{83}\delta^{\frac{1}{16}}\pi \sigma^2, \end{align*} where $q:\mathbb{R}^{2+k}\to T^\bot$ is the orthogonal projection. \end{thm} \subsection{Preliminaries} \
We begin with some preliminaries: the monotonicity formula and its corollaries. \begin{lemma}\cite{LS93}\cite{KS} Assume $V=\underline{v}(\Sigma, \theta)$ is a rectifiable 2-varifold in an open set $U\subset \mathbb{R}^{2+k}$ with generalized mean curvature $H\in L^2(d\mu)$. Then, for any $x\in \mathbb{R}^{2+k}$, and $0<\sigma<\rho<\infty$ with $B_\rho(x)\subset U$, \begin{align}\label{monotonicity equality} \frac{\mu(B_{\sigma}(x))}{\sigma^2}=
\frac{\mu(B_{\rho}(x))}{\rho^2}&+\frac{1}{16}\int_{B_{\rho}(x)\backslash B_{\sigma}(x)}|H|^2d\mu-\int_{B_{\rho}\backslash B_{\sigma}}|\frac{\nabla^\bot r}{r}+\frac{H}{4}|^2d\mu\nonumber\\ &+\frac{1}{2\rho^2}\int_{B_{\rho}(x)}r\langle\nabla^\bot r,H\rangle d\mu-\frac{1}{2\sigma^2}\int_{B_{\sigma}(x)}r\langle\nabla^\bot r,H\rangle d\mu, \end{align}
where $r=r_x=|\cdot-x|$. Moreover, for any $\delta\le 1$, we have \begin{align}\label{monotonicity inequality} \frac{\mu(B_{\sigma}(x))}{\sigma^2}\le
(1+\delta)\frac{\mu(B_{\rho}(x))}{\rho^2}+\frac{1}{2\delta}\int_{B_{\rho}(x)}|H|^2d\mu. \end{align} \end{lemma}
\begin{cor}\label{density at each point}
Assume $V=\underline{v}(\Sigma, \theta)$ is a rectifiable 2-varifold in an open set $U\subset \mathbb{R}^{2+k}$ with generalized mean curvature $H\in L^2(d\mu)$ and $B_\rho(0)\subset U$. If $\int_{B_{\rho}(0)}|H|^2d\mu+\mu(B_{\rho}(0))<\infty$ and $\theta(x)\ge 1$ for $\mu$-a.e. $x\in spt\mu$, then \begin{align*} \Theta(x)=\lim_{\tau\to 0}\frac{\mu(B_\tau(x))}{\pi\tau^2} \end{align*} is well-defined in $\breve{B}_\rho(0)$ and is upper semi-continuous. Moreover, for any $x\in \breve{B}_\rho(0)$, $$\Theta(x)\ge 1.$$ \end{cor} \begin{proof} See \cite[Appendix]{KS}.
\end{proof}
The following corollary is preparation for section \ref{application}. For simplicity, we will omit the measure notation $d\mu$ under the integral sign from now on. \begin{cor}\label{integral discription of upper density at infinity}
Assume $V=\underline{v}(\Sigma, \theta)$ is a rectifiable 2-varifold in $\mathbb{R}^n$ with generalized mean curvature $H\in L^2(\mathbb{R}^n,d\mu_V)$. Then, for any $x\in \mathbb{R}^{n}$, $$\Theta^*(V,\infty):=\limsup_{r\to \infty}\frac{\mu_V(B_r(x))}{\pi r^2}<+\infty$$ if and only if $$\Theta_*(V,\infty):=\liminf_{r\to \infty}\frac{\mu_V(B_r(x))}{\pi r^2}<+\infty$$ if and only if $$\int_{\mathbb{R}^n}|\frac{\nabla^{\bot}r_x}{r_x}|^2<\infty.$$
Moreover, if one of the above conditions holds, then for any $\rho\in (0,\infty)$, \begin{align}\label{everyradius}
\frac{\mu_V(B_\rho(x))}{\pi \rho^2}\le 9\Theta_*(V,\infty)+\frac{59}{16\pi}\int_{\mathbb{R}^n}|H|^2. \end{align} \end{cor} \begin{proof}
For simplicity, we denote $r_x=|\cdot-x|$ by $r$. Since $$|\frac{1}{2\sigma^2}\int_{B_{\sigma}(x)}r\langle \nabla^{\bot}r,H\rangle|\le \frac{(\mu(B_{\sigma}(x)))^{\frac{1}{2}}}{2\sigma}\|H\|_{L^{2}(B_{\sigma}(x))}\to 0,$$ letting $\sigma\to 0$ in the monotonicity formula (\ref{monotonicity equality}), we get \begin{align}
\int_{B_{\rho}(x)}|\frac{\nabla^{\bot}r}{r}+\frac{H}{4}|^2-\frac{1}{2\rho^2}\int_{B_{\rho}(x)}r\langle\nabla^{\bot}r,H\rangle
=\frac{\mu_V(B_{\rho}(x))}{\rho^2}-\pi \Theta(x)+\frac{1}{16}\int_{B_{\rho}(x)}|H|^2, \end{align}
which implies $\int_{B_{\rho}(x)}|\frac{\nabla^{\bot}r}{r}|^2<+\infty$. Note \begin{align*}
\frac{1}{2}\int_{B_{\rho}(x)}|\frac{\nabla^{\bot}r}{r}|^2-\int_{B_{\rho}(x)}|\frac{H}{4}|^2\le\int_{B_{\rho}(x)}|\frac{\nabla^{\bot}r}{r}+\frac{H}{4}|^2\le2\int_{B_{\rho}(x)}(|\frac{\nabla^{\bot}r}{r}|^2+|\frac{H}{4}|^2) \end{align*} and \begin{align*}
|\frac{1}{2\rho^2}\int_{B_{\rho}(x)}r\langle \nabla^{\bot}r,H\rangle|\le \frac{1}{4}\int_{B_{\rho}(x)}|\frac{\nabla^{\bot}r}{r}|^2+\frac{1}{4}\int_{B_{\rho}(x)}|H|^2. \end{align*} We know \begin{align*}
\frac{1}{4}\int_{B_{\rho}(x)}|\frac{\nabla^{\bot}r}{r}|^2-\frac{3}{8}\int_{B_{\rho}(x)}|H|^2 &\le\frac{\mu_V(B_{\rho}(x))}{\rho^2}-\pi \Theta(x)\\
&\le \frac{9}{4}\int_{B_{\rho}(x)}|\frac{\nabla^{\bot}r}{r}|^2+\frac{5}{16}\int_{B_{\rho}(x)}|H|^2. \end{align*} Letting $\rho\to\infty$, we get \begin{align*}
\frac{1}{4}\int_{\mathbb{R}^n}|\frac{\nabla^{\bot}r}{r}|^2-\frac{3}{8}\int_{\mathbb{R}^n}|H|^2 &\le\pi(\Theta_*(V,\infty)-\Theta(x))\\ &\le\pi(\Theta^{*}(V,\infty)-\Theta(x))\\
&\le\frac{9}{4}\int_{\mathbb{R}^n}|\frac{\nabla^{\bot}r}{r}|^2+\frac{5}{16}\int_{\mathbb{R}^n}|H|^2. \end{align*} Finally, we combine the last two displays: the first chain gives $\int_{\mathbb{R}^n}|\frac{\nabla^{\bot}r}{r}|^2\le 4\pi(\Theta_*(V,\infty)-\Theta(x))+\frac{3}{2}\int_{\mathbb{R}^n}|H|^2$; substituting this into the upper bound for $\frac{\mu_V(B_{\rho}(x))}{\rho^2}-\pi\Theta(x)$ and using $\frac{9}{4}\cdot\frac{3}{2}+\frac{5}{16}=\frac{59}{16}$ together with $\Theta(x)\ge0$, we get \begin{align*} \frac{\mu_V(B_\rho(x))}{\pi \rho^2}
\le 9\Theta_*(V,\infty)+\frac{59}{16\pi}\int_{\mathbb{R}^n}|H|^2. \end{align*} \end{proof}
\subsection{Semi-Reifenberg Condition} \
In the proof of the Allard regularity theorem, the following non-dimensional (scale-invariant) quantity $E(\xi, \rho, T)$ plays an important role. \begin{defn} Assume $V=\underline{v}(\Sigma, \theta)$ is a rectifiable 2-varifold in $\mathbb{R}^{2+k}$ and $B_\rho(\xi)\subset U$. Denote $\mu=\mu_V$. For any $2$-plane $T$ in $\mathbb{R}^{2+k}$, the tilt-excess $E(\xi, \rho, T)$ is defined by \begin{align*}
E(\xi,\rho,T):&=\rho^{-2}\int_{B_{\rho}(\xi)}|p_{T_x\Sigma}-p_T|^2d\mu, \end{align*} where $T_x\Sigma$ is the approximate tangent plane of the varifold $V$ at $x\in spt\mu$ and $p_T$ and $p_{T_x\Sigma}$ are the orthogonal projections onto $T$ and $T_x\Sigma$, respectively. \end{defn}
The tilt-excess measures the mean oscillation of the approximate tangent spaces (the Gauss map) of the varifold in the ball $B_\rho(\xi)$. The oscillation behavior of the tangent spaces is closely related to the regularity of the geometric object at various levels. For example, the $C^{1,\alpha}$ regularity in the Allard regularity theorem is due to the decay of the tilt-excess. Stephen Semmes proved \cite{S91a}\cite{S91b}\cite{S91c} Lipschitz regularity for hypersurfaces in $\mathbb{R}^{n+1}$ whose Gauss maps have small BMO norm. And Reifenberg's topological disk theorem, the key to the $C^\alpha$ regularity, is also built on an oscillation condition, the Reifenberg condition (\ref{reifenberg con}).
\begin{thm}[\bf{Reifenberg}]\label{reifenberg theorem}\cite{R60}\cite{M66}\cite{LS96} For integers $m,k>0$ and $\alpha>0$, there exists $\varepsilon=\varepsilon(m,k,\alpha)>0$ such that for any closed set $S\subset \mathbb{R}^{m+k}$ with $0\in S$, if for any $y\in S\cap B_1(0)$ and $\rho\in (0,1]$, there exists an $m$-dimensional plane $L_{y,\rho}\subset \mathbb{R}^{m+k}$ passing through $y$ such that
\begin{align}\label{reifenberg con}
d_{\mathcal{H}}(S\cap B_\rho(y), L_{y,\rho}\cap B_{\rho}(y))\le \varepsilon \rho,
\end{align}
then $S\cap B_1$ is homeomorphic to the unit ball $B_1^m(0)\subset \mathbb{R}^m$. More precisely, there exist a closed set $M\subset \mathbb{R}^{m+k}$, an $m$-dimensional subspace $T_0\subset \mathbb{R}^{m+k}$ and a homeomorphism $\tau: T_0\to M$ such that $M\cap B_1=S\cap B_1$, both $\tau,\tau^{-1}\in C^{\alpha}$ and
\begin{align*}
|\tau(x)-x|\le C(m,k)\varepsilon, \forall x\in T_0 \ \ \ \text{ and } \ \ \ \ \tau(x)=x, \forall x\in T_0\backslash B_2.
\end{align*}
\end{thm} The condition (\ref{reifenberg con}) is called the Reifenberg condition. In this subsection, we establish the tilt-excess estimate. Along the way, we note that the argument in fact yields half of the Reifenberg condition, which we call the semi-Reifenberg condition (\ref{semireifenberg}).
Since the integrand of the tilt-excess is just the gradient of the position function, the tilt-excess estimate can be reduced to an ``$L^2$-estimate'' via an integral gradient estimate for the generalized mean curvature equation. \begin{lemma}\label{integral gradient estimate} Assume $V=\underline{v}(\Sigma, \theta)$ is a rectifiable 2-varifold in an open set $U\subset \mathbb{R}^{2+k}$ with generalized mean curvature $H\in L^2(d\mu)$ for $\mu=\mu_V$ and $B_\rho(\xi)\subset U$. Then, for any $2$-plane $T$ in $\mathbb{R}^{2+k}$,
$$E(\xi,\frac{\rho}{2},T)\le 4\int_{B_{\rho}(\xi)}|H|^2+592\rho^{-2}\int_{B_{\rho}(\xi)}(\frac{d(x,T)}{\rho})^2d\mu.$$ \end{lemma} \begin{proof} Take coordinates of $\mathbb{R}^{2+k}$ such that $T=\text{span}\{(x^1,x^2,0,\ldots, 0)\}$ and $T^{\bot}=\text{span}\{X'=(0,0,x^3,x^4,\ldots,x^{2+k})\}$. Then we observe that \begin{align}\label{gradient expression}
\frac{1}{2}|p_{T_x\Sigma}-p_T|^2=\Sigma_{j=1}^{k}|\nabla^{\Sigma}x^{2+j}|^2=div^{\Sigma}X'. \end{align}
So, estimating $E(\xi,\rho,T)=2\rho^{-2}\int_{B_{\rho}(\xi)}\Sigma_{j=1}^{k}|\nabla^{\Sigma}x^{2+j}|^2d\mu$ amounts to an integral gradient estimate for the generalized mean curvature equation $$\int div^{\Sigma}X=-\int X\cdot\overrightarrow{H}.$$ For details, see \cite[Lemma 22.2]{LS83}.
\end{proof}
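For completeness, we record the linear algebra behind (\ref{gradient expression}), with $|\cdot|$ denoting the Hilbert-Schmidt norm. Write $p=p_{T_x\Sigma}$ and let $\{e_{3},\ldots,e_{2+k}\}$ be the standard basis of $T^{\bot}$, so that $\nabla^{\Sigma}x^{2+j}=p(e_{2+j})$. Since $p$ and $p_T$ are self-adjoint idempotents of trace $2$,
\begin{align*}
\Sigma_{j=1}^{k}|\nabla^{\Sigma}x^{2+j}|^2=\Sigma_{j=1}^{k}|p(e_{2+j})|^2=tr(p\,p_{T^{\bot}})=2-tr(p\,p_T)=\frac{1}{2}tr\big((p-p_T)^2\big)=\frac{1}{2}|p-p_T|^2,
\end{align*}
while $div^{\Sigma}X'=\Sigma_{j=1}^{k}\langle \nabla^{\Sigma}x^{2+j},e_{2+j}\rangle=\Sigma_{j=1}^{k}|p(e_{2+j})|^2$, using $\langle p(e),e\rangle=|p(e)|^2$ for a self-adjoint idempotent $p$.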
The above lemma reduces the tilt-excess estimate to the ``$L^2$- estimate" of the form $\rho^{-2}\int_{B_\rho(\xi)}(\frac{d(x,T)}{\rho})^2d\mu$, whose estimate can be seen as an integral version of half of the Reifenberg condition. The following lemma gives a point-wise semi-Reifenberg condition, which implies the tilt-excess estimate.
\begin{lemma}[$\mathbf{Semi-Reifenberg\ Condition}$]\label{Semi-Reifenberg Condition} Assume $V=\underline{v}(\Sigma,\theta)$ is a rectifiable 2-varifold in an open set $U\subset \mathbb{R}^{2+k}$ with $B_{\rho}(0)\subset U$, with generalized mean curvature $H\in L^2(d\mu)$, $0\in sptV$ and $\theta\ge 1$ for $\mu_V$-a.e.\ $x\in U$. If for some $\delta\le2^{-4}$,
$$\frac{\mu_V(B_{\rho}(0))}{\pi \rho^2}\le1+\delta\text{ and }\int_{B_{\rho}(0)}|H|^2\le \delta,$$ then for every $\xi\in spt\mu_V\cap B_{\frac{1}{2}\delta^{\frac{1}{2}}\rho}(0)$ and every $\sigma\le\frac{1}{2}\delta^{\frac{1}{2}}\rho$, there exists a $2$-plane $T=T(\xi,\sigma)$ passing through $\xi$, such that \begin{align}\label{semireifenberg} \sigma^{-1}\sup_{x\in spt\mu_V\cap B_{\sigma}(\xi)}d(x,T)\le 2^{13}\delta^{1/16}. \end{align} \end{lemma} \begin{proof} Step 3.1\
Volume ratio estimate. For $\forall \xi\in B_{\delta^{\frac{1}{2}}\rho}(0)$ and $\forall \sigma\in (0,(1-\delta^{\frac{1}{2}})\rho)$, we have \begin{align}\label{upperbound} \frac{\mu(B_{\sigma}(\xi))}{\pi\sigma^2}\le1+36\delta^{\frac{1}{2}}. \end{align} Moreover, if $\xi\in spt\mu_V$, then \begin{align}\label{lowerbound} \frac{\mu(B_{\sigma}(\xi))}{\pi\sigma^2}\ge1-2\delta^{\frac{1}{2}}. \end{align} In fact, take $\beta=\delta^{\frac{1}{2}}\le\frac{1}{2}$ and $\delta_0=\delta^{\frac{1}{2}}$. Then by the monotonicity formula (\ref{monotonicity inequality}), we know \begin{align*}
\frac{\mu(B_{\sigma}(\xi))}{\pi\sigma^2}\le(1+\delta_0)\frac{\mu(B_{\rho-\beta\rho})(\xi)}{\pi(\rho-\beta\rho)^2}+\frac{1}{2\pi\delta_0}\int_{B_{\rho-\beta\rho}(\xi)}|H|^2\le1+36\delta^{\frac{1}{2}}. \end{align*} On the other hand, for $\xi\in spt\mu_V$, (\ref{monotonicity inequality}) and Corollary \ref{density at each point} imply \begin{align*}
1\le(1+\delta_0)\frac{\mu(B_{\sigma}(\xi))}{\pi\sigma^2}+\frac{1}{2\pi\delta_0}\int_{B_{\sigma}(\xi)}|H|^2 \le(1+\delta_0)\frac{\mu(B_{\sigma}(\xi))}{\pi\sigma^2}+\frac{\delta}{2\delta_0}. \end{align*} Thus \begin{align*} \frac{\mu(B_{\sigma}(\xi))}{\pi\sigma^2}\ge\frac{1-\frac{\delta^{\frac{1}{2}}}{2}}{1+\delta^{\frac{1}{2}}} \ge1-2\delta^{\frac{1}{2}}. \end{align*} Step 3.2 For every $\xi\in spt\mu_V\cap B_{\beta\rho}(0)$ and small $\sigma$, the goal is to find $T=T(\xi,\sigma)$ such that $$\sigma^{-1}\sup\{d(x,T):x\in spt\mu_V\cap B_{\sigma}(\xi)\} \text{ is small}.$$
It is not easy to get the point-wise estimate directly. So, in the spirit of Chebyshev's inequality, we estimate the mean integral value in a small neighborhood of $\xi$. More precisely, for small $\alpha$ (to be determined) and $y \in spt\mu_V\cap B_{\alpha\sigma}(\xi)$, let $T_y$ denote the translate of the approximate tangent space of $\Sigma$ at $y$ (which exists for $\mu$-a.e.\ $y$ since $V$ is rectifiable) such that $T_y\ni y$. Then $d(x,T_y)=|p^{\bot}_{T_y}(x-y)|$ measures how close $x\in spt\mu_V\cap B_{\sigma}(\xi)$ is to $T_y$. Consider the mean integral \begin{align*} \bar{J}&=\frac{1}{(\alpha\sigma)^2}\int_{B_{\alpha\sigma}(\xi)}\frac{1}{\sigma^2}\int_{B_{\sigma}(\xi)}\frac{d^2(x,T_y)}{\sigma^2}d\mu(x)d\mu(y)\\ &\le\frac{1}{\sigma^2}\int_{B_{\sigma}(\xi)}\frac{1}{(\alpha\sigma)^2}\int_{B_{(\alpha+1)\sigma}(x)}\frac{d^2(x,T_y)}{\sigma^2}d\mu(y)d\mu(x). \end{align*}
For fixed $x$ and $r_x(y)=|y-x|$, note \begin{align}\label{remainder formula}
|\nabla^{\bot}r_x(y)|=r_x^{-1}(y)|p_{T_y}^{\bot}(y-x)|=r_x^{-1}(y)d(x,T_y). \end{align}
So
\begin{align*}
\bar{J}&\le\frac{1}{\sigma^2}\int_{B_{\sigma}(\xi)}\frac{1}{(\alpha\sigma)^2}\int_{B_{(\alpha+1)\sigma}(x)}\frac{r_x^4}{\sigma^2}\frac{|\nabla^{\bot}r_x|^2}{r_x^2}d\mu(y)d\mu(x)\\
&=\frac{(1+\alpha)^4}{\alpha^2\sigma^2}\int_{B_{\sigma}(\xi)}\underbrace{\int_{B_{(\alpha+1)\sigma}(x)}\frac{|\nabla^{\bot}r_x|^2}{r_x^2}d\mu(y)}_{=:K(x)}d\mu(x).
\end{align*}
Note $\Theta(x)\ge1$ for $x\in spt\mu_V$ and
\begin{align*}
\lim_{\sigma_1\to0}|\frac{1}{\sigma_1^2}\int_{B_{\sigma_1}(x)}r_x\langle\nabla^{\bot}r_x,H\rangle|
\le \lim_{\sigma_1\to0}(\int_{B_{\sigma_1}(x)}|H|^2)^{\frac{1}{2}}(\frac{\mu(B_{\sigma_1}(x))}{\sigma_1^2})^{\frac{1}{2}}
=0.
\end{align*}
Taking $\rho_1=(1+\alpha)\sigma$, using the monotonicity formula (\ref{monotonicity equality}) for $0<\sigma_1<\rho_1$ and then letting $\sigma_1\to 0$, we get
\begin{align*}
\int_{B_{\rho_1}(x)}|\frac{\nabla^{\bot}r}{r}+\frac{H}{4}|^2
&\le(\frac{\mu(B_{\rho_1}(x))}{\rho_1^2}-\pi)+\frac{1}{16}\int_{B_{\rho_1}(x)}|H|^2+\frac{1}{2\rho_1}\int_{B_{\rho_1}(x)}|H|. \end{align*} Taking $\xi\in spt\mu_V\cap B_{\frac{1}{2}\delta^{\frac{1}{2}}\rho}(0)$, $\alpha\le 1$ and $\sigma\le\frac{1}{2}\delta^{\frac{1}{2}}\rho$, then $x\in B_{\sigma}(\xi)\subset B_{\delta^{\frac{1}{2}}\rho}(0)$
and $(1+\alpha)\sigma<(1-\delta^{\frac{1}{2}})\rho$. So, by (\ref{upperbound}), we get \begin{align}\label{remainder term} K(x)
&\le\int_{B_{\rho_1}(x)}\frac{|\nabla^{\bot}r|^2}{r^2}d\mu(y)
\le2\int_{B_{\rho_1}(x)}|\frac{\nabla^{\bot}r}{r}+\frac{H}{4}|^2
+2\int_{B_{\rho_1}(x)}|\frac{H}{4}|^2\nonumber\\
&\le2[(\frac{\mu(B_{\rho_1}(x))}{\rho_1^2}-\pi)+\frac{2}{16}\int_{B_{\rho_1}(x)}|H|^2
+\frac{1}{2}(\int_{B_{\rho_1}(x)}|H|^2)(\frac{\mu(B_{\rho_1}(x))}{\rho_1})^{\frac{1}{2}}]\nonumber\\
&\le80\pi\delta^{\frac{1}{2}}, \end{align} and \begin{align*} \bar{J}\le\frac{2^4 \cdot80\pi\delta^{\frac{1}{2}}\mu(B_{\sigma}(\xi))}{\alpha^2\sigma^2}\le \frac{2^{12}\pi\delta^{\frac{1}{2}}}{\alpha^2}=2^{12}\pi\delta^{\frac{1}{4}} \ (\text{taking } \alpha=\delta^{\frac{1}{8}}). \end{align*} Thus by Chebyshev's inequality, there exists $y\in spt\mu_V\cap B_{\alpha\sigma}(\xi)$ such that $$\frac{1}{\sigma^2}\int_{B_{\sigma}(\xi)}\frac{d^2(x,T_y)}{\sigma^2}d\mu(x)\le2^{12}\delta^{\frac{1}{4}}/(1-2\delta^{\frac{1}{2}})\le2^{13}\delta^{\frac{1}{4}}\ (\text{using }\delta\le2^{-4}).$$ So if we denote $T=T_y+\xi-y$, then $d^2(x,T)\le2d^2(x,T_y)+2{\alpha}^2\sigma^2$. Thus \begin{align}\label{integral semi reifenberg} \frac{1}{\sigma^2}\int_{B_{\sigma}(\xi)}\frac{d^2(x,T)}{\sigma^2}d\mu(x)\le2^{14}\delta^{\frac{1}{4}} +2\delta^{\frac{1}{4}}(\pi+36\pi\delta^{\frac{1}{2}})\le2^{15}\delta^{\frac{1}{4}}. \end{align} And by Lemma \ref{integral gradient estimate}, we know that \begin{align*} E(\xi,\frac{\sigma}{2},T(\xi,\sigma))\le 4\delta+592\cdot 2^{15}\delta^{\frac{1}{4}}\le2^{25}\delta^{\frac{1}{4}}. \end{align*} Up to now, we have established the tilt-excess estimate: \begin{cor}\label{Tilt-Excess estimate} Under the conditions of Lemma \ref{Semi-Reifenberg Condition}, for $\delta\le2^{-4}$, $\xi\in spt\mu_V\cap B_{\frac{1}{2}\delta^{\frac{1}{2}}\rho}(0)$ and $\sigma\le\frac{1}{2}\delta^{\frac{1}{2}}\rho$, there exists $T=T(\xi,\sigma)$ such that $$E(\xi,\sigma,T)\le 2^{25}\delta^{1/4}.$$ \end{cor} We state this corollary to emphasize that it is enough for the Lipschitz approximation Theorem \ref{Lipschitz Approximation for 2-varifold} (see the proof below). But for the final goal of the $C^{\alpha}$-regularity, the integral semi-Reifenberg condition (\ref{integral semi reifenberg}) is not enough; the point-wise estimate (\ref{semireifenberg}) is necessary. We follow the argument as in \cite[Section 24]{LS83} to complete the proof.
Step 3.2$'$
By (\ref{remainder term}) and (\ref{remainder formula}), for $\delta\le2^{-4}$, $\alpha\le1$, every $\xi\in spt\mu_V\cap B_{\frac{1}{2}\delta^{\frac{1}{2}}\rho}(0)$, every $\sigma\le\frac{1}{2}\delta^{\frac{1}{2}}\rho$ and every $x\in spt\mu_V\cap B_{\sigma}(\xi)$, we have \begin{align*}
I(x):&=\int_{B_{\alpha\sigma}(\xi)}|p_{T_y}^{\bot}(x-y)|^2d\mu(y)=\int_{B_{\alpha\sigma}(\xi)}d^2(x,T_y)d\mu(y)\\
&\le\int_{B_{(1+\alpha)\sigma}(x)}r_x^2(y)|\nabla^{\bot}r_x|^2(y)d\mu(y)\le[(1+\alpha)\sigma]^4K(x)\le2^{11}\pi\sigma^4\delta^{\frac{1}{2}}. \end{align*} Now, take a maximal disjoint collection $\{B_{\frac{\alpha\sigma}{4}}(x_i)\}_{i=1}^N$ of balls with radius $\frac{\alpha\sigma}{4}$ and centered in $spt\mu_V\cap B_{\sigma}(\xi)$. Then we have \begin{align*} spt\mu_V\cap B_{\sigma}(\xi)\subset \cup_{i=1}^NB_{\alpha\sigma}(x_i) \end{align*} and \begin{align*} \mu_V(B_{\frac{\alpha\sigma}{4}}(x_i))\ge\pi(\frac{\alpha\sigma}{4})^2(1-2\delta^{\frac{1}{2}}). \end{align*} Moreover, we know \begin{align*} N\le\frac{\mu_V(B_{\sigma}(\xi))}{\pi(\frac{\alpha\sigma}{4})^2(1-2\delta^{\frac{1}{2}})}\le 2^5\frac{\mu_V(B_{\sigma}(\xi))}{\pi\sigma^2\alpha^2}\le\frac{2^8}{\alpha^2}, \end{align*} and \begin{align*}
\int_{B_{\alpha\sigma}(\xi)}\Sigma_{i=1}^{N}|p^{\bot}_{T_y}(x_i-y)|^2d\mu(y)
\le\frac{2^8}{\alpha^2}\int_{B_{\alpha\sigma}(\xi)}|p^{\bot}_{T_y}(x_i-y)|^2d\mu(y)\le\frac{2^{19}\pi\sigma^4\delta^{\frac{1}{2}}}{\alpha^2}. \end{align*} Take $\alpha=\delta^{\frac{1}{16}}$ and $\theta=2^{21}\sigma^2\delta^{\frac{1}{4}}$. Then we get \begin{align*}
\frac{\mu_V(B_{\alpha\sigma}(\xi)\cap\{\Sigma_{i=1}^N|p^{\bot}_{T_y}(x_i-y)|^2\ge\theta\})}{\mu_V(B_{\alpha\sigma}(\xi))} \le\frac{\frac{2^{19}\pi\sigma^4\delta^{\frac{1}{2}}}{\alpha^2\theta}}{(1-2\delta^{\frac{1}{2}})\pi(\alpha\sigma)^2}\le\frac{1}{2}<1. \end{align*} Thus there exists a point $y_0\in B_{\delta^{\frac{1}{16}}\sigma}(\xi)$ such that \begin{align*}
\Sigma_{i=1}^N|p^{\bot}_{T_{y_0}}(x_i-y_0)|^2<\theta=2^{21}\sigma^2\delta^{\frac{1}{4}}. \end{align*} That is, \begin{align*}
\sup_{1\le i\le N}|p^{\bot}_{T_{y_0}}(x_i-y_0)|\le 2^{11}\sigma\delta^{\frac{1}{8}}. \end{align*} So, for $\forall x\in spt\mu_V\cap B_{\sigma}(\xi)\subset\cup_{i=1}^N B_{\alpha\sigma}(x_i)=\cup_{i=1}^N B_{\delta^{\frac{1}{16}}\sigma}(x_i)$, there exists $1\le i\le N$ such that $x\in B_{\delta^{\frac{1}{16}}\sigma}(x_i)$. Thus \begin{align*}
|p^{\bot}_{T_{y_0}}(x-y_0)|\le|p^{\bot}_{T_{y_0}}(x-x_i)|+|p^{\bot}_{T_{y_0}}(x_i-y_0)|\le \delta^{\frac{1}{16}}\sigma+2^{11}\sigma\delta^{\frac{1}{8}}\le2^{12}\delta^{\frac{1}{16}}\sigma. \end{align*} So, if we let $T=T_{y_0}+\xi-y_0$, then $\xi\in T$ and for any $x\in spt\mu_V\cap B_{\sigma}(\xi)$, \begin{align*}
\sigma^{-1}d(x,T)=\sigma^{-1}|p^{\bot}_T(x-\xi)|\le\sigma^{-1}(|p^{\bot}_T(x-y_0)|+|y_0-\xi|)\le 2^{13}\delta^{\frac{1}{16}}. \end{align*}
\end{proof}
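We also record the two elementary steps used above: for the translated plane $T=T_{y_0}+\xi-y_0$ (or $T=T_y+\xi-y$ in Step 3.2) with $|\xi-y_0|\le\alpha\sigma$, the triangle inequality gives
\begin{align*}
d(x,T)\le d(x,T_{y_0})+|\xi-y_0|\le d(x,T_{y_0})+\alpha\sigma,
\end{align*}
whence $d^2(x,T)\le 2d^2(x,T_{y_0})+2\alpha^2\sigma^2$ by $(a+b)^2\le 2a^2+2b^2$; and the final bound uses $\delta^{\frac{1}{8}}\le\delta^{\frac{1}{16}}$, valid since $\delta\le1$.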
\subsection{Lipschitz Approximation} \
First, we need the following weighted monotonicity inequality, which roughly says that most of the measure concentrates in a neighborhood of a plane. \begin{lemma}\label{band} Assume $V=\underline{v}(\Sigma,\theta)$ is a rectifiable 2-varifold in $U$ with generalized mean curvature $H\in L^2(\mu_V)$. Then for $l\in(0,1)$, $\beta\in(0,1/4)$, $B_R(\xi)\subset U$ and every $y\in B_{\beta R}(\xi)$, we have \begin{align*}
\pi \Theta(\mu_V,y)\le&(1+24\beta)\frac{\mu_V\big(\{x:|q_0(x-y)|<2l\beta R\}\cap B_{R}(\xi)\big)}{R^2}\\
&+\frac{6}{(l\beta)^5}\frac{1}{R^2}\int_{B_{R}(\xi)}\|p_{T_x\Sigma}-p_{\mathbb{R}^2\times\{0\}}\|^2+\frac{2}{(l\beta)^3}\int_{B_{R}(\xi)}|H|^2, \end{align*} where $q_0$ is the orthogonal projection of $\mathbb{R}^{2+k}$ onto $\{0\}\times\mathbb{R}^k$. \end{lemma} \begin{proof}
W.l.o.g., assume $\xi=0$ and $y\in B_{\beta R}(0)$. Denote $T_y=\mathbb{R}^2\times \{0\}+y$ and define $q_y:\mathbb{R}^{2+k}\to T_y^{\bot}$, $q_y(y+(x_1,x_2))=y+x_2$, and $p_y=id-q_y$, the orthogonal projections onto $T_y^{\bot}$ and $T_y$ respectively. For $\alpha$ to be determined, choose a function $g\in C^1(\mathbb{R},[0,1])$ such that $g(t)\equiv 1$ for $t\in[-\alpha R,\alpha R]$, $g(t)\equiv 0$ for $|t|\ge2\alpha R$ and $|g'(t)|\le \frac{2}{\alpha R}$. Put $h(x)=g(|q_y(x)-y|)$. We will deduce a monotonicity formula involving the weight $h^2$. Take $X=h^2\eta r\nabla r$ (here $r(x)=r_y(x)=|x-y|$, and $\eta=\eta(r)$ is to be determined below). Since \begin{align*} div^{\Sigma}X &=\langle\nabla^{\Sigma}(h^2\eta), r\nabla r\rangle+h^2\eta div^{\Sigma}(r\nabla r)\\
&=h^2r\eta'+2h^2\eta+2\eta hr\langle\nabla^{\Sigma}r,
\nabla^{\Sigma}h\rangle-h^2r\eta'|\nabla^{\bot}r|^2, \end{align*} by the definition of generalized mean curvature $\int div^{\Sigma}X=-\int X\cdot H$, we know \begin{align*}
LH:=\int h^2(r\eta'+2\eta)=\int h^2r\eta'|\nabla^{\bot}r|^2-2\eta hr\langle\nabla^{\Sigma}r,\nabla^{\Sigma}h\rangle-h^2\eta r\langle\nabla^{\bot}r, H\rangle=:RH. \end{align*} As before, since $B_{(1-\beta)R}(y)\subset B_R(0)$, for $0<\sigma<\rho<(1-\beta)R$, we can take $$
\eta(r)=
\begin{cases}
f(\sigma)-f(\rho),& r\le \sigma,\\
f(r)-f(\rho),&r\in(\sigma,\rho).\\
\end{cases} $$ for decreasing $f$ to be chosen. Then $$
\eta'(r)=
\begin{cases}
0,& r\le \sigma,\\
f'(r),&r\in(\sigma,\rho),\\
\end{cases} $$ \begin{align*} LH&=\int_{B_{\rho}\backslash B_{\sigma}(y)}h^2rf'(r)+2\int_{B_{\sigma}(y)}h^2(f(\sigma)-f(\rho))+2\int_{B_{\rho}\backslash B_{\sigma}(y)}h^2(f(r)-f(\rho))\\ &=\int_{B_{\rho}\backslash B_{\sigma}(y)}h^2(rf'+2f)+2f(\sigma)\int_{B_{\sigma}(y)}h^2-2f(\rho)\int_{B_{\rho}(y)}h^2\\ \end{align*} and $RH=-2T+RH_1$
for
$T=\int_{B_{\rho}(y)}\eta hr\langle\nabla^{\Sigma}r, \nabla^{\Sigma}h\rangle$
and \begin{align*}
RH_1&=\int_{B_{\rho}\backslash B_{\sigma}(y)}h^2\{rf'|\nabla^{\bot}r|^2-rf\langle\nabla^{\bot}r, H\rangle\}+f(\rho)\int_{B_{\rho}\backslash B_{\sigma}(y)}h^2r\langle\nabla^{\bot}r, H\rangle\\ & \ \ \ \ \ -(f(\sigma)-f(\rho))\int_{B_{\sigma}(y)}h^2r\langle\nabla^{\bot}r, H\rangle\\
&=-\int_{B_{\rho}\backslash B_{\sigma}(y)}h^2|\sqrt{-rf'}\nabla^{\bot}r+\frac{rfH}{2\sqrt{-rf'}}|^2+\int_{B_{\rho}\backslash B_{\sigma}(y)}h^2|\frac{rfH}{2\sqrt{-rf'}}|^2\\ & \ \ \ \ \ +f(\rho)\int_{B_{\rho}(y)}h^2r\langle\nabla^{\bot}r,H\rangle-f(\sigma)\int_{B_{\sigma}(y)}h^2r\langle\nabla^{\bot}r,H\rangle. \end{align*}
In particular, if we take $f(r)=\frac{1}{r^2}$, then $rf'+2f=0$, $\sqrt{-rf'}=\sqrt{2}r^{-1}$, $rf=r^{-1}$ and $\frac{rf}{2\sqrt{-rf'}}=\frac{1}{2\sqrt{2}}$. Hence by $LH=RH$ and $-\int_{B_{\rho}\backslash B_{\sigma}(y)}|\frac{\nabla^{\bot}r}{r}+\frac{H}{4}|^2h^2\le 0$, we get \begin{align}\label{L<H}
\frac{1}{\sigma^2}\int_{B_{\sigma}(y)}h^2-\frac{1}{\rho^2}\int_{B_{\rho}(y)}h^2\le-T+T_4-T_5+\frac{1}{16}\int_{B_{\rho}\backslash B_{\sigma}(y)}|H|^2h^2, \end{align} where \begin{align*} T_4=\frac{1}{2\rho^2}\int_{B_{\rho}(y)}h^2r\langle\nabla^{\bot}r,H\rangle,\ \ T_5=\frac{1}{2\sigma^2}\int_{B_{\sigma}(y)}h^2r\langle\nabla^{\bot}r,H\rangle \end{align*} and \begin{align*} T&=\int_{B_{\rho}\backslash B_{\sigma}(y)}(f(r)-f(\rho))h\langle r\nabla^{\Sigma}r,\nabla^{\Sigma} h\rangle+\int_{B_{\sigma}(y)}(f(\sigma)-f(\rho))h\langle r\nabla^{\Sigma}r,\nabla^{\Sigma} h\rangle\\ &=\int_{B_{\rho}\backslash B_{\sigma}(y)}h\langle \frac{\nabla^{\Sigma}r}{r},\nabla ^{\Sigma}h\rangle+\frac{1}{\sigma^2}\int_{B_{\sigma}(y)}h\langle r\nabla^{\Sigma}r,\nabla^{\Sigma} h\rangle-\frac{1}{\rho^2}\int_{B_{\rho}(y)}h\langle r\nabla^{\Sigma}r,\nabla^{\Sigma} h\rangle\\ &=:T_1+T_2+T_3. \end{align*} By Young's inequality, we know \begin{align*}
|T_2|\le\frac{\varepsilon}{\sigma^2}\int_{B_{\sigma}(y)}h^2+\frac{1}{4\varepsilon}\int_{B_{\sigma}(y)}|\nabla^{\Sigma}h|^2,
\ \ \ \
|T_3|\le\frac{\varepsilon}{\rho^2}\int_{B_{\rho}(y)}h^2+\frac{1}{4\varepsilon}\int_{B_{\rho}(y)}|\nabla^{\Sigma}h|^2, \end{align*} \begin{align*}
|T_4|\le\frac{\varepsilon}{\rho^2}\int_{B_{\rho}(y)}h^2+\frac{1}{8\varepsilon}\int_{B_{\rho}(y)}|H|^2h^2, \ \ \ \
|T_5|\le\frac{\varepsilon}{\sigma^2}\int_{B_{\sigma}(y)}h^2+\frac{1}{8\varepsilon}\int_{B_{\sigma}(y)}|H|^2h^2, \end{align*} and \begin{align*}
|T_1|\le\int_{B_{\rho}\backslash B_{\sigma}(y)}\frac{\varepsilon h^2}{r^2}+\frac{|\nabla^{\Sigma}h|^2}{4\varepsilon}
\le\varepsilon\frac{\rho^2}{\sigma^2}\frac{1}{\rho^2}\int_{B_{\rho}(y)}h^2+\frac{1}{4\varepsilon}\int_{B_{\rho}\backslash B_{\sigma}(y)}|\nabla^{\Sigma}h|^2. \end{align*} Substituting the estimates of $T_i$, $i=1,2,3,4,5$, into (\ref{L<H}), we get, for $\varepsilon<\frac{1}{2}$, \begin{align*} (1-2\varepsilon)\frac{1}{\sigma^2}\int_{B_{\sigma}(y)}h^2\le&(1+2\varepsilon+\varepsilon(\frac{\rho}{\sigma})^2)\frac{1}{\rho^2}\int_{B_{\rho}(y)}h^2\\
&+\frac{3}{4\varepsilon}\int_{B_{\rho}(y)}|\nabla^{\Sigma}h|^2+(\frac{1}{16}+\frac{1}{4\varepsilon})\int_{B_{\rho}(y)}|H|^2h^2. \end{align*}
On the one hand, by definition, we know for $x\in B_{\alpha R}(y)$, $|q_y(x)-y|=|q_0(x-y)|\le\alpha R$. Hence $h\equiv 1$ on $B_{\alpha R}(y)$. So if we take $\sigma=\alpha R<\rho<(1-\beta)R$ and use the monotonicity inequality (\ref{monotonicity inequality}), then \begin{align*} \pi\Theta(\mu_V,y)
\le&(1+\varepsilon)\frac{\mu_V(B_{\alpha R}(y))}{(\alpha R)^2}+\frac{1}{2\varepsilon}\int_{B_{\alpha R}(y)}|H|^2\\
\le&(1+\varepsilon)\frac{1}{\sigma^2}\int_{B_{\sigma}(y)}h^2+\frac{1}{2\varepsilon}\int_{B_{\rho}(y)}|H|^2\\ \le&\frac{1+\varepsilon}{1-2\varepsilon}\{(1+2\varepsilon+\varepsilon(\frac{\rho}{\sigma})^2)\frac{1}{\rho^2}\int_{B_{\rho}(y)}h^2
+\frac{1}{\varepsilon}\int_{B_{\rho}(y)}(|\nabla^{\Sigma}h|^2+|H|^2)\}, \end{align*} where we use $h\le1$ in the last inequality.
On the other hand, since $\nabla^{\mathbb{R}^{2+k}}|q_y(x)-y|=\frac{q_0(x-y)}{|q_0(x-y)|}$, we know for $x\in spt\mu_V$ where $T_x\Sigma$ exists, \begin{align*}
|\nabla^{\Sigma}h|^2(x)\le&(|g'(|q_0(x-y)|)||\nabla^{\Sigma}|q_0(x-y)||)^2\\
\le&\big(\frac{2}{\alpha R}\big)^2\big|\frac{p_{T_x\Sigma}(q_0(x-y))}{|q_0(x-y)|}\big|^2\\
\le&\frac{4}{(\alpha R)^2}\|p_{T_x\Sigma}-p_{\mathbb{R}^2\times\{0\}}\|^2. \end{align*}
Moreover, $spt\, g\subset[-2\alpha R,2\alpha R]$ implies $spt h\subset \{x:|q_0(x-y)|<2\alpha R\}$. Thus if we take $\rho=(1-2\beta)R$, then $\frac{\rho}{\sigma}=\frac{1-2\beta}{\alpha}$, $B_{\rho}(y)\subset B_{R}(\xi)$ and \begin{align*} \pi\Theta(\mu_V,y) \le&\frac{1+\varepsilon}{1-2\varepsilon}\{(1+2\varepsilon+\varepsilon(\frac{\rho}{\sigma})^2)\frac{1}{\rho^2}\int_{B_{\rho}(y)}h^2
+\frac{1}{\varepsilon}\int_{B_{\rho}(y)}(|\nabla^{\Sigma}h|^2+|H|^2)\}\\
\le&\frac{1+\varepsilon}{1-2\varepsilon}\{(1+2\varepsilon+\frac{(1-2\beta)^2\varepsilon}{\alpha^2})\frac{\mu_V\big(\{x:|q_0(x-y)|<2\alpha R\}\cap B_{R}(\xi)\big)}{((1-2\beta)R)^2}\\
&\ \ \ +\frac{4}{\varepsilon(\alpha R)^2}\int_{B_{(1-2\beta)R}(y)}\|p_{T_x\Sigma}-p_{\mathbb{R}^2\times\{0\}}\|^2+\frac{1}{\varepsilon}\int_{B_{(1-2\beta)R}(y)}|H|^2\}. \end{align*} Take $\alpha=l\beta$ and $\varepsilon=(l\beta)^3=\alpha^3\le 2^{-6}$ for $l<1$. Then \begin{align*} \frac{1+\varepsilon}{1-2\varepsilon}\le 1+6(l\beta)^3 \ \ \ \text{ and }\ \ \ 1+2\varepsilon+\frac{(1-2\beta)^2\varepsilon}{\alpha^2}\le 1+2l\beta. \end{align*} Thus \begin{align*} \pi\Theta(\mu_V,y)
\le&(1+6(l\beta)^3)(1+2l\beta)\frac{\mu_V\big(\{x:|q_0(x-y)|<2l\beta R\}\cap B_{(1-2\beta)R}(y)\big)}{((1-2\beta)R)^2}\\
&+\frac{6}{(l\beta)^5}\frac{1}{R^2}\int_{B_{(1-2\beta)R}(y)}\|p_{T_x\Sigma}-p_{\mathbb{R}^2\times\{0\}}\|^2+\frac{2}{(l\beta)^3}\int_{B_{(1-2\beta)R}(y)}|H|^2\\
\le&(1+24\beta)\frac{\mu_V\big(\{x:|q_0(x-y)|<2l\beta R\}\cap B_{R}(0)\big)}{R^2}\\
&+\frac{6}{(l\beta)^5}E(0,R,T)+\frac{2}{(l\beta)^3}\int_{B_{R}(0)}|H|^2. \end{align*} \end{proof}
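The two elementary inequalities invoked in the last step can be checked directly. With $\varepsilon=(l\beta)^3\le 2^{-6}$ and $\frac{\varepsilon}{\alpha^2}=\alpha=l\beta$,
\begin{align*}
\frac{1+\varepsilon}{1-2\varepsilon}=1+\frac{3\varepsilon}{1-2\varepsilon}\le 1+6\varepsilon=1+6(l\beta)^3,
\end{align*}
and
\begin{align*}
1+2\varepsilon+\frac{(1-2\beta)^2\varepsilon}{\alpha^2}= 1+l\beta\big(2(l\beta)^2+(1-2\beta)^2\big)\le 1+2l\beta,
\end{align*}
since $2(l\beta)^2+(1-2\beta)^2\le\frac{1}{8}+1<2$ for $l\in(0,1)$ and $\beta\in(0,\frac{1}{4})$.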
\begin{cor}\label{graph condition}
Assume $\alpha, l\in (0,1)$ and $V=\underline{v}(\Sigma,\theta)$ is a rectifiable 2-varifold in $U$ with generalized mean curvature $H\in L^2(\mu_V)$ and $\frac{\mu_V(B_R(\xi))}{\pi R^2}\le 2-\alpha$. For $\beta_1=\beta_1(\alpha)=\frac{\alpha}{48(2-\alpha)}$ and $\delta=\delta(\alpha)=\frac{\alpha^3}{2^{23}}$, if
$$l^{-5}E(\xi,R,T)\le \delta^2\pi \ \ and\ \ l^{-3}\int_{B_R(\xi)}|H|^2\le \delta^2\pi, $$
then for all $y,z\in spt\mu_V\cap B_{\beta_1 R}(\xi)$ with $|y-z|\ge \beta_1 R$, we have
$$|q_0(y-z)|\le l|y-z|.$$ \end{cor} \begin{proof}
We may assume $T=\mathbb{R}^2\times \{0\}$ and argue by contradiction: suppose there exist $y,z\in spt\mu_V\cap B_{\beta_1 R}(\xi)$ with $|y-z|\ge \beta_1 R$ but $|q_0(y-z)|>l|y-z|$. Then for any $x\in B_{R}(\xi)$, \begin{align*}
|q_0(x-y)|+|q_0(x-z)|\ge |q_0(y-z)|> l|y-z|\ge l\beta_1 R, \end{align*}
So, either $|q_0(x-y)|>\frac{l\beta_1 R}{2}$ or $|q_0(x-z)|>\frac{l\beta_1 R}{2}$, i.e., \begin{align*}
\{x\in B_{R}(\xi):|q_0(x-y)|\le\frac{l\beta_1 R}{2}\}\cap\{x\in B_{R}(\xi):|q_0(x-z)|\le\frac{l\beta_1 R}{2}\}=\emptyset. \end{align*} Noting $y,z\in spt\mu_V\cap B_{\beta_1 R}(\xi)$, by Lemma \ref{band}, we get \begin{align*} 2\pi\le& (\Theta(\mu_V,y)+\Theta(\mu_V,z))\pi\\
\le&(1+24\beta_1)\frac{\mu_V\big(\{x\in B_{R}(\xi):|q_0(x-y)|<\frac{1}{2}l\beta_1 R\text{ or } |q_0(x-z)|<\frac{1}{2}l\beta_1 R\} \big)}{R^2}\\
&\ \ \ +\frac{12}{(\frac{1}{2}l\beta_1)^5}\frac{1}{R^2}\int_{B_{R}(\xi)}\|p_{T_x\Sigma}-p_{\mathbb{R}^2\times\{0\}}\|^2
+\frac{4}{(\frac{1}{2}l\beta_1)^3}\int_{B_{R}(\xi)}|H|^2\\ \le&(1+24\beta_1)\frac{\mu_V(B_R(\xi))}{R^2}+\frac{12}{(\frac{1}{2}l\beta_1)^5}E(\xi,R,\mathbb{R}^2\times\{0\})
+\frac{4}{(\frac{1}{2}l\beta_1)^3}\int_{B_{R}(\xi)}|H|^2\\ \le& (1+24\beta_1)(2-\alpha)\pi+\frac{3\cdot2^{7}}{l^5\beta_1^5}l^5\delta^2\pi+\frac{2^5}{l^3\beta_1^3}l^3\delta^2\pi\\ \le& (2-\frac{\alpha}{8})\pi. \end{align*}
This is a contradiction. \end{proof}
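For the reader's convenience, the final numerical step can be verified as follows. With $\beta_1=\frac{\alpha}{48(2-\alpha)}$ we have $24\beta_1(2-\alpha)=\frac{\alpha}{2}$, so $(1+24\beta_1)(2-\alpha)=2-\frac{\alpha}{2}$. Moreover, since $\beta_1\ge\frac{\alpha}{96}$ and $\delta^2=\frac{\alpha^6}{2^{46}}$,
\begin{align*}
\frac{3\cdot2^{7}\delta^2}{\beta_1^5}\le 3\cdot2^{7}\cdot\frac{\alpha^6}{2^{46}}\cdot\frac{96^5}{\alpha^5}=\frac{3^6}{2^{14}}\alpha\le\frac{\alpha}{8}
\ \ \ \text{ and }\ \ \
\frac{2^5\delta^2}{\beta_1^3}\le 2^5\cdot\frac{\alpha^6}{2^{46}}\cdot\frac{96^3}{\alpha^3}=\frac{3^3}{2^{26}}\alpha^3\le\frac{\alpha}{8},
\end{align*}
so the right-hand side is at most $(2-\frac{\alpha}{2}+\frac{\alpha}{4})\pi=(2-\frac{\alpha}{4})\pi\le(2-\frac{\alpha}{8})\pi<2\pi$.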
\begin{prop}[Lipschitz Approximation 0.5 version]\label{Lipschitz Approximation 0.5 version} For any $\alpha\in (0,1)$, there exist \begin{align*} &\beta_3(\alpha)=\frac{1}{4}\beta_2(\alpha)=\frac{\alpha}{2^8\cdot3\cdot5(4-\alpha)},\ \ \beta_4(\alpha)=\frac{\alpha^3}{2^{21}}\\ &\delta^2_3(\alpha)=\delta_2^4\delta_1^{10}=\frac{\alpha^{12}}{2^{174}k^5}\ \ (\text{where }\delta_1=\frac{1}{2^7k},\ \delta_2(\alpha)=\frac{\alpha^3}{2^{26}}) \end{align*} such that the following statement holds:
Assume $V=\underline{v}(\Sigma,\theta)$ is a rectifiable 2-varifold in $U\subset \mathbb{R}^{2+k}$ satisfying
\begin{align*}
&(1) \ \ 0\in spt\mu_V, B_R(0)\subset U,\\
&(2) \ \ \frac{\mu_V(B_R(0))}{\pi R^2}\le 2-\alpha,\\
&(3)\ \ \theta\ge1, \mu_V-a.e. x\in U. \end{align*} For any $l\in(0,1)$, if \begin{align*} l^{-5}E:=l^{-5}E(0,R,\mathbb{R}^2\times \{0\})\le \delta_3^2 \text{ and } l^{-3}W:=l^{-3}\int_{B_R(0)}H^2\le \delta_3^2,
\end{align*}
then for any $\beta\in (\beta_4,\beta_3)$, there exists a Lipschitz function $f=(f^1,f^2, \ldots, f^k):B_{\beta R}^{\mathbb{R}^2\times\{0\}}(0)\to \mathbb{R}^k$ with
$$Lipf\le l,\ \ \sup_{x\in B_{\beta R}}|f|\le l\beta R $$ and for $F=Graphf$, \begin{align*} \mathcal{H}^2((F\backslash spt\mu_V)\cap B_{\beta R}(0))+\mu_V(B_{\beta R}(0)\backslash F)\le 2^{27}(l^{-\frac{5}{2}} E^{\frac{1}{2}}+l^{-\frac{3}{2}} W^{\frac{1}{2}})\pi R^2. \end{align*} Moreover, for the orthogonal projection $q_0:\mathbb{R}^{2+k}\to \{0\}\times\mathbb{R}^k$, we have \begin{align*}
\sup_{x\in B_{\beta R}(0)\cap spt\mu_V}|q_0(x)|\le l\beta R. \end{align*} \end{prop} \begin{proof} Following the notation of Corollary \ref{graph condition}, take $$\delta_0(\alpha)=\beta_0(\alpha)=\frac{\alpha}{40},\ \ \ \ \ \ \ \ \ \ \ \delta_1\in(0,1)\text{ to be determined },$$ $$\beta_2(\alpha)=\frac{1}{20}\beta_1(\frac{\alpha}{2})=\frac{\alpha}{960(4-\alpha)},\ \ \ \ \ \ \ \delta_2(\alpha)=\delta(\frac{\alpha}{2})=\frac{\alpha^3}{2^{26}},\ \ \ \ \ \ \ \delta_3^2=\delta_2^4\delta_1^{10}.$$ As in the proofs of (\ref{upperbound}) and (\ref{lowerbound}), by the monotonicity formula, it is easy to show that for $x\in B_{\beta_0 R}(\xi)\cap spt\mu_V$ and $\sigma\in (0,(1-\beta_0) R)$, we have
\begin{align}\label{uplowerbound}
1-2\delta_0\le\frac{\mu_V(B_{\sigma}(x))}{\pi \sigma^2}\le 2-\frac{\alpha}{2}.
\end{align}
Let
\begin{align*}
G:=\{x\in spt\mu_V\cap B_{\beta R}(0):\frac{E(x,\sigma,\mathbb{R}^2\times \{0\})}{(l\delta_1)^{5}}+\frac{\int_{B_{\sigma}(x)}|H|^2}{(l\delta_1)^{3}}\le\pi\delta_2^2,\forall \sigma<\frac{R}{10} \}. \end{align*} Then for any $\beta\in (\beta_4,\beta_2)$, $ x\in G \text{ and } y\in spt\mu_V\cap B_{\beta R}(0)$, we have
$$\sigma:=\frac{|x-y|}{\beta_1(\frac{\alpha}{2})}<\frac{2\beta_2 R}{\beta_1(\frac{\alpha}{2})}\le \frac{R}{10}\le (1-\beta_0)R\ \ \text{ and }\ \ x\in B_{\beta R}(0)\subset B_{\beta_0 R}(0).$$ By (\ref{uplowerbound}) we get \begin{align*}
\frac{\mu_V\Big(B_{\frac{|x-y|}{\beta_1(\frac{\alpha}{2})}}(x)\Big)}{\pi\Big(\frac{|x-y|}{\beta_1(\frac{\alpha}{2})}\Big)^2}\le 2-\frac{\alpha}{2}. \end{align*}
Since $x\in G$, we know that for $\sigma=\frac{|x-y|}{\beta_1(\frac{\alpha}{2})}$,
$$\frac{E(x,\sigma,\mathbb{R}^2\times \{0\})}{(l\delta_1)^{5}}+\frac{\int_{B_{\sigma}(x)}|H|^2}{(l\delta_1)^{3}}\le\pi\delta^2(\frac{\alpha}{2}).$$ Thus by Corollary \ref{graph condition} we know \begin{align}\label{lip estimate}
|q_0(x-y)|\le l\delta_1|x-y|. \end{align} Especially, if we take $y=0\in spt\mu_V\cap B_{\beta R}(0)$, then \begin{align}\label{Hausdorff distance estimate}
\sup_{x\in G}|q_0(x)|\le \delta_1 l|x|\le\delta_1 l\beta R. \end{align} Moreover, if we define $p_0:\mathbb{R}^{2+k}\to \mathbb{R}^2$, $p_0(x_1,x_2)=x_1$ and $\Omega_0=p_0(G)$, then for $x,y\in G$, by (\ref{lip estimate}) we know
$$|q_0(x-y)|\le \frac{\delta_1 l}{\sqrt{1-(\delta_1 l)^2}}|p_0(x-y)|\le\frac{2\delta_1l}{\sqrt{3}}|p_0(x-y)|\ (\text{for } \delta_1\le \frac{1}{2}).$$
Thus
$$G=Graph f_0,\ \ \ f_0(p_0(x))=q_0(x):\Omega_0\to \mathbb{R}^k, \ \ Lipf_0\le \frac{2\delta_1l}{\sqrt{3}}.$$
Now by the extension theorem for Lipschitz functions, there exists a Lipschitz function $\tilde{f}:\mathbb{R}^2\to \mathbb{R}^k$ such that $\tilde{f}=f_0$ on $\Omega_0$ and $Lip\tilde{f}\le kLipf_0\le \frac{2k\delta_1l}{\sqrt{3}}$. Noting that $|f_0|\le \sup_{G}|q_0|\le \delta_1 l \beta R$, we can put $f=\min\{\max\{\tilde{f},-l\beta R\},l\beta R\}$ (componentwise) and obtain an extension $f$ of $f_0$ with
\begin{align}\label{lipconstant}
Lipf\le \frac{2k\delta_1l}{\sqrt{3}}\ \ \ \text{ and }\ \ \ sup|f|\le l\beta R.
\end{align}
Noting $spt\mu_V\backslash Graphf\subset spt\mu_V\backslash G$, we estimate $\mu_V\big((spt\mu_V\backslash G)\cap B_{\beta R}(0)\big)$ next.
For any $x\in (spt\mu_V\backslash G)\cap B_{\beta R}(0)$, there is $\sigma_{x}\in (0,\frac{R}{10})$ such that
\begin{align*}
(l\delta_1)^{-5}E(x,\sigma_{x},\mathbb{R}^2\times \{0\})+(l\delta_1)^{-3}\int_{B_{\sigma_x}(x)}|H|^2\ge\pi\delta_2^2.
\end{align*}
Let
\begin{align*}
A:=&\{x\in spt\mu_V\cap B_{\beta R}(0):(l\delta_1)^{-5}E(x,\sigma_x,\mathbb{R}^2\times \{0\})\ge\frac{1}{2}\pi\delta_2^2\},\\
B:=&\{x\in spt\mu_V\cap B_{\beta R}(0):(l\delta_1)^{-3}\int_{B_{\sigma_x}(x)}|H|^2\ge\frac{1}{2}\pi\delta_2^2\}.
\end{align*}
Then $(spt\mu_V\backslash G)\cap B_{\beta R}(0)\subset A\cup B$.
By the $5$-times lemma, there exists a disjoint subcollection $\{B_{\sigma_{x_j}}(x_j)\}_{j=1}^{\infty}$ of $\{B_{\sigma_x}(x)\}_{x\in A}$ such that $A\subset \cup_{j=1}^{\infty} B_{5\sigma_{x_j}}(x_j)$. For $x\in A$, we know
$$\sigma_x^2\le\frac{2}{\delta_2^2(\delta_1 l)^5}\int_{B_{\sigma_x}(x)}\|p_{T_x\Sigma}-p_{\mathbb{R}^2\times\{0\}}\|^2d\mu_V.$$
Since $5\sigma_{x_j}\le \frac{5 R}{10}\le (1-\beta_0) R$, by (\ref{uplowerbound}) we know $\frac{\mu_V(B_{5\sigma_{x_j}}(x_j))}{\pi (5\sigma_{x_j})^2}\le 2-\frac{\alpha}{2}.$
Thus
\begin{align*}
\mu(A)&\le \sum_{j=1}^{\infty}\mu(B_{5\sigma_{x_j}}(x_j))\le \sum_{j=1}^{\infty}(2-\frac{\alpha}{2})25\pi \sigma_{x_j}^2\\
&\le 50\pi \sum_{j=1}^{\infty}\frac{2}{\delta_2^2(\delta_1 l)^5}\int_{B_{\sigma_{x_j}}(x_j)}\|p_{T_x\Sigma}-p_{\mathbb{R}^2\times\{0\}}\|^2d\mu\\
&\le \frac{100\pi}{\delta_2^2(\delta_1 l)^5}\int_{B_R(0)}\|p_{T_x\Sigma}-p_{\mathbb{R}^2\times\{0\}}\|^2d\mu\\
&\le 100\pi l^{-\frac{5}{2}} R^2 E^{\frac{1}{2}}(0,R,\mathbb{R}^2\times \{0\}),
\end{align*}
where we use the condition $l^{-5}E(0,R,\mathbb{R}^2\times \{0\})\le \delta_2^4 \delta_1^{10}$ in the last inequality.
Similarly, there exists a disjoint subcollection $\{B_{\sigma_{y_j}}(y_j)\}_{j=1}^{N}$ of $\{B_{\sigma_y}(y)\}_{y\in B}$ such that $B\subset \cup_{j=1}^{N} B_{5\sigma_{y_j}}(y_j)$. Since $\int_{B_{\sigma_y}(y)}|H|^2\ge\frac{1}{2}\pi\delta_2^2(l\delta_1)^{3}$ for any $y\in B$, we know $N\le \frac{\int_{B_R(0)}|H|^2}{\delta_2^2(l\delta_1)^{3}}$ and
\begin{align*}
\mu(B)&\le \sum_{j=1}^{N}\mu(B_{5\sigma_{y_j}}(y_j))\le \sum_{j=1}^{N}(2-\frac{\alpha}{2})25\pi \sigma_{y_j}^2\\
&\le \frac{\pi}{2}N R^2\le \frac{\pi R^2}{2\delta_2^2(\delta_1 l)^3}\int_{B_R(0)}|H|^2\\
&\le \pi l^{-\frac{3}{2}} R^2 (\int_{B_R(0)}|H|^2)^{\frac{1}{2}},
\end{align*}
where we use $l^{-3}\int_{B_R(0)}|H|^2\le \delta_2^4 \delta_1^{6}$ in the last line.
As a result,
\begin{align}\label{sptoutgraph}
\mu_V((spt\mu_V\backslash G)\cap B_{\beta R}(0))&\le \pi R^2 l^{-\frac{5}{2}} [100E^{\frac{1}{2}}(0,R,\mathbb{R}^2\times \{0\})+l (\int_{B_R(0)}|H|^2)^{\frac{1}{2}}]\\
&\le101\pi R^2 \delta_2^2\delta_1^3\le 101\frac{\alpha^6}{2^{52}}\pi R^2\nonumber\\
{\color{blue} (\text{ since } \beta>\frac{\alpha^3}{2^{21}})}&<\frac{1}{2}\pi(\beta R)^2\nonumber\\
{\color{blue}(\text{ by } 0\in spt\mu_V\text{ and }(\ref{uplowerbound}))}&< \mu_V(B_{\beta R}(0)).\nonumber
\end{align}
So $G\neq \emptyset$. Taking $ x_0\in G$, by (\ref{Hausdorff distance estimate}) and (\ref{lip estimate}) we know $|q_0(x_0)|\le \delta_1l\beta R$ and
\begin{align}\label{Hausdorff distance estimate outside graph}
\sup_{y\in spt\mu_V\cap B_{\beta R}(0)}|q_0(y)|\le \sup_{y\in spt\mu_V\cap B_{\beta R}(0)}|q_0(y-x_0)|+|q_0(x_0)| \le3\delta_1 l\beta R. \end{align}
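Indeed, since $x_0\in G$ and $|y-x_0|\le 2\beta R$ for every $y\in spt\mu_V\cap B_{\beta R}(0)$, applying (\ref{lip estimate}) with $x=x_0$ gives
$$|q_0(y)|\le |q_0(y-x_0)|+|q_0(x_0)|\le \delta_1 l\cdot 2\beta R+\delta_1 l\beta R=3\delta_1 l\beta R.$$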
Next, we estimate $\mathcal{H}^2\big((Graph f\backslash spt\mu_V)\cap B_{\frac{\beta}{4} R}(0)\big)$. For this, set $F=Graph f$ and denote
\begin{align*}
C:=(F\backslash spt\mu_V)\cap B_{\frac{\beta}{4} R}(0).
\end{align*}
For any $\eta\in C$, take $\sigma_{\eta}$ to be the smallest $\sigma$ such that $B_{\frac{\sigma}{2}}(\eta)\cap spt\mu_V=\emptyset$ but $B_{\frac{3\sigma}{4}}(\eta)\cap spt\mu_V\neq\emptyset$. Since $\eta\notin spt\mu_V$, we have $\sigma_{\eta}>0$; and since $0\in spt\mu_V$, we know $\frac{\sigma_{\eta}}{2}\le |\eta|\le \frac{\beta}{4} R$ and hence $\sigma_{\eta}\le \frac{\beta R}{2}$. Now, $B_{\frac{3\sigma_{\eta}}{4}}(\eta)\cap spt\mu_V\neq\emptyset$ implies there is $ \xi_{\eta}\in spt\mu_V\cap B_{\frac{3\sigma_{\eta}}{4}}(\eta)\subset spt\mu_V\cap B_{\frac{3\beta R}{8}}(\eta)\subset spt\mu_V\cap B_{\beta R}(0)$. Thus $B_{\sigma_{\eta}}(\eta)\supset B_{\frac{1}{4}\sigma_{\eta}}(\xi_{\eta})$ and by (\ref{uplowerbound}) we know \begin{align}\label{onehand} \mu(B_{\sigma_{\eta}}(\eta))\ge \mu(B_{\frac{1}{4}\sigma_{\eta}}(\xi_{\eta}))\ge (1-2\delta_0)\pi (\frac{1}{4}\sigma_{\eta})^2. \end{align} On the other hand, since $B_{\frac{\sigma_{\eta}}{2}}(\eta)\cap spt\mu=\emptyset$, by the monotonicity formula (\ref{monotonicity equality}) we know \begin{align}\label{otherhand}
\int_{B_{\sigma_{\eta}}(\eta)}|\frac{\nabla^{\bot}r}{r}+\frac{H}{4}|^2&=\frac{\mu(B_{\sigma_{\eta}}(\eta))}{\sigma_{\eta}^2}
+\frac{1}{16}\int_{B_{\sigma_{\eta}}(\eta)}|H|^2+\frac{1}{2\sigma_{\eta}^2}\int_{B_{\sigma_{\eta}}(\eta)}r\langle \nabla^{\bot}r,H\rangle\nonumber\\
&\ge (1-\varepsilon)\frac{\mu(B_{\sigma_{\eta}}(\eta))}{\sigma_{\eta}^2}+(\frac{1}{16}-\frac{1}{4\varepsilon})\int_{B_{\sigma_{\eta}}(\eta)}|H|^2. \end{align} Taking $\varepsilon=\frac{1}{2}$ in (\ref{otherhand}) and using (\ref{onehand}), we get \begin{align}\label{long} \frac{(1-2\delta_0)\pi\sigma_{\eta}^2}{16}\le\mu(B_{\sigma_{\eta}}(\eta))
&\le \Big(2\int_{B_{\sigma_{\eta}}(\eta)}|\frac{\nabla^{\bot}r}{r}+\frac{H}{4}|^2+\frac{7}{8}\int_{B_{\sigma_{\eta}}(\eta)}|H|^2\Big)\sigma_\eta^2\nonumber\\
&\le \Big(4\int_{B_{\sigma_{\eta}}(\eta)}|\frac{\nabla^{\bot}r}{r}|^2+\frac{9}{8}\int_{B_{\sigma_{\eta}}(\eta)}|H|^2\Big)\sigma_{\eta}^2\nonumber\\ {\color{blue} (By\ (\ref{remainder formula}))}
&\le\Big(4\int_{B_{\sigma_{\eta}}(\eta)}|\frac{p_{T_x\Sigma}^{\bot}(x-\eta)}{|x-\eta|^2}|^2+2\int_{B_{\sigma_{\eta}}(\eta)}|H|^2\Big)\sigma_{\eta}^2\nonumber\\ {\color{blue}(Since\ spt\mu\cap B_{\frac{\sigma_{\eta}}{2}}(\eta)=\emptyset)}&\le\frac{4\sigma_{\eta}^2}{(\frac{\sigma_{\eta}}{2})^2}
\int_{B_{\sigma_{\eta}}(\eta)}|p_{T_x\Sigma}^{\bot}\big(\frac{x-\eta}{|x-\eta|}\big)|^2+\underline{2\sigma_{\eta}^2\int_{B_{\sigma_{\eta}}(\eta)}|H|^2}\nonumber\\
{\color{blue}(-2\sigma_{\eta}^2\int_{B_{\sigma_{\eta}}(\eta)}|H|^2)}
&\lesssim\underline{32\int_{B_{\sigma_{\eta}}(\eta)}\|p_{T_x\Sigma}^{\bot}-q_0\|^2}+32\int_{B_{\sigma_{\eta}}(\eta)}|q_0\big(\frac{x-\eta}{|x-\eta|}\big)|^2\nonumber\\
{\color{blue}(-32\int_{B_{\sigma_{\eta}}(\eta)}\|p_{T_x\Sigma}-p_0\|^2)}
&\lesssim 32\mu(B_{\sigma_{\eta}}(\eta)\backslash F)+32\int_{B_{\sigma_{\eta}}(\eta)\cap F}|q_0\big(\frac{x-\eta}{|x-\eta|}\big)|^2\nonumber\\ {(\color{blue} By\ Lipf\le \frac{2k\delta_1l}{\sqrt{3}})} &\le 32\mu(B_{\sigma_{\eta}}(\eta)\backslash F)+32(\frac{2k\delta_1l}{\sqrt{3}})^2\mu(B_{\sigma_{\eta}}(\eta))\nonumber\\ &\le 32\mu(B_{\sigma_{\eta}}(\eta)\backslash F)+2^6(k\delta_1l)^2(2-\frac{\alpha}{2})\sigma_{\eta}^2, \end{align} where we use the non-standard notation \begin{align*} &A_1+\underline{S_1}\\ (-S_1)\lesssim &A_2+\underline{S_2}\\ (-S_2)\lesssim & A_3 \end{align*} to mean $A_1\le A_2+S_1\le A_3+S_1+S_2$ to save space. {\color{red} Fixing $\delta_1=\frac{1}{2^7k}$}, then \begin{align}\label{short}
2\sigma_{\eta}^2\int_{B_{\sigma_{\eta}}(\eta)}|H|^2+2^6(k\delta_1l)^2(2-\frac{\alpha}{2})\sigma_{\eta}^2 \le \big(2l^3\delta_2^4\delta_1^6+2^6(k\delta_1l)^2(2-\frac{\alpha}{2})\big)\sigma_{\eta}^2\le 2^{-6}\sigma_{\eta}^2. \end{align} Noticing $\frac{(1-2\delta_0)\pi\sigma_{\eta}^2}{16}\ge \frac{\sigma_{\eta}^2}{2^5}$ and substituting (\ref{short}) into (\ref{long}), then we get \begin{align*}
\frac{\sigma_\eta^2}{2^5}\le 32(\int_{B_{\sigma_{\eta}}(\eta)}\|p_{T_x\Sigma}-p_0\|^2+\mu(B_{\sigma_{\eta}}(\eta)\backslash F))+2^{-6}\sigma_{\eta}^2, \end{align*} i.e., \begin{align*}
\sigma_{\eta}^2\le2^{11}\big(\int_{B_{\sigma_{\eta}}(\eta)}\|p_{T_x\Sigma}-p_0\|^2+\mu(B_{\sigma_{\eta}}(\eta)\backslash F)\big). \end{align*}
Denote $\eta'=p_0(\eta)$; then the Lebesgue measure satisfies
\begin{align*}
\mathcal{L}^2(B_{5\sigma_{\eta}}^{\mathbb{R}^2\times\{0\}}(\eta'))=\pi(5\sigma_{\eta})^2\le2^{11}\cdot 5^2\pi\big(\int_{B_{\sigma_{\eta}}(\eta)}\|p_{T_x\Sigma}-p_0\|^2+\mu(B_{\sigma_{\eta}}(\eta)\backslash F)\big).
\end{align*}
Again by the $5$-times lemma, there exists a disjoint subcollection $\{B^{\mathbb{R}^2\times\{0\}}_{\sigma_{\eta_j}}(\eta'_j)\}_{j=1}^{\infty}$ of $\{B^{\mathbb{R}^2\times\{0\}}_{\sigma_{\eta}}(\eta')\}_{\eta\in C}$ such that $p_0(C)\subset \cup_{j=1}^{\infty} B^{\mathbb{R}^2\times\{0\}}_{5\sigma_{\eta_j}}(\eta'_j)$. Thus
$$\cup_{j=1}^{\infty}B_{\sigma_{\eta_j}}(\eta_j)\subset \cup_{j=1}^{\infty}(B_{\sigma_{\eta_j}}^{\mathbb{R}^2\times\{0\}}(\eta'_j)\times\mathbb{R}^k)\cap B_{\beta R}(0)\subset B_{\beta R}(0)$$
and
\begin{align*}
\mathcal{L}^2(p_0(C))\le \sum_{j=1}^{\infty}\mathcal{L}^2(B_{5\sigma_{\eta_j}}^{\mathbb{R}^2\times\{0\}}(\eta'_j))
\le 2^{18}\big(\int_{B_{\beta R}(0)}\|p_{T_x\Sigma}-p_0\|^2+\mu(B_{\beta R}(0)\backslash F)\big).
\end{align*}
Moreover, since $C\subset Graph f$ with $Lipf\le 1$, and $(spt\mu_V\backslash F)\cap B_{\beta R}$ is contained in $(spt\mu_V\backslash G)\cap B_{\beta R}$, whose measure has been estimated, we know
\begin{align}\label{graphoutspt}
\mathcal{H}^2&((F\backslash spt\mu)\cap B_{\frac{\beta}{4} R}(0))\nonumber\\
&\le(\sqrt{2})^2\mathcal{L}^2(p_0(C))\nonumber\\
&\le 2^{19}[\int_{B_{R}(0)}\|p_{T_x\Sigma}-p_0\|^2+\pi R^2 l^{-\frac{5}{2}} \big(100E^{\frac{1}{2}}(0,R,\mathbb{R}^2\times \{0\})+l (\int_{B_R(0)}H^2)^{\frac{1}{2}}\big)]\nonumber\\
&\le2^{19}[101\pi R^2 l^{-\frac{5}{2}} \big(E^{\frac{1}{2}}(0,R,\mathbb{R}^2\times \{0\})+l (\int_{B_R(0)}H^2)^{\frac{1}{2}}\big)]\nonumber\\
&\le2^{26}\pi R^2 \big(l^{-\frac{5}{2}} E^{\frac{1}{2}}(0,R,\mathbb{R}^2\times \{0\})+l^{-\frac{3}{2}} (\int_{B_R(0)}H^2)^{\frac{1}{2}}\big).
\end{align}
As a result, if we take $\beta_3(\alpha)=\frac{1}{4}\beta_2(\alpha)$, then for
$\beta\in(\beta_4,\beta_3)$, by (\ref{lipconstant}), (\ref{sptoutgraph}), (\ref{graphoutspt}) and (\ref{Hausdorff distance estimate outside graph}), we are done. \end{proof}
Combining this theorem with the tilt-excess estimate (Corollary \ref{Tilt-Excess estimate}), we can finish the proof of Theorem \ref{Lipschitz Approximation for 2-varifold}. \begin{proof}[Proof of Theorem \ref{Lipschitz Approximation for 2-varifold}]
By (\ref{upperbound}) and Corollary \ref{Tilt-Excess estimate} we know, for any $\xi \in spt\mu_V$ and $R<\frac{1}{2}\delta^{\frac{1}{2}}\rho$, there exists a plane $T=T(\xi,R)$ such that $$\frac{\mu(B_{R}(\xi))}{\pi R^2}\le 1+36\delta^{\frac{1}{2}}=:2-\alpha,\ \ \ \ \text{ and } \ \ \ \ \ \ E(\xi,R,T)\le 2^{25}\delta^{\frac{1}{4}}.$$
Since $\alpha=1-36\delta^{\frac{1}{2}}\in [\frac{1}{2},1]${\color{red} \ (for $\delta\le \frac{1}{2^{16}}$)}, we know $\beta_4(\alpha)\le\frac{1}{2^{21}}\le\frac{1}{2^{14}}\le\beta_3(\alpha)$ and $\delta_3^2\ge \frac{1}{2^{186}k^5}$. So {\color{blue} if
\begin{align}\label{assumption}
l^{-20}\delta\le \delta_5:= \frac{1}{2^{844}k^{20}},
\end{align}}
then \begin{align}\label{condition}
l^{-5}E(\xi,R,T)\le \delta_3^2\text{ and } l^{-3}W \le \delta_3^2(\text{ by } \int_{B_R(\xi)}|H|^2\le \delta). \end{align} Thus by Proposition \ref{Lipschitz Approximation 0.5 version} we know, for any $\beta\in (\frac{1}{2^{21}},\frac{1}{2^{14}})$, there exists a Lipschitz function $f=(f^1,f^2, \ldots, f^k):B_{\beta R}^{T}(\xi)\to T^{\bot}$ with
$$Lipf\le l,\ \ \sup_{x\in B_{\beta R}(\xi)}|f|\le l\beta R $$ and for $F=Graphf$, \begin{align*} \mathcal{H}^2((F\backslash spt\mu_V)\cap B_{\beta R}(\xi))&+\mu_V(B_{\beta R}(\xi)\backslash F)\\ &\le 2^{27}(l^{-\frac{5}{2}} E^{\frac{1}{2}}+l^{-\frac{3}{2}} W^{\frac{1}{2}})\pi R^2\\ &\le 2^{27} (l^{-\frac{5}{2}} 2^{13}\delta^{\frac{1}{8}}+l^{-\frac{3}{2}} \delta^{\frac{1}{2}})\pi R^2\\ &\le 2^{83}l^{-\frac{5}{2}}\delta^{\frac{1}{8}}\pi (\beta R)^2. \end{align*} Moreover, for $q:\mathbb{R}^{2+k}\to T^{\bot}$ the orthogonal projection, we have \begin{align*}
\sup_{x\in B_{\beta R}(\xi)\cap spt\mu_V}|q(x)|\le l\beta R. \end{align*} Especially, for $\delta\le \delta_6=\delta_5^2=\frac{1}{2^{1688}k^{40}}$, we can take $l=\delta^{\frac{1}{40}}$ such that (\ref{assumption}) holds. So, if we fix $\beta=\frac{1}{2^{15}}$ and denote $\sigma=\beta R$, then we have actually proved: for any $\xi\in B_{{\frac{\delta^{\frac{1}{2}}}{2^{16}}\rho}}(0)$ and any $\sigma\in(0,{\frac{\delta^{\frac{1}{2}}}{2^{16}}\rho})$, there exist a plane $T=T(\xi,\sigma)$ and a vector-valued Lipschitz function $f=(f^1,f^2, \ldots, f^k):B_{\sigma}^{T}(\xi)\to T^{\bot}$ with
$$Lipf\le \delta^{\frac{1}{40}},\ \ \sup_{x\in B_{\sigma}(\xi)}|f|\le \delta^{\frac{1}{40}}\sigma $$ and for $F=Graphf$, \begin{align*} \mathcal{H}^2((F\backslash spt\mu_V)\cap B_{\sigma}(\xi))+\mu_V(B_{\sigma}(\xi)\backslash F)\le 2^{83}\delta^{\frac{1}{16}}\pi \sigma^2. \end{align*} Moreover, for $q:\mathbb{R}^{2+k}\to T^{\bot}$ the orthogonal projection, we have \begin{align*}
\sup_{x\in B_{\sigma}(\xi)\cap spt\mu_V}|q(x)|\le \delta^{\frac{1}{40}}\sigma. \end{align*} \end{proof} \section{$C^\alpha$-Regularity}\label{reifenberg} In this section, we combine the Lipschitz approximation theorem and Reifenberg's topological theorem to finish the proof of the $C^{\alpha}$-regularity theorem. We have proved half of it in (\ref{semireifenberg}). As noted in the last section, to prove the Lipschitz approximation Theorem \ref{Lipschitz Approximation for 2-varifold}, the integral semi-Reifenberg condition (Corollary \ref{Tilt-Excess estimate}) is enough. We will show that Theorem \ref{Lipschitz Approximation for 2-varifold} in turn provides the other half of the Reifenberg condition. Together they complete the proof of the $C^{\alpha}$-regularity.
\begin{thm}[$\mathbf{Allard-Reifenberg\ Type\ Regularity}$]\label{Holder Regularity} Assume $V=\underline{v}(\Sigma,\theta)$ is a rectifiable 2-varifold in $U\supset B_{\rho}(0)\subset \mathbb{R}^{2+k}$ with $0\in sptV$ and $\theta\ge 1$ for $\mu$-a.e. $x\in U$, where $\mu:=\mu_V:=\mathcal{H}^2\llcorner \theta$. Then there exists a small $\delta'_6(=\frac{1}{2^{3536}k^{80}})$ such that for any $\delta\le \delta'_6$, if
$$\frac{\mu(B_{\rho}(0))}{\pi \rho^2}\le1+\delta\text{\ and \ }\int_{B_{\rho}(0)}|H|^2\le \delta,$$ then for any $\xi\in B_{\frac{1}{2}\delta^{\frac{1}{2}}\rho}(0)$ and $\sigma\in (0,\frac{\delta^{\frac{1}{2}}}{2^{18}}\rho)$, there exists a plane $T=T(\xi, \sigma)$ passing through $\xi$ such that \begin{align}\label{reifenberg condition}
\sigma^{-1}d_{\mathcal{H}}(spt\mu_V\cap B_{\sigma}(\xi), T\cap B_{\sigma}(\xi))\le 2^{44}\delta^{\frac{1}{80}},
\end{align}
where $d_{\mathcal{H}}$ is the Hausdorff distance in $\mathbb{R}^{2+k}$.
Moreover, for any $\alpha\in (0,1)$ and $\varepsilon=\varepsilon(k,\alpha)$ the small constant in Reifenberg's topological disk Theorem \ref{reifenberg theorem}, if $2^{44}\delta^{\frac{1}{80}}\le \varepsilon$, then $spt\mu_V\cap B_{\frac{1}{2^{19}}\delta^{\frac{1}{2}}\rho}(0)$ is $C^{\alpha}$ homeomorphic to a $2$-dimensional topological closed disk. \end{thm} \begin{proof}
For $\xi\in B_{\frac{1}{2}\delta^{\frac{1}{2}}\rho}(0)$ and $\sigma<\frac{1}{2^{18}}\delta^{\frac{1}{2}}\rho$, consider the ball $B_{4\sigma}(\xi)$. By Lemma \ref{Semi-Reifenberg Condition}, there exists a plane $T$ passing through $\xi$ such that for any $x\in
spt\mu_V\cap B_{4\sigma}(\xi)$,
\begin{align}\label{semi}
\sigma^{-1}d(x,T)\le 2^{15}\delta^{\frac{1}{16}}.
\end{align}
For the same $T$, by Lemma \ref{integral gradient estimate} we know \begin{align}\label{tilt} E(\xi,2\sigma,T)
\le 4\int_{B_{4\sigma}(\xi)}|H|^2+592\cdot (4\sigma)^{-2}\int_{B_{4\sigma}(\xi)}(\frac{d(x,T)}{4\sigma})^2
\le2^{37}\delta^{\frac{1}{8}}. \end{align} Replacing Corollary \ref{Tilt-Excess estimate} by (\ref{tilt}) in the proof of Theorem \ref{Lipschitz Approximation for 2-varifold}, we know that for $\delta'_5=\frac{1}{2^{1768}k^{40}}$, $l=\delta^{\frac{1}{80}}$, $\delta\le\delta'_6:={\delta'_5}^2$, $\beta=\frac{1}{2^{15}}$ and $2\sigma=\beta R$, there exists a Lipschitz function $f=(f^1,f^2,\ldots,f^k):B_{2\sigma}(\xi)\cap T\to \mathbb{R}^k:=T^{\bot}$ with
$$Lipf\le\delta^{\frac{1}{80}},\ \ \operatorname*{sup}\limits_{x'\in B_{2\sigma}(\xi)\cap T}|f(x')|\le\delta^{\frac{1}{80}}\cdot2\sigma$$
and \begin{align}\label{smallmeasure}
\mathcal{H}^2((graphf\backslash sptV)\cap B_{2\sigma}(\xi))+\mu_V(B_{2\sigma}(\xi)\backslash graphf)\le 2^{76}\delta^{\frac{1}{32}}\pi (2\sigma)^2. \end{align}
Now, for any $x'\in B_{\sigma}(\xi)\cap T$, denote $x=(x',f(x'))$ and define $d(x)=\min\{d(x,spt\mu_V\cap B_{\sigma}(\xi)),\frac{1}{2}\sigma\}$. Then for any $y'\in B_{\frac{d(x)}{4}}(x')\cap B_{(1-2\delta^{\frac{1}{80}})\sigma}(\xi)\cap T$ and $y=(y',f(y'))$, we have
\begin{align*}
d(y,x)\le \sqrt{1+(Lipf)^2}d(x',y')\le \sqrt{1+\delta^{\frac{1}{40}}}\frac{d(x)}{4}\le \frac{d(x)}{2}
\end{align*}
and
\begin{align*}
d(y,\xi)\le |y'-\xi|+|f(y')|\le (1-2\delta^{\frac{1}{80}})\sigma+\delta^{\frac{1}{80}}(2\sigma)=\sigma.
\end{align*}
Thus
\begin{align}\label{largedomain}
B_{\frac{d(x)}{4}}(x')\cap B_{(1-2\delta^{\frac{1}{80}})\sigma}(\xi)\cap T\subset p(B_{\frac{d(x)}{2}}(x)\cap B_{\sigma}(\xi)),
\end{align}
where $p:\mathbb{R}^{2+k}\to T$ is the orthogonal projection. We now claim
$$d(x)\le 2^{43}\delta^{\frac{1}{80}}\sigma.$$
To see this, we assume $d(x)\ge 16\delta^{\frac{1}{80}}\sigma$ without loss of generality.
In the case $d(x',\partial B_{(1-2\delta^{\frac{1}{80}})\sigma}(\xi)\cap T)\le2\delta^{\frac{1}{80}}\sigma\le \frac{d(x)}{8}$, there exists a point $x''\in \partial B_{(1-2\delta^{\frac{1}{80}})\sigma}(\xi)\cap T$ such that $B_{\frac{d(x)}{4}}(x')\supset B_{\frac{d(x)}{8}}(x'')$. Moreover, since $d(x)\le \frac{\sigma}{2}\le (1-2\delta^{\frac{1}{80}})\sigma$, we know for $x'''=x''+\frac{\xi-x''}{|\xi-x''|}\frac{d(x)}{16}$, there holds
$$B_{\frac{d(x)}{8}}(x'')\cap B_{(1-2\delta^{\frac{1}{80}})\sigma}(\xi)\cap T\supset B_{\frac{d(x)}{16}}(x''')\cap T.$$
In the remaining case, where $x'\in B_{(1-2\delta^{\frac{1}{80}})\sigma}(\xi)\cap T$ with $d(x',\partial B_{(1-2\delta^{\frac{1}{80}})\sigma}(\xi)\cap T)>2\delta^{\frac{1}{80}}\sigma$, by letting $x'''=x'+\frac{\xi-x'}{|\xi-x'|}\frac{d(x)}{16}$ we also get
$$|x'''-\xi|+\frac{d(x)}{16}=\max\{|x'-\xi|, \frac{d(x)}{8}-|x'-\xi|\}\le (1-2\delta^{\frac{1}{80}})\sigma$$
and
$$B_{\frac{d(x)}{16}}(x''')\cap T\subset B_{(1-2\delta^{\frac{1}{80}})\sigma}(\xi)\cap B_{\frac{d(x)}{4}}(x')\cap T.$$
Thus in either case, by (\ref{largedomain}) we know,
\begin{align}\label{dsmall}
\mathcal{H}^2(B_{\frac{d(x)}{2}}(x)\cap graphf \cap B_{\sigma}(\xi))\ge \int_{B_{\frac{d(x)}{16}}(x''')\cap T}\sqrt{1+|\nabla f|^2(y')}dy'\ge \frac{\pi d^2(x)}{2^8}.
\end{align}
But by the definition of $d(x)$, we know $B_{\frac{d(x)}{2}}(x)\cap spt\mu_V\cap B_\sigma(\xi)=\emptyset$, so by (\ref{smallmeasure}), we get
\begin{align}\label{sigmalarge}
\mathcal{H}^2(B_{\frac{d(x)}{2}}(x)\cap graphf \cap B_{\sigma}(\xi))\le\mathcal{H}^2((graphf\backslash sptV)\cap B_{2\sigma}(\xi))\le 2^{76}\delta^{\frac{1}{32}}\pi (2\sigma)^2.
\end{align}
Combining (\ref{dsmall}) and (\ref{sigmalarge}), we know $d(x)\le 2^{43}\delta^{\frac{1}{64}}\sigma\le 2^{43}\delta^{\frac{1}{80}}\sigma$.
Moreover, since $\delta\le \delta'_6=\frac{1}{2^{3536}k^{80}}\le \frac{1}{2^{3520}}$, we know $d(x)\le 2^{43}\delta^{\frac{1}{80}}\sigma<\frac{\sigma}{2}$. Thus by the definition of $d(x)$ we know
$d(x,spt\mu_V\cap B_{\sigma}(\xi))=d(x)\le 2^{43}\delta^{\frac{1}{80}}\sigma$ and hence
\begin{align}\label{anothersemi}
d(x',spt\mu_V\cap B_{\sigma}(\xi))\le |f(x')|+d(x,spt\mu_V)\le\delta^{\frac{1}{80}}(2\sigma)+2^{43}\delta^{\frac{1}{80}}\sigma\le 2^{44}\delta^{\frac{1}{80}}\sigma.
\end{align}
Combining (\ref{semi}) and (\ref{anothersemi}), we get
\begin{align*}
\sigma^{-1}d_{\mathcal{H}}(spt\mu_V\cap B_{\sigma}(\xi), T\cap B_{\sigma}(\xi))\le \max\{2^{15}\delta^{\frac{1}{16}},2^{44}\delta^{\frac{1}{80}}\}=2^{44}\delta^{\frac{1}{80}}.
\end{align*}
This is the complete Reifenberg condition (\ref{reifenberg condition}). The second part of Theorem \ref{Holder Regularity} is just a restatement of Reifenberg's Theorem \ref{reifenberg theorem}. \end{proof}
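\begin{rmk} The smallness condition $2^{44}\delta^{\frac{1}{80}}\le \varepsilon$ in the second part can be made explicit: since $2^{44\cdot 80}=2^{3520}$, it is equivalent to
$$\delta\le 2^{-3520}\varepsilon^{80},$$
so the topological disk conclusion holds whenever $\delta\le \min\{\delta'_6,\ 2^{-3520}\varepsilon^{80}\}$. \end{rmk}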
\begin{rmk} We do not know whether some Lipschitz regularity holds under the same condition. See Corollary \ref{removability} for some positive evidence. \end{rmk}
\section{The Density Identity and Topological Finiteness} \label{application} \subsection{The Density Formula}
This section is the starting point of this paper: we ask how a minimal surface behaves under inversion.
Our first observation is the following density formula, which explains the meaning of $\Theta(\Sigma,\infty)$ in the inversion setting. It turns out that the result does not depend on the minimal surface equation.
Assume $\Sigma\subset \mathbb{R}^n$ is an immersed surface; we denote the immersion by $f:\Sigma\to \mathbb{R}^n$ and simply call $f:\Sigma\to \mathbb{R}^n$ an immersed surface. We also abuse the notations $\Sigma$ and $f(\Sigma)$, and use $\mathcal{H}^2(B_r(0)\cap \Sigma)$ to mean the Hausdorff measure of the intersection of the extrinsic ball $B_r(0)$ with $f(\Sigma)$. By $d\mu_g$ we mean the volume form of the induced metric $g=f^{*}g_{\mathbb{R}^{n}}$.
\begin{lemma}[\textbf{Density Formula}]\label{Density Formula} Assume $f:\Sigma\to \mathbb{R}^{n}$ is a properly immersed surface satisfying \begin{align}\label{finite willmore}
\int_{\Sigma}|H|^2d\mu_g<+\infty \end{align} and \begin{align}\label{lowerdendity} \Theta_*(\Sigma,\infty)=\liminf_{r\to \infty}\frac{\mathcal{H}^2(B_{r}(0)\cap \Sigma)}{\pi r^2}< +\infty. \end{align} Let $h:\Sigma\to \mathbb{R}^n$ be the inverted surface, that is,
$h(x)=\frac{f(x)}{|f(x)|^2},\ \forall x\in\Sigma$. Denote by $\tilde{\Sigma}=h(\Sigma)$ the image, by $\tilde{H}$ the mean curvature of $\tilde{\Sigma}$, and by $\tilde{g}=dh\otimes dh$ the induced metric. Then
we have the locally antisymmetric transformation formula \begin{align}\label{localinvariant}
\big(\frac{|\tilde{H}|^2}{16}-\big|\frac{\tilde{H}}{4}+\frac{\tilde{\nabla}^{\bot}\tilde{r}}{\tilde{r}}\big|^2\big)d\mu_{\tilde{g}}
=-\big(\frac{|H|^2}{16}-\big|\frac{H}{4}+\frac{\nabla^{\bot}r}{r}\big|^2 \big)d\mu_g \end{align}
for $r=|f|$ and $\tilde{r}=|h|$. Moreover, the density $$\Theta(\Sigma,\infty):=\lim_{r\to+\infty}\frac{\mathcal{H}^2(\Sigma\cap B_r(0))}{\pi r^2}$$ at infinity is well-defined and satisfies the global representation formula
\begin{align}\label{density formula}
\int_{\tilde{\Sigma}\backslash \{0\}}\big(\frac{|\tilde{H}|^2}{16}-\big|\frac{\tilde{H}}{4}+\frac{\tilde{\nabla}^{\bot}\tilde{r}}{\tilde{r}}\big|^2\big)d\mu_{\tilde{g}}=
\begin{cases}
\pi\Theta(\Sigma,\infty) & 0\notin \Sigma,\\
\pi(\Theta(\Sigma,\infty)-\Theta(\Sigma,0)) & 0\in \Sigma,
\end{cases}
\end{align} where $\Theta(\Sigma, 0)=\lim_{r\to 0}\frac{\mu_g(\Sigma\cap B_r(0))}{\pi r^2}$. \end{lemma} \begin{proof} Denote $\tilde{g}=\langle\frac{\partial h}{\partial x^i},\frac{\partial h}{\partial x^j}\rangle dx^i\otimes dx^j$. Then,
$$ h_i=|f|^{-2}f_i-2f|f|^{-4}\langle f,f_i\rangle,$$ \begin{align*}
h_{ij}=|f|^{-2}f_{ij}&-2|f|^{-4}\langle f,f_j\rangle f_i-2|f|^{-4}\langle f,f_i\rangle f_j+8|f|^{-6}f\langle f,f_i\rangle \langle f,f_j\rangle\\
&-2|f|^{-4}f\langle f,f_{ij}\rangle-2|f|^{-4}f\langle f_i,f_j\rangle, \end{align*} and \begin{align*} \tilde{g}_{ij}
=\langle h_i,h_j\rangle=|f|^{-4}g_{ij},\ \ \ \tilde{g}=|f|^{-4}g\ \ \ \ \ \text{ and }\ \ \ \ \tilde{g}^{ij}=|f|^4g^{ij}. \end{align*} So \begin{align*} \tilde{H}=\tilde{g}^{ij}\big(h_{ij}-\langle h_{ij},h_k\rangle \tilde{g}^{kl}h_l\big)
=|f|^4g^{ij}h_{ij}-|f|^8\langle g^{ij}h_{ij}, h_k\rangle g^{kl}h_l, \end{align*} where \begin{align*}
g^{ij}h_{ij}=&|f|^{-2}g^{ij}f_{ij}-4|f|^{-4}g^{ij}\langle f,f_j\rangle f_i+8|f|^{-6}|f^\top|^2f\\
&-2|f|^{-4}\langle f, g^{ij}f_{ij}\rangle-4|f|^{-4}f, \end{align*} and \begin{align*}
\langle g^{ij}h_{ij}, h_k\rangle =&\big(|f|^{-2}g^{ij}f_{ij}-4|f|^{-4}g^{ij}\langle f,f_j\rangle f_i+8|f|^{-6}|f^\top|^2f\\
&-2|f|^{-4}\langle f, g^{ij}f_{ij}\rangle-4|f|^{-4}f\big)\cdot\big(|f|^{-2}f_k-2|f|^{-4}\langle f,f_k\rangle f\big)\\
=&|f|^{-4}\langle g^{ij}f_{ij},f_k\rangle. \end{align*} Thus \begin{align}
\tilde{H}=&|f|^4(|f|^{-2}g^{ij}f_{ij}-4|f|^{-4}g^{ij}\langle f,f_j\rangle f_i+8|f|^{-6}|f^{\top}|^2f-2|f|^{-4}f\langle f,g^{ij}f_{ij}\rangle\nonumber\\
&-4|f|^{-4}f)-|f|^4\langle g^{ij}f_{ij},f_k\rangle g^{kl}(|f|^{-2}f_l-2|f|^{-4}\langle f,f_l\rangle f)\nonumber\\
=&|f|^2H-4f^{\top}+8|f|^{-2}|f^{\top}|^2f-2f\langle f,g^{ij}f_{ij}\rangle-4f+2\langle g^{ij}f_{ij},f^{\top}\rangle f\nonumber\\
=&|f|^2H-4f^{\top}+8|f|^{-2}|f^{\top}|^2f-2f\langle H,f\rangle-4f\label{generalinverth}
\end{align} where in the last step we use the equation $\langle g^{ij}f_{ij},f-f^{\top} \rangle=\langle \big(g^{ij}f_{ij}\big)^{\bot},f\rangle=\langle H,f\rangle$.
Since $h=\frac{f}{|f|^2}$, we know $|h|^2=\frac{1}{|f|^2}$, $\tilde{g}^{ij}=|f|^4g^{ij}=\frac{1}{|h|^4}g^{ij}$ and $f=\frac{h}{|h|^2}$, $f_i=|h|^{-2}h_i-2|h|^{-4}\langle h,h_i\rangle h$, $f^{\bot}=f-g^{ij}\langle f,f_i\rangle f_j$. By
$$\langle f,f_i\rangle=\langle \frac{h}{|h|^2}, \frac{1}{|h|^2}h_i-\frac{2}{|h|^4}\langle h, h_i\rangle h\rangle=\frac{\langle h, h_i\rangle}{|h|^4}-\frac{2\langle h,h_i\rangle}{|h|^4}=-\frac{\langle h,h_i\rangle}{|h|^4},$$ we have \begin{align*} f^{\bot}
=\frac{h}{|h|^2}+|h|^4\tilde{g}^{ij}\frac{\langle h,h_i\rangle}{|h|^4}(\frac{h_j}{|h|^2}-\frac{2\langle h,h_j\rangle h}{|h|^4})
=\frac{h^{\top}}{|h|^2}+\frac{|h|^2-2|h^{\top}|^2}{|h|^4}h. \end{align*} Thus \begin{align}\label{bot}
|f^{\bot}|^2
=\frac{|h^{\top}|^2}{|h|^4}+\frac{2(|h|^2-2|h^{\top}|^2)}{|h|^6}|h^{\top}|^2+\frac{(|h|^2-2|h^{\top}|^2)^2}{|h|^6}
=\frac{|h^{\bot}|^2}{|h|^4}, \end{align}
$$f^{\top}=f-f^{\bot}=-\frac{h^{\top}}{|h|^2}+\frac{2|h^{\top}|^2h}{|h|^4}\ \ , \ \ |f^{\top}|^2=|f|^2-|f^{\bot}|^2
=\frac{|h^{\top}|^2}{|h|^4},$$ and \begin{align}\label{tildeh}
\tilde{H}&=|f|^2H-2\langle H,f\rangle f-4f^{\top}+\frac{8|f^{\top}|^2}{|f|^2}f-4f\nonumber\\
&=\frac{H}{|h|^2}-\frac{2}{|h|^4}\langle H,h\rangle h+4(\frac{|h|^2h^{\top}-2|h^{\top}|^2h}{|h|^4})+\frac{8\frac{|h^{\top}|^2}{|h|^4}}{\frac{|h|^2}{|h|^4}}\frac{h}{|h|^2}-4\frac{h}{|h|^2}\nonumber\\
&=\frac{H}{|h|^2}-\frac{2}{|h|^4}\langle H,h\rangle h-4\frac{h^{\bot}}{|h|^2}. \end{align}
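As a quick consistency check of (\ref{tildeh}): if $\Sigma$ is a plane through the origin, then $\tilde{\Sigma}=\Sigma$, the position vector $h$ is tangent, so $h^{\bot}=0$, and $H=0$; hence (\ref{tildeh}) gives
$$\tilde{H}=\frac{H}{|h|^2}-\frac{2}{|h|^4}\langle H,h\rangle h-4\frac{h^{\bot}}{|h|^2}=0,$$
in accordance with the fact that inversion maps a plane through the origin to itself, which is again minimal.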
Moreover, for $\tilde{r}=|h|$, we know $\tilde{\nabla}^{\bot}\tilde{r}=\frac{h^{\bot}}{|h|}$. So, by (\ref{bot}) and (\ref{tildeh}) we get, \begin{align}\label{bracketterm}
\langle \tilde{H}, \frac{h^{\bot}}{|h|^2}\rangle=\langle \frac{H}{|h|^2}-\frac{2}{|h|^4}\langle H,h\rangle h-4\frac{h^{\bot}}{|h|^2},\frac{h}{|h|^2} \rangle=-|f|^2\langle H,f\rangle-4|f^{\bot}|^2 \end{align} and \begin{align*}
-\big(\frac{|\tilde{H}|^2}{16}-\big|\frac{\tilde{H}}{4}+\frac{\tilde{\nabla}^{\bot}\tilde{r}}{\tilde{r}}\big|^2\big)d\mu_{\tilde{g}}
&=\big(\frac{1}{2}\langle\tilde{H},\frac{h^{\bot}}{|h|^2}\rangle+\big|\frac{h^{\bot}}{|h|^2}\big|^2\big)|f|^{-4}d\mu_g\\
&=\big(-\frac{1}{2}|f|^2\langle H,f\rangle-2|f^{\bot}|^2+|f^{\bot}|^2\big)|f|^{-4}d\mu_g\\
&=-\big(\frac{|f^{\bot}|^2}{|f|^4}+\frac{1}{2}\langle H,\frac{f^{\bot}}{|f|^2}\rangle\big)d\mu_g\\
&=\big(\frac{|H|^2}{16}-\big|\frac{H}{4}+\frac{\nabla^{\bot}r}{r}\big|^2 \big)d\mu_g. \end{align*}
The following argument is taken from \cite[Appendix]{KS}. By (\ref{lowerdendity}), (\ref{finite willmore}) and Corollary \ref{integral discription of upper density at infinity}, we know $\Theta^*(\Sigma,\infty)<+\infty$ and \begin{align}\label{integral finite}
\int_{\Sigma}\big|\frac{\nabla^{\bot}r}{r}\big|^2<+\infty. \end{align} Thus for any $\varepsilon>0$, there exists $\rho_0>0$ such that for any $\rho\ge \rho_0$, we have \begin{align*}
\int_{\Sigma\backslash B_{\rho_0}}|H|^2d\mu_g\le \varepsilon \ \ \ \ \text{ and } \ \ \ \ \frac{\mathcal{H}^2(\Sigma\cap B_\rho(0))}{\pi \rho^2}\le \Theta^*(\Sigma,\infty)+\varepsilon. \end{align*} On the one hand, \begin{align*}
|\frac{1}{2\rho^2}\int_{B_{\rho}(0)}\langle r\nabla^{\bot} r,H \rangle d\mu_g|
&\le \frac{1}{2\rho}\int_{B_{\rho_0}(0)}|H|+\frac{\pi\varepsilon^{\frac{1}{2}}}{2}(\Theta^*(\Sigma,\infty)+\varepsilon)^{\frac{1}{2}}. \end{align*} Letting $\rho\to \infty$ first and then $\varepsilon\to 0$, we get \begin{align}\label{rhovanishing} \lim_{\rho\to \infty}\frac{1}{2\rho^2}\int_{B_{\rho}(0)}\langle r\nabla^{\bot} r,H \rangle d\mu_g=0. \end{align} On the other hand, by \begin{align*}
|\frac{1}{2\sigma^2}\int_{B_{\sigma}(0)}\langle r\nabla^{\bot} r,H \rangle d\mu_g|
\le \frac{1}{2}\big(\frac{\mathcal{H}^2(\Sigma\cap B_\sigma(0))}{\sigma^2}\int_{\Sigma\cap B_\sigma(0)}|H|^2\big)^{\frac{1}{2}}, \end{align*} we also know \begin{align}\label{sigmavanishing} \lim_{\sigma\to 0}\frac{1}{2\sigma^2}\int_{B_{\sigma}(0)}\langle r\nabla^{\bot} r,H \rangle d\mu_g=0. \end{align} So, by (\ref{finite willmore}), (\ref{integral finite}), (\ref{rhovanishing}) and (\ref{sigmavanishing}), letting $\rho\to \infty$ and $\sigma\to 0$ in the monotonicity formula (\ref{monotonicity equality}), we know that $\Theta(\Sigma,\infty)$ is well-defined and satisfies \begin{align}\label{density diff} \pi(\Theta(\Sigma,\infty)-\Theta(\Sigma,0))
&=-\int_{\Sigma}\big(\frac{|H|^2}{16}-\big|\frac{H}{4}+\frac{\nabla^{\bot}r}{r}\big|^2\big)d\mu_g\\
&=\int_{\tilde{\Sigma}\backslash\{0\}}\big(\frac{|\tilde{H}|^2}{16}-\big|\frac{\tilde{H}}{4}+\frac{\tilde{\nabla}^{\bot}\tilde{r}}{\tilde{r}}\big|^2\big)d\mu_{\tilde{g}},\nonumber \end{align} where in the last line we use (\ref{localinvariant}). \end{proof}
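\begin{rmk} To illustrate (\ref{density formula}), let $\Sigma\subset\mathbb{R}^{2+k}$ be an affine $2$-plane with $d:=d(0,\Sigma)>0$, so that $0\notin\Sigma$ and $\Theta(\Sigma,\infty)=1$. The inverted surface $\tilde{\Sigma}$ is the round $2$-sphere of radius $a=\frac{1}{2d}$ passing through the origin, with center $c$, $|c|=a$. On $\tilde{\Sigma}$ we have $|h-c|=a$ and hence $|h|^2=2\langle h,c\rangle$, which gives
$$\frac{\tilde{\nabla}^{\bot}\tilde{r}}{\tilde{r}}=\frac{h^{\bot}}{|h|^2}=\frac{h-c}{2a^2}=-\frac{\tilde{H}}{4}.$$
Thus the second term of the integrand vanishes and
$$\int_{\tilde{\Sigma}\backslash\{0\}}\frac{|\tilde{H}|^2}{16}d\mu_{\tilde{g}}=\frac{1}{4a^2}\cdot 4\pi a^2=\pi=\pi\Theta(\Sigma,\infty),$$
in accordance with (\ref{density formula}). In particular, the round sphere through the origin is an equality case where the integrand reduces to $\frac{|\tilde{H}|^2}{16}$. \end{rmk}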
\begin{rmk} In the special case of minimal surfaces, the density formula reads \begin{align*}
\int_{\tilde{\Sigma}\backslash \{0\}}|\tilde{H}|^2d\mu_{\tilde{g}}
=16\int_{\Sigma}\big|\frac{\nabla^{\bot}r}{r}\big|^2d\mu_g=
\begin{cases}
16\pi\Theta(\Sigma,\infty) & 0\notin \Sigma,\\
16\pi(\Theta(\Sigma,\infty)-\Theta(\Sigma,0)) & 0\in \Sigma.
\end{cases} \end{align*} It means that the density of a minimal surface at infinity dominates the Willmore energy of its inverted surface $\tilde{\Sigma}$. But in general, the inverted surface $\tilde{\Sigma}$ has a singularity at the inversion point $0$, and the density formula cannot control the topology or geometry (say, the total curvature) of $\tilde{\Sigma}$. For example, the Scherk singly-periodic minimal surfaces have density two at infinity, but they all have infinite genus. \end{rmk} \begin{rmk} As we have seen, the locally antisymmetric transformation formula (\ref{localinvariant}), and then the density formula, follow easily from direct calculation. But why does such a term occur? Here we give an explanation in the setting of conformal deformations of submanifolds. Recall there are two conformal invariances for surfaces, the extrinsic local one \begin{align}\label{tracefree}
|A-\frac{H}{n}g|^2_gd\mu_g=|\tilde{A}-\frac{\tilde{H}}{n}\tilde{g}|^2_{\tilde{g}}d\mu_{\tilde{g}} \end{align} and the intrinsic global one---the Gauss-Bonnet formula $$\int_\Sigma Kd\mu_g=\int_{\Sigma}\tilde{K}d\mu_{\tilde{g}}.$$ In the conformal setting, the global Gauss-Bonnet formula has a local explanation. Assume $\tilde{g}=e^{2u}g$ is a conformal metric on a closed Riemann surface $(\Sigma,g)$. Applying Stokes' formula to the Yamabe equation $$\triangle_g u-K+\tilde{K}e^{2u}=0,$$ we get $$\int_\Sigma Kd\mu_g=\int_\Sigma\tilde{K}e^{2u}d\mu_g=\int_{\Sigma}\tilde{K}d\mu_{\tilde{g}}.$$
For the same reason, in higher dimensions, assume $\tilde{g}=u^{\frac{4}{n-2}}g$ and apply Stokes' formula to the Yamabe equation $$\triangle_g u-\frac{n-2}{4(n-1)}Su+\frac{n-2}{4(n-1)}\tilde{S}u^{\frac{n+2}{n-2}}=0.$$ We get
\begin{align}\label{conformal}
\int_{M}Sud\mu_g=\int_{M}\tilde{S}\tilde{u}d\mu_{\tilde{g}},
\end{align}
where $\tilde{u}=u^{-1}$ satisfies $g=\tilde{u}^{\frac{4}{n-2}}\tilde{g}$. Note that both sides contain the conformal factors $(u,\tilde{u})$. So, in high dimensions, the invariance holds not within a conformal class, but only for a conformal pair $(g,\tilde{g})$. With this experience, we expect the corresponding extrinsic invariant to take the shape of
\begin{align}\label{star}
\star ud\mu_g=\tilde{\star}\tilde{u}d\mu_{\tilde{g}}\text{ (local) },
\end{align}
or
\begin{align}\label{globalstar}
\int_{M}\star ud\mu_g=\int_M\tilde{\star}\tilde{u}d\mu_{\tilde{g}} \text{ (global) }.
\end{align}
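As a quick sanity check of the pair notation (an elementary verification we add for clarity): if $\tilde{g}=u^{\frac{4}{n-2}}g$ and $\tilde{u}=u^{-1}$, then
\begin{align*}
\tilde{u}^{\frac{4}{n-2}}\tilde{g}=u^{-\frac{4}{n-2}}\cdot u^{\frac{4}{n-2}}g=g,
\end{align*}
so the roles of $(g,u)$ and $(\tilde{g},\tilde{u})$ are symmetric, as used in (\ref{conformal}).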
For example, for the higher-dimensional analogue of (\ref{tracefree}), we assume $M^n\subset N^{n+k}$ and that $(G,\tilde{G})$ is a pair of conformal metrics on $N^{n+k}$ with conformal factors $(U,\tilde{U})$, i.e., $\tilde{G}=U^{\frac{4}{n-2}}G$ (note that the index is $n=\dim M$) and $\tilde{U}=U^{-1}$. Denote $u=U|_M, \tilde{u}=\tilde{U}|_M$ and let $(g,\tilde{g})$ be the induced metrics of $M\subset \big(N,(G,\tilde{G})\big)$. Then $\tilde{g}=u^{\frac{4}{n-2}}g$ and a direct calculation shows that the higher-dimensional analogue of (\ref{tracefree}) is of type (\ref{star}): \begin{align*}
|A-\frac{H}{n}g|^2_gud\mu_g=|\tilde{A}-\frac{\tilde{H}}{n}\tilde{g}|^2_{\tilde{g}}\tilde{u}d\mu_{\tilde{g}}. \end{align*} This is a local invariant. A natural question is: which extrinsic global invariant corresponds to the intrinsic global invariant (\ref{conformal})? We take $n\ge 3$ as an example. For this, we take the trace of the restriction of the Ricci tensor of $G$ on $M$, i.e., we denote $$S^G_g=\mathrm{tr}_g\mathrm{Ric}(N,G)$$ and call it the extrinsic scalar curvature. The goal is to find an invariant of type (\ref{globalstar}) involving $S^G_g$. As in the intrinsic case, the first step is to calculate the equation satisfied by the extrinsic scalar curvature when the background metric deforms conformally. The result is
\begin{align}\label{exyamabe}
div^M\nabla U+\frac{n}{n-2}\frac{|\nabla^{\bot}U|^2}{u}-\frac{n-2}{4(n-1)}S^G_gu+\frac{n-2}{4(n-1)}S^{\tilde{G}}_{\tilde{g}}u^{\frac{n+2}{n-2}}=0, \end{align} where $div^M\nabla U$ denotes the extrinsic divergence of the restriction of the gradient of $U$ to $M$ and $\nabla^{\bot}U$ denotes the projection of $\nabla U$ onto the normal bundle $T^{\bot}M$. Since (\ref{exyamabe}) reduces to the Yamabe equation when $M=N$, we call it the extrinsic Yamabe equation. The next step is to apply Stokes' formula to the extrinsic Yamabe equation. Note that the extrinsic divergence theorem reads $$\int_{M}div^M\nabla U d\mu_g=-\int_{M}\nabla U\cdot Hd\mu_g,$$ where $H$ is the mean curvature of the submanifold $(M,g)\subset (N,G)$. We get the global equation \begin{align}\label{nonsymetry} \tilde{C}:=\int_{M}S^{\tilde{G}}_{\tilde{g}}\tilde{u}d\mu_{\tilde{g}}=&\int_{M}S^G_gud\mu_g\nonumber\\
&+\int_{M}\big(\frac{4(n-1)}{n-2}\langle \frac{\nabla U}{u},H\rangle -\frac{4(n-1)n}{(n-2)^2}\frac{|\nabla^{\bot}U|^2}{u^2}\big)ud\mu_g\nonumber\\ =&:C+Q'. \end{align} This equation does not look as symmetric as we expected. To make (\ref{nonsymetry}) possess the symmetry of type (\ref{globalstar}), we guess that the term $Q'$ is globally antisymmetric, i.e., $\tilde{Q}'=-Q'$. If so, then (\ref{nonsymetry}) becomes the symmetric form $$\tilde{C}+\frac{\tilde{Q}'}{2}=C+\frac{Q'}{2}.$$ It turns out that $Q'$ is not only globally antisymmetric, but also comes from a local conformal antisymmetry: \begin{align}\label{localantisym}
\tilde{Q}:&=\big(\frac{1}{n-2}\langle \frac{\tilde{\nabla}\tilde{U}}{\tilde{u}},\tilde{H}\rangle_{\tilde{g}}-\frac{n}{(n-2)^2}|\frac{\tilde{\nabla}^{\bot}\tilde{U}}{\tilde{u}}|_{\tilde{g}}^2\big)\tilde{u}d\mu_{\tilde{g}}\nonumber\\
&=-\big(\frac{1}{n-2}\langle \frac{\nabla U}{u},H\rangle_{g}-\frac{n}{(n-2)^2}|\frac{\nabla^{\bot}U}{u}|_{g}^2\big)ud\mu_{g}=-Q. \end{align} So (\ref{nonsymetry}) becomes the symmetric form of type (\ref{globalstar}), i.e., \begin{align} \int_M(S^{\tilde{G}}_{\tilde{g}}+T^{\tilde{G}}_{\tilde{g}})\tilde{u}d\mu_{\tilde{g}}=\int_M(S^G_g+T^G_g)ud\mu_g, \end{align}
where, $T^G_g=2(n-1)Q=\frac{2(n-1)}{n-2}\langle \frac{\nabla U}{u},H\rangle_{g}-\frac{2n(n-1)}{(n-2)^2}|\frac{\nabla^{\bot}U}{u}|_{g}^2$.
The above calculation is carried out in a compact manifold, but the antisymmetry (\ref{localantisym}) is a local formula, which also holds in a noncompact ambient space. In particular, when we consider submanifolds of $\mathbb{R}^{n+k}$ and the conformal factor is induced by the inversion, (\ref{localantisym}) coincides with the locally antisymmetric transformation formula (\ref{localinvariant}) in dimension $n=2$, which is the key observation in obtaining the density identity. \end{rmk}
\begin{rmk}\label{infinitedensity}
In the case where (\ref{lowerdendity}) does not hold, i.e., $\Theta_*(\Sigma,\infty)=+\infty$, it is natural to define $\Theta(\Sigma,\infty)=+\infty$. So, by the lemma, for a properly immersed surface in $\mathbb{R}^n$ with $\int_{\Sigma}|H|^2d\mu_g<+\infty$, the density $ \Theta(\Sigma,\infty)=\lim_{r\to\infty}\frac{\mathcal{H}^2(\Sigma\cap B_r(0))}{\pi r^2} $ is always well-defined, whether finite or infinite. In this sense, Lemma \ref{Density Formula} holds without the assumption (\ref{lowerdendity}): in the case $\Theta(\Sigma,\infty)=+\infty$, by (\ref{localinvariant}) and Corollary \ref{integral discription of upper density at infinity}, both sides of (\ref{density formula}) are infinite. \end{rmk}
\subsection{The Density Identity} \
Firstly, we need the following weak (in the varifold sense) removability of the singularity.
\begin{prop}\label{weak removable} Assume $f:\Sigma\to \mathbb{R}^{n}$ is a properly immersed surface satisfying (\ref{finite willmore}) and (\ref{lowerdendity}) and $\tilde{\Sigma}=h(\Sigma)$ is its inverted surface. Then for any $r\in (0,\infty)$, \begin{align*} \mu_{\tilde{g}}(B_r(0)\backslash\{0\})\le \frac {C e^{4}}{(e^2-1)}\pi r^2, \end{align*}
where $C=9\Theta_*(\Sigma,\infty)+\frac{59}{16\pi}\int_{\Sigma}|H|^2d\mu_g$. And we have \begin{align*}
\int_{\tilde{\Sigma}\backslash \{0\}}|\tilde{H}|^2d\mu_{\tilde{g}}<+\infty. \end{align*} Moreover, if we extend $\mu_{\tilde{g}}$ and $\tilde{H}$ trivially across $0\in \mathbb{R}^n$, then for any vector field $X\in C_0^1(\mathbb{R}^n,\mathbb{R}^n)$ (not necessarily supported in $\mathbb{R}^n\backslash\{0\}$), we have \begin{align*} \int_{\mathbb{R}^n}div^{\tilde{\Sigma}}Xd\mu_{\tilde{g}}=-\int_{\mathbb{R}^n}\langle X, \tilde{H}\rangle d\mu_{\tilde{g}}. \end{align*} That is, $\tilde{\Sigma}$ is a varifold in $\mathbb{R}^n$ with generalized mean curvature $\tilde{H}\in L^{2}(\mu_{\tilde{g}})$. \end{prop} \begin{proof} By (\ref{finite willmore}), (\ref{lowerdendity}) and (\ref{everyradius}) in Corollary \ref{integral discription of upper density at infinity}, we know that for any $\rho\in (0,\infty)$, $$\frac{\mathcal{H}^2(B_\rho(0)\cap \Sigma)}{\pi \rho^2}\le C,$$
where $C=9\Theta_*(\Sigma,\infty)+\frac{59}{16\pi}\int_{\Sigma}|H|^2d\mu_g$. Since $\tilde{g}=\frac{1}{|f|^4}g$, we know $d\mu_{\tilde{g}}=\frac{1}{|f|^4}d\mu_g$. So, for $r=e^{-t}>0$, \begin{align}\label{localuperdensity} \mu_{\tilde{g}}(\tilde{\Sigma}\cap B_r\backslash \{0\})
&=\lim_{\varepsilon\to 0}\int_{\tilde{\Sigma}\cap (B_r\backslash B_{\varepsilon})}d\mu_{\tilde{g}}=\lim_{\varepsilon\to 0}\int_{\Sigma\cap (B_{\frac{1}{\varepsilon}}\backslash B_{\frac{1}{r}})}\frac{1}{|f|^4}d\mu_{g}\nonumber\\ &=\sum_{k=1}^{\infty}\int_{\Sigma\cap(B_{e^{t+k}}\backslash B_{e^{t+(k-1)}})}\frac{1}{r^4}d\mu_g\nonumber\\ &\le \sum_{k=1}^{\infty}\frac{C\pi e^{2(t+k)}}{e^{4(t+(k-1))}}=\frac{C\pi e^{4}}{(e^2-1)}r^2. \end{align}
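For the reader's convenience, the geometric series in the last step can be summed explicitly: with $r=e^{-t}$,
\begin{align*}
\sum_{k=1}^{\infty}\frac{e^{2(t+k)}}{e^{4(t+k-1)}}=e^{4-2t}\sum_{k=1}^{\infty}e^{-2k}=\frac{e^{4-2t}}{e^{2}-1}=\frac{e^{4}}{e^{2}-1}r^2.
\end{align*}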
By (\ref{generalinverth}), we note \begin{align}\label{observation}
\tilde{H}&=|f|^2H-2\langle H,f\rangle f-4f^{\top}+\frac{8|f^{\top}|^2}{|f|^2}f-4f\nonumber\\
&=|f|^2H-2\langle H,f\rangle f+4f^{\bot}-\frac{8|f^{\bot}|^2}{|f|^2}f. \end{align} Thus \begin{align*}
|\tilde{H}|^2
&\le 320(|f|^4|H|^2+|f^{\bot}|^2) \end{align*} and \begin{align*}
\int_{\tilde{\Sigma}\cap(B_\rho(0)\backslash \{0\})}|\tilde{H}|^2d\mu_{\tilde{g}}
&\le 320\int_{\Sigma\backslash B_{\frac{1}{\rho}}}(|f|^4|H|^2+|f^{\bot}|^2)|f|^{-4}d\mu_g\\
&\le 320\int_{\Sigma}\big(|H|^2+\big|\frac{\nabla^{\bot}r}{r}\big|^2\big)d\mu_g. \end{align*} By Corollary \ref{integral discription of upper density at infinity} again, the right-hand side is finite. So, letting $\rho\to+\infty$, we get \begin{align}\label{finiteinveresewillmore}
\int_{\tilde{\Sigma}\backslash \{0\}}|\tilde{H}|^2d\mu_{\tilde{g}}\le 320\int_{\Sigma}\big(|H|^2+\big|\frac{\nabla^{\bot}r}{r}\big|^2\big)d\mu_g<+\infty. \end{align} Finally, by (\ref{localuperdensity}), (\ref{finiteinveresewillmore}) and a cut-off argument (see \cite[Appendix]{KS}), we know that $\tilde{\Sigma}$ is a varifold in $\mathbb{R}^n$ with generalized mean curvature $\tilde{H}\in L^{2}(\mu_{\tilde{g}})$.
\end{proof}
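For the reader's convenience, we record one possible accounting for the constant $320$ in the proof above (any constant of this size works). Since $|\langle H,f\rangle|\le |H||f|$ and $|f^{\bot}|\le |f|$, (\ref{observation}) gives
\begin{align*}
|\tilde{H}|\le |f|^2|H|+2|H||f|^2+4|f^{\bot}|+8\frac{|f^{\bot}|^2}{|f|}\le 3|f|^2|H|+12|f^{\bot}|,
\end{align*}
and hence, by $(a+b)^2\le 2a^2+2b^2$, $$|\tilde{H}|^2\le 18|f|^4|H|^2+288|f^{\bot}|^2\le 320\big(|f|^4|H|^2+|f^{\bot}|^2\big).$$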
With this proposition, we know that the monotonicity formula holds for the varifold $\tilde{\Sigma}$ with generalized mean curvature $\tilde{H}$, and it can be used to prove the following density identity. \begin{lemma}[Density identity]\label{density identity} Assume $f:\Sigma\to \mathbb{R}^{n}$ is a properly immersed surface satisfying (\ref{finite willmore}) and (\ref{lowerdendity}) and $\tilde{\Sigma}=h(\Sigma)$ is its inverted surface. If the base point $0\notin \Sigma$, then \begin{align*} \Theta(\tilde{\Sigma},0):=\lim_{\sigma\to 0}\frac{\mu_{\tilde{g}}(\tilde{\Sigma}\cap B_{\sigma}(0))}{\sigma^2}=\Theta(\Sigma,\infty)\ge 1. \end{align*} \end{lemma} \begin{proof} In this case, for $0<\sigma<\rho<\infty$, we have the monotonicity formula \begin{align}\label{monoton} \frac{\mu_{\tilde{g}}(\tilde{\Sigma}\cap B_{\sigma})}{\sigma^2} &=\frac{\mu_{\tilde{g}}(\tilde{\Sigma}\cap B_{\rho})}{\rho^2}+\frac{1}{2\rho^2}\int_{\tilde{\Sigma}\cap B_{\rho}}\langle \tilde{r}\tilde{\nabla}^{\bot}\tilde{r}, \tilde{H}\rangle-\frac{1}{2\sigma^2}\int_{\tilde{\Sigma}\cap B_{\sigma}}\langle \tilde{r}\tilde{\nabla}^{\bot}\tilde{r}, \tilde{H}\rangle\nonumber\\
&+\frac{1}{16}\int_{\tilde{\Sigma}\cap (B_{\rho}\backslash B_{\sigma})}|\tilde{H}|^2-\int_{\tilde{\Sigma}\cap (B_{\rho}\backslash B_{\sigma})}\big|\frac{\tilde{\nabla}^{\bot} \tilde{r}}{\tilde{r}}+\frac{\tilde{H}}{4}\big|^2. \end{align} On the one hand, by (\ref{finiteinveresewillmore}) and (\ref{localuperdensity}), we know
$$\lim_{\sigma\to 0}W(\sigma):=\lim_{\sigma\to0}\int_{\tilde{\Sigma}\cap B_{\sigma}}|\tilde{H}|^2= 0$$ and
$$\lim_{\sigma\to 0}|\frac{1}{2\sigma^2}\int_{\tilde{\Sigma}\cap B_{\sigma}}\langle \tilde{r}\tilde{\nabla}^{\bot}\tilde{r}, \tilde{H}\rangle|\le \lim_{\sigma\to 0}\frac{1}{2}(\frac{\mu_{\tilde{g}}(\tilde{\Sigma}\cap B_{\sigma})}{\sigma^2})^{1/2}W(\sigma)^{1/2}= 0.$$ On the other hand, the properness of $f$ and $0\notin \Sigma$ imply that $\tilde{\Sigma}\backslash B_\sigma(0)$ is compact. So we have $$\lim_{\rho\to \infty}\frac{\mu_{\tilde{g}}(\tilde{\Sigma}\cap B_{\rho})}{\rho^2}= 0$$ and
$$\lim_{\rho\to +\infty}|\frac{1}{2\rho^2}\int_{\tilde{\Sigma}\cap B_{\rho}}\langle \tilde{r}\tilde{\nabla}^{\bot}\tilde{r}, \tilde{H}\rangle|\le \lim_{\rho\to+\infty}\frac{1}{2}(\frac{\mu_{\tilde{g}}(\tilde{\Sigma}\cap B_{\rho})}{\rho^2})^{1/2}(\int_{\tilde{\Sigma}}|\tilde{H}|^2)^{1/2}= 0.$$ Letting $\rho\to \infty$ and $\sigma\to 0$ in (\ref{monoton}) and applying the density formula (\ref{density formula}), we get
$$\Theta(\tilde{\Sigma},0)=\int_{\tilde{\Sigma}\backslash \{0\}}\big(\frac{|\tilde{H}|^2}{16}-\big|\frac{\tilde{H}}{4}
+\frac{\tilde{\nabla}^{\bot}\tilde{r}}{\tilde{r}}\big|^2\big)d\mu_{\tilde{g}} =\Theta(\Sigma,\infty).$$
Noting the inverted surface $\tilde{\Sigma}$ is smooth away from 0, by Lemma \ref{density at each point} and the properness of $f$, we know \begin{align*} \Theta(\Sigma,\infty)=\Theta(\tilde{\Sigma},0)\ge \limsup_{y\to 0}\Theta(\tilde{\Sigma},y)\ge 1. \end{align*} \end{proof}
\subsection{Topological Finiteness} \
The density identity and the density formula imply that the single term $\Theta(\Sigma, \infty)$ controls both the Willmore energy and the local density of the inverted surface. So, combined with the Allard-Reifenberg type $C^{\alpha}$ regularity Theorem \ref{Holder Regularity}, we can prove the main theorem. \begin{prop}\label{topolofical rigidity of minimal ends} Assume $f:\Sigma\to \mathbb{R}^n$ is a properly immersed surface with finite Willmore energy. For any $R>0$, let $\Sigma_1$ be a noncompact connected component of $\Sigma\backslash f^{-1}(B_R(0))$. Then $$\Theta(\Sigma_1,\infty):=\lim_{r\to \infty}\frac{\mathcal{H}^2(\Sigma_1\cap B_r(0))}{\pi r^2}\ge1.$$
Moreover, there exists an $\varepsilon=\varepsilon(n)>0$ such that if $$\Theta(\Sigma_1,\infty)\le 1+\varepsilon(n),$$
then there is an $R_2\ge R$ such that for any $r\ge R_2$, $\Sigma_1\backslash f^{-1}(B_r)$ is homeomorphic to $S^1\times \mathbb{R}$ and $f:\Sigma_1\backslash f^{-1}(B_r)\to \mathbb{R}^n$ is an embedding. \end{prop} \begin{proof} Since $\Sigma$ is proper, we know $\Sigma_1$ has compact boundary, and thus it can be extended to a complete surface in $\mathbb{R}^n$ without boundary by gluing in a compact surface $\Sigma_2$ with $\partial \Sigma_2=-\partial\Sigma_1$. So we can assume $\Sigma_1$ is a surface properly immersed in $\mathbb{R}^n$ without boundary satisfying (\ref{finite willmore}).
Thus by Remark \ref{infinitedensity} and Lemma \ref{density identity}, we know $\Theta(\Sigma_1,\infty):=\lim_{r\to \infty}\frac{\mathcal{H}^2(\Sigma_1\cap B_r(0))}{\pi r^2}$ is well-defined and $\Theta(\Sigma_1,\infty)\ge1.$
Moreover, in the case $\Theta(\Sigma_1,\infty)\le 1+\varepsilon$, choose a base point $x_0\notin \Sigma_1$, define $h(x)=\frac{f(x)-x_0}{|f(x)-x_0|^2}+x_0$ and denote the inverted surface by $\tilde{\Sigma}_1=h(\Sigma_1)$. Then by Proposition \ref{weak removable} and Lemma \ref{density identity}, we know $\tilde{\Sigma}_1$ is a rectifiable $2$-varifold in $\mathbb{R}^n$ with generalized mean curvature $\tilde{H}\in L^{2}(\mathbb{R}^n,d\mu_{\tilde{g}})$ and \begin{align*} \Theta(\tilde{\Sigma}_1,x_0)
=\frac{1}{16}\int_{\tilde{\Sigma}_1}|\tilde{H}|^2-\int_{\tilde{\Sigma}_1}|\frac{\tilde{\nabla}^{\bot}\tilde{r}}{\tilde{r}}+\frac{\tilde{H}}{4}|^2 =\Theta(\Sigma_1,\infty) \in [1,1+\varepsilon). \end{align*} So, there exists $\rho_0>0$ such that for any $\rho<\rho_0$, we have \begin{align*}
\frac{\mathcal{H}^2(\tilde{\Sigma}_1\cap B_\rho(x_0))}{\pi \rho^2}\le 1+2\varepsilon,\text{ and } \int_{\tilde{\Sigma}_1\cap B_{\rho}(x_0)}|\tilde{H}|^2<\varepsilon. \end{align*} Since $\tilde{\Sigma}_1$ is smooth outside the base point $x_0$, we know $\Theta(\tilde{\Sigma}_1,x)\ge 1$ for every $x\in \tilde{\Sigma}_1$. Taking $\varepsilon=\varepsilon(n)$ small enough and applying Theorem \ref{Holder Regularity}, we know that $\tilde{\Sigma}_1\cap B_\sigma(x_0)$ is a topological disk for $\sigma\le \frac{1}{2^{19}}(2\varepsilon)^{\frac{1}{2}}\rho_0$, which implies the conclusion. \end{proof}
\begin{rmk}\label{end density} By a geometric measure theory argument of E.~Kuwert, Y.~X.~Li and R.~Sch\"{a}tzle (see \cite[Appendix]{KS} and \cite{KLS}), one can show directly that $\Theta(\Sigma_1,\infty)\ge 1$, and that if $\Theta(\Sigma_1,\infty)<2$, then $\Theta(\Sigma_1,\infty)=1$. We sketch the proof for the reader's convenience. \end{rmk} \begin{proof}
We may also assume $\Theta(\Sigma_1,\infty)<+\infty$. Extend $\Sigma_1$ to be smooth and boundary-free and still denote it by $\Sigma_1$. Take the current $T_r=\big(\frac{1}{r}\big)_{\sharp}\Sigma_1$ and the varifold $\mu_r=\big(\frac{1}{r}\big)_{\sharp}(\mathcal{H}^2\llcorner \Sigma_1)$. By (\ref{rhovanishing}), for any $R>0$,
$$\lim_{r\to \infty}\|\delta \mu_r\|(B_R(0))=\lim_{r\to \infty}\frac{\int_{\Sigma_1\cap B_{rR}(0)}|H|d\mu_g}{r}=0.$$
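Here we have used the scaling behaviour of the first variation (a routine check we include for clarity): under the dilation $x\mapsto x/r$, the area measure scales by $r^{-2}$ while the mean curvature scales by $r$, so
\begin{align*}
\|\delta \mu_r\|(B_R(0))=\int_{\Sigma_1\cap B_{rR}(0)}|H|\cdot r\cdot r^{-2}\,d\mu_g=\frac{1}{r}\int_{\Sigma_1\cap B_{rR}(0)}|H|\,d\mu_g.
\end{align*}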
So, by the compactness theorem for currents \cite{FF60}, \cite[Theorem 32.2 and Lemma 26.14]{LS83} and the compactness theorem for integral varifolds \cite{A72}, \cite[Theorem 42.7 and Remark 42.8]{LS83}, there exist an integral current $T_\infty$, a stationary integral varifold $\mu_\infty$ and a sequence $r_i\to +\infty$ such that
$$T_{r_i}\to T_\infty \text{ (weak convergence as currents)},$$
$$\mu_{r_i}\to \mu_\infty \text{ (weak convergence as varifolds)}.$$
Since $\mu_\infty$ is a Radon measure, we know for fixed $x$ and $\mathcal{L}^1$-almost every $\rho>0$, $\mu_{\infty}(\partial B_\rho(x))=0$ and
$$\frac{\mu_\infty(B_{\rho}(x))}{\pi \rho^2}= \lim_{i\to +\infty}\frac{\mathcal{H}^2(\Sigma_1\cap B_{r_i\rho}(x))}{\pi(r_i\rho)^2}=\Theta(\Sigma_1,\infty).$$
Since $\mu_\infty$ is stationary and integral, by the monotonicity formula and the upper semi-continuity, we know
$$\Theta(\Sigma_1,\infty)=\Theta(\mu_\infty,\infty)=\Theta(\mu_\infty,x)\ge\limsup_{y\to x}\Theta(\mu_\infty,y)\ge 1.$$
Moreover, when $\Theta(\Sigma_1,\infty)<2$, noting $\lim_{i\to \infty}\|\delta \mu_{r_i}\|(B_R(0))=0$ for any $R>0$ and $\Theta(\mu_{r_i},\infty)\equiv \Theta(\Sigma_1,\infty)<2$, by the same argument as in \cite[Proposition 2.2]{KLS}, we get
$$\mu_\infty=\mu_{T_\infty}.$$
So $\Theta(\mu_{T_\infty},\infty)=\Theta(\mu_\infty,\infty)\in [1,2)$ and by \cite[Theorem 2.1]{KLS}, we know $T_\infty$ is a plane. Thus
$$\Theta(\Sigma_1,\infty)=\Theta(\mu_\infty,0)=\Theta(\mu_{T_\infty},0)=1.$$ \end{proof}
As a corollary, our main theorem is a global version of the above topological rigidity result, Proposition \ref{topolofical rigidity of minimal ends}. \begin{thm}[$\mathbf{Finite\ Topology}$]\label{finite topology} Assume $f:\Sigma\to \mathbb{R}^n$ is a properly immersed surface with finite Willmore energy. Then $$e(\Sigma,\infty)\le \Theta(\Sigma,\infty).$$
Moreover, if we assume
$$e(\Sigma,\infty)> \Theta(\Sigma, \infty)-1<\infty,$$
then \begin{enumerate}[1)] \item $\Sigma$ has finite topology; \item $\Theta(\Sigma,\infty)=e(\Sigma,\infty)=:e$ is an integer and $\Sigma$ has exactly $e$ ends, each with density one.
\item $\Sigma$ has finite total curvature, i.e., $\int_{\Sigma}|A|^2d\mu_g<+\infty$; \item $\Sigma$ is conformal to a closed Riemann surface with $e(\Sigma,\infty)$ points removed.
\end{enumerate} \end{thm} \begin{proof} There is nothing to prove if $\Theta(\Sigma,\infty)=+\infty$. So we assume $\Theta(\Sigma,\infty)<+\infty$. By the properness, for each $r>0$, $\Sigma\cap B_{r}(0)$ has the connected components decomposition $\Sigma\cap B_{r}(0)=K_r\sqcup \sqcup_{i\in I(r)} \Sigma_{i,r}$, where $K_r$ is the compact part and each $\Sigma_{i,r}$ is noncompact. By Proposition \ref{topolofical rigidity of minimal ends}, we get for each $i\in I(r)$, $\Theta(\Sigma_{i,r},\infty)\ge 1$. Since these $\{\Sigma_{i,r}\}_{i\in I(r)}$ are disjoint, we know \begin{align}\label{Ir}
|I(r)|\le \sum_{i\in I(r)}\Theta(\Sigma_{i,r},\infty)\le \Theta(\Sigma,\infty)<+\infty. \end{align}
Letting $r\to +\infty$, we know $$e(\Sigma,\infty)=\lim_{r\to \infty}|I(r)|\le \Theta(\Sigma,\infty).$$ Moreover, if $e(\Sigma,\infty)> \Theta(\Sigma, \infty)-1$, then there exists $r_0>0$ such that $e(\Sigma,\infty)=|I(r_0)|>\Theta(\Sigma, \infty)-1$. So by (\ref{Ir}) and $\Theta(\Sigma_{i,r_0},\infty)\ge 1$, we know $$\Theta(\Sigma_{i,r_0},\infty)<2, \forall i\in I(r_0).$$ By Remark \ref{end density}, we know that in fact $$\Theta(\Sigma_{i,r_0},\infty)=1.$$
Thus $e(\Sigma,\infty)=\Theta(\Sigma,\infty)$ and by Proposition \ref{topolofical rigidity of minimal ends} again, there exists $r_1>r_0$ such that for every $r\ge r_1$, each $\Sigma_{i,r_0}\backslash B_r(0)$ is an embedded annulus in $\mathbb{R}^n$. Take $r$ large enough such that $K_{r_0}\subset B_r(0)$. Then $\Sigma\backslash B_r(0)=\sqcup_{i\in I(r_0)}\big(\Sigma_{i,r_0}\backslash B_r(0)\big)$ consists of $|I(r_0)|=e(\Sigma,\infty)$ properly embedded annuli. By properness, $\Sigma\cap B_r(0)$ is compact, so $\Sigma$ is homeomorphic to a closed surface with $e(\Sigma,\infty)$ points removed. Now, by Ilmanen's local Gauss-Bonnet estimate \cite[Theorem 3]{I95}, we know that for each $r<s<\infty$ and $\varepsilon>0$, \begin{align*}
(1-\varepsilon)\int_{\Sigma\cap B_r(0)}|A|^2d\mu_g
\le \int_{\Sigma\cap B_s(0)}|H|^2d\mu_g+8\pi g(\Sigma\cap B_s(0))+\frac{24\pi D's^2}{\varepsilon(s-r)^2}, \end{align*} where $g(\Sigma\cap B_s(0))$ is the genus of the closed surface obtained by capping off the boundary of $\Sigma\cap B_s(0)$ by disks and $D'=\sup_{t\in [r,s]}\frac{\mathcal{H}^2(\Sigma\cap B_t(0))}{\pi t^2}$. Since we have shown $\Sigma$ has finite topology, by letting $s\to \infty$ and then $r\to \infty$ and taking $\varepsilon=\frac{1}{2}$, we get \begin{align*}
\int_{\Sigma}|A|^2d\mu_g\le 2\int_{\Sigma}|H|^2d\mu_g+16\pi g(\Sigma)+96\pi \Theta(\Sigma,\infty)<+\infty. \end{align*} So, by Huber's classification \cite{H57} of complex structures for complete surfaces with finite total curvature, each end of $\Sigma$ is parabolic, i.e., $\Sigma$ is conformal to a closed Riemann surface with $e(\Sigma,\infty)$ points removed.
\end{proof}
\begin{rmk}The surfaces in Theorem \ref{finite topology} have finite topology and finite total curvature, but it is impossible to bound their topology or total curvature by the Willmore energy and the density of such surfaces. For example, Hoffman and Meeks found \cite{HM90} a family of embedded minimal surfaces with three multiplicity-one ends but arbitrarily large genus. Their total curvatures also tend to infinity as the genus goes to infinity.
\end{rmk}
\section{Applications}\label{realapplication} \subsection{Isolated Singularities} \
In this subsection, we study isolated singularities and invert the process of Section \ref{application}. \begin{prop}\label{isodensityidentity} Assume $\Sigma\subset B_1(0)\backslash\{0\}\subset\mathbb{R}^{2+k}$ is a properly immersed surface with $\partial{\Sigma}\subset \partial B_1(0)$, \begin{align}\label{isofinitewill}
\int_{\Sigma\backslash \{0\}}|H|^2d\mathcal{H}^2<+\infty \end{align} and \begin{align*} \Theta_*(\Sigma,0)=\liminf_{r\to 0}\frac{\mathcal{H}^2(\Sigma\cap B_r(0))}{\pi r^2}<+\infty. \end{align*} Then the inverted surface $\tilde{\Sigma}$ is properly immersed in $\mathbb{R}^{2+k}$ with finite density $\Theta(\tilde{\Sigma},\infty)\ge 1$ at infinity and \begin{align}\label{isofinitewillmore}
\int_{\tilde{\Sigma}}|\tilde{H}|^2d\mathcal{H}^2<+\infty. \end{align} Moreover, there holds the density identity \begin{align}\label{isofinitedensity} \Theta(\Sigma,0)=\Theta(\tilde{\Sigma},\infty), \end{align} which means that both sides are well-defined and equal. \end{prop} \begin{proof} Since $\partial \Sigma\subset \partial B_1(0)$ is compact, we can close it up and assume $\Sigma\subset B_2(0)$ is a surface without boundary. By (\ref{isofinitewill}), the finiteness of $\Theta_*(\Sigma,0)$ and the same argument as in Proposition \ref{weak removable}, we know that $\bar{\Sigma}=\Sigma\cup \{0\}$ is an integral varifold in $B_2(0)$ with generalized mean curvature $H\in L^2$. So, the monotonicity formula (\ref{monotonicity equality}) holds and $\Theta(\Sigma,0)\ge 1$ is well-defined. Noting that $\bar{\Sigma}$ has finite volume, by Corollary \ref{integral discription of upper density at infinity}, we know \begin{align*}
\int_{\Sigma}\big|\frac{\nabla^{\bot}r}{r}\big|^2d\mathcal{H}^2<+\infty. \end{align*} Hence by letting $\sigma\to 0$ and $\rho\to \infty$ in (\ref{monotonicity equality}), we get \begin{align}\label{isodensityformula}
\pi\Theta(\Sigma,0)=\int_{\Sigma}\bigg(\frac{|H|^2}{16}-\big|\frac{H}{4}+\frac{\nabla^{\bot}r}{r}\big|^2\bigg)d\mathcal{H}^2. \end{align}
We also use $f:\Sigma\to B_2(0)\subset \mathbb{R}^{2+k}$ to denote the immersion map and let $h=\frac{f}{|f|^2}$ be the inversion. Again by the observation (\ref{observation}), we know that for any $R>0$, \begin{align*}
\int_{\tilde{\Sigma}\cap B_R(0)}|\tilde{H}|^2d\mathcal{H}^2\le 320\int_{\Sigma\backslash B_{\frac{1}{R}(0)}}\bigg(|H|^2+\big|\frac{\nabla^{\bot}r}{r}\big|^2\bigg)d\mathcal{H}^2. \end{align*}
Letting $R\to \infty$, we get $\int_{\tilde{\Sigma}}|\tilde{H}|^2d\mathcal{H}^2<+\infty$ and the monotonicity formula (\ref{monoton}) holds for $\tilde{\Sigma}$.
Now, on the one hand, since $\tilde{\Sigma}\subset \mathbb{R}^{2+k}\backslash B_{\frac{1}{2}}(0)$, we know for $\sigma<\frac{1}{2}$, \begin{align*} \frac{\mathcal{H}^2(\tilde{\Sigma}\cap B_{\sigma}(0))}{\sigma^2}=\frac{1}{2\sigma^2}\int_{B_{\sigma}(0)}\langle \tilde{r}\tilde{\nabla}^{\bot}\tilde{r}, \tilde{H}\rangle d\mathcal{H}^2=0. \end{align*} On the other hand, by (\ref{bracketterm}), we know \begin{align*}
|\langle \tilde{\nabla}^{\bot}\tilde{r}, \tilde{H}\rangle|
\le 4|f^{\bot}|^2+|f|^2|\langle H,f^{\bot}\rangle|
\le 5|f^{\bot}|^2+ |H|^2|f|^4, \end{align*} where we used Young's inequality $|f|^2|\langle H,f^{\bot}\rangle|\le |f|^2|H||f^{\bot}|\le \frac{1}{2}|f^{\bot}|^2+\frac{1}{2}|H|^2|f|^4$. So, \begin{align*}
\big|\frac{1}{2\rho^2}\int_{\tilde{\Sigma}\cap B_{\rho}(0)}\langle \tilde{r}\tilde{\nabla}^{\bot}\tilde{r}, \tilde{H}\rangle d\mu_{\tilde{g}}\big|
&\le \frac{5}{2\rho^2}\int_{\Sigma\backslash B_{\frac{1}{\rho}}(0)}\frac{1}{|f|}(|f^{\bot}|^2+|f|^4|H|^2)\frac{1}{|f|^4}d\mu_g\\
&\le \frac{5}{2\rho} \int_{\Sigma}\bigg(|H|^2+\big|\frac{\nabla^{\bot}r}{r}\big|^2\bigg)d\mu_g. \end{align*} Letting $\sigma\to 0$ and $\rho\to \infty$ in (\ref{monoton}), we get \begin{align*} \Theta(\tilde{\Sigma},\infty) &=-\lim_{\rho\to\infty,\sigma\to 0}\frac{1}{\pi}\int_{\tilde{\Sigma}\cap (B_\rho\backslash B_\sigma)}
\bigg(\frac{|\tilde{H}|^2}{16}-\big|\frac{\tilde{H}}{4}
+\frac{\tilde{\nabla}^{\bot}\tilde{r}}{\tilde{r}}\big|^2 \bigg)d\mu_{\tilde{g}}\\ &=\lim_{\rho\to\infty,\sigma\to 0}\frac{1}{\pi}\int_{\Sigma\cap (B_{\frac{1}{\sigma}}\backslash B_{\frac{1}{\rho}})}
\bigg(\frac{|H|^2}{16}-\big|\frac{H}{4}
+\frac{\nabla^{\bot}r}{r}\big|^2 \bigg)d\mu_{g}\\ &=\Theta(\Sigma,0), \end{align*} where we use the local antisymmetric transformation formula (\ref{localinvariant}) and (\ref{isodensityformula}). \end{proof} Similar to the conception of the number of ends at infinity, for a surface $\Sigma$ properly immersed in $B_1(0)\backslash\{0\}$, we define the number of local connected components of $\Sigma$ near $0$ by \begin{align*} e(\Sigma, 0)=\lim_{r\to 0}\tilde{\beta}_0(\Sigma\cap B_r(0)\backslash\{0\}) \end{align*} where by $\tilde{\beta}_0$ we mean the number of noncompact connected components of a topology space. \begin{cor}\label{removability} Assume $\Sigma\subset B_1(0)\backslash\{0\}\subset\mathbb{R}^{2+k}$ is a properly immersed surface with $\partial{\Sigma}\subset \partial B_1(0)$ and satisfying (\ref{isofinitewill}) and \begin{align*} e(\Sigma,0)>\Theta_*(\Sigma,0)-1<\infty. \end{align*} Then $\Sigma$ has finite topology and finite total curvature. Moreover, we also know $\Theta(\Sigma,0)=e(\Sigma,0)$ is an integer and for small $r>0$, and $1\le i\le e(\Sigma,0)$, each component $\big((\Sigma_i\cup \{0\})\cap B_r(0),g)$ is bi-Lipschitz homeomorphic to a $2$-dimensional disk. \end{cor} \begin{proof}Without loss of generality, we assume $e(\Sigma,0)=1$ and $\Theta_*(\Sigma,0)<2$. By Proposition \ref{isodensityidentity}, the inverted surface $\tilde{\Sigma}$ is properly immersed in $\mathbb{R}^{2+k}$ with finite Willmore energy and $$1\le \Theta(\tilde{\Sigma},\infty)=\Theta(\Sigma,0)<2.$$ So, by Theorem \ref{finite topology}, $\tilde{\Sigma}$ has finite topology and finite total curvature, is conformal to a punctured disk when restricted to the outside of a large ball and has density $\Theta(\tilde{\Sigma},\infty)=1.$ So, $\Sigma\cap B_r(0)$ is conformal to a punctured disk for small $r$, i.e., there is a conformal parametrization $\varphi: D_1(0)\backslash\{0\}\to \Sigma\cap B_r(0)$. 
Noting that the trace-free part of the second fundamental form is conformally invariant, we know \begin{align*}
\int_{\Sigma}|A|^2d\mu_g
&\le 2\int_{\Sigma}|A-\frac{H}{2}g|^2d\mu_g+\int_{\Sigma}|H|^2d\mu_g\\
&=2\int_{\tilde{\Sigma}}|\tilde{A}-\frac{\tilde{H}}{2}\tilde{g}|^2d\mu_{\tilde{g}}+\int_{\Sigma}|H|^2d\mu_g\\
&\le 4\int_{\tilde{\Sigma}}|\tilde{A}|^2d\mu_{\tilde{g}}
+2\int_{\tilde{\Sigma}}|\tilde{H}|^2d\mu_{\tilde{g}}+\int_{\Sigma}|H|^2d\mu_g<+\infty. \end{align*} By Kuwert and Li's classification theorem \cite[Theorem 3.1]{KL12} for isolated singularities of surfaces with finite area and finite total curvature, we know that $\varphi\in W^{2,2}(D,\mathbb{R}^{2+k})$ and that the induced conformal metric $g=e^{2u}(dx^2+dy^2)$ satisfies $$u(z)=m\log|z|+w(z)$$ with $w(z)\in C^0\cap W^{1,2}(D)$ and $m=\Theta(\Sigma,0)-1$. Now, since $\Theta(\Sigma,0)=\Theta(\tilde{\Sigma},\infty)=1$, we know $m=0$. Hence $u=w\in C^0(D)$, which means \begin{align*}
\frac{1}{C}|x-y|\le d_g(\varphi(x),\varphi(y))=\inf_{\gamma\text{ joining } x, y} \int_{0}^{1}e^{u(\gamma)}|\dot{\gamma}|dt\le C|x-y|, \end{align*} i.e., $\varphi: D\to (\bar{\Sigma}\cap B_r(0),g)$ is a bi-Lipschitz parametrization for small $r$. \end{proof} \begin{rmk} The same conclusion holds for surfaces properly immersed in a punctured geodesic ball $B_1(p)\backslash\{p\}$ of a Riemannian manifold $(M^{2+k},g)$, since $(M,g)$ can be embedded into $\mathbb{R}^{2+k+N}$ by the Nash embedding theorem and the density, topology and finiteness of the Willmore energy of $\Sigma$ do not change. \end{rmk}
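In the total curvature estimate in the proof of Corollary \ref{removability} above, the first inequality rests on the following elementary pointwise identity for surfaces (recorded here for clarity): writing $A^{\circ}=A-\frac{H}{2}g$ for the trace-free part,
\begin{align*}
|A|^2=|A^{\circ}|^2+\Big|\frac{H}{2}g\Big|^2=|A^{\circ}|^2+\frac{|H|^2}{2}\le 2|A^{\circ}|^2+|H|^2,
\end{align*}
since $|g|_g^2=2$ in dimension two and $A^{\circ}$ is orthogonal to the pure trace part.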
\subsection{Uniqueness of The Catenoid and Minimal Ends}\label{minimalend} \
As a corollary, we prove a uniqueness result for the catenoid.
\begin{cor}\label{catenoid}
Assume $\Sigma\subset \mathbb{R}^3$ is a connected properly immersed minimal surface with at least two ends. If
$$\Theta(\Sigma,\infty)<3,$$
then $\Sigma$ is the catenoid.
\end{cor} \begin{proof} Since $e(\Sigma,\infty)\ge 2>\Theta(\Sigma, \infty)-1$, by Theorem \ref{finite topology}, we know that $\Sigma$ has finite total curvature and exactly two embedded ends. So, by Schoen's uniqueness theorem \cite{S83}, $\Sigma$ is a catenoid. \end{proof}
As mentioned in the introduction, this uniqueness of the catenoid is also a direct corollary of Leon Simon's theorem on the uniqueness of the tangent cone \cite{LS83b}, \cite[the paragraph after Theorem 5.7]{LS85}. The following is the simplest example of such a uniqueness phenomenon.
\begin{cor}\label{global} Assume $\Sigma$ is a complete immersed minimal surface in $\mathbb{R}^{2+k}$ with $$\Theta(\Sigma,+\infty)<e+1\ \ \ \text{ and }\ \ \ \ e(\Sigma,\infty)\ge e.$$ Then $\Sigma$ has exactly $e$ ends and each end $\Sigma_i$ can be written as a graph over some plane $V_i$ with gradient tending to zero. Moreover, in the case $k=1$, these $V_i$ are all the same. \end{cor}
\begin{proof} Since $\Sigma$ is complete and of quadratic area growth, by \cite[Lemma 3]{Ch97}, the immersion $f$ is proper. By Theorem \ref{finite topology}, there exists $r_1>0$ such that $$\Sigma\backslash B_{r_1}(0)=\sqcup_{i=1}^{e}\Sigma_i,$$ where $e=e(\Sigma,\infty)$ and each $\Sigma_i$ is conformal to a punctured disk with finite total curvature and $\Theta(\Sigma_i,\infty)=1$. Moreover, since $\Sigma_i$ is minimal, its Gauss map $G(x)=e_1(x)\wedge e_2(x):\Sigma_i \to (G_{2,n}(\mathbb{R}),g_c)$ is a harmonic map on the punctured disk with finite energy (note that the energy of the Gauss map is exactly the total curvature). So, by Sacks and Uhlenbeck's removability of singularities for harmonic maps with finite energy \cite[Theorem 3.6]{SU81} (or \cite[Theorem A]{H91a}), $G(x)$ can be extended continuously across infinity. The rest is well known.
\end{proof}
\end{document}
\begin{document}
\title{Integrated silicon qubit platform with single-spin addressability, exchange control and robust single-shot singlet-triplet readout}
\author{M.~A.~Fogarty} \affiliation{Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, New South Wales 2052, Australia}
\author{K.~W.~Chan} \affiliation{Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, New South Wales 2052, Australia}
\author{B.~Hensen} \affiliation{Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, New South Wales 2052, Australia}
\author{W.~Huang} \affiliation{Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, New South Wales 2052, Australia}
\author{T.~Tanttu} \affiliation{Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, New South Wales 2052, Australia}
\author{C.~H.~Yang} \affiliation{Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, New South Wales 2052, Australia}
\author{A.~Laucht} \affiliation{Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, New South Wales 2052, Australia}
\author{M.~Veldhorst} \affiliation{QuTech and Kavli Institute of Nanoscience, TU Delft, Lorentzweg 1, 2628CJ Delft, the Netherlands}
\author{F.~E.~Hudson} \affiliation{Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, New South Wales 2052, Australia}
\author{K.~M.~Itoh} \affiliation{School of Fundamental Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan}
\author{D.~Culcer} \affiliation{School of Physics, The University of New South Wales, Sydney 2052, Australia}
\author{T.~D.~Ladd} \affiliation{HRL Laboratories, LLC, 3011 Malibu Canyon Rd., Malibu, CA 90265}
\author{A.~Morello} \affiliation{Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, New South Wales 2052, Australia}
\author{A.~S.~Dzurak} \affiliation{Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, New South Wales 2052, Australia}
\begin{abstract} Silicon quantum dot spin qubits provide a promising platform for large-scale quantum computation because of their compatibility with conventional CMOS manufacturing and the long coherence times accessible using $^{28}$Si enriched material. A scalable error-corrected quantum processor, however, will require control of many qubits in parallel, while performing error detection across the constituent qubits. Spin resonance techniques are a convenient path to parallel two-axis control, while Pauli spin blockade can be used to realize local parity measurements for error detection. Despite this, silicon qubit implementations have so far focused on either single-spin resonance control, or control and measurement via voltage-pulse detuning in the two-spin singlet-triplet basis, but not both simultaneously. Here, we demonstrate an integrated device platform incorporating a silicon metal-oxide-semiconductor double quantum dot that is capable of single-spin addressing and control via electron spin resonance, combined with high-fidelity spin readout in the singlet-triplet basis. \end{abstract}
\date{\today}
\maketitle
The manipulation of single-spin qubits in silicon, using either ac magnetic \cite{Pla2012, Veldhorst2014} or electric \cite{Kawakami2016,Takeda2016,Thorgrimsson2016} fields at microwave frequencies, has been a powerful driver of progress in the field of solid state qubit development, in part due to the sophistication of microwave technology which allows convenient two-axis control of the qubit via simple phase adjustment, and the generation of complex pulse sequences for dynamical decoupling. This has resulted in high-fidelity single-qubit gates\cite{Veldhorst2014,Muhonen2014,Takeda2016,Thorgrimsson2016} and initial two-qubit gates now realised in a variety of structures\cite{Veldhorst2015,Watson2017,Zajac2017}. To date, all demonstrations of single-shot readout in silicon systems employing spin resonance\cite{Pla2012, Veldhorst2014,Kawakami2016,Takeda2016} have utilized single-spin selective tunnelling to a reservoir \cite{Elzerman2004}. While convenient, this reservoir-based readout approach is not well suited to gate-based dispersive sensing\cite{Colless2013}, which has significant advantages in terms of minimizing electrode overheads for large-scale qubit architectures. In contrast, readout based on Pauli spin blockade\cite{Ono2002} in the singlet-triplet basis of a double QD\cite{Petta2005} is compatible with dispersive sensing and, when combined with an ancilla qubit, can be used for parity readout in quantum error detection and correction codes\cite{Jones2016,VeldhorstCMOS2016,Vandersypen2016}. Moreover, because singlet-triplet readout can provide high-fidelity spin readout at much lower magnetic fields than single-spin reservoir-based readout\cite{Elzerman2004}, it allows spin-resonance control to be performed at lower microwave frequencies, which will benefit scalability.
Qubits based on singlet-triplet spin states were first demonstrated in GaAs heterostructures \cite{Petta2005, Petta2010} and have now been operated in a variety of silicon-based structures \cite{Maune2012, Eng2015, Wu2014, Harvey-Collard2017,Jock2017}. High-fidelity single-shot singlet-triplet readout has also recently been demonstrated in various silicon systems\cite{Harvey-Collard2017,Nakajima2017,Broome2017}.
In order to combine the ability to address individual spin qubits using electron spin resonance (ESR) with the voltage-pulse-based detuning control and high-fidelity readout of pairs of spins in the singlet-triplet basis, we employ a $^{28}$Si metal-oxide-semiconductor (SiMOS) double quantum dot device\cite{Angus2007,Zwanenburg2013} (Fig.~\ref{fig:Figure1}a,b) with a microwave transmission line that can be used to supply ESR pulses, similar to one previously used for the demonstration of a two-qubit logic gate \cite{Veldhorst2015}. The device also includes an integrated single-electron-transistor (SET) sensor to achieve the single-charge sensitivity required for singlet-triplet readout. Electrons are loaded into the two quantum dots (QD1 and QD2) with occupancy ($N_1$,$N_2$) controlled by positive voltages on gates G1 and G2. An electron reservoir is induced beneath the Si-SiO$_2$ interface via a positive bias on gate ST, which also serves as the SET top gate. The reservoir is isolated from QD1 and QD2 by a barrier gate B (see Fig.~\ref{fig:Figure1}a,b).
\begin{figure*}\label{fig:Figure1}
\end{figure*}
\section*{Single-shot singlet-triplet readout} Figure \ref{fig:Figure1}c shows the stability diagram of the double QD system in the charge regions ($N_1$,$N_2$) where we operate the device. When two electrons occupy a double quantum dot, the exchange interaction results in an energy splitting between the singlet ($S$) and triplet ($T_-,T_0,T_+$) spin states. The exchange interaction can be controlled by electrical pulsing on nearby gates, providing a means to initialize, control and read out the singlet and triplet states\cite{Petta2005}. At the core of singlet-triplet spin readout is the observation of Pauli spin blockade (PSB)\cite{Johnson_2005, Liu2008, Shaji2008, Lai2011, Maune2012}. When pulsing from the (1,1) to the (0,2) charge configuration, the QD1 electron tunnels to QD2 only when the two spatially separated electrons were initially in the singlet spin configuration. The triplet states are blockaded from tunnelling due to the large exchange interaction in the (0,2) charge configuration. The blockade is made observable on the stability diagram by applying a pulse sequence \cite{Johnson_2005,Maune2012} to gates G1 and G2 as depicted in Fig.~\ref{fig:Figure1}c. After first flushing the system of a QD1 electron to create the (0,1) state at A, a (1,1) state at B loads a randomly configured mixture of singlet and triplet states (solid arrow in Fig.~\ref{fig:Figure1}c). The current through the nearby SET is recorded at this position, tuned to be at the half-maximum point of a Coulomb peak. The system is then ramped to a variable measurement point (dashed arrows in Fig.~\ref{fig:Figure1}c,d) where the SET current is measured again. A map of the comparison current $\Delta I_{\text{SET}}$ between these two points is created, where the derivative in the sweep direction $d(\Delta I_{\text{SET}})/d(\Delta V_\text{G1})$ (Fig.~\ref{fig:Figure1}c) decorrelates the capacitive coupling of the control gates to the SET island.
A change in the charge configuration marks a shift in the SET current, clearly observed as bright/dark bands. The bright band in the centre of the (1,1)-(0,2) anti-crossing of Fig.~\ref{fig:Figure1}c is consistent with PSB, where the blockade triangle is restricted to a narrow trapezoidal area, bounded by state co-tunnelling via the reservoir and the first available excited triplet state\cite{Maune2012}.
The charge sensor design used (Fig.~\ref{fig:Figure1}a) is relatively insensitive to inter-dot charge transitions, due to the symmetry of the QD1 and QD2 locations with respect to the SET island \cite{Zajac2016}. In order to enhance the blockade signal for this layout, we employ state-latching using the nearby electron reservoir \cite{Yang2014CHarge}. Recent studies of reservoir charge state latching \cite{Harvey-Collard2017,Nakajima2017} and intermediate excited states \cite{Studenikin2012} in semiconductor quantum dot devices have led to novel methods to reduce readout error by almost an order of magnitude \cite{Harvey-Collard2017}. A variant of this state latching is observed and utilized here.
The latching is produced via asymmetric couplings of the two dots to the common electron reservoir \cite{Yang2014CHarge}, where a (1,1)-(1,2) dot-reservoir metastable charge state is produced via a combination of the low tunnel rate between QD2 and the reservoir (shown as $\Gamma_{\text{Slow}}$ in Fig.~\ref{fig:Figure1}b) and co-tunnelling between QD1, QD2 and the reservoir ($\Gamma_{\text{Fast}}$ in Fig.~\ref{fig:Figure1}b). The latching results in the prominent feature observed at the (1,1)-(1,2) transition in Fig.~\ref{fig:Figure1}c. In contrast, when the system is initialized in the (0,2) charge configuration the singlet state is prepared robustly due to the large energy splitting, and the resulting map in Fig.~\ref{fig:Figure1}d has no latched PSB region, as expected. The energy splitting between the (0,2) singlet ground state and the first available triplet state is measured to be (1.7$\pm$0.2)\% of the charging energy $E_\text{C}$ (see Supplementary Information S5). Typically for this device design $E_\text{C}\sim10$--$20$~meV \cite{Yang2012}, indicating that this splitting exceeds electron thermal energies by two orders of magnitude. The first available triplet here is likely the first excited valley state\cite{Culcer2009,Culcer2010,Yang2012}. To compare the visibility of the standard PSB and latched PSB, histograms of $\Delta I_{\text{SET}}$ are shown in Figs.~\ref{fig:Figure1}f and \ref{fig:Figure1}g, respectively. We find that state latching increases our measurement visibility from around 70\% to 98\%, reducing the misidentification error by more than 16-fold for this SiMOS device layout (see Supplementary Information S1). We note that this measurement fidelity of $F_\text{M} = 99\%$ does not include errors that occur during the evolution from a separated (1,1) charge state to the blockade region, which we discuss in more detail below.
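The visibility figures quoted above can be related to the overlap of the two SET-current distributions. As a rough illustration (this is not the authors' analysis; the peak positions, width and threshold below are hypothetical), modelling the blockaded and unblockaded current histograms as equal-width Gaussians gives the misidentification error on either side of a discrimination threshold:

```python
import math

def misid_error(mu_s, mu_t, sigma, threshold):
    """Misassignment probabilities for Gaussian current distributions
    with means mu_s (singlet) and mu_t (triplet) and common width sigma."""
    # P(singlet read as triplet): upper tail of N(mu_s, sigma) above threshold
    p_s = 0.5 * math.erfc((threshold - mu_s) / (sigma * math.sqrt(2)))
    # P(triplet read as singlet): lower tail of N(mu_t, sigma) below threshold
    p_t = 0.5 * math.erfc((mu_t - threshold) / (sigma * math.sqrt(2)))
    return p_s, p_t

# Hypothetical, well-separated peaks in arbitrary current units
p_s, p_t = misid_error(mu_s=0.0, mu_t=1.0, sigma=0.15, threshold=0.5)
visibility = 1.0 - p_s - p_t
```

For the symmetric threshold used here the two error tails are equal, and the visibility is simply one minus their sum; a latched readout effectively increases the peak separation $\mu_t-\mu_s$ relative to $\sigma$, which suppresses both tails exponentially.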
\begin{figure*}\label{fig:Figure2}
\end{figure*}
\section*{Singlet-triplet Hamiltonian for Silicon-MOS qubits} The large valley splitting in SiMOS devices\cite{Yang2012,Veldhorst2015} allows us to restrict ourselves to the lowest valley state when considering spin dynamics near the (0,2)-(1,1) anti-crossing, which we now address. These dynamics are governed by a Hamiltonian in which single-spin distinguishability and exchange are in competition. Single-spin distinguishability arises from the varying Zeeman energy between each dot, interpreted as a site-specific effective $g$-factor and resulting in an energy difference $\delta E_\text{Z} = g_2\mu_B B^z_2-g_1\mu_B B^z_1$. For high in-plane magnetic field, the varying effective $g$-factors result from a combination of interface spin-orbit terms, which depend on local strain, electric fields, and the atomistic details of the oxide interface\cite{Yang2012,Veldhorst2015a,Ferdous2017}. In previous experiments on similar devices we have observed $g$-factor differences between QDs as large as 0.5\%\cite{Veldhorst2015}; at high field, Overhauser contributions to $\delta E_\text{Z}$ are negligible in isotopically purified samples. At lower magnetic fields, magnetic screening from the superconducting aluminum gates may also contribute significantly to $\delta E_\text{Z}$\cite{underwood2017coherent}.
The Hamiltonian term in competition with the Zeeman gradient is kinetic exchange, which lowers the energy of the spin singlet by an amount $J(\varepsilon)$ due to interdot tunnelling. In the standard Fermi-Hubbard model, $J(\varepsilon)$ is proportional to $2t_c^2(\varepsilon)/|\varepsilon|$ for large $\varepsilon$, where $t_c(\varepsilon)$ is the inter-dot tunnel coupling and $\varepsilon$ combines the on-site charging energy and electrochemical potential difference between the two dots\cite{Taylor2007} (see Supplementary Information S5). In previous experiments\cite{Veldhorst2015} on a similar SiMOS two-qubit device the tunnel coupling at the anti-crossing was estimated as $900\sqrt{2}$~MHz. In this model, the ground-state singlet is hybridized between (0,2) and (1,1) charge states as $\ket{S_H}=\cos(\theta/2)\ket{\text{(1,1)S}}+\sin(\theta/2)\ket{\text{(0,2)S}}$, where $\theta=-\tan^{-1}(2t_c/\varepsilon)$. Again neglecting higher-energy valley or orbital states, the spin-triplet states $\ket{T_m}$ with two-spin angular momentum projection $m=0,\pm 1$ are fully separated in the (1,1) charge state. Besides being split from the $m=0$ states by the mean Zeeman energy, $\bar{E}_\text{Z}$, the $\ket{T_{\pm}}$ states may couple to the hybridized singlet states by local magnetic fields which are orthogonal to the average applied field, as well as by spin-orbit coupling. We summarize such terms by a spin-flipping term $\Delta(\theta)$.
Hence in the basis $\{\ket{T_+},\ket{T_0},\ket{T_-},\ket{S_H}\},$ the approximate effective Hamiltonian is written \begin{equation}\label{eq:STHammil} H= \begin{pmatrix} \bar{E}_\text{Z}-\varepsilon/2 & 0 & 0 & \Delta(\theta) \\ 0 & -\varepsilon/2 & 0 & \delta E_\text{Z}\cos\theta \\ 0 & 0 & -\bar{E}_\text{Z}-\varepsilon/2 & -\Delta(\theta) \\ \Delta(\theta)^* & \delta E_\text{Z}\cos\theta & -\Delta(\theta)^* & \varepsilon/2-J(\varepsilon) \end{pmatrix}. \end{equation}
A typical energy spectrum of this Hamiltonian as a function of detuning $\varepsilon$ is shown in Fig.~\ref{fig:Figure2}a for small magnetic fields, $B^z \sim J(\varepsilon)/g\mu_B$.
\section*{Characterizing the singlet-triplet Hamiltonian}
The anti-crossing between the $\ket{S_H}$ and $\ket{T_\pm}$ states due to $\Delta(\theta)$ can be used to map out the energy separation $|E_{S_H}(\varepsilon)-E_{T_-}(\varepsilon)|$ as a function of small detuning $\varepsilon$ by performing a spin funnel experiment \cite{Petta2005}. Here, we initialize in a (0,2) singlet ground state, $\ket{(0,2)S}$, and pulse toward the spatially separated $\ket{(1,1)S}$, as shown in Figs.~\ref{fig:Figure2}b \&~\ref{fig:Figure2}c. By varying the applied magnetic field $B_0^z$ while dwelling at various values of detuning $\varepsilon$, the location of the anti-crossing can be mapped out via the increased triplet probability $P_\text{T}$ (Fig.~\ref{fig:Figure2}d) due to mixing under $\Delta(\theta)$. Ramping across the anti-crossing causes a coherent population transfer between $\ket{S_H}$ and $\ket{T_-}$ via Landau-Zener tunnelling \cite{Shevchenko2010}; the probability of a diabatic passage (remaining in $\ket{S_H}$) is $\exp(-2\pi|\Delta(\theta)|^2/\hbar\nu)$, characterized by the ratio of $\Delta(\theta)$ to the energy-level velocity $\nu = |d(E_{S_H}-E_{T_-})/dt|$. As the ramp rate rises the singlet state $\ket{S_H}$ is increasingly maintained (see Fig.~\ref{fig:Figure2}f) and so the triplet return probability $P_\text{T}$ falls, as we observe in Fig.~\ref{fig:Figure2}e. By fitting this data
(Supplementary Information S3) we estimate that $|\Delta(\theta)|$ at the location of the minimum energy gap is $(196\pm6)$~kHz at $B_0^z=0$ (an offset field of $B_{OS}^z=(-1.04\pm0.06)$~mT is estimated from the spin-funnel asymmetry). Further, $|\Delta(\theta)|=(16.72\pm1.64)$~MHz at $B_0^z= 155$~mT, where the uncertainty here (and elsewhere) corresponds to $95\%$ confidence intervals.
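Expressed in frequency units ($|\Delta|/h$ in Hz and the level velocity $\nu/h$ in Hz/s), the diabatic-passage probability above becomes $P=\exp(-4\pi^2 f_\Delta^2/r)$. A minimal numerical check of the two limits, using the fitted 196~kHz coupling but made-up ramp rates (not those of the experiment):

```python
import math

def p_diabatic(f_delta, ramp_rate):
    """Probability of remaining in |S_H> when crossing the S_H/T- gap.
    f_delta: coupling |Delta|/h in Hz; ramp_rate: |d(E_SH - E_T-)/dt|/h in Hz/s.
    Equivalent to exp(-2*pi*|Delta|^2 / (hbar*nu)) in energy units."""
    return math.exp(-4 * math.pi ** 2 * f_delta ** 2 / ramp_rate)

f_delta = 196e3                      # fitted coupling at B0 = 0, Hz
p_fast = p_diabatic(f_delta, 1e18)   # fast ramp: singlet maintained
p_slow = p_diabatic(f_delta, 1e11)   # slow ramp: adiabatic transfer to T-
```

The fast-ramp limit leaves the singlet essentially intact, while the slow-ramp limit transfers the population adiabatically to $\ket{T_-}$, consistent with the falling $P_\text{T}$ observed as the ramp rate rises.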
There are a number of possible processes that can contribute to $\Delta$ in the silicon-MOS platform. For 800~ppm nuclear-spin-1/2 $^{29}$Si in the isotopically enriched $^{28}$Si epilayer\cite{Ito2014}, we expect random hyperfine fields in all vector directions with root-mean-square of order 50~kHz\cite{Eng2015} for unpolarized nuclei, so this may contribute to $\Delta$. However, other effects may have a comparable contribution. At low $B^z_0~(\lesssim~50$~mT), Meissner effects from the superconducting aluminum gates can provide transverse local magnetic fields at the location of the QDs; contributions to $\delta B^z$ from this effect of up to a few MHz have been reported\cite{underwood2017coherent}. Further, off-diagonal terms in the difference between the electron $g$-tensors can contribute to coupling between (1,1) states. Finally, in the presence of inter-dot tunnelling, the interface spin-orbit interaction provides a separate contribution to $\Delta$, leading to estimated couplings of tens of kHz at low-$B_0^z$. Detailed studies on magnetic field magnitude and angle dependence, such as those performed to isolate hyperfine from spin-orbit contributions in nuclear-rich materials such as GaAs\cite{Nichol2015quenching}, are required to separate and explore each of these individual effects.
We can further characterize the Hamiltonian in Eq.~(\ref{eq:STHammil}) at much larger detuning $\varepsilon$ than is accessible via the spin funnel by performing a Landau-Zener-Stueckelberg (LZS) interference experiment\cite{Shevchenko2010, Petta2010} (Fig.~\ref{fig:Figure2}g). This is performed at $B_0^z = 0$, but the residual magnetic field present, which may include some nuclear polarization\cite{Reilly2008}, is sufficient to split the $\ket{T_0}$ and $\ket{T_\pm}$ states. By setting the ramp rate across the $\ket{S_H}$/$\ket{T_-}$ anti-crossing to $2\pi\nu\approx|\Delta|^2/\hbar$, an approximately equal superposition of both states is created. Dwelling for varying times $\tau_D$ and detunings $\varepsilon$ results in a Stueckelberg phase accumulation $\phi = \int(E_{S_H}(\varepsilon[t])-E_{T_-}(\varepsilon[t]))\,dt/\hbar$, with $E_{S_H}$ ($E_{T_-}$) the energy of the $\ket{S_H}$ ($\ket{T_-}$) state. Depending on the accumulated phase, the returning passage through the anti-crossing either constructively interferes, resulting in the blockaded $\ket{T_-}$, or destructively interferes, bringing the system back to $\ket{S_H}$. By keeping $\nu$ constant throughout the experiment, the Fourier transform of the interference pattern (Fig.~\ref{fig:Figure2}g, left) directly extracts the energy separation $|E_{S_H}(\varepsilon)-E_{T_-}(\varepsilon)|$ as a function of detuning.
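The spectroscopy step at the end of this sequence is, in essence, a discrete Fourier transform of $P_\text{T}(\tau_D)$ at each detuning. A toy version with a made-up 25~MHz splitting (not a measured value) shows how the oscillation frequency, and hence $|E_{S_H}-E_{T_-}|/h$, is recovered:

```python
import numpy as np

f0 = 25e6   # hypothetical energy splitting divided by h, Hz
fs = 1e9    # sampling rate of the dwell-time axis, samples/s

t = np.arange(0, 2e-6, 1 / fs)                 # dwell times tau_D
p_t = 0.5 - 0.5 * np.cos(2 * np.pi * f0 * t)   # idealized triplet probability

# FFT of the mean-subtracted trace; the peak bin gives the splitting
spec = np.abs(np.fft.rfft(p_t - p_t.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
f_est = freqs[np.argmax(spec)]
```

Repeating this per detuning column of the interference map traces out $|E_{S_H}(\varepsilon)-E_{T_-}(\varepsilon)|$, which is how the LZS data feed into the energy-splitting plot of Fig.~\ref{fig:Figure4}.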
We now investigate exchange between the hybridised singlet $\ket{S_H}$ and unpolarized triplet $\ket{T_0}$ by applying an external magnetic field $B_0^z = 200$~mT to strongly split away the $T_\pm$ triplet states. At these fields the Zeeman energy difference $\delta E_\text{Z}$ dominates exchange $J(\varepsilon)$ deep in the (1,1) region, and the eigenstates there become $\ket{\downarrow\uparrow}$ and $\ket{\uparrow\downarrow}$, as depicted in Fig.~\ref{fig:Figure3}a. Maintaining a ramp rate $\nu$ fast enough to be diabatic with respect to $\Delta$, but slow enough to be adiabatic with respect to $t_c(\varepsilon)$, ensures adiabatic preparation of a ground state $\ket{\downarrow\uparrow}$ or $\ket{\uparrow\downarrow}$, depending upon the sign of $\delta E_\text{Z} = g_2\mu_BB^z_2-g_1 \mu_BB^z_1$. At $B_0^z = 200$~mT we expect the Meissner effect to be quenched, so that $\delta E_\text{Z}$ is dominated by the effective $g$-factor difference between the dots.
For simplicity we henceforth assume $\delta E_\text{Z}>0$, so that we adiabatically prepare $\ket{\downarrow\uparrow}$ for large $\varepsilon$. Following the pulse sequence illustrated in Fig.~\ref{fig:Figure3}c, coherent-exchange-driven oscillations can then be observed between $\ket{\downarrow\uparrow}$ and $\ket{\uparrow\downarrow}$ by rapidly plunging the prepared state $\ket{\downarrow\uparrow}$ back towards the (1,1)-(0,2) anti-crossing where $J(\varepsilon)$ is no longer negligible. Variable dwell time $\tau_D$ results in coherent exchange oscillations, and the reversal of the rapid plunge leaves the state in a superposition of $\ket{\downarrow\uparrow}$ and $\ket{\uparrow\downarrow}$. The semi-adiabatic ramp back to (0,2) maps the final state $\ket{\downarrow\uparrow}$ to the $\ket{(0,2)S}$ singlet, while $\ket{\uparrow\downarrow}$ is mapped to a blockaded state via the $T_0$ triplet \cite{Petta2005, Maune2012}. The resulting data is shown in Fig.~\ref{fig:Figure3}d.
\begin{figure*}
\caption{\textbf{$\vert$ Exchange-driven oscillations and individual electron ESR at low field.}
\textbf{a)} Energy diagram for the five lowest-energy states near the (0,2)-(1,1) anti-crossing, represented in the spin basis. Compared to Fig.~\ref{fig:Figure2}a, an increased magnetic field $B_0^z$ splits off the polarized triplets $\ket{T_\pm}$, while $\delta g$ due to the spin-orbit coupling breaks the $\ket{(1,1)S}/\ket{T_0}$ degeneracy, producing $\delta E_\text{Z}$. \textbf{b)} Bloch sphere representation of the $\ket{(1,1)S}/\ket{T_0}$ qubit showing the effect of Heisenberg exchange $J$ and $\delta E_\text{Z}$. \textbf{c, d)} Coherent Rabi oscillations between the $\ket{\downarrow\uparrow}$ and $\ket{\uparrow\downarrow}$ states, driven by exchange $J$. \textbf{c)} Pulse sequence for the data in d); an adiabatic ramp (diabatic through the $S/T_-$ crossing) prepares $\ket{\downarrow\uparrow}$ (assuming $g_2>g_1$). Diabatic pulses back to the high-exchange region then cause coherent evolution of the state for a period of variable time/depth. The resulting change in population of $\ket{\downarrow\uparrow}$ is mapped back to $\ket{(0,2)S}$ using the inverse adiabatic ramp. \textbf{d)} (left) Fourier transform of the time series (right), which shows exchange-driven oscillations between the $\ket{\downarrow\uparrow}$ and $\ket{\uparrow\downarrow}$ states. \textbf{e)} Pulse sequence used for the data in f). \textbf{f)} Triplet probability as a function of detuning $\varepsilon$ and applied ESR frequency, with $f_0 = 4.205$~GHz, showing ESR rotations of the spin in the left dot (upper branch) and right dot (lower branch) using an on-chip microwave ESR line. $\ket{\downarrow\uparrow}$ is prepared similarly to c); a 25~\textmu s ESR pulse of varying frequency is applied, rotating $\ket{\downarrow\uparrow} \rightarrow \ket{\uparrow\uparrow}$ when $g_2\mu_BB_0^z/h = f_\text{ESR}$ and $\ket{\downarrow\uparrow} \rightarrow \ket{\downarrow\downarrow}$ when $g_1\mu_BB_0^z/h = f_\text{ESR}$; $\ket{\downarrow\uparrow}$ is then mapped back to $\ket{(0,2)S}$. We find $|g_2-g_1| = (0.43\pm0.02)\times10^{-3}$.}
\label{fig:Figure3}
\end{figure*}
\begin{figure}
\caption{\textbf{$\vert$ Effective exchange with detuning.} Exchange energy splitting $|E_{S_H}(\varepsilon)-E_{T_0}(\varepsilon)|$ as a function of detuning $\varepsilon$, as extracted from the spin funnel (similar to Fig.~\ref{fig:Figure2}d, see Supplementary Information S3), Landau-Zener-Stueckelberg interferometry (Fig.~\ref{fig:Figure2}g), coherent exchange oscillations (Fig.~\ref{fig:Figure3}d) and the ESR funnel data (Fig.~\ref{fig:Figure3}f). Each includes 95\% confidence intervals based on data-fit uncertainties or measurement resolution. The solid/dashed lines represent fits to the data based on the model Hamiltonian in Eq.~(\ref{eq:STHammil}), for which we find $t_c(\varepsilon=0) = (1.86\pm0.03)$~GHz.}
\label{fig:Figure4}
\end{figure}
\section*{Individual qubit addressability via electron spin resonance} We note that previous experiments performed at $B_0^z =1.4$~T on another SiMOS device exploited the $g$-factor difference between two QDs in the low-$J(\varepsilon)$ region to perform a two-qubit controlled-phase operation \cite{Veldhorst2015}. Utilizing the high-$J(\varepsilon)$ region as above, the $\ket{\downarrow\uparrow}\leftrightarrow\ket{\uparrow\downarrow}$ operation can extend the two-qubit toolbox to include a SWAP gate with a potentially shorter operation time, in this case $\tau_\text{SWAP} \sim 0.25$~\textmu s, limited by exchange pulse rise times.
Having characterized the system in the singlet-triplet basis, we now investigate the compatibility of spin-blockade readout with individual QD (i.e., single-spin) addressability via electron spin resonance (ESR) \cite{Veldhorst2014}, a combination desirable for scalable spin qubit architectures incorporating error correction \cite{Jones2016, VeldhorstCMOS2016}. Using the pulse sequence illustrated in Fig.~\ref{fig:Figure3}e, we again adiabatically prepare the large-$\varepsilon$ ground state $\ket{\downarrow\uparrow}$, as discussed above. We then apply an ac magnetic field to perform ESR with a pulse duration of 25~\textmu s, supplied by the on-chip microwave transmission line \cite{Dehollain2012} (Fig.~\ref{fig:Figure1}a), to drive transitions that correspond to $\ket{\downarrow\uparrow}\leftrightarrow\ket{\downarrow\downarrow}$ and $\ket{\downarrow\uparrow}\leftrightarrow\ket{\uparrow\uparrow}$ at large detuning, when exchange is small (see Fig.~\ref{fig:Figure3}a). Any excitation from the ground state will then map to the blockaded triplet state population. Figure~\ref{fig:Figure3}f shows the measured ESR spectrum as a function of detuning $\varepsilon$. The higher-frequency $f_\text{ESR2}$ branch corresponds to a coherent rotation of the electron spin in QD2, while the lower-frequency $f_\text{ESR1}$ branch rotates the QD1 spin. At large detuning $f_\text{ESR1}\sim4.2$~GHz, consistent with the applied magnetic field $B_0^z=150$~mT for this experiment. As $\varepsilon$ decreases (and $J(\varepsilon)$ increases), the ground state is better described as $\ket{S_H}$, so the transitions become $\ket{S_H}\leftrightarrow\ket{\downarrow\downarrow}$ and $\ket{S_H}\leftrightarrow\ket{\uparrow\uparrow}$, and exchange now competes with ESR, resulting in a lower visibility.
\section*{Exchange coupling between qubits} Each of the experiments described above probes the Hamiltonian in Eq.~(\ref{eq:STHammil}) for different ranges of detuning. Figure~\ref{fig:Figure4} collates the results of all experiments and plots the energy splitting between the hybridised singlet $\ket{S_H}$ and unpolarised triplet $\ket{T_0}$ across all detuning values. Close to the (0,2)-(1,1) anti-crossing, for low $\varepsilon$, the splitting is dominated by the exchange coupling $J$, while for large $\varepsilon$, $\delta E_\text{Z}$ dominates. As expected, the energy differences obtained from the LZS interferometry (for $B_0^z \approx 0$) diverge from those obtained via ESR (where $B_0^z =150$~mT), since when $B_0^z \approx 0$ there remains only a small residual $\delta E_\text{Z}$ due to combined Meissner screening and weak Overhauser fields. Figure~\ref{fig:Figure4} also shows a fit to the data employing the Hamiltonian of Eq.~(\ref{eq:STHammil}). A constant $t_c$ fits poorly; instead a model with a detuning-dependent tunnel coupling $t_c(\varepsilon)$ is employed (see Supplementary Information section S5). At the anti-crossing $(\varepsilon=0)$, the fit to this model indicates $t_c(\varepsilon=0)=(1.864\pm0.033)$~GHz and $\delta g=(0.43\pm0.02)\times10^{-3}$. We note that this tunnel coupling is comparable to that observed for a separate two-qubit device\cite{Veldhorst2015}, for which $t_c(\varepsilon) = 900\sqrt{2}$~MHz.
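The crossover in Fig.~\ref{fig:Figure4} can be illustrated with a two-level $\{S_H, T_0\}$ sketch in which the measured splitting is approximately $\sqrt{J(\varepsilon)^2+\delta E_\text{Z}^2}$. The Hubbard-type form of $J(\varepsilon)$ below and the neglect of the $\cos\theta$ factor are simplifying assumptions; $t_c$ and $\delta g$ are taken from the fit values quoted in the text:

```python
import math

tc = 1.86e9                       # fitted tunnel coupling, Hz
mu_B = 9.2740100783e-24           # Bohr magneton, J/T
h = 6.62607015e-34                # Planck constant, J*s
dEz = 0.43e-3 * mu_B * 0.15 / h   # delta g * mu_B * B / h at 150 mT (~0.9 MHz)

def splitting(eps):
    """Approximate |E_SH - E_T0|/h in Hz versus detuning eps (in Hz),
    assuming J(eps) = sqrt((eps/2)^2 + 2*tc^2) - eps/2 -> 2*tc^2/eps."""
    J = math.sqrt((eps / 2) ** 2 + 2 * tc ** 2) - eps / 2
    return math.hypot(J, dEz)
```

Near $\varepsilon=0$ the splitting is set by $t_c$ (GHz scale), while deep in the (1,1) region it saturates at $\delta E_\text{Z}$ (MHz scale), reproducing the two regimes collated in Fig.~\ref{fig:Figure4}.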
\section*{Discussion} By analyzing the error processes present in these experiments, we can identify where improvements will be required before these mechanisms can be integrated into a parity readout tool useful for future multi-qubit architectures. We can discriminate between the effects of various error processes by comparing blockade probability observations under different operating regimes in the exchange oscillation data of Fig.~\ref{fig:Figure3}d. The histograms shown in Fig.~\ref{fig:Figure1}g each reveal state preparation and measurement (SPAM)-related errors, leading to a visibility maximum of 98\% (orange data) and an error of 0.8\% associated with $\ket{(0,2)S}$ preparation and the transfer process to a latched readout position (red data). In addition to these SPAM errors there are the transfer and mapping error processes present when converting states semi-adiabatically from (0,2)$\rightarrow$(1,1) or (1,1)$\rightarrow$(0,2), respectively. The combined error from SPAM, state transfer and mapping can be observed from the background visibility at a detuning where exchange is minimal. Here, the prepared $\ket{(0,2)S}$ state is ideally transferred to and from the (1,1) region without loss, resulting in zero triplet probability. In contrast, the average blockaded return probability from Fig.~\ref{fig:Figure3}d (and therefore the combined transfer and mapping error) saturates to around 30\%. From the decay in the oscillations of Fig.~\ref{fig:Figure3}d as a function of operation time $\tau_D$, we find a maximum control fidelity of $F_\pi = 0.95\pm0.04$ at $\varepsilon = 0.6$~meV. We find that the decay time is proportional to the Rabi period, suggesting that exchange noise limits our control fidelity. Further, comparison with the 61\% visibility of the first fringe in Fig.~\ref{fig:Figure3}d suggests that diabaticity errors due to each fast plunge to/from the exchange position are also present.
With respect to our system’s utility for providing a parity readout tool, the main error source in the present work appears to occur during adiabatic transfer into and out of the (1,1) region. Time-dependent simulations (Supplementary Information S6) of the model Hamiltonian Eq.~(\ref{eq:STHammil}) show that this error can be well explained by diabaticity with respect to $t_c(\varepsilon)$ near the anti-crossing. We expect that this error can be significantly reduced by optimizing the shape of the ramp as a function of detuning, to remain diabatic with respect to $\Delta$ near the $\ket{S_H}/\ket{T_-}$ crossing, while staying adiabatic elsewhere. Of relevance to the fidelity of exchange-based two-qubit gates, we note that charge and voltage noise will couple via detuning $\varepsilon$ to produce noise in exchange. Our simulations (Supplementary Information S6) indicate that the level of charge noise expected \cite{Muhonen2014,Eng2015} in our system results in a $\ket{S_H}/\ket{T_0}$ oscillation decay consistent with our measurements. The effect of charge noise could be minimized by symmetric biasing \cite{Reed2016}, with the use of an additional exchange gate.
To conclude, we have for the first time in a silicon device experimentally combined single-spin control using electron spin resonance, with high-fidelity single-shot readout in the singlet-triplet basis. By characterising the relevant energy scales $\Delta$, $\delta E_\text{Z}$ and $t_c(\varepsilon)$ of the two-spin Hamiltonian, we found that we could coherently manipulate both the $S/T_-$ and $S/T_0$ states, the latter of which provides potential for a fast two-qubit SWAP gate at high exchange. The integration of low-frequency ESR of individual spins with singlet-triplet based initialisation and readout holds promise for qubit architectures operating at significantly lower magnetic fields and higher temperatures. Future experiments will focus on improvements in operational fidelities, as well as further characterisation of low-frequency ESR operation. The presented initialisation and readout of singlet-triplet states attests to the compatibility of the SiMOS quantum dot platform with parity readout based on spin-blockade, key for the realisation of a future large-scale silicon-based quantum processor \cite{Jones2016, VeldhorstCMOS2016}.
\section*{Methods} \subsection*{Device Fabrication} The device is fabricated on an epitaxially grown, isotopically purified $^{28}$Si epilayer with a residual $^{29}$Si concentration of 800 ppm\cite{Ito2014}. Following the multi-level gate-stack silicon MOS technology\cite{Angus2007}, four layers of Al gates are fabricated on top of a SiO$_2$ dielectric with a thickness of 5.9 nm. The gate layers have thicknesses of 25, 60, 80 and 80 nm, with three layers used to form the device and the fourth layer forming the ESR transmission line. Overlapping layers are separated by thermally grown aluminum oxide.
\subsection*{Experimental Set-up} The measurements were conducted in a dilution refrigerator with a base temperature of $T_\text{bath}=30$~mK. DC voltages were applied using battery-powered voltage sources and combined with voltage pulses from an arbitrary waveform generator (LeCroy ArbStudio 1104) through a resistive voltage-divider network. Filters were included for the DC, slow-pulse and fast-pulse lines (10~Hz to 80~MHz). Microwave pulses were delivered by an Agilent E8257D analogue signal generator, with the signal passing through a 10~dB attenuator at the 4~K plate and a 3~dB attenuator at the mixing chamber plate.
All measured qubit statistics are based on counting the blockade signal in the latched region, as described in the main text. The experiments operate on a system of two tunnel-coupled quantum dots sharing a total of two electrons. The latched readout procedure involves the conditional loading of a third electron from a tunnel-coupled reservoir onto one of these dots. Each data point represents the average of between 100 and 1200 single-shot blockade events, including repetitions of the experiment trace. Stability maps generated from three-level pulsing could be produced with less averaging; the figure data are the average of 40 shots per point.
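The shot counts quoted above fix the statistical error bars: an estimate of the blockade probability $p$ from $N$ independent single-shot outcomes carries a binomial standard error of $\sqrt{p(1-p)/N}$. A short illustrative Python check, where the value $p=0.5$ is the worst-case assumption rather than a measured probability:

```python
import math

def blockade_fraction_error(p: float, n_shots: int) -> float:
    """Standard error of a blockade probability p estimated from
    n_shots independent single-shot outcomes (binomial statistics)."""
    return math.sqrt(p * (1.0 - p) / n_shots)

# Error bars at the two ends of the averaging range quoted in the text,
# evaluated at the worst-case probability p = 0.5.
err_100 = blockade_fraction_error(0.5, 100)    # -> 0.05
err_1200 = blockade_fraction_error(0.5, 1200)  # roughly 0.014
print(err_100, err_1200)
```

The $1/\sqrt{N}$ scaling shows why the stability maps (40 shots per point) tolerate coarser error bars than the qubit-statistics data points.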
\section*{Acknowledgments} \vspace*{-5mm} We thank Mark Gyure for helpful discussions. We acknowledge support from the Australian Research Council (CE11E0001017 and CE170100039), the US Army Research Office (W911NF-13-1-0024 and W911NF-17-1-0198) and the NSW Node of the Australian National Fabrication Facility. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. M.V. and B.H. acknowledge support from the Netherlands Organisation for Scientific Research (NWO) through a Rubicon Grant. K.M.I. acknowledges support from a Grant-in-Aid for Scientific Research by MEXT, NanoQuine, FIRST, and the JSPS Core-to-Core Program.
\begin{comment} \section*{Author Contributions} M.A.F and B.H performed experiments. M.V designed the device, fabricated by K.W.C and F.E.H with A.S.D's supervision. K.M.I prepared and supplied the $\leftidx{^{28}}{}{}$Si epilayer. W.H., T.T., C.H.Y. and A.L. contributed to the preparation of experiments. M.A.F., B.H. and A.S.D. designed the experiments, with T.D.L, W.H., D.C., K.W.C., T.T., C.H.Y., A.L., A.M. contributing to results discussion and interpretation. M.A.F., B.H. and A.S.D. wrote the manuscript with input from all co-authors. \end{comment}
\section*{Author Information} \vspace*{-5mm} The authors declare no competing financial interests. Readers are welcome to comment on the online version of the paper. Correspondence and requests for materials should be addressed to M.A.F. (m.fogarty@unsw.edu.au), B.H. (b.hensen@unsw.edu.au) or A.S.D. (a.dzurak@unsw.edu.au).
\hyphenpenalty=10000
\begin{thebibliography}{46} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Pla}\ \emph {et~al.}(2012)\citenamefont {Pla},
\citenamefont {Tan}, \citenamefont {Dehollain}, \citenamefont {Lim},
\citenamefont {Morton}, \citenamefont {Jamieson}, \citenamefont {Dzurak},\
and\ \citenamefont {Morello}}]{Pla2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont
{Pla}}, \bibinfo {author} {\bibfnamefont {K.~Y.}\ \bibnamefont {Tan}},
\bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont {Dehollain}}, \bibinfo
{author} {\bibfnamefont {W.~H.}\ \bibnamefont {Lim}}, \bibinfo {author}
{\bibfnamefont {J.~J.~L.}\ \bibnamefont {Morton}}, \bibinfo {author}
{\bibfnamefont {D.~N.}\ \bibnamefont {Jamieson}}, \bibinfo {author}
{\bibfnamefont {A.~S.}\ \bibnamefont {Dzurak}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Morello}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{A single-atom electron spin qubit in silicon}},}\ }\href
{\doibase 10.1038/nature11449} {\bibfield {journal} {\bibinfo {journal}
{Nature}\ }\textbf {\bibinfo {volume} {488}},\ \bibinfo {pages} {541--545}
(\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Veldhorst}\ \emph {et~al.}(2014)\citenamefont
{Veldhorst}, \citenamefont {Hwang}, \citenamefont {Yang}, \citenamefont
{Leenstra}, \citenamefont {de~Ronde}, \citenamefont {Dehollain},
\citenamefont {Muhonen}, \citenamefont {Hudson}, \citenamefont {Itoh},
\citenamefont {Morello},\ and\ \citenamefont {Dzurak}}]{Veldhorst2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Veldhorst}}, \bibinfo {author} {\bibfnamefont {J.~C.~C.}\ \bibnamefont
{Hwang}}, \bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Yang}},
\bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Leenstra}}, \bibinfo
{author} {\bibfnamefont {B.}~\bibnamefont {de~Ronde}}, \bibinfo {author}
{\bibfnamefont {J.~P.}\ \bibnamefont {Dehollain}}, \bibinfo {author}
{\bibfnamefont {J.~T.}\ \bibnamefont {Muhonen}}, \bibinfo {author}
{\bibfnamefont {F.~E.}\ \bibnamefont {Hudson}}, \bibinfo {author}
{\bibfnamefont {K.~M.}\ \bibnamefont {Itoh}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Morello}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~S.}\ \bibnamefont {Dzurak}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{An addressable quantum dot qubit with
fault-tolerant control-fidelity}},}\ }\href {\doibase 10.1038/nnano.2014.216}
{\bibfield {journal} {\bibinfo {journal} {Nature Nanotechnology}\ }\textbf
{\bibinfo {volume} {9}},\ \bibinfo {pages} {981--985} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kawakami}\ \emph {et~al.}(2016)\citenamefont
{Kawakami}, \citenamefont {Jullien}, \citenamefont {Scarlino}, \citenamefont
{Ward}, \citenamefont {Savage},\ and\ \citenamefont
{Lagally}}]{Kawakami2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Kawakami}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jullien}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Scarlino}}, \bibinfo
{author} {\bibfnamefont {D.~R.}\ \bibnamefont {Ward}}, \bibinfo {author}
{\bibfnamefont {D.~E.}\ \bibnamefont {Savage}}, \ and\ \bibinfo {author}
{\bibfnamefont {M.~G.}\ \bibnamefont {Lagally}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Gate fidelity and coherence of an electron spin
in an Si/SiGe quantum dot with micromagnet}},}\ }\href {\doibase
10.1073/pnas.1603251113} {\bibfield {journal} {\bibinfo {journal}
{Proceedings of the National Academy of Sciences}\ }\textbf {\bibinfo
{volume} {113}},\ \bibinfo {pages} {1--6} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Takeda}\ \emph {et~al.}(2016)\citenamefont {Takeda},
\citenamefont {Kamioka}, \citenamefont {Otsuka}, \citenamefont {Yoneda},
\citenamefont {Nakajima}, \citenamefont {Delbecq}, \citenamefont {Amaha},
\citenamefont {Allison}, \citenamefont {Kodera}, \citenamefont {Oda},\ and\
\citenamefont {Tarucha}}]{Takeda2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Takeda}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kamioka}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Otsuka}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Yoneda}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Nakajima}}, \bibinfo {author}
{\bibfnamefont {M.~R.}\ \bibnamefont {Delbecq}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Amaha}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Allison}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Kodera}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Oda}}, \ and\ \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Tarucha}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{A fault-tolerant addressable spin qubit in a natural silicon
quantum dot}},}\ }\href {\doibase 10.1126/sciadv.1600694} {\bibfield
{journal} {\bibinfo {journal} {Science Adv.}\ }\textbf {\bibinfo {volume}
{2}},\ \bibinfo {pages} {e1600694} (\bibinfo {year} {2016})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Thorgrimsson}\ \emph {et~al.}(2016)\citenamefont
{Thorgrimsson}, \citenamefont {Kim}, \citenamefont {Yang}, \citenamefont
{Smith}, \citenamefont {Simmons}, \citenamefont {Ward}, \citenamefont
{Foote}, \citenamefont {Corrigan}, \citenamefont {Savage}, \citenamefont
{Lagally}, \citenamefont {Friesen}, \citenamefont {Coppersmith},\ and\
\citenamefont {Eriksson}}]{Thorgrimsson2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Thorgrimsson}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kim}},
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yang}}, \bibinfo {author}
{\bibfnamefont {L.~W.}\ \bibnamefont {Smith}}, \bibinfo {author}
{\bibfnamefont {C.~B.}\ \bibnamefont {Simmons}}, \bibinfo {author}
{\bibfnamefont {D.~R.}\ \bibnamefont {Ward}}, \bibinfo {author}
{\bibfnamefont {R.~H.}\ \bibnamefont {Foote}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Corrigan}}, \bibinfo {author}
{\bibfnamefont {D.~E.}\ \bibnamefont {Savage}}, \bibinfo {author}
{\bibfnamefont {M.~G.}\ \bibnamefont {Lagally}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Friesen}}, \bibinfo {author} {\bibfnamefont
{S.~N.}\ \bibnamefont {Coppersmith}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.~A.}\ \bibnamefont {Eriksson}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{Extending the coherence of a quantum dot hybrid qubit}},}\ }\href
{\doibase 10.1038/s41534-017-0034-2} {\bibfield {journal} {\bibinfo
{journal} {npj Quantum Information}\ ,\ \bibinfo {pages} {2--5}} (\bibinfo
{year} {2016})},\ \Eprint {http://arxiv.org/abs/1611.04945}
{arXiv:1611.04945} \BibitemShut {NoStop} \bibitem [{\citenamefont {Muhonen}\ \emph {et~al.}(2014)\citenamefont
{Muhonen}, \citenamefont {Dehollain}, \citenamefont {Laucht}, \citenamefont
{Hudson}, \citenamefont {Kalra}, \citenamefont {Sekiguchi}, \citenamefont
{Itoh}, \citenamefont {Jamieson}, \citenamefont {Mccallum}, \citenamefont
{Dzurak},\ and\ \citenamefont {Morello}}]{Muhonen2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~T.}\ \bibnamefont
{Muhonen}}, \bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont
{Dehollain}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Laucht}},
\bibinfo {author} {\bibfnamefont {F.~E.}\ \bibnamefont {Hudson}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Kalra}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Sekiguchi}}, \bibinfo {author}
{\bibfnamefont {K.~M.}\ \bibnamefont {Itoh}}, \bibinfo {author}
{\bibfnamefont {D.~N.}\ \bibnamefont {Jamieson}}, \bibinfo {author}
{\bibfnamefont {J.~C.}\ \bibnamefont {Mccallum}}, \bibinfo {author}
{\bibfnamefont {A.~S.}\ \bibnamefont {Dzurak}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Morello}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{Storing quantum information for 30 seconds in a
nanoelectronic device}},}\ }\href {\doibase 10.1038/nnano.2014.211}
{\bibfield {journal} {\bibinfo {journal} {Nature Nanotechnology}\ }\textbf
{\bibinfo {volume} {9}} (\bibinfo {year} {2014}),\
10.1038/nnano.2014.211}\BibitemShut {NoStop} \bibitem [{\citenamefont {Veldhorst}\ \emph
{et~al.}(2015{\natexlab{a}})\citenamefont {Veldhorst}, \citenamefont {Yang},
\citenamefont {Hwang}, \citenamefont {Huang}, \citenamefont {Dehollain},
\citenamefont {Muhonen}, \citenamefont {Simmons}, \citenamefont {Laucht},
\citenamefont {Hudson}, \citenamefont {Itoh}, \citenamefont {Morello},\ and\
\citenamefont {Dzurak}}]{Veldhorst2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Veldhorst}}, \bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Yang}},
\bibinfo {author} {\bibfnamefont {J.~C.~C.}\ \bibnamefont {Hwang}}, \bibinfo
{author} {\bibfnamefont {W.}~\bibnamefont {Huang}}, \bibinfo {author}
{\bibfnamefont {J.~P.}\ \bibnamefont {Dehollain}}, \bibinfo {author}
{\bibfnamefont {J.~T.}\ \bibnamefont {Muhonen}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Simmons}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Laucht}}, \bibinfo {author} {\bibfnamefont {F.~E.}\
\bibnamefont {Hudson}}, \bibinfo {author} {\bibfnamefont {K.~M.}\
\bibnamefont {Itoh}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Morello}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont
{Dzurak}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{A two-qubit
logic gate in silicon}},}\ }\href {\doibase 10.1038/nature15263} {\bibfield
{journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {526}},\
\bibinfo {pages} {410} (\bibinfo {year} {2015}{\natexlab{a}})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Watson}\ \emph {et~al.}(2017)\citenamefont {Watson},
\citenamefont {Philips}, \citenamefont {Kawakami}, \citenamefont {Ward},
\citenamefont {Scarlino}, \citenamefont {Veldhorst}, \citenamefont {Savage},
\citenamefont {Lagally}, \citenamefont {Friesen}, \citenamefont
{Coppersmith}, \citenamefont {Eriksson},\ and\ \citenamefont
{Vandersypen}}]{Watson2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~F.}\ \bibnamefont
{Watson}}, \bibinfo {author} {\bibfnamefont {S.~G.~J.}\ \bibnamefont
{Philips}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Kawakami}},
\bibinfo {author} {\bibfnamefont {D.~R.}\ \bibnamefont {Ward}}, \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Scarlino}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Veldhorst}}, \bibinfo {author}
{\bibfnamefont {D.~E.}\ \bibnamefont {Savage}}, \bibinfo {author}
{\bibfnamefont {M.~G.}\ \bibnamefont {Lagally}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Friesen}}, \bibinfo {author} {\bibfnamefont
{S.~N.}\ \bibnamefont {Coppersmith}}, \bibinfo {author} {\bibfnamefont
{M.~A.}\ \bibnamefont {Eriksson}}, \ and\ \bibinfo {author} {\bibfnamefont
{L.~M.~K.}\ \bibnamefont {Vandersypen}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{A programmable two-qubit quantum processor in
silicon}},}\ }\href {https://arxiv.org/abs/1708.03530} {\ (\bibinfo {year}
{2017})},\ \Eprint {http://arxiv.org/abs/1708.04214v1}
{arXiv:1708.04214v1} \BibitemShut {NoStop} \bibitem [{\citenamefont {Zajac}\ \emph {et~al.}(2017)\citenamefont {Zajac},
\citenamefont {Sigillito}, \citenamefont {Russ}, \citenamefont {Borjans},
\citenamefont {Taylor},\ and\ \citenamefont {Burkard}}]{Zajac2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont
{Zajac}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont
{Sigillito}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Russ}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Borjans}}, \bibinfo
{author} {\bibfnamefont {J.~M.}\ \bibnamefont {Taylor}}, \ and\ \bibinfo
{author} {\bibfnamefont {G.}~\bibnamefont {Burkard}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Quantum CNOT Gate for Spins in Silicon}},}\
}\href {https://arxiv.org/abs/1708.04214} {\ (\bibinfo {year} {2017})},\
\Eprint {http://arxiv.org/abs/1708.04214v1} {arXiv:1708.04214v1} \BibitemShut
{NoStop} \bibitem [{\citenamefont {Elzerman}\ \emph {et~al.}(2004)\citenamefont
{Elzerman}, \citenamefont {Hanson}, \citenamefont {{Willems Van Beveren}},
\citenamefont {Witkamp}, \citenamefont {Vandersypen},\ and\ \citenamefont
{Kouwenhoven}}]{Elzerman2004}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M}\ \bibnamefont
{Elzerman}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Hanson}},
\bibinfo {author} {\bibfnamefont {L.~H.}\ \bibnamefont {{Willems Van
Beveren}}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Witkamp}},
\bibinfo {author} {\bibfnamefont {L.~M.~K.}\ \bibnamefont {Vandersypen}}, \
and\ \bibinfo {author} {\bibfnamefont {L.~P.}\ \bibnamefont {Kouwenhoven}},\
}\bibfield {title} {\enquote {\bibinfo {title} {{Single-shot read-out of an
individual electron spin in a quantum dot}},}\ }\href {\doibase
10.1038/nature02747.1.} {\bibfield {journal} {\bibinfo {journal} {Nature}\
}\textbf {\bibinfo {volume} {430}},\ \bibinfo {pages} {431--435} (\bibinfo
{year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Colless}\ \emph {et~al.}(2013)\citenamefont
{Colless}, \citenamefont {Mahoney}, \citenamefont {Hornibrook}, \citenamefont
{Doherty}, \citenamefont {Lu}, \citenamefont {Gossard},\ and\ \citenamefont
{Reilly}}]{Colless2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont
{Colless}}, \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont
{Mahoney}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Hornibrook}}, \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont
{Doherty}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Lu}},
\bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Gossard}}, \ and\
\bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Reilly}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Dispersive Readout of a Few-Electron
Double Quantum Dot with Fast rf Gate Sensors}},}\ }\href {\doibase
10.1103/PhysRevLett.110.046805} {\bibfield {journal} {\bibinfo {journal}
{Physical Review Letters}\ }\textbf {\bibinfo {volume} {110}} (\bibinfo
{year} {2013}),\ 10.1103/PhysRevLett.110.046805}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ono}\ \emph {et~al.}(2002)\citenamefont {Ono},
\citenamefont {Austing}, \citenamefont {Tokura},\ and\ \citenamefont
{Tarucha}}]{Ono2002}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Ono}}, \bibinfo {author} {\bibfnamefont {D.~G.}\ \bibnamefont {Austing}},
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Tokura}}, \ and\ \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Tarucha}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Current Rectification by Pauli Exclusion in a
Weakly Coupled Double Quantum Dot System}},}\ }\href {\doibase DOI:
10.1126/science.1070958} {\bibfield {journal} {\bibinfo {journal}
{Science}\ }\textbf {\bibinfo {volume} {297}},\ \bibinfo {pages} {1313}
(\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Petta}\ \emph {et~al.}(2005)\citenamefont {Petta},
\citenamefont {Johnson}, \citenamefont {Taylor}, \citenamefont {Laird},
\citenamefont {Yacoby}, \citenamefont {Lukin}, \citenamefont {Marcus},
\citenamefont {Hanson},\ and\ \citenamefont {Gossard}}]{Petta2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont
{Petta}}, \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Johnson}},
\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Taylor}}, \bibinfo
{author} {\bibfnamefont {E.~A.}\ \bibnamefont {Laird}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Yacoby}}, \bibinfo {author} {\bibfnamefont
{M.~D.}\ \bibnamefont {Lukin}}, \bibinfo {author} {\bibfnamefont {C.~M.}\
\bibnamefont {Marcus}}, \bibinfo {author} {\bibfnamefont {M.~P.}\
\bibnamefont {Hanson}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~C.}\
\bibnamefont {Gossard}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{Coherent Manipulation of Coupled Electron Spins in Semiconductor Quantum
Dots}},}\ }\href {\doibase 10.1126/science.1116955} {\bibfield {journal}
{\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {309}},\ \bibinfo
{pages} {2180--2184} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jones}\ \emph {et~al.}(2016)\citenamefont {Jones},
\citenamefont {Fogarty}, \citenamefont {Morello}, \citenamefont {Gyure},
\citenamefont {Dzurak},\ and\ \citenamefont {Ladd}}]{Jones2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Jones}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Fogarty}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Morello}}, \bibinfo
{author} {\bibfnamefont {M.~F.}\ \bibnamefont {Gyure}}, \bibinfo {author}
{\bibfnamefont {A.~S.}\ \bibnamefont {Dzurak}}, \ and\ \bibinfo {author}
{\bibfnamefont {T.~D.}\ \bibnamefont {Ladd}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{A logical qubit in a linear array of semiconductor
quantum dots}},}\ }\href@noop {} {\ (\bibinfo {year} {2016})},\ \Eprint
{http://arxiv.org/abs/1608.06335} {arXiv:1608.06335} \BibitemShut {NoStop} \bibitem [{\citenamefont {Veldhorst}\ \emph {et~al.}(2016)\citenamefont
{Veldhorst}, \citenamefont {Eenink}, \citenamefont {Yang},\ and\
\citenamefont {Dzurak}}]{VeldhorstCMOS2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Veldhorst}}, \bibinfo {author} {\bibfnamefont {H.~G.~J.}\ \bibnamefont
{Eenink}}, \bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Yang}}, \
and\ \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {Dzurak}},\
}\bibfield {title} {\enquote {\bibinfo {title} {{Silicon CMOS architecture
for a spin-based quantum computer}},}\ }\href
{http://arxiv.org/abs/1609.09700} {\ (\bibinfo {year} {2016})},\ \Eprint
{http://arxiv.org/abs/1609.09700} {arXiv:1609.09700} \BibitemShut {NoStop} \bibitem [{\citenamefont {Vandersypen}\ \emph {et~al.}(2016)\citenamefont
{Vandersypen}, \citenamefont {Bluhm}, \citenamefont {Clarke}, \citenamefont
{Dzurak}, \citenamefont {Morello}, \citenamefont {Reilly}, \citenamefont
{Schreiber},\ and\ \citenamefont {Veldhorst}}]{Vandersypen2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~M.~K.}\
\bibnamefont {Vandersypen}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Bluhm}}, \bibinfo {author} {\bibfnamefont {J.~S.}\
\bibnamefont {Clarke}}, \bibinfo {author} {\bibfnamefont {A.~S.}\
\bibnamefont {Dzurak}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Morello}}, \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Reilly}},
\bibinfo {author} {\bibfnamefont {L.~R.}\ \bibnamefont {Schreiber}}, \ and\
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Veldhorst}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Interfacing spin qubits in quantum dots
and donors - hot, dense and coherent}},}\ }\href@noop {} {\ (\bibinfo {year}
{2016})},\ \Eprint {http://arxiv.org/abs/1612.05936v1} {arXiv:1612.05936v1}
\BibitemShut {NoStop} \bibitem [{\citenamefont {Petta}\ \emph {et~al.}(2010)\citenamefont {Petta},
\citenamefont {Lu},\ and\ \citenamefont {Gossard}}]{Petta2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont
{Petta}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Lu}}, \ and\
\bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Gossard}},\
}\bibfield {title} {\enquote {\bibinfo {title} {{A coherent beam splitter
for electronic spin states}},}\ }\href {\doibase 10.1126/science.1183628}
{\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo
{volume} {327}},\ \bibinfo {pages} {669--672} (\bibinfo {year}
{2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Maune}\ \emph {et~al.}(2012)\citenamefont {Maune},
\citenamefont {Borselli}, \citenamefont {Huang}, \citenamefont {Ladd},
\citenamefont {Deelman}, \citenamefont {Holabird}, \citenamefont {Kiselev},
\citenamefont {Alvarado-Rodriguez}, \citenamefont {Ross}, \citenamefont
{Schmitz}, \citenamefont {Sokolich}, \citenamefont {Watson}, \citenamefont
{Gyure},\ and\ \citenamefont {Hunter}}]{Maune2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont
{Maune}}, \bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont {Borselli}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Huang}}, \bibinfo
{author} {\bibfnamefont {T.~D.}\ \bibnamefont {Ladd}}, \bibinfo {author}
{\bibfnamefont {P.~W.}\ \bibnamefont {Deelman}}, \bibinfo {author}
{\bibfnamefont {K.~S.}\ \bibnamefont {Holabird}}, \bibinfo {author}
{\bibfnamefont {A.~A.}\ \bibnamefont {Kiselev}}, \bibinfo {author}
{\bibfnamefont {I.}~\bibnamefont {Alvarado-Rodriguez}}, \bibinfo {author}
{\bibfnamefont {R.~S.}\ \bibnamefont {Ross}}, \bibinfo {author}
{\bibfnamefont {A.~E.}\ \bibnamefont {Schmitz}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Sokolich}}, \bibinfo {author}
{\bibfnamefont {C.~A.}\ \bibnamefont {Watson}}, \bibinfo {author}
{\bibfnamefont {M.~F.}\ \bibnamefont {Gyure}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~T.}\ \bibnamefont {Hunter}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Coherent singlet-triplet oscillations in a
silicon-based double quantum dot}},}\ }\href {\doibase 10.1038/nature10707}
{\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {481}},\ \bibinfo {pages} {344--347} (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Eng}\ \emph {et~al.}(2015)\citenamefont {Eng},
\citenamefont {Ladd}, \citenamefont {Smith}, \citenamefont {Borselli},
\citenamefont {Kiselev}, \citenamefont {Fong}, \citenamefont {Holabird},
\citenamefont {Hazard}, \citenamefont {Huang}, \citenamefont {Deelman},
\citenamefont {Milosavljevic}, \citenamefont {Schmitz}, \citenamefont {Ross},
\citenamefont {Gyure},\ and\ \citenamefont {Hunter}}]{Eng2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Eng}}, \bibinfo {author} {\bibfnamefont {T.~D.}\ \bibnamefont {Ladd}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Smith}}, \bibinfo
{author} {\bibfnamefont {M.~G.}\ \bibnamefont {Borselli}}, \bibinfo {author}
{\bibfnamefont {A.~A.}\ \bibnamefont {Kiselev}}, \bibinfo {author}
{\bibfnamefont {B.~H.}\ \bibnamefont {Fong}}, \bibinfo {author}
{\bibfnamefont {K.~S.}\ \bibnamefont {Holabird}}, \bibinfo {author}
{\bibfnamefont {T.~M.}\ \bibnamefont {Hazard}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont
{P.~W.}\ \bibnamefont {Deelman}}, \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Milosavljevic}}, \bibinfo {author} {\bibfnamefont {A.~E.}\
\bibnamefont {Schmitz}}, \bibinfo {author} {\bibfnamefont {R.~S.}\
\bibnamefont {Ross}}, \bibinfo {author} {\bibfnamefont {M.~F.}\ \bibnamefont
{Gyure}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~T.}\ \bibnamefont
{Hunter}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Isotopically
enhanced triple-quantum-dot qubit}},}\ }\href {\doibase
10.1126/sciadv.1500214} {\bibfield {journal} {\bibinfo {journal} {Science
Adv.}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {e1500214}
(\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wu}\ \emph {et~al.}(2014)\citenamefont {Wu},
\citenamefont {Ward}, \citenamefont {Prance}, \citenamefont {Kim},
\citenamefont {Gamble}, \citenamefont {Mohr}, \citenamefont {Shi},
\citenamefont {Savage}, \citenamefont {Lagally}, \citenamefont {Friesen},
\citenamefont {Coppersmith},\ and\ \citenamefont {Eriksson}}]{Wu2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont
{Wu}}, \bibinfo {author} {\bibfnamefont {D.~R.}\ \bibnamefont {Ward}},
\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Prance}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Kim}}, \bibinfo {author}
{\bibfnamefont {J.~K.}\ \bibnamefont {Gamble}}, \bibinfo {author}
{\bibfnamefont {R.~T.}\ \bibnamefont {Mohr}}, \bibinfo {author}
{\bibfnamefont {Z.}~\bibnamefont {Shi}}, \bibinfo {author} {\bibfnamefont
{D.~E.}\ \bibnamefont {Savage}}, \bibinfo {author} {\bibfnamefont {M.~G.}\
\bibnamefont {Lagally}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Friesen}}, \bibinfo {author} {\bibfnamefont {S.~N.}\ \bibnamefont
{Coppersmith}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Eriksson}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Two-axis
control of a singlet-triplet qubit with an integrated micromagnet}},}\ }\href
{\doibase 10.1073/pnas.1412230111} {\bibfield {journal} {\bibinfo {journal}
{Proceedings of the National Academy of Sciences}\ }\textbf {\bibinfo
{volume} {111}},\ \bibinfo {pages} {11938--11942} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Harvey-Collard}\ \emph {et~al.}(2017)\citenamefont
{Harvey-Collard}, \citenamefont {D'Anjou}, \citenamefont {Rudolph},
\citenamefont {Jacobson}, \citenamefont {Dominguez}, \citenamefont {{Ten
Eyck}}, \citenamefont {Wendt}, \citenamefont {Pluym}, \citenamefont {Lilly},
\citenamefont {Coish}, \citenamefont {Pioro-Ladri{\`{e}}re},\ and\
\citenamefont {Carroll}}]{Harvey-Collard2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Harvey-Collard}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{D'Anjou}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Rudolph}},
\bibinfo {author} {\bibfnamefont {N.~T.}\ \bibnamefont {Jacobson}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Dominguez}}, \bibinfo {author}
{\bibfnamefont {G.~A.}\ \bibnamefont {{Ten Eyck}}}, \bibinfo {author}
{\bibfnamefont {J.~R.}\ \bibnamefont {Wendt}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Pluym}}, \bibinfo {author} {\bibfnamefont
{M.~P.}\ \bibnamefont {Lilly}}, \bibinfo {author} {\bibfnamefont {W.~A.}\
\bibnamefont {Coish}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Pioro-Ladri{\`{e}}re}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~S.}\
\bibnamefont {Carroll}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{High-fidelity single-shot readout for a spin qubit via an enhanced latching
mechanism}},}\ }\href {http://arxiv.org/abs/1703.02651} {\ (\bibinfo {year}
{2017})},\ \Eprint {http://arxiv.org/abs/1703.02651} {arXiv:1703.02651}
\BibitemShut {NoStop} \bibitem [{\citenamefont {Jock}\ \emph {et~al.}(2017)\citenamefont {Jock},
\citenamefont {Jacobson}, \citenamefont {Harvey-collard}, \citenamefont
{Mounce}, \citenamefont {Ward}, \citenamefont {Anderson}, \citenamefont
{Manginell}, \citenamefont {Wendt}, \citenamefont {Rudolph}, \citenamefont
{Gamble}, \citenamefont {Baczewski}, \citenamefont {Witzel},\ and\
\citenamefont {Carroll}}]{Jock2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~M.}\ \bibnamefont
{Jock}}, \bibinfo {author} {\bibfnamefont {N.~T.}\ \bibnamefont {Jacobson}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Harvey-collard}},
\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Mounce}}, \bibinfo
{author} {\bibfnamefont {D.~R.}\ \bibnamefont {Ward}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Anderson}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Manginell}}, \bibinfo {author}
{\bibfnamefont {J.~R.}\ \bibnamefont {Wendt}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Rudolph}}, \bibinfo {author} {\bibfnamefont
{J.~K.}\ \bibnamefont {Gamble}}, \bibinfo {author} {\bibfnamefont {A.~D.}\
\bibnamefont {Baczewski}}, \bibinfo {author} {\bibfnamefont {W.~M.}\
\bibnamefont {Witzel}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~S.}\
\bibnamefont {Carroll}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{Probing low noise at the MOS interface with a spin-orbit qubit}},}\
}\href@noop {} {\ (\bibinfo {year} {2017})},\ \Eprint
{http://arxiv.org/abs/1707.04357v1} {arXiv:1707.04357v1} \BibitemShut
{NoStop} \bibitem [{\citenamefont {Nakajima}\ \emph {et~al.}(2017)\citenamefont
{Nakajima}, \citenamefont {Delbecq}, \citenamefont {Otsuka}, \citenamefont
{Stano}, \citenamefont {Amaha}, \citenamefont {Yoneda}, \citenamefont
{Noiri}, \citenamefont {Kawasaki}, \citenamefont {Takeda}, \citenamefont
{Allison}, \citenamefont {Ludwig}, \citenamefont {Wieck}, \citenamefont
{Loss},\ and\ \citenamefont {Tarucha}}]{Nakajima2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Nakajima}}, \bibinfo {author} {\bibfnamefont {M.~R.}\ \bibnamefont
{Delbecq}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Otsuka}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Stano}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Amaha}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Yoneda}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Noiri}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Kawasaki}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Takeda}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Allison}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Ludwig}}, \bibinfo {author} {\bibfnamefont {A.~D.}\
\bibnamefont {Wieck}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Loss}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Tarucha}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Robust
single-shot measurement of spin correlations using a metastable charge state
in a quantum dot array}},}\ }\href {\doibase 10.1103/PhysRevLett.119.017701}
{\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\
}\textbf {\bibinfo {volume} {119}} (\bibinfo {year} {2017}),\
10.1103/PhysRevLett.119.017701}\BibitemShut {NoStop} \bibitem [{\citenamefont {Broome}\ \emph {et~al.}(2017)\citenamefont {Broome},
\citenamefont {Watson}, \citenamefont {Keith}, \citenamefont {Gorman},
\citenamefont {House}, \citenamefont {Keizer}, \citenamefont {Hile},
\citenamefont {Baker},\ and\ \citenamefont {Simmons}}]{Broome2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Broome}}, \bibinfo {author} {\bibfnamefont {T.~F.}\ \bibnamefont {Watson}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Keith}}, \bibinfo
{author} {\bibfnamefont {S.~K.}\ \bibnamefont {Gorman}}, \bibinfo {author}
{\bibfnamefont {M.~G.}\ \bibnamefont {House}}, \bibinfo {author}
{\bibfnamefont {J.~G.}\ \bibnamefont {Keizer}}, \bibinfo {author}
{\bibfnamefont {S.~J.}\ \bibnamefont {Hile}}, \bibinfo {author}
{\bibfnamefont {W.}~\bibnamefont {Baker}}, \ and\ \bibinfo {author}
{\bibfnamefont {M.~Y.}\ \bibnamefont {Simmons}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{High-Fidelity Single-Shot Singlet-Triplet
Readout of Precision-Placed Donors in Silicon}},}\ }\href {\doibase
10.1103/PhysRevLett.119.046802} {\bibfield {journal} {\bibinfo {journal}
{Physical Review Letters}\ }\textbf {\bibinfo {volume} {119}} (\bibinfo
{year} {2017}),\ 10.1103/PhysRevLett.119.046802}\BibitemShut {NoStop} \bibitem [{\citenamefont {Angus}\ \emph {et~al.}(2007)\citenamefont {Angus},
\citenamefont {Ferguson}, \citenamefont {Dzurak},\ and\ \citenamefont
{Clark}}]{Angus2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont
{Angus}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Ferguson}},
\bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {Dzurak}}, \ and\
\bibinfo {author} {\bibfnamefont {R.~G.}\ \bibnamefont {Clark}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Gate-defined quantum dots in intrinsic
silicon}},}\ }\href {\doibase 10.1021/nl070949k} {\bibfield {journal}
{\bibinfo {journal} {Nano Letters}\ }\textbf {\bibinfo {volume} {7}},\
\bibinfo {pages} {2051--2055} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zwanenburg}\ \emph {et~al.}(2013)\citenamefont
{Zwanenburg}, \citenamefont {Dzurak}, \citenamefont {Morello}, \citenamefont
{Simmons}, \citenamefont {Hollenberg}, \citenamefont {Klimeck}, \citenamefont
{Rogge}, \citenamefont {Coppersmith},\ and\ \citenamefont
{Eriksson}}]{Zwanenburg2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont
{Zwanenburg}}, \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont
{Dzurak}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Morello}},
\bibinfo {author} {\bibfnamefont {M.~Y.}\ \bibnamefont {Simmons}}, \bibinfo
{author} {\bibfnamefont {L.~C.~L.}\ \bibnamefont {Hollenberg}}, \bibinfo
{author} {\bibfnamefont {G.}~\bibnamefont {Klimeck}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Rogge}}, \bibinfo {author} {\bibfnamefont
{S.~N.}\ \bibnamefont {Coppersmith}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.~A.}\ \bibnamefont {Eriksson}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{Silicon quantum electronics}},}\ }\href {\doibase
10.1103/RevModPhys.85.961} {\bibfield {journal} {\bibinfo {journal}
{Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo
{pages} {961--1019} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Johnson}\ \emph {et~al.}(2005)\citenamefont
{Johnson}, \citenamefont {Petta}, \citenamefont {Taylor}, \citenamefont
{Yacoby}, \citenamefont {Lukin}, \citenamefont {Marcus}, \citenamefont
{Hanson},\ and\ \citenamefont {Gossard}}]{Johnson_2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont
{Johnson}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Petta}},
\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Taylor}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Yacoby}}, \bibinfo {author}
{\bibfnamefont {M.~D.}\ \bibnamefont {Lukin}}, \bibinfo {author}
{\bibfnamefont {C.~M.}\ \bibnamefont {Marcus}}, \bibinfo {author}
{\bibfnamefont {M.~P.}\ \bibnamefont {Hanson}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~C.}\ \bibnamefont {Gossard}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Triplet - singlet spin relaxation via nuclei in
a double quantum dot}},}\ }\href {\doibase 10.1038/nature03815} {\bibfield
{journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {435}},\
\bibinfo {pages} {925--928} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2008)\citenamefont {Liu},
\citenamefont {Fujisawa}, \citenamefont {Ono}, \citenamefont {Inokawa},
\citenamefont {Fujiwara}, \citenamefont {Takashina},\ and\ \citenamefont
{Hirayama}}]{Liu2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~W.}\ \bibnamefont
{Liu}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Fujisawa}},
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ono}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Inokawa}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Fujiwara}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Takashina}}, \ and\ \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Hirayama}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{Pauli-spin-blockade transport through a silicon double quantum
dot}},}\ }\href {\doibase 10.1103/PhysRevB.77.073310} {\bibfield {journal}
{\bibinfo {journal} {Physical Review B}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {073310}
(\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shaji}\ \emph {et~al.}(2008)\citenamefont {Shaji},
\citenamefont {Simmons}, \citenamefont {Thalakulam}, \citenamefont {Klein},
\citenamefont {Qin}, \citenamefont {Luo}, \citenamefont {Savage},
\citenamefont {Lagally}, \citenamefont {Rimberg}, \citenamefont {Joynt},
\citenamefont {Friesen}, \citenamefont {Blick}, \citenamefont {Coppersmith},\
and\ \citenamefont {Eriksson}}]{Shaji2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Shaji}}, \bibinfo {author} {\bibfnamefont {C.~B.}\ \bibnamefont {Simmons}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Thalakulam}}, \bibinfo
{author} {\bibfnamefont {L.~J.}\ \bibnamefont {Klein}}, \bibinfo {author}
{\bibfnamefont {H.~U.~A.}\ \bibnamefont {Qin}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Luo}}, \bibinfo {author} {\bibfnamefont
{D.~E.}\ \bibnamefont {Savage}}, \bibinfo {author} {\bibfnamefont {M.~G.}\
\bibnamefont {Lagally}}, \bibinfo {author} {\bibfnamefont {A.~J.}\
\bibnamefont {Rimberg}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Joynt}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Friesen}},
\bibinfo {author} {\bibfnamefont {R.~H.}\ \bibnamefont {Blick}}, \bibinfo
{author} {\bibfnamefont {S.~N.}\ \bibnamefont {Coppersmith}}, \ and\ \bibinfo
{author} {\bibfnamefont {M.~A.}\ \bibnamefont {Eriksson}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Spin blockade and lifetime-enhanced
transport in a few-electron Si/SiGe double quantum dot}},}\ }\href {\doibase
10.1038/nphys988} {\bibfield {journal} {\bibinfo {journal} {Nature
Physics}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {2--6}
(\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lai}\ \emph {et~al.}(2011)\citenamefont {Lai},
\citenamefont {Lim}, \citenamefont {Yang}, \citenamefont {Zwanenburg},
\citenamefont {Coish}, \citenamefont {Qassemi}, \citenamefont {Morello},\
and\ \citenamefont {Dzurak}}]{Lai2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~S.}\ \bibnamefont
{Lai}}, \bibinfo {author} {\bibfnamefont {W.~H.}\ \bibnamefont {Lim}},
\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Yang}}, \bibinfo
{author} {\bibfnamefont {F.~A.}\ \bibnamefont {Zwanenburg}}, \bibinfo
{author} {\bibfnamefont {W.~A.}\ \bibnamefont {Coish}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Qassemi}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Morello}}, \ and\ \bibinfo {author} {\bibfnamefont
{A.~S.}\ \bibnamefont {Dzurak}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{Pauli spin blockade in a highly tunable silicon double quantum
dot.}}}\ }\href {\doibase 10.1038/srep00110} {\bibfield {journal} {\bibinfo
{journal} {Scientific reports}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo
{pages} {110} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zajac}\ \emph {et~al.}(2016)\citenamefont {Zajac},
\citenamefont {Hazard}, \citenamefont {Mi}, \citenamefont {Nielsen},\ and\
\citenamefont {Petta}}]{Zajac2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont
{Zajac}}, \bibinfo {author} {\bibfnamefont {T.~M.}\ \bibnamefont {Hazard}},
\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Mi}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Nielsen}}, \ and\ \bibinfo {author}
{\bibfnamefont {J.~R.}\ \bibnamefont {Petta}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{Scalable Gate Architecture for a One-Dimensional Array of
Semiconductor Spin Qubits}},}\ }\href {\doibase
10.1103/PhysRevApplied.6.054013} {\bibfield {journal} {\bibinfo {journal}
{Physical Review Applied}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo
{pages} {1--8} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yang}\ \emph {et~al.}(2014)\citenamefont {Yang},
\citenamefont {Rossi}, \citenamefont {Lai}, \citenamefont {Leon},
\citenamefont {Lim},\ and\ \citenamefont {Dzurak}}]{Yang2014CHarge}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont
{Yang}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Rossi}},
\bibinfo {author} {\bibfnamefont {N.~S.}\ \bibnamefont {Lai}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Leon}}, \bibinfo {author}
{\bibfnamefont {W.~H.}\ \bibnamefont {Lim}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~S.}\ \bibnamefont {Dzurak}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Charge state hysteresis in semiconductor
quantum dots}},}\ }\href {\doibase 10.1063/1.4901218} {\bibfield {journal}
{\bibinfo {journal} {Applied Physics Letters}\ }\textbf {\bibinfo {volume}
{105}},\ \bibinfo {pages} {1--5} (\bibinfo {year} {2014})},\ \Eprint
{http://arxiv.org/abs/1407.1625} {arXiv:1407.1625} \BibitemShut {NoStop} \bibitem [{\citenamefont {Studenikin}\ \emph {et~al.}(2012)\citenamefont
{Studenikin}, \citenamefont {Thorgrimson}, \citenamefont {Aers},
\citenamefont {Kam}, \citenamefont {Zawadzki}, \citenamefont {Wasilewski},
\citenamefont {Bogan},\ and\ \citenamefont {Sachrajda}}]{Studenikin2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont
{Studenikin}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Thorgrimson}}, \bibinfo {author} {\bibfnamefont {G.~C.}\ \bibnamefont
{Aers}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kam}}, \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Zawadzki}}, \bibinfo {author}
{\bibfnamefont {Z.~R.}\ \bibnamefont {Wasilewski}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Bogan}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~S.}\ \bibnamefont {Sachrajda}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Enhanced charge detection of spin qubit readout
via an intermediate state}},}\ }\href {\doibase 10.1063/1.4749281} {\bibfield
{journal} {\bibinfo {journal} {Applied Physics Letters}\ }\textbf {\bibinfo
{volume} {101}},\ \bibinfo {pages} {1--4} (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yang}\ \emph {et~al.}(2012)\citenamefont {Yang},
\citenamefont {Lim}, \citenamefont {Lai}, \citenamefont {Rossi},
\citenamefont {Morello},\ and\ \citenamefont {Dzurak}}]{Yang2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont
{Yang}}, \bibinfo {author} {\bibfnamefont {W.~H.}\ \bibnamefont {Lim}},
\bibinfo {author} {\bibfnamefont {N.~S.}\ \bibnamefont {Lai}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Rossi}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Morello}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~S.}\ \bibnamefont {Dzurak}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Orbital and valley state spectra of a
few-electron silicon quantum dot}},}\ }\href {\doibase
10.1103/PhysRevB.86.115319} {\bibfield {journal} {\bibinfo {journal}
{Physical Review B}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages}
{1--5} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Culcer}\ \emph {et~al.}(2009)\citenamefont {Culcer},
\citenamefont {Cywi{\'{n}}ski}, \citenamefont {Li}, \citenamefont {Hu},\ and\
\citenamefont {{Das Sarma}}}]{Culcer2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Culcer}}, \bibinfo {author} {\bibfnamefont {{\L}.}~\bibnamefont
{Cywi{\'{n}}ski}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Li}},
\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Hu}}, \ and\ \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {{Das Sarma}}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Realizing singlet-triplet qubits in
multivalley Si quantum dots}},}\ }\href {\doibase 10.1103/PhysRevB.80.205302}
{\bibfield {journal} {\bibinfo {journal} {Physical Review B}\ }\textbf
{\bibinfo {volume} {80}},\ \bibinfo {pages} {1--6} (\bibinfo {year}
{2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Culcer}\ \emph {et~al.}(2010)\citenamefont {Culcer},
\citenamefont {Cywi{\'{n}}ski}, \citenamefont {Li}, \citenamefont {Hu},\ and\
\citenamefont {{Das Sarma}}}]{Culcer2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Culcer}}, \bibinfo {author} {\bibfnamefont {{\L}.}~\bibnamefont
{Cywi{\'{n}}ski}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Li}},
\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Hu}}, \ and\ \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {{Das Sarma}}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Quantum dot spin qubits in silicon:
Multivalley physics}},}\ }\href {\doibase 10.1103/PhysRevB.82.155312}
{\bibfield {journal} {\bibinfo {journal} {Physical Review B}\ }\textbf
{\bibinfo {volume} {82}} (\bibinfo {year} {2010}),\
10.1103/PhysRevB.82.155312}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shevchenko}\ \emph {et~al.}(2010)\citenamefont
{Shevchenko}, \citenamefont {Ashhab},\ and\ \citenamefont
{Nori}}]{Shevchenko2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~N.}\ \bibnamefont
{Shevchenko}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ashhab}},
\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\
}\bibfield {title} {\enquote {\bibinfo {title}
{{Landau--Zener--St{\"{u}}ckelberg interferometry}},}\ }\href {\doibase
10.1016/j.physrep.2010.03.002} {\bibfield {journal} {\bibinfo {journal}
{Physics Reports}\ }\textbf {\bibinfo {volume} {492}},\ \bibinfo {pages}
{1--30} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Veldhorst}\ \emph
{et~al.}(2015{\natexlab{b}})\citenamefont {Veldhorst}, \citenamefont
{Ruskov}, \citenamefont {Yang}, \citenamefont {Hwang}, \citenamefont
{Hudson}, \citenamefont {Flatt{\'e}}, \citenamefont {Tahan},
\citenamefont {Itoh}, \citenamefont {Morello},\ and\ \citenamefont
{Dzurak}}]{Veldhorst2015a}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Veldhorst}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ruskov}},
\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Yang}}, \bibinfo
{author} {\bibfnamefont {J.~C.~C.}\ \bibnamefont {Hwang}}, \bibinfo {author}
{\bibfnamefont {F.~E.}\ \bibnamefont {Hudson}}, \bibinfo {author}
{\bibfnamefont {M.~E.}\ \bibnamefont {Flatt{\'e}}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Tahan}}, \bibinfo {author}
{\bibfnamefont {K.~M.}\ \bibnamefont {Itoh}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Morello}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~S.}\ \bibnamefont {Dzurak}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Spin-orbit coupling and operation of
multivalley spin qubits}},}\ }\href {\doibase 10.1103/PhysRevB.92.201401}
{\bibfield {journal} {\bibinfo {journal} {Physical Review B - Condensed
Matter and Materials Physics}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo
{pages} {7--11} (\bibinfo {year} {2015}{\natexlab{b}})},\ \Eprint
{http://arxiv.org/abs/1505.01213v1} {arXiv:1505.01213v1}
\BibitemShut {NoStop} \bibitem [{\citenamefont {Ferdous}\ \emph {et~al.}(2017)\citenamefont
{Ferdous}, \citenamefont {Chan}, \citenamefont {Veldhorst}, \citenamefont
{Hwang}, \citenamefont {Yang}, \citenamefont {Klimeck}, \citenamefont
{Morello}, \citenamefont {Dzurak},\ and\ \citenamefont
{Rahman}}]{Ferdous2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Ferdous}}, \bibinfo {author} {\bibfnamefont {K.~W.}\ \bibnamefont {Chan}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Veldhorst}}, \bibinfo
{author} {\bibfnamefont {J.~C.~C.}\ \bibnamefont {Hwang}}, \bibinfo {author}
{\bibfnamefont {C.~H.}\ \bibnamefont {Yang}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Klimeck}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Morello}}, \bibinfo {author} {\bibfnamefont {A.~S.}\
\bibnamefont {Dzurak}}, \ and\ \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Rahman}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{Interface induced spin-orbit interaction in silicon quantum dots and
prospects of scalability}},}\ }\href {http://arxiv.org/abs/1703.03840} {\
(\bibinfo {year} {2017})},\ \Eprint {http://arxiv.org/abs/1703.03840}
{arXiv:1703.03840} \BibitemShut {NoStop} \bibitem [{\citenamefont {Underwood}(2017)}]{underwood2017coherent}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Devin}\ \bibnamefont
{Underwood}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Coherent
oscillations in silicon double quantum dots due to Meissner-screened magnetic
field gradients},}\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Bulletin of the American Physical Society}\ }\textbf {\bibinfo {volume}
{62}} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Taylor}\ \emph {et~al.}(2007)\citenamefont {Taylor},
\citenamefont {Petta}, \citenamefont {Johnson}, \citenamefont {Yacoby},
\citenamefont {Marcus},\ and\ \citenamefont {Lukin}}]{Taylor2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Taylor}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Petta}},
\bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Johnson}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Yacoby}}, \bibinfo {author}
{\bibfnamefont {C.~M.}\ \bibnamefont {Marcus}}, \ and\ \bibinfo {author}
{\bibfnamefont {M.~D.}\ \bibnamefont {Lukin}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{Relaxation, dephasing, and quantum control of electron
spins in double quantum dots}},}\ }\href {\doibase
10.1103/PhysRevB.76.035315} {\bibfield {journal} {\bibinfo {journal}
{Physical Review B}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages}
{035315} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Itoh}\ and\ \citenamefont
{Watanabe}(2014)}]{Ito2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont
{Itoh}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Watanabe}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Isotope
engineering of silicon and diamond for quantum computing and sensing
applications}},}\ }\href {\doibase 10.1557/mrc.2014.32} {\bibfield {journal}
{\bibinfo {journal} {Materials Research Society}\ }\textbf {\bibinfo
{volume} {4}},\ \bibinfo {pages} {143--157} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nichol}\ \emph {et~al.}(2015)\citenamefont {Nichol},
\citenamefont {Harvey}, \citenamefont {Shulman}, \citenamefont {Pal},
\citenamefont {Umansky}, \citenamefont {Rashba}, \citenamefont {Halperin},\
and\ \citenamefont {Yacoby}}]{Nichol2015quenching}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {John~M}\ \bibnamefont
{Nichol}}, \bibinfo {author} {\bibfnamefont {Shannon~P}\ \bibnamefont
{Harvey}}, \bibinfo {author} {\bibfnamefont {Michael~D}\ \bibnamefont
{Shulman}}, \bibinfo {author} {\bibfnamefont {Arijeet}\ \bibnamefont {Pal}},
\bibinfo {author} {\bibfnamefont {Vladimir}\ \bibnamefont {Umansky}},
\bibinfo {author} {\bibfnamefont {Emmanuel~I}\ \bibnamefont {Rashba}},
\bibinfo {author} {\bibfnamefont {Bertrand~I}\ \bibnamefont {Halperin}}, \
and\ \bibinfo {author} {\bibfnamefont {Amir}\ \bibnamefont {Yacoby}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Quenching of dynamic nuclear
polarization by spin-orbit coupling in {GaAs} quantum dots},}\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Nature communications}\ }\textbf
{\bibinfo {volume} {6}} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Reilly}\ \emph {et~al.}(2008)\citenamefont {Reilly},
\citenamefont {Taylor}, \citenamefont {Laird}, \citenamefont {Petta},
\citenamefont {Marcus}, \citenamefont {Hanson},\ and\ \citenamefont
{Gossard}}]{Reilly2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont
{Reilly}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Taylor}},
\bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Laird}}, \bibinfo
{author} {\bibfnamefont {J.~R.}\ \bibnamefont {Petta}}, \bibinfo {author}
{\bibfnamefont {C.~M.}\ \bibnamefont {Marcus}}, \bibinfo {author}
{\bibfnamefont {M.~P.}\ \bibnamefont {Hanson}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~C.}\ \bibnamefont {Gossard}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Measurement of temporal correlations of the
Overhauser field in a double quantum dot}},}\ }\href {\doibase
10.1103/PhysRevLett.101.236803} {\bibfield {journal} {\bibinfo {journal}
{Physical Review Letters}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo
{pages} {1--4} (\bibinfo {year} {2008})},\ \Eprint
{http://arxiv.org/abs/0712.4033} {arXiv:0712.4033} \BibitemShut {NoStop} \bibitem [{\citenamefont {Dehollain}\ \emph {et~al.}(2012)\citenamefont
{Dehollain}, \citenamefont {Pla}, \citenamefont {Siew}, \citenamefont {Tan},
\citenamefont {Dzurak},\ and\ \citenamefont {Morello}}]{Dehollain2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont
{Dehollain}}, \bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Pla}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Siew}}, \bibinfo {author}
{\bibfnamefont {K.~Y.}\ \bibnamefont {Tan}}, \bibinfo {author} {\bibfnamefont
{A.~S.}\ \bibnamefont {Dzurak}}, \ and\ \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Morello}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Nanoscale broadband transmission lines for spin qubit control},}\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nanotechnology}\
}\textbf {\bibinfo {volume} {24}},\ \bibinfo {pages} {015202} (\bibinfo
{year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Reed}\ \emph {et~al.}(2016)\citenamefont {Reed},
\citenamefont {Maune}, \citenamefont {Andrews}, \citenamefont {Borselli},
\citenamefont {Eng}, \citenamefont {Jura}, \citenamefont {Kiselev},
\citenamefont {Schmitz}, \citenamefont {Smith}, \citenamefont {Wright},
\citenamefont {Gyure},\ and\ \citenamefont {Hunter}}]{Reed2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont
{Reed}}, \bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont {Maune}},
\bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont {Andrews}}, \bibinfo
{author} {\bibfnamefont {M.~G.}\ \bibnamefont {Borselli}}, \bibinfo {author}
{\bibfnamefont {K.}~\bibnamefont {Eng}}, \bibinfo {author} {\bibfnamefont
{M.~P.}\ \bibnamefont {Jura}}, \bibinfo {author} {\bibfnamefont {A.~A.}\
\bibnamefont {Kiselev}}, \bibinfo {author} {\bibfnamefont {A.~E.}\
\bibnamefont {Schmitz}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Smith}}, \bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Wright}},
\bibinfo {author} {\bibfnamefont {M.~F.}\ \bibnamefont {Gyure}}, \ and\
\bibinfo {author} {\bibfnamefont {A.~T.}\ \bibnamefont {Hunter}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Reduced Sensitivity to Charge Noise in
Semiconductor Spin Qubits via Symmetric Operation}},}\ }\href {\doibase
10.1103/PhysRevLett.116.110402} {\bibfield {journal} {\bibinfo {journal}
{Physical Review Letters}\ }\textbf {\bibinfo {volume} {116}} (\bibinfo
{year} {2016}),\ 10.1103/PhysRevLett.116.110402}\BibitemShut {NoStop} \end{thebibliography}
\end{document}
\begin{document}
\renewcommand{\thefootnote}{\fnsymbol{footnote}} \author{{\sc Moritz Otto\footnotemark[1]}} \footnotetext[1]{Department of Mathematics,
Aarhus University, Aarhus, Denmark, otto@math.au.dk}
\date{}
\title{Poisson approximation of Poisson-driven point\\ processes and extreme values in stochastic geometry}
\maketitle
\begin{abstract}
\noindent We study point processes that consist of certain centers of point tuples of an underlying Poisson process. Such processes arise in stochastic geometry in the study of exceedances of various functionals describing geometric properties of the Poisson process. We use a coupling of the point process with its Palm version to prove a general Poisson limit theorem. We then combine our general result with the theory of asymptotic shapes of large cells (Kendall's problem) in random mosaics and prove Poisson limit theorems for large cells (with respect to a general size functional) in the Poisson-Voronoi and -Delaunay mosaic. As a consequence, we establish Gumbel limits for the asymptotic distribution of concrete size functionals and specify the rate of convergence. This extends extreme value results from Calka and Chenavier (2014) and Chenavier (2014).
\\
{\bf Keywords}. {Chen-Stein method, Delaunay mosaic, Gumbel distribution, Kendall's problem, maximum cell, Palm distribution, point process approximation, Poisson process, stopping set, total variation distance, Voronoi mosaic}\\
{\bf MSC}. 60G55, 60F17, 60D05. \end{abstract}
\section{Introduction}\label{s1} Point processes and random mosaics are fundamental objects in modern probability and find many applications, both in theory and practice. Often one is interested in certain geometric features of the configuration (e.g. the distance of a point to its nearest neighbor or the volume of a cell in the mosaic). One way to study these properties is to thin the process to the subprocess consisting of all points with a certain property (e.g. a large distance to the nearest neighbor). More generally, one can define a point process of certain centers of point tuples. In this article we study such processes in the situation where the underlying process is Poisson.
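As a toy illustration of this kind of thinning (a numerical sketch with our own function names, not code from the paper): retain exactly the points of a simulated Poisson process whose nearest-neighbor distance exceeds a level $v$.

```python
import numpy as np

def exceedance_thinning(points, v):
    # Keep the points x of eta with d(x, eta - delta_x) > v, i.e.
    # thin eta to the subprocess of points whose nearest neighbor
    # lies at distance more than v.
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
    return pts[d.min(axis=1) > v]

rng = np.random.default_rng(0)
eta = rng.random((rng.poisson(100), 2))  # Poisson(100) process on [0,1]^2
xi = exceedance_thinning(eta, 0.1)       # exceedance subprocess
```

For small $v$ the retained configuration is sparse and weakly dependent, which is the intuition behind the Poisson limits studied below.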
Let $\eta$ be a Poisson process on the locally compact second countable Hausdorff space $(\mathbb{X},\mathcal{X})$. We consider $\eta$ as a random object in the space $\mathbf{N}$ of locally finite counting measures on $\mathbb X$ equipped with its standard $\sigma$-field $\mathcal{N}$ (see Section \ref{s2} for its definition). For $m\in \{1,\dots,d+1\}$, let $g:\mathbb{X}^m\times\mathbf{N}\to \{0,1\}$ be measurable and symmetric in the first $m$ coordinates and let $z:\mathbb{X}^m \to \mathbb{X}$ be a measurable and symmetric function. For instance, if the intensity measure of $\eta$ is absolutely continuous (such that all $\mathbf{x}\in \eta^{(m)}$ are a.s. in general position), we can choose $z(\mathbf{x})$ as the center of the unique $(m-2)$-sphere through $x_1,\dots,x_m$. We denote the Dirac measure in a point $z\in \mathbb X$ by $\delta_z$ and consider the point process \begin{align}
\xi:=\frac{1}{m!} \sum\limits_{\mathbf{x}\in \eta^{(m)}} g(\mathbf{x},\eta)\,\delta_{z(\mathbf{x})}. \label{inxidef} \end{align} Here, the function $g$ can be understood as a selection mechanism that decides if $z(\mathbf x)$ is considered or not. We will require that $g(\mathbf{x},\mu)$ depends only locally on the configuration $\mu$ around $z(\mathbf{x})$ and formalize this concept in Definition \ref{defstab}. In many applications, there is some compact $W \subset {\mathbb R}^d$ and an underlying function $f:\mathbb{X}^m \times \mathbf{N}\to {\mathbb R}$ that measures the size of a geometrically defined object associated to $\mathbf{x}$ (e.g. its Voronoi or Delaunay cell with respect to $\mu$) and $g(\mathbf{x},\mu):=\mathbf{1}\{z(\mathbf{x}) \in W\}\mathbf{1}\{f(\mathbf{x},\mu)>v\}$ is chosen as the indicator that encodes whether $z(\mathbf{x}) \in W$ and whether $f(\mathbf{x},\mu)$ exceeds a certain value $v \in {\mathbb R}$ or not.
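A hedged numerical sketch (illustrative only; all function names are ours) of the construction \eqref{inxidef} in the special case $m=d+1$, with $z(\mathbf x)$ the circumcenter and $g$ the indicator that the center lies in $W=[0,1]^2$ and the circumradius exceeds $v=0.2$:

```python
from itertools import combinations

import numpy as np

def circumcenter(x):
    # z(x): center of the sphere through m = d+1 affinely independent
    # points in R^d, obtained from the linear system
    # 2 c.(x_i - x_0) = |x_i|^2 - |x_0|^2, i = 1, ..., d.
    x = np.asarray(x, dtype=float)
    A = 2.0 * (x[1:] - x[0])
    b = (x[1:] ** 2).sum(axis=1) - (x[0] ** 2).sum()
    return np.linalg.solve(A, b)

def xi_process(points, g):
    # xi = (1/m!) sum over m-tuples of distinct points of eta of
    # g(x, eta) delta_{z(x)}; since g and z are symmetric, iterating
    # over unordered tuples (combinations) absorbs the factor 1/m!.
    pts = np.asarray(points, dtype=float)
    m = pts.shape[1] + 1
    return [circumcenter(t) for t in combinations(pts, m)
            if g(np.asarray(t), pts)]

def g(x, mu):
    # selection mechanism: keep z(x) if it lies in W = [0,1]^2 and
    # the size functional f(x, mu) = circumradius exceeds v = 0.2.
    c = circumcenter(x)
    return bool(np.all((0.0 <= c) & (c <= 1.0))
                and np.linalg.norm(x[0] - c) > 0.2)

corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
centers = xi_process(corners, g)  # all four circumcenters equal (0.5, 0.5)
```

Here $g$ depends on $\mu$ only through $\mathbf x$ itself; the stabilization condition of Definition \ref{defstab} allows a local dependence on the configuration around $z(\mathbf x)$.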
The motivation of our paper is twofold. First, we establish a general Poisson process approximation result for the process $\xi$ under the assumption that $g$ satisfies a stabilization condition. Our method of proof uses Stein's method and constructs a concrete coupling of $\xi$ and its Palm measure. Thus, we avoid the use of Glauber dynamics on the Poisson space which is applied in \cite{bobrowski2021poisson} to establish Poisson approximation. Second, we demonstrate the flexibility of our approach and derive Poisson approximation results for large cells in the Poisson-Voronoi and -Delaunay mosaics for general size functionals. This extends and generalizes results from \cite{calka2014extreme} and \cite{chenavier2014general}. We make use of the theory of the asymptotic shape of large typical cells in random mosaics (Kendall's problem) that is contained in a series of articles (see \cite{kovalenko1997proof,hug2004large,hug2004limit,hug2005large,hug2004large1,hug2007asymptotic,hug2007typical,bonnet2018cells}).
Various results on Poisson approximation of point processes in stochastic geometry can be found in the literature. In \cite{schuhmacher2005distance} and \cite{schuhmacher2009distance} approximation results for dependent thinnings of point processes that have a density with respect to a Poisson process are derived. The Malliavin calculus on the Poisson space is used in \cite{schulte2012scaling} and \cite{schulte2016poisson} to determine scaling limits for Poisson U-statistics. In \cite{decreusefond2016functional} the theory of Glauber dynamics for birth and death processes is combined with the Chen-Stein method and a Poisson approximation result in Kantorovich-Rubinstein distance is derived for U-statistics. \cite{chenavier2014general} uses a Poisson approximation result from \cite{arratia1989two} and \cite{arratia1990poisson} together with a discretization technique to study extremes in the Poisson-Voronoi and -Delaunay mosaic. \cite{chenavier2016extremes} and \cite{otto2021extremal} derive Poisson approximation results for small and large cells in the Poisson hyperplane mosaic. Poisson approximation for the volume of $k$-nearest neighbor balls is discussed in \cite{gyorfi2018limit}, \cite{chenavier2022limit}, \cite{bobrowski2021poisson} in the Euclidean space and in \cite{otto2022large} in the hyperbolic space. Moreover, Poisson approximation for functionals of Gibbs processes is discussed in \cite{schuhmacher2005distance,schuhmacher2009distance,last2023+disagreement}. For more information on extreme values in random tessellations we refer to \cite{bonnet2020maximal,chenavier2019largest,chenavier2018cluster,chenavier2015extremal,bonnet2018small,schneider2019small}.
This article is organized as follows. We settle our notation and provide background information on Palm theory and stopping sets in Section 2. In Section 3 we introduce the notion of stabilization that we work with and state and prove a general theorem on Poisson approximation for point processes. In Section 4 we study cells in the Poisson-Voronoi mosaic that are large with respect to a general size functional. Beyond that, we consider large Poisson-Delaunay cells in Section 5.
\section{Preliminaries}\label{s2}
\subsection{Point and Poisson processes} Let $\mathbb{X}$ be a locally compact second countable Hausdorff space with Borel $\sigma$-field $\mathcal X$. Let $\mathbf{N}$ denote the space of locally finite (i.e. finite on compact subsets) counting measures $\mu$ on $\mathbb{X}$ and let $\mathcal{N}$ be the $\sigma$-field on $\mathbf{N}$ generated by the mappings $\mu \mapsto \mu(B)$, $B \in \mathcal X$. Examples of elements of $\mathbf{N}$ are the zero measure $0$ and the Dirac measure $\delta_x$ in the point $x \in \mathbb X$, given by $\delta_x(B):=\mathbf{1}_B(x),\,B \in \mathcal{X}$. We write $\mu_B:=\mu \cap B$ for the restriction of $\mu \in \mathbf N$ to $B \in \mathcal X$. For $k \in {\mathbb N}$, $\mathbf{x}=(x_1,\dots,x_k)\in \mathbb{X}^k$, $\mu \in \textbf{N}$ and $A \in \mathcal{X}$ we write $\delta_{\mathbf{x}}:=\delta_{x_1}+\cdots+\delta_{x_k}$, $\mu_{\mathbf{x}}:=\mu+\delta_{\mathbf{x}}$ and $\mathbf{x} \in A$ if $x_i\in A$ for every $i \in [k]$. By a slight abuse of notation, we write $x \in \mu$ if the point $x$ is charged by the configuration $\mu$, i.e. $\mu(\{x\})>0$.
Suppose $\mu \in \mathbf{N}$ is given by $\mu=\sum_{i=1}^k \delta_{x_i}$ for some $k \in {\mathbb N}_0 \cup \{\infty\}$ and some $x_1,\dots,x_k \in \mathbb{X}$ (not necessarily distinct). For $m \in {\mathbb N}$ we define the {\em factorial measure} (see \cite{last2017lectures}, (4.5)) $\mu^{(m)}$ of $\mu$ by \begin{align*}
\mu^{(m)}:=\sum^{\neq}\limits_{i_1,\dots,i_m \le k} \delta_{(x_{i_1},\dots,x_{i_m})}, \end{align*} where the superscript $\neq$ indicates summation over $m$-tuples with pairwise different entries and where an empty sum is defined as zero. (For $k=\infty$ this involves only integer-valued indices.) A measure $\mu \in \textbf{N}$ is called {\em simple} if $\mu(\{x\})\in \{0,1\}$ for all $x \in \mathbb{X}$. Let $\mathbf{N}_s$ denote the set of simple locally finite counting measures. It is a measurable subset of $\mathbf{N}$ (see \cite{schneider2008stochastic}, p. 51) and its induced $\sigma$-field is denoted by $\mathcal{N}_s$.
A {\em point process} is a random element $\xi$ in $\textbf{N}$, defined over some fixed probability space $(\Omega, \mathcal{A},{\mathbb P})$. We assume that this probability space is rich enough to support all random objects in this article. By definition, $\xi(B)$ is a random variable in ${\mathbb N}_0 \cup \{\infty\}$ for every $B \in \mathcal{X}$. A point process $\xi$ is called {\em simple} if ${\mathbb P}(\xi \text{ is simple})=1$.
The central object of this article is the {\em Poisson process}. We refer to \cite[Chapter 3]{last2017lectures} for its definition and basic properties. Particularly useful is the following {\em multivariate Mecke equation} (see \cite[Theorem 4.5]{last2017lectures}):
Let $\eta$ be a Poisson process on $\mathbb{X}$ with $\sigma$-finite intensity measure $\lambda$. Then we have for all $k \in {\mathbb N}$ and every measurable function $f:\mathbb{X}^k\times \mathbf{N}\to [0,\infty]$, \begin{align}
{\mathbb E} \int f(\mathbf{x},\eta )\,\eta^{(k)}(\mathrm{d}\mathbf{x})=\int {\mathbb E} f(\mathbf{x},\eta+\delta_{\mathbf x})\,\lambda^k(\mathrm{d}\mathbf{x}). \label{mecke} \end{align}
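As an illustrative aside (a simulation sketch under our own assumptions, not part of the formal development), the case $k=1$ of \eqref{mecke} can be checked numerically. Take $\mathbb{X}=[0,1]$ with total intensity $\lambda([0,1])=2$ and $f(x,\mu)=\mu([0,1])$; writing $N=\eta([0,1])\sim\mathrm{Poisson}(2)$, the left-hand side is ${\mathbb E}[N^2]$ and the right-hand side is $\lambda([0,1])\,{\mathbb E}[N+1]=6$. A hypothetical Python sketch:

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below e^{-lam}.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
lam = 2.0            # total intensity lambda([0,1])
n_samples = 200_000

# LHS of the Mecke equation with f(x, eta) = eta([0,1]):
# E int f(x, eta) eta(dx) = E[N * N] where N = eta([0,1]).
lhs = sum(sample_poisson(lam, rng) ** 2 for _ in range(n_samples)) / n_samples

# RHS: int E f(x, eta + delta_x) lambda(dx) = lam * E[N + 1] = lam * (lam + 1).
rhs = lam * (lam + 1.0)

print(abs(lhs - rhs))  # small Monte Carlo error
```

Since ${\mathbb E}[N^2]=\lambda+\lambda^2=6$ for $N\sim\mathrm{Poisson}(2)$, the two sides agree up to Monte Carlo error.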
\subsection{Palm measures and stopping sets} Following \cite{kallenberg2017random}, Chapter 6, we next introduce Palm processes. Let $\eta,\xi$ be two point processes and assume that $\xi$ has $\sigma$-finite intensity measure ${\mathbb E} \xi$. In general there exists a whole family of Palm measures, one for every $x \in \mathbb{X}$. Palm measures generalize the notion of regular conditional distributions (see Theorem 6.3 in \cite{kallenberg2006foundations}) and agree with them for $\xi=\delta_x$ for some $x \in \mathbb{X}$. The $\sigma$-finiteness of ${\mathbb E} \xi$ implies that the {\em Campbell measure} $C_{\eta,\xi}$ defined by \begin{align*}
C_{\eta,\xi}f:= {\mathbb E} \int f(x,\eta)\,\xi(\mathrm{d}x),\quad f:\mathbb{X} \times \mathbf{N} \to [0,\infty) \text{ measurable}, \end{align*} is also $\sigma$-finite. There exists a (unique) probability kernel $P_{\eta,\xi}^x$ from $\mathbb{X}$ to $\textbf{N}$ such that for all measurable $f:\mathbb{X} \times \mathbf{N} \to [0,\infty)$ the disintegration \begin{align*}
C_{\eta,\xi}f= \iint f(x,\mu)\,P_{\eta,\xi}^x(\mathrm{d}\mu)\,{\mathbb E} \xi(\mathrm{d}x) \end{align*} holds. The measure $P_{\eta,\xi}^x$ is called the {\em Palm measure} of $\eta$ with respect to $\xi$ at $x$. A point process $\eta^{\xi,x}$ with distribution $P_{\eta,\xi}^x$ is called a {\em Palm process} or a {\em Palm version} of $\eta$ with respect to $\xi$ at $x$. If $\xi$ is simple, $\eta^{\xi,x}$ can be interpreted as the process $\eta$ seen from $x$ conditioned on $\xi$ having a point in $x$. If $\xi = \eta$ a.s.\ we write $\eta^x$ for a Palm version of $\eta$ (with respect to itself) at $x$ and we obtain from Lemma 6.2(ii) in \cite{kallenberg2017random} that \begin{align*}
{\mathbb P}(x \in \eta^x)=1. \end{align*} The process $\eta^x-\delta_x$ is called a {\em reduced Palm process} of $\eta$ at $x$.
An important tool in the analysis of point processes is the stopping set. Stopping sets generalize the concept of stopping times for random variables. Let $\mathcal{F}$ denote the system of closed sets in $\mathbb X$. We endow $\mathcal{F}$ with the smallest $\sigma$-field containing $\mathcal{F}_K:=\{F \in \mathcal{F}:\,F \cap K \neq {\varnothing}\}$ for all compact $K \subset \mathbb X$. For $F \in \mathcal{F}$ we denote by $\mu_F$ the restriction of $\mu$ to $F$. Moreover, let $\mathcal{N}_F$ be the $\sigma$-field on $\mathbf{N}$ generated by the mappings $\mu\mapsto \mu(B \cap F)$, $B \in \mathcal{X}$. \begin{definition}
A measurable map $S: \textbf{N} \to \mathcal{F}$ is called a {\em stopping set} (with respect to the filtration $(\mathcal{N}_F)_{F \in \mathcal{F}}$) if $\{\mu \in \textbf{N}:\,S(\mu)\subset F\} \in \mathcal{N}_F$ for all $F \in \mathcal{F}$. \end{definition}
Intuitively, if $S$ is a stopping set and $\eta$ is a random element in $\mathbf{N}$, then $S(\eta)$ is a random subset of $\mathbb{X}$ such that $S(\eta)$ only depends on the restriction $\eta_{S(\eta)}$ of $\eta$ to $S(\eta)$.
From \cite[Proposition A.1]{baumstark2009gamma} we have that a measurable map $S:\mathbf{N}\to {\mathcal F}$ is a stopping set if and only if $S(\mu)=S(\mu_{S(\mu)})$ for all $\mu \in \mathbf{N}$ and if the following implication holds for all $\mu,\varphi \in \mathbf{N}$: \begin{align}
\varphi=\mu_{S(\varphi)} \Longrightarrow S(\varphi)=S(\mu). \label{propstop} \end{align}
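As an illustrative aside (a toy example of ours, not from the paper): on $\mathbb{X}=\mathbb{R}$, the map $S(\mu):=B_{R_k(\mu)}(0)$, where $R_k(\mu)$ is the distance from the origin to the $k$-th nearest point of $\mu$, is a stopping set, and the identity $S(\mu)=S(\mu_{S(\mu)})$ from \cite[Proposition A.1]{baumstark2009gamma} can be checked directly. A hypothetical Python sketch with configurations encoded as lists:

```python
def stop_radius(mu, k):
    # Radius of S(mu) := smallest closed ball around 0 containing the
    # k nearest points of the configuration mu (infinite if mu has fewer points).
    dists = sorted(abs(x) for x in mu)
    return dists[k - 1] if len(dists) >= k else float("inf")

mu = [0.3, -0.2, 0.9, -1.5]
r = stop_radius(mu, 2)                          # radius of S(mu)
mu_restricted = [x for x in mu if abs(x) <= r]  # mu restricted to S(mu)

# The stopping-set identity S(mu) = S(mu_{S(mu)}):
print(stop_radius(mu_restricted, 2) == r)  # True
```

The point of the example is that deciding the radius only requires inspecting the configuration inside the resulting ball, which is exactly the content of the defining property.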
The following lemma is similar to Lemma A.2 in \cite{last2021phase}. Since the latter works under a more general notion of stopping sets, we give a proof of the statement. \begin{lemma} \label{remstop} Let $\varphi, \mu \in \mathbf{N}$ and let $S:\mathbf{N}\to \mathcal{F}$ be a stopping set. Then it holds that
\begin{align*}
\varphi= \mu_{S(\mu)} \quad \Longleftrightarrow \quad \varphi= \mu_{S(\varphi)}
\end{align*}
and in this case we also have that $S(\varphi)=S(\mu)$. \end{lemma}
\begin{proof}
Let $S:\mathbf{N}\to \mathcal{F}$ be a stopping set and assume that $\varphi=\mu_{S(\mu)}$. Then we have by \cite[Proposition A.1]{baumstark2009gamma} that $S(\mu)=S(\mu_{S(\mu)})=S(\varphi)$. For the other direction we assume that $\varphi=\mu_{S(\varphi)}$. Then it follows from \eqref{propstop} that $S(\varphi)=S(\mu)$ and, hence, that $\varphi=\mu_{S(\mu)}$. \end{proof}
The next statement will be used repeatedly in Section 3.
\begin{lemma} \label{stopmecam}
Let $\eta$ and $\eta'$ be independent Poisson processes on $\mathbb{X}$ with $\sigma$-finite intensity measure $\lambda$, let $S$ be a stopping set such that $S(\eta)$ is a.s.\ compact and let $h:\mathbf{N}\times\mathbf{N}\to [0,\infty)$ be measurable. Then we have
\begin{align*}
{\mathbb E} h(\eta_{S(\eta)},\eta_{S(\eta)^c})={\mathbb E} h(\eta_{S(\eta)},\eta'_{S(\eta)^c}).
\end{align*} \end{lemma}
\begin{proof}
The argument can be assembled from different sources in the literature (see \cite{baumstark2009gamma}, \cite{zuyev1999stopping}). Nevertheless, we give a proof for completeness and convenience of the reader. We have
\begin{align}
{\mathbb E} h(\eta_{S(\eta)},\eta_{S(\eta)^c})&= \sum\limits_{k=0}^\infty {\mathbb E} h(\eta_{S(\eta)},\eta_{S(\eta)^c})\,\mathbf{1}\{\eta(S(\eta))=k\}\nonumber\\
&= \sum\limits_{k=0}^\infty \frac{1}{k!} {\mathbb E} \int h(\delta_{\mathbf{x}},\eta_{S(\eta)^c})\,\mathbf{1}\{\eta_{S(\eta)}=\delta_{\mathbf{x}}\}\,\eta^{(k)}(\mathrm{d}\mathbf{x}),\label{funstdis}
\end{align}
where we recall that $\delta_{\mathbf{x}}:=\delta_{x_1}+\cdots+\delta_{x_k}$ for $\mathbf{x}=(x_1,\dots,x_k)\in \mathbb X^k$. By Lemma \ref{remstop} we have that $\eta_{S(\eta)}=\delta_{\mathbf{x}}$ if and only if $\eta_{S(\delta_{\mathbf{x}})}=\delta_{\mathbf{x}}$ and in this case it holds that $S(\eta)= S(\delta_{\mathbf{x}})$. Hence, \eqref{funstdis} is given by
\begin{align*}
\sum\limits_{k=0}^\infty \frac{1}{k!}{\mathbb E} \int h(\delta_{\mathbf{x}},\eta_{S(\delta_{\mathbf{x}})^c})\,\mathbf{1}\{\eta_{S(\delta_{\mathbf{x}})}=\delta_{\mathbf{x}}\}\,\eta^{(k)}(\mathrm{d}\mathbf{x}).
\end{align*}
From the multivariate Mecke equation \eqref{mecke} we obtain that the above is given by
\begin{align}
\sum\limits_{k=0}^\infty \frac{1}{k!} \int{\mathbb E} h(\delta_{\mathbf{x}},(\eta+\delta_{\mathbf{x}})_{S(\delta_{\mathbf{x}})^c})\,\mathbf{1}\{(\eta+\delta_{\mathbf{x}})_{S(\delta_{\mathbf{x}})}=\delta_{\mathbf{x}}\}\,\lambda^{k}(\mathrm{d}\mathbf{x}).\label{funstdism}
\end{align}
Note that $(\eta+\delta_{\mathbf{x}})_{S(\delta_{\mathbf{x}})}=\delta_{\mathbf{x}}$ implies that $(\eta+\delta_{\mathbf{x}})_{S(\delta_{\mathbf{x}})^c}=\eta_{S(\delta_{\mathbf{x}})^c}$. Let $\eta'$ be a point process that is independent of $\eta$ with $\eta \stackrel{d}{=}\eta'$. Since $\eta_{S(\delta_{\mathbf{x}})}$ and $\eta_{S(\delta_{\mathbf{x}})^c}$ are independent, \eqref{funstdism} is given by
\begin{align*}
&\sum\limits_{k=0}^\infty \frac{1}{k!} \int{\mathbb E} h(\delta_{\mathbf{x}},\eta'_{S(\delta_{\mathbf{x}})^c})\,\mathbf{1}\{(\eta+\delta_{\mathbf{x}})_{S(\delta_{\mathbf{x}})}=\delta_{\mathbf{x}}\}\,\lambda^{k}(\mathrm{d}\mathbf{x})\\
&\quad =\sum\limits_{k=0}^\infty \frac{1}{k!} {\mathbb E}\int h(\delta_{\mathbf{x}},\eta'_{S(\delta_{\mathbf{x}})^c})\,\mathbf{1}\{\eta_{S(\delta_{\mathbf{x}})}=\delta_{\mathbf{x}}\}\,\eta^{(k)}(\mathrm{d}\mathbf{x}),
\end{align*}
where we have applied the Mecke equation to obtain the equality. Using Lemma \ref{remstop} again, we arrive at
\begin{align*}
&\sum\limits_{k=0}^\infty \frac{1}{k!} {\mathbb E} \int h(\delta_{\mathbf{x}},\eta'_{S(\eta)^c})\,\mathbf{1}\{\eta_{S(\eta)}=\delta_{\mathbf{x}}\}\,\eta^{(k)}(\mathrm{d}\mathbf{x})\nonumber\\
&\quad= \sum\limits_{k=0}^\infty {\mathbb E} h(\eta_{S(\eta)},\eta'_{S(\eta)^c})\,\mathbf{1}\{\eta(S(\eta))=k\}={\mathbb E} h(\eta_{S(\eta)},\eta'_{S(\eta)^c}).
\end{align*} \end{proof}
\section{General result on Poisson process approximation}\label{s3}
Let $m \in [d+1]$ and let $\eta$ be a Poisson process in $\mathbb X$ with $\sigma$-finite and diffuse intensity measure $\lambda$. In the following we assume that $g:\mathbb X^m\times \mathbf{N}\to \{0,1\}$ is measurable and symmetric in the first $m$ coordinates, i.e. \begin{align*}
g(x_1,\dots,x_m,\mu)=g(x_{\pi(1)},\dots,x_{\pi(m)},\mu) \end{align*} for all $x_1,\dots,x_m \in \mathbb X,\,\mu \in \mathbf{N}$ and every permutation $\pi:[m]\to [m]$. We think of $g$ as a selection mechanism that decides whether an $m$-tuple $\mathbf{x}=(x_1,\dots,x_m)$ is considered or not. Let $z:\mathbb{X}^m \to \mathbb{X}$ be a measurable and symmetric function.
We define the point process \begin{align}
\xi [\mu]=\frac{1}{m!} \sum\limits_{\mathbf{x}\in \mu^{(m)}} g(\mathbf{x},\mu)\,\delta_{z(\mathbf{x})},\quad \mu \in \mathbf N, \label{xidef} \end{align} and write $\xi:=\xi[\eta]$. Note that for $m=1$ and $z(x)=x$, the process $\xi$ is a thinning of the Poisson process $\eta$. From the multivariate Mecke equation \eqref{mecke} we find that the intensity measure ${\mathbb E} \xi$ of $\xi$ is given by \begin{align}
{\mathbb E} \xi(A)&= \frac {1}{m!} {\mathbb E} \int \mathbf{1}\{z(\mathbf{x}) \in A\}\,g(\mathbf{x},\eta)\,\eta^{(m)}(\mathrm{d}\mathbf{x})\nonumber\\
&= \frac {1}{m!}\int \mathbf{1}\{z(\mathbf{x}) \in A\}\, {\mathbb E} g(\mathbf{x},\eta+\delta_\mathbf{x})\,\lambda^m(\mathrm{d}\mathbf{x}),\quad A \in \mathcal{X}.\label{intpalm} \end{align}
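For intuition (a purely illustrative sketch under our own simplifying assumptions, not a construction from the paper), take $m=1$, $z(x)=x$ on $\mathbb{X}=\mathbb{R}$ and the hard-core selection function $g(x,\mu)=\mathbf{1}\{\text{no point of } \mu \text{ other than } x \text{ lies within distance } r \text{ of } x\}$; note that $g(x,\mu)$ then depends on $\mu$ only through its restriction to a ball of radius $r$ around $x$. A hypothetical Python sketch of $\xi[\mu]$ for a finite configuration:

```python
def hard_core_thinning(mu, r):
    # xi[mu] for m = 1, z(x) = x and g(x, mu) = 1{no other point of mu within r}:
    # keep exactly those points with no neighbour at distance <= r.
    return sorted(x for x in mu if all(abs(x - y) > r for y in mu if y != x))

print(hard_core_thinning([0.10, 0.15, 0.50, 0.90], 0.1))  # [0.5, 0.9]
```

With this choice, deciding whether a point survives never requires looking beyond distance $r$; this locality is exactly what the stabilization condition introduced below formalizes.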
The goal of this section is to approximate $\xi[\eta]$ by a Poisson process under the condition that $g$ is stabilizing. This concept is formalized in the next definition. Loosely speaking, it requires that the value of $g(\mathbf{x},\mu)$ is determined by the restriction of $\mu$ to a ball centred at $z(\mathbf x)$ with a finite radius. We write $B_r(z)$ for the closed ball with radius $r>0$ around $z\in \mathbb{X}$.
\begin{definition} \label{defstab}
Let $g:\mathbb{X}^m\times \mathbf{N}\to [0,\infty)$ be measurable and symmetric in the first $m$ coordinates and let $\eta$ be a Poisson process in $\mathbb{X}$ with $\sigma$-finite intensity measure. We call $g$ {\em stabilizing} if there exists a measurable function $R:\mathbb X\times \mathbf{N}\to [0,\infty]$, such that for all $\mathbf{x} \in \mathbb{X}^m$ we have
\begin{enumerate}
\item[(i)] $R(z(\mathbf{x}),\eta+\delta_\mathbf{x}) <\infty\quad {\mathbb P}$-a.s.
\item[(ii)] $g(\mathbf{x},\mu)=g(\mathbf{x},\mu \cap B_{R(z(\mathbf{x}),\mu+\delta_\mathbf{x})}(z(\mathbf{x}))),\quad \mu \in \mathbf{N}.$
\item[(iii)] The map $\mu \mapsto B_{R(z(\mathbf{x}),\mu+\delta_\mathbf{x})}(z(\mathbf{x}))$ from $\mathbf{N}$ to $\mathcal{F}$ is a stopping set.
\item[(iv)] $\mathbf{x} \in B_{R(z(\mathbf{x}),\mu+\delta_\mathbf{x})}(z(\mathbf{x}))$,\quad $\mu\in \textbf{N}$.
\end{enumerate}
We call $R(z,\mu)$ a {\em stabilization radius} and use the notation $S(z,\mu):=B_{R(z,\mu)}(z),\,z \in \mathbb{X},\,\mu \in \mathbf{N}$. \end{definition}
Note that from Definition \ref{defstab}(ii) and (iii) it follows that \begin{align}
g(\mathbf{x},\mu)=g(\mathbf{x},\mu \cap B_{R(z(\mathbf{x}),\mu+\delta_\mathbf{x})}(z(\mathbf{x}))+\chi \cap B_{R(z(\mathbf{x}),\mu+\delta_\mathbf{x})}(z(\mathbf{x}))^c),\quad \mu,\,\chi \in \mathbf N.\label{stoprem} \end{align} This property is sometimes assumed in the literature (see e.g. \cite{penrose2005normal}).
In order to state the main result of this section, we still need to fix some notation. For point processes $\xi$ and $\nu$ on $\mathbb{X}$ the {\em total variation distance} is given by \begin{align*}
\mathbf{d_{TV}}(\xi,\nu):=\sup_{A \in \mathcal N} |\mathbb{P}(\xi \in A)-\mathbb{P}(\nu \in A)|. \end{align*}
Moreover, we denote by $\mu_+$ and $\mu_-$ the positive and negative part of a finite signed measure $\mu$ on $\mathcal X$ and by $\|\mu\|:=\mu_+(\mathbb{X})+\mu_-(\mathbb{X})$ its total variation.
\begin{theorem} \label{maithmallg}
Let $\xi=\xi[\eta]$ be the process defined at \eqref{xidef} with stabilizing $g$ in the sense of Definition \ref{defstab} and assume that $\mathbb E \xi(\mathbb{X})<\infty$. Let $\nu$ be a Poisson process on $\mathbb{X}$ satisfying ${\mathbb E} \nu(\mathbb{X})<\infty$ and let $\mathbf x \mapsto b_{\mathbf x}$ be a measurable function from $\mathbb X^m$ to $[0,\infty)$. Then we have
\begin{align}
\mathbf{d_{TV}} (\xi, \,\nu)\le \|{\mathbb E}\xi-{\mathbb E}\nu\|+ T_{1}+T_2+T_3+T_4+T_5,
\end{align}
where
\begin{align*}
&T_{1}:=\frac{2\mathbb{E}\xi(\mathbb{X})}{m!} \int_{\mathbb{X}^m} \mathbb{E}g(\mathbf{x},\eta+\delta_{\mathbf{x}})\mathbf 1\{R(z(\mathbf{x}),\eta+\delta_{\mathbf x})>b_{\mathbf{x}}\} \,\lambda^m(\mathrm{d} \mathbf x),\\
&T_{2}:=\frac{1}{(m!)^2}\int_{\mathbb{X}^m} \int_{\mathbb{X}^m} \mathbb{E} g(\mathbf{x},\eta+\delta_{\mathbf{x}})\mathbb{E}g(\mathbf{y},\eta+\delta_{\mathbf{y}})\,\mathbf 1 \{\|z(\mathbf{x})-z(\mathbf{y})\|\le b_{\mathbf{x}}+b_{\mathbf{y}}\}\,\lambda^m(\mathrm{d} \mathbf y)\,\lambda^m(\mathrm{d} \mathbf x),\\
&T_{3}:=\frac{2}{(m!)^2}\int_{\mathbb{X}^m} \int_{\mathbb{X}^m} {\mathbb E} g(\mathbf{x},\eta+\delta_{(\mathbf{x},\mathbf{y})})\,g(\mathbf{y},\eta+\delta_{(\mathbf{x},\mathbf{y})}) \,\mathbf{1}\{R(z(\mathbf{x}),\eta+\delta_{(\mathbf{x},\mathbf{y})})>b_{\mathbf{x}}\}\, \lambda^m(\mathrm{d}\mathbf{y})\,\lambda^m(\mathrm{d}\mathbf{x}),\\
&T_4:=\frac{1}{(m!)^2}\int_{\mathbb{X}^m} \int_{\mathbb{X}^m} {\mathbb E} g(\mathbf{x},\eta+\delta_{(\mathbf{x},\mathbf{y})})\,g(\mathbf{y},\eta+\delta_{(\mathbf{x},\mathbf{y})}) \,\mathbf{1}\{\|z(\mathbf{x})-z(\mathbf{y})\|\le b_{\mathbf{x}}+b_{\mathbf{y}}\}\\
&\qquad \qquad \times \mathbf{1}\{R(z(\mathbf{x}),\eta+\delta_{(\mathbf{x},\mathbf{y})})\le b_{\mathbf{x}}\}\,\mathbf{1}\{R(z(\mathbf{y}),\eta+\delta_{(\mathbf{x},\mathbf{y})})\le b_{\mathbf{y}}\}\, \,\lambda^m(\mathrm{d}\mathbf{y})\,\lambda^m(\mathrm{d}\mathbf{x}),\\
&T_5:=\sum\limits_{\ell=1}^{m-1}\frac{1}{m!\,(m-\ell)!}\int_{\mathbb{X}^m} \int_{\mathbb{X}^{m-\ell}} \mathbb{E}\, g((\mathbf{x}_\ell,\mathbf{y}),\eta+\delta_{(\mathbf{x},\mathbf{y})})\, g(\mathbf x,\eta+\delta_{(\mathbf{x},\mathbf{y})})\,\lambda^{m-\ell}(\mathrm{d}\mathbf{y})\, \lambda^{m} (\mathrm{d}\mathbf x)
\end{align*}
with $\mathbf{x}_\ell:=(x_1,\dots,x_\ell)$. \end{theorem}
\begin{comment}
The terms $C_1$, $C_2$, $C_3$ in the upper bound in Theorem \ref{maithmallg} can be interpreted as follows. The term $C_1$ is the expected number of pairs of $(d+1)$-tuples in $\eta$ with centers in $\xi$ that have at least one element in common. Clearly, this term does not appear in the case $m=1$. In $C_2$ we consider pairs of $(d+1)$-tuples in $\eta$ with centers in $\xi$ that are close by one another. The term $C_3$ is (up to a constant factor) the expected number of pairs of $(d+1)$-tuples in $\eta$ with centers in $\xi$ such that at least one of the tuples has a large stabilization radius. In the case of deterministic stabilization radii, this term disappears when $b$ is chosen appropriately. \end{comment}
The strategy of the proof of Theorem \ref{maithmallg} is as follows. First, we construct a special (reduced) Palm version $\xi^{w!}$ of $\xi=\xi[\eta]$ at a given $w \in \mathbb{X}$. Thereafter, we bound the expected total masses of the positive and the negative part of $\xi-\xi^{w!}$. Finally, we combine the two bounds and conclude the proof of Theorem \ref{maithmallg} using a general Poisson approximation result from \cite{barbour1992stein}.
\begin{lemma}\label{lemxiz} Let $\xi=\xi[\eta]$ be the process defined at \eqref{xidef} and $\eta^{\xi,w}$ be a Palm process of $\eta$ with respect to $\xi$ at $w$ such that $\eta^{\xi,w}$ and $\eta$ are independent point processes. For $\mathbb E \xi$-almost all $w \in \mathbb{X}$,
\begin{align}
\xi^{w!}:=\xi[(\eta^{\xi,w})_{S(w,\eta^{\xi,w})}+\eta_{S(w,\eta^{\xi,w})^c}]-\delta_w \label{def:xiz}
\end{align}
is a reduced Palm process of $\xi$ (with respect to itself) at $w$. \end{lemma}
\begin{proof}
It suffices to show that
\begin{align}
(\eta^{\xi,w})_{S(w,\eta^{\xi,w})}+\eta_{S(w,\eta^{\xi,w})^c} \label{etazetaclaimallg}
\end{align}
is a Palm version of $\eta$ with respect to $\xi$ at $w$. To this end, let $h:\mathbb X\times \textbf{N} \to [0,+\infty)$ be measurable and let $\widetilde{\eta}$ be a Poisson process with $\eta \stackrel{d}{=} \widetilde{\eta}$ such that $\eta$ and $\widetilde{\eta}$ are independent. By independence of $\eta$ and $\eta^{\xi,w}$ and the definitions of $\eta^{\xi,w}$ and $\xi$, we obtain that
\begin{align*}
& \int {\mathbb E} h(w,(\eta^{\xi,w})_{S(w,\eta^{\xi,w})}+\eta_{S(w,\eta^{\xi,w})^c})\,{\mathbb E} \xi(\mathrm{d}w)\nonumber\\
&\quad ={\mathbb E} \int h(w,\eta_{S(w,\eta)}+\widetilde{\eta}_{S(w,\eta)^c})\,\xi[\eta](\mathrm{d}w)\nonumber\\
&\quad =\frac{1}{m!} {\mathbb E} \int h(z(\mathbf{x}),\eta_{S(z(\mathbf{x}),\eta)}+\widetilde{\eta}_{S(z(\mathbf{x}),\eta)^c})\,g(\mathbf{x},\eta)\,\eta^{(m)} (\mathrm{d}\mathbf{x}),
\end{align*}
which is by the multivariate Mecke equation \eqref{mecke} given by
\begin{align}
\frac{1}{m!} \int &{\mathbb E} h(z(\mathbf{x}),\eta_{S(z(\mathbf{x}),\eta+\delta_\mathbf{x})}+\widetilde{\eta}_{S(z(\mathbf{x}),\eta+\delta_\mathbf{x})^c}+\delta_\mathbf{x})\,g(\mathbf{x},\eta+\delta_{\mathbf{x}})\,\lambda^m(\mathrm{d}\mathbf{x}),\label{etazetaallgmecke}
\end{align}
where we have used that $(\eta+\delta_{\mathbf x})_{S(z(\mathbf{x}),\eta+\delta_\mathbf{x})}=\eta_{S(z(\mathbf{x}),\eta+\delta_\mathbf{x})}+\delta_{\mathbf x}$ by Definition \ref{defstab}(iv). Now we apply Lemma \ref{stopmecam} with the function
\begin{align*}
\tilde h:\textbf{N}\times \textbf{N}\to [0,\infty),\quad (\mu,\phi) \mapsto h(z(\mathbf{x}),\mu+\phi+\delta_\mathbf{x})\, g(\mathbf{x},\mu+\delta_{\mathbf{x}}).
\end{align*}
Since $g$ is stabilizing and since $S$ is a stopping set, \eqref{etazetaallgmecke} can be written as
\begin{align}
&\frac{1}{m!} \int {\mathbb E} h(z(\mathbf{x}),\eta+\delta_\mathbf{x})\,g(\mathbf{x},\eta+\delta_{\mathbf{x}}) \,\lambda^m(\mathrm{d}\mathbf{x}),
\end{align}
which is by the multivariate Mecke equation \eqref{mecke} and the definition of $\xi$ given by
\begin{align*}
&\frac{1}{m!} {\mathbb E} \int h(z(\mathbf{x}),\eta)\,g(\mathbf{x},\eta) \,\eta^{(m)}(\mathrm{d}\mathbf{x}) ={\mathbb E} \int h(w,\eta)\,\xi(\mathrm{d}w).
\end{align*}
Hence, $(\eta^{\xi,w})_{S(w,\eta^{\xi,w})}+\eta_{S(w,\eta^{\xi,w})^c}$ is indeed a Palm version of $\eta$ with respect to $\xi$ at $w$ for ${\mathbb E} \xi$-almost all $w$. \end{proof}
\begin{lemma}\label{lem+}
Let $\xi=\xi[\eta]$ be the process defined at \eqref{xidef} and for $\mathbb{E} \xi$-almost all $w \in \mathbb{X}$ let $\xi^{w!}$ be the process defined at \eqref{def:xiz}. We have
\begin{align*}
&\int \mathbb{E} [(\xi-\xi^{w!})_+(\mathbb{X})]\,{\mathbb E}\xi(\mathrm{d}w)\le T_1+T_2,
\end{align*}
where $T_1, T_2$ are given in Theorem \ref{maithmallg}. \end{lemma}
\begin{proof}
Let $\eta$ and $\tilde \eta$ be independent Poisson processes on $\mathbb{X}$ with intensity measure $\lambda$. It follows from the definition of $\xi^{w!}$ in \eqref{def:xiz} that the left-hand side of the inequality in the lemma is given by
\begin{align}
\mathbb{E} \int(\xi[\eta]-\xi[\tilde \eta_{S(w,\tilde \eta)}+\eta _{S(w,\tilde \eta)^c}]+\delta_w)_+(\mathbb{X})\,\xi[\tilde \eta](\mathrm{d}w).\label{eqn:tv+lhs}
\end{align}
By definition of $\xi$, the integrand can be bounded as follows
\begin{align}
&(\xi[\eta]-\xi[\tilde \eta_{S(w,\tilde \eta)}+\eta _{S(w,\tilde \eta)^c}]+\delta_w)_+(\mathbb{X})\nonumber\\
&\quad\le \frac {1}{m!}\int_{\mathbb{X}^m} g(\mathbf x,\eta) \mathbf 1\{\mathbf x \in (\tilde \eta_{S(w,\tilde \eta)}+\eta _{S(w,\tilde \eta)^c})^{(m)} \} \mathbf 1 \{g(\mathbf x,\eta)\neq g(\mathbf x,\tilde \eta_{S(w,\tilde \eta)}+\eta _{S(w,\tilde \eta)^c})\}\nonumber\\
&\qquad \qquad \times \mathbf 1\{z(\mathbf x) \neq w\}\,\eta^{(m)}(\mathrm{d}\mathbf x)\label{eqn:gineq1}\\
&\qquad +\frac {1}{m!}\int_{\mathbb{X}^m} g(\mathbf x,\eta) \mathbf 1\{\mathbf x \notin (\tilde \eta_{S(w,\tilde \eta)}+\eta _{S(w,\tilde \eta)^c})^{(m)} \} \mathbf 1\{z(\mathbf x) \neq w\}\,\eta^{(m)}(\mathrm{d}\mathbf x)\label{eqn:gineq2}\\
&\qquad +\frac {1}{m!}\int_{\mathbb{X}^m} g(\mathbf x,\eta) \mathbf 1\{z(\mathbf x) = w\} \,\eta^{(m)}(\mathrm{d}\mathbf x).\label{eqn:gineq3}
\end{align}
Now we use that $g$ is stabilizing with stabilization radius $R$. Hence, by \eqref{stoprem} with $\chi:=\eta_{S(w,\tilde \eta)^c}+\tilde \eta_{S(w,\tilde \eta)}$, the second indicator in \eqref{eqn:gineq1} is given by
\begin{align*}
\mathbf 1 \{g(\mathbf x,\eta_{S(z(\mathbf x),\eta)}+\eta_{S(z(\mathbf x),\eta)^c \cap S(w,\tilde \eta)^c}+ \tilde \eta_{S(z(\mathbf x),\eta)^c \cap S(w,\tilde \eta)})\neq g(\mathbf x,\tilde \eta_{S(w,\tilde \eta)}+\eta _{S(w,\tilde \eta)^c})\}.
\end{align*}
Note that if $S(z(\mathbf x),\eta) \cap S(w,\tilde \eta)={\varnothing}$, we have $S(z(\mathbf x),\eta)=S(z(\mathbf x),\eta) \cap S(w,\tilde \eta)^c$ and $S(w,\tilde \eta)=S(w,\tilde \eta)\cap S(z(\mathbf x),\eta)^c$, yielding that
\begin{align*}
g(\mathbf x,\eta_{S(z(\mathbf x),\eta)}+\eta_{S(z(\mathbf x),\eta)^c \cap S(w,\tilde \eta)^c}+ \tilde \eta_{S(z(\mathbf x),\eta)^c \cap S(w,\tilde \eta)})= g(\mathbf x,\tilde \eta_{S(w,\tilde \eta)}+\eta _{S(w,\tilde \eta)^c}),
\end{align*}
which lets the second indicator in \eqref{eqn:gineq1} vanish. Using Definition \ref{defstab}(iv) we find that for $S(z(\mathbf x),\eta) \cap S(w,\tilde \eta)={\varnothing}$ also the first indicator in \eqref{eqn:gineq2} and the indicator in \eqref{eqn:gineq3} vanish. Thus, we conclude that
\begin{align*}
(\xi[\eta]-\xi[\tilde \eta_{S(w,\tilde \eta)}+\eta _{S(w,\tilde \eta)^c}]+\delta_w)_+(\mathbb{X})\le \frac 1 {m!} \int_{\mathbb{X}^m} g(\mathbf x,\eta) \mathbf 1 \{S(z(\mathbf x),\eta) \cap S(w,\tilde \eta) \neq {\varnothing}\}\,\eta^{(m)}(\mathrm{d} \mathbf x).
\end{align*}
Together with the multivariate Mecke equation \eqref{mecke}, this shows that \eqref{eqn:tv+lhs} is given by
\begin{align*}
\frac{1}{(m!)^2} \int_{\mathbb{X}^m} \int_{\mathbb{X}^m} \mathbb{E} g(\mathbf{x},\eta+\delta_{\mathbf{x}})g(\mathbf{y},\tilde \eta+\delta_{\mathbf{y}})\mathbf 1 \{S(z(\mathbf x),\eta+\delta_{\mathbf x}) \cap S(z(\mathbf y),\tilde \eta+\delta_{\mathbf y}) \neq {\varnothing}\}\,\lambda^m(\mathrm{d} \mathbf y)\,\lambda^m(\mathrm{d} \mathbf x).
\end{align*}
Finally, we want to replace $S(z(\mathbf x),\eta+\delta_{\mathbf x})$ and $S(z(\mathbf y),\tilde \eta+\delta_{\mathbf y})$ by deterministic sets. To achieve this, we split the domain of integration into $\{R(z(\mathbf x),\eta+\delta_{\mathbf x})\le b_{\mathbf{x}},\,R(z(\mathbf y),\tilde \eta+\delta_{\mathbf y})\le b_{\mathbf{y}}\}$ and the complement of this set. Hence, the above is bounded by
\begin{align*}
&\frac{1}{(m!)^2}\int_{\mathbb{X}^m} \int_{\mathbb{X}^m} \mathbb{E}g(\mathbf{x},\eta+\delta_{\mathbf{x}})g(\mathbf{y},\tilde \eta+\delta_{\mathbf{y}}) \mathbf 1\{R(z(\mathbf{x}),\eta+\delta_{\mathbf x})>b_{\mathbf{x}} \text{ or } R(z(\mathbf{y}),\tilde\eta+\delta_{\mathbf y})>b_{\mathbf{y}}\} \\
&\qquad \qquad \qquad \qquad\times \,\lambda^m(\mathrm{d} \mathbf y)\,\lambda^m(\mathrm{d} \mathbf x)\\
& \quad + \frac{1}{(m!)^2}\int_{\mathbb{X}^m} \int_{\mathbb{X}^m} \mathbb{E} g(\mathbf{x},\eta+\delta_{\mathbf{x}})g(\mathbf{y},\tilde \eta+\delta_{\mathbf{y}})\mathbf 1 \{\|z(\mathbf{x})-z(\mathbf{y})\|\le b_{\mathbf{x}}+b_{\mathbf{y}}\}\,\lambda^m(\mathrm{d} \mathbf y)\,\lambda^m(\mathrm{d} \mathbf x) .
\end{align*}
Here, the first term is bounded by $T_1$: by symmetry in $\mathbf x$ and $\mathbf y$, the indicator of the union is at most $\mathbf 1\{R(z(\mathbf{x}),\eta+\delta_{\mathbf x})>b_{\mathbf{x}}\}+\mathbf 1\{R(z(\mathbf{y}),\tilde\eta+\delta_{\mathbf y})>b_{\mathbf{y}}\}$, and integrating out the unrestricted variable via \eqref{intpalm} yields the factor $2\,\mathbb{E}\xi(\mathbb{X})$. The second term is $T_2$. \end{proof}
\begin{lemma}\label{lem-}
Let $\xi=\xi[\eta]$ be the process defined at \eqref{xidef} and for $\mathbb{E} \xi$-almost all $w \in \mathbb{X}$ let $\xi^{w!}$ be the process defined at \eqref{def:xiz}. We have
\begin{align*}
\int \mathbb{E}[(\xi-\xi^{w!})_-(\mathbb{X})] \,\mathbb{E}\xi(\mathrm{d}w) \le T_3+T_4+T_5,
\end{align*}
where $T_3,T_4,T_5$ are given in Theorem \ref{maithmallg}. \end{lemma}
\begin{proof}
Let $\eta$ and $\tilde \eta$ be independent Poisson processes on $\mathbb{X}$ with intensity measure $\lambda$. It follows from the definition of $\xi^{w!}$ in \eqref{def:xiz} that the left-hand side of the inequality in the lemma is given by
\begin{align}
&\mathbb{E} \int(\xi[\tilde \eta_{S(w,\tilde \eta)}+\eta _{S(w,\tilde \eta)^c}]-\delta_w-\xi[\eta])_+(\mathbb{X})\,\xi[\tilde \eta](\mathrm{d}w)\nonumber\\
&\quad =\frac{1}{m!}\,\mathbb{E} \int(\xi[\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+\eta _{S(z(\mathbf x),\tilde \eta)^c}]-\delta_{z(\mathbf x)}-\xi[\eta])_+(\mathbb{X})\,g(\mathbf x,\tilde \eta)\,\tilde \eta^{(m)}(\mathrm{d}\mathbf x).\label{eqn:tv-lhs}
\end{align}
Now we use that $\lambda$ is diffuse, which implies by \cite[Proposition 6.9]{last2017lectures} that $\eta$ is a simple Poisson process (so that $\eta^{(m)}$ charges only $m$-tuples of pairwise distinct points). Hence, we find
\begin{align}
&(\xi[\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+\eta _{S(z(\mathbf x),\tilde \eta)^c}]-\delta_{z(\mathbf x)}-\xi[\eta])_+(\mathbb{X})\nonumber\\
&\quad \le \frac{1}{m!}\int_{\mathbb{X}^m} g(\mathbf y,\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+\eta _{S(z(\mathbf x),\tilde \eta)^c}) \,\mathbf 1\{\mathbf y \in \eta^{(m)}\}\mathbf 1 \{g(\mathbf{y},\eta) \neq g(\mathbf y,\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+\eta _{S(z(\mathbf x),\tilde \eta)^c})\}\nonumber\\
&\qquad \qquad \times \mathbf 1\{\mathbf x \neq \mathbf y\}(\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+\eta _{S(z(\mathbf x),\tilde \eta)^c})(\mathrm{d}\mathbf y)\label{eqn:gineq1-}\\
&\qquad + \frac{1}{m!}\int_{\mathbb{X}^m} g(\mathbf y,\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+\eta _{S(z(\mathbf x),\tilde \eta)^c}) \,\mathbf 1\{\mathbf y \notin \eta^{(m)}\} \mathbf 1\{\mathbf x \neq \mathbf y\}(\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+\eta _{S(z(\mathbf x),\tilde \eta)^c})(\mathrm{d}\mathbf y)\label{eqn:gineq2-}.
\end{align}
Now we invoke that $g$ is stabilizing and find by \eqref{stoprem} with $\chi:=\eta$ that the second indicator in \eqref{eqn:gineq1-} is given by
\begin{align*}
&\mathbf 1 \{g(\mathbf{y},\eta) \neq g(\mathbf y,\tilde \eta_{S(z(\mathbf x),\tilde \eta) \cap S(z(\mathbf y),\omega)}+\eta _{S(z(\mathbf x),\tilde \eta)^c \cap S(z(\mathbf y),\omega)}+\eta _{S(z(\mathbf y),\omega)^c})\},
\end{align*}
where $\omega:=\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+ \eta_{S(z(\mathbf x),\tilde \eta)^c}$. Note that the indicator vanishes for $S(z(\mathbf x),\tilde \eta) \cap S(z(\mathbf y),\omega)={\varnothing}$. Since in this case also the first indicator in \eqref{eqn:gineq2-} vanishes by Definition \ref{defstab}(iv), we conclude that
\begin{align*}
&\mathbb{E} \int(\xi[\tilde \eta_{S(w,\tilde \eta)}+\eta _{S(w,\tilde \eta)^c}]-\delta_w-\xi[\eta])_+(\mathbb{X})\,\xi[\tilde \eta](\mathrm{d}w)\nonumber\\
&\quad \le \frac{1}{(m!)^2}\mathbb{E}\int_{\mathbb{X}^m} \int_{\mathbb{X}^m} \mathbf 1 \{S(z(\mathbf x),\tilde \eta) \cap S(z(\mathbf y),\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+\eta_{S(z(\mathbf x),\tilde \eta)^c}) \neq {\varnothing}\}\,\mathbf 1\{\mathbf x \neq \mathbf y\}\nonumber\\
&\qquad \qquad \times g(\mathbf{x},\tilde\eta)g(\mathbf y,\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+\eta _{S(z(\mathbf x),\tilde \eta)^c}) \,(\tilde \eta_{S(z(\mathbf x),\tilde \eta)}+\eta _{S(z(\mathbf x),\tilde \eta)^c})^{(m)}(\mathrm{d}\mathbf y) \,\tilde \eta ^{(m)} (\mathrm{d}\mathbf x).
\end{align*}
Now we apply the multivariate Mecke equation to the outer integral and obtain
\begin{align}
&\frac{1}{(m!)^2}\mathbb{E}\int_{\mathbb{X}^m} \int_{\mathbb{X}^m} \mathbf 1 \{S(z(\mathbf x),\tilde \eta+\delta_{\mathbf x}) \cap S(z(\mathbf y),\tilde \eta_{S(z(\mathbf x),\tilde \eta+\delta_{\mathbf x})}+\eta_{S(z(\mathbf x),\tilde \eta+\delta_{\mathbf x})^c}+\delta_{\mathbf x}) \neq {\varnothing}\}\,\nonumber\\
&\qquad \times \mathbf 1\{\mathbf x \neq \mathbf y\}\,g(\mathbf x,\tilde \eta+\delta_{\mathbf x})\,g(\mathbf y,\tilde \eta_{S(z(\mathbf x),\tilde \eta+\delta_{\mathbf{x}})}+\eta _{S(z(\mathbf x),\tilde \eta+\delta_{\mathbf{x}})^c}+\delta_{\mathbf{x}})\nonumber\\
&\qquad \times (\tilde \eta_{S(z(\mathbf x),\tilde \eta+\delta_{\mathbf{x}})}+\eta _{S(z(\mathbf x),\tilde \eta+\delta_{\mathbf{x}})^c}+\delta_{\mathbf{x}})^{(m)}(\mathrm{d} \mathbf y) \lambda^{m} (\mathrm{d}\mathbf x),\label{eqn:tv-meckeout}
\end{align}
where we have used that by Definition \ref{defstab}(iv) it holds that $(\tilde \eta+\delta_{\mathbf{x}})_{S(z(\mathbf x),\tilde \eta+\delta_{\mathbf{x}})}=\tilde \eta_{S(z(\mathbf x),\tilde \eta+\delta_{\mathbf{x}})}+\delta_{\mathbf{x}}$.
Now we use Lemma \ref{stopmecam} with the stopping set $\mu \mapsto S(z(\mathbf x),\mu+\delta_{\mathbf x})$ and with the function $h:\textbf{N}\times \textbf{N} \to [0,\infty)$ that maps $(\mu,\phi)$ to
\begin{align*}
\sum\limits_{\mathbf{y} \in (\mu+\phi+\delta_\mathbf{x})^{(m)}}\mathbf{1}\{S(z(\mathbf{x}),\mu+\delta_\mathbf{x})\cap S(z(\mathbf{y}),\mu+\phi+\delta_\mathbf{x})\neq {\varnothing}\}\, \mathbf{1}\{\mathbf x \neq \mathbf{y}\}\, g(\mathbf{x},\mu)\,g(\mathbf{y},\mu+\phi+\delta_\mathbf{x}),
\end{align*}
Then we find that \eqref{eqn:tv-meckeout} is given by
\begin{align*}
&\frac{1}{(m!)^2}\mathbb{E}\int_{\mathbb{X}^m} \int_{\mathbb{X}^m} \mathbf 1 \{S(z(\mathbf x), \eta+\delta_{\mathbf{x}})\cap S(z(\mathbf y),\eta+\delta_{\mathbf{x}}) \neq {\varnothing}\}\,\mathbf 1\{\mathbf x \neq \mathbf y\} g(\mathbf{x},\eta+\delta_{\mathbf{x}})g(\mathbf y,\eta+\delta_{\mathbf{x}})\nonumber\\
&\qquad \qquad\times (\eta+\delta_{\mathbf{x}})^{(m)}(\mathrm{d} \mathbf y) \lambda^{m} (\mathrm{d}\mathbf x).
\end{align*}
Here we distinguish by the number $0\le\ell \le m-1$ of elements that $\mathbf{x}$ and $\mathbf{y}$ have in common (note that $\ell=m$ is not possible since $\mathbf{x}\neq \mathbf{y}$). This gives for the above
\begin{align*}
&\sum\limits_{\ell=0}^{m-1}\frac{1}{m!\,(m-\ell)!}\mathbb{E}\int_{\mathbb{X}^m} \int_{\mathbb{X}^{m-\ell}} \mathbf 1 \{S(z(\mathbf x_\ell,\mathbf y), \eta+\delta_{\mathbf{x}})\cap S(z(\mathbf y),\eta+\delta_{\mathbf{x}}) \neq {\varnothing}\} \,\nonumber\\
&\qquad \times g((\mathbf{x}_\ell,\mathbf{y}),\eta+\delta_{\mathbf{x}})\,g(\mathbf y,\eta+\delta_{\mathbf{x}})\,\eta^{(m-\ell)}(\mathrm{d}\mathbf y) \lambda^{m} (\mathrm{d}\mathbf x)\nonumber\\
&\quad=\sum\limits_{\ell=0}^{m-1}\frac{1}{m!\,(m-\ell)!}\mathbb{E}\int_{\mathbb{X}^m} \int_{\mathbb{X}^{m-\ell}} \mathbf 1 \{S(z(\mathbf x_\ell,\mathbf y), \eta+\delta_{(\mathbf{x},\mathbf y)})\cap S(z(\mathbf y),\eta+\delta_{(\mathbf{x},\mathbf y)}) \neq {\varnothing}\} \,\nonumber\\
&\qquad \times g((\mathbf{x}_\ell,\mathbf{y}),\eta+\delta_{(\mathbf{x},\mathbf y)})\,g(\mathbf y,\eta+\delta_{(\mathbf{x},\mathbf y)})\lambda^{m-\ell}(\mathrm{d}\mathbf y) \lambda^{m} (\mathrm{d}\mathbf x).
\end{align*}
Here, all terms with $\ell \neq 0$ form the term $T_5$ from the statement of the lemma. For $\ell =0$ we distinguish by the sizes of the stabilization radii. This gives the terms $T_3$ and $T_4$. \end{proof}
\begin{proof} [Proof of Theorem \ref{maithmallg}]
By Lemma \ref{lemxiz}, the process $\xi^{w!}$ defined at \eqref{def:xiz} is a Palm version of $\xi$ for $\mathbb{E}\xi$-almost all $w \in \mathbb{X}$. Hence, we find from \cite[Theorem 2.6]{barbour1992stein} that
\begin{align*}
\mathbf{d_{TV}} (\xi, \,\nu) \le \|\mathbb{E} \xi- \mathbb{E} \nu \| + \int\limits {\mathbb E} \|\xi-\xi^{w!}\|\,\mathbb E \xi(\mathrm{d}w).
\end{align*}
Now we invoke Lemma \ref{lem+} and Lemma \ref{lem-} to bound the integral. This finishes the proof of Theorem \ref{maithmallg}. \end{proof}
\section{Maximum cells in the Poisson-Voronoi mosaic} \label{s5}
In this section we apply Theorem \ref{maithmallg} to point processes of centres of large cells in the Poisson-Voronoi mosaic. Let $\mathbb{X}=\mathbb{R}^d$ ($d \ge 2$), equipped with the Borel $\sigma$-field $\mathcal{B}^d$, the $d$-dimensional Lebesgue measure ${\lambda}_d$ and the standard scalar product $\langle \cdot,\cdot \rangle$. For $\mu \in \mathbf{N}_s$ and $x \in \mu$ the {\em Voronoi cell} $C(x,\mu)$ is the set of all points $y \in {\mathbb R}^d$ with $|y-x|\le \min_{z \in \mu} |y-z|$, where $|\cdot|$ is the Euclidean norm. It is a closed convex set with interior points. If $\eta$ is a stationary Poisson process in $\mathbb{R}^d$, the system $\{C(x,\eta):\,x \in \eta\}$ is called the {\em Poisson-Voronoi mosaic}. For an in-depth introduction to the theory of (random) mosaics we refer to Section 10 in \cite{schneider2008stochastic}.
Next we explain how we measure the size of a Voronoi cell. Let $\mathcal{K}_o^d$ denote the space of all convex bodies (nonempty, compact and convex sets) $K \subset \mathbb{R}^d$ containing the origin $o$ as an interior point and equip $\mathcal{K}_o^d$ with the Hausdorff metric. Following \cite{hug2007asymptotic}, let $k>0$ and call a map $\Sigma: \mathcal{K}_o^d \to [0,\infty)$ a {\em size functional} if $\Sigma$ is continuous, not identically $0$, $k$-homogeneous (i.e.\ $\Sigma(aK)=a^k \Sigma(K)$ for all $a>0$ and $K \in \mathcal{K}_o^d$, where $aK:=\{ax:\,x \in K\}$) and increasing under set inclusion (i.e.\ $\Sigma(K_1) \le \Sigma(K_2)$ for all $K_1,K_2 \in \mathcal{K}_o^d$ with $K_1 \subset K_2$). The {\em centred inradius} $\rho_o$ (i.e.\ the radius of the largest ball with centre $o$ contained in $K$) is an example of a $1$-homogeneous size functional. The {\em $k$th intrinsic volume} $V_k$ ($k \in [d]$) of $K$ is an example of a $k$-homogeneous size functional. These size functionals are discussed in more detail in Example \ref{PVTEx1}.
We study cells in the Poisson-Voronoi mosaic that are large with respect to a size functional $\Sigma$. Let $c>0$, $W \subset \mathbb{R}^d$ be compact, $\eta_\gamma$ be a stationary Poisson process with intensity $\gamma>0$ and let $\Sigma$ be a $k$-homogeneous size functional. We slightly abuse the notation and write $\Sigma(C(x,\mu)):=\Sigma(C(x,\mu)-x)$ for $\mu \in \mathbf N_s$ with $x \in \mu$. For a threshold $v_{c,\gamma}>0,\,\gamma > 0,$ (to be specified in \eqref{PVT:defvn} below) we consider the process
\begin{align}
\xi_{c,\gamma}:=\sum\limits_{x \in \eta_\gamma} \mathbf{1}\{\Sigma(C(x,\eta_\gamma))>v_{c,\gamma}\}\,\delta_{x}. \label{PVTxi}
\end{align}
Here, the threshold $v_{c,\gamma}$ is chosen such that the intensity measure $\mathbb{E}\xi_{c,\gamma}$ of $\xi_{c,\gamma}$ satisfies
\begin{align}
\mathbb{E} \xi_{c,\gamma}(A)=c\lambda_d(A),\quad\gamma >0,\, A \in \mathcal B^d. \label{PVTintvt}
\end{align}
Note that by the Mecke equation \eqref{mecke} and by stationarity of $\eta_\gamma$ we have
\begin{align}
\mathbb{E} \xi_{c,\gamma}(A)=\gamma \lambda_d(A)\,{\mathbb P}(\Sigma(C(o,\eta_\gamma+\delta_o))>v_{c,\gamma}).\label{PVTmecke}
\end{align}
To see that $v_{c,\gamma}$ can be chosen such that \eqref{PVTintvt} holds, note that by \cite[Section 9]{hug2007asymptotic} the distribution $\mathbb{P}^{\Sigma(C(o,\eta_\gamma+\delta_o))}$ of $\Sigma(C(o,\eta_\gamma+\delta_o))$ and the Lebesgue measure $\lambda_1$ are equivalent measures on $[0,\infty)$. Together with \eqref{PVTmecke}, this implies that the choice
\begin{align}
v_{c,\gamma}:=\inf\{v>0:\,\gamma\mathbb{P}(\Sigma(C(o,\eta_\gamma+\delta_o))>v)>c\},\quad \gamma >0,\label{PVT:defvn}
\end{align}
indeed satisfies \eqref{PVTintvt}.
We need to introduce some more notation. For $K \in \mathcal{K}_o^d$ let $h_K(u):=\max\{\langle x,u \rangle:\,x \in K\},\,u\in \mathbb S^{d-1},$ be the {\em support function} of $K$. Define
\begin{align*}
\Phi(K):=\frac 1d \int_{\mathbb S^{d-1}}h_K(u)^d\, \sigma(\mathrm{d}u),
\end{align*}
where we write $\sigma$ for the uniform distribution on the unit sphere $\mathbb{S}^{d-1}$ in ${\mathbb R}^d$. There is a constant $\tau >0$ such that $\Phi$ and $\Sigma$ satisfy the sharp isoperimetric inequality
\begin{align} \label{lowbvor}
\Phi(K)\ge \tau \Sigma(K)^{d/k},\quad K \in \mathcal{K}_o^d.
\end{align}
That this inequality is sharp means that there is some $K \in \mathcal{K}_o^d$ with more than one point for which equality holds in \eqref{lowbvor} (see \cite[Section 3]{hug2007asymptotic}). Every such body is called an {\em extremal body}. For example, if $\Sigma$ is the $d$-dimensional volume, then $\tau=(d \kappa_d)^{-1}$ and the extremal bodies are exactly the $d$-dimensional balls centred at the origin $o$.
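Indeed, in the volume case the extremality of balls can be verified directly: since $\sigma$ is a probability measure and $h_{rB^d}\equiv r$ for the ball $rB^d$ of radius $r>0$ centred at $o$, we have
\begin{align*}
\Phi(rB^d)=\frac 1d \int_{\mathbb S^{d-1}} r^d\, \sigma(\mathrm{d}u)=\frac{r^d}{d}=\frac{\kappa_d r^d}{d\kappa_d}=\tau\, V_d(rB^d),
\end{align*}
so equality holds in \eqref{lowbvor} for every such ball.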
We call a non-negative, continuous, $0$-homogeneous functional $\vartheta: \mathcal{K}_o^d \to [0,\infty)$ a {\em deviation functional} if, for every $K \in \mathcal{K}_o^d$ with $\Sigma(K)>0$, we have ${\vartheta}(K)=0$ if and only if $K$ is an extremal body. Such deviation functionals always exist. For example,
\begin{align*}
\vartheta(K):=\frac{\Phi(K)}{\tau \Sigma(K)^{d/k}}-1
\end{align*}
from (5) in \cite{hug2007asymptotic} defines a deviation functional. There exists a continuous function $f:[0,\infty) \to [0,\infty)$ with $f(0)=0$ and $f({\varepsilon})>0$ for ${\varepsilon}>0$ such that
\begin{align*}
\Phi(K)\ge (1+f({\varepsilon})) \tau \Sigma(K)^{d/k} \quad \text{for } {\vartheta}(K)\ge {\varepsilon},
\end{align*}
which sharpens \eqref{lowbvor}. Any such function is called a {\em stability function}.
The following statement is Theorem 1 in \cite{hug2007asymptotic} (specialized to the Poisson-Voronoi mosaic). Suppose that a stationary Poisson process $\eta$ with intensity $\gamma$, a size functional $\Sigma$, a deviation functional ${\vartheta}$ and a stability function $f$ (for $\Phi$, $\Sigma$ and ${\vartheta}$) are given. Then there exists a positive constant $c_0$ (depending only on $\tau$) such that for all ${\varepsilon}>0$ and $v>0$ we have
\begin{align}
\mathbb{P}({\vartheta}(C(o,\eta_\gamma+\delta_o)) \ge {\varepsilon} \mid \,\Sigma(C(o,\eta_\gamma+\delta_o))>v) \le c_1 \exp(-c_0f({\varepsilon}) v^{d/k}\gamma),\label{PVT:shape}
\end{align}
where $c_1>0$ depends only on $d$, $\Sigma$, $f$ and ${\varepsilon}$.
Next we determine the asymptotic behavior of $v_{c,\gamma}$ as $\gamma \to \infty$. Since $v_{c,\gamma}\to \infty$ as $\gamma \to \infty$ (which follows from the definition of $v_{c,\gamma}$ together with the equivalence of $\mathbb{P}^{\Sigma(C(o,\eta_\gamma+\delta_o))}$ and $\lambda_1$ on $[0,\infty)$) we find from Theorem 2 in \cite{hug2007asymptotic} that
\begin{align*}
\lim_{\gamma \to \infty} \gamma^{-1}v_{c,\gamma}^{-d/k}\log {\mathbb P}(\Sigma(C(o,\eta_\gamma+\delta_o))>v_{c,\gamma}) =-2^d d \kappa_d \tau,\quad c>0,
\end{align*}
where $\tau$ is the constant from \eqref{lowbvor}. Since $\gamma^{-1}v_{c,\gamma}^{-d/k}\log \gamma$ is given by
\begin{align*}
\gamma^{-1} v_{c,\gamma}^{-d/k}\log {\mathbb P}(\Sigma(C(o,\eta_\gamma+\delta_o))>v_{c,\gamma}) \Big(\frac{\log(\gamma{\mathbb P}(\Sigma(C(o,\eta_\gamma+\delta_o))>v_{c,\gamma}))}{\log \gamma}-1\Big)^{-1}
\end{align*}
and since $\gamma{\mathbb P}(\Sigma(C(o,\eta_\gamma+\delta_o))>v_{c,\gamma})=c$ for all $\gamma>0$ we conclude that $\gamma^{-1}v_{c,\gamma}^{-d/k}\log \gamma\to 2^d d\kappa_d \tau$ as $\gamma \to \infty$.
In our Poisson approximation result for $\xi_{c,\gamma}$ we will need the following condition on the extremal bodies of the size functional $\Sigma$. For $K \in \mathcal{K}_o^d$ let $r_o(K)$ be the radius of the smallest ball with center $o$ containing $K$ (centred circumradius) and $\rho_o(K)$ be the radius of the largest ball with center $o$ contained in $K$ (centred inradius). We assume that there exists a function $h:[0,\infty)\to (0,1]$ such that for some $\varepsilon >0$:
\begin{align}
\frac{\rho_o(K)}{r_o(K)} \ge h({\varepsilon}) \quad\text{for all }K\in \mathcal{K}_o^d \text{ with }{\vartheta}(K)<{\varepsilon}.\label{PVTass1}
\end{align}
The following theorem is the main result of this section.
\begin{theorem} \label{PVTmaith}
Suppose that a stationary Poisson process $\eta_\gamma$ with intensity $\gamma>0$, a size functional $\Sigma$, a deviation functional ${\vartheta}$ and a stability function $f$ are given and that \eqref{PVTass1} holds for some ${\varepsilon}>0$. Let $W \subset \mathbb{R}^d$ be compact, $c>0$ and let $\nu_c$ be a stationary Poisson process with intensity $c$. Then we have for all $\delta>0$
\begin{align*}
\mathbf{d_{TV}}(\xi_{c,\gamma} \cap W,\,\nu_c\cap W) \le C \gamma^{\delta-\min\big[c_0 f({\varepsilon}), h({\varepsilon})^d k(h({\varepsilon}))\big]},\quad \gamma>0,
\end{align*}
where the constant $C>0$ does not depend on $\gamma$. Here, $c_0$ is the constant from \eqref{PVT:shape} and $k(a)$ is the volume of the intersection of an infinite cone with apex $o$ and angular radius $\arcsin\big(\frac{a}{\sqrt{1+a^2}}\big)$ with a $d$-dimensional ball centred at $o$ with volume 1.
\end{theorem}
As the proof will show, the exponent of $\gamma$ on the right-hand side of the statement in Theorem \ref{PVTmaith} has a clear geometric interpretation. While the first term $c_0f({\varepsilon})$ comes from \eqref{PVT:shape} and can be interpreted as the approximation error of a large Voronoi cell by an extremal body, the second term $h({\varepsilon})^d k(h({\varepsilon}))$ stems from a stabilization result for Voronoi cells whose shape is close to that of an extremal body (see Lemma \ref{stabaug}).
We now demonstrate how Theorem \ref{PVTmaith} applies to concrete size functionals $\Sigma$ and how it can be used to derive extreme value statements for large cells in the Poisson-Voronoi mosaic.
\begin{example} \label{PVTEx1} (a) Let $\Sigma:=\rho_o$ be the centred inradius. Then $\tau=1/d$ and the isoperimetric inequality \eqref{lowbvor} reads $\rho_o(K)^d\le d\Phi(K),\, K \in \mathcal{K}_o^d,$ where equality holds if and only if $K$ is a $d$-dimensional ball centred at the origin $o$. We choose the deviation functional
\begin{align*}
{\vartheta}(K):=\frac{r_o(K)-\rho_o(K)}{r_o(K)+\rho_o(K)} \in [0,1].
\end{align*}
Hence, for ${\varepsilon} \in (0,1)$ we have that ${\vartheta}(K)< {\varepsilon}$ if and only if $\frac{\rho_o(K)}{r_o(K)}> \frac{1-{\varepsilon}}{1+{\varepsilon}}$. Therefore, \eqref{PVTass1} holds with $h({\varepsilon}):=\frac{1-{\varepsilon}}{1+{\varepsilon}}$ and a stability function is given by $f({\varepsilon}):=\big(\frac{1+{\varepsilon}}{1-{\varepsilon}}\big)^d-1$. Since
\begin{align*}
\{\rho_o(C(x,\eta_\gamma+\delta_x))>v\}=\{\eta_\gamma \cap B_{2v}(x)=\varnothing\}
\end{align*}
we choose $v_{c,\gamma}^d:=2^{-d}\kappa_d^{-1} \gamma^{-1} \log (\gamma/c)$. Letting $c:=e^{-t}$ for $t \in \mathbb{R}$ and $$M:=\min\Big[c_0\Big(\big(\frac{1+{\varepsilon}}{1-{\varepsilon}}\big)^d-1\Big), \big(\frac{1-{\varepsilon}}{1+{\varepsilon}}\big)^d k\Big(\frac{1-{\varepsilon}}{1+{\varepsilon}}\Big)\Big]>0$$ we find from Theorem \ref{PVTmaith} that for all $\delta>0$
\begin{align*}
\Big|\mathbb{P}(2^d \kappa_d \gamma \max_{x \in \eta_\gamma \cap W}\rho_o(C(x,\eta_\gamma))^d-\log \gamma \le t )-\exp(-\lambda_d(W)e^{-t})\Big|\le C \gamma^{\delta-M},\quad t \in \mathbb{R},\,\gamma>0.
\end{align*}
This shows that $\max_{x \in \eta_\gamma \cap W}\rho_o(C(x,\eta_\gamma))^d$ is in the domain of attraction of the Gumbel distribution and quantifies the rate of convergence in statement (2a) of Theorem 1 in \cite{calka2014extreme}.\\
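The choice of $v_{c,\gamma}$ made above indeed satisfies \eqref{PVTintvt}: by the void probability of the Poisson process and $v_{c,\gamma}^d=2^{-d}\kappa_d^{-1}\gamma^{-1}\log(\gamma/c)$,
\begin{align*}
\gamma\,\mathbb{P}(\rho_o(C(o,\eta_\gamma+\delta_o))>v_{c,\gamma})=\gamma\, e^{-\gamma \kappa_d (2v_{c,\gamma})^d}=\gamma\, e^{-\log(\gamma/c)}=c.
\end{align*}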
(b) Let $\Sigma:=V_k$ ($1 \le k \le d$) be the $k$th intrinsic volume (in particular, $V_d$ is the volume, $dV_{d-1}$ is the surface area and $2V_1/\kappa_d$ is the mean width). From (15) in \cite{hug2007asymptotic} we have that
\begin{align*}
\frac 1d \Big(\frac{ k! (d-k)!\kappa_{d-k}}{ d!\kappa_d} \Big)^{d/k} V_k(K)^{d/k} \le \Phi(K),\quad K \in \mathcal{K}_o^d.
\end{align*}
As in (a), the extremal bodies are precisely the $d$-dimensional balls with centre at $o$. Hence, ${\vartheta}$ and $h$ can be chosen as in (a).
\end{example}
As a preparation for the proof of Theorem \ref{PVTmaith} we show that the function
\begin{align}
g(x,\mu)=\mathbf 1\{x \in W\} \mathbf 1\{\Sigma(C(x,\mu))>v_{c,\gamma}\},\quad x \in \mathbb{R}^d,\,\mu \in \mathbf{N}_s, \label{gvor}
\end{align}
is stabilizing in the sense of Definition \ref{defstab} and we construct a stabilization radius that satisfies the conditions from Definition \ref{defstab}. Following Section 6.3 in \cite{penrose2007gaussian} let $K_i(x),\,1 \le i \le I,$ be a finite collection of infinite open cones in ${\mathbb R}^d$ with angular radius $\pi/6$, apex at $x$ and union ${\mathbb R}^d$. For $\mu \in \mathbf {N}_s$ we define
\begin{align}
R_i(x,\mu):&=\inf\{r>0:\,\mu \cap K_i(x)\cap B_r(x)\neq{\varnothing}\},\quad 1 \le i \le I,\label{stabivor}\\
R(x,\mu):&=2\max\limits_{1 \le i \le I} R_i(x,\mu).\label{stabvor}
\end{align}
Then we have
\begin{align*}
C(x,\mu+\delta_x)=C(x,\mu_{B(x,2R(x,\mu))}+\delta_x)
\end{align*}
which implies that $g$ from \eqref{gvor} is stabilizing. Since ${\mathbb P}(R(x,\eta_\gamma)<\infty)=1$ and since $\mu \mapsto B(x,R(x,\mu))$ is a stopping set we conclude that $R$ is a stabilization radius.
By construction we find that
\begin{align}
{\mathbb P}(R(o,\eta_\gamma+\delta_o)>r)\le 1-(1-e^{-\gamma (r/2)^d/I})^{I}\sim e^{-\gamma (r/2)^d/I} \quad \text{as }r\to \infty.\label{PVTRass}
\end{align}
Next we derive a more refined stabilization property that holds in a mosaic whose underlying point configuration is augmented by a given element $y \in \mathbb{R}^d$. Let $\mathbb S_{\{y-x\}^\perp}$ denote the unit sphere in the linear subspace orthogonal to $y-x$. We define the infinite cone
\begin{align*}
K_a(x,y):=\{y+t(y-x) +at |y-x|u:\,t> 0,\,u \in \mathbb S_{\{y-x\}^\perp} \}.
\end{align*}
Hence, $K_a(x,y)$ has apex $y$, axis $y-x$ and angular radius $\arcsin\big(\frac{a}{\sqrt{1+a^2}}\big)$.
In the proof of our main theorem of this section we will make use of the following statement.
\begin{lemma}\label{stabaug}
Let $x,y \in \mathbb{R}^d$ such that $\mu+\delta_{(x,y)} \in \mathbf{N}_s$. Let ${\vartheta}(C(x,\mu+\delta_{(x,y)}))\le {\varepsilon}$ for some ${\varepsilon}>0$ and assume that condition \eqref{PVTass1} holds for ${\varepsilon}>0$. Then we have
\begin{align*}
C(x,\mu+\delta_{(x,y)})=C(x,\mu \cap K_a(x,y)^c+\omega \cap K_a(x,y)+\delta_{(x,y)}),\quad \omega \in \mathbf N_s,
\end{align*}
where $a:=\frac{h({\varepsilon})}{\sqrt{1-h({\varepsilon})^2}}$ with $h$ from \eqref{PVTass1}.
\end{lemma}
\begin{figure}
\caption{The cone $K_a(x,y)$ is marked in blue.}
\end{figure}
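Note that for $a=\frac{h({\varepsilon})}{\sqrt{1-h({\varepsilon})^2}}$ as in Lemma \ref{stabaug} we have
\begin{align*}
\frac{a}{\sqrt{1+a^2}}=\frac{h({\varepsilon})/\sqrt{1-h({\varepsilon})^2}}{1/\sqrt{1-h({\varepsilon})^2}}=h({\varepsilon}),
\end{align*}
so the cone $K_a(x,y)$ from Lemma \ref{stabaug} has angular radius $\arcsin(h({\varepsilon}))$.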
\begin{proof}
First note that $\rho_o(C(x,\mu+\delta_{(x,y)}))\le \frac{|x-y|}{2}$. By \eqref{PVTass1}, this together with the condition ${\vartheta}(C(x,\mu+\delta_{(x,y)}))\le {\varepsilon}$ yields that $r_o(C(x,\mu+\delta_{(x,y)}))\le \frac{|x-y|}{2h({\varepsilon})}$. Hence, it suffices to show that
\begin{align}
|z-w| \ge \frac{|x-y|}{2h({\varepsilon})}\quad \text{for all } z \in \{y\} \cup K_a(x,y),\,w\in \partial B\Big(x,\frac{|x-y|}{2h({\varepsilon})}\Big) \cap H_x^+(y),\label{stabaughalv}
\end{align}
where $H_x^+(y)$ is the closed half-space that contains $x$ and which is delimited by the bisecting hyperplane of $[x,y]$. We use the parametrizations $z=y+t_1(y-x)+t_2|x-y|v$ and $w=y+s_1(y-x)+s_2|x-y|u$ with
$$s_1 \in \Big[-1-\frac{1}{2h({\varepsilon})},-\frac 12\Big],\quad s_2=\sqrt{\frac{1}{4h({\varepsilon})^2}-(s_1+1)^2}, \quad t_1\ge 0,\quad t_2 \in \Bigg[0,\frac{h({\varepsilon})}{\sqrt{1-h({\varepsilon})^2}}t_1\Bigg]$$
and $u,v \in \mathbb S_{\{y-x\}^\perp}$. We have
\begin{align*}
\frac{|z-w|^2}{|x-y|^2} &\ge (t_1-s_1)^2+(t_2-s_2)^2
\end{align*}
which attains its minimum value $\frac{1}{4h({\varepsilon})^2}$ for $t_1=0$, $t_2=0$ and $s_1=-\frac 12$ (under the constraints on $s_1$, $s_2$, $t_1$ and $t_2$ from above). This shows \eqref{stabaughalv}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{PVTmaith}]
For $\gamma>0$ we apply Theorem \ref{maithmallg} with the function $g$ from \eqref{gvor}, $z(x):=x$, $\lambda:=\gamma \lambda_d \cap W$ and the stabilization radius $R$ defined at \eqref{stabvor}. Let
\begin{align}
b_x:=b_\gamma=2(3I \gamma^{-1} \log \gamma)^{1/d},\quad x \in \mathbb{R}^d, \label{PVTdefbt}
\end{align}
and let $\nu_c$ be a stationary Poisson process in $\mathbb{R}^d$ with intensity $c$. From Theorem \ref{maithmallg} (with $T_5=0$ since $m=1$) we obtain that
\begin{align*}
\mathbf{d_{TV}}(\xi_{c,\gamma} \cap W,\nu_{c} \cap W)\le \|\mathbb{E} (\xi_{c,\gamma}\cap W)-\mathbb{E}(\nu_{c}\cap W)\|+T_1+T_2+T_3+T_4
\end{align*}
with
\begin{align*}
T_1&=2\gamma \mathbb{E} \xi_{c,\gamma}(W) \int_{W} \mathbb{P}(\Sigma(C(x,\eta_\gamma+\delta_x))>v_{c,\gamma},\,R(x,\eta_\gamma+\delta_x)>b_\gamma)\,\mathrm{d}x,\\
T_2&=\gamma^2\int_{W} \int_{W}\,\mathbf 1\{|x-y|\le 2b_\gamma\}\mathbb{P}(\Sigma(C(x,\eta_\gamma+\delta_x))>v_{c,\gamma}) \mathbb{P}(\Sigma(C(y,\eta_\gamma+\delta_y))>v_{c,\gamma})\,\mathrm{d}y\,\mathrm{d}x,\\
T_3&=\gamma^2\int_{W} \int_{W} \mathbb{P}(\Sigma(C(x,\eta_\gamma+\delta_x))>v_{c,\gamma},\,\Sigma(C(y,\eta_\gamma+\delta_y))>v_{c,\gamma},\,R(x,\eta_\gamma+\delta_x)>b_\gamma)\,\mathrm{d}y\,\mathrm{d}x,\\
T_4&=\gamma^2\int_{W} \int_{W}\mathbf 1\{|x-y|\le 2b_\gamma\}\, \mathbb{E} \mathbf 1\{\Sigma(C(x,\eta_\gamma+\delta_{(x,y)}))>v_{c,\gamma},\,\Sigma(C(y,\eta_\gamma+\delta_{(x,y)}))>v_{c,\gamma}\}\\
&\qquad \times \mathbf 1\{R(x,\eta_\gamma+\delta_{(x,y)})\le b_\gamma,\,R(y,\eta_\gamma+\delta_{(x,y)})\le b_\gamma\}\,\mathrm{d}y\,\mathrm{d}x,
\end{align*}
where $\mathrm{d}x$ and $\mathrm{d}y$ denote integration with respect to the Lebesgue measure $\lambda_d$. In the following, $\alpha_i$ ($i \in \mathbb{N}$) are positive constants that do not depend on $\gamma$; their precise values are not important for the argument.
First we show that the total variation term on the right-hand side vanishes. Note that by the Mecke equation, by stationarity of $\eta_\gamma$ and by \eqref{PVTintvt} we have
\begin{align*}
\mathbb{E} \xi_{c,\gamma}(A)=\gamma \lambda_d(A) \mathbb{P}(\Sigma(C(o,\eta_\gamma+\delta_o))>v_{c,\gamma})=c\lambda_d(A)=\mathbb{E}\nu_c(A),\quad A \in\mathcal B^d,
\end{align*}
which yields that the intensity measures of $\xi_{c,\gamma}$ and $\nu_{c}$ coincide for all $\gamma>0$ and $c>0$.
{\em The estimate of $T_1$.} We bound the probability in the integral of $T_1$ by
\begin{align*}
{\mathbb P}(R(x,\eta_\gamma+\delta_x)>b_\gamma)\le 1-\prod_{i=1}^{I} \mathbb{P}(R_i(x,\eta_\gamma+\delta_x)\le b_\gamma/2)=1-(1-\mathrm e^{-\gamma (b_\gamma/2)^d/I})^{I}\le Ie^{-\gamma (b_\gamma/2)^d/I}
\end{align*}
where $R_i(x,\mu)$ is defined at \eqref{stabivor}. Using the definition of $b_\gamma$, we find that
\begin{align*}
T_1 \le \frac{2I \gamma\,\mathbb{E} \nu_{c}(W)\,\lambda_d(W)}{\gamma^3}\le \frac{\alpha_1}{\gamma^2}.
\end{align*}
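Here the choice \eqref{PVTdefbt} of $b_\gamma$ enters through the identity
\begin{align*}
\gamma (b_\gamma/2)^d/I=\gamma\,\big(3I\gamma^{-1}\log\gamma\big)/I=3\log \gamma,\qquad\text{so that}\qquad Ie^{-\gamma (b_\gamma/2)^d/I}=I\gamma^{-3}.
\end{align*}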
{\em The estimate of $T_2$.} Since $\Sigma$ is translation-invariant and $\eta_\gamma$ is stationary, $T_2$ is bounded by
\begin{align*}
T_2 \le \gamma^2 \lambda_d(W) \lambda_d(B_{2b_\gamma}(o)) \mathbb{P}(\Sigma(C(o,\eta_\gamma+\delta_o))>v_{c,\gamma})^2\le \frac{\alpha_2 \log \gamma}{\gamma}
\end{align*}
where we have used the definitions of $b_\gamma$ and of $v_{c,\gamma}$ to obtain the second inequality.
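Explicitly, $\mathbb{P}(\Sigma(C(o,\eta_\gamma+\delta_o))>v_{c,\gamma})=c/\gamma$ by \eqref{PVTmecke} and \eqref{PVTintvt}, and $\lambda_d(B_{2b_\gamma}(o))=\kappa_d(2b_\gamma)^d=4^d\, 3I\kappa_d\,\gamma^{-1}\log\gamma$, so that
\begin{align*}
\gamma^2\, \lambda_d(W)\, \lambda_d(B_{2b_\gamma}(o))\,\Big(\frac{c}{\gamma}\Big)^2 = c^2\lambda_d(W)\,\kappa_d\, 4^d\, 3I\,\frac{\log\gamma}{\gamma}.
\end{align*}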
{\em The estimate of $T_3$.} We estimate the probability in the integral of $T_3$ in the same way as we did for $T_1$. This yields the bound
\begin{align*}
T_3 \le \gamma^2 \lambda_d(W)^2\, Ie^{-\gamma (b_\gamma/2)^d/I}\le \frac{\alpha_3}{\gamma}.
\end{align*}
{\em The estimate of $T_4$.} Let ${\varepsilon}>0$ be the constant from Theorem \ref{PVTmaith}. We distinguish by the shape of $C(x,\eta_\gamma+\delta_{(x,y)})$ which gives the bound
\begin{align}
&2\gamma^2 \int_{W} \int_{W} \mathbf 1\{|x-y|\le 2b_\gamma\} \mathbb{P}(\Sigma(C(x,\eta_\gamma+\delta_{(x,y)}))>v_{c,\gamma},\,{\vartheta}(C(x,\eta_\gamma+\delta_{(x,y)}))>\varepsilon)\,\mathrm{d}y\,\mathrm{d}x\label{eqn:vorT4a}\\
&\quad + \gamma^2\int_{W} \int_{W} \mathbf 1\{|x-y|\le 2b_\gamma\} \, \mathbb{E}\,\mathbf 1\{\Sigma(C(x,\eta_\gamma+\delta_{(x,y)}))>v_{c,\gamma},\,{\vartheta}(C(x,\eta_\gamma+\delta_{(x,y)}))<\varepsilon\}\nonumber\\
&\qquad \qquad \times \mathbf 1\{\Sigma(C(y,\eta_\gamma+\delta_{(x,y)}))>v_{c,\gamma},\,{\vartheta}(C(y,\eta_\gamma+\delta_{(x,y)}))<\varepsilon\}\,\mathrm{d}y\,\mathrm{d}x.\label{eqn:vorT4b}
\end{align}
Since the probability in \eqref{eqn:vorT4a} involves the augmented process $\eta_\gamma+\delta_{(x,y)}$ instead of $\eta_\gamma+\delta_x$, we cannot directly invoke \eqref{PVT:shape}. Instead, we apply the Mecke formula to the inner integral and use that $\eta_\gamma$ is stationary. This gives
\begin{align*}
2\gamma^2 \lambda_d(W)\, \mathbb{E}\sum_{y \in \eta_\gamma \cap B_{2b_\gamma}} \mathbf 1\{\Sigma(C(o,\eta_\gamma+\delta_{o}))>v_{c,\gamma},\,{\vartheta}(C(o,\eta_\gamma+\delta_{o}))>\varepsilon\}.
\end{align*}
Next we distinguish according to the number $B_\gamma:=\eta_\gamma (B_{2b_\gamma})$ of points of $\eta_\gamma$ in $B_{2b_\gamma}$. This yields for \eqref{eqn:vorT4a} the bound
\begin{align}
&2\gamma^2 \lambda_d(W) \,\mathbb{E} B_\gamma\mathbf 1\{B_\gamma \le 2\mathbb{E} B_\gamma\} \mathbf1 \{\Sigma(C(o,\eta_\gamma+\delta_{o}))>v_{c,\gamma},\,{\vartheta}(C(o,\eta_\gamma+\delta_{o}))>\varepsilon\}\label{eqn:vorT4aa}\\
&\quad +2\gamma^2 \lambda_d(W) \,\mathbb{E} B_\gamma\mathbf 1\{B_\gamma > 2\mathbb{E} B_\gamma\} \mathbf1 \{\Sigma(C(o,\eta_\gamma+\delta_{o}))>v_{c,\gamma},\,{\vartheta}(C(o,\eta_\gamma+\delta_{o}))>\varepsilon\}.\label{eqn:vorT4ab}
\end{align}
Since $\mathbb{E} B_{\gamma}=\gamma(2b_\gamma)^d \kappa_d =4^d 3I \log (\gamma) \kappa_d$ we obtain for \eqref{eqn:vorT4aa} the bound
\begin{align*}
&4\gamma^2 \lambda_d(W) \,\mathbb{E} B_\gamma\,\mathbb{P}(\Sigma(C(o,\eta_\gamma+\delta_{o}))>v_{c,\gamma})\,\mathbb{P}({\vartheta}(C(o,\eta_\gamma+\delta_{o}))>\varepsilon \mid \,\Sigma(C(o,\eta_\gamma+\delta_{o}))>v_{c,\gamma})\\
&\quad \le \alpha_4 \log(\gamma) \exp(-c_0 f(\varepsilon) v_{c,\gamma}^{d/k}\gamma),
\end{align*}
where we have used \eqref{PVT:shape} to obtain the inequality. For \eqref{eqn:vorT4ab} we find by Cauchy-Schwarz the bound
\begin{align}
2 \gamma^2 \lambda_d(W) \sqrt{\mathbb{E}B_\gamma} \sqrt{\mathbb P(B_\gamma >2 \mathbb{E}B_\gamma)}. \label{eqn:vorT4abcher}
\end{align}
Recall that $B_\gamma$ follows a Poisson distribution with parameter $\mathbb{E} B_\gamma$. Hence, we find from the Chernoff bound \cite[Section 5.3]{mitzenmacher2017probability}
\begin{align}
{\mathbb P}(X>u)\le \frac{(\mathrm e \lambda)^u \mathrm e^{-\lambda}}{\mathrm e^{u \log u}},\quad u >\lambda,\label{cher}
\end{align}
where $X$ is Poisson distributed with parameter $\lambda>0$, that \eqref{eqn:vorT4abcher} is bounded by
\begin{align*}
2 \gamma^2 \lambda_d(W) \sqrt{\mathbb{E}B_\gamma} \sqrt{\frac{(\mathrm e \mathbb{E} B_\gamma)^{2 \mathbb{E}B_\gamma}\mathrm e^{-\mathbb{E}B_\gamma}}{\mathrm e ^{2 \mathbb{E} B_\gamma \log (2 \mathbb{E} B_\gamma)}}} &\le 2 \gamma^2\lambda_d(W) \sqrt{\mathbb{E}B_\gamma} \mathrm e^{-\mathbb E B_\gamma (\log 2-1/2)}\\
&\le \alpha_5 \gamma^{2-4^d3I\kappa_d(\log 2-1/2)}\sqrt{\log \gamma},
\end{align*}
where we have used that $\mathbb{E} B_{\gamma}=4^d 3I \log (\gamma) \kappa_d$.
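For completeness, the simplification of the Chernoff bound \eqref{cher} at $u=2\lambda$ can be carried out explicitly:
\begin{align*}
\frac{(\mathrm e\lambda)^{2\lambda}\mathrm e^{-\lambda}}{\mathrm e^{2\lambda\log(2\lambda)}} =\mathrm e^{2\lambda}\,\lambda^{2\lambda}\,\mathrm e^{-\lambda}\,(2\lambda)^{-2\lambda} =\mathrm e^{\lambda(1-2\log 2)},
\end{align*}
and taking the square root yields the factor $\mathrm e^{-\lambda(\log 2-1/2)}$ with $\lambda=\mathbb{E}B_\gamma$.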
To bound \eqref{eqn:vorT4b} we use that \eqref{lowbvor} and the trivial relation $r_o(K) \ge h_u(K),\,u\in \mathbb{S}^{d-1},$ imply that $r_o(K)^d\ge d\tau \Sigma(K)^{d/k}$. Since we have assumed \eqref{PVTass1}, this gives for $\Sigma(K)>v$ and ${\vartheta}(K)<{\varepsilon}$ that $\rho_o(K)^d>h({\varepsilon})^dd\tau v^{d/k}$. Hence, \eqref{eqn:vorT4b} is bounded by
\begin{align*}
& \gamma^2 \int_{W} \int_{W} \mathbf 1\{|x-y|\le 2b_\gamma\} \, \mathbb{E}\mathbf 1\{\rho_o(C(y,\eta_\gamma+\delta_{(x,y)}))^d>h({\varepsilon})^d d\tau v_{c,\gamma}^{d/k}\}\nonumber\\
&\quad \quad \times \mathbf 1\{\Sigma(C(x,\eta_\gamma+\delta_{(x,y)}))>v_{c,\gamma},\,{\vartheta}(C(x,\eta_\gamma+\delta_{(x,y)}))<\varepsilon\}\,\mathrm{d}y\,\mathrm{d}x.
\end{align*}
Now we use Lemma \ref{stabaug} and exploit that $\rho_o(C(y,\eta_\gamma+\delta_{(x,y)}))>s$ implies that $\eta_\gamma \cap B_{2s}(y)={\varnothing}$. This gives the bound
\begin{align*}
&\gamma^2 \int_{W} \int_{W} \mathbf 1\{|x-y|\le 2b_\gamma\} \, \mathbb{E} \mathbf 1\{\eta_\gamma \cap B_{2h({\varepsilon})(d\tau)^{1/d} v_{c,\gamma}^{1/k}}(y)={\varnothing}\}\nonumber\\
&\quad \quad \times\,\mathbf 1\{\Sigma(C(x,\eta_\gamma\cap K_a(x,y)^c+\delta_{(x,y)}))>v_{c,\gamma},\,{\vartheta}(C(x,\eta_\gamma\cap K_a(x,y)^c+\delta_{(x,y)}))<\varepsilon\}\,\mathrm{d}y\,\mathrm{d}x,
\end{align*}
where $a:=\frac{h({\varepsilon})}{\sqrt{1-h({\varepsilon})^2}}$. Since the processes $\eta_\gamma \cap K_a(x,y)$ and $\eta_\gamma \cap K_a(x,y)^c$ are independent, we arrive at the bound
\begin{align*}
&\gamma^2 \int_{W} \int_{W} \mathbf 1\{|x-y|\le 2b_\gamma\} \,\mathbb{P}(\eta_\gamma \cap K_a(x,y) \cap B_{2h({\varepsilon})(d\tau)^{1/d} v_{c,\gamma}^{1/k}}(y)={\varnothing})\nonumber\\
&\quad \quad \times \,\mathbb{P}(\Sigma(C(x,\eta_\gamma\cap K_a(x,y)^c+\delta_{(x,y)}))>v_{c,\gamma})\,\mathrm{d}y\,\mathrm{d}x.
\end{align*}
Now we use Lemma \ref{stabaug} again and note that $\lambda_d( K_a(x,y) \cap B_{r}(y))=r^d \kappa_d k(h({\varepsilon}))$ for $r>0$. Hence, the above is bounded by
\begin{align*}
&\gamma c\,\lambda_d(W) \lambda_d(B_{2b_\gamma}) \,\exp\Big(-2^{d} h({\varepsilon})^d d \tau v_{c,\gamma}^{d/k}\kappa_dk(h({\varepsilon}))\gamma \Big)\\
&\quad \le \alpha_6 \log (\gamma) \exp\Big(-2^{d} h({\varepsilon})^d d \tau v_{c,\gamma}^{d/k}\kappa_dk(h({\varepsilon}))\gamma \Big).
\end{align*}
Finally, we collect the bounds on the $T$-terms to complete the proof. This gives
\begin{align*}
& \mathbf{d_{TV}}(\xi_{c,\gamma} \cap W,\nu_{c,\gamma} \cap W)\le \frac{\alpha_1}{\gamma^2}+\frac{\alpha_2 \log \gamma}{\gamma}+ \frac{\alpha_3}{\gamma}+\alpha_4 \log(\gamma) \exp(-c_0 f({\varepsilon})v_{c,\gamma}^{d/k}\gamma)\\
&\qquad +\alpha_5 \gamma^{2-4^d3I\kappa_d(\log 2-1/2)} \sqrt{\log \gamma}+ \alpha_6 \log (\gamma) \exp\Big(-2^{d} h({\varepsilon})^d d \tau v_{c,\gamma}^{d/k}\kappa_dk(h({\varepsilon}))\gamma \Big).
\end{align*}
Using that $\gamma^{-1} v_{c,\gamma}^{-d/k}\log \gamma\to 2^d d\kappa_d \tau$ as $\gamma \to \infty$, we conclude that for all $\delta>0$,
\begin{align*}
& \mathbf{d_{TV}}(\xi_{c,\gamma} \cap W,\nu_{c,\gamma} \cap W)\le C \gamma^{\delta-\min\big[c_0 f({\varepsilon}), h({\varepsilon})^d k(h({\varepsilon}))\big]}
\end{align*}
for a constant $C>0$ that does not depend on $\gamma$.
\end{proof}
\section{Maximum cells in the Poisson-Delaunay mosaic}
In this section we apply Theorem \ref{maithmallg} to processes of centres of large cells in the Poisson-Delaunay mosaic. As in the previous section we work in the Euclidean space $\mathbb{X}=\mathbb{R}^d$ ($d \ge 2$) with Borel $\sigma$-field $\mathcal{B}^d$ and $d$-dimensional Lebesgue measure ${\lambda}_d$. Let $\mu \in \mathbf{N}_s$ and $\mathbf{x}:=(x_1,\dots,x_{d+1})\in \mu^{(d+1)}$ be in general position. Let $B(\mathbf{x})$ be the (unique) open $d$-dimensional ball that has the points $x_1,\dots,x_{d+1}$ on its boundary and let $z(\mathbf{x})$ be its centre. If $\mu \cap B(\mathbf{x})={\varnothing}$, we call the simplex $S(\mathbf{x}):=\mathrm{conv}(x_1,\dots,x_{d+1})$ a {\em Delaunay cell}. The system of all such cells is called {\em Delaunay mosaic}.
Let $\Delta$ be the space of all $d$-simplices with circumcentre at the origin $o$, equipped with the Hausdorff metric. For $k>0$ we call $\Sigma: \Delta \to \mathbb{R}$ a {\em size functional} if it is continuous and $k$-homogeneous, attains a maximum on the set of simplices with vertices on the unit sphere, and $V_d/\Sigma^{1/k}$ is bounded (where $V_d$ is the volume). Examples of size functionals are the volume and the inradius (see Example \ref{PDTEx1}). We slightly abuse notation and write $\Sigma(S(\mathbf x)):=\Sigma(S(\mathbf x)-z(\mathbf x))$ for $\mathbf x:=(x_1,\dots,x_{d+1})\in (\mathbb{R}^d)^{d+1}$ in general position. For $c >0$, a threshold $v_{c,\gamma}>0,\,\gamma>0,$ (to be specified below) and a stationary Poisson process $\eta_\gamma$ of intensity $\gamma$ we consider the process
\begin{align}
\xi_{c,\gamma}:=\frac{1}{(d+1)!} \sum\limits_{\mathbf{x} \in \eta_\gamma^{(d+1)}} \mathds{1}\{\eta_\gamma \cap B(\mathbf{x})=\varnothing\}\,\mathds{1}\{\Sigma(S(\mathbf{x}))>v_{c,\gamma}\} \,\delta_{z(\mathbf{x})}.\label{PDTxit}
\end{align}
Particularly useful in the asymptotic study of random cells is the notion of the typical cell $Z_\gamma$ in a Delaunay mosaic generated by a Poisson process with intensity $\gamma$. This is any random simplex with distribution given by
\begin{align}
{\mathbb P}(Z_\gamma \in \cdot)=\frac{1}{\gamma \beta_d (d+1)! } {\mathbb E} \int \mathds{1}\{z(\mathbf{x}) \in [0,1]^d \} \mathds{1}\{S(\mathbf{x})\in \cdot\} \mathds{1}\{\eta_\gamma \cap B(\mathbf{x})={\varnothing}\} \eta_\gamma^{(d+1)}(\mathrm{d}\mathbf{x}), \label{PDTtypZ}
\end{align}
with some constant $\beta_d>0$ that only depends on $d$ (see \cite{schneider2008stochastic}, p. 450 and (10.31)). This allows us to write the intensity measure of $\xi_{c,\gamma}$ as
\begin{align}
{\mathbb E} \xi_{c,\gamma}(A)=\gamma \beta_d {\lambda}_d(A){\mathbb P}(\Sigma(Z_\gamma)>v_{c,\gamma}),\quad A \in \mathcal B^d. \label{PDTintxittyp}
\end{align}
We choose the threshold $v_{c,\gamma}$ in \eqref{PDTxit} as
\begin{align}
v_{c,\gamma}:=\inf\{v>0:\,\gamma\beta_d{\mathbb P}(\Sigma(Z_\gamma)>v)\le c\}. \label{PDTvtinf}
\end{align}
By \eqref{PDTintxittyp} and Lemma \ref{PDTcont} below it holds that $\mathbb{E} \xi_{c,\gamma}=c \lambda_d$.
For $S \in \Delta$ let $r(S)$ denote the circumradius of $S$ and define $\tau:=\max \{\Sigma(S):\,S \in \Delta,\, r(S)=1\}$. Since $\Sigma$ is $k$-homogeneous we obtain that
\begin{align}
\Sigma(S)\le \tau r(S)^k,\quad S \in \Delta.\label{lowb}
\end{align}
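Indeed, \eqref{lowb} is immediate from $k$-homogeneity: for $S \in \Delta$ with $r(S)>0$, scaling about the circumcentre stays in $\Delta$, so
\begin{align*}
\Sigma(S)=r(S)^k\,\Sigma\big(r(S)^{-1}S\big)\le r(S)^k\max\{\Sigma(S'):\,S'\in\Delta,\,r(S')=1\}=\tau\, r(S)^k.
\end{align*}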
If equality holds in \eqref{lowb}, we call $S \in \Delta$ an {\em extremal simplex}. Following \cite{hug2005large} we define a functional ${\vartheta}$ that measures the deviation of a simplex $S \in \Delta$ from a regular simplex as follows. Let $S\in \Delta$ and $u_1,\dots,u_{d+1}\in \mathbb{S}^{d-1}$ be such that $\text{conv}(u_1,\dots,u_{d+1})$ is a regular simplex. Then we define ${\vartheta}(S)$ as the smallest number $\alpha>0$ such that there are points $v_1,\dots,v_{d+1}\in \mathbb{S}^{d-1}$ such that $\text{conv}(v_1,\dots,v_{d+1})$ is similar to $S$ and $\|u_i-v_i\| \le \alpha$ for $i\in [d+1]$. Note that ${\vartheta}(S)=0$ if and only if $S$ is a regular simplex.
As in \cite{hug2005large} we call $f:[0,1) \to [0,1)$ a {\em stability function} of $\Sigma$ and ${\vartheta}$ if $f(0)=0,\,f({\varepsilon}) >0$ for ${\varepsilon} >0$ and
\begin{align*}
\Sigma(S) \le (1-f({\varepsilon}) )r(S)^k\tau \quad \text{for all }S \in \Delta\text{ with }{\vartheta}(S) \ge {\varepsilon}.
\end{align*}
Let $\Sigma$, ${\vartheta}$ and $f$ be as above and $v,\,{\varepsilon}>0$. It was shown in Theorem 1 in \cite{hug2005large} that there is a constant $c_0>0$ depending only on $\Sigma,\,{\vartheta},\,f$ and $d$ such that
\begin{align}
{\mathbb P}(\vartheta(Z_\gamma) \ge {\varepsilon} \mid\,\Sigma(Z_\gamma)\ge v) \le c_1\,\exp(-c_0f({\varepsilon})v^{d/k}\gamma),\quad \gamma>0, \label{af}
\end{align}
where $c_1>0$ depends only on $d,\,{\varepsilon},\,\Sigma,\,{\vartheta},\,f$.
Next we determine the asymptotic behaviour of $v_{c,\gamma}$ for fixed $c$ as $\gamma \to \infty$. By \eqref{PDTvtinf} and Lemma \ref{PDTcont} we have that $v_{c,\gamma} \to \infty$ as $\gamma \to \infty$. Hence, Theorem 2 in \cite{hug2005large} implies that
\begin{align}
\lim\limits_{\gamma\to \infty} \gamma^{-1} v_{c,\gamma}^{-d/k}\log {\mathbb P}(\Sigma(Z_\gamma)>v_{c,\gamma}) =-\kappa_d \tau^{-d/k}. \label{PDTvass}
\end{align}
Since $\gamma^{-1} v_{c,\gamma}^{-d/k} \log \gamma$ is given by
\begin{align*}
\gamma^{-1}v_{c,\gamma}^{-d/k} \log {\mathbb P}(\Sigma(Z_\gamma)>v_{c,\gamma}) \Big(\frac{\log(\gamma{\mathbb P}(\Sigma(Z_\gamma)>v_{c,\gamma}))}{\log \gamma}-1\Big)^{-1}
\end{align*}
we conclude from \eqref{PDTvass} and the definition of $v_{c,\gamma}$ that $\gamma^{-1} v_{c,\gamma}^{-d/k} \log \gamma\to \kappa_d \tau^{-d/k}$ as $\gamma \to \infty$.
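The rearrangement used above is elementary: writing $p_\gamma:={\mathbb P}(\Sigma(Z_\gamma)>v_{c,\gamma})$,
\begin{align*}
\gamma^{-1}v_{c,\gamma}^{-d/k}\log p_\gamma\Big(\frac{\log(\gamma p_\gamma)}{\log\gamma}-1\Big)^{-1} =\gamma^{-1}v_{c,\gamma}^{-d/k}\,\frac{\log p_\gamma\,\log\gamma}{\log(\gamma p_\gamma)-\log\gamma} =\gamma^{-1}v_{c,\gamma}^{-d/k}\log\gamma,
\end{align*}
and since $\gamma p_\gamma$ stays bounded by the definition of $v_{c,\gamma}$, the bracket tends to $-1$, so the limit $\kappa_d\tau^{-d/k}$ follows from \eqref{PDTvass}.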
The next statement is the main result of this section.
\begin{theorem} \label{PDTmaith}
Suppose that $\eta_\gamma$ is a stationary Poisson process with intensity $\gamma>1$, that $\Sigma$ is a size functional, and that ${\vartheta}$ is a deviation functional with stability function $f$ and with $\tau$ from \eqref{lowb}. Assume that all extremal simplices of $\Sigma$ are regular. Let $c>0$, let $W \subset \mathbb{R}^d$ be compact and let $\nu_c$ be a stationary Poisson process with intensity $c$. Then we have for all $\delta>0$ and all ${\varepsilon}\in (0,1/d)$
\begin{align*}
\mathbf{d_{TV}}(\xi_{c,\gamma} \cap W, \nu_c \cap W)\le C \gamma^{\delta-\min[c_0f({\varepsilon}),1-I({\varepsilon})]},\quad \gamma>1,
\end{align*}
where the constant $C>0$ does not depend on $\gamma$. Here, $I({\varepsilon}):=\int_{\frac{d-1}{d}+{\varepsilon}}^1 (1-s^2)^{d/2}\,\mathrm{d}s$.
\end{theorem}
In the following example we discuss applications of Theorem \ref{PDTmaith} to concrete size functionals.
\begin{example} \label{PDTEx1} (a) For the volume $\Sigma=V_d$ it was shown in \cite{hug2004large} that the extremal simplices are regular and that there is a constant $c_d>0$ only depending on $d$ such that for every ${\varepsilon}\in (0,1)$ we have that
\begin{align*}
V_d(S)\le (1-c_d{\varepsilon}^2)r(S)^d\tau\quad \text{for all }S \in \Delta \text{ with }{\vartheta}(S)>{\varepsilon}.
\end{align*}
Hence, a stability function is given by $f({\varepsilon})=c_d {\varepsilon}^2$. In dimension $d=2$ the distribution of the volume of the typical cell in the Poisson-Delaunay mosaic is known explicitly from \cite{rathie1992volume} and given by
\begin{align*}
{\mathbb P}(V_2(Z_\gamma)>v)=\frac{8\pi}{9} \int_{\gamma v}^\infty u\, K_{1/6}^2\left(\frac{2\pi u}{3 \sqrt{3}}\right)\,\mathrm{d}u,\quad v>0,
\end{align*}
where $K_{1/6}(u)$ is the modified Bessel function of order $1/6$. Since $K_{1/6}(u)=\sqrt{\frac{\pi}{2u}}\,\mathrm e^{-u}(1+o(1))$ as $u \to \infty$ we find for $v_{c,\gamma}:=\frac{3 \sqrt{3}}{4 \pi \gamma}\log \big(\frac{3\gamma}{c}\big)$ that
\begin{align*}
\lim_{\gamma \to \infty} \gamma \mathbb P(V_2(Z_\gamma)>v_{c,\gamma})=c.
\end{align*}
Letting $c:=e^{-t}$ for $t \in \mathbb{R}$ we obtain from Theorem \ref{PDTmaith} for all ${\varepsilon}\in (0,1/2)$:
\begin{align*}
\Big|\mathbb{P}\Big(\frac{4 \pi \gamma}{3 \sqrt{3}}\max_{\mathbf x \in \eta_\gamma^{(3)}}\{V_2(S(\mathbf x)):\,z(\mathbf x)\in W, \eta_\gamma \cap B(\mathbf x)={\varnothing}\}-\log(3\gamma)\le t\Big)-\exp(-\lambda_2(W)e^{-t})\Big|\le C\gamma^{\delta-M}
\end{align*}
for all $t \in \mathbb{R}$ and $\gamma>1$, where $M:=\min[c_0 c_d {\varepsilon}^2,1/6-{\varepsilon}+(1/2-{\varepsilon})^3/3]$. This quantifies the result from Section 3.2 in \cite{chenavier2014general}.\\
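The passage from Theorem \ref{PDTmaith} to the displayed estimate is the usual void-probability argument; a sketch, under the assumption that $\{\xi_{c,\gamma}(W)=0\}$ coincides with the event that no Delaunay cell with centre in $W$ has volume exceeding $v_{c,\gamma}$:
\begin{align*}
\big|\mathbb P(\xi_{c,\gamma}(W)=0)-\mathbb P(\nu_c(W)=0)\big|\le \mathbf{d_{TV}}(\xi_{c,\gamma} \cap W,\nu_c \cap W),
\end{align*}
where $\mathbb P(\nu_c(W)=0)=\exp(-c\lambda_2(W))=\exp(-\lambda_2(W)\mathrm e^{-t})$ for $c=\mathrm e^{-t}$, while for this choice of $c$ the event $\{\xi_{c,\gamma}(W)=0\}$ is exactly the event in the display, since $v_{c,\gamma}=\frac{3\sqrt 3}{4\pi\gamma}(\log(3\gamma)+t)$.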
(b) Let $\Sigma(S)=\rho(S)$ be the inradius (i.e. the radius of the largest ball inscribed in $S$). Then the extremal simplices are again precisely the regular simplices. It was shown in \cite[Section 4]{hug2005large} that for the constant $c_d$ from (a) and ${\varepsilon}\in [0,1]$ it holds that
\begin{align*}
\rho(S)\le (1-c_d{\varepsilon}^2/d)r(S)\tau\quad \text{for all }S \in \Delta \text{ with }{\vartheta}(S)>{\varepsilon}.
\end{align*}
Hence, a stability function is given by $f({\varepsilon})=c_d {\varepsilon}^2/d$.
\end{example}
The following spherical Blaschke-Petkantschin formula (Theorem 7.3.1 in \cite{schneider2008stochastic}) will play a fundamental role in the proof of Theorem \ref{PDTmaith}. For $\mathbf{u}:=(u_1,\dots,u_{d+1})$ let $\Delta_{d}(\mathbf{u})$ be the $d$-dimensional volume of the convex hull $\mathrm{conv}(u_1,\dots,u_{d+1})$ of $u_1,\dots,u_{d+1}$. Let $\sigma$ be the uniform probability distribution on the unit sphere $\mathbb{S}^{d-1}$ and $\kappa_d:=\pi^{d/2}/\Gamma(1+d/2),\,d \in \mathbb{N},$ be the volume of the $d$-dimensional unit ball, where $\Gamma(\cdot)$ denotes the Gamma function. Let $f:({\mathbb R}^d)^{d+1}\to [0,\infty)$ be a measurable function. Then we have
\begin{align}
\int\limits_{({\mathbb R}^d)^{d+1}} f\,\mathrm{d}\lambda_d^{d+1}=d!(d\kappa_d)^{d+1} \int\limits_{{\mathbb R}^d} \int\limits_0^\infty\int\limits_{(\mathbb{S}^{d-1})^{d+1}} f(z+r\mathbf{u}) r^{d^2-1}\,\Delta_d(\mathbf{u})\,\sigma^{d+1}(\mathrm{d}\mathbf{u}) \,\mathrm{d}r\,\mathrm{d}z \label{bpclas}
\end{align}
where $z+r \mathbf{u}:=(z+ru_1,\dots,z+ru_{d+1})$.
From the Mecke formula and \eqref{PDTtypZ} we obtain for all $v \ge 0$ that
\begin{align*}
\mathbb{P}(r(Z_\gamma)>v)=\frac{\gamma^{d}}{\beta_d(d+1)!} \int \mathbf 1\{z(\mathbf x) \in [0,1]^d\} \mathbf 1\{r(\mathbf x)>v\} e^{-\gamma \kappa_d r(\mathbf x)^d}\,\lambda_d^{d+1}(\mathrm{d}\mathbf x),
\end{align*}
which is by \eqref{bpclas} given by
\begin{align}
\frac{\kappa_d d^dC_d}{\beta_d(d+1)} \int_{\gamma \kappa_d v^d}^\infty e^{-s} s^{d-1} \mathrm d s\quad \text{with}\quad C_d:=\int_{(\mathbb S^{d-1})^{d+1}} \Delta_d(\mathbf u)\,\sigma^{d+1}(\mathrm{d} \mathbf u).\label{PDT:defCd}
\end{align}
Using that $\mathbb{P}(r(Z_\gamma)>0)=1$ and that $\int_0^\infty e^{-s} s^{d-1}\,\mathrm d s=\Gamma(d)$, we find
\begin{align}
\mathbb{P}(\gamma \kappa_d r(Z_\gamma)^d>v)=\frac{1}{\Gamma(d)} \int_v^\infty e^{-s} s^{d-1}\,\mathrm{d}s,\quad v \ge 0,\label{PDTdistrd}
\end{align}
i.e. $\gamma \kappa_d r(Z_\gamma)^d$ follows a Gamma distribution with parameter $d$.
\begin{figure}
\caption{The figure shows the situation of Lemma \ref{PDTform} for $d=2$ and $\ell=2$. The dashed triangle is regular.}
\end{figure}
\begin{lemma} \label{PDTform}
For $\ell \in [d+1]$ let $\mathbf{x}\in ({\mathbb R}^d)^{d+1}$ and $\mathbf{y}\in ({\mathbb R}^d)^{d+1-\ell}$ be such that $\mathbf{x}$ and $(\mathbf{x}_\ell,\mathbf{y})$ are in general position. We assume that
\begin{align*}
B(\mathbf{x})\cap \{y_1,\dots,y_{d+1-\ell}\}={\varnothing},\qquad
B(\mathbf{x}_\ell,\mathbf{y}) \cap \{x_{\ell+1},\dots,x_{d+1}\}={\varnothing},
\end{align*}
and that ${\vartheta}(\mathbf x)<{\varepsilon}$ and ${\vartheta}(\mathbf{x}_\ell,\mathbf y)<{\varepsilon}$ for some ${\varepsilon} \in (0,1/d)$.
Then we have
\begin{align*}
\lambda_d(B(\mathbf{x})\cup B(\mathbf{x}_\ell,\mathbf{y}))\ge \big(1-I({\varepsilon})\big) \kappa_d r(\mathbf{x})^d+\kappa_d r(\mathbf{x}_\ell,\mathbf{y})^d
\end{align*}
with $I({\varepsilon}):=\int_{\frac{d-1}{d}+{\varepsilon}}^1 (1-s^2)^{d/2}\,\mathrm{d}s$.
\end{lemma}
\begin{proof} Let $\rho_s$ be the inradius (radius of the largest inscribed sphere) and $r_s$ be the circumradius (radius of the smallest circumscribing sphere) of a regular $d$-simplex. We use the fact that $d\rho_s=r_s$. For $1 \le k \le d+1$ let $H_{-k}$ be the (unique) hyperplane through $\{x_1,\dots,x_{d+1}\} \setminus \{x_k\}$. It follows from the definition of ${\vartheta}$ that ${\vartheta}(\mathbf{x})<{\varepsilon}$ implies that
\begin{align}
\min_{1 \le k \le d+1} d(z(\mathbf{x}),H_{-k}) \le r(\mathbf{x})\Big(\frac{1}{d}+{\varepsilon}\Big). \label{eqn:mindist}
\end{align}
Let $H_{-k}^-$ be the closed halfspace that is bounded by $H_{-k}$ and that does not contain $z(\mathbf x)$. We have
\begin{align}
\lambda_d(B(\mathbf{x}) \cup B(\mathbf{x}_\ell,\mathbf{y}))\ge \lambda_d(B(\mathbf{x}))+\lambda_d(B(\mathbf{x}_\ell,\mathbf{y}))-\max_{1 \le k \le d+1} \lambda_d(B(\mathbf{x}) \cap H_{-k}^-). \label{eqn:volcup}
\end{align}
For $d(z(\mathbf{x}),H_{-k})=ar(\mathbf{x})$ we have
\begin{align*}
\lambda_d(B(\mathbf{x}) \cap H_{-k}^-)=r(\mathbf{x})^d\kappa_d\int_{1-a}^1 (1-s^2)^{d/2}\,\mathrm{d}s.
\end{align*}
Hence, we find the assertion from \eqref{eqn:mindist} and \eqref{eqn:volcup}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{PDTmaith}]
Let $m=d+1$, $\gamma>0$, $c>0$ and let $W \subset \mathbb{R}^d$ be compact. We apply Theorem \ref{maithmallg} with $\lambda:=\gamma \lambda_d\cap W$ and
\begin{align*}
g(\mathbf{x},\mu)=\mathbf{1}\{z(\mathbf{x}) \in W\}\mathbf{1}\{\mu \cap B(\mathbf{x})=\varnothing,\,\Sigma(S(\mathbf{x}))>v_{c,\gamma}\}.
\end{align*}
We choose the stabilization radius $R(z,\mu):=\|z-x_1\|$ if there is a unique $(d+1)$-tuple $\mathbf{x}=(x_1,\dots,x_{d+1})\in \mu^{(d+1)}$ (up to permutations of the components of $\mathbf{x}$) such that $z(\mathbf{x})=z$ (in this case $R(z,\mu)=r(\mathbf x)$) and $R(z,\mu):=\infty$, otherwise. Let
\begin{align}
b_{\mathbf x}:= b_\gamma=(3\gamma^{-1} \kappa_d^{-1}\log \gamma)^{1/d}. \label{PDTdefbt}
\end{align}
This gives the bound
\begin{align*}
\mathbf{d_{TV}}(\xi_{c,\gamma} \cap W, \nu_c \cap W)\le \|\mathbb{E}(\xi_{c,\gamma}\cap W)-\mathbb{E}(\nu_{c}\cap W)\|+T_1+T_2+T_3+T_4+T_5
\end{align*}
where
\begin{align*}
T_1&=\frac{2\gamma^{d+1}\mathbb{E}\xi_{c,\gamma}(W)}{(d+1)!} \int\mathbf 1 \{z(\mathbf x)\in W\} \mathds 1\{\Sigma(S(\mathbf x))>v_{c,\gamma}\} \mathbf 1\{r(\mathbf x)>b_\gamma\}\mathbb{P}(\eta_\gamma \cap B(\mathbf x)={\varnothing})\,\lambda_d^{d+1}(\mathrm{d}\mathbf x),\\
T_2&=\frac{\gamma^{2d+2}}{((d+1)!)^2} \iint \mathbf 1 \{z(\mathbf x)\in W, |z(\mathbf{x})-z(\mathbf y)| \le 2b_\gamma\} \mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma},\Sigma(S(\mathbf y))>v_{c,\gamma}\} \\
&\qquad \qquad \times \mathbb{P}(\eta_\gamma \cap B(\mathbf x)={\varnothing})\, \mathbb{P}(\eta_\gamma \cap B(\mathbf y)={\varnothing})\,\lambda_d^{d+1}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x),\\
T_3&=\frac{\gamma^{2d+2}}{((d+1)!)^2} \iint \mathbf 1 \{z(\mathbf x)\in W, z(\mathbf{y})\in W\}\mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma},\Sigma(S(\mathbf y))>v_{c,\gamma}\} \\
&\qquad \qquad \times \mathbf 1\{r(\mathbf x)>b_\gamma\}\mathbb{P}((\eta_\gamma+\delta_{(\mathbf x,\mathbf y)}) \cap (B(\mathbf x) \cup B(\mathbf y))={\varnothing})\,\lambda_d^{d+1}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x),\\
T_4&=\frac{2\gamma^{2d+2}}{((d+1)!)^2} \iint \mathbf 1 \{z(\mathbf x)\in W, |z(\mathbf{x})-z(\mathbf y)| \le 2b_\gamma\}\mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma},\Sigma(S(\mathbf y))>v_{c,\gamma}\} \\
&\qquad \qquad \times \mathbf 1\{r(\mathbf x)>b_\gamma\}\mathbb{P}((\eta_\gamma+\delta_{(\mathbf x,\mathbf y)}) \cap (B(\mathbf x) \cup B(\mathbf y))={\varnothing})\,\lambda_d^{d+1}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x),\\
T_5&=\sum_{\ell=1}^{d}\frac{\gamma^{2d+2-\ell}}{\ell!(d+1-\ell)!} \iint \mathbf 1 \{z(\mathbf x)\in W, z(\mathbf{x}_\ell,\mathbf y)\in W\}\mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma},\Sigma(S(\mathbf x_\ell,\mathbf y))>v_{c,\gamma}\} \\
&\qquad \qquad \times \mathbb{P}((\eta_\gamma+\delta_{(\mathbf x,\mathbf y)}) \cap (B(\mathbf x) \cup B(\mathbf x_\ell,\mathbf y))={\varnothing})\,\lambda_d^{d+1-\ell}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x).
\end{align*}
From \eqref{PDTintxittyp} and the choice of $v_{c,\gamma}$ in \eqref{PDTvtinf} we find that $\mathbb{E} \xi_{c,\gamma}=\mathbb{E} \nu_c$. Hence, $\|\mathbb{E}(\xi_{c,\gamma}\cap W)-\mathbb{E}(\nu_{c}\cap W)\|=0$, $\gamma>1$. Next we bound the $T$-terms from the right-hand side. In the following, $\alpha_i>0$ ($i \in \mathbb{N}$) are positive constants that do not depend on $\gamma$. Their precise values are not important for the argument.
{\em The estimate of $T_1$.} From the definition of $b_\gamma$ we have that $T_1$ is bounded by
\begin{align*}
\frac{2\gamma^{d+1} c \lambda_d(W)}{(d+1)!} \int\mathbf 1 \{z(\mathbf x)\in W\} \mathbf 1\{\gamma \kappa_dr(\mathbf x)^d>3 \log \gamma\} e^{-\gamma \kappa_d r(\mathbf{x})^d}\,\lambda_d^{d+1}(\mathrm{d}\mathbf x).
\end{align*}
Now we invoke the definition of the typical cell $Z_\gamma$ from \eqref{PDTtypZ} and use \eqref{PDTdistrd}. This gives for the above
\begin{align}
2c\gamma \lambda_d(W)^2\beta_d\mathbb{P}(\gamma \kappa_d r(Z_\gamma)^d>3 \log \gamma)=\frac{2c\gamma \lambda_d(W)^2\beta_d }{\Gamma(d)} \int_{3 \log \gamma}^\infty e^{-s} s^{d-1} \,\mathrm ds\le \frac{\alpha_1 (\log \gamma)^{d-1}}{\gamma^2}.\label{PDT1bou}
\end{align}
{\em The estimate of $T_2$.} By definition of the typical cell $Z_\gamma$ we find that $T_2$ is given by
\begin{align}
\gamma^2 \lambda_d(B_{2b_\gamma}) \lambda_d(W) \beta_d^2 \mathbb{P}(\Sigma(Z_\gamma)>v_{c,\gamma})^2 \le \frac{\alpha_2 \log \gamma}{\gamma}. \label{PDT2bou}
\end{align}
{\em The estimate of $T_3$.} For the estimate of $T_3$ we assume (at the cost of a factor 2) that $r(\mathbf y)\le r( \mathbf x)$ and bound $T_3$ by
\begin{align*}
&\frac{\gamma^{2d+2}}{((d+1)!)^2} \iint \mathbf 1 \{z(\mathbf x)\in W, z(\mathbf{y})\in W\} \mathbf 1\{r(\mathbf x)>b_\gamma\} \mathbf 1\{r(\mathbf x)\ge r(\mathbf y)\} \\
&\qquad \qquad \times \mathbb{P}(\eta_\gamma\cap (B(\mathbf x) \cup B(\mathbf y)) ={\varnothing})\,\lambda_d^{d+1}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x).
\end{align*}
Now we apply \eqref{bpclas} to the inner integral and obtain for the above
\begin{align*}
&\frac{\gamma^{2d+2} \kappa_d d^d\lambda_d(W)C_d d!}{d^2((d+1)!)^2} \int r(\mathbf x)^{d^2} \mathbf 1 \{z(\mathbf x)\in W\}\mathbf 1\{r(\mathbf x)>b_\gamma\} \mathbb{P}(\eta_\gamma\cap B(\mathbf x) ={\varnothing})\,\lambda_d^{d+1}(\mathrm{d}\mathbf x)
\end{align*}
with the constant $C_d$ from \eqref{PDT:defCd}. We apply \eqref{bpclas} a second time and substitute $s:=\gamma \kappa_d r^d$. Thus we arrive at
\begin{align*}
\alpha_3 \gamma^{2d+2} \int_{b_\gamma}^\infty r^{2d^2-1} e^{-\gamma \kappa_d r^d} \mathrm d r=\alpha_4 \gamma^2 \int_{\gamma \kappa_d b_\gamma^d}^\infty s^{2d-1} e^{-s} \,\mathrm ds\le \frac{\alpha_5 (\log \gamma)^{2d-1}}{\gamma} .
\end{align*}
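The substitution used here is worth recording: since $r=(s/(\gamma\kappa_d))^{1/d}$ and $\mathrm d r=\frac{1}{d}(\gamma\kappa_d)^{-1/d}s^{1/d-1}\,\mathrm d s$,
\begin{align*}
\int_{b_\gamma}^\infty r^{2d^2-1} \mathrm e^{-\gamma\kappa_d r^d}\,\mathrm d r =\frac{1}{d(\gamma\kappa_d)^{2d}}\int_{\gamma\kappa_d b_\gamma^d}^\infty s^{2d-1}\mathrm e^{-s}\,\mathrm d s,
\end{align*}
so the prefactor $\gamma^{2d+2}$ becomes $\gamma^{2}$ up to constants, and $\gamma\kappa_d b_\gamma^d=3\log\gamma$ from \eqref{PDTdefbt} gives a tail of order $(\log\gamma)^{2d-1}\gamma^{-3}$.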
{\em The estimate of $T_4$.} Let ${\varepsilon}>0$. We consider the shapes of the simplices $S(\mathbf x)$ and $S(\mathbf y)$ and split $T_4$ into
\begin{align}
&\frac{2\gamma^{2d+2}}{((d+1)!)^2} \iint \mathbf 1\{{\vartheta}(S(\mathbf x))\le {\varepsilon} ,\,{\vartheta}(S(\mathbf y))\le {\varepsilon} \}\mathbf 1 \{z(\mathbf x)\in W, |z(\mathbf{x})-z(\mathbf y)| \le 2b_\gamma,\,r(\mathbf x)\le b_\gamma,\,r(\mathbf y)\le b_\gamma\}\nonumber\\
&\qquad \times \mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma},\Sigma(S(\mathbf y))>v_{c,\gamma}\} \mathbb{P}((\eta_\gamma+\delta_{(\mathbf x,\mathbf y)}) \cap (B(\mathbf x) \cup B(\mathbf y))={\varnothing})\,\lambda_d^{d+1}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x)\label{PDT4bouleps}\\
&\quad + \frac{4\gamma^{2d+2}}{((d+1)!)^2} \iint \mathbf 1\{{\vartheta}(S(\mathbf x))>{\varepsilon}\}\mathbf 1 \{z(\mathbf x)\in W, |z(\mathbf{x})-z(\mathbf y)| \le 2b_\gamma\}\mathbf 1\{r(\mathbf x)\le b_\gamma,\,r(\mathbf y)\le b_\gamma\} \nonumber\\
&\qquad\times \mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma},\Sigma(S(\mathbf y))>v_{c,\gamma}\}\mathbb{P}((\eta_\gamma+\delta_{(\mathbf x,\mathbf y)}) \cap (B(\mathbf x) \cup B(\mathbf y))={\varnothing})\,\lambda_d^{d+1}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x).\label{PDT4bougeps}
\end{align}
To estimate \eqref{PDT4bouleps} we assume (at the cost of a factor 2) that $r(\mathbf x)<r(\mathbf y)$ and use that Lemma \ref{PDTform} implies that for ${\vartheta}(S(\mathbf x))<{\varepsilon}$ and ${\vartheta}(S(\mathbf y))<{\varepsilon}$,
$$
\lambda_d(B(\mathbf x)\cup B(\mathbf y)) \ge (1-I({\varepsilon}))\kappa_d r(\mathbf{x})^d+\kappa_d r(\mathbf y)^d\ge (2-I({\varepsilon})) \kappa_d r(\mathbf {x})^d,
$$
where $I({\varepsilon}):=\int_{\frac {d-1}{d}+{\varepsilon}}^1 (1-s^2)^{d/2}\,\mathrm{d}s$. Since $|z(\mathbf x)-z(\mathbf y)|\le 2b_\gamma$ and $r(\mathbf y)\le b_\gamma$ imply by the triangle inequality that $|z(\mathbf x)-y_i| \le 3b_\gamma$, $i=1,\dots,d+1$, we find for \eqref{PDT4bouleps} the bound
\begin{align}
\frac{2 \gamma^{2d+2}\lambda_d(B_{3b_\gamma})^{d+1}}{((d+1)!)^2} \int \mathbf 1\{z(\mathbf x)\in W\} \mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma}\} e^{-\gamma \kappa_d (2-I({\varepsilon})) r(\mathbf{x})^d} \,\lambda_d^{d+1}(\mathrm{d}\mathbf{x}).\label{PDT4bouleps1}
\end{align}
Now we invoke \eqref{lowb} which says that $\tau r(\mathbf x)^k \ge\Sigma(S(\mathbf x))$ for $\mathbf x$ in general position. Hence, $r(\mathbf x)>(v_{c,\gamma}/\tau)^{1/k}$ for $\Sigma(S(\mathbf x))>v_{c,\gamma}$. Therefore, the above is bounded by
\begin{align}
\frac{2 \gamma^{2d+2}\lambda_d(B_{3b_\gamma})^{d+1}e^{-\gamma \kappa_d(1-I({\varepsilon}))\tau^{-d/k}v_{c,\gamma}^{d/k}}}{((d+1)!)^2} \int \mathbf 1\{z(\mathbf x)\in W\} \mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma}\} e^{-\gamma \kappa_d r(\mathbf{x})^d} \,\lambda_d^{d+1}(\mathrm{d}\mathbf{x}). \label{PDTvolbou}
\end{align}
Since the integral is by definition of $v_{c,\gamma}$ equal to $c \lambda_d(W)$, we find from the definition of $b_\gamma$ for \eqref{PDTvolbou} the bound
\begin{align*}
\alpha_6 (\log \gamma)^{d+1} \exp(-\gamma \kappa_d(1-I({\varepsilon}))\tau^{-d/k}v_{c,\gamma}^{d/k}).
\end{align*}
An analogous application of the triangle inequality as above yields for \eqref{PDT4bougeps} the bound
\begin{align*}
\frac{4 \gamma^{2d+2}\lambda_d(B_{3b_\gamma})^{d+1}}{((d+1)!)^2} \int \mathbf1 \{{\vartheta}(S(\mathbf x))>{\varepsilon}\}\mathbf 1\{z(\mathbf x)\in W\} \mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma}\} e^{-\gamma \kappa_d r(\mathbf{x})^d} \,\lambda_d^{d+1}(\mathrm{d}\mathbf{x}),
\end{align*}
for which we obtain by the definition of the typical cell $Z_\gamma$ and by \eqref{af} the bound
\begin{align*}
&\alpha_7 \gamma^{2d+2} \lambda_d(B_{3b_\gamma})^{d+1} \mathbb{P}(\Sigma(Z_\gamma)>v_{c,\gamma}) \mathbb{P}({\vartheta} (Z_\gamma)>{\varepsilon} \mid \, \Sigma(Z_\gamma)>v_{c,\gamma})\\
&\quad\le \alpha_8 (\log \gamma)^{d+1} \exp(-c_0 f({\varepsilon}) v_{c,\gamma}^{d/k}\gamma).
\end{align*}
{\em The estimate of $T_5$.} To estimate $T_5$ we distinguish by the circumradii $r(\mathbf x)$ and $r(\mathbf x_\ell,\mathbf y)$. This gives
\begin{align}
&\sum_{\ell=1}^{d}\frac{\gamma^{2d+2-\ell}}{\ell!(d+1-\ell)!} \iint \mathbf 1\{r(\mathbf x)\le b_\gamma,\,r(\mathbf x_\ell,\mathbf y)\le b_\gamma\}\,\mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma},\Sigma(S(\mathbf x_\ell,\mathbf y))>v_{c,\gamma}\}\ \nonumber \\
&\quad \times\mathbf 1 \{z(\mathbf x)\in W, z(\mathbf{x}_\ell,\mathbf y)\in W\} \mathbb{P}((\eta_\gamma+\delta_{(\mathbf x,\mathbf y)}) \cap (B(\mathbf x) \cup B(\mathbf x_\ell,\mathbf y))={\varnothing}) \,\lambda_d^{d+1-\ell}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x)\label{PDT5bourleb}\\
&\, +\sum_{\ell=1}^{d}\frac{2\gamma^{2d+2-\ell}}{\ell!(d+1-\ell)!} \iint \mathbf 1\{r(\mathbf x)>b_\gamma,\,r(\mathbf x)\ge r(\mathbf x_\ell,\mathbf y)\}\,\mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma},\Sigma(S(\mathbf x_\ell,\mathbf y))>v_{c,\gamma}\} \nonumber\\
&\quad \times\,\mathbf 1 \{z(\mathbf x)\in W, z(\mathbf{x}_\ell,\mathbf y)\in W\}\mathbb{P}((\eta_\gamma+\delta_{(\mathbf x,\mathbf y)}) \cap (B(\mathbf x) \cup B(\mathbf x_\ell,\mathbf y))={\varnothing})\, \lambda_d^{d+1-\ell}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x)\label{PDT5bourgeb}
\end{align}
where the factor 2 in the second term comes from the assumption $r(\mathbf x)\ge r(\mathbf x_\ell,\mathbf y)$. In \eqref{PDT5bourleb} we next distinguish by the shape of the simplices $S(\mathbf x)$ and $S(\mathbf x_\ell,\mathbf y)$. This gives the bound
\begin{align}
&\sum_{\ell=1}^{d}\frac{\gamma^{2d+2-\ell}}{\ell!(d+1-\ell)!} \iint \mathbf 1\{{\vartheta}(S(\mathbf x))\le {\varepsilon},\,{\vartheta}(S(\mathbf x_\ell,\mathbf y))\le {\varepsilon}\} \mathbf 1\{r(\mathbf x)\le b_\gamma,\,r(\mathbf x_\ell,\mathbf y)\le b_\gamma\} \nonumber \\
&\quad \times \mathbf 1 \{z(\mathbf x)\in W, z(\mathbf{x}_\ell,\mathbf y)\in W\}\, \mathbb{P}((\eta_\gamma+\delta_{(\mathbf x,\mathbf y)}) \cap (B(\mathbf x) \cup B(\mathbf x_\ell,\mathbf y))={\varnothing})\,\lambda_d^{d+1-\ell}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x) \label{PDT5bourlebleps}\\
&\, +\sum_{\ell=1}^{d}\frac{2\gamma^{2d+2-\ell}}{\ell!(d+1-\ell)!} \iint \mathbf 1\{{\vartheta}(S(\mathbf x))>{\varepsilon}\} \mathbf 1\{r(\mathbf x)\le b_\gamma,\,r(\mathbf x_\ell,\mathbf y)\le b_\gamma\} \mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma}\} \nonumber \\
&\quad \times\mathbf 1 \{z(\mathbf x)\in W, z(\mathbf{x}_\ell,\mathbf y)\in W\}\, \mathbb{P}((\eta_\gamma+\delta_{(\mathbf x,\mathbf y)}) \cap (B(\mathbf x) \cup B(\mathbf x_\ell,\mathbf y))={\varnothing})\,\lambda_d^{d+1-\ell}(\mathrm{d}\mathbf y)\,\lambda_d^{d+1}(\mathrm{d}\mathbf x) \label{PDT5bourlebgeps}.
\end{align}
For \eqref{PDT5bourlebleps} we exploit now the bound \eqref{PDTvolbou} and use that by the triangle inequality $|z(\mathbf x)-y_i|\le |z(\mathbf x)-x_1|+|x_1-z(\mathbf x_\ell,\mathbf y)|+|z(\mathbf x_\ell,\mathbf y)-y_i| \le r(\mathbf x)+2r(\mathbf x_\ell,\mathbf y)$ for $i=1,\dots,d+1-\ell$. This gives the bound
\begin{align*}
&\sum_{\ell=1}^{d}\frac{\gamma^{2d+2-\ell}\lambda_d(B_{3b_\gamma})^{d+1-\ell}}{\ell!(d+1-\ell)!} \int \mathbf 1 \{z(\mathbf x)\in W\}\mathbf 1\{\Sigma(S(\mathbf x))>v_{c,\gamma}\} e^{-\gamma \kappa_d (2-I({\varepsilon}))r(\mathbf x)^d}\,\lambda_d^{d+1}(\mathrm{d}\mathbf x),
\end{align*}
which can be bounded exactly as \eqref{PDT4bouleps1} above, so that we arrive at the bound
\begin{align*}
\alpha_9 (\log \gamma)^{d} \exp(-\gamma \kappa_d(1-I({\varepsilon}))\tau^{-d/k}v_{c,\gamma}^{d/k}).
\end{align*}
For \eqref{PDT5bourlebgeps} we use again that $|z(\mathbf x)-y_i|\le r(\mathbf x)+2r(\mathbf x_\ell,\mathbf y)$ for $i=1,\dots,d+1-\ell$. Analogously to the estimate of \eqref{PDT4bougeps} we find the bound
\begin{align*}
\alpha_{10} (\log \gamma)^{d} \exp(-c_0 f({\varepsilon}) v_{c,\gamma}^{d/k}\gamma).
\end{align*}
Finally, we discuss \eqref{PDT5bourgeb}. Since $|z(\mathbf x)-y_i|\le 3r(\mathbf x)$ for $r(\mathbf x_\ell,\mathbf y)\le r(\mathbf x)$, we find the bound
\begin{align*}
&\sum_{\ell=1}^{d}\frac{2\gamma^{2d+2-\ell}}{\ell!(d+1-\ell)!} \int \lambda_d(B_{3r(\mathbf x)})^{d+1-\ell}\mathbf 1\{r(\mathbf x)>b_\gamma\} \mathbf 1 \{z(\mathbf x)\in W\} \mathbb{P}(\eta_\gamma \cap B(\mathbf x) ={\varnothing})\,\lambda_d^{d+1}(\mathrm{d}\mathbf x)\\
&\, =\sum_{\ell=1}^{d}\frac{2(3^d\kappa_d)^{d+1-\ell}\gamma^{2d+2-\ell}}{\ell!(d+1-\ell)!} \int r(\mathbf x)^{d(d+1-\ell)}\mathbf 1\{r(\mathbf x)>b_\gamma\} \mathbf 1 \{z(\mathbf x)\in W\} \mathbb{P}(\eta_\gamma \cap B(\mathbf x) ={\varnothing})\,\lambda_d^{d+1}(\mathrm{d}\mathbf x).
\end{align*}
By \eqref{bpclas}, this is bounded by
\begin{align*}
d! \omega_d^{d+1} \lambda_d(W) \sum_{\ell=1}^{d}\frac{2(3^d\kappa_d)^{d+1-\ell}\gamma^{2d+2-\ell}}{\ell!(d+1-\ell)!} \int_{b_\gamma}^\infty r^{d(d+1-\ell)+d^2-1} e^{-\gamma \kappa_dr^d} \,\mathrm{d}r\le \frac{\alpha_{11} (\log \gamma)^{d}}{\gamma}.
\end{align*}
Collecting the bounds of the different $T$-terms, we obtain
\begin{align*}
\mathbf{d_{TV}}(\xi_{c,\gamma} \cap W, \nu_c \cap W)&\le \frac{\alpha_1 (\log \gamma)^{d-1}}{\gamma^2}+\frac{\alpha_2 \log \gamma}{\gamma}+ \frac{\alpha_5 (\log \gamma)^{2d-1}}{\gamma}+\frac { \alpha_{11} (\log \gamma)^{d}} {\gamma}\\
&\quad+(\alpha_6 (\log \gamma)^{d+1}+\alpha_9 (\log \gamma)^{d}) \exp(-\gamma \kappa_d(1-I({\varepsilon}))\tau^{-d/k}v_{c,\gamma}^{d/k})\\
&\quad+(\alpha_8 (\log \gamma)^{d+1}+\alpha_{10} (\log \gamma)^{d}) \exp(-c_0 f({\varepsilon}) v_{c,\gamma}^{d/k}\gamma).
\end{align*}
Hence, we conclude from the asymptotic form of $v_{c,\gamma}$ that for all $\delta>0$ there exists a constant $C>0$ such that
\begin{align*}
\mathbf{d_{TV}}(\xi_{c,\gamma} \cap W, \nu_c \cap W)\le C \gamma^{\delta-\min[c_0f({\varepsilon}),1-I({\varepsilon})]}.
\end{align*}
\end{proof}
\begin{lemma} \label{PDTcont}
For all $\gamma>0$ the distribution $\mathbb{P}^{\Sigma(Z_\gamma)}$ of $\Sigma(Z_\gamma)$ and the Lebesgue measure $\lambda_1$ are equivalent on $[0,\infty).$
\end{lemma}
\begin{proof}
\begin{comment}
The proof follows the strategy of Section 9 in \cite{hug2007asymptotic}. We first show that $\mathbb{P}_{\Sigma(Z)}$ is absolutely continuous with respect to $\lambda_1$ on $[\gamma^{-1/k},\infty)$. For $K \in \mathcal{K}^d$ let $D(K)$ be the diameter of $K$ and $\Delta(K):=\frac{D(K)}{c_1\Sigma(K)^{1/k}}$ be the {\em relative diameter}, where $c_1$ is chosen such that $\Delta(K)\ge 1$ for all $K \in \mathcal{K}^d$. For $a>0$, $h>0$ and $m \in \mathbb{N}$ let
\begin{align*}
\mathcal{K}_{a,h}(m):=\{K \in \mathcal{K}^d:\,\Sigma(K)\in a(1,1+h),\,\Delta(K)\in [m,m+1)\}
\end{align*}
and $q_{a,h}(m):=\mathbb{P}(\Sigma(Z)\in \mathcal{K}_{a,h}(m))$. Fix $\kappa>0$. Analogously to the argumentation before $(41)$ in \cite{hug2007asymptotic} (with \cite[Lemma 5]{hug2007asymptotic} replaced by \cite[Lemma 4.5]{hug2007typical}), we find some $\nu\in \mathbb{N}$ such that
\begin{align*}
q_{a,1}(m)\le c(\kappa)m^{d\nu} \exp(-(1-\kappa/4)\tau d^{1/k}\gamma).
\end{align*}
From \cite[Lemma 4.3]{hug2007typical} we obtain $q_{a,1}(m)\le c_2\exp(-c_3ma^{1/k}\gamma)$ for some $c_2,c_3 >0$. By \cite[Lemma 4.8]{hug2007typical} we have for some $c_4>0$
\begin{align*}
\mathbb{P}(\Sigma(Z)\in a(1,1+h))&=\sum_{m \in \mathbb{N}} q_{a,h}(m)=c_4h a^{1/k} \gamma \Big(\sum_{m\le m_0} q_{a,1}(m)+\sum_{m>m_0} q_{a,1}(m)\Big),
\end{align*}
where $m_0 \in \mathbb{N}$ is chosen such that
\begin{align}
c_3 m \ge 2(1-\kappa/4)\tau,\quad m >m_0. \label{eqn:estm}
\end{align}
Now we can argue analogously to the proof of Proposition 7.1 in \cite{HRS2004limit} (where \cite[(24)]{HRS2004limit} is replaced by \eqref{eqn:estm}) to arrive that
\begin{align}
\mathbb{P}(\Sigma(Z)\in a(1,1+h))\le c_5(\kappa) h \exp(-(1-\kappa/2)\tau a^{1/k}\gamma).\label{eqn:estah}
\end{align}
As in \cite[Section 9]{hug2007asymptotic} we can now conclude from \eqref{eqn:estah} that if a set $M \subset [\gamma^{-1/k}, \infty)$ is covered by countably many intervals of total length $\varepsilon$, then the $\mathbb{P}_{\Sigma(Z)}$-measure of $M$ is at most $c_6\varepsilon$, where $c_6$ does not depend on $\varepsilon$.
\end{comment}
First we show that $\mathbb{P}^{\Sigma(Z_\gamma)}$ is absolutely continuous with respect to $\lambda_1$ on $[\gamma^{-1/k},\infty)$. For $A \in \mathcal B^1$ we have by Theorem 7.3.1 in \cite{schneider2008stochastic}
\begin{align}
\mathbb P(\Sigma(Z_\gamma)\in A)=\frac{\gamma^d}{\beta_d} \int_{(\mathbb S^{d-1})^{d+1}} \int_0^\infty \mathbf 1\{\Sigma(\mathrm{conv}(r\mathbf u))\in A\} e^{-\gamma \kappa_d r^d} r^{d^2-1} \Delta_d(\mathbf u) \,\mathrm d r\,\sigma^{d+1}(\mathrm d \mathbf u). \label{PDTabscont}
\end{align}
Since $\Sigma$ is $k$-homogeneous and $\Sigma(\mathrm{conv}(\mathbf u))$ is assumed to be bounded, we have that $\Sigma(\mathrm{conv}(r\mathbf u))\in A$ if and only if $r^k \in A/\Sigma(\mathrm{conv}(\mathbf u))$. Thus, the inner integral in \eqref{PDTabscont} vanishes if $A$ is a $\lambda_1$-null set.
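In more detail, this step uses only the $k$-homogeneity of $\Sigma$ (the display below spells out the routine substitution):
\begin{align*}
\Sigma(\mathrm{conv}(r\mathbf u)) &= r^k\,\Sigma(\mathrm{conv}(\mathbf u)), \qquad r>0,\\
\{r>0:\,\Sigma(\mathrm{conv}(r\mathbf u))\in A\} &= \big\{r>0:\,r^k\in A/\Sigma(\mathrm{conv}(\mathbf u))\big\}.
\end{align*}
If $\lambda_1(A)=0$, then $A/\Sigma(\mathrm{conv}(\mathbf u))$ is $\lambda_1$-null, and so is its image under the map $t\mapsto t^{1/k}$, which sends null subsets of $[0,\infty)$ to null sets; hence the set of radii contributing to the inner integral in \eqref{PDTabscont} is $\lambda_1$-null.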
Next we show that the Radon--Nikod\'{y}m density of $\mathbb{P}^{\Sigma(Z_\gamma)}$ with respect to the Lebesgue measure $\lambda_1$ is positive on $[0, \infty)$. From Lemma 1 in \cite{hug2005large} we obtain for $a >0$ and all ${\varepsilon}>0$
\begin{align*}
\frac{\mathrm{d}\mathbb{P}^{\Sigma(Z_\gamma)}}{\mathrm{d}\lambda_1} (a) =\lim_{h \downarrow 0} \frac{\mathbb{P}(\Sigma(Z_\gamma) \in a[1,1+h))}{ah}\ge c_1(a^{d/k}\gamma)^d \exp\Big(-\frac{\kappa_d}{\tau^{d/k}} (1+{\varepsilon}) a^{d/k}\gamma\Big)>0.
\end{align*}
This gives that $\mathbb{P}^{\Sigma(Z_\gamma)}$ and $\lambda_1$ are equivalent measures on $[0, \infty)$.
\end{proof}
\noindent {\bf Acknowledgments:}
The author wishes to thank Günter Last for helpful discussions.
\end{document}
\begin{document}
\title{AABC: approximate approximate Bayesian computation when simulating a large number of data sets is computationally infeasible}
\author{Erkan O. Buzbas \\ Department of Biology \\ Stanford University, Stanford, CA 94305-5020 USA \\ and\\ Department of Statistical Science \\ University of Idaho, Moscow, ID 84844-1104 USA\\ email: \texttt{erkanb@uidaho.edu}\\ and\\ Noah A. Rosenberg \\ Department of Biology \\ Stanford University, Stanford, CA 94305-5020 USA \\ email: \texttt{noahr@stanford.edu} } \maketitle
\begin{center} \textbf{Abstract} \end{center} Approximate Bayesian computation (ABC) methods perform inference on model-specific parameters of mechanistically motivated parametric statistical models when evaluating likelihoods is difficult.
Central to the success of ABC methods is computationally inexpensive simulation of data sets from the parametric model of interest.
However, when simulating data sets from a model is so computationally expensive that the posterior distribution of parameters cannot be adequately sampled by ABC, inference is not straightforward.
We present ``approximate approximate Bayesian computation'' (AABC), a class of methods that extends simulation-based inference by ABC to models in which simulating data is expensive.
In AABC, we first simulate a \emph{limited number} of data sets that is computationally feasible to simulate from the parametric model.
We use these data sets as fixed background information to inform a non-mechanistic statistical model that approximates the correct parametric model and enables efficient simulation of a large number of data sets by Bayesian resampling methods.
We show that under mild assumptions, the posterior distribution obtained by AABC converges to the posterior distribution obtained by ABC, as the number of data sets simulated from the parametric model and the sample size of the observed data set increase simultaneously.
We illustrate the performance of AABC on a population-genetic model of natural selection, as well as on a model of the admixture history of hybrid populations.
\vspace*{.3in}
\noindent\textsc{Keywords}: {Approximate Bayesian computation, likelihood-free methods, nonparametrics, posterior distribution}
\section{Introduction}\label{Intro}
Stochastic processes motivated by mechanistic considerations enable investigators to capture salient phenomena in modeling natural systems.
Statistical models resulting from these stochastic processes are often parametric, and estimating model-specific parameters---which often have a natural interpretation---is a major aim of data analysis.
Contemporary mechanistic models tend to involve complex stochastic processes, however, and parametric statistical models resulting from these processes lead to computationally intractable likelihood functions.
When likelihood functions are computationally intractable, likelihood-based inference is a challenging problem that has received considerable attention in the literature \citep{RobertCasella2004, Liu2008}.
\par
When statistical models are known only at the level of the stochastic mechanism generating the data---such as in implicit statistical models \citep{DiggleGratton1984}---explicit evaluation of likelihoods might be impossible.
In these models, standard computational methods that require evaluation of likelihoods up to a proportionality constant (e.g., rejection methods) cannot be used to sample distributions of interest.
However, data sets simulated from the model under a range of parameter values can be used to assess parameter likelihoods without explicit evaluation \citep{Rubin1984}.
Approximate Bayesian computation (ABC) methods \citep{Tavareetal1997,Beaumontetal2002, Marjorametal2003} implement this idea in a Bayesian context to sample an {\em approximate} posterior distribution of the parameters.
Intuitively, parameter values producing simulated data sets similar to the observed data set arise in approximate proportion to their likelihood, and hence, when weighted by prior probabilities, to their posterior probabilities.
\par
\subsection{The ABC literature}
ABC methods have been based on rejection algorithms \citep{Tavareetal1997, Beaumontetal2002, BlumFrancois2010}, Markov chain Monte Carlo \citep{Beaumont2003, Marjorametal2003, Bortotetal2007, Wegmannetal2009}, and sequential Monte Carlo \citep{Sissonetal2007, Sissonetal2009, Beaumontetal2009, Tonietal2009}.
Model selection using ABC \citep{Pritchardetal1999, Fagundesetal2007, Grelaudetal2009, BlumJakobsson2010, Robertetal2011}, the choice of summary statistics when the likelihood is based on summary statistics instead of the full data \citep{JoyceMarjoram2008, Wegmannetal2009, NunesBalding2010, FearnheadPrangle2012}, and the equivalence of posterior distributions targeted in different ABC methods \citep{Wilkinson2008, Sissonetal2010} have also been investigated.
\par
ABC methods have had a considerable effect on model-based inference in disciplines that rely on genetic data, particularly data shaped by diverse evolutionary, demographic, and environmental forces.
Example applications have included problems in the demographic history of populations \citep{Pritchardetal1999, Francoisetal2008, Verduetal2009, BlumJakobsson2010} and species \citep{Estoupetal2004, PlagnolTavare2004, BecquetPrzeworski2007, Fagundesetal2007, Wilkinsonetal2010}, as well as problems in the evolution of cancer cell lineages \citep{Tavare2005, Siegmundetal2008} and the evolution of protein networks \citep{Ratmannetal2009}.
Other applications outside of genetics have included inference on the physics of stereological extremes \citep{Bortotetal2007}, the ecology of tropical forests \citep{JabotChave2009}, dynamical systems in biology \citep{Tonietal2009}, and small-world network disease models \citep{Walkeretal2010}.
ABC methods have been reviewed by \citet{MarjoramTavare2006}, \citet{Cornuetetal2008}, \citet{Beaumontetal2009}, \citet{Beaumont2010}, \citet{Csilleryetal2010}, and \citet{Marinetal2011}.
\par
\subsection{A limitation of ABC methods}
An informal categorization of the information available about the likelihood function is helpful to illustrate the class of models in which ABC methods are most useful.
First, exact inference on the posterior distribution of the parameters is possible only if the likelihood function is analytically available.
Second, if the likelihood function is not analytically available but can be evaluated up to a constant given a parameter value, then standard computational methods such as rejection algorithms can sample the posterior distribution.
In this case, inference is exact up to a Monte Carlo error due to sampling from the posterior.
Third, if the likelihood function cannot be evaluated, but data sets can feasibly be simulated from the model, then ABC methods sample the posterior distribution using approximations on the {\em data space} in addition to a Monte Carlo error due to sampling.
\par
Although ABC methods sample the posterior distribution of parameters without evaluating the likelihood function, they are computationally intensive.
Adequately sampling a posterior distribution of a parameter by ABC requires many random realizations from the prior distribution of the parameter and the sampling distribution of the data.
Simulating from the prior is straightforward, but the computational cost of simulating a data set from the mechanistic model increases quickly with the complexity and number of stochastic processes involved.
Henceforth, we refer to statistical models in which evaluating likelihoods is difficult and, in addition, simulating a large number of data sets is computationally infeasible as {\em limited-generative} models.
When a model is limited-generative and only a small number of data sets can be simulated from the model, likelihoods cannot be assessed using ABC and hence, the posterior distribution of parameters cannot be adequately sampled. \par
\subsection{Our contribution}
In this article, we introduce {\em approximate} approximate Bayesian computation (AABC), a class of methods that perform inference on model-specific parameters of limited-generative models when standard ABC methods are computationally infeasible to apply.
In AABC, the idea of assessing the likelihoods approximately using simulated data sets is taken one step further than in ABC.
AABC methods make approximations on the {\em parameter space} and the {\em model space} in addition to standard ABC approximations on the data space.
In conjunction with Bayesian resampling methods, these approximations help us overcome the computational intractability associated with simulating data from a limited-generative model (Figure \ref{fig:1}).
\par
Our key innovation is to condition on a limited number of data sets that can be feasibly simulated from the limited-generative model and to employ a non-mechanistic statistical model to simulate a large number of data sets.
We set up the non-mechanistic model based on empirical distributions of the limited number of data sets simulated from the mechanistic model.
Since the data values from the limited number of simulated data sets are used to construct new random data sets by resampling methods, it is computationally inexpensive to simulate a large number of data sets in AABC.
The AABC approach allows a researcher to allocate a fixed computer time to simulating a limited number of data sets from the limited-generative model, thus making otherwise challenging likelihood-based inference attainable.
\par
Intuitively, the information conditioned upon by the non-mechanistic model increases with the number of data sets simulated from the mechanistic model, and the expected accuracy of inference obtained by AABC methods increases.
We formalize this intuition by showing that the posterior distribution of parameters obtained by AABC converges to the corresponding posterior distribution obtained by standard ABC, as the sample size of the observed data set and the number of data sets simulated from the limited-generative model increase simultaneously.
\par
\begin{figure}\label{fig:1}
\end{figure}
\par
AABC methods utilize the established machinery of ABC methods in sampling the posterior distribution of the parameters.
Therefore, standard approximations on the data space involved in an ABC method---which facilitate the sampling of the posterior distribution---apply to AABC methods as well.
We now briefly review these approximations in the context of ABC by rejection algorithms.
\section{Review of ABC by rejection algorithms}\label{sec:ABCreview}
To more formally set up the class of problems in which ABC methods are useful, we assume that a parametric model generates observations conditional on parameter $\theta \in \Theta \equiv \mathbf{R}^p,\;p\geq1.$
We let $P_{\theta}$ be the sampling distribution of a data set of $n$ independent and identically distributed (IID) observations from this model.
We denote a random data set by ${\bf x}=(x_1,x_2,...,x_n) \in \mathcal{X},$ where $\mathcal{X}$ is the space in which the data set sits, and the observed data set by $\db_o.$
In the genetics context, a data point $x_i$ might be a vector denoting the allelic types of a genetic locus at genomic position $i$ in a group of individuals; the data matrix ${\bf x}$ might then contain genotypes from these individuals in a sample of $n$ independent genetic loci.
\par
Suppose that $P_{\theta}$ is available to the extent that the likelihood function $p(\db_o|\theta)$ can be evaluated up to a constant whose value does not depend on the parameters.
Given a prior distribution $\pi(\theta)$ on parameter $\theta,$ the posterior distribution of $\theta$ given the observed data $\db_o$ under the model $P_{\theta}$ is
$\pi(\theta|\db_o, P_{\theta}).$
Then $\pi(\theta|\db_o, P_{\theta})$ can be sampled by standard rejection sampling from $p(\db_o|\theta)\pi(\theta),$ a quantity that is proportional to
$\pi(\theta|\db_o, P_{\theta})$ by Bayes' Theorem.
In principle, sampling $\pi(\theta|\db_o, P_{\theta})$ without evaluating the likelihood function $p(\db_o|\theta)$ is possible, if simulating the data from the model $P_{\theta}$ is feasible.
An early example due to \citet{Tavareetal1997} samples $\pi(\theta|\db_o,P_{\theta})$ by accepting a value $\theta_i$ simulated from the prior $\pi(\theta)$ only if the data set ${\bf x}_i$ simulated from $P_{\theta_i}$ satisfies ${\bf x}_i=\db_o.$
By standard rejection algorithm arguments, the $\theta_i$ sampled in this fashion are from the correct posterior distribution.
However, the acceptance condition ${\bf x}_i=\db_o$ is rarely satisfied with high-dimensional data.
A first approximation in ABC methods is dimension reduction by substituting the data set ${\bf x}$ with a low-dimensional set of summary statistics $\textrm{\boldmath$s$}.$
The observed data $\db_o$ and the simulated data ${\bf x}_i$ are substituted by $\ssb_o$ and $\textrm{\boldmath$s$}_i,$ calculated from their respective data sets.
This is equivalent to substituting the likelihood function of the data $p({\bf x}|\theta)$ with the likelihood function of the summary statistics $p(\textrm{\boldmath$s$}|\theta).$
Since ABC is most useful in statistical models that do not admit sufficient statistics, dimension reduction to summary statistics often entails information loss about the parameters.
The choice of summary statistics minimizing this information loss is an active research area \citep{JoyceMarjoram2008, Wegmannetal2009, Robertetal2011, Aeschbacheretal2012, FearnheadPrangle2012}.
\par
When the data are substituted with summary statistics, the acceptance condition ${\bf x}_i=\db_o$ is substituted by $\textrm{\boldmath$s$}_i=\ssb_o,$ but exact equality may still be too stringent a condition to be satisfied with simulated data.
A second approximation in ABC is to relax the exact acceptance condition with a tolerance acceptance condition.
For example, \citet{Pritchardetal1999} used the Euclidean distance $||\cdot||$ and a small tuning parameter $\epsilon$ to accept a value $\theta_i$ from an approximate posterior distribution if the data set ${\bf x}_i$ simulated from $P_{\theta_i}$ produced $\textrm{\boldmath$s$}_i$ satisfying \begin{equation}\label{eq:euclideandistance}
||\textrm{\boldmath$s$}_i-\ssb_o||=\left[\sum_{j=1}^{k}(s_{ij}-s_{oj})^2\right]^{1/2}\leq\epsilon, \end{equation} where $\textrm{\boldmath$s$}$ is a $k$-dimensional statistic, and $s_{ij}$ and $s_{oj}$ are the $j$th components of $\textrm{\boldmath$s$}_i$ and $\ssb_o,$ respectively (see also \citet{WeissVonHaeseler1998} for an application in a pure likelihood inference context).
Distance metrics other than the Euclidean distance, such as the total variation distance \citep{Tavareetal2002}, have also been used.
\par
Substituting the binary accept/reject step in the rejection sampling by weighting $\textrm{\boldmath$s$}_i$ smoothly according to its distance from $\ssb_o$ using a kernel density ${\rm K}_{\epsilon}(\textrm{\boldmath$s$}_i,\ssb_o)$ with bandwidth $\epsilon$ leads to importance sampling \citep{Wilkinson2008}.
The tolerance condition $||\textrm{\boldmath$s$}_i-\ssb_o||\leq\epsilon$ in the rejection algorithm of \citet{Pritchardetal1999} then corresponds to using a uniform kernel on an $\epsilon$-ball around $\ssb_o.$
Other approaches to kernel choice include Epanechnikov \citep{Beaumontetal2002} and Gaussian \citep{LeuenbergerWegmann2010} kernels.
\par
When the data likelihood is substituted by the likelihood based on the summary statistics and a tolerance condition with a uniform kernel and the Euclidean distance is used, the posterior distribution sampled with ABC by rejection is
\begin{equation}\label{eq:2}
\pi_{\epsilon}(\theta|\db_o, P_{\theta}) =\frac{1}{C_{P_{\theta}}}\int_{\mathcal{X}} \mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}} p({\bf x}|\theta)\pi(\theta) \;d{\bf x}, \end{equation} where $\mathbf{I}_{A}$ is an indicator function that takes a value of 1 on set $A$ and is zero otherwise,
and $C_{P_{\theta}}=\int_{\Theta}\int_{\mathcal{X}} \mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}} p({\bf x}|\theta)\pi(\theta) \;d{\bf x} \;d\theta$ is the normalizing constant.
A standard ABC algorithm that samples $\pi_{\epsilon}(\theta|\db_o, P_{\theta})$ appears in Figure \ref{fig:2}. \par
\begin{figure}\label{fig:2}
\end{figure}
\par
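To make the acceptance rule in expression \ref{eq:2} concrete, the following self-contained sketch implements ABC by rejection for a toy normal-mean model; the model, the mean/standard-deviation summaries, and all function names are illustrative choices of ours, not part of the method's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def summarize(x):
    # Low-dimensional summary statistics s(x); here the mean and standard deviation.
    return np.array([x.mean(), x.std()])

def abc_rejection(x_obs, prior_sample, simulate, eps, M):
    """Rejection ABC: accept theta_i iff ||s_i - s_o|| < eps (uniform kernel)."""
    s_obs = summarize(x_obs)
    accepted = []
    for _ in range(M):
        theta = prior_sample()          # step 1: theta_i ~ pi(theta)
        x = simulate(theta)             # step 2: x_i ~ P_{theta_i}
        if np.linalg.norm(summarize(x) - s_obs) < eps:   # step 3: tolerance condition
            accepted.append(theta)
    return np.array(accepted)

# Toy run: N(theta, 1) data, N(0, 10^2) prior on theta, true theta = 3.
n = 200
x_obs = rng.normal(3.0, 1.0, size=n)
post = abc_rejection(
    x_obs,
    prior_sample=lambda: rng.normal(0.0, 10.0),
    simulate=lambda th: rng.normal(th, 1.0, size=n),
    eps=0.2,
    M=20_000,
)
# post approximates pi_eps(theta | x_o, P_theta); its mass concentrates near 3.
```

Shrinking $\epsilon$ tightens the approximation on the data space at the cost of a lower acceptance rate; this is the trade-off that AABC inherits unchanged.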
The choice of summary statistics, tolerance parameter $\epsilon,$ distance function, and kernel constitute approximations on the data space in ABC methods.
We assume that these standard ABC approximations work reasonably well, and we focus on new modeling approximations on the parameter and model spaces introduced by AABC (Figure \ref{tablo:1}).
\begin{figure}\label{tablo:1}
\end{figure}
\par
\section{Approximate approximate Bayesian computation (AABC)}\label{sec:theory}
Algorithm 1 returns an adequate sample from the posterior distribution of a parameter if it is iterated a large number of times, $M$.
The set of realizations simulated from the joint distribution of the parameter and the data by steps 1 and 2 of Algorithm 1 is then $\{({\bf x}_1, \theta_1),({\bf x}_2, \theta_2),...,({\bf x}_M, \theta_M)\}.$
AABC methods seek inference on parameter $\theta$ when the model $P_{\theta}$ is limited-generative, and simulating $M$ data sets under $P_{\theta}$ is therefore computationally infeasible.
We thus assume that only a limited number $m$ of data sets ${\bf x}_1,{\bf x}_2,...,{\bf x}_m$ can be obtained by step 2 of Algorithm 1 $(m \ll M)$.
We denote the set of realizations simulated from the joint distribution of the parameter and the data by $\mathcal{Z}_{n,m}=\{({\bf x}_1, \theta_1),({\bf x}_2, \theta_2),...,({\bf x}_m, \theta_m)\},$ where each data set ${\bf x}_i$ of $n$ IID observations is simulated from the model $P_{\theta_i}.$
\par
In AABC, we substitute the joint sampling distribution $P_{\theta}$ of a data set of size $n$ with the joint sampling distribution $Q_{\theta},$ from which simulating data sets is computationally inexpensive.
In replacing $P_{\theta}$ with $Q_{\theta},$ we require that the posterior distribution $\pi(\theta|\db_o, Q_{\theta})$ based on the likelihood implied by model $Q_{\theta}$ approximates the posterior distribution $\pi(\theta|\db_o, P_{\theta})$ based on the likelihood implied by model $P_{\theta}.$
Further, we require that $Q_{\theta}$ can be used with a wide range of $P_{\theta},$ in the sense that $Q_{\theta}$ is constructed without using the details of model $P_{\theta}.$
\par
\subsection{Approximations on the parameter and model spaces due to replacing $P_{\theta}$ with $Q_{\theta}$}\label{subsec:nonparametric}
Two approximations are involved in substituting $P_{\theta}$ with $Q_{\theta}.$
First, $\mathcal{Z}_{n,m}$ includes only $m$ parameter values $\theta_1,\theta_2,...,\theta_m$ under which data sets are simulated from $P_{\theta}$.
After obtaining $\mathcal{Z}_{n,m},$ for any new parameter value $\theta$ from the prior distribution under which we want to simulate a new data set, we substitute $\theta$ with $\tilde{\theta}$ such that $(\tilde{{\bf x}},\tilde{\theta})\in\mathcal{Z}_{n,m}.$
The value $\tilde{\theta}$ has the minimum Euclidean distance to the value $\theta$ among all parameter values in $\mathcal{Z}_{n,m}.$
More precisely, $\tilde{\theta}=\displaystyle{\mathop{\mbox{arg\;min}}_{\theta_j \in \mathcal{Z}_{n,m}}}||\theta_j-\theta||.$
In essence, this approximation is equivalent to replacing the sampling distribution of the data set $P_{\theta}$ with the sampling distribution $P_{\tilde{\theta}}$; we call this an approximation on the parameter space.
However, this parameter space approximation is not sufficient to simulate data sets efficiently, since the model $P_{\tilde{\theta}}$ is still limited-generative after this substitution.
\par
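The parameter-space approximation is a nearest-neighbor lookup in $\mathcal{Z}_{n,m}$; a minimal sketch (array shapes and names are our own conventions):

```python
import numpy as np

def nearest_simulated(theta, Z_params):
    """Index j and value of theta-tilde: the parameter value in Z_{n,m}
    closest to theta in Euclidean distance."""
    Z_params = np.atleast_2d(Z_params)                 # shape (m, p)
    d = np.linalg.norm(Z_params - np.atleast_1d(theta), axis=1)
    j = int(np.argmin(d))
    return j, Z_params[j]

# Example with m = 4 simulated scalar parameter values.
Z_params = np.array([[0.1], [0.9], [2.0], [3.5]])
j, theta_tilde = nearest_simulated(np.array([1.2]), Z_params)
# j == 1 and theta_tilde == [0.9]: the stored value closest to 1.2
```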
As a second approximation, we substitute the model $P_{\tilde{\theta}}$ with the empirical distribution of the data set $\tilde{{\bf x}}$ that has already been simulated from $P_{\tilde{\theta}}$ as $(\tilde{{\bf x}},\tilde{\theta})\in\mathcal{Z}_{n,m}.$
Here, we assume a positive probability mass only on the data values observed in the set $\tilde{{\bf x}}.$
We call this an approximation on the model space because the model $P_{\tilde{\theta}}$ is substituted with the empirical distribution of a data set simulated from $P_{\tilde{\theta}}.$
\par
To simulate a new data set ${\bf x}$ in AABC, we utilize a vector of positive auxiliary parameters $\textrm{\boldmath$\phi$}=(\phi_1,\phi_2,...,\phi_{n}),$ that satisfy $\sum_{i=1}^{n}\phi_i=1.$
We let $\phi_i$ be the probability that a random data value $x_j\in{\bf x}$ is equal to a given value $\tilde{x}_i$ found in the data set $\tilde{{\bf x}}=(\tilde{x}_1,\tilde{x}_2,...,\tilde{x}_n).$
The premise is that the sample $\tilde{{\bf x}}$ simulated under $\tilde{\theta}$ provides information about the model $P_{\tilde{\theta}},$ and by an approximation of $\theta$ to $\tilde{\theta}$ on the parameter space, about $P_{\theta}$.
\par
If we denote the approximate sampling distribution of a data set ${\bf x}=(x_1,x_2,...,x_n)$ by $Q_{\theta},$ its joint probability mass function is \begin{equation}\label{eq:Q}
\int_{{\Phi}}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$}) \;d\textrm{\boldmath$\phi$}\; \mathbf{I}_{\{\theta,\tilde{\theta}\}}, \end{equation}
where $q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})={n \choose n_1 \;n_2\; \cdots\; n_k}\prod_{j=1}^{n}\prod_{i=1}^{n}\phi_i^{\mathbf{I}_{\{x_j=\tilde{x}_i\}}},$ and $\mathbf{I}_{\{\theta,\tilde{\theta}\}}$ is 1 if $\tilde{\theta}\in\mathcal{Z}_{n,m}$ is the closest value to $\theta$ in the Euclidean sense and is 0 otherwise.
Here, $n_i$ is the number of times $\tilde{x}_i$ observed in the new sample ${\bf x},$ $k$ is the number of distinct data values observed in the data set ${\bf x},$ and $\mathbf{I}_{\{x_j=\tilde{x}_i\}}$ is 1 if $x_j=\tilde{x}_i$ and is 0 otherwise.
The distribution $q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})$ is that of an IID sample ${\bf x}=(x_1,x_2,...,x_n),$ where $x_j$ is drawn from the values $(\tilde{x}_1,\tilde{x}_2,...,\tilde{x}_n)$ with probabilities $(\phi_1,\phi_2,...,\phi_n).$
\par
The probability vector $\textrm{\boldmath$\phi$}$ is a parameter of the model conditional on $\tilde{{\bf x}},$ and thus, we need to posit a prior distribution on $\textrm{\boldmath$\phi$}.$
As a natural prior on probabilities, we let the prior distribution $\pi(\textrm{\boldmath$\phi$})$ on $\textrm{\boldmath$\phi$}$ be the symmetric Dirichlet distribution on the $(n-1)${\em-}dimensional simplex $\Phi,$ with hyperparameters (1,1,...,1) and a uniform probability density function proportional to $1.$
This choice assigns equal weight to all distributions placing positive probability mass on the data points $\tilde{x}_i\in\tilde{{\bf x}}.$
Further, it assigns zero posterior probability to data values unobserved in the sample $\tilde{{\bf x}},$ thereby avoiding difficulties created by such values in the likelihood \citep{Rubin1981, Owen1990}.
\par
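Simulating one data set from $Q_{\theta}$ is then a Bayesian-bootstrap resample of $\tilde{{\bf x}}$: draw $\textrm{\boldmath$\phi$}$ from the flat Dirichlet prior and sample $n$ values with those probabilities. A sketch (function names ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_from_Q(x_tilde, rng):
    """One data set of size n from Q_theta, conditional on x-tilde:
    phi ~ Dirichlet(1,...,1), then n IID draws from the values of
    x-tilde with probabilities phi."""
    n = len(x_tilde)
    phi = rng.dirichlet(np.ones(n))      # uniform prior on the (n-1)-simplex
    return x_tilde[rng.choice(n, size=n, p=phi)]

x_tilde = np.arange(10.0)   # stand-in for a data set simulated from P at theta-tilde
x_star = draw_from_Q(x_tilde, rng)
# x_star has length n and contains only values already present in x_tilde,
# matching the zero mass placed on unobserved data values.
```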
To distinguish the parameter and data set realizations in $\mathcal{Z}_{n,m}=\{({\bf x}_i,\theta_i)\}_{i=1}^{m}$ from the parameter and data sets simulated using AABC, we use starred versions of each quantity to denote specific values simulated in AABC.
For example, as the sampling distribution $P_{\theta_i}$ delivers a data set ${\bf x}_i$ under a given parameter value $\theta_i$ in the ABC procedure of Algorithm 2, the sampling distribution $Q_{\theta^*_i}$ delivers a data set ${\bf x}^*_i$ under a given parameter value $\theta^*_i$ simulated from its prior distribution (see Figure \ref{fig:11} for notation).
\begin{figure}\label{fig:11}
\end{figure}
\par
The sampling distribution $Q_{\theta}$ utilizes the information available in the set of realizations $\mathcal{Z}_{n,m}$ through the parameter $\textrm{\boldmath$\phi$},$ since the prior distribution of $\textrm{\boldmath$\phi$}$ conditions on $(\tilde{{\bf x}},\tilde{\theta})\in \mathcal{Z}_{n,m}$ and thus on the set $\mathcal{Z}_{n,m}.$
In this sense, the available realizations $\mathcal{Z}_{n,m}$ are used as fixed background information about $P_{\theta},$ and inferences using the substitute model $Q_{\theta}$ are conditional on the simulated sets $\mathcal{Z}_{n,m}.$ \par
\subsection{The posterior distribution of $\theta$ sampled by AABC}\label{sec:posterior}
In sampling the approximate posterior distribution of $\theta$ by AABC methods, we use the two ABC approximations described in Section \ref{sec:ABCreview}.
First, we substitute each data instance ${\bf x}$ with summary statistics $\textrm{\boldmath$s$}.$
Second, we use an acceptance condition with tolerance $\epsilon,$ employing the Euclidean distance to measure the proximity of the summary statistics calculated from the observed and simulated data, as in equation \ref{eq:euclideandistance}.
If we let $\theta^*_j$ be a new parameter value simulated from its prior distribution after obtaining the set $\mathcal{Z}_{n,m},$ in AABC we accept the parameter values $\theta^*_j$ producing summary statistics $\textrm{\boldmath$s$}^*_j$ that satisfy the condition $||\textrm{\boldmath$s$}^*_j-\ssb_o||<\epsilon$ as being draws from the posterior distribution.
This acceptance condition corresponds to a uniform kernel, which we use throughout this article, although like ABC, AABC can employ other kernels to obtain smooth weighting of $\textrm{\boldmath$s$}^*_j$ values by their distance from $\ssb_o.$
Substituting $P_{\theta}$ with $Q_{\theta}$ involves replacing $p({\bf x}|\theta)$ in expression \ref{eq:2} with expression \ref{eq:Q} and adjusting the normalizing constant accordingly.
The approximate posterior distribution sampled by an AABC method is
\begin{equation}\label{eq:4}
\pi_{\epsilon}(\theta|\db_o,Q_{\theta}) =
\frac{1}{C_{Q_{\theta}}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}\left[\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\right]\pi(\theta) \;d{\bf x}, \end{equation}
where $C_{Q_{\theta}}=\int_{\Theta}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}\left[\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\right]\pi(\theta) \;d{\bf x} \;d\theta$ is the normalizing constant. \par
The AABC approach is sensible in the following asymptotic sense: as the limited generative model permits increasingly many simulated data sets, the posterior distribution obtained by an AABC method approaches, for large sample sizes, the posterior distribution obtained by an ABC method.
We codify this claim with a theorem. \par \vskip 0.5cm \noindent
{\em Theorem.}
Let $\pi(\theta)$ be a bounded prior on $\theta.$ Let $\pi_{\epsilon}(\theta|\db_o,P_{\theta})$ and $\pi_{\epsilon}(\theta|\db_o,Q_{\theta})$ be the posterior distributions sampled by a standard ABC method and an AABC method, respectively. Then \begin{equation}\label{eq:theorem}
\lim_{m \rightarrow \infty}\lim_{n\rightarrow \infty}\pi_{\epsilon}(\theta|\db_o,Q_{\theta})=\lim_{n \rightarrow \infty}\pi_{\epsilon}(\theta|\db_o,P_{\theta}). \end{equation}
A proof of the theorem is given in Appendix 1.
The convergence of the posterior distribution sampled by AABC is a consequence of the fact that, for each given value of $\theta,$ the sampling distribution $\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}$ converges to the true sampling distribution $p({\bf x}|\theta)$ as the sample size $n$ and the number of simulated samples $m$ from $P_{\theta}$ increase.
The intuition for the double limit in equation \ref{eq:theorem} is as follows.
The standard notion of a distribution converging to a point in the parameter space as the sample size $n$ increases does not directly apply to the posterior distribution $\pi_{\epsilon}(\theta|\db_o,Q_{\theta}),$ since this posterior depends not only on the sample size $n,$ but also on the number $m$ of simulated data sets from $P_{\theta}.$
Hence, for convergence of the posterior distribution based on the likelihood of $Q_{\theta},$ the requirement is that both $n\rightarrow \infty$ and $m\rightarrow \infty.$
As $n\rightarrow\infty,$ the empirical distribution converges to $P_{\tilde{\theta}},$ the correct sampling distribution with the incorrect parameter value $\tilde{\theta}.$
As $m\rightarrow \infty,$ the distance between the parameter value $\theta$ under which we want to simulate a new data set and the parameter value $\tilde{\theta}\in\mathcal{Z}_{n,m}$ closest to $\theta$ approaches zero.
Therefore, taking both limits simultaneously results in convergence to the correct sampling distribution $P_{\theta}.$ \par
\subsection{AABC algorithms}\label{subsec:ABCapproximations}
The structure of AABC algorithms sampling the posterior distribution in expression \ref{eq:4} can be conveniently summarized in three parts, as shown in AABC by a rejection algorithm (Figure \ref{fig:3}).
In Algorithm 2, Part I involves obtaining a limited number of realizations from the joint distribution of the parameter and the data from the limited-generative model $P_{\theta}.$
Part I simply involves the application of steps 1 and 2 from Algorithm 1, but only for $m$ iterations.
Part II involves simulating a new parameter value $\theta^*_i$ from its prior distribution (step 4) and then simulating a data set ${\bf x}^*_i$ from the model $Q_{\theta^*_i}$ (steps 5, 6, 7), conditional on $\mathcal{Z}_{n,m}$ obtained in Part I.
Part III involves comparing the summary statistics $\textrm{\boldmath$s$}^*_i$ calculated from the simulated data set ${\bf x}^*_i$ with the summary statistics $\ssb_o$ calculated from the observed data set $\db_o,$ to accept or reject the parameter value $\theta^*_i.$
The calculation and comparison of summary statistics follows the same procedure as in steps 3 and 4 of Algorithm 1.
Hence, Part II of AABC by rejection has the novel steps 5, 6, and 7, whereas Parts I and III use the machinery of ABC by rejection from Algorithm 1.
\par
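The three parts of AABC by rejection can be sketched in code. The following is a minimal illustrative implementation, not the authors' code: the one-dimensional Gaussian \texttt{mechanistic\_model}, the sample-mean \texttt{summary} statistic, and all function names are assumptions made for the sketch.

```python
import numpy as np

def mechanistic_model(theta, n, rng):
    # Toy stand-in for the mechanistic model P_theta (assumed for illustration).
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    # Toy summary statistic: the sample mean (assumed for illustration).
    return np.mean(x)

def aabc_rejection(x_obs, prior_sample, n, m, n_props, eps, rng):
    # Part I: m realizations (theta_i, x_i) from the mechanistic model
    # (steps 1 and 2 of Algorithm 1, run for only m iterations).
    thetas = prior_sample(m, rng)
    Z = [(t, mechanistic_model(t, n, rng)) for t in thetas]
    s_obs = summary(x_obs)
    accepted = []
    for _ in range(n_props):
        # Part II, step 4: propose theta* from its prior.
        t_star = prior_sample(1, rng)[0]
        # Step 5: take the realization in Z whose parameter is closest to theta*.
        t_near, x_near = min(Z, key=lambda z: abs(z[0] - t_star))
        # Steps 6-7: resample a data set from x_near with Dirichlet(1,...,1)
        # (Bayesian bootstrap) weights, i.e., simulate from Q_{theta*}.
        w = rng.dirichlet(np.ones(n))
        x_star = rng.choice(x_near, size=n, p=w)
        # Part III: accept theta* if the summaries are within eps (steps 3-4
        # of Algorithm 1).
        if abs(summary(x_star) - s_obs) < eps:
            accepted.append(t_star)
    return np.array(accepted)
```

Run on a test data set, `aabc_rejection` returns the accepted proposals, a sample from the approximate posterior of $\theta$.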
We can show that Algorithm 2 samples the correct posterior distribution $\pi_\epsilon(\theta|\db_o,Q_{\theta}).$
The probability of sampling a parameter value $\theta$ in Algorithm 2 is proportional to \begin{align*}
&\sum_{\textrm{\boldmath$s$}}\sum_{\textrm{\boldmath$\phi$}}\pi(\theta)\mathbf{I}_{\{\theta,\tilde{\theta}\}}\pi(\textrm{\boldmath$\phi$})q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})
\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}\\
& = \sum_{\textrm{\boldmath$s$}}\sum_{\textrm{\boldmath$\phi$}}\pi(\theta,\textrm{\boldmath$\phi$})\mathbf{I}_{\{\theta,\tilde{\theta}\}}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}\\
&\propto \sum_{\textrm{\boldmath$s$}}\sum_{\textrm{\boldmath$\phi$}} \pi(\theta,\textrm{\boldmath$\phi$}|Q_{\theta})\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}\\
& \propto \pi_\epsilon(\theta|\db_o,Q_{\theta}), \end{align*}
where the third line follows from the fact that the expression on the second line is the product of the likelihood under the model $Q_{\theta}$ and the prior, and therefore it is proportional to the posterior distribution of parameters based on the model $Q_{\theta}.$
\begin{figure}\label{fig:3}
\end{figure}
\section{Applications}
In this section, we investigate the inferential performance of the AABC approach with two examples.
The following simulation setup is used in both examples.
\subsection{Simulation study design}\label{sec:simulation}
We simulated a reference set with $M=10^5$ realizations $\{({\bf x}_1,\theta_1),({\bf x}_2,\theta_2),...,({\bf x}_{10^5},\theta_{10^5})\},$ by first generating $\theta_i\sim \pi(\theta)$ and then simulating a data set ${\bf x}_i\sim P_{\theta_i}.$
We then sampled 1000 pairs $({\bf x}_i,\theta_i)$ from the reference set, uniformly at random without replacement.
Thus, we selected 1000 ``true'' parameter values $\theta_i,$ along with corresponding test data sets ${\bf x}_i$ generated under each value $\theta_i$ from the model $P_{\theta_i}$.
Further, we built the sets $\mathcal{Z}_{n,m},$ with $m=10^2, 5\times 10^2, 10^3, 5\times 10^3, 10^4,5\times 10^4, 10^5$ by sampling the reference set uniformly at random without replacement for $m<10^5,$ and taking all the realizations in the reference set for $m=M=10^5.$
The sample size $n$ of the data is described in each relevant example.
\par
On each test data set, we performed AABC by rejection sampling (Algorithm 2) using each set $\mathcal{Z}_{n,m}.$
In example 1, where our goal is to compare the performance of the AABC and ABC approaches, we performed ABC analyses by rejection sampling (Algorithm 1) using the same sets $\mathcal{Z}_{n,m}.$
For all analyses, we obtained a sample from the joint posterior distribution of the parameter vector $\theta$ by accepting the parameter vector values that generated data whose summary statistics were in the top $1$ percentile with respect to the statistics calculated from the test data set, in the sense of equation \ref{eq:euclideandistance}.
Compared to the approach of fixing the $\epsilon$ cutoff, accepting parameter vectors that generate data whose summary statistics are in a top percentile has the advantage that a desired number of samples from the posterior is always obtained given a total fixed number of proposed parameter values.
This approach is often preferred by ABC practitioners and is convenient in our case for comparing ABC and AABC.
\par
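The top-percentile acceptance rule described above is a small helper; the sketch below and the name \texttt{accept\_top\_percentile} are our own illustration, not code from the paper.

```python
import numpy as np

def accept_top_percentile(thetas, dists, pct=1.0):
    # Keep the proposals whose summary-statistic distance to the observed
    # summaries falls within the smallest pct percent, so a fixed number of
    # posterior draws is obtained regardless of the scale of the distances.
    thetas = np.asarray(thetas, dtype=float)
    dists = np.asarray(dists, dtype=float)
    cutoff = np.percentile(dists, pct)
    return thetas[dists <= cutoff]
```

With 1000 proposals and `pct=1.0`, roughly the 10 closest proposals are retained, which is what makes the posterior sample size predictable in advance.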
We assessed the accuracy of the posterior samples for each component of the parameter vector $\theta$ separately, using the root sum of squared errors for standardized parameter values accepted in the posterior sample.
For a generic scalar parameter $\alpha,$ the root sum of squared errors is given by $\textrm{RSSE}=(1/r)\sqrt{\sum_{j=1}^{r}(\alpha_j-\alpha_T)^2/\textrm{Var}(\textrm{\boldmath$\alpha$})},$
where $\textrm{\boldmath$\alpha$}=(\alpha_1,\alpha_2,...,\alpha_r)$ is the vector of $r$ accepted values in the posterior sample, $\alpha_T$ is the true parameter value, and $\textrm{Var}(\textrm{\boldmath$\alpha$})$ is the variance of the set of $r$ values.
We report the mean RSSE over 1000 test data sets as $\textrm{RMSE}=(1/1000)\sum_{i=1}^{1000}\textrm{RSSE}_i$ (see \citet{NunesBalding2010}).
\par
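The two accuracy measures translate directly into code; the function names below are assumed for illustration.

```python
import numpy as np

def rsse(alpha, alpha_true):
    # Root sum of squared errors for the r standardized accepted values:
    # RSSE = (1/r) * sqrt( sum_j (alpha_j - alpha_T)^2 / Var(alpha) ).
    alpha = np.asarray(alpha, dtype=float)
    r = len(alpha)
    return np.sqrt(np.sum((alpha - alpha_true) ** 2) / np.var(alpha)) / r

def rmse(rsse_values):
    # Mean RSSE over the collection of test data sets.
    return float(np.mean(rsse_values))
```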
\subsection{Example 1: The strength of balancing selection in a multi-locus $K${\em-}allele model}\label{sec:example1}
In this section, we consider inference from the stationary distribution of allele frequencies in the diffusion approximation to a Wright-Fisher model with symmetric balancing selection and mutation \citep{Wright1949}.
If we let $a_i>0,$ with $i=1,2,...,K$ and $\sum_{i=1}^{K}a_i=1,$ denote the frequency of allelic type $i$ in the population at a genetic locus, the joint probability density function of allele frequencies $x=(a_1,a_2,...,a_K)$ is $f(x|\sigma, \mu)= c(\sigma,\mu)^{-1}\exp(-\sigma\sum_{i=1}^{K}a_i^2)\prod_{i=1}^{K}a_i^{\mu/K-1}.$
Parameters $\sigma$ and $\mu$ determine the population-scaled strength of balancing selection and the mutation rate, respectively.
A data set of observed allele frequencies is a random sample of $n$ draws from the population frequencies $f(x|\sigma,\mu).$ \par
ABC methods are well-suited for inference from this model for three reasons.
First, the statistics $\sum_{j=1}^{K}a_j^2$ and $-\sum_{j=1}^{K}\log a_j$ are jointly sufficient for parameters $\sigma$ and $\mu,$ and no information loss occurs in dimension reduction to the summary statistics.
Second, the parameter-dependent normalizing constant $c(\sigma,\mu)$ is hard to calculate, and performing likelihood-based inference on $\sigma$ and $\mu$ is therefore difficult.
Third, a method specifically designed to simulate data sets from $f(x|\sigma,\mu)$ is readily available \citep{Joyceetal2012}, and performing ABC is therefore straightforward.
For simplicity, we assume 100 loci with the same true parameter values, each with $K=4,$ and that the allele frequencies at each locus are independent of the allele frequencies at other loci.
Thus, the joint probability density function of allele frequencies for 100 loci is equal to the product of probability density functions across loci.
We choose uniform prior distributions, on $(0.1,10)$ for the mutation rate $(\mu),$ and on $(0,50)$ for the selection parameter $(\sigma)$.
\par
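The density $f(x|\sigma,\mu)$ is easy to evaluate up to the intractable constant $c(\sigma,\mu),$ and the two sufficient statistics are one-liners; the sketch below uses assumed function names.

```python
import numpy as np

def log_f_unnorm(a, sigma, mu, K=4):
    # log f(x | sigma, mu) + log c(sigma, mu)
    #   = -sigma * sum_i a_i^2 + (mu/K - 1) * sum_i log a_i.
    a = np.asarray(a, dtype=float)
    return -sigma * np.sum(a ** 2) + (mu / K - 1.0) * np.sum(np.log(a))

def sufficient_stats(a):
    # Jointly sufficient statistics for (sigma, mu) at one locus:
    # sum_i a_i^2 and -sum_i log a_i.
    a = np.asarray(a, dtype=float)
    return float(np.sum(a ** 2)), float(-np.sum(np.log(a)))
```

Because the 100 loci are assumed independent with common parameters, the log density of a full data set is the sum of `log_f_unnorm` over loci, and the sufficient statistics add across loci as well.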
{\em Results.} Posterior samples of the model parameters $(\sigma,\mu)$ obtained by ABC and AABC using a typical data set are given in Figure \ref{fig:4}.
In analyses with $m=10^2,5\times 10^2, 10^3$ or $5\times 10^3$ simulated data sets, few samples are accepted with ABC, and thus, little mass is observed in ABC histograms (black).
For small $m,$ ABC does not produce an adequate sample size from the posterior distribution of parameters.
AABC, however, produces a posterior sample of size $10^3$ for any $m,$ because $10^5$ data sets are simulated from the non-mechanistic model (Algorithm 2, steps 5, 6, 7) and the top 1 percentile are accepted as belonging to the approximate posterior distribution.
The histograms obtained by AABC recover the true value reasonably well (Figure \ref{fig:4}).
The RMSE values in AABC procedures are approximately constant with increasing $m.$
For $m=10^2,5\times 10^2, 10^3, 5\times 10^3, 10^4, 5 \times 10^4,$ and $10^5$ simulated data sets, the RMSE values for parameter $\mu$ are 5.988, 5.932, 6.012, 6.086, 6.125, 6.078, and 6.088 respectively, close to the RMSE of 5.290 obtained by a standard ABC approach using $M=10^5$ simulated data sets from the mechanistic model.
The RMSE values in the last column of Figure \ref{fig:4} show that an AABC approach produces posterior samples that have on average greater variance than posterior samples obtained from ABC with the same large number of realizations.
Here, greater variance in posterior samples obtained by AABC is a result of simulating data sets in AABC by resampling the observed data values that are found only in the $m$ realizations in $\mathcal{Z}_{n,m}.$
Consider two parameter values $\theta^*_1$ and $\theta^*_2$ for which data sets ${\bf x}^*_1$ and ${\bf x}^*_2$ are simulated in the AABC approach by steps 5, 6, 7 of Algorithm 2 such that the parameter value $\tilde{\theta}\in\mathcal{Z}_{n,m}$ closest to both $\theta^*_1$ and $\theta^*_2$ is the same value.
The data sets ${\bf x}^*_1$ and ${\bf x}^*_2$ can include only the data values observed in $\tilde{{\bf x}}$ of the pair $(\tilde{{\bf x}},\tilde{\theta})\in\mathcal{Z}_{n,m}.$
On average, ${\bf x}^*_1$ and ${\bf x}^*_2$ share more observations in common than two data sets simulated from the respective mechanistic models $P_{\theta^*_1}$ and $P_{\theta^*_2}.$
Therefore, each data set simulated in the AABC approach using $Q_{\theta}$ is expected to be less able to distinguish between different parameter values than the independent data sets simulated in the ABC approach using $P_{\theta}.$
This situation results in relatively flat likelihoods and hence posterior samples with larger variance.
\par
\begin{figure}\label{fig:4}
\end{figure}
\subsection{Example 2: Admixture rates in hybrid populations}
Models in which hybrid populations are founded by, and receive genetic contributions from, multiple source populations are of interest in describing the demographic history of admixture.
Stochastic models including admixture often result in likelihoods that are difficult to calculate, and statistical methods capable of performing inference on admixture rates have received much attention for their implications on topics ranging from human evolution to conservation ecology \citep{Falushetal2003, Tangetal2005, BuerkleLexer2008}.
Here, we consider inference on admixture rates from a mechanistic model of \citet{VerduRosenberg2011}.
We use reported estimates of individual admixture as data. \par
We consider a model of admixture for a diploid hybrid population of constant size $N,$ founded a known $t$ generations in the past with contributions from source populations A and B.
We follow the distribution of admixture fractions of individuals in the hybrid population at a given genetic locus.
Each generation, the admixture fraction for each individual in the hybrid population is obtained as the mean of the admixture fractions of its parents.
The parents are chosen independently of each other, from source population A, source population B, or the hybrid population of the previous generation with probabilities $p_A,p_B,$ and $p_H,$ respectively ($p_A+p_B+p_H=1$).
In the special case of the founding generation, $p_H=0,$ and we assume $p_A=p_B=0.5.$
Individuals from source populations A and B are assigned admixture fractions of $1$ and $0$ respectively.
For example, if both parents of an individual in the hybrid population of the founding generation are from source population A, that individual has admixture fraction $(1+1)/2=1.$
If both parents are from population B, the admixture fraction is $(0+0)/2=0,$ and if one parent is from population A and the other is from population B, then the admixture fraction is $(1+0)/2=0.5.$
The distribution of the admixture fraction in the hybrid population is propagated in this manner for $t$ generations until the present, in which a sample of $n$ individuals is obtained from the resulting distribution (Figure \ref{fig:5}).
Our goal is to estimate the admixture rates $(p_A,p_B,p_H),$ given the individual admixture fractions estimated from observed genetic data.
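Under the stated assumptions (finite population of size $N,$ mean-of-parents admixture fractions, founding generation with $p_H=0$ and $p_A=p_B=0.5$), one generation-by-generation simulation can be sketched as below. This is our own illustrative implementation, not the code of \citet{VerduRosenberg2011}.

```python
import numpy as np

def simulate_admixture(p_A, p_B, p_H, t, N, rng):
    # Draw N parental admixture fractions: 1.0 for a parent from source A,
    # 0.0 for a parent from source B, or a uniformly chosen member of the
    # previous hybrid generation.
    def draw_parents(hybrid, pa, pb, ph):
        choice = rng.choice(3, size=N, p=[pa, pb, ph])
        frac = np.where(choice == 0, 1.0, 0.0)
        if hybrid is not None:
            from_h = choice == 2
            frac[from_h] = rng.choice(hybrid, size=int(from_h.sum()))
        return frac

    # Founding generation: p_H = 0 and p_A = p_B = 0.5.
    hybrid = 0.5 * (draw_parents(None, 0.5, 0.5, 0.0)
                    + draw_parents(None, 0.5, 0.5, 0.0))
    # Each offspring's fraction is the mean of its two independently
    # chosen parents' fractions.
    for _ in range(t - 1):
        hybrid = 0.5 * (draw_parents(hybrid, p_A, p_B, p_H)
                        + draw_parents(hybrid, p_A, p_B, p_H))
    return hybrid
```

A data set is then a random sample of $n$ values from the returned present-generation fractions.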
\begin{figure}\label{fig:5}
\end{figure}
\par
We apply the AABC approach using individual admixture fractions from $n=604$ individuals from Central African Pygmy populations reported by \citet{Verduetal2009}, with an assumed constant population size of $N=10^4.$
This assumption differs slightly from the original model in \citet{VerduRosenberg2011} in that a finite population size is assumed, so that only $10^4$ admixture fraction values are allowed in the population at any given generation.
We assume that an admixture event with contributions from two ancestral source populations started at the mean estimate of $t=771$ generations ago \citep{Verduetal2009} with a generation time of 25 years, and that it continued until the present.
Source population A refers to an ancestral Pygmy population, and source population B refers to an ancestral non-Pygmy population.
The feature of this model relevant to our method is the computational intractability of simulating data sets.
For each set of parameter values $(p_A,p_B,p_H)$ simulated from the priors, the distribution of admixture fractions is discrete, with a support whose number of admixture fraction values doubles each generation, and this distribution evolves for 771 generations.
A random sample of admixture fraction values comparable to the values calculated from the observed data set is obtained from the distribution of the present generation.
Simulating a large number of data sets under this model with such a large number of generations is computationally infeasible, and standard ABC is impractical.
We thus perform AABC by rejection (Algorithm 2) using $m=10^4$ realizations from this model.
We assume a Dirichlet prior with hyperparameters $(1,1,1)$ on parameters $(p_A,p_B,p_H).$ \par
We also assessed the contribution of the approximations on the parameter and model spaces in the AABC approach to the RMSE separately, with a simulation study using a small number of generations ($t=30$), where simulating data sets from the mechanistic model is feasible.
First, we performed AABC with rejection as in Algorithm 2 with 1000 ``true'' data sets using $m=10^2, 5\times10^2,10^3, 5\times10^3,10^4, 5\times10^4,$ and $10^5$ realizations from the model, and we calculated the RMSE for $p_A,p_B,$ and $p_H$ over 1000 ``true'' data sets as described in Section \ref{sec:simulation}.
This AABC analysis includes error due to approximations on the parameter space and on the model space.
Second, we performed an AABC analysis with the same set of $m$ realizations, by including the error only due to the approximation on the parameter space.
We achieved this by running Algorithm 2 up through step 5, and then simulating data sets from the mechanistic model by substituting steps 6 and 7 of Algorithm 2 with step 2 of Algorithm 1, the standard ABC approach by rejection.
By this substitution, all data sets are simulated from the mechanistic model, but each data set is obtained using a parameter vector $(\tilde{p}_A,\tilde{p}_B,\tilde{p}_H)$ found in step 5 of Algorithm 2.
In this procedure, the error due to the approximation on the model space is eliminated, because data sets are simulated from the correct mechanistic model and not by resampling from the available realizations in $\mathcal{Z}_{n,m}$.
However, this procedure includes error due to the approximation on the parameter space, because each data set is simulated not under the correct proposed parameter value, but under the parameter value $(\tilde{p}_A,\tilde{p}_B,\tilde{p}_H),$ the closest value to the correct proposed value that can be found in $\mathcal{Z}_{n,m}.$
We compared the RMSE of the AABC procedure involving the approximation on both the parameter and model spaces and the RMSE of the AABC procedure involving only the approximation on the parameter space to the RMSE obtained from a standard ABC approach.
For these two AABC procedures, we also compared the percent excess in RMSE, defined as the ratio of the absolute difference in RMSE of the AABC and standard ABC approaches to the RMSE of the standard ABC approach, expressed as a percent.
\par
{\em Results.} The individual admixture fractions calculated from the Pygmy data carry substantial information about the admixture parameters $p_A,p_B,$ and $p_H,$ since the joint posterior distribution is concentrated in a relatively small region of the 3-dimensional unit simplex on which $(p_A,p_B,p_H)$ sits (Figure \ref{fig:6}A). The marginal posterior distributions (Figure \ref{fig:6}B, \ref{fig:6}C, and \ref{fig:6}D) have means $p_A=0.151,\; p_B=0.132,$ and $p_H=0.717.$
These values are interpreted as genetic contributions, at each generation over the $771$ generations of constant admixture, of 15.1\% from the ancestral Pygmy population (source population A), 13.2\% from the ancestral non-Pygmy population (source population B), and 71.7\% from the hybrid population to itself.
\begin{figure}\label{fig:6}
\end{figure}
\par
For the simulation study with $t=30$ generations and 1000 ``true'' data sets, the RMSE values from AABC analyses decrease with increasing $m$ (Figure \ref{fig:7}A, \ref{fig:7}B, \ref{fig:7}C).
Further, as $m$ increases, the error due to the approximation on the parameter space decreases (Figure \ref{fig:7}D last column), due to the fact that for large $m,$ the difference decreases between the closest parameter value chosen at step 5 of Algorithm 2 and the correct parameter value under which we want to simulate a data set.
In fact, the RMSE from the AABC analysis with $m=10^5$ realizations and approximation only on the parameter space and the RMSE from the standard ABC approach are virtually indistinguishable (Figure \ref{fig:7}A, \ref{fig:7}B, \ref{fig:7}C, red star).
For $m=10^3,$ the AABC analysis with approximations on the parameter and model spaces has a percent excess RMSE of 13.81\%, whereas AABC analysis including only the approximation on the parameter space has excess RMSE of 6.61\%.
That is, at $m=10^3,$ approximately half of the excess RMSE in the AABC approach with respect to the standard ABC analysis comes from the error due to the approximation on the parameter space and half arises due to the approximation on the model space.
\begin{figure}\label{fig:7}
\end{figure}
\section{Discussion}
Performing likelihood-based inference from statistical models incorporating a multitude of stochastic processes is often challenging due to computationally intractable likelihoods.
In principle, when stochastic processes are complex but a family of parametric statistical models is well-defined, data can be simulated from the model to assess the parameter likelihoods.
In the last decade, ABC methods have become a standard tool to perform approximate Bayesian inference in subject areas such as ecology and evolution, by exploiting the idea of simulating many data sets from a model, when such simulations are computationally feasible.
To deliver an adequate sample from the posterior distribution of the parameters, however, ABC requires a large number of simulated data sets, and it might not perform well when only a limited number of data sets can be simulated.
\par
In this article, we introduced an approach that extends simulation-based Bayesian inference methods to model spaces in which only a limited number of data sets can be simulated from the model, at the expense of requiring approximations on the parameter and the model spaces.
Our AABC approaches rely on two statistical approximations.
In our approximation on the parameter space, for each parameter simulated from the prior distribution, we take the closest parameter value available in the set of realizations $\mathcal{Z}_{n,m}$ obtained from the mechanistic model.
This approach has a uniform kernel smoothing interpretation: the parameter values in the set $\mathcal{Z}_{n,m}$ partition the support of the prior distribution into non-overlapping components, and all proposals falling in a given component are mapped to the same parameter value in $\mathcal{Z}_{n,m}.$
Each component then represents the support of a uniform kernel.
Kernel approximations have an operational role in implementing ABC methods, and a natural future direction for AABC is to improve the accuracy of posterior samples using smooth weighting kernels for the approximation on the parameter space.
\par
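For a scalar parameter, the nearest-realization step that induces this partition can be sketched in a few lines; the name \texttt{nearest\_realization} is assumed for illustration.

```python
import numpy as np

def nearest_realization(theta_star, z_thetas):
    # Map a proposal theta* to the index of the closest parameter value
    # in Z_{n,m}; all proposals within the same component of the induced
    # partition of the prior's support map to the same realization, which
    # is the uniform-kernel interpretation of the approximation.
    z_thetas = np.asarray(z_thetas, dtype=float)
    return int(np.argmin(np.abs(z_thetas - theta_star)))
```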
The approximation on the model space is achieved by assigning Dirichlet probabilities to data points of realizations obtained from the mechanistic model.
This is a variation on the resampling method originally introduced in Rubin's Bayesian bootstrap \citep{Rubin1981}, and therefore, it is an application of Bayesian nonparametric methods.
From this perspective, AABC methods connect standard model-based Bayesian inference on model-specific parameters and Bayesian nonparametric methods within the ABC framework.
\par
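The Bayesian bootstrap step underlying the model-space approximation is equally compact; the sketch below, with an assumed function name, draws Dirichlet probabilities over the data values of one realization and resamples a replicate data set.

```python
import numpy as np

def bayesian_bootstrap_replicate(x_tilde, rng):
    # Assign Dirichlet(1, ..., 1) probabilities to the n observed data
    # values and resample a data set of the same size (Rubin, 1981).
    x_tilde = np.asarray(x_tilde)
    n = len(x_tilde)
    weights = rng.dirichlet(np.ones(n))
    return rng.choice(x_tilde, size=n, p=weights)
```

A replicate can therefore contain only data values already present in $\tilde{{\bf x}},$ which is the source of the extra posterior variance noted in Example 1.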
Our approach of using a non-mechanistic model and Bayesian resampling methods to help perform inference on model-specific parameters of a mechanistic model is a fundamental difference between AABC and existing ABC methods.
ABC performs inference on model-specific parameters of a mechanistic model using a likelihood based purely on the mechanistic model.
AABC instead performs inference on the same model-specific parameters of the mechanistic model as ABC, using a likelihood based on a non-mechanistic model that incorporates a limited number of data sets simulated from the mechanistic model.
Consequently, the model likelihoods used in ABC and AABC are not exactly the same, and the posterior distributions targeted by the two classes of methods are not exactly equivalent for finite sample sizes.
The advantage of AABC methods in contrast to pure non-mechanistic modeling approaches (e.g., nonparametric methods) is that AABC can perform inference on the quantities of interest---the model-specific parameters of the mechanistic model.
\par
Unlike other ABC methods, the AABC approach delivers a posterior sample of desired size from the joint distribution of parameters for any $m>1.$
This is both a strength and a limitation of AABC.
The strength is that in practice, a researcher can fix $m,$ and thus the computation time, {\em a priori} when simulating data from the mechanistic model, and still obtain reasonable inference by AABC; other ABC methods may fail to produce an adequate posterior sample in equivalent computation time.
In our example, for moderate values of $m$ (e.g., $10^3$ to $10^4$) for which standard ABC approaches were unsatisfactory, AABC adequately sampled an approximate posterior distribution.
The limitation is that when $m$ is too small, the posterior sample obtained by AABC can be a distorted representation of the true posterior distribution.
Although in the limit, AABC and ABC are expected to produce similar results, the posterior distribution sampled by an AABC approach is not the correct posterior distribution, because many parameter values simulated from the prior are tested for acceptance based on repeated use of the data values in $m$ realizations, instead of based on data sets simulated independently of each other.
A future direction is to investigate the relationship between $m$ and the dimensionality of the parameter space to optimize $m$ in producing a given level of accuracy for approximating the true posterior distributions.
\par
\section*{Acknowledgments} The authors thank Paul Verdu for helpful discussions on the genetics of Central African Pygmy populations. Support for this research is partially provided by NIH grant R01 GM 081441, NSF grant DBI-1146722, and the Burroughs Wellcome Fund.
\section*{Appendix 1} We let $k\leq n$ be the number of distinct values $\tilde{x}_1,\tilde{x}_2,...,\tilde{x}_k$ in the data set $\tilde{{\bf x}},$ and denote the number of observations equal to $\tilde{x}_i$ by $\tilde{n}_i,$ where $n=\sum_{i=1}^{k}\tilde{n}_i.$
Then the prior distribution for the probabilities of an AABC replicate data set based on the ABC simulated data set $\tilde{{\bf x}}$ is the Dirichlet distribution $\pi(\textrm{\boldmath$\phi$})=[\Gamma(\sum_{i=1}^k \tilde{n}_i)/\prod_{i=1}^k\Gamma(\tilde{n}_i)] \prod_{i=1}^{k}\phi_i^{\tilde{n}_i-1}$ with parameters $\tilde{n}_1,\tilde{n}_2,...,\tilde{n}_k.$
The special case of the prior proportional to $1$ described in the text is obtained with $k=n,$ when all observations in $\tilde{{\bf x}}$ are distinct $(\tilde{n}_1=\tilde{n}_2=\;\cdots\;=\tilde{n}_n=1)$.
Our goal is to show that $\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\pi_\epsilon(\theta|\db_o,Q_{\theta})=\lim_{n\rightarrow\infty}\pi_{\epsilon}(\theta|\db_o,P_{\theta}).$
\par
Recalling equation \ref{eq:4}, \begin{equation}\label{eq:app1}
\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\pi_\epsilon(\theta|\db_o,Q_{\theta})= \displaystyle{\lim_{m\rightarrow\infty}\lim_{n\rightarrow \infty}}\frac{1}{C_{Q_{\theta}}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}\left[\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\right]\pi(\theta)\; d{\bf x}. \end{equation}
The integral in the brackets is the expectation of $q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})$ with respect to the prior $\pi(\textrm{\boldmath$\phi$}).$ Letting $C={n \choose n_1 \;n_2\; \cdots\; n_k},$ and using the definition $q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})=C\;\prod_{j=1}^{n}\prod_{i=1}^{n}\phi_i^{\mathbf{I}_{\{x_j=\tilde{x}_i\}}}$ from section \ref{subsec:nonparametric} and $\pi(\textrm{\boldmath$\phi$})=[\Gamma(\sum_{i=1}^k \tilde{n}_i)/\prod_{i=1}^k\Gamma(\tilde{n}_i)] \prod_{i=1}^{k}\phi_i^{\tilde{n}_i-1},$ we get
\begin{equation*}
\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}=C\;\frac{\Gamma(\sum_{i=1}^{k}\tilde{n}_i)}{\prod_{i=1}^{k}\Gamma(\tilde{n}_i)} \;\prod_{j=1}^{n}\int_{\Phi}\left(\prod_{i=1}^{n}\phi_i^{\mathbf{I}_{\{x_j=\tilde{x}_i\}}}\right)\left(\prod_{i=1}^{k}\phi_i^{\tilde{n}_i-1}\right)\;d\textrm{\boldmath$\phi$}. \end{equation*}
Here, we have exchanged the order of the product over $j$ with the integral since the expectation of the product of $n$ IID observations in sample ${\bf x}$ is equal to the product of the expectations of observations $x_j.$
We label the realized value of the $j$th data point $x_j$ by $(j)$ such that $\prod_{i=1}^{n}\phi_i^{\mathbf{I}_{\{x_j=\tilde{x}_i\}}}=\phi_{(j)},$ and write \begin{equation}\label{eq:app2}
\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}=C\;\frac{\Gamma(\sum_{i=1}^{k}\tilde{n}_i)}{\prod_{i=1}^{k}\Gamma(\tilde{n}_i)}\;\prod_{j=1}^{n}\int_{\Phi} \left(\prod_{\substack{i=1\\i\neq(j)}}^{k}\phi_i^{\tilde{n}_i-1}\right)\phi_{(j)}^{\tilde{n}_{(j)}}\;d\textrm{\boldmath$\phi$}. \end{equation}
Using $\int_{\Phi}\frac{\Gamma[(\sum_{i=1,i\neq (j)}^{k}\tilde{n}_i)+\tilde{n}_{(j)}+1]}{[\prod_{i=1,i\neq (j)}^{k}\Gamma(\tilde{n}_i)]\Gamma(\tilde{n}_{(j)}+1)} \; \left(\prod_{i=1,i\neq (j)}^{k}\phi_i^{\tilde{n}_i-1}\right)\phi_{(j)}^{\tilde{n}_{(j)}}\;d\textrm{\boldmath$\phi$}=1$ (p. 487, \citet{Kotzetal2000}), we substitute the integral in equation (\ref{eq:app2}) with the ratio of the gamma functions to get
\begin{align*}
\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}&=C\;\frac{\Gamma(\sum_{i=1}^{k}\tilde{n}_i)}{\prod_{i=1}^{k}\Gamma(\tilde{n}_i)}\prod_{j=1}^{n} \frac{\left[\prod_{i=1,i\neq (j)}^{k}\Gamma(\tilde{n}_i)\right]\Gamma(\tilde{n}_{(j)}+1)}{\Gamma[(\sum_{i=1,i\neq (j)}^{k}\tilde{n}_i)+\tilde{n}_{(j)}+1]}\\ &=C\;\prod_{j=1}^{n}\frac{\Gamma(n)}{\Gamma(\tilde{n}_{(j)})}\frac{\Gamma(\tilde{n}_{(j)}+1)}{\Gamma(n+1)}=C\;\prod_{j=1}^{n}\left(\frac{\tilde{n}_{(j)}}{n}\right). \end{align*}
Substituting $C\;\prod_{j=1}^{n}\left(\frac{\tilde{n}_{(j)}}{n}\right)$ for the integral in brackets in equation (\ref{eq:app1}), we have
\begin{align} \nonumber
\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\pi_\epsilon(\theta|\db_o,Q_{\theta})&=\displaystyle{\lim_{m\rightarrow\infty}\lim_{n\rightarrow \infty}}\frac{1}{C_{Q_{\theta}}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}\;C\;\prod_{j=1}^{n}\left(\frac{\tilde{n}_{(j)}}{n}\right)\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\pi(\theta)\; d{\bf x}\\ \label{eq:exchangelimit}
&=\frac{\displaystyle{\lim_{m\rightarrow\infty}\lim_{n\rightarrow \infty}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}\;C\;\prod_{j=1}^{n}\left(\frac{\tilde{n}_{(j)}}{n}\right)\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\pi(\theta)\; d{\bf x}}{\displaystyle{\lim_{m \rightarrow \infty }\lim_{n\rightarrow \infty}}C_{Q_{\theta}}}. \end{align}
\par
We apply the dominated convergence theorem to exchange the limits in $n$ and the integrals in the numerator and denominator of equation (\ref{eq:exchangelimit}).
The assumptions of the theorem are satisfied as follows: 1) The integrand in equation (\ref{eq:exchangelimit}) is bounded: the indicator functions are bounded by 1, the ratios $\tilde{n}_{(j)}/n$, where $\tilde{n}_{(j)}\leq n$, are bounded by 1, and the prior $\pi(\theta)$ is bounded by assumption.
2) As $n\rightarrow \infty$, $\tilde{n}_{(j)}/n$ converges pointwise to $p(x_{(j)}|\tilde{\theta})$, the probability of $x_{(j)}$ under the model $P_{\tilde{\theta}}$ with parameter $\tilde{\theta}$, by the frequency interpretation of probability.
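Assumption 2) can be illustrated with a small simulation: for a hypothetical categorical model (the values and probabilities below are illustrative choices, not from the paper), the empirical frequency $\tilde{n}_{(j)}/n$ approaches the corresponding model probability as $n$ grows, by the law of large numbers.

```python
# Illustrative check that an empirical frequency approaches its model
# probability; the values and probabilities are hypothetical choices.
import random
random.seed(0)

values, probs = ["a", "b", "c"], [0.5, 0.3, 0.2]
n = 200_000
sample = random.choices(values, weights=probs, k=n)
freq_a = sample.count("a") / n   # empirical analogue of n~_(j)/n
assert abs(freq_a - probs[0]) < 0.01
```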
Exchanging the limits in $n$ and the integrals, and using $\lim_{n\rightarrow \infty}(\tilde{n}_{(j)}/n)=p(x_{(j)}|\tilde{\theta}),$ \begin{align} \nonumber
\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\pi_\epsilon(\theta|\db_o,Q_{\theta})&=\frac{\displaystyle{\lim_{m\rightarrow\infty}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}\prod_{j=1}^{k}\left[p(x_{(j)}|\tilde{\theta})\right]^{n_{(j)}}\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\pi(\theta)\;d{\bf x}}{\displaystyle{\lim_{m \rightarrow \infty}}C_{P_{\tilde{\theta}}}}\\ \label{eq:app4}
&=\frac{\displaystyle{\lim_{m\rightarrow\infty}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}p({\bf x}|\tilde{\theta})\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\pi(\theta)\;d{\bf x}}{\displaystyle{\lim_{m \rightarrow \infty}}C_{P_{\tilde{\theta}}}}, \end{align}
where (\ref{eq:app4}) follows by the definition of the joint distribution $p({\bf x}|\tilde{\theta})=\prod_{j=1}^{k}\left[p(x_{(j)}|\tilde{\theta})\right]^{n_{(j)}}.$
\par We now apply the dominated convergence theorem a second time to exchange the limits in $m$ and the integrals on $\mathcal{X}$. Again, the assumptions of the dominated convergence theorem are satisfied since the integrand in (\ref{eq:app4}) is a sequence in $m$ of bounded functions, and as $m\rightarrow \infty,$ $\tilde{\theta}\rightarrow \theta,$ and $p({\bf x}|\tilde{\theta})\rightarrow p({\bf x}|\theta).$
We get
\begin{equation*}
\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\pi_\epsilon(\theta|\db_o,Q_{\theta})=\frac{1}{C_{P_{\theta}}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\ssb_o||<\epsilon\}}p({\bf x}|\theta)\pi(\theta)\;d{\bf x}=\displaystyle{\lim_{n\rightarrow \infty}}\pi_{\epsilon}(\theta|\db_o,P_{\theta}) \end{equation*}
which shows that the AABC posterior converges to the ABC posterior as the sample size $n$ and the number of simulated data sets $m$ increase.
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Asymptotic behavior of homogeneous additive functionals of the solutions of It\^{o} stochastic differential equations with nonregular dependence on parameter}
\author{\inits{G.}\fnm{Grigorij}\snm{Kulinich}}\email{zag\_mat@univ.kiev.ua} \address{ Taras Shevchenko National University of Kyiv, 64/13,\break Volodymyrska Street, 01601, Kyiv, Ukraine}
\author{\inits{S.}\fnm{Svitlana}\snm{Kushnirenko}\corref{cor1}}\email{bksv@univ.kiev.ua} \cortext[cor1]{Corresponding author.}
\author{\inits{Yu.}\fnm{Yuliia}\snm{Mishura}}\email{myus@univ.kiev.ua}
\markboth{G. Kulinich et al.}{Asymptotic behavior of homogeneous additive functionals}
\begin{abstract} We study the asymptotic behavior of mixed functionals of the form $I_T(t)=\break F_T(\xi_T(t))+\int_{0}^{t} g_T(\xi_T(s))\,d\xi_T(s)$, $
t\ge0$, as $T\to\infty$. Here $\xi_T(t)$ is a strong solution of the stochastic differential equation $d\xi_T (t)=a_T(\xi_T(t))\, dt+dW_T(t)$, $T>0$ is a parameter, $a_T= a_T(x)$ are measurable functions such that $\left|a_T(x)\right|\leq C_T$ for all $x\in\mathbb{R}$, $W_T(t)$ are standard Wiener processes, $ F_T= F_T(x)$, $x\in\mathbb{R}$, are continuous functions, $ g_T= g_T(x)$, $x\in\mathbb{R}$, are locally bounded functions, and everything is real-valued. The explicit form of the limiting processes for $I_T(t)$ is established under very nonregular dependence of $ g_T $ and $a_T $ on the parameter $T$. \end{abstract}
\begin{keyword} Diffusion-type processes\sep asymptotic behavior of additive functionals\sep nonregular dependence on the parameter \MSC[2010] 60H10\sep 60J60 \end{keyword}
\received{28 May 2016}
\revised{17 June 2016}
\accepted{17 June 2016} \publishedonline{4 July 2016}
\end{frontmatter}
\section{Introduction}
Consider the It\^{o} stochastic differential equation
\begin{equation}\label{o1} d\xi_T (t)=a_T\bigl(\xi_T(t)\bigr)\,dt+dW_T(t),\quad t\ge0,\;\xi_T (0)=x_0, \end{equation}
where $T>0$ is a parameter, $a_T (x)$, $x\in\mathbb{R}$, are real-valued measurable functions such that for some constants $L_T>0$ and for all
$x\in\mathbb{R}$ $\left|a_T(x)\right|\leq L_T$, and $W_T=\{W_T(t), t\geq0\}$, $T>0$, is a family of standard Wiener processes defined on a complete probability space $(\varOmega,\Im, \pr)$.
It is known from Theorem 4 in \cite{paper1} that, for any $T>0$ and $x_0\in\mathbb{R}$, equation (\ref{o1}) possesses a unique strong pathwise solution $\xi_T=\{\xi_T(t), t\geq0\}$, and this solution is a homogeneous strong Markov process.
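For a fixed $T$, a path of equation (\ref{o1}) can be approximated with a standard Euler--Maruyama scheme. The sketch below is a minimal illustration, not part of the paper; the drift $a_T(x)=\sqrt{T}\sin(x\sqrt{T})$ (one of the nonregular drifts mentioned in this section, bounded by $L_T=\sqrt{T}$) and the discretization parameters are illustrative choices.

```python
# Minimal Euler-Maruyama sketch for d xi_T = a_T(xi_T) dt + dW_T, xi_T(0) = x0.
# Drift and step sizes are illustrative choices, not from the paper.
import math
import random
random.seed(1)

def euler_maruyama(a_T, x0=0.0, t_max=1.0, n_steps=10_000):
    dt = t_max / n_steps
    xi = x0
    for _ in range(n_steps):
        dW = random.gauss(0.0, math.sqrt(dt))   # Wiener increment
        xi += a_T(xi) * dt + dW
    return xi

T = 100.0
a_T = lambda x: math.sqrt(T) * math.sin(x * math.sqrt(T))  # |a_T| <= sqrt(T)
xi_end = euler_maruyama(a_T)
assert math.isfinite(xi_end)
```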
We suppose that the drift coefficient $a_T (x)$ in equation (\ref{o1}) can have a very nonregular dependence on the parameter. For example, the drift coefficient can behave like a ``$\delta$''-type sequence at some points $x_k$ as $T\to \infty$, or it can be equal to $\sqrt{T}\sin ((x-x_k)\sqrt{T})$, or it can have degeneracies of some other types. Such a nonregular dependence of the coefficients in equation (\ref{o1}) first appeared in \cite{paper3} and \cite{paper4}, where the limit behavior of the normalized unstable solution of an It\^{o} stochastic differential equation as $t\to\infty$ was investigated. In those papers, a special dependence of the coefficients $a_T (x)=\sqrt{T}a(x\sqrt{T})$ on the parameter $T$ was considered in the case where $a(x)$ is an absolutely integrable function on $\mathbb{R}$. Assume that this is the case and let $\int_{\mathbb{R}} a(x)\,dx=\lambda$. The sufficiency of the condition $\lambda=0$ for the asymptotic equivalence of the distributions of $\xi_T(t)$ and $W_{T}(t)$ is established in \cite{paper3}, and the necessity of this condition is proved in \cite{paper45}. If $\lambda\neq0$, then we can deduce from \cite{paper4} that the distributions of the solution $\xi_T$ of equation (\ref{o1}) weakly converge as $T\to\infty$ to the corresponding distributions of the Markov process $\hat{\xi}(t)=l\left(\zeta(t)\right)$, where $l(x)=c_1x$ for $x>0$ and $l(x)=c_2x$ for $x\leq0$; $\zeta(t)$ is a strong solution of the It\^{o} equation $d\zeta(t)=\bar \sigma(\zeta(t))\,d W(t)$, where $\bar\sigma(x)=\sigma_1$ for $x>0$ and $\bar\sigma(x)=\sigma_2$ for $x<0$, and
$\int_{0}^{t}P\{|\zeta(s)|=0\}\,ds=0$. The explicit form of the transition density of the process $\hat{\xi}(t)$ is obtained. Moreover, in \cite{paper5}, it is proved that \[ \hat{\xi}(t)=x_0+\beta(t)+W(t), \] where $\beta(t)$ is a certain functional of $\zeta(t)$, and the necessity of the condition $\lambda \neq0$ is established for the weak convergence as $T\to\infty$ of the solution $\xi_T$ of equation (\ref{o1}) to the process $\hat{\xi}(t)$.
Furthermore, in \cite{paper3} and \cite{paper4}, a probabilistic method to study the ``awkward'' term $\sqrt{T}\int_{0}^{t}a(\xi _T(s)\sqrt{T})\,ds$ in equation (\ref{o1}) is developed. This method uses a representation of this ``awkward'' term through a family of continuous functions $\varPhi_T (x)$ of $\xi_T(t)$ and a family of martingales $\int_{0}^{t}\varPhi'_T(\xi_T(s))\,dW_T(s)$, with the further application of the It\^{o} formula. After the mentioned transformations, according to this method, we can apply Skorokhod's convergent subsequence principle for $\xi_{T_n}(t)$ and $W_{T_n}(t)$ (see \cite{book6}, Chapter I, \S6) in order to pass to the limit in the resulting representation.
Note that this method is also used in the present paper to study the asymptotic behavior of integral functionals.
It is known from \cite{book2}, \S16, that the asymptotic behavior of the solution $\xi_T$ of equation (\ref{o1}) is closely related to the asymptotic behavior of harmonic functions, that is, functions satisfying the following ordinary differential equation almost everywhere (a.e.) with respect to the Lebesgue measure:
\[ f_T'(x)a_T(x)+\frac{1}{2}\,f_T''(x)=0. \]
It is obvious that the functions $f_T(x)$ have the form
\begin{equation}\label{o2} f_T(x)=c_T^{(1)}\int_{0}^{x}\exp\Bigg\{-2\int _{0}^{u}a_T(v)\,dv\Bigg\}\,du+c_T^{(2)}, \end{equation}
where $c_T^{(1)}$ and $c_T^{(2)}$ are some families of constants.
The latter functions possess continuous derivatives $f_T^{\prime}(x)$, and their second derivatives $f_T^{\prime\prime}(x)$ exist almost everywhere with respect to the Lebesgue measure and are locally integrable. Note that $c_T^{(1)}$ are normalizing constants and $c_T^{(2)}$ are centering constants in the limit theorems (see \cite{book7}, \S6). In what follows, for simplicity, we assume that in (\ref{o2}), $c_T^{(1)}\equiv1$ and $c_T^{(2)}\equiv 0$.
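Since $f_T'(x)=\exp\{-2\int_0^x a_T(v)\,dv\}$ implies $f_T''(x)=-2a_T(x)f_T'(x)$ a.e., the harmonicity relation $f_T'a_T+\frac{1}{2}f_T''=0$ holds identically. A numerical check with an illustrative drift (our choice, not from the paper):

```python
# With f'(x) = exp(-2 * \int_0^x a(v) dv) we get f'' = -2 a f', hence
# f' a + f''/2 = 0.  Checked via central differences for a sample drift.
import math

a = lambda x: math.sin(x)            # illustrative drift (our choice)
A = lambda x: 1.0 - math.cos(x)      # closed-form \int_0^x sin(v) dv
f_prime = lambda x: math.exp(-2.0 * A(x))

h = 1e-5
for x in (-2.0, -0.5, 0.3, 1.7):
    f_second = (f_prime(x + h) - f_prime(x - h)) / (2.0 * h)
    assert abs(f_prime(x) * a(x) + 0.5 * f_second) < 1e-6
```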
In this paper, we assume for the coefficient $a_T(x)$ of equation (\ref{o1}) that there exists a family of functions $G_T(x)$, $x\in\mathbb{R}$, with continuous derivatives $G_T^{\prime}(x)$ and locally integrable second derivatives $G_T^{\prime\prime}(x)$ a.e. with respect to the Lebesgue measure such that, for all $T>0$ and $x\in\mathbb{R}$, the following inequalities hold:
\begin{align*} (A_1) \hspace*{27pt}\quad&\biggl( G_T^{\prime}(x)a_T(x)+\frac{1}{2}\,G_T^{\prime\prime}(x) \biggr)^2+\bigl( G_T^{\prime}(x)\bigr)^2 \leq C \bigl(1+ \bigl(G_T(x)\bigr)^2\bigr),\hspace*{27pt}\\
&\quad\big|G_T(x_0)\big|\leq C. \end{align*}
Suppose additionally that the functions $G_T(x)$, $x\in\mathbb{R}$, introduced by condition $(A_1)$ satisfy the following assumptions:
\begin{itemize}
\item[$(i)$] There exist constants $C>0$ and $\alpha>0$ such that
$|G_T(x)|\geq C |x|^{\alpha}$.
\item[$(ii)$] There exist a bounded function $\psi\left(|x|\right)$
and a constant $m\geq0$ such that\break $\psi\left(|x|\right)\to0$ as
$|x|\to0$ and, for all $x\in\mathbb{R}$ and $T>0$ and for any measurable bounded set $B$, the following inequality holds:
\begin{align*} (A_2) \hspace*{15pt}\quad\quad\int_{0}^{x} f_T'(u) \Biggl( \,\int _{0}^{u}\frac{\chi_B\left(G_T(v)\right)}{f'_T(v)}\,dv\Biggr)\,du
\leq\psi\bigl(\lambda(B)\bigr) \left[1+ |x|^m\right],\hspace*{15pt} \end{align*}
where $\chi_B(v)$ is the indicator function of a set $B$, $\lambda (B)$ is the Lebesgue measure of $B$, and $f_T'(x)$ is the derivative of the function $f_T(x)$ defined by equality (\ref{o2}). \end{itemize}
Let $\left\{G_T \right\}$ be the class of the functions $G_T(x)$, $x\in \mathbb{R}$, satisfying conditions $(A_1)$ and (i)--(ii). The class of equations of the form (\ref{o1}) whose coefficients $a_T(x)$ admit $G_T(x)$, $x\in\mathbb{R}$, from the class $\left\{G_T \right\}$ will be denoted by $K\left(G_T \right)$. It is easy to see that the class $K\left(G_T \right)$ does not depend on the constants $c_T^{(1)}$ and $c_T^{(2)}$ in representation (\ref{o2}).
It is clear that if there exist constants $\delta> 0$ and $C>0$ such that $0<\delta\leq f_T'(x)\leq C$ for all $x\in\mathbb{R}$, $T>0$, then the corresponding equations (\ref{o1}) belong to the class $K\left(G_T \right)$ for $G_T(x)=f_T(x)$. We denote this subclass as $K_1$. Note that the class $K\left(G_T \right)$ contains in particular the equations for which, at some points $x_k$, we have the convergence $f_T'(x_k)\to\infty$ or the convergence $f_T'(x_k)\to 0$ as $T\to\infty$. For example, consider equation (\ref{o1}) with $a_T(x)=\frac{c_0Tx}{1+x^2T}$. It is easy to obtain that $f_T'(x)=\frac{1}{\left(1+x^2T\right)^{c_0}}$, and if $c_0>-\frac{1}{2}$, then such equations belong to the class $K\left(G_T \right)$ with $G_T(x)=x^2$ (here, at points $x\neq0$, we have $f_T'(x)\to0$ for $c_0>0$, $f_T'(x)\to\infty$ for $-\frac{1}{2}<c_0<0$, and $f_T'(x)\equiv1$ for $c_0=0$).\vadjust{\eject}
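For the example above, $\int_0^x a_T(v)\,dv=\frac{c_0}{2}\ln(1+x^2T)$, so $f_T'(x)=(1+x^2T)^{-c_0}$ follows directly from (\ref{o2}). The closed form can be cross-checked against crude numerical integration (the values of $T$ and $c_0$ below are illustrative):

```python
# For a_T(x) = c0 T x / (1 + x^2 T), \int_0^x a_T = (c0/2) ln(1 + x^2 T),
# so f_T'(x) = exp(-2 \int) = (1 + x^2 T)^(-c0).  Cross-check numerically.
import math

def f_prime_numeric(x, T, c0, n=200_000):
    # crude midpoint rule for \int_0^x a_T(v) dv
    a_T = lambda v: c0 * T * v / (1.0 + v * v * T)
    h = x / n
    integral = sum(a_T((i + 0.5) * h) for i in range(n)) * h
    return math.exp(-2.0 * integral)

T, c0 = 1000.0, 0.7   # illustrative parameter values
for x in (0.1, 1.0, 3.0):
    assert math.isclose(f_prime_numeric(x, T, c0),
                        (1.0 + x * x * T) ** (-c0), rel_tol=1e-3)
```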
For the class of equations $K\left(G_T \right)$, we study the asymptotic behavior as $T\to\infty$ of the distributions of the following functionals:
\begin{align*} &\beta_T^{(1)} (t)=\int_{0}^{t} g_T\bigl(\xi_T(s)\bigr)\,ds,\qquad \beta _T^{(2)} (t)=\int_{0}^{t} g_T\bigl(\xi_T(s)\bigr)\,dW_T(s),\\ &I_T(t)=F_T\bigl(\xi_T(t)\bigr)+\int_{0}^{t} g_T\bigl(\xi_T(s)\bigr)\,dW_T(s),\qquad \beta_T (t)=\int_{0}^{t} g_T\bigl(\xi_T(s)\bigr)\,d\xi_T(s), \end{align*}
where the processes $\xi_T(t)$, $W_T(t)$ are related via equation~(\ref{o1}), $ g_T (x) $ is a family of measurable locally bounded real-valued functions, and $ F_T (x)$ is a family of continuous real-valued functions.
This paper is a continuation of \cite{paper77,paper88,paper8}. Note that the behavior of the distributions of functionals $\beta_T^{(1)} (t)$, $\beta_T^{(2)} (t)$ for the solutions $\xi_T$ of equations (\ref {o1}) from the class $K_1$ is studied in \cite{paper55} and \cite {paper56}. The case where $W_T(t)$ is replaced with $\eta_T(t)$, where $\eta_T(t)$ is a family of continuous martingales with the characteristics $\langle{\eta_T}\rangle(t)\to t$ as $T\to\infty$, was studied in \cite{paper57}. Paper \cite{paper58} was devoted to a discrete analogue of the results from \cite{paper57}. A similar problem for the functionals $I_T(t)$ in the case of equation (\ref{o1}) with $a_T (x)\equiv0$ was considered in \cite{book7} and in \cite{paper11}, for the class $K_1$. In \cite{paper77,paper88,paper8}, the behavior of the distributions of the functionals $\beta_T^{(1)} (t)$, $\beta_T^{(2)} (t)$, and $I_T(t)$ with a special dependence of the drift coefficients $a_T (x)=\sqrt
{T}a(x\sqrt{T})$ on the parameter $T$ is considered, mainly in the case where $\left|xa(x)\right|\leq C$ for all $x\in\mathbb{R}$. The behavior of the distributions of functionals $\beta_T^{(1)} (t)$ was studied in \cite{paper77}, $\beta_T^{(2)} (t)$ was studied in \cite{paper88}, and $I_T(t)$ was investigated in \cite{paper8}. A more detailed review of the known results in this area is presented in \cite{paper77,paper88,paper8}. Note that the functionals $\beta_T^{(1)} (t)$, $\beta_T^{(2)} (t)$, and $\beta_T (t)$ are particular cases of the functional $I_T(t)$ (see \cite{paper8}, Lemma 4.1).
\begin{zau}\label{z3}
In this paper, we often apply the It\^{o} formula to the process $\varPhi (\xi_T(t))$, where $\xi_T(t)$ is a solution of equation (\ref{o1}), the derivative $\varPhi'(x)$ of the function $\varPhi(x)$ is assumed to be continuous, and the second derivative $\varPhi ''(x)$ is assumed to exist a.e.\ with respect to the Lebesgue measure and to be locally integrable. Then it follows from \cite{paper9} that with probability one, for all $t\geq0$, the following equality holds:
\begin{align*}
\varPhi\bigl(\xi_T(t)\bigr)-\varPhi(x_0)&= \int_{0}^{t}\biggl(\varPhi '\bigl(\xi_T(s)\bigr)a_T\bigl(\xi_T(s)\bigr)+\frac{1}{2}\,\varPhi''\bigl(\xi_T(s)\bigr)\biggr)\,ds\\ &\quad+\int _{0}^{t}\varPhi'\bigl(\xi_T(s)\bigr)\,dW_T(s).
\end{align*} \end{zau}
\begin{zau}\label{zZ3} Let $\xi_T$ be a solution of equation (\ref {o1}), and $G_T(x)$ be a family of functions satisfying condition $(A_1)$. Theorem 1 from \cite{paper10} implies that the family of the processes $\{\zeta_T(t)=G_T(\xi_T(t)), t\geq0\}$ is weakly compact. The proof of this result is based on the equality
\begin{equation}\label{o4} \zeta_T(t)=G_T(x_0)+\int_{0}^{t}\biggl( G_T^{\prime}\bigl(\xi_T(s)\bigr)a_T\bigl(\xi _T(s)\bigr)+\frac{1}{2}\,G_T^{\prime\prime}\bigl(\xi_T(s)\bigr) \biggr)\,ds+\eta_T(t), \end{equation}
where
\[ \eta_T(t)=\int_{0}^{t}G_T^{\prime}\bigl(\xi_T(s)\bigr)\,dW_T(s),\qquad\zeta _T (t)=G_T\bigl(\xi_T(t)\bigr). \]
In turn, the latter equality follows from Remark \ref{z3}. In addition, it is established in the proof of Theorem 1 from \cite {paper10} that for any constants $L>0$ and $\varepsilon>0$,
\begin{align}\label{o5}
\begin{split} \mathop{\lim}\limits_{N\to\infty}\overline{\mathop{\lim}\limits _{T \to\infty}} \mathop{\sup}\limits_{0\leq t\leq L} \pr\bigl\{
\big|\lambda_T(t)\big|>N\bigr\}&=0, \\ \mathop{\lim}\limits_{h\to0}\overline{\mathop{\lim}\limits_{T \to
\infty}} \mathop{\sup}\limits_{|t_1-t_2|\leq h;\, t_i\leq L} \pr
\bigl\{\big|\lambda_T(t_2)-\lambda_T(t_1)\big|>\varepsilon\bigr\}&=0, \end{split}
\end{align}
and that, for any $k>1$ and for certain constants $C_k$ and $C$,
\begin{equation}\label{o6}
\M\mathop{\sup}\limits_{0\leq t\leq L} \big|\lambda_T(t)\big|^k\leq C_k,\qquad
\M\big|\lambda_T(t_2)-\lambda_T(t_1)\big|^4\leq C|t_2-t_1|^2, \end{equation}
where $\lambda=\eta$ or $\lambda=\zeta$ (see \cite{book2}, \S6, Theorem 4). \end{zau}
\begin{zau}\label{z1} Here and throughout the paper, the weak convergence of the processes means the weak convergence in the uniform topology of the space of continuous functions $C[0,L]$ for any $L>0$. The processes that have continuous trajectories with probability 1 will be simply called continuous. \end{zau}
The paper is organized as follows. Section~\ref{section2} contains the statements of the main results. In Section~\ref{section3}, they are proved. Auxiliary results are collected in Section~\ref{section4}.
\section{Statement of the main results}\label{section2}
In what follows, we denote by $C,\,L,\,N,\,C_N$ any constants that do not depend on $T$ and $x$. Assume that, for certain locally bounded functions $ q_T (x) $ and any constant $N>0$, the following condition holds:
\[
(A_3)\hspace*{40pt}\quad\quad\quad\quad\lim\limits_{T\to\infty}\mathop{\sup}\limits _{|x|\leq N} f_T'(x) \Biggl| \,\int_{0}^{x}\frac
{q_T(v)}{f'_T(v)}\,dv\Biggr|=0,\hspace*{80pt} \]
where $f_T'(x)$ is the derivative of the function $f_T(x)$ defined by Eq.~(\ref{o2}).
\begin{thm}\label{th2} Let $\xi_T$ be a solution of Eq.~\eqref{o1} from the class $K\left(G_T \right)$ and $G_T(x_0)\to y_0$ as $T\to\infty$. Assume that there exist measurable locally bounded functions $a_0(x)$ and $\sigma_0(x)$ such that:
\begin{itemize}
\item[\rm1.] the functions
\[ q_T^{(1)}(x)=G_T^{\prime}(x)a_T(x)+\frac{1}{2}\,G_T^{\prime\prime}(x)- a_0\bigl(G_T(x)\bigr) \]
and
\[ q_T^{(2)}(x)=\bigl(G_T^{\prime}(x)\bigr)^2- \sigma_0^2\bigl(G_T(x)\bigr) \]
satisfy assumption $(A_3)$;
\item[\rm2.] the It\^{o} equation
\begin{equation}\label{o3} \zeta(t)=y_0+\int_{0}^{t}a_0\bigl(\zeta(s)\bigr)\,ds+\int _{0}^{t}\sigma_0\bigl(\zeta(s)\bigr)\,d\hat{W}(s) \end{equation}
has a unique weak solution. \end{itemize}
Then the stochastic process $\zeta_T(t)=G_T(\xi_T(t))$ weakly converges, as $T\to\infty$, to the solution $\zeta(t)$ of Eq.~\eqref{o3}.\vadjust{\eject} \end{thm}
\begin{thm}\label{th3} Let $\xi_T$ be a solution of Eq.~\eqref{o1} from the class $K\left(G_T \right)$, and let the assumptions of Theorem \emph{\ref{th2}} hold. Assume that, for measurable locally bounded functions $g_T(x)$, there exists a measurable locally bounded function $g_0(x)$ such that the function
\[ q_T(x)=g_T(x)- g_0\bigl(G_T(x)\bigr) \]
satisfies assumption $(A_3)$. Then the stochastic process $ \beta _T^{(1)} (t)=\int_{0}^{t} g_T(\xi_T(s))\,ds$ weakly converges, as $T\to\infty$, to the process
\[ \beta^{(1)} (t)=\int_{0}^{t} g_0\bigl(\zeta(s)\bigr)\,ds, \]
\noindent where $\zeta(t)$ is a solution of Eq.~\eqref{o3}. \end{thm}
\begin{thm}\label{th4} Let $\xi_T$ be a solution of Eq.~\eqref{o1} from the class $K\left(G_T \right)$, and let the assumptions of Theorem \emph{\ref{th2}} hold. Assume that, for measurable locally bounded functions $g_T(x)$, there exists a measurable locally bounded function $g_0(x)$ such that
\[
(A_4)\hspace*{15pt}\quad\quad\quad\lim\limits_{T\to\infty}\mathop{\sup}\limits _{|x|\leq N} \Biggl| f_T'(x) \,\int_{0}^{x}\frac
{g_T(v)}{f'_T(v)}\,dv-g_0\bigl(G_T(x)\bigr)G_T^{\prime}(x)\Biggr|=0\hspace*{39pt} \]
for all $N>0$. Then the stochastic process $ \beta_T^{(1)} (t)=\int _{0}^{t} g_T(\xi_T(s))\,ds$ weakly converges, as $T\to\infty$, to the process
\[ \tilde{\beta}^{(1)} (t)=2\Biggl(\int_{y_0}^{\zeta(t)} g_0(x)\, dx-\int_{0}^{t} g_0\bigl(\zeta(s)\bigr)\,\sigma_0\bigl(\zeta(s)\bigr)\,d\hat {W}(s)\Biggr), \]
where $\zeta(t)$ and the Wiener process $\hat{W}(t)$ are related via Eq.~\eqref{o3}. \end{thm}
\begin{thm}\label{th5} Let $\xi_T$ be a solution of equation \emph{(\ref{o1})} from the class $K\left (G_T \right)$, and let the assumptions of Theorem \emph{\ref{th2}} hold. Assume that, for measurable locally bounded functions $g_T(x)$, there exists a measurable locally bounded function $g_0(x)$ such that the function
\[ q_T(x)=\bigl(g_T(x)- g_0\bigl(G_T(x)\bigr)G_T^{\prime}(x)\bigr)^2 \]
satisfies assumption $(A_3)$. Then the stochastic process $\beta_T^{(2)} (t)\,{=}\int_{0}^{t} g_T(\xi_T(s))\,dW_T(s)$, where $\xi_T(t)$ and $W_T(t)$ are related via Eq.~\eqref{o1}, weakly converges, as $T\to\infty$, to the process
\[ {\beta}^{(2)} (t)=\int_{0}^{t} g_0\bigl(\zeta(s)\bigr)\,d\zeta(s)-\int _{0}^{t} g_0\bigl(\zeta(s)\bigr)\,a_0\bigl(\zeta(s)\bigr)\,ds, \]
where $\zeta(t)$ is a solution of Eq.~\eqref{o3}. \end{thm}
\begin{thm}\label{th6} Let $\xi_T$ be a solution of Eq.~\eqref{o1} from the class $K\left(G_T \right)$, and let the assumptions of Theorem \emph{\ref{th2}} hold. Assume that, for continuous functions $F_T(x)$ and locally bounded measurable functions $g_T(x)$, there exist a continuous function $F_0(x)$ and locally bounded measurable function $g_0(x)$ such that, for all~$N>0$,
\[
\lim\limits_{T\to\infty}\mathop{\sup}\limits_{|x|\leq N} \big|
F_T(x)-F_0\bigl(G_T(x)\bigr) \big|=0, \]
and let the functions $g_T(x)$ and $g_0(x)$ satisfy the assumptions of Theorem \emph{\ref{th5}}. Then the stochastic process\vadjust{\eject}
\[ I_T(t)=F_T\bigl(\xi_T(t)\bigr)+\int_{0}^{t} g_T\bigl(\xi_T(s)\bigr)\,dW_T(s), \]
where $\xi_T(t)$ and $W_T(t)$ are related via Eq.~\eqref{o1}, weakly converges, as $T\to\infty$, to the process
\[ I_0(t)=F_0\bigl(\zeta(t)\bigr)+\int_{0}^{t} g_0\bigl(\zeta(s)\bigr)\,\sigma_0\bigl(\zeta (s)\bigr)\,d\hat{W}(s), \]
where $\zeta(t)$ and the Wiener process $\hat{W}(t)$ are related via Eq.~\eqref{o3}. \end{thm}
The next theorem essentially follows from \cite{paper11}; however, we provide its proof for the reader's convenience and for completeness of the results.
\begin{thm}\label{th7} Let $\xi_T$ be a solution of Eq.~\eqref{o1} from the class $K\left(G_T \right)$ for $G_T(x)=f_T(x)$, and let $0<\delta\leq f_T'(x)\leq C$ and $f_T(x_0)\to y_0$ as $T\to\infty$. Also, let $\zeta_T(t)=f_T(\xi _T(t))$,
\[ I_T(t)=F_T\bigl(\xi_T(t)\bigr)+\int_{0}^{t} g_T\bigl(\xi_T(s)\bigr)\,dW_T(s), \]
$F_T(x)$ be continuous functions, $g_T(x)$ be locally square-integrable functions, and the processes $\xi_T(t)$ and $W_T(t)$ be related via Eq.~\eqref{o1}.
The two-dimensional process $\left(\zeta_T (t),I_T(t)\right)$ weakly converges, as $T\to\infty$, to the process $\left(\zeta(t),I(t)\right )$, where $I(t)=F_0(\zeta(t))+\int_{0}^{t} g_0(\zeta(s))\,d\zeta (s)$, and $\zeta(t)$ is a~weak solution of the It\^{o} equation $\zeta (t)=y_0+\int_{0}^{t}\sigma_0(\zeta(s))\,d{W}(s)$, if and only if there exist constants $c_T^{(1)}$ and $c_T^{(2)}$ in \emph{(\ref{o2})} such that, as $T\to\infty$:
\begin{itemize}
\item[\rm1.] for all $x$,
\[ \int_{0}^{\varphi_T (x)}\frac{[f_T'(v)]^2- \sigma _0^2(f_T(v))}{f_T'(v)}\,dv \to0, \]
where $\varphi_T (x)$ is the inverse function of the function $f_T(x)$;
\item[\rm2.] for all $N>0$,
\[
\mathop{\sup}\limits_{|x|\leq N} \big|F_T(x)+f_T(x)-F_0
\bigl(f_T(x)\bigr)\big|\to0 \]
and
\[
\int_{-N}^{N}\frac{|g_T(x)-f_T'(x)[1+g_0
(f_T(x))]|^2}{f_T'(x)}\,dx \to0. \]
\end{itemize}
\end{thm}
\section{Proof of the main results}\label{section3}
Proof of Theorem \ref{th2}. Rewrite Eq.~(\ref{o4}) as
\begin{equation}\label{o7} \zeta_T(t)=G_T(x_0)+\int_{0}^{t}a_0\bigl(\zeta_T(s)\bigr)\,ds+\alpha ^{(1)}_T(t)+\eta_T(t), \end{equation}
where
\[ \alpha^{(1)}_T(t)=\int_{0}^{t}q^{(1)}_T\bigl(\xi_T(s)\bigr)\,ds,\qquad q^{(1)}_T(x)= G_T^{\prime}(x)a_T(x)+\frac{1}{2}\,G_T^{\prime\prime}(x)-a_0 \bigl(G_T(x)\bigr).\vadjust{\eject} \]
The functions $q^{(1)}_T(x)$ satisfy the conditions of Lemma \ref{lm2}. Thus, for any $L>0$,
\begin{equation}\label{o8}
\sup_{0\leq t\leq L}\big| \alpha^{(1)}_T(t)\big|\stackrel{\pr}{\to} 0 \end{equation}
as $T\to\infty$. It is clear that $\eta_T (t)$ is a family of continuous martingales with quadratic characteristics
\begin{equation}\label{o9} \langle\eta_T\rangle(t)=\int_{0}^{t}\bigl(G_T^{\prime}\bigl(\xi _T(s)\bigr)\bigr)^2\,ds=\int_{0}^{t}\sigma_0^2\bigl(\zeta_T(s)\bigr)\,ds+\alpha ^{(2)}_T(t), \end{equation}
where
\[ \alpha^{(2)}_T(t)=\int_{0}^{t}q^{(2)}_T\bigl(\xi_T(s)\bigr)\,ds,\qquad q^{(2)}_T(x)= \bigl(G_T^{\prime}(x)\bigr)^2-\sigma_0^2\bigl(G_T(x) \bigr). \]
The functions $q^{(2)}_T(x)$ satisfy the conditions of Lemma \ref{lm2}. Thus, for any $L>0$,
\begin{equation}\label{o10}
\sup_{0\leq t\leq L}\big| \alpha^{(2)}_T(t)\big|\stackrel{\pr}{\to} 0 \end{equation}
as $T\to\infty$.
We have that relations (\ref{o5}) and (\ref{o6}) hold for the processes $\zeta_T(t)$ and $\eta_T(t)$, and, according to (\ref{o8}) and (\ref{o10}), these relations hold for the processes $\alpha ^{(k)}_T(t)$, $k=1,2$, as well. This means that we can apply Skorokhod's convergent subsequence principle (see \cite{book6}, Chapter I, \S6) for the process $ (\zeta_T(t), \eta_T(t),\alpha^{(1)}_T(t), \alpha^{(2)}_T(t))$: given an arbitrary sequence $T'_n\to\infty$, we can choose a subsequence $T_n\to\infty$, a~probability space $(\tilde\varOmega,\tilde\Im, \tilde \pr)$, and a stochastic process $(\tilde{\zeta}_{T_n}(t), \tilde {\eta}_{T_n}(t),\tilde{\alpha}^{(1)}_{T_n}(t), \tilde{\alpha }^{(2)}_{T_n}(t))$ defined on this space such that its finite-dimensional distributions coincide with those of the process $(\zeta_{T_n}(t), \eta_{T_n}(t),\alpha^{(1)}_{T_n}(t), \alpha ^{(2)}_{T_n}(t))$ and, moreover, \begin{align*} \tilde{\zeta}_{T_n}(t)\stackrel{\tilde{\pr}}{\to}\tilde{\zeta}(t),\qquad \tilde{\eta}_{T_n}(t)\stackrel{\tilde{\pr}}{\to}\tilde{\eta}(t),\qquad \tilde{\alpha}^{(1)}_{T_n}(t)\stackrel{\tilde{\pr}}{\to}\tilde{\alpha }^{(1)}(t),\qquad \tilde{\alpha}^{(2)}_{T_n}(t)\stackrel{\tilde{\pr}}{\to }\tilde{\alpha}^{(2)}(t) \end{align*} for all $0\leq t\leq L$, where $\tilde{\zeta}(t)$, $\tilde{\eta}(t)$, $\tilde{\alpha}^{(1)}(t)$, $\tilde{\alpha}^{(2)}(t)$ are some stochastic processes.
Evidently, relations (\ref{o8}) and (\ref{o10}) imply that $\tilde {\alpha}^{(k)}(t)\equiv0$, $k=1,2$, a.s. According to (\ref{o6}), the processes $\tilde{\zeta}(t)$ and $\tilde{\eta}(t)$ are continuous. Moreover, applying Lemma \ref{lm5} together with Eqs.~(\ref{o7}) and (\ref{o9}), we obtain that
\begin{align}\label{o11}
\begin{split} \tilde{\zeta}_{T_n} (t)&=G_{T_n}(x_0)+\int_{0}^{t}a_0\bigl(\tilde{\zeta }_{T_n}(s)\bigr)\,ds+\tilde{\alpha}^{(1)}_{T_n}(t)+\tilde{\eta}_{T_n}(t), \\ \langle\tilde{\eta}_{T_n}\rangle(t)&=\int_{0}^{t}\sigma _0^2\bigl(\tilde{\zeta}_{T_n}(s)\bigr)\,ds+\tilde{\alpha}^{(2)}_{T_n}(t),
\end{split} \end{align}
where $\tilde{\zeta}_{T_n}(t)\stackrel{\tilde{\pr}}{\to}\tilde{\zeta }(t)$, $\tilde{\eta}_{T_n}(t)\stackrel{\tilde{\pr}}{\to}\tilde{\eta
}(t)$, $\sup_{0\leq t\leq L}| \tilde{\alpha
}^{(k)}_{T_n}(t)|\stackrel{\tilde{\pr}}{\to} 0$, $k=1,2$, as $T_n\to\infty$. In addition,
it is established in \cite{paper10} that, for any constants $L>0$ and $\varepsilon>0$,
\[ \mathop{\lim}\limits_{h\to0}\overline{\mathop{\lim}\limits_{T_n
\to\infty}} \tilde{\pr}\Big\{\mathop{\sup}\limits_{|t_1-t_2|\leq h;\, t_i\leq L} \big|\tilde{\zeta}_{T_n}(t_2)-\tilde{\zeta
}_{T_n}(t_1)\big|>\varepsilon\Big\}=0. \]
Using the latter convergence and (\ref{o11}), we conclude that, for any constants $L>0$ and $\varepsilon>0$,\vadjust{\eject}
\[ \mathop{\lim}\limits_{h\to0}\overline{\mathop{\lim}\limits_{T_n
\to\infty}} \tilde{\pr}\Big\{\mathop{\sup}\limits_{|t_1-t_2|\leq h;\, t_i\leq L} \big|\tilde{\eta}_{T_n}(t_2)-\tilde{\eta
}_{T_n}(t_1)\big|>\varepsilon\Big\}=0. \]
Therefore, according to the well-known result of Prokhorov \cite{paper155}, we conclude that
\[
\sup\limits_{0\leq t\leq L}\big| \tilde{\zeta}_{T_n}(t)-\tilde{\zeta
}(t)\big|\stackrel{\tilde{\pr}}{\to} 0 \qquad\mbox{and}\qquad\sup
\limits_{0\leq t\leq L}\big| \tilde{\eta}_{T_n}(t)-\tilde{\eta
}(t)\big|\stackrel{\tilde{\pr}}{\to} 0 \]
as $T_n\to\infty$. According to Lemma \ref{lm3}, we can pass to the limit in (\ref{o11}) and obtain
\begin{equation}\label{o13} \tilde{\zeta} (t)=y_0+\int_{0}^{t}a_0\bigl(\tilde{\zeta}(s)\bigr)\, ds+\tilde{\eta}(t), \end{equation}
where $\tilde{\eta} (t)$ is a continuous martingale with the quadratic characteristic
\[ \langle\tilde{\eta}\rangle(t)=\int_{0}^{t}\sigma_0^2\bigl(\tilde {\zeta}(s)\bigr)\,ds. \]
Now, it is well known that the latter representation provides the existence of a Wiener process $\hat{W}(t)$ such that
\begin{equation}\label{o14} \tilde{\eta} (t)=\int_{0}^{t}\sigma_0\bigl(\tilde{\zeta}(s)\bigr)\,d\hat{W}(s). \end{equation}
Thus, the process $(\tilde{\zeta} (t),\hat{W}(t))$ satisfies Eq.~(\ref{o3}), and the process $\tilde{\zeta}_{T_n}(t)$ weakly converges, as $T_n\to\infty$, to the process $\tilde{\zeta}(t)$. Since the subsequence $T_n\to\infty$ is arbitrary and since a solution of Eq.~(\ref{o3}) is weakly unique, the proof of Theorem \ref{th2} is complete.
Proof of Theorem \ref{th3}. It is clear that, for all $t> 0$, with probability one,
\[ \beta_T^{(1)} (t)=\int_{0}^{t} g_0\bigl(\zeta_T(s)\bigr)\,ds+\alpha_T(t), \]
where $\alpha_T(t)=\int_{0}^{t} q_T(\xi_T(s))\,ds$ and $q_T(x)=g_T(x)-g_0\left(G_T(x)\right)$.
The functions $q_T(x)$ satisfy the conditions of Lemma \ref{lm2}. Thus, for any $L>0$,
\[
\sup_{0\leq t\leq L}\big| \alpha_T(t)\big|\stackrel{\pr}{\to} 0 \]
as $T\to\infty$. Similarly to (\ref{o11}), we obtain the equality
\begin{equation}\label{o17} \tilde{\beta}_{T_n}^{(1)} (t)=\int_{0}^{t} g_0\bigl(\tilde{\zeta }_{T_n}(s)\bigr)\,ds+\tilde{\alpha}_{T_n}(t), \end{equation}
where $\tilde{\zeta}_{T_n}(t)\stackrel{\tilde{\pr}}{\to}\tilde{\zeta
}(t)$ and $\sup_{0\leq t\leq L}| \tilde{\alpha
}_{T_n}(t)|\stackrel{\tilde{\pr}}{\to} 0$ as $T_n\to\infty$. The process $\tilde{\zeta}(t)$ is a solution of Eq.~(\ref{o13}), whereas by Lemma \ref{lm5} the finite-dimensional distributions of the stochastic process $\beta_{T_n}^{(1)} (t)$ coincide with those of the process $\tilde{\beta}_{T_n}^{(1)} (t)$.
Using Lemma \ref{lm3} and Eq.~(\ref{o17}), we conclude that
\[
\sup\limits_{0\leq t\leq L}\Biggl| \tilde{\beta}_{T_n}^{(1)} (t)-\int _{0}^{t} g_0\bigl(\tilde{\zeta}(s)\bigr)\,ds\Biggr|\stackrel{\tilde{\pr }}{\to} 0 \]
as $T_n\to\infty$. Thus, the process ${\beta}_{T_n}^{(1)} (t)$ weakly converges, as $T_n\to\infty$, to the process $ \beta^{(1)} (t)=\int _{0}^{t} g_0(\zeta(s))\,ds$, where $\zeta(t)$ is a solution of Eq.~(\ref{o3}). Since the subsequence\vadjust{\eject} $T_n\to\infty$ is arbitrary and since a solution $\zeta(t)$ of Eq.~(\ref {o3}) is weakly unique, the proof of Theorem \ref{th3} is complete.
Proof of Theorem \ref{th4}. Consider the function
\[ \varPhi_T(x)=2\int_{0}^{x}f'_T(u)\Biggl(\int_{0}^{u}\frac {g_T(v)}{f'_T(v)}\,dv\Biggr)\,du. \]
Applying the It\^{o} formula to the process $\varPhi_T(\xi_T(t))$, where $\xi_T(t)$ is a solution of Eq.~(\ref{o1}), we get that
\begin{align*}
{\beta}_{T}^{(1)} (t)&=\varPhi_T\bigl(\xi_T(t)\bigr)-\varPhi_T(x_0)- \int _{0}^{t}\varPhi'_T\bigl(\xi_T(s)\bigr)\,dW_T(s) \\ &=2\int_{x_0}^{\xi_T(t)}g_0\bigl(G_T(u)\bigr)G_T^{\prime}(u)\,du -2\int_{0}^{t}g_0\bigl({\zeta}_{T}(s)\bigr)G_T^{\prime}\bigl({\xi }_{T}(s)\bigr)\,dW_T(s) \\ &\quad+2\int_{x_0}^{\xi_T(t)}\hat{q}_T(u)\,du -2\int_{0}^{t}\hat {q}_T\bigl({\xi}_{T}(s)\bigr)\,dW_T(s)\\ &= 2\int_{G_T(x_0)}^{\zeta_T(t)}g_0(u)\,du -2\int _{0}^{t}g_0\bigl({\zeta}_{T}(s)\bigr)\,d\eta_T(s)+\gamma ^{(1)}_T(t)-\gamma^{(2)}_T(t), \end{align*}
where
\begin{align*} \gamma^{(1)}_T(t)&=2\int_{x_0}^{\xi_T(t)}\hat{q}_T(u)\,du,\qquad \gamma^{(2)}_T(t)=2\int_{0}^{t}\hat{q}_T\bigl({\xi}_{T}(s) \bigr)\,dW_T(s),\\ \hat{q}_T(x)&=\left(f'_T(x)\int_{0}^{x}\frac{g_T(v)}{f'_T(v)}\, dv-g_0\bigl(G_T(x)\bigr)G_T^{\prime}(x)\right). \end{align*}
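The second equality in the display above relies on a splitting of $\varPhi'_T$ that is worth recording explicitly; it follows directly from the definitions of $\varPhi_T$ and $\hat{q}_T$:

```latex
% By construction, \varPhi'_T(x) = 2 f'_T(x)\int_0^x g_T(v)/f'_T(v)\,dv,
% and \hat{q}_T(x) absorbs the discrepancy with the limiting drift term:
\varPhi'_T(x)
  = 2 f'_T(x)\int_{0}^{x}\frac{g_T(v)}{f'_T(v)}\,dv
  = 2\,g_0\bigl(G_T(x)\bigr)G_T^{\prime}(x) + 2\,\hat{q}_T(x).
```

Thus the ordinary integral of $\varPhi'_T$ along the path splits into the $g_0$ term (after the substitution $u\mapsto G_T(u)$) and $\gamma^{(1)}_T$, while the stochastic integral splits into the $g_0$ term and $\gamma^{(2)}_T$.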
Denote $P_{NT}=\pr\{\mathop{\sup}_{0\leq t\leq L}
|\xi_{T}(t)|>N\}$. It is clear that, for any constants $\varepsilon>0$, $N>0$, and $L>0$, we have the inequalities
\begin{align*}
\pr\Big\{\mathop{\sup}\limits_{0\leq t\leq L} \big|\gamma
^{(1)}_T(t)\big|>\varepsilon\Big\}&\leq P_{NT}+\frac{2}{\varepsilon
}\M\mathop{\sup}\limits_{0\leq t\leq L}\left|\int_{x_0}^{\xi _T(t)}\hat{q}_T(u)\,du\right| \chi_{\{|\xi_T(t)|\leq N\}}\\
&\leq P_{NT}+\frac{2}{\varepsilon}\int_{-N}^{N}\left|\hat
{q}_T(u)\right|\,du \leq P_{NT}+\frac{4}{\varepsilon}N \mathop{\sup
}\limits_{|x|\leq N}\left|\hat{q}_T(x)\right| \end{align*}
and
\begin{align*}
&\pr\Big\{\mathop{\sup}\limits_{0\leq t\leq L} \big|\gamma^{(2)}_T(t)\big|>\varepsilon\Big\}\\
&\quad\leq P_{NT}+\frac{4}{\varepsilon^2}\M\mathop{\sup}\limits_{0\leq t\leq L} \left|\int_{0}^{t}\hat{q}_T\bigl({\xi}_{T}(s)\bigr)\chi_{\{|\xi_T(s)|\leq N\}}\,dW_T(s)\right|^2\\
&\quad\leq P_{NT}+\frac{16}{\varepsilon^2}\M\int_{0}^{L}\hat{q}^2_T\bigl({\xi}_{T}(s)\bigr)\chi_{\{|\xi_T(s)|\leq N\}}\,ds \leq P_{NT}+\frac{16}{\varepsilon^2}L\mathop{\sup}\limits_{|x|\leq N}\big|\hat{q}^2_T(x)\big|. \end{align*}
The inequality $|G_T(x)|\geq C |x|^{\alpha}$, $\alpha>0$, together with convergence (\ref{o5}), implies that
\begin{equation}\label{o19} \mathop{\lim}\limits_{N \to\infty}\overline{\mathop{\lim}\limits _{T \to\infty}}P_{NT}=0. \end{equation}
Thus, using the conditions of Theorem \ref{th4}, we get the convergence
\[
\sup_{0\leq t\leq L}\big|\gamma^{(k)}_T(t)\big|\stackrel{\pr}{\to} 0,\quad k=1,2, \]
as $T\to\infty$.
The same arguments as those used to establish (\ref{o11}) yield that
\begin{equation}\label{o21} \tilde{\beta}_{T_n}^{(1)} (t)=2\int_{G_{T_n}(x_0)}^{\tilde{\zeta }_{T_n}(t)}g_0(u)\,du -2\int_{0}^{t}g_0\big(\tilde{\zeta }_{T_n}(s)\big)\,d\tilde{\eta}_{T_n}(s)+\tilde{\gamma }^{(1)}_{T_n}(t)-\tilde{\gamma}^{(2)}_{T_n}(t), \end{equation}
where
\begin{align*}
&\sup_{0\leq t\leq L}\big|\tilde{\zeta}_{T_n}(t)-\tilde{\zeta}(t)\big
|\stackrel{\tilde{\pr}}{\to} 0, \qquad\sup_{0\leq t\leq L}\left|\tilde
{\eta}_{T_n}(t)-\tilde{\eta}(t)\right|\stackrel{\tilde{\pr}}{\to} 0,\\
&\sup_{0\leq t\leq L}\big|\tilde{\gamma}^{(k)}_{T_n}(t)\big|\stackrel {\tilde{\pr}}{\to} 0,\quad k=1,2, \end{align*}
as $T_n\to\infty$ for all $L>0$. According to (\ref{o13}) with $\tilde {\eta}(t)$ defined in (\ref{o14}), the process $(\tilde{\zeta} (t),\hat{W}(t))$ satisfies Eq.~(\ref{o3}).
By Lemma \ref{lm5} the finite-dimensional distributions of the stochastic process $\beta_{T_n}^{(1)} (t)$ coincide with those of the process $\tilde{\beta}_{T_n}^{(1)} (t)$. Using Lemma \ref{lm3}, we can pass to the limit as $T_n\to\infty$ in (\ref{o21}) and obtain
\begin{equation}\label{o22}
\sup_{0\leq t\leq L}\big|\tilde{\beta}_{T_n}^{(1)}(t)-\tilde{\beta
}^{(1)}(t)\big|\stackrel{\tilde{\pr}}{\to} 0 \end{equation}
as $T_n\to\infty$, where
\begin{align*} \tilde{\beta}^{(1)}(t)&=2\int_{y_0}^{\tilde{\zeta} (t)}g_0(u)\,du -2\int_{0}^{t}g_0\bigl(\tilde{\zeta}(s)\bigr)\,d\tilde{\eta}(s)\\ &=2\int_{y_0}^{\tilde{\zeta} (t)}g_0(u)\,du -2\int _{0}^{t}g_0\bigl(\tilde{\zeta}(s)\bigr)\,d\tilde{\zeta}(s)+2\int _{0}^{t}g_0\bigl(\tilde{\zeta}(s)\bigr) a_0\bigl(\tilde{\zeta}(s)\bigr)\,ds, \end{align*}
and $\tilde{\zeta}(t)$ is a solution of Eq.~(\ref{o3}). Therefore, we have that Theorem \ref{th4} holds for the process ${\beta}_{T_n}^{(1)} (t)$ as $T_n\to\infty$. Since the subsequence $T_n\to\infty$ is arbitrary and since a solution $\zeta(t)$ of Eq.~(\ref{o3}) is weakly unique, the proof of Theorem \ref {th4} is complete.
Proof of Theorem \ref{th5}. It is clear that
\begin{equation}\label{o23} \beta_T^{(2)} (t)=\int_{0}^{t} g_0\bigl(\zeta_T(s)\bigr)\,d\eta_T(s)+\gamma _T (t), \end{equation}
where
\[ \gamma_T (t)=\int_{0}^{t}{q}_T\bigl({\xi}_{T}(s)\bigr)\, dW_T(s),\qquad q_T(x)=g_T(x)-g_0\bigl(G_T(x)\bigr)G_T^{\prime}(x). \]
The process $\gamma_T (t)$ is a continuous martingale with the quadratic characteristic
\[ \langle{\gamma_T}\rangle(t)=\int_{0}^{t}q_T^2\bigl(\xi_T(s)\bigr)\,ds \quad\text{for all} \ T>0. \]
According to the conditions of Theorem \ref{th5}, the functions $q_T^2(x)$ satisfy the conditions of Lemma \ref{lm2}. Thus, for any $L>0$, we have the convergence $\langle{\gamma_T}\rangle(L)\stackrel {{\pr}}{\to} 0$ as $T\to\infty$.
The following inequality holds for any constants $\varepsilon>0$ and $\delta>0$:
\[
\pr\Bigl\{\mathop{\sup}\limits_{0\leq t\leq L} \big|\gamma_T(t)
\big|>\varepsilon\Bigr\}\leq\delta+\pr\bigl\{ \langle{\gamma_T}\rangle (L)>\varepsilon^2\delta\bigr\} \]
(see \cite{book2}, \S3, Theorem 2), which implies the relation
\begin{equation}\label{o24}
\sup_{0\leq t\leq L}\big|\gamma_T(t)\big|\stackrel{\pr}{\to} 0 \end{equation}
as $T\to\infty$.
Then, similarly to representation (\ref{o11}), on a certain probability space $(\tilde{\varOmega},\tilde{\Im} , \tilde{\pr})$, for an arbitrary subsequence $T_n$, we get the equality
\[ \tilde{\beta}_{T_n}^{(2)} (t)=\int_{0}^{t} g_0\bigl(\tilde{\zeta }_{T_n}(s)\bigr)\,d\tilde{\eta}_{T_n}(s)+\tilde{\gamma}_{T_n} (t), \]
where
\[
\sup_{0\leq t\leq L}\big|\tilde{\zeta}_{T_n}(t)-\tilde{\zeta}(t)\big
|\stackrel{\tilde{\pr}}{\to} 0, \qquad\sup_{0\leq t\leq L}\big|\tilde
{\eta}_{T_n}(t)-\tilde{\eta}(t)\big|\stackrel{\tilde{\pr}}{\to} 0,\qquad
\sup_{0\leq t\leq L}\big|\tilde{\gamma}_{T_n}(t)\big|\stackrel{\tilde {\pr}}{\to} 0 \]
as $T_n\to\infty$ for any $L>0$, where the process $(\tilde{\zeta} (t),\hat{W}(t))$ satisfies Eq.~(\ref{o3}), $\tilde{\eta}(t)$ is defined in (\ref{o14}), and the processes $\tilde{\beta}_{T_n}^{(2)} (t)$ and ${\beta }_{T_n}^{(2)} (t)$ are stochastically equivalent.
Similarly to the proof of convergence (\ref{o22}), we obtain
\[
\sup_{0\leq t\leq L}\big|\tilde{\beta}_{T_n}^{(2)}(t)-\tilde{\beta
}^{(2)}(t)\big|\stackrel{\tilde{\pr}}{\to} 0 \]
as $T_n\to\infty$, where
\begin{align*}
\tilde{\beta}^{(2)}(t)&=\int_{0}^{t}g_0\bigl(\tilde{\zeta }(s)\bigr)\,d\tilde{\eta}(s)\\ &=\int_{0}^{t}g_0\bigl(\tilde{\zeta}(s)\bigr)\,d\tilde{\zeta }(s)-\int_{0}^{t}g_0\bigl(\tilde{\zeta}(s)\bigr) a_0\bigl(\tilde{\zeta}(s)\bigr)\,ds.
\end{align*}
Thus, the process $\tilde{\beta}_{T_n}^{(2)} (t)$ weakly converges, as $T_n\to\infty$, to the process~$\tilde{\beta}^{(2)} (t)$. Since the subsequence $T_n\to\infty$ is arbitrary and since the processes $\tilde {\beta}_{T_n}^{(2)} (t)$ and ${\beta}_{T_n}^{(2)} (t)$ are stochastically equivalent, the proof of Theorem \ref{th5} is complete.
Proof of Theorem \ref{th6}. It is clear that
\[ I_T(t)=F_0\bigl(\zeta_T(t)\bigr)+\int_{0}^{t} g_0\bigl(\zeta_T(s)\bigr)\,d\eta _T(s)+\alpha_T (t)+\gamma_T (t), \]
where
\begin{align*} \alpha_T (t)&=F_T\bigl({\xi}_{T}(t)\bigr)-F_0\bigl({\zeta}_{T}(t) \bigr),\qquad\eta_T(t)=\int_{0}^{t}G_T^{\prime}\bigl(\xi_T(s)\bigr)\,dW_T(s),\\ \gamma_T (t)&=\int_{0}^{t}{q}_T\bigl({\xi}_{T}(s)\bigr)\, dW_T(s),\qquad q_T(x)=g_T(x)-g_0\bigl(G_T(x)\bigr)G_T^{\prime}(x). \end{align*}
Denote, as before, $P_{NT}=\pr\{\mathop{\sup}_{0\leq t\leq L} |\xi_{T}(t)|>N\}$. Since for any constants $\varepsilon>0$, $N>0$, and $L>0$, we have the inequality
\begin{align*}
&\pr\Big\{\mathop{\sup}\limits_{0\leq t\leq L} \big|F_T\bigl({\xi
}_{T}(t)\bigr)-F_0\bigl(G_T\bigl(\xi_T(t)\bigr)\bigr)\big|>\varepsilon\Big \}\\
&\quad\leq P_{NT}+\frac{2}{\varepsilon}\M\mathop{\sup}\limits_{0\leq t\leq L}\big|F_T\bigl({\xi}_{T}(t)\bigr)-F_0\bigl(G_T\bigl(\xi_T(t)\bigr)\bigr)
\big| \chi_{\{|\xi_T(t)|\leq N\}}\\
&\quad\leq P_{NT}+\frac{2}{\varepsilon} \mathop{\sup}\limits_{|x|\leq N}\big|F_T(x)-F_0\bigl(G_T(x)\bigr)\big|, \end{align*}
we can apply conditions of Theorem \ref{th6} and convergence (\ref{o19}) to get that
\[
\sup_{0\leq t\leq L}\big| \alpha_T(t)\big|\stackrel{\pr}{\to} 0 \]
as $T\to\infty$. The proof that an analogue of convergence (\ref{o24}) holds for $\gamma_T (t)$ is literally the same as in the proof of Theorem \ref{th5}. Then we can apply Skorokhod's convergent subsequence principle to the process $\left(\zeta_T(t), \eta_T(t),\alpha_T(t), \gamma_T (t)\right)$ and, similarly to representation (\ref{o11}), obtain the following equality for an arbitrary subsequence $T_n$ on a certain probability space $(\tilde{\varOmega},\tilde{\Im}, \tilde{\pr})$:
\[ \tilde{I}_{T_n}(t)=F_0\bigl(\tilde{\zeta}_{T_n}(t)\bigr)+\int_{0}^{t} g_0\bigl(\tilde{\zeta}_{T_n}(s)\bigr)\,d\tilde{\eta}_{T_n}(s)+\tilde{\alpha }_{T_n} (t)+\tilde{\gamma}_{T_n} (t), \]
where, as $T_n\to\infty$, for any $L>0$,
\begin{align*}
&\sup_{0\leq t\leq L}\big|\tilde{\zeta}_{T_n}(t)-\tilde{\zeta}(t)
\big|\stackrel{\tilde{\pr}}{\to} 0, \qquad\sup_{0\leq t\leq L}\big|\tilde
{\eta}_{T_n}(t)-\tilde{\eta}(t)\big|\stackrel{\tilde{\pr}}{\to} 0,\\
&\sup_{0\leq t\leq L}\big|\tilde{\alpha}_{T_n}(t)\big|\stackrel{\tilde
{\pr}}{\to} 0,\qquad\sup_{0\leq t\leq L}\big|\tilde{\gamma
}_{T_n}(t)\big|\stackrel{\tilde{\pr}}{\to} 0. \end{align*}
To complete the proof of Theorem \ref{th6}, we repeat the same arguments as in the proof of Theorem \ref{th5}.
Proof of Theorem \ref{th7}. According to the It\^{o} formula, the process $\zeta_T(t)=f_T(\xi_T(t))$ satisfies the equation $d\zeta_T (t)=\hat{\sigma}_T(\zeta_T(t))\,dW_T(t)$, where $\hat{\sigma}_T(x)=f_T'\left(\varphi_T (x)\right)$, $\varphi_T (x)$ is the inverse function of the function $f_T (x)$, and $\zeta_T(0)=f_T(x_0)\to y_0$ as $T\to\infty$. In addition, the following equality holds:
\[ I_T(t)=\hat{F}_T\bigl(\zeta_T(t)\bigr)+\int_{0}^{t} \hat{g}_T\bigl(\zeta_T(s)\bigr)\, d\zeta_T(s), \]
where $\hat{F}_T(x)=F_T(\varphi_T (x))$ and $\hat{g}_T(x)=g_T(\varphi_T (x))\cdot\hat{\sigma}_T^{-1}(x)$.
It is easy to see that condition 1 of the present theorem implies that
\[ \int_{0}^{x}\frac{dv}{\hat{\sigma}_T^{2}(v)}\to\int _{0}^{x}\frac{dv}{{\sigma}_0^{2}(v)} \]
as $T\to\infty$ for all $x$, whereas condition 2 implies that
\[
\mathop{\sup}\limits_{|x|\leq N} \big|\hat
{F}_T(x)+c_T^{(2)}+c_T^{(1)}x-F_0(x)\big|\to0 \]
and
\[
\int_{-N}^{N} \big|\hat{g}_T(x)-c_T^{(1)}-g_0(x)\big|^2\,dx \to0 \]
as $T\to\infty$ for any $N>0$.
This means that the necessary and sufficient conditions of weak convergence of the process $\left(\zeta_T (t),I_T(t)\right)$ as $T\to \infty$ to the process $\left(\zeta(t),I(t)\right)$ from \cite {paper11} hold with $b_T=c_T^{(1)}$ and $a_T=c_T^{(2)}$.
\section{Auxiliary results}\label{section4}
\begin{lem}\label{lm1} Let $\xi_T$ be a solution of Eq.~\eqref{o1} from the class $K\left(G_T \right)$. Then, for any $N>0$ and any Borel set $B\subset\left[-N; N\right]$, there exists a constant $C_L$ such that
\[ \int_{0}^{L}\pr\bigl\{G_T\bigl(\xi_T(s)\bigr)\in B\bigr\}\,ds\leq C_L \psi \bigl(\lambda(B)\bigr), \]
where $\lambda(B)$ is the Lebesgue measure of $B$, and $\psi\left
(|x|\right)$ is a bounded function satisfying $\psi\left(|x|\right)\to 0$ as $|x|\to0$. \end{lem}
\begin{proof} Consider the function
\[ \varPhi_T(x)=2\int_{0}^{x}f'_T(u)\Biggl(\int_{0}^{u}\frac{\chi _B\bigl(G_T(v)\bigr)}{f'_T(v)}\,dv\Biggr)\,du. \]
The function $\varPhi_T(x)$ is continuous, the derivative $\varPhi'_T(x)$ of this function is continuous, and the second derivative $\varPhi''_T(x)$ exists a.e.\ with respect to the Lebesgue measure and is locally bounded. Therefore, we can apply the It\^{o} formula to the process $\varPhi_T(\xi_T(t))$, where $\xi_T(t)$ is a solution of Eq.~(\ref{o1}).
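For concreteness, if (as the $\tfrac12$-coefficient in the drift identity displayed next suggests) Eq.~(\ref{o1}) has unit diffusion coefficient, $d\xi_T(t)=a_T(\xi_T(t))\,dt+dW_T(t)$, then the It\^{o} formula takes the form

```latex
% Ito's formula for \varPhi_T(\xi_T(t)); the unit diffusion coefficient is
% an assumption on the form of Eq. (o1), consistent with the 1/2-term below:
d\varPhi_T\bigl(\xi_T(t)\bigr)
  = \Bigl[\varPhi'_T\bigl(\xi_T(t)\bigr)\,a_T\bigl(\xi_T(t)\bigr)
          + \tfrac{1}{2}\,\varPhi''_T\bigl(\xi_T(t)\bigr)\Bigr]\,dt
  + \varPhi'_T\bigl(\xi_T(t)\bigr)\,dW_T(t).
```

Integrating from $0$ to $t$ and inserting the drift identity then gives (\ref{l1}).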
Furthermore,
\[ \varPhi'_T(x)a_T(x)+\frac{1}{2}\,\varPhi''_T(x)=\chi_B\bigl(G_T(x)\bigr) \]
a.e.\ with respect to the Lebesgue measure. Using the latter equality, we conclude that
\begin{equation}\label{l1} \int_{0}^{t}\chi_B\bigl(\zeta_T(s)\bigr)\,ds=\varPhi_T\bigl(\xi _T(t)\bigr)-\varPhi_T(x_0)- \int_{0}^{t}\varPhi'_T\bigl(\xi_T(s)\bigr)\,dW_T(s) \end{equation}
with probability one for all $t\geq0$, where $\zeta_T (t)=G_T(\xi_T(t))$. Hence, using the properties of stochastic integrals, we obtain that
\begin{equation}\label{l2} \int_{0}^{t}\pr\bigl\{\zeta_T (s)\in B\bigr\}\,ds=\M\bigl[\varPhi_T\bigl(\xi _T(t)\bigr)-\varPhi_T(x_0)\bigr]. \end{equation}
According to condition $(A_2)$ and inequality $|G_T(x)|\geq C
|x|^{\alpha}$, $C>0$, $\alpha>0$, we have
\[
\big|\varPhi_T(x)-\varPhi_T(x_0)\big|\leq2\psi\bigl(\lambda(B)
\bigr)\left[1+|x|^m\right]\leq2\psi\bigl(\lambda(B)\bigr)
\bigl[1+C^{-\frac{m}{\alpha}}\big|G_T(x)\big|^{\frac{m}{\alpha}}\bigr]. \]
Hence, using inequality (\ref{o6}), we obtain that
\[
\big|\M\bigl[\varPhi_T\bigl(\xi_T(L)\bigr)-\varPhi_T(x_0)\bigr]\big|\leq C_L \psi \bigl(\lambda(B)\bigr) \]
for some constant $C_L$. The latter inequality and Eq.~(\ref{l2}) prove Lemma \ref{lm1}. \end{proof}
\begin{lem}\label{lm2} Let $\xi_T$ be a solution of Eq.~\eqref{o1} from the class $K\left(G_T \right)$. If, for measurable locally bounded functions $q_T(x)$, condition $(A_3)$ holds, then, for any $L>0$,
\[
\sup_{0\leq t\leq L}\Biggl|\int_{0}^{t} q_T\bigl(\xi_T(s)\bigr)\,ds
\Biggr|\stackrel{\pr}{\to} 0 \]
as $T\to\infty$. \end{lem}
\begin{proof} Consider the function
\[ \varPhi_T(x)=2\int_{0}^{x}f'_T(u)\Biggl(\int_{0}^{u}\frac {q_T(v)}{f'_T(v)}\,dv\Biggr)\,du. \]
The same arguments as used to obtain Eq.~(\ref{l1}) yield that
\begin{equation}\label{l3} \int_{0}^{t}q_T\bigl(\xi_T(s)\bigr)\,ds=\varPhi_T\bigl(\xi_T(t)\bigr)-\varPhi_T(x_0)- \int _{0}^{t}\varPhi'_T\bigl(\xi_T(s)\bigr)\,dW_T(s). \end{equation}
It is clear that, for any constants $\varepsilon>0$, $N>0$, and $L>0$, we have the inequalities
\begin{align*}
&\pr\Big\{\mathop{\sup}\limits_{0\leq t\leq L} \big|\varPhi_T\bigl(\xi _T(t)\bigr)\big|>\varepsilon\Big\}\leq P_{NT}+\frac{2}{\varepsilon}\int _{-N}^{N}f'_T(u)\left|\int_{0}^{u}\frac{q_T(v)}{f'_T(v)}\, dv\right|\,du,\\
&\pr\Biggl\{\mathop{\sup}\limits_{0\leq t\leq L} \left|\int _{0}^{t}\varPhi'_T\bigl(\xi_T(s)\bigr)\,dW_T(s)\right|>\varepsilon\Biggr\}\\
&\quad\leq P_{NT}+\frac{1}{\varepsilon^2}\M\mathop{\sup}\limits_{0\leq t\leq L} \left|\int_{0}^{t}\varPhi'_T\bigl(\xi_T(s)\bigr)\chi_{\{|\xi_T(s)|\leq N\}}\,dW_T(s)\right|^2\\
&\quad\leq P_{NT}+\frac{4}{\varepsilon^2}\M\int_{0}^{L}\bigl[\varPhi'_T\bigl(\xi_T(s)\bigr)\bigr]^2\chi_{\{|\xi_T(s)|\leq N\}}\,ds\\
&\quad\leq P_{NT}+\frac{16}{\varepsilon^2}L\mathop{\sup}\limits_{|x|\leq N}\left[f'_T(x)\left|\int_{0}^{x}\frac{q_T(v)}{f'_T(v)}\,dv\right
|\right]^2, \end{align*}
where $P_{NT}=\pr\{\mathop{\sup}_{0\leq t\leq L} |\xi _{T}(t)|>N\}$.
Therefore, using convergence (\ref{o19}), we obtain
\[
\sup_{0\leq t\leq L}\big|\varPhi_T\bigl(\xi_T(t)\bigr)-\varPhi_T(x_0)\big|\stackrel {\pr}{\to} 0 \]
and
\[
\sup_{0\leq t\leq L}\Biggl|\int_{0}^{t}\varPhi'_T\bigl(\xi_T(s)\bigr)\, dW_T(s)\Biggr|\stackrel{\pr}{\to} 0 \]
as $T\to\infty$. Thus, Eq.~(\ref{l3}) implies the statement of Lemma \ref{lm2}. \end{proof}
\begin{lem}\label{lm3} Let $\xi_T$ be a solution of Eq.~\eqref{o1} from the class $K\left(G_T \right)$, and let $\zeta_T (t)=G_T(\xi_T(t))\stackrel{\pr}{\to}\zeta (t)$ as $T\to\infty$. Then, for any measurable locally bounded function $g(x)$, we have the convergence
\[
\sup_{0\leq t\leq L}\Biggl|\int_{0}^{t} g\bigl(\zeta_T(s)\bigr)\,ds-\int _{0}^{t} g\bigl(\zeta(s)\bigr)\,ds\Biggr|\stackrel{\pr}{\to} 0 \]
as $T\to\infty$ for any constant $L>0$. \end{lem}
\begin{proof}
Let $\varphi_N(x)=1$ for $|x|\leq N$, $\varphi_N(x)=N+1-|x|$ for
$|x|\in\left[N,N+1\right]$, and $\varphi_N(x)=0$ for $|x|> N+1$. Then, for all $T>0$ and $L>0$,
\begin{align*}
&\pr\Biggl\{\mathop{\sup}\limits_{0\leq t\leq L} \left|\int _{0}^{t}\bigl[g\bigl(\zeta_T(s)\bigr)-g\bigl(\zeta_T(s)\bigr)\varphi_N\bigl(\zeta_T(s)\bigr)\bigr]\,ds\right|>0\Biggr\}\leq\pr\Big\{\mathop{\sup}\limits_{0\leq t\leq L} \big|\zeta_{T}(t)\big|>N\Big\},\\
&\pr\Biggl\{\mathop{\sup}\limits_{0\leq t\leq L} \left|\int _{0}^{t}\bigl[g\bigl(\zeta(s)\bigr)-g\bigl(\zeta(s)\bigr)\varphi_N\bigl(\zeta(s)\bigr)\bigr]\, ds\right|>0\Biggr\}\\
&\quad\leq\pr\Big\{\mathop{\sup}\limits_{0\leq t\leq L} \big|\zeta(t)\big|>N\Big\}\leq\overline{\mathop{\lim}\limits_{T \to\infty}}\pr\Big\{\mathop
{\sup}\limits_{0\leq t\leq L} \big|\zeta_{T}(t)\big|>N\Big\}. \end{align*}
According to Theorem \ref{th2}, convergence (\ref{o5}) holds for the process $\zeta_{T}(t)$. So, to complete the proof of Lemma \ref{lm3}, we need to establish that
\begin{equation}\label{l4}
\int_{0}^{L}\big|g\bigl(\zeta_T(s)\bigr)\varphi_N\bigl(\zeta_T(s)\bigr)-g\bigl(\zeta
(s)\bigr)\varphi_N\bigl(\zeta(s)\bigr)\big|\,ds\stackrel{\pr}{\to} 0 \end{equation}
as $T\to\infty$.
First, assume that the function $g(x)$ is continuous. Then
\[ g\bigl(\zeta_T(s)\bigr)\varphi_N\bigl(\zeta_T(s)\bigr)-g\bigl(\zeta(s)\bigr)\varphi_N\bigl(\zeta (s)\bigr)\stackrel{\pr}{\to} 0 \]
as $T\to\infty$ for all $0\leq s\leq L$, and $\left|g(x)\varphi_N(x)\right|\leq C_N$ for all $x$. Thus, by Lebesgue's dominated convergence theorem we have convergence (\ref{l4}). Second, let the function $g(x)$ be measurable and locally bounded. Then, using Luzin's theorem, we conclude that, for any $\delta>0$, there exists a continuous function $g^{\delta}(x)$ that coincides with $g(x)$ for $x\notin B^{\delta}$, where $B^{\delta}\subset\left[-N-1,N+1\right]$ is a set whose Lebesgue measure satisfies $\lambda(B^{\delta})< \delta$. Thus, for every $\delta>0$, convergence (\ref{l4}) holds for the function $g^{\delta}(x)$. Since, for any $\varepsilon>0$,
\begin{align*}
&\pr\left\{\int_{0}^{L}\left|g\bigl(\zeta_T(s)\bigr)\varphi_N\bigl(\zeta _T(s)\bigr)-g^{\delta}\bigl(\zeta_T(s)\bigr)\varphi_N\bigl(\zeta_T(s)\bigr)\right|\, ds>\varepsilon\right\}\\
&\quad\leq\frac{1}{\varepsilon}\M\int_{0}^{L}\left|g\bigl(\zeta _T(s)\bigr)\varphi_N\bigl(\zeta_T(s)\bigr)-g^{\delta}\bigl(\zeta_T(s)\bigr)\varphi_N\bigl(\zeta _T(s)\bigr)\right|\chi_{\{B^{\delta}\}}\bigl(\zeta_T(s)\bigr)\,ds\\ &\quad\leq\frac{C_N}{\varepsilon}\int_{0}^{L}\pr\left\{\zeta_T(s)\in B^{\delta}\right\}\,ds,\\
&\pr\left\{\int_{0}^{L}\left|g\bigl(\zeta(s)\bigr)\varphi_N\bigl(\zeta
(s)\bigr)-g^{\delta}\bigl(\zeta(s)\bigr)\varphi_N\bigl(\zeta(s)\bigr)\right|\,ds>\varepsilon \right\}\\ &\quad\leq\frac{C_N}{\varepsilon}\int_{0}^{L}\pr\left\{\zeta(s)\in B^{\delta}\right\}\,ds\leq\frac{C_N}{\varepsilon}\;\overline{\mathop {\lim}\limits_{T \to\infty}}\;\int_{0}^{L}\pr\left\{\zeta _T(s)\in B^{\delta}\right\}\,ds, \end{align*}
taking into account Lemma \ref{lm1}, we conclude that convergence (\ref{l4}) holds for such a function $g(x)$ as well. \end{proof}
\begin{lem}\label{lm4} Let $\xi_T$ be a solution of Eq.~\eqref{o1} from the class $K\left(G_T \right)$, and let $\zeta_T (t)=G_T(\xi_T(t))\stackrel{\pr}{\to}\zeta (t)$ and $\eta_T(t)=\int_{0}^{t}G_T^{\prime}(\xi_T(s))\, dW_T(s)\stackrel{\pr}{\to}\eta(t)$ as $T\to\infty$. Then, for measurable locally bounded functions $g(x)$, we have the convergence
\[
\sup_{0\leq t\leq L}\left|\int_{0}^{t} g\bigl(\zeta_T(s)\bigr)\,d\eta _T(s)-\int_{0}^{t} g\bigl(\zeta(s)\bigr)\,d\eta(s)\right|\stackrel{\pr}{\to } 0 \]
as $T\to\infty$ for any constant $L>0$. \end{lem}
\begin{proof} Similarly to the proof of Lemma \ref{lm3}, it suffices to obtain an analogue of convergence (\ref{l4}), that is, to get that, for any $N>0$ and $L>0$,
\begin{equation}\label{l5}
\sup_{0\leq t\leq L}\left|\int_{0}^{t}g\bigl(\zeta_T(s)\bigr)\varphi _N\bigl(\zeta_T(s)\bigr)\,d\eta_T(s)-\int_{0}^{t}g\bigl(\zeta(s)\bigr)\varphi_N\bigl(\zeta
(s)\bigr)\,d\eta(s)\right|\stackrel{\pr}{\to} 0 \end{equation}
as $T\to\infty$, where $\varphi_N(x)$ is defined in the proof of Lemma \ref{lm3}. The proof of convergence (\ref{l5}) for a continuous function $g(x)$ is similar to that of the corresponding theorem in \cite {book6}, Chapter~2, \S6. The explicit form of the quadratic characteristic $\langle\eta_T\rangle(t)$ of the martingale $\eta_T (t)$ and condition $(A_1)$ imply the inequality
\[ \int_{0}^{L}\bigl[\varphi_N\bigl(\zeta_T(t)\bigr)\bigr]^2\,d\langle\eta _T\rangle(t)\leq C_N L, \]
which is used for the proof of convergence (\ref{l5}). The extension of such a convergence to the class of measurable locally bounded functions is based on Lemma~\ref{lm1} and is provided similarly to the proof of Lemma \ref{lm3}. \end{proof}
\begin{lem}\label{lm5} Let $\xi_T$ be a solution of Eq.~\eqref{o1} belonging to the class $K\left(G_T \right)$, and let the stochastic process $\left(\zeta_T (t),\eta_T (t)\right)$, with $\zeta_T (t)=G_T(\xi_T(t))$ and $\eta_T(t)=\int_{0}^{t}G_T^{\prime}(\xi_T(s))\,dW_T(s)$, be stochastically equivalent to the process $(\tilde{\zeta}_T (t), \tilde{\eta}_T (t))$. Then the process
\[ \int_{0}^{t}g\bigl(\zeta_T(s)\bigr)\,ds+\int_{0}^{t}q\bigl(\zeta_T(s)\bigr)\, d\eta_T(s), \]
where $g(x)$ and $q(x)$ are measurable locally bounded functions, is stochastically equivalent to the process
\[ \int_{0}^{t}g\bigl(\tilde{\zeta}_T(s)\bigr)\,ds+\int_{0}^{t}q\bigl(\tilde {\zeta}_T(s)\bigr)\,d\tilde{\eta}_T(s). \]
\end{lem}
\begin{proof} The proof is the same as that of Theorem 2 from \cite{paper9}. \end{proof}
\end{document}
\begin{document}
\title{Cocycle Conjugacy of Free Bogoljubov Actions of $\mathbb{R}$}
\author{Joshua Keneda and Dimitri Shlyakhtenko}
\thanks{Research supported by NSF grant DMS-1762360.}
\email{jkeneda@southplainscollege.edu, shlyakht@math.ucla.edu}
\address{Department of Mathematics, UCLA, Los Angeles, CA 90095} \address{Mathematics and Engineering Department, South Plains College, Levelland, TX 79336}
\begin{abstract} We show that Bogoljubov actions of $\mathbb{R}$ on the free group factor $L(\mathbb{F}_{\infty})$ associated to sums of infinite multiplicity trivial and certain mixing representations are cocycle conjugate if and only if the underlying representations are conjugate. \end{abstract}
\maketitle
\section{Introduction}
Recall that two actions $\beta_{t}$, $\gamma_{t}$ of $\mathbb{R}$ on a von Neumann algebra $M$ are said to be \emph{conjugate} if $\beta_{t}=\alpha\circ\gamma_{t}\circ\alpha^{-1}$ for some automorphism $\alpha$ of $M$. The actions are said to be \emph{cocycle conjugate} if there exists a strongly continuous one-parameter family of unitaries $u_{t}\in M$ and an automorphism $\alpha$ of $M$ so that \[ \beta_{t}(x)=\alpha(\textrm{Ad}_{u_{t}}(\gamma_{t}(\alpha^{-1}(x)))),\qquad\forall x\in M; \] in other words, $\beta_{t}$ and $\textrm{Ad}_{u_{t}}\circ\gamma_{t}$ are conjugate. Cocycle conjugacy is clearly a weaker notion of equivalence than conjugacy.
A consequence of cocycle conjugacy is an isomorphism between the crossed product von Neumann algebras $M\rtimes_{\beta}\mathbb{R}$ and $M\rtimes_{\gamma}\mathbb{R}$. This isomorphism takes $M$ to $M$ and sends the unitary $U_{t}\in L(\mathbb{R})\subset M\rtimes_{\beta}\mathbb{R}$ implementing the automorphism $\beta_{t}$ to the unitary $u_{t}V_{t}$, where $V_{t}\in M\rtimes_{\gamma}\mathbb{R}$ is the implementing unitary for $\gamma_{t}$.
An important class of automorphisms of the free group factor $L(\mathbb{F}_{\infty})$ consists of the so-called \emph{free Bogoljubov automorphisms}, which are defined using Voiculescu's free Gaussian functor. As a starting point, we write $L(\mathbb{F}_{\infty})=W^{*}(S_{1},S_{2},\dots)$, where $S_{j}$ form an infinite free semicircular system. The closure in the $L^{2}$ norm of their real-linear span is an infinite-dimensional real Hilbert space. Voiculescu proved that any orthogonal transformation of that real Hilbert space extends to an automorphism of $L(\mathbb{F}_{\infty})$. In particular, any orthogonal representation of $\mathbb{R}$ on an infinite-dimensional real Hilbert space canonically gives an action of $\mathbb{R}$ on $L(\mathbb{F}_{\infty})$.
Motivated by the approach in \cite{Classification}, we prove the following theorem, which states that for a large class of Bogoljubov automorphisms, cocycle conjugacy and conjugacy are equivalent to conjugacy of the underlying representations and thus gives a classification of such automorphisms up to cocycle conjugacy. \begin{thm*} \label{thm:SameSpectrum-1}Let $\pi_{1},\pi_{1}'$ be two mixing orthogonal representations of $\mathbb{R}$, and assume that $\pi_{1}\otimes\pi_{1}\cong\pi_{1}$, $\pi_{1}'\otimes\pi_{1}'\cong\pi_{1}'$. Denote by $\mathbf{1}$ the trivial representation of $\mathbb{R}$. Let \[ \pi=(\mathbf{1}\oplus\pi_{1})^{\oplus\infty},\qquad\pi'=(\mathbf{1}\oplus\pi_{1}')^{\oplus\infty}, \] and let $\alpha$ (resp., $\alpha'$) be the associated free Bogoljubov actions of $\mathbb{R}$ on $L(\mathbb{F}_{\infty})$. Then $\alpha$ and $\alpha'$ are cocycle conjugate iff the representations $\pi^{\oplus\infty}$ and $(\pi')^{\oplus\infty}$ are conjugate, i.e. \[ \pi^{\oplus\infty}=V(\pi')^{\oplus\infty}V^{-1} \] for some orthogonal isomorphism $V$ of the underlying real Hilbert spaces. \end{thm*} It is worth noting that the conjugacy class of a representation of $\mathbb{R}$ on a real Hilbert space is determined by the measure class of its spectral measure (a measure on $\mathbb{R}$ satisfying $\mu(-X)=\mu(X)$ for all Borel sets $X$) and a multiplicity function which is measurable with respect to that class (for the purposes of our Theorem, we may assume that this multiplicity function is identically infinite). Our Theorem then states that, for Bogoljubov actions satisfying the hypothesis of the Theorem, cocycle conjugacy occurs if and only if these measure classes are the same.
\section{Preliminaries on conjugacy of automorphisms}
\subsection{Crossed products.}
If $M$ is a type II$_{1}$ factor and $\alpha_{t}:\mathbb{R}\to\textrm{Aut}(M)$ is a one-parameter group of automorphisms, the \emph{crossed product} $M\rtimes_{\alpha}\mathbb{R}$ is of type II$_{\infty}$ with a canonical trace $Tr$. Furthermore, the crossed product construction produces in a canonical way a distinguished copy of the group algebra $L(\mathbb{R})$ inside the crossed product algebra. We denote this copy by $L_{\alpha}(\mathbb{R})$. The relative commutant $L_{\alpha}(\mathbb{R})'\cap M\rtimes_{\alpha}\mathbb{R}$ is generated by $L_{\alpha}(\mathbb{R})$ and the fixed point algebra $M^{\alpha}$.
\subsection{Conjugacy of actions.}
Recall that if $\beta_{t}$ and $\gamma_{t}$ are cocycle conjugate, each choice of a cocycle conjugacy produces an isomorphism $\Pi_{\gamma,\beta}$ of the crossed product algebras $M\rtimes_{\beta}\mathbb{R}$ and $M\rtimes_{\gamma}\mathbb{R}$. Note that $\Pi_{\gamma,\beta}$ does \emph{not} necessarily map $L_{\beta}(\mathbb{R})$ to $L_{\gamma}(\mathbb{R})$. In fact, as we shall see, this is rarely the case even if we compare the image $\Pi_{\gamma,\beta}(L_{\beta}(\mathbb{R}))$ with $L_{\gamma}(\mathbb{R})$ up to a weaker notion of equivalence, $\prec_{M\rtimes_{\gamma}\mathbb{R}}$, which was introduced by Popa in the framework of his deformation-rigidity theory. Indeed, in parallel to Theorem 3.1 in \cite{Classification}, we show that, very roughly, conjugacy of $\Pi_{\gamma,\beta}(L_{\beta}(\mathbb{R}))$ and $L_{\gamma}(\mathbb{R})$ inside the crossed product is essentially equivalent (up to compressing by projections) to \emph{conjugacy} of the actual actions by an inner automorphism of $M$. \begin{thm} \label{embedding} Let $M$ be a tracial von Neumann algebra with a fixed faithful normal trace $\tau$. Suppose $\alpha,\beta:\mathbb{R}\rightarrow\text{Aut}(M)$ are two trace-preserving actions of $\mathbb{R}$ on $M$ which are cocycle conjugate, and suppose that the only finite-dimensional $\alpha$-invariant subspaces of $L^{2}(M)$ are those on which $\alpha$ acts trivially. Fix a nonzero projection $q\in M^{\beta}$. The following are equivalent:
(a) There exists a nonzero projection $r\in L_{\beta}(\mathbb{R})$ such that \[ \Pi_{\alpha,\beta}(L_{\beta}(\mathbb{R})qr)\prec_{M\rtimes_{\alpha}\mathbb{R}}L_{\alpha}(\mathbb{R}) \]
(b) There exists a nonzero partial isometry $v\in M$ such that $v^{*}v\in qM^{\beta}q$, $vv^{*}\in M^{\alpha}$, and for all $x\in M$, \[ \alpha_{t}(vxv^{*})=v\beta_{t}(x)v^{*}. \] \end{thm}
\begin{proof} To see that (a) implies (b), take $r$ as in (a), so that $\Pi_{\alpha,\beta}(L_{\beta}(\mathbb{R})qr)\prec_{M\rtimes_{\alpha}\mathbb{R}}L_{\alpha}(\mathbb{R})$, and take $w_{t}\in M$ with $\text{Ad }w_{t}\circ\alpha_{t}=\beta_{t}$.
First, we claim that there exists $\delta>0$ for which there exist $x_{1},\dots,x_{k}\in qM$ with \[
\sum_{i,j=1}^{k}|\tau(x_{i}^{*}w_{t}\alpha_{t}(x_{j}))|^{2}\ge\delta \] for all $t$. Suppose for a contradiction that no such $\delta$ exists. Then we can find a net $(t_{i})_{i\in I}$ such that \[ \lim_{i}\tau(x^{*}w_{t_{i}}\alpha_{t_{i}}(y))=0 \] for any $x,y\in qM$.
But then for any $p,p'$ finite trace projections in $L_{\alpha}(\mathbb{R})$, $s,s'\in\mathbb{R}$, and $x,y\in M$, we have (in the 2-norm from the trace on $M\rtimes_{\alpha}\mathbb{R}$): \begin{align*}
\|E_{L_{\alpha}(\mathbb{R})}(p\lambda_{\alpha}(s)^{*}x^{*}\Pi_{\alpha,\beta}(\lambda_{\beta}(t_{i})q)y\lambda_{\alpha}(s')p')\|_{2} & =\|\lambda_{\alpha}(s)^{*}pE_{L_{\alpha}(\mathbb{R})}(x^{*}q\Pi_{\alpha,\beta}(\lambda_{\beta}(t_{i})qy)p'\lambda_{\alpha}(s'))\|_{2}\\
& =\|pE_{L_{\alpha}(\mathbb{R})}((qx)^{*}w_{t_{i}}\alpha_{t_{i}}(qy))p'\lambda_\alpha(s'+t_{i})\|_{2}\\
& =\|E_{L_{\alpha}(\mathbb{R})}((qx)^{*}w_{t_{i}}\alpha_{t_{i}}(qy))pp'\|_{2}\rightarrow0, \end{align*} where the last equality follows from the fact that $(qx)^{*}w_{t_{i}}\alpha_{t_{i}}(qy)\in M$, so \[ E_{L_{\alpha}(\mathbb{R})}((qx)^{*}w_{t_{i}}\alpha_{t_{i}}(qy))=\tau((qx)^{*}w_{t_{i}}\alpha_{t_{i}}(qy)), \] and the latter term goes to zero by supposition for any $x,y\in M$.
Now note that linear combinations of terms of the form $x\lambda_{\alpha}(s)p$ (resp. $y\lambda_{\beta}(s')p'$) as above are dense in $L^{2}(M\rtimes_{\alpha}\mathbb{R},Tr),$ so by approximating $\Pi_{\alpha,\beta}(r)a$, (resp. $\Pi_{\alpha,\beta}(r)b$) with such sums for any $a,b\in M\rtimes_{\alpha}\mathbb{R}$, it follows from the above estimate that \[
\|E_{L_{\alpha}(\mathbb{R})}(a^{*}\Pi_{\alpha,\beta}(\lambda_{\beta}(t_{i})qr)b)\|_{2}\rightarrow0. \] But this contradicts $\Pi_{\alpha,\beta}(L_{\beta}(\mathbb{R})qr)\prec_{M\rtimes_{\alpha}\mathbb{R}}L_{\alpha}(\mathbb{R})$, so the $\delta>0$ of our above claim exists.
We can thus find $\delta>0$ and $x_{1},\dots,x_{k}\in qM$ such that $\sum_{i,j=1}^{k}|\tau(x_{i}^{*}w_{t}\alpha_{t}(x_{j}))|^{2}\ge\delta$ for all $t$.
Let us now consider the space $B(L^{2}(M))$ of bounded operators on $L^{2}(M,\tau)$ and the rank-one orthogonal projection $e_{\tau}$ onto $1\in L^{2}(M,\tau)$. We can identify $(B(L^{2}(M)),e_{\tau})$ with the basic construction $\langle M, e_\tau \rangle$ for $\mathbb{C}\subset M$, so that $e_{\tau}$ is the Jones projection. Let $\hat{\tau}$ be the usual trace on $B(L^{2}(M))$, satisfying $\hat{\tau}(xe_{\tau}y)=\tau(xy)$ for all $x,y\in M$. Finally, for a finite-rank operator $Q=\sum_{i}y_{i}e_{\tau}z_{i}^{*}$ with $y_{i},z_{i}\in M$, let \[ T_{M}(Q)=\sum y_{i}z_{i}^{*}\in M. \] Then $T_{M}$ extends to a normal operator-valued weight from the basic construction to $M$ satisfying $\hat{\tau}=\tau\circ T_{M}$ (i.e. $T_{M}$ is the pull-down map).
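As a quick consistency check on the definitions above (a sketch using only the stated trace identity $\hat{\tau}(xe_{\tau}y)=\tau(xy)$): for a finite-rank operator $Q=\sum_{i}y_{i}e_{\tau}z_{i}^{*}$ with $y_{i},z_{i}\in M$,

```latex
% The restriction of \hat{\tau} to finite-rank operators factors through T_M:
\hat{\tau}(Q)=\sum_{i}\hat{\tau}\bigl(y_{i}e_{\tau}z_{i}^{*}\bigr)
            =\sum_{i}\tau\bigl(y_{i}z_{i}^{*}\bigr)
            =\tau\bigl(T_{M}(Q)\bigr),
```

in agreement with the identity $\hat{\tau}=\tau\circ T_{M}$ satisfied by the pull-down map.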
Consider now the positive element \[ X=\sum_{i=1}^{k}x_{i}e_{\tau}x_{i}^{*}, \] together with the following normal positive linear functional on $\langle M,e_{\tau}\rangle$: \[ \psi(T)=\sum_{i=1}^{k}\hat{\tau}(e_{\tau}x_{i}^{*}Tx_{i}e_{\tau}). \]
Note that $T_{M}(X)=\sum_{i=1}^{k}x_{i}x_{i}^{*}\in M$, so in particular
$\|T_{M}(X)\|<\infty$.
For every $t\in\mathbb{R}$, we have: \begin{align*} \psi(\beta_{t}(X)) & =\sum_{i,j}\hat{\tau}(e_{\tau}x_{i}^{*}w_{t}\alpha_{t}(x_{j})e_{\tau}\alpha_{t}(x_{j})^{*}w_{t}^{*}x_{i}e_{\tau})\\
& =\sum_{i,j}|\tau(x_{i}^{*}w_{t}\alpha_{t}(x_{j}))|^{2}\ge\delta>0. \end{align*}
Now consider $K$, the ultraweak closure of the convex hull of $\{\beta_{t}(X):t\in\mathbb{R}\}$ inside $q\langle M,e_{\tau}\rangle q$. Note that by normality of $\psi$, $\psi(x)\ge\delta$ for any $x\in K$.
Since $K$ is convex and $\|\cdot\|_{2}$-closed, there exists a unique
$X_{0}\in K$ of minimal 2-norm. But since the 2-norm is invariant under $\beta$, we must have that $\|\beta_{t}(X_{0})\|_{2}=\|X_{0}\|_{2}$ for all $t$, so by uniqueness of the minimizer, $X_{0}$ is itself fixed by the extended $\beta$ action (and nonzero since $\psi(X_{0})\ge\delta$). Also, by ultraweak lower semicontinuity of $T_{M}$, we know that
$\|T_{M}(X_{0})\|\le\|T_{M}(X)\|<\infty$.
Take a nonzero spectral projection $e$ of $X_{0}$. Then $e$ is still $\beta$-invariant and satisfies $\|T_{M}(e)\|<\infty$. But this means that $\hat{\tau}(e)=\tau(T_{M}(e))<\infty$, so $e$ must be a finite rank projection, since $\hat{\tau}$ corresponds to the usual trace $Tr$ on the trace-class operators in $B(L^{2}(M,\tau))$.
Now since $e_{\tau}$ has central support 1 in $\langle M,e_{\tau}\rangle$ (and because $e_{\tau}$ is minimal), we have that there exists $V$ a partial isometry in $\langle M,e_{\tau}\rangle$ such that $V^{*}V=f\le e$ and $VV^{*}=e_{\tau}$. We remark that $f$ remains $\beta$-invariant, since $e$ was finite rank, and our finite-dimensional invariant subspaces are all fixed by the action. Note also that $e\le q$ (since $X_{0}\in q\langle M,e_{\tau}\rangle q$), so that $V=Vq=e_{\tau}V$.
Applying the pull-down lemma, we see that: \[ V=e_{\tau}V=e_{\tau}(T_{M}(e_{\tau}V))=e_{\tau}T_{M}(V). \]
Set $v=T_{M}(V)$ and note that because $\|T_{M}(V^{*}V)\| \le \|T_{M}(e)\|<\infty$, we have $v\in M$, and $V=e_{\tau}v$.
Since $e_{\tau}\langle M,e_{\tau}\rangle e_{\tau}=\mathbb{C}e_{\tau}$, and since $V$ is left-supported by $e_{\tau}$, we have that for each $t$ there exists a $\lambda_{t}\in\mathbb{C}$ such that $\lambda_{t}e_{\tau}=Vw_{t}{\alpha_{t}}(V^{*})$. Note that since $Vw_{t}{\alpha_{t}}(V^{*})(Vw_{t}{\alpha_{t}}(V^{*}))^{*}=Vw_{t}{\alpha_{t}}(V^{*}V)w_{t}^{*}V^{*}=V{\beta_{t}}(e)V^{*}=VV^{*}=e_{\tau}$, the last equality of the previous sentence implies that $\lambda_{t}\overline{\lambda_{t}}=1$. We also have: \begin{align*} e_{\tau}\lambda_{t}\alpha_{t}(v) & =\lambda_{t}e_{\tau}\alpha_t(v)=\lambda_{t}e_{\tau}{\alpha_t}(V)\\
& =Vw_{t}{\alpha_{t}}(V^{*}V)=V{\beta_{t}}(e)w_{t}=Vw_{t}\\
& =e_{\tau}vw_{t}. \end{align*}
Thus, applying the pull-down map, we have that $\lambda_{t}\alpha_{t}(v)=vw_{t}$, and, replacing $v$ by its polar part if necessary, we've found a partial isometry in $M$, conjugation by which intertwines the actions. We have for any $x\in M$: \begin{align*} \alpha_{t}(vxv^{*})=\alpha_{t}(v)\alpha_{t}(x)\alpha_{t}(v^{*})=\overline{\lambda_{t}}vw_{t}\alpha_{t}(x)w_{t}^{*}v^{*}\lambda_{t}=v\beta_{t}(x)v^{*}. \end{align*}
Furthermore, with some applications of $\alpha_{t}(v)=\overline{\lambda_{t}}vw_{t}$, we see that \[ \beta_{t}(v^{*}v)=w_{t}\alpha_{t}(v^{*}v)w_{t}^{*}=w_{t}(w_{t}^{*}v^{*}\lambda_{t})(\overline{\lambda_{t}}vw_{t})w_{t}^{*}=v^{*}v, \] and \[ \alpha_{t}(vv^{*})=(\overline{\lambda_{t}}vw_{t})(w_{t}^{*}v^{*}\lambda_{t})=vv^{*}, \] so we've found the promised intertwiner.
Conversely, assume that we have $v\in M$ satisfying $v^{*}v\in qM^{\beta}q$, $vv^{*}\in M^{\alpha}$, and $\alpha_{t}(vxv^{*})=v\beta_{t}(x)v^{*}$ for all $x\in M$. Take $w_{t}\in M$ with $\text{Ad }w_{t}\circ\alpha_{t}=\beta_{t}$. Then, as above, we have $vw_{t}=\lambda_{t}\alpha_{t}(v)$, for some $\lambda_{t}\in \mathbb{T}$. Multiplying both sides by $\overline{\lambda_{t}}$ and absorbing this factor into $w_{t}$, we may assume without loss of generality that $\lambda_{t}=1$ for all $t$, so we have $vw_{t}=\alpha_{t}(v)$.
Now let $\lambda_{t}^{\alpha}$ (resp., $\lambda_{t}^{\beta}$) denote the canonical unitaries that implement the respective actions on $M$ in the crossed product $M\rtimes_{\alpha}\mathbb{R}$ (resp., $M\rtimes_{\beta}\mathbb{R}$). Then the relation $vw_{t}=\alpha_{t}(v)$ implies $v\Pi_{\alpha,\beta}(\lambda_{t}^{\beta})=\lambda_{t}^{\alpha}v$. Furthermore, for any finite trace projection $r\in L_{\beta}(\mathbb{R})$, we have $v\Pi_{\alpha,\beta}(qr)=vq\Pi_{\alpha,\beta}(r)=v\Pi_{\alpha,\beta}(r)\neq0$, so $v^{*}$ is a partial isometry that witnesses $\Pi_{\alpha,\beta}(L_{\beta}(\mathbb{R})qr)\prec_{M\rtimes_{\alpha}\mathbb{R}}L_{\alpha}(\mathbb{R})$ (e.g. see condition (4) of Theorem F.12 in \cite{BO}). Thus, (b) implies (a). \end{proof}
\section{Cocycle conjugacy of Bogoljubov Automorphisms}
\subsection{Free Bogoljubov automorphisms.}
Let $\pi$ be an orthogonal representation of $\mathbb{R}$ on a real Hilbert space $H_{\mathbb{R}}$, and write $H=H_{\mathbb{R}}\otimes_{\mathbb{R}}\mathbb{C}$ for its complexification. Recall that Voiculescu's free Gaussian functor associates to $H_{\mathbb{R}}$ a von Neumann algebra \[ \Phi(H_{\mathbb{R}})=\{s(h):h\in H_{\mathbb{R}}\}''\cong L(\mathbb{F}_{\dim H_{\mathbb{R}}}) \] where $s(h)=\ell(h)+\ell(h)^{*}$ and $\ell(h)$ is the creation operator $\xi\mapsto h\otimes\xi$ acting on the full Fock space $\mathscr{F}(H)=\bigoplus_{n\geq0}H^{\otimes n}$. Denoting by $\Omega$ the unit basis vector of $H^{\otimes0}=\mathbb{C}$, it is well-known that the vector-state \[ \tau(\cdot)=\langle\Omega,\cdot\Omega\rangle \] defines a faithful trace-state on $\Phi(H_{\mathbb{R}})$. Furthermore, $\mathbb{R}$ acts on $\mathscr{F}(H)$ by the unitary transformations $U_{t}=\bigoplus_{n\geq0}(\pi(t)\otimes1)^{\otimes n}$, and conjugation by $U_{t}$ leaves $\Phi(H_{\mathbb{R}})$ globally invariant, thus defining a strongly continuous one-parameter group of \emph{free Bogoljubov automorphisms} \[ t\mapsto\alpha_{t}\in\textrm{Aut}(\Phi(H_{\mathbb{R}})). \]
Note that if $\pi$ is such that $\pi\otimes\pi$ and $\pi$ are conjugate (as representations of $\mathbb{R}$), then the representation $U_{t}$ is conjugate to $\mathbf{1}\oplus\pi^{\oplus\infty}$.
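This can be seen directly (a sketch, writing $\pi$ also for the complexification of $\pi$): since $\pi\otimes\pi\cong\pi$, induction gives $\pi^{\otimes n}\cong\pi$ for every $n\geq1$, so \[ U=\bigoplus_{n\geq0}\pi^{\otimes n}\cong\mathbf{1}\oplus\bigoplus_{n\geq1}\pi=\mathbf{1}\oplus\pi^{\oplus\infty}. \]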
A complete invariant for the orthogonal representation $\pi$ consists of the absolute continuity class $\mathscr{C}_{\pi}$ of a measure $\mu$ on $\mathbb{R}$ satisfying the symmetry condition $\mu(B)=\mu(-B)$ for all $\mu$-measurable sets $B$, together with a $\mu$-measurable multiplicity function $n:\mathbb{R}\to\mathbb{N}\cup\{+\infty\}$ satisfying $n(x)=n(-x)$ for $\mu$-almost every $x$. In particular, assuming that $\pi\cong\pi\otimes\pi$ (i.e., that for some (hence any) probability measure $\mu\in\mathscr{C}_{\pi}$ that generates $\mathscr{C}_{\pi}$, $\mu*\mu$ and $\mu$ are mutually absolutely continuous), the measure class $\mathscr{C}_{\pi}$ is an invariant of $\alpha$ up to conjugacy (since it can be recovered from $U_{t}$, the unitary representation induced by $\alpha_{t}$ on $L^{2}(\Phi(H_{\mathbb{R}}))$).
Recall that the representation $\pi$ is said to be \emph{mixing}
if for all $\xi,\eta\in H_{\mathbb{R}}$, $\lim_{|t|\to\infty}\langle\xi,\pi(t)\eta\rangle=0$. This is equivalent to saying that for some (hence any) probability measure $\mu\in\mathscr{C}_{\pi}$ that generates $\mathscr{C}_{\pi}$, the Fourier transform satisfies $\hat{\mu}(t)\to0$ as $t\to\pm\infty$.
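For example, if $\mu$ is absolutely continuous with respect to Lebesgue measure, say $d\mu=f\,dx$ with $f\in L^{1}(\mathbb{R})$, then the Riemann--Lebesgue lemma gives \[ \hat{\mu}(t)=\int e^{itx}f(x)\,dx\to0\quad\text{as }|t|\to\infty, \] so every orthogonal representation whose measure class is dominated by the Lebesgue class is mixing.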
\subsection{Operator-valued semicircular systems.\label{subsec:OpValuSemSys}}
The crossed product $\Phi(H_{\mathbb{R}})\rtimes_{\alpha}\mathbb{R}$ has a description in terms of so-called operator-valued semicircular systems (see \cite[Examples 2.8, 5.2]{A-valued}). Decompose $\pi=\bigoplus_{i\in I}\pi_{i}$ into cyclic representations $\pi_{i}$ with cyclic vectors $\xi_{i}$. Let $\mu_{i}$ be the measure with Fourier transform $t\mapsto\langle\xi_{i},\pi_{i}(t)\xi_{i}\rangle$, and denote by $\eta_{i}:L^{\infty}(\mathbb{R})\to L^{\infty}(\mathbb{R})$ the completely positive map given by \[ \eta_{i}(f)(x)=\int f(y)d\mu_{i}(x-y). \] Then \cite[Proposition 2.18]{A-valued} shows that $\Phi(H_{\mathbb{R}})\rtimes_{\alpha}\mathbb{R}=W^{*}(L(\mathbb{R}),S_{i}:i\in I)$ where $S_{i}$ are free with amalgamation over $L(\mathbb{R})\cong L^{\infty}(\mathbb{R})$ and each $S_{i}$ is an $L^{\infty}(\mathbb{R})$-valued semicircular system with covariance $\eta_{i}$.
Operator-valued semicircular variables can be associated to any normal self-adjoint completely positive map on $L(\mathbb{R})\cong L^{\infty}(\mathbb{R})$. In particular, given any measure $K$ on $\mathbb{R}^{2}$ satisfying $\pi_{x}K,\pi_{y}K\prec\textrm{Lebesgue measure}$ and $dK(x,y)=dK(y,x)$ (here $\pi_{x},\pi_{y}$ are projections on the two coordinate axes), we can construct an $L^{\infty}(\mathbb{R})$-valued semicircular variable $S=S_{K}$ with covariance \[ \eta(f)(x)=\int f(y)dK(x,y). \] If $K'$ is absolutely continuous with respect to $K$, then $W^{*}(L^{\infty}(\mathbb{R}),S_{K'})\subset W^{*}(L^{\infty}(\mathbb{R}),S_{K})$; if $K=\sum K_{j}$ with $K_{j}$ disjoint, then $S_{K_{j}}$ are free with amalgamation over $L^{\infty}(\mathbb{R}).$
The algebra $W^{*}(L^{\infty}(\mathbb{R}),S_{K})$ is denoted $\Phi(L^{\infty}(\mathbb{R}),\eta)$. For our choices of $\eta$ it is semifinite and $L^{\infty}(\mathbb{R})$ is in the range of a conditional expectation.
If $I\subset\mathbb{R}$ is a finite interval and $K$ is a measure on $I^{2}$ satisfying $\pi_{x}K,\pi_{y}K\prec\textrm{Lebesgue measure}$ and $dK(x,y)=dK(y,x)$, then one can in a similar way associate a completely positive map to $K$ and consider an $L^{\infty}(I)$-semicircular variable $S_{K}$. This time, composition of the conditional expectation onto $L^{\infty}(I)$ with integration with respect to (rescaled) Lebesgue measure on $L^{\infty}(I)$ gives rise to a normal faithful trace on the algebra $\Phi(L^{\infty}(I),\eta)=W^{*}(L^{\infty}(I),S_{K})$.
We call measures $K$ (on $\mathbb{R}^{2}$ or on the square of some finite interval $I$) satisfying the conditions $\pi_{x}K,\pi_{y}K\prec\textrm{Lebesgue measure}$ and $dK(x,y)=dK(y,x)$ \emph{kernel measures.}
\subsection{Solidity of certain algebras generated by operator-valued semicircular systems.}
Let $\eta$ be a normal self-adjoint completely positive map defined on the von Neumann algebra $A = L^{\infty}(\mathbb{T})$ (with Haar measure on $\mathbb{T}$). By \cite[Corollary 4.2]{Thin}, if the $A,A$-bimodule associated to $\eta$ is mixing (see Def. 2.2 of that paper for a definition), then $\Phi(A,\eta)$ is strongly solid, and in particular, solid: the relative commutant of any diffuse abelian von Neumann subalgebra of $\Phi(A,\eta)$ is amenable. As noted in \cite[Proposition 7.3.4]{Thin} and its surrounding remarks, if $\mu$ is a measure on $\mathbb{T}$ so that its Fourier transform $\hat{\mu}$ satisfies $\lim_{n\to\pm\infty}\hat{\mu}(n)=0$ (i.e., $\mu$ is a measure associated to a mixing representation), then the bimodule associated to the completely positive map $\eta:f\mapsto f*\mu$ is mixing. We record the following lemma, whose proof is straightforward from the definition of mixing bimodules: \begin{lem} Suppose that $H,H'$ are mixing $A,A$-bimodules, $p_{0}\in A$ is a nonzero projection, and $K\subset H$ is an $A,A$-submodule. Then:
(i) $H\oplus H'$ is mixing; (ii) $K$ is mixing; (iii) $p_{0}Hp_{0}$ is mixing as a $p_{0}A,p_{0}A$-bimodule. \end{lem}
We now make use of this lemma. \begin{lem} \label{lem:ACTrick}Let $(K_{j}:j\in J)$ be a family of kernel measures on $\mathbb{R}^{2}$ and let $\eta_{j}$ be the associated completely positive maps on $L^{\infty}(\mathbb{R})$.
Assume that each $K_{j}$ can be written as a sum of measures $K_{j}=\sum_{i\in S(j)}K_{j}^{(i)}$ with $K_{j}^{(i)}$ disjoint, and so that $K_{j}^{(i)}$ is supported on the square $I_{j}^{(i)}\times I_{j}^{(i)}$ for a finite interval $I_{j}^{(i)}$. Finally, suppose that there exist measures $\hat{K}_{j}^{(i)}$ on $I_{j}^{(i)}\times I_{j}^{(i)}$, so that $K_{j}^{(i)}$ is absolutely continuous with respect to $\hat{K}_{j}^{(i)}$ and so that the associated completely positive map \[ \hat{\eta}_{j}^{(i)}:f\mapsto(x\mapsto\int f(y)d\hat{K}_{j}^{(i)}(x,y)) \] defines a mixing $L^{\infty}(I_{j}^{(i)})$-bimodule.
Let $X_{j}$ be $\eta_{j}$-semicircular variables over $L^{\infty}(\mathbb{R})$, and assume that $X_{j}$ are free with amalgamation over $L^{\infty}(\mathbb{R})$. Then the semifinite von Neumann algebra $M=W^{*}(L^{\infty}(\mathbb{R}),X_{j}:j\in J)$ is solid, in the sense that if $A\subset M$ is any diffuse abelian von Neumann subalgebra generated by its finite projections, then $A'\cap M$ is amenable. \end{lem}
\begin{proof} Denote by $\hat{X}_{j}^{(i)}$ the $\hat{\eta}_{j}^{(i)}$-semicircular family, and assume that $\hat{X}_{j}^{(i)}$ are free with amalgamation over $L^{\infty}(\mathbb{R})$. Let $\hat{M}=W^{*}(L^{\infty}(\mathbb{R}),\{\hat{X}_{j}^{(i)}:j\in J,i\in S(j)\})$. Since $K_{j}=\sum_{i\in S(j)}K_{j}^{(i)}$ is a disjoint sum and $K_{j}^{(i)}$ is absolutely continuous with respect to $\hat{K}_{j}^{(i)}$, we conclude that $M\subset\hat{M}$ and moreover $M$ is in the range of a conditional expectation from $\hat{M}$. Thus it suffices to prove that $\hat{M}$ is solid.
By freeness with amalgamation, we know that $\hat{M}$ is the amalgamated free product of the algebras $\hat{M}_{j}^{(i)}=W^{*}(L^{\infty}(\mathbb{R}),\hat{X}_{j}^{(i)})$. Thus by \cite[Theorem 4.4]{Rigidity}, if $B\subset\hat{M}$ is a diffuse abelian algebra generated by its finite projections and $B'\cap\hat{M}$ is non-amenable, then $B\prec_{\hat{M}}\hat{M}_{j}^{(i)}$ for some $j\in J$ and $i\in S(j)$ and moreover it follows that $\hat{M}_{j}^{(i)}$ is not solid. But \[ \hat{M}_{j}^{(i)}\cong L^{\infty}(\mathbb{R}\setminus I_{j}^{(i)})\oplus W^{*}(L^{\infty}(I_{j}^{(i)}),\hat{X}_{j}^{(i)}) \] and the (finite) von Neumann algebra $W^{*}(L^{\infty}(I_{j}^{(i)}),\hat{X}_{j}^{(i)})$ is solid by \cite[Corollary 4.2]{Thin}, which is a contradiction. \end{proof} \begin{cor} \label{cor:Solid}Suppose that $\pi$ is a mixing orthogonal representation of $\mathbb{R}$ on a real Hilbert space $H_{\mathbb{R}}$, and let $\alpha$ be the free Bogoljubov action on $\Phi(H_{\mathbb{R}})$ associated to $\pi$. Then the semi-finite von Neumann algebra $M=\Phi(H_{\mathbb{R}})\rtimes_{\alpha}\mathbb{R}$ is solid: if $B\subset M$ is a diffuse abelian subalgebra generated by its finite projections, then $B'\cap M$ is amenable. \end{cor}
\begin{proof} Our goal is to apply Lemma \ref{lem:ACTrick}. Fix any decomposition of $\pi$ into cyclic representations $(\pi_{j}:j\in J)$ with associated cyclic vectors $\xi_{j}$ in such a way that the spectrum of $\pi_{j}(t)$ is contained in the set $\exp(iI_{j}t)$ for a finite subinterval $I_{j}\subset\mathbb{R}$. Let us fix integers $n_{j}$ so that $I_{j}\subset[-n_{j},n_{j}]$. Selecting a possibly different set of cyclic vectors and subrepresentations $\pi_j(t)$, we may assume that $\pi(t)=\bigoplus_j \pi_j(t)$, and that the spectrum of the infinitesimal generator of $\pi_j$ is contained in $I_j$. Denote by $\mu_{j}$ the measures with Fourier transform \[ \hat{\mu}_{j}(t)=\langle\xi_{j},\pi_{j}(t)\xi_{j}\rangle. \] Since $\pi$ is mixing, $\lim_{t\to\pm\infty}\hat{\mu}_{j}(t)=0$. Moreover, by construction, the support of $\mu_{j}$ is contained in $[-n_{j},n_{j}]$.
Let $\eta_{j}:L^{\infty}(\mathbb{R})\to L^{\infty}(\mathbb{R})$ be the completely positive map given by convolution with $\mu_{j}$. Then $\eta_{j}$ has an associated kernel measure $K_{j}$ given by $dK_{j}(x,y)=d\mu_{j}(x-y)\,dx$.
Let $K_{j}^{(i)}$ denote the restriction of $K_{j}$ to the region $[-4n_{j}+i,4n_{j}+i]\times[-4n_{j}+i,4n_{j}+i] \setminus [-4n_j + i-1, 4n_j + i-1] \times [-4n_j + i-1, 4n_j + i -1 ] $, $i\in\mathbb{Z}$, and let $\hat{K}_{j}^{(i)}$ be the restriction of $K_j$ to $[-4n_{j}+i,4n_{j}+i]\times[-4n_{j}+i,4n_{j}+i]$. If we identify $[-4n_{j}+i,4n_{j}+i]$ with the circle, then the completely positive map associated to $\hat{K}_{j}^{(i)}$ is given by convolution with the measure $\mu'_{j}$ whose Fourier transform is given by $k\mapsto\hat{\mu}_{j}(k/8n_{j})$; since $\lim_{t\to\pm\infty}\hat{\mu}_{j}(t)=0$, it follows that $\lim_{k\to\pm\infty}\hat{\mu}'_{j}(k)=0$, so that the hypothesis of Lemma \ref{lem:ACTrick} is satisfied. \end{proof}
\subsection{Cocycle conjugacy.}
We are now ready to prove the main result of this paper: \begin{thm} \label{thm:SameSpectrum}Let $\pi_{1},\pi_{1}'$ be two mixing orthogonal representations of $\mathbb{R}$, and assume that $\pi_{1}\otimes\pi_{1}\cong\pi_{1}$, $\pi_{1}'\otimes\pi_{1}'\cong\pi_{1}'$. Denote by $\mathbf{1}$ the trivial representation of $\mathbb{R}$. Let \[ \pi=(\mathbf{1}\oplus\pi_{1})^{\oplus\infty},\qquad\pi'=(\mathbf{1}\oplus\pi_{1}')^{\oplus\infty}, \] and let $\alpha$ (resp., $\alpha'$) be the corresponding free Bogoljubov actions of $\mathbb{R}$ on $L(\mathbb{F}_{\infty})$. Then $\alpha$ and $\alpha'$ are cocycle conjugate iff $\mathscr{C}_{\pi_{1}}=\mathscr{C}_{\pi_{1}'}$. \end{thm}
\begin{proof} If $\mathscr{C}_{\pi_{1}}=\mathscr{C}_{\pi_{1}'}$, then $\pi$ and $\pi'$ are conjugate representations and the associated Bogoljubov actions are conjugate; thus it is the opposite direction that needs to be proved. The proof will be broken into several steps.
If $H_{\pi}$ is the representation space of $\pi$, then $H_{\pi}\cong H_{0}\oplus H_{1}$ corresponding to the decomposition $\pi=\mathbf{1}^{\oplus\infty}\oplus\pi_{1}^{\oplus\infty}$. Let $N=\Phi(H_{\pi})\cong L(\mathbb{F}_{\infty})$. Then $N$ decomposes as a free product $N\cong\Phi(H_{0})*\Phi(H_{1})$; moreover, the free Bogoljubov action is also a free product $\alpha=\mathbf{1}*\alpha_{1}$. Note that the subalgebra $L(\mathbb{F}_{\infty})\cong\Phi(H_{0})\subset N$ is fixed by the action $\alpha$. In particular, the crossed product $M=N\rtimes_{\alpha}\mathbb{R}$ decomposes as a free product: \[ M=N\rtimes_{\alpha}\mathbb{R}\cong(\Phi(H_{0})\otimes L(\mathbb{R}))*_{L(\mathbb{R})}(\Phi(H_{1})\rtimes_{\alpha_{1}}\mathbb{R}). \]
Let us assume that $\alpha$ and $\alpha'$ are cocycle conjugate. Set $A=L_{\alpha}(\mathbb{R})\subset M$. Then $N\rtimes_{\alpha}\mathbb{R}\cong N\rtimes_{\alpha'}\mathbb{R}$ and thus (up to this identification, which we fix once and for all) also $L_{\alpha'}(\mathbb{R})\subset M$. Thus $A'\cap M\supset\Phi(H_{0})\cong L(\mathbb{F}_{\infty})$, so that $A'\cap M$ is non-amenable. Exactly the same argument implies that $L_{\alpha'}(\mathbb{R})'\cap M$ is non-amenable.
By \cite[Theorem 4.4]{Rigidity}, it follows from the amalgamated free product decomposition of $M$ that $L_{\alpha'}(\mathbb{R})\prec_{M}\Phi(H_{0})\otimes L(\mathbb{R})$ or $L_{\alpha'}(\mathbb{R})\prec_{M}\Phi(H_{1})\rtimes_{\alpha_1}\mathbb{R}$. But the latter is impossible by Corollary \ref{cor:Solid}, since $\alpha_{1}$ comes from a mixing representation $\pi_{1}$. Thus it must be that $L_{\alpha'}(\mathbb{R})\prec_{M}\Phi(H_{0})\otimes L(\mathbb{R})\cong L(\mathbb{F}_{\infty})\otimes L(\mathbb{R})$ and thus $L_{\alpha'}(\mathbb{R})\prec_{M}L_{\alpha}(\mathbb{R})$.
By Theorem \ref{embedding}, $L_{\alpha'}(\mathbb{R})\prec_{M}L_{\alpha}(\mathbb{R})$ implies that there exists a nonzero partial isometry $v\in N$ such that $v^{*}v\in N^{\alpha'}$, $vv^{*}\in N^{\alpha}$, and for all $x\in N$, $\alpha_{t}(vxv^{*})=v\alpha'_{t}(x)v^{*}.$
Let $p=vv^{*}\in N^\alpha$, and denote by $\hat{\alpha}$ the restriction of $\alpha$ to $pNp$.
Let $H=H_{0}\oplus H_{1}$ be as above. By replacing $p\in\Phi(H_{0})$ with a subprojection and modifying $v$, we may assume that $\tau(p)=1/n$ for some $n$. Then we can find partial isometries $v_{i}\in\Phi(H_{0})$, $i \in \{1,...,n\}$, such that $v_{i}v_{i}^{*}=p$ for all $i$ and $\sum_{i}v_{i}^{*}v_{i}=1.$
Let $\{s(h):h\in H_{1}\}$ be a semicircular family of generators for $\Phi(H_{1})$. Then $N$ is generated by $\Phi(H_{0})\cup\{s(h):h\in H_{1}\}$, so that $pNp$ is generated by $p\Phi(H_{0})p$ and $\{v_{i}s(h)v_{j}^{*}:1\leq i,j\leq n,h\in H_{1}\}$ \cite[Lemma 5.2.1]{FRV}.
For $i,j\in \{1,...,n\}$ and $h\in H_{1}$, denote $S_{ij}(h)=\operatorname{Re}(n^{1/2}v_{i}s(h)v_{j}^{*})$,
$S'_{ij}(h)=\operatorname{Im}(n^{1/2}v_{i}s(h)v_{j}^{*})$. The normalization is chosen so that in the compressed $W^{*}$-probability space $(pNp,n\tau|_{pNp})$ these elements form a semicircular family \cite[Prop. 5.1.7]{FRV}. So, all together, $pNp$ is generated $*$-freely by $p\Phi(H_{0})p$ and the semicircular family $\{S_{ij}(h):h\in H_{1},1\leq i,j\leq n\}\cup\{S'_{ij}(h):h\in H_{1},1\leq i<j\leq n\}$. The action of the restriction $\hat{\alpha}_{t}$ of $\alpha_{t}$ to $pNp$ is given, on these generators, as follows: $\hat{\alpha}_{t}(x)=x$
for $x\in p\Phi(H_{0})p$; $\hat{\alpha}_{t}(S_{ij}(h))=S_{ij}(\pi_t|_{H_{1}}(h))$,
$\hat{\alpha}_{t}(S'_{ij}(h))=S_{ij}'(\pi_t|_{H_{1}}(h))$. From this we see that $\hat{\alpha}_{t}$ is once again a Bogoljubov automorphism but corresponding to the representation $\mathbf{1}^{\oplus\infty}\oplus(\pi_{1}^{\oplus\infty})^{\oplus n^{2}}\cong\pi$. Since by assumption $\pi_{1}\cong\pi_{1}\otimes\pi_{1}$, also $\pi\cong\pi\otimes\pi$, and so conjugacy of $\hat{\alpha}_{t}$ and $\hat{\alpha}'_{t}$ (the analogous restriction of $\alpha'_{t}$) implies equality of the measure classes $\mathscr{C}_{\pi}$ and $\mathscr{C}_{\pi'}$, and thus of $\mathscr{C}_{\pi_{1}}$ and $\mathscr{C}_{\pi_{1}'}$. \end{proof}
\begin{bibdiv} \begin{biblist}
\bib{Classification}{article}{
title={Classification of a Family of Non-Almost Periodic Free Araki-Woods Factors},
author={Houdayer, Cyril},
author={Shlyakhtenko, Dimitri},
author={Vaes, Stefaan},
journal={Journal of the European Mathematical Society},
year={to appear},
eprint={arXiv:1605.06057 [math.OA]}
}
\bib{BO}{book}{
title={C*-algebras and Finite-dimensional Approximations},
author={Brown, Nathanial Patrick},
author = {Ozawa, Narutaka},
volume={88},
year={2008},
publisher={American Mathematical Society} }
\bib{A-valued}{article}{
title={A-valued Semicircular Systems},
author={Shlyakhtenko, Dimitri},
journal={Journal of Functional Analysis},
volume={166},
number={1},
pages={1--47},
year={1999},
publisher={Academic Press} }
\bib{Thin}{article}{
title={Thin {II$_1$} factors with no Cartan subalgebras},
author={Krogager, Anna Sofie},
author={Vaes, Stefaan},
journal={Kyoto Journal of Mathematics},
volume={59},
number={4},
pages={815--867},
year={2019},
eprint={arXiv:1611.02138 [math.OA]} }
\bib{Rigidity}{article}{
title={Rigidity of Free Product von Neumann Algebras},
author={Houdayer, Cyril},
author={Ueda, Yoshimichi},
journal={Compositio Mathematica},
volume={152},
number={12},
pages={2461--2492},
year={2016},
publisher={London Mathematical Society},
eprint={arXiv:1507.02157 [math.OA]} }
\bib{FRV}{book}{
title={Free Random Variables},
author={Voiculescu, Dan-Virgil},
author={Dykema, Ken},
author={Nica, Alexandru},
number={1},
year={1992},
publisher={American Mathematical Society} }
\end{biblist} \end{bibdiv}
\end{document} |
\begin{document}
\title{Filtering the Tau method with Frobenius-Pad\'e Approximants\footnote{This work was partially supported by CMUP(UID/MAT/00144/2013), which is funded by FCT (Portugal) with national and European structural funds (FEDER), under the partnership agreement PT2020}}
\author{Jo\~ao Carrilho de Matos\footnote{jem@isep.ipp.pt, Instituto Superior de Engenharia do Porto, Rua Dr. Ant\'onio Bernardino de Almeida, 431, 4249-015 Porto, Portugal}, Jos\'e M. A. Matos\footnote{Instituto Superior de Engenharia do Porto and Centro de Matem\'atica da Universidade do Porto}, Maria Jo\~ao Rodrigues\footnote{Faculdade de Ci\^encias da Universidade do Porto and Centro de Matem\'atica da Universidade do Porto}}
\maketitle
\begin{abstract} In this work, we use rational approximation to improve the accuracy of spectral solutions of differential equations. In the vicinity of singularities of the solution, spectral methods may lose their expected spectral rate of convergence, and may even fail to converge at all. We describe a Pad\'e-approximation-based method to improve the accuracy of Tau method solutions of ordinary differential equations. This process is suitable for building rational approximations to solutions of differential problems whose exact solutions have singularities close to the domain. \end{abstract}
\textbf{keywords:} Tau method, Pad\'e approximation, Froissart doublets.
\section{Introduction} It is well known that spectral methods are very efficient for solving differential equations whose solutions are smooth and free of singularities close to the interval of orthogonality; in that case they exhibit an exponential rate of convergence \cite{Canuto}. However, when the solution of a differential problem has singularities close to or on the interval of orthogonality, spectral methods usually lose this efficiency: singularities near the interval slow the convergence down, and singularities on the interval reduce spectral methods to an algebraic rate of convergence. There are several methods to improve the approximation given by the spectral solution, e.g. extrapolation methods \cite{Brez,sidi2003}, filtering functions \cite{Canuto}, changes of variables, or domain decomposition \cite{Peyret}. These post-processing methods are frequently called filtering processes of a spectral solution.
In this paper we propose filtering a Tau solution of a differential problem with a slow rate of convergence, using Chebyshev-Pad\'e or Legendre-Pad\'e approximants. This choice is motivated by theoretical results on meromorphic functions \cite{Suetin82} and Markov-type functions \cite{Gonchar92}, concerning convergence acceleration and the extension of the domain of convergence beyond that of the partial orthogonal series. Moreover, it is possible to extract more information from Pad\'e approximants: their poles can be used to locate singularities of the underlying function \cite{buslaev06,buslaev09}. Here we have to be aware of the fact that all the theoretical results mentioned above concern Pad\'e approximants computed from the coefficients of a formal orthogonal expansion, while in this filtering process we use the Tau coefficients, which do not coincide with the orthogonal expansion coefficients. The Tau coefficients are affected by the errors inherent to the Tau method and by errors caused by the use of finite arithmetic. We will give special relevance to the numerical errors, since they give rise to Pad\'e approximants with Froissart doublets, which destroy the structure of the numerical Pad\'e table.
The purpose of this paper is to present some numerical results of the application of this filtering process to several cases. In sections 2 and 3 we give the notation and algorithms concerning the Tau method and Pad\'e approximation, respectively. In section 4 we present the filtering process, in section 5 we obtain some properties of the filters, and we conclude in section 6 with comments and conclusions.
\section{The Operational Tau Method}
Here we describe an improved version \cite{JMatos2014} of the operational Tau method for solving linear ordinary differential equations \cite{ortiz69,Ortiz81}.
Let $\mathcal{D}_\nu$ be the class of linear differential operators of order $\nu$ with polynomial coefficients and $D\in \mathcal{D}_\nu$ \begin{equation}\label{Ddef} D \equiv \sum_{i=0}^\nu p_i (t) \frac{d^i}{dt^i},\ \ p_i(t)\in \mathbb{P}. \end{equation} In order to solve a differential equation \begin{equation} \label{Diffeqn} D y(t) = f(t),\ t\in ]a,\ b[\subset\mathbb{R} \end{equation} Ortiz and Samara \cite{Ortiz81} developed the operational approach for the Lanczos Tau method based on the algebraic representation of the linear differential operator \eqref{Ddef}. They proved that \begin{equation*} D\, y = \mathbf{y}\, \boldsymbol{\Pi}\, \mathbf{t}^T,\quad \text{with}\quad \boldsymbol{\Pi}=\sum_{i=0}^\nu \boldsymbol{\eta}^i\, p_i (\boldsymbol{\mu}) \end{equation*} where $y = \mathbf{y}\mathbf{t}^T=\sum_{i=0}^\infty y_i t^i$, with $\mathbf{y}=[y_0,\ y_1,\ \ldots]$ and $\mathbf{t}=[1,\ t,\ t^2,\ \ldots]$, is the matricial representation of $y$, and the matrices $\boldsymbol{\eta}$ and $\boldsymbol{\mu}$ represent, respectively, the differentiation and the shift effects on $y$.
Let $\{\phi_i\}_{i\geq 0}$ be a family of functions defined in an interval $I\subset\mathbb{R}$, orthogonal with respect to a weight function $w$, and let \begin{equation*}\label{series1}
y = \sum_{i=0}^{\infty} c_{i}\phi_i,\quad c_{i}=\frac{\left\langle y,\phi_i\right\rangle_w}{\left\|\phi_i\right\|_{w}^{2}}, \end{equation*}
be a formal series, where $\left\langle f,g \right\rangle_{w} =\int_{I}f(t)g(t)w(t)\text{d}t$ and $\left\| f\right\|_{w}=\sqrt{\left\langle f,f\right\rangle_{w}}$.
If $\{\phi_i\}_{i\geq 0}$ is a polynomial basis such that $\phi_i$ is a polynomial of degree $i$ and $\mathbf{V}=[v_{i,j}]_{i,j\geq 0}$ is the matrix of the coefficients of those polynomials, that is, if \[ \phi_i = \sum_{j=0}^i v_{i j}t^j,\ i=0,1,\ldots \] and if \begin{equation}\label{yv}
y=\mathbf{y_v}\mathbf{v}^T=\sum_{i=0}^\infty c_i \phi_i,\quad \text{with}\quad \mathbf{v}^T=\mathbf{V}\mathbf{t}^T, \end{equation} then the effect of the differential operator \eqref{Ddef} on the coefficients of $y$ can be represented by the action of an algebraic operator on the vector $\mathbf{y_v}$, and is given by \cite{Ortiz81} $$ D y=\mathbf{y_v}\, \boldsymbol{\Pi_{\phi}}\, \mathbf{v}^T,\quad \text{where}\quad \boldsymbol{\Pi_{\phi}} = \mathbf{V}\, \boldsymbol{\Pi}\, \mathbf{V}^{-1}. $$
\subsection{Improved operational Tau method}
At this point, we must make two remarks: the first one is that, in practice, numerical methods work with finite matrices, and the second one is that this procedure can be numerically unstable, particularly if the condition number of $\mathbf{V}$ is large. However, if $\{\phi_i\}_{i\geq 0}$ is an orthogonal polynomial basis, Matos et al.\ presented in \cite{JMatos2014} a method to overcome this drawback. Since the polynomials $\phi_{i}$ satisfy a three-term recurrence relation
\begin{equation}\label{RecRel} \left\{\begin{array}{ll} t \phi_i(t)=\alpha_i \phi_{i+1}(t)+\beta_i \phi_i(t)+\gamma_i \phi_{i-1}(t), & i=1,2,\ldots \\ \phi_0(t)=1,\ \phi_1(t)=(t-\beta_0)/\alpha_0 \end{array}\right. \end{equation} then the matricial representation of the shift effect takes the form $$ t\, y =\mathbf{y_v}\, \boldsymbol{\mu_{\phi}}\, \mathbf{v}^T $$ where \begin{equation*} \boldsymbol{\mu_{\phi}}=\left[\begin{array}{cccc} \beta_0 & \alpha_0 \\ \gamma_1 & \beta_1 & \alpha_1 \\ & \gamma_2 & \beta_2 & \alpha_2 \\ & & & \cdots \end{array}\right] \end{equation*} On the other hand, defining $\eta_{i,j}$ as the coefficients in \begin{equation*}\label{DifPi} \frac{d}{dt}\phi_i = \sum_{j=0}^{i-1}\eta_{i j}\phi_j,\ i= 0,1,\ldots \end{equation*} and defining $\eta_\phi =[\eta_{i j}]_{i,j\geq 0}$, then \[ \frac{d}{dt} y =\mathbf{y_{v}}\, \mathbf{\eta_{\phi}}\, \mathbf{v}^T. \] \noindent By differentiating both sides of recurrence relation \eqref{RecRel}, it is easy to see that the coefficients $\eta_{i j}$ satisfy the following recurrence relation
\begin{equation*} \left\{\begin{array}{l} \eta_{i+1,j}=\frac{1}{\alpha_i}(\alpha_{j-1}\eta_{i,j-1}+(\beta_j-\beta_i)\eta_{i,j} +\gamma_{j+1}\eta_{i,j+1} - \gamma_i \eta_{i-1,j}),\ j=0,\ldots,i-1 \\ \eta_{i+1,i}=\frac{1}{\alpha_i}(\alpha_{i-1}\eta_{i,i-1}+1) \end{array}\right. \end{equation*} \noindent for $i=1,2,\ldots$ and \[ \left\{\begin{array}{ll} \eta_{0,j}=0, & j=0,1,\ldots \\ \eta_{1,0}=\frac{1}{\alpha_0}, \\ \eta_{1,j}=0, & j=1,2,\ldots \end{array}\right. \]
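As a concrete illustration, both recurrences translate directly into code. The sketch below (our own helper names, assuming NumPy) builds finite sections of $\boldsymbol{\mu_{\phi}}$ and $\boldsymbol{\eta_{\phi}}$ from the data $(\alpha_i,\beta_i,\gamma_i)$, instantiated for the Chebyshev basis, for which $\alpha_0=1$, $\alpha_i=1/2$ ($i\geq1$), $\beta_i=0$ and $\gamma_i=1/2$:

```python
import numpy as np

def shift_matrix(alpha, beta, gamma, N):
    """mu_phi: row i holds the coefficients of t*phi_i in the basis phi_j."""
    mu = np.zeros((N, N))
    for i in range(N):
        mu[i, i] = beta[i]
        if i + 1 < N:
            mu[i, i + 1] = alpha[i]
        if i >= 1:
            mu[i, i - 1] = gamma[i]
    return mu

def diff_matrix(alpha, beta, gamma, N):
    """eta_phi: row i holds the coefficients of (d/dt)phi_i, computed with
    the recurrence obtained by differentiating the three-term recurrence."""
    eta = np.zeros((N, N))
    eta[1, 0] = 1.0 / alpha[0]
    for i in range(1, N - 1):
        for j in range(i):
            t1 = alpha[j - 1] * eta[i, j - 1] if j >= 1 else 0.0
            t2 = (beta[j] - beta[i]) * eta[i, j]
            t3 = gamma[j + 1] * eta[i, j + 1]
            t4 = gamma[i] * eta[i - 1, j]
            eta[i + 1, j] = (t1 + t2 + t3 - t4) / alpha[i]
        eta[i + 1, i] = (alpha[i - 1] * eta[i, i - 1] + 1.0) / alpha[i]
    return eta

# Chebyshev data: t T_0 = T_1 and t T_i = (T_{i+1} + T_{i-1})/2 for i >= 1
N = 8
alpha = np.array([1.0] + [0.5] * (N - 1))
beta = np.zeros(N)
gamma = np.full(N, 0.5)
mu = shift_matrix(alpha, beta, gamma, N)
eta = diff_matrix(alpha, beta, gamma, N)
```

Row $i$ of \texttt{eta} then reproduces the classical Chebyshev derivative formulas, e.g. $T_3'=6T_2+3T_0$ and $T_4'=8T_3+8T_1$.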
Having defined $\boldsymbol{\mu_{\phi}}$ and $\boldsymbol{\eta_{\phi}}$, we get
\begin{equation}\label{piphi} D\, y = \mathbf{y_v}\, \boldsymbol{\Pi_{\phi}}\, \mathbf{v}^T,\ \boldsymbol{\Pi_{\phi}} = \sum_{i=0}^\nu \boldsymbol{\eta}^{i}_{\phi}\, p_i (\boldsymbol{\mu}_{\phi}) \end{equation} Thus, we don't need to compute the inverse of $\mathbf{V}$, which stabilizes the operational Tau method.
The operational approach of the Tau method is based on the matrix $\Gamma_{\phi}=[G\quad \overline{\Pi}_{\phi}]$ where, for some $n\in\mathbb{N},\ n\geq \nu$, $G\in \mathbb{R}^{(n+1)\times \nu}$ is the matrix representation of the initial conditions of the differential problem and $\overline{\Pi}_{\phi}\in \mathbb{R}^{(n+1)\times (n+1-\nu)}$ is the matrix $\Pi_{\phi}$ truncated to its first $n+1$ rows and first $n+1-\nu$ columns \cite{JMatos2014, Ortiz81}. If $\Gamma_{\phi}$ is a regular matrix, then we solve an algebraic system of linear equations to obtain $y_n=[c_0^{(n)},\ldots,c_n^{(n)}]$, the coefficients of the Tau approximant of $y$ in (\ref{yv}).
\subsection{Numerical example} In order to explain the proposed filtering method, we begin by considering the following example, where some useful notation is introduced.
\begin{examp}\label{exampleSQRT}
Here, we will consider the function $y(t)=\frac{\pi}{4}\sqrt{2(t+1)}$, which has a branch cut on $]-\infty,-1]$. It is known that the Chebyshev expansion of this function \cite{Paszkowski84} is \begin{equation} y = \sum_{k=0}^{\infty}c_{k}T_{k} = 1+\sum_{k=1}^{\infty}(-1)^{k+1}\frac{2}{4k^{2}-1}T_{k}. \label{yex1} \end{equation}
The function $y$ is analytic on $\mathbb{C}\setminus ]-\infty , -1]$; thus the singularity $\zeta$ closest to the interval of orthogonality $[-1,1]$ coincides with the endpoint $\zeta=-1$.
We can define $y$ as the solution of the linear ordinary differential equation \begin{equation}\label{ex1tau}
(t+1)\frac{\text{d}y}{\text{d}t}-\frac{1}{2}y=0 \end{equation} \noindent with condition $y(0)=\frac{\pi}{4}\sqrt{2}$. \end{examp}
From (\ref{piphi}), for the differential operator $D$ associated to equation (\ref{ex1tau}), we get $\Pi_{\phi}=\eta_{\phi}(\mu_{\phi}+I)-\frac{1}{2}I$, where $I$ is the infinite identity matrix. The initial condition is translated into the matrix $G=(\phi_0(0),\phi_1(0),\ldots)^T$. Since we are considering $\phi_i=T_i,\ i\geq 0$, the Chebyshev polynomials, then \[ \Gamma_T= \left[ \begin{array}{rccccc} 1 & -1/2 & \\ 0 & 1 & 1/2 \\ -1 & 2 & 4 & 3/2\\ 0 & 3 & 6 & 6 & 5/2 \\ 1 & 4 & 8 & 8 & 8 & 7/2 \\ &&&&&\cdots \end{array} \right] \]
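The matrix $\Gamma_T$ above can be rebuilt mechanically from $\Pi_{\phi}=\eta_{\phi}(\mu_{\phi}+I)-\frac{1}{2}I$ and $G=(T_i(0))_i$. The following sketch (our own illustration, in exact rational arithmetic) reproduces the displayed rows.

```python
from fractions import Fraction as F

def cheb_eta(n):
    # d/dt T_i = sum_j eta[i][j] T_j (Chebyshev differentiation table)
    a = lambda i: F(1) if i == 0 else F(1, 2)
    eta = [[F(0)] * (n + 1) for _ in range(n + 1)]
    eta[1][0] = F(1)
    for i in range(1, n):
        for j in range(i):
            left = a(j - 1) * eta[i][j - 1] if j >= 1 else F(0)
            eta[i + 1][j] = 2 * (left + F(1, 2) * (eta[i][j + 1] - eta[i - 1][j]))
        eta[i + 1][i] = 2 * (a(i - 1) * eta[i][i - 1] + 1)
    return eta

def cheb_mu(n):
    # t*T_i = sum_j mu[i][j] T_j (Chebyshev multiplication table)
    mu = [[F(0)] * (n + 1) for _ in range(n + 1)]
    mu[0][1] = F(1)
    for i in range(1, n + 1):
        mu[i][i - 1] = F(1, 2)
        if i < n:
            mu[i][i + 1] = F(1, 2)
    return mu

n = 6
eta, mu = cheb_eta(n), cheb_mu(n)
# Pi = eta*(mu + I) - (1/2)*I for the operator D y = (t+1)y' - y/2
Pi = [[sum(eta[i][k] * mu[k][j] for k in range(n + 1)) + eta[i][j]
       - (F(1, 2) if i == j else F(0)) for j in range(n + 1)]
      for i in range(n + 1)]
T0 = [[1, 0, -1, 0][k % 4] for k in range(n + 1)]       # T_k(0)
Gamma = [[F(T0[i])] + Pi[i] for i in range(n + 1)]
```

Its leading rows agree with the matrix displayed above.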
As the singularity $\zeta$ coincides with an endpoint of the interval of orthogonality, the convergence of the Tau method is slow. To emphasize this fact, we show in Figure \ref{fig1tauconv} the ratio of consecutive errors $\| e_{n+1}\|_{w}/ \|e_{n}\|_{w}$, for $n=9,10,\ldots,149$, where $e_{n}=y-y_{n}$ and $\left\| e_{n} \right\|_{w}^{2} = \int_{-1}^1 e_{n}^{2}(t)w(t) \text{d}t$ with $w(t)=(1-t^{2})^{-1/2}$.
We can observe that the ratio $\| e_{n+1}\|_{w}/ \|e_{n}\|_{w}$ approaches 1 as $n$ increases.
\begin{figure}
\caption{Ratio of consecutive errors $\| e_{n+1}\|_{w}/ \|e_{n}\|_{w}$, for $n=9,10,\ldots,149$.}
\label{fig1tauconv}
\end{figure}
\subsection{Error analysis in the Tau Method}
As noted in \cite{Ortiz81}, for the operational approach of the Tau method the polynomial solution $y_n$, obtained by solving the linear algebraic system with the truncated matrix $\Gamma_{\phi}$, is the exact solution of the perturbed differential equation \begin{equation} \label{TauDiffeqn} D y_n(t) = f(t) + \tau_n(t),\ t\in ]a,\ b[\subset\mathbb{R} \end{equation} where $\tau_n$ is the polynomial residual resulting from the truncation of the matrix $\Gamma_{\phi}$. Since we are considering linear differential operators $D$, subtracting (\ref{TauDiffeqn}) from (\ref{Diffeqn}) side by side shows that the error $e_n=y-y_n$ of the Tau method is the exact solution of the differential equation \begin{equation} \label{ErrorDiffeqn} D e_n(t) = -\tau_n(t),\ t\in ]a,\ b[\subset\mathbb{R} \end{equation} with homogeneous conditions.
Based on that property, some authors \cite{MJRodrigues13} developed an \textit{a posteriori} error analysis, solving the differential equation in (\ref{ErrorDiffeqn}) by the Tau method and obtaining a polynomial approximation of, say, degree $n+m$, as the exact solution of \begin{equation} \label{TauErrorDiffeqn} D \tilde{e}_{n+m}(t) = -\tau_n(t)+\tau_m(t),\ t\in ]a,\ b[\subset\mathbb{R} \end{equation} In Figure \ref{ErrorsAprox} we show, for a selected set of $n$ values, and with $m=1$ and $m=20$, the error curves $y(t)-y_n(t)$ together with the curves $\tilde{e}_{n+1}(t)$ and $\tilde{e}_{n+20}(t)$. We can see that the error is effectively estimated numerically, even for $m=1$, and that, in general, the estimator $\tilde{e}_{n+20}$ follows the error more closely than the estimator $\tilde{e}_{n+1}$.
\begin{figure}
\caption{Error curves $y(t)-y_n(t)$ and error curves approximations $\tilde{e}_{n+1}(t)$ and $\tilde{e}_{n+20}(t)$ obtained with the Tau method for example \ref{exampleSQRT}.}
\label{ErrorsAprox}
\end{figure}
For the filtering process proposed in section \ref{FilterinProcess}, the coefficient errors in the Tau method \[ \Delta c^{(n)}=\left( \Delta c^{(n)}_{0}, \Delta c^{(n)}_{1},\ldots ,\Delta c^{(n)}_{n}\right), \quad \Delta c^{(n)}_{k}= c_{k}- c^{(n)}_{k}, \ k=0,1,\ldots,n \] play a central role in the final results. We expect that, for each $k=0,1,\ldots,n$, the error $\Delta c^{(n)}_{k}$ decreases with increasing $n$, provided that the Tau method converges, and also that these errors do not excessively affect the error of the filtered solution. In general, with the Tau method, the values $\Delta c^{(n)}_{k}$ are the coefficients of the error function $e_n=y-y_n$ and can be approximated by the coefficients of the a posteriori error estimator $\tilde{e}_{n+m}$.
\subsection{Errors on the coefficients in example \ref{exampleSQRT}}
In the previous example we can simplify and solve exactly the system associated to the Tau method, obtaining exact values for $c^{(n)}_k$. Writing the last $n-1$ equations in the form \[ \sum_{i=k}^n 2i c^{(n)}_i + (k-\frac{3}{2})c^{(n)}_{k-1} =0,\ k=2,\ldots,n \] and subtracting each equation $k-1$ from equation $k$, we get the backward recurrence \begin{equation*} c^{(n)}_k = -\frac{2k+3}{2k-1}c^{(n)}_{k+1},\ k=1,\ldots,n-2,\quad\text{and}\quad c^{(n)}_0 = \frac{3}{2}c^{(n)}_{1} \end{equation*} This result leads, by mathematical induction, to \begin{equation}\label{cnk}
\left\{ \begin{array}{ll}
c^{(n)}_k = (-1)^{n-k}\frac{4n(2n-1)}{4k^2-1}c^{(n)}_{n}, & k=1,\ldots,n-1,\\
c^{(n)}_0 = (-1)^{n+1} 2n(2n-1)c^{(n)}_{n} & \end{array}\right. \end{equation} and, substituting in the first equation, we can solve for $c^{(n)}_{n}$, getting \[
c^{(n)}_n = (-1)^{n+1} \frac{y_0}{2n(2n-1)S_n} \] where $y_0=y(0)$ is the initial value, and \[ S_n=1+\sum_{k=1}^{n} c_{k}T_{k}(0)-\frac{1}{2n(2n+1)}T_{n}(0) \] is the partial sum for $y(0)$ if $n$ is odd, and the partial sum plus a correction term in the last coefficient if $n$ is even. Substituting in (\ref{cnk}) we get \[ c^{(n)}_k = (-1)^{k+1} \frac{2y_0}{(4k^2-1)S_n},\ k = 1,\ldots,n-1,\quad\text{and}\quad c^{(n)}_0 = \frac{y_0}{S_n} \] and so, comparing these coefficients $c^{(n)}_k$ with the exact coefficients $c_k$ in \eqref{yex1}, we verify that \[ c_n^{(n)}=(\frac{2n+1}{4n})\frac{y_0}{S_n}c_n\quad\text{and}\quad c_k^{(n)}=\frac{y_0}{S_n}c_k,\ k = 0,\ldots,n-1 \] This means that, for fixed $n\in\mathbb{N}$, each Tau coefficient $c_k^{(n)}$, except the last one, is the exact coefficient $c_k$ times a constant factor. This is relevant, in the next sections, for our filtering results and justifies the exact formula for the error in the Tau coefficients \begin{equation}\label{Deltackn} \Delta c_k^{(n)} = \left\{ \begin{array}{ll} (1-\frac{y_0}{S_n})c_k, & k=0,\ldots,n-1 \\ (1-(\frac{2n+1}{4n})\frac{y_0}{S_n})c_n & k=n \end{array}\right. \end{equation}
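These identities can be cross-checked numerically. The sketch below (our own illustration, in float arithmetic) assembles and solves the truncated Tau system for this example, using the band pattern visible in $\Gamma_T$ ($\Pi_{i,0}=i$, $\Pi_{i,j}=2i$ for $1\leq j\leq i-1$, $\Pi_{i,i}=i-\frac{1}{2}$), and verifies the backward recurrence together with $c^{(n)}_0=\frac{3}{2}c^{(n)}_1$.

```python
import math

def tau_coeffs(n, y0):
    """Solve the truncated Tau system of example 1: row 0 imposes
    sum_i c_i T_i(0) = y0, and row j+1 imposes column j of Pi, whose band
    pattern is read off Gamma_T: Pi[i][0] = i, Pi[i][j] = 2i for
    1 <= j <= i-1, Pi[i][i] = i - 1/2."""
    def Pi(i, j):
        if i == j:
            return i - 0.5
        if i > j:
            return float(i) if j == 0 else 2.0 * i
        return 0.0
    A = [[float([1, 0, -1, 0][i % 4]) for i in range(n + 1)]]   # T_i(0)
    A += [[Pi(i, j) for i in range(n + 1)] for j in range(n)]
    b = [y0] + [0.0] * n
    # Gaussian elimination with partial pivoting
    for col in range(n + 1):
        piv = max(range(col, n + 1), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, n + 1):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    c = [0.0] * (n + 1)
    for r in range(n, -1, -1):
        c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, n + 1))) / A[r][r]
    return c

n = 12
c = tau_coeffs(n, math.pi * math.sqrt(2) / 4)
```

The computed $c^{(12)}_1$ is, as predicted, close to the exact coefficient $c_1=2/3$ up to the constant factor $y_0/S_n\approx 1$.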
Another property, resulting for this particular example from the values in the last column of the $\Gamma_{\phi}$ matrix, is that the residual is $\tau_n(t)=(n-\frac{1}{2})c_n^{(n)} T_{n+1}(t)$. Using the previous formula for $c_n^{(n)}$, we get \[ \tau_n(t)=(-1)^{n+1}\frac{y_0}{4n S_n} T_{n+1}(t), \] and so the residual $\tau_n$ tends to zero uniformly on $[-1,\ 1]$, with amplitude decreasing as $n$ grows.
Before we proceed with the proposed filtering method, we recall the definition of Frobenius-Pad\'e approximants, also known as linear Pad\'e approximants from series of orthogonal polynomials.
\section{Frobenius-Pad\'e approximants from orthogonal series}
We begin this section by defining Frobenius-Pad\'e approximants from orthogonal series. Let $\{\phi_i\}_{i\geq 0}$ be an orthogonal polynomial basis. Given two nonnegative integers $p$ and $q$, we say that the rational function \begin{equation}\label{phi} \Phi_{p,q}(y)=\frac{N_{p,q}}{D_{p,q}}=\frac{\sum_{i=0}^{p}a_{i}\phi_i}{\sum_{i=0}^{q} b_{i}\phi_i} \end{equation}
\noindent is a Frobenius-Pad\'{e} approximant of type $(p,q)$ from the series $y$ \cite{ACMatos01} if \begin{equation}\label{lindef} D_{p,q} y - N_{p,q} = \sum_{i=p+q+1}^{\infty}e_{i}\phi_i. \end{equation}
In order to determine the coefficients $a_i,\ i=0,1,\ldots,p$ and $b_i,\ i=0,1,\ldots,q$ in (\ref{phi}) we introduce $h_{j,i},\ j=0,1,\ldots$ such that \[ \phi_{i} y = \sum_{j=0}^{\infty}h_{j,i}\phi_{j},\quad i=0,1,\ldots. \] \noindent Thus $h_{j,i}$, $j=0,1,\ldots$, are the coefficients of the orthogonal series $\phi_{i} y,\ i=0,1,\ldots$.
Identifying coefficients in (\ref{lindef}) we can show that the coefficients $a_{j}$ and $b_{i}$ of $\Phi_{p,q}$ are solutions of the following homogeneous system of $p+q+1$ linear equations in $p+q+2$ unknowns
\begin{equation}\label{eq:afp1D11} \left\{\begin{array}{ll}
\displaystyle{\sum_{i=0}^{q}h_{j,i}b_{i}-a_{j}}=0, & j=0,\ldots,p \\
\displaystyle{\sum_{i=0}^{q}h_{j,i}b_{i}}=0, & j=p+1,\ldots,p+q.
\end{array}\right. \end{equation}
\noindent which always admits a nontrivial solution.
If we set $b_{q}=1$ then we can use equations \eqref{eq:afp1D11} to determine the coefficients of the normalized approximant, based on the matrix form introduced in the following proposition \cite{ACMatos01}.
\begin{proposition}\label{Prop:NormalizedApprox} Let $p,q\in \mathbb{N}_0$, \[ \mathbf{g}^{[p/q]}=\left[h_{0,q}\ldots h_{p,q} \right]^{T},\quad \mathbf{h}^{[p/q]}=\left[h_{p+1,q}\ldots h_{p+q,q} \right]^{T}, \] and \[ \mathbf{G}^{[p/q]}=\left[\begin{array}{ccc} h_{0,0}&\cdots & h_{0,q-1}\\ \vdots & &\vdots \\ h_{p,0}&\cdots &h_{p,q-1} \end{array} \right], \quad \text{and} \quad \mathbf{H}^{[p/q]}=\left[\begin{array}{ccc} h_{p+1,0}&\cdots & h_{p+1,q-1}\\ \vdots & &\vdots \\ h_{p+q,0}&\cdots &h_{p+q,q-1} \end{array} \right]. \] If $\mathbf{H}^{[p/q]}$ is nonsingular then \[ \mathbf{a}=\left[ a_{0}\ldots a_{p} \right]^{T}, \quad \mathbf{b}=\left[ b_{0}\ldots b_{q-1} \right]^{T}, \] are determined by
\begin{align} & \mathbf{H}^{[p/q]}\cdot\mathbf{b}=-\mathbf{h}^{[p/q]}\label{AFPsist1}\\ & \mathbf{a}=\mathbf{G}^{[p/q]}\cdot\mathbf{b}+\mathbf{g}^{[p/q]}\label{AFPsist2} \end{align}
\end{proposition}
In the conditions of this proposition, the coefficients of the denominator, $b_{i}$, $i=0,1,\ldots,q-1$, are uniquely determined by solving \eqref{AFPsist1}. Once the coefficients $b_{i}$ are determined, we use \eqref{AFPsist2} to compute the numerator coefficients $a_{i}$, $i=0,1,\ldots,p$.
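This two-step computation is easy to sketch in code (our own illustration, not the authors' implementation; float arithmetic). The $h$-table is generated with the Chebyshev instance of the recurrence written out in the next subsection, and the defining relation \eqref{lindef} is checked up to index $p+q$ for a $(2,2)$ approximant of the series of example \ref{exampleSQRT}.

```python
def cheb_h(c, rows, cols):
    # h[j][i] = coefficient of T_j in T_i*y, via the Chebyshev recurrence
    R = rows + cols + 2
    c = list(c) + [0.0] * (R + 2 - len(c))
    h = [[0.0] * (cols + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        h[i][0] = c[i]
    h[0][1] = 0.5 * c[1]
    h[1][1] = h[0][0] + 0.5 * h[2][0]
    for i in range(2, R):
        h[i][1] = 0.5 * (h[i - 1][0] + h[i + 1][0])
    for j in range(2, cols + 1):
        h[0][j] = 0.5 * c[j]
        h[1][j] = 2 * h[0][j - 1] + h[2][j - 1] - h[1][j - 2]
        for i in range(2, R - j):
            h[i][j] = h[i - 1][j - 1] + h[i + 1][j - 1] - h[i][j - 2]
    return h

def solve(M, v):
    # tiny Gaussian elimination with partial pivoting
    n = len(M)
    M = [row[:] for row in M]
    v = v[:]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p], v[col], v[p] = M[p], M[col], v[p], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            v[r] -= f * v[col]
            for k in range(col, n):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (v[r] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def frobenius_pade(h, p, q):
    # solve H b = -h (with b_q = 1), then a = G b + g
    H = [[h[j][i] for i in range(q)] for j in range(p + 1, p + q + 1)]
    rhs = [-h[j][q] for j in range(p + 1, p + q + 1)]
    b = solve(H, rhs) + [1.0]
    a = [sum(h[j][i] * b[i] for i in range(q + 1)) for j in range(p + 1)]
    return a, b

# Chebyshev coefficients of example 1: c_0 = 1, c_k = 2(-1)^{k+1}/(4k^2-1)
c = [1.0] + [2 * (-1) ** (k + 1) / (4 * k * k - 1) for k in range(1, 13)]
h = cheb_h(c, 8, 3)
a, b = frobenius_pade(h, 2, 2)
```

The residual coefficients of $D_{2,2}\,y-N_{2,2}$ vanish (to rounding) for indices $0,\ldots,p+q$, as the definition requires.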
The characteristic recurrence relation \eqref{RecRel} of the orthogonal polynomials $\phi_{i}$ leads to the following proposition, allowing the computation of the entries in $\mathbf{H}^{[p/q]},\ \mathbf{G}^{[p/q]},\ \mathbf{h}^{[p/q]}$ and $\mathbf{g}^{[p/q]}$.
\begin{proposition}\label{Prop:ReqRel} The coefficients $h_{i j}$ can be computed \cite{ACMatos01} using the recurrence relation
\begin{equation}\label{AFP:eqrel1}
h_{i,j+1}=\frac{1}{\alpha_{j}}\left( \frac{\mu_{i+1}}{\mu_{i}}\alpha_{i}h_{i+1,j}+(\beta_{i}-\beta_{j})h_{i,j}+
\frac{\mu_{i-1}}{\mu_{i}}\gamma_{i}h_{i-1,j}-\gamma_{j}h_{i,j-1}\right), \ i,j=1,2,\ldots \end{equation} \noindent and, \[ h_{i,0}=c_{i}, \ \ i=0,1,\ldots, \ \ \ h_{0,j}=\frac{\mu_{j}}{\mu_{0}}h_{j,0}, \ \ \ \ j=1,2,\ldots, \]
where $h_{i,-1}=0$, $\alpha_{i},\beta_{i}$ and $\gamma_{i}$ are the coefficients in \eqref{RecRel} and $\mu_{i}=\left\|\phi_{i}(t)\right\|_{w}^{2}$. \end{proposition}
In particular, for Chebyshev and for Legendre polynomials we have
\paragraph{\textbf{Chebyshev Polynomials}:} The Chebyshev polynomials, normalized with $T_{i}(1)=1, \ \ i=0,1,\ldots$, satisfy the recurrence relation \eqref{RecRel} with \[ \left\{ \begin{array}{lll} \alpha_{i}=\gamma_{i}=\frac{1}{2}, & \beta_{i}=0, & i=1,2,\ldots \\ \alpha_{0}=1, & \beta_{0}=0 \end{array} \right. \] and $\mu_{0}=\pi$, $\mu_{i}=\pi /2,\ i=1,2,\ldots$. Thus, the recurrence relation \eqref{AFP:eqrel1} takes the form $$ \left\{ \begin{array}{ll} h_{i,0}=c_{i}, & i=0,1,\ldots \\ h_{0,j}=\displaystyle{\frac{1}{2}c_{j}}, & j=1,2,\ldots \\ h_{1,1}=\displaystyle{h_{0,0}+\frac{1}{2}h_{2,0}} \\ h_{i,1}=\displaystyle{\frac{1}{2}(h_{i-1,0}+h_{i+1,0})}, & i=2,3,\ldots \\ h_{1,j}=2h_{0,j-1}+h_{2,j-1}-h_{1,j-2}, & j= 2,3,\ldots \\ h_{i,j}=h_{i-1,j-1}+h_{i+1,j-1}-h_{i,j-2}, & i,j= 2,3,\ldots \\ \end{array} \right. $$
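These Chebyshev formulas are easily checked by machine; the sketch below (our own illustration, in exact rational arithmetic) runs them on $y=T_1$, for which the products $T_jT_1=\frac{1}{2}(T_{j-1}+T_{j+1})$ are known in closed form.

```python
from fractions import Fraction as F

def cheb_h_table(c, rows, cols):
    """h[i][j] = coefficient of T_i in T_j*y for y = sum_k c[k] T_k, from the
    recurrence above; extra rows are carried because column j needs column
    j-1 one row further down."""
    R = rows + cols + 2
    c = list(c) + [F(0)] * (R + 2 - len(c))
    h = [[F(0)] * (cols + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        h[i][0] = c[i]
    h[0][1] = F(1, 2) * c[1]
    h[1][1] = h[0][0] + F(1, 2) * h[2][0]
    for i in range(2, R):
        h[i][1] = F(1, 2) * (h[i - 1][0] + h[i + 1][0])
    for j in range(2, cols + 1):
        h[0][j] = F(1, 2) * c[j]
        h[1][j] = 2 * h[0][j - 1] + h[2][j - 1] - h[1][j - 2]
        for i in range(2, R - j):
            h[i][j] = h[i - 1][j - 1] + h[i + 1][j - 1] - h[i][j - 2]
    return h

h = cheb_h_table([F(0), F(1)], 4, 3)   # y = T_1
```

Column $j$ of the table indeed reproduces the coefficients of $T_jT_1$.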
With these relations we can derive direct formulas for some sequences of Chebyshev-Pad\'e approximants.
\begin{corollary}\label{ChebyPadeApprox} Let $ y = \sum_{k=0}^\infty c_k T_k $ be a formal Chebyshev series and let $\Phi_{p,q}(y)=N_{p,q}/D_{p,q}$, with \[ D_{p,q} =T_q + \sum_{i=0}^{q-1}b_{i}T_i, \quad \text{and}\quad N_{p,q} = \sum_{i=0}^{p}a_{i}T_i \] be its $(p,q)$ Chebyshev-Pad\'e approximant, then \begin{itemize}
\item[(a)] $\forall p\in\mathbb{N}_0$ such that $c_{p+1}\neq 0$ we have
\[ D_{p,1}(t)=b_{0}+t,\quad\text{and}\quad N_{p,1}(t)=\frac{1}{2}\sum_{k=0}^p (c'_{k-1}+c_{k+1}+2b_{0}c_k)T_k(t) \]
with
\[ b_{0}=-\frac{c'_p+c_{p+2}}{2c_{p+1}} \]
where $c'_{-1}=0,\ c'_0=2c_0$ and $c'_k=c_k,\ k\geq 1$.
\item[(b)] $\forall p\in\mathbb{N}_0$ such that the determinant
\[ \underline{\Delta} = \left| \begin{array}{cc}
c_{p+1} & c'_{p}+c_{p+2} \\
c_{p+2} & c_{p+1}+c_{p+3}
\end{array} \right| \neq 0 \] we have
\[ D_{p,2}(t)=b_0+b_1 t+T_2(t),\quad\text{and}\quad N_{p,2}(t)=\sum_{k=0}^p a_k T_k(t) \]
with
\[ a_k=c'_k b_0+\frac{1}{2}(c'_{k-1}+c_{k+1})b_1+\frac{1}{2}(c'_{k-2}+c_{k+2}),\ k=0,\ldots,p \] $$
b_{0}=-\frac{\left| \begin{array}{cc} c_{p-1}+c_{p+3} & c_{p}+c_{p+2}\\ c'_{p}+c_{p+4} & c_{p+1}+c_{p+3}
\end{array}\right|}{2\underline{\Delta}}
\quad \text{and} \quad b_{1}=-\frac{\left| \begin{array}{cc} c_{p+1} & c_{p-1}+c_{p+3}\\ c_{p+2} & c'_{p}+c_{p+4}
\end{array}\right|}{\underline{\Delta}}. $$
where $c'_{-2}=0,\ c'_{-1}=2c_1-c_0,\ c'_0=2c_0$ and $c'_k=c_k,\ k\geq 1$. \end{itemize} \end{corollary}
\paragraph{\textbf{Legendre Polynomials}:} The Legendre polynomials, normalized with $P_{i}(1)=1, \ \ i=0,1,\ldots$, satisfy the recurrence relation \eqref{RecRel} with \[ \alpha_{i}=\frac{i+1}{2i+1}, \quad \beta_{i}=0, \quad \gamma_{i}=\frac{i}{2i+1}, \quad \text{and}\quad \mu_{i}=\frac{2}{2i+1},\quad i=0,1,\ldots \]
Thus, \eqref{AFP:eqrel1} takes the form $$ \left\{ \begin{array}{ll} h_{i,0}=c_{i}, \qquad h_{0,i}=\displaystyle{\frac{1}{2i+1}c_{i}}, & i=0,1,\ldots \\ h_{i,1}=\displaystyle{\frac{i+1}{2i+3}c_{i+1}+\frac{i}{2i-1}c_{i-1}}, & i=1,2,\ldots \\ h_{i,j+1}=\displaystyle{\frac{2j+1}{j+1}\left[\frac{i+1}{2i+3}h_{i+1,j}+\frac{i}{2i-1}h_{i-1,j}\right]-\frac{j}{j+1}h_{i,j-1}}, & i,j=1,2,\ldots \end{array} \right. $$
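As for the Chebyshev case, this recurrence is easy to verify by machine; the sketch below (our own illustration, exact arithmetic) checks it on $y=P_1$, using the known products $P_1P_1=\frac{1}{3}P_0+\frac{2}{3}P_2$ and $P_2P_1=\frac{2}{5}P_1+\frac{3}{5}P_3$.

```python
from fractions import Fraction as F

def legendre_h(c, rows, cols):
    """h[i][j] = coefficient of P_i in P_j*y for y = sum_k c[k] P_k,
    from the Legendre instance of the recurrence above."""
    R = rows + cols + 2
    c = list(c) + [F(0)] * (R + 2 - len(c))
    h = [[F(0)] * (cols + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        h[i][0] = c[i]
    for j in range(cols + 1):
        h[0][j] = F(1, 2 * j + 1) * c[j]
    for i in range(1, R):
        h[i][1] = F(i + 1, 2 * i + 3) * c[i + 1] + F(i, 2 * i - 1) * c[i - 1]
    for j in range(1, cols):
        for i in range(1, R - j):
            h[i][j + 1] = (F(2 * j + 1, j + 1)
                           * (F(i + 1, 2 * i + 3) * h[i + 1][j]
                              + F(i, 2 * i - 1) * h[i - 1][j])
                           - F(j, j + 1) * h[i][j - 1])
    return h

h = legendre_h([F(0), F(1)], 4, 3)   # y = P_1
```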
In this case, although not as simple as in the Chebyshev case, formulas for the Legendre-Pad\'e approximants $\Phi_{p,1}$ and $\Phi_{p,2}$ in terms of the series coefficients $c_k$ can still be derived.
\begin{corollary}\label{LegPadeApprox} Let $ y = \sum_{k=0}^\infty c_k P_k $ be a formal Legendre series and let $\Phi_{p,q}(y)=N_{p,q}/D_{p,q}$, with \[ D_{p,q} = P_q + \sum_{i=0}^{q-1}b_{i}P_i, \quad \text{and}\quad N_{p,q} = \sum_{i=0}^{p}a_{i}P_i \] be its $(p,q)$ Legendre-Pad\'e approximant, then \begin{itemize}
\item[(a)] $\forall p\in\mathbb{N}_0$ such that $c_{p+1}\neq 0$ we have
\[ D_{p,1}(t)=b_{0}+t,\quad\text{and}\quad N_{p,1}(t)=\sum_{k=0}^p \left(h_{k,1}+b_{0}c_k\right)P_k(t) \]
with
\[ b_{0}=-\frac{h_{p+1,1}}{c_{p+1}},\quad\text{and}\quad h_{k,1}=\frac{k}{2k-1}c_{k-1}+\frac{k+1}{2k+3}c_{k+1},\ k=0,1,\ldots \]
\item[(b)] $\forall p\in\mathbb{N}_0$ such that the determinant
\[ \underline{\Delta} = \left| \begin{array}{cc}
c_{p+1} & h_{p+1,1} \\
c_{p+2} & h_{p+2,1}
\end{array} \right| \neq 0 \] we have
\[ D_{p,2}(t)=b_0+b_1 t+P_2(t),\quad\text{and}\quad N_{p,2}(t)=\sum_{k=0}^p a_k P_k(t) \]
with
\[ a_k=c_k b_0+h_{k,1}b_1+h_{k,2},\quad k=0,\ldots,p \] $$
b_{0}=-\frac{\left| \begin{array}{cc}
h_{p+1,2} & h_{p+1,1} \\
h_{p+2,2} & h_{p+2,1}
\end{array}\right|}{\underline{\Delta}}
\quad \text{and} \quad b_{1}=-\frac{\left| \begin{array}{cc}
c_{p+1} & h_{p+1,2} \\
c_{p+2} & h_{p+2,2}
\end{array}\right|}{\underline{\Delta}}. $$ \noindent where
\begin{multline*}
h_{k,2}=\frac{3(k^2-1)}{2(2k+3)(2k-1)}\left[\left(\frac{4k+3}{(2k-3)(k+1)}+1\right)c_{k-2}\right. \\ \left.+\frac{2k}{3(k-1)}c_{k}+\left(\frac{3}{(2k+5)(k-1)}+1\right)c_{k+2}\right],\ k=0,1,\ldots
\end{multline*} \end{itemize} \end{corollary}
In the next section we will describe some numerical problems related to this filtering process and we will use Example \ref{exampleSQRT} to illustrate them.
\section{The Filtering Process}\label{FilterinProcess}
Let $y$ be the solution of a given differential problem, with formal orthogonal expansion $y=\sum_{i\geq 0}c_{i}\phi_{i}$, and let \[ \Phi_{p,q}(y)=\frac{N_{p,q}}{D_{p,q}}=\frac{\displaystyle{\sum_{i=0}^{p}a_{i}\phi_i}}{\displaystyle{\sum_{i=0}^{q-1}b_{i}\phi_i + \phi_q}} \] be its Pad\'e approximant of type $(p,q)$. Since we do not have access to the exact coefficients $c_i$, our filtering process makes use of the coefficients $c_i^{(n)}$ of the Tau solution of order $n$, $y_n = \sum_{i=0}^{n}c^{(n)}_{i}\phi_{i}$, of the given differential problem to construct the Pad\'{e} approximant of type $(p,q)$ \[ \Phi_{p,q}(y_{n})=\frac{N^{(n)}_{p,q}}{D^{(n)}_{p,q}}=\frac{\displaystyle{\sum_{i=0}^{p}a^{(n)}_{i}\phi_i}}{\displaystyle{\sum_{i=0}^{q-1}b^{(n)}_{i}\phi_i + \phi_q}}, \quad \text{with} \quad p+2q+1\leq n. \]
\subsection{Error in the filtering process}
A filter $\Phi_{p,q}(y_{n})$ represents the exact solution $y$ with an error that depends on the errors $\Delta c^{(n)}$ in the coefficients and also on numerical errors caused by the numerical instability of the algorithm described above to compute Pad\'e approximants. In fact, the critical step is the solution of the system of linear equations \eqref{AFPsist1}. To be more precise, the matrices $\mathbf{H}^{\left[ p/q\right]}$ are ill-conditioned for sufficiently large values of $p$ and $q$.
We remark that, in example \ref{exampleSQRT}, the matrices $\mathbf{H}^{\left[ p/q\right]}$, whether built with the Tau coefficients or with the Fourier series coefficients, have condition numbers of the same order of magnitude.
These numerical errors, caused by the numerical instability of the algorithm, are a serious drawback of the filtering method. Indeed, they give rise to Pad\'e approximants with Froissart doublets located near the real segment $I=[-1,1]$. This behaviour is similar to that of Chebyshev and Legendre Pad\'e approximants computed from expansions perturbed with random noise \cite{JCMatos2014,JCMatos2015}. The presence of Froissart doublets not only destroys the structure of the computed Pad\'e table but also locally spoils the approximation given by the Pad\'e approximants. In order to bypass this drawback we build a table that we call the \textit{Froissart table}.
\subsection{The Froissart table} The formal definition of a Froissart doublet \cite{Stahl98} is asymptotic in nature and is therefore unsuitable for our purposes.
For practical purposes we will consider a Froissart doublet of a Pad\'e approximant to be a pair $(\zeta,\eta)$ such that $\zeta$ is a pole, $\eta$ is a zero and $|\zeta-\eta|<\text{\rm tol}$, where $\text{\rm tol}>0$ is a prescribed \textit{tolerance}. The $(p,q)$ entry of the $\text{\rm tol}$-Froissart table, $n_{p,q}$, is found by computing the zeros and poles of $\Phi_{p,q}$: $n_{p,q}$ is the number of pole/zero pairs at distance less than $\text{\rm tol}$, in other words, the number of Froissart doublets of $\Phi_{p,q}$. We note that, numerically, $\Phi_{p,q}$ is a rational function with numerator of \textit{numeric} degree $p-n_{p,q}$ and denominator of \textit{numeric} degree $q-n_{p,q}$, since we have $n_{p,q}$ pairs of factors that \textit{almost} cancel. \begin{figure}
\caption{Froissart table of the Tau solution $y_{150}$ with $\text{\rm tol}=10^{-5}$, $1\leq p,q\leq 25$.}
\label{FroissartTable1}
\end{figure}
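The doublet count just described is easy to automate; the sketch below is our own illustration, with a basic Durand--Kerner iteration swapped in as root finder (adequate for the small degrees involved).

```python
def prod(xs):
    # product of complex numbers
    r = 1 + 0j
    for x in xs:
        r *= x
    return r

def poly_roots(coeffs, iters=500):
    """All complex roots of a_0 + a_1 t + ... + a_n t^n (a_n != 0),
    by the Durand-Kerner simultaneous iteration."""
    n = len(coeffs) - 1
    monic = [complex(a) / coeffs[-1] for a in coeffs]
    def p(z):
        v = 0j
        for a in reversed(monic):
            v = v * z + a
        return v
    z = [(0.4 + 0.9j) ** (k + 1) for k in range(n)]
    for _ in range(iters):
        z = [zk - p(zk) / prod(zk - zj for j, zj in enumerate(z) if j != k)
             for k, zk in enumerate(z)]
    return z

def n_froissart(num, den, tol):
    """Froissart-table entry: number of zero/pole pairs at distance < tol."""
    zeros, poles = poly_roots(num), poly_roots(den)
    used, count = [False] * len(zeros), 0
    for pole in poles:
        cand = [(abs(pole - w), i) for i, w in enumerate(zeros) if not used[i]]
        if cand and min(cand)[0] < tol:
            used[min(cand)[1]] = True
            count += 1
    return count

# numerator (t-1/2)(t-2), denominator (t-0.5001)(t+3): one near-cancelling pair
num = [1.0, -2.5, 1.0]
den = [-1.5003, 2.4999, 1.0]
```

With $\text{\rm tol}=10^{-3}$ the nearly coincident pair $(0.5001,\,0.5)$ is counted as a doublet, while with $\text{\rm tol}=10^{-6}$ it is not.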
\begin{figure}
\caption{Absolute error in Tau solution $y_{150}$, in black. Absolute error in the filter $\Phi_{10,10}(y_{150})$, in blue.}
\label{fig:ErroDeFiltragem1}
\end{figure}
We present the Froissart table of the Tau solution $y_{150}$ with $\text{\rm tol}=10^{-5}$ and $1\leq p,q\leq 25$ in Figure \ref{FroissartTable1}. In order to find a ``good'' filter, we look for one in the white region of the Froissart table. In other words, we look for a $\Phi_{p,q}(y_{150};t)$ such that $n_{p,q}=0$ and that improves the Tau approximation. For example, if we look for a good diagonal filter $\Phi_{p,p}(y_{150};t)$, $p=1,2,\ldots,25$, then by inspecting the Froissart table we can see that $\Phi_{10,10}(y_{150};t)$ is the last filter in the white region. All filters $\Phi_{p,p}(y_{150};t)$, $p>10$, have Froissart doublets, and almost all have Froissart doublets on the real segment $[-1,1]$.
In Figure \ref{fig:ErroDeFiltragem1} we illustrate our results by filtering the Tau solution $y_{150}$ of example \ref{exampleSQRT} with the Pad\'e approximation $\Phi_{10,10}(y_{150};t)$. The absolute error $|y(t)-y_{150}(t)|$ is represented with a black line while the absolute error $|y(t)-\Phi_{10,10}(y_{150};t)|$ of the filter is represented with a blue line.
\subsection{Estimation of singularities} Another application of this filtering process is the estimation of singularities of the solution of a given differential problem. In fact, the poles of certain sequences of Pad\'e approximants from orthogonal series allow one to estimate the singularities \cite{buslaev06,buslaev09}. However, we remark again that our Pad\'e approximants are computed with the Tau coefficients, not with the orthogonal expansion coefficients.
For particular orthogonal polynomial bases we can derive explicit formulas for the pole $\lambda_{p}$ of the approximants in the sequence $\Phi_{p,1},\ p=0,1,\ldots$ and for the poles $\lambda_{p^{\pm}}$ of those in the sequence $\Phi_{p,2},\ p=0,1,\ldots$. The next results are consequences of corollaries \ref{ChebyPadeApprox} and \ref{LegPadeApprox}.
\begin{corollary} Let $y(t)=\sum_{k=0}^\infty c_k T_k(t)$ be a formal Chebyshev series and $\Phi_{p,q}$ be its normalized Chebyshev-Pad\'e approximant of type $(p,q)$, then \begin{itemize}
\item[(a)] $\forall p\geq 1$ such that $c_{p+1}\neq 0$, the approximant $\Phi_{p,1}(y;t)$ has the pole \begin{equation}\label{polos1} \lambda_{p}=\frac{c'_{p}+c_{p+2}}{2c_{p+1}} \end{equation}
\item[(b)] $\forall p\geq 1$ such that the determinant
\[ \underline{\Delta} = \left| \begin{array}{cc}
c_{p+1} & c'_{p}+c_{p+2} \\
c_{p+2} & c_{p+1}+c_{p+3}
\end{array} \right| \neq 0 \] the approximant $\Phi_{p,2}$ has the poles \begin{equation}\label{polos2} \lambda_{p^\pm}=\frac{-b_{1}\pm \sqrt{b_{1}^{2}-8(b_{0}-1)}}{4} \end{equation} \noindent where $c'_p,\ b_{0}$ and $b_{1}$ are given in corollary \ref{ChebyPadeApprox} (b). \end{itemize}
\end{corollary}
\begin{corollary} Let $y(t)=\sum_{k=0}^\infty c_k P_k(t)$ be a formal Legendre series and $\Phi_{p,q}$ be its normalized Legendre-Pad\'e approximant of type $(p,q)$, then \begin{itemize}
\item[(a)] $\forall p\geq 1$ such that $c_{p+1}\neq 0$, the approximant $\Phi_{p,1}(y;t)$ has the pole \begin{equation}\label{Legpolos1} \lambda_{p}=\frac{h_{p+1,1}}{c_{p+1}}=\frac{1}{c_{p+1}}\left(\frac{p+1}{2p+1}c_{p}+\frac{p+2}{2p+5}c_{p+2}\right) \end{equation}
\item[(b)] $\forall p\geq 1$ such that the determinant
\[ \underline{\Delta} = \left| \begin{array}{cc}
c_{p+1} & h_{p+1,1} \\
c_{p+2} & h_{p+2,1}
\end{array} \right| \neq 0 \] \noindent the approximant $\Phi_{p,2}$ has the poles \begin{equation}\label{Legpolos2} \lambda_{p^\pm}=\frac{-b_{1}\pm \sqrt{b_{1}^{2}-3(2b_{0}-1)}}{3} \end{equation} \noindent where $h_{k,1},\ b_{0}$ and $b_{1}$ are given in corollary \ref{LegPadeApprox} (b). \end{itemize}
\end{corollary}
Thus, it is possible to compute the poles of the Pad\'e sequences $\Phi_{p,1}^{(n)}$ and $\Phi_{p,2}^{(n)}$, $p=0,1,\ldots$, using only the Tau coefficients in relations (\ref{polos1})--(\ref{Legpolos2}), without computing the Pad\'e approximants themselves.
Applying the relation \eqref{polos1} to the Tau solution $y_{150}$ of Example \ref{exampleSQRT}, we obtained the results shown in Table \ref{tab:poles1}. We can see that all poles lie on the branch cut of $y$ and approach the singularity $\zeta =-1$ as $p$ increases. We remark that, in this case, since the exact Fourier coefficients $c_p=2(-1)^{p+1}/(4p^2-1)$, $p\geq 1$, are known, we have access to the exact formula $\lambda_p=-1-\frac{3}{(p-1/2)(p+5/2)}$, easily obtained by substitution in \eqref{polos1}.
\begin{table}[htb]
\centering
\begin{tabular}{| c || c | c | c | c | c | }\hline
$p$& $5$ & $45$ & $85$ & $105$ & $145$ \\ \hline \hline $\lambda_{p}$ & $-1.088889$ & $-1.001419$ & $-1.000406$ & $-1.000267$ & $-1.000141$ \\
\hline
\end{tabular}
\caption{Poles of $\Phi_{p,1}^{(150)}$ for some values of $p$.}
\label{tab:poles1} \end{table}
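The closed-form pole expression can be cross-checked directly; the sketch below (our own illustration, float arithmetic) uses the exact coefficients $c_0=1$, $c_k=2(-1)^{k+1}/(4k^2-1)$ and compares the two formulas.

```python
def c(k):
    # exact Chebyshev coefficients of y(t) = (pi/4)sqrt(2(t+1)), eq. (yex1)
    return 1.0 if k == 0 else 2.0 * (-1) ** (k + 1) / (4 * k * k - 1)

def pole_p1(p):
    # pole of Phi_{p,1} from eq. (polos1); note c'_p = c_p for p >= 1
    return (c(p) + c(p + 2)) / (2.0 * c(p + 1))

def closed(p):
    # closed form obtained by substituting the coefficients
    return -1.0 - 3.0 / ((p - 0.5) * (p + 2.5))
```

In particular, `pole_p1(5)` reproduces the table value $-1.088889$.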
\section{Algebraic properties of filters}
In the previous sections we saw how to obtain a rational approximation of a series, even when its coefficients are unknown. In this section we present some numerical properties of Frobenius-Pad\'e filters, justifying the results observed in the numerical experiments.
Our first property results from the following corollary of proposition \ref{Prop:ReqRel}:
\begin{corollary}\label{corol:filter} Let \[ y = \sum_{i=0}^{\infty} c_i \phi_i \quad \text{and}\quad z = \sum_{i=0}^{\infty} d_i \phi_i \] be two formal series where $\left\{\phi_i\right\}_{i\geq 0}$ is an orthogonal polynomial basis satisfying \eqref{RecRel}. Let $H_y^{[p/q]}$ and $H_z^{[p/q]}$ be the matrices defined in \eqref{AFPsist1}, with subscript $y$ and $z$ identifying the respective series and with similar notation for the other matrices and vectors in \eqref{AFPsist1} and \eqref{AFPsist2}.
If \[ d_i=\rho\, c_i,\ i=0,1,\ldots,n \]
with a constant $0< |\rho|< \infty$ and if $H_y^{[p/q]}$ is regular, then $\forall p,q\in \mathbb{N}$ such that $p+2q\leq n$ we have: \begin{itemize}
\item[(a)] $H_z^{[p/q]}=\rho\, H_y^{[p/q]},\quad G_z^{[p/q]}=\rho\, G_y^{[p/q]},\quad h_z^{[p/q]}=\rho\, h_y^{[p/q]}\ $ \\ and $\ g_z^{[p/q]}=\rho\, g_y^{[p/q]}$;
\item[(b)] $H_z^{[p/q]}$ is regular;
\item[(c)] $N_{p,q}(z)=N_{p,q}(y)$ and $D_{p,q}(z)=\rho\, D_{p,q}(y)$, where $N$ and $D$ are the numerator and denominator polynomials introduced in \eqref{phi};
\item[(d)] $\Phi_{p,q}(z)=\rho\, \Phi_{p,q}(y)$ \end{itemize}
\end{corollary}
The proof of (a) follows by induction over $j$ in \eqref{AFP:eqrel1}; then (b) and (c) follow from (a), and (d) follows from (b) and (c).
From these results we get the following corollary, related to the Frobenius-Pad\'{e} filtering of spectral approximations of Fourier series, which explains the good behaviour of the filtering process.
\begin{corollary}\label{corol:poles} Let $p,q,m,n\in\mathbb{N}_0$ and $\Phi_{p,q}(y_n)$ be a $(p,q)$ Pad\'e filter from $y_n$. Suppose that for some finite and non null constant $\rho\in\mathbb{R}$, the coefficients $c_k^{(n)}=\rho\, c_k,\ k=0,\ldots,m$ and $m\leq n$. \begin{itemize}
\item[(a)] If $p+2q\leq m$ and $H_{y_n}^{[p/q]}$ is regular then $\Phi_{p,q}(y_n)=\rho\, \Phi_{p,q}(y)$;
\item[(b)] If $p\leq m-2$ then $\lambda_{p}^{(n)}=\lambda_{p}$, that is, the pole of the filter $\Phi_{p,1}(y_n)$ coincides with the pole of $\Phi_{p,1}(y)$;
\item[(c)] If $p\leq m-4$ then $\lambda_{p^\pm}^{(n)}=\lambda_{p^\pm}$, that is, the poles of the filter $\Phi_{p,2}(y_n)$ coincide with the poles of $\Phi_{p,2}(y)$ \end{itemize} \end{corollary}
Thus, if we have a set of numerical approximations $c_k^{(n)}\approx c_k,\ k=0,\ldots,m$, and if all of those approximations have the same relative error $\delta=(c_k-c_k^{(n)})/c_k,\ k=0,\ldots,m$, then corollaries \ref{corol:filter} and \ref{corol:poles} hold with $\rho=1-\delta$, and our filtering process, working with $c_k^{(n)},\ k=0,\ldots,m$, will give rational approximants with relative error $\delta$ and with the same poles as if we worked with the exact coefficients $c_k,\ k=0,\ldots,m$.
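This scale-invariance is easy to observe numerically; in the sketch below (our own illustration) the $(p,1)$ Chebyshev pole formula gives identical poles for a series and a uniformly rescaled copy, mimicking a constant relative error $\delta$ with $\rho=1-\delta$.

```python
# Chebyshev coefficients of example 1 and a copy carrying a constant
# relative error delta = 0.25, i.e. rho = 0.75
c = [1.0] + [2 * (-1) ** (k + 1) / (4 * k * k - 1) for k in range(1, 9)]
rho = 0.75
d = [rho * ck for ck in c]

def pole_p1(c, p):
    # pole of Phi_{p,1}, eq. (polos1); c'_p = c_p for p >= 1
    return (c[p] + c[p + 2]) / (2.0 * c[p + 1])
```

The scaling factor cancels in the ratio, so the poles agree to rounding.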
\section{Numerical example}
In the next example we will test this filtering procedure on Legendre-Tau solutions of a differential equation with boundary conditions. Furthermore, since the differential equation depends on a parameter that allows us to control the rate of convergence of the Legendre-Tau method, we can test the behavior of the filters when applied to problems with different rates of convergence.
\begin{examp}\label{genfunLeg} Let us consider the family of functions \[ y(t)=\frac{1-\alpha^2}{(1+\alpha^2-2\alpha t)^{3/2}},\ t\in ]-1, 1[ \] depending on the real parameter $\alpha$ and whose Legendre series representation \[ y(t)=\sum_{k=0}^{\infty} (2k+1)\alpha^k P_k(t),\ t\in [-1, 1] \] can be derived from the generating function of the Legendre polynomials \cite{Abramowitz65}. For $\alpha\neq 0$, $y$ has branch points at $\zeta=\frac{1}{2}(\alpha+\frac{1}{\alpha})$ and at $\infty$, and furthermore, $\zeta$ is the singularity closest to the interval of orthogonality $[-1,1]$.
For our purpose, we can define $y$ as the solution of the boundary value pro\-blem \begin{equation} \label{examp2} \left\{
\begin{array}{ll}
(1+\alpha^2-2\alpha t)^{2}y''(t)-15\alpha^2 y(t)=0, & t\in [-1, 1] \\
y(-1)=\frac{1-\alpha}{(1+\alpha)^2},\ y(1)=\frac{1+\alpha}{(1-\alpha)^2}
\end{array} \right. \end{equation}
\noindent The matrix form of the operator $D$, introduced in (\ref{piphi}), associated to this differential problem is given by \[ \Pi_{\phi} = \eta_{\phi}^{2}((1+\alpha^2)I-2\alpha \mu_{\phi})^2-15\alpha^2 I \] where $I$ is the infinite identity matrix and $\phi_i$ are the Legendre polynomials.
\begin{figure}
\caption{$w$-norm of the errors $e_{n}$, $n=50, \ 151, \ 501$ and $1000$, for values of $\alpha\in ]0,1[$.}
\label{Errotauleg1a}
\end{figure}
The rate of convergence of the Tau method applied to problem (\ref{examp2}) depends on the parameter $\alpha$, exhibiting a slow rate of convergence for values of $\alpha$ near $1$. To illustrate this behavior we show in Figure \ref{Errotauleg1a} the weighted norm $||e_{n}||_{w}$ of the errors of four Legendre-Tau solutions $y_{n}$, $n=50,151,501$ and $1000$, for values $\alpha\in]0,1[$. We can see that for $\alpha<0.5$ it is enough to compute $y_{50}$ to get an error of the order of the machine precision. For values of $\alpha>0.5$, we need to increase the order of the Legendre-Tau solutions to get a reasonable approximation, and machine precision is no longer attained.
\begin{figure}
\caption{Froissart Tables, computed with $tol=10^{-5}$, of $y_{150}$ with $\alpha=0.9$ in the left table and for $y_{1000}$ with $\alpha=0.99$ in the right.}
\label{legendrefroissart1}
\end{figure}
\begin{figure}
\caption{Left image: absolute error of the Tau solution $y_{150}$ of the problem with $\alpha=0.9$ (black line) and absolute error of the filter $\Phi_{7,7}(y_{150})$ (blue line). Right image: absolute error of the Tau solution $y_{1000}$ of the problem with $\alpha=0.99$ (black line) and absolute error of the filter $\Phi_{6,6}(y_{1000})$ (blue line).}
\label{errosfiltragem}
\end{figure}
In order to test our filtering method, we computed the Legendre-Tau solution $y_{150}$ of (\ref{examp2}) with $\alpha=0.9$ and the Legendre-Tau solution $y_{1000}$ of the same problem with $\alpha=0.99$. Proceeding analogously to Example \ref{exampleSQRT}, we took as a ``good'' filter the diagonal Legendre-Pad\'e approximant $\Phi_{p,p}(y_{n};t)$ in the white region of the Froissart table for which $p$ is maximal. Figure \ref{legendrefroissart1} shows the Froissart tables (with $tol=10^{-5}$) of $y_{150}$, $\alpha=0.9$ (left table) and of $y_{1000}$, $\alpha=0.99$ (right table). Inspecting the tables we see that $\Phi_{7,7}(y_{150};t)$ is a ``good'' diagonal filter for the first problem, while for the second problem we must choose $\Phi_{6,6}(y_{1000};t)$. In Figure \ref{errosfiltragem} we show the absolute errors of the Tau solutions $y_{150}$ and $y_{1000}$ and the absolute errors of their filters, $\Phi_{7,7}(y_{150})$ and $\Phi_{6,6}(y_{1000})$, respectively. In both cases, the filters improve the Legendre-Tau approximations for values of $t\in[-1,1]$ that are not close to $t=1$.
In order to estimate $\zeta$ we can use the relation \eqref{Legpolos1} to compute the poles of the filters $\Phi_{p,1}(y_{n})$ and use them as approximations of $\zeta$. However, this problem is a differential equation with boundary conditions, and we do not have a relation between the Tau coefficients and the Fourier coefficients as in Example \ref{exampleSQRT}. In fact, the relative errors of the Tau coefficients are not constant, and we need to proceed carefully, because the poles of $\Phi_{p,1}(y_{n})$ do not have the same behavior as the poles of $\Phi_{p,1}(y)$. \end{examp}
\section{Conclusions}
Our numerical experiments reveal that it is possible to improve Tau solution approximations using Pad\'e approximation. The noise introduced into the Tau coefficients by the numerical computation of Pad\'e approximants causes the occurrence of Froissart doublets in high-order rational approximants. The Froissart table, introduced in this work, proves to be an efficient tool for finding a good filter of the Tau solution.
This filtering method also makes it possible to estimate singularities of exact solutions, since the poles of $\Phi_{p,1}(y_{n})$ and $\Phi_{p,2}(y_{n})$ can be computed using only the Tau coefficients.
Some algebraic properties of the filtering process were introduced, justifying the good properties of the filtered solutions.
\end{document}
\begin{document}
\title {Squarefree vertex cover algebras}
\author {Shamila Bayati and Farhad Rahmati}
\address{Shamila Bayati, Faculty of Mathematics and Computer Science, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Ave., Tehran 15914, Iran}\email{shamilabayati@gmail.com}
\address{Farhad Rahmati, Faculty of Mathematics and Computer Science, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Ave., Tehran 15914, Iran} \email{frahmati@cic.aut.ac.ir}
\begin{abstract} In this paper we introduce squarefree vertex cover algebras. We study the question of when these algebras coincide with the ordinary vertex cover algebras and when they are standard graded. In this context we exhibit a duality theorem for squarefree vertex cover algebras.
\end{abstract}
\subjclass{13A30, 05C65} \keywords{Monomial ideals; Alexander Dual; Vertex Cover algebras; Squarefree Borel ideals}
\maketitle
\section*{Introduction}
The study of squarefree vertex cover algebras was originally motivated by the attempt to better understand the Alexander dual of the facet ideals of the skeletons of a simplicial complex. In the special case of the simplicial complex $\Delta_r(P)$ whose facets correspond to sequences $p_{i_1}\leq p_{i_2}\leq \ldots \leq p_{i_r}$ in a finite poset $P$, it was shown in \cite{VHF} that $I(\Delta_{r}(P)^{(k)})^\vee=(I(\Delta_{r}(P))^\vee)^{\langle d-k\rangle}$; see Section~1 for a detailed explanation of the formula.
The question arises whether this kind of duality is valid for more general simplicial complexes. Considering this problem, it turned out that this is indeed the case for a pure simplicial complex $\Delta$ on the vertex set $[n]$, provided a certain algebra attached to $\Delta$ is standard graded. The algebra in question, which we now call the squarefree vertex cover algebra of $\Delta$, denoted by $B(\Delta)$, is defined as follows: let $K$ be a field and $S=K[x_1,\ldots,x_n]$ be the polynomial ring over $K$ in the variables $x_1,\ldots,x_n$. Then $B(\Delta)$ is the graded $S$-algebra generated by the monomials $x^{\bold c} t^k\in S[t]$ where the $(0,1)$-vector ${\bold c}$ is a $k$-cover of $\Delta$ in the sense of \cite{HHT}. Thus, in contrast to the vertex cover algebra $A(\Delta)$, introduced in \cite{HHT}, whose generators correspond to all $k$-covers, $B(\Delta)$ is generated only by the monomials corresponding to squarefree $k$-covers, called binary $k$-covers in \cite{DV}. In particular, $B(\Delta)$ is a graded $S$-subalgebra of $A(\Delta)$, and the generators of $B(\Delta)$ and $A(\Delta)$ in degree $1$ coincide.
The graded components of $B(\Delta)$ are of the form $L_k(\Delta)t^k$, where $L_k(\Delta)$ is a monomial ideal whose squarefree part, denoted by $L_k(\Delta)^{sq}$, corresponds to the squarefree $k$-covers. The above mentioned duality is a consequence of the following duality \begin{eqnarray} \label{volleyball}
L_j(\Delta^{(d-i)})^{sq}=L_i(\Delta^{(d-j)})^{sq}, \quad 1\leq i,j\leq d, \end{eqnarray} inside $B(\Delta)$, which is valid for any pure simplicial complex of dimension $d-1$, no matter whether $B(\Delta)$ is standard graded or not.
This result is a simple consequence of Theorem~\ref{dual}, where it is shown that the squarefree $k$-covers of $\Delta$ correspond to the vertex covers of the $(d-k)$-skeleton $\Delta^{(d-k)}$ of $\Delta$.
The duality described in (\ref{volleyball}) yields the desired generalization of \cite[Theorem 1.1]{VHF}, and we obtain in Corollary~\ref{duality} \[
I(\Delta^{(k)})^\vee=(I(\Delta)^\vee)^{\langle d-k\rangle} \quad\text{for all} \quad k, \] if and only if $B(\Delta)$ is standard graded. Therefore it is of interest to know when $B(\Delta)$ is standard graded.
The starting point of our investigations has been the formula $I(\Delta_{r}(P)^{(k)})^\vee=(I(\Delta_{r}(P))^\vee)^{\langle d-k\rangle}$. As explained before this implies that $B(\Delta_r(P))$ is standard graded. As a last result in Section~1, we show that even the algebra $A(\Delta_r(P))$ is standard graded.
In Section 2 we compare the algebras $A(\Delta)$ and $B(\Delta)$, and discuss the following questions: \begin{itemize} \item[(1)] When is $B(\Delta)$ standard graded and when does this imply that $A(\Delta)$ is standard graded? \item[(2)] When do we have that $B(\Delta)=A(\Delta)$? \end{itemize} In general $B(\Delta)$ may be standard graded while $A(\Delta)$ is not, as shown by an example communicated to the authors by Villarreal; see Section~2.
On the other hand, quite often $A(\Delta)$ is standard graded if $B(\Delta)$ is standard graded. In Proposition~\ref{standardgraph} we show that for any $1$-dimensional simplicial complex $\Delta$, the algebra $B(\Delta)$ is standard graded if and only if $A(\Delta)$ is standard graded. We also show in Proposition~\ref{coveringIdeal} that the same statement holds true when the facet ideal of $\Delta$ is the covering ideal of a graph. In Theorem~\ref{no-odd}, it is shown that this is also the case for all subcomplexes of $\Delta$ if and only if $\Delta$ has no special odd cycles. In the remaining part of Section~2 we present cases in which $B(\Delta)=A(\Delta)$. The $1$-dimensional simplicial complexes with this property are classified in Proposition~\ref{graphequality}. A classification of simplicial complexes $\Delta$ of higher dimension with $B(\Delta)=A(\Delta)$ seems to be inaccessible for the moment. Thus we consider simplicial complexes in higher dimensions which generalize the concept of a graph: roughly speaking, we replace the vertices of a graph by simplices of various dimensions, and the underlying graph is called the intersection graph of $\Delta$. The main result regarding this class of simplicial complexes is formulated in Theorem~\ref{str-intersec-prop}, where a criterion for the equality $A(\Delta)=B(\Delta)$ is given in terms of the intersection graph of $\Delta$.
The last section of this paper is devoted to the study of the vertex cover algebras of shifted simplicial complexes. A simplicial complex $\Delta$ is shifted if its set of facets is a Borel set ${\mathcal B}$. When the set of facets of $\Delta$ is principal Borel, we have $B(\Delta)=A(\Delta)$, as shown in Theorem~\ref{borel-generators}. Since all skeletons of such simplicial complexes also correspond to principal Borel sets, one even has that $B(\Delta^{(i)})=A(\Delta^{(i)})$ for all~$i$.
The squarefree monomial ideal $(\{x_F\:\; x_Ft\in A_1(\Delta)\})$ generated by the degree~1 elements of $A(\Delta)$ is the Alexander dual $I(\Delta)^\vee$ of $I(\Delta)$. Francisco, Mermin and Schweig showed that this ideal is again a squarefree Borel ideal, and they gave precise Borel generators in the case that ${\mathcal B}$ is principal Borel. We generalize this result in Theorem~\ref{B-generators} by showing that the squarefree part of the ideal $(\{x_F\:\; x_Ft^k\in A_k(\Delta)\})$ is again a squarefree Borel ideal whose generators can be explicitly described when ${\mathcal B}$ is principal Borel. It turns out that in this case the $S$-algebra $A(\Delta)$ may have minimal generators in degrees up to $\dim\Delta+1$. In Proposition~\ref{higher-generator}, we present a necessary and sufficient condition for this maximal degree to be achieved.
We would like to thank Professor Villarreal for several useful comments and for drawing our attention to related work on this subject.
\section{Duality}
We first fix some notation and recall some basic concepts regarding simplicial complexes.
Let $\Delta$ be a simplicial complex of dimension $d-1$ on the vertex set $[n]$. We denote by ${\mathcal F}(\Delta)$ the set of facets of $\Delta$.
The $i$-skeleton $\Delta^{(i)}$ of $\Delta$ is defined to be the simplicial complex whose faces are those faces $F$ of $\Delta$ with $\dim F\leq i$. Observe that if $\Delta$ is pure, then $\Delta^{(i)}$ is a pure simplicial complex with ${\mathcal F}(\Delta^{(i)})=\{F\in \Delta\:\; \dim F=i\}$.
The Alexander dual $\Delta^\vee$ of $\Delta$ is the simplicial complex with \[ \Delta^\vee=\{[n]\setminus F\:\; F\not\in \Delta\}. \] One has $(\Delta^\vee)^\vee =\Delta$.
Let $K$ be a field and $S=K[x_1,\ldots,x_n]$ the polynomial ring over $K$ in the variables $x_1,\ldots,x_n$. The Stanley-Reisner ideal of $\Delta$ is defined to be \[ I_\Delta=(\{x_F\:\; F\subseteq [n], F\not\in\Delta\}). \] Here $x_F=\prod_{i\in F}x_i$ for $F\subseteq [n]$.
For $F\subseteq [n]$, we denote by $P_F$ the monomial prime ideal $(\{x_i\:\; i\in F\})$. If $$I_\Delta=P_{F_1}\sect \cdots \sect P_{F_m}$$ is the irredundant primary decomposition of $I_\Delta$, then $I_{\Delta^\vee}$ is minimally generated by $x_{F_1},\ldots,x_{F_m}.$
Now let $I\subseteq S$ be an arbitrary squarefree monomial ideal. There is a unique simplicial complex $\Delta$ on $[n]$ such that $I=I_\Delta$. We set $I^\vee =I_{\Delta^\vee}$. It follows that if $I=P_{F_1}\sect \cdots \sect P_{F_m}$, then $I^\vee=(x_{F_1},\ldots,x_{F_m})$, and if $J=(x_{G_1},\ldots,x_{G_r})$, then $J^\vee=P_{G_1}\sect \cdots \sect P_{G_r}$.
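As a simple illustration of these operations, let $n=3$ and consider the squarefree monomial ideal \[ I=P_{\{1,2\}}\sect P_{\{2,3\}}=(x_2,\; x_1x_3). \] Then $I^\vee=(x_{\{1,2\}},x_{\{2,3\}})=(x_1x_2,\;x_2x_3)$, and conversely $(x_1x_2,x_2x_3)^\vee=P_{\{1,2\}}\sect P_{\{2,3\}}=I$, in accordance with $(I^\vee)^\vee=I$.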
Let $k$ be a nonnegative integer. A {\em $k$-cover} or a {\em cover of order $k$} of $\Delta$ is a nonzero vector $\mathbf{c}=(c_1,\ldots,c_n)$ whose entries are nonnegative integers such that $\sum_{i\in F}c_i\geq k$ for all $F\in {\mathcal F}(\Delta)$. We denote the set $\{i\in [n]\:\; c_i\neq 0\}$ by $\supp({\bold c})$. The $k$-cover $\mathbf{c}$ is called {\em squarefree } if $c_i\leq 1$ for all $i$. A $(0,1)$-vector $\mathbf{c}$ is a squarefree cover of $\Delta$ with positive order if and only if $\supp({\bold c})$ is a vertex cover of $\Delta$. A $k$-cover $\mathbf{c}$ of $\Delta$ is called {\em decomposable} if there exist an $i$-cover $\mathbf{a}$ and a $j$-cover $\mathbf{b}$ of $\Delta$ such that $\mathbf{c}=\mathbf{a}+\mathbf{b}$ and $k=i+j$. Then ${\bold c}={\bold a}+{\bold b}$ is called a {\em decomposition} of ${\bold c}$. A $k$-cover of $\Delta$ is {\em indecomposable} if it is not decomposable.
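As a simple illustration, let $\Delta$ be the graph on the vertex set $[3]$ with ${\mathcal F}(\Delta)=\{\{1,2\},\{2,3\}\}$. Then ${\bold c}=(1,1,1)$ is a squarefree $2$-cover of $\Delta$, and it is decomposable: ${\bold c}=(1,0,1)+(0,1,0)$ is a decomposition into two squarefree $1$-covers. On the other hand, ${\bold c}=(0,1,0)$ is an indecomposable $1$-cover, whose support $\{2\}$ is a minimal vertex cover of $\Delta$.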
The $K$-vector space spanned by the monomials $x^{{\bold c}}$ where ${\bold c}$ is a $k$-cover, denoted by $J_k(\Delta)$, is an ideal. Obviously one has $J_k(\Delta)J_\ell(\Delta)\subseteq J_{k+\ell}(\Delta)$ for all $k$ and $\ell$. Therefore \[ A(\Delta)=\Dirsum_{k\geq 0}J_k(\Delta)t^k\subseteq S[t] \] is a graded $S$-subalgebra of the polynomial ring $S[t]$ over $S$ in the variable $t$. The $S$-algebra $A(\Delta)$ is called the vertex cover algebra of $\Delta$. This algebra is minimally generated by the monomials $x^{\mathbf{c}}t^k$ where $\mathbf{c}$ is an indecomposable $k$-cover of $\Delta$ with $k\neq 0$.
If ${\mathcal F}(\Delta)=\{F_1,\ldots,F_m\}$, then $J_k(\Delta)=P_{F_1}^k\sect \cdots \sect P_{F_m}^k$ \cite[Lemma 4.1]{HHT}. In particular, for the facet ideal of $\Delta$, i.e.
$I(\Delta)= (x_{F_1},\ldots,x_{F_m})$, one has $J_k(\Delta)=(I(\Delta)^\vee)^{(k)}$ where $(I(\Delta)^\vee)^{(k)}$ is the $k$-th symbolic power of $I(\Delta)^\vee$.
Let $B(\Delta)$ be the $S$-subalgebra of $S[t]$ generated by the elements $x^{\bold c} t^k$ where ${\bold c}$ is a squarefree $k$-cover. The algebra $B(\Delta)$ is called the {\em squarefree vertex cover algebra} of $\Delta$. Observe that $B(\Delta)$ is a graded $S$-algebra, \[ B(\Delta)=\Dirsum_{k\geq 0}L_k(\Delta)t^k. \] It is clear that each $L_k(\Delta)$ is a monomial ideal in $S$ and that $L_k(\Delta)\subseteq J_k(\Delta)$ for all $k$.
For a monomial ideal $I\subseteq S$, we denote by $I^{sq}$ the squarefree monomial ideal generated by all squarefree monomials $u\in I$. The $k$-th squarefree power of a monomial ideal $I$, denoted by $I^{\langle k\rangle}$, is defined to be $(I^k)^{sq}$.
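For instance, for $I=(x_1,x_2)\subseteq K[x_1,x_2]$ one has $I^2=(x_1^2,x_1x_2,x_2^2)$, hence $I^{\langle 2\rangle}=(I^2)^{sq}=(x_1x_2)$, while $I^{sq}=I$ since $I$ is already squarefree.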
\begin{Proposition} \label{alsoeasy} Let $\Delta$ be a simplicial complex with ${\mathcal F}(\Delta)=\{F_1,\ldots,F_m\}$. Then $$ L_k(\Delta)^{sq}= \bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle},$$ and the algebra $B(\Delta)$ is standard graded if and only if $\bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle}={(\bigcap _{i=1}^m P_{F_i})}^{\langle k\rangle}.$ \end{Proposition} \begin{proof}
Let $x^{{\bold c}}$ be a squarefree monomial in $L_k(\Delta)^{sq}$. Then $\sum_{j\in F_i}c_j\geq k$ for all $i$, and $c_j\leq 1$ for all $j$. So if $u_i=\prod_{j\in F_i}x_j^{c_j}$, then $u_i$ is of degree at least $k$, which implies $u_i\in P_{F_i}^{\langle k\rangle}$. Furthermore, we have $u_i\,|\,x^{{\bold c}}$ for all $i$. Therefore, $x^{{\bold c}}\in \bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle}$. On the other hand, if
$x^{{\bold c}} \in \bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle}$, then $\sum_{j\in F_i}c_j\geq k$ for all $i$, or in other words ${\bold c}$ is a $k$-cover. Hence $x^{{\bold c}}\in L_k(\Delta)^{sq}.$
One has ${(\bigcap _{i=1}^m P_{F_i})}^{\langle k\rangle} \subseteq \bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle}$. Now the graded algebra $B(\Delta)$ is standard graded if and only if every squarefree $k$-cover of $\Delta$ can be written as a sum of $k$ squarefree $1$-covers of $\Delta$, and this is the case if and only if
$\bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle} \subseteq {(\bigcap _{i=1}^m P_{F_i})}^{\langle k\rangle}$. \end{proof}
It turns out that every squarefree $k$-cover of $\Delta$ can be considered as a $1$-cover of a suitable skeleton of $\Delta$ as we see in the next result. \begin{Theorem} \label{dual} Let $\Delta$ be a simplicial complex of dimension $d-1$ on the vertex set $[n]$, and $k \in \{1, \dots, d\}$. Then \[ L_k(\Delta)^{sq}\subseteq I(\Delta^{(d-k)})^\vee \quad \text{for all} \quad k. \] Furthermore, the following conditions are equivalent: \begin{enumerate} \item[(i)]$\Delta$ is a pure simplicial complex; \item[(ii)]$L_k(\Delta)^{sq}= I(\Delta^{(d-k)})^\vee \quad \text{for some} \quad k \neq 1$; \item[(iii)]$L_k(\Delta)^{sq}= I(\Delta^{(d-k)})^\vee \quad \text{for all} \quad k$. \end{enumerate} \end{Theorem}
\begin{proof} Let $x^{{\bold c}}$ belong to the minimal set of monomial generators of the ideal $L_k(\Delta)^{sq}$. So ${\bold c}$ is a squarefree $k$-cover of $\Delta$. Let $C=\supp({\bold c}) =\{i : c_i=1\}$. Then $C$ contains at least $k$ elements of each facet. Therefore, $C$ also meets all $(d-k)$-faces of $\Delta$: indeed, each $(d-k)$-face $G$ is contained in some facet $F$, and since $|F\setminus C|\leq |F|-k\leq d-k<|G|$, the face $G$ meets $C$. This implies that $x^{\bold c} \in I(\Delta^{(d-k)})^\vee$.
Now we show that statements (i), (ii) and (iii) are equivalent.
(i)$\Rightarrow$ (iii): Let $x^{{\bold c}}$ be in the minimal set of monomial generators of the ideal $I(\Delta^{(d-k)})^\vee$. Then $C=\supp({\bold c})=\{i : c_i=1\}$ is a minimal vertex cover of $\Delta^{(d-k)}$. Every facet of $\Delta$ is of dimension $d-1$. Hence the set $C$ contains at least $k$ vertices of each facet, because otherwise there would exist a $(d-k)$-face of $\Delta$ which does not intersect $C$. Therefore ${\bold c}$ is a $k$-cover of $\Delta$. Since $c_i \leq 1$ for all $i$, the cover ${\bold c}$ is squarefree, and hence $x^{\bold c} \in L_k(\Delta)^{sq}$.
(iii)$\Rightarrow $(ii): This implication is trivial.
(ii)$\Rightarrow$ (i): Suppose for some fixed $k \neq 1$ we have $L_k(\Delta)^{sq}= I(\Delta^{(d-k)})^\vee$. On the contrary, assume there is a facet $H$ of $\Delta$ of dimension $\ell-1$ where $\ell<d$. Every facet of $\Delta^{(d-k)}$ which is not a subset of $H$ contains at least one vertex which does not belong to $H$. Hence there exists a set $A\subseteq [n]$ such that $A \cap H = \emptyset$ and $A \cap F \neq \emptyset$ for all facets $F$ of $\Delta^{(d-k)}$ which are not subsets of $H$.
First suppose that $\ell-1<d-k$, then $H$ is also a facet of $\Delta^{(d-k)}$. By choosing a vertex $i$ of $H$, the set $C = A \cup \{ i\}$ is a vertex cover of $\Delta^{(d-k)}$ which contains exactly one vertex of $H$. Hence for the vector ${\bold c}$ with $C=\supp({\bold c})$, the monomial $x^{{\bold c}}$ belongs to $I(\Delta^{(d-k)})^\vee$. But since $C$ contains only one vertex of $H$, it does not belong to $L_k(\Delta)^{sq}$, a contradiction.
Next suppose that $\ell-1 \geq d-k$. By choosing $\ell+k-d$ vertices $i_1,\ldots, i_{\ell+k-d}$ of $H$, the set
$C= A \cup \{i_1,\ldots, i_{\ell+k-d}\}$ is a vertex cover of $\Delta^{(d-k)}$. Indeed, let $F$ be a facet of $\Delta^{(d-k)}$. If $F\not\subseteq H$, then $F\sect C\neq\emptyset$ because $F\sect A\neq\emptyset$ by the choice of $A$. If $F\subseteq H$, then $|F|=d-k+1$. So
$F\sect C\neq \emptyset$ because $$|F|+|C\sect H|=(d-k+1)+(\ell+k-d)>\ell=|H|,$$ and both $F$ and $C\sect H$ are subsets of $H$. Hence for the vector ${\bold c}$ with $C=\supp({\bold c})$, we have $x^{{\bold c}} \in I(\Delta^{(d-k)})^\vee$. Thus $x^{{\bold c}}\in L_k(\Delta)^{sq}$, which implies that $C$ contains at least $k$ elements of $H$. Furthermore, we know that $C$ contains exactly $\ell+k-d$ vertices of $H$. Hence $\ell+k-d \geq k$, which means $\ell \geq d$, a contradiction. \end{proof}
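To illustrate Theorem~\ref{dual} in the smallest nontrivial case, let $\Delta$ be a pure simplicial complex of dimension $1$ on $[n]$, that is, a graph without isolated vertices, so that $d=2$. For $k=2$ the theorem asserts that $L_2(\Delta)^{sq}=I(\Delta^{(0)})^\vee$. Indeed, ${\mathcal F}(\Delta^{(0)})=\{\{1\},\ldots,\{n\}\}$, so $I(\Delta^{(0)})^\vee=P_{\{1\}}\sect\cdots\sect P_{\{n\}}=(x_1x_2\cdots x_n)$; on the other hand, the only squarefree $2$-cover of $\Delta$ is ${\bold c}=(1,\ldots,1)$, since $c_i+c_j\geq 2$ forces $c_i=1$ for every vertex $i$ lying on an edge, and hence $L_2(\Delta)^{sq}=(x_1x_2\cdots x_n)$ as well.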
As an immediate consequence of Theorem~\ref{dual} we obtain
\begin{Corollary} \label{duality} Let $\Delta$ be a pure simplicial complex of dimension $d-1$. Then \[
I(\Delta^{(k)})^\vee=(I(\Delta)^\vee)^{\langle d-k\rangle} \quad\text{for all} \quad k, \]
if and only if $B(\Delta)$ is standard graded. \end{Corollary} \begin{proof} It is enough to notice that $B(\Delta)$ is standard graded if and only if $L_k(\Delta)^{sq}=(I(\Delta)^\vee)^{\langle k\rangle}$ for all $k$. Now Theorem~\ref{dual} yields the result. \end{proof}
\begin{Corollary}[Duality] Let $\Delta$ be a pure simplicial complex of dimension $d-1$. Then \[
L_j(\Delta^{(d-i)})^{sq}=L_i(\Delta^{(d-j)})^{sq}, \] where $1\leq i,j \leq d.$ \end{Corollary} \begin{proof} Consider integers $i,j\in \{1,\ldots,d\}$. Since $\Delta$ is pure, the simplicial complexes $\Delta^{(d-i)}$ and $\Delta^{(d-j)}$ are also pure. So by Theorem~\ref{dual}, we have $$L_j(\Delta^{(d-i)})^{sq}= I(\Delta^{((d-i+1)-j)})^\vee,$$ and $$L_i(\Delta^{(d-j)})^{sq}= I(\Delta^{((d-j+1)-i)})^\vee.$$
Thus $L_j(\Delta^{(d-i)})^{sq}=L_i(\Delta^{(d-j)})^{sq}$. \end{proof}
As an application we consider the following class of ideals introduced in \cite{VHF}. Let $P=\{p_1,\ldots,p_m\}$ be a finite poset and $r\geq 1$ an integer. We consider the $(r\times m)$-matrix $X=(x_{ij})$ of indeterminates, and define the ideal $I_r(P)$ generated by all monomials $x_{1j_1}{x_{2j_2}}\cdots x_{rj_r}$ with $p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_r}$. Let $\Delta_{r}(P)$ be the simplicial complex on the vertex set $V=\{(i,j)\: 1\leq i \leq r,\;1\leq j \leq m \}$ with the property that $I(\Delta_{r}(P))=I_r(P)$. So $\Delta_r(P)$ is of dimension $r-1$. In \cite[Theorem 1.1]{VHF}, it is shown that \[
I(\Delta_{r}(P)^{(k)})^\vee=(I(\Delta_{r}(P))^\vee)^{\langle r-k\rangle} \quad\text{for all} \quad k. \] Hence Corollary \ref{duality} implies that $B(\Delta_{r}(P))$ is standard graded. One even has
\begin{Theorem} \label{evenmaybe} The $S$-algebra $A(\Delta_{r}(P))$ is standard graded. \end{Theorem} \begin{proof} Let the $(r\times m)$-matrix ${\bold c}=[c_{ij}]$ be a $k$-cover of $\Delta_r(P)$ with $k\geq 2$. We show that there is a decomposition of ${\bold c}$ into a 1-cover ${\bold a}$ and a $(k-1)$-cover ${\bold b}$ of $\Delta_r(P)$. This will imply that $A(\Delta_{r}(P))$ is standard graded.
We consider the subset $A$ of $V$ containing the vertices $(i,j)\in V$ with the following properties: \begin{enumerate} \item[(i)] $c_{ij}\neq 0$; \item[(ii)] There exists a chain $ p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_{i-1}}$ of elements of $P$ with $ p_{j_{i-1}} \leq p_j$ and $c_{t,j_t}=0$, for $t=1,\dots,i-1$. \end{enumerate} One should notice that $A$ includes the set $\{(1,j)\in V \: c_{1,j}\neq 0 \}$. First, we show that $A$ is a vertex cover of $\Delta_r(P)$. Suppose $F$ is a facet of $\Delta_r(P)$. So there exists the chain $p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_r}$ of elements of $P$ such that $$F=\{(1,j_1),(2,j_2),\ldots,(r,j_r)\}.$$ Since ${\bold c}$ is a cover of $\Delta_r(P)$ of positive order, the set $D=\{(i,j_i)\in F \: c_{ij_i}\neq 0\}$ is nonempty. Suppose $t=\min\{s\:(s,j_s)\in D \}$. Then $(t,j_t)$ satisfies the property (i) of elements of $A$. Considering the chain $p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_{t-1}}$, one can see $(t,j_t)$ also satisfies the property (ii) of elements of $A$. Hence $(t,j_t)\in A$. This shows that $A$ is a vertex cover of $\Delta_r(P)$. Let the $(r \times m)$-matrix $[a_{ij}]$ be the squarefree 1-cover of $\Delta_r(P)$ corresponding to $A$. In other words, $[a_{ij}]$ is the $(0,1)$-matrix with $a_{ij}=1$ if and only if $(i,j)\in A$.
Next we show that for every chain $p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_r}$ of elements of $P$ with $a_{tj_t}>0$, we have $\sum_{s=t}^r c_{s,j_s}\geq k$. Indeed, since $(t,j_t)\in A$, property (ii) implies that there exists a chain $ p_{i_1}\leq p_{i_2}\leq\ldots \leq p_{i_{t-1}}$ of elements of $P$ with $ p_{i_{t-1}} \leq p_{j_t}$ and $c_{s,i_s}=0$, for $s=1,\dots,t-1$. Consider the facet $F$ of $\Delta_r(P)$ corresponding to the chain $$p_{i_1}\leq p_{i_2}\leq\ldots \leq p_{i_{t-1}}\leq p_{j_t}\leq\ldots \leq p_{j_r}.$$ Since ${\bold c}$ is a $k$-cover of $\Delta_r(P)$, one has $\sum_{(i,j)\in F}c_{ij}\geq k$. This implies that $$\sum_{s=1}^{t-1} c_{s,i_s}+\sum_{s=t}^r c_{s,j_s}=\sum_{s=t}^r c_{s,j_s}\geq k.$$
Finally, we show that the $(r \times m)$-matrix $[b_{ij}]=[c_{ij}-a_{ij}]$ is a $(k-1)$-cover of $\Delta_r(P)$. To see this, let $F$ be a facet of $\Delta_r(P)$ corresponding to a chain $p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_r}$ of elements of $P$. Since $A$ is a vertex cover of $\Delta_r(P)$, the set $A\cap F$ is nonempty. Suppose $A\cap F=\{(i_1,j_{i_1}),(i_2,j_{i_2}),\ldots,(i_t,j_{i_t})\}$ with $i_1< i_2<\ldots< i_t$. By the above discussion, since $a_{i_t j_{i_t}}>0$, we have $\sum_{s=i_t}^r c_{s,j_s}\geq k$. Furthermore, the elements $a_{i_1, j_{i_1}},a_{i_2,j_{i_2}},\ldots,a_{i_{t-1},j_{i_{t-1}}}$ are nonzero because $$\{(i_1,j_{i_1}),(i_2,j_{i_2}),\ldots,(i_{t-1},j_{i_{t-1}})\}\subseteq A.$$
This implies that $c_{i_1, j_{i_1}},c_{i_2,j_{i_2}},\ldots,c_{i_{t-1},j_{i_{t-1}}}$ are nonzero. Hence $$\sum_{(i,j)\in F}c_{ij}\geq \sum _{s=1}^{t-1} c_{i_s,j_{i_s}} +\sum_{s=i_t}^r c_{s,j_s}\geq (t-1)+\sum_{s=i_t}^r c_{s,j_s}\geq (t-1)+k.$$ Consequently, $$\sum_{(i,j)\in F}b_{ij}=\sum_{(i,j)\in F}c_{ij}-\sum_{(i,j)\in F}a_{ij} \geq ((t-1)+k)-t=k-1.$$ This shows that ${\bold b}$ is a $(k-1)$-cover of $\Delta_r(P)$, and ${\bold c}={\bold a}+{\bold b}$ is the desired decomposition of ${\bold c}$. \end{proof}
\section{Comparison of $A(\Delta)$ and $B(\Delta)$}
In view of Corollary~\ref{duality} it is of interest to know when $B(\Delta)$ is standard graded as an $S$-algebra. Of course this is the case if $A(\Delta)$ is standard graded. Thus the following questions arise: \begin{enumerate} \item[(1)] Is $A(\Delta)$ standard graded if and only if $B(\Delta)$ is standard graded?
\item[(2)] When do we have $A(\Delta)=B(\Delta)$? \end{enumerate}
In general Question (1) does not have a positive answer. The following example was communicated to us by Villarreal. Let $\Delta$ be the simplicial complex with the following facets: \[ \{1,2\}, \{3,4\}, \{5,6\}, \{7,8\}, \{1,3,7\}, \{1,4,8\}, \{3,5,7\}, \{4,5,8\}, \{2,3,6,8\}, \{2,4,6,7\}. \] It can be seen that ${\bold c}=(1,1,1,1,2,0,1,1)$ is an indecomposable 2-cover of $\Delta$, and hence $A(\Delta)$ is not standard graded. However, using CoCoA \cite{Cocoa}, one can check that the (finitely many) squarefree covers of $\Delta$ of order $\geq 2$ are all decomposable. Thus $B(\Delta)$ is standard graded.
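For instance, to check that ${\bold c}=(1,1,1,1,2,0,1,1)$ is a $2$-cover of $\Delta$, one verifies $\sum_{i\in F}c_i\geq 2$ facet by facet: for $F=\{5,6\}$ the sum is $2+0=2$, for $F=\{2,3,6,8\}$ it is $1+1+0+1=3$, and the remaining facets are checked similarly.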
Now we consider some cases where Question (1) has a positive answer. We begin with a general fact about the generators of $B(\Delta)$.
\begin{Lemma} \label{lunch}
Let $r=\min\{|F|\:\; F\in {\mathcal F}(\Delta)\}$. Then $B(\Delta)$ is generated in degree $\leq r$, and $L_r(\Delta)^{sq}=(x_1x_2\cdots x_n)$ if and only if each vertex is contained in a facet of $\Delta$ of dimension $r-1$. \end{Lemma}
\begin{proof} Let ${\bold c}$ be a squarefree cover of $\Delta$, and let $F\in {\mathcal F}(\Delta)$ with $|F|=r$. If $C=\supp({\bold c})$, then $|C\cap F|\leq r$. Hence the order of ${\bold c}$ is at most $r$. This shows that $B(\Delta)$ is generated in degree $\leq r$. For the other statement we first observe that by the definition of the integer $r$ we have $(x_1x_2\cdots x_n)\subseteq L_r(\Delta)^{sq}$. Therefore $(x_1x_2\cdots x_n)\neq L_r(\Delta)^{sq}$ if and only if there exists $i\in[n]$ such that $x_1\cdots \hat{x}_i\cdots x_n\in L_r(\Delta)^{sq}$. This is the case if and only if for each facet $F\in{\mathcal F}(\Delta)$ with $i\in F$ one has $|F\cap\{1,\ldots,\hat{i},\ldots,n\}|\geq r$, which implies that $|F|>r$. \end{proof}
When $\dim\Delta=1$, one may view $\Delta$ as a finite simple graph on $[n]$. In that case we have
\begin{Proposition} \label{standardgraph} Let $G$ be a finite simple graph. Then $B(G)$ is standard graded if and only if $A(G)$ is standard graded. \end{Proposition}
\begin{proof} We may assume that $G$ has no isolated vertices, because adding or removing an isolated vertex does not change whether $B(G)$, respectively $A(G)$, is standard graded.
Assume $B(G)$ is standard graded. Then ${\bold c}=(1,\ldots,1)$ is decomposable, or in other words, there exist vertex covers $C_1,C_2\subseteq [n]$ such that $C_1\union C_2=[n]$ and $C_1\sect C_2=\emptyset$. This implies that each edge of $G$ has exactly one vertex in $C_1$ and one in $C_2$. Therefore $G$ is bipartite. By \cite[Theorem 5.1(b)]{HHT} it follows that $A(G)$ is standard graded. \end{proof}
Let $G$ be a finite simple graph on $[n]$. The {\em edge ideal} of $G$, denoted by $I(G)$, is the ideal generated by $\{x_ix_j\: \{i,j\}\text{ is an edge of } G\}$. We also denote $I(G)^\vee$ by $J(G)$ which is called the {\em cover ideal} of $G$.
An ideal $I$ is said to be {\em normally torsion free} if all the powers $I^j$ have the same associated prime ideals. The following result is shown in \cite[Theorem 5.9]{SVV}. \begin{Theorem}[Simis, Vasconcelos, Villarreal] \label{Villarreal} Let G be a graph, and suppose $I(G)$ is its edge ideal. The following conditions are equivalent: \begin{enumerate} \item[(i)] $G$ is bipartite; \item[(ii)] $I(G)$ is normally torsion free. \end{enumerate} \end{Theorem}
By this theorem we obtain another case for which Question (1) has a positive answer, as shown in the next proposition. \begin{Proposition} \label{coveringIdeal} Let $G$ be a graph, and suppose $\Delta$ is the simplicial complex with $I(\Delta)=J(G)$. Then $B(\Delta)$ is standard graded if and only if $A(\Delta)$ is standard graded. \end{Proposition} \begin{proof} Suppose $B(\Delta)$ is standard graded. We show that $G$ is bipartite. On the contrary, assume that $G$ has a cycle $l\:\;i_1,i_2,\ldots,i_r$ of odd length. Every facet $F$ of $\Delta$ is a vertex cover of the graph $G$. In addition, $r$ is an odd number, so we have \begin{eqnarray} \label{shokr}
|F\cap \{i_1,i_2,\ldots ,i_r\} |\geq (r+1)/2. \end{eqnarray} for every facet $F$ of $\Delta$. Indeed, if inequality (\ref{shokr}) does not hold, then there exists an edge of the cycle $l$ which does not meet $F$.
Consider the squarefree vector ${\bold c}=(c_1,\ldots,c_n)$ with $c_j=1$ if and only if $j\in C=\{i_1,i_2,\ldots,i_r\}$. Inequality (\ref{shokr}) shows that ${\bold c}$ is a $(r+1)/2$-cover of $\Delta$. Thus the set $C$ is a disjoint union of $(r+1)/2$ vertex covers of $\Delta$ because $B(\Delta)$ is standard graded. Observe that the minimal vertex covers of $\Delta$ are exactly the edges of the graph $G$. So the above argument implies that the set of edges of the cycle $l$ contains $(r+1)/2$ pairwise disjoint edges, a contradiction. Hence $G$ is bipartite. So by Theorem~\ref{Villarreal}, the ideal $I(G)$ is normally torsion free. It is well known that in this case the simplicial complex with facet ideal $I(G)^\vee=J(G)$, i.e.\ $\Delta$, has a standard graded vertex cover algebra; see \cite[Corollary 3.14]{GVV}, \cite[Corollaries 1.5 and 1.6]{XIN} and \cite{HSV}. \end{proof}
Next we discuss another result regarding question (1).
Let $\Delta$ be a simplicial complex. A {\em subcomplex} $\Gamma$ of $\Delta$, denoted by $\Gamma \subseteq \Delta$, is a simplicial complex such that ${\mathcal F}(\Gamma) \subseteq {\mathcal F}(\Delta)$. A {\em cycle} of length $r$ of $\Delta$ is a sequence $i_1,F_1,i_2,\ldots , F_r,i_{r+1}=i_1$ where $F_j\in {\mathcal F}(\Delta)$, $i_j \in [n]$ and $i_j,i_{j+1}\in F_j$ for $j=1,\ldots ,r$. A cycle is called {\em special} if each facet of the cycle contains exactly two vertices of the cycle.
\begin{Theorem} \label{no-odd} Let $\Delta$ be a simplicial complex. Then the following conditions are equivalent: \begin{enumerate} \item[(i)] The vertex cover algebra $B(\Gamma)$ is standard graded for all $\Gamma \subseteq \Delta;$ \item[(ii)] The vertex cover algebra $A(\Gamma)$ is standard graded for all $\Gamma \subseteq \Delta;$ \item[(iii)] $\Delta$ has no special odd cycles. \end{enumerate} \end{Theorem} \begin{proof} The equivalence of (ii) and (iii) is known by \cite[Theorem 2.2]{XIN}; see also \cite[Proposition 4.10]{GRV}. We show (i) and (iii) are also equivalent.
(i)$\Rightarrow$ (iii) Following the proof of \cite[Lemma 2.1]{XIN}, let $i_1,F_1,i_2,\ldots , F_r,i_{r+1}=i_1$ be a special cycle of $\Delta$. Consider the subcomplex $\Gamma \subseteq \Delta$ with ${\mathcal F}(\Gamma)=\{F_1,\ldots , F_r\}$. By the definition of the special cycles, one has $|F_j\cap \{i_1,\ldots, i_r\}|= 2$ for each $j=1,\ldots,r$. So
$\{i_1,\ldots, i_r\}$ corresponds to a squarefree 2-cover of $\Gamma$, that is, there exists a squarefree 2-cover ${\bold c}$ of $\Gamma$ with $\supp({\bold c})=\{i_1,\ldots, i_r\}$. Since by assumption $B(\Gamma)$ is standard graded, there are disjoint vertex covers $C_1$ and $C_2$ of $\Gamma$ such that $\{i_1,\ldots, i_r\}=C_1\cup C_2$. So the sets $F_j\cap C_1$ and $F_j\cap C_2$ are nonempty for all $j$. Furthermore, since every facet of a special cycle contains exactly two vertices of the cycle, it follows that $|F_j\cap C_1|=|F_j\cap C_2|=1$. Hence $C_1$ and $C_2$ have the same number of vertices which implies that $r$ is an even number.
(iii)$\Rightarrow$ (i) By \cite[Theorem 2.2]{XIN}, when $\Delta$ has no special odd cycle, statement (ii) holds. Therefore $B(\Gamma)$ is standard graded for all $\Gamma \subseteq \Delta$. \end{proof}
Next we discuss question (2) in which we ask when $B(\Delta)=A(\Delta)$. In the case that $\dim \Delta=1$, in other words if $\Delta$ can be identified with a graph $G$, we have a complete answer which is an immediate consequence of the following result, see \cite[Proposition 5.3]{HHT}: \begin{Proposition} \label{quoted} Let G be a simple graph on $[n]$. Then the following conditions are equivalent:
\begin{enumerate} \item[(i)] The graded $S$-algebra $A(G)$ is generated by $x_1x_2\cdots x_nt^2$ together with those monomials $x^{{\bold c}}t$ where ${\bold c}$ is a squarefree 1-cover of $G$; \item[(ii)] For every cycle $C$ of $G$ of odd length and for every vertex $i$ of $G$ there exists a vertex $j$ of the cycle $C$ such that $\{i,j\}$ is an edge of $G$. \end{enumerate} \end{Proposition}
By using this result we get
\begin{Proposition} \label{graphequality} Let $G$ be a simple graph on $[n]$. Then the following conditions are equivalent: \begin{enumerate} \item[(i)] $A(G)=B(G);$ \item[(ii)] For every cycle $C$ of $G$ of odd length and for every vertex $i$ of $G$ there exists a vertex $j$ of the cycle $C$ such that $\{i,j\}$ is an edge of $G$. \end{enumerate} \end{Proposition} \begin{proof} (i)$\Rightarrow$ (ii): We may assume that $G$ has no isolated vertex. So by Lemma~\ref{lunch}, the graded $S$-algebra $B(G)$ is generated by $x_1x_2\cdots x_nt^2$ together with those monomials $x^{\bold c} t$ where ${\bold c}$ is a squarefree $1$-cover of $G$. Since we assume that $A(G)=B(G)$, the same holds true for $A(G)$. Hence (ii) follows from Proposition~\ref{quoted}.
(ii)$\Rightarrow$ (i): If (ii) holds, then Proposition~\ref{quoted} implies that $A(G)$ is generated by the monomials corresponding to squarefree covers. So $A(G)=B(G)$. \end{proof}
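Condition (ii) of the two propositions above is easy to test mechanically on small graphs. The following sketch (an illustrative check only; the two graphs tested at the end are hypothetical examples, not taken from the paper's figures) enumerates the odd cycles of a graph by brute force and verifies the adjacency condition:

```python
from itertools import combinations, permutations

def satisfies_odd_cycle_condition(n, edges):
    """Condition (ii): for every odd cycle C of the graph on [n] and every
    vertex i, some vertex of C is adjacent to i."""
    E = {frozenset(e) for e in edges}
    adj = {i: {j for j in range(1, n + 1) if frozenset({i, j}) in E}
           for i in range(1, n + 1)}
    for r in range(3, n + 1, 2):                   # odd cycle lengths
        for S in combinations(range(1, n + 1), r):
            for p in permutations(S[1:]):          # cyclic orders through S
                cyc = (S[0],) + p
                if all(frozenset({cyc[t], cyc[(t + 1) % r]}) in E
                       for t in range(r)):
                    if any(not (adj[i] & set(S)) for i in range(1, n + 1)):
                        return False
                    break   # the condition depends only on the vertex set S
    return True

# A triangle satisfies (ii); adding a disjoint edge breaks it.
print(satisfies_odd_cycle_condition(3, [(1, 2), (2, 3), (1, 3)]))          # True
print(satisfies_odd_cycle_condition(5, [(1, 2), (2, 3), (1, 3), (4, 5)]))  # False
```

The brute-force cycle enumeration is only feasible for small $n$, but it suffices to probe the equivalence on examples.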
Let $\Delta$ be a simplicial complex on $[n]$. For a subset $W$ of $[n]$, we define the {\em restriction} of $\Delta$ with respect to $W$, denoted by $\Delta_W$, to be the subcomplex of $\Delta$ with
$${\mathcal F}(\Delta_W) = \{F\in {\mathcal F}(\Delta) \: F \subseteq W\}.$$
\begin{Lemma} \label{restriction} Let $\Delta$ be a simplicial complex on $[n]$ and $W\subseteq [n]$. If $B(\Delta)=A(\Delta)$, then $B(\Delta_W)=A(\Delta_W)$. \end{Lemma} \begin{proof} We assume that there exists at least one facet $F$ of $\Delta$ such that $F \subseteq W$, otherwise there is nothing to prove. Furthermore, without loss of generality we assume that $W=\{1,\ldots,t\}$. Let $\mathbf{c}'=(c_1,\ldots,c_t)$ be an indecomposable $k$-cover of $\Delta_W$ with $k>0$. We will show that ${\bold c}'$ is squarefree. We extend ${\bold c}'$ to the $k$-cover ${\bold c}=(c_1,\ldots,c_t,k,\ldots,k)$ of $\Delta$. Since $B(\Delta)=A(\Delta)$, there exists a decomposition ${\bold c}={\bold a}+{\bold b}$, where ${\bold a}$ is an indecomposable squarefree $i$-cover of $\Delta$ with $i>0$, ${\bold b}$ is a $j$-cover of $\Delta$, and $i+j=k$. Let ${\bold a}'$ and ${\bold b}'$ be respectively the restrictions of ${\bold a}$ and ${\bold b}$ to the first $t$ components. For every facet $F\in {\mathcal F}(\Delta_W)$ we have $\sum_{\ell\in F}a_{\ell}\geq i>0$. This implies that ${\bold a}'\neq 0$. If ${\bold b}'\neq 0$, then ${\bold c}'={\bold a}'+{\bold b}'$ is a decomposition of ${\bold c}'$, a contradiction. Hence ${\bold b}'=0$, and so ${\bold c}'={\bold a}'$, which means that ${\bold c}'$ is squarefree. \end{proof}
By Theorem~\ref{no-odd}, so far we know that for a simplicial complex $\Delta$ without any special odd cycle we have $B(\Delta)=A(\Delta)$. On the other hand, if $\Delta$ contains special odd cycles, then $A(\Delta)$ and $B(\Delta)$ may not be equal. This may even happen if the facets of $\Delta$ are precisely the facets of a special odd cycle, as the following two examples demonstrate. Figure~\ref{cycle5} shows a simplicial complex $\Delta_1$ of dimension 2 such that $$1,F_1,2,F_2,3,F_3,4,F_4,5,F_5,1$$ is a special odd cycle of length 5. \begin{figure}\label{cycle5}
\end{figure} One can see that the vector $(1,0,2,0,1,0,1)$ is an indecomposable 2-cover of $\Delta_1$. Therefore $B(\Delta_1)\neq A(\Delta_1)$.
Next consider the simplicial complex $\Delta_2$ as shown in Figure~\ref{cycle3}. In this case the facets of $\Delta_2$ form the special odd cycle $6,F_1,2,F_2,4,F_3,6$ of length 3, and the equality $B(\Delta_2)=A(\Delta_2)$ holds. Indeed, this equality is a consequence of a more general result given below. \begin{figure}\label{cycle3}
\end{figure}
These two examples show that it is not easy to classify the simplicial complexes $\Delta$ containing odd cycles for which $B(\Delta)=A(\Delta)$. Therefore, we restrict ourselves to special classes of simplicial complexes. First, we consider simplicial complexes satisfying the following intersection property: let $\Delta$ be a simplicial complex with $\mathcal{F}(\Delta)=\{F_1,\ldots,F_m\}$. We say that $\Delta$ has the {\em strict intersection property} if \begin{enumerate}
\item[ ($\text{I}_1$)] $|F_i\sect F_j|\leq 1$ for all $i\neq j$; \item[ ($\text{I}_2$)] $F_i\sect F_j\sect F_k=\emptyset$ for pairwise distinct $i$, $j$, and $k$. \end{enumerate}
Given a simplicial complex $\Delta$ with $\mathcal{F}(\Delta)=\{F_1,\ldots,F_m\}$ satisfying the strict intersection property, we define the {\em intersection graph $G_\Delta$} of $\Delta$ as follows: $$V(G_\Delta)=\{v_1,\ldots,v_m\}$$ is the vertex set of $G_\Delta$, and $$E(G_\Delta)=\{ \{v_i,v_j\}\:\; i\neq j \quad \text{and}\quad F_i\sect F_j\neq \emptyset\}$$ is the edge set of $G_\Delta$.
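Both the strict intersection property and the intersection graph can be computed directly from a facet list. The sketch below is illustrative only; the complex tested at the end (three triangles glued along single vertices) is a hypothetical example, not one of the paper's figures:

```python
from itertools import combinations

def has_strict_intersection_property(facets):
    """Check (I1): pairwise intersections have at most one vertex,
    and (I2): no three distinct facets share a vertex."""
    if any(len(set(F) & set(G)) > 1 for F, G in combinations(facets, 2)):
        return False
    return all(len(set(F) & set(G) & set(H)) == 0
               for F, G, H in combinations(facets, 3))

def intersection_graph(facets):
    """Vertices are facet indices; {i, j} is an edge iff F_i meets F_j."""
    m = len(facets)
    return {frozenset({i, j}) for i, j in combinations(range(m), 2)
            if set(facets[i]) & set(facets[j])}

# Hypothetical complex: three triangles glued along the vertices 2, 4, 6.
facets = [{1, 2, 6}, {2, 3, 4}, {4, 5, 6}]
print(has_strict_intersection_property(facets))          # True
print(sorted(map(sorted, intersection_graph(facets))))   # [[0, 1], [0, 2], [1, 2]] (a 3-cycle)
```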
Note that if $W$ is a subset of $[n]$, then $\Delta_W$ satisfies again the strict intersection property and the graph $G_{\Delta_W}$ is the subgraph of $G_\Delta$ induced by $$S=\{v_i\in V(G_{\Delta})\: F_i\subseteq W \}.$$
\begin{Lemma} \label{odd-cycle} Let $\Delta$ be a simplicial complex satisfying the strict intersection property. If $G_\Delta$ is an odd cycle, then $A(\Delta)$ is minimally generated in degrees $1$ and $2$, and $B(\Delta)=A(\Delta)$. \end{Lemma} \begin{proof} Let $[n]$ be the vertex set of $\Delta$ and ${\mathcal F}(\Delta)=\{F_1,\ldots,F_m\}$. Since $G_\Delta$ is an odd cycle and $\Delta$ has the strict intersection property, we may assume $$i_1, F_1,i_2,F_2,\ldots,i_m,F_m, i_{m+1}=i_1$$ is the special odd cycle corresponding to $G_\Delta$ where $\{i_j\}=F_{j-1}\cap F_j$ for $j=2,\ldots,m$ and $\{i_1\}=F_{1}\cap F_m$. We consider a non-squarefree $k$-cover ${\bold c}$ of $\Delta$ with $k>0$ and show that it has a decomposition ${\bold a}+{\bold b}$ such that ${\bold a}$ is a squarefree cover of positive order. This will imply that $B(\Delta)=A(\Delta)$. If all the entries $c_{i_1},\ldots,c_{i_m}$ of the $k$-cover ${\bold c}$ are nonzero, then we have the decomposition ${\bold c}={\bold a}+{\bold b}$ where ${\bold a}$ is the squarefree 2-cover of $\Delta$ with $a_{i_1}=\ldots=a_{i_m}=1$ and all other entries of ${\bold a}$ are zero, and ${\bold b}$ is the $(k-2)$-cover ${\bold c}-{\bold a}$. So we may assume that at least one of the entries $c_{i_1},\ldots,c_{i_m}$, say $c_{i_1}$, is zero. Let $\Gamma$ be the simplicial complex on the vertex set $[n]\setminus \{i_1\}$ with $${\mathcal F}(\Gamma)= \{F\setminus \{i_1\}\:F\in {\mathcal F}(\Delta)\}.$$ Consider the vector ${\bold c}'=(c_1,\ldots, \hat{c}_{i_1},\ldots,c_n)$. The vector ${\bold c}'$ is also a $k$-cover of $\Gamma$ because $c_{i_1}= 0$. Since the simplicial complex $\Gamma$ has no special odd cycle, by Theorem~\ref{no-odd}, the $S$-algebra $A(\Gamma)$ is standard graded. Therefore, there exists a decomposition ${\bold a}'+{\bold b}'$ of ${\bold c}'$ such that ${\bold a}'$ is a squarefree 1-cover and ${\bold b}'$ is a $(k-1)$-cover of $\Gamma$. 
The vector ${\bold a}$ with $a_{i_1}=0$ and $a_j=a'_j$ for $j\neq i_1$ is a 1-cover of $\Delta$, and the vector ${\bold b}$ with $b_{i_1}=0$ and $b_j=b'_j$ for $j\neq i_1$ is a $(k-1)$-cover of $\Delta$. Hence ${\bold a}+{\bold b}$ gives us the desired decomposition of ${\bold c}$.
It follows from the above discussion that $A(\Delta)$ is generated by $x_{i_1}x_{i_2}\cdots x_{i_m}t^2$ and the monomials $x^{{\bold c}}t$ where ${\bold c}$ is an indecomposable squarefree $1$-cover of $\Delta$. This is a minimal set of generators of $A(\Delta)$ because the corresponding 2-cover is not decomposable. In fact, since $m$ is an odd number, if this cover is written as ${\bold a}+{\bold b}$, then one of ${\bold a}$ and ${\bold b}$ has at most $(m-1)/2$ nonzero entries, which means that it is not a 1-cover. \end{proof}
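The lemma can be probed computationally on a small instance. The complex below is a hypothetical one chosen to satisfy the hypotheses (three triangles whose intersection graph is a triangle); the script is only a finite sanity check over covers with entries at most $2$, not a proof:

```python
from functools import lru_cache
from itertools import product

# Hypothetical complex: three triangles glued along the vertices 2, 4, 6,
# so the intersection graph is an odd cycle (a triangle).
FACETS = [(1, 2, 6), (2, 3, 4), (4, 5, 6)]
N = 6

def order(c):
    """The largest k for which c is a k-cover: the smallest facet sum."""
    return min(sum(c[i - 1] for i in F) for F in FACETS)

SQ = [a for a in product((0, 1), repeat=N) if order(a) > 0]  # squarefree covers

@lru_cache(maxsize=None)
def decomposes(c, k):
    """True if x^c t^k is a product of generators x^a t^i of B(Delta)
    (a a squarefree i-cover, i > 0) and a monomial of t-degree 0."""
    if k == 0:
        return True
    for a in SQ:
        if all(x <= y for x, y in zip(a, c)):
            i = min(order(a), k)
            if decomposes(tuple(y - x for x, y in zip(a, c)), k - i):
                return True
    return False

# Every cover with entries <= 2 lies in the squarefree cover algebra,
# as the lemma predicts for this complex.
ok = all(decomposes(c, order(c)) for c in product((0, 1, 2), repeat=N))
print(ok)  # True
```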
\begin{Theorem} \label{str-intersec-prop} Let $\Delta$ be a simplicial complex satisfying the strict intersection property and suppose that no two cycles of $G_\Delta$ have precisely two edges in common. Then $B(\Delta)=A(\Delta)$ if and only if each connected component of $G_\Delta$ is a bipartite graph or an odd cycle. \end{Theorem}
\begin{proof} Let $G_1,\dots,G_t$ be the connected components of $G_\Delta$, $\Delta_1,\ldots,\Delta_t$ be the corresponding connected components of $\Delta$, and $\{v_{j1},\ldots,v_{js_j}\}$ be the vertex set of $G_j$. Then a $k$-cover ${\bold c}$ of $\Delta$ can be decomposed into an $i$-cover ${\bold a}$ and a $j$-cover ${\bold b}$ of $\Delta$ if and only if the $k$-cover ${\bold c}_j=(c_{j1},\ldots,c_{js_j})$ of $\Delta_j$ can be decomposed into the $i$-cover $(a_{j1},\ldots,a_{js_j})$ and the $j$-cover $(b_{j1},\ldots,b_{js_j})$ of $\Delta_j$ for all $j$. Hence $B(\Delta)=A(\Delta)$ if and only if $B(\Delta_j)=A(\Delta_j)$ for all $j$. Therefore, it is enough to consider the case that $G_\Delta$ is connected.
First, let $B(\Delta)=A(\Delta)$. We assume that $G_\Delta$ is not bipartite and show that it is an odd cycle. Let $[n]$ be the set of vertices of $\Delta$ and ${\mathcal F}(\Delta)=\{F_1,\ldots,F_m\}$. Since $G_\Delta$ is not bipartite, it has an odd cycle $C$. We may assume that the cycle $C\: v_{i_1},\ldots,v_{i_t}$ has no chord because if $\{v_{i_r},v_{i_s}\}$ is a chord of $C$, then either $v_{i_1},\ldots,v_{i_r},v_{i_s},\ldots,v_{i_t}$ or $v_{i_r},v_{i_{r+1}},\ldots,v_{i_s}$ is an odd cycle. Suppose the special cycle corresponding to $C$ in $\Delta$, after a relabeling of facets, is the cycle $${\mathcal C}:\; i_1,F_1,i_2,\ldots,i_r,F_r,i_{r+1}=i_1,$$ where $\{i_j\}=F_{j-1}\cap F_j$ and $r$ is an odd integer. We show that $m=r$. This will imply that $G_\Delta =C$ because $C$ has no chord.
Assume $m>r$. We consider two cases and in each case we will find a non-squarefree indecomposable cover of $\Delta$, contradicting our assumption that $B(\Delta)=A(\Delta)$.
{\em Case 1.} Suppose each vertex of $\Delta$ belongs to at least one of the facets of the cycle ${\mathcal C}$. It is enough to consider the case that $m=r+1$. Indeed, if $m>r+1$, we set $W=[n]\setminus \Union_{j=r+2}^mF_j$ and consider $\Delta_W$. The strict intersection property implies that $\Delta_W$ has the special odd cycle
$${\mathcal C}:\; i_1,F_1',i_2,\ldots,i_r,F_r',i_{r+1}=i_1,$$
where $F_i' =F_i\setminus \Union_{j=r+2}^mF_j$ for all $i$. Moreover, ${\mathcal F}(\Delta_W)=\{F_1',\ldots, F_r',F_{r+1}\}$. Lemma~\ref{restriction} implies that $B(\Delta)\neq A(\Delta)$ if $B(\Delta_W)\neq A(\Delta_W)$.
There exist two facets of ${\mathcal C}$, say $F_1$ and $F_s$, which intersect $F_{r+1}$. Since $r$ is an odd integer, one of the cycles $$ j_1,F_1,i_2,F_2,\ldots,i_s,F_s,j_s,F_{r+1},j_1$$ or $$j_1, F_{r+1},j_s,F_s,i_{s+1},\ldots,F_r, i_1,F_1,j_1$$ is of odd length where $\{j_1\}=F_{r+1}\cap F_1$ and $\{j_s\}=F_{r+1}\cap F_s$. Without loss of generality, we assume $ j_1,F_1,i_2,F_2,\ldots,i_s,F_s,j_s,F_{r+1},j_1$ is an odd cycle and call it ${\mathcal D}$. One has $r-s>1$ because $r-s$ is an odd number and if we had $r-s=1$, then the cycles $v_1,v_{r+1},v_s,v_r$ and $v_1,\ldots,v_r$ in $G_\Delta$ would have exactly two edges in common, contradicting our assumption.
Now we consider the vector ${\bold c}$ with $c_{j_1}=c_{j_s}=c_{i_2}=\ldots=c_{i_s}=1$, $c_{i_{s+2}}=\ldots=c_{i_r}=2$ and all the other entries of ${\bold c}$ are zero. The vector ${\bold c}$ is a non-squarefree 2-cover of $\Delta$, but $x^{{\bold c}}t^2\not\in B(\Delta)$. In fact, if $x^{{\bold c}}t^2\in B(\Delta)$, then there is a decomposition ${\bold c}={\bold a}+{\bold b}$ into an indecomposable squarefree cover ${\bold a}$ and a cover ${\bold b}$ such that either ${\bold a}$ is a 2-cover or ${\bold a}$ and ${\bold b}$ are both 1-covers of $\Delta$. Since ${\bold a}$ is squarefree and ${\bold a}\leq {\bold c}$ componentwise, one has $\sum_{j\in F_{s+1}}a_j \leq 1$. Hence the order of ${\bold a}$ cannot be 2. On the other hand, since $\sum_{j \in F_i }a_j \geq 1$ for all facets $F_i$ of ${\mathcal D}$, at least $(s/2)+1$ of the entries $a_{j_1},a_{j_s},a_{i_2},\ldots,a_{i_s}$ of ${\bold a}$ are nonzero. Hence at most $s/2$ of the entries $b_{j_1},b_{j_s},b_{i_2},\ldots,b_{i_s}$ of ${\bold b}$ are nonzero. Therefore ${\bold b}$ is not a 1-cover of $\Delta$, and so $x^{{\bold c}}t^2\not\in B(\Delta)$.
{\em Case 2}. Suppose there exists a vertex $t$ of $\Delta$ which belongs to none of the facets of the cycle ${\mathcal C}$.
We may assume that $t$ is the only vertex of $\Delta$ with this property. Indeed, if $t_1,\ldots,t_s$ are the other vertices of $\Delta$ with the same property, then we consider the restriction of $\Delta$ to the set $W= [n]\setminus \{t_1,\ldots,t_s\}$ and show that $B(\Delta_W) \neq A(\Delta_W)$; then, by applying Lemma~\ref{restriction}, we obtain $B(\Delta) \neq A(\Delta)$.
We may assume that every facet of $\Delta$ which is not a facet of ${\mathcal C}$ contains $t$. Otherwise we restrict $\Delta$ to $W'=[n]\setminus \{t\}$, and by Case 1 it follows that $B(\Delta_{W'})\neq A(\Delta_{W'})$, which implies that $B(\Delta)\neq A(\Delta)$. Let ${\bold c}$ be the vector with $c_t=2$, $c_{i_1}=c_{i_2}=\ldots=c_{i_r}=1$ and all other entries of ${\bold c}$ equal to zero. The vector ${\bold c}$ is a 2-cover of $\Delta$. Suppose ${\bold c}={\bold a}+{\bold b}$ for some squarefree indecomposable cover ${\bold a}$ of positive order and some cover ${\bold b}$ of $\Delta$. Then at least $(r+1)/2$ of the entries $a_{i_1},a_{i_2},\ldots,a_{i_r}$ of ${\bold a}$ are nonzero, and consequently at most $(r-1)/2$ of the entries $b_{i_1},b_{i_2},\ldots,b_{i_r}$ of ${\bold b}$ are nonzero. Hence ${\bold b}$ is not a $1$-cover. Furthermore, if $F$ is a facet of $\Delta$ containing $t$, then $\sum_{j\in F}a_j=a_t=1$. Hence ${\bold a}$ is not a 2-cover of $\Delta$. Therefore ${\bold c}$ is an indecomposable non-squarefree 2-cover of $\Delta$.
Conversely, suppose $G_\Delta$ is a bipartite graph or an odd cycle. If $G_\Delta$ is bipartite, then $\Delta$ has no special odd cycle. Therefore, by Theorem~\ref{no-odd}, $A(\Delta)$ is standard graded and consequently $B(\Delta)=A(\Delta)$. The equality in the case that $G_\Delta$ is an odd cycle has been shown in Lemma~\ref{odd-cycle}. \end{proof}
Consider the simplicial complexes $\Delta_1$ and $\Delta_2$ as shown in Figure~\ref{2common}. They both satisfy the strict intersection property and have the same intersection graph. The cycles $v_1,v_2,v_3$ and $v_1,v_2,v_3,v_4$ of this intersection graph have exactly two edges in common. We have $B(\Delta_1)=A(\Delta_1)$ but $B(\Delta_2)\neq A(\Delta_2)$.
In fact, the vector $(1,0,2,0,1,1)$ is an indecomposable 2-cover of $\Delta_2$, which shows $B(\Delta_2)\neq A(\Delta_2)$. However, $A(\Delta_1)$ is even standard graded. Let ${\bold c}=(c_1,\ldots,c_5)$ be a $k$-cover of $\Delta_1$ with $k\geq 2$. We show that there is a decomposition of ${\bold c}$ into a 1-cover ${\bold a}$ and a $(k-1)$-cover ${\bold b}={\bold c} -{\bold a}$ of $\Delta_1$. If $c_1$ and $c_3$ are both nonzero, then ${\bold a}=(1,0,1,0,0)$ gives the desired decomposition because the set $\{1,3\}$ meets each facet of $\Delta_1$ at exactly one vertex. Therefore, we may assume $c_1=0$ or $c_3=0$. By the same argument, one may assume $c_2=0$ or $c_4=0$. So it is enough to consider the case that $c_1$ and $c_2$ are zero or the case that $c_1$ and $c_4$ are zero. However, ${\bold c}$ is a $k$-cover of positive order, so the entries $c_1$ and $c_4$ cannot both be zero. Now since the vector ${\bold c}=(0,0,c_3,c_4,c_5)$ is a $k$-cover of $\Delta_1$, one obtains that $c_i\geq k$ for $i=3,4,5$. This implies that the vector ${\bold a}=(0,0,1,1,1)$ and ${\bold b}={\bold c}-{\bold a}$ give the desired decomposition of ${\bold c}$.
\begin{figure}\label{2common}
\end{figure}
\section{Vertex covers of principal Borel sets} The classes of simplicial complexes considered so far for which $B(\Delta)=A(\Delta)$ happen to have the property that $A(\Delta)$ is generated over $S$ in degree at most 2. In this section we present classes of simplicial complexes $\Delta$ such that $B(\Delta)=A(\Delta)$ and $A(\Delta)$ has generators in higher degrees.
We will consider a family of simplicial complexes whose set of facets corresponds to a Borel set. Recall that a subset ${\mathcal B}\subseteq 2^{[n]}$ is called {\em Borel} if whenever $F\in {\mathcal B}$ and $i < j$ for some $i\in [n]\setminus F$ and $j\in F$, then $(F \setminus \{j\}) \cup \{i\}\in {\mathcal B}$. Elements $F_1,\ldots,F_m\in {\mathcal B}$ are called {\em Borel generators} of ${\mathcal B}$, denoted by ${\mathcal B}=B( F_1,\ldots,F_m)$, if ${\mathcal B}$ is the smallest Borel subset of $2^{[n]}$ such that $F_1,\ldots,F_m\in {\mathcal B}$. A Borel set ${\mathcal B}$ is called {\em principal} if there exists $F\in {\mathcal B}$ such that ${\mathcal B} = B(F)$.
A squarefree monomial ideal $I\subseteq S$ is called a {\em squarefree Borel ideal} if there exists a Borel set ${\mathcal B}\subseteq 2^{[n]}$ such that \[ I=(\{x_F \:\; F\in {\mathcal B} \}). \] If ${\mathcal B}=B( F_1,\ldots,F_m)$, then the monomials $x_{F_1},\ldots,x_{F_m}$ are called the {\em Borel generators} of $I$. The ideal $I$ is called a {\em squarefree principal Borel ideal} if ${\mathcal B}$ is principal Borel.
It is known that the Alexander dual of a squarefree Borel ideal is again squarefree Borel \cite{FMS}. In the case that $I$ is squarefree principal Borel, the following result is shown in \cite[Theorem 3.18]{FMS}.
\begin{Theorem}[Francisco, Mermin, Schweig] \label{BorelAlexander} Let $I$ be a squarefree principal Borel ideal with the Borel generator $x_F$ where $F=\{i_1<i_2<\cdots < i_d\}$. Then the Alexander dual $I^\vee$ of $I$ is the squarefree Borel ideal with the Borel generators $x_{H_1},\ldots,x_{H_d}$ where $H_q=\{q,q+1,\ldots,i_q\}$ for $q=1,\ldots,d$. \end{Theorem}
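The theorem can be verified by brute force on a small instance. In the sketch below (illustrative only), `precedes` implements the componentwise comparison that the text introduces just below as the Borel order, and the minimal vertex covers of the complex with facet set $B(F)$ are compared against the predicted Borel generators $H_q$:

```python
from itertools import combinations

def precedes(G1, G2):
    """Componentwise comparison of equal-size subsets (the Borel order)."""
    a, b = sorted(G1), sorted(G2)
    return len(a) == len(b) and all(x <= y for x, y in zip(a, b))

def minimal_vertex_covers(facets, n):
    """Minimal generators of the Alexander dual: the subsets of [n]
    meeting every facet, minimal under inclusion."""
    covers = [set(C) for d in range(1, n + 1)
              for C in combinations(range(1, n + 1), d)
              if all(set(C) & set(F) for F in facets)]
    return [C for C in covers if not any(D < C for D in covers)]

# F = {2,4} in [4]: the theorem predicts the Borel generators
# H_1 = {1,2} and H_2 = {2,3,4} for the Alexander dual.
facets = [G for G in combinations(range(1, 5), 2) if precedes(G, {2, 4})]
mins = minimal_vertex_covers(facets, 4)
print(sorted(map(sorted, mins)))   # [[1, 2], [1, 3, 4], [2, 3, 4]]
print(all(precedes(C, {1, 2}) or precedes(C, {2, 3, 4}) for C in mins))  # True
```

Every minimal cover precedes one of the two predicted Borel generators, as the theorem asserts.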
Let $G_1=\{i_1<\cdots<i_d\}$ and $G_2=\{j_1<\cdots<j_d\}$ be subsets of $[n]$. It is said that {\em $G_1$ precedes $G_2$ (with respect to the Borel order)}, denoted by $G_1\prec G_2$, if $i_s\leq j_s$ for all $s$. By \cite[Lemma 2.11]{FMS}, $F\in B( F_1,\ldots,F_m) $ if and only if $F$ precedes $F_i$ for some $i=1,\ldots,m.$
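The characterization in \cite[Lemma 2.11]{FMS} can be compared against the exchange-closure definition of a Borel set on a small example (an illustrative sketch; the generators below are arbitrary):

```python
from itertools import combinations

def borel_closure(generators):
    """B(F_1,...,F_m): close under the exchange F -> (F \\ {j}) u {i}, i < j."""
    B = {frozenset(F) for F in generators}
    queue = list(B)
    while queue:
        F = queue.pop()
        for j in F:
            for i in range(1, j):
                if i not in F:
                    G = F - {j} | {i}
                    if G not in B:
                        B.add(G)
                        queue.append(G)
    return B

def precedes(G1, G2):
    a, b = sorted(G1), sorted(G2)
    return len(a) == len(b) and all(x <= y for x, y in zip(a, b))

# [FMS, Lemma 2.11]: F lies in B(F_1,...,F_m) iff F precedes some F_i.
gens, n = [frozenset({2, 4}), frozenset({1, 2, 3})], 4
by_exchange = borel_closure(gens)
by_precedes = {frozenset(F) for d in {len(G) for G in gens}
               for F in combinations(range(1, n + 1), d)
               if any(precedes(F, G) for G in gens)}
print(by_exchange == by_precedes)  # True
```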
\begin{Lemma} \label{skeleton} Let ${\mathcal B}=B( F_1,\ldots,F_m)$ be a Borel set with Borel generators $F_j=\{i_{j,1}<i_{j,2}<\cdots < i_{j,d_j}\}$ for $j=1,\ldots,m$, and suppose $\Delta$ is the simplicial complex with ${\mathcal F}(\Delta)={\mathcal B}$. Then for the $q$-skeleton $\Delta^{(q)}$ of $\Delta$, the set ${\mathcal F}(\Delta^{(q)})$ is a Borel set with the Borel generators $G_1,\ldots,G_m$ such that $G_j=\{i_{j,d_j-q}<i_{j,d_j-q+1}<\cdots<i_{j,d_j}\}$ if $d_j> q$, and $G_j=F_j$ if $d_j\leq q$. \end{Lemma} \begin{proof}
Let $G$ be a subset of $[n]$. First assume that $|G|\leq q$. In this case $G$ is a facet of $\Delta^{(q)}$ if and only if $G$ is a facet of $\Delta$, and this is the case if and only if $G$ precedes $F_j$ for some $j$ with $d_j\leq q$.
Next assume that $|G|=q+1$. We must show that $G$ is a facet of $\Delta^{(q)}$ if and only if $G$ precedes $\{i_{j,d_j-q},i_{j,d_j-q+1},\ldots,i_{j,d_j}\}$ for some $j$ with $d_j> q$. The set $G$ is a facet of $\Delta^{(q)}$ if and only if there exists a facet $H$ of $\Delta$, preceding $F_j$ for some $j$ with $d_j> q$, and $G\subseteq H$. So for simplicity, we may assume that ${\mathcal B}$ is a principal Borel set with the Borel generator $F=\{i_1<i_2<\cdots < i_d\}$ where $d>q$, and show that $G$ is a facet of $\Delta^{(q)}$ if and only if $G$ precedes $\{i_{d-q},\ldots,i_d\}$. Firstly, let $G= \{k_{d-q}<k_{d-q+1}<\cdots<k_d\}\subseteq [n]$, and suppose $G$ precedes $\{i_{d-q},\ldots,i_d\}$. So we have $k_j\leq i_j$, for $j=d-q,\ldots,d$. Let $r$ be an integer such that the cardinality of $[r]\cup \{k_{d-q},k_{d-q+1},\cdots,k_d\}$ is $d$. The set $H=[r]\cup \{k_{d-q},k_{d-q+1},\cdots,k_d\}$ precedes $F$ and obviously includes $G$. This means that $G$ is a facet of $\Delta^{(q)}$. On the other hand, if there exists a set $H$ such that $G\subseteq H$ and $H$ precedes $F$, then $G$ precedes $\{i_{d-q},\ldots,i_d\}$. \end{proof}
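Lemma~\ref{skeleton} admits a quick computational check. The instance below (the principal Borel set $B(\{2,3,5\})$ in $[5]$, an illustrative choice) is chosen so that the $1$-skeleton is a proper subset of all pairs:

```python
from itertools import combinations

def precedes(G1, G2):
    a, b = sorted(G1), sorted(G2)
    return len(a) == len(b) and all(x <= y for x, y in zip(a, b))

# Delta has facets B({2,3,5}) in [5]; the lemma predicts that the facets
# of the 1-skeleton form the principal Borel set B({i_2, i_3}) = B({3,5}).
n, F = 5, (2, 3, 5)
facets = [set(G) for G in combinations(range(1, n + 1), 3) if precedes(G, F)]

skeleton = {frozenset(G) for Fa in facets
            for G in combinations(sorted(Fa), 2)}      # 1-skeleton facets
predicted = {frozenset(G) for G in combinations(range(1, n + 1), 2)
             if precedes(G, {3, 5})}
print(skeleton == predicted)          # True
print(frozenset({4, 5}) in skeleton)  # False: {4,5} does not precede {3,5}
```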
By the preceding lemma, we have the following generalization of Theorem \ref{BorelAlexander} (\cite[Theorem 3.18]{FMS}).
\begin{Proposition} \label{B-generators} Let ${\mathcal B}=B( F)$ be a principal Borel set with Borel generator $F=\{i_1<i_2<\cdots < i_d\}$, and let $\Delta$ be the simplicial complex with ${\mathcal F}(\Delta)={\mathcal B}$. Then the $S$-algebra $B(\Delta)$ is generated by the elements $x_Ht^k$, for $k=1,\ldots,d$, where \[
H\in B(\{q,q+1,\ldots, i_{k+q-1}\}\:\; q=1,\ldots,d-k+1). \] \end{Proposition} \begin{proof} First observe that by Lemma \ref{lunch}, $B(\Delta)$ is generated in degree $\leq d$. So it is enough to show that for each $k=1,\ldots,d$, the monomials $x_Ht^k$ with $H\in B(\{q,q+1,\ldots ,i_{k+q-1}\}\:\; q=1,\ldots,d-k+1)$ generate $L_k(\Delta)^{sq}$. Since $\Delta$ is pure, by Theorem \ref{dual}, this is the case if these monomials generate $I(\Delta^{(d-k)})^\vee$. By Lemma~\ref{skeleton}, the ideal $I(\Delta^{(d-k)})$ is a Borel ideal with the Borel generator $x_{i_k}x_{i_{k+1}}\cdots x_{i_d}$. Hence Theorem~\ref{BorelAlexander} implies the result. \end{proof}
\begin{Proposition} \label{borel}
Let ${\mathcal B}=B(F_1,\ldots,F_m)$ be a Borel set such that $|F_i|=|F_j|$ for all $i,j$, and suppose $\Delta $ is a simplicial complex with ${\mathcal F}(\Delta)={\mathcal B}$. Then $L_k(\Delta)^{sq}$ is a squarefree Borel ideal for all $k$. \end{Proposition} \begin{proof}
Since $|F_i|=|F_j|$ for all $i,j$, the simplicial complex $\Delta$ is pure. Considering Theorem~\ref{dual}, this implies that $L_k(\Delta)^{sq}=I(\Delta^{(d-k)})^\vee$ for all $k$. By Lemma~\ref{skeleton}, the ideal $I(\Delta^{(d-k)})$ is a squarefree Borel ideal. As shown in Theorem~\ref{BorelAlexander}, the Alexander dual of a principal squarefree Borel ideal is squarefree Borel. Since $(I+J)^\vee=I^\vee\cap J^\vee$ for all squarefree monomial ideals $I$ and $J$, by the squarefree version of \cite[Proposition 2.16]{FMS} one obtains that $I(\Delta^{(d-k)})^\vee$ is a squarefree Borel ideal. Hence $L_k(\Delta)^{sq}$ is squarefree Borel. \end{proof}
In Proposition \ref{borel}, suppose instead that the cardinalities of the Borel generators of ${\mathcal B}$ are not all the same. Consider the simplicial complex $\Delta$ with ${\mathcal F}(\Delta)={\mathcal B}'$, where ${\mathcal B}'$ is the set of maximal elements of ${\mathcal B}$ with respect to inclusion. Then the ideal $L_k(\Delta)^{sq}$ need not be squarefree Borel. For example, if ${\mathcal B}=B(\{1,4\},\{1,2,3\})$, then ${\mathcal F}(\Delta)=\{\{1,4\},\{1,2,3\}\}$. However, the ideal $L_2(\Delta)^{sq}=(x_1x_2x_4,x_1x_3x_4)$ is not squarefree Borel. Following the proof of Proposition \ref{borel}, one observes that $I(\Delta^{(j)})^\vee$ is nevertheless a squarefree Borel ideal for all $j$, regardless of the cardinalities of the Borel generators.
Let ${\mathcal B}$ be a Borel set not necessarily principal, and let $\Delta$ be the simplicial complex with ${\mathcal F}(\Delta)={\mathcal B}$. In \cite{FMS}, the authors describe 1-covers of $\Delta$ by using Theorem~\ref{BorelAlexander} and the fact that $(I+J)^\vee=I^\vee\cap J^\vee$ for all squarefree ideals $I$ and $J$. With a similar argument, one can use Proposition \ref{B-generators} to obtain the squarefree $k$-covers of $\Delta$ when $\Delta$ is pure. More precisely, let ${\mathcal F}(\Delta)=B( F_1,\ldots,F_m)$ be a Borel set with $|F_i|=|F_j|$ for all $i,j$. By Lemma~\ref{skeleton}, ${\mathcal F}(\Delta^{(d-k)})$ is also a Borel set with the Borel generators described in this lemma. Now using the above-mentioned fact, i.e.\ $(I+J)^\vee=I^\vee\cap J^\vee$ for all squarefree ideals $I$ and $J$, together with the fact that $I(\Delta^{(d-k)})^\vee=L_k(\Delta)^{sq}$, one obtains the result in the more general case.
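The intersection computation carried out in the example that follows can be confirmed by brute force. The sketch below (illustrative only) recomputes the minimal squarefree $2$-covers of the complex with facet set $B(\{1,4,5\},\{2,3,4\})$ directly from the definition of a $k$-cover:

```python
from itertools import combinations

def precedes(G1, G2):
    a, b = sorted(G1), sorted(G2)
    return len(a) == len(b) and all(x <= y for x, y in zip(a, b))

N = 5
FACETS = [set(G) for G in combinations(range(1, N + 1), 3)
          if precedes(G, {1, 4, 5}) or precedes(G, {2, 3, 4})]

def minimal_squarefree_k_covers(k):
    """Supports H with |H \\cap F| >= k for every facet F, minimal under
    inclusion; these index the minimal generators of L_k(Delta)^sq."""
    covs = [set(H) for d in range(1, N + 1)
            for H in combinations(range(1, N + 1), d)
            if all(len(set(H) & F) >= k for F in FACETS)]
    return {frozenset(H) for H in covs if not any(D < H for D in covs)}

print(sorted(map(sorted, minimal_squarefree_k_covers(2))))
# [[1, 2, 3, 4], [1, 2, 3, 5], [1, 2, 4, 5], [1, 3, 4, 5], [2, 3, 4, 5]]
```

These five supports match the five monomial generators of $L_2(\Delta)^{sq}$ obtained by intersecting $L_2(\Delta_1)^{sq}$ and $L_2(\Delta_2)^{sq}$ in the example below.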
For example, let $\Delta$ be the simplicial complex with ${\mathcal F}(\Delta)=B(\{1,4,5\},\{2,3,4\})$. We find the squarefree 2-covers of $\Delta$ as follows: for the simplicial complex $\Delta_1$ with ${\mathcal F}(\Delta_1)=B(\{1,4,5\})$, Proposition~\ref{B-generators} yields \begin{eqnarray*} L_2(\Delta_1)^{sq}&=&(x_H\:\; H\in B(\{1,2,3,4\},\{2,3,4,5\})\;)\\ &=&(x_1x_2x_3x_4,x_1x_2x_3x_5,x_1x_2x_4x_5,x_1x_3x_4x_5,x_2x_3x_4x_5), \end{eqnarray*} and similarly for the simplicial complex $\Delta_2$ with ${\mathcal F}(\Delta_2)=B(\{2,3,4\})$, one obtains \begin{eqnarray*} L_2(\Delta_2)^{sq}&=&(\{x_H\:\; H\in B(\{1,2,3\},\{2,3,4\})\})\\ &=&(x_1x_2x_3,x_1x_2x_4,x_1x_3x_4,x_2x_3x_4).\hspace{3.2cm} \end{eqnarray*} Hence \begin{eqnarray*}
L_2(\Delta)^{sq}&=&L_2(\Delta_1)^{sq}\cap L_2(\Delta_2)^{sq}\\
&=&(x_1x_2x_3x_4,x_1x_2x_3x_5,x_1x_2x_4x_5,x_1x_3x_4x_5,x_2x_3x_4x_5). \end{eqnarray*} Observe that the generators of $B(\Delta)$ as described in Proposition~\ref{B-generators} are not necessarily the minimal ones. \begin{Theorem} \label{borel-generators} Let ${\mathcal B}=B( F)$ be a principal Borel set with Borel generator $F=\{i_1<i_2<\cdots < i_d\}$, and let $\Delta$ be the simplicial complex with ${\mathcal F}(\Delta)={\mathcal B}$. Then $B(\Delta)=A(\Delta)$. \end{Theorem} \begin{proof} We show that for every non-squarefree cover ${\bold c}$ of $\Delta$ of positive order, there exists a decomposition ${\bold c}={\bold a}+{\bold b}$ such that ${\bold a}$ is a squarefree cover of $\Delta$. This will imply that $B(\Delta)=A(\Delta).$ So consider a non-squarefree $k$-cover ${\bold c}$ of $\Delta$ with $k>0$ and let $C=\supp({\bold c})$. We may assume that $k$ is the maximum order of ${\bold c}$, that is, if $\ell> k$, then ${\bold c}$ is not an $\ell$-cover. Indeed, assume that $k$ is the maximum order of ${\bold c}$ and ${\bold c}={\bold a}+{\bold b}$ is a decomposition of ${\bold c}$ such that ${\bold a}$ and ${\bold b}$ are covers of $\Delta$ of orders $i$ and $j$, respectively, with $k=i+j$. Then for every $k'<k$, one can choose $i'\leq i$ and $j'\leq j$ such that $i'+j'=k'$. Hence ${\bold c}={\bold a}+{\bold b}$ can be also considered a decomposition of ${\bold c}$ as a $k'$-cover.
We denote by ${\mathcal A}_{\ell}$ the Borel set $ B(\{q,q+1,\ldots, i_{\ell+q-1}\}\:\; q=1,\ldots,d-\ell+1)$, for $\ell=1,\ldots,d$. Then by Proposition~\ref{B-generators}, for every $H\in {\mathcal A}_{\ell}$, the $(0,1)$-vector $\mathbf{h}$ with $\supp(\mathbf{h})=H$ is a squarefree $\ell$-cover of $\Delta$.
\\ {\em Step 1} (Defining the cover ${\bold a}$)\textbf{.} Let $${\mathcal T}=\{H\subseteq C\:\; H\in {\mathcal A}_{\ell} \text{ for some } \ell \}.$$ Since ${\bold c}$ is a cover of positive order, it is a $1$-cover of $\Delta$ as well. Hence Theorem \ref{BorelAlexander} implies that ${\mathcal T}$ is nonempty. Let $$r=\max\{\ell \:\; \text{there exists } H\in {\mathcal T} \text{ with } H\in {\mathcal A}_{\ell}\}.$$ Observe that $r\leq k$ because $k$ is the maximum order of ${\bold c}$. Let $A$ be an element of ${\mathcal T}$ such that $A\in {\mathcal A}_{r}$. Consider the vector ${\bold a}$ with $\supp({\bold a})=A$. Then the vector ${\bold a}$ is an $r$-cover of $\Delta$. We will show later that ${\bold b}={\bold c}-{\bold a}$ is a $(k-r)$-cover of $\Delta$ to obtain the desired decomposition of ${\bold c}$.
\\ {\em Step 2.} If $r\neq d$, we observe that there exist at least $t$ entries $c_j$ of ${\bold c}$ with $c_j=0$ and $j\leq i_{r+t}$, for all $t=1,\ldots, d-r$. In fact, if at most $t-1$ of the entries $c_j$ with $j\leq i_{r+t}$ are zero, then there exists a subset $A'$ of $C$ which precedes $\{t,t+1,\ldots,i_{r+t}\}$. This means that $A'\in {\mathcal A}_{r+1}$, a contradiction to the choice of $r$.
\\ {\em Step 3.} We show that ${\bold b}={\bold c}-{\bold a}$ is a $(k-r)$-cover of $\Delta$. If $r=d$, then $A=[i_d]$ and ${\bold a}$ is the vector for which all entries are $1$. Thus ${\bold b}$ is a $(k-r)$-cover of $\Delta$ because every facet of $\Delta$ is of cardinality $d$. Therefore, we have the desired decomposition of ${\bold c}$. So we may assume that $r<d$.
We consider a facet $G$ of $\Delta$, and show that $\sum_{i\in G}b_i\geq k-r$. For this purpose, attached to $G$, we inductively define a sequence $G=G_0,G_1,\ldots,G_m$ of facets of $\Delta$, where $m=|G\cap A |-r$, with the following properties: \begin{enumerate}
\item[(i)] $1+|G_i\cap A|=|G_{i-1}\cap A|$ for all $i=1,\ldots,m;$ \item[(ii)] $1+\sum_{j\in G_i}c_j\leq \sum_{j\in G_{i-1}}c_j $ for all $i=1,\ldots,m.$ \end{enumerate}
Assume that the facets $G_0,\ldots,G_{\ell}$ are already defined for some $\ell<m$. Let $G_{\ell}=\{j_1<\cdots<j_d\}$ and $j_s=\max\{j_i\in G_{\ell}\: \; j_i\in A\}$. By property (i) and since $\ell<m=|G\cap A|-r$, one has $|G_{\ell}\cap A|>r$. Hence $s>r$. By Step 2, since ${\bold c}$ has at least $d$ zero entries, the set $\{i\:\; c_i=0 \text{ and } i\not\in G_{\ell}\}$ is nonempty. In fact, otherwise we would have $i\in G_\ell$ whenever $c_i=0$; since $|G_{\ell}|=d$, this would force $G_{\ell}\cap \supp({\bold c}) = \emptyset$, a contradiction. Let $$t=\min\{i\:\; c_i=0 \text{ and } i\not\in G_{\ell}\}.$$
We set $G_{\ell+1}=(G_{\ell}\setminus \{j_s\})\cup \{t\}$. Since $c_t=0$, one has $t\not\in A$, and so (i) holds for $G_{\ell+1}$. But $c_{j_s}\neq 0$ because $j_s\in A$. Thus (ii) also holds for $G_{\ell+1}$. So we only need to show $G_{\ell+1}$ is a facet of $\Delta$. If $t<j_s$, then $G_{\ell+1}$ is a facet of $\Delta$ by the definition of Borel sets. Furthermore, since $t\not\in G_{\ell}$, one has $t\neq j_s$. So assume that $t>j_s$ and $i_{s+q}<t\leq i_{s+q+1}$. Then for each $q'$ where $q'=1,\ldots,q+1$, we have $j_{s+q'}\leq i_{s+q'-1}$. Assume to the contrary that $j_{s+q'}> i_{s+q'-1}$ for some $q'=1,\ldots,q$. Then exactly $s+q'-1$ elements of $G_{\ell}$ belong to the set $[i_{s+q'-1}]$. On the other hand, by Step 2 there exist $s+q'-1$ entries $c_i=0$ with $i\in [i_{s+q'-1}]$. So by the choice of $t$, we would have $c_i=0$ for $i=j_1,\ldots,j_{s+q'-1}$, which implies $\{j_1,\ldots,j_{s+q'-1}\}\cap A=\emptyset$, a contradiction. Thus $j_{s+q'}\leq i_{s+q'-1}$ for all $q'=1,\ldots,q+1$, and hence the set $\{j_1<\cdots<\hat{j}_s<\cdots < j_{s+q+1}\}$ precedes $\{i_1<\cdots< i_{s+q}\}.$ Let $E_1=\{j'_{s+q+1}<\cdots<j'_d\}$ be the set $\{t,j_{s+q+2},\ldots,j_d\}$. Considering the fact that $t\leq i_{s+q+1}$, it follows that $E_1$ precedes $\{i_{s+q+1},\ldots , i_d\}$ because if $j_{q'}<t$, then $j_{q'}\leq i_{q'-1}$ for $q'=s+q+2,\ldots,d$. Therefore, $G_{\ell+1}=\{j_1<\cdots<\hat{j_s}<\cdots < j_{s+q+1}<j'_{s+q+1}<\cdots<j'_d\}$ precedes $F$, which means $G_{\ell+1}$ is a facet of $\Delta$, as desired.
Now, using property (ii) of the sequence $G_0,\ldots,G_m$, we obtain
$$\sum_{i\in G}c_i\geq \sum_{i\in G_m}c_i+m \geq k+m=k+|G\cap A |-r.$$ The second inequality above holds because ${\bold c}$ is a $k$-cover of $\Delta$. So
$$\sum_{i\in G}b_i=\sum_{i\in G}c_i-\sum_{i\in G}a_i\geq (k+|G\cap A |-r)-|G\cap A |=k-r.$$ Thus ${\bold b}$ is a $(k-r)$-cover of $\Delta$, and this means that ${\bold c}={\bold a}+{\bold b}$ is the desired decomposition of ${\bold c}$. \end{proof}
The statement of Theorem \ref{borel-generators} does not hold for Borel sets in general. Once more consider the example after Proposition \ref{borel}, where $\Delta$ is the simplicial complex with ${\mathcal F}(\Delta)=B(\{1,4,5\},\{2,3,4\})$. Then ${\bold c}=(2,1,1,1,0)$ is a 3-cover of $\Delta$. We have already determined the squarefree 2-covers of $\Delta$, and one can also find the squarefree 1-covers and 3-covers of $\Delta$ by Proposition~\ref{B-generators} in order to see that ${\bold c}$ cannot be decomposed into squarefree covers of $\Delta$. Hence $B(\Delta)\neq A(\Delta).$
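That ${\bold c}=(2,1,1,1,0)$ admits no decomposition into squarefree covers can also be confirmed by an exhaustive search; the sketch below is an illustrative check of this one counterexample:

```python
from functools import lru_cache
from itertools import combinations, product

def precedes(G1, G2):
    a, b = sorted(G1), sorted(G2)
    return len(a) == len(b) and all(x <= y for x, y in zip(a, b))

N = 5
FACETS = [tuple(G) for G in combinations(range(1, N + 1), 3)
          if precedes(G, {1, 4, 5}) or precedes(G, {2, 3, 4})]

def order(c):
    """The largest k for which c is a k-cover: the smallest facet sum."""
    return min(sum(c[i - 1] for i in F) for F in FACETS)

SQ = [a for a in product((0, 1), repeat=N) if order(a) > 0]

@lru_cache(maxsize=None)
def decomposes(c, k):
    """Is x^c t^k a product of squarefree-cover generators x^a t^i
    (a a squarefree i-cover, i > 0) and a t-degree-0 monomial?"""
    if k == 0:
        return True
    for a in SQ:
        if all(x <= y for x, y in zip(a, c)):
            i = min(order(a), k)
            if decomposes(tuple(y - x for x, y in zip(a, c)), k - i):
                return True
    return False

c = (2, 1, 1, 1, 0)
print(order(c))          # 3
print(decomposes(c, 3))  # False
```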
\begin{Corollary} Let ${\mathcal B}=B( F)$ be a principal Borel set with Borel generator $F=\{i_1<i_2<\cdots < i_d\}$, and let $\Delta$ be the simplicial complex with ${\mathcal F}(\Delta)={\mathcal B}$. Then $B(\Delta^{(j)})=A(\Delta^{(j)})$ for every $j=0,\ldots,d-1$. \end{Corollary} \begin{proof} It is enough to notice that by Lemma~\ref{skeleton}, the set ${\mathcal F}(\Delta^{(j)})$ is a principal Borel set. Hence Theorem~\ref{borel-generators} implies the result. \end{proof}
An immediate consequence of Theorem \ref{borel-generators} is the following result from \cite[Proposition 4.6]{HHT}. \begin{Corollary} Let $\Sigma_n$ denote the simplex of all subsets of $[n]$. Then the $S$-algebra $A(\Sigma_n^{(d-1)})$ is minimally generated by the monomials $x_{j_1}x_{j_2}\cdots x_{j_{n-d+k}}t^k $, where $k=1,\ldots,d$ and $1\leq j_1<j_2<\cdots <j_{n-d+k}\leq n$. \end{Corollary} \begin{proof} Consider the Borel set ${\mathcal B}=B(\{n-d+1,n-d+2,\ldots,n\})$. Then ${\mathcal F}(\Sigma_n^{(d-1)})={\mathcal B}$. Thus by Proposition~\ref{B-generators} and Theorem~\ref{borel-generators}, the $S$-algebra $A(\Sigma_n^{(d-1)})$ is generated by the monomials $x_Ht^k$ where \[ H\in B(\{1,2,\ldots,n-d+k\},\{2,3,\ldots,n-d+k+1\},\ldots,\{d-k+1,d-k+2,\ldots,n\}), \]
for $k=1,\ldots,d$. The above-mentioned set is exactly the set of all subsets of $[n]$ of cardinality $n-d+k$. On the other hand, these monomials form a minimal system of generators. In fact, for the case that $n=d$ there is nothing to prove, and for the case that $n>d$, assume to the contrary that for a generator $x_Ht^k$ there exist monomials $x_{H_1}t^{k_1}$ and $x_{H_2}t^{k_2}$ in this set such that $x_{H_1}x_{H_2}|x_H$ and $k_1+k_2=k$. Hence we have \[
(n-d+k_1)+(n-d+k_2)=|H_1|+|H_2|\leq |H|=(n-d+k). \] Thus $n\leq d$, a contradiction. \end{proof}
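The description of the generating set in the proof above is easy to confirm by exhaustive search for small parameters. The Python snippet below (illustrative only; the helper names are ad hoc) verifies, for several values of $n$, $d$ and $k$, that the Borel set $B(\{1,\ldots,n-d+k\},\{2,\ldots,n-d+k+1\},\ldots,\{d-k+1,\ldots,n\})$ consists precisely of all subsets of $[n]$ of cardinality $n-d+k$.

```python
from itertools import combinations

def borel_members(gens, n):
    """Sets G = {g1 < ... < gm} with gi <= fi (sorted entries)
    for some generator f in gens."""
    out = set()
    for f in gens:
        f = sorted(f)
        m = len(f)
        for G in combinations(range(1, n + 1), m):
            if all(G[i] <= f[i] for i in range(m)):
                out.add(G)
    return out

def check(n, d, k):
    # Generators {1,...,n-d+k}, {2,...,n-d+k+1}, ..., {d-k+1,...,n}.
    gens = [list(range(j + 1, j + 1 + (n - d + k))) for j in range(d - k + 1)]
    expected = set(combinations(range(1, n + 1), n - d + k))
    return borel_members(gens, n) == expected

# Exhaustive check over a range of small parameters.
assert all(check(n, d, k)
           for n in range(2, 7)
           for d in range(1, n + 1)
           for k in range(1, d + 1))
print("all checks passed")
```

In fact the last generator $\{d-k+1,\ldots,n\}$ alone already dominates every $(n-d+k)$-subset of $[n]$, since the $i$-th smallest element of such a subset is at most $(d-k)+i$; the brute force merely confirms this.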
The following proposition exhibits a condition which guarantees the existence of a generator of degree $\dim(\Delta)+1$ in the minimal set of monomial generators of $A(\Delta)$. \begin{Proposition} \label{higher-generator} Let ${\mathcal B}=B( F)$ be a principal Borel set with Borel generator $F=\{i_{1}<i_{2}<\cdots < i_{d}\}$, and let $\Delta$ be the simplicial complex with ${\mathcal F}(\Delta)={\mathcal B}$. Then $x_1x_2\cdots x_{i_d}t^d$ belongs to the minimal set of monomial generators of $A(\Delta)$ if and only if $i_1\neq 1$. \end{Proposition} \begin{proof}
First observe that $i_1\neq 1$ if and only if $i_j\neq j$ for all $j$. So we show that the $d$-cover ${\bold c}$ of $\Delta$ with $c_i=1$ for all $i$ is decomposable if and only if $i_j=j$ for some $j$. Suppose first that ${\bold c}$ is decomposable, and let ${\bold c}={\bold a}+{\bold b}$ be a decomposition of ${\bold c}$. Since $\Delta$ is pure of dimension $d-1$, the vector ${\bold c}$ is the only squarefree $d$-cover of $\Delta$. Therefore the orders of ${\bold a}$ and ${\bold b}$ are both nonzero; let the order of ${\bold a}$ be $r$ and the order of ${\bold b}$ be $d-r$. Since ${\bold c}$ admits no decomposition into a $0$-cover and a $d$-cover, the same is true for the covers ${\bold a}$ and ${\bold b}$. Hence, setting $A=\supp({\bold a})$ and $B=\supp({\bold b})$, these sets are of the form described in Proposition~\ref{B-generators}: there exist numbers $j$ and $l$ such that $A$ precedes $\{l-r+1,l-r+2,\ldots,i_l\}$ and $B$ precedes $\{j-d+r+1, j-d+r+2, \ldots, i_j\}$. Observe that $[i_d]$ is the disjoint union of $A$ and $B$, because ${\bold c}={\bold a}+{\bold b}$. Thus we may assume that $i_d\in A$, which implies $l=d$. Moreover, $|A|+|B|= i_d$. Since $|A|=i_d-(d-r+1)+1$ and $|B|=i_j-(j-d+r+1)+1$, we obtain $i_j=j$.
Conversely, let $i_j=j$ for some $j$. By Proposition~\ref{B-generators}, the monomial $x_At^j\in L_j(\Delta)$ where $A=\{1,2,\ldots,i_j=j\}$, and the monomial $x_Bt^{d-j}\in L_{d-j}(\Delta)$ where $B=\{j+1,j+2,\ldots,i_d\}$. Let ${\bold a}$ and ${\bold b}$ be the squarefree vectors with $A=\supp({\bold a})$ and $B=\supp({\bold b})$. Then ${\bold c}={\bold a}+{\bold b}$ is a decomposition of the cover ${\bold c}$. \end{proof}
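The criterion of Proposition \ref{higher-generator} can likewise be tested computationally for small Borel generators with $i_d>d$. The sketch below (hypothetical helper names, not part of the formal development) brute-forces, for a principal Borel set $B(F)$, whether the all-ones $d$-cover on $[i_d]$ splits into two nonzero covers whose orders sum to $d$, and compares the outcome with the condition $i_1=1$.

```python
from itertools import combinations, product

def principal_borel(F):
    """Facets of the complex with F(Delta) = B(F), F a single generator."""
    F = sorted(F)
    d, n = len(F), F[-1]
    return [G for G in combinations(range(1, n + 1), d)
            if all(G[i] <= F[i] for i in range(d))]

def decomposable_all_ones(F):
    """Does c = (1,...,1) on [i_d] decompose as a sum of two nonzero
    covers with orders adding up to d?"""
    F = sorted(F)
    d, n = len(F), F[-1]
    facets = principal_borel(F)

    def order(v):
        return min(sum(v[i - 1] for i in f) for f in facets)

    c = (1,) * n
    assert order(c) == d  # the all-ones vector is a d-cover
    for a in product(range(2), repeat=n):
        b = tuple(1 - x for x in a)
        if any(a) and any(b) and order(a) + order(b) >= d:
            return True
    return False

# Proposition higher-generator predicts: decomposable iff i_1 = 1
# (with i_d > d, so that the splitting is into two nonzero parts).
for F in [(1, 4, 5), (2, 3, 4), (2, 4), (1, 3), (3,), (1, 2), (2, 3, 5)]:
    assert decomposable_all_ones(F) == (min(F) == 1)
print("all cases agree with the proposition")
```

For example, for $F=\{1,4,5\}$ the split $A=\{1\}$, $B=\{2,3,4,5\}$ found by the search is exactly the decomposition constructed in the proof, while for $F=\{2,3,4\}$ no split succeeds.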
\end{document}